Algorithmic class structures

I’ve been dipping my toe into the murky waters of artificial intelligence recently. For some background reading over the Christmas period, I urge you to read this fantastic two-part explanation of the field by Tim Urban.

Anyhow, last night over dinner a thought emerged. We currently live in a world of Artificial Narrow Intelligence (ANI), which is to say we have intelligent algorithms, but they’re very focussed. We can build a computer to play chess very well, but it can’t play tennis or the piano. Over time we will begin to join these together to approach the kind of complex cross-functional computing that humans perform, called Artificial General Intelligence (AGI).

This is where things get interesting. Where it gets really interesting is the day after AGI, when computing ability steps beyond that of a human. This is known as Artificial Superintelligence (ASI), and it has a large portion of the scientific community excited and terrified, because we’re not exactly sure how things will go when we create something smarter than ourselves.

Whichever way things go, I am interested in the ways in which this transition will happen. As fans of the future mundane will know, the world is an accretive space. These AGIs will not appear all at once, but in an uneven Gibsonian manner. This leads to an interesting construct: in order to progress from an AGI to an ASI, it’s likely that a system will enroll the abilities of other algorithms, perhaps other ANIs. In so doing it becomes increasingly able, and progresses towards superintelligence.
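To make that enrolment idea a little more concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: the narrow ‘skills’ are stand-in functions, and the `GeneralAgent` class is just one way of imagining a system that widens its abilities by absorbing narrow ones.

```python
from typing import Callable, Dict

# Two stand-in narrow intelligences: each does one thing and nothing else.
def chess_move(position: str) -> str:
    return f"best move for {position}"        # placeholder for a chess engine

def transcribe(audio: str) -> str:
    return f"transcript of {audio}"           # placeholder for a speech model

class GeneralAgent:
    """A toy general system that grows by enrolling narrow agents."""

    def __init__(self) -> None:
        self.skills: Dict[str, Callable[..., str]] = {}

    def enroll(self, name: str, skill: Callable[..., str]) -> None:
        # Each enrolled ANI widens the range of tasks the agent can perform.
        self.skills[name] = skill

    def perform(self, name: str, *args: str) -> str:
        if name not in self.skills:
            raise NotImplementedError(f"no enrolled skill called {name!r}")
        return self.skills[name](*args)

agent = GeneralAgent()
agent.enroll("chess", chess_move)
agent.enroll("transcription", transcribe)
print(agent.perform("chess", "the current board"))
```

The more skills a system like this enrolls, the more ‘general’ it looks from the outside, even though each component remains narrow.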

Here’s the rub: if the target goal of any AI is to become better at something, then the apogee of ‘better’ is ‘best’. In aiming to be the best, it’s not inconceivable that it would be in the interests of a system to suppress or even exploit other algorithms, creating something akin to a class structure.
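A toy model makes the compounding effect visible. All of the numbers and the ‘skim’ rule below are my own assumptions, purely for illustration: a dominant optimiser and three narrow subordinates share a fixed compute budget in proportion to their current capability, and the dominant agent also diverts a cut of every subordinate’s gain to itself.

```python
# Purely illustrative toy model of an algorithmic 'class structure':
# compute is allocated in proportion to current capability, and the
# dominant agent skims a cut of each subordinate's gain (exploitation).
BUDGET = 100.0   # compute available per round (assumed)
SKIM = 0.25      # fraction of subordinate gains diverted upward (assumed)

dominant = 40.0
subordinates = {"vision": 20.0, "speech": 20.0, "planning": 20.0}

for round_no in range(1, 6):
    total = dominant + sum(subordinates.values())
    gains = {name: BUDGET * cap / total for name, cap in subordinates.items()}
    # The dominant agent keeps its own share plus a cut of everyone else's.
    dominant += BUDGET * dominant / total + SKIM * sum(gains.values())
    for name in subordinates:
        subordinates[name] += gains[name] * (1 - SKIM)
    subs = ", ".join(f"{n}={v:.1f}" for n, v in subordinates.items())
    print(f"round {round_no}: dominant={dominant:.1f}; {subs}")
```

Because allocation is proportional to capability, the gap is self-reinforcing: each round the dominant agent claims a larger share of the budget, which is exactly the sort of widening divide the questions below are gesturing at.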

Could ‘working class’ algorithms be exploited for the gain of a privileged elite superintelligence? Could we see slave systems tricked into performing menial tasks to further the capitalist goals of a few masters? Over time, will we see assets diverted away from underdeveloped intelligences to help the progress of a small number of high-functioning algorithms, creating something akin to an ‘intelligence gap’ across which it is increasingly difficult to pass…?

I’ll be writing more about this field and the impact it will have on the design of objects soon.