The Economics of Artificial General Intelligence Takeoff
There is a corner of the web where super smart people debate the future of artificial intelligence. In that corner, there is an ongoing argument about whether we will experience what is known as a fast takeoff scenario for Artificial General Intelligence (AGI), or what these folks call “foom” (as in the sound effect for something sudden).
Artificial General Intelligence, for those who aren’t familiar with the term, is the kind of AI we tend to see in movies: very human-like. It’s not the kind of narrow AI that just beat Lee Sedol at the game of Go, or the kind that will drive a car or sort your pictures in Google Photos. Artificial General Intelligence is something quite remarkable. It doesn’t exist (yet), but if and when it does, it will be a total game-changer.
The question I tend to think a lot about is what economic structures would lead to something like this. How might an AGI actually get built? Would it be one company? A guy or gal in a basement? Or would it have to be a much larger collaborative effort? There are many reasons why this matters, but the one I’m most interested in centers on what I call “the code behind the code.” All designed systems have underlying assumptions, biases, and intentions baked into them, and whatever form of collaboration builds an AGI will bake its own code behind the code.
So, it’s interesting to me to see a debate going on right now about how localized the development process might be in the lead-up to an AGI:
It seems to me that the key claim made by Eliezer Yudkowsky, and others who predict a local foom scenario, is that our experience with ordinary products in general, and software in particular, is misleading regarding the type of software that will eventually contribute most to the first human-level AGI. In products and software, we have observed a certain joint distribution over innovation scope, cost, value, team size, and team sharing. And if that were also the distribution behind the first human-level AGI software, then we should predict that it will be made by a great many people in a great many teams, probably across a great many firms, with lots of sharing across this wide scope. No one team or firm would be very far in advance of the others.
However, the key local foom claim is that there is some way for small teams that share little to produce innovations with far more generality and lumpiness than these previous distributions suggest, perhaps because they are based more on math and basic theory. This would increase the chances that a small team could create a program that grabs a big fraction of world income, and keeps that advantage for an important length of time.