Compensating User-Contributed Data in Machine Learning
We’ve learned that complex machine learning requires lots of processing power (the jet engine) and lots of data (the jet fuel). So it’s no surprise that the big winners are companies like Facebook, Google, and Uber that sit atop massive systems for gathering user feedback.
In this piece, Alvis Brigis asks whether there is an economic model for compensating end users who contribute to that learning. A couple of years back, I spent some time trying to model what that might look like using data from Tsu (remember them??). Color me a bit skeptical. What I learned was that without some mechanism for concentrating that income (which is what Tsu did through its affiliate system), it’s really hard to generate meaningful income for an individual user.
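To make that skepticism concrete, here is a minimal back-of-the-envelope sketch of an even per-user revenue split at platform scale. All figures are hypothetical illustrations, not actual numbers from Tsu or anyone else:

```python
# Back-of-the-envelope model: split a platform's revenue evenly among users.
# Every number below is hypothetical, chosen only to show the order of magnitude.

def per_user_income(annual_revenue, revenue_share, active_users):
    """Annual payout per user if a platform shares a fixed fraction
    of its revenue evenly across all active users."""
    return annual_revenue * revenue_share / active_users

# A platform with $10B/year in revenue, sharing half of it across 1B users:
payout = per_user_income(10e9, 0.5, 1e9)
print(f"${payout:.2f} per user per year")  # $5.00 per user per year
```

Even with a generous 50% share, an even split across a billion users yields pocket change, which is why some concentrating mechanism (like Tsu’s affiliate system) matters so much.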
All that said, perhaps if enough income comes in from all the different companies benefiting from our work to train these systems, it could serve as at least a meaningful component of the new non-wage income solutions (including Basic Income).
HT Wayne Radinsky.
Originally shared by Wayne Radinsky
As AI replaces traditional jobs, it will create new jobs in the form of AI trainers, posits Alvis Brigis. “As the companies now trailblazing AI (Google, Amazon, Apple, Microsoft, Facebook, Tesla, Uber, etc) have generated more value through machine learning, they’ve realized that 1) machine learning can be applied to infinitely more domains/problems, 2) that more complex, creative problems require more human-in-the-loop intervention, and 3) that more value can be created by integrating the machine learning they’ve already done — a cumulative effect, eg Google’s recent breakthrough in translation, which ultimately required billions or trillions of human-in-the-loop (including you, if you ever used Google Translate) machine learning cycles to finally break through to another level of automatic functionality.”
“As the Great AI Race heats up and more companies, countries and other actors come to realize the narrow and broader potential of human-in-the-loop machine learning, the demand for machine learning pros, machine learning guides and content workers will grow proportionately, driving up their share of the pie as they help to build more intelligent superstructures brick by brick.”
“The amount of value shared with users will depend on the size of the pie. With Kurzweil’s Law of Accelerating Returns in full effect, that pie is likely to grow MASSIVELY.”
Ok, now that I have summarized the argument (hopefully fairly, but you can go read the whole post and judge for yourself), I’d like to tack on my own commentary. As a counterargument to this, I would posit that:
1) People paid to train AIs already exist; they are the people who work labeling training data on Amazon Mechanical Turk. Rather than repeat that post, I’ll just link to it:
But I will summarize the key point, which is that AI training jobs are crappy jobs. The pay is low, the work is dull, and, if you want to make enough money to actually live on, you have to sacrifice a sane sleep schedule, because you have to grab jobs the moment they appear or other workers will snap them all up before you get a chance at them.
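To illustrate how low the pay runs, here is a small sketch with entirely hypothetical numbers (the task rates and times are illustrative, not measured Mechanical Turk data):

```python
# Effective hourly wage for microtask labeling work.
# Rates and times below are hypothetical, for illustration only.

def effective_hourly_wage(pay_per_task, seconds_per_task, unpaid_search_seconds):
    """Hourly earnings once the unpaid time spent hunting for the next
    task is counted alongside the paid labeling time."""
    total_seconds = seconds_per_task + unpaid_search_seconds
    return pay_per_task * 3600 / total_seconds

# $0.05 per task, 30 seconds to complete, 15 seconds hunting for the next one:
print(f"${effective_hourly_wage(0.05, 30, 15):.2f}/hour")  # $4.00/hour
```

Note how the unpaid time between tasks drags the effective wage down, which is exactly why workers race each other to grab jobs the moment they appear.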
2) It seems unlikely that the number of these jobs will equal the number of jobs displaced. I realize in saying this that AI automates tasks, which are slices of “jobs”, not whole jobs, so this is not a one-to-one correspondence. Even if the numbers do balance out, that situation is temporary, because
3) The endgame is for AI to be able to do everything the human brain can do, and if that’s the case, then AI will be able to do all the crappy training jobs as well. (More precisely, the need for such jobs will, and must, cease to exist at some point.) I realize this is not imminent and probably won’t happen in any of our lifetimes, so during our lifetimes we will experience a “transition period,” during which the number of AI training jobs will grow until it reaches some maximum, at which point it will decline. So the question is whether that maximum is sufficient to generate enough paid jobs for billions of people.
4) To me, this argument seems to stem from the view that people who believe technology destroys jobs are “Luddites” falling for the “Luddite fallacy”: the claim that while some jobs are destroyed, other jobs are always created elsewhere in the economy. (See also: the lump of labor fallacy.) But there is evidence that this time is different. First, for as long as the data has been tracked, the shares of GDP going to capital and labor stayed within a narrow band, but starting around 2005 the labor share fell out of that band. This graph shows the labor share leaving its historical band around 2005:
The capital share is the mirror image of this graph: just flip it upside down. Here’s a related graph showing corporate profits at their highest levels since the World War II period, with some recent years even exceeding the wartime peak:
(As an aside, anyone who thinks that cutting taxes on corporations will generate jobs is wrong: corporations already have extremely high profits, and making them higher won’t result in more hiring. Apple, to cite one example, is sitting on $237.6 billion in cash. Increasing that to $250 billion or $300 billion won’t result in hiring; if Apple wanted to hire people, it could hire thousands with the cash it has right now. But it isn’t, and it won’t.)
Finally, there’s the famous graph showing the divergence between the productivity of the economy and labor income.
As you can see, starting in the 80s — actually the first hint was in the late 70s (!) — productivity and income start to diverge. Labor gets less and less of the fruits of the productivity of the economy.
Applying this to our Mechanical Turk scenario, this suggests that the economic value created by Mechanical Turk workers will go to Google and Facebook shareholders, etc, and not to Mechanical Turk workers.