This process is inevitable.

This process is inevitable. The question of what happens next is therefore not a technical one but a political and economic one. Who benefits from this advancement depends directly on who owns the means of production. Right now the answer looks like a handful of individuals, while the other seven billion shuffle around looking for gigs.

Originally shared by Wayne Radinsky

As AI replaces traditional jobs, it will create new jobs in the form of AI trainers, posits Alvis Brigis. “As the companies now trailblazing AI (Google, Amazon, Apple, Microsoft, Facebook, Tesla, Uber, etc) have generated more value through machine learning, they’ve realized that 1) machine learning can be applied to infinitely more domains/problems, 2) that more complex, creative problems require more human-in-the-loop intervention, and 3) that more value can be created by integrating the machine learning they’ve already done — a cumulative effect, eg Google’s recent breakthrough in translation, which ultimately required billions or trillions of human-in-the-loop (including you, if you ever used Google Translate) machine learning cycles to finally break through to another level of automatic functionality.”

“As the Great AI Race heats up and more companies, countries and other actors come to realize the narrow and broader potential of human-in-the-loop machine learning, the demand for machine learning pros, machine learning guides and content workers will grow proportionately, driving up their share of the pie as they help to build more intelligent superstructures brick by brick.”

“The amount of value shared with users will depend on the size of the pie. With Kurzweil’s Law of Accelerating Returns in full effect, that pie is likely to grow MASSIVELY.”
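The human-in-the-loop cycle Brigis describes can be sketched as a simple routing loop: the model labels what it is confident about, and low-confidence cases escalate to a human whose answer would feed back into training. The sketch below is purely illustrative; the model, the confidence threshold, and the "human" oracle are all stand-ins, not any real system's API.

```python
# Minimal sketch of a human-in-the-loop labeling cycle (illustrative only;
# the model, threshold, and "human" oracle are all stand-ins).

def model_predict(text):
    """Toy 'model': guesses a label with a fake confidence score."""
    guess = "positive" if "good" in text else "negative"
    confidence = 0.9 if ("good" in text or "bad" in text) else 0.4
    return guess, confidence

def human_label(text):
    """Stand-in for a human worker resolving a hard case."""
    return "positive" if "great" in text else "negative"

def label_batch(texts, threshold=0.5):
    labeled, escalated = [], 0
    for text in texts:
        guess, conf = model_predict(text)
        if conf < threshold:      # low confidence: route to a human
            guess = human_label(text)
            escalated += 1        # this correction would feed retraining
        labeled.append((text, guess))
    return labeled, escalated

batch = ["good service", "bad food", "a great surprise"]
results, n_human = label_batch(batch)
```

The point of the loop is the `escalated` counter: every escalation is a unit of paid human work, and the whole argument below is about how many such units the economy will actually need.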

OK, now that I have summarized the argument (hopefully fairly, but you can go read the whole post and judge for yourself), I'd like to tack on my own commentary. As a counterargument, I would posit the following:

1) People paid to train AIs already exist: they are the workers who label training data on Amazon Mechanical Turk. Rather than repeat that post, I'll just link to it:

https://plus.google.com/+WayneRadinsky/posts/U6vktFsPYFC

But I will summarize the key point, which is that AI training jobs are crappy jobs. The pay is low, the work is dull, and if you want to make enough money to actually live on, you have to sacrifice a sane sleeping schedule, jumping on new tasks the moment they appear before other workers snap them up.

2) It seems unlikely that the number of these jobs will equal the number of jobs displaced. I realize in saying this that AI automates tasks, which are slices of “jobs,” not whole jobs, so the correspondence is not one-to-one. But even if the numbers did match, the situation would be temporary, because

3) The endgame is for AI to be able to do everything the human brain can do, and if that happens, AI will be able to do all the crappy training jobs as well. (More precisely, the need for such jobs must eventually cease to exist.) I realize this is not imminent and probably won’t happen in any of our lifetimes, so during our lifetimes we will experience a “transition period”: the number of AI training jobs will grow until it reaches some maximum, and then decline. The question is whether that maximum is large enough to provide paid work for billions of people.

4) To me, this argument seems to stem from the belief that people who say technology destroys jobs are “Luddites” falling for the “Luddite fallacy”: the idea that while some jobs are destroyed, others are always created elsewhere in the economy. (See also: the lump of labor fallacy.) But there is evidence that this time is different. For as long as the data has been tracked, the shares of GDP going to capital and labor stayed within a narrow band; starting around 2005, the labor share moved out of that band. This graph shows the labor share leaving its historical band around 2005:

https://fred.stlouisfed.org/series/PRS85006173

Returns to capital is the mirror image of this graph: flip it upside down and you have capital’s share. Here’s a related graph of corporate profits, showing that profits are higher than at any time since World War II, with some recent years even exceeding the wartime peak:

https://fred.stlouisfed.org/graph/?g=cSh
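The “flip it upside down” relationship is just arithmetic: if labor’s share of output is s, capital’s share is 1 − s, so when one leaves its band the other must too. A tiny illustrative calculation (the dollar figures are invented for the example, not actual FRED data):

```python
# Labor share and its complement, capital share. Numbers are made up
# for illustration, not real national-accounts data.

def labor_share(compensation, gdp):
    """Fraction of output paid out as labor compensation."""
    return compensation / gdp

def capital_share(compensation, gdp):
    """Whatever isn't paid to labor accrues to capital."""
    return 1.0 - labor_share(compensation, gdp)

ls = labor_share(compensation=6.4, gdp=10.0)   # hypothetical: $6.4T of a $10T economy
cs = capital_share(compensation=6.4, gdp=10.0)
```

With these invented numbers, labor gets 64% and capital 36%; the two always sum to one, which is why a falling labor share is the same fact as rising returns to capital.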

(As an aside, anyone who thinks that cutting taxes on corporations will generate jobs is wrong — corporations already have extremely high profits, and making them higher won’t result in more hiring. Apple, to cite one example, is sitting on $237.6 billion in cash. Increasing that to $250 billion or $300 billion won’t result in hiring — if Apple wanted to hire people, they could hire thousands of people with the cash they have right now. But they aren’t, and they won’t.)

Finally, there’s this famous graph showing the divergence of the productivity of the economy vs labor income.

https://thecurrentmoment.files.wordpress.com/2011/08/productivity-and-real-wages.jpg

As you can see, starting in the 1980s — actually the first hint appears in the late 70s (!) — productivity and income begin to diverge, and labor captures less and less of the fruits of the economy’s productivity.

Applying this to our Mechanical Turk scenario, this suggests that the economic value created by Mechanical Turk workers will go to Google and Facebook shareholders, etc., and not to the Mechanical Turk workers themselves.

3 thoughts on “This process is inevitable.”

  1. “AI training jobs are crappy jobs.” I have to vigorously disagree. This applies to the most basic classification training, which happens to be very suitable for cost-effective data entry via the Mechanical Turk format. More generally, training AIs is no different from training people: you need teachers with the specific expertise to train them. If those teachers can use lower-skilled assistants, that helps provide more jobs; after all, there are many more K-12 teachers than top professors. If all of society’s needs are provided by robots, it is conceivable that everyone could become a teacher or expert in some domain, even if only in their own body’s likes and dislikes.


  2. Totally agree.

    Unlike humans, machines for the most part won’t require individual caretakers to ‘raise’ them on a one-to-one or small group basis. Rather, prototypes will be optimally trained by very highly-skilled human and machine instructors and/or crowdsourced and/or simulated data, cognitive skills will be packaged and assembled, and new minds cloned and downloaded to hardware as needed. There is no future mass employment opportunity in AI training.

    Moreover, right now, today, for the most part, teachers and nurturers of valuable human beings have among the crappiest jobs going. Public school teaching jobs are remarkably difficult to qualify for, badly paid and resourced, filled with stress and cognitive dissonance, and have very short burnout cycles. Why would we expect that ‘training AIs’ would be structured differently?

    Finally: one of the first categories of jobs to be automated away by human-scale AI is going to be any and all forms of nurture-care and teaching (of humans). Ironic, eh? But it’s like gravity. Factory bots, sex-bots, nurse-bots, amanuensis-bots, professor-bots — like gravity.

