MIT economist and Kenan Institute Distinguished Fellow David Autor gave the first keynote of the 2024 Frontiers of Business Conference. His talk, “Expertise, Artificial Intelligence, and the Work of the Future,” explained the value of human expertise, described how machines have historically augmented human capabilities and boosted productivity, and discussed artificial intelligence technologies’ implications for the future of work.
Consider this: An air traffic controller could work as a crossing guard, but the reverse does not hold true.
What is the difference? It’s not their inherent social value – after all, we highly value the education and safety of schoolchildren. What separates the two roles is expertise.
In industrialized economies, expertise is what makes human labor valuable – and yet not all expertise qualifies. For expertise to have economic relevance, it must enable a goal that has market value. It must also be scarce, because “if everyone is an expert, no one is an expert,” as Autor explains.
There is a strong positive association between expertise and wages: Jobs requiring greater expertise tend to pay better than nonexpert work. This holds across jobs within the same firm or profession – the more expertise your individual role requires, the better you will be compensated. The relationship between expertise and economic value maps onto macroeconomies as well: Human labor accounts for the largest share of income in the most advanced economies. In North America, more than 60% of total income goes to labor, the highest portion of any of the world’s regions. Meanwhile, in developing countries where living standards are lower and labor is cheaper, tasks are less likely to be automated. So rather than fear automation, Autor argues, we should strive to live in a world where automating tasks is a worthwhile investment, because automation indicates that human labor is highly valued.
The world today is abuzz with excitement and fear about artificial intelligence technologies and their implications for work. For millennia, machines have revolutionized work again and again, and humans have, on net, benefited tremendously from these transformations. Yet AI is different from other machines, or at least that is the common sentiment.
What makes AI different? For one, AI involves machine learning, which means that AI technologies can learn from tacit information, as humans do. We cannot explain exactly how we do many common human activities – things almost all of us can do – because they are based on tacit knowledge. When a parent teaches her child how to ride a bike, she places the kid on the bike and pushes, and the child’s innate senses take over. We can program a computer to calculate pi to an extraordinary number of decimal places but not to make a convincing argument or to do tasks that require judgment calls, including tasks as seemingly straightforward as cleaning a hotel room where a guest has left belongings strewn about. In this way, even jobs we consider low expertise require more expertise than a machine can supply. Yet AI technologies are beginning to crack the code of tacit knowledge and exercise humanlike judgment.
“AI is a tool,” Autor emphasizes, and tools generally augment the value of human expertise – think computers and stethoscopes. “Tools shorten the distance,” Autor continues, “from knowledge and intent to result.” In this optimistic view, AI could complement workers, helping them perform a greater range of tasks requiring more expertise. The peril comes when AI technologies commodify or “strand” human expertise. In these instances, the new technology devalues human expertise because, as Autor reiterates, if everyone is an expert, no one is.
The more immediate challenge in AI implementation, Autor finds, is designing the new technologies and human-machine interfaces to interact in complementary ways. There is, for example, a promising AI innovation that reads patients’ chest X-rays and provides diagnoses, and the technology is as accurate as about 65% of radiologists. When doctors and the technology team up, however, the doctors’ diagnoses are less accurate than when they work on their own. The machine, which is quite good on its own, lowers human expert competency. This outcome reflects “correlated uncertainty”: When both the doctor and the AI are uncertain of a diagnosis, the doctor has trouble judging the machine’s output and defers too much to the technology’s appearance of expertise. This is the sort of process design problem that, Autor argues, will be inherent to AI development for a long time to come.
Despite the challenges, there are many examples of AI technologies’ great potential to augment human expertise. Studies have shown, for instance, that using ChatGPT – OpenAI’s generative AI chatbot – helps writers write faster and better, largely by closing the skill gap among professional writers. In customer support and software development, too, AI technologies have been used to level up expertise and make valuable processes more efficient. Of course, Autor cautions, AI implementation will cause job displacement and expertise dilution, which can result in fewer jobs in affected fields or more jobs with lower pay – think the “gig economy.”
In the short and medium term, AI innovations do not portend mass unemployment or excessive amounts of stranded expertise, as the disruptions the technology induces are likely to create more jobs than they make redundant. Yet the transitions will have to be thoughtfully managed by people. AI will impose real costs, and, as Autor notes, “we should be resilient to that disruption.” Large benefits, he continues, will come from people having better tools at their disposal, “and the best case is that we extend human expertise to allow people to do higher value-added work without as much formal training.” And while there is a great deal of both pessimism and optimism about AI technologies, Autor argues that we are concerned about the wrong things and focused on an unrealistic time horizon. These new computing technologies fit well under Amara’s law, which holds that people tend to overestimate the effects of new technologies in the short run and underestimate them in the long run.
Most innovations, he continues, do not automate work but extend human capabilities – a new technology may automate a portion of a job’s tasks while greatly expanding the expertise needed to do that job and other newly created jobs. More than three out of five workers today are doing a job that did not exist 80 years ago. “Because as we innovate,” Autor concludes, “we change what’s possible. We also change our wealth, our income and our tastes. And those all create new demand for work.”
Autor ended his talk by posing the key questions we should be asking about new technological development: “Will we use AI to empower or extend expertise? I say ‘will’ because AI is not going to make these decisions, right? AI is a tool – it is not the actor. Many people say, ‘What will AI do to the future?’ when we should be asking, ‘What will we do to the future with AI?’
“The future is not a prediction problem,” Autor continues, “it is a design problem. So, we should be asking how we can use it to extend, rather than replace, expertise. While that’s a nice thing to say, the key question is, How do we do it? That’s a very hard question to answer. Getting to the answer is an R&D opportunity or actually an R&D necessity, and that has to be an outcome of civil society. Industry, government, universities, labor representatives, they’re all going to shape the future of work.”