Will AI give humans superpowers? The value it could add to the world is staggering
Portfolio: How do you see the hype around artificial intelligence? Do you think AI will really be able to change the future of humanity?
Eric Hazan: I think it will. AI has been around since the 1950s or so, but it has been accelerating in recent years, with AI adoption more than doubling since 2017. This has been reinforced by the rise of generative AI, which has taken AI from a topic for technology functions to a priority for business leaders. For example, one-third of respondents to our State of AI survey said their organizations are already using generative AI regularly in at least one function, which means that 60% of organizations claiming to have adopted AI are using generative AI.
Between 2017 and 2022, global investment in generative AI increased by a factor of 40, while investment in AI overall increased by a factor of 4. This massive investment in a very small part of AI has allowed the technology to evolve significantly and become visible. Today it has become attractive thanks to its exceptional performance and its ability to process enormous volumes of data, including unstructured data. We conducted our first analyses on the subject in 2017, and all our models have since been brought forward by ten years! It's a highly transformative technology for a number of reasons. One of them is value creation, which includes, for example, efficiency gains.
If we look at this factor alone, AI could add value to the world equivalent to the size of the UK economy every year (between $2.6 trillion and $4.4 trillion in productivity annually).
That's a pretty big deal. If we consider the impact of AI on jobs: theoretically, tasks that absorb up to 70% of employees' time today could be automated, and roughly half of today's work activities could be automated between 2030 and 2060, with a midpoint around 2045. So it might not completely change the future, but it will certainly have a big impact.
Eric Hazan is a managing partner at McKinsey and co-leads the Marketing & Sales Practice in Western Europe. Eric advises senior executives in several European countries, particularly in the area of digital transformation. He has contributed to numerous McKinsey research programs, including a recent study led by McKinsey’s French office regarding the digital transformation of French companies. He co-leads major McKinsey research initiatives on digital media and consumer trends worldwide. Over the past ten years, Eric has also advised several governments and public leaders on topics related to digital, innovation, and industry policies. Prior to joining McKinsey, Eric was a senior partner at Arthur D. Little, where he led the global TIME (telecoms, Internet, media, and entertainment) practice and the consumer practice. He started his career in marketing and sales in consumer goods at Kraft Jacobs Suchard and at Danone. He holds a master of science degree in management from HEC Paris, where he is a professor of business strategy.
What are the ethical limits of AI? Some believe that AI should only be used if it makes no mistakes at all, while others believe it is enough for it to make fewer mistakes than humans. What do you think?
E.H.: It depends. Look, AI covers a lot of areas. Let me give you an example: 10 or 15 years ago, an AI was built whose job was to recognize the same cat in every picture. At the time it could do this with 80-85% accuracy, whereas humans got it right 95% of the time. Today, thanks to huge advances, AI can tell when that particular cat is in the picture with 98-99% accuracy. In other words, nothing new was invented here; the AI has simply become more accurate.
But I'll give you another example: in the United States, a lawyer prepared for a trial using a language model. He asked it to send him relevant cases from previous years so he could compare them with the current case.
The AI sent relevant cases that reflected the main points, but there was just one problem: they were not real; the AI had made them up.
I think this says a lot about the relationship between AI and responsibility. According to one of our recent studies, only 21% of executives who have adopted AI believe their organizations have implemented policies on employees' use of generative AI technologies in their work. Only 32% say they have policies in place to mitigate inaccuracy (the most pervasive risk), and 38% say they have policies relating to cybersecurity risks (compared with 51% for AI as a whole).
AI has enormous power, but it only reflects how we use it. I'm not saying we shouldn't go ahead, but we need to determine exactly what we want to do with it and how.
So, you're saying control is key here.
E.H.: Yes, it is important. AI is a wonderful thing, but we need to have control over its use and the content it creates.
Will AI eliminate jobs or create new ones? Which jobs will still be done by humans in the future, and which will AI replace?
E.H.: I think AI will give humans superpowers. We'll probably all have to learn to live with an AI copilot, just as we've learned to live with a computer. This is particularly true of knowledge jobs, which until now have largely been spared from AI-driven automation.
So there will be people who use that superpower to create new jobs, but as we discussed earlier, a lot of jobs will also be automated.
The real question is: how can we upskill and reskill people for the age of AI so that they can use this superpower in their own work?
If we manage the transition well, we could emerge from the current phase of secular stagnation in productivity and end up with an augmented humanity rather than an automated one.
How does the European market compare to the US AI market?
E.H.: I see more and more European players starting to raise money for this technology. It's still true that most of the big investors are in the US and Asia: AI companies based in the US raised around $8 billion between 2020 and 2022, or 75% of total investment. But I'm also seeing more and more people who had moved to the US coming back to Europe, setting up companies on this continent and attracting investors. I think that is very positive.
I also think Europe has very strong human capital. European researchers take part in leading conferences, regularly publish in reputable scientific journals, and contribute to advances in generative AI.
This is also a sign that if the investment is there, Europe can raise its level with good organization. We need to strengthen Europe's investment capacity, speed of adoption, and ability to work collectively within the AI ecosystem in order to compete with the US and even Asia.
How does McKinsey use artificial intelligence?
E.H.: We built our own tool, Lilli, named after the first female mathematician McKinsey hired in the 1950s. Within the tool, we have two powerful language models and an interface that has access to our 135 proprietary databases.
It used to take a consultant two or three days' work to start analyzing a market. Now, with Lilli, just a few hours are enough!
What the team gains is productivity and added value. McKinsey currently employs 43,000 people, and 35,000 of them have already used Lilli. I see it adding even more value and speed to our work in the future. Our job is evolving, and so are our clients' expectations.
Do you use it?
E.H.: Well, not all the time, but yes, sometimes. Mostly I use it to summarize texts, for example to condense books or studies.
Cover photo: McKinsey