Goldman Sachs has predicted that popular AI tools such as ChatGPT could expose as many as 300 million jobs to automation. So what does that mean for managers? Is AI coming for your job? The short answer is a resounding “it depends”. Just because a job is exposed to automation does not mean it will disappear entirely. It may simply mean that certain tasks will be automated.
Historical data on new technologies provides some assurance here. Electricity, mass transportation, and computers all required adjustments but eventually raised productivity substantially. So it is not surprising that we have record low unemployment, 10 years after an Oxford study predicted that 47 percent of US jobs were at risk of automation.
With this in mind, the question is not whether ChatGPT is coming for managers’ jobs, but how it will change what they do.
Idea generation: managers are more likely than AI to come up with atypical ideas, but AI might enhance their abilities
To gain competitive advantage, firms either outdo their competitors in a highly contested space or they avoid competition altogether. Disciples of Blue Ocean strategy will tilt towards the latter.
Looking for blue ocean ideas is not a task that plays to generative AI’s strengths. Large language models such as ChatGPT learn the patterns and structure of their training data and then generate output with similar characteristics.
Given this tendency to converge towards the average, AI tools can replace average managers but not great ones. Great managers, however, can use the tool – with the right prompts – to support their search for outliers.
Problem solving: generative AI is great with analogies
Managers think in analogies. Until recently we thought that machines could not do this. However, a recent experiment by Maciej Workiewicz from ESSEC Business School, Phanish Puranam from INSEAD, and Prothit Sen from the Indian School of Business shows that they can – and can sometimes do so better than humans.
In the experiment, they shared two stories with MBA students and ChatGPT (GPT-4). One story highlighted the problem of survivor bias; the other was the well-known radiation story, in which treating a tumour with a single high-intensity beam of radiation would harm the patient, while attacking it with weaker beams from different directions was effective. Afterwards, the participants were given two business situations that could be solved by drawing analogies from one of these stories. The authors constructed the problems themselves to make sure they were not in the model’s training set.
When the AI model was told it could use prior stories, it was the decisive winner. Without a hint, the difference between AI and human subjects was small. Similarly, when ChatGPT got it wrong, the mistakes were often very humanlike in character.
Workiewicz noted that “We are finding that what works really well is to offer several models and ChatGPT checks which one is the best fit. So if you, like Charlie Munger, have several models in mind when you want to figure out how promising a business idea is, you can use the tool to check whether an idea matches one of the models.”
Implementing ideas: hard to replace managers
One of the main jobs of managers is to get people to do things, and most initiatives fail during implementation. Getting people on board is not something machines can offer. Implementation works best when managers involve people, and here AI tools can be helpful once again, as they enable the involvement of large groups. In a discussion forum involving thousands of participants, AI can help moderators connect different ideas, both live and in subsequent analysis.
ChatGPT can also help draft stories that translate a bone-dry strategy that no one remembers into something memorable.
Training: unexpected outcomes
Another important job of managers is to mentor and train people, to get them ready for their jobs. And for those aspiring to become managers, training is obviously part of the journey too. It’s tempting to replace in-person training with ChatGPT. But using AI as a trainer has interesting implications.
Fabian Gaessler from Pompeu Fabra University and Henning Piezunka from INSEAD compared chess players from the former Soviet Union and the West. The former did not train with chess computers while the latter did. As their study shows, those who trained with chess computers became technically stronger players, but they failed to spot opportunities when opponents made mistakes. Of course, humans (and by extension your competitors) make mistakes, and this is something you want to exploit. If your staff is trained only by machines, chances are they will be too rational to do so.
AI and your job
Dissecting the job of managers into different tasks suggests that AI is primarily complementary. Of course, those unwilling or unable to adjust are likely to struggle. A more general study on jobs and AI by James Hayton from Warwick Business School also painted an optimistic picture, as long as firms equip workers with the right skills.
Read the full article here