AI Job Apocalypse or Overblown? Experts Debate Future of Work at DTX Manchester

We have all heard the apocalyptic predictions that artificial intelligence will take millions of jobs. That is hard to hear for those with years of work experience, but it is even bleaker for young people starting out, who constantly read that there will apparently be no careers to have. So is it true? And if it is, even partly, what can we do to ensure young people can still start their careers?

The Digital Transformation Expo Manchester, known as DTX, focuses on understanding what technological change means for businesses and employees. On the main stage, a group of experts gathered to debate the “elephant in the room”: if AI really eliminates junior and entry-level roles where people learn on the job, how can companies develop the next generation of leaders?

Debate on AI and Career Development

Host Gwyn Slee, chief technology officer and “AI Evangelist” at G-Star Intelligence, noted that using AI in the short term might seem sensible because it is “faster and cheaper.” However, he questioned whether that approach stores up problems down the line because young skilled people are not being hired. Richard Whittle, professor of artificial intelligence and public policy at the University of Salford, was blunt: our current way of training and developing experts is over. “There is little point in your organisation hiring as you were 10 or 20 years ago,” he said. Instead of recruiting young graduates as generalists, companies might need to move to “deep specialisation,” hiring people with very specific and often highly technical skills.

Keeley Crockett, professor in computational intelligence at Manchester Metropolitan University, emphasised that young recruits need ethical skills and the ability to verify information. She said the UK must offer hope to its young people, as we need a “productive workforce of all ages” with opportunities open to everyone. Caroline Ellis, AI and data ethics lead at NatWest Group, agreed that firms need to rethink what “entry level” means to ensure career opportunities remain for young people. She asked bosses: “How is your business going to run in a few years’ time?” If there are no junior or mid-managers, who will run those businesses once existing leaders move on or retire?

AI as a Tool, Not a Replacement

The panel later debated how AI should be used in organisations and what skills are most vital to make it work. Keeley said AI could be well used as a tool to “augment” human work, but it needs to be used by people with the right core knowledge and cognitive skills. “Unless you know how machine learning models work… how can you possibly verify what that AI comes out with? I’m very worried about complacency down the line,” she said. Caroline agreed that companies need the right culture of AI use, ensuring they are using “the right tools in the right place.” Host Gwyn added: “AI can be superhuman in some areas… but if you want it to tell you a joke, it’s absolute rubbish.”

Richard then asked the audience to step back and consider why AI is being used, particularly when it can give worse outcomes than human output. “Because,” he said, “it’s really cheap.” He added: “Why are we doing this? I would argue it’s because of the wider economic forces all our businesses are facing at the moment… This is why we’re seeing an absolute explosion in very poor but very cheap AI output.” Richard said firms need to assess where the “humans in the loop” should be and make realistic cost assessments. Keeley added that companies need to stay accountable to all their stakeholders in the age of AI. If businesses lose core skills through job cuts or lack of recruitment, and then an AI gets something wrong, “where is the liability?”

Rising Costs and Long-Term Risks

One upcoming issue for AI users will be the costs of those services. In the early days of mass AI adoption, tools have been cheap or even free to use. But in due course, once AI is embedded in business, that will change. Gwyn asked what will happen to AI use once that cheap pricing ends. Richard reflected on how other tech businesses, such as ride-sharing apps, have relied on cheap prices to grab market share before raising prices. He said that when AI use prices rise, businesses will need to reflect on how much they are using it. “We are using AI for weird things we wouldn’t be using it for,” he said, and asked if people would still use AI to generate simple emails or social media posts if each use cost more. If people were paying the true cost for those services, he said, they would not use them as much.

So what happens if companies have shed jobs and abandoned graduate training schemes, hoping to save cash, “and then the price goes up 10 or 15 times?” Unless you have thought about your organisation’s long-term future and the skills you will need, Richard said, you may end up with total dependency on AI when “the pricing is outside your control.” Caroline agreed, saying: “You are paying less than we should and they are going to put the prices up. If you were paying 12 times what you pay now, would you still choose to deploy it?” Richard added that he had “honestly not seen” an accurate long-term financial forecast of AI use, because people assume AI will stay free or cheap. So organisations should assess carefully what they need to use AI for and what still needs people.

Later, he said it would be hard and expensive for firms to rebuild human expertise inside an organisation once it is gone. He referenced EM Forster’s prophetic science fiction story The Machine Stops, in which people become dependent on technology, and said: “Part of the answer is – how do we value future expertise now?” Keeley said organisations need to value their people and the skills and deep knowledge they have. “If you lose that, maybe short term you’ll have a solution, but in three to five years’ time you won’t,” she said. Caroline added: “Plus if AI takes all the jobs, who’s going to buy your products and services?”

Lessons from the Post Office Scandal

A sobering reminder of what can go wrong when companies rely on technology came later, when Bryan Glick, editor of Computer Weekly, spoke about the Post Office scandal, in which hundreds of people were wrongly prosecuted over failures of the Horizon IT system. For many, the sheer scale of the scandal only became clear in 2024, when the TV drama Mr Bates vs The Post Office aired, leading then Prime Minister Rishi Sunak to announce new measures to compensate the hundreds of subpostmasters and subpostmistresses who were wrongfully convicted. But Bryan pointed out that this was “a scandal whose roots went back 24 years,” and that Computer Weekly had been reporting on it for all that time, recognising early on that the problems at the Post Office had the “noxious whiff” of a serious failure: officials refused to accept that there could be any issues with the Horizon system despite mounting evidence.

The 2025 report into the scandal by Sir Wyn Williams said the Post Office “maintained the fiction that its data was always accurate.” Bryan said that the Post Office scandal and other high-profile scandals involving government each seemed like a perfect storm in its own right. It would be easy to look at each individual scandal and conclude it could not happen again. He warned: “I’m here now to say to you, think again.” It will be harder in future to solve any tech crises involving AI, he said, as we often do not know exactly what is going on inside those models. The key issue for organisations using AI and other technology is accountability. Leaders and employees alike need to know when to ask questions of the technology and of each other, and must not assume technology is infallible.

Bryan was speaking on stage to Sharron Gunn, chief executive of BCS, The Chartered Institute for IT. She said that with AI there always needs to be a “human in the loop,” and everyone at a company, including board members, needs to know what to ask about technology. All directors, she suggested, should have basic tech training. Asked what lessons government and business could take from the Post Office scandal, Bryan said organisations need to understand that “technology is a tool” and “not a magic solution to all your economic and social problems.” And, he said, people are vital to the success of any implementation of technology. “AI can help,” he said, “but government needs to listen to the experiences of the people who implement it” – not just to tech firms.