Yves here. While it’s good to see discussion of the employment effects of greater use of AI and what might be done to mitigate them, I am bothered by Richard Murphy making unironic use of a chat with ChatGPT to get proposals. To update Lenin: any workers, but above all knowledge workers, who interact with AI are helping it weave the rope with which it will hang them. Why are they so busily training their replacements? If you have any interest in self-preservation, you should stay well away from AI interactions unless you can find a way to corrupt their training sets.

It is unsurprising that the bot served up hackneyed ideas like “Let them eat training.”

By Richard Murphy, a chartered accountant and a political economist. He has been described by the Guardian newspaper as an “anti-poverty campaigner and tax expert”. He is Professor of Practice in International Political Economy at City University, London and Director of Tax Research UK. He is a non-executive director of Cambridge Econometrics. He is a member of the Progressive Economy Forum. Originally published at Tax Research UK

There is much discussion in the media this morning of the possibility that AI will cost tens, if not hundreds, of millions of jobs in the world economy, with the FT carrying articles on the issue today, none of which adds much to the debate beyond sending out warning signals.

The obvious questions to ask are: will AI create unemployment, and what can be done about it? So I asked ChatGPT, an AI tool, and this was its best response:

AI has the potential to create unemployment in certain industries and job roles, particularly those that involve repetitive tasks or can be easily automated. However, it’s important to note that AI can also create new job opportunities and help individuals to be more productive and efficient in their work.

To mitigate the potential negative effects of AI on employment, there are several things that can be done:

  1. Invest in education and retraining programs: Governments, educational institutions, and businesses can invest in education and retraining programs to help workers develop the skills needed for new jobs that are emerging as a result of AI.
  2. Encourage entrepreneurship: Encouraging and supporting entrepreneurship can create new job opportunities and help individuals to start their own businesses.
  3. Implement policies to support workers: Governments can implement policies such as unemployment insurance, healthcare, and retirement benefits to support workers who may be displaced by AI.
  4. Foster a culture of lifelong learning: Encouraging and supporting lifelong learning can help workers to adapt to new technologies and job roles as they emerge.

Overall, it’s important to take a proactive approach to address the potential impact of AI on employment, and to focus on creating new job opportunities and supporting workers through education and retraining programs.

As far as it goes, that’s not bad. A good A-level answer, maybe.

But it does not go nearly far enough, because the answer fails to address the fundamental issues that flow from its own suggestion: that people whose jobs are replaced by AI technology should be retrained to work in sectors where there could be growth. In another iteration of the answer (I asked several times), it suggested that new employment might come in renewable energy and, most especially, in healthcare and education, where, it said, there is ‘likely to be significant growth in coming years’.

That may well be true, but the missing piece in all this is how the transformation is going to be paid for, alongside all the other changes that will necessarily happen in the years to come.

We can, of course, have significant retraining programmes. And we can employ more people in renewables, healthcare and education. But all of these are likely to require significant government support, and right now the prevailing attitude is that, firstly, there is no more money and, secondly, there must be no more tax.

Neither of those attitudes is sustainable in the face of AI. What AI does is increase the return to capital in the economy by reducing the return to labour. That is the whole basis of its appeal. I suggest it is pointless for anyone to argue otherwise: this is what will happen in the absence of intervention from governments.

Historically, when such transitions took place, three things happened. First, there were major economic downturns (this happened almost continually in the last three decades of the 19th century). Second, there was poverty; I would note it was the same era, before trade unions took on their vital role in addressing it. And third, new sectors did eventually develop to employ the displaced labour.

The first two are real possibilities with the widespread use of AI. After all, unemployed people cannot spend much: the risk of real economic decline, even as the return to capital increases, is substantial.

What worries me most, though, is that I see almost no chance of new private sector activity emerging to employ those made redundant by AI. It is true that we need more education, healthcare and social care, as well as better pensions, and that is true irrespective of AI. AI makes all of them possible by making the necessary resources available, but only if government can employ the people involved.

Of course, a government like that in the UK can create the funds required to do this. We know that is possible. But to control inflation it will also need to increase tax, and there is only one obvious place where that additional tax should be levied: on the owners of the benefits arising from AI. That, inevitably, means that taxes on capital will have to increase very significantly.

The question is, will that be possible? If it is, the transition to AI could be beneficial. If it is not – because tax havens hold out and resist this change in a way that makes it impossible for this shift in reward towards capital to be properly addressed and corrected through the tax system – then we could be in deep trouble.

I think this is the first time I have addressed this issue. I see it as a new frontier in tax justice. Either we work out how to tax capital effectively (not by taxing machines, as Bill Gates once suggested, but by taxing the ownership of capital itself and the rewards flowing from it), or AI creates massive economic risk.

I suspect this is going to be a recurring theme here in the future.
