Your humble blogger has been reluctant to dignify AI, even in the face of technologists we know and respect saying that it is truly revolutionary. But then the question becomes “Revolutionary for what?”

The enthusiasm for AI, aside from that of investors in the space and various professional hangers-on, comes from businesses drawn by the prospect of cost savings via productivity increases. And most are unabashed in saying that this means replacing workers.

But as we will soon show, AI mainly decreases rather than increases productivity. So if that is the case, why has the fanfare continued at a fever pitch?

It is not hard to discern that, irrespective of actual performance, AI is yet another tool to discipline labor, here the sort of white-collar and professional workers that management tends to view as uppity, particularly those who push back over corner-cutting and rule-breaking.

In this it falls in the proud tradition of other labor-bargaining-power-reducing yet overhyped gimmicks like outsourcing and offshoring.

Let me quote IT expert Robert Cringely from an important 2015 article on the use of H-1B visas and offshoring. Cringely said it was an open secret that offshoring was not working, but in a modern analogue to footbinding, no one dared stop because investors would punish the company based on wrong-headed assumptions. From Cringely:

Now let’s look at what this has meant for the U.S. computer industry.

First is the lemming effect where several businesses in an industry all follow the same bad management plan and collectively kill themselves…

This mad rush to send more work offshore (to get costs better aligned) is an act of desperation. Everyone knows it isn’t working well. Everyone knows doing it is just going to make the service quality a lot worse. If you annoy your customer enough they will decide to leave.

The second issue is you can’t fix a problem by throwing more bodies at it. USA IT workers make about 10 times the pay and benefits that their counterparts make in India. I won’t suggest USA workers are 10 times better than anyone, they aren’t. However they are generally much more experienced and can often do important work much better and faster (and in the same time zone). The most effective organizations have a diverse workforce with a mix of people, skills, experience, etc. By working side by side these people learn from each other. They develop team building skills. In time the less experienced workers become highly effective experienced workers. The more layoffs, the more jobs sent off shore, the more these companies erode the effectiveness of their service. An IT services business is worthless if it does not have the skills and experience to do the job.

The third problem is how you treat people does matter. In high performing firms the work force is vested in the success of the business. They are prepared to put in the extra effort and extra hours needed to help the business — and they are compensated for the results. They produce value for the business. When you treat and pay people poorly you lose their ambition and desire to excel, you lose the performance of your work force. It can now be argued many workers in IT services are no longer providing any value to the business. This is not because they are bad workers. It is because they are being treated poorly.

Let’s turn more briefly to offshoring, which America is now amusingly trying to reverse. Through the McKinsey mafia and other contacts, I have heard quite a few tales about decisions to move manufacturing abroad. In a substantial majority, the business case was not compelling and/or the company could have achieved similar results through just-in-time and other improvements. But they went ahead because management wanted to look like it was keeping up with the Joneses and/or knew it was what investors wanted to see.

Moreover, from what I could tell, no one risk-adjusted the alleged improvement in results. What about the cost of contracting? Of disputes and finger-pointing about goods quality and delivery times? Of all of the extra coordination and supervision? Of catastrophic events at the vendor’s plant? And, as Cringely alluded to, the loss of basic know-how?

Keep in mind that for most manufactured goods, direct factory labor is 3% to 7% of total product cost, typically 3% to 5%. Offshoring is best thought of not as a cost savings but as a transfer from direct factory labor to higher executives, middle managers, and to a lesser degree, various outside parties (lawyers, outsourcing consultants), all of whom have more to do in devising and minding a more complex and fragile business system.
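To see how little room the labor-cost arithmetic actually leaves, here is a back-of-the-envelope sketch in Python. The figures are assumptions chosen purely for illustration (a 5% direct labor share, a 60% cut in offshore labor cost, and invented add-back costs), not numbers from any particular company or study:

```python
# Illustrative arithmetic with assumed numbers, not figures from any specific company.
# Direct factory labor is taken as 5% of total product cost (mid-range of the
# 3% to 5% cited above); offshore labor is assumed to cost 60% less.

total_cost = 100.0          # index the all-in product cost to 100
labor_share = 0.05          # assumed direct factory labor share of total cost
offshore_discount = 0.60    # assumed reduction in direct labor cost from offshoring

gross_savings = total_cost * labor_share * offshore_discount   # 3.0 points of cost

# Assumed add-backs that rarely make it into the business case:
added_coordination = 1.0    # extra supervision, travel, expediting
added_logistics = 1.5       # freight, in-transit inventory, buffer stock
added_overhead = 1.0        # contracting, disputes, quality escapes

net_savings = gross_savings - (added_coordination + added_logistics + added_overhead)

print(f"Gross savings: {gross_savings:.1f}% of product cost")   # 3.0%
print(f"Net savings:   {net_savings:.1f}% of product cost")     # -0.5% in this illustration
```

Change the assumptions however you like; unless the add-backs are close to zero, the headline savings largely or entirely evaporate.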

Now back to AI.

I am not saying that there are no implementations where AI would be a net plus, even allowing for increased risk. But there’s way, way, way too much treating AI output as if it were an oracle when it’s often wrong (and I see this well over 50% of the time when readers quote AI results in comments on topics in which I have expertise). And I’ve been shown cases of literally dangerous output in high-stakes environments, like medicine.

One of our normally skeptical tech experts who was impressed by AI made clear its limits: it was like having a freshman with a 3.9 GPA as your assistant. It produces a very good first pass, but its results still have to be reviewed and revised. How often is that happening in practice?

And surveys so far have been finding that AI is a productivity dampener. For instance, from Inc Magazine last July:

When corporate executives look at AI, many of them see a means of boosting productivity. But ask employees how they view the tech and you get a much more pessimistic perspective.

That’s the big takeaway from a survey that freelance job platform Upwork just published. According to the firm’s research arm, 96 percent of C-suite executives “expect the use of AI tools to increase their company’s overall productivity levels.” Yet at the same time, 77 percent of employees who use AI tools have found that the technology has “actually decreased their productivity and added to their workload.”

That’s for a variety of reasons, the survey indicates, including the time employees now have to spend learning how to use AI, double-checking its work, or keeping up with the expectations of managers who think AI means they can take on a bigger workload.

And from Vimoh’s Ideas in December:

…when AI tools came over the horizon, we were hearing a lot about how they’re going to make people more productive. And there were studies to this effect. There is at least one study by McKinsey, which predicted a productivity growth of 0.1 to 0.6% by 2040 from AI use. But 2040 is far away, and until now, we haven’t seen that. In fact, we may actually be seeing the opposite because a recent study done by Intel says that productivity is actually down.

They followed 6,000 employees in Germany, France, and the UK and found that AI PC owners were spending longer on digital chores than using traditional PCs. The reason behind this, of course, is that you cannot hold AI tools accountable. If you are someone who has AI tools, who has a workplace where AI tools are being used to achieve something, you can’t fire an AI tool. In fact, you’re paying money to use the AI tool; you’re paying money to the company that made the AI tool. At the end of the day, the person you can hold responsible, the person you can hold accountable, is your employee. You can tell them that if this job does not get done, your job is on the line. You can’t say that to an AI tool.

So, the work at the end of the day is still being done by someone who’s using the AI tool. And now, while earlier they just had to do the job, now they have to train the AI to do the job, make the AI do the job, and check what the AI has done. And in some cases, probably many cases, fix the mistakes being made by the AI.

I myself have tried to use AI tools to do some jobs that I don’t like to do. And in every single case, it has to be checked. In every single case, it has to be fixed. So, Intel thinks this is a problem of employees not knowing how to use the tools. I think that short of AI becoming agents in their own right, where they take decisions and perform tasks and are proactive, we’re not going to solve this problem. And that the buck will continue to stop at humans and the humans who hire them to do work.

The way Vimoh breaks his argument down suggests he’s had to say this sort of thing to resistant higher-ups before.

The AI optimists among you might contend that surely employees or companies will get better at AI implementation. Erm, bad tools are bad tools. But even if workers get better at finessing them, there will still have been a phase of productivity losses. And that phase is front-loaded, which makes it more expensive in net-present-value terms. Is there any reason to think that this cost will eventually be recovered?
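To make the net-present-value point concrete, here is a small sketch with invented cash flows: an up-front productivity loss followed by hypothetical later gains, discounted in the usual way. The figures and the 10% discount rate are assumptions for illustration, not drawn from any study.

```python
# Hypothetical cash flows (arbitrary units) for an AI rollout: losses up front
# while staff learn, check, and fix the tools; assumed gains only in later years.
cash_flows = [-10.0, -5.0, 2.0, 4.0, 5.0, 5.0]   # invented figures, by year
discount_rate = 0.10                              # assumed discount rate

npv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))
undiscounted = sum(cash_flows)

print(f"Undiscounted total: {undiscounted:+.1f}")  # +1.0: looks like it eventually pays back
print(f"NPV at 10%:         {npv:+.1f}")           # about -3.4: front-loaded losses dominate
```

With any positive discount rate, losses taken now weigh more heavily than same-sized gains promised later, so the rollout has to overdeliver down the road just to break even.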

Again, before you try saying yes, consider the counter-evidence, which is the low level of tech competence generally. From I Will Fucking Piledrive You If You Mention AI Again:

Unless you are one of a tiny handful of businesses who know exactly what they’re going to use AI for, you do not need AI for anything – or rather, you do not need to do anything to reap the benefits. Artificial intelligence, as it exists and is useful now, is probably already baked into your businesses software supply chain…

Consider the fact that most companies are unable to successfully develop and deploy the simplest of CRUD applications on time and under budget. This is a solved problem – with smart people who can collaborate and provide reasonable requirements, a competent team will knock this out of the park every single time, admittedly with some amount of frustration….But most companies can’t do this, because they are operationally and culturally crippled….

Most organizations cannot ship the most basic applications imaginable with any consistency, and you’re out here saying that the best way to remain competitive is to roll out experimental technology that is an order of magnitude more sophisticated than anything else your I.T department runs, which you have no experience hiring for, when the organization has never used a GPU for anything other than junior engineers playing video games with their camera off during standup, and even if you do that all right there is a chance that the problem is simply unsolvable due to the characteristics of your data and business? This isn’t a recipe for disaster, it’s a cookbook for someone looking to prepare a twelve course fucking catastrophe.

Mind you, this generally sorry picture does not seem likely to be improved by DeepSeek or similar, more efficient models built on different underlying paradigms. We are at least relieved that DeepSeek and its ilk may, and should, derail OpenAI, ChatGPT, and other US AI flagships that are monster energy hogs. At least the level of planetary destruction will be reduced.

Or perhaps not. AI hype has made a lot of people obscenely rich, so the incentive to keep the grift going is very large. So expect more strained justifications about necessity and improved competitiveness, when the evidence of either among users remains thin.
