Yves here. Given the willingness of far too many businesses (and one assumes government agencies) to implement at best flaky and at worst error-producing AI with a lack of adult supervision, one can understand why many are calling for regulation. But the wee problem is the AI overlords have also been engaging in AI scaremongering…so as to get regulation. Why? They are afraid that too many AI implementations have low barriers to entry (as in they could be developed with pretty small and specific training sets). Having to comply would limit who could develop AI by imposing compliance costs. So odds greatly favor that any regulatory regime will be designed to further enrich AI titans.

My preferred remedy is liability. Make AI developers and implementers liable, with treble damages and recovery of legal costs if they could have foreseen the bad outcomes. This can be done by statute. That would lead AI entrepreneurs to be a lot more careful before they foisted their creations on the public at large.

By Tom Valovic, a writer, editor, futurist, and the author of Digital Mythologies (Rutgers University Press), a series of essays that explored emerging social and cultural issues raised by the advent of the Internet. He has served as a consultant to the former Congressional Office of Technology Assessment and was editor-in-chief of Telecommunications magazine for many years. Tom has written about the effects of technology on society for a variety of publications including Common Dreams, Counterpunch, The Technoskeptic, the Boston Globe, the San Francisco Examiner, Columbia University’s Media Studies Journal, and others. He can be reached at jazzbird@outlook.com. Originally published at Common Dreams

By virtue of luck or just being in the right place at the right time, I was the first journalist to report on the advent of the public internet.

In the early 1990s, I was editor-in-chief of a trade magazine called Telecommunications. Vinton Cerf, widely considered to be the “father of the internet,” was on our editorial advisory board. One Sunday afternoon, Vint contacted me to let me know that the federal government was going to make its military communication system, ARPANET, available to the general public. After reading his email, I more or less shrugged it off. I didn’t think much of it until I started investigating what that would really mean. After weeks of research and further discussions, I finally realized the import of what Vint had told me and its deeper implications for politics, society, culture, and commerce.

As the internet grew in size and scope, I started having some serious concerns. And there was a cadre of other researchers and writers who, like myself, wrote books and articles offering warnings about how this powerful and incredible new tool for human communications might go off the rails. These included Sven Birkerts, Clifford Stoll, and others. My own book Digital Mythologies was dedicated to such explorations.

While we all saw the tremendous potential that this new communications breakthrough had for academia, science, culture, and many other fields of endeavor, many of us were concerned about its future direction. One concern was how the internet could conceivably be used as a mechanism of social control—an issue closely tied to the possibility that corporate entities might actually come to “own” the internet, unable to resist the temptation to shape it for their own advantage.

The beginning of the “free service” model augured a long slow downward slide in personal privacy—a kind of Faustian bargain that involved yielding personal control and autonomy to Big Tech in exchange for these services. Over time, this model also opened the door to Big Tech sharing information with the NSA and many businesses mining and selling our very personal data. The temptation to use free services became the flypaper that would trap unsuspecting end users into a kind of lifelong dependency. But as the old adage goes: “There is no free lunch.”

Since that time, the internet and the related technologies it spawned, such as search engines, texting, and social media, have become all-pervasive, creeping into every corner of our lives. By default, and without due process of democratic participation or consent, these services are rapidly becoming a de facto necessity for participation in modern life. Smartphones, which mediate these amazing capabilities, are now often essential tools for navigating both government services and commercial transactions.

Besides the giveaway of our personal privacy, the problems with technology dependence are now becoming all too apparent. Placing our financial assets and deeply personal information online creates significant stress and insecurity about being hacked or tricked. Tech-based problems then require more tech-based solutions in a kind of endless cycle. Clever scams are increasing and becoming more sophisticated. Further, given the global CrowdStrike outage, it sometimes seems like we’re building this new world of AI-driven digital-first infrastructure on a foundation of sand. And then there’s the internet’s role in aggravating income and social inequality. Unfortunately, this technology is inherently discriminatory, leaving seniors and many middle- and lower-income citizens in the dust. To offer a minor example, in some of the wealthier towns in Massachusetts, you can’t park your car in public lots without a smartphone.

Will AI Wreck the Internet?

Ironically, the Big Tech companies working on AI seem oblivious to the notion that this technology has the potential to be a wrecking ball. Conceivably, it could diminish everything that’s been good and useful about the internet while creating unprecedented levels of geopolitical chaos and destabilization. Recent trends with search engines offer a good example. Not terribly long ago, search results yielded a variety of opinions and useful content on any given topic. The searcher could then decide for themselves what was true or not true by making an informed judgment.

With the advent of AI, this has now changed dramatically. Some widely used search engines are herding us toward specific “truths” as if every complex question had a simple multiple-choice answer. Google, for example, now offers an AI-assisted summary when a search is made. This becomes tempting to use because manual search now yields an annoying truckload of sponsored ad results. These items then need to be systematically ploughed through, rendering the search process difficult and unpleasant.

This shift in the search process appears to be by design, in order to steer users toward habitually using AI for search. The implicit assumption that AI will provide the “correct” answer, however, nullifies the whole point of having a user-empowered search experience. It also radically reverses the original proposition of the internet, i.e., to be a freewheeling tool for inquiry and personal empowerment, threatening to turn the internet into little more than a high-level interactive online encyclopedia.

Ordinary citizens and users of the internet will be powerless to resist the AI onslaught. The four largest internet and software companies, Amazon, Meta, Microsoft, and Google, are projected to invest well over $200 billion this year on AI development. Then there’s the possibility that AI might become a kind of “chaos agent” mucking around with our sense of what’s true and what’s not true—an inherently dangerous situation for any society to be in. Hannah Arendt, who wrote extensively about the dangers of authoritarian thinking, gave us this warning: “The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist.”

Summing up, we need to radically reassess the role of the internet and associated technologies going forward and not abandon this responsibility to the corporations that provide these services. If not, we risk ending up with a world we won’t recognize—a landscape of dehumanizing interaction, even more isolated human relationships, and jobs that have been blithely handed over to AI and robotics with no democratic or regulatory oversight.

In 1961, then FCC Chairman Newton Minow spoke at a meeting of the National Association of Broadcasters. He observed that television had a lot of work to do to better uphold public interest and famously described it as a “vast wasteland.” While that description is hardly apt for the current status of the internet and social media, its future status may come to resemble a “black forest” of chaos, confusion, misinformation, and disinformation with AI only aggravating, not solving, this problem.

What then are some possible solutions? And what can our legislators do to ameliorate these problems and take control of the runaway freight train of technological dependence? One of the more obvious actions would be to reinstate funding for the Congressional Office of Technology Assessment. This agency was established in 1974 to provide Congress with reasonably objective analysis of complex technological trends. Inexplicably, the office was defunded in 1995 just as the internet was gaining strong momentum. Providing this kind of high-level research to educate and inform members of Congress about key technology issues has never been more important than it is now.