Artificial intelligence thinkers seem to emerge from two communities. One is what I call blue-sky visionaries who speculate about the future possibilities of the technology, invoking utopian fantasies to generate excitement. Blue-sky ideas are compelling but are often clouded over by unrealistic visions and the ethical challenges of what can and should be built.
In contrast, what I call muddy-boots pragmatists are problem- and solution-focused. They want to reduce the harms that widely used AI-infused systems can create. They focus on fixing biased and flawed systems, such as facial recognition systems that often mistakenly identify people as criminals or violate privacy. The pragmatists want to reduce deadly medical mistakes that AI can make, and steer self-driving cars to be safe-driving cars. Their goal is also to improve AI-based decisions about mortgage loans, college admissions, job hiring and parole granting.
As a computer science professor with a long history of designing innovative applications that have been widely implemented, I believe that the blue-sky visionaries would benefit from heeding the thoughtful messages of the muddy-boots realists. Combining the work of both camps is more likely to produce the beneficial outcomes that will lead to successful next-generation technologies.
While the futuristic thinking of the blue-sky speculators sparks our awe and earns much of the funding, muddy-boots thinking reminds us that some AI applications threaten privacy, spread misinformation and are decidedly racist, sexist and otherwise ethically dubious. Machines are undeniably part of our future, but will they serve all future humans equally? I think the caution and practicality of the muddy-boots camp will benefit humanity in the short and long run by ensuring diversity and equality in the development of the algorithms that increasingly run our day-to-day lives. If blue-sky thinkers integrate the concerns of muddy-boots realists into their designs, they can create future technologies that are more likely to advance human values, rights and dignity.
Blue-sky thinking started early in the development of AI. The literature was dominated by authors who pioneered the technology and heralded its inevitable transformation of society. The “fathers” of AI are usually considered to be Marvin Minsky and John McCarthy from MIT and Allen Newell and Herb Simon from Carnegie Mellon University. They gathered at meetings, such as the 1956 Dartmouth Conference, generating enthusiasm exemplified by Simon’s 1965 prediction that “machines will be capable, within 20 years, of doing any work a man can do.”
There have been many other contributors to AI, including the three Turing Award winners in 2018: Geoffrey Hinton, Yoshua Bengio and Yann LeCun. Their work on deep-learning algorithms was an important contribution, but their continued celebrations of AI’s importance and inevitability included Hinton’s troubling 2016 quote that “people should stop training radiologists now. It’s just completely obvious that within five years deep learning is going to do better than radiologists.” A more human-centered view is that deep-learning algorithms will become another tool, like mammograms and blood tests, that empower radiologists and other clinicians to make more accurate diagnoses and offer more appropriate treatment plans.
The theme of robots replacing people, thereby creating widespread unemployment, was legitimized by a 2013 report from Oxford University, which estimated that 47 percent of U.S. jobs were at high risk of being automated. Futurist Martin Ford’s 2015 book Rise of the Robots latched on to this idea, painting a troubling picture of low- and high-skilled jobs becoming so completely automated that governments would have to supply a universal basic income because there would be few jobs left. The reality is that well-designed automation increases productivity, which lowers prices, raises demand and brings benefits to many people. In parallel, these changes trigger vigorous creation of new jobs, which has helped lead to the current high levels of employment in the U.S. and some other nations.
Yes, there were authors who offered cautionary tales and a different vision, such as MIT professor Joseph Weizenbaum in his 1976 book Computer Power and Human Reason, but these were exceptions.
The muddy-boots pragmatists started a new wave of thoughtful AI critiques. They shifted the discussion from blue-sky optimism to clearly identifying the threats to human dignity, fairness and democracy. Op-ed pieces and a 2016 White House symposium were helpful initiatives, and mathematician Cathy O’Neil’s 2016 book Weapons of Math Destruction broadened the audience. She focused on how opaque AI algorithms could be harmful when applied at scale to decide on parole, mortgage and job applications. O’Neil’s powerful examples promoted human-centered thinking.
Other books, such as Ruha Benjamin’s Race After Technology: Abolitionist Tools for the New Jim Code, followed, showing how algorithms need to be changed to increase economic opportunities and decrease racial bias.
Social psychologist Shoshana Zuboff’s 2019 book The Age of Surveillance Capitalism traced Google’s shift from its early motto of “Don’t be evil” to calculated efforts “to obfuscate these processes and their implications.” Zuboff’s solution was to call for changed business models, democratic oversight and privacy sanctuaries. Scholar Kate Crawford delivered another devastating muddy-boots analysis in her 2021 book Atlas of AI, which focused on the extractive and destructive power of AI on jobs, the environment, human relationships and democracy. She refined her message in a captivating lecture for the National Academy of Engineering, describing constructive actions that AI researchers and implementers could take, while encouraging government regulation and individual efforts to protect privacy.
Muddy-boots activists are gaining recognition for their positive research contributions, which offer clever designs that benefit people. In October 2021, Cynthia Rudin received the $1 million prize for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence. Her work on interpretable forms of AI is a response to the bewildering complexity of opaque black-box algorithms, which make it hard for people to understand why they were denied parole, mortgages or jobs.
Many of the muddy-boots thinkers are women, but men have also spoken up about the need for humane oversight. Technology pioneer Jaron Lanier raises concerns in his Ten Arguments for Deleting Your Social Media Accounts Right Now, which identifies the harms from social media and urges users to take more control over how they use it. Legal scholar Frank Pasquale’s New Laws of Robotics explains why AI developers should value human expertise, avoid technological arms races and take responsibility for the technologies they create. However, ensuring human control by way of human-centered designs will take substantial changes in national policies, business practices, research agendas and educational curricula.
The diverse workers of this camp—including women, nonbinary people, people with disabilities and people of color—have important messages to ensure that the blue-sky dreams can be channeled into realizable products and services that benefit people and preserve the environment.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.