Politico recently ran an important story on how the AI policy sausage is being made, and it does not appear to be getting the attention it warrants. That may be because the piece, How a billionaire-backed network of AI advisers took over Washington, tried to do too many things at once. It presents the main nodes in this sprawling undertaking. Politico being oriented towards Beltway insiders, showing how this operation has entrenched itself in policy-making and identifying some of the key players is no small undertaking, considering that most are Flexians, as in they wear many hats and are linked to multiple influence groups. For instance:

RAND, the influential Washington think tank, received a $5.5 million grant from Open Philanthropy in April to research “potential risks from advanced AI” and another $10 million in May to study biosecurity, which overlaps closely with concerns around the use of AI models to develop bioweapons. Both grants are to be spent at the discretion of RAND CEO Jason Matheny, a luminary in the effective altruist community who in September became one of five members on Anthropic’s new Long-Term Benefit Trust. Matheny previously oversaw the Biden administration’s policy on technology and national security at the National Security Council and Office of Science and Technology Policy…

In April, the same month Open Philanthropy granted RAND more than $5 million to research existential AI risk, Jeff Alstott, a well-known effective altruist and top information scientist at RAND, sketched out a plan to convince Congress to pass licensing requirements that would “constrain the proliferation” of advanced AI systems.

In an April 19 email sent to several members of the Omidyar Network, a network of policy groups established by billionaire eBay founder Pierre Omidyar, Alstott attached a detailed AI licensing proposal which he claimed to have shared with approximately “40 Hill staffers of both parties.”

The RAND researcher stressed that the proposal was “not a RAND report,” and asked recipients to “keep this document and attribution off the public internet.”

You can see how hard this is to keep straight.1 And it is hard to take seriously the pretense that these players draw nice tidy boxes around their roles. Someone senior at RAND could conceivably be acting independently in writing an op-ed or giving a speech. Those are not hugely time intensive, and the individual could think it important to correct misperceptions, highlight certain issues under debate, or simply elevate their professional standing by giving an informative talk in an area where they have expertise. But Alstott’s scheme and his promotion of it sound like they took much more effort, raising the question of how a busy professional found the time to do that much supposed freelancing.

You can see how the need to prove up the network and then describe how it operates consumes a lot of real estate, particularly when further larded up with having to quote the various protests of innocence by the apparent perps.

So we’ll give short shrift to the description of the key actors and focus on the policies they are pushing, which amount to hyping the danger of AI turning into Skynet and endangering us all, while ignoring real and present hazards like bias and just plain bad results that users rely on because AI.

While this is all helpful, it still does not get to what we were told months ago by a surveillance state insider about the underlying economic motivations for, of all people, diehard Silicon Valley libertarians to be acting so out of character as to seek regulation. His thesis is that AI investors have woken up and realized there is nothing natively protectable or all that skill intensive about AI. All you need is enough computing power, and computing power is getting cheaper all the time. On top of that, users could come up with narrow applications built on comparatively small training sets, like a law firm training a model on its own correspondence so as to draft certain types of client letters.
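To make the insider’s point concrete, here is a minimal sketch of that law-firm scenario, using the freely available Hugging Face libraries and the small open GPT-2 model. Everything here is illustrative: the file name firm_letters.txt and the training settings are hypothetical stand-ins, not anything from the article. The point is only that nothing in the exercise requires a tech giant’s resources.

```python
# Hypothetical sketch: fine-tuning a small open model on a firm's own
# documents. Model choice, file name, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # any small open model would do
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# One past client letter per line in a plain-text file (hypothetical).
dataset = load_dataset("text", data_files={"train": "firm_letters.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="letters-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # The collator builds next-token-prediction labels from the inputs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # runs on a single commodity GPU, or slowly on a CPU
```

That is the entire barrier to entry: a corpus the firm already owns, open-source tooling, and modest compute.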

So the promoters are creating a panic about purported AI dangers so as to restrict AI development and ownership to “safe hands,” as in Big Tech incumbents, and to bar development and use by small fry.

So it’s disappointing and frustrating to see such an in-depth piece get wrapped around the axle of who is doing what to whom and not get all that far in considering the critically important “why.” It’s that tech players have gotten used to having or creating barriers to entry, via scale economies and customer switching costs (who wants to learn a new spreadsheet program?). They are not used to operating in a setting where small players or even customers themselves can eat a lot of their lunch.

To recap the article, an organization called Open Philanthropy,2 funded mainly by billionaire Facebook co-founder Dustin Moskovitz and his wife Cari Tuna, is paying for “more than a dozen AI fellows” who work as Congressional staffers or in Federal agencies or think tanks. This is presented as a purely charitable activity since the sponsor is a [squillionaire-financed] not-for-profit, even though it is clearly pushing an agenda designed to protect and increase the profits of Silicon Valley incumbents and venture capitalists who are investing in AI. But there is yet another layer of indirection, in that the Open Philanthropy monies are being laundered through the Horizon Institute for Public Service, yet another not-for-profit…created by Open Philanthropy.3

Here are the guts of the story:

Horizon is one piece of a sprawling web of AI influence that Open Philanthropy has built across Washington’s power centers. The organization — which is closely aligned with “effective altruism,” a movement made famous by disgraced FTX founder Sam Bankman-Fried that emphasizes a data-driven approach to philanthropy — has also spent tens of millions of dollars on direct contributions to AI and biosecurity researchers at RAND, Georgetown’s CSET, the Center for a New American Security and other influential think tanks guiding Washington on AI.

In the high-stakes Washington debate over AI rules, Open Philanthropy has long been focused on one slice of the problem — the long-term threats that future AI systems might pose to human survival. Many AI thinkers see those as science-fiction concerns far removed from the current AI harms that Washington should address. And they worry that Open Philanthropy, in concert with its web of affiliated organizations and experts, is shifting the policy conversation away from more pressing issues — including topics some leading AI firms might prefer to keep off the policy agenda…

Despite concerns raised by ethics experts, Horizon fellows on Capitol Hill appear to be taking direct roles in writing AI bills and helping lawmakers understand the technology. An Open Philanthropy web page says its fellows will be involved in “drafting legislation” and “educating members and colleagues on technology issues.” Pictures taken inside September’s Senate AI Insight Forum — a meeting of top tech CEOs, AI researchers and senators that was closed to journalists and the public — show at least two Horizon AI fellows in attendance.

Over the course of the article, author Brendan Bordelon quotes experts who depict the “AI will soon rule humans” threat as far too speculative to worry about, particularly when contrasted with the concrete harm that AI is doing now, like too often misidentifying blacks in facial recognition programs.

Perhaps your humble blogger is reading the wrong press, but I have not seen much amplification of the “AI as Skynet” meme, beyond short remarks by the likes of Elon Musk. That may be because the Big Tech movers and shakers are so confident of their takeover of the AI agenda in the Beltway that they don’t feel the need to worry about mass messaging.

Bordelon describes the policies the Open Philanthropy combine is promoting, and points out the benefits to private sector players that have close connections to major Open Philanthropy backers:

One key issue that has already emerged is licensing — the idea, now part of a legislative framework from Blumenthal and Sen. Josh Hawley (R-Mo.), that the government should require licenses for companies to work on advanced AI. [Deborah] Raji [an AI researcher at the University of California, Berkeley,] worries that Open Philanthropy-funded experts could help lock in the advantages of existing tech giants by pushing for a licensing regime. She said that would likely cement the importance of a few leading AI companies – including OpenAI and Anthropic, two firms with significant financial and personal links to Moskovitz and Open Philanthropy…

In 2016, OpenAI CEO Sam Altman led a $50 million venture-capital investment in Asana, a software company founded and led by Moskovitz. In 2017, Moskovitz’s Open Philanthropy provided a $30 million grant to OpenAI. Asana and OpenAI also share a board member in Adam D’Angelo, a former Facebook executive.

Having delineated the shape of the network, Bordelon can finally describe how the “AI is gonna get you” narrative advances the interests of the big AI incumbents:

Altman has been personally active in giving Washington advice on AI and has previously urged Congress to impose licensing regimes on companies developing advanced AI. That proposal aligns with effective-altruist concerns about the technology’s cataclysmic potential, and critics see it as a way to also protect OpenAI from competitors.

The article describes how an Open Philanthropy spokescritter tried claiming that a licensing regime would hobble the big players more than the small fry. That’s patent nonsense, since a large firm has more capacity to bear the financial and administrative costs. Not surprisingly, knowledgeable parties lambasted this claim:

Many AI experts dispute Levine’s claim that well-resourced AI firms will be hardest hit by licensing rules. [Suresh] Venkatasubramanian [a professor of computer science at Brown University] said the message to lawmakers from researchers, companies and organizations aligned with Open Philanthropy’s approach to AI is simple — “‘You should be scared out of your mind, and only I can help you.’” And he said any rules placing limits on who can work on “risky” AI would put today’s leading companies in the pole position.

“There is an agenda to control the development of large language models — and more broadly, generative AI technology,” Venkatasubramanian said.

The article closes by describing how other groups like Public Citizen and the Algorithmic Justice League are trying to enlist support for addressing AI risks to civil liberties. But it concludes that they are outmatched by the well-funded and coordinated Open Philanthropy effort.

So more and more of what could be the commons is being grabbed by the rich. Welcome to capitalism in its 21st century incarnation.
_____

1 The fact that the article refers to the effective altruist community so many times is also creepy. It appears “effective altruism” still has good brand associations in Washington and Silicon Valley, even though Sam Bankman-Fried’s outsized role should have tarnished it permanently. I hear that phrase and it makes me think that rich people are keen to extend their control over society to promote their goodthink and good action, and because they do it through not-for-profits, there can’t conceivably be ulterior motives, like learning how to get policies implemented, building personal relationships with influential insiders, and ego gratification. Even the official gloss comes off as a power trip.

2 The use of “open” in the name of a not-for-profit should come to have the same negative association as restaurants called “Mom’s”. I attended a presentation at an INET conference in 2015 where Chrystia Freeland was interviewing George Soros. Soros bragged that his Open Society foundation had directly or indirectly given a grant to every major figure in the Ukraine government. Since it was known even then that Banderite neo-Nazis were disproportionately represented, at least 15% versus about 2% of the population, that meant Soros was touting his promotion of fascists.

3 The article contains many pious defenses of this arrangement, like Open Philanthropy is not promoting specific policies via the Horizon Institute, has no role in the selection of its fellows, etc.

This entry was posted in Media watch, Politics, Regulations and regulators, Technology and innovation by Yves Smith.