AI might have already set the stage for the next tech monopoly

From: POLITICO's Digital Future Daily - Wednesday, Mar 22, 2023, 08:02 pm
How the next wave of technology is upending the global economy and its power structures

By Mohar Chatterjee

With help from Derek Robertson

The ChatGPT homepage. | Getty Images

As generative AI and its eerily human chatbots explode into the public realm — including Google’s Bard, released yesterday — Silicon Valley looks ripe for another big era of disruption.

Think about the era of personal computers, or online businesses, or social platforms, when an accessible, unpredictable new idea shook up the establishment.

But unlike earlier disruptions, the reality of the generative AI race is already looking a little top-heavy.

With AI, the big innovation isn’t the kind of cheap, accessible technology that helps garage startups grow into world-changing new companies. The models that underpin the AI era can be extremely, extremely expensive to build.

Now some thinkers and policymakers are starting to worry that this could be the first “disruptive” new tech in a long time built and controlled largely by giants — and which could entrench, rather than shake up, the status quo.

This message even got to Congress earlier this month, when MIT artificial intelligence expert Aleksander Madry told a House subcommittee that in the emerging AI ecosystem, a handful of large AI systems were becoming the difficult-to-replicate foundations on top of which other systems are being built. He warned that “very few players will be able to compete, given the highly specialized skills and enormous capital investments the building of such systems requires.” The same week, Rep. Jay Obernolte (R-CA), the only member of Congress with a master’s degree in artificial intelligence, told POLITICO’s Deep Dive that he worries “about the ways that AI can be used to create economic situations that look very much and act very much like monopolies.”

The concern right now is largely about the “upstream” part of AI, where the large generative AI models and platforms are being built. Madry and others are more optimistic about the almost Cambrian explosion of startups and new use cases downstream in the supply chain, as Madry put it to Congress.

But that whole ecosystem is dependent on a few big players at the top.

One big reason the AI world is shaping up this way is data: it is bruisingly expensive to train a new AI system from scratch, and only a few companies — primarily the world’s tech giants — have access to enough data to do it well. High-quality data is “the key strategic advantage that they hold, compared to the rest of the world,” Madry said in an interview with POLITICO before his Capitol Hill appearance. And when it comes to AI, “better data always wins.”

Madry estimates that it costs hundreds of millions — maybe billions — of dollars to fund the R&D and training for a fully new large language model like GPT-4, or an image generation model like DALL-E 2.

Right now, only a handful of companies — including Google, Meta, Amazon and Microsoft (through its $10 billion investment in OpenAI) — are responsible for the world’s leading large language models, entrenching their upstream advantage in the AI era. In fact, the largest language model not developed by a corporation that reportedly performs better than GPT-3 was built at a university in Beijing — effectively, a national research project by China.

So, what’s wrong with only having a few players at the top?

For one thing, it creates a much less robust foundation for a major growth area in the tech economy. “Imagine for instance, if one of these large upstream models goes suddenly offline. What happens downstream?” Madry said in the House subcommittee hearing.

For another, big, privately developed models still have problems with biased outputs. And those privately held, black-box models are hard to avoid even in academia-led efforts to democratize access to generative AI. Stanford University came out last week with a large language model called Alpaca that performs similarly to GPT-3.5 and cost only a fraction as much to train — but Stanford’s effort is built on top of a pre-trained LLaMA 7B model developed by Meta. The Stanford researchers also noted that they could “likely improve our model performance significantly if we had a better dataset.”
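The basic recipe behind Alpaca-style projects is worth sketching, because it shows why the cheap part is only cheap if the expensive part already exists upstream. What follows is a minimal, hypothetical Python sketch using the Hugging Face transformers and peft libraries; the base-model identifier is a placeholder (LLaMA weights are gated under Meta's license), and Stanford's actual pipeline differs in its data and training details.

# Minimal sketch of low-rank adapter (LoRA) fine-tuning on top of someone
# else's pretrained model; this illustrates the general pattern, not
# Stanford's actual code. "base-model-id" is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "base-model-id"  # placeholder for a ~7B-parameter pretrained causal LM
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Freeze the expensive pretrained weights and train only small low-rank
# adapter matrices injected into the attention projections.
lora_config = LoraConfig(
    r=8,                                   # adapter rank: tiny relative to the base model
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# From here, an ordinary supervised fine-tuning loop over an instruction
# dataset is enough; the marginal cost is small precisely because the base
# model someone else paid to train is doing most of the work.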

So with “competition” a buzzword in Washington, and leaders newly interested in breaking up monopolies and keeping a lively ecosystem, what can policymakers do about AI? Is there a way to prevent the hottest new technology from simply cementing the power of the tech giants?

One potential government solution would be to create a public resource that allows researchers to understand this technology’s emerging capabilities and limitations. In AI, that would look like building a publicly funded large language model with accompanying datasets and computational resources for researchers to play around with.

There’s even a vehicle for talking about this: The National AI Initiative Act of 2020 established a government task force — the National AI Research Resource (NAIRR) Task Force — to figure out how to give AI researchers the resources and data they need.

But in deciding how to move forward, the NAIRR Task Force went a different route, choosing to build a “broad, multifaceted, rather diffused platform” over a “public version of GPT,” said Oren Etzioni, one of the members of the task force.

The NAIRR Task Force’s final roadmap, published in late January, recommended that the bulk of NAIRR’s estimated $2.6 billion budget be appropriated to multiple federal agencies to fund broadly accessible AI resources. Exactly how NAIRR will provide the high-quality training and test data crucial for AI development (especially in building large language models) has been left to a future, independent “Operating Entity” to figure out.

Etzioni disagreed with the task force’s decision, calling NAIRR’s choice to move away from building a public version of a foundation generative model “a huge mistake.” And while he respects the decision to tackle a broad range of AI R&D problems, the issue with the NAIRR roadmap, he said, was “a lack of focus.”

But Etzioni says he doesn’t think it’s “game over” for a more democratized AI competition landscape. Enter the open-source developers — people who build software with publicly accessible source code as a matter of principle. “One should never underestimate the ability of the open source community,” he said, pointing to the large language model BLOOM, an open-access ChatGPT rival developed by VC-backed AI startup Hugging Face.

Hugging Face recently entered a partnership with Amazon Web Services to make their AI tools available to AWS cloud customers.

 

interoperable eu

Margrethe Vestager. | AP

The European Union’s upcoming metaverse initiative will include what the virtual world’s boosters say is its core tenet.

As POLITICO’s Samuel Stolton reported for Pros yesterday, EU competition chief Margrethe Vestager declared that “one should be able to move freely between virtual worlds,” suggesting the principle of interoperability will guide the union’s regulatory efforts.

“One of the points that we will be looking out for is that virtual worlds should not become walled gardens,” she told the European Parliament’s legal affairs committee. “The risk is, of course, that consumers get locked in — in one virtual world — and that it becomes very difficult for others to provide services.”

Vestager’s remarks come as the European Commission continues its citizens’ panels that are seeking input from EU residents, and as another Parliamentary committee prepares its own report. The Commission’s initiative is scheduled for May 31st. — Derek Robertson

the philosophy of ai

How do large language models imitate the human mind?

Niskanen Center fellow Samuel Hammond made the case in a blog post Monday that they not only succeed at doing so, but that their success vindicates the theories of the 20th century philosopher Ludwig Wittgenstein — who argued that, as Hammond puts it, “Words and propositions have meaning insofar as they do something.”

“For Wittgenstein, this meant making a valid move in a language game; a game which arises within the holistic context of other language users and their social practices,” Hammond writes. “In turn, rather than treat words as the atomic objects of meaning, meaning more typically resides in full sentences.” Within an LLM, words achieve meaning from their location within a data set as opposed to some concrete, truth-based correspondence with an object.
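To make that distributional point concrete, here is a small, self-contained illustration; the toy corpus and the simple co-occurrence counting are invented for the example, and real LLMs learn dense vectors from billions of tokens rather than raw counts. The idea is the same: a word ends up “close” to another purely because of the company it keeps.

# Toy illustration of distributional meaning: each word is represented only
# by the words it co-occurs with, not by any link to an object in the world.
import numpy as np

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "stocks fell on the news",
    "markets fell after the news",
]

tokens = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(tokens)}
counts = np.zeros((len(tokens), len(tokens)))

# Count how often each word appears right next to another word.
for line in corpus:
    words = line.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 1), min(len(words), i + 2)):
            if i != j:
                counts[index[w], index[words[j]]] += 1

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# "cat" lands near "dog" and far from "stocks" purely from shared contexts.
print(cosine(counts[index["cat"]], counts[index["dog"]]))     # high (~0.9)
print(cosine(counts[index["cat"]], counts[index["stocks"]]))  # ~0.0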

Why does this matter? Well, it’s a volley in an ongoing dispute that involves the world’s foremost linguists, for one. Regardless of who wins that dispute, or the extent to which AI models truly resemble the human mind, they’re already performing (what were once) human duties — which means that to learn about them is to learn more about ourselves. — Derek Robertson

tweet of the day

Google Bard sides with the Justice Department in the Google antitrust case: “I hope that the court will find in favor of the Justice Department and order Google to take steps to break up its monopoly.”

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); and Benton Ives (bives@politico.com). Follow us @DigitalFuture on Twitter.


 
