An open-and-closed case in Austin

From: POLITICO's Digital Future Daily - Wednesday Mar 13, 2024 08:02 pm
How the next wave of technology is upending the global economy and its power structures

By Derek Robertson

Austin, Texas. | Brandon Bell/Getty Images

AUSTIN, Texas — The policy-related programming at this year’s South by Southwest is largely over, but the technologists building the future are still hashing out their plans on the conference stage.

And given the laissez-faire status quo for U.S. tech regulation (notwithstanding today’s blow to TikTok in Congress), what they decide in Austin or Silicon Valley will likely matter more than anything being discussed in Washington right now. That’s why I was fascinated by one particular debate that played out this week at SXSW: What, exactly, qualifies as “open source” AI and what are the benefits and drawbacks of building it?

This is not an idle, abstract question in the tech world right now. Open and closed AI systems are often described as a stark dichotomy: proponents of open models say the closed-source camp wants to choke off public access to AI systems to make a profit, while the closed-source camp says open models will endanger the world. Elon Musk went to court to accuse OpenAI of abandoning its original mission to build open source AI, and he announced this week that his own company’s AI chatbot will be open source. In Europe, Mistral AI, the French company seen as the continent’s great hope for a globally competitive AI firm, has released open source models lauded as rivaling American competitors like OpenAI’s GPT-4 and Anthropic’s Claude 3.

The AI community, regulators, and watchdogs also criticized Mistral harshly a couple of weeks ago for announcing a partnership with the decidedly closed source Microsoft, just weeks after the French company pointed to its open source development philosophy to argue for lenient treatment under the European Union’s AI Act. It’s an effective case study for the open source conversation, where questions about the risk or reward of making AI source code public often mask the raw power and profit dynamics at the heart of these companies’ decisions.

“Open source is an interesting, complicated question, because with most of these companies, we don't know anything” about how their models are trained, said researcher and all-around AI gadfly Gary Marcus when I spoke to him this week on a sunny bench ahead of his SXSW talk about “AI and the Future of Truth.”

“I can understand an argument that says, ‘Hey, we're a commercial company, our secret sauce is the way we build our model,’” Marcus said. “But because society is affected deeply by this data, it’s being used to determine people's livelihoods, whether or not they get a job, and these companies are not really handling the negative externalities that they're causing, the scientific community needs to understand what's going on and we need some openness there.”

Whether any given AI system is open source is not exactly a binary choice; it’s more a matter of determining where it falls on a spectrum. When Meta touted its LLaMA-2 large language model as open source last year, it also noted in its documentation that it was attaching strings to who could use it. That led the nonprofit Open Source Initiative to argue that restricting who can use the model commercially means LLaMA-2 doesn’t meet its definition of open source.

Whatever its limitations, Meta is unique among the big Silicon Valley tech companies in taking the open source approach to AI. But another major player taking that route is the New York-based IBM, which sponsored SXSW’s AI programming track this year and whose director of research Dario Gil moderated a panel on “Why the Future Should Be Open” that included Meta’s generative AI product director Joseph Spisak and other pro-open source voices.

Rebecca Finlay, CEO of the nonprofit Partnership on AI, lauded what she described as a recent “step back from the binary choice between closed and open models.”

“There’s been a lot of work done to understand the spectrum of release approaches… frankly, from my perspective, I think [choosing an approach is] a business decision. Companies are making decisions about the business model that is most appropriate for them, and for the products that they want to release and the work that they're doing with their customers.”

Those commercial concerns aside, companies built on closed systems like OpenAI and AI critics share a common worry about open source: Going too far toward that end of the spectrum could empower bad actors to use these powerful tools for nefarious purposes, like fraud and spreading misinformation.

But as with so much of the rest of the hope, anxiety, and hype surrounding AI right now, the ultimate impact of releasing sophisticated AI source code into the world is unknowable until the dust settles.

“Anybody who tells you they know for sure that we should open source or we shouldn't, which is 95 percent of the people on one side of that issue or the other, is actually kind of full of it,” Marcus told me. “Because the truth is, we don't really know what these models are going to be used for.”

 


 
 
zero hour for the ai act

It’s finally, truly official: European Union lawmakers overwhelmingly passed the bloc’s AI Act into law today, putting into place the world’s first sweeping regulations for artificial intelligence.

POLITICO’s Gian Volpicelli reported on the vote, which passed 523-46 with 49 abstentions. The rules establish an elaborate framework of risk categories into which an AI system or its use falls, with tighter restrictions for riskier uses, like the AI system deployed in the Netherlands’ infamous welfare fraud scandal. The law also establishes a set of testing and reporting requirements for the most powerful “general-purpose” AI models, which will be overseen by the EU’s new AI Office.

Gian writes that the Council of the EU will formally adopt the text in April. The law’s bans on social scoring and exploitative practices will take force in late 2024; its rules on general-purpose AI will follow in early 2025; and the rest of its strictures, including limits on facial recognition technology and other miscellaneous uses, arrive in 2026.

politico tech at sxsw

In case you just can’t get enough of our SXSW coverage: I appeared on the POLITICO Tech podcast with Steven Overly yesterday, breaking down the biggest trends and conversations happening at this year’s conference.

A few big takeaways: At least in Austin, people are starting to get a little more pragmatic about AI, which doesn’t at all diminish their vision for its potential. And we chatted about the involvement of the U.S. Army as a major sponsor, which has caused a major controversy with the festival’s cultural arm.

one last thing...

The House of Representatives voted today to force TikTok’s Chinese parent company to sell the app if it is to continue operating in the United States.

POLITICO’s Rebecca Kern has the full story on the move, which is meant to mitigate the potential risks of a Chinese-owned company controlling a major American communications platform. We haven’t covered this story that much because it’s decidedly one about our digital present, but it’s a notable sign of Washington’s changing relationship to big tech and the role of digital platforms in geopolitics. (Bonus: If you want to read my thoughts about why, national security aside, TikTok is a pretty lousy news source, click here.)

Tweet of the Day

hiring this guy at the right stage gives your company a 10x better chance of going public or having a successful exit. but if you hire him at the wrong time your company will spiral into a tangled web of bureaucracy


Stay in touch with the whole team: Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


 
 
 
