Let's nationalize AI. Seriously.

From: POLITICO's Digital Future Daily - Monday, Aug 21, 2023, 08:01 pm
How the next wave of technology is upending the global economy and its power structures

By Charles Jennings

With help from Derek Robertson

[Illustration: the Great Seal of the United States rendered in neon blue cybernetic lines. POLITICO illustration/Photos by iStock.]

The following is an excerpt of an essay published this weekend in POLITICO Magazine by Charles Jennings, the former CEO of an AI company partnered with the California Institute of Technology and NASA’s Jet Propulsion Lab. Listen to his conversation with POLITICO Tech’s Steven Overly below.

Listen to today's Tech podcast

Nine years ago, in a commercial AI lab affiliated with Caltech, I witnessed something extraordinary.

My colleague Andrej Szenasy was wrapping up a long day’s work training NeuralEye, an AI initially developed for the Mars Rover program, and I was a few cubicles away, plowing through NeuralEye’s test data. “Hey, check this out!” he shouted.

Our lab’s mission was to train NeuralEye to see as humans do, with the ability to recognize things, not just record them as a camera does. NeuralEye was built originally to discern different soil types on Mars, but we were teaching it to identify Earth’s inhabitants: animals, plants and individual humans. We believed AI could greatly improve face recognition, so that it could be used in cybersecurity, replacing passwords.

The first step in teaching NeuralEye to identify people was to get it to match various photos of a single person’s face. Typically, one photo would reside in NeuralEye’s training dataset of 14,000 faces; another — a different photo of the same person — would serve as the “prompt.” When NeuralEye successfully matched these two photos out of the thousands in its dataset, it got the digital equivalent of a doggie treat. In AI, this method is known as reinforcement learning, and with NeuralEye, it was working.
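The matching step described above can be pictured as a nearest-neighbor search: every stored face is reduced to a vector, and a prompt photo is scored against all of them. The sketch below is an illustrative reconstruction, not NeuralEye's actual code — the `rank_matches` helper, the names, and the toy embedding values are all hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_matches(prompt_vec, dataset):
    """Return dataset names sorted from closest to farthest match."""
    scored = [(cosine(prompt_vec, vec), name) for name, vec in dataset.items()]
    return [name for score, name in sorted(scored, reverse=True)]

# Toy stand-in for the 14,000-face training set: each entry maps a
# person to a learned embedding (in practice, hundreds of dimensions).
dataset = {
    "szenasy": [0.9, 0.1, 0.3],
    "stranger_a": [0.1, 0.8, 0.1],
    "stranger_b": [0.2, 0.2, 0.9],
}

# A prompt photo that is NOT in the dataset still gets ranked against
# every stored face; the head of the list is the closest match.
prompt = [0.8, 0.2, 0.35]
print(rank_matches(prompt, dataset))
```

The "doggie treat" in training amounts to rewarding the model whenever the correct identity lands at the top of this ranking — which is also why an out-of-dataset face like Zachie's can still surface a relative as the nearest stored match.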

That night in the lab, for fun, Szenasy had prompted NeuralEye with a photo of his son, Zachie. Szenasy’s face was in NeuralEye’s dataset; Zachie’s wasn’t. Zachie, who has Down Syndrome, was a sweet 8-year-old. Round face, thick glasses, mop of black hair. Dad was tall and thin, no glasses, blonde with a receding hairline. If there was a physical resemblance between them, I couldn’t see it.

Szenasy sat me in front of his computer and again prompted NeuralEye with a photo of Zachie’s face. NeuralEye spun through its cache of stored faces looking for Zachie — and up popped a photo of Szenasy. Without any specific instruction, NeuralEye had somehow picked up a faint family resemblance. Out of those 14,000 faces, it selected Szenasy’s face as the third closest match with Zachie’s.

The next morning I phoned the AI engineer who’d written NeuralEye’s algorithm while at the Jet Propulsion Lab, home of the Mars Rover program. I asked him how NeuralEye could have seen a connection between Zachie and his father. He waxed philosophical for a few minutes, and then, when pressed, admitted he had no clue.

That’s the thing about AI: Not even the engineers who build this stuff know exactly how it works.

This Zachie episode took place in 2014, a time in AI that now seems prehistoric. Training datasets then had records in the thousands, not hundreds of millions, and large language models like GPT were just a gleam in Sam Altman’s eye. Today, AIs are writing novels, passing the bar exam, piloting warfighter drones. According to a recent University of Texas study widely reported on cable news, an AI in Austin is effectively reading minds: After an in-depth fMRI scan and 16 hours of one-on-one training with someone, it can read neural brain patterns and suggest what the subject is thinking with surprising accuracy. But in those halcyon AI days nearly a decade ago, we in our small lab were amazed that NeuralEye could do something as basic as spot a link between Szenasy and his son.

While the best AI scientists obviously know a great deal about AI, certain aspects of today’s thinking machines are beyond anyone’s understanding. Scientists cleverly invented the term “black box” to describe the core of an AI’s brain, to avoid having to explain what’s going on inside it. There’s an element of uncertainty — even unknowability — in AI’s most powerful applications. This uncertainty grows as AIs get faster, smarter and more interconnected.

The AI threat is not Hollywood-style killer robots; it’s AIs so fast, smart and efficient that their behavior becomes dangerously unpredictable. As I used to tell potential tech investors, “The one thing we know for certain about AIs is that they will surprise us.”

When an AI pulls a rabbit out of its hat unexpectedly, as NeuralEye did on a small scale with Zachie, it raises the specter of runaway AI — the notion that AI will move beyond human control. Runaway AIs could cause sudden changes in power generation, food and water supply, world financial markets, public health and geopolitics. There is no end to the damage AIs could do if they were to leap ahead of us and start making their own arbitrary decisions — perhaps with nudges from bad actors trying to use AI against us.

Yet AI risk is only half the story. My years of work in AI have convinced me a huge AI dividend awaits if we can somehow muster the political will to align AI with humanity’s best interests.

With so much at stake, it’s time we in the United States got serious about AI policy. We need garden-variety federal regulation, sure, but also new models of AI leadership and governance. And we need to consider an idea that would have been unthinkable a year ago.

We need to nationalize key parts of AI.

Read the rest of Charles’ essay here.

 

A NEW PODCAST FROM POLITICO: Our new POLITICO Tech podcast is your daily download on the disruption that technology is bringing to politics and policy around the world. From AI and the metaverse to disinformation and cybersecurity, POLITICO Tech explores how today’s technology is shaping our world — and driving the policy decisions, innovations and industries that will matter tomorrow. SUBSCRIBE AND START LISTENING TODAY.

 
 
china's metaverse push

China is getting aggressive with its metaverse development, leaving some in the West worried the 3D digital world might follow the same draconian rules as China’s 2D one.

That’s according to a report from POLITICO’s Gian Volpicelli, who writes about a series of proposals from Chinese telecom China Mobile that would institute a social credit-like system in the virtual world. They recommend keeping track of “natural” and “social” characteristics of the metaverse’s users, including occupations and their visual “identifiable signs,” all of which they say should be stored permanently and shared with law enforcement.

For proponents of a more open, less surveillance-friendly internet, that’s alarming. More alarming still: Gian spoke with an expert contributing to the International Telecommunication Union — the United Nations’ telecoms agency that sets global rules for how technology works — who says Chinese organizations are making a bid for mastery of metaverse rulemaking by filing more proposals than the U.S. or Europe.

“They are trying to play the long game,” they told Gian. “When the metaverse comes around, they’ll say, ‘these are the standards.’” — Derek Robertson

the human in the machine

What goes on behind the scenes to make ChatGPT and its ilk seem so… well, human?

John P. Nelson, a postdoctoral fellow in ethics and societal implications of artificial intelligence at the Georgia Institute of Technology, wrote in an op-ed for The Conversation about the role humans play in making large language models work. It’s not just scraping human-generated text for data: It’s in the massive amount of feedback humans provide as those models are trained, a process that can be fraught and even psychologically torturous when reviewers are confronted with hateful or violent text.

The other consequence he notes of the massive amount of human labor involved in making chatbots work: there are certain things they just can’t do without us.

“They can’t evaluate whether news reports are accurate or not. They can’t assess arguments or weigh trade-offs. They can’t even read an encyclopedia page and only make statements consistent with it, or accurately summarize the plot of a movie,” Nelson writes. “If the common wisdom on some topic changes – for example, whether salt is bad for your heart or whether early breast cancer screenings are useful – they will need to be extensively retrained to incorporate the new consensus.” — Derek Robertson

Tweet of the Day

Would you dare this? Robots will be everywhere and this might look simple, but imagine when this becomes high tech and 100x better Some industrialists understands that robots are the next big thing. Links in next tweet


Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); and Steve Heuser (sheuser@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

DON’T MISS POLITICO’S TECH & AI SUMMIT: America’s ability to lead and champion emerging innovations in technology like generative AI will shape our industries, manufacturing base and future economy. Do we have the right policies in place to secure that future? How will the U.S. retain its status as the global tech leader? Join POLITICO on Sept. 27 for our Tech & AI Summit to hear what the public and private sectors need to do to sharpen our competitive edge amidst rising global competitors and rapidly evolving disruptive technologies. REGISTER HERE.

 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 

 


This email was sent by: POLITICO, LLC, 1000 Wilson Blvd., Arlington, VA 22209, USA


More emails from POLITICO's Digital Future Daily

Aug 18, 2023 08:01 pm - Friday

5 questions for Katherine Boyle

Aug 17, 2023 08:02 pm - Thursday

The future on your dinner plate

Aug 16, 2023 08:02 pm - Wednesday

A more immediate 'AI risk'

Aug 15, 2023 08:13 pm - Tuesday

One think tank vs. 'god-like' AI

Aug 14, 2023 09:20 pm - Monday

Hackers in Vegas take on AI

Aug 11, 2023 08:55 pm - Friday

5 questions for Austin Carson

Aug 10, 2023 08:58 pm - Thursday

The NYPD's new robotic pet