Welcome to AI university

From: POLITICO's Digital Future Daily - Wednesday, Jan 24, 2024, 09:04 pm
Presented by the Information Technology Industry Council (ITI): How the next wave of technology is upending the global economy and its power structures
 

By Steven Overly

Presented by

Information Technology Industry Council (ITI)

With help from Derek Robertson


The ChatGPT app in Apple's App Store. | Matt Rourke/AP

In the conversation about the future of artificial intelligence in society, universities sit at a crucial spot.

They conduct revolutionary research that could be accelerated by AI, or fall victim to its “hallucinations.” They hold mountains of personal data and intellectual property that could train AI to be smarter but must be fiercely protected. They need to prepare a generation of workers who face career prospects radically transformed by automation.

And the many productivity gains that AI promises in the professional world — such as generating drafts, summarizing texts or brainstorming fresh ideas — fall into a different category in the academic world: cheating. AI has been a prime tool for students seeking shortcuts, and professors still have little idea what to do about it.

All of that creates a fundamental tension facing every university right now: At places meant to foster human intelligence, where exactly does artificial intelligence fit in?

Some universities are approaching the answer with great caution — or even resistance.

Arizona State University, on the other hand, is going all in. Last week, ASU became the first university to ink a partnership with OpenAI and gain access to ChatGPT Enterprise, a business-grade version of the company’s paradigm-shifting AI chatbot.

ASU has a tech-forward track record that makes its embrace of AI not entirely surprising. It’s by some counts the largest university in the country, due in large part to the more than 65,000 students taking classes online. It runs virtual science labs, hosts student hackathons and, even before the OpenAI deal, was using AI in some class settings.

ASU’s chief information officer, Lev Gonick, told me that use of AI on campus is now poised to expand.

“The way, mechanically, we're rolling this out is really just a call for great ideas or interesting ideas,” Gonick said on the POLITICO Tech podcast. That call elicited about 60 to 70 pitches from faculty and staff across campus this weekend alone, he said. “To be honest, we're overwhelmed.”

Some ideas already under consideration: An AI bot that gives personalized feedback on papers in English composition, the university’s largest undergraduate course. Another that assists biology students enrolled in a virtual reality-based research lab called Dreamscape Learn. And a third that guides students through the financial aid application process and predicts when their loan will be issued.

“Here at ASU, we talked about the ‘A’ in AI as augmenting human intelligence, augmenting education, augmenting the ways in which we teach and learn,” Gonick said. “In that environment, I could see all kinds of ways in which faculty didn't have to do a lot of the tedium that is part of the teaching and learning process, and hopefully unleashing more creative juices.”

And in terms of AI’s potential to transform higher education, Gonick says it may have a deeper impact than the arrival of the consumer internet three decades ago.

“Between now and then, I haven't seen anything with the kind of potential that we are seeing with generative AI,” Gonick said. “As a moment in time, I see it as tectonic. I see it as shifting the underlying ways in which we have access to and synthesis of information that hopefully will guide positive educational outcomes.”

For all its enthusiasm, ASU is not immune to the risks of AI, such as vacuuming up data and intellectual property or spreading false information. The terms of its agreement with OpenAI may give more insight into those concerns than ASU’s plans for the classroom.

Gonick said the university combed through the technical framework of its agreement to ensure none of its data would be funneled back to OpenAI or used to train its algorithms. “That’s sort of the central issue, not only for a university, but you can appreciate any large organization, any organization that has as one of its most important currencies its intellectual property,” he said.

The AI will be informed by the university’s own resources to avoid misleading or fabricated results. “Inside the enterprise version, basically what we put in and what we curate is actually what the machine gives us back,” he said.

The result has been described as a “walled garden” that has more stringent safeguards than the consumer version of ChatGPT most people can access today. But even Gonick acknowledges it's still not a perfect system — and with education technology companies flooding the market promising AI-powered software, the potential for bad actors to abuse the technology will grow.

Gonick said ASU has begun floating the idea of a certification program to validate new technologies meeting security and privacy standards, similar to how bond-rating agencies evaluate financial institutions. In his view, that process should be overseen by academia and industry — with the blessing of regulators. Gonick said he’s in talks with state and federal regulators, and he said the announcement of the OpenAI partnership has triggered a landslide of interest.

“It's more invitations to go to Washington, D.C. than I probably received in the last year,” he said.

Listen to the full interview and subscribe to POLITICO Tech.

 

A message from the Information Technology Industry Council (ITI):

Network with policy leaders, industry executives and expert voices at The Intersect 2024: A Tech + Policy Summit. Reserve your seat now.

 
office space

It’s a good day for AI bureaucracy, as the National Science Foundation launched a pilot program for more AI research infrastructure and the European Commission officially opened its Artificial Intelligence Office.

Here in the states, POLITICO’s Mohar Chatterjee reported for Pros on the launch of the National AI Research Resource, or NAIRR, which will give researchers access to advanced AI tools as mandated by the White House’s executive order on AI. The program was initially proposed in 2019 by the Stanford University Institute for Human-Centered Artificial Intelligence, and in January 2023 a federal task force put a $2.6 billion price tag on it. Heather Vaughan, director of communications for the Republicans on the House Science Committee, told Mohar they would discuss the pilot soon.

Meanwhile in Europe, the Commission’s new AI Office is charged with supervising the rollout of the European Union’s forthcoming AI Act, as POLITICO’s Gian Volpicelli reported for Pros today. It’ll both make sure companies comply with the act’s strictures and seek out opportunities to use AI in government across the bloc. Gian writes that its “main task will be ensuring that developers and companies observe the AI Act’s rules on advanced general-purpose AI models, and investigating any infringement.” Sounds good, but there’s one small detail left to be hammered out: the actual passage of the AI Act, which is still being finalized. A vote is expected by the end of February. — Derek Robertson

 


 
a matter of trust

Most people are pretty ambivalent about AI, according to a survey of public trust by a PR firm.

This year’s Edelman Trust Barometer finds that, among more than 32,000 respondents in 28 countries (in an online survey with a 0.7 percent margin of error), 35 percent “reject” AI as an innovation while 30 percent “accept” it. (Those numbers almost exactly match those for “gene-based medicine,” a technology that occupies a similarly spooky, Promethean role in the public imagination.)

Furthermore, 59 percent of global respondents said they did not trust government to regulate “emerging innovations” including AI responsibly, with U.S. respondents outstripping that average at 63 percent.

The poll also found a partisan split over innovation that resembles results from a poll by the AI Policy Institute shared exclusively with DFD yesterday. Edelman found that 53 percent of those on the right in the U.S. “rejected innovation” across areas including green energy, AI, and gene-based medicine, while only 12 percent of those on the left did. — Derek Robertson

 

YOUR GUIDE TO EMPIRE STATE POLITICS: From the newsroom that doesn’t sleep, POLITICO's New York Playbook is the ultimate guide for power players navigating the intricate landscape of Empire State politics. Stay ahead of the curve with the latest and most important stories from Albany, New York City and around the state, with in-depth, original reporting to stay ahead of policy trends and political developments. Subscribe now to keep up with the daily hustle and bustle of NY politics. 

 
 
Tweet of the Day

Lilith is a single-issue kitty: she wants treats on demand, not according to some obscure algorithm determined by her human.

THE FUTURE IN 5 LINKS

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

A message from the Information Technology Industry Council (ITI):

The Intersect is D.C.’s premier tech policy summit, and it’s happening on February 7. Be there to network with the industry experts and policymakers who are shaping AI’s future. Reserve your seat today.

 
 

JOIN 1/31 FOR A TALK ON THE RACE TO SOLVE ALZHEIMER’S: Breakthrough drugs and treatments are giving new hope for slowing neurodegenerative diseases like Alzheimer’s disease and ALS. But if that progress slows, the societal and economic cost to the U.S. could be high. Join POLITICO, alongside lawmakers, officials and experts, on Jan. 31 to discuss a path forward for better collaboration among health systems, industry and government. REGISTER HERE.

 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 

 

To change your alert settings, please log in at https://www.politico.com/_login?base=https%3A%2F%2Fwww.politico.com/settings

This email was sent by: POLITICO, LLC, 1000 Wilson Blvd., Arlington, VA, 22209, USA

Privacy Policy | Terms of Service
