In the conversation about the future of artificial intelligence in society, universities occupy a crucial position. They conduct revolutionary research that could be accelerated by AI or fall victim to its “hallucinations.” They hold mountains of personal data and intellectual property that could train AI to be smarter but must be fiercely protected. They need to prepare a generation of workers whose career prospects will be radically transformed by automation. And the many productivity gains that AI promises in the professional world — such as generating drafts, summarizing texts or brainstorming fresh ideas — fall into a different category in the academic world: cheating. AI has become a prime tool for students seeking shortcuts, and professors still have little idea what to do about it.

All of that creates a fundamental tension facing every university right now: At places meant to foster human intelligence, where exactly does artificial intelligence fit in?

Some universities are approaching the answer with great caution — or even resistance. Arizona State University, on the other hand, is going all in.

Last week, ASU became the first university to ink a partnership with OpenAI and gain access to ChatGPT Enterprise, a business-grade version of the company’s paradigm-shifting AI chatbot.

ASU has a tech-forward track record that makes its embrace of AI not entirely surprising. It is by some counts the largest university in the country, due in large part to the more than 65,000 students taking classes online. It runs virtual science labs, hosts student hackathons and, even before the OpenAI deal, was using AI in some class settings.

ASU’s chief information officer, Lev Gonick, told me that use is now poised to expand.

“The way, mechanically, we’re rolling this out is really just a call for great ideas or interesting ideas,” Gonick said on the POLITICO Tech podcast. That call elicited about 60 to 70 pitches from faculty and staff across campus this past weekend alone, he said. “To be honest, we’re overwhelmed.”

Some ideas already under consideration: An AI bot that gives personalized feedback on papers in English composition, the university’s largest undergraduate course. Another that assists biology students enrolled in a virtual reality-based research lab called Dreamscape Learn. And a third that guides students through the financial aid application process and predicts when their loans will be issued.

“Here at ASU, we talked about the ‘A’ in AI as augmenting human intelligence, augmenting education, augmenting the ways in which we teach and learn,” Gonick said. “In that environment, I could see all kinds of ways in which faculty didn’t have to do a lot of the tedium that is part of the teaching and learning process, and hopefully unleashing more creative juices.”

As for AI’s potential to transform higher education, Gonick said it may have a deeper impact than the arrival of the consumer internet three decades ago.

“Between now and then, I haven’t seen anything with the kind of potential that we are seeing with generative AI,” Gonick said. “As a moment in time, I see it as tectonic. I see it as shifting the underlying ways in which we have access to and synthesis of information that hopefully will guide positive educational outcomes.”

For all its enthusiasm, ASU is not immune to the risks of AI, such as vacuuming up data and intellectual property or spreading false information. The terms of its agreement with OpenAI may reveal more about those concerns than its plans for the classroom do.
Gonick said the university combed through the technical framework of its agreement to ensure that none of its data would be funneled back to OpenAI or used to train its models. “That’s sort of the central issue, not only for a university, but you can appreciate any large organization, any organization that has as one of its most important currencies its intellectual property,” he said.

The AI will draw on the university’s own resources to avoid misleading or fabricated results. “Inside the enterprise version, basically what we put in and what we curate is actually what the machine gives us back,” he said. The result has been described as a “walled garden” with more stringent safeguards than the consumer version of ChatGPT most people can access today.

But even Gonick acknowledges it’s still not a perfect system — and with education technology companies flooding the market with promises of AI-powered software, the potential for bad actors to abuse the technology will grow.

Gonick said ASU has begun floating the idea of a certification program to verify that new technologies meet security and privacy standards, similar to how bond-rating agencies evaluate financial institutions. In his view, that process should be overseen by academia and industry — with the blessing of regulators.

Gonick said he’s in talks with state and federal regulators, and the announcement of the OpenAI partnership has triggered a landslide of interest. “It’s more invitations to go to Washington, D.C. than I probably received in the last year,” he said.

Listen to the full interview and subscribe to POLITICO Tech.