AI took its star turn through Congress this week, with lawmakers doing their best to demonstrate their awareness of how the tech is already disrupting society.

The highlight was a hearing featuring OpenAI's celebrity CEO Sam Altman, who welcomed the idea of flagging AI-generated content by default, and even of standing up a new regulatory agency. Sen. Gary Peters (D-Mich.), chair of the Senate Committee on Homeland Security and Governmental Affairs, went even wonkier, holding a hearing on how the technology can or should be promulgated throughout the federal bureaucracy.

But even Congress' best, bipartisan foot forward might still be a step behind. After all, those are today's problems, or even yesterday's, as lawmakers largely apply a lexicon developed for the social media era's data privacy and safety issues to an entirely new technology. When Congress turned its eye to the World Wide Web in the early 1990s, there was no way of knowing it was laying the first rails of a track that would lead to our currently raging debate around TikTok, for example. What could they be missing now?

"People are borrowing mental models from data privacy debates from five years ago," said Samuel Hammond, a senior economist at the Foundation for American Innovation who recently wrote in POLITICO Magazine about an entirely different, existential policy issue AI might pose. "It's completely unwieldy — how do you even define the scope of the scene, when you're taking the hype around AI and conflating general purpose systems that have uncanny levels of understanding and reasoning with stuff that was around 10 years ago?"

Hammond wrote in his op-ed about the need to place guardrails around the development of a potential "artificial general intelligence" that would supersede even humanity's capabilities.
But you can turn the "science fiction" knob a little further to the left and find more concrete examples where the pace of development might outstrip our regulatory capacity: Hammond noted today that in this week's hearings even Altman, generally a supporter of the current open-source AI development ecosystem, called for federal licensing only in the case of hyper-sophisticated autonomous agents, or AI that could design a novel pathogen.

In today's Morning Tech newsletter, POLITICO's Mallory Culhane nodded to the general understanding in the tech industry that AI regulation will require specific, industry-level expertise and judgment — especially given how rapidly the technology is developing. "If the United States wants to have a regulatory environment for AI that is flexible, responsive, and adaptable to emerging risks, it should lean into the sector-specific approach it has taken to regulation," Hodan Omaar, a senior policy analyst at the nonpartisan Center for Data Innovation, told Mallory. "Federal regulators are the best placed to regulate issues in different domains because they have industry-specific knowledge."

Veteran regulator and former Federal Communications Commission Chairman Tom Wheeler praised the Digital Platform Commission Act, reintroduced today by Sens. Michael Bennet (D-Colo.) and Peter Welch (D-Vt.), which would stand up a new agency specifically to oversee digital platforms and address algorithmic harm. He said that while existing agencies have plenty of tools for tackling AI — the Equal Employment Opportunity Commission punishing AI discrimination in hiring, for example, or the CFPB policing AI-driven financial fraud — a new agency, built on new regulatory principles, could have the "agility" to meet the unforeseen threats AI might pose.
"What you need to have is a structure that is based on the English common law concept of 'duty of care' that says a company has a responsibility to identify and mitigate potential harms that come from its product or service," Wheeler said. "Technology is changing, the marketplace is changing, and we have to be agile as regulators."

Of course, navigating the "foreseeability" on which the "duty of care" concept rests is tricky when it comes to something like AI tools, where sometimes even the developers of a machine learning system aren't quite sure how to explain what's going on inside it.

Some more libertarian-minded thinkers believe that in that case it's better to leave well enough alone until a tangible risk emerges. "When I see legislation dropping, or proposals for a whole new regulatory agency for AI, I'm puzzled by what problem people are trying to solve," said Neil Chilson, senior research fellow at Stand Together. "I worry that if we get it wrong, we throw out a bunch of benefits to consumers and cede ground to China, where what they will do with this technology is not in our best interest."

That leaves Washington doing what it can with the issues that are in front of it. And as POLITICO's Mohar Chatterjee and Rebecca Kern reported yesterday, those might be far closer to home than any threat of AGI or runaway software — as a House subcommittee debated issues of rights and provenance around the images, essays, and even songs produced by generative AI.

Yes, the development and spread of powerful AI is a Promethean technological moment on par with the printing press or the internet. But for now — and, it's worth keeping in mind, as with both of those technologies — the average person is just here for the memes.