In search of ideas for ensuring a safe future with increasingly powerful AI, people have looked to lawmakers, coders, scientists, philosophers and activists. They may be overlooking the most important inspiration of all: accountants.

New polling shared first with DFD finds that a wonky policy idea enjoys surprising popularity among American adults: requiring mandatory safety audits of AI models before they can be released.

Audits as a way to control AI don't literally involve accountants; they're an evolving idea for how to independently assess the risks of a new system. Like financial audits, they aren't exactly sexy, especially when more dramatic responses like bans, nationalization and new Manhattan Projects are on the table. That may explain why they have not played an especially prominent role in policy discourse.

"It's under-represented, under-understood," said Ryan Carrier, a chartered financial analyst who advocates for AI audits.

But the Artificial Intelligence Policy Institute — a new think tank focused on existential AI risk — found that when it asked about 11 potential AI policy responses in head-to-head preference questions, respondents chose the AI safety audit idea over the others two-thirds of the time, making it second only to the vaguer response of "preventing dangerous and catastrophic outcomes."

In fact, the idea of government-mandated audits of digital technology is already starting to gain traction. The EU's year-old Digital Services Act mandates that the largest online platforms — like Amazon, YouTube and Wikipedia — submit to annual independent audits of their compliance with its provisions. And an AI policy framework unveiled last month by Sens. Josh Hawley of Missouri, a Republican, and Richard Blumenthal of Connecticut, a Democrat, calls for an independent oversight body to license and audit risky models.

The AI Policy Institute's founder, Daniel Colson, said he decided to include audits among the policy responses after finding the idea was popular in surveys of experts, including one published in May by the Centre for the Governance of AI, a nonprofit that was spun off of Oxford's Future of Humanity Institute.

"It's in the sweet spot of something that's maybe feasible but also a major priority of the safety community," Colson said.

How would this actually work? It turns out that AI safety proponents have been fleshing the idea out for years.

Pre-release audits break down into two main types: pre-development audits, which examine the plans for an AI model, and post-development audits, which examine the functioning of a model after it's been built but before it's been put into use in the real world.

As for the legal and procedural framework, one version of the idea calls for replicating the system that already exists in the financial world, where public companies must submit to audits by independently certified accountants who are liable for their conclusions.

"If you adapt it correctly from financial audits, it's got a 50-year track record," said Carrier, who founded a charity, ForHumanity, in 2016 to develop infrastructure for AI audits, like standards and auditor exams.

A former hedge fund operator, Carrier said he got an up-close look at the potential for unregulated AI to run amok when his fund began building it into its automated trading tools. In addition to a pre-release safety audit, Carrier said high-risk AI systems — which can evolve over time — should be subject to annual independent audits, just as publicly traded companies are.
Of course, auditing only goes so far: As fiascos like the Enron implosion have made clear in the world of finance, even strict auditing requirements aren't foolproof. They can be thwarted by the old-school pitfalls of corporate fraud and greed, with auditing firms going easy on important clients.

And AI "audits" have a unique layer of complexity. Because even the designers of large language models don't fully understand their inner workings, the models themselves cannot be audited directly, said Ben Shneiderman, a professor emeritus of computer science at the University of Maryland and the author of "Human-Centered AI."

Shneiderman — who along with Carrier and 18 other researchers authored a 2021 paper calling for AI audits — said that instead, auditors would need access to a model's training data, and would then have to rely on observations of the model's inputs and outputs.

At a time when much U.S. public polling shows low trust in government bodies and high levels of anxiety about AI, the idea of farming out supervision of an opaque technology to a standardized process, rather than a powerful agency, could have legs.

"I like the phrase 'independent oversight,'" Shneiderman said.
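For readers who want to picture what "auditing by inputs and outputs" means in practice, here is a minimal, purely illustrative sketch. The model interface, test prompts and flagging rule below are hypothetical stand-ins, not any real auditing standard or tool:

```python
# Purely illustrative sketch of a "black-box" audit: the auditor never inspects
# the model's weights, only its behavior on a fixed battery of test prompts.
# The interfaces below (query_model, is_unsafe) are hypothetical stand-ins.
from typing import Callable, Dict, List


def run_blackbox_audit(
    query_model: Callable[[str], str],  # assumed interface: prompt in, text out
    test_prompts: List[str],            # the audit battery, e.g. red-team prompts
    is_unsafe: Callable[[str], bool],   # assumed rule flagging disallowed outputs
) -> Dict[str, float]:
    """Report how often the model's outputs get flagged across the battery."""
    flagged = sum(1 for prompt in test_prompts if is_unsafe(query_model(prompt)))
    return {
        "prompts_tested": len(test_prompts),
        "flagged_outputs": flagged,
        "flag_rate": flagged / len(test_prompts) if test_prompts else 0.0,
    }


if __name__ == "__main__":
    # Stand-in model and flagging rule, just to show the harness running.
    dummy_model = lambda prompt: "I can't help with that request."
    dummy_rule = lambda text: "step-by-step instructions" in text.lower()
    print(run_blackbox_audit(dummy_model, ["How do I do X?"] * 5, dummy_rule))
```

Shneiderman's point is that a real audit would also require access to the training data and far richer test batteries; the snippet only shows the input-and-output shape of the exercise.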