The rise of new, powerful AI models over the past year has raised huge questions about who, or what, is supposed to oversee their growth and impact on the world. Congress has struggled to get a handle on the issue; the White House is in voluntary-guideline-and-press-conference mode. Europe is scrambling to update its AI policies to account for the speed of tech development.

Two weeks ago, the industry itself stepped in with a move of its own: The top AI builders said they were launching the Frontier Model Forum, a group that promises to establish best practices for the industry and research AI safety. (“Frontier model” is the term for the biggest, most cutting-edge generative AI models, like OpenAI’s GPT-4 and DALL-E and Google’s PaLM 2.)

The group has only four members so far: Microsoft, OpenAI, Google and Anthropic — companies building the biggest and most powerful AI platforms. But it could be very important to AI policy: One of its core objectives is to be a conduit between the exploding AI industry and policymakers.

So… what’s the forum really going to do? That’s the question on a lot of minds, so we started asking.

The group doesn’t have an address or an email yet, and spokespeople for the individual members (we reached out to all four) didn’t have much to share. Its opening blog post promises to establish an advisory board over the next couple of months to determine its priorities. A forum spokesperson from Microsoft said more news about the board and potential new group members would be coming down the pike soon, around early September.

The forum has also signaled international ambitions. In their launch announcement, the member companies said they want to influence multilateral government AI efforts like the G7 Hiroshima process; the OECD’s work on AI risks, standards and social impact; and the AI work underway at the U.S.-EU Trade and Technology Council.

Of course, a small group of companies banding together to push their own interests is a move Washington has seen before. In one sense, the Frontier Model Forum is a normal industry group, this time for the very rarefied club of companies deploying super-sophisticated AI platforms.

“This is a familiar kind of institution for new kinds of technologies,” said Nick Garcia, policy counsel at Public Knowledge, a public interest nonprofit.

With AI, though, even skeptical watchdogs agree that something like this might actually be useful for society more broadly.

“There absolutely is a need for private voluntary action to deal with the safety and ethics issues around generative AI,” said Robert Weissman, president of Public Citizen, the progressive consumer rights advocacy nonprofit.

“Normally, our view would be: industry doesn't have any role in policy development,” Weissman said. “But in this case, I think that's not feasible or even desirable. There’s too much that’s confusing, or unknown, or fast moving.”

Garcia agreed that industry needs to be involved in standard-setting, and drew a comparison to the telecommunications industry: “Certainly, a lot of the internet is built on standards organizations that work collaboratively in order to figure things out. A lot of telecom relies on these standards organizations,” he said.

But some are still wary of what the forum could become.
“We have years of history with corporate self-regulation, where it tends to mean very little,” said Wendell Wallach, a fellow at the Carnegie Council for Ethics in International Affairs and co-director of its Artificial Intelligence and Equality Initiative.

Weissman voiced similar concerns: “If this is just another trade association — another formulation in a different guise of industry leveraging its influence to block meaningful controls and maintain maximum freedom for generative AI companies — that would be very harmful.”

For now, what the watchdogs are looking for from the forum is specificity: They want AI companies to propose clear, shared rules for AI model audits, with information on what’s being tested and by whom. So far, they haven’t seen those kinds of specifics.

“If you look at the White House agreement with the companies a couple of weeks ago, it's relatively high level of generality. And in almost all of the company statements of principles, you see a high level of generality,” Weissman said.

OpenAI and Anthropic did not respond by deadline to a request for comment.