On the surface, Big Tech didn’t have a great day on the Hill yesterday. In a Senate hearing on children’s online safety, Mark Zuckerberg was forced to apologize to parents and sit there quietly while Sen. Lindsey Graham (R-S.C.) said he had “blood on [his] hands.” Sen. Sheldon Whitehouse (D-R.I.) looked at five CEOs across from him and declared: “Collectively, your platforms really suck at policing themselves.” Sen. Tom Cotton (R-Ark.) hounded TikTok CEO Shou Zi Chew (inaccurately) about his personal ties to China.

To many in the industry, however, the story isn’t what happened on online safety — it’s how leaders are already planning to avoid this kind of messy, damaging conflict around the next big technology.

Artificial intelligence is set to supercharge the same issues around safety, privacy and “misinformation” that have wracked the social media era. But unlike with social media — whose corporate founders largely ignored Washington until they were forced to show up and answer questions — AI moguls are making serious efforts to get ahead of the curve and work with lawmakers up front.

OpenAI CEO Sam Altman eagerly pleaded in his first hearing last year for regulatory collaboration with Washington. And executives have shown up on demand for periodic summits with President Joe Biden, or Senate Majority Leader Chuck Schumer’s ongoing “Artificial Intelligence Insight Forums.”

“These things do mark an attitude shift from industry,” Meghan Chilappa, policy counsel at the tech policy firm Access Partnership, told me this morning.

It’s becoming conventional wisdom in D.C. that politicians feel like they missed the boat on social media rules and don’t want to make the same mistake again. Companies worried about tough new laws, or embarrassing public grillings, are trying to avoid that mistake too.

“Instead of waiting to engage on policy matters when they’re fined or asked to testify, they’re now more proactive,” Chilappa said.
“And that might spur other companies to follow suit.”

It’s never clear how sincere the giants really are in all this; tech critics on both the left and right have accused industry leaders like Meta and Google of simply attempting to protect their market status by pushing for new rules — boxing out smaller competitors less equipped to deal with a formidable Washington regulatory apparatus.

But even smaller companies have a reason to hope for clear new rules, said Chris Martin, Access Partnership’s head of policy innovation. In a field moving as fast as AI, “they also want regulatory certainty about what they can and can’t do,” he said, especially “if they pursue a line of business that could be upended by a potential requirement or suite of regulations.”

The hearing was also a vivid illustration of the difference between established and upstart players as an industry comes to terms with the demands and rituals of Washington. Meta and TikTok came under the most fire — and also had the most practiced responses. Newer companies like Discord, whose CEO Jason Citron made his first appearance before Congress yesterday, can find themselves at a disadvantage when dealing with Washington.

“There’s still a sense of confusion or bewilderment [from some executives] at some of the comments that were being made,” said Daniel Castro, vice president of the Information Technology and Innovation Foundation, “where some of the CEOs realized they were basically there as a prop, not there to give answers.”

“There’s always a learning curve figuring out how to engage with Washington,” he said.

Another reason for the relationship reset on AI might be that tech companies hope to avoid what Castro sees as an unhelpful, one-size-fits-all regulatory approach that Congress has proposed for social media platforms: working around the shield of Section 230 to hold companies legally liable for harms caused through their products.
On social media, he said, “when you look at most of the proposals, they’re not proposals about specific design changes; for the most part, it’s just to make them liable so that they’ll figure it out.” Companies want a more tailored approach to AI, and are hoping to shape it by getting involved.

One reason companies might avoid that fate in the dawning AI era is that there’s already a clear-eyed (many even say excessive) view of the technology’s potential harms, from nonconsensual sexually explicit images to bias and discrimination to, uh, robots developing sentience and killing everybody.

If AI spirals out of control, leading to (among other things) another decade of Silicon Valley executives being hauled in front of Congress for a public dressing-down, it won’t be because no one warned them, or because the sunny optimism of the early days of Facebook blinded them to risk.

“There’s a utopian and a dystopian camp that have emerged” around AI, Access Partnership’s Martin said. “But everyone is more ready to be regulated than we’ve seen in the past.”