Washington and Brussels are both preparing for a future dominated by artificial intelligence — but first, they need to get out of each other’s way. Tech regulators on both sides of the Atlantic hope to prevent a split on AI rules like the one seen on data privacy, where regulators in Europe got out ahead of their U.S. counterparts and sparked all kinds of havoc that continues to threaten transatlantic data flows.

“There is a lot of interest to avoid having segmented approaches,” said Elham Tabassi, chief of staff in the Information Technology Laboratory at the National Institute of Standards and Technology. “It’s bad for the market. It’s bad for the economy.”

But regulators in the EU and U.S. are already taking different approaches to the multi-trillion-dollar transatlantic tech economy. The EU is plowing ahead with mandatory AI rules meant to safeguard privacy and civil rights, while the U.S. focuses on voluntary guidelines.

And there’s another fundamental divide — the U.S. wants to promote ethical research and use of the technology, while Europe focuses on potentially banning, restricting or auditing specific lines of code. Put another way: While Brussels worries AI tools might harm people, Washington worries the people developing or using those tools might harm people — and they’re crafting their rules accordingly.

Michael Nelson, a senior fellow in the Carnegie Endowment’s Technology and International Affairs Program, said the imbalance could cause U.S. tech companies to lose out on transatlantic business — and cause Europeans to miss out on new technological opportunities.

“You’re going to take the most stringent requirements that might be needed for a medical application and apply it to these general-purpose algorithms that are being used everywhere,” said Nelson. “That is just going to hold back everything.”

The two sides aren’t starting from opposite corners. Victoria Espinel, CEO of the Business Software Alliance and a member of the National AI Advisory Committee, noted that Europe’s still-evolving AI Act and NIST’s incomplete guidelines both seek to weigh the relative risks of AI tools. In both regimes, riskier AI systems or use cases receive heightened scrutiny compared with those deemed low-risk.

“That’s a very significant step,” Espinel told me.

But EU plans to directly regulate specific AI code are spooking their U.S. counterparts.

“One size fits all is bad,” said Tabassi. “If I use face recognition for law enforcement applications, it’s a lot more risky than if I use face recognition for unlocking my phone.”

Tabassi said Europe’s proposed rules lack the “context” required to effectively regulate AI. “You can’t just get one application and label high, or low, or medium risk,” she said.

The U.S. side also argues the European approach is impractical, given how AI tools operate.

“The whole point of machine learning is that you dump in a bunch of data and the code rewrites itself,” Nelson said. “If you somehow certify that the code is ethical, and then you run it and it rewrites itself, how do you ensure that it still meets your criteria?”

Nelson and others said EU regulators are increasingly reaching the same conclusion. And while Europe is unlikely to ever completely agree with the U.S. approach, there’s optimism that the two sides will find enough common ground to avoid a privacy-style crackup.

Negotiators on the U.S.-EU Trade and Technology Council, which was announced to great fanfare last year, are attempting to find that common ground. They have their work cut out for them.