Janet Haven doesn’t dismiss fears that artificial intelligence may cause an apocalypse. She just doesn’t believe that’s the technology’s biggest existential threat.

Haven is the executive director of the nonprofit Data & Society, which tracks the social effects of data, automation and AI, and a member of the White House’s National Artificial Intelligence Advisory Committee. That group has met regularly since its creation last year, and in May published its inaugural annual report, which called on the federal government to take a more hands-on approach to AI.

“I don’t discount the sort of longer-term existential threats and concerns that have been raised,” she told Digital Future Daily. “But they are discounting the immediate and very visible harms that are already part of our use of AI.”

Those concerns lie at the heart of the ongoing battle for the hearts and minds of policymakers and politicians, as everyone from President Joe Biden and Sen. Chuck Schumer to Microsoft’s Brad Smith and Mozilla’s Mark Surman tries to calm the public’s concerns that artificial intelligence will take their jobs, revolutionize the American economy or, in the worst-case scenario, end the world.

On one side are technology advocates who have called on Washington to mitigate AI’s potential to end the world, even if that possibility is decades away. On the other side are mostly human rights campaigners who want attention focused squarely on how algorithms — full of data biases — are harming Americans now.

For Haven, politicians should look past the AI hype and focus on the real-world risks that may result when everything from people’s Social Security benefits to housing allocations becomes automated via complex machine-learning algorithms.

“Policymakers and politicians should be paying attention to the protections of fundamental rights, and the durability of any (AI) governance framework,” said Haven, who spent more than a decade working with the Open Society Foundations, a grant-making foundation focused on promoting human rights and social justice.

She points to the White House’s (much-maligned) Blueprint for an AI Bill of Rights, and to regulatory guidance for federal agencies on how they oversee the technology, as examples of how officials can rein in the worst excesses of AI without limiting, too much, how companies roll out the technology. Those proposals, which immediately came under criticism from all sides for either being too onerous or not going far enough, included data privacy protections and efforts to stop algorithmic bias from creeping into AI systems.

Haven also urged Washington not to get caught up in the extensive corporate lobbying that has sprung up over the last year in the wake of ChatGPT’s rise. She worried that policymakers have focused almost exclusively on so-called generative AI, which is only a small part of what wider AI systems are capable of. That focus has also allowed some of Silicon Valley’s biggest names to dominate the national conversation on AI, just as the public has finally woken up to a technology that has been around for decades.

“The main public discourse understands AI as ChatGPT and that is a really narrow part of AI,” she said. “It is also a type of AI that has been developed specifically to meet two objectives,” Haven added. “One: to push towards this imaginary artificial general intelligence, which is conceived of by a very small group of people. And two: it has been created to sell a product.”

So far, though, Haven’s near-term concerns have been somewhat sidelined by officials — both in the White House and on Capitol Hill — in favor of Armageddon planning around the worst-case scenarios if AI really goes rogue. That’s especially true when many in Washington now view the technology’s development through the prism of America’s competition with China.

“The most bipartisan issue in Washington right now is being concerned about China,” she admitted. “That is dramatically impacting how AI governance is being discussed and rolled out.”