Last Thursday POLITICO’s Mark Scott, author of the Digital Bridge newsletter, interviewed the computer scientist and activist Timnit Gebru about a recent open letter from her Distributed AI Research Institute that argued — contra the Future of Life Institute’s high-profile letter calling for an “AI pause” — that the major harms caused by AI are already here, and therefore “Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.” Mark asked her what she thinks regulators’ role should be in this fast-moving landscape, and how society might take a more proactive approach to shaping AI before it simply shapes us. This conversation has been edited for length and clarity.

Why is it important to increase transparency and accountability around how AI systems are deployed, and how would that benefit people’s understanding of how the technology works?

First, this would show us what data is being used. Was it obtained with opt-in informed consent, or was it stolen? Second, it would show us the quality of the data. What data sources are they using? It’s important that the onus be on the corporations to show us these things before deployment, rather than on understaffed agencies auditing or inquiring about them after the fact.

What concerns do you have about a small group of companies potentially dominating AI, and how would you mitigate that threat?

These corporations want one model that does everything for everyone, everywhere, so that we all pay one or two companies to do literally any task in our lives. OpenAI CEO Sam Altman said that “in the next five years, computer programs that can think will read legal documents and give medical advice.” He has no evidence for this claim, but people will actually believe it and start to use these systems as such. Why would we want a world where every single task is done by one model from OpenAI, and the whole world just pays them to do it?

What is your appeal to policymakers? What would you want Congress and regulators to do now to address the concerns you outline in the open letter?

Congress needs to focus on regulating corporations and their practices, rather than playing into their hype of “powerful digital minds.” That language, by design, ascribes agency to the products rather than to the organizations building them, and it obfuscates the amount of data being collected — and the exploitation of the workers who label and supply the datasets and moderate model outputs.

Congress needs to ensure that corporations are not using people’s data without their consent, and hold them responsible for the synthetic media they produce — whether it is text or other media spewing disinformation, hate speech or other types of harmful content. Regulations need to put the onus on corporations, rather than on understaffed agencies. There are probably existing regulations these organizations are already breaking. There are mundane “AI” systems being used every day; we just heard about another Black man being wrongfully arrested because of the use of automated facial analysis systems. But that’s not what we’re talking about, because of the hype.

The European Union is moving ahead with AI-specific legislation, and already has expansive privacy regulations that address some of the issues you mention in the open letter. Are you optimistic about what the Europeans are doing?

They’re doing something, which is much better than doing nothing.
However, with instruments like the General Data Protection Regulation, the onus is on individuals to prove harm, rather than on corporations to prove, before they release a product, that it fulfills a certain set of requirements. I’d like regulation that doesn’t put the onus on individuals and understaffed agencies to prove harm after AI products have already proliferated.

What do you mean by this phrase from the open letter: “We should be building machines that work for us, instead of ‘adapting’ society to be machine readable and writable”?

In the Future of Life Institute’s AI “pause” letter, they used words like “cope” to describe dealing with “disruptions to democracy.” Why should we cope? That sounds so ridiculous to me. Society should build technology that helps us, rather than simply adjusting to whatever technology comes our way.

Paris Marx’s “Road to Nowhere” describes suggestions about what people should wear in order to coexist with self-driving cars, because the designers simply assumed that self-driving cars need to exist at any cost — and that we have to be made “machine readable,” to adjust who we are, in order to coexist with them. That frames technology not as a tool that helps us exist how we want to exist, but as a thing that has to exist, and that we have to bend to its will. It is a very strange framing.