Hello, and welcome back to The Future in 5 Questions. This Friday, we have AI ethicist Rumman Chowdhury, formerly the director of Twitter’s META (Machine Learning Ethics, Transparency, and Accountability) team. Both the White House and Congress have tapped Chowdhury’s AI expertise as Washington figures out how it wants to regulate a technology that’s taken the world by storm. She has provided expert Capitol Hill testimony and is helping organize the White House’s upcoming exercise to “red-team” AI systems at the hacker conference known as DEF CON. Chowdhury is currently a Responsible AI Fellow at Harvard’s Berkman Klein Center and chief scientist at Parity Consulting.
Responses have been edited for length and clarity.

What's one underrated big idea?

Giving a wider range of people the ability to provide feedback on machine learning and AI models in a meaningful fashion. There are many ways people are providing feedback to models, but they're not very structured. So on one end of the spectrum, people go viral on social media because they found this one big, bad, biased thing. On the other end, there is this significant problem of network bias, where companies are going out and talking to luminaries, but they're going out to people who are friends and friends of friends, right? What I'm working on is creating ways for people who are not in that normal advisory network to be included in the conversation.

What’s the technology that you think is overhyped?

The current state of large language models is way overhyped. And I don't know what leads to these annual insane hype cycles. Actually, I have my theories. But the investment that's being put into the current iteration of language models is disturbing to me, because it really is the chatbot craze of 2017 all over again. Honestly, sometimes I feel like I'm in the movie “Groundhog Day” — like I'm reliving six years ago all over again. Because when I first joined Accenture, when I first started doing responsible AI, the hype of the week was chatbots. Everybody was investing in chatbots.

I do think large language models are going to be very interesting, and generative AI will come together in interesting ways. Generative AI takes many forms. It's not just text; it’s images, audio and video. But we are hyper-fixated on the text aspect of things. I think it’s because text is the most human-feeling to us, because it feels like you are talking to it.

What book most shaped your conception of the future?

It's a series called “Monk & Robot” by the author Becky Chambers. It’s fiction. The general premise is that it’s set in sort of a post-apocalyptic future — but it's not a post-apocalyptic future. Humans created robots. Robots became sentient. And then they said, “Hey, would you mind just leaving us alone?” And humans said, “Absolutely.” And then the robots go off into the woods. And humans realize, “Hey, we're really screwing up the planet with climate change. Maybe we should reconstruct our society.” So it's set in this future where there used to be a heavy industrial world, but people live in these little villages now.

It's very fascinating. It's the story of a human who meets a robot after hundreds of years of humans and robots not communicating, and it is a philosophical reflection on what it means to be human. The robot is there to check on the humans to see if they're okay, because robot society was like, “Hey, we haven't heard from these people in a bit. Let's see what's up with them.” So this robot goes out and is just kind of like, “How are people doing? Can I talk to some of them?” And it is so fascinating, because the human the robot meets is sort of having a midlife crisis and trying to figure out the meaning of their existence. On their journey, in their conversations, the robot actually helps the human understand their purpose for being. Along the way, they address issues about technology and society, even some of the issues that we worry about today in the real world. You should read it.

What could government be doing regarding technology that it isn't?
Investing in methods of third-party governance [of new technology]. In particular, financial support, but also legal protections. There is a history of third-party hackers — people who are not employed by companies — identifying actual serious flaws in technology and then being stifled by those companies through threats of litigation, because they're using a technology “inappropriately” or in violation of the DMCA. To be honest, these are not my ideas. These are things I've learned from the hacker community. But this is an evolution of that, to say: We are going to have this in AI. We will have people who should be empowered, because they're not working on behalf of a government and they're not working on behalf of a company; they're working only on behalf of society. They should be empowered to identify and report serious problems with technology. These people will not just need financial support; they will need legal protections.

The one narrative I do not hear in any of these rooms is: How are people being included? I think it's assumed that within government, people are included. But we know that's not actually the case.

What surprised you most this year?

It's surprising to me how ahistorical tech can be. The fact that we had so many of these stories and these narratives and these conversations years ago, and that we're acting like these are new conversations, is actually surprising to me. It maybe shouldn't be surprising to me. Very few of these problems are new problems. We have seen them in other forms, and we're going through the same cycle. It's surprising and frustrating. It's many things.

We’ll get deeper into the conversation in The Recast next week.