It’s Friday! Good tidings and welcome back to The Future in 5 Questions. Today we have Yasmin Green — CEO of Jigsaw, a division of Google that researches and addresses online toxicity, violent extremism and disinformation. Vogue once called her Google’s "head international troll slayer." A former Booz Allen Hamilton consultant, Green’s previous roles at Google include negotiating deals to syndicate content and heading sales strategy. In the mid-2010s, she co-chaired the European Commission’s working group on online radicalization. She chairs Aspen Digital’s U.S. Cybersecurity Group and serves on the board of the Anti-Defamation League.

Read on to hear Green’s thoughts on productive conflict, the trajectory of deepfakes and using AI to cut through the noise of democratic debate. This interview has been edited for length and clarity.

What’s one underrated big idea?

With the backdrop that a lot of the discussion about AI and democracy has been negative, the big idea is that AI can be used to facilitate productive discourse.

At Jigsaw, we do a lot of work on how autocrats have actually developed an internet that aligns perfectly with their governing ideology. They have a much easier job because there's no dissent allowed in autocracies. In democracies, we need ideas to come into conflict with each other. And we do have an internet where that happens. But it's not done in a way that promotes understanding or compromise or empathy or any of the things that you want from productive discourse.

There’s a term coined by a Harvard fellow, Aviv Ovadya, called bridging systems — algorithms to bridge, I suppose, the rifts between communities. For the last year, we’ve been thinking about Jigsaw’s Perspective API — a suite of classifiers for moderating comment systems — through the lens of bridging systems. The New York Times, The Wall Street Journal, Wikipedia, Reddit — they all use the API. And the goal is to actually bring people into conflict with opposing viewpoints. You're not trying to resolve or eliminate the conflicts — you're trying to transform them into productive conflict.

So we have these experimental classifiers like constructiveness, personal anecdotes, understanding-seeking. With the latest, very sophisticated LLMs, we're able to identify those attributes of discourse and give moderators the opportunity to rank comments on the basis of these new classifiers. People in comment spaces are much happier with this new approach, which I think is an example of a bridging methodology.
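For readers who want to try this, Perspective is exposed as a simple REST API. Below is a minimal sketch of scoring and ranking comments with it, assuming a requests-based Python client and a valid API key. The experimental bridging attributes Green describes (constructiveness, personal anecdotes, understanding-seeking) are not publicly documented under those names, so the stable TOXICITY attribute stands in for them here.

```python
import requests

API_KEY = "YOUR_API_KEY"  # issued through the Google Cloud console
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def score_comment(text: str) -> float:
    """Return Perspective's TOXICITY score for a comment.

    TOXICITY is a stable production attribute; the experimental
    bridging classifiers mentioned in the interview are access-gated,
    so this sketch uses toxicity as a stand-in ranking signal.
    """
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    # summaryScore.value is a probability-like score in [0, 1]
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

comments = [
    "You clearly didn't even read the article.",
    "I grew up near there, and my experience was a little different.",
]
# Surface the least toxic comments first -- a crude stand-in for
# ranking on bridging attributes as Green describes.
for text in sorted(comments, key=score_comment):
    print(text)
```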
What’s a technology you think is overhyped?

Deepfake detection as a standalone, long-term solution. That's never going to be the foolproof solution, partly because detectors are trained on a dataset that contains deepfakes from a certain set of generators. And the more diverse and prevalent the generators of deepfakes become, the less likely it is that any one detector is going to generalize across all of them. Then, of course, you have motivated bad actors who expressly train their generators to circumvent or bypass detection.

The expectation of deepfakes is that eventually they become indistinguishable from real. If you’re creating an image that could have been taken by a camera, then no algorithm is going to be able to detect the difference. And I think that's the trajectory.

What book most shaped your conception of the future?

Ray Kurzweil’s “How to Create a Mind: The Secret of Human Thought Revealed.” It's basically him describing how the brain works and how machine learning is attempting to emulate that.

At the time [I read it] I had a young toddler and she was learning things through pattern recognition. And we were developing Perspective API — the machine learning models I mentioned to you — at the same time. I'd be like, “Wow, the models can evaluate that comment they've never seen before because they've seen so many other comments.” And then I was like, “Wow, my daughter is able to understand how to say that sentence.”

But the funny thing was that then we'd be like: “What do you mean the training data had this systemic bias?” It turns out that whenever people say the word “Muslim” in comments, it is invariably a negative comment. So the model intuited that “Muslim” had a negative association. And I’m hitting my head against the desk. Then I go home and my daughter uses incorrect grammar, and it’s not correct because, in her learning process, she was observing the biased training data of her life. So the whole thing was like fireworks going off for me.

What could government be doing regarding tech that it isn’t?

Within the kind of bridging systems universe, there is this idea of AI to help with deliberative democracy. The problem statement now is that you cannot have a lot of people participate in a conversation and also be heard. We've just had the U.N. General Assembly week in New York. If the format for these things is that you get three minutes of speaking, you have to sit there like a potted plant and wait an hour and a half for another 30 people to have their three minutes of speaking. We haven't really figured out, either online or offline, a mechanism to have a large number of people participate in a discussion and also have their voices be heard.

The really cool thing is that the Taiwanese government has applied software called Polis — an open-source tool designed to facilitate democratic discourse — in something called vTaiwan. They've actually used it to craft 26 different pieces of legislation. They have a really pioneering digital minister, Audrey Tang, who’s doing that. This is a much more constrained, structured conversation. You have a prompt, you're inviting everybody to speak, but they're not replying to each other. If you ask the digital minister of Taiwan, she says one of the main reasons why they don't have toxicity is that people are not replying to each other. Especially with AI, you can synthesize and visualize people's positions on topics and you can have them vote and have it be iterative. I would love to see the government be a use case for AI to promote democratic outcomes.
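The synthesis step Green mentions is more tractable than it sounds. Polis's real pipeline is more elaborate, but its core move (project participants' agree/disagree votes into a low-dimensional opinion space, cluster them into opinion groups, then look for statements every group supports) can be sketched with standard tools. A toy illustration with invented votes, not Polis's actual implementation:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Toy vote matrix: rows are participants, columns are statements.
# 1 = agree, -1 = disagree, 0 = pass/unseen. All values invented
# for illustration; real Polis matrices are large and sparse.
votes = np.array([
    [ 1,  1, -1, -1,  1,  1],
    [ 1,  1, -1,  0,  1,  1],
    [-1, -1,  1,  1,  0,  1],
    [-1,  0,  1,  1, -1,  1],
    [ 1,  1,  0, -1,  1,  0],
    [-1, -1,  1,  0, -1,  1],
])

# Project participants into a 2-D "opinion space" and cluster them
# into opinion groups -- broadly how Polis visualizes where a crowd splits.
coords = PCA(n_components=2).fit_transform(votes)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# Statements with positive average support in *every* opinion group are
# candidate points of consensus -- the output a facilitator would
# surface back to participants for another, iterative round of voting.
for j in range(votes.shape[1]):
    support = [votes[groups == g, j].mean() for g in np.unique(groups)]
    if min(support) > 0:
        print(f"Statement {j} is a consensus candidate: {support}")
```

The design choice Green highlights, voting on statements rather than replying to one another, is what makes this tractable: every participant can weigh in on every statement, so the vote matrix is well-formed and there is no thread to derail.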
What has surprised you most this year?

The internet has been rocked by the changes to Twitter [now renamed X] — its ownership and its name and its policies and its culture. The thing that surprised me is just the sheer number of Twitter clones and spinoffs and micro-blogging websites that have been built in pursuit of the promise Twitter represented.

We really could be innovating away from the Twitter model for speech. What Musk has done is just remove a bunch of the moderation safeguards that were in place. People may be thinking of putting them back. But what about a different design for discourse? I'm surprised that so many resources and so much energy from brilliant people have gone into essentially emulating the micro-blogging website design of the early 2000s.