Happy Friday and welcome back to our weekly feature: The Future in 5 Questions. Today we have Dana Rao, executive vice president and general counsel at Adobe, who leads the company's legal, security and policy organization. He's also Adobe's chief trust officer. Rao is one of the driving minds behind Adobe's Content Authenticity Initiative, which is trying to counter the rise of misinformation by creating tools to verify the origin of digital content. Read on to hear his thoughts on the challenges of renewing trust in online content (and why it's necessary), learning from the "black box" of the human brain and governing AI by looking at what it's being used for. Responses have been edited for length and clarity.

What's one underrated big idea?

Trust. When everything's digital, everything can be moved, monetized or manipulated. In the last few years, you've seen dangerous mistruths spread on COVID-19 or global elections. And with technologies like generative AI advancing so quickly, it's even easier to create and spread these global untruths faster than ever.

The solutions out there for misinformation have been focused on trying to detect what's fake. They do that through AI detection technology or through fact-checkers. And both are largely ineffective at catching misinformation. The detection tech is inaccurate and the fact-checkers are slow. By the time you put a label on something saying it's a lie, everyone's already read it. Videos go viral in seconds, not weeks.

The problem we see at Adobe is not that you get fooled by everything, but that you start to realize you shouldn't trust anything you see or hear because it could be manipulated. And once that happens, it's going to be difficult to have a good discussion with anyone about what is actually happening in the world. Because the way we're all consuming information is digital, you're going to be very skeptical that the things you're seeing are true.

What's a technology you think is overhyped?

Sometimes overhyped means not ready yet versus not ready ever. For me, blockchain and crypto still seem like solutions looking for a problem. I was an electrical engineering undergrad, so I've thought a lot about the technology of decentralized distributed ledgers. It's been 30 years since blockchain was invented and it's still on the fringes of economic transactions with Bitcoin. It's a cool technology. But what does it solve, really, that you can't solve otherwise?

India and Singapore recently announced an agreement where they can do cross-border transactions using mobile banking infrastructure. Those cross-border transactions are a huge value proposition for Bitcoin. Another place where people talk about blockchain being useful is non-fungible tokens. For an entity like Adobe, there's value in artists getting attribution and compensation for digital art. But you can do NFTs without blockchain. We have projects at Adobe where you can embed a digital executable license right in the image. And people can find out it's your work and transact with the image wherever it goes.

We also need to solve the climate impact of blockchain before it really makes sense. So I think people shouldn't be playing around with it just to solve for something that may happen.

What book most shaped your conception of the future?

When I was in law school, I was trying to write a journal paper and couldn't think of anything. So I went into my dad's study and he had a book called "Understanding Neural Networks and Fuzzy Logic." This is back in 1996.
That book really opened my eyes to what AI was and how much of the human brain we were trying to reproduce mathematically. These neural networks that you're creating to do the AI work, they're just like the brain. And just like the brain, the decisions being made are coming out of a black box. You don't actually know why the AI is doing what it's doing.

At the time, I was really focused on intellectual property law. So I decided to write my journal paper on whether AI can invent or create art. Who owns the output? There was very little on it back in 1997, because no one cared about AI back then. In the paper I wrote, I decided the answer was no — that the law requires a human to be involved in order for a patent or copyright to be granted. And that's still true.

What could government be doing regarding tech that it isn't?

When it comes to AI, we need to balance innovation and responsible innovation. When you do it right, human creativity is amplified, productivity goes up, and all that routine, mundane work goes away. But you can also create harm, right? Automated AI decision-making can cause problems.

So we think governments need to create systems that promote innovation by allowing it to flourish in low-risk cases. We always give this example: if we use AI to help you create a great font, it's hard to see how that could possibly cause bias. And so we want to create a regime where low-risk innovation like that just goes out the door. On the other hand, the government should have a high-risk system that's based on the use case. In critical areas — like cars, banking and employment — they should put regulations in place because AI can do harm if not regulated.

So the government needs to think about it in a couple of ways. One is by use case: high risk versus low risk. [Note: IBM's Christina Montgomery espoused a similar risk-based governance model in her interview.] And the second is general purpose versus special purpose. There's a trap in this general purpose versus special purpose AI, which is that if you have a general purpose AI but all you're doing is making loan recommendations, it's high risk. It doesn't matter that the AI can do a hundred low-risk things — if the purpose it is being used for is to make loan recommendations, it's high risk, and you should go check to make sure it does not have bias in its output. And if you have a special purpose AI, but all it is doing is making font recommendations, that's fine. So it's not that general purpose or special purpose is inherently bad or good. It's really the use case you should be scrutinizing.

What has surprised you most this year?

The speed at which generative AI has taken off. It's been around for a while. And we've certainly been working on it internally. But the speed at which we're seeing it grow in popularity and adoption — finding its niche in terms of use cases — in just the last month and a half has been surprising.

My chief product officer sent me a snippet where he asked ChatGPT: "What is advice that the general counsel should give a chief product officer?" And it gave really good advice — advice I could see a fifth-year lawyer giving. That was really eye-opening for me. You wouldn't use it for a very specific thing. But if you need a good general understanding, it's really impressive.

The downside is its use for misinformation and deepfakes. It portrays inaccurate information convincingly, and people are not yet ready to be discriminating about what they're reading and seeing.
Digital literacy has to be top of mind for governments — educating the public on when to believe and when to be skeptical, and giving people the tools to decide when to trust and when to do their own fact-checking.