Happy Friday — and welcome to the latest installment of our regular feature, The Future in 5 Questions. This week I put our list of questions to Navrina Singh, the founder of Credo AI, whose work lies right at the intersection of AI’s growth and its increasing attention from rulemakers. Credo isn’t an AI development company per se — it’s a platform meant to help other companies ensure their AI tools comply with the rapidly growing constellation of laws, regulations and “recommendations” governing AI’s use. She understands both the rules and the underlying tech back to front, having served on the Biden administration’s National AI Advisory Committee and worked on AI for several years at both Microsoft and the World Economic Forum. We talked about her belief that governance can actually accelerate innovation, the limits of what AI can really do on its own and her view of the different philosophical approaches the U.S. and EU have taken to AI. Responses have been edited for length and clarity.

What’s one underrated big idea?

How governance can actually increase the benefits of artificial intelligence. We’ve seen again and again — as artificial intelligence has shifted from experimentation and iteration to actual production scenarios across financial services, insurance, healthcare and education — how governance and having the right set of guardrails result in lower failure rates and more risk-aware systems. It results in more compliant systems. Governance actually can be a force multiplier for your technology bets.

What’s a technology you think is overhyped?

AI itself, which is not going to solve all our problems. With new technologies like generative AI, if you don’t know the technology well you can very quickly be seduced by this idea of, “Oh my God, this is so powerful, it can solve all my problems.” But as you start unpacking the layers of how ChatGPT, as an example, responds, you start to see how much unrealistic, non-factual information is in those answers. You can’t use these systems in high-risk scenarios, and that’s what we are trying to address with generative AI governance this year.

What book most shaped your conception of the future?

If you think geopolitically about how technology is shaping the world, it’s “The World Is Flat” by Thomas Friedman, which is one of my all-time favorite books. But when it comes to recent developments in artificial intelligence, I would say “Weapons of Math Destruction” by Cathy O'Neil, which is certainly one of my favorites. There are still not enough books being written about why guiding this technology is going to be critical for how we shape humanity. That’s the book I’m looking to write.

What could government be doing regarding tech that it isn’t?

I sit on the National AI Advisory Committee to President Biden, so it’s important to qualify that this is my perspective and I’m not speaking on behalf of the committee. For the past seven years, I’ve looked across the globe, to Europe, Canada, Singapore, the U.S. and China, to see what governments are doing and not doing. I think there’s a wake-up moment happening right now among governments, which are recognizing that there needs to be the political will to put guardrails around tech. The way to put the right guardrails in place is not just by policymakers getting together, but through an open and honest multi-stakeholder public-private discourse. What’s not happening well enough is moving toward a better understanding of artificial intelligence in particular.
We need to move much faster to come up with mechanisms to make AI governance a reality. As an example, some sort of transparency or disclosure reporting on how a company is acquiring AI, or how it’s building artificial intelligence systems, is a fantastic step that government can mandate to ensure AI is actually in service to citizens.

What has surprised you most this year?

The reception that generative AI technologies have received in the past year, given that so many people like myself have been working on these technologies for a long time. Seeing consumers adopt them so quickly, and seeing the scale that these generative AI technologies have reached over the past six to seven months, has been astonishing for me. I’m excited to see how governance is going to start reining in generative AI’s use in critical applications.