(Spoiler: There’s a tiny scoop about GPT-4’s internal release date at the end.) Responses below have been edited for length and clarity.

What’s one underrated big idea?

That responsible AI is a practice — not just a slogan or a set of principles. One of the things I’m most proud of during my time at Microsoft is operationalizing responsible AI across the company. We’ve engaged the very best to develop an actionable Responsible AI Standard. This is our internal playbook for how we develop and deploy AI systems. We’re actually on the second version of the Responsible AI Standard, because our product teams were thirsty for more concrete guidance and processes.

We’ve come to this AI moment with more than six years of work building the infrastructure for responsible AI. We’ve developed a practice that puts us in a good position to look ahead to these exciting, transformative use cases of the future. Some of the learnings from our program, I think, are very helpful for feeding into a public policy conversation where we all share the same objective.

Building AI systems is not like building Word or Excel. Having multidisciplinary groups is critically important. And Microsoft cannot do this alone. In fact, Microsoft benefits from outside insights, initiatives and research. You need to make sure there’s a two-way exchange with the world.

What’s a technology you think is overhyped?

Thinking about general-purpose technologies like AI in these monolithic or abstract terms. When we lump a broad range of technologies into a single category, we sometimes end up with an all-or-nothing approach, or a one-size-fits-all solution. In reality, there are countless different ways that a diverse set of AI technologies can be applied. Teasing apart those scenarios just leads to a more productive path forward.

So we’re thinking about large language models today mostly in terms of chatbots. But in fact, there are exciting new applications, like helping security operations centers around the world get ahead of their adversaries. We need to stop thinking about things in the abstract. We need to focus more on what we are trying to achieve, what we are trying to avoid, and to try and calibrate those guardrails appropriately.

What book most shaped your conception of the future?

Azeem Azhar’s “The Exponential Age” has some great insights for the particular AI moment we’re in right now. He digs into four general-purpose technologies — computing, biology, renewable energy and manufacturing — and he exposes this exponential gap between the advances powered by those technologies and the ability of our societal institutions to respond.

He had me interested from the very first chapter, where he described his own first encounters with personal computing. I remember my dad bringing home our first Amiga 500 computer. His book does a great job connecting the dots between social, political, economic and technological trends. Ultimately, I think he’s right to conclude that technology is something that we can control, and humans are ingenious at forging the world that we want in response to technological change. As it happens, he also has a great Substack.

What could government be doing regarding tech that it isn’t?

Two things. First, I think it’s helpful to spend time engaging with technology companies and academics to better understand the technology. There is more to AI than just the models that we often talk about. There are AI supercomputers. There are clever applications that sit on top of these models. It will help to design better regulations for high-risk uses if our policy stakeholders have a better understanding of the technology and the policy intervention points.

The second thing is to bring together civil society, academia, technology companies and government leaders to help chart a path forward. Serving as a convener during this important period is essential.

What has surprised you most this year?

Well, I had a bit more of a sneak peek than other people. What surprised me is how quickly people are adopting this next wave of AI personally and professionally. We see more and more businesses taking up this technology — from Fortune 500 companies to startups and scale-ups. They’re taking this core technology and building exciting new scenarios: more efficient customer service, helping citizens fill out government forms, and reducing paperwork for doctors so that they can spend more time with their patients.

These large language models and multimodal models are quickly becoming a new computing paradigm. We’re all going to have the benefit of a much more natural interface with computing.