PROGRAMMING NOTE: We’ll be off next week for the holidays but back to our normal schedule on Tuesday, Jan. 2.

Hello, and welcome to this week’s installment of the Future In Five Questions. Well, sort of: As we did last year, we’re ending Digital Future Daily’s 2023 with a look back at some of the most thought-provoking responses to our weekly questionnaire on some of the biggest topics of the year — including the AI boom, quantum computing and the ever-changing nature of the political relationship between Washington and Silicon Valley.

Thank you all for reading this year, and as always, if you enjoy the newsletter, please encourage your friends to sign up at the link at the top of this page, or drop a line with any questions, comments or concerns at drobertson@politico.com. Without further ado:

On artificial intelligence

“There’s been so much focus on AI chatbots that we risk missing the bigger picture. AI is revolutionizing the way we do scientific exploration, whether that’s quantum computing, or material science, or agricultural productivity, or nuclear fusion, or drinkable water… Scientific progress lies at the heart of economic productivity and increasing living standards, which has been a key challenge over the past couple of decades.” — Kent Walker, president of global affairs and chief legal officer at Google’s parent company Alphabet

“As AI increasingly augments decision-making quality and decentralization, as a contributor, you need less guidance from your manager. At the same time, generative AI gives a good first draft, so what is the value of the contributor?” — François Candelon, managing director and senior partner at Boston Consulting Group

“The idea of AI ‘sentience’ [is overrated]. This is an intellectual question that’s worth exploring, and human sentience is also still a question among philosophers.
But the hype here is around the doomy rhetoric about AI sentience, which takes away from recognizing the technology’s immediate, real and potentially catastrophic risks. We need a lot of hands on deck to work on those. If we become so over-focused on a deeply, profoundly intellectual debate about sentience, we take away some of the more immediate focus.” — Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence and member of the Biden administration’s National Artificial Intelligence Research Resource Task Force

On quantum computing

“Based on the quantum algorithms that we’ve actually discovered over the past 30 years, we know how to get at most modest speed-ups for these problems [to which current quantum computers are applied]. When the speed-ups are modest, because of the huge overhead of running a quantum computer at all, it doesn’t become a net win until much, much further into the future. Of course, people are free to hope for that, and they should do research, and they should try to learn the truth of the matter, but in the meantime I think presenting to the public or to investors that we know how to get big quantum speed-ups for machine learning and optimization in the near future is fundamentally dishonest.” — Scott Aaronson, theoretical computer scientist and director of the University of Texas at Austin’s Quantum Information Center

“I think the U.S. government is just totally biased in the way it looks at quantum technology. It listens to IBM, and Google, and big tech in general, that the gate model is the only way to go and that you need to fund a lot of long-term research before we’re going to get there. They’re missing the fact that there’s more than one approach to quantum. There’s more than one system out there, and it’s possible to get real benefits today. The U.S. government needs to be far more focused on near-term applications and being inclusive of all quantum technologies.” — Alan Baratz, CEO of commercial quantum computing company D-Wave

On the power of Big Tech

“Not even a decade ago, Silicon Valley felt so isolated from the rest of the country’s needs that the idea of building companies that support the national interest… was very much out of style. But we’re seeing a radical shift in how founders and engineers view public service: that some of our greatest problems can be solved through building technology companies for America, whether it’s manufacturing drones to shore up the defense industrial base or building AI tutors to help elementary students learn math.” — Katherine Boyle, general partner at Andreessen Horowitz

“...There are genuine people with real skills, I assume acting in good faith in positions of extraordinary power, who don’t understand the basic political economic facts of this ecosystem, which are that the infrastructure providers have the control. That’s where the money is, and that’s how this political economic configuration works. From my perspective as someone attentive to the ingress and egress, who recognizes this as where power lives in a capitalist economy, it was surprising to see that this was not understood at such a high level [at OpenAI amid Sam Altman’s ouster].” — Meredith Whittaker, president of the Signal Foundation and co-founder of the AI Now Institute

On the power of Washington (or lack thereof)

“I unfortunately joke that D.C. has been on a 25-year tape delay, because of the fact that we have a gerontocracy and the politics are so divorced from the policies. We saw the total absence of intelligent regulation of social media. Decades later we’re seeing the effects of that.
I don’t think the feds are going to be asleep at the switch for 20 years where AI is concerned, but you want a nimble, dedicated set of regulators that are directly in touch with the technologists and firms, and companies that are deploying these tools and developing them.” — Andrew Yang, former Democratic presidential candidate

“Virtually everything associated with the internet, smartphones, and digital applications was made possible by the American taxpayer, yet the government that created the revolution has failed to meaningfully oversee the results… The story of a couple of geniuses and their dog in a garage coming up with an immaculate innovation is a great image, but that creation began long before, thanks to the support of American citizens. American government activities, appropriately, began the digital revolution, and American government oversight of its results is just as appropriate.” — Tom Wheeler, visiting fellow in governance studies at The Brookings Institution and former chairman of the Federal Communications Commission

…And some favorite books

“‘Klara and the Sun’ by Kazuo Ishiguro. It’s a book that raises profound questions about what it means to be human and inspires us to think critically about that relationship between technology and human life — the benefits, the limitations, and how the decisions we make today about the use of technology will impact our sense of humanity in the future.
It’s quite relevant as we’re grappling with the future of AI.” — Erin Egan, co-chief privacy officer at Meta

“One of my favorite stories is Plato’s ‘Phaedrus,’ where he has the god Theuth meeting King Thamus, and the god is going to give humans the gift of the written word and the king shoots back, ‘No, you shouldn’t do that, because then they’ll lose the cognitive capability to remember long folk tales and tell them around the campfire.’ They both had a point, because the god was correct that it would have benefits, and the king was right in saying we’re gonna lose our ability to remember long passages. Using information technologies to expand or enhance human capabilities will always inspire a raging debate, but one that we should be open to considering before shutting it down based on risk.” — Adam Thierer, analyst at the R Street Institute