Hello, and welcome to this week’s edition of The Future in 5 Questions. This week we interviewed Stanford University’s Fei-Fei Li, a computer scientist and co-director of the Stanford Institute for Human-Centered Artificial Intelligence who served on the Biden administration’s National Artificial Intelligence Research Resource Task Force. In her new book, “The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI,” she writes about the importance of centering AI systems around human goals and desires, the founding of her institute to achieve that goal, and the importance of widening the circle of voices involved in AI policy — all subjects covered in this interview. This conversation has been edited and condensed for clarity:

What’s one underrated big idea?

The discussion around AI is taking human agency out of the conversation, from the technology itself to policy. Humans have a profound relationship with machines and technology that we should truly invest in and explore. There is a huge possibility that this technology could superpower humans. Large language models have huge potential to be a co-pilot for knowledge workers, but that butts up against concerns about changes to jobs. If we take advantage of what this technology can offer and develop it in a direction that is much more collaborative, rather than as a replacement technology, we can truly tap into human capital and human talents in profound ways. There's so much that AI can do, but humans have to be put at the center.

What’s a technology that you think is overhyped?

The idea of AI “sentience.” This is an intellectual question that's worth exploring, and human sentience is also still an open question among philosophers. But the hype here is the doomy rhetoric about AI sentience, which takes away from recognizing the technology's immediate, real, and potentially catastrophic risks. We need a lot of hands on deck to work on those. If we become over-focused on a deeply, profoundly intellectual debate about sentience, we take away some of that more immediate focus.

What book most shaped your conception of the future?

“Homo Deus” is a good book for this conversation. I can’t say I agree with everything in it, but it's a good book from a philosopher slash historian who looks at humanity and civilization’s trajectory, and the relationship between humans and the bigger force of the divine, eventually arriving at a conjecture that powerful algorithms will function in such a profound and impactful way that they will shape humans’ relationship with the concept of God. It’s an example of a multidisciplinary discourse that invites not only technologists but humanists, and I think that's really, really important. Even when I was writing my book, I didn't want to just write a book about pure tech. The arc of the book is a scientist eventually turning into a humanist, because in the future we will need scientists, social scientists, humanists, and everyone on the planet to have a moral understanding of the world as powerful technologies like AI shape our civilization.

What could government be doing regarding technology that it isn’t?

What concerns me is that the government's attention to AI seems to be a total enamoredness with the products of commercial efforts. The conversation that's happening right now is centered around the commercial leaders.
I'm not saying there’s zero involvement with the public sector, but the lack of public sector leadership engagement, the lack of public sector voices in AI and policy, is deeply concerning. For example, we recently released an evaluation of all the publicly released large language models against the EU AI Act’s privacy and transparency indexes. Imagine if that came from a private company. Are you going to trust that? Government should invest not only in AI research, cloud infrastructure, and data repositories, but should also take a moonshot mentality and invest in national labs, regional labs, and personnel. I'm concerned that these voices are not at the table.

What has surprised you the most this year?

2023 turned out to be the year of AI policy. I sat on the National AI Research Resource Task Force for the past two years; we spent those two years planning, sent the plan to Congress, and now the CREATE AI Act is active and sponsored by bipartisan members. We really hope that kind of investment in the public sector will help reshape the imbalance that we see and add some direly needed resources. The government is paying attention and looking at investment: there’s the executive order, the summits, and the meetings called by government leaders.