Happy Friday, and welcome to the latest edition of our Future In Five Questions feature. It's Rebecca Kern stepping in, and I spoke with Jennifer Chayes, dean of the newly established College of Computing, Data Science and Society at the University of California, Berkeley. (It became an official college in May, the campus's first new college in more than 50 years.) Chayes spent 23 years at Microsoft researching graph algorithms and machine learning. As one of the inventors of graphons, which are used in the machine learning of large-scale networks, she's interested in using large language models like ChatGPT to process new datasets to solve pressing problems like climate change. She's also committed to expanding racial and gender diversity in STEM. This conversation has been edited and condensed for clarity:

What's one underrated big idea?

One thing I'm not hearing enough about is AI for science. We're hearing about AI in very data-rich areas, like language and images. But AI for science, where we're developing new techniques for areas with relatively sparse data, is going to transform biomedicine and health. It's going to transform climate and sustainability. It's going to transform the platforms on which we allocate resources in public health, in welfare and in other areas. I see that as being huge.

Our College of Computing, Data Science and Society is training the people who come in with a commitment to public health, or a commitment to climate and sustainability, or a commitment to human welfare and social justice, who are going to help create the platforms that will enable us to really start to make progress in all of those domains for the good of society.

What's a technology you think is overhyped?

I do think large language models (LLMs, like ChatGPT) are going to transform the world, but I think they're overhyped. They have this uncanny ability to do things that we believed only human beings could do. And I think it's both overestimated and underestimated. It's very different from human intelligence. It's very bad at planning. It tends to make egregious mistakes, although there are ways being developed to improve that.

But on the other hand, I think it's underestimated in other areas, like the advances it will bring to health care and patient interaction. There are startups out there doing therapy with LLMs, and we're going to be able to teach people better with LLMs conversing with them. So I think it's going to personalize a lot of things that we just don't have the resources to do.

What book most shaped your conception of the future?

I'm a multidisciplinary kind of person. I've done mathematics. I've done physics. I've done economics. I've done biology. I've done computer science. And in the same way, I am not a single-book kind of person, nor a single-genre person. I very much believe in bringing diverse perspectives together, including both fiction and nonfiction.

Kazuo Ishiguro — anything he writes, I devour immediately: "Never Let Me Go," "Klara and the Sun." His work shows human beings being distorted not only by technology but by bizarre incentives. Then as a nonfiction writer, Yuval Harari: "Sapiens," "21 Lessons for the 21st Century" and "Homo Deus." I love his books. I don't agree with everything, but I find him very, very thought-provoking.

What could the government be doing regarding tech that it isn't?

Health intervention.
We need larger datasets than we have, and I'd love to see the government require data sharing as a condition of government health care funding (and a lot of health care is government funded). Because then we're going to be able to make many more choices, and the choices we make are also going to be tailored to different groups. On the other hand, I believe that the government should impose tremendous penalties for the misuse of data. So make the data more accessible, but create penalties for misuse.

In terms of AI regulation, I think we have to be very careful. Horizontal regulation of technologies is going to be very difficult. There's also a question of whether we want to do vertical regulation of the uses of technology in different industries. That may be a more reasonable way to do it, because that's where we understand the uses best.

What has surprised you most this year?

Two things — one is technical and one is societal. One is ChatGPT. I knew that something like that was going to happen eventually, but I was surprised by how quickly it happened; I would have predicted five or ten years from now. Then I'm also surprised by the reaction: how strong and how rapid it has been, and how people have extrapolated, attributing to it powers that it does not have. And some of the reactions, I think, are not necessarily scientifically based. The nice part about the reaction, though, is the feeling that now a lot of the world is going to come together with the experts in computing and AI to make sure that these powers are used for the good of society and not to harm it.