Welcome back to our weekly feature: The Future in 5 Questions. Today we have Mark Brakel — director of policy at the nonprofit Future of Life Institute. FLI’s transatlantic policy team aims to reduce extreme, large-scale AI risks by advising near-term governance efforts on emerging technologies. FLI has worked with the National Institute of Standards and Technology in the U.S. on its AI Risk Management Framework and provided input to the European Union on its AI Act. Read on to hear Brakel’s thoughts on slowing down AI releases, not taking system robustness for granted and cross-border regulatory collaboration. Responses have been edited for length and clarity.

What’s one underrated big idea?

International agreement through diplomacy is hugely underrated. Policymakers and diplomats seem to have forgotten that in 1972 — at the height of the Cold War — the world agreed on a Biological Weapons Convention. The Convention came about because the U.S. and Russia were really concerned about the proliferation risks of these weapons — how easy it would be for terrorist groups or non-state armed groups to produce them.

At least to us at FLI, the parallel with autonomous weapons is obvious — it will also be really easy for terrorists or a non-state armed group to produce autonomous weapons at relatively low cost, so the proliferation risks are enormous. We were one of the first organizations to reach out to the public about autonomous weapons, through our Slaughterbots video on YouTube in 2017.

Three weeks ago, I was in Costa Rica at the first conference on autonomous weapons between governments outside of the U.N. All of the Latin American and Caribbean states came together to say we need a treaty. And despite the ongoing strategic rivalry between the U.S. and China, there will definitely be areas where it will be possible to find an international agreement. I think that's an idea that's slowly gone out of fashion.

What’s a technology you think is overhyped?

Counterintuitively, I'm going to say AI and neural nets. It’s the founding philosophy of FLI that we worry about AI’s long-term potential. But in the same week that we’ve had all this GPT-4 craziness, we've also had a human beat a successor to AlphaGo at Go for the first time in seven years, almost to the day, after we'd basically surrendered that game to computers.

We found out that systems based on neural nets actually weren't as good as we thought they were. If you make a circle around the stones of the AI’s game and you distract it in a corner, then you're able to win. There are important lessons there, because it shows these systems are more brittle than we think they are, even seven years after we thought they had reached perfection. An insight that Stuart Russell — AI professor and one of our advisors — shared recently is that in AI development, we put too much confidence in systems that, upon inspection, turn out to be flawed.

What book most shaped your conception of the future?

I am professionally bound to say “Life 3.0,” because it was written by our president, Max Tegmark. But the book that really gripped me most is “To Paradise” by Hanya Yanagihara. It's a book in three parts. Part three is set in New York in 2093. It's this world where there have been four pandemics. You can only really buy apples in January, because that's when it's cool enough to grow them. You have to wear your cooling suit when you go out otherwise.
It’s this eerily realistic view of what the world would be like to live in after four pandemics, huge bio risk and a climate crisis. AI doesn't feature, so you have to suspend that thought.

What could government be doing regarding tech that it isn’t?

Take measures to slow down the race. I saw this article earlier today that Baidu put out Ernie. And I was like, “Oh, this is another example of a company feeling pressure from the likes of OpenAI and Google to also come out with something.” And now their stock has tumbled because it isn't as good as they claimed.

And you have people like Sam Altman coming out to say it's really worrying how these systems might transform society — we should be quite slow in terms of letting society and regulations adjust. I think government should step in here to help make sure that happens — forcing companies, through regulation, to test their systems and do a risk management analysis before they put stuff out, rather than giving them this incentive to just one-up each other and put out more and more systems.

What has surprised you most this year?

How little the EU AI Act gets mentioned in the U.S. debate around ChatGPT and large language models. All this work has already been done — like writing very specific legal language on how to deal with these systems. Yet I've seen some one-liners from various CEOs saying they support regulation, but it's going to be super difficult. I find that narrative surprising, because there is this quite concise draft that you can take bits and pieces from.

One cornerstone of the AI Act is its transparency requirements — that if a human communicates with an AI system, then it needs to be labeled. That's a basic transparency requirement that would work very well in some U.S. states or at the federal level. There are all these good bits and pieces that legislators can and ought to look at.