Welcome back to our weekly feature: The Future in 5 Questions. Today we have Sunmin Kim, director of public policy for Applied Intuition — the software startup that the U.S. Army tapped in November to build a modeling and simulation platform for its next-generation Robotic Combat Vehicles. Before joining the private sector, Kim was chief of the policy division at the Joint Artificial Intelligence Center (JAIC) within the U.S. Department of Defense. She was also Sen. Brian Schatz's (D-HI) technology policy advisor. Read on to hear her thoughts on standing up a new agency to test AI systems, how incremental tech advances are not always consumer-friendly, and how little has changed policy-wise in the realm of automated driving systems.

Responses have been edited for length and clarity.

What's one underrated big idea?

An agency to facilitate oversight of digital technology. This wouldn't be a regulatory agency — more like a General Services Administration (GSA), where the agency would provide the legal, policy and technical infrastructure needed to do oversight of digital technologies like AI.

When I was at DoD, we were really terrible at doing contracts for AI software. We were really bad at asking people to set aside training data so we could do independent testing and evaluation (T&E) of that software. So the agency could provide things like model contract language and legal advice to other agencies, because it's not just DoD that has had this issue. The facilitating agency could also provide digital environments for the government to do independent T&E of AI technologies. I know a lot of companies are really concerned that they would lose their intellectual property rights if they allow a regulatory agency to access their tech.

I'm generally not a fan of calling for new agencies — I think policies should be as tech-neutral as possible. But in this case, I think there is a shared need across agencies to do independent T&E of AI technologies.
What's a technology you think is overhyped?

Level 3 autonomous vehicles. These are levels of autonomy that the Society of Automotive Engineers put together. Level 3 is a driverless car where a driver is expected to intervene if something goes wrong.

It's an example of a bridge technology — a technology that's not super novel, but something a company has made incrementally better and is now hyping as a completely new, transformative thing, or something that precedes a big new thing that's not ready for commercial use. It's a very natural progression between Level 2, which includes things like lane keeping (so smaller automated tasks), and Level 4, where the car can drive in a fully automated mode and the driver is not expected to engage.

For autonomous vehicle (AV) developers, it's easier to develop for Level 3 than to jump from Level 2 to Level 4. But for consumers, it's more dangerous to put those cars on the road. I don't think it's fair to expect drivers to be able to engage at any moment. Personally, on a good day I would last 30 seconds watching my car drive itself while staying ready to intervene if something goes wrong.

What book most shaped your conception of the future?

This is not a book. There was an essay by Bill Wasik in The New York Times Magazine, published in 2015. It was on digital imperialism and the idea that our tech products import cultural values. The government's not watching it; companies sort of celebrate it because they think the tech sector has a culture of its own. But I think there are a lot of foreign policy implications of importing these technologies. That piece of writing was very influential to me.

What could government be doing regarding tech that it isn't?

Virtual testing for Advanced Driver Assistance Systems (ADAS) and Automated Driving (AD) Systems — the specific technologies that autonomous vehicles are equipped with. Currently, the way the U.S. Department of Transportation tests for vehicle safety is that it primarily cares about whether a car could kill you once it crashes. So it does a lot of lab testing and test-track testing. But the new technologies on cars today are designed to prevent those crashes in the first place. And DOT doesn't really test for those, for two reasons: one, it doesn't have safety performance standards that it is responsible for enforcing; and two, it lacks the virtual testing tools.

Virtual testing is also important because it allows regulators to test for edge cases. Edge cases are what engineers call scenarios that are unsafe, expensive or sometimes impossible to replicate in the real world. So imagine you're driving down the road and something falls off the car in front of you. And it's both sunny and rainy at the same time — that could really affect the sensors driving the car. But it's really hard to test for those things in the real world at scale. So I would love to see us adopt virtual testing as one of the core methodologies to validate new technologies.

What has surprised you most this year?

I last worked on AD policy in the 115th Congress — the 2017-2019 session. And then I didn't work on AD policy again until this job, which I took about a year ago. It has really surprised me how little has changed in AD policy in terms of results, despite a lot of effort by different members of Congress and different coalitions who have wanted to see progress. We haven't seen any new rulemaking by DOT on things like AD safety standards in the past five years. And we obviously haven't seen Congress pass any bills into law on Automated Driving Systems. So that was surprising to me. I mean, it was a pleasant surprise because it meant I didn't have to do a lot of catching up.