5 questions for MIT's Neil Thompson

From: POLITICO's Digital Future Daily - Friday, Jun 02, 2023 08:08 pm
How the next wave of technology is upending the global economy and its power structures

By Mohar Chatterjee

With help from Derek Robertson

MIT's Neil Thompson.

Hello, and welcome back to The Future in 5 Questions. This Friday, we have Neil Thompson — director of the FutureTech project at the Massachusetts Institute of Technology's computer science and artificial intelligence lab and a principal investigator at MIT's Initiative on the Digital Economy. Thompson's expertise lies at the intersection of industry and the next generation of computing, which is especially relevant now, given the burgeoning resource footprint of AI. He has advised businesses and governments about the future of Moore's Law — the empirical observation that computing power roughly doubles every couple of years.

Read on to hear Thompson's thoughts about the physical limits of current chips, how the U.S. can keep its lead in computing power, and how researchers use large language models.

Responses have been edited for length and clarity.

What’s one underrated big idea?

A big underrated idea is that the pace of computer progress is slowing down. This, I think, is very counterintuitive, because we keep seeing results in the news about how incredible AI is and what it's doing. But the underpinnings have changed.

Almost all of this stuff that we're doing with artificial intelligence is based on this technology called deep learning. Deep learning has this characteristic where you spend a ton more to do a ton more. That's different from how we historically got improvements in computing, which was through Moore's law and improvements in computer hardware. In that case, you didn't have to spend more to get much more.

So now, there's an intrinsic cap on what you're able to do with deep learning just by spending more. Suddenly, you get systems that cost so much that you don't really want to keep investing in the “more.” For example, we're already hearing that GPT-4 cost more than $100 million to train. And people there are thinking, well, actually, there probably won't be another step in the same pattern because of the escalation in costs. That's notably different compared to the decades and decades of improvement we've had from Moore's law. And I think we're going to be reckoning with that a lot more in the coming 5 to 10 years.
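
One rough way to see why those costs escalate so quickly: in much of the deep-learning scaling literature, error falls only as a power law in compute, so each further increment of accuracy multiplies the compute bill. The sketch below is an editorial illustration with a made-up exponent, not a figure from Thompson's research.

    # Toy back-of-the-envelope calculation (hypothetical exponent, illustrative only):
    # if error falls as a power law in compute, error ~ compute**(-alpha),
    # then each further halving of the error multiplies the compute bill.
    alpha = 0.1                      # hypothetical scaling exponent
    factor = 2 ** (1 / alpha)        # extra compute needed to halve the error
    print(f"Halving the error takes roughly {factor:,.0f}x more compute")  # ~1,024x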

What’s a technology you think is overhyped? 

Blockchain. In the discussions I've had with people, I keep coming back to this question: why is this better than a database? There are some cases, like cryptocurrencies, where you can make a strong case for it. But in many of the things that people are talking about, the distributed nature of blockchain actually has some drawbacks, because you often want to have some way of making changes. You want to be able to say, well, that transaction actually is fraudulent. It may be that people on both sides of the transaction agreed to a transfer. But if the thing wasn't actually delivered, you need some way to reverse it.

And so I think many of the benefits of blockchain are really benefits that come from databases. It's only a small number of cases where you really get the incremental benefit that comes from all the machinery of blockchain.

What book most shaped your conception of the future?

I want to answer in a slightly different way than you've asked. I want to talk about a graph that shaped my view of the future.

This was a graph that Horst Simon, who used to be the head of computation at Lawrence Berkeley National Lab, presented; I saw it in a seminar one time. It was a graph of Moore's law as it evolved over time. Moore's law is meant to characterize the doubling of computing power every couple of years. But actually there's an underlying trend — which I think is even more important. Probably twenty years beforehand, Richard Feynman identified miniaturization as a key thing that could be done.

Turns out that the miniaturization of parts of the computer has many benefits. One is that you can just get much more on the chip. And that’s just geometry, right. If you shrink it, the same-sized chip can hold more. But perhaps a more important piece is that as transistors get smaller, they actually use less power, even proportionally. And that meant that you could run chips faster. And that's hugely important.
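
The geometry argument, together with the classic scaling rule Thompson alludes to (often called Dennard scaling), fits in a few lines. This is an editorial sketch with idealized scaling factors, not something from the interview:

    # Idealized sketch: what a linear shrink factor k buys you under classic scaling.
    def shrink_effects(k):
        """k > 1 is the linear shrink factor; k = 1.4 means features are 1.4x smaller."""
        transistors_per_area = k ** 2      # geometry: each transistor takes 1/k^2 the area
        power_per_transistor = 1 / k ** 2  # Dennard scaling: voltage and current each fall ~1/k
        power_density = transistors_per_area * power_per_transistor  # ~constant, so clocks could rise
        return transistors_per_area, power_density

    print(shrink_effects(1.4))  # ~2x the transistors on the same-sized chip, same power density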

In fact, for decades, we had exponential increases in speed. And Horst Simon's graph was showing that even though we have continued to miniaturize, the physics that meant smaller transistors used less power stopped holding. So power use has plateaued. That means that the speed of our chips has plateaued. This is why chips today — and computers today — run at about the same speed they did even 15 years ago. And that really suggested just a complete change to me.

Horst Simon had shown that the nature of Moore's law had gone from a whole bunch of things improving all at the same time to the speed capping out. That led me to realize that actually, Moore's law — which used to benefit everyone who's running programs — was now really differentially benefiting people who had parallel programs and much less the people who were doing things serially. And unfortunately, you know, most of the things we're doing are serial. That led me to do my PhD work showing that, indeed, there was a break there.
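
To see why serial code stopped benefiting, consider that with clock speeds flat, extra transistors mostly become extra cores, and extra cores only help the parallel fraction of a program. The sketch below is an editorial illustration using Amdahl's law, not an example from Thompson's work:

    # Amdahl's law (editorial illustration): n extra cores only speed up
    # the fraction p of a program that can actually run in parallel.
    def speedup(p, n_cores):
        return 1 / ((1 - p) + p / n_cores)

    print(speedup(0.10, 64))   # mostly serial code: ~1.1x, barely moves
    print(speedup(0.95, 64))   # highly parallel code: ~15x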

What could government be doing regarding tech that it isn’t?

This really comes back to my first answer about the pace of computer progress slowing. We need to be investing a lot more — and by a lot more, I mean ten, a hundred times more — in trying to figure out what the next era of computing looks like. That's because computing has already been slowing for more than 15 years. But it's worth a ton to the U.S. economy.

And right now, we're not investing anywhere near enough in thinking about how to improve those things. We've started to take good first steps with the recent CHIPS Act. But there's much more we need to be doing in terms of post-CMOS technologies [the circuitry designs and fabrication processes needed to make the next generation of semiconductors].

The history of computing is one that the United States has been incredibly dominant in. A huge proportion of the algorithms that have pushed computing forward have come out of the United States. Many of the biggest supercomputers have been here. That overflows into all these other areas of society and gives them benefits.

We're actually really losing that lead. We need to make sure we have good secure factories that can produce cutting-edge semiconductors. The CHIPS Act covers that. And people are starting to invest in some of these post-CMOS technologies — but it just needs to be much more. These are incredibly important technologies.

This is not just about AI — these are some of the fundamental things that define what the next generation of computers will look like. Could we make them 1,000 times more power efficient, 100 times faster? Those kinds of questions.

What has surprised you most this year?

I'm sure you get this answer frequently, but it would be how effective large language models have been. It was clear that these deep learning systems were growing very rapidly. And indeed, we have done some work in the lab on those. What we've seen is that there's enormous escalation in the data — and therefore the cost — of these models. And, boy, has that escalated quickly. OpenAI recently said GPT-4 cost more than $100 million. So that's pretty shocking.

But at the same time, it is truly remarkable the number of things that they can do — how useful that is to people. I'll give you a very practical example that I thought was really interesting.

One of my students, when he's trying to make an argument in a paper, will put the paragraph into ChatGPT and say, “What would a reviewer in economics say if they read this and wanted to critique it?”

And so it will then give an argument. Sometimes he'll agree with it and sometimes he won't. But he said, typically, he'll get at least one or two good ideas of ways to better communicate what he’s written. And that seemed to me like an incredibly interesting usage that I hadn't ever thought of.
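
For readers who want to try the same trick programmatically rather than in the ChatGPT interface, here is a minimal sketch using OpenAI's Python client as it existed in mid-2023; the model name, prompt wording and placeholder paragraph are illustrative, not from the interview:

    # Minimal sketch of the "simulated reviewer" prompt via OpenAI's Python library
    # (pre-1.0 ChatCompletion interface). Prompt wording and model are illustrative.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder: supply your own key

    paragraph = "...the draft paragraph you want critiqued..."

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "What would a reviewer in economics say if they read this "
                       "and wanted to critique it?\n\n" + paragraph,
        }],
    )

    print(response.choices[0].message.content)  # the simulated reviewer's critique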

 


 
 
about that drone...

Twitter was abuzz yesterday after a (now-deleted) viral post reported that a U.S. Air Force drone might have — oops — sought to kill its human operator as part of a software simulation.

The report, noted in a brief from the U.K.'s Royal Aeronautical Society, was quickly denied by the Air Force, which said its colonel's description of the incident was “taken out of context.” But as I read this, another thought occurred to me: Isn't this kind of “deadly” experiment exactly what AI testing is for — to ensure that such behavior is observed, planned for, and programmed out before it exits a software simulation and enters the real world?

I pinged AI expert Louis Rosenberg, who I spoke with earlier this week to clarify the most pressing dangers around AI implementation, and posed the question to him. As it turns out, the answer is “yes,” but that’s only the very first step in a long, fraught, potentially dangerous process.

“The whole reason the Air Force is doing this research in simulated environments is to discover the unexpected consequences of their training methods,” Rosenberg wrote.

“Every AI system is trained using a reward structure that drives the AI to optimize its ability to achieve goals, but the wrong reward structure could lead to unexpected results like the drone killing the operator in the simulated test. Now that this ‘negative outcome’ has been discovered, they can adapt their training method and avoid such consequences.”

Great! Well… for now: “The problem, of course, is that there are an unlimited supply of unexpected consequences. I don't think anyone can find them all. This is generally called the alignment problem.” — Derek Robertson

tornado season

The U.S. Treasury building. | AP Photo

The dizzying legal saga around the U.S. Treasury’s decision to sanction the cryptocurrency mixer Tornado Cash is continuing to heat up.

The crypto think tank Coin Center sued the Treasury in October, saying the punishment is illegal, and now the major crypto industry group the Blockchain Association is backing it up with an amicus brief. Coin Center's case is based on the assertion that, as it wrote in an April blog post, “a peer-to-peer transaction likely cannot be subject to the relevant reporting and KYC requirements specified in the Bank Secrecy Act” because crypto platforms that enable peer-to-peer exchanges are definitionally not “financial institutions.”

“Punishing [Tornado Cash] itself simply because it can be used by anyone, including bad actors, runs contrary to the values this country was founded upon,” Blockchain Association President Kristin Smith said in a statement. “Blockchain Association stands with Coin Center, advocating for the responsible and lawful use of blockchain technology. Regulatory actions should only be targeted at bad actors who abuse this tool for illegal purposes.”

The Treasury’s initial sanctions were inspired by the revelation that Tornado Cash had been used to launder more than $7 billion since its creation in 2019, including nearly $450 million from a group of notorious North Korean hackers. The Lawfare blog reported this month that the legal challenges presented by the lawsuit could have major ramifications for the federal government’s ability to regulate smart contracts. — Derek Robertson

Tweet of the Day

Guy: Elon, you’re the CEO of Tesla, SpaceX, Neuralink, the boring company, owner of twitter, father of 69 children… do you have any spare time to serve as the moderator for America’s culture wars? Elon: Sure sounds like fun!

the future in 5 links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); and Steve Heuser (sheuser@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


 
 
 

 

