5 questions for Samuel Hammond

From: POLITICO's Digital Future Daily - Friday Jan 05, 2024 09:02 pm
How the next wave of technology is upending the global economy and its power structures

By Derek Robertson

Samuel Hammond speaking at the 2023 National Conference of State Legislatures. | National Conference of State Legislatures

Hello, and welcome to this week’s edition of the Future In Five Questions. To kick off 2024, I invited the Foundation for American Innovation’s senior economist Samuel Hammond to answer our questionnaire. Hammond will be a familiar voice to DFD readers: He recently argued in POLITICO Magazine that we need a Manhattan Project for AI safety, since artificial intelligence capable of matching or surpassing most humans could emerge in less than a decade. Here we spoke about the false promise of synthetic meat, the mission of capitalism, and the desperate need for government to catch up to the AI era. The following has been edited and condensed for clarity:

What’s a technology that you think is overhyped?

Cultured meat and plant-based meat substitutes. Environmentalists and opponents of factory farming have hyped synthetic meat for a decade now. And to their credit, the strategy makes sense. Humans are omnivores and not about to be scolded into becoming vegans. But if a delicious steak or hamburger can be grown in the lab and sold at an affordable price, people might switch over automatically. Many startups were launched around this thesis, and millions of dollars in venture capital were raised, but investors have started to realize the technology simply isn’t ready for commercialization. Take Beyond Meat, the maker of the plant-based Beyond Burger. Their stock was trading at $140 a few years ago but has since collapsed to under $9 a share, and they’re now reportedly in survival mode.

There’s still a chance that meat substitutes will eventually work with more R&D and advancements in synthetic biology, but these recent efforts were badly mistimed. Investors simply underestimated the engineering challenge of replicating the full taste and texture of real meat in all its variations. Until that threshold is crossed, I suspect cultured meats and other meat substitutes will remain a super niche product, without anywhere near the consumer demand needed to justify tech industry valuations.

What’s one underrated big idea?

Shareholder primacy. Stakeholder capitalism is in vogue these days, but I think it fundamentally misunderstands what corporations are for. As the legal scholar Henry Hansmann pointed out, all companies can be thought of as a kind of co-op. Just as milk producers may form a supplier co-op to pool their milk, joint stock corporations are like a lender co-op; investors pool their capital at a discount with the promise of a future return. What they have in common is that both milk and financial capital are homogenous (no pun intended). It’s straightforward to share profits in a milk co-op by simply calculating who contributed the most milk. Yet for more complex forms of production, it’s not always clear how to assign credit. Shareholder corporations solve this by sharing profits according to who contributed the most capital. Ownership therefore flows to wherever the negotiation costs are lowest. Indeed, without the joint stock corporation, it would be impossible to build and finance anything of significant scale or complexity.

The view popularized by Milton Friedman – that the only obligation of a corporation is to maximize shareholder value – has misled the debate. We should instead think of capitalism as a structured competition, like a football game, where companies compete aggressively on product quality and cost to create value for society as a whole. If you maximize profits by polluting a river or defrauding your customers, you aren’t playing fair. But these rules should be enforced through laws and regulatory referees. If you instead make the environment or your customers an explicit “stakeholder,” it corrupts the purpose of corporate governance, as the interests of different stakeholders often cut in different directions and aren’t easily weighted by relative contribution.

What book most shaped your conception of the future?

Robert Wright’s “Nonzero: The Logic of Human Destiny” and “The Evolution of God” had a big influence on me. “Nonzero” is all about the logic of positive-sum, win-win games in steering human history. Take globalization. If there are large potential gains from trade between two or more countries, there will be a kind of economic gravity that pulls them toward mutual cooperation and integration. “The Evolution of God” applied that same lens to the history of religion. The Christian message of loving thy neighbor emerged during a period of regional globalization, for example.

Yet we can’t take this analysis for granted, as if there were an inevitable arc to history in favor of economic interdependence and peace. Interdependence can also be weaponized, globalization can impose costs on workers who aren’t automatically compensated, and economic integration can creep into forms of political integration that undermine popular sovereignty and invite a backlash. These are lessons we’re now learning the hard way. Therefore, my conception of the future, and of the historical process more generally, hinges on whether our institutions can appropriately adapt to avoid unnecessary conflict and ensure the next stage of technological disruption is truly win-win.

What could the government be doing regarding technology that it isn’t?

The U.S. government is in desperate need of full-stack modernization. By that, I don’t just mean upgrading IT systems or tweaking the procurement process on the margin. Rather, I suspect the advent of artificial general intelligence will require our institutions to be redesigned from the ground up. The last time this happened was in the early 20th century, when the industry leaders of the Progressive Era brought the new science of management to bear on the modern administrative state. We need something analogous for the AI era: a pathway for leaders in the tech industry to enter government, cut through antiquated processes and procedures, and apply their knowledge of how to implement technology at scale.

What has surprised you the most this year?

I was pleasantly surprised by the inclusion of a compute threshold in the White House’s executive order on AI. The EO invokes the Defense Production Act to require anyone training an AI model with 10^26 or more floating-point operations (FLOP) of compute to disclose their safety testing to the Department of Commerce. For context, that is roughly two orders of magnitude more compute than has been used in training any AI model released to date. It suggests the White House is looking ahead and taking the risks from scaling up powerful, AGI-like systems seriously, rather than being exclusively focused on near-term risks like bias and discrimination. While the EO has its critics, it represents a far more rational and light-touch approach to AI safety than, say, the EU’s AI Act. Whether or not it goes far enough, at least someone is paying attention.

worlds collide

European Commission Vice President Margrethe Vestager speaks during a press conference on artificial intelligence. | Pool photo by Olivier Hoslet

The European Union’s digital chief is coming to America.

POLITICO’s Edith Hancock and Giovanna Faggionato reported this morning (for Pros!) on an impending visit from EU Commissioner for Competition Margrethe Vestager, who will meet with chief executives of Apple, Google, Nvidia, OpenAI and Broadcom after speaking at next week’s Tech Antitrust Conference in Palo Alto, California.

The trip is Vestager’s first to the United States since she returned to her position as vice president of the European Commission in December after an unsuccessful bid to lead the European Investment Bank. The Danish trust buster’s schedule includes conversations with OpenAI’s Chief Technology Officer Mira Murati and Chief Strategy Officer Jason Kwon as the EU contemplates the final stages of its planned AI Act.

to the cosmos

A high-profile group of AI and policy thinkers is launching a non-profit that will try to build a philosophical framework for AI development.

In a Substack post yesterday, Brendan McCord, a visiting fellow in philosophy at the University of Oxford and former Department of Defense technologist, announced the launch of the Cosmos Institute to build a “framework that identifies how AI could enhance human freedom and excellence, while being realistic about trade-offs and downsides.”

McCord writes that while the group’s concerns about AI risk extend to the catastrophic and existential, its primary “risk concerns are along humanistic lines.”

“How can we avoid a reduction of human freedom, diminished expectations for human excellence, a loss of meaning and ordinary purpose, and the erasure of human dignity, not to mention panic, disorientation, and upheaval?” he writes. “What elements of our humanity can technology enhance? What elements do we risk losing?”

The institute’s founding fellows also include the influential AI writer Jack Clark, the Oxford philosopher Philipp Koralus, and Tyler Cowen, the George Mason University economist and author of the Marginal Revolution blog.

Tweet of the Day

Study by @KatjaGrace et al of nearly 3,000 AI researchers finds that the estimated AI timelines have shrunk. And many researchers think that the risk of extinction from AI is substantial. However, there's a lot of disagreement and also some inconsistency between questions.

THE FUTURE IN 5 LINKS

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com), Daniella Cheslow (dcheslow@politico.com), and Christine Mui (cmui@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.


Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser


