The White House’s Office of Science and Technology Policy has a new taxonomy of “critical and emerging technologies” on which it wants federal agencies to focus. Or rather, an updated one. The list published late Tuesday night revamps and expands on a document last revised in 2022, regrouping and reclassifying some of the cutting-edge technologies in question and adding a focus on data privacy and cybersecurity.

The document’s priorities are suggestions rather than mandates. It tracks with some of the biggest talking points in tech policy on Capitol Hill: artificial intelligence (of course), quantum technology, and human-machine interfaces (like Elon Musk’s Neuralink project), among others.

Still, bureaucratic goal-setting can be a big deal. The Biden administration’s executive order on AI last year directed agencies to research the technology and plan for its implementation and procurement, and quantum is racking up billions of dollars in federal investment. The OSTP pitches the list as an effort to continue the nation’s tech leadership, establish international cooperation, and respond to security threats; for those reasons, it’s an informative window into what the Biden administration is prioritizing.

“It signals to agencies that hey, these are things you should watch out for,” Adam Kovacevich, founder and CEO of the Chamber of Progress, a tech industry coalition funded by corporations including Apple, Amazon and Google, told me today. “These are industries that are strategically important to the United States, and therefore we have an economic security argument and a national security argument for maintaining our competitiveness.”

An OSTP official responded to a request for comment by saying the list was written by a subcommittee that includes members from “across the federal government… focusing on core technologies that continue to emerge and modernize, and which remain potentially critical to a free, open, secure, and prosperous world.” The official additionally emphasized that it’s “not intended to serve as a list of priorities for either policy development or funding, but may be useful in informing government-wide and agency-specific efforts focused on promoting U.S. technological competitiveness and national security.”

With that in mind, it’s worth taking a look at some of the key areas listed (and the changes to them) and reviewing their recent developments. A few to begin with:

Artificial intelligence. The big one, naturally: The OSTP listed AI as a priority in 2022, with a fleet of subfields including machine learning and, generally, “safe and/or secure AI.” In keeping with the laser focus government has put on the technology since last year’s generative AI boom, the OSTP’s laundry list is now quite a bit more specific about the fields of AI development it thinks are worth exploring, listing foundation models, generative AI itself, and synthetic data approaches for training. Those are all in keeping with the Biden administration’s overall approach to AI, which has placed a large emphasis on safety (earning voluntary commitments from top AI companies to safeguard their most powerful models) and validated the concerns of AI activists who worry that training these models on real-world data might both endanger privacy and recreate existing biases within human-generated data sets.
Last month, the Federal Chief Data Officers Council put out a request for information on the creation of synthetic data, or computer-generated “fake” data, that could be used to train models without some of the risks involved. (A bare-bones sketch of the idea appears at the end of this item.)

Privacy and security. One entirely new subheading on the OSTP’s list is “Data Privacy, Data Security, and Cybersecurity Technologies.” The office uses the heading to consolidate some of the subject material included in the previous version of the document, like digital assets and ledger technologies, data fusion, and biometrics. But it also adds a hodgepodge of new subject areas: “distributed confidential computing,” an effort to secure the cloud that’s backed by the National Science Foundation; securing the computing industry’s supply chain; and security within augmented and virtual reality, the latest positive sign for a long-burgeoning push to get the metaverse on government’s policy radar. (Additionally, “spatial computing,” Apple’s preferred term for the technology, was added to the document’s “advanced computing” heading.)

Nuclear gets a downgrade. Recent years have seen both a renewed wave of interest in nuclear energy and uneven progress on new forms of it, like the deployment of small microreactors. Despite that, nuclear energy and fusion energy research got reshuffled in the OSTP’s new list from a category of their own to under the umbrella of “Clean Energy Generation and Storage,” next to green tech like renewables and hybrids. “Space nuclear power and propulsion systems,” whereby space vehicles would be powered by nuclear energy, are now absent from the list altogether.

Other technologies, like “human-machine interfaces,” advanced semiconductor development, and laser energy, remain largely the same.
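To make the idea of synthetic data concrete, here is a minimal, purely illustrative sketch in Python. It is not the CDO Council’s method or any agency’s, and the column names and figures are invented; it simply fits aggregate statistics on a tiny “real” dataset, then samples new records that mimic those statistics instead of reusing the original rows.

import numpy as np

rng = np.random.default_rng(seed=0)

# Pretend this is a sensitive "real" dataset: rows of (age, income).
# The values are invented for illustration only.
real = np.array([
    [34, 52_000],
    [29, 48_500],
    [45, 61_000],
    [52, 75_500],
    [38, 57_250],
], dtype=float)

# Estimate aggregate statistics of the real data.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample synthetic rows that mimic those statistics, so a model could be
# trained on them instead of on the original records.
synthetic = rng.multivariate_normal(mean, cov, size=1_000)

print("real mean:     ", mean)
print("synthetic mean:", synthetic.mean(axis=0))

Real synthetic-data programs use far more sophisticated generators and add formal privacy protections; this only illustrates the basic substitution of generated records for real ones.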
Critics of the United Kingdom’s AI strategy are worried that a big investment in the country’s supercomputing capacity is unlikely to pay off. In today’s edition of POLITICO Morning Technology U.K., Joseph Bambridge writes about the competition being held to host a half-billion-pound (roughly $628 million) supercomputer that will comprise at least 2,000 GPUs, joining the 5,000 already operating in a cluster in Bristol. But Joseph notes that a report from the Tony Blair Institute last year argued the U.K.’s AI Research Resource should be gunning for something more like 30,000, the better to compete with the 25,000 used to train OpenAI’s GPT-4, to say nothing of the hundreds of thousands Mark Zuckerberg hopes to accumulate to develop “artificial general intelligence.”

Amba Kak, executive director of the New York-based AI Now Institute, told POLITICO that “As governments start shelling out more public funds towards AI development, the real question needs to be: why? … Without a vision that defines public AI innovation outside of the terms set by industry, what we’re going to be seeing from these public options is more of the same.”