Programming note: Digital Future Daily will be on the ground at CES this week, providing coverage Tuesday through Friday. If you’re attending this year’s conference, drop DFD’s Derek Robertson a line at drobertson@politico.com.

LAS VEGAS — The country’s biggest consumer tech show, CES 2024, kicked off today in a very different world from last year’s, with a decidedly more singular focus: artificial intelligence and its seemingly overnight takeover of everything from the microchip industry to your car to, uh, your bird feeder.

The impact of the AI boom is an obvious bonanza for a trade show like CES, where the biggest tech companies in the world show off their wares in an appropriately glitzy Las Vegas setting. The national anxiety around the technology is also making the convention even more of a policy locus than it has been in recent years.

As companies show off a parade of AI-powered devices, there’s a parallel current of panels and discussions — as well as an influx of politicians and thought-leader types — wrestling with what this all means for America’s relationship with this new technology, and what, if anything, needs to be done about it.

“Companies are very focused on not only the user experience, but also what it means for society,” Consumer Technology Association President and CEO Gary Shapiro said on the POLITICO Tech podcast today.

It’s not just companies, of course: As AI devices infiltrate American homes, workplaces, schools, and pretty much everything else, governments and civil-society groups are scrambling to figure out the implications for privacy, copyright, and even mental health.

Overall, the industry’s message at CES this year is that it’s open to the conversation — more or less. Shapiro, when POLITICO’s Steven Overly spoke to him ahead of the conference, sang from a familiar tech-industry hymnal: We’re happy to work with government in the public interest, but don’t put the bridle on too tight.
“The view of the tech industry is, tell us what's legal, but don't make us ask permission for everything we do, because otherwise we'll lose out,” Shapiro said. “The U.S. for years has been an elite of innovation, we want to maintain that, but we also want to do the right thing.”

What does that look like in practice? Today, the CTA hosted an entire day of panels in cooperation with the American Psychological Association, exploring the ethical use and mental health implications of AI design and implementation.

In a morning panel on “Harnessing the Power of AI Ethically” featuring the APA’s chief of ethics and a handful of expert academics, the panelists struck an uneasy, cautious tone about the pace of the technology’s development compared to our understanding of its effects.

If “you want to look at the consequences of technologies and make sure that they're better than… harmful, it's very difficult when we don't know what the outcomes are,” said Nathanael Fast, a professor at the University of Southern California and director of the ethics-focused Neely Center.

The researchers were especially concerned about the implications of data collection, noting the extremely thorny trade-offs involved in designing devices that are meant to improve people’s lives but depend on voluminous amounts of the most sensitive kinds of data.

“Your children are using these systems. These companies are collecting that data about your child, and with transparency around this, people need some level of control over it,” said David Luxton, a professor in psychiatry and behavioral sciences at the University of Washington.
“It requires more than just a technological solution; it’s about values and ethics.”

Of course, one only has to look to the most recent society-shaping tech revolution — the rise of social media — for an example of the extent to which Americans are willing to not just tolerate, but celebrate and center their lives around, technologies that gather extremely personal data largely out of the reach of regulators.

After the panel I spoke with Dr. Juliana Schroeder, a University of California, Berkeley psychologist who worked with Fast to develop the Neely Ethics & Technology Indices, which track user experiences with social media and artificial intelligence. She drew a direct comparison between the two technologies, pointing to the well-documented “privacy paradox” users are willing to accept in using both.

That potential for AI to turbocharge trends in privacy and data collection is part of the reason government regulators are on the ground in Vegas in full force to discuss their relationship to the AI boom, in a series of planned panels across the remainder of the conference — which you’ll be hearing all about in this week’s editions of DFD.