Happy Friday! And tomorrow, happy National Data Privacy Day. Today we have Christina Montgomery, IBM's chief privacy officer, taking on The Future in Five Questions. The century-old company was a tech giant before there was a tech industry, and these days its data-driven software powers everything from loan approvals to airline reservations — putting the company at the center of the discussion of both consumer data use and AI, which powers many of its products.

Montgomery, who is also chair of the company's AI ethics board, spoke with us about what a company with IBM's business model wants to see from federal data privacy regulation; what doesn't work about the existing proposal for a law; and — looking further ahead — humanity's role in stewarding emerging technology. Responses have been edited for length and clarity.

What's one underrated big idea?

The need to differentiate between low-risk and high-risk business models — or precision regulation in the data privacy space. I spend a lot of time talking about data privacy legislation, particularly now, given the activity over the past year in the U.S. with the ADPPA [American Data Privacy and Protection Act]. One thing I've noticed in those conversations is that often every technology for every company gets thrown into one bucket, regardless of our business models. Not every company is a platform company. Different business models pose very different levels of risk to consumers.

I put out a paper in November calling on policymakers to differentiate between low-risk business models that use data to deliver or improve a company's operations, products and services (called internal data monetization, or data valorization, in the paper) and a higher-risk business model, where companies use consumer data as a revenue stream, called external data monetization.

What's a technology you think is overhyped?

The metaverse, frankly, is overrated. We spent 2020 through 2022 living in these small virtual worlds, zooming into family gatherings, work and happy hours with friends. It reminded me that technology, no matter how good it is, is not going to replace human contact.

I do think the metaverse has a really valuable place in applications like gaming and training scenarios — where you learn something. But the way that the metaverse is being marketed as humans living in this virtual world, zooming into their workplace as avatars in a virtual conference room… I don't really think that's where we're headed as the human race.

What book most shaped your conception of the future?

"Cloud Cuckoo Land" by Anthony Doerr. It's a saga connecting multiple storylines spanning hundreds of years — the past, present and future. The timelines are interconnected through a single book that survives through the generations thanks to the care and stewardship of humans.

So, I'm an English major, and the book is actually dedicated to librarians, which I think is fascinating. It really focuses on the lasting power of books — that books survive technology and tell our story. To me, that's a very powerful concept. I don't want to give away the ending, but there is an AI in the future called Sybil, and we see the limitations of that technology juxtaposed with the resilience of humanity.

What could government be doing regarding tech that it isn't?

We need to focus on passing national privacy legislation that both protects consumers and doesn't stifle innovation. The rest of the world is passing comprehensive privacy regulations. And by the end of the year, five U.S. states will have enacted privacy legislation covering about 60 million Americans. We look like an outlier as a country if we can't get to it now.

The ADPPA is a great starting point, but there are some areas in the bill that have prevented us from outright saying, "We support it." Part of the reason the bill didn't get the bipartisan support it needed to move out of committee and into the Senate is the private right of action [i.e., U.S. citizens' ability to enforce their rights through lawsuits]. We need the ability to enforce data privacy legislation, sure. But a private right of action is not the right path, because you're going to get a fragmented interpretation of what's essentially a new regulatory framework, driven case by case by plaintiffs' lawyers.

What has surprised you most this year?

The democratization of generative AI like ChatGPT, in terms of the tangible ways people can interact with it and the discussions surrounding it. It has reinforced the things I've been talking about for over three years — namely, the importance of embedding ethical principles into AI, because AI doesn't have any kind of moral judgment or compass.

Generative AI will become a tool for people to use as part of the creative process (although I don't think it will replace creativity). We need to have those conversations now, as well. What does this mean for people who have been contributing their knowledge to the internet over the years? You've got an AI that's scraping the web — so what are the legal rules around web scraping that websites should follow? It's also bringing up issues surrounding intellectual property — about crediting journalists and artists, about ownership over your own creations and your pictures.