AUSTIN, Tex. — The past year’s upheaval in the tech world has turned conventional wisdom on its head — transforming also-rans into celebrities and titans into turkeys, and turning even the financial system itself upside down. So naturally, the cadre of ambitious techies gathered at SXSW this year is ready to capitalize on the opportunity. And not necessarily just to make a quick buck, but to ask some bigger questions, like: What if we used this moment as a real catalyst for change?

Some even more ambitious digital futurists are pushing whole new ways to think about the internet — ways that account for the risks and possibilities posed by AI, the burnout and malaise social media users experience, and the constant reminders that our most personal information is woefully insecure.

All of which was the topic of a panel yesterday afternoon titled “Open Innovation: Breaking The Default Together,” at which two Mozilla Foundation higher-ups, along with an industrial designer, outlined their vision for an internet free of the economic structures and incentives that have driven so much of the past decade’s digital heartburn.

After the talk I spoke with one of the panelists: Liv Erickson, the team lead for Mozilla’s VR-focused Hubs project. We chatted about what’s at stake in how the internet is currently designed, and how we might make it better as technologies like AI and virtual reality reshape it before our eyes (sometimes literally).

The conversation has been condensed and edited for clarity:

When you talk about fixing the internet, you focus quickly on digital ownership. Why is it so central to these big questions about the internet’s architecture?

There’s a lot of evidence about how people are interacting with each other online, and it points to a very enthusiastic approach to content creation. People want to be sharing their experiences and talking about what's important to them. Right now, what we're seeing is that it's really hard to build audiences and have ownership over that content. That means that if a platform changes its terms of service or gets shut down because it's no longer profitable for that company, a lot of that content could just disappear.

Just last week one of the earliest social VR platforms, AltspaceVR, was shut down. People are mourning that experience because they lost not just videos and photos, but entire worlds that they've built, social connections that they've made, and versions of themselves. When we think about this next generation of the internet and what it can become, data ownership is a key component of it because of the immense amount of psychological and emotional attachment we have to our online identities.

What are the future risks to internet users you’re most concerned about?

Data collection is a big part of it. But then it's also about what applications are doing to respond to that. That's one of the reasons why generative AI is cool, but it's also scary. When you think about it on a longer-term, more dystopian horizon, what could people do with your information to immediately change the environment that you're in?

Philip Rosedale spoke on Sunday about ethics in XR, and he made a really good point that in the physical world we know, generally, when we're being advertised to. But being here at SXSW you're advertised to all the time, and you don't necessarily know that it's an ad, which is one of the things that I think about in terms of immersive worlds and XR technologies.
What happens when I think I'm interacting with a friend or a co-worker in VR, and it turns out it was actually just a bot that's developing a relationship with me, and just happens to always be online when I'm online, and starts telling me their political views, and I start to question whether those should be my political views? There’s a lot to deal with around the way information can be manipulated in virtual spaces.

What role does U.S. tech policy have to play in safeguarding the future of the internet?

When I look at what's being talked about in terms of data privacy, a lot of the words that are used to describe the types of personal information collected can kind of be [vague], like, “We're not actually collecting biometric data on a headset,” in many cases. But it's being inferred, and inferred data is not encompassed in some of these data privacy laws. It’s critical to think about this from a consumer protection perspective.

I did a policy fellowship with the Aspen Institute a couple of years ago, and even then I was imagining this world where advertisers could scrape my Facebook friends’ profile pictures, and generate a human that looked like one of my friends, and start using them in their advertising. I would have no way of knowing that that's what they were doing. It's not technically my personal information, but it's meant to play on my emotional experiences. I think that's an area the FTC could be looking at.

Has the rise of generative AI made your work seem drastically more urgent?

I think that what we are able to do with these tools is incredible. I also want people to go another level deeper and understand the full scope of how they might be used for harm, and I think that’s usually where the conversation stops — at “I did this super cool thing.” I have friends and colleagues who will come and say, look, I made this cool art where I collaborated with an artist via generative AI and it's like… is that a collaboration if the other human’s not in the loop?

What do you think of the idea that blockchain can be a solution to digital ownership problems?

I don’t think tech in and of itself is ever really a solution. I am a big proponent of people owning the value they're creating, so I think a lot of the principles of distributed systems are really key and powerful. Something that's exciting about Web3 spaces is that more people are recognizing that it can be a tool for them to take back creative control and ownership over what they're doing. I also think there are places where it's being pushed as a solution to problems that it's not actually going to solve. Whenever you take a technology and say this technology is going to solve a human problem, that’s when my alarm bells go off.

Is there a simple rule observers or policymakers can apply to tell whether a new technology or platform is designed with users in mind, in the way you describe it?

How it generates revenue. That forces people to talk about the decisions they’re making, in terms of whether they’re selling data or not. And whether a technology is able to speak to a core, underlying need: What is this solving for people? What does it give them in their day-to-day lives?

The most dangerous trap that we can set for ourselves is saying that we have to do things because it's the way we've always done them. This is a key moment, and I want as many people as possible to question that as we learn about these new technologies.
The software that’s creating these virtual worlds is giving us the ability to try new things and say, “Actually, you know what, I liked that better. I liked that version of me better.”