Meta has had its fair share of human rights issues in the company's history, from the Rohingya massacre to Cambridge Analytica. So it's only natural that the human rights community would be skeptical of its promise to revolutionize the way we use the internet itself via the 3D overlay on the world that the metaverse promises. Whether the company can prove those skeptics wrong might come down to what trade-offs it and its fellow virtual worldbuilders are willing to make. For now, they're somewhat predictably keeping their cards close to their chests.

Yesterday evening, Meta's director of human rights Miranda Sissons shed some light on the subject during a panel discussion disquietingly titled "Human rights and the metaverse: are we already behind the curve?" Sissons first touted the potential for AR/VR technology to improve quality of life in the real world through its uses in fields like automotive safety and medical diagnostics. But that's not the "metaverse." And when it comes to the rules for the new virtual spaces Meta is building, well… they're contingent.

"Many of the salient risks are related to our behaviors as humans," Sissons said. "And many of those behaviors can be mitigated or prevented through guardrails, standards and design principles, as well as design priorities."

But what are those principles, exactly? The human rights community provides a slew of formal tools for evaluating the impact of any given technology and preventing harms like those in the Rohingya and Cambridge Analytica cases, and Sissons argued that companies should follow the frameworks for human rights compliance put forth by groups like the United Nations and the World Economic Forum.

The Electronic Frontier Foundation's Katitza Rodriguez, who has called for strict rules around the kind of data companies' devices can collect and store, including potential "emotion detection," also attended yesterday's session virtually. She said Sissons' vision might require Meta to make some uncomfortable trade-offs.

"You have to educate and train engineers, marketing teams, etc., on the importance of human rights and the consequence of their product to society," Rodriguez said. "It's hard, but it's important… How to mitigate human rights risks? Avoid including face recognition in the product. These are difficult choices to make."

And there's no shortage of examples of what happens when these choices don't get made early in a technology's lifespan. "What we've learned from other immersive worlds like video gaming is that the norms that are set early on really define the culture of the space," Chloe Poynton, the panel moderator and an advisor at the consulting firm Article One, told me afterward.

Daniel Leufer, a senior policy analyst for Access Now, argued passionately on the panel against the frequent refrain that regulators can't possibly keep pace with the development of new technologies, saying "often very basic things like data protection, transparency, access to information, do so much work." Brussels, where Leufer is based, has clearly caught on to this notion with its raft of data privacy and AI regulations in recent years.

As hazy as Meta's promises might be at the moment, there are in fact signs that regulators stateside are catching up: this week's surprise bipartisan draft bill on privacy is beginning to provide clarity around who has the power to set and enforce privacy laws.