As algorithms take on an increasingly large share of society’s decision-making, a leading nonprofit is setting up a lab to track their impact on the actual humans at the end of those decisions. In May, the independent nonprofit research organization Data & Society launched the Algorithmic Impacts Methods Lab, or AIMLab, to develop ways to study how automated decision-making systems affect society.

Holding algorithms to account when they harm people has been a tough challenge for policymakers. For one thing, even public-sector decision-making software is often privately owned, a corporate black box that’s hard to see into. And comprehensive data about the impact of algorithms can be difficult to collect. AIMLab intends to fill that gap with original research, and by collaborating with other nonprofits to put useful data in the hands of policymakers and industry leaders.

We spoke to AIMLab’s head of research, Tamara Kneese, in an exclusive Q&A about her goals for the initiative, which include documenting how human labor is affected by the widespread use of AI and measuring the environmental impact of deploying AI models. Kneese was previously lead researcher at the Green Software Foundation; she has also worked at Intel, and was an assistant professor at the University of San Francisco. Responses have been edited for length and clarity.

Data & Society has been around for almost a decade now. How have things changed since then, and why is it launching a new lab to determine the impact of algorithms?

Data & Society was founded in the 2013–2014 era, when we were thinking about mitigating the harmful effects of data collection and surveillance on a mass scale. It was all about the various implications of big data. It's funny how the terminology has shifted over the years. So “big data” turned into conversations about platforms and algorithms. Now, AI is a kind of shorthand for the kinds of questions that people are having about power relationships, effects on labor, and effects on privacy.

I was a professor at the University of San Francisco for five years, then left my job there to take a position at Intel as part of a very new sustainability team. It was eye-opening to see how decisions actually get made and implemented in a corporate context. I was interested in understanding what it would take to transform the culture from within. I also have a background as a labor organizer.

With the lab, I see a way to bring a rigorous academic approach to understanding algorithmic impacts. I also want to have the ear of sympathetic insiders within industry to make sure that whatever recommendations we're making can actually be implemented. The other important aspect is really centering workers and those who are going to be most impacted by the technologies that are being developed and deployed. I'm really interested in the expertise and knowledge of workers, rather than the siloed approach that happens within tech companies themselves.

Quite a tightrope to walk. What’s your first project going to look like?

I've been paying very close attention to what Veena Dubal calls algorithmic wage discrimination, and thinking about partnering with groups like Rideshare Drivers United or Gig Workers Collective to figure out — especially in different regional contexts — how algorithmic wage discrimination is happening and what to do about it.

I’ve also been fully immersed in climate tech for the past couple of years. I'm interested in looking at the intersection of climate justice and labor rights within AI. I think it needs a lot more attention, as laws like the EU’s AI Act come down the pipeline. We're not going to get through this with a simple ethics checklist. And if we're thinking about carbon costs, maybe we're leaving out other kinds of environmental impacts across the supply chain. What are the trade-offs between environmental impacts and impacts on labor? These are the questions I hope we can start addressing right out of the gate.

The issue of algorithmic impact is at the forefront of a lot of minds in government, but it’s not very clear who is taking charge. What federal agencies do you think are most aligned with your goals?

Certainly the [Federal Trade Commission] is a natural ally. There are also people who were at one point affiliated with Data & Society and AI Now who have moved between working for the FTC and their various nonprofits and academia. Also the National Institute of Standards and Technology (NIST), as they've been putting forth a lot of really great content on responsible AI.

Since you’re looking at this from the worker’s perspective, what's the main challenge facing workers in terms of widespread AI usage right now?

There’s a tension between wanting to save time or learn how to integrate particular tools into your workflow in order to make your job easier — which is the best you can hope for with generative AI — versus the fear of de-skilling and partial or full automation, as we see with things like content moderation. Rest of World just did a great deep dive looking at workers all over the world and how they're using generative AI to save time. At the same time, their jobs are being threatened by generative AI.

It’s tricky understanding how workers are using these tools, and what the trade-offs are. That’s something that will be really important to document because things are moving so fast. I think it’s important to document things as they happen, and make sure we have that data to look back on. I wonder what the landscape will look like in another year — what will have changed? What do we lose if we don’t have that ethnographic qualitative data now?