Happy Friday! For this installment in our weekly feature, The Future in Five Questions, we examine the role of ethics in a profitable tech enterprise with Kathy Baxter, the principal architect of Salesforce’s ethical AI practice. Salesforce’s software is ubiquitous in American business, with its customer relationship management (CRM) services embedded in more than 150,000 companies, and it’s Baxter’s job to try to keep that technology fair. Read on to hear her thoughts on the hazards of emotion recognition, responsive AI regulation and what makes for good business. Responses have been edited for length and clarity.

What’s one underrated big idea?

Responsible AI. We need a lot more companies recognizing that responsible AI isn't just a regulatory need; it's good business. Earlier this year, DataRobot released a survey of more than 350 U.S.- and U.K.-based business executives and development leads who use or plan to use AI, and the results were pretty shocking. They found that 36 percent of respondents had suffered losses due to AI bias. Those respondents lost revenue, customers and employees, and incurred legal fees. Even if the kind of AI you're building doesn't fall under regulations, it's just good business to know whether the data that you're using to train and build your models is representative of everyone it impacts. Is there systemic bias in it? Are you making predictions or decisions that are harmful to any group? This is something every company should be investing in.

What’s a technology you think is overhyped?

Emotion recognition or emotion detection. Research over the last decade has demonstrated that automated emotion recognition or detection is biased and inaccurate. It tends to be based on pseudoscience. So when it's used for consequential decision making, like employee monitoring, surveillance or trying to determine whether someone is lying, real harm can occur. We saw over the summer that Microsoft announced it is removing emotion recognition features from its facial recognition technology. That’s huge. Earlier, in 2021, Google also said that it was blocking any new AI features that analyze emotions, because of fears of cultural insensitivity. Then IBM also deprecated its tone analyzer. We're seeing major tech companies that have been working in this technology for years say, “This is potentially harmful. It's not equally accurate for everyone,” and stepping away from it.

What book most shaped your conception of the future?

I read Ursula K. Le Guin’s “The Left Hand of Darkness” in college. Having grown up in the rural South, it just was mind-blowing for me. The concept is that the main character visits a planet where the people change gender throughout their lifetimes. Imagine if we were all able to truly experience what it's like to be a different gender, a different race, or from a completely different culture. It would result in laws that are much more empathetic. We would have a better-functioning society. Resources would be distributed more equitably. So how do we bring this into technology? How do we create technology that can be empathetic and equally accurate for everyone?

What could government be doing regarding tech that it isn’t?

Nearly every government in the world is trying to craft regulations or best practices to help mitigate the harms of AI. But AI or ML is many different kinds of technologies, applied in many different kinds of contexts. It is an incredibly holistic technology.
And we're still debating the definitions of some of its foundational concepts. What do we mean when we say AI? What do we mean when we say something is biased? How do we know if something is actually explainable or interpretable? Trying to come up with standards for when something is safe enough for widespread use is just incredibly difficult to do. So we need more collaborations between government, civil society and industry, so that everybody is up to date and knows what research can be applied to regulation-making for an industry. For the next year, I'm a visiting AI fellow at NIST, helping develop a playbook for its AI risk management framework so that practitioners like me know how to use it.

What has surprised you most this year?

I had expected more AI regulation this year. But we are making progress, with the AI Bill of Rights, the New York City AI bias law and the proposed EU AI Act. So progress is being made, though I would always like for it to be a little bit faster. Again, it’s incredibly difficult to do, but I just can't stress enough the importance of governmental collaboration and information sharing. We have to stop arguing over definitions. Okay, maybe something isn't a perfect definition, but it’s the one we're agreeing on. And let’s move forward with that.