A bit of explanation is probably in order. The slang term “based” is frequently used in extremely online political circles as a rough antonym to “woke,” describing any form of right-wing political speech or action that sufficiently shocks liberals.

And sure enough, as the richest man in the world is wont to do, Musk apparently plans to turn his fantasy of a “based” AI interface into reality: The Information reported yesterday afternoon that he has approached artificial intelligence researchers about building just that.

But as with most things at the messy intersection of politics and tech, there’s no small amount of discord on the right about whether that’s a good idea. Matthew Mittelsteadt, a tech researcher at the free-market-oriented Mercatus Center, decidedly thinks it is not. He tweeted yesterday that AI is not only too expensive for companies to cater to ideologues à la carte, but that doing so would undercut their competitiveness in AI’s burgeoning global arms race. We spoke today about how early in the game it is for these systems, making any AI’s shakedown cruise an inevitable showcase for bias, and why introducing that bias on purpose would be an even bigger mistake.

“What Musk is proposing is intentional bias,” Mittelsteadt told me this afternoon. “The system he wants to create would be one that’s intentionally trying to serve a very limited, nationalistic, quote-unquote ‘based’ worldview. He’s proposing a system that is the very problem he wants to work against.”

Just after we published our report on ChatGPT’s ostensible political bias, its creators published an essay that went a modest way toward clarifying its behavior. As a brief example of that bias: When I asked ChatGPT to compose a poem celebrating Sen. Ted Cruz (R-Tex.), it refused, citing political concerns; it was happy to oblige when I asked for one lauding Rep. Ilhan Omar (D-Minn.).

“Our guidelines are explicit that reviewers should not favor any political group. Biases that nevertheless may emerge from the process described above are bugs, not features,” the authors wrote, explaining the process of human input and review that shapes the chatbot’s “rules” for what it can and cannot say. The company promised further tweaks and a review process that, even if it didn’t make everyone happy, would at least be more transparent going forward.

But people are upset now. Mittelsteadt argued to me that the furor over ChatGPT’s purported bias is a result of the technology’s extreme novelty, combined with our baked-in societal expectation that computer systems provide objective, black-and-white answers and solutions. “Expectations are too high, and somewhat divorced from reality,” he said. “Any expectation that these things wouldn’t have some form of bias is off base.”

It’s a technological catch-22: The more sophisticated these systems become, the more rigorous and objective we expect them to be. But the more powerful they are, the more we apply them to messy human problems that have no “correct,” calculable answer.

“These things can do more, they can deal with fuzziness that previous digital technologies would simply fail in the face of, and that is amazing,” Mittelsteadt said. “But reality is fuzzy, and not all things have clear answers. In some cases AI will make good decisions, but there are always going to be corner cases where it fails or doesn’t live up to human expectations, and we need to start getting used to that.”

So what’s the solution?
Mittelsteadt argued it’s partially just a matter of time: As the novelty of the technology wears off, engineers will figure out which behaviors users like, and practical uses for tools like ChatGPT will overshadow their potency as partisan footballs.

Then there are the incentives he described in his Twitter thread, as a global technology market will likely mean that parochial American culture-war issues take a back seat to the tech’s profitability. “If you want to serve foreign markets that don’t have our opinions on what is or is not ‘based,’ you need to accommodate pluralism and low levels of cultural nuance,” he said. “These systems will focus more on facts, and be less inclined toward American politics and our particular culture wars.”

One catch: That scenario applies to the giant companies like OpenAI, Microsoft and Alphabet competing internationally. What about the little guy?

Mittelsteadt pointed out the gap created by programs like the Biden administration’s “Blueprint for an AI Bill of Rights” from last year, which establishes ethical principles that companies might someday have to abide by to qualify for government support. Right now those principles are fairly uncontroversial. But if AI becomes the kind of culture-war issue Musk seems to perceive, that could change, creating two classes of AI development: one encompassing the major companies that don’t need the government’s help anyway, and another for smaller developers encouraged to be “based” or otherwise, depending on which way the political wind is blowing.