Seemingly from the first moments that members of Hamas began their attacks over the weekend, murdering and kidnapping hundreds of Israeli civilians, the internet erupted into a state of informational chaos. Different posts and platforms offered competing versions of what was happening on the ground. Horrific images and videos proliferated. Ostensibly authoritative sources disagreed about what happened and who was responsible. The CEO of one Israeli social media monitoring company told POLITICO's Joseph Gedeon that the conflict has generated three to four times more online disinformation than any other event his firm has encountered.

Much of that content is spreading on Elon Musk's X, formerly called Twitter, which has loosened content restrictions and cut resources used to police the platform since the mogul took it over last year. In Europe, where expression is more tightly regulated than in the U.S. and where Musk could face fines of up to 6 percent of X's global annual revenue, authorities have already sprung into action. Today, as POLITICO's Mark Scott reports, the U.K.'s online content regulator is set to meet with social media companies about the conflict, while the European Commission has given Musk 24 hours to clean up graphic videos and disinformation linked to the attacks or face possible fines. On Monday, The Information reported that the company had recently stopped using software it once employed to track coordinated information campaigns on its platform.

Former employees panned the platform's performance. "X has failed miserably at the basic job Twitter used to excel at — keeping people informed," the platform's former vice president for global communications, Brandon Borrman, told DFD. "If they were looking to build the world's most efficient market for misinformation, it seems they succeeded."

"All of these wounds are self-inflicted," said Nu Wexler, who previously headed up policy communications for Twitter in Washington. Wexler said cuts to the platform's trust and safety team and the advent of paid verification are contributing to the confusion on the platform and across the internet.

On Monday, the company touted its moderation approach in a post, saying it was taking additional steps to remove hateful content and encouraging users to add context to controversial posts through its Community Notes feature.

But the confusion of the past few days goes far beyond Twitter, and offers a glimpse of something bigger, and perhaps more unsettling, about how the most momentous and contentious global political events could play out from now on. Though "truth is the first casualty of war" is a hoary maxim by now, the online media environment has gotten ever more freewheeling in recent years, giving more of the public more direct exposure to the rumors, lies and graphic violence that come with conflict. Whatever its flaws, Twitter now sits at the center of an information environment in which even authoritative sources can push out their own misinfo, whether on purpose or by mistake.

At the U.S. State Department, both the Office of Palestinian Affairs and Secretary of State Antony Blinken issued tweets that seemed to call on Israel not to retaliate, then deleted them. In Blinken's case, he posted a new tweet that removed the language supporting a ceasefire.

Conflicting reports have also emerged about several details of the attack. Some sources, including Sen. Ted Cruz and President Joe Biden, have referenced claims that Hamas members raped some of their victims. But questions about the veracity of the claim remain: the Los Angeles Times retracted a mention of the alleged rapes from a column yesterday, saying the reports were unsubstantiated. The Israel Defense Forces, the country's military, has relayed an account from one of its soldiers who reported finding babies decapitated by Hamas. That claim has spread widely and encountered skepticism, but an IDF spokesperson told Insider the military does not plan to seek further corroboration of it.

The precise role of Iran's government has also been a subject of debate and confusion. Many politicians freely blamed Iran or U.S. Iran policy in the immediate aftermath of the attack, and The Wall Street Journal, citing unnamed leaders of Hamas and Hezbollah, a militant group backed by Iran, reported over the weekend that Iranian officials approved and helped plan Saturday's attack. The Washington Post, citing unnamed former intelligence officials, reported Monday that "Iranian allies" provided support for the attack. But ABC News reported that Israeli officials have "backtracked" from their own initial public statements implicating Iran in the attack, and The New York Times, citing unnamed American officials, has said the U.S. has intelligence indicating the attack took Iranian officials by surprise. At the moment, internet users can find reports, many of them contradicting one another, to support a range of stories about Iran's relationship to the attack.

As for threatening posts, look no further than the government of Turkey: its deputy education minister, Nazif Yilmaz, tweeted "You will die" at Israeli Prime Minister Benjamin Netanyahu yesterday.

So it comes as no surprise that there are calls for X and other platforms to impose more restrictions. But it's not at all clear how such restrictions would solve the larger problem. When it comes to policing the most sensitive conflict-related content, Shannon Raj Singh, a former human rights counsel at Twitter, said platforms should look to the existing legal frameworks that govern how governments handle conflict-related media as guideposts for developing content policies. The Third Geneva Convention, for example, binds signatories to protect prisoners of war from "insults and public curiosity," a rule sometimes interpreted to prohibit the dissemination of most images of POWs. The conventions also ban the "humiliating and degrading treatment" of hostages, a prohibition that's become especially relevant in the middle of what is perhaps the biggest hostage crisis of the social media era.

"On the platform side, these are really tough decisions because some of the content is being shared to raise awareness of crisis situations," Singh said. "When you have these tough decisions, it's all the more reason for platforms to look to existing international legal frameworks to ground their content moderation policies."

Rebecca Kern contributed to this report.