Twitter deployed a “vast array of tools” to limit the impact of former President Donald Trump on the platform, even before the company decided to ban him following the Jan. 6 Capitol riot, according to journalist Matt Taibbi, citing internal Twitter documents Friday.
The decision to ban Trump was made not solely due to his actions, or those of his supporters, on or around Jan. 6, 2021, but “over the course of the election and frankly the last 4+ years,” one Twitter executive — whose name was redacted by Taibbi — wrote in an internal message, according to Taibbi. While Twitter had several tools to limit the visibility of users’ tweets, most moderation decisions were either automatic and rules-based or handled by high-ranking executives on a subjective, case-by-case basis. Those executives met with more and more federal officials as time went on, particularly before the 2020 election, Taibbi reported.
1. THREAD: The Twitter Files
THE REMOVAL OF DONALD TRUMP
Part One: October 2020-January 6th
— Matt Taibbi (@mtaibbi) December 9, 2022
Twitter “had a vast array of tools for manipulating visibility, most all of which were thrown at Trump (and others) pre-J6,” Taibbi tweeted.
Following Jan. 6, 2021, Twitter’s former head of trust and safety Yoel Roth joked in an internal Slack message that he did not have enough “generic enough” meetings on his calendar to hide his “very interesting” meetings from view, according to Taibbi.
“DEFINITELY NOT meeting with the FBI I SWEAR,” reads one of Roth’s messages following Jan. 6, Taibbi reported. Another Twitter employee, whose name and profile picture on Slack were censored by Taibbi, simply responded “lmao.”
On Oct. 8, 2020, Twitter executives opened a Slack channel dedicated to election-related account removals, with a particular focus on “Very Important Tweeters,” or “VITs,” according to Taibbi. There was some friction between Safety Operations, a larger department within Twitter with a more rules-oriented approach to content moderation, and high-ranking executives like Roth and then-head of legal, policy and trust Vijaya Gadde, as the latter group often made moderation decisions “on the fly, often in minutes and based on guesses, gut calls, even Google searches, even in cases involving the President,” Taibbi reported.
In response to a member of Twitter’s marketing team asking if they could say Twitter detects misinformation through “partnerships with outside experts,” Policy Director Nick Pickles asked the unidentified employee to simply say “partnerships,” according to Taibbi.
“Can we just say ‘partnerships’ … not sure we’d describe the FBI/DHS as experts or some [Non Governmental Organizations] that aren’t academic,” Pickles replied, according to Taibbi.
On Dec. 10, 2020, Twitter executive Patrick Conlon announced internally that Twitter would be launching a new mode of suppression known as “L3 deamplification,” according to Taibbi. This mode, announced the same day Trump tweeted or retweeted roughly 20 posts challenging the outcome of the 2020 presidential election, was a label that came with an automatic “deamplification” of the tweet in question, limiting its ability to be shared, Taibbi reported.
While some team members asked whether to deploy the new tool right away, Conlon opted to wait until the following day, when the policy was slated to officially go live, according to Taibbi. The team had also applied several “bots” to Trump’s account, monitoring both his claims and the claims of connected entities, such as right-wing news outlet Breitbart, Taibbi reported.
“The significance is that it shows that Twitter, in 2020 at least, was deploying a vast range of visible and invisible tools to rein in Trump’s engagement, long before J6,” according to Taibbi. “The ban will come after other avenues are exhausted.”
Post written by John Hugh Demastri. Republished with permission from DCNF. Images via Becker News.
OPINION: This article contains commentary which reflects the author's opinion.