The interdisciplinary team leading the University of Washington’s Center for an Informed Public, from left to right: Ryan Calo, Chris Coward, Kate Starbird, Emma Spiro and Jevin West. (University of Washington Photo)

The University of Washington’s Kate Starbird has been immersing herself in election disinformation, and what she’s found is deeply troubling. By tracking tweets, Facebook posts and news stories, Starbird and her academic colleagues have documented an ongoing, long-term effort to sow worries that the 2020 vote is rife with fraud, laying the groundwork for some to reject the election’s outcome.

The potential damage, Starbird said, extends well beyond this one race.

“Democracy fails if we lose trust in the process. If we can’t trust the results of our elections, then we don’t have a democracy anymore,” said Starbird, an associate professor with the UW’s Center for an Informed Public (CIP), in a recent live-streamed discussion with colleagues.

Since its launch in the fall of 2019, the CIP has been tracking the sharing and promotion of misinformation and its more ill-intentioned cousin, disinformation: falsehoods that are deliberately deceptive.

Kate Starbird of the University of Washington’s Center for an Informed Public, at the GeekWire Summit in October 2015. (GeekWire Photo)

During the election, that’s included erroneous tales about California ballots being chucked into dumpsters (they were empty envelopes from 2018) as well as a recent story claiming that presidential candidate Joe Biden had suspect Ukrainian connections through his son (the tabloid newspaper relied on President Trump’s lawyer Rudy Giuliani as its source and the story has not been corroborated elsewhere).

This summer, the multidisciplinary CIP also teamed up with researchers at Stanford University, the Digital Forensic Research Lab, and Graphika to create the nonpartisan Election Integrity Partnership.

The group on Monday released a report spelling out how disinformation could disrupt Election Day. The scenarios include the spread of images of lengthy voting lines, COVID-19 fears and threats of violence to discourage people from going to the polls, as well as anecdotal successes and failures in the voting process that can be over-emphasized to back different agendas. The report includes advice for journalists and the public to limit the damage caused by these efforts.

The group operates as a SWAT-like rapid-response team for election falsehoods, quickly analyzing disinformation, tracking it to its sources and calling on social media platforms to flag or remove it.

Some of the key findings from the partnership include:

  • The evolution of a meta-narrative that in 2019 introduced the notion of a “color revolution,” the baseless claim that Democrats are trying to steal the election. The storyline weaves together disconnected events, including this year’s social protests and unsubstantiated claims of voter fraud, to support the false narrative, and it provides a framework for folding in random rumors and reports as “evidence” of the supposed revolution.
  • Compared with 2016, the spread of disinformation is coming less from foreign sources and more from domestic individuals and organizations.
  • Social media posts shake out into four general strategies for undermining the election: procedural interference, such as sharing wrong information on when and where to vote; participation interference, like raising worries about voter safety and encouraging unauthorized poll watchers; fraud, including encouraging illegal voting and stories about destroyed ballots; and the delegitimization of election results.

The researchers traced the origins of the color revolution narrative to Russian state-controlled media and other news outlets. It gradually gained mainstream traction with help from conservative strategist Steve Bannon, commentator Glenn Beck and a former Trump speechwriter in a Fox News interview. This month, a post about the color revolution was shared by Q, the ringleader behind the QAnon conspiracy theory community that believes Trump is battling a hidden cabal of Satanic pedophiles with Democratic ties.

Starbird and others with the partnership cautioned that widespread acceptance of the narrative creates the foundation for rejecting the election’s results and for confusion on Nov. 3 that’s ripe for exploitation.

That includes distrust of vote tallies if a candidate’s lead ebbs or flows as more ballots are counted. Experts use the term “blue shift” to describe a phenomenon in which a Republican candidate performs better in early results that include more in-person votes, while a Democrat’s numbers improve as mail-in ballots are counted. Citizens primed to believe that Democrats are masterminding a revolution could see the blue shift as fulfillment of that prophecy, rather than an artifact of which ballots are counted first.

Simply identifying and drawing awareness to the meta-narrative, however, is not enough to stop its social media spread.

“It’s a very significant challenge for platform companies,” said Renée DiResta, a research manager at the Stanford Internet Observatory, during a briefing this week. “There’s no one single isolated incident for them to send to their fact checking partners. And many of these videos and articles alleging this phenomenon rely on a litany of events strung together, requiring that each be assessed.”

But the social media platforms do have a role to play, and the partnership has been analyzing policies that companies are leveraging to try to make users aware of suspect information, guide them to reliable sources and outright ban the most egregious posts.

The misinformation and disinformation can be grouped into four types:

Chart via Election Integrity Partnership.

The group reviewed the platforms’ policies and rated their responses to the different types of posts in three categories: none, indicating no policy exists; non-comprehensive, meaning it’s unclear what sort of language is covered; and comprehensive, indicating the policy is explicit about what language is covered.

The partnership conducted its policy analysis in August and updated it this week; red entries in the chart indicate policies that changed in the interim. The approaches vary widely:

Chart via Election Integrity Partnership.

While Facebook and Twitter both have comprehensive policies for addressing problematic posts, their approaches to evaluating content differ. Facebook, which in the chart includes Instagram, is partnering with outside sources, including the nonpartisan PolitiFact, to help with its fact checking. Twitter makes the call using in-house expertise, an approach one expert described as “ad hoc.”

The researchers agreed that the platforms have improved over time, taking more aggressive steps to police false information. One notable change is the decision to label or pull posts from political leaders as well as average users.

An ongoing challenge is the speed at which a platform responds to a tweet or post that violates policy. Earlier this summer it could take a site four hours to respond; that has dropped to about an hour more recently, Starbird said, but even that can be too long. For a Twitter account with a huge following, a false tweet can spread widely in a matter of minutes, and by then the harm is largely done.

Even as these policies are more aggressively targeting disinformation, there’s a bigger, underlying problem to consider, said Starbird, who is an associate professor with Human Centered Design & Engineering at the UW.

“We need to think about, ‘Why do these companies have so much power on our democratic discourse and we have so little ability to shape what’s happening there?’” she said during a recent panel discussion. “It still seems that there’s something out of balance here in our society.”

Editor's Note: Funding for GeekWire's Impact Series is provided by the Singh Family Foundation in support of public service journalism. GeekWire editors and reporters operate independently and maintain full editorial control over the content.
