This graph shows the most-used hashtags during the Boston Marathon bombings; the lines connect hashtags that appeared in the same tweet. Image courtesy of the University of Washington.

Twitter was both a useful and a detrimental tool during the 2013 Boston Marathon bombings. The service helped people find information faster than any other medium, but plenty of the content shared in tweets was flat-out wrong and misleading.

A team of researchers from the University of Washington wants to develop a tool that wouldn't necessarily tell us when tweets are bogus, but would instead flag, in real time, specific tweets that are being questioned as untrue.

During last year’s bombings, Boston police asked the public to help identify suspects after releasing photo and video surveillance. That sparked a flurry of crowdsourced information associated with hashtags like #boston and #prayforboston, some of which was completely wrong yet still made its way around social media and caused more confusion.

The researchers, who received a top iConference award for their publication earlier this month, analyzed a robust set of data from Twitter when the bombings took place and found that “corrections to the misinformation emerge but are muted compared with the propagation of the misinformation.” Essentially, bad information continued spreading despite attempts by users to correct the rumor.

One inaccurate story that spread on Twitter last year claimed an 8-year-old girl had died in the bombings. The study looked at 92,700 tweets related to the rumor and found that nearly 98 percent of them spread the misinformation, while only about 2 percent were corrections.

To combat this problem, the UW researchers hope to create something that could let users know when a particular tweet is being questioned as untrue by another tweet.

“We can’t objectively say a tweet is true or untrue, but we can say, ‘This tweet is being challenged somewhere, why don’t you research it and then you can hit the retweet button if you still think it’s true,’” Jim Maddock, a UW undergraduate student in Human Centered Design & Engineering, told UW News. “It wouldn’t necessarily affect that initial spike of misinformation, but ideally it would get rid of the persisting quality that misinformation seems to have where it keeps going after people try to correct it.”
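To make the idea concrete, here is a minimal sketch of the kind of flagging Maddock describes: it doesn't judge whether a tweet is true, only whether other tweets appear to be challenging it. The correction cues, data shapes, and function names below are illustrative assumptions for this article, not the researchers' actual method or any Twitter API.

```python
# Hypothetical sketch: flag tweets that other tweets appear to dispute.
# Cue words and dict fields are assumptions, not the UW team's approach.

CHALLENGE_CUES = ("not true", "false", "fake", "hoax", "debunked", "unconfirmed", "rumor")

def is_challenge(text: str) -> bool:
    """Rough heuristic: does this tweet question another tweet's claim?"""
    lowered = text.lower()
    return any(cue in lowered for cue in CHALLENGE_CUES)

def flag_challenged_tweets(original_tweets, replies_by_tweet_id):
    """Return IDs of tweets that at least one reply or quote tweet seems to dispute."""
    flagged = set()
    for tweet in original_tweets:
        replies = replies_by_tweet_id.get(tweet["id"], [])
        if any(is_challenge(reply["text"]) for reply in replies):
            flagged.add(tweet["id"])
    return flagged
```

In a client built this way, a "this tweet is being challenged" notice could appear before the user hits retweet, leaving the judgment of truth to the reader, which is exactly the division of labor Maddock describes.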

The UW team plans to examine links within tweets and do further research to see if there are certain characteristics that help define misinformed tweets.
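The article doesn't describe what those characteristics are, but a hypothetical feature sketch gives a sense of what such an analysis might examine. The field names and features below are assumptions for illustration only, not the study's methodology.

```python
from urllib.parse import urlparse

def candidate_features(tweet):
    """Illustrative features one might test as markers of misinformed tweets."""
    domains = [urlparse(url).netloc for url in tweet.get("urls", [])]
    return {
        "link_domains": domains,                         # where the tweet's links point
        "has_links": bool(domains),
        "retweet_count": tweet.get("retweet_count", 0),  # how widely it spread
        "is_reply": tweet.get("in_reply_to") is not None,
    }
```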

