The entrance to Facebook’s headquarters in Menlo Park, Calif. (Facebook Photo)

The people and companies who created social media told us it would be a democratizing force that would bring the world together and help us understand our differences. Turns out, it actually is a very effective tool for changing the course of history.

It was a bad week for big tech, called on the carpet before members of Congress to explain how they allowed a huge portion of the registered voters in the U.S. to be exposed to a massive disinformation campaign sponsored by Russia. Congressional hearings are usually more theatrical than effective, but the defensive posture Facebook, Google, and Twitter had to assume before two separate committees was clearly uncomfortable for the representatives of those world-changing companies.

“You bear the responsibility,” said Sen. Dianne Feinstein, the Democrat from California up for re-election next year, during this week’s hearings. “You created these platforms, and now they’re being misused — and you have to be the ones to do something about it, or we will.”

Facebook has already signaled it’s going to throw money at the problem — a time-honored approach — but that spending presumes Facebook and other companies can figure out a way to stay one step ahead of an army of professional con artists who have weaponized the open, connected nature of these products to confuse and distract hundreds of millions of people around the world.

Microsoft CEO Satya Nadella talked about “the responsibility of a platform company” during his appearance at our GeekWire Summit last month. If Facebook, Google, and Twitter are going to take that responsibility seriously — or if Congress forces them to — they’re going to have to make changes to their product strategies that go against their nature.

Terms of engagement

Google, Facebook, and Twitter are in the business of capturing massive audiences by delivering information over the internet. User engagement is one of the most important metrics for anyone running such a business.

A lot of these measures are relatively simple — a click or tap is probably the fundamental unit of engagement — but even humble community-oriented tech news sites can tell how long someone lingers on a page, how quickly they scroll through a mobile feed, and whether or not you actually read this far.
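As a rough illustration, here is a minimal TypeScript sketch of how a publisher might capture two of those signals in the browser: dwell time and scroll depth. The /track endpoint and the payload shape are hypothetical stand-ins; real analytics tools like Chartbeat batch and sample these events far more carefully.

```typescript
// Minimal engagement tracker: dwell time and scroll depth.
// The /track endpoint and payload shape are hypothetical.
let maxScrollDepth = 0;           // deepest point reached, as a fraction of the page
const pageLoadedAt = Date.now();  // used to compute dwell time on unload

// Record how far down the page the reader has scrolled.
window.addEventListener("scroll", () => {
  const seen = window.scrollY + window.innerHeight;
  const depth = seen / document.documentElement.scrollHeight;
  maxScrollDepth = Math.max(maxScrollDepth, depth);
});

// When the reader leaves, report both signals. sendBeacon is used
// because it reliably survives page unload, unlike a normal fetch.
window.addEventListener("pagehide", () => {
  const payload = JSON.stringify({
    url: location.href,
    dwellMs: Date.now() - pageLoadedAt,
    scrollDepth: Math.min(maxScrollDepth, 1),
  });
  navigator.sendBeacon("/track", payload);
});
```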

A Chartbeat dashboard, which is popular with publishers. Big web companies track a huge number of metrics on their sites. (Chartbeat Photo)

Big web companies measure far more of your behavior than that. At one point, Facebook tracked what you typed into the status update box even when you ultimately decided not to post it, and the company has also experimented with altering the presentation of content in the News Feed to see if it could provoke an emotional reaction in users.

We all use Google and Facebook in slightly different ways, and all of that data is used to make decisions about how those sites are presented to users. Whatever gets the most engagement usually wins, and Google and Facebook are among the most powerful companies of the 21st century in large part because they have figured out how to make these decisions faster and better than anybody else.
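To make “whatever gets the most engagement usually wins” concrete, here is a deliberately simplified TypeScript sketch of the kind of comparison behind those decisions. The variant names and numbers are invented, and real experimentation systems add statistical significance tests and guardrail metrics on top.

```typescript
// Toy engagement comparison between two variants of a page element.
// All names and numbers are invented for illustration.
interface Variant {
  name: string;
  impressions: number; // how many users saw this variant
  clicks: number;      // how many engaged with it
}

// Click-through rate: the simplest per-impression engagement measure.
const ctr = (v: Variant): number => v.clicks / v.impressions;

// Ship whichever variant drew more engagement per impression.
function pickWinner(a: Variant, b: Variant): Variant {
  return ctr(a) >= ctr(b) ? a : b;
}

const control = { name: "headline-A", impressions: 100_000, clicks: 3_200 };
const test    = { name: "headline-B", impressions: 100_000, clicks: 4_100 };

console.log(pickWinner(control, test).name); // "headline-B" wins on raw CTR
```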

At the same time, they have separately made important breakthroughs in technology infrastructure design, ensuring that those engaging sites are always available and always snappy. And they are pouring billions of dollars into the next generation of these technologies, developing artificial intelligence systems that could foster even more personalized experiences on their sites.

Engagement, however, is not a measure of quality, and it certainly is not a measure of truth. The most popular or best-selling versions of an information product rarely overlap with the ones people hold up as the best.

As we’ve learned over the past year, computers are not very good at separating fact from fiction. And as we’ve known for some time, people are pretty good at exploiting weaknesses in computer systems.

Google’s original search algorithm was based on the idea that if lots of websites were linking to one particular website, that meant it was a good website. For a fair amount of time, that was actually true!

But along came link farms, keyword stuffing, and the other dark arts of the search-engine optimization industry. Google quickly learned how to sniff out the most egregious offenders and change its search recipe to penalize the worst, and it now relies on dozens of signals to rank search results.
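The idea behind that original algorithm, PageRank, fits in a few lines. Below is a toy TypeScript sketch of the power iteration at its core: each page repeatedly passes its score, in equal shares, to the pages it links to, with a damping factor standing in for a surfer who occasionally jumps to a random page. The link graph is invented, and Google’s production ranking moved beyond this simple recursion long ago.

```typescript
// Toy PageRank: pages repeatedly redistribute their scores along outlinks.
// The link graph below is invented for illustration.
const links: Record<string, string[]> = {
  a: ["b", "c"],
  b: ["c"],
  c: ["a"],
  d: ["c"], // d links to c, but nothing links back to d
};

function pageRank(
  graph: Record<string, string[]>,
  iterations = 50,
  damping = 0.85
): Record<string, number> {
  const pages = Object.keys(graph);
  let rank: Record<string, number> = {};
  for (const p of pages) rank[p] = 1 / pages.length; // start uniform

  for (let i = 0; i < iterations; i++) {
    // Every page keeps a small baseline score (the random jump)...
    const next: Record<string, number> = {};
    for (const p of pages) next[p] = (1 - damping) / pages.length;
    // ...and hands the rest to its outlinks in equal shares.
    for (const page of pages) {
      for (const target of graph[page]) {
        next[target] += (damping * rank[page]) / graph[page].length;
      }
    }
    rank = next;
  }
  return rank;
}

console.log(pageRank(links)); // c, with the most inbound links, scores highest
```

A link farm attacks exactly this recursion: manufacture enough inbound links, and a page’s score rises no matter how worthless the page is.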

Don’t feed the bear

Facebook now has a similar problem on its hands.

Russia attempted to sow chaos in the U.S. and Europe over the past year by taking advantage of Facebook’s open nature and enormous reach to flood the site with deliberately misleading or inflammatory content, aided and abetted by an army of trolls and bots designed to drive up engagement on those posts. Less nefarious but still dangerous groups are in the disinformation business just for the money: “profitgandists,” as Snopes co-owner and vice president of operations Vinny Green called them at the GeekWire Summit.

A fair amount of criticism has fallen on these web companies for profiting from these efforts, but as Apple CEO Tim Cook put it, ads on Facebook aren’t really the issue.

“The bigger issue is that some of these tools are used to divide people, to manipulate people, to get fake news to people in broad numbers, and so, to influence their thinking,” Cook said this week. “And this, to me, is the number one through ten issue.”

Disinformation is an ancient tactic, but the reach, speed, and stickiness of modern social media sites have completely changed the game. Facebook and Twitter aren’t just the global town square, they’re also the global town newspaper and the global town gossip mill, and when every piece of information on their sites looks more or less the same, even level-headed people with a decent grasp of reading comprehension can be swayed by clever disinformation.

We’ve known for sure these sites were being used for disinformation campaigns for about a year, and these companies — known for moving fast and breaking things — appear to have done very little to address the core issues. Take Twitter: the company responded to a massive, concerted effort to hijack its platform by ramping up its efforts to deliver algorithmically curated feeds to its users, increasing the incentive to use bots and trolls to push fake news into those feeds.

Google’s situation is a little different. This New York Times piece about how Google’s well-documented failure to build a competitive social network actually turned out to be a benefit in 2017 made me laugh in that sad and amazed way we laugh these days. But Google often surfaces fake news at the top of searches for given topics, and YouTube is home to an awful lot of disinformation and propaganda.

The Pottery Barn rule

There are two equally troubling realities that companies operating information businesses on the world wide web must now confront as they think about their product road maps over the next few years. The first is not of their making: human nature ensures that some people will always use whatever tools are available to them to inflict distress upon their fellow humans. And yet, as some of the images actually used in the Russian disinformation campaign were shown during the hearings, it was difficult to understand how anyone could have thought they were real.

The second, however, is. Social media companies need to develop better ways to detect the massive spam operations designed to sway the minds of an electorate, because the current versions of these tools are not getting it done.

Google, Facebook, and Twitter are right when they argue that they shouldn’t be in the business of determining an objective “truth,” not only because placing that power in the hands of a corporation could be even scarier, but because their workforces are far too homogeneous to understand the reality faced by their billions of collective users.

But they face a stark choice: either admit that their technological approach to policing these issues has failed in a big way and introduce a much greater number of humans into the equation, or quickly find a technical way to sort out deliberate sowers of disinformation on their platforms. A former Twitter engineering leader thinks a combination of artificial intelligence and human editors is the best bet, pointing to Twitter Moments as a first step.

The 2018 election cycle will be one of the most scrutinized mid-term elections in modern American history. If Facebook, Google, and Twitter really do want to make the world a better place, they need to figure this out now and fashion a tech version of something every doctor knows: first, do no harm.

(Editor’s note: This post was updated to correct the title of Snopes’ Vinny Green.)
