It’s been a year since news broke of the Cambridge Analytica scandal, in which political operatives improperly harvested Facebook user data to influence elections in the U.S. and Europe. The fallout has caused consumer trust to plummet and encouraged regulators to look more seriously at the power that big tech companies wield.
The troubling state of consumer privacy was the subject of a panel discussion hosted by KUOW and GeekWire in Seattle on Thursday with Marketplace Tech host Molly Wood. She was joined by Giri Sreenivas, CEO of Helm, a Seattle startup selling private email servers; Ryan Calo, co-director of the University of Washington Tech Policy Lab; and me, GeekWire’s civic editor, to dig into the data economy.
Wood said we’re in a “trust apocalypse,” in which consumers have completely lost confidence that their data is being used safely. Sreenivas worried about “security nihilism” leading people to give up on privacy altogether. Calo noted that “your data has become fodder for being manipulated and harmed.”
It paints a scary picture of the future, but not a hopeless one. Continue reading for highlights from the discussion, which delved into how personal data became the dominant business model of the web and what consumers and regulators can do to build better protections. Watch the full conversation here:
How did we get here?

Before the panel kicked off, Wood gave a quick presentation about the state of the data economy and the tradeoffs consumers make when sharing their personal information. Thanks in large part to events such as the Snowden leaks, the Equifax hack, and Cambridge Analytica, people are starting to understand the business model of the internet, particularly for free services, and the repercussions for individual users when their data is shared more broadly than they expected.
“Our question is, simply put: is this a fair trade?” Wood asked.
Wood said “it’s only going to get worse” as companies look to inject artificial intelligence into their products and services.
Calo added that “privacy has shifted from a conversation about whether you have control over your personal information to how you can be controlled by other people that have your personal information.”
“That is a deep, deep shift,” he said. “And that is precisely I think what we’re dealing with right now.”
How much data do companies need?

The concept of targeted advertising — how giants such as Google and Facebook make the bulk of their billions in revenue each quarter — dominated much of the conversation. Sreenivas, whose company sells a $499 device that lets customers operate their own email server, said targeted advertising is the “root of what causes this problem.”
“I’ve been on the Internet since the mid-90s and so it’s not like ads are a recent phenomenon. We’ve had ads for a long time,” he said. “It’s the specific targeting on very, very detailed information about individuals that has kind of taken this to the nth degree.”
Sreenivas said he’d like to see companies “bring their applications and services to the consumer.”
“Right now you actually go to these websites, you give them all of your data, and now more and more people are learning that that’s not necessarily a fair trade,” he explained. “I think it’s a fundamentally different and more decentralized model where you can actually invite a company to provide an application or service on a server like [Helm’s] and decide what it is you want to share with them or not.”
Wood brought up how Google CEO Sundar Pichai told Congress last year that his company doesn’t actually need that much data to advertise effectively. She wondered if there was a middle ground of using a “minimum viable data” model.
“Is some of this just philosophical and cultural?” she asked.
A new business model

One potential solution to the privacy issues could be to charge consumers to use services such as Google or Facebook.
“I have long been a proponent of the idea of the WhatsApp model before it was purchased, where you pay a couple dollars a year and you get this service in exchange for it,” Calo said. “And yes, there will be people that can’t afford it and we should subsidize it for those people.”
On the flip side, some have argued that consumers should be paid for their data. Calo isn’t a fan of that. He invoked “Shanghaiing,” the historical practice of kidnapping or tricking men into working as sailors.
“If we start paying people for the awful arrangement that they’re in, it’s a little bit like that in my mind,” Calo said. “It’s like we all got drunk on data, we woke up and all of a sudden we’re getting a little bit of money for it.”
Sreenivas said it’s up to consumers to voice their concerns to lawmakers and not end up drunk on data and wonder what happened.
“But the first part of it is figuring out what is it exactly that we do want,” he said. “There will be room for people to get paid for data that they provide. My biggest concern is if that doesn’t have kind of appropriate bounds on it and you end up with exactly what you described, which is a very dangerous future for us where people get drunk on data and they’re like, what happened? Where am I now?”