After a year of privacy scandals, consumers are beginning to realize that the volume of data tech companies collect on them exceeds what they could have imagined. What’s more, their info is sometimes used to achieve surprisingly sophisticated outcomes, like campaigns pushing political objectives and advertisers engaging in digital redlining.
Marketplace Tech host Molly Wood has been helping listeners untangle the complex web of the data economy on her show. She visited Seattle last week to discuss those issues with Giri Sreenivas, CEO of private email server startup Helm, Ryan Calo, co-director of the University of Washington Tech Policy Lab, and Monica Nickelsburg, GeekWire’s civic editor, during an event hosted by KUOW.
Listen to highlights from our conversation on this special episode of the GeekWire podcast, watch the archived live stream below, and continue reading for excerpts from the discussion.
Molly Wood: On my show, “Marketplace Tech,” our tagline is that we’re trying to demystify the digital economy, and that means helping people understand this business model. I feel like we are at a time when everyone is collectively starting to realize the business model of the web and the impact that it has on all of our lives. So in tonight’s conversation, we’re going to talk about these trade-offs that are part of the data economy. The big trade-off really is the bargain that you thought you were making, which was, “I will trade some of my personal information, and I will get a free service out of it” — a pretty straightforward bargain — versus the bargain that you really make, which was, “I will turn over all of my personal information, not just to one company but to a company that might monetize it by handing it over or selling it to multiple other companies, and I will be trading way more information than I thought I was.” I think that’s the realization a lot of us have come to, and so our question is, simply put, is this a fair trade? And it’s a super appropriate time to have this conversation because we’re coming up on what is basically the one-year anniversary of Cambridge Analytica and that realization.
So let’s go back in time a little bit to a moment when I think we learned a lot about the business model of the services that we use every day, and it’s this guy. You knew that was coming: Facebook CEO Mark Zuckerberg. He had a very revealing exchange last April when he appeared on Capitol Hill:
Senator Hatch: How do you sustain a business model in which users don’t pay for your service?
Mark Zuckerberg: Senator, we run ads.
Hatch: That’s it?
Wood: That was Senator Orrin Hatch asking. I mean, we’re chuckling. People were chuckling. Twitter was chuckling, but let’s be honest, do you feel like you really understand how Facebook makes money? Because a lot of people don’t, and after that exchange, I got this tweet from a friend of mine, a very accomplished tech reporter, who said he was staggered by the lack of understanding of the business model.
Frankly, I thought that moment was a win for Senator Orrin Hatch because it was the first time that Mark Zuckerberg did not say, “We connect people. We create communities. We make it easier for you to talk to your friends. We organize the world’s information,” like Google would have said. He said, “We run ads,” and that was a really revealing moment for a lot of people because that’s the moment when we started to realize that what they meant by that was, “We target ads for stuff you think you might want. We’re the reason that the same ad is following you all over the internet. We will do anything to get more information about you so that we can target these ads,” and that could include paying teenagers to download research apps that record everything they do. It could mean grabbing information about your friends and friends of your friends, getting information from your cellphone carrier, buying information from third-party data brokers, your credit card company, the stores you shop at, the pet food place.
Facebook, like these companies, was essentially saying, “We will literally manipulate your feelings as a research experiment to get more information to use to target ads to you, and we have promised other companies and political campaigns that our targeting is so good that it can actually affect your behavior,” and so even though we all chuckled at this moment, I think it’s the moment when we all realized that we’re living in what I like to call the “data economy.” … There are a whole bunch of factors that led us to step back and look at that whole advertising economy, which is part of what we’re doing tonight.
And look, I have to be honest. There were also plenty of people after the Cambridge Analytica scandal who reacted differently. In fact, I know a woman who’s a small business owner, and she said to me, “Huh. I should probably be advertising on Facebook. Seems like this stuff really works.”
That is a reality of this conversation too, that I hope we’re going to have, but now we’re trying to grapple with this new reality, and here’s the thing: it’s only gonna get worse, like way worse. You had to know artificial intelligence was coming. You don’t get a bunch of geeks on stage and not talk about AI at this point.
Everyone is trying to develop or sell an artificial intelligence or machine learning solution, and all that is is using huge amounts of data to draw conclusions and/or make predictions faster, better, and more efficiently than a human can do it, and a lot of these conclusions are great, right? It might be language translation or medical diagnosis or real-time mapping and traffic rerouting or logistics and supply chain analytics or plagiarism detection, but it all relies on huge amounts of data.
The real-time traffic information comes from millions of smartphones sending real-time location information to the mothership all day every day. You have companies now who are bemoaning the problem of what they call “dark data.” This is just the stuff they don’t know about you yet, and it drives them crazy because you can’t have computers making really great predictions unless you have all of the information, and so as these technologies become a bigger and bigger part of our lives, and they will, then that raises this new question: What’s next? And that’s what we’re gonna talk about tonight.
It’s also a question that people are starting to answer in some different, interesting ways, including should I be getting paid for this data? Which, you could argue, is kind of a radical idea, but maybe it’s the time to think about it … we’re starting to consider how to come at this question of the bargain, and we are seeing people put forward solutions.
This is why the GDPR, Europe’s big General Data Protection Regulation, has come to pass. That’s why California is passing data privacy laws. Senator Amy Klobuchar, in her presidential run, talked about data privacy. The governor of California just suggested that perhaps our state should actually pay people a data dividend like the people who live in Alaska get an oil dividend. These are becoming much more serious conversations as we are talking about size and power. The FTC is holding new hearings to discuss whether our existing antitrust regulations and frameworks are enough to deal with companies that feel like a monopoly, except that they’re kind of in everything. I mean Amazon. It’s not an obvious one, but it feels like it should be. There are hearings on social media and influence. You have Elizabeth Warren proposing to break up the biggest tech companies. All of this really is just about trying to grapple with these influences that feel totally out of our control, so it’s no surprise that radical thinking is a part of that conversation.
Antitrust and Big Tech
Wood: Do you think that these conversations about antitrust and breakups are actually — I mean, I know we’re in Seattle and this has kinda happened here before — but do you think these are real conversations?
Monica Nickelsburg: Well, I think the proposal that Senator Warren came out with was intentionally radical because there’s the policy objective, and then there’s the debate objective, and I think, more than anything, it’s forcing us to have this conversation. But I think there is going to need to be a little bit of a reevaluation of how we think about antitrust. Amazon is a great example. I’m not saying that they are in violation of any antitrust laws, but historically, the way that we’ve thought about antitrust is big companies using their power to gain a competitive advantage and raise prices. Amazon uses their power to gain a competitive advantage and lower prices, so how do we define antitrust then under those terms? It’s less clear-cut.
Amazon is also gobbling up huge troves of data. They’re not sharing it. They’re using it, and they’re using it very effectively. Is that as much of a concern as companies like Facebook that are gathering our data and sharing it with third parties? I don’t know. I think it’s an important question to ask though.
Wood: If we were going to talk about a market-based solution before regulation comes in, can privacy be a selling point?
Giri Sreenivas: I think it’s already become a selling point. Just today, Apple launched a new advertising campaign around their smartphone, and the number one message for that is around privacy. I think it’s a minute-long clip. You can check it out later, but if a company like that is going to put that front and center — and you could argue it’s to their benefit to do this because of the way they develop their products — we’re now seeing Facebook try to pivot their entire company around privacy. You could argue that’s to head off some of the regulation that’s coming their way, but I think it is becoming clear that there is a value consumers either place today or will place in the future on privacy in the products they choose to use, so absolutely.
Wood: What has the bar become if you promise that privacy? Like if you’re Apple and you’re promising privacy as a selling point, but my phone still runs on Verizon, which is gathering information about everywhere I go on my iPhone, you know? I feel like we’re at a trust apocalypse right now when it comes to consumers and companies, so that even if some parts of this conversation feel unfair to technologists or business people or venture capitalists, it feels like there’s a whole bunch of people who don’t care at all, and then there’s a whole bunch of people who just aren’t going to believe you.
Sreenivas: So the contrast between what Apple can do for you versus what Verizon doesn’t do for you, the danger of that is you go down the slippery slope towards privacy nihilism or security nihilism. I think it’s important that people recognize that they can take steps that matter, and it’s going to be a combination of what the market can provide in terms of solutions but also what the regulatory frameworks can provide. I think the combination of those two things is going to drive a healthy environment for consumers to have their data and have ownership and control and know exactly what’s going on. I think Cambridge Analytica wouldn’t have been as big of a deal as it is now if we didn’t have the Snowden revelations back in 2013. That is what I think really set the tone for how people are appreciating what’s going on with Cambridge Analytica.
The Snowden revelations were much more personalized, right? The government having access to my data, what does that mean for me? Cambridge Analytica showed us that there’s an existential threat that comes to our very way of life by these huge companies having access to this information in a way that can be manipulated, so I think it’s a combination of what the regulatory frameworks will do to paint some boundaries for these companies but also what the market will offer for these customers.
Nickelsburg: I think you need both. Because you need to have an option for people who are deeply immersed in these issues and concerned about them, and for whom it’s worth paying a premium, but you don’t want to create a situation where privacy is just something for those who can afford it. So I think the regulatory piece helps those who are still going to use free services because they cannot reasonably afford to pay for something, but that doesn’t mean that they shouldn’t be allowed the same basic rights that we’ve decided as a society that you should have.
Ryan Calo: I’ll tell you, having studied privacy for some time now, one of the major, major shifts over the period of time I’ve been studying it, first in the academy, but now in the mainstream conversation has been that privacy has shifted from a conversation about whether you have control over your personal information, to how you can be controlled by other people that have your personal information. That is a deep, deep shift, and that is precisely I think what we’re dealing with right now.
You had raised, Molly, this provocative idea of “Hey, maybe you should get paid for your data,” right? And this is something that I’ve heard … people have argued for. I’m not a fan of that approach, and I’ll explain why. Do you remember — I think it’s possibly a slightly offensive term — but you remember this idea that sailors in San Francisco would get too drunk, and then they’d wake up, and they’d be on a ship, you know what I mean? And they were Shanghaied. You’d wake up, and all of a sudden you’re in the service of this thing. Again, not a great term, but the idea basically is that you wake up in the service of somebody, and those sailors that woke up on those ships, they were paid. If we start paying people for the awful arrangement that they’re in, it’s a little bit like that in my mind. It’s like, we all got drunk on data, we woke up, and all of a sudden we’re getting a little bit of money for it. No thank you.
My view is that it’s much simpler, right? It’s much simpler. The idea is that you pay for things that are valuable, and I have long been a proponent of the WhatsApp model before it was purchased, where you pay a couple dollars a year and you get this service in exchange for it, and yes, there will be people that can’t afford it, and we should subsidize it for those people.
Limited impact so far
Wood: I think this is a place though to point out that the effect has not been that chilling on these companies. Facebook is still reporting that, you know, user growth might be somewhat flat in the United States, but it’s growing by leaps and bounds all over the [world], and still a large percentage of people sincerely don’t care. They don’t see it, and so while we’re having this conversation in this bubble, how much do we have to consider the possibility that it’s not going to go that far? Just trying to be a little bit of a bummer for everyone.
Calo: I would say that people do all kinds of stuff that’s bad for them.
Wood: So, we have to help them help themselves?
Calo: We regulate it, you know what I mean? It’s just like people do all kinds of stuff that’s bad. And just on the point about online advertising and targeting, I do want to draw a distinction I think is appropriate, because it’s almost excavating a distinction that once was very obvious but has gone away, which is that there are a few different ways to target ads online. One of them is just contextual: you are reading an article about soccer, and you’re sold soccer balls. That is a very different thing from following you around, supplementing that data with data from a third party, getting the data from here, building a psychological profile of you, and then serving you the ad that not only is of interest to you, but is actually served in such a way that it’s gonna be the most attractive to you.
For example, I’ve seen research showing that you can have the same ad, but you can couch it differently depending on what you understand about a person’s psychology. So for example, let’s say that, for whatever reason, all Monica cares about is what’s popular; she’s just really into popularity.
Wood: Not your vibe.
Calo: Exactly. Exactly, not your vibe, which is why it’s funny, but the idea is you care about what’s really popular, and so when you get this advertisement, what you see is “our most popular item” and you’re like, “Oh, I’ll take that, that’s the most popular item.” But for some reason, the way that I grew up, what I really care about is scarcity. I just grew up in an environment where I don’t care. I buck the system, I don’t care what everybody else thinks, but I really worry about scarcity, and that same product is sold to me, and it says, “While supplies last.” I’ve seen research suggesting that that’s effective. So, there’s a huge delta between just showing you ads because you’re reading about soccer, versus this private profiling.
I will say though that especially when you get into the political realm, there is a very unfortunate elephant in the room at one level, which is the First Amendment. If you begin to aggressively assert a prohibition on online advertising, you will get massive pushback under the First Amendment. It’ll be like this. Imagine that a person who is running for Congress has a room like this in front of her, and she walks around talking to various people, and she tries to get a sense of what the sentiments are. So she just hears from you, and these are your concerns … and she comes up, and sits up here, and then she tells a story that seems to resonate with the people she talked to, and you tell her you can’t do that. You can’t select your message on the basis of talking to people about what their interests are. No way that flies in our system. So, what’s the distinction between that, and following people online, and tailoring your message on that basis? That’s the kind of argument you’re going to get from people who weaponize the First Amendment.
I think that it’s very fraught to try to ban a category of speech, whereas it’s much simpler to promote the paid option, which I think actually finally will take off.
Nickelsburg: While we’re talking about politics and the global impact of these companies, I think probably most people in the business community would say that the reason America has all of the biggest, most valuable tech companies is because they’re in America, or because the climate here has created them. But when you think about them expanding out into the global economy, there’s a difference in the value of what a middle-class American trades off, in terms of their data, for what they get from Facebook.
Maybe they have the luxury of looking at privacy as more of a concern than in the developing world, where you might be a small business owner with absolutely no infrastructure to set up your own website or online payment system, and Facebook is offering all of these things, in which case maybe privacy feels like a fair trade-off. So as these companies expand out into the world, on the one hand you want to ask: do we want to create a regulatory environment that discourages the next Google? On the other hand, do we have an opportunity in the U.S., because we contain all of these companies, to create rules of the road that are really fair in countries where the consumers might not have as much say in advocating for what they really need?