Roger McNamee is a veteran investor in Silicon Valley, and one of the earliest people to invest in Facebook, even offering advice to Mark Zuckerberg when he was a young CEO.
A fan of Facebook for many years, McNamee became concerned with posts in his feed about the election early in 2016. His concerns soon grew into his own investigation into Facebook’s role in the spread of misinformation and its consequences, which he shares in his latest book, Zucked: Waking Up to the Facebook Catastrophe, and which we reviewed last week.
He tells us that reception for his book has been “fantastic” so far, and after one week, it has made The New York Times Hardcover Nonfiction Best Seller list.
“This is an issue whose time has come,” McNamee said. “The evidence surrounding the reception to Zucked has been confirmation that there is a lot of concern out there, and a lot of uncertainty about what the sources of the problem are.”
Below, McNamee talks with us more about a few other big-picture ideas from the book:
On the larger issue of tech companies’ power and responsibility for the greater public good …
This is not about Facebook or Mark [Zuckerberg]. It’s not even about Facebook or Google. Really, it’s about a Silicon Valley culture that, for the better part of two decades, was given license to innovate and disrupt at will.
They were so successful at it that it began to have a global impact, and now we have unintended consequences that require change.
Facebook and Google were two of the best-executed startups in history. There is so much to admire about both of them, so much good they create. But the combination of the Valley’s culture and the business models that took them to scale meant they amassed political power for which they were unprepared, power that is currently unregulated and unaccountable. And it’s been super hard for the people running these companies to separate the issues of politics from the issues of their business.
By the way, I’m sympathetic to that. It’s really hard to run businesses like this. We’re in this place where things we love have unintended consequences. Something’s gotta give. My role is facilitating this conversation. I don’t have answers, I mostly have questions.
On how Facebook affects behaviors …
It’s there to manipulate your attention. If you have a real-time feedback loop, what do you get to do? First, you use notifications and like buttons and stuff like that to appeal to everyone’s innate need for rewards and get them coming back. If you want them to spend a long time on your site, it makes sense to appeal to lizard-brain emotions, like fear and outrage.
The problem is that when you get them coming back to a highly personalized experience, it forms habits that are good for business, but for many people it turns into behavioral addictions that can then be manipulated from the outside, by people who come in and use ad tools to do things that are not socially responsible.
Obviously, none of these platforms set out to create that. I think one thing that has been hard here for the founders of these companies is that people would use the products differently than they intended. Why would they think that someone’s going to throw an election? Why would someone do that? I get that.
On data and privacy concerns with Amazon and other smart-home devices…
Think about Alexa-based products. There are three levels of concern around internet-of-things devices that use [Amazon Echo’s] Alexa or Google Home as the voice control. No. 1, you’re putting devices that listen all the time into contexts where your guard will be lowered and where things may happen that you would not be comfortable having listened to.
I accept Amazon at their word that if you don’t say, “Hey, Alexa,” they won’t record, but I make the point that the challenge in the space is that there are lots of other people in that formula other than Amazon. Once the data is collected, there are new use cases that emerge and promises made at Point A are forgotten at Point C, so there is that whole issue.
I think Amazon is sincere today, but if a business-use case emerged where it was attractive to record or retain more stuff, we might not hear about that in real time. It’s not that they’re bad, they just don’t perceive it as important.
Secondly, all that hardware is being made by Chinese companies cited by our intelligence agencies as potentially hostile.
And third, it’s all based on Android, which is relatively easy to hack. Wasn’t there a hack of the Nest home security system last week? For all the products that come to market, there are no standards on what behavior is appropriate or what the limits should be. We need to make sure to have that conversation before that happens.
On AI taking over …
With AI, you’ve got a different set of issues. What are the top three use cases? I would argue that three of the top ones are eliminating white-collar jobs, telling people what to think with filter bubbles, and telling them what to buy or enjoy with recommendation engines.
You can imagine all three as being a service to someone, but you can also imagine very significant groups that can be harmed by those things.
What makes us different? We do different kinds of work. We might believe different things, and we might enjoy different things. Are the best uses of AI to replace our jobs, what we think, and what we like?
I like convenience as much as the next person. Steve Jobs used to talk about bicycles for the mind, using technology to empower people. Those don’t seem like use cases that empower us.
On more regulations for tech companies …
Companies should do the smart thing and concede on behavior. Regulation is a blunt tool. Google, in particular, misread Europe. They were politely asked to stop using data in their search product, basically said, ‘We don’t care,’ and got an all-time-record $5 billion antitrust fine.
The basic message is that when people have decided your behavior is inappropriate, you’re supposed to take the first offer. From the regulators’ point of view, they’re going to try their best to get you to alter your behavior and be nice in the beginning, and then pull out ever-bigger hammers as they go on. In Europe, Google has not understood that.
It’s reminiscent of when Microsoft was approached by the Justice Department in the antitrust case in the mid-’90s. Nobody remembers that Intel was part of that first inquiry. Intel said, ‘We’ll do whatever you want.’ Microsoft fought every step of the way and got hit pretty hard. That’s how it works. These guys, so far at least, don’t seem to have learned the lessons of history.
On the future of how tech does business …
Fundamentally, I’m a tech optimist. I believe AI should be part of the 21st century, but we need to have an understanding about what the rules are.
Why is it that companies are allowed to collect data on minors? Is it OK to sell data from credit card transactions? What is geolocation data from cell companies used for? Is it OK to sell that?
My role in this whole thing, and I’ve spent a career doing this, is to point out that this isn’t about the people; it’s about a cultural philosophy that worked well for 50 years. There were meaningful changes after the turn of the century because we suddenly didn’t have constraints on the underlying technology. We had enough bandwidth, memory, and storage to do whatever we wanted. All of a sudden, we could have global business models that you never could have before.
It’s been a long period of laissez-faire; they do what they want. I don’t think it makes sense to continue down this path. The traditional model of data, marketing, and advertising was to collect data from customers to improve a product or service. We’ve now taken that old model to this surveillance-based thing, where data is gathered not so much to improve a product or service, but mostly to create new products and services you never benefit from.