Del Harvey is the director of trust and safety for Twitter, a big job that covers account activity and abuse, user safety, spam, legal issues and more.
This morning at the Privacy Identity Innovation conference, Harvey answered my questions on stage about many of those issues — explaining how Twitter’s approach to spam has evolved; how the company deals with subpoenas; and lessons learned from Twitter’s role in the address book uploading controversy.
Harvey has been at Twitter since 2008. She worked previously with the group Perverted Justice, which conducted sting operations on sexual predators. Harvey herself played the role of a decoy on episodes of Dateline NBC’s “To Catch a Predator” conducted in partnership with the group.
At Twitter, her team provides a counterbalance to the product teams. When engineers come up with a great new feature for making kittens, she said today, it’s her team’s role to point out that the same feature could also be used to shoot bullets — and to work with them to make it safe.
“We’re not really the idea people, we’re more the dream crushers,” she joked. “We have ideas about how to crush the dreams.”
Continue reading for excerpts from our conversation.
I’m a big Twitter user, as are many of the people in the audience. I still get that direct message from my friend saying that people are posting really nasty things about me on Twitter. Can you give us a sense of where things are in the battle against account hijacking and spam?
Harvey: This is something that happens to every possible site out there, in terms of folks saying, there’s actually higher social value if I can compromise an account that has a previous identity that identifies it as a good or legit account, and then use that to engage in some behavior. Obviously there’s a social norm, where you’re like, maybe that friend does know something bad about me. But still, the number of people who click on those links that say, did you see those shocking pictures of you posted from last Friday? I don’t know what these people are doing on their Friday nights, where they’re like, oh my God, they got those?
In terms of what we’re working on for stuff like that, it’s kind of an arms race, honestly. OK, we reset the passwords of folks who are identified as compromised, that’s great. But what folks on the phishing side of things actually do is, say you do your initial breach and you get 1,000 people’s user names and passwords. You’re then going to take probably around 500 of those and use them to spam. … The other 500 you’ve still got in your back pocket. Of that 500, you’re going to take about 250 and use them to phish others. That’s the DM you get: “Hey, I can’t believe this shocking thing you did … ” It’s sort of like making down payments on future phishing. So the problem is, even if we identify every single person who showed a sign of being compromised, there’s still inevitably the segment that hasn’t shown any symptom yet. So a lot of what we’re working on is education.
HTTPS is on by default for Twitter, so when you’re going to sign in, double-check the address bar. If you click on a DM link and it takes you to a page where you have to sign in, ask why. You were already on Twitter.
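Harvey’s address-bar advice amounts to a simple check on the URL. Here is an illustrative sketch of that check; the helper name and the rule that only twitter.com (or one of its subdomains) over HTTPS counts as legitimate are assumptions made for the example, not anything Twitter ships:

```python
from urllib.parse import urlparse

def looks_like_real_twitter_login(url: str) -> bool:
    """Hypothetical helper mirroring the advice above: before typing a
    password into a sign-in page reached from a DM link, confirm the
    address bar points at twitter.com over HTTPS."""
    parsed = urlparse(url)
    # Must be HTTPS (Twitter serves sign-in over HTTPS by default).
    if parsed.scheme != "https":
        return False
    # Host must be twitter.com itself or a subdomain of it; a lookalike
    # such as "twitter.com.evil.example" fails this check.
    host = parsed.hostname or ""
    return host == "twitter.com" or host.endswith(".twitter.com")

print(looks_like_real_twitter_login("https://twitter.com/login"))             # True
print(looks_like_real_twitter_login("http://twitter.com/login"))              # False
print(looks_like_real_twitter_login("https://twitter.com.evil.example/login"))  # False
```

The lookalike-host case is the one phishers rely on: the real domain appears at the start of the hostname, so a quick glance at the address bar can miss that the registered domain is actually something else.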
What about two-factor authentication? That’s something that other online services have done. Twitter has not yet enabled that option for users. Why not, and is that a possibility in the future to combat these types of attacks?
Harvey: I think everything is a possibility in the future, certainly, but quite frankly, two-factor authentication is not the most practical use of resources. The folks who use two-factor authentication are a pretty small segment of the population as a whole. They’re usually the more savvy folks who are less likely to get phished in the first place. Two-factor authentication for Twitter, at least, would require folks to give us a lot more information than we actually ask for. You don’t have to give us your real name, you can give us a disposable email address, you can use a proxy to access us. People tend to really value those components of Twitter. If we say, here’s two-factor authentication that can make your account more secure, but it also means giving us this additional information, the adoption rate is so low that it’s better to use those resources to do more.
I would use it.
Harvey: I would use it, too. But we’re probably also not going to click on those links.
Harvey: I don’t think I’ve ever had anyone ask me that on the street.
OK, what would you tell me if I asked you that?
Harvey: Say you’ve been geotagging your tweets, which is opt-in, so you would have had to say, yes, I want to geotag my tweets. But then you’re like, that was a really bad idea. There’s actually an option on the settings page to delete all geotagged information associated with your tweets.
That’s something that we really try to think about in general. We have information, or you’ve given us information. We want to give you a way to either remove it yourself or ask for it to be removed.
Earlier this year, Twitter was among the companies that were found to be storing data from user address books without clearly explaining to users what the company was doing with that data. You updated the Find Friends description to be more clear. But what did you learn from that experience, and what were the broader lessons for the industry from that whole address book debacle?
Harvey: What I learned, primarily, was that I needed to assign a representative to that team, too. When an engineer builds something, they’re not thinking, I’m going to create this thing and it’s going to ruin everything. They’re thinking, it will be perfect. They’re like, this is going to make kittens! It’s going to make kittens for everybody.
I see that it makes kittens, but did you see that it also shoots bullets?
Well, why would you want to shoot bullets with it? It makes kittens!
You have these people who are creating these products and creating these exciting things, and they don’t think about what could go horribly wrong with it, or how people can misuse it terribly.
Honestly, this is how my whole experience at Twitter started. I joined in October of 2008, and I was the first person dealing with issues related to abuse and privacy and everything else. I remember I had a conversation with Biz and Ev around that time about spam. Biz was like, “Oh, I don’t think it will ever be a problem. You can choose who you follow.” And I was like, oh, dear. … It’s this really interesting disconnect between the people who are creating things and trying to do awesome things and trying to delight their users, if you will, and the group of people that I fall into. We’re not really the idea people, we’re more the dream crushers, I think. We have ideas about how to crush the dreams.
It’s essentially, how do we let you do this thing that really is awesome and amazing and does in fact make kittens, while also removing this component of it that will be misunderstood or recontextualized.
Your group, with fewer than 40 people, covers areas including account activity and abuse, ad policy, analytics, APIs, brand policy and legal policy. It seems to me like you need 300. How do you manage?
Harvey: I’m very against throwing people at problems that we should really be throwing computers at. It’s really easy to say, OK, everybody sit here and look at all these things and determine if there’s abuse. It’s a lot more scalable and it has a lot more impact to say, OK, everybody look at these things and determine the commonalities of them, let’s identify the behavior that indicates abuse and let’s focus on that. For most sites and most organizations, that’s the future of detecting this sort of thing, in terms of accounts that are bad from the start, and also in terms of accounts that are compromised or that have some sort of change to their state that indicates something has gone wrong.
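The approach Harvey describes, finding behavioral commonalities rather than reviewing accounts one by one, can be sketched as a simple weighted scorer. Everything here is hypothetical for illustration: the signal names, weights and threshold are invented, and real abuse detection at Twitter’s scale would involve far more than this.

```python
# Hypothetical behavioral signals and weights; positive weights make an
# account look more abusive, the negative weight on account age makes
# established accounts less suspect.
ABUSE_SIGNALS = {
    "identical_messages_sent": 3.0,   # mass-posting the same text
    "links_per_message": 2.0,         # link-heavy output is spam-like
    "follows_per_hour": 1.5,          # aggressive follow churn
    "account_age_days": -0.1,         # older accounts are less suspect
}

def abuse_score(activity: dict) -> float:
    """Weighted sum of behavioral signals for one account."""
    return sum(weight * activity.get(signal, 0.0)
               for signal, weight in ABUSE_SIGNALS.items())

def flag_for_review(activity: dict, threshold: float = 10.0) -> bool:
    """True if the account's behavior warrants a closer look."""
    return abuse_score(activity) >= threshold

# A day-old account blasting the same link-bearing message.
fresh_spammer = {"identical_messages_sent": 5, "links_per_message": 1,
                 "follows_per_hour": 0, "account_age_days": 1}
print(flag_for_review(fresh_spammer))  # 5*3.0 + 1*2.0 - 0.1 = 16.9 -> True
```

The point of the sketch is the shift Harvey describes: humans identify the commonalities once (the signals and weights), and the computers then apply them to every account, which is what makes the approach scale.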
Last week, Twitter sought to block a court order that required the company to turn over to prosecutors data from an Occupy Wall Street protester’s Twitter account. You’re probably limited in what you can say about specific cases, but broadly, how are you dealing with legal orders these days, subpoenas or non-subpoena requests for data, and how is that changing based on the changing legal environment?
Harvey: That is an incredibly, incredibly complicated question. To give you the highest-level overview, the things that we always look for in terms of receiving any kind of request are: what’s behind this request, does it seem to be a reasonable request, does it seem to be overreaching? Do we have enough context? Is it valid? There’s also a whole separate swath of really fascinating stuff in terms of what happens when it’s an emergency request, where a user has said, I have just taken 60 Valium, goodbye world, and someone is saying, we’re trying to get to this user and save them.
Twitter has been very transparent about telling users when authorities ask for data. And not every online service is that way. What’s behind that philosophy, and how do you implement it?
Harvey: It just kinda seems like the right thing to do.
But not to everyone.
Harvey: We’ve always gone down this path of trying to tell you everything we know, and if we can’t tell you, we’ll tell you when we can. Google’s Transparency Report is something we’re working toward having for Twitter as well, to communicate that even more clearly, and not just to the specific user.
As you look at the broad scope of trust and safety on Twitter, are there one or two issues where you, or the industry as a whole, will have to make changes for Twitter to remain trustworthy and secure?
Harvey: We’re already in the process of revamping our safety and security center so that instead of being targeted just at Twitter, it’s targeted at the Internet as a whole: broader practices for being online, with Twitter-specific components. What that ties into is outreach to smaller companies, companies that are just getting started in the space, who maybe don’t know what their policies should be about how they handle this issue or that issue. We meet with them pretty regularly, and we really try to walk them through: these are the ideas behind our policies, these are what we think the right choices are. And hopefully this year we’re going to actually try to get something off the ground among a lot of the different tech companies around these issues of safety and security and privacy.
We want to make it so that these are the core components of what people should know, think about and realize when they’re dealing with abuse. That’s actually hugely necessary, because what we have right now is this fragmented advice that users get.
Audience Question from Mike Schwartz of Gluu: Is the business model of Twitter ever at odds with the service that you provide to users, or the relationship with users?
Harvey: Certainly the potential is always going to exist. Anytime you have a company that’s trying to monetize, there’s always the potential, but Twitter as a whole has always taken a firm approach to doing things “the right way,” to the extent that we have these company core values that we came up with. The company as a whole spent six months going through iterations, everybody was super-involved in it.
One of the core values is to grow our business in a way that makes us proud. Another is to defend and respect the user’s voice. Those are just two of the ten core values the company chose, and they give us something that we can always look to, which I think has been very helpful, in terms of: is this the right choice, will people understand why we’re making this choice, is it the right thing to do?