It has been a weird year for Google Cloud CEO Diane Greene.
Just as the company’s cloud-computing division has started to hit its stride, signing enterprise deals and reaping the rewards of its tech investments in container technology, employee backlash over artificial-intelligence contracts with the military has kept things interesting during Greene’s third year running the third-place cloud-computing company.
While Google is still looking up at Amazon Web Services and Microsoft Azure when it comes to infrastructure cloud computing, it appears to be finding the balance between keeping engineers happy with cloud-native computing tools and courting enterprise company suits with service-level agreements and steak dinners.
Greene, the former VMware CEO, was brought on board in 2015 to help Google navigate the world of the enterprise technology buyer, who generally operates under different constraints and with different priorities than the legendary infrastructure engineers who built the backbone of Google’s services. It seems to be working: 25,000 people flocked to the Moscone Center last week in San Francisco for Cloud Next 2018 to learn more about Google’s cloud tech, more than double last year’s attendance.
But these are strange times for the country and the tech industry. After the backlash over Google’s involvement with Project Maven (which Greene refuses to talk about), the company finds itself at the center of debate over how artificial intelligence technology — arguably its crown jewel in the cloud market — should be used in the real world, and no one seems to have a very good answer.
I caught up with Greene in San Francisco to talk about Google’s position in the cloud, its obligations when it comes to applied artificial intelligence, and her role over several decades of this fast-moving industry. A transcript of our conversation, edited for clarity and brevity, follows below.
GeekWire: How’s your week going?
Diane Greene: It’s good. … You know it’s just amazing to see what everybody put together here. It’s a great event because we pull everything we’re doing together … and it gives you this sort of rhythm internally.
So that’s invaluable. And then the chance for all our engineers, for all of us, to spend time with so many customers in such a concentrated way is invaluable and very motivational.
If you look back over the last year, what do you think the biggest accomplishment was for the cloud group?
Greene: I think we’ve really refined our very customer engineering-centric approach, and I think we’ve really refined our vertical (markets) approach. And they’re both working really well.
(Things) like our Office of the CTO, the ratio of engineers, having an integrated team from engineering and customer-facing to focus on each vertical. And I think all those things have been really valuable. And customers really like it, and it lets us scale better and do more to push the opportunity faster.
I’m just so proud of what G Suite’s gotten done. I mean that has just hit an inflection point, I feel, in terms of its security, how it makes you more productive because of all the AI, and then how integrated it is now. And then new products like voice, you know?
The corollary to that is, what was your biggest mistake?
Greene: That’s an interesting question. I’m sure we’ve made plenty.
We all do.
I think we’re constantly adapting how we go to market. Sometimes I wonder if we need to focus more on the competition, but what we focus on is our customer, and doing what’s right for them. But it’s a very competitive space.
One thing I have to ask you about is a lot of the stuff that happened with Project Maven, and…
Greene: I can’t talk about Maven. I’m really sorry. I will not go on record talking about Project Maven.
Do you think you have a chance to work with military customers in the future?
Greene: We will absolutely do things to help the military in any way we can. We just have to … we will also follow our AI principles. The only thing I can tell you is what we did establish is that we will not use AI for any sort of weapon system, and that is unambiguous.
Without talking specifically about the military, Google seems to be a very unique company in the way that employees have feedback and input.
Greene: Well, I think the AI researchers are doing research and I think it’s fair for them to have feedback and want clarity about how their work will be used.
I think we all need to navigate our way through this, because there’s so much good that’s getting done with technology and it’s solving so many problems. And I think it’s really important to move fast on all the good that technology can do and use it to educate people and use it to eliminate mundane jobs and use it to help the economy and improve health, reduce energy wastage and all that kind of stuff.
By solving these problems, you make the world better, which reduces the reasons to use it for bad. And maybe that’s a little Pollyanna, but it just seems that not … making it harder to use it for good because you don’t want to use it for bad is … nobody wants to do that.
Greene: Oh, on the face recognition.
Yeah, and calling for regulation on that. I have been around a while and I don’t think I’ve ever heard a tech executive call for regulation on a technology product. What does Google think about that?
Greene: Well, we have a major effort to work with the regulators, because the regulators mean well, they’re trying to do the right thing and they play a very important role in the world, but it’s hard for them. Technology is moving faster than they can move, and it’s moving faster than they can understand it. You really have to be almost a full-time technologist to have a deep understanding of technology.
We feel it’s incumbent on us to work as closely with them as we can to help them really understand the depth of the technology and what it’s capable of. So, we’re working a lot on that and, you know, you educate them and then where you think they’re going in a direction that’s going to really set things backwards, you work really hard at showing them that might not be the best way forward and providing an alternative, which I suppose is what Brad Smith is trying to do around face recognition.
It was an interesting post, not only for the reason that it’s just rare for industry to call for regulation, but also because this is an area in which we just don’t … I feel like even the deep technologists don’t have a great understanding of where facial recognition technology will go.
Greene: Well, we keep reading about all the ways China’s using it and … Actually, someone just told me, I haven’t seen it, that there’s an article today about Amazon’s face recognition.
Yeah, the ACLU did a test where they used Rekognition, Amazon’s cloud service, to … they ran the mugshots … well, they ran the headshots of prominent members of Congress against a police database and it found all these matches, which, obviously-
Greene: Were not real.
[Editor’s note: Since we spoke, Amazon released a blog post challenging many of the assumptions behind the ACLU’s test.]
The more people I talk to about AI who know a lot about AI, it seems like they are less concerned about the immediacy of Skynet and all of these scary things that you hear about. You have access to some of the best AI minds on the planet. How do you see all this? How far along are we and how much farther do we have to go before we start getting really worried about the impact of this?
Greene: There are a broad set of opinions about that. I mean, I think already we see it’s very important to make sure your data sets don’t have bias to the extent that you can, and it’s important to have a diverse workforce developing the AI and that sort of thing. And then I think, like what we found at Google, it’s important for us to have clarity around how it will and won’t be used. But I personally think it’s important not to hamper the good it can do because of the bad it can do.
What are some of the good things it can do?
Greene: Oh my goodness. I mean, point it at health and what it can do, just with all the image stuff where it can do diagnostics or predictive health or a lot of different diagnostic things, being able to take what a doctor is saying and have it transcribe (it). What happened was we went into digital health records, which is good, we’ve got all these digital health records, but in the process of doing that, we made a doctor’s job to type into a computer. So we need to fix that and we can fix that with machine learning.
But (also) energy usage, we use it to reduce energy usage in our data centers, or the World Wildlife Fund is using it to automatically analyze the cameras out in the field to see what’s happening with the wildlife.
For instance, the World Wildlife Fund had to have volunteers analyzing every picture and now they don’t have to do that and that’s a huge win. So, I think it’ll get applied to education and will just get applied to everything and help us be more effective, which will raise the quality of life for everybody. But there’s no denying there’s a lot of not-good uses of it in the wrong hands.
At Google, we have not released face recognition (as a cloud service). It’s used in photos, internal stuff. We have good face recognition, but we don’t release it as a public API because we don’t know what’s going to happen there and-
Will you continue to hold off on releasing it as a public API, unlike what Amazon has done with Rekognition?
Greene: I don’t think I can comment on what we might or might not do, I’m getting into legal territory.
But to say you can’t be self-regulating is an odd thing to say, I think. People will self-regulate to their values, so what he’s saying is there needs to be a statement about what the values of a given government are. Anybody can self-regulate to their own values, but you won’t have consistency of values.
Well, but that’s the point though. Google can have a statement of values over how it’s going to use AI technology and that’s all well and good, but there are companies that do not necessarily share Google’s values and will use it in different ways. In the past, in technology, even companies that had a defined set of values resisted regulation of industries even though there were bad actors who might take the technology in a different way, just simply because it would limit their business opportunities.
Greene: Well, it may limit a business opportunity or it may create a lot of work. There’s a lot of governments that say data can’t leave their borders, so we have to build a way to support that and it’s a lot of work. An inherently distributed system is more available, more performant, (and) better for a global organization. There’s all kinds of reasons you want your data distributed, so the regulators are taking us backwards in terms of the capabilities of the technology to help people.
My view is it’s incumbent on the technology companies to really spend the time helping regulators have a deep enough technology understanding, and it’s to the point where maybe you bring in ethicists, philosophers. But technology’s just moving so quickly, and I believe it’s doing so much good and solving so many problems that we should err on the side of letting it move quickly.
But it’s definitely a confusing time for people, particularly for people that don’t have a deep understanding of what’s happening, and companies that do are coming out with the technology and doing it in a way that is (consistent with) their values. Maybe it’s the role of government to set the values for a government for everybody that’s governed, but that’s not working so well either.
No, it is not. I think that’s part of the problem.
Greene: There’s no really good solution right now.
We have, as citizens and taxpayers, outsourced to private industry the need to set these standards and values, and I personally don’t think that’s a very good idea, because I think that private industry has a wide range of views about what these values should be, and even though some companies may understand the issues at play, some companies don’t care.
Greene: Yeah, so I guess that’s the point Brad Smith’s making, that the government needs to step up and say how we’re going to do this. But … government needs to change if it’s going to be able to do that.
You gave this interview (last week) where you’re talking about revenue and you called it a “lagging indicator” of where you are with progress in Google cloud, and so I was wondering what you use to actually measure your progress.
Greene: I pay attention to bookings and commitments and usage. You know I look at everything. Usage I watch very closely, and how good we are at helping customers move quickly. Because actually when we engage with a customer, the faster we do things, the more successful it is. So I pay a lot of attention to that.
I pay attention to all our support metrics, all our performance metrics, all our availability metrics.
It feels like, for years, price cuts in cloud were like clockwork. It was like every six weeks, everybody was cutting prices, cutting prices, cutting prices. The last year or so, it feels like that has abated a bit.
Greene: If some new technology were to come out that makes us able to operate at a much lower cost on some particular service, we’ll pass that (cost savings) along, but we’ve done so much for customers in terms of configurable machines and sustained-use discounts and all those kinds of things. You never want to win on price, you want to win on the value of your product.
Maybe this is a reflection of more multiyear deals being signed or moving away from ad hoc consumption?
Greene: Well, it’s a reflection of us becoming more enterprise-focused, I think, and really looking at the business more holistically and the value we’re bringing and seeing the whole business together. Because in addition to the service itself, we’re giving support for it and we’re giving engineering help to deploying it. I think it’s just maybe the business is focused more on our value now. I mean, we don’t want to charge too much, but to arbitrarily lower the cost every year is not (sustainable).
Google had a reputation at one point for being extremely aggressive on pricing. I don’t know if that is still the case.
Greene: Yeah. I think we’re a bargain simply because we perform better, generally. Say we were twice as fast at doing the same thing and we both had the same price. We’d be half the price because you only pay for what you use, so we have to do some work on that to show that to people.
I think we have more work to do. Because what we hear from people when they move to our cloud, they were doing all this analysis on pricing and everything, and then they start running on our cloud and it costs them a lot less than they thought it would because of our flexibility in how you configure (workloads), how we only charge for what you use and we give sustained-use discounts. And then we have great performance, and the more performant you are … we keep optimizing for our customers.
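Greene’s point about effective cost can be made concrete with a toy back-of-the-envelope calculation. The numbers and the discount rate below are made up for illustration and do not reflect real Google Cloud pricing; the sketch just shows why finishing the same job in half the time halves the bill under pay-for-what-you-use billing, and how a sustained-use discount lowers it further.

```python
# Toy model of per-use cloud billing. All figures are hypothetical.

def effective_cost(hourly_rate, hours_needed, sustained_discount=0.0):
    """Cost when you pay only for the hours you actually consume,
    optionally reduced by a sustained-use discount fraction."""
    return hourly_rate * hours_needed * (1 - sustained_discount)

rate = 1.00  # dollars per hour; same sticker price for both providers

# Same job, but one platform runs it twice as fast.
slower_platform = effective_cost(rate, hours_needed=10)  # 10.00
faster_platform = effective_cost(rate, hours_needed=5)   # 5.00

# A hypothetical 20% sustained-use discount shaves it further (~4.00).
faster_discounted = effective_cost(rate, 5, sustained_discount=0.20)

print(slower_platform, faster_platform, faster_discounted)
```

Under these assumptions the faster platform is half the effective price at the same hourly rate, which is the comparison Greene is describing.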
Given your background, when it comes to the rise of virtual machines to containers to functions (serverless), how have you seen that playing out and where do you think, personally, that’s going over the next couple of years?
Greene: Well, I think you always want to use the lightest weight container (in the generic sense, she clarified) that you can, right? Whether it’s a VM or a container or a function or whatever it is. And then whatever that container is, you want to have a really nice way to manage it, to keep it running, to secure it, to monitor it. So, then, depending on how much context you have, you can use a lighter and lighter weight thing, like if it’s just a little thing that comes up, does its job, and goes away again, just put it in a function.
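The "little thing that comes up, does its job, and goes away again" maps onto function-as-a-service handlers. The sketch below mirrors the general shape of an event-triggered cloud function (payload in, response out), but the handler name, event fields, and return format are all illustrative assumptions, not the real Cloud Functions API.

```python
# Sketch of a function-as-a-service handler: stateless, short-lived,
# no long-running VM or container to manage. Names are hypothetical.

def resize_request_handler(event):
    """Comes up, does one small job, and goes away again."""
    # Hypothetical payload: which image to process and a target width.
    image = event.get("image", "")
    width = event.get("width", 128)
    return {"status": "ok", "resized": f"{image}@{width}px"}

# Locally, the handler can be exercised like any plain function:
print(resize_request_handler({"image": "cat.png", "width": 64}))
```

The design point Greene makes is that the less context a task carries, the lighter the unit it can run in: long-lived state suits a VM, a packaged service suits a container, and a stateless one-shot task like this suits a function.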
What do you want people to remember about your time at Google?
Greene: You’re asking me what do I want my legacy to be? I don’t really like … It’s not something I think about, my legacy.
I mean, I’ve worked with the team, I understood the enterprise. I was someone that came in and really understood the enterprise and really likes the enterprise, really likes working with companies, so I think imparting what the enterprise needs and also how great it is to work with all these companies and bringing in people that also understood (that work).
I mean, we have an incredible team now of people that understand the enterprise well, but they’re also real technologists. So, getting everybody, working with everybody, to both see what we could do and then enjoy doing it well and then start doing it, collaborating with all of Google to bring our technologies and help every company, if I’ve been a force for that, I’ll be proud of it.