
Facebook vice president of engineering Jay Parikh visited the Seattle engineering office on Wednesday.

One of the first things that struck me upon entering Facebook’s 17-month-old engineering office in downtown Seattle today was the … silence. Despite the open newsroom-style environment on the 8th floor of 101 Stewart, coders worked away with very little verbal communication. Just the tap, tap, tap of keystrokes — presumably using Facebook chat to get tasks done.

The office — the first engineering center for Facebook outside of Silicon Valley — now employs about 60 folks. They won’t be there for long. The company recently announced plans to move to a larger facility in downtown Seattle with room to nearly double the staff.

There’s certainly a lot of work to be done, which is one reason why Facebook’s engineering team has swelled to about 800 people worldwide. (Total employment now stands at about 3,000.)

Keeping the world’s largest social network — now at more than 800 million members — humming is no easy task. Jay Parikh, the engineering vice president who oversees Facebook’s infrastructure team, admits that there’s no playbook for many of the technical challenges they face.

“You have to rethink everything you know. These problems have not been solved,” said Parikh, pointing specifically to the storage challenges of archiving some 100 billion photos.

Because of the uniqueness of the challenges, Parikh said the company must rely on its engineers to creatively solve complex problems. And those problems are numerous, from the best way to deploy video chat to rolling out energy-efficient data centers.

And that’s one of the reasons he says Facebook is trying to recruit a special breed of engineer. We asked Parikh — who was in Seattle today — what it takes to work as a Facebook engineer. We also asked him what problems he’d solve for Facebook if he were able to wave a magic wand.

Read on for excerpts from his conversation with GeekWire.

GeekWire: How do you manage and scale an engineering workforce that is growing so rapidly?

Parikh: “We have to very much focus on finding the talent that we need — whether it be straight out of school or experienced or international. We are constantly spending time looking for the folks that bring great technical expertise, passion, energy and fit our culture — (people) that want to be part of something that is at such massive scale and is growing, and presents so many interesting challenges…. It is not only the recruiting part of it. But it is also the happiness, retention, motivation part of it. That’s where I spend a lot of my time.”

GeekWire: What makes a good Facebook engineer?

Parikh: “A couple things. First and foremost, it is just the passion and energy and drive to be successful and to do great things. We measure and qualify everything that we do in terms of impact. If you focus on the type of impact you can make — whether it be on building some new feature for our iPhone apps or building some great tool that increases the productivity of the engineers or solving some really hard-core distributed systems problem for our caching tier — those are all ways to make a big impact. And people who are ambitious and want to make a big impact, that’s the top line or the first order bit in being a great engineer. Obviously, technical competence — whether it be in algorithms or if you are a kernel expert or you are a network expert — will certainly help.

And then a third bit of this is teamwork. We are very much a team-oriented culture and come together across many different functions, because we have to solve these insanely hard problems that haven’t been solved before. We don’t have a textbook we can read and say: ‘Oh, that’s the solution to storing over 100 billion photos for the largest photo sharing site in the world.’ That’s not written in any textbook or on TechCrunch or anything like that. We have to have people who are ambitious and can experiment and can come up with ideas for how to build that stuff.”

GeekWire: What’s the engineering talent like in Seattle compared to Silicon Valley?

Parikh: “The main motivation for coming up to Seattle is that we believe, and we still believe, that Seattle has an incredible talent pool of engineers who do have this passion to solve big problems.”

GeekWire: You said it is a team-based culture. How do you maintain that as you expand to Seattle, and soon into New York?

Parikh: “There are certain sets of projects that are wholly owned and run out of the Seattle office. So, the team is here, the lead is here and all of the building is done here… Some of the infrastructure work we do — half the team is here and half the team is in Menlo Park. We try to make sure there is good ownership across the problem set, where some part of the system is owned by the engineers here, and the other part of it is owned by the team in Menlo Park, and there is regular communication. So, we’ve invested a lot in video conferencing and Skype and going back-and-forth — because it is a short day trip. We use electronic communication, IRC, and email chat profusely internally, so we do a lot to make sure that engineers are chatting with each other and collaborating as much as possible. The same will apply to New York.”

GeekWire: Is it easier to recruit in Seattle or Silicon Valley?

Parikh: “I don’t know if it is easier or not. For us, it is all about getting the right people. Wherever we go, we want to build a high-quality team…. I think the market, just in general, is very competitive today and there is a lot of great opportunity for engineers to look at, and we want to make sure that we are here with a local presence so that we get to show off what we get to work on.”

GeekWire: What has been built here in the Seattle office?

Parikh: “One that we touched on was video calling, and being able to handle that across 800 million active users on the site presents all sorts of interesting challenges in terms of how do you handle the load, what is the precise user experience … how do we make sure that the back-end infrastructure is connecting and there is a high-quality experience there. This is something that had to be integrated into the core messaging chat functionality in a nice, smooth user experience.”

GeekWire: Has there been good uptake of that product?

Parikh: “I don’t think we’ve put any stats out, but I generally believe that we are pretty happy with where that feature is. As Facebook works on features, we always are iterating, so we are never once-and-done. We put things out there, and watch how they are used by the user base and then try to enhance that feature and keep it engaging and interesting and exciting.”

GeekWire: What are the other challenges you face in terms of scaling the site and handling the user growth as you add new features?

Parikh: “There are two examples I can give you. So, we recently released this feature, Timeline, that replaces your old Facebook profile. If you have played around with Timeline or have seen it, you have noticed that it gives you this long historical view back on everything a person has done on Facebook — what they have shared, what activities they have done, what they have posted about, etc. — and it allows you to really share and understand and feel like you get to know somebody in a better way.

The challenges there are pretty fascinating because there is a lot more data that is represented on Timeline, and in one finger click you can scroll through years and years of history of how somebody has represented themselves on Facebook. That is something that has never been done before…. Now, with Timeline you have the ability to go back kind of infinitely, and one challenge is that there is a lot more data that you have to make sure is accessible and can be served fast in this user experience. The other part of this is you want to make sure the user experience — the way it is rendered in the browser and the navigation of the site — is easy and intuitive, and it is not confusing and cluttered.

So, there are interesting challenges there. One was to layer the development process: having the folks working on the front-end user experience, who understand how to refine and lay out the content, working in parallel with the folks who were trying to figure out how to scale and store and make this massive amount of data fast and efficient — and doing that all basically in a six-month time frame. In most cases, this would be a serialized project that would take a year or a year-and-a-half to do, but we did it in six months because we ‘parallelized’ the development of the layers of the system.

That was a recent example of how we move fast and how we are bold in thinking about developing these features, and how we can bring a team of folks together working on different parts of the overall feature, with the teams working in parallel.”
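The kind of backward scrolling Parikh describes — serving years of history without loading it all at once — is commonly handled with cursor-based pagination, where each response carries a cursor pointing at where the next page should start. A minimal sketch in Python (the names and data model here are illustrative, not Facebook’s actual implementation):

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Story:
    timestamp: int  # Unix time of the activity
    text: str


def fetch_page(
    stories: List[Story],
    cursor: Optional[int],
    page_size: int = 3,
) -> Tuple[List[Story], Optional[int]]:
    """Return one page of stories older than `cursor` (newest first),
    plus the cursor for the next page, or None when exhausted."""
    ordered = sorted(stories, key=lambda s: s.timestamp, reverse=True)
    if cursor is not None:
        # Only stories strictly older than the last one already shown.
        ordered = [s for s in ordered if s.timestamp < cursor]
    page = ordered[:page_size]
    # More remaining? Hand back the oldest timestamp on this page.
    next_cursor = page[-1].timestamp if len(ordered) > page_size else None
    return page, next_cursor
```

Each "finger click" of the infinite scroll then just calls `fetch_page` again with the cursor from the previous response, so the client never holds more than one page of data at a time.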

GeekWire: Why have you chosen to build some of your systems from the ground up, like your Prineville data center, as opposed to partnering or outsourcing?

Facebook's Prineville data center in Oregon

Parikh: “In order to get the flexibility we needed to build this infrastructure to handle the users and the amount of data, the real-time nature of the application, and the connectedness of all of the data, you have to rethink everything. You can’t rely on the standard solutions because, I think, many of the standard solutions that you can buy constrain your ability to get the cost savings or better performance or a more energy-efficient infrastructure. And, honestly, our approach to a lot of these things is to try to keep it simple. Our servers … you can see that the philosophy was vanity-free. We chucked out all of the stuff we didn’t need off the server, and it is a sheet of metal with the components that we need.”

Previously on GeekWire: “Video: Facebook’s ultra-efficient server, a guided tour”

GeekWire: The Prineville data center is built from the ground up. But your overall data center footprint is pretty small (also in North Carolina and Sweden). Will we see Facebook build out more of its own data centers?

Parikh: “We certainly, right now, continue to explore both options. But, in order to get the power efficiency and the cost savings … those are things that we think we are going to get with our Open Compute designs, which we have also shared with the broader community.”

GeekWire: Since you oversee the infrastructure engineering team, do you feel as if Facebook has been able to scale without major hiccups?

Parikh: “We take performance of the site and quality of the site extremely seriously… It is something that we are constantly working on. As long as we continue to grow and be successful, this is something we continue to work on to make things faster and more efficient and make them more flexible so we can build cooler and more engaging products on top of it.”

GeekWire: You previously worked at Akamai and Ning. Why did you join Facebook?

Parikh: “One is the passion of everybody in the company to do what we do in terms of trying to make the world more open and connected. Everybody is singularly focused on that one mission, and that is really exciting and really fun to be part of. The other part of it is the breakneck speed that we move at — and we make decisions and build stuff. Being able to release a new version of the site every day, at our scale of over 800 million users, is fun. Sometimes it is definitely nerve-wracking when things don’t go perfectly, but we take this stuff seriously and it is important for us to be able to move fast. And that is the fun part of it — not only moving fast, but continuing to move fast.”

GeekWire: You release a new version of the site every day. What do you mean by that?

Parikh: “We release code, basically, to the front-end of the site every day. The front-end code that you see on your phone or on your desktop is getting changed every day. We have a very aggressive release process where engineers can work on their stuff, submit it, and it gets released literally that day or the next day…. This could be major features, bug fixes, optimizations or trying to streamline some flow so users have an easier time finding a new feature — yeah, it is everything.”

Does Facebook need a time machine? Photo: Wikipedia

GeekWire: If you had a magic wand and could fix three things on the engineering side of things, what would they be?

Parikh: “We need the flux capacitor. (Laughs). I don’t know if it is a magic wand, but these are the things we are very hard at work on at Facebook. One is the performance of the site, across the desktop and mobile — smartphones, tablets or whatever. Just making sure that the site is silky smooth and it is fast and responsive. It is such a real-time experience … that stuff has to be lightning fast and feel like a dynamic, engaging user experience. So, speed is something we are ever in pursuit of.

The other thing that I’d say is continuing to make the infrastructure flexible and building it in a way that allows the app developers — the people who are building the front-end of Facebook — to create. We want them to be imaginative. We don’t want them to be constrained by — ‘Oh, you can’t do that because we don’t store that that way, or that’s too slow, or too costly, or not reliable enough.’ So, we constantly want to make sure that there is a minimal amount of friction or obstacles for the creative thinking of the product engineers. We really want to build a flexible and powerful platform and infrastructure … so that while they are trying to come up with the idea, they can iterate really, really quickly. And that typically is with a small number of users, but once they find the magic idea and they say: ‘Wow, that’s it, we need to release that to 800 million users’ — then it can scale overnight to 800 million users.”
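The rollout pattern Parikh describes — iterating on a feature with a small slice of users, then ramping it up to everyone — is commonly built on deterministic percentage bucketing, so a given user stays consistently in or out of the test group as the percentage grows. A minimal sketch (this is a generic illustration, not Facebook’s actual gating system; the function name and hashing scheme are assumptions):

```python
import hashlib


def in_rollout(user_id: int, feature: str, percent: float) -> bool:
    """Deterministically place a user in a bucket from 0.00 to 99.99
    for a given feature; the user is 'in' if the bucket falls below
    the current rollout percentage."""
    digest = hashlib.md5(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0
    return bucket < percent
```

Because the bucket is a pure function of user and feature, raising `percent` from 1 to 50 to 100 only ever adds users to the test group — nobody who saw the feature at 1% loses it at 50%, which keeps the "scale overnight" step a configuration change rather than a migration.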


