Google Spotlights Data Center Inner Workings

May 31st, 2008

This is far out. The Google monster eats computer hardware day and night.

Never mind the power consumption, which must be unthinkable. When will Peak Google occur? HA

Via: CNET:

On the one hand, Google uses more-or-less ordinary servers. Processors, hard drives, memory–you know the drill.

On the other hand, Dean seemingly thinks clusters of 1,800 servers are pretty routine, if not exactly ho-hum. And the software the company runs on top of that hardware, enabling a sub-half-second response to an ordinary Google search query that involves 700 to 1,000 servers, is another matter altogether.
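Google hasn’t published the serving code behind those numbers, but the shape of the trick is familiar: scatter the query across many index shards at once and gather whatever comes back inside the latency budget. Here’s a minimal asyncio sketch of that scatter-gather pattern; the shard count, the `query_shard` stand-in, and the 500 ms deadline are illustrative assumptions, not details from the article.

```python
import asyncio
import random

SHARDS = 1000        # assumed: one request per index shard, per the 700 to 1,000 servers above
DEADLINE_S = 0.5     # the sub-half-second budget mentioned in the article

async def query_shard(shard_id: int, query: str) -> list[str]:
    """Stand-in for one index server; the real lookup is replaced by a random delay."""
    await asyncio.sleep(random.uniform(0.01, 0.3))
    return [f"shard{shard_id}:{query}"]

async def search(query: str) -> list[str]:
    # Scatter: put one request per shard in flight simultaneously.
    tasks = [asyncio.create_task(query_shard(i, query)) for i in range(SHARDS)]
    # Gather: keep whatever finishes inside the deadline and drop the stragglers --
    # one way software can paper over slow or dead machines.
    done, pending = await asyncio.wait(tasks, timeout=DEADLINE_S)
    for task in pending:
        task.cancel()
    results: list[str] = []
    for task in done:
        results.extend(task.result())
    return results

if __name__ == "__main__":
    hits = asyncio.run(search("peak google"))
    print(f"{len(hits)} shard responses inside the {DEADLINE_S}s budget")
```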

Google doesn’t reveal exactly how many servers it has, but I’d estimate it’s easily in the hundreds of thousands. It puts 40 servers in each rack, Dean said, and by one reckoning, Google has 36 data centers across the globe. With 150 racks per data center, that would mean Google has more than 200,000 servers, and I’d guess it’s far beyond that and growing every day.
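The arithmetic behind that estimate is easy to check, using only the figures quoted above (40 servers per rack, 150 racks per data center, 36 data centers):

```python
# Back-of-the-envelope server count from the figures quoted above.
servers_per_rack = 40
racks_per_data_center = 150
data_centers = 36

total = servers_per_rack * racks_per_data_center * data_centers
print(f"{total:,} servers")  # 216,000 -- "more than 200,000" checks out
```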

Regardless of the true numbers, it’s fascinating what Google has accomplished, in part by largely ignoring much of the conventional computing industry. Where even massive data-center operations such as the New York Stock Exchange or airline reservation systems use a lot of mainstream servers and software, Google largely builds its own technology.

I’m sure a number of server companies are sour about it, but Google clearly believes its technological destiny is best left in its own hands. Co-founder Larry Page encourages a “healthy disrespect for the impossible” at Google, said Marissa Mayer, vice president of search products and user experience, in a speech Thursday.

To operate on Google’s scale requires the company to treat each machine as expendable. Server makers pride themselves on their high-end machines’ ability to withstand failures, but Google prefers to invest its money in fault-tolerant software.

“Our view is it’s better to have twice as much hardware that’s not as reliable than half as much that’s more reliable,” Dean said. “You have to provide reliability on a software level. If you’re running 10,000 machines, something is going to die every day.”
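That last line can be sanity-checked against the first-year failure tally Dean gives below. Extrapolating the roughly 1,000 machine failures per 1,800-machine cluster per year to a 10,000-machine fleet (my extrapolation, not a figure Dean stated):

```python
# Expected machine failures per day, extrapolated from Dean's cluster numbers.
failures_per_cluster_year = 1000   # from the first-year tally below
cluster_size = 1800                # the cluster size Dean calls routine
fleet_size = 10_000                # the fleet in Dean's quote

per_machine_per_year = failures_per_cluster_year / cluster_size  # ~0.56
per_day = fleet_size * per_machine_per_year / 365
print(f"~{per_day:.0f} machine failures per day")  # ~15: something dies every day
```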

Breaking in is hard to do

Bringing a new cluster online shows just how fallible hardware is, Dean said.

In each cluster’s first year, Dean said, it’s typical that:

- 1,000 individual machine failures will occur, along with thousands of hard drive failures;
- one power distribution unit will fail, bringing down 500 to 1,000 machines for about 6 hours;
- 20 racks will fail, each time causing 40 to 80 machines to vanish from the network;
- 5 racks will “go wonky,” with half their network packets missing in action;
- the cluster will have to be rewired once, affecting 5 percent of the machines at any given moment over a 2-day span.

And there’s about a 50 percent chance that the cluster will overheat, taking down most of the servers in less than 5 minutes and taking 1 to 2 days to recover.
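Turning that tally into lost machine-hours shows why Google buys its reliability in software rather than hardware. The sketch below uses midpoints of Dean’s quoted ranges; the repair times for individual machines and failed racks are my assumptions (marked in the comments), since the article gives none, and the flaky “go wonky” racks are omitted because they degrade service rather than take machines down.

```python
# Expected machine-hours lost in an 1,800-machine cluster's first year,
# from Dean's tally above. Midpoints of quoted ranges; durations marked (*)
# are my assumptions, since the article gives none for those events.
CLUSTER = 1800

events = {
    # name: (machines affected, hours down, expected occurrences per year)
    "individual machine failures": (1, 8, 1000),            # (*) 8 h repair assumed
    "power distribution unit":     (750, 6, 1),             # midpoint of 500-1,000 machines
    "rack failures":               (60, 6, 20),             # midpoint of 40-80; (*) 6 h assumed
    "rewiring":                    (0.05 * CLUSTER, 48, 1), # 5% affected over a 2-day span
    "overheating":                 (CLUSTER, 36, 0.5),      # 50% chance; midpoint of 1-2 days
}

total = 0.0
for name, (machines, hours, per_year) in events.items():
    machine_hours = machines * hours * per_year
    total += machine_hours
    print(f"{name:28s} {machine_hours:8,.0f} machine-hours")

availability = 1 - total / (CLUSTER * 24 * 365)
print(f"{'total':28s} {total:8,.0f} machine-hours (~{availability:.1%} machine-availability)")
```

On these assumptions the overheating risk dominates the expected loss, yet the cluster still averages better than 99.5 percent machine-availability, which is exactly the regime where replicated, fault-tolerant software is cheaper than more reliable hardware.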

One Response to “Google Spotlights Data Center Inner Workings”

  1. pdugan says:

    It’d be interesting to see data on what those failure rates become after the first year. It’s possible that they drop off precipitously once they’ve got it running long enough to experience and adjust for the various contingencies that emerge.
