A recent post at Rich Miller’s Data Center Knowledge site reveals that Google’s data center in Belgium doesn’t just avoid using its chillers to save energy; it doesn’t have any.  As an example of facilities-side innovation to drive efficiency, it’s certainly impressive. But most data center professionals will never have an opportunity to experience that level of facilities-side innovation.  What makes Google’s innovation feasible, however, is workload distribution, which makes it possible to migrate work to other Google data centers to avoid overheating in the face of a Belgian heat wave, for example.

Workload distribution isn’t really new.  But distribution based upon facilities needs (like reliability and efficiency), as in this example at Google, is new (at least to me).  Frankly, facilities-smarts seem a requirement to ensure that moving workloads for other purposes – preserving performance under load, for example – doesn’t yield unintended and disastrous consequences.  Imagine if turning on VMware VMotion moved workloads to servers that looked highly available from a performance point of view, but were dangerously close to overheating because they sat in a hot zone.

But beyond risk avoidance, factoring facilities concerns like efficiency into workload distribution has major upside in improved efficiency.  And this is something that virtually all data center professionals would benefit from, as it doesn’t require changes to the physical plant.  Better still, the efficiency gains are compounded: every unit of energy saved on the IT side saves another unit on the facilities side through avoided cooling.
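To make that arithmetic concrete, here’s a minimal sketch (the PUE of 2.0 is my illustrative assumption for a typical facility, not a figure from Google or anyone else):

```python
def total_energy_saved(it_kwh_saved: float, pue: float = 2.0) -> float:
    """Estimate total facility energy avoided for a given IT-side saving.

    With PUE defined as total facility energy / IT energy, each kWh trimmed
    from the IT load also avoids the (pue - 1) kWh of cooling and other
    overhead that would have been spent supporting it.
    """
    return it_kwh_saved * pue

# Saving 100 kWh of IT load in a PUE 2.0 facility avoids ~200 kWh overall:
# the roughly 2x effect described above.
print(total_energy_saved(100))  # 200.0
```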

I’d say a 2x improvement that doesn’t require major facilities changes is pretty cool.

Michael Manos, currently a Sr. VP at Digital Realty Trust and previously a data center executive at Microsoft, has one of the better blogs on datacenter efficiency.  I added him to the blog roll for those of you who may not already read his stuff.  He tends to write meaty, thought-provoking posts.

If you can work your way around the pot shots at marketing, his latest post is no exception, as it details some shortcomings of the Green Grid’s Power Usage Effectiveness (PUE) ratio.  Through a new-construction and roll-out scenario, he contrasts the design-target PUE with various forms of assessed PUE (like annual average and annual peak), crisply demonstrating how the numbers change dramatically over time, despite the facility generally adhering to best practices.

As Mike says, the actual PUE rating doesn’t really mean anything; it’s the discussion of what’s behind the number and how it’s changing that’s important.
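For readers who haven’t worked with the metric, here is a minimal sketch of how the assessed figures Mike contrasts might be derived from regular meter readings (the sample values and sampling interval are mine, purely for illustration):

```python
# Paired readings of total facility power and IT load power (kW),
# taken at the same instants (e.g., hourly samples across the year).
samples = [
    (900.0, 500.0),   # (total_kw, it_kw)
    (1000.0, 520.0),
    (1150.0, 510.0),  # hot afternoon: cooling load spikes
]

# Average PUE over the period: total energy in / energy delivered to IT.
average_pue = sum(t for t, _ in samples) / sum(it for _, it in samples)

# Peak PUE: the worst instantaneous ratio observed.
peak_pue = max(t / it for t, it in samples)

print(f"average PUE: {average_pue:.2f}")  # ~1.99
print(f"peak PUE:    {peak_pue:.2f}")     # ~2.25
```

The design-target figure, by contrast, is set on paper before the facility carries real load, which is part of why the assessed numbers Mike describes drift away from it over time.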

The Data Center Efficiency Lifecycle

In addition to effective and simple metrics (PUE is certainly a start), the industry needs consistent and regular measurement.  Today this practice is actually far harder to find in the wild than self-proclaimed, seemingly low PUE ratings.  As I’m sure others have pointed out (please let me know who), data center efficiency is a journey, not a destination.  To guide our way, we need to define a high-level process and standardized data collection that provides not only credible point-in-time metrics, but also lets us track our progress over time.  Only then can we consistently assess our efficiency and document the effectiveness (or lack thereof) of various transformations.   To this end, I propose the Data Center Efficiency Lifecycle.  Far from rocket science, I know (I adapted it from the widely used process enterprises rely on to manage IT security).  Feel free to edit the content of the steps to suit your needs, or to add steps that tune it to your environment.  As with PUE itself, the point is not that the exact same process applies to everyone, but simply that we have a continuous process to foster regular improvement, drive a dialog across the organization, document progress over time, and (perhaps) support regulatory / compliance requirements.
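By way of illustration only – the period labels, fields, and numbers below are mine, not steps from the lifecycle itself – here is a minimal sketch of the kind of standardized record keeping and period-over-period tracking the process implies:

```python
from dataclasses import dataclass, field

@dataclass
class Assessment:
    """One measurement period in an efficiency lifecycle."""
    period: str                  # e.g. "2009-Q2"
    total_kwh: float             # energy into the facility over the period
    it_kwh: float                # energy consumed by IT equipment
    transformations: list = field(default_factory=list)  # changes since last period

    @property
    def pue(self) -> float:
        return self.total_kwh / self.it_kwh

def report_progress(before: Assessment, after: Assessment) -> None:
    """Document whether the transformations between two periods helped."""
    delta = before.pue - after.pue
    verdict = "improved" if delta > 0 else "regressed"
    print(f"{before.period}: PUE {before.pue:.2f}")
    print(f"{after.period}: PUE {after.pue:.2f} ({verdict} by {abs(delta):.2f})")
    print("changes made:", ", ".join(after.transformations) or "none")

report_progress(
    Assessment("2009-Q1", total_kwh=2_000_000, it_kwh=1_000_000),
    Assessment("2009-Q2", total_kwh=1_800_000, it_kwh=1_000_000,
               transformations=["raised chilled-water setpoint"]),
)
```

The specifics matter far less than the habit: the same fields collected the same way, period after period, so the trend (and the dialog around it) is credible.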

What do you think?  Are you already doing something like this at your organization?

Details continue to stream in about Yahoo’s planned datacenter outside Buffalo, New York.  This time the news concerns their planned “chicken coop” style design, which includes louvers to maximize the use of outside air for cooling.

Leveraging this and other efficient design techniques, they are aiming for a PUE of just 1.1, which works out to only about 10 percent overhead beyond the IT load itself.  That’s impressive.

Our sources indicate that Yahoo will use a new line of energy efficient servers from HP dubbed the Red Rooster, which run on – you guessed it – chicken feed ;-)

If only it were that easy.

Well done, Yahoo.  There is no doubt plenty to do to bring this dream to fruition.  Thanks for taking the time to share your innovation with the rest of us.

Over the last few months two projects have been launched to build next-generation datacenters via partnerships between colleges / universities and major tech vendors.  The first is a collaboration between Syracuse and IBM.  (Talks nearly broke down over re-naming ‘cuse sports teams “The Greenmen” ;-)  The second project unites MIT, UMass, Cisco, and EMC.

These are great projects that could very well drive the next generation of datacenter efficiency.  In addition, both will create opportunities for some bright college students and two regions that have been particularly hard hit by the flagging economy.

But why stop here?  What about Apple / Cal Berkeley?  Or Oracle / Stanford?

With a few more, we could have a league and bowl games :-)

What other matchups would you like to see?  Who would you place odds on?

The Inspector General recently released an audit of the Department of Energy’s IT energy efficiency.  You can get a copy of the report here.  The report finds that “the Department had not always taken advantage of opportunities to reduce energy consumption associated with its information technology resources.  Nor, had it ensured that resources were managed in a way that minimized impact on the environment.”

This is a little embarrassing, not just because it is the DoE, which given its mandate should serve as an example to others from an efficiency perspective, but also because they were found to have not followed their own guidelines.

This is not to say the DoE hasn’t done anything.  The report gave them a pat on the back for purchasing more efficient computers and for an automated PC shutdown system at HQ that saves $400K / year.

From a datacenter perspective, the report dinged the DoE for having “not always taken the necessary steps to reduce energy consumption and resource usage of their data centers, such as identifying and monitoring the amount of energy used at their facilities.” In fact, none of the data centers reviewed had utilized the Department’s own Data Center Energy Profiler.  Ouch.

Among the conclusions is that the DoE should conduct data center energy usage assessments.  To this we say: way to go, Inspector General!  Frankly, this is excellent advice for anyone.  Without a solid baseline of current energy consumption, ideally one that maps power to useful work, data center managers have no way to intelligently select efficiency measures, nor to gauge their effectiveness.
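As a rough illustration of what mapping power to useful work could look like – the workload metric and numbers here are hypothetical, not anything from the report – a baseline can start very simply:

```python
def useful_work_per_kwh(work_units: float, energy_kwh: float) -> float:
    """Baseline productivity metric: units of useful work per kWh consumed.

    "Work units" is whatever the facility actually delivers -- transactions,
    jobs, requests served -- measured over the same interval as the energy.
    """
    return work_units / energy_kwh

# Hypothetical month: 42 million transactions on 350,000 kWh.
baseline = useful_work_per_kwh(42_000_000, 350_000)

# Re-measure after an efficiency measure and compare against the baseline.
after = useful_work_per_kwh(42_000_000, 310_000)
improvement = (after - baseline) / baseline
print(f"baseline: {baseline:.0f} tx/kWh, after: {after:.0f} tx/kWh "
      f"(+{improvement:.0%})")
```

Whether a measure was worth it then becomes a number rather than an impression.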

According to Data Center Knowledge, Apple is considering a $1 billion east coast datacenter.  If the figure is even close to that amount, it’s a doozy, substantially more than current behemoths from the likes of Google or Microsoft.

As Apple has tried to embrace green manufacturing processes for its products and has emphasized their green features, it will be fascinating to see if they follow through with this new datacenter.  If it’s ultimately a new build-out, or essentially so, then green facilities infrastructure (e.g. free cooling) is probably a no-brainer.  But I can’t help but wonder what they’ll do about servers.  Will the iDC be stacked to the gills with the latest Xserves?  It’s an interesting question, as using Xserves could constrain their energy efficiency relative to today’s next-generation server architectures like Google’s home-grown approach or offerings like Rackable.

Given that well more than half of energy savings can be found in the IT layer, as opposed to facilities improvements, this is no small question – especially for a datacenter of this size and presumed density.  But of course it’s a political hot potato, both within Apple and outside it, if they don’t use their own gear.

Anyone know what they’re doing in their existing data centers?

Please forgive that this post frankly has nothing to do with datacenters or power.  But having blogged about Facebook’s datacenter real estate bill yesterday, I couldn’t help but share today’s speculation at TechCrunch: namely, that Facebook turned down a $200 million venture round that would have valued the company at $8 billion.  Reportedly at issue was not the valuation, but the requirement of a board seat.  The rumor goes on to peg anticipated 2009 revenue at $550 million.

I can guess where the money is going.  Yesterday’s post about their real estate expense is just the tip of the iceberg… IT infrastructure expense and power surely dwarf rent.  But are they really approaching that level of revenue on advertising?
