Google Data Center Efficiency Best Practices -- Full Video
Summary
TLDR: This video discusses how Google optimizes its data centers to minimize energy and resource consumption. Through innovations like measuring Power Usage Effectiveness (PUE), managing airflow, adjusting temperature settings, utilizing free cooling methods, and optimizing power distribution, Google has achieved significant energy savings. These best practices have allowed their data centers to use half the energy of typical ones, resulting in millions in savings. The video highlights that even small improvements, such as raising temperatures or reducing unnecessary power conversion steps, can lead to impactful efficiency gains for both large and small data centers.
Takeaways
- 💡 Google innovates data center designs to reduce energy and resource usage significantly.
- 💻 Data centers vary from small closets with a few machines to large warehouses optimized for IT computing.
- 🌱 Google uses 50% less energy than typical data centers due to years of innovation in efficiency.
- ⚡ Power Usage Effectiveness (PUE) is critical for measuring data center efficiency, with Google achieving a PUE as low as 1.09.
- ❄️ Managing airflow is key to energy efficiency; eliminating hot and cold air mixing improves cooling effectiveness.
- 📈 Thermal modeling helps improve airflow, reducing energy consumption through small, simple changes like adjusting CRAC units.
- 🛠️ Google's use of low-cost containment methods, like meat locker curtains and sheet metal doors, results in significant savings.
- 🌡️ Raising the data center temperature from 72°F to 80°F saves thousands of dollars annually without compromising equipment safety.
- 🌊 Google leverages free cooling techniques, such as sea water cooling and evaporative towers, to reduce mechanical cooling costs.
- 🔋 Optimizing power distribution and reducing conversion stages increases energy efficiency, especially with UPS improvements.
Q & A
What are data centers used for at Google?
-Data centers at Google are critical for delivering web services to users. They can range from small rooms with a few machines to large warehouse-scale facilities optimized for power use and IT computing.
What is PUE, and why is it important for data centers?
-PUE, or Power Usage Effectiveness, is a ratio that measures how efficiently a data center uses energy. It compares total facility energy to IT equipment energy, helping to identify areas where overhead energy usage, such as cooling, can be reduced.
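As a minimal sketch of the ratio defined above (the function and variable names here are illustrative, not from the video):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    A PUE of 1.0 would mean every watt goes to IT equipment; values above
    1.0 reflect overhead such as cooling, power distribution, and lighting.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# The 2006 enterprise-typical figure: one watt of overhead per watt of IT load.
print(pue(2000.0, 1000.0))  # → 2.0
# Google's reported trailing-12-month average:
print(pue(1160.0, 1000.0))  # → 1.16
```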
What was the typical PUE of an enterprise data center in 2006, and how does that compare to Google’s recent PUE?
-In 2006, the typical PUE of an enterprise data center was 2.0, meaning for every watt of IT energy consumed, one watt was consumed for overhead. Google’s PUE has improved significantly, with a 12-month average PUE of 1.16 and their lowest data center achieving 1.09.
Why is airflow management crucial in data center efficiency?
-Airflow management prevents the mixing of hot and cold air, which helps maintain efficient cooling. Techniques like CFD analysis and simple design changes can dramatically improve airflow, resulting in reduced energy consumption.
What is free cooling, and how does Google implement it?
-Free cooling refers to using ambient temperatures, such as outside air, to cool data centers without relying on energy-intensive equipment like chillers. Google uses free cooling at all its data centers and even utilizes sea water in Finland and evaporative towers in Belgium.
What are the benefits of raising the temperature in data centers, according to Google?
-By raising the temperature in their data centers from 72°F to 80°F, Google saves thousands of dollars in energy costs annually while ensuring equipment runs safely at higher temperatures, aligning with ASHRAE guidelines.
How does Google minimize energy losses in power distribution?
-Google minimizes energy losses by reducing conversion stages in power distribution. Instead of converting AC power multiple times, they directly use DC power, eliminating unnecessary steps and improving efficiency.
What impact does Google’s approach to evaporative cooling have on water usage?
-Although evaporative cooling uses water, Google’s efficient use of this method saves more water overall by reducing the water needed for energy production, translating into hundreds of millions of gallons saved annually.
What is the significance of measuring PUE frequently, according to Google?
-Frequent PUE measurements allow for more meaningful results, providing accurate data on a data center’s performance. Google recommends measuring PUE as often as possible, ideally every second, to better manage energy efficiency.
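One way to turn frequent samples into the meaningful long-horizon figure described above is to sum energy over the window before dividing, rather than averaging instantaneous ratios. This is a sketch under that assumption; the interval readings are hypothetical:

```python
def trailing_pue(samples):
    """PUE over a window of (total_kwh, it_kwh) interval readings.

    Summing energy before dividing weights each interval by its actual
    consumption, so the average is not skewed by light-load periods.
    """
    total = sum(t for t, _ in samples)
    it = sum(i for _, i in samples)
    return total / it

# Two intervals: an efficient winter reading and a less efficient summer one.
readings = [(1100.0, 1000.0), (1300.0, 1000.0)]
print(trailing_pue(readings))  # → 1.2
```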
What are the key steps Google follows to reduce energy usage in data centers?
-Google follows five key steps to reduce data center energy use: 1) Measure PUE, 2) Manage Airflow, 3) Adjust Thermostat, 4) Utilize Free Cooling, and 5) Optimize Power Distribution. These steps are cost-effective and applicable to both small and large data centers.
Outlines
🏢 Google Data Centers: Innovation and Efficiency
Erik Teetzel introduces Google's approach to data centers, which vary in size from small server rooms to large warehouse facilities optimized for power and computing. Google emphasizes energy efficiency and innovation, achieving significant reductions in energy and water use. These efforts have resulted in Google using half the energy of typical data centers, contributing to a significant reduction in global greenhouse gas emissions. Best practices, such as design choices, are key to improving energy efficiency, and Google's efforts have yielded millions in energy savings.
🔋 Measuring Power Usage Effectiveness (PUE)
Kevin Dolder explains that managing a data center's efficiency starts with measuring PUE, a ratio of total facility energy to IT equipment energy. A lower PUE indicates better energy efficiency. The typical PUE in 2006 was 2.0, but Google has driven its PUE down to 1.16 in the last year, with the lowest data center reaching 1.09. Accurate and frequent PUE measurements are crucial for meaningful results, and integrating them into a building management system is essential for optimal operation.
💨 Optimizing Airflow for Energy Savings
Erik Teetzel discusses the importance of managing airflow in data centers to reduce energy consumption. Preventing the mixing of hot and cold air is vital, and Google uses CFD analysis to identify hotspots and direct airflow. Simple design changes, such as adjusting CRAC intake positions, have improved airflow efficiency. Google implemented cost-effective retrofits like meat locker curtains and sheet metal doors to separate cold and hot aisles, leading to improved airflow and significant energy savings.
🧊 Sealing and Temperature Control for Efficiency
Erik Teetzel outlines further steps to enhance data center efficiency, such as sealing unused rack space with blanking panels to create a tightly controlled environment. Raising the cold aisle temperature from 72°F to 80°F saves thousands of dollars in energy costs annually, without compromising equipment safety. Free cooling, using ambient outdoor temperatures to reduce the need for mechanical cooling, is another strategy Google employs at all its data centers, resulting in tremendous efficiency gains.
🌊 Free Cooling and Water Efficiency
Chris Malone highlights Google's use of free cooling and evaporative cooling to eliminate or reduce mechanical cooling. In Europe, two of Google's data centers operate without chillers, utilizing local conditions such as sea water cooling in Finland and evaporative towers in Belgium. These methods not only enhance energy efficiency but also save significant amounts of water. For every gallon used in evaporative cooling, two gallons are saved in energy production, leading to hundreds of millions of gallons saved annually.
⚡ Optimizing Power Distribution in Data Centers
Tracy Van Dyk discusses how Google minimizes power losses by reducing unnecessary conversion stages in data centers. Traditional setups involve multiple conversions between AC and DC power, which lead to inefficiencies. Google eliminated these by integrating batteries directly into server trays, simplifying the process. Ensuring efficient AC/DC power supplies, such as Energy Star-certified models, also helps reduce energy use. These innovations save Google over $30 per server annually.
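The effect of stacked conversion stages can be illustrated by multiplying per-stage efficiencies. The figures below are hypothetical, chosen only to show why removing stages helps; they are not Google's measured numbers:

```python
from math import prod

def end_to_end_efficiency(stage_efficiencies):
    """Overall efficiency of a power path is the product of its stages."""
    return prod(stage_efficiencies)

# Hypothetical legacy path: UPS rectifier -> UPS inverter -> server AC/DC supply.
legacy = end_to_end_efficiency([0.94, 0.94, 0.90])
# On-board battery path: only the server's own DC conversion remains.
direct = end_to_end_efficiency([0.90])
print(f"legacy: {legacy:.3f}, direct: {direct:.3f}")
```

Even with each stage above 90% efficient, chaining three of them loses roughly a fifth of the power before it reaches the components.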
🔑 Best Practices for Data Center Efficiency
Erik Teetzel summarizes Google's five key steps to data center energy efficiency: 1) Measure PUE, 2) Manage airflow, 3) Adjust thermostat, 4) Utilize free cooling, and 5) Optimize power distribution. These cost-effective approaches are applicable to data centers of all sizes, and many of them return savings within 12 months. By following these practices, significant reductions in energy consumption can be achieved, contributing to both environmental and financial benefits.
Keywords
💡Data Center
💡PUE (Power Usage Effectiveness)
💡Energy Efficiency
💡Airflow Management
💡Free Cooling
💡ASHRAE Guidelines
💡Conversion Stages
💡UPS (Uninterruptible Power Supply)
💡Containment
💡Cost Savings
Highlights
Google designs its data centers to minimize energy, water, and resource use, resulting in the use of half the energy of a typical data center.
Data centers represent 15% of the 2% of global greenhouse gas emissions produced by the ICT sector, making efficiency critical.
Power Usage Effectiveness (PUE) measures total facility energy against IT equipment energy. Google's lowest PUE is 1.09, compared to the typical 2.0 in 2006.
Regularly measuring PUE, as often as every second, gives more meaningful data, helping data centers operate more efficiently.
Google uses CFD analysis and other methods to optimize airflow, reducing energy costs by improving how air moves in data centers.
Simple retrofits like sheet metal extensions and meat locker curtains can improve airflow, saving $65,000 in energy costs annually for Google.
Google saves thousands of dollars by raising the temperature of their cold aisles to 80°F, rather than the previously believed necessary lower temperatures.
Free cooling methods, such as using ambient outdoor temperatures, are employed at all Google data centers to reduce mechanical cooling needs.
Google’s data centers in Belgium and Finland use chiller-less cooling, relying on evaporative cooling and seawater cooling, reducing energy and water use.
Google's evaporative cooling systems save water by eliminating the need for two gallons of water on the energy production side for every gallon used.
Minimizing power conversion stages and optimizing power supply efficiency, such as using DC instead of AC in servers, saves both energy and money.
On-board batteries eliminate three unnecessary power conversion stages in Google servers, enhancing efficiency and reducing energy loss.
By making power supplies more efficient, including using Energy Star-certified supplies, Google saves over $30 per server annually.
Many best practices in energy efficiency can deliver cost savings within 12 months, making them applicable for both small and large data centers.
Google recommends five steps for data center efficiency: Measure PUE, Manage Airflow, Adjust Thermostat, Utilize Free Cooling, and Optimize Power Distribution.
Transcripts
ERIK TEETZEL: Here at Google, data centers are very
important to us.
They are how we deliver all of our web services
to all of our users.
A data center can mean a variety of things.
It can mean a small closet filled with a couple of
machines all the way to very large warehouse scale
buildings that are optimized for power use and IT computing
and filled with thousands of servers.
At Google, we spend a lot of time innovating the way in
which we design and build these facilities to minimize
the amount of energy, water, and other resources
that these computing facilities use.
In terms of the results of all of the work that we've been
doing over many, many years, now we use half of the energy
of the typical data center.
To put things into perspective, the entire ICT
sector, that includes mobile phones, computers, monitors,
cell phone towers, represents roughly 2% of global greenhouse
gas emissions.
Of that 2%, the data center portion is
responsible for about 15%.
There's design choices that you can make for energy
efficiency that improve the performance
of your data center.
And these things are just best practices.
And adhering well to best practices, that's how you can
actually make the most improvement in
terms of energy use.
The results of these types of activities return
Google millions of dollars in energy savings.
So the results are significant.
We've invited several members of our data center team
to explain some of these best practices to all of you.
KEVIN DOLDER: The first step in managing the efficiency of
your data center is to make sure you have the
instrumentation in place to measure the PUE, or power
usage effectiveness.
PUE is the ratio of total facility energy to IT
equipment energy within your data center.
It's a measure of how effectively you deliver power
and cooling to the IT equipment.
In 2006, the typical PUE of an enterprise
data center was 2.0, which means that for every watt
of IT energy consumed, one watt of
overhead was consumed by the facility to deliver the power
and cooling.
ERIK TEETZEL: Reducing the overhead is
really what you want.
You want PUE to get to as close to 1.0 as possible.
KEVIN DOLDER: Over the last 12 months, our TTM PUE was 1.16.
We've continuously measured that and it's gone down nearly
every quarter since we began reporting it back in 2008.
Last quarter the lowest data center was 1.09.
Ideally, you should measure PUE as fast as you can, as often
as you can, every second or so.
And the more often you can measure it, the more
meaningful the results will be.
It's important to measure PUE over the course of a year --
annually or quarterly -- to get a meaningful result.
If you just take snapshots in time the information won't be
realistic and it won't really be an actual measure of how
well your data center is operating.
One way to make it easier to manage is to incorporate the
PUE measurements into your building management system.
We do this at all of our sites at Google.
Without having easy access to this data, we wouldn't be
able to operate our data centers as
efficiently as we do.
ERIK TEETZEL: Once you have the ability to measure and
manage your PUE, the first step in terms of reducing your
data center energy load is to focus on the
management of the air flow.
The most important thing here is to eliminate the mixing of
the hot and the cold air.
And there's no one right way to do this.
Containment can be achieved through many different
approaches.
One thing we found very useful at Google is CFD analysis to
see where your hot spots are and how your air flow is
actually going to be directed in your data center.
By doing so, you can actually model the way in which air
flow will go and it helps you make very simple design
choices to improve the air flow in your data center.
For example, in one of our computing and networking
rooms, we call them CNRs, we actually did some thermal
modeling to see exactly what air flow was doing.
Through that modeling we realized that the intake to
our CRACs was too low.
And that by simply piecing together some sheet metal we
could create extensions that would dramatically increase
the air flow quality into the CRACs.
We also did a bunch of other retrofits.
KEN WONG: Here in this corporate data center at
Google, we've implemented meat locker curtains that are very
inexpensive and easy to install.
These are hung from the overhead structure and they
separate the cold aisle, which is actually hot, and the hot
aisle, which is actually hotter.
We are now set to enter the hot aisle containment door.
And we incorporated these simple, inexpensive sheet metal doors
to separate very tightly the cold aisle from the hot aisle.
Now over here, we've got the hot air from the racks
coming up, going overhead, up through the return
air plenum
back to the CRAC units to give you a nice high temperature
differential across your CRAC units.
A very important step is to seal the rack space where you
don't quite have all of your equipment populated.
And it's very easy to do with these blanking panels.
It's almost like weatherizing your house to make sure that
you've got a nice, tight environment.
ERIK TEETZEL: All totaled, we spent about $25,000 in parts.
And those $25,000 saved us over $65,000 in
energy costs yearly.
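The retrofit figures just quoted imply a very short payback period. As a sketch (simple payback, ignoring discounting):

```python
def payback_months(upfront_cost: float, annual_savings: float) -> float:
    """Simple payback period in months, ignoring discounting."""
    return 12.0 * upfront_cost / annual_savings

# The airflow retrofit above: $25,000 in parts, over $65,000 saved per year.
print(round(payback_months(25_000, 65_000), 1))  # → 4.6
```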
Once you manage your air flow properly, the next step in
data center efficiency is to increase the temperature of
your cold aisle.
It's long been believed by many data center operators
that the data center has to be cold to keep all the equipment
at a temperature that it will run safely at.
And in fact, that's just false.
So if you look at recommended guidelines from ASHRAE, they
recommend running all the way up to 80 degrees
Fahrenheit.
And at Google, that's exactly what we do.
We've got a small corporate data center here.
It's about 200 kilowatts of load.
Simply raising the temperature from 72 degrees to 80 degrees
saves us thousands of dollars in energy costs
every single year.
What's nice about that is it also allows our employees to
come to work in shorts.
Whenever possible, we recommend that people free cool.
Free cooling means utilizing ambient temperatures outside
of your data center to be able to provide cooling without
operating very heavy energy consuming
equipment like chillers.
CHRIS MALONE: We use free cooling at
all of our data centers.
And you can see this in our publicly recorded PUE data
where the PUE values go up in the summertime and down in the
wintertime.
And this is just a reality of running our operations
with free cooling.
And it yields tremendous efficiency gains.
In Europe, we have two data centers that have no chillers
whatsoever.
We're able to take advantage of the local constraints and
conditions.
In Belgium, we use evaporative towers without any chillers
given the ambient conditions.
In Finland, we use sea water cooling.
Sea water from the Gulf of Finland cools the servers.
And then we temper the water returning to the Gulf of
Finland so there's no temperature gradient
returning to the gulf.
Evaporative cooling uses water on site, but what we found
through our studies is that by the use of evaporative cooling
in a very efficient fashion, we save water on the whole.
So for every gallon of water that we use in the evaporative
cooling plants, we eliminate the use of two gallons of
water on the energy production side.
This translates into hundreds of millions of gallons per
year in water savings.
There's no one right way to deliver free cooling.
The important point is that you should examine these
opportunities and take advantage of them to eliminate
or reduce substantially the mechanical cooling.
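A control loop for deciding when free cooling can carry the load might look like the following sketch. The approach temperature, threshold logic, and function names are illustrative assumptions, not Google's actual control system:

```python
def cooling_mode(ambient_wet_bulb_f: float, supply_setpoint_f: float,
                 approach_f: float = 7.0) -> str:
    """Pick a cooling mode from ambient conditions.

    Evaporative towers can cool supply water to roughly the ambient
    wet-bulb temperature plus an 'approach' margin; if that meets the
    supply setpoint, the chillers can stay off.
    """
    if ambient_wet_bulb_f + approach_f <= supply_setpoint_f:
        return "free cooling"
    return "mechanical cooling"

print(cooling_mode(55.0, 65.0))  # → free cooling
print(cooling_mode(70.0, 65.0))  # → mechanical cooling
```

Raising the supply setpoint, as described earlier in the video, widens the range of ambient conditions under which this check succeeds.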
TRACY VAN DYK: In the data center, you pull power in from
the electrical grid and you convert it down to the
voltages that are needed for all the
components in the data center.
And there are a lot of conversion stages in there.
By minimizing those conversion stages, you can save
money and save energy.
Also by making each conversion stage more efficient you can
save energy, as well.
Traditionally, one of the biggest losses is the UPS, or
Uninterruptible Power Supply.
Typically, there's a giant room of batteries.
The batteries are DC voltage.
And the power coming in to charge those batteries is AC.
And so you need to convert from AC down to DC with a
rectifier in order to charge the batteries.
And then when the batteries are needed in a power event,
you need to convert that back to AC with an inverter.
And then the AC needs to be converted back down to DC for
all the components in the data center.
So you've got three conversion stages in there
that are not necessary.
What Google has done is put a battery on board the tray.
So you're eliminating those three conversion steps.
You just have DC right into the server components.
In a typical server configuration, you have a
server with an AC/DC power supply attached to it.
By making sure that AC/DC power supply is efficient, you
can save a lot of energy.
Things like Energy Star labels will point you to power
supplies that are 90% plus efficient.
Google is able to save over $30 per year per
server by implementing all of these features.
ERIK TEETZEL: There really are very simple, effective
approaches that all of us can implement to reduce the data
center energy use.
And most of them are cost effective within
12 months of operation.
So a lot of efficiency best practices should be adopted by
just about everyone.
They're applicable to small data centers
or large data centers.
It's simply following the five steps that we go through here
to make sure that you're able to reduce your energy use.
1. Measure PUE
2. Manage Airflow
3. Adjust Thermostat
4. Utilize Free Cooling
5. Optimize Power Distribution