
Datacenters, Virtualization, and the Promise of Cloud Computing

Get Your PUE and DCIE Straight, and Think Again About What You Want to Do

It's simple, really: Cloud Computing is a much cooler term than Virtualization. It's easier to say and easier to grasp immediately, even if it isn't very specific.

But a squishy term like Cloud Computing lets people define it to mean whatever they want it to mean; its current vagueness can leave people unsure whether the term means anything at all, and whether it can help them improve their businesses and lives.

A Virtual Analysis
Let's set that aside for a bit and put the focus back on virtualization, because even though not all virtualization is Cloud Computing, all Cloud Computing requires virtualization.

I alluded recently to figures from a Terremark facility study that showed a 50% reduction in power requirements with virtualized versus non-virtualized resources. Virtualized resources are also said to increase productivity from the 10-15% range to the 80-85% range; the Terremark test showed that virtualized resources consume about twice the power per square meter (the gear is denser, but you need far fewer square meters of it).

Thus, let's call this a 7X productivity improvement, divided in half by the extra power, or a 3.5X improvement in the power required per transaction. This works out to a power reduction of about 71%. Since we're working with rough numbers here--not trying to land anything on Mars--let's call it 70%.
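
For anyone who wants to check that arithmetic, here's the back-of-the-envelope version in a few lines of Python (the inputs are just the rough midpoints of the ranges cited above, not measured data):

    # Rough sanity check of the virtualization numbers above.
    baseline_util = 0.125   # midpoint of the 10-15% utilization range
    virtual_util = 0.825    # midpoint of the 80-85% range
    power_penalty = 2.0     # virtualized gear draws ~2X the power per square meter

    productivity_gain = virtual_util / baseline_util   # ~6.6, call it 7X
    net_gain = productivity_gain / power_penalty       # ~3.3-3.5X per transaction
    power_reduction = 1 - 1 / net_gain                 # ~70%

    print(f"{productivity_gain:.1f}X productivity, {net_gain:.1f}X net, "
          f"{power_reduction:.0%} less power per transaction")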

As the cliched old sales pitch goes, "If I could offer you a 70% reduction in your utility bill, would you be interested?"

The Aluminum Analogy - Bear With Me
Before I was fortunate enough to get involved in the technology business, I spent a few years reporting on heavy industry, including mining. In those early days I learned that aluminum is one of the great power hogs on the planet.

Aluminum is abundant, and there's still enough to last for many centuries. But it doesn't occur naturally in metallic form, so you have to extract it from ore. To do this, you melt it at about 2,000 degrees F, then shoot a big jolt of electricity through it to tease out the good stuff.

This requires huge amounts of power, so much so that aluminum plants are almost invariably located near big hydroelectric power plants (the Tennessee Valley in the US, for example).

 

It takes about 15 kilowatt hours of electricity to create a kilogram of aluminum. So if you wanted to crank out, say, just a couple hundred pounds of it a day in your garage, Monday through Friday, you'd first need to make sure you can pay for 30,000 kilowatt-hours per month. That's $3,000 at 10 cents per kilowatt-hour.
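
Checking that garage scenario (a rough sketch, assuming about 22 working days in a month):

    # Garage aluminum: ~200 lbs a day, weekdays only, at 10 cents per kWh.
    kwh_per_kg = 15
    kg_per_day = 200 * 0.4536                   # ~91 kg
    monthly_kwh = kg_per_day * kwh_per_kg * 22  # ~30,000 kWh
    print(f"{monthly_kwh:,.0f} kWh/month, about ${monthly_kwh * 0.10:,.0f} at 10 cents/kWh")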

World production of aluminum is about 30 million (non-metric) tons. That's about 27 billion kilograms, which requires about 405,000 gigawatt-hours of power per year. This works out to a continuous power requirement of about 46 gigawatts, or about 2.3 percent of all the world's power. Just to make aluminum (or aluminium, as most of our British Commonwealth friends say). Keep recycling those cans!
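
And the world-production figure, run the same way (again, rough numbers only):

    # World aluminum production, expressed as a continuous power draw.
    world_kg = 27e9                   # ~30 million short tons, per the text
    world_gwh = world_kg * 15 / 1e6   # ~405,000 GWh per year
    continuous_gw = world_gwh / 8760  # hours in a year -> ~46 GW
    print(f"{world_gwh:,.0f} GWh/year, roughly {continuous_gw:.0f} GW around the clock")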

The Contrast with Datacenters
I kept these figures in my head as I contemplated the power requirements for data centers. Everybody knows they require a lot of power, and that this requirement may already be in hockey-stick growth mode.

By way of contrast with aluminum, US data centers, according to an EPA report issued in 2007, were at the time consuming about 61 billion kilowatt-hours per year, representing about 1.5% of US consumption, or 0.3% of world consumption. The report further said this power requirement was expected to double by 2011. It hasn't quite reached aluminum's Olympian heights, but it's on the path to do just that. (Plus there are the non-trivial power requirements associated with building all the chips, cases, cables, and buildings we'll need.)
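
To see where the 1.5% and 0.3% shares come from: the 61 billion kWh is the EPA's number, but the US and world consumption totals below are my own rough approximations for that period, not figures from the report.

    # Putting the 2007 EPA figure in context.
    datacenter_twh = 61        # billion kWh, i.e., TWh (from the EPA report)
    us_total_twh = 3900        # approximate US electricity consumption at the time
    world_total_twh = 18500    # approximate world electricity consumption at the time
    print(f"US share: {datacenter_twh / us_total_twh:.1%}, "
          f"world share: {datacenter_twh / world_total_twh:.2%}")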

PUE and DCIE - Easy to Grasp
Datacenters have also been focused on PUE (Power Usage Effectiveness, although I always want to say efficiency) and its reciprocal, DCIE (Data Center Infrastructure Efficiency). The bad news here is that we've added two more ingredients to the industry alphabet soup; the good news is that these are very easy things to understand, if diabolically difficult to engineer and improve.

 

DCIE is a percentage, and PUE is a ratio. Both compare the total power delivered to the facility (air-conditioning, lighting, power distribution, and so on) with the power that actually reaches the servers. If the overhead equals the IT load, you have a DCIE of 50% and a PUE of 2.0. In the perfect, platonic world, your DCIE is 100% (you don't need any air-conditioning or other overhead) and your PUE is 1.0. In the real world, 50% / 2.0 is average, and many datacenters are striving to reach levels between 67% / 1.5 and 77% / 1.3.
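
Expressed as formulas (a minimal sketch; "facility" here means everything the building draws, and "IT" means the servers, storage, and network gear):

    def pue(facility_kw, it_kw):
        """Power Usage Effectiveness: total facility power over IT power (1.0 is ideal)."""
        return facility_kw / it_kw

    def dcie(facility_kw, it_kw):
        """Data Center Infrastructure Efficiency: the reciprocal of PUE, as a percentage."""
        return it_kw / facility_kw * 100

    print(pue(1000, 500), dcie(1000, 500))  # the "average" case: 2.0 and 50%
    print(pue(650, 500), dcie(650, 500))    # a good facility: 1.3 and ~77%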

If the datacenter industry has doubled since the time of the EPA report I cited, then US datacenters are now using $12-14 billion worth of electricity on an annualized basis. This one's tricky, as local electricity rates vary in the 8- to 15-cent-per-kilowatt-hour range. I'll just use 10 cents for now.
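
The arithmetic behind that range, assuming the EPA figure has indeed doubled:

    # Annualized US datacenter electricity bill, if the 61 billion kWh has doubled.
    annual_kwh = 61e9 * 2                   # ~122 billion kWh
    for rate in (0.08, 0.10, 0.15):
        print(f"At {rate * 100:.0f} cents/kWh: ${annual_kwh * rate / 1e9:.1f} billion")
    # At the 10-cent working figure, that's a bit over $12 billion a year.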

(The world offers a big contrast in rates as well. I pay 17 cents per kilowatt-hour in the Southeast Asian country where I'm based right now, due to inefficient generation, monopoly local control, and a weak dollar. Meanwhile, the CEO of the big Tier III MEEZA datacenter in Qatar is quite happy to pay about 2.5 cents per kilowatt-hour. I don't yet have a full grasp of how heat truly affects PUE, or how a tropical climate compares with a desert climate. The related question is how bitterly cold climates affect PUE: at what point do you have to start pumping in warm air instead of cold air? Is there such a point?)

Cut to the Chase
Engineers managing and building datacenters have a deep grasp of all this stuff. There is plenty of information online for those who want to dive into at least the medium-deep part of the pool.

But to me, what's important is that C-Suite executives--and even more important, politicians--get a grasp of the dimensions of the problem. Quantum improvements in efficiency through virtualization (and by extension, Cloud Computing), along with incremental improvements in PUE, can return many billions of dollars to the global economy through reduced power consumption.

Even as overall power consumption rises, we also need to measure what people and organizations are doing with all this virtualization and Cloud Computing. Are they offering improved government services? Are they improving urban traffic flows, creating smarter buildings, and reducing urban power requirements globally? Are they bringing new efficiencies, even capabilities, to smaller businesses and developing economies worldwide? Or are they just adding more pics to Facebook and harvesting Farmville?

More Stories By Roger Strukhoff

Roger Strukhoff (@IoT2040) is Executive Director of the Tau Institute for Global ICT Research, with offices in Illinois and Manila. He is Conference Chair of @CloudExpo & @ThingsExpo, and Editor of SYS-CON Media's Cloud Computing, Big Data & IoT Journals. He holds a BA from Knox College & conducted MBA studies at CSU-East Bay.