About the Post

Author Information

Rien Dijkstra is Managing Director at Infrarati, advising organizations on information technology strategy. He speaks at industry conferences and is a co-author of the book Greening IT.

Green cloud computing, which way to go?

What is the right cloud architecture to create a green and sustainable cloud? Should we consolidate into huge mega data centers, or is there another way to go?

The analogy

Data centers are currently built at the intersection of the electrical energy infrastructure and the network (data) infrastructure. Given the current electrical energy infrastructure, the present trend is consolidation of data centers into mega data centers, with economy of scale as the driving force.

In the book “The Big Switch,” Nicholas Carr makes a historical analysis to develop the idea that data centers, in combination with the Internet, are following the same developmental path that electric power did 100 years ago. At that time companies stopped generating their own power and plugged into the newly built electric grid, fed with electric energy produced by huge generic power plants. The big switch is from today’s proprietary corporate data centers to what Carr calls the world wide computer: basically the cloud, with some huge generic data centers providing web services that will be as ubiquitous, networked and shared as the electricity infrastructure is now. This modern cloud computing infrastructure follows the same structure as the electricity infrastructure: the plant (data center), the transmission network (Internet) and the distribution networks (MAN, (W)LAN), delivering processing power and storage services to all kinds of end devices.

It is a nice analogy, but is it the right one? Is the current power grid architecture able to accommodate ever-rising energy demands? And by taking the current power grid architecture as the model for the cloud infrastructure, do we really get a sustainable, robust IT infrastructure by centralizing IT services in mega data centers?

Not everybody is following the line of reasoning of the big switch.

A hitch in the network

While previous studies of energy consumption in cloud computing have focused only on the energy consumed in the data center, researchers1 from the University of Melbourne in Victoria, Australia, found that transporting data between data centers and local computers can consume even larger amounts of energy than storing it. They investigated the use of cloud computing for storage, software, and processing services, on both public and private systems.

The reduction of energy consumption depends on the use case. Used infrequently and at low intensities, cloud computing can consume less power than conventional computing. But at medium and high usage levels, transport dominates total power consumption and greatly increases the overall energy consumed. The researchers explain that home computer users can achieve significant energy savings by using low-end laptops for routine tasks and cloud processing services for infrequent, computationally intensive tasks, instead of using a mid- or high-end PC. For corporations, it is less clear whether the energy saved in transport with a private cloud, compared to a public cloud, offsets the private cloud’s higher energy consumption.
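The trade-off can be made concrete with a toy energy model. The sketch below is purely illustrative: the per-task and per-bit figures are invented placeholders, not values from the Melbourne study, but it shows how the transport term comes to dominate once usage intensity rises.

```python
# Illustrative energy model (placeholder numbers, not taken from the cited study):
# cloud energy = data-center processing + network transport per task,
# compared against keeping a mid-range PC running locally for the same hour.

def cloud_energy_joules(tasks_per_hour, bits_per_task,
                        dc_joules_per_task=50.0,        # assumed data-center cost per task
                        transport_joules_per_bit=2e-6): # assumed network cost per bit
    processing = tasks_per_hour * dc_joules_per_task
    transport = tasks_per_hour * bits_per_task * transport_joules_per_bit
    return processing + transport

def local_energy_joules(pc_watts=120.0):
    # A mid-range PC drawing ~120 W for the whole hour, regardless of load.
    return pc_watts * 3600

for tasks in (1, 50, 500):
    cloud = cloud_energy_joules(tasks, bits_per_task=8e9)  # ~1 GB moved per task
    local = local_energy_joules()
    print(f"{tasks:>3} tasks/h   cloud ~ {cloud/1e3:7.1f} kJ   local PC ~ {local/1e3:7.1f} kJ")
```

With these placeholder numbers the cloud option wins easily at low intensity, while at higher intensity the transport term alone exceeds what the local machine would draw, which is the same qualitative pattern the researchers describe.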

A hitch in the power grid 

A very specific characteristic of an electrical power infrastructure is that there is no storage. Demand and supply must therefore always be in equilibrium, otherwise there is a risk that the infrastructure shuts down. A controlling agency must coordinate the dispatch of generating units to meet the expected demand across the power grid, a complex management task given the ever-fluctuating energy demands.

Another issue is that the power grid suffers huge energy losses: the loss from primary energy source to the actual delivery of electrical power at the data center is almost 70% (around 67% conversion loss for a traditional power plant, plus 8-10% transmission grid loss). In some parts of the world there are critical locations, also known as Critical Areas for Transmission Congestion, where there is insufficient capacity to meet demand at peak periods2.
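To see how those two loss factors compound, a back-of-the-envelope calculation helps; the exact percentages vary per power plant and grid, so the numbers below are only indicative.

```python
# Back-of-the-envelope: fraction of primary energy actually delivered
# when ~67% is lost in conversion and ~9% of the remainder in transmission.
conversion_loss = 0.67    # traditional power plant
transmission_loss = 0.09  # middle of the 8-10% range

delivered = (1 - conversion_loss) * (1 - transmission_loss)
print(f"delivered fraction: {delivered:.2f}")  # ~0.30, i.e. roughly 70% total loss
```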

Indirectly, mega data centers are part of this 70% energy loss in the power grid, and of the grid’s capacity and delivery issues.

These examples show that, with a traditional scale-up of capacity through centralization and a simple-minded reach for economy of scale, we are neglecting the trade-offs: the growing management complexity of the central node and the capacity issues of the network.

The analogy one step beyond

But where Carr stops using the power grid analogy, we can go one step further. Current developments in the electrical energy infrastructure show local power generation based on alternative, renewable energy sources such as wind and solar energy: local power generation that, with improvements in current technology, could even lead to local energy self-sufficiency. The two approaches can also be mixed in a hybrid service model, where a macro, centralized delivery model works together with a localized delivery model, using intelligent two-way digital technology to control the power supply.

Using this as an analogy, another development for the cloud industry, a next step or next phase, can be envisioned.

Taking another direction

Instead of relying only on a cloud of centralized mega data centers, there is another solution, another paradigm, that focuses much more on intelligent, localized delivery of services and local power generation: the micro data center.

This new distributed systems architecture, with a swarm of collaborating data centers, should create a sustainable distributed data center grid and address the issues that accompany a centralization approach: a cloud architecture where data centers scale out instead of up.

In delivering computing power and storage capacity there are two opposite cloud computing approaches: the mega data center (“bigger is better”) and the local micro data center (“small is beautiful”). The current “bigger is better” model of cloud computing still leads, although the burden has shifted from customer to supplier, to enormous capital expenditures, to problems with power usage, cooling and power supply, and to structural vulnerabilities in the resiliency and availability of the infrastructure. The alternative, peer-to-peer data center approach raises questions about delivering enough processing power, network capacity and network supply, and about the governance of such a distributed system.

Is this so-called Energy Self-Sufficient Data Center concept science fiction?

Examples

An example of this hybrid approach is being developed in Amsterdam by the OZZO project. The OZZO Project’s mission is to ‘Build an energy-neutral data center in the Amsterdam Metropolitan Area before the end of 2015. This data center produces by itself all the energy it needs, with neither CO2 emission nor nuclear power.’ According to OZZO, the data center should function within a smart, three-layer grid: for data, electrical energy, and thermal energy. Processing and storage move fluidly over the grid in response to real-time local facility and energy intelligence, always looking for optimum efficiency.
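As a thought experiment, the placement logic of such a grid could look something like the sketch below. This is not OZZO’s actual design; the site names and numbers are hypothetical, and the only point is that workload follows the site with spare local (renewable) power and cheap cooling, instead of being pinned to one central facility.

```python
# Illustrative only: a toy placement rule for a grid of micro data centers.
# Sites and figures are hypothetical, not part of the OZZO design.

from dataclasses import dataclass

@dataclass
class MicroDataCenter:
    name: str
    renewable_watts: float   # locally generated power currently available
    it_load_watts: float     # power already drawn by running workloads
    outside_temp_c: float    # lower temperature -> cheaper (free) cooling

def placement_score(site: MicroDataCenter) -> float:
    headroom = site.renewable_watts - site.it_load_watts
    cooling_bonus = max(0.0, 20.0 - site.outside_temp_c)  # crude free-cooling proxy
    return headroom + 10.0 * cooling_bonus

def place_workload(sites):
    # Run the next job where local energy headroom and cooling are best.
    return max(sites, key=placement_score)

sites = [
    MicroDataCenter("Amsterdam-East", renewable_watts=4000, it_load_watts=3500, outside_temp_c=12),
    MicroDataCenter("Amsterdam-West", renewable_watts=6000, it_load_watts=2000, outside_temp_c=14),
]
print(place_workload(sites).name)  # "Amsterdam-West" in this toy example
```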

Another example of the distributed data center concept is a new paper from Microsoft Research, The Data Furnace: Heating Up with Cloud Computing3. According to this research, the problem of heat generation in data centers can be turned into an advantage: computers can be placed directly into buildings to provide low-latency cloud computing for their offices or residents, and the heat that is generated can be used to heat the building. By piggy-backing on the energy used for building heating, the IT industry could grow in size for some time without increasing its carbon footprint or its load on the power grid and generation systems.

Future

How do we create a green and sustainable cloud computing industry? Simply scaling up by consolidating data centers into huge mega data centers with the help of the current power grid is too simplistic and creates all kinds of issues. Using developments in the power grid infrastructure as an analogy, we can envision another direction: creating a smart grid of micro data centers. But a lot of research still has to be done before we have a working data center grid.

For the moment, with the trend of consolidating data centers into mega data centers driven by economy of scale, the emphasis should be on data center efficiency and the use of renewable energy. We should take the Jevons paradox into consideration, namely that increases in the efficiency of using a resource tend to increase the usage of that resource, but we should also appreciate that every kilowatt that isn’t used doesn’t have to be generated.

1 Jayant Baliga et al., “Green Cloud Computing: Balancing Energy in Processing, Storage, and Transport,” Proceedings of the IEEE, Jan. 2011.

2 DOE, “National Electric Transmission Congestion Study,” 2006.

3 Jie Liu et al., “The Data Furnace: Heating Up with Cloud Computing,” Microsoft Research, June 2011.

