
A big fat eco rant and report

By Simon Forge, Ptak Noel & Associates

1 Where are we now?

Natural climate change is part of our environment. Extreme variations have occurred during the past 60 million years, although for the last 10,000 years our climate has been relatively stable. But climate records since 1880 show rising temperatures, going up even more steeply since 1980.

The greenhouse gas situation, globally
In consequence, the 20th century was probably the warmest century of the last 1,000 years: there was about 0.6°C of average warming, with land warming more than sea. The earth’s surface has changed temperature more in the last 20 years than in the previous 200, and more in the last 200 than in the previous 2,000; the 1990s were the warmest decade of the last 100 years.

Why is this? The general global consensus is that human-driven environmental change is accelerating. The blue line of CO2 concentration in the graph below, driven by our industrial, transport and consumer emissions, shows a strong correlation with the red line of global temperature:

Until quite recently, climate change was believed to be slow. New evidence from ice cores extracted in 2006 confirms an increasing rate of warming, as well as a clear relationship between atmospheric carbon dioxide levels and global surface temperature, as shown above. Altogether there are six greenhouse gases (GHGs), carbon dioxide (CO2) being the most significant by volume.

Since the industrial revolution we have been using the sky as a waste bin – so much so that the CO2 in the atmosphere is now at its highest level in 400,000 years, and is a third higher than in pre-industrial times some 200 years ago. Each year the CO2 level rises another 0.5 to 1.0%. With a lifetime in the atmosphere of over 100 years, CO2’s atmospheric level will continue to rise as long as global emissions from human activities continue1. As a result, our rainfall patterns are changing, sea levels rise as glaciers retreat and arctic sea-ice thins, and the incidence of extreme weather increases world-wide.

What happens if we do nothing?
Today the penalties of doing nothing are estimated to be far higher than the figure of around 1-3% of global GDP calculated five years ago. Future global effects are expected to be much more varied than a simple and uniform warming over the entire planet, as heating also alters the cycle of rain, cloud formation and evaporation, linked to atmospheric and sea currents. Consequently, many regions will become considerably hotter or cooler, or wetter or drier, than others. As the global climate warms, it will affect the location and duration of droughts and the extent of landmass. Economic impacts are now estimated to be very significant – the UK government’s October 2006 Stern report2 expects a range of economic impacts costing between 5% and 20% of global output over the next two centuries.

2 How does IT, and specifically the data centre figure in this?

Coming down to the level of the corporate data centre, just how much does the energy it consumes contribute to the greenhouse effect? Limiting greenhouse gas emissions from across the corporation is now becoming a board-level issue. Moreover, as the IT component of emissions overall is so high (data centres take 4 to 5% of the energy generated in the larger Western European economies, for instance3), will the data centre, and IT generally, come under government regulation in the future4, just as air travel has?

The Data centre – a major greenhouse gas generator
Let’s look at one large European company as an example, operating across Europe and globally. It has major data centres across the EU, each typically having a total power consumption for its servers of some 7MW5. Yes, megawatts. And heat accounts for 91% of the power consumed: the servers use only 9% of the power fed to them purely for processing functions6; the rest becomes heat generated by the processing activity. This company has two data centres of this size and three more of around a third of that size in one Member State alone, consuming some 20MW in all – the output of a small power station. It also operates large data centres in three other Member States and smaller data centres in some twenty other countries worldwide, including ten EU countries.

However, total energy consumption may be even larger. For every watt of DC power consumed at the motherboard of a server, router or storage device, some 1.8 to 2.3 watts of AC power must be supplied7. The difference is lost to cooling, AC-to-DC conversion and the power distribution system of transformers and cables, so the power required may be double the DC level specified for the server or disk store. Overall, the data centre’s 7MW of IT load may translate into 70% to 100% more power at the facility level once cooling and losses are included. Moreover, power consumption in data centres is generally expanding as demand rises for Internet-driven processing with the growth of e-commerce and web technologies, using portals with many webservers. For example, Google in the USA has half a million servers in its data centres. A key question here is: have you done a power, cooling and energy audit recently?
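The overhead arithmetic above can be sketched in a few lines of Python. This is a minimal illustration only – the function name is ours, and the 1.8-2.3 factor is the range cited above:

```python
def facility_power_kw(it_load_kw, overhead_factor=2.0):
    """Total AC power needed for a given DC load at the motherboards.

    overhead_factor is AC watts supplied per DC watt consumed (the
    range cited is 1.8 to 2.3); the excess goes to cooling, AC-to-DC
    conversion losses and the power distribution system.
    """
    return it_load_kw * overhead_factor

# A 7 MW server load may draw roughly 12.6 to 16.1 MW at the facility.
low = facility_power_kw(7_000, 1.8)
high = facility_power_kw(7_000, 2.3)
```

In other words, an audit that stops at the nameplate DC load can understate the real draw by a factor of two.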

The size of the problem
Just how important are data centres in relative energy consumption, and so in greenhouse gas emissions? Focusing on the EU, as we have seen, the largest data centres may each consume between 7 and 12MW directly, perhaps 10MW on average. Comparing this over a year with air travel and domestic EU housing in the major economies shows the data centre to be a massive generator of greenhouse gases. Over one year, a large data centre (10MW) is the equivalent of emissions from some 2,600 residential houses in the EU’s larger economies, or some 18,000 transatlantic flights of 3,470 miles, London to New York, while a small centre (2MW) equates to some 520 houses, or around 3,600 transatlantic flights8.
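These equivalents scale linearly with the centre’s average power, so they can be restated for any size. A sketch, simply rescaling the 10MW benchmark figures quoted above (the function name is ours, and this assumes the underlying emission factors do not change with size):

```python
# Equivalents per MW, derived from the quoted 10 MW benchmarks
HOUSES_PER_MW = 2_600 / 10    # EU residential houses per MW-year
FLIGHTS_PER_MW = 18_000 / 10  # London-New York flights per MW-year

def emission_equivalents(avg_power_mw):
    """Rough annual emission equivalents for a data centre of the
    given average power, scaled linearly from the 10 MW figures."""
    return (avg_power_mw * HOUSES_PER_MW, avg_power_mw * FLIGHTS_PER_MW)

houses, flights = emission_equivalents(2)  # the small-centre case
```

The 2MW case reproduces the figures in the text: some 520 houses or 3,600 flights.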

However, EU demand for data centre space may spiral by between 25% and 45% at the high end9 by 2010 at current growth rates. This could push energy usage per square metre up by far more than 45% unless processor technology advances, because the cooling required per square metre may increase even faster as higher computing power is packed more densely into the same space, unless processors reduce the heat generated per unit of processing power. And all this is very costly. Electric power now makes up roughly 30% of a data centre’s operating costs (opex). In the larger EU economies, average data centre costs are up to €5.3m per year. By 2010 in Europe, rising energy costs could more than double opex to €11m at the top end10, driven largely by electricity costs. Breaking down the total cost of ownership of corporate IT resources shows that some 60% could already be energy related11. For instance, Vodafone noted recently that in one EU country it had a €43 million (UK£28 million) a year electricity bill, 78% of which was IT12.

3 How do we get there? An IT strategy for green data centres

Can we reduce this energy and heat problem? And if so, how? The strategy of adding servers to the data centre to get more processing power is no longer viable: fatally, the trend is towards more power consumption per square metre, and so more cooling. However, we note a critical factor here – average server utilisation at any one time may be around 15% or even less.

Utilisation varies by server type, though – it can be as low as 2% for Wintel servers and perhaps 20-30% for the latest Linux servers, although for the mainframe the problem may have been largely solved, as typical utilisation is above 70% and often 80-95%. So the problem for most data centres is having far too many commodity servers sitting idle, needlessly using electricity, generating heat and driving up CO2 emissions – perhaps six times too many, or even more! We need to rethink the whole corporate IT strategy from the perspective of reducing emissions, and the key tool is rationalising usage to cut server numbers.
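A back-of-the-envelope calculation shows why low utilisation implies an oversized estate. A sketch, assuming the total workload is conserved and can be repacked up to a target utilisation (the function name is ours):

```python
import math

def consolidated_count(n_servers, avg_utilisation, target_utilisation=0.70):
    """Servers still needed after consolidation, assuming the total
    workload is conserved and can be repacked to the target level."""
    return math.ceil(n_servers * avg_utilisation / target_utilisation)

# 100 commodity servers at 15% average utilisation could in principle
# be replaced by about 22 machines run at a mainframe-like 70%.
consolidated_count(100, 0.15)
```

At the 2% Wintel figure the ratio is far more dramatic: 100 such servers repack onto a handful of machines.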

The way forward is, firstly, through consolidation of many assets into fewer, be they servers or whole data centres, and secondly, through forms of virtualisation – making the retained servers work harder by acting as virtual machines for many environments, so fewer are needed.

Strategic solutions for green business services management – reviewing the application portfolio with five steps to managing energy consumption

Everything in IT needs to be business driven, but the IT function can only be more responsive if the business tells it what matters, and if IT in turn reports back what it is doing in business rather than IT terms. Moreover, a green IT strategy should form part of the corporate business strategy to reduce emissions and energy use. Consequently, we now examine a number of steps to prepare a strategic solution, firstly at application level, then at processing platform level. Software solutions generally reduce energy and cooling by optimising the deployment of the remaining applications portfolio, and so further reducing the number of servers in the data centre.

Reviewing the application portfolio – because server utilisation is typically so low, the IT department should be discussing a new IT strategy with the users, with the overall aim of reducing emissions. It must identify those applications really essential to business needs, and ask whether some of the rest should be running at all. Such a strategy takes the attitude that only applications key to the business should be active – some might be eliminated, some loaded only to run on demand. This approach also requires identifying which applications can, from a business point of view, be combined onto one server, rather than each application having a dedicated machine and mass storage. A further goal is to make any server compatible with any application, so that all are interchangeable – the application is not aware of which server it is running on, and the new flexibility reduces overall server count. But this may require rethinking the architecture of the applications and of the server infrastructure. We now look at developing this at an operational platform level.

Five steps to save energy at the operational platform level – discovery, performance assurance, performance management, assets and configuration management – to prepare for energy optimisation, we examine and understand the situation before rationalising:

Firstly, the topology of networking, data centre and business processes must be understood – that is, everything related to the physical assets and their functional roles. This may be simple in some organisations, especially if there is already an up-to-date and complete inventory and configuration13. But in many organisations a discovery phase of the functional topology is needed, to reconcile physical assets with application processing. Gathering this inventory may require adding software agents to servers to identify both which applications are running there and their processor utilisation. From discovery of the operational topology and a blueprint of the existing inventory, we can move to optimisation. A more basic part of this discovery is actually metering IT power consumption and understanding the total energy used, perhaps detailed by server, by application and, if possible, by each business process.
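The output of such a discovery phase might look something like this in miniature – a hypothetical sketch, with invented server and application names, aggregating agent readings per application:

```python
# Hypothetical agent readings: (server, application, cpu_utilisation, watts)
readings = [
    ("srv01", "billing",   0.12, 450),
    ("srv02", "billing",   0.03, 430),
    ("srv03", "reporting", 0.05, 460),
]

def discovery_summary(readings):
    """Reconcile physical assets with application processing: server
    count, power drawn and average utilisation per application."""
    summary = {}
    for server, app, util, watts in readings:
        entry = summary.setdefault(app, {"servers": 0, "watts": 0, "util": 0.0})
        entry["servers"] += 1
        entry["watts"] += watts
        entry["util"] += util
    for entry in summary.values():
        entry["util"] /= entry["servers"]
    return summary
```

Even this toy view makes the anomalies visible: two servers drawing 880W between them to run one application at 7.5% average utilisation.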

The next step is to examine performance assurance – just why things are done the way they are. It requires an understanding of the historical context of IT operations against the business processes, including the physical and virtual environments of servers and their utilisation, together with how databases, networks and the different operating systems are used in daily service. The aim is to identify potential server and application candidates for consolidation and virtualisation. But consolidation must be done in a manner that assures overall business throughput, specifically assuring appropriate levels of service, responsiveness and net delivery to the business, despite changes in the IT environment.

Analysis of performance management follows – that is, how to optimise real-time management of the assets and of any events that may occur, and how this is currently implemented, such as handling peak demands for transaction processing, perhaps reacting with different server partitions and reconfiguration for intelligent dynamic optimisation.

Having completed our background on the installed base and current processing – and having identified any anomalies in energy terms – we can design anew. We can plan for optimised management of the retained assets, for performance in both energy and business terms. We may subsequently store the energy-efficient configuration in a suitable assets manager (perhaps a CMDB, as explained below) with details of all physical and software assets and their assignments against the business processes, in a manner that also captures the energy budget.

Finally we can implement the planned changes as a new configuration, under change management control, with testing and roll-out of the new energy-saving configuration against performance benchmarks. Steps 4 and 5 may be an iterative process to establish business throughput and energy saving progressively, rather than in one big bang approach. Any physical data centre redesign for cooling and power delivery efficiency may be the final action in this step, once the logical configuration has been proven.

Below we examine future developments for the key steps 2 and 3 – consolidation of servers and data centres, and enhanced server virtualisation.

Dynamic applications management across servers – virtualisation and grid technology – our strategy to reduce servers is to share servers across applications rather than dedicating a server to each application, so there is no longer one application per server. We also require dynamic scheduling to optimise throughput. There are several approaches here. The first, also the cheapest and simplest today, is to host many environments on one server or a server cluster on an as-needed basis using virtualisation software, either from the systems vendors or as a package from an ISV. Virtualisation enables different operating systems and applications to run in partitions on the same physical server, for savings on hardware, power, cooling and data centre space. A more sophisticated approach now coming to market is to dynamically orchestrate the allocation of servers as they become free to host the next application. It relies on developments from grid computing which efficiently orchestrate and load a whole environment onto one server or a group to run an application14. For this to run smoothly, job scheduling also requires intelligent management, to ensure that the resources coming free are adequate for the next job in terms of processor power, storage and input/output access.
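The placement problem behind virtualisation can be illustrated with a simple first-fit-decreasing packing heuristic. This is a sketch of the general idea only, not any vendor’s actual algorithm, and the 80% capacity ceiling is an assumed headroom margin:

```python
def pack_vms(demands, host_capacity=1.0):
    """First-fit-decreasing packing of VM CPU demands onto hosts:
    place each demand (largest first) on the first host with room,
    opening a new host only when none fits."""
    hosts = []  # each host is a list of placed demands
    for d in sorted(demands, reverse=True):
        for host in hosts:
            if sum(host) + d <= host_capacity:
                host.append(d)
                break
        else:
            hosts.append([d])
    return hosts

# Ten applications at ~15% average demand fit on two hosts
# (with 20% headroom) instead of ten dedicated servers.
hosts = pack_vms([0.15] * 10, host_capacity=0.80)
```

Production orchestrators must of course also respect memory, storage and I/O constraints, as the text notes – but the server-count saving comes from exactly this kind of repacking.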

Pro-active systems management solutions – the systems management function of the future has to take on a new role: server economy. The aim is to manage applications so as to meld batch and real-time interactive loads into a job stream that minimises the total processors required – spreading the load across fewer servers. It should also minimise the servers required for backup on hot standby. This demands a systems management utility with advanced functions for easy reconfiguration by the systems operator, combined with an advanced dashboard showing current energy consumption figures and historical logs. But the key tool is a running assessment of the optimal configuration to follow an energy economy strategy. For operational energy management, future systems management may need to add electrical power measurement, with sensors at motherboard/server level, under the umbrella of the industry standard for management, SNMP.
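The dashboard’s running energy figure could be as simple as integrating periodic power readings. A minimal sketch, assuming one reading per fixed sampling interval (the function name and interval are illustrative):

```python
def energy_kwh(power_samples_kw, interval_s=60):
    """Approximate energy used over a series of periodic power
    readings (in kW), treating each sample as holding for one
    sampling interval - the running figure a dashboard would log."""
    return sum(power_samples_kw) * interval_s / 3600

# An hour of one-minute readings at a steady 7,000 kW is 7,000 kWh.
hourly = energy_kwh([7_000] * 60)
```

Logged per server or per application, such figures give the historical baseline against which any energy-saving reconfiguration can be judged.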

Storage systems software – compacting data using advanced compression techniques on disk and tape volumes in those mighty SAN and tape installations can save both energy and space.
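A quick way to gauge the potential saving is to measure the compression ratio on a sample of the stored data – a sketch using Python’s standard zlib, bearing in mind that ratios vary enormously with the data:

```python
import zlib

def compression_ratio(data: bytes, level: int = 6) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data, level)) / len(data)

# Highly repetitive log-style data compresses very well:
sample = b"2006-10-01 GET /index.html 200\n" * 1000
ratio = compression_ratio(sample)
```

Fewer bytes stored means fewer spinning disks and tape drives, and so less power and cooling.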

4 Refusing to respond to the threat of global warming will increasingly look like a poor strategic decision

A company that does not believe in the climatic threat, and does not encourage its top officers and IT managers to take climate change seriously, is going to be rolled over by government legislation and costs. It will face spiralling energy and cooling bills to run its inefficient data centres – and might potentially face fines under new legislation. Together these spell serious business impacts, evident on the bottom line.

In contrast, those companies that see climate change coming, and recognise it for what it is, will make the changes needed in the data centre and develop a positive attitude to climate change on the part of top management. They will act responsibly and be prepared for events. They stand to do very well by reducing data centre costs and making IT more aligned with business needs – reducing emissions and helping save the planet, while avoiding fines for non-compliance with the next round of legislation.

A changing world requires a changing company – tomorrow’s company.
