Space is becoming an ever more valuable commodity in data centers. With affordable ground space scarce, power grids over-stretched and efficiency legislation tightening, building a new data center is often not an option. Consider the New York area, where I live. There is clearly a huge demand for high-tech, and that means servers and data centers; yet whilst the infrastructure exists to build a data center on the banks of the Hudson, the area lacks both the physical space and the grid capacity to support one. In fact, the New York City metro area is significantly over capacity at present, which has had a noticeable impact on service, most severely on August 14th, 2003, when the already strained power grid was brought to its knees by a cascading failure that began in Ohio. Generators across the North-East United States, including New York, tried in vain to shunt power westward to the affected areas, triggering a widespread blackout across Ontario, the North-East US, and parts of Michigan and Ohio.
To further confirm the observation that the New York metro grid is strained: in the summer of 2004, New York City's peak energy load was reported as 11,150MW, whilst the physical capacity on the grid was a mere 8,940MW, a shortfall of some 2,210MW. In addition, it is known that, due to constraints on the transmission lines, the vast majority of power for New York City must be generated within the city limits during peak times, which does not leave a great deal of slack. Worst of all, the New York City and Long Island zones' electricity generating infrastructure has the highest average age of generating units in the state and remains highly dependent on an ageing fleet of combustion and gas turbine plants, bringing its long-term reliability into question.
These factors all combine to produce the '$20 million server' effect: the one server that simply cannot fit into a full-to-capacity facility and therefore becomes the catalyst for building a new data center. A large number of companies are very reactive about their capacity, and often, due to a combination of cost, complexity and, I'm sure, sometimes an element of denial, new facilities are built on demand rather than in advance. Instead of allocating budget, securing planning permission, carrying out the design work and breaking ground on a new data center before the extra capacity is needed, projects that require new infrastructure tip the balance of capacity over the edge, and building a new data center becomes a knee-jerk reaction rather than a proactive exercise. The $20 million server becomes a reality.
So how can this be avoided? Considering the challenges of actually locating and building a data center, making more efficient use of what you have is, in many cases, a better and more cost-effective option. Technologies such as virtualization and cloud computing can help to relieve the burden on over-stressed facilities. In addition, gaining visibility into the consumption and efficiency of servers is an important component in building up a picture of wasted resources. Knowing whether or not a server is doing 'useful' work enables the business to make more informed decisions about reallocating hardware to reclaim precious space and capacity in the data center.
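To make the idea concrete, here is a minimal sketch of that kind of utilization analysis: flag servers whose average CPU utilization over a sampling window falls below a threshold, marking them as candidates for consolidation or reclamation. The server names, sample data and the 10% cut-off are all illustrative assumptions of mine, not how any particular product measures 'useful work'.

```python
# Hypothetical sketch: identify under-utilized servers from CPU samples.
# The threshold and data are assumptions for illustration only.

IDLE_THRESHOLD = 10.0  # assumed cut-off: mean CPU percent below this is "idle"

def consolidation_candidates(cpu_samples, threshold=IDLE_THRESHOLD):
    """Return server names whose mean CPU utilization is below threshold."""
    candidates = []
    for server, samples in cpu_samples.items():
        # Skip servers with no samples; average the rest.
        if samples and sum(samples) / len(samples) < threshold:
            candidates.append(server)
    return sorted(candidates)

# Example: two busy servers and one mostly idle legacy box.
samples = {
    "web01": [55.0, 60.2, 48.9],
    "db01": [72.1, 80.4, 69.3],
    "legacy01": [2.1, 3.4, 1.9],
}
print(consolidation_candidates(samples))  # -> ['legacy01']
```

In practice CPU alone is a crude proxy; a real tool would also weigh I/O, network traffic and which processes are actually doing business-relevant work, but the principle of measuring before reallocating is the same.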
1E’s NightWatchman Server Edition visualizes server ‘useful work’ and is the seek-and-destroy tool for virtual machine sprawl... say goodbye to the $20 million server and hello to a more efficient data center.
-- This article can also be found on the 1E Business Blog --