
Connecting tomorrow’s data centre

Issue: Europe II 2011
Article no.: 13
Topic: Connecting tomorrow's data centre
Author: Anne Marie Kenneally
Title: VP Sales EMEA
Organisation: CommScope
PDF size: 266KB

About author

Anne Marie Kenneally is the Vice President of Sales for Europe, Middle East and Africa (EMEA) for the CommScope Enterprise Solutions Division. She has over 20 years' experience in the industry, with a career that began at AT&T. She went on to lead Lucent Technologies' SYSTIMAX enterprise cabling business in the UK and Ireland, before becoming Managing Director of the Lucent Technologies SYSTIMAX business in EMEA and, later, Vice President of the SYSTIMAX business under Avaya. Ms Kenneally moved to CommScope when it acquired the SYSTIMAX business from Avaya. In addition to her business qualifications from INSEAD, amongst others, Anne Marie Kenneally holds an Honorary Fellowship from the Institute of Sales, Ireland (2005) for her outstanding contribution to the growth of the SYSTIMAX business in the EMEA region.

Article abstract

Virtualisation, cloud computing and high-bandwidth services like video-on-demand are already driving organisations to upgrade their networks, some to 100 GbE, to prepare for the coming data traffic onslaught. Organisations would be wise to spend a little more initially on a more robust upgrade to avoid heavy additional costs and operational turmoil later. Upgrades also need state-of-the-art intelligent infrastructure solutions to proactively track and monitor data centre activity and regulate network traffic, end-user activities, applications, networking protocols, servers and network hardware.

Full Article

The amount of data we now consume on a daily basis is truly astronomical. In 2007, IDC estimated that the world's total digital content was 161 billion gigabytes; by 2009, that figure had reached 487 billion. If our planet's digital content were printed and bound into books, the stack would stretch from Earth to Pluto ten times over. Given the surge in consumption of digital information over the last decade, it may come as no surprise that 2010 saw a 50 per cent increase in the amount of data being transferred over the world's networks. More shocking still is Gartner's recent prediction that the amount of data running across networks may increase to 4400 per cent of current levels by 2020.

Nowhere is the pressure of this information overload felt more strongly than by organisations and the IT networks powering them. Over the last decade, the demands companies place on their networks and data centres have increased exponentially. Amsterdam's AMS-IX internet exchange – one of the largest internet exchange points in the world – recently made the switch to 100 Gigabit Ethernet (GbE) to support a huge increase in capacity. This came in response to traffic doubling at the exchange every eighteen months and the need to maintain high-speed connectivity, 24/7 availability and unshakable reliability.

Since the first publication of the Ethernet standard almost 30 years ago, data rates have rocketed skyward. The 40 and 100 GbE standards, recently ratified by the IEEE 802.3ba committee, are a far cry from the early 10 megabit-per-second connections of the 1980s, and they will become essential by 2015. Virtualisation, cloud computing and high-bandwidth services like video-on-demand are driving the need for increased access network speeds. A recent forecast from Gartner supports this scenario, predicting that fully 50 per cent of workloads will be running on virtual machines by the end of 2012. Networks around Europe are already being upgraded to support these unprecedented levels of data traffic. Verizon, for example, recently announced the rollout of a 100 GbE link between Paris and Frankfurt, part of the company's European long-haul network. Given the potent combination of these pressures, the question facing enterprises around the world is not whether to upgrade their data centre infrastructure, but when.

Tomorrow's networks, today

Many large enterprises currently operate networks installed about ten years ago and are attempting to run applications that did not exist when the network was originally designed. As the pace of technological development continues, enterprises must look to deploy flexible infrastructure that can keep pace with the future demands of virtualisation and cloud computing. Given the relatively modest cost difference between installing 10 GbE OM3 cabling and 40/100 GbE-ready OM4 cabling, enterprises should be deploying higher-bandwidth infrastructure now, especially for connections between the access switch and the core of the network in the data centre. Deploying the most current, forward-looking standard for fibre in the data centre will increase capital expenditure (CapEx), but deploying solutions today that will require time-consuming and expensive upgrades in the future is a far less cost-effective strategy in the long term. Enterprises that do not future-proof their data centres today will face the need for wholesale upgrades in three to five years, upgrades that may generate significant downtime.
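To make that trade-off concrete, here is a minimal sketch of the comparison, assuming purely illustrative figures for cabling CapEx, the later rip-and-replace and the downtime it causes; none of these numbers are market prices, and the function and names are hypothetical.

```python
# A minimal sketch of the upgrade-cost trade-off described above.
# All figures are illustrative placeholders, not CommScope or market pricing;
# substitute your own quotes for cabling, labour and downtime.

def total_cost(initial_capex, upgrade_capex=0.0, downtime_hours=0.0,
               downtime_cost_per_hour=0.0):
    """Lifetime cost of a cabling choice over one planning horizon."""
    return initial_capex + upgrade_capex + downtime_hours * downtime_cost_per_hour

# Option A: deploy 40/100 GbE-ready OM4 fibre today (modest CapEx premium).
om4_now = total_cost(initial_capex=120_000)

# Option B: deploy OM3 today, then rip and replace in three to five years,
# absorbing a second installation plus the downtime it causes.
om3_then_upgrade = total_cost(initial_capex=100_000,
                              upgrade_capex=110_000,
                              downtime_hours=12,
                              downtime_cost_per_hour=5_000)

print(f"OM4 now:             {om4_now:>10,.0f}")
print(f"OM3 + later upgrade: {om3_then_upgrade:>10,.0f}")
```

Under these assumptions, the second installation and its downtime dominate the lifetime figure despite the lower initial outlay.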
For this reason, organisations should be careful not to reduce their initial CapEx at the cost of greater operational expenditure (OpEx) over the total lifetime of their infrastructure. Purchasing better-quality equipment can also reduce the need for ongoing maintenance and technical support during a data centre's lifetime. While the decision to deploy 40/100 GbE connections in the data centre today makes good financial sense, increasing bandwidths also present enterprises with additional concerns.

100 GbE – the cost of downtime

As the bandwidth of each individual link in the data centre rises, so does the cost of a connection failure. The huge volumes of data that will soon be running across individual cables mean that the failure of even one connection can have a significant impact on a data centre's overall performance. Unfortunately, the rapid increase in network complexity in recent years has produced many more potential points of degradation and failure in data centres – from an accidentally severed cable to a software security false alarm halting all network traffic.

While the chances of outages have increased, so too have the negative consequences of downtime. IT infrastructure failures can not only severely damage your own business; they can also harm the businesses of those who depend on you. Reduced network performance can have a significant impact on an organisation's productivity, corporate image and bottom line. Infonetics Research has found that the average enterprise loses 3.6 per cent of its annual revenue through network downtime. Without access to essential communications tools and business-critical applications, employees are unable to maintain services and customers may look elsewhere.

It is essential, then, that enterprises have the tools in place to proactively track and monitor data centre activity. Businesses are rapidly waking up to the reality that the less efficient methods used in the past for maintaining their networks and tracing faults are no longer adequate. Nevertheless, any system that aims to monitor and control a network faces a considerable challenge, since it must regulate network traffic, end-user activities, applications, networking protocols, servers and network hardware devices.

Intelligent infrastructure

The answer is Intelligent Infrastructure Solutions (IIS): systems that provide the missing link between real-time network management tools and the traditionally passive structured cabling that connects network devices together. By providing insight into the physical layer, IIS helps IT professionals and network managers keep their networks efficient by providing accurate reports for capacity management; generating real-time alerts to detect, locate and resolve any unauthorised changes within the network; automatically discovering and tracking the physical location of devices connected to the network in real time; and proactively applying changes through electronic work orders in support of change management.
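As a rough illustration of that physical-layer insight, the toy sketch below (not any vendor's product or API; the port names and device records are invented) compares a documented patch-field record against a real-time discovered view and raises an alert wherever the two diverge, the kind of unauthorised-change detection an IIS performs continuously in hardware across copper and fibre panels.

```python
# Toy model of IIS-style physical-layer auditing: reconcile the documented
# patch-field state with what is discovered live, and flag every mismatch.

documented = {            # port -> device recorded by change management
    "panel1/port01": "server-a",
    "panel1/port02": "server-b",
    "panel1/port03": "san-switch-1",
}

discovered = {            # port -> device seen on the live infrastructure
    "panel1/port01": "server-a",
    "panel1/port02": None,            # cable unplugged
    "panel1/port03": "san-switch-1",
    "panel1/port04": "unknown-host",  # undocumented patch
}

def audit(documented, discovered):
    """Return an alert for every port whose live state differs from the record."""
    alerts = []
    for port in sorted(set(documented) | set(discovered)):
        expected = documented.get(port)
        actual = discovered.get(port)
        if expected != actual:
            alerts.append(f"ALERT {port}: expected {expected!r}, found {actual!r}")
    return alerts

for line in audit(documented, discovered):
    print(line)
```

In a real deployment the discovered state comes from the intelligent panels themselves and each alert would feed an electronic work order, but the reconciliation logic is essentially this simple comparison run continuously.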
During the economic downturn of 2009-10, many CIOs had to implement cost-cutting measures across their enterprise ICT architecture. Having deferred or under-spent on infrastructure upgrades, many now face the challenge of supporting faster data rates and improving efficiency with ageing or underpowered infrastructure. Yet, because the need for growth and evolution remains, enterprises still have the opportunity to implement solutions now that will help them steal a march on their competitors.

Gartner recently forecast that over 50 per cent of enterprises will expand their current data centres by the end of 2011, and more than 30 per cent are building new data centres to overcome the capacity challenges ahead. These companies should take advantage of the opportunity to deploy a physical-layer infrastructure that can handle the deluge of data and improve system performance across the board. Businesses need to ensure that the foundation is in place to take advantage of next-generation services today – and that foundation is IIS.

Organisations unsure about committing to IIS immediately should, at the very least, ensure that their systems are upgradeable. There are now IIS solutions available that can upgrade or retro-fit intelligence into passive network infrastructure, though not all of them offer this for both copper and fibre networks. The need for IIS will only become more pressing as the IT sector continues to evolve and services like cloud computing increase the pressure on data centre efficiency. Intelligence is the best way to keep precise operational control of these environments and thus to deliver a seamless service through the cloud. Given that many businesses will soon depend on providing a seamless virtualised service, or on consuming one, the importance of greater network control and reliability is clear. With these systems in place, enterprises will have well-rehearsed plans for recovering from system problems and solutions for eliminating the potential for human error. Systems that automatically plan and schedule maintenance should also be implemented, as this is key to ensuring ongoing infrastructure reliability.

The adoption of 40 and 100 GbE will create new possibilities for network connectivity, supporting new ways for us to work and interact in homes, offices and enterprises. To adapt and thrive in the face of these new possibilities, it is essential that CIOs ensure their data centres are ready for the future.
