
Mastering virtualization challenges

Issue: North America I 2014
Article no.: 12
Topic: Mastering virtualization challenges
Author: Alastair Waite
Title: Head of Enterprise Data Center Business, EMEA
Organisation: TE Connectivity

About author

Alastair Waite is the Head of Data Centre Business Line, EMEA. Alastair joined TE Connectivity in September 2003 as a Product Manager for the company’s Enterprise Fibre Optic division. Since that time he has held a number of key roles in the business, including Head of Enterprise Product Management for EMEA and Head of Market Management. Since May 2011, Alastair has had responsibility for the Data Centre business in EMEA, ensuring that TE Connectivity has strategic alignment with its customers in this market segment.

Prior to joining TE Connectivity, Alastair was a Senior Product Line Manager for Optical Silicon at Conexant Semiconductor, where he had global responsibility for all of the company’s optical interface products.

Alastair has a BSc in Electronic Engineering from UC Wales.

Article abstract

Networks are as strong as their weakest point, and that point is the physical cabling. To enhance the ability of the physical layer to support virtualization and cope with the massive growth of data traffic, fibre or copper connectors can be fitted with EEPROMs that allow the physical layer to communicate with the management layer. This granular view at the cabling level offers a whole new perspective on how traffic can be controlled, achieving virtualization at the lowest network layer.

Full Article

Data growth is rising exponentially as consumers and businesses adopt feature-rich platforms and applications, with the expectation that content will be readily available 24/7. This desire for ubiquitous data is driving huge demand for data centres and data centre networking equipment, which come with high price tags and short shelf lives and consume large amounts of costly energy.
By virtualizing their existing hardware, companies can dramatically increase workload capacity and keep up with data demand without a matching increase in physical resources or cost. However, despite the many obvious benefits of virtualization, it also presents challenges regarding the actual location of data, which in turn raises security, traceability and disaster recovery concerns for users with mission-critical or highly sensitive data needs.
Deploying innovative solutions within the physical layer can bridge the gap between the benefits of the virtual world and the security of the physical world.
An accepted standard
Why has virtualization become such an accepted standard in the data centre? Apart from the obvious user and business benefits mentioned above, virtualization has been adopted by businesses to support two key initiatives:
1. Agility – The ability to dynamically control the resources that a physical server offers and make it part of a “pool” of computing power that can be easily harnessed to work on the processes that businesses need at any given time.
2. Efficiency – Instead of dedicating many physical servers to a single business unit or process, physical servers are virtualized, meaning a single server can take on the workload of many. This cuts energy costs (power and cooling) and frees up expensive floor space, as the rough arithmetic below illustrates.
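As a simple illustration of the efficiency point, the sketch below works through the arithmetic of consolidating lightly loaded machines onto fewer virtualization hosts. Every figure in it (server count, utilisation, power draw) is an assumption chosen for the example, not data from this article.

```python
import math

# Illustrative consolidation arithmetic; all figures are assumptions.
physical_servers = 100           # dedicated, lightly loaded machines
avg_utilisation = 0.10           # average CPU utilisation of each machine
target_host_utilisation = 0.70   # utilisation we are willing to run on a host
watts_per_machine = 400          # electrical draw per box, excluding cooling

# Total useful work, expressed in "fully busy server" equivalents.
useful_work = physical_servers * avg_utilisation                  # 10.0
hosts_needed = math.ceil(useful_work / target_host_utilisation)   # 15

power_before = physical_servers * watts_per_machine               # 40,000 W
power_after = hosts_needed * watts_per_machine                    #  6,000 W

print(f"Hosts after consolidation: {hosts_needed}")
print(f"Power saved: {power_before - power_after} W "
      f"({1 - power_after / power_before:.0%})")
```

Under these assumed figures, 100 lightly loaded servers collapse onto 15 hosts and roughly 85 per cent of the server power draw disappears, before cooling and floor-space savings are even counted.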
High availability
In addition to these two initiatives, maintaining high availability and fast response times from a virtualized network is critical to keeping consumers and internal stakeholders happy. It also helps disguise the fact that they are using a ‘pooled’ resource.
From Layer 2 up in the Open Systems Interconnection (OSI) stack, control and flexibility can be achieved relatively easily. However, a network is only as strong as its weakest link, and in many cases this is the physical layer (the cabling), which happens to be the foundation on which all data centre operations are built. Because it is passive, the physical layer presents a problem for network architects who need to understand how and where things are connected. Today, this can be done logically, but logical data gives no indication of the physical routes packets take between two points. Did the data flow between servers in adjacent racks, or did it flow via different buildings, or even via different countries? Was the path taken declared ‘secure’ by the data owner, or was the path shared with other, unknown users? Both of these questions are becoming increasingly pertinent as virtualization becomes more prevalent in data centre networks around the globe.
Being able to monitor and communicate with each connection point in the physical layer is critical to answering these complex questions and solving audit/compliance challenges. One way of monitoring the physical layer cabling is Connection Point Identification (CPID) technology, where an Electrically Erasable Programmable Read-Only Memory (EEPROM), housed in the body of a fibre or copper connector, can allow the physical layer cabling to communicate with the management layers of the network when inserted into a CPID-enabled patch panel.
In this scenario, automatic associations can be made between the connected devices in the path the packets flow through. The interconnection points, each supported by an EEPROM, allow management software to interrogate the physical layer cabling to learn its length and data-carrying capacity, or even to use CPID data to allocate one physical route to high-priority/high-value traffic while dedicating another to low-priority/low-value traffic. Similarly, operations teams will be able to distinguish between fibre and copper channels within the physical layer, a critical factor in identifying suitable routes to support future growth to higher data rates.
Having this granular view of cabling offers a whole new perspective to the physical layer and allows the network owner to consider it as a value-adding asset, as opposed to a simple device to interconnect servers, switches and storage devices.
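The article does not define a CPID data model or management API, so the following is only a minimal sketch of the idea in Python: each CPID-enabled patch-panel port exposes the EEPROM record of whatever connector is inserted, and management software aggregates those records to describe a physical route and to reserve it for high- or low-priority traffic. All class, field and route names here are invented for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ConnectorRecord:
    """Hypothetical contents of the EEPROM in a CPID-enabled connector."""
    connector_id: str
    media: str            # "fibre" or "copper"
    length_m: float
    max_rate_gbps: int    # rated data-carrying capacity

@dataclass
class PhysicalRoute:
    """A chain of connection points between two devices."""
    name: str
    segments: List[ConnectorRecord]

    @property
    def media(self) -> str:
        kinds = {s.media for s in self.segments}
        return kinds.pop() if len(kinds) == 1 else "mixed"

    @property
    def max_rate_gbps(self) -> int:
        # A route is only as fast as its slowest segment.
        return min(s.max_rate_gbps for s in self.segments)

def allocate(routes: List[PhysicalRoute], priority: str) -> PhysicalRoute:
    """Illustrative policy: high-priority traffic gets the fastest all-fibre
    route; anything else gets the slowest route."""
    ranked = sorted(routes, key=lambda r: r.max_rate_gbps, reverse=True)
    if priority == "high":
        fibre = [r for r in ranked if r.media == "fibre"]
        return (fibre or ranked)[0]
    return ranked[-1]

# Example: two routes read back from CPID-enabled patch panels.
routes = [
    PhysicalRoute("rack-A-to-core", [
        ConnectorRecord("MPO-0142", "fibre", 35.0, 100),
        ConnectorRecord("MPO-0977", "fibre", 12.5, 100),
    ]),
    PhysicalRoute("rack-A-to-annex", [
        ConnectorRecord("RJ45-2201", "copper", 55.0, 10),
    ]),
]
print(allocate(routes, "high").name)   # rack-A-to-core
print(allocate(routes, "low").name)    # rack-A-to-annex
```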
Previously, we discussed how virtualized networks aggregate resources into ‘pools,’ which means that next-generation networks will have to be built ready to transmit data packets at 40 and 100Gb/s. Today, the medium of choice for architects and engineers to achieve those throughput rates would be parallel optics, via either 4 x 10 Gb/s or 10 x 10 Gb/s channels, due to the lower cost of the optical modules at those data rates.
Remember that a single fibre is not a full-duplex medium. Unlike its copper cousin, fibre requires eight fibres (4 Tx / 4 Rx) for 40Gb/s communication and 20 fibres (10 Tx / 10 Rx) for 100Gb/s. These data rates can be achieved via a pre-terminated fibre network based on 24-fibre Multi-fibre Push-On (MPO) connector technology. The 100GbE optical modules currently available on the market already incorporate a 24-fibre MPO connector interface, making it simple to build out a future-proofed network ready for throughput-hungry virtualization activities today.
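To make the lane arithmetic concrete, the short sketch below restates the 4 x 10 Gb/s and 10 x 10 Gb/s schemes described above and counts the fibres each consumes out of a 24-fibre MPO trunk, assuming one fibre per lane in each direction.

```python
def fibres_required(total_gbps: int, lane_gbps: int = 10) -> int:
    """Parallel optics use one fibre per 10Gb/s lane in each direction,
    so every lane needs a transmit fibre and a receive fibre."""
    lanes_per_direction = total_gbps // lane_gbps
    return 2 * lanes_per_direction   # Tx fibres + Rx fibres

for rate in (40, 100):
    used = fibres_required(rate)
    spare = 24 - used                # what is left in a 24-fibre MPO trunk
    print(f"{rate}Gb/s -> {used} fibres in use, {spare} spare in a 24-fibre MPO")
```

Running this gives 8 fibres in use for 40Gb/s and 20 for 100Gb/s, which is why a 24-fibre MPO trunk installed today can carry either rate without re-cabling.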
Not planning to build a network ready for 40 or 100Gb/s may prove to be costly in the long run, both in terms of the CAPEX required to re-configure the existing physical layer to meet demand, and in terms of lost revenue through network downtime while this activity is being conducted.
A recent report published by the IEEE in North America has revealed that servers with 100GbE I/O ports will begin to ship in 2015 and, by 2020, will account for more than 15% of all server ports shipped. When one considers that all these 100GbE-enabled servers will aggregate at the core of the data centre, the next generation of switching platforms will need to be prepared to accept all this data, leading to the possibility of zettabyte “highways”.
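To give a feel for the scale behind zettabyte “highways”, here is a back-of-the-envelope calculation; the server count and sustained utilisation are assumptions chosen purely for illustration, not figures from the IEEE report.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

# Assumed figures, for illustration only.
servers = 10_000          # 100GbE-attached servers aggregating at the core
port_gbps = 100           # per-server I/O port speed
avg_utilisation = 0.25    # sustained average load on each port

bits_per_year = servers * port_gbps * 1e9 * avg_utilisation * SECONDS_PER_YEAR
zettabytes_per_year = bits_per_year / 8 / 1e21

print(f"~{zettabytes_per_year:.2f} ZB per year crossing the core")  # ~0.99 ZB
```

Even at a quarter of line rate, ten thousand 100GbE-attached servers push roughly a zettabyte through the core every year.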
Virtualization makes sense
Virtualization is an enabling technology that makes sense in so many ways, supporting IT initiatives while keeping financial costs in balance. It allows a business to scale to meet customer demands without requiring the same linear increase in physical resources.
However, as in life, achieving great things and stepping to the next level of performance and reward requires focused control and ability. Controlling the physical layer with CPID and enabling it for 40 and 100Gb/s throughput will help support network virtualization that delivers the capacity and bandwidth required to stay ahead in the zettabyte era.

