
The virtual reality of the cloud

Issue: Europe II 2010
Article no.: 11
Topic: The virtual reality of the cloud
Author: Roger Baskerville
Title: VP of Sales for EMEA
PDF size: 201KB

About author

Roger Baskerville is VP of Sales, EMEA at Vizioncore. Mr Baskerville joined Vizioncore in November 2008 from Citrix Systems where he was Regional Director for Server Virtualisation in Northern Europe following their acquisition of XenSource, where he was Sales Director EMEA.

Article abstract

Server virtualisation hosts various virtual machines and appliances in one place and delivers huge potential for consolidation and cost-savings, both in terms of IT assets and energy consumption. But all this new power is accompanied by a fresh set of responsibilities and associated risks for those not managing it appropriately. Properly managed, virtualisation enables viable service level agreements and improves performance for end users. The same factors apply to the cloud – but on a larger scale.

Full Article

Once upon a time, IT was simple. Each worker sat at their own personal computer with all their data stored securely in one physical place. If a new worker joined, you bought another machine and that was that. But then virtualisation came along and changed everything. At its core, virtualisation, like the cloud, is all about abstraction. By severing the direct link between user experience and hardware, the computer becomes more personal than ever, gaining an independent existence that makes it available whenever and wherever you need it. But to understand where virtualisation and its relationship with the cloud stand today, it is worth first taking a quick trip through its history and considering the advantages and lessons learned so far.

Consolidation

What quickly became apparent when server virtualisation was in its infancy was the potential for cost-savings and consolidation. By hosting various virtual machines and appliances in one place, the efficiency gained from retiring redundant hardware is enormous and immediately gratifying. It is a natural evolution, simple to explain and justify to anyone within the organisation. While 20:1 server consolidation ratios are common, some organisations reach 40:1 or even higher. Even at the low end, a 20:1 consolidation ratio means that one server now does the work previously done by 20. The organisation no longer has to power, cool and maintain 19 servers, which represents a 95 per cent reduction. You can approach the high end of consolidation by making virtual machines as compact as possible, which lets more of them share a single host. New solutions can automatically shrink the size of virtual machines (VMs), using techniques such as de-duplication, compression, realignment and defragmentation. With VMs shrinking by anywhere from 30 to as much as 80 per cent of their original size, anyone can see the potential impact on storage resources.
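The arithmetic behind those consolidation and shrink figures is easy to check. The following sketch is purely illustrative (the function names are hypothetical, not from any real product); it only restates the article's 20:1 ratio and 30-80 per cent shrink range as calculations:

```python
# Hypothetical back-of-the-envelope consolidation calculator.
# Ratios and shrink percentages come from the article; the
# function names are illustrative, not from any real tool.

def servers_retired(physical_servers: int, ratio: int) -> int:
    """How many physical servers a given consolidation ratio removes."""
    hosts_needed = -(-physical_servers // ratio)  # ceiling division
    return physical_servers - hosts_needed

def storage_saved(vm_size_gb: float, shrink_pct: float) -> float:
    """Storage reclaimed when a VM image shrinks by shrink_pct per cent."""
    return vm_size_gb * shrink_pct / 100

# A 20:1 ratio over 20 servers retires 19 of them (a 95 per cent cut).
print(servers_retired(20, 20))   # 19
# A 100 GB image shrunk by 30 or 80 per cent frees 30 or 80 GB.
print(storage_saved(100, 30), storage_saved(100, 80))
```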
The ability to resize VMs not only lets more VMs share a single host, but also reduces the IT assets and energy required to back up, store and transfer virtual images. When you consider that IT equipment accounts for approximately nine per cent of all energy consumed by businesses, and that data centres use up to 1,000 times more power than equivalent office space, the potential savings are something every organisation needs to reckon with. Servers alone account for 0.6 per cent of all power consumed in the USA (1.2 per cent if power for cooling systems is included), according to a study by Stanford University researchers.

Flexibility

Alongside this, firms recognised the unprecedented flexibility enabled by the technology. The ability to make virtual machines appear or disappear on demand, in whatever configuration necessary, was another immediately seductive aspect of the offering. Having to deploy a new server for each new application made IT departments slow to react to business needs, as each physical machine had to be budgeted for, purchased and set up on the network. A 'virtual-first' approach means organisations will look to run any new application in a VM unless a standalone physical machine is absolutely required.

Pooled resources

With so many physical servers in the average data centre, one significant problem was the difficulty of knowing what was going on. Virtual infrastructures treat the entire environment as a large pool of resources, and administrators can see exactly what is happening across that pool. Administrators get top-down views of the infrastructure, as well as alerts and reports on potential hardware problems, virtual machine performance and host machine performance. The insight into IT performance that virtualisation management tools provide allows organisations to be far more proactive.
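The alerting described above boils down to comparing pooled utilisation figures against thresholds. A minimal sketch, assuming hypothetical metric names and limits (no real monitoring product's API is implied):

```python
# Hypothetical alert check across a pooled view of hosts.
# Metric names and thresholds are illustrative assumptions.
THRESHOLDS = {"cpu": 0.90, "memory": 0.85, "datastore": 0.80}

def check_alerts(metrics):
    """metrics: {entity: {metric: utilisation 0..1}} -> list of alert strings."""
    alerts = []
    for entity, readings in metrics.items():
        for metric, value in readings.items():
            limit = THRESHOLDS.get(metric)
            if limit is not None and value >= limit:
                alerts.append(f"{entity}: {metric} at {value:.0%} (limit {limit:.0%})")
    return alerts

print(check_alerts({"esx-01": {"cpu": 0.95, "memory": 0.40}}))
```

The point of the pooled view is that one such check spans every host and VM at once, rather than being configured machine by machine.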
Management tools allow IT departments to make much better informed decisions on where to focus expenditure and resources, by providing a level of detail on the performance of the overall infrastructure far greater than that available in the physical world. However, as is so often the case, all this new power brought with it a whole new set of responsibilities and associated risks for those not managing it as the new paradigm demanded. As a result, we are entering a new phase of the virtualisation story and, because the two are so inextricably entwined, turning the page on a new chapter for the cloud.

Server sprawl

As discussed previously, the ability to conjure a new virtual machine out of nothing is a formidable power. However, no technology can take physical hardware out of the picture completely; sooner or later, it comes back to a game of resources and efficiency. As a result, virtual server sprawl has emerged as one of the predominant threats to those who move to a virtual infrastructure. Very quickly, the temptation to provision VMs on a whim can lead to an explosion in their creation. It is not uncommon for a large enterprise to end up with hundreds of unused machines sitting idle. Even in this state, such machines can be surprisingly demanding, consuming memory, disk space and data protection time and resources. Furthermore, inactive virtual machines cannot receive security updates, leaving them vulnerable should they ever be reactivated. Left unchecked, virtual server sprawl can put an organisation in a position where all the efficiency improvements brought about by the move to virtualisation are completely negated. However, an astute administrator equipped with the right tools has increasingly little to fear from this threat.
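Containing sprawl starts with finding the idle machines. A toy sketch of that reclamation sweep, with invented VM fields and an assumed 30-day idle threshold (real tools track far richer inventory data):

```python
# Illustrative sketch of flagging idle VMs to contain sprawl.
# The VM fields and idle threshold are assumptions for the example.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class VM:
    name: str
    powered_on: bool
    last_active: datetime
    disk_gb: float

def sprawl_candidates(vms, idle_days=30):
    """Return VMs that are powered off or inactive past the threshold."""
    cutoff = datetime.now() - timedelta(days=idle_days)
    return [vm for vm in vms if not vm.powered_on or vm.last_active < cutoff]

fleet = [
    VM("web-01", True, datetime.now(), 40.0),
    VM("test-old", False, datetime.now() - timedelta(days=200), 80.0),
]
for vm in sprawl_candidates(fleet):
    print(f"{vm.name}: reclaim {vm.disk_gb} GB; patch before any reactivation")
```

Note the second concern the article raises: a flagged VM should be patched before it is ever powered back on, not simply deleted or ignored.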
Backup 2.0

Virtualisation is also changing the way organisations tackle tasks such as backup, recovery and replication. Physical backup solutions have focused solely on the data stored on a server. As far as recovering lost data goes, this is a fairly reliable method, but in a virtual environment the possibilities go much further. When backing up VMs, the entire server environment, including operating system, applications and data, can be copied. Gone are the days when a failure meant sourcing new hardware and rebuilding the machine from the ground up; restoring a VM is as simple as restoring a file onto a new host machine. This offers a very cost-effective way of tackling business continuity and disaster recovery. Virtual machines can be stored off site and then moved very quickly back into the production environment should any problem occur. As with any kind of backup, organisations need the right strategy in place to ensure that virtual machines are backed up regularly enough for the backups to remain usable and relevant to the business.

SLAs

Virtualisation makes disaster recovery and business continuity available to any organisation, and this new-found technological ability suddenly makes service level agreements (SLAs) viable. IT departments now have a platform that gives them the flexibility to deliver an optimised infrastructure and take a proactive approach to managing services, rather than engaging in the fire-fighting that was previously commonplace. Furthermore, virtualisation delivers a better level of performance to end users, so suddenly SLAs are not just about uptime but can also factor in performance. Virtual infrastructures treat computing resources as one large pool, meaning that if an application has a large spike in activity, more resources can be made available to it. Rules can be built in to guarantee levels of resources to particular applications.
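One way to picture those built-in rules is as reservations carved out of the shared pool: each application gets its guaranteed slice first, and any spare capacity is then shared out in proportion to remaining demand. The allocation scheme below is a simplified illustration of that idea, not how any particular hypervisor implements it:

```python
# Simplified resource-pool rule: honour per-app reservations first,
# then split spare capacity in proportion to unmet demand.
# Pool size, reservations and app names are invented for the example.
def allocate(pool_mhz, reservations, demand):
    """Return a {app: allocated CPU MHz} mapping for one scheduling pass."""
    # Step 1: every app receives its reserved share (capped at its demand).
    alloc = {app: min(reservations.get(app, 0), demand[app]) for app in demand}
    spare = pool_mhz - sum(alloc.values())
    # Step 2: distribute what is left in proportion to outstanding demand.
    unmet = {app: demand[app] - alloc[app] for app in demand}
    total_unmet = sum(unmet.values())
    for app in demand:
        if total_unmet > 0 and spare > 0:
            alloc[app] += spare * unmet[app] / total_unmet
    return alloc

# An ERP app with a 4000 MHz reservation keeps its guarantee even while
# a batch job spikes; both then share the remaining headroom.
print(allocate(10000, {"erp": 4000}, {"erp": 6000, "batch": 8000}))
```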
Suddenly, IT departments can take end-user feedback and performance metrics and optimise the infrastructure with these in mind.

The cloud

In many ways, the cloud faces all these factors on a larger scale. Definitions vary simply because it can be deployed in so many ways, but what matters about the cloud is that layer of abstraction. Services are created dynamically as and when needed, from resources that may span several data centres. This is generally achieved through a combination of virtual servers, networks and storage. But functions like backup remain critical, and old habits must be unlearned. For example, the adoption of technologies that base data protection on images rather than file systems is a key shift in thinking that pays dividends in the new structure. Success in creating, or making the most of, cloud infrastructure requires the clued-in to draw on their experiences in virtualisation and apply that knowledge to the larger canvas with a touch of imagination. The 'magic' of the cloud, if there is any, is the way in which all resources are now dynamic and can be provisioned on the fly in response to real-time fluctuations in user demand and processing requirements. In this context, well-tuned monitoring tools become essential, because they can trigger real-time re-provisioning of every type of resource. And so, this radical new world of virtual, abstract computing demands one key priority: accurate, effective management. This has always been a concern, but as we move toward a new age of computing it presents itself more and more as the single most valuable way to harness and optimise the coming power. This will be the key to seeing what cloud computing, and indeed all virtual computing, can really do.
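The monitoring-driven re-provisioning described above can be reduced to a simple control loop: watch utilisation, and when a host runs hot, move work to a host with headroom. The toy below assumes invented host names and thresholds, and merely suggests migrations rather than performing them:

```python
# Toy monitoring-triggered rebalancer; purely illustrative.
# Host names and the 85%/25% thresholds are assumptions.
def rebalance(hosts, high=0.85, low=0.25):
    """Suggest (overloaded, target) migration pairs: move load off any
    host above `high` utilisation onto the least-loaded host, provided
    that host is below `low` utilisation."""
    actions = []
    for name, util in hosts.items():
        if util > high:
            target = min(hosts, key=hosts.get)
            if hosts[target] < low and target != name:
                actions.append((name, target))
    return actions

# host-a is saturated while host-b is nearly idle, so a migration
# from host-a to host-b is suggested.
print(rebalance({"host-a": 0.92, "host-b": 0.10, "host-c": 0.60}))
```

A real cloud platform would close this loop automatically and extend it to storage and network resources as well, which is exactly why the article ends on management as the key priority.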
