Andy Huckridge Issue: Europe I 2014
Article no.: 9
Topic: Running the services of tomorrow, on the networks of tomorrow
Author: Andy Huckridge
Title: Director of Service Provider Solutions & SME
Organisation: Gigamon Inc.
PDF size: 203KB

About author

Andy Huckridge is director of service provider solutions at Gigamon, spearheading the strategy for Gigamon’s entire spectrum of global Service Provider Solutions. Huckridge joined Gigamon from VSS Monitoring, where he was Senior Director of Telecom Strategy and Marketing.
Previously, he was Director of Marketing at Spirent Communications, and Director of Product Management at 8×8 Inc. Huckridge holds a patent in VoIP and was an inaugural member of the “Top 100 Voices of IP Communications” list.
Huckridge holds an M.Sc. in advanced telecoms and spacecraft engineering from the University of Surrey, as well as a B.Eng (Hons) in Telecoms and EE from the University of Glamorgan.


Article abstract

As carriers gear up for Big Data and superfast LTE broadband, they must also upgrade their network monitoring capabilities. Visibility is equally essential for the anticipated NFV deployments. NFV creates virtual computing environments for telecommunications functions across the entire network, removing the last barrier to Big Data and thus creating further monitoring difficulties. An NFV-ready monitoring tool will be able to connect the large pipes to the appropriate analytical tools. It will provide the necessary visibility during NFV deployments, when islands of unimplemented or incompatible NFV systems still remain in the network.


Full Article

The mobile device has now become a ubiquitous and almost indispensable tool for a vast proportion of the population. In fact, the International Telecommunication Union estimates that in 2012, there were 1,721m mobile subscribers in the EMEA region. Mobile subscribers demand constant access to the network from home, work and all points in between. While SMS and voice were once satisfactory, users now expect to be able to update friends and family through social media, watch streamed TV programmes, share photos, play video games and stream music at the touch of a button, from anywhere. This increased usage is pushing networks to their limit, yet subscribers continue to demand high quality at low cost. With an estimated 586m active mobile broadband users in EMEA, operators are struggling to keep up.
To compound the situation further, we are moving into an era of 4G LTE where the speeds on offer will cause the amount of data traversing the network to explode. Although Europe currently accounts for only five percent of the 4G market globally, this figure is likely to increase rapidly over the next few years. Analyst firm IDATE has predicted that, by the end of 2016, Africa and the Middle East will account for 7.5 percent of a 915m subscriber market, with Eastern Europe accounting for 4.9 percent and Western Europe 15.8 percent. As more subscribers join 4G networks, this ‘Big Data’ tsunami will continue to swell, and there is little carriers can do to slow the volume of information traversing their networks. Carriers face a difficult balancing act: implementing tools to monitor and manage the data, while keeping Average Revenue Per User (ARPU) up and churn rates down. While emerging technology such as Network Function Virtualisation (NFV) is attractive to operators thanks to its promise to decrease capital expenditure and operating costs, such complex environments pose further monitoring challenges due to their diversity.
Customer retention in a fickle market
While basic factors, such as an operator’s ability to provide the latest ‘must-have’ devices, can have an effect on whether a customer chooses to remain with a provider, for the most part, customer loyalty and retention need to be earned over time. Customers have become increasingly fickle and loyalty is steadily dipping – a WDS study found that, in the UK alone, almost 40 percent of mobile subscribers are at risk of churn. Carriers therefore need to implement solutions that can manage the complexity of today’s networks, optimise end-user experience, manage network capacity and pave the way for the service offerings of tomorrow.
When it comes to keeping customers, user experience is the most important consideration for carriers within EMEA today. However, it is often the case that speeds and bandwidth are not shared equally among customers. A very small percentage of users generates most of the network load, and these heavy users negatively impact the Quality of Experience (QoE) for everyone else. A lack of subscriber-level visibility has more often than not led carriers in EMEA to develop and market ‘one-size-fits-all’ data packages and tariffs that have little effect on congestion – and, more importantly, have a negative impact on the majority of subscribers, who are stuck with slower speeds.
Bandwidth issues are by no means the only challenge that operators are facing. As network providers navigate the transformation to next-generation network technologies, such as 4G and beyond, they will have to contend with emerging devices, significantly more data traffic and sudden surges in popularity of the latest devices on the market. This makes the development of cost-effective, high-capacity networks inherently difficult. To make matters worse, operators simply cannot optimise the design and management of their networks without fully understanding the drivers of traffic – in terms of applications, devices, subscriber behaviour, usage patterns, and so on.
Business models at breaking point
In order to ensure that they can cope with this influx of data on the network, operators currently have little option but to install larger pipes and increase the number of monitoring tools on the network. However, these upgrades come at vast expense, and this cost cannot be passed on to customers, as an increase in standard pricing promotes churn – something mobile carriers can ill afford in an already temperamental market. As the cost of the tools required to monitor and analyse the huge amounts of data continues to rise, ARPU decreases, and this is causing service providers’ existing business models to break down.
Part of the problem is the fact that many service providers lack essential visibility across their networks, often creating a number of blind spots, which can further impact performance as next-generation services are rolled out. While it is true that most solutions on the market deliver some insight into network activity, they often lack the intelligence to link application usage patterns with individual subscribers for an end-to-end view. Add to that the issue of mobile device fragmentation, the potential for service-impacting handset configurations, the dynamic application developer ecosystem – and carriers soon find that their networks become complex and difficult to manage. More importantly, new revenue channels become virtually impossible to spot.
Network function virtualisation – the solution?
Network operators in EMEA are therefore looking towards new technology in an attempt to decrease capital expenditure and operating costs, without negatively impacting the QoE for their subscribers. Network Function Virtualisation (NFV) is one such technology that is making waves within the industry today. NFV provides the ability to implement network functions – such as firewalls, routers and VPN gateways – within software, and consolidate many network equipment types onto industry standard high volume servers, switches and storage. The technology promises to reduce equipment costs and power consumption – thereby decreasing operating costs.
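To make the idea concrete, here is a minimal sketch, in no way any vendor's implementation, of what it means to express network functions in software: once a firewall or a NAT is just a function, it can be composed into a service chain and run on an industry-standard server. The packet fields, port numbers and addresses below are illustrative assumptions, not drawn from the article.

```python
# Hypothetical sketch: two network functions expressed as plain software,
# chained together the way NFV composes virtualised functions.

def firewall(packet):
    """Drop packets destined for blocked ports; pass the rest."""
    blocked_ports = {23, 445}  # illustrative policy
    return None if packet["dst_port"] in blocked_ports else packet

def nat(packet):
    """Rewrite the private source address to a public one (illustrative)."""
    return dict(packet, src_ip="203.0.113.1")

def service_chain(packet, functions):
    """Run a packet through an ordered chain of virtual functions."""
    for fn in functions:
        packet = fn(packet)
        if packet is None:  # dropped by an earlier function in the chain
            return None
    return packet

pkt = {"src_ip": "10.0.0.5", "dst_port": 80}
print(service_chain(pkt, [firewall, nat]))
```

The point of the sketch is that adding, removing or reordering functions becomes a software change rather than a hardware deployment, which is where the promised cost and power savings come from.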
There are, however, a number of obstacles to successful NFV, centring on the difficulty of monitoring these agile and diverse environments. For example, each NFV vendor will implement the standard in a slightly different way, or implement a different version of the same standard. There is also the challenge of islands of differing topology: it will be some time before all network functions are fully virtualised, which means that networks will be based on multiple different technologies.
Essentially, operators need to reduce costs, maximise ARPU and increase agility, but they face a situation where they cannot cope with monitoring and analysing the data already present within their infrastructures. It is a seemingly lose-lose situation, as the technology available to reduce operating costs will create further monitoring difficulties. To compound this, NFV will create elastic computing environments for entire functions of telecommunications networks and remove the last barrier to Big Data, allowing it to truly explode and thereby intensifying the problem.
Enabling pervasive visibility
Service providers therefore require a solution that allows them to monitor the new equipment being deployed as part of NFV-enabled networks, while also providing an effective way of monitoring and analysing ever-increasing traffic.
NFV deployments require higher-level monitoring capabilities that allow for greater reduction of monitored traffic through advanced multi-threaded filtering, as well as packet manipulation. This in turn allows greater integration with analytic tools, enabling those tools to perform more efficiently by maximising their analytic throughput. A monitoring network that enables NFV deployments will need to provide extensive functionality at the packet, flow and network-wide levels, across NFV, traditional and hybrid deployments, to truly deliver the visibility required.
At the same time, in order to cope with increasing data, the monitoring network will need to connect the right analytical tools to the appropriate large pipes. In addition, the data needs to be conditioned through advanced filtering and packet manipulation, so that the amount of data arriving at each tool is reduced and formatted exactly for the tool’s consumption. This means that each tool can process more data without having to dissect the incoming information, leaving it free to get on with the important task of data analysis.
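The conditioning step described above can be sketched as follows, under the assumption that packets are simple records; a real visibility fabric performs this in hardware at line rate, and the protocol names and fields here are hypothetical examples.

```python
# Hypothetical sketch of traffic conditioning: filter the stream down to
# the traffic one analytic tool cares about, then trim each record to the
# fields that tool actually consumes, so it spends its cycles on analysis.

def condition(packets, wanted_protocols, wanted_fields):
    for pkt in packets:
        if pkt["protocol"] not in wanted_protocols:
            continue  # advanced filtering: drop irrelevant traffic early
        yield {f: pkt[f] for f in wanted_fields}  # packet manipulation: slim the record

stream = [
    {"protocol": "GTP", "subscriber": "A", "bytes": 1200, "payload": "..."},
    {"protocol": "DNS", "subscriber": "B", "bytes": 80,   "payload": "..."},
    {"protocol": "GTP", "subscriber": "C", "bytes": 900,  "payload": "..."},
]

# An illustrative mobile-core analyser that only wants GTP records, minus payloads:
for record in condition(stream, {"GTP"}, ("subscriber", "bytes")):
    print(record)
```

Each tool in the monitoring layer would be fed its own conditioned slice of the same underlying traffic, which is what lets the tools maximise their analytic throughput.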
From a service provider perspective, NFV is the path forward for several reasons: from removing proprietary hardware, to new ways of controlling services, to reduced operating costs. While there is clearly great value in this technology, without the provision of a monitoring infrastructure, the speed of adoption could be greatly reduced.
The deployment of a unified visibility fabric architecture should ease these monitoring headaches. It should deliver pervasive visibility into NFV environments – as well as legacy and hybrid ones. Pervasive visibility is essential, as it enables the unification of data visibility across topologies. With a monitoring fabric in place, network operators will be able to deploy NFV environments efficiently, while also managing their data more effectively. Only through increased visibility will operators be able to improve on current business models and, more importantly, existing expense structures, whilst running the big data services of tomorrow, on the networks of tomorrow.