Topic: The network that gets this right will get the gold
Title: Vice President for EMEA
David Hill is Vice President for EMEA at Spirent. Mr Hill is responsible for both the Performance Analysis and Service Assurance businesses, including the Wireless business.
Mr Hill joined Spirent Communications in 2001 to run the Performance Analysis Broadband business in EMEA. Over the next few years, he added both the Wireless and Service Assurance businesses to his responsibilities. His career in telecommunications spans 27 years. He started with Racal Milgo in the 1980s, where he held a number of sales management positions. In 1989 Mr Hill moved to Penril Datability Networks, where he progressed from managing Europe to running all of the international business outside North America. From Penril Datability he moved to a similar role at Micom Communications in the mid-90s. In 1996, Micom was acquired by Nortel Networks, and for the first two years Mr Hill managed the ‘Nortel Micom’ business as Head of Global Sales. When Nortel Networks acquired Bay Networks, Mr Hill became responsible for the integration of Micom and Bay with Nortel’s Enterprise organisation. He then took up the post of MD UK and Eire for Enterprise and moved to the position of VP EMEA Carrier Managed Solutions prior to joining Spirent in 2001 as VP EMEA.
With the advent of smart devices, events such as the 2012 Olympics will test network capabilities to their limits. In such events, real-time video is in great demand, with high peaks created at particular moments of scoring or record breaking. The complexities of mixed traffic, real-time video and unexpected peaks challenge the access and backhaul parts of the network, as well as the core. This means that all parts of the service must be tested before roll out. Testing tools should provide simulated scenarios of stress conditions and high peaks for laboratory tests, for end-to-end field-testing mechanisms and for ongoing monitoring facilities. Testing of video, in particular, must be comprehensive, since the sensitive human eye spots the slightest video fault, resulting in a loss of reputation that could be avoided by thorough testing.
The 2012 Olympics present a microcosm of all the problems facing today’s video providers. Any service that can deliver what the public wants under peak traffic conditions will win massive loyalty. David Hill, VP EMEA at Spirent Communications, suggests ways that service providers can best prepare to meet the demand.
“Were you lucky enough to get tickets?” – How many times have I heard that come up recently in conversations about the 2012 Olympics? It is the sort of thing that gives ICT a bad name – luck shouldn’t really be involved. If the system can’t cope with the demand for tickets, what is the chance that it will be able to deliver a high-bandwidth service like video on demand?
The Olympics is now a global event, but it has not always been that way. The 1924 Paris Olympics were the first to be partly covered by live radio broadcast. The 1936 Berlin Olympics were televised, but only broadcast across Berlin and Potsdam in Germany. Worldwide television coverage only began with the 1960 Rome Olympics, and the 1964 Tokyo Olympic Games were the first to be transmitted live across the Pacific by satellite.
Television coverage in 2012 should reach new heights – with 3D and high definition already well established – but we can also expect a whole new tide of personal Olympic experiences to flood the Internet in the form of YouTube clips, e-mails and downloads. For the first time, we can access this new intimate layer of Olympic experience.
It is hard to believe that 2012 marks only the fifth anniversary of the iPhone, first launched on June 29th 2007. In just five years, the smartphone has had a significant impact on consumer expectations and society. As the world’s first smartphone Olympics, the London event will be different from any previous one.
Imagine a world record being broken: there will be a forest of smartphones held up to catch the winning moment and transmit it back to families and friends. Meanwhile, in scattered locations, crowds will be watching other events but at the same time will be able to witness that record-breaking moment broadcast to their handsets.
That is the dream, but what about the reality? Thousands of smartphones simultaneously transmitting massive video files from a single stadium will be a data traffic nightmare. How can the system cope without overload, backhaul problems, dropped calls and compromised quality? Not only will the public be angry if they cannot use their devices on the day, but there will also be numerous messages between organisers, messages to and from athletes or lost spectators – critical messages that must be delivered in order to avoid chaos.
The challenge of video on demand
Any EMEA video provider – whether terrestrial, satellite, cable, broadband or mobile – faces two basic problems:
• Getting hold of the content
• Delivering that content to provide a good user experience.
The first problem is universal: for an event as prestigious as the Olympics, the established broadcast channels have the advantage. At the other end of the scale, services like YouTube will be flooded with free content by users.
Solutions to the second problem depend on the technology. Broadcasters have a great deal of control over the schedules and are free to focus on high quality and best possible user experience. The most pressing delivery problems come with any form of video on demand, because demand can fluctuate wildly.
An event like the Olympics or a World Cup creates its own peak of demand and, within that broad peak there will be demand explosions when records are being broken – the very times when most people are watching. If the network fails at that point, everyone knows about it. The mobile network presents an extreme example of fluctuating demand, so we will concentrate on the problem of delivering video to and from smartphones – though many of the suggestions equally apply to Internet TV and other on-demand services.
The challenge of mobile backhaul
For mobile operators in EMEA, the capacity of the core network is not such an issue, because the main bottleneck now lies in the link between the cell tower and the high bandwidth core. Traditionally this was provided by setting up a leased line and adding further lines in anticipation of growing traffic. This is too cumbersome a solution to meet today’s fluctuations in demand. Instead, it is broadly accepted that the future lies with Carrier Ethernet for backhaul – a technology that offers the simplicity of an all-IP network for data traffic, as well as scalability in small, rapidly implemented stages.
Ethernet can run over copper, optical or microwave media allowing enormous flexibility of deployment, and packet microwave backhaul is booming as a flexible and effective solution. This means that there is a solution to the backhaul crisis, and it is being implemented as the pressure mounts. As with any change, it needs to be thoroughly tested under a range of operating conditions to make sure that it will handle peak loads and stress conditions. When the service provider owns the backhaul, it will be tested before roll out, but the same applies to third party backhaul. The supplier cannot guarantee service levels until the backhaul has been thoroughly tested, but for the operator this is just the start – because the complexity of communication networks means that you cannot just test each component, you must also test end-to-end delivery.
End to end testing for video delivery
Network testing involves more than simply piling massive amounts of data onto the network to see how much it can carry, because most real-life network traffic comes in bursts, made up of different overlaid patterns – unless it is a dedicated video delivery network.
Every type of transaction has its own pattern, partly dictated by the underlying technology and its communication protocols, but also by the human user. Voice calls, for example, include tiny moments of silence, Internet browsing has different rhythms, while a big download could come as one sustained rush. Therefore, the real test is to simulate such real-world traffic and increase it to peak levels and beyond. This requires a trained tester who saves much of the trial and error by knowing what to look for and where to apply pressure. Sophisticated test devices, however, go a long way towards creating realistic conditions because they can record samples of everyday traffic and multiply typical behaviours many times to simulate real-life rush-hour traffic. Only when you have found some way to generate and superimpose realistic traffic profiles can you be sure of a truly realistic test scenario.
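The idea of superimposing traffic profiles can be sketched in a few lines of Python. The per-service rates and subscriber multipliers below are invented purely for illustration; they are not measured figures:

```python
# Hypothetical per-service traffic profiles: packets/sec over ten
# one-second slots, invented for illustration (not measured data).
PROFILES = {
    "voice": [50, 50, 48, 0, 49, 50, 47, 0, 50, 50],    # silences show as zeros
    "browsing": [10, 120, 5, 0, 90, 15, 0, 200, 8, 3],  # bursty rhythm
    "download": [800] * 10,                              # one sustained rush
}

def superimpose(profiles, multipliers):
    """Overlay each profile, scaled by a subscriber multiplier, to build
    an aggregate load curve (packets/sec per time slot)."""
    length = len(next(iter(profiles.values())))
    total = [0] * length
    for name, profile in profiles.items():
        factor = multipliers.get(name, 1)
        for i, rate in enumerate(profile):
            total[i] += rate * factor
    return total

# Simulate 1,000 voice users, 5,000 browsers and 200 downloaders at once.
load = superimpose(PROFILES, {"voice": 1000, "browsing": 5000, "download": 200})
peak = max(load)  # the slot where bursts happen to coincide
```

Note that the peak lands where individual bursts coincide, not where any single service is busiest – which is exactly why superimposed profiles reveal stress points that testing each service in isolation would miss.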
Today’s sophisticated network testing devices not only generate huge volumes of simulated realistic network traffic, but they also monitor what happens and pinpoint bottlenecks or contentions in the system. In some cases, this can save on capital investment – rather than imposing an extensive upgrade of the whole network, you can pinpoint the bottlenecks in the system and often solve them with a minor upgrade, or by simply reconfiguring the existing system.
Testing any video delivery system requires three main stages: pre-testing in the lab, field testing and on-going monitoring during full deployment. For laboratory testing, the greatest possible realism of traffic conditions is needed. For field-testing, it helps to have fully featured, yet mobile, test devices. For ongoing monitoring of the network’s performance, what is required is test equipment that can be set up by the operator to run regular or real time test programmes with customised reporting of performance and alerts for any problems before they become critical.
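A rough illustration of the ongoing-monitoring stage: the snippet below checks each sampled metric against an operator-set limit and raises alerts before a breach becomes critical. The metric names and thresholds are hypothetical, chosen only to show the shape of such a check:

```python
# Hypothetical alert thresholds for a video service (illustrative values).
THRESHOLDS = {"latency_ms": 150, "jitter_ms": 30, "loss_pct": 1.0}

def check_sample(sample, thresholds=THRESHOLDS):
    """Return a list of alert strings for every metric in the sample
    that breaches its configured limit."""
    alerts = []
    for metric, limit in thresholds.items():
        value = sample.get(metric)
        if value is not None and value > limit:
            alerts.append(f"{metric}={value} exceeds limit {limit}")
    return alerts

# A sample with excessive latency triggers one alert; a healthy sample, none.
bad = check_sample({"latency_ms": 200, "jitter_ms": 10, "loss_pct": 0.2})
ok = check_sample({"latency_ms": 100, "jitter_ms": 10, "loss_pct": 0.5})
```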
In the case of real-time video, it is not enough to transmit the data end to end. What needs to be tested is the quality of the viewer’s experience, and that depends on a number of key parameters including latency, packet loss and jitter.
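Those three parameters can be derived from per-packet timestamps. The sketch below computes mean latency and loss directly, and uses the smoothed interarrival-jitter estimator defined in RFC 3550; the data structures (sequence-number-to-timestamp maps) are assumed for illustration:

```python
def link_metrics(sent, received):
    """Compute mean latency, packet-loss ratio and RFC 3550-style
    interarrival jitter. `sent` maps sequence number -> send time,
    `received` maps sequence number -> receive time (missing = lost);
    all times in milliseconds."""
    lost = [seq for seq in sent if seq not in received]
    loss_ratio = len(lost) / len(sent)

    # Transit time per delivered packet, in sequence order.
    transits = [received[seq] - sent[seq] for seq in sorted(received)]
    mean_latency = sum(transits) / len(transits)

    # RFC 3550: jitter is a running average of transit-time differences,
    # updated as J += (|D| - J) / 16 for each consecutive packet pair.
    jitter = 0.0
    for prev, cur in zip(transits, transits[1:]):
        jitter += (abs(cur - prev) - jitter) / 16
    return mean_latency, loss_ratio, jitter

# Four packets sent 20 ms apart; packet 4 never arrives.
sent = {1: 0, 2: 20, 3: 40, 4: 60}
received = {1: 30, 2: 52, 3: 68}
latency, loss, jitter = link_metrics(sent, received)
```

The deliberately small gain (1/16) makes the jitter estimate a noise-reduced trend rather than a reaction to any single delayed packet, which is why it is the usual figure reported by RTP endpoints.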
Video delivery is an exceptionally demanding challenge, since the human eye can detect tiny movements. Deliver a picture with a few missing pixels or moiré interference, and the viewer’s eye goes straight to that glitch. From a PR point of view, it is almost better to deliver no picture than to deliver bad quality video – therefore quality of experience must be tested under all likely operating conditions.
In addition, any service running over data networks might be subject to cyber-attacks such as denial of service. If this is likely to cause problems, you should use test equipment that can emulate a range of attack scenarios and monitor the results. It is not so much a question of trying to make your network 100 per cent safe (unless, perhaps, for government or military purposes) but rather of knowing the risks in advance and being prepared.
The reputation of any EMEA public service with such a high profile as video depends critically on good delivery and user experience. This must be tested before roll out.
When the service is delivered over a data network, sophisticated equipment and procedures are needed to address the complexities of mixed traffic. When the service depends on demand, even more care must be taken to test for extreme fluctuations.
The final test will be in the eyes of the viewer.