Topic: The evolution of broadcasting in an IP-Centric World
Title: US Subsidiary Manager
Organisation: WorldCast Systems Inc.
Tony Peterle, US Subsidiary Manager, WorldCast Systems
Tony Peterle has been involved in radio broadcasting continuously for nearly 40 years as an engineer, talent, and trafficwatch pilot. Tony has held Chief Engineer positions in several major markets, and in 2005 received C.S.R.E. certification from the Society of Broadcast Engineers (SBE). Shortly afterward, Tony came to work for WorldCast Systems, where he now manages sales and support for all of the Americas. Certified C.P.B.E. in 2015, Tony enjoys a reputation for his knowledge of IT systems integration, and has written numerous papers and an SBE course on the use of SNMP in broadcast operations.
This paper will examine the effects that widespread use of Internet Protocol (IP) technology – particularly the Internet – has had on broadcasters. Using real-world scenarios, we will also show two examples of how broadcasters can turn that connectivity to their advantage.
The IT revolution
Broadcasting has been in a unique position to both take advantage of the explosion in connectivity – and be eviscerated by it. Audience shares and advertising revenue have both dropped significantly as consumers distribute their attention to multiple, network-enabled devices. Streaming music services, podcasts, on-demand audio and video can be summoned and enjoyed as easily as tuning in a local over-the-air (OTA) program. Broadcasters have expanded the number of programs they can offer, through digital radio and television technologies, but none can compete with the sheer volume of available entertainment options.
Despite this new dynamic, broadcasting has survived, and in some cases is thriving, partly due to certain characteristics, some technological and some more sociological. The key technological consideration is that broadcasting is a true one-to-many distribution system. The number of consumers using the service at any given time has absolutely no impact on the performance of the system. Most online services operate on a one-to-one connection – even if 1,000 users are watching the same live stream, the system must generate an individual stream to be delivered to each individual screen. Network bandwidth may be widely available and inexpensive, but it must scale with the size of the audience, and it is rarely completely free of charge.
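The one-to-many advantage can be made concrete with back-of-envelope arithmetic. The sketch below compares the server-side bandwidth a unicast streaming service needs against the fixed cost of an over-the-air signal; the 128 kbps bitrate is an illustrative assumption, not a figure from this paper.

```python
# Back-of-envelope comparison: unicast streaming vs. over-the-air broadcast.
# The per-listener bitrate is a hypothetical assumption for illustration.

STREAM_KBPS = 128  # assumed per-listener audio bitrate

def unicast_bandwidth_mbps(listeners: int) -> float:
    """Server-side bandwidth needed when each listener gets an individual stream."""
    return listeners * STREAM_KBPS / 1000.0

def broadcast_bandwidth_mbps(listeners: int) -> float:
    """OTA cost is fixed: one transmitted signal regardless of audience size."""
    return STREAM_KBPS / 1000.0

for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9} listeners: unicast {unicast_bandwidth_mbps(n):>9.1f} Mbps, "
          f"broadcast {broadcast_bandwidth_mbps(n):.3f} Mbps")
```

The unicast figure grows linearly with the audience while the broadcast figure never changes, which is exactly the scalability point made above.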
The sociological aspect of broadcasting that offers an advantage is localization. Should consumers desire information about their local area – traffic, events, news items of interest – that information may be available online with a bit of searching, but a local broadcast can deliver local content and entertainment simultaneously, with minimal effort by the user.
Figure 2 Real World Redundant Streaming Performance on a Trans-Atlantic Link
Taking advantage of the connections
Broadcasters have faced the challenge of diminishing audience and advertising revenues by consolidating and maximizing efficiency. And in many cases, the very network connectivity that caused the difficulty can also support some of these cost savings. Radio talent can “track” shows, allowing them to cover multiple shifts in multiple markets. TV stations can deliver programs seamlessly to transmitters and streaming sites, and all can employ centralized monitoring and management systems for their program and transmission facilities, to make the most of their available resources.
Two recent real-world projects illustrate some ways that IT can help broadcasters economize and compete in such a fragmented media environment. One involves a nationwide monitoring and control system built for a major TV network to centralize overnight program switching and operations. The first we’ll discuss, however, is using the Internet for high-quality audio delivery and distribution.
The audio cloud
The “Audio Cloud” is a fresh concept in broadcasting: an architecture that is inherently redundant and self-governing in terms of audio routing and backup. Ultimately, the aim of the Audio Cloud is to allow the broadcaster to deliver audio from point A to B – or indeed from point A to B through Z – as cost-effectively as possible, with the greatest degree of reliability and the least degree of user intervention.
There are four key components to the Audio Cloud:
• Redundant Streaming
• Distributed Intelligence
• Packet Forwarding
• Multicast / Multiple Unicast Relocation
I. Redundant streaming
Redundant Streaming has already proven to be a key technology for enhancing any given point-to-point data stream, protecting against packet losses and losses of connection (LoC). Figure 2 shows an example of Redundant Streaming: an audio stream from Belfast, Northern Ireland to Miami, Florida over the public Internet, using inexpensive DSL providers (two on each end). Each of the contributing streams has suffered losses, but the combined streams deliver zero-loss performance for weeks at a time.
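The core of redundant streaming can be sketched in a few lines: the same packet sequence is sent over independent paths, and the receiver keeps the first copy of each sequence number, so a packet is lost only if every path drops it. This is a minimal conceptual sketch, not the codec's actual implementation; the packet tuples and loss pattern are invented for illustration.

```python
# Minimal sketch of redundant-stream merging: two copies of the same packet
# sequence arrive over independent paths; the receiver keeps the first copy of
# each sequence number, so audio is lost only if BOTH paths drop a packet.

def merge_redundant(stream_a, stream_b):
    """Each stream is a list of (seq, payload); missing seqs model lost packets."""
    received = {}
    for seq, payload in stream_a + stream_b:    # arrival order is irrelevant here
        received.setdefault(seq, payload)        # keep first copy, drop duplicate
    return [received[s] for s in sorted(received)]

# Path A drops packet 2, path B drops packet 4 -- the merged output is complete.
path_a = [(1, "p1"), (3, "p3"), (4, "p4"), (5, "p5")]
path_b = [(1, "p1"), (2, "p2"), (3, "p3"), (5, "p5")]
print(merge_redundant(path_a, path_b))   # complete sequence p1..p5
```

Real codecs add jitter buffering and timing alignment on top of this, but the dedup-by-sequence-number idea is what makes two lossy DSL circuits behave like one clean link.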
II. Distributed intelligence
In addition to their primary role of passing content, the codecs in an Audio Cloud system must also have some capacity for intelligence and for communication with one another. Processing power, and the ability to trigger an action based on a schedule or some sort of out-of-tolerance condition, are essential. Communication is usually accomplished with the Simple Network Management Protocol (SNMP), which will be discussed later.
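The out-of-tolerance trigger described above can be illustrated with a toy check: a node compares a reading against a tolerance window and fires an action when the value drifts outside it. The parameter names and thresholds here are hypothetical, not taken from any real product.

```python
# Toy sketch of "distributed intelligence": a node checks a reading against a
# tolerance window and triggers an action when it leaves the window.
# The thresholds and the simulated readings are invented for illustration.

def check_tolerance(reading, low, high, on_alarm):
    """Call on_alarm(reading) when the value leaves the [low, high] window."""
    if not (low <= reading <= high):
        on_alarm(reading)
        return False
    return True

alarms = []
for power_kw in (9.8, 10.1, 6.2):        # simulated transmitter power readings
    check_tolerance(power_kw, 9.0, 11.0, alarms.append)
print(alarms)   # only the out-of-tolerance reading fired
```

In an Audio Cloud node the `on_alarm` action would typically be an SNMP notification or a local switch to a backup source rather than a list append.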
III. Packet forwarding
Packet forwarding over IP is essentially the ability to give any decode site on the network the capability of supplying other decoders with the same data packets, on either a primary or an automated backup basis. This allows the broadcaster to have multiple encoders as potential encode sources, thereby avoiding a single point of failure. Packet forwarding means that no unnecessary decode and re-encode is required; a packet can simply traverse a codec or node en route to its final designated decoder.
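The idea that a packet traverses intermediate nodes untouched can be sketched as a small relay chain: each node may decode locally, but it passes the *same* packets downstream with no transcoding step. The class and node names below are illustrative, not part of any real product.

```python
# Sketch of packet forwarding: a node relays received packets to its
# downstream decoders unchanged -- no decode/re-encode step -- so any decode
# site can also act as a packet source for others.

class ForwardingNode:
    def __init__(self, name):
        self.name = name
        self.downstream = []     # other nodes fed by this one
        self.received = []       # packets seen at this node

    def receive(self, packet):
        self.received.append(packet)      # decode locally if desired...
        for node in self.downstream:      # ...and relay the SAME packet onward
            node.receive(packet)

# Chain A -> B -> C: C receives the original packets; B never transcodes.
a, b, c = ForwardingNode("A"), ForwardingNode("B"), ForwardingNode("C")
a.downstream.append(b)
b.downstream.append(c)
for pkt in ("pkt1", "pkt2"):
    a.receive(pkt)
print(c.received)
```

Because the payload is never re-encoded along the way, there is no generational audio loss and no added codec latency at the intermediate hops.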
IV. Multicast / Multiple Unicast Relocation
Increasingly, broadcasters are looking to IP technology to replace larger broadcast audio distribution networks (satellite, etc.). Deploying an IP codec network on a large scale can quickly exceed the bandwidth available at a single encode point. True multicast IP structure is one solution, as it replicates the packets along the network, but it imposes a great deal of protocol complexity on network service providers and is generally unavailable across the public Internet, save on custom segments such as eLANs. In a “distributed unicast” scenario, the multi-stream generation function is moved away from the source encoder and closer to the decoders, which can offer significant benefits.
Figure 3 Distributed Unicast Architecture
Figure 4 Typical monitoring point for Broadcast
Figure 5 An Object Identifier (OID) number
Of course, the upload bandwidth from the origin site can be minimized, requiring connections to only two or three devices rather than one for each destination. In addition, the packet replicator/forwarding units can be hosted in secure data centers with guaranteed bandwidth, backup power, and close physical proximity to receive sites in different regions. Each layer of the distribution network retains enough capacity to cover multiple hardware or network connection failures. Intelligent control capability and SNMP communications between nodes allow the network to adapt automatically to any interruption in the system.
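The upload saving at the origin is simple multiplication. The sketch below puts hypothetical numbers on it; the bitrate, destination count, and replicator count are assumptions chosen only to illustrate the scale of the difference.

```python
# Sketch of the distributed-unicast saving: instead of one stream per
# destination, the origin feeds a few replicators, which fan the stream out.
# The bitrate and site counts below are hypothetical.

STREAM_KBPS = 256                    # assumed per-stream bitrate

def origin_upload_kbps(direct_feeds: int) -> int:
    """Upload bandwidth the origin site needs for its direct connections."""
    return direct_feeds * STREAM_KBPS

destinations = 60                    # decoders across all regions (assumption)
replicators = 3                      # data-center replicator nodes (assumption)

flat = origin_upload_kbps(destinations)    # every decoder fed from the origin
tiered = origin_upload_kbps(replicators)   # origin feeds only the replicators
print(flat, "kbps vs", tiered, "kbps at the origin")
```

With these assumed numbers the origin's upload requirement drops by a factor of twenty, while the heavy fan-out happens inside data centers where bandwidth is guaranteed.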
Technological advances have also increased the reliability of nearly every component of a broadcast operation, which in turn has supported an uncomfortable trend toward cutting engineering staff. One way technology can help ameliorate the effects of those cuts is through centralized monitoring and management. Many television stations have outsourced program switching outside of local programming hours, and several large broadcast networks have nationwide monitoring systems that can alert central personnel to any failure, so they can remotely enable backup systems and direct their limited engineering resources toward the most effective solution.
Recently, a national TV network outsourced the program switching for the markets in which it owns the television stations outright. With that came a need to monitor the operation of, and exercise control over, 19 ATSC transmitters at 11 physical sites across 8 markets. The time frame for installation was very short, and the new control system could not interfere with the existing remote control solutions already installed at most of the sites. All of the information and controls were to be made available to a central Master Control room in Atlanta. To meet these requirements, the solution employs the Simple Network Management Protocol (SNMP), both for the transmitter connections and for sharing information with Master Control.
The SNMP protocol has been around for more than 30 years, and it provides a standard for communication between dissimilar networked devices. At its heart, SNMP is simply a way to retrieve and set data points in a remote system. These data points, called Objects, are defined by the maker of that system, usually at the time of manufacture. Each data object in a particular device can be located through a long string of numbers called the Object Identifier, or OID (Figure 5).
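The get/set model at the heart of SNMP can be illustrated with a toy agent: a device exposes Objects addressed by OID strings, and a manager simply reads and writes them. This is a conceptual sketch, not a real SNMP stack, and the OIDs under the `99999` enterprise branch are invented for illustration.

```python
# Toy model of the SNMP data model: a device exposes Objects addressed by
# OID strings, and a manager gets and sets them. Illustration only -- not a
# real SNMP implementation; the OIDs below are made up.

class ToyAgent:
    def __init__(self, objects):
        self.objects = dict(objects)   # OID string -> current value

    def get(self, oid):
        return self.objects[oid]

    def set(self, oid, value):
        if oid not in self.objects:
            raise KeyError(f"no such object: {oid}")
        self.objects[oid] = value

# Hypothetical transmitter agent with a power reading and an on/off control.
tx = ToyAgent({
    "1.3.6.1.4.1.99999.1.1": 10.4,   # forward power, kW (read)
    "1.3.6.1.4.1.99999.1.2": 1,      # transmitter on/off (read-write)
})
print(tx.get("1.3.6.1.4.1.99999.1.1"))
tx.set("1.3.6.1.4.1.99999.1.2", 0)   # remote "transmitter off" command
print(tx.get("1.3.6.1.4.1.99999.1.2"))
```

A real manager would issue GetRequest and SetRequest PDUs over UDP port 161 rather than method calls, but the mental model – named data points addressed by numeric OIDs – is the same.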
The implementation of this particular control system was greatly eased by the fact that every transmitter the customer owned was fully SNMP compliant. They were all from the same manufacturer, and 14 of the 19 transmitters were the exact same model. This reduced the amount of time needed to identify the OIDs of the desired data and control points. Once a basic SNMP script configuration had been established, it could be used nearly across the board.
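The reuse described above amounts to writing one OID map and stamping it out per site. The sketch below shows the idea; the base OID, object names, and hostnames are all hypothetical placeholders.

```python
# Sketch of why identical transmitter models cut configuration time: a single
# OID map, written once, is reused for every site. The enterprise OID, object
# names, and hostnames here are invented for illustration.

BASE = "1.3.6.1.4.1.99999"           # hypothetical enterprise OID branch
MODEL_OID_MAP = {                    # one map covers every identical transmitter
    "forward_power":   f"{BASE}.1.1",
    "reflected_power": f"{BASE}.1.2",
    "remote_on_off":   f"{BASE}.2.1",
}

def build_site_config(host: str) -> dict:
    """Pair one transmitter's address with the shared OID map."""
    return {"host": host, "oids": dict(MODEL_OID_MAP)}

sites = [build_site_config(h) for h in ("tx-atlanta-1", "tx-denver-1")]
print(sites[0]["oids"]["forward_power"])   # same OID string at every site
```

Only the handful of transmitters that were a different model would need their own OID map; the rest of the configuration is copy-and-rename.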
Figure 6 Generic controls for a two transmitter site
Figure 7 Central monitoring point for the 8 markets
The SNMP access also allowed the system to be compatible with the existing remote control systems at the sites. The existing systems were wired to the I/O interfaces of the transmitters in the traditional way, but the SNMP communications and control from the centralized system could take place simultaneously without interference.
And it all depends on IP connectivity across the public Internet. The customer has, of course, established a secure WAN between its markets, but the Internet is the backbone on which all of the communications depend. Now the central point in Atlanta can monitor and control all 19 transmitters across the country as needed.
Cost pressures will continue to drive broadcasters toward more centralized control and monitoring, and toward making the best use of the connectivity that has changed their industry in so many ways. Whether on the “front end” of program generation and distribution, or the “back end” of transmission and monitoring, network technology offers savings, reliability, and seemingly endless possibilities for broadcasters of today – and tomorrow.