
Cisco Innovators: Bufferbloat

by david.nunes


Cisco Distinguished Engineer looks for a broad solution to a pressing network performance issue.

May 05, 2014

You’re downloading a big file in your home office when suddenly, for no apparent reason, the process stalls. At the same instant there is a howl of protest from the kids, who are playing video games online in the other room.

Welcome to bufferbloat.

The term bufferbloat was coined in late 2010 to describe the network quality problems that arise when large amounts of data sit too long in the buffers of routers and switches.

Cisco's Engineer Rong Pan

At Cisco, Distinguished Engineer Rong Pan belongs to an elite fellowship of mathematicians and engineers at organizations around the world who are working on the bufferbloat problem. Rong leads Cisco’s Bufferbloat project, working alongside her teammates in Cisco’s Research and Advanced Development organization.

Bufferbloat’s Big Break

Cisco's Engineer Rong Pan with team in datacenter

In October 2013, the cable industry selected the algorithm Rong’s team developed as the “default on” active queue management for the DOCSIS 3.1 standard for cable modems. This means hundreds of millions of families worldwide will be impacted by the team’s work. The algorithm, called PIE, is a patented Cisco technology.

The hope is that the Internet Engineering Task Force (IETF) will adopt the algorithm as well, which will impact the whole networking industry.
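At its core, PIE (Proportional Integral controller Enhanced) steers a packet-drop probability from the measured queuing delay rather than from queue length. The sketch below is a simplified, illustrative version of that control law; the constants match the defaults later published in IETF RFC 8033, but the production algorithm adds auto-tuning, a burst allowance, and other refinements not shown here.

```python
# Simplified sketch of a PIE-style drop-probability update (illustrative only).

TARGET_DELAY = 0.015   # target queuing delay in seconds (RFC 8033 default)
ALPHA = 0.125          # weight on the current delay error
BETA = 1.25            # weight on the delay trend (is the queue growing?)

def update_drop_probability(p, qdelay, qdelay_old):
    """Periodically nudge the drop probability toward the delay target."""
    p += ALPHA * (qdelay - TARGET_DELAY) + BETA * (qdelay - qdelay_old)
    return min(max(p, 0.0), 1.0)   # clamp to a valid probability

# A queue sitting above the 15 ms target pushes the drop probability up;
# a queue draining back toward the target pulls it down again.
p, prev = 0.0, 0.0
for measured in (0.040, 0.030, 0.020, 0.010):   # measured delays in seconds
    p = update_drop_probability(p, measured, prev)
    prev = measured
```

Dropping a small, controlled fraction of packets early keeps TCP senders backing off before the buffer fills, which is exactly the self-regulation Rong describes below.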

“I’m just very excited to know that almost every family will use this. We can see we are having a positive impact on people’s lives,” Rong shared.

What’s Behind Bufferbloat

TCP, the protocol that governs how information moves through the network, is designed to regulate itself through occasional packet drops. When data arrives at a router or switch faster than the device can forward it, it smooths the situation out by temporarily holding some of that data in a buffer. If the buffer fills up, a few packets are dropped.
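The simplest version of this behavior is tail drop: a fixed-size FIFO absorbs bursts, and once it is full, new arrivals are discarded. The sketch below is a minimal illustration of that idea (the function name and capacity are made up for the example, not part of any Cisco product):

```python
from collections import deque

def tail_drop_enqueue(buffer, packet, capacity):
    """Enqueue a packet, discarding it if the buffer is full (tail drop)."""
    if len(buffer) >= capacity:
        return False   # dropped -- TCP senders read the loss as "slow down"
    buffer.append(packet)
    return True

buffer = deque()
results = [tail_drop_enqueue(buffer, f"pkt{i}", capacity=4) for i in range(6)]
# The first 4 packets are buffered; the last 2 are dropped.
```

Those occasional drops are the feedback signal TCP depends on; a buffer so large that it never fills removes the signal entirely.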


“Random drops are actually healthy for TCP, so it knows when to send more data and when to send less,” says Rong. “Routers and switches need to have buffers for network efficiency – but we don’t want to buffer too much or else voice calls can get cut, video can get clipped, and TCP might even time out. It’s a delicate balance.”

Bufferbloat is urgent because network traffic keeps growing, driven by video, mobile devices, sensor networks, grid computing, virtualization, and cloud computing. By 2016, according to the Cisco Visual Networking Index forecast, global IP networks will transmit 12.5 petabytes every five minutes.

Because computer memory has become so plentiful and inexpensive, the tendency across the industry has been to throw more memory at the buffering problem. Ironically, the bigger the buffer, the longer the potential delay, and the worse the network performance.

Bufferbloat can manifest as jitter in a video, awkward lags in voice calls – or worse. Online retailers lose money when networks sputter, and many applications – such as telemedicine, safety and security systems, remotely operated heavy equipment, emergency response, stock market transactions, and live music performances – can tolerate very little latency.

The improved algorithm will be useful across all Cisco product lines and address a pressing network dilemma worldwide – which is why bufferbloat is one of the first two projects funded by Cisco’s Technology Fund.

“The problem is urgent and important and we have to address it,” Rong says.

Contributors: Mary Barnsdale and Karen Snell
