How to Improve Data Center and Network Latency to Seamlessly Stream Videos
End users have come to expect seamless content delivery, to the point that any amount of network latency causes major issues. And as the world moves away from serial digital interface (SDI) to real-time video over IP, the danger of network latency grows. It’s vital for businesses to create a seamless streaming experience to sustain and grow their user base.
Currently, SDI is the standard for digital video transmission over coaxial cable. Video over IP is a technique that uses existing standard video codecs to reduce content to an encapsulated bitstream that can be transported as a stream of IP packets over an IP network. While the resulting video stream is faster, it’s also time-sensitive: packet loss and network delays can break up the video, disrupting the user experience and hurting the business.
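The encapsulation step described above can be sketched in a few lines. This is an illustrative toy, not a real encoder or RTP stack: the 1,400-byte payload size and the `packetize` helper are assumptions, chosen only to show how a codec's bitstream is split into sequence-numbered packets that a receiver can use to detect loss or reordering.

```python
# Toy sketch (assumed names, not a real protocol implementation):
# chunk an encoded bitstream into fixed-size payloads, mimicking how
# video over IP encapsulates codec output into IP packets.

PAYLOAD_SIZE = 1400  # bytes per packet payload; assumed to fit a typical Ethernet MTU

def packetize(bitstream: bytes, payload_size: int = PAYLOAD_SIZE) -> list[dict]:
    """Split an encoded bitstream into sequence-numbered packets."""
    packets = []
    for seq, offset in enumerate(range(0, len(bitstream), payload_size)):
        packets.append({
            "seq": seq,  # receivers use this to detect loss and reordering
            "payload": bitstream[offset:offset + payload_size],
        })
    return packets

packets = packetize(b"\x00" * 3000)
print(len(packets))  # prints 3 (1400 + 1400 + 200 bytes)
```

In a real system each packet would also carry timestamps so the receiver can reconstruct timing; losing even a few packets of a time-sensitive stream is what breaks up the picture.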
While bandwidth availability and traffic engineering tactics are still helpful in reducing network latency, there are other, less obvious factors that are equally vital to address, such as lags in storage and servers. Shoring up support for data center infrastructure can go a long way toward resolving video content latency issues, which is crucial for live video broadcasts.
Finding Server and Storage Latency
Broadly speaking, latency is “a measure of the time required for a subsystem or a component in that subsystem to process a single storage transaction or data request,” explains George Crump in Storage Switzerland. “It’s akin to the propagation delay of a signal through a discrete component and is typically a function of hardware.”
Therefore, latency metrics vary between infrastructure components. For example, in storage subsystems, “latency refers to how long it takes for a single data request to be received and the right data found and accessed from the storage media,” Crump continues. Read latency in a disk drive refers to “the time required for the controller to find the proper data blocks and place the heads over those blocks […] to begin the transfer process,” he explains. The read latency in a flash device “includes the time to navigate through the various network connectivity” in addition to the “time within the flash subsystem to find the required data blocks and prepare to transfer data.”
So, latency is a function of both a physical property and a signal transport property. Addressing just one of those properties won’t resolve all latency problems.
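As a rough illustration of the "single data request" latency Crump describes, the sketch below times one block read from a file. It is a toy measurement, not a storage benchmark: the operating system's page cache will absorb repeated reads, so real tests use dedicated tools (such as fio) with direct I/O to reach the storage media itself. The `read_latency` helper and the 4 KiB block size are assumptions for illustration.

```python
import os
import tempfile
import time

def read_latency(path: str, block_size: int = 4096) -> float:
    """Time a single read request (open, read one block); returns milliseconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        f.read(block_size)
    return (time.perf_counter() - start) * 1000.0

# Create a small test file and measure one read against it.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(1 << 20))  # 1 MiB of random data
    path = tmp.name

print(f"single-read latency: {read_latency(path):.3f} ms")
os.remove(path)
```

Even this crude number makes the point above concrete: part of what you measure is the physical device, and part is everything the request travels through on the way there.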
Develop a Streaming Strategy
Servers have several latency issues, ranging from too little memory and ill-managed caches to slow processor cores and signal throughput bottlenecks. Be sure to check each for proper configuration to optimize performance.
“Multicore servers available today have plenty of CPU power; yet getting that power to or from the network may be an issue,” writes Karl Paulsen in RadioWorld. “Network interface cards and host bus adaptors can be a server bottleneck if not properly configured, specified or implemented.”
Storage systems also have a series of performance issues that affect latency, ranging from rotational delay to access and response times. Perhaps the easiest way to deal with these issues is to choose storage systems that demonstrably meet your precise performance requirements.
“For example, a NAS system designed specifically for media and entertainment will employ characteristics that can’t be met by other non-M&E storage solutions,” writes Paulsen.
Another way to guard against excessive latency in your enterprise network is to test storage systems against your specific workloads. Look for value adds in storage as well: the upgrades vendors make to continuously improve performance can help you create an effective streaming strategy that carries the business seamlessly into the future.
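A minimal version of such a workload test might compare a sequential read pattern (streaming-like) against random reads of the same blocks and report latency percentiles. This is a hedged sketch with assumed block and file sizes, and the page cache will flatter the numbers; a production evaluation would use a purpose-built tool such as fio against the actual storage system.

```python
import os
import random
import statistics
import tempfile
import time

BLOCK = 64 * 1024  # 64 KiB reads, a streaming-like block size (assumed)

def sample_latencies(path: str, offsets: list[int]) -> list[float]:
    """Read one block at each offset and record per-read latency in ms."""
    latencies = []
    with open(path, "rb") as f:
        for off in offsets:
            start = time.perf_counter()
            f.seek(off)
            f.read(BLOCK)
            latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(8 * 1024 * 1024))  # 8 MiB test file
    path = tmp.name

n_blocks = (8 * 1024 * 1024) // BLOCK
sequential = [i * BLOCK for i in range(n_blocks)]
rand = random.sample(sequential, len(sequential))  # same blocks, shuffled order

for name, offsets in [("sequential", sequential), ("random", rand)]:
    lats = sorted(sample_latencies(path, offsets))
    p50 = statistics.median(lats)
    p99 = lats[int(len(lats) * 0.99) - 1]  # approximate 99th percentile
    print(f"{name}: p50={p50:.3f} ms  p99={p99:.3f} ms")

os.remove(path)
```

Tail latency (p99) matters more than the median for live video: one slow read per hundred is one visible stutter per hundred frames fetched.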