
The Strengths and Weaknesses of HLS in Real-Time Communications

Posted by Patrick on Apr 24, 2015 12:23:07 PM

Part one of a two-part series...

When considering in-browser video solutions, HLS and WebRTC are the two main contenders among modern standards-based options. Each technology has a set of advantages and limitations that shape its optimal use cases.

Part 1: The Strengths and Weaknesses of HLS in Real-Time Communications

HTTP Live Streaming (HLS) is a client-server protocol originally designed by Apple to deliver streaming video to its mobile devices. HLS delivers its content over HTTP, which makes it relatively easy to implement: the stream is represented as a series of ordinary file downloads (a playlist plus short media segments). Because HLS is HTTP-based, video can be securely transmitted to end clients over HTTPS. It also makes one-to-many delivery straightforward, since the same segments can be cached and served to many clients by standard web servers and CDNs.
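To make the "series of file downloads" point concrete, here is a minimal sketch of an HLS client loop in Python using only the standard library. The playlist URL is hypothetical, and a real player would also parse tags such as #EXT-X-TARGETDURATION and hand segments to a decoder rather than just collecting them.

```python
import time
import urllib.request
from urllib.parse import urljoin

PLAYLIST_URL = "https://example.com/live/stream.m3u8"  # hypothetical live stream


def fetch(url: str) -> bytes:
    """Plain HTTP GET -- every piece of an HLS stream is an ordinary file download."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()


def segment_uris(playlist_text: str) -> list:
    """Media playlists are plain text: non-comment lines are segment URIs."""
    return [line for line in playlist_text.splitlines()
            if line and not line.startswith("#")]


downloaded = set()
while True:
    playlist = fetch(PLAYLIST_URL).decode("utf-8")
    for uri in segment_uris(playlist):
        if uri not in downloaded:
            segment = fetch(urljoin(PLAYLIST_URL, uri))  # e.g. an MPEG-TS chunk
            downloaded.add(uri)
            # a real player would pass `segment` to a demuxer/decoder here
    time.sleep(6)  # poll roughly once per target segment duration
```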

Since it runs over TCP, HLS provides “reliable” delivery of video content: when packets carrying a chunk of video are lost, TCP retransmits them until the client receives the data or the connection times out. Unfortunately, this reliability is a double-edged sword, because retransmission stalls add delay on top of the latency already introduced by segmenting and buffering the stream. For real-time communications, this results in a real-world lag that ranges from hundreds of milliseconds to 30+ seconds. While the adaptive nature of HLS does allow video quality to degrade gracefully in poor network conditions, it does not mitigate these latencies.
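As a back-of-envelope illustration of where that latency comes from, the sketch below adds up the main contributors. Every number in it is an assumption chosen to show the shape of the calculation, not a figure from the HLS specification or from measurement.

```python
# Illustrative numbers only; actual values depend on encoder, server, and player settings.
segment_duration_s = 6.0       # a common HLS target segment duration
player_buffer_segments = 3     # players typically buffer several segments before starting
encode_and_publish_s = 1.0     # time to finish a segment and update the playlist
retransmission_stall_s = 0.5   # extra delay when TCP retransmits lost packets

glass_to_glass_latency_s = (
    encode_and_publish_s
    + segment_duration_s * player_buffer_segments
    + retransmission_stall_s
)
print(f"~{glass_to_glass_latency_s:.1f} s behind live")  # ~19.5 s with these assumptions
```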

HLS adapts to client bandwidth constraints by offering, on the server side, multiple renditions of the stream at different video and audio qualities. On the client side, the player requests the highest-quality rendition it believes it can sustain, based on its measured download throughput. Because the video and audio must be encoded at several quality levels, this adds encoding work and latency, either on the encoding client or at the server. That latency can be reduced by offering fewer quality options, although doing so effectively cripples the adaptive nature of HLS.
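A hedged sketch of the client-side choice described above: the variant list and the selection heuristic here are assumptions for illustration (real players use their own, more sophisticated logic), but the idea is the same, pick the highest advertised bitrate that fits under the measured throughput.

```python
# Hypothetical variants from a master playlist: (advertised bandwidth in bits/s, playlist URI)
variants = [
    (400_000,   "low/index.m3u8"),
    (1_200_000, "mid/index.m3u8"),
    (3_500_000, "high/index.m3u8"),
]


def pick_variant(measured_bps: float, headroom: float = 0.8):
    """Choose the best variant whose advertised bandwidth fits under the measured
    throughput (with some safety headroom); fall back to the lowest variant otherwise."""
    usable = measured_bps * headroom
    candidates = [v for v in variants if v[0] <= usable]
    return max(candidates) if candidates else min(variants)


print(pick_variant(2_000_000))  # -> (1200000, 'mid/index.m3u8')
```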

HLS uses H.264 as its video codec and either MP3 or AAC as its audio codec. Because HLS is an older standard, chip manufacturers have had time to implement H.264 and AAC encoding and decoding in silicon, which lets HLS deliver relatively high-quality audio and video with less CPU and battery impact. While H.264 can be appropriate for real-time communications, AAC and MP3 are not designed for it: [they take more than 10x as long to encode a given audio sample](http://www.opus-codec.org/comparison/) as real-time codecs like Opus. Another disadvantage of AAC is that, at the same bitrate, Opus performs better [in subjective quality tests](http://people.xiph.org/~greg/opus/ha2011/).


Overall, HLS excels where guaranteed delivery of video and one-to-many distribution of a single stream are required. In practice, that means one-way live-event broadcasts: sporting events, concerts, keynotes, online education, and the like. Due to the latency inherent in its design, HLS is not appropriate for real-time communications. This is doubly true for bi-directional communication, where the latencies are effectively doubled.
