What is Low Latency Live Video?

Streaming live video is one of the best ways to deliver engaging content to an incredibly wide audience. One aspect common to every live video platform is latency: the time difference between the recording of an event and the display of that event on a viewer’s screen. It’s important to note that even media TV giants experience latency. This article will focus on latency, its causes, and the different ways we can limit it to improve the overall viewer experience.

What is Latency?

Latency can be described, in simple terms, as the time between when something happens in front of a recording camera and when it is viewed on screen. For example, imagine a ball passing through the view of a camera that is set to record: the latency is the time it takes for the image of the ball to show up on the viewer’s screen.
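One way to make this concrete is to measure it. Below is a minimal sketch, assuming the broadcaster embeds a wall-clock capture timestamp into the stream (for example as timed metadata) and that the two clocks are roughly synchronized via NTP; the function name and metadata mechanism are illustrative, not part of any specific platform:

```ts
// Minimal sketch: estimating glass-to-glass latency from an embedded
// capture timestamp. Assumes each frame carries the wall-clock time at
// which it was captured, and that broadcaster and viewer clocks are
// roughly synchronized (e.g. via NTP).
function estimateLatencyMs(captureTimestampMs: number): number {
  const displayedAtMs = Date.now(); // the moment the frame is rendered
  return displayedAtMs - captureTimestampMs;
}

// A frame captured at 12:00:00.000 and rendered at 12:00:07.350 yields
// roughly 7350 ms of latency.
```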

High latency on live video streams is extremely detrimental to user experience, especially for interactive live streams. Imagine a streamer broadcasting live gameplay on a platform with a built-in chat room, but with a video latency of around 20 seconds. Viewers will ask questions and engage with the streamer, but they will be reacting to what he or she was doing 20 seconds ago, so context is lost and user satisfaction drops. Even if the streamer manages to recall the question’s context, the answer would still take another 20 seconds to reach the viewer.

What Causes It?

All online media architecture in use today is more complex than it might seem at first glance, and every component of the recording-to-screen pipeline can add to overall latency. Let’s take a look at the biggest sources of latency in live video:

Packaging & Encoding – The amount of latency incurred is heavily dependent on the desired output signal quality and the configuration used. Different streaming protocols can also increase latency by sending full chunks of media only after they have been completely ingested.

First Mile Upload – Getting your media to the ingest server or content delivery network is constrained by the network available at the place and time of your live stream. For example, your upload will incur significantly more latency over a mobile data network than over a wired Ethernet connection.

Content Delivery Networks – If you want to deliver your stream to a wide audience, you will have to use a CDN. Using one, however, can also increase latency, because your video stream has to propagate between the CDN’s caches before it reaches viewers.

Last Mile Delivery – Your viewer’s network connection can also increase latency. It depends on whether the user is accessing your content over a mobile connection, Wi-Fi, or Ethernet. Geography matters too: if the user is far from the closest CDN endpoint, latency will increase.

Player Buffer – All video players buffer media to keep playback smooth. Media specifications often dictate buffer sizes, but some players can be tuned for low-latency streaming.
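As an example of such tuning, here is a minimal sketch using the open-source hls.js player in the browser; the specific values are illustrative assumptions that trade rebuffering resilience for lower latency, and should be tested against your own streams:

```ts
import Hls from "hls.js";

// Minimal sketch: shrinking the player buffer with hls.js to trade
// rebuffering resilience for lower latency. Values are illustrative.
const video = document.querySelector<HTMLVideoElement>("video")!;

if (Hls.isSupported()) {
  const hls = new Hls({
    lowLatencyMode: true,     // use low-latency HLS parts when available
    liveSyncDurationCount: 2, // play ~2 segments behind the live edge
    maxBufferLength: 10,      // buffer at most ~10 s of media ahead
    backBufferLength: 30,     // keep only ~30 s of already-played media
  });
  hls.loadSource("https://example.com/live/stream.m3u8"); // placeholder URL
  hls.attachMedia(video);
}
```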

Quality vs Scalability vs Latency

When trying to reduce overall latency, it’s also important to take a close look at the configuration of every component in the recording-to-screen pipeline. Too often, users change these settings at random, which only adds more stream latency. There are two factors to weigh when changing these configurations, namely scale and quality:

  • Scale – Older streaming protocols were limited in terms of scalability. RTMP-based delivery would often clog up under a massive increase in viewers and data load. This is why newer protocols based on HTTP have emerged, such as HTTP Live Streaming (HLS). Their main downside, however, is that they increase latency.
  • Quality – Video quality can also have a detrimental effect on latency. High-quality video requires more bandwidth due to higher frame rates and larger resolutions; the sketch after this list shows how quickly that bandwidth grows.
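To make the bandwidth cost of quality concrete, a common rule of thumb estimates the required bitrate from resolution, frame rate, and a compression-dependent bits-per-pixel factor. The 0.1 bits/pixel used below is an assumed ballpark for an H.264 encode, not a universal constant:

```ts
// Rule-of-thumb bitrate estimate: pixels per second multiplied by a
// bits-per-pixel factor that depends on the codec and the content.
// 0.1 bits/pixel is an assumed ballpark for H.264.
function estimateBitrateMbps(
  width: number,
  height: number,
  fps: number,
  bitsPerPixel = 0.1
): number {
  return (width * height * fps * bitsPerPixel) / 1_000_000;
}

console.log(estimateBitrateMbps(1280, 720, 30).toFixed(1));  // ~2.8 Mbps
console.log(estimateBitrateMbps(1920, 1080, 60).toFixed(1)); // ~12.4 Mbps
```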

Glass-To-Glass vs Switch vs Join Latency

We already gave an example of measuring latency between the recording camera and the viewer’s screen by throwing a ball into the camera’s view. We also mentioned the recording-to-screen pipeline, which is also called “glass-to-glass”, as in the glass of the camera lens and the glass of the viewer’s screen – hence the term glass-to-glass latency. Now let’s take a look at two other types, namely switch and join latency. While these are not latency in the strict sense, they are perceived as such by the end user, which means they can have a detrimental effect on the overall user experience.

  • Join – this type refers to the time needed to load the first frame of a live stream, i.e. the loading time when joining a new stream. It is most often caused by network delays, such as the number of network requests needed before playback can begin. Media players offer configuration options such as starting playback as soon as the first data arrives, or waiting until a larger chunk of data has been received from the server – the latter results in higher join latency but can lower the overall glass-to-glass latency.
  • Switch – this type refers to the time needed to switch between different stream channels. Switch latency depends mostly on the structure of the streaming protocol and the format used for the group of pictures, or GOP. Playback can only start at the beginning of a new GOP, which is at a keyframe. Newer configurations try to keep the number of keyframes low and the groups of pictures large, since this provides the highest quality at the lowest bitrate – at the cost of longer switch times, as the sketch after this list shows.
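Here is a minimal sketch of that trade-off: with keyframes evenly spaced, a viewer who switches channels at a random moment waits half a GOP on average, and a full GOP in the worst case, before decodable video is available. The 4-second keyframe interval is an assumption for illustration:

```ts
// Switch-latency sketch: playback can only begin at a keyframe, so the
// wait depends on where within the GOP the switch request lands.
function switchWaitSec(keyframeIntervalSec: number) {
  return {
    averageSec: keyframeIntervalSec / 2, // uniformly random switch moment
    worstCaseSec: keyframeIntervalSec,   // switched just after a keyframe
  };
}

// An assumed 4-second keyframe interval means a 2 s average and a 4 s
// worst-case wait before the new channel can start rendering.
console.log(switchWaitSec(4)); // { averageSec: 2, worstCaseSec: 4 }
```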

Current Trends

Optimizing is extremely important, but the question is: what figure should you shoot for? The answer depends on the type of stream and the type of business you are running. Most optimization tricks will lower latency to somewhere between 20 and 30 seconds, and can be applied at a minimal cost in time and money. If your business requires ultra-low or real-time latency, WebRTC or RTMP solutions can be applied – but keep in mind that these come at a relatively high cost as quality and scale increase.

WpStream

The WpStream platform has an average latency of around 30 seconds, which is well suited for broadcasting content that is not time-sensitive, like live concerts. The platform uses the HTTP Live Streaming protocol, the most widely used in the industry.

We offer a low-latency solution (2-3 seconds) to select customers. It is still experimental and comes with a few restrictions; however, the feedback so far is positive. Please get in touch with us if you’d like to be among the early adopters of this technology.

Beatrice Tabultoc

Beatrice is the digital marketing go-to at WpStream. She manages all things social media, content creation, and copywriting.

Start your free trial with WpStream today and experience the ability to broadcast live events, set up Pay-Per-View videos, and diversify the way you do your business.