FAQ

How Does LibreQoS Fit Into My Network Design?

What Equipment Do I Need?

Multi-WAN

LibreQoS and its Long Term Stats (LTS) service are designed to support multiple shaper boxes.

This works best when traffic going in/out of each WAN is predictable – for example, if a client from Tower A always connects through WAN 1 and a client from Tower B always connects through WAN 2, this will work well. However, if traffic from a client may routinely transit either WAN, or both at the same time (ECMP), the client could receive twice their allotted bandwidth.

ISPs have successfully deployed LibreQoS + LTS with 5+ shaper boxes on one network with multiple WANs. LibreQoS LTS gathers statistics from all shaper boxes in one place so that you can see a client’s throughput and RTT regardless of which WAN was used.

Latency

What is Speed? - Bandwidth vs Latency

Bandwidth is the maximum amount of data that can be transmitted over an internet connection, usually measured in megabits per second (Mbps).

Latency is delay, and can be thought of as the time it takes a message sent from your device to reach its destination, plus the time it takes to receive a response back. Latency is measured in milliseconds (ms).
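As a rough illustration, round-trip latency can be measured from any machine with the standard ping tool (the hostname below is just a placeholder):

```shell
# Send five ICMP echo requests and report each round-trip time in ms.
# Replace example.com with the host you want to measure against.
ping -c 5 example.com

# The final summary line reports min/avg/max round-trip time, e.g.:
# rtt min/avg/max/mdev = 11.2/12.0/13.4/0.8 ms
```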

Bandwidth affects how long it takes to download or upload large static files (game release downloads, video editing files). For most other daily internet tasks (video streaming, VoIP, conference calling, gaming), performance comes down to latency, and latency under load (bufferbloat).

Bandwidth does not impact internet performance as much as advertising would lead us to believe. Here is an excerpt from the recent research paper Understanding the Metrics of Internet Broadband Access: How Much Is Enough?

Above about 20 mb/s, adding more speed does not improve the load time. The limit on the load time is the latency to the servers providing the elements of the web page.

The world’s largest Content Delivery Network, Cloudflare, has confirmed observing this pattern in the data flowing across its worldwide network.

Internet researchers with the Broadband Internet Technical Advisory Group have also come to this same conclusion in their research paper, Latency Explained.

Web page load time is largely determined not by throughput, but by two other factors: how long a network round-trip takes, and how many network round-trips are required.

The average household uses just 5 Mbps of bandwidth during peak usage hours, and only 2.5% of residential internet users use more than 32 Mbps during peak hours. While Internet Service Providers advertise plans in terms of bandwidth, they completely neglect latency – which determines our actual experience of the internet day-to-day.

The less delay that a network or application has, the more “responsive” a service will feel to an end user. The more delay (or lag), the worse it will feel.

Critically, however, reducing delay meaningfully improves all existing user applications.

This effect is seen most clearly with online gaming, which uses less than 1 Mbps in each direction, but demands very stable, low latency. Recent consumer research confirms that latency is more important than bandwidth for online gaming.

Overall, the consistently reinforced takeaway is that latency has now clearly overtaken broadband speed as the focus area for network providers seeking to provide – and guarantee and commercially benefit from – optimum experience in both online multiplayer and cloud gaming.

Latency Under Load / Bufferbloat

Many of us take it for granted that it is “normal” for a video conference call to stutter or disconnect when someone else on the same home network is watching a 4K video. That is actually a symptom of Bufferbloat – the undesirable latency that results from network equipment buffering too much data. Connections with high Bufferbloat have lower perceived responsiveness. Cable and DSL internet services suffer from significant Bufferbloat, which can make these connections feel slow even when speed tests show normal bandwidth (Mbps).

LibreQoS keeps customers’ latency and bufferbloat as low as possible, providing a more streamlined internet experience. With LibreQoS, your WiFi calls, Zoom calls, and online games are given fair priority, even when large file downloads or other so-called “bulk” tasks are occurring in the background. This allows ISPs to provide a more responsive and “snappy” internet experience.

In recent years, internet researchers have come to find that reducing Bufferbloat is crucial for improving internet performance.

Queue management techniques such as Active Queue Management are available that will reduce bufferbloat in network bottleneck equipment by triggering applications to reduce the amount of queuing delay that they cause. This is not theoretical; AQM has been proven to work at scale in DOCSIS and other networks.

In addition, it is also important to have a consistently responsive service where delay stays consistently low no matter how heavily utilized a user’s Internet connection may be and no matter what mix of applications are being used. This might seem like an unreasonable demand — expecting a network to be able to provide consistently low delay even under heavy load — but, as this report shows, this is in fact possible with today’s technology.

Find out if you experience Bufferbloat on your home internet connection using the Waveform Bufferbloat Test.

Why do we need CAKE and fq_codel?

Internet access bandwidth has increased continually, somewhat like Moore’s Law. This is good and will likely continue. Bandwidth demand also increases to fill the supply. Consequently, a bottleneck may exist somewhere in a packet’s path across the Internet. In this case, two issues may emerge: 1) latency increases for some flows due to heavy demand from other flows, and 2) even a single flow can experience excessive latency due to aspects of typical TCP behavior, leading to the classic bufferbloat dysfunction. After much effort, these problems have been studied and good solutions found, using certain queueing policies in routers and switches, loosely referred to as “fq_codel” and “CAKE”. These have been integrated into Linux and deployed in some systems, but many routers, switches, RF access points, and base stations still lack these improved queueing policies.
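On a Linux router, these queueing policies can be enabled directly. A minimal sketch (the interface name eth0 and the bandwidth figure are placeholders for your own link):

```shell
# Make fq_codel the default qdisc for interfaces (Linux 3.12+).
sysctl -w net.core.default_qdisc=fq_codel

# Or attach CAKE as the root qdisc on a single interface, shaping
# to slightly below the link's real capacity so the bottleneck
# queue sits where CAKE can manage it.
tc qdisc replace dev eth0 root cake bandwidth 95Mbit
```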

How do CAKE and fq_codel work?

CAKE and fq_codel are hybrid packet scheduler and Active Queue Management (AQM) algorithms. LibreQoS uses a Hierarchical Token Bucket (HTB) to direct each customer’s traffic into its own queue, where it is then shaped using either CAKE or fq_codel. Each customer’s bandwidth ceiling is enforced by the HTB, according to the customer’s allocated plan bandwidth as well as the available capacity of the customer’s respective Access Point and Site.
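Conceptually, the HTB-plus-CAKE arrangement looks like the following tc sketch. The interface name, class handles, and rates here are illustrative assumptions, not LibreQoS’s actual generated commands:

```shell
# Root HTB qdisc; unclassified traffic falls into a default class.
tc qdisc add dev eth1 root handle 1: htb default 999

# Parent class sized to the Site / Access Point capacity.
tc class add dev eth1 parent 1: classid 1:1 htb rate 1Gbit ceil 1Gbit

# One class per customer: 'rate' is the guaranteed minimum,
# 'ceil' is the customer's plan bandwidth ceiling.
tc class add dev eth1 parent 1:1 classid 1:10 htb rate 25Mbit ceil 100Mbit

# Attach CAKE (or fq_codel) as the leaf qdisc so each customer's
# queue gets flow fairness and AQM; HTB already does the shaping.
tc qdisc add dev eth1 parent 1:10 cake besteffort
```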

The difference is dramatic: the chart below shows the ping times during a Realtime Response Under Load (RRUL) test before and after enabling LibreQoS. The RRUL test sends full-rate traffic in both directions, then measures latency during the transfer. Note that the latency drops from ~20 ms (green, no LibreQoS) to well under 1 ms (brown, using LibreQoS).

The impact of fq_codel on a 3000 Mbps connection vs hard rate limiting: a 30x latency reduction.

“FQ_Codel provides great isolation… if you've got low-rate videoconferencing and low rate web traffic they never get dropped. A lot of issues with IW10 go away, because all the other traffic sees is the front of the queue. You don't know how big its window is, but you don't care because you are not affected by it. FQ_Codel increases utilization across your entire networking fabric, especially for bidirectional traffic… If we're sticking code into boxes to deploy codel, don't do that. Deploy fq_codel. It's just an across the board win.”
Van Jacobson
IETF 84 Talk

Other Ways to Solve Bufferbloat