Rate Limitation vs. Bandwidth Limitation

What is the difference between "rate limitation" and "reduced bandwidth link emulation"?

Rate limitation is the deliberate restriction of the data transfer rate in a network. It controls the speed at which data is transmitted, ensuring it does not exceed a predefined rate. Reduced bandwidth link emulation, on the other hand, simulates a network connection with a lower bandwidth than the actual network environment; it replicates the conditions of a slower or congested network by artificially limiting the available bandwidth for data transmission. In short, rate limitation focuses on controlling the speed of data transfer, while reduced bandwidth link emulation simulates a constrained network environment with lower overall capacity.

It is important to test and validate the production-readiness of infrastructure and applications before deployment. IT staff and network administrators can reduce roll-out failures by learning in advance how applications will perform under a variety of network conditions.

When emulating network conditions, these two terms are often used interchangeably, but they are actually very different.

[Figure: Rate Limitation vs. Reduced Bandwidth Link Emulation]

Reduced bandwidth link emulation

Reduced Bandwidth Link Emulation is a technique used in a lab to evaluate how a network application behaves when its packets traverse a slow link. Slow links are common on the internet. Services such as ADSL, Cable TV, and satellite links can be bottlenecks that reduce application performance and user satisfaction. Some links that normally have adequate bandwidth, most particularly some kinds of wireless links, can degrade under certain conditions to well below their nominal bandwidth.

Reduced Bandwidth Link Emulation is a piece of a broader methodology called "impairment testing". In the full realm of impairment testing, reduced bandwidth link emulation is used in conjunction with tools that introduce packet loss, packet delay and jitter, packet duplication, packet reordering, and packet alteration.

Rate limitation

Rate Limitation is a technique used to manage the utilization of network resources. For example, bulk file transfers may be limited to make way for time critical voice traffic.

Rate Limitation is widely practiced by service providers, and rate limitation software is also found in many consumer-grade devices such as cable and DSL modems.

Rate limited links can often have a significant impact on application performance and user satisfaction.

Reduced bandwidth link emulation and rate limitation: similar but far from identical

Reduced Bandwidth Link Emulation and Rate Limitation are both processes applied to the transmission side of a link. Each decides whether a packet is transmitted immediately when it becomes ready or is held back (and consequently delayed).

The difference between Rate Limitation and Reduced Bandwidth Link Emulation lies in the choice of which packets should be held back and for how long.

Both Rate Limitation and Reduced Bandwidth Link Emulation involve a queue of packets awaiting transmission via network interface hardware and a mechanism that decides when to hand the leading packet in the queue over to the transmission hardware.
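
As a rough Python sketch (the class and method names here are invented for illustration and are not taken from any particular product), that shared structure might look like this:

    from collections import deque

    class TransmitQueue:
        """Sketch of the shared structure: a FIFO of waiting packets plus a
        pluggable policy that decides when the leading packet may be handed
        to the transmission hardware. All names are illustrative."""

        def __init__(self, release_policy):
            self.packets = deque()               # packets awaiting transmission
            self.release_policy = release_policy

        def enqueue(self, packet):
            self.packets.append(packet)

        def service(self, now):
            """Hand over leading packets for as long as the policy allows."""
            while self.packets and self.release_policy.may_transmit(now, self.packets[0]):
                packet = self.packets.popleft()
                self.release_policy.note_transmission(now, packet)
                yield packet

The two techniques differ only in the release policy plugged into such a queue, which is the subject of the rest of this paper.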

Metered on-ramps and narrow bridges

To make it easier, consider two highway situations that provide a helpful visual image.

The first image is a highway on-ramp equipped with traffic metering lights. Automobiles arrive at the ramp, form a queue to wait their turn, and a red-green traffic light controls when the leading automobile may leave the queue and enter the highway. (For simplicity, consider all vehicles as equal and do not consider the number of occupants that each may be carrying.)

The second image is a one-lane, one-way bridge. Automobiles arrive at the bridge, form a queue to wait their turn, and the leading automobile leaves the queue and crosses when the bridge becomes unoccupied. The bridge is narrow, so automobiles must drive slowly, and it is very long, so crossing takes quite a while. (Most modern network pathways are full duplex; however, we will not burden this analogy with issues of traffic flowing in two directions contending for a single-lane bridge.)

Rate limitation versus reduced bandwidth link emulation: synthetic vs. real world

Returning to our topic: What is the difference between Rate Limitation and Reduced Bandwidth Link Emulation?

Rate Limitation is a process that manages packet transmission so that as measured over a period of time the average transmission rate does not exceed some defined limit. The critical point is that Rate Limitation acts over a time window and allows unused bandwidth early in the window to be accumulated and applied to packets that become ready to transmit later in the time window.

Reduced Bandwidth Link Emulation is a "real-world" process in which unused bandwidth is considered lost. Thus if a packet becomes ready for transmission after a quiet period the bandwidth of that prior period is not magically available.

Rate limitation: the metered highway on-ramp

Imagine that packets are vehicles and that our goal is to manage the rate at which vehicles enter the highway. Let's say that we want no more than 12 automobiles per minute to enter the highway. (And let's also say that we are not trying to manage the spacing between automobiles, that we are happy allowing a sequence of closely spaced vehicles to enter the highway so long as the average over a minute does not exceed 12 vehicles.)

Imagine that the little computer controlling the traffic light keeps a count of how many vehicles have entered the highway over the preceding 60 seconds. (Keeping this tally is a bit complicated - the little computer has to maintain what is called a "sliding window" to view the count for the 60 seconds before "now" even as "now" advances with the ticking of the clock.)

The little computer runs a fairly simplistic program: If over the last 60 seconds fewer than 12 automobiles have entered the highway, then the light is set to green. If over the last 60 seconds the number of vehicles is 12 or more, then the light is set to red.
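
A minimal Python sketch of that program might look like the following (the 60-second window and 12-vehicle limit come from the analogy; everything else is illustrative):

    from collections import deque

    WINDOW_SECONDS = 60      # look back over the preceding minute
    MAX_PER_WINDOW = 12      # no more than 12 vehicles per minute

    class MeteringLight:
        """Sliding-window counter: green only while fewer than 12 vehicles
        have entered the highway during the preceding 60 seconds."""

        def __init__(self):
            self.entry_times = deque()   # timestamps of recent highway entries

        def is_green(self, now):
            # Slide the window forward: forget entries older than 60 seconds.
            while self.entry_times and now - self.entry_times[0] >= WINDOW_SECONDS:
                self.entry_times.popleft()
            return len(self.entry_times) < MAX_PER_WINDOW

        def vehicle_entered(self, now):
            self.entry_times.append(now)

    # Demonstration: a long queue of vehicles waits at the ramp for two minutes.
    light = MeteringLight()
    released = 0
    for second in range(120):
        if light.is_green(second):
            light.vehicle_entered(second)
            released += 1
    print(released)   # 24: an average of 12 vehicles per minute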

If you envision this on-ramp, you can see that after a period of quiet the light will be green and vehicles will zip onto the highway without waiting.

But as long as vehicles continue to arrive at the on-ramp faster than 12 per minute, the light will eventually go red and the vehicles will back up and wait.

The light will turn green after enough clock ticks so that the average drops below 12 vehicles per minute. If the arrival rate of vehicles is above 12 per minute then the on-ramp will enter a steady state in which a vehicle is released every five seconds.

If the arrival rate is high, then some vehicles may have to wait a very long time. If the arrival rate is low enough for a long enough period the queue will drain and the light will go green long enough for vehicles to enter the highway unvexed by a delay.

As you can see, vehicles that arrive after a quiet period receive an advantage because the computation of the average rate encompasses the time prior to the arrival of those vehicles.

Now let's look at Reduced Bandwidth Link Emulation, a situation where what happened before has very little bearing on what happens next.

Reduced bandwidth link emulation: the one-lane, one-way bridge

Reduced bandwidth link emulation recognizes that in the real world unused bandwidth does not accumulate for future use; unused bandwidth is forever lost.

Reduced Bandwidth Link Emulation also recognizes that on low bandwidth links a period of time is required to send the bits onto the link (serialization delay) and for the data to propagate from one end of the link to the other (transit delay).

No link is so fast that there is neither serialization nor transit delay.

Serialization delay depends on how many bits can be pushed onto a link in a given period of time; the higher the link bandwidth the more bits can be serialized per second. On local and high speed networks serialization delays can be quite short (a small number of microseconds) and for many purposes can be ignored.

But on low speed links, such as consumer DSL, Cable TV, or wireless (particularly degraded wireless), serialization delays can be significant. If, for example, a home wireless link degrades, as such links often do, into the few-megabits-per-second range, then it might take several milliseconds just to move a packet through a PC's wireless network interface (NIC). On a typical home ADSL or Cable TV upstream link (384 Kbits/second) the serialization delay of a packet can be on the order of 30 milliseconds.
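
That figure is easy to check. Assuming a full-size 1500-byte packet (an assumption made for illustration), the serialization delay is simply the packet size in bits divided by the link rate:

    def serialization_delay_ms(packet_bytes, link_bits_per_second):
        """Time needed to clock every bit of one packet onto the link."""
        return packet_bytes * 8 / link_bits_per_second * 1000

    # A 1500-byte packet on a 384 Kbit/s upstream link:
    print(serialization_delay_ms(1500, 384_000))         # about 31 ms
    # The same packet on a gigabit LAN:
    print(serialization_delay_ms(1500, 1_000_000_000))   # about 0.012 ms (12 microseconds)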

Transit delays are usually dependent on distance. At a minimum, bits must propagate. Even fiber optic links can't move bits faster than about 70% of the speed of light. This means that even on a direct fiber optic link it takes about 17 milliseconds for a bit to move from San Francisco to Boston. But the internet is not composed of direct links. Rather, it is a store-and-forward network in which packets are bounced hither and yon and suffer queuing delays (and additional serialization delays) as they pass from router to router to router. A typical propagation delay across the United States over the internet can easily exceed 100 milliseconds.
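
The propagation component can be sketched the same way. The path length below is an assumed figure chosen only to illustrate the calculation; real routed fiber distances between any two cities vary:

    SPEED_OF_LIGHT_KM_PER_S = 299_792   # speed of light in a vacuum
    FIBER_VELOCITY_FACTOR = 0.7         # bits in fiber move at roughly 70% of that

    def propagation_delay_ms(path_km):
        """One-way time for a bit to travel a fiber path of the given length."""
        return path_km / (SPEED_OF_LIGHT_KM_PER_S * FIBER_VELOCITY_FACTOR) * 1000

    # With an assumed 3,500 km path, the one-way propagation delay is:
    print(propagation_delay_ms(3_500))   # about 17 ms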

Let's go back to the narrow bridge on a highway.

Imagine a situation in which there is not a lot of traffic and that when a vehicle arrives the bridge is usually empty, so the automobile suffers no delay beyond the fact that each vehicle has to travel slowly across the bridge. However, imagine further that a convoy of vehicles arrives at the bridge. The first can begin to cross immediately, but will still suffer some delay because of the reduced speed and the length of the bridge. The remaining vehicles stop and wait. The second vehicle is forced to wait a relatively short time as the first vehicle works its way onto the bridge. But each successive vehicle waits longer than its predecessor.
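
In code, the bridge behaves roughly as follows (a Python sketch; the serialization and transit times are illustrative values carried over from the earlier examples):

    SERIALIZATION_S = 0.031   # time to clock one packet onto the link (the slow crossing)
    TRANSIT_S = 0.017         # propagation time across the link (the long bridge)

    def emulate_slow_link(arrival_times):
        """For each packet, return (queueing delay, time it reaches the far end).

        A packet cannot start onto the link before it arrives, nor before the
        previous packet has finished serializing. Unused link time is simply lost.
        """
        link_free_at = 0.0
        results = []
        for arrived in arrival_times:
            start = max(arrived, link_free_at)       # wait for the link to become free
            link_free_at = start + SERIALIZATION_S   # link is busy while bits are clocked out
            results.append((start - arrived, link_free_at + TRANSIT_S))
        return results

    # A "convoy" of five packets arriving together after a long quiet period:
    for wait, delivered in emulate_slow_link([10.0] * 5):
        print(f"waited {wait * 1000:5.1f} ms, delivered at t={delivered:.3f} s")

The first packet in the convoy waits 0 ms, the second about 31 ms, the third about 62 ms, and so on: exactly the behavior of the convoy at the bridge.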

Main differences between rate limitation and reduced bandwidth link emulation

Rate Limitation is a statistical process to manage resources. Reduced Bandwidth Link Emulation is a recognition that there are limits and that some resources, if unused, are lost.

Rate Limiting software makes sure that over a period of time (on the order of a few seconds) the average bit rate does not exceed a defined threshold. This means that credit is given in the calculations for unused periods on the wire even if the packets are readied for transmission after that period has occurred.

Reduced Bandwidth Link Emulation is different. Reduced Bandwidth Link Emulation recognizes that it is impossible to send a packet before it is ready for transmission and also that it takes some period of time to clock the bits of a packet out of memory and onto the wire and then for those bits to move across the link to the destination.

With Rate Limitation, sometimes a packet flashes through with essentially zero delay because there has been an accumulation of no-traffic periods preceding it. With Reduced Bandwidth Link Emulation no packet can ever go through without at least a minimal serialization delay because that would be physically impossible.
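
A quick arithmetic comparison makes the contrast concrete (again assuming an illustrative 1500-byte packet and a 384 Kbit/s emulated link):

    LINK_BPS = 384_000       # emulated link bandwidth
    PACKET_BITS = 1500 * 8   # one full-size packet

    # After a long quiet period a rate limiter's sliding-window count is zero,
    # so the packet is handed to the hardware with essentially no added delay.
    rate_limited_delay_ms = 0.0

    # A reduced bandwidth link emulator still has to clock the bits onto the
    # slow link, so the packet can never see less than the serialization delay.
    emulated_delay_ms = PACKET_BITS / LINK_BPS * 1000

    print(rate_limited_delay_ms, emulated_delay_ms)   # 0.0 vs. about 31 ms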

Real world devices

Rate Limitation is a useful tool to manage the utilization of network resources. For example, it is often considered useful to rate-limit background bulk file transfers in order to allow time-critical interactive voice or video to transit the net and not cause the receiver to hear broken voice or see blotchy or stuttered (or "juddered") video.

Because most network devices operate on a store-and-forward principle they have buffers to hold traffic for a short time. This makes the statistical character of Rate Limitation a very useful tool for ISPs to manage their often oversubscribed bandwidth resources.

Rate Limitation mechanisms are found in most network devices, from consumer DSL and Cable modems to heavy duty ISP switches and routers.

Reduced Bandwidth Link Emulation is a tool mainly of value to those building network devices who need to validate in the lab that their creations will work under real-world network conditions.

Practical implications for pre-deployment testing

Because both limited bandwidth links and rate limited links are so common on today's internet, a prudent vendor of a network protocol stack or application ought to confirm that their product works across such links.

Reduced Bandwidth Link Emulation and Rate Limitation have a significant common characteristic: they both cause packets to be delayed. This means that for many purposes reduced bandwidth testing can substitute for rate limitation testing, and vice versa.

However, if one is designing an application or protocol that tries to adapt to and accommodate transit and serialization delays, then rate limitation testing, although useful, will not reveal design or protocol flaws that could be exposed by reduced bandwidth link testing.

Using the Maxwell Network Emulators for rate limitation or reduced bandwidth link emulation

The Maxwell products do both Rate Limitation and Reduced Bandwidth Link Emulation.

Several parameters control the serialization and transit delays as well as the length of the queue before packet discard occurs. The user may also control ancillary factors such as the size of the padding (i.e. the hidden wrapper bits) that is carried along with the packet as it traverses the low bandwidth link.

The low end of the Maxwell family does Rate Limitation according to the definition in this white paper. Depending on the application, it can approximate Reduced Bandwidth Link Emulation; however, please consult a Maxwell engineer for an assessment.


© 2021 InterWorking Labs, Inc. dba IWL. ALL RIGHTS RESERVED.
Web: https://iwl.com/
Phone: +1.831.460.7010
Email: info@iwl.com