QoS is a fundamental network infrastructure technology, in the same class as high availability and security technologies. As network traffic evolves and video traffic grows rapidly, QoS plays an increasingly important role in modern networks.
Beyond bandwidth, several basic characteristics define a traffic class and thereby dictate how the network handles packets belonging to that class. Crucial among these characteristics are the following:
- Delay (or latency): This is the finite amount of time that it takes a packet to reach the receiving endpoint after being sent from the sending endpoint.
- Jitter (or delay variation): This is the variation, or difference, in the end-to-end delay in arrival between sequential packets.
- Packet drops: This is a comparative measure of the number of packets successfully received to the total number sent, expressed as a percentage.
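The three characteristics above can be computed directly from send and receive timestamps. The following sketch uses hypothetical timestamp logs (the sequence numbers and millisecond values are invented for illustration):

```python
# Sketch: per-packet delay, jitter, and loss percentage from
# hypothetical send/receive timestamp logs (times in milliseconds).

sent = {1: 0.0, 2: 20.0, 3: 40.0, 4: 60.0}   # seq -> send time
received = {1: 35.0, 2: 58.0, 4: 97.0}       # seq 3 was dropped

# Delay (latency): arrival time minus send time, per delivered packet.
delays = {seq: received[seq] - sent[seq] for seq in received}

# Jitter: difference in end-to-end delay between sequential packets.
seqs = sorted(delays)
jitter = [abs(delays[b] - delays[a]) for a, b in zip(seqs, seqs[1:])]

# Packet drops: percentage of sent packets that never arrived.
loss_pct = 100.0 * (len(sent) - len(received)) / len(sent)

print(delays)     # {1: 35.0, 2: 38.0, 4: 37.0}
print(jitter)     # [3.0, 1.0]
print(loss_pct)   # 25.0
```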
Before networks converged, network engineering was focused on connectivity. The rates at which data came onto the network resulted in bursty data flows. Data packets tried to grab as much bandwidth as they could at any given time. Access was on a first-come, first-served basis. The data rate available to any one user varied depending on the number of users accessing the network at any given time.
The protocols that were developed adapted to the bursty nature of data networks, so brief outages are survivable. For example, when you retrieve email, a delay of a few seconds is generally not noticeable. A delay of minutes is annoying but not serious. Converged networks, however, also carry traffic that these traditional assumptions do not cover: latency-sensitive data, drop-sensitive data, and video.
Quality Issues in Converged Networks
The four major problems facing converged enterprise networks include the following:
■ Bandwidth capacity: Large graphics files, multimedia uses, and increasing use of voice and video cause bandwidth capacity problems over data networks.
■ End-to-end delay (both fixed and variable): Delay is the time that it takes for a packet to reach the receiving endpoint after being transmitted from the sending endpoint. This period of time is called “end-to-end delay,” and consists of two components:
■ Fixed network delay: Two types of fixed delays are serialization and propagation delays. Serialization is the process of placing bits on a circuit. The higher the circuit speed, the less time it takes to place the bits on a circuit. Therefore, the higher the speed of the link, the less serialization delay that is incurred. Propagation delay is the time that it takes for frames to transit the physical media.
■ Variable network delay: Queuing delay is a type of variable delay. Specifically, the amount of time a packet spends in the output buffer (that is, the output queue) of an interface can vary based on network congestion. Therefore, this queuing delay is considered to be a variable delay.
■ Variation of delay (also called jitter): Jitter is the delta, or difference, in the total end-to-end delay values of two voice packets in the voice flow.
■ Packet loss: Loss of packets is usually caused by congestion in a WAN, resulting in speech dropouts or a stutter effect if the playout side tries to accommodate by repeating previous packets.
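The fixed delay components described above can be quantified with simple formulas: serialization delay is the frame size in bits divided by the link speed, and propagation delay is the distance divided by the signal's propagation speed in the medium. The values below (1500-byte frame, 10 Mbps link, 2000 km of fiber) are assumed example figures:

```python
# Sketch of the fixed delay components, with assumed example values.

FRAME_BYTES = 1500
LINK_BPS = 10_000_000        # 10 Mbps link
DISTANCE_M = 2_000_000       # 2000 km of fiber
PROPAGATION_MPS = 2.0e8      # roughly 2/3 the speed of light in fiber

# Serialization delay: time to place the frame's bits on the circuit.
# A faster link means less serialization delay.
serialization_s = (FRAME_BYTES * 8) / LINK_BPS

# Propagation delay: time for the bits to transit the physical media.
propagation_s = DISTANCE_M / PROPAGATION_MPS

print(f"serialization: {serialization_s * 1000:.2f} ms")  # 1.20 ms
print(f"propagation:   {propagation_s * 1000:.2f} ms")    # 10.00 ms
```

Queuing delay, by contrast, cannot be computed from the link alone: it depends on how full the output queue is at the moment the packet arrives, which is why it is the variable component.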
Congestion and Solutions
Queuing algorithms are used to manage congestion. Many algorithms have been designed to serve different needs.
Queuing on routers is necessary to accommodate bursts when the arrival rate of packets is greater than the departure rate, usually because of one of the following two reasons:
■ The input interface is faster than the output interface.
■ The output interface is receiving packets coming in from multiple other interfaces.
The queuing structure is split into two parts, as follows:
■ Hardware queue: Uses FIFO strategy, which is necessary for the interface drivers to transmit packets one-by-one. The hardware queue is sometimes referred to as the transmit queue (TxQ). Packets in the hardware queue cannot be reordered.
■ Software queue: Schedules packets into the hardware queue based on the QoS requirements. Software queuing is implemented when the interface is congested. The software queuing system is bypassed whenever there is room in the hardware queue.
The software queue is much larger than the hardware queue, which has a capacity of only a few packets (typically two to three). The software queue can hold tens of packets and allows their reordering prior to transmission.
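The interaction between the two queues can be sketched as a toy model: a small hardware FIFO that never reorders, fed by a software queue that schedules by QoS requirement (here simplified to a priority number). The names, sizes, and priority scheme are illustrative assumptions, not an actual router implementation:

```python
# Toy model of the two-part queuing structure: a hardware FIFO holding
# only a few packets, and a software queue that can reorder (by priority).
import heapq
from collections import deque

HW_CAPACITY = 3          # hardware queue holds only a few packets

hardware_q = deque()     # strict FIFO; packets here cannot be reordered
software_q = []          # priority heap; lower number = higher priority

def enqueue(packet, priority):
    """Bypass the software queue whenever the hardware queue has room."""
    if len(hardware_q) < HW_CAPACITY and not software_q:
        hardware_q.append(packet)                       # uncongested path
    else:
        heapq.heappush(software_q, (priority, packet))  # congested: schedule

def transmit():
    """Driver sends one packet, then refills the TxQ from the scheduler."""
    pkt = hardware_q.popleft() if hardware_q else None
    if software_q and len(hardware_q) < HW_CAPACITY:
        hardware_q.append(heapq.heappop(software_q)[1])
    return pkt
```

If three data packets fill the hardware queue and a high-priority voice packet then arrives followed by a fourth data packet, the voice packet is scheduled ahead of the later data packet, but the three packets already in the hardware FIFO still go out first, in order.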
Software-only interfaces, such as subinterfaces or tunnels, have no concept of departure rate, because no hardware interface is directly tied to them. No congestion can occur on them, and they cannot perform queuing. Therefore, it is impossible to apply a queuing service policy directly to a software interface.
Policing and Shaping
Both mechanisms share the following common points:
- Both traffic shaping and policing mechanisms are traffic conditioning mechanisms used in a network to control traffic rates.
- Both mechanisms use classification so that they can differentiate traffic.
- They both measure the rate of traffic and compare that rate to the configured traffic shaping or traffic policing policy.
The difference between traffic shaping and policing can be described in terms of their implementation.
Traffic shaping buffers excessive traffic so that the traffic stays within a desired rate. With traffic shaping, traffic bursts are smoothed out by queuing the excess traffic to produce a steadier flow of data. Reducing traffic bursts helps reduce congestion in the network.
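A common way to implement shaping is a token bucket in front of a delay queue: tokens accumulate at the configured rate, conforming packets spend tokens and depart, and excess packets wait rather than being dropped. The sketch below is a minimal, assumed model (per-interval ticks, packet sizes in bits), not any vendor's implementation:

```python
# Minimal token-bucket shaper sketch: bursts are queued and released at
# the configured rate, producing a steadier flow. Units are illustrative.
from collections import deque

class Shaper:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps        # committed rate, bits per second
        self.burst = burst_bits     # bucket depth
        self.bucket = burst_bits    # available tokens (bits)
        self.queue = deque()        # excess traffic is delayed, not dropped

    def tick(self, arrivals, interval_s=1.0):
        """One interval: refill tokens, queue arrivals, send what conforms."""
        self.bucket = min(self.burst, self.bucket + self.rate * interval_s)
        self.queue.extend(arrivals)          # packet sizes in bits
        sent = []
        while self.queue and self.queue[0] <= self.bucket:
            pkt = self.queue.popleft()
            self.bucket -= pkt
            sent.append(pkt)
        return sent
```

With a rate of 1000 bps and a 1000-bit bucket, a burst of three 1000-bit packets arriving in one interval leaves the shaper as one packet per interval over three intervals: the burst is smoothed, at the cost of delaying the last two packets.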
Traffic policing drops excess traffic in order to control traffic flow within specified rate limits. Traffic policing does not introduce any delay to traffic that conforms to traffic policies. Traffic policing can cause more TCP retransmissions, because traffic in excess of specified limits is dropped.
Traffic policing mechanisms such as class-based policing or Committed Access Rate (CAR) also have marking capabilities in addition to rate-limiting capabilities. Instead of dropping the excess traffic, traffic policing can alternatively mark and then send the excess traffic. This allows the excess traffic to be re-marked with a lower priority before it is sent out. Traffic shapers, on the other hand, do not re-mark traffic; they only delay excess traffic bursts to conform to a specified rate.
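The contrast with the shaper is that a policer keeps no queue: each packet is judged against the token bucket at arrival and either sent, re-marked, or dropped on the spot, so conforming traffic sees no added delay. This is a hedged sketch of that behavior (field names and the simple conform/exceed split are illustrative, not CAR's exact semantics):

```python
# Single-rate policer sketch: non-conforming packets are dropped, or
# optionally re-marked and sent with lower priority, never buffered.

class Policer:
    def __init__(self, rate_bps, burst_bits, remark=False):
        self.rate = rate_bps
        self.burst = burst_bits
        self.bucket = burst_bits
        self.remark = remark          # mark-and-send instead of drop

    def tick(self, arrivals, interval_s=1.0):
        self.bucket = min(self.burst, self.bucket + self.rate * interval_s)
        conform, exceed = [], []
        for pkt in arrivals:          # no buffering: decide immediately
            if pkt <= self.bucket:
                self.bucket -= pkt
                conform.append(pkt)   # sent unchanged, no added delay
            elif self.remark:
                exceed.append(pkt)    # sent, but re-marked to lower priority
            # else: dropped (may trigger TCP retransmissions)
        return conform, exceed
```

With `remark=False`, the second of two back-to-back 1000-bit packets against a 1000-bit bucket is simply dropped; with `remark=True`, it is forwarded in the exceed class instead, which is the mark-and-send behavior described above.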