Quality of Service (QoS) is a suite of technologies used to manage bandwidth usage as data crosses computer networks. Its most common use is to protect real-time and high-priority data applications.
Before configuring standard QoS, you must have a thorough understanding of these items:
- The types of applications used and the traffic patterns on your network.
- Traffic characteristics and needs of your network. For example, is the traffic on your network bursty? Do you need to reserve bandwidth for voice and video streams?
- Bandwidth requirements and speed of the network.
- Location of congestion points in the network.
Restrictions for QoS
- The switch must be running the LAN Base image to use these features: stacking, DSCP, auto-QoS, trusted boundary, policing, marking, mapping tables, and weighted tail drop.
- Ingress queueing is not supported. QoS analyses the ingress traffic, classifies and marks it, and then processes it through the egress queues.
- You can configure QoS only on physical ports; VLAN-based QoS is not supported. You configure the QoS settings, such as classification, queueing, and scheduling, in a policy map, and you apply that nonhierarchical policy map to a physical port.
- If the switch is running the LAN Lite image, you can configure ACLs, but you cannot attach them to physical interfaces. You can attach them to VLAN interfaces to filter traffic to the CPU.
- The switch must be running the LAN Base image to use the following QoS features:
- Policy maps
- Policing and marking
- Mapping tables
Information About QoS
When you configure the QoS feature, you can select specific network traffic, prioritize it according to its relative importance, and use congestion-management and congestion-avoidance techniques to provide preferential treatment. Implementing QoS in your network makes network performance more predictable and bandwidth utilization more effective.
The QoS implementation is based on the Differentiated Services (Diff-Serv) architecture, a standard from the Internet Engineering Task Force (IETF). This architecture specifies that each packet is classified upon entry into the network.
The classification is carried in the IP packet header: 6 bits of the deprecated IP type of service (ToS) field carry the classification (class) information. Classification can also be carried in the Layer 2 frame.
- Bucket depth: the maximum burst that is tolerated before the bucket overflows.
- Type of service (ToS): a deprecated field used to carry the classification (class) information.
- Differentiated Services Code Point (DSCP): means of classifying and managing network traffic and of providing quality of service (QoS) in modern Layer 3 IP networks.
- Weighted tail drop (WTD): implemented on queues to manage the queue lengths and to provide drop precedences for different traffic classifications.
- Shaped round robin (SRR): services the egress queues.
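The bucket depth term above comes from the token-bucket model that policing uses: tokens accrue at the policed rate, and the bucket depth bounds the burst that is tolerated before traffic falls out of profile. A minimal sketch of that model, assuming illustrative names and units (this is not the switch's internal implementation):

```python
class TokenBucket:
    """Token-bucket policer sketch: tokens accrue at the policed rate;
    the bucket depth bounds the burst tolerated before overflow."""

    def __init__(self, rate_bps: float, depth_bytes: float):
        self.rate = rate_bps / 8.0    # refill rate in bytes per second
        self.depth = depth_bytes      # bucket depth: maximum burst in bytes
        self.tokens = depth_bytes     # bucket starts full
        self.last = 0.0               # time of the last update, in seconds

    def conforms(self, packet_bytes: int, now: float) -> bool:
        """True if the packet is in profile; False if it exceeds the
        profile and would be dropped or marked down."""
        elapsed = now - self.last
        self.tokens = min(self.depth, self.tokens + elapsed * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False
```

Because the bucket starts full, a burst up to the bucket depth conforms even at time zero, but a second back-to-back burst does not until tokens have accrued again.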
Layer 3 Packet Prioritization Bits
IP precedence values range from 0 to 7. DSCP values range from 0 to 63.
End-to-End QoS Solution Using Classification
All switches and routers that access the Internet rely on the class information to provide the same forwarding treatment to packets with the same class information and different treatment to packets with different class information. The class information in the packet can be assigned by end hosts or by switches or routers along the way, based on a configured policy, detailed examination of the packet, or both. Detailed examination of the packet is expected to occur closer to the edge of the network, so that the core switches and routers are not overloaded with this task.
Switches and routers along the path can use the class information to limit the amount of resources allocated per traffic class. The behaviour of an individual device when handling traffic in the Diff-Serv architecture is called per-hop behaviour. This means that to construct an end-to-end QoS solution, all devices along the path must provide consistent per-hop behaviour.
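Consistent per-hop behaviour presumes that every hop interprets a given DSCP the same way. The standard code points map to per-hop behaviours roughly as sketched below, assuming the well-known EF, AF, and Class Selector assignments (the function name and the "BE"/"unassigned" labels are illustrative):

```python
def phb_for_dscp(dscp: int) -> str:
    """Map a DSCP value (0-63) to its standard per-hop behaviour name."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must be 0-63")
    if dscp == 46:
        return "EF"                    # Expedited Forwarding
    if dscp == 0:
        return "BE"                    # best effort (the default PHB)
    cls, low = dscp >> 3, dscp & 0x7
    if cls in (1, 2, 3, 4) and low in (2, 4, 6):
        return f"AF{cls}{low >> 1}"    # Assured Forwarding: class, drop precedence
    if low == 0:
        return f"CS{cls}"              # Class Selector, IP precedence compatible
    return "unassigned"
```

A table like this is what each hop's classification and queueing policy must agree on; if one hop maps DSCP 46 to a priority queue and the next treats it as best effort, the end-to-end guarantee is lost.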
QoS Deployment Lifecycle
The following steps define a QoS Deployment Lifecycle:
1. Project planning and buy-in—Understand the current and near-future QoS needs of your organization as a whole, as well as of every department. Choose an appropriate QoS model, then get departmental buy-in before you begin.
2. Investigation and design—If you are making significant hardware or software changes, make those changes first. Then:
- Snapshot existing QoS policies in case you need to roll back
- Research the QoS capabilities of your network devices
- Baseline the network with flow monitoring and usage analysis
- Select a QoS model for the traffic classes you want to support
- Define QoS policies for headquarters and campus LANs
- Define QoS policies for WAN links and branch offices
3. Proof of Concept (POC)—Test QoS policies and settings first in a non-production environment using real and synthetic traffic to generate controlled conditions. Test separately for each policy and then with all policies combined.
4. Iterative deployment cycle—Roll out QoS policies in a phased approach, either by sections of the network or by QoS functions (classification, then queueing). Confirm changes at each iteration for at least 24 hours before continuing to the next step.
5. Ongoing monitoring and analysis—Perform ongoing monitoring and adjust your policies not just for average daily usage but also for monthly, quarterly, and yearly business cycles.