1. Integrated LAN-WAN – Integrated QoS
Johan De Ridder – Cisco
Rudi Vankemmel - Belgacom
2. What is QoS? Basic Perspectives
3. Understanding QoS
Transmission quality is defined by the following factors:
4. Elements That Affect Latency and Jitter
5.
QoS Requirements for Voice
Latency ≤ 150 ms
Jitter ≤ 30 ms
Loss ≤ 1%
21-106 kbps guaranteed priority bandwidth per call
BW = (Payload + Headers) * pps
150 bps (+ Layer 2 overhead) guaranteed bandwidth for voice-control traffic per call
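A quick worked version of the per-call formula above (a sketch, not a provisioning tool): the payload sizes and 50 pps packet rate are standard codec defaults, and the Layer 2 overhead shown is an assumed Ethernet value.

```python
# Per-call bandwidth using the slide's formula: BW = (payload + headers) * pps.
# 20 bytes IP + 8 UDP + 12 RTP = 40 bytes of Layer 3/4 headers per packet.
IP_UDP_RTP_HDR = 40
ETHERNET_L2 = 18          # assumed Layer 2 overhead; varies per link type

def per_call_bw_kbps(payload_bytes, pps=50, l2_bytes=0):
    frame = payload_bytes + IP_UDP_RTP_HDR + l2_bytes
    return frame * 8 * pps / 1000.0

print(per_call_bw_kbps(20))                          # G.729, Layer 3 only -> 24.0 kbps
print(per_call_bw_kbps(160))                         # G.711, Layer 3 only -> 80.0 kbps
print(per_call_bw_kbps(160, l2_bytes=ETHERNET_L2))   # G.711 + Ethernet    -> 87.2 kbps
```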
6. QoS Requirements for Video-Conferencing
Latency ≤ 150 ms
Jitter ≤ 30 ms
Loss ≤ 1%
Minimum priority bandwidth guarantee required is:
Video-Stream + 20%
e.g. a 384 kbps stream would require 460 kbps of priority bandwidth
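As a small sanity check of the rule of thumb above (stream rate plus roughly 20% overhead), a minimal sketch:

```python
def video_priority_bw_kbps(stream_kbps, overhead=0.20):
    # Priority bandwidth = video stream rate plus ~20% for headers/overhead.
    return stream_kbps * (1 + overhead)

print(video_priority_bw_kbps(384))   # -> 460.8 kbps, i.e. the ~460 kbps quoted above
```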
7. QoS Requirements for Data
Different applications have different traffic characteristics
Different versions of the same application can have different traffic characteristics
Classify data into a relative-priority model with no more than four classes, e.g.:
Gold: Mission-Critical Apps
(ERP Apps, Transactions)
Silver: Guaranteed-Bandwidth
(Intranet, Messaging)
Bronze: Best-Effort
(Email, Internet)
Less-Than-Best-Effort: (FTP, Backups, Napster/Kazaa)
Do not assign more than 3 apps to Gold or Silver classes
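An illustrative sketch of the four-class model above. The DSCP markings are assumptions based on common DiffServ conventions (AF31/AF21/default/CS1); they are not taken from these slides.

```python
# Hypothetical application-to-class mapping for the relative-priority model.
DATA_CLASSES = {
    "Gold":   {"dscp": "AF31", "apps": ["ERP", "Transactions"]},
    "Silver": {"dscp": "AF21", "apps": ["Intranet", "Messaging"]},
    "Bronze": {"dscp": "Default", "apps": ["Email", "Internet"]},
    "Less-Than-Best-Effort": {"dscp": "CS1", "apps": ["FTP", "Backups"]},
}

def classify(app):
    """Return the class for an application, defaulting to best-effort Bronze."""
    for name, cls in DATA_CLASSES.items():
        if app in cls["apps"]:
            return name
    return "Bronze"

print(classify("ERP"))      # -> Gold
print(classify("Backups"))  # -> Less-Than-Best-Effort
```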
8. QoS Toolset
Classification Tools
Scheduling Tools
Provisioning Tools
Management Tools
9. Classification Tools – Trust Boundaries: Where Traffic is ‘Marked’
A device is trusted if it correctly classifies packets
For scalability, classification should be done as close to the edge as possible - avoids detailed traffic identification at intermediate hops!
The outermost trusted devices represent the trust boundary
Options 1 and 2 are optimal; option 3 is acceptable if the access switch cannot perform classification
10. Classification Tools – Ethernet 802.1Q/p Class of Service
11. Classification Tools – IPv4 IP Precedence and DiffServ Code Points
12. Classification Tools – Per-Hop Behaviour & NBAR
Per-Hop Behaviour – maps to specific DSCP values
Best Effort
Assured Forwarding
Expedited Forwarding
Network Based Application Recognition
Majority of data applications can be identified by L3 or L4 criteria (e.g. well-known ports)
For applications where these criteria are insufficient, NBAR may be a viable alternative
Matching against a Packet Description Language Module (PDLM) ~ application signature
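The standard DSCP values behind the Per-Hop Behaviours listed above (RFC 2474/2597/3246) can be reproduced in a couple of lines:

```python
# Best Effort uses DSCP 0 and Expedited Forwarding uses DSCP 46;
# Assured Forwarding values follow DSCP = 8 * class + 2 * drop_precedence.
BEST_EFFORT = 0
EXPEDITED_FORWARDING = 46

def af_dscp(af_class, drop_precedence):
    return 8 * af_class + 2 * drop_precedence

print(af_dscp(3, 1))   # AF31 -> 26
print(af_dscp(4, 1))   # AF41 -> 34
```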
13. Scheduling Tools – Queueing Algorithms
Congestion can occur at any point in the network where there are speed mismatches
Devices have buffers that allow for scheduling higher-priority packets to exit sooner than lower-priority ones ~ queueing
Low-Latency Queuing (LLQ) used for highest-priority traffic (voice/video – strict prioritization)
Class-Based Weighted-Fair Queuing (CBWFQ) used for guaranteeing bandwidth to data applications
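A highly simplified model of the LLQ + CBWFQ behaviour described above, purely to illustrate the scheduling logic (the strict-priority queue is always drained first, then the data classes share the remainder by weight); it is a sketch, not the router implementation.

```python
from collections import deque

class Scheduler:
    """Conceptual LLQ + weighted data-class scheduler."""

    def __init__(self, weights):
        self.llq = deque()                        # strict-priority queue (voice/video)
        self.cbwfq = {c: deque() for c in weights}
        self.weights = dict(weights)              # e.g. {"gold": 3, "silver": 2, "bronze": 1}
        self.credits = dict(weights)

    def enqueue(self, pkt, cls="bronze", priority=False):
        (self.llq if priority else self.cbwfq[cls]).append(pkt)

    def dequeue(self):
        if self.llq:                              # LLQ is serviced exhaustively first
            return self.llq.popleft()
        if not any(self.cbwfq.values()):
            return None
        while True:                               # weighted round-robin over data classes
            for cls, q in self.cbwfq.items():
                if q and self.credits[cls] > 0:
                    self.credits[cls] -= 1
                    return q.popleft()
            self.credits = dict(self.weights)     # all credits spent: start a new round

s = Scheduler({"gold": 3, "silver": 2, "bronze": 1})
s.enqueue("voice-1", priority=True)
s.enqueue("erp-1", cls="gold")
print(s.dequeue())   # -> "voice-1" (priority traffic always exits first)
```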
14. Scheduling Tools – Low-Latency Queueing Logic
It is important to keep in mind that the LLQ is in effect a FIFO queue.
The amount of bandwidth reservable for the LLQ is variable, yet if the LLQ is over-provisioned, the overall effect will be a dampening of QoS functionality. This is because the scheduling algorithm deciding how packets exit the device will be predominantly FIFO (which is essentially “no QoS”). Put another way, a FIFO link is essentially a 100% provisioned LLQ. Over-provisioning the LLQ would effectively defeat the purpose of enabling QoS at all.
For this reason, it is recommended not to provision more than 33% of the link’s capacity as a LLQ.
More than one LLQ can be provisioned; one LLQ could be provisioned for voice and another for videoconferencing. Each queue would be serviced exhaustively before the next one would begin to be serviced (this scheduling is similar to legacy Priority Queueing). If more than one LLQ is provisioned, the 33% limit for all LLQs still applies.
Note: The 33% limit for all LLQs is a design recommendation. There may be cases where specific business needs cannot be met while holding to this suggestion. In such cases, the enterprise must provision queueing according to their specific requirements and constraints.
15. WAN Scheduling – Design Recommendation
It is important to keep in mind that the LLQ is in effect a first-in first-out (FIFO) queue.
The amount of bandwidth reservable for the LLQ is variable, yet if the LLQ is over-provisioned, the overall effect will be a dampening of QoS functionality. This is because the scheduling algorithm that decides how packets exit the device will be predominantly FIFO (which is essentially “no QoS”).
Over-provisioning the LLQ defeats the purpose of enabling QoS at all. For this reason, it is recommended that you not provision more than 33% of the link's capacity as an LLQ.
Note: The 33% limit for all LLQs is a design recommendation. There may be cases where specific business needs cannot be met while holding to this recommendation. In such cases, the enterprise must provision queueing according to their specific requirements and constraints.
To avoid bandwidth starvation of background applications (such as routing protocols, network services, and Layer 2 keepalives), it is recommended that you not provision total bandwidth guarantees to exceed 75% of the link's capacity.
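A minimal sketch of how the two provisioning guidelines above could be checked; the 1536 kbps example link and its class allocations are assumptions for illustration only.

```python
def check_provisioning(link_kbps, llq_kbps, cbwfq_kbps):
    """Warn if LLQ > ~33% of the link or total guarantees > ~75%."""
    warnings = []
    if sum(llq_kbps) > 0.33 * link_kbps:
        warnings.append("LLQ classes exceed 33% of link capacity")
    if sum(llq_kbps) + sum(cbwfq_kbps) > 0.75 * link_kbps:
        warnings.append("Total guarantees exceed 75% of link capacity")
    return warnings

print(check_provisioning(1536, [460], [400, 200]))   # -> [] (within both limits)
print(check_provisioning(1536, [768], [400, 200]))   # -> both warnings triggered
```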
16. Scheduling Tools – Congestion Avoidance Algorithms
Queueing algorithms manage the front of the queue
i.e. which packets get transmitted first
Congestion Avoidance algorithms, like Weighted-Random Early-Detect (WRED), manage the tail of the queue
i.e. which packets get dropped first when queueing buffers fill
WRED can operate in a DiffServ-compliant mode, which drops packets according to their DSCP markings
WRED works best with TCP-based applications
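A conceptual sketch of WRED's tail management: between a minimum and maximum threshold the drop probability ramps up linearly, and above the maximum everything is dropped. The thresholds shown per class are assumed values for illustration only.

```python
def wred_drop_probability(avg_queue_depth, min_th, max_th, mark_prob=0.1):
    """Probability of dropping an arriving packet for a given average queue depth."""
    if avg_queue_depth < min_th:
        return 0.0                                # no drops below the minimum threshold
    if avg_queue_depth >= max_th:
        return 1.0                                # tail-drop above the maximum threshold
    return mark_prob * (avg_queue_depth - min_th) / (max_th - min_th)

# Lower-priority markings typically get more aggressive (lower) thresholds:
print(wred_drop_probability(30, min_th=20, max_th=40))   # higher-priority class -> 0.05
print(wred_drop_probability(30, min_th=10, max_th=30))   # lower-priority class  -> 1.0
```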
17. Provisioning Tools – Policers & Shapers: Respond to Traffic Violations
A policer typically drops excess traffic
A shaper typically delays excess traffic, smooths bursts and prevents unnecessary drops
Very common on Non-Broadcast Multiple-Access (NBMA) network topologies such as Frame Relay and ATM
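A minimal token-bucket sketch contrasting the two behaviours: out-of-profile packets are dropped (or re-marked) by a policer, whereas a shaper would queue them and release them later. The rate and burst values are arbitrary examples.

```python
import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0            # token refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def conforms(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                       # in profile: transmit
        return False                          # out of profile: policer drops, shaper delays

bucket = TokenBucket(rate_bps=128_000, burst_bytes=8_000)
print(bucket.conforms(1500))                  # True while burst tokens remain
```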
18. Provisioning Tools – Link-Efficiency Mechanisms: LFI
Serialization delay is the finite amount of time required to put frames on a wire
For links ≤ 768 kbps, serialization delay is a major factor affecting latency and jitter
For such slow links, large data packets need to be fragmented and interleaved with smaller, more urgent voice packets.
A data frame can only be sent to the physical wire at the serialization rate of the interface.
This serialization rate is the size of the frame divided by the clocking speed of the interface.
For example, a 1500 Byte frame takes 214 ms to serialize on a 56 kbps circuit.
If a delay sensitive voice packet is behind a large data packet in the egress interface queue, the end-to-end delay budget of 150 ms could be exceeded.
Additionally, even a relatively small frame can adversely affect overall voice quality simply by increasing the jitter to a value greater than the size of the adaptive jitter buffer at the receiver.
LFI tools are used to fragment large data frames into regularly sized pieces and interleave voice frames into the flow so the end-to-end delay can be accurately predicted. This places bounds on jitter by preventing voice traffic from being delayed behind large data frames.
A maximum of 10 ms serialization delay is the recommended target for setting the fragmentation size, as this leaves adequate room in the end-to-end latency budget required by voice.
Two tools are available for LFI: MLP LFI (for point-to-point links) and FRF.12 (for Frame Relay links).
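The serialization numbers above can be reproduced directly, along with the fragment size implied by the ~10 ms target (a sketch; actual LFI fragment sizes are configured per platform):

```python
def serialization_delay_ms(frame_bytes, link_kbps):
    return frame_bytes * 8 / link_kbps        # bits divided by kbit/s gives ms

def max_fragment_bytes(link_kbps, target_ms=10):
    return int(link_kbps * target_ms / 8)     # largest frame that serializes in target_ms

print(serialization_delay_ms(1500, 56))       # -> ~214 ms, the example above
print(max_fragment_bytes(56))                 # -> 70-byte fragments on a 56 kbps link
print(max_fragment_bytes(768))                # -> 960-byte fragments on a 768 kbps link
```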
20. Design Approach to Enabling QoS
21. QoS Tools Mapped to Design Requirements – Realizing End-to-End QoS
22. Belgacom IP-VPN QoS Support
BiLAN IP-VPN
Fast Switching backbone through MPLS technology
MPLS Traffic Engineering for path optimisation
QoS typically expressed in delay and packet loss (any-to-any)
End-to-End WAN QoS: 4 classes supported today
Voice or Real-time traffic:
mainly intended to carry voice and/or video over UDP/RTP
CoS with low end-to-end delay, minimal jitter and a guaranteed bandwidth.
High priority traffic 1 and 2 or Business 1 and 2:
Intended to carry response-sensitive traffic such as SNA traffic, file transfers or internal Web applications
CoS with a guaranteed bandwidth.
Best Effort traffic
23. Belgacom IP-VPN QoS – How
24. Do QoS requirements change your LAN-WAN integration approach?
1) No – I don't bother
2) No – because it is already implemented
3) Yes – I'm investigating it
4) Yes – but I don't know how
This question provides a smooth transition to the NSI wheel without the risk of getting into a technical discussion afterwards.
25. End-to-End QoS = End-to-End Integration
End-to-End QoS requires WAN and LAN integration
Belgacom is a strong partner on the WAN
What can we do in the LAN area: the NSI Wheel
26. NSI : Solution Overview
27. Conclusion
New Applications and Services require solid End-to-End QoS
QoS support technologies exist for the LAN and the WAN
End-to-End QoS design considerations push for an integrated approach to the LAN and WAN
Belgacom has the most extensive expertise in the WAN and uses state-of-the-art IP-VPN technology
Belgacom NSI is ready to support your LAN projects or to provide advice on LAN and WAN integration
Further Information:
Contact your Account Manager
QoS Design Guide: www.cisco.com/go/srnd