IPTV / IPMulticast End-to-End Greg Shepherd (shep@cisco.com)
Agenda • IPTV Deployments • Impact of Packet Loss on MPEG2 Frames • Network Impairment Contributors • CRS IPMulticast Test Data • Summary • Future Challenges
IPTV Deployments today • Two schools of thought in deployments today: • "I think I need 50ms convergence" • "IPMulticast is fast enough" • "IPMulticast is UDP — the only acceptable loss is 0ms" • How much is "reasonable"? • The 50ms "requirement" is not a video requirement • It is a legacy telco voice requirement • Efforts for 50ms only cover a limited portion of network events • Where to put the effort? • Make IPMulticast better? • Improve the transport? • Add layers of network complexity to improve core convergence?
Agenda • IPTV Deployments • Impact of Packet Loss on MPEG2 Frames • Network Impairment Contributors • CRS IPMulticast Convergence Test Data • Summary • Future Challenges
Impact of Packet Loss on MPEG Stream • Video is very susceptible to IP impairments • [Screen captures comparing video quality at 0%, 0.5%, and 5% packet loss]
Impact of Packet Loss on MPEG Stream • Compressed Digitized Video is sent as I, B, P Frames • I-frames: contain full picture information • Transmit I frames approximately every 15 frames (GOP interval) • P-frames: predicted from past I or P frames • B-frames: use past and future I or P frames I-frame loss “corrupts” P/B frames for the entire GOP
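To make the GOP dependency concrete, here is a minimal sketch (mine, not from the deck) assuming a common closed 15-frame GOP and the usual I/P/B reference rules; it reports which frames become undecodable when a single frame is lost.

```python
# A minimal sketch (not from the deck) of decode dependencies within one GOP.
# Assumptions: a closed 15-frame GOP with the common pattern I B B P B B P B B P B B P B B,
# each P frame referencing the previous I/P anchor, and each B frame referencing
# the surrounding I/P anchors.

GOP = list("IBBPBBPBBPBBPBB")  # frame types in display order within one GOP

def unusable_frames(lost_index):
    """Frames that cannot be decoded cleanly when the frame at lost_index is lost."""
    anchors = [i for i, f in enumerate(GOP) if f in "IP"]
    broken = set()
    # Anchor chain: an I is broken only if lost itself; a P is broken if lost
    # or if its reference (the previous anchor) is broken.
    for i in anchors:
        prev = [a for a in anchors if a < i]
        if i == lost_index or (GOP[i] == "P" and prev and prev[-1] in broken):
            broken.add(i)
    # B frames reference the nearest anchors on either side.
    for i, f in enumerate(GOP):
        if f == "B":
            refs = [a for a in anchors if a < i][-1:] + [a for a in anchors if a > i][:1]
            if i == lost_index or any(r in broken for r in refs):
                broken.add(i)
    return sorted(broken)

print(len(unusable_frames(0)), "of", len(GOP), "frames unusable after losing the I-frame")
print(len(unusable_frames(4)), "frame(s) unusable after losing a single B-frame")
```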
Impact of Packet Loss on MPEG Stream • Network events create correlated packet loss, not random single-packet loss. What's the relationship between network convergence time and I-frame loss? • Example assumptions: • MPEG2 stream CBR = 4.8828 Mbps • MPEG2 IP stream packet rate = 427.35 pps • L3 pkt_size = 1487 bytes (encap IP + UDP + RTP) • GOP size in msec = 480 • GOP size in pkts = 205
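A quick back-of-the-envelope check of those figures, using only the quoted packet rate and GOP duration (the bit rate and packet size are taken as given):

```python
# Back-of-the-envelope check of the slide's stream numbers.
pps      = 427.35          # MPEG2-over-IP packets per second (from the slide)
gop_ms   = 480             # GOP duration in milliseconds (from the slide)

gop_pkts = pps * gop_ms / 1000.0
print(f"packets per GOP: {gop_pkts:.0f}")          # ~205, matching the slide

outage_ms = 50             # a "50ms convergence" network event
lost_pkts = pps * outage_ms / 1000.0
print(f"packets lost in a {outage_ms}ms outage: {lost_pkts:.1f}")  # ~21 consecutive packets
```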
MPEG Frame Impact from Packet Loss • P/B frame loss is less noticeable • Error concealment techniques in the receiver can mask some of it • I-frame loss is more problematic • I-frame loss can result in the loss of an entire GOP • A single packet lost from an I-frame corrupts the entire I-frame • I-frame (GOP) loss can result in a blank screen for 1-2 secs • 50ms is a phantom goal • ~32% chance of I-frame loss • Put another way: 32% of your streams will have a 1-2 sec blank-screen outage • Why then is this a goal for some?
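The deck does not show how the 32% figure is derived; one plausible reconstruction, assuming the I-frame carries roughly 22% of the GOP's bits (my assumption) and that a 50ms loss burst lands uniformly at random within the 480ms GOP:

```python
# A hedged reconstruction of the "32% chance of I-frame loss" figure.
# Assumption (mine): the I-frame carries ~22% of the GOP's bits, so in a CBR
# stream it occupies ~22% of the 480ms GOP on the wire. A loss burst overlaps
# the I-frame whenever its start falls in a window of length
# (i_frame_ms + outage_ms) out of the gop_ms-long GOP.
gop_ms        = 480.0
i_frame_share = 0.22                        # assumed I-frame share of GOP bits
i_frame_ms    = i_frame_share * gop_ms      # ~106ms of wire time for the I-frame
outage_ms     = 50.0

p_hit = (i_frame_ms + outage_ms) / gop_ms
print(f"P(50ms outage corrupts the I-frame) ~= {p_hit:.0%}")   # ~32%
```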
Agenda • IPTV Deployments • Impact of Packet Loss on MPEG2 Frames • Network Impairment Contributors • CRS IPMulticast Convergence Test Data • Summary • Future Challenges
What are the Impairment Contributors? • Link Failures • Node Failures • Random Uncorrected Bit Errors • Congestion • How do we measure these?
What are the Impairment Contributors? • 1st: Need to quantify impairments • Need some "standard" metric, relevant to viewers' experience: # of impairments per 2 hours • Representative of a typical movie duration • Allows comparing contributions over a standard window of time
What are the Impairment Contributors? • Some Assumptions / Some Industry Standard Data / Some Customer Experience Data • Total Value Across a Typical Provider Network • Trunk Failures - .0010 Imp/2hr • HW Card Failures - .0003 Imp/2hr • SW Failures - .0012 Imp/2hr • NSF/SSO reduces the realized amount of this contribution • SW Upgrades - .0037 Imp/2hr • Modular code (IOS-XR) reduces the realized amount of this contribution
What are the Impairment Contributors? • Uncorrected Bit Errors - 11.4629 Imp/2hrs • "Video over IP" by Wes Simpson (page 238): BER of 10^-10 per trunk • Trunk Failures: .0010 • HW Failures: .0003 • SW Failures: .0012 • Maintenance: .0037 • Total: .0062 Imp/2hrs
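A rough sketch of how that BER figure translates into impairments, using the stream parameters from the earlier MPEG slide; the trunk-hop count behind the slide's 11.4629 total is not stated, so the sketch computes per-trunk numbers only:

```python
# Rough sketch (assumptions mine) of why uncorrected bit errors dominate.
ber       = 1e-10          # uncorrected bit error rate per trunk (Simpson, "Video over IP")
pkt_bytes = 1487           # L3 packet size (earlier slide)
pps       = 427.35         # packets per second per stream (earlier slide)
window_s  = 2 * 3600       # the 2-hour comparison window

# Probability that any one packet picks up at least one bit error on one trunk.
p_pkt = 1 - (1 - ber) ** (pkt_bytes * 8)

# Count each corrupted packet as one impairment.
imp_per_trunk = p_pkt * pps * window_s
print(f"~{imp_per_trunk:.2f} impairments / 2hrs per trunk hop")        # ~3.7

failure_total = 0.0062     # trunk + HW + SW + maintenance total from the slide
print(f"ratio vs. all failures combined: ~{imp_per_trunk / failure_total:.0f}x")
# A stream crossing a few such trunks lands in the ~11 Imp/2hrs range the slide quotes.
```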
Network Impairment Contributors • All HW/SW/link failures combined are dwarfed by uncorrected bit errors • Last-mile networks are often the most significant contributors • SW failures/maintenance each contribute much more than link failures • Stable, modular software with NSF/SSO can reduce this contribution even further • Fast convergence in the core is a worthy goal • Reduces core-contributed artifacts • Need to consider the balance of a solid platform vs. layered complexity • A solid-performing platform is more important than complex protocol solutions
Agenda • IPTV Deployments • Impact of Packet Loss on MPEG2 Frames • Network Impairment Contributors • CRS IPMulticast Convergence Test Data • Summary • Future Challenges
Internal CRS Multicast Convergence Test • Typical customer topologies • Most tree repairs are single-hop repairs • Typical customer network configurations • 2500 IGP routes • 250k BGP routes • Tests of 400 - 4000 (S,G) SSM entries • ISIS Prefix Prioritization • Loopbacks • Multicast source prefixes • Not yet in OSPF
Summary across all available CRS1 data • And most important… CRS1 SSM convergence is rather quick! • [Chart: convergence results with 2500 IGP / 250k BGP routes, for 400, 800, and 4000 IPTV channels]
Summary across all available CRS1 data • Unicast has negligible impact on multicast behavior, whether 2500 or 5000 ISIS prefixes
MPEG Frame Impact from Packet Loss • Competitor systems can take 1 sec or MORE to converge • [Chart: convergence comparison at 400 (S,G) entries]
IPMcast QOS Requirements • Network congestion is not a significant impairment contributor • …normally… • BUT it is a necessary safety net • On-network multicast traffic is well known • Flows, rates, sources… • Access can synchronize/spike • "Mother's Day" events • Real-time (VoIP) traffic should not suffer from other traffic events
Triple-Play QOS test assumptions • Support 4 classes of service with a strict-priority relationship between these classes as follows: • Unicast High > Multicast High > Multicast Low > Unicast Low • i.e.: VoIP > Premium Vid > Broadcast Vid > Access • Full line-rate performance is expected for all traffic transmitted in each class when uncongested. • No effect observed on higher-class performance due to traffic transmitted in a lower class. • No effect on unicast traffic, nor should the unicast traffic affect the multicast traffic. • A congested interface should not affect the same multicast flow(s) destined to adjacent uncongested interfaces.
QOS Test Configuration 2 • [Topology: Spirent AX-4000 test ports connected to CRS-1 TenGigE0/0/0/0 through TenGigE0/0/0/6, carrying HPU, HPM, LPM, and LPU traffic classes] • Load cases: HPU + HPM + LPM + LPU = 100% and HPU + HPM + LPM + LPU > 100% • Strict-priority order for dropping: HPU > HPM > LPM > LPU
QOS Test 2 - Port 4 • QOS profile is maintained on the adjacent interface with zero packet loss of the higher-priority traffic.
QOS Test Configuration 3 • [Topology: Spirent AX-4000 test ports connected to CRS-1 TenGigE0/0/0/0 through TenGigE0/0/0/2, carrying HPU, HPM, LPM, and LPU traffic classes] • Load cases: HPU + HPM + LPM + LPU = 100% and HPU + HPM + LPM + LPU > 100% • Class shares: HPU = 55%, LPM = 30%, LPU = 5%; HPM steps from 10% to 90% • Test parameters: • Rate: data traffic for HPU, LPM, and LPU is fixed at 5.5 Gb/s, 3.0 Gb/s, and 0.5 Gb/s respectively • HPM traffic follows a square-wave pattern from 1.0 Gb/s for 30 secs to 9 Gb/s for 20 secs to represent bursty HPM traffic • Packet size: 220 bytes for HPU; 1496 bytes for HPM, LPM, and LPU
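A minimal strict-priority model, sketched under the configuration-3 rates above (this is the expected behavior the test checks for, not the CRS-1 scheduler implementation):

```python
# Minimal strict-priority model showing what the test looks for during the
# 9 Gb/s HPM burst: the top class is untouched and loss concentrates in the
# lower classes.
LINE_RATE = 10.0   # Gb/s, one TenGigE egress

def serve_strict_priority(offered, line_rate=LINE_RATE):
    """offered: list of (class_name, rate_Gbps) in strict-priority order."""
    remaining = line_rate
    served = {}
    for name, rate in offered:
        served[name] = min(rate, remaining)   # higher classes are served first
        remaining -= served[name]
    return served

# Configuration 3 during the HPM peak (square-wave high phase): 18 Gb/s offered.
offered = [("HPU", 5.5), ("HPM", 9.0), ("LPM", 3.0), ("LPU", 0.5)]
for name, got in serve_strict_priority(offered).items():
    print(f"{name}: {got:.1f} Gb/s served")
# -> HPU 5.5, HPM 4.5, LPM 0.0, LPU 0.0: HPU sees no loss, lower classes absorb it.
# During the low phase (HPM = 1.0 Gb/s) the offered load is exactly 10 Gb/s and
# every class is served in full.
```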
CRS IPMulticast Test Summary • CRS is the IPTV / IPMcast Industry Leader • IPMcast convergence is exceptionally fast • And improving through constant engineering efforts • No need to chase phantom numbers • QOS performance is unmatched
Agenda • IPTV Deployments • Impact of Packet Loss on MPEG2 Frames • Network Impairment Contributors • CRS IPMulticast Test Data • Summary • Future Challenges
IPTV Deployments today • Native IPMulticast performance in the CRS is a driving factor • Mcast PPS • Fan-out • IPMcast convergence • Unicast/Mcast QOS • 200 - 1000 (S,G)s is typical • Cable, DSL, and Satellite • The largest is testing 10,000 (S,G)s as its goal • Has chosen native IPMcast over PtMP • High performance of the CRS (exceeded requirements) • Simple to configure, maintain, operate, etc..
IPTV Deployments Today • Beginning to see the 50ms "requirement" abandoned • Scalability and stability of CRS/XR • Simplicity of IPMcast • Overlay solutions do not reduce the frequency of impairments • Never reach 0 loss (never really reach the 50ms goal) • May not address link-up transitions • Add excessive network and operational complexity for little gain • Selecting the right platform is the best solution
Agenda • IPTV Deployments • Impact of Packet Loss on MPEG2 Frames • Network Impairment Contributors • CRS IPMulticast Test Data • Summary • Future Challenges
Future Challenges • Current IPTV is a value added service • On-net injection • PPV or local Advertising Revenue • Walled Garden • Edge provider “owns” the customer • Will this last?
Future Challenges • Access bandwidth is driven by competition • Access bandwidth rapidly surpassing video bandwidth • Video bandwidth is semi-bounded
Future Challenges • IPTV works as a Value Added service today • Access bandwidth growth opens up new applications • Over-the-top video is already here - in some form.. • Joost, MacTV, YouTube, BitTorrent, AMT • More available bandwidth will only improve these applications • DVRs are changing how people watch TV • Consumers don’t care how their DVRs are populated • Will live-TV be relevant in the future?
Future Challenges • How does a provider stay in the food chain? • Continue to expand the content offering • Stay ahead of the curve • Open IPMcast transport to off-net content • Look for key strategic content partners • Integrated Directory API • Cisco/SciAtl