CCAP: Converged Cable Access Platform
Gerry White, Distinguished Engineer, CTO Group CABU
With material shamelessly stolen from multiple industry-wide CCAP sources
July 2014
Agenda • Why CCAP? • What is CCAP? • Differences from current CMTS • CCAP components & implementation • Distributed CCAP / Remote PHY • NFV & virtualized CCAP
Cable Operator Challenges: Meeting Traffic Growth • More Personal • More Interactive • More Video • More Devices • Keep up with unprecedented bandwidth growth • Migrate to an all-IP network using the existing infrastructure • Pressure to reduce rack space and power
CCAP Objectives • Converged multi-service platform - single port per service group (SG) • Increased DOCSIS capacity per SG • Reduced cost per downstream • Reduced rack space per system • Scalable deployment options
Current Head End • Separate CMTSs & EQAMs • Limited channel capacity per platform • Multiple platforms for each service • Complex combining • Scaling problems as SGs are added [Diagram: router feeds broadcast/narrowcast CMTS and UEQAM; OOB and analog signals pass through a combining network to the laser/receiver optics]
CCAP Head End • Combines CMTS & EQAM • Higher performance • Single port per SG • Simpler combining • Easier scaling [Diagram: router feeds digital video and data into the CCAP; OOB and analog signals join in a reduced combining network before the laser/receiver optics]
CCAP with Analog Optical Interfaces • Includes optics and combining • Further space reductions [Diagram: router feeds the CCAP, which integrates OOB handling and analog optics directly]
CCAP • Integration of services • One port per SG • High capacity & density • Lower costs • Efficiency & scale • Centralization of resources • Hub in a box
DOCSIS 3.1 • Goals • Achieve 10+ Gbps in the DS • Achieve 1+ Gbps in the US • Backward compatibility with DOCSIS 3.0, 2.0, & 1.1 • Better spectral efficiency • Technology • OFDM, OFDMA, LDPC • New DS and US spectrum • Re-use of DOCSIS 3.0 MAC concepts • This will allow DOCSIS 3.1 to offer services competitive with FTTH
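The 10+ Gbps downstream goal can be sanity-checked with a back-of-envelope OFDM capacity calculation. The figures below (190 MHz of active spectrum per 192 MHz channel, 50 kHz subcarrier spacing, 4096-QAM, ~0.89 LDPC code rate) are illustrative assumptions, not exact DOCSIS 3.1 parameters, and the model ignores cyclic prefix and pilot overhead:

```python
# Back-of-envelope DOCSIS 3.1 downstream capacity (illustrative figures only).

def ofdm_capacity_bps(active_bw_hz, subcarrier_spacing_hz,
                      bits_per_subcarrier, code_rate):
    """Approximate PHY-layer throughput of one OFDM channel.

    Ignores cyclic prefix, pilots, and PHY framing overhead, so the
    result is an upper bound, not an exact DOCSIS figure.
    """
    subcarriers = active_bw_hz / subcarrier_spacing_hz
    symbol_rate = subcarrier_spacing_hz  # symbols/s per subcarrier (no CP)
    return subcarriers * symbol_rate * bits_per_subcarrier * code_rate

# One 192 MHz channel: ~190 MHz active, 50 kHz spacing,
# 4096-QAM (12 bits/symbol), LDPC code rate ~0.89 (assumed).
per_channel = ofdm_capacity_bps(190e6, 50e3, 12, 0.89)
print(f"per 192 MHz channel: {per_channel / 1e9:.2f} Gbps")

# How many such channels pass the 10 Gbps downstream goal?
channels = 1
while channels * per_channel < 10e9:
    channels += 1
print(f"channels needed for 10 Gbps: {channels}")
```

Under these assumptions each 192 MHz block delivers roughly 2 Gbps, so on the order of five such channels reach the 10 Gbps downstream target.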
CCAP Objectives • Converged multi-service platform - single port per SG • Increased DOCSIS capacity per SG • Reduced cost per downstream • Reduced rack space per system • Scalable deployment options • + DOCSIS 3.1 [Chart: Internet traffic growth, exabytes per month by year]
Today’s Headend [Diagram: IP services (data, VoIP, IP video) feed a CMTS and DOCSIS EQAM through a DOCSIS combining network; digital video services (linear, VoD, NPVR) feed VoD, SDV, and broadcast EQAMs through separate combining networks; forward combiners merge everything onto HFC service groups 1..N] • Inefficient EQAM capacity utilization, complex combining networks
Integrated CCAP Architecture [Diagram: IP services (data, VoIP, IP video) and digital video services (linear, VoD, NPVR) feed an integrated CCAP containing the CMTS and a universal EQAM; forward combiners deliver one port per SG onto HFC/PON service groups 1..N] • Increase capacity & reduce cost, rack space, and power consumption • One port per SG
Generic CCAP Components • Northbound interfaces to core: 10G Ethernet backhaul PICs • Supervisor & packet engines: routing, packet processing, control & management • Southbound interfaces to HFC: DOCSIS + EQAM line cards with RF or optical PICs • Active line cards plus spare line card(s) • External timing input • Digital and RF mid-planes
CCAP – Front • Supervisor Cards • Integrated backhaul capacity • 1+1 redundancy • N x 10G interfaces • RF Line Cards • Port per SG • Full spectrum per port • DS + US on one card, or separate DS cards + US cards • N+1 redundancy with integrated RF switch • Power Supplies • 5-10 kW
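The N+1 line-card redundancy and chassis power figures above translate into per-SG numbers in a straightforward way. In the sketch below every value (slot count, ports per card, Gbps per SG, power draw) is a hypothetical example, not a vendor specification:

```python
# Hypothetical CCAP chassis budget; all numbers are illustrative examples.

def chassis_summary(line_card_slots, spare_cards, ports_per_card,
                    gbps_per_sg, power_kw):
    """Derive per-SG capacity and power from chassis-level figures."""
    active = line_card_slots - spare_cards   # N+1: one card held as spare
    sgs = active * ports_per_card            # one port per service group
    return {
        "service_groups": sgs,
        "downstream_gbps": sgs * gbps_per_sg,
        "watts_per_sg": power_kw * 1000 / sgs,
    }

s = chassis_summary(line_card_slots=10, spare_cards=1, ports_per_card=8,
                    gbps_per_sg=1.0, power_kw=7.5)
print(s)
```

With these assumed numbers, 9 active cards x 8 ports serve 72 SGs at roughly 100 W per SG, which is the kind of space/power arithmetic the CCAP consolidation argument rests on.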
CCAP – Rear • RF Line Card PICs • High-density connectors • Integrated analog optics • Remote PHY digital optics • Supervisor PICs • N x 10 GE ports • Management • Timing • Power connections • Cooling • Exhaust fans
CCAP Impact • Capacity • DOCSIS 3.0 + 3.1 • Scale from 1 to 10 Gbps downstream per SG • 100 to 200 Gbps backhaul initially, more later • Convergence • DOCSIS, IP and MPEG video, narrowcast and broadcast • Reduced costs • Next-generation silicon: processing, packet forwarding, DOCSIS • High level of integration • Reduced cost per channel • Reduced space & power • Integration • Reduced combining • Integrated optics
Remote PHY Goals • Remove RF from head end / hub • Replace analog fiber from hub to node with digital • Leverage Ethernet / PON and digital optics • Extend IP networking to the node • Simplify operations • Keep the node as simple as possible • Keep the complex software central
CCAP with Centralized PHY • In an integrated CCAP (I-CCAP), the CMTS and EQAM share a common PHY • The PHY provides digital-to-analog conversion • The clock is local to the CCAP platform [Diagram: DOCSIS and video L2 MACs share a common L1 PHY and local clock, producing the RF output]
CCAP with Remote PHY • The CCAP PHY chip is remotely located and connected over Ethernet • Digital-to-analog conversion occurs in the Remote PHY node • Remote DTI (R-DTI) manages transfer of time and frequency [Diagram: DOCSIS and video L2 MACs in the CCAP core connect over Ethernet via DEPI and UEPI to the Remote PHY, which holds the L1 PHY, its own clock, and the RF output]
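Conceptually, DEPI and UEPI carry frames between the MAC in the CCAP core and the Remote PHY as pseudowires over the digital network: a session identifier and sequence number in front of the payload. The toy sketch below illustrates only that idea; the field layout is invented for illustration and is not the real L2TPv3/DEPI header format:

```python
import struct

# Toy sketch of a DEPI-style pseudowire frame: a session ID and sequence
# number prepended to a DOCSIS/MPEG payload, carried over IP/Ethernet to
# the Remote PHY node. Field sizes are illustrative, NOT the real
# L2TPv3/DEPI layout.

def encap(session_id: int, seq: int, payload: bytes) -> bytes:
    """Prepend a 4-byte session ID and 2-byte sequence number."""
    return struct.pack("!IH", session_id, seq) + payload

def decap(frame: bytes):
    """Split a frame back into (session_id, seq, payload)."""
    session_id, seq = struct.unpack("!IH", frame[:6])
    return session_id, seq, frame[6:]

# A 188-byte MPEG-TS packet (sync byte 0x47) as the example payload.
mpeg_cell = b"\x47" + b"\x00" * 187
frame = encap(session_id=0x1001, seq=42, payload=mpeg_cell)
sid, seq, payload = decap(frame)
print(sid, seq, len(payload))   # 4097 42 188
```

The sequence number matters because, unlike the old analog link, the digital path can reorder or drop packets, so the Remote PHY must be able to detect and handle that.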
Fiber Deeper & Remote PHY • Adapts CCAP to an HFC plant that contains digital fiber instead of linear fiber • DOCSIS signaling remains end-to-end [Diagram: Internet traffic reaches the CCAP core (L2 and above); Remote PHY signaling runs over digital fiber (IP) to the Remote PHY node; coax carries DOCSIS to the CM; DOCSIS provisioning, PacketCable, and the policy server signal end-to-end]
Remote PHY Impact • Remove RF from head end / hub • Replace analog fiber from hub to node with digital • Leverage Ethernet / PON and digital optics • Extend IP networking to the node • Enabler for virtualization [Diagram: router feeds digital video and data into the CCAP; OOB handling and digital optics replace the analog combining]
NFV Concept • Leverage data center tools and technology • Run network functions in VMs in data centers • Enablers • Hypervisor and cloud computing technology • Improving x86 hardware performance • Value Proposition • Shorter innovation cycle • Improved service agility • Reduction in CAPEX and OPEX • Applications • CCAP? NMS, dDOS, WLC, DHCP, CGN, DPI, RaaS, SBC, DNS, caching, CDN, SDN controller, PCRF, firewall, IPS, virus scan, WAAS, BRAS, portal, NAT
vCCAP? • With Remote PHY, CCAP -> CCAP core + Remote PHY • With no RF interfaces, the CCAP core is a candidate for virtualization • vCCAP runs in a VM on a standard server platform with Ethernet interfaces • CCAP = CMTS + EQAM • vCCAP is actually vCMTS + vEQAM • CCAP becomes vCMTS + vEQAM + R-PHY
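The decomposition on this slide can be sketched as a simple data model: a virtualized core with no RF, fronting a set of Remote PHY nodes. All class and field names below are invented for illustration:

```python
# Sketch of the split CCAP = vCMTS + vEQAM + Remote PHY nodes.
# Names are invented for illustration, not from any real product or API.

from dataclasses import dataclass, field

@dataclass
class RemotePhyNode:
    """A node in the plant holding the PHY; connected over Ethernet."""
    node_id: str
    service_groups: int

@dataclass
class VirtualCcapCore:
    """CCAP core (MAC and above) in a VM; no RF interfaces of its own."""
    vcmts: str = "DOCSIS MAC + scheduling"
    veqam: str = "video QAM mapping"
    nodes: list = field(default_factory=list)

    def total_sgs(self) -> int:
        # Capacity scales by attaching more Remote PHY nodes, not by
        # adding RF line cards to a chassis.
        return sum(n.service_groups for n in self.nodes)

core = VirtualCcapCore()
core.nodes += [RemotePhyNode("node-1", 4), RemotePhyNode("node-2", 4)]
print(core.total_sgs())   # 8
```

The point of the model is the scaling property: the virtualized core grows by attaching nodes over the network rather than by adding RF hardware to a chassis.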
Why NFV? • NFV is a direction Service Providers are heading in an effort to reduce OPEX • It allows generic hardware with specialized software applications • It trades specialized hardware for less optimized common platforms • It uses standard management and orchestration tools • The NFV and orchestration required is not simple, but • It is heavily leveraged from the data center • It is mainstream technology • It could have significant advantages, especially for scaling & OPEX • Physical versus virtual will be a choice
Evolved Network Infrastructure [Diagram: an Evolved Services Platform orchestrates residential & business service applications and end-to-end connectivity; a CCAP core with NFV, FTTx OLT, and the installed CCAP base connect over Ethernet to Remote PHY shelves and nodes, ONTs, and NIDs on the HFC plant] • Small hub, linear fiber - classic HFC • Deep fiber, digital fiber - high-SLA commercial, select residential
Gerry White gerrwhit@cisco.com