APAN meeting, Fukuoka, Japan, 24th January 2003. The Kent Ridge Advanced Network (KRAN). Lek-Heng NGOH PhD, Deputy Director, SingAREN & Research Manager, Institute for Infocomm Research, A*STAR, Singapore.
Goal To research and develop an advanced IP-over-optical network infrastructure with support for grid computing
Approach • Work focuses on the following layers: • Advanced IP Layer • Optical Layer • Grid Middleware Layer
Approach • Design and setup optical testbed • Test and evaluate three emerging LAN/WAN technologies – GE, POS and RPR • Trial and study of optical plane signaling and control solutions • Evaluate and test KRAN with grid middleware & applications • Conclusion
Timeline (phases: project planning; KRAN formation and network design; tender process and optical technologies selection; network connectivity, IP addressing and configuration; detailed test plans and logistics planning; staging tests) • 1 Mar: KRAN launch • 15 Mar: A*STAR grant • 12 Apr: Closed tender • 25 Apr: KRAN kick-off, 1st SCM • 13 May: Cisco/SCS solution implemented • 1 Jun: BII trainee joins • 1 Jul: Official staging • 10 Jul: Equipment arrival • 11 Jul: 1st power-up test • 18 Jul: 2nd SCM
Time Schedule • Early Oct: Complete RPR indoor tests (completed) • Early Nov: Complete POS indoor tests (completed) • Early Dec: Complete GE indoor tests (completed) • Late Dec to early Jan 03: Deployment • Early Mar 03: Complete outdoor tests • End Aug 03: Application tests. Items marked completed were shown in red on the original slide. The KRAN project is on schedule.
KRAN Project Working Group • Wong Yew Fai (CC) • Wong Chiang Yoon (LIT) • Nigel Teow Teck Ming (BII-CC) • Cisco Systems • SCS (Singapore) Ltd
Detailed Physical Map: optical fibre links the sites NUS CC, IMCB, NUS EE, SoC and I2R/BII (map legend: optical node, IP/Layer-2 node, optical fibre).
Staging Connections: SmartBits (SMB) testers connect through the three 10720 routers and the ONS 15194 IP traffic aggregator via an attenuator (AC-DC powered).
Deployment Connections: SMB testers, the ONS 15194 and the three 10720 routers linked over 0.75 km of NUS fibre and a 1 km fibre drum (AC-DC powered).
Addressing and Naming: 172.18.44.0/24 (NUSNET side: 172.18.36.1 to 172.18.36.252) • Loopbacks: hosts 0 to 31 • Backbone: hosts 32 to 63, carved into /30 point-to-point links • CC: hosts 64 to 127 (/26) • I2R: hosts 128 to 191 (/26) • SOC: hosts 192 to 254 (/26). Router IDs: Switch .1, CC .2, I2R .3, SOC .4.
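The address plan above can be sanity-checked with Python's standard `ipaddress` module. The exact block boundaries below are read off the slide and are assumptions, not an authoritative configuration.

```python
import ipaddress

# Hypothetical reconstruction of the KRAN plan: carve 172.18.44.0/24
# into the ranges listed above (block sizes inferred from the slide).
base = ipaddress.ip_network("172.18.44.0/24")
plan = {
    "loopbacks": ipaddress.ip_network("172.18.44.0/27"),    # hosts 0-31
    "backbone":  ipaddress.ip_network("172.18.44.32/27"),   # hosts 32-63
    "CC":        ipaddress.ip_network("172.18.44.64/26"),   # hosts 64-127
    "I2R":       ipaddress.ip_network("172.18.44.128/26"),  # hosts 128-191
    "SOC":       ipaddress.ip_network("172.18.44.192/26"),  # hosts 192-255
}

# Every block must sit inside the /24 and none may overlap.
blocks = list(plan.values())
assert all(net.subnet_of(base) for net in blocks)
assert not any(a.overlaps(b) for i, a in enumerate(blocks)
               for b in blocks[i + 1:])

# The backbone /27 yields eight /30 point-to-point links.
links = list(plan["backbone"].subnets(new_prefix=30))
print(len(links), links[0])  # 8 172.18.44.32/30
```

A /30 per backbone link matches the .33/.34, .37/.38, .41/.42 pairs visible in the original diagram.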
Project Plans • Nine major items to test • Throughput/Delay/Loss/Jitter • QoS • Fault Recovery • Service Provisioning • Network Management • IP support • Multicast • MPLS • Others
Apparatus Used • SmartBits as Traffic Generator • SmartFlow software to drive SmartBits • 3 x 10720 routers • 1 x ONS15194 IP traffic aggregator • 6 x 15km fiber drums • 6 x 10dB attenuators • Relevant fiber patch cords • Optional: Catalyst 3550 switches
Item 1 – Throughput. May be subject to change.
Network Performance (4) • Throughput = packets sent without loss • Throughput is better for large frame sizes • The limitation is the router's ability to handle very high packet rates • For large frame sizes, throughput approaches line rate
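The packet-rate argument can be made concrete with a back-of-envelope calculation. This is a sketch assuming Ethernet-style per-frame overhead of 20 bytes (preamble plus inter-frame gap); the 2.4 Gbps figure is the ring rate from the testbed.

```python
# Approximate packets/sec a router must forward at a given line rate.
# Small frames imply millions of forwarding decisions per second,
# which is what limits throughput here.
LINE_RATE_BPS = 2.4e9  # RPR ring rate used in the testbed

def packets_per_second(frame_bytes, rate_bps=LINE_RATE_BPS):
    # 20 B per frame for preamble + inter-frame gap (Ethernet-style)
    wire_bits = (frame_bytes + 20) * 8
    return rate_bps / wire_bits

print(round(packets_per_second(64)))    # ~3.57 million pps at 64 B
print(round(packets_per_second(1518)))  # ~195 thousand pps at 1518 B
```

An 18x difference in required forwarding rate explains why small-frame throughput falls well short of line rate while large frames approach it.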
Network Performance (5) • As loading increases, frame loss increases. • Frame loss is large and starts at low loading for small frame sizes • Same reasoning: the router's packet-rate limitation • For large frame sizes (>512 bytes), loss is about 7% at 2.6 G loading.
Network Performance (6) • As loading increases, latency increases. • Again, large frame sizes outperform small frame sizes • Three plateaus are visible: • The lowest is the minimum time it takes packets to traverse about 22.5 km of fibre • The two others correspond to queuing (e.g. interface and processor queues)
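The lowest plateau can be sanity-checked against pure propagation delay. The speed of light in fibre (~2e8 m/s) is a standard approximation, not a value from the test report.

```python
# Propagation delay over ~22.5 km of fibre.
FIBRE_M = 22.5e3
LIGHT_IN_FIBRE_M_PER_S = 2.0e8  # roughly 2/3 of c in glass

delay_us = FIBRE_M / LIGHT_IN_FIBRE_M_PER_S * 1e6
print(f"{delay_us:.1f} us")  # 112.5 us: the floor under the latency curves
```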
Network Performance (7) • As loading increases, latency deviation increases (intuitive) • Similarly, large frame sizes outperform small frame sizes (router limitation) • Plateaus are also evident, due to queuing; inherited from the latency graphs earlier
Network Performance (8) • What is presented is only a portion of the experiments conducted. • Other experiments include: • Using attenuators instead of fiber drums • Stressing the GE/FE module instead of the RPR module • Driving symmetric traffic (1.3 + 1.3 Gbps) rather than asymmetric traffic (2 + 0.6 Gbps) • TCP/UDP/IP testing
Network Performance (9) • Some conclusions include: • Fibre drum (7 dB) results are better than attenuator (10 dB) results • The GE/FE module cannot handle 2.6 Gbps of input traffic and becomes a bottleneck before packets can even be sent out of the RPR interface. • No difference between TCP and UDP in terms of frame loss, latency and latency standard deviation. • Multiple TCP flows and single TCP flows perform the same.
Throughput Test • POS results are poor (hardware-card related) • RPR is better for larger frame sizes. • GE is seemingly better for smaller frame sizes. • GE (routers) is worse than GE (switches) because of IP-processing overheads
Frame Loss Test • Related to the throughput results • RPR performs best at large frame sizes • GE (switching) is generally better than the other technologies (except RPR at large frame sizes) • POS results are again the worst, due to the hardware card.
Item 2 – QoS. May be subject to change.
Example Test Item – RPR QoS (KRAN07-R2-QoS-RPR.doc): SRP queues are configured on all 10720 routers on the 2.4 Gbps RPR ring; each 10720 maps SRP priority bits to the appropriate traffic class; SMB testers inject 0.48 Gbps and measure throughput, delay, jitter and loss at both ends.
Layer 2 QoS Testing (4): a CBWFQ scheduler classifies traffic into Class High and Class Default; the mapper maps Class High to SRP priority 7 and Class Default to the default SRP 0; at the SRP transmit interface, the slicer sends SRP priorities 5 to 7 to the HI queue (80%) and the rest to the default LO queue (20%).
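The mapping and slicing rule can be sketched as follows. This is a simplified model of the behaviour described on the slide, not actual 10720 firmware.

```python
# Simplified model of the SRP transmit-side slicer described above:
# SRP priorities 5-7 go to the HI queue (served ~80% of the time),
# everything else to the default LO queue (~20%).
HI_WEIGHT, LO_WEIGHT = 0.8, 0.2

def slice_queue(srp_priority):
    """Return the transmit queue for a given SRP priority (0-7)."""
    return "HI" if 5 <= srp_priority <= 7 else "LO"

# Class High maps to SRP 7, Class Default to SRP 0 (per the mapper).
assert slice_queue(7) == "HI"  # Class High traffic
assert slice_queue(0) == "LO"  # Class Default traffic
print([slice_queue(p) for p in range(8)])
```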
Item 3 – Fault. May be subject to change.
Example Test Item – RPR Fault (KRAN10-R1-QoS-fault.doc): a fault is introduced on the 2.4 Gbps RPR ring of 10720 routers while SMB testers at both ends measure throughput, delay, jitter and loss.
Fault Recovery • RPR (IPS) recovers in less than 5 ms, well within the 50 ms telecom standard for voice. • POS recovers in 7.5 s • GE (STP) recovers in almost 1 minute. • GE (RSTP) recovers in about 1.65 s. • RPR is the clear winner
Item 4 – Service Provisioning. May be subject to change.
Item 5 – Network Management. May be subject to change.
Item 6 – IP Support. May be subject to change.
Item 7 – Multicast. May be subject to change.
Item 8 – MPLS. May be subject to change.
Item 9 – Others. May be subject to change.
Optional Items • IPv6 • Security Features • Jumbo Frame Support
Time Table: deploy the best network, then switch over.
Deliverables • 1 x Safety Document (end July) - Done • 1 x RPR indoor Test Report (mid Oct) - Done • 1 x POS indoor Test Report (mid Nov) - Done • 1 x GE indoor Test Report (mid Dec) – Almost Done • 1 x Staging Test Report (early Jan) – in progress • 1 x Final Report (End Apr)
Evaluation • From the experimental results: • GE is strong in network stress, QoS and pricing • POS is strong in multicast • RPR is strong in QoS and fault recovery • Were it not for fault recovery, GE would be a good choice for many networks.
Evaluation • However, a more systematic approach was used to determine the best of the three technologies (RPR, POS, GE) • For each category (e.g. stress, QoS, fault recovery), a ranking was given. • Weights are assigned to each category depending on network requirements (e.g. if the network has strict fault-recovery requirements, the fault-recovery category receives a higher weighting than the others.)
Evaluation • Fault Recovery • A rank of 3 is better than 2, and 2 is better than 1

             RPR     POS    GE
Data         4.5 ms  7.5 s  1.65 s
Ranking      3       1      2
Evaluation • Other categories (QoS, stress, etc.) are ranked similarly. The table below briefly illustrates; the actual ranking has more detail. • NB: A rank of 3 is better than 2, and 2 is better than 1

Ranking      RPR  POS  GE
QoS          3    1    2
Stress       2    1    3
Multicast    2    3    1
Evaluation • Weights (example weights shown in parentheses) are assigned to each category depending on its importance to the user network.

Ranking        RPR  POS  GE
Fault (2)      3    1    2
QoS (2)        3    1    2
Stress (3)     2    1    3
Multicast (1)  2    3    1
Costs (4)      1    1    3
Evaluation • The preferred technology is chosen by score: the product of the weight vector and the technology-ranking matrix.

Ranking        RPR  POS  GE
Fault (2)      3    1    2
QoS (2)        3    1    2
Stress (3)     2    1    3
Multicast (1)  2    3    1
Costs (4)      1    1    3
Score          24   14   30
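The score computation can be reproduced directly from the tables above; the weights and rankings are the example values from the slides.

```python
# Weighted scoring of the three technologies: score = sum over
# categories of (category weight x technology ranking).
weights = {"Fault": 2, "QoS": 2, "Stress": 3, "Multicast": 1, "Costs": 4}
rankings = {
    "RPR": {"Fault": 3, "QoS": 3, "Stress": 2, "Multicast": 2, "Costs": 1},
    "POS": {"Fault": 1, "QoS": 1, "Stress": 1, "Multicast": 3, "Costs": 1},
    "GE":  {"Fault": 2, "QoS": 2, "Stress": 3, "Multicast": 1, "Costs": 3},
}

scores = {tech: sum(weights[cat] * rank[cat] for cat in weights)
          for tech, rank in rankings.items()}
print(scores)                       # {'RPR': 24, 'POS': 14, 'GE': 30}
print(max(scores, key=scores.get))  # GE wins for these weights
```

Changing the weights (say, raising Fault and lowering Costs) immediately changes the winner, which is the point of the sensitivity remark on the next slide.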
Evaluation • GE has the highest score (30) and is the preferred technology for the given weights. • Had the weights favoured fault-recovery times over pricing, RPR would have won.

Ranking   RPR  POS  GE
Score     24   14   30
Conclusion • All indoor tests have been completed. • Experimental results were presented (fault recovery, stress test, QoS, multicast). • All 10720 routers have been deployed at CC, SOC and I2R. • Backbone connectivity between the deployed nodes is up. • Half the milestones have been achieved and more than half of the deliverables completed. • Outdoor tests will commence next. • An evaluation comparing the technologies was presented: • GE -> QoS, stress, pricing • POS -> multicast • RPR -> QoS, fault recovery
Objectives • To experiment with and identify suitable optical-network signalling and control software solutions (GMPLS, OGSI) for the following cross-layer activities: • Traffic engineering/QoS management • Fault protection and recovery • To support data-in-network research