This overview examines the challenges of architectural complexity in network protocol engineering, emphasizing simple wins, scalability, and graceful upgrades. It discusses deployment problems, cross-cutting concerns, and the impact of complexity on protocol design and implementation. Through examples and a discussion of how to measure complexity, it explores the importance of reusability, minimalism, and learning from failures in creating efficient and effective protocols, and touches on the integration of functionality and the role of domain experts in modern protocol design.
Architectural Complexity
Henning Schulzrinne, Dept. of Computer Science, Columbia University
Overview
• Deployment problems
• Layer creep
• Simple and universal wins
• Scaling in human terms
• Cross-cutting concerns, e.g.,
  • CPU vs. human cycles
    • we optimize the $100 component, not the $100/hour labor
  • introspection
  • graceful upgrades
  • no policy magic
Simple wins (mostly)
• Examples:
  • Ethernet vs. all other L2 technologies
  • HTTP vs. HTTPng and all the other hypertext attempts
  • SMTP vs. X.400
  • SDP vs. SDPng
  • TLS vs. IPsec (simpler to re-use)
  • no QoS & MPLS vs. RSVP
  • DNS-SD (“Bonjour”) vs. SLP
  • SIP vs. H.323 (but conversely: SIP vs. Jabber, SIP vs. Asterisk)
• Efficiency is not what wins:
  • BitTorrent, P2P searching, RSS, …
Measuring complexity
• Traditional O(.) metrics are rarely helpful
  • except maybe for routing protocols
• Focus on parsing and messaging complexity
  • marginally helpful, but no engineering metrics for trade-offs
• No protocol engineering discipline; we lack:
  • guidelines for design
  • learning from failures
    • we have plenty to choose from, but it is hard to look at our own (communal) failures
  • re-usable components
    • components not designed for plug-and-play
    • “we don’t do APIs”: we don’t worry about whether a simple API can be written that could be taught in Networking 101 (a toy sketch of such an API follows this slide)
• Separate worlds:
  • most new protocols in the real world are based on web services (WS)
  • IETF stuck in a bubble of one-off protocols (more fun!)
    • re-use considered a disadvantage
    • insular protocols with a local cult following (BEEP)
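The complaint that “we don’t do APIs” raises the question of what a Networking 101-sized API would even look like. The following is a purely illustrative Python sketch, not taken from the talk or any specification: a four-call session abstraction over plain TCP whose class and method names are invented for this example.

```python
import socket


class SimpleSession:
    """Connect, send, receive, close -- nothing more to explain in class."""

    def __init__(self) -> None:
        self._sock: socket.socket | None = None

    def connect(self, host: str, port: int) -> None:
        # Plain TCP underneath; the caller never sees sockets or their options.
        self._sock = socket.create_connection((host, port))

    def send(self, message: bytes) -> None:
        assert self._sock is not None, "connect() first"
        self._sock.sendall(message)

    def receive(self, max_bytes: int = 4096) -> bytes:
        assert self._sock is not None, "connect() first"
        return self._sock.recv(max_bytes)

    def close(self) -> None:
        if self._sock is not None:
            self._sock.close()
            self._sock = None


# Usage (assuming a hypothetical echo service on localhost:7000):
#   s = SimpleSession(); s.connect("localhost", 7000)
#   s.send(b"hi"); print(s.receive()); s.close()
```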
Measuring complexity (cont’d)
• Conceptual complexity
  • can I explain the protocol operation in one class?
  • e.g., PIM-SM, MADCAP, OSPF
• Observable vs. hidden state
  • observable from one side, without a “god box”
  • hidden state and actions increase information complexity
  • unknown variables can have any state (a small illustration follows this slide)
• Number of system interfaces
  • see 3GPP
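As a back-of-the-envelope illustration of why hidden state drives up information complexity (my example, not the slide’s): every hidden boolean variable doubles the number of internal configurations consistent with what an outside observer sees, i.e., adds one bit of uncertainty that cannot be resolved without a “god box”.

```python
import math


def consistent_internal_states(hidden_bool_vars: int) -> int:
    """Internal configurations compatible with a single external observation."""
    return 2 ** hidden_bool_vars


def observer_uncertainty_bits(hidden_bool_vars: int) -> float:
    """Bits of internal state an outside observer cannot resolve."""
    return math.log2(consistent_internal_states(hidden_bool_vars))


if __name__ == "__main__":
    for h in (1, 4, 10):
        print(f"{h} hidden booleans -> {consistent_internal_states(h)} "
              f"consistent states, {observer_uncertainty_bits(h):.0f} bits of uncertainty")
```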
Possible complexity metrics
• new code needed (vs. re-use): re-used code is less likely to be buggy or to have buffer overflows
  • e.g., new text format almost the same
  • numerous binary formats
  • security components
• new identities and identifiers needed
• number of configurable options and parameters
  • must be configured vs. can be configured (with interoperability impact)
  • discoverable vs. manual/unspecified
  • SIP experience: things that shouldn’t be configurable will be
  • RED experience: parameter robustness
• mute-programmer interop test: two implementations, no side channel
• number of “left to local policy” items
  • DSCP confusion
• start-up latency (“protocol boot time”)
  • IPv4 DAD, IGMP
(a rough scorecard combining these counts is sketched below)
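As a hypothetical sketch, not from the slides, of how these counts might be folded into a single number: a small Python scorecard whose field names and weights are invented for illustration. The only point it makes is that every metric on the slide is countable.

```python
from dataclasses import dataclass


@dataclass
class ComplexityScorecard:
    new_code_lines: int        # code written from scratch rather than re-used
    new_identifiers: int       # new identities/identifiers the protocol introduces
    config_options: int        # options that must or can be configured
    discoverable_options: int  # subset of options discovered automatically
    local_policy_items: int    # behaviors "left to local policy"
    boot_time_s: float         # start-up latency ("protocol boot time")

    def score(self) -> float:
        """Rough weighted sum; higher means more complexity exposed to operators.
        The weights are arbitrary placeholders, not calibrated values."""
        manual_options = self.config_options - self.discoverable_options
        return (self.new_code_lines / 1000
                + self.new_identifiers * 2
                + manual_options * 3
                + self.local_policy_items * 5
                + self.boot_time_s)


# Example: a hypothetical protocol with mostly manual configuration
print(ComplexityScorecard(new_code_lines=12000, new_identifiers=2,
                          config_options=30, discoverable_options=5,
                          local_policy_items=4, boot_time_s=2.5).score())
```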
Time for a new protocol stack?
• Now: add “x.5” sublayers and overlays
  • HIP, MPLS, TLS, …
• Doesn’t tell us what we could or should do
  • or where functionality belongs
  • use of upper layers to help lower layers (security associations, authorization)
  • what is innate complexity and what is entropy?
• Examples:
  • Applications: do we need FTP, SMTP, IMAP, POP, SIP, RTSP, HTTP and p2p protocols?
  • Network: can we reduce complexity by integrating functionality or re-assigning it?
    • e.g., should end-to-end security focus on the transport layer rather than the network layer?
Conclusion
• Traditional protocol engineering:
  • “must do congestion control”
  • “must do security”
• New module engineering:
  • re-usable components
  • most protocol design will be done by domain experts (cf. PHP vs. C++)
  • out-of-the-box experience
• What would a clean-room design look like?