
C++ Network Programming Mastering Complexity with ACE & Patterns



Presentation Transcript


  1. C++ Network Programming: Mastering Complexity with ACE & Patterns. Dr. Douglas C. Schmidt, Professor of EECS, Vanderbilt University, Nashville, Tennessee. d.schmidt@vanderbilt.edu, www.cs.wustl.edu/~schmidt/tutorials-ace.html

  2. Motivation: Challenges of Networked Applications Complexities in networked applications • Accidental Complexities • Low-level APIs • Poor debugging tools • Algorithmic decomposition • Continuous re-invention/discovery of core concepts & components • Inherent Complexities • Latency • Reliability • Load balancing • Causal ordering • Scheduling & synchronization • Deadlock • Observation • Building robust, efficient, & extensible concurrent & networked applications is hard • e.g., we must address many complex topics that are less problematic for non-concurrent, stand-alone applications

  3. Presentation Outline • Presentation Organization • Background • Concurrent & network challenges & solution approaches • Patterns & wrapper facades in ACE + applications • These topics cover OO techniques & language features that enhance software quality: • Patterns, which embody reusable software architectures & designs • ACE wrapper facades, which encapsulate OS concurrency & network programming APIs • OO language features, e.g., classes, dynamic binding & inheritance, parameterized types

  4. The Evolution of Information Technologies • CPUs & networks have increased by 3-7 orders of magnitude in the past decade, e.g., from 10 Megahertz to 1 Gigahertz CPUs & from 2,400 bits/sec to 1 Gigabit/sec networks • These advances stem largely from standardizing hardware & software APIs & protocols, e.g.: • Intel x86 & Power PC chipsets • TCP/IP, ATM • POSIX & JVMs • Middleware & components • Quality of service aspects • Increasing software productivity & QoS depends heavily on COTS • Extrapolating this trend to 2010 yields • ~100 Gigahertz desktops • ~100 Gigabits/sec LANs • ~100 Megabits/sec wireless • ~10 Terabits/sec Internet backbone • In general, software has not improved as rapidly or as effectively as hardware

  5. Component Middleware Layers • Historically, mission-critical apps were built directly atop hardware & OS • Tedious, error-prone, & costly over lifecycles • There are layers of middleware, just like there are layers of networking protocols • Standards-based COTS middleware helps: • Control end-to-end resources & QoS • Leverage hardware & software technology advances • Evolve to new environments & requirements • Provide a wide array of reusable, off-the-shelf developer-oriented services • There are multiple COTS layers & research/business opportunities

  6. Operating System & Protocols (diagram: the 20th-century internetworking architecture of applications such as FTP, TFTP, HTTP, DNS, TELNET, & RTP atop TCP/UDP atop IP atop Ethernet/ATM/FDDI/Fibre Channel, contrasted with the 21st-century middleware architecture of middleware applications & middleware services atop middleware atop Solaris, VxWorks, Win2K, Linux, & LynxOS) • Operating systems & protocols provide mechanisms to manage endsystem resources, e.g.: • CPU scheduling & dispatching • Virtual memory management • Secondary storage, persistence, & file systems • Local & remote interprocess communication (IPC) • OS examples • UNIX/Linux, Windows, VxWorks, QNX, etc. • Protocol examples • TCP, UDP, IP, SCTP, RTP, etc.

  7. Host Infrastructure Middleware • Host infrastructure middleware encapsulates & enhances native OS mechanisms to create reusable network programming components, e.g., for asynchronous event handling, asynchronous transfer of control, physical memory access, synchronization, memory management, & scheduling • These components abstract away many tedious & error-prone aspects of low-level OS APIs • Examples • Java Virtual Machine (JVM), Common Language Runtime (CLR), ADAPTIVE Communication Environment (ACE) • www.cs.wustl.edu/~schmidt/ACE.html • www.rtj.org (diagram: host infrastructure middleware sits below distribution middleware, common middleware services, & domain-specific services)

  8. Distribution Middleware • Distribution middleware defines higher-level distributed programming models whose reusable APIs & components automate & extend native OS capabilities • Distribution middleware avoids hard-coding client & server application dependencies on object location, language, OS, protocols, & hardware • Examples • OMG Real-time CORBA & DDS, Sun RMI, Microsoft DCOM, W3C SOAP • realtime.omg.org • en.wikipedia.org/wiki/Data_Distribution_Service

  9. Common Middleware Services • Common middleware services augment distribution middleware by defining higher-level, domain-independent services that focus on programming “business logic” • They support many recurring distributed system capabilities, e.g.: • Transactional behavior • Authentication & authorization • Database connection pooling & concurrency control • Active replication • Dynamic resource management • Examples • CORBA Component Model & Object Services, Sun’s J2EE, Microsoft’s .NET, W3C Web Services

  10. Domain-Specific Middleware • Domain-specific middleware services are tailored to the requirements of particular domains, such as telecom, e-commerce, health care, process automation, or aerospace • Examples • Siemens MED Syngo • Common software platform for distributed electronic medical systems • Used by all ~13 Siemens MED business units worldwide for modalities, e.g., MRI, CT, CR, Ultrasound, etc. • Boeing Bold Stroke • Common software platform for Boeing avionics mission computing systems

  11. Overview of Patterns • Patterns present solutions to common software problems arising within a certain context • They help resolve key software design forces • Flexibility • Extensibility • Dependability • Predictability • Scalability • Efficiency • Patterns generally codify expert knowledge of design strategies, constraints & “best practices” • They capture recurring structures & dynamics among software participants to facilitate reuse of successful designs (UML diagram: the Proxy pattern, in which a Client invokes service() on a Proxy that forwards to a Service, with both Proxy & Service implementing the AbstractService interface) • A minimal code sketch of this structure follows below
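Since the Proxy pattern recurs throughout the tutorial, here is a minimal C++ sketch of the structure in the diagram above; the class & member names simply mirror the diagram & are not taken from ACE.

    #include <iostream>

    class AbstractService {
    public:
      virtual ~AbstractService () {}
      virtual void service () = 0;
    };

    class Service : public AbstractService {
    public:
      virtual void service () { std::cout << "performing service" << std::endl; }
    };

    // The Proxy exports the same interface & forwards to the real Service,
    // optionally adding pre-/post-processing (access control, remoting, etc.).
    class Proxy : public AbstractService {
    public:
      virtual void service () {
        // ... pre-processing could go here ...
        service_.service ();  // delegate to the real subject
        // ... post-processing could go here ...
      }
    private:
      Service service_;
    };

    int main () {
      Proxy p;
      AbstractService &s = p;  // clients program to the abstract interface
      s.service ();
      return 0;
    }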

  12. Overview of Pattern Languages • Motivation • Individual patterns & pattern catalogs are insufficient • Software modeling methods & tools just illustrate how, not why, systems are designed • Benefits of Pattern Languages • Define a vocabulary for talking about software development problems • Provide a process for the orderly resolution of these problems, e.g.: • What are the key problems to be resolved & in what order • What alternatives exist for resolving a given problem • How should mutual dependencies between the problems be handled • How to resolve each individual problem most effectively in its context • Help to generate & reuse software architectures

  13. Taxonomy of Patterns & Idioms

  14. The Layered Architecture of ACE www.cs.wustl.edu/~schmidt/ACE.html • Features • Open-source • 200,000+ lines of C++ • 40+ person-years of effort • Ported to many OS platforms • Large open-source user community • www.cs.wustl.edu/~schmidt/ACE-users.html • Commercial support by Riverace • www.riverace.com/

  15. Sidebar: Platforms Supported by ACE • ACE runs on a wide range of operating systems, including: • PCs, e.g., Windows (all 32/64-bit versions), WinCE; Redhat, Debian, & SuSE Linux; & Macintosh OS X • Most versions of UNIX, e.g., Solaris, SGI IRIX, HP-UX, Digital UNIX (Compaq Tru64), AIX, DG/UX, SCO OpenServer, UnixWare, NetBSD, & FreeBSD • Real-time operating systems, e.g., VxWorks, OS/9, LynxOS, Pharlap TNT, QNX Neutrino & RTP, & RTEMS • Large enterprise systems, e.g., OpenVMS, MVS OpenEdition, Tandem NonStop-UX, & Cray UNICOS • ACE can be used with all of the major C++ compilers on these platforms • The ACE Web site at http://www.cs.wustl.edu/~schmidt/ACE.html contains a complete, up-to-date list of platforms, along with instructions for downloading & building ACE

  16. Key Capabilities Provided by ACE Event Handling & IPC Service Access & Control Synchronization Concurrency

  17. The Pattern Language for ACE • Pattern Benefits • Preserve crucial design information used by applications & middleware frameworks & components • Facilitate reuse of proven software designs & architectures • Guide design choices for application developers

  18. POSA2 Pattern Abstracts
  Service Access & Configuration Patterns
  • The Wrapper Facade design pattern encapsulates the functions & data provided by existing non-object-oriented APIs within more concise, robust, portable, maintainable, & cohesive object-oriented class interfaces.
  • The Component Configurator design pattern allows an application to link & unlink its component implementations at run-time without having to modify, recompile, or statically relink the application. Component Configurator further supports the reconfiguration of components into different application processes without having to shut down & re-start running processes.
  • The Interceptor architectural pattern allows services to be added transparently to a framework & triggered automatically when certain events occur.
  • The Extension Interface design pattern allows multiple interfaces to be exported by a component, to prevent bloating of interfaces & breaking of client code when developers extend or modify the functionality of the component.
  Event Handling Patterns
  • The Reactor architectural pattern allows event-driven applications to demultiplex & dispatch service requests that are delivered to an application from one or more clients.
  • The Proactor architectural pattern allows event-driven applications to efficiently demultiplex & dispatch service requests triggered by the completion of asynchronous operations, to achieve the performance benefits of concurrency without incurring certain of its liabilities.
  • The Asynchronous Completion Token design pattern allows an application to demultiplex & process efficiently the responses of asynchronous operations it invokes on services.
  • The Acceptor-Connector design pattern decouples the connection & initialization of cooperating peer services in a networked system from the processing performed by the peer services after they are connected & initialized.

  19. POSA2 Pattern Abstracts (cont’d)
  Synchronization Patterns
  • The Scoped Locking C++ idiom ensures that a lock is acquired when control enters a scope & released automatically when control leaves the scope, regardless of the return path from the scope.
  • The Strategized Locking design pattern parameterizes synchronization mechanisms that protect a component’s critical sections from concurrent access.
  • The Thread-Safe Interface design pattern minimizes locking overhead & ensures that intra-component method calls do not incur ‘self-deadlock’ by trying to reacquire a lock that is held by the component already.
  • The Double-Checked Locking Optimization design pattern reduces contention & synchronization overhead whenever critical sections of code must acquire locks in a thread-safe manner just once during program execution.
  Concurrency Patterns
  • The Active Object design pattern decouples method execution from method invocation to enhance concurrency & simplify synchronized access to objects that reside in their own threads of control.
  • The Monitor Object design pattern synchronizes concurrent method execution to ensure that only one method at a time runs within an object. It also allows an object’s methods to cooperatively schedule their execution sequences.
  • The Half-Sync/Half-Async architectural pattern decouples asynchronous & synchronous service processing in concurrent systems, to simplify programming without unduly reducing performance. The pattern introduces two intercommunicating layers, one for asynchronous & one for synchronous service processing.
  • The Leader/Followers architectural pattern provides an efficient concurrency model where multiple threads take turns sharing a set of event sources in order to detect, demultiplex, dispatch, & process service requests that occur on the event sources.
  • The Thread-Specific Storage design pattern allows multiple threads to use one ‘logically global’ access point to retrieve an object that is local to a thread, without incurring locking overhead on each object access.
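To make the Scoped Locking abstract concrete, here is a minimal sketch using ACE's ACE_Guard & ACE_Thread_Mutex wrapper facades; the Message_Queue class & its members are illustrative, not ACE's actual message queue.

    #include "ace/Synch.h"  // ACE_Guard, ACE_Thread_Mutex

    class Message_Queue {
    public:
      Message_Queue () : count_ (0) {}

      void enqueue (int item) {
        // Scoped Locking: the guard acquires lock_ in its constructor &
        // releases it in its destructor on *every* return path from scope.
        ACE_Guard<ACE_Thread_Mutex> guard (lock_);
        if (count_ == QUEUE_SIZE)
          return;                 // early return: lock still released
        buffer_[count_++] = item;
      }                           // normal return: lock released here

    private:
      enum { QUEUE_SIZE = 64 };
      ACE_Thread_Mutex lock_;
      int buffer_[QUEUE_SIZE];
      int count_;
    };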

  20. The Frameworks in ACE

  21. Example: Applying ACE in Real-time Avionics • Goals • Apply COTS & open systems to mission-critical real-time avionics • Key System Characteristics • Deterministic & statistical deadlines • ~20 Hz • Low latency & jitter • ~250 usecs • Periodic & aperiodic processing • Complex dependencies • Continuous platform upgrades • Key Results • Test flown at China Lake NAWS by Boeing OSAT II ‘98, funded by OS-JTF • www.cs.wustl.edu/~schmidt/TAO-boeing.html • Also used on SOFIA project by Raytheon • sofia.arc.nasa.gov • First use of RT CORBA in mission computing • Drove Real-time CORBA standardization

  22. Example: Applying ACE to Time-Critical Targets • Goals • Detect, identify, track, & destroy time-critical targets • Time-critical targets require immediate response because: • They pose a clear & present danger to friendly forces & • Are highly lucrative, fleeting targets of opportunity • These challenges are also relevant to TBMD & NMD • Key System Characteristics • Real-time mission-critical sensor-to-shooter needs • Highly dynamic QoS requirements & environmental conditions • Multi-service & asset coordination • Key Solution Characteristics • Efficient & scalable • Affordable & flexible • COTS-based • Adaptive & reflective • High confidence • Safety critical

  23. Example: Applying ACE to Large-scale Routers (diagram: a multi-stage switch fabric of BSEs interconnecting IOMs; see www.arl.wustl.edu) • Goal • Switch ATM cells + IP packets at terabit rates • Key System Characteristics • Very high-speed WDM links • 10²-10³ line cards • Stringent requirements for availability • Multi-layer load balancing, e.g.: • Layer 3+4 • Layer 5 • Key Software Solution Characteristics • High confidence & scalable computing architecture • Networked embedded processors • Distribution middleware • FT & load sharing • Distributed & layered resource management • Affordable, flexible, & COTS

  24. Example: Applying ACE to Hot Rolling Mills • Goals • Control the processing of molten steel moving through a hot rolling mill in real-time • System Characteristics • Hard real-time process automation requirements • i.e., 250 ms real-time cycles • System acquires values representing the plant’s current state, tracks material flow, calculates new settings for the rolls & devices, & submits new settings back to the plant • Key Software Solution Characteristics • Affordable, flexible, & COTS • Product-line architecture • Design guided by patterns & frameworks • Windows NT/2000 • Real-time CORBA (ACE+TAO) • www.siroll.de

  25. Example: Applying ACE to Real-time Image Processing • Goals • Examine glass bottles for defects in real-time • System Characteristics • Process 20 bottles per sec • i.e., ~50 msec per bottle • Networked configuration • ~10 cameras • Key Software Solution Characteristics • Affordable, flexible, & COTS • Embedded Linux (Lem) • Compact PCI bus + Celeron processors • Remote booted by DHCP/TFTP • Real-time CORBA (ACE+TAO) • www.krones.com

  26. Networked Logging Service Example • Key Participants • Client application processes • Generate log records • Server logging daemon • Receives, processes, & stores log records • The logging server example in C++NPv2 is more sophisticated than the one in C++NPv1 • There’s an extra daemon involved • C++ code for all logging service examples is in • ACE_ROOT/examples/C++NPv1/ • ACE_ROOT/examples/C++NPv2/

  27. Patterns in the Networked Logging Service (diagram: the patterns applied in the service include Leader/Followers, Monitor Object, Active Object, Half-Sync/Half-Async, Reactor, Pipes & Filters, Acceptor-Connector, Component Configurator, Proactor, Wrapper Facade, Thread-Safe Interface, Strategized Locking, & Scoped Locking)

  28. ACE Basics: Logging • ACE’s logging facility is usually best for diagnostics • Can customize logging sinks • Filterable logging severities • Portable printf()-like format directives (thread/process ID, date/time, types) • Serializes output across multiple threads • Propagates settings to threads created via ACE • Can log across a network • ACE_Log_Msg class; use the thread-specific singleton most of the time, via the ACE_LOG_MSG macro • Macros encapsulate most usage. Most common: • ACE_DEBUG ((severity, format [, args…])); • ACE_ERROR[_RETURN] ((severity, format [,args…])[, return-value]); • See ACE Programmer’s Guide (APG) tables 3.1 (severities), 3.2 (directives), 3.3 (macros) • A minimal usage sketch follows below
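Here is a minimal sketch of the two most common macros in action; the send_record() function is illustrative.

    #include "ace/Log_Msg.h"

    int send_record (int handle) {
      if (handle < 0)
        // Logs at LM_ERROR severity, then returns -1 from send_record().
        ACE_ERROR_RETURN ((LM_ERROR,
                           ACE_TEXT ("(%t) invalid handle: %d\n"), handle), -1);
      // %t expands to the calling thread's ID.
      ACE_DEBUG ((LM_DEBUG, ACE_TEXT ("(%t) sending on handle %d\n"), handle));
      return 0;
    }

    int main () {
      send_record (-1);  // triggers the LM_ERROR path
      return 0;
    }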

  29. ACE Logging Usage • The ACE logging API is similar to printf(), e.g.: • ACE_ERROR ((LM_ERROR, "(%t) fork failed")); • generates: • Oct 31 14:50:13 1992@ics.uci.edu@2766@LM_ERROR@client::(4) fork failed • and • ACE_DEBUG ((LM_DEBUG, "(%t) sending to server %s", host)); • generates: • Oct 31 14:50:28 1992@ics.uci.edu@1832@LM_DEBUG@drwho::(6) sending to server tango

  30. Logging Severities • You can control which severities are seen at run time using two masks: • Process-wide mask (defaults to all severities enabled) • Per-thread mask (defaults to all severities disabled) • A message is displayed if its severity is enabled in either mask • Since the default is to enable all severities process-wide, all severities are logged in all threads unless you change it • Set the process/instance mask with: • ACE_Log_Msg::priority_mask (u_long mask, MASK_TYPE which); • MASK_TYPE is ACE_Log_Msg::PROCESS or ACE_Log_Msg::THREAD • Any set of severities can be specified (OR’d together) • The per-thread mask initializer can be adjusted (default is all severities disabled): • ACE_Log_Msg::disable_debug_messages(); • ACE_Log_Msg::enable_debug_messages(); • These static methods set & clear a (set of) bits instead of replacing the mask, as priority_mask() does • See ACE_Logging_Strategy for interesting ways of changing it…

  31. Logging Severities Example • To allow threads to decide their own logging, the desired severities must be disabled at the process level & enabled in the thread(s) that should display them, e.g.:

    ACE_LOG_MSG->priority_mask (0, ACE_Log_Msg::PROCESS);
    ACE_Log_Msg::enable_debug_messages ();
    ACE_Thread_Manager::instance ()->spawn (service);
    ACE_Log_Msg::disable_debug_messages ();
    ACE_Thread_Manager::instance ()->spawn_n (3, worker);

  • LM_DEBUG severity (only) is logged in the service thread • Neither LM_DEBUG nor any other severity is logged in the worker threads • Note how severities are “inherited” when threads are spawned

  32. Redirect Logging to a File • The default logging sink is stderr • Redirect to a file by setting the OSTREAM flag & assigning a stream • The flag can be set in two ways: • ACE_Log_Msg::open (const ACE_TCHAR *prog_name, u_long options_flags = ACE_Log_Msg::STDERR, const ACE_TCHAR *logger_key = 0); • ACE_Log_Msg::set_flags (u_long flags); • Assign a stream: • ACE_Log_Msg::msg_ostream (ACE_OSTREAM_TYPE *); (an optional 2nd arg tells ACE_Log_Msg to delete the ostream) • ACE_OSTREAM_TYPE is ofstream where supported, else FILE* • To also stop output to stderr, use open() without the STDERR flag, or ACE_Log_Msg::clr_flags (STDERR) • A sketch combining these steps follows below
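Putting those steps together, a minimal sketch that sends all logging to a file; the file name is illustrative, & the sketch assumes a platform where ACE_OSTREAM_TYPE is a standard ostream.

    #include "ace/Log_Msg.h"
    #include <fstream>

    int main () {
      std::ofstream *output = new std::ofstream ("app.log");
      ACE_LOG_MSG->msg_ostream (output, 1);           // 1 = ACE deletes the stream
      ACE_LOG_MSG->set_flags (ACE_Log_Msg::OSTREAM);  // enable the stream sink
      ACE_LOG_MSG->clr_flags (ACE_Log_Msg::STDERR);   // stop stderr output
      ACE_DEBUG ((LM_DEBUG, ACE_TEXT ("this line goes to app.log\n")));
      return 0;
    }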

  33. Redirect Logging to Syslog • Log output redirected to ACE_Log_Msg::SYSLOG goes to: • Windows NT4 & up: the system’s Event Log • UNIX/Linux: the syslog facility (uses the LOG_USER syslog facility) • Can’t set this with set_flags()/clr_flags(); must use open(). For example: • ACE_LOG_MSG->open (argv[0], ACE_Log_Msg::SYSLOG, ACE_TEXT ("syslogTest")); • Windows: the 3rd arg, if supplied, replaces the 1st as the program name in the event log • To turn it off, call open() again with different flag(s) • This seems odd, but you’re effectively resetting the logging… think of it as reopen()

  34. Logging Callbacks • Logging callbacks are useful for adding special processing or filtering to log output • Derive a class from ACE_Log_Msg_Callback & reimplement: • virtual void log (ACE_Log_Record &log_record); • Use ACE_Log_Msg::msg_callback() to register callback • Also call ACE_Log_Msg::set_flags() to add ACE_Log_Msg::MSG_CALLBACK flag • Beware… • Callback registration is specific to each ACE_Log_Msg instance • Callbacks are not inherited when new threads are created • See ACE_ROOT/examples/Log_Msg for an example
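A minimal sketch of those steps; the Prefix_Callback class is illustrative, & the sketch assumes a narrow-character build so the record text can be printed directly.

    #include "ace/Log_Msg.h"
    #include "ace/Log_Msg_Callback.h"
    #include "ace/Log_Record.h"
    #include <iostream>

    class Prefix_Callback : public ACE_Log_Msg_Callback {
    public:
      virtual void log (ACE_Log_Record &log_record) {
        // Custom processing: prefix each record before displaying it.
        std::cerr << "[callback] " << log_record.msg_data ();
      }
    };

    int main () {
      Prefix_Callback cb;
      ACE_LOG_MSG->msg_callback (&cb);                     // register callback
      ACE_LOG_MSG->set_flags (ACE_Log_Msg::MSG_CALLBACK);  // enable callbacks
      ACE_DEBUG ((LM_DEBUG, ACE_TEXT ("routed through the callback\n")));
      return 0;
    }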

  35. Useful Logging Flags • There are some other ACE_Log_Msg flags that add useful functionality to ACE’s logging: • VERBOSE: Prepends program name, timestamp, host name, process ID, & message priority to each message • VERBOSE_LITE: Prepends timestamp & message priority to each message (this is what ACE test suite uses) • SILENT: Don’t display any messages of any severity • LOGGER: Write messages to the local client logger daemon • Example of using VERBOSE_LITE: ACE_DEBUG ((LM_DEBUG, ACE_TEXT ("This is ACE Version %u.%u.%u\n\n"), ACE_MAJOR_VERSION, ACE_MINOR_VERSION, ACE_BETA_VERSION)); Outputs: Mar 29 09:50:34.891 2005@LM_DEBUG@This is ACE Version 5.4.4
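For completeness, here is a minimal sketch of setting the flag that produces the output shown above.

    #include "ace/Log_Msg.h"
    #include "ace/Version.h"  // ACE_MAJOR_VERSION, ACE_MINOR_VERSION, ACE_BETA_VERSION

    int main () {
      // Prepend timestamp & message priority to each message.
      ACE_LOG_MSG->set_flags (ACE_Log_Msg::VERBOSE_LITE);
      ACE_DEBUG ((LM_DEBUG, ACE_TEXT ("This is ACE Version %u.%u.%u\n\n"),
                  ACE_MAJOR_VERSION, ACE_MINOR_VERSION, ACE_BETA_VERSION));
      return 0;
    }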

  36. Tracing & Logging • ACE’s tracing facility logs function/method entry & exit • Uses logging with severity LM_TRACE, so output can be selectively disabled • Just put the ACE_TRACE macro in the method: #include "ace/Log_Msg.h" void foo (void) { ACE_TRACE ("foo"); // … do stuff } • Says: (1024) Calling foo in file 'test.cpp' on line 3 (1024) Leaving foo • Clever indenting by call depth makes output easier to read • Huge amount of output, so tracing is no-op’d out by default • Enable by rebuilding with config.h having: #define ACE_NTRACE 0 • See ACE_ROOT/examples/Misc for an example

  37. Networked Logging Service Example • Key Participants • Client application processes • Generate log records • Server logging daemon • Receives, processes, & stores log records • We’ll develop an architecture similar to ACE’s, but not the same implementation • C++ code for all logging service examples is in • ACE_ROOT/examples/C++NPv1/ • ACE_ROOT/examples/C++NPv2/

  38. Network Daemon Design Dimensions • Communication dimensions address the rules, form, & level of abstraction that networked applications use to interact • Concurrency dimensions address the policies & mechanisms governing the proper use of processes & threads to represent multiple service instances, as well as how each service instance may use multiple threads internally • Service dimensions address key properties of a networked application service, such as the duration & structure of each service instance • Configuration dimensions address how networked services are identified & the time at which they are bound together to form complete applications

  39. Communication Design Dimensions • Communication is fundamental to networked application design • The next three slides present a domain analysis of communication design dimensions, which address the rules, form, & levels of abstraction that networked applications use to interact with each other • We cover the following communication design dimensions: • Connectionless versus connection-oriented protocols • Synchronous versus asynchronous message exchange • Message-passing versus shared memory

  40. Connectionless vs. Connection-oriented Protocols • A protocol is a set of rules that specify how control & data information is exchanged between communicating entities (diagram: the TCP/IP 3-way handshake, in which the Connector sends SYN, the Acceptor replies SYN/ACK, & the Connector completes with ACK) • Connection-oriented applications must address two additional design issues: • Data framing strategies, e.g., bytestream vs. message-oriented • Connection multiplexing (muxing) strategies, e.g., multiplexed vs. nonmultiplexed • A sketch of the Connector role follows below
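As a preview of the ACE wrapper facades covered later, here is a minimal sketch of the Connector role of the handshake above; the host & port are illustrative.

    #include "ace/SOCK_Connector.h"
    #include "ace/SOCK_Stream.h"
    #include "ace/INET_Addr.h"
    #include "ace/Log_Msg.h"

    int main () {
      ACE_INET_Addr server_addr (10000, "localhost");  // illustrative endpoint
      ACE_SOCK_Connector connector;
      ACE_SOCK_Stream peer;

      // connect() plays the Connector role in the 3-way handshake above.
      if (connector.connect (peer, server_addr) == -1)
        ACE_ERROR_RETURN ((LM_ERROR, ACE_TEXT ("%p\n"),
                           ACE_TEXT ("connect")), 1);

      // Bytestream framing: send_n() transmits exactly the requested bytes.
      peer.send_n ("hello\n", 6);
      peer.close ();
      return 0;
    }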

  41. Alternative Connection Muxing Strategies • In multiplexed connections, all client requests emanating from threads in a single process pass through one TCP connection to a server process • Pros: conserves OS communication resources, such as socket handles & connection control blocks • Cons: harder to program, less efficient, & less deterministic • In nonmultiplexed connections, each client uses a different connection to communicate with a peer service • Pros: finer control of communication priorities & low synchronization overhead, since additional locks aren't needed • Cons: uses more OS resources, & therefore may not scale well in certain environments

  42. Sync vs. Async Message Exchange • Synchronous request/response protocols are the simplest form to implement • Requests & responses are exchanged in a lock-step sequence • Each request must receive a response synchronously before the next is sent • Asynchronous request/response protocols stream requests from client to server without waiting for responses synchronously • Multiple client requests can be transmitted before any responses arrive from a server • These protocols therefore often require a strategy for detecting lost or failed requests & resending them later • A sketch of the synchronous case follows below
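A minimal sketch of the lock-step exchange, assuming an already-connected ACE_SOCK_Stream; the message format & sizes are illustrative.

    #include "ace/SOCK_Stream.h"

    // Send one request & block until its complete response arrives before
    // the caller may issue the next request: the synchronous pattern above.
    int sync_call (ACE_SOCK_Stream &peer) {
      char request[32] = "GET-STATUS";
      char response[32];

      if (peer.send_n (request, sizeof request) == -1)
        return -1;
      if (peer.recv_n (response, sizeof response) == -1)  // blocks for reply
        return -1;
      return 0;
    }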

  43. Message Passing vs. Shared Memory • Message passing exchanges data explicitly via IPC mechanisms • Application developers generally define the protocol for exchanging the data, e.g.: • Format & content of the data • Number of possible participants in each exchange, e.g., point-to-point (unicast), multicast, or broadcast • How participants begin, conduct, & end a message-passing session • Shared memory allows multiple processes on the same or different hosts to access & exchange data as though it were local to the address space of each process • Applications using native OS shared memory mechanisms must define how to locate & map the shared memory region(s) & the data structures that are placed in shared memory

  44. Sidebar: C++ Objects & Shared Memory • Allocating a C++ object in shared memory:

    void *obj_buf = … // Get a pointer to a location in shared memory
    ABC *abc = new (obj_buf) ABC; // Use the C++ placement new operator

  • General responsibilities when using the placement new operator • The pointer passed to the placement new operator must point to a memory region that is big enough & properly aligned for the object type being created • The placed object must be destroyed by explicitly calling the destructor • Pitfalls initializing C++ objects with virtual functions in shared memory • The shared memory region may reside at a different virtual memory location in each process that maps the shared memory • The C++ compiler/linker need not locate the vtable at the same address in different processes that use the shared memory • ACE wrapper façade classes that can be initialized in shared memory must therefore be concrete data types • i.e., classes with only non-virtual methods • A self-contained sketch follows below
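A minimal self-contained sketch of those responsibilities; the ABC class & local buffer are illustrative, & a real program would obtain the buffer from a shared memory mapping.

    #include <new>  // declares the placement new operator

    class ABC {
    public:
      ABC () : value_ (0) {}
      ~ABC () {}
    private:
      int value_;   // no virtual functions, so no vtable pointer to go stale
    };

    int main () {
      // The buffer must be big enough & properly aligned for ABC; a union
      // with a double member is a portable way to get strict alignment.
      union { char buf[sizeof (ABC)]; double align_; } storage;
      void *obj_buf = storage.buf;

      ABC *abc = new (obj_buf) ABC;  // construct the object in place
      // ... use *abc ...
      abc->~ABC ();                  // destroy explicitly; never call delete
      return 0;
    }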

  45. Overview of the Socket API (1/2) • Sockets are the most common network programming API available on operating system platforms • Originally developed in BSD Unix as a C language API to the TCP/IP protocol suite • The Socket API has approximately two dozen functions classified in five categories • A socket is a handle created by the OS that associates it with an end point of a communication channel • A socket can be bound to a local or remote address • In Unix, socket handles & I/O handles can be used interchangeably in most cases, but this is not the case for Windows

  46. Overview of the Socket API (2/2) • The five categories are: • Local context management • Connection establishment & termination • Data transfer mechanisms • Options management • Network addressing

  47. Taxonomy of Socket Dimensions The Socket API can be decomposed into the following dimensions: • Type of communication service • e.g., streams versus datagrams versus connected datagrams • Communication & connection role • e.g., clients often initiate connections actively, whereas servers often accept them passively • Communication domain • e.g., local host only versus local or remote host

  48. Limitations with the Socket APIs (1/2) • Poorly structured, non-uniform, & non-portable • The API is linear rather than hierarchical • i.e., the API is not structured according to the different phases of connection lifecycle management & the roles played by the participants • No consistency among the names • Non-portable & error-prone • Function names: read() & write() can be used for any I/O handle on Unix, but Windows needs ReadFile() & WriteFile() • Function semantics: the same function can behave differently on different operating systems, e.g., accept() can take a NULL client address parameter on Unix/Windows, but will crash on some operating systems, such as VxWorks • Socket handle representations: different platforms represent sockets differently, e.g., Unix uses unsigned integers whereas Windows uses pointers • Header files: different platforms use different names for the header files of the socket API

  49. Limitations with the Socket APIs (2/2) • Lack of type safety • I/O handles are not amenable to strong type checking at compile time • e.g., no type distinction between a socket used for passive listening & a socket used for data transfer • Steep learning curve due to complex semantics • Multiple protocol families & address families • Options for infrequently used features such as broadcasting, async I/O, non-blocking I/O, urgent data delivery • Communication optimizations such as scatter-read & gather-write • Different communication & connection roles, such as active & passive connection establishment, & data transfer • Too many low-level details • Forgetting to use the network byte order before data transfer • Possibility of missing a function, such as listen() • Possibility of a mismatch between protocol & address families • Forgetting to initialize underlying C structures, e.g., sockaddr • Using the wrong socket for a given role

  50. Example of Socket API Limitations (1/3)

    #include <sys/types.h>   // Possible differences in header file names
    #include <sys/socket.h>

    const int PORT_NUM = 10000;

    int echo_server ()
    {
      struct sockaddr_in addr;
      int addr_len;          // Forgot to initialize to sizeof (sockaddr_in)
      char buf[BUFSIZ];
      int n_handle;          // Use of non-portable handle type
      // Create the local endpoint.
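For contrast, here is a hedged sketch of the same fragment with the three annotated pitfalls repaired; it is still raw Socket-API code, which is what motivates the ACE wrapper facades discussed above.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>  // sockaddr_in, htons(), htonl(), INADDR_ANY
    #include <string.h>      // memset()

    const int PORT_NUM = 10000;

    int echo_server ()
    {
      struct sockaddr_in addr;

      // Create the local endpoint (an int handle is still non-portable;
      // Windows would use the SOCKET type).
      int handle = socket (AF_INET, SOCK_STREAM, 0);
      if (handle == -1) return -1;

      // Initialize the underlying C structure & use network byte order.
      memset (&addr, 0, sizeof addr);
      addr.sin_family = AF_INET;
      addr.sin_port = htons (PORT_NUM);
      addr.sin_addr.s_addr = htonl (INADDR_ANY);

      if (bind (handle, (struct sockaddr *) &addr, sizeof addr) == -1)
        return -1;
      if (listen (handle, 5) == -1)  // easy to forget listen() entirely
        return -1;
      // ... accept() & the echo loop would follow ...
      return handle;
    }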
