Implementation/Infrastructure Support for Collaborative Applications Prasun Dewan
Infrastructure vs. Implementation Techniques
• Implementation techniques are interesting when general:
  • Apply to a class of applications
  • The coding of such an implementation technique is an infrastructure
• Sometimes implementation techniques apply to a very narrow set of apps:
  • Operation transformation for text editors
  • These may not qualify as infrastructures
• We will study implementation techniques that apply to both small and large application sets
Collaborative Application
[diagram: coupling among the users of a collaborative application]
Infrastructure-Supported Sharing
[diagram: single-user clients coupled through a sharing infrastructure]
Systems: Infrastructures
• NLS (Engelbart '68) - Post Xerox
• Colab (Stefik '85) - Xerox
• VConf (Lantz '86) - Stanford
• Rapport (Ahuja '89) - Bell Labs
• XTV (Abdel-Wahab, Jeffay & Feit '91) - UNC/ODU
• Rendezvous (Patterson '90) - Bellcore
• Suite (Dewan & Choudhary '92) - Purdue
• TeamWorkstation (Ishii '92) - Japan
• Weasel (Graham '95) - Queen's U.
• Habanero (Chabert et al '98) - U. Illinois
• JCE (Abdel-Wahab '99) - ODU
• Disciple (Marsic '01) - Rutgers
Systems: Products
• VNC (Li, Stafford-Fraser, Hopper '01) - AT&T Research
• NetMeeting - Microsoft
• Groove
• Advanced Reality
• LiveMeeting - Microsoft (pay-by-minute service model)
• Webex (service model)
Issues/Dimensions
• Architecture
• Session management
• Access control
• Concurrency control
• Firewall traversal
• Interoperability
• Composability
• …
[diagram: collaborative systems mapped to implementations along dimensions such as architecture, session management, and concurrency control]
Infrastructure-Supported Sharing
[diagram: single-user clients coupled through a sharing infrastructure]
Architecture?
• Infrastructure/client (logical) components
• Component (physical) distribution
Shared Window Logical Architecture
[diagram: a single application coupled to a window for each user; near-WYSIWIS coupling between the windows]
Centralized Physical Architecture (XTV '88, VConf '87, Rapport '88, NetMeeting)
[diagram: a single X client; pseudo servers relay input and output between it and each user's X server]
Replicated Physical Architecture (Rapport, VConf)
[diagram: an X client replica for each user; pseudo servers relay input between the replicas and each user's X server]
Relaxing WYSIWIS?
[diagram: the near-WYSIWIS shared-window architecture, with one application driving a window for each user]
Model-View Logical Architecture
[diagram: a single model with a view and a window for each user]
Centralized Physical Model (Rendezvous '90, '95)
[diagram: one central model; a view and a window for each user]
Replicated Physical Model (Sync '96, Groove)
[diagram: a model replica for each user, kept consistent by the infrastructure; a view and a window per user]
Comparing the Architectures
[diagram: the architecture design space: window-layer sharing (applications with pseudo servers relaying input/output) side by side with model-view sharing (models with views and windows)]
Architectural Design Space
• Model/view are application-specific
• Text editor model:
  • Character string
  • Insertion point
  • Font
  • Color
• Need to capture these differences in the architecture
Single-User Layered Interaction
[diagram: I/O layers 0 through N, with abstraction increasing toward layer 0; communication layers connect the layer stacks, and layer N drives the PC's physical devices]
Single-User Interaction
[diagram: a single stack of layers 0 through N on one PC, with abstraction increasing toward layer 0]
Example I/O Layers
[diagram: Model, Widget, Window, Framebuffer, PC, with abstraction increasing toward the model]
Layered Interaction with an Object
• Interactor = abstraction representation + syntactic sugar
[diagram: the abstraction {"John Smith", 2234.57} is rendered through successive interactor/abstraction layers as increasingly concrete representations of "John Smith"]
Single-User Interaction
[diagram: a single stack of layers 0 through N on one PC, with abstraction increasing toward layer 0]
Identifying the Shared Layer
[diagram: the program component consists of layers 0 through S, with layer S the shared layer; the user-interface component consists of layers S+1 through N on each PC]
• Higher layers will also be shared
• Lower layers may diverge
Replicating the UI Component
[diagram: layers S+1 through N replicated on each user's PC]
Centralized Architecture
[diagram: layers 0 through S run at a single site; layers S+1 through N run on each user's PC]
Replicated (P2P) Architecture
[diagram: all layers, 0 through N, run on each user's PC]
Implementing the Centralized Architecture
[diagram: the master site runs layers 0 through S, with an input relayer and an output broadcaster; every site runs layers S+1 through N, with a slave I/O relayer at each non-master site]
Implementing the Replicated Architecture
[diagram: every site runs layers 0 through N, with an input broadcaster above each site's shared layer S]
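The two implementations above can be sketched in a few lines of Python. This is an illustrative toy, not any real system's code: the class and method names (Master, Slave, Replica, and so on) are invented, operations are plain strings, and network links are direct method calls.

```python
# Toy sketch of the centralized and replicated implementations.
# All names are illustrative; operations are strings, links are method calls.

# --- Centralized: one master runs the shared layer; slaves relay I/O. ---
class Master:
    def __init__(self):
        self.state, self.slaves = [], []

    def input_from(self, op):
        """Input relayer: process input centrally, then broadcast the output."""
        self.state.append(op)
        for s in self.slaves:
            s.display(list(self.state))   # output broadcaster

class Slave:
    def __init__(self, master):
        self.screen = []
        self.master = master
        master.slaves.append(self)

    def user_input(self, op):
        self.master.input_from(op)        # slave I/O relayer: forward input

    def display(self, output):
        self.screen = output

# --- Replicated: every site runs the shared layer; inputs are broadcast. ---
class Replica:
    def __init__(self):
        self.state, self.peers = [], []

    def user_input(self, op):
        self.apply(op)
        for p in self.peers:              # input broadcaster
            p.apply(op)

    def apply(self, op):
        self.state.append(op)

m = Master(); s1, s2 = Slave(m), Slave(m)
s1.user_input("insert a"); s2.user_input("insert b")
assert s1.screen == s2.screen == ["insert a", "insert b"]

r1, r2 = Replica(), Replica()
r1.peers, r2.peers = [r2], [r1]
r1.user_input("insert a"); r2.user_input("insert b")
assert r1.state == r2.state == ["insert a", "insert b"]
```

Note that in the centralized case only the master holds application state, while in the replicated case every site processes every input, which is exactly what creates the synchronization problems discussed later.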
Classifying Previous Work
[table to fill in: shared layer and replicated vs. centralized for each system]
• XTV
• NetMeeting App Sharing
• NetMeeting Whiteboard
• Shared VNC
• Habanero
• JCE
• Suite
• Groove
• LiveMeeting
• Webex
Classifying Previous Work
• Shared layer:
  • X Windows (XTV)
  • Microsoft Windows (NetMeeting App Sharing)
  • VNC framebuffer (Shared VNC)
  • AWT widget (Habanero, JCE)
  • Model (Suite, Groove, LiveMeeting)
• Replicated vs. centralized:
  • Centralized (XTV, Shared VNC, NetMeeting App Sharing, Suite, PlaceWare)
  • Replicated (VConf, Habanero, JCE, Groove, NetMeeting Whiteboard)
Service vs. Server vs. Local Communication
• Local: the user's site sends the data
  • VNC, XTV, VConf, regular NetMeeting
• Server: the organization's site, connected by LAN to the user's site, sends the data
  • enterprise NetMeeting, Sync
• Service: external sites, connected by WAN to the user's site, send the data
  • LiveMeeting, Webex
Push vs. Pull of Data
• Consumer pulls new data by sending a request for it in response to:
  • notification (MVC)
  • receipt of previous data (VNC)
• Producer pushes data to consumers:
  • as soon as data are produced (NetMeeting, real-time Sync)
  • when the user requests (asynchronous Sync)
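The two delivery styles can be contrasted in a toy sketch. All names (Producer, PushConsumer, PullConsumer) are invented for illustration: the pull consumer receives only a change notification and fetches the data itself, MVC-style, while the push consumer receives the data eagerly.

```python
# Toy sketch of push vs. pull delivery; names are illustrative.

class Producer:
    def __init__(self):
        self.data = None
        self.push_consumers = []   # receive the data itself
        self.pull_consumers = []   # receive only a change notification

    def produce(self, data):
        self.data = data
        for c in self.push_consumers:
            c.receive(data)        # push: data sent as soon as produced
        for c in self.pull_consumers:
            c.notify(self)         # pull: only a notification is sent

class PushConsumer:
    def __init__(self): self.seen = None
    def receive(self, data): self.seen = data

class PullConsumer:
    def __init__(self): self.seen = None
    def notify(self, producer):
        self.seen = producer.data  # consumer asks the producer for the data

p = Producer()
push, pull = PushConsumer(), PullConsumer()
p.push_consumers.append(push)
p.pull_consumers.append(pull)
p.produce("slide 2")
assert push.seen == pull.seen == "slide 2"
```

Pull adds a request/response round trip per update but lets a slow consumer skip intermediate states; push minimizes latency at the cost of sending every update.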
Dimensions
• Shared layer level
• Replicated vs. centralized
• Local vs. server vs. service broadcast
• Push vs. pull of data
• …
Evaluating Design Space Points
• Coupling flexibility
• Automation
• Ease of learning
• Reuse
• Interoperability
• Firewall traversal
• Concurrency and correctness
• Security
• Performance:
  • Bandwidth usage
  • Computation load
  • Scaling
  • Join/leave time
  • Response time:
    • Feedback to actor (local, remote)
    • Feedthrough to observers (local, remote)
  • Task completion time
Sharing Low-Level vs. High-Level Layer
• Sharing a layer nearer the data:
  • Greater view independence
  • Lower bandwidth usage (though for large data the visualization is sometimes more compact)
  • Finer-grained access and concurrency control (shared window systems support only floor control)
  • Replication problems are better solved with more app semantics (more on this later)
• Sharing a layer nearer the physical device:
  • Referential transparency ("the green object" has no meaning if objects are colored differently for each user)
  • Higher chance the layer is standard (Sync vs. VNC), promoting reusability and interoperability
• Sharing flexibility is limited when the shared layer is fixed: need to support multiple layers
Centralized vs. Replicated: Dist. Comp. vs. CSCW
• Distributed computing: more reads (output) favor replicated; more writes (input) favor centralized
• CSCW: input is immediately delivered without distributed commitment; floor control or operation transformation is needed for correctness
Bandwidth Usage in Replicated vs. Centralized
• Remote I/O bandwidth is only an issue when network bandwidth < 4MBps (Nieh et al '2000)
  • DSL link = 1 Mbps
• Input in the replicated case is smaller than output:
  • Input is produced by humans
  • Output is produced by faster computers
Feedback in Replicated vs. Centralized
• Replicated: computation time on the local computer
• Centralized:
  • Local user: computation time on the local computer
  • Remote user: computation time on the hosting computer plus roundtrip time
  • In the server/service model, an extra LAN/WAN link
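These feedback costs can be written as a back-of-the-envelope model. The latency numbers below are made up for illustration, not measurements.

```python
# Back-of-the-envelope model of feedback time (time for the acting user
# to see the result of their own input). All numbers are illustrative.

def replicated_feedback(local_compute_ms):
    """Replicated: input is processed entirely on the local replica."""
    return local_compute_ms

def centralized_feedback(host_compute_ms, roundtrip_ms, is_local_user):
    """Centralized: the remote user additionally pays a network roundtrip."""
    if is_local_user:                      # user sits at the hosting machine
        return host_compute_ms
    return host_compute_ms + roundtrip_ms  # remote user: input out, output back

# Example: 20 ms of computation, 100 ms WAN roundtrip.
assert replicated_feedback(20) == 20
assert centralized_feedback(20, 100, is_local_user=True) == 20
assert centralized_feedback(20, 100, is_local_user=False) == 120
```

In the server/service model the roundtrip term grows by an extra LAN/WAN hop, which is why remote feedback degrades there first.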
Influence of Communication Cost
• Window-sharing remote feedback:
  • Noticeable in NetMeeting
  • Intolerable in PlaceWare's service model
• PowerPoint presentation feedback time:
  • Not noticeable in the Groove and Webex replicated models
  • Noticeable in NetMeeting for the remote user
  • Not typically noticeable in Sync with a shared model
• Depends on the amount of communication with the remote site, which depends on the shared layer
Case Study: Collaborative Video Viewing (Cadiz, Balachandran et al. 2000)
• Two users collaboratively executing media-player commands
• Centralized NetMeeting sharing added unacceptable video latency
• A replicated architecture was later created using T.120
• Part of the problem: the centralized system shared video through the window layer
Influence of Computation Cost
• For computation-intensive apps:
  • Replicated case: the local computer's computation power matters
  • Centralized case: the central computer's computation power matters
• A centralized architecture can give better feedback, especially with a fast network [Chung and Dewan '99]
• Asymmetric computation power => asymmetric architecture (server/desktop, desktop/PDA)
Feedthrough
• Time to show results at the remote site
• Replicated:
  • One-way input communication time to the remote site
  • Computation time on the local replica
• Centralized:
  • One-way input communication time to the central host
  • Computation time on the central host
  • One-way output communication time to the remote site
• The server/service model adds latency
• Less significant than remote feedback:
  • The active user is not affected
  • But must synchronize with audio ("can you see it now?")
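Like feedback, feedthrough can be modeled with a few lines of arithmetic; all numbers below are illustrative, not measurements.

```python
# Back-of-the-envelope model of feedthrough (time until an observing user
# sees the result of another user's input). Numbers are illustrative.

def replicated_feedthrough(input_oneway_ms, replica_compute_ms):
    """Replicated: the input travels to the observer's replica, which
    recomputes the output locally."""
    return input_oneway_ms + replica_compute_ms

def centralized_feedthrough(input_oneway_ms, host_compute_ms, output_oneway_ms):
    """Centralized: input to the central host, computation there, then the
    output travels on to the observer's site."""
    return input_oneway_ms + host_compute_ms + output_oneway_ms

# Example: 50 ms one-way links, 20 ms of computation everywhere.
assert replicated_feedthrough(50, 20) == 70
assert centralized_feedthrough(50, 20, 50) == 120
```

When the acting user sits at the central host the first term drops to zero, which is why centralized architectures suit presentation-style tasks.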
Task Completion Time
• Depends on:
  • Local feedback (assuming the hosting user inputs)
  • Remote feedback (assuming a non-hosting user inputs)
    • Not the case in presentations, where centralized is favored
  • Feedthrough, if there are interdependencies in the task
    • Not the case in brainstorming, where replicated is favored
  • The sequence of user inputs
• Chung and Dewan '01:
  • Used a Mitre log of floor exchanges and assumed interdependent tasks
  • Task completion time usually smaller in the replicated case
  • An asymmetric centralized architecture is good when computing power is asymmetric (or task responsibility is asymmetric?)
Scalability and Load
• A centralized architecture with a powerful server is more suitable
• Need to separate application execution from distribution:
  • PlaceWare
  • Webex
• Related to firewall traversal (more later)
• Many collaborations do not require scaling:
  • 2-3 collaborators in joint editing
  • 8-10 collaborators in CAD tools (NetMeeting usage data)
  • Most calls are not conference calls!
• Adapt between replicated and centralized based on the number of collaborators
  • PresenceAR goals
Display Consistency
• Not an issue with floor-control systems
• Other systems must ensure that concurrent input appears to all users to be processed in the same (logical) order
• Automatically supported in a centralized architecture
• Not so in replicated architectures, as local input is processed without synchronizing with the other replicas
Synchronization Problems
[diagram: starting from "abc", user 1 inserts d at position 1 (giving "dabc") and user 2 concurrently inserts e at position 2 (giving "aebc"); each input distributor forwards the raw operation, so the sites end up with "deabc" and "daebc" and the replicas diverge]
Peer-to-Peer Merger (Ellis and Gibbs '89, Groove, …)
[diagram: as above, but a merger at each site transforms the incoming remote operation (Insert e,2 becomes Insert e,3 at user 1), so both sites converge to "daebc"]
Local and Remote Merger (Curtis et al '95, LiveMeeting, Vidot '02)
• Feedthrough via an extra WAN link
• Can recreate state through the central site
[diagram: mergers run both at the central site and locally; as above, the transformed operations make both sites converge to "daebc"]
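The transformation in the merger examples above, the classic Ellis and Gibbs-style inclusion transform restricted to character inserts, can be sketched in Python. Positions are 1-based as on the slides; breaking ties by site id when two inserts target the same position is an added assumption, not from the slides.

```python
# Minimal operation-transformation sketch for concurrent character inserts.
# Reproduces the slides' example: "abc" + (Insert d,1 || Insert e,2) -> "daebc".

from dataclasses import dataclass

@dataclass
class Insert:
    char: str
    pos: int    # 1-based insertion position, as on the slides
    site: int   # used only to break ties deterministically (an assumption)

def apply(op, s):
    i = op.pos - 1
    return s[:i] + op.char + s[i:]

def transform(op, other):
    """Shift op so that, applied after `other`, it has its intended effect."""
    if other.pos < op.pos or (other.pos == op.pos and other.site < op.site):
        return Insert(op.char, op.pos + 1, op.site)
    return op

doc = "abc"
op1 = Insert("d", 1, site=1)   # user 1: "abc" -> "dabc"
op2 = Insert("e", 2, site=2)   # user 2: "abc" -> "aebc"

# Each site applies its own operation, then the transformed remote one.
site1 = apply(transform(op2, op1), apply(op1, doc))   # Insert e,2 -> Insert e,3
site2 = apply(transform(op1, op2), apply(op2, doc))   # Insert d,1 unchanged
assert site1 == site2 == "daebc"
```

This is only the two-operation, insert-only core; a full merger must also transform against delete operations and handle longer histories of concurrent operations.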