Characterizing the Use of a Campus Wireless Network David Schwab and Rick Bunt University of Saskatchewan INFOCOM 2004
Outline • Goal and Related Work • Methodology • Experimental Results • Conclusions and Future Work
Overview • Performed a week long traffic trace • January 2003 • Included centralized authentication log • Attempted to ensure privacy • Wireless network usage • Where, when, how much, for what • Roaming patterns
Related Work • Balachandran et al. • Gathered data at an ACM conference • Fixed conference schedule forced traffic patterns • Closely spaced access points • Kotz and Essien • Gathered data at Dartmouth • Experienced access point failures and mis-configurations
Network Environment • Two different subnets for wired and wireless networks • Centralized router connects the subnets to each other and the Internet • Traffic to and from the wireless subnet was mirrored and recorded by EtherPeek • [Diagram: EtherPeek taps the link between the wireless subnet, the router, the wired subnet, and the Internet]
Network Environment • 18 Cisco access points • IP addresses are dynamically assigned • Access point IPs in the trace had changed by the time of analysis • Availability of the technology was not well advertised
Methodology • EtherPeek analyzed each packet • Date, time, origin, destination, and protocol • Able to record MAC addresses (identify unique users) • Anonymised administrator log provided • For each authentication • Date, time, MAC address, and IP of access point
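As a rough illustration of this record format, a parser might look like the following. The exact EtherPeek export layout is not given in the paper, so a comma-separated line with those five fields is an assumption here:

```python
from collections import namedtuple

# Assumed record layout: the paper lists the fields (date, time, origin,
# destination, protocol) but not the exact EtherPeek export format, so a
# simple comma-separated line is used for illustration.
Packet = namedtuple("Packet", ["date", "time", "src", "dst", "proto"])

def parse_trace_line(line):
    """Split one trace record into its five fields."""
    date, time, src, dst, proto = line.strip().split(",")
    return Packet(date, time, src, dst, proto)

def unique_users(lines):
    """Source MAC addresses identify unique users in the trace."""
    return {parse_trace_line(line).src for line in lines}
```

Counting the distinct source MACs is how "unique users" would fall out of such a trace.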
Analysis • Perl scripts parse trace files • First pass indicated some non-standard packets as well as human error. • Focus order: • Traffic data from trace file • Authentication log • Combined data
Traffic Rate Packet sizes are unknown
Traffic Rate Analysis • Clear night/day trend • Clear weekend/weekday trend • At least 15 packets/sec regardless of day and time • Baseline comes from non-wireless traffic multicast onto the network for maintenance
Authentication Data • Total authentications seems unreasonably high • Average authentications per user is much greater than median
Authentication Data Big skew towards Law School
Authentication Data Analysis • Some users are frequently switching access points. • Law School has (by far) the most authentications • Since wireless cards store authentication data, cards can rapidly switch APs if signal strength is similar. • Authentication != Distinct Sessions
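Since rapid AP switching makes raw authentication counts overstate sessions, one way to estimate distinct sessions is to fold re-authentications that follow quickly after a previous one into the same session. A sketch (the 15-minute gap and the (MAC, timestamp) record shape are illustrative assumptions, not the paper's method):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def count_sessions(auths, gap=timedelta(minutes=15)):
    """Estimate distinct sessions per user from (mac, timestamp) records:
    consecutive authentications by the same MAC within `gap` (e.g. rapid
    AP switching) count as one session. The 15-minute threshold is an
    illustrative choice, not from the paper."""
    by_mac = defaultdict(list)
    for mac, ts in auths:
        by_mac[mac].append(ts)
    sessions = {}
    for mac, times in by_mac.items():
        times.sort()
        n = 1
        for prev, cur in zip(times, times[1:]):
            if cur - prev > gap:   # long silence: start of a new session
                n += 1
        sessions[mac] = n
    return sessions
```

The gap threshold directly controls how far "Authentication != Distinct Sessions" diverges, so any real analysis would report sensitivity to it.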
Non-Wireless Traffic • 38% of packets are not associated with users from the log. • Arrived at a constant rate of 15 packets/sec, regardless of time • Conclusion: Generated automatically and flooded onto wireless subnet, but why? • Still a mystery… but could improve performance if solved!
Protocol Mix • Non-Wireless Traffic • 87% SNAP • 7% ARP • Other traffic • Wireless Traffic • 27% HTTP • Normal file sharing, P2P, AIM, etc. • 34.6% “TCP Other”
Traffic vs. Authentication Authentication counts are a poor indicator of traffic volume
Roaming On average, users visited 3 access points during the week
Roaming Analysis • Even though the Geology Library is centrally located, few roaming users connected to it. • Clear relationship between proximity and roaming • High number of APs in close proximity = high rates of usage and roaming • Users won’t attempt to connect if they don’t know an AP is nearby
College of Law • 86% of authentications from Law Student Lounge • Over one third of traffic by College of Law • Factors: • Major wireless commitment by the college • Wired computer labs have been closed • Paradigm shift within the legal community
Design Principles • Focus on location rather than movement • Better to have “islands” of connectivity than “continuous corridors” of coverage • Preference given to departments with lots of online material • Priority given to areas with a large number of mobile users (such as high-tech or professional programs) • For College of Law, a high level of connectivity was obtained with only a few APs
Conclusions and Future Work • Data collected in a centralized manner • Used both traffic trace and authentication log • Average user connected a small number of times from a limited number of APs • Popularity of an AP is determined by its familiarity • Continuing more in-depth studies and developing tools to measure performance
Discussion • As authors mention, this single week may not be representative of overall usage patterns. • What’s a better methodology?
The Synchronization of Periodic Routing Messages Sally Floyd and Van Jacobson Presented by Wanmin Wu
Synchrony • In 1917, a letter to the journal Science described the phenomenon of fireflies apparently blinking on and off together in unison, but the writer dismissed it as the "twitching" of his eyelids. • Christiaan Huygens (1629-1695), inventor of the pendulum clock, observed that two unsynchronized pendulum clocks would keep time together if hung on the same wall - synchronized by the barely perceptible vibrations each induced in the wall.
Synchrony • It is not just possible, it is inevitable. • Strogatz et al. proved mathematically that any system of "coupled oscillators" - that is, entities capable of responding to each other's signals, be they crickets, electrons, or planets - will spontaneously self-organize.
Synchrony in Networks Researchers find that total network traffic is not uniform, but highly synchronized.
Examples of Occurrence • TCP congestion windows • Increase/decrease cycles shared by flows through a common bottleneck gateway • Synchronization on an external clock • Processes become synchronized • Client-server models • Clients synchronized as they wait for service from a busy or recovering server
A Unique Example • Periodic routing messages • Involves seemingly independent processes (routers are initially unsynchronized) • Periodic routing messages from different routers should remain unsynchronized, huh? • Apparently-independent processes can inadvertently become synchronized. • [Diagram: independent sources Si and Sj sending through node X - independent traffic?]
This Paper • Examines the synchronization of periodic routing updates using simulation and analysis • Emphasizes the utility of injecting randomization to prevent synchronization
Main Results • Router synchronization behavior • Injecting randomness • Surprisingly large amount is needed • [Figure: the transition between unsynchronized and synchronized states is abrupt]
How do they find out… • the synchronization of periodic routing updates? • Periodic losses observed in end-to-end Internet pinging traffic • Found to be caused by the periodic exchange of (large) router updates
Periodic Messages Model • A general model of periodic routing messages on a network • many practical protocols (e.g., EGP, IGRP, RIP) conform to the model • Each router transmits a routing message at periodic intervals • Ensures routing tables are kept up-to-date even if the messages are occasionally lost
Periodic Messages Model (state machine) • Wait: timer set uniformly in [Tp - Tr, Tp + Tr] • Timer expiry, a link-failure update, or an update received from a neighbor (processed immediately) moves the router out of Wait • Prepare own routing update (time: Tc); each update received from a neighbor while preparing is also processed (time: Tc2) • After processing its own and incoming updates: send update (time Td to arrive at destination), set timer, return to Wait • Time spent in the Prepare state depends on messages received from others - weak coupling between routers • Weak coupling can result in eventual synchronization
Simulation of the Model • 20 routers broadcast updates to each other • Tc = 0.11 s, Tp = 121 s, Td = 0, Tr = 0.1 s • Routers initially not synced: first message sent uniform on [0, Tp] • Y-axis: round length • By t = 100,000 s all routers send messages at essentially the same time
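The simulation set-up above can be sketched with a much-simplified event model. Assumptions in this sketch (mine, not the paper's exact mechanics): Td = 0, per-message processing is folded into Tc, and a router whose timer expires while a cluster is still preparing simply joins that cluster.

```python
import random

def simulate(n=20, tc=0.11, tp=121.0, tr=0.1, rounds=20000, seed=1):
    """Return the cluster size observed at each firing event.
    A router whose timer expires while another is still busy preparing
    joins the cluster; members then reset their timers together."""
    rng = random.Random(seed)
    fire = [rng.uniform(0, tp) for _ in range(n)]  # initially unsynchronized
    sizes = []
    for _ in range(rounds):
        order = sorted(range(n), key=fire.__getitem__)
        end = fire[order[0]] + tc       # first router busy until `end`
        cluster = [order[0]]
        for i in order[1:]:
            if fire[i] > end:           # expired after the cluster finished
                break
            cluster.append(i)           # joins the cluster...
            end += tc                   # ...its preparation extends the busy
        for i in cluster:               # period (the weak coupling)
            fire[i] = end + rng.uniform(tp - tr, tp + tr)
        sizes.append(len(cluster))
    return sizes
```

With these parameters the cluster sizes drift upward over many rounds; raising tr (e.g., tr = 0.5 * tp) keeps them small, matching the paper's remedy.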
A Closer Look • Two specific nodes • Plot markers show when each timer is set and when it expires ('x') • Overlap from the 6th round - nodes A and B reset their timers at the same time
Avoiding Synchronization • Enforce a maximum time spent in the Prepare state • Choose a large random timer component Tr (e.g., several multiples of Tc)
Markov Chain Model • Used to compute the expected time for the system to move from an unsync’ed state to a sync’ed state, and vice versa. • System is in state i if the largest cluster is of size i. • A cluster of size i is a set of i routing messages that have become sync’ed.
Markov Chain Model • Assume Td = 0 • p(i, i-1) = (1 - Tc/(2Tr))^i • p(i, i+1) = 1 - exp(-((N - i + 1)/Tp) * ((i - 1)Tc - Tr(i - 1)/(i + 1)))
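Reading the slide's transition probabilities literally, their dependence on Tr can be checked numerically. The clamp at zero below is my addition, since the bracketed time window goes negative for large Tr:

```python
import math

def p_down(i, tc=0.11, tr=0.1):
    """p(i, i-1): probability the largest cluster shrinks, as given above."""
    return (1 - tc / (2 * tr)) ** i

def p_up(i, n=20, tc=0.11, tp=121.0, tr=0.1):
    """p(i, i+1): probability the largest cluster grows, as given above.
    The time window is clamped at 0 (my addition): the expression can go
    negative when Tr is large."""
    window = max((i - 1) * tc - tr * (i - 1) / (i + 1), 0.0)
    return 1 - math.exp(-((n - i + 1) / tp) * window)
```

At i = 10 with the simulation parameters (Tr = 0.1), growth dominates (p_up is about 0.079 vs p_down about 0.0003); at Tr = 10*Tc = 1.1 the ordering flips, consistent with the "surprisingly large amount of randomization" result.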
Analysis Results • f(i): mean number of rounds to enter state i from state 0 • g(i): mean time to enter state i from state N • [Figure: f(i) and g(i) curves for increasing Tr]
Important Results • With increasing Tr (randomization) • Takes longer to synchronize • May need Tr=10*Tc • A robust choice of timer Tr=Tp/2
Summary • Examines the spontaneous synchronization behavior of router update messages • Emphasizes the utility of injecting a (surprisingly large) amount of randomization to break up synchronization
Internet Routing Instability Craig Labovitz, G. Robert Malan, and Farnam Jahanian University of Michigan SIGCOMM 1997
Overview • Problem Statement and Goals • Background Information • Methodology • Analysis • Conclusions
Routing Instability • The rapid change of network reachability and topology information • Also known as “route flaps” • Causes: • Router configuration errors • Physical and data link problems • Software bugs
Effects of Instability • Instability can propagate • Effects include: • Increased packet loss • Delays in network convergence • Resource overhead (CPU, memory, etc.)
The Internet • Composed of interconnected backbones • “Peer” at exchange points; exchange data • Maintain default-free routing tables • Autonomous systems have distinct routing policies, but most use BGP • [Diagram: multiple ASes interconnected through an exchange point]
Border Gateway Protocol • Incremental – sends updates only when topology or policy changes • Two forms of information: • Announcements: Router has learned of a new network component or prefers another route • Withdrawals: Router decides a network is unreachable • Explicit: Sends an actual withdrawal message • Implicit: Route is replaced by announcement of a new route
Inter-domain Routing Updates • Forwarding Instability: Topological changes that affect forwarding paths • Routing Policy Fluctuation: Changes in policy that may not affect forwarding paths • Pathological (Redundant): Neither routing nor forwarding instability • “Instability” refers to forwarding instability or routing policy fluctuation; pathological updates are excluded
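A toy classifier over successive updates for one prefix illustrates the three-way split. This is a rough sketch of the idea, not the paper's exact taxonomy; updates are assumed to be ('A', attributes) announcements or ('W', None) withdrawals:

```python
def classify(prev, cur):
    """Classify the update `cur` given the previous update `prev` for the
    same prefix. Rough sketch of the three categories above, not the
    paper's exact rules."""
    if prev is None:
        return "forwarding"          # first event: reachability change
    if cur[0] == "W" and prev[0] == "W":
        return "pathological"        # duplicate (redundant) withdrawal
    if cur[0] == "A" and prev[0] == "A":
        if cur[1] == prev[1]:
            return "pathological"    # duplicate announcement: redundant
        return "policy"              # route replaced: may not change forwarding
    return "forwarding"              # announce <-> withdraw: topology change
```

Note the implicit withdrawal from the previous slide shows up here as an announcement replacing an announcement.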