
Network Embedded Systems Technology (NEST): Extreme Scaling Program Plan
Dr. Vijay Raghavan, Defense Advanced Research Projects Agency, Information Exploitation Office
December 17, 2003





  1. Network Embedded Systems Technology (NEST): Extreme Scaling Program Plan. Dr. Vijay Raghavan, Defense Advanced Research Projects Agency, Information Exploitation Office. December 17, 2003

  2. Topics
  • Extreme Scaling Overview
  • Workshop Action Items
  • Project Plan

  3. Extreme Scaling Overview
  • [Insert background/overview information here…]

  4. Workshop Action Items
  • Concept of Operations
  • Experiment Site Selection
  • System Design and Tier II Architecture
  • Xtreme Scaling Mote (XSM) Design
  • Xtreme Scaling Mote Sensor Board Design
  • Super Node Design
  • Application Level Fusion
  • Remote Programming
  • Localization
  • Power Management
  • Group Formation
  • Simulation
  • Support Infrastructure

  5. 1. Concept of Operations: Primary Effort — Surveillance of Long, Linear, Static Structures
  Problem:
  • Too vast an area for limited personnel resources (mobile guards)
  • Hostile actions: destruction (explosives); damage to pumps and transformers; stripping of copper power lines (for pumps)
  Operational Need:
  • Reliable automated surveillance to detect movement in the security zone
  FY04 Experiment:
  • Sense movement of personnel and/or vehicles toward the pipeline
  • Track the movement and the stop/start of movement
  [Photo: pipeline damage in Iraq]

  6. 1. Concept of Operations (cont.): Primary Effort — Detection and Tracking of Vehicles and Personnel
  [Diagram: a 20 km pipeline with pump station inside a 1 km deep pipeline security zone; on detection of unknowns the guard force is alerted, with a mobile patrol and reaction force responding]

  7. 1. Concept of Operations (cont.): Related Efforts — Similar Long, Linear, Static Structures
  Surveillance of supply routes:
  • Detect potential ambush sites: personnel w/ shoulder-fired weapons; Improvised Explosive Devices (IEDs)
  FY04 Experiment:
  • Sense movement of personnel/vehicles toward the supply route and then: they remain near a point; or they remain for a while and then leave
  • Sense suspicious movement on the road
  More related efforts: border patrol; surveillance around an air base/ammo point
  [Diagram: enemy observation point (OP) with wire to the OP and IEDs along the supply route]

  8. 2. Experiment Site Selection: Characteristics
  • Relatively flat, open area
    • Easier to survey/mark off a 1 km x 20 km site
    • Easier to deploy/recover sensors
    • Easier for observers to see a large section of the experiment site
  • No forests
  • No large physical obstructions (e.g., buildings) to line-of-sight communications; small obstructions (e.g., small rocks) okay
  • Relatively good weather (little rain, light winds, etc.), so sensors can stay out for days
  • Military base
    • Site can be guarded; sensors deployed on day 1 and remain in place until the end of the experiment (days later)
    • Potential for personnel to support deployment/recovery of sensors

  9. 2. Experiment Site Selection (cont.): Primary Candidate Site
  Naval Air Weapons Station, China Lake
  • Being used for the DARPA SensIT effort (Feb 2004)
  • 150 miles NE of Los Angeles
  • Encompasses 1.1 million acres of land in California's upper Mojave Desert, ranging in altitude from 2,100 to 8,900 feet
  • Varies from flat dry lake beds to rugged piñon pine covered mountains
  • Weather should be consistent; summer will be hot

  10. 2. Experiment Site Selection (cont.): Other Candidate Sites
  • Fort Bliss, TX (near El Paso, TX)
  • Nellis AFB (near Las Vegas, NV)
  • NAS Fallon (near Reno, NV)
  • Marine Corps Air Ground Combat Center, Twentynine Palms, CA
  • Eglin AFB (near Pensacola, FL)

  11. 3. System Design and Tier II Architecture
  • Matched filter
  • Group management (for multilateration)
  • Routing
  • Localization
  • Clustering
  • Power management
  • Time sync
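The Tier II stack above names a matched filter as its detection front end. A minimal sketch of the idea in Python (the template shape, noise level, and target offset are illustrative, not program parameters):

```python
import numpy as np

def matched_filter(trace, template):
    """Slide a zero-mean, unit-norm copy of a known target template
    across a sensor trace; the correlation peak marks a detection."""
    t = template - template.mean()
    t = t / (np.linalg.norm(t) + 1e-12)
    scores = np.correlate(trace - trace.mean(), t, mode="valid")
    k = int(np.argmax(scores))
    return float(scores[k]), k

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0.0, 4.0 * np.pi, 64))  # assumed target signature
trace = 0.1 * rng.standard_normal(512)                # background noise
trace[200:264] += template                            # target buried at sample 200
peak, offset = matched_filter(trace, template)
```

A peak above a calibrated threshold would then be reported upward as a detection; the normalization makes the score comparable across windows.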

  12. 3. System Design and Tier II Architecture (cont.)
  • Localization, time sync, and multihop reprogramming design/testing to be joint for Tiers I & II
    • e.g., localization design to include how to associate XSMs with super nodes, how to maintain clusters in a stable way, etc.
  • Reliable communication needed for exfil unicasts and for localization/multihop reprogramming broadcasts
    • because hops are many, comms are almost-always-off, and the latency requirement is tight
  • Testing 802.11 indoors is problematic for some environments

  13. 4. Xtreme Scaling Mote Design: Design Concept #1
  [Diagram: a ball similar to pet toys, the size of a grapefruit, with threaded halves enclosing the antenna, microphone, PIR sensor, and batteries]

  14. 4. Xtreme Scaling Mote Design (cont.): Design Concept #2
  [Diagram: a stake in the ground with the sensor on top; antenna, PIR sensor, microphone, and batteries mounted on the stake]

  15. 4. Xtreme Scaling Mote Design (cont.): Design Concept #3
  [Diagram: a soda can with the sensor on top; antenna, PIR sensor, and microphone]

  16. 4. Xtreme Scaling Mote Design (cont.): Proposed Changes
  • Keep daylight and temperature sensor
  • New mag circuit: 2-axis, amplifier, filter, and set/reset circuit using the HMR1052 or HMR1022
  • Anti-alias filter on microphone; no tone detector
  • 1 PIR sensor with as big a FOV as possible, from either PerkinElmer or Kube
  • No ADXL202, but the pads will be there so a few could be populated
  • Loud buzzer; needs research on size and voltage requirements

  17. 4. Xtreme Scaling Mote Design (cont.): Known Issues
  • Loud (> 90 dB) sounders
    • Voltage requirements: 9-12 V
    • Size: 1" x 1"
    • What frequency: 2 or 4 kHz?
    • Tone detection?
  • PIR field of view
  • Daylight detection circuit
  • Standardize battery selection; will improve battery voltage accuracy
  • Watchdog timer / remote programming
    • Needs significant testing
    • Preload mote with stable TinyOS + watchdog + XNP

  18. 4. Xtreme Scaling Mote Design (cont.): Proposed Phase 1
  • Build 20-30 new sensor boards and distribute to the group for use with the existing MICA2 (late January)
  • In parallel, review package design

  19. 5. Xtreme Scaling Mote Sensor Board Design
  • Candidate sensor suite at Tier I
    • Primary: magnetometer, PIR, acoustic, buzzer, LEDs
    • Secondary: temperature, seismic, humidity, barometer, infrared LED
  • Issues
    • What analog electronics to include to reduce sampling rates (e.g., tunable LPF, integrator)
    • A-to-D lines
    • Packaging to address wind noise/eddies, and actuator visibility
    • Early API design for sensors and their TinyOS (driver) support
    • Early testing of sensor interference issues
  • Sensors at Tier II: GPS and ?

  20. 6. Super Node Design
  • Candidates (Crossbow is doing a fine-grain comparison)
    • Stargate, iPAQ, Intrinsyc CerfCube, μPDA, Applied Data Bitsy and BitsyX, InHand Fingertip3, Medusa MK-2
  • Evaluation criteria
    • 802.11 wireless range (need several hundred meters)
    • Networking with motes
    • Development environment
    • Programming methodology support, simulation tool support
    • Availability of network/middleware services
    • Platform familiarity within NEST
  • Issues
    • PDA wakeup times longer?

  21. 7. Application Level Fusion
  • Features to include in application data from Tier I to Tier II
    • Energy content
    • Signal duration
    • Signal amplitude and signal max/min ratio
    • Angular velocity
    • Angle of arrival
  • Issues
    • Tradeoffs: Tier I XSM density of active nodes; Tier II detection accuracy (to minimize communication power requirement); Tier III detection latency
    • Early validation of environment noise and intruder models
    • Early validation of the influence field statistic w/ acoustic sensors and PIR
    • Might need CFAR (constant false alarm rate) detection in space and time
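The Tier I to Tier II feature handoff above can be sketched as a small per-detection feature extractor. Only the feature names come from the slide; the exact statistics and threshold convention below are assumptions for illustration:

```python
import numpy as np

def detection_features(window, threshold):
    """Reduce one detection window to a compact feature record for
    upward reporting: energy content, signal duration, amplitude,
    and max/min ratio."""
    x = np.abs(np.asarray(window, dtype=float))
    above = x > threshold
    amp = float(x.max())
    ratio = amp / float(x[above].min()) if above.any() else 0.0
    return {
        "energy": float(np.sum(x ** 2)),           # energy content
        "duration": int(np.count_nonzero(above)),  # samples above threshold
        "amplitude": amp,                          # peak amplitude
        "max_min_ratio": ratio,                    # max/min over the detection
    }

feats = detection_features([0.1, 2.0, -4.0, 0.05], threshold=1.0)
```

Sending a handful of such scalars instead of raw samples is what keeps the Tier I to Tier II communication power requirement low.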

  22. 8. Remote Programming
  • The many levels of reprogramming, in order of increasing cost and decreasing frequency:
    • Re-configuration: highly parameterized modules were a big win for the midterm demo
    • Scripting: good for rapid prototyping of top-level algorithmic approaches
    • Page-level diffs for small changes to binary code
      • Pages are the unit of loss, recovery, and change; acks for reliability
      • Many possible design choices for the repair / loss recovery protocol
      • Space-efficient diffs will require some thought, compiler support
    • Loading a whole image of binary code
      • Optimizations: pipelining is a big win; but beware: many optimizations that sound good don't help as much in practice as you might think (see, e.g., Deluge measurements)
  • Claim: all levels should use epidemic, anti-entropy mechanisms
    • Good for extreme reliability: deals with partitions, new nodes, version sync
    • Good for extreme scalability: avoids need for global state
  • Tradeoff: flexibility of reprogramming vs. reliability of reprogramming
    • Want a minimal fail-safe bootloader that speaks the reprogramming protocol
    • Good for reliability: if you're really sick, blow away everything but the bootloader
  • Discussion topic: how much do we hard-code in the bootloader?
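The claim that all levels should use epidemic, anti-entropy mechanisms can be illustrated with a toy pairwise exchange: each node advertises its image version and the pages it holds, and pulls only what it lacks. This is a sketch of the idea only, not the Deluge or XNP wire protocol; the Node class and message shape are invented for illustration:

```python
class Node:
    """Toy code-dissemination state: an image version number and the
    set of pages of that image held so far."""

    def __init__(self, version, pages):
        self.version = version     # image version number
        self.pages = dict(pages)   # page index -> page bytes

    def anti_entropy(self, peer):
        """One pairwise anti-entropy exchange: adopt a newer version,
        then pull only the pages this node is missing."""
        if peer.version > self.version:
            self.version, self.pages = peer.version, {}  # stale image: restart
        if peer.version == self.version:
            for idx in set(peer.pages) - set(self.pages):
                self.pages[idx] = peer.pages[idx]        # pull only the diff

source = Node(2, {0: b"p0", 1: b"p1", 2: b"p2"})
stale = Node(1, {0: b"old"})
stale.anti_entropy(source)  # stale node catches up to version 2
```

Because every exchange is local and idempotent, partitions, late joiners, and lost packets are handled without any global state, which is the scalability argument made on the slide.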

  23. 9. Localization
  • Distance estimates based on time difference of arrival (TDOA)
  • Sound and radio signals used
  • Median value of repeated measurements used to eliminate random errors
  [Figures: distance estimation with the custom sounder; motes with the UIUC customized sounder]
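The ranging scheme above can be sketched in a few lines: the radio signal is treated as arriving effectively instantly, so the sound/radio arrival gap times the speed of sound gives range, and the median over repeats rejects outliers. Timing values below are invented for illustration:

```python
import statistics

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def tdoa_distance(radio_arrival_s, sound_arrival_s):
    """Radio propagation delay is negligible at these ranges, so the
    sound-minus-radio arrival gap scales directly to distance."""
    return SPEED_OF_SOUND * (sound_arrival_s - radio_arrival_s)

def robust_distance(measurement_pairs):
    """Median over repeated measurements, as on the slide, to
    eliminate occasional random errors (e.g., echoes)."""
    return statistics.median(tdoa_distance(r, s) for r, s in measurement_pairs)

# Three repeats of a ~10 m range, one corrupted by an echo:
pairs = [(0.0, 0.0292), (0.0, 0.0291), (0.0, 0.0550)]
d = robust_distance(pairs)  # close to 10 m despite the outlier
```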

  24. 9. Localization (cont.): Experimental Validation
  • Localization based on trilateration
    • Use more than three anchors for correction of systematic errors
    • Pick the largest consistent cluster of different anchors' distance intersections
    • Minimize the sum of least squares; gradient descent search used
  • Results (Ft. Benning localization experiments)
    • Error correction is effective
    • Mean location errors of 30 cm (median error lower)
    • Computations can be done entirely on the motes
  [Figures: demo sensor network deployment; Fort Benning localization results, actual location versus estimated location]
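The least-squares-by-gradient-descent step above can be sketched as follows; the anchor layout, learning rate, and iteration count are illustrative choices, and the consistency-cluster filtering step is omitted:

```python
import numpy as np

def localize(anchors, distances, steps=2000, lr=0.01):
    """Minimize the sum of squared range residuals
    sum_i (|p - a_i| - d_i)^2 by gradient descent, using more than
    three anchors for redundancy as the slide describes."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    p = anchors.mean(axis=0)  # start at the anchor centroid
    for _ in range(steps):
        diff = p - anchors                    # (n, 2) offsets to anchors
        r = np.linalg.norm(diff, axis=1)      # current range estimates
        grad = 2.0 * ((r - d) / (r + 1e-12))[:, None] * diff
        p -= lr * grad.sum(axis=0)
    return p

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
truth = np.array([3.0, 4.0])
dists = [np.linalg.norm(truth - np.asarray(a)) for a in anchors]
est = localize(anchors, dists)  # converges to the true position
```

The loop uses only a handful of multiply-adds per anchor per step, which is consistent with the slide's point that the computation can run entirely on the motes.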

  25. 9. Localization (cont.): Plans for Extreme Scaling
  • Problem
    • Localize nodes in a 100 m x 100 m area
    • UIUC customized motes reliably measure only up to 20 m using acoustic ranging
  • Proposed solution: multi-hop ranging
    • Measure the distances between nodes
    • Find a relative coordinate system for each node for nodes within acoustic range
    • Find transformations between coordinate systems
    • Find the distance to an anchor node, or find the position in the anchor's coordinate system
  • Simulation results
    • Error accumulates slowly with more transformations
    • 100 nodes in a 100 m x 100 m area; acoustic signal range: 18 m; zero-mean normal error with 3σ = 2 m; hop count = number of transformations
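The "find transformations between coordinate systems" step can be done with a least-squares rigid alignment over nodes that appear in two neighboring local frames; chaining such transforms propagates positions beyond one acoustic hop. A 2D sketch (the specific frames and shared nodes are invented; this is one standard way to realize that step, not necessarily the UIUC implementation):

```python
import numpy as np

def frame_transform(pts_b, pts_a):
    """Least-squares rigid transform (rotation R, translation t) taking
    frame-B coordinates of shared nodes onto their frame-A coordinates
    (2D Kabsch/Procrustes alignment)."""
    B, A = np.asarray(pts_b, float), np.asarray(pts_a, float)
    cb, ca = B.mean(axis=0), A.mean(axis=0)
    H = (B - cb).T @ (A - ca)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # keep a proper rotation, no reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ca - R @ cb
    return R, t

# Shared nodes seen in both frames; frame B is frame A rotated 90 degrees
# and shifted by (5, 0):
shared_a = np.array([(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)])
theta = np.pi / 2
Rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
shared_b = [Rot @ p + np.array([5.0, 0.0]) for p in shared_a]
R, t = frame_transform(shared_b, shared_a)
node_a = R @ shared_b[1] + t  # a frame-B position expressed in frame A
```

With noisy ranges each alignment contributes a small error, which matches the simulation observation that error accumulates slowly with the number of transformations.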

  26. 10. Power Management
  • Super node design
    • Power management at least as important at Tier II as at Tier I
    • Key evaluation criterion for device selection
  • Tier II power management needs
    • Exploit mote-to-PDA interrupt wakeup
    • Low Pfa (probability of false alarm) in detection traffic from supernodes to support almost-always-off communication
    • TDMA?
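Why almost-always-off communication matters can be seen from a one-line duty-cycle budget: at low duty cycles the average draw collapses toward the sleep floor. The current figures below are illustrative assumptions, not measured XSM or supernode numbers:

```python
def average_current_ma(active_ma, sleep_ma, duty_cycle):
    """Average current draw for a radio that is on for a fraction
    `duty_cycle` of the time and asleep otherwise."""
    return duty_cycle * active_ma + (1.0 - duty_cycle) * sleep_ma

# e.g. 20 mA active radio, 0.03 mA sleep floor, 1% duty cycle:
avg = average_current_ma(20.0, 0.03, 0.01)  # ~0.23 mA average
```

This is also why false alarms are costly: every spurious detection forces an active-radio interval that the budget otherwise avoids.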

  27. 11. Group Formation
  • Service(s) to support
    • Multilateration with gradient descent for distributed tracking and classification (at Tier II)
    • Reliable broadcast of information from super nodes to Xtreme Scaling Motes
    • Power managed, persistent(?) hierarchical routing (at Tier II)
  • Issues
    • Stability of persistent clusters: e.g., in mapping XSM motes to supernodes, use unison/hysteresis
    • Stabilization of clusters: tolerance to failures, displacement, layout non-uniformity
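The hysteresis idea for a stable XSM-to-supernode mapping can be sketched as a switching rule with a margin: a mote keeps its current parent unless a competitor is clearly better. The RSSI-based criterion and the 6 dB margin are assumptions for illustration, not values from the slide:

```python
def assign_supernode(current, rssi_dbm, margin_db=6.0):
    """Pick a super node for an XSM with hysteresis: switch parents
    only when a competitor beats the current one by `margin_db`,
    so small link fluctuations do not churn the clusters."""
    best = max(rssi_dbm, key=rssi_dbm.get)
    if current is None or current not in rssi_dbm:
        return best  # no valid parent yet: take the strongest link
    if rssi_dbm[best] > rssi_dbm[current] + margin_db:
        return best  # clear improvement: re-cluster
    return current   # otherwise stay put

stay = assign_supernode("A", {"A": -70.0, "B": -68.0})    # 2 dB edge: stay
switch = assign_supernode("A", {"A": -80.0, "B": -70.0})  # 10 dB edge: switch
```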

  28. 12. Simulation: EmStar Simulation Environment, UCLA (GALORE Project)
  • Software for StarGate and other Linux-based microserver nodes for hierarchical networks
  • EmStar: seamless simulation to deployment; EmView: extensible visualizer
    • http://cvs.cens.ucla.edu/viewcvs/viewcvs.cgi/emstar/
  • CVS repository of the Linux and bootloader code-base for StarGate
    • http://cvs.cens.ucla.edu/viewcvs/viewcvs.cgi/stargate/
  • Stargate users mailing list
    • http://www.cens.ucla.edu/mailman/listinfo/stargate-users

  29. 12. Simulation (cont.): Programming Microservers with EmStar
  • What is it?
    • Application development framework for microserver nodes
    • Defines a standard set of interfaces
    • Simulation, emulation, and deployment with the same code
    • Reusable modules, configurable wiring
    • Event-driven reactive model
    • Support for robustness, visibility for debugging, network visualization
    • Supported on StarGate, iPAQs, Linux PCs, and pretty much anything that runs a Linux 2.4.x kernel
  • Where are we using it?
    • NEST GALORE system: sensing hierarchy
    • CENS seismic network: time distribution, s/w upgrade
    • NIMS robotics application
    • Acoustic sensing using StarGate + acoustic hardware
  • Note: EmStar co-funded by NSF CENS; main architect Jeremy Elson

  30. 12. Simulation (cont.): From {Sim,Em}ulation to Deployment
  • EmStar code runs transparently at many degrees of "reality": high-visibility debugging before low-visibility deployment
  [Chart: run modes plotted on scalability vs. reality axes]

  31. 12. Simulation (cont.): Real System
  • Each node is autonomous; nodes communicate via the real environment
  [Diagram: per-node software stack on real nodes 1..n: collaborative sensor processing application over state sync, 3D multilateration, topology discovery, acoustic ranging, neighbor discovery, leader election, reliable unicast, and time sync, running on real radio, audio, and sensors]

  32. 12. Simulation (cont.): Simulated System
  • The real software runs in a synthetic environment (radio, sensors, acoustics)
  [Diagram: the same per-node stack on simulated nodes 1..n inside an emulator/simulator with very simple radio and acoustic channel models]

  33. 12. Simulation (cont.): Hybrid System
  • Real software runs centrally, interfaced to hardware distributed in the real world
  [Diagram: the same per-node stack on simulated nodes inside the emulator/simulator, bridged to real radios in the field]

  34. 12. Simulation (cont.): Interacting with EmStar
  • Text/binary on the same device file
    • Text mode enables interaction from the shell and scripts
    • Binary mode enables easy programmatic access to data as C structures, etc.
  • EmStar device patterns support multiple concurrent clients
    • IPC channels used internally can be viewed concurrently for debugging
    • "Live" state can be viewed in the shell ("echocat -w") or using EmView

  35. 13. Support Infrastructure
  • Important techniques for monitoring, fault detection, and recovery:
    • System monitoring: the big antenna was invaluable during the midterm demo
    • Network health monitoring: e.g., min/max transmission rates
    • Node health monitoring: e.g., ping; query version, battery voltage, sensor failures; reset/sleep commands
    • Program integrity checks: e.g., stack overflow
    • Watchdog timer: e.g., tests timers, task queues, basic system liveness
    • Graceful handling of partial faults: e.g., flash/EEPROM low voltage conditions
    • Log everything: use the Matchbox flash filesystem + high-speed log extraction
    • Simulation at scale: tractable to simulate 1000s of nodes; use it!
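The watchdog-timer technique listed above can be sketched as a software watchdog that the main loop must service, or the node is declared hung. The timeout values are illustrative, and a real mote would arm the hardware watchdog so expiry forces a reset into the fail-safe bootloader:

```python
import time

class Watchdog:
    """Toy software watchdog: the main loop must call kick() within
    `timeout_s` seconds, or expired() reports a hang (e.g., a stuck
    task queue), after which the node would reset."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_kick = time.monotonic()

    def kick(self):
        """Called from the main loop to prove basic system liveness."""
        self.last_kick = time.monotonic()

    def expired(self):
        return time.monotonic() - self.last_kick > self.timeout_s

wd = Watchdog(timeout_s=0.05)
wd.kick()
healthy = not wd.expired()  # just kicked: still considered alive
time.sleep(0.1)             # simulate a hung task queue
hung = wd.expired()         # timeout elapsed without a kick
```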

  36. 13. Support Infrastructure (cont.)
  • A possible network architecture:
    • Claim: the key to extreme scaling is hierarchy: 100 networks of 100 motes (+ a network of 100 Stargates?), not a network of 10,000 motes
    • "Everything runs TinyOS": enables simulation of all levels of the hierarchy
    • Consider adding a high-speed backchannel (e.g., 802.11) to a subset of nodes for debugging, monitoring, log extraction
  • Topics for discussion:
    • What is the role of end-to-end fault recovery? (e.g., watchdog timers)
    • What can we learn from theory? (e.g., Byzantine fault tolerance, self-stabilization)
    • Logging and replay mechanisms, for after-the-fact debugging?
    • Quantity vs. quality tradeoff? (Choice between making individual nodes more reliable vs. adding more nodes for redundancy)

  37. Project Plan
  • [Insert Project Plan slides here…]

  38. BACKUP / MISCELLANEOUS SLIDES

  39. Preliminary Program Plan: Roles and Responsibilities
  • Systems Integration: Ohio State
  • Technology Development
    • Sensors: Crossbow
    • Xtreme Scaling Mote: Crossbow Technology
    • Relay Node: Crossbow Technology
    • Display Unit: Ohio State
    • GUI: Ohio State
    • Application Layer: Ohio State, UC Berkeley
    • Middleware Services: Clock Sync (UCLA, OSU); Group Formation (OSU, UCB); Localization (UIUC); Remote Programming (UCB); Routing (OSU, UCB); Sensor Fusion (OSU); Power Management (UCB); Relay Node Services (UCLA)
    • Operating System: UC Berkeley
    • Application Tools: Ohio State, UCLA, UC Berkeley
    • MAC Layer: UC Berkeley
  • Auxiliary Services: Testing (OSU, MITRE, CNS Technologies); Monitoring, logging, and testing infrastructure (UCB, OSU); Evaluation (MITRE); Logistics, site planning (CNS Technologies, OSU); ConOps development (Puritan Research, CNS Technologies, SouthCom, US Customs & Border Protection, MITRE, OSU); Simulation tools (UCB, UCLA, Vanderbilt, OSU)
  • Transition Partners: USSOUTHCOM, U.S. Customs & Border Protection, USSOCOM, AFRL
