
Measured Characteristics of FutureGrid Clouds for Collaborative Sensor-Centric Grid Applications

This talk discusses the measured performance, scalability, and reliability of distributed cloud computing infrastructure for collaborative sensor-centric applications on FutureGrid. The focus is on understanding the characteristics of cloud technologies for scalable, on-demand computing for collaboration with small devices such as sensors and mobile phones.


Presentation Transcript


  1. Measured Characteristics of FutureGrid Clouds for Scalable Collaborative Sensor-Centric Grid Applications. Geoffrey C. Fox, Indiana University; Alex Ho and Eddy Chan, Anabas, Inc. May 25, 2011.

  2. Agenda of The Talk

  3. Background: The emergence of cloud technologies has renewed emphasis on scalable, on-demand computing. Motivation: Cloud back-end support for a large number of small devices, such as sensors and mobile phones, used for collaboration is one important application. Our Effort: We focus on understanding the characteristics of distributed cloud computing infrastructure for collaborative sensor-centric applications on FutureGrid. Our Results and Future Plan: We report preliminary findings on measured performance, scalability, and reliability, and discuss the follow-on plan.

  4. Background • Cloud computing promises infrastructure resources to support application scalability. • There are few studies of collaboration applications in clouds. • There is even less work on leveraging heterogeneous, distributed clouds for real-time, distributed collaborative sensor-centric applications – for instance, applications for situational awareness.

  5. Motivation • Technology has enabled a noticeable shift from using a few expensive, feature-rich sensors to a large number of small, inexpensive commodity sensors. • This trend will drive increased use of collaborative sensing for better information about the environment or operational picture of interest. • One example is an urban-scale deployment of parking-space sensors that shares real-time parking availability with smartphone users; another is crowd-sourcing apps. • We expect growing demand for scalable support of collaborative applications that utilize massive numbers of geographically dispersed sensors of different types in timely, actionable decision-support scenarios.

  6. Our Effort • We focus on understanding the characteristics of distributed cloud computing infrastructure for collaborative sensor-centric applications on FutureGrid. • Terminology: • Collaboration – broadly, the sharing of digital objects • Sensor – a source of a time-dependent stream of information • Real time – application-dependent • Grids – systems formed by a distributed collection of digital capabilities that are managed and coordinated to support some enterprise • Clouds – commercially supported data-center models competing with general-purpose computing centers

  7. Methodology to measure the performance, scalability, and reliability characteristics of FutureGrid: • Use standard network performance tools (iperf, ping) at the network level • Use the IU NaradaBrokering system, which supports many practical communication protocols, to measure data at the message level • Use the Anabas sensor-centric grid framework, a message-based sensor service management and application development framework, to measure data at the collaborative application level

  8. An Overview of The Anabas Collaborative Sensor-Centric Grid Framework • To generate and measure collaborative sensor-centric application traffic on distributed clouds, we need a tool to • Build a sensor-centric grid • Deploy sensors • Manage sensors • Support development of collaborative sensor-centric applications • The Anabas collaborative sensor-centric grid framework was designed and built for an earlier project in partnership with Indiana University and Ball Aerospace. • IU is leading the technology development aspect of a Ball Aerospace project that takes the sensor-centric grid framework to a sensor cloud.

  9. An Overview of The Anabas Collaborative Sensor-Centric Grid Framework (cont’d) • GB, the grid builder tool, • supports assembling subgrids into a mission-specific grid application • provides services for • defining sensor properties • deploying sensors according to defined properties • monitoring deployment status of sensors • remote management of deployed sensors • distributed management of deployed sensors • A deployed sensor-centric grid communicates with • deployed sensors, irrespective of sensor location • deployed sensor-centric applications, irrespective of application location • GB mediates the collaboration among these three modules. (A sketch of the sensor-property step follows below.)
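To make the sensor-property step concrete, here is a minimal sketch of how sensor properties might be defined and validated before deployment. This is purely illustrative: the talk does not show GB's actual schema or API, so every name below (SensorProperties, validate, and the field names) is a hypothetical stand-in.

    from dataclasses import dataclass

    # Hypothetical sensor-property record; GB's real schema is not published
    # in this talk, so these field names are illustrative assumptions only.
    @dataclass
    class SensorProperties:
        sensor_id: str
        sensor_type: str        # e.g. "GPS", "RFID", "Webcam"
        location: str           # deployment host or region
        update_rate_hz: float   # messages per second
        transport: str = "tcp"  # protocol used to reach the broker

    def validate(props: SensorProperties) -> None:
        """Sanity checks a deployment tool might run before deploying."""
        if props.update_rate_hz <= 0:
            raise ValueError("update rate must be positive")
        if props.transport not in ("tcp", "udp", "http"):
            raise ValueError(f"unsupported transport: {props.transport}")

    gps = SensorProperties("gps-001", "GPS", "india.example.org", 1.0)
    validate(gps)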

  10. An Illustration of A Collaborative Sensor-Centric Application

  11. Supported Services • Sensor Services: • RFID • GPS • Wii Remote • Webcam video • Lego Mindstorms NXT • Ultrasonic • Sound • Light • Touch • Gyroscope • Compass • Accelerometer • Thermistor • Nokia N800 Internet Tablet • Computational Services: • VED (Video Edge Detection) • RFID positioning

  12.–15. ANABAS (figure-only slides)

  16. Overview of FutureGrid

  17. An Overview of FutureGrid • It is an experimental testbed that supports large-scale research on distributed and parallel systems, algorithms, middleware, and applications, running on virtual machines (VMs) or bare metal. • It supports several cloud environments, including Eucalyptus and Nimbus. • Eucalyptus and Nimbus are open-source software platforms that implement IaaS-style cloud computing. • Both support an AWS-compliant, EC2-based web service interface (a hedged example follows below). • Eucalyptus additionally supports an AWS-compliant storage service. • Nimbus supports saving customized VMs to the Nimbus image repository.
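As an aside on what an "EC2-based web service interface" means in practice: the sketch below uses boto 2.x, whose connect_euca helper speaks the EC2-classic protocol that Eucalyptus implements, to launch one instance. The endpoint, credentials, and image id are placeholders we invented; the talk does not describe how the FutureGrid instances were actually provisioned.

    import boto  # boto 2.x; connect_euca targets Eucalyptus EC2-style endpoints

    # All values below are hypothetical placeholders, not FutureGrid settings.
    conn = boto.connect_euca(host="eucalyptus.example.org",
                             aws_access_key_id="YOUR_ACCESS_KEY",
                             aws_secret_access_key="YOUR_SECRET_KEY")

    # Look up a machine image and start one m1.xlarge instance, the type the
    # experiments used on the Eucalyptus clouds (India and Sierra).
    image = conn.get_image("emi-12345678")  # illustrative image id
    reservation = image.run(min_count=1, max_count=1, instance_type="m1.xlarge")
    print("launched:", reservation.instances[0].id)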

  18. General Experimental Setup • We use four distributed, heterogeneous clouds on FutureGrid • Hotel (Nimbus at University of Chicago) • Foxtrot (Nimbus at University of Florida) • India (Eucalyptus at Indiana University) • Sierra (Eucalyptus at UCSD) • Distributed cloud scenarios are • either pairs of clouds, or • a group of four clouds • In Nimbus clouds we use 2-core VMs with 12 GB RAM running CentOS • In Eucalyptus clouds we use m1.xlarge instances; each m1.xlarge instance is roughly equivalent to a 2-core Intel Xeon X5570 with 12 GB RAM • We use NTP to synchronize clocks on the cloud instances before experiments (see the check sketched below)
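Clock synchronization matters here because the latency figures compare timestamps taken on different hosts. Below is a minimal pre-run sanity check, assuming the third-party ntplib package; the talk only states that ntp was used, so this check is our illustration, not the authors' tooling.

    import ntplib  # third-party: pip install ntplib

    # Query an NTP server and report how far this host's clock has drifted.
    # An offset comparable to the latencies being measured (tens of ms here)
    # would invalidate cross-host timestamp comparisons.
    client = ntplib.NTPClient()
    response = client.request("pool.ntp.org", version=3)
    print(f"clock offset: {response.offset * 1000:.2f} ms")
    print(f"path delay:   {response.delay * 1000:.2f} ms")
    if abs(response.offset) > 0.005:  # 5 ms tolerance, chosen for illustration
        raise SystemExit("clock offset too large; resynchronize before measuring")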

  19. Network Level Measurement • We run two types of experiments: • Using iperf to measure bi-directional throughput between pairs of cloud instances, one instance on each cloud in the pair. • Using ping in conjunction with iperf to measure packet loss and round-trip latency under loaded and unloaded network conditions between the same pairs of instances. • A sketch of how such a run can be driven follows below.
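A minimal driver for one loaded/unloaded comparison is sketched below. It assumes an iperf server (iperf -s) is already running on the remote instance; the hostname is a placeholder and the regular expressions target Linux ping output. This is our reconstruction of the procedure, not the authors' scripts.

    import re
    import subprocess

    def ping_stats(host: str, count: int = 20):
        """Run ping and parse packet loss (%) and average RTT (ms)."""
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True).stdout
        loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
        rtt = re.search(r"= [\d.]+/([\d.]+)/", out)  # min/avg/max/mdev line
        return float(loss.group(1)), float(rtt.group(1))

    host = "sierra.example.org"  # placeholder; real endpoints were FutureGrid VMs
    print("unloaded:", ping_stats(host))

    # Repeat the measurement while 32 parallel iperf streams load the path,
    # mirroring the slide's loaded/unloaded comparison.
    load = subprocess.Popen(["iperf", "-c", host, "-P", "32", "-t", "60"])
    print("loaded:  ", ping_stats(host))
    load.wait()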

  20. Network Level – Throughput

  21. Network Level – Packet Loss Rate

  Instance Pair     Unloaded Loss Rate   Loaded Loss Rate (32 iperf connections)
  India-Sierra      0%                   0.33%
  India-Hotel       0%                   0.67%
  India-Foxtrot     0%                   0%
  Sierra-Hotel      0%                   0.33%
  Sierra-Foxtrot    0%                   0%
  Hotel-Foxtrot     0%                   0.33%

  22. Network Level – Ping RTT with 32 iperf connections

  23. Network Level – Ping RTT with 32 iperf connections (cont’d)

  24. Network Level – Round-trip Latency Due to VM

  25. Network Level – Round-trip Latency Due to Distance

  26. Message Level Measurement • We run a 2-cloud distributed experiment • Using Nimbus clouds on Foxtrot and Hotel • A NaradaBrokering (NB) broker runs on Foxtrot • Simulated participants join single and multiple video conference session(s) on Hotel • We use NB clients to generate video traffic patterns, instead of the Anabas Impromptu multipoint conferencing platform, for large-scale and practical experimentation • A single video conference session has up to 2,400 participants • Up to 150 video conference sessions run with 20 participants each • The measurement logic is sketched below
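The measurement logic can be shown with a self-contained analogue: N "participants" publish timestamped messages through a single broker thread, which fans each message back out, and round-trip latency is receive time minus embedded send time. The real experiment used NaradaBrokering's Java clients across two clouds; the Python below only illustrates the logic and does not reproduce the NB API.

    import queue
    import threading
    import time

    N_PARTICIPANTS = 20                 # one simulated conference session
    broker_in: queue.Queue = queue.Queue()
    inboxes = [queue.Queue() for _ in range(N_PARTICIPANTS)]

    def broker():
        """Fan each published message out to every session participant."""
        while True:
            msg = broker_in.get()
            if msg is None:
                return
            for inbox in inboxes:
                inbox.put(msg)

    threading.Thread(target=broker, daemon=True).start()

    latencies = []
    for frame in range(30):             # one "video frame" per iteration
        broker_in.put(("frame", time.monotonic()))
        for inbox in inboxes:
            _, sent = inbox.get()
            latencies.append(time.monotonic() - sent)
        time.sleep(1 / 30)              # ~30 fps video pacing

    broker_in.put(None)                 # stop the broker thread
    print(f"mean round trip: {1000 * sum(latencies) / len(latencies):.3f} ms")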

  27. Message Level Measurement – Round-trip Latency

  28. Message Level Measurement • The average inter-cloud round-trip latency between Hotel and Foxtrot in a single video conference session with up to 2,400 participants is about 50 ms. • Average round-trip latency jumps when a single session has more than 2,400 participants. • Message backlog is observed at the broker when a single session has more than 2,400 participants. • Average round-trip latency can be maintained at about 50 ms with 150 simultaneous sessions of 20 participants each, an aggregate of 3,000 participants. • Multiple smaller sessions allow the NB broker to balance its work better; the limits shown are due to the use of a single broker, not of the system.

  29. Collaborative Sensor-Centric Application Level Measurement • We report initial observations of an application built with the Anabas collaborative sensor-centric grid framework. • Virtual GPS sensors stream information to a sensor-centric grid at a rate of one message per second. • A sensor-centric application consumes all the GPS sensor streams and computes latency and jitter (as sketched below). • We run two types of experiments • A single VM in one cloud (India) to establish a baseline • Four clouds – India, Hotel, Foxtrot, Sierra – each with a single VM
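A sketch of the latency/jitter computation the consuming application performs is below. The talk does not spell out its jitter formula, so this sketch uses the standard deviation of per-message latency, one common definition, and assumes sender and receiver clocks are NTP-synchronized as described on slide 18. The timestamps are invented for illustration.

    import statistics

    def latency_and_jitter(send_times, recv_times):
        """Mean one-way latency and jitter (stdev of latency), in seconds."""
        latencies = [r - s for s, r in zip(send_times, recv_times)]
        return statistics.mean(latencies), statistics.stdev(latencies)

    # A virtual GPS sensor emitting one message per second whose delivery
    # delay drifts slightly; values are made up for illustration.
    send = [float(t) for t in range(10)]
    recv = [s + 0.050 + 0.002 * (i % 3) for i, s in enumerate(send)]
    mean_lat, jitter = latency_and_jitter(send, recv)
    print(f"latency {mean_lat * 1000:.1f} ms, jitter {jitter * 1000:.1f} ms")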

  30. Collaborative Sensor-Centric Application Level – Round-trip Latency

  31. Collaborative Sensor-Centric Application Level – Jitter

  32. Collaborative Sensor-Centric Application Level Measurement • Observations: • With a single VM in one cloud, we could stretch to support 100 virtual GPS sensors, but with critically low idle CPU (7%) and unused RAM (1 GB), which is not good for long-running applications or simulations; average round-trip latency and jitter grow rapidly beyond 60 sensors. • With four geographically distributed clouds of two different types running a total of 200 virtual GPS sensors, average round-trip latency and jitter remain quite stable, and average idle CPU stays at about 35%, which is more suitable for long-running simulations or applications.

  33. Preliminary Results • Network Level Measurement • FutureGrid can sustain at least 1 Gbps inter-cloud throughput and is a reliable network with a low packet loss rate. • Message Level Measurement • FutureGrid can sustain a throughput close to its implemented capacity of 1 Gbps between Foxtrot and Hotel. • The multiple video conference sessions show that clouds can support publish/subscribe brokers effectively. • Note that the limit of around 3,000 participants in the figure was reported as 800 in earlier work, showing that any degradation in broker performance from using clouds is more than compensated by improved server performance. • Collaborative Sensor-Centric Application Level Measurement • Distributed clouds have encouraging potential to support scalable collaborative sensor-centric applications with stringent throughput, latency, jitter, and reliability requirements.

  34. Future Plan • Repeat current experiments to get better statistics • Include scalability in the number of instances in each cloud • Study the impact on latency of bare metal vs. VMs, commercial vs. academic clouds, and different cloud infrastructures (OpenStack, Nimbus, Eucalyptus) • Look at server-side limits with distributed brokers versus the number of clients (with virtual clients placed so that the client side is not a bottleneck) • Look at the effect of using secure communication mechanisms

  35. Acknowledgments We thank Bill McQuay of AFRL, Ryan Hartman of Indiana University, and Gary Whitted of Ball Aerospace for their important support of the work. This material is based on work supported in part by the National Science Foundation under Grant No. 0910812 to Indiana University for “FutureGrid: An Experimental, High-Performance Grid Test-bed.” Other partners in the FutureGrid project include U. Chicago, U. Florida, U. Southern California, U. Texas at Austin, U. Tennessee at Knoxville, and U. of Virginia.
