
An Architecture-based Framework For Understanding Large-Volume Data Distribution


Presentation Transcript


  1. An Architecture-based Framework For Understanding Large-Volume Data Distribution Chris A. Mattmann USC CSSE Annual Research Review March 17, 2009

  2. Agenda • Research Problem and Importance • Our Approach • Classification • Selection • Analysis • Evaluation • Precision, Recall, Accuracy Measurements • Speed • Conclusion & Future Work

  3. Research Problem and Importance • Content repositories are growing rapidly in size • At the same time, we expect more immediate dissemination of this data • How do we distribute it… • In a performant manner? • Fulfilling system requirements?

  4. Data Distribution Scenarios A Backup Site periodically connects across the WAN to the Digital Movie Repository to back up its entire catalog and archive of over 20 terabytes of movie data and metadata. A medium-sized volume of data, e.g., on the order of a gigabyte, needs to be delivered across a LAN, using multiple delivery intervals of 10 megabytes each, to a single user.

  5. Data Distribution Problem Space

  6. Insight: Software Architecture • The definition of a system in the form of its canonical building blocks • Software Components: the computational units in the system • Software Connectors: the communications and interactions between software components • Software Configurations: arrangements of components and connectors and the rules that guide their composition

  7. Insight: Use Software Connectors to model data distribution technologies [Slide diagram: a Data Producer component sends data through a distribution connector to many Data Consumer components in a data distribution system]

  8. Impact of Data Distribution Technologies • Broad variety of data distribution technologies • Some are highly efficient, some more reliable • P2P, Grid, Client/Server, and Event-based • Some are entirely appropriate to use, some are not appropriate

  9. Data Movement Technologies • Wide array of available OTS “large-scale” connector technologies • GridFTP, Aspera, HTTP/REST, RMI, CORBA, SOAP, XML-RPC, Bittorrent, JXTA, UFTP, FTP, SFTP, SCP, Siena, GLIDE/PRISM-MW, and more • Which one is best? • How do we compare them • Given our current architecture? • Given our distribution scenarios & requirements?

  10. Research Question • What types of software connectors are best suited for delivering vast amounts of data to users, satisfying their particular scenarios, in a manner that is performant and scalable in these highly distributed data systems?

  11. Broad variety of distribution connector families • P2P, Grid, Client/Server, and Event-based • Though each connector family varies in some form or fashion • They all share 3 common atomic connector constituents • Data Access, Stream, Distributor • Adapted from our group’s ICSE 2000 Connector Taxonomy

  12. Connector Tradeoff Space • Surveyed properties of 13 representative distribution connectors, across all 4 distribution connector families, and classified them • Client/Server • SOAP, RMI, CORBA, HTTP/REST, FTP, UFTP, SCP, Commercial UDP Technology • Peer to Peer • Bittorrent • Grid • GridFTP, bbFTP • Event-based • GLIDE, Siena

  13. Large Heterogeneity in Connector Properties

  14. How do experts make these decisions? • Performed survey of 33 “experts” • Experts defined to be • Practitioners in industry, building data-intensive systems • Researchers in data distribution • Admitted architects of data distribution technologies • General consensus? • They don’t know the how and the why of which connector(s) are appropriate • They rely on anecdotal evidence and “intuition” 45% of respondents claimed to be uncomfortable being addressed as a data distribution expert.

  15. Why is it bad to have these types of experts? • Employ a small set of COTS, and/or pervasive distribution technologies, and stick to them • Regardless of the scenario requirements • Regardless of the capabilities at user’s institutions • Lack a comprehensive understanding of benefits/tradeoffs amongst available distribution technologies • They have “pet technologies” that they have used in similar situations • These technologies are not always applicable and frequently only satisfy one or two scenario requirements and ignore the rest

  16. Our Approach: DISCO • Develop a software framework for: • Connector Classification • Build metadata profiles of connector technologies, describing their intrinsic properties (DCPs) • Connector Selection • Adaptable, extensible algorithm development framework for selecting the “right” connectors (and identifying wrong ones) • Connector Selection Analysis • Measurement of accuracy of results • Connector Performance Analysis

  17. DISCO in a Nutshell

  18. Scenario Language • Describes distribution scenarios along several dimensions • Value types from the slide’s table: 10 MB, 100 GB, etc. (int + higher-order unit); 1, 10 (int); SSL/HTTP 1.0, Linux File System Perms (string from a controlled value range); 1-10 (computed scale); plus three more int-valued dimensions (e.g., 1, 10)
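A scenario in this language can be sketched as a simple typed record. The field names below are illustrative assumptions (the slide's table lost its row labels in transcription); only the value types come from the slide:

```python
from dataclasses import dataclass, field

@dataclass
class DistributionScenario:
    """Illustrative sketch of a distribution scenario record.

    Field names are assumptions; value types mirror the slide's table.
    """
    total_volume: str = "100 GB"                 # int + higher-order unit
    num_delivery_intervals: int = 10             # e.g., 1, 10
    access_policies: tuple = ("SSL/HTTP 1.0",)   # controlled value range
    performance_reqs: dict = field(default_factory=dict)  # 1-10 computed scale
    num_users: int = 1                           # e.g., 1, 10
    num_user_types: int = 1                      # e.g., 1, 10

# Example instance: a large-volume scenario with two performance dimensions.
scenario = DistributionScenario(performance_reqs={"scalability": 8, "efficiency": 9})
```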

  19. Distribution Connector Model • Developed model for distribution connectors • Identified combination of primitive connectors that a distribution connector is made from

  20. Distribution Connector Model • Model defines important properties of each of the important “modules” within a distribution connector • Defines value space for each property • Defines each property • Properties are based on the combination of underlying “primitive” connector constituents • Model forms the basis for a metadata description (or profile) of a distribution connector
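One way such a metadata profile could look, keyed by the three atomic constituents named on slide 11 (Data Access, Stream, Distributor). The property names and values here are illustrative assumptions, not the actual DISCO schema:

```python
# Hypothetical sketch of a distribution connector metadata profile.
# Property names/values are assumptions for illustration only.
gridftp_dcp = {
    "name": "GridFTP",
    "family": "Grid",
    "data_access": {"locality": "remote", "access_mode": "parallel"},
    "stream": {"delivery": "exactly-once", "bounds": "bounded", "buffering": True},
    "distributor": {"naming": "URL-based", "routing": "point-to-point"},
}

def property_names(dcp):
    """Flatten a profile into (constituent, property) pairs for comparison."""
    return [(k, p) for k in ("data_access", "stream", "distributor")
            for p in dcp[k]]
```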

  21. Selection Algorithms • So far • Let data system architects encode the data distribution scenarios within their system using the scenario language • Let connector gurus describe important properties of connectors using architectural metadata (connector model) • Selection Algorithms • Use scenario(s) and connector properties to identify the “best” connectors for the given scenario(s)

  22. Selection Algorithms • Formal Statement of the problem

  23. Selection Algorithms • Selection algorithm interface: scenario + ConnectorKB → ranked list of (connector, rank value) • This interface is desirable because it allows a user to rank and compare how “appropriate” each connector is, rather than having a binary decision • Example output: (bbFTP, 0.157) (FTP, 0.157) (GridFTP, 0.157) (HTTP/REST, 0.157) (SCP, 0.157) (UFTP, 0.157) (Bittorrent, 0.021) (CORBA, 0.005) (Commercial UDP Technology, 0.005) (GLIDE, 0.005) (RMI, 0.005) (Siena, 0.005) (SOAP, 0.005)
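A minimal sketch of that interface in Python, assuming the knowledge base is a list of connector dicts and the scoring function is pluggable (the toy KB and the `bulk` attribute are purely illustrative):

```python
def select_connectors(scenario, connector_kb, score_fn):
    """Rank every connector in the knowledge base against a scenario.

    Returns (connector_name, score) pairs sorted best-first, mirroring
    the ranked-list output shown on the slide.
    """
    ranked = [(c["name"], score_fn(c, scenario)) for c in connector_kb]
    ranked.sort(key=lambda pair: pair[1], reverse=True)
    return ranked

# Toy knowledge base and scoring function (illustrative only):
kb = [{"name": "GridFTP", "bulk": 0.9}, {"name": "SOAP", "bulk": 0.1}]
ranked = select_connectors({"volume": "100 GB"}, kb, lambda c, s: c["bulk"])
# ranked[0] is ("GridFTP", 0.9)
```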

  24. Selection Algorithm Approach • White box • Consider the internal properties of a connector (e.g., its internal architecture) when selecting it for a distribution scenario • Black box • Consider the external (observable) properties of the connector (such as performance) when selecting it for a distribution scenario

  25. Develop complementary selection algorithms • Software architects fill out Bayesian domain profiles containing conditional probabilities • The likelihood that a connector is appropriate for scenario S, given attribute A and its value, and given a scenario requirement • Users familiar with connector technologies develop score functions • Relating observable properties (performance reqs) of a connector to scenario dimensions
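One way the two complementary algorithms could be sketched; the shape of the conditional-probability table and the attribute names are assumptions, not DISCO's actual encoding:

```python
def bayesian_rank(connector, scenario_reqs, domain_profile, prior=0.5):
    """White-box sketch: combine expert-supplied conditional probabilities
    P(appropriate | attribute=value, requirement) over every attribute/
    requirement pair. The (attr, value, req) key shape is an assumption."""
    score = 1.0
    for attr, value in connector["attributes"].items():
        for req in scenario_reqs:
            score *= domain_profile.get((attr, value, req), prior)
    return score

def score_function_rank(connector, scenario_reqs, score_fns):
    """Black-box sketch: sum expert score functions that relate observable
    properties (e.g., measured throughput) to scenario dimensions."""
    return sum(fn(connector, req) for fn in score_fns for req in scenario_reqs)
```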

  26. Selection Analysis • How do we make decisions based on a rank list? • Insight: looking at the rank list, it is apparent that many connectors are similarly ranked, while many are not • Appropriate versus Inappropriate?

  27. Selection Analysis • Appropriate: (bbFTP, 0.158) (FTP, 0.158) (GridFTP, 0.158) (HTTP/REST, 0.158) (SCP, 0.158) (UFTP, 0.158) • Inappropriate: (Bittorrent, 0.021) (CORBA, 0.005) (Commercial UDP Technology, 0.005) (GLIDE, 0.005) (RMI, 0.005) (Siena, 0.005) (SOAP, 0.005)

  28. Selection Analysis

  29. Selection Analysis • Employed k-means data clustering algorithm • The k parameter defines how many sets the data is partitioned into • Allows for clustering of data points (x, y) around a “centroid” or mean value • We developed an exhaustive connector clustering algorithm based on k-means • clusters connectors into 2 groups: appropriate and inappropriate • uses connector rank value as the y parameter (x is the connector name) • exhaustive in the sense that it iterates over all possible connector clusterings (vanilla k-means is heuristic & possibly incomplete)
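A minimal sketch of such a clustering step, with one simplifying assumption: since rank values are one-dimensional, an optimal two-way partition is always a contiguous split of the sorted scores, so "exhaustive" reduces here to trying every split point:

```python
def cluster_appropriate(ranked):
    """Split a ranked (name, score) list into (appropriate, inappropriate).

    Tries every split point of the score-sorted list and keeps the one
    with minimal within-cluster sum of squared error (k = 2).
    """
    ranked = sorted(ranked, key=lambda pair: pair[1], reverse=True)
    scores = [s for _, s in ranked]

    def sse(xs):
        # Sum of squared deviations from the cluster mean (its "centroid").
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs)

    best_i, best_cost = 1, float("inf")
    for i in range(1, len(scores)):
        cost = sse(scores[:i]) + sse(scores[i:])
        if cost < best_cost:
            best_i, best_cost = i, cost
    return ranked[:best_i], ranked[best_i:]
```

Run on the slide's rank list, this splits the six ~0.158 connectors into the "appropriate" cluster and the rest into "inappropriate".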

  30. Tool Support • Allows a user to utilize different connector knowledge bases, configure selection algorithms, execute them, and visualize their results

  31. Decision Process • Precision - the fraction of connectors correctly identified as appropriate for a scenario • Accuracy - the fraction of connectors correctly identified as appropriate or inappropriate for a scenario • [Chart callouts: 80.5%, 87%]
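The two measures as defined on the slide, computed over sets of connector names (the ground-truth "appropriate" set would come from expert judgment in the evaluation; the sets below are toy data):

```python
def precision(selected, truth):
    """Fraction of connectors identified as appropriate that truly are."""
    tp = len(set(selected) & set(truth))
    return tp / len(selected) if selected else 0.0

def accuracy(selected, truth, universe):
    """Fraction of all connectors correctly labeled, appropriate or not."""
    tp = len(set(selected) & set(truth))                    # true positives
    tn = len((set(universe) - set(selected)) - set(truth))  # true negatives
    return (tp + tn) / len(universe)

# Toy example: 4 connectors, 2 truly appropriate, 3 selected.
sel = {"GridFTP", "FTP", "SOAP"}
tru = {"GridFTP", "FTP"}
uni = {"GridFTP", "FTP", "SOAP", "RMI"}
# precision(sel, tru) == 2/3; accuracy(sel, tru, uni) == 3/4
```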

  32. Decision Process: Speed

  33. Conclusions & Future Work • Conclusions • Domain experts (gurus) rely on tacit knowledge and often cannot explain design rationale • DISCO provides a quantification of & framework for understanding an ad hoc process • Bayesian algorithm has a higher precision rate • Future Work • Explore the tradeoffs between white-box and black-box approaches • Investigate the role of architectural mismatch in connectors for data system architectures

  34. Thank You! Questions?

  35. Backup

  36. Related Work • Software Connectors • Mehta00 (Taxonomy), Spitznagel01, Spitznagel03, Arbab04, Lau05 • Data Distribution/Grid Computing • Crichton01, Chervenak00, Kesselman01 • COTS Component/Connector selection • Bhuta07, Mancebo05, Finkelstein05 • Data Dissemination • Franklin/Zdonik97
