
CONSONA Constraint Networks for the Synthesis of Networked Applications



  1. CONSONA: Constraint Networks for the Synthesis of Networked Applications. Decentralizing the Boeing OEP Challenge Problem. Lambert Meertens, Asuman Sünbül, Stephen Fitzpatrick, Cordell Green. Kestrel Institute

  2. Administrative • Project Title: CONSONA - Constraint Networks for the Synthesis of Networked Applications • PM: Vijay Raghavan • PI: Lambert Meertens & Cordell Green • PI Phone #: 650-493-6871 • PI Email: meertens@kestrel.edu & green@kestrel.edu • Institution: Kestrel Institute • Contract #: F30602-01-C-0123 • AO number: L545 • Award start date: 05 Jun 2001 • Award end date: 04 Jun 2004 • Agent name & organization: Juan Carbonell, AFRL/Rome

  3. Subcontractors and Collaborators • Subcontractors • none • Collaborators • none

  4. Consona: Decentralizing the Boeing OEP Challenge Problem (Demonstration Summary). Lambert Meertens / Cordell Green, Kestrel Institute
     Relationship to Project:
     • The Consona project aims at developing truly scalable fine-grain fusion of physical and information processes in large ensembles of networked nodes
     • Active damping of vibration in a fairing is an excellent challenge problem
     • The current CP approach hinges, unfortunately, on a centralized architecture, making it difficult to demonstrate scalability
     • The demonstration shows how the Consona approach contributes to decentralization and scalability in the CP
     Demonstration Summary:
     • The current CP formulation is centralized and, in this form, not scalable
     • Decentralize: the Constraint Service, Group Formation, and Control
     • Show the impact on performance
     Conclusions (middleware services provided to the Boeing Challenge Problem):
     • “Constraint Satisfaction”: assign sensors/actuators to control resonance modes
     • “Group Management”: distributed formation/adaptation of modal groups of sensors/actuators
     • “Actuator Control”: highly distributed control of actuators based on local sensor feedback
     Evaluation Criteria:
     • Decentralizing the Constraint Service: speed (not measured); anytime, self-stabilizing protocol; quality of the node-mode assignment obtained; Application Energy Metric
     • Decentralizing Group Formation: reduction of the delay before new control takes effect
     • Distributed Adaptive Control: Application Energy Metric

  5. Overview of Project • Develop model-based methods and tools that • integrate design and code generation → design-time performance trade-offs • in a goal-oriented way → goal-oriented run-time performance trade-offs • of NEST applications and services → low composition overhead

  6. Overview of Technical Approach • Services and applications are both modeled as sets of soft constraints to be maintained at run-time • High-level code is produced by repeated instantiation of constraint-maintenance schemas • Constraint-maintenance schemas are represented as triples (C,M,S), meaning that constraint C can be maintained by executing code M, provided that ancillary constraints S are maintained • High-level code is optimized to generate efficient low-level code
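     A minimal C++ sketch of how such a schema triple could be represented (the type names Constraint, MaintenanceCode, and Schema are illustrative assumptions, not the project’s actual representation):

       #include <string>
       #include <vector>

       // A run-time constraint to be maintained, e.g. a soft constraint
       // modeling part of a service or application.
       struct Constraint {
           std::string description;
       };

       // A fragment of high-level maintenance code, later optimized into
       // efficient low-level code.
       struct MaintenanceCode {
           std::string source;
       };

       // Schema (C, M, S): constraint C can be maintained by executing
       // code M, provided the ancillary constraints S are themselves
       // maintained (each in turn by instantiating further schemas).
       struct Schema {
           Constraint C;
           MaintenanceCode M;
           std::vector<Constraint> S;
       };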

  7. Demo: Overview of Demonstration
     Component           | Default        | Demonstration       | Data involved
     System Id           | centralized    | centralized         | per-node coupling matrix
     Constraint Service  | centralized*   | distributed         | node-to-mode assignment
     Group Formation     | centralized*   | distributed         | modal groups
     Parameter Selection | centralized    | could be per-group? | modal characteristics
     Actuator Control    | per-node       | per-group           | uses the coupling matrix
     * with artificial synchronization

  8. Distributed “Constraint” Service (this is really an Optimization service, solving an assignment problem) • To assign nodes for controlling the modes, the following problem must be solved: • given a matrix C of the degree to which each node can contribute to controlling each mode, • assign the nodes to the modes so as to maximize the sum of Cmn taken over all modes m and all nodes n in m’s group, • subject to a limitation on the group sizes
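     Written out as an optimization problem (a sketch; G_m denotes the set of nodes assigned to mode m, k the group-size limit, and the disjointness constraint reflects that each node serves at most one mode):

       \max \sum_{m} \sum_{n \in G_m} C_{mn}
       \quad \text{subject to} \quad |G_m| \le k \ \text{for each mode } m,
       \qquad G_m \cap G_{m'} = \emptyset \ \text{for } m \neq m'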

  9. Distributed Assignment Problem • Key idea: turn the Node-Mode Assignment Problem into a Polygamic variant of the Stable Marriage Problem • The well-known Stable Marriage Problem: find a stable pairing between two sets of “men” and “women”, given their preference matrices, where stable means: no man and woman prefer each other over their present partners • SMP solutions are not guaranteed to be optimal assignments, but hopefully are good enough

  10. The Polygamic Island of Huta • On the idyllic island of Huta the individuals of the prevalent two sexes are known as moadhs and noadhs • Curiously, the attraction between any moadh and noadh is always the same for either direction: attraction is perfectly symmetric • Equally remarkably, there is always some attraction; although the degree differs, repulsion simply does not occur • For unclear reasons, there is a marked imbalance in the sex ratio: there are about nine noadhs to each moadh

  11. Huta Courtship and Marriage • The Laws of Huta Courtship and Marriage: 1. A noadh may be married to at most one moadh 2. A moadh may be married to up to ten noadhs 3. Bachelor noadhs propose, moadhs decide 4. Any moadh or noadh may at any time summarily divorce any of their spouses

  12. The Ancient Huta Polygame • The Rules of the Polygame: • Bachelor noadhs propose to the most attractive moadh by whom they have not (yet) been rejected • A moadh accepts a proposal if the full complement of ten has not been reached, or if the proposer is more attractive than the least attractive present noadh, who then is forthwith dismissed and returns to bachelor status
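     A minimal C++ sketch (ours, not the project’s code) of the moadh’s side of the Polygame, where a moadh corresponds to a mode agent and the attraction values come from the matrix C:

       #include <cstddef>
       #include <set>
       #include <utility>

       using NodeId = int;
       using Attraction = double;   // symmetric attraction, e.g. C[m][n]

       struct Moadh {                          // in the CP: a mode agent
           static constexpr std::size_t capacity = 10;
           // Spouses ordered by ascending attraction: least attractive first.
           std::multiset<std::pair<Attraction, NodeId>> spouses;

           // Handle a proposal. Returns the noadh that ends up a bachelor:
           // the rejected proposer, or a dismissed spouse; -1 if nobody.
           NodeId propose(NodeId noadh, Attraction a) {
               if (spouses.size() < capacity) {
                   spouses.insert({a, noadh});  // room left: accept outright
                   return -1;
               }
               auto least = spouses.begin();    // least attractive current spouse
               if (a > least->first) {
                   NodeId dismissed = least->second;
                   spouses.erase(least);
                   spouses.insert({a, noadh});  // proposer replaces the least attractive
                   return dismissed;            // dismissed spouse returns to bachelorhood
               }
               return noadh;                    // proposal rejected
           }
       };

     Bachelor noadhs, for their part, keep proposing down their attraction-ordered list of moadhs, skipping those that have already rejected them.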

  13. Claims • The Polygame protocol always settles on a stable assignment • The protocol needs no “rounds” or other forms of synchronization or mutual exclusion, assuming reliable communication • It can easily be turned into a self-stabilizing protocol • Not addressed: termination detection (but is that important?) • Not addressed: mode agent failure

  14. Some Metrics • Metric: Sum of Cmn over all modes m, and all nodes n in m’s group, where C is the Φ matrix • Sc#1: default assignment for Scenario #1 • Sc#3: default assignment for Scenario #3 • Poly: assignment computed by Polygame
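     The metric itself is simple to compute; a C++ sketch (the names Phi, indexed as Phi[mode][node], and group, where group[m] lists the nodes assigned to mode m, are our assumptions):

       #include <cstddef>
       #include <vector>

       // Sum of C[m][n] over all modes m and all nodes n in m's group,
       // where C is the Phi matrix.
       double assignmentScore(const std::vector<std::vector<double>>& Phi,
                              const std::vector<std::vector<int>>& group) {
           double total = 0.0;
           for (std::size_t m = 0; m < group.size(); ++m)
               for (int n : group[m])
                   total += Phi[m][n];
           return total;
       }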

  15. Application Energy Metric Comparison of assignments: reduction in Application Energy is rather similar

  16. Variant Assignment Problem • Inspection of the assignments returned by the default Constraint Service (provided in Build 1.6) suggests that some of the lower modes shun nodes that score high on a neighboring mode • fear of excitation energy spilling over? • The precise criteria are unclear, but we experimented with a variant in which the same avoidance is achieved by modifying the entries of matrix C
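     A speculative sketch of one such modification (the penalty form and the weight alpha are entirely our guesses, since, as noted, the precise criteria are unclear): discount a node’s score on mode m by a fraction of its best score on an adjacent mode.

       #include <algorithm>
       #include <cstddef>
       #include <vector>

       // Return a copy of C in which each entry is penalized by a fraction
       // alpha of the node's strongest score on a neighboring mode,
       // discouraging assignments where excitation energy might spill over.
       std::vector<std::vector<double>>
       penalizeNeighborModes(const std::vector<std::vector<double>>& C,
                             double alpha) {
           auto D = C;                        // modified copy; C itself is kept
           const std::size_t modes = C.size();
           for (std::size_t m = 0; m < modes; ++m) {
               for (std::size_t n = 0; n < C[m].size(); ++n) {
                   double neighbor = 0.0;
                   if (m > 0)         neighbor = std::max(neighbor, C[m - 1][n]);
                   if (m + 1 < modes) neighbor = std::max(neighbor, C[m + 1][n]);
                   D[m][n] = C[m][n] - alpha * neighbor;
               }
           }
           return D;
       }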

  17. Application Energy Metric Dramatic reduction may be an artifact of how the AEM window is painted

  18. Conclusion • A satisfactory distributed Assignment Service appears possible • The optimality criterion needs further elucidation • In all likelihood, it will then prove to be rather application-specific, and not suitable as a service for general use • Unsolved important NEST topic: creating reliable “agents” with unreliable nodes

  19. Distributed Group Formation • Group formation/adjustment has significant latency • In the current control application, modal groups are formed sequentially [timeline figure: per-group formation delay, groups 1–4, starting from 0 s] • Group formation can be delegated to the controller nodes

  20. Distributed Actuator Control • Atmospheric turbulence sets up vibration in the fairing • Original idea: Wave Watch • Propagation of vibration can be modeled using a wave equation (a spatio-temporal PDE) • If the nodes can propagate information faster than the vibration propagates, we could use the “field” of nodes to “forecast” the excitation (like a weather forecast) and use the results to apply the form of control that maximizes the dissipation of wave energy • Question: assuming we have that forecast, what should the control be? • Observation: just as for the weather, the present is a good approximation of the immediate future
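     For reference, the textbook form of such a wave equation, with u the transverse displacement of the fairing surface and c the propagation speed (the CP’s actual model is not reproduced here):

       \frac{\partial^2 u}{\partial t^2} = c^2 \, \nabla^2 u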

  21. Objective: Reduce Kinetic Energy • “The energy dissipation goal is to provide 20 dB of attenuation for the important structural modes for the vibration suppression problem” (Kestrel Questions & Answers, Boeing NEST OEP website)

  22. Reducing Kinetic Energy • Push/pull the fairing in the opposite direction to its observed motion • Q: How hard?

  23. Estimation • Estimate velocity (up to a constant of proportionality) by using the delta in the sensor reading (position) • Multiply by a negative constant to get the actuator setting (force): lambda = 3100.0; actuatorCommand = -lambda * deltaSensor; • We calibrated against the results of global-control runs to equate levels of power consumption (estimated using the average square of the actuator settings) • No stability guarantees (yet)! The behavior is latency- and frequency-dependent; further study is needed
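     Assembled into a per-node control loop, a sketch (readSensor, setActuator, and waitForNextTick are hypothetical stand-ins for the platform’s actual interfaces):

       double readSensor();           // position reading (hypothetical platform call)
       void   setActuator(double f);  // apply force (hypothetical platform call)
       void   waitForNextTick();      // wait one sampling period (hypothetical)

       void controlLoop() {
           const double lambda = 3100.0;   // calibrated against global-control runs
           double previous = readSensor();
           for (;;) {
               double current = readSensor();
               double deltaSensor = current - previous;  // ~ velocity, up to a constant
               previous = current;
               // Push against the observed motion: a force opposing the
               // velocity dissipates kinetic energy.
               setActuator(-lambda * deltaSensor);
               waitForNextTick();
           }
       }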

  24. Summary of Results • Comparison of controls (global control as in Scenario 1): local control meets the –20 dB goal

  25. Summary of Results • This form of local control may, however, be more sensitive to massive node failure [figure: Application Energy Metric under progressive node failure, 0% to 100%]

  26. Demonstration Issues • Application-determined relevant metrics (like ratio between controlled and uncontrolled energy) should be made available • Constraint-service API documentation, write-up and actual calls all disagree

  27. Demonstration Lessons Learned • The Build 1.6 distribution contains, all together: • 6,387 files • 1,603,464 bytes • 397,725* lines of C++ code (* estimated)

  28. Large Footprint • The executable has a large footprint

  29. Large Footprint • Larger than PowerPoint!

  30. Time is of the Essence • It takes 500 real seconds to simulate one second on an 850 MHz Pentium III processor • Assuming a network of 100 such nodes, we still fall short of real-time requirements by a factor of 5
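     The arithmetic: 500 s of wall-clock time per simulated second, spread over 100 processors, still costs 500 / 100 = 5 s of wall-clock time per simulated second, i.e. a factor of 5 slower than real time.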

  31. Application Architecture Mismatch • Control application is essentially centralized • sensors and actuators are distributed but most major control components are centralized • interfacing distributed services with the centralized components is ungainly • not to mention inefficient & error-prone • Simple tasks involve significant bureaucracy • coding is cumbersome • execution overhead is significant • To show NEST benefits of large numbers of simple resources, we must switch to a concept of control that is more free-flowing • not tightly orchestrated

  32. System Infrastructure Mismatch • CORBA was designed to address a problem, which it solves… at a cost • In the NEST context, that cost seems to be more of an issue than the problem itself • Infrastructure overhead rules out possibly more appropriate design choices

  33. Middleware Mismatch • On closer inspection, the usefulness of the CP domain services appears to be more problem-specific than is desirable • There is a virtually unlimited plethora of possible middleware services; the question that needs more study is which abstractions are truly and generally useful and realizable by middleware
