IWLU Panel
Panelists: Mary Shaw, Steve Easterbrook, Betty Cheng, David Garlan, Alexander Egyed, Alex Orso, Tim Menzies
Moderator: Marsha Chechik
Where is Uncertainty?
• Where are we now? (current state)
• Which way are we going? (process)
• Where do we want to end up? (product, end result)
• What is the configuration of our system?
• What are the properties of our environment?
• What is the process scheduling for our system?
• What are the sources of our information?
Dealing with Uncertainty (stolen without permission from Steve Easterbrook)
• Avoid/ignore – the uncertainty can be isolated and is not important
• Resolve – negotiate a new solution or find a compromise
• Circumvent – make (some) decisions under some model of uncertainty (see the sketch after this list)
• Ameliorate – take actions that improve the situation but do not get rid of the uncertainty
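To make the "Circumvent" strategy concrete, here is a minimal sketch of choosing between two candidate designs under an explicit probabilistic model of the environment. It is not something presented by the panel; the design names, belief probabilities, and payoffs are purely illustrative assumptions.

```python
# Minimal sketch (illustrative only): "circumventing" uncertainty by making a
# decision under an explicit probabilistic model of the environment.

# Two candidate designs whose payoff depends on whether the environment
# turns out to be "stable" or "volatile". We only have belief probabilities.
beliefs = {"stable": 0.7, "volatile": 0.3}          # assumed model of uncertainty
payoffs = {
    "design_A": {"stable": 10, "volatile": -5},     # high reward, risky
    "design_B": {"stable": 6,  "volatile": 4},      # modest reward, robust
}

def expected_utility(design):
    """Weight each outcome's payoff by how likely we believe it is."""
    return sum(beliefs[s] * payoffs[design][s] for s in beliefs)

# Pick the design with the best expected utility under the assumed beliefs.
best = max(payoffs, key=expected_utility)
print(best, {d: round(expected_utility(d), 2) for d in payoffs})
```

The uncertainty is not removed; it is captured in the assumed beliefs, and the decision is made relative to them.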
Quick Summary of Interests of Workshop Participants
• AI/SE border in describing/reasoning with uncertainty
  • For process and product
  • Based on probabilities
• Decision making
  • Including delaying and reconsidering decisions
• Formal methods
• Architectural design
  • Modeling
  • Reasoning
  • Advice to architects
More Interests
• Uncertainty in testing
• Dynamically adaptive systems:
  • Requirements elicitation
  • Decision making
  • Uncertainty/inconsistency in viewpoints
• Pervasive computing:
  • Resource allocation, QoS management under uncertainty
• Incremental requirements
• Impact of early design decisions
Questions to the Panelists
• What have I learned today? (or what I believe to be true)
• How do we make decisions in the presence of incompleteness?
• What problems are most difficult (for decision making or for elimination)?
  • Which are most important?
• What problems are most feasible to tackle, in the short run and in the long run?
• How do we proceed?
  • How do we coordinate efforts?
• Challenge problems
  • What do we offer as a baseline?
  • … so that other results can improve or refute
  • And how do we document that?
More Global Questions
• Is there a community forming?
• (easier) Should we have another workshop?
• (harder) What should be our research agenda?
[Word-cloud slide: dimensions of uncertainty (KNOWLEDGE, DETAIL, ENVIRONMENT, FUTURE, SEMANTICS) surrounded by attributes: subjective, abstract, incomplete, untrustworthy, inconsistent, optional, unclear, evolving, imprecise, inaccurate, mistrusted, low-confidence, context-sensitive, ambiguous, unreliable, confused, nondeterministic, uncertain (in time), contingent]
Outcomes
• Initial classification of what incompleteness means
• Incompleteness is EVERYWHERE and we should have had this a long time ago
• Decision making requires more completeness
• Economic argument. Who are the stakeholders?
• Sensitivity analysis. What will help with prediction? (see the sketch after this list)
• Need to build adaptable systems. Always!
• Goal: explain competence in the presence of uncertainty
• Knowledge acquisition: even experts disagree about things they know (even with themselves)
• Need to know about stochastic theorem proving
• Non-monotonic reasoning (model-based diagnosis people)
• Need an empirical basis (including parameterization of uncertainty)
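As an illustration of the sensitivity-analysis point above, a minimal one-at-a-time sketch: perturb each uncertain parameter and see how much the prediction moves. The toy prediction model and its parameter names are assumptions made up for this example, not workshop material.

```python
# Minimal sketch (illustrative only): one-at-a-time sensitivity analysis to see
# which uncertain parameter a prediction depends on most.

def predicted_effort(team_size, defect_rate, reuse_fraction):
    """Toy prediction model, invented purely for illustration."""
    return team_size * 10 + defect_rate * 50 - reuse_fraction * 30

baseline = {"team_size": 5, "defect_rate": 0.2, "reuse_fraction": 0.4}

def sensitivity(param, delta=0.1):
    """Relative change in the prediction when one parameter is perturbed by 10%."""
    perturbed = dict(baseline)
    perturbed[param] *= (1 + delta)
    base_value = predicted_effort(**baseline)
    return (predicted_effort(**perturbed) - base_value) / base_value

for p in baseline:
    print(p, round(sensitivity(p), 3))
```

Parameters whose perturbation barely changes the prediction are places where incomplete knowledge can be tolerated; the sensitive ones are where more information is worth eliciting.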
More Outcomes
• Learn from other fields
  • Probability/statistics, AI
  • Inspiration from the real world (e.g., snails)
  • Use Avida to support model evolution
• Need more of:
  • Interplay between human and computer reasoning
  • Reasoning takes place in a reactive environment and is done by groups of people where no one has complete information
  • Explicit boundaries of decision making
  • Reliance on other people (how to scale this up from a small company to a large one)
More Outcomes
• Autonomous evolution
  • Using dynamic information to steer the model
  • Going from architecture down to code
  • Has strong relevance to testing
• Did not hear enough of:
  • What to do if it is impossible to ask the user for more information!
  • What is the boundary between incompleteness and uncertainty?
  • How to reason anyway?
  • Use of backtracking when wrong decisions were made (see the sketch after this list)
  • When to backtrack?
  • How far to backtrack?
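A minimal sketch of the backtracking idea raised above, assuming a hypothetical set of design decisions and a stand-in consistency check; the only point being illustrated is the mechanism (record a choice, undo the most recent decision on failure, try the next option).

```python
# Minimal sketch (illustrative only): undoing a wrong decision by backtracking
# to the most recent choice point. The decisions and the constraint are made up.

def consistent(assignment):
    """Stand-in constraint: no two decisions may pick the same option."""
    return len(set(assignment.values())) == len(assignment)

def backtrack(decisions, options, assignment=None):
    """Try options for each decision in order; undo and retry on failure."""
    assignment = assignment or {}
    if len(assignment) == len(decisions):
        return assignment
    decision = decisions[len(assignment)]
    for option in options[decision]:
        assignment[decision] = option
        if consistent(assignment) and backtrack(decisions, options, assignment):
            return assignment
        del assignment[decision]        # backtrack: undo the wrong decision
    return None

options = {"db": ["sqlite", "postgres"], "cache": ["sqlite", "redis"]}
print(backtrack(list(options), options))   # first cache choice fails, is undone
```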
Yet More Outcomes
• Evolution of systems and uncertainty:
  • From simple programs to human-in-the-loop, uncertainty grows
• Need to explicitly integrate uncertainty and quantify it (see the sketch after this list)
  • Permitting incremental refinement
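One possible, assumed reading of "quantify it, permitting incremental refinement": represent an uncertain quantity as an explicit interval that only ever narrows as information arrives. The class and the numbers below are hypothetical.

```python
# Minimal sketch (an assumption, not a workshop result): an uncertain quantity
# carried as an explicit interval that is refined incrementally.

from dataclasses import dataclass

@dataclass
class Uncertain:
    low: float
    high: float

    def width(self):
        return self.high - self.low

    def refine(self, low=None, high=None):
        """Narrow the interval with new bounds; never widen it."""
        return Uncertain(max(self.low, low if low is not None else self.low),
                         min(self.high, high if high is not None else self.high))

response_time = Uncertain(0.0, 10.0)             # initially: anywhere up to 10 s
response_time = response_time.refine(high=2.0)   # after a first measurement
response_time = response_time.refine(low=0.5)    # after more measurement
print(response_time, response_time.width())
```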
Research Questions
• How to decide what additional information is most useful for comparing designs?
• How to explain the context of slack?
  • Need to plan for incompleteness and build in excess capacity
• Can we develop a calculus of confidence and other subjective information? (see the sketch after this list)
• Tools for expertise management
  • Where to go for information?
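A hedged sketch of one ingredient such a "calculus of confidence" might build on, assuming confidence is treated as a probability and updated with Bayes' rule when sources of known reliability report on a claim; the scenario and the numbers are illustrative assumptions.

```python
# Minimal sketch (illustrative assumption): confidence in a claim as a
# probability, updated when an imperfect source reports on it.

def update_confidence(prior, source_reliability, source_says_true=True):
    """Bayesian update of P(claim) given a report from an imperfect source."""
    p_report_given_true = source_reliability if source_says_true else 1 - source_reliability
    p_report_given_false = 1 - source_reliability if source_says_true else source_reliability
    evidence = p_report_given_true * prior + p_report_given_false * (1 - prior)
    return p_report_given_true * prior / evidence

confidence = 0.5                          # initially unsure about a requirement
for reliability in (0.8, 0.6, 0.9):       # three sources confirm it
    confidence = update_confidence(confidence, reliability)
print(round(confidence, 3))
```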
More Research Questions
• How to decide what we need to know and what should be factored into the model?
• How to estimate the quality of these models as far as prediction goes?
• How to create runtime systems to help adapt?
• How to validate our results?
• Reiterate: reasoning with uncertainty when more information cannot be elicited
• Use of backtracking
Yet More Research Questions
• Development of a challenge problem
  • With a baseline solution
  • … or not
  • … or a model problem?
  • Anything as long as there is an objective
  • And based on real people doing real things
• Maybe begin by defining terms
  • Survey paper
• Maybe a set of scenarios (will get a link)
• Look for principally different models
• Law of medium numbers…