Lecture on Safety and Reliability of Human-Machine Systems
(Sicurezza e Affidabilità dei Sistemi Uomo-Macchina)
Adam Maria Gadomski, 1998/99
E-mail: gadomski_a@casaccia.enea.it
URL: http://wwwerg.casaccia.enea.it/ing/tispi/gadomski/gadomski.html
Presentation outline
• Definitions: Reliability, Safety and Human Errors
• Human-Machine Systems: low-risk systems, high-risk systems
• Human Errors: Operator - Designer - Organization
• User Modeling for Decision Support
• Reduction of Human Errors: from Passive DSSs to Intelligent DSSs
• Some Examples
Reliability and Safety
• Reliability problem - generates economic losses; characterized by different kinds of "loss of function" over a given period of time under a given set of operational conditions.
• Safety problem - generates health, environmental and cultural losses; direct losses to the human body (harm, injury).

                       Safety (effects)
                       Yes        No
Reliability    Yes      +          +
(causes)       No       +          +

As the table indicates, safety and reliability are either independent (a sign of wrong design) or dependent indicators of the system's utility.
Human errors
• Human error: a human action or inaction that can produce unintended results (*) or system failures (**).
  (*) [ISO/ITC Information Technology Vocabulary, 1996]
  (**) [NUREG-1624]
[Figure: machine failures and human errors are linked, through complex consequence interrelations, to both reliability problems and safety problems.]
Human-Machine Systems
• High-risk systems (most important: safety problems). Human errors cause high losses: disasters (off-site), accidents (on-site), incidents, human deaths.
  - Nuclear and chemical plants, public transportation systems, banks ...
• Low-risk systems (most important: reliability problems). Human errors cause only long-term, low economic losses or quality problems.
  - Office systems, public information systems, travel and sale systems, the Internet ...
Causes of Human Errors
[Figure: the human operator (mind) interacts with the machine (the controlled system/processes) through the control and measurement system and the computer console (hardware & software), all embedded in the organization, the physical environment and the psycho-social environment.]
Propagation of Human Errors
[Figure: design errors propagate through the machine, and stress-caused errors propagate through the environment and the organization, to the final consequences (losses).]
Remark: the critical and modifiable element is the Human-Computer Interface System.
Direct Human User Errors

Mental error type                             Sensorial  Rational  Emotional
Erroneous perception of images and texts          +          +         +
Erroneous request of information                  +          +         +
Erroneous manipulation                            +          +         +

Each of these errors has a possible propagation path.
Sources of errors from the user perspective
Designer errors cause operator errors!
The main designer error: adapting the system interface to his own needs rather than the user's.
1. Neglecting human factors and the cognitive importance scale at the level of human sensing and manipulation:
 - too much information (images, texts) on the screen
 - unclear hierarchy (criteria) of information presentation
 - mode of presentation: use of size, structure, color, voice
 - choice of proper control buttons (importance scale), their placement, abbreviations, ...
 - lack of the possibility to correct erroneous commands.
2. Neglecting the necessity of the user's understanding of the plant/process global and particular situation:
 - unclear User-Computer cooperation
 - lack of hierarchical monitoring of the situation; flat representation of the intervention domain
 - lack of explanations on the operator's request
 - lack of warnings
 - lack of suggestions.
The user should always know what the system may offer. The system should help users to understand the "machine".
Organization errors cause operator errors!
1. Related to human decisional problems:
 - insufficiently clear duties and responsibilities of the human operator/user - the system offers forbidden or never-requested functions
 - the role of the user is modified during "machine" exploitation
 - stress caused by individual responsibility - too high individual risk, or too low individual responsibility.
2. Lack of proper instructions and training (competencies) creates a gap between possible interventions and the tasks received from superiors.
3. Knowledge support: lack of easy access to organization experts.
4. Co-operation: insufficiently precisely defined co-operation conditions.
5. Ergonomics: improper organization of the workplace.
Integrated solution: design Active/Intelligent Decision Support Systems
- Reduction of operator functions; this requires a function allocation and the design of new cooperation functions.
- Active support structured on three levels (a sketch follows the list):
 1. Data presentation / manual manipulations - goal-oriented
 2. Data processing: selected data processing/calculations - task-oriented
 3. Mechanical (qualitative) reasoning: implementation of decision-making components.
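The three support levels can be read as a pipeline. The sketch below is a minimal illustration; all function names, data fields and the pump scenario are assumptions invented for this note, not part of the lecture's design:

```python
def goal_oriented_presentation(raw_data, relevant_keys):
    """Level 1: goal-oriented - show only the data relevant to the current goal."""
    return {k: v for k, v in raw_data.items() if k in relevant_keys}

def task_oriented_processing(data, calculations):
    """Level 2: task-oriented - run only the calculations the current task needs."""
    return {name: calc(data) for name, calc in calculations.items()}

def qualitative_reasoning(results, rules):
    """Level 3: mechanical (qualitative) reasoning - suggest actions from rules."""
    return [advice for condition, advice in rules if condition(results)]

# Toy example: an operator pursuing a 'keep the tank cooled' goal.
data = {"pump_flow": 12.0, "tank_level": 0.4, "noise": 0.1}
view = goal_oriented_presentation(data, {"pump_flow", "tank_level"})
derived = task_oriented_processing(view, {"margin": lambda d: d["tank_level"] - 0.3})
print(qualitative_reasoning(derived, [(lambda r: r["margin"] < 0.2, "open feed valve")]))
# -> ['open feed valve']
```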
Active DSS design requirements:
• A flexible interaction strategy for the human-computer interface,
• Control of the role of the operator (operator's competencies, responsibilities, access to information),
• Understanding support: textual and graphical languages, information density,
• Decision support related to: information, preferences, knowledge management,
• Active intervention support: suggested solutions, explanations.
An integrated, role-dependent user/operator modeling is necessary.
User Modeling Frameworks - Artificial Intelligence
• IPK (Information, Preferences, Knowledge) framework [Gadomski, 1989]
• BDI (Beliefs, Desires, Intentions) framework [Rao et al., 1991] - a strong subjective human metaphor.
• CKI (Communication, Know-How, Intentions) model [M. Singh, 1994]
Current tendency: active DSSs designed by applying the human metaphor.
Basic concepts of IPK
• Information: how does a particular situation look (before, now, in the future)?
  - facts, measurements, observations
• Knowledge: how may a situation be classified and modeled, and what is possible to do in "this type of situation"?
  - descriptive frames, rules, procedures, methods
• Preferences: what is more important? what is more efficient?
• Goal: what should be achieved?
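As a reading aid, the four IPK concepts can be sketched as a data structure. This is a minimal illustration under assumed names (IPKState, applicable_actions and decide are invented here), not an implementation from the lecture:

```python
from dataclasses import dataclass, field

@dataclass
class IPKState:
    """One agent's IPK state (illustrative names, invented for this sketch)."""
    information: dict = field(default_factory=dict)   # facts, measurements, observations
    knowledge: list = field(default_factory=list)     # (condition, action) rules/procedures
    preferences: list = field(default_factory=list)   # scoring criteria: action -> importance
    goal: str = ""                                    # what should be achieved

    def applicable_actions(self):
        """Knowledge answers: what is possible to do in this type of situation?"""
        return [action for condition, action in self.knowledge
                if condition(self.information)]

    def decide(self):
        """Preferences answer: among the possible actions, which is more important?"""
        candidates = self.applicable_actions()
        if not candidates:
            return None
        return max(candidates,
                   key=lambda action: sum(score(action) for score in self.preferences))
```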
Application of IPK to the construction of an Abstract Intelligent Agent
Possible domain-independent reasoning mechanisms: deductive, inductive, abductive, case-based ... + different logics.
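To make two of the mechanism families concrete, here is a one-rule toy contrast of deduction and abduction; the lecture only names the families, so the rule and functions below are purely illustrative:

```python
RULE = ("pump_failure", "low_flow")   # if pump_failure then low_flow

def deduce(fact):
    """Deductive: from the known cause, conclude the effect."""
    return RULE[1] if fact == RULE[0] else None

def abduce(observation):
    """Abductive: from the observed effect, hypothesize a possible cause."""
    return RULE[0] if observation == RULE[1] else None

print(deduce("pump_failure"), abduce("low_flow"))  # low_flow pump_failure
```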
Abstract Intelligent Agent (AIA)
• "Abstract" because such a model of an intelligent agent is independent of its application domain, of the specific role of the decision-maker, and of its software implementation environment.
• An AIA depends only on its architectural constraints.
Basic architecture element of an AIA [Gadomski, 1993]
[Figure: data acquisition from the physical domain of activity feeds the system DS (the agent's representation of its physical domain of activity); the agent's knowledge system KS and preference system PS, driven by the goal, turn this information into a new decision, which acts back on the physical domain.]
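Reusing the IPKState sketch above, the figure's loop can be written as a single cycle: data acquisition updates the domain representation (DS), the knowledge system (KS) proposes actions, the preference system (PS) chooses among them, and the decision acts back on the physical domain. The sense and act callbacks are hypothetical:

```python
def decision_cycle(agent, sense, act):
    """One AIA cycle: physical domain -> DS -> KS -> PS -> new decision -> action."""
    agent.information.update(sense())   # data acquisition into the domain representation DS
    decision = agent.decide()           # KS proposes, PS chooses (see IPKState above)
    if decision is not None:
        act(decision)                   # the new decision acts on the physical domain
    return decision

# Toy usage with invented rules:
agent = IPKState(
    knowledge=[(lambda info: info.get("pressure", 0) > 8, "open relief valve")],
    preferences=[lambda action: 1.0],
    goal="keep pressure below limit",
)
decision_cycle(agent, sense=lambda: {"pressure": 9.2}, act=print)  # -> open relief valve
```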
Multi-Agent Structure of an Abstract Intelligent Agent
[Figure (translated from Italian): a simple direct-advisor agent (domain system, preference system, knowledge system), a simple planner agent (knowledge representation system, plan construction criteria, meta-knowledge system, planning methods) and a simple preference-manager agent (preference representation system, preference change strategies, meta-preference system), all interacting with the external world or simulator.]
Abstract Intelligent Agent: role model and decisional errors (a toy encoding follows the table)

AIA component   Role model                  Decisional errors
Knowledge       Competencies                Out of competencies
Preferences     Responsibilities, duties    Wrong choice criteria
Information     Access to information       Improper or insufficient information
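The table's mapping can be turned into a toy consistency check: a role definition that leaves an IPK component unspecified exposes the corresponding decisional error. The dictionary and audit_role below are hypothetical illustrations, not part of the lecture:

```python
# IPK component -> (role-model attribute, decisional error it guards against)
ROLE_ERROR_MAP = {
    "knowledge":   ("competencies",             "out of competencies"),
    "preferences": ("responsibilities, duties", "wrong choice criteria"),
    "information": ("access to information",    "improper or insufficient information"),
}

def audit_role(role_definition):
    """Return the decisional errors a (toy) role definition leaves open."""
    return [error
            for component, (_attribute, error) in ROLE_ERROR_MAP.items()
            if not role_definition.get(component)]

print(audit_role({"knowledge": True, "information": True}))
# -> ['wrong choice criteria']  (preferences, i.e. responsibilities, left undefined)
```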
The routine software engineer's task is to design software systems which satisfy the user's production goal (the user requirements).
What more is needed? To satisfy safety and economic goals as well.
This means that user modeling is a new paradigm in the software life cycle.
New Life Cycle: Production, Safety and Economic Goals [M. Lind, 1992]
[Figure: three parallel design streams under mutual constraints - (1) production goal: design of physical processes, with modeling and testing of production and control; (2) safety & economics goal: design of human and decision-making processes, with modeling and testing of human factors and cognitive reasoning processes; (3) safety & economics goal: design of computer support processes, with modeling and testing of hardware/software processes (structural modeling) - iterated through integration and modifications.]
New components required in the software life-cycle:
• Identification of possible causes and mechanisms of human errors and their possible consequences - cause-consequence analysis.
• Functional modeling of the ideal user/operator.
• Allocation of functions and definition of new interface functions.
• Design of additional cooperation functions.
• User training in the new conditions.
These require new systems and technologies.
Passive Decision Support Systems
Passive DSSs (information systems) were the first attempt at computer aid for plant operators and emergency managers. Unfortunately, their application demands continuous learning and training from their users, for which typical emergency managers are not sufficiently motivated. A large part of the user's decisions relies on choosing a concrete button from menu bars or menu tools that are part of a visualized hierarchical menu structure (the menu-driven paradigm).

Active/Intelligent Decision Support Systems
Can be viewed as computerized interfaces that fit passive DSS functions to the requirements, properties and preferences of humans. An active DSS:
• eliminates the redundancy of currently irrelevant alternatives,
• suggests choices determined by criteria defined on higher abstraction levels,
• is based on the goal-driven paradigm (see the sketch below).
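The menu-driven versus goal-driven contrast can be sketched in a few lines: a passive DSS returns the whole menu, while an active DSS filters out currently inapplicable alternatives and ranks the rest against the goal. The menu items and rules are invented for this example:

```python
MENU = ["start pump", "stop pump", "vent tank", "call maintenance", "print report"]

def passive_dss(menu):
    """Menu-driven paradigm: the user faces every alternative, relevant or not."""
    return list(menu)

def active_dss(menu, applicable, preference):
    """Goal-driven paradigm: drop currently inapplicable items, rank the rest."""
    options = [item for item in menu if applicable(item)]
    return sorted(options, key=preference, reverse=True)

situation = {"tank_pressure": "high"}
suggested = active_dss(
    MENU,
    # suppress actions that are forbidden while pressure is high (toy rule)
    applicable=lambda item: not (item == "start pump"
                                 and situation["tank_pressure"] == "high"),
    # rank by how well the action serves the goal of lowering pressure (toy criterion)
    preference=lambda item: 1.0 if item == "vent tank" else 0.1,
)
print(suggested)  # ['vent tank', 'stop pump', 'call maintenance', 'print report']
```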
Human-Computer Cooperation [Gadomski et al., 1995]
[Figure: the emergency manager and an active DSS (an intelligent agent) cooperate through a dialogue of suggestions and explanations. The DSS continuously monitors the emergency domain (images, measured data, Intranet/Internet data acquisition), maintains Information (current data on the emergency domain), Knowledge (rules, models, plans, strategies) and Preferences (risk, roles and resources criteria), and cooperates with human organization experts; the manager takes the intervention decisions.]
Mental errors reduction: the IPK architecture.
EXAMPLE: Development methodology for an IDSS
[Figure: (1) Requirement specification phase - a generic emergency management scenario is decomposed over N levels into models of the ideal emergency manager (Model 1 ... Model n), from which sub-models (Sub-Model 1 ... Sub-Model m) are selected and integrated; (2) Modeling phase - the integration yields the architecture of the IDSS; (3) Prototyping phase - prototypes (Prototype 1 ... Prototype m) around the IDSS kernel undergo verification and validation, with modifications and the definition of new user functions.]
Examples of different ADSSs and IDSSs
FIT, the Institute for Applied Information Technology, Germany's national research center for information technology:
• FABEL - a distributed AI-based support system for complex architectural design tasks; integrates case-based and rule-based methods.
• GeoMed - distributed open geographical information systems, implemented as extensions to the World-Wide Web, which support urban and regional planning as multi-party/multi-goal processes.
• KIKon - a knowledge-based system for the configuration of telecommunication services and customer premise installations.
• ZENO - develops and evaluates AI-based tools for mediation in real-world cooperative planning and design tasks.
Evolution of DSSs - ENEA's example
• 1990 - Passive DSS; information support with large databases. ISEM: Information Technology Support for Emergency Management; multi-actor, large territorial emergencies.
• 1993 - CAT (Computer-Aided Tutoring); recognition of human errors. MUSTER: Multi-User System for Training and Evaluating Environmental Emergency Response (Genoa oil port). Goal: training support for cooperation among emergency managers.
• 1995 - Active DSS; implementation of some mental functions + GIS. CIPRODS: Civil Italian PRotection Overview and Decision Support; supervision of territorial emergencies at the national level.
• 1996 - Active DSS; some mental functions inserted as autonomous software tools with a graphical interface. GEO: emergency management for oil transport devices (lines and deposits).
• 1997/8 - Intelligent DSS; user role modeling; the user must know What?, the system must know How? IDA (Intelligent Decision Advisor); MINDES, a multipurpose agent-based system.
Example 1: A cognitive functional architecture of an ADSS - the system suggests possible actions in a concrete application domain.
EDSS (Emergency Decision Support System) - CIPRODS general architecture [Di Costanzo et al., 1995]
[Figure: symptoms feed a Diagnostic Module ("What happens?"), whose events feed a Predictive Module ("What will happen or could happen?") using algorithms for consequence analysis; the predicted consequences feed a Decisional Module ("What to do?") that proposes actions, drawing on plans and emergency procedures, a database of toxic substances and risk industries, and geographical databases.]
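A minimal sketch of the figure's three-module pipeline, with toy stand-ins for the databases and procedures; all names and rules below are invented for illustration:

```python
def diagnostic(symptoms):
    """Diagnostic Module ("What happens?"): classify symptoms into an event."""
    return "toxic release" if symptoms.get("gas_alarm") else "no event"

def predictive(event, geo_db):
    """Predictive Module ("What will or could happen?"): estimate consequences."""
    return {"affected_zone": geo_db.get(event, "none")}

def decisional(consequences, procedures):
    """Decisional Module ("What to do?"): map consequences to planned actions."""
    return procedures.get(consequences["affected_zone"], ["monitor"])

geo_db = {"toxic release": "sector 7"}
procedures = {"sector 7": ["evacuate sector 7", "alert hospitals"]}
event = diagnostic({"gas_alarm": True})
print(decisional(predictive(event, geo_db), procedures))
# -> ['evacuate sector 7', 'alert hospitals']
```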
Example 2: Schema of function allocation between an active DSS and its user [Balducelli, Gadomski, 1997].
[Figure: composition of one action - from a symptom, the user chooses a procedure from the active event-phase tree; the procedure's task list drives interface modifications (textual, numeric and graphic information for the user); the user issues commands, chooses an interface action, and activates computational tools (icons in the tools menu) for calculations and visualization.]
Example 3: Dynamic human modeling - cooperation training [Balducelli et al., 1994].
[Figure: trainees 1, 2 and 3, each modeled by domain (D), preference (P) and knowledge (K) components, interact with a simulated intervention domain under a tutor (training supervisor).]
Some References
• D. I. Gertman and H. S. Blackman, Human Reliability and Safety Analysis Data Handbook, John Wiley & Sons, ISBN 0-471-59110-6.
• NASA Safety Policy and Requirements Document, NASA Handbook NHB 1700.1 (V1-B), June 1, 1993.
• The most advanced practice: the nuclear power industry and its associated government regulatory agencies - though it does little to broaden the analysis of human reliability to other applications, such as air traffic control and other human-in-the-loop situations.
• IAEA instructions and manuals.
• US Nuclear Regulatory Commission reports.
• T. Kontogiannis, "Stress and Operator Decision Making in Coping with Emergencies", International Journal of Human-Computer Studies (1996), vol. 45.
ENEA
• A. M. Gadomski, V. Nanni, "Intelligent Computer Aid for Operators: TOGA Based Conceptual Framework", Proceedings of the Second International Conference on Automation, Robotics, and Computer Vision, Singapore, September 1992.
• A. M. Gadomski, S. Bologna, G. Di Costanzo, "Intelligent Decision Support for Cooperating Emergency Managers: the TOGA Based Conceptualization Framework", Proceedings of TIEMEC 1995: The International Emergency Management and Engineering Conference, J. D. Sullivan, J. L. Wybo, L. Buisson (Eds.), Nice, May 1995.
• C. Balducelli, S. Bologna, G. Di Costanzo, A. M. Gadomski, G. Vicoli, "Computer Aided Training for Cooperating Emergency Managers: Some Results of the MUSTER Project", Proceedings of the MemBrain Conference, Oslo, 1995.
• A. M. Gadomski, C. Balducelli, S. Bologna, G. Di Costanzo, "Integrated Parallel Bottom-up and Top-down Approach to the Development of Agent-based Intelligent DSSs for Emergency Management", Proceedings of the International Emergency Management Society Conference TIEMS'98: Disaster and Emergency Management, Washington, May 1998.