Visions for Data Management and Remote Collaboration on ITER
M. Greenwald, D. Schissel, J. Burruss, T. Fredian, J. Lister, J. Stillerman
MIT, GA, CRPP
Presented by Martin Greenwald, MIT – Plasma Science & Fusion Center
ICALEPCS 2005, Geneva
ITER is the Next Big Thing in Magnetic Fusion Research
• World's first burning plasma experiment
• To be built and operated as an international collaboration
• Site: Cadarache, France
• Europe, USA, Japan, Russia, Korea, China
• Construction: ~10 years, ~$10B
What Challenges Will ITER Present?
• Fusion experiments require extensive data visualization and analysis in support of between-shot decision making.
• "Shots" are the basic unit of the experiments. For ITER, shots are:
  • ~400 seconds each, perhaps 2,000 per year for 15 years
  • Very expensive on average (order $1M per shot)
• Most data are long time series (or multi-dimensional arrays).
• Today, teams of ~30–100 work together closely during operation.
• Real-time remote participation is standard operating procedure.
Challenges: Experimental Fusion Science is a Demanding Real-Time Activity
• Run-time goals:
  • Optimize fusion performance
  • Ensure plasmas are fully documented before changing conditions
• These goals drive the need to assimilate and assess a large quantity of data between shots.
Challenge: Long Pulse Length
• Concurrent reading and writing; larger data sets
• Greater challenge – integration across time scales
  • Data will span a range > 10⁹ in significant time scales
• Will require efficient tools
  • To browse very long records
  • To locate and describe specific events or intervals

Challenge: Long Life of Project
• 10 years of construction; 15+ years of operation
• Systems must adapt to decades of information-technology evolution and revolution
• Backward compatibility must be maintained
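The two tools called for above — browsing a very long record at screen resolution and locating specific events within it — can be sketched in a few lines of Python. This is a minimal illustration only; the function names and the simple threshold-based event definition are assumptions for the sketch, not part of any proposed ITER interface.

```python
import numpy as np

def build_overview(signal: np.ndarray, max_points: int = 1000):
    """Reduce a long time series to per-bin min/max envelopes so that a
    record with ~10^9 samples can be browsed at screen resolution."""
    if len(signal) <= max_points:
        return signal.copy(), signal.copy()
    bins = np.array_split(signal, max_points)
    lo = np.array([b.min() for b in bins])
    hi = np.array([b.max() for b in bins])
    return lo, hi

def find_events(signal: np.ndarray, threshold: float):
    """Return (start, stop) sample-index intervals where |signal| exceeds
    a threshold -- a stand-in for locating and describing events."""
    above = np.abs(signal) > threshold
    edges = np.diff(above.astype(int))
    starts = list(np.where(edges == 1)[0] + 1)
    stops = list(np.where(edges == -1)[0] + 1)
    if above[0]:
        starts.insert(0, 0)
    if above[-1]:
        stops.append(len(signal))
    return list(zip(starts, stops))
```

A real system would compute such overviews incrementally while the shot is still being written, so browsing and acquisition can proceed concurrently.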
Challenges: International, Remote Participation
• Scientists will want to participate in "live" experiments from their home institutions, dispersed around the world:
  • View and analyze data
  • Manage ITER diagnostics
  • Lead experimental sessions
• Collaborations span many administrative domains:
  • Resource management
  • Troubleshooting / end-to-end problem resolution
• Cyber-security must be maintained; plant security must be inviolable.
We Are Beginning the Dialogue About How to Proceed
• This is not yet an "official" ITER activity.
• What follows is our vision for data management and remote participation systems.
• The opinions expressed here are the authors' alone (but are based on decades of experience).
Strategy: Design, Prototype and Demo
• With 10 years before first operation, it is too early to choose specific implementations – software or hardware.
• Begin now on enduring features:
  • Define requirements, scope of effort, approach
  • Decide on general principles and features of the architecture
• Within 2 years, start on prototypes as part of the conceptual design.
• Within 4 years, demonstrate:
  • Test, especially on current facilities
  • Simulation codes could provide a testbed for long-pulse features
• In 6 years, expand and elaborate the proven implementations to meet requirements.
General Features
• Extensible, flexible, scalable
  • We won't be able to predict all future needs
  • Capable of continuous and incremental improvement
  • Requires a robust underlying abstraction
• Data accessible from a wide range of languages, software frameworks and hardware platforms
• Built-in security
  • Must protect the plant without endangering the science mission
  • Employ the best features of identity-based, application and perimeter security models
  • Strong authentication mechanisms, single sign-on
  • Distributed authorization and resource management
Proposed Top-Level Data Architecture (diagram)
• Components: data acquisition systems, data acquisition control, service-oriented API, analysis applications, visualization applications
• Relational database – contains data searchable by their contents
• Main repository – contains large multi-dimensional data arrays
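The split in the architecture above — searchable metadata in a relational store, bulk arrays in a main repository, both reached through one generic interface — can be illustrated with a toy in-memory service. The class and method names here are hypothetical, chosen only for the sketch; a real implementation would route to actual database and repository back ends.

```python
# Toy sketch of the proposed split: applications see one generic service;
# whether an item lives in the relational side or the bulk repository is
# an implementation detail hidden behind the interface.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DataService:
    metadata: Dict[str, dict] = field(default_factory=dict)    # relational side
    repository: Dict[str, list] = field(default_factory=dict)  # bulk arrays

    def put(self, name: str, samples, **meta) -> None:
        self.repository[name] = list(samples)
        self.metadata[name] = dict(meta, length=len(samples))

    def query(self, **criteria) -> List[str]:
        """Search by the *content* of the metadata, not by storage location."""
        return [n for n, m in self.metadata.items()
                if all(m.get(k) == v for k, v in criteria.items())]

    def get(self, name: str) -> list:
        return self.repository[name]
```

An application would first `query` the metadata to discover what exists, then `get` only the arrays it needs — keeping bulk transfers separate from catalog searches.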
Data System – Contents & Structure
• Coherent, complete, integrated, self-descriptive view of all data, visible through simple interfaces
  • All raw, processed and analyzed data; configuration, geometry, calibrations, data acquisition setup, code parameters, labels, comments…
  • No data in applications or private files
  • Metadata stored for each data element
• Logical relationships and associations among data elements are made explicit by structure (probably multiple hierarchies).
  • Data structures can be traversed independent of reading data.
• Capable data directories (10⁵–10⁶ named items)
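The key property above — that the hierarchy and its per-element metadata can be traversed without reading any bulk data — can be sketched as a tree of nodes whose data are loaded lazily. The `Node` class and its methods are illustrative inventions for this sketch, not a proposed API.

```python
class Node:
    """One element in a hierarchical data directory.  The structure
    (children, metadata) can be walked without touching bulk data,
    which is fetched lazily through a reader callable."""
    def __init__(self, name, reader=None, **metadata):
        self.name = name
        self.metadata = metadata
        self.children = {}
        self._reader = reader

    def add(self, child):
        self.children[child.name] = child
        return child

    def walk(self, path=""):
        """Yield (path, metadata) pairs -- no data is read here."""
        here = f"{path}/{self.name}"
        yield here, self.metadata
        for c in self.children.values():
            yield from c.walk(here)

    def data(self):
        """Read the bulk data only when actually requested."""
        return self._reader() if self._reader else None
```

With 10⁵–10⁶ named items, this separation matters: a directory listing or metadata search should never force a read of the underlying multi-dimensional arrays.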
Data System – Abstractions
• Service oriented
  • Loosely coupled applications, running on distributed servers
  • Interfaces simple and generic; implementation details hidden
  • Transparency and ease-of-use are crucial
  • Applications specify what is to be done, not how
  • Data structures shared
  • Service discovery supported
• Data driven
  • All parameters in the database, not embedded in applications
  • Data structure, relations and associations are data themselves
  • Processing can be sensitive to data relationships and to the position of data within the structure
  • Data acquisition and processing "tree" maintained as data
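The "data driven" idea — parameters and processing steps stored in the database rather than embedded in applications — can be shown with a tiny store in which an item is either a value or a stored rule evaluated at access time. The `DataStore` class, the use of a plain callable as the stored rule, and all item names are assumptions of this sketch only.

```python
# Sketch: an analysis step lives in the database as a rule referring to
# other named items, instead of being hard-coded into an application.
class DataStore:
    def __init__(self):
        self.items = {}

    def put(self, name, value):
        self.items[name] = value

    def get(self, name):
        value = self.items[name]
        if callable(value):        # stored processing step, evaluated on access
            return value(self)
        return value

store = DataStore()
store.put("raw_current", [2.0, 4.0, 6.0])
store.put("gain", 0.5)             # calibration kept as data, not in code
# processed signal stored as a rule over other data elements
store.put("current",
          lambda s: [x * s.get("gain") for x in s.get("raw_current")])
```

Because the calibration is itself data, updating `gain` changes every downstream result with no application code touched — which is the point of keeping the processing "tree" in the database.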
Data System – Higher Level Organization
• All part of the database, all indexed into the main data repository:
  • High-level physics analysis
  • Scalar and profile databases
  • Event identification, logging & tracking
  • Integrated and shared workspaces
  • Electronic logbook
  • Summaries and status
    • Runs
    • Task groups
    • Campaigns
  • Presentations & publications
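The electronic logbook and event tracking listed above amount to records that are themselves data, indexed back into the repository by shot and time interval. A minimal sketch, with field and class names invented for illustration:

```python
# Sketch: logbook entries and identified events are database records,
# indexed into the main repository by shot number and time interval.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LogEntry:
    shot: int
    t_start: float     # seconds into the shot
    t_stop: float
    category: str      # e.g. "event", "run summary", "comment"
    text: str

class Logbook:
    def __init__(self):
        self.entries: List[LogEntry] = []

    def add(self, entry: LogEntry) -> None:
        self.entries.append(entry)

    def search(self, shot: Optional[int] = None,
               category: Optional[str] = None) -> List[LogEntry]:
        return [e for e in self.entries
                if (shot is None or e.shot == shot)
                and (category is None or e.category == category)]
```

Because entries carry shot and interval indices, a query for "all logged events in shot N" can jump straight to the corresponding stretch of the raw data.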
Remote Participation: Creating an Environment That Is Equally Productive for Local and Remote Researchers
• Transparent remote access to data
  • Secure and timely
• Real-time information
  • Machine status
  • Shot cycle
  • Data acquisition and analysis monitoring
  • Announcements
• Shared applications
• Provision for ad hoc interpersonal communications
• Provision for structured communications
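Pushing real-time information (machine status, shot cycle, announcements) to remote participants is naturally a publish-subscribe pattern. The hub below is a deliberately tiny in-process sketch; a real system would carry these messages over a network message bus, and all names here are hypothetical.

```python
# Minimal publish-subscribe hub: local and remote participants subscribe
# to topics such as machine status or shot-cycle announcements.
class StatusHub:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers.get(topic, []):
            cb(message)
```

The same mechanism serves local control-room displays and remote sites alike, which is one way to keep the two environments equally productive.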
Remote is Easy, Distributed is Hard
• Informal interactions in the control room are a crucial part of the research.
• We must extend these interactions into remote and distributed operations.
• Fully engaging remote participants is challenging.
• Fortunately, we already have substantial experience.
Remote Participation: Ad Hoc Communications
• Exploit the convergence of telecom and internet technologies (e.g. SIP)
• Deploy integrated communications
  • Voice
  • Video
  • Messaging
  • E-mail
  • Data streaming
• Advanced directory services
  • Identification, location, scheduling
  • "Presence"
  • Support for "roles"
Summary
• While ITER operation is many years in the future, work on the systems for data management and remote participation should begin now.
• We propose:
  • All data in a single, coherent, self-descriptive structure
  • Service-oriented access
  • Data-driven applications
  • Remote participation fully supported
    • Transparent, secure, timely remote data access
    • Shared applications
    • Capable tools for ad hoc interpersonal communications