Computing Panel Discussion Continued
Marco Apollonio, Linda Coney, Mike Courthold, Malcolm Ellis, Jean-Sebastien Graulich, Pierrick Hanlet, Henry Nebrensky
CM26, Mar 26, 2010
What We Need to Do
• MICE needs to run the DAQ to record data, monitor the equipment, and control the various components of the experiment.
• We need to monitor data-taking and reconstruct live data to ensure data quality.
• We need to archive the actual running parameters.
• We need a Database that automatically keeps track of run conditions and feeds them into the analysis software.
• The analysis software needs to be able to reconstruct the data and run simulations of the experiment.
• To meet these requirements, MICE needs network access both at RAL and from outside of RAL.
What We Need to Do (2)
• At RAL:
• We need access to off-site computers and the ability to copy files to those computers.
• We need access to the web, webmail, and other email sources.
• We need to be able to work on other computers as if we were there in person (VPN).
• We need a system to maintain code development: CVS for analysis, Bazaar for C&M.
• We need to allow MICE at RAL to read and write to the eLOG.
• We need access to the Database (through an interface – an API) for analysis. The interaction between the user and the interface should remain transparent even if the interface has to change how it talks to the Database.
What We Need to Do (3)
• From outside of RAL:
• We need to allow expert access to MICE machines in the control room and on the PPD network.
• Experts must be able to access the data in real time to debug.
• Some experts must be able to remotely display a terminal from the MLCR on a remote computer to make changes; others need to remotely display information for monitoring purposes (e.g. EPICS, DS cooldown, Target calibration data).
• Offsite experts need to be able to see web-based information such as the webcams and the CKOV temperature/humidity monitor.
• External MICE must be able to read/write the eLOG.
• External MICE also need access to the Database.
• Access details:
• PPD Network CAN see MiceNet
• Visitors Network CANNOT see MiceNet
• Outside of RAL cannot see MiceNet directly
• Federal ID → into RAL as if on PPD (e.g.)
How We Do All of This
• Hardware
• Machines in MLCR, Lab7 (tracker), R76 (MICE Office), R78 (Target test area), visitor laptops, Analysis Machine/Farm?
• SSH Bastion
• Web Services Machine
• Database Machine
• Networks used
• MiceNet – control room network
• PPD – RAL Particle Physics Department network
• Visitors Network (Guest)
• Networks available in MLCR – PPD, ISIS
External Connectivity (MICO slide)
[Diagram: the SSH Bastion links the outside world (ssh) to MiceNet/MLCR. Alongside it sit an EPICS Gateway, the Config Database with its DB API, "web" services (eLog, EPICS archiver web interface; PPD-Grid managed), a spare node for SSH/web services, a Grid Transfer Box with Grid clients (MICE managed, PPD-IT supervised), and an Analysis Node/Farm reachable by ssh with the analysis code.]
Henry Nebrensky – CM26 – 24 March 2010
What We Are Doing
• Replicating the functionality of heplnw17 – three new servers already in R1
• MICE SSH Bastion
• Provides access into the RAL site; once in, it is as if you were on the PPD network, and from there you can get to MiceNet
• No direct access to MiceNet
• Copy files out (i.e. data) – note: ask Mike about SCP out?
• Bring xterm windows through to the outside (X forwarding)
• ON PPD NETWORK
• Will provide all access requested so far…
• Web Services Machine
• eLOG (if we still want it at RAL)
• ON PPD NETWORK → least downtime possible
• MICE have direct access; global MICE read/write access
• User interface for the Database
• Forwarding web-related traffic from the control room to the IIT server (e.g. the webcams)
• Database Machine
• ON PPD NETWORK
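The access pattern above (log in to the bastion, hop from there to a MiceNet/PPD host, bring X windows out, copy data out) can be sketched as command-line construction. The hostnames and the `-J` jump-host style below are illustrative assumptions, not the actual machine names or the mandated procedure:

```python
# Build (but do not execute) ssh/scp command lines for the bastion hop.
# All hostnames are hypothetical placeholders.

BASTION = "mice-bastion.example.ac.uk"  # assumed name for the MICE SSH Bastion

def ssh_via_bastion(user, target, x_forwarding=False):
    """ssh to an inner host, jumping through the bastion (-J)."""
    cmd = ["ssh", "-J", f"{user}@{BASTION}"]
    if x_forwarding:
        cmd.append("-X")  # bring xterm windows out, as on the slide
    cmd.append(f"{user}@{target}")
    return cmd

def scp_data_out(user, target, remote_path, local_path):
    """Copy a data file outward through the bastion with scp -J."""
    return ["scp", "-J", f"{user}@{BASTION}",
            f"{user}@{target}:{remote_path}", local_path]

print(" ".join(ssh_via_bastion("alice", "micenet-host", x_forwarding=True)))
```

The same hop can be made permanent with a `ProxyJump` entry in `~/.ssh/config`, which is what "no direct access to MiceNet" implies: every inbound connection passes through the bastion first.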
What We Are Doing (2)
• SSH Bastion
• Malcolm has kindly agreed to pick this up (with written approval from management)
• Will speak to the RAL PPD contact on Tuesday to push through the firewall ports
• IDS meeting at Fermilab – then back to finish; ETA beginning of May
• Move the Windows machine out of the MLCR into R76
• Not needed in the control room
• User machine on the wired Visitors Network
• Provide an Analysis Machine/Farm at RAL?
• Must be on the PPD network to use the GRID
• Federal IDs for MICE
• Longer-term process
• Longer-term access (the SSH Bastion provides an easy way to give a new person access at short notice)
• Should solve the new Guest access issues
• ≤1 month now
Computing Support
• Who is responsible for each machine/system?
• The 3 new servers in R1 are on the PPD Network
• Requesting official PPD support for these machines: MICE SSH Bastion, Web Services Machine, Database Machine
Computing Support
• Who is responsible for each machine/system?
• MLCR machines
• C&M = James Leaver
• GRID = Henry
• DAQ = Jean-Sebastien Graulich
• OnRec = Linda Coney
• Target = James?
• Linde computer = Linde
• Willie’s laptop = Craig, Malcolm?
• Webcams & printer = Craig
• Other
• MOM laptop = Malcolm
• Ash computer = PPD
• Lab 7 = FNAL & James & Geneva?
• Visiting laptops = visitors
• Backups…
Roles to Define
• Database Administration
• eLOG Management
• Web cams – Craig?
• MICE Network Administrator/liaison – Mike Courthold, Craig Macwaters (deputy), Henry Nebrensky
• Backup Management
• GRID
• Data Transfer – Henry Nebrensky
• Software Management – Vassil Verguilov
• Data Archivist – Henry Nebrensky
• Repository Management
• MICE software updates (EPICS, G4MICE, etc.)
• Simulation Production
• Real Data Production
• Analysis Production
Questions for You
• What are the requirements for future systems?
• Spectrometer Solenoid
• How will controls/monitoring be handled?
• What expert access will be needed?
• LH2
• Is this handled by RAL?
• RF
• Same questions as for the Spectrometer Solenoid