MEG Software Status
Framework for MC, Schedule, DC reconstruction update, and Discussion on a ROOT-based offline
MEG Software Group
Question from reviewer • Give a common goal to be achieved by the next review: “I have appreciated the presentation of the offline software team, with a team leader etc... As promised I come forth with the question: could you give us -- and yourself -- a common goal to be achieved with that software in time for the next review?”
Our goals were: • MC unified framework • With double option: Tokyo or Pisa • Large Prototype MC unification • CEX beam test • Same offline package for DATA & MC • Event Display • Initially GEANT3
MC Framework for MEG • Unified framework with double option: • Tokyo or Pisa code. • Tree structure & skeleton, • Merged materials & tracking medium, • Stand alone & built-in event generator, • DC, TC, Magnet and B field are merged. • Framework based on: • Fortran 77, • CERNLIB, • ZEBRA I/O, • GEANT3. • Distributed via CVS.
To be done: MEG MC • I/O • ZEBRA structure … in progress • Digitization • Liq. Xe • Geometry … in progress • Scintillation photon tracking … in progress • Full reconstruction program • Offline database … ODB? Separate database (e.g. PostgreSQL, MySQL, etc.)?
Large Proto CEX beam test • MC merged into the new framework … just finished. • Common output format for DATA/MC • Same offline package for DATA & MC • Scintillation photon tracking in Liq. Xe … full ray tracing or analytical tracking • Rayleigh scattering/absorption in Liq. Xe • Reflection on the PMT quartz window … Fresnel or total reflection • Scintillation light spectrum … monochromatic, Gaussian, Basov et al., etc.
To be done: beam test • NaI response simulation • LiH target and Vessel simulation • Phase space simulation for π0 • Offline database … ODB? Separate database (e.g. PostgreSQL, MySQL, etc.)?
Schedule/man power • By end of April 2004 • Liq. Xe geometry merging … S.Yamada/F.Cei • ZEBRA data structure … P.Cattaneo • By end of June 2004 • MC output with digitized waveform … XE: S.Yamada/F.Cei, DC: H.Nishiguchi, TC: P.Cattaneo • Summer DC beam test will be performed using the unified MC
DC reconstruction • Track Dictionary • Efficiency • Resolution
The dictionary concept: several tracks with similar kinematics produce a single hit pattern. The hit pattern is encoded as a single, unique string (i.e. a dictionary key), whose record stores the average (over the set of tracks) track parameters.
Building the dictionary • MC sample used to build the dictionary: • Positrons from Michel decay; • Unpolarized muons; • Generator level cuts: 0.08 < |cosθ| < 0.35;-60° < φ < 60° . • No Tdrift used, • No Z measurement used yet.
250,000 generated events → 12,900 patterns; efficiency = 95%. The population of the patterns is not uniform: 40% have 1 entry, 43% have 2–10 entries, 13% have 11–50 entries, 4% have more than 50 entries. (Plot: number of events in a dictionary record.)
Momentum components in the dictionary. (Plots, LEFT and RIGHT panels: event-by-event distributions and the average in each dictionary record, for Px, Py, Pz in MeV.) A track's first turn has hits in at least three sectors. A hit pattern cannot tell the sign of Pz.
Dictionary “resolution” • Generate a sample of independent events • For tracks in the dictionary acceptance (Nsectors > 2), find the dictionary key • Compare, e.g., Px with <Px>(key); normalize to RMS<Px>. (Plots: (pMC − <pdict>)/σ for vertex X, Y, Z and Px, Py, Pz.)
To be done: dictionary • Optimize statistics, guided by 1 − eff. ~ 10^−3–10^−4 and by looking at RMS vs. statistics (intrinsic resolution of the method); • Add noise hits; • Add Drift Chamber inefficiency; • Add drift time; • Superimpose tracks.
Discussion on a ROOT-based offline system • Fully OO framework, including: • all needed functionality present (from data taking to final plots, geometry package with integrated event display) • transparent LAN, WAN and HPSS support • parallel computation (PROOF) • interface to SQL/RDBMS • interface to GEANT3 with migration tools • Extensive CERN support • ROOT I/O exceeds the MEG requirement (raw data throughput: ~3.5 MB/s) • Max ROOT I/O throughput: ~51 MB/s (from FNAL test) • Scalable to cope with trigger rate uncertainties
MONARC for computing & data model • A central site, Tier-0 • Prompt calibration, reconstruction and run info • Regional centers, Tier-1/2 • Reprocessing • MC production • Mirror locally the reconstructed objects • Tier-3/4 centers • Analysis • Offline system reads raw data streams via rootd and tree output streams: • Raw event database (to HPSS) • Tag database (on disk) • Run Catalog database (on RDBMS)
The Run Manager and the VMC. (Architecture diagram:) The MEG offline main program sits on STEER, which provides run management, interface classes, detector base classes and data-structure base classes; detector modules (DCH, TOF, EMC) plug into it. Event generation goes through EVGEN (PYTHIA6, HIJING, …). External packages: MICROCERN, ROOT, Geant3, Geant4; the simulation engines are accessed through the Virtual Monte Carlo layer (Geant3 VMC, Geant4 VMC).