Search Pipelines for Binary Inspiral
Duncan Brown, Inspiral Working Group, University of Wisconsin-Milwaukee
LIGO-G040107-00-Z
S2 Inspiral Pipeline
[Slide sequence: flowchart of the S2 inspiral pipeline, stepping through its stages in turn: lalapps_tmpltbank (template bank generation), lalapps_inspiral (matched filtering), and lalapps_inca (inspiral coincidence analysis), with the final triggers written to a LIGO_LW XML file.]
Pipeline Infrastructure Requirements
• Ensure that all data is analyzed
• Automate the pipeline as much as possible
• Provide a flexible pipeline for testing and tuning
• Allow easy construction of complex workflows
• Simple, reusable infrastructure
• Easy to debug
Pipeline Implementation
• Condor manages job submission to the cluster
• lalapps code executes the components of the pipeline
• Uses LAL functions for GW analysis
• Condor DAGMan manages execution of the pipeline (see the DAG file sketch below)
• Standard file types for I/O
• Reads AS_Q and calibration from frame data
• Writes triggers as LIGO_LW XML
• Can write r(t), χ²(t), PSD, and filter data as frames
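For orientation, here is a minimal sketch of the kind of DAG description file that a pipeline script might generate for DAGMan. The file names, job names, and dependency structure are hypothetical illustrations, not the actual S2 configuration:

# mydag.dag (hypothetical): one datafind job feeding two inspiral jobs
JOB datafind_1 datafind.sub
JOB inspiral_1 inspiral.sub
JOB inspiral_2 inspiral.sub
PARENT datafind_1 CHILD inspiral_1 inspiral_2

Each JOB line names a Condor submit file; the PARENT/CHILD line tells DAGMan not to submit the inspiral jobs until the datafind job has completed.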
Creation of the DAG
• Simple Python modules in lalapps to build scripts that write the pipeline
• lalapps/src/lalapps/pipeline.py
  • Read segwizard files
  • Manipulate science segments (union, intersection, inverse); see the sketch after this list
  • Create Condor jobs and DAGs
• lalapps/src/inspiral/inspiral.py
  • Construction of DAG nodes specific to inspiral
• lalapps/src/inspiral/inspiral_pipe.in
  • Use the building blocks to construct the pipeline
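To illustrate the segment operations these modules provide, here is a minimal self-contained Python sketch, assuming the usual four-column segwizard format (segment id, GPS start, GPS end, duration). It shows the idea only; it is not the pipeline.py implementation:

# Read a segwizard file: each non-comment line is "id start end duration".
def read_segwizard(path):
    segs = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith('#'):
                fields = line.split()
                segs.append((int(fields[1]), int(fields[2])))
    return segs

# Intersection of two sorted, disjoint lists of (start, end) segments.
def intersect(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        start = max(a[i][0], b[j][0])
        end = min(a[i][1], b[j][1])
        if start < end:
            out.append((start, end))
        # Advance whichever segment ends first.
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

Union and inverse (complement within an observation span) follow the same pattern of walking two sorted segment lists.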
Putting It All Together

# Read science segments and split them into analysis chunks.
data = pipeline.ScienceData()
data.read('segwizard.txt', 2048)
data.make_chunks(length, overlap, isplay)

# Create the DAG and the Condor job definitions.
dag = pipeline.CondorDAG('mydag.dag')
datafind_job = pipeline.LSCDataFindJob()
inspiral_job = inspiral.InspiralJob()

# One datafind node per science segment,
# one inspiral node per chunk within it.
for seg in data:
    df = pipeline.LSCDataFindNode(datafind_job)
    df.set_start(seg.start())
    df.set_end(seg.end())
    dag.add_node(df)
    for chunk in seg:
        insp = inspiral.InspiralNode(inspiral_job)
        insp.set_start(chunk.start())
        insp.set_end(chunk.end())
        insp.add_parent(df)
        dag.add_node(insp)

dag.write()
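Once the script has written mydag.dag, the workflow is handed to DAGMan for execution. The submission step would look something like this (file name as in the sketch above):

condor_submit_dag mydag.dag

DAGMan then submits each node's Condor job as its parents complete, and its rescue-DAG mechanism allows a partially failed workflow to be resumed rather than rerun from scratch.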
Conclusions
• Use of Condor DAGMan has been very successful
  • Simplifies management of the analysis workflow
  • More time to concentrate on scientific questions
• Infrastructure written in lalapps is simple to use
  • Python modules are documented in the lalapps documentation
• Reusable code
  • LIGO/TAMA inspiral analysis (Steve Fairhurst)
  • Stochastic lalapps pipeline (Adam Mercer)
• Fast, simple, efficient!