Overview of Recent MCMD Developments
Manojkumar Krishnan
January CCA Forum Meeting, Boulder
MCMD Working Group
• 2007 activities focused on developing specifications for CCA-based processor groups/teams
• BOFs held during the CCA meetings in April and July 2007
• Mini-workshop held January 24, 2007
  • Use cases documented and analyzed
• Wiki page and mailing list: https://www.cca-forum.org/wiki/tiki-index.php?page=MCMD-WG
• Specifications document, version 0.4
  • Discussed during the September and November HPC initiative telecons
  • Several other people sent good comments by email
  • Open issues: threads, fault-tolerant environments, the MPI-centric narrative and examples, and ID representation
• Recent developments
  • Prototype implementation
  • Application evaluation: NWChem, subsurface
Multiple Component Multiple Data (MCMD)
• MCMD extends the SCMD (single component multiple data) model that was the main focus of CCA in SciDAC-1
• A prototype solution for computational chemistry was described at SC'05
• Allows different groups of processors to execute different CCA components
• The main motivation for MCMD is support for multiple levels of parallelism in applications
[Figure: MCMD vs. SCMD execution models, with an NWChem example]
MCMD Use Cases
• Coop parallelism
• Hierarchical parallelism in computational chemistry
• Ab initio nuclear structure calculations
• Coupled climate modeling
• Molecular dynamics, multiphysics simulations
• Fusion use case described at the Silver Spring meeting
Target Execution Model and Global IDs
• Global ID specification:
  global id = <machine id> + <job id> + <task/process rank> + <thread id>
[Figure: single or multiple mpiruns, each containing MPI tasks/processes, each containing threads]
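To make the composition concrete, here is a minimal C++ sketch of a global ID as a tuple of the four fields above. The struct name and layout are hypothetical illustrations; the spec defines only the logical composition.

// Minimal sketch of the global-ID composition above; names are hypothetical.
#include <cstdint>
#include <cstdio>

struct GlobalId {
    int32_t machineId;  // machine the process runs on
    int32_t jobId;      // parallel job (e.g., which mpirun)
    int32_t taskRank;   // task/process rank within the job
    int32_t threadId;   // thread within the process, if threads are supported
};

int main() {
    GlobalId id = {0, 1, 7, 0};  // machine 0, job 1, task 7, thread 0
    std::printf("global id = <%d> + <%d> + <%d> + <%d>\n",
                id.machineId, id.jobId, id.taskRank, id.threadId);
    return 0;
}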
Group Management
• Various execution models
  • E.g., coop parallelism vs. a single mpirun
• Programming models
  • Should be MPI-friendly but also open to other models
  • MPI, threads, GAS models including GA and UPC, HPCS languages
• Global process and team IDs
• Group translators (see the MPI sketch below)
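One plausible job for a group translator is mapping a CCA team onto the native group construct of a programming model. A minimal sketch for MPI, assuming the team's members can be expressed as ranks in MPI_COMM_WORLD; the function name is hypothetical.

// Hypothetical translator: build an MPI communicator from a CCA team's
// member list, assuming members are expressible as MPI_COMM_WORLD ranks.
#include <mpi.h>

MPI_Comm team_to_comm(const int *world_ranks, int n) {
    MPI_Group world_group, team_group;
    MPI_Comm  team_comm;

    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    MPI_Group_incl(world_group, n, world_ranks, &team_group);
    // Collective over MPI_COMM_WORLD; returns MPI_COMM_NULL on non-members.
    MPI_Comm_create(MPI_COMM_WORLD, team_group, &team_comm);

    MPI_Group_free(&team_group);
    MPI_Group_free(&world_group);
    return team_comm;
}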
CCA Processor Teams
• We propose the slightly different term process(or) "teams" rather than "groups"
  • Avoids confusion with existing terminology and interfaces in programming models
• Some use cases call for something more general than MPI groups, e.g., COOP with multiple mpiruns
  • For example, a CCA team can encompass a collection of processes in two different MPI jobs; we cannot construct a single MPI group corresponding to that
• Operations on CCA teams might not have a direct mapping to group operations in programming models that support groups
[Figure: a CCA process team spanning MPI groups in two MPI jobs, A and B]
CCA Team Service • How do initialize the application? • COOP example makes it non-trivial • Provides the following • Create, destroy, compare, split teams • More capabilities can be added as required • Assigns global ids to tasks from one or more jobs running on one or more machines • Global id = <machines id> + <job id> + < task id> • Also, <thread id> if we were to support threads at component level in the future • Locality Information • Gets the job id, machines id, task id of the given task
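A small illustration of these queries, as a fragment that would sit inside a driver like the one shown later; only calls that appear in the prototype are used, and ts is the mcmd::TeamService port.

// Fragment: query the MCMD environment through the TeamService port `ts`.
int32_t njobs = ts.jobCount();  // number of parallel jobs (e.g., mpiruns)
int32_t grank = ts.gRank();     // this task's global rank across all jobs
int32_t gsize = ts.gSize();     // total number of tasks across all jobs
printf("task %d of %d across %d jobs\n", grank, gsize, njobs);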
Example Coupled System
[Figure: an ocean model (PVM Job A), a land model (MPI/GA Job B), and I/O, each with its own PVM, GA, or MPI process group, all spanned by a global CCA team]
Prototype Implementation – MCMD Specification
• Based on spec 0.4
  • Version 0.4 available on the wiki; please review and contribute
• Proof of concept
  • It works!
  • Not really high performance (future work)
• MCMD initializer
• MCMD TeamService (port?) and classes
  • class Team
  • ProcessID – stores the global ID and other info
• Create and manage teams and parallel jobs
MCMD Initialization
• One or more parallel jobs
  • E.g., COOP style
• Init()
  • All processes must participate
• MCMD barrier and locks
  • File based
• Job file as input (see the reader sketch after this list)
  • Similar to a machinefile or hostfile
  • The MCMD environment is initialized based on this information
  • Format:
      <njobs>
      <job id> <number of procs>
      <job id> <number of procs>
      .....
  • For example, three jobs of 64, 128, and 64 processes:
      3
      0 64
      1 128
      2 64
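For concreteness, a minimal sketch of reading this job file; the parser is illustrative, since the prototype's actual file handling is not shown here, and the file name is hypothetical.

// Hypothetical reader for the job file format above: first line is the
// number of jobs, then one "<job id> <number of procs>" pair per line.
#include <cstdio>
#include <fstream>
#include <vector>

struct JobEntry { int jobId; int nprocs; };

std::vector<JobEntry> readJobFile(const char *path) {
    std::vector<JobEntry> jobs;
    std::ifstream in(path);
    int njobs = 0;
    if (!(in >> njobs)) return jobs;          // empty or bad file -> no jobs
    for (int i = 0; i < njobs; i++) {
        JobEntry e;
        if (!(in >> e.jobId >> e.nprocs)) break;
        jobs.push_back(e);
    }
    return jobs;
}

int main() {
    // With the 3-job example file, prints jobs (0,64), (1,128), (2,64).
    for (const JobEntry &e : readJobFile("mcmd.jobs"))
        std::printf("job %d: %d procs\n", e.jobId, e.nprocs);
    return 0;
}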
MCMD TeamService
• Classes and operations in the prototype:
  • Team: proclist, rank, rank2, size, split, compare, merge, create, destroy, jobCount, joblist, jobSize, jobProcList
  • ProcessID: rank, machineId, jobId, procId, create
  • TeamService (port): gRank, globalID, globalID2, gSize, jobCount, jobSize, jobProcList
Molecular Dynamics Application
• How can applications effectively exploit the massive amount of hardware parallelism available in petaflop-scale machines?
  • Massive numbers of CPUs in future systems require algorithm and software redesign to exploit all available parallelism
• Molecular dynamics example (see the team sketch after this list)
  • Multilevel parallelism: divide work into parts that can be executed concurrently on groups of processors
  • Can exploit massive hardware parallelism
  • Increases the granularity of computation => improves overall scalability
[Figure: the MD workload divided into concurrent tasks, MD Task 1 and MD Task 2, each on its own processor group]
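A sketch of the two-task pattern using the TeamService calls from the driver below. runMDTask1/runMDTask2 are hypothetical placeholders, and the sidl::array construction is an assumption about the type of the ranks argument, which the driver elides.

// Fragment: split all processes into two teams, one per MD task.
int32_t grank = ts.gRank();
int32_t gsize = ts.gSize();
int32_t half  = gsize / 2;
bool firstTask = (grank < half);

// Member list for this process's team, as global ranks (sidl::array assumed).
int32_t teamsize = firstTask ? half : gsize - half;
sidl::array<int32_t> ranks = sidl::array<int32_t>::create1d(teamsize);
for (int32_t i = 0; i < teamsize; i++)
    ranks.set(i, firstTask ? i : half + i);

mcmd::Team team = ts.create(ranks, teamsize);
if (firstTask) runMDTask1(team);  // MD Task 1 on ranks [0, half)
else           runMDTask2(team);  // MD Task 2 on ranks [half, gsize)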
MCMD driver

int32_t mcmd::Driver_impl::go_impl () {
  // DO-NOT-DELETE splicer.begin(mcmd.Driver.go)
  gov::cca::Port mcmdport = svc.getPort("mcmdport");
  if (mcmdport._not_nil()) {
    mcmd::TeamService ts = ::babel_cast< mcmd::TeamService >(mcmdport);

    // initialize XM's mcmd service – implementation specific for now
    xm::TeamService ts_xm = ::babel_cast< xm::TeamService >(ts);
    ts_xm.init();

    int32_t jobcnt = ts.jobCount();
    int32_t grank  = ts.gRank();
    int32_t gsize  = ts.gSize();
    // ... (build the ranks list and teamsize) ...
    mcmd::Team t1 = ts.create(ranks, teamsize);
    // ...
    printf("%d: A new team is created. size=%d, rank=%d\n",
           grank, t1.size(), t1.rank());
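Note the two-level cast in the driver: it first narrows the generic gov::cca::Port to the spec-level mcmd::TeamService, and only the initialization step casts further to the implementation-specific xm::TeamService, so the rest of the driver stays implementation-neutral.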
Ongoing (and Future) Work
• "Getting NWChem to Petascale" meeting
  • Internal meeting of the CS and chemistry groups
  • Most of the modules are group-aware
• NWChem components are now part of the release (production version)
• Current implementation is not high performance
  • Explore low-level network APIs: Elan, OpenIB, etc.
  • No sockets!
• MCMD application generator
• Dynamic MCMD environment
  • Similar to MPI-2
• Task-based parallelism