PETSc and Neuronal Networks
Toby Isaac
VIGRE Seminar, Wednesday, November 15, 2006
Tips for general ODEs
• Recall: have an input program to convert data to PETSc binary format
• e.g.: a Vec for initial values, a Mat for a linear ODE, an adjacency/connectivity Mat
• PetscBinaryView for arrays of scalars (see exampleinput.c)
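For concreteness, here is a minimal sketch of such an input program. It is written against a recent PETSc API (in older releases, VecDestroy and PetscViewerDestroy took the object rather than its address), and the file name "initial.dat" and the vector contents are invented for illustration:

#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec         x;
  PetscViewer viewer;

  PetscInitialize(&argc, &argv, NULL, NULL);
  /* a hypothetical vector of initial membrane potentials */
  VecCreateSeq(PETSC_COMM_SELF, 100, &x);
  VecSet(x, -65.0);
  /* write x in PETSc binary format; VecLoad reads it back later */
  PetscViewerBinaryOpen(PETSC_COMM_SELF, "initial.dat", FILE_MODE_WRITE, &viewer);
  VecView(x, viewer);
  PetscViewerDestroy(&viewer);
  VecDestroy(&x);
  PetscFinalize();
  return 0;
}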
Tips for general ODEs
• To keep scalar parameters organized (end time, dt, # cells, etc.), use a PetscBag:
• Allows you to save a struct in binary and read it in on all processors
• No need to keep track of the order in which scalars are written/read
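A sketch of how a PetscBag might hold these parameters; the struct fields and defaults are hypothetical, and the signatures follow recent PETSc:

#include <petscbag.h>

typedef struct {
  PetscReal tfinal, dt;
  PetscInt  ncells;
} Params;

PetscErrorCode CreateParamBag(PetscBag *bag, Params **p)
{
  PetscBagCreate(PETSC_COMM_WORLD, sizeof(Params), bag);
  PetscBagGetData(*bag, (void **)p);
  /* register each field with a default value and a help string */
  PetscBagRegisterReal(*bag, &(*p)->tfinal, 100.0, "tfinal", "end time");
  PetscBagRegisterReal(*bag, &(*p)->dt, 0.01, "dt", "time step");
  PetscBagRegisterInt(*bag, &(*p)->ncells, 1000, "ncells", "number of cells");
  /* PetscBagView(bag, viewer) saves the whole struct in binary;
     PetscBagLoad(viewer, bag) reads it in on every process */
  return 0;
}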
Tips for general ODEs
• Recall: using “Load” functions, parallel layout is specified at read-in…
• Except for arrays: these go only to the first processor
• Use MPI_Bcast to send those arrays to all processors
• e.g.: a piecewise-constant injected current
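A sketch of the broadcast, assuming rank 0 has read a plain array (here a hypothetical piecewise-constant injected-current profile) out of the binary file:

#include <petscsys.h>

PetscErrorCode BroadcastCurrent(PetscScalar *Iinj, PetscInt n)
{
  /* after this call every process holds a copy of rank 0's array;
     MPIU_SCALAR is PETSc's MPI datatype for PetscScalar */
  MPI_Bcast(Iinj, (PetscMPIInt)n, MPIU_SCALAR, 0, PETSC_COMM_WORLD);
  return 0;
}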
Tips for general ODEs
• A TS object keeps track of the settings for time-stepping
• Same old song: TSCreate and TSDestroy
• TSSetType: forward Euler, backward Euler, “ode45”, (pseudo-timestepping)
• TSSetProblemType: linear, nonlinear
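A minimal setup sketch; TSEULER, TSBEULER, TSRK, and TSPSEUDO are the PETSc type names for the methods listed above, TSRK being the adaptive Runge-Kutta closest to MATLAB's ode45:

#include <petscts.h>

PetscErrorCode CreateStepper(TS *ts)
{
  TSCreate(PETSC_COMM_WORLD, ts);
  TSSetType(*ts, TSRK);                /* or TSEULER, TSBEULER, TSPSEUDO */
  TSSetProblemType(*ts, TS_NONLINEAR); /* vs. TS_LINEAR */
  return 0;
}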
Tips for general ODEs
• TSSetSolution: set initial conditions
• TSSetRHSFunction/TSSetRHSMatrix:
• The specified function has the form rhsfunc(ts, t, u, du, void *additional arguments)
• Create a struct for passing the additional arguments
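A sketch of such a callback with a context struct; the parameter gleak and the leak dynamics are invented. In recent PETSc the context is attached with TSSetRHSFunction(ts, r, RHSFunction, &ctx), where r is an extra residual Vec the 2006 interface did not take:

#include <petscts.h>

typedef struct {
  PetscReal gleak; /* hypothetical model parameter */
} AppCtx;

PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec u, Vec du, void *ptr)
{
  AppCtx *ctx = (AppCtx *)ptr;
  /* du = -gleak * u: a linear leak, just to show the pattern */
  VecCopy(u, du);
  VecScale(du, -ctx->gleak);
  return 0;
}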
Tips for general ODEs
• TSSetRHSJacobian, if the method calls for it
• TSSetInitialTimeStep (that is, initial time and initial time step)
• TSSetDuration
• TSRKSetTolerance: controls the absolute error over the whole time of integration: a bit sketchy
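A sketch of those controls with made-up numbers, using the 2006-era names (current PETSc replaces TSSetInitialTimeStep and TSSetDuration with TSSetTime, TSSetTimeStep, TSSetMaxSteps, and TSSetMaxTime):

#include <petscts.h>

PetscErrorCode ConfigureStepper(TS ts)
{
  TSSetInitialTimeStep(ts, 0.0, 0.01); /* t0 = 0, dt = 0.01 */
  TSSetDuration(ts, 10000, 100.0);     /* at most 10000 steps, stop at t = 100 */
  TSRKSetTolerance(ts, 1e-6);          /* absolute error over the whole run */
  return 0;
}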
Tips for general ODEs
• If only interested in the final state, run TSStep to execute
• If interested in progress along the way, you need a monitor function:
• Runs after every time step; it can output, plot, change parameters, change the time step, etc.
Tips for general ODEs
• Multiple monitor functions can run: e.g. one for parameter changes, one for output
• Attention IAF modelers: you can change the state vector too!
• Syntax: TSSetMonitor, with monitor(ts, iter#, t, u, void *args)
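A monitor sketch in that style; the spike-handling comment is schematic, and TSSetMonitor is the 2006 name (current PETSc calls it TSMonitorSet):

#include <petscts.h>

PetscErrorCode SpikeMonitor(TS ts, PetscInt step, PetscReal t, Vec u, void *ptr)
{
  PetscReal vmax;

  VecMax(u, NULL, &vmax);
  PetscPrintf(PETSC_COMM_WORLD, "step %d  t = %g  max V = %g\n",
              (int)step, (double)t, (double)vmax);
  /* an IAF model could test vmax against threshold here and reset
     entries of u: the monitor is allowed to modify the state vector */
  return 0;
}
/* registered with, e.g.: TSSetMonitor(ts, SpikeMonitor, &ctx, NULL); */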
Tips for Homogeneous Nets
• Most dependency occurs within a cell: bad to have one cell divided across processors
• No guarantee that PETSC_DECIDE won’t split your vector this way
Tips for Homogeneous Nets
• Have a vector y of length = # cells
• PETSc evenly distributes this vector
• nlocal = VecGetLocalSize(y)
• VecCreateMPI(…, neqns*nlocal, PETSC_DETERMINE, &x);
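A sketch of this layout trick: let PETSc split a length-ncells dummy vector, then build the real state vector with neqns entries per local cell, so no cell straddles a process boundary:

#include <petscvec.h>

PetscErrorCode CreateStateVec(PetscInt ncells, PetscInt neqns, Vec *x)
{
  Vec      y;
  PetscInt nlocal;

  VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, ncells, &y);
  VecGetLocalSize(y, &nlocal); /* this process's share of the cells */
  VecCreateMPI(PETSC_COMM_WORLD, neqns * nlocal, PETSC_DETERMINE, x);
  VecDestroy(&y); /* the dummy vector has served its purpose */
  return 0;
}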
Tips for Homogeneous Nets
• VecSetBlockSize: set this to the number of equations per cell
• VecStrideGather: send the value at the same index within each block to another vector
• VecStrideScatter: send values from a vector back to the same index within each block
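For example, if index 0 within each block holds the membrane voltage (a hypothetical layout), the voltages can be pulled out and pushed back like this, where v has length # cells and a parallel layout matching x's blocks:

#include <petscvec.h>

PetscErrorCode GatherVoltages(Vec x, Vec v)
{
  /* assumes VecSetBlockSize(x, neqns) was called when x was created */
  VecStrideGather(x, 0, v, INSERT_VALUES);  /* v[i] = x[i*neqns + 0] */
  /* ... modify v, then put it back: */
  VecStrideScatter(v, 0, x, INSERT_VALUES); /* x[i*neqns + 0] = v[i] */
  return 0;
}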
Tips for Homogeneous Nets
• Paradigm for ease/simplicity: gather like indices, make changes, scatter back
• VecStrideGatherAll/VecStrideScatterAll: take the state vector and break it up into an array of vectors, one for each equivalent index (see the RHSFunction sketch below)
Tips for Homogeneous Nets
• In RHSFunction: Vec U and Vec DU are inputs
• Declare arrays Vec u[neqns], du[neqns]
• VecStrideGatherAll at the start
• Set du[i] in terms of u[] for each i
• VecStrideScatterAll at the end
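A sketch of that pattern for a hypothetical two-equation cell model; the per-equation work vectors are assumed to live in the context struct (created once, e.g. with VecDuplicate on a length-ncells template), and the dynamics are a placeholder:

#include <petscts.h>

#define NEQNS 2

typedef struct {
  Vec u[NEQNS], du[NEQNS]; /* per-equation work vectors, length # cells */
} NetCtx;

PetscErrorCode NetRHSFunction(TS ts, PetscReal t, Vec U, Vec DU, void *ptr)
{
  NetCtx *ctx = (NetCtx *)ptr;

  VecStrideGatherAll(U, ctx->u, INSERT_VALUES); /* split by equation index */
  /* du[0] = -u[1], du[1] = u[0]: a placeholder oscillator in each cell */
  VecCopy(ctx->u[1], ctx->du[0]);
  VecScale(ctx->du[0], -1.0);
  VecCopy(ctx->u[0], ctx->du[1]);
  VecStrideScatterAll(ctx->du, DU, INSERT_VALUES); /* interleave back */
  return 0;
}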
Tips for Homogeneous Nets
• For very large networks and large numbers of processors, message passing will take its toll
• Order the cells so that connections occur between cells with nearby indices