This section describes the YMeRo interface and introduces the reader to installing and running the code.
The YMeRo code is designed as a classical molecular dynamics code adapted to include rigid bodies and cells. The simulation consists of multiple time-steps during which the particles and bodies are displaced following the laws of mechanics and hydrodynamics. One time-step roughly consists of the following steps:
- compute all the forces in the system, which are mostly pairwise forces between different particles,
- move the particles by integrating the equations of motions,
- bounce particles off the wall surfaces so that they cannot penetrate the wall even in case of soft-core interactions,
- bounce particles off the bodies (i.e. rigid bodies and elastic membranes),
- perform additional operations dictated by plug-ins (modifications, statistics, data dumps, etc.).
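The force/integrate/bounce sequence above can be sketched in a toy form. The following is a minimal 1-D Python illustration with made-up soft repulsive forces and a simple wall reflection rule; it is not the real YMeRo kernels, only the shape of one time-step:

```python
import numpy as np

# Hypothetical toy parameters, not YMeRo defaults.
rng = np.random.default_rng(0)
n, dt, wall = 16, 0.01, 8.0          # particles, time-step, wall position
x = rng.uniform(0.0, wall, n)        # positions inside [0, wall]
v = np.zeros(n)                      # velocities

def pairwise_forces(x):
    # Soft linear repulsion within a cutoff, summed over all pairs.
    dx = x[:, None] - x[None, :]
    r = np.abs(dx)
    mag = np.where((r > 0.0) & (r < 1.0), 1.0 - r, 0.0)
    return np.sum(np.sign(dx) * mag, axis=1)

for step in range(100):
    f = pairwise_forces(x)           # 1. compute all the forces
    v += f * dt                      # 2. integrate the equations of motion
    x += v * dt
    hit_lo, hit_hi = x < 0.0, x > wall
    x[hit_lo] = -x[hit_lo]           # 3. bounce particles off the walls
    x[hit_hi] = 2.0 * wall - x[hit_hi]
    v[hit_lo | hit_hi] *= -1.0
    # 4. bouncing off rigid bodies / membranes would go here
    # 5. plug-ins (statistics, dumps, ...) would run here
```

Steps 4 and 5 are left as comments since they require the object and plug-in machinery described below.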
The code uses the Python scripting language for the simulation setup. The script defines the simulation domain and the number of MPI ranks to run; the data containers, namely Particle Vectors; and the data handlers: Initial conditions, Integrators, Interactions, Walls, Object bouncers, Object belonging checkers and Plugins.
The setup script usually starts with importing the module, e.g.:
import ymero as ymr
Loading the module will set sys.excepthook so that any uncaught exception aborts the whole MPI run. Otherwise a single failed MPI process would not trigger a shutdown, and a deadlock would happen.
The coordinator class, ymero, and several submodules will be available after that:
- ParticleVectors: consists of classes that store collections of particles or objects like rigid bodies or cell membranes. The handlers from the other submodules usually work with one or several ParticleVector instances. Typically, classes of this submodule define the liquid, cell membranes, rigid objects in the flow, etc.
- BelongingCheckers: provides a way to create a new ParticleVector by splitting an existing one. The split is based on a given ObjectVector: all the particles that were inside the objects will form one ParticleVector, and all the outer particles the other. Removing inner or outer particles is also possible. Typically, such a checker will be used to remove fluid particles from within the suspended bodies, or to create a ParticleVector describing the cytoplasm of cells.
- Interactions: various interactions that govern forces between particles. Pairwise force-fields (DPD, Lennard-Jones) and membrane forces are available.
- Integrators: various integrators used to advance particles’ coordinates and velocities.
- Bouncers: provides ways to ensure that fluid particles don’t penetrate objects (and that the particles from inside membranes don’t leak out of them).
- Plugins: some classes from this submodule may influence the simulation in one way or another, e.g. by adding extra forces or by adding and removing particles. Other classes are used to write simulation data, like particle trajectories, averaged flow-fields, object coordinates, etc.
A simple script may look this way:
#!/usr/bin/env python

import ymero as ymr

dt = 0.001            # Simulation time-step
ranks = (1, 1, 1)     # 1 simulation task
domain = (8, 16, 30)  # Domain setup
f = 1                 # Applied extra force for periodic poiseuille flow

# Create the coordinator, this should precede any other setup from ymero package
u = ymr.ymero(ranks, domain, debug_level=2, log_filename='log')

pv = ymr.ParticleVectors.ParticleVector('pv', mass=1)  # Create a simple particle vector
ic = ymr.InitialConditions.Uniform(density=8)          # Specify uniform random initial conditions
u.registerParticleVector(pv=pv, ic=ic)                 # Register the PV and initialize its particles

# Create and register DPD interaction with specific parameters
dpd = ymr.Interactions.DPD('dpd', 1.0, a=10.0, gamma=10.0, kbt=1.0, dt=dt, power=0.5)
u.registerInteraction(dpd)

# Tell the simulation that the particles of pv interact with dpd interaction
u.setInteraction(dpd, pv, pv)

# Create and register Velocity-Verlet integrator with extra force
vv = ymr.Integrators.VelocityVerlet_withPeriodicForce('vv', dt=dt, force=f, direction='x')
u.registerIntegrator(vv)

# This integrator will be used to advance pv particles
u.setIntegrator(vv, pv)

# Set the dumping parameters
sampleEvery = 2
dumpEvery = 1000
binSize = (1., 1., 1.)

# Write some simulation statistics on the screen
u.registerPlugins(ymr.Plugins.createStats('stats', every=500))

# Create and register XDMF plugin
field = ymr.Plugins.createDumpAverage('field', [pv], sampleEvery, dumpEvery, binSize,
                                      [("velocity", "vector_from_float8")], 'h5/solvent-')
u.registerPlugins(field)

# Run 5002 time-steps
u.run(5002)
Running the simulation
YMeRo is intended to be executed within MPI environments, e.g.:
mpirun -np 12 python3 script.py
The code employs a simple domain decomposition strategy (see ymero) with the work mapping fixed at the beginning of the simulation.
When the simulation is started, every subdomain will have 2 MPI tasks working on it. One of the tasks, referred to as the compute task, does the simulation itself; the other one (the postprocessing task) is used for asynchronous data dumps and postprocessing.
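As a quick check of the task count: with postprocessing enabled, the total number of MPI tasks is twice the product of the ranks tuple. A small illustration (the ranks value here is an assumed example, not taken from the script above):

```python
from math import prod

ranks = (1, 2, 3)         # subdomains along x, y, z, as passed to the coordinator
ntasks = 2 * prod(ranks)  # one compute + one postprocessing task per subdomain
print(ntasks)             # → 12, i.e. mpirun -np 12 python3 script.py
```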
The recommended strategy is to place two tasks per compute node with one GPU, or 2 tasks per GPU in a multi-GPU configuration. The postprocessing tasks do not use any GPU calls, so you may not need multi-process GPU mode or MPS.
If the code is started with a number of tasks exactly equal to the number specified in the script, the postprocessing will be disabled. All the plugins that use the postprocessing (for example, all the plugins that write anything) will not work. This execution mode is mainly aimed at debugging.
The running code will produce several log files (one per MPI task).
Most errors in the simulation setup (like setting a negative particle mass) will be reported to the log.
In case the code finishes unexpectedly, the user is advised to take a look at the log.