This section describes the Mirheo interface and introduces the reader to installing and running the code.

The Mirheo code is designed as a classical molecular dynamics code adapted for the inclusion of rigid bodies and cells. The simulation consists of multiple time-steps during which the particles and bodies are displaced following the laws of mechanics and hydrodynamics. One time-step roughly consists of the following stages:

  • compute all the forces in the system, which are mostly pairwise forces between different particles,
  • move the particles by integrating the equations of motion,
  • bounce particles off the wall surfaces so that they cannot penetrate the walls even in the case of soft-core interactions,
  • bounce particles off the bodies (i.e. rigid bodies and elastic membranes),
  • perform additional operations dictated by plug-ins (modifications, statistics, data dumps, etc.).
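As an illustration only, the time-step above can be sketched in plain Python. This is a toy one-dimensional sketch with placeholder functions, not Mirheo's actual internals; the function names and the simplified integrator are assumptions made for the example:

```python
def compute_forces(positions):
    # Placeholder: in a real code, pairwise (e.g. DPD) forces are computed here.
    return [0.0 for _ in positions]

def integrate(positions, velocities, forces, dt, mass=1.0):
    # Simplified first-order update for illustration (Mirheo uses
    # velocity-Verlet-type integrators).
    new_v = [v + f / mass * dt for v, f in zip(velocities, forces)]
    new_x = [x + v * dt for x, v in zip(positions, new_v)]
    return new_x, new_v

def bounce_off_walls(positions, velocities, lo=0.0, hi=8.0):
    # Reflect particles that crossed a wall back into the domain.
    out_x, out_v = [], []
    for x, v in zip(positions, velocities):
        if x < lo:
            x, v = 2 * lo - x, -v
        elif x > hi:
            x, v = 2 * hi - x, -v
        out_x.append(x)
        out_v.append(v)
    return out_x, out_v

def time_step(positions, velocities, dt):
    forces = compute_forces(positions)                                    # 1. forces
    positions, velocities = integrate(positions, velocities, forces, dt)  # 2. integration
    positions, velocities = bounce_off_walls(positions, velocities)       # 3. wall bounce
    # 4. bounce off bodies and 5. plugin operations would follow here
    return positions, velocities
```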

Python interface

The code uses the Python scripting language for the simulation setup. The script defines the simulation domain and the number of MPI ranks to run; the data containers, namely Particle Vectors; and the data handlers: Initial Conditions, Integrators, Interactions, Walls, Object Bouncers, Object Belonging Checkers and Plugins.

The setup script usually starts with importing the module, e.g.:

import mirheo as mir


Loading the module will set sys.excepthook to invoke MPI_Abort. Otherwise, a single failed MPI process would not trigger a shutdown, and a deadlock would occur.

After the import, the coordinator class mirheo and the following submodules are available:

  • ParticleVectors.
    Consists of classes that store collections of particles or objects, such as rigid bodies or cell membranes. Handlers from the other submodules usually work with one or several ParticleVectors. Typically, the classes of this submodule define the liquid, cell membranes, rigid objects in the flow, etc.
  • InitialConditions.
    Provides various ways of creating initial distributions of particles or objects of a ParticleVector.
  • BelongingCheckers.
    Provides a way to create a new ParticleVector by splitting an existing one. The split is based on a given ObjectVector: all the particles that were inside the objects will form one ParticleVector, and all the outer particles will form the other. Removing the inner or outer particles is also possible. Typically, such a checker is used to remove fluid particles from within suspended bodies, or to create a ParticleVector describing the cytoplasm of cells. See also applyObjectBelongingChecker.
  • Interactions.
    Various interactions that govern forces between particles. Pairwise force-fields (DPD, Lennard-Jones) and membrane forces are available.
  • Integrators.
    Various integrators used to advance particles’ coordinates and velocities.
  • Walls.
    Provides ways to create various static obstacles in the flow, like a sphere, pipe, cylinder, etc. See also makeFrozenWallParticles.
  • Bouncers.
    Provides ways to ensure that fluid particles do not penetrate the objects (and that the particles inside membranes do not leak out of them).
  • Plugins.
    Some classes from this submodule may influence simulation in one way or another, e.g. adding extra forces, adding or removing particles, and so on. Other classes are used to write simulation data, like particle trajectories, averaged flow-fields, object coordinates, etc.

A simple script may look like this:

#!/usr/bin/env python
import mirheo as mir

# Simulation time-step
dt = 0.001

# 1 simulation task
ranks  = (1, 1, 1)

# Domain setup
domain = (8, 16, 30)

# Applied extra force for periodic poiseuille flow
f = 1

# Create the coordinator, this should precede any other setup from mirheo package
u = mir.mirheo(ranks, domain, dt, debug_level=2, log_filename='log')

pv = mir.ParticleVectors.ParticleVector('pv', mass = 1)   # Create a simple particle vector
ic = mir.InitialConditions.Uniform(density=8)             # Specify uniform random initial conditions
u.registerParticleVector(pv=pv, ic=ic)                    # Register the PV and initialize its particles

# Create and register DPD interaction with specific parameters
dpd = mir.Interactions.DPD('dpd', 1.0, a=10.0, gamma=10.0, kbt=1.0, power=0.5)

# Tell the simulation that the particles of pv interact with dpd interaction
u.setInteraction(dpd, pv, pv)

# Create and register Velocity-Verlet integrator with extra force
vv = mir.Integrators.VelocityVerlet_withPeriodicForce('vv', force=f, direction='x')

# This integrator will be used to advance pv particles
u.setIntegrator(vv, pv)

# Set the dumping parameters
sample_every = 2
dump_every   = 1000
bin_size     = (1., 1., 1.)

# Write some simulation statistics on the screen
u.registerPlugins(mir.Plugins.createStats('stats', every=500))

# Create and register XDMF plugin
u.registerPlugins(mir.Plugins.createDumpAverage('field', [pv],
                                                sample_every, dump_every, bin_size,
                                                [("velocity", "vector_from_float4")], 'h5/solvent-'))

# Run 5002 time-steps
u.run(5002)

Running the simulation

Mirheo is intended to be executed within an MPI environment, e.g.:

mpirun -np 2 python3 script.py

The code employs a simple domain decomposition strategy (see mirheo), with the work mapping fixed at the beginning of the simulation.
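The mapping can be pictured with a small sketch. A uniform Cartesian decomposition over the ranks tuple is assumed here for illustration, and the function name is hypothetical:

```python
def subdomain_index(position, domain, ranks):
    """Return the (ix, iy, iz) subdomain that owns a point, assuming a
    uniform Cartesian decomposition of the domain over the ranks tuple."""
    idx = []
    for x, extent, n in zip(position, domain, ranks):
        h = extent / n                  # subdomain extent along this axis
        i = min(int(x // h), n - 1)     # clamp points on the upper boundary
        idx.append(i)
    return tuple(idx)
```

With the script's values, ranks = (1, 1, 1) and domain = (8, 16, 30), every point maps to the single subdomain (0, 0, 0).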


When the simulation is started, every subdomain will have 2 MPI tasks working on it. One of the tasks, referred to as the compute task, does the simulation itself; the other one (the postprocessing task) is used for asynchronous data dumps and postprocessing.


The recommended strategy is to place two tasks per compute node with one GPU, or 2 tasks per GPU in a multi-GPU configuration. The postprocessing tasks do not make any GPU calls, so multiprocess GPU mode or MPS is typically not needed.


If the code is started with the number of tasks exactly equal to the number specified in the script, postprocessing will be disabled, and all plugins that rely on postprocessing (for example, all plugins that write anything) will not work. This execution mode is mainly aimed at debugging.
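For example, the number of MPI tasks to launch for a given ranks tuple can be computed as follows (an illustrative helper, not part of the Mirheo API):

```python
def mpi_tasks_needed(ranks, postprocess=True):
    """Total MPI tasks for a given (nx, ny, nz) ranks tuple.

    With postprocessing enabled, every subdomain needs two tasks
    (compute + postprocessing); otherwise one task per subdomain,
    which disables all postprocessing plugins.
    """
    nx, ny, nz = ranks
    n_subdomains = nx * ny * nz
    return 2 * n_subdomains if postprocess else n_subdomains
```

This matches the earlier example: with ranks = (1, 1, 1) the script is launched with mpirun -np 2.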

The running code will produce several log files (one per MPI task): see mirheo. Most errors in the simulation setup (like setting a negative particle mass) are reported to the log. In case the code finishes unexpectedly, the user is advised to inspect the log.