Welcome to the PySPH documentation!¶
PySPH is an open source framework for Smoothed Particle Hydrodynamics (SPH) simulations. Users can implement an SPH formulation in pure Python and still obtain excellent performance. PySPH can make use of multiple cores via OpenMP or be run seamlessly in parallel using MPI.
Here are some videos of simulations made with PySPH.
PySPH is hosted on github. Please see the github site for development details.
Overview¶
PySPH is an open source framework for Smoothed Particle Hydrodynamics (SPH) simulations. It is implemented in Python and the performance critical parts are implemented in Cython and PyOpenCL.
PySPH is implemented in a way that allows a user to specify the entire SPH simulation in pure Python. High-performance code is generated from this high-level Python code, compiled on the fly and executed. PySPH can use OpenMP to utilize multi-core CPUs effectively. PySPH can work with OpenCL and use your GPGPUs. PySPH also features optional automatic parallelization (multi-CPU) using mpi4py and Zoltan. If you wish to use the parallel capabilities you will need to have these installed.
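To give a flavour of the kind of computation PySPH generates fast code for, here is a plain-NumPy sketch of SPH summation density with a cubic spline kernel. This is an illustration of the SPH method only: it deliberately avoids the PySPH API, and the function names are our own.

```python
import numpy as np

def cubic_spline(q, h):
    """1D cubic spline SPH kernel with normalization sigma = 2/(3h)."""
    sigma = 2.0 / (3.0 * h)
    w = np.where(
        q < 1.0, 1.0 - 1.5 * q**2 * (1.0 - 0.5 * q),
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0),
    )
    return sigma * w

def summation_density(x, m, h):
    """rho_i = sum_j m_j W(|x_i - x_j|, h), summed over all particles."""
    q = np.abs(x[:, None] - x[None, :]) / h
    return (m[None, :] * cubic_spline(q, h)).sum(axis=1)

# Uniformly spaced unit-density particles on [0, 1].
dx = 0.1
x = np.arange(0.0, 1.0 + dx / 2, dx)
m = np.full_like(x, dx)              # mass = rho * dx with rho = 1
rho = summation_density(x, m, 1.2 * dx)
```

In the interior of the domain the computed density is close to one, while it drops near the ends where the kernel support is truncated; PySPH generates compiled equivalents of exactly this kind of particle-pair loop.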
Features¶
 User scripts and equations are written in pure Python.
 Flexibility to define arbitrary SPH equations operating on particles.
 Ability to define your own multi-step integrators in pure Python.
 High-performance: our performance is comparable to hand-written solvers implemented in FORTRAN.
 Seamless multi-core support with OpenMP.
 Seamless GPU support with PyOpenCL.
 Seamless parallel integration using Zoltan.
 BSD license.
SPH formulations¶
Currently, PySPH has numerous examples to solve the viscous, incompressible Navier-Stokes equations using the weakly compressible (WCSPH) approach. The following formulations are currently implemented:
 Weakly Compressible SPH (WCSPH) for free-surface flows (Gesteira et al. 2010, Journal of Hydraulic Research, 48, pp. 6–27)
 Transport Velocity Formulation for incompressible fluids (Adami et al. 2013, JCP, 241, pp. 292–307)
 SPH for elastic dynamics (Gray et al. 2001, CMAME, Vol. 190, pp 6641–6662)
 Compressible SPH (Puri et al. 2014, JCP, Vol. 256, pp 308–333)
 Generalized Transport Velocity Formulation (GTVF) (Zhang et al. 2017, JCP, 337, pp. 216–232)
 Entropically Damped Artificial Compressibility (EDAC) (Ramachandran et al. 2019, Computers and Fluids, 179, pp. 579–594)
 delta-SPH (Marrone et al. CMAME, 2011, 200, pp. 1526–1542)
 Dual Time SPH (DTSPH) (Ramachandran et al. arXiv preprint)
 Incompressible SPH (ISPH) (Cummins et al. JCP, 1999, 152, pp. 584–607)
 Simple Iterative SPH (SISPH) (Muta et al. arXiv preprint)
 Implicit Incompressible SPH (IISPH) (Ihmsen et al. 2014, IEEE Trans. Vis. Comput. Graph., 20, pp 426–435)
 Godunov SPH (GSPH) (Inutsuka et al. JCP, 2002, 179, pp. 238–267)
 Conservative Reproducing Kernel SPH (CRKSPH) (Frontiere et al. JCP, 2017, 332, pp. 160–209)
 Approximate Godunov SPH (AGSPH) (Puri et al. JCP, 2014, pp. 432–458)
 Adaptive Density Kernel Estimate (ADKE) (Sigalotti et al. JCP, 2006, pp. 124–149)
 Akinci (Akinci et al. ACM Trans. Graph., 2012, pp. 62:1–62:8)
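As a concrete illustration of the weakly compressible approach listed above, WCSPH typically closes the system with the Tait equation of state, p = B((rho/rho0)^gamma - 1). The sketch below is a plain-Python illustration, not PySPH's implementation; the default parameters are typical choices for water with an artificial speed of sound.

```python
def tait_eos(rho, rho0=1000.0, c0=10.0, gamma=7.0):
    """Tait equation of state used in WCSPH:
    p = B * ((rho/rho0)**gamma - 1), with B = rho0 * c0**2 / gamma.
    c0 is an artificial speed of sound, chosen large enough to keep
    density fluctuations around one percent."""
    b = rho0 * c0 * c0 / gamma
    return b * ((rho / rho0) ** gamma - 1.0)
```

With gamma = 7, even a one percent density rise produces a significant positive pressure, which is what keeps the flow nearly incompressible.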
Boundary conditions from the following papers are implemented:
 Generalized Wall BCs (Adami et al. JCP, 2012, pp. 7057–7075)
 Do nothing type outlet BC (Federico et al. European Journal of Mechanics  B/Fluids, 2012, pp. 35–46)
 Outlet Mirror BC (Tafuni et al. CMAME, 2018, pp. 604–624)
 Method of Characteristics BC (Lastiwka et al. International Journal for Numerical Methods in Fluids, 2012, pp. 35–46)
 Hybrid BC (Negi et al. arXiv preprint)
Corrections proposed in the following papers are also part of PySPH:
 Corrected SPH (Bonet et al. CMAME, 1999, pp. 97–115)
 hg correction (Hughes et al. Journal of Hydraulic Research, pp. 105–117)
 Tensile instability correction (Monaghan J. J. JCP, 2000, pp. 290–311)
 Particle shift algorithms (Xu et al. JCP, 2009, pp. 6703–6725), (Skillen et al. CMAME, 2013, pp. 163–173)
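The density-correction idea behind several of the references above can be sketched with a Shepard filter, which renormalizes the summation density by the kernel's partition of unity. This is our own plain-NumPy illustration, not the PySPH implementation:

```python
import numpy as np

def shepard_density(x, m, rho, h):
    """Shepard-filtered density:
    rho_i = sum_j m_j W_ij / sum_j (m_j / rho_j) W_ij.
    The denominator restores zeroth-order consistency where the
    kernel support is truncated (e.g. near free surfaces)."""
    q = np.abs(x[:, None] - x[None, :]) / h
    sigma = 2.0 / (3.0 * h)          # 1D cubic spline normalization
    w = sigma * np.where(
        q < 1.0, 1.0 - 1.5 * q**2 * (1.0 - 0.5 * q),
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0),
    )
    num = (m[None, :] * w).sum(axis=1)
    den = ((m / rho)[None, :] * w).sum(axis=1)
    return num / den
```

For uniformly spaced unit-density particles, the filtered density is exactly one even at the domain edges, where the plain summation density drops.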
Surface tension models are implemented from:
 Morris surface tension (Morris et al. International Journal for Numerical Methods in Fluids, 2000, pp. 333–353)
 Adami Surface tension formulation (Adami et al. JCP, 2010, pp. 5011–5021)
Credits¶
PySPH is primarily developed at the Department of Aerospace Engineering, IIT Bombay. We are grateful to IIT Bombay for its support. Our primary goal is to build a powerful SPH-based tool for both application and research. We hope that this makes it easy to perform reproducible computational research.
To see the list of contributors, see the github contributors page.
Some earlier developers not listed above are:
 Pankaj Pandey (stress solver and improved load balancing, 2011)
 Chandrashekhar Kaushik (original parallel and serial implementation in 2009)
Citing PySPH¶
You may use the following article to formally refer to PySPH:
 Prabhu Ramachandran, Kunal Puri, Aditya Bhosale, A Dinesh, Abhinav Muta, Pawan Negi, Rahul Govind, Suraj Sanka, Pankaj Pandey, Chandrashekhar Kaushik, Anshuman Kumar, Ananyo Sen, Rohan Kaushik, Mrinalgouda Patil, Deep Tavker, Dileep Menon, Vikas Kurapati, Amal S Sebastian, Arkopal Dutt, Arpit Agarwal, “PySPH: a Python-based framework for smoothed particle hydrodynamics”, under review (https://arxiv.org/abs/1909.04504).
The following are older presentations:
 Prabhu Ramachandran, PySPH: a reproducible and high-performance framework for smoothed particle hydrodynamics, In Proceedings of the 15th Python in Science Conference, pages 127–135, July 11th to 17th, 2016. Link to paper.
 Prabhu Ramachandran and Kunal Puri, PySPH: A framework for parallel particle simulations, In Proceedings of the 3rd International Conference on Particle-Based Methods (Particles 2013), Stuttgart, Germany, 18th September 2013.
History¶
 2009: PySPH started with a simple Cython-based 1D implementation written by Prabhu.
 2009–2010: Chandrashekhar Kaushik worked on a full 3D SPH implementation with a more general-purpose design. The implementation was in a mix of Cython and Python.
 2010–2012: The previous implementation was a little too complex and was largely overhauled by Kunal and Pankaj. This became the PySPH 0.9beta release. The difficulty with this version was that it was almost entirely written in Cython, making it hard to extend or add new formulations without writing more Cython code. Doing this was difficult and not too pleasant. In addition, it was not as fast as we would have liked. It ended up feeling like we might as well have implemented it all in C++ and exposed a Python interface to that.
 2011–2012: Kunal also implemented SPH2D and another internal version called ZSPH in Cython, which included Zoltan-based parallelization using PyZoltan. This was specific to his PhD research and again required writing Cython, making it difficult for the average user to extend.
 2013–present: In early 2013, Prabhu reimplemented the core of PySPH to be almost entirely auto-generated from pure Python. The resulting code was faster than previous implementations and very easy to extend entirely from pure Python. Kunal and Prabhu integrated PyZoltan into PySPH and the current version of PySPH was born. Subsequently, OpenMP support was added in 2015.
Support¶
If you have any questions or are running into any difficulties with PySPH, please email or post your questions on the pysph-users mailing list here: https://groups.google.com/d/forum/pysph-users
Please also take a look at the PySPH issue tracker.
Changelog¶
1.0b1¶
 Release date: Still under development.
 Moved pyzoltan and cyarray into their own packages on PyPI.
1.0a6¶
90 pull requests were merged for this release. Thanks to the following who contributed to this release (in alphabetical order): A Dinesh, Abhinav Muta, Aditya Bhosale, Ananyo Sen, Deep Tavker, Prabhu Ramachandran, Vikas Kurapati, nilsmeyerkit, Rahul Govind, Sanka Suraj.
 Release date: 26th November, 2018.
 Enhancements:
 Initial support for transparently running PySPH on a GPU via OpenCL.
 Changed the API for how adaptive DT is computed; this is now set via the particle array properties called dt_cfl, dt_force, dt_visc.
 Support for non-pairwise particle interactions via the loop_all method. This is useful for MD simulations.
 Add support for py_stage1, py_stage2, ... methods in the integrator.
 Add support for py_initialize and initialize_pair in equations.
 Support for using different sets of equations for different stages of the integration.
 Support to call arbitrary Python code from a Group via the pre/post callback arguments.
 Pass t, dt to the reduce method.
 Allow particle array properties to have strides; this allows us to define properties with multiple components. For example, if you need 3 values per particle, you can set the stride to 3.
 Mayavi viewer can now show non-real particles also if saved in the output.
 Some improvements to the simple remesher of particles.
 Add simple STL importer to import geometries.
 Allow user to specify openmp schedule.
 Better documentation on equations and using a different compiler.
 Print a convenient warning when particles are diverging or if h, m are zero.
 Abstract the code generation into a common core which supports Cython, OpenCL and CUDA. This will be pulled into a separate package in the next release.
 New GPU NNPS algorithms including a very fast octree.
 Added several SPHysics test cases to the examples.
 Schemes:
 Add a working Implicit Incompressible SPH scheme (of Ihmsen et al., 2014)
 Add GSPH scheme from SPH2D and all the approximate Riemann solvers from there.
 Add code for Shepard and MLS-based density corrections.
 Add kernel corrections proposed by Bonet and Lok (1999)
 Add corrections from the CRKSPH paper (2017).
 Add basic equations of Parshikov (2002) and Zhang, Hu, Adams (2017)
 Bug fixes:
 Ensure that the order of equations is preserved.
 Fix bug with dumping VTK files.
 Fix bug in Adami, Hu, Adams scheme in the continuity equation.
 Fix mistake in WCSPH scheme for solid bodies.
 Fix bug with periodicity along the z-axis.
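The strided particle properties added in this release (several values per particle stored in one flat array) can be pictured with plain NumPy. This is an illustrative sketch of the storage layout only, not the ParticleArray API:

```python
import numpy as np

n, stride = 4, 3                     # 4 particles, 3 components each
force = np.zeros(n * stride)         # one flat, strided property array

# Components of particle i live at force[i*stride:(i+1)*stride].
force[0 * stride:1 * stride] = (1.0, 2.0, 3.0)

# A reshaped view exposes the same memory as an (n, stride) table.
per_particle = force.reshape(n, stride)
```

Because the reshape is a view, writing through either name updates the same underlying storage.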
1.0a5¶
 Release date: 17th September, 2017
 Mayavi viewer now supports empty particle arrays.
 Fix error in scheme chooser which caused problems with default scheme property values.
 Add StarCluster support/documentation so PySPH can be easily used on EC2.
 Improve the particle array so it automatically ravels the passed arrays and also accepts constant values without needing an array each time.
 Add a few new examples.
 Added 2D and 3D viewers for Jupyter notebooks.
 Add several new Wendland Quintic kernels.
 Add option to measure coverage of Cython code.
 Add EDAC scheme.
 Move project to github.
 Improve documentation and reference section.
 Fix various bugs.
 Switch to using pytest instead of nosetests.
 Add a convenient geometry creation module in pysph.tools.geometry.
 Add support to script the viewer with a Python file, see pysph view -h.
 Add several new NNPS schemes like extended spatial hashing, SFC, octrees etc.
 Improve Mayavi viewer so one can view the velocity vectors and any other vectors.
 Viewer now has a button to edit the visualization properties easily.
 Add simple tests for all available kernels. Add a SuperGaussian kernel.
 Add a basic dockerfile for pysph to help with the CI testing.
 Update the build so pysph can be built with a system Zoltan installation that is part of Trilinos using the USE_TRILINOS environment variable.
 Wrap the Zoltan_Comm_Resize function in pyzoltan.
1.0a4¶
 Release date: 14th July, 2016.
 Improve many examples to make it easier to make comparisons.
 Many equation parameters no longer have defaults to prevent accidental errors from not specifying important parameters.
 Added support for Scheme classes that manage the generation of equations and solvers. A user simply needs to create the particles and set up a scheme with the appropriate parameters to simulate a problem.
 Add support to easily handle multiple rigid bodies.
 Add support to dump HDF5 files if h5py is installed.
 Add support to directly dump VTK files using either Mayavi or PyVisfile, see pysph dump_vtk.
 Improved the nearest neighbor code, which gives about 30% increase in performance in 3D.
 Remove the need for the windows_env.bat script on Windows. This is automatically set up internally.
 Add a test that checks if all examples run.
 Remove unused command line options and add a --max-steps option to allow a user to run a specified number of iterations.
 Added Ghia et al.’s results for lid-driven cavity flow for easy comparison.
 Added some experimental results for the dam break problem.
 Use argparse instead of optparse as it is deprecated in Python 3.x.
 Add pysph.tools.automation to facilitate easier automation and reproducibility of PySPH simulations.
 Add spatial hash and extended spatial hash NNPS algorithms for comparison.
 Refactor and cleanup the NNPS related code.
 Add several gas dynamics examples and the ADKEScheme.
 Work with mpi4py version 2.0.0 and older versions.
 Fixed major bug with TVF implementation and add support for 3D simulations with the TVF.
 Fix bug with uploaded tarballs that broke pip install pysph on Windows.
 Fix the viewer UI to continue playing files when refresh is pushed.
 Fix bugs with the timestep values dumped in the outputs.
 Fix floating point issues with timesteps, where examples would run a final extremely tiny timestep in order to exactly hit the final time.
1.0a3¶
 Release date: 18th August, 2015.
 Fix bug with the output_at_times specification for the solver.
 Put generated sources and extensions into a platform-specific directory in ~/.pysph/sources/<platform-specific-dir> to avoid problems with multiple Python versions, operating systems etc.
 Use locking while creating extension modules to prevent problems when multiple processes generate the same extension.
 Improve the Application class so users can subclass it to create examples. Users can also add their own command line arguments and add pre/post step/stage callbacks by creating appropriate methods.
 Moved examples into pysph.examples. This makes the examples reusable and easier to run, as installation of pysph will also make the examples available. The examples also perform the post-processing to make them completely self-contained.
 Add support to write compressed output.
 Add support to set the kernel from the command line.
 Add a new pysph script that supports view, run, and test sub-commands. The pysph_viewer is now removed; use pysph view instead.
 Add a simple remeshing tool in pysph.solver.tools.SimpleRemesher.
.  Cleanup the symmetric eigenvalue computing routines used for solid mechanics problems and allow them to be used with OpenMP.
 The viewer can now view the velocity magnitude (vmag) even if it is not present in the data.
 Port all examples to use the new Application API.
 Do not display unnecessary compiler warnings when there are no errors, but display verbose details when there is an error.
1.0a2¶
 Release date: 12th June, 2015
 Support for tox; this makes it trivial to test PySPH on py26, py27 and py34 (and potentially more if needed).
 Fix bug in code generator where it is unable to import pysph before it is installed.
 Support installation via pip by allowing egg_info to be run without cython or numpy.
 Added Codeship CI build using tox for py27 and py34.
 CI builds for Python 2.7.x and 3.4.x.
 Support for Python 3.4.x.
 Support for Python 2.6.x.
1.0a1¶
 Release date: 3rd June, 2015.
 First public release of the new PySPH code which uses code generation and is hosted on bitbucket.
 OpenMP support.
 MPI support using Zoltan.
 Automatic code generation from high-level Python code.
 Support for various multi-step integrators.
 Added an interpolator utility module that interpolates the particle data onto a desired set of points (or grids).
 Support for inlets and outlets.
 Support for basic Gmsh input/output.
 Plenty of examples for various SPH formulations.
 Improved documentation.
 Continuous integration builds on Shippable, Drone.io, and AppVeyor.
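The interpolator utility mentioned in this release maps scattered particle data onto arbitrary points. The underlying idea can be sketched in plain NumPy with a Shepard-normalized SPH interpolant; this illustrates the concept only and is not the pysph.tools.interpolator API:

```python
import numpy as np

def sph_interpolate(x_target, x, f, m, rho, h):
    """Interpolate particle values f onto x_target using a
    Shepard-normalized SPH sum with a 1D cubic spline kernel."""
    q = np.abs(x_target[:, None] - x[None, :]) / h
    sigma = 2.0 / (3.0 * h)
    w = sigma * np.where(
        q < 1.0, 1.0 - 1.5 * q**2 * (1.0 - 0.5 * q),
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0),
    )
    wts = (m / rho)[None, :] * w      # volume-weighted kernel values
    return (wts * f[None, :]).sum(axis=1) / wts.sum(axis=1)
```

The Shepard normalization makes the interpolant exact for constant fields and very accurate for smooth fields away from boundaries.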
Installation and getting started¶
To install PySPH, you need a working Python environment with the required dependencies installed. You may use any of the available Python distributions. PySPH is currently tested with Python 2.7.x and 3.x. If you are new to Python we recommend Enthought Canopy or EDM. PySPH will work fine with Miniconda, Anaconda or other environments like WinPython. The following instructions should help you get started.
Since there is a lot of information here, we suggest that you skim the section on Dependencies and then directly jump to one of the “Installing the dependencies on xxx” sections below depending on your operating system. Depending on your chosen Python distribution, simply follow the instructions and links referred therein.
 Quick installation
 Dependencies
 Installing the dependencies on GNU/Linux
 Installing the dependencies on Ubuntu 18.04
 Installing the dependencies on Mac OS X
 Installing the dependencies on Windows
 Using a virtualenv for PySPH
 Downloading PySPH
 Building and Installing PySPH
 Running the tests
 Running the examples
 Possible issues with the viewer
Quick installation¶
If you are reasonably experienced with installing Python packages, already have a C++ compiler set up on your machine, and are not immediately interested in running PySPH on multiple CPUs (using MPI), then installing PySPH is straightforward. Running pip like so:
$ pip install PySPH
should do the trick. You may do this in a virtualenv if you choose to. The important examples are packaged with the sources, so you should be able to run those immediately. If you wish to download the sources and explore them, you can download them either using the tarball/ZIP or from git, see Downloading PySPH.
The above will install the latest released version of PySPH; you can install the development version using:
$ pip install https://github.com/pypr/pysph/zipball/master
If you wish to track the development of the package, clone the repository (as described in Downloading PySPH) and do the following:
$ pip install -r requirements.txt
$ python setup.py develop
The following instructions are more detailed and also show how optional dependencies can be installed. Instructions on how to set things up on Windows are also available below.
If you are running into strange issues when you are setting up an installation with ZOLTAN, see the section on pip cache issues.
Dependencies¶
Core dependencies¶
The project’s requirements.txt lists all the required core dependencies.
These packages can be installed from your Python distribution’s package manager, or using pip. For more detailed instructions on how to do this for different distributions, see below.
Running PySPH requires a working C/C++ compiler on your machine. On Linux/OS X the gcc toolchain will work well. On Windows, you will need to have a suitable MSVC compiler installed, see https://wiki.python.org/moin/WindowsCompilers for specific details.
On Python 2.7 for example, you will need Microsoft Visual C++ Compiler for Python 2.7 or an equivalent compiler. More details are available below.
Note
PySPH generates highperformance code and compiles it on the fly. This requires a working C/C++ compiler even after installing PySPH.
Optional dependencies¶
The optional dependencies are:
 OpenMP: PySPH can use OpenMP if it is available. Installation instructions are available below.
 PyOpenCL: PySPH can use OpenCL if it is available. This requires installing PyOpenCL.
 Mayavi: PySPH provides a convenient viewer to visualize the output of simulations. This viewer can be launched using the command pysph view and requires Mayavi to be installed. Since this is only a viewer it is optional; however, it is highly recommended that you have it installed as the viewer is very convenient.
 mpi4py and Zoltan: If you want to use PySPH in parallel, you will need mpi4py and the Zoltan data management library along with the PyZoltan package. PySPH will work in serial without mpi4py or Zoltan. Simple build instructions for Zoltan are included below.
Mayavi is packaged with all the major distributions and is easy to install. Zoltan is very unlikely to be already packaged and will need to be compiled.
Building and linking PyZoltan on OSX/Linux¶
If you want to use PySPH in parallel you will need to install PyZoltan. PyZoltan requires the Zoltan library to be available. We’ve provided a simple Zoltan build script in the PyZoltan repository. This works on Linux and OS X but not on Windows. It can be used as:
$ ./build_zoltan.sh $INSTALL_PREFIX
where the $INSTALL_PREFIX is where the library and includes will be installed (remember, this script is in the PyZoltan repository and not in PySPH). You may edit and tweak the build to suit your installation; however, this script is what we use to build Zoltan on our continuous integration servers on Travis-CI and Shippable.
After Zoltan is built, set the environment variable ZOLTAN to point to the $INSTALL_PREFIX that you used above:
$ export ZOLTAN=$INSTALL_PREFIX
Replace $INSTALL_PREFIX with the directory you specified above.
After this, follow the instructions to build PyZoltan. The PyZoltan wrappers
will be compiled and available.
Now, when you build PySPH, it too needs to know where to link to Zoltan, so you should keep the ZOLTAN environment variable set. This is only needed until PySPH is compiled; thereafter the environment variable is not needed.
If you are running into strange issues when you are setting up pysph with ZOLTAN, see the section on pip cache issues.
Note
The installation will use $ZOLTAN/include and $ZOLTAN/lib to find the actual directories. If these do not work for your particular installation for whatever reason, set the environment variables ZOLTAN_INCLUDE and ZOLTAN_LIBRARY explicitly without setting ZOLTAN. If you used the above script, this would be:
$ export ZOLTAN_INCLUDE=$INSTALL_PREFIX/include
$ export ZOLTAN_LIBRARY=$INSTALL_PREFIX/lib
Installing the dependencies on GNU/Linux¶
If you are using Enthought Canopy, EDM, or Anaconda, the instructions in the section Installing the dependencies on Mac OS X will be useful as the instructions are the same. The following are for the case where you wish to use the native Python packages distributed with the Linux distribution you are using.
If you are running into trouble, note that it is very easy to install using EDM (see Using EDM) or conda (see Using Anaconda) and you may make your life easier going that route.
GNU/Linux is probably the easiest platform on which to install PySPH. On Ubuntu one may install the dependencies using:
$ sudo apt-get install build-essential python-dev python-numpy \
    python-mako cython python-pytest mayavi2 python-qt4 python-virtualenv
OpenMP is typically available but if it is not, it can be installed with:
$ sudo apt-get install libomp-dev
If you need parallel support:
$ sudo apt-get install libopenmpi-dev python-mpi4py
$ ./build_zoltan.sh ~/zoltan # Replace ~/zoltan with what you want
$ export ZOLTAN=~/zoltan
On Linux it is probably best to install PySPH into its own virtual environment. This will allow you to install PySPH as a user without any superuser privileges. See the section below on Using a virtualenv for PySPH. In short, do the following:
$ virtualenv --system-site-packages pysph_env
$ source pysph_env/bin/activate
$ pip install cython --upgrade  # if you have an old version.
If you wish to use a compiler which is not currently your default compiler, simply update the CC and CXX environment variables. For example, to use icc run the following commands before building PySPH:
$ export CC=icc
$ export CXX=icpc
Note
In this case, you will additionally have to ensure that the relevant Intel shared libraries can be found when running PySPH code. Most Intel installations come with shell scripts that load the relevant environment variables with the right values automatically. This shell script is generally named compilervars.sh and can be found in /path/to/icc/bin. If you did not get this file along with your installation, you can try running export LD_LIBRARY_PATH=/path/to/icc/lib.
You should be set now and should skip to Downloading PySPH and Building and Installing PySPH.
On recent versions of Ubuntu (16.10 and 18.04) there may be problems with the Mayavi viewer, and pysph view may not work correctly. To see how to resolve these, please look at Possible issues with the viewer.
Note
If you wish to see a working build/test script please see our shippable.yml.
Installing the dependencies on Ubuntu 18.04¶
On Ubuntu 18.04 it should be relatively simple to install PySPH with ZOLTAN as follows:
# For OpenMP
$ sudo apt-get install libomp-dev
# For Zoltan
$ sudo apt-get install openmpi-bin libopenmpi-dev libtrilinos-zoltan-dev
$ export ZOLTAN_INCLUDE=/usr/include/trilinos
$ export ZOLTAN_LIBRARY=/usr/lib/x86_64-linux-gnu
$ export USE_TRILINOS=1
Now, depending on your setup, you can install the Python-related dependencies. For example with conda you can do:
$ conda install -c conda-forge cython mako matplotlib jupyter pyside pytest \
    mock numpy-stl pytools
$ conda install -c conda-forge mpi4py
Then you should be able to install pyzoltan and its dependency cyarray using:
$ pip install pyzoltan
Finally, install PySPH with
$ pip install pysph
Or, if you are having trouble due to pip’s cache (see the section on pip cache issues), with:
$ pip install --no-cache-dir pysph
You should be all set now and should next consider Running the tests.
Installing the dependencies on Mac OS X¶
On OS X, your best bet is to install Enthought Canopy, EDM, or Anaconda or some other Python distribution. Ensure that you have gcc or clang installed by installing XCode. See this if you installed XCode but can’t find clang or gcc.
OpenMP on OSX¶
The default “gcc” available on OSX uses an LLVM backend and does not support OpenMP. To use OpenMP on OSX, you can install the GCC available on brew using
$ brew install gcc
Once this is done, you need to use this as your default compiler. The gcc
formula on brew currently ships with gcc version 8. Therefore, you can
tell Python to use the GCC installed by brew by setting:
$ export CC=gcc-8
$ export CXX=g++-8
Using EDM¶
It is very easy to install all the dependencies with the Enthought Deployment Manager (EDM).
Download the EDM installer if you do not already have it installed. Install the appropriate installer package for your system.
Once you have installed EDM, run the following:
$ edm install mayavi pyside cython matplotlib jupyter pytest mock pip
$ edm shell
$ pip install mako
With this done, you should be able to install PySPH relatively easily, see Building and Installing PySPH.
Using Canopy¶
Download the Canopy express installer for your platform (the full installer is also fine). Launch Canopy after you install it so it initializes your user environment. If you have made Canopy your default Python, all should be well, otherwise launch the Canopy terminal from the Tools menu of the Canopy editor before typing your commands below.
NumPy ships by default but Cython does not. Mako and Cython can be installed easily with pip (pip will be available in your Canopy environment):
$ pip install cython mako
Mayavi is best installed with the Canopy package manager:
$ enpkg mayavi
Note
If you are a subscriber you can also enpkg cython to install Enthought’s build.
If you need parallel support, please see Installing mpi4py and Zoltan on OS X, otherwise, skip to Downloading PySPH and Building and Installing PySPH.
Using Anaconda¶
After installing Anaconda or miniconda, you will need to make sure the dependencies are installed. You can create a separate environment as follows:
$ conda create -n pysph_env
$ source activate pysph_env
Now you can install the necessary packages:
$ conda install -c conda-forge cython mako matplotlib jupyter pyside pytest mock
$ conda install -c menpo mayavi
If you need parallel support, please see Installing mpi4py and Zoltan on OS X, otherwise, skip to Downloading PySPH and Building and Installing PySPH.
Installing mpi4py and Zoltan on OS X¶
In order to build/install mpi4py one first has to install the MPI library.
This is easily done with Homebrew as follows (you need to have brew installed for this, but that is relatively easy to do):
$ brew install open-mpi
After this is done, one can install mpi4py by hand. First download mpi4py from here. Then run the following (modify these to suit your XCode installation and version of mpi4py):
$ cd /tmp
$ tar xvzf ~/Downloads/mpi4py-1.3.1.tar.gz
$ cd mpi4py-1.3.1
$ export MACOSX_DEPLOYMENT_TARGET=10.7
$ export SDKROOT=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/
$ python setup.py install
Change the above environment variables to suit your SDK version. If this installs correctly, mpi4py should be available.
You can then follow the instructions on how to build/install Zoltan and PyZoltan given above. You should be set now and should move to Building and Installing PySPH. Just make sure you have set the ZOLTAN environment variable so PySPH knows where to find it.
Installing the dependencies on Windows¶
While it should be possible to use mpi4py and Zoltan on Windows, we do not at this point have much experience with this. Feel free to experiment and let us know if you’d like to share your instructions. The following instructions are all without parallel support.
Using EDM¶
It is very easy to install all the dependencies with the Enthought Deployment Manager (EDM).
Download the EDM installer if you do not already have it installed. Install the appropriate installer package for your system.
Once you have installed EDM, run the following:
> edm install mayavi pyside cython matplotlib jupyter pytest mock pip
> edm shell
> pip install mako
Once you are done with this, please skip ahead to Installing Visual C++ Compiler for Python.
Using Canopy¶
Download and install Canopy Express for your Windows machine (32 or 64 bit).
Launch the Canopy editor at least once so it sets up your user environment.
Make the Canopy Python the default Python when it prompts you. If you have
already skipped that option, you may enable it in the Edit>Preferences
menu. With that done you may install the required dependencies. You can
either use the Canopy package manager or use the command line. We will use
the command line for the rest of the instructions. To start a command line,
click on “Start” and navigate to the All Programs/Enthought Canopy menu. Select the “Canopy command prompt”; if you made Canopy your default Python, just starting a command prompt (via cmd.exe) will also work.
On the command prompt, Mako and Cython can be installed easily with pip (pip should be available in your Canopy environment):
> pip install cython mako
Mayavi is best installed with the Canopy package manager:
> enpkg mayavi
Once you are done with this, please skip ahead to Installing Visual C++ Compiler for Python.
Note
If you are a subscriber you can also enpkg cython to install Enthought’s build.
Using WinPython¶
Instead of Canopy or Anaconda you could try WinPython 2.7.x.x. To obtain the core dependencies, download the corresponding binaries from Christoph Gohlke’s Unofficial Windows Binaries for Python Extension Packages. Mayavi is available through the binary ETS.
You can now add these binaries to your WinPython installation by going to WinPython Control Panel. The option to add packages is available under the section Install/upgrade packages.
Make sure to set your system PATH variable pointing to the location of the scripts as required. If you have installed WinPython 2.7.6 64-bit, make sure to set your system PATH variables to <path to installation folder>/python2.7.6.amd64 and <path to installation folder>/python2.7.6.amd64/Scripts/.
Once you are done with this, please skip ahead to Installing Visual C++ Compiler for Python.
Using Anaconda¶
Install Anaconda for your platform, make it the default and then install the required dependencies:
$ conda install cython mayavi
$ pip install mako
Once you are done with this, please skip ahead to Installing Visual C++ Compiler for Python.
Installing Visual C++ Compiler for Python¶
For all of the above Python distributions, it is highly recommended that you build PySPH with Microsoft’s Visual C++ for Python. See https://wiki.python.org/moin/WindowsCompilers for specific details for each version of Python. Note that different Python versions may have different compiler requirements.
On Python 3.6 and above you should use Microsoft’s Build Tools for Visual Studio 2017.
On Python 2.7, for example, use Microsoft’s Visual C++ for Python 2.7. We
recommend that you download and install the VCForPython27.msi available
from the link. Make sure
you install the system requirements specified on that page. For example, you
will need to install the Microsoft Visual C++ 2008 SP1 Redistributable Package
for your platform (x86 for 32 bit or x64 for 64 bit) and on Windows 8 and
above you will need to install the .NET framework 3.5. Please look at the link
given above; it should be fairly straightforward. Note that doing this will
also get OpenMP working for you.
After you do this, you will find a “Microsoft Visual C++ Compiler Package for Python” in your Start menu. Choose a suitable command prompt from this menu for your architecture and start it (we will call this the MSVC command prompt). You may make a shortcut to it as you will need to use this command prompt to build PySPH and also run any of the examples.
After this is done, see section Downloading PySPH and get a copy of PySPH. Thereafter, you may follow section Building and Installing PySPH.
Warning
On 64 bit Windows, do not build PySPH with mingw64 as it does not work reliably at all and frequently crashes. YMMV with mingw32 but it is safer and just as easy to use the MS VC++ compiler.
Using a virtualenv for PySPH¶
A virtualenv allows you to create an isolated environment for PySPH and its related packages. This is useful in a variety of situations.
 Your OS does not provide a recent enough Cython version (say you are running Debian stable).
 You do not have root access to install any packages PySPH requires.
 You do not want to mess up your system files and wish to localize any installations inside directories you control.
 You wish to use other packages with conflicting requirements.
 You want PySPH and its related packages to be in an “isolated” environment.
You can either install virtualenv (or ask your system administrator to) or
just download the virtualenv.py script and use it (run python virtualenv.py
after you download the script).
Create a virtualenv like so:
$ virtualenv --system-site-packages pysph_env
This creates a directory called pysph_env which contains all the relevant
files for your virtualenv; this includes any new packages you wish to install
into it. You can delete this directory if you don’t want it anymore for some
reason. This virtualenv will also “inherit” packages from your system. Hence,
if your system administrator already installed NumPy, it may be imported from
your virtual environment and you do not need to install it. This is
very useful for large packages like Mayavi, Qt, etc.
Note
If your version of virtualenv does not support the
--system-site-packages option, please use the virtualenv.py script
mentioned above.
Once you create a virtualenv you can activate it as follows (on a bash shell):
$ source pysph_env/bin/activate
On Windows you run a bat file as follows:
$ pysph_env\Scripts\activate.bat
This sets up the PATH to point to your virtualenv’s Python. You may now run any normal Python commands and it will use your virtualenv’s Python. For example you can do the following:
$ virtualenv myenv
$ source myenv/bin/activate
(myenv) $ pip install Cython mako pytest
(myenv) $ cd pysph
(myenv) $ python setup.py install
Now PySPH will be installed into myenv. You may deactivate your
virtualenv using the deactivate command:
(myenv) $ deactivate
$
On Windows, use myenv\Scripts\activate.bat and myenv\Scripts\deactivate.bat.
If for whatever reason you wish to delete myenv, just remove the entire
directory:
$ rm -rf myenv
Note
With a virtualenv, one should be careful while running things like
ipython or pytest as these are sometimes also installed on the
system in /usr/bin. If you suspect that you are not running the
correct Python, you could simply run (on Linux/OS X):
$ python `which ipython`
to be absolutely sure.
Using Virtualenv on Canopy¶
If you are using Enthought Canopy, it already bundles virtualenv for you but
you should use the venv script. For example:
$ venv --help
$ venv --system-site-packages myenv
$ source myenv/bin/activate
The rest of the steps are the same as above.
Downloading PySPH¶
One way to install PySPH is to use pip:
$ pip install PySPH
This will install PySPH, and you should be able to import it and use the modules with your Python scripts that use PySPH. This will also provide the standard set of PySPH examples. If you want to take a look at the PySPH sources you can get it from git or download a tarball or ZIP as described below.
To get PySPH using git, type the following:
$ git clone https://github.com/pypr/pysph.git
If you do not have git or do not wish to bother with it, you can get a ZIP or tarball from the pysph site. You can unzip/untar this and use the sources.
In the instructions, we assume that you have the pysph sources in the
directory pysph and are inside the root of this directory. For example:
$ unzip pysph-pysph-*.zip
$ cd pysph-pysph-1ce*
or if you cloned the repository:
$ git clone https://github.com/pypr/pysph.git
$ cd pysph
Once you have downloaded PySPH you should be ready to build and install it, see Building and Installing PySPH.
Building and Installing PySPH¶
Once you have the dependencies installed you can install PySPH with:
$ pip install PySPH
You can install the development version using:
$ pip install https://github.com/pypr/pysph/zipball/master
If you downloaded PySPH using git or used a tarball you can do:
$ python setup.py install
You could also do:
$ python setup.py develop
This is useful if you are tracking the latest version of PySPH via git. With git you can update the sources and rebuild using:
$ git pull
$ python setup.py develop
You should be all set now and should next consider Running the tests.
Note that pip caches any packages it has built and installed earlier. So if you installed PySPH without Zoltan support, say, and then uninstalled PySPH using:
$ pip uninstall pysph
then if you try a pip install pysph again (and the PySPH version has not
changed), pip will simply reuse the old build it made. If you do not want
this and want it to rebuild PySPH to use Zoltan, say, then you can do the
following:
$ pip install --no-cache-dir pysph
In this case, pip will disregard its default cache and freshly download and build PySPH. This is often handy.
Running the tests¶
Once you install PySPH you can run the tests using the pysph script
that is installed:
$ pysph test
If you see errors while running the tests, you might want more verbose reporting which you can get with:
$ pysph test -v
This should run all the tests that do not take a long while to complete. If this fails, please contact the pysph-users mailing list or send us email.
There are a few additional test dependencies that need to be installed when running the tests. These can be installed using:
$ pip install -r requirements-test.txt
Once you run the tests, you should see the section on Running the examples.
Note
Internally, we use the pytest package to run the tests.
For more information on what you can do with the pysph script, try
this:
$ pysph -h
Running the examples¶
You can verify the installation by exploring some examples. The examples are
actually installed along with the PySPH library in the pysph.examples
package. You can list and choose the examples to run by doing:
$ pysph run
This will list all the available examples and allow you to run any of them. If
you wish to run a particular one, like say elliptical_drop, you may do:
$ pysph run elliptical_drop
This can also be run as:
$ pysph run pysph.examples.elliptical_drop
To see the options available, try this:
$ pysph run elliptical_drop -h
Note
Technically you can run the examples using python -m
pysph.examples.elliptical_drop. The pysph run command is a
lot more convenient as it allows a much shorter command.
You can view the data generated by the simulation (after the simulation
is complete or during the simulation) by running the pysph view command.
To view the simulated data you may do:
$ pysph view elliptical_drop_output
If you have Mayavi installed this should show a UI that looks like:
If the viewer does not start, you may want to see Possible issues with the viewer.
There are other examples that use the transport velocity formulation:
$ pysph run cavity
This runs the driven cavity problem using the transport velocity formulation
of Adami et al. The example also performs post-processing of the results and
the cavity_output directory will contain a few PNG images with these. You may
view these results using pysph view cavity_output. For example, the file
streamlines.png may look like what is shown below:
If you want to use PySPH for elastic dynamics, you can try some of the examples from Gray et al., Comput. Methods Appl. Mech. Engrg. 190 (2001), 6641–6662:
$ pysph run solid_mech.rings
Which runs the problem of the collision of two elastic rings. View the results like so:
$ pysph view rings_output
This should produce something that may look like the image below.
The auto-generated high-performance code for the example resides in the
directory ~/.pysph/source. A note of caution however: it is not for the
faint-hearted.
Running the examples with OpenMP¶
If you have OpenMP available run any of the examples as follows:
$ pysph run elliptical_drop --openmp
This should run faster if you have multiple cores on your machine. If you wish to change the number of threads to run simultaneously, you can try the following:
$ OMP_NUM_THREADS=8 pysph run elliptical_drop --openmp
You may need to set the number of threads to about 4 times the number of physical cores on your machine to obtain the best scale-up. If you wish to time the actual scale-up of the code with and without OpenMP, you may want to disable any output (which will be serial); you can do this like:
$ pysph run elliptical_drop --disable-output --openmp
Note that one may run example scripts directly with Python, but this
requires access to the location of the script. For example, if a script
pysph_script.py exists, one can run it as:
$ python pysph_script.py
The pysph run command is just a convenient way to run the
pre-installed examples that ship with PySPH.
Running the examples with OpenCL¶
If you have PyOpenCL installed and working with an appropriate device setup, then you can transparently use OpenCL as well with PySPH. This feature is very new and still fairly experimental. You may run into issues but using it is simple. You may run any of the supported examples as follows:
$ pysph run elliptical_drop --opencl
Yes, that’s it: just use the --opencl option and the code will be
auto-generated and run for you. By default it uses single-precision, but you
can also run the code with double precision using:
$ pysph run elliptical_drop --opencl --use-double
Currently inlets and outlets are not supported, periodicity is slow and many optimizations still need to be made but this is rapidly improving. If you want to see an example that runs pretty fast, try the cube example:
$ pysph run cube --disable-output --np 1e6 --opencl
You may compare the execution time with that of OpenMP.
Running the examples with MPI¶
If you compiled PySPH with Zoltan and have mpi4py installed, you may run any
of the examples with MPI as follows (here we choose 4 processors with
-np 4; change this to suit your needs):
$ mpirun -np 4 pysph run dam_break_3d
This may not give you significant speedup if the problem is too small. You can also combine OpenMP and MPI if you wish. You should take care to set up the MPI host information suitably to utilize the processors effectively.
Note
Note that again we are using pysph run here, but for any other
scripts one could do mpirun -np 4 python some_script.py
Possible issues with the viewer¶
Often users are able to install PySPH and run the examples but are unable to
run pysph view for a variety of reasons. This section discusses how these
could be resolved.
The PySPH viewer uses Mayavi. Mayavi can be installed via pip. Mayavi depends on VTK which can also be installed via pip if your package manager does not have a suitable version.
If you are using Ubuntu 16.04 or 16.10 or a VTK version built with Qt5, it is possible that you will see a strange segmentation fault when starting the viewer. This is because Mayavi uses Qt4 and the VTK build has linked to Qt5. In these cases it may be best to use the latest VTK wheels that are now available on pypi. If you have VTK installed but you want a more recent version of Mayavi, you can always use pip to install Mayavi.
For the very specific case of Mayavi on Ubuntu 16.04 and its derivatives, you can use Ubuntu’s older VTK package like so:
$ sudo apt remove mayavi2 python-vtk6
$ sudo apt install python-vtk
$ pip install mayavi
What this does is remove the system Mayavi and the VTK 6.x package (which is linked to Qt5) and instead install the older python-vtk package; pip is then used to install Mayavi against this version of VTK. If the problem persists, remember that by default pip caches any previous installations of Mayavi and you may need to install Mayavi like this:
$ pip --no-cache-dir install mayavi
If you are using EDM or Anaconda, things should work most of the time. However, there may be problems and in this case please report the issues to the pysph-users mailing list or send us email.
Learning the ropes¶
In the tutorials, we will introduce the PySPH framework in the context of the examples provided. Read this if you are a casual user and want to use the framework as is. If you want to add new functions and capabilities to PySPH, you should read The PySPH framework. If you are new to PySPH however, we highly recommend that you go through this document and the next tutorial (A more detailed tutorial).
Recall that PySPH is a framework for parallel SPH-like simulations in Python. The idea, therefore, is to provide a user-friendly mechanism to set up problems while leaving the internal details to the framework. All examples follow the same broad steps:
The tutorials address each of the steps in this flowchart for problems with increasing complexity.
The first example we consider is a “patch” test for SPH formulations for incompressible fluids in elliptical_drop_simple.py. This problem simulates the evolution of a 2D circular patch of fluid under the influence of an initial velocity field given by:
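Consistent with the particle initialization shown later in this section, this initial velocity field can be written as:

```latex
u(x, y) = -100\,x, \qquad v(x, y) = 100\,y
```

which is divergence-free: \(\partial u/\partial x + \partial v/\partial y = -100 + 100 = 0\), as required by incompressibility.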
The kinematical constraint of incompressibility causes the initially circular patch of fluid to deform into an ellipse such that the volume (area) is conserved. An expression can be derived for this deformation which makes it an ideal test to verify codes.
Imports¶
Taking a look at the example (see elliptical_drop_simple.py), the first several lines are imports of various modules:
from numpy import ones_like, mgrid, sqrt
from pysph.base.utils import get_particle_array
from pysph.solver.application import Application
from pysph.sph.scheme import WCSPHScheme
Note
This is common for most examples and it is worth noting the pattern of the
PySPH imports. Fundamental SPH constructs like the kernel and particle
containers are imported from the base subpackage. The framework-related
objects like the solver and integrator are imported from the solver
subpackage. Finally, we import from the sph subpackage the physics-related
part for this problem.
The organization of the pysph package is given below.
Organization of the pysph package¶
PySPH is organized into several subpackages. These are:
 pysph.base: This subpackage defines the pysph.base.particle_array.ParticleArray, the various SPH kernels, the nearest neighbor particle search (NNPS) code, and the Cython code generation utilities.
 pysph.sph: Contains the various SPH equations, the integrator-related modules and associated integration steppers, and the code generation for the SPH looping. pysph.sph.wc contains the equations for the weakly compressible formulation, pysph.sph.solid_mech contains the equations for solid mechanics and pysph.sph.misc has miscellaneous equations.
 pysph.solver: Provides the pysph.solver.solver.Solver, the pysph.solver.application.Application and a convenient way to interact with the solver as it is running.
 pysph.parallel: Provides the parallel functionality.
 pysph.tools: Provides some useful tools including the pysph script CLI and also the data viewer which is based on Mayavi.
 pysph.examples: Provides many standard SPH examples. These examples are meant to be extended by users where needed. This is extremely handy to reproduce and compare SPH schemes.
Functions for loading/generating the particles¶
The code begins with a few functions related to obtaining the exact solution for the given problem which is used for comparing the computed solution.
A single new class called EllipticalDrop, which derives from
pysph.solver.application.Application, is defined. There are several
methods implemented on this class:
 initialize: lets users specify any parameters of interest relevant to the simulation.
 create_scheme: lets the user specify the pysph.sph.scheme.Scheme to use to solve the problem. Several standard schemes are already available and can be readily used.
 create_particles: this method is where one creates the particles to be simulated.
Of these, create_particles and create_scheme are mandatory, for without
them SPH would be impossible. The rest (and other methods) are optional. To
see a complete listing of possible methods that one can subclass, see
pysph.solver.application.Application.
The create_particles method looks like:
class EllipticalDrop(Application):
    # ...
    def create_particles(self):
        """Create the circular patch of fluid."""
        dx = self.dx
        hdx = self.hdx
        ro = self.ro
        name = 'fluid'
        x, y = mgrid[-1.05:1.05 + 1e-4:dx, -1.05:1.05 + 1e-4:dx]
        x = x.ravel()
        y = y.ravel()
        m = ones_like(x)*dx*dx
        h = ones_like(x)*hdx*dx
        rho = ones_like(x) * ro
        u = -100*x
        v = 100*y
        # remove particles outside the circle
        indices = []
        for i in range(len(x)):
            if sqrt(x[i]*x[i] + y[i]*y[i]) - 1 > 1e-10:
                indices.append(i)
        pa = get_particle_array(x=x, y=y, m=m, rho=rho, h=h, u=u, v=v,
                                name=name)
        pa.remove_particles(indices)
        print("Elliptical drop :: %d particles"
              % (pa.get_number_of_particles()))
        self.scheme.setup_properties([pa])
        return [pa]
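The index-collecting loop above is easy to read but can be slow for large particle counts. A minimal vectorized NumPy sketch of the same selection (with an arbitrary dx; this is an illustration, not part of the example itself):

```python
import numpy as np

# Same lattice as create_particles above, with an arbitrary spacing.
dx = 0.05
x, y = np.mgrid[-1.05:1.05 + 1e-4:dx, -1.05:1.05 + 1e-4:dx]
x, y = x.ravel(), y.ravel()

# Vectorized equivalent of the index-collecting loop: particles
# strictly outside the unit circle.
outside = np.sqrt(x * x + y * y) - 1 > 1e-10
indices = np.where(outside)[0].tolist()
```

A list of indices computed this way can be passed to pa.remove_particles, just as in the loop version.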
The method is used to initialize the particles in Python. In PySPH, we use a
ParticleArray object as a container for particles of a given
species. You can think of a particle species as any homogeneous entity in a
simulation. For example, in a two-phase air-water flow, a species could be
used to represent each phase. A ParticleArray can be conveniently
created interactively using NumPy arrays. For example:
>>> import numpy
>>> from pysph.base.utils import get_particle_array
>>> x, y = numpy.mgrid[0:1:0.01, 0:1:0.01]
>>> x = x.ravel(); y = y.ravel()
>>> pa = get_particle_array(x=x, y=y)
would create a ParticleArray representing a uniform distribution
of particles on a Cartesian lattice in 2D, using the helper function
get_particle_array() in the base subpackage. The
get_particle_array_wcsph() function is a special version of this suited to
weakly-compressible formulations.
Note
ParticleArrays in PySPH use flattened or onedimensional arrays.
The ParticleArray is highly convenient, supporting methods for
insertions, deletions and concatenations. In the create_particles
function, we use this convenience to remove a list of particles that fall
outside a circular region:
pa.remove_particles(indices)
where a list of indices is provided. One could also provide the indices in the
form of a cyarray.carray.LongArray which, as the name suggests, is
an array of 64-bit integers.
The particle array also supports what we call strided properties where you may associate multiple values per particle. Normally the stride length is 1. This feature is convenient if you wish to associate a matrix or vector of values per particle. You must still access the individual values as a “flattened” array but one can resize, remove, and add particles and the strided properties will be honored. For example:
>>> pa.add_property(name='A', data=2.0, default=1.0, stride=2)
This will create a new property called 'A' with a stride length of 2.
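The flat layout of strided properties can be illustrated with plain NumPy (a sketch that does not use PySPH itself): with stride=2, the values for n particles occupy a flat array of length 2*n, and particle i owns slots 2*i and 2*i + 1.

```python
import numpy as np

n, stride = 4, 2
A = np.full(n * stride, 2.0)  # flat storage, like property 'A' above

# Set both values of 'A' for particle 2 via its slice of the flat array.
i = 2
A[i * stride:(i + 1) * stride] = [7.0, 9.0]

per_particle = A.reshape(n, stride)  # convenient per-particle view
```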
Note
Any onedimensional (NumPy) array is valid input for PySPH. You can generate this from an external program for solid modelling and load it.
Note
PySPH works with multiple ParticleArrays. This is why we actually return a list in the last line of the create_particles method above.
The create_particles method always returns a list of particle arrays, even
if there is only one. The method self.scheme.setup_properties automatically
adds any properties needed for the particular scheme being used.
Setting up the PySPH framework¶
As we move on, we encounter instantiations of the PySPH framework objects.
In this example, the pysph.sph.scheme.WCSPHScheme is created in
the create_scheme method. The WCSPHScheme internally creates the other
basic objects needed for the SPH simulation. The scheme instance is passed a
list of fluid particle array names and an empty list of solid particle array
names; there are no solid boundaries in this case. The class is also passed a
variety of values relevant to the scheme and simulation. The kernel to be
used is created and passed to the configure_solver method of the scheme. The
pysph.sph.integrator.EPECIntegrator is used to integrate the
particle properties. Various solver-related parameters are also set up.
def create_scheme(self):
    s = WCSPHScheme(
        ['fluid'], [], dim=2, rho0=self.ro, c0=self.co,
        h0=self.dx*self.hdx, hdx=self.hdx, gamma=7.0, alpha=0.1, beta=0.0
    )
    dt = 5e-6
    tf = 0.0076
    s.configure_solver(dt=dt, tf=tf)
    return s
As can be seen, various options are configured for the solver, including initial damping etc. The scheme is responsible for:
 setting up the actual equations that describe the interactions between particles (see SPH equations),
 setting up the kernel (SPH Kernels) and integrator (Integrator related modules) to use for the simulation. In this case a default cubic spline kernel is used.
 setting up the Solver (Module solver), which marshalls the entire simulation.
For a more detailed introduction to these aspects of PySPH, please read
A more detailed tutorial, which provides greater detail on these aspects.
However, by simply creating the WCSPHScheme and creating the particles,
one can simulate the problem.
The astute reader may notice that the EllipticalDrop example
subclasses Application. This makes it easy to pass command
line arguments to the solver. It is also important for the seamless parallel
execution of the same example. To appreciate the role of the
Application, consider for a moment how we might write a parallel
version of the same example. At some point, we would need some MPI imports and
the particles would have to be created in a distributed fashion. All this (and
more) is handled through the abstraction of the Application, which hides
all this detail from the user.
Running the example¶
In the last two lines of the example, we instantiate the EllipticalDrop
class and run it:
if __name__ == '__main__':
    app = EllipticalDrop()
    app.run()
The Application takes care of creating the particles, creating the
solver, handling command line arguments, etc. Many parameters can be
configured via the command line, and these will override any parameters set up
in the respective create_* methods. For example, one may do the following
to find out the various options:
$ pysph run elliptical_drop_simple -h
If we run the example without any arguments it will run until a final time of 0.0075 seconds. We can change this to 0.005 as follows:
$ pysph run elliptical_drop_simple --tf=0.005
When this is run, PySPH will generate Cython code from the equations and
integrators that have been provided, compile that code and run the
simulation. This provides a great deal of convenience for the user without
sacrificing performance. The generated code is available in
~/.pysph/source. If the code/equations have not changed, then the code
will not be recompiled. This is all handled automatically, without user
intervention. By default, output files will be generated in the directory
elliptical_drop_output.
If we wish to utilize multiple cores we could do:
$ pysph run elliptical_drop_simple --openmp
If we wish to run the code in parallel (and have compiled PySPH with Zoltan and mpi4py) we can do:
$ mpirun -np 4 pysph run elliptical_drop_simple
This will automatically parallelize the run using 4 processors. In this example doing this will only slow it down as the number of particles is extremely small.
Visualizing and postprocessing¶
You can view the data generated by the simulation (after the simulation
is complete or during the simulation) by running the pysph view
command. To view the simulated data you may do:
$ pysph view elliptical_drop_simple_output
If you have Mayavi installed this should show a UI that looks like:
For more help on the viewer, please run:
$ pysph view -h
On the user interface, the right side shows the visualized data. On top of it there are several toolbar icons. The left most is the Mayavi logo and clicking on it will present the full Mayavi user interface that can be used to configure any additional details of the visualization.
On the bottom left of the main visualization UI there is a button which has the text “Launch Python Shell”. If one clicks on this, one obtains a full Python interpreter with a few useful objects available. These are:
>>> dir()
['__builtins__', '__doc__', '__name__', 'interpolator', 'mlab',
'particle_arrays', 'scene', 'self', 'viewer']
>>> particle_arrays['fluid'].name
'fluid'
The particle_arrays object is a dictionary of ParticleArrayHelpers, which
is available as pysph.tools.mayavi_viewer.ParticleArrayHelper. The
interpolator is an instance of
pysph.tools.mayavi_viewer.InterpolatorView that is used by the
viewer. The other objects can be used to script the user interface if desired.
Note that particle_arrays can be indexed by array name or index.
Here is an example of scripting the viewer. Let us say we have two particle arrays, ‘boundary’ and ‘fluid’ in that order. Let us say, we wish to make the boundary translucent, then we can write the following:
b = particle_arrays['boundary']
b.plot.actor.property.opacity = 0.2
This does require some knowledge of Mayavi and scripting with it. The plot
attribute of the pysph.tools.mayavi_viewer.ParticleArrayHelper is
a Glyph instance from Mayavi. It is useful to use the record feature
of Mayavi to learn more about how best to script the view.
The viewer will always look for a mayavi_config.py script inside the
output directory to set up the visualization parameters. This file can be
created by overriding the pysph.solver.application.Application
object’s customize_output method. See the dam break 3d
example to see this being used. Of course, this file can also be created
manually.
Loading output data files¶
The simulation data is dumped out either in *.hdf5 files (if one has h5py
installed) or in *.npz files otherwise. You may use the
pysph.solver.utils.load() function to access the raw data:
from pysph.solver.utils import load
data = load('elliptical_drop_100.hdf5')
# if one has only npz files the syntax is the same.
data = load('elliptical_drop_100.npz')
When opening the saved file with load, a dictionary object is returned.
The particle arrays and other information can be obtained from this
dictionary:
particle_arrays = data['arrays']
solver_data = data['solver_data']
particle_arrays is a dictionary of all the PySPH particle arrays.
You may obtain the PySPH particle array, fluid, like so:
fluid = particle_arrays['fluid']
p = fluid.p
p is a numpy array containing the pressure values. All the saved particle
array properties can thus be obtained and used for any post-processing task.
The solver_data provides information about the iteration count, timestep
and the current time.
A good example that demonstrates the use of these is available in the
post_process method of the elliptical_drop.py example.
Interpolating properties¶
Data from the solver can also be interpolated using the
pysph.tools.interpolator.Interpolator
class. Here is the simplest
example of interpolating data from the results of a simulation onto a fixed
grid that is automatically computed from the known particle arrays:
from pysph.solver.utils import load
data = load('elliptical_drop_output/elliptical_drop_100.npz')
from pysph.tools.interpolator import Interpolator
parrays = data['arrays']
interp = Interpolator(list(parrays.values()), num_points=10000)
p = interp.interpolate('p')
p is now a numpy array of 10000 elements, shaped such that it
interpolates all the data in the particle arrays loaded. interp.x and
interp.y are numpy arrays of the chosen x and y coordinates
corresponding to p. To visualize this we may simply do:
from matplotlib import pyplot as plt
plt.contourf(interp.x, interp.y, p)
It is easy to interpolate any other property too. If one wishes to explicitly set the domain on which the interpolation is required one may do:
xmin, xmax, ymin, ymax, zmin, zmax = 0., 1., -1., 1., 0, 1
interp.set_domain((xmin, xmax, ymin, ymax, zmin, zmax), (40, 50, 1))
p = interp.interpolate('p')
This will create a meshgrid in the specified region with the specified number of points.
One could also explicitly set the points on which one wishes to interpolate the data as:
interp.set_interpolation_points(x, y, z)
where x, y, z are numpy arrays of the coordinates of the points on which
the interpolation is desired. This can also be done with the constructor as:
interp = Interpolator(list(parrays.values()), x=x, y=y, z=z)
There are some cases where one may require a higher order interpolation or a
gradient approximation of the property. This can be done by passing a
method for interpolation to the interpolator as:
interp = Interpolator(list(parrays.values()), num_points=10000, method='order1')
Currently, PySPH has three methods of interpolation, namely shepard,
sph and order1. When order1 is set as the method, one can get the
higher order interpolation or its derivative by passing an extra
argument to the interpolate method specifying the component. To get the
derivative in x we can do:
px = interp.interpolate('p', comp=1)
Here, for comp=0 the interpolated property itself is returned, while 1, 2 and 3 return the gradient in the x, y and z directions respectively.
For more details on the class and the available methods, see
pysph.tools.interpolator.Interpolator.
In addition to this, there are other useful pre- and post-processing utilities described in Miscellaneous Tools for PySPH.
Viewing the data in an IPython notebook¶
PySPH makes it relatively easy to view the data inside an IPython notebook with minimal additional dependencies. A simple UI is provided to view the saved data using this interface. It requires jupyter, ipywidgets and ipympl. Currently, a 2D and 3D viewer are provided for the data. Here is a simple example of how one may use this in a notebook. Inside a notebook, one needs the following:
%matplotlib ipympl
from pysph.tools.ipy_viewer import Viewer2D
viewer = Viewer2D('dam_break_2d_output')
The viewer object has many useful methods:
viewer.show_info() # prints useful information about the run.
viewer.show_results() # plots any images in the output directory
viewer.show_log() # Prints the log file.
The most handy one is the one to perform interactive plots:
viewer.interactive_plot()
This shows a simple ipywidgets-based UI that uses matplotlib to plot the data
in the browser. The different saved snapshots can be viewed using a convenient
slider. The viewer shows both the particles as well as simple vector plots.
This is convenient when one wishes to share and show the data without
requiring Mayavi. It does require pysph to be installed in order to be able to
load the files. It is mandatory to have the first line that sets the matplotlib
backend to ipympl.
There is also a 3D viewer which may be used via Viewer3D
instead of the Viewer2D
used above. This viewer requires ipyvolume to be installed.
A slightly more complex example¶
The first example was very simple. In particular, there was no post-processing of the results. Many PySPH examples also include post-processing code in the example. This makes it easy to reproduce results and also to easily compare different schemes. A complete version of the elliptical drop example is available at elliptical_drop.py.
There are a few things that this example does a bit differently:
 It has some useful code to generate the exact solution for comparison.
 It uses a Gaussian kernel and a variety of different options for the solver (see how configure_solver is called); for various other options see pysph.solver.solver.Solver.
 The EllipticalDrop class has a post_process method which optionally post-processes the generated results. This in turn uses a couple of private methods, _compute_results and _make_final_plot.
 The last line of the code has a call to app.post_process(...), which actually post-processes the data.
This example is therefore a complete example and shows how one could write a useful and reusable PySPH example.
Doing more¶
The Application
has several more methods that can be used in
additional contexts, for example one may override the following additional
methods:
 add_user_options: this is used to create additional user-defined command line arguments. The command line options are available in self.options and can be used in the other methods.
 consume_user_options: this is called after the command line arguments are parsed, and can optionally be used to set up any variables that have been added by the user in add_user_options. Note that this method is called before the particles and solver etc. are created.
 create_domain: this is used when a periodic domain is needed.
 create_inlet_outlet: override this to return any inlet and outlet objects. See the pysph.sph.simple_inlet_outlet module.
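In PySPH, add_user_options is passed an argument group of the underlying argparse parser, so defining an option works much like plain argparse. The following is a minimal, self-contained sketch using argparse directly; the --my-factor option and the two standalone functions are hypothetical stand-ins for illustration, not part of any PySPH example:

```python
import argparse

# Stand-in for PySPH: add_user_options receives an argparse argument
# group, and the parsed result is later available as self.options.
# The --my-factor option below is hypothetical, purely for illustration.
def add_user_options(group):
    group.add_argument(
        "--my-factor", action="store", type=float, dest="my_factor",
        default=1.0, help="Illustrative user-defined scaling factor."
    )

def consume_user_options(options):
    # Called after parsing; stash the parsed value for later use.
    return options.my_factor

parser = argparse.ArgumentParser()
group = parser.add_argument_group("user")
add_user_options(group)

options = parser.parse_args(["--my-factor", "2.5"])
factor = consume_user_options(options)
print(factor)  # 2.5
```

In an actual Application subclass these would be methods, and the parsed value would be read from self.options inside consume_user_options.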
There are many others, please see the Application
class
documentation to see these. The order of invocation of the various methods is
also documented there.
There are several examples that ship with PySPH, explore these to get a better idea of what is possible.
Debugging when things go wrong¶
When you attempt to run your own simulations you may run into a variety of errors. Some errors in setting up equations and the like are easy to detect and PySPH will provide an error message that should usually be helpful. If this is a Python related error you should get a traceback and debug it as you would debug any Python program.
PySPH writes out a log file in the output directory; looking at it is sometimes useful. The log file will usually tell you the kernel, integrator, NNPS, and the exact equations and groups used for a simulation. This can often be very useful when sorting out subtle issues with the equations and groups.
Things get harder to debug when you get a segmentation fault or your code just crashes. Even though PySPH is implemented in Python, you can get one of these if your timestep is too large or your equations are doing strange things (divide by zero, taking the square root of a negative number). This happens because PySPH translates your code into a lower-level language for performance. The following are the most common causes of a crash/segfault:
 The particles have “blown up”, this can happen when the accelerations are very large. This can also happen when your timestep is very large.
 There are mistakes in your equations or integrator step. Divide by zero, or some quantity was not properly initialized – for example if the particle masses were not correctly initialized and were set to zero you might get these errors. It is also possible that you have made some indexing errors in your arrays, check all your array accesses in your equations.
Let us see how we can debug these. Let us say your code is in example.py
,
you can do the following:
$ python example.py --pfreq 1 --detailed-output
In this case, --pfreq 1 asks pysph to dump output at every timestep.
By default only specific properties that the user has requested are saved.
Using --detailed-output dumps every property of every array. This includes
all accelerations as well. Viewing this data with the pysph view command
makes it easy to see which acceleration is causing a problem.
Sometimes even this is not enough, as the particles diverge or the code blows
up at the very first step of a multi-stage integrator. In this case, no output
would be generated. To debug the accelerations in this situation one may
define a method called pre_step
in your
pysph.solver.application.Application
subclass as follows:
class EllipticalDrop(Application):
    # ...
    def pre_step(self, solver):
        solver.dump_output()
What this does is ask the solver to dump the output right before each
timestep is taken. At the start of the simulation the first accelerations
have been calculated, and since this output is now saved, one should be able
to debug the accelerations. Again, use --detailed-output with this to look at
the accelerations right at the start.
A more detailed tutorial¶
In the previous tutorial (Learning the ropes) we provided a high level overview of the PySPH framework. No details were provided on equations, integrators and solvers. This tutorial assumes that you have read the previous one.
Recall that in the previous tutorial, a circular patch of fluid with a given
initial velocity field was simulated using a weakly-compressible SPH scheme.
In that example, a WCSPHScheme
object was created in the create_scheme
method. The details of what exactly the scheme does was not discussed. This
tutorial explains some of those details by solving the same problem using a
lowerlevel approach where the actual SPH equations, the integrator, and the
solver are created manually. This should help a user write their own schemes
or modify an existing scheme. The full code for this example can be seen in elliptical_drop_no_scheme.py.
Imports¶
This example requires a few more imports than the previous case.
The first several lines are imports of various modules:
import os
from numpy import array, ones_like, mgrid, sqrt
# PySPH base and carray imports
from pysph.base.utils import get_particle_array_wcsph
from pysph.base.kernels import Gaussian
# PySPH solver and integrator
from pysph.solver.application import Application
from pysph.solver.solver import Solver
from pysph.sph.integrator import EPECIntegrator
from pysph.sph.integrator_step import WCSPHStep
# PySPH sph imports
from pysph.sph.equation import Group
from pysph.sph.basic_equations import XSPHCorrection, ContinuityEquation
from pysph.sph.wc.basic import TaitEOS, MomentumEquation
Note
This is common for all examples that do not use a scheme and it is worth
noting the pattern of the PySPH imports. Fundamental SPH constructs like
the kernel and particle containers are imported from the base
subpackage. The framework related objects like the solver and integrator
are imported from the solver
subpackage. Finally, we import from the
sph
subpackage, the physics related part for this problem.
The methods defined for creating the particles are the same as in the previous
tutorial with the exception of the call to
self.scheme.setup_properties([pa])
. In this example, we do not create a
scheme, we instead create all the required PySPH objects from the
application. We do not override the create_scheme
method but instead have
two other methods called create_solver
and create_equations
which
handle this.
Setting up the PySPH framework¶
As we move on, we encounter instantiations of the PySPH framework objects.
These are the pysph.solver.application.Application
,
pysph.sph.integrator.EPECIntegrator
and
pysph.solver.solver.Solver
objects. The create_solver
method
constructs a Solver
instance and returns it as seen below:
def create_solver(self):
    kernel = Gaussian(dim=2)
    integrator = EPECIntegrator(fluid=WCSPHStep())

    dt = 5e-6; tf = 0.0076
    solver = Solver(kernel=kernel, dim=2, integrator=integrator,
                    dt=dt, tf=tf, adaptive_timestep=True,
                    cfl=0.05, n_damp=50,
                    output_at_times=[0.0008, 0.0038])
    return solver
As can be seen, various options are configured for the solver, including adaptive timestepping, initial damping etc.
Intuitively, in an SPH simulation, the role of the
EPECIntegrator
should be obvious. In the code, we see that we
ask for the “fluid” to be stepped using a WCSPHStep
object. Taking
a look at the create_particles
method once more, we notice that the
ParticleArray representing the circular patch was named as fluid. So
we’re essentially asking the PySPH framework to step or integrate the
properties of the ParticleArray fluid using WCSPHStep
. It is
safe to assume that the framework takes the responsibility to call this
integrator at the appropriate time during a timestep.
The Solver
is the main driver for the problem. It marshals a
simulation and takes the responsibility (through appropriate calls to the
integrator) to update the solution to the next time step. It also handles
input/output and computing global quantities (such as minimum time step) in
parallel.
Specifying the interactions¶
At this stage, we have the particles (represented by the fluid ParticleArray) and the framework to integrate the solution and marshal the simulation. What remains is to define how to actually go about updating properties within a time step. That is, for each particle we must “do something”. This is where the physics for the particular problem comes in.
For SPH, this would be the pairwise interactions between particles. In PySPH, we provide a specific way to define the sequence of interactions which is a list of Equation objects (see SPH equations). For the circular patch test, the sequence of interactions is relatively straightforward:
 Compute pressure from the Equation of State (EOS): \(p = f(\rho)\)
 Compute the rate of change of density: \(\frac{d\rho}{dt}\)
 Compute the rate of change of velocity (accelerations): \(\frac{d\boldsymbol{v}}{dt}\)
 Compute corrections for the velocity (XSPH): \(\frac{d\boldsymbol{x}}{dt}\)
Care must be taken to ensure that the EOS is evaluated for all the particles before the other equations are evaluated.
We request this in PySPH by creating a list of Equation
instances
in the create_equations
method:
def create_equations(self):
    equations = [
        Group(equations=[
            TaitEOS(dest='fluid', sources=None, rho0=self.ro,
                    c0=self.co, gamma=7.0),
        ], real=False),
        Group(equations=[
            ContinuityEquation(dest='fluid', sources=['fluid']),
            MomentumEquation(dest='fluid', sources=['fluid'],
                             alpha=self.alpha, beta=0.0, c0=self.co),
            XSPHCorrection(dest='fluid', sources=['fluid']),
        ]),
    ]
    return equations
Each Group
instance is completed before the next is taken up. Each group
contains a list of Equation
objects. Each interaction is specified
through an Equation
object, which is instantiated with the general
syntax:
Equation(dest='array_name', sources=[...], **kwargs)
The dest argument specifies the target or destination
ParticleArray on which this interaction is going to
operate. Similarly, the sources argument specifies a list of
ParticleArrays from which the contributions are sought. For some
equations like the EOS, it doesn’t make sense to define a list of
sources and a None
suffices. The specification basically tells PySPH
that for one time step of the calculation:
 Use the Tait EOS to update the properties of the fluid array
 Compute \(\frac{d\rho}{dt}\) for the fluid from the fluid
 Compute accelerations for the fluid from the fluid
 Compute the XSPH corrections for the fluid, using fluid as the source
Note
Notice the use of the ParticleArray name “fluid”. It is the responsibility of the user to ensure that the equation specification is done in a manner consistent with the creation of the particles.
With the list of equations, our problem is completely defined. PySPH now knows what to do with the particles within a time step. More importantly, this information is enough to generate code to carry out a complete SPH simulation. For more details on how new equations can be written please read The PySPH framework.
The example may be run the same way as the previous example:
$ pysph run elliptical_drop_no_scheme
The resulting output can be analyzed or viewed the same way as in the previous example.
In the previous example (Learning the ropes), the equations and solver
are created automatically by the WCSPHScheme
. If the create_scheme
is
overwritten and returns a scheme, the create_equations
and create_solver
need not be implemented. For more details on the various application methods,
please see pysph.solver.application.Application
. Implementing other
schemes can be done by either implementing the equations directly as done in
this example or one could implement a new pysph.sph.scheme.Scheme
.
The framework and library¶
The PySPH framework¶
This document is an introduction to the design of PySPH. It provides additional high-level details on the functionality that the PySPH framework provides. This should allow you to use PySPH effectively and extend the framework to solve problems other than those provided in the main distribution.
To elucidate some of the internal details of PySPH, we will consider a typical SPH problem and proceed to write the code that implements it. Thereafter, we will look at how this is implemented using the PySPH framework.
The dam-break problem¶
The problem that is used for the illustration is the Weakly Compressible SPH (WCSPH) formulation for free surface flows, applied to a breaking dam problem:
A column of water is initially at rest (presumably held in place by some membrane). The problem simulates a breaking dam in that the membrane is instantly removed and the column is free to fall under its own weight and the effect of gravity. This and other variants of the dam break problem can be found in the examples directory of PySPH.
Equations¶
The discrete equations for this formulation are given as
\[p = \frac{c_0^2\,\rho_0}{\gamma}\left(\left(\frac{\rho}{\rho_0}\right)^{\gamma} - 1\right) \qquad (1)\]
\[\frac{d\rho_a}{dt} = \sum_{b} m_b\, \boldsymbol{v}_{ab}\cdot\nabla_a W_{ab} \qquad (2)\]
\[\frac{d\boldsymbol{v}_a}{dt} = -\sum_{b} m_b\left(\frac{p_a}{\rho_a^2} + \frac{p_b}{\rho_b^2} + \Pi_{ab}\right)\nabla_a W_{ab} + \boldsymbol{g} \qquad (3)\]
\[\frac{d\boldsymbol{x}_a}{dt} = \boldsymbol{v}_a + \epsilon\sum_{b}\frac{m_b}{\bar{\rho}_{ab}}\,\boldsymbol{v}_{ba}\, W_{ab} \qquad (4)\]
Boundary conditions¶
The dam-break problem involves two types of particles, namely the fluid (water column) and the solid (tank). The basic boundary condition enforced on a solid wall is the no-penetration boundary condition, which can be stated as
Where \(\vec{n_b}\) is the local normal vector for the boundary. For this example, we use the dynamic boundary conditions. For this boundary condition, the boundary particles are treated as fixed fluid particles that evolve with the continuity equation ((2)) and the equation of state ((1)). In addition, they contribute to the fluid acceleration via the momentum equation ((3)). When fluid particles approach a solid wall, the density of the fluid and the solid particles increases via the continuity equation. With the increased density and consequently increased pressure, the boundary particles exert a repulsive force on the fluid particles, thereby enforcing the no-penetration condition.
Time integration¶
For the time integration, we use a second order predictorcorrector integrator. For the predictor stage, the following operations are carried out:
Once the variables are predicted to their half time step values, the pairwise interactions are carried out to compute the accelerations. Subsequently, the corrector is used to update the particle positions:
Note
The acceleration variables are prefixed with \(a\). The boldface symbols in the above equations indicate vector quantities. Thus \(a_\boldsymbol{v}\) represents \(a_u,\, a_v,\, \text{and}\, a_w\), the vector components of acceleration.
Required arrays and properties¶
We will be using two ParticleArrays (see
pysph.base.particle_array.ParticleArray
), one for the fluid and
another for the solid. Recall that for the dynamic boundary conditions, the
solid is treated like a fluid with the only difference being that the velocity
(\(a_\boldsymbol{v}\)) and position accelerations (\(a_\boldsymbol{x}
= \boldsymbol{u} + \boldsymbol{u}^{\text{XSPH}}\)) are never calculated. The
solid particles therefore remain fixed for the duration of the simulation.
To carry out the integrations for the particles, we require the following variables:
 SPH properties: x, y, z, u, v, w, h, m, rho, p, cs
 Acceleration variables: au, av, aw, ax, ay, az, arho
 Properties at the beginning of a time step: x0, y0, z0, u0, v0, w0, rho0
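In plain Python terms (without PySPH itself), such a particle array is simply a set of named, equal-length arrays, one per property; a minimal sketch of the layout listed above:

```python
# Plain-Python stand-in for a particle array: one equal-length list per
# property, covering the SPH, acceleration and start-of-step properties
# listed above.
n = 4  # a tiny, illustrative number of particles

props = (
    "x y z u v w h m rho p cs "   # SPH properties
    "au av aw ax ay az arho "     # acceleration variables
    "x0 y0 z0 u0 v0 w0 rho0"      # values at the start of a timestep
).split()
fluid = {name: [0.0] * n for name in props}

fluid["m"] = [0.1] * n       # illustrative particle masses
fluid["rho"] = [1000.0] * n  # illustrative initial density

print(len(props))  # 25 properties in all
```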
A non-PySPH implementation¶
We first consider pseudo-code for a non-PySPH implementation. We assume
we have been given two ParticleArrays, fluid and solid, corresponding to
the dam-break problem. We also assume that a pysph.base.nnps.NNPS
object nps is available and can be used for neighbor queries:
from pysph.base import nnps
fluid = get_particle_array_fluid(...)
solid = get_particle_array_solid(...)
particles = [fluid, solid]
nps = nnps.LinkedListNNPS(dim=2, particles=particles, radius_scale=2.0)
The part of the code responsible for the interactions can be defined as
class SPHCalc:
    def __init__(self, nnps, particles):
        self.nnps = nnps
        self.particles = particles

    def compute(self):
        self.eos()
        self.accelerations()

    def eos(self):
        for array in self.particles:
            num_particles = array.get_number_of_particles()
            for i in range(num_particles):
                array.p[i] = ...   # Tait EOS function for pressure
                array.cs[i] = ...  # Tait EOS function for sound speed
    def accelerations(self):
        fluid, solid = self.particles[0], self.particles[1]
        nps = self.nnps
        nbrs = UIntArray()

        # Continuity equation for the fluid.
        dst = fluid; dst_index = 0

        # Source is fluid.
        src = fluid; src_index = 0
        num_particles = dst.get_number_of_particles()
        for i in range(num_particles):
            # Get the nearest fluid neighbors.
            nps.get_nearest_particles(src_index, dst_index, i, nbrs)
            for j in nbrs:
                # Pairwise quantities.
                xij = dst.x[i] - src.x[j]
                yij = dst.y[i] - src.y[j]
                ...
                # Kernel interaction terms.
                wij = kernel.function(xij, ...)   # kernel function
                dwij = kernel.gradient(xij, ...)  # kernel gradient
                # Compute the interaction and store the contribution.
                dst.arho[i] += ...  # interaction term

        # Source is solid.
        src = solid; src_index = 1
        num_particles = dst.get_number_of_particles()
        for i in range(num_particles):
            # Get the nearest solid neighbors.
            nps.get_nearest_particles(src_index, dst_index, i, nbrs)
            for j in nbrs:
                # Pairwise quantities.
                xij = dst.x[i] - src.x[j]
                yij = dst.y[i] - src.y[j]
                ...
                # Kernel interaction terms.
                wij = kernel.function(xij, ...)   # kernel function
                dwij = kernel.gradient(xij, ...)  # kernel gradient
                # Compute the interaction and store the contribution.
                dst.arho[i] += ...  # interaction term

        # Destination is solid.
        dst = solid; dst_index = 1

        # Source is fluid.
        src = fluid; src_index = 0
        num_particles = dst.get_number_of_particles()
        for i in range(num_particles):
            # Get the nearest fluid neighbors.
            nps.get_nearest_particles(src_index, dst_index, i, nbrs)
            for j in nbrs:
                # Pairwise quantities.
                xij = dst.x[i] - src.x[j]
                yij = dst.y[i] - src.y[j]
                ...
                # Kernel interaction terms.
                wij = kernel.function(xij, ...)   # kernel function
                dwij = kernel.gradient(xij, ...)  # kernel gradient
                # Compute the interaction and store the contribution.
                dst.arho[i] += ...  # interaction term
We see that the use of multiple particle arrays has forced us to write a fairly long piece of code for the accelerations. In fact, we have only shown the part of the main loop that computes \(a_\rho\) for the continuity equation. Recall that our problem states that the continuity equation should be evaluated for all particles, taking influences from all other particles into account. For two particle arrays (fluid, solid), we have four such pairings (fluid-fluid, fluid-solid, solid-fluid, solid-solid). The last one can be eliminated when we consider that the boundary has zero velocity and hence its contribution will always be trivially zero.
The apparent complexity of the SPHCalc.accelerations method notwithstanding, we notice that similar pieces of code are being repeated. In general, we can break down the computation for a general source-destination pair like so:
# Consider the first destination particle array.
for all dst particles:
    get_neighbors_from_source()
    for all neighbors:
        compute_pairwise_terms()
        compute_interactions_for_dst_particle()

# Consider the next source for this destination particle array.
...

# Consider the next destination particle array.
Note
The SPHCalc.compute method first calls the EOS before calling the main loop to compute the accelerations. This is because the EOS (which updates the pressure) must logically be completed for all particles before the accelerations (which uses the pressure) are computed.
The predictorcorrector integrator for this problem can be defined as
class Integrator:
    def __init__(self, particles, nps, calc):
        self.particles = particles
        self.nps = nps
        self.calc = calc

    def initialize(self):
        for array in self.particles:
            array.rho0[:] = array.rho[:]
            ...
            array.w0[:] = array.w[:]

    def stage1(self, dt):
        dtb2 = 0.5 * dt
        for array in self.particles:
            array.rho = array.rho0[:] + dtb2*array.arho[:]
            array.u = array.u0[:] + dtb2*array.au[:]
            array.v = array.v0[:] + dtb2*array.av[:]
            ...
            array.z = array.z0[:] + dtb2*array.az[:]

    def stage2(self, dt):
        for array in self.particles:
            array.rho = array.rho0[:] + dt*array.arho[:]
            array.u = array.u0[:] + dt*array.au[:]
            array.v = array.v0[:] + dt*array.av[:]
            ...
            array.z = array.z0[:] + dt*array.az[:]

    def integrate(self, dt):
        self.initialize()
        self.stage1(dt)      # predictor step
        self.nps.update()    # update NNPS structure
        self.calc.compute()  # compute the accelerations
        self.stage2(dt)      # corrector step
The Integrator.integrate method is responsible for updating the solution to the next time level. Before the predictor stage, the Integrator.initialize method is called to store the values x0, y0, … at the beginning of a timestep. Given the positions of the particles at the half timestep, the NNPS data structure is updated before calling the SPHCalc.compute method. Finally, the corrector step is called once we have the updated accelerations.
This hypothetical implementation can be integrated to the final time by calling the Integrator.integrate method repeatedly. In the next section, we will see how PySPH does this automatically.
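The predict-evaluate-correct pattern of initialize/stage1/compute/stage2 can be tried on a toy scalar problem in plain Python; the ODE du/dt = -u below is purely illustrative (exact solution u(t) = exp(-t)) and is not part of the dam-break problem:

```python
# Predict-evaluate-correct, mirroring initialize/stage1/compute/stage2,
# applied to the toy ODE du/dt = -u (exact solution: u(t) = exp(-t)).
import math

def step(u, dt):
    u0 = u                      # initialize: save the start-of-step value
    uh = u0 + 0.5 * dt * (-u0)  # stage1: predict to the half step
    a = -uh                     # compute: "acceleration" at the half step
    return u0 + dt * a          # stage2: correct over the full step

u, dt = 1.0, 1e-3
for _ in range(1000):           # integrate to t = 1
    u = step(u, dt)

# The scheme is second order, so the error vs exp(-1) is tiny.
print(abs(u - math.exp(-1.0)))
```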
PySPH implementation¶
Now that we have a hypothetical implementation outlined, we can proceed to describe the abstractions that PySPH introduces, enabling a highly user-friendly and flexible way to define pairwise particle interactions. To see a working example, see dam_break_2d.py.
We assume that we have the same ParticleArrays (fluid and solid) and NNPS objects as before.
Specifying the equations¶
Given the particle arrays, we ask for a given set of operations to be
performed on the particles by passing a list of Equation objects (see
SPH equations) to the Solver (see
pysph.solver.solver.Solver
)
equations = [
    # Equation of state
    Group(equations=[
        TaitEOS(dest='fluid', sources=None, rho0=ro, c0=co, gamma=gamma),
        TaitEOS(dest='boundary', sources=None, rho0=ro, c0=co, gamma=gamma),
    ], real=False),
    Group(equations=[
        # Continuity equation
        ContinuityEquation(dest='fluid', sources=['fluid', 'boundary']),
        ContinuityEquation(dest='boundary', sources=['fluid']),
        # Momentum equation
        MomentumEquation(dest='fluid', sources=['fluid', 'boundary'],
                         alpha=alpha, beta=beta, gy=-9.81, c0=co),
        # Position step with XSPH
        XSPHCorrection(dest='fluid', sources=['fluid']),
    ]),
]
We see that we have used two Group objects (see
pysph.sph.equation.Group), segregating two parts of the evaluation
that are logically dependent. The second group, where the accelerations are
computed, must be evaluated after the first group, where the pressure is
updated. Recall that we had to do a similar segregation for the SPHCalc.compute
method in our hypothetical implementation:
class SPHCalc:
    def __init__(self, nnps, particles):
        ...

    def compute(self):
        self.eos()
        self.accelerations()
Note
PySPH will respect the order of the Equation and equation Groups as provided by the user. This flexibility also means it is quite easy to make subtle errors.
Note that in the first group, we have an additional parameter called
real=False. This is only relevant for parallel simulations and for
simulations with periodic boundaries. What it says is that the equations in
that group should be applied to all particles (remote and local); non-local
particles are not “real”. By default a Group has real=True, thus only
local particles are operated on. However, we wish to apply the equation of
state to all particles. Similar is the case for periodic problems, where it is
sometimes necessary to set real=False in order to set the properties of the
additional particles used for periodicity.
Writing the equations¶
It is important for users to be able to easily write out new SPH equations of motion. PySPH provides a very convenient way to write these equations. The PySPH framework allows the user to write these equations in pure Python. These pure Python equations are then used to generate highperformance code and then called appropriately to perform the simulations.
There are two types of particle computations in SPH simulations:
 The most common type of interaction is to change the property of one particle (the destination) using the properties of a source particle.
 A less common type of interaction is to calculate, say, a sum (or product, maximum or minimum) of values of a particular property. This is commonly called a “reduce” operation in the context of map-reduce programming models.
Computations of the first kind are inherently parallel and easy to perform correctly both in serial and parallel. Computations of the second kind (reductions) can be tricky in parallel. As a result, in PySPH we distinguish between the two. This will be elaborated in more detail in the following.
In general an SPH algorithm proceeds as the following pseudocode illustrates:
for destination in particles:
    for equation in equations:
        equation.initialize(destination)

# This is where the bulk of the computation happens.
for destination in particles:
    for source in destination.neighbors:
        for equation in equations:
            equation.loop(source, destination)

for destination in particles:
    for equation in equations:
        equation.post_loop(destination)

# Reduce any properties if needed.
total_mass = reduce_array(particles.m, 'sum')
max_u = reduce_array(particles.u, 'max')
The neighbors of a given particle are identified using a nearest neighbor algorithm. PySPH does this automatically for the user and internally uses a linked-list based algorithm to identify neighbors.
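To make the control flow of this pseudo-code concrete, here is a toy, pure-Python rendering of it: a single summation-density-style "equation", an all-pairs sweep standing in for the NNPS, and a flat kernel, all purely illustrative (real PySPH generates high-performance code and omits none of these pieces):

```python
# A toy, pure-Python rendering of the pseudo-code above: one summation-
# density style "equation" run over all-pairs "neighbors" in 1D.
# (post_loop and a real neighbor search are omitted for brevity.)
class SummationDensity:
    def initialize(self, d, d_idx):
        d["rho"][d_idx] = 0.0

    def loop(self, d, d_idx, s, s_idx):
        # A flat kernel of radius 1.0, purely for illustration.
        if abs(d["x"][d_idx] - s["x"][s_idx]) <= 1.0:
            d["rho"][d_idx] += s["m"][s_idx]

particles = {"x": [0.0, 0.5, 2.0], "m": [1.0, 1.0, 1.0], "rho": [0.0] * 3}
equations = [SummationDensity()]
n = len(particles["x"])

for eq in equations:        # initialize pass
    for i in range(n):
        eq.initialize(particles, i)

for i in range(n):          # main loop over destinations
    for j in range(n):      # all-pairs stands in for the NNPS
        for eq in equations:
            eq.loop(particles, i, particles, j)

print(particles["rho"])  # [2.0, 2.0, 1.0]
```

The isolated third particle only "sees" itself, which is why its density stays at 1.0.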
In PySPH we follow some simple conventions when writing equations. Let us look
at a few equations first. Keeping with the analogy of our hypothetical
implementation and the SPHCalc.accelerations method above, we consider the
implementations of the PySPH pysph.sph.wc.basic.TaitEOS and
pysph.sph.basic_equations.ContinuityEquation objects. The former
looks like:
class TaitEOS(Equation):
    def __init__(self, dest, sources=None,
                 rho0=1000.0, c0=1.0, gamma=7.0):
        self.rho0 = rho0
        self.rho01 = 1.0/rho0
        self.c0 = c0
        self.gamma = gamma
        self.gamma1 = 0.5*(gamma - 1.0)
        self.B = rho0*c0*c0/gamma
        super(TaitEOS, self).__init__(dest, sources)

    def loop(self, d_idx, d_rho, d_p, d_cs):
        ratio = d_rho[d_idx] * self.rho01
        tmp = pow(ratio, self.gamma)
        d_p[d_idx] = self.B * (tmp - 1.0)
        d_cs[d_idx] = self.c0 * pow(ratio, self.gamma1)
Notice that it has only one loop method, and this loop is applied
for all particles. Since there are no sources, there is no need for
us to find the neighbors. There are a few important conventions that
are to be followed when writing the equations:
 d_* indicates a destination array.
 s_* indicates a source array.
 d_idx and s_idx represent the destination and source index respectively.
 Each function can take any number of arguments as required; these are automatically supplied internally when the application runs.
 All the standard math symbols from math.h are also available.
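As a plain-Python sanity check of the formula implemented in TaitEOS.loop above (using the class defaults rho0=1000, c0=1, gamma=7; the sample densities are purely illustrative):

```python
# p = B*((rho/rho0)**gamma - 1) and cs = c0*(rho/rho0)**((gamma - 1)/2),
# exactly as computed in TaitEOS.loop.
rho0, c0, gamma = 1000.0, 1.0, 7.0  # the class defaults
B = rho0 * c0 * c0 / gamma

def tait(rho):
    ratio = rho / rho0
    p = B * (ratio**gamma - 1.0)
    cs = c0 * ratio**(0.5 * (gamma - 1.0))
    return p, cs

p, cs = tait(1000.0)  # at the reference density
print(p, cs)  # 0.0 1.0
```

At the reference density the pressure vanishes and the sound speed equals c0; a slightly compressed particle (rho > rho0) gets a positive pressure, which is what drives the weakly-compressible scheme.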
Let us look at the ContinuityEquation as another simple example.
It is defined as:
class ContinuityEquation(Equation):
    def initialize(self, d_idx, d_arho):
        d_arho[d_idx] = 0.0

    def loop(self, d_idx, d_arho, s_idx, s_m, DWIJ, VIJ):
        vijdotdwij = DWIJ[0]*VIJ[0] + DWIJ[1]*VIJ[1] + DWIJ[2]*VIJ[2]
        d_arho[d_idx] += s_m[s_idx]*vijdotdwij
Notice that the initialize method merely sets the value to zero. The
loop method also accepts a few new quantities like DWIJ, VIJ etc.
These are precomputed quantities that are automatically provided depending on
the equations needed for a particular source/destination pair. The following
precomputed quantities are available and may be passed into any equation:
 HIJ = 0.5*(d_h[d_idx] + s_h[s_idx])
 XIJ[0] = d_x[d_idx] - s_x[s_idx], XIJ[1] = d_y[d_idx] - s_y[s_idx], XIJ[2] = d_z[d_idx] - s_z[s_idx]
 R2IJ = XIJ[0]*XIJ[0] + XIJ[1]*XIJ[1] + XIJ[2]*XIJ[2]
 RIJ = sqrt(R2IJ)
 WIJ = KERNEL(XIJ, RIJ, HIJ)
 WJ = KERNEL(XIJ, RIJ, s_h[s_idx])
 RHOIJ = 0.5*(d_rho[d_idx] + s_rho[s_idx])
 WI = KERNEL(XIJ, RIJ, d_h[d_idx])
 RHOIJ1 = 1.0/RHOIJ
 DWIJ: GRADIENT(XIJ, RIJ, HIJ, DWIJ)
 DWJ: GRADIENT(XIJ, RIJ, s_h[s_idx], DWJ)
 DWI: GRADIENT(XIJ, RIJ, d_h[d_idx], DWI)
 VIJ[0] = d_u[d_idx] - s_u[s_idx], VIJ[1] = d_v[d_idx] - s_v[s_idx], VIJ[2] = d_w[d_idx] - s_w[s_idx]
 EPS = 0.01 * HIJ * HIJ
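These definitions can be checked in plain Python for a single destination-source pair (the particle coordinates and smoothing lengths below are made up for the example):

```python
from math import sqrt

# Two illustrative particles: a destination d and a source s.
d = dict(x=1.0, y=0.0, z=0.0, h=0.1)
s = dict(x=0.0, y=0.0, z=0.0, h=0.1)

HIJ = 0.5 * (d["h"] + s["h"])  # averaged smoothing length
XIJ = [d["x"] - s["x"], d["y"] - s["y"], d["z"] - s["z"]]
R2IJ = XIJ[0]**2 + XIJ[1]**2 + XIJ[2]**2
RIJ = sqrt(R2IJ)
EPS = 0.01 * HIJ * HIJ

print(HIJ, RIJ)  # 0.1 1.0
```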
In addition if one requires the current time or the timestep in an equation, the following may be passed into any of the methods of an equation:
 t: the current time.
 dt: the current time step.
Note
Note that all standard functions and constants in math.h
are available
for use in the equations. The value of \(\pi\) is available in
M_PI
. Please avoid using functions from numpy
as these are Python
functions and are slow. They also will not allow PySPH to be run with
OpenMP. Similarly, do not use functions or constants from sympy
and
other libraries inside the equation methods as these will significantly
slow down your code.
In addition, these constants from the math library are available:
 M_E: value of e
 M_LOG2E: value of log2(e)
 M_LOG10E: value of log10(e)
 M_LN2: value of ln(2)
 M_LN10: value of ln(10)
 M_PI: value of pi
 M_PI_2: value of pi/2
 M_PI_4: value of pi/4
 M_1_PI: value of 1/pi
 M_2_PI: value of 2/pi
 M_2_SQRTPI: value of 2/sqrt(pi)
 M_SQRT2: value of sqrt(2)
 M_SQRT1_2: value of sqrt(1/2)
In an equation, any undeclared variables are automatically declared to be
doubles in the high-performance Cython code that is generated. In addition,
one may declare a temporary variable to be a matrix or a cPoint by
writing:
mat = declare("matrix((3,3))")
point = declare("cPoint")
When the Cython code is generated, this gets translated to:
cdef double[3][3] mat
cdef cPoint point
One can also declare any valid ctype using the same approach, for example if
one desires a long
data type, one may use ii = declare("long")
.
One may also perform any reductions on properties. Consider a trivial example
of calculating the total mass and the maximum u
velocity in the following
equation:
class FindMaxU(Equation):
    def reduce(self, dst, t, dt):
        m = serial_reduce_array(dst.m, 'sum')
        max_u = serial_reduce_array(dst.u, 'max')
        dst.total_mass[0] = parallel_reduce_array(m, 'sum')
        dst.max_u[0] = parallel_reduce_array(max_u, 'max')
where:
 dst: refers to a destination ParticleArray.
 t, dt: are the current time and timestep respectively.
 serial_reduce_array: is a special function provided that performs reductions correctly in serial. It currently supports sum, prod, max and min operations. See pysph.base.reduce_array.serial_reduce_array(). There is also a pysph.base.reduce_array.parallel_reduce_array(), which is to be used to reduce an array across processors. Using parallel_reduce_array is expensive, as it is an all-to-all communication; one can reduce this cost by collecting the values into a single array and reducing that once.
We recommend that for any kind of reduction one always use the
serial_reduce_array function and parallel_reduce_array inside a
reduce method. One need not worry about parallel/serial modes in this
case, as this is automatically taken care of by the code generator. In serial,
the parallel reduction does nothing.
With this machinery, we are able to write complex equations to solve almost any SPH problem. A user can easily define a new equation and instantiate the equation in the list of equations to be passed to the application. It is often easiest to look at the many existing equations in PySPH and learn the general patterns.
If you wish to use adaptive time stepping, see the code
pysph.sph.integrator.Integrator
. The integrator uses information
from the arrays dt_cfl
, dt_force
, and dt_visc
in each of the
particle arrays to determine the most suitable time step.
For a more focused discussion on how you should write equations, please see Writing equations.
Writing the Integrator¶
The integrator and stepper code are similar to the equations: they are written in pure Python, and Cython code is automatically generated from them. The simplest integrator is the Euler integrator which looks like this:
class EulerIntegrator(Integrator):
    def one_timestep(self, t, dt):
        self.initialize()
        self.compute_accelerations()
        self.stage1()
        self.do_post_stage(dt, 1)
Note that in this case the integrator only needs to implement one timestep
using the one_timestep
method above. The initialize
and stage
methods need to be implemented in stepper classes which perform the actual
stepping of the values. Here is the stepper for the Euler integrator:
class EulerStep(IntegratorStep):
    def initialize(self):
        pass

    def stage1(self, d_idx, d_u, d_v, d_w, d_au, d_av, d_aw, d_x, d_y,
               d_z, d_rho, d_arho, dt=0.0):
        d_u[d_idx] += dt*d_au[d_idx]
        d_v[d_idx] += dt*d_av[d_idx]
        d_w[d_idx] += dt*d_aw[d_idx]
        d_x[d_idx] += dt*d_u[d_idx]
        d_y[d_idx] += dt*d_v[d_idx]
        d_z[d_idx] += dt*d_w[d_idx]
        d_rho[d_idx] += dt*d_arho[d_idx]
As can be seen, the general structure is very similar to how equations are
written, in that the functions take an arbitrary number of arguments and
these are set automatically. The value of dt
is also provided automatically when the methods are
called.
It is important to note that if there are additional variables to be stepped in addition to these standard ones, you must write your own stepper. Currently, only certain steppers are supported by the framework. Take a look at the Integrator related modules for more examples.
Simulating periodicity¶
PySPH provides a simplistic implementation for problems with periodicity. The
pysph.base.nnps_base.DomainManager
is used to specify this. To use
this in an application simply define a method as follows:
# ...
from pysph.base.nnps import DomainManager

class TaylorGreen(Application):
    def create_domain(self):
        return DomainManager(
            xmin=0.0, xmax=1.0, ymin=0.0, ymax=1.0,
            periodic_in_x=True, periodic_in_y=True
        )
# ...
This is a 2D example but something similar can be done in 3D. How this works is that PySPH will automatically copy the appropriate layer of the particles from each side of the domain and create “Ghost” particles (these are not “real” particles). The properties of the particles will also be copied but this is done before any accelerations are computed. Note that this implies that the real particles should be created carefully so as to avoid two particles being placed at the same location.
For example in the above example, the domain is defined in the unit square
with one corner at the origin and the other at (1,1). If we place any
particles exactly at \(x=0.0\) they will be copied over to 1.0 and if we
place any particles at \(x=1.0\) they will be copied to \(x=0\). This
will mean that there will be one real particle at 0 and a copy from 1.0 as
well at the same location. It is therefore important to initialize the
particles starting at dx/2 and going all the way up to 1.0 - dx/2 so as to get
a uniform distribution of particles without any repetitions. It is important
to remember that the periodic particles will be “ghost” particles and so any
equations that set properties like pressure should be in a group with
real=False
.
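For instance, with an assumed spacing dx, one could generate a uniform distribution of particles in the unit square above like this (a sketch; the variable names are illustrative):

```python
import numpy as np

dx = 0.05  # illustrative particle spacing
# Start at dx/2 and stop short of 1.0 so that no particle sits exactly
# on a periodic boundary; the ghost copies from the far side then
# complete the uniform distribution without repeated positions.
x, y = np.mgrid[dx / 2:1.0:dx, dx / 2:1.0:dx]
x, y = x.ravel(), y.ravel()
```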
Writing equations¶
This document puts together all the essential information on how to write equations. We assume that you have already read the section The PySPH framework. Some information is repeated from there as well.
The PySPH equations are written in a very restricted way. The reason for this is that if you do follow the suggestions and the conventions below you will benefit from:
 a high-performance serial implementation.
 support for using your equations with OpenMP.
 support for running on a GPU.
These are the main motivations for the severe restrictions we impose when you write your equations.
Overview¶
PySPH takes the equations you write and converts them on the fly to a high-performance implementation suitable for the particular backend you request.
It is important to understand the overall structure of how the equations are
used when the high-performance code is generated. Let us look at the different
methods of a typical Equation
subclass:
class YourEquation(Equation):
    def __init__(self, dest, sources):
        # Overload this only if you need to pass additional constants.
        # Otherwise, there is no need to override __init__.

    def py_initialize(self, dst, t, dt):
        # Called once per destination array before initialize.
        # This is a pure Python function and is not translated.

    def initialize(self, d_idx, ...):
        # Called once per destination particle before loop.

    def initialize_pair(self, d_idx, d_*, s_*):
        # Called once per destination particle for each source.
        # Can access all source arrays. Does not have
        # access to neighbor information.

    def loop_all(self, d_idx, ..., NBRS, N_NBRS, ...):
        # Called once before the loop and can be used
        # for non-pairwise interactions as one can pass the neighbors
        # for particle d_idx.

    def loop(self, d_idx, s_idx, ...):
        # Loop over neighbors for all sources;
        # called once for each pair of particles!

    def post_loop(self, d_idx, ...):
        # Called after all looping is done.

    def reduce(self, dst, t, dt):
        # Called once for the destination array.
        # Any Python code can go here.

    def converged(self):
        # Return > 0 for convergence, < 0 for lack of convergence.
It is easier to understand this if we take a specific example. Let us say, we
have a case where we have two particle arrays 'fluid', 'solid'
. Let us say
the equation is used as YourEquation(dest='fluid', sources=['fluid',
'solid'])
. Now given this context, let us see what happens when this
equation is used. What happens is as follows:
- for each destination particle array ('fluid' in this case), the py_initialize method is called and is passed the destination particle array, t and dt (similar to reduce). This is a pure Python function so you can do what you want here, including importing any Python code and running anything you want. The code is NOT transpiled into C/OpenCL/CUDA.
- for each fluid particle, the initialize method is called with the required arrays.
- for each fluid particle, the initialize_pair method is called while having access to all the fluid arrays.
- the fluid neighbors for each fluid particle are found and can be passed en masse to the loop_all method. One can pass NBRS, which is an array of unsigned ints with indices to the neighbors in the source particles. N_NBRS is the number of neighbors (an integer). This method is ideal for any non-pairwise or more complex computations.
- the fluid neighbors for each fluid particle are found and for each pair, the loop method is called with the required properties/values.
- for each fluid particle, the initialize_pair method is called while having access to all the solid arrays.
- the solid neighbors for each fluid particle are found and for each pair, the loop method is called with the required properties/values.
- for each fluid particle, the post_loop method is called with the required properties.
- if a reduce method exists, it is called for the destination (only once, not once per particle). It is passed the destination particle array and the time and timestep. It is transpiled when you are using Cython but is a pure Python function when you run this via OpenCL or CUDA.
The initialize, initialize_pair, loop_all, loop, post_loop
methods all may
be called in separate threads (both on CPU/GPU) depending on the
implementation of the backend.
It is possible to set a scalar value in the equation as an instance attribute,
i.e. by setting self.something = value
but remember that this is just one
value for the equation. This value must also be initialized in the
__init__
method. Also make sure that the attributes are public and not
private (i.e. do not start with an underscore). There is only one equation
instance used in the code, not one equation per thread or particle. So if you
wish to calculate a temporary quantity for each particle, you should create a
separate property for it and use that instead of assuming that the initialize
and loop functions run in serial. They do not run in serial when you use
OpenMP or OpenCL. So do not create temporary arrays inside the equation for
this sort of thing. In general, if you need a constant per destination array,
add it as a constant to the particle array. Also note that you can add
properties that have strides (see Learning the ropes and look for
“stride”).
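As a sketch of this scalar-attribute pattern (the equation name and the damping constant are hypothetical; a minimal stand-in base class is used here so the snippet runs without PySPH installed, where one would instead use from pysph.sph.equation import Equation):

```python
class Equation:
    # Minimal stand-in so this sketch is self-contained; the real base
    # class comes from pysph.sph.equation.
    def __init__(self, dest, sources):
        self.dest, self.sources = dest, sources


class LinearDamping(Equation):
    """Hypothetical equation: applies a damping acceleration -c*u."""
    def __init__(self, dest, sources, damping=0.1):
        # Public scalar attribute, initialized in __init__: one value
        # shared by the whole equation instance, not one per thread.
        self.damping = damping
        super().__init__(dest, sources)

    def initialize(self, d_idx, d_au, d_u):
        d_au[d_idx] = -self.damping * d_u[d_idx]
```

A per-particle temporary, by contrast, should go into an extra particle property (possibly with a stride), not into an instance attribute like self.damping.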
Now, if the group containing the equation has iterate
set to True, then
the group will be iterated until convergence is attained for all the equations
(or subgroups) contained by it. The converged
method is called once and
not once per particle.
If you wish to compute something like a convergence condition, like the maximum error or the average error, you should do it in the reduce method.
The reduce function is called only once every time the accelerations are
evaluated. As such you may write any Python code there. The only caveat is
that when using the CPU, one will have to declare any variables used a little
carefully – ideally declare them as declare('object'). On the GPU, this function is not called via OpenCL and
. On the GPU, this function is not called via OpenCL and
is a pure Python function.
Understanding Groups a bit more¶
Equations can be grouped together and it is important to understand how
exactly this works. Let us take a simple example of a Group
with
two equations. We illustrate two simple equations with pseudocode:
class Eq1(Equation):
    def initialize(self, ...):
        # ...
    def loop(...):
        # ...
    def post_loop(...):
        # ...
Let us say that Eq2
has a similar structure with respect to its methods.
Let us say we have a group defined as:
Group(
    equations=[
        Eq1(dest='fluid', sources=['fluid', 'solid']),
        Eq2(dest='fluid', sources=['fluid', 'solid']),
    ]
)
When this is expanded out and used inside PySPH, this is what happens in terms of pseudocode:
# Instances of Eq1 and Eq2.
eq1 = Eq1(...)
eq2 = Eq2(...)

for d_idx in range(n_destinations):
    eq1.initialize(...)
    eq2.initialize(...)

# Sources from 'fluid'
for d_idx in range(n_destinations):
    for s_idx in NEIGHBORS('fluid', d_idx):
        eq1.loop(...)
        eq2.loop(...)

# Sources from 'solid'
for d_idx in range(n_destinations):
    for s_idx in NEIGHBORS('solid', d_idx):
        eq1.loop(...)
        eq2.loop(...)

for d_idx in range(n_destinations):
    eq1.post_loop(...)
    eq2.post_loop(...)
That is, all the initialization is done for each equation in sequence,
followed by the loops for each set of sources, fluid and solid in this case.
In the end, the post_loop
is called for the destinations. The equations
are therefore merged inside a group and entirely completed before the next
group is taken up. Note that the order of the equations will be exactly as
specified in the group.
When real=False is used, the non-local destination particles
are also iterated over. real=True by default, which means that only
by default, which means that only
destination particles whose tag
property is local or equal to 0 are
operated on. Otherwise, when real=False
, remote and ghost particles are
also operated on. It is important to note that this does not affect the source
particles. That is, ALL source particles influence the destinations
whether the sources are local, remote or ghost particles. The real
keyword
argument only affects the destination particles and not the sources.
Note that if you have different destinations in the same group, they are
internally split up into different sets of loops for each destination and that
these are done separately. I.e. one destination is fully processed and then
the next is considered. So if we had for example, both fluid
and solid
destinations, they would be processed separately. For example, let's say you had this:
Group(
    equations=[
        Eq1(dest='fluid', sources=['fluid', 'solid']),
        Eq1(dest='solid', sources=['fluid', 'solid']),
        Eq2(dest='fluid', sources=['fluid', 'solid']),
        Eq2(dest='solid', sources=['fluid', 'solid']),
    ]
)
This would internally be equivalent to the following:
[
    Group(
        equations=[
            Eq1(dest='fluid', sources=['fluid', 'solid']),
            Eq2(dest='fluid', sources=['fluid', 'solid']),
        ]
    ),
    Group(
        equations=[
            Eq1(dest='solid', sources=['fluid', 'solid']),
            Eq2(dest='solid', sources=['fluid', 'solid']),
        ]
    )
]
That is, the fluid particles are processed first and then the solid particles. Obviously, the first form is a lot more compact.
While it may appear that the PySPH equations and groups are fairly complex, they actually do a lot of work for you and allow you to express the interactions in a rather compact form.
When debugging it sometimes helps to look at the generated log file which will also print out the exact equations and groups that are being used.
Conventions followed¶
There are a few important conventions that are to be followed when writing the
equations. When passing arguments to the initialize, loop, post_loop methods:
- d_* indicates a destination array.
- s_* indicates a source array.
- d_idx and s_idx represent the destination and source index respectively.
- Each function can take any number of arguments as required; these are automatically supplied internally when the application runs.
- All the standard math symbols from math.h are also available.
The following precomputed quantities are available and may be passed into any equation:
- HIJ = 0.5*(d_h[d_idx] + s_h[s_idx])
- XIJ[0] = d_x[d_idx] - s_x[s_idx], XIJ[1] = d_y[d_idx] - s_y[s_idx], XIJ[2] = d_z[d_idx] - s_z[s_idx]
- R2IJ = XIJ[0]*XIJ[0] + XIJ[1]*XIJ[1] + XIJ[2]*XIJ[2]
- RIJ = sqrt(R2IJ)
- WIJ = KERNEL(XIJ, RIJ, HIJ)
- WJ = KERNEL(XIJ, RIJ, s_h[s_idx])
- WI = KERNEL(XIJ, RIJ, d_h[d_idx])
- RHOIJ = 0.5*(d_rho[d_idx] + s_rho[s_idx])
- RHOIJ1 = 1.0/RHOIJ
- DWIJ : GRADIENT(XIJ, RIJ, HIJ, DWIJ)
- DWJ : GRADIENT(XIJ, RIJ, s_h[s_idx], DWJ)
- DWI : GRADIENT(XIJ, RIJ, d_h[d_idx], DWI)
- VIJ[0] = d_u[d_idx] - s_u[s_idx], VIJ[1] = d_v[d_idx] - s_v[s_idx], VIJ[2] = d_w[d_idx] - s_w[s_idx]
- EPS = 0.01 * HIJ * HIJ
- SPH_KERNEL : the kernel being used; one can call the kernel as SPH_KERNEL.kernel(xij, rij, h), the gradient as SPH_KERNEL.gradient(...), SPH_KERNEL.gradient_h(...), etc. The kernel is any one of the instances of the kernel classes defined in pysph.base.kernels.
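To make these conventions concrete, here is a sketch of classic summation density written with just the precomputed WIJ (a minimal stand-in base class replaces pysph.sph.equation.Equation so the snippet is self-contained):

```python
class Equation:
    # Stand-in for pysph.sph.equation.Equation to keep the sketch runnable.
    def __init__(self, dest, sources):
        self.dest, self.sources = dest, sources


class SummationDensitySketch(Equation):
    def initialize(self, d_idx, d_rho):
        d_rho[d_idx] = 0.0

    def loop(self, d_idx, d_rho, s_idx, s_m, WIJ):
        # WIJ = KERNEL(XIJ, RIJ, HIJ) is supplied automatically by PySPH.
        d_rho[d_idx] += s_m[s_idx]*WIJ
```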
In addition, if one requires the current time or the timestep in an equation, the following may be passed into any of the methods of an equation:
- t : the current time.
- dt : the current timestep.
For the loop_all
method and the loop
method, one may also pass the
following:
- NBRS : an array of unsigned ints with neighbor indices.
- N_NBRS : an integer denoting the number of neighbors for the current destination particle with index d_idx.
Note
Note that all standard functions and constants in math.h
are available
for use in the equations. The value of \(\pi\) is available as
M_PI
. Please avoid using functions from numpy
as these are Python
functions and are slow. They also will not allow PySPH to be run with
OpenMP. Similarly, do not use functions or constants from sympy
and
other libraries inside the equation methods as these will significantly
slow down your code.
In addition, these constants from the math library are available:
M_E : value of e
M_LOG2E : value of log2(e)
M_LOG10E : value of log10(e)
M_LN2 : value of log_e(2)
M_LN10 : value of log_e(10)
M_PI : value of pi
M_PI_2 : value of pi/2
M_PI_4 : value of pi/4
M_1_PI : value of 1/pi
M_2_PI : value of 2/pi
M_2_SQRTPI : value of 2/sqrt(pi)
M_SQRT2 : value of sqrt(2)
M_SQRT1_2 : value of sqrt(1/2)
In an equation, any undeclared variables are automatically declared to be
doubles in the high-performance Cython code that is generated. In addition
one may declare a temporary variable to be a matrix
or a cPoint
by
writing:
vec, vec1 = declare("matrix(3)", 2)
mat = declare("matrix((3,3))")
i, j = declare('int')
When the Cython code is generated, this gets translated to:
cdef double vec[3], vec1[3]
cdef double mat[3][3]
cdef int i, j
One can also declare any valid ctype using the same approach, for example if
one desires a long
data type, one may use i = declare("long")
.
Note that the additional (optional) argument in the declare specifies the
number of variables. While this is ignored during transpilation, it is
useful when writing functions in pure Python: the
compyle.api.declare() function provides a pure Python
implementation so that the code works both when compiled as well as
when run from pure Python. For example:
i, j = declare("int", 2)
In this case, the declare function call returns two integers so that the code runs correctly in pure Python also. The second argument is optional and defaults to 1. If we defined a matrix, then this returns two NumPy arrays of the appropriate shape.
>>> declare("matrix(2)", 2)
(array([ 0., 0.]), array([ 0., 0.]))
Thus the code one writes can be used in pure Python and can also be safely transpiled into other languages.
Writing the reduce method¶
One may also perform any reductions on properties. Consider a trivial example
of calculating the total mass and the maximum u
velocity in the following
equation:
class FindMaxU(Equation):
    def reduce(self, dst, t, dt):
        m = serial_reduce_array(dst.m, 'sum')
        max_u = serial_reduce_array(dst.u, 'max')
        dst.total_mass[0] = parallel_reduce_array(m, 'sum')
        dst.max_u[0] = parallel_reduce_array(max_u, 'max')
where:
dst : refers to a destination ParticleArray.
t, dt : are the current time and timestep respectively.
serial_reduce_array : is a special function provided that performs reductions correctly in serial. It currently supports sum, prod, max and min operations. See pysph.base.reduce_array.serial_reduce_array(). There is also a pysph.base.reduce_array.parallel_reduce_array() which is to be used to reduce an array across processors. Using parallel_reduce_array is expensive as it is an all-to-all communication. One can minimize this cost by collecting several values into a single array and reducing that array in one call.
We recommend that for any kind of reductions one always use the
serial_reduce_array
function and the parallel_reduce_array
inside a
reduce
method. One should not worry about parallel/serial modes in this
case as this is automatically taken care of by the code generator. In serial,
the parallel reduction does nothing.
With this machinery, we are able to write complex equations to solve almost any SPH problem. A user can easily define a new equation and instantiate the equation in the list of equations to be passed to the application. It is often easiest to look at the many existing equations in PySPH and learn the general patterns.
Adaptive timesteps¶
There are a couple of ways to use adaptive timesteps. The first is to compute
a required timestep directly per-particle in a particle array property called
dt_adapt
. The minimum value of this array across all particle arrays is
used to set the timestep directly. This is the easiest way to set the adaptive
timestep.
If the dt_adapt
parameter is not set one may also use standard velocity,
force, and viscosity based parameters. The integrator uses information from
the arrays dt_cfl
, dt_force
, and dt_visc
in each of the particle
arrays to determine the most suitable time step. This is done using the
following approach. The minimum smoothing parameter h
is found as
hmin
. Let the CFL number be given as cfl
. For the velocity criterion,
the maximum value of dt_cfl
is found and then a suitable timestep is found
as:
dt_min_vel = hmin/max(dt_cfl)
For the force based criterion we use the following:
dt_min_force = sqrt(hmin/sqrt(max(dt_force)))
for the viscosity we have:
dt_min_visc = hmin/max(dt_visc)
Then the correct timestep is found as:
dt = cfl*min(dt_min_vel, dt_min_force, dt_min_visc)
The cfl
is set to 0.3 by default. One may pass cfl
to the
application to change the CFL. Note that when the dt_adapt
property is
used the CFL has no effect as we assume that the user will compute a suitable
value based on their requirements.
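The selection logic above can be sketched as follows (illustrative only; the actual implementation lives in pysph.sph.integrator.Integrator, and the function name here is hypothetical):

```python
from math import sqrt

def compute_timestep(hmin, dt_cfl_max, dt_force_max, dt_visc_max, cfl=0.3):
    # Velocity (CFL), force and viscosity based criteria from above;
    # each *_max argument is the maximum of the corresponding array.
    dt_min_vel = hmin / dt_cfl_max
    dt_min_force = sqrt(hmin / sqrt(dt_force_max))
    dt_min_visc = hmin / dt_visc_max
    return cfl * min(dt_min_vel, dt_min_force, dt_min_visc)
```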
The pysph.sph.integrator.Integrator
class code may be instructive
to look at if you are wondering about any particular details.
Illustration of the loop_all method¶
The loop_all is a powerful method. To illustrate it, we show how one can use it to perform what the loop method usually does.
class LoopAllEquation(Equation):
    def initialize(self, d_idx, d_rho):
        d_rho[d_idx] = 0.0

    def loop_all(self, d_idx, d_x, d_y, d_z, d_rho, d_h,
                 s_m, s_x, s_y, s_z, s_h,
                 SPH_KERNEL, NBRS, N_NBRS):
        i = declare('int')
        s_idx = declare('long')
        xij = declare('matrix(3)')
        rij = 0.0
        sum = 0.0
        for i in range(N_NBRS):
            s_idx = NBRS[i]
            xij[0] = d_x[d_idx] - s_x[s_idx]
            xij[1] = d_y[d_idx] - s_y[s_idx]
            xij[2] = d_z[d_idx] - s_z[s_idx]
            rij = sqrt(xij[0]*xij[0] + xij[1]*xij[1] + xij[2]*xij[2])
            sum += s_m[s_idx]*SPH_KERNEL.kernel(xij, rij, 0.5*(s_h[s_idx] + d_h[d_idx]))
        d_rho[d_idx] += sum
This seems a bit complex but let us look at what is being done. initialize
is called once per particle and each of their densities is set to zero. Then
when loop_all
is called it is called once per destination particle (unlike
loop
which is called pairwise for each destination and source particle).
The loop_all
is passed arrays as is typical of most equations but is also
passed the SPH_KERNEL
itself, the list of neighbors, and the number of
neighbors.
The code first declares the variables i, s_idx as an integer and a long,
and then xij as a 3-element array. These are important for performance in
the generated code. The code then loops over all neighbors and computes the
summation density. Notice how the kernel is computed using
SPH_KERNEL.kernel(...)
. Notice also how the source index, s_idx
is found
from the neighbors.
The loop_all code above does exactly what the following loop method does
in a single line.
def loop(self, d_idx, d_rho, s_m, s_idx, WIJ):
    d_rho[d_idx] += s_m[s_idx]*WIJ
However, loop
is only called pairwise and there are times when we want to
do more with the neighbors. For example if we wish to setup a matrix and solve
it per particle, we could do it in loop_all
efficiently. This is also very
useful for non-pairwise interactions which are common in other particle
methods like molecular dynamics.
Calling user-defined functions from equations¶
Sometimes we may want to call a user-defined function from the equations. Any pure Python function defined using the same conventions as listed above (with suitable type hints) can be called from the equations. Here is a simple example from one of the tests in PySPH.
def helper(x=1.0):
    return x*1.5

class SillyEquation(Equation):
    def initialize(self, d_idx, d_au, d_m):
        d_au[d_idx] += helper(d_m[d_idx])

    def _get_helpers_(self):
        return [helper]
Notice that initialize
is calling the helper
function defined above.
The helper function has a default argument to indicate to our code generation
that x is a floating point number. We could have also set the default argument
to a list and this would then be passed an array of values. The
_get_helpers_
method returns a list of functions and these functions are
automatically transpiled into high-performance C or OpenCL/CUDA code and can
be called from your equations.
Here is a more complex helper function.
def trace(x=[1.0, 1.0], nx=1):
    i = declare('int')
    result = 0.0
    for i in range(nx):
        result += x[i]
    return result

class SillyEquation(Equation):
    def loop(self, d_idx, d_au, d_m, XIJ):
        d_au[d_idx] += trace(XIJ, 3)

    def _get_helpers_(self):
        return [trace]
The trace function effectively is converted into a function with signature
double trace(double* x, int nx)
and thus can be called with any
onedimensional array.
Calling arbitrary Python functions from a Group¶
Sometimes, you may need to implement something that is hard to write (at least
initially) with the constraints that PySPH places. For example if you need to
implement an algorithm that requires more complex data structures and you want
to do it easily in Python. There are ways to call arbitrary Python code from
the application already but sometimes you need to do this during every
acceleration evaluation. To support this the Group
class supports
two additional keyword arguments called pre
and post
. These can be any
Python callables that take no arguments. Any callable passed as pre will be
called before any equation-related code is executed, and post will be
executed after the entire group is finished. If the group is iterated, these
functions are called repeatedly.
Now these functions are pure Python functions so you may choose to do anything in them. These are not called within an OpenMP context and if you are using the OpenCL or CUDA backends again this will simply be a Python function call that has nothing to do with the particular backend. However, since it is arbitrary Python, you can choose to implement the code using any approach you choose to do. This should be flexible enough to customize PySPH greatly.
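A minimal sketch of the calling convention (the Group import and equation list are elided; the two callables are plain Python functions taking no arguments):

```python
calls = []

def pre():
    # Runs before any equation-related code in the group executes.
    calls.append('pre')

def post():
    # Runs after the entire group has finished.
    calls.append('post')

# In an application one would then pass these to the group, e.g.:
# Group(equations=[...], pre=pre, post=post)
```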
Writing integrators¶
Similar rules apply when writing an IntegratorStep
. One can create
a multi-stage integrator as follows:
class MyStepper(IntegratorStep):
    def initialize(self, d_idx, d_x):
        # ...
    def py_stage1(self, dst, t, dt):
        # ...
    def stage1(self, d_idx, d_x, d_ax):
        # ...
    def py_stage2(self, dst, t, dt):
        # ...
    def stage2(self, d_idx, d_x, d_ax):
        # ...
In this case, the initialize, stage1, stage2 methods are transpiled and
called, but py_stage1, py_stage2 are pure Python functions called
before the respective stage functions. Defining the
py_stage1 or py_stage2 methods is optional. If you have defined them,
they will be called automatically. They are passed the destination particle
array, the current time, and the current timestep.
Different equations for different stages¶
By default, when one creates equations the implicit assumption is that the
same right-hand side is evaluated at each stage of the integrator. However,
some schemes require that one solve different equations for different
integrator stages. PySPH does support this but to do this when one creates
equations in the application, one should return an instance of
pysph.sph.equation.MultiStageEquations
. For example:
def create_equations(self):
    # ...
    eqs = [
        [Eq1(dest='fluid', sources=['fluid'])],
        [Eq2(dest='fluid', sources=['fluid'])]
    ]
    from pysph.sph.equation import MultiStageEquations
    return MultiStageEquations(eqs)
In the above, note that each element of eqs is a list; it could also have
been a group. Each item of the given equations is treated as a separate
collection of equations to be used. The use of the
pysph.sph.equation.MultiStageEquations
tells PySPH that multiple
equation sets are being used.
Now that we have this, how do we call the right accelerations at the right
times? We do this by subclassing the
pysph.sph.integrator.Integrator
. We show a simple example from our
test suite to illustrate this:
from pysph.sph.integrator import Integrator

class MyIntegrator(Integrator):
    def one_timestep(self, t, dt):
        self.compute_accelerations(0)
        # Equivalent to self.compute_accelerations()
        self.stage1()
        self.do_post_stage(dt, 1)

        self.compute_accelerations(1, update_nnps=False)
        self.stage2()
        self.update_domain()
        self.do_post_stage(dt, 2)
Note that the compute_accelerations
method takes two arguments, the
index
(which defaults to zero) and update_nnps
which defaults to
True
. A simple integrator with a single RHS would simply call
self.compute_accelerations()
. However, in the above, the first set of
equations is called first, and then for the second stage the second set of
equations is evaluated but without updating the NNPS (handy if the particles
do not move in stage1). Note the call self.update_domain() after the
second stage: this sets up any ghost particles for periodicity when particles
have been moved, and it also updates the neighbor finder to use an appropriate
neighbor length based on the current smoothing length.
neighbor length based on the current smoothing length. If you do not need to
do this for your particular integrator you may choose not to add this. In the
above case, the domain is not updated after the first stage as the particles
have not moved.
The above illustrates how one can create more complex integrators that employ different accelerations in each stage.
Examples to study¶
The following equations provide good examples for how one could use/write the
reduce
method:
- pysph.sph.gas_dynamics.basic.SummationDensityADKE : relatively simple.
- pysph.sph.rigid_body.RigidBodyMoments : this is pretty complex.
- pysph.sph.iisph.PressureSolve : relatively straightforward.
The equations that demonstrate the converged
method are:
- pysph.sph.gas_dynamics.basic.SummationDensity : relatively simple.
- pysph.sph.iisph.PressureSolve .
Some equations that demonstrate using matrices and solving systems of equations are:
Using StarCluster with PySPH¶
StarCluster is an open source cluster-computing toolkit for Amazon’s Elastic Compute Cloud (EC2). StarCluster has been designed to simplify the process of building, configuring, and managing clusters of virtual machines on Amazon’s EC2 cloud.
Using StarCluster along with PySPH’s MPI support, you can run PySPH code on multiple instances in parallel and complete simulations faster.
Configuring StarCluster¶
Creating Configuration File¶
After StarCluster has been installed, the next step is to update your StarCluster configuration
$ starcluster help
StarCluster  (http://star.mit.edu/cluster)
Software Tools for Academics and Researchers (STAR)
Please submit bug reports to starcluster@mit.edu
cli.py:87  ERROR  config file /home/user/.starcluster/config does not exist
Options:

[1] Show the StarCluster config template
[2] Write config template to /home/user/.starcluster/config
[q] Quit
Please enter your selection:
Select the second option by typing 2 and pressing enter. This will give you a template to use to create a configuration file containing your AWS credentials, cluster settings, etc. The next step is to customize this file using your favorite text editor
$ emacs ~/.starcluster/config
Updating AWS Credentials¶
This file is commented with example “cluster templates”. A cluster template defines a set of configuration settings used to start a new cluster. The config template provides a smallcluster template that is ready to go out-of-the-box. However, first, you must fill in your AWS credentials and keypair info
[aws info]
aws_access_key_id = # your aws access key id here
aws_secret_access_key = # your secret aws access key here
aws_user_id = # your 12digit aws user id here
To find your AWS User ID, see Finding your Account Canonical User ID
You can get your root user credentials from the Security Credentials page on AWS
Management Console. However, root credentials allow for full access to all
resources on your account and it is recommended that you create separate IAM
(Identity and Access Management) user credentials for managing access to your
EC2 resources. To create IAM user credentials, see Creating IAM Users
(Console)
For StarCluster, create an IAM user with the EC2 Full Access
permission.
If you don’t already have a keypair, you can generate one using StarCluster by running:
$ starcluster createkey mykey -o ~/.ssh/mykey.rsa
This will create a keypair called mykey on Amazon EC2 and save the private key to ~/.ssh/mykey.rsa. Once you have a key the next step is to fill in your keypair info in the StarCluster config file
[key mykey]
key_location = ~/.ssh/mykey.rsa
Also, update the following information for the smallcluster configuration:
[cluster smallcluster]
..
KEYNAME = mykey
..
Now that the basic configuration for StarCluster is complete, you can launch instances directly using StarCluster. However, note that EC2 charges are not pro rata: you will be charged for an entire hour even if you run an instance for only a few minutes. Before attempting to deploy an instance/cluster you can modify the following information in your cluster configuration:
[cluster smallcluster]
..
NODE_INSTANCE_TYPE=t2.micro
NODE_IMAGE_ID=ami-6b211202
..
Now you can launch an EC2 instance using:
$ starcluster start smallcluster
You can SSH into the master node by running:
$ starcluster sshmaster smallcluster
You can transfer files to and from the nodes using the put and get commands as:
$ starcluster put /path/to/local/file/or/dir /remote/path/
$ starcluster get /path/to/remote/file/or/dir /local/path/
Finally, you can terminate the instance by running:
$ starcluster terminate smallcluster
Setting up PySPH for StarCluster¶
Most of the public AMIs currently distributed for StarCluster are outdated and have reached their end of life. To ensure a hassle-free experience while further extending the AMI and installing packages, you can use the 64-bit Ubuntu 16.04 AMI with AMI ID ami-01fdc27a, which has most StarCluster dependencies and PySPH dependencies installed.
Base AMI for PySPH [Optional]¶
The ami.sh file, which can be found in the starcluster directory in the PySPH repository, automatically launches a vanilla 64-bit Ubuntu 16.04 instance, installs any necessary StarCluster and PySPH dependencies and saves an AMI with this configuration to your AWS account:
$ ./ami.sh
The AMI ID of the generated image is stored in AMI_ID. You can also see a list of the AMIs currently in your AWS account by running:
$ starcluster listimages
Cluster configuration for PySPH¶
Modify your StarCluster configuration file with the following information. Launching a cluster with this configuration will start 2 t2.micro instances, install the latest version of PySPH on each and keep track of the nodes loaded in /home/pysph/PYSPH_HOSTS:
[cluster pysphcluster]
KEYNAME = mykey
CLUSTER_SIZE = 2 # Number of nodes in cluster
CLUSTER_USER = pysph
CLUSTER_SHELL = bash
NODE_IMAGE_ID = ami-01fdc27a # Or AMI ID for base AMI generated previously
NODE_INSTANCE_TYPE = t2.micro # EC2 Instance type
PLUGINS = pysph_install
[plugin pysph_install]
setup_class = sc_pysph.PySPHInstaller
Also, copy sc_pysph.py from the starcluster directory to ~/.starcluster/plugins/
Running PySPH scripts on a cluster¶
You can start the cluster configured previously by running
$ starcluster start -c pysphcluster cluster
Assuming your PySPH file cube.py is in the local home directory, you can first transfer this file to the cluster:
$ starcluster put -u pysph cluster ~/cube.py /home/pysph/cube.py
Then run the PySPH code as:
$ starcluster sshmaster -u pysph cluster "mpirun -n 2 --hostfile ~/PYSPH_HOSTS python ~/cube.py"
Finally, you can get the output generated by PySPH back by running:
$ starcluster get -u pysph cluster /home/pysph/cube_output .
Using the PySPH library¶
In this document, we describe the fundamental data structures for working with particles in PySPH. Take a look at A more detailed tutorial for a tutorial introduction to some of the examples. For the experienced user, take a look at The PySPH framework for some of the internal codegeneration details and if you want to extend PySPH for your application.
Working With Particles¶
As an object oriented framework for particle methods, PySPH provides convenient data structures to store and manipulate collections of particles. These can be constructed from within Python and are fully compatible with NumPy arrays. We begin with a brief description of the basic data structures for arrays.
Carrays¶
The cyarray.carray.BaseArray class provides a typed array data structure called CArray. These are used throughout PySPH and are fundamentally very similar to NumPy arrays. The following named types are supported:
 cyarray.carray.UIntArray (32 bit unsigned integers)
 cyarray.carray.IntArray (32 bit signed integers)
 cyarray.carray.LongArray (64 bit signed integers)
 cyarray.carray.DoubleArray (64 bit floating point numbers)
Some simple commands to work with BaseArrays from the interactive shell are given below
>>> import numpy
>>> from cyarray.carray import DoubleArray
>>> array = DoubleArray(10) # array of doubles of length 10
>>> array.set_data( numpy.arange(10) ) # set the data from a NumPy array
>>> array.get(3) # get the value at a given index
>>> array.set(5, 1.0) # set the value at an index to a value
>>> array[3] # standard indexing
>>> array[5] = 1.0 # standard indexing
ParticleArray¶
In PySPH, a collection of BaseArrays makes up what is called a ParticleArray. This is the main data structure that is used to represent particles and can be created from NumPy arrays like so:
>>> import numpy
>>> from pysph.base.utils import get_particle_array
>>> x, y = numpy.mgrid[0:1:0.1, 0:1:0.1] # create some data
>>> x = x.ravel(); y = y.ravel() # flatten the arrays
>>> pa = get_particle_array(name='array', x=x, y=y) # create the particle array
In the above, the helper function
pysph.base.utils.get_particle_array()
will instantiate and return a
ParticleArray
with properties x and y set from given NumPy
arrays. In general, a ParticleArray
can be instantiated with an
arbitrary number of properties. Each property is stored internally as a
cyarray.carray.BaseArray
of the appropriate type.
By default, every ParticleArray
returned using the helper
function will have the following properties:
 x, y, z : Position coordinates (doubles)
 u, v, w : Velocity (doubles)
 h, m, rho : Smoothing length, mass and density (doubles)
 au, av, aw: Accelerations (doubles)
 p : Pressure (doubles)
 gid : Unique global index (unsigned int)
 pid : Processor id (int)
 tag : Tag (int)
The role of the particle properties like positions, velocities and other variables should be clear. These define either the kinematic or dynamic properties associated with SPH particles in a simulation.
In addition to scalar properties, particle arrays also support “strided” properties i.e. associating multiple elements per particle. For example:
>>> pa.add_property('A', data=2.0, stride=2)
>>> pa.A
This will add a new property with name 'A' which has 2 elements associated with each particle. When one adds/removes particles this is taken into account automatically. When accessing such a property, one has to be careful, though, as the underlying array is still stored as a one-dimensional array.
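Since a strided property is stored flat, a reshaped NumPy view is a convenient way to read or write it per particle. The following is an illustrative sketch using plain NumPy arrays to show the layout; it does not use the ParticleArray API itself:

```python
import numpy as np

n_particles, stride = 4, 2
# Flat storage, mimicking a strided property with 2 elements per particle,
# each initialized to 2.0.
A = np.full(n_particles * stride, 2.0)

# View the flat array as (n_particles, stride) to index per particle.
A2 = A.reshape(n_particles, stride)
A2[1] = [3.0, 4.0]   # set both components of particle 1

print(A2[1])         # components of particle 1
print(A)             # the underlying flat array is updated in place
```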
PySPH introduces a global identifier for a particle which is required to be unique for that particle. This is represented with the property gid which is of type unsigned int. This property is used in the parallel load balancing algorithm with Zoltan.
The property pid for a particle is an integer that is used to identify the processor to which the particle is currently assigned.
The property tag is an integer that is used for any other identification. For example, we might want to mark all boundary particles with the tag 100. Using this property, we can delete all such particles as
>>> pa.remove_tagged_particles(tag=100)
This gives us a very flexible way to work with particles. Another way
of deleting/extracting particles is by providing the indices (as a
list, NumPy array or a LongArray
) of the particles to
be removed:
>>> indices = [1,3,5,7]
>>> pa.remove_particles( indices )
>>> extracted = pa.extract_particles(indices, props=['rho', 'x', 'y'])
A ParticleArray
can be concatenated with another array to
result in a larger array:
>>> pa.append_parray(another_array)
To set a given list of properties to zero:
>>> props = ['au', 'av', 'aw']
>>> pa.set_to_zero(props)
Properties in a particle array are automatically sized depending on the number
of particles. There are times when fixed size properties are required. For
example if the total mass or total force on a particle array needs to be
calculated, a fixed size constant can be added. This can be done by adding a
constant
to the array as illustrated below:
>>> pa.add_constant('total_mass', 0.0)
>>> pa.add_constant('total_force', [0.0, 0.0, 0.0])
>>> print(pa.total_mass, pa.total_force)
In the above, the total_mass
is a fixed DoubleArray
of length 1 and
the total_force
is a fixed DoubleArray
of length 3. These constants
will never be resized as one adds or removes particles to/from the particle
array. The constants may be used inside of SPH equations just like any other
property.
The constants can also be set in the constructor of the ParticleArray by passing a dictionary of constants as a constants keyword argument. For example:
>>> pa = ParticleArray(
... name='test', x=x,
... constants=dict(total_mass=0.0, total_force=[0.0, 0.0, 0.0])
... )
Take a look at ParticleArray
reference documentation for
some of the other methods and their uses.
Nearest Neighbour Particle Searching (NNPS)¶
To carry out pairwise interactions for SPH, we need to find the nearest
neighbours for a given particle within a specified interaction radius. The
NNPS
object is responsible for handling these nearest neighbour
queries for a list of particle arrays:
>>> from pysph.base import nnps
>>> pa1 = get_particle_array(...) # create one particle array
>>> pa2 = get_particle_array(...) # create another particle array
>>> particles = [pa1, pa2]
>>> nps = nnps.LinkedListNNPS(dim=3, particles=particles, radius_scale=3)
The above will create an NNPS
object that uses the classical
linked-list algorithm for nearest neighbour searches. The radius of
interaction is determined by the argument radius_scale. The bookkeeping
cells have a length of \(\text{radius_scale} \times h_{\text{max}}\),
where \(h_{\text{max}}\) is the maximum smoothing length of all
particles assigned to the local processor.
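The idea behind such cell-based bookkeeping can be sketched in pure NumPy: hash each particle into a square cell of side \(\text{radius_scale} \times h_{\text{max}}\), then restrict the pairwise distance test for a particle to candidates from its own cell and the adjacent cells. This is only an illustrative sketch of the algorithm, not the PySPH implementation:

```python
import numpy as np
from collections import defaultdict

def cell_neighbors(x, y, h, radius_scale=2.0):
    """For each particle, return indices within radius_scale * h_max of it."""
    search_radius = radius_scale * h.max()
    cell = search_radius                              # cell edge length
    keys = np.floor(np.c_[x, y] / cell).astype(int)   # cell index per particle
    bins = defaultdict(list)
    for i, (cx, cy) in enumerate(keys):
        bins[(cx, cy)].append(i)
    nbrs = []
    for i, (cx, cy) in enumerate(keys):
        cand = []
        for dx_ in (-1, 0, 1):                        # own + adjacent cells only
            for dy_ in (-1, 0, 1):
                cand.extend(bins.get((cx + dx_, cy + dy_), []))
        cand = np.array(cand)
        r = np.hypot(x[cand] - x[i], y[cand] - y[i])
        nbrs.append(cand[r < search_radius])
    return nbrs

x = np.array([0.0, 0.05, 0.9])
y = np.array([0.0, 0.0, 0.9])
h = np.array([0.05, 0.05, 0.05])
print([sorted(n.tolist()) for n in cell_neighbors(x, y, h)])
```

Because candidates come only from nearby cells, the cost stays close to linear in the number of particles for roughly uniform distributions, which is the point of the linked-list scheme.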
Note that the NNPS
classes also support caching the neighbors
computed. This is useful if one needs to reuse the same set of
neighbors. To enable this, simply pass cache=True
to the
constructor:
>>> nps = nnps.LinkedListNNPS(dim=3, particles=particles, cache=True)
Since we allow a list of particle arrays, we need to distinguish between source and destination particle arrays in the neighbor queries.
Note
A destination particle is a particle belonging to that species for which the neighbors are sought.
A source particle is a particle belonging to that species which contributes to a given destination particle.
With these definitions, we can query for nearest neighbors like so:
>>> nbrs = UIntArray()
>>> nps.get_nearest_particles(src_index, dst_index, d_idx, nbrs)
where src_index, dst_index and d_idx are integers. This will
return, for the d_idx particle of the dst_index particle array
(species), nearest neighbors from the src_index particle array
(species). Passing the src_index and dst_index every time is
repetitive so an alternative API is to call set_context
as done
below:
>>> nps.set_context(src_index=0, dst_index=0)
If the NNPS
instance is configured to use caching, then it will also
precompute the neighbors very efficiently. Once the context is set one
can get the neighbors as:
>>> nps.get_nearest_neighbors(d_idx, nbrs)
Where d_idx and nbrs are as discussed above.
If we want to recompute the data structure for a new distribution of
particles, we can call the NNPS.update()
method:
>>> nps.update()
Periodic domains¶
The constructor for the NNPS
accepts an optional argument
(DomainManager
) that is used to delimit the maximum
spatial extent of the simulation domain. Additionally, this argument
is also used to indicate the extents for a periodic domain. We
construct a DomainManager
object like so
>>> from pysph.base.nnps import DomainManager
>>> domain = DomainManager(xmin, xmax, ymin, ymax, zmin, zmax,
periodic_in_x=True, periodic_in_y=True,
periodic_in_z=False)
where xmin … zmax are floating point arguments delimiting the simulation domain and periodic_in_x,y,z are bools defining the periodic axes.
When the NNPS
object is constructed with this
DomainManager
, care is taken to create periodic ghosts for
particles in the vicinity of the periodic boundaries. These ghost
particles are given a special tag defined by
ParticleTAGS
class ParticleTAGS:
Local = 0
Remote = 1
Ghost = 2
Note
The Local tag is used for ordinary particles assigned to and owned by a given processor. This is the default tag for all particles.
Note
The Remote tag is used for ordinary particles assigned to but not owned by a given processor. Particles with this tag are typically used to satisfy neighbor queries across processor boundaries in a parallel simulation.
Note
The Ghost tag is used for particles that are created to satisfy boundary conditions locally.
Particle aligning¶
In PySPH, the ParticleArray
aligns all particles upon a
call to the ParticleArray.align_particles()
method. The
aligning is done so that all particles with the Local tag are placed
first, followed by particles with other tags.
There is no preference given to the tags other than the fact that a particle with a nonzero tag is placed after all particles with a zero (Local) tag. Intuitively, the local particles represent real particles or particles that we want to do active computation on (destination particles).
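The effect of this alignment can be sketched with a stable sort on the predicate tag != 0, which moves all Local (tag 0) particles to the front while otherwise preserving order. This is an illustrative NumPy sketch, not the PySPH implementation:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
tag = np.array([0, 2, 0, 1])

# Stable argsort on "is not Local": tag-0 entries first, relative order kept.
order = np.argsort(tag != 0, kind='stable')
x, tag = x[order], tag[order]

print(x)    # [0. 2. 1. 3.]
print(tag)  # [0 0 2 1]
```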
The data attribute ParticleArray.num_real_particles returns the
number of real or Local particles. The total number of particles in
a given ParticleArray
can be obtained by a call to the
ParticleArray.get_number_of_particles()
method.
The following is a simple example demonstrating this default behaviour of PySPH:
>>> x = numpy.array( [0, 1, 2, 3], dtype=numpy.float64 )
>>> tag = numpy.array( [0, 2, 0, 1], dtype=numpy.int32 )
>>> pa = utils.get_particle_array(x=x, tag=tag)
>>> print(pa.get_number_of_particles()) # total number of particles
4
>>> print(pa.num_real_particles) # no. of particles with tag 0
2
>>> x, tag = pa.get('x', 'tag', only_real_particles=True) # get only real particles (tag == 0)
>>> print(x)
[0. 2.]
>>> print(tag)
[0 0]
>>> x, tag = pa.get('x', 'tag', only_real_particles=False) # get all particles
>>> print(x)
[0. 2. 1. 3.]
>>> print(tag)
[0 0 2 1]
We are now in a position to put all these ideas together and write our first SPH application.
Parallel NNPS with PyZoltan¶
PySPH uses the Zoltan data management library for dynamic load balancing through a Python wrapper, PyZoltan, which provides functionality for parallel neighbor queries in a manner completely analogous to NNPS.
Particle data is managed and exchanged in parallel via a derivative of the abstract base class ParallelManager. Continuing with our example, we can instantiate a ZoltanParallelManagerGeometric object as:
>>> ... # create particles
>>> from pysph.parallel import ZoltanParallelManagerGeometric
>>> pm = ZoltanParallelManagerGeometric(dim, particles, comm, radius_scale, lb_method)
The constructor for the parallel manager is quite similar to the NNPS constructor, with two additional parameters, comm and lb_method. The first is the MPI communicator object and the latter is the partitioning algorithm requested. The following geometric load balancing algorithms are supported:
The particle distribution can be updated in parallel by a call to the
ParallelManager.update()
method. Particles across processor
boundaries that are needed for neighbor queries are assigned the tag
Remote as shown in the figure:
Putting it together: A simple example¶
Now that we know how to work with particles, we will use the data structures to carry out the simplest SPH operation, namely, the estimation of particle density from a given distribution of particles.
We consider particles distributed on a uniform Cartesian lattice ( \(\Delta x = \Delta y = \Delta\)) in a doubly periodic domain \([0,1]\times[0,1]\).
The particle mass is set equal to the “volume” \(\Delta^2\) associated with each particle and the smoothing length is taken as \(1.3\times \Delta\). With this initialization, the summation estimate for the particle density is \(\rho_a = \sum_b m_b W_{ab}\).
We will use the CubicSpline
kernel, defined in
pysph.base.kernels module. The code to setup the particle
distribution is given below
# PySPH imports
from cyarray.carray import UIntArray
from pysph.base import utils
from pysph.base.kernels import CubicSpline
from pysph.base.nnps import DomainManager
from pysph.base.nnps import LinkedListNNPS
# NumPy
import numpy
# Create a particle distribution
dx = 0.01; dxb2 = 0.5 * dx
x, y = numpy.mgrid[dxb2:1:dx, dxb2:1:dx]
x = x.ravel(); y = y.ravel()
h = numpy.ones_like(x) * 1.3*dx
m = numpy.ones_like(x) * dx*dx
# Create the particle array
pa = utils.get_particle_array(x=x,y=y,h=h,m=m)
# Create the periodic DomainManager object and NNPS
domain = DomainManager(xmin=0., xmax=1., ymin=0., ymax=1., periodic_in_x=True, periodic_in_y=True)
nps = LinkedListNNPS(dim=2, particles=[pa,], radius_scale=2.0, domain=domain)
# The SPH kernel. The dimension argument is needed for the correct normalization constant
k = CubicSpline(dim=2)
Note
Notice that the particles were created with an offset of
\(\frac{\Delta}{2}\). This is required since the
NNPS
object will box-wrap particles near periodic
boundaries.
The NNPS
object will create periodic ghosts for the
particles along each periodic axis.
The ghost particles are assigned the tag value 2. For this example, periodic ghosts are created along each coordinate direction as shown in the figure.
SPH Kernels¶
Pairwise interactions in SPH are weighted by the kernel \(W_{ab}\). In PySPH, the pysph.base.kernels module provides a Python interface for these terms. The general definition for an SPH kernel is of the form:
class Kernel(object):
def __init__(self, dim=1):
self.radius_scale = 2.0
self.dim = dim
def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0):
...
return wij
def gradient(self, xij=[0., 0, 0], rij=1.0, h=1.0, grad=[0, 0, 0]):
...
grad[0] = dwij_x
grad[1] = dwij_y
grad[2] = dwij_z
The kernel is an object with two methods kernel and gradient. \(\text{xij}\) is the difference vector between the destination and source particle, \(\boldsymbol{x}_{\text{i}} - \boldsymbol{x}_{\text{j}}\), with \(\text{rij} = \sqrt{\boldsymbol{x}_{ij}^2}\). The gradient method accepts an additional argument that upon exit is populated with the kernel gradient values.
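As a concrete illustration of this interface, the standard cubic spline kernel in 2D (with normalization constant \(10/(7\pi h^2)\)) can be sketched in pure Python. This follows the textbook definition of the kernel and is not the PySPH source:

```python
import numpy as np

class CubicSpline2D(object):
    def __init__(self, dim=2):
        assert dim == 2          # sketch: only the 2D normalization is coded
        self.radius_scale = 2.0
        self.dim = dim

    def kernel(self, xij=[0., 0., 0.], rij=1.0, h=1.0):
        q = rij / h
        sigma = 10.0 / (7.0 * np.pi * h * h)   # 2D normalization constant
        if q < 1.0:
            wij = sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
        elif q < 2.0:
            wij = sigma * 0.25 * (2.0 - q) ** 3
        else:
            wij = 0.0                          # compact support: W = 0 beyond 2h
        return wij

k = CubicSpline2D(dim=2)
print(k.kernel(rij=0.0, h=1.0))   # peak value, 10/(7*pi)
print(k.kernel(rij=2.0, h=1.0))   # 0.0
```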
Density summation¶
In the final part of the code, we iterate over all target or destination particles and compute the density contributions from neighboring particles:
nbrs = UIntArray() # array for neighbors
x, y, h, m = pa.get('x', 'y', 'h', 'm', only_real_particles=False) # source particles will include ghosts
for i in range( pa.num_real_particles ): # iterate over all local particles
xi = x[i]; yi = y[i]; hi = h[i]
nps.get_nearest_particles(0, 0, i, nbrs) # get neighbors
neighbors = nbrs.get_npy_array() # numpy array of neighbors
rho = 0.0
for j in neighbors: # iterate over each neighbor
xij = xi - x[j] # interaction terms
yij = yi - y[j]
rij = numpy.sqrt( xij**2 + yij**2 )
hij = 0.5 * (h[i] + h[j])
wij = k.kernel( [xij, yij, 0.0], rij, hij) # kernel interaction
rho += m[j] * wij
pa.rho[i] = rho # contribution for this destination
The average density computed in this manner can be verified as \(\rho_{\text{avg}} = 0.99994676895585222\).
Summary¶
In this document, we introduced the most fundamental data structures in PySPH for working with particles. With these data structures, PySPH can be used as a library for managing particles for your application.
If you are interested in the PySPH framework and want to try out some examples, check out A more detailed tutorial.
Contribute to docs¶
How to build the docs locally¶
To build the docs, clone the repository:
$ git clone https://github.com/pypr/pysph
Make sure to work in a pysph environment. The following instructions assume that the repository is cloned in the home directory. Change to the docs directory and run make html:
$ cd ~/pysph/docs/
$ make html
A possible error one might get is:
$ sphinx-build: Command not found
This means you don’t have sphinx-build on your system. To install it system-wide do:
$ sudo aptget install python3sphinx
or to install in an environment locally do:
$ pip install sphinx
Run make html again. The documentation is built locally in the ~/pysph/docs/build/html directory. Open the index.html file by running:
$ cd ~/pysph/docs/build/html
$ xdgopen index.html
How to add the documentation¶
As a starting point one can add documentation to one of the examples in
~/pysph/pysph/examples
folder. There is a dedicated
~/pysph/docs/source/examples
directory to add documentation to examples.
Choose an example to write documentation for,
$ cd ~/pysph/docs/source/examples
$ touch your_example.rst
We will write all the documentation in the rst file format. The index.rst file in the examples directory should know about our newly created file; add a reference next to the last written example:
* :ref:`Some_example`:
* :ref:`Other_example`:
* :ref:`taylor_green`: the Taylor-Green Vortex problem in 2D.
* :ref:`sphere_in_vessel`: A sphere floating in a hydrostatic tank example.
* :ref:`your_example_file`: Description of the example.
and at the top of the example file add the reference; for example, in your_example_file.rst you should add:
.. _your_example_file:
That’s it, add the documentation and send a pull request.
Gallery of PySPH examples¶
Gallery of PySPH examples¶
In the following, several PySPH examples are documented. These serve to illustrate various features of PySPH and show how one may use PySPH to solve a variety of problems.
The Taylor-Green Vortex¶
This example solves the classic Taylor-Green Vortex problem in two dimensions. To run it one may do:
$ pysph run taylor_green
There are many command line options that this example provides, check them out with:
$ pysph run taylor_green -h
The example source can be seen at taylor_green.py.
This example demonstrates several useful features:
 user defined command line arguments and how they can be used.
 running the problem with multiple schemes.
 periodicity in both dimensions.
 post processing of generated data.
 using the
pysph.tools.sph_evaluator.SPHEvaluator
class for postprocessing.
We discuss each of these below.
User command line arguments¶
The user defined command line arguments are easy to add. The following code snippet demonstrates how one adds this.
class TaylorGreen(Application):
def add_user_options(self, group):
group.add_argument(
"--init", action="store", type=str, default=None,
help="Initialize particle positions from given file."
)
group.add_argument(
"--perturb", action="store", type=float, dest="perturb", default=0,
help="Random perturbation of initial particles as a fraction "\
"of dx (setting it to zero disables it, the default)."
)
# ...
This is straightforward Python code that adds options using the argparse API. It is important to note that the options are then available in the application’s options attribute and can be accessed as self.options from the application’s methods. The consume_user_options method highlights this:
def consume_user_options(self):
nx = self.options.nx
re = self.options.re
self.nu = nu = U*L/re
# ...
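The same pattern can be exercised with plain argparse, independently of PySPH. In this sketch the argument group stands in for the group object that PySPH passes to add_user_options:

```python
import argparse

parser = argparse.ArgumentParser()
group = parser.add_argument_group('user')   # stand-in for PySPH's option group
group.add_argument(
    "--perturb", action="store", type=float, dest="perturb", default=0,
    help="Random perturbation of initial particles as a fraction of dx."
)

# PySPH parses the command line for you; self.options is the parsed namespace.
options = parser.parse_args(["--perturb", "0.1"])
print(options.perturb)   # 0.1
```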
This method is called after the command line arguments are parsed. To refresh
your memory on the order of invocation of the various methods of the
application, see the documentation of the
pysph.solver.application.Application
class. This shows that once
the application is run using the run
method, the command line arguments
are parsed and the following methods are called (this means that at this
point, the application has a valid self.options
):
consume_user_options()
configure_scheme()
The configure_scheme
is important as this example allows the user to
change the Reynolds number which changes the viscosity as well as the
resolution via nx
and hdx
. The code for the configuration looks like:
def configure_scheme(self):
scheme = self.scheme
h0 = self.hdx * self.dx
if self.options.scheme == 'tvf':
scheme.configure(pb=self.options.pb_factor*p0, nu=self.nu, h0=h0)
elif self.options.scheme == 'wcsph':
scheme.configure(hdx=self.hdx, nu=self.nu, h0=h0)
elif self.options.scheme == 'edac':
scheme.configure(h=h0, nu=self.nu, pb=self.options.pb_factor*p0)
kernel = QuinticSpline(dim=2)
scheme.configure_solver(kernel=kernel, tf=self.tf, dt=self.dt)
Note the use of the self.options.scheme
and the use of the
scheme.configure
method. Furthermore, the method also calls the scheme’s
configure_solver
method.
Using multiple schemes¶
This is relatively easy: it is achieved by using the pysph.sph.scheme.SchemeChooser scheme as follows:
def create_scheme(self):
wcsph = WCSPHScheme(
['fluid'], [], dim=2, rho0=rho0, c0=c0, h0=h0,
hdx=hdx, nu=None, gamma=7.0, alpha=0.0, beta=0.0
)
tvf = TVFScheme(
['fluid'], [], dim=2, rho0=rho0, c0=c0, nu=None,
p0=p0, pb=None, h0=h0
)
edac = EDACScheme(
['fluid'], [], dim=2, rho0=rho0, c0=c0, nu=None,
pb=p0, h=h0
)
s = SchemeChooser(default='tvf', wcsph=wcsph, tvf=tvf, edac=edac)
return s
When using multiple schemes it is important to recall that each scheme needs
different particle properties. The schemes set these extra properties for you.
In this example, the create_particles
method has the following code:
def create_particles(self):
# ...
fluid = get_particle_array(name='fluid', x=x, y=y, h=h)
self.scheme.setup_properties([fluid])
The line that calls setup_properties
passes a list of the particle arrays
to the scheme so the scheme can configure/setup any additional properties.
Periodicity¶
This is rather easily done with the code in the create_domain
method:
def create_domain(self):
return DomainManager(
xmin=0, xmax=L, ymin=0, ymax=L, periodic_in_x=True,
periodic_in_y=True
)
See also Simulating periodicity.
Postprocessing¶
The code has a significant chunk of code for postprocessing the results. This
is in the post_process
method. This demonstrates how to iterate over the
files and read the file data to calculate various quantities. In particular it
also demonstrates the use of the
pysph.tools.sph_evaluator.SPHEvaluator
class. For example consider
the method:
def _get_sph_evaluator(self, array):
if not hasattr(self, '_sph_eval'):
from pysph.tools.sph_evaluator import SPHEvaluator
equations = [
ComputeAveragePressure(dest='fluid', sources=['fluid'])
]
dm = self.create_domain()
sph_eval = SPHEvaluator(
arrays=[array], equations=equations, dim=2,
kernel=QuinticSpline(dim=2), domain_manager=dm
)
self._sph_eval = sph_eval
return self._sph_eval
This code creates the evaluator. Note that it just takes the particle arrays of interest, a set of equations (these can be as complex as the normal SPH equations, with groups and everything), the kernel, and a domain manager. The evaluator has two important methods:
 update_particle_arrays(…): this allows a user to update the arrays to a new set of values efficiently.
 evaluate: this actually performs the evaluation of the equations.
The example has this code which demonstrates these:
def _get_post_process_props(self, array):
# ...
sph_eval = self._get_sph_evaluator(array)
sph_eval.update_particle_arrays([array])
sph_eval.evaluate()
# ...
Note the use of the above methods.
A rigid sphere floating in a hydrostatic tank¶
This example demonstrates the API for running a rigid-fluid coupling problem in PySPH. To run it one may do:
$ cd ~/pysph/pysph/examples/rigid_body/
$ python sphere_in_vessel_akinci.py
There are many command line options that this example provides, check them out with:
$ python sphere_in_vessel_akinci.py -h
The example source can be seen at sphere_in_vessel.py.
This example demonstrates:
 Setting up a simulation involving rigid bodies and fluid
 Rigid-fluid coupling in particular
It is divided into three parts:
 Create particles
 Create equations
 Run the application
Create particles¶
In this example, we have a tank with a resting fluid and a sphere falling into the tank. We create three particle arrays: tank, fluid and cube. tank and fluid follow the wcsph scheme, whereas cube is governed by the rigid body equations.
def create_particles(self):
# elided
fluid = get_particle_array_wcsph(x=xf, y=yf, h=h, m=m, rho=rho,
name="fluid")
# elided
tank = get_particle_array_wcsph(x=xt, y=yt, h=h, m=m, rho=rho,
rad_s=rad_s, V=V, name="tank")
for name in ['fx', 'fy', 'fz']:
tank.add_property(name)
cube = get_particle_array_rigid_body(x=xc, y=yc, h=h, m=m, rho=rho,
rad_s=rad_s, V=V, cs=cs,
name="cube")
return [fluid, tank, cube]
We will discuss later the reason for adding the properties fx, fy and fz to the tank particle array. The next step is to set up the equations.
Create equations¶
def create_equations(self):
equations = [
Group(equations=[
BodyForce(dest='cube', sources=None, gy=-9.81),
], real=False),
Group(equations=[
SummationDensity(
dest='fluid',
sources=['fluid'], ),
SummationDensityBoundary(
dest='fluid', sources=['tank', 'cube'], fluid_rho=1000.0)
]),
# Tait equation of state
Group(equations=[
TaitEOSHGCorrection(dest='fluid', sources=None, rho0=self.ro,
c0=self.co, gamma=7.0),
], real=False),
Group(equations=[
MomentumEquation(dest='fluid', sources=['fluid'],
alpha=self.alpha, beta=0.0, c0=self.co,
gy=-9.81),
AkinciRigidFluidCoupling(dest='fluid',
sources=['cube', 'tank']),
XSPHCorrection(dest='fluid', sources=['fluid', 'tank']),
]),
Group(equations=[
RigidBodyCollision(dest='cube', sources=['tank'], kn=1e5)
]),
Group(equations=[RigidBodyMoments(dest='cube', sources=None)]),
Group(equations=[RigidBodyMotion(dest='cube', sources=None)]),
]
return equations
A few points to note while dealing with the Akinci formulation:
 While computing the density of the fluid due to the solid, make sure to use SummationDensityBoundary: the usual SummationDensity computes the density from the mass of the particle, whereas SummationDensityBoundary computes it from the volume of the particle. This makes a significant difference in flows with large density variations.
 Apply TaitEOSHGCorrection so that there is no negative pressure.
 The force from the boundary (here the tank) on the fluid is computed using the AkinciRigidFluidCoupling equation, whereas in the usual case it is computed using the momentum equation. There are a few advantages to doing this: with the momentum equation one must compute the density of the boundary, then its pressure, and only then the force from that pressure; with AkinciRigidFluidCoupling the force depends only on the fluid particle’s pressure:

def loop(self, d_idx, d_m, d_rho, d_au, d_av, d_aw, d_p, s_idx,
         s_V, s_fx, s_fy, s_fz, DWIJ, s_m, s_p, s_rho):
    # elided
    d_au[d_idx] += psi * _t1 * DWIJ[0]
    d_av[d_idx] += psi * _t1 * DWIJ[1]
    d_aw[d_idx] += psi * _t1 * DWIJ[2]
    s_fx[s_idx] += d_m[d_idx] * psi * _t1 * DWIJ[0]
    s_fy[s_idx] += d_m[d_idx] * psi * _t1 * DWIJ[1]
    s_fz[s_idx] += d_m[d_idx] * psi * _t1 * DWIJ[2]

 Since AkinciRigidFluidCoupling computes both the force on the fluid by the solid and the force on the solid by the fluid, the source particle arrays must hold the properties fx, fy and fz.
 The first few equations deal with the simulation of the fluid in the hydrostatic tank. The equation dealing with rigid-fluid coupling is AkinciRigidFluidCoupling: it computes the forces exerted by the fluid on the solid body and by the solid on the fluid in a single equation.
 Usually an SPH equation changes only the properties of the destination particle array, but in this case both destination and source properties are manipulated.
The final equations deal with the dynamics of rigid bodies, which are discussed in other example files.
Run the application¶
Finally run the application by
if __name__ == '__main__':
app = RigidFluidCoupling()
app.run()
 The Taylor-Green Vortex: the Taylor-Green Vortex problem in 2D.
 A rigid sphere floating in a hydrostatic tank: A sphere floating in a hydrostatic tank example.
Reference documentation¶
Autogenerated from doc strings using sphinx’s autodoc feature.
PySPH Reference Documentation¶
Module application¶

class
pysph.solver.application.
Application
(fname=None, output_dir=None, domain=None)[source]¶ Bases:
object
Subclass this to run any SPH simulation. There are several important methods that this class provides. The application is typically used as follows:
class EllipticalDrop(Application):
    def create_particles(self):
        # ...
    def create_scheme(self):
        # ...
    ...

app = EllipticalDrop()
app.run()
app.post_process(app.info_filename)
The post_process() method is entirely optional and typically performs the post-processing. It is important to understand the correct sequence of the method calls. When the Application instance is created, the following methods are invoked by the __init__() method:
 initialize(): use this to set up any constants, etc.
 create_scheme(): this needs to be overridden if one wishes to use a pysph.sph.scheme.Scheme. If one does not want to use a scheme, the create_equations() and create_solver() methods must be overridden.
 self.scheme.add_user_options(): i.e. the scheme’s command line options are added, if there is a scheme.
 add_user_options(): add any user-specified command line options.
When app.run() is called, the following methods are called in order:
 _parse_command_line(): this is a private method, but it is important to note that the command line arguments are parsed first.
 consume_user_options(): this is called right after the command line args are parsed.
 configure_scheme(): this is where one may configure the scheme according to the passed command line arguments.
 create_solver(): create the solver; note that this is needed only if one has not used a scheme, otherwise this will by default return the solver created by the scheme chosen.
 create_equations(): create any equations. Defaults to letting the scheme generate and return the desired equations.
 create_particles()
 create_inlet_outlet()
 create_domain(): not needed for non-periodic domains.
 create_nnps(): not needed unless one wishes to override the default NNPS.
 create_tools(): add any pysph.solver.tools.Tool instances.
 customize_output(): customize the output visualization.
Additionally, as the application runs, several convenient optional callbacks are set up:
 pre_step(): called before each time step.
 post_stage(): called after every stage of the integration.
 post_step(): called after each time step.
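The ordering of these hooks can be illustrated with a tiny stand-in class; this is not the real PySPH Application, just a sketch of the pre_step()/post_step() hook pattern:

```python
# Minimal stand-in illustrating the callback order; NOT the real PySPH
# Application class, just the pre_step()/post_step() hook pattern.
class MiniApplication:
    def __init__(self, n_steps):
        self.n_steps = n_steps
        self.log = []

    def run(self):
        solver = object()  # placeholder for the real solver instance
        for _ in range(self.n_steps):
            self.pre_step(solver)    # called before each time step
            self.log.append("step")  # the integrator would advance here
            self.post_step(solver)   # called after each time step

    def pre_step(self, solver):      # optional callback, default no-op
        pass

    def post_step(self, solver):
        pass

class MyApp(MiniApplication):
    def pre_step(self, solver):
        self.log.append("pre")

    def post_step(self, solver):
        self.log.append("post")

app = MyApp(n_steps=2)
app.run()
print(app.log)  # ['pre', 'step', 'post', 'pre', 'step', 'post']
```

In the real Application the solver instance is passed to pre_step() and post_step(), so the callbacks can inspect or modify the simulation state at each step.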
Finally, it is a good idea to overload the post_process() method to perform any post-processing of the generated data.
The application instance also has several important attributes; some of these are as follows:
 args: command line arguments, typically sys.argv[1:].
 domain: optional pysph.base.nnps_base.DomainManager instance.
 fname: filename pattern to use when dumping output.
 inlet_outlet: list of inlets/outlets.
 nnps: instance of pysph.base.nnps_base.NNPS.
 num_procs: total number of processes running.
 output_dir: output directory.
 parallel_manager: in parallel, an instance of pysph.parallel.parallel_manager.ParallelManager.
 particles: list of pysph.base.particle_array.ParticleArray instances.
 rank: rank of this process.
 scheme: the optional pysph.sph.scheme.Scheme instance.
 solver: the solver instance, pysph.solver.solver.Solver.
 tools: a list of possible pysph.solver.tools.Tool instances.
Constructor
Parameters:  fname (str) – file name to use for the output files.
 output_dir (str) – output directory name.
 domain (pysph.base.nnps_base.DomainManager) – A domain manager to use. This is used for periodic domains etc.

add_tool
(tool)[source]¶ Add a
pysph.solver.tools.Tool
instance to the application.

add_user_options
(group)[source]¶ Add any user-defined options to the given option group.
Note
This uses the argparse module.

configure_scheme
()[source]¶ This is called after
consume_user_options()
is called. One can configure the SPH scheme here, since at this point all the command line options are known.

consume_user_options
()[source]¶ This is called right after the command line arguments are parsed.
All the parsed options are available in
self.options
and can be used in this method. This is meant to be overridden by users to set up any internal variables, etc., that depend on the command line arguments passed. Note that this method is called well before the solver or particles are created.
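Since add_user_options() receives a standard argparse option group, the add/consume pattern can be sketched with plain argparse; the --my-factor option below is purely hypothetical:

```python
import argparse

# Sketch of the add_user_options(group)/consume_user_options() pattern using
# plain argparse; the --my-factor option is hypothetical.
parser = argparse.ArgumentParser()
group = parser.add_argument_group("user", "User defined options")
group.add_argument(
    "--my-factor", dest="my_factor", type=float, default=1.5,
    help="Hypothetical scaling factor consumed in consume_user_options()."
)

# PySPH parses the command line during app.run(); here we parse explicitly.
options = parser.parse_args(["--my-factor", "2.0"])
assert options.my_factor == 2.0  # would be read as self.options.my_factor
```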

create_domain
()[source]¶ Create a pysph.base.nnps_base.DomainManager and return it if needed.
This is used for periodic domains etc. Note that if the domain is passed to
__init__()
, then this method is not called.

create_inlet_outlet
(particle_arrays)[source]¶ Create inlet and outlet objects and return them as a list.
The method is passed a dictionary of particle arrays keyed on the name of the particle array.

create_nnps
()[source]¶ Create any NNPS if desired and return it, else a default NNPS will be created automatically.

create_scheme
()[source]¶ Create a suitable SPH scheme and return it.
Note that this method is called after the arguments are all processed and after
consume_user_options()
is called.

create_tools
()[source]¶ Create any tools and return a sequence of them. This method is called after particles, inlets, etc. are all set up and configured.

customize_output
()[source]¶ Customize the output file visualization by adding any files.
For example, the pysph view command will look for a
mayavi_config.py
file that can be used to script the viewer. You can use self._mayavi_config('code') to add a default customization here. Note that this is executed before the simulation starts.

post_process
(info_fname_or_directory)[source]¶ Given an info filename or a directory containing the info file, read the information and do any postprocessing of the results. Please overload the method to perform any processing.
The info file has a few useful attributes and can be read using the
read_info()
method. The output_files property should provide the output files generated.

post_stage
(current_time, dt, stage)[source]¶ If overloaded, this is called automatically after each integrator stage, i.e. if the integrator is a two stage integrator it will be called after the first and second stages.
The method is passed (current_time, dt, stage). See the
pysph.sph.integrator.Integrator.one_timestep()
methods for examples of how this is called.

post_step
(solver)[source]¶ If overloaded, this is called automatically after each integrator step. The method is passed the solver instance.

pre_step
(solver)[source]¶ If overloaded, this is called automatically before each integrator step. The method is passed the solver instance.

read_info
(fname_or_dir)[source]¶ Read the information from the given info file (or a directory containing the info file; the first info file found will be used).

setup
(solver, equations, nnps=None, inlet_outlet_factory=None, particle_factory=None, *args, **kwargs)[source]¶ Setup the application’s solver.
This will parse the command line arguments (if this is not called from within an IPython notebook or shell) and then, using those parameters and any additional parameters, call the solver's setup method.
Parameters:  solver (pysph.solver.solver.Solver) – The solver instance.
 equations (list) – A list of Groups/Equations.
 nnps (pysph.base.nnps_base.NNPS) – Optional NNPS instance. If None is given a default NNPS is created.
 inlet_outlet_factory (callable or None) – The inlet_outlet_factory is passed a dictionary of the particle arrays. The factory should return a list of inlets and outlets.
 particle_factory (callable or None) – If supplied, particles will be created for the solver using the particle arrays returned by the callable. Else particles for the solver need to be set before calling this method.
 args – extra positional arguments passed on to the particle_factory.
 kwargs – extra keyword arguments passed to the particle_factory.
Examples
>>> def create_particles():
...     ...
...
>>> solver = Solver(...)
>>> equations = [...]
>>> app = Application()
>>> app.setup(solver=solver, equations=equations,
...           particle_factory=create_particles)
>>> app.run()
Module controller¶
Implement infrastructure for the solver to add various interfaces

class
pysph.solver.controller.
CommandManager
(solver, comm=None)[source]¶ Bases:
object
Class to manage and synchronize commands from various Controllers

add_function
(callable, interval=1)[source]¶ Add a function to be called every interval iterations.

add_interface
(*args, **kwds)[source]¶ Add a callable interface to the controller
The callable must accept a Controller instance as its argument. It is called in a new thread of its own, and it can perform various actions using the methods defined on the Controller instance passed to it. The newly created thread is set to daemon mode and returned.

get_particle_array_combined
(idx, procs=None)[source]¶ Get a single particle array with combined data from all procs.
Specifying processes is currently not implemented.


class
pysph.solver.controller.
Controller
(command_manager, block=True)[source]¶ Bases:
object
The Controller class acts as a proxy to control the solver.
This is passed as an argument to the interface
Methods available:
 get – get the value of a solver parameter
 set – set the value of a solver parameter
 get_result – return result of a queued command
 pause_on_next – pause solver thread on next iteration
 wait – wait (block) calling thread till solver is paused (call after pause_on_next)
 cont – continue solver thread (call after pause_on_next)
Various other methods are also available, as listed in CommandManager.dispatch_dict, which perform different functions:
 The methods in CommandManager.active_methods do their operation and return the result (if any) immediately.
 The methods in CommandManager.lazy_methods do their operation later, when the solver thread is available, and return a task-id. The result of the task can be obtained later using the blocking call get_result(), which waits till the result is available and returns it. The availability of the result can be checked using the lock returned by the get_task_lock() method.
FIXME: wait/cont currently do not work in parallel

cont
()[source]¶ Continue the solver thread after it has been paused by pause_on_next.
Call this only after calling the pause_on_next method.

set_blocking
(block)[source]¶ set the blocking mode to True/False
In blocking mode (block=True), all methods other than those getting solver properties block until the command is executed by the solver, and then return the results. The blocking time can vary depending on the time taken by the solver per iteration and the command_interval. In non-blocking mode, these methods queue the command for later execution and return a string corresponding to the task_id of the operation. The result can later be obtained by a (blocking) call to get_result with the task_id as argument.

class
pysph.solver.controller.
DummyComm
[source]¶ Bases:
object
A dummy MPI.Comm implementation as a placeholder for serial runs.
SPH equations¶

class
pysph.sph.equation.
Equation
(dest, sources)[source]¶ Bases:
object
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays
Basic SPH Equations¶

class
pysph.sph.basic_equations.
BodyForce
(dest, sources, fx=0.0, fy=0.0, fz=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
Add a body force to the particles:
\(\boldsymbol{f} = (f_x, f_y, f_z)\)
Parameters:  fx (float) – Body force per unit mass along the xaxis
 fy (float) – Body force per unit mass along the yaxis
 fz (float) – Body force per unit mass along the zaxis

class
pysph.sph.basic_equations.
ContinuityEquation
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Density rate:
\(\frac{d\rho_a}{dt} = \sum_b m_b \boldsymbol{v}_{ab}\cdot \nabla_a W_{ab}\)
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.basic_equations.
IsothermalEOS
(dest, sources, rho0, c0, p0)[source]¶ Bases:
pysph.sph.equation.Equation
Compute the pressure using the Isothermal equation of state:
\(p = p_0 + c_0^2(\rho - \rho_0)\)
Parameters:  rho0 (float) – Reference density of the fluid (\(\rho_0\))
 c0 (float) – Maximum speed of sound expected in the system (\(c_0\))
 p0 (float) – Reference pressure in the system (\(p_0\))

class
pysph.sph.basic_equations.
MonaghanArtificialViscosity
(dest, sources, alpha=1.0, beta=1.0)[source]¶ Bases:
pysph.sph.equation.Equation
Classical Monaghan style artificial viscosity [Monaghan2005]
\[\frac{d\mathbf{v}_{a}}{dt}=-\sum_{b}m_{b}\Pi_{ab}\nabla_{a}W_{ab}\]where
\[\begin{split}\Pi_{ab}=\begin{cases}\frac{-\alpha_{\pi}\bar{c}_{ab}\phi_{ab}+ \beta_{\pi}\phi_{ab}^{2}}{\bar{\rho}_{ab}}, & \mathbf{v}_{ab}\cdot \mathbf{r}_{ab}<0\\0, & \mathbf{v}_{ab}\cdot\mathbf{r}_{ab}\geq0 \end{cases}\end{split}\]with
\[\begin{aligned}\phi_{ab}&=\frac{h\mathbf{v}_{ab}\cdot\mathbf{r}_{ab}}{|\mathbf{r}_{ab}|^{2}+\epsilon^{2}}\\\bar{c}_{ab}&=\frac{c_{a}+c_{b}}{2}\\\bar{\rho}_{ab}&=\frac{\rho_{a}+\rho_{b}}{2}\end{aligned}\]References
[Monaghan2005] J. Monaghan, “Smoothed particle hydrodynamics”, Reports on Progress in Physics, 68 (2005), pp. 1703-1759. Parameters:  alpha (float) – produces a shear and bulk viscosity
 beta (float) – used to handle high Mach number shocks
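Evaluating \(\Pi_{ab}\) for a single approaching particle pair shows how the viscous switch behaves; all the numbers below are arbitrary illustrative values, not PySPH defaults:

```python
# Evaluate Pi_ab for one particle pair with arbitrary illustrative values.
h, eps = 0.1, 0.01           # smoothing length and the small parameter epsilon
alpha, beta = 1.0, 1.0
ca = cb = 20.0               # sound speeds
rhoa, rhob = 1000.0, 1005.0  # densities

v_dot_r = -1.0 * 0.05        # v_ab . r_ab < 0: the pair is approaching
r2 = 0.05 ** 2

phi = h * v_dot_r / (r2 + eps ** 2)
cbar = 0.5 * (ca + cb)
rhobar = 0.5 * (rhoa + rhob)

# The viscosity only switches on for approaching particles.
pi_ab = (-alpha * cbar * phi + beta * phi ** 2) / rhobar if v_dot_r < 0 else 0.0
assert pi_ab > 0.0  # a positive Pi_ab yields a repulsive contribution
```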

class
pysph.sph.basic_equations.
SummationDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Good old Summation density:
\(\rho_a = \sum_b m_b W_{ab}\)
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays
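The sum can be checked numerically with a hand-rolled 1D Gaussian kernel on an evenly spaced line of particles; the kernel and all numbers here are assumptions for illustration (PySPH supplies its own kernel classes):

```python
import math

# Summation density on a 1D line of particles, using a Gaussian kernel as a
# stand-in for the SPH kernel W_ab.
def w_gauss(r, h):
    q = r / h
    return math.exp(-q * q) / (math.sqrt(math.pi) * h)

dx, h = 0.1, 0.12                    # particle spacing and smoothing length
m = 1000.0 * dx                      # mass per particle for nominal 1000 kg/m^3
xs = [i * dx for i in range(-5, 6)]  # neighbours of the particle at x = 0

rho = sum(m * w_gauss(abs(x), h) for x in xs)
assert abs(rho - 1000.0) < 10.0      # closely recovers the nominal density
```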

class
pysph.sph.basic_equations.
VelocityGradient2D
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Compute the SPH evaluation for the velocity gradient tensor in 2D.
The expression for the velocity gradient is:
\(\frac{\partial v^i}{\partial x^j} = \sum_{b}\frac{m_b}{\rho_b}(v_b - v_a)\frac{\partial W_{ab}}{\partial x_a^j}\)
Notes
The tensor properties are stored in the variables v_ij where ‘i’ refers to the velocity component and ‘j’ refers to the spatial component. Thus v_10 is \(\frac{\partial v}{\partial x}\)
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.basic_equations.
VelocityGradient3D
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Compute the SPH evaluation for the velocity gradient tensor in 3D.
The expression for the velocity gradient is:
\(\frac{\partial v^i}{\partial x^j} = \sum_{b}\frac{m_b}{\rho_b}(v_b - v_a)\frac{\partial W_{ab}}{\partial x_a^j}\)
Notes
The tensor properties are stored in the variables v_ij where ‘i’ refers to the velocity component and ‘j’ refers to the spatial component. Thus v_21 is \(\frac{\partial w}{\partial y}\)
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.basic_equations.
XSPHCorrection
(dest, sources, eps=0.5)[source]¶ Bases:
pysph.sph.equation.Equation
Position stepping with XSPH correction [Monaghan1992]
\[\frac{d\mathbf{r}_{a}}{dt}=\mathbf{\hat{v}}_{a}=\mathbf{v}_{a}-\epsilon\sum_{b}m_{b}\frac{\mathbf{v}_{ab}}{\bar{\rho}_{ab}}W_{ab}\]References
[Monaghan1992] J. Monaghan, Smoothed Particle Hydrodynamics, “Annual Review of Astronomy and Astrophysics”, 30 (1992), pp. 543-574. Parameters: eps (float) – \(\epsilon\) as in the above equation Notes
This equation must be used to advect the particles. XSPH can be turned off by setting the parameter
eps = 0
.
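For one particle with a single neighbour, the corrected velocity can be evaluated by hand; the numbers below are arbitrary illustrative values:

```python
# XSPH-corrected velocity for one particle with a single neighbour
# (all numbers are arbitrary illustrative values).
eps = 0.5
m = 1.0
va, vb = 1.0, 0.0            # particle and neighbour velocities
rhoa, rhob = 1000.0, 1000.0
w_ab = 500.0                 # kernel value for this pair

rhobar = 0.5 * (rhoa + rhob)
vhat = va - eps * m * (va - vb) / rhobar * w_ab
assert vhat < va  # the correction drags the particle toward its neighbour
```

Setting eps = 0 recovers vhat == va, which is how XSPH is switched off.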

class
pysph.sph.basic_equations.
XSPHCorrectionForLeapFrog
(dest, sources, eps=0.5)[source]¶ Bases:
pysph.sph.equation.Equation
The XSPH correction [Monaghan1992] alone. This is meant to be used with a leapfrog integrator which already considers the velocity of the particles. It simply computes the correction term and adds that to
ax, ay, az
.\[\frac{d\mathbf{r}_{a}}{dt}=\mathbf{\hat{v}}_{a}=-\epsilon\sum_{b}m_{b}\frac{\mathbf{v}_{ab}}{\bar{\rho}_{ab}}W_{ab}\]References
[Monaghan1992] J. Monaghan, Smoothed Particle Hydrodynamics, “Annual Review of Astronomy and Astrophysics”, 30 (1992), pp. 543-574. Parameters: eps (float) – \(\epsilon\) as in the above equation Notes
This equation must be used to advect the particles. XSPH can be turned off by setting the parameter
eps = 0
.
Basic WCSPH Equations¶

class
pysph.sph.wc.basic.
ContinuityEquationDeltaSPH
(dest, sources, c0, delta=0.1)[source]¶ Bases:
pysph.sph.equation.Equation
Continuity equation with dissipative terms
\(\frac{d\rho_a}{dt} = \sum_b \rho_a \frac{m_b}{\rho_b} \left( \boldsymbol{v}_{ab}\cdot \nabla_a W_{ab} + \delta \eta_{ab} \cdot \nabla_{a} W_{ab} (h_{ab}\frac{c_{ab}}{\rho_a}(\rho_b - \rho_a)) \right)\)
References
[Marrone2011] S. Marrone et al., “δ-SPH model for simulating violent impact flows”, Computer Methods in Applied Mechanics and Engineering, 200 (2011), pp. 1526–1542. Parameters:  c0 (float) – reference speed of sound
 delta (float) – coefficient used to control the intensity of diffusion of density

class
pysph.sph.wc.basic.
ContinuityEquationDeltaSPHPreStep
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Continuity equation with dissipative terms See
pysph.sph.wc.basic.ContinuityEquationDeltaSPH
The matrix \(L_a\) is multiplied to \(\nabla W_{ij}\) in thepysph.sph.scheme.WCSPHScheme
class by usingpysph.sph.wc.kernel_correction.GradientCorrectionPreStep
andpysph.sph.wc.kernel_correction.GradientCorrection
.Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.wc.basic.
MomentumEquation
(dest, sources, c0, alpha=1.0, beta=1.0, gx=0.0, gy=0.0, gz=0.0, tensile_correction=False)[source]¶ Bases:
pysph.sph.equation.Equation
Classic Monaghan Style Momentum Equation with Artificial Viscosity
\[\frac{d\mathbf{v}_{a}}{dt}=-\sum_{b}m_{b}\left(\frac{p_{b}} {\rho_{b}^{2}}+\frac{p_{a}}{\rho_{a}^{2}}+\Pi_{ab}\right) \nabla_{a}W_{ab}\]where
\[\begin{split}\Pi_{ab}=\begin{cases} \frac{\alpha\bar{c}_{ab}\mu_{ab}+\beta\mu_{ab}^{2}}{\bar{\rho}_{ab}} & \mathbf{v}_{ab}\cdot\mathbf{r}_{ab}<0;\\ 0 & \mathbf{v}_{ab}\cdot\mathbf{r}_{ab}\geq0; \end{cases}\end{split}\]with
\[ \begin{align}\begin{aligned}\begin{split}\mu_{ab}=\frac{h\mathbf{v}_{ab}\cdot\mathbf{r}_{ab}} {\mathbf{r}_{ab}^{2}+\eta^{2}}\\\end{split}\\\begin{split}\bar{c}_{ab} = \frac{c_a + c_b}{2}\\\end{split}\\\bar{\rho}_{ab} = \frac{\rho_a + \rho_b}{2}\end{aligned}\end{align} \]References
[Monaghan1992] J. Monaghan, Smoothed Particle Hydrodynamics, “Annual Review of Astronomy and Astrophysics”, 30 (1992), pp. 543-574. Parameters:  c0 (float) – reference speed of sound
 alpha (float) – produces a shear and bulk viscosity
 beta (float) – used to handle high Mach number shocks
 gx (float) – body force per unit mass along the xaxis
 gy (float) – body force per unit mass along the yaxis
 gz (float) – body force per unit mass along the zaxis
 tensile_correction (bool) – switch for tensile instability correction (Default: False)

class
pysph.sph.wc.basic.
MomentumEquationDeltaSPH
(dest, sources, rho0, c0, alpha=1.0)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum equation defined in JOSEPHINE and the δ-SPH model
\[\frac{du_{i}}{dt}=-\frac{1}{\rho_{i}}\sum_{j}\left(p_{j}+p_{i}\right) \nabla_{i}W_{ij}V_{j}+\mathbf{g}_{i}+\alpha hc_{0}\rho_{0}\sum_{j} \pi_{ij}\nabla_{i}W_{ij}V_{j}\]where
\[\pi_{ij}=\frac{\mathbf{u}_{ij}\cdot\mathbf{r}_{ij}} {\mathbf{r}_{ij}^{2}}\]References
[Marrone2011] S. Marrone et al., “δ-SPH model for simulating violent impact flows”, Computer Methods in Applied Mechanics and Engineering, 200 (2011), pp. 1526–1542. [Cherfils2012] J. M. Cherfils et al., “JOSEPHINE: A parallel SPH code for free-surface flows”, Computer Physics Communications, 183 (2012), pp. 1468–1480. Parameters:  rho0 (float) – reference density
 c0 (float) – reference speed of sound
 alpha (float) – coefficient used to control the intensity of the diffusion of velocity
Notes
Artificial viscosity is used in this momentum equation and is controlled by the parameter \(\alpha\). This form of the artificial viscosity is similar but not identical to the Monaghanstyle artificial viscosity.

class
pysph.sph.wc.basic.
PressureGradientUsingNumberDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Pressure gradient discretized using number density:
\[\frac{d \boldsymbol{v}_a}{dt} = -\frac{1}{m_a}\sum_b \left(\frac{p_a}{V_a^2} + \frac{p_b}{V_b^2}\right)\nabla_a W_{ab}\]Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.wc.basic.
TaitEOS
(dest, sources, rho0, c0, gamma, p0=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
Tait equation of state for waterlike fluids
\(p_a = \frac{c_{0}^2\rho_0}{\gamma}\left( \left(\frac{\rho_a}{\rho_0}\right)^{\gamma} - 1\right)\)
References
[Cole1948] R. H. Cole, “Underwater Explosions”, Princeton University Press, 1948. [Batchelor2002] G. Batchelor, “An Introduction to Fluid Dynamics”, Cambridge University Press, 2002. [Monaghan2005] J. Monaghan, “Smoothed particle hydrodynamics”, Reports on Progress in Physics, 68 (2005), pp. 1703-1759. Parameters:  rho0 (float) – reference density of fluid particles
 c0 (float) – maximum speed of sound expected in the system
 gamma (float) – constant
 p0 (float) – reference pressure in the system (defaults to zero).
Notes
The reference speed of sound, c0, is to be taken approximately as 10 times the maximum expected velocity in the system. The particle sound speed is given by the usual expression:
\(c_a = \sqrt{\frac{\partial p}{\partial \rho}}\)
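A quick evaluation of the Tait EOS with assumed water-like values shows the expected behaviour (zero pressure at the reference density, positive pressure under compression):

```python
# Tait EOS evaluated by hand for a water-like fluid (assumed values).
rho0 = 1000.0        # reference density [kg/m^3]
vmax = 2.0           # maximum expected velocity [m/s]
c0 = 10.0 * vmax     # reference sound speed, about 10x vmax as noted above
gamma = 7.0

B = c0 * c0 * rho0 / gamma

def tait_pressure(rho):
    return B * ((rho / rho0) ** gamma - 1.0)

assert tait_pressure(rho0) == 0.0        # zero pressure at reference density
assert tait_pressure(1.01 * rho0) > 0.0  # 1% compression gives positive pressure
```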

class
pysph.sph.wc.basic.
TaitEOSHGCorrection
(dest, sources, rho0, c0, gamma)[source]¶ Bases:
pysph.sph.equation.Equation
Tait Equation of State with Hughes and Graham Correction
\[p_a = \frac{c_{0}^2\rho_0}{\gamma}\left( \left(\frac{\rho_a}{\rho_0}\right)^{\gamma} - 1\right)\]where
\[\begin{split}\rho_{a}=\begin{cases}\rho_{a} & \rho_{a}\geq\rho_{0}\\ \rho_{0} & \rho_{a}<\rho_{0}\end{cases}\end{split}\]References
[Hughes2010] J. P. Hughes and D. I. Graham, “Comparison of incompressible and weakly-compressible SPH models for free-surface water flows”, Journal of Hydraulic Research, 48 (2010), pp. 105-117. Parameters:  rho0 (float) – reference density
 c0 (float) – reference speed of sound
 gamma (float) – constant
Notes
The correction is to be applied on boundary particles and imposes a minimum value of the density (rho0) which is set upon instantiation. This correction avoids particle sticking behaviour at walls.
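The effect of the correction can be sketched in a few lines; the values are assumed for illustration:

```python
# Hughes-Graham correction: clamp the density to rho0 before applying the
# Tait EOS so boundary particles never see negative pressure (assumed values).
rho0, c0, gamma = 1000.0, 20.0, 7.0
B = c0 * c0 * rho0 / gamma

def tait_hg_pressure(rho):
    rho = max(rho, rho0)  # the correction: impose a minimum density of rho0
    return B * ((rho / rho0) ** gamma - 1.0)

assert tait_hg_pressure(0.95 * rho0) == 0.0  # no negative wall pressure
assert tait_hg_pressure(1.05 * rho0) > 0.0   # compression is still resisted
```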

class
pysph.sph.wc.basic.
UpdateSmoothingLengthFerrari
(dest, sources, dim, hdx)[source]¶ Bases:
pysph.sph.equation.Equation
Update the particle smoothing lengths
\(h_a = hdx \left(\frac{m_a}{\rho_a}\right)^{\frac{1}{d}}\)
References
[Ferrari2009] A. Ferrari et al., “A new 3D parallel SPH scheme for free surface flows”, Computers and Fluids, 38 (2009), pp. 1203–1217. Parameters:  dim (float) – number of dimensions
 hdx (float) – scaling factor
Notes
Ideally, the kernel scaling factor should be determined from the kernel used based on a linear stability analysis. The default value of (hdx=1) reduces to the formulation suggested by Ferrari et al. who used a Cubic Spline kernel.
Typically, a change in the smoothing length should mean the neighbors are recomputed which in PySPH means the NNPS must be updated. This equation should therefore be placed as the last equation so that after the final corrector stage, the smoothing lengths are updated and the new NNPS data structure is computed.
Note however that since this is to be used with incompressible flow equations, the density variations are small and hence the smoothing lengths should also not vary too much.
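The update itself is a one-liner; with assumed values in 3D:

```python
# Smoothing length from particle mass and density (assumed values, 3D).
m, rho = 1.0, 1000.0   # particle mass and density
dim, hdx = 3, 1.0      # number of dimensions and kernel scaling factor

h = hdx * (m / rho) ** (1.0 / dim)   # h = hdx * (m / rho)^(1/d)
assert abs(h - 0.1) < 1e-9           # cube root of 1e-3
```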
Viscosity functions

class
pysph.sph.wc.viscosity.
ClearyArtificialViscosity
(dest, sources, dim, alpha=1.0)[source]¶ Bases:
pysph.sph.equation.Equation
Artificial viscosity proposed By P. Cleary:
\[\Pi_{ab} = -\frac{16 \mu_a \mu_b}{\rho_a \rho_b (\mu_a + \mu_b)}\left( \frac{\boldsymbol{v}_{ab} \cdot \boldsymbol{r}_{ab}}{\boldsymbol{r}_{ab}^2 + \epsilon} \right),\]where the viscosity is determined from the parameter \(\alpha\) as
\[\mu_a = \frac{1}{8}\alpha h_a c_a \rho_a\]This equation is described in the 2005 review paper by Monaghan
 J. J. Monaghan, “Smoothed Particle Hydrodynamics”, Reports on Progress in Physics, 2005, 68, pp 1703–1759 [JM05]

class
pysph.sph.wc.viscosity.
LaminarViscosity
(dest, sources, nu, eta=0.01)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.wc.viscosity.
LaminarViscosityDeltaSPH
(dest, sources, dim, rho0, nu)[source]¶ Bases:
pysph.sph.equation.Equation
See section 2 of the below reference
 P. N. Sun, A. Colagrossi, S. Marrone, A. Zhang,
“The δplus-SPH model: simple procedures for a further improvement of the SPH scheme”, Computer Methods in Applied Mechanics and Engineering, 315 (2017), pp. 25-49.

class
pysph.sph.wc.viscosity.
MonaghanSignalViscosityFluids
(dest, sources, alpha, h)[source]¶ Bases:
pysph.sph.equation.Equation
Transport Velocity Formulation¶
References
[Adami2012]  (1, 2) S. Adami et al., “A generalized wall boundary condition for smoothed particle hydrodynamics”, Journal of Computational Physics (2012), pp. 7057–7075. 
[Adami2013]  S. Adami et al., “A transport-velocity formulation for smoothed particle hydrodynamics”, Journal of Computational Physics (2013), pp. 292–307. 

class
pysph.sph.wc.transport_velocity.
ContinuityEquation
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Conservation of mass equation
Eq (6) in [Adami2012]:
\[\frac{d\rho_a}{dt} = \rho_a \sum_b \frac{m_b}{\rho_b} \boldsymbol{v}_{ab} \cdot \nabla_a W_{ab}\]Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.wc.transport_velocity.
ContinuitySolid
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Continuity equation for the solid’s ghost particles.
The key difference is that we use the ghost velocity ug, and not the particle velocity u.
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.wc.transport_velocity.
MomentumEquationArtificialStress
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Artificial stress contribution to the Momentum Equation
\[\frac{d\boldsymbol{v}_a}{dt} = \frac{1}{m_a}\sum_b (V_a^2 + V_b^2)\left[ \frac{1}{2}(\boldsymbol{A}_a + \boldsymbol{A}_b) : \nabla_a W_{ab}\right]\]where the artificial stress terms are given by:
\[ \boldsymbol{A} = \rho \boldsymbol{v} (\tilde{\boldsymbol{v}}  \boldsymbol{v})\]Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.wc.transport_velocity.
MomentumEquationArtificialViscosity
(dest, sources, c0, alpha=0.1)[source]¶ Bases:
pysph.sph.equation.Equation
Artificial viscosity for the momentum equation
Eq. (11) in [Adami2012]:
\[\frac{d \boldsymbol{v}_a}{dt} = -\sum_b m_b \alpha h_{ab} c_{ab} \frac{\boldsymbol{v}_{ab}\cdot \boldsymbol{r}_{ab}}{\rho_{ab}\left(r_{ab}^2 + \epsilon \right)}\nabla_a W_{ab}\]where
\[ \begin{align}\begin{aligned}\begin{split}\rho_{ab} = \frac{\rho_a + \rho_b}{2}\\\end{split}\\\begin{split}c_{ab} = \frac{c_a + c_b}{2}\\\end{split}\\h_{ab} = \frac{h_a + h_b}{2}\end{aligned}\end{align} \]Parameters:  alpha (float) – constant
 c0 (float) – speed of sound

class
pysph.sph.wc.transport_velocity.
MomentumEquationPressureGradient
(dest, sources, pb, gx=0.0, gy=0.0, gz=0.0, tdamp=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum equation for the Transport Velocity Formulation: Pressure
Eq. (8) in [Adami2013]:
\[\frac{d \boldsymbol{v}_a}{dt} = -\frac{1}{m_a}\sum_b (V_a^2 + V_b^2)\left[\bar{p}_{ab}\nabla_a W_{ab} \right]\]where
\[\bar{p}_{ab} = \frac{\rho_b p_a + \rho_a p_b}{\rho_a + \rho_b}\]Parameters:  pb (float) – background pressure
 gx (float) – Body force per unit mass along the xaxis
 gy (float) – Body force per unit mass along the yaxis
 gz (float) – Body force per unit mass along the zaxis
 tdamp (float) – damping time
Notes
This equation should have the destination as fluid and sources as fluid and boundary particles.
This function also computes the contribution to the background pressure and accelerations due to a body force or gravity.
The body forces are damped according to Eq. (13) in [Adami2012] to avoid instantaneous accelerations. By default, damping is neglected.

class
pysph.sph.wc.transport_velocity.
MomentumEquationViscosity
(dest, sources, nu)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum equation for the Transport Velocity Formulation: Viscosity
Eq. (8) in [Adami2013]:
\[\frac{d \boldsymbol{v}_a}{dt} = \frac{1}{m_a}\sum_b (V_a^2 + V_b^2)\left[ \bar{\eta}_{ab}\hat{r}_{ab}\cdot \nabla_a W_{ab} \frac{\boldsymbol{v}_{ab}}{|\boldsymbol{r}_{ab}|}\right]\]where
\[\bar{\eta}_{ab} = \frac{2\eta_a \eta_b}{\eta_a + \eta_b}\]Parameters: nu (float) – kinematic viscosity

class
pysph.sph.wc.transport_velocity.
SetWallVelocity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Extrapolating the fluid velocity on to the wall
Eq. (22) in [Adami2012]:
\[\tilde{\boldsymbol{v}}_a = \frac{\sum_b\boldsymbol{v}_b W_{ab}} {\sum_b W_{ab}}\]Notes
The destination particle array for this equation should define the filtered velocity variables \(uf, vf, wf\).
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.wc.transport_velocity.
SolidWallNoSlipBC
(dest, sources, nu)[source]¶ Bases:
pysph.sph.equation.Equation
Solid wall boundary condition [Adami2012]
This boundary condition is to be used with fixed ghost particles in SPH simulations and is formulated for the general case of moving boundaries.
The velocity and pressure of the fluid particles is extrapolated to the ghost particles and these values are used in the equations of motion.
Nopenetration:
Ghost particles participate in the continuity and state equations with fluid particles. This means as fluid particles approach the wall, the pressure of the ghost particles increases to generate a repulsion force that prevents particle penetration.
Noslip:
Extrapolation is used to set the dummy velocity of the ghost particles for viscous interaction. First, the smoothed velocity field of the fluid phase is extrapolated to the wall particles:
\[\tilde{v}_a = \frac{\sum_b v_b W_{ab}}{\sum_b W_{ab}}\]In the second step, for the viscous interaction in Eqs. (10) in [Adami2012] and Eq. (8) in [Adami2013], the velocity of the ghost particles is assigned as:
\[v_b = 2v_w - \tilde{v}_a,\]where \(v_w\) is the prescribed wall velocity and \(v_b\) is the ghost particle in the interaction.
Parameters: nu (float) – kinematic viscosity Notes
For this equation the destination particle array should be the fluid and the source should be ghost or boundary particles. The boundary particles must define a prescribed velocity \(u_0, v_0, w_0\)

class
pysph.sph.wc.transport_velocity.
SolidWallPressureBC
(dest, sources, rho0, p0, b=1.0, gx=0.0, gy=0.0, gz=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
Solid wall pressure boundary condition [Adami2012]
This boundary condition is to be used with fixed ghost particles in SPH simulations and is formulated for the general case of moving boundaries.
The velocity and pressure of the fluid particles is extrapolated to the ghost particles and these values are used in the equations of motion.
Pressure boundary condition:
The pressure of the ghost particle is also calculated from the fluid particle by interpolation using:
\[p_g = \frac{\sum_f p_f W_{gf} + (\boldsymbol{g} - \boldsymbol{a_g}) \cdot \sum_f \rho_f \boldsymbol{r}_{gf}W_{gf}}{\sum_f W_{gf}},\]where the subscripts g and f refer to the ghost and fluid particles respectively.
Density of the wall particle is then set using this pressure
\[\rho_w=\rho_0\left(\frac{p_w - \mathcal{X}}{p_0} + b\right)^{\frac{1}{\gamma}}\]Parameters:  rho0 (float) – reference density
 p0 (float) – reference pressure
 b (float) – constant (default 1.0)
 gx (float) – Body force per unit mass along the x-axis
 gy (float) – Body force per unit mass along the y-axis
 gz (float) – Body force per unit mass along the z-axis
Notes
For a two fluid system (boundary, fluid), this equation must be instantiated with boundary as the destination and fluid as the source.
The boundary particle array must additionally define a property \(wij\) for the denominator in Eq. (27) from [Adami2012]. This array sums the kernel terms from the ghost particle to the fluid particle.
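A 1D plain-Python sketch of this pressure extrapolation (illustrative only; `g_minus_ag` stands for the projection of \(\boldsymbol{g} - \boldsymbol{a}_g\) on the axis, and the kernel weights `w_gf` are assumed precomputed):

```python
def wall_pressure(p_f, rho_f, r_gf, w_gf, g_minus_ag=0.0):
    # p_g = (sum_f p_f W_gf + (g - a_g) . sum_f rho_f r_gf W_gf) / sum_f W_gf
    hydro = sum(p * w for p, w in zip(p_f, w_gf))
    body = g_minus_ag * sum(rho * r * w
                            for rho, r, w in zip(rho_f, r_gf, w_gf))
    return (hydro + body) / sum(w_gf)
```

Without body forces this reduces to a Shepard interpolation of the fluid pressure, so a uniform fluid pressure is reproduced exactly at the wall.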

class
pysph.sph.wc.transport_velocity.
StateEquation
(dest, sources, p0, rho0, b=1.0)[source]¶ Bases:
pysph.sph.equation.Equation
Generalized Weakly Compressible Equation of State
\[p_a = p_0\left[ \left(\frac{\rho}{\rho_0}\right)^\gamma - b \right] + \mathcal{X}\]Notes
This is the generalized Tait’s equation of state and the suggested values in [Adami2013] are \(\mathcal{X} = 0\), \(\gamma=1\) and \(b = 1\).
The reference pressure \(p_0\) is calculated from the artificial sound speed and reference density:
\[p_0 = \frac{c^2\rho_0}{\gamma}\]Parameters:  p0 (float) – reference pressure
 rho0 (float) – reference density
 b (float) – constant (default 1.0).
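The EOS and the reference-pressure relation can be sketched together as standalone functions (an illustration of the formulas above, not the PySPH class; the numeric values are arbitrary):

```python
def reference_pressure(c0, rho0, gamma):
    # p0 = c0^2 * rho0 / gamma
    return c0 * c0 * rho0 / gamma

def state_equation(rho, rho0, c0, gamma=1.0, b=1.0, chi=0.0):
    # p = p0 * ((rho / rho0)**gamma - b) + chi
    p0 = reference_pressure(c0, rho0, gamma)
    return p0 * ((rho / rho0) ** gamma - b) + chi
```

With the suggested values gamma = 1, b = 1 and chi = 0, the pressure is zero at the reference density and grows linearly with density.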

class
pysph.sph.wc.transport_velocity.
SummationDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Summation density with volume summation
In addition to the standard summation density, the number density for the particle is also computed. The number density is important for multiphase flows to define a local particle volume independent of the material density.
\[\rho_a = \sum_b m_b W_{ab}\]\[\mathcal{V}_a = \frac{1}{\sum_b W_{ab}}\]Notes
Note that in the pysph implementation, V is the inverse volume of a particle, i.e. the equation computes V as follows:
\[\mathcal{V}_a = \sum_b W_{ab}\]For this equation, the destination particle array must define the variable V for particle volume.
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays
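In plain Python the two sums look like this (an illustrative sketch with precomputed kernel values `W` for the neighbors; `V` is returned as the inverse volume, matching the note above):

```python
def summation_density(m, W):
    # rho_a = sum_b m_b W_ab ; V (as stored) = sum_b W_ab = 1 / volume
    rho = sum(m_b * w for m_b, w in zip(m, W))
    V_inv = sum(W)
    return rho, V_inv
```

The number density sum is independent of the particle masses, which is why it gives a particle volume that works across phases of very different material density.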

class
pysph.sph.wc.transport_velocity.
VolumeFromMassDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Set the inverse volume using mass density
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.wc.transport_velocity.
VolumeSummation
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Number density for volume computation
See SummationDensity
Note that the quantity V is really \(\sigma\) of the original paper, i.e. inverse of the particle volume.
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays
Generalized Transport Velocity Formulation¶
Some notes on the paper:
 In the viscosity term of equation (17) a factor of 2 is missing.
 A negative sign is missing from equation (22), i.e., either add a negative sign in equation (22) or at the integrator step, equation (25).
 The solid mechanics equations are not tested.
References
[ZhangHuAdams2017]  Chi Zhang, Xiangyu Y. Hu, Nikolaus A. Adams, “A generalized transport-velocity formulation for smoothed particle hydrodynamics”, Journal of Computational Physics 237 (2017), pp. 216–232. 

class
pysph.sph.wc.gtvf.
ContinuityEquationGTVF
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Evolution of density
From [ZhangHuAdams2017], equation (12),
\[\frac{\tilde{d} \rho_i}{dt} = \rho_i \sum_j \frac{m_j}{\rho_j} \nabla W_{ij} \cdot \tilde{\boldsymbol{v}}_{ij}\]Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.wc.gtvf.
CorrectDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Density correction
From [ZhangHuAdams2017], equation (13),
\[\rho_i = \frac{\sum_j m_j W_{ij}} {\min(1, \sum_j \frac{m_j}{\rho_j^{*}} W_{ij})}\]where,
\[\rho_j^{*} = \text{density before this correction is applied.}\]Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays
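A plain-Python sketch of this correction with precomputed kernel weights (illustrative only, not the PySPH implementation):

```python
def corrected_density(m, rho_star, W):
    # rho_i = sum_j m_j W_ij / min(1, sum_j (m_j / rho_j*) W_ij)
    num = sum(m_j * w for m_j, w in zip(m, W))
    den = min(1.0, sum(m_j / r_j * w
                       for m_j, r_j, w in zip(m, rho_star, W)))
    return num / den
```

Near a free surface the kernel support is only partially filled, so the denominator falls below one and the correction boosts the summation density back toward its bulk value; in the bulk the `min` clamps the denominator at one and the plain summation density is recovered.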

class
pysph.sph.wc.gtvf.
DeviatoricStressRate
(dest, sources, dim, G)[source]¶ Bases:
pysph.sph.equation.Equation
Stress rate for solids
From [ZhangHuAdams2017], equation (5),
\[\frac{d \boldsymbol{\sigma}'}{dt} = 2 G (\boldsymbol{\epsilon} - \frac{1}{3} \text{Tr}(\boldsymbol{\epsilon})\textbf{I}) + \boldsymbol{\sigma}' \cdot \boldsymbol{\Omega}^{T} + \boldsymbol{\Omega} \cdot \boldsymbol{\sigma}'\]where,
\[\boldsymbol{\Omega_{i/j}} = \frac{1}{2} \left(\nabla \otimes \boldsymbol{v}_{i/j} - (\nabla \otimes \boldsymbol{v}_{i/j})^{T}\right)\]\[\boldsymbol{\epsilon_{i/j}} = \frac{1}{2} \left(\nabla \otimes \boldsymbol{v}_{i/j} + (\nabla \otimes \boldsymbol{v}_{i/j})^{T}\right)\]see the class VelocityGradient for \(\nabla \otimes \boldsymbol{v}_i\)
Parameters:  dim (int) – Dimensionality of the problem.
 G (float) – value of shear modulus

class
pysph.sph.wc.gtvf.
GTVFIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.

one_timestep
(t, dt)[source]¶ User written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
 self.initialize()
 self.stage1(), self.stage2() etc. depending on the number of stages available.
 self.compute_accelerations(index=0, update_nnps=True)
 self.do_post_stage(stage_dt, stage_count_from_1)
 self.update_domain()
Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.


class
pysph.sph.wc.gtvf.
GTVFScheme
(fluids, solids, dim, rho0, c0, nu, h0, pref, gx=0.0, gy=0.0, gz=0.0, b=1.0, alpha=0.0)[source]¶ Bases:
pysph.sph.scheme.Scheme
Parameters:  fluids (list) – List of names of fluid particle arrays.
 solids (list) – List of names of solid particle arrays.
 dim (int) – Dimensionality of the problem.
 rho0 (float) – Reference density.
 c0 (float) – Reference speed of sound.
 nu (float) – Real viscosity of the fluid.
 h0 (float) – Reference smoothing length.
 pref (float) – reference pressure for rate of change of transport velocity.
 gx (float) – Body force acceleration components in x direction.
 gy (float) – Body force acceleration components in y direction.
 gz (float) – Body force acceleration components in z direction.
 b (float) – constant for the equation of state.

configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters:  kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
 integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
 extra_steppers (dict) – Additional integration stepper instances as a dict.
 **kw (extra arguments) – Any additional keyword args are passed to the solver instance.

class
pysph.sph.wc.gtvf.
GTVFStep
[source]¶

class
pysph.sph.wc.gtvf.
MomentumEquationArtificialStress
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum equation Artificial stress for solids
See the class MomentumEquationPressureGradient for details.
Parameters: dim (int) – Dimensionality of the problem.

class
pysph.sph.wc.gtvf.
MomentumEquationArtificialStressSolid
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum equation Artificial stress for solids
See the class MomentumEquationPressureGradient for details.
Parameters: dim (int) – Dimensionality of the problem.

class
pysph.sph.wc.gtvf.
MomentumEquationPressureGradient
(dest, sources, pref, gx=0.0, gy=0.0, gz=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum Equation
From [ZhangHuAdams2017], equation (17),
\[\frac{\tilde{d} \boldsymbol{v}_i}{dt} = -\sum_j m_j \nabla W_{ij} \cdot \left[\left(\frac{p_i}{\rho_i^2} + \frac{p_j}{\rho_j^2} \right)\textbf{I} - \left(\frac{\boldsymbol{A_i}}{\rho_i^2} + \frac{\boldsymbol{A_j}}{\rho_j^2} \right)\right] + \sum_j \frac{\eta_{ij}\boldsymbol{v}_{ij}}{\rho_i \rho_j r_{ij}} \nabla W_{ij} \cdot \boldsymbol{x}_{ij}\]where,
\[\boldsymbol{A_{i/j}} = \rho_{i/j} \boldsymbol{v}_{i/j} \otimes (\tilde{\boldsymbol{v}}_{i/j} - \boldsymbol{v}_{i/j})\]\[\eta_{ij} = \frac{2\eta_i \eta_j}{\eta_i + \eta_j}\]\[\eta_{i/j} = \rho_{i/j} \nu\]for solids, replace \(\boldsymbol{A}_{i/j}\) with \(\boldsymbol{\sigma}'_{i/j}\).
The rate of change of transport velocity is given by,
\[(\frac{d\boldsymbol{v}_i}{dt})_c = -p_i^0 \sum_j \frac{m_j} {\rho_i^2} \nabla \tilde{W}_{ij}\]where,
\[\tilde{W}_{ij} = W(\boldsymbol{x}_{ij}, 0.5 h_{ij})\]\[p_i^0 = \min(10p_i, p_{ref})\]Notes:
A negative sign in \((\frac{d\boldsymbol{v}_i}{dt})_c\) is missing in the paper [ZhangHuAdams2017].
Parameters:  pref (float) – reference pressure
 gx (float) – body force per unit mass along the x-axis
 gy (float) – body force per unit mass along the y-axis
 gz (float) – body force per unit mass along the z-axis

class
pysph.sph.wc.gtvf.
MomentumEquationViscosity
(dest, sources, nu)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum equation viscosity term
See the class MomentumEquationPressureGradient for details.
Notes:
A factor of ‘2’ is missing in the viscosity equation given by [ZhangHuAdams2017].
Parameters: nu (float) – viscosity of the fluid.

class
pysph.sph.wc.gtvf.
VelocityGradient
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
Gradient of velocity vector
\[(\nabla \otimes \tilde{\boldsymbol{v}})_i = \sum_j \frac{m_j} {\rho_j} \tilde{\boldsymbol{v}}_{ij} \otimes \nabla W_{ij}\]Parameters: dim (int) – Dimensionality of the problem.

class
pysph.sph.wc.density_correction.
MLSFirstOrder2D
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Moving Least Squares density reinitialization. This is a first-order density reinitialization.
\[W_{ab}^{MLS} = \beta\left(\mathbf{r}_{a}\right)\cdot\left( \mathbf{r}_a - \mathbf{r}_b\right)W_{ab}\]\[\beta\left(\mathbf{r}_{a}\right) = A^{-1} \left[1 \quad 0 \quad 0\right]^{T}\]where
\[A = \sum_{b}W_{ab}\tilde{A}\frac{m_{b}}{\rho_{b}}\]\[\tilde{A} = pp^{T}\]where
\[p = \left[1 \quad x_{a}-x_{b} \quad y_{a}-y_{b}\right]^{T}\]\[\rho_{a} = \sum_{b} m_{b}W_{ab}^{MLS}\]
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.wc.density_correction.
MLSFirstOrder3D
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.wc.density_correction.
ShepardFilter
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Shepard Filter density reinitialization. This is a zeroth-order density reinitialization.
\[\tilde{W_{ab}} = \frac{W_{ab}}{\sum_{b} W_{ab}\frac{m_{b}} {\rho_{b}}}\]\[\rho_{a} = \sum_{b} m_{b}\tilde{W_{ab}}\]
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays
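The filter amounts to renormalizing the kernel before re-summing the density; a plain-Python sketch with precomputed kernel weights (illustrative only, not the PySPH kernel machinery):

```python
def shepard_density(m, rho, W):
    # rho_a = sum_b m_b W_ab / sum_b (m_b / rho_b) W_ab
    norm = sum(m_b / r_b * w for m_b, r_b, w in zip(m, rho, W))
    return sum(m_b * w for m_b, w in zip(m, W)) / norm
```

For a uniform density field the filter returns that density exactly, whatever the kernel weights, which is the zeroth-order consistency it is designed to restore.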
Kernel Corrections¶
These are the equations for the kernel corrections that are mentioned in the paper by Bonet and Lok [BonetLok1999].
References
[BonetLok1999]  Bonet, J. and Lok T.S.L. (1999) Variational and Momentum Preservation Aspects of Smoothed Particle Hydrodynamic Formulations. 

class
pysph.sph.wc.kernel_correction.
GradientCorrection
(dest, sources, dim=2, tol=0.1)[source]¶ Bases:
pysph.sph.equation.Equation
Kernel Gradient Correction
From [BonetLok1999], equations (42) and (45)
\[\nabla \tilde{W}_{ab} = L_{a}\nabla W_{ab}\]\[L_{a} = \left(\sum \frac{m_{b}}{\rho_{b}} \nabla W_{ab} \mathbf{\otimes}x_{ba} \right)^{-1}\]
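In 1D the correction matrix reduces to a scalar, which makes its effect easy to see: the corrected SPH gradient of a linear field is exact regardless of how the particles are distributed. A self-contained sketch (illustrative; the Gaussian kernel is an arbitrary choice, not what PySPH uses internally):

```python
import math

def dw(xa, xb, h):
    # Derivative with respect to x_a of a 1D Gaussian kernel W(x_a - x_b).
    u = xa - xb
    w = math.exp(-(u / h) ** 2) / (math.sqrt(math.pi) * h)
    return -2.0 * u / (h * h) * w

def corrected_gradient(xa, fa, xs, fs, Vs, h=0.3):
    # Standard SPH gradient, then the scalar correction
    # L_a = (sum_b V_b dW_ab * x_ba)^{-1}.
    grad = sum(V * (f - fa) * dw(xa, x, h) for x, f, V in zip(xs, fs, Vs))
    L = 1.0 / sum(V * dw(xa, x, h) * (x - xa) for x, V in zip(xs, Vs))
    return L * grad
```

For f(x) = 3x + 1 sampled on irregular neighbors, the corrected gradient is 3 to machine precision, because the same moment that appears in the raw gradient is exactly what L inverts.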

class
pysph.sph.wc.kernel_correction.
GradientCorrectionPreStep
(dest, sources, dim=2)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.wc.kernel_correction.
KernelCorrection
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Kernel Correction
From [BonetLok1999], equation (53):
\[\mathbf{f}_{a} = \frac{\sum_{b}\frac{m_{b}}{\rho_{b}} \mathbf{f}_{b}W_{ab}}{\sum_{b}\frac{m_{b}}{\rho_{b}}W_{ab}}\]Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.wc.kernel_correction.
MixedGradientCorrection
(dest, sources, dim=2, tol=0.1)[source]¶ Bases:
pysph.sph.equation.Equation
Mixed Kernel Gradient Correction
This is as per [BonetLok1999]. See the MixedKernelCorrectionPreStep for the equations.

class
pysph.sph.wc.kernel_correction.
MixedKernelCorrectionPreStep
(dest, sources, dim=2)[source]¶ Bases:
pysph.sph.equation.Equation
Mixed Kernel Correction
From [BonetLok1999], equations (54), (57) and (58)
\[\tilde{W}_{ab} = \frac{W_{ab}}{\sum_{b} V_{b}W_{ab}}\]\[\nabla \tilde{W}_{ab} = L_{a}\nabla \bar{W}_{ab}\]where,
\[L_{a} = \left(\sum_{b} V_{b} \nabla \bar{W}_{ab} \mathbf{\otimes}x_{ba} \right)^{-1}\]\[\nabla \bar{W}_{ab} = \frac{\nabla W_{ab} - \gamma} {\sum_{b} V_{b}W_{ab}}\]\[\gamma = \frac{\sum_{b} V_{b}\nabla W_{ab}} {\sum_{b} V_{b}W_{ab}}\]
CRKSPH Corrections¶
These are equations for the basic kernel corrections in [CRKSPH2017].
References
[CRKSPH2017]  Nicholas Frontiere, Cody D. Raskin, J. Michael Owen “CRKSPH  A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme”, Journal of Computational Physics 332 (2017), pp. 160–209. 

class
pysph.sph.wc.crksph.
CRKSPH
(dest, sources, dim=2, tol=0.5)[source]¶ Bases:
pysph.sph.equation.Equation
Conservative Reproducing Kernel SPH
Equations from the paper [CRKSPH2017].
\[W_{ij}^{R} = A_{i}\left(1+B_{i}^{\alpha}x_{ij}^{\alpha} \right)W_{ij}\]\[\partial_{\gamma}W_{ij}^{R} = A_{i}\left(1+B_{i}^{\alpha} x_{ij}^{\alpha}\right)\partial_{\gamma}W_{ij} + \partial_{\gamma}A_{i}\left(1+B_{i}^{\alpha}x_{ij}^{\alpha} \right)W_{ij} + A_{i}\left(\partial_{\gamma}B_{i}^{\alpha} x_{ij}^{\alpha} + B_{i}^{\gamma}\right)W_{ij}\]\[\nabla\tilde{W}_{ij} = \frac{1}{2}\left(\nabla W_{ij}^{R} - \nabla W_{ji}^{R} \right)\]where,
\[A_{i} = \left[m_{0} - \left(m_{2}^{-1}\right)^{\alpha \beta} m_{1}^{\beta}m_{1}^{\alpha}\right]^{-1}\]\[B_{i}^{\alpha} = -\left(m_{2}^{-1}\right)^{\alpha \beta} m_{1}^{\beta}\]\[\partial_{\gamma}A_{i} = -A_{i}^{2}\left(\partial_{\gamma} m_{0} - \left(m_{2}^{-1}\right)^{\alpha \beta}\left( m_{1}^{\beta}\partial_{\gamma}m_{1}^{\alpha} + \partial_{\gamma}m_{1}^{\beta}m_{1}^{\alpha}\right) + \left(m_{2}^{-1}\right)^{\alpha \phi}\partial_{\gamma} m_{2}^{\phi \psi}\left(m_{2}^{-1}\right)^{\psi \beta} m_{1}^{\beta}m_{1}^{\alpha} \right)\]\[\partial_{\gamma}B_{i}^{\alpha} = -\left(m_{2}^{-1}\right)^ {\alpha \beta}\partial_{\gamma}m_{1}^{\beta} + \left(m_{2}^{-1}\right)^ {\alpha \phi}\partial_{\gamma}m_{2}^{\phi \psi}\left(m_{2}^ {-1}\right)^{\psi \beta}m_{1}^{\beta}\]\[m_{0} = \sum_{j}V_{j}W_{ij}\]\[m_{1}^{\alpha} = \sum_{j}V_{j}x_{ij}^{\alpha}W_{ij}\]\[m_{2}^{\alpha \beta} = \sum_{j}V_{j}x_{ij}^{\alpha} x_{ij}^{\beta}W_{ij}\]\[\partial_{\gamma}m_{0} = \sum_{j}V_{j}\partial_{\gamma} W_{ij}\]\[\partial_{\gamma}m_{1}^{\alpha} = \sum_{j}V_{j}\left[ x_{ij}^{\alpha}\partial_{\gamma}W_{ij}+\delta^ {\alpha \gamma}W_{ij} \right]\]\[\partial_{\gamma}m_{2}^{\alpha \beta} = \sum_{j}V_{j}\left[ x_{ij}^{\alpha}x_{ij}^{\beta}\partial_{\gamma}W_{ij} + \left(x_{ij}^{\alpha}\delta^{\beta \gamma} + x_{ij}^{\beta} \delta^{\alpha \gamma} \right)W_{ij} \right]\]Parameters:  dim (int) – Dimensionality of the problem.
 tol (float) – Tolerance value to decide between the standard and the corrected kernel

class
pysph.sph.wc.crksph.
CRKSPHIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.

one_timestep
(t, dt)[source]¶ User written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
 self.initialize()
 self.stage1(), self.stage2() etc. depending on the number of stages available.
 self.compute_accelerations(index=0, update_nnps=True)
 self.do_post_stage(stage_dt, stage_count_from_1)
 self.update_domain()
Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.


class
pysph.sph.wc.crksph.
CRKSPHPreStep
(dest, sources, dim=2)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.wc.crksph.
CRKSPHScheme
(fluids, dim, rho0, c0, nu, h0, p0, gx=0.0, gy=0.0, gz=0.0, cl=2, cq=1, gamma=7.0, eta_crit=0.3, eta_fold=0.2, tol=0.5, has_ghosts=False)[source]¶ Bases:
pysph.sph.scheme.Scheme
Parameters:  fluids (list) – a list with names of fluid particle arrays
 solids (list) – a list with names of solid (or boundary) particle arrays

configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters:  kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
 integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
 extra_steppers (dict) – Additional integration stepper instances as a dict.
 **kw (extra arguments) – Any additional keyword args are passed to the solver instance.

class
pysph.sph.wc.crksph.
CRKSPHSymmetric
(dest, sources, dim=2, tol=0.5)[source]¶ Bases:
pysph.sph.equation.Equation
Conservative Reproducing Kernel SPH
This is symmetric and will only work for particles of the same array.
Equations from the paper [CRKSPH2017].
\[W_{ij}^{R} = A_{i}\left(1+B_{i}^{\alpha}x_{ij}^{\alpha} \right)W_{ij}\]\[\partial_{\gamma}W_{ij}^{R} = A_{i}\left(1+B_{i}^{\alpha} x_{ij}^{\alpha}\right)\partial_{\gamma}W_{ij} + \partial_{\gamma}A_{i}\left(1+B_{i}^{\alpha}x_{ij}^{\alpha} \right)W_{ij} + A_{i}\left(\partial_{\gamma}B_{i}^{\alpha} x_{ij}^{\alpha} + B_{i}^{\gamma}\right)W_{ij}\]\[\nabla\tilde{W}_{ij} = \frac{1}{2}\left(\nabla W_{ij}^{R} - \nabla W_{ji}^{R} \right)\]where,
\[A_{i} = \left[m_{0} - \left(m_{2}^{-1}\right)^{\alpha \beta} m_{1}^{\beta}m_{1}^{\alpha}\right]^{-1}\]\[B_{i}^{\alpha} = -\left(m_{2}^{-1}\right)^{\alpha \beta} m_{1}^{\beta}\]\[\partial_{\gamma}A_{i} = -A_{i}^{2}\left(\partial_{\gamma} m_{0} - \left(m_{2}^{-1}\right)^{\alpha \beta}\left( m_{1}^{\beta}\partial_{\gamma}m_{1}^{\alpha} + \partial_{\gamma}m_{1}^{\beta}m_{1}^{\alpha}\right) + \left(m_{2}^{-1}\right)^{\alpha \phi}\partial_{\gamma} m_{2}^{\phi \psi}\left(m_{2}^{-1}\right)^{\psi \beta} m_{1}^{\beta}m_{1}^{\alpha} \right)\]\[\partial_{\gamma}B_{i}^{\alpha} = -\left(m_{2}^{-1}\right)^ {\alpha \beta}\partial_{\gamma}m_{1}^{\beta} + \left(m_{2}^{-1}\right)^ {\alpha \phi}\partial_{\gamma}m_{2}^{\phi \psi}\left(m_{2}^ {-1}\right)^{\psi \beta}m_{1}^{\beta}\]\[m_{0} = \sum_{j}V_{j}W_{ij}\]\[m_{1}^{\alpha} = \sum_{j}V_{j}x_{ij}^{\alpha}W_{ij}\]\[m_{2}^{\alpha \beta} = \sum_{j}V_{j}x_{ij}^{\alpha} x_{ij}^{\beta}W_{ij}\]\[\partial_{\gamma}m_{0} = \sum_{j}V_{j}\partial_{\gamma} W_{ij}\]\[\partial_{\gamma}m_{1}^{\alpha} = \sum_{j}V_{j}\left[ x_{ij}^{\alpha}\partial_{\gamma}W_{ij}+\delta^ {\alpha \gamma}W_{ij} \right]\]\[\partial_{\gamma}m_{2}^{\alpha \beta} = \sum_{j}V_{j}\left[ x_{ij}^{\alpha}x_{ij}^{\beta}\partial_{\gamma}W_{ij} + \left(x_{ij}^{\alpha}\delta^{\beta \gamma} + x_{ij}^{\beta} \delta^{\alpha \gamma} \right)W_{ij} \right]\]

class
pysph.sph.wc.crksph.
CRKSPHUpdateGhostProps
(dest, sources=None, dim=2)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.wc.crksph.
EnergyEquation
(dest, sources, dim, gamma, gx=0.0, gy=0.0, gz=0.0, cl=2, cq=1, eta_crit=0.5, eta_fold=0.2, tol=0.5)[source]¶ Bases:
pysph.sph.equation.Equation
Energy Equation
From [CRKSPH2017], equation (66):
\[\Delta u_{ij} = \frac{f_{ij}}{2}\left(v_j^{\alpha}(t) + v_j^{\alpha}(t + \Delta t) - v_i^{\alpha}(t) - v_i^{\alpha}(t + \Delta t)\right) \frac{Dv_{ij}^{\alpha}}{Dt}\]\[\begin{split}f_{ij} = \begin{cases} 1/2 &s_i - s_j = 0,\\ s_{\min} / (s_{\min} + s_{\max}) &\Delta u_{ij}\times(s_i - s_j) > 0\\ s_{\max} / (s_{\min} + s_{\max}) &\Delta u_{ij}\times(s_i - s_j) < 0\\ \end{cases}\end{split}\]\[s_{\min} = \min(s_i, s_j)\]\[s_{\max} = \max(s_i, s_j)\]\[s_{i/j} = \frac{p_{i/j}}{\rho_{i/j}^\gamma}\]see MomentumEquation for \(\frac{Dv_{ij}^{\alpha}}{Dt}\)

class
pysph.sph.wc.crksph.
MomentumEquation
(dest, sources, dim, gx=0.0, gy=0.0, gz=0.0, cl=2, cq=1, eta_crit=0.3, eta_fold=0.2, tol=0.5)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum Equation
From [CRKSPH2017], equation (64):
\[\frac{Dv_{i}^{\alpha}}{Dt} = \frac{1}{2 m_i}\sum_{j} V_i V_j (P_i + P_j + Q_i + Q_j) (\partial_{\alpha}W_{ij}^R - \partial_{\alpha} W_{ji}^R)\]where,
\[V_{i/j} = \text{dest/source particle number density}\]\[P_{i/j} = \text{dest/source particle pressure}\]\[Q_i = \rho_{i} (-C_{l} c_{i} \mu_{i} + C_{q} \mu_{i}^{2})\]\[\mu_i = \min \left(0, \frac{\hat{v}_{ij} \eta_{i}^{\alpha}}{\eta_{i}^{\alpha}\eta_{i}^{\alpha} + \epsilon^{2}}\right)\]\[\hat{v}_{ij}^{\alpha} = v_{i}^{\alpha} - v_{j}^{\alpha} - \frac{\phi_{ij}}{2}\left(\partial_{\beta} v_i^{\alpha} + \partial_{\beta}v_j^{\alpha}\right) x_{ij}^{\beta}\]\[\begin{split}\phi_{ij} = \max \left[0, \min \left[1, \frac{4r_{ij}}{(1 + r_{ij})^2}\right]\right] \times \begin{cases} \exp{\left[-\left((\eta_{ij} - \eta_{crit})/\eta_{fold}\right)^2\right]}, &\eta_{ij} < \eta_{crit} \\ 1, & \eta_{ij} \geq \eta_{crit} \end{cases}\end{split}\]\[\eta_{ij} = \min(\eta_i, \eta_j)\]\[\eta_{i/j} = (x_{ij}^{\alpha} x_{ij}^{\alpha})^{1/2} / h_{i/j}\]\[r_{ij} = \frac{\partial_{\beta} v_i^{\alpha} x_{ij}^{\alpha} x_{ij}^{\beta}}{\partial_{\beta} v_j^{\alpha}x_{ij}^{\alpha} x_{ij}^{\beta}}\]\[\partial_{\beta} v_i^{\alpha} = \sum_j V_j v_{ij}^{\alpha} \partial_{\beta} W_{ij}^R\]

class
pysph.sph.wc.crksph.
NumberDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Number Density
From [CRKSPH2017], equation (75):
\[V_{i}^{-1} = \sum_{j} W_{ij}\]Note that the quantity V is the inverse of the particle volume, so where it appears in the equations, use \(\frac{1}{V}\).
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.wc.crksph.
SpeedOfSound
(dest, sources=None, gamma=7.0)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.wc.crksph.
StateEquation
(dest, sources, gamma)[source]¶ Bases:
pysph.sph.equation.Equation
State Equation
State equation for ideal gas, from [CRKSPH2017] equation (77):
\[p_i = (\gamma - 1)\rho_{i} u_i\]where, \(u_i\) is the specific thermal energy.
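A one-line sketch of this equation of state (the numeric values are arbitrary):

```python
def ideal_gas_pressure(rho, u, gamma=1.4):
    # p = (gamma - 1) * rho * u, with u the specific thermal energy.
    return (gamma - 1.0) * rho * u
```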

class
pysph.sph.wc.crksph.
SummationDensityCRKSPH
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Summation Density CRKSPH
From [CRKSPH2017], equation (76):
\[\rho_{i} = \frac{\sum_j m_{ij} V_j W_{ij}^R} {\sum_{j} V_{j}^{2} W_{ij}^R}\]where,
\[\begin{split}m_{ij} = \begin{cases} m_j, &i \text{ and } j \text{ are the same material} \\ m_i, &i \text{ and } j \text{ are different materials} \end{cases}\end{split}\]Note that in this implementation \(m_{ij}\) is simply taken to be \(m_i\), since the mass remains constant throughout the simulation.
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.wc.crksph.
VelocityGradient
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
Velocity Gradient
From [CRKSPH2017], equation (74)
\[\partial_{\beta} v_i^{\alpha} = \sum_j V_j v_{ij}^{\alpha} \partial_{\beta} W_{ij}^R\]Parameters: dim (int) – Dimensionality of the Problem.
Predictive-Corrective Incompressible SPH (PCISPH)¶
References
[SolPaj2009]  B. Solenthaler, R. Pajarola, “Predictive-Corrective Incompressible SPH”, ACM Trans. Graph. 28 (2009), pp. 1–6. 

class
pysph.sph.wc.pcisph.
ComputePressure
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation
Compute Pressure
Compute pressure iteratively maintaining density within a given tolerance.
\[p_i += \delta \rho^{*}_{{err}_i}\]where,
\[\rho_{err_i} = \rho_i^{*} - \rho_0\]\[\delta = \frac{-1}{\beta \left(-\sum_j \nabla W_{ij} \cdot \sum_j \nabla W_{ij} - \sum_j \nabla W_{ij} \cdot \nabla W_{ij}\right)}\]

class
pysph.sph.wc.pcisph.
MomentumEquationPressureGradient
(dest, sources, rho0, tolerance, debug)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum Equation pressure gradient
Standard WCSPH pressure gradient,
\[\frac{d\mathbf{v}}{dt} = -\sum_j m_j \left(\frac{p_i}{\rho_i^2} + \frac{p_j}{\rho_j^2}\right) \nabla W(x_{ij}, h)\]

class
pysph.sph.wc.pcisph.
MomentumEquationViscosity
(dest, sources, nu=0.0, gx=0.0, gy=0.0, gz=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum Equation Viscosity
See “pysph.sph.wc.viscosity.LaminarViscocity”

class
pysph.sph.wc.pcisph.
PCISPHIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.

initial_acceleration
(t, dt)[source]¶ Compute the initial accelerations if needed before the iterations start.
The default implementation only does this for the first acceleration evaluator. So if you have multiple evaluators, you must override this method in a subclass.

one_timestep
(t, dt)[source]¶ User written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
 self.initialize()
 self.stage1(), self.stage2() etc. depending on the number of stages available.
 self.compute_accelerations(index=0, update_nnps=True)
 self.do_post_stage(stage_dt, stage_count_from_1)
 self.update_domain()
Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.


class
pysph.sph.wc.pcisph.
PCISPHScheme
(fluids, dim, rho0, nu, gx=0.0, gy=0.0, gz=0.0, tolerance=0.1, debug=False, show_itercount=False)[source]¶ Bases:
pysph.sph.scheme.Scheme

configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters:  kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
 integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
 extra_steppers (dict) – Additional integration stepper instances as a dict.
 **kw (extra arguments) – Any additional keyword args are passed to the solver instance.


class
pysph.sph.wc.pcisph.
PCISPHStep
(show_itercount=False)[source]¶

class
pysph.sph.wc.pcisph.
Predict
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Predict velocity and position
\[\mathbf{v}^{*}(t+1) = \mathbf{v}(t) + dt \left(\frac{d \mathbf{v}_{visc, g}(t)}{dt} + \frac{d \mathbf{v}_{p} (t)}{dt} \right)\]\[\mathbf{x}^{*}(t+1) = \mathbf{x}(t) + dt\ \mathbf{v}^{*}(t+1)\]Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays
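The predictor can be sketched per particle as plain Python (an illustration of the update above, scalar components only; the acceleration values are assumed given):

```python
def predict(v, x, dv_visc_g, dv_p, dt):
    # v*(t+1) = v(t) + dt * (dv_visc_g + dv_p)
    v_star = v + dt * (dv_visc_g + dv_p)
    # x*(t+1) = x(t) + dt * v*(t+1)
    x_star = x + dt * v_star
    return v_star, x_star
```

Positions are advanced with the predicted velocity, so the subsequent density prediction already reflects the candidate pressure forces; PCISPH then iterates pressure corrections until the density error falls below the tolerance.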
SPH Boundary Equations¶

class
pysph.sph.boundary_equations.
MonaghanBoundaryForce
(dest, sources, deltap)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.boundary_equations.
MonaghanKajtarBoundaryForce
(dest, sources, K=None, beta=None, h=None)[source]¶ Bases:
pysph.sph.equation.Equation
Basic Equations for Solid Mechanics¶
References
[Gray2001]  J. P. Gray et al., “SPH elastic dynamics”, Computer Methods in Applied Mechanics and Engineering, 190 (2001), pp 6641  6662. 

class
pysph.sph.solid_mech.basic.
ElasticSolidsScheme
(elastic_solids, solids, dim, artificial_stress_eps=0.3, xsph_eps=0.5, alpha=1.0, beta=1.0)[source]¶ Bases:
pysph.sph.scheme.Scheme

configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
Parameters:  kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
 integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
 extra_steppers (dict) – Additional integration stepper instances as a dict.
 **kw (extra arguments) – Any additional keyword args are passed to the solver instance.


class
pysph.sph.solid_mech.basic.
EnergyEquationWithStress
(dest, sources, alpha=1.0, beta=1.0, eta=0.01)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.solid_mech.basic.
HookesDeviatoricStressRate
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Rate of change of stress
\[\frac{dS^{ij}}{dt} = 2\mu\left(\epsilon^{ij} - \frac{1}{3}\delta^{ij} \epsilon^{ij}\right) + S^{ik}\Omega^{jk} + \Omega^{ik}S^{kj}\]where
\[\epsilon^{ij} = \frac{1}{2}\left(\frac{\partial v^i}{\partial x^j} + \frac{\partial v^j}{\partial x^i}\right)\]\[\Omega^{ij} = \frac{1}{2}\left(\frac{\partial v^i}{\partial x^j} - \frac{\partial v^j}{\partial x^i} \right)\]Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.solid_mech.basic.
IsothermalEOS
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Compute the pressure using the Isothermal equation of state:
\(p = p_0 + c_0^2(\rho - \rho_0)\)
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.solid_mech.basic.
MomentumEquationWithStress
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Momentum Equation with Artificial Stress
\[\frac{D\vec{v_a}^i}{Dt} = \sum_b m_b\left(\frac{\sigma_a^{ij}}{\rho_a^2} +\frac{\sigma_b^{ij}}{\rho_b^2} + R_{ab}^{ij}f^n \right)\nabla_a W_{ab}\]where
\[f_{ab} = \frac{W(r_{ab})}{W(\Delta p)}\]\[R_{ab}^{ij} = R_{a}^{ij} + R_{b}^{ij}\]Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.solid_mech.basic.
MonaghanArtificialStress
(dest, sources, eps=0.3)[source]¶ Bases:
pysph.sph.equation.Equation
Artificial stress to remove tensile instability
The dispersion relations in [Gray2001] are used to determine the different components of \(R\).
Angle of rotation for particle \(a\)
\[\tan{2 \theta_a} = \frac{2\sigma_a^{xy}}{\sigma_a^{xx}  \sigma_a^{yy}}\]In rotated frame, the new components of the stress tensor are
\[ \begin{align}\begin{aligned}\begin{split}\bar{\sigma}_a^{xx} = \cos^2{\theta_a} \sigma_a^{xx} + 2\sin{\theta_a} \cos{\theta_a}\sigma_a^{xy} + \sin^2{\theta_a}\sigma_a^{yy}\\\end{split}\\\bar{\sigma}_a^{yy} = \sin^2{\theta_a} \sigma_a^{xx} + 2\sin{\theta_a} \cos{\theta_a}\sigma_a^{xy} + \cos^2{\theta_a}\sigma_a^{yy}\end{aligned}\end{align} \]Components of \(R\) in rotated frame:
\[ \begin{align}\begin{aligned}\begin{split}\bar{R}_{a}^{xx}=\begin{cases}\epsilon\frac{\bar{\sigma}_{a}^{xx}} {\rho^{2}} & \bar{\sigma}_{a}^{xx}>0\\0 & \bar{\sigma}_{a}^{xx}\leq0 \end{cases}\\\end{split}\\\begin{split}\bar{R}_{a}^{yy}=\begin{cases}\epsilon\frac{\bar{\sigma}_{a}^{yy}} {\rho^{2}} & \bar{\sigma}_{a}^{yy}>0\\0 & \bar{\sigma}_{a}^{yy}\leq0 \end{cases}\end{split}\end{aligned}\end{align} \]Components of \(R\) in original frame:
\[ \begin{align}\begin{aligned}\begin{split}R_a^{xx} = \cos^2{\theta_a} \bar{R}_a^{xx} + \sin^2{\theta_a} \bar{R}_a^{yy}\\\end{split}\\\begin{split}R_a^{yy} = \sin^2{\theta_a} \bar{R}_a^{xx} + \cos^2{\theta_a} \bar{R}_a^{yy}\\\end{split}\\R_a^{xy} = \sin{\theta_a} \cos{\theta_a}\left(\bar{R}_a^{xx} - \bar{R}_a^{yy}\right)\end{aligned}\end{align} \]Parameters: eps (float) – constant
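The rotate, threshold, rotate-back procedure above can be sketched in plain Python. This is a minimal 2D illustration, not PySPH's implementation; the function name and calling convention are hypothetical:

```python
import math

def monaghan_artificial_stress_2d(sxx, sxy, syy, rho, eps=0.3):
    """Compute R^xx, R^xy, R^yy for one particle by rotating the stress
    to principal axes, penalizing tensile components, and rotating back."""
    # Angle that diagonalizes the 2D stress tensor.
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    c, s = math.cos(theta), math.sin(theta)
    # Principal (rotated-frame) stresses.
    sxx_bar = c * c * sxx + 2.0 * s * c * sxy + s * s * syy
    syy_bar = s * s * sxx - 2.0 * s * c * sxy + c * c * syy
    # Artificial stress acts only on tensile (positive) components.
    rxx_bar = -eps * sxx_bar / rho**2 if sxx_bar > 0 else 0.0
    ryy_bar = -eps * syy_bar / rho**2 if syy_bar > 0 else 0.0
    # Rotate back to the original frame.
    rxx = c * c * rxx_bar + s * s * ryy_bar
    ryy = s * s * rxx_bar + c * c * ryy_bar
    rxy = s * c * (rxx_bar - ryy_bar)
    return rxx, rxy, ryy
```

For a stress state that is already principal (e.g. tension along x, compression along y), only the tensile component picks up an artificial-stress contribution.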

pysph.sph.solid_mech.basic.
get_bulk_mod
(G, nu)[source]¶ Get the bulk modulus from shear modulus and Poisson ratio
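The conversion a helper like this encodes is the standard isotropic-elasticity identity \(K = 2G(1+\nu)/(3(1-2\nu))\). A sketch under that assumption (the function name below is illustrative; see the source for the exact form get_bulk_mod uses):

```python
def bulk_modulus(G, nu):
    """Bulk modulus from shear modulus G and Poisson ratio nu,
    via the standard isotropic elasticity identity."""
    return 2.0 * G * (1.0 + nu) / (3.0 * (1.0 - 2.0 * nu))

# As nu approaches 0.5 (incompressible limit) K diverges;
# for nu = 0.25, K = 5G/3.
K = bulk_modulus(1.0, 0.25)
```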

pysph.sph.solid_mech.basic.
get_particle_array_elastic_dynamics
(constants=None, **props)[source]¶ Return a particle array for the Standard SPH formulation of solids.
Parameters: constants (dict) – Dictionary of constants
Other Parameters: props (dict) – Additional keyword arguments are set as property arrays.
See also
get_particle_array()
Equations for the High Velocity Impact Problems¶

class
pysph.sph.solid_mech.hvi.
MieGruneisenEOS
(dest, sources, gamma, r0, c0, S)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.solid_mech.hvi.
StiffenedGasEOS
(dest, sources, gamma, r0, c0)[source]¶ Bases:
pysph.sph.equation.Equation
Stiffened-gas EOS from “A Free Lagrange Augmented Godunov Method for the Simulation of Elastic-Plastic Solids”, B. P. Howell and G. J. Ball, JCP (2002). http://dx.doi.org/10.1006/jcph.2001.6931

class
pysph.sph.solid_mech.hvi.
VonMisesPlasticity2D
(dest, sources, flow_stress)[source]¶ Bases:
pysph.sph.equation.Equation
Gas Dynamics¶
Basic equations for gas dynamics

class
pysph.sph.gas_dynamics.basic.
ADKEAccelerations
(dest, sources, alpha, beta, g1, g2, k, eps)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.gas_dynamics.basic.
ADKEUpdateGhostProps
(dest, sources=None, dim=2)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.gas_dynamics.basic.
IdealGasEOS
(dest, sources, gamma)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.gas_dynamics.basic.
MPMAccelerations
(dest, sources, beta=2.0, update_alpha1=False, update_alpha2=False, alpha1_min=0.1, alpha2_min=0.1, sigma=0.1)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.gas_dynamics.basic.
MPMUpdateGhostProps
(dest, sources=None, dim=2)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.gas_dynamics.basic.
Monaghan92Accelerations
(dest, sources, alpha=1.0, beta=2.0)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.gas_dynamics.basic.
ScaleSmoothingLength
(dest, sources, factor=2.0)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.gas_dynamics.basic.
SummationDensity
(dest, sources, dim, density_iterations=False, iterate_only_once=False, k=1.2, htol=1e-06)[source]¶ Bases:
pysph.sph.equation.Equation
Summation density with iterative solution of the smoothing lengths.
Parameters:
 density_iterations : bint
 Flag to indicate density iterations are required.
 iterate_only_once : bint
 Flag to indicate if only one iteration is required
 k : double
 Kernel scaling factor
 htol : double
 Iteration tolerance
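The coupled solve for the smoothing length and density described above can be illustrated with a toy 1D fixed-point loop, iterating \(h_a = k\,(m_a/\rho_a)^{1/d}\) against the summation density. This is a standalone sketch with a Gaussian kernel and hypothetical names, not the iteration PySPH performs inside the equation:

```python
import math

def gaussian_w(x, h):
    # Normalized 1D Gaussian kernel.
    q = x / h
    return math.exp(-q * q) / (h * math.sqrt(math.pi))

def solve_h_rho(xs, m, k=1.2, h0=0.15, tol=1e-10, max_iter=100):
    """Fixed-point solve of h_a = k * m / rho_a(h_a) in 1D
    (the d = 1 case of h = k (m / rho)^(1/d))."""
    n = len(xs)
    h = [h0] * n
    for _ in range(max_iter):
        max_change = 0.0
        for a in range(n):
            # Summation density with the current smoothing length.
            rho = sum(m * gaussian_w(xs[a] - xs[b], h[a]) for b in range(n))
            h_new = k * m / rho
            max_change = max(max_change, abs(h_new - h[a]))
            h[a] = h_new
        if max_change < tol:
            break
    return h

xs = [0.1 * i for i in range(21)]   # particles on [0, 2], spacing dx = 0.1
h = solve_h_rho(xs, m=0.1)          # m = dx * rho0 with rho0 = 1
```

For well-sampled interior particles the density converges to the reference value, so h settles near k times the particle spacing.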

class
pysph.sph.gas_dynamics.basic.
SummationDensityADKE
(dest, sources, k=1.0, eps=0.0)[source]¶ Bases:
pysph.sph.equation.Equation
References

class
pysph.sph.gas_dynamics.basic.
UpdateSmoothingLengthFromVolume
(dest, sources, dim, k=1.2)[source]¶ Bases:
pysph.sph.equation.Equation
Surface tension¶
Implementation of the equations used for surface tension modelling, for example in KHI simulations. The references are:
 M. Shadloo, M. Yildiz, “Numerical modelling of Kelvin-Helmholtz instability using smoothed particle hydrodynamics”, IJNME, 2011, 87, pp 988–1006 [SY11]
 Joseph P. Morris “Simulating surface tension with smoothed particle hydrodynamics”, JCP, 2000, 33, pp 333–353 [JM00]
 Adami et al. “A new surfacetension formulation for multiphase SPH using a reproducing divergence approximation”, JCP 2010, 229, pp 5011–5021 [A10]
 X. Y. Hu, N. A. Adams, “A multiphase SPH method for macroscopic and mesoscopic flows”, JCP 2006, 213, pp 844–861 [XA06]

class
pysph.sph.surface_tension.
AdamiColorGradient
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Gradient of color Eq. (14) in [A10]
\[\nabla c_a = \frac{1}{V_a}\sum_b \left[V_a^2 + V_b^2 \right]\tilde{c}_{ab}\nabla_a W_{ab}\,,\]where, the average \(\tilde{c}_{ab}\) is defined as
\[\tilde{c}_{ab} = \frac{\rho_b}{\rho_a + \rho_b}c_a + \frac{\rho_a}{\rho_a + \rho_b}c_b\]Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.surface_tension.
AdamiReproducingDivergence
(dest, sources, dim)[source]¶ Bases:
pysph.sph.equation.Equation
Reproducing divergence approximation Eq. (20) in [A10] to compute the curvature
\[\nabla \cdot \boldsymbol{\phi}_a = d\frac{\sum_b \boldsymbol{\phi}_{ab}\cdot \nabla_a W_{ab}V_b}{\sum_b\boldsymbol{x}_{ab}\cdot \nabla_a W_{ab} V_b}\]

class
pysph.sph.surface_tension.
CSFSurfaceTensionForce
(dest, sources, sigma=0.1)[source]¶ Bases:
pysph.sph.equation.Equation
Acceleration due to surface tension force Eq. (25) in [JM00]:
Note that as per Eq. (17) in [JM00], the unnormalized normal is basically the gradient of the color function. The acceleration term therefore depends on the gradient of the color field.

class
pysph.sph.surface_tension.
CSFSurfaceTensionForceAdami
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.surface_tension.
ColorGradientAdami
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.surface_tension.
ColorGradientUsingNumberDensity
(dest, sources, epsilon=1e-06)[source]¶ Bases:
pysph.sph.equation.Equation
Gradient of the color function using Eq. (13) of [SY11]:
\[\nabla C_a = \sum_b \frac{2\left(C_b - C_a\right)}{\psi_a + \psi_b} \nabla_{a} W_{ab}\]Using the gradient of the color function, the normal and discretized Dirac delta are calculated in the post loop.
Singularities are avoided, as recommended in [JM00] (see Eqs. (20) and (21)), using the parameter \(\epsilon\)
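The post-loop step mentioned above (interface normal and discretized Dirac delta from the color gradient, with the \(\epsilon\) guard) can be sketched as follows; the function name and return convention are hypothetical:

```python
import math

def normal_and_delta(cx, cy, h, epsilon=1e-6):
    """Normalize the color gradient to get the interface normal, keeping
    the normal only where it is reliable (|grad c| > epsilon / h, in the
    spirit of the [JM00] recommendation)."""
    mag = math.sqrt(cx * cx + cy * cy)
    if mag > epsilon / h:
        # delta^s, the discretized Dirac delta, is |grad c|.
        return (cx / mag, cy / mag), mag
    # Far from the interface: no normal, no surface force.
    return (0.0, 0.0), 0.0

(nx, ny), delta = normal_and_delta(3.0, 4.0, h=0.1)
```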

class
pysph.sph.surface_tension.
ConstructStressMatrix
(dest, sources, sigma, d=2)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.surface_tension.
InterfaceCurvatureFromDensity
(dest, sources, with_morris_correction=True)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.surface_tension.
InterfaceCurvatureFromNumberDensity
(dest, sources, with_morris_correction=True)[source]¶ Bases:
pysph.sph.equation.Equation
Interface curvature using number density. Eq. (15) in [SY11]:
\[\kappa_a = \sum_b \frac{2.0}{\psi_a + \psi_b} \left(\boldsymbol{n_a}  \boldsymbol{n_b}\right) \cdot \nabla_a W_{ab}\]

class
pysph.sph.surface_tension.
MomentumEquationPressureGradientAdami
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.surface_tension.
MomentumEquationPressureGradientMorris
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.surface_tension.
MomentumEquationViscosityAdami
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.surface_tension.
MomentumEquationViscosityMorris
(dest, sources, eta=0.01)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.surface_tension.
MorrisColorGradient
(dest, sources, epsilon=1e-06)[source]¶ Bases:
pysph.sph.equation.Equation
Gradient of the color function using Eq. (17) of [JM00]:
\[\nabla c_a = \sum_b \frac{m_b}{\rho_b}(c_b - c_a) \nabla_{a} W_{ab}\,,\]where a smoothed representation of the color is used in the equation. Using the gradient of the color function, the normal and discretized Dirac delta are calculated in the post loop.
Singularities are avoided, as recommended in [JM00] (see Eqs. (20) and (21)), using the parameter \(\epsilon\)

class
pysph.sph.surface_tension.
SY11ColorGradient
(dest, sources, epsilon=1e-06)[source]¶ Bases:
pysph.sph.equation.Equation
Gradient of the color function using Eq. (13) of [SY11]:
\[\nabla C_a = \sum_b \frac{2\left(C_b - C_a\right)}{\psi_a + \psi_b} \nabla_{a} W_{ab}\]Using the gradient of the color function, the normal and discretized Dirac delta are calculated in the post loop.

class
pysph.sph.surface_tension.
SY11DiracDelta
(dest, sources, epsilon=1e-06)[source]¶ Bases:
pysph.sph.equation.Equation
Discretized Dirac delta for the SY11 formulation, Eq. (14) in [SY11]
This is essentially the same as computing the color gradient, the only difference being that this might be called with a reduced smoothing length.
Note that the normals should be computed using the SY11ColorGradient equation. This function will effectively overwrite the color gradient.

class
pysph.sph.surface_tension.
ShadlooViscosity
(dest, sources, alpha)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.surface_tension.
ShadlooYildizSurfaceTensionForce
(dest, sources, sigma=0.1)[source]¶ Bases:
pysph.sph.equation.Equation
Acceleration due to surface tension force Eq. (7,9) in [SY11]:
where, \(\delta^s\) is the discretized dirac delta function, \(\boldsymbol{n}\) is the interface normal, \(\kappa\) is the discretized interface curvature and \(\sigma\) is the surface tension force constant.

class
pysph.sph.surface_tension.
SmoothedColor
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Smoothed color function. Eq. (17) in [JM00]
\[c_a = \sum_b \frac{m_b}{\rho_b} c_b^i W_{ab}\,,\]where \(c_b^i\) is the color index associated with a particle.
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.surface_tension.
SolidWallPressureBCnoDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.surface_tension.
SummationDensitySourceMass
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.surface_tension.
SurfaceForceAdami
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

pysph.sph.surface_tension.
get_surface_tension_equations
(fluids, solids, scheme, rho0, p0, c0, b, factor1, factor2, nu, sigma, d, epsilon, gamma, real=False)[source]¶ Returns the equations required for the multiphase formulation, given the fluid particle arrays, the solid particle arrays, the scheme to be used and other physical parameters. :param fluids: List of names of fluid particle arrays :type fluids: list :param solids: List of names of solid particle arrays :type solids: list :param scheme: The scheme with which the equations are to be set up.
 Supported schemes:
 1. TVF scheme with Morris’ surface tension. String to be used: “tvf”
 2. Adami’s surface tension implementation, which does not involve calculation of curvature. String to be used: “adami_stress”
 3. Adami’s surface tension implementation, which involves calculation of curvature. String to be used: “adami”
 4. Shadloo-Yildiz surface tension formulation. String to be used: “shadloo”
 5. Morris’ surface tension formulation. This is the default, used when none of the above strings is given as the scheme.
Parameters:  rho0 (float) – The reference density of the medium (currently, multiple reference densities for different particles are not supported)
 p0 (float) – The background pressure of the medium (currently, multiple background pressures are not supported)
 c0 (float) – The speed of sound of the medium (currently, multiple speeds of sound are not supported)
 b (float) – The b parameter of the generalized Tait equation of state; refer to the Tait equation’s documentation
 factor1 (float) – The factor for scaling the smoothing length when calculating the interface curvature in Shadloo’s scheme
 factor2 (float) – The factor for scaling the smoothing length back when calculating forces after the interface curvature in Shadloo’s scheme
 nu (float) – The kinematic viscosity of the medium
 sigma (float) – The surface tension of the system
 d (int) – The number of dimensions of the problem in Cartesian space
 real (bool) – Set this to False if the equations are to be evaluated for the ghost particles as well, else keep it True
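The scheme-selection behaviour documented above (any unrecognized string falls back to Morris' formulation) can be mirrored in a small sketch. The helper below is hypothetical and only illustrates the documented dispatch, not the function's internals:

```python
# The four scheme strings the function recognizes explicitly.
SUPPORTED_SCHEMES = {"tvf", "adami_stress", "adami", "shadloo"}

def resolve_scheme(scheme):
    """Return the scheme that will actually be used: any string not in
    the supported set falls back to Morris' formulation."""
    return scheme if scheme in SUPPORTED_SCHEMES else "morris"

chosen = resolve_scheme("shadloo")
fallback = resolve_scheme("anything-else")
```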
Implicit Incompressible SPH¶
The basic equations for the IISPH formulation of
M. Ihmsen, J. Cornelis, B. Solenthaler, C. Horvath, M. Teschner, “Implicit Incompressible SPH,” IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 3, pp. 426435, March 2014. http://dx.doi.org/10.1109/TVCG.2013.105

class
pysph.sph.iisph.
AdvectionAcceleration
(dest, sources, gx=0.0, gy=0.0, gz=0.0)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.iisph.
ComputeAII
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.iisph.
ComputeAIIBoundary
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation
This is important and not really discussed in the original IISPH paper.

class
pysph.sph.iisph.
ComputeDII
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.iisph.
ComputeDIIBoundary
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.iisph.
ComputeDIJPJ
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.iisph.
ComputeRhoAdvection
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.iisph.
ComputeRhoBoundary
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.iisph.
IISPHScheme
(fluids, solids, dim, rho0, nu=0.0, gx=0.0, gy=0.0, gz=0.0, omega=0.5, tolerance=0.01, debug=False, has_ghosts=False)[source]¶ Bases:
pysph.sph.scheme.Scheme
The IISPH scheme
Parameters:  fluids (list(str)) – List of names of fluid particle arrays.
 solids (list(str)) – List of names of solid particle arrays.
 dim (int) – Dimensionality of the problem.
 rho0 (float) – Density of fluid.
 nu (float) – Kinematic viscosity.
 gx, gy, gz (float) – Components of body acceleration (gravity, external forcing, etc.)
 omega (float) – Relaxation parameter for relaxed Jacobi iterations.
 tolerance (float) – Tolerance for the convergence of pressure iterations as a fraction.
 debug (bool) – Produce some debugging output on iterations.
 has_ghosts (bool) – The problem has ghost particles so add equations for those.
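The omega and tolerance parameters above control a relaxed Jacobi pressure solve. The idea can be sketched on a generic linear system Ap = b; this is a toy dense-matrix sketch, not the matrix-free, particle-based solve IISPH performs:

```python
def relaxed_jacobi(A, b, omega=0.5, tolerance=0.01, max_iter=200):
    """Relaxed Jacobi iteration:
    p_i <- (1 - omega) * p_i + omega * (b_i - sum_{j != i} A_ij p_j) / A_ii.
    Stops when the relative change drops below `tolerance`."""
    n = len(b)
    p = [0.0] * n
    for _ in range(max_iter):
        p_new = []
        for i in range(n):
            off = sum(A[i][j] * p[j] for j in range(n) if j != i)
            p_new.append((1.0 - omega) * p[i] + omega * (b[i] - off) / A[i][i])
        change = max(abs(pn - po) for pn, po in zip(p_new, p))
        scale = max(max(abs(v) for v in p_new), 1e-12)
        p = p_new
        if change / scale < tolerance:
            break
    return p
```

The relaxation (omega < 1) damps the update, which is what keeps the pressure iterations stable in IISPH; the tolerance is likewise a relative measure of convergence.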

configure_solver
(kernel=None, integrator_cls=None, extra_steppers=None, **kw)[source]¶ Configure the solver to be generated.
This is to be called before get_solver is called.
Parameters:  dim (int) – Number of dimensions.
 kernel (Kernel instance.) – Kernel to use, if none is passed a default one is used.
 integrator_cls (pysph.sph.integrator.Integrator) – Integrator class to use, use sensible default if none is passed.
 extra_steppers (dict) – Additional integration stepper instances as a dict.
 **kw (extra arguments) – Any additional keyword args are passed to the solver instance.

class
pysph.sph.iisph.
IISPHStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
A straightforward and simple integrator to be used for IISPH.

class
pysph.sph.iisph.
NormalizedSummationDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.iisph.
NumberDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.iisph.
PressureForce
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.iisph.
PressureForceBoundary
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.iisph.
PressureSolve
(dest, sources, rho0, omega=0.5, tolerance=0.01, debug=False)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.iisph.
PressureSolveBoundary
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.iisph.
SummationDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.iisph.
SummationDensityBoundary
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.iisph.
UpdateGhostPressure
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.iisph.
UpdateGhostProps
(dest, sources=None)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.iisph.
ViscosityAcceleration
(dest, sources, nu)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.iisph.
ViscosityAccelerationBoundary
(dest, sources, rho0, nu)[source]¶ Bases:
pysph.sph.equation.Equation
The acceleration on the fluid due to a boundary.
Rigid body motion¶
Rigid body related equations.

class
pysph.sph.rigid_body.
AkinciRigidFluidCoupling
(dest, sources, fluid_rho=1000)[source]¶ Bases:
pysph.sph.equation.Equation
Force between a solid sphere and an SPH fluid particle. This is implemented using Akinci’s force [1] with an additional force from the solid body’s pressure as implemented by Liu [2].
[1] “Versatile Rigid-Fluid Coupling for Incompressible SPH”
URL: https://graphics.ethz.ch/~sobarbar/papers/Sol12/Sol12.pdf
[2] “A 3D Simulation of a Moving Solid in Viscous Free-Surface Flows by Coupling SPH and DEM”
https://doi.org/10.1155/2017/3174904
 Note: the forces on both phases are added at once,
 so make sure this equation is applied only once for the pair of particle arrays.

class
pysph.sph.rigid_body.
BodyForce
(dest, sources, gx=0.0, gy=0.0, gz=0.0)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.rigid_body.
EulerStepRigidBody
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Fast but inaccurate integrator. Use this for testing

class
pysph.sph.rigid_body.
LiuFluidForce
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Force between a solid sphere and an SPH fluid particle. This is implemented using Akinci’s force [1] with an additional force from the solid body’s pressure as implemented by Liu [2].
[1] “Versatile Rigid-Fluid Coupling for Incompressible SPH”
URL: https://graphics.ethz.ch/~sobarbar/papers/Sol12/Sol12.pdf
[2] “A 3D Simulation of a Moving Solid in Viscous Free-Surface Flows by Coupling SPH and DEM”
https://doi.org/10.1155/2017/3174904
 Note: the forces on both phases are added at once,
 so make sure this equation is applied only once for the pair of particle arrays.

class
pysph.sph.rigid_body.
NumberDensity
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.rigid_body.
PressureRigidBody
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation
The pressure acceleration on the fluid/solid due to a boundary. Implemented from Akinci et al. http://dx.doi.org/10.1145/2185520.2185558
Use this with the fluid as a destination and body as source.

class
pysph.sph.rigid_body.
RK2StepRigidBody
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep

initialize
(d_idx, d_x, d_y, d_z, d_x0, d_y0, d_z0, d_omega, d_omega0, d_vc, d_vc0, d_num_body)[source]¶


class
pysph.sph.rigid_body.
RigidBodyCollision
(dest, sources, kn=1000.0, mu=0.5, en=0.8)[source]¶ Bases:
pysph.sph.equation.Equation
The force between two spheres is implemented using a DEM contact force law.
Refer to https://doi.org/10.1016/j.powtec.2011.09.019 for more information:
“Open-source MFIX-DEM software for gas–solids flows: Part I – Verification studies”.
Initialise the required coefficients for force calculation.
Keyword arguments: kn – normal spring stiffness (default 1000.0), mu – friction coefficient (default 0.5), en – coefficient of restitution (default 0.8).
Given these coefficients, tangential spring stiffness, normal and tangential damping coefficient are calculated by default.

class
pysph.sph.rigid_body.
RigidBodyForceGPUGems
(dest, sources, k=1.0, d=1.0, eta=1.0, kt=1.0)[source]¶ Bases:
pysph.sph.equation.Equation
This is inspired by http://http.developer.nvidia.com/GPUGems3/gpugems3_ch29.html and B. K. Mishra’s article on DEM, http://dx.doi.org/10.1016/S0301-7516(03)00032-2, “A review of computer simulation of tumbling mills by the discrete element method: Part I – contact mechanics”.
Note that d is a factor multiplied with the “h” of the particle.

class
pysph.sph.rigid_body.
RigidBodyMoments
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.rigid_body.
RigidBodyMotion
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.rigid_body.
RigidBodyWallCollision
(dest, sources, kn=1000.0, mu=0.5, en=0.8)[source]¶ Bases:
pysph.sph.equation.Equation
The force between a sphere and a wall is implemented using a DEM contact force law.
Refer to https://doi.org/10.1016/j.powtec.2011.09.019 for more information:
“Open-source MFIX-DEM software for gas–solids flows: Part I – Verification studies”.
Initialise the required coefficients for force calculation.
Keyword arguments: kn – normal spring stiffness (default 1000.0), mu – friction coefficient (default 0.5), en – coefficient of restitution (default 0.8).
Given these coefficients, tangential spring stiffness, normal and tangential damping coefficient are calculated by default.

class
pysph.sph.rigid_body.
SummationDensityBoundary
(dest, sources, fluid_rho=1000.0)[source]¶ Bases:
pysph.sph.equation.Equation
Equation to find the density of the fluid particle due to any boundary or a rigid body
\(\rho_a = \sum_b \rho_{\text{fluid}} V_b W_{ab}\)

class
pysph.sph.rigid_body.
SummationDensityRigidBody
(dest, sources, rho0)[source]¶ Bases:
pysph.sph.equation.Equation

class
pysph.sph.rigid_body.
ViscosityRigidBody
(dest, sources, rho0, nu)[source]¶ Bases:
pysph.sph.equation.Equation
The viscous acceleration on the fluid/solid due to a boundary. Implemented from Akinci et al. http://dx.doi.org/10.1145/2185520.2185558
Use this with the fluid as a destination and body as source.

pysph.sph.rigid_body.
get_alpha_dot
()[source]¶ Use sympy to perform most of the math and use the resulting formulae to calculate:
inv(I) (tau - w x (I w))
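The quantity being computed is the angular acceleration from Euler's rigid-body equations, \(\alpha = I^{-1}(\tau - \omega \times (I\omega))\). With a diagonal (principal-axes) inertia tensor this can be written out directly; the sketch below is illustrative only, not the sympy-generated code:

```python
def alpha_dot(I, w, tau):
    """Angular acceleration alpha = inv(I) (tau - w x (I w))
    for a diagonal inertia tensor I = (Ix, Iy, Iz)."""
    Ix, Iy, Iz = I
    wx, wy, wz = w
    # Angular momentum L = I w (diagonal inertia).
    Lx, Ly, Lz = Ix * wx, Iy * wy, Iz * wz
    # Gyroscopic term w x L, component-wise.
    cx = wy * Lz - wz * Ly
    cy = wz * Lx - wx * Lz
    cz = wx * Ly - wy * Lx
    # Inverting a diagonal tensor is just component-wise division.
    return ((tau[0] - cx) / Ix, (tau[1] - cy) / Iy, (tau[2] - cz) / Iz)
```

For a spherically symmetric body (Ix = Iy = Iz) the gyroscopic term vanishes and the angular acceleration is simply the torque divided by the moment of inertia.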
Miscellaneous¶
Functions for advection¶

class
pysph.sph.misc.advection.
Advect
(dest, sources)[source]¶ Bases:
pysph.sph.equation.Equation
Parameters:  dest (str) – name of the destination particle array
 sources (list of str or None) – names of the source particle arrays

class
pysph.sph.misc.advection.
MixingVelocityUpdate
(dest, sources, T)[source]¶ Bases:
pysph.sph.equation.Equation
Functions to reduce array data in serial or parallel.

pysph.base.reduce_array.
dummy_reduce_array
(array, op='sum')[source]¶ Simply returns the array for the serial case.

pysph.base.reduce_array.
mpi_reduce_array
(array, op='sum')[source]¶ Reduce an array given an array and a suitable reduction operation.
Currently, only ‘sum’, ‘max’, ‘min’ and ‘prod’ are supported.
Parameters
 array: numpy.ndarray: Any numpy array (1D).
 op: str: reduction operation, one of (‘sum’, ‘prod’, ‘min’, ‘max’)

pysph.base.reduce_array.
parallel_reduce_array
(array, op='sum')¶ Reduce an array given an array and a suitable reduction operation.
Currently, only ‘sum’, ‘max’, ‘min’ and ‘prod’ are supported.
Parameters
 array: numpy.ndarray: Any numpy array (1D).
 op: str: reduction operation, one of (‘sum’, ‘prod’, ‘min’, ‘max’)

pysph.base.reduce_array.
serial_reduce_array
(array, op='sum')[source]¶ Reduce an array given an array and a suitable reduction operation.
Currently, only ‘sum’, ‘max’, ‘min’ and ‘prod’ are supported.
Parameters
 array: numpy.ndarray: Any numpy array (1D).
 op: str: reduction operation, one of (‘sum’, ‘prod’, ‘min’, ‘max’)
Group of equations¶

class
pysph.sph.equation.
Group
(equations, real=True, update_nnps=False, iterate=False, max_iterations=1, min_iterations=0, pre=None, post=None)[source]¶ Bases:
object
A group of equations.
This class provides some support for the code generation for the collection of equations.
Constructor.
Parameters:  equations (list) – a list of equation objects.
 real (bool) – specifies if only nonremote/nonghost particles should be operated on.
 update_nnps (bool) – specifies if the neighbors should be recomputed locally after this group
 iterate (bool) – specifies if the group should continue iterating until each equation’s “converged()” methods returns with a positive value.
 max_iterations (int) – specifies the maximum number of times this group should be iterated.
 min_iterations (int) – specifies the minimum number of times this group should be iterated.
 pre (callable) – A callable which is passed no arguments that is called before anything in the group is executed.
 post (callable) – A callable which is passed no arguments that is called after the group is completed.
Notes
When running simulations in parallel, one should typically run the summation density over all particles (both local and remote) in each processor. This is because we must update the pressure/density of the remote neighbors in the current processor. Otherwise the results can be incorrect with the remote particles having an older density. This is also the case for the TaitEOS. In these cases the group that computes the equation should set real to False.

class
pysph.sph.equation.
MultiStageEquations
(groups)[source]¶ Bases:
object
A class that allows a user to specify different equations for different stages.
The object doesn’t do much, except contain the different collections of equations.
Parameters: groups (list/tuple) – A list/tuple of list of groups/equations, one for each stage.
Integrator related modules¶
Basic code for the templated integrators.
Currently we only support two-step integrators.
These classes are used to generate the code for the actual integrators from the sph_eval module.

class
pysph.sph.integrator.
EPECIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
Predictor-corrector integrators can have two modes of operation.
In the EvaluatePredictEvaluateCorrect (EPEC) mode, the system is advanced using:
\[ \begin{align}\begin{aligned}F(y^n) \rightarrow \text{Evaluate}\\y^{n+\frac{1}{2}} = y^n + \frac{\Delta t}{2} F(y^n) \rightarrow \text{Predict}\\F(y^{n+\frac{1}{2}}) \rightarrow \text{Evaluate}\\y^{n+1} = y^n + \Delta t F(y^{n+\frac{1}{2}}) \rightarrow \text{Correct}\end{aligned}\end{align} \]Notes:
The Evaluate stage of the integrator forces a function evaluation. Therefore, the PEC mode is much faster but relies on old accelerations for the Prediction stage.
In the EPEC mode, the final corrector can be modified to:
\[y^{n+1} = y^n + \frac{\Delta t}{2}\left( F(y^n) + F(y^{n+\frac{1}{2}}) \right)\]
This would require additional storage for the accelerations.
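The EPEC sequence above is an explicit midpoint method; on a scalar ODE it looks like the following. This is a standalone sketch, not PySPH's generated integrator code:

```python
import math

def epec_integrate(F, y0, t_end, dt):
    """Evaluate-Predict-Evaluate-Correct loop for dy/dt = F(y)."""
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        a0 = F(y)                      # Evaluate
        y_half = y + 0.5 * dt * a0     # Predict (half step)
        a_half = F(y_half)             # Evaluate again
        y = y + dt * a_half            # Correct (full step)
        t += dt
    return y

# dy/dt = -y from y(0) = 1; the exact answer at t = 1 is exp(-1).
y1 = epec_integrate(lambda y: -y, 1.0, 1.0, 0.001)
```

Because the corrector uses a fresh mid-step evaluation, the method is second-order accurate, unlike the cheaper PEC mode that reuses the previous step's accelerations.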
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.

one_timestep
(t, dt)[source]¶ User-written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
 self.initialize()
 self.stage1(), self.stage2() etc. depending on the number of stages available.
 self.compute_accelerations(index=0, update_nnps=True)
 self.do_post_stage(stage_dt, stage_count_from_1)
 self.update_domain()
Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.

class
pysph.sph.integrator.
EulerIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.

one_timestep
(t, dt)[source]¶ User-written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
 self.initialize()
 self.stage1(), self.stage2() etc. depending on the number of stages available.
 self.compute_accelerations(index=0, update_nnps=True)
 self.do_post_stage(stage_dt, stage_count_from_1)
 self.update_domain()
Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.


class
pysph.sph.integrator.
Integrator
(**kw)[source]¶ Bases:
object
Generic class for multi-step integrators in PySPH for a system of ODEs of the form \(\frac{dy}{dt} = F(y)\).
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.

compute_time_step
(dt, cfl)[source]¶ If there are any adaptive timestep constraints, the appropriate timestep is returned, else None is returned.

initial_acceleration
(t, dt)[source]¶ Compute the initial accelerations if needed before the iterations start.
The default implementation only does this for the first acceleration evaluator. So if you have multiple evaluators, you must override this method in a subclass.

one_timestep
(t, dt)[source]¶ User-written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
t and dt are passed.
the following methods are available:
 self.initialize()
 self.stage1(), self.stage2() etc. depending on the number of stages available.
 self.compute_accelerations(index=0, update_nnps=True)
 self.do_post_stage(stage_dt, stage_count_from_1)
 self.update_domain()
Please see any of the concrete implementations of the Integrator class to study. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.

set_acceleration_evals
(a_evals)[source]¶ Set the acceleration evaluators.
This must be done before the integrator is used.
If you are using the SPHCompiler, it automatically calls this method.

set_compiled_object
(c_integrator)[source]¶ Set the highperformance compiled object to call internally.

set_post_stage_callback
(callback)[source]¶ This callback is called when the particles are moved, i.e. when one stage of the integration is done.
This callback is passed the current time value, the timestep and the stage.
The current time value is t + stage_dt; for example, this would be 0.5*dt for a two-stage predictor-corrector integrator.
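A minimal sketch of the expected callback signature; the function name post_stage and the recording list are hypothetical, chosen for this example:

```python
# Hypothetical callback: records (current_time, stage) for each stage.
stages_seen = []

def post_stage(current_time, dt, stage):
    # current_time is t + stage_dt for the stage just completed.
    stages_seen.append((current_time, stage))

# With a real integrator one would register it via:
#   integrator.set_post_stage_callback(post_stage)
# Here we call it directly just to show the signature.
post_stage(0.0005, 0.001, 1)   # after stage 1 of a two-stage scheme
post_stage(0.001, 0.001, 2)    # after stage 2
```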

step
(time, dt)[source]¶ This function is called by the solver.
To implement the integration step please override the
one_timestep
method.

update_domain
()[source]¶ Update the domain of the simulation.
This is to be called when particles move so that the ghost particles (periodicity, mirror boundary conditions) can be reset. It also recalculates the appropriate cell size based on the particle kernel radius, h. Call this explicitly when desired; usually this is done when the particles are moved or h is changed.
The integrator should explicitly call this when needed in the one_timestep method.


class
pysph.sph.integrator.
LeapFrogIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.PECIntegrator
A leapfrog integrator.
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.

one_timestep
(t, dt)[source]¶ User-written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
- t and dt are passed.
- the following methods are available:
  - self.initialize()
  - self.stage1(), self.stage2(), etc., depending on the number of stages available.
  - self.compute_accelerations(index=0, update_nnps=True)
  - self.do_post_stage(stage_dt, stage_count_from_1)
  - self.update_domain()
Please see any of the concrete implementations of the Integrator class for examples. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.


class
pysph.sph.integrator.
PECIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
In the Predict-Evaluate-Correct (PEC) mode, the system is advanced using:
\[\begin{aligned}y^{n+\frac{1}{2}} &= y^n + \frac{\Delta t}{2}F(y^{n-\frac{1}{2}}) && \rightarrow \text{Predict}\\&F(y^{n+\frac{1}{2}}) && \rightarrow \text{Evaluate}\\y^{n+1} &= y^n + \Delta t\,F(y^{n+\frac{1}{2}}) && \rightarrow \text{Correct}\end{aligned}\]
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
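The PEC update can be illustrated on a scalar ODE in plain Python; this is a stand-in test problem, not PySPH code. Note how the predict step reuses F(y^{n-1/2}) from the previous evaluate, with the first value bootstrapped as initial_acceleration would do:

```python
import math

# Stand-in scalar test problem: dy/dt = F(y) with F(y) = -y,
# exact solution y(t) = exp(-t).
def F(y):
    return -y

y, dt = 1.0, 0.01
F_half = F(y)   # bootstrap F(y^{-1/2}) before iterations start
for _ in range(100):
    y_half = y + 0.5 * dt * F_half   # predict:  y^{n+1/2}
    F_half = F(y_half)               # evaluate: F(y^{n+1/2})
    y = y + dt * F_half              # correct:  y^{n+1}
# after t = 1.0, y is close to math.exp(-1)
```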

one_timestep
(t, dt)[source]¶ User-written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
- t and dt are passed.
- the following methods are available:
  - self.initialize()
  - self.stage1(), self.stage2(), etc., depending on the number of stages available.
  - self.compute_accelerations(index=0, update_nnps=True)
  - self.do_post_stage(stage_dt, stage_count_from_1)
  - self.update_domain()
Please see any of the concrete implementations of the Integrator class for examples. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.


class
pysph.sph.integrator.
PEFRLIntegrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
A Position-Extended Forest-Ruth-Like (PEFRL) integrator [Omelyan2002].
References
[Omelyan2002] I.M. Omelyan, I.M. Mryglod and R. Folk, “Optimized Forest-Ruth- and Suzuki-like algorithms for integration of motion in many-body systems”, Computer Physics Communications 146, 188 (2002). http://arxiv.org/abs/cond-mat/0110585
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
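A plain-Python sketch of one PEFRL step, assuming the update sequence and coefficient values published in [Omelyan2002]; this is an illustration on a scalar harmonic oscillator, not the PySPH implementation:

```python
# PEFRL coefficients from [Omelyan2002] (assumed values).
XI = 0.1786178958448091
LAMBDA = -0.2123418310626054
CHI = -0.06626458266981849

def pefrl_step(x, v, h, a):
    # One position-extended Forest-Ruth-like step for
    # dx/dt = v, dv/dt = a(x).
    x += XI * h * v
    v += (1.0 - 2.0 * LAMBDA) * (h / 2.0) * a(x)
    x += CHI * h * v
    v += LAMBDA * h * a(x)
    x += (1.0 - 2.0 * (CHI + XI)) * h * v
    v += LAMBDA * h * a(x)
    x += CHI * h * v
    v += (1.0 - 2.0 * LAMBDA) * (h / 2.0) * a(x)
    x += XI * h * v
    return x, v

# Harmonic oscillator a(x) = -x as a stand-in test problem.
x, v = 1.0, 0.0
for _ in range(100):
    x, v = pefrl_step(x, v, 0.1, lambda s: -s)
energy = 0.5 * (x * x + v * v)   # stays very close to 0.5
```

Being a fourth-order symplectic scheme, it conserves the oscillator energy far better than a second-order stepper at the same time step.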

one_timestep
(t, dt)[source]¶ User-written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
- t and dt are passed.
- the following methods are available:
  - self.initialize()
  - self.stage1(), self.stage2(), etc., depending on the number of stages available.
  - self.compute_accelerations(index=0, update_nnps=True)
  - self.do_post_stage(stage_dt, stage_count_from_1)
  - self.update_domain()
Please see any of the concrete implementations of the Integrator class for examples. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.


class
pysph.sph.integrator.
TVDRK3Integrator
(**kw)[source]¶ Bases:
pysph.sph.integrator.Integrator
In the TVD-RK3 integrator, the system is advanced using:
\[\begin{aligned}y^{n+\frac{1}{3}} &= y^n + \Delta t\,F(y^n)\\y^{n+\frac{2}{3}} &= \frac{3}{4}y^n + \frac{1}{4}\left(y^{n+\frac{1}{3}} + \Delta t\,F(y^{n+\frac{1}{3}})\right)\\y^{n+1} &= \frac{1}{3}y^n + \frac{2}{3}\left(y^{n+\frac{2}{3}} + \Delta t\,F(y^{n+\frac{2}{3}})\right)\end{aligned}\]
Pass fluid names and suitable IntegratorStep instances.
For example:
>>> integrator = Integrator(fluid=WCSPHStep(), solid=WCSPHStep())
where “fluid” and “solid” are the names of the particle arrays.
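The three-stage update above can be checked on a scalar test problem in plain Python; this is a sketch, not PySPH code:

```python
import math

# One TVD-RK3 step for a scalar ODE dy/dt = F(y), following the
# three stages written above.
def tvd_rk3_step(y, dt, F):
    y1 = y + dt * F(y)                                  # y^{n+1/3}
    y2 = 0.75 * y + 0.25 * (y1 + dt * F(y1))            # y^{n+2/3}
    return y / 3.0 + (2.0 / 3.0) * (y2 + dt * F(y2))    # y^{n+1}

# Stand-in test problem F(y) = -y, exact solution y(t) = exp(-t).
y, dt = 1.0, 0.1
for _ in range(10):
    y = tvd_rk3_step(y, dt, lambda s: -s)
# after t = 1.0, y is close to math.exp(-1)
```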

one_timestep
(t, dt)[source]¶ User-written function that actually does one timestep.
This function is used in the high-performance Cython implementation. The assumptions one may make are the following:
- t and dt are passed.
- the following methods are available:
  - self.initialize()
  - self.stage1(), self.stage2(), etc., depending on the number of stages available.
  - self.compute_accelerations(index=0, update_nnps=True)
  - self.do_post_stage(stage_dt, stage_count_from_1)
  - self.update_domain()
Please see any of the concrete implementations of the Integrator class for examples. By default the Integrator implements a predict-evaluate-correct method, the same as PECIntegrator.

Integrator steps for different schemes.
Implement as many stages as needed.

class
pysph.sph.integrator_step.
ADKEStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Predictor-corrector integrator for gas dynamics (ADKE).

initialize
(d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e, d_e0, d_rho, d_rho0)[source]¶


class
pysph.sph.integrator_step.
AdamiVerletStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Verlet time integration described in “A generalized wall boundary condition for smoothed particle hydrodynamics”, 2012, JCP, 231, pp. 7057–7075.
This integrator can operate in either PEC mode or in EPEC mode as described in the paper.

class
pysph.sph.integrator_step.
EulerStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Fast but inaccurate integrator. Use this for testing.

class
pysph.sph.integrator_step.
GasDFluidStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Predictor-corrector integrator for gas dynamics.

initialize
(d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_h, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_e, d_e0, d_h0, d_converged, d_omega, d_rho, d_rho0, d_alpha1, d_alpha2, d_alpha10, d_alpha20)[source]¶


class
pysph.sph.integrator_step.
InletOutletStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
A trivial integrator for the inlet/outlet particles.

class
pysph.sph.integrator_step.
IntegratorStep
[source]¶ Bases:
object
Subclass this and implement the methods initialize, stage1, etc. Use the same conventions as the equations.

class
pysph.sph.integrator_step.
LeapFrogStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Using this stepper with XSPH as implemented in pysph.base.basic_equations.XSPHCorrection is not directly possible and requires a nicer implementation where the correction alone is added to ax, ay, az.

class
pysph.sph.integrator_step.
OneStageRigidBodyStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Simple one-stage rigid-body motion.

class
pysph.sph.integrator_step.
PEFRLStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Using this stepper with XSPH as implemented in pysph.base.basic_equations.XSPHCorrection is not directly possible and requires a nicer implementation where the correction alone is added to ax, ay, az.
stage2
(d_idx, d_x, d_y, d_z, d_u, d_au, d_v, d_av, d_w, d_aw, d_ax, d_ay, d_az, d_rho, d_arho, d_e, d_ae, dt=0.0)[source]¶

stage3
(d_idx, d_x, d_y, d_z, d_u, d_au, d_v, d_av, d_w, d_aw, d_ax, d_ay, d_az, d_rho, d_arho, d_e, d_ae, dt=0.0)[source]¶


class
pysph.sph.integrator_step.
SolidMechStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Predictor-corrector integrator for solid mechanics problems.

initialize
(d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_s00, d_s01, d_s02, d_s11, d_s12, d_s22, d_s000, d_s010, d_s020, d_s110, d_s120, d_s220, d_e0, d_e)[source]¶

stage1
(d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av, d_aw, d_ax, d_ay, d_az, d_arho, d_e, d_e0, d_ae, d_s00, d_s01, d_s02, d_s11, d_s12, d_s22, d_s000, d_s010, d_s020, d_s110, d_s120, d_s220, d_as00, d_as01, d_as02, d_as11, d_as12, d_as22, dt)[source]¶

stage2
(d_idx, d_x0, d_y0, d_z0, d_x, d_y, d_z, d_u0, d_v0, d_w0, d_u, d_v, d_w, d_rho0, d_rho, d_au, d_av, d_aw, d_ax, d_ay, d_az, d_arho, d_e, d_ae, d_e0, d_s00, d_s01, d_s02, d_s11, d_s12, d_s22, d_s000, d_s010, d_s020, d_s110, d_s120, d_s220, d_as00, d_as01, d_as02, d_as11, d_as12, d_as22, dt)[source]¶


class
pysph.sph.integrator_step.
TransportVelocityStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Integrator defined in ‘A transport velocity formulation for smoothed particle hydrodynamics’, 2013, JCP, 241, pp 292–307
For a predictor-corrector style of integrator, this integrator should operate only in PEC mode.

class
pysph.sph.integrator_step.
TwoStageRigidBodyStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Simple rigid-body motion.
At each stage of the integrator, the prescribed velocity and accelerations are incremented by dt/2.
Note that the time-centered velocity is used for updating the particle positions. This ensures exact motion for a constant acceleration.
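The exact-motion claim can be verified with a small plain-Python sketch; this is a generic two-stage update under constant acceleration, not the class's actual code:

```python
# Generic sketch: advancing positions with the time-centered velocity
# reproduces x = 0.5*a*t**2 exactly for constant acceleration a.
a, dt = 9.81, 0.1
x, v = 0.0, 0.0
for _ in range(10):
    v_half = v + 0.5 * dt * a   # stage 1: velocity at the step midpoint
    x = x + dt * v_half         # position update with centered velocity
    v = v + dt * a              # stage 2: full-step velocity
# after t = 1.0, x equals 0.5 * a (up to floating point)
```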

class
pysph.sph.integrator_step.
VelocityVerletSymplecticWCSPHStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Another symplectic second-order integrator described in the review paper by Monaghan:
J. Monaghan, “Smoothed Particle Hydrodynamics”, Reports on Progress in Physics, 2005, 68, pp 1703–1759 [JM05]
Kick-drift-kick form of the Verlet integrator.
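A generic kick-drift-kick step can be sketched as follows; this is a stand-in illustration on a harmonic oscillator, not the PySPH stepper:

```python
# Kick-drift-kick (velocity-Verlet) step for dx/dt = v, dv/dt = a(x).
def kdk_step(x, v, dt, a):
    v = v + 0.5 * dt * a(x)   # half kick
    x = x + dt * v            # drift with the half-step velocity
    v = v + 0.5 * dt * a(x)   # half kick
    return x, v

# Harmonic oscillator a(x) = -x as a stand-in test problem.
x, v = 1.0, 0.0
for _ in range(100):
    x, v = kdk_step(x, v, 0.1, lambda s: -s)
energy = 0.5 * (x * x + v * v)   # remains near 0.5 (symplectic)
```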

class
pysph.sph.integrator_step.
VerletSymplecticWCSPHStep
[source]¶ Bases:
pysph.sph.integrator_step.IntegratorStep
Symplectic second-order integrator described in the review paper by Monaghan:
J. Monaghan, “Smoothed Particle Hydrodynamics”, Reports on Progress in Physics, 2005, 68, pp 1703–1759 [JM05]
Notes:
This integrator should run in PEC mode since in the first stage, the positions are updated using the current velocity. The accelerations are then computed to advance to the full time step values.
This version of the integrator does not update the density. That is, the summation density is used instead of the continuity equation.

stage1
(d_idx, d_x, d_y, d_
