Developer Guide

This document covers the development of IPS itself. If you want to develop drivers and components for IPS simulations, see The IPS for Driver and Component Developers.


You can report bugs (including security bugs) using GitHub issues.

Alternatively, the developers can be contacted via discussions.

Change requests can be made using GitHub pull requests.

Getting and installing IPS from source code

To get started, you first need to obtain the source code. Installing in editable mode is recommended; see Installing IPS from source.

Development environment

IPS-framework doesn’t have any required dependencies. It has one optional dependency, Dask, which enables Dask to be used for task pool scheduling; see submit_tasks().

IPS-framework will work with Python versions ≥ 3.6. It is tested to work with Dask and distributed ≥ 2.5.2 but may work with earlier versions.

IPS-framework will work on Linux and macOS. It won’t work on Windows directly but will work in the Windows Subsystem for Linux.

Running the tests requires pytest, pytest-cov and psutil. Dask/distributed and mpirun/mpi4py are optional dependencies, but are needed to run the full test suite.

It is recommended that you use conda, but you can also install the dependencies using system packages or from PyPI in a virtual environment.


To create a Conda environment with all testing dependencies run:

conda create -n ips python=3.8 pytest pytest-cov psutil dask mpi4py sphinx
conda activate ips

Code review expectations

Code will need to conform to the style as enforced by flake8 and should not introduce any new warnings or errors from the static analysis, see Static Analysis.

All new features should have accompanying tests that aim for complete code coverage of the changes, see Testing.

All new functionality should have complete docstrings. If appropriate, further documentation or usage examples should be added; see Documentation.


Running Tests

The pytest framework is used for finding and executing tests in IPS-framework.

To run the tests:

python -m pytest

To run the tests with code coverage, install pytest-cov and run

python -m pytest --cov

and the output will look like

----------- coverage: platform linux, python 3.7.8-final-0 -----------
Name                                    Stmts   Miss  Cover
ipsframework/                   11      0   100%
ipsframework/                62     10    84%
ipsframework/                 105     19    82%
ipsframework/         105     25    76%
ipsframework/      510    103    80%
ipsframework/       29      1    97%
ipsframework/                72     15    79%
ipsframework/                       3      0   100%
ipsframework/              137     53    61%
ipsframework/         118     49    58%
ipsframework/                       360     51    86%
ipsframework/              61      2    97%
ipsframework/                 92      8    91%
ipsframework/                43      7    84%
ipsframework/                    73     26    64%
ipsframework/                   58      0   100%
ipsframework/            193     31    84%
ipsframework/               18      4    78%
ipsframework/              205     36    82%
ipsframework/            304     59    81%
ipsframework/           340     69    80%
ipsframework/      88     31    65%
ipsframework/                   41      2    95%
ipsframework/                 1200    234    80%
ipsframework/               322     74    77%
ipsframework/               59      5    92%
TOTAL                                    4609    914    80%

You can then also run python -m coverage report -m to show exactly which lines are missing test coverage.

Cori-only tests

There are some tests that only run on Cori at NERSC; these are not run as part of the CI and must be run manually. To run them, add the --runcori option to pytest. These include tests for the shifter functionality that is Cori specific.

An example batch script for running the unit tests is:

#!/bin/bash
#SBATCH -p debug
#SBATCH --nodes=1
#SBATCH --tasks-per-node=1
#SBATCH --cpus-per-task=32
#SBATCH -t 00:10:00
#SBATCH -C haswell
#SBATCH -J pytest
#SBATCH -e pytest.err
#SBATCH -o pytest.out
#SBATCH --image=continuumio/anaconda3:2020.11
module load python/3.8-anaconda-2020.11
python -m pytest --runcori

Then check the output in pytest.out to see that all the tests passed.

Writing Tests

Tests should be added to the tests directory. Components written for use in testing should go into tests/components, and any executables should go into tests/bin.
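A minimal test follows the usual pytest conventions: a file named test_*.py containing test_* functions with plain assert statements. The names below are illustrative, not taken from the IPS test suite:

```python
# tests/test_example.py -- illustrative only; the function under
# test here is a stand-in, not part of IPS-framework

def normalize(values):
    """Scale a list of numbers so they sum to 1."""
    total = sum(values)
    return [v / total for v in values]


def test_normalize():
    result = normalize([1, 1, 2])
    assert sum(result) == 1
    assert result == [0.25, 0.25, 0.5]
```

pytest discovers this file automatically when run from the repository root, so no registration step is needed.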

Continuous Integration (CI)

GitHub Actions is used for CI and runs on all pull requests and branches, including when a pull request is merged into main. Static analysis checks and the test suite are run, and code coverage is reported to Codecov.

Static Analysis

The following static analysis tools are run as part of CI:

  • flake8 - Style guide enforcement
  • pylint - Code analysis
  • bandit - Find common security issues
  • codespell - Check code for common misspellings

The configuration of these tools can be found in setup.cfg.


The test suite runs on Linux and macOS with Python versions from 3.6 up to 3.9. It is also tested with three different versions of Dask: 2.5.2, 2.30.0 and the most recent version. The 2.5.2 and 2.30.0 versions of Dask were chosen to match what is available on Cori at NERSC in the modules python/3.7-anaconda-2019.10 and python/3.8-anaconda-2020.11.

The test suite also runs as part of the CI on Windows using WSL (Ubuntu 20.04), using the default system Python version.


Documentation

Sphinx is used to generate the documentation for IPS. The docs are found in the doc directory, and docstrings from the source code can be included in the documentation. The documentation can be built by running make html within the doc directory; the output will go to doc/_build/html.

The docs are automatically built and deployed by Read the Docs when changes are merged into main. You can see the status of the docs build on Read the Docs.

Release process

We have no set release schedule and will create minor (add functionality in a backwards compatible manner) and patch (bug fixes) releases as needed following Semantic Versioning.

The deployment to PyPI will happen automatically by a GitHub Actions workflow whenever a tag is created.
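For example, a maintainer might publish a release by tagging the commit and pushing the tag (the version number below is illustrative; match the tag format of previous releases):

```shell
# pushing the tag is what triggers the PyPI deployment workflow
git tag v1.2.3
git push origin v1.2.3
```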

Release notes should be added to

We will publish release candidate versions for any major or minor release before the full release to allow feedback from users. Patch versions will not normally have a release candidate.

Before a release is finalized the Cori only tests should be run.