PyCoTools

PyCoTools is a Python package developed as an alternative interface into COPASI, simulation software for modelling biochemical systems. The PyCoTools paper describes the intentions and functionality of PyCoTools in detail. There are some important differences between the version of PyCoTools described in the publication and the current version. The first is that PyCoTools is now a Python 3 only package; if you are using Python 2.7 you should create a virtual Python 3.6 environment using conda or virtualenv (my preference is conda). The other major difference is the interface to COPASI's parameter estimation task, which is described in the tutorials and examples.

Installation

First make sure you use a Python 3.6 environment.

Warning

Using Python 3.7 or 3.8 will not work at this time due to dependency issues (which are unfortunately out of my control).

Then use:

$ pip install pycotools3

Remember to activate your Python 3.6 environment first if you need to.

To install from source:

$ git clone https://github.com/CiaranWelsh/pycotools3.git
$ cd pycotools3
$ python setup.py install

The procedure is the same on Linux, macOS and Windows.

Troubleshooting

### PyCoTools3 will not install

PyCoTools3 is only supported on Python 3, up to Python 3.6. If you are using Python 2.7 or Python 3.7+, please create a new conda Python 3.6 environment:

$ conda create -n py36 python=3.6
$ conda activate py36
$ pip install pycotools3

The same commands should work cross platform.
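If you are unsure whether the installation worked, a quick check (this minimal sketch just imports the package and prints where it was installed) is:

import pycotools3
print(pycotools3.__file__)  # should print the installed package location without raising ImportError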

### You get errors when trying to build a model using pycotools3.model.loada()

Make sure you have installed COPASI (http://copasi.org/Download/) and added the COPASI/bin directory to the PATH environment variable:

  • On Linux: https://unix.stackexchange.com/questions/26047/how-to-correctly-add-a-path-to-path
  • On Windows: https://stackoverflow.com/questions/9546324/adding-directory-to-path-environment-variable-in-windows
  • On Mac: https://stackoverflow.com/questions/7703041/editing-path-variable-on-mac

Documentation

This is a guide to PyCoTools version >2.0.1.

Getting started

PyCoTools only provides an alternative interface into some of COPASI's tasks, so if you are unfamiliar with COPASI it is a good idea to become acquainted with it before proceeding. As much as possible, arguments to PyCoTools functions follow the corresponding option in the COPASI user interface.

In addition to COPASI, PyCoTools depends on tellurium which is a Python package for modelling biological systems. While tellurium and COPASI have some of the same features, generally they are complementary and productivity is enhanced by using both together.

More specifically, tellurium uses antimony strings to define a model, which is then converted into SBML. PyCoTools provides the model.BuildAntimony class, a wrapper around this tellurium feature that creates a Copasi model and parses it into a PyCoTools model.Model.

Since antimony is described elsewhere we will focus here on using antimony to build a copasi model.

Build a model with antimony

[2]:
import site, os
from pycotools3 import model

working_directory = os.path.abspath('') #create a base directory to work from
copasi_filename = os.path.join(working_directory, 'NegativeFeedbackModel.cps') #create a string to a copasi file on system
antimony_string =  '''
    model negative_feedback()
        // define compartments
        compartment cell = 1.0
        //define species
        var A in cell
        var B in cell
        //define some global parameter for use in reactions
        vAProd = 0.1
        kADeg  = 0.2
        kBProd = 0.3
        kBDeg  = 0.4
        //define initial conditions
        A      = 0
        B      = 0
        //define reactions
        AProd:  => A; cell*vAProd
        ADeg: A =>  ; cell*kADeg*A*B
        BProd:  => B; cell*kBProd*A
        BDeg: B =>  ; cell*kBDeg*B
    end
    '''
negative_feedback = model.loada(antimony_string, copasi_filename) # load the model
print(negative_feedback)
assert os.path.isfile(copasi_filename) #check that the model exists
Model(name=negative_feedback, time_unit=s, volume_unit=l, quantity_unit=mol)

Create an antimony string from an existing model

The Copasi user interface is an excellent way of constructing a model, and it is easy to convert such a model into an antimony string that can be pasted into a document.

[4]:
print(negative_feedback.to_antimony())
// Created by libAntimony v2.9.4
function Constant_flux__irreversible(v)
  v;
end

function Function_for_ADeg(A, B, kADeg)
  kADeg*A*B;
end

function Function_for_BProd(A, kBProd)
  kBProd*A;
end


model *negative_feedback()

  // Compartments and Species:
  compartment cell;
  species A in cell, B in cell;

  // Reactions:
  AProd:  => A; cell*Constant_flux__irreversible(vAProd);
  ADeg: A => ; cell*Function_for_ADeg(A, B, kADeg);
  BProd:  => B; cell*Function_for_BProd(A, kBProd);
  BDeg: B => ; cell*kBDeg*B;

  // Species initializations:
  A = 0;
  B = 0;

  // Compartment initializations:
  cell = 1;

  // Variable initializations:
  vAProd = 0.1;
  kADeg = 0.2;
  kBProd = 0.3;
  kBDeg = 0.4;

  // Other declarations:
  const cell, vAProd, kADeg, kBProd, kBDeg;
end

One paradigm of model development is to use antimony to 'hard code' permanent changes to the model and the Copasi user interface for experimental changes. The Model.open() method is useful for this paradigm as it opens the model with whatever configurations have been defined.

[5]:
negative_feedback.open()

Simulate a time course

Since we have used an antimony string, we can simulate this model with either tellurium or Copasi. Simulating with tellurium uses a library called roadrunner which is described in detail elsewhere. To run a simulation with Copasi we need to configure the time course task, make the task executable (i.e. tick the check box in the top right of the time course task) and run the simulation with CopasiSE. This is all taken care of by the tasks.TimeCourse class.

[6]:
from pycotools3 import tasks
time_course = tasks.TimeCourse(negative_feedback, end=100, step_size=1, intervals=100)
time_course
[6]:
<pycotools3.tasks.TimeCourse at 0x1ff411e5588>

However, a more convenient interface is provided by the model.simulate method, a wrapper around tasks.TimeCourse that additionally parses the resulting data from file and returns a pandas.DataFrame.

[7]:
from pycotools3 import tasks
fname = os.path.join(os.path.abspath(''), 'timecourse.txt')
sim_data = negative_feedback.simulate(0, 100, 1, report_name=fname) ##start, end, by
sim_data.head()
[7]:
A B
Time
0 0.000000 0.000000
1 0.099932 0.013181
2 0.199023 0.046643
3 0.295526 0.093275
4 0.387233 0.147810

The results are saved in a file defined by the report_name option, which defaults to timecourse.txt in the same directory as the copasi model.
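For example, the raw report file can also be read back manually with pandas (a minimal sketch using the fname defined above; COPASI reports are tab-separated):

import pandas
raw = pandas.read_csv(fname, sep='\t')  # read the raw report written by CopasiSE
raw.head()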

Visualise a time course

PyCoTools also provides facilities for visualising simulation output. To plot a time course, pass the tasks.TimeCourse object to viz.PlotTimeCourse.

[8]:
from pycotools3 import viz
viz.PlotTimeCourse(time_course, savefig=True)
[8]:
<pycotools3.viz.PlotTimeCourse at 0x1ff411e53c8>
[Output: time course plots]

More information about running time courses with PyCoTools and Copasi can be found in the time course tutorial.

Run Parameter Estimation

The following configures a regular COPASI parameter estimation (context='s') on all global and initial concentration parameters (parameters='gm') using the genetic algorithm.

[15]:
from pycotools3 import tasks, viz

with tasks.ParameterEstimation.Context(negative_feedback, fname, context='s', parameters='gm') as context:
    context.set('randomize_start_values', True)
    context.set('method', 'genetic_algorithm')
    context.set('population_size', 50)
    context.set('number_of_generations', 300)
    context.set('run_mode', True) ##defaults to False
    context.set('pe_number', 2) ## number of repeat items in scan task
    context.set('copy_number', 2) ## number of times to copy model
    config = context.get_config()

pe = tasks.ParameterEstimation(config)
data = viz.Parse(pe)
print(data)
{'NegativeFeedbackModel':           A         B       RSS     kADeg     kBDeg    kBProd    vAProd
0  0.000004  0.000321  0.000348  0.198104  0.404187  0.298985  0.099467
1  0.000397  0.000009  0.000358  0.201986  0.396722  0.296337  0.100254
2  0.002442  0.000002  0.000399  0.197253  0.403480  0.299229  0.099694
3  0.110722  0.004461  0.002555  0.198449  0.402032  0.297208  0.097885}

pycotools3 supports the configuration of a range of parameter estimation setups. You can also check out the parameter estimation tutorial.

Tutorials

In this section of the documentation I provide detailed explanations of how PyCoTools works, with examples. The tutorials are split into the sections linked below.

Running Time-Courses

Copasi enables users to simulate their model with a range of different solvers.

Create a model

Here we do our imports and create the model we use for the tutorial.

[1]:
import os
import site
from pycotools3 import model, tasks, viz

working_directory = r'/home/ncw135/Documents/pycotools3/docs/source/Tutorials/timecourse_tutorial'
if not os.path.isdir(working_directory):
    os.makedirs(working_directory)

copasi_file = os.path.join(working_directory, 'michaelis_menten.cps')

if os.path.isfile(copasi_file):
    os.remove(copasi_file)

antimony_string = """
model michaelis_menten()
    compartment cell = 1.0
    var E in cell
    var S in cell
    var ES in cell
    var P in cell

    kf = 0.1
    kb = 1
    kcat = 0.3
    E = 75
    S = 1000

    SBindE: S + E => ES; kf*S*E
    ESUnbind: ES => S + E; kb*ES
    ProdForm: ES => P + E; kcat*ES
end
"""

with model.BuildAntimony(copasi_file) as loader:
    mm = loader.load(antimony_string)


mm
The BuildAntimony context manager is deprecated and will be removed in future versions. Please use model.loada instead.
[1]:
Model(name=michaelis_menten, time_unit=s, volume_unit=l, quantity_unit=mol)
Deterministic Time Course
[2]:
TC = tasks.TimeCourse(
    mm, report_name='mm_simulation.txt',
    end=1000, intervals=50, step_size=20
)

## check it worked
os.path.isfile(TC.report_name)

import pandas
df = pandas.read_csv(TC.report_name, sep='\t')
df.head()
[2]:
Time [E] [S] [ES] [P] Values[kf] Values[kb] Values[kcat]
0 0 75.00000 1000.000000 1.000000 1.000 0.1 1 0.3
1 20 2.00306 479.797000 73.996900 448.206 0.1 1 0.3
2 40 12.80010 62.104800 63.199900 876.695 0.1 1 0.3
3 60 75.13160 0.119830 0.868371 1001.010 0.1 1 0.3
4 80 75.99560 0.000604 0.004434 1001.990 0.1 1 0.3

When running a time course, you should ensure that the number of intervals times the step size equals the end time, i.e.:

$$intervals \cdot step\_size = end$$
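For instance, a small sketch of keeping these numbers consistent (the variable names here are purely illustrative):

end = 1000
step_size = 20
intervals = int(end / step_size)  # 50, so intervals * step_size == end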

The default behaviour is to output all model variables as they can easily be filtered later in the Python environment. However, the metabolites, global_quantities and local_parameters arguments exist to filter the variables that are simulated prior to running the time course.

[3]:
TC=tasks.TimeCourse(
    mm,
    report_name='mm_timecourse.txt',
    end=100,
    intervals=50,
    step_size=2,
    global_quantities = ['kf'], ##recall that antimony puts all parameters as global quantities
)

## check that only kf appears as a global quantity
pandas.read_csv(TC.report_name,sep='\t').head()
[3]:
Time [E] [S] [ES] [P] Values[kf]
0 0 75.00000 1000.000 1.0000 1.0000 0.1
1 2 1.10437 881.378 74.8956 45.7263 0.1
2 4 1.16265 836.516 74.8373 90.6465 0.1
3 6 1.22737 791.698 74.7726 135.5300 0.1
4 8 1.29962 746.928 74.7004 180.3720 0.1

An alternative and more convenient interface into the tasks.TimeCourse class is using the model.Model.simulate method. This is simply a wrapper and is used like so.

[4]:
data = mm.simulate(start=0, stop=100, by=0.1)
data.head()
[4]:
E ES P S
Time
0.0 75.00000 1.0000 1.00000 1000.000
0.1 1.05984 74.9402 3.02124 924.039
0.2 1.05665 74.9433 5.26956 921.787
0.3 1.05920 74.9408 7.51783 919.541
0.4 1.06175 74.9382 9.76601 917.296

This mechanism of running a time course has the advantage that 1) pycotools parses the data back into Python as a pandas.DataFrame and 2) the column names are automatically pruned to remove the COPASI reference information.

Visualization
[5]:
viz.PlotTimeCourse(TC)
[5]:
<pycotools3.viz.PlotTimeCourse at 0x16c80294358>
[Output: time course plots, one figure per variable]

It is also possible to plot these on the same axis by specifying separate=False.

[6]:
viz.PlotTimeCourse(TC, separate=False)
[6]:
<pycotools3.viz.PlotTimeCourse at 0x16ca06f91d0>
[Output: time course plot with all variables on one axis]

or to choose the y variables,

[7]:
viz.PlotTimeCourse(TC, y=['E', 'S'], separate=False)
viz.PlotTimeCourse(TC, y=['ES', 'P'], separate=False)
[7]:
<pycotools3.viz.PlotTimeCourse at 0x16ca0760748>
[Output: plots of E and S, and of ES and P]
Plot in Phase Space

Choose the x variable to plot in phase space. The same arguments as above apply.

[8]:
viz.PlotTimeCourse(TC, x='E', y='ES', separate=True)
[8]:
<pycotools3.viz.PlotTimeCourse at 0x16ca07c1b38>
[Output: phase space plot of ES against E]
Save to file

Use the savefig=True option to save the figure to file and give an argument to the filename option to choose the filename.

[9]:
viz.PlotTimeCourse(TC, y=['S', 'P'], separate=False, savefig=True, filename='MyTimeCourse.png')
[9]:
<pycotools3.viz.PlotTimeCourse at 0x16ca0847cf8>
[Output: plot of S and P saved as MyTimeCourse.png]
Alternative Solvers

Valid arguments for the method argument of TimeCourse are:

  • deterministic
  • direct
  • gibson_bruck
  • tau_leap
  • adaptive_tau_leap
  • hybrid_runge_kutta
  • hybrid_lsoda

Copasi also includes a hybrid_rk45 solver but this is not yet supported by PyCoTools. To use an alternative solver, pass the name of the solver to the method argument.
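For example, the deterministic time course from above can be rerun with the solver named explicitly (a sketch assuming the `mm` model and imports from earlier in this tutorial; the report filename is arbitrary). Note that the stochastic solvers additionally require the model to be in particle numbers, as shown in the next section.

TC_deterministic = tasks.TimeCourse(
    mm, report_name='mm_simulation_deterministic.txt',
    end=1000, intervals=50, step_size=20,
    method='deterministic'  # any of the method names listed above can be passed here
)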

Stochastic MM

To demonstrate simulation of stochastic time courses we build another Michaelis-Menten type reaction scheme. We need to do this so that we can set unit substance = item, in other words change the model to particle numbers; otherwise there are too many molecules in the system to simulate the model stochastically.

[10]:
copasi_file = os.path.join(working_directory, 'michaelis_menten_stochastic.cps')

antimony_string = """
model michaelis_menten()
    compartment cell = 1.0;
    var E in cell;
    var S in cell;
    var ES in cell;
    var P in cell;

    kf = 0.1;
    kb = 1;
    kcat = 0.3;
    E = 75;
    S = 1000;

    SBindE: S + E => ES; kf*S*E;
    ESUnbind: ES => S + E; kb*ES;
    ProdForm: ES => P + E; kcat*ES;

    unit substance = item;

end
"""

with model.BuildAntimony(copasi_file) as loader:
    mm = loader.load(antimony_string)
The BuildAntimony context manager is deprecated and will be removed in future versions. Please use model.loada instead.
Run a Time Course Using Direct Method
[11]:
data = mm.simulate(0, 100, 1, method='direct')
data.head(n=10)
[11]:
E ES P S
Time
0 75 1 1 1000
1 0 76 18 908
2 2 74 36 892
3 0 76 57 869
4 1 75 81 846
5 0 76 100 826
6 0 76 122 804
7 1 75 143 784
8 1 75 167 760
9 1 75 200 727
Plot stochastic time course

Note that we can also use the pandas, matplotlib and seaborn libraries for plotting.

[12]:
import matplotlib
import seaborn
seaborn.set_context('notebook')
data.plot()
[12]:
<matplotlib.axes._subplots.AxesSubplot at 0x16ca0810c88>
[Output: plot of the stochastic time course]

Notice how similar the stochastic simulation is to the deterministic one. As the number of molecules being simulated increases, the stochastic simulation converges to the deterministic solution. Put another way, stochastic effects are greatest when simulating small numbers of molecules.
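To see this convergence directly, the particle-number model can be simulated with both the deterministic and the direct solvers and the results overlaid (a minimal sketch assuming the `mm` model and the plotting imports from above):

det = mm.simulate(0, 100, 1, method='deterministic')
sto = mm.simulate(0, 100, 1, method='direct')

ax = det['P'].plot(label='P (deterministic)')
sto['P'].plot(ax=ax, label='P (direct method)')
ax.set_xlabel('Time')
ax.set_ylabel('Particle numbers')
ax.legend()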

Insert Parameters

Parameters can be inserted automatically into a Copasi model.

Build a demonstration model

While antimony and the COPASI user interface are the preferred ways to build a model, PyCoTools does have its own mechanism for constructing COPASI models. For variety and demonstration, this method is used here.

[1]:
import os
import site
from pycotools3 import model, tasks, viz
## Choose a working directory for model
working_directory = os.path.abspath('')
copasi_file = os.path.join(working_directory, 'MichaelisMenten.cps')

if os.path.isfile(copasi_file):
    os.remove(copasi_file)


kf = 0.01
kb = 0.1
kcat = 0.05
with model.Build(copasi_file) as m:
    m.name = 'Michaelis-Menten'
    m.add('compartment', name='Cell')

    m.add('metabolite', name='P', concentration=0)
    m.add('metabolite', name='S', concentration=30)
    m.add('metabolite', name='E', concentration=10)
    m.add('metabolite', name='ES', concentration=0)

    m.add('reaction', name='S bind E', expression='S + E -> ES', rate_law='kf*S*E',
          parameter_values={'kf': kf})

    m.add('reaction', name='S unbind E', expression='ES -> S + E', rate_law='kb*ES',
         parameter_values={'kb': kb})

    m.add('reaction', name='ES produce P', expression='ES -> P + E', rate_law='kcat*ES',
          parameter_values={'kcat': kcat})

mm = model.Model(copasi_file)
mm
[1]:
Model(name=Michaelis-Menten, time_unit=s, volume_unit=ml, quantity_unit=mmol)
Insert Parameters from Python Dictionary
[2]:
params = {'E': 100,
          'P': 150}

## Insert into model
I = model.InsertParameters(mm, parameter_dict=params)
##format the parameters for displaying nicely
I.parameters.index = ['Parameter Value']
I.parameters.transpose()
[2]:
Parameter Value
E 100
P 150

Alternatively, use the inplace=True argument (analogous to the pandas library) to modify the object in place rather than needing to assign it.

[3]:
model.InsertParameters(mm, parameter_dict=params, inplace=True)
[3]:
<pycotools3.model.InsertParameters at 0x15c51a255c0>
Insert Parameters from Pandas DataFrame
[4]:
import pandas
params = {'(S bind E).kf': 50,
          '(S unbind E).kb': 96}
df = pandas.DataFrame(params, index=[0])
df
[4]:
(S bind E).kf (S unbind E).kb
0 50 96

[5]:
model.InsertParameters(mm, df=df, inplace=True)
[5]:
<pycotools3.model.InsertParameters at 0x15c519f8dd8>
Insert Parameters from Parameter Estimation Output

First we’ll get some parameter estimation data by fitting a model to simulated data.

[6]:
fname = os.path.join(os.path.abspath(''), 'timecourse.txt')
print(fname)
data = mm.simulate(0, 50, 1, report_name=fname)
assert os.path.isfile(fname)
D:\pycotools3\docs\source\Tutorials\timecourse.txt
[7]:
with  tasks.ParameterEstimation.Context(copasi_file, fname, context='s', parameters='l') as context:
    context.set('randomize_start_values', True)
    context.set('lower_bound', 0.01)
    context.set('upper_bound', 100)
    context.set('run_mode', True)
    config = context.get_config()
print(config)
PE = tasks.ParameterEstimation(config)
datasets:
    experiments:
        timecourse:
            affected_models:
            - MichaelisMenten
            filename: D:\pycotools3\docs\source\Tutorials\timecourse.txt
            mappings:
                E:
                    model_object: E
                    object_type: Metabolite
                    role: dependent
                ES:
                    model_object: ES
                    object_type: Metabolite
                    role: dependent
                P:
                    model_object: P
                    object_type: Metabolite
                    role: dependent
                S:
                    model_object: S
                    object_type: Metabolite
                    role: dependent
                Time:
                    model_object: Time
                    role: time
            normalize_weights_per_experiment: true
            separator: "\t"
    validations: {}
items:
    fit_items:
        (ES produce P).kcat:
            affected_experiments:
            - timecourse
            affected_models:
            - MichaelisMenten
            affected_validation_experiments: []
            lower_bound: 1.0e-06
            start_value: model_value
            upper_bound: 1000000.0
        (S bind E).kf:
            affected_experiments:
            - timecourse
            affected_models:
            - MichaelisMenten
            affected_validation_experiments: []
            lower_bound: 1.0e-06
            start_value: model_value
            upper_bound: 1000000.0
        (S unbind E).kb:
            affected_experiments:
            - timecourse
            affected_models:
            - MichaelisMenten
            affected_validation_experiments: []
            lower_bound: 1.0e-06
            start_value: model_value
            upper_bound: 1000000.0
models:
    MichaelisMenten:
        copasi_file: D:\pycotools3\docs\source\Tutorials\MichaelisMenten.cps
        model: Model(name=Michaelis-Menten, time_unit=s, volume_unit=ml, quantity_unit=mmol)
settings:
    calculate_statistics: false
    config_filename: config.yml
    context: s
    cooling_factor: 0.85
    copy_number: 1
    create_parameter_sets: false
    cross_validation_depth: 1
    fit: 1
    iteration_limit: 50
    lower_bound: 0.01
    max_active: 3
    method: genetic_algorithm
    number_of_generations: 200
    number_of_iterations: 100000
    overwrite_config_file: false
    pe_number: 1
    pf: 0.475
    pl_lower_bound: 1000
    pl_upper_bound: 1000
    population_size: 50
    prefix: null
    problem: Problem1
    quantity_type: concentration
    random_number_generator: 1
    randomize_start_values: true
    report_name: PEData.txt
    results_directory: ParameterEstimationData
    rho: 0.2
    run_mode: true
    save: false
    scale: 10
    seed: 0
    start_temperature: 1
    start_value: 0.1
    std_deviation: 1.0e-06
    swarm_size: 50
    tolerance: 1.0e-05
    update_model: false
    upper_bound: 100
    use_config_start_values: false
    validation_threshold: 5
    validation_weight: 1
    weight_method: mean_squared
    working_directory: D:\pycotools3\docs\source\Tutorials

Now we can insert the estimated parameters using:

[8]:
##index=0 for best parameter set (i.e. lowest RSS)
model.InsertParameters(mm, parameter_path=PE.results_directory['MichaelisMenten'], index=0, inplace=True)
[8]:
<pycotools3.model.InsertParameters at 0x15c6fb8c2e8>
Insert Parameters using the model.Model().insert_parameters method

The same means of inserting parameters can be used from the model object itself.

[9]:
mm.insert_parameters(parameter_dict=params, inplace=True)
Change parameters using model.Model().set

Individual parameters can also be changed using the set method. For example, we could set the concentration or particle numbers of the metabolite named S to 55.

[10]:
mm.set('metabolite', 'S', 55, 'name', 'concentration')

## or

mm.set('metabolite', 'S', 55, 'name', 'particle_numbers')
[10]:
Model(name=Michaelis-Menten, time_unit=s, volume_unit=ml, quantity_unit=mmol)

Parameter Scan

Copasi supports three types of scan: a regular parameter scan, a repeat scan and sampling from a parametric distribution.

We first build a model to work with throughout the tutorial.

[1]:
import os
import site
from pycotools3 import model, tasks, viz
import pandas

working_directory = r'/home/ncw135/Documents/pycotools3/docs/source/Tutorials/timecourse_tutorial'
if not os.path.isdir(working_directory):
    os.makedirs(working_directory)

copasi_file = os.path.join(working_directory, 'michaelis_menten.cps')

if os.path.isfile(copasi_file):
    os.remove(copasi_file)

antimony_string = """
model michaelis_menten()
    compartment cell = 1.0
    var E in cell
    var S in cell
    var ES in cell
    var P in cell

    kf = 0.1
    kb = 1
    kcat = 0.3
    E = 75
    S = 1000

    SBindE: S + E => ES; kf*S*E
    ESUnbind: ES => S + E; kb*ES
    ProdForm: ES => P + E; kcat*ES
end
"""

with model.BuildAntimony(copasi_file) as loader:
    mm = loader.load(antimony_string)


mm
The BuildAntimony context manager is deprecated and will be removed in future versions. Please use model.loada instead.
[1]:
Model(name=michaelis_menten, time_unit=s, volume_unit=l, quantity_unit=mol)
[2]:
S = tasks.Scan(
    mm, scan_type='scan', subtask='time_course', report_type='time_course',
    report_name = 'ParameterScanOfTimeCourse.txt', variable='S',
    minimum=1, maximum=20, number_of_steps=8, run=True,
)

## Now check parameter scan data exists
os.path.isfile(S.report_name)
[2]:
True
Two Way Parameter Scan

By default, scan tasks are removed before setting up a new scan. To set up dual scans, set clear_scan to False in the second call to Scan so that the first scan is not removed prior to adding the second.

[3]:
## Clear scans for setting up first scan
tasks.Scan(
    mm, scan_type='scan', subtask='time_course', report_type='time_course',
    variable='E', minimum=1, maximum=20, number_of_steps=8, run=False, clear_scan=True,
)

## do not clear tasks when setting up the second
S = tasks.Scan(
    mm, scan_type='scan', subtask='time_course', report_type='time_course',
    report_name = 'TwoWayParameterScanOfTimeCourse.csv', variable='S',
    minimum=1, maximum=20, number_of_steps=8, run=True, clear_scan=False,
)

## check the output exists
os.path.isfile(S.report_name)
[3]:
True

An arbitrary number of scans can be set up this way. Further, it is possible to chain together scans with repeat or random distribution scans.

Repeat Scan Items

Repeat scans are very useful for running multiple parameter estimations and for running stochastic time courses.

[4]:
## Assume Parameter Estimation task already configured
tasks.Scan(
    mm, scan_type='repeat', subtask='parameter_estimation', report_type='parameter_estimation',
    number_of_steps=6, run=False, ##set run to True to run via CopasiSE
)


## Assume model runs stochastically and time course settings are already configured
tasks.Scan(
    mm, scan_type='repeat', subtask='time_course', report_type='time_course',
    number_of_steps=100, run=False,  ##set run to True to run via CopasiSE
)
[4]:
<pycotools3.tasks.Scan at 0x1cb859370b8>

Parameter Estimation

[4]:
import os, glob
import site
from pycotools3 import viz, model, misc, tasks
from io import StringIO
import pandas
%matplotlib inline
Build a Model
[8]:
working_directory = os.path.abspath('')

copasi_file = os.path.join(working_directory, 'negative_feedback.cps')
ant = """
        model negative_feedback
            compartment cell = 1.0
            var A in cell
            var B in cell

            vAProd = 0.1
            kADeg = 0.2
            kBProd = 0.3
            kBDeg = 0.4
            A = 0
            B = 0

            AProd: => A; cell*vAProd
            ADeg: A =>; cell*kADeg*A*B
            BProd: => B; cell*kBProd*A
            BDeg: B => ; cell*kBDeg*B
        end
        """
mod = model.loada(ant, copasi_file)

## open model in copasi
#mod.open()
mod
[8]:
Model(name=negative_feedback, time_unit=s, volume_unit=l, quantity_unit=mol)
Collect some experimental data

Organise your experimental data into delimited text files

[9]:
experimental_data = StringIO(
    """
Time,A,B
 0, 0.000000, 0.000000
 1, 0.099932, 0.013181
 2, 0.199023, 0.046643
 3, 0.295526, 0.093275
 4, 0.387233, 0.147810
 5, 0.471935, 0.206160
 6, 0.547789, 0.265083
 7, 0.613554, 0.322023
 8, 0.668702, 0.375056
 9, 0.713393, 0.422852
10, 0.748359, 0.464639
    """.strip()
)

df = pandas.read_csv(experimental_data, index_col=0)

fname = os.path.join(os.path.abspath(''), 'experimental_data.csv')
df.to_csv(fname)

assert os.path.isfile(fname)
The Config Object

The interface to COPASI's parameter estimation using pycotools3 revolves around the ParameterEstimation.Config object. ParameterEstimation.Config is a dictionary-like object which allows the user to define their parameter estimation problem. All features of COPASI's parameter estimation task are supported, including configuration of validation experiments, affected experiments, affected validation experiments and constraints, as well as additional features such as the configuration of multiple models simultaneously via the affected_models keyword.

The ParameterEstimation.Config object expects at the bare minimum some information about the models being configured, some experimental data, some fit items and a working directory. The remaining options are automatically filled in with defaults.

[10]:
config = tasks.ParameterEstimation.Config(
    models=dict(
        negative_feedback=dict(
            copasi_file=copasi_file
        )
    ),
    datasets=dict(
        experiments=dict(
            first_dataset=dict(
                filename=fname,
                separator=','
            )
        )
    ),
    items=dict(
        fit_items=dict(
            A={},
            B={},
        )
    ),
    settings=dict(
        working_directory=working_directory
    )
)
config
[10]:
datasets:
    experiments:
        first_dataset:
            affected_models:
            - negative_feedback
            filename: D:\pycotools3\docs\source\Tutorials\experimental_data.csv
            mappings:
                A:
                    model_object: A
                    object_type: Metabolite
                    role: dependent
                B:
                    model_object: B
                    object_type: Metabolite
                    role: dependent
                Time:
                    model_object: Time
                    role: time
            normalize_weights_per_experiment: true
            separator: ','
    validations: {}
items:
    fit_items:
        A:
            affected_experiments:
            - first_dataset
            affected_models:
            - negative_feedback
            affected_validation_experiments: []
            lower_bound: 1.0e-06
            start_value: model_value
            upper_bound: 1000000
        B:
            affected_experiments:
            - first_dataset
            affected_models:
            - negative_feedback
            affected_validation_experiments: []
            lower_bound: 1.0e-06
            start_value: model_value
            upper_bound: 1000000
models:
    negative_feedback:
        copasi_file: D:\pycotools3\docs\source\Tutorials\negative_feedback.cps
        model: Model(name=negative_feedback, time_unit=s, volume_unit=l, quantity_unit=mol)
settings:
    calculate_statistics: false
    config_filename: config.yml
    context: s
    cooling_factor: 0.85
    copy_number: 1
    create_parameter_sets: false
    cross_validation_depth: 1
    fit: 1
    iteration_limit: 50
    lower_bound: 1.0e-06
    max_active: 3
    method: genetic_algorithm
    number_of_generations: 200
    number_of_iterations: 100000
    overwrite_config_file: false
    pe_number: 1
    pf: 0.475
    pl_lower_bound: 1000
    pl_upper_bound: 1000
    population_size: 50
    prefix: null
    problem: Problem1
    quantity_type: concentration
    random_number_generator: 1
    randomize_start_values: false
    report_name: PEData.txt
    results_directory: ParameterEstimationData
    rho: 0.2
    run_mode: false
    save: false
    scale: 10
    seed: 0
    start_temperature: 1
    start_value: 0.1
    std_deviation: 1.0e-06
    swarm_size: 50
    tolerance: 1.0e-05
    update_model: false
    upper_bound: 1000000
    use_config_start_values: false
    validation_threshold: 5
    validation_weight: 1
    weight_method: mean_squared
    working_directory: D:\pycotools3\docs\source\Tutorials

The COPASI user will be familiar with most of these settings, though there are also a few additional options.

Once built, a ParameterEstimation.Config object can be passed to the ParameterEstimation object.

[11]:
PE = tasks.ParameterEstimation(config)

By default, the run_mode setting is set to False. To run the parameter estimation in background processes using CopasiSE, set run_mode to True or 'parallel'.

[12]:
config.settings.run_mode = True
PE = tasks.ParameterEstimation(config)
viz.Parse(PE)['negative_feedback']
# config
[12]:
A B RSS
0 0.000001 0.000001 7.955450e-12
Running multiple parameter estimations

With pycotools, parameter estimations are run via the scan task interface so that we have the option of running the same problem pe_number times. Additionally, pycotools provides a way of copying a model copy_number times so that the final number of parameter estimations that get executed is \(pe\_number \cdot copy\_number\).

[13]:
config.settings.copy_number = 4
config.settings.pe_number = 2
config.settings.run_mode = True
PE = tasks.ParameterEstimation(config)

And sure enough we have run the problem 8 times.

[14]:
viz.Parse(PE)['negative_feedback']
[14]:
A B RSS
0 0.000001 0.000001 7.955430e-12
1 0.000001 0.000001 7.955450e-12
2 0.000001 0.000001 7.955450e-12
3 0.000001 0.000001 7.955450e-12
4 0.000001 0.000001 7.955450e-12
5 0.000001 0.000001 7.955450e-12
6 0.000001 0.000001 7.955450e-12
7 0.000001 0.000001 7.955450e-12

A shortcut for configuring the ParameterEstimation.Config object

Manually configuring the ParameterEstimation.Config object can take some time as it is bulky, but necessarily so in order to enable users to configure any type of parameter estimation. The ParameterEstimation.Config class should be used directly when a lower level interface into COPASI configurations is required. For instance, if you want to configure different boundaries for each parameter, choose which parameters are affected by which experiment, mix timecourse and steady state experiments, define independent variables, add constraints or choose which models are affected by which experiments, you can use the ParameterEstimation.Config class directly.

However, if you want a more standard configuration, such as all parameters estimated between the same boundaries, all experiments affecting all parameters and models, etc., then you can use the ParameterEstimation.Context class to build the ParameterEstimation.Config class for you. The ParameterEstimation.Context class has a context argument that defaults to 's' for simple. While not yet implemented, eventually alternative options for context will be provided to support other common patterns, such as cross_validation or chaser_estimations (global followed by local algorithm). Note that an option is no longer required for model_selection since it is innately incorporated via the affected_models argument.

To use the ParameterEstimation.Context object:

[17]:
with tasks.ParameterEstimation.Context(mod, fname, context='s', parameters='g') as context:
    context.set('method', 'genetic_algorithm')
    context.set('population_size', 10)
    context.set('copy_number', 4)
    context.set('pe_number', 2)
    context.set('run_mode', True)
    context.set('separator', ',')
    config = context.get_config()

pe = tasks.ParameterEstimation(config)
[18]:
viz.Parse(pe)['negative_feedback']
[18]:
RSS kADeg kBDeg kBProd vAProd
0 8.851340e-13 0.2 0.4 0.3 0.1
1 8.851340e-13 0.2 0.4 0.3 0.1
2 8.851340e-13 0.2 0.4 0.3 0.1
3 8.851340e-13 0.2 0.4 0.3 0.1
4 8.851340e-13 0.2 0.4 0.3 0.1
5 8.851340e-13 0.2 0.4 0.3 0.1
6 8.851340e-13 0.2 0.4 0.3 0.1
7 8.851340e-13 0.2 0.4 0.3 0.1

The parameters keyword provides an easy interface for parameter selection. Here are the available options:

  • g specifies that all global variables are to be estimated
  • l specifies that all local parameters are to be estimated
  • m specifies that all metabolites are to be estimated
  • c specifies that all compartment volumes are to be estimated
  • a specifies that all of the above will be estimated

These options can also be combined. For example, parameters='cgm' means that compartment volumes, global quantities and metabolite concentrations (or particle numbers) will be estimated.
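For example, to estimate compartment volumes, global quantities and metabolites together (a sketch reusing the `mod` model and `fname` data file from above):

with tasks.ParameterEstimation.Context(mod, fname, context='s', parameters='cgm') as context:
    context.set('method', 'genetic_algorithm')
    config = context.get_config()

pe = tasks.ParameterEstimation(config)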


Examples

Working with an existing copasi model

You can create a new model using the antimony language. Sometimes, however, you have already built a model using the Copasi UI and want to use pycotools3 with this model. This example shows you how to use the pycotools3.model.Model class directly.

Create a PyCoTools model from an existing model
from pycotools3 import model

# remember to input the string to your own model here
copasi_file = '/path/to/model.cps'

mod = model.Model(copasi_file)

assert isinstance(mod, model.Model)
Extracting the antimony string associated with a copasi model

It is often useful to have an antimony string generated directly from a copasi model.

ant_string = mod.to_antimony()
print(ant_string)

Integration with Tellurium

Tellurium is a Python package built on top of
libRoadRunner, a C++ package for simulation and analysis of models in systems biology. Antimony is a model definition language for SBML models that was written by the authors of Tellurium and libRoadRunner.

Since PyCoTools has adopted Antimony, the same model can be simulated and analysed with both COPASI and tellurium.

Here's a short example of using the tellurium back end.

import tellurium as te

antimony_string = """
model simple_parameter_estimation()
    compartment Cell = 1;

    A in Cell;
    B in Cell;
    C in Cell;

    // reactions
    R1: A => B ; Cell * k1 * A;
    R2: B => A ; Cell * k2 * B * C;
    R3: B => C ; Cell * k3 * B;
    R4: C => B ; Cell * k4 * C;

    // initial concentrations
    A = 100;
    B = 1;
    C = 1;

    // reaction parameters
    k1 = 0.1;
    k2 = 0.1;
    k3 = 0.1;
    k4 = 0.1;
end
"""

mod = te.loada(antimony_string)

# simulate a time series with 11 time points between 0 and 10
timeseries_data = mod.simulate(0, 10, 11)

# compute the steady state
mod.conservedMoietyAnalysis = True
mod.setSteadyStateSolver('nleq')
mod.steadyState()
mod.steadyStateSelections = ['A', 'B', 'C']
print(dict(zip(mod.steadyStateSelections,
               mod.getSteadyStateValues())))
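The same antimony string can also be loaded into COPASI through PyCoTools, so the model can be analysed with both tools (a sketch; the copasi filename used here is arbitrary):

import os
from pycotools3 import model

copasi_file = os.path.join(os.path.abspath(''), 'tellurium_example.cps')
copasi_mod = model.loada(antimony_string, copasi_file)

# run the same time course through COPASI/CopasiSE
copasi_data = copasi_mod.simulate(0, 10, 1)
print(copasi_data.head())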

Simple Parameter Estimation

This is an example of how to configure a simple parameter estimation using pycotools. We first create a toy model for demonstration, then simulate some experimental data from it and fit it back to the model, using pycotools for configuration.

import os, glob
import pandas, numpy
import matplotlib.pyplot as plt
import seaborn
from pycotools3 import model, tasks, viz
seaborn.set_context(context='talk')         # set seaborn context for formatting output of plots

## Choose a directory for our model and analysis. Note this can be anywhere.
working_directory = os.path.abspath('')

## In this model, A gets reversibly converted to B but the backwards reaction is additionally regulated by C.
## B is reversibly converted into C.
antimony_string = """
model simple_parameter_estimation()
    compartment Cell = 1;

    A in Cell;
    B in Cell;
    C in Cell;

    // reactions
    R1: A => B ; Cell * k1 * A;
    R2: B => A ; Cell * k2 * B * C;
    R3: B => C ; Cell * k3 * B;
    R4: C => B ; Cell * k4 * C;

    // initial concentrations
    A = 100;
    B = 1;
    C = 1;

    // reaction parameters
    k1 = 0.1;
    k2 = 0.1;
    k3 = 0.1;
    k4 = 0.1;
end
"""

# Create a path to a copasi file
copasi_file = os.path.join(working_directory, 'example_model.cps')

## build model
mod = model.loada(antimony_string, copasi_file)
assert isinstance(mod, model.Model)

## simulate some data, returns a pandas.DataFrame
data = mod.simulate(0, 20, 1)

## write data to file
experiment_filename = os.path.join(working_directory, 'experiment_data.txt')
data.to_csv(experiment_filename)

## We now have a model and some experimental data and can
## configure a parameter estimation

Parameter estimation configuration in pycotools3 revolves around the tasks.ParameterEstimation.Config object which is the input to the parameter estimation task. The object necessarily takes a lot of manual configuration to ensure it is flexible enough for any parameter estimation configuration. However, the ParameterEstimation.Context class is a tool for simplifying the construction of a Config object.

with tasks.ParameterEstimation.Context(mod, experiment_filename, context='s', parameters='g') as context:
    context.set('separator', ',')
    context.set('run_mode', True)
    context.set('randomize_start_values', True)
    context.set('method', 'genetic_algorithm')
    context.set('population_size', 100)
    context.set('lower_bound', 1e-1)
    context.set('upper_bound', 1e1)

    config = context.get_config()

pe = tasks.ParameterEstimation(config)

data = viz.Parse(pe).data
print(data)

Simple Parameter Estimation with Steady State Data

The short story here is that PyCoTools distinguishes time series and steady state data automatically, using the presence or absence of the time column.

Here’s an example.

import os, glob
import pandas, numpy
import matplotlib.pyplot as plt
import seaborn
from pycotools3 import model, tasks, viz
seaborn.set_context(context='talk')         # set seaborn context for formatting output of plots

## Choose a directory for our model and analysis. Note this can be anywhere.
working_directory = os.path.abspath('')

## In this model, A gets reversibly converted to B but the backwards reaction is additionally regulated by C.
## B is reversibly converted into C.
antimony_string = """
model simple_parameter_estimation()
    compartment Cell = 1;

    A in Cell;
    B in Cell;
    C in Cell;

    // reactions
    R1: A => B ; Cell * k1 * A;
    R2: B => A ; Cell * k2 * B * C;
    R3: B => C ; Cell * k3 * B;
    R4: C => B ; Cell * k4 * C;

    // initial concentrations
    A = 100;
    B = 1;
    C = 1;

    // reaction parameters
    k1 = 0.1;
    k2 = 0.1;
    k3 = 0.1;
    k4 = 0.1;
end
"""

# Create a path to a copasi file
copasi_file = os.path.join(working_directory, 'example_model.cps')

## build model
mod = model.loada(antimony_string, copasi_file)
assert isinstance(mod, model.Model)

## create some made up data
data = pandas.DataFrame({'A': [30], 'B': [10], 'C': [10]})

## write data to file
experiment_filename = os.path.join(working_directory, 'experiment_data.txt')
data.to_csv(experiment_filename)

We now have a model and some experimental data and can configure a parameter estimation. Configuring steady state data is semantically identical to configuring time series data. The difference is that our data no longer has a time column and so PyCoTools assumes that it is steady state data.

Now, as usual, we configure the parameter estimation with the Context manager.

with tasks.ParameterEstimation.Context(mod, experiment_filename, context='s', parameters='g') as context:
    context.set('separator', ',')
    context.set('run_mode', True)
    context.set('randomize_start_values', True)
    context.set('method', 'genetic_algorithm')
    context.set('population_size', 100)
    context.set('lower_bound', 1e-1)
    context.set('upper_bound', 1e1)

    config = context.get_config()

pe = tasks.ParameterEstimation(config)

data = viz.Parse(pe).data
print(data)

Parameter Estimation using Prefix Argument

Sometimes we would like to select a set of parameters to estimate and leave the rest fixed. One way to do this is to compile a list of parameters you would like to estimate and enter them into the ParameterEstimation.Config class. A quicker alternative is to use the prefix setting.

import os, glob
import pandas, numpy
import matplotlib.pyplot as plt
import seaborn
from pycotools3 import model, tasks, viz
seaborn.set_context(context='talk')

## Choose a directory for our model and analysis
working_directory = os.path.abspath('')

The prefix argument will set up a parameter estimation configuration containing only parameters that start with prefix. While this can be anything, it is quite useful to use the _ character and add an _ to any parameters that you want estimated. In this way you can keep your estimated parameters marked.

antimony_string = """
model simple_parameter_estimation()
    compartment Cell = 1;

    A in Cell;
    B in Cell;
    _C in Cell;

    // reactions
    R1: A => B ; Cell * _k1 * A;
    R2: B => A ; Cell * k2 * B * _C;
    R3: B => _C ; Cell * _k3 * B;
    R4: _C => B ; Cell * k4 * _C;

    // initial concentrations
    A = 100;
    B = 1;
    _C = 1;

    // reaction parameters
    _k1 = 0.1;
    k2 = 0.1;
    _k3 = 0.1;
    k4 = 0.1;
end
"""

copasi_file = os.path.join(working_directory, 'example_model.cps')

## build model
mod = model.loada(antimony_string, copasi_file)

assert isinstance(mod, model.Model)

## simulate some data and write it to file via report_name
experiment_filename = os.path.join(working_directory, 'experiment_data.txt')
data = mod.simulate(0, 20, 1, report_name=experiment_filename)

And now we configure a parameter estimation like normal but set prefix to _.

with tasks.ParameterEstimation.Context(mod, experiment_filename, context='s', parameters='a') as context:
    context.set('separator', ',')
    context.set('run_mode', True)
    context.set('randomize_start_values', True)
    context.set('method', 'genetic_algorithm')
    context.set('population_size', 100)
    context.set('lower_bound', 1e-1)
    context.set('upper_bound', 1e1)
    context.set('prefix', '_')
    config = context.get_config()

pe = tasks.ParameterEstimation(config)

data = viz.Parse(pe).data
print(data)

Parameter Estimation with Independent Variables

The concept of independent variables is important for parameter fitting because it enables us to simultaneously fit multiple datasets to a single model. This is achieved by iterating over all the datasets in your objective function and changing variables (such as initial concentration parameters) to whatever they should be for that dataset. These variables are independent variables and they basically define the initial conditions for fitting the dataset.

Independent variables are handled in PyCoTools by appending the string ‘_indep’ after a variable in the data file itself. PyCoTools will then recognize the variable and set it as independent rather than dependent.

Here’s an example:

import os, glob
import pandas, numpy
import seaborn
from pycotools3 import model, tasks, viz
seaborn.set_context(context='talk')         # set seaborn context for formatting output of plots

## Choose a directory for our model and analysis. Note this can be anywhere.
working_directory = os.path.abspath('')

## In this model, A gets reversibly converted to B but the backwards reaction is additionally regulated by C.
## B is reversibly converted into C.
antimony_string = """
model simple_parameter_estimation()
    compartment Cell = 1;

    A in Cell;
    B in Cell;
    C in Cell;

    // reactions
    R1: A => B ; Cell * k1 * A;
    R2: B => A ; Cell * k2 * B * C;
    R3: B => C ; Cell * k3 * B;
    R4: C => B ; Cell * k4 * C;

    // initial concentrations
    A = 100;
    B = 1;
    C = 1;

    // reaction parameters
    k1 = 0.1;
    k2 = 0.1;
    k3 = 0.1;
    k4 = 0.1;
end
"""

# Create a path to a copasi file
copasi_file = os.path.join(working_directory, 'example_model.cps')

# build model
mod = model.loada(antimony_string, copasi_file)
assert isinstance(mod, model.Model)

# simulate some data, returns a pandas.DataFrame
data = mod.simulate(0, 20, 1)

# creates a new column in the dataset called A_indep
data['A_indep'] = 50
# the initial abundance of A will now be set to 50 prior to estimation

# write data to file
experiment_filename = os.path.join(working_directory, 'experiment_data.txt')
data.to_csv(experiment_filename)

We now configure a parameter estimation like normal.

with tasks.ParameterEstimation.Context(mod, experiment_filename, context='s', parameters='g') as context:
    context.set('separator', ',')
    context.set('run_mode', True)
    context.set('randomize_start_values', True)
    context.set('method', 'genetic_algorithm')
    context.set('population_size', 100)
    context.set('lower_bound', 1e-1)
    context.set('upper_bound', 1e1)

    config = context.get_config()

pe = tasks.ParameterEstimation(config)

data = viz.Parse(pe).data
print(data)

Advanced Parameter Estimation

Parameter estimation configuration in PyCoTools revolves around the ParameterEstimation.Config object, which is merely a nested data storage class. When you use the ParameterEstimation.Context object, you are using a high level interface to create a ParameterEstimation.Config object. However, it is also possible to build it manually. It should be stressed however, that the Context method should be preferred when possible. The low level interface can be tedious but gives you more flexibility for more fine tuned parameter estimation configurations. While there is no getting around the fact that building the ParameterEstimation.Config object can take time, I have built in a ‘default system’ whereby if a (non-compulsory) section is left blank, then PyCoTools tries to use the default values.

The ParameterEstimation.Config object is a bit like a Struct from other languages or a tree (see bunch.Bunch). You can access attributes using either dot notation or dictionary-like access.
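For example, assuming `config` is a ParameterEstimation.Config instance such as the one built in the tutorial above, both access styles read the same value:

print(config.settings.method)        # dot notation
print(config['settings']['method'])  # dictionary-like access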

There are four main sections to the Config object: models, datasets, items and settings.

Each of these is described in detail below. Note that all branches in the config object described here are themselves nodes in the tree with children, unless they are labelled with leaf. These nodes expect the usual key: value pairs.

Models

This is a nested structure containing an arbitrary number of models for simultaneous configuration. It is the user's responsibility to ensure that it makes sense to configure multiple models for parameter estimation in the same way.

  • models
    • model1_name (leaf): full path to the model 1 copasi file
    • model2_name (leaf): full path to the model 2 copasi file

models = dict(
    model1=dict(copasi_file='/full/path/to/copasi_file1.cps'),
    model2=dict(copasi_file='/full/path/to/copasi_file2.cps')
)

Note

The nested structure of the models section remains from a deprecated version of PyCoTools. Since it does not need to be nested, this will change to a non-nested structure in the near future.

Datasets

This is where we tell PyCoTools about the data we're using for parameter estimation. The datasets attribute is itself nested with the following structure:

  • datasets

    • experiments
    • validations

Both experiments and validations are nested structures and they are pretty much identical. The difference is that those configured via the experiments section are used for calibrating/training the model while those in the validation section are only used for validation (or testing). For simplicity, the structure of both is described in one section.

  • experiments (or validation)

    • experiment_name. An arbitrary string representing the name of the experiment.

      • filename (leaf). The full path to the dataset

      • affected_models. Analogous to affected_experiments or affected_validation_experiments, you can have an experiment target only one (or more) model. This feature is a superset of COPASI. Defaults to the string ‘all’ which is translated to all models.

      • mappings. Another nested structure for mapping arguments. If left blank, PyCoTools will assume 1:1 mappings between experimental data file headers and model variables. Independent variables are assumed to contain a trailing _indep, i.e. PI3K_indep. This should have as many elements as there are columns in the data file.

        • Experimental variable name (or time). These should be the same as used for data column headers.

          • model_object (leaf node). The object that corresponds to the experimental variable name.
          • role (leaf node). Either time, ignored (default), dependent or independent
      • separator (leaf). Overrides the separator in the settings section, for when they are different. However, good practice is to always use the same separator and set the separator in the settings section.

      • normalize_weights_per_experiment (leaf): boolean, default=True.

Here’s an example of the datasets section.

datasets=dict(
    experiments=dict(
        report1=dict(
            filename='full/path/to_datafile1.csv',
            affected_models='all',
            mappings=dict(
                Time=dict(
                    model_object='Time',
                    role='time'
                ),
                A=dict(
                    model_object='A',
                    role='dependent'
                ),
            )
        ),
        # note the absence of the mappings field. This tells
        # PyCoTools that you have used the suggested convention
        # of matching model variables with data file headers and using
        # '_indep' suffix for independent variables.
        report2=dict(
            filename='full/path/to_datafile2.csv',
            separator='\t'  # overrides separator from main settings menu
        )
    ),
    # This data will not be used for parameter estimation. Only validation.
    validations=dict(
        report3=dict(
            filename='full/path/to_datafile3.csv',
            affected_models='model1',  # this validation experiment only affects model1
            # we're expecting the default mapping convention
        ),
    )
)
Items

This is where we configure the parameters to be estimated, their boundaries, start values and affected experiments. The items structure is composed of fit_items and constraint_items.

  • items

    • fit_items
    • constraint_items

Similarly to the experiment section, fit_items and constraint_items are nearly identical. The difference is that whilst fit items are used to define the parameter space, constraints are used to restrict the parameter space to a subset of the full parameter space. An estimation with constraints can explore beyond the restrictions imposed by the constraints, but solutions that violate the constraints will not be accepted. In contrast, the solution cannot go beyond the boundaries set by the fit_items.

items = dict(
    fit_items=dict(
        A=dict(
            affected_experiments=['report1'],
            affected_models=['model1'],
            affected_validation_experiments=['report3'],
            lower_bound=15,
            start_value=0.1,
            upper_bound=35
        ),
        B=dict(
            affected_experiments=['report1', 'report2'],
            affected_models=['model1'],
            affected_validation_experiments=['report3'],
            lower_bound=0.05,
            start_value=1.05,
            upper_bound=36
        ),
        C=dict(
            affected_experiments=['report1', 'report2'],
            affected_models=['model1'],
            affected_validation_experiments=['report3'],
            lower_bound=0.05,
            start_value=1.0,
            upper_bound=36
        )
    ),
    constraint_items=dict(
        C=dict(
            affected_experiments=['report1', 'report2'],
            affected_models=['model1'],
            affected_validation_experiments=['report3'],
            lower_bound=16,
            start_value=1.05,
            upper_bound=26
            )
    )
)

Note

affected_experiments, affected_models, and affected_validation_experiments all accept the special string all which resolves to all of your data files. This is default behaviour for both affected_experiments and affected_models whereas the default behaviour for affected_validation_experiments is None.
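For example, the fit item for A above could be written with the special string instead of explicit lists (a sketch):

items = dict(
    fit_items=dict(
        A=dict(
            affected_experiments='all',  # resolves to every experiment defined under datasets
            affected_models='all',       # resolves to every model defined under models
            lower_bound=15,
            start_value=0.1,
            upper_bound=35
        ),
    )
)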

Settings

These are global settings for the parameter estimation.

settings = dict(
    calculate_statistics=False,     # Corresponds to the `calculate_statistics` flag in copasi
    config_filename='config.yml',   # Filename for saving config to file
    context='s',                    # Alters the behaviour of the configuration. See :py:class:`ParameterEstimation.Context`.
    cooling_factor=0.85,            # Parameter estimation algorithm setting
    copy_number=1,                  # How many times to copy the copasi file for simultaneous runs
    create_parameter_sets=False,    # Corresponds to the create_parameter_sets flag in copasi
    cross_validation_depth=1,       # depth of cross validation. Corresponds to COPASI, (though this feature was buggy during development)
    fit=1,                          # This is an index of parameter estimation. Increment by 1 to repeat a similar parameter estimation to test alternative configurations
    iteration_limit=50,             # Parameter estimation algorithm setting
    lower_bound=0.05,               # Default lower boundary for all parameters in the estimation. Can be overwritten under the fit_items section to have different boundaries for every fit item.
    max_active=3,                   # Maximum number of models to run at the same time when running in parallel
    method='genetic_algorithm_sr',  # which algorithm to use
    number_of_generations=100,      # Parameter estimation algorithm setting
    number_of_iterations=100000,    # Parameter estimation algorithm setting
    overwrite_config_file=False,    # Set to True to explicitly overwrite existing configuration file.
    pe_number=1,                    # How many parameter estimations to run per model copy
    pf=0.475,                       # Parameter estimation algorithm setting
    pl_lower_bound=1000,            # When context is set to 'pl' for profile likelihood configurations, this defines the upper boundary of the analysis. The upper boundary is the best estimated parameter multiplied by this value.
    pl_upper_bound=1000,            # When context is set to 'pl' for profile likelihood configurations, this defines the lower boundary of the analysis. The lower boundary is the best estimated parameter divided by this value.
    population_size=38,             # Parameter estimation algorithm setting
    prefix=None,                    # Prefix used to automatically locate parameters to be estimated. For instance, you can 'tag' each parameter you want to include in the estimation with an underscore at the beginning (i.e. _kAktPhosphorylation) to filter the parameters for estimation.
    problem='Problem1',             # This is the name of the folder that will be created to contain the results.
    quantity_type='concentration',  # either 'concentration' or 'particle_numbers' to switch between the two.
    random_number_generator=1,      # Parameter estimation algorithm setting.
    randomize_start_values=False,   # Corresponds to the 'randomize_start_values' flag in copasi
    report_name='PEData.txt',       # The base report name for the parameter estimation output. This is automatically modified when copy_number is > 1. The results have as many rows as `pe_number`.
    results_directory='ParameterEstimationData',  # This folder stores the actual parameter estimation results, within the fit directory (which is within the Problem directory)
    rho=0.2,                        # Parameter estimation algorithm setting
    run_mode=False,                 # Switch between False, True and 'parallel'
    save=False,                     # Save the model to file after configuration or not.
    scale=10,                       # Parameter estimation algorithm setting
    seed=0,                         # Parameter estimation algorithm setting
    start_temperature=1,            # Parameter estimation algorithm setting
    start_value=0.1,                # Parameter estimation algorithm setting
    starting_parameter_sets=None,   # Experimental feature.
    std_deviation=1.0e-06,          # Parameter estimation algorithm setting
    swarm_size=50,                  # Parameter estimation algorithm setting
    tolerance=1.0e-05,              # Parameter estimation algorithm setting
    update_model=False,             # Corresponds to the update model flag in copasi
    upper_bound=36,                 # Default upper boundary for all parameters in the estimation. Can be overwritten under the fit_items section to have different boundaries for every fit item.
    use_config_start_values=False,  # If True, parameter estimation will start from the start values specified under the `fit_items` section.
    validation_threshold=8.5,       # Corresponds to the validation threshold in COPASI. This is the default value that can be overwritten by giving this argument to the validation dataset section.
    validation_weight=4,            # Corresponds to the validation weight in COPASI. This is the default value that can be overwritten by giving this argument to the validation dataset section.
    weight_method='value_scaling',  # Which weight method to use. Default='mean_squared'. Other options: mean, standard_deviation or value_scaling
    working_directory='/home/ncw135/Documents/pycotools3/Tests'   # The overall directory for the whole analysis. Defaults to the same directory containing the first copasi file found for configuration.
)
Building a ParameterEstimation.Config object

When you have configured the relevant sections, you can simply call the ParameterEstimation.Config constructor to create your object.

Assuming you have nested dictionaries containing the appropriate information detailed above:

config = tasks.ParameterEstimation.Config(
            models=models,
            datasets=datasets,
            items=items,
            settings=settings
        )

The config is formatted using yaml for ease of inspection.
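Because the object serialises to yaml, a quick way to inspect the full configuration is simply to print it (a minimal sketch; the exact yaml layout depends on your configuration):

print(config)   # dumps the configured models, datasets, items and settings sections as yaml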

Note

It is possible to load a config from a yaml file on disk. Documentation to come.

Using a ParameterEstimation.Context as a template

The most effective way to use the low level interface is to let ParameterEstimation.Context do most of the work, retrieve the mostly configured ParameterEstimation.Config object, and then make your desired amendments.
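For example, the following sketch assumes a model mod and a data file data_fname as in the examples below, and assumes that the sections of the returned Config object can be modified as attributes before being passed to ParameterEstimation (attribute access is an assumption here, not a documented guarantee):

with tasks.ParameterEstimation.Context(mod, data_fname, context='s', parameters='g') as context:
    config = context.get_config()

# amend the generated template before running (attribute access assumed)
config.settings.method = 'hooke_jeeves'
config.settings.run_mode = True

pe = tasks.ParameterEstimation(config)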

Multiple Parameter Estimations

Configuring multiple parameter estimations is easy with COPASI because you can configure a parameter estimation task, configure a scan repeat item with the subtask set to parameter estimation and hit run. This is what pycotools does under the hood to configure a parameter estimation, even if the desired number of parameter estimations is 1.

In addition, pycotools supports running multiple copies of your copasi file in separate processes, or on a cluster.

import os
from pycotools3 import model, tasks, viz

working_directory = os.path.abspath('')

antimony_string =  '''
    model negative_feedback()
        // define compartments
        compartment cell = 1.0
        //define species
        var A in cell
        var B in cell
        //define some global parameter for use in reactions
        vAProd = 0.1
        kADeg  = 0.2
        kBProd = 0.3
        kBDeg  = 0.4
        //define initial conditions
        A      = 0
        B      = 0
        //define reactions
        AProd:  => A; cell*vAProd
        ADeg: A =>  ; cell*kADeg*A*B
        BProd:  => B; cell*kBProd*A
        BDeg: B =>  ; cell*kBDeg*B
    end
    '''

copasi_file = os.path.join(working_directory, 'negative_fb.cps')
mod = model.loada(antimony_string, copasi_file)

data_fname = os.path.join(working_directory, 'timecourse.txt')
mod.simulate(0, 10, 1, report_name=data_fname)

assert os.path.isfile(data_fname)
Increasing Parameter Estimation Throughput

The pe_number argument specifies the number that gets entered into the copasi scan repeat item, while the copy_number argument specifies the number of identical model copies to make; the total number of parameter estimations performed is therefore copy_number * pe_number.

with tasks.ParameterEstimation.Context(mod, data_fname, context='s', parameters='g') as context:
    context.set('copy_number', 2)
    context.set('pe_number', 2)
    context.set('randomize_start_values', True)
    context.set('run_mode', True)
    config = context.get_config()

pe = tasks.ParameterEstimation(config)
data = viz.Parse(pe)

Note

The copy_number argument here doesn't do anything useful because run_mode=True, which tells pycotools to run the parameter estimations in series (i.e. back to back).

However, it is also possible to give run_mode=’parallel’ and in this case, each of the model copies will be run simultaneously.

with tasks.ParameterEstimation.Context(mod, data_fname, context='s', parameters='g') as context:
    context.set('copy_number', 2)
    context.set('pe_number', 2)
    context.set('randomize_start_values', True)
    context.set('run_mode', 'parallel')
    config = context.get_config()

pe = tasks.ParameterEstimation(config)
data = viz.Parse(pe)

Warning

Users should not use run_mode=’parallel’ in combination with a high copy_number as it will slow your system.

Your system has a limited amount of resources and can only handle a number of parameter estimations being run at once. For this reason, be careful when choosing the copy_number. For reference, my computer can run approximately 8 parameter estimations in different processes before slowing.
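If in doubt, a rough heuristic (not part of pycotools) is to cap copy_number at the number of CPU cores available:

import multiprocessing

# hypothetical heuristic: never launch more simultaneous estimations than cores,
# and never more than 8 (the rough limit mentioned above)
copy_number = min(8, multiprocessing.cpu_count())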

If you have access to a cluster running either SunGrid Engine or Slurm then each of the copy_number models will be submitted as a separate job. To do this set run_mode='slurm' or run_mode='sge' (see tasks.Run).

Warning

The cluster functions are fully operational on the Newcastle University clusters but untested on other clusters. If you run into trouble, contact me for help.

It is easy to support other cluster systems by adding a method to tasks.Run using tasks.Run.run_sge() and tasks.Run.run_slurm() as examples.
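As an illustration only, a new scheduler method might look something like the sketch below. The method name, the self.copasi_files attribute and the submission script are hypothetical; the real pattern should be copied from tasks.Run.run_sge() and tasks.Run.run_slurm() in the pycotools3 source:

import os
import subprocess

def run_pbs(self):
    """Hypothetical sketch: submit each model copy as a separate PBS/torque job."""
    for copasi_file in self.copasi_files:                     # assumed attribute holding the model copies
        script = os.path.splitext(copasi_file)[0] + '.sh'
        with open(script, 'w') as fh:
            fh.write('#!/bin/bash\n')
            fh.write('CopasiSE "{}"\n'.format(copasi_file))   # CopasiSE is COPASI's command line binary
        subprocess.check_call(['qsub', script])                # qsub is the PBS submission command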

Parameter estimation with multiple models

This is an example of how to configure a parameter estimation for multiple COPASI models using pycotools. We first create two similar but different toy models for demonstration, then simulate some experimental data from one of them and fit it back to both models.

import os, glob
import pandas, numpy
import matplotlib.pyplot as plt
import seaborn
from pycotools3 import model, tasks, viz

# set seaborn context
seaborn.set_context(context='talk')

## Choose a directory for our model and analysis
working_directory = os.path.abspath('')

model1_string = """
model model1()

    R1:   => A ; k1*S;
    R2: A =>   ; k2*A;
    R3:   => B ; k3*A;
    R4: B =>   ; k4*B*C; //feedback term
    R5:   => C ; k5*B;
    R6: C =>   ; k6*C;

    S = 1;
    k1 = 0.1;
    k2 = 0.1;
    k3 = 0.1;
    k4 = 0.1;
    k5 = 0.1;
    k6 = 0.1;
end
"""

model2_string = """
model model2()
    R1:   => A ; k1*S;
    R2: A =>   ; k2*A*C; //feedback term
    R3:   => B ; k3*A;
    R4: B =>   ; k4*B;
    R5:   => C ; k5*B;
    R6: C =>   ; k6*C;

    S = 1;
    k1 = 0.1;
    k2 = 0.1;
    k3 = 0.1;
    k4 = 0.1;
    k5 = 0.1;
    k6 = 0.1;
end
"""
# create paths to where we want the two models
copasi_file1 = os.path.join(working_directory, 'model1.cps')
copasi_file2 = os.path.join(working_directory, 'model2.cps')

# Assemble into lists
antimony_strings = [model1_string, model2_string]
copasi_files = [copasi_file1, copasi_file2]

# create models
model_list = []
for i in range(len(copasi_files)):
    model_list.append(model.loada(antimony_strings[i], copasi_files[i]))

## simulate some data, returns a pandas.DataFrame
data = model_list[0].simulate(0, 20, 1)

## write data to file
experiment_filename = os.path.join(working_directory, 'data_from_model1.txt')
data.to_csv(experiment_filename)

# Create the context, passing the model list rather than the Model object
with tasks.ParameterEstimation.Context(model_list, experiment_filename, context='s', parameters='g') as context:
    context.set('separator', ',')
    context.set('run_mode', True)
    context.set('randomize_start_values', True)
    context.set('method', 'genetic_algorithm')
    context.set('population_size', 25)
    context.set('lower_bound', 1e-1)
    context.set('upper_bound', 1e1)

    config = context.get_config()

# Do the parameter estimation
pe = tasks.ParameterEstimation(config)

# Parse the resulting data
data = viz.Parse(pe).data
print(data)

Working from the model.Model class directly

The pycotools3 tasks module contains classes for a bunch of copasi tasks that can be configured from python using pycotools. To simplify some of these tasks, wrappers have been built around the task classes in the model.Model class so that they can be used like regular methods. Here I demonstrate some of these.

We first configure a model for the demonstration

import os, glob
import pandas, numpy
import matplotlib.pyplot as plt
import seaborn
from pycotools3 import model, tasks, viz
seaborn.set_context(context='talk')

## Choose a directory for our model and analysis
working_directory = os.path.abspath('')

## In this model, A gets reversibly converted to B but the backwards reaction is additionally regulated by C.
## B is reversibly converted into C.
antimony_string = """
model simple_parameter_estimation()
    compartment Cell = 1;

    A in Cell;
    B in Cell;
    C in Cell;

    // reactions
    R1: A => B ; Cell * k1 * A;
    R2: B => A ; Cell * k2 * B * C;
    R3: B => C ; Cell * k3 * B;
    R4: C => B ; Cell * k4 * C;

    // initial concentrations
    A = 100;
    B = 1;
    C = 1;

    // reaction parameters
    k1 = 0.1;
    k2 = 0.1;
    k3 = 0.1;
    k4 = 0.1;
end
"""

copasi_file = os.path.join(working_directory, 'example_model.cps')

## build model
mod = model.loada(antimony_string, copasi_file)

assert isinstance(mod, model.Model)
Inserting parameters
dct = {
  'k1': 55,
  'k2': 36
}
mod.insert_parameters(parameter_dict=dct, inplace=True)

or

mod = mod.insert_parameters(parameter_dict=dct)

or

import pandas
df = pandas.DataFrame(dct, index=[0])
mod.insert_parameters(df=df, inplace=True)

or if the dataframe df has more than one parameter set we can specify the rank using the index argument.

import pandas
##insert second best parameter set
mod.insert_parameters(df=df, inplace=True, index=1)

Note

This is most useful when using viz.Parse output dataframes, which are pandas.DataFrame objects containing parameters in the columns and parameter sets in the rows, sorted by best RSS

or, assuming the variable results_directory is a path to a folder containing parameter estimation results:

mod.insert_parameters(parameter_path=results_directory, inplace=True)
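Combining this with the note above, a minimal sketch (assuming pe is a finished tasks.ParameterEstimation object for a single model, so that viz.Parse returns a single dataframe):

parsed = viz.Parse(pe).data                               # parameter sets in rows, sorted by RSS
mod.insert_parameters(df=parsed, index=0, inplace=True)   # insert the best parameter set into the model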
Simulating a time course
data = mod.simulate(0, 10, 11)

Simulates a deterministic time course, 11 time points between 0 and 10. data contains a pandas.DataFrame object with variables along the columns and time points down the rows.
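Because the result is an ordinary dataframe, you can look at it immediately with pandas' built-in plotting (matplotlib was imported above):

data.plot()   # one line per model variable, time points on the x axis
plt.show()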

fname = os.path.join(os.path.dirname(__file__), 'simulation_data.csv')
## write data to file named fname
data = mod.simulate(0, 10, 11, report_name=fname)

Like with the other shortcuts, arguments for the tasks.TimeCourse class are passed on.

data = mod.simulate(0, 10, 11, method='direct')
fname = os.path.join(os.path.dirname(__file__), 'scan_results.csv')
mod.scan(variable='A', minimum=5, maximum=10, report_name=fname)

By default the scan type is set to ‘scan’. We can change this

fname = os.path.join(os.path.dirname(__file__), 'scan_results.csv')
mod.simulate(0, 10, 11, method='direct', run_mode=False)
mod.scan(variable='A', scan_type='repeat',
         number_of_steps=10, report_name=fname,
         subtask='timecourse')

Note

In the call to mod.simulate we configure copasi to run a stochastic time course (method='direct') but do not execute it (run_mode=False). We then configure the repeat scan task to run the stochastic time course 10 times.

Sensitivities
sens = mod.sensitivities(
            subtask='steady_state', cause='all_parameters',
            effect='all_variables'
       )

Profile Likelihoods

Since a profile likelihood is just a parameter scan of parameter estimations, all we need to do to configure a profile likelihood analysis is to set up an appropriate ParameterEstimation.Config object and feed it into the ParameterEstimation class. This would be tedious to do manually but is easy with ParameterEstimation.Context.

import os, glob
import pandas, numpy
import matplotlib.pyplot as plt
import seaborn
from pycotools3 import model, tasks, viz

working_directory = os.path.abspath('')

antimony_string = """
model simple_parameter_estimation()
    compartment Cell = 1;

    A in Cell;
    B in Cell;
    C in Cell;

    // reactions
    R1: A => B ; Cell * k1 * A;
    R2: B => A ; Cell * k2 * B * C;
    R3: B => C ; Cell * k3 * B;
    R4: C => B ; Cell * k4 * C;

    // initial concentrations
    A = 100;
    B = 1;
    C = 1;

    // reaction parameters
    k1 = 0.1;
    k2 = 0.1;
    k3 = 0.1;
    k4 = 0.1;
end
"""

copasi_file = os.path.join(working_directory, 'example_model.cps')

## build model
mod = model.loada(antimony_string, copasi_file)

assert isinstance(mod, model.Model)

## simulate some data, returns a pandas.DataFrame
data = mod.simulate(0, 20, 1)

## write data to file
experiment_filename = os.path.join(working_directory, 'experiment_data.txt')
data.to_csv(experiment_filename)

The profile likelihood is calculated around the current parameter set in the model. If you want to change the current parameter set, maybe to the best fitting parameter set from a parameter estimation you can use the InsertParameters class. For now, we’ll assume the best parameter set is already in the model.

with tasks.ParameterEstimation.Context(
    mod, experiment_filename,
    context='pl', parameters='gm'
) as context:
    context.set('method', 'hooke_jeeves')
    context.set('pl_lower_bound', 1000)
    context.set('pl_upper_bound', 1000)
    context.set('pe_number', 25) # number of steps in each profile likelihood
    context.set('run_mode', True)
    config = context.get_config()

We set the method to Hooke and Jeeves, a local optimiser which does well with profile likelihoods. We also set the pl_lower_bound and pl_upper_bound arguments to 1000 (which are the defaults anyway). These are multipliers, not boundaries, of the profile likelihood. For instance, if the best estimated value of parameter A was 1, then the profile likelihood would stretch from 1e-3 to 1e3.

Now, like with other parameter estimations we can simply do

tasks.ParameterEstimation(config)

Because context='pl' was used, pycotools knows to copy the model for each parameter, remove the parameter of interest from the parameter estimation task and create a scan of the parameter of interest.

Cross validation

Validation experiments are not used in model calibration. Instead, the objective function is evaluated on the validation data to see whether the model can predict data it has not already seen. This can then be used as a stopping criterion for the algorithm, since we give a threshold for how closely the simulations must match the validation data. The idea is common practice in machine learning and is used to prevent overfitting.

Cross validation is a new feature of pycotools3 but has been supported by COPASI for some years. The idea is to rotate calibration and validation datasets until you have tried all the combinations.

Cross validation can help identify datasets which do and don't fit well together. Here we create a model, simulate three datasets, make a fourth dataset up, and use cross validation to infer which dataset is made up.

# imports and create our antimony model string
import os
import pandas
from pycotools3 import model, tasks, viz

working_directory = os.path.abspath('')
antimony_string =  '''
    model negative_feedback()
        // define compartments
        compartment cell = 1.0
        //define species
        var A in cell
        var B in cell
        //define some global parameter for use in reactions
        vAProd = 0.1
        kADeg  = 0.2
        kBProd = 0.3
        kBDeg  = 0.4
        //define initial conditions
        A      = 0
        B      = 0
        //define reactions
        AProd:  => A; cell*vAProd
        ADeg: A =>  ; cell*kADeg*A*B
        BProd:  => B; cell*kBProd*A
        BDeg: B =>  ; cell*kBDeg*B
    end
    '''

# Create a string for where we want the copasi model to go
copasi_file = os.path.join(working_directory, 'negative_fb.cps')
mod = model.loada(antimony_string, copasi_file)  # create the pycotools model

# create some filenames for experimental data
tc_fname1 = os.path.join(working_directory, 'timecourse1.txt')
tc_fname2 = os.path.join(working_directory, 'timecourse2.txt')
ss_fname1 = os.path.join(working_directory, 'steady_state1.txt')
ss_fname2 = os.path.join(working_directory, 'steady_state2.txt')

# simulate/create some experimental data
mod.simulate(0, 5, 0.1, report_name=tc_fname1)
mod.simulate(0, 10, 0.5, report_name=tc_fname2)
dct1 = {
    'A': 0.07,
    'B': 0.06,
    'C': 2.8
}
dct2 = {
    'A': 846,
    'B': 697,
    'C': 739
}
ss1 = pandas.DataFrame(dct1, index=[0])
ss1.to_csv(ss_fname1, sep='\t', index=False)
ss2 = pandas.DataFrame(dct2, index=[0])
ss2.to_csv(ss_fname2, sep='\t', index=False)

Configuring a cross validation experiment is similar to configuring a parameter estimation or profile likelihood: the difference is that you use context='cv' as the argument to ParameterEstimation.Context.

experiments = [tc_fname1, tc_fname2, ss_fname1, ss_fname2]

with tasks.ParameterEstimation.Context(
    mod, experiments, context='cv', parameters='gm'
) as context:
    context.set('randomize_start_values', True)
    context.set('method', 'genetic_algorithm')
    context.set('population_size', 20)
    context.set('number_of_generations', 50)
    context.set('validation_threshold', 500)
    context.set('cross_validation_depth', 1) ## 3/4 datasets calibration; 1/4 for validation.
    context.set('copy_number', 3) #3 per model (5 models here)
    context.set('run_mode', True)
    context.set('lower_bound', 1e-3)
    context.set('upper_bound', 1e2)
    config = context.get_config()

pe = tasks.ParameterEstimation(config)
data = viz.Parse(pe).concat()

Note

The cross_validation_depth argument specifies how far to go combinatorially. For instance, when cross_validation_depth=2 and there are 4 datasets, all combinations of 2 datasets for experiments and 2 for validation will be applied.
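For instance, a quick way to see how many validation sets a given depth implies (a sketch, assuming my reading of cross_validation_depth as the number of datasets held out for validation is correct):

from itertools import combinations

datasets = ['tc1', 'tc2', 'ss1', 'ss2']       # four experiment files
depth = 2                                      # cross_validation_depth
splits = list(combinations(datasets, depth))   # each combination becomes one validation set
print(len(splits))                             # 6 possible validation sets from 4 datasets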

Warning

While validation experiments are correctly configured with pycotools, there seems to be some instability in the current release of Copasi regarding multiple experiments in the validation datasets feature. Validation experiments work well when only one validation experiment is specified, but can crash when more than one is given.

Note

The copy_number applies per model here. So with 4 datasets and cross_validation_depth=1, four models are configured for validation. A model without any validation experiments is also configured for convenience.

The validation_weight and validation_threshold arguments are specific to validations. The copasi docs are vague on precisely what these mean, but the higher the threshold, the more rigorous the validation.

API documentation

Here you will find detailed information about every module, class and method in the pycotools3 package.

The model module

The pycotools3 model module is of central importance in pycotools.

  • Model: Construct a pycotools3 model from a copasi file
  • ImportSBML: Import from sbml file
  • InsertParameters: Parse parameters into a copasi model
  • BuildAntimony: Build a copasi model using antimony
  • Build: Build a copasi model

The tasks module

  • TimeCourse: Simulate a time course
  • ParameterEstimation.Config: A parameter estimation configuration class
  • ParameterEstimation: Interface to COPASI's parameter estimation task
  • ParameterEstimation.Context: A high level interface to create a ParameterEstimation.Config object
  • Sensitivities: Interface to COPASI's sensitivity task
  • Scan: Interface to COPASI's scan task
  • Reports: Creates reports in the copasi output specification section

The viz module

The viz module exists to make visualising simulation output quick and easy for common patterns, such as plotting time courses or comparing parameter estimation output to experimental data. However it should be emphasised that the matplotlib and seaborn libraries are always close to hand in Python.

The viz module is currently in a state of rebuilding and so I only describe here the features which currently work.

  • Parse: General class for parsing copasi output into Python
  • PlotTimeCourse: Plot time course data
  • Boxplots: Plot a boxplot for multi parameter estimation data

Support

Users can post a question on stack-overflow using the pycotools tag. I get email notifications for these questions and will respond.

People

PyCoTools has been developed by Ciaran Welsh in Daryl Shanley’s lab at Newcastle University.

Caveats

  • Non-ascii characters are minimally supported and can break PyCoTools
  • Do not use unusual characters or naming conventions (i.e. a reaction named "A -> B" will break pycotools)
  • In COPASI you can have (say) a global quantity and a metabolite with the same name because they are different entities. This is not supported in pycotools and you must use unique names for every model component

Citing PyCoTools

If you made use of PyCoTools, please cite this article using:

  • Welsh, C.M., Fullard, N., Proctor, C.J., Martinez-Guimera, A., Isfort, R.J., Bascom, C.C., Tasseff, R., Przyborski, S.A. and Shanley, D.P., 2018. PyCoTools: a Python toolbox for COPASI. Bioinformatics, 34(21), pp.3702-3710.

And also please remember to cite COPASI:

  • Hoops, S., Sahle, S., Gauges, R., Lee, C., Pahle, J., Simus, N., Singhal, M., Xu, L., Mendes, P. and Kummer, U., 2006. COPASI—a complex pathway simulator. Bioinformatics, 22(24), pp.3067-3074.

and tellurium:

  • Medley, J.K., Choi, K., König, M., Smith, L., Gu, S., Hellerstein, J., Sealfon, S.C. and Sauro, H.M., 2018. Tellurium notebooks—An environment for reproducible dynamical modeling in systems biology. PLoS computational biology, 14(6), p.e1006220.