
Casadi Interface for Optimum experimental design and Parameter Estimation and Identification Applications

casiopeia offers a user-friendly environment for optimum experimental design and parameter estimation and identification applications. It does so by providing Python classes that can be initialized with the problem specifications, while the computations can then easily be performed using the available class functions.

casiopeia uses the optimization framework CasADi to solve the resulting optimization problems.

Note

casiopeia is still in its testing state and does not yet contain all the features it will provide in future versions. Therefore, you should check for updates on a regular basis.

The following sections provide the information necessary to obtain, install and use casiopeia. If you encounter any problems using this software, please feel free to report them, with a description of how they occurred, to adrian.buerger@hs-karlsruhe.de.

New: try casiopeia live in your browser [1]

Contents

Get and install casiopeia

The next sections describe how to obtain and install casiopeia, and which prerequisites have to be met for casiopeia to work correctly.

Installation on Ubuntu 16.04

The following instructions show the installation on Ubuntu 16.04. If you are planning to install casiopeia on a Linux system other than Ubuntu 16.04, these commands need to be adapted accordingly.

Prerequisites

Python

In order to use casiopeia, please make sure that Python (the currently supported version is Python 2.7) as well as Python NumPy (>= 1.8), PyLab and Python Setuptools are installed on your system. This can easily be ensured by running

sudo apt-get update
sudo apt-get install python python-numpy python-scipy python-matplotlib python-setuptools --install-recommends

If you want to install casiopeia using pip, which is the recommended and easiest way, you also need to install pip by running

sudo apt-get install python-pip

Also, you might want to install the Spyder IDE for working with Python. You can install it by running

sudo apt-get install spyder

Note

These commands require root privileges. In case you do not have root privileges on your system, consider using Miniconda to install Python and the necessary modules into a user-writeable directory.

CasADi

For casiopeia to work correctly, you need CasADi version >= 3.1 to be installed on your system. Installation instructions for CasADi can be found here.

Note

Some plugins for CasADi require extra prerequisites to work on Linux. For a list of the required libraries and installation instructions, see the corresponding section in the CasADi installation guide. If something goes wrong with executing CasADi and/or casiopeia, missing one or more of these libraries might be the reason.

In addition to unpacking the archive you just obtained, please make sure that the unpacked folder that contains CasADi can be found by Python permanently. As mentioned in the CasADi installation instructions, this can e. g. be ensured by adding the CasADi directory to the PYTHONPATH variable permanently. Just open the file ~/.bashrc on your system with your favorite text editor, and add the line

export PYTHONPATH=$PYTHONPATH:/<path>/<to>/<casadi>/<folder>

where /<path>/<to>/<casadi>/<folder> needs to be adapted to the path of your unpacked archive. Afterwards, save these changes and close all open terminals. Now open a new terminal, and have a look at the value of PYTHONPATH by typing

echo $PYTHONPATH

It should now contain at least the path you just inserted. If everything went well, you should be able to open a Python console and execute the following commands

>>> from casadi import *
>>> x = MX.sym("x")
>>> print jacobian(sin(x),x)

without receiving error messages.

Option 2: Get casiopeia from GitHub

Installation

You can also obtain the casiopeia module directly from its GitHub repository. You can either clone the repository, or download the contained files within a compressed archive. To just obtain an archive, you do not need to have git installed, but cloning the repository provides an easy way to receive updates on casiopeia by pulling from the repository.

You can install git by running

sudo apt-get update
sudo apt-get install git

Note

These commands require root privileges. In case you do not have root privileges and git is not installed on your system, consider downloading the archive from the GitHub page using your favorite web browser instead of cloning the git repository.

Afterwards, you can clone the repository using the following commands

git clone git@github.com:adbuerger/casiopeia.git

and install casiopeia by running

sudo python setup.py install

from within the casiopeia directory. If this command fails with a message that CasADi cannot be found on your system, and you installed CasADi by appending its directory to PYTHONPATH via ~/.bashrc, it is most likely that your user's PYTHONPATH variable is not available when using sudo. In this case, try

sudo env PYTHONPATH=$PYTHONPATH python setup.py install

Note

These commands require root privileges. In case you do not have root privileges, consider adding the casiopeia directory to PYTHONPATH, as described above for CasADi.

Upgrades

If you obtained casiopeia by cloning the git repository, you can update the contents of your local copy by running

git pull

from within the casiopeia directory. In case you did not clone the repository, you would again need to download a compressed archive.

Afterwards, you need to install the recent version again by running

sudo python setup.py install

or

sudo env PYTHONPATH=$PYTHONPATH python setup.py install

respectively.

Note

These commands require root privileges.

Warning

If you installed casiopeia by adding the directory to PYTHONPATH, just place the newly obtained files in the previously defined path to upgrade to a new version of casiopeia. You do not need to add the directory to PYTHONPATH again then. Also, make sure not to add multiple versions of casiopeia to PYTHONPATH, since this might lead to conflicts.

Installation on Windows

The following instructions have been tested on Windows 7 64 bit.

Note

You need to have administrator rights on your system to be able to follow the instructions below.

Prerequisites

Python

The easiest way to meet the prerequisites for casiopeia and CasADi on a Windows system might be to install a recent version of Python(x,y), which is also the procedure recommended by the CasADi developers. It is recommended to do a “Full” installation. In the following, the instructions also assume that you are installing Python(x,y) and all components with their default paths.

CasADi

For casiopeia to work correctly, you need CasADi version >= 3.1 to be installed on your system. Installation instructions for CasADi can be found here.

After unpacking the archive, go to My Computer > Properties > Advanced System Settings > Environment Variables. If a variable PYTHONPATH already exists, append the full path to the CasADi folder to the end of the variable value, and separate this new path from the ones already contained with a semicolon (;). If PYTHONPATH does not yet exist on the system, create a new environment variable with this name, and fill in the path to the unpacked CasADi folder.

Option 1: Get casiopeia using pip (recommended)

Installation

casiopeia is listed on the Python Package Index. Since you installed pip with Python(x,y), you can obtain casiopeia by opening a command line and running

pip install casiopeia

Note

If you have problems obtaining casiopeia with pip (which can e. g. be caused by a company’s proxy server) consider Option 2: Get casiopeia from GitHub.

Upgrades

Upgrades to new releases of casiopeia can simply be obtained by running

pip install casiopeia --upgrade

Option 2: Get casiopeia from GitHub

Installation

You can also obtain the casiopeia module directly from its GitHub repository. Since installing git is more time-consuming on Windows than it is on most Linux systems, it is recommended (at least for less experienced users) to just download the contained files for casiopeia within a compressed archive.

Afterwards, unpack the archive, and install casiopeia by running

python setup.py install

from the command line, within the unzipped folder.

Note

If this procedure is for some reason not applicable for you, you can consider adding the casiopeia directory to PYTHONPATH instead, as described above for CasADi.

Upgrades

For upgrading casiopeia, you would again need to download a compressed archive.

Afterwards, you need to install the recent version by again running

python setup.py install

Warning

If you installed casiopeia by adding the directory to PYTHONPATH, just place the newly obtained files in the previously defined path to upgrade to a new version of casiopeia. You do not need to add the directory to PYTHONPATH again then. Also, make sure not to add multiple versions of casiopeia to PYTHONPATH, since this might lead to conflicts.

Recommendations

To speed up computations in casiopeia, it is recommended to install HSL for IPOPT. For instructions on how to install the solvers and for further information, see the page Obtaining HSL in the CasADi wiki.

Defining a system

Since casiopeia uses CasADi, the user first has to define the considered system using CasADi symbolic variables (of type MX). Afterwards, the symbolic variables that define the states, controls, parameters, etc. of the system are brought into connection by creating a casiopeia.system.System object.

class casiopeia.system.System(u='MX(u)', q='MX(q)', p=None, x='MX(x)', eps_u='MX(eps_u)', phi=None, f='MX(f)', g='MX(g)')[source]

The class System is used to define non-dynamic systems, explicit ODE systems or fully implicit DAE systems within casiopeia.

Raises:

TypeError, NotImplementedError

Parameters:
  • u (casadi.casadi.MX) – time-varying controls \(u \in \mathbb{R}^{\text{n}_\text{u}}\) that are applied piecewise constant over each control interval, and can therefore change from one interval to another, e. g. motor duty cycles, temperatures, mass flows (optional)
  • q (casadi.casadi.MX) – time-constant controls \(q \in \mathbb{R}^{\text{n}_\text{q}}\) that are constant over time, e. g. initial mass concentrations of reactants, elevation angles (optional)
  • p (casadi.casadi.MX) – unknown parameters \(p \in \mathbb{R}^{\text{n}_\text{p}}\)
  • x (casadi.casadi.MX) – differential states \(x \in \mathbb{R}^{\text{n}_\text{x}}\) (optional)
  • eps_u (casadi.casadi.MX) – input errors \(\epsilon_{u} \in \mathbb{R}^{\text{n}_{\epsilon_\text{u}}}\) (optional)
  • phi (casadi.casadi.MX) – output function \(\phi(u, q, x, p) = y \in \mathbb{R}^{\text{n}_\text{y}}\)
  • f (casadi.casadi.MX) – explicit system of ODEs \(f(u, q, x, p, \epsilon_\text{u}) = \dot{x} \in \mathbb{R}^{\text{n}_\text{x}}\) (optional)
  • g (casadi.casadi.MX) – equality constraints \(g(u, q, p) = 0 \in \mathbb{R}^{\text{n}_\text{g}}\) (optional)

Depending on the inputs the user provides, the System is interpreted as follows:

Non-dynamic system (x = None):

\[y = \phi(u, q, p)\]\[0 = g(u, q, p).\]

Explicit ODE system (x != None):

\[\begin{split}\begin{aligned} y & = \phi(u, q, x, p), \\ \dot{x} & = f(u, q, x, p, \epsilon_\text{u}). \end{aligned}\end{split}\]

This system object can now be used within the casiopeia simulation, parameter estimation and optimum experimental design classes.
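
As a minimal sketch (not taken from the casiopeia examples), the following lines define a hypothetical one-state ODE system with one unknown parameter and one time-varying control, with the full state being measured; all variable names and the model itself are made up for illustration:

from casadi import MX

import casiopeia

x = MX.sym("x", 1)   # differential state
p = MX.sym("p", 1)   # unknown parameter
u = MX.sym("u", 1)   # piecewise constant, time-varying control

f = -p[0] * x[0] + u[0]   # explicit ODE right-hand side
phi = x                   # output function: the full state is measured

system = casiopeia.system.System(x=x, p=p, u=u, f=f, phi=phi)

This system object is reused in the sketches of the following sections.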

System simulation

The module casiopeia.sim contains the class used for system simulation.

class casiopeia.sim.Simulation(system, pdata, qdata=None)[source]

The class casiopeia.sim.Simulation is used to simulate dynamic systems defined with the casiopeia.system.System class. It is assumed that the system contains a number of time-constant parameters \(p\).

Parameters:
  • system (casiopeia.system.System) – system considered for simulation, specified using the casiopeia.system.System class
  • pdata (numpy.ndarray, casadi.DMatrix) – values of the time-constant parameters \(p \in \mathbb{R}^{\text{n}_\text{p}}\)
  • qdata (numpy.ndarray, casadi.DMatrix) – optional, values of the time-constant controls \(q \in \mathbb{R}^{\text{n}_\text{q}}\); if no values are given, 0 will be used
run_system_simulation(x0, time_points, udata=None, integrator_options={}, print_status=True)[source]
Parameters:
  • x0 (numpy.ndarray, casadi.DMatrix, list) – state values \(x_0 \in \mathbb{R}^{\text{n}_\text{x}}\) at the first time point \(t_0\)
  • time_points (numpy.ndarray, casadi.DMatrix, list) – switching time points for the controls \(t_\text{N} \in \mathbb{R}^\text{N}\)
  • udata (numpy.ndarray, casadi.DMatrix) – optional, values for the time-varying controls at the first \(N-1\) switching time points \(u_\text{N} \in \mathbb{R}^{\text{n}_\text{u} \times \text{N}-1}\); if no values are given, 0 will be used
  • integrator_options (dict) – optional, options to be passed to the CasADi integrator (see the CasADi documentation for a list of all possible options)
  • print_status (bool) – optional, set to True (default) or False to enable or disable console printing.

This function will run a system simulation for the specified initial state values and control data from \(t_0\) to \(t_\text{N}\).

If you receive integrator-related error messages during the simulation, please check the corresponding parts of the CasADi documentation.

After the simulation has finished, the simulation results \(x_\text{N}\) can be accessed via the class attribute Simulation.simulation_results.
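
A minimal usage sketch, assuming the one-state system object from the sketch in Defining a system; the time grid and the "true" parameter value of 2.0 are made up for illustration:

import numpy as np

import casiopeia

time_points = np.linspace(0.0, 10.0, 101)        # N = 101 time points
udata = np.zeros((1, time_points.size - 1))      # controls for the first N-1 intervals
pdata = np.array([2.0])                          # parameter value used for simulation

simulation = casiopeia.sim.Simulation(system, pdata)
simulation.run_system_simulation(x0=[1.0], time_points=time_points, udata=udata)

x_sim = simulation.simulation_results            # simulated states x_N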

Parameter estimation

The module casiopeia.pe contains the classes for parameter estimation. For now, only least squares parameter estimation problems are covered.

Parameter estimation from single experiments

class casiopeia.pe.LSq(system, time_points, udata=None, qdata=None, ydata=None, pinit=None, xinit=None, wv=None, weps_u=None, discretization_method='collocation', **kwargs)[source]

The class casiopeia.pe.LSq is used to set up least squares parameter estimation problems for systems defined with the casiopeia.system.System class, using a given set of user-provided control data, measurement data and different kinds of weightings.

Raises:

AttributeError, NotImplementedError

Parameters:
  • system (casiopeia.system.System) – system considered for parameter estimation, specified using the casiopeia.system.System class
  • time_points (numpy.ndarray, casadi.DMatrix, list) – time points \(t_\text{N} \in \mathbb{R}^\text{N}\) used to discretize the continuous time problem. Controls will be applied at the first \(N-1\) time points, while measurements take place at all \(N\) time points.
  • udata (numpy.ndarray, casadi.DMatrix) – optional, values for the time-varying controls \(u_\text{N} \in \mathbb{R}^{\text{n}_\text{u} \times \text{N}-1}\) that can change at the switching time points; if no values are given, 0 will be used; note that the second dimension of \(u_\text{N}\) is \(N-1\) and not \(N\), since there is no control value applied at the last time point
  • qdata (numpy.ndarray, casadi.DMatrix) – optional, values for the time-constant controls \(q_\text{N} \in \mathbb{R}^{\text{n}_\text{q}}\); if no values are given, 0 will be used
  • ydata (numpy.ndarray, casadi.DMatrix) – values for the measurements at the switching time points \(y_\text{N} \in \mathbb{R}^{\text{n}_\text{y} \times \text{N}}\)
  • wv (numpy.ndarray, casadi.DMatrix) – weightings for the measurements \(w_\text{v} \in \mathbb{R}^{\text{n}_\text{y} \times \text{N}}\)
  • weps_u (numpy.ndarray, casadi.DMatrix) – weightings for the input errors \(w_{\epsilon_\text{u}} \in \mathbb{R}^{\text{n}_{\epsilon_\text{u}}}\) (only necessary if input errors are used within system)
  • pinit (numpy.ndarray, casadi.DMatrix) – optional, initial guess for the values of the parameters that will be estimated \(p_\text{init} \in \mathbb{R}^{\text{n}_\text{p}}\); if no value is given, 0 will be used; note that a poorly or wrongly chosen initial guess can cause the estimation to fail
  • xinit (numpy.ndarray, casadi.DMatrix) – optional, initial guess for the values of the states that will be estimated \(x_\text{init} \in \mathbb{R}^{\text{n}_\text{x} \times \text{N}}\); if no value is given, 0 will be used; note that a poorly or wrongly chosen initial guess can cause the estimation to fail
  • discretization_method (str) – optional, the method that shall be used for discretization of the continuous time problem w. r. t. the time points given in \(t_\text{N}\); possible values are “collocation” (default) and “multiple_shooting”

Depending on the discretization method specified in discretization_method, the following parameters can be used for further specification:

Parameters:
  • collocation_scheme (str) – optional, scheme used for setting up the collocation polynomials, possible values are radau (default) and legendre
  • number_of_collocation_points (int) – optional, order of collocation polynomials \(d \in \mathbb{Z}\) (default value is 3)
  • integrator (str) – optional, integrator to be used with multiple shooting. See the CasADi documentation for a list of all available integrators. As a default, cvodes is used.
  • integrator_options (dict) – optional, options to be passed to the CasADi integrator used with multiple shooting (see the CasADi documentation for a list of all possible options)

The resulting parameter estimation problem has the following form:

\[\begin{split}\begin{aligned} \text{arg}\,\underset{p, x, v, \epsilon_\text{u}}{\text{min}} & & \frac{1}{2} \| R(\cdot) \|_2^2 &\\ \text{subject to:} & & v_\text{k} + y_\text{k} - \phi(x_\text{k}, p; u_\text{k}, q) & = 0 \hspace{1cm} k = 1, \dots, N\\ & & g(x, p, \epsilon_\text{u}; u, q) & = 0 \\ \text{with:} & & \begin{pmatrix} {w_\text{v}}^T & {w_{\epsilon_\text{u}}}^T \end{pmatrix}^{1/2} \begin{pmatrix} {v} \\ {\epsilon_\text{u}} \end{pmatrix} & = R \\ \end{aligned}\end{split}\]

where \(g(\cdot)\) contains the discretized system dynamics according to the specified discretization method. If the system is non-dynamic, it only contains the user-provided equality constraints.

compute_covariance_matrix()

This function computes the covariance matrix for the estimated parameters from the inverse of the KKT matrix for the parameter estimation problem, which allows for statements on the quality of the values of the estimated parameters [1] [2].

For efficiency, only the inverse of the relevant part of the matrix is computed [3].

The values of the covariance matrix \(\Sigma_{\hat{\text{p}}}\) can afterwards be accessed via the class attribute LSq.covariance_matrix, and the contained standard deviations \(\sigma_{\hat{\text{p}}}\) for the estimated parameters directly via LSq.standard_deviations.

References

[1]Kostina, Ekaterina and Kostyukova, Olga: Computing Covariance Matrices for Constrained Nonlinear Large Scale Parameter Estimation Problems Using Krylov Subspace Methods, 2012.
[2]Kostina, Ekaterina and Kriwet, Gregor: Towards Optimum Experimental Design for Partial Differential Equations, SPP 1253 annual conference 2010, slides 12/13.
[3]Walter, Eric and Prozanto, Luc: Identification of Parametric Models from Experimental Data, Springer, 1997, pages 288/289.
print_estimation_results()
Raises: AttributeError

This function displays the results of the parameter estimation computations. It cannot be used before the function run_parameter_estimation() has been called. The results displayed by the function contain:

  • the values of the estimated parameters \(\hat{p}\) and their corresponding standard deviations \(\sigma_{\hat{\text{p}}}\) (the values of the standard deviations are presented only if the covariance matrix has already been computed),
  • the values of the covariance matrix \(\Sigma_{\hat{\text{p}}}\) for the estimated parameters (if it has already been computed), and
  • the durations of the estimation and (if already executed) of the covariance matrix computation.
run_parameter_estimation(solver_options={})
Parameters: solver_options (dict) – options to be passed to the IPOPT solver (see the CasADi documentation for a list of all possible options)

This function passes the least squares parameter estimation problem to the IPOPT solver. The status of IPOPT printed to the console provides information on whether the optimization finished successfully. The estimated parameters \(\hat{p}\) can afterwards be accessed via the class attribute LSq.estimated_parameters.
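
A hedged sketch of the typical single-experiment workflow, assuming the system, time_points and udata objects from the previous sketches; the measurement data ydata, the weightings wv and the initial parameter guess are placeholders that would come from your experiments in practice:

import numpy as np

import casiopeia

# Placeholder measurement data and weightings, shape (n_y, N) each
# (here n_y = 1 and N = 101 as in the simulation sketch).
ydata = np.zeros((1, time_points.size))
wv = np.ones((1, time_points.size))

pe = casiopeia.pe.LSq(
    system=system,
    time_points=time_points,
    udata=udata,
    ydata=ydata,
    wv=wv,
    pinit=np.array([1.0]),
    discretization_method="collocation")

pe.run_parameter_estimation()      # pass the problem to IPOPT
pe.compute_covariance_matrix()     # optional, enables quality statements
pe.print_estimation_results()

p_hat = pe.estimated_parameters
sigma_p = pe.standard_deviations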

Note

IPOPT finishing successfully does not necessarily mean that the estimation results for the unknown parameters are useful for your purposes, it just means that IPOPT was able to solve the given optimization problem. You have to verify your results in any case, e. g. by simulation using the casiopeia.sim.Simulation class!

Parameter estimation from multiple experiments

class casiopeia.pe.MultiLSq(pe_setups=[])[source]

The class casiopeia.pe.MultiLSq is used to construct a single least squares parameter estimation problem from multiple least squares parameter estimation problems defined via two or more objects of type casiopeia.pe.LSq.

In this way, the results of multiple independent experimental setups can be used for parameter estimation.

Note

It is assumed that the system description used for setting up the several parameter estimation problems is the same.

Parameters: pe_setups (list) – list of two or more objects of type casiopeia.pe.LSq
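
A brief sketch, assuming pe_a and pe_b are two casiopeia.pe.LSq objects set up with the same system description but different data sets, and assuming that the combined problem is solved through the same methods as shown for casiopeia.pe.LSq (an assumption, since these methods are not documented in this section):

import casiopeia

# pe_a and pe_b are assumed to be existing casiopeia.pe.LSq objects
multi_pe = casiopeia.pe.MultiLSq([pe_a, pe_b])
multi_pe.run_parameter_estimation()    # assumed to mirror the LSq interface
multi_pe.compute_covariance_matrix()
multi_pe.print_estimation_results()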

Optimum experimental design

The module casiopeia.doe contains the classes used for optimum experimental design.

Optimum experimental design of single experiments

class casiopeia.doe.DoE(system, time_points, uinit=None, umin=None, umax=None, qinit=None, qmin=None, qmax=None, pdata=None, x0=None, xmin=None, xmax=None, wv=None, weps_u=None, discretization_method='collocation', optimality_criterion='A', **kwargs)[source]

The class casiopeia.doe.DoE is used to set up Design-of-Experiments-problems for systems defined with the casiopeia.system.System class.

The aim of the experimental design optimization is to identify a set of controls that can be used for the generation of measurement data which allows for a better estimation of the unknown parameters of a system.

To achieve this, an information function on the covariance matrix of the estimated parameters is minimized. The values of the estimated parameters, which are mostly only an initial guess, are not changed during the optimization.

Optimum experimental design and parameter estimation methods can be used alternately until a desired accuracy of the parameters has been achieved.

Raises:

AttributeError, NotImplementedError

Parameters:
  • system (casiopeia.system.System) – system considered for the experimental design, specified using the casiopeia.system.System class
  • time_points (numpy.ndarray, casadi.DMatrix, list) – time points \(t_\text{N} \in \mathbb{R}^\text{N}\) used to discretize the continuous time problem. Controls will be applied at the first \(N-1\) time points, while measurements take place at all \(N\) time points.
  • umin (numpy.ndarray, casadi.DMatrix) – optional, lower bounds of the time-varying controls \(u_\text{min} \in \mathbb{R}^{\text{n}_\text{u}}\); if no values are given, \(-\infty\) will be used
  • umax (numpy.ndarray, casadi.DMatrix) – optional, upper bounds of the time-varying controls \(u_\text{max} \in \mathbb{R}^{\text{n}_\text{u}}\); if no values are given, \(\infty\) will be used
  • uinit (numpy.ndarray, casadi.DMatrix) – optional, initial guess for the values of the time-varying controls \(u_\text{N} \in \mathbb{R}^{\text{n}_\text{u} \times \text{N}-1}\) that (might) change at the switching time points; if no values are given, 0 will be used; note that a poorly or wrongly chosen initial guess can cause the optimization to fail, and note that the second dimension of \(u_\text{N}\) is \(N-1\) and not \(N\), since there is no control value applied at the last time point
  • qmin (numpy.ndarray, casadi.DMatrix) – optional, lower bounds of the time-constant controls \(q_\text{min} \in \mathbb{R}^{\text{n}_\text{q}}\); if no values are given, \(-\infty\) will be used
  • qmax (numpy.ndarray, casadi.DMatrix) – optional, upper bounds of the time-constant controls \(q_\text{max} \in \mathbb{R}^{\text{n}_\text{q}}\); if no values are given, \(\infty\) will be used
  • qinit (numpy.ndarray, casadi.DMatrix) – optional, initial guess for the optimal values of the time-constant controls \(q_\text{init} \in \mathbb{R}^{\text{n}_\text{q}}\); if no values are given, 0 will be used; note that a poorly or wrongly chosen initial guess can cause the optimization to fail
  • pdata (numpy.ndarray, casadi.DMatrix) – values of the time-constant parameters \(p \in \mathbb{R}^{\text{n}_\text{p}}\)
  • x0 (numpy.ndarray, casadi.DMatrix, list) – state values \(x_0 \in \mathbb{R}^{\text{n}_\text{x}}\) at the first time point \(t_0\)
  • xmin (numpy.ndarray, casadi.DMatrix) – optional, lower bounds of the states \(x_\text{min} \in \mathbb{R}^{\text{n}_\text{x}}\); if no value is given, \(-\infty\) will be used
  • xmax (numpy.ndarray, casadi.DMatrix) – optional, upper bounds of the states \(x_\text{max} \in \mathbb{R}^{\text{n}_\text{x}}\); if no value is given, \(\infty\) will be used
  • wv (numpy.ndarray, casadi.DMatrix) – weightings for the measurements \(w_\text{v} \in \mathbb{R}^{\text{n}_\text{y} \times \text{N}}\)
  • weps_u (numpy.ndarray, casadi.DMatrix) – weightings for the input errors \(w_{\epsilon_\text{u}} \in \mathbb{R}^{\text{n}_{\epsilon_\text{u}}}\) (only necessary if input errors are used within system)
  • discretization_method (str) – optional, the method that shall be used for discretization of the continuous time problem w. r. t. the time points given in \(t_\text{N}\); possible values are “collocation” (default) and “multiple_shooting”
  • optimality_criterion (str) –

    optional, the information function \(I_\text{X}(\cdot)\) to be used on the covariance matrix, possible values are A (default) and D, where

    \[\begin{split}\begin{aligned} I_\text{A}(\Sigma_\text{p}) & = \frac{1}{n_\text{p}} \text{Tr}(\Sigma_\text{p}),\\ I_\text{D}(\Sigma_\text{p}) & = \begin{vmatrix} \Sigma_\text{p} \end{vmatrix} ^{\frac{1}{n_\text{p}}}, \end{aligned}\end{split}\]

    for further information see e. g. [1]

Depending on the discretization method specified in discretization_method, the following parameters can be used for further specification:

Parameters:
  • collocation_scheme (str) – optional, scheme used for setting up the collocation polynomials, possible values are radau (default) and legendre
  • number_of_collocation_points (int) – optional, order of collocation polynomials \(d \in \mathbb{Z}\) (default value is 3)
  • integrator (str) – optional, integrator to be used with multiple shooting. See the CasADi documentation for a list of all available integrators. As a default, cvodes is used.
  • integrator_options (dict) – optional, options to be passed to the CasADi integrator used with multiple shooting (see the CasADi documentation for a list of all possible options)

You do not need to specify initial guesses for the estimated states, since these are obtained with a system simulation using the initial states and the provided initial guesses for the controls.

The resulting optimization problem has the following form:

\[\begin{split}\begin{aligned} \text{arg}\,\underset{u, q, x}{\text{min}} & & I(\Sigma_{\text{p}}(x, u, q; p)) &\\ \text{subject to:} & & g(x, u, q; p) & = 0\\ & & u_\text{min} \leq u_\text{k} & \leq u_\text{max} \hspace{1cm} k = 1, \dots, N-1\\ & & x_\text{min} \leq x_\text{k} & \leq x_\text{max} \hspace{1cm} k = 1, \dots, N\\ & & x_1 \leq x(t_1) & \leq x_1 \end{aligned}\end{split}\]

where \(\Sigma_p = \text{Cov}(p)\) and \(g(\cdot)\) contains the discretized system dynamics according to the specified discretization method. If the system is non-dynamic, it only contains the user-provided equality constraints.

References

[1]Körkel, Stefan: Numerische Methoden für Optimale Versuchsplanungsprobleme bei nichtlinearen DAE-Modellen, PhD Thesis, Heidelberg university, 2002, pages 74/75.
plot_confidence_ellipsoids(properties='initial')[source]
Parameters: properties (str) – Set whether the experimental properties for the initial setup (“initial”, default), the optimized setup (“optimized”) or for both setups (“all”) shall be plotted. In the latter case, both ellipsoids for one pair of parameters will be displayed within one plot.

Plot confidence ellipsoids for all parameter pairs. Since the number of plots can be large, all plots will be saved within a folder confidence_ellipsoids_scriptname in your current working directory rather than being displayed directly.
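
A hedged sketch of setting up a single-experiment design problem for the one-state example system from Defining a system; all bounds, initial guesses and the parameter guess pdata are placeholders, and the name of the method that starts the optimization (run_experimental_design() below) is an assumption, since it is not documented in this section:

import numpy as np

import casiopeia

time_points = np.linspace(0.0, 10.0, 101)

doe = casiopeia.doe.DoE(
    system=system,                                # system from "Defining a system"
    time_points=time_points,
    uinit=np.zeros((1, time_points.size - 1)),    # initial guess for the controls
    umin=np.array([-1.0]), umax=np.array([1.0]),  # control bounds (placeholders)
    pdata=np.array([1.0]),                        # current best guess for p
    x0=[1.0],                                     # initial state values
    xmin=np.array([0.0]), xmax=np.array([10.0]),  # state bounds (placeholders)
    wv=np.ones((1, time_points.size)),            # measurement weightings
    optimality_criterion="A")

doe.run_experimental_design()                     # assumed entry point, see class reference
doe.plot_confidence_ellipsoids(properties="all")  # compare initial and optimized setup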

Optimum experimental design of multiple experiments

class casiopeia.doe.MultiDoE(doe_setups=[], optimality_criterion='A')[source]

The class casiopeia.doe.MultiDoE is used to construct a single experimental design problem from multiple experimental design problems defined via two or more objects of type casiopeia.doe.DoE.

This provides the possibility to design multiple experiments within one single optimization, so that the individual experiments can focus on different aspects of the system, which in combination then yields more information about the complete system.

This functionality is also particularly useful in case an experiment is limited to small variable bounds or short time horizons, depends highly on the initialization of the system, or in any other case where a single experiment might not capture enough information about the system.

Note

It is assumed that the system description used for setting up the several experimental design problems is the same!

Parameters: doe_setups (list) – list of two or more objects of type casiopeia.doe.DoE
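
A brief sketch, assuming doe_a and doe_b are casiopeia.doe.DoE objects that use the same system description; as in the previous section, the name of the method that starts the optimization is an assumption:

import casiopeia

# doe_a and doe_b are assumed to be existing casiopeia.doe.DoE objects
multi_doe = casiopeia.doe.MultiDoE([doe_a, doe_b], optimality_criterion="A")
multi_doe.run_experimental_design()   # assumed entry point, see remark above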

Sample applications

The following sample applications give hands-on impressions of how to use casiopeia in practice. All of them (and some more) are contained within the examples directory of the casiopeia sources.

Parameter estimation for a Lotka-Volterra predator-prey-model

The aim of the application lotka_volterra_pe.py is to estimate the unknown parameters of a Lotka-Volterra predator-prey-model from experimentally obtained measurement data and given standard deviations for the measurements [1]. The predator-prey-model is an ODE of the form \(\dot{x} = f(x,p)\), given by

\[\begin{split}\begin{aligned} \dot{x}_1 &= - \alpha x_1 + \beta x_1 x_2 \\ \dot{x}_2 &= \gamma x_2 - \delta x_1 x_2 \\ \end{aligned}\end{split}\]

where \(\alpha = 1.0\) and \(\gamma = 1.0\), the states \(x\) and parameters \(p\) are defined as

\[\begin{split}x = \begin{pmatrix} {x_1} \\ {x_2} \end{pmatrix}, ~ p = \begin{pmatrix} {\beta} \\ {\delta} \end{pmatrix},\end{split}\]

and we can measure the full state, i. e.

\[\phi = x.\]
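
The model above could be written down for casiopeia roughly as follows; this is only a sketch, and the actual script lotka_volterra_pe.py in the examples directory may differ in details such as variable names:

from casadi import MX, vertcat

import casiopeia

alpha = 1.0
gamma = 1.0

x = MX.sym("x", 2)   # states x_1 and x_2
p = MX.sym("p", 2)   # unknown parameters beta and delta

f = vertcat(
    -alpha * x[0] + p[0] * x[0] * x[1],
    gamma * x[1] - p[1] * x[0] * x[1])

phi = x              # the full state is measured

system = casiopeia.system.System(x=x, p=p, f=f, phi=phi)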

The values resulting from the parameter estimation are

\[\begin{split}\hat{p} = \begin{pmatrix} {\hat{\beta}} \\ {\hat{\delta}} \end{pmatrix} = \begin{pmatrix} {0.693379029} \\ {0.341128482} \end{pmatrix}.\end{split}\]

The results for the system simulation using the estimated parameters in comparison to the measurement data are shown in the figure below.

_images/lv_results.png

Figure: Simulation results for the Lotka-Volterra predator-prey-model using the estimated parameters, compared to the given measurement data

Parameter estimation for a pendulum model

The aim of the application pendulum_pe.py is to estimate the spring constant \(k\) of a pendulum model from experimentally obtained measurement data [2]. The pendulum model is an ODE of the form \(\dot{x} = f(x,u,p)\), given by

\[\begin{split}\begin{aligned} \dot{\nu} &= \omega \\ \dot{\omega} &= \frac{k}{m L^2} (\psi - \nu) - \frac{g}{L} \sin(\nu) \\ \end{aligned}\end{split}\]

where \(m = 1.0\), \(L = 3.0\) and \(g = 9.81\), the states \(x\), controls \(u\) and parameters \(p\) are defined as

\[\begin{split}x = \begin{pmatrix} {\nu} \\ {\omega} \end{pmatrix}, ~ u = \begin{pmatrix} {\psi} \end{pmatrix}, ~ p = \begin{pmatrix} {k} \end{pmatrix},\end{split}\]

where the only control \(\psi\) is the initial actuation angle of the pendulum, and therefore stays constant over time. Also, we can measure the full state, i. e.

\[\phi = x.\]

The value resulting from the parameter estimation is

\[\hat{p} = \begin{pmatrix} {\hat{k}}\end{pmatrix} = \begin{pmatrix} {2.99763513} \end{pmatrix}.\]

The results for the system simulation using the estimated parameter in comparison to the measurement data are shown in the figures below.

_images/pendulum_results.png

Figure: Simulation results for the pendulum model using the estimated parameters, compared to the given measurement data

Parameter estimation for a model race car

The aim of the application 2d_vehicle_pe.py is to estimate the unknown parameters of a 2D race car model from experimentally obtained measurement data [3]. The race car and the interpretation of the model states are shown in the figure below [4].

_images/rc.png

Figure: Depiction of the race car showing the model's states

The 2D model of the race car is an ODE of the form \(\dot{x} = f(x,u,p)\), given by

\[\begin{split}\begin{aligned} \dot{X} &= v \, \text{cos}(\psi + C_{1} \delta)\\ \dot{Y} &= v \, \text{sin}(\psi + C_{1} \delta) \\ \dot{\psi} &= v \, \delta \, C_{2} \\ \dot{v} &= C_{\text{m}_{1}} \, D - C_{\text{m}_{2}} \, D \, v - C_{\text{r}_{2}} \, v^{2} - C_{\text{r}_{0}} - (v \, \delta)^{2} \, C_{2} \, C_{1}, \end{aligned}\end{split}\]

where the states \(x\), controls \(u\) and parameters \(p\) are defined as

\[\begin{split}x = \begin{pmatrix} {X} \\ {Y} \\ {\psi} \\ {v} \end{pmatrix}, ~ u = \begin{pmatrix} {\delta} \\ D \end{pmatrix}, ~ p = \begin{pmatrix} {C_{1}} \\ {C_{2}} \\ {C_{\text{m}_{1}}} \\ {C_{\text{m}_{2}}} \\ {C_{\text{r}_{2}}} \\ {C_{\text{r}_{0}}} \end{pmatrix},\end{split}\]

and we can measure the full state, i. e.

\[\phi = x.\]

The values resulting from the parameter estimation are

\[\begin{split}\hat{p} = \begin{pmatrix} {\hat{C_{1}}} \\ {\hat{C_{2}}} \\ {\hat{C_{\text{m}_{1}}}} \\ {\hat{C_{\text{m}_{2}}}} \\ {\hat{C_{\text{r}_{2}}}} \\ {\hat{C_{\text{r}_{0}}}} \end{pmatrix} = \begin{pmatrix} { 0.273408} \\ { 11.5602} \\ {2.45652} \\ {7.90959} \\ {-0.44353} \\ {-0.249098} \end{pmatrix}.\end{split}\]

The results for the system simulation using the estimated parameter in comparison to the measurement data are shown in the figures below.

_images/rc_results.png

Figure: Simulation results for the race car model using the estimated parameters, compared to the given measurement data

An evaluation of the covariance matrix for the estimated parameters shows that the standard deviations of \(\hat{C_{1}}\) and \(\hat{C_{2}}\) are relatively small in comparison to their own values, while the standard deviations of the other parameters are relatively big.

\[\begin{split}\hat{p} = \begin{pmatrix} {\hat{C_{1}}} \\ {\hat{C_{2}}} \\ {\hat{C_{\text{m}_{1}}}} \\ {\hat{C_{\text{m}_{2}}}} \\ {\hat{C_{\text{r}_{2}}}} \\ {\hat{C_{\text{r}_{0}}}} \end{pmatrix} = \begin{pmatrix} { 0.273408} \\ { 11.5602} \\ {2.45652} \\ {7.90959} \\ {-0.44353} \\ {-0.249098} \end{pmatrix} \pm \begin{pmatrix} {0.034497452} \\ {0.058569592} \\ {2.72097859} \\ {5.448817078} \\ {1.478999406} \\ {0.37343932} \end{pmatrix}\end{split}\]

This indicates that the estimation results for the parameters \(\hat{C_{\text{m}_{1}}}\), \(\hat{C_{\text{m}_{2}}}\), \(\hat{C_{\text{r}_{2}}}\) and \(\hat{C_{\text{r}_{0}}}\) are probably not accurate, and might change substantially for other measurement and control data. Optimum experimental design can be an option to counter this problem.

Optimum experimental design for a model race car

The aim of the application 2d_vehicle_doe_scaled.py is to solve an optimum experimental design problem for the 2D race car model from Parameter estimation for a model race car to obtain control values that allow for a better estimation of the unknown parameters of the model.

Initial setup

For this application, we assume that we are not bound to the previous race track to obtain measurements for the race car, but can drive the car on a rectangular mat of the race track’s material. The controls are bounded by the maximum and minimum values of the control measurements from Parameter estimation for a model race car, and the states are bounded by the corresponding maximum and minimum values of the state measurements. The bounds are introduced to prevent the optimizer from creating unrealistic scenarios that could e. g. cause the race car to fall over when taking turns too sharply, which is not explicitly considered within the model.

The previous parameter estimation results \(\hat{p}\) from Parameter estimation for a model race car are used as a “guess” for the parameter values for the experimental design, and with this, to scale all parameter values within the optimization to 1.0 to prevent influences of the numerical values of the parameters on the optimization result.

A subset of the control values from the previous estimation is used as initial guesses for the optimized controls. The quality of the initial experimental setup in terms of estimated standard deviations of the unknown parameters is evaluated as follows

\[\begin{split}p_\text{I} = \begin{pmatrix} {C_{1, \text{I}}} \\ {C_{2, \text{I}}} \\ {C_{\text{m}_{1},\text{I}}} \\ {C_{\text{m}_{2},\text{I}}} \\ {C_{\text{r}_{2},\text{I}}} \\ {C_{\text{r}_{0},\text{I}}} \end{pmatrix} = \begin{pmatrix} {1.0} \\ {1.0} \\ {1.0} \\ {1.0} \\ {1.0} \\ {1.0} \end{pmatrix} \pm \begin{pmatrix} {6.1591763006} \\ {0.318683714861} \\ {92.0037213296} \\ {62.6460661875} \\ {286.556042737} \\ {108.733245939} \end{pmatrix}\end{split}\]

which indicates that the experimental setup is rather inappropriate for a sufficient estimation.

Optimized setup

Note

Running this optimization takes about 10 min on an Intel(R) Core(TM) i5-4570 3.20GHz CPU.

We use the A-criterion as objective for the experimental design (see Optimum experimental design). The results of the optimization can be analyzed and visualized with the script 2d_vehicle_doe_scaled_validation.py. The figure below shows the optimized control values in comparison to the initially used control values, while the suffix coll indicates that the values were obtained using collocation discretization.

_images/rc_doe_controls.png

Figure: Optimized control values in comparison to the initially used control values

The figure below shows a comparison of the simulated states values for both initially used and optimized control values, and with this, the effect of the optimization on the route of the race car and its velocity during the measurements.

_images/rc_doe_states.png

Figure: Comparison of the simulated states values for initial and optimized controls

The quality of the optimized experimental setup in terms of estimated standard deviations of the unknown parameters is evaluated as follows

\[\begin{split}p_\text{O} = \begin{pmatrix} {C_{1, \text{O}}} \\ {C_{2, \text{O}}} \\ {C_{\text{m}_{1},\text{O}}} \\ {C_{\text{m}_{2},\text{O}}} \\ {C_{\text{r}_{2},\text{O}}} \\ {C_{\text{r}_{0},\text{O}}} \end{pmatrix} = \begin{pmatrix} {1.0} \\ {1.0} \\ {1.0} \\ {1.0} \\ {1.0} \\ {1.0} \end{pmatrix} \pm \begin{pmatrix} {1.93054150676} \\ {0.278656552587} \\ {1.96689422255} \\ {1.51815346784} \\ {3.42713773836} \\ {1.88475684297} \end{pmatrix}\end{split}\]

which indicates that the optimized setup is more appropriate for parameter estimation than the initial experimental design. However, the estimated standard deviations are still relatively large in comparison to the scaled parameter values, so it would probably make sense to refine the experimental design further.

Further steps

Possible strategies for further refinement of the experimental design could be to increase the duration of the experiment so that more measurements can be taken, or to loosen control and state bounds to allow for greater system excitation.

In case these strategies are not applicable (due to physical limitations, safety concerns or the like), designing multiple experiments within one optimization problem can be a useful approach, so that several independent experiments can focus on different aspects of the system, which allows for a structured gathering of additional information about the system that can later be used within one parameter estimation.

Both the planning of such experiments and the use of independent measurement data sets within one parameter estimation can be realized with casiopeia as well, see Optimum experimental design of multiple experiments and Parameter estimation from multiple experiments.

References

[1]Bock, Sager et al.: Übungen zur Numerischen Mathematik II, sheet 9, IWR, Heidelberg university, 2006.
[2]Diehl, Moritz: Course on System Identification, exercise 7, SYSCOP, IMTEK, University of Freiburg, 2014/2015.
[3]Verschueren, Robin: Design and implementation of a time-optimal controller for model race cars, Master’s thesis, KU Leuven, 2014.
[4]Spengler, Patrick and Gammeter, Christoph: Modeling of 1:43 scale race cars, Master’s thesis, ETH Zürich, 2010.
[1]This service is at the moment limited to one user at a time, due to restricted resources. If your computations do not start immediately, there’s probably another user testing casiopeia at the moment.