AT-TPC DAQ Help

This manual describes how to install, set up, and use the web-based GUI for the AT-TPC’s DAQ system. The documentation is divided into a few different sections. For some background information about the system, see Overview of DAQ system. The Installation and initial setup section describes how to install the system and its dependencies, like Docker. The next two sections, Configuring the system and Logging information about runs, show how to set up the system for data taking.

The most important section of this manual for experimenters taking shifts is probably Operating the DAQ system. It describes how to configure the CoBos and start and stop runs. It also has instructions on how to record parameters about the runs, like pressures and voltages.

At the end of the manual is the Developer documentation section, which contains information about how the system is implemented. This is probably only of interest to people who want to maintain the system or add new features.

Contents

Overview of DAQ system

The AT-TPC DAQ is based on a collection of programs provided by the GET collaboration. These provide the back-end of the system by handling CoBo configuration and data recording. This web application serves as a front-end for those programs.

GET software components

There are two programs, in particular, that need to be running for each CoBo. They are:

getEccSoapServer
This program controls the CoBo. It sends the configuration to the CoBo and tells it when to start and stop acquisition.
dataRouter
This program records the data.

The web interface controls the system by acting as a client for the getEccSoapServer. It does not communicate with the dataRouter directly.

CoBo state machine

The ECC server controls the CoBo using the model of a state machine. This means that the CoBo can be in one of several well-defined states, and to change from one state to another, it undergoes a well-defined transition. The state machine for the CoBo looks like this:

_images/statemachine.png

The ellipses represent the different states that the system may be in, the arrows along the right side show the forward state transitions, and the arrows along the left show the reverse state transitions.
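
For reference, here is a minimal sketch of the forward transitions as they are described later in this manual (the state and transition names follow the Operating the DAQ system section; this is an illustration based on this manual, not code from the GET software):

# Forward transitions: current state -> (transition name, next state)
FORWARD_TRANSITIONS = {
    'Idle': ('Describe', 'Described'),
    'Described': ('Prepare', 'Prepared'),
    'Prepared': ('Configure', 'Ready'),
    'Ready': ('Start', 'Running'),
}

# The reverse transitions step back one state at a time: for example,
# pressing "Reset" in the "Ready" state returns the system to "Prepared".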

Config files

Each forward transition requires a particular part of the configuration file. This is why we have three config files for each setup (or, alternatively, two real files and one symbolic link). The expected files are named:

  • describe-[name].xcfg for the describe step
  • prepare-[name].xcfg for the prepare step
  • configure-[name].xcfg for the configure step

These names will be shown stripped of their prefix and suffix in the DAQ interface. For example, a file called describe-cobo0.xcfg will be shown as simply cobo0.
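
As an illustration, the stripping could be done like this (display_name is a hypothetical helper written for this example, not a function in the app):

def display_name(filename, step):
    # Strip the step prefix and the '.xcfg' suffix:
    # display_name('describe-cobo0.xcfg', 'describe') -> 'cobo0'
    prefix = step + '-'  # 'describe-', 'prepare-', or 'configure-'
    return filename[len(prefix):-len('.xcfg')]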

Web-based GUI

The interface to the system is a web application written in Python 3 using the Django web framework for the back-end with Bootstrap providing the front-end. The structure of the code is described briefly in Developer documentation and in comments directly in the code itself.

As Django apps can be a bit tricky to serve, the app has been structured to run inside Docker containers. The Dockerized version of the app can be built using the Docker Compose utility like this:

docker-compose build

Installation and initial setup

Requirements

GET software

As mentioned previously, this DAQ software depends on two of the programs from the GET software suite: the getEccSoapServer and the dataRouter. These programs are not provided with this package, so they must be compiled and installed separately before this package can be installed.

Docker

Docker and the Docker Compose tool are required to get the containerized version of the DAQ software running. Docker can be downloaded and installed from its developers’ website or from your package manager if you’re using Linux.

Networking

If you’ll be running the system on multiple computers, be sure to consider where files will be stored. The ECC server will expect to find config files locally wherever it’s running, so if multiple ECC servers are running on multiple computers, you will likely want to share a folder on your local network to keep the config files in.

Source code

Finally, get the latest version of the DAQ software from GitHub:

git clone https://github.com/attpc/attpc-daq.git

Always use the latest version from the Master branch. The version in the Develop branch may not be stable.

Creating the environment file

There are a few environment variables that need to be set to system-dependent values inside the Docker container. Several of these variables provide encryption keys or passwords, so this environment file is not in the Git repository (and it should never be committed to the repository!).

Create a file in the root of the repository with the following values. Remove the comment strings (starting with #) before saving it.

DAQ_IS_PRODUCTION=True         # Tells the system to use the production settings, rather than debug.
POSTGRES_USER=[something]      # A user name for the PostgreSQL database. Set it to something reasonable.
POSTGRES_PASSWORD=[something]  # A secure, random password that you will not likely need to remember.
POSTGRES_DB=attpcdaq           # The name of the database for PostgreSQL
DAQ_SECRET_KEY=[something]     # A secure, *STRONG* random string for Django's cryptography tools.

The PostgreSQL user name is not important; just set it to something reasonable. The database password and the Django secret key, on the other hand, should both be long, random strings of characters, since you will not need to remember them.

Warning

Although it may not seem that important to have a strong password on the local network, consider that the Django secret key is used to derive everything cryptography-related in the app. This means that it’s especially important for this key to be both strong and secret.

One way to generate these random strings is the following Python script:

import random

# Draw 50 characters from a cryptographically secure random source.
chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'
sr = random.SystemRandom()
key = ''.join(sr.choice(chars) for _ in range(50))
print(key)

Or, if you prefer a one-liner (note the slightly reduced character set, which avoids characters the shell might interpret inside double quotes):

python -c "import random; chars = 'abcdefghijklmnopqrstuvwxyz0123456789%^&*(-_=+)'; sr = random.SystemRandom(); print(''.join(sr.choice(chars) for i in range(50)))"

Building the containers

Once you’ve installed Docker and docker-compose, open a terminal in the root of the repository. This is the directory with the docker-compose.yml file. The Docker images can then be built with the command:

docker-compose build

This will create a set of Docker images and install all of the software’s dependencies inside them. This will require an internet connection.

Starting the app

Start all of the containers and the virtual network connecting them by running:

docker-compose up

This will instantiate the containers and start them, and then it will start printing the standard output from the containers. Keep this terminal window running to see the output as the program runs. If you want to quit the program later, press Control-c in this terminal.
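
Alternatively, Docker Compose can run the containers in the background:

docker-compose up -d

In that case, the output can be inspected later with docker-compose logs, and the containers can be stopped with docker-compose down.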

The first time you run the code, it will need to do some housekeeping to get set up. This may take a minute or so. When the output printed to the terminal slows down or stops, continue with the next steps.

First-run setup

When the code is freshly installed, the database that backs the web app will be empty. We need to create a user in the web app so that we can log in and set up an experiment. To do this, open a new terminal and run this command:

docker exec -it attpcdaq_web_1 python manage.py createsuperuser

If we break this command down into parts, it opens a TTY inside the container running the Django app (docker exec -it attpcdaq_web_1) and runs the Django manage.py script to create a superuser account (python manage.py createsuperuser). It will prompt you for a username and password, which you should choose and remember for later.

Once you’ve made a superuser account, open a browser to http://localhost:8080/admin to access the Django admin interface. Log in with the username and password you just set up. This will put you on the Admin page.

_images/admin_page.png

This page allows you to access the internals of the DAQ web interface and directly change the contents of its database. For now, click on “Experiments” under the “DAQ” header and then click the “Add Experiment” button on the next page.

_images/add_experiment.png

Click the green plus to add a new regular user account.

Note

Experiments are associated with user names in a one-to-one mapping in this program, so every time you add an experiment, you should also create a new experimental user to go along with it.

Also enter a name for the experiment. Data will be written into a directory with this name at the end of each run. Spaces are OK in this name. Finally, click “Save” to create the experiment.

Once you’ve finished this, click “Log Out” in the upper right to log out of the admin interface.

Starting the remote processes

Note

This section assumes the code is running on macOS. Linux distributions support a similar method of configuring a process to automatically launch using systemd services or init scripts, but that will not be covered here.

Under macOS, the remote GET processes are managed by launchd, the operating system’s service-management daemon. It will automatically re-launch the processes if they fail, and it will coordinate logging of the processes’ standard output to a log file.

The behavior of launchd with respect to the GET software components is controlled by a Launch Agent plist file. Example plist files are included in the Git repository, but here is an annotated example for the ECC server:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- A label to identify the program -->
    <key>Label</key>
    <string>attpc.getEccSoapServer</string>

    <!-- Any necessary environment variables. This might include settings for paths
         needed for libraries installed using MacPorts, for example.-->
    <key>EnvironmentVariables</key>
    <dict>
        <key>DYLD_FALLBACK_LIBRARY_PATH</key>
        <string>/opt/local/lib</string>
    </dict>

    <!-- The commands needed to start the program. Each element of the command is given in
         a separate <string> tag. The first element should be the full path to the program,
         and the remaining elements give the command line arguments. -->
    <key>ProgramArguments</key>
    <array>
        <string>/path/to/getEccSoapServer</string>
        <string>--config-repo-url</string>
        <string>/path/to/configs/directory</string>
    </array>

    <!-- The working directory for the program. This is important for the dataRouter as it's
         where that program will write the data. -->
    <key>WorkingDirectory</key>
    <string>/path/to/working/directory</string>

    <!-- Where to write the standard out and standard error files. These may be the same file.
         It is probably best to put the logs in ~/Library/Logs since that will allow you to
         view them with the Console application. -->
    <key>StandardOutPath</key>
    <string>/Users/USER/Library/Logs/getEccSoapServer.log</string>

    <key>StandardErrorPath</key>
    <string>/Users/USER/Library/Logs/getEccSoapServer.log</string>

    <!-- Keep the program running at all times, even if there are no incoming connections. -->
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>

A similar file should be created for the data router with the appropriate arguments.

Once plist files have been created, they are conventionally placed in ~/Library/LaunchAgents, and agents in that directory are loaded automatically when the user logs in. To launch the programs manually, use launchctl:

launchctl load ~/Library/LaunchAgents/attpc.getEccSoapServer.plist

Manually stopping the programs is very similar. Just replace load with unload in the above command:
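
launchctl unload ~/Library/LaunchAgents/attpc.getEccSoapServer.plist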

This can also be automated for all of the remote computers using, for example, Apple Remote Desktop.

Configuring the system

Once everything is up and running, the next step is to tell the DAQ system about the components you’ll be using. First, log into the system. Go to http://localhost:8080 in a browser to get to the main login page:

_images/daq_login.png

Sign in using the experiment account you created in the last section.

Note

Don’t use the superuser account to log in here, or you’ll just get an error page. That account should only be used to sign into the admin page.

After signing in, you’ll find yourself on the main page. If you’ve just installed the system, it should be blank, like this:

_images/status_blank.png

We need to tell the system about the data routers and ECC servers in the system. That can be accomplished a few different ways, but the easiest method is to use the Easy Setup page.

Note

Just to be clear, all of the steps on this page are used to set up the model of the GET electronics in the DAQ GUI. These steps will not start getEccSoapServer or dataRouter processes. That must be done separately. This section just tells the system where to contact these processes, and we’re assuming that they’re already running and reachable from the network.

Easy setup

On the status page, click the “Easy setup” link in the left-hand menu column. This will take you to a form that you can fill out to automatically set up the system with some default values. Fill in values for the following fields:

Number of CoBos
How many CoBos are you using? A data router will be created for each one.
Use one ECC server for all sources?
If this is checked, the system will create one ECC server and link all CoBos to it. If this is unchecked, a separate ECC server object will be created for each CoBo.
IP address of first CoBo ECC server
If we’re using one global ECC server, it will have this IP address. If each CoBo has its own ECC server, the first ECC server will get this IP address, and subsequent servers will get this address plus an offset in the last segment. For example, if this address is set to 192.168.1.10, CoBo 0’s ECC server will be at 192.168.1.10, CoBo 1’s ECC server will be at 192.168.1.11, CoBo 2’s ECC server will be at 192.168.1.12, etc. (This offset scheme is sketched in code just after this list.)
IP address of first CoBo data router
The address assigned to the data router of the first CoBo. Subsequent CoBos have an address that is incremented by an offset like above.
Is there a MuTAnT?
If so, the system will create a data router object for it.
IP address of the MuTAnT ECC server
If there is only one global ECC server, the MuTAnT will also be connected to that server, so this field will have no effect. Otherwise, the MuTAnT’s ECC server will be found at this address.
IP address of MuTAnT data router
The address where we should look for the MuTAnT data router.
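
To make the offset scheme concrete, here is a minimal sketch of the address arithmetic (nth_address is a hypothetical helper for illustration, not code from the app):

import ipaddress

def nth_address(first_address, n):
    # Offset the last segment of the address by n, e.g.
    # nth_address('192.168.1.10', 2) == '192.168.1.12'
    return str(ipaddress.IPv4Address(first_address) + n)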

Note

Again, just to be clear, the IP addresses entered here should be the addresses of the computers where the ECC server and data router processes are already running.

Once you click “Submit”, the system will create all of the necessary objects for this setup.

Danger

Submitting this form will overwrite the current DAQ GUI configuration. This will not destroy any data or config files, but it will remove any Data Router, ECC Server, and Data Source objects you’ve previously configured.

Manual configuration

If you need to tweak the results of the easy setup page, or if you need something more sophisticated than what it provides, you can always set things up manually. Under “Setup” in the left-hand navigation menu, there are links for setting up ECC servers, data routers, and data sources. Each of these leads to a table of the instances of that object that are currently set up. You can add a new instance using the “Add” button in the table header, and instances can be edited or removed using the buttons in each row.

To manually set up the system, you should first create your ECC servers and data routers. Then, create data source objects to link the two. For more information about the model used to describe the system, see Modeling the system in code.

Logging information about runs

In addition to controlling data taking, the DAQ system also allows you to record metadata about each run. This includes information about when the runs started and stopped along with metadata about the conditions during the run. This is intended to replace a physical log book with run data sheets. The set of items that are recorded is customizable, but there are a few fields which are always recorded.

Default run information

The default set of information will be recorded for every run, regardless of configuration. This set of fields includes the following:

  • A sequential run number
  • A run class identifying the type of the run. Options include “Testing”, “Production”, “Beam”, “Pulser”, and “Junk.”
  • A title or label for the run
  • The date and time when the run started and ended
  • The name of the config file(s) used for this run

Adding additional fields

In addition to these defaults, any number of custom fields can be added. These fields, known in the DAQ software as observables, can be used to record detector parameters like voltages and pressures. They should be set up at the beginning of an experiment, but they can also be added later.

To set up observables, click “Observables” under “Setup” in the left-hand menu. This will bring you to a list of the observables that are currently set up in the system. Add a new one by clicking the “Add” button in the top right corner of the “Observables” panel.

Tip

Observables in this list can be reordered by clicking and dragging the handle on the left-hand side of each row. This order will be remembered, and the fields for the observables will be presented in this order when entering run information.

An observable has four properties that you can set:

Name
The name of the measurement. Choose something descriptive, but don’t include units in the name; those are set separately in the Units field below.
Value type
What type of data is this? Options include integer, floating point, and string values.
Units
The units this will be recorded in. This is just for display, and no unit conversions will be done by the software.
Comment
This optional comment will be shown next to the field on the run data sheet for this observable. This could be used to make a brief note of how to take a particular measurement, for example.

Fill these fields in and click “Submit” to add a new observable.

Operating the DAQ system

At this point, we’re nearly ready to take data. This page will describe how to choose a configuration file and start and stop runs. This is probably the most relevant part of the manual from the point of view of the person taking an experimental shift.

Web GUI status page

After logging into the system at http://localhost:8080 or whatever address the system is available at, you will arrive at the main status page:

_images/daq_main.png

This page shows an overview of what’s currently happening in the system. It is divided into a set of panels:

Run Information
This panel has details about the current run, like how long it has been going and what run number is currently being recorded.
ECC Server Status
This panel lists the status of each ECC server the system knows about. The “State” indicator shows what state machine state the ECC server is in (e.g. “Idle”, “Ready”, “Running”, etc.). The “Selected Config” column lists the name of the config file set that will be used to configure the devices. The “Controls” column contains a set of buttons for changing the state of an individual ECC server. These buttons should only be used for troubleshooting purposes. Finally, clicking the icon in the “Logs” column will display the last few lines of the log file for that ECC server.
Data Router Status
This panel shows the state of all of the data routers the system knows about. The “Online” column shows whether the data router process is running, and the “Clean” column shows whether the data router’s staging directory is free of unsorted files. Both of these should display green checkmarks if the system is ready to take data.
Log Entries
This panel will show the latest error messages from the web interface. This does not include error messages that may be produced by the GET software. You can click on an individual error to get more information and possibly a traceback. Finally, clicking “Clear” will discard all error messages.
Controls
This set of large buttons configures the entire system at once. This is what you should use to control the system. The reset button will step the system back one state. For example, if the system is in the “Ready” state, pressing Reset will step it back to “Prepared”.

Selecting a configuration

Once all necessary processes are up and running, the ECC Server Status panel should display a status of “Idle” for each ECC server and the Data Router Status panel should show green check marks next to each data router. At this point, you should select a config file for each ECC server.

Config files can be selected by clicking the pencil icon next to the current config name in the Selected Config column of the ECC Server Status panel.

_images/config_column.png

This will bring up a page with a drop-down menu listing the configurations available for that ECC server. The list of available configurations contains all possible permutations of the set of describe-*.xcfg, prepare-*.xcfg, and configure-*.xcfg files known to the ECC server. Each configuration is identified by a name composed of the names of the three *.xcfg files that go into it, formatted as [describe-name]/[prepare-name]/[configure-name]. For example, if you want to configure a data source using the files describe-cobo0.xcfg, prepare-experiment.xcfg, and configure-experiment.xcfg, then you should choose the configuration called cobo0/experiment/experiment. See Config files for more information about these files and their naming convention.
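
In other words, the list of configurations is the Cartesian product of the three sets of files. As a rough sketch (with hypothetical file lists for illustration):

from itertools import product

describes = ['cobo0', 'cobo1']
prepares = ['experiment']
configures = ['experiment']

# One configuration per permutation, named [describe]/[prepare]/[configure]
names = ['{}/{}/{}'.format(d, p, c)
         for d, p, c in product(describes, prepares, configures)]
# names == ['cobo0/experiment/experiment', 'cobo1/experiment/experiment']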

Preparing to take runs

After selecting a configuration, the CoBos and MuTAnT must be configured to prepare them to take data. This can be done using the first three buttons on the main Controls panel.

_images/prepare_buttons.png

Begin by clicking the “Describe all” button. The system will then send a message to the ECC servers telling them to execute the “Describe” transition on the CoBos. The status label for each ECC server should then disappear and be replaced by a spinning cursor. Once the transition is finished, each ECC server should list a status of “Described”, and the overall system status in the top-right corner should also be shown as “Described.”

Note

These system-wide buttons only work if all ECC servers are in the same state. If they are in different states, you will need to use the individual controls in the ECC Server Status panel to bring them into the same state.

The next two steps are nearly identical. Click the “Prepare all” button, and wait until the status on each ECC server is shown as “Prepared.” Finally, click “Configure all,” and wait for a status of “Ready.” At this point, the system is ready to take data.

Note

If one or more of the CoBos fails to complete the state transition, their ECC servers will remain in whatever state they started in. This will be apparent since that ECC server will have a different label from the others, and the overall system status in the top-right corner will be shown as “Error.” If this happens, look for an error message in the “Log entries” panel at the bottom of the page, and try to diagnose the problem. Once the problem is fixed, try using the individual controls in the ECC Server Status panel to bring the troublesome server to the same state as the others.

Starting a run

Runs are controlled using the “Start all” and “Stop all” buttons in the main Controls panel.

_images/start_stop_buttons.png

Once you click “Start all,” the CoBos will begin recording data and the Run Information panel should update to reflect the new run.

_images/run_info_panel.png

Danger

Data taking on the CoBos can also be started and stopped using the individual source control buttons on the ECC Server Status panel; however, if this is done, the global run number will not be updated. Therefore, these individual buttons should only be used in the case of an error where a CoBo fails to start recording data.

Recording run metadata

Once a new run has been started, metadata about the run can be entered by clicking on either the “Update values” or “Same as previous” button on the Run Information panel. Both of these will bring up a form where you can enter information about the current run. The only difference between the two is that the “Same as previous” button will pre-fill some fields with their values from the previous run. This is useful for values that don’t change often.

_images/run_metadata_page.png

Fill in any values on this page that were not filled automatically, and then click “Submit” to save them. You can get back to the status page by clicking “Status” in the left-hand menu.

Tip

The run will continue even if you navigate away from the status page or close the web browser.

Stopping a run

When it is time to stop a run, click the “Stop all” button. This will tell the CoBos to stop recording data, and it will also tell the system to connect to each computer where the data router is running and rearrange the data files into a directory for the just-completed run. Watch the “Clean” column in the Data Router Status panel to see when this process has finished.

Warning

It may take several seconds for the data files to be rearranged on each computer. You must wait until this process is complete before the system will allow you to start a new run.

Resetting the system

When an experiment is complete, or when you want to re-configure the CoBos, the system should be reset to the “Idle” state. This can be done using the “Reset all” button in the main Controls panel. One click of this button will step each ECC server back by one state in the state machine (see CoBo state machine).

Note

Each transition must finish before you click the Reset button again.

Developer documentation

This section contains some information about the structure of the DAQ code and some explanation of the design of the system. This will be useful mostly for people who want to modify the code or fix bugs.

Structure of the DAQ system

The AT-TPC DAQ system runs inside a collection of Docker containers. Each of these containers is responsible for running part of the system. In general, one container corresponds to one process. The responsibilities of each container are outlined below.

Django application

Container/service name: web

This is the core of the system, and it’s the container in which nearly all of the Python code in this application runs. The main process in this container is the Gunicorn web server, which runs the Django application that will be described in the next pages of this documentation.

This server responds to any requests for dynamic web content. When you click a link to load a page of the DAQ app, the Django library calls the appropriate functions in the web app to dynamically generate the HTML that will be shown. This also includes calls to the API that communicates with the ECC servers. These calls are implemented as functions that get called when certain URLs are requested.

NGINX web server

Container/service name: nginx

NGINX is a commonly used web server. It acts as a front-end to the application. When a URL is requested, NGINX receives the request first and decides whether the request is for static content or dynamic content. Requests for dynamically generated content are forwarded to the Gunicorn server described above for further processing. Requests for static content (such as CSS files, the help pages, and static images) are processed by NGINX itself in order to reduce the load on the Gunicorn server.

Celery task queue

Container/service name: celery

Celery is a Python-based, distributed, asynchronous task queue system. It receives messages from Django and schedules tasks accordingly. This allows asynchronous execution of portions of the web app’s code. For example, when you configure the CoBos, a set of tasks is sent to the Celery server telling it to perform the configuration.

This is useful for long-running tasks like the configuration commands. If these tasks were executed synchronously inside the main Django process, the web interface would become unresponsive until the tasks finished. Instead, we execute the tasks asynchronously in the Celery worker processes and update the GUI later when the tasks are finished.

RabbitMQ message broker

Container/service name: rabbitmq

RabbitMQ is a “message broker” that coordinates communication between the main process of the Django application and the Celery task queue system. It needs to be running, but otherwise it is not particularly interesting from the perspective of the DAQ system.

PostgreSQL database

Container/service name: db

This is the database used to store the internal configuration of the web app. This stores things like the IP addresses of the ECC servers and data routers, the name of the config file to use for each CoBo, the history of recent runs, and the name of the current experiment.

Modeling the system in code

In the Django framework, models are used to represent entities. A model has a collection of fields associated with it, and these fields are mapped to columns in the model’s representation in the database. The models in the AT-TPC DAQ app are used to represent components of the DAQ system, including things like ECC servers and data routers. This page will provide an overview of the different models in the system and how they work together. For more specific information about each model, refer to their individual pages.

DAQ system components

The GET DAQ system is modeled using three classes: the ECCServer, the DataRouter, and the DataSource.

The ECC server

The ECCServer model is responsible for all communication with the GET ECC server processes. There should be one instance of this model for each ECC server in the system. The ECCServer has fields that store the IP address and port of the ECC server, and it also keeps track of which configuration file set to use, what the state of the ECC server is with respect to the CoBo state machine, and whether the ECC server is online and reachable.

In addition to storing basic information about the ECC servers, this model also has methods that allow it to communicate with the ECC server it represents. The refresh_configs() method fetches the list of available configuration file sets from the ECC server and stores it in the database. The refresh_state() method fetches the current CoBo state machine state from the ECC server and updates the state field accordingly. Finally, the method change_state() will tell the ECC server to transition its data sources to a different state. This last method is used to configure, start, and stop the CoBos during data taking.

Communication with the ECC server is done using the SOAP protocol. This is performed by a third-party library which is wrapped by the EccClient class in this module. The interface to the ECC server is defined by the file web/attpcdaq/daq/ecc.wsdl, which was copied from the source of the GET ECC server into this package. If the interface is updated in a future version of the ECC server, this file should be replaced.

The data router

The DataRouter model stores information about data routers in the system. The data router processes are each associated with one data source, and they record the data stream from that source to a GRAW file. This model simply stores information about the data router like its IP address, port, and connection type. This information is forwarded to the data sources when the ECC server configures them.

The data source

This represents a source of data, like a CoBo or a MuTAnT. This is functionally just a link between an ECC server, which controls the source, and a data router, which receives data from the source.

DAQ component models

ECCServer(*args, **kwargs) Represents an individual ECC server which may control one or more data sources.
DataRouter(*args, **kwargs) Represents the data router associated with one data source.
DataSource(*args, **kwargs) A source of data, probably a CoBo or a MuTAnT.

Config file sets

Sets of config files are represented as ConfigId objects. These contain fields for each of the three config files for the three configuration steps. These sets will generally be created automatically by fetching them from the ECC servers using ECCServer.refresh_configs(), but they can also be created manually if necessary.

Config file models

ConfigId(*args, **kwargs) Represents a configuration file set as seen by the ECC servers.

Run and experiment metadata

The Experiment and RunMetadata models store information about the experiment and the runs it contains. They are used to number the runs and to store metadata like the experiment name, the duration of each run, and a comment describing the conditions for each run.

The Observable and Measurement classes are used to store measurements of experimental parameters like voltages, pressures, and scalers. An Observable defines a quantity that can be measured, and each one adds a new field that can be filled in on the Run Info sheet. When a user fills in values for an Observable, a corresponding Measurement object is created to store that value. This design was chosen so that the user can add new observables at any time without reloading the code or altering the database structure. This would not be possible if we just defined a new field on the RunMetadata object for each observable.
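
As a simplified illustration of this design (a sketch only; the actual model definitions contain more fields than shown here):

from django.db import models

class Observable(models.Model):
    # A measurable quantity; each instance adds one field to the run data sheet.
    name = models.CharField(max_length=100)
    units = models.CharField(max_length=50, blank=True)

class Measurement(models.Model):
    # One recorded value of one Observable for one run.
    observable = models.ForeignKey(Observable, on_delete=models.CASCADE)
    run = models.ForeignKey('RunMetadata', on_delete=models.CASCADE)
    value = models.CharField(max_length=100)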

Metadata models

Experiment(*args, **kwargs) Represents an experiment and the settings relevant to one.
RunMetadata(*args, **kwargs) Represents the metadata describing a data run.
Observable(*args, **kwargs) Something that can be measured.
Measurement(*args, **kwargs) A measurement of an Observable.

Interacting with the system

Interaction with the Django web app occurs through views, which are just functions and classes that Django calls when certain URLs are requested. Views are used to render the pages of the web app, and they are also how the user tells the system to “do something” like configure a CoBo or refresh the state of an ECC server.

Views are mapped to URLs automatically by Django. This mapping is set up in the module attpcdaq.daq.urls.
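
For example, a Django URL configuration generally looks like the following (an illustrative sketch using Django’s path() syntax with made-up routes; the actual mapping is defined in attpcdaq.daq.urls):

from django.urls import path
from . import views

urlpatterns = [
    path('status/', views.status, name='status'),
    path('eccs/<int:pk>/config/', views.choose_config, name='choose_config'),
]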

Some views render pages that accept information from the user. These generally use a Django form class to process the data.

Since the views serve a number of different purposes, they are organized into a few separate modules in the package attpcdaq.daq.views.

Page rendering views

These views, located in the module attpcdaq.daq.views.pages, are used to render the pages of the web app. This includes functions like status(), which renders the main status page, and others like show_log_page(), which contacts a remote computer, fetches the end of a log file, and renders a page showing it.

Views

status(request) Renders the main status page.
choose_config(request, pk) Renders a page for choosing the config for an ECC server.
experiment_settings(request) Renders the experiment settings page.
show_log_page(request, pk, program) Retrieve and render the log file for the given program.
EasySetupPage(**kwargs) Renders the easy setup page, where the system can be set up in one step.

Backend functions

easy_setup(experiment, num_cobos, ...[, ...]) Create a set of model instances with default values based on the given parameters.

ECC interaction views

A few of the views in the module attpcdaq.daq.views.api are used to interact with the ECC servers and request that they perform some action. These views are called when the user clicks a button to request a state change.

source_change_state(request) Submits a request to tell the ECC server to change a source’s state.
source_change_state_all(request) Send requests to change the state of all ECC servers.

API views

The remaining views in the attpcdaq.daq.views.api module provide an interface to the information stored in the database. These generate pages that allow the user to add, modify, and remove instances of models. There are also views that return information from the database so the GUI can be updated by AJAX calls.

Unlike other views described above, the API views for manipulating database objects are based on classes instead of functions. These are all subclasses of generic views provided by Django, so for more information on these views, take a look at Django’s documentation for class-based views.

Refreshing data

refresh_state_all(request) Fetch the state of all data sources from the database and return the overall state of the system.

Working with data sources

AddDataSourceView(**kwargs) Add a data source.
ListDataSourcesView(**kwargs) List all data sources.
UpdateDataSourceView(**kwargs) Change parameters on a data source.
RemoveDataSourceView(**kwargs) Delete a data source.

Working with data routers

AddDataRouterView(**kwargs) Add a data router.
ListDataRoutersView(**kwargs) List all data routers.
UpdateDataRouterView(**kwargs) Modify a data router.
RemoveDataRouterView(**kwargs) Delete a data router.

Working with ECC servers

AddECCServerView(**kwargs) Add an ECC server.
ListECCServersView(**kwargs) List all ECC servers.
UpdateECCServerView(**kwargs) Modify an ECC server.
RemoveECCServerView(**kwargs) Delete an ECC server.

Working with run metadata

ListRunMetadataView(**kwargs) List the run information for all runs.
UpdateRunMetadataView(**kwargs) Change run metadata.
UpdateLatestRunMetadataView(**kwargs) Redirects to UpdateRunMetadataView for the latest run.

Working with Observables

AddObservableView(**kwargs) Add a new observable to the experiment.
ListObservablesView(**kwargs) List the observables registered for this experiment.
UpdateObservableView(**kwargs) Change properties of an Observable.
RemoveObservableView(**kwargs) Remove an observable from this experiment.

Setting the ordering of observables

set_observable_ordering(request) An AJAX request that sets the order in which observables are displayed.

Helper functions

These helper functions are called by some of the views to avoid duplicating code. They are located in the module attpcdaq.daq.views.helpers.

calculate_overall_state(request) Find the overall state of the system.
get_ecc_server_statuses(request) Gets some information about the ECC servers.
get_data_router_statuses(request) Gets some information about the data routers.
get_status(request) Returns some information about the system’s status.

Interfacing with the remote processes

The attpcdaq.daq.workertasks module contains a class that uses the Paramiko SSH library to connect to the nodes running the data router and ECC server in order to, for example, organize files at the end of a run. It can also check whether these processes are running.

This class should typically be used as a context manager (i.e., with a with statement). For example, to organize files, you could try the following:

with WorkerInterface(data_router_ip_address) as wint:
    wint.organize_files(experiment_name, run_number)

When used in this manner, the SSH session will automatically be opened when entering the with block and closed when leaving it.

The WorkerInterface class

class attpcdaq.daq.workertasks.WorkerInterface(hostname, port=22, username=None, config_path=None)

An interface to perform tasks on the DAQ worker nodes.

This is used to perform tasks on the computers running the data router and the ECC server. This includes things like cleaning up the data files at the end of each run.

The connection is made using SSH, and the SSH config file at config_path is honored in making the connection. Additionally, the server must accept connections authenticated using a public key, and this public key must be available in your .ssh directory.

Parameters:
  • hostname (str) – The hostname to connect to.
  • port (int, optional) – The port that the SSH server is listening on. The default is 22.
  • username (str, optional) – The username to use. If it isn’t provided, a username will be read from the SSH config file. If no username is listed there, the name of the user running the code will be used.
  • config_path (str, optional) – The path to the SSH config file. The default is ~/.ssh/config.

Methods

find_data_router() Find the working directory of the data router process.
get_graw_list() Get a list of GRAW files in the data router’s working directory.
working_dir_is_clean() Check if there are GRAW files in the data router’s working directory.
check_ecc_server_status() Checks if the ECC server is running.
check_data_router_status() Checks if the data router is running.
organize_files(experiment_name, run_number) Organize the GRAW files at the end of a run.
tail_file(path[, num_lines]) Retrieve the tail of a text file on the remote host.

Asynchronous tasks and Celery

Due to the distributed design of the DAQ system, it’s very likely that sometimes a command sent to the system will take a while to process. This is especially true when communicating with an ECC server if the ECC server is configuring all of its attached data sources in series. If we decided to send a long-running command to the ECC server synchronously in the middle of whatever view was responding to the user’s HTTP request, the view would block on the communication until it finished. This would prevent it from updating the GUI, giving the impression that the software has crashed, and in extreme cases, the browser could even return a timeout error.

To prevent this problem, we process slow commands asynchronously with Celery. Instead of directly initiating communications, the view submits a task to the Celery queue and returns immediately, updating the GUI to indicate that the task is processing. When the task is completed, some part of the database is generally updated. The GUI is then updated to reflect the fact that the task has completed when it periodically refreshes itself.

Tasks

The Celery tasks in this application are just Python functions with the @shared_task decorator. This decorator registers them with the Celery system as tasks, and it also allows us to set a time limit on them. All of the tasks are located in the module attpcdaq.daq.tasks.
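
Schematically, a task following this pattern looks like this (a sketch with a hypothetical task body and an assumed import path; the real tasks live in attpcdaq.daq.tasks):

from celery import shared_task

@shared_task(time_limit=30)  # registered with Celery; killed after 30 seconds
def example_refresh_task(ecc_server_pk):
    from attpcdaq.daq.models import ECCServer  # assumed module path
    ecc = ECCServer.objects.get(pk=ecc_server_pk)
    ecc.refresh_state()  # method described in "Modeling the system in code"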

ECC server interaction

eccserver_refresh_state_task Fetch the state of the given ECC server.
eccserver_refresh_all_task Fetch the state of all ECC servers.
eccserver_change_state_task Change the state of an ECC server (make it perform a transition).

Checking remote status

check_ecc_server_online_task Checks if the ECC server is online.
check_ecc_server_online_all_task Check and update the state of all known ECC servers.
check_data_router_status_task Checks whether the data router is online and if the staging directory is clean.
check_data_router_status_all_task Check and update the state of all known data routers.

File organization

organize_files_task Connects to the DAQ worker nodes to organize files at the end of a run.
organize_files_all_task Organize files on all remote nodes.

Task scheduling

Some of the tasks above are best run automatically according to a schedule. Periodic tasks are supported by the Celery system, and are configured using the CELERYBEAT_SCHEDULE entry in the attpcdaq.settings module. This is a dictionary with the format shown in the example below.

CELERYBEAT_SCHEDULE = {
    'update-state-every-5-sec': {                                 # A descriptive name for the task
        'task': 'attpcdaq.daq.tasks.eccserver_refresh_all_task',  # The dotted name of the task, as a string
        'schedule': timedelta(seconds=5),                         # The interval between runs
    },
}