Quick Look Framework

The Quick Look Framework (QLF) is part of the DESI Instrument Control System (ICS) and provides an interface to execute the Quick Look (QL) pipeline and display data quality information in real time.

The current version of QLF allows you to follow the execution of the QL pipeline, which processes multiple cameras/arms in parallel. The QA display interfaces are now in a mature stage of development, using React and Bokeh plots.

Overview

Frontend

The ReactJS app connects to the backend via WebSockets and a REST API.

Monitor

A real-time app that controls the pipeline and presents current system data.

Features
  • Start/Stop pipeline execution.
  • Information about the current exposure being processed.
  • Follow the evolution of pipeline steps for each camera.
    • Per-camera log by clicking on each camera's process evolution line.
  • Log highlights the pipeline's checkpoints and run time.
  • QA information for each PA is displayed as a tooltip when hovering over a petal.
  • Monitor new exposures as they are discovered.
  • Clear logs and redux files.
  • Real-time camera log monitoring.
  • Processing alert box.

Processing History

Lists all exposures processed by the pipeline.

Features
  • Most Recent: lists the 10 most recent processes (updated in real time).
  • History: contains all exposures processed (not updated in real time).
    • Can be filtered by status, flavor, exposure start and end date.
    • Can be ordered by Status, Process Id, Process Date, Exp Id, Tile ID, Obs date, Obs Time, RA, DEC, Exp Time.
  • A QA column with ✓ (or ✖︎) for each exposure links to the QA screen app.
  • View the global state of fibers for the 10 petals (science exposures only).
  • Comment on a process.
  • Logs generated by the process (KPNO only).

Observing History

Lists all available exposures (and their latest QA results when processed).

Features
  • Lists the 10 most recent exposures (updated in real time).
  • History: contains all exposures (not updated in real time).
    • Can be filtered by flavor, exposure start and end date.
    • Can be ordered by Exp Id, Tile ID, Obs date, Obs Time, RA, DEC, Exp Time.
  • A QA column with ✓ (or ✖︎) for each exposure links to the QA screen app (when a process is available).

TODO: Inside the History tab it is possible to reprocess a selected exposure and add it to the end of the pipeline processing queue by selecting it and clicking Submit.

Afternoon Planning

TODO: Shows exposures processed offline at NERSC.

QA

Shows a summary of QA tests.

Features
  • Process ID, Exposure ID, flavor, MJD and Date for the selected exposure.
  • Each camera petal is divided by PA; results are represented by colors:
    • Grey: QA not found.
    • Red: alarm in a QA test.
    • Yellow: warning in a QA test.
    • Green: all QA tests in the step passed.
  • Drill down by clicking a petal (except when grey):
    • Metrics are shown in red or green, indicating failure or success respectively.
    • Graphs are shown with a short description.
    • All are displayed per step, spectrograph, and arm selection.

Trend Analysis

Features
  • Metrics Time Series plots for scalar metrics
    • Date range, metric, amplifier and camera selection filters
  • Tooltip to explain date period

Observing Conditions

Features
  • Time series and regression plots for observing conditions attributes (airmass, sky brightness)
    • Date range, attribute and camera selection filters (Time Series)
    • Date range, attribute X axis, attribute Y axis and camera selection filters (Regression)
  • Tooltip to explain date period

Survey Reports

Features
  • Observed footprint
    • Date range and program selection filters
  • Tooltip to explain date period

Configuration

Features
  • View QLF configuration
  • View QL configuration

Backend

A Django application and a Pyro daemon wrap the desispec pipeline, monitor its execution, notify the frontend, and render Bokeh visualizations.

QLF Pipeline

A Pyro daemon that runs QL.

Features
  • Runs QL using a configuration.
  • Ingests QA files into the database.
  • Displays QA scalar metrics.
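
For illustration only, here is a minimal Python sketch of how a client could talk to a Pyro daemon. The URI, object name, and method names (start, get_status) are assumptions and not the actual QLF daemon interface; check the backend code for the real names.

# Hedged sketch: calling a Pyro4 daemon from Python.
# The URI and the method names below are hypothetical placeholders.
import Pyro4

daemon = Pyro4.Proxy("PYRO:qlf.daemon@localhost:56005")  # hypothetical URI
daemon.start()                # e.g. start processing new exposures
print(daemon.get_status())    # e.g. query the current pipeline state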

Django API

Administrative backend managing the frontend, pipeline connections, and Bokeh plots.

Features
  • Provides an API.
  • Connects to the QLF daemon.
  • Manages frontend WebSocket connections and REST services.
  • Connects to databases (currently only a local PostgreSQL database).
  • Generates QA graphs for drill-downs.
  • Generates the footprint for Survey Reports.
  • Generates Time Series and Regression plots.

Installation

Docker

We use Docker containers to run QLF. Install docker and docker-compose (see the official Docker documentation).

Make sure docker --version, docker-compose --version, and docker ps run without errors.

For more details on how to use Docker, see Using docker and docker-compose.

QLF

Requires ~2 GB of disk space.

Clone Quick Look Framework Project

git clone https://github.com/desihub/qlf.git

Everything you need is contained in this repository, including desimodel, desisim, desitarget, desispec, and desiutil as sub-repositories.

QLF Default Configuration

Go into the qlf directory and configure it.

Make sure you have svn installed

cd qlf
./configure.sh

Besides setting the environment and cloning desispec and desiutil, this downloads the data used for tests (~1 GB).

Starting QLF

Start QLF frontend and backend.

./start.sh

The frontend takes about 5 minutes to start in dev mode.

Making sure all containers are up

Running docker ps, you should see 4 containers:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                    NAMES
d92b7eb3583e        qlf_nginx           "nginx -g 'daemon of…"   5 minutes ago       Up 5 minutes        80/tcp, 7070/tcp, 0.0.0.0:80->8080/tcp   qlf_nginx_1
7622ffe8f2a9        qlf_backend         "/usr/bin/tini -- ./…"   5 minutes ago       Up 5 minutes        8000/tcp                                 qlf_backend_1
1c69cb7e5aff        redis               "docker-entrypoint.s…"   5 minutes ago       Up 5 minutes        6379/tcp                                 qlf_redis_1
709cc22755bc        postgres            "docker-entrypoint.s…"   5 minutes ago       Up 5 minutes        5432/tcp                                 qlf_db_1

You can access the QLF interface by going to:

http://localhost/

In case you need to access the backend API:

http://localhost/dashboard/api/
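
As a quick, hedged example of using the REST API from Python (assuming the requests package is installed and QLF is running locally): only the /dashboard/api/ root is documented here, so any further endpoint paths should be taken from the listing that the root returns.

# Hedged sketch: query the QLF REST API root from Python.
# Follow the hyperlinks in the response to discover the actual endpoints.
import requests

resp = requests.get("http://localhost/dashboard/api/",
                    headers={"Accept": "application/json"})
resp.raise_for_status()
print(resp.json())  # top-level listing of available endpoints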

Stopping QLF

./stop.sh

Restarting QLF backend to update desispec changes

./restartBackend.sh

Changing Desispec Version

Desispec is a submodule of QLF and its version can be changed inside its directory:

qlf
└── backend
    └── desispec

From there it is possible to make changes or check out another git version.

~/qlf/backend/desispec$ git status
HEAD detached at 10d1c39
nothing to commit, working directory clean

After the changes are made, run ./restartBackend.sh to restart the containers with the latest code.

~/qlf$ ./restartBackend.sh
Stopping qlf_app_1 ... done
Recreating 9c38c1b6436a_qlf_db_1    ... done
Recreating ac0c3d755824_qlf_redis_1 ... done
Recreating qlf_app_1                ... done
Recreating qlf_nginx_1              ... done

Map Code Inside Container

Make sure the volumes are mapped inside the container so that your changes take effect.

docker-compose.yml example:

...
app:
    build: ./backend
    env_file:
    - ./backend/global-env
    volumes:
    - ./backend/:/app
...

Bokeh Plots

Steps To Create A New Metric

Let’s create a new metric called NEWMETRIC in the CHECK FIBERS step for a science-flavor exposure.

1. Create Map to Metric (Backend)

To understand how the plots end up in the frontend: there are three files, one per flavor, that map each Python QA to the appropriate frontend stage and name.

qlf
└── backend
    └── framework
        └── ql_mapping
            ├── arc.json
            ├── flat.json
            └── science.json

/qlf/backend/framework/ql_mapping/science.json

{
  "flavor": "science",
  "step_list": [
    ...
    {
      "display_name": "CHECK FIBERS",
      "name": "CHECK_FIBERS",
      "start": "Starting to run step Flexure",
      "end": "Starting to run step ApplyFiberFlat_QP",
      "qa_list": [
        {
          "display_name": "NGOODFIB",
          "status_key": "NGOODFIB_STATUS",
          "name": "countbins"
        },
        {
          "display_name": "XYSHIFTS",
          "status_key": "XYSHIFTS_STATUS",
          "name": "xyshifts"
        },
        {
          "display_name": "NEWMETRIC",
          "status_key": "NEWMETRIC_STATUS",
          "name": "newmetric"
        }
      ]
    },
    ...
  ]
}

The new status will be detected and ingested during the next processing runs, once NEWMETRIC_STATUS appears in the QL logs.
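
As a quick sanity check on the mapping file, the following hedged Python sketch verifies that every qa_list entry carries the three expected keys. The relative path assumes you run it from the qlf repository root; adjust as needed.

# Hedged sketch: validate qa_list entries in science.json.
# Assumes the structure shown above; adjust the path to your checkout.
import json

with open("backend/framework/ql_mapping/science.json") as fp:
    mapping = json.load(fp)

for step in mapping["step_list"]:
    for qa in step.get("qa_list", []):
        missing = {"display_name", "status_key", "name"} - set(qa)
        assert not missing, "{} is missing {}".format(qa, missing)

print("All QA entries look complete.")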

2. Create NewMetric Folder

All QA plots are located in the same folder.

To add the newmetric plots, create a new folder named qa + name (here, qanewmetric) with the following structure:

 qlf
 └── backend
     └── framework
         └── qlf
             └── dashboard
                 └── bokeh
                     ├── qacheckarc
                     ├── qacheckflat
                     ├── qacountbins
                     ├── qacountpix
                     ├── qagetbias
                     ├── qainteg
                     ├── qaskycont
                     ├── qaskypeak
                     ├── qaskyR
                     ├── qasnr
                     ├── qaxwsigma
                     ├── qaxyshifts
                     ├── plots
                     └── qanewmetric
                         └── main.py
  • Each directory has a main.py file containing the QA plot logic. To create new plots for a QA, main.py is the only place that requires changes.
  • The plots directory contains functions commonly used across QAs.
  • Saved changes trigger an automatic Bokeh reload if the code is correctly mapped inside the container (see Map Code Inside Container).

3. NewMetric Plots Code

Now add the actual Python code for each plot. More details and examples can be found in the jupyter notebooks directory.

/qlf/backend/framework/qlf/dashboard/bokeh/qanewmetric/main.py

# Bokeh imports used to build the layout and embed it as standalone HTML.
from bokeh.layouts import column
from bokeh.models.widgets import Div
from bokeh.resources import CDN
from bokeh.embed import file_html


class NewMetric:
    def __init__(self, process_id, arm, spectrograph):
        # Selection coming from the frontend drill-down request.
        self.selected_process_id = process_id
        self.selected_arm = arm
        self.selected_spectrograph = spectrograph

    def load_qa(self):
        # Camera name, e.g. 'b0' for arm 'b' and spectrograph 0.
        cam = self.selected_arm + str(self.selected_spectrograph)

        # Build the Bokeh layout; replace the Div with real plots later.
        layout = column(
            Div(text='This is a new metric for camera {} process {}'.format(
                cam, self.selected_process_id)),
            css_classes=["display-grid"],
        )

        # Return a standalone HTML document that the Django view serves.
        return file_html(layout, CDN, "NEWMETRIC")
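
For a quick standalone preview outside Django, you can render the HTML to a file and open it in a browser. This is a hedged sketch: the process id 123 and the output filename are placeholders, and the import assumes the script runs where the Django project packages are importable (e.g. inside the backend container under /app/framework/qlf).

# Hedged sketch: write the NewMetric page to a standalone HTML file.
# The process id and output filename are placeholders.
from dashboard.bokeh.qanewmetric.main import NewMetric

html = NewMetric(123, 'b', 0).load_qa()
with open('newmetric_preview.html', 'w') as fp:
    fp.write(html)
print('Wrote newmetric_preview.html')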

4. Create Metric View

  1. Import the NewMetric QA class
  2. Add a new qa view case inside views_bokeh.py.

/qlf/backend/framework/qlf/dashboard/views_bokeh.py

 from dashboard.bokeh.fits2png.fits2png import Fits2png
 from dashboard.bokeh.qacountpix.main import Countpix
 from dashboard.bokeh.qagetbias.main import Bias
 from dashboard.bokeh.qagetrms.main import RMS
 ...
 from dashboard.bokeh.spectra.main import Spectra
 from dashboard.bokeh.qanewmetric.main import NewMetric
 from datetime import datetime, timedelta
 ...

 def load_qa(request):
     """Generates and render qas"""
     template = loader.get_template('dashboard/qa.html')
     # Generate Image
     qa = request.GET.get('qa')
     spectrograph = request.GET.get('spectrograph')
     process_id = request.GET.get('process_id')
     arm = request.GET.get('arm')
     try:
         if qa == 'qacountpix':
             qa_html = Countpix(process_id, arm, spectrograph).load_qa()
         elif qa == 'qabias':
             qa_html = Bias(process_id, arm, spectrograph).load_qa()
         elif qa == 'qarms':
             qa_html = RMS(process_id, arm, spectrograph).load_qa()
         elif qa == 'qaxwsigma':
             qa_html = Xwsigma(process_id, arm, spectrograph).load_qa()
         ...
         elif qa == 'qahdu':
             qa_html = 'No Drill Down'
         elif qa == 'qacheckflat':
             qa_html = Flat(process_id, arm,spectrograph).load_qa() #'No Drill Down'
         elif qa == 'qacheckarc':
             qa_html = Arc(process_id, arm,spectrograph).load_qa() #'No Drill Down'
         elif qa == 'qaxyshifts':
             qa_html = Xyshifts(process_id, arm,spectrograph).load_qa()
         elif qa == 'qaskyrband':
             qa_html = SkyR(process_id, arm,spectrograph).load_qa()
         elif qa == 'qanewmetric':
             qa_html = NewMetric(process_id, arm, spectrograph).load_qa()
         else:
             qa_html = "Couldn't load QA"
     except Exception as err:
         qa_html = "Can't load QA: {}".format(err)
     ...
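
To verify that the new case is reachable over HTTP, a hedged sketch is shown below: the route /dashboard/load_qa/ and the query values are assumptions (confirm the actual URL pattern in the dashboard app's urls.py and use a real process id from your database).

# Hedged sketch: request the new drill-down HTML over HTTP.
# The route and the query values below are placeholders.
import requests

params = {
    'qa': 'qanewmetric',
    'process_id': 123,   # replace with a real process id
    'arm': 'b',
    'spectrograph': 0,
}
resp = requests.get('http://localhost/dashboard/load_qa/', params=params)
print(resp.status_code, len(resp.text))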

5. Restart Backend

The backend server restarts automatically on each save, but if you can't see the changes, try restarting it manually (see Restart Backend).

6. Create Map to Metric (Frontend)

The metric mappings also exist in the frontend directory:

qlf
└── frontend
    └── src
        └── assets
            └── ql_mapping
                ├── arc.json
                ├── flat.json
                └── science.json

Repeat step 1, Create Map to Metric (Backend), in the corresponding file.

7. Rebuild Frontend

~/qlf$ docker-compose build nginx
Building nginx
...
Successfully built 22b9dc85a391
Successfully tagged qlf_nginx:latest
~/qlf$ docker-compose up

The QA will be available for the next exposure processed.

Result

Before: screenshot of the QA screen before adding the metric.

After: screenshot of the QA screen after adding the metric.

Using docker and docker-compose

This is not a comprehensive guide to Docker and all of its features, but it covers commonly used commands.

List Running Containers

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Build containers:

docker-compose build

To build a specific container

docker-compose build NAME

e.g. docker-compose build nginx

Start a container

$ docker-compose up

Start in background: docker-compose up -d
Start a specific container: docker-compose up -d NAME

Stop all containers

docker-compose stop

Stop and delete containers

This will delete the containers but not the built images. It is usually used for purging database entries.

docker-compose down

Delete all images from machine

This will delete all images, containers, volumes, and networks created. It is usually used to clean the environment.

docker system prune -a

Work from inside the container

You can work from inside the container. This is useful for checking that the correct versions are being used.

 ~/qlf/docs$ docker ps
 CONTAINER ID        IMAGE                                                    COMMAND                  CREATED             STATUS              PORTS                                            NAMES
 46d12dd9e9ef        qlf_nginx                                                "nginx -g 'daemon of…"   24 minutes ago      Up 24 minutes       80/tcp, 0.0.0.0:80->8080/tcp                     qlf_nginx_1
 29ff55378420        linea/qlf:BACK6f9165e25b07d92782cb2a7ab341ea013da99ac3   "/usr/bin/tini -- ./…"   24 minutes ago      Up 24 minutes       0.0.0.0:5006->5006/tcp, 0.0.0.0:8000->8000/tcp   qlf_app_1
 f1d141fa6f50        postgres                                                 "docker-entrypoint.s…"   24 minutes ago      Up 24 minutes       5432/tcp                                         qlf_db_1
 3ae98699a50f        redis                                                    "docker-entrypoint.s…"   24 minutes ago      Up 24 minutes       6379/tcp                                         qlf_redis_1
 ~/qlf/docs$ docker exec -it qlf_app_1 bash
 root@29ff55378420:/app#

Restart Backend

After making changes in the backend, run ./restartBackend.sh to restart the containers with the latest code if it doesn't happen automatically.

~/qlf$ ./restartBackend.sh
Stopping qlf_app_1 ... done
Recreating 9c38c1b6436a_qlf_db_1    ... done
Recreating ac0c3d755824_qlf_redis_1 ... done
Recreating qlf_app_1                ... done
Recreating qlf_nginx_1              ... done

FAQ

  1. If you are using Linux and get this error:
ubuntu ~/docker $ docker-compose up -d
ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?

If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.

Add your current user to the docker group:

sudo usermod -aG docker $USER

Make sure to log out of your terminal session and log back in for the usermod changes to take effect.

  2. If a port is already allocated

For instance,

ERROR: for db  Cannot start service db: driver failed programming external connectivity on endpoint backend_db_1 (4d2adece087f3df9a3e34695246a22db6639e63e8b8054e3cb03f1209252b88d): Bind for 0.0.0.0:5433 failed: port is already allocated

Run docker ps to check for old containers up and running on your machine.

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                    NAMES
d92b7eb3583e        qlf_nginx           "nginx -g 'daemon of…"   5 minutes ago       Up 5 minutes        80/tcp, 7070/tcp, 0.0.0.0:80->8080/tcp   qlf_nginx_1
7622ffe8f2a9        qlf_backend         "/usr/bin/tini -- ./…"   5 minutes ago       Up 5 minutes        8000/tcp                                 qlf_backend_1
1c69cb7e5aff        redis               "docker-entrypoint.s…"   5 minutes ago       Up 5 minutes        6379/tcp                                 qlf_redis_1
709cc22755bc        postgres            "docker-entrypoint.s…"   5 minutes ago       Up 5 minutes        5432/tcp                                 qlf_db_1

You can stop old containers individually by name, for example: docker stop qlf_db_1