Welcome to dmpipe’s documentation!

Introduction

This is the dmpipe documentation page.

dmpipe is a python package that implements an analysis pipeline to search for Dark Matter (DM) signals in the data from the Large Area Telescope (LAT).

For more information about the Fermi mission and the LAT instrument please refer to the Fermi Science Support Center.

The dmpipe package is built on the pyLikelihood interface of the Fermi Science Tools and the Fermipy and dmsky software packages.

dmpipe implements a standard analysis pipeline that:

  • Uses the dmsky package to define lists of analysis targets.
  • Models the DM gamma-ray spectra in a standard set of annihilation channels.
  • Performs standard source analyses in regions of interest (ROI) around each target.
  • Extracts Spectral Energy Distribution (SED) likelihood information for each target, and possibly for multiple dark matter spatial profiles for each target.
  • Converts the SED likelihood information to likelihood information on the DM interaction rate (i.e., the thermally averaged cross section), accounting for uncertainties on the DM spatial profile of each target (see the flux formula below).
  • Combines the results for multiple targets by performing likelihood stacking.
  • Implements control versions of the analysis pipeline, using both simulated data and randomly selected control directions within each target’s ROI.
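
For annihilating DM, the conversion from SED space to the interaction rate relies on the standard expression for the gamma-ray flux from a target with “J-factor” J (this formula is standard in the indirect-detection literature rather than specific to dmpipe):

\[ \frac{d\Phi}{dE} = \frac{\langle\sigma v\rangle}{8\pi m_\chi^2}\,\frac{dN_\gamma}{dE}\,J, \qquad J = \int d\ell \int d\Omega\, \rho_{\rm DM}^2(\ell,\Omega) \]

where \(\langle\sigma v\rangle\) is the thermally averaged cross section, \(m_\chi\) the DM particle mass, and \(dN_\gamma/dE\) the photon spectrum per annihilation for the chosen channel.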

dmpipe uses a configuration-file driven workflow in which the analysis parameters (data selection, IRFs, and ROI model) are defined in a small set of YAML configuration files. The analysis is executed through a python script that dispatches small analysis jobs to a computing batch farm.

For instructions on installing dmpipe see the Installation page. For a short introduction to using dmpipe see the Quickstart Guide.

Getting Help

If you have questions about using dmpipe please open a GitHub Issue.

Documentation Contents

Installation

Note

dmpipe is only compatible with Science Tools 10-00-07 or later. If you are using an earlier version, you will need to download and install the latest version from the FSSC. Note that it is recommended to use the non-ROOT binary distributions of the Science Tools.

These instructions assume that you already have a local installation of the Fermi Science Tools (STs). For more information about installing and setting up the STs see Installing the Fermi Science Tools. If you are running at SLAC you can follow the Running at SLAC instructions. For Unix/Linux users we currently recommend following the Installing with Anaconda Python instructions. For OSX users we recommend following the Installing with pip instructions. The Installing with Docker instructions can be used to install the STs on OSX and Linux machines that are new enough to support Docker. To install the development version of dmpipe follow the Installing Development Versions instructions.

Installing the Fermi Science Tools

The Fermi STs are a prerequisite for dmpipe. To install the STs we recommend using one of the non-ROOT binary distributions available from the FSSC. The following example illustrates how to install the binary distribution on a Linux machine running Ubuntu Trusty:

$ curl -OL http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/tar/ScienceTools-v10r0p5-fssc-20150518-x86_64-unknown-linux-gnu-libc2.19-10-without-rootA.tar.gz
$ tar xzf ScienceTools-v10r0p5-fssc-20150518-x86_64-unknown-linux-gnu-libc2.19-10-without-rootA.tar.gz
$ export FERMI_DIR=ScienceTools-v10r0p5-fssc-20150518-x86_64-unknown-linux-gnu-libc2.19-10-without-rootA/x86_64-unknown-linux-gnu-libc2.19-10
$ source $FERMI_DIR/fermi-init.sh

More information about installing the STs as well as the complete list of the available binary distributions is available on the FSSC software page.

Installing with pip

These instructions cover installation with the pip package management tool. This will install dmpipe and its dependencies into the python distribution that comes with the Fermi Science Tools. First verify that you’re running the python from the Science Tools:

$ which python

If this doesn’t point to the python in your Science Tools install (i.e. it returns /usr/bin/python or /usr/local/bin/python) then the Science Tools are not properly setup.

Before starting the installation process, you will need to determine whether you have setuptools and pip installed in your local python environment. You may need to install these packages if you are running with the binary version of the Fermi Science Tools distributed by the FSSC. The following command will install both packages in your local environment:

$ curl https://bootstrap.pypa.io/get-pip.py | python -

Check if pip is correctly installed:

$ which pip

Once again, if this isn’t the pip in the Science Tools, something went wrong. Now install dmpipe by running:

$ pip install dmpipe

To run the ipython notebook examples you will also need to install jupyter notebook:

$ pip install jupyter

Finally, check that dmpipe imports:

$ python
Python 2.7.8 (default, Aug 20 2015, 11:36:15)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.56)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dmpipe
Installing with Anaconda Python

Note

The following instructions have only been verified to work with binary Linux distributions of the Fermi STs. If you are using OSX or you have installed the STs from source you should follow the Installing with pip instructions above.

These instructions cover how to use dmpipe with a new or existing anaconda python installation. These instructions assume that you have already downloaded and installed the Fermi STs from the FSSC and you have set the FERMI_DIR environment variable to point to the location of this installation.

If you already have an existing anaconda python installation then dmpipe can be installed from the conda-forge channel as follows:

$ conda config --append channels conda-forge
$ conda install dmpipe

If you do not have an anaconda installation, the condainstall.sh script can be used to create a minimal anaconda installation from scratch. First download and source the condainstall.sh script from the dmpipe repository:

$ curl -OL https://raw.githubusercontent.com/dmpipe/dmpipe/master/condainstall.sh
$ source condainstall.sh

If you do not already have anaconda python installed on your system this script will create a new installation under $HOME/miniconda. If you already have anaconda installed and the conda command is in your path the script will use your existing installation. After running condainstall.sh dmpipe can be installed with conda:

$ conda install dmpipe

Once dmpipe is installed you can initialize the ST/dmpipe environment by running condasetup.sh:

$ curl -OL https://raw.githubusercontent.com/dmpipe/dmpipe/master/condasetup.sh
$ source condasetup.sh

If you installed dmpipe in a specific conda environment you should switch to this environment before running the script:

$ source activate fermi-env
$ source condasetup.sh
Installing with Docker

Note

This method for installing the STs is currently experimental and has not been fully tested on all operating systems. If you encounter issues please try either the pip- or anaconda-based installation instructions.

Docker is a virtualization tool that can be used to deploy software in portable containers that can be run on any operating system that supports Docker. Before following these instructions you should first install Docker on your machine, following the installation instructions for your operating system. Docker is currently supported on the following operating systems:

  • macOS 10.10.3 Yosemite or later
  • Ubuntu Precise 12.04 or later
  • Debian 8.0 or later
  • RHEL7 or later
  • Windows 10 or later

Note that Docker is not supported by RHEL6 or its variants (CentOS6, Scientific Linux 6).

These instructions describe how to create a docker-based ST installation that comes preinstalled with anaconda python and dmpipe. The installation is fully contained in a docker image that is roughly 2GB in size. To see a list of the available images go to the dmpipe Docker Hub page. Images are tagged with the release version of the STs that was used to build the image (e.g. 11-05-00). The latest tag points to the image for the most recent ST release.

To install the latest image first download the image file:

$ docker pull dmpipe/dmpipe

Now switch to the directory where you plan to run your analysis and execute the following command to launch a docker container instance:

$ docker run -it --rm -p 8888:8888 -v $PWD:/workdir -w /workdir dmpipe/dmpipe

This will start an ipython notebook server that will be attached to port 8888. Once you start the server it will print a URL that you can use to connect to it with the web browser on your host machine. The -v $PWD:/workdir argument mounts the current directory to the working area of the container. Additional directories may be mounted by adding more volume arguments -v with host and container paths separated by a colon.

The same docker image may be used to launch python, ipython, or a bash shell by passing the command as an argument to docker run:

$ docker run -it --rm -v $PWD:/workdir -w /workdir dmpipe/dmpipe ipython
$ docker run -it --rm -v $PWD:/workdir -w /workdir dmpipe/dmpipe python
$ docker run -it --rm -v $PWD:/workdir -w /workdir dmpipe/dmpipe /bin/bash

By default interactive graphics will not be enabled. The following commands can be used to enable X11 forwarding for interactive graphics on an OSX machine. This requires you to have installed XQuartz 2.7.10 or later. First enable remote connections by default and start the X server:

$ defaults write org.macosforge.xquartz.X11 nolisten_tcp -boolean false
$ open -a XQuartz

Now check that the X server is running and listening on port 6000:

$ lsof -i :6000

If you don’t see X11 listening on port 6000 then try restarting XQuartz.

Once you have XQuartz configured you can enable forwarding by setting the DISPLAY environment variable to the IP address of the host machine:

$ export HOST_IP=`ifconfig en0 | grep "inet " | cut -d " " -f2`
$ xhost +local:
$ docker run -it --rm -e DISPLAY=$HOST_IP:0 -v $PWD:/workdir -w /workdir dmpipe/dmpipe ipython
Installing Development Versions

These instructions describe how to install development versions of dmpipe. Before installing a development version we recommend first installing a tagged release following the Installing with pip or Installing with Anaconda Python instructions above.

The development version of dmpipe can be installed by running pip install with the URL of the git repository:

$ pip install git+https://github.com/dmpipe/dmpipe.git

This will install the most recent commit on the master branch. Note that care should be taken when using development versions as features/APIs under active development may change in subsequent versions without notice.

Running at SLAC

This section provides specific installation instructions for running in the SLAC computing environment. We suggest following these instructions if you are running dmpipe at SLAC. They create your own conda installation, so you will not depend on the old versions of programs present on the SLAC machines. First grab the installation and setup scripts from the dmpipe github repository:

$ curl -OL https://raw.githubusercontent.com/dmpipe/dmpipe/master/condainstall.sh
$ curl -OL https://raw.githubusercontent.com/dmpipe/dmpipe/master/slacsetup.sh

Now choose an installation path. This should be a new directory (e.g. $HOME/anaconda) that has at least 2-4 GB available. We will assign this location to the CONDABASE environment variable, which is used by the setup script to find the location of your python installation. To avoid setting this every time you log in, it’s recommended to set CONDABASE in your .bashrc file.

Now run the following commands to install anaconda and dmpipe. This will take about 5-10 minutes.

$ export CONDABASE=<path to install directory>
$ bash condainstall.sh $CONDABASE

Once anaconda is installed you can initialize your python and ST environment by running the slacsetup function in slacsetup.sh. This function will set the appropriate environment variables needed to run the STs and python.

$ source slacsetup.sh
$ slacsetup

For convenience you can also copy this function into your .bashrc file so that it will automatically be available when you launch a new shell session. By default the function will set up your environment to point to a recent version of the STs and the installation of python in CONDABASE. If CONDABASE is not defined then it will use the installation of python that is packaged with a given release of the STs. The slacsetup function takes two optional arguments which can be used to override the ST version or python installation path.

# Use ST 10-00-05
$ slacsetup 10-00-05
# Use ST 11-01-01 and python distribution located at <PATH>
$ slacsetup 11-01-01 <PATH>

The installation script only installs packages that are required by dmpipe and the STs. Once you’ve initialized your shell environment you are free to install additional python packages with the conda package manager (conda install <package name>). Packages that are not available on conda can also be installed with pip.

conda can also be used to upgrade packages. For instance you can upgrade dmpipe to the newest version with the conda update command:

$ conda update dmpipe

You can verify that the installation has succeeded by importing dmpipe:

$ python
Python 2.7.8 |Anaconda 2.1.0 (64-bit)| (default, Aug 21 2014, 18:22:21)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://binstar.org
>>> import dmpipe
Upgrading

By default, installing dmpipe with pip or conda will get the latest tagged release available in the PyPI package repository. You can check your currently installed version of dmpipe with pip show:

$ pip show dmpipe

or conda info:

$ conda info dmpipe

To upgrade your dmpipe installation to the latest version run the pip installation command with --upgrade --no-deps (remember to also include the --user option if you’re running at SLAC):

$ pip install dmpipe --upgrade --no-deps
Collecting dmpipe
Installing collected packages: dmpipe
  Found existing installation: dmpipe 0.6.6
    Uninstalling dmpipe-0.6.6:
      Successfully uninstalled dmpipe-0.6.6
Successfully installed dmpipe-0.6.7

If you installed dmpipe with conda the equivalent command is:

$ conda update dmpipe
Developer Installation

These instructions describe how to install dmpipe from its git source code repository using the setup.py script. Installing from source can be useful if you want to make your own modifications to the dmpipe source code. Note that non-developers are recommended to install a tagged release of dmpipe following the Installing with pip or Installing with Anaconda Python instructions above.

First clone the dmpipe git repository and cd to the root directory of the repository:

$ git clone https://github.com/dmpipe/dmpipe.git
$ cd dmpipe

To install the latest commit in the master branch run setup.py install from the root directory:

# Install the latest commit
$ git checkout master
$ python setup.py install --user

A useful option if you are doing active code development is to install your working copy of the package. This will create an installation in your python distribution that is linked to the copy of the code in your local repository. This allows you to run with any local modifications without having to reinstall the package each time you make a change. To install your working copy of dmpipe run with the develop argument:

# Install a link to your source code installation
$ python setup.py develop --user

You can later remove the link to your working copy by running the same command with the --uninstall flag:

# Remove the link to your source code installation
$ python setup.py develop --user --uninstall

Specific release tags can be installed by running git checkout before running the installation command:

# Checkout a specific release tag
$ git checkout X.X.X
$ python setup.py install --user

To see the list of available release tags run git tag.

Issues

If you get an error about importing matplotlib (specifically something about the macosx backend) you may need to change your default backend to get it working. The customizing matplotlib page details the instructions to modify your default matplotlibrc file (you can pick GTK or WX as an alternative). Specifically, the TkAgg and macosx backends currently do not work on OSX if you upgrade matplotlib to the version required by dmpipe. To get around this issue you can switch to the Agg backend at runtime before importing dmpipe:

>>> import matplotlib
>>> matplotlib.use('Agg')

However note that this backend does not support interactive plotting.

If you are running OSX El Capitan or newer you may see errors like the following:

dyld: Library not loaded

In this case you will need to disable System Integrity Protection (SIP). See here for instructions on disabling SIP on your machine.

In some cases the setup.py script will fail to properly install the dmpipe package dependencies. If installation fails you can try running a forced upgrade of these packages with pip install --upgrade:

$ pip install --upgrade --user numpy matplotlib scipy astropy pyyaml healpy wcsaxes ipython jupyter

Quickstart Guide

This page walks through the steps to setup and perform a basic analysis of two dark matter targets, the “Segue 1” and “Draco” dwarf spheroidal galaxies.

Getting input data

First, you will need to get Fermi-LAT data to analyze. In particular you will need:

  • An event list: the so-called ‘FT1’ file with the event data.
  • The corresponding spacecraft pointing history file: the so-called ‘FT2’ file.
  • A ‘livetime cube’ with the amount of time each direction in the sky spent at a particular angle with respect to the LAT instrument boresight (livetime cubes can be generated from the FT1/FT2 files with the gtltcube Science Tool).
Running the example notebook

First, download the configuration files and python notebook for this analysis:

$ curl -OL https://raw.githubusercontent.com/fermiPy/fermipy-extras/master/data/dmpipe_example.tar.gz
$ tar zxvf dmpipe_example.tar.gz
$ cd dmpipe_example
$ curl -OL https://raw.githubusercontent.com/fermiPy/fermipy-extras/master/notebooks/dSphs.ipynb

Now you need to do two things to set up to run the example notebook:

  • Point the dmsky package at the target “Roster” you just downloaded:

    $ export DMSKY_PATH=<current_dir>

  • Edit the ‘config/config_dSphs.yaml’ file so that the ‘evfile’, ‘ltcube’, and ‘scfile’ lines refer to the input data you set up above.

Now you can run the example notebook.

$ jupyter-notebook dSphs.ipynb
Running the analysis

Every step of the analysis, including the top-level script that runs the entire analysis, can be invoked directly from the UNIX command line:

$ dmpipe-pipeline --config config/master_dSphs.yaml
Extracting Analysis Results

The analysis pipeline produces a number of different outputs, including:

  • Combined plots of the DM interaction limits for the stacked analysis, as well as plots showing the results of the control tests. These are in the dSphs/results directory.
  • Individual plots of the SEDs and the DM interaction limits for each target in the analysis.
  • Detailed intermediate results allowing the user to reproduce any plot, refit any ROI, or rerun any other step of the analysis chain in isolation.
Loading and running interactively

One can load the pipeline interactively in python and see the current status of the analysis:

from dmpipe import Pipeline
configfile = 'config/master_dSphs.yaml'
pipe = Pipeline(linkname='dSphs')
pipe.preconfigure(configfile)
pipe.update_args(dict(config=configfile))

# look at the current state
pipe.print_status()

# Continue running analysis starting from the previously saved
# state
pipe.run()

Many other commands are demonstrated in the jupyter notebook example.

Overview

This package implements an analysis pipeline to look for DM signals. This involves a lot of bookkeeping and loops over various things. It is probably easiest to first describe this with a bit of pseudo-code that represents the various analysis steps.

The various loop variables are:

  • rosters

    The list of all the rosters to analyze.

  • targets

    The list of all the analysis targets. This is generated by merging all the targets from the input rosters.

  • target.profiles

    The list of all the spatial profiles to analyze for a particular target. This is generated by merging the versions of that target from all the input rosters.

  • jpriors

    The list of all the types of prior on the J-factor. This is provided by the user.

  • channels

    The list of all the channels to analyze results for. This is provided by the user.

  • sims

    The list of all the simulation scenarios to analyze. This is provided by the user.

  • first, last

    The first and last seeds to use for the random number generator (for simulations), or the first and last random directions to use (for random direction control studies).

# Initialization, prepare the analysis directories and precompute the DM spectra
PrepareTargets(rosters)
SpecTable()


# Data analysis

# Loop over targets
for target in targets:
    AnalyzeROI(target)

    for profile in target.profiles:
        AnalyzeSED(target, profile)
        PlotCastro(target, profile)

        for jprior in jpriors:
            ConvertCastro(target, profile, jprior) # This loops over channels

            for channel in channels:
                PlotDM(target, profile, jprior, channel)
                PlotLimits(target, profile, jprior, channel)

for roster in rosters:
    for jprior in jpriors:
        StackLikelihood(roster, jprior) # This loops over channels

        for channel in channels:
            PlotDM(roster, jprior, channel, stacked=True)
            PlotLimits(roster, jprior, channel, stacked=True)


# Simulation analysis

# Loop over simulation scenarios
for sim in sims:

    # Loop over targets
    for target in targets:
        CopyBaseROI(sim, target)

        for profile in target.profiles:
            SimulateROI(sim, target, profile) # This loops over simulation seeds
            CollectSED(sim, target, profile)

            for seed in range(first, last):
                for jprior in jpriors:
                    ConvertCastro(sim, target, profile, seed, jprior)  # This loops over channels

            for jprior in jpriors:
                CollectLimits(sim, target, profile, jprior)  # This loops over channels

    for roster in rosters:
        for seed in range(first, last):
            for jprior in jpriors:
                StackLikelihood(sim, roster, seed, jprior)  # This loops over channels

                for channel in channels:
                    PlotDM(sim, roster, seed, jprior, channel, stacked=True)
                    PlotLimits(sim, roster, seed, jprior, channel, stacked=True)

        for jprior in jpriors:
            CollectLimits(sim, roster, jprior, stacked=True)  # This loops over channels
            for channel in channels:
                PlotLimits(sim, roster, jprior, channel, stacked=True, bands=True)



# Random direction control analysis

# Loop over targets
for target in targets:
    CopyBaseROI(target)
    RandomDirGen(target)

    for profile in target.profiles:
        for seed in range(first, last):
            AnalyzeSED(target, profile, seed)
            for jprior in jpriors:
                ConvertCastro(target, profile, seed, jprior)

        CollectSED('random', target, profile)

for roster in rosters:
    for seed in range(first, last):
        for jprior in jpriors:
            StackLikelihood(roster, seed, jprior) # This loops over channels

            for channel in channels:
                PlotDM(roster, jprior, seed, channel, stacked=True)
                PlotLimits(roster, jprior, seed, channel, stacked=True)

    for jprior in jpriors:
        CollectLimits('random', roster, jprior, stacked=True) # This loops over channels
        for channel in channels:
            PlotLimits('random', roster, jprior, channel, stacked=True, bands=True)

Configuration

This page describes the configuration management scheme used within the dmpipe package and documents the configuration parameters that can be set in the configuration file.

Analysis classes in the dmpipe package all inherit from the fermipy.jobs.Link class, which allows the user to invoke the class either interactively within python or from the unix command line.

From the command line

$ dmpipe-plot-dm --infile dSphs/segue_1/dmlike_ack2016_point_none.fits --chan bb --outfile dSphs/segue_1/dmlike_ack2016_point_none_bb.fits

From python there are a number of ways to do it; we recommend this:

from dmpipe.dm_plotting import PlotDM
link = PlotDM()
link.update_args(dict(infile='dSphs/segue_1/dmlike_ack2016_point_none.fits',
                      chan='bb', outfile='dSphs/segue_1/dmlike_ack2016_point_none_bb.fits'))
link.run()

Master Configuration File

dmpipe uses YAML files to read and write its configuration in a persistent format. The configuration file has a hierarchical structure that groups parameters into dictionaries that are keyed to a section name (data, binning, etc.).
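
Because the configuration is plain YAML, it can also be loaded and inspected programmatically. A minimal sketch, using the sample file below (yaml here is the PyYAML package, which appears in the dependency list in the installation instructions):

import yaml

# Load the master configuration and inspect a few top-level sections.
with open('config/master_dSphs.yaml') as f:
    config = yaml.safe_load(f)

print(config['ttype'])    # e.g. 'dSphs'
print(config['rosters'])  # list of dmsky rosters to analyze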

Sample Configuration
ttype: dSphs
rosters : ['test']
jpriors : ['none', 'lgauss']
spatial_models: ['point']
alias_dict : 'config/aliases_dSphs.yaml'

sim_defaults:
    seed : 0
    nsims : 20
    profile : ack2016

sims:
    'null' : {}
    '100GeV_bb_1e25' : {}
    '100GeV_bb_1e26' : {}

random: {}

plot_channels : ['bb', 'tautau']

data_plotting:
    plot-castro : {}
    plot-dm : {}
    plot-limits : {}
    plot-stacked-dm : {}
    plot-stacked-limits : {}

sim_plotting:
    plot-stacked-dm : {}
    plot-stacked-limits : {}
    plot-control-limits : {}

rand_plotting:
    plot-stacked-dm : {}
    plot-stacked-limits : {}
Top level configuration

Options at the top level apply to all parts of the analysis pipeline.

Sample top level Configuration
  # Top level
  ttype : 'dSphs'
  rosters : ['test']
  jpriors : ['none', 'lgauss']
  spatial_models: ['point']
  alias_dict : 'config/aliases_dSphs.yaml'
  • ttype: str Target type. This is used mainly for bookkeeping: to give the name of the top-level directory and to call out specific configuration files.
  • rosters: list List of dmsky rosters to analyze. Each roster represents a self-consistent set of targets and DM models for each target.
  • jpriors : list List of types of J-factor prior to use.
  • spatial_models : list List of types of spatial model to use when fitting the DM. Options are:
    • point : A point source
    • map : A spatial map (in a FITS file)
    • radial : A radial profile (in a text file) and central direction
  • alias_dict : Filename [Optional] Path to a file that gives short names for the DM model to use with each target.

Note

If multiple rosters include the same target and DM model, that target will only be analyzed once, and those results will be re-used when combining each roster.

Simulation configuration

The sim_defaults, sims and random sections can be used to define analysis configurations for control studies with simulations and random sky directions.

Sample simulation Configuration
sim_defaults:
    seed : 0
    nsims : 20
    profile : ack2016

sims:
    'null' : {}
    '100GeV_bb_1e25' : {}
    '100GeV_bb_1e26' : {}

random: {}
  • sim_defaults : dict This is a dictionary of the parameters to use for simulations. These can be overridden for specific types of simulation.

    • seed : int
      Random number seed to use for the first simulation
    • nsims : int
      Number of simulations
    • profile : str
      Name of the DM spatial profile to use for simulations. This must match a profile defined in the roster for each target. The ‘alias_dict’ file can be used to remap longer profile names, or to define a common name for all the profiles in a roster.
  • sims : dict This is a dictionary of the simulation scenarios to consider, and of any option overrides for some of those scenarios (see the sketch after this list).

    Each defined simulation needs a ‘config/sim_{sim_name}.yaml’ to define the injected source to use for that simulation.

  • random: dict This is a dictionary of the options to use for random sky direction control studies.
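
As a sketch of the intended semantics (illustrative only, not dmpipe’s actual code), each entry in sims inherits the values in sim_defaults, with any keys it defines taking precedence:

# Illustrative only: per-scenario settings override sim_defaults.
sim_defaults = {'seed': 0, 'nsims': 20, 'profile': 'ack2016'}
sims = {'null': {}, '100GeV_bb_1e25': {}, '100GeV_bb_1e26': {}}

for name, overrides in sims.items():
    sim_config = dict(sim_defaults, **overrides)
    print(name, sim_config)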

Plotting configuration
Sample plotting Configuration
plot_channels : ['bb', 'tautau']

data_plotting:
    plot-castro : {}
    plot-dm : {}
    plot-limits : {}
    plot-stacked-dm : {}
    plot-stacked-limits : {}

sim_plotting:
    plot-stacked-dm : {}
    plot-stacked-limits : {}
    plot-control-limits : {}

rand_plotting:
    plot-stacked-dm : {}
    plot-stacked-limits : {}
  • plot_channels : list

    List of the DM interaction channels for which to plot results.

  • data_plotting, sim_plotting, rand_plotting : dict

    Dictionaries of which types of plots to make for data, simulations and random direction controls. These dictionaries can be used to override the default set of channels for any particular set of plots.

    The various plot types are:

    • plot-castro : SED plots of a particular target, assuming a particular spatial profile.
    • plot-dm : Plots of the DMCastroData likelihoods in <sigmav> and mass space for each DM channel, assuming a particular J-factor prior type.
    • plot-limits : Plots of the DM upper limits in <sigmav> and mass space for each DM channel, assuming a particular J-factor prior type.
    • plot-stacked-dm : Plots of the DMCastroData likelihoods, stacked for each roster and assuming a particular J-factor prior type.
    • plot-stacked-limits : Plots of the DM upper limits in <sigmav> and mass space for the stacked analysis, for each roster, for each DM channel, assuming a particular J-factor prior type.
    • plot-control-limits : Plots of the DM upper limit expectation bands in <sigmav> and mass space for the stacked analysis, with the true injected signal marked.

Additional Configuration files

In addition to the master configuration file, the pipeline needs a few additional files.

Fermipy Analysis Configuration Yaml

This is simply a template of the fermipy configuration file to be used for the baseline analysis and SED fitting in each ROI. Details of the syntax and options are given in the fermipy documentation: https://fermipy.readthedocs.io/en/latest/config.html. The actual direction and name of the target source in this file will be overwritten for each target.
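
As an illustration of that per-target rewrite (a sketch only: the ‘selection’ keys follow the fermipy configuration schema, and the coordinates are taken from the spatial profile example later on this page):

import yaml

# Sketch: point a copy of the fermipy config template at one target.
with open('config/config_dSphs.yaml') as f:
    cfg = yaml.safe_load(f)

sel = cfg.setdefault('selection', {})
sel['ra'] = 260.05167
sel['dec'] = 57.91528

with open('draco/config.yaml', 'w') as f:  # hypothetical output path
    yaml.safe_dump(cfg, f)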

Dark Matter Spectral Configuration Yaml

This file specifies the masses and channels for which to compute the DM spectra. Here is an example of this file:

# This is the list of channels we will analyze.
# These must match channel names in the DMFitFunction class
channels : ['ee', 'mumu', 'tautau', 'bb', 'tt', 'gg', 'ww', 'zz', 'cc', 'uu', 'dd', 'ss']

# This defines the array of mass points we use (in GeV)
# The points are sampled in log-space
masses :
  mass_min : 10.
  mass_max : 10000.
  mass_nstep : 13
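
For concreteness, the mass array implied by the settings above can be reproduced with numpy (a worked example of the log-spaced sampling, not dmpipe’s internal code):

import numpy as np

# 13 mass points from 10 GeV to 10 TeV, evenly spaced in log10(mass).
masses = np.logspace(np.log10(10.), np.log10(10000.), 13)
print(masses)
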
Simulation Scenario Configuration Yaml

This file specifies the DM signal to inject in the analysis (if any). Here is an example; note that everything inside the ‘injected_source’ tag is in the format that fermipy expects for source definitions.

# For positive control tests we use an injected source.
# In this case it is a DM annihilation spectrum.
injected_source:
  name : dm
  source_model :
    SpatialModel : PointSource
    SpectrumType : DMFitFunction
    norm :
      value : nan # This is the J-factor and depends on the target
    sigmav :
      value: 1.0E-25 # cm^3 s^-1.  (i.e., very large cross section)
    mass :
      value: 100.0 # GeV
    channel0 :
      value : 4 # annihilation to b-quarks

For null simulations, you should include the ‘injected_source’ tag, but leave it blank:

# For null control tests we do not inject a source,
# so the injected_source tag is left empty.
injected_source:
Profile Alias Configuration Yaml

This is a small file that remaps the target profile names used by dmsky to shorter names (without underscores in them). Removing the underscores helps keep the file name fields more logical, since dmpipe generally uses underscores as a field separator. This also keeps file names shorter, and allows us to use rosters with a mixed set of profile versions to do simulations. Here is an example:

ackermann2016_photoj_0.6_nfw : ack2016
geringer-sameth2015_nfw : gs2015
Random Direction Control Sample Configuration Yaml

This file defines how we select random directions for the random direction control studies. Here is an example:

# These are the parameters for the random direction selection
# The algorithm picks points on a grid

# File key for the first direction
seed : 0
# Number of directions to select
nsims : 20

# Step size between grid points (in deg)
step_x : 1.0
step_y : 1.0
# Max distance from ROI center (in deg)
max_x : 3.0
max_y : 3.0
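
A sketch of the grid enumeration these parameters describe (illustrative; the RandomDirGen link may order or filter the points differently):

import numpy as np

step_x, step_y = 1.0, 1.0  # grid spacing in deg
max_x, max_y = 3.0, 3.0    # max offset from the ROI center in deg

xs = np.arange(-max_x, max_x + step_x, step_x)
ys = np.arange(-max_y, max_y + step_y, step_y)
# Offsets from the ROI center; excluding the center itself is an
# assumption of this sketch.
offsets = [(x, y) for x in xs for y in ys if (x, y) != (0.0, 0.0)]
print(len(offsets), 'candidate directions')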

Output Files

Fermipy ROI Snapshots

For each target, the pipeline will perform a baseline fit to the region of interest (ROI) and produce a snapshot in the standard fermipy ROI FITS file format. These are produced by the AnalyzeROI link for data, and copied to the directories used for simulations.

Fermipy SED Files

For each target, and for each spatial profile used to model the target, the pipeline will produce a file in the standard fermipy SED FITS file format. These are produced by the AnalyzeSED link for data, or by the SimulateROI link for simulations.

Dark Matter Likelihood ‘Castro’ Files

For each combination of target, spatial profile and J-factor prior, the pipeline will produce a DMCastroData FITS file with the likelihoods as a function of the DM interaction rate. These are produced by the ConvertCastro link for the individual targets, and by the StackLikelihood link for the stacked roster results.
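
These are ordinary FITS files, so they can be opened with astropy for a quick look (a sketch; the file name is taken from the plotting example above, and the HDU layout is dmpipe-specific):

from astropy.io import fits

# List the HDUs in a DM 'Castro' likelihood file.
with fits.open('dSphs/segue_1/dmlike_ack2016_point_none.fits') as hdus:
    hdus.info()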

Dark Matter Limits Files

For each combination of target, spatial profile and J-factor prior, the pipeline will produce a FITS file with the upper limits on the DM interaction rate. These will also be produced for the stacked results from each roster and J-factor prior combination.

Expectation Band Files

For every combination of target, simulation scenario and J-factor prior, the pipeline will also produce FITS files with summaries of the limits, to capture the expected limit bands.

Bookkeeping and Generated Configuration Files

Several files needed for bookkeeping are created by the PrepareTargets script.

  • Target List Yaml Files

    These are dictionaries of all the targets and all the profiles to consider for each target. Here is an example from the simple test analysis:

    draco: [ack2016_point]
    segue_1: [ack2016_point]
    
  • Roster List Yaml Files

    These are dictionaries of all the targets and target versions that define each roster. Here is an example from the simple test analysis:

    test_point: ['segue_1:ack2016_point', 'draco:ack2016_point']

  • ROI configuration Yaml Files

This is simply the fermipy configuration file to be used for the baseline analysis and SED fitting in each ROI. Details of the syntax and options are given in the fermipy documentation: https://fermipy.readthedocs.io/en/latest/config.html. These are copied from the template version to each of the analysis directories and updated to include the target name and direction.

  • Spatial Profile Yaml Files

These files define the various spatial profiles used to fit each target. The syntax is basically what fermipy needs to create a new source.

name: ack2016_point
source_model: {DEC: 57.91528, RA: 260.05167, SpatialModel: PointSource, SpectrumType: PowerLaw}
  • J-value Yaml Files

These files define the values of the J-factor for the different profiles. They are needed to convert the analysis results to a DM annihilation rate. Here is an example:

{j_integ: 2.188e+18, j_sigma: 0.6, type: NFW}
  • Simulation Input Yaml Files

These files define the source to inject for each simulation scenario. The syntax inside the ‘injected_source’ tag is exactly what fermipy needs to create a new source; note that here the ‘norm’ value has been set to the J-factor of the target (compare the J-value file above).

injected_source:
  name: dm
  source_model:
    SpatialModel: PointSource
    SpectrumType: DMFitFunction
    channel0: {value: 4}
    mass: {value: 100.0}
    norm: {value: 2.188e+18}
    sigmav: {value: 3.0e-26}
  • Simulated Source Spectrum Yaml Files

    These files are created by the SimulateROI task, and contain some information about the simulated sources.

  • Source Correlation Yaml Files

    These files are created by the AnalyzeSED (for data) or SimulateROI (for simulations) tasks, and contain the correlation factors between the target source and any other sources in the ROI above the threshold for special treatment (typically 0.25).

dmpipe package

Module contents

Standalone Analysis Classes

class dmpipe.PrepareTargets(**kwargs)[source]

Bases: fermipy.jobs.link.Link

Small class to prepare the analysis pipeline.

Parameters:
  • sims (<type 'list'>) – Names of the simulation scenario. [[]]
  • alias_dict (<type 'str'>) – File to rename target version keys. [None]
  • dry_run (<type 'bool'>) – Print commands but do not run them. [False]
  • rosters (<type 'list'>) – Name of a dmsky target roster. [[]]
  • ttype (<type 'str'>) – Type of target being analyzed. [None]
  • config (<type 'str'>) – Path to fermipy config file. [None]
  • spatial_models (<type 'list'>) – Types of spatial models to use [[]]
appname = 'dmpipe-prepare-targets'
default_options = {'alias_dict': (None, 'File to rename target version keys.', <type 'str'>), 'config': (None, 'Path to fermipy config file.', <type 'str'>), 'dry_run': (False, 'Print commands but do not run them.', <type 'bool'>), 'rosters': ([], 'Name of a dmsky target roster.', <type 'list'>), 'sims': ([], 'Names of the simulation scenario.', <type 'list'>), 'spatial_models': ([], 'Types of spatial models to use', <type 'list'>), 'ttype': (None, 'Type of target being analyzed.', <type 'str'>)}
description = 'Prepare directories for target analyses'
linkname_default = 'prepare-targets'
run_analysis(argv)[source]

Run this analysis

usage = 'dmpipe-prepare-targets [options]'
class dmpipe.SpecTable(**kwargs)[source]

Bases: fermipy.jobs.link.Link

Small class to build a table with all the DM spectra for this analysis

Parameters:
  • specconfig (<type 'str'>) – Path to DM yaml file defining DM spectra of interest. [None]
  • specfile (<type 'str'>) – Path to DM spectrum file. [None]
  • config (<type 'str'>) – Path to fermipy config file. [None]
  • clobber (<type 'bool'>) – Overwrite existing files. [False]
  • ttype (<type 'str'>) – Type of target being analyzed. [None]
appname = 'dmpipe-spec-table'
default_options = {'clobber': (False, 'Overwrite existing files.', <type 'bool'>), 'config': (None, 'Path to fermipy config file.', <type 'str'>), 'specconfig': (None, 'Path to DM yaml file defining DM spectra of interest.', <type 'str'>), 'specfile': (None, 'Path to DM spectrum file.', <type 'str'>), 'ttype': (None, 'Type of target being analyzed.', <type 'str'>)}
description = 'Build a table with the spectra for DM signals'
linkname_default = 'spec-table'
run_analysis(argv)[source]

Run this analysis

usage = 'dmpipe-spec-table [options]'
class dmpipe.ConvertCastro(**kwargs)[source]

Bases: fermipy.jobs.link.Link

Small class to convert SED to DM space.

Parameters:
  • astro_value_file (<type 'str'>) – Path to yaml file with target j_value or d_value [None]
  • limitfile (<type 'str'>) – Path to file with limits. [None]
  • outfile (<type 'str'>) – Path to output file. [None]
  • clobber (<type 'bool'>) – Overwrite existing files. [False]
  • sed_file (<type 'str'>) – Path to SED file. [None]
  • seed (<type 'int'>) – Seed number for first simulation. [0]
  • nsims (<type 'int'>) – Number of simulations to run. [-1]
  • astro_prior (<type 'str'>) – Types of Prior on J-factor or D-factor [None]
  • specfile (<type 'str'>) – Path to DM spectrum file. [None]
appname = 'dmpipe-convert-castro'
static convert_sed(spec_table, channels, sed_file, outfile, limitfile, **kwargs)[source]

Convert a single SED to DM space.

Parameters:
  • spec_table (DMSpecTable) – Object with all the DM spectra
  • channels (list) – List of the channels to convert
  • sed_file (str) – Path to the SED file
  • outfile (str) – Path to write the output DMCastroData object to
  • limitfile (str) – Path to write the output limits to.
Keyword Arguments:
 
  • norm_type (str) – Normalization type to use
  • j_factor (dict) – Dictionary with information about the J-factor
  • d_factor (dict) – Dictionary with information about the D-factor
  • clobber (bool) – Flag to overwrite existing files.
static convert_sed_to_dm(spec_table, sed, channels, **kwargs)[source]

Convert an SED file to a DMCastroData object

Parameters:
  • spec_table (DMSpecTable) – Object with all the DM spectra
  • sed (CastroData) – Object with the SED data
  • channels (list) – List of the channels to convert
Keyword Arguments:
 
  • norm_type (str) – Normalization type to use
  • j_val (dict) – Dictionary with information about the J-factor
  • d_val (dict) – Dictionary with information about the D-factor
Returns:

  • castro_list (list) – List of the DMCastroData objects with the Likelihood data
  • table_list (list) – List of astropy.table.Table objects with the Likelihood data
  • name_list (list) – List of names

default_options = {'astro_prior': (None, 'Types of Prior on J-factor or D-factor', <type 'str'>), 'astro_value_file': (None, 'Path to yaml file with target j_value or d_value', <type 'str'>), 'clobber': (False, 'Overwrite existing files.', <type 'bool'>), 'limitfile': (None, 'Path to file with limits.', <type 'str'>), 'nsims': (-1, 'Number of simulations to run.', <type 'int'>), 'outfile': (None, 'Path to output file.', <type 'str'>), 'sed_file': (None, 'Path to SED file.', <type 'str'>), 'seed': (0, 'Seed number for first simulation.', <type 'int'>), 'specfile': (None, 'Path to DM spectrum file.', <type 'str'>)}
description = 'Convert SED to DMCastroData'
static extract_dm_limits(dm_castro_list, channels, alphas, mass_table)[source]

Extract limits from a series of DMCastroData objects for a set of channels and masses

Parameters:
  • dm_castro_list (list) – DMCastroData objects with all the DM spectra
  • channels (list) – List of the channels to convert
  • alphas (list) – List of the confidence level threshold to extract limits
  • mass_table (astropy.table.Table) – Table with the masses. This just gets appended to the lists of output tables.
Returns:

  • castro_list (list) – List of the DMCastroData objects with the Likelihood data
  • table_list (list) – List of astropy.table.Table objects with the Likelihood data
  • name_list (list) – List of names

static is_ann_sed(sedfile)[source]
static is_decay_sed(sedfile)[source]
linkname_default = 'convert-castro'
run_analysis(argv)[source]

Run this analysis

static select_channels(channels, sedfile)[source]
usage = 'dmpipe-convert-castro [options]'
class dmpipe.StackLikelihood(**kwargs)[source]

Bases: fermipy.jobs.link.Link

Small class to stack likelihoods that were written to DMCastroData objects.

Parameters:
  • specconfig (<type 'str'>) – Path to DM yaml file defining DM spectra of interest. [None]
  • rosterlist (<type 'str'>) – Path to the roster list. [None]
  • clobber (<type 'bool'>) – Overwrite existing files. [False]
  • seed (<type 'int'>) – Seed number for first simulation. [0]
  • nsims (<type 'int'>) – Number of simulations to run. [20]
  • ttype (<type 'str'>) – Type of target being analyzed. [None]
  • sim (<type 'str'>) – Name of the simulation scenario. [None]
  • astro_prior (<type 'str'>) – Types of Prior on J-factor or D-factor [None]
appname = 'dmpipe-stack-likelihood'
default_options = {'astro_prior': (None, 'Types of Prior on J-factor or D-factor', <type 'str'>), 'clobber': (False, 'Overwrite existing files.', <type 'bool'>), 'nsims': (20, 'Number of simulations to run.', <type 'int'>), 'rosterlist': (None, 'Path to the roster list.', <type 'str'>), 'seed': (0, 'Seed number for first simulation.', <type 'int'>), 'sim': (None, 'Name of the simulation scenario.', <type 'str'>), 'specconfig': (None, 'Path to DM yaml file defining DM spectra of interest.', <type 'str'>), 'ttype': (None, 'Type of target being analyzed.', <type 'str'>)}
description = 'Stack the likelihood from a set of targets'
static is_ann_roster(roster)[source]
static is_decay_roster(roster)[source]
linkname_default = 'stack-likelihood'
run_analysis(argv)[source]

Run this analysis

static select_channels(channels, roster)[source]
static stack_roster(rost, ttype, channels, astro_prior_key, sim, seed)[source]

Stack all of the DMCastroData in a roster

Parameters:
  • rost (list) – List of the targets
  • ttype (str) – Type of target, used for bookkeeping and file names
  • channels (list) – List of the channels to convert
  • astro_prior_key (str) – String that identifies the type of prior on the J-factor
  • sim (str) – String that specifies the simulation scenario
  • seed (int or None) – Key for the simulation instance, used for bookkeeping and file names
Returns:

output – Dictionary of DMCastroData objects, keyed by channel

Return type:

dict

static stack_rosters(roster_dict, ttype, channels, astro_prior_key, sim, seed, clobber)[source]

Stack all of the DMCastroData in a dictionary of rosters

Parameters:
  • roster_dict (dict) – Dictionary of all the roster being used.
  • ttype (str) – Type of target, used for bookkeeping and file names
  • channels (list) – List of the channels to convert
  • astro_prior_key (str) – String that identifies the type of prior on the J-factor
  • sim (str) – String that specifies the simulation scenario
  • seed (int or None) – Key for the simulation instance, used for bookkeeping and file names
  • clobber (bool) – Flag to overwrite existing files.
usage = 'dmpipe-stack-likelihood [options]'
static write_fits_files(stacked_dict, resultsfile, limitfile, clobber=False)[source]

Write the stacked DMCastroData objects and limits to FITS files

Parameters:
  • stacked_dict (dict) – Dictionary of DMCastroData objects, keyed by channel
  • resultsfile (str) – Path to the output file to write the DMCastroData objects to
  • limitfile (str) – Path to write the upper limits to
  • clobber (bool) – Overwrite existing files
static write_stacked(ttype, roster_name, stacked_dict, astro_prior_key, sim, seed, clobber)[source]

Write the stacked DMCastroData object to a FITS file

Parameters:
  • ttype (str) – Type of target, used for bookkeeping and file names
  • roster_name (str) – Name of the roster, used for bookkeeping and file names
  • stacked_dict (dict) – Dictionary of DMCastroData objects, keyed by channel
  • astro_prior_key (str) – String that identifies the type of prior on the J-factor
  • sim (str) – String that specifies the simulation scenario
  • seed (int or None) – Key for the simulation instance, used for bookkeeping and file names
  • clobber (bool) – Flag to overwrite existing files.
class dmpipe.CollectLimits(**kwargs)[source]

Bases: fermipy.jobs.link.Link

Small class to collect limit results from a series of simulations.

Parameters:
  • outfile (<type 'str'>) – Path to output file. [None]
  • seed (<type 'int'>) – Seed number for first simulation. [0]
  • dry_run (<type 'bool'>) – Print commands but do not run them. [False]
  • limitfile (<type 'str'>) – Path to file with limits. [None]
  • summaryfile (<type 'str'>) – Path to file with results summaries. [None]
  • specconfig (<type 'str'>) – Path to DM yaml file defining DM spectra of interest. [None]
  • nsims (<type 'int'>) – Number of simulations to run. [20]
appname = 'dmpipe-collect-limits'
default_options = {'dry_run': (False, 'Print commands but do not run them.', <type 'bool'>), 'limitfile': (None, 'Path to file with limits.', <type 'str'>), 'nsims': (20, 'Number of simulations to run.', <type 'int'>), 'outfile': (None, 'Path to output file.', <type 'str'>), 'seed': (0, 'Seed number for first simulation.', <type 'int'>), 'specconfig': (None, 'Path to DM yaml file defining DM spectra of interest.', <type 'str'>), 'summaryfile': (None, 'Path to file with results summaries.', <type 'str'>)}
description = 'Collect Limits from simulations'
static is_ann_limits(limitfile)[source]
static is_decay_limits(limitfile)[source]
linkname_default = 'collect-limits'
run_analysis(argv)[source]

Run this analysis

static select_channels(channels, limitfile)[source]
usage = 'dmpipe-collect-limits [options]'

Job-dispatch Analysis Classes

class dmpipe.ConvertCastro_SG(link, **kwargs)[source]

Bases: fermipy.jobs.scatter_gather.ScatterGather

Small class to generate configurations for the ConvertCastro script

This does a triple loop over targets, spatial profiles and J-factor priors
Parameters:
  • astro_priors (<type 'list'>) – Types of Prior on J-factor or D-factor [[]]
  • clobber (<type 'bool'>) – Overwrite existing files. [False]
  • seed (<type 'int'>) – Seed number for first simulation. [0]
  • nsims (<type 'int'>) – Number of simulations to run. [20]
  • ttype (<type 'str'>) – Type of target being analyzed. [None]
  • targetlist (<type 'str'>) – Path to the target list. [None]
  • sim (<type 'str'>) – Name of the simulation scenario. [None]
  • specfile (<type 'str'>) – Path to DM spectrum file. [None]
appname = 'dmpipe-convert-castro-sg'
build_job_configs(args)[source]

Hook to build job configurations

clientclass

alias of ConvertCastro

default_options = {'astro_priors': ([], 'Types of Prior on J-factor or D-factor', <type 'list'>), 'clobber': (False, 'Overwrite existing files.', <type 'bool'>), 'nsims': (20, 'Number of simulations to run.', <type 'int'>), 'seed': (0, 'Seed number for first simulation.', <type 'int'>), 'sim': (None, 'Name of the simulation scenario.', <type 'str'>), 'specfile': (None, 'Path to DM spectrum file.', <type 'str'>), 'targetlist': (None, 'Path to the target list.', <type 'str'>), 'ttype': (None, 'Type of target being analyzed.', <type 'str'>)}
description = 'Run analyses on a series of ROIs'
job_time = 600
usage = 'dmpipe-convert-castro-sg [options]'
class dmpipe.StackLikelihood_SG(link, **kwargs)[source]

Bases: fermipy.jobs.scatter_gather.ScatterGather

Small class to generate configurations for StackLikelihood

This loops over the types of priors on the J-factor
Parameters:
  • specconfig (<type 'str'>) – Path to DM spectrum file. [None]
  • astro_priors (<type 'list'>) – Types of Prior on J-factor or D-factor [[]]
  • clobber (<type 'bool'>) – Overwrite existing files. [False]
  • rosterlist (<type 'str'>) – Path to the roster list. [None]
  • nsims (<type 'int'>) – Number of simulations to run. [20]
  • ttype (<type 'str'>) – Type of target being analyzed. [None]
  • seed (<type 'int'>) – Seed number for first simulation. [0]
  • sim (<type 'str'>) – Name of the simulation scenario. [None]
appname = 'dmpipe-stack-likelihood-sg'
build_job_configs(args)[source]

Hook to build job configurations

clientclass

alias of StackLikelihood

default_options = {'astro_priors': ([], 'Types of Prior on J-factor or D-factor', <type 'list'>), 'clobber': (False, 'Overwrite existing files.', <type 'bool'>), 'nsims': (20, 'Number of simulations to run.', <type 'int'>), 'rosterlist': (None, 'Path to the roster list.', <type 'str'>), 'seed': (0, 'Seed number for first simulation.', <type 'int'>), 'sim': (None, 'Name of the simulation scenario.', <type 'str'>), 'specconfig': (None, 'Path to DM spectrum file.', <type 'str'>), 'ttype': (None, 'Type of target being analyzed.', <type 'str'>)}
description = 'Run analyses on a series of ROIs'
job_time = 120
usage = 'dmpipe-stack-likelihood-sg [options]'
class dmpipe.CollectLimits_SG(link, **kwargs)[source]

Bases: fermipy.jobs.scatter_gather.ScatterGather

Small class to generate configurations for CollectLimits

This does a triple loop over all targets, profiles and j-factor priors.
Parameters:
  • astro_priors (<type 'list'>) – Types of Prior on J-factor or D-factor [[]]
  • dry_run (<type 'bool'>) – Print commands but do not run them. [False]
  • nsims (<type 'int'>) – Number of simulations to run. [20]
  • write_full (<type 'bool'>) – Write file with full collected results [False]
  • seed (<type 'int'>) – Seed number for first simulation. [0]
  • specconifg (<type 'str'>) – Path to DM yaml file defining DM spectra of interest. [None]
  • ttype (<type 'str'>) – Type of target being analyzed. [None]
  • targetlist (<type 'str'>) – Path to the target list. [None]
  • sim (<type 'str'>) – Name of the simulation scenario. [None]
appname = 'dmpipe-collect-limits-sg'
build_job_configs(args)[source]

Hook to build job configurations

clientclass

alias of CollectLimits

default_options = {'astro_priors': ([], 'Types of Prior on J-factor or D-factor', <type 'list'>), 'dry_run': (False, 'Print commands but do not run them.', <type 'bool'>), 'nsims': (20, 'Number of simulations to run.', <type 'int'>), 'seed': (0, 'Seed number for first simulation.', <type 'int'>), 'sim': (None, 'Name of the simulation scenario.', <type 'str'>), 'specconifg': (None, 'Path to DM yaml file defining DM spectra of interest.', <type 'str'>), 'targetlist': (None, 'Path to the target list.', <type 'str'>), 'ttype': (None, 'Type of target being analyzed.', <type 'str'>), 'write_full': (False, 'Write file with full collected results', <type 'bool'>)}
description = 'Run analyses on a series of ROIs'
job_time = 120
usage = 'dmpipe-collect-limits-sg [options]'
class dmpipe.CollectStackedLimits_SG(link, **kwargs)[source]

Bases: fermipy.jobs.scatter_gather.ScatterGather

Small class to generate configurations for this script

This adds the following arguments:
Parameters:
  • rosterlist (<type 'str'>) – Path to the target list. [None]
  • write_summary (<type 'bool'>) – Write file with summary of collected results [False]
  • dry_run (<type 'bool'>) – Print commands but do not run them. [False]
  • write_full (<type 'bool'>) – Write file with full collected results [False]
  • astro_priors (<type 'list'>) – Types of Prior on J-factor or D-factor [[]]
  • nsims (<type 'int'>) – Number of simulations to run. [20]
  • ttype (<type 'str'>) – Type of target being analyzed. [None]
  • seed (<type 'int'>) – Seed number for first simulation. [0]
  • sim (<type 'str'>) – Name of the simulation scenario. [None]
appname = 'dmpipe-collect-stacked-limits-sg'
build_job_configs(args)[source]

Hook to build job configurations

clientclass

alias of CollectLimits

default_options = {'astro_priors': ([], 'Types of Prior on J-factor or D-factor', <type 'list'>), 'dry_run': (False, 'Print commands but do not run them.', <type 'bool'>), 'nsims': (20, 'Number of simulations to run.', <type 'int'>), 'rosterlist': (None, 'Path to the target list.', <type 'str'>), 'seed': (0, 'Seed number for first simulation.', <type 'int'>), 'sim': (None, 'Name of the simulation scenario.', <type 'str'>), 'ttype': (None, 'Type of target being analyzed.', <type 'str'>), 'write_full': (False, 'Write file with full collected results', <type 'bool'>), 'write_summary': (False, 'Write file with summary of collected results', <type 'bool'>)}
description = 'Run analyses on a series of ROIs'
job_time = 120
usage = 'dmpipe-collect-stacked-limits-sg [options]'

Standalone Plotting Classes

class dmpipe.PlotDMSpectra(**kwargs)[source]

Bases: fermipy.jobs.link.Link

Small class to plot the DM spectra from pre-computed tables.

Parameters:
  • mass (<type 'float'>) – DM particle mass [100]
  • outfile (<type 'str'>) – Path to output file. [None]
  • chan (<type 'str'>) – DM annihilation channel [bb]
  • infile (<type 'str'>) – Path to input file. [None]
  • spec_type (<type 'str'>) – Type of flux to consider [eflux]
appname = 'dmpipe-plot-dm-spectra'
default_options = {'chan': ('bb', 'DM annihilation channel', <type 'str'>), 'infile': (None, 'Path to input file.', <type 'str'>), 'mass': (100, 'DM particle mass', <type 'float'>), 'outfile': (None, 'Path to output file.', <type 'str'>), 'spec_type': ('eflux', 'Type of flux to consider', <type 'str'>)}
description = 'Plot the DM spectra stored in pre-computed tables'
linkname_default = 'plot-dm-spectra'
run_analysis(argv)[source]

Run this analysis

usage = 'dmpipe-plot-dm-spectra [options]'
class dmpipe.PlotDM(**kwargs)[source]

Bases: fermipy.jobs.link.Link

Small class to plot the likelihood vs <sigma v> and DM particle mass

Parameters:
  • outfile (<type 'str'>) – Path to output file. [None]
  • chan (<type 'str'>) – DM annihilation channel [bb]
  • infile (<type 'str'>) – Path to input file. [None]
  • global_min (<type 'bool'>) – Use global min for castro plots. [False]
appname = 'dmpipe-plot-dm'
default_options = {'chan': ('bb', 'DM annihilation channel', <type 'str'>), 'global_min': (False, 'Use global min for castro plots.', <type 'bool'>), 'infile': (None, 'Path to input file.', <type 'str'>), 'outfile': (None, 'Path to output file.', <type 'str'>)}
description = 'Plot the likelihood vs <sigma v> and DM particle mass'
linkname_default = 'plot-dm'
run_analysis(argv)[source]

Run this analysis

usage = 'dmpipe-plot-dm [options]'
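
A hypothetical invocation, again assuming flags that match the parameter names above:

$ dmpipe-plot-dm --infile dm_castro.fits --outfile dm_castro_bb.png --chan bb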
class dmpipe.PlotLimits(**kwargs)[source]

Bases: fermipy.jobs.link.Link

Small class to plot DM limits on <sigma v> versus mass.

Parameters:
  • bands (<type 'str'>) – Name of file with expected limit bands. [None]
  • chan (<type 'str'>) – DM annihilation channel [bb]
  • infile (<type 'str'>) – Path to input file. [None]
  • sim (<type 'str'>) – Name of the simulation scenario. [None]
  • outfile (<type 'str'>) – Path to output file. [None]
appname = 'dmpipe-plot-limits'
default_options = {'bands': (None, 'Name of file with expected limit bands.', <type 'str'>), 'chan': ('bb', 'DM annihilation channel', <type 'str'>), 'infile': (None, 'Path to input file.', <type 'str'>), 'outfile': (None, 'Path to output file.', <type 'str'>), 'sim': (None, 'Name of the simulation scenario.', <type 'str'>)}
description = 'Plot DM limits on <sigma v> versus mass'
linkname_default = 'plot-limits'
run_analysis(argv)[source]

Run this analysis

usage = 'dmpipe-plot-limits [options]'
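
For instance (the file names are hypothetical, and the expected-limit bands file would typically be produced by the simulation pipeline):

$ dmpipe-plot-limits --infile dm_limits.fits --outfile dm_limits_bb.png --chan bb --bands expected_bands.fits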

Job-dispatch Plotting Classes

class dmpipe.PlotLimits_SG(link, **kwargs)[source]

Bases: fermipy.jobs.scatter_gather.ScatterGather

Small class to generate configurations for PlotLimits

This does a triple-nested loop over targets, profiles, and J-factor priors
Parameters:
  • channels (<type 'list'>) – DM annihilation channels [[]]
  • ttype (<type 'str'>) – Type of target being analyzed. [None]
  • astro_priors (<type 'list'>) – Types of Prior on J-factor or D-factor [[]]
  • targetlist (<type 'str'>) – Path to the target list. [None]
  • dry_run (<type 'bool'>) – Print commands but do not run them. [False]
appname = 'dmpipe-plot-limits-sg'
build_job_configs(args)[source]

Hook to build job configurations

clientclass

alias of PlotLimits

default_options = {'astro_priors': ([], 'Types of Prior on J-factor or D-factor', <type 'list'>), 'channels': ([], 'DM annihilation channels', <type 'list'>), 'dry_run': (False, 'Print commands but do not run them.', <type 'bool'>), 'targetlist': (None, 'Path to the target list.', <type 'str'>), 'ttype': (None, 'Type of target being analyzed.', <type 'str'>)}
description = 'Make castro plots for set of targets'
job_time = 60
usage = 'dmpipe-plot-limits-sg [options]'
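
Like the other job-dispatch classes, this is normally invoked from a pipeline chain, which dispatches one plotting job per target, profile, and prior combination. A minimal hand-run sketch with hypothetical paths (list-valued options such as channels and astro_priors are usually filled in from the pipeline configuration):

$ dmpipe-plot-limits-sg --ttype dsphs --targetlist dsphs/target_list.yaml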
class dmpipe.PlotStackedLimits_SG(link, **kwargs)[source]

Bases: fermipy.jobs.scatter_gather.ScatterGather

Small class to generate configurations for PlotStackedLimits

This does a double-nested loop over rosters and J-factor priors
Parameters:
  • rosterlist (<type 'str'>) – Path to the roster list. [None]
  • dry_run (<type 'bool'>) – Print commands but do not run them. [False]
  • bands (<type 'str'>) – Name of file with expected limit bands. [None]
  • channels (<type 'list'>) – DM annihilation channels [[]]
  • astro_priors (<type 'list'>) – Types of Prior on J-factor or D-factor [[]]
  • nsims (<type 'int'>) – Number of simulations to run. [20]
  • ttype (<type 'str'>) – Type of target being analyzed. [None]
  • seed (<type 'int'>) – Seed number for first simulation. [0]
  • sim (<type 'str'>) – Name of the simulation scenario. [None]
appname = 'dmpipe-plot-stacked-limits-sg'
build_job_configs(args)[source]

Hook to build job configurations

clientclass

alias of PlotLimits

default_options = {'astro_priors': ([], 'Types of Prior on J-factor or D-factor', <type 'list'>), 'bands': (None, 'Name of file with expected limit bands.', <type 'str'>), 'channels': ([], 'DM annihilation channels', <type 'list'>), 'dry_run': (False, 'Print commands but do not run them.', <type 'bool'>), 'nsims': (20, 'Number of simulations to run.', <type 'int'>), 'rosterlist': (None, 'Path to the roster list.', <type 'str'>), 'seed': (0, 'Seed number for first simulation.', <type 'int'>), 'sim': (None, 'Name of the simulation scenario.', <type 'str'>), 'ttype': (None, 'Type of target being analyzed.', <type 'str'>)}
description = 'Make castro plots for set of targets'
job_time = 60
usage = 'dmpipe-plot-stacked-limits-sg [options]'
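
A hypothetical invocation for a single simulation scenario, with flags assumed to follow the parameter names above:

$ dmpipe-plot-stacked-limits-sg --ttype dsphs --rosterlist dsphs/roster_list.yaml --sim sim_bb_100GeV --nsims 20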
class dmpipe.PlotDM_SG(link, **kwargs)[source]

Bases: fermipy.jobs.scatter_gather.ScatterGather

Small class to generate configurations for PlotDM

This does a quadruple-nested loop over targets, profiles, J-factor priors, and channels
Parameters:
  • channels (<type 'list'>) – DM annihilation channels [[]]
  • astro_priors (<type 'list'>) – Types of Prior on J-factor or D-factor [[]]
  • global_min (<type 'bool'>) – Use global min for castro plots. [False]
  • dry_run (<type 'bool'>) – Print commands but do not run them. [False]
  • ttype (<type 'str'>) – Type of target being analyzed. [None]
  • targetlist (<type 'str'>) – Path to the target list. [None]
appname = 'dmpipe-plot-dm-sg'
build_job_configs(args)[source]

Hook to build job configurations

clientclass

alias of PlotDM

default_options = {'astro_priors': ([], 'Types of Prior on J-factor or D-factor', <type 'list'>), 'channels': ([], 'DM annihilation channels', <type 'list'>), 'dry_run': (False, 'Print commands but do not run them.', <type 'bool'>), 'global_min': (False, 'Use global min for castro plots.', <type 'bool'>), 'targetlist': (None, 'Path to the target list.', <type 'str'>), 'ttype': (None, 'Type of target being analyzed.', <type 'str'>)}
description = 'Make castro plots for set of targets'
job_time = 60
usage = 'dmpipe-plot-dm-sg [options]'
class dmpipe.PlotStackedDM_SG(link, **kwargs)[source]

Bases: fermipy.jobs.scatter_gather.ScatterGather

Small class to generate configurations for PlotDM

This does a triple loop over rosters, J-factor priors, and channels
Parameters:
  • rosterlist (<type 'str'>) – Path to the roster list. [None]
  • dry_run (<type 'bool'>) – Print commands but do not run them. [False]
  • channels (<type 'list'>) – DM annihilation channels [[]]
  • astro_priors (<type 'list'>) – Types of Prior on J-factor or D-factor [[]]
  • global_min (<type 'bool'>) – Use global min for castro plots. [False]
  • nsims (<type 'int'>) – Number of simulations to run. [20]
  • ttype (<type 'str'>) – Type of target being analyzed. [None]
  • seed (<type 'int'>) – Seed number for first simulation. [0]
  • sim (<type 'str'>) – Name of the simulation scenario. [None]
appname = 'dmpipe-plot-stacked-dm-sg'
build_job_configs(args)[source]

Hook to build job configurations

clientclass

alias of PlotDM

default_options = {'astro_priors': ([], 'Types of Prior on J-factor or D-factor', <type 'list'>), 'channels': ([], 'DM annihilation channels', <type 'list'>), 'dry_run': (False, 'Print commands but do not run them.', <type 'bool'>), 'global_min': (False, 'Use global min for castro plots.', <type 'bool'>), 'nsims': (20, 'Number of simulations to run.', <type 'int'>), 'rosterlist': (None, 'Path to the roster list.', <type 'str'>), 'seed': (0, 'Seed number for first simulation.', <type 'int'>), 'sim': (None, 'Name of the simulation scenario.', <type 'str'>), 'ttype': (None, 'Type of target being analyzed.', <type 'str'>)}
description = 'Make castro plots for set of targets'
job_time = 60
usage = 'dmpipe-plot-stacked-dm-sg [options]'
class dmpipe.PlotControlLimits_SG(link, **kwargs)[source]

Bases: fermipy.jobs.scatter_gather.ScatterGather

Small class to generate configurations for PlotLimits

This does a quadruple loop over rosters, J-factor priors, channels, and expectation bands
Parameters:
  • channels (<type 'list'>) – DM annihilation channels [[]]
  • rosterlist (<type 'str'>) – Path to the roster list. [None]
  • dry_run (<type 'bool'>) – Print commands but do not run them. [False]
  • astro_priors (<type 'list'>) – Types of Prior on J-factor or D-factor [[]]
  • ttype (<type 'str'>) – Type of target being analyzed. [None]
  • sim (<type 'str'>) – Name of the simulation scenario. [None]
appname = 'dmpipe-plot-control-limits-sg'
build_job_configs(args)[source]

Hook to build job configurations

clientclass

alias of PlotLimits

default_options = {'astro_priors': ([], 'Types of Prior on J-factor or D-factor', <type 'list'>), 'channels': ([], 'DM annihilation channels', <type 'list'>), 'dry_run': (False, 'Print commands but do not run them.', <type 'bool'>), 'rosterlist': (None, 'Path to the roster list.', <type 'str'>), 'sim': (None, 'Name of the simulation scenario.', <type 'str'>), 'ttype': (None, 'Type of target being analyzed.', <type 'str'>)}
description = 'Make limits plots for positive controls'
job_time = 60
usage = 'dmpipe-plot-control-limits-sg [options]'
class dmpipe.PlotFinalLimits_SG(link, **kwargs)[source]

Bases: fermipy.jobs.scatter_gather.ScatterGather

Small class to generate configurations for PlotLimits

This does a quadruple loop over rosters, J-factor priors, channels, and expectation bands
Parameters:
  • sims (<type 'list'>) – Names of the simulation scenario. [[]]
  • channels (<type 'list'>) – DM annihilation channels [[]]
  • rosterlist (<type 'str'>) – Path to the roster list. [None]
  • dry_run (<type 'bool'>) – Print commands but do not run them. [False]
  • astro_priors (<type 'list'>) – Types of Prior on J-factor or D-factor [[]]
  • ttype (<type 'str'>) – Type of target being analyzed. [None]
appname = 'dmpipe-plot-final-limits-sg'
build_job_configs(args)[source]

Hook to build job configurations

clientclass

alias of PlotLimits

default_options = {'astro_priors': ([], 'Types of Prior on J-factor or D-factor', <type 'list'>), 'channels': ([], 'DM annihilation channels', <type 'list'>), 'dry_run': (False, 'Print commands but do not run them.', <type 'bool'>), 'rosterlist': (None, 'Path to the roster list.', <type 'str'>), 'sims': ([], 'Names of the simulation scenario.', <type 'list'>), 'ttype': (None, 'Type of target being analyzed.', <type 'str'>)}
description = 'Make final limits plots'
job_time = 60
usage = 'dmpipe-plot-final-limits-sg [options]'
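
A hypothetical invocation (the sims, channels, and priors lists are usually taken from the pipeline configuration rather than typed on the command line):

$ dmpipe-plot-final-limits-sg --ttype dsphs --rosterlist dsphs/roster_list.yaml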

Pipeline Classes

class dmpipe.PipelineData(**kwargs)[source]

Bases: fermipy.jobs.chain.Chain

Chain together the steps of the dSphs pipeline

This chain consists of:

analyze-roi : AnalyzeROI_SG
Do the baseline analysis for each target in the target list.
analyze-sed : AnalyzeSED_SG
Extract the SED for each profile for each target in the target list.
convert-castro : ConvertCastro_SG
Convert the SED to DM-space for each target, profile and J-factor prior type
stack-likelihood : StackLikelihood_SG
Stack the likelihoods for each roster in the analysis, for each J-factor prior type.

Optional plotting modules include:

plot-castro : PlotCastro
Make ‘Castro’ plots of the SEDs for each profile for each target in the target list.
plot-dm : PlotDM_SG
Make DM ‘Castro’ plots for each profile, target, J-factor prior type and channel.
plot-limits : PlotLimits_SG
Make DM limits plots for each profile, target, J-factor prior type and channel.
plot-stacked-dm : PlotStackedDM_SG
Make DM ‘Castro’ plots for each roster, J-factor prior type and channel.
plot-stacked-limits : PlotStackedLimits_SG
Make stacked DM limits plots for each roster, J-factor prior type and channel.
Parameters:
  • config (<type 'str'>) – Path to fermipy config file. [None]
  • dry_run (<type 'bool'>) – Print commands but do not run them. [False]
  • sim (<type 'str'>) – Name of the simulation scenario. [None]
appname = 'dmpipe-pipeline-data'
default_options = {'config': (None, 'Path to fermipy config file.', <type 'str'>), 'dry_run': (False, 'Print commands but do not run them.', <type 'bool'>), 'sim': (None, 'Name of the simulation scenario.', <type 'str'>)}
description = 'Data analysis pipeline'
linkname_default = 'pipeline-data'
usage = 'dmpipe-pipeline-data [options]'
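
A hypothetical invocation, pointing the chain at the top-level fermipy configuration file:

$ dmpipe-pipeline-data --config config.yaml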
class dmpipe.PipelineSim(**kwargs)[source]

Bases: fermipy.jobs.chain.Chain

Chain together the steps of the dSphs pipeline for simulations

This chain consists of:

copy-base-roi : CopyBaseROI_SG
Copy the baseline analysis directory files for each target.
simulate-roi : SimulateROI_SG
Simulate the SED analysis for each target and profile in the target list.
convert-castro : ConvertCastro_SG
Convert the SED to DM-space for each target, profile and J-factor prior type
stack-likelihood : StackLikelihood_SG
Stack the likelihoods for each roster in the analysis, for each J-factor prior type.
collect-sed : CollectSED_SG
Collect and summarize the SED results for all the simulations.
collect-limits : CollectLimits_SG
Collect and summarize the limits for all the targets for all the simulations.
collect-stacked-limits : CollectStackedLimits_SG
Collect and summarize the stacked limits for all the targets for all the simulations.

Optional plotting modules include:

plot-stacked-dm : PlotStackedDM_SG
Make DM ‘Castro’ plots for each roster, J-factor prior type and channel.
plot-stacked-limits : PlotStackedLimits_SG
Make stacked DM limits plots for each roster, J-factor prior type and channel.
plot-control-limits : PlotControlLimits_SG
Make DM limits plots for the positive controls for each roster, J-factor prior type and channel.
plot-control-mles : PlotControlMLEs_SG
Make DM Maximum Likelihood estimate plots for each roster, J-factor prior type and channel.
Parameters:
  • config (<type 'str'>) – Path to fermipy config file. [None]
  • dry_run (<type 'bool'>) – Print commands but do not run them. [False]
  • sim (<type 'str'>) – Name of the simulation scenario. [None]
appname = 'dmpipe-pipeline-sim'
default_options = {'config': (None, 'Path to fermipy config file.', <type 'str'>), 'dry_run': (False, 'Print commands but do not run them.', <type 'bool'>), 'sim': (None, 'Name of the simulation scenario.', <type 'str'>)}
description = 'Simulation analysis pipeline'
linkname_default = 'pipeline-sim'
usage = 'dmpipe-pipeline-sim [options]'
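
A hypothetical invocation of the simulation pipeline for one scenario:

$ dmpipe-pipeline-sim --config config.yaml --sim sim_bb_100GeV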
class dmpipe.PipelineRandom(**kwargs)[source]

Bases: fermipy.jobs.chain.Chain

Chain together the steps of the dSphs pipeline for random direction studies.

This chain consists of:

copy-base-roi : CopyBaseROI_SG
Copy the baseline analysis directory files for each target.
random-dir-gen : RandomDirGen_SG
Select random directions inside the ROI and generate appropriate target files.
analyze-sed : AnalyzeSED_SG
Extract the SED for each profile for each target in the target list.
convert-castro : ConvertCastro_SG
Convert the SED to DM-space for each target, profile and J-factor prior type
stack-likelihood : StackLikelihood_SG
Stack the likelihoods for each roster in the analysis, for each J-factor prior type.
collect-sed : CollectSED_SG
Collect and summarize the SED results for all the simulations.
collect-limits : CollectLimits_SG
Collect and summarize the limits for all the targets for all the simulations.
collect-stacked-limits : CollectStackedLimits_SG
Collect and summarize the stacked limits for all the targets for all the simulations.

Optional plotting modules include:

plot-stacked-dm : PlotStackedDM_SG
Make DM ‘Castro’ plots for each roster, J-factor prior type and channel.
plot-stacked-limits : PlotStackedLimits_SG
Make stacked DM limits plots for each roster, J-factor prior type and channel.
Parameters:
  • config (<type 'str'>) – Path to fermipy config file. [None]
  • dry_run (<type 'bool'>) – Print commands but do not run them. [False]
appname = 'dmpipe-pipeline-random'
default_options = {'config': (None, 'Path to fermipy config file.', <type 'str'>), 'dry_run': (False, 'Print commands but do not run them.', <type 'bool'>)}
description = 'Data analysis pipeline for random directions'
linkname_default = 'pipeline-random'
usage = 'dmpipe-pipeline-random [options]'
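
A hypothetical invocation:

$ dmpipe-pipeline-random --config config.yaml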
class dmpipe.Pipeline(**kwargs)[source]

Bases: fermipy.jobs.chain.Chain

Top level DM pipeline analysis chain.

This chain consists of:

prepare-targets : PrepareTargets
Make the input files needed for all the targets in the analysis.
spec-table : SpecTable
Build the FITS table with the DM spectra for all the channels being analyzed.
data : PipelineData
Data analysis pipeline
sim_{sim_name} : PipelineSim
Simulation pipeline for each simulation scenario
random : PipelineRandom
Analysis pipeline for random direction control studies
final-plots : PlotFinalLimits_SG
Make the final analysis results plots
appname = 'dmpipe-pipeline'
default_options = {'config': (None, 'Path to fermipy config file.', <type 'str'>), 'dry_run': (False, 'Print commands but do not run them.', <type 'bool'>)}
description = 'Data analysis pipeline'
linkname_default = 'pipeline'
preconfigure(config_yaml)[source]

Run any links needed to build files that are used in _map_arguments

usage = 'dmpipe-pipeline [options]'
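
The entire analysis can be driven from this single top-level chain. A hypothetical invocation, using --dry_run (assumed to act as a simple flag) to print the commands without submitting jobs:

$ dmpipe-pipeline --config config.yaml --dry_run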

Indices and tables