uap – Robust, Consistent, and Reproducible Data Analysis

uap executes, controls, and keeps track of the analysis of large data sets. It enables users to perform robust, consistent, and reproducible data analysis. uap encapsulates the usage of (bioinformatic) tools and handles data flow and processing during an analysis. Users can combine predefined and self-made analysis steps into custom analyses; analysis steps encapsulate best-practice usage of bioinformatic software tools. Although uap focuses on the analysis of high-throughput sequencing data, its plugin architecture allows users to add functionality and thus handle any kind of large-scale data analysis.

Usage:

uap is a command-line tool, implemented in Python, which runs under GNU/Linux. It takes a user-defined configuration file, which describes the analysis, as input. Users interact with the analysis via uap's subcommands.

Supported Platforms:

uap runs natively on Unix-like operating systems and additionally supports the cluster engines UGE, OGE/SGE, and SLURM.

Important Information

The uap installation does not include all necessary tools for the data analysis. It expects that the required tools are already installed.

The recommended workflow to analyse data with uap is:

  1. Install uap (see Installation of uap)
  2. Optionally: Extend uap by adding new steps (see Extending uap Functionality)
  3. Write a configuration file to set up the analysis (see Configuration File)
  4. Start the analysis locally (see run-locally) or submit it to the cluster (see submit-to-cluster)
  5. Follow the progress of the analysis (see status)
  6. Optionally: share your extensions with others (send a pull request via github)

A finished analysis leaves the user with:

  • The original input files (which are, of course, left untouched).
  • The experiment-specific configuration file (see Configuration File). You should keep this configuration file for later reference and you could even make it publicly available along with your input files for anybody to re-run the entire data analysis or parts thereof.
  • The result files and comprehensive annotations of the analysis (see Annotation Files). These files are located at the destination path defined in the configuration file.

Core aspects

Robust Data Analysis:

  • Data is processed in a temporary location. If and only if ALL involved processes exit gracefully, the output files are copied to the final output directory.
  • Processing can be aborted and continued from the command line at any time. Failures during data processing do not leave the analysis in an unstable state.
  • The final output directory names are suffixed with a hash which is computed from the commands executed to generate the output data. Data is therefore not easily overwritten, and the hash helps to detect when recomputations are necessary.
  • Fail fast, fail often. During initiation uap checks the availability of all required tools, calculates the paths of all files of the analysis, and checks whether these files exist. If any problems are encountered, the user is asked to fix them.

Consistency:

  • Steps and files are defined in a directed acyclic graph (DAG). The DAG defines dependencies between input and output files.
  • Prior to any execution the dependencies between files are calculated. If an input file is newer than files derived from it, or an option for a calculation has changed, all dependent files are marked for recalculation.

Reproducibility:

  • Comprehensive annotations are written to the output directories. They allow for later investigation of errors or review of the executed commands. They also contain the versions of the tools used, the required runtime, memory and CPU usage, etc.

Usability:

  • A single configuration file describes the entire data analysis workflow.
  • A command-line tool is used to interact with uap. It can be used to execute, monitor, and analyse defined workflows.


Installation of uap

Prerequisites

The installation requires Python's virtualenv, a tool to create isolated Python environments. Please install it if it is not already installed:

$ sudo apt-get install python-virtualenv
OR
$ sudo pip install virtualenv

In addition, git and graphviz are required.
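
On Debian-based systems, for example, they could be installed like this:

$ sudo apt-get install git graphviz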

The uap installation does not include all necessary tools for the data analysis. It is expected that the required tools are already installed.

Downloading the Software

Download the software from uap's GitHub repository like this:

$ git clone https://github.com/kmpf/uap.git

Setting Up Python Environment

After cloning the repository, change into the created directory and run the bootstrapping script bootstrap.sh:

$ cd uap
$ ./bootstrap.sh

The script creates the required Python environment (which will be located in ./python_env/). Afterwards it installs PyYAML, NumPy, biopython and psutil into the freshly created environment. There is no harm in accidentally running this script multiple times.

Making uap Globally Available

uap can be made available globally. On Unix-like operating systems it is advised to add the installation path to your $PATH variable. To do so, change into the uap directory and execute:

$ pwd

Copy the output and add a line like the following to your ~/.bashrc or ~/.bash_profile, replacing <uap-path> with the copied output:

PATH=$PATH:<uap-path>
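
Alternatively, while inside the uap directory, the line can be appended in one go (this assumes you use ~/.bashrc):

$ echo "PATH=\$PATH:$(pwd)" >> ~/.bashrc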

Finally, make the changes known to your environment by sourcing the changed file:

$ source ~/.bashrc
$ source ~/.bash_profile

How-To Use uap

At first, you need to install uap (see Installation of uap).

Try Existing Configurations

After you have done that you need a working configuration file. Example configurations are included in uap's installation directory, inside the example-configurations folder. Go there and try:

$ uap index_mycoplasma_genitalium_ASM2732v1_genome.yaml status

Start working on your first uap analysis, which showcases the controlled indexing of a genome (admittedly a tiny one):

$ uap index_mycoplasma_genitalium_ASM2732v1_genome.yaml status
[uap] Set log level to ERROR
[uap][ERROR]: index_mycoplasma_genitalium_ASM2732v1_genome.yaml: Destination path does not exist: genomes/bacteria/Mycoplasma_genitalium/

Oops, the destination_path does not exist (see Destination_path Section). Create it and start again:

$ mkdir -p genomes/bacteria/Mycoplasma_genitalium/
$ uap index_mycoplasma_genitalium_ASM2732v1_genome.yaml status

Waiting tasks
-------------
[w] bowtie2_index/Mycoplasma_genitalium_index-download
[w] bwa_index/Mycoplasma_genitalium_index-download
[w] fasta_index/download
[w] segemehl_index/Mycoplasma_genitalium_genome-download

Ready tasks
-----------
[r] M_genitalium_genome/download

tasks: 5 total, 4 waiting, 1 ready

If you still get errors, please check that the tools defined in index_mycoplasma_genitalium_ASM2732v1_genome.yaml are available in your environment (see Tools Section).

The [w] stands for the waiting status of a task and the [r] for the ready status (see Command-Line Usage of uap).

Go on and try some more example configurations (let's assume for now that all tools are installed and configured correctly). Suppose you want to create indexes of the human genome (hg19):

$ uap index_homo_sapiens_hg19_genome.yaml status
[uap] Set log level to ERROR
[uap][ERROR]: Output directory (genomes/animalia/chordata/mammalia/primates/homo_sapiens/hg19/chromosome_sizes) does not exist. Please create it.
$ mkdir -p genomes/animalia/chordata/mammalia/primates/homo_sapiens/hg19/chromosome_sizes
$ uap index_homo_sapiens_hg19_genome.yaml run-locally
<Analysis starts>

Create Your Own Configuration

Although writing the configuration may seem a bit complicated, the trouble pays off later because further interaction with the pipeline is quite simple. The structure and content of the configuration files are described in detail on another page (see Configuration File). Here is a simple configuration:

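The following sketch is assembled from the sections described below; the paths, file name patterns, and step options are placeholders and have to be adapted to your data (the cutadapt step, for instance, needs its own options, see Available steps):

destination_path: "/path/to/uap/output"

email: "your.name@mail.de"

steps:
    input_step (fastq_source):
        pattern: /path/to/fastq-files/*.fastq.gz
        group: (Sample_\d+)_R[12].fastq.gz
        first_read: '_R1'
        second_read: '_R2'
        paired_end: True

    clip_adapters (cutadapt):
        _depends: input_step
        # ... cutadapt-specific options

tools:
    cutadapt:
        path: cutadapt
        get_version: --version
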
General Structure of Sequencing Analysis

Every analysis of high-throughput sequencing data revolves around some basic tasks, irrespective of whether RNA or DNA was sequenced.

  1. Get the sequencing reads as input (most likely fastq.gz)
  2. Remove adapter sequences from your sequencing reads
  3. Align the sequencing reads onto the reference genome

After these steps are finished, many different analyses can be applied to the data. Below, example configurations for frequently used analyses are shown. The enumeration of steps continues as if the basic steps had already been performed.

RNAseq analysis
Differential expression

RNAseq analysis often aims at the discovery of differentially expressed (known) transcripts. For this, mapped reads for at least two different samples have to be available.

  1. Get an annotation set (e.g. genes, transcripts, ...)
  2. Count the number of reads overlapping the annotation
  3. Perform statistical analysis, based on counts
Assemble novel transcripts

As the publicly available annotations, e.g. from GENCODE, are probably not complete, the assembly of novel transcripts from RNAseq data is another task one would perform to investigate the transcriptome.

ChIPseq analysis

ChIPseq analysis aims at the discovery of genomic loci at which protein(s) of interest were bound. The experiment is an enrichment procedure using specific antibodies. The enrichment is normally detected by so-called peak calling programs.

  1. Get negative control
  2. Peak calling
Prepare UCSC genome browser tracks

The conversion of sequencing data into a format that can be displayed by the UCSC genome browser is needed in almost all sequencing projects.

Configuration File

uap operates on YAML files which define data analysis. These files are called configuration files.

A configuration file describes an analysis completely. Configurations consist of the following sections (let's just call them sections, although technically they are keys):

  • destination_path – points to the directory where the result files, annotations and temporary files are written to
  • email – when submitting jobs on a cluster, messages will be sent to this email address by the cluster engine (nobody@example.com by default)
  • constants – defines constants for later use (define repeatedly used values as constants to increase readability of the following sections)
  • steps – defines the source and processing steps and their order
  • tools – defines all tools used in the analysis and how to determine their versions (for later reference)

If you want to know more about the notation that is used in this file, have a closer look at the YAML definition.

Sections of a Configuration File

Destination_path Section

The value of destination_path is the directory where uap is going to store the created files.

destination_path: "/path/to/uap/output"
Email Section

The value of email is needed if the analysis is executed on a cluster, which can use it to inform the person who started uap about status changes of submitted jobs.

email: "your.name@mail.de"
Steps Section

The steps section is the core of the analysis file, because it defines when steps are executed and how they depend on each other. All available steps are described in detail in the steps documentation: Available steps. This section contains a key for every step, therefore each step must have a unique name [1]. There are two ways to name a step to allow multiple steps of the same type and still ensure unique naming:

steps:
    # here, the step name is unchanged, it's a cutadapt step which is also
    # called 'cutadapt'
    cutadapt:
        ... # options following

    # here, we also insert a cutadapt step, but we give it a different name:
    # 'clip_adapters'
    clip_adapters (cutadapt):
        ... # options following

There are two different types of steps:

Source Steps

They provide input files for the analysis. They might start processes such as downloading files or demultiplexing sequence reads. They do not have dependencies, they can introduce files from outside the destination path (see Destination_path Section), and they are usually the first steps of an analysis.

For example, if you want to work with fastq files, the first step is to import the required files. For this task the source step fastq_source is the right solution.

A possible step definition could look like this:

steps:
    input_step (fastq_source):
        pattern: /Path/to/fastq-files/*.gz
        group: ([SL]\w+)_R[12]-00[12].fastq.gz
        sample_id_prefix: MyPrefix
        first_read: '_R1'
        second_read: '_R2'
        paired_end: True

The individual keys are described at Available steps. For defining the group key a regular expression is used. If you are not familiar with regular expressions, you can read about them and test your expression at pythex.org.
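
If you prefer to test such an expression locally instead, a quick check in Python might look like this (the pattern and file name follow the example above):

import re

# 'group' regular expression from the step definition above
pattern = r'([SL]\w+)_R[12]-00[12].fastq.gz'
file_name = 'Sample_R1-001.fastq.gz'

match = re.search(pattern, file_name)
if match:
    # prints 'Sample' -- the sample name derived from the file name
    print(match.group(1))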

Processing Steps

They depend upon one or more predecessor steps and work with their output files. Output files of processing steps are automatically named and placed by uap. Processing steps are usually configurable. For a complete list of available options please visit Available steps or use the subcommand steps.

Reserved Keywords for Steps
_depends:
Dependencies are defined via the _depends key which may either be null, a step name, or a list of step names.
steps:
    # the source step which depends on nothing
    fastq_source:
        # ...

    run_folder_source:
        # ...

    # the first processing step, which depends on the source step
    cutadapt:
        _depends: [fastq_source, run_folder_source]

    # the second processing step, which depends on the cutadapt step
    fix_cutadapt:
        _depends: cutadapt
_connect:
Steps connected via _depends normally pass data along through so-called connections. If the name of an output connection matches the name of an input connection of the succeeding step, data is passed on automatically. Sometimes, however, the user wants to connect differently named connections. This can be done with the _connect keyword. A common usage is to connect downloaded data with a processing step.
  steps:
      # Source step to download i.e. sequence of chr1 of some species
      chr1 (raw_url_source):
          ...

      # Download chr2 sequence
      chr2 (raw_url_source):
          ...

      merge_fasta_files:
          _depends:
              - chr1
              - chr2
          # Equivalent to:
          # _depends: [chr1, chr2]
          _connect:
              in/sequence:
                  - chr1/raw
                  - chr2/raw
          # Equivalent to:
          # _connect:
          #     in/sequence: [chr1/raw, chr2/raw]

The example shows how the ``raw_url_source`` output connection ``raw`` is
connected to the input connection ``sequence`` of the ``merge_fasta_files``
step.
_BREAK:
If you want to cut off entire branches of the step graph, set the _BREAK flag in a step definition, which will force the step to produce no runs (which will in turn give all following steps nothing to do, thereby effectively disabling these steps):
steps:
    fastq_source:
        # ...

    cutadapt:
        _depends: fastq_source

    # this step and all following steps will not be executed
    fix_cutadapt:
        _depends: cutadapt
        _BREAK: true
_volatile:
Steps can be marked with _volatile: yes. This flag tells uap that the output files of the marked step are only intermediate results.
steps:
    # the source step which depends on nothing
    fastq_source:
        # ...

    # this steps output can be deleted if all depending steps are finished
    cutadapt:
        _depends: fastq_source
        _volatile: yes
        # same as:
        # _volatile: True

    # if fix_cutadapt is finished the output files of cutadapt can be
    # volatilized
    fix_cutadapt:
        _depends: cutadapt

If all steps depending on the intermediate step are finished, uap informs the user that disk space can be freed. The message is shown when the status is checked and looks like this:

Hint: You could save 156.9 GB of disk space by volatilizing 104 output files.
Call 'uap <project-config>.yaml volatilize --srsly' to purge the files.

If the user executes the volatilize command the output files are replaced by placeholder files.

Tools Section

The tools section must list all programs required for the execution of a particular analysis. uap uses the information given here to check if a tool is available in the current environment. This is particularly useful on cluster systems where software might not always be loaded. Also, uap logs the version of each tool used by a step.

By default, version determination is simply attempted by calling the program without command-line arguments.

If a certain argument is required, specify it in get_version.

If a tool does not exit with code 0, you can find out which code it is: execute the required command and afterwards type echo $? in the same shell. The output is the exit code of the last executed command. You can use it to specify the exit code in exit_code.
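
For example, for a hypothetical some-other-tool that prints its version but exits with code 1, the check and the resulting entry could look like this:

$ some-other-tool --version
$ echo $?
1

tools:
    some-other-tool:
        path: /path/to/some-other-tool
        get_version: --version
        exit_code: 1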

tools:
    # you don't have to specify a path if the tool can be found in $PATH
    cat:
        path: cat
        get_version: --version
    # you have to specify a path if the tool can not be found in $PATH
    some-tool:
        path: /path/to/some-tool
        get_version: --version

If you are working on a cluster running UGE or SLURM you can also use their module system. You need to know what actually happens when you load or unload a module:

$ module load <module-name>
$ module unload <module-name>

As far as I know, module is neither a command nor an alias; it is a Bash function. So use declare -f to find out what it is actually doing:

$ declare -f module

The output should look like this:

module ()
    {
        eval `/usr/local/modules/3.2.10-1/Modules/$MODULE_VERSION/bin/modulecmd bash $*`
    }

Another possible output is:

module ()
    {
        eval $($LMOD_CMD bash "$@");
        [ $? = 0 ] && eval $(${LMOD_SETTARG_CMD:-:} -s sh)
    }

In this case you have to look in $LMOD_CMD for the required path:

$ echo $LMOD_CMD

Now you can use this newly gathered information to load a module before use and unload it afterwards. You only need to replace $MODULE_VERSION with the current version of the module system you are using and replace bash with python. A potential bedtools entry in the tools section might look like this:

tools:
    ....
    bedtools:
        module_load: /usr/local/modules/3.2.10-1/Modules/3.2.10/bin/modulecmd python load bedtools/2.24.0-1
        module_unload: /usr/local/modules/3.2.10-1/Modules/3.2.10/bin/modulecmd python unload bedtools/2.24.0-1
        path: bedtools
        get_version: --version
        exit_code: 0

Note

Use python instead of bash when loading modules via uap, because the module is loaded from within a Python environment and not from within a Bash shell.

Example Configurations

Please check out the example configurations provided inside the example-configurations folder of uap's installation directory.

[1] PyYAML does not complain about duplicate keys.

Command-Line Usage of uap

uap uses Python’s argparse. Therefore, uap provides help information on the command-line:

$ uap -h
usage: uap [-h] [-v] [--version]
           [<project-config>.yaml]
           {fix-problems,render,report,run-locally,status,steps,
           submit-to-cluster,task-info,volatilize}
           ...

This script starts and controls 'uap' analysis.

positional arguments:
  <project-config>.yaml
                        Path to YAML file which holds the pipeline
                        configuration. It has to follow the structure given
                        in the documentation.

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         Increase output verbosity
  --version             Display version information.

subcommands:
  Available subcommands.

  {fix-problems,render,report,run-locally,status,steps,submit-to-cluster,
  task-info,volatilize}
    fix-problems        Fixes problematic states by removing stall files.
    render              Renders DOT-graphs displaying information of the
                        analysis.
    report              Generates reports of steps which can do so.
    run-locally         Executes the analysis on the local machine.
    status              Displays information about the status of the analysis.
    steps               Displays information about the steps available in UAP.
    submit-to-cluster   Submits the jobs created by the seq-pipeline to a
                        cluster
    task-info           Displays information about certain source or
                        processing tasks.
    volatilize          Saves disk space by volatilizing intermediate results

For further information please visit http://uap.readthedocs.org/en/latest/
For citation use ...

Almost all subcommands require a YAML configuration file (see Configuration File), except for uap steps, which does not depend on a concrete analysis.

Every time uap is started with a Configuration File several things happen:

  1. The configuration file is read
  2. The tools given in the tools section are checked
  3. The input files are checked
  4. The states of all runs are calculated.

If any of these steps fails, uap will print an error message and abort. This may seem a bit harsh, but after all, it's better to fail early than to fail late if failing is unavoidable.

uap creates a symbolic link named <project-config>.yaml-out, which points to the destination path, if it does not exist already. The symbolic link is created in the directory containing <project-config>.yaml.

There are a couple of global command line parameters which are valid for all scripts (well, actually, it’s only one):

  • --even-if-dirty:

    Before doing anything else, the pipeline checks whether its source code has been modified in any way via Git. If so, processing is stopped immediately unless this flag is specified. If you specify the flag, the fact that the repository was dirty will be recorded in all annotations which are produced, including a full Git diff. A shortcut is --even.

The subcommands are described in detail below.

Explanation of Subcommands

steps

The steps subcommand lists all available source and processing steps:

usage: uap [<project-config>.yaml] steps [-h] [--even-if-dirty] [--show STEP]

This script displays by default a list of all steps the pipeline can use.

optional arguments:
  -h, --help       show this help message and exit
  --even-if-dirty  Must be set if the local git repository contains uncommited
                   changes. Otherwise the pipeline will not start.
  --show STEP      Show the details of a specific step.
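
For example, to list all available steps and then display the details of a single one (cutadapt is used here for illustration):

$ uap steps
$ uap steps --show cutadapt
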
status

The status subcommand lists all runs of an analysis. A run describes the concrete processing of a sample by a step. Samples are usually defined at the source steps and are then propagated through the analysis. Here is the help message:

$ uap <project-config>.yaml status -h
usage: uap [<project-config>.yaml] status [-h] [--even-if-dirty]
                                          [--cluster CLUSTER] [--summarize]
                                          [--graph] [--sources]
                                          [-t [TASK [TASK ...]]]

This script displays by default information about all tasks of the pipeline
as configured in '<project-config>.yaml'. But the displayed information can
be narrowed down via command line options.
IMPORTANT: Hints given by this script are just valid if the jobs were
submitted to the cluster.

optional arguments:
  -h, --help            show this help message and exit
  --even-if-dirty       Must be set if the local git repository contains
                        uncommited changes. Otherwise the pipeline will not
                        start.
  --cluster CLUSTER     Specify the cluster type (sge, slurm), defaults to
                        auto.
  --summarize           Displays summarized information of the analysis.
  --graph               Displays the dependency graph of the analysis.
  --sources             Displays only information about the source runs.
  -t [TASK [TASK ...]], --task [TASK [TASK ...]]
                        Displays only the named task IDs. Can take multiple
                        task ID(s) as input. A task ID looks like this
                        'step_name/run_id'. A list of all task IDs is returned
                        by running:
                        $ uap <project-config>.yaml status

At any time, each run is in one of the following states:

  • waiting – the run is waiting for input files to appear, or its input files are not up-to-date regarding their respective dependencies
  • ready – all input files are present and up-to-date regarding their upstream input files (and so on, recursively), the run is ready and can be started
  • queued – the run is currently queued and will be started “soon” (only available if you use a compute cluster)
  • executing – the run is currently running on this or another machine
  • finished – all output files are in place and up-to-date

Here is an example output:

$ uap <project-config>.yaml status
Waiting tasks
-------------
[w] cufflinks/Sample_COPD_2023

Ready tasks
-----------
[r] tophat2/Sample_COPD_2023

Finished tasks
--------------
[f] cutadapt/Sample_COPD_2023-R1
[f] cutadapt/Sample_COPD_2023-R2
[f] fix_cutadapt/Sample_COPD_2023

tasks: 5 total, 1 waiting, 1 ready, 3 finished

To get a more concise summary, specify --summarize:

$ uap <project-config>.yaml status --summarize
Waiting tasks
-------------
[w]   1 cufflinks

Ready tasks
-----------
[r]   1 tophat2

Finished tasks
--------------
[f]   2 cutadapt
[f]   1 fix_cutadapt

tasks: 5 total, 1 waiting, 1 ready, 3 finished

...or print a fancy ASCII art graph with --graph:

$ uap <project-config>.yaml status --graph
samples (1 finished)
└─cutadapt (2 finished)
  └─fix_cutadapt (1 finished)
    └─tophat2 (1 ready)
      └─cufflinks (1 waiting)

Detailed information about a specific task can be obtained by specifying the run ID on the command line:

$ uap index_mycoplasma_genitalium_ASM2732v1_genome.yaml status -t \
  bowtie2_index/Mycoplasma_genitalium_index-download --even
[uap] Set log level to ERROR
output_directory: genomes/bacteria/Mycoplasma_genitalium/bowtie2_index/
                  Mycoplasma_genitalium_index-download-cMQPtBxs
output_files:
  out/bowtie_index:
    Mycoplasma_genitalium_index-download.1.bt2: &id001
    - genomes/bacteria/Mycoplasma_genitalium/Mycoplasma_genitalium.ASM2732v1.fa
    Mycoplasma_genitalium_index-download.2.bt2: *id001
    Mycoplasma_genitalium_index-download.3.bt2: *id001
    Mycoplasma_genitalium_index-download.4.bt2: *id001
    Mycoplasma_genitalium_index-download.rev.1.bt2: *id001
    Mycoplasma_genitalium_index-download.rev.2.bt2: *id001
private_info: {}
public_info: {}
run_id: Mycoplasma_genitalium_index-download
state: FINISHED

This data structure is called the “run info” of a certain run and it represents a kind of plan which includes information about which output files will be generated and which input files they depend on – this is stored in output_files.

Source steps can be viewed separately by specifying --sources:

$ uap <project-config>.yaml status --sources
[uap] Set log level to ERROR
M_genitalium_genome/download
task-info

The task-info subcommand writes the commands which were or will be executed to the terminal, in the form of a semi-functional Bash script. Semi-functional means that, at the moment, output redirections for some commands are missing from the script. Also included is the run info already described for the status subcommand.

An example output showing the download of the Mycoplasma genitalium genome:

$ uap index_mycoplasma_genitalium_ASM2732v1_genome.yaml task-info -t \
  M_genitalium_genome/download --even

  [uap] Set log level to ERROR
  #!/usr/bin/env bash

  # M_genitalium_genome/download -- Report
  # ======================================
  #
  # output_directory: genomes/bacteria/Mycoplasma_genitalium/M_genitalium_genome/download-5dych7Xj
  # output_files:
  #   out/raw:
  #     genomes/bacteria/Mycoplasma_genitalium/Mycoplasma_genitalium.ASM2732v1.fa: []
  # private_info: {}
  # public_info: {}
  # run_id: download
  # state: FINISHED
  #
  # M_genitalium_genome/download -- Commands
  # ========================================

  # 1. Group of Commands -- 1. Command
  # ----------------------------------

  curl ftp://ftp.ncbi.nih.gov/genomes/genbank/bacteria/Mycoplasma_genitalium/latest_assembly_versions/GCA_000027325.1_ASM2732v1/GCA_000027325.1_ASM2732v1_genomic.fna.gz

  # 2. Group of Commands -- 1. Command
  # ----------------------------------

  ../tools/compare_secure_hashes.py --algorithm md5 --secure-hash a3e6e5655e4996dc2d49f876be9d1c27 genomes/bacteria/Mycoplasma_genitalium/M_genitalium_genome/download-5dych7Xj/L9PXBmbPKlemghJGNM97JwVuzMdGCA_000027325.1_ASM2732v1_genomic.fna.gz

  # 3. Group of Commands -- 1. Pipeline
  # -----------------------------------

  pigz --decompress --stdout --processes 1 genomes/bacteria/Mycoplasma_genitalium/M_genitalium_genome/download-5dych7Xj/L9PXBmbPKlemghJGNM97JwVuzMdGCA_000027325.1_ASM2732v1_genomic.fna.gz | dd bs=4M of=/home/hubert/develop/uap/example-configurations/genomes/bacteria/Mycoplasma_genitalium/Mycoplasma_genitalium.ASM2732v1.fa

This subcommand allows the user to run parts of the analysis manually, without uap, and to investigate causes of failure.

run-locally

The run-locally subcommand runs all non-finished runs (or a subset) sequentially on the local machine. Feel free to cancel this script at any time; it won't put your project in an unstable state. However, if the run-locally subcommand receives a SIGKILL signal, the currently executing job will continue to run, and the corresponding run will be reported as executing by the status subcommand for five more minutes (SIGTERM should be fine and exit gracefully, but doesn't just yet). After that time, you will be warned that a job is marked as being currently run but no activity has been seen for a while, along with further instructions about what to do in such a case (don't worry, it shouldn't happen by accident).

To execute one or more specific runs, specify the run IDs on the command line. To execute all runs of a certain step, specify the step name on the command line.
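
For example, to run all tasks of the cutadapt step, or just one specific task (names as shown in the status examples above):

$ uap <project-config>.yaml run-locally cutadapt
$ uap <project-config>.yaml run-locally cutadapt/Sample_COPD_2023-R1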

This subcommand's usage information:

$ uap <project-config>.yaml run-locally -h
usage: uap [<project-config>.yaml] run-locally [-h] [--even-if-dirty]
                                             [step_task [step_task ...]]

This command  starts 'uap' on the local machine. It can be used to start:
* all tasks of the pipeline as configured in <project-config>.yaml
* all tasks defined by a specific step in <project-config>.yaml
* one or more steps
To start the complete pipeline as configured in <project-config>.yaml execute:
  $ uap <project-config>.yaml run-locally
To start a specific step execute:
  $ uap <project-config>.yaml run-locally <step_name>
To start a specific task execute:
  $ uap <project-config>.yaml run-locally <step_name/run_id>
The step_name is the name of an entry in the 'steps:' section as defined in
'<project-config>.yaml'. A specific task is defined via its task ID
'step_name/run_id'. A list of all task IDs is returned by running:
  $ uap <project-config>.yaml status

positional arguments:
  step_task        Can take multiple step names as input. A step name is the
                   name of any entry in the 'steps:' section as defined in
                   '<config>.yaml'. A list of all task IDs is returned by running:
                     $ uap <project-config>.yaml status.

optional arguments:
  -h, --help       show this help message and exit
  --even-if-dirty  Must be set if the local git repository contains uncommited
                   changes. Otherwise the pipeline will not start.

Note

Why is it safe to cancel the pipeline? The pipeline is written in a way which expects processes to fail or cluster jobs to disappear without notice. This problem is mitigated by a design which relies on file presence and file timestamps to determine whether a run is finished or not. Output files are automatically written to temporary locations and later moved to their real target directory, and it is not until the last file rename operation has finished that a run is regarded as finished.

submit-to-cluster

The submit-to-cluster subcommand determines which runs still have to be carried out and which supported cluster engine is available. If a cluster engine has been found, it submits the jobs to the cluster. Dependencies are passed to the cluster engine in a way that jobs that depend on other jobs won't get scheduled until their dependencies have been satisfied. The files qsub-template.sh and sbatch-template.sh are used to submit jobs. Fields with #{ } are substituted with appropriate values. Each submitted job calls uap with the run-locally subcommand on the cluster nodes.

The file quotas.yaml can be used to define different quotas for different systems:

"frontend[12]":
    default: 5
    cutadapt: 100

In the example above, a default quota of 5 is defined for hosts with a hostname of frontend1 or frontend2 (the name is a regular expression). A quota of 5 means that no more than 5 jobs of one kind will be run in parallel. Different quotas can be defined for each step: because cutadapt is highly I/O-efficient, it has a higher quota.

This subcommand provides usage information:

$ uap <project-config>.yaml submit-to-cluster -h
usage: uap [<project-config>.yaml] submit-to-cluster [-h] [--even-if-dirty]
                                                     [--cluster CLUSTER]
                                                     [step_task [step_task ...]]

This script submits all tasks configured in <project-config>.yaml to a
SGE/OGE/UGE or SLURM cluster. The list of tasks can be narrowed down by
specifying a step name (in which case all runs of this steps will be considered)
or individual tasks (step_name/run_id).

positional arguments:
  step_task          Can take multiple step names as input. A step name is
                     the name of any entry in the 'steps:' section as defined
                     in '<project-config>.yaml'. A list of all task IDs is
                     returned by running:
                       $ uap <project-config>.yaml status

  optional arguments:
    -h, --help         show this help message and exit
    --even-if-dirty    Must be set if the local git repository contains
                       uncommited changes. Otherwise the pipeline will not
                       start.
    --cluster CLUSTER  Specify the cluster type. Choices: [auto, sge, slurm].
                       Default: [auto].
fix-problems

The fix-problems subcommand removes temporary files written by uap if they are not required anymore. This subcommand provides usage information:

$ uap <project-config>.yaml fix-problems -h
usage: uap [<project-config>.yaml] fix-problems [-h] [--even-if-dirty]
                                                [--cluster CLUSTER]
                                                [--details] [--srsly]

optional arguments:
  -h, --help         show this help message and exit
  --even-if-dirty    Must be set if the local git repository contains
                     uncommited changes. Otherwise the pipeline will not start.
  --cluster CLUSTER  Specify the cluster type (sge, slurm), defaults to auto.
  --details          Displays information about problematic files which need
                     to be deleted to fix problem.
  --srsly            Deletes problematic files.

uap writes temporary files to indicate whether a job is queued or executing. Sometimes (especially on a compute cluster) jobs fail without even starting uap. This leaves behind the temporary file written on job submission, indicating that a run is queued on the cluster although no process exists (because it already failed). The status subcommand will inform the user if fix-problems needs to be executed to clean up the mess. The hint given by status looks like this:

Warning: There are 10 tasks marked as queued, but they do not seem to be queued
Hint: Run 'uap <project-config>.yaml fix-problems --details' to see the details.
Hint: Run 'uap <project-config>.yaml fix-problems --srsly' to fix these problems
      (that is, delete all problematic ping files).

Be nice and do as you're told. Now you are able to resubmit your runs to the cluster. You've fixed the problem, haven't you?

volatilize

The volatilize subcommand is useful to reduce the required disk space of your analysis. It works only in conjunction with the _volatile keyword set in the Configuration File. As already mentioned there, steps marked as _volatile compute their output files as normal, but these files can be deleted once all dependent steps are finished.

This subcommand provides usage information:

$ uap <project-config>.yaml volatilize -h
usage: uap [<project-config>.yaml] volatilize [-h] [--even-if-dirty]
                                              [--details] [--srsly]

Save disk space by volatilizing intermediate results. Only steps marked with
'_volatile: True' are considered.

optional arguments:
  -h, --help       show this help message and exit
  --even-if-dirty  Must be set if the local git repository contains uncommited
                   changes. Otherwise the pipeline will not start.
  --details        Shows which files can be volatilized.
  --srsly          Replaces files marked for volatilization with a placeholder.

After running volatilize --srsly the output files of the volatilized step are replaced by placeholder files. The placeholder files have the same name as the original files suffixed with .volatile.placeholder.yaml.
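
A typical sequence is to first inspect which files would be affected and only then purge them:

$ uap <project-config>.yaml volatilize --details
$ uap <project-config>.yaml volatilize --srsly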

render

The render subcommand generates graphs using graphviz. The graphs show either the complete analysis or the execution of a single run. At the moment --simple only has an effect in combination with --steps.

This subcommand provides usage information:

$ uap <project-config>.yaml render -h

usage: uap [<project-config>.yaml] render [-h] [--even-if-dirty] [--files]
                                          [--steps] [--simple]
                                          [step_task [step_task ...]]

'render' generates DOT-graphs. Without arguments it takes the log file of
each task and generates a graph, showing details of the computation.

positional arguments:
  step_task        Displays only the named task IDs. Can take multiple task
                   ID(s) as input. A task ID looks like this
                   'step_name/run_id'. A list of all task IDs is returned by
                   running 'uap <project-config>.yaml status'.

optional arguments:
  -h, --help       show this help message and exit
  --even-if-dirty  Must be set if the local git repository contains
                   uncommited changes. Otherwise the pipeline will not start.
  --files          Renders a graph showing all files of the analysis.
                   [Not implemented yet!]
  --steps          Renders a graph showing all steps of the analysis and their
                   connections.
  --simple         Rendered graphs are simplified.
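
For example, a simplified graph of all steps and their connections can be rendered with:

$ uap <project-config>.yaml render --steps --simple
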
report

The report subcommand is experimental at the moment. It might be used to create standardized output to enable easy loading and processing in downstream tools, e.g. R.

Extending uap Functionality

Implement new steps

uap can easily be extended by implementing new source or processing steps. For this, basic Python programming skills are necessary. New steps are added to uap by placing a single Python file in:

include/sources
for source steps
include/steps
for processing steps.

Let's go through such a file step by step.

Step 1: Import Statements and Logger

Please organize your imports in a similar fashion as shown below. Essential imports are from logging import getLogger and from abstract_step import .... The former is necessary for getting access to the application wide logger and the latter to inherit either from AbstractStep or AbstractSourceStep.

# First import standard libraries
import os
from logging import getLogger

# Secondly import third party libraries
import yaml

# Thirdly import local application files
from abstract_step import AbstractStep # or AbstractSourceStep

# Get application wide logger
logger = getLogger("uap_logger")

Step 2: Class Definition and Constructor

Each step has to define a class with a constructor and a single function. The new class needs to be derived from either AbstractStep, for processing steps, or AbstractSourceStep, for source steps. The __init__ method should contain the declarations of:

  • tools used in the step: self.require_tool('tool_name')
  • input and output connection(s): self.add_connection('in/*') or self.add_connection('out/*')
  • options: self.add_option()
Tools:
Normally, steps use tools to perform their task. Each tool that is going to be used by a step needs to be requested via the method require_tool('tool_name'). When the step is executed, uap searches for tool_name in the tools section of the configuration and uses the information given there to verify the tool's accessibility.
Connections:
They are defined by the method add_connection(...). Information is transferred from one step to another via these connections. An output connection (out/something) of a predecessor step is automatically connected with an input connection of the same name (in/something). Connection names should describe the data itself, NOT the data type. For instance, use in/genome rather than in/fasta. The data type of the input data should be checked in the step anyway, in order to execute the correct corresponding commands.
Options:
Steps can have any number of options. Options are defined by the method add_option(). There are a bunch of parameters which can be set to specify the option.
# Either inherit from AbstractSourceStep or AbstractStep
# class NameOfNewSourceStep(AbstractSourceStep):
class NameOfNewProcessingStep(AbstractStep):
    # Overwrite constructor
    def __init__(self, pipeline):
        # Call super classes constructor
        super(NameOfNewProcessingStep, self).__init__(pipeline)

        # Define connections
        self.add_connection('in/some_incoming_data')
        self.add_connection('out/some_outgoing_data')

        # Request tools
        self.require_tool('cat')

        # Add options
        self.add_option('some_option', str, optional=False,
                        description='Mandatory option')

The single function runs is used to plan all jobs based on a list of input files or runs and possibly additional information from previous steps. The basic scaffold is shown below.

import sys
from abstract_step import *
import pipeline
import re
import process_pool
import yaml

class Macs14(AbstractStep):

    # the constructor
    def __init__(self, pipeline):
        super(Macs14, self).__init__(pipeline)

        # define in and out connections the strings have to start with 'in/'
        # or 'out/'
        self.add_connection('in/something')
        self.add_connection('out/tag1')
        self.add_connection('out/tag2')
        ...

        self.require_tool('cat4m')
        self.require_tool('pigz')
        ...

    # all checks of options and input values should be done here
    def setup_runs(self, complete_input_run_info, connection_info):
        # a hash containing information about this step
        output_run_info = {}

        # analyze the complete_input_run_info hash provided by the pipeline
        for step_name, step_input_info in complete_input_run_info.items():
            for input_run_id, input_run_info in step_input_info.items():
                # assemble your output_run_info;
                # it has to look like this:
                #
                # output_run_info:
                #     run_id_1:
                #         "output_files":
                #             tag1:
                #                 output_file_1: [input_file_1, input_file_2, ...]
                #                 output_file_2: [input_file_1, input_file_2, ...]
                #             tag2:
                #                 output_file_3: [input_file_1, input_file_2, ...]
                #                 output_file_4: [input_file_1, input_file_2, ...]
                #         "info":
                #             ...
                #         more:
                #             ...
                #         keys:
                #             ...
                #     run_id_2:
                #         ...
                pass
        return output_run_info

    # called to actually launch the job (run_info is the hash returned from
    # setup_runs)
    def execute(self, run_id, run_info):

        with process_pool.ProcessPool(self) as pool:
            with pool.Pipeline(pool) as pipeline:
                # assemble the steps pipline here
                pipeline.append(...)
                ...
                # finally launch it
                pool.launch(...)

The code shown above is the framework for a new step. The most essential part is the hash returned by setup_runs(), here called output_run_info.

run_id: It has to be the unique name of a run (obviously, because it is a key). output_run_info can contain multiple run_id hashes.
"output_files": This is the only hash key that has to have a fixed name. It is used to link input to output files.
tag[12]: Every tag has to match \w+$ in the string 'out/tag', which was given to self.add_connection('out/tag'). This can be any string, but it has to match the last part of the connection string.
output_file_\d: Each tag has to contain at least one such key. It has to be the name of the output file produced by the connection 'out/tag'. The value has to be a list of related input files. The list can have any number of entries, even zero. Multiple output_file_\d keys can rely on the same set of input files.

Also very important is to understand the concept of connections. They provide the input files that prior steps have already created. The names of the connections can be chosen arbitrarily, but they should not describe the file format; use more general terms instead. For example, an out/alignment connection can provide gzipped SAM or BAM files. So you have to check in setup_runs for the file type provided by a connection and react accordingly. Inspect complete_input_run_info to find out what your step gets as input.

uap tools

You will need to run Bash commands like cat, pigz, or others from Python. In such cases use the uap tool exec_group (see run::new_exec_group()).

For example, suppose you want to write all lines containing a specific string from a source file into a new output file and, in addition, copy that output file. A possible Bash way is:

$ cat source_file | grep search_string > output_file
$ cp output_file new_file

Of course, grep alone would be sufficient for this task, but for the example we want to use a pipe.

Now the uap way:

# create an new exec_group object
exec_group = run.new_exec_group()

# create an output file for the pipeline
cat_out = run.add_output_file(
    'file',
    '%s.txt' % (run_id),
    [input_path])

# create a command with cat and grep combined through pipe
with exec_group.add_pipeline() as cat_pipe:
    # create the cat command
    cat_command = [self.get_tool('cat'), input_path]

    # create the grep command
    search_string = 'foobar'
    grep_command = [self.get_tool('grep'), search_string]

    # add commands to the command pipeline
    cat_pipe.add_command(cat_command)
    cat_pipe.add_command(grep_command, stdout_path=cat_out)

# create a copy output file
cp_out = run.add_output_file(
    'file',
    '%s_copy.txt' % (run_id),
    [input_path])

# create copy command
cp_command = [self.get_tool('cp'), cat_out, cp_out]

# add copy command to the pipeline
exec_group.add_command(cp_command)

All the single commands will be collected and uap will execute the command list in the specified order.

Best practices

There are a couple of things which should be kept in mind when implementing new steps or modifying existing steps:

  • Make sure errors already show up while the runs are planned. So, look out for things that may fail at that stage. Stick to fail early, fail often. That way errors show up before jobs are submitted to the cluster, and wasting precious cluster waiting time is avoided.
  • Make sure that the tools you’ll need in runs are available. Check for the availability of tools within the constructor __init__.
# make sure tools are available
self.require_tool('pigz')
self.require_tool('cutadapt')
  • Make sure your disk access is as cluster-friendly as possible (which primarily means using large block sizes and preferably no seek operations). If possible, use unix_pipeline to wrap your commands in pigz, dd, or cat4m with a large block size like 4 MB. Although this is not possible in every case (for example when seeking in files is involved), it is straightforward with tools that read a continuous stream from stdin and write a continuous stream to stdout.
  • NEVER remove files! If files need to be removed report the issue and exit uap. Only the user should delete files.
  • Always use os.path.join(...) when you handle paths.
  • Use bash commands like mkfifo over python library equivalents like os.mkfifo()
  • If you need to decide between possible ways to implement a step, stick to the more flexible one (often the one requiring more configuration). You don't know what other users might need, so let them decide.

Add the new step to your configuration

To make a new step known to uap, it has to be copied into either of these folders:

include/sources/
for all source steps
include/steps/
for all processing steps

If the Python step file exists at the correct location, the step needs to be added to the YAML configuration file as described in Configuration File.
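
Assuming the new processing step was saved as, e.g., include/steps/name_of_new_processing_step.py (the file name is illustrative and should correspond to the step), it could then be referenced in the steps section like any built-in step:

steps:
    my_new_step (name_of_new_processing_step):
        _depends: some_previous_step
        some_option: 'some value'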

Software Design

uap is designed as a plugin architecture. The plugins are internally called steps. Two different types of steps exist: source steps and processing steps. Source steps are used to include data from outside the destination path (see Destination_path Section) into the analysis. Processing steps are blueprints, each describing a single data processing stage, and uap itself controls the ordered execution of these plugged-in steps. Steps are organized in a dependency graph (a directed acyclic graph) – every step may have one or more parent steps, which may in turn have other parent steps, and so on. Steps without parents are usually sources which provide source files, for example FASTQ files with the raw sequences obtained from the sequencer, genome sequence databases or annotation tracks.

Each step defines a number of runs and each run represents a piece of the entire data analysis, typically at the level of a single sample. A certain run of a certain step is called a task. While the steps only describe what needs to be done on a very abstract level, it is through the individual runs of each step that a uap wide list of actual tasks becomes available. Each run may provide a number of output files which depend on output files of one or several runs from parent steps.

To make the relationship between tasks, steps and runs more clear, we look at one example from a configuration file:

The status request output of

uap index_mycoplasma_genitalium_ASM2732v1_genome.yaml status

is

Waiting tasks
-------------
[w] bowtie2_index/Mycoplasma_genitalium_index-download
[w] bwa_index/Mycoplasma_genitalium_index-download
[w] fasta_index/download
[w] segemehl_index/Mycoplasma_genitalium_genome-download

Ready tasks
-----------
[r] M_genitalium_genome/download

 tasks: 5 total, 4 waiting, 1 ready

Here, 5 tasks are listed. The first one is bowtie2_index/Mycoplasma_genitalium_index-download. The first part, bowtie2_index, is the step as defined in the configuration file; the second part, Mycoplasma_genitalium_index-download, is the specific run.

Source steps define a run for every input sample, and a subsequent step may:

  • define the same number of runs,
  • define more runs (for example when R1 and R2 reads in a paired-end RNASeq experiment should be treated separately),
  • define fewer runs (usually towards the end of a pipeline, where results are summarized).

Annotation Files

The annotation files contain detailed information about every output file. The Git SHA1 hash of the uap repository at the time of data processing is included, and the executed commands are listed. The annotation also contains information about inter-process streams and output files, including SHA1 checksums, file sizes, and line counts.

Upon successful completion of a task, an extensive YAML-formatted annotation is placed next to the output files in a file called .[task_id]-annotation.yaml. Also, for every output file, a symbolic link to this file is created: .[output_filename].annotation.yaml.

Finally, the annotation is rendered via GraphViz, if available. Rendering can also be done at a later time using annotations as input (see render). The annotation can be used to determine at a later time what exactly happened. Also, annotations may help to identify bottlenecks.
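
Since the annotations are plain YAML, they can also be inspected programmatically. A minimal sketch (the file name is a placeholder; the exact keys depend on the uap version):

import yaml

# Load an annotation written next to the output files of a finished task.
# Annotation files follow the naming pattern '.[task_id]-annotation.yaml'.
with open('.cutadapt-Sample_1-annotation.yaml') as f:
    annotation = yaml.safe_load(f)

# Print the top-level keys to get an overview of the recorded information.
print(sorted(annotation.keys()))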

_images/cutadapt.png

Annotation graph of a cutadapt run. CPU and RAM usage for individual processes are shown, file sizes and line counts are shown for output files and inter-process streams.

_images/cpu-starving.png

In this graph, it becomes evident that the fix_cutadapt.py process in the middle gets throttled by the following two pigz processes, which only run with one core each and therefore cannot compress the results fast enough.

Available steps

Source steps

bcl2fastq_source
Connections:
  • Output Connection:
    • ‘out/configureBcl2Fastq_log_stderr’
    • ‘out/make_log_stderr’
    • ‘out/sample_sheet’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   bcl2fastq_source [style=filled, fillcolor="#fce94f"];
   out_0 [label="configureBcl2Fastq_log_stderr"];
   bcl2fastq_source -> out_0;
   out_1 [label="make_log_stderr"];
   bcl2fastq_source -> out_1;
   out_2 [label="sample_sheet"];
   bcl2fastq_source -> out_2;
}

Options:
  • adapter-sequence (str, optional)
  • adapter-stringency (str, optional)
  • fastq-cluster-count (int, optional)
  • filter-dir (str, optional)
  • flowcell-id (str, optional)
  • ignore-missing-bcl (bool, optional)
  • ignore-missing-control (bool, optional)
  • ignore-missing-stats (bool, optional)
  • input-dir (str, required) – file URL
  • intensities-dir (str, optional)
  • mismatches (int, optional)
  • no-eamss (str, optional)
  • output-dir (str, optional)
  • positions-dir (str, optional)
  • positions-format (str, optional)
  • sample-sheet (str, required)
  • tiles (str, optional)
  • use-bases-mask (str, optional) – Conversion mask characters: Y or y: use; N or n: discard; I or i: use for indexing. If not given, the mask will be guessed from the RunInfo.xml file in the run folder. For instance, in a 2x76 indexed paired-end run, the mask Y76,I6n,y75n means: “use all 76 bases from the first end, discard the last base of the indexing read, and use only the first 75 bases of the second end”.
  • with-failed-reads (str, optional)

Required tools: configureBclToFastq.pl, make, mkdir, mv

This step provides input files which already exist and therefore creates no tasks in the pipeline.

fastq_source

The FastqSource class acts as a source for FASTQ files. This source creates a run for every sample.

Specify a file name pattern in pattern and define how sample names should be determined from file names by specifying a regular expression in group.

Sample index barcodes may be specified by providing the name of a CSV file containing the columns Sample_ID and Index, or directly by defining a dictionary which maps indices to sample names.

Connections:
  • Output Connection:
    • ‘out/first_read’
    • ‘out/second_read’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   fastq_source [style=filled, fillcolor="#fce94f"];
   out_0 [label="first_read"];
   fastq_source -> out_0;
   out_1 [label="second_read"];
   fastq_source -> out_1;
}

Options:
  • first_read (str, required) – Part of the file name that marks all files containing sequencing data of the first read. Example: ‘R1.fastq’ or ‘_1.fastq’
  • group (str, optional) – A regular expression which is applied to found files, and which is used to determine the sample name from the file name. For example, (Sample_\d+)_R[12].fastq.gz, when applied to a file called Sample_1_R1.fastq.gz, would result in a sample name of Sample_1. You can specify multiple capture groups in the regular expression.
  • indices (str/dict, optional) – path to a CSV file or a dictionary of sample_id: barcode entries.
  • paired_end (bool, required) – Specify whether the samples are paired end or not.
  • pattern (str, optional) – A file name pattern, for example /home/test/fastq/Sample_*.fastq.gz.
  • sample_id_prefix (str, optional) – This optional prefix is prepended to every sample name.
  • sample_to_files_map (dict/str, optional) – A listing of sample names and their associated files. This must be provided as a YAML dictionary.
  • second_read (str, required) – Part of the file name that marks all files containing sequencing data of the second read. Example: ‘R2.fastq’ or ‘_2.fastq’

This step provides input files which already exist and therefore creates no tasks in the pipeline.
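
To illustrate, a fastq_source entry in the analysis configuration might look like the following sketch. The steps: layout is assumed to follow the Configuration File section; the paths and the sample pattern are placeholders.

steps:
    fastq_source:
        pattern: /home/test/fastq/Sample_*.fastq.gz     # placeholder file name pattern
        group: (Sample_\d+)_R[12].fastq.gz              # capture group determines the sample name
        first_read: _R1.fastq
        second_read: _R2.fastq
        paired_end: true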

fetch_chrom_sizes_source
Connections:
  • Output Connection:
    • ‘out/chromosome_sizes’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   fetch_chrom_sizes_source [style=filled, fillcolor="#fce94f"];
   out_0 [label="chromosome_sizes"];
   fetch_chrom_sizes_source -> out_0;
}

Options:
  • path (str, required) – directory to move file to
  • ucsc-database (str, required) – Name of UCSC database e.g. hg38, mm9

Required tools: cp, fetchChromSizes

This step provides input files which already exist and therefore creates no tasks in the pipeline.
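
A minimal configuration sketch for this step (values are placeholders; the steps: layout follows the Configuration File section):

steps:
    fetch_chrom_sizes_source:
        ucsc-database: hg38            # UCSC genome assembly name
        path: /data/genomes/hg38       # placeholder target directory for the sizes file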

raw_file_source
Connections:
  • Output Connection:
    • ‘out/raw’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   raw_file_source [style=filled, fillcolor="#fce94f"];
   out_0 [label="raw"];
   raw_file_source -> out_0;
}

Options:
  • group (str, optional) – A regular expression which is applied to found files, and which is used to determine the sample name from the file name. For example, (Sample_\d+)_R[12].fastq.gz, when applied to a file called Sample_1_R1.fastq.gz, would result in a sample name of Sample_1. You can specify multiple capture groups in the regular expression.
  • pattern (str, optional) – A file name pattern, for example /home/test/fastq/Sample_*.fastq.gz.
  • sample_id_prefix (str, optional)
  • sample_to_files_map (dict/str, optional) – A listing of sample names and their associated files. This must be provided as a YAML dictionary.

This step provides input files which already exist and therefore creates no tasks in the pipeline.

raw_file_sources

The RawFileSources class acts as a temporary fix to get files into the pipeline. This source creates a run for every sample.

Specify a file name pattern in pattern and define how sample names should be determined from file names by specifying a regular expression in group.

Connections:
  • Output Connection:
    • ‘out/raws’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   raw_file_sources [style=filled, fillcolor="#fce94f"];
   out_0 [label="raws"];
   raw_file_sources -> out_0;
}

Options:
  • group (str, required) – A regular expression which is applied to found files, and which is used to determine the sample name from the file name. For example, (Sample_\d+)_R[12].fastq.gz, when applied to a file called Sample_1_R1.fastq.gz, would result in a sample name of Sample_1. You can specify multiple capture groups in the regular expression.
  • paired_end (bool, required) – Specify whether the samples are paired end or not.
  • pattern (str, required) – A file name pattern, for example /home/test/fastq/Sample_*.fastq.gz.
  • sample_id_prefix (str, optional) – This optional prefix is prepended to every sample name.

This step provides input files which already exist and therefore creates no tasks in the pipeline.

raw_url_source
Connections:
  • Output Connection:
    • ‘out/raw’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   raw_url_source [style=filled, fillcolor="#fce94f"];
   out_0 [label="raw"];
   raw_url_source -> out_0;
}

Options:
  • filename (str, optional) – local file name of downloaded file
  • hashing-algorithm (str, optional) – hashing algorithm to use
    • possible values: ‘md5’, ‘sha1’, ‘sha224’, ‘sha256’, ‘sha384’, ‘sha512’
  • path (str, required) – directory to move downloaded file to
  • secure-hash (str, optional) – expected secure hash of downloaded file
  • uncompress (bool, optional) – Shall the file be uncompressed after downloading
  • url (str, required) – file URL

Required tools: compare_secure_hashes, cp, curl, dd, mkdir, pigz

This step provides input files which already exist and therefore creates no tasks in the pipeline.
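
For example, downloading and uncompressing a reference file could be configured roughly as follows. This is a sketch only: URL, paths, and the checksum are placeholders, and the option names are taken from the list above.

steps:
    raw_url_source:
        url: http://example.org/genome.fa.gz          # placeholder URL
        path: /data/genomes                           # placeholder download directory
        filename: genome.fa
        uncompress: true
        hashing-algorithm: md5
        secure-hash: d41d8cd98f00b204e9800998ecf8427e # placeholder checksum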

run_folder_source

This source looks for fastq.gz files in [path]/Unaligned/Project_*/Sample_* and pulls additional information from CSV sample sheets it finds. It also makes sure that index information for all samples is coherent and unambiguous.

Connections:
  • Output Connection:
    • ‘out/first_read’
    • ‘out/second_read’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   run_folder_source [style=filled, fillcolor="#fce94f"];
   out_0 [label="first_read"];
   run_folder_source -> out_0;
   out_1 [label="second_read"];
   run_folder_source -> out_1;
}

Options:
  • first_read (str, required) – Part of the file name that marks all files containing sequencing data of the first read. Example: ‘_R1.fastq’ or ‘_1.fastq’
  • paired_end (bool, required)
  • path (str, required)
  • project (str, required)
  • second_read (str, required) – Part of the file name that marks all files containing sequencing data of the second read. Example: ‘R2.fastq’ or ‘_2.fastq’

This step provides input files which already exist and therefore creates no tasks in the pipeline.
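
A sketch of how this source might be configured; the run-folder path and project name are placeholders, and the exact layout is defined in the Configuration File section.

steps:
    run_folder_source:
        path: /data/runs/150101_FLOWCELL_XYZ    # placeholder run folder
        project: Project_Example                # placeholder project name
        paired_end: true
        first_read: _R1.fastq
        second_read: _R2.fastq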

Processing steps

bam_to_genome_browser
Connections:
  • Input Connection:
    • ‘in/alignments’
  • Output Connection:
    • ‘out/alignments’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   bam_to_genome_browser [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> bam_to_genome_browser;
   out_1 [label="alignments"];
   bam_to_genome_browser -> out_1;
}

Options:
  • bedtools-bamtobed-color (str, optional)
  • bedtools-bamtobed-tag (str, optional)
  • bedtools-genomecov-3 (bool, optional)
  • bedtools-genomecov-5 (bool, optional)
  • bedtools-genomecov-max (int, optional)
  • bedtools-genomecov-report-zero-coverage (bool, required)
  • bedtools-genomecov-scale (float, optional)
  • bedtools-genomecov-split (bool, required)
  • bedtools-genomecov-strand (str, optional)
    • possible values: ‘+’, ‘-’
  • chromosome-sizes (str, required)
  • output-format (str, required)
    • possible values: ‘bed’, ‘bigBed’, ‘bedGraph’, ‘bigWig’
  • trackline (dict, optional)
  • trackopts (dict, optional)

Required tools: bedGraphToBigWig, bedToBigBed, bedtools, dd, mkfifo, pigz

CPU Cores: 8

bowtie2

Bowtie2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes. Bowtie 2 indexes the genome with an FM Index to keep its memory footprint small: for the human genome, its memory footprint is typically around 3.2 GB. Bowtie 2 supports gapped, local, and paired-end alignment modes.

http://bowtie-bio.sourceforge.net/bowtie2/index.shtml

typical command line:

bowtie2 [options]* -x <bt2-idx> {-1 <m1> -2 <m2> | -U <r>} -S [<hit>]
Connections:
  • Input Connection:
    • ‘in/first_read’
    • ‘in/second_read’
  • Output Connection:
    • ‘out/alignments’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   bowtie2 [style=filled, fillcolor="#fce94f"];
   in_0 [label="first_read"];
   in_0 -> bowtie2;
   in_1 [label="second_read"];
   in_1 -> bowtie2;
   out_2 [label="alignments"];
   bowtie2 -> out_2;
}

Options:
  • index (str, required) – Path to bowtie2 index (not containing file suffixes).

Required tools: bowtie2, dd, mkfifo, pigz

CPU Cores: 6
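
A configuration sketch chaining bowtie2 to an upstream read source; the _depends key and steps: layout are assumed from the Configuration File section, and the index path is a placeholder.

steps:
    bowtie2:
        _depends: cutadapt                        # placeholder; any step providing first_read/second_read
        index: /data/indices/hg38/bowtie2/hg38    # placeholder index basename (no file suffixes)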

bowtie2_generate_index

bowtie2-build builds a Bowtie index from a set of DNA sequences. bowtie2-build outputs a set of 6 files with suffixes .1.bt2, .2.bt2, .3.bt2, .4.bt2, .rev.1.bt2, and .rev.2.bt2. In the case of a large index these suffixes will have a bt2l termination. These files together constitute the index: they are all that is needed to align reads to that reference. The original sequence FASTA files are no longer used by Bowtie 2 once the index is built.

http://bowtie-bio.sourceforge.net/bowtie2/manual.shtml#the-bowtie2-build-indexer

typical command line:

bowtie2-build [options]* <reference_in> <bt2_index_base>
Connections:
  • Input Connection:
    • ‘in/reference_sequence’
  • Output Connection:
    • ‘out/bowtie_index’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   bowtie2_generate_index [style=filled, fillcolor="#fce94f"];
   in_0 [label="reference_sequence"];
   in_0 -> bowtie2_generate_index;
   out_1 [label="bowtie_index"];
   bowtie2_generate_index -> out_1;
}

Options:
  • bmax (int, optional) – The maximum number of suffixes allowed in a block. Allowing more suffixes per block makes indexing faster, but increases peak memory usage. Setting this option overrides any previous setting for --bmax, or --bmaxdivn. Default (in terms of the --bmaxdivn parameter) is --bmaxdivn 4. This is configured automatically by default; use -a/--noauto to configure manually.
  • bmaxdivn (int, optional) – The maximum number of suffixes allowed in a block, expressed as a fraction of the length of the reference. Setting this option overrides any previous setting for --bmax, or --bmaxdivn. Default: --bmaxdivn 4. This is configured automatically by default; use -a/--noauto to configure manually.
  • cutoff (int, optional) – Index only the first <int> bases of the reference sequences (cumulative across sequences) and ignore the rest.
  • dcv (int, optional) – Use <int> as the period for the difference-cover sample. A larger period yields less memory overhead, but may make suffix sorting slower, especially if repeats are present. Must be a power of 2 no greater than 4096. Default: 1024. This is configured automatically by default; use -a/--noauto to configure manually.
  • ftabchars (int, optional) – The ftab is the lookup table used to calculate an initial Burrows-Wheeler range with respect to the first <int> characters of the query. A larger <int> yields a larger lookup table but faster query times. The ftab has size 4^(<int>+1) bytes. The default setting is 10 (ftab is 4MB).
  • index-basename (str, required) – Base name used for the bowtie2 index.
  • large-index (bool, optional) – Force bowtie2-build to build a large index, even if the reference is less than ~ 4 billion nucleotides long.
  • noauto (bool, optional) – Disable the default behavior whereby bowtie2-build automatically selects values for the --bmax, --dcv and --packed parameters according to available memory. Instead, user may specify values for those parameters. If memory is exhausted during indexing, an error message will be printed; it is up to the user to try new parameters.
  • nodc (bool, optional) – Disable use of the difference-cover sample. Suffix sorting becomes quadratic-time in the worst case (where the worst case is an extremely repetitive reference). Default: off.
  • offrate (int, optional) – To map alignments back to positions on the reference sequences, it’s necessary to annotate (‘mark’) some or all of the Burrows-Wheeler rows with their corresponding location on the genome. -o/--offrate governs how many rows get marked: the indexer will mark every 2^<int> rows. Marking more rows makes reference-position lookups faster, but requires more memory to hold the annotations at runtime. The default is 5 (every 32nd row is marked; for human genome, annotations occupy about 340 megabytes).
  • packed (bool, optional) – Use a packed (2-bits-per-nucleotide) representation for DNA strings. This saves memory but makes indexing 2-3 times slower. Default: off. This is configured automatically by default; use -a/--noauto to configure manually.
  • seed (int, optional) – Use <int> as the seed for pseudo-random number generator.

Required tools: bowtie2-build, dd, pigz

CPU Cores: 6
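
For illustration, building an index from a downloaded reference might be sketched as below; the upstream step name is a placeholder and must provide the in/reference_sequence connection.

steps:
    bowtie2_generate_index:
        _depends: raw_url_source    # placeholder; upstream step must provide in/reference_sequence
        index-basename: hg38        # placeholder basename for the generated index files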

bwa_backtrack

bwa-backtrack is the bwa algorithm designed for Illumina sequence reads up to 100bp. The computation of the alignments is done by running ‘bwa aln’ first, to align the reads, followed by running ‘bwa samse’ or ‘bwa sampe’ afterwards to generate the final SAM output.

http://bio-bwa.sourceforge.net/

typical command line for single-end data:

bwa aln <bwa-index> <first-read.fastq> > <first-read.sai>
bwa samse <bwa-index> <first-read.sai> <first-read.fastq> > <sam-output>

typical command line for paired-end data:

bwa aln <bwa-index> <first-read.fastq> > <first-read.sai>
bwa aln <bwa-index> <second-read.fastq> > <second-read.sai>
bwa sampe <bwa-index> <first-read.sai> <second-read.sai> <first-read.fastq> <second-read.fastq> > <sam-output>
Connections:
  • Input Connection:
    • ‘in/first_read’
    • ‘in/second_read’
  • Output Connection:
    • ‘out/alignments’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   bwa_backtrack [style=filled, fillcolor="#fce94f"];
   in_0 [label="first_read"];
   in_0 -> bwa_backtrack;
   in_1 [label="second_read"];
   in_1 -> bwa_backtrack;
   out_2 [label="alignments"];
   bwa_backtrack -> out_2;
}

Options:
  • aln-0 (bool, optional) – When aln-b is specified, only use single-end reads in mapping.

  • aln-1 (bool, optional) – When aln-b is specified, only use the first read in a read pair in mapping (skip single-end reads and the second reads).

  • aln-2 (bool, optional) – When aln-b is specified, only use the second read in a read pair in mapping.

  • aln-B (int, optional) – Length of barcode starting from the 5’-end. When INT is positive, the barcode of each read will be trimmed before mapping and will be written at the BC SAM tag. For paired-end reads, the barcode from both ends are concatenated. [0]

  • aln-E (int, optional) – Gap extension penalty [4]

  • aln-I (bool, optional) – The input is in the Illumina 1.3+ read format (quality equals ASCII-64).

  • aln-M (int, optional) – Mismatch penalty. BWA will not search for suboptimal hits with a score lower than (bestScore-misMsc). [3]

  • aln-N (bool, optional) – Disable iterative search. All hits with no more than maxDiff differences will be found. This mode is much slower than the default.

  • aln-O (int, optional) – Gap open penalty [11]

  • aln-R (int, optional) – Proceed with suboptimal alignments if there are no more than INT equally best hits. This option only affects paired-end mapping. Increasing this threshold helps to improve the pairing accuracy at the cost of speed, especially for short reads (~32bp).

  • aln-b (bool, optional) – Specify that the input read sequence file is in the BAM format. For paired-end data, two ends in a pair must be grouped together and options aln-1 or aln-2 are usually applied to specify which end should be mapped. Typical command lines for mapping paired-end data in the BAM format are:

    bwa aln ref.fa -b1 reads.bam > 1.sai
    bwa aln ref.fa -b2 reads.bam > 2.sai
    bwa sampe ref.fa 1.sai 2.sai reads.bam reads.bam > aln.sam
    
  • aln-c (bool, optional) – Reverse query but not complement it, which is required for alignment in the color space. (Disabled since 0.6.x)

  • aln-d (int, optional) – Disallow a long deletion within INT bp towards the 3’-end [16]

  • aln-e (int, optional) – Maximum number of gap extensions, -1 for k-difference mode (disallowing long gaps) [-1]

  • aln-i (int, optional) – Disallow an indel within INT bp towards the ends [5]

  • aln-k (int, optional) – Maximum edit distance in the seed [2]

  • aln-l (int, optional) – Take the first INT subsequence as seed. If INT is larger than the query sequence, seeding will be disabled. For long reads, this option is typically ranged from 25 to 35 for ‘-k 2’. [inf]

  • aln-n (float, optional) – Maximum edit distance if the value is INT, or the fraction of missing alignments given 2% uniform base error rate if FLOAT. In the latter case, the maximum edit distance is automatically chosen for different read lengths. [0.04]

  • aln-o (int, optional) – Maximum number of gap opens [1]

  • aln-q (int, optional) – Parameter for read trimming. BWA trims a read down to argmax_x{sum_{i=x+1}^l(INT-q_i)} if q_l<INT where l is the original read length. [0]

  • aln-t (int, optional) – Number of threads (multi-threading mode) [1]

  • index (str, required) – Path to BWA index

  • sampe-N (int, optional) – Maximum number of alignments to output in the XA tag for reads paired properly. If a read has more than INT hits, the XA tag will not be written. [3]

  • sampe-P (bool, optional) – Load the entire FM-index into memory to reduce disk operations (base-space reads only). With this option, at least 1.25N bytes of memory are required, where N is the length of the genome.

  • sampe-a (int, optional) – Maximum insert size for a read pair to be considered being mapped properly. Since 0.4.5, this option is only used when there are not enough good alignment to infer the distribution of insert sizes. [500]

  • sampe-n (int, optional) – Maximum number of alignments to output in the XA tag for reads paired properly. If a read has more than INT hits, the XA tag will not be written. [3]

  • sampe-o (int, optional) – Maximum occurrences of a read for pairing. A read with more occurrences will be treated as a single-end read. Reducing this parameter helps faster pairing. [100000]

  • sampe-r (str, optional) – Specify the read group in a format like ‘@RG\tID:foo\tSM:bar’. [null]

  • samse-n (int, optional) – Maximum number of alignments to output in the XA tag for reads paired properly. If a read has more than INT hits, the XA tag will not be written. [3]

  • samse-r (str, optional) – Specify the read group in a format like ‘@RG\tID:foo\tSM:bar’. [null]

Required tools: bwa, dd, mkfifo, pigz

CPU Cores: 8

bwa_generate_index

This step generates the index database from sequences in the FASTA format.

Typical command line:

bwa index -p <index-basename> <sequence.fasta>
Connections:
  • Input Connection:
    • ‘in/reference_sequence’
  • Output Connection:
    • ‘out/bwa_index’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   bwa_generate_index [style=filled, fillcolor="#fce94f"];
   in_0 [label="reference_sequence"];
   in_0 -> bwa_generate_index;
   out_1 [label="bwa_index"];
   bwa_generate_index -> out_1;
}

Options:
  • index-basename (str, required) – Prefix of the created index database

Required tools: bwa, dd, pigz

CPU Cores: 6

bwa_mem

Align 70bp-1Mbp query sequences with the BWA-MEM algorithm. Briefly, the algorithm works by seeding alignments with maximal exact matches (MEMs) and then extending seeds with the affine-gap Smith-Waterman algorithm (SW).

http://bio-bwa.sourceforge.net/bwa.shtml

Typical command line:

bwa mem [options] <bwa-index> <first-read.fastq> [<second-read.fastq>] > <sam-output>
Connections:
  • Input Connection:
    • ‘in/first_read’
    • ‘in/second_read’
  • Output Connection:
    • ‘out/alignments’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   bwa_mem [style=filled, fillcolor="#fce94f"];
   in_0 [label="first_read"];
   in_0 -> bwa_mem;
   in_1 [label="second_read"];
   in_1 -> bwa_mem;
   out_2 [label="alignments"];
   bwa_mem -> out_2;
}

Options:
  • A (int, optional) – score for a sequence match, which scales options -TdBOELU unless overridden [1]

  • B (int, optional) – penalty for a mismatch [4]

  • C (bool, optional) – append FASTA/FASTQ comment to SAM output

  • D (float, optional) – drop chains shorter than FLOAT fraction of the longest overlapping chain [0.50]

  • E (str, optional) – gap extension penalty; a gap of size k cost ‘{-O} + {-E}*k’ [1,1]

  • H (str, optional) – insert STR to header if it starts with @; or insert lines in FILE [null]

  • L (str, optional) – penalty for 5’- and 3’-end clipping [5,5]

  • M (str, optional) – mark shorter split hits as secondary

  • O (str, optional) – gap open penalties for deletions and insertions [6,6]

  • P (bool, optional) – skip pairing; mate rescue performed unless -S also in use

  • R (str, optional) – read group header line such as ‘@RG\tID:foo\tSM:bar’ [null]

  • S (bool, optional) – skip mate rescue

  • T (int, optional) – minimum score to output [30]

  • U (int, optional) – penalty for an unpaired read pair [17]

  • V (bool, optional) – output the reference FASTA header in the XR tag

  • W (int, optional) – discard a chain if seeded bases shorter than INT [0]

  • Y (str, optional) – use soft clipping for supplementary alignments

  • a (bool, optional) – output all alignments for SE or unpaired PE

  • c (int, optional) – skip seeds with more than INT occurrences [500]

  • d (int, optional) – off-diagonal X-dropoff [100]

  • e (bool, optional) – discard full-length exact matches

  • h (str, optional) – if there are <INT hits with score >80% of the max score, output all in XA [5,200]

  • index (str, required) – Path to BWA index

  • j (bool, optional) – treat ALT contigs as part of the primary assembly (i.e. ignore <idxbase>.alt file)

  • k (int, optional) – minimum seed length [19]

  • m (int, optional) – perform at most INT rounds of mate rescues for each read [50]

  • p (bool, optional) – smart pairing (ignoring in2.fq)

  • r (float, optional) – look for internal seeds inside a seed longer than {-k} * FLOAT [1.5]

  • t (int, optional) – number of threads [6]

  • v (int, optional) – verbose level: 1=error, 2=warning, 3=message, 4+=debugging [3]

  • w (int, optional) – band width for banded alignment [100]

  • x (str, optional) – read type. Setting -x changes multiple parameters unless overridden [null]

    • pacbio: -k17 -W40 -r10 -A1 -B1 -O1 -E1 -L0 (PacBio reads to ref)
    • ont2d: -k14 -W20 -r10 -A1 -B1 -O1 -E1 -L0 (Oxford Nanopore 2D-reads to ref)
    • intractg: -B9 -O16 -L5 (intra-species contigs to ref)

  • y (int, optional) – seed occurrence for the 3rd round seeding [20]

Required tools: bwa, dd, mkfifo, pigz

CPU Cores: 6
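
A possible configuration fragment, as a sketch only: the index path is a placeholder, and _depends names an upstream step that provides the read connections.

steps:
    bwa_mem:
        _depends: cutadapt                   # placeholder upstream read source
        index: /data/indices/hg38/bwa/hg38   # placeholder BWA index prefix
        t: 6                                 # threads, matching the default listed above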

chromhmm_binarizebam

This command converts coordinates of aligned reads into binarized data form from which a chromatin state model can be learned. The binarization is based on a Poisson background model. If no control data is specified, the parameter of the Poisson distribution is the global average number of reads per bin. If control data is specified, the global average number of reads is multiplied by the local enrichment for control reads as determined by the specified parameters. Optionally, intermediate signal files can also be output; these signal files can later be converted directly into binary form using the BinarizeSignal command.

Connections:
  • Input Connection:
    • ‘in/alignments’
  • Output Connection:
    • ‘out/alignments’
    • ‘out/metrics’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   chromhmm_binarizebam [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> chromhmm_binarizebam;
   out_1 [label="alignments"];
   chromhmm_binarizebam -> out_1;
   out_2 [label="metrics"];
   chromhmm_binarizebam -> out_2;
}

Options:
  • b (int, optional)
  • c (str, optional)
  • center (bool, optional)
  • chrom_sizes_file (str, required)
  • control (dict, required)
  • e (int, optional)
  • f (int, optional)
  • g (int, optional)
  • n (int, optional)
  • o (str, optional)
  • p (float, optional)
  • peaks (bool, optional)
  • s (int, optional)
  • strictthresh (bool, optional)
  • t (str, optional)
  • u (int, optional)
  • w (int, optional)

Required tools: ChromHMM, echo, ln

CPU Cores: 8

cutadapt

Cutadapt finds and removes adapter sequences, primers, poly-A tails and other types of unwanted sequence from your high-throughput sequencing reads.

https://cutadapt.readthedocs.org/en/stable/

Connections:
  • Input Connection:
    • ‘in/first_read’
    • ‘in/second_read’
  • Output Connection:
    • ‘out/first_read’
    • ‘out/log_first_read’
    • ‘out/log_second_read’
    • ‘out/second_read’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   cutadapt [style=filled, fillcolor="#fce94f"];
   in_0 [label="first_read"];
   in_0 -> cutadapt;
   in_1 [label="second_read"];
   in_1 -> cutadapt;
   out_2 [label="first_read"];
   cutadapt -> out_2;
   out_3 [label="log_first_read"];
   cutadapt -> out_3;
   out_4 [label="log_second_read"];
   cutadapt -> out_4;
   out_5 [label="second_read"];
   cutadapt -> out_5;
}

Options:
  • adapter-R1 (str, optional) – Adapter sequence to be clipped off of the first read.
  • adapter-R2 (str, optional) – Adapter sequence to be clipped off of the second read.
  • adapter-file (str, optional) – File containing adapter sequences to be clipped off of the reads.
  • adapter-type (str, optional)
    • possible values: ‘-a’, ‘-g’, ‘-b’
  • fix_qnames (bool, required) – If set to true, only the leftmost string without spaces of the QNAME field of the FASTQ data is kept. This might be necessary for downstream analysis.
  • use_reverse_complement (bool, required) – The reverse complements of the adapter sequences ‘adapter-R1’ and ‘adapter-R2’ are used for adapter clipping.

Required tools: cat, cutadapt, dd, fix_qnames, mkfifo, pigz

CPU Cores: 4
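
A hedged configuration sketch for adapter clipping; the adapter sequences are placeholders, and the required booleans are set explicitly.

steps:
    cutadapt:
        _depends: fastq_source        # placeholder upstream read source
        adapter-type: '-a'
        adapter-R1: AGATCGGAAGAGC     # placeholder adapter sequence, first read
        adapter-R2: AGATCGGAAGAGC     # placeholder adapter sequence, second read
        fix_qnames: false
        use_reverse_complement: false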

fastqc

The fastqc step is a wrapper for the fastqc tool. It generates some quality metrics for fastq files. For this specific instance only the zip archive is preserved.

http://www.bioinformatics.babraham.ac.uk/projects/fastqc/

Connections:
  • Input Connection:
    • ‘in/first_read’
    • ‘in/second_read’
  • Output Connection:
    • ‘out/first_read_fastqc_report’
    • ‘out/first_read_fastqc_report_webpage’
    • ‘out/first_read_log_stderr’
    • ‘out/second_read_fastqc_report’
    • ‘out/second_read_fastqc_report_webpage’
    • ‘out/second_read_log_stderr’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   fastqc [style=filled, fillcolor="#fce94f"];
   in_0 [label="first_read"];
   in_0 -> fastqc;
   in_1 [label="second_read"];
   in_1 -> fastqc;
   out_2 [label="first_read_fastqc_report"];
   fastqc -> out_2;
   out_3 [label="first_read_fastqc_report_webpage"];
   fastqc -> out_3;
   out_4 [label="first_read_log_stderr"];
   fastqc -> out_4;
   out_5 [label="second_read_fastqc_report"];
   fastqc -> out_5;
   out_6 [label="second_read_fastqc_report_webpage"];
   fastqc -> out_6;
   out_7 [label="second_read_log_stderr"];
   fastqc -> out_7;
}

Options: none

Required tools: fastqc, mkdir, mv

CPU Cores: 1

fastx_quality_stats

fastx_quality_stats generates a text file containing quality information of the input FASTQ data.

Documentation: http://hannonlab.cshl.edu/fastx_toolkit/

Connections:
  • Input Connection:
    • ‘in/first_read’
    • ‘in/second_read’
  • Output Connection:
    • ‘out/first_read_quality_stats’
    • ‘out/second_read_quality_stats’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   fastx_quality_stats [style=filled, fillcolor="#fce94f"];
   in_0 [label="first_read"];
   in_0 -> fastx_quality_stats;
   in_1 [label="second_read"];
   in_1 -> fastx_quality_stats;
   out_2 [label="first_read_quality_stats"];
   fastx_quality_stats -> out_2;
   out_3 [label="second_read_quality_stats"];
   fastx_quality_stats -> out_3;
}

Options:
  • new_output_format (bool, optional)
  • quality (int, optional)

Required tools: cat, dd, fastx_quality_stats, mkfifo, pigz

CPU Cores: 4

fix_cutadapt

This step takes FASTQ data and removes both reads of a read pair if one of them has been completely removed by cutadapt (or any other software).

Connections:
  • Input Connection:
    • ‘in/first_read’
    • ‘in/second_read’
  • Output Connection:
    • ‘out/first_read’
    • ‘out/second_read’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   fix_cutadapt [style=filled, fillcolor="#fce94f"];
   in_0 [label="first_read"];
   in_0 -> fix_cutadapt;
   in_1 [label="second_read"];
   in_1 -> fix_cutadapt;
   out_2 [label="first_read"];
   fix_cutadapt -> out_2;
   out_3 [label="second_read"];
   fix_cutadapt -> out_3;
}

Options: none

Required tools: cat, dd, fix_cutadapt, mkfifo, pigz

CPU Cores: 4
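
Since this step has no options, a configuration entry only needs to declare its dependency on the clipping step, for example (sketch):

steps:
    fix_cutadapt:
        _depends: cutadapt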

htseq_count

The htseq-count script counts the number of reads overlapping a feature. Input needs to be a file with aligned sequencing reads and a list of genomic features. For more information see: http://www-huber.embl.de/users/anders/HTSeq/doc/count.html

Connections:
  • Input Connection:
    • ‘in/alignments’
    • ‘in/features’
  • Output Connection:
    • ‘out/counts’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   htseq_count [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> htseq_count;
   in_1 [label="features"];
   in_1 -> htseq_count;
   out_2 [label="counts"];
   htseq_count -> out_2;
}

Options:
  • a (int, optional)
  • feature-file (str, optional)
  • idattr (str, optional)
  • mode (str, optional)
    • possible values: ‘union’, ‘intersection-strict’, ‘intersection-nonempty’
  • order (str, required)
    • possible values: ‘name’, ‘pos’
  • stranded (str, required)
    • possible values: ‘yes’, ‘no’, ‘reverse’
  • type (str, optional)

Required tools: dd, htseq-count, pigz, samtools

CPU Cores: 2
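
A sketch of a counting-step configuration; the annotation path is a placeholder, and ‘no’ is quoted so YAML keeps it as the string value expected by stranded.

steps:
    htseq_count:
        _depends: sam_to_sorted_bam               # placeholder upstream alignment step
        feature-file: /data/annotation/genes.gtf  # placeholder GTF/GFF annotation
        order: pos
        stranded: 'no'
        mode: union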

macs2

Model-based Analysis of ChIP-Seq (MACS) is an algorithm for the identification of transcription factor binding sites. MACS captures the influence of genome complexity to evaluate the significance of enriched ChIP regions, and MACS improves the spatial resolution of binding sites through combining the information of both sequencing tag position and orientation. MACS can be easily used for ChIP-Seq data alone, or with control sample data to increase the specificity.

https://github.com/taoliu/MACS

typical command line for single-end data:

macs2 callpeak --treatment <aligned-reads> [--control <aligned-reads>] --name <run-id> --gsize 2.7e9
Connections:
  • Input Connection:
    • ‘in/alignments’
  • Output Connection:
    • ‘out/broadpeaks’
    • ‘out/broadpeaks-xls’
    • ‘out/diagnosis’
    • ‘out/gappedpeaks’
    • ‘out/log’
    • ‘out/model’
    • ‘out/narrowpeaks’
    • ‘out/narrowpeaks-xls’
    • ‘out/summits’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   macs2 [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> macs2;
   out_1 [label="broadpeaks"];
   macs2 -> out_1;
   out_2 [label="broadpeaks-xls"];
   macs2 -> out_2;
   out_3 [label="diagnosis"];
   macs2 -> out_3;
   out_4 [label="gappedpeaks"];
   macs2 -> out_4;
   out_5 [label="log"];
   macs2 -> out_5;
   out_6 [label="model"];
   macs2 -> out_6;
   out_7 [label="narrowpeaks"];
   macs2 -> out_7;
   out_8 [label="narrowpeaks-xls"];
   macs2 -> out_8;
   out_9 [label="summits"];
   macs2 -> out_9;
}

Options:
  • broad (bool, optional)
  • broad-cutoff (float, optional)
  • buffer-size (int, optional)
  • call-summits (bool, optional)
  • control (dict, required)
  • down-sample (bool, optional)
  • format (str, required)
    • possible values: ‘AUTO’, ‘ELAND’, ‘ELANDMULTI’, ‘ELANDMULTIPET’, ‘ELANDEXPORT’, ‘BED’, ‘SAM’, ‘BAM’, ‘BAMPE’, ‘BOWTIE’
  • gsize (str, required)
  • keep-dup (int, optional)
  • llocal (str, optional)
  • pvalue (float, optional)
  • qvalue (float, optional)
  • read-length (int, optional)
  • shift (int, optional)
  • slocal (str, optional)
  • to-large (bool, optional)
  • verbose (int, optional)
    • possible values: ‘0’, ‘1’, ‘2’, ‘3’

Required tools: macs2, mkdir, mv, pigz

CPU Cores: 4
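
A peak-calling sketch using the gsize value from the typical command line above; the sample names in the control mapping are placeholders, and the exact form of the control dictionary is an assumption to be checked against your configuration.

steps:
    macs2:
        _depends: sam_to_sorted_bam               # placeholder upstream alignment step
        format: BAM
        gsize: '2.7e9'
        control:
            Treatment_Sample: Control_Sample      # placeholder treatment-to-control mapping (assumption)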

merge_fasta_files

This step merges all .fasta(.gz) files belonging to a certain sample. The output files are gzipped.

Connections:
  • Input Connection:
    • ‘in/sequence’
  • Output Connection:
    • ‘out/sequence’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   merge_fasta_files [style=filled, fillcolor="#fce94f"];
   in_0 [label="sequence"];
   in_0 -> merge_fasta_files;
   out_1 [label="sequence"];
   merge_fasta_files -> out_1;
}

Options:
  • compress-output (bool, optional) – If set to true output is gzipped.
  • output-fasta-basename (str, optional) – Name used as prefix for FASTA output.

Required tools: cat, dd, mkfifo, pigz

CPU Cores: 12

merge_fastq_files

This step merges all .fastq(.gz) files belonging to a certain sample. First and second read files are merged separately. The output files are gzipped.

Connections:
  • Input Connection:
    • ‘in/first_read’
    • ‘in/second_read’
  • Output Connection:
    • ‘out/first_read’
    • ‘out/second_read’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   merge_fastq_files [style=filled, fillcolor="#fce94f"];
   in_0 [label="first_read"];
   in_0 -> merge_fastq_files;
   in_1 [label="second_read"];
   in_1 -> merge_fastq_files;
   out_2 [label="first_read"];
   merge_fastq_files -> out_2;
   out_3 [label="second_read"];
   merge_fastq_files -> out_3;
}

Options: none

Required tools: cat, dd, mkfifo, pigz

CPU Cores: 12

narrowpeak_to_bed
Connections:
  • Input Connection:
    • ‘in/broadpeaks’
    • ‘in/narrowpeaks’
  • Output Connection:
    • ‘out/broadpeaks-bed’
    • ‘out/narrowpeaks-bed’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   narrowpeak_to_bed [style=filled, fillcolor="#fce94f"];
   in_0 [label="broadpeaks"];
   in_0 -> narrowpeak_to_bed;
   in_1 [label="narrowpeaks"];
   in_1 -> narrowpeak_to_bed;
   out_2 [label="broadpeaks-bed"];
   narrowpeak_to_bed -> out_2;
   out_3 [label="narrowpeaks-bed"];
   narrowpeak_to_bed -> out_3;
}

Options:
  • genome (str, required)
  • sort-by-name (bool, required)
  • temp-sort-directory (str, required) – Intermediate sort files are stored in this directory.

Required tools: bedClip, bedtools

CPU Cores: 8

picard_add_replace_read_groups

Replace read groups in a BAM file. This tool enables the user to replace all read groups in the INPUT file with a single new read group and assign all reads to this read group in the OUTPUT BAM file.

Documentation: https://broadinstitute.github.io/picard/command-line-overview.html#AddOrReplaceReadGroups

Connections:
  • Input Connection:
    • ‘in/alignments’
  • Output Connection:
    • ‘out/alignments’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   picard_add_replace_read_groups [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> picard_add_replace_read_groups;
   out_1 [label="alignments"];
   picard_add_replace_read_groups -> out_1;
}

Options:
  • COMPRESSION_LEVEL (int, optional) – Compression level for all compressed files created (e.g. BAM and GELI). Default value: 5. This option can be set to “null” to clear the default value.
  • CREATE_INDEX (bool, optional) – Whether to create a BAM index when writing a coordinate-sorted BAM file. Default value: false. This option can be set to “null” to clear the default value.
  • CREATE_MD5_FILE (bool, optional) – Whether to create an MD5 digest for any BAM or FASTQ files created. Default value: false. This option can be set to “null” to clear the default value.
  • GA4GH_CLIENT_SECRETS (str, optional) – Google Genomics API client_secrets.json file path. Default value: client_secrets.json. This option can be set to “null” to clear the default value.
  • MAX_RECORDS_IN_RAM (int, optional) – When writing SAM files that need to be sorted, this will specify the number of records stored in RAM before spilling to disk. Increasing this number reduces the number of file handles needed to sort a SAM file, and increases the amount of RAM needed. Default value: 500000. This option can be set to “null” to clear the default value.
  • QUIET (bool, optional) – Whether to suppress job-summary info on System.err. Default value: false. This option can be set to “null” to clear the default value.
  • REFERENCE_SEQUENCE (str, optional) – Reference sequence file. Default value: null.
  • RGCN (str, optional) – Read Group sequencing center name. Default value: null.
  • RGDS (str, optional) – Read Group description. Default value: null.
  • RGDT (str, optional) – Read Group run date. Default value: null.
  • RGID (str, optional) – Read Group ID. Default value: 1. This option can be set to ‘null’ to clear the default value.
  • RGLB (str, required) – Read Group library
  • RGPG (str, optional) – Read Group program group. Default value: null.
  • RGPI (int, optional) – Read Group predicted insert size. Default value: null.
  • RGPL (str, required) – Read Group platform (e.g. illumina, solid)
  • RGPM (str, optional) – Read Group platform model. Default value: null.
  • RGPU (str, required) – Read Group platform unit (eg. run barcode)
  • SORT_ORDER (str, optional) – Optional sort order to output in. If not supplied OUTPUT is in the same order as INPUT. Default value: null. Possible values: {unsorted, queryname, coordinate, duplicate}
    • possible values: ‘unsorted’, ‘queryname’, ‘coordinate’, ‘duplicate’
  • TMP_DIR (str, optional) – A file. Default value: null. This option may be specified 0 or more times.
  • VALIDATION_STRINGENCY (str, optional) – Validation stringency for all SAM files read by this program. Setting stringency to SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded. Default value: STRICT. This option can be set to “null” to clear the default value.
    • possible values: ‘STRICT’, ‘LENIENT’, ‘SILENT’
  • VERBOSITY (str, optional) – Control verbosity of logging. Default value: INFO. This option can be set to “null” to clear the default value.
    • possible values: ‘ERROR’, ‘WARNING’, ‘INFO’, ‘DEBUG’

Required tools: picard-tools

CPU Cores: 6

picard_markduplicates

Identifies duplicate reads. This tool locates and tags duplicate reads (both PCR and optical/sequencing-driven) in a BAM or SAM file, where duplicate reads are defined as originating from the same original fragment of DNA. Duplicates are identified as read pairs having identical 5’ positions (coordinate and strand) for both reads in a mate pair (and optionally, matching unique molecular identifier reads; see BARCODE_TAG option). Optical, or more broadly sequencing, duplicates are duplicates that appear clustered together spatially during sequencing and can arise from optical/image-processing artifacts or from bio-chemical processes during clonal amplification and sequencing; they are identified using the READ_NAME_REGEX and the OPTICAL_DUPLICATE_PIXEL_DISTANCE options. The tool’s main output is a new SAM or BAM file in which duplicates have been identified in the SAM flags field, or optionally removed (see REMOVE_DUPLICATE and REMOVE_SEQUENCING_DUPLICATES), and optionally marked with a duplicate type in the ‘DT’ optional attribute. In addition, it also outputs a metrics file containing the numbers of READ_PAIRS_EXAMINED, UNMAPPED_READS, UNPAIRED_READS, UNPAIRED_READ_DUPLICATES, READ_PAIR_DUPLICATES, and READ_PAIR_OPTICAL_DUPLICATES.

Usage example:

java -jar picard.jar MarkDuplicates I=input.bam O=marked_duplicates.bam M=marked_dup_metrics.txt

Documentation: https://broadinstitute.github.io/picard/command-line-overview.html#MarkDuplicates

Connections:
  • Input Connection:
    • ‘in/alignments’
  • Output Connection:
    • ‘out/alignments’
    • ‘out/metrics’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   picard_markduplicates [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> picard_markduplicates;
   out_1 [label="alignments"];
   picard_markduplicates -> out_1;
   out_2 [label="metrics"];
   picard_markduplicates -> out_2;
}

Options:
  • ASSUME_SORTED (bool, optional)
  • COMMENT (str, optional)
  • COMPRESSION_LEVEL (int, optional) – Compression level for all compressed files created (e.g. BAM and GELI). Default value: 5. This option can be set to “null” to clear the default value.
  • CREATE_INDEX (bool, optional) – Whether to create a BAM index when writing a coordinate-sorted BAM file. Default value: false. This option can be set to “null” to clear the default value.
  • CREATE_MD5_FILE (bool, optional) – Whether to create an MD5 digest for any BAM or FASTQ files created. Default value: false. This option can be set to “null” to clear the default value.
  • GA4GH_CLIENT_SECRETS (str, optional) – Google Genomics API client_secrets.json file path. Default value: client_secrets.json. This option can be set to “null” to clear the default value.
  • MAX_FILE_HANDLES (int, optional)
  • MAX_RECORDS_IN_RAM (int, optional) – When writing SAM files that need to be sorted, this will specify the number of records stored in RAM before spilling to disk. Increasing this number reduces the number of file handles needed to sort a SAM file, and increases the amount of RAM needed. Default value: 500000. This option can be set to “null” to clear the default value.
  • OPTICAL_DUPLICATE_PIXEL_DISTANCE (int, optional)
  • PROGRAM_GROUP_COMMAND_LINE (str, optional)
  • PROGRAM_GROUP_NAME (str, optional)
  • PROGRAM_GROUP_VERSION (str, optional)
  • PROGRAM_RECORD_ID (str, optional)
  • QUIET (bool, optional) – Whether to suppress job-summary info on System.err. Default value: false. This option can be set to “null” to clear the default value.
  • READ_NAME_REGEX (str, optional)
  • REFERENCE_SEQUENCE (str, optional) – Reference sequence file. Default value: null.
  • SORTING_COLLECTION_SIZE_RATIO (float, optional)
  • TMP_DIR (str, optional) – A file. Default value: null. This option may be specified 0 or more times.
  • VALIDATION_STRINGENCY (str, optional) – Validation stringency for all SAM files read by this program. Setting stringency to SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded. Default value: STRICT. This option can be set to “null” to clear the default value.
    • possible values: ‘STRICT’, ‘LENIENT’, ‘SILENT’
  • VERBOSITY (str, optional) – Control verbosity of logging. Default value: INFO. This option can be set to “null” to clear the default value.
    • possible values: ‘ERROR’, ‘WARNING’, ‘INFO’, ‘DEBUG’

Required tools: picard-tools

CPU Cores: 12
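
Because all options of this step are optional, a minimal entry only needs the dependency; one Picard option is shown to illustrate how they are passed through (sketch only).

steps:
    picard_markduplicates:
        _depends: sam_to_sorted_bam      # placeholder upstream alignment step
        VALIDATION_STRINGENCY: LENIENT   # optional; relaxes SAM validation as described above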

picard_merge_sam_bam_files

Documentation: https://broadinstitute.github.io/picard/command-line-overview.html#MergeSamFiles

Connections:
  • Input Connection:
    • ‘in/alignments’
  • Output Connection:
    • ‘out/alignments’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   picard_merge_sam_bam_files [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> picard_merge_sam_bam_files;
   out_1 [label="alignments"];
   picard_merge_sam_bam_files -> out_1;
}

Options:
  • ASSUME_SORTED (bool, optional) – If true, assume that the input files are in the same sort order as the requested output sort order, even if their headers say otherwise. Default value: false. This option can be set to ‘null’ to clear the default value. Possible values: {true, false}
  • COMMENT (str, optional) – Comment(s) to include in the merged output file’s header. Default value: null.
  • COMPRESSION_LEVEL (int, optional) – Compression level for all compressed files created (e.g. BAM and GELI). Default value: 5. This option can be set to “null” to clear the default value.
  • CREATE_INDEX (bool, optional) – Whether to create a BAM index when writing a coordinate-sorted BAM file. Default value: false. This option can be set to “null” to clear the default value.
  • CREATE_MD5_FILE (bool, optional) – Whether to create an MD5 digest for any BAM or FASTQ files created. Default value: false. This option can be set to “null” to clear the default value.
  • GA4GH_CLIENT_SECRETS (str, optional) – Google Genomics API client_secrets.json file path. Default value: client_secrets.json. This option can be set to “null” to clear the default value.
  • INTERVALS (str, optional) – An interval list file that contains the locations of the positions to merge. Assume bam are sorted and indexed. The resulting file will contain alignments that may overlap with genomic regions outside the requested region. Unmapped reads are discarded. Default value: null.
  • MAX_RECORDS_IN_RAM (int, optional) – When writing SAM files that need to be sorted, this will specify the number of records stored in RAM before spilling to disk. Increasing this number reduces the number of file handles needed to sort a SAM file, and increases the amount of RAM needed. Default value: 500000. This option can be set to “null” to clear the default value.
  • MERGE_SEQUENCE_DICTIONARIES (bool, optional) – Merge the sequence dictionaries. Default value: false. This option can be set to ‘null’ to clear the default value. Possible values: {true, false}
  • QUIET (bool, optional) – Whether to suppress job-summary info on System.err. Default value: false. This option can be set to “null” to clear the default value.
  • REFERENCE_SEQUENCE (str, optional) – Reference sequence file. Default value: null.
  • SORT_ORDER (str, optional) – Sort order of output file. Default value: coordinate. This option can be set to ‘null’ to clear the default value. Possible values: {unsorted, queryname, coordinate, duplicate}
    • possible values: ‘unsorted’, ‘queryname’, ‘coordinate’, ‘duplicate’
  • TMP_DIR (str, optional) – A file. Default value: null. This option may be specified 0 or more times.
  • USE_THREADING (bool, optional) – Option to create a background thread to encode, compress and write to disk the output file. The threaded version uses about 20% more CPU and decreases runtime by ~20% when writing out a compressed BAM file. Default value: false. This option can be set to ‘null’ to clear the default value. Possible values: {true, false}
  • VALIDATION_STRINGENCY (str, optional) – Validation stringency for all SAM files read by this program. Setting stringency to SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded. Default value: STRICT. This option can be set to “null” to clear the default value.
    • possible values: ‘STRICT’, ‘LENIENT’, ‘SILENT’
  • VERBOSITY (str, optional) – Control verbosity of logging. Default value: INFO. This option can be set to “null” to clear the default value.
    • possible values: ‘ERROR’, ‘WARNING’, ‘INFO’, ‘DEBUG’

Required tools: ln, picard-tools

CPU Cores: 12

preseq_complexity_curve

The preseq package is aimed at predicting the yield of distinct reads from a genomic library from an initial sequencing experiment. The estimates can then be used to examine the utility of further sequencing, optimize the sequencing depth, or to screen multiple libraries to avoid low complexity samples.

c_curve computes the expected yield of distinct reads for experiments smaller than the input experiment in a .bed or .bam file through resampling. The full set of parameters can be output by simply typing the program name. If output.txt is the desired output file name and input.sort.bed is the sorted input .bed file, then simply type:

preseq c_curve -o output.txt input.sort.bed
Connections:
  • Input Connection:
    • ‘in/alignments’
  • Output Connection:
    • ‘out/complexity_curve’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   preseq_complexity_curve [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> preseq_complexity_curve;
   out_1 [label="complexity_curve"];
   preseq_complexity_curve -> out_1;
}

Options:
  • hist (bool, optional) – input is a text file containing the observed histogram
  • pe (bool, required) – input is paired end read file
  • seg_len (int, optional) – maximum segment length when merging paired end bam reads (default: 5000)
  • step (int, optional) – step size in extrapolations (default: 1e+06)
  • vals (bool, optional) – input is a text file containing only the observed counts

Required tools: preseq

CPU Cores: 4
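
A minimal sketch; pe is the only required option and reflects whether the upstream library is paired end. The dependency name is a placeholder.

steps:
    preseq_complexity_curve:
        _depends: sam_to_sorted_bam      # placeholder upstream alignment step
        pe: true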

preseq_future_genome_coverage

The preseq package is aimed at predicting the yield of distinct reads from a genomic library from an initial sequencing experiment. The estimates can then be used to examine the utility of further sequencing, optimize the sequencing depth, or to screen multiple libraries to avoid low complexity samples.

gc_extrap computes the expected genomic coverage for deeper sequencing for single cell sequencing experiments. The input should be a mr or bed file. The tool bam2mr is provided to convert sorted bam or sam files to mapped read format.

Connections:
  • Input Connection:
    • ‘in/alignments’
  • Output Connection:
    • ‘out/future_genome_coverage’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   preseq_future_genome_coverage [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> preseq_future_genome_coverage;
   out_1 [label="future_genome_coverage"];
   preseq_future_genome_coverage -> out_1;
}

Options:
  • bin_size (int, optional) – bin size (default: 10)
  • bootstraps (int, optional) – number of bootstraps (default: 100)
  • cval (float, optional) – level for confidence intervals (default: 0.95)
  • extrap (int, optional) – maximum extrapolation in base pairs (default: 1e+12)
  • max_width (int, optional) – max fragment length, set equal to read length for single end reads
  • quick (bool, optional) – quick mode: run gc_extrap without bootstrapping for confidence intervals
  • step (int, optional) – step size in bases between extrapolations (default: 1e+08)
  • terms (int, optional) – maximum number of terms

Required tools: preseq

CPU Cores: 4

preseq_future_yield

The preseq package is aimed at predicting the yield of distinct reads from a genomic library from an initial sequencing experiment. The estimates can then be used to examine the utility of further sequencing, optimize the sequencing depth, or to screen multiple libraries to avoid low complexity samples.

lc_extrap computes the expected future yield of distinct reads and bounds on the number of total distinct reads in the library and the associated confidence intervals.

Connections:
  • Input Connection:
    • ‘in/alignments’
  • Output Connection:
    • ‘out/future_yield’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   preseq_future_yield [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> preseq_future_yield;
   out_1 [label="future_yield"];
   preseq_future_yield -> out_1;
}

Options:
  • bootstraps (int, optional) – number of bootstraps (default: 100)
  • cval (float, optional) – level for confidence intervals (default: 0.95)
  • dupl_level (float, optional) – fraction of duplicate to predict (default: 0.5)
  • extrap (int, optional) – maximum extrapolation (default: 1e+10)
  • hist (bool, optional) – input is a text file containing the observed histogram
  • pe (bool, required) – input is paired end read file
  • quick (bool, optional) – quick mode, estimate yield without bootstrapping for confidence intervals
  • seg_len (int, optional) – maximum segment length when merging paired end bam reads (default: 5000)
  • step (int, optional) – step size in extrapolations (default: 1e+06)
  • terms (int, optional) – maximum number of terms
  • vals (bool, optional) – input is a text file containing only the observed counts

Required tools: preseq

CPU Cores: 4

remove_duplicate_reads_runs

Duplicates are removed by the Picard tools ‘MarkDuplicates’ command.

typical command line:

MarkDuplicates INPUT=<SAM/BAM> OUTPUT=<SAM/BAM> METRICS_FILE=<metrics-out> REMOVE_DUPLICATES=true
Connections:
  • Input Connection:
    • ‘in/alignments’
  • Output Connection:
    • ‘out/alignments’
    • ‘out/metrics’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   remove_duplicate_reads_runs [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> remove_duplicate_reads_runs;
   out_1 [label="alignments"];
   remove_duplicate_reads_runs -> out_1;
   out_2 [label="metrics"];
   remove_duplicate_reads_runs -> out_2;
}

Options: (none)

Required tools: MarkDuplicates

CPU Cores: 12

rseqc

The RSeQC step can be used to evaluate aligned reads in a BAM file. RSeQC not only reports raw sequence-based metrics, but also quality-control metrics such as read distribution, gene coverage, and sequencing depth.

Connections:
  • Input Connection:
    • ‘in/alignments’
  • Output Connection:
    • ‘out/bam_stat’
    • ‘out/infer_experiment’
    • ‘out/read_distribution’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   rseqc [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> rseqc;
   out_1 [label="bam_stat"];
   rseqc -> out_1;
   out_2 [label="infer_experiment"];
   rseqc -> out_2;
   out_3 [label="read_distribution"];
   rseqc -> out_3;
}

Options:
  • reference (str, required) – Reference gene model in BED format.

Required tools: bam_stat.py, cat, infer_experiment.py, read_distribution.py

CPU Cores: 1

sam_to_sorted_bam

The step sam_to_sorted_bam builds on ‘samtools sort’ to sort SAM files and output BAM files.

Sort alignments by leftmost coordinates, or by read name when -n is used. An appropriate @HD-SO sort order header tag will be added or an existing one updated if necessary.

Documentation: http://www.htslib.org/doc/samtools.html

Connections:
  • Input Connection:
    • ‘in/alignments’
  • Output Connection:
    • ‘out/alignments’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   sam_to_sorted_bam [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> sam_to_sorted_bam;
   out_1 [label="alignments"];
   sam_to_sorted_bam -> out_1;
}

Options:
  • genome-faidx (str, required)
  • sort-by-name (bool, required)
  • temp-sort-directory (str, required) – Intermediate sort files are stored in this directory.

Required tools: dd, pigz, samtools

CPU Cores: 8

samtools_faidx

Index reference sequence in the FASTA format or extract subsequence from indexed reference sequence. If no region is specified, faidx will index the file and create <ref.fasta>.fai on the disk. If regions are specified, the subsequences will be retrieved and printed to stdout in the FASTA format.

Connections:
  • Input Connection:
    • ‘in/sequence’
  • Output Connection:
    • ‘out/indices’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   samtools_faidx [style=filled, fillcolor="#fce94f"];
   in_0 [label="sequence"];
   in_0 -> samtools_faidx;
   out_1 [label="indices"];
   samtools_faidx -> out_1;
}

Options: (none)

Required tools: mv, samtools

CPU Cores: 4

samtools_index

Index a coordinate-sorted BAM or CRAM file for fast random access. (Note that this does not work with SAM files even if they are bgzip compressed; to index such files, use tabix(1) instead.)

Documentation: http://www.htslib.org/doc/samtools.html

Connections:
  • Input Connection:
    • ‘in/alignments’
  • Output Connection:
    • ‘out/alignments’
    • ‘out/index_stats’
    • ‘out/indices’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   samtools_index [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> samtools_index;
   out_1 [label="alignments"];
   samtools_index -> out_1;
   out_2 [label="index_stats"];
   samtools_index -> out_2;
   out_3 [label="indices"];
   samtools_index -> out_3;
}

Options:
  • index_type (str, required)
    • possible values: ‘bai’, ‘csi’

Required tools: ln, samtools

CPU Cores: 4

samtools_stats

samtools stats collects statistics from BAM files and outputs them in a text format. The output can be visualized graphically using plot-bamstats.

Documentation: http://www.htslib.org/doc/samtools.html

Connections:
  • Input Connection:
    • ‘in/alignments’
  • Output Connection:
    • ‘out/stats’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   samtools_stats [style=filled, fillcolor="#fce94f"];
   in_0 [label="alignments"];
   in_0 -> samtools_stats;
   out_1 [label="stats"];
   samtools_stats -> out_1;
}

Options: (none)

Required tools: dd, pigz, samtools

CPU Cores: 1

segemehl

segemehl is a software tool for mapping short sequencing reads to reference genomes. Unlike other methods, segemehl is able to detect not only mismatches but also insertions and deletions. Furthermore, segemehl is not limited to a specific read length and is able to map primer- or polyadenylation-contaminated reads correctly.

This step first creates two FIFOs. The first provides the genome data to segemehl and the second receives the unmapped reads:

mkfifo genome_fifo unmapped_fifo
cat <genome-fasta> > genome_fifo

The executed segemehl command is this:

segemehl -d genome_fifo -i <genome-index-file> -q <read1-fastq> [-p <read2-fastq>] -u unmapped_fifo -H 1 -t 11 -s -S -D 0 -o /dev/stdout |  pigz --blocksize 4096 --processes 2 -c

The unmapped reads are saved via these commands:

cat unmapped_fifo | pigz --blocksize 4096 --processes 2 -c > <unmapped-fastq>
Connections:
  • Input Connection:
    • ‘in/first_read’
    • ‘in/second_read’
  • Output Connection:
    • ‘out/alignments’
    • ‘out/log’
    • ‘out/unmapped’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   segemehl [style=filled, fillcolor="#fce94f"];
   in_0 [label="first_read"];
   in_0 -> segemehl;
   in_1 [label="second_read"];
   in_1 -> segemehl;
   out_2 [label="alignments"];
   segemehl -> out_2;
   out_3 [label="log"];
   segemehl -> out_3;
   out_4 [label="unmapped"];
   segemehl -> out_4;
}

Options:
  • MEOP (bool, optional) – output MEOP field for easier variance calling in SAM (XE:Z:)
  • SEGEMEHL (bool, optional) – output SEGEMEHL format (needs to be selected for brief)
  • accuracy (int, optional) – min percentage of matches per read in semi-global alignment (default:90)
  • autoclip (bool, optional) – autoclip unknown 3prime adapter
  • bisulfite (int, optional) – bisulfite mapping with methylC-seq/Lister et al. (=1) or bs-seq/Cokus et al. protocol (=2) (default:0)
    • possible values: ‘0’, ‘1’, ‘2’
  • brief (bool, optional) – brief output
  • clipacc (int, optional) – clipping accuracy (default:70)
  • differences (int, optional) – search seeds initially with <n> differences (default:1)
  • dropoff (int, optional) – dropoff parameter for extension (default:8)
  • evalue (float, optional) – max evalue (default:5.000000)
  • extensionpenalty (int, optional) – penalty for a mismatch during extension (default:4)
  • extensionscore (int, optional) – score of a match during extension (default:2)
  • fix-qnames (bool, optional) – Spaces and everything thereafter are removed from the QNAMES field of the input.
  • genome (str, required) – Path to genome file
  • hardclip (bool, optional) – enable hard clipping
  • hitstrategy (int, optional) – report only best scoring hits (=1) or all (=0) (default:1)
    • possible values: ‘0’, ‘1’
  • index (str, required) – Path to genome index for segemehl
  • jump (int, optional) – search seeds with jump size <n> (0=automatic) (default:0)
  • maxinsertsize (int, optional) – maximum size of the inserts (paired end) (default:5000)
  • maxinterval (int, optional) – maximum width of a suffix array interval, i.e. a query seed will be omitted if it matches more than <n> times (default:100)
  • maxsplitevalue (float, optional) – max evalue for splits (default:50.000000)
  • minfraglen (int, optional) – min length of a spliced fragment (default:20)
  • minfragscore (int, optional) – min score of a spliced fragment (default:18)
  • minsize (int, optional) – minimum size of queries (default:12)
  • minsplicecover (int, optional) – min coverage for spliced transcripts (default:80)
  • nohead (bool, optional) – do not output header
  • order (bool, optional) – sorts the output by chromosome and position (might take a while!)
  • polyA (bool, optional) – clip polyA tail
  • prime3 (str, optional) – add 3’ adapter (default:none)
  • prime5 (str, optional) – add 5’ adapter (default:none)
  • showalign (bool, optional) – show alignments
  • silent (bool, optional) – shut up!
  • splicescorescale (float, optional) – report spliced alignment with score s only if <f>*s is larger than next best spliced alignment (default:1.000000)
  • splits (bool, optional) – detect split/spliced reads (default:none)

Required tools: cat, dd, fix_qnames, mkfifo, pigz, segemehl

CPU Cores: 12

segemehl_generate_index

The step segemehl_generate_index generates an index for the given reference sequences.

Documentation: http://www.bioinf.uni-leipzig.de/Software/segemehl

Connections:
  • Input Connection:
    • ‘in/reference_sequence’
  • Output Connection:
    • ‘out/log’
    • ‘out/segemehl_index’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   segemehl_generate_index [style=filled, fillcolor="#fce94f"];
   in_0 [label="reference_sequence"];
   in_0 -> segemehl_generate_index;
   out_1 [label="log"];
   segemehl_generate_index -> out_1;
   out_2 [label="segemehl_index"];
   segemehl_generate_index -> out_2;
}

Options:
  • index-basename (str, required) – Basename for created segemehl index.

Required tools: dd, mkfifo, pigz, segemehl

CPU Cores: 4

tophat2

TopHat is a fast splice junction mapper for RNA-Seq reads. It aligns RNA-Seq reads to mammalian-sized genomes using the ultra high-throughput short read aligner Bowtie, and then analyzes the mapping results to identify splice junctions between exons.

http://tophat.cbcb.umd.edu/

typical command line:

tophat [options]* <index_base> <reads1_1[,...,readsN_1]> [reads1_2,...readsN_2]
Connections:
  • Input Connection:
    • ‘in/first_read’
    • ‘in/second_read’
  • Output Connection:
    • ‘out/align_summary’
    • ‘out/alignments’
    • ‘out/deletions’
    • ‘out/insertions’
    • ‘out/junctions’
    • ‘out/log_stderr’
    • ‘out/misc_logs’
    • ‘out/prep_reads’
    • ‘out/unmapped’

digraph foo {
   rankdir = LR;
   splines = true;
   graph [fontname = Helvetica, fontsize = 12, size = "14, 11", nodesep = 0.2, ranksep = 0.3];
   node [fontname = Helvetica, fontsize = 12, shape = rect];
   edge [fontname = Helvetica, fontsize = 12];
   tophat2 [style=filled, fillcolor="#fce94f"];
   in_0 [label="first_read"];
   in_0 -> tophat2;
   in_1 [label="second_read"];
   in_1 -> tophat2;
   out_2 [label="align_summary"];
   tophat2 -> out_2;
   out_3 [label="alignments"];
   tophat2 -> out_3;
   out_4 [label="deletions"];
   tophat2 -> out_4;
   out_5 [label="insertions"];
   tophat2 -> out_5;
   out_6 [label="junctions"];
   tophat2 -> out_6;
   out_7 [label="log_stderr"];
   tophat2 -> out_7;
   out_8 [label="misc_logs"];
   tophat2 -> out_8;
   out_9 [label="prep_reads"];
   tophat2 -> out_9;
   out_10 [label="unmapped"];
   tophat2 -> out_10;
}

Options:
  • index (str, required) – Path to genome index for tophat2
  • library_type (str, required) – The default is unstranded (fr-unstranded). If either fr-firststrand or fr-secondstrand is specified, every read alignment will have an XS attribute tag. Consider supplying library type options to select the correct RNA-seq protocol (see https://ccb.jhu.edu/software/tophat/manual.shtml).
    • possible values: ‘fr-unstranded’, ‘fr-firststrand’, ‘fr-secondstrand’

Required tools: mkdir, mv, tar, tophat2

CPU Cores: 6

API documentation

Pipeline-specific modules

abstract_step

Classes AbstractStep and AbstractSourceStep are defined here.

The class AbstractStep has to be inherited by all processing step classes. The class AbstractSourceStep has to be inherited by all source step classes.

Processing steps generate output files from input files whereas source steps only provide output files. Both step types may generate tasks, but only source steps can introduce files from outside the destination path into the pipeline.
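
A minimal sketch of what a custom processing step might look like, using only the methods documented in this section. The class WordCount, the connections, the option count-bytes and the external tool wc are made-up placeholders, and the keyword arguments optional and description passed to add_option() are assumptions based on the option listings above, not a confirmed signature; consult the existing step implementations for the authoritative pattern.

# A hypothetical step that counts lines of text files; not part of uap.
from abstract_step import AbstractStep


class WordCount(AbstractStep):

    def __init__(self, pipeline):
        super(WordCount, self).__init__(pipeline)
        self.set_cores(1)

        # Declare connections and an option (keyword arguments are assumed).
        self.add_connection('in/text')
        self.add_connection('out/counts')
        self.add_option('count-bytes', bool, optional=True,
                        description='Also count bytes, not only lines.')

        # External tool this step relies on; resolved later via get_tool().
        self.require_tool('wc')

    def runs(self, run_ids_connections_files):
        # One run per upstream run ID; the dict layout matches the one
        # shown for get_run_ids_in_connections_input_files() below.
        for run_id in run_ids_connections_files:
            with self.declare_run(run_id) as run:
                input_paths = run_ids_connections_files[run_id]['in/text']
                run.add_output_file('counts',
                                    '%s.counts.txt' % run_id,
                                    input_paths)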

class abstract_step.AbstractStep(pipeline)[source]
add_connection(connection, constraints=None)[source]

Add a connection, which must start with ‘in/’ or ‘out/’.

add_dependency(parent)[source]

Add a parent step to this step’s dependencies.

parent – parent step this step depends on

add_input_connection(connection, constraints=None)[source]

Add an input connection to this step

add_option(key, *option_types, **kwargs)[source]

Add an option. Multiple types may be specified.

add_output_connection(connection, constraints=None)[source]

Add an output connection to this step

declare_run(run_id)[source]

Declare a run. Use it like this:

with self.declare_run(run_id) as run:
    # add output files and information to the run here
dependencies = None

All steps this step depends on.

finalize()[source]

Finalizes the step.

The intention is to make further changes to the step impossible, but apparently, it’s checked nowhere at the moment.

find_upstream_info_for_input_paths(input_paths, key)[source]

Find a piece of public information in all upstream steps. If the information is not found, or is defined in more than one upstream step, this will crash.

generate_one_report()[source]

Gathers the output files for each outgoing connection and calls self.reports() to do the job of creating a report.

generate_report(run_id)[source]

Gathers the output files for each outgoing connection and calls self.reports() to do the job of creating a report.

get_annotation_for_input_file(path)[source]

Determine the annotation for a given input file (that is, the connection name).

get_cores()[source]

Returns the number of cores used in this step.

get_in_connections()[source]

Return all in-connections for this step

get_input_files_for_run_id_and_connection(run_id, in_key)[source]

Returns a list of all input files given a run_id and a connection

get_input_runs()[source]

Return a dict which contains all runs per parent step.

get_module_loads()[source]

Return dictionary with module load commands to execute before starting any other command of this step

get_module_unloads()[source]

Return dictionary with module unload commands to execute before starting any other command of this step

get_option(key)[source]

Query an option.

get_options()[source]

Returns a dictionary of all given options

get_out_connections()[source]

Return all out-connections for this step

get_post_commands()[source]

Return dictionary with commands to execute after finishing any other command of this step

get_pre_commands()[source]

Return dictionary with commands to execute before starting any other command of this step

get_run(run_id)[source]

Returns a single run object for run_id or None.

get_run_ids()[source]

Returns a sorted list of run IDs generated by this step.

get_run_ids_and_input_files_for_connection(in_key)[source]
Returns an iterator/generator with run_id and input_files where:
  • run_id is a string
  • input_files is a list of input paths
get_run_ids_in_connections_input_files()[source]

Return a dictionary with all run IDs from parent steps, the in connections they provide data for, and the names of the files:

run_id_1:
    in_connection_1: [input_path_1, input_path_2, ...]
    in_connection_2: ...
run_id_2: ...

Format of in_connection: in/<connection>. Input paths are absolute.

get_run_ids_out_connections_output_files()[source]

Return a dictionary with all run IDs of the current step, their out connections, and the files that belong to them:

run_id_1:
    out_connection_1: [output_path_1, output_path_2, ...]
    out_connection_2: ...
run_id_2: ...

Format of out_connection: out/<connection>. Output paths are absolute.

get_run_state(run_id)[source]

Returns run state of a run.

Determine the run state (that is, not basic but extended run state) of a run, building on the value returned by get_run_state_basic().

If a run is ready, this will:
  • return executing if an up-to-date executing ping file is found
  • otherwise return queued if a queued ping file is found
If a run is waiting, this will:
  • return queued if a queued ping file is found

Otherwise, it will just return the value obtained from get_run_state_basic().

Attention: The status indicators executing and queued may be temporarily wrong due to the possibility of having out-of-date ping files lying around.

get_run_state_basic(run_id)[source]

Determines basic run state of a run.

Determine the basic run state of a run, which is, at any time, one of waiting, ready, or finished.

These states are determined from the current configuration and the timestamps of result files present in the file system. In addition to these three basic states, there are two additional states which are less reliable (see get_run_state()).

get_runs()[source]

Getter method for runs of this step.

If there are no runs when this method is called, they are created here.

get_single_input_file_for_connection(in_key)[source]

Return a single input file for a given connection and make sure that there is exactly one such input file.

classmethod get_step_class_for_key(key)[source]

Returns a step (or source step) class for a given key which corresponds to the name of the module the class is defined in. Pass ‘cutadapt’ and you will get the cutadapt.Cutadapt class which you may then instantiate.
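
For illustration (pipeline is assumed to be an existing pipeline.Pipeline instance, not defined here):

from abstract_step import AbstractStep

# Resolve the class behind the 'cutadapt' module and instantiate it.
cutadapt_class = AbstractStep.get_step_class_for_key('cutadapt')
step = cutadapt_class(pipeline)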

get_step_name()[source]

Returns this step’s name.

Returns the step name which is initially equal to the step type (== module name) but can be changed via set_step_name() or via the YAML configuration.

get_step_type()[source]

Returns the original step name (== module name).

get_tool(key)[source]

Return full path to a configured tool.

is_option_set_in_config(key)[source]

Determine whether an optional option (that is, a non-required option) has been set in the configuration.

reports(run_id, out_connection_output_files)[source]

Abstract method; it must be implemented by the actual step.

Raises NotImplementedError if the subclass does not override this method.

require_tool(tool)[source]

Declare that this step requires an external tool. Query it later with get_tool().

run(run_id)[source]

Create a temporary output directory and execute a run. After the run has finished, it is checked that all output files are in place and the output files are moved to the final output location. Finally, YAML annotations are written.

runs(run_ids_connections_files)[source]

Abstract method; it must be implemented by the actual step.

Raises NotImplementedError if the subclass does not override this method.

set_cores(cores)[source]

Specify the number of CPU cores this step will use.

set_options(options)[source]

Checks and stores step options.

The options are either set to values given in YAML config or the default values set in self.add_option().

set_step_name(step_name)[source]

Change the step name.

The step name is initially set to the module name. This method is used in case we need multiple steps of the same kind.

class abstract_step.AbstractSourceStep(pipeline)[source]

A subclass from which all source steps inherit. It distinguishes source steps from real processing steps: source steps do not yield any tasks, because their “output files” are in fact files which already exist.

Note that the name might be a bit misleading because this class only applies to source steps which ‘serve’ existing files. A step which has no input but produces input data for other steps and actually has to do something for it, on the other hand, would be a normal AbstractStep subclass because it produces tasks.

pipeline
class pipeline.Pipeline(**kwargs)[source]

The Pipeline class represents the entire processing pipeline which is defined and configured via the configuration file config.yaml.

Individual steps may be defined in a tree, and their combination with the samples generated by one or more sources leads to an array of tasks.

all_tasks_topologically_sorted = None

List of all tasks in topological order.

check_tools()[source]

checks whether all tools referenced by the configuration are available and records their versions as determined by [tool] --version etc.

cluster_config = {
    'sge': {
        'submit': 'qsub',
        'stat': 'qstat',
        'set_job_name': '-N',
        'set_stdout': '-o',
        'set_stderr': '-e',
        'hold_jid': '-hold_jid',
        'parse_job_id': 'Your job (\\d+)',
        'template': '/home/docs/checkouts/readthedocs.org/user_builds/izi-uap/checkouts/documentation_review/include/../submit-scripts/qsub-template.sh'},
    'uge': {
        'submit': 'qsub',
        'stat': 'qstat',
        'set_job_name': '-N',
        'set_stdout': '-o',
        'set_stderr': '-e',
        'hold_jid': '-hold_jid',
        'parse_job_id': 'Your job (\\d+)',
        'template': '/home/docs/checkouts/readthedocs.org/user_builds/izi-uap/checkouts/documentation_review/include/../submit-scripts/qsub-template.sh'},
    'slurm': {
        'submit': 'sbatch',
        'stat': 'squeue',
        'set_job_name': '--job-name=%s',
        'set_stdout': '-o',
        'set_stderr': '-e',
        'hold_jid': '--dependency=afterany:%s',
        'parse_job_id': 'Submitted batch job (\\d+)',
        'template': '/home/docs/checkouts/readthedocs.org/user_builds/izi-uap/checkouts/documentation_review/include/../submit-scripts/sbatch-template.sh'}}

Cluster-related configuration for every cluster system supported.

cluster_type = None

The cluster type to be used (must be one of the keys specified in cluster_config).

config = None

Dictionary representation of configuration YAML file.

file_dependencies = None

This dict stores file dependencies within this pipeline, but regardless of step, output file tag or run ID. This dict has, for all output files generated by the pipeline, a set of input files that output file depends on.

file_dependencies_reverse = None

This dict stores file dependencies within this pipeline, but regardless of step, output file tag or run ID. This dict has, for all input files required by the pipeline, a set of output files which are generated using this input file.

input_files_for_task_id = None

This dict stores a set of input files for every task id in the pipeline.

notify(message, attachment=None)[source]

prints a notification to the screen and optionally delivers the message on additional channels (as defined by the configuration)

output_files_for_task_id = None

This dict stores a set of output files for every task id in the pipeline.

pipeline_path = '/home/docs/checkouts/readthedocs.org/user_builds/izi-uap/checkouts/documentation_review/include'

Absolute path to this very file. It is used to circumvent path issues.

states = Enum(['READY', 'EXECUTING', 'WAITING', 'QUEUED', 'FINISHED'])

Possible states a task can be in.

steps = None

This dict stores step objects by their name. Each step knows its dependencies.

task_for_task_id = None

This dict stores task objects by task IDs.

task_id_for_output_file = None

This dict stores a task ID for every output file created by the pipeline.

task_ids_for_input_file = None

This dict stores a set of task IDs for every input file used in the pipeline.

topological_step_order = None

List with topologically ordered steps.

run
class run.Run(step, run_id)[source]

The Run class is a helper class which represents a run in a step. Declare runs inside AbstractStep.runs() via:

with self.new_run(run_id) as run:
    # declare output files, private and public info here

After that, use the available methods to configure the run. A run typically has no information about input connections, only about input files.

add_empty_output_connection(tag)[source]

An empty output connection has ‘None’ as output file and ‘None’ as input file.

add_output_file(tag, out_path, in_paths)[source]

Add an output file to this run. Output file names must be unique across all runs defined by a step, so it may be a good idea to include the run_id into the output filename.

  • tag: You must specify the connection annotation which must have been previously declared via AbstractStep.add_connection(“out/...”), but this doesn’t have to be done in the step constructor, it’s also possible in declare_runs() right before this method is called.
  • out_path: The output file path, without a directory. The pipeline assigns directories for you (this parameter must not contain a slash).
  • in_paths: A list of input files this output file depends on. It is crucial to get this right, so that the pipeline can determine which steps are up-to-date at any given time. You have to specify absolute paths here, including a directory, and you can obtain them via AbstractStep.run_ids_and_input_files_for_connection and related functions.
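
For illustration, a hypothetical call inside a step’s runs() implementation might look like this (run, run_id and run_ids_connections_files come from the surrounding runs() context and are not defined here):

# Hypothetical: declare one de-duplicated BAM per run, depending on all
# input alignment files of that run.
input_paths = run_ids_connections_files[run_id]['in/alignments']
run.add_output_file('alignments',
                    '%s.rmdup.bam' % run_id,  # file name only, no directory
                    input_paths)              # absolute input paths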

add_private_info(key, value)[source]

Add private information to a run. Use this to store data which you will need when the run is executed. As opposed to public information, private information is not visible to subsequent steps.

You can store paths to input files here, but not paths to output files as their expected location is not defined until we’re in AbstractStep.execute (hint: they get written to a temporary directory inside execute()).

add_public_info(key, value)[source]

Add public information to a run. For example, a FASTQ reader may store the index barcode here for subsequent steps to query via AbstractStep.find_upstream_info().

add_temporary_directory(prefix='', suffix='', designation=None)[source]

Convenience method for creation of temporary directories. Basically, just calls self.add_temporary_file(). The magic happens in ProcessPool.__exit__()

get_basic_state()[source]

Determines basic run state of a run.

Determine the basic run state of a run, which is, at any time, one of waiting, ready, or finished.

These states are determined from the current configuration and the timestamps of result files present in the file system. In addition to these three basic states, there are two additional states which are less reliable (see get_run_state()).

get_execution_hashtag()[source]

Creates a hash tag based on the commands to be executed.

This causes runs to be marked for rerunning if the commands to be executed change.

get_input_files_for_output_file(out_path)[source]

Return all input files a given output file depends on.

get_output_directory()[source]

Returns the final output directory.

get_output_directory_du_jour()[source]

Returns the state-dependent output directory of the step.

Returns this step’s output directory according to its current state:

  • if we are currently calling a step’s declare_runs() method, this will return None
  • if we are currently calling a step’s execute() method, this will return the temporary directory
  • otherwise, it will return the real output directory

get_output_directory_du_jour_placeholder()[source]

Returns a placeholder for the temporary output directory, which needs to be replaced by the actual temp directory inside the abstract_step.execute() method
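
A small, hypothetical illustration of how the placeholder might be used when assembling an output path in runs() (run and run_id come from the surrounding context):

# The placeholder stands in for the output directory and is substituted
# with the real temporary directory when abstract_step.execute() runs.
out_dir = run.get_output_directory_du_jour_placeholder()
sorted_bam = '%s/%s.sorted.bam' % (out_dir, run_id)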

get_output_files_abspath()[source]

Return a dictionary of all defined output files, grouped by connection annotation:

annotation_1:
    out_path_1: [in_path_1, in_path_2, ...]
    out_path_2: ...
annotation_2: ...

The out_path consists of the output directory du jour and the output file name.

get_output_files_for_annotation_and_tags(annotation, tags)[source]

Retrieve a set of output files of the given annotation, assigned to the same number of specified tags. If you have two ‘alignment’ output files and they are called out-a.txt and out-b.txt, you can use this function like this:

  • tags: [‘a’, ‘b’]
  • result: {‘a’: ‘out-a.txt’, ‘b’: ‘out-b.txt’}
get_private_info(key)[source]

Query private information which must have been previously stored via add_private_info().

get_public_info(key)[source]

Query public information which must have been previously stored via add_public_info().

get_single_output_file_for_annotation(annotation)[source]

Retrieve exactly one output file of the given annotation, and crash if there isn’t exactly one.

get_temp_output_directory()[source]

Returns the temporary output directory of a run.

has_private_info(key)[source]

Query whether a piece of private information has been defined.

has_public_info(key)[source]

Query whether a piece of public information has been defined.

remove_temporary_paths()[source]

Everything stored in self._temp_paths is examined and deleted if possible. Also, self._known_paths ‘type’ info is updated here. NOTE: Included additional stat checks to detect FIFOs as well as other special files.

update_public_info(key, value)[source]

Update public information already existing in a run. For example, all steps which handle FASTQ files want to know how to distinguish between files of read 1 and files of read 2. So each step that provides FASTQ should update this information if the file names are altered. The stored information can be acquired via: AbstractStep.find_upstream_info().

write_annotation_file(path)[source]

Write the YAML annotation after a successful or failed run. The annotation can later be used to render the process graph.

task
class task.Task(pipeline, step, run_id, run_index)[source]

A task represents a certain run of a certain step.

generate_report()[source]

Generate the report for the task. Skip this if task is not finished yet.

get_parent_tasks()[source]

Returns a list of parent tasks which this task depends on.

get_pipeline()[source]

Returns the pipeline this task belongs to.

get_run()[source]

Returns the run object for this task.

get_step()[source]

Returns the step of this task.

get_task_state()[source]

Proxy method for step.get_run_state().

get_task_state_basic()[source]

Proxy method for step.get_run_state_basic().

input_files()[source]

Return a list of input files required by this task.

output_files()[source]

Return a list of output files produced by this task.

run()[source]

Run the task. Skip if it’s already finished. Raise Exception if it’s not ready.

Miscellaneous modules

process_pool

This module can be used to launch child processes and wait for them. Processes may either run on their own or pipelines can be built with them.

class process_pool.ProcessPool(run)[source]

The process pool provides an environment for launching and monitoring processes. You can launch any number of unrelated processes plus any number of pipelines in which several processes are chained together.

Use it like this:

with process_pool.ProcessPool(self) as pool:
    # launch processes or create pipelines here

When the scope opened by the with statement is left, all processes are launched and watched. The process pool then waits until all processes have finished. You cannot launch a process pool within another process pool, but you can launch multiple pipelines and independent processes within a single process pool. Also, you can launch several process pools sequentially.
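
A hedged sketch of how a pool might be used inside a step, based only on the methods documented here (launch() and Pipeline.append() are described below); tool names and paths are placeholders:

import process_pool

# 'self' is the current step/run, as in the usage example above.
with process_pool.ProcessPool(self) as pool:
    # A standalone process; stderr is captured to a log file.
    pool.launch(['samtools', 'index', '/path/to/input.bam'],
                stderr_path='/path/to/samtools_index.log')

    # A two-stage pipeline: decompress, then count lines.
    with pool.Pipeline(pool) as pipeline:
        pipeline.append(['pigz', '--decompress', '--stdout',
                         '/path/to/reads.fastq.gz'])
        pipeline.append(['wc', '-l'],
                        stdout_path='/path/to/line_count.txt')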

COPY_BLOCK_SIZE = 4194304

When stdout or stderr streams should be written to output files, this is the buffer size which is used for writing.

class Pipeline(pool)[source]

This class can be used to chain multiple processes together.

Use it like this:

with pool.Pipeline(pool) as pipeline:
    # append processes to the pipeline here
append(args, stdout_path=None, stderr_path=None, hints={})[source]

Append a process to the pipeline. Parameters get stored and are passed to ProcessPool.launch() later, so the same behaviour applies.

ProcessPool.SIGTERM_TIMEOUT = 10

After a SIGTERM signal is issued, wait this many seconds before going postal.

ProcessPool.TAIL_LENGTH = 1024

Size of the tail which gets recorded from both stdout and stderr streams of every process launched with this class, in bytes.

ProcessPool.get_log()[source]

Return the log as a dictionary.

classmethod ProcessPool.kill()[source]

Kills all user-launched processes. After that, the remaining process will end and a report will be written.

classmethod ProcessPool.kill_all_child_processes()[source]

Kill all child processes of this process by sending a SIGTERM to each of them. This includes all children which were not launched by this module, and their children etc.

ProcessPool.launch(args, stdout_path=None, stderr_path=None, hints={})[source]

Launch a process. Arguments, including the program itself, are passed in args. If the program is not a binary but a script which cannot be invoked directly from the command line, the first element of args must be a list like this: [‘python’, ‘script.py’].

Use stdout_path and stderr_path to redirect stdout and stderr streams to files. In any case, the output of both streams gets watched, the process pool calculates SHA1 checksums automatically and also keeps the last 1024 bytes of every stream. This may be useful if a process crashes and writes error messages to stderr in which case you can see them even if you didn’t redirect stderr to a log file.

Hints can be specified but are not essential. They help to determine the direction of arrows for the run annotation graphs rendered by GraphViz (sometimes, it’s not clear from the command line whether a certain file is an input or output file to a given process).

ProcessPool.log(message)[source]

Append a message to the pipeline log.

fscache
class fscache.FSCache[source]

Use this class if you expect to make the same os.path.* calls many times during a short time. The first time you call a method with certain arguments, the call is made, but all subsequent calls are served from a cache.

Usage example:

# Instantiate a new file system cache.
fsc = FSCache()

# This call will stat the file system.
print(fsc.exists('/home'))

# This call will leave the file system alone, the cached result will be returned.
print(fsc.exists('/home'))

You may call any method which is available in os.path.

misc
misc.append_suffix_to_path(path, suffix)[source]

Append a suffix to a path, for example:

  • path: /home/michael/chocolate-cookies.txt.gz
  • suffix: done right
  • result: /home/michael/chocolate-cookies-done-right.txt.gz
misc.assign_strings(paths, tags)[source]

Assign N strings (path names, for example) to N tags. Example:

  • paths = [‘RIB0000794-cutadapt-R1.fastq.gz’, ‘RIB0000794-cutadapt-R2.fastq.gz’]
  • tags = [‘R1’, ‘R2’]
  • result = { ‘R1’: ‘RIB0000794-cutadapt-R1.fastq.gz’, ‘R2’: ‘RIB0000794-cutadapt-R2.fastq.gz’ }

If this is not possible without ambiguity, a StandardError is thrown. Attention: the number of paths must be equal to the number of tags; a 1:1 relation is returned if possible.

misc.bytes_to_str(num)[source]

Convert a number representing a number of bytes into a human-readable string such as “4.7 GB”

misc.duration_to_str(duration, long=False)[source]

Minor adjustment of Python’s duration-to-string conversion: removes microsecond accuracy and replaces ‘days’ with ‘d’.

misc.natsorted(l)[source]

Return a ‘naturally sorted’ permutation of l.

Credits: http://www.codinghorror.com/blog/2007/12/sorting-for-humans-natural-sort-order.html
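
A few illustrative calls (the exact formatting produced by bytes_to_str() depends on the implementation):

import misc

# Natural sort: numeric parts are compared as numbers.
print(misc.natsorted(['sample10', 'sample2', 'sample1']))
# expected: ['sample1', 'sample2', 'sample10']

# Human-readable byte count, e.g. roughly '4.7 GB' for five billion bytes.
print(misc.bytes_to_str(5000000000))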

Remarks

This documentation has been created using Sphinx and reStructuredText.
