Welcome to the Proton Decay Study with CNNs!

Liquid argon time projection chambers (LArTPCs) are an innovative technology used in neutrino physics measurements that can also be used to set limits on several partial lifetimes for proton and neutron decay. Current analyses suffer from low efficiencies and purities that arise from the misidentification of nucleon decay final states as background processes and vice versa. One solution is to use convolutional neural networks (CNNs) to identify decay topologies in LArTPC data. In this study, CNNs are trained on Monte Carlo simulated data, labeled by truth, and then assessed on out-of-sample simulation. Currently running LArTPCs play an instrumental role in establishing the capabilities of this technology. Simultaneously, the next-generation tens-of-kilotons flagship LArTPC experiment, one of whose main charges is to search for nucleon decay, plans to rely on this technology. We discuss analysis possibilities and, further, a potential application of proton-decay-sensitive CNN-enabled data acquisition.

Contents:

Installation

The main prerequisite is TensorFlow. We require TensorFlow>=1.0.0, but this is not strictly enforced due to installation issues on certain machines.

The easiest way to install this package is by using pip:

pip install git+https://github.com/HEP-DL/proton_decay_study

If there is an existing installation, pass the --upgrade flag:

pip install --upgrade git+https://github.com/HEP-DL/proton_decay_study
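
After installation, a quick sanity check is to import the package alongside its TensorFlow backend; a minimal sketch:

# Verify that the package and its TensorFlow backend import cleanly.
import tensorflow as tf
import proton_decay_study

print(tf.__version__)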

Usage

Nominally, this should be used via the Python API.

There are a few convenience endpoint definitions in cli.py.

For instance, the Kevnet training can be invoked with:

train_kevnet --steps=100 --epochs=1000 --history=stage1.json --output=stage1.h5 dl_data/v04_00_00/*.h5

The other endpoints can be called similarly.
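
For the Python API route, the sketch below is illustrative only: the HDF5 dataset and label-set names ('image' and 'label') are placeholders that depend on how the files were produced, and it assumes Kevnet exposes the standard Keras Model interface.

# Hedged sketch of Python API usage; dataset/label names are placeholders.
from proton_decay_study.generators.gen3d import Gen3D
from proton_decay_study.models.kevnet import Kevnet

# Generator over a list of HDF5 files.
generator = Gen3D(['dl_data/v04_00_00/file0.h5'],
                  'image', 'label', batch_size=1)

# Kevnet assembles itself from the generator's input/output shapes.
model = Kevnet(generator)

# Standard Keras training loop, mirroring the train_kevnet CLI options.
model.fit_generator(generator, steps_per_epoch=100, epochs=1000)
model.save('stage1.h5')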

proton_decay_study

proton_decay_study package

Subpackages

proton_decay_study.callbacks package
Submodules
proton_decay_study.callbacks.default module

Defines the default callbacks for use in the proton_decay_study module.

class proton_decay_study.callbacks.default.HistoryRecord[source]

Bases: keras.callbacks.Callback

A stub class for recording the training history

on_batch_end(batch, logs={})[source]
on_train_begin(logs={})[source]
write(filename)[source]
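
Since the class is a stub, the sketch below shows one plausible implementation of the same interface; the internal attribute names are assumptions, not the package's actual ones.

import json
import keras

# Illustrative implementation of the HistoryRecord interface.
class HistoryRecordSketch(keras.callbacks.Callback):

    def on_train_begin(self, logs={}):
        # Start a fresh per-batch record at the beginning of training.
        self.record = []

    def on_batch_end(self, batch, logs={}):
        # Keras fills `logs` with batch metrics (loss, acc, ...); keep a copy.
        self.record.append({key: float(value) for key, value in logs.items()})

    def write(self, filename):
        # Dump the accumulated record to JSON, e.g. the stage1.json above.
        with open(filename, 'w') as handle:
            json.dump(self.record, handle)
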
Module contents
proton_decay_study.config package
Module contents
class proton_decay_study.config.Config[source]

Bases: object

Represents configuration of network training

proton_decay_study.generators package
Submodules
proton_decay_study.generators.base module
class proton_decay_study.generators.base.BaseDataGenerator[source]

Bases: object

Base data generator that hooks into the networks to provide an interface to the incoming data.

logger = <logging.Logger object>
next()[source]
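
The contract this base class implies is the iterator protocol that Keras's fit_generator consumes: next() must yield (data, label) batch pairs. A self-contained stand-in (random data, illustrative only) looks like this:

import numpy as np

# Stand-in illustrating the contract BaseDataGenerator implies: an object
# whose next() yields (data, label) batches, as fit_generator expects.
class RandomDataGenerator(object):

    def __init__(self, input_shape, n_classes, batch_size=1):
        self.input_shape = input_shape
        self.n_classes = n_classes
        self.batch_size = batch_size

    def __iter__(self):
        return self

    def next(self):  # Python 2 style iterator, matching the base class
        data = np.random.rand(self.batch_size, *self.input_shape)
        picks = np.random.randint(self.n_classes, size=self.batch_size)
        labels = np.eye(self.n_classes)[picks]  # one-hot labels
        return data, labels

    __next__ = next  # Python 3 compatibility
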
proton_decay_study.generators.gen3d module
class proton_decay_study.generators.gen3d.Gen3D(datapaths, datasetname, labelsetname, batch_size=1)[source]

Bases: proton_decay_study.generators.base.BaseDataGenerator

Creates a generator for a list of files

input

Input shape property

Returns: A tuple representing the shape of the input data

logger = <logging.Logger object>
next()[source]
output

Output shape property

Returns: A tuple representing the shape of the first data item read from the file
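
A plausible reading of these shape properties, assuming the data live in HDF5 datasets whose first axis indexes events (dataset names are placeholders):

import h5py

# Sketch of how shape properties like these can be derived: open the
# first file and take the per-event dataset shapes.
with h5py.File('dl_data/v04_00_00/file0.h5', 'r') as handle:
    input_shape = handle['image'].shape[1:]   # drop the event axis
    output_shape = handle['label'].shape[1:]

print(input_shape, output_shape)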

class proton_decay_study.generators.gen3d.Gen3DRandom(datapaths, datasetname, labelsetname, batch_size=1)[source]

Bases: proton_decay_study.generators.gen3d.Gen3D

proton_decay_study.generators.multi_file module
class proton_decay_study.generators.multi_file.MultiFileDataGenerator(datapaths, datasetname, labelsetname, batch_size=10)[source]

Bases: proton_decay_study.generators.base.BaseDataGenerator

Creates a generator for a list of files

input
logger = <logging.Logger object>
next()[source]

This should iterate over both files and datasets within a file; a sketch of that nested loop follows the class listing below.

output
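
As referenced above, one plausible shape for that nested iteration, assuming HDF5 files readable with h5py (dataset and label-set names are placeholders):

import h5py

# Sketch of the nested iteration next() describes: walk the file list
# and, within each file, walk the named datasets in fixed-size batches.
def iterate_batches(datapaths, datasetname, labelsetname, batch_size=10):
    for path in datapaths:
        with h5py.File(path, 'r') as handle:
            data = handle[datasetname]
            labels = handle[labelsetname]
            for start in range(0, len(data) - batch_size + 1, batch_size):
                stop = start + batch_size
                yield data[start:stop], labels[start:stop]
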
proton_decay_study.generators.single_file module
class proton_decay_study.generators.single_file.SingleFileDataGenerator(datapath, dataset, labelset, batch_size=10)[source]

Bases: proton_decay_study.generators.base.BaseDataGenerator

Creates a generator for a single file

logger = <logging.Logger object>
next()[source]
proton_decay_study.generators.threaded_gen3d module
class proton_decay_study.generators.threaded_gen3d.SingleFileThread(datasetname, labelsetname, batch_size)[source]

Bases: threading.Thread

Wrapper thread for buffering data from a single file

activeThreads = []
static killRunThreads(frame)[source]

Sets the thread kill flag on each of the ongoing analysis threads

logger = <logging.Logger object>
queue = <Queue.Queue instance>
queueLock = <thread.lock object>
run()[source]

Loops over the queue to accept new configurations

single_status()[source]
static startThreads(datasetname, labelsetname, batch_size)[source]
static status()[source]
threadLock = <thread.lock object>
visit(parent)[source]
static waitTillComplete()[source]
class proton_decay_study.generators.threaded_gen3d.ThreadedMultiFileDataGenerator(datapaths, datasetname, labelsetname, batch_size=1, nThreads=8)[source]

Bases: proton_decay_study.generators.base.BaseDataGenerator

Uses threads to pull asynchronously from files

check_and_refill()[source]
input
logger = <logging.Logger object>
next()[source]
output
status()[source]
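
The names above suggest a standard producer/consumer arrangement. The self-contained sketch below (read_batches is a stand-in for the real HDF5 reading, and the file names are placeholders) shows the pattern: a shared work queue, a batch buffer, and a poison-pill shutdown in place of the kill flag.

import threading
try:
    import Queue as queue  # Python 2 name, as in the docs above
except ImportError:
    import queue           # Python 3

# Sketch of the producer/consumer pattern behind SingleFileThread and
# ThreadedMultiFileDataGenerator: workers take file paths off a shared
# queue and push batches into a buffer the generator drains.
work_queue = queue.Queue()
buffer_queue = queue.Queue()

def read_batches(path):
    # Stand-in for the real HDF5 batch reading.
    yield 'batch-from-' + path

def worker():
    while True:
        path = work_queue.get()
        if path is None:       # poison pill, akin to the kill flag
            break
        for batch in read_batches(path):
            buffer_queue.put(batch)

threads = [threading.Thread(target=worker) for _ in range(8)]
for thread in threads:
    thread.start()
for path in ['file0.h5', 'file1.h5']:
    work_queue.put(path)
for _ in threads:              # one pill per worker
    work_queue.put(None)
for thread in threads:
    thread.join()
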
proton_decay_study.generators.threaded_multi_file module
class proton_decay_study.generators.threaded_multi_file.SingleFileThread(datasetname, labelsetname, batch_size)[source]

Bases: threading.Thread

Wrapper thread that asynchronously buffers data from a single file

activeThreads = []
get
static killRunThreads(frame)[source]

Sets the thread kill flag on each of the ongoing analysis threads

logger = <logging.Logger object>
queue = <Queue.Queue instance>
queueLock = <thread.lock object>
run()[source]

Loops over the queue to accept new configurations

static startThreads(datasetname, labelsetname, batch_size)[source]
threadLock = <thread.lock object>
static waitTillComplete()[source]
class proton_decay_study.generators.threaded_multi_file.ThreadedMultiFileDataGenerator(datapaths, datasetname, labelsetname, batch_size=1, nThreads=2)[source]

Bases: proton_decay_study.generators.base.BaseDataGenerator

Uses threads to pull asynchronously from files

input
logger = <logging.Logger object>
next()[source]
output
Module contents
proton_decay_study.models package
Submodules
proton_decay_study.models.kevnet module
class proton_decay_study.models.kevnet.Kevnet(generator)[source]

Bases: keras.engine.training.Model

assemble(generator)[source]
logger = <logging.Logger object>
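
What assemble(generator) plausibly does is size the network from the generator's input and output shape properties. The layer stack below is a minimal Keras 2 style sketch, not the actual Kevnet architecture.

from keras.layers import Input, Conv3D, MaxPooling3D, Flatten, Dense
from keras.models import Model

# Illustrative assembly from generator shapes; not the real Kevnet layers.
def assemble_sketch(generator):
    inputs = Input(shape=generator.input)  # e.g. (wires, ticks, planes, 1)
    net = Conv3D(16, (3, 3, 3), activation='relu', padding='same')(inputs)
    net = MaxPooling3D(pool_size=(2, 2, 2))(net)
    net = Flatten()(net)
    # Assume generator.output is a tuple like (n_classes,).
    outputs = Dense(generator.output[0], activation='softmax')(net)
    model = Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer='adam', loss='categorical_crossentropy')
    return model
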
proton_decay_study.models.kevnet_fcn module
class proton_decay_study.models.kevnet_fcn.capetian_modifier(generator)[source]

Bases: proton_decay_study.models.kevnet.Kevnet

logger = <logging.Logger object>
class proton_decay_study.models.kevnet_fcn.percussive_treasurership(generator)[source]

Bases: proton_decay_study.models.kevnet.Kevnet

logger = <logging.Logger object>
Module contents
proton_decay_study.visualization package
Submodules
proton_decay_study.visualization.intermediate module
class proton_decay_study.visualization.intermediate.IntermediateVisualizer(model, layer_name, data)[source]

Bases: keras.engine.training.Model

infer()[source]
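
infer() presumably follows the standard Keras recipe for intermediate activations: build a sub-model that ends at the named layer and run the data through it. A minimal sketch:

from keras.models import Model

# Standard Keras recipe for intermediate-layer activations; a sketch of
# what IntermediateVisualizer(model, layer_name, data).infer() likely does.
def intermediate_activations(model, layer_name, data):
    sub_model = Model(inputs=model.input,
                      outputs=model.get_layer(layer_name).output)
    return sub_model.predict(data)
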
proton_decay_study.visualization.kevnet module
class proton_decay_study.visualization.kevnet.KevNetVisualizer(model, data)[source]
initialize()[source]
layers = ['block1_conv1', 'block2_conv1', 'block3_conv1', 'block4_conv1', 'block5_conv1']
logger = <logging.Logger object>
mkdir()[source]
run()[source]
Module contents

Submodules

proton_decay_study.cli module

Module contents

Base package for Proton Decay Study.

Author: Kevin Wierman

Contributions

Your contributions to this project are more than welcome!
