Welcome to SPN’s documentation!

SPN is a library for building, training, and saving neural networks, built on Theano.

SPN defines an on-disk image of a neural network, so that saved models can be reused and modified.
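A minimal sketch of the intended workflow. The hidden and output layers are left as placeholders; only Network, RawInput, setInput(), append(), build(), and saveToFile() are documented in the API reference below:

    import mlbase.network as N
    import mlbase.layers as layers

    n = N.Network()
    n.setInput(layers.RawInput((1, 28, 28)))  # MNIST image shape, batch size excluded
    # ... append hidden and output layers here; see the layer reference below ...
    n.build()       # compile the training and prediction functions
    # ... run training, then persist the model ...
    n.saveToFile()  # writes the model and updates the symlink to the latest save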

Tutorials

Training Neural Networks on MNIST

This tutorial works through the MNIST digit classification problem to explain the usage of SPN.

Preparing the Data

Defining an MLP Network

User’s Guide

API Reference

The following documentation is extracted from the code.

Network

class mlbase.network.Network[source]

Theano-based neural network.

build(reload=False)[source]

Build the training function and the prediction function after collecting all the necessary information.

buildForwardSize()[source]

Initialize parameters based on the size information of each layer.

connect(prelayer, nextlayer, reload=False)[source]

Connect prelayer to nextlayer.

getLastLinkName()[source]

Get the file name of the symlink to the latest saved model, including the path prefix.

getSaveModelName(dateTime=None)[source]

Return the default file name for saving the model, including the path prefix.

load(istream)[source]

Load the model from an input stream. reset() is called first to clean up the network instance.

nextLayer()[source]

Use this method to iterate over all known layers. This is a DAG walker: it guarantees that all predecessors of a layer are visited before the layer itself.
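That visiting order is exactly a topological sort. A standalone illustration of the guarantee (not mlbase's implementation):

    from collections import deque

    def topo_order(layers, predecessors):
        """Yield layers so that each one appears after all of its predecessors."""
        indegree = {l: len(predecessors[l]) for l in layers}
        successors = {l: [] for l in layers}
        for l in layers:
            for p in predecessors[l]:
                successors[p].append(l)
        ready = deque(l for l in layers if indegree[l] == 0)
        while ready:
            l = ready.popleft()
            yield l
            for s in successors[l]:
                indegree[s] -= 1
                if indegree[s] == 0:
                    ready.append(s)

    # e.g. list(topo_order(['in', 'fc', 'out'],
    #                      {'in': [], 'fc': ['in'], 'out': ['fc']}))
    # -> ['in', 'fc', 'out']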

reset()[source]

For a sequentially laid-out network, use append() to add more layers.

Once the first layer is set with setInput(), the network knows where to append each new layer because it remembers the current tail layer in the member variable currentLayer, as in the sketch below.
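A sketch of this sequential form; hidden_layer and output_layer stand for layer instances whose classes are not documented in this section:

    import mlbase.network as N
    import mlbase.layers as layers

    n = N.Network()
    n.setInput(layers.RawInput((1, 28, 28)))
    n.append(hidden_layer)    # appended after currentLayer, which then advances
    n.append(layers.Relu())
    n.append(output_layer)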

save(ostream)[source]

Save the model to the output stream.

saveToFile(fileName=None)[source]

Save the network to a file, using the given file name if supplied. This may take some time when the model is large.

A symlink to the latest saved model is also created.
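A sketch of the persistence round trip; opening the symlink in binary mode is an assumption:

    import mlbase.network as N

    # assume n is a trained Network instance, as in the earlier sketch
    n.saveToFile()  # default name comes from getSaveModelName()

    m = N.Network()
    with open(n.getLastLinkName(), 'rb') as istream:
        m.load(istream)  # load() calls reset() first, then restores the model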

mlbase.layers

Network input

class mlbase.layers.RawInput(inputsize)[source]

This is the input layer class. The class type is checked during network building.

Parameters: input (tuple or list of int) – Input shape without batch size.
setBatchSize(psize)[source]

This method is supposed to be called by network.setInput().

class mlbase.layers.NonLinear[source]
predictForward(inputtensor)[source]

Forward pass used in prediction.

inputtensor: a tuple of Theano tensors

return: a tuple of Theano tensors
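A sketch of a custom nonlinearity under this convention; whether a separate training-time forward method must also be overridden is not documented here:

    import theano.tensor as T
    from mlbase.layers import NonLinear

    class Swish(NonLinear):
        def predictForward(self, inputtensor):
            (x,) = inputtensor                # unpack the single input tensor
            return (x * T.nnet.sigmoid(x),)   # return a tuple of tensors as well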

class mlbase.layers.Relu[source]
class mlbase.layers.Elu(alpha=1.0)[source]
class mlbase.layers.ConcatenatedReLU[source]
class mlbase.layers.Sine[source]
class mlbase.layers.Cosine[source]
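For reference, the standard definitions of the two less common activations above, written in raw Theano (an illustration, not mlbase's source; the channel axis for ConcatenatedReLU is assumed to be 1):

    import theano.tensor as T

    def elu(x, alpha=1.0):
        # ELU: x for x > 0, alpha * (exp(x) - 1) otherwise
        return T.switch(x > 0, x, alpha * (T.exp(x) - 1))

    def concatenated_relu(x):
        # CReLU doubles the channel axis: concat(relu(x), relu(-x))
        return T.concatenate([T.nnet.relu(x), T.nnet.relu(-x)], axis=1)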
class mlbase.layers.SeqLayer(name, bases, namespace, **kwds)[source]
class mlbase.layers.DAGPlan[source]
classmethod input()[source]

Intended to be the input for the layers.

class mlbase.layers.DAG(name, bases, namespace, **kwds)[source]


Cost function

class mlbase.cost.CostFunc[source]

General cost function base class.

Y: result from forward network. tY: the given true result.
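A sketch of a custom cost under the Y/tY convention; the method name cost() is an assumption, only the convention above is documented:

    import theano.tensor as T
    from mlbase.cost import CostFunc

    class MeanAbsoluteError(CostFunc):
        def cost(self, Y, tY):
            # Y: output of the forward network, tY: the given true result
            return T.mean(T.abs_(Y - tY))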

class mlbase.cost.TwoStageCost[source]

Cost function that requires two-stage computation.

Step 1: obtain data statistics. Step 2: obtain label for each sample.

class mlbase.cost.IndependentCost[source]

Cost function for which each per-sample cost is known, and the final cost is a statistic over all per-sample costs.

mlbase.cost.aggregate(loss, weights=None, mode='mean')[source]

This code is from lasagne/objectives.py
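Usage sketch, following the lasagne original: reduce per-sample losses to a scalar, optionally weighting each sample first:

    import theano.tensor as T
    from mlbase.cost import aggregate

    per_sample = T.vector('loss')   # e.g. per-sample costs from an IndependentCost
    weights = T.vector('w')

    mean_loss = aggregate(per_sample, mode='mean')            # plain mean
    weighted  = aggregate(per_sample, weights, mode='mean')   # mean of loss * weights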

class mlbase.cost.CrossEntropy[source]

Wrapper around categorical_crossentropy from Theano.

class mlbase.cost.ImageDiff[source]

This is the base class for image cost functions. The input format is:

tensor4, (patch, channel, column, row)

The number of channels should be 1 or 3.

class mlbase.cost.ImageSSE[source]

The sum of squared errors. Use aggregate() to get the mean squared error.

class mlbase.cost.ImageDice[source]

Dice coefficient. Y is the set of salient pixels in one binary image, and tY is the set of salient pixels in the other binary image. The Dice coefficient is: 2 * |Y ∩ tY| / (|Y| + |tY|)
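A numeric illustration of the formula on two small binary masks (plain numpy, independent of the Theano implementation):

    import numpy as np

    Y  = np.array([[0, 1], [1, 1]], dtype=bool)   # 3 salient pixels
    tY = np.array([[0, 1], [1, 0]], dtype=bool)   # 2 salient pixels

    dice = 2 * np.sum(Y & tY) / (np.sum(Y) + np.sum(tY))
    print(dice)  # 2 * 2 / (3 + 2) = 0.8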

Optimizer

class mlbase.gradient_optimizer.GradientOptimizer(lr)[source]
class mlbase.gradient_optimizer.RMSprop(lr=0.01, rho=0.9, epsilon=1e-06)[source]
class mlbase.gradient_optimizer.Adam(lr=0.01, beta1=0.9, beta2=0.999, epsilon=1e-07)[source]
class mlbase.gradient_optimizer.Momentum(lr=0.01, mu=0.5)[source]
class mlbase.gradient_optimizer.Nesterov(lr=0.01, mu=0.5)[source]
class mlbase.gradient_optimizer.Adagrad(lr=0.01, epsilon=1e-07)[source]
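The lr/mu parameters of Momentum refer to the classical momentum update, sketched below in numpy (the textbook rule, not mlbase's implementation; Nesterov differs by evaluating the gradient at the look-ahead point):

    import numpy as np

    def momentum_step(w, grad, velocity, lr=0.01, mu=0.5):
        # accumulate a decaying velocity, then move the weights along it
        velocity = mu * velocity - lr * grad
        return w + velocity, velocity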

Regularization

class mlbase.regularization.Regulator(weight_decay=0.0005, reg_func=l2)[source]

Regularization term added to the cost function.

mlbase.regularization.l1(x)[source]

L1 penalty

mlbase.regularization.l2(x)[source]

L2 penalty
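An illustration of the penalty terms with their standard definitions (the exact scaling in mlbase, e.g. a 1/2 factor, is not documented here):

    import numpy as np

    def l1(x): return np.sum(np.abs(x))
    def l2(x): return np.sum(x ** 2)

    W = np.array([[0.5, -1.0], [2.0, 0.0]])
    weight_decay = 0.0005
    print(weight_decay * l2(W))  # contribution added to the cost: 0.0005 * 5.25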

