Layered Documentation

layered.activation module

class Activation[source]

Bases: object

__call__(incoming)[source]
delta(incoming, outgoing, above)[source]

Compute the derivative of the cost with respect to the input of this activation function. The parameter outgoing is what this function returned in the forward pass, and above is the derivative of the cost with respect to the outgoing activation.
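New activation functions subclass Activation and provide both methods. A minimal sketch of a hypothetical Tanh activation (not part of the library), assuming NumPy arrays flow through the network:

    import numpy as np
    from layered.activation import Activation

    class Tanh(Activation):
        """Hypothetical activation, shown only to illustrate the interface."""

        def __call__(self, incoming):
            return np.tanh(incoming)

        def delta(self, incoming, outgoing, above):
            # Chain rule: local derivative of tanh, expressed via the
            # stored outgoing activation, times the derivative from above.
            return (1 - outgoing ** 2) * above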

class Identity[source]

Bases: layered.activation.Activation

__call__(incoming)[source]
delta(incoming, outgoing, above)[source]
class Sigmoid[source]

Bases: layered.activation.Activation

__call__(incoming)[source]
delta(incoming, outgoing, above)[source]
class Relu[source]

Bases: layered.activation.Activation

__call__(incoming)[source]
delta(incoming, outgoing, above)[source]
class Softmax[source]

Bases: layered.activation.Activation

__call__(incoming)[source]
delta(incoming, outgoing, above)[source]
class SparseField(inhibition=0.05, leaking=0.0)[source]

Bases: layered.activation.Activation

__call__(incoming)[source]
delta(incoming, outgoing, above)[source]
class SparseRange(range_=0.3, function=Sigmoid())[source]

Bases: layered.activation.Activation

E%-Max Winner-Take-All.

Binary activation. First, the activation function is applied. Then all neurons within the specified range below the strongest neuron are set to one; all others are set to zero. The gradient equals that of the underlying activation function for active neurons and is zero otherwise. (See the sketch below.)

See: A Second Function of Gamma Frequency Oscillations: An E%-Max Winner-Take-All Mechanism Selects Which Cells Fire. (2009)

__call__(incoming)[source]
delta(incoming, outgoing, above)[source]
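The thresholding rule can be sketched in NumPy. This is one plausible reading of the description above, treating range_ as an absolute margin below the strongest activation; the actual implementation may differ:

    import numpy as np

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    def sparse_range(incoming, range_=0.3, function=sigmoid):
        activated = function(incoming)
        # Fire every neuron within range_ below the strongest one.
        threshold = activated.max() - range_
        return (activated >= threshold).astype(float)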

layered.cost module

class Cost[source]

Bases: object

__call__(prediction, target)[source]
delta(prediction, target)[source]
class SquaredError[source]

Bases: layered.cost.Cost

Fast and simple cost function.

__call__(prediction, target)[source]
delta(prediction, target)[source]
class CrossEntropy(epsilon=1e-11)[source]

Bases: layered.cost.Cost

Logistic cost function used for classification tasks. Learns faster than SquaredError at the beginning of training because large errors are penalized disproportionately. This suits classification, where only the strongest class becomes the prediction.

__call__(prediction, target)[source]
delta(prediction, target)[source]
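As a sketch, the logistic cross entropy and its derivative with respect to the prediction can be written in NumPy as follows. Clipping guards against log(0) and mirrors the epsilon constructor parameter; this illustrates the math, not necessarily the exact implementation:

    import numpy as np

    def cross_entropy(prediction, target, epsilon=1e-11):
        clipped = np.clip(prediction, epsilon, 1 - epsilon)
        return -np.sum(target * np.log(clipped) +
                       (1 - target) * np.log(1 - clipped))

    def cross_entropy_delta(prediction, target, epsilon=1e-11):
        # Derivative of the cost above with respect to the prediction.
        clipped = np.clip(prediction, epsilon, 1 - epsilon)
        return (clipped - target) / (clipped * (1 - clipped))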

layered.dataset module

class Dataset[source]

Bases: object

urls = []
cache = True
classmethod folder()[source]
parse()[source]

Subclass responsibility. The filenames of the downloaded files will be passed as individual parameters to this function, so it must accept as many parameters as there are urls defined on the class. It should return a tuple of training examples and testing examples. (A sketch follows at the end of this class.)

dump()[source]
load()[source]
download(url)[source]
static split(examples, ratio=0.8)[source]

Utility function that can be used within the parse() implementation of subclasses to split a list of examples into two lists for training and testing.
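A minimal sketch of a custom dataset, assuming a hypothetical CSV file whose last column holds the label; the url and the parsing details are made up for illustration:

    import numpy as np
    from layered.dataset import Dataset
    from layered.example import Example

    class Csv(Dataset):

        # One downloaded filename per url will be passed to parse().
        urls = ['http://example.com/data.csv']  # hypothetical url

        def parse(self, filename):
            examples = []
            with open(filename) as file_:
                for line in file_:
                    *data, label = [float(x) for x in line.split(',')]
                    examples.append(Example(np.array(data), np.array([label])))
            # Return a tuple of training and testing examples.
            return self.split(examples, ratio=0.8)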

class Test(amount=10)[source]

Bases: layered.dataset.Dataset

cache = False
parse()[source]
download(url)
dump()
folder()
load()
split(examples, ratio=0.8)

Utility function that can be used within the parse() implementation of subclasses to split a list of examples into two lists for training and testing.

urls = []
class Regression(amount=10000, inputs=10)[source]

Bases: layered.dataset.Dataset

Synthetically generated dataset for regression. The task is to predict the sum and product of all the input values. All values are normalized between zero and one.

cache = False
parse()[source]
download(url)
dump()
folder()
load()
split(examples, ratio=0.8)

Utility function that can be used within the parse() implementation of subclasses to split a list of examples into two lists for training and testing.

urls = []
class Modulo(amount=60000, inputs=32, classes=7)[source]

Bases: layered.dataset.Dataset

Synthetically generated classification dataset. The task is to predict the modulo classes of random integers encoded as bit arrays of length 32. (A sketch of the encoding follows below.)

cache = False
parse()[source]
download(url)
dump()
folder()
load()
split(examples, ratio=0.8)

Utility function that can be used within the parse() implementation of subclasses to split a list of examples into two lists for training and testing.

urls = []
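The encoding described above can be sketched as follows; this illustrates the idea of bit arrays and one-hot modulo targets, not the actual generator:

    import numpy as np

    def encode(number, inputs=32):
        # Bit array of the integer, least significant bit first.
        return np.array([(number >> i) & 1 for i in range(inputs)], dtype=float)

    def target(number, classes=7):
        # One-hot vector of the modulo class.
        label = np.zeros(classes)
        label[number % classes] = 1.0
        return label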
class Mnist[source]

Bases: layered.dataset.Dataset

The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image. It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal efforts on preprocessing and formatting. (from http://yann.lecun.com/exdb/mnist/)

urls = ['http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz']
parse(train_x, train_y, test_x, test_y)[source]
cache = True
download(url)
dump()
folder()
load()
static read(data, labels)[source]
split(examples, ratio=0.8)

Utility function that can be used within the parse() implementation of subclasses to split a list of examples into two lists for training and testing.

layered.evaluation module

compute_costs(network, weights, cost, examples)[source]
compute_error(network, weights, examples)[source]

layered.example module

class Example(data, target)[source]

Bases: object

Immutable class representing one example in a dataset.

data
target

layered.gradient module

class Gradient(network, cost)[source]

Bases: object

__call__(weights, example)[source]
class Backprop(network, cost)[source]

Bases: layered.gradient.Gradient

Use the backpropagation algorithm to efficiently determine the gradient of the cost function with respect to each individual weight.

__call__(weights, example)[source]
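A sketch of computing a gradient with backpropagation. It assumes that Network exposes the weight matrix shapes via a shapes attribute and that Layer accepts the activation class; both are assumptions based on common usage of this library:

    import numpy as np
    from layered.activation import Identity, Sigmoid
    from layered.cost import SquaredError
    from layered.example import Example
    from layered.gradient import Backprop
    from layered.network import Network, Layer, Matrices

    network = Network([Layer(2, Identity), Layer(3, Sigmoid), Layer(1, Sigmoid)])
    weights = Matrices(network.shapes)  # assumption: shapes attribute
    backprop = Backprop(network, SquaredError())
    example = Example(np.array([0.2, 0.8]), np.array([1.0]))
    gradient = backprop(weights, example)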
class NumericalGradient(network, cost, distance=1e-05)[source]

Bases: layered.gradient.Gradient

Approximate the gradient for each weight individually by sampling the error function slightly above and below the current value of the weight.

__call__(weights, example)[source]

Modify each weight individually in both directions to calculate a numeric gradient of the weights.
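The central-difference idea behind this class, sketched on a flat weight vector rather than the Matrices objects the class operates on:

    import numpy as np

    def numerical_gradient(cost_of, weights, distance=1e-5):
        # cost_of maps a weight vector to a scalar cost.
        gradient = np.zeros_like(weights)
        for i in range(len(weights)):
            shifted = weights.copy()
            shifted[i] = weights[i] + distance
            above = cost_of(shifted)
            shifted[i] = weights[i] - distance
            below = cost_of(shifted)
            gradient[i] = (above - below) / (2 * distance)
        return gradient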

class CheckedBackprop(network, cost, distance=1e-05, tolerance=1e-08)[source]

Bases: layered.gradient.Gradient

Computes the gradient both analytically through backpropagation and numerically, in order to validate the backpropagation implementation and the derivatives of activation and cost functions. This is inherently slow, so it is recommended to validate derivatives on small networks only.

__call__(weights, example)[source]
class BatchBackprop(network, cost)[source]

Bases: object

Calculate the average gradient over a batch of examples.

__call__(weights, examples)[source]
class ParallelBackprop(network, cost, workers=4)[source]

Bases: object

Alternative to BatchBackprop that yields the same results but utilizes multiprocessing to make use of more than one processor core.

__call__(weights, examples)[source]

layered.network module

class Layer(size, activation)[source]

Bases: object

apply(incoming)[source]

Store the incoming activation, apply the activation function, and store the result as the outgoing activation.

delta(above)[source]

The derivative of the activation function at the current state.

class Matrices(shapes, elements=None)[source]

Bases: object

__getitem__(index)[source]
__setitem__(index, data)[source]
copy()[source]
class Network(layers)[source]

Bases: object

feed(weights, data)[source]

Evaluate the network with the provided weights on the input data and return the output activation.

static forward(weight, activations)[source]
static backward(weight, activations)[source]
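A sketch of building a network and running a forward pass; as above, the shapes attribute and the activation-class arguments are assumptions based on common usage:

    import numpy as np
    from layered.activation import Identity, Relu, Softmax
    from layered.network import Network, Layer, Matrices

    network = Network([Layer(4, Identity), Layer(8, Relu), Layer(2, Softmax)])
    weights = Matrices(network.shapes)  # assumption: shapes attribute
    prediction = network.feed(weights, np.random.rand(4))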

layered.optimization module

class GradientDecent[source]

Bases: object

Adapt the weights in the opposite direction of the gradient to reduce the error.

__call__(weights, gradient, learning_rate=0.1)[source]
class Momentum[source]

Bases: object

Slow down changes of direction in the gradient by aggregating previous values of the gradient and blending them into the current one.

__call__(gradient, rate=0.9)[source]
class WeightDecay[source]

Bases: object

Slowly move each weight closer to zero for regularization. This can help the model find simpler solutions.

__call__(weights, rate=0.0001)[source]
class WeightTying(*groups)[source]

Bases: object

Constrain groups of slices of the gradient to have the same value by averaging them. Should be applied to the initial weights and to each gradient. (A combined usage sketch follows below.)

__call__(matrices)[source]
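A sketch of combining these optimizers in one update step. Plain NumPy arrays stand in for the Matrices objects and the gradient that a Gradient instance would produce, and the ordering shown is one plausible arrangement:

    import numpy as np
    from layered.optimization import GradientDecent, Momentum, WeightDecay

    # Stand-ins for real weights and a real gradient.
    weights = np.zeros(10)
    gradient = np.ones(10)

    descent = GradientDecent()
    momentum = Momentum()
    decay = WeightDecay()

    # One update: blend previous gradients in, step against the gradient,
    # then pull the weights slightly towards zero.
    gradient = momentum(gradient, rate=0.9)
    weights = descent(weights, gradient, learning_rate=0.1)
    weights = decay(weights, rate=0.0001)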

layered.plot module

class Interface(title='', xlabel='', ylabel='', style=None)[source]

Bases: object

style
title
xlabel
ylabel
class State[source]

Bases: object

class Window(refresh=0.5)[source]

Bases: object

register(position, interface)[source]
start(work)[source]

Hand the main thread to the window and continue the work in the provided function. A state object containing a running flag is passed as the first argument. The function is expected to exit once the flag becomes false. The function can also set the flag to false itself to stop the window event loop and resume the main thread after the start() call. (See the sketch below.)

stop()[source]

Close the window and stop the worker thread. The main thread resumes with the next statement after the start() call.

update()[source]

Redraw the figure to show changed data. This is called automatically after start() has been run.
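A sketch of the intended usage; the position argument to register() and the worker body are made up for illustration:

    from layered.plot import Plot, Window

    window = Window(refresh=0.5)
    plot = Plot('Cost', 'Batch', 'Cost')
    window.register(1, plot)  # the position format here is an assumption

    def work(state):
        for step in range(1, 100):
            if not state.running:
                return  # the window was closed by the user
            plot([1.0 / step])  # feed new values to the plot
        state.running = False  # stop the event loop; start() returns below

    window.start(work)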

class Plot(title, xlabel, ylabel, style=None, fixed=None)[source]

Bases: layered.plot.Interface

style
title
xlabel
ylabel
__call__(values)[source]

layered.problem module

class Problem(content=None)[source]

Bases: object

parse(definition)[source]

layered.trainer module

class Trainer(problem, load=None, save=None, visual=False, check=False)[source]

Bases: object

__call__()[source]

Train the model and visualize progress.
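A sketch of the overall workflow; the problem definition filename is hypothetical, and whether Problem expects a path or the definition text itself is an assumption here:

    from layered.problem import Problem
    from layered.trainer import Trainer

    problem = Problem('problem.yaml')  # hypothetical definition file
    trainer = Trainer(problem, save='weights.npy', visual=True)
    trainer()  # train the model and visualize progress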

layered.utility module

repeated(iterable, times)[source]
batched(iterable, size)[source]
averaged(callable_, batch)[source]
listify(fn=None, wrapper=<class 'list'>)[source]

From http://stackoverflow.com/a/12377059/1079110

ensure_folder(path)[source]
hstack_lines(blocks, sep=' ')[source]
pairwise(iterable)[source]
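A sketch of how the iteration helpers compose; repeated() is assumed to chain the iterable the given number of times and batched() to yield lists of the given size:

    from layered.utility import batched, repeated

    examples = list(range(10))
    # Two epochs over the examples in batches of four.
    for batch in batched(repeated(examples, 2), 4):
        print(batch)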