Layered Documentation

layered.activation module

layered.cost module
class SquaredError

   Bases: layered.cost.Cost

   Fast and simple cost function.
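As a point of reference, the squared error cost is commonly written with a factor of one half so that its derivative with respect to the prediction is simply the difference. A minimal NumPy sketch, not necessarily the library's exact implementation:

   import numpy as np

   def squared_error(prediction, target):
       # The factor 0.5 makes the derivative with respect to the
       # prediction just (prediction - target).
       return 0.5 * np.sum((prediction - target) ** 2)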
class CrossEntropy(epsilon=1e-11)

   Bases: layered.cost.Cost

   Logistic cost function used for classification tasks. It learns faster in the beginning than SquaredError because large errors are penalized exponentially. This is sensible for classification, where only the highest-scoring class becomes the prediction.
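A minimal sketch of the logistic cross-entropy cost, assuming epsilon is used to clip predictions away from exactly zero and one so the logarithms stay finite (a common purpose for such a parameter):

   import numpy as np

   def cross_entropy(prediction, target, epsilon=1e-11):
       # Clip so that log() never receives exactly 0 or 1.
       prediction = np.clip(prediction, epsilon, 1 - epsilon)
       return -np.sum(target * np.log(prediction) +
                      (1 - target) * np.log(1 - prediction))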
layered.dataset module

class Dataset

   Bases: object

   urls = []

   cache = True
class Test(amount=10)

   Bases: layered.dataset.Dataset

   cache = False

   download(url)

   dump()

   folder()

   load()

   split(examples, ratio=0.8)

      Utility function that can be used within the parse() implementation of subclasses to split a list of examples into two lists for training and testing.

   urls = []
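The signature of split() comes from the documentation above; its body here is an assumption about what such a helper typically does:

   def split(examples, ratio=0.8):
       # Divide the examples at the ratio, e.g. 80% training, 20% testing.
       pivot = int(len(examples) * ratio)
       return examples[:pivot], examples[pivot:]

   training, testing = split(list(range(10)))
   assert len(training) == 8 and len(testing) == 2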
class Regression(amount=10000, inputs=10)

   Bases: layered.dataset.Dataset

   Synthetically generated dataset for regression. The task is to predict the sum and the product of all input values. All values are normalized between zero and one.

   cache = False

   download(url)

   dump()

   folder()

   load()

   split(examples, ratio=0.8)

      Utility function that can be used within the parse() implementation of subclasses to split a list of examples into two lists for training and testing.

   urls = []
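One way such regression examples could be generated; the exact normalization the library uses is an assumption here:

   import numpy as np

   def regression_example(inputs=10):
       # Inputs are drawn from [0, 1). Dividing the sum by the number of
       # inputs keeps it in [0, 1]; the product of values in [0, 1) is
       # already at most one.
       data = np.random.rand(inputs)
       target = np.array([data.sum() / inputs, data.prod()])
       return data, target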
class Modulo(amount=60000, inputs=32, classes=7)

   Bases: layered.dataset.Dataset

   Synthetically generated classification dataset. The task is to predict the modulo classes of random integers encoded as bit arrays of length 32.

   cache = False

   download(url)

   dump()

   folder()

   load()

   split(examples, ratio=0.8)

      Utility function that can be used within the parse() implementation of subclasses to split a list of examples into two lists for training and testing.

   urls = []
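A sketch of how one of these examples could be constructed; the bit order and the one-hot target encoding are assumptions:

   import random
   import numpy as np

   def modulo_example(inputs=32, classes=7):
       # Encode a random integer as a bit array and use its remainder
       # modulo the number of classes as a one-hot target.
       number = random.randrange(2 ** inputs)
       bits = np.array([(number >> i) & 1 for i in range(inputs)])
       target = np.zeros(classes)
       target[number % classes] = 1
       return bits, target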
class Mnist

   Bases: layered.dataset.Dataset

   The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image. It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal efforts on preprocessing and formatting. (from http://yann.lecun.com/exdb/mnist/)

   urls = ['http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz']

   cache = True

   download(url)

   dump()

   folder()

   load()

   split(examples, ratio=0.8)

      Utility function that can be used within the parse() implementation of subclasses to split a list of examples into two lists for training and testing.
layered.evaluation module

layered.example module

layered.gradient module

class Backprop(network, cost)

   Bases: layered.gradient.Gradient

   Use the backpropagation algorithm to efficiently determine the gradient of the cost function with respect to each individual weight.
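To illustrate the idea on the smallest possible case, here is the backward pass for a single dense layer with a sigmoid activation and squared error cost; a standalone sketch, not the library's implementation:

   import numpy as np

   def sigmoid(x):
       return 1 / (1 + np.exp(-x))

   x = np.random.rand(3)           # layer input
   w = np.random.rand(3, 2)        # weights
   t = np.array([0.0, 1.0])        # target

   y = sigmoid(x @ w)              # forward pass

   # Backward pass: chain rule through cost and activation.
   delta = (y - t) * y * (1 - y)   # cost derivative times sigmoid derivative
   gradient = np.outer(x, delta)   # gradient with respect to each weight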
class NumericalGradient(network, cost, distance=1e-05)

   Bases: layered.gradient.Gradient

   Approximate the gradient for each weight individually by sampling the error function slightly above and below the current value of the weight.
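The sampling described above amounts to a central difference. A sketch, where cost stands for evaluating the network's error as a function of a single weight:

   def numerical_gradient(cost, weight, distance=1e-05):
       # Sample the error slightly above and below the current value
       # and take the slope between the two samples.
       above = cost(weight + distance)
       below = cost(weight - distance)
       return (above - below) / (2 * distance)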
class CheckedBackprop(network, cost, distance=1e-05, tolerance=1e-08)

   Bases: layered.gradient.Gradient

   Computes the gradient both analytically through backpropagation and numerically, in order to validate the backpropagation implementation and the derivatives of activation and cost functions. This is slow by nature, so it is recommended to validate derivatives on small networks only.
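The validation itself reduces to comparing the two gradients within the tolerance; a sketch of such a check:

   import numpy as np

   def check(analytic, numeric, tolerance=1e-08):
       # The backprop gradient should agree with the numerical one up to
       # the tolerance; larger deviations point to a wrong derivative.
       return np.allclose(analytic, numeric, atol=tolerance)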
layered.network module

class Layer(size, activation)

   Bases: object

layered.optimization module
class GradientDecent

   Bases: object

   Adapt the weights in the opposite direction of the gradient to reduce the error.
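The update rule is a single step against the gradient; a sketch, with the learning rate as an assumed parameter:

   import numpy as np

   def descend(weights, gradient, learning_rate=0.1):
       # Step each weight against its gradient to reduce the error.
       return weights - learning_rate * gradient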
class Momentum

   Bases: object

   Slow down changes of direction in the gradient by aggregating previous values of the gradient into the current update.
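A common form of this aggregation keeps a decaying running value of past gradients; the decay rate here is an assumption, and implementations differ in the details:

   import numpy as np

   def momentum(previous, gradient, rate=0.9):
       # Blend the decayed aggregate of earlier gradients with the new
       # one, smoothing out sudden changes of direction.
       return rate * previous + gradient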
layered.plot module

class Interface(title='', xlabel='', ylabel='', style=None)

   Bases: object

   style

   title

   xlabel

   ylabel
class Window(refresh=0.5)

   Bases: object

   start(work)

      Hand the main thread to the window and continue work in the provided function. A state is passed as the first argument; it contains a running flag. The function is expected to exit once the flag becomes false. The flag can also be set to false from within the function to stop the window event loop and continue in the main thread after the start() call.
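A usage sketch of this threading model; the docstring only promises a state with a running flag, so the attribute access below is an assumption:

   import time
   from layered.plot import Window

   def work(state):
       # Runs off the main thread while the window owns it.
       for _ in range(100):
           if not state.running:
               return           # the window was closed
           time.sleep(0.1)      # placeholder for actual work
       state.running = False    # stop the event loop; start() returns

   window = Window(refresh=0.5)
   window.start(work)           # returns once the running flag is false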