OpenDenoising: An Open Benchmark for Image Restoration Methods¶
OpenDenoising is an open-source benchmark for comparing the performance of image denoising algorithms. We currently support generic denoising functions written in Python and Matlab, as well as deep learning based denoisers built on several frameworks. The table below shows the compatibility between our benchmark and various frameworks.
Programming Language | Framework | Training | Inference
---|---|---|---
Matlab | Matconvnet | | x
Matlab | Deep Learning Toolbox | x | x
Python | Keras | x | x
Python | Tensorflow | x | x
Python | Pytorch | x | x
Python | ONNX | | x
Installation guide¶
Here we present the steps to install the OpenDenoising benchmark.
Creating a virtual environment¶
In a directory of your choice, clone the OpenDenoising benchmark repository:
$ git clone https://github.com/opendenoising/benchmark
To use the benchmark, we recommend creating a dedicated virtual environment. On a computer with Python 3, a CUDA-compatible GPU, and CUDA installed:
- Install virtualenv:
$ sudo apt install virtualenv
- Create a virtual environment anywhere, where VENV_NAME is the name given to the environment:
$ virtualenv --system-site-packages -p python3 ~/virtualenvironments/VENV_NAME
- Activate the venv:
$ source ~/virtualenvironments/VENV_NAME/bin/activate
If the last command succeeded, the command line should now be preceded by (VENV_NAME). The virtual environment can be exited using:
$ deactivate
Package Requirements¶
GPU Users¶
Here is a list of required packages to run OpenDenoising benchmark:
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
keras2onnx==1.5.0
matplotlib==3.1.0
numpy==1.16.4
onnx==1.5.0
onnx-tf==1.3.0
onnxruntime-gpu==0.5.0
opencv-python==4.1.0.25
pandas==0.24.2
Pillow==6.0.0
PyGithub==1.43.8
scikit-image==0.15.0
scipy==1.3.0
seaborn==0.9.0
six==1.12.0
tensorboard==1.14.0
tensorflow-gpu==1.14.0
tf2onnx==1.5.1
torch==1.2.0
torchvision==0.4.0
tqdm==4.32.2
To install them, go to the project’s root directory and run the following command:
$ pip install -r requirements_gpu.txt
CPU Users¶
Here is a list of required packages to run OpenDenoising benchmark:
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
keras2onnx==1.5.0
matplotlib==3.1.0
numpy==1.16.4
onnx==1.5.0
onnx-tf==1.3.0
onnxruntime==0.5.0
opencv-python==4.1.0.25
pandas==0.24.2
Pillow==6.0.0
PyGithub==1.43.8
scikit-image==0.15.0
scipy==1.3.0
seaborn==0.9.0
six==1.12.0
tensorboard==1.14.0
tensorflow==1.14.0
tf2onnx==1.5.1
torch==1.2.0
torchvision==0.4.0
tqdm==4.32.2
To install them, go to the project’s root directory and run the following command:
$ pip install -r requirements_cpu.txt
We recommend using a virtual environment to run the benchmark.
Note: if you want to run Matlab code in the benchmark, you need Matlab R2018b or later with a valid license, and you need to install Matlab’s Python Engine (see below).
[Optional] Matlab dependencies¶
Our Matlab support covers Matlab Deep Learning Toolbox (training and inference) and Matconvnet (only inference). Here we detail the steps for installing Matlab’s dependencies.
Warning for Matlab users:
If you use the Matlab Deep Learning Toolbox with recent GPU cards (such as the RTX 2080 Ti), you should add the following lines to your startup script:
warning off parallel:gpu:device:DeviceLibsNeedsRecompiling
try
    gpuArray.eye(2)^2;                    % forces recompilation of the GPU device libraries
catch ME
end
try
    nnet.internal.cnngpu.reluForward(1);  % forces recompilation of the CNN GPU libraries
catch ME
end
Otherwise, you may run into errors when running a MatlabModel. For more information, take a look at this post. You should also add “./OpenDenoising/data/” to Matlab’s path by using [Set Path](https://fr.mathworks.com/help/matlab/matlab_env/add-remove-or-reorder-folders-on-the-search-path.html).
Adding the Benchmark to Matlab’s path¶
Let “PATH_TO_BENCHMARK” denote the path to the OpenDenoising folder on your computer. To add it to Matlab’s main path, you need to modify the file “pathdef.m”. On Windows, all you have to do is use the “Set Path” tool in Matlab’s main window. On Linux, if you do not have the rights to modify that file, you can run the following command in a terminal:
$ sudo nano /usr/local/MATLAB/R2018b/toolbox/local/pathdef.m
This will open nano on the needed file with the right permissions. Write the following line before the default entries:
'PATH_TO_BENCHMARK/data:', ...
Remark: if you are using any third-party software that depends on Matlab (such as BM3D), you also need to add it to the pathdef.
Installing Matlab’s Python engine¶
Open a terminal, then go to the Matlab engine setup folder:
$ cd /usr/local/MATLAB/R2018b/extern/engines/python
Following Matlab’s instructions, install the engine into your virtual environment:
$ sudo $VENVROOT/bin/python setup.py install --prefix="$VENVROOT/"
Notice that, since we are running the command with sudo, the shell will ignore your aliases, so you need to specify the full path to your virtual environment’s Python. Likewise, the --prefix option specifies where Matlab will output its files, so that you can run its engine. To test whether your installation was successful, you can execute the following Python script:
import matlab.engine

eng = matlab.engine.start_matlab()  # starts a Matlab session in the background
x = 4.0
eng.workspace['y'] = x              # copies x into Matlab's workspace as variable 'y'
a = eng.eval('sqrt(y)')             # evaluates a Matlab expression on that variable
print(a)                            # should print 2.0
Matconvnet installation¶
Remark: be sure to add Matconvnet to Matlab’s default path.
Setting up multiple CUDA versions¶
If you use the [Matconvnet toolbox](http://www.vlfeat.org/matconvnet/), you need to install gcc-6 by running
$ sudo apt install gcc-6 g++-6
before compiling the library in Matlab. Moreover, since the toolbox requires CUDA 9.1 (a different version from Tensorflow’s requirement), you need to install multiple CUDA versions on your system (they are independent from each other). Assuming you already have a CUDA version different from 9.1 on your system, follow these steps:
- Download the CUDA Toolkit 9.1 from NVIDIA’s website, then execute the installer using the ‘--override’ option, as follows:
$ ./cuda_9.1.85_387.26_linux.run --override
The override option is needed so that the installer won’t fail because of the driver version (if you have a newer version of CUDA, you likely have a more recent driver). Once you run the previous line, the installer will ask you the following questions:
You are attempting to install on an unsupported configuration. Do you wish to continue?
> y
Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 387.26?
> n
Install the CUDA 9.1 Toolkit?
> y
Install the CUDA 9.1 Samples?
> y
Enter CUDA Samples Location
> Default location
Enter Toolkit Location
> Default location
Do you want to install a symbolic link at /usr/local/cuda?
> n
By doing this, CUDA 9.1 will be installed in /usr/local/cuda-9.1. The crucial part of keeping two CUDA versions installed without breaking your previous installation is to not create the symbolic link between the cuda-9.1 folder and the /usr/local/cuda folder. This choice does not prevent you from using CUDA 9.1 with Matconvnet.
- Add the different CUDA paths to LD_LIBRARY_PATH:
$ export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64:/usr/local/cuda-9.1/lib64:\$LD_LIBRARY_PATH
At the end of this process, your LD_LIBRARY_PATH should contain the following as a substring:
/usr/local/cuda/lib64:/usr/local/cuda-10.1/lib64:/usr/local/cuda-9.1/lib64
Compiling Matconvnet library¶
Go to the directory where you extracted the Matconvnet files. Then, after launching Matlab, use the following commands:
cd matlab
CudaPath = "/usr/local/cuda-9.1";
vl_compilenn('EnableGpu', true, 'CudaRoot', CudaPath, 'EnableCudnn', true)
vl_compilenn is a Matlab function that compiles the Matconvnet library. Here’s what each option means:
EnableGpu: enables GPU usage by matconvnet.
CudaRoot: indicates the path to Cuda's root folder.
EnableCudnn: enables matconvnet to use cudnn acceleration.
Note (27.06.19): for Matlab 2018b users, the Matconvnet compilation script has a bug, which can be corrected by replacing line 620 with
args = horzcat({'-outdir', mex_dir}, ...
flags.base, flags.mexlink, ...
'-R2018a',...
{['LDFLAGS=$LDFLAGS ' strjoin(flags.mexlink_ldflags)]}, ...
{['LDOPTIMFLAGS=$LDOPTIMFLAGS ' strjoin(flags.mexlink_ldoptimflags)]}, ...
{['LINKLIBS=' strjoin(flags.mexlink_linklibs) ' $LINKLIBS']}, ...
objs);
and line 359 with:
flags.mexlink = {'-lmwblas'};
For more information, consult this Github page. After compiling the library, you should add Matconvnet to Matlab’s path by using Set Path.
Check Driver requirements¶
The following table summarizes the driver requirements:
Framework | Cuda Version | Gcc Compiler
---|---|---
Tensorflow 1.14 | 10.0 | 7
Matlab 2018b | 9.1 | 6
Pytorch 1.2 | 10.0 |
OnnxRuntime 0.5 | 10.0 |
Benchmark API¶
model module¶
This module contains classes used for wrapping denoising algorithms in order to provide common functionality and behavior. The first distinction we make is between deep learning based denoising algorithms and filtering algorithms. The functionality of these two kinds of models is standardized by model.AbstractDenoiser.
Deep Learning models, however, are built on frameworks such as Keras and Pytorch, which differ in syntax. To standardize the behavior of Deep Learning models across frameworks, we propose the model.AbstractDeepLearningModel interface, which unifies three kinds of functionality that all deep learning models should provide:
- charge_model, which builds the computational graph of the network.
- train, which trains the network on data.
- inference, which denoises data.
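As a minimal sketch of this lifecycle (using the KerasModel wrapper and the built-in DnCNN architecture documented below; the image folder and the noisy_batch array are hypothetical placeholders):
>>> from OpenDenoising import data, model
>>> mymodel = model.KerasModel(model_name="mymodel")
>>> mymodel.charge_model(model_function=model.architectures.keras.dncnn, depth=17)  # builds the graph
>>> train_generator = data.CleanDatasetGenerator("./images", 32, {data.utils.gaussian_blind_noise: [0, 55]})
>>> mymodel.train(train_generator, n_epochs=100)  # trains the network on data
>>> restored = mymodel(noisy_batch)  # inference: denoises a 4D numpy array (batch, height, width, channels)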
To give a better idea of the class structure and its relations to the other modules, we show the following UML class diagram,

The documentation of this module is divided as follows,
- Interface Classes: Documents the two main abstract classes, AbstractDenoiser and AbstractDeepLearningModel.
- Wrapper Classes: Documents the seven wrapper classes in the benchmark: FilteringModel, KerasModel, PytorchModel, TfModel, MatlabModel, OnnxModel, and MatconvnetModel.
- Built-in Architectures: Documents the neural network architectures already provided by the benchmark.
- Filtering functions: Documents the filtering functions already provided by the benchmark.
- Model utilities: Documents functions for model conversion and graph editing.
Denoising Models¶
Interface Classes¶
-
class
model.
AbstractDenoiser
(model_name='DenoisingModel')[source]¶ Bases:
abc.ABC
Common interface for Denoiser classes. This class defines the basic functionality that a Denoiser needs to have, such as the __call__ function, which takes a noisy image as input and returns its reconstruction.
-
model_name
¶ Model string identifier.
Type: string
-
__call__
(self, image)[source]¶ Denoises a given image.
Parameters: image ( numpy.ndarray
) – Batch of noisy images. Expected a 4D array with shape (batch_size, height, width, channels).
-
-
class
model.
AbstractDeepLearningModel
(model_name='DeepLearningModel', logdir='./training_logs/', framework=None, return_diff=False)[source]¶ Bases:
OpenDenoising.model.abstract_denoiser.AbstractDenoiser
Common interface for Deep Learning based image denoisers.
-
__call__
(self, image)[source]¶ Denoises a given image.
Parameters: image ( numpy.ndarray
) – Batch of noisy images. Expected a 4D array with shape (batch_size, height, width, channels).
-
__init__
(self, model_name='DeepLearningModel', logdir='./training_logs/', framework=None, return_diff=False)[source]¶ Common interface for Deep Learning based image denoisers.
-
model
¶ Object representing the Denoiser Model in the framework used.
-
logdir
¶ String containing the path to the model log directory. Such directory will contain training information, as well as model checkpoints.
Type: str
-
train_info
¶ Dictionary containing the time spent on training, how much parameters the network has, and if it has been trained.
Type: dict
-
-
Wrapper Classes¶
-
class
model.
FilteringModel
(model_name='FilteringModel')[source] Bases:
OpenDenoising.model.abstract_denoiser.AbstractDenoiser
FilteringModel represents a Denoising Model that does not depend on Neural Networks.
-
model_function
¶ Filtering denoising function. It should accept at least one argument, an image batch (numpy.ndarray), and return a single numpy.ndarray with the same shape, corresponding to the denoising result.
Type: function
-
__call__
(self, image)[source] Denoises a batch of noisy images.
Parameters: image ( numpy.ndarray
) – batch of images with shape (batch_size, height, width, channels)Returns: batch of images denoised by model_func, with shape (b, h ,w , c) Return type: numpy.ndarray
-
__init__
(self, model_name='FilteringModel')[source] Initialize self. See help(type(self)) for accurate signature.
-
__repr__
(self)[source] Return repr(self).
-
charge_model
(self, model_function, **kwargs)[source] Charges the denoising function into the class wrapper.
Parameters: model_function (function) – Filtering denoising function. It should accept at least one argument, an image batch (numpy.ndarray), and return a single numpy.ndarray with the same shape, corresponding to the denoising result. Notice that, if your function needs more arguments than the noisy image batch, these can be passed through keyword arguments to charge_model (see the example below).
-
-
class
model.
KerasModel
(model_name='DeepLearningModel', logdir='./logs/Keras', return_diff=False)[source] Bases:
OpenDenoising.model.abstract_deep_learning_model.AbstractDeepLearningModel
KerasModel wrapper class.
-
model
¶ Denoiser Keras model used for training and inference.
Type: keras.models.Model
-
return_diff
If True, returns the difference between the predicted image and the input image at inference time.
Type: bool
See also
model.AbstractDenoiser
- for the basic functionalities of Image Denoisers.
model.AbstractDeepLearningModel
- for the basic functionalities of Deep Learning based Denoisers.
-
__call__
(self, image)[source] Denoises a batch of images.
Parameters: image ( numpy.ndarray
) – 4D batch of noised images. It has shape: (batch_size, height, width, channels)Returns: Restored batch of images, with same shape as the input. Return type: numpy.ndarray
-
__init__
(self, model_name='DeepLearningModel', logdir='./logs/Keras', return_diff=False)[source] Common interface for Deep Learning based image denoisers.
-
model
Object representing the Denoiser Model in the framework used.
-
logdir
¶ String containing the path to the model log directory. Such directory will contain training information, as well as model checkpoints.
Type: str
-
train_info
¶ Dictionary containing the time spent on training, how much parameters the network has, and if it has been trained.
Type: dict
-
framework
¶ String containing the name of the chosen framework (e.g. Keras, Tensorflow, Pytorch).
Type: str
-
return_diff
If True, returns the difference between the predicted image and the input image.
Type: bool
-
-
__len__
(self)[source] Counts the number of parameters in the network.
Returns: nparams – Number of parameters in the network. Return type: int
-
charge_model
(self, model_function=None, model_path=None, model_weights=None, **kwargs)[source] Keras model charging function.
There are four main cases for “charge_model” function:
- Charge model architecture by using a function “model_function”.
- Charge model using .json file, previously saved from an existing architecture through the method keras.models.Model.to_json().
- Charge model using .yaml file, previously saved from an existing architecture through the method keras.models.Model.to_yaml()
- Charge model using .hdf5 file, previously saved from an existing architecture through the method keras.models.Model.save().
From these four cases, notice that only the last loads the model and the weights at the same time. Therefore, when you load your model, you should consider specifying “model_weights” so that this class can find and charge your model weights, which can be saved in either .h5 or .hdf5 format.
If this is not the case and your architecture has not been previously trained, you can run the training and then save the weights by using the keras.models.Model.save_weights() method, or by using the KerasModel.train() method present in this class.
Parameters: Notes
If your building function accepts optional arguments, you can specify them by using kwargs.
Examples
Loading Keras DnCNN from class. Notice that in our implementation, depth is an optional argument.
>>> from OpenDenoising import model
>>> mymodel = model.KerasModel(model_name="mymodel")
>>> mymodel.charge_model(model_function=model.architectures.keras.dncnn, depth=17)
Loading Keras DnCNN from a .hdf5 file.
>>> from OpenDenoising import model
>>> mymodel = model.KerasModel(model_name="mymodel")
>>> mymodel.charge_model(model_path=PATH)
Loading Keras DnCNN from a .json + .hdf5 file.
>>> from OpenDenoising import model
>>> mymodel = model.KerasModel(model_name="mymodel")
>>> mymodel.charge_model(model_path=PATH_TO_JSON, model_weights=PATH_TO_HDF5)
-
train
(self, train_generator, valid_generator=None, n_epochs=100.0, n_stages=500.0, learning_rate=0.001, optimizer_name=None, metrics=None, kcallbacks=None, loss=None, valid_steps=10, **kwargs)[source] Function to run the training of a Keras Model.
Notes
There are two cases where training should be launched:
- You only loaded your model architecture. In that case, this function will train your model from scratch using the dataset specified by train_generator and valid_generator.
- You loaded an architecture and weights for your model, but you want to reuse them. It may be the case where you want to run your training for a few more epochs, or even perform transfer learning.
Parameters: - train_generator (data.AbstractDataset) – Train data generator. Notice that these generators should output paired image samples: the first, a noised version of the image, and the second, the ground-truth.
- valid_generator (data.AbstractDataset) – Validation data generator.
- n_epochs (int) – Number of epochs for which the training will be executed.
- n_stages (int) – Number of batches of data are drawn per epoch.
- learning_rate (float) – Initial value for learning rate value for optimization
- optimizer_name (str) – Name of optimizer employed. Check Keras documentation for more information.
- metrics (list) – List of tensorflow functions implementing scalar metrics (see metrics in evaluation).
- kcallbacks (list) – List of keras callback instances. Consult Keras documentation and evaluation module for more information.
- loss (function) – Tensorflow-based loss function. It should take as input two Tensors and output a scalar Tensor holding the loss computation.
- valid_steps (int) – Number of batches drawn during evaluation.
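A minimal training sketch (assuming dataset generators built from the data module; the dataset paths are hypothetical):
>>> from OpenDenoising import data, evaluation, model
>>> train_generator = data.FullDatasetGenerator("./BSDS500/Train")
>>> valid_generator = data.FullDatasetGenerator("./BSDS500/Valid")
>>> mymodel = model.KerasModel(model_name="mymodel")
>>> mymodel.charge_model(model_function=model.architectures.keras.dncnn)
>>> mymodel.train(train_generator, valid_generator=valid_generator, n_epochs=100,
...               optimizer_name="adam", loss=evaluation.tf_mse, metrics=[evaluation.tf_psnr])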
-
-
class
model.
PytorchModel
(model_name='DeepLearningModel', logdir='./logs/Pytorch', return_diff=False)[source] Bases:
OpenDenoising.model.abstract_deep_learning_model.AbstractDeepLearningModel
Pytorch wrapper class.
See also
model.AbstractDenoiser
- for the basic functionalities of Image Denoisers.
model.AbstractDeepLearningModel
- for the basic functionalities of Deep Learning based Denoisers.
-
__call__
(self, image)[source] Denoises a batch of images.
Parameters: image ( numpy.ndarray
) – 4D batch of noised images. It has shape: (batch_size, height, width, channels)Returns: Restored batch of images, with same shape as the input. Return type: numpy.ndarray
-
__init__
(self, model_name='DeepLearningModel', logdir='./logs/Pytorch', return_diff=False)[source] Common interface for Deep Learning based image denoisers.
-
model
¶ Object representing the Denoiser Model in the framework used.
-
logdir
¶ String containing the path to the model log directory. Such directory will contain training information, as well as model checkpoints.
Type: str
-
train_info
¶ Dictionary containing the time spent on training, how much parameters the network has, and if it has been trained.
Type: dict
-
-
__len__
(self)[source] Counts the number of parameters in the network.
Returns: nparams – Number of parameters in the network. Return type: int
-
charge_model
(self, model_function=None, model_path=None, **kwargs)[source] Pytorch model charging function. You can charge a model either by specifying a class that implements the network architecture (passing it through model_function) or by specifying the path to a .pt or .pth file. If your class constructor accepts optional arguments, you can specify them through keyword arguments.
Parameters: - model_function (torch.nn.Module) – Pytorch class implementing the network architecture.
- model_path (str) – String containing the path to a .pt or .pth file.
Examples
Loading Pytorch DnCNN from class. Notice that in our implementation, depth is an optional argument.
>>> from OpenDenoising import model
>>> mymodel = model.PytorchModel(model_name="mymodel")
>>> mymodel.charge_model(model_function=model.architectures.pytorch.DnCNN, depth=17)
Loading Pytorch DnCNN from a file.
>>> from OpenDenoising import model
>>> mymodel = model.PytorchModel(model_name="mymodel")
>>> mymodel.charge_model(model_path=PATH)
-
train
(self, train_generator, valid_generator=None, n_epochs=250, n_stages=500, learning_rate=0.001, metrics=None, optimizer_name=None, kcallbacks=None, loss=None, valid_steps=10)[source] Trains a Pytorch model.
Parameters: - train_generator (data.AbstractDataset) – dataset object inheriting from AbstractDataset class. It is a generator object that yields data from train dataset folder.
- valid_generator (data.AbstractDataset) – dataset object inheriting from AbstractDataset class. It is a generator object that yields data from valid dataset folder.
- n_epochs (int) – number of training epochs.
- n_stages (int) – number of batches seen per epoch.
- learning_rate (float) – Multiplication constant for the optimizer, or initial value when training with a dynamic learning rate (see callbacks).
- metrics (list) – List of metric functions. Each function should take two numpy.ndarray inputs and output a float corresponding to the metric computed on those two arrays. For more information, take a look at the Benchmarking module.
- optimizer_name (str) – Name of optimizer to use. Check Pytorch documentation for a complete list.
- kcallbacks (list) – List of custom_callbacks.
- loss (torch.nn.modules.loss) – Pytorch loss function.
- valid_steps (int) – If valid_generator was specified, valid_steps determines the number of validation batches seen per validation run.
-
class
model.
TfModel
(model_name='DeepLearningModel', logdir='./logs/Tensorflow', return_diff=False)[source] Bases:
OpenDenoising.model.abstract_deep_learning_model.AbstractDeepLearningModel
Tensorflow model class wrapper.
Parameters: - loss (tf.Tensor) – Tensor holding the computation for the loss function.
- saver (tf.train.Saver) – Object for saving the model at each epoch iteration.
- metrics (list) – List of tf.Tensor objects holding the computation for each metric.
- opt_step (tf.Operation) – Tensorflow operation corresponding to the update performed on the model’s variables.
- tf_session (tf.Session) – Instance of tensorflow session.
- model_input (tf.Tensor) – Tensor corresponding to the model’s input.
- is_training (tf.Tensor) – If batch normalization is used, corresponds to a tf placeholder controlling the training and inference phases of batch normalization.
- model_output (tf.Tensor) – Tensor corresponding to the model’s output.
- ground_truth (tf.placeholder) – Placeholder corresponding to original training images (clean).
See also
model.AbstractDenoiser
- for the basic functionalities of Image Denoisers.
model.AbstractDeepLearningModel
- for the basic functionalities of Deep Learning based Denoisers.
-
__call__
(self, image)[source] Denoises a batch of images.
Parameters: image ( np.ndarray
) – 4D batch of noised images. It has shape: (batch_size, height, width, channels)Returns: Restored batch of images, with same shape as the input. Return type: np.ndarray
-
__init__
(self, model_name='DeepLearningModel', logdir='./logs/Tensorflow', return_diff=False)[source] Common interface for Deep Learning based image denoisers.
-
model
¶ Object representing the Denoiser Model in the framework used.
-
logdir
¶ String containing the path to the model log directory. Such directory will contain training information, as well as model checkpoints.
Type: str
-
train_info
¶ Dictionary containing the time spent on training, how much parameters the network has, and if it has been trained.
Type: dict
-
-
__len__
(self)[source] Counts the number of parameters in the network.
Returns: nparams – Number of parameters in the network. Return type: int
-
charge_model
(self, model_function=None, model_path=None, **kwargs)[source] Charges Tensorflow model into the wrapper class by using a model file, or a building architecture function.
Parameters: - model_function (
function
) – Building architecture function, which returns at least two tensors: one for the graph input, and another for the graph output. - model_path (str) –
String containing the path to a .pb or .meta file, holding the computational graph for the model. Note that each of these files corresponds to a different tensorflow saving API.
- .pb files correspond to saved_model API. This API saves the model in a folder containing the .pb file, along with a folder called “variables”, holding the weight values.
- .meta files correspond to tf.train API. This API saves the model through four files (.meta, .index, .data and checkpoint).
You can save your models in one of these two formats.
Notes
This function accepts Keyword arguments which can be used to pass additional parameters to model_function.
Examples
Loading Tensorflow DnCNN from class. Notice that in our implementation, depth is an optional argument.
>>> from OpenDenoising import model >>> mymodel = model.TfModel(model_name="mymodel") >>> mymodel.charge_model(model_function=model.architectures.tensorflow.DnCNN, depth=17)
Loading Tensorflow DnCNN from a file. Note that the file you are going to charge on TfModel needs to be a .pb or a .meta file. In the first case, we assume you have used the SavedModel API, while in the second case, we assume the Checkpoint API.
>>> from OpenDenoising import model
>>> mymodel = model.TfModel(model_name="mymodel")
>>> mymodel.charge_model(model_path=PATH_TO_PB_OR_META)
-
train
(self, train_generator, valid_generator=None, n_epochs=250, n_stages=500, learning_rate=0.001, metrics=None, optimizer_name=None, kcallbacks=None, loss=None, valid_steps=10, saving_api='SavedModel')[source] Trains a tensorflow model.
Parameters: - train_generator (data.AbstractDataset) – Dataset object inheriting from AbstractDataset class. It is a generator object that yields data from train dataset folder.
- valid_generator (data.AbstractDataset) – Dataset object inheriting from AbstractDataset class. It is a generator object that yields data from valid dataset folder.
- n_epochs (int) – Number of training epochs.
- n_stages (int) – Number of batches seen per epoch.
- learning_rate (float) – Multiplication constant for the optimizer, or initial value when training with a dynamic learning rate (see callbacks).
- metrics (list) – List of tensorflow functions implementing scalar metrics (see metrics in evaluation).
- optimizer_name (str) – Name of optimizer to use. Check tf.train documentation for a complete list.
- kcallbacks (list) – List of callbacks.
- loss (function) – Tensorflow-based loss function. It should take as input two Tensors and output a scalar Tensor holding the loss computation.
- valid_steps (int) – If valid_generator was specified, valid_steps determines the number of validation batches seen per validation run.
- saving_api (string) – If saving_api = “tftrain”, uses tf.train.Saver as the model saver. Otherwise, uses the saved_model API.
-
class
model.
MatlabModel
(model_name='MatlabModel', logdir='./logs/Matlab', return_diff=False)[source] Bases:
OpenDenoising.model.abstract_deep_learning_model.AbstractDeepLearningModel
Matlab Deep Learning toolbox wrapper class.
Notes
To use this class you need a Matlab license with access to the Deep Learning toolbox.
See also
model.AbstractDenoiser
- for the basic functionalities of Image Denoisers.
model.AbstractDeepLearningModel
- for the basic functionalities of Deep Learning based Denoisers.
-
__call__
(self, image)[source] Denoises a batch of images.
Notes
To perform inference with MatlabModels, you need to have a variable in Matlab’s workspace called ‘net’. This variable is the output of a training session, or the result of a load(‘obj’) call.
Parameters: image ( numpy.ndarray
) – 4D batch of noised images. It has shape: (batch_size, height, width, channels)Returns: Restored batch of images, with same shape as the input. Return type: numpy.ndarray
-
__init__
(self, model_name='MatlabModel', logdir='./logs/Matlab', return_diff=False)[source] Common interface for Deep Learning based image denoisers.
-
model
¶ Object representing the Denoiser Model in the framework used.
-
logdir
String containing the path to the model log directory. Such directory will contain training information, as well as model checkpoints.
Type: str
-
train_info
¶ Dictionary containing the time spent on training, how much parameters the network has, and if it has been trained.
Type: dict
-
-
__len__
(self)[source] Counts the number of parameters in the network.
Returns: nparams – Number of parameters in the network. Return type: int
-
charge_model
(self, model_function=None, model_path=None, **kwargs)[source] MatlabModel model charging function.
This function works by using Matlab’s Python engine to make Matlab internal calls. You can either specify the path to a Matlab function that will build the model, or to a .mat file holding the pretrained model.
Notes
You may specify optional arguments to the model building function through keyword arguments.
Parameters:
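Example (a minimal charging sketch; the .mat path is hypothetical):
>>> from OpenDenoising import model
>>> mymodel = model.MatlabModel(model_name="mymodel")
>>> mymodel.charge_model(model_path="./pretrained/dncnn_matlab.mat")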
-
train
(self, train_generator, valid_generator=None, n_epochs=250, n_stages=500, learning_rate=0.001, optimizer_name='adam', valid_steps=10)[source] Trains a Matlab denoiser model.
Notes
Instead of using the Clean/Full/Blind dataset Python classes, you should use the MatlabDatasetWrapper class, which exports a dataset to the Matlab workspace.
Parameters: - train_generator (str) – Name of the matlab imageDatastore variable (for instance, if you have train_imds in your workspace you should pass ‘train_imds’ for train_generator).
- valid_generator (str) – Name of the matlab imageDatastore variable (for instance, if you have train_imds in your workspace you should pass ‘valid_imds’ for valid_generator).
- n_epochs (int) – Number of training epochs.
- n_stages (int) – Number of image batches drawn during a training epoch.
- learning_rate (float) – Initial value for learning rate value for optimization.
- optimizer_name (str) – One among {‘sgdm’, ‘adam’, ‘rmsprop’}
See also
data.MatlabDatasetWrapper
- for the class providing data to train such kinds of models.
-
class
model.
OnnxModel
(model_name='DeepLearningModel', return_diff=False)[source] Bases:
OpenDenoising.model.abstract_deep_learning_model.AbstractDeepLearningModel
ONNX model class wrapper. Note that ONNX models only support inference, so training is unavailable.
-
runtime_session
¶ onnxruntime session to run inference.
Type: onnxruntime.capi.session.InferenceSession
-
model_input
¶ onnxruntime input tensor
Type: onnxruntime.capi.onnxruntime_pybind11_state.NodeArg
-
model_output
¶ onnxruntime output tensor
Type: onnxruntime.capi.onnxruntime_pybind11_state.NodeArg
See also
model.AbstractDenoiser
- for the basic functionalities of Image Denoisers.
model.AbstractDeepLearningModel
- for the basic functionalities of Deep Learning based Denoisers.
-
__call__
(self, image)[source] Denoises a batch of images.
Parameters: image ( numpy.ndarray
) – 4D batch of noised images. It has shape: (batch_size, height, width, channels)Returns: Restored batch of images, with same shape as the input. Return type: numpy.ndarray
-
__init__
(self, model_name='DeepLearningModel', return_diff=False)[source] Common interface for Deep Learning based image denoisers.
-
model
¶ Object representing the Denoiser Model in the framework used.
-
logdir
¶ String containing the path to the model log directory. Such directory will contain training information, as well as model checkpoints.
Type: str
-
train_info
¶ Dictionary containing the time spent on training, how much parameters the network has, and if it has been trained.
Type: dict
-
-
__len__
(self)[source] Counts the number of parameters in the network.
Returns: nparams – Number of parameters in the network. Return type: int
-
charge_model
(self, model_path=None)[source] This method charges an ONNX model into the class wrapper. It uses the onnx module to load the model graph from a .onnx file, then creates a runtime session with the onnxruntime module.
Parameters: model_path (str) – String containing the path to the .onnx model file.
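Example (following the same pattern as the other wrappers; the .onnx path is hypothetical):
>>> from OpenDenoising import model
>>> mymodel = model.OnnxModel(model_name="mymodel")
>>> mymodel.charge_model(model_path="./pretrained/model.onnx")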
-
-
class
model.
MatconvnetModel
(model_name='MatconvnetModel', return_diff=False)[source] Bases:
OpenDenoising.model.abstract_deep_learning_model.AbstractDeepLearningModel
Matlab Matconvnet wrapper class.
Notes
Matconvnet models are available only for inference.
-
denoising_func
Function object representing the Matlab function responsible for denoising images.
Type: function
See also
model.AbstractDenoiser
- for the basic functionalities of Image Denoisers.
model.AbstractDeepLearningModel
- for the basic functionalities of Deep Learning based Denoisers.
-
__call__
(self, image)[source] Denoises a batch of images.
Parameters: image ( numpy.ndarray
) – 4D batch of noised images. It has shape: (batch_size, height, width, channels)Returns: Restored batch of images, with same shape as the input. Return type: numpy.ndarray
-
__init__
(self, model_name='MatconvnetModel', return_diff=False)[source] Common interface for Deep Learning based image denoisers.
-
model
¶ Object representing the Denoiser Model in the framework used.
-
logdir
¶ String containing the path to the model log directory. Such directory will contain training information, as well as model checkpoints.
Type: str
-
train_info
¶ Dictionary containing the time spent on training, how much parameters the network has, and if it has been trained.
Type: dict
-
-
Built-in Architectures¶
Keras Architectures¶
-
model.architectures.keras.
dncnn
(depth=17, n_filters=64, kernel_size=(3, 3), n_channels=1, channels_first=False)[source]¶ Keras implementation of DnCNN. The implementation follows the original paper [1]. The authors’ original code can be found on their Github page.
Parameters: - depth (int) – Number of fully convolutional layers in dncnn. In the original paper, the authors have used depth=17 for non-blind denoising and depth=20 for blind denoising.
- n_filters (int) – Number of filters on each convolutional layer.
- kernel_size (int tuple) – 2D Tuple specifying the size of the kernel window used to compute activations.
- n_channels (int) – Number of image channels that the network processes (1 for grayscale, 3 for RGB)
- channels_first (bool) – Whether channels comes first (NCHW, True) or last (NHWC, False)
Returns: Keras model object representing the Neural Network.
Return type: keras.models.Model
References
[1] Zhang K, Zuo W, Chen Y, Meng D, Zhang L. Beyond a gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing. 2017.
Examples
>>> from OpenDenoising.model.architectures.keras import dncnn
>>> dncnn_s = dncnn(depth=17)
>>> dncnn_b = dncnn(depth=20)
-
model.architectures.keras.
rednet
(depth=20, n_filters=128, kernel_size=(3, 3), skip_step=2, n_channels=1)[source]¶ Keras implementation of RedNet. Implementation following the paper [1].
Notes
In [1], authors have suggested three architectures:
- RED10, which has 10 layers and does not use any skip connection (hence skip_step = 0)
- RED20, which has 20 layers and uses skip_step = 2
- RED30, which has 30 layers and uses skip_step = 2
Moreover, the default number of filters is 128, while the kernel size is (3, 3).
Parameters: - depth (int) – Number of fully convolutional layers in dncnn. In the original paper, the authors have used depth=17 for non-blind denoising and depth=20 for blind denoising.
- n_filters (int) – Number of filters at each convolutional layer.
- kernel_size (list) – 2D Tuple specifying the size of the kernel window used to compute activations.
- skip_step (int) – Step for connecting encoder layers with decoder layers through add. For skip_step=2, at each 2 layers, the j-th encoder layer E_j is connected with the i = (depth - j) th decoder layer D_i.
- n_channels (int) – Number of image channels that the network processes (1 for grayscale, 3 for RGB)
Returns: Keras Model representing the Denoiser neural network
Return type: keras.models.Model
References
[1] (1, 2) Mao, X., Shen, C., & Yang, Y. B. (2016). Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In Advances in neural information processing systems
Tensorflow Architectures¶
-
model.architectures.tensorflow.
dncnn
(depth=17, n_filters=64, kernel_size=3, n_channels=1, channels_first=False)[source]¶ Tensorflow implementation of DnCNN. The implementation follows the original paper [1]. The authors’ original code can be found on their Github page.
Notes
Implementation was based on the following Github page.
Parameters: - depth (int) – Number of fully convolutional layers in dncnn. In the original paper, the authors have used depth=17 for non-blind denoising and depth=20 for blind denoising.
- n_filters (int) – Number of filters on each convolutional layer.
- kernel_size (int tuple) – 2D Tuple specifying the size of the kernel window used to compute activations.
- n_channels (int) – Number of image channels that the network processes (1 for grayscale, 3 for RGB)
- channels_first (bool) – Whether channels comes first (NCHW, True) or last (NHWC, False)
Returns: - input_tensor (tf.Tensor) – Network graph input tensor.
- is_training (tf.Tensor) – Placeholder indicating whether the network is being trained or evaluated.
- output_tensor (tf.Tensor) – Network graph output tensor.
References
[1] Zhang K, Zuo W, Chen Y, Meng D, Zhang L. Beyond a gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing. 2017.
Examples
>>> from OpenDenoising.model.architectures.tensorflow import dncnn
>>> (dncnn_s_input, dncnn_s_is_training, dncnn_s_output) = dncnn(depth=17)
>>> (dncnn_b_input, dncnn_b_is_training, dncnn_b_output) = dncnn(depth=20)
Pytorch Architectures¶
-
class
model.architectures.pytorch.
DnCNN
(depth=17, n_filters=64, kernel_size=3, n_channels=1)[source]¶ -
__init__
(self, depth=17, n_filters=64, kernel_size=3, n_channels=1)[source]¶ Pytorch implementation of DnCNN. The implementation follows the original paper [1]. The authors’ original code can be found on their Github page.
Notes
This implementation is based on the following Github page.
Parameters: - depth (int) – Number of fully convolutional layers in dncnn. In the original paper, the authors have used depth=17 for non-blind denoising and depth=20 for blind denoising.
- n_filters (int) – Number of filters on each convolutional layer.
- kernel_size (int tuple) – 2D Tuple specifying the size of the kernel window used to compute activations.
- n_channels (int) – Number of image channels that the network processes (1 for grayscale, 3 for RGB)
References
[1] Zhang K, Zuo W, Chen Y, Meng D, Zhang L. Beyond a gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing. 2017.
Example
>>> from OpenDenoising.model.architectures.pytorch import DnCNN
>>> dncnn_s = DnCNN(depth=17)
>>> dncnn_b = DnCNN(depth=20)
-
forward
(self, x)[source]¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
Model utilities¶
-
model.utils.
pb2onnx
(path_to_pb, path_to_onnx, input_node_names=['input'], output_node_names=['output'])[source]¶ Converts a Tensorflow ProtoBuf file to an ONNX model.
Parameters: - path_to_pb (str) – String containing the path to the .pb file containing the tensorflow graph to convert to onnx.
- path_to_onnx (str) – String containing the path to the location where the .onnx file will be saved.
- input_node_names (list) – List of strings containing the input node names
- output_node_names (list) – List of strings containing the output node names
-
model.utils.
freeze_tf_graph
(model_file=None, output_filepath='./output_graph.pb', output_node_names=None)[source]¶ Freezes a tensorflow graph, then writes it to a new .pb file.
Parameters:
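Example: a minimal sketch chaining the two utilities to convert a Tensorflow checkpoint to ONNX (paths and node names are hypothetical):
>>> from OpenDenoising import model
>>> model.utils.freeze_tf_graph(model_file="./logs/Tensorflow/mymodel.meta",
...                             output_filepath="./frozen_graph.pb", output_node_names=["output"])
>>> model.utils.pb2onnx("./frozen_graph.pb", "./mymodel.onnx",
...                     input_node_names=["input"], output_node_names=["output"])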
data module¶
This module extends keras.utils.Sequence, providing dataset generators for the three main cases in image denoising:
- Clean Dataset: Only clean (ground-truth) images are available. Hence, you will need to specify a function to corrupt the clean images, so that you can train your network with pairs (noisy, clean). In the section Artificial Noises, we cover built-in functions for adding noise to clean images.
- Full Dataset: Both clean and noisy images are available. In that case, the dataset yields the image pairs (clean, noisy).
- Blind Dataset: Only noisy images are available. These datasets can be used for qualitative evaluation.
We remark that these classes can be used for training and inference on our Benchmark. It is also noteworthy that MatlabModel objects, which are based on the Matlab Deep Learning Toolbox, need to use the MatlabDatasetWrapper to be trained efficiently. For more information, look at the documentation of the data.MatlabDatasetWrapper and model.MatlabModel classes.
Along with these classes, we also provide built-in functionalities for preprocessing, such as Data Augmentation and patch extraction. These are covered in the Preprocessing Functions section.
Data generation¶
-
class
data.
AbstractDatasetGenerator
(path, batch_size=32, shuffle=True, name='AbstractDataset', n_channels=1)[source]¶ Bases:
keras.utils.data_utils.Sequence
Dataset generator based on the Keras library. Implementation based on https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly
-
channels_first
¶ Whether data is formatted as (BatchSize, Height, Width, Channels) or (BatchSize, Channels, Height, Width).
Type: bool
-
-
class
data.
BlindDatasetGenerator
(path, batch_size=32, shuffle=True, name='CleanDataset', n_channels=1, preprocessing=None, target_fcn=None)[source]¶ Bases:
OpenDenoising.data.abstract_dataset.AbstractDatasetGenerator
Dataset generator based on Keras library. This class is used for Blind denoising problems, where only noisy images are available.
-
target_fcn
¶ Function implementing how to generate target images from noisy ones.
Type: function
-
__getitem__
(self, i)[source]¶ Generates image batches from filenames.
Parameters: i (int) – Batch index to get.
Returns: - inp (numpy.ndarray) – Batch of noisy images.
- ref (numpy.ndarray) – Batch of target images.
-
-
class
data.
CleanDatasetGenerator
(path, batch_size=32, noise_config=None, shuffle=True, name='CleanDataset', n_channels=1, preprocessing=None)[source]¶ Bases:
OpenDenoising.data.abstract_dataset.AbstractDatasetGenerator
Dataset generator based on Keras library. This class is used for non-blind denoising problems where only clean images are available. To use such dataset to train denoising networks, you need to specify a type of artificial noise that will be added to each clean image.
-
channels_first
¶ Whether data is formatted as (BatchSize, Height, Width, Channels) or (BatchSize, Channels, Height, Width).
Type: bool
-
noise_config
¶ Dictionary whose keys are functions implementing the noise process, and the value is a list containing the noise function parameters. If you do not want to specify any parameters, your list should be empty.
Type: dict
Examples
The following example corresponds to a Dataset Generator which reads images from “./images”, yields batches of length 32, applies Gaussian noise with intensity drawn uniformly from the range [0, 55] followed by “Super Resolution noise” of intensity 4. Moreover, the dataset shuffles the data, yields it in NHWC format, and does not apply any preprocessing function. NOTE: your list should be in the same order as your arguments.
>>> from OpenDenoising import data
>>> noise_config = {data.utils.gaussian_blind_noise: [0, 55],
...                 data.utils.super_resolution_noise: [4]}
>>> datagen = data.CleanDatasetGenerator("./images", 32, noise_config, True, False, "MyData", 1, None)
-
-
class
data.
FullDatasetGenerator
(path, batch_size=32, shuffle=True, name='FullDataset', n_channels=1, preprocessing=None)[source]¶ Bases:
OpenDenoising.data.abstract_dataset.AbstractDatasetGenerator
Dataset generator based on the Keras library. This class is used for non-blind denoising problems. Unlike the CleanDatasetGenerator class, this class corresponds to the case where both clean and noisy samples are available and paired (for each noisy image, there is one and only one clean image with the same filename).
-
class
data.
MatlabDatasetWrapper
(engine, images_path='./tmp/BSDS500/Train/ref', partition='Train', ext=None, patch_size=40, n_patches=16, noiseFcn="@(I) imnoise(I, 'gaussian', 0, 25/255)", channel_format='grayscale', type='Clean')[source]¶ Bases:
object
This class wraps the FullMatlabDataset and CleanMatlabDataset classes. It makes internal calls to Matlab through Matlab’s Python engine to load one of these classes into the workspace.
Notes
This class does not implement the interface provided by keras.utils.Sequence. It should only be used alongside MatlabModel objects.
-
engine
¶ Matlab engine instance.
Type: matlab.engine.MatlabEngine
-
ext
Dictionary holding image extensions. Note that this dictionary should have keys only.
Type: dict
-
noiseFcn
For CleanDataset only. Specifies the noising function that will be applied to images. It should be a function that accepts an image as input, and returns another image. If you need to specify parameters, you can do so by using Matlab’s anonymous function syntax (for instance, noiseFcn="@(I) imnoise(I, 'gaussian', 0, 25/255)").
Type: str
-
channel_format
¶ String containing ‘grayscale’ for grayscale images, or ‘RGB’ for RGB images.
Type: str
See also
model.MatlabModel
- for the type of model for which this class was designed to interact.
-
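A minimal sketch of exporting a clean dataset to the Matlab workspace (the engine startup and image path are illustrative):
>>> import matlab.engine
>>> from OpenDenoising import data
>>> eng = matlab.engine.start_matlab()
>>> data.MatlabDatasetWrapper(eng, images_path="./tmp/BSDS500/Train/ref",
...                           noiseFcn="@(I) imnoise(I, 'gaussian', 0, 25/255)", type='Clean')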
Artificial Noises¶
-
data.
gaussian_noise
(ref, noise_level=15)[source]¶ Gaussian noise
Parameters: - ref (numpy.ndarray) – Image to be noised.
- noise_level (int) – Level of corruption. Always give the noise_level in terms of the 0-255 pixel intensity range.
Returns: inp – Noised image.
Return type: numpy.ndarray
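A quick usage sketch (the random array stands in for a real clean image):
>>> import numpy as np
>>> from OpenDenoising import data
>>> ref = np.random.rand(256, 256, 1)  # clean image with values in [0, 1]
>>> inp = data.gaussian_noise(ref, noise_level=25)  # corrupted version, e.g. for building training pairs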
-
data.
poisson_noise
(ref)[source]¶ Poisson noise
Parameters: ref ( numpy.ndarray
) – Image to be noised.Returns: inp – Noised image. Return type: numpy.ndarray
-
data.
salt_and_pepper_noise
(ref, noise_level=15)[source]¶ Salt and pepper noise
Parameters: - ref (numpy.ndarray) – Image to be noised.
- noise_level (int) – Percentage of perturbed pixels.
Returns: inp – Noised image.
Return type: numpy.ndarray
-
data.
speckle_noise
(ref, noise_level=15)[source]¶ Speckle noise
Parameters: - ref (numpy.ndarray) – Image to be noised.
- noise_level (int) – Percentage of perturbed pixels.
Returns: inp – Noised image.
Return type: numpy.ndarray
-
data.
super_resolution_noise
(ref, noise_level=2)[source]¶ Noise due to down-sampling followed by up-sampling an image
Parameters: - ref (numpy.ndarray) – Image to be noised.
- noise_level (int) – Scaling factor. For instance, for a (512, 512) image, a factor 2 down-samples the image to (256, 256), then up-samples it again to (512, 512).
Returns: inp – Noised image.
Return type: numpy.ndarray
Preprocessing Functions¶
-
data.
dncnn_augmentation
(inp, ref=None, aug_times=1, channels_first=False)[source]¶ Data augmentation policy employed on DnCNN
Parameters: - inp (numpy.ndarray) – Noised images.
- ref (numpy.ndarray) – Ground-truth images.
- aug_times (int) – Number of times augmentation is applied.
Returns: - inp (numpy.ndarray) – Augmented noised images.
- ref (numpy.ndarray) – Augmented ground-truth images.
-
data.
gen_patches
(inp, ref, patch_size, channels_first=False, mode='sequential', n_patches=-1)[source]¶ Patch generation function.
Parameters: - inp (numpy.ndarray) – Noised image from which patches will be extracted.
- ref (numpy.ndarray) – Reference image from which patches will be extracted.
- patch_size (int) – Size of patch window (number of pixels in each axis).
- channels_first (bool) – Whether data is formatted as NCHW (True) or NHWC (False).
- mode (str) – One between {‘sequential’, ‘random’}. If mode = ‘sequential’, extracts patches sequentially on each axis. If mode = ‘random’, extracts patches randomly.
- n_patches (int) – Number of patches to be extracted from the image. Should be specified only if mode = ‘random’. If not specified, then
\[n\_patches = \dfrac{h \times w}{patch_{size}^{2}}\]
Returns: - input_patches (numpy.ndarray) – Extracted input patches.
- reference_patches (numpy.ndarray) – Extracted reference patches.
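A quick usage sketch (the random arrays stand in for a real noisy/clean image pair):
>>> import numpy as np
>>> from OpenDenoising import data
>>> inp = np.random.rand(256, 256, 1)  # noisy image
>>> ref = np.random.rand(256, 256, 1)  # matching clean image
>>> inp_patches, ref_patches = data.gen_patches(inp, ref, patch_size=40, mode='random', n_patches=16)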
evaluation module¶
The evaluation module was designed to provide functions that implement numerical metrics and visualisations during training and inference, as well as callbacks.
Callbacks¶
-
class
evaluation.
TensorboardImage
(valid_generator, denoiser, preprocess='clip')[source]¶ Bases:
keras.callbacks.callbacks.Callback
Tensorboard Keras callback. At each epoch end, it plots on Tensorboard metrics/loss information on validation data, as well as a denoising summary.
-
valid_generator
¶ Image dataset generator. Provide validation data for model evaluation.
Type: data.AbstractDatasetGenerator
-
denoiser
¶ Image denoising object.
Type: model.AbstractDeepLearningModel
-
-
class
evaluation.
LrSchedulerCallback
[source]¶ Bases:
keras.callbacks.callbacks.Callback
Custom Learning Rate scheduler based on Keras. This class is mainly used for compatibility between TfModel, PytorchModel and KerasModel. Please note that this class should not be used as a LearningRateScheduler. To specify one, you need to use a class that inherits from LrSchedulerCallback.
-
on_epoch_end
(self, epoch, logs=None)[source]¶ Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during train mode.
# Arguments
epoch: integer, index of epoch.
logs: dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_.
-
-
class
evaluation.
DnCNNSchedule
(initial_lr=0.001)[source]¶ Bases:
evaluation.callbacks.LrSchedulerCallback
DnCNN learning rate decay scheduler as specified in the original paper.
After epoch 30, drops the initial learning rate by a factor of 10. After epoch 60, drops the initial learning rate by a factor of 20.
-
class
evaluation.
StepSchedule
(initial_lr=0.001, factor=0.5, dropEvery=10)[source]¶ Bases:
evaluation.callbacks.LrSchedulerCallback
Drops the learning rate every ‘dropEvery’ epochs by a factor of ‘factor’.
-
class
evaluation.
PolynomialSchedule
(initial_lr=0.001, maxEpochs=100, power=1.0)[source]¶ Bases:
evaluation.callbacks.LrSchedulerCallback
Drops the learning rate following a polynomial schedule:
\[\alpha = \alpha_{0}\biggr(1 - \dfrac{epoch}{maxEpochs}\biggr)^{power}\]
-
class
evaluation.
ExponentialSchedule
(initial_lr=0.001, gamma=0.5)[source]¶ Bases:
evaluation.callbacks.LrSchedulerCallback
Drops the learning rate following an exponential schedule:
\[\alpha = \alpha_{0}\times\gamma^{epoch}\]
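As a minimal sketch (assuming a KerasModel named mymodel and a train_generator, built as in the model module documentation), a schedule is passed to training through the kcallbacks argument:
>>> from OpenDenoising import evaluation
>>> schedule = evaluation.StepSchedule(initial_lr=0.001, factor=0.5, dropEvery=10)
>>> mymodel.train(train_generator, kcallbacks=[schedule], n_epochs=100)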
-
class
evaluation.
CheckpointCallback
(denoiser, monitor='loss', mode='max', period=1)[source]¶ Bases:
keras.callbacks.callbacks.Callback
Creates training checkpoints for Deep Learning models.
-
denoiser
¶ Denoiser object to be saved.
Type: model.AbstractDenoiser
-
monitor
¶ Name of the metric being tracked. If the name is not present on logs, tracks the loss value.
Type: str
-
Metrics¶
-
class
evaluation.
Metric
(name, tf_metric=None, np_metric=None)[source]¶ Bases:
object
The metric class is used for interfacing between tensorflow and numpy metrics.
Notes
This class is recommended instead of directly using functions, because functions that work with tensors should not act directly on numpy arrays (each time they are called on numpy arrays, a new tensor is created, eventually causing memory overflow). See the Examples for more information. We remark that, for inference on the benchmark, you should always specify np metrics.
-
tf_metric
¶ Tensorflow function implementing the metric on tensors.
Type: function
-
np_metric
¶ Numpy function implementing metric on ndarrays.
Type: function
Examples
The most basic usage of Metric class is when you have functions for processing tensors and numpy arrays. We provide as built-in metrics SSIM, PSNR and MSE. For instance,
>>> import numpy as np
>>> import tensorflow as tf
>>> from OpenDenoising.evaluation import Metric, tf_ssim, skimage_ssim
>>> ssim = Metric(name="SSIM", tf_metric=tf_ssim, np_metric=skimage_ssim)
>>> x = tf.placeholder(tf.float32, [None, None, None, 1])
>>> y = tf.placeholder(tf.float32, [None, None, None, 1])
>>> ssim(x, y)
<tf.Tensor 'Mean_3:0' shape=() dtype=float32>
>>> x_np = np.random.randn(10, 256, 256, 1)
>>> y_np = np.random.randn(10, 256, 256, 1)
>>> ssim(x_np, y_np)
0.007000506155677978
That is, if you have specified the two metrics, the class handles if the result is a tensor, or a numeric value.
-
Tensorflow Metrics¶
-
evaluation.
tf_ssim
(y_true, y_pred)[source]¶ Structural Similarity Index.
Parameters: - y_true (tf.Tensor) – Tensor corresponding to ground-truth images (clean).
- y_pred (tf.Tensor) – Tensor corresponding to the Network’s prediction.
Returns: Tensor corresponding to the evaluated metric.
Return type: tf.Tensor
-
evaluation.
tf_mse
(y_true, y_pred)[source]¶ Mean Squared Error.
\[MSE = \dfrac{1}{N \times H \times W \times C}\sum_{n=0}^{N}\sum_{i=0}^{H}\sum_{j=0}^{W}\sum_{k=0}^{C}(y_{true}(n, i, j, k)-y_{pred}(n, i, j, k))^{2}\]
Parameters: - y_true (tf.Tensor) – Tensor corresponding to ground-truth images (clean).
- y_pred (tf.Tensor) – Tensor corresponding to the Network’s prediction.
Returns: Tensor corresponding to the evaluated metric.
Return type: tf.Tensor
-
evaluation.
tf_psnr
(y_true, y_pred)[source]¶ Peak Signal to Noise Ratio loss.
\[PSNR = \dfrac{10}{N}\sum_{n=0}^{N}log_{10}\biggr(\dfrac{max(y_{true}(n)^{2})}{MSE(y_{true}(n), y_{pred}(n))}\biggr)\]
Parameters: - y_true (tf.Tensor) – Tensor corresponding to ground-truth images (clean).
- y_pred (tf.Tensor) – Tensor corresponding to the Network’s prediction.
Returns: Tensor corresponding to the evaluated metric.
Return type: tf.Tensor
-
evaluation.
tf_se
(y_true, y_pred)[source]¶ Squared Error loss.
\[SE = \sum_{n=0}^{N}\sum_{i=0}^{H}\sum_{j=0}^{W}\sum_{k=0}^{C}(y_{true}(n, i, j, k)-y_{pred}(n, i, j, k))^{2}\]
Parameters: - y_true (tf.Tensor) – Tensor corresponding to ground-truth images (clean).
- y_pred (tf.Tensor) – Tensor corresponding to the Network’s prediction.
Returns: Tensor corresponding to the evaluated metric.
Return type: tf.Tensor
Skimage Metrics¶
-
evaluation.
skimage_ssim
(y_true, y_pred)[source]¶ Skimage SSIM wrapper.
Parameters: - y_true (numpy.ndarray) – 4D numpy array containing ground-truth images.
- y_pred (numpy.ndarray) – 4D numpy array containing the Network’s prediction.
Returns: Scalar value of SSIM between y_true and y_pred.
Return type: float
-
evaluation.
skimage_mse
(y_true, y_pred)[source]¶ Skimage MSE wrapper.
Parameters: - y_true (numpy.ndarray) – 4D numpy array containing ground-truth images.
- y_pred (numpy.ndarray) – 4D numpy array containing the Network’s prediction.
Returns: Scalar value of MSE between y_true and y_pred.
Return type: float
-
evaluation.
skimage_psnr
(y_true, y_pred)[source]¶ Skimage PSNR wrapper.
Parameters: - y_true (numpy.ndarray) – 4D numpy array containing ground-truth images.
- y_pred (numpy.ndarray) – 4D numpy array containing the Network’s prediction.
Returns: Scalar value of PSNR between y_true and y_pred.
Return type: float
Visualisations¶
class evaluation.Visualisation(func, name)[source]¶
Bases: object
Wraps visualisation functions.
func¶
Reference to a function that will create the plot.
Type: function
__call__(self, file_dir, **kwargs)[source]¶
Examples
Assuming you output your benchmark results to “./results”, and that you have a Benchmark named “MyBenchTests”, you can use the boxplot function to generate visualisations from the output .csv files.
>>> from OpenDenoising.evaluation import Visualisation, boxplot
>>> vis = Visualisation(func=boxplot, name="boxplot_PSNR")
>>> vis(file_dir="./results/")
Functions¶
evaluation.boxplot(csv_dir, output_dir, metric='PSNR', show=True)[source]¶
Wraps Seaborn boxplot function.
Parameters:
- csv_dir (str) – String containing the path to the CSV file directory holding the data.
- output_dir (str) – String containing the path where the image will be saved.
- metric (str) – String containing the name of the metric being shown by the plot.
- show (bool) – If True, shows the plot in addition to saving it to output_dir.
Benchmark Class¶
class OpenDenoising.Benchmark(name='evaluation', output_dir='./results')[source]¶
Bases: object
Benchmark class.
The purpose of this class is to evaluate models on given datasets. The evaluations are registered through the function “register_function”. After registering the desired metrics, you can run numeric evaluations through “numeric_evaluation”. To further visualize the results, you may also register visualization functions (to generate figures) through “register_visualization”. Once you have computed the metrics, you can call “graphic_evaluation” to generate the registered plots.
models¶
List of model.AbstractDenoiser objects.
Type: list
metrics¶
List of dictionaries holding the following fields,
- Name (str): metric name.
- Func (function): function object.
- Value (list): list of values, one for each file in the dataset.
- Mean (float): mean of the “Value” field.
- Variance (float): variance of the “Value” field.
Type: list
datasets¶
List of data.AbstractDatasetGenerator objects.
Type: list
partial¶
DataFrame holding per-file denoising results.
Type: pandas.DataFrame
general¶
DataFrame holding aggregates of denoising results.
Type: pandas.DataFrame
__init__(self, name='evaluation', output_dir='./results')[source]¶
Initialize self. See help(type(self)) for accurate signature.
evaluate(self)[source]¶
Perform the entire evaluation on datasets and models.
For each pair (model, dataset), runs inference on each dataset image using the model. The results are stored in a pandas DataFrame, and later written into two csv files:
- (EVALUATION_NAME)/General_Results: contains aggregates (mean and variance of each metric) about the tests that were run.
- (EVALUATION_NAME)/Partial_Results: contains the performance of each model on each dataset image.
These tables are stored in ‘output_dir’. Moreover, the visual restoration results are stored in the ‘output_dir/EVALUATION_NAME/RestoredImages’ folder. If you have visualisations registered into your evaluator, the plots are saved in the ‘output_dir/EVALUATION_NAME/Figures’ folder.
evaluate_model_on_dataset(self, denoiser, test_generator)[source]¶
Evaluates denoiser on the dataset represented by test_generator.
Parameters:
- denoiser (model.AbstractDeepLearningModel) – Denoiser object.
- test_generator (data.AbstractDatasetGenerator) – Dataset generator object. It generates data to evaluate the denoiser.
Returns: List of evaluated metrics.
Return type: list
Examples¶
Each tutorial in this section corresponds to a Jupyter Notebook hosted on the OpenDenoising Github Page. You may view them directly in the browser, or download them from the Github page to execute them in your own environment.
Basic Examples¶
These tutorials cover the basic utilisation of the OpenDenoising benchmark. The Benchmark Tutorial covers how evaluation is done, and the Data Module Tutorial covers how to construct a Dataset Generator.
Example: running evaluation¶
Here we present a tutorial on how you can use the OpenDenoising benchmark to compare the performance of denoising algorithms. You can follow this example by running the code snippets sequentially.
First, being on the project’s root, you need to import the modules provided by the OpenDenoising benchmark:
from OpenDenoising import data
from OpenDenoising import model
from OpenDenoising import evaluation
from OpenDenoising import Benchmark
To execute multiple models that access the GPU, you need to allow Tensorflow/Keras to allocate memory only when needed. This is done through,
import keras
import tensorflow as tf
# Configures Tensorflow session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
keras.backend.set_session(session)
From now on, we suppose you are running your code from the project root folder.
Defining Datasets¶
To define a dataset to evaluate your algorithms, you need to have at hand saved image files in the following folder structure:
Clean Dataset (only references)
DatasetName/
|-- Train
| |-- ref
|-- Valid
| |-- ref
Full Dataset (references and noisy images)
DatasetName/
|-- Train
| |-- in
| |-- ref
|-- Valid
| |-- in
| |-- ref
To run this example, you can use the data module to download test datasets,
data.download_dncnn_testsets(output_dir="./tmp/TestSets", testset="BSD68")
data.download_dncnn_testsets(output_dir="./tmp/TestSets", testset="Set12")
The previous snippet will create the entire folder structure in a temporary folder called “tmp”. Moreover, to create the objects for generating image samples, you can use the following commands,
# BSD Dataset
BSD68 = data.DatasetFactory.create(path="./tmp/TestSets/BSD68/",
batch_size=1,
n_channels=1,
noise_config={data.utils.gaussian_noise: [25]},
name="BSD68")
# Set12 Dataset
Set12 = data.DatasetFactory.create(path="./tmp/TestSets/Set12/",
batch_size=1,
n_channels=1,
noise_config={data.utils.gaussian_noise: [25]},
name="Set12")
datasets = [BSD68, Set12]
Defining Models¶
Deep Learning Models
In “./Additional Files”, you have at your disposal various pre-trained models. To load them, you only need to specify the path to the file containing their architecture/weights. For more details about how the model module works, you can look at the Model module tutorial.
Below, we charge each model using the respective wrapper class for its framework.
# Keras rednet30
keras_rednet30 = model.KerasModel(model_name="Keras_Rednet30")
keras_rednet30.charge_model(model_path="./Additional Files/Keras Models/rednet30.hdf5")
# Keras rednet20
keras_rednet20 = model.KerasModel(model_name="Keras_Rednet20")
keras_rednet20.charge_model(model_path="./Additional Files/Keras Models/rednet20.hdf5")
# Keras rednet10
keras_rednet10 = model.KerasModel(model_name="Keras_Rednet10")
keras_rednet10.charge_model(model_path="./Additional Files/Keras Models/rednet10.hdf5")
# Onnx dncnn from Matlab
onnx_dncnn = model.OnnxModel(model_name="Onnx_DnCNN")
onnx_dncnn.charge_model(model_path="./Additional Files/Onnx Models/dncnn.onnx")
Filtering Models
The specification of filtering models is done the same way. Since these kinds of models do not need to be trained, you only need to specify the function that will perform the denoising. Below, we specify BM3D, called from Python through Matlab’s engine.
Note: If you have not installed Matlab support, or have not installed the BM3D library from the author’s website, do not execute the next snippet.
# BM3D from Matlab
bm3d_filter = model.FilteringModel(model_name="BM3D_filter")
bm3d_filter.charge_model(model_function=model.filtering.BM3D, sigma=25.0, profile="np")
List of Models
If you have instantiated BM3D model,
models = [bm3d_filter, onnx_dncnn, keras_rednet10, keras_rednet20, keras_rednet30]
Otherwise,
models = [onnx_dncnn, keras_rednet10, keras_rednet20, keras_rednet30]
Metrics¶
Metrics are mathematical functions that allow the assessment of image quality. The evaluation module contains a list of built-in metrics commonly used in image processing.
MSE¶
The Mean Squared Error is a metric used to calculate the mean deviation between the pixels of two images \(y_{true}\) and \(y_{pred}\),
\[MSE = \dfrac{1}{N \times H \times W \times C}\sum_{n=0}^{N}\sum_{i=0}^{H}\sum_{j=0}^{W}\sum_{k=0}^{C}(y_{true}(n, i, j, k)-y_{pred}(n, i, j, k))^{2}\]
SSIM¶
The Structural Similarity Index is a metric that evaluates the perceived quality of a given image with respect to a reference image. Let \(x\) and \(y\) be image patches; the SSIM between them is,
\[SSIM(x, y) = \dfrac{(2\mu_{x}\mu_{y} + c_{1})(2\sigma_{xy} + c_{2})}{(\mu_{x}^{2} + \mu_{y}^{2} + c_{1})(\sigma_{x}^{2} + \sigma_{y}^{2} + c_{2})}\]
where
- \(\mu_{x}\), \(\mu_{y}\) are respectively the mean of pixels in each patch.
- \(\sigma_{x}^{2}\), \(\sigma_{y}^{2}\) are respectively the variance of pixels in each patch.
- \(\sigma_{xy}\) is the covariance between patches \(x\) and \(y\).
- \(c_{1} = 0.01\), \(c_{2} = 0.03\)
PSNR¶
The Peak Signal to Noise Ratio is a metric used for measuring the amount of noise present in signals. Its computation is based on the MSE metric,
\[PSNR = \dfrac{10}{N}\sum_{n=0}^{N}log_{10}\biggr(\dfrac{max(y_{true}(n)^{2})}{MSE(y_{true}(n), y_{pred}(n))}\biggr)\]
where \(max(y_{true})\) corresponds to the maximum pixel value on \(y_{true}\).
Creating Custom Metrics¶
The OpenDenoising benchmark has two types of metric functions: those that act on symbolic tensors, and those that act on actual numeric arrays (from numpy). The backend used to process tensors is Tensorflow, and its functions cannot be called directly on numpy.ndarray objects.
This introduces a double behavior for metric functions (acting upon tensors, or acting upon arrays). To cope with this issue, we propose a class called “Metric”, which wraps tensorflow-based and numpy-based functions and handles when to call one or the other.
For evaluation purposes, we only need to specify metrics that process numpy arrays. To define PSNR, SSIM and MSE metrics we run the following snippet,
mse_metric = evaluation.Metric(name="MSE", np_metric=evaluation.skimage_mse)
ssim_metric = evaluation.Metric(name="SSIM", np_metric=evaluation.skimage_ssim)
psnr_metric = evaluation.Metric(name="PSNR", np_metric=evaluation.skimage_psnr)
metrics = [mse_metric, psnr_metric, ssim_metric]
Visualisations¶
Visualisations are functions that create plots based on the evaluation results. To define a visualisation you need to specify the function that generates the plot, and then use the class OpenDenoising.evaluation.Visualisation to wrap it. The OpenDenoising benchmark provides box plots of default metrics as built-in options for visualisations, as follows,
from functools import partial
boxplot_PSNR = evaluation.Visualisation(func=partial(evaluation.boxplot, metric="PSNR"),
name="Boxplot_PSNR")
boxplot_SSIM = evaluation.Visualisation(func=partial(evaluation.boxplot, metric="SSIM"),
name="Boxplot_SSIM")
boxplot_MSE = evaluation.Visualisation(func=partial(evaluation.boxplot, metric="MSE"),
name="Boxplot_MSE")
visualisations = [boxplot_PSNR, boxplot_SSIM, boxplot_MSE]
Evaluation¶
To run an evaluation session you need to instantiate the OpenDenoising.Benchmark class, and then register the lists we have created so far (datasets, models, metrics and visualisations) through the method register, as follows,
benchmark = Benchmark(name="BSD68_Test12", output_dir='./tmp/results')
# Register metrics
benchmark.register(metrics)
# Register datasets
benchmark.register(datasets)
# Register models
benchmark.register(models)
# Register visualisations
benchmark.register(visualisations)
benchmark.evaluate()
This snippet has as output:



All evaluation results are saved in the ‘./tmp/results/BSD68_Test12’ folder, which was specified through output_dir and name. There you may find two .csv files (partial_results.csv and general_results.csv): partial_results.csv holds the denoising results for each image in each dataset, for each model, while general_results.csv holds statistics (mean and variance) per model and dataset.
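Since the outputs are plain .csv files, you may inspect them with pandas; a minimal sketch (the file names follow the description above):
import pandas as pd

# Load the per-image and the aggregated results written by the benchmark
partial = pd.read_csv("./tmp/results/BSD68_Test12/partial_results.csv")
general = pd.read_csv("./tmp/results/BSD68_Test12/general_results.csv")
print(general.head())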
Example: creating datasets¶
Here we present a tutorial on how you can use the OpenDenoising data module to provide data to your denoising algorithms. You should follow the snippets in this tutorial sequentially.
First, being on the project’s root, you need to import the necessary modules,
import cv2
import numpy as np
import matplotlib.pyplot as plt
from functools import partial
from OpenDenoising import data
from skimage.measure import compare_psnr
From now on, we suppose you are running your code from the project root folder.
CleanDatasetGenerator Class¶
Many Deep Learning models for image restoration are trained using artificial types of noise. In such a setting, one has a dataset of clean images, that is, images without any kind of noise or degradation. To train the network, these images are corrupted using artificial noise models, such as gaussian noise or salt and pepper noise.
We call these datasets containing only ground-truth images “clean datasets”, and they are handled by the class CleanDatasetGenerator. For more details on how you can artificially add noise to images, look at the Artificial Noising section.
Throughout this section, we base our analysis on the dataset used to train the DnCNN network: the Berkeley Segmentation Dataset (BSDS), consisting of 500 images. For training, BSDS considers 400 cropped images, and it further uses 68 images not present in the training data for validation/testing.
Additionally, each dataset type has a specific folder structure. We suppose that your Clean Datasets have the following structure:
DatasetName/
|-- Train
| |-- ref
|-- Valid
| |-- ref
Downloading BSDS Dataset¶
The benchmark includes a tool for automatically downloading the training and validation datasets from Github pages. These datasets may then be saved to a “./tmp” folder, as follows:
data.download_BSDS_grayscale(output_dir="./tmp/BSDS500/")
Creating a CleanDatasetGenerator¶
To create a Clean Dataset, you can use the DatasetFactory class by specifying the following parameters:
- batch_size: following DnCNN’s paper, the number of patches per batch is 128. However, batch_size corresponds to the number of images in each batch. Considering a patch size of 40, we end up with 16 times more patches than images. Hence, to end up with 128 patches per batch, we need 8 images per batch.
- n_channels: as our images are grayscale, n_channels = 1
- preprocessing: Following DnCNN’s paper, we need to extract \(40 \times 40\) patches from the images we just downloaded. We can do this by using the function “data.gen_patches”.
- name: we will specify “BSDS_Train” as the dataset’s name.
- noise_config: here we can configure the artificial noise that will be added to each image sample \(\mathbf{x}\). For our first example, we will add gaussian noise with intensity \(\sigma=25\) (always specified with respect to the 0-255 range).
These settings translate into the following snippet,
batch_size = 8
n_channels = 1
patch_size = 40
channels_first = False
noise_config = {data.utils.gaussian_noise: [25]}
preprocessing = [partial(data.gen_patches, patch_size=patch_size, channels_first=channels_first)]
With these configurations, you can create your train and valid generators,
train_generator = data.DatasetFactory.create(path="./tmp/BSDS500/Train",
batch_size=batch_size,
n_channels=1,
noise_config=noise_config,
preprocessing=preprocessing,
name="BSDS_Train")
valid_generator = data.DatasetFactory.create(path="./tmp/BSDS500/Valid",
batch_size=batch_size,
n_channels=1,
noise_config=noise_config,
name="BSDS_Valid")
Notice that we need to specify the path to the root folder, and not to “ref”. The “ref” folder, in that case, is the only folder containing images (as we generate noisy images at execution time). Using these two instances of our class, we may generate images that will be fed to Deep Learning models for training and inference.
Instances of the DatasetGenerator class behave as if they were lists. As such, you can loop through their contents with a for loop. For instance,
for Xbatch, Ybatch in train_generator:
# Do something
will read the images in “./tmp” and output “Xbatch” (noisy images) and “Ybatch” (clean images). You may also use the Python built-in function next, which reads data sequentially. Moreover, to see the images produced by the generator you may run the following snippet,
Xbatch, Ybatch = next(train_generator)
fig, axes = plt.subplots(5, 2, figsize=(10, 15))
for i in range(5):
axes[i, 0].imshow(np.squeeze(Xbatch[i]), cmap="gray")
axes[i, 0].axis("off")
axes[i, 0].set_title("Noised Patch")
axes[i, 1].imshow(np.squeeze(Ybatch[i]), cmap="gray")
axes[i, 1].axis("off")
axes[i, 1].set_title("Ground-Truth")

To see the images in valid_generator, a similar snippet can be run,
Xbatch, Ybatch = next(valid_generator)
fig, axes = plt.subplots(5, 2, figsize=(10, 15))
for i in range(5):
axes[i, 0].imshow(np.squeeze(Xbatch[i]), cmap="gray")
axes[i, 0].axis("off")
axes[i, 0].set_title("Noised Patch")
axes[i, 1].imshow(np.squeeze(Ybatch[i]), cmap="gray")
axes[i, 1].axis("off")
axes[i, 1].set_title("Ground-Truth")

Artificial Noising¶
In this section we provide the details for adding artificial noise to clean images. First, we cover the basic corruption functions in the OpenDenoising.data module.
Gaussian Noise
For additive noises, such as the Gaussian Noise, the noised image \(\mathbf{y}\) obeys the following expression,
\[\mathbf{y} = \mathbf{x} + \mathbf{\epsilon}\]
where \(\mathbf{x}\) is the ground-truth and \(\mathbf{\epsilon}\) is the noise component. For the Gaussian Noise model, \(\mathbf{\epsilon} \sim \mathcal{N}(0, \sigma^{2})\), that is, it is an Additive White Gaussian Noise (it is additive, and has zero mean).
The main parameter controlling the level of Gaussian Noise is \(\sigma\). Regarding its specification, it is noteworthy that the value of \(\sigma\), and consequently the impact of the noise on the outcome \(\mathbf{y}\), depends on the range of the original image \(\mathbf{x}\). As a convention, \(\sigma\) should be specified with respect to the uint8 range, that is, [0, 255].
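For intuition, a minimal sketch of this convention (our own helper, not the benchmark’s implementation):
import numpy as np

def add_gaussian_noise(x, sigma=25):
    # sigma follows the uint8 convention ([0, 255]), while x is assumed to be
    # a float image in [0, 1]; hence the division by 255.
    return x + np.random.normal(0.0, sigma / 255.0, size=x.shape)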
The following snippet shows an example of images contaminated with gaussian noise,
x = cv2.imread('./tmp/BSDS500/Train/ref/test_400.png', 0) # Reads a grayscale image
x = x.astype('float32') / 255 # uint8 => float32
y_1 = data.utils.gaussian_noise(x, noise_level=10)
y_2 = data.utils.gaussian_noise(x, noise_level=15)
y_3 = data.utils.gaussian_noise(x, noise_level=25)
y_4 = data.utils.gaussian_noise(x, noise_level=40)
y_5 = data.utils.gaussian_noise(x, noise_level=50)
fig, axes = plt.subplots(2, 3, figsize=(15, 10))
plt.suptitle('Gaussian Noise')
axes[0, 0].imshow(x, cmap='gray')
axes[0, 0].axis('off')
axes[0, 0].set_title('Ground-truth image')
axes[0, 1].imshow(y_1, cmap='gray')
axes[0, 1].axis('off')
axes[0, 1].set_title(r'$\sigma$=10')
axes[0, 2].imshow(y_2, cmap='gray')
axes[0, 2].axis('off')
axes[0, 2].set_title(r'$\sigma$=15')
axes[1, 0].imshow(y_3, cmap='gray')
axes[1, 0].axis('off')
axes[1, 0].set_title(r'$\sigma$=25')
axes[1, 1].imshow(y_4, cmap='gray')
axes[1, 1].axis('off')
axes[1, 1].set_title(r'$\sigma$=40')
axes[1, 2].imshow(y_5, cmap='gray')
axes[1, 2].axis('off')
axes[1, 2].set_title(r'$\sigma$=50')

Remark: a similar kind of noise is specified by data.utils.gaussian_blind_noise, which is used, for instance, to train the DnCNN network for blind denoising (noised images only). In that case, the \(\sigma\) parameter is drawn uniformly from the range [\(\sigma_{min}\), \(\sigma_{max}\)]. The function hence accepts two parameters, one for the minimum value of \(\sigma\) and the other for its maximum value.
Salt and Pepper Noise
The salt and pepper noise, also called shot noise, disturbs a given pixel with probability \(p\). Once a pixel is perturbed, it has equal probability of being saturated to either 1 or 0.
To specify the salt and pepper noise, you need to specify its probability of disturbing a pixel.
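For intuition, a minimal numpy sketch of this corruption (our own helper; the benchmark’s own implementation is data.utils.salt_and_pepper_noise, used below):
import numpy as np

def add_salt_and_pepper(x, p=0.25):
    # Each pixel is disturbed with probability p; a disturbed pixel is
    # saturated to either 0 or 1 with equal probability (x assumed in [0, 1]).
    y = x.copy()
    mask = np.random.rand(*x.shape) < p
    y[mask] = np.random.randint(0, 2, size=int(mask.sum())).astype(x.dtype)
    return y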
y_1 = data.utils.salt_and_pepper_noise(x, noise_level=10)
y_2 = data.utils.salt_and_pepper_noise(x, noise_level=15)
y_3 = data.utils.salt_and_pepper_noise(x, noise_level=25)
y_4 = data.utils.salt_and_pepper_noise(x, noise_level=40)
y_5 = data.utils.salt_and_pepper_noise(x, noise_level=50)
fig, axes = plt.subplots(2, 3, figsize=(15, 10))
plt.suptitle('Salt and Pepper Noise')
axes[0, 0].imshow(x, cmap='gray')
axes[0, 0].axis('off')
axes[0, 0].set_title('Ground-truth image')
axes[0, 1].imshow(y_1, cmap='gray')
axes[0, 1].axis('off')
axes[0, 1].set_title(r'$p$=10%')
axes[0, 2].imshow(y_2, cmap='gray')
axes[0, 2].axis('off')
axes[0, 2].set_title(r'$p$=15%')
axes[1, 0].imshow(y_3, cmap='gray')
axes[1, 0].axis('off')
axes[1, 0].set_title(r'$p$=25%')
axes[1, 1].imshow(y_4, cmap='gray')
axes[1, 1].axis('off')
axes[1, 1].set_title(r'$p$=40%')
axes[1, 2].imshow(y_5, cmap='gray')
axes[1, 2].axis('off')
axes[1, 2].set_title(r'$p$=50%')

Image Restoration degradations
Image Restoration is a broader topic than image denoising, comprising corruption models that follow a more general expression:
\[\mathbf{y} = \mathbf{H}(\mathbf{x}) + \mathbf{\epsilon}\]
where \(\mathbf{H}\) is called the degradation operator. Notice that when \(\mathbf{H}\) is the identity, we recover the denoising problem. Due to their similarity, neural networks may be trained to solve both kinds of problems. Moreover, since the State of the Art is commonly evaluated on both denoising and restoration problems, we have included two of the most common degradation processes: Super Resolution and JPEG Deblocking.
We use the terms denoising and restoration, as well as noise and degradation, interchangeably throughout this documentation.
Super Resolution Noise
Super-Resolution is a sub-problem of Image Restoration where we want to resize an image from \((h, w)\) to \((n\times h, n\times w)\) while minimizing the quality loss. Training a Deep Neural Network to perform such a task is equivalent to training a model to restore an image that was deteriorated by the resize operation.
To generate images with resolution artifacts, we perform two steps:
- Take an image of size \([h, w]\). Downsample it using bicubic interpolation to \([h / n, w / n]\).
- Upsample it using bicubic interpolation back to \([h, w]\).
The resulting image will exhibit low-resolution artifacts, which can be treated as any other kind of artificial noise.
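As a minimal sketch of these two steps with OpenCV (our own helper, independent of the benchmark’s implementation):
import cv2

def sr_degrade(x, n=2):
    # Bicubic downsample by a factor n, then bicubic upsample back to the
    # original size, leaving low-resolution artifacts behind.
    h, w = x.shape[:2]
    small = cv2.resize(x, (w // n, h // n), interpolation=cv2.INTER_CUBIC)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)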
The introduction of resolution artifacts in an image is done through the function data.utils.super_resolution_noise(), and the level of degradation is controlled through the parameter noise_level, which corresponds to the factor n described in the two steps above.
y_1 = data.utils.super_resolution_noise(x, noise_level=2)
y_2 = data.utils.super_resolution_noise(x, noise_level=3)
y_3 = data.utils.super_resolution_noise(x, noise_level=4)
y_4 = data.utils.super_resolution_noise(x, noise_level=5)
y_5 = data.utils.super_resolution_noise(x, noise_level=6)
fig, axes = plt.subplots(2, 3, figsize=(15, 10))
plt.suptitle('Super Resolution "Noise"')
axes[0, 0].imshow(x, cmap='gray')
axes[0, 0].axis('off')
axes[0, 0].set_title('Ground-truth image')
axes[0, 1].imshow(y_1, cmap='gray')
axes[0, 1].axis('off')
axes[0, 1].set_title(r'n=2')
axes[0, 2].imshow(y_2, cmap='gray')
axes[0, 2].axis('off')
axes[0, 2].set_title(r'n=3')
axes[1, 0].imshow(y_3, cmap='gray')
axes[1, 0].axis('off')
axes[1, 0].set_title(r'n=4')
axes[1, 1].imshow(y_4, cmap='gray')
axes[1, 1].axis('off')
axes[1, 1].set_title(r'n=5')
axes[1, 2].imshow(y_5, cmap='gray')
axes[1, 2].axis('off')
axes[1, 2].set_title(r'n=6')

JPEG Artifacts
Like super resolution, JPEG deblocking is another kind of image restoration task, where we want to restore an image that was degraded by compressing it with the JPEG algorithm. The introduction of JPEG artifacts in the image is done by using data.utils.jpeg_artifacts(). It has one parameter controlling the intensity of compression, compression_rate (given as a percentage of information lost).
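For intuition, a minimal OpenCV sketch of such a degradation (our own helper; note that OpenCV expresses a quality rather than a percentage of information lost, so we assume quality ≈ 100 - compression_rate):
import cv2
import numpy as np

def jpeg_degrade(x, compression_rate=50):
    # Encode to JPEG in memory at quality (100 - compression_rate), then
    # decode back; x is assumed to be a float grayscale image in [0, 1].
    x_uint8 = (np.clip(x, 0, 1) * 255).astype("uint8")
    quality = 100 - compression_rate
    _, buf = cv2.imencode(".jpg", x_uint8, [cv2.IMWRITE_JPEG_QUALITY, quality])
    return cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE).astype("float32") / 255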
y_1 = data.utils.jpeg_artifacts(x, compression_rate=10)
y_2 = data.utils.jpeg_artifacts(x, compression_rate=20)
y_3 = data.utils.jpeg_artifacts(x, compression_rate=50)
y_4 = data.utils.jpeg_artifacts(x, compression_rate=75)
y_5 = data.utils.jpeg_artifacts(x, compression_rate=90)
fig, axes = plt.subplots(2, 3, figsize=(15, 10))
plt.suptitle('JPEG "Noise"')
axes[0, 0].imshow(x, cmap='gray')
axes[0, 0].axis('off')
axes[0, 0].set_title('Ground-truth image')
axes[0, 1].imshow(y_1, cmap='gray')
axes[0, 1].axis('off')
axes[0, 1].set_title(r'compression_rate=10')
axes[0, 2].imshow(y_2, cmap='gray')
axes[0, 2].axis('off')
axes[0, 2].set_title(r'compression_rate=20')
axes[1, 0].imshow(y_3, cmap='gray')
axes[1, 0].axis('off')
axes[1, 0].set_title(r'compression_rate=50')
axes[1, 1].imshow(y_4, cmap='gray')
axes[1, 1].axis('off')
axes[1, 1].set_title(r'compression_rate=75')
axes[1, 2].imshow(y_5, cmap='gray')
axes[1, 2].axis('off')
axes[1, 2].set_title(r'compression_rate=90')

Extending Noise Types
In the CleanDatasetGenerator, noise is artificially added each time an image is read from disk. Keep in mind that, if your noising function introduces too much overhead into the batch generation process, you should avoid specifying it. In that case, you can add noise to the images beforehand, save the noised ones in (DATASET_PATH)/in/, and use them as if they were a “FullDataset” (see below).
Each CleanDatasetGenerator has an internal dictionary of noising functions. This dictionary consists of pairs “function: args”, where function is the noising function that will corrupt the data, and args are its arguments. You can specify more than one noise, knowing that they will be applied sequentially, as can be seen below,
noise_config = {
data.utils.gaussian_blind_noise: [0, 55],
data.utils.salt_and_pepper_noise: [10]
}
valid_generator = data.DatasetFactory.create(path="./tmp/BSDS500/Valid",
batch_size=8,
n_channels=1,
noise_config=noise_config,
name="BSDS_Valid")
Ybatch, Xbatch = next(valid_generator)
fig, axes = plt.subplots(5, 2, figsize=(10, 15))
for i in range(5):
axes[i, 0].imshow(np.squeeze(Xbatch[i]), cmap="gray")
axes[i, 0].axis("off")
axes[i, 0].set_title("Ground-Truth")
axes[i, 1].imshow(np.squeeze(np.clip(Ybatch[i], 0, 1)), cmap="gray")
axes[i, 1].axis("off")
axes[i, 1].set_title("Noised Image")

FullDatasetGenerator Class¶
Other datasets happen to have matched image pairs \((\mathbf{x}_{i}, \mathbf{y}_{i})\). In that case, instead of generating artificial noise, we may use the pairs for training Deep Learning models, as well as for assessing model quality. Full Datasets need to have the following folder structure,
DatasetName/
|-- Train
| |-- in
| |-- ref
|-- Valid
| |-- in
| |-- ref
Here we use as example the PolyU real-world denoising dataset. You can either download it from their Github page, or use the data module to automatically download it,
polyu_path = "./tmp/PolyU/"
data.download_PolyU(polyu_path)
The procedure for creating Full Datasets is much the same, the only difference being that we do not have to specify the noise config dictionary. Since DatasetFactory receives the dataset root, it automatically recognizes images in “ref” as the ground-truth, and images in “in” as the noisy samples, as shown below,
polyU_cropped = data.DatasetFactory.create(path="./tmp/PolyU/Train",
batch_size=16,
n_channels=3,
name="PolyU_Cropped")
Ybatch, Xbatch = next(polyU_cropped)
fig, axes = plt.subplots(5, 2, figsize=(10, 15))
for i in range(5):
axes[i, 0].imshow(np.squeeze(Xbatch[i]), cmap="gray")
axes[i, 0].axis("off")
axes[i, 0].set_title("Ground-Truth")
axes[i, 1].imshow(np.squeeze(Ybatch[i]), cmap="gray")
axes[i, 1].axis("off")
axes[i, 1].set_title("Noised Image")

Creating a FullDataset from clean images¶
If your preprocessing or corruption functions happen to introduce too much overhead in the batch generation process, you may consider using an OpenDenoising.data.FullDatasetGenerator instead of an OpenDenoising.data.CleanDatasetGenerator.
To do so, you may use the function OpenDenoising.data.generate_full_dataset, which executes exactly the same process as the CleanDataset batch generation, except that it saves the generated images to disk.
For instance, the following snippet reads the BSDS train images from “./tmp/BSDS500/Train/ref/”, crops \(40 \times 40\) patches from each image file, and saves these patches to “./tmp/Cropped_40_BSDS_Gauss_25/”.
from functools import partial
from OpenDenoising.data.utils import gaussian_noise
from OpenDenoising.data.utils import gen_patches
from OpenDenoising.data.utils import generate_full_dataset
PATH_TO_IMGS = "./tmp/BSDS500/Train/ref/"
PATH_TO_SAVE = "./tmp/Cropped_40_BSDS_Gauss_25/"
generate_full_dataset(PATH_TO_IMGS, PATH_TO_SAVE, noise_config={gaussian_noise: [25]},
preprocessing=[partial(gen_patches, patch_size=40)], n_channels=1)
After running the code, you may notice that the following folder structure has been created,
./tmp/Cropped_40_BSDS_Gauss_25/
|-- Train
| |-- in
| |-- ref
Hence, you may use DatasetFactory to create a FullDataset by specifying “./tmp/Cropped_40_BSDS_Gauss_25/Train” as the images path.
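For instance (a minimal sketch; the dataset name below is our own choice):
cropped_bsds = data.DatasetFactory.create(path="./tmp/Cropped_40_BSDS_Gauss_25/Train",
                                          batch_size=128,
                                          n_channels=1,
                                          name="Cropped_BSDS_Gauss_25")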
In-depth Tutorials¶
These tutorials cover the details of each wrapper class, including both the implementation and the training of models.
Filtering Model tutorial¶
This tutorial is a part of the Model module guide. Here, we explore how you can use the FilteringModel wrapper to use your Python or Matlab filtering functions in the benchmark.
First, being on the project’s root, you need to import the necessary modules,
import numpy as np
import matlab.engine
import matplotlib.pyplot as plt
from functools import partial
from OpenDenoising import data
from OpenDenoising import model
eng = matlab.engine.start_matlab()
The following function will be used throughout this tutorial to display denoising results,
def display_results(clean_imgs, noisy_imgs, rest_imgs, name):
"""Display denoising results."""
fig, axes = plt.subplots(5, 3, figsize=(15, 15))
plt.suptitle("Denoising results using {}".format(name))
for i in range(5):
axes[i, 0].imshow(np.squeeze(clean_imgs[i]), cmap="gray")
axes[i, 0].axis("off")
axes[i, 0].set_title("Ground-Truth")
axes[i, 1].imshow(np.squeeze(noisy_imgs[i]), cmap="gray")
axes[i, 1].axis("off")
axes[i, 1].set_title("Noised Image")
axes[i, 2].imshow(np.squeeze(rest_imgs[i]), cmap="gray")
axes[i, 2].axis("off")
axes[i, 2].set_title("Restored Images")
Moreover, you may download the data we will use by using the following function,
data.download_BSDS_grayscale(output_dir="./tmp/BSDS500/")
The models will be evaluated using the BSDS dataset,
# Validation images generator
valid_generator = data.DatasetFactory.create(path="./tmp/BSDS500/Valid",
batch_size=8,
n_channels=1,
noise_config={data.utils.gaussian_noise: [25]},
name="BSDS_Valid")
To define a filtering model, you need either a Python or Matlab function that performs the denoising.
Using a Python-based function for Image Denoising.¶
Python-based functions should agree with the following convention:
def my_filter(image, optional_arguments):
# Perform the filter algorithm
return restored_image
where image is a 4D numpy ndarray, and restored_image is a 4D numpy ndarray with the same shape as image. As an example, consider the median_filter implemented in model.filtering,
def median_filter(image, filter_size=(3, 3), stride=(1, 1)):
"""Naive implementation of median filter using numpy.
Parameters
----------
image : :class:`numpy.ndarray`
4D batch of noised images. It has shape: (batch_size, height, width, channels).
filter_size : list
2D list containing the size of filter's kernel.
stride : list
2D list containing the horizontal and vertical strides.
Returns
-------
output : :class:`numpy.ndarray`
4D batch of denoised images. It has shape: (batch_size, height, width, channels).
"""
if image.ndim == 2:
image = np.expand_dims(np.expand_dims(image, axis=0), axis=-1)
p = filter_size[0] // 2
sh, sw = stride
_image = np.pad(image, pad_width=((0, 0), (p, p), (p, p), (0, 0)), mode="constant")
N, h, w, c = _image.shape
output = np.zeros(image.shape)
for i in range(p, h - p, sh):
# Loops over horizontal axis
for j in range(p, w - p, sw):
# Loops over vertical axis
# The window spans the full (2p+1) x (2p+1) neighborhood around (i, j)
window = _image[:, i - p: i + p + 1, j - p: j + p + 1, :]
output[:, i - p, j - p, :] = np.median(window, axis=(1, 2))
return output
To charge the model into a FilteringModel, you can use a snippet along the following lines (a minimal sketch; the filter_size and stride keyword arguments are illustrative):
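# Median filter charged as a FilteringModel; optional arguments of the
# filtering function are forwarded as keyword arguments (as done for BM3D below)
median_model = model.FilteringModel(model_name="Median_Filter")
median_model.charge_model(model_function=model.filtering.median_filter,
                          filter_size=(3, 3), stride=(1, 1))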
Using a Matlab-based function for Image Denoising.¶
You can also use Matlab functions to perform image denoising. To do so, you need to wrap your Matlab function with a Python function. As an example, consider BM3D’s Matlab implementation. You can download the author’s code and add it to Matlab’s “pathdef.m” file. As an example of a function wrapping Matlab code, consider the following:
def BM3D(z, sigma=25.0, profile="np", channels_first=False):
"""This function wraps MATLAB's BM3D implementation, available on matlab_libs/BM3Dlib. The original code is
available to the public through the author's page, on http://www.cs.tut.fi/~foi/GCF-BM3D/
Parameters
----------
z : :class:`numpy.ndarray`
4D batch of noised images. It has shape: (batch_size, height, width, channels).
sigma : float
Level of gaussian noise.
profile : str
One between {'np', 'lc', 'high', 'vn', 'vn_old'}. Algorithm's profile.
Available for grayscale:
* 'np': Normal profile.
* 'lc': Fast profile.
* 'high': High quality profile.
* 'vn': High noise profile (sigma > 40.0)
* 'vn_old': old 'vn' profile. Yields inferior results than 'vn'.
Available for RGB:
* 'np': Normal profile.
* 'lc': Fast profile.
Returns
-------
y_est : :class:`numpy.ndarray`
4D batch of denoised images. It has shape: (batch_size, height, width, channels).
"""
_z = z.copy()
rgb = True if _z.shape[-1] == 3 else False
# Per the docstring above, only 'np' and 'lc' are available for RGB images
if rgb:
    assert (profile in ["np", "lc"]), "Expected profile to be 'np' or 'lc', but got {}.".format(profile)
else:
    assert (profile in ["np", "lc", "high", "vn", "vn_old"]), "Expected profile to be 'np', 'lc', 'high', 'vn' or 'vn_old', but got {}.".format(profile)
# Convert input arrays to matlab
m_sigma = matlab.double([sigma])
m_show = matlab.int64([0])
# Call BM3D function on matlab
y_est = []
for i in range(len(_z)):
m_z = matlab.double(_z[i, :, :, :].tolist())
if rgb:
    _, y = eng.CBM3D(m_z, m_z, m_sigma, profile, m_show, nargout=2)
else:
    _, y = eng.BM3D(m_z, m_z, m_sigma, profile, m_show, nargout=2)
y_est.append(np.asarray(y))
y_est = np.asarray(y_est).reshape(z.shape)
return y_est
Here, we make a few remarks:
- You need your engine to be defined in order to call the Matlab function.
- Inside Python’s matlab module, you have a series of classes making the bridge between Python and Matlab variables. Python does not handle operations between these classes well (for instance, the addition of two matlab.double variables). A way to get around this is to do all operations inside your Matlab functions. Therefore, your Python wrapper function should prepare the arguments to be passed to your Matlab function, followed by a call to “eng.my_matlab_function”.
- If your Matlab function returns more than one value, you need to specify the “nargout” parameter (even if it is not present in your Matlab code).
- After Matlab has done its computation, it outputs “Matlab arrays”. These can be converted into numpy arrays by calling the “numpy.asarray” function.


Providing your own Matlab functions¶
Consider one more example of how you can include Matlab functions into your FilteringModels. We define a script for a function called “kernel_filter” in the folder “./Examples/Jupyter Notebooks/Additional Files”. The Matlab function has the following implementation:
function y_est = kernel_filter(z, kernel)
k = size(kernel);
k = floor(k(1) / 2);
ndims = length(size(z));
y_est = zeros(size(z));
if ndims == 2
    z_ = padarray(z, [k, k], 0, 'both');
    [h, w] = size(z_);
    for i=k+1:h-k
        for j=k+1:w-k
            window = sum(z_(i-k:i+k, j-k:j+k) .* kernel, 'all');
            y_est(i - k, j - k) = window;
        end
    end
else
    z_ = padarray(z, [k, k, 0], 0, 'both');
    [h, w, c] = size(z_);
    kernel = repmat(kernel, [1, 1, 3]);
    for i=k+1:h-k
        for j=k+1:w-k
            window = sum(sum(z_(i-k:i+k, j-k:j+k, :) .* kernel));
            y_est(i - k, j - k, :) = window;
        end
    end
end
As we can see, this function denoises a single image at a time, so the 4D-in, 4D-out convention needs to be handled in the Python wrapper function.
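A minimal sketch of such a wrapper, mirroring the BM3D example above (the name kernel_filter_wrapper and its argument handling are our own, not part of the benchmark):
import numpy as np
import matlab

def kernel_filter_wrapper(z, kernel):
    # Applies the Matlab kernel_filter to each image of a 4D batch
    # (batch_size, height, width, channels), using the engine "eng"
    m_kernel = matlab.double(kernel.tolist())
    y_est = []
    for i in range(len(z)):
        m_z = matlab.double(np.squeeze(z[i]).tolist())
        y_est.append(np.asarray(eng.kernel_filter(m_z, m_kernel)))
    return np.asarray(y_est).reshape(z.shape)
Then, you can verify that the function is indeed on Matlab’s path, for instance by querying Matlab’s built-in which function through the engine (print(eng.which("kernel_filter"))), which prints a path such as: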
/home/efernand/repos/Summer_Internship_2019/Code/examples/Jupyter Notebooks/Additional Files/kernel_filter.m

Keras Model tutorial¶
This tutorial is a part of the Model module guide. Here, we explore how you can use the KerasModel wrapper to use your Keras deep learning models in the benchmark.
First, being on the project’s root, you need to import the necessary modules,
import gc
import keras
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from functools import partial
from OpenDenoising import data
from OpenDenoising import model
from keras import layers, models
from OpenDenoising import evaluation
From now on, we suppose you are running your code from the project root folder.
The following function will be used throughout this tutorial to display denoising results,
def display_results(clean_imgs, noisy_imgs, rest_imgs, name):
"""Display denoising results."""
fig, axes = plt.subplots(5, 3, figsize=(15, 15))
plt.suptitle("Denoising results using {}".format(name))
for i in range(5):
axes[i, 0].imshow(np.squeeze(clean_imgs[i]), cmap="gray")
axes[i, 0].axis("off")
axes[i, 0].set_title("Ground-Truth")
axes[i, 1].imshow(np.squeeze(noisy_imgs[i]), cmap="gray")
axes[i, 1].axis("off")
axes[i, 1].set_title("Noised Image")
axes[i, 2].imshow(np.squeeze(rest_imgs[i]), cmap="gray")
axes[i, 2].axis("off")
axes[i, 2].set_title("Restored Images")
Moreover, you may download the data we will use by using the following function,
data.download_BSDS_grayscale(output_dir="./tmp/BSDS500/")
The models will be evaluated using the BSDS dataset,
# Training images generator
train_generator = data.DatasetFactory.create(path="./tmp/BSDS500/Train",
batch_size=8,
n_channels=1,
noise_config={data.utils.gaussian_noise: [25]},
preprocessing=[partial(data.gen_patches, patch_size=40),
partial(data.dncnn_augmentation, aug_times=1)],
name="BSDS_Train")
# Validation images generator
valid_generator = data.DatasetFactory.create(path="./tmp/BSDS500/Valid",
batch_size=8,
n_channels=1,
noise_config={data.utils.gaussian_noise: [25]},
name="BSDS_Valid")
To execute multiple models that access the GPU, you need to allow Tensorflow/Keras to allocate memory only when needed. This is done through,
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
keras.backend.set_session(session)
Keras offers two ways to construct a model: either by explicitly programming it using the API, or by loading the computational graph from a file. Once you have constructed your model, you will have at hand a “keras.models.Model” class instance.
Since each framework is used differently, we provide the KerasModel class, which redefines some of the Keras API functionalities in order to adapt Keras models to the Benchmark’s needs.
Charging a model¶
The first step to build a KerasModel instance is to effectively charge a Keras model (“keras.models.Model”) into the class. This is done through the method “charge_model”. There are two ways to charge the model into the wrapper class: by using a function, or by using a file. These two cases are managed through three parameters of the method “charge_model”:
- model_function: this argument receives a function object (with the __call__ method defined). The function object is responsible for building the Keras model inside the class.
- model_path: This argument is a string containing the path to a .hdf5 file (weights + architecture) or a .json/.yaml file (architecture).
- model_weights: If you passed the model architecture through a .json file, and you do have a .hdf5 containing weights only, you can pass the path to the .hdf5 weight file using the “model_weights” parameter.
From a function¶
To charge a “keras.models.Model” into the wrapper class, you need to explicitly program the Keras model. To do so, you should provide to the method “charge_model” a function that returns an instance of “keras.models.Model” class corresponding to your architecture. As an example, consider the following implementation of DnCNN network:
def dncnn():
x = layers.Input(shape=[None, None, 1])
y = layers.Conv2D(filters=64, kernel_size=5, strides=(1, 1), padding='same')(x)
y = layers.Activation("relu")(y)
# Middle layers: Conv + ReLU + BN
for i in range(1, 16):
y = layers.Conv2D(filters=64, kernel_size=5, strides=(1, 1), padding='same', use_bias=False)(y)
y = layers.BatchNormalization(axis=-1, momentum=0.0, epsilon=1e-3)(y)
y = layers.Activation("relu")(y)
y = layers.Conv2D(filters=1, kernel_size=5, strides=(1, 1), use_bias=False, padding='same')(y)
y = layers.Subtract()([x, y])
# Keras model
return models.Model(x, y)
In addition to this example, you should consider the following convention for architecture functions:
def my_arch_func(optional arguments):
# Steps to build your Keras model
return keras.models.Model(inputs, outputs)
In the following blocks of code, we show how we can charge the model into a “KerasModel” wrapper class by using the “dncnn” function.
# Creating the KerasModel instance
kerasmodel_ex1 = model.KerasModel(model_name="Example1")
print("KerasModel {} created succesfully.".format(kerasmodel_ex1))
# Defining the function to be charged
def dncnn():
x = layers.Input(shape=[None, None, 1])
y = layers.Conv2D(filters=64, kernel_size=5, strides=(1, 1), padding='same')(x)
y = layers.Activation("relu")(y)
# Middle layers: Conv + ReLU + BN
for i in range(1, 16):
y = layers.Conv2D(filters=64, kernel_size=5, strides=(1, 1), padding='same', use_bias=False)(y)
y = layers.BatchNormalization(axis=-1, momentum=0.0, epsilon=1e-3)(y)
y = layers.Activation("relu")(y)
y = layers.Conv2D(filters=1, kernel_size=5, strides=(1, 1), use_bias=False, padding='same')(y)
y = layers.Subtract()([x, y])
# Keras model
return models.Model(x, y)
# Charging model into Example1
kerasmodel_ex1.charge_model(model_function=dncnn)
This last snippet outputs the following warning:
W0821 09:12:14.533067 140116547233600 keras_model.py:118] You have loaded your model from a python function, which does not hold any information about weight values. Be sure to train the network before running your tests.
Since you have loaded the model without any information about its weights, you should run a training session before performing inference.
Finally, it may be the case that your architecture has additional parameters. Consider the following example,
def dncnn(depth=17, n_filters=64, kernel_size=(3, 3), n_channels=1):
x = layers.Input(shape=[None, None, n_channels])
y = layers.Conv2D(filters=n_filters, kernel_size=kernel_size, strides=(1, 1), padding='same')(x)
y = layers.Activation("relu")(y)
# Middle layers: Conv + ReLU + BN
for i in range(1, depth - 1):
y = layers.Conv2D(filters=n_filters, kernel_size=kernel_size, strides=(1, 1), padding='same', use_bias=False)(y)
y = layers.BatchNormalization(axis=-1, momentum=0.0, epsilon=1e-3)(y)
y = layers.Activation("relu")(y)
y = layers.Conv2D(filters=1, kernel_size=kernel_size, strides=(1, 1), use_bias=False, padding='same')(y)
y = layers.Subtract()([x, y])
# Keras model
return models.Model(x, y)
this corresponds to the same architecture as the previous DnCNN, except that it has additional parameters, such as “depth”, “n_filters”, “kernel_size” and “n_channels”. You can still pass these to the charge_model function,
def dncnn_opt_params(depth=17, n_filters=64, kernel_size=(3, 3), n_channels=1):
x = layers.Input(shape=[None, None, n_channels])
y = layers.Conv2D(filters=n_filters, kernel_size=kernel_size, strides=(1, 1), padding='same')(x)
y = layers.Activation("relu")(y)
# Middle layers: Conv + ReLU + BN
for i in range(1, depth - 1):
y = layers.Conv2D(filters=n_filters, kernel_size=kernel_size, strides=(1, 1), padding='same', use_bias=False)(y)
y = layers.BatchNormalization(axis=-1, momentum=0.0, epsilon=1e-3)(y)
y = layers.Activation("relu")(y)
y = layers.Conv2D(filters=1, kernel_size=kernel_size, strides=(1, 1), use_bias=False, padding='same')(y)
y = layers.Subtract()([x, y])
# Keras model
return models.Model(x, y)
kerasmodel_ex2 = model.KerasModel(model_name="Example2")
print("KerasModel {} created succesfully.".format(kerasmodel_ex2))
kerasmodel_ex2.charge_model(model_function=dncnn_opt_params, depth=20, kernel_size=(7, 7), n_channels=3)
From a file¶
There are two ways of charging a model from a file:
- Charging an architecture (without weights) through a .json or .yaml file. This is done through a model previously saved using “keras.models.Model.to_json()” or “keras.models.Model.to_yaml()” method. As an example, consider the following:
net = dncnn()
# to_json/to_yaml return strings; write them to files yourself
with open("model.json", "w") as f:
    f.write(net.to_json())
with open("model.yaml", "w") as f:
    f.write(net.to_yaml())
In those two cases, the network is saved without any information about its training or weights, so you should run a training session before using your model for inference.
- Charging the complete model (weights + architecture) using a .json/.yaml file + .hdf5 file, or only a .hdf5 file. Keras can save either only weights or weights + architecture into a .hdf5 file. That depends on the commands you have used, for instance,
net = dncnn()
# Training of neural net
net.save("model.hdf5") # This saves both weights and architecture.
net.save_weights("weights.hdf5") # This saves only the weights.
To charge a model using a file, you simply need to pass it to “charge_model” through the “model_path” parameter. An example is shown below, in the Running Inference section. To free the memory allocated to the previous two models, run,
# Frees memory
kerasmodel_ex1 = None
kerasmodel_ex2 = None
tf.reset_default_graph()
gc.collect()
Running Inference¶
All denoisers can perform denoising through the “__call__” magic method. In the following example we load the pre-trained DnCNN model saved with Keras to perform inference,
# Loads model from .hdf5 file.
kerasmodel_ex3 = model.KerasModel(model_name="Inference_ex1")
kerasmodel_ex3.charge_model(model_path="./Examples/JupyterNotebooks/Additional Files/dncnn.hdf5")
Then, inference is done as if the KerasModel instance were a function,
# Get batch from valid_generator
noisy_imgs, clean_imgs = next(valid_generator)
# Performs inference on noisy images
rest_imgs = kerasmodel_ex3(noisy_imgs)
display_results(clean_imgs, noisy_imgs, rest_imgs, str(kerasmodel_ex3))
This code snippet generates the following output,

kerasmodel_ex3 = None
gc.collect()
Training a KerasModel¶
To run a training session, you only need to have a dataset, such as defined in the Data Tutorial. Once you have created a DatasetGenerator for your training images (and possibly for your validation images) you can call the “train” method of the KerasModel class, which takes the following parameters,
- train_generator: any instance of a class inheriting from data.AbstractDatasetGenerator. This class will yield the data pairs.
- valid_generator: optional. Specify it if you have validation data at hand.
- n_epochs: number of training epochs. Default is 100.
- n_stages: number of training batches drawn at random from the dataset at each training epoch. Default is 500.
- learning_rate: constant regulating the weight updates in your model. Default is 1e-3.
- optimizer_name: the optimizer’s name for your model. You can find the valid names in the Keras documentation. Default is the “Adam” optimizer.
- metrics: list of metrics that will be tracked during training. There are a couple of useful metrics implemented in the evaluation module (such as PSNR, SSIM, MSE), but you can also implement your own following Keras conventions.
- kcallbacks: list of Keras callbacks. You can either use Keras default callbacks or the callbacks defined in the evaluation module.
- loss: a metric that will be used in optimization as the objective function to be minimized. You can either use Keras default losses or the metrics defined in the evaluation module.
- valid_steps: number of validation batches drawn at each validation epoch.
To show how a Keras model can be trained, consider the training of a DnCNN as stated in its original paper:
- DnCNN for gaussian denoising has depth 17, n_filters 64, kernel_size (3, 3).
- It is trained on \(40 \times 40\) patches extracted from BSDS images, corrupted with fixed-variance gaussian noise (\(\sigma=25\), for instance).
For evaluation, we will use a disjoint subset of BSDS, consisting of 68 images which are not present in the training dataset.
# KerasModel
kerasmodel_ex4 = model.KerasModel(model_name="Example4", logdir='./logs/Keras/DnCNN')
kerasmodel_ex4.charge_model(model_function=dncnn_opt_params, depth=17, kernel_size=(3, 3), n_channels=1)
kerasmodel_ex4.train(train_generator=train_generator,
valid_generator=valid_generator,
n_epochs=100,
n_stages=465,
learning_rate=1e-3,
optimizer_name="Adam",
kcallbacks=[evaluation.DnCNNSchedule(),
evaluation.CheckpointCallback(kerasmodel_ex4, monitor="val_PSNR"),
evaluation.TensorboardImage(valid_generator, kerasmodel_ex4)],
loss=evaluation.mse,
valid_steps=10)
Finally, you may free the memory allocated to the model by using,
kerasmodel_ex4 = None
tf.reset_default_graph()
gc.collect()
Tensorflow Model tutorial¶
This tutorial is a part of the Model module guide. Here, we explore how you can use the TfModel wrapper to use your Tensorflow deep learning models in the benchmark.
# Python packages
import gc
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from functools import partial
from OpenDenoising import data
from OpenDenoising import model
from OpenDenoising import evaluation
From now on, we suppose you are running your code from the project root folder.
The following function will be used throughout this tutorial to display denoising results,
def display_results(clean_imgs, noisy_imgs, rest_imgs, name):
"""Display denoising results."""
fig, axes = plt.subplots(5, 3, figsize=(15, 15))
plt.suptitle("Denoising results using {}".format(name))
for i in range(5):
axes[i, 0].imshow(np.squeeze(clean_imgs[i]), cmap="gray")
axes[i, 0].axis("off")
axes[i, 0].set_title("Ground-Truth")
axes[i, 1].imshow(np.squeeze(noisy_imgs[i]), cmap="gray")
axes[i, 1].axis("off")
axes[i, 1].set_title("Noised Image")
axes[i, 2].imshow(np.squeeze(rest_imgs[i]), cmap="gray")
axes[i, 2].axis("off")
axes[i, 2].set_title("Restored Images")
Moreover, you may download the data we will use by using the following function,
data.download_BSDS_grayscale(output_dir="./tmp/BSDS500/")
The models will be evaluated using the BSDS dataset,
# Training images generator
train_generator = data.DatasetFactory.create(path="./tmp/BSDS500/Train",
batch_size=8,
n_channels=1,
noise_config={data.utils.gaussian_noise: [25]},
preprocessing=[partial(data.gen_patches, patch_size=40),
partial(data.dncnn_augmentation, aug_times=1)],
name="BSDS_Train")
# Validation images generator
valid_generator = data.DatasetFactory.create(path="./tmp/BSDS500/Valid",
batch_size=8,
n_channels=1,
noise_config={data.utils.gaussian_noise: [25]},
name="BSDS_Valid")
To execute multiple models that access the GPU, you need to allow Tensorflow/Keras to allocate memory only when needed. This is done through,
import keras
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
keras.backend.set_session(session)
Charging a model¶
As with Keras models, there are two ways to build the computational graph of a Deep Learning model: by using a function, or by using a file. As with the functions for creating Keras models, the functions used to build a Tensorflow model construct the operations of your architecture.
Another peculiarity of Tensorflow models is Batch Normalization. As stated in the documentation, a Batch Normalization layer uses a parameter called “training”, which switches computations between the training and evaluation phases. As it turns out, this parameter is essential for the performance of your model, and since most Deep Learning architectures involve a Batch Normalization layer, we assume that every Tensorflow model has a placeholder holding a boolean to switch between training and inference.
Loading Tensorflow models from files is also similar to loading Keras models from files. The way Tensorflow handles pre-trained models depends on which API is used. There are two main APIs for doing this,
- The tf.train.Saver API, which saves models as checkpoints.
- The SavedModel API, which adds abstractions to help when reloading the network’s graph.
As a remark, we encourage you to use informative names for your variables, such as “input” for the graph’s input and “output” for its output.
From a function¶
Tensorflow does all its work in the background, as it builds a computational graph that stays in memory whether or not it is attached to Python variables. For instance,
>>> a = tf.Variable(initial_value=np.ones([5, 1]), name="MyTestVariable")
>>> a
<tf.Variable 'MyTestVariable:0' shape=(5, 1) dtype=float64_ref>
>>> a = None
>>> tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="MyTestVariable")[0]
<tf.Variable 'MyTestVariable:0' shape=(5, 1) dtype=float64_ref>
As we can see, the Tensorflow variable “MyTestVariable” stays in memory regardless of its connection to the Python variable “a”. Hence, the model function only needs to build the Tensorflow computational graph. Once the computational graph is constructed, we may use Tensorflow’s graph utils to retrieve tensors by name. For instance, consider the following function:
def tf_dncnn(depth=17, n_filters=64, kernel_size=3, n_channels=1, channels_first=False):
"""Tensorflow implementation of dncnn. Implementation was based on https://github.com/wbhu/DnCNN-tensorflow.
Parameters
----------
depth : int
Number of fully convolutional layers in dncnn. In the original paper, the authors have used depth=17 for non-
blind denoising and depth=20 for blind denoising.
n_filters : int
Number of filters on each convolutional layer.
kernel_size : int tuple
2D Tuple specifying the size of the kernel window used to compute activations.
n_channels : int
Number of image channels that the network processes (1 for grayscale, 3 for RGB)
channels_first : bool
Whether channels comes first (NCHW, True) or last (NHWC, False)
"""
assert (n_channels == 1 or n_channels == 3), "Expected 'n_channels' to be 1 or 3, but got {}".format(n_channels)
if channels_first:
data_format = "channels_first"
input_tensor = tf.placeholder(tf.float32, [None, n_channels, None, None], name="input")
else:
data_format = "channels_last"
input_tensor = tf.placeholder(tf.float32, [None, None, None, n_channels], name="input")
is_training = tf.placeholder(tf.bool, (), name="is_training")
with tf.variable_scope('block1'):
output = tf.layers.conv2d(inputs=input_tensor,
filters=n_filters,
kernel_size=kernel_size,
padding='same',
data_format=data_format,
activation=tf.nn.relu)
for layers in range(2, depth):
with tf.variable_scope('block%d' % layers):
output = tf.layers.conv2d(inputs=output,
filters=n_filters,
kernel_size=kernel_size,
padding='same',
name='conv%d' % layers,
data_format=data_format,
use_bias=False)
output = tf.nn.relu(tf.layers.batch_normalization(output, training=is_training))
with tf.variable_scope('block{}'.format(depth)):
noise = tf.layers.conv2d(inputs=output,
filters=n_channels,
kernel_size=kernel_size,
padding='same',
data_format=data_format,
use_bias=False)
output = tf.subtract(input_tensor, noise, name="output")
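Once such a function has been executed, the named tensors can be retrieved from the default graph; a minimal sketch (the scope-prefixed name below assumes the default depth=17):
# Retrieve the named endpoints from the default graph by their tensor names
graph = tf.get_default_graph()
input_tensor = graph.get_tensor_by_name("input:0")
is_training = graph.get_tensor_by_name("is_training:0")
# Tensors created inside a variable_scope get the scope as a name prefix,
# so the final subtraction lives under, e.g., "block17/output:0"
output_tensor = graph.get_tensor_by_name("block17/output:0")
In practice, charging the model through TfModel takes care of this bookkeeping: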
tfmodel_ex1 = model.TfModel(model_name="TensorflowDnCNN")
tfmodel_ex1.charge_model(model_function=tf_dncnn)
Loading model from model function. Be sure to train your network before using it.
To deallocate Tensorflow’s graph from memory, you may use the following snippet,
# Resets variables and tf graph
tfmodel_ex1 = None
tf.reset_default_graph()
gc.collect()
From a file¶
Saving and restoring Tensorflow models from files depends on which API was used during training, but their outputs are similar. Below, we cover how we can build a TfModel using each of the APIs.
Using tf.train API¶
The tf.train API for saving Tensorflow’s computational graph is based on the tf.train.Saver class. An instance of this class saves a Tensorflow computational graph in four files,
- .ckpt.meta file, which holds the graph structure.
- .ckpt.data file, which holds the variable values.
- .ckpt.index file, which is a table making the correspondence between tensors and its metadata.
- checkpoint, which holds the names of the previous three files.
In order to restore the model, you can create a saver instance by calling the function tf.train.import_meta_graph, which takes as input the path to the “.meta” file. Then, you can restore the graph using the saver’s “restore” method.
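For reference, a minimal sketch of this manual procedure (the paths are placeholders; TfModel performs these steps for you):
import tensorflow as tf

# Rebuild the graph structure from the .meta file, then restore the weights
# from the latest checkpoint found in the same directory
saver = tf.train.import_meta_graph("./path/to/model.ckpt.meta")
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint("./path/to/"))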
These details are hidden from the user: you only need to specify the log folder during training (so that the model is automatically saved there), and specify the “model_path” parameter of “charge_model” so that the TfModel class can find the files and load them. As an example, consider the files in “./Additional Files/Tensorflow Models”. There, we have the four files necessary to rebuild our Tensorflow model.
tfmodel_ex2 = model.TfModel(model_name="TensorflowDnCNN")
tfmodel_ex2.charge_model(model_path="./Additional Files/Tensorflow Models/from_checkpoint/model.ckpt.meta")
# Get batch from valid_generator
noisy_imgs, clean_imgs = next(valid_generator)
# Performs inference on noisy images
rest_imgs = tfmodel_ex2(noisy_imgs)
display_results(clean_imgs, noisy_imgs, rest_imgs, str(tfmodel_ex2))

Then we deallocate Tensorflow’s graph from memory to execute the other sections,
tfmodel_ex2 = None
tf.reset_default_graph()
gc.collect()
Using SavedModel API¶
A Tensorflow model saved through the SavedModel API can be charged by passing the path to the .pb file to model_path. We remark that, following the requirements of the API, the same directory must contain a folder called “variables”, which holds the variable values.
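Internally, loading a SavedModel in Tensorflow 1.x boils down to something like the sketch below (the serving tag is an assumption),
import tensorflow as tf

export_dir = "./Additional Files/Tensorflow Models/from_saved_model"
with tf.Session() as sess:
    # Loads both the graph definition (.pb) and the variables/ folder
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    input_tensor = tf.get_default_graph().get_tensor_by_name("input:0")
The TfModel wrapper hides these steps behind charge_model,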
tfmodel_ex3 = model.TfModel(model_name="TensorflowDnCNN", logdir="./training_logs/Tensorflow")
tfmodel_ex3.charge_model(model_path="./Additional Files/Tensorflow Models/from_saved_model/saved_model.pb")
Loading model using SavedModel API.
Running inference¶
Inference on TfModels can be done as if the instance were a function (the class implements “__call__”), as can be seen below, where we reuse the TfModel loaded before.
# Get batch from valid_generator
noisy_imgs, clean_imgs = next(valid_generator)
# Performs inference on noisy images
rest_imgs = tfmodel_ex3(noisy_imgs)
display_results(clean_imgs, noisy_imgs, rest_imgs, str(tfmodel_ex3))

Then we deallocate Tensorflow’s graph from memory to execute the other sections,
tfmodel_ex3 = None
tf.reset_default_graph()
gc.collect()
Training a TfModel¶
To run a training session, you only need a training dataset, such as those defined in DatasetUsage.ipynb. Once you have created a DatasetGenerator for your training images (and possibly for your validation images), you can call the “train” method of the TfModel class, which takes the following parameters,
- train_generator: any instance of a dataset generator class. This class will yield the data pairs (noisy image, clean image).
- valid_generator: optional. Specify it if you have validation data available.
- n_epochs: number of training epochs. Default is 100.
- n_stages: number of training batches drawn at random from the dataset at each training epoch. Default value is 500.
- learning_rate: constant regulating the weight updates in your model. Default is 1e-3.
- optimizer_name: the name of the optimizer class used for your model, as listed in the Tensorflow documentation. Default is “AdamOptimizer”.
- metrics: list of metrics that will be tracked during training. There are a couple of useful metrics implemented in the evaluation module (such as PSNR, SSIM, MSE), but you can also implement your own following Keras conventions.
- kcallbacks: list of Keras callbacks. You can either use Keras default callbacks or the callbacks defined in the evaluation module.
- loss: a function following the same specification as metrics. It is used during optimization as the objective function to be minimized. You can either use Keras default losses or the metrics defined in the evaluation module.
- valid_steps: number of validation batches drawn at each validation epoch.
To show how a Tensorflow model can be trained, consider the training of a DnCNN as stated in its original paper:
- DnCNN for gaussian denoising has depth 17, n_filters 64, kernel_size (3, 3).
- It is trained on \(40 \times 40\) patches extracted from BSDS images, corrupted with fixed-variance gaussian noise (\(\sigma=25\), for instance).
For evaluation, we will use a disjoint subset of BSDS, consisting of 68 images which are not present in the training dataset.
# Creating and charging the model
tfmodel_ex4 = model.TfModel(model_name="TensorflowDnCNN", logdir="./training_logs/Tensorflow")
tfmodel_ex4.charge_model(model_function=tf_dncnn)
tfmodel_ex4.train(train_generator=train_generator,
valid_generator=valid_generator,
n_epochs=100,
n_stages=465,
learning_rate=1e-3,
optimizer_name="AdamOptimizer",
metrics=[evaluation.psnr,
evaluation.ssim,
evaluation.mse],
kcallbacks=[evaluation.DnCNNSchedule(),
evaluation.CheckpointCallback(tfmodel_ex4, monitor="val_PSNR"),
evaluation.TensorboardImage(valid_generator, tfmodel_ex4)],
loss=evaluation.mse,
valid_steps=10)
Pytorch Model tutorial¶
This tutorial is part of the Model module guide. Here, we explore how you can use the Pytorch wrapper to use your Pytorch deep learning models in the benchmark.
# Python packages
import gc
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
from functools import partial
from OpenDenoising import data
from OpenDenoising import model
from OpenDenoising import evaluation
From now on, we suppose you are running your code from the project root folder.
The following function will be used throughout this tutorial to display denoising results,
def display_results(clean_imgs, noisy_imgs, rest_imgs, name):
"""Display denoising results."""
fig, axes = plt.subplots(5, 3, figsize=(15, 15))
plt.suptitle("Denoising results using {}".format(name))
for i in range(5):
axes[i, 0].imshow(np.squeeze(clean_imgs[i]), cmap="gray")
axes[i, 0].axis("off")
axes[i, 0].set_title("Ground-Truth")
axes[i, 1].imshow(np.squeeze(noisy_imgs[i]), cmap="gray")
axes[i, 1].axis("off")
axes[i, 1].set_title("Noised Image")
axes[i, 2].imshow(np.squeeze(rest_imgs[i]), cmap="gray")
axes[i, 2].axis("off")
axes[i, 2].set_title("Restored Images")
Moreover, you may download the data we will use with the following function,
data.download_BSDS_grayscale(output_dir="./tmp/BSDS500/")
The models will be evaluated using the BSDS dataset,
# Training images generator
train_generator = data.DatasetFactory.create(path="./tmp/BSDS500/Train",
batch_size=8,
n_channels=1,
noise_config={data.utils.gaussian_noise: [25]},
preprocessing=[partial(data.gen_patches, patch_size=40),
partial(data.dncnn_augmentation, aug_times=1)],
name="BSDS_Train")
# Validation images generator
valid_generator = data.DatasetFactory.create(path="./tmp/BSDS500/Valid",
batch_size=8,
n_channels=1,
noise_config={data.utils.gaussian_noise: [25]},
name="BSDS_Valid")
Charging a model¶
To charge a model, you can either specify the path to a “.pt” or “.pth” file using the “model_path” argument, or a class inheriting from torch.nn.Module using the “model_function” argument.
The “.pt”/“.pth” files need to hold the entire model, that is, a saved instance of a class which inherits from torch.nn.Module (for more information, look at the Pytorch documentation).
For the “model_function” argument, you need to specify a Pytorch model class, which inherits from torch.nn.Module.
Charging from a Class¶
Consider the following implementation of DnCNN on Pytorch (code based on this Github page),
class DnCNN(nn.Module):
def __init__(self, depth=17, n_filters=64, kernel_size=3, n_channels=1):
super(DnCNN, self).__init__()
layers = [
nn.Conv2d(in_channels=n_channels, out_channels=n_filters, kernel_size=kernel_size,
padding=1, bias=False),
nn.ReLU(inplace=True)
]
for _ in range(depth-2):
layers.append(nn.Conv2d(in_channels=n_filters, out_channels=n_filters, kernel_size=kernel_size,
padding=1, bias=False))
layers.append(nn.BatchNorm2d(n_filters))
layers.append(nn.ReLU(inplace=True))
layers.append(nn.Conv2d(in_channels=n_filters, out_channels=n_channels, kernel_size=kernel_size,
padding=1, bias=False))
self.dncnn = nn.Sequential(*layers)
def forward(self, x):
out = self.dncnn(x)
return out
You can charge your model by passing it to the “model_function” argument. If you need to pass any optional arguments to the class init function, you can do so by using kwargs,
torch_ex1 = model.PytorchModel(model_name="dncnn_pytorch")
torch_ex1.charge_model(model_function=model.architectures.pytorch.DnCNN, depth=17)
Charging from a file¶
To charge a model from a file, you need to save it with extension “.pt” or “.pth”. Importantly, you need to save the whole model, not only its state dict.
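The difference lies in how the model was saved; a quick reminder sketch (paths are illustrative),
import torch
from OpenDenoising import model

net = model.architectures.pytorch.DnCNN(depth=17)
# Saves the whole model (architecture + weights): this is what charge_model expects
torch.save(net, "./Additional Files/Pytorch Models/dncnn.pth")
# Saves only the weights (state dict): NOT sufficient for charge_model
torch.save(net.state_dict(), "./dncnn_state_dict.pth")
With the whole model saved, charging proceeds as,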
torch_ex2 = model.PytorchModel(model_name="dncnn_pytorch")
torch_ex2.charge_model(model_path="./Additional Files/Pytorch Models/dncnn.pth", depth=17)
Running inference¶
Since PytorchModel implements the “__call__” method, inference can be done by simply using the instance as a function. Moreover, it is important to remark that Pytorch models only accept the NCHW (batch size - channel - height - width) data format. To overcome this issue, the “__call__” method automatically handles the conversion between NCHW and NHWC. In all cases, the output shape equals the input shape.
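Under the hood, this conversion amounts to a numpy transpose; a minimal sketch of what the wrapper does,
import numpy as np

def nhwc_to_nchw(batch):
    # Moves the channel axis from last (NHWC) to second (NCHW) position
    return np.transpose(batch, (0, 3, 1, 2))

def nchw_to_nhwc(batch):
    # Moves the channel axis from second (NCHW) back to last (NHWC) position
    return np.transpose(batch, (0, 2, 3, 1))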
# Get batch from valid_generator
noisy_imgs, clean_imgs = next(valid_generator)
# Performs inference on noisy images
rest_imgs = torch_ex2(noisy_imgs)
# Display results
display_results(clean_imgs, noisy_imgs, rest_imgs, str(torch_ex2))

Training a Pytorch Model¶
To train a Pytorch model, you need to specify at least one dataset (a training dataset; validation data is optional). The rest of the parameters are discussed below,
n_epochs (int)
Number of training epochs.
n_stages (int)
Number of batches drawn from the dataset at each epoch. The total number of iterations corresponds to n_epochs * n_stages.
optimizer_name (str)
Name of the optimizer’s class (take a look at Pytorch’s documentation).
metrics (list)
List of metric functions. Each function should take two ‘numpy.ndarray’ instances as inputs and output a float corresponding to the metric computed on those two arrays. For more information, take a look at the Benchmarking module.
kcallbacks (list)
List of callbacks. Take a look at the evaluation documentation.
loss (torch.nn.modules.loss)
Pytorch loss function.
valid_steps (int)
Number of validation batches drawn at the end of each epoch.
torch_ex3 = model.PytorchModel(model_name="dncnn_pytorch", logdir="./training_logs/Pytorch")
torch_ex3.charge_model(model_function=model.architectures.pytorch.DnCNN, depth=17)
torch_ex3.train(train_generator=train_generator,
valid_generator=valid_generator,
n_epochs=100,
n_stages=465,
learning_rate=1e-3,
optimizer_name="Adam",
metrics=[evaluation.psnr,
evaluation.ssim,
evaluation.mse],
kcallbacks=[evaluation.DnCNNSchedule(),
evaluation.CheckpointCallback(torch_ex3, monitor="val_PSNR"),
evaluation.TensorboardImage(valid_generator, torch_ex3)],
loss=torch.nn.MSELoss(reduction="sum"),
valid_steps=10)
Matconvnet Model tutorial¶
This tutorial is part of the Model module guide. Here, we explore how you can use the Matconvnet wrapper to use your Matconvnet deep learning models in the benchmark.
First, from the project root, you need to import the necessary modules,
# Python packages
import gc
import matlab.engine
import numpy as np
import matplotlib.pyplot as plt
from functools import partial
from OpenDenoising import data
from OpenDenoising import model
from OpenDenoising import evaluation
eng = matlab.engine.start_matlab()
From now on, we suppose you are running your code from the project root folder.
The following function will be used throughout this tutorial to display denoising results,
def display_results(clean_imgs, noisy_imgs, rest_imgs, name):
"""Display denoising results."""
fig, axes = plt.subplots(5, 3, figsize=(15, 15))
plt.suptitle("Denoising results using {}".format(name))
for i in range(5):
axes[i, 0].imshow(np.squeeze(clean_imgs[i]), cmap="gray")
axes[i, 0].axis("off")
axes[i, 0].set_title("Ground-Truth")
axes[i, 1].imshow(np.squeeze(noisy_imgs[i]), cmap="gray")
axes[i, 1].axis("off")
axes[i, 1].set_title("Noised Image")
axes[i, 2].imshow(np.squeeze(rest_imgs[i]), cmap="gray")
axes[i, 2].axis("off")
axes[i, 2].set_title("Restored Images")
Moreover, you may download the data we will use with the following function,
data.download_BSDS_grayscale(output_dir="./tmp/BSDS500/")
The models will be evaluated using the BSDS dataset,
# Validation images generator
valid_generator = data.DatasetFactory.create(path="./tmp/BSDS500/Valid",
batch_size=8,
n_channels=1,
noise_config={data.utils.gaussian_noise: [25]},
name="BSDS_Valid")
Matconvnet models are Deep Learning models based on the Matconvnet toolbox for Matlab. These models may be used in Python through Matlab’s Python engine, which enables users to run Matlab code inside Python.
Although it is possible to run them this way, the handling of Matlab’s numerical types in Python is not always straightforward; hence, every arithmetic computation is done through Matlab functions. Here we go through two examples of deep neural networks implemented using Matconvnet’s SimpleNN and DagNN classes: DnCNN and MWCNN.
Remark: Matconvnet models are only available for inference.
Charging a model¶
Following Matconvnet’s tutorial, the toolbox has two main wrappers,
- SimpleNN, a wrapper suitable for networks consisting of linear chains of computational blocks. It is largely implemented by the vl_simplenn function (evaluation of the CNN and of its derivatives), with a few other support functions such as vl_simplenn_move (moving the CNN between CPU and GPU) and vl_simplenn_display (obtain and/or print information about the CNN).
- DagNN, a wrapper which is more complex than SimpleNN as it has to support arbitrary graph topologies. Its design is object oriented, with one class implementing each layer type.
In order to use your Matlab code along with our Benchmark, you will need to implement a “denoise” function, which is responsible for four main tasks,
- Load the network architecture. This function reads a .mat file which contains the network architecture and its weights.
- Move the network and the noisy input array to GPU (if GPU is enabled).
- Perform denoising, by calling either vl_simplenn (simplenn) or net.eval (dagnn).
- Pass the restored image array back to CPU, if GPU is enabled.
The “denoise” function should have “denoise” in its name (e.g. “DnCNN_denoise”), and should agree with the following convention:
function image_restored = mynet_denoise(net_path, image, useGPU)
% Loading net
% Move net + arrays to GPU
% perform denoising
% Move result to CPU
end
SimpleNN¶
The SimpleNN wrapper class is well suited for linear chains of neural network layers. In order to construct the neural network object, you need to specify the path to a “.mat” file containing a pre-trained SimpleNN network. In our example, we consider a SimpleNN-based DnCNN taken from the Github Repository of DnCNN’s paper. The model was trained to denoise images with \(\sigma=25\) gaussian noise. In order to use the model, we propose the following Matlab function,
function [image_restored] = DnCNN_denoise(net_path, image, useGPU)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Given an image array, simplenn_denoise performs image denoising by %
% loading a deep neural network architecture located on 'net_path'. %
% Params: %
% 1) net_path: path to .mat file containing the model. %
% 2) image: 2d float array containing the image %
% 3) useGPU: boolean defining if it will use GPU or not. %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Load Network architecture
net_obj = load(net_path); % Loads the network into the workspace
net = net_obj.net; % Gets the network from the network object in workspace
net = vl_simplenn_tidy(net); % Fixes the network in case of version incompatibilities
%% Move net to GPU
if useGPU == 1
net = vl_simplenn_move(net, 'gpu'); % Pass net to GPU
image = gpuArray(image); % Pass image to GPU
end
%% Perform denoising
res = vl_simplenn(net, image, [], [], ... % vl_simplenn is the function used to evaluate the network
'conserveMemory', true, ... % on the input 'image'. 'conserveMemory' deletes all
'mode', 'test'); % intermediate responses. 'mode' sets BatchNorm to test mode.
image_restored = res(end).x; % Gets the output of last layer.
%% Move result to CPU
if useGPU == 1
image = gather(image); % Gets image array from GPU.
image_restored = gather(image_restored); % Gets output array from GPU.
end
image_restored = single(image - image_restored); % In DnCNN, the network is used to predict the noise rather
% than the image itself.
end
Given such a function, which needs to be defined in the same folder as the “.mat” file holding the model weights and architecture, you only need to pass the path to the “.mat” file. That being so, consider the files in “./Additional Files/Matconvnet Models/”,
model_path = "/home/efernand/repos/Summer_Internship_2019/Code/examples" \
"/Jupyter Notebooks/Additional Files/Matconvnet Models/simplenn/dncnn.mat"
matconvnet_ex1 = model.MatconvnetModel(model_name="DnCNN")
matconvnet_ex1.charge_model(model_path=model_path)
# Get batch from valid_generator
noisy_imgs, clean_imgs = next(valid_generator)
# Performs inference on noisy images
rest_imgs = matconvnet_ex1(noisy_imgs)
# Display results
display_results(clean_imgs, noisy_imgs, rest_imgs, str(matconvnet_ex1))

DagNN¶
The DagNN wrapper supports richer kinds of networks. As our DagNN example we take MWCNN, which relies on custom DWT (Discrete Wavelet Transform) / IDWT (Inverse Discrete Wavelet Transform) layers to compute its output.
The process to compute the network’s output is the same, except for a few syntax changes while loading the model, as follows,
function [image_restored] = MWCNN_denoise(net_path, image, useGPU)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Given an image array, mwcnn performs image denoising by loading %
% a deep neural network architecture located on 'net_path'. %
% Params: %
% 1) net_path: path to .mat file containing the model. %
% 2) image: 2d float array containing the image %
% 3) useGPU: boolean defining if it will use GPU or not. %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Load Network architecture
net_obj = load(net_path); % Loads network object into workspace
net = net_obj.net; % Gets network from network object
net = dagnn.DagNN.loadobj(net); % Loads the network into the wrapper
net.removeLayer('objective'); % Remove objective layer (only needed for training)
out_idx = net.getVarIndex('prediction'); % Get the index of the prediction variable
net.vars(net.getVarIndex('prediction')).precious = 1; % Keep the prediction value in memory after evaluation.
net.mode = 'test'; % Set network to test mode.
%% Preprocess image
% Scale images between -2 and +2.
% OBS: needed only for MWCNN.
input = image * 4 - 2;
%% Move net to GPU
if useGPU == 1
net.move('gpu'); % Pass net to GPU
input = gpuArray(input); % Pass image to GPU
end
%% Computes output
net.eval({'input', input});
image_restored = gather(...
squeeze(...
gather(...
net.vars(out_idx).value + 2 ...
) / 4 ...
)...
);
end
model_path = "/home/efernand/repos/Summer_Internship_2019/Code/examples" \
"/Jupyter Notebooks/Additional Files/Matconvnet Models/dagnn/mwcnn.mat"
matconvnet_ex2 = model.MatconvnetModel(model_name="mwcnn")
matconvnet_ex2.charge_model(model_path=model_path)
# Get batch from valid_generator
noisy_imgs, clean_imgs = next(valid_generator)
# Performs inference on noisy images
rest_imgs = matconvnet_ex2(noisy_imgs)
# Display results
display_results(clean_imgs, noisy_imgs, rest_imgs, str(matconvnet_ex2))

Matlab Model tutorial¶
This tutorial is part of the Model module guide. Here, we explore how you can use the MatlabModel wrapper to use your Matlab deep learning models in the benchmark.
# Python packages
import gc
import numpy as np
import matlab.engine
import matplotlib.pyplot as plt
from functools import partial
from OpenDenoising import data
from OpenDenoising import model
from OpenDenoising import evaluation
eng = matlab.engine.start_matlab()
From now on, we suppose you are running your code from the project root folder.
The following function will be used throughout this tutorial to display denoising results,
def display_results(clean_imgs, noisy_imgs, rest_imgs, name):
"""Display denoising results."""
fig, axes = plt.subplots(5, 3, figsize=(15, 15))
plt.suptitle("Denoising results using {}".format(name))
for i in range(5):
axes[i, 0].imshow(np.squeeze(clean_imgs[i]), cmap="gray")
axes[i, 0].axis("off")
axes[i, 0].set_title("Ground-Truth")
axes[i, 1].imshow(np.squeeze(noisy_imgs[i]), cmap="gray")
axes[i, 1].axis("off")
axes[i, 1].set_title("Noised Image")
axes[i, 2].imshow(np.squeeze(rest_imgs[i]), cmap="gray")
axes[i, 2].axis("off")
axes[i, 2].set_title("Restored Images")
Moreover, you may download the data we will use with the following function,
data.download_BSDS_grayscale(output_dir="./tmp/BSDS500/")
The models will be evaluated using the BSDS dataset,
# Validation images generator
valid_generator = data.DatasetFactory.create(path="./tmp/BSDS500/Valid",
batch_size=8,
n_channels=1,
noise_config={data.utils.gaussian_noise: [25]},
name="BSDS_Valid")
Charging a model¶
To charge a Matlab model, you can either specify a Matlab function or a .mat file containing the architecture you want to train/test. In both cases, the specification is done through a string.
From a file¶
To charge a model from a file, you need to specify the path to the .mat file containing the model’s architecture. Notice that for models that predict the residual rather than the restored image, “return_diff” should be set to True,
matlabModel = model.MatlabModel(return_diff=True)
matlabModel.charge_model(model_path="./Additional Files/Matlab Models/dncnn_matlab.mat")
After charging the model into the wrapper object, the network object becomes available in Matlab’s workspace. The following command prints the workspace:
print(matlabModel.engine.workspace)
Matlab's Workspace:
Name Size Bytes Class Attributes
layers 50x1 4465192 nnet.cnn.layer.Layer
From a Function¶
To specify a model from a function, you need to specify the path to the .m file containing the function that builds your model. This string is used internally to add the .m file to the path, and then to call the function using Matlab’s engine.
Note: you may still pass extra arguments through kwargs, as if they were feeding a normal Python function.
matlabModel2 = model.MatlabModel(return_diff=True)
matlabModel2.charge_model(model_function="./OpenDenoising/model/architectures/matlab/dncnn.m")
Inference with MatlabModel¶
To perform inference, you may use the “__call__” method of the MatlabModel class. This method internally uses Matlab’s engine to call the “denoiseImage” Matlab function, which uses the network object to denoise an input batch.
# Get batch from valid_generator
noisy_imgs, clean_imgs = next(valid_generator)
# Performs inference on noisy images
rest_imgs = matlabModel(noisy_imgs)
display_results(clean_imgs, noisy_imgs, noisy_imgs - rest_imgs, str(matlabModel))

Training a MatlabModel¶
To train a MatlabModel, you need to specify a training (and possibly a validation) dataset through a string. This string corresponds to the name of the dataset in Matlab’s workspace.
To create the dataset in the workspace, you can use the classes ‘imageDatastore’, ‘CleanMatlabDataset’ and ‘FullMatlabDataset’, which are Matlab classes for generating data to train Deep Learning models.
Using a CleanDataset¶
As in the case of Python’s CleanDatasetGenerator, to specify a Clean Dataset using Matlab you need to specify the noising function, called noiseFcn. This function should be specified as a string containing an anonymous-function definition.
For instance, if you want to apply Gaussian noise to your dataset, you can specify:
noiseFcn = "@(I) imnoise(I, 'gaussian', 0, 25/255)"
For more complex kinds of noise, you can implement them in a .m function and pass their arguments via the same strategy, as in the sketch below.
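For instance, both of the following strings are valid noise specifications (my_poisson_noise.m is a hypothetical custom function that would need to be on Matlab’s path),
# Built-in zero-mean Gaussian noise with variance 25/255, via imnoise
noise_fcn = "@(I) imnoise(I, 'gaussian', 0, 25/255)"
# Hypothetical custom noise implemented in a my_poisson_noise.m file
noise_fcn_custom = "@(I) my_poisson_noise(I, 0.5)"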
Note: You should make sure that “./OpenDenoising/data/” folder is on Matlab’s path (add it to pathdef.m).
dataset_train_wrapper = data.MatlabCleanDatasetGenerator(matlabModel2.engine, images_path="./tmp/BSDS500/Train/ref",
partition="Train")
dataset_train_wrapper()
dataset_valid_wrapper = data.MatlabCleanDatasetGenerator(matlabModel2.engine, images_path="./tmp/BSDS500/Valid/ref",
partition="Valid")
dataset_valid_wrapper()
print(matlabModel2.engine.workspace)
Matlab's Workspace:
Name Size Bytes Class Attributes
ME 1x1 1138 MException
imds_Train 1x1 8 matlab.io.datastore.ImageDatastore
imds_Train_noise 1x1 8 CleanMatlabDataset
imds_Valid 1x1 8 matlab.io.datastore.ImageDatastore
imds_Valid_noise 1x1 8 CleanMatlabDataset
layers 50x1 4465192 nnet.cnn.layer.Layer
matlabModel2.train(train_generator="imds_Train_noise", valid_generator="imds_Valid_noise")
Onnx Model tutorial¶
This tutorial is part of the Model module guide. Here, we explore how you can use the OnnxModel wrapper to use your Onnx deep learning models in the benchmark.
From now on, we suppose you are running your code from the project root folder.
The following function will be used throughout this tutorial to display denoising results,
def display_results(clean_imgs, noisy_imgs, rest_imgs, name):
"""Display denoising results."""
fig, axes = plt.subplots(5, 3, figsize=(15, 15))
plt.suptitle("Denoising results using {}".format(name))
for i in range(5):
axes[i, 0].imshow(np.squeeze(clean_imgs[i]), cmap="gray")
axes[i, 0].axis("off")
axes[i, 0].set_title("Ground-Truth")
axes[i, 1].imshow(np.squeeze(noisy_imgs[i]), cmap="gray")
axes[i, 1].axis("off")
axes[i, 1].set_title("Noised Image")
axes[i, 2].imshow(np.squeeze(rest_imgs[i]), cmap="gray")
axes[i, 2].axis("off")
axes[i, 2].set_title("Restored Images")
Moreover, you may download the data we will use with the following function,
data.download_BSDS_grayscale(output_dir="./tmp/BSDS500/")
The models will be evaluated using the BSDS dataset,
# Validation images generator
valid_generator = data.DatasetFactory.create(path="./tmp/BSDS500/Valid",
batch_size=8,
n_channels=1,
noise_config={data.utils.gaussian_noise: [25]},
name="BSDS_Valid")
Onnx Model¶
This notebook covers how the OnnxModel class can be used to hold Deep Learning models and to perform inference. We remark that Onnx is a framework focused on deploying trained neural networks; hence, there is no support for training models in this format.
Charging a model¶
Charging can only be done by specifying a “.onnx” file, which holds the model’s computational graph as well as its weights. To perform inference, we rely on the onnxruntime module for Python.
After charging a model into OnnxModel, an onnxruntime session is created so that you can perform inference.
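Internally, this amounts to something like the following onnxruntime sketch (the file path and the tensor name "input" are assumptions, the latter based on the exported graphs shown later in this tutorial),
import numpy as np
import onnxruntime

# Creates an inference session from the .onnx file
session = onnxruntime.InferenceSession("./Additional Files/Onnx Models/dncnn.onnx")
# Runs the graph on a dummy NCHW batch; None fetches all graph outputs
dummy_batch = np.random.rand(5, 1, 40, 40).astype(np.float32)
outputs = session.run(None, {"input": dummy_batch})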
Running Inference¶
Inference can be done through the “__call__” method required by the AbstractDeepLearningModel interface. This method ensures that derived classes (such as OnnxModel) can be used as functions. Below, a batch of noisy/clean images is drawn from the dataset, and the OnnxModel is used to predict the restored images,
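Following the pattern of the previous tutorials, a sketch of charging and running an OnnxModel (the constructor arguments and the .onnx file path mirror the other wrappers and are assumptions),
onnx_ex1 = model.OnnxModel(model_name="dncnn_onnx")
onnx_ex1.charge_model(model_path="./Additional Files/Onnx Models/dncnn.onnx")
# Get batch from valid_generator
noisy_imgs, clean_imgs = next(valid_generator)
# Performs inference on noisy images
rest_imgs = onnx_ex1(noisy_imgs)
display_results(clean_imgs, noisy_imgs, rest_imgs, str(onnx_ex1))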

Converting models to Onnx¶
Onnx is thought as a bridge between different Deep Learning frameworks. It is used to deploy models for inference (assuming they were trained previously).
Each language has its own way of converting its models into Onnx. Some of them, such as Pytorch and Matlab, have such support natively. Others, rely on non-official modules such as keras2onnx or tf2onnx.
Keras to Onnx¶
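A minimal conversion sketch using the keras2onnx package pinned in the requirements (the model file name is an assumption),
import onnx
import keras2onnx
from keras.models import load_model

# Load a previously trained Keras model
keras_model = load_model("./Additional Files/Keras Models/dncnn.hdf5")
# Convert the Keras model into an Onnx graph and save it to disk
onnx_model = keras2onnx.convert_keras(keras_model, keras_model.name)
onnx.save_model(onnx_model, "dncnn_keras.onnx")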
Pytorch to Onnx¶
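Pytorch exports models natively through torch.onnx; a sketch consistent with the graph printed below (the dummy input’s shape is frozen into the exported model),
import torch
from OpenDenoising import model

net = model.architectures.pytorch.DnCNN(depth=17)
# Pytorch traces the network on a dummy input; its shape [5, 1, 40, 40]
# becomes the fixed shape of the exported graph
dummy_input = torch.randn(5, 1, 40, 40)
torch.onnx.export(net, dummy_input, "dncnn_pytorch.onnx",
                  input_names=["input"], output_names=["output"])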
[Graph Input] name: input, shape: [5, 1, 40, 40]
[Graph Output] name: output, shape: [5, 1, 40, 40]
Notice that the two tensors above have fixed sizes (batch size, height and width alike). We address this problem in the next section.
Note on Pytorch2Onnx¶
Since Pytorch does not support dynamic shapes (i.e. None values in shape), the exported Onnx model has a fixed input shape, which can be problematic at inference time (you can only process images by slicing them into fixed-size patches).
However, there is a workaround: post-process the Onnx graph and switch the height/width values to “?” (analogous to None in Tensorflow/Keras).
If you face problems with fixed-size inputs, you can use the model.utils module for the conversion:
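The exact helper in model.utils is not shown above; the underlying graph surgery with the onnx package looks roughly like this sketch,
import onnx

onnx_model = onnx.load("dncnn_pytorch.onnx")
# Replace the fixed batch/height/width dimensions by the symbolic "?",
# keeping the channel axis (index 1) fixed
for tensor in list(onnx_model.graph.input) + list(onnx_model.graph.output):
    dims = tensor.type.tensor_type.shape.dim
    for i in [0, 2, 3]:
        dims[i].dim_param = "?"
onnx.save_model(onnx_model, "dncnn_pytorch_dynamic.onnx")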
New graph:
[Graph Input]
name: "input"
type {
tensor_type {
elem_type: 1
shape {
dim {
dim_param: "?"
}
dim {
dim_value: 1
}
dim {
dim_param: "?"
}
dim {
dim_param: "?"
}
}
}
}
[Graph Output] name: "output"
type {
tensor_type {
elem_type: 1
shape {
dim {
dim_param: "?"
}
dim {
dim_value: 1
}
dim {
dim_param: "?"
}
dim {
dim_param: "?"
}
}
}
}
Tensorflow to Onnx¶
In order to convert a Tensorflow model to Onnx, you need to convert all its variables to constants. To do so, the model.utils module has a function called freeze_tf_graph that converts all the variables in the current Tensorflow graph into constants.
You can either specify a model_file (containing your Tensorflow model) to be read and frozen, or let the function use the default graph and session. In the first case, the default graph is expected to be empty (that is, you have not previously defined any Tensorflow computation).
In this example, we will freeze the Tensorflow model present in “./Additional Files/Tensorflow Models/from_saved_model”.
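Underneath, freezing relies on Tensorflow’s convert_variables_to_constants; a minimal sketch (the output node name "output" follows the tf_dncnn function above),
import tensorflow as tf

with tf.Session() as sess:
    # Load the SavedModel into the (empty) default graph
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING],
                               "./Additional Files/Tensorflow Models/from_saved_model")
    # Replace every variable by a constant holding its current value
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=["output"])

with tf.gfile.GFile("frozen_dncnn.pb", "wb") as f:
    f.write(frozen_graph_def.SerializeToString())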
Matlab to Onnx¶
Once you have trained your model using Matlab’s Deep Learning Toolbox, you should have either a network object in your workspace, or a .mat file saved in a folder.
Taking the latter case as an example, we consider the previously trained file “./Additional Files/Matlab Models/dncnn_matlab.mat”. In order to convert it to Onnx, you can either do it from Python (using Matlab’s engine) or directly in Matlab. Keep in mind that every command run with engine.evalc is a pure Matlab command.
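A sketch of the conversion from Python through the engine (exportONNXNetwork is part of Matlab’s Onnx converter support package; we assume the .mat file stores the network in a variable called net),
import matlab.engine

eng = matlab.engine.start_matlab()
# Load the trained network into Matlab's workspace, then export it to Onnx
eng.evalc("net_obj = load('./Additional Files/Matlab Models/dncnn_matlab.mat');")
eng.evalc("net = net_obj.net;")
eng.evalc("exportONNXNetwork(net, 'dncnn_matlab.onnx');")
The resulting workspace is shown below,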
Matlab's Workspace:
Name Size Bytes Class Attributes
ME 1x1 1091 MException
net 1x1 2258128 SeriesNetwork