Welcome to PyAbel’s documentation!¶
Contents:
PyAbel README¶
Introduction¶
PyAbel is a Python package that provides functions for the forward and inverse Abel transforms. The forward Abel transform takes a slice of a cylindrically symmetric 3D object and provides the 2D projection of that object. The inverse Abel transform takes a 2D projection and reconstructs a slice of the cylindrically symmetric 3D distribution.
Inverse Abel transforms play an important role in analyzing the projections of angle-resolved photoelectron/photoion spectra, plasma plumes, flames, and solar occultation.
PyAbel provides efficient implementations of several Abel transform algorithms, as well as related tools for centering images, symmetrizing images, and calculating properties such as the radial intensity distribution and the anisotropy parameters.

Transform methods¶
The numerical Abel transform is computationally intensive, and a basic numerical integration of the analytical equations does not reliably converge. Consequently, numerous algorithms have been developed in order to approximate the Abel transform in a reliable and efficient manner. So far, PyAbel includes the following transform methods:
- The BASEX* method of Dribinski and co-workers, which uses a Gaussian basis set to provide a quick, robust transform. This is one of the de facto standard methods in photoelectron/photoion spectroscopy.
- The Hansen–Law recursive method of Hansen and Law, which provides an extremely fast transform with low centerline noise.
- The Direct numerical integration of the analytical Abel transform equations, which is implemented in Cython for efficiency. In general, while the forward Abel transform is useful, the inverse Abel transform requires very fine sampling of features (lots of pixels in the image) for good convergence to the analytical result, and is included mainly for completeness and for comparison purposes. For the inverse Abel transform, other methods are generally more reliable.
- The Three Point* method of Dasch and co-workers, which provides a fast and robust transform by exploiting the observation that the underlying radial distribution is primarily determined by changes in the line-of-sight projection data in the neighborhood of each radial data point. This technique works very well in cases where the real difference between adjacent projections is much greater than the noise in the projections (i.e., where the raw data is not oversampled).
- (Planned implementation) The Fourier–Hankel method, which is computationally efficient, but contains significant centerline noise and is known to introduce artifacts.
- (Planned implementation) The Onion Peeling method.
- (Planned implementation) The POP (polar onion peeling) method. POP projects the image onto a basis set of Legendre-polynomial-based functions, which can greatly reduce the noise in the reconstruction. However, this method only applies to images that contain features at constant radii, i.e., it works for the spherical shells seen in photoelectron/ion spectra, but not for flames.

*Methods marked with an asterisk require the generation of basis sets. The first time each method is run for a specific image size, a basis set must be generated, which can take several seconds or minutes. However, this basis set is saved to disk (generally to the current directory) and can be reused, making subsequent transforms very efficient. Users who are transforming numerous images using these methods will want to keep this in mind and specify the directory containing the basis sets, as sketched below.
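For example, a minimal sketch of re-using a saved basis set by pointing the BASEX method at a dedicated directory (the ./basis_sets/ path is hypothetical; basis_dir is passed through to the method via the transform_options keyword):

import abel

# sketch: keep basis sets in one folder so they are generated once and
# re-used for every subsequent image of the same size
image = abel.tools.analytical.sample_image(n=501)
result = abel.transform(image, method='basex', direction='inverse',
                        transform_options={'basis_dir': './basis_sets/'})
inverse_abel = result['transform']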
Installation¶
PyAbel requires Python 2.7 or 3.3-3.5. Numpy and Scipy are also required, and Matplotlib is required to run the examples. If you don’t already have Python, we recommend an “all in one” Python package such as the Anaconda Python Distribution, which is available for free.
With setuptools¶
If you prefer the development version from GitHub, download it here, cd to the PyAbel directory, and use
python setup.py install
Or, if you wish to edit the PyAbel source code without re-installing each time
python setup.py develop
Example of use¶
Using PyAbel can be simple. The following Python code imports the PyAbel package, generates a sample image, performs a forward transform using the Hansen–Law method, and then an inverse transform using the Three Point method:
import abel
original = abel.tools.analytical.sample_image()
forward_abel = abel.transform(original, direction='forward', method='hansenlaw' )['transform']
inverse_abel = abel.transform(forward_abel, direction='inverse', method='three_point')['transform']
Note: the abel.transform() function returns a Python dict object, where the 2D Abel transform is accessed through the 'transform' key.
The results can then be plotted using Matplotlib:
import matplotlib.pyplot as plt
import numpy as np
fig, axs = plt.subplots(1, 2, figsize=(6, 4))
axs[0].imshow(forward_abel, clim=(0, np.max(forward_abel)*0.6), origin='lower', extent=(-1,1,-1,1))
axs[1].imshow(inverse_abel, clim=(0, np.max(inverse_abel)*0.4), origin='lower', extent=(-1,1,-1,1))
axs[0].set_title('Forward Abel Transform')
axs[1].set_title('Inverse Abel Transform')
plt.tight_layout()
plt.show()
Output:

Note
Additional examples can be viewed on the PyAbel examples page and even more are found in the PyAbel/examples directory.
Documentation¶
General information about the various Abel transforms available in PyAbel is available at the links above. The complete documentation for all of the methods in PyAbel is hosted at https://pyabel.readthedocs.org.
Support¶
If you have a question or suggestion about PyAbel, the best way to contact the PyAbel Developers Team is to open a new issue.
Contributing¶
We welcome suggestions for improvement! Either open a new Issue or make a Pull Request.
Contributing.md has more information on how to contribute, such as how to run the unit tests and how to build the documentation.
License¶
PyAbel is licensed under the MIT license, so it can be used for pretty much whatever you want! Of course, it is provided “as is” with absolutely no warranty.
Citation¶
First and foremost, please cite the paper(s) corresponding to the implementation of the Abel Transform that you use in your work. The references can be found at the links above.
If you find PyAbel useful in your work, it would bring us great joy if you would cite the project. [DOI coming soon!]
Have fun!
Contributing to PyAbel¶
PyAbel is an open source project and we welcome improvements! The easiest way to get started is to open a new issue.
If you would like to make a Pull Request, the following information may be useful.
Unit tests¶
Before submitting a Pull Request, be sure to run the unit tests. The test suite can be run from within the PyAbel package with
nosetests abel/tests/ --verbosity=2 --with-coverage --cover-package=abel
or, from any folder with
python -c "import abel.tests; abel.tests.run_cli(coverage=True)"
which performs an equivalent call.
Note that this requires that you have Nose and (optionally) Coverage installed. You can install these with
pip install nose
pip install coverage
Documentation¶
PyAbel uses Sphinx and Napoleon to process NumPy-style docstrings, and is synchronized to pyabel.readthedocs.org. To build the documentation locally, you will need Sphinx, the recommonmark package, and the sphinx_rtd_theme. You can install all of this using
pip install sphinx
pip install recommonmark
pip install sphinx_rtd_theme
Once you have that installed, then you can build the documentation using
cd PyAbel/doc/
make html
Then you can open doc/_build/html/index.html
to look at the documentation. Sometimes you need to use
make clean
make html
to clear out the old documentation and get things to re-build properly.
When you get tired of typing make html
every time you make a change to the documentation, it’s nice to use sphinx-autobuild to automatically update the documentation in your browser for you. So, install sphinx-autobuild using
pip install sphinx-autobuild
Now you should be able to
cd PyAbel/doc/
make livehtml
which should launch a browser window displaying the docs. When you save a change to any of the docs, the re-build should happen automatically and the docs should update in a matter of a few seconds.
Alternatively, restview is a nice way to preview the .rst
files.
Before merging¶
If possible, before merging your pull request, please rebase your fork onto the latest master of PyAbel. This can be done as explained in this post:
# Add the remote, call it "upstream" (only the first time)
git remote add upstream git@github.com:PyAbel/PyAbel.git
# Fetch all the branches of that remote into remote-tracking branches,
# such as upstream/master:
git fetch upstream
# Make sure that you're on your master branch
# or any other branch you are working on
git checkout master # or your other working branch
# Rewrite your master branch so that any commits of yours that
# aren't already in upstream/master are replayed on top of that
# other branch:
git rebase upstream/master
# push the changes to your fork
git push -f
See this wiki for more information.
Adding a new forward or inverse Abel implementation¶
We are always looking for new implementations of the forward or inverse Abel transform, so if you have an implementation that you would like to contribute to PyAbel, don’t hesitate to do so.
In order to allow a consistent user experience between different implementations, and to ensure overall code quality, please consider the following points in your pull request.
Naming conventions¶
The implementation named <implementation>, located under abel/<implementation>.py, should use the following naming system for top-level functions:
- <implementation>_transform : the core transform (when defined)
- _bs_<implementation> : the function that generates the basis sets (if necessary)
- bs_<implementation>_cached : the function that loads the basis sets from disk, and generates them if they are not found (if necessary)
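A minimal skeleton of a hypothetical abel/mymethod.py following this convention (the function bodies are placeholders to illustrate the structure, not a real transform):

# abel/mymethod.py -- hypothetical skeleton illustrating the naming convention
import os
import numpy as np

def mymethod_transform(IM, basis_dir='.', direction='inverse', verbose=False):
    """Core transform: operates on one right-side half-image."""
    D = bs_mymethod_cached(IM.shape[1], basis_dir=basis_dir, verbose=verbose)
    return IM.dot(D)  # placeholder for the actual reconstruction step

def _bs_mymethod(cols):
    """Generate the basis set (placeholder: identity matrix)."""
    return np.eye(cols)

def bs_mymethod_cached(cols, basis_dir='.', verbose=False):
    """Load the basis set from disk, generating and saving it if missing."""
    if basis_dir is not None:
        path = os.path.join(basis_dir, 'mymethod_basis_{}.npy'.format(cols))
        if os.path.exists(path):
            return np.load(path)
    D = _bs_mymethod(cols)
    if basis_dir is not None:
        np.save(path, D)
    return D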
Unit tests¶
To detect issues early, the submitted implementation should have the following properties and pass the corresponding unit tests:
- The reconstruction has the same shape as the original image. Currently, all transform methods operate on odd-width images and should raise an exception if provided with an even-width image.
- Given an array of all zeros, the reconstruction should also be an array of all zeros.
- The implementation should be able to calculate the inverse (or forward) transform of a Gaussian function defined by a standard deviation sigma, with better than 10% relative error with respect to the analytical solution for 0 < r < 2*sigma.
Unit tests for a given implementation are located under abel/tests/test_<implementation>.py, which should contain at least the following three functions: test_<implementation>_shape, test_<implementation>_zeros, and test_<implementation>_gaussian. See abel/tests/test_basex.py for a concrete example.
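As an illustration, a rough sketch of what test_<implementation>_zeros might look like for a hypothetical mymethod module (the real tests in abel/tests/test_basex.py are the authoritative reference):

import numpy as np
from numpy.testing import assert_allclose
from abel.mymethod import mymethod_transform  # hypothetical implementation

def test_mymethod_zeros():
    # an all-zero image should transform to an all-zero image of the same shape
    n = 21
    x = np.zeros((n, n), dtype='float32')
    recon = mymethod_transform(x, direction='inverse')
    assert recon.shape == x.shape
    assert_allclose(recon, 0)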
abel package¶
abel.transform module¶
-
abel.transform.
transform
(IM, direction=u'inverse', method=u'three_point', center=u'none', verbose=True, symmetry_axis=None, use_quadrants=(True, True, True, True), recast_as_float64=True, transform_options={}, center_options={})[source]¶ This is the main function of PyAbel, providing both forward and inverse Abel transforms for full images. In addition, this function can perform image centering and symmetrization.
Parameters: - IM (a NxM numpy array) – This is the image to be transformed
- direction (str) –
The type of Abel transform to be performed.
forward
- A ‘forward’ Abel transform takes a (2D) slice of a 3D image and returns the 2D projection.
inverse
- An ‘inverse’ Abel transform takes a 2D projection and reconstructs a 2D slice of the 3D image.
The default is
inverse
. - method (str) –
specifies which numerical approximation to the Abel transform should be employed (see below). The options are
hansenlaw
- the recursive algorithm described by Hansen and Law
basex
- the Gaussian “basis set expansion” method of Dribinski et al.
direct
- a naive implementation of the analytical formula by Roman Yurchuk.
three_point
- the three-point transform of Dasch and co-workers
- center (tuple or str) –
If a tuple (float, float) is provided, this specifies the image center in (y,x) (row, column) format. A value None can be supplied if no centering is desired in one dimension, for example ‘center=(None, 250)’. If a string is provided, an automatic centering algorithm is used
image_center
- center is assumed to be the center of the image.
slice
- the center is found by comparing slices in the horizontal and vertical directions
com
- the center is calculated as the center of mass
gaussian
- the center is found using a fit to a Gaussian function. This only makes sense if your data looks like a Gaussian.
none
- (Default) No centering is performed. An image with an odd number of columns must be provided.
- verbose (boolean) – True/False to determine if non-critical output should be printed.
- symmetry_axis (int or tuple) – Symmetrize the image about the numpy axis 0 (vertical), 1 (horizontal), (0,1) (both axes)
- use_quadrants (tuple of 4 booleans) – select quadrants to be used in the analysis: (Q0,Q1,Q2,Q3).
Quadrants are numbered counter-clockwise from upper right.
See note below for description of quadrants.
Default is
(True, True, True, True)
, which uses all quadrants. - recast_as_float64 (boolean) – True/False that determines if the input image should be recast to
float64
. Many images are imported in other formats (such as uint8 or uint16) and this does not always play well with the transform algorithms. This should probably always be set to True. (Default is True.) - transform_options (tuple) – Additional arguments passed to the individual transform functions. See the documentation for the individual transform method for options.
- center_options (tuple) – Additional arguments to be passed to the centering function.
Note
- Quadrant averaging
The quadrants can be averaged using the
use_quadrants
keyword in order to provide better data quality. The quadrants are numbered starting from Q0 in the upper right and proceeding counter-clockwise:
 +--------+--------+
 | Q1   * | *   Q0 |
 |   *    |    *   |
 |  *     |     *  |                               AQ1 | AQ0
 +--------o--------+ --(inverse Abel transform)--> ----o----
 |  *     |     *  |                               AQ2 | AQ3
 |   *    |    *   |
 | Q2  *  | *   Q3 |          AQi == inverse Abel transform
 +--------+--------+                 of quadrant Qi
Three cases are possible:
symmetry_axis = 0 (vertical):
Combine:  Q01 = Q0 + Q1, Q23 = Q2 + Q3
inverse image   AQ01 | AQ01
                -----o-----
                AQ23 | AQ23
symmetry_axis = 1 (horizontal):
Combine:  Q12 = Q1 + Q2, Q03 = Q0 + Q3
inverse image   AQ12 | AQ03
                -----o-----   (top and bottom equivalent)
                AQ12 | AQ03
symmetry_axis = (0, 1) (both):
Combine:  Q = Q0 + Q1 + Q2 + Q3
inverse image   AQ | AQ
                ---o---   (all quadrants equivalent)
                AQ | AQ
Returns: results – The transform function returns a dictionary of results, depending on the options selected: results['transform']
- (always returned) is the 2D forward/reverse Abel transform
results['radial_intensity']
- is not currently implemented
results['residual']
- is not currently implemented
Return type: dict
Notes
As mentioned above, PyAbel offers several different approximations to the exact Abel transform. All of the methods should produce similar results, but depending on the level and type of noise found in the image, certain methods may perform better than others. Please see the “Transform Methods” section of the documentation for complete information.
hansenlaw
This “recursive algorithm” produces reliable results and is quite fast (~0.1 sec for a 1001x1001 image). It makes no assumptions about the data (apart from cylindrical symmetry). It tends to require that the data is finely sampled for good convergence.
E. W. Hansen and P.-L. Law, “Recursive methods for computing the Abel transform and its inverse”, J. Opt. Soc. Am. A 2, 510-520 (1985) http://dx.doi.org/10.1364/JOSAA.2.000510
basex
*The “basis set expansion” algorithm describes the data in terms of Gaussian functions, which themselves can be Abel transformed analytically. Because the Gaussian functions are approximately the size of each pixel, this method also does not make any assumption about the shape of the data. This method is one of the de facto standards in photoelectron/photoion imaging.
Dribinski et al, 2002 (Rev. Sci. Instrum. 73, 2634) http://dx.doi.org/10.1063/1.1482156
direct
- This method attempts a direct integration of the Abel transform integral. It makes no assumptions about the data (apart from cylindrical symmetry), but it typically requires fine sampling to converge. Such methods are typically inefficient, but thanks to this Cython implementation (by Roman Yurchuk), this ‘direct’ method is competitive with the other methods.
three_point
*The “Three Point” Abel transform method exploits the observation that the value of the Abel inverted data at any radial position r is primarily determined from changes in the projection data in the neighborhood of r. This method is also very efficient once it has generated the basis sets.
Dasch, 1992 (Applied Optics, Vol 31, No 8, March 1992, Pg 1146-1152).
The methods marked with a * generate basis sets. The first time they are run for a new image size, it takes seconds to minutes to generate the basis set. However, this basis set is saved to disk and can be reloaded, meaning that future transforms are performed much more quickly.
abel.basex module¶
-
abel.basex.
basex_core_transform
(rawdata, M_vert, M_horz, Mc_vert, Mc_horz, vert_left, horz_right, dr=1.0)[source]¶ This is the internal function that does the actual BASEX transform. It requires that the matrices of basis set coefficients be passed.
Parameters: - rawdata (NxM numpy array) – the raw image.
- M_vert_etc. (Numpy arrays) – 2D arrays given by the basis set calculation function
- dr (float) – pixel size. This only affects the absolute scaling of the output.
Returns: IM – The Abel-transformed image, a slice of the 3D distribution
Return type: NxM numpy array
-
abel.basex.
basex_transform
(data, nbf=u'auto', basis_dir=u'./', dr=1.0, verbose=True, direction=u'inverse')[source]¶ This function performs the BASEX (BAsis Set EXpansion) Abel Transform. It works on a “right side” image. I.e., it works on just half of a cylindrically symmetric object and
data[0,0]
should correspond to a central pixel. To perform a BASEX transform on a whole image, use abel.transform(image, method='basex')
This BASEX implementation only works with images that have an odd-integer width.
Note: only the inverse transform is currently implemented.
Parameters: - data (a NxM numpy array) – The image to be inverse transformed. The width (M) must be odd and
data[:,0]
should correspond to the central column of the image. - nbf (str or list) – number of basis functions. If
nbf='auto'
, it is set to (n//2 + 1)
. This is what you should always use, since this BASEX implementation does not work reliably in other situations! In the future, you could use a list, in the format [nbf_vert, nbf_horz]. - basis_dir (str) – path to the directory for saving / loading the basis set coefficients. If None, the basis set will not be saved to disk.
- dr (float) – size of one pixel in the radial direction. This only affects the absolute scaling of the transformed image.
- verbose (boolean) – Determines if statements should be printed.
- direction (str) – The type of Abel transform to be performed.
Currently only accepts value
'inverse'
.
Returns: recon – the transformed (half) image
Return type: NxM numpy array
-
abel.basex.
get_bs_basex_cached
(n_vert, n_horz, nbf=u'auto', basis_dir=u'.', verbose=False)[source]¶ Internal function.
Gets BASEX basis sets, using the disk as a cache (i.e. load from disk if they exist, if not calculate them and save a copy on disk). To prevent saving the basis sets to disk, set
basis_dir=None
.Parameters: - n_horz (n_vert,) – Abel inverse transform will be performed on a
n_vert x n_horz
area of the image - nbf (int or list) – number of basis functions. If
nbf='auto'
, n_horz
is set to (n//2 + 1)
. - basis_dir (str) – path to the directory for saving / loading the basis set coefficients. If None, the basis sets will not be saved to disk.
Returns: M_vert, M_horz, Mc_vert, Mc_horz, vert_left, horz_right – the matrices that compose the basis set.
Return type: numpy arrays
abel.hansenlaw module¶
-
abel.hansenlaw.
hansenlaw_transform
(IM, dr=1, direction=u'inverse')[source]¶ Forward/Inverse Abel transformation using the algorithm of Hansen and Law J. Opt. Soc. Am. A 2, 510-520 (1985) equation 2a:
\[f(r) = \frac{1}{\pi} \int_{r}^{\infty} \frac{g^\prime(R)}{\sqrt{R^2-r^2}} dR,\] where
\(f(r)\) is the reconstructed image (source) function, \(g'(R)\) is the derivative of the projection (measured) function
Evaluation follows Eqs. (15 or 17), using (16a), (16b), and (16c or 18) of the Hansen and Law paper. For the full image transform, use
abel.transform
. For the inverse Abel transform of image g:
f = abel.transform(g, direction="inverse", method="hansenlaw")["transform"]
For the forward Abel transform of image f:
g = abel.transform(f, direction="forward", method="hansenlaw")["transform"]
This function performs the Hansen-Law transform on only one “right-side” image, typically one quadrant of the full image:
Qtrans = abel.hansenlaw.hansenlaw_transform(Q, direction="inverse")
The recursion proceeds from the outer edge of the image toward the image centre (origin), i.e. when
n=N-1
,R=Rmax
, and whenn=0
,R=0
. This fits well with processing the image one quadrant (chosen orientation to be rightside-top), or one right-half image at a time.Parameters: - IM (2D np.array) – One quadrant (or half) of the image oriented top-right.
- dr (float) – Sampling size (=1 for pixel images), used for Jacobian scaling
- direction (string ('forward' or 'inverse')) –
forward
orinverse
Abel transform
Returns: AIM – forward/inverse Abel transform image
Return type: 2D numpy array
Note
Image should be a right-side image, like this:
.   +--------      +--------+
.   |      *       |    *   |
.   |       *      |   *    |  <---------- IM
.   |        *     |  *     |
.   +--------      o--------+
.   |        *     |  *     |
.   |       *      |   *    |
.   |      *       |    *   |
.   +--------      +--------+
Image centre
o
should be within a pixel (i.e. an odd number of columns). Use abel.tools.center.center_image(IM, center='com', odd_size=True)
abel.three_point module¶
-
abel.three_point.
OP_D
(i, j)[source]¶ Calculates the three-point Abel inversion operator D_ij, following Eq. (6) in Dasch 1992 (Applied Optics). The original reference contains several typos. One correction is made in function OP1, following Karl Martin’s PhD thesis. See here: https://www.lib.utexas.edu/etd/d/2002/martinkm07836/martinkm07836.pdf
-
abel.three_point.
get_bs_three_point_cached
(col, basis_dir='.', verbose=False)[source]¶ Internal function.
Gets the three_point operator matrix corresponding to the specified image size, using the disk as a cache (i.e., load from disk if it exists; if not, calculate it and save a copy on disk).
Parameters: - col (integer) – Width of image to be inverted using three_point method. Three_point operator matrix will be of size (col x col)
- basis_dir (string) – path to the directory for saving / loading the three_point operator matrix. If None, the operator matrix will not be saved to disk.
- verbose (True/False) – Set to True to see more output for debugging
-
abel.three_point.
three_point
(data, center, dr=1.0, basis_dir='./', verbose=False, direction='inverse')[source]¶ This function splits the image into two halves, sends each half to three_point_transform(), stitches the output back together, and returns the full transform to the user.
Parameters: - data (NxM numpy array) – The raw data is presumed to be symmetric about the vertical axis.
- center (integer or tuple (x,y)) – The location of the symmetry axis (center column index of the image) or the center of the image in (x,y) format.
- dr (float) – Size of one pixel in the radial direction
- basis_dir (string) – path to the directory for saving / loading the three_point operator matrix. If None, the operator matrix will not be saved to disk.
- verbose (True/False) – Set to True to see more output for debugging
- direction (str) – The type of Abel transform to be performed. Currently only accepts value ‘inverse’
Returns: inv_IM – Abel inversion of IM - a rows x cols array
Return type: numpy array
-
abel.three_point.
three_point_core_transform
(IM, D)[source]¶ This is the internal function that performs the actual three_point transform. It requires that the matrix of basis set coefficients, D, be passed as a parameter.
Parameters: - IM (numpy array) – Raw data - a rows x cols array
- D (basis sets) – generated by the get_bs_three_point_cached() function
Returns: inv_IM – Abel inversion of IM - a rows x cols array
Return type: numpy array
-
abel.three_point.
three_point_transform
(IM, basis_dir='.', direction='inverse', verbose=False)[source]¶ Inverse Abel transformation using the algorithm of: Dasch, Applied Optics, Vol. 31, No. 8, 1146-1152 (1992).
Parameters: IM (numpy array) – Raw data - a rows x cols array Returns: inv_IM – Abel inversion of IM - a rows x cols array Return type: numpy array
abel.direct module¶
-
abel.direct.
simpson_rule_wrong
(f, x=None, dx=None, axis=1, **args)[source]¶ This function computes the Simpson rule (https://en.wikipedia.org/wiki/Simpson%27s_rule#Python) for both odd and even numbers of intervals, which is not technically valid.
-
abel.direct.
direct_transform
(fr, dr=None, r=None, direction=u'inverse', derivative=<function gradient>, int_func=<function simpson_rule_wrong>, correction=True, backend=u'C')[source]¶ This algorithm does a direct computation of the Abel transform
- integration near the singular value is done analytically
- integration further from the singular value with the Simpson rule.
Parameters: - fr (1d or 2d numpy array) – input array to which the direct/inverse Abel transform will be applied. For a 2d array, the first dimension is assumed to be the z axis and the second the r axis.
- dr (float) – spatial mesh resolution (optional, default to 1.0)
- r (1D ndarray) – the spatial mesh (optional)
- derivative (callable) – a function that can return the derivative of the fr array with respect to r (only used in the inverse Abel transform).
- inverse (boolean) – If True, the inverse Abel transform is applied; otherwise, the forward Abel transform is applied.
- correction (boolean) – if False, integration is performed with the Simpson rule and the pixel where the weighting function has a singular value is ignored; if True, in addition to the Simpson-rule integration, the integration near the singular value is done analytically, assuming piecewise-linear data.
Returns: out – the direct or the inverse Abel transform.
Return type: 1d or 2d numpy array of the same shape as fr
abel.tools.analytical module¶
-
abel.tools.analytical.
sample_image
(n=361, name='dribinski', sigma=2, temperature=200)[source]¶ Sample images, made up of Gaussian functions
Parameters: - n (integer) – image size n rows x n cols
- name (str) – one of “dribinski” or “Ominus”
- sigma (float) – Gaussian 1/e width (pixels)
- temperature (float) – for ‘Ominus’ only anion levels have Boltzmann population weight (2J+1) exp(-177.1 h c 100/k/temperature)
Returns: IM – image
Return type: 2D np.array
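For example, generating and displaying one of the sample images (only the parameters documented above are used):

import abel
import matplotlib.pyplot as plt

im = abel.tools.analytical.sample_image(n=361, name='dribinski')
plt.imshow(im, origin='lower')
plt.show()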
-
class
abel.tools.analytical.
StepAnalytical
(n, r_max, r1, r2, A0=1.0, ratio_valid_step=1.0, symmetric=True)[source]¶
-
class
abel.tools.analytical.
GaussianAnalytical
(n, r_max, sigma=1.0, A0=1.0, ratio_valid_sigma=2.0, symmetric=True)[source]¶
-
abel.tools.analytical.
abel_step_analytical
(r, A0, r0, r1)[source]¶ Direct Abel transform of a step function located between r0 and r1, with a height A0
A0 +                  +-------------+
   |                  |             |
   |                  |             |
 0 | -----------------+             +-------------
   +------------------+-------------+------------>
   0                  r0            r1           r axis
This function is mostly used for unit testing the inverse Abel transform
Parameters: - r1 (r0,) – vecor of positions along the r axis. Must start with 0.
- r1 – positions of the step along the r axis
- A0 (float or 1D array:) – height of the step. If 1D array the height can be variable along the Z axis
Returns: Return type: 1D array if A0 is a float, a 2D array otherwise
abel.tools.center module¶
-
abel.tools.center.
find_center
(IM, method=u'image_center', verbose=False, **kwargs)[source]¶ Parameters: - IM (2D np.array) – image data
- method (str) –
this determines how the center should be found. The options are:
image_center
- the center of the image is used as the center. The trivial result.
com
- the center is found as the center of mass
gaussian
- the center is extracted by a fit to a Gaussian function. This is probably only appropriate if the data resembles a Gaussian.
slice
- the image is broken into slices, and these slices compared for symmetry.
Returns: out – coordinate of the center of the image in the (y,x) format (row, column)
Return type: (float, float)
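A short usage sketch (the sample image is just a stand-in for real data):

import abel

im = abel.tools.analytical.sample_image(n=361)
row_center, col_center = abel.tools.center.find_center(im, method='com')
print(row_center, col_center)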
-
abel.tools.center.
center_image
(IM, center=u'com', odd_size=True, verbose=False, **kwargs)[source]¶ Center the image using a custom center value or one of the automatic methods provided by the find_center function
Parameters: - IM (2D np.array) – The image data.
- center (tuple or str) –
center can either be (float, float), the coordinate of the center of the image in the (y,x) format (row, column)
Or, it can be a string, to specify an automatic centering method. The options are:
image_center
- the center of the image is used as the center. The trivial result.
com
- the center is found as the center of mass
gaussian
- the center is extracted by a fit to a Gaussian function. This is probably only appropriate if the data resembles a Gaussian.
slice
- the image is broken into slices, and these slices compared for symmetry.
- odd_size (boolean) – if True, an image will be returned containing an odd number of columns. Most of the transform methods require this, so it’s best to set this to True if the image will subsequently be Abel transformed.
- crop (str) –
This determines how the image should be cropped. The options are:
maintain_size
- return image of the same size. Some of the original image may be lost and some regions may be filled with zeros.
valid_region
- return the largest image that can be created without padding. All of the returned image will correspond to the original image. However, portions of the original image will be lost. If you can tolerate clipping the edges of the image, this is probably the method to choose.
maintain_data
- the image will be padded with zeros such that none of the original image will be cropped.
Returns: out – Centered image
Return type: 2D np.array
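For instance, a brief sketch of centering an image before transforming it (the file name my_image.txt is hypothetical):

import numpy as np
import abel

raw = np.loadtxt('my_image.txt')  # hypothetical raw data file
centered = abel.tools.center.center_image(raw, center='com', odd_size=True)
recon = abel.transform(centered, method='hansenlaw', direction='inverse')['transform']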
-
abel.tools.center.
set_center
(data, center, crop=u'maintain_size', verbose=False)[source]¶ Move image center to mid-point of image
Parameters: - data (2D np.array) – The image data
- center (tuple) – image pixel coordinate center (row, col)
- crop (str) –
This determines how the image should be cropped. The options are:
maintain_size
- return image of the same size. Some of the original image may be lost and some regions may be filled with zeros.
valid_region
- return the largest image that can be created without padding. All of the returned image will correspond to the original image. However, portions of the original image will be lost. If you can tolerate clipping the edges of the image, this is probably the method to choose.
maintain_data
- the image will be padded with zeros such that none of the original image will be cropped.
- verbose (boolean) – True: print diagnostics
-
abel.tools.center.
find_center_by_center_of_mass
(IM, verbose=False, round_output=False, **kwargs)[source]¶ Find image center by calculating its center of mass
-
abel.tools.center.
find_center_by_center_of_image
(data, verbose=False, **kwargs)[source]¶ Find image center simply from its dimensions.
-
abel.tools.center.
find_center_by_gaussian_fit
(IM, verbose=False, round_output=False, **kwargs)[source]¶ Find image center by fitting the summation of the data along the x and y axes to two 1D Gaussian functions
-
abel.tools.center.
axis_slices
(IM, radial_range=(0, -1), slice_width=10)[source]¶ returns vertical and horizontal slice profiles, summed across slice_width.
Parameters: - IM (2D np.array) – image data
- radial_range (tuple floats) – (rmin, rmax) range to limit data
- slice_width (integer) – width of the image slice, default 10 pixels
Returns: top, bottom, left, right – image slices oriented in the same direction
Return type: 1D np.arrays shape (rmin:rmax, 1)
-
abel.tools.center.
find_image_center_by_slice
(IM, slice_width=10, radial_range=(0, -1), axis=(0, 1), **kwargs)[source]¶ Center image by comparing opposite side, vertical (
axis=0
) and/or horizontal slice (axis=1
) profiles. To center along both axis, useaxis=(0,1)
.Parameters: - IM (2D np.array) – The image data.
- slice_width (integer) – Sum together this number of rows (cols) to improve signal, default 10.
- radial_range (tuple) – (rmin,rmax): radial range [rmin:rmax] for slice profile comparison.
- axis (integer or tuple) – Center along
axis=0
(vertical), oraxis=1
(horizontal), oraxis=(0,1)
(both vertical and horizontal).
Returns: (vertical_shift, horizontal_shift) – (axis=0 shift, axis=1 shift)
Return type: tuple of floats
abel.tools.math module¶
-
abel.tools.math.
gradient
(f, x=None, dx=1, axis=-1)[source]¶ Return the gradient of a 1- or 2-dimensional array. The gradient is computed using central differences in the interior and first differences at the boundaries.
Irregular sampling is supported (it isn’t supported by np.gradient)
Parameters: - f (1d or 2d numpy array) – Input array.
- x (array_like, optional) – Points where the function f is evaluated. It must be of the same
length as
f.shape[axis]
. If None, regular sampling is assumed (see dx) - dx (float, optional) – If x is None, spacing given by dx is assumed. Default is 1.
- axis (int, optional) – The axis along which the difference is taken.
Returns: out – Returns the gradient along the given axis.
Return type: array_like
Notes
To-Do: implement smooth noise-robust differentiators for use on experimental data. http://www.holoborodko.com/pavel/numerical-methods/numerical-derivative/smooth-low-noise-differentiators/
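A small sketch of using this gradient with irregularly sampled data (the values are purely illustrative):

import numpy as np
from abel.tools.math import gradient

x = np.sort(np.random.uniform(0, 2 * np.pi, 50))  # irregularly spaced points
f = np.sin(x)
df = gradient(f, x)  # approximates cos(x), despite the uneven spacing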
-
abel.tools.math.
gaussian
(x, a, mu, sigma, c)[source]¶ Gaussian function
\(f(x)=a e^{-(x - \mu)^2 / (2 \sigma^2)} + c\)
ref: https://en.wikipedia.org/wiki/Gaussian_function
Parameters: - x (1D np.array) – coordinate
- a (float) – the height of the curve’s peak
- mu (float) – the position of the center of the peak
- sigma (float) – the standard deviation, sometimes called the Gaussian RMS width
- c (float) – non-zero background
Returns: out – the Gaussian profile
Return type: 1D np.array
abel.tools.polar module¶
-
abel.tools.polar.
reproject_image_into_polar
(data, origin=None, Jacobian=False, dr=1, dt=None)[source]¶ Reprojects a 2D numpy array (
data
) into a polar coordinate system. “origin” is a tuple of (x0, y0) relative to the bottom-left image corner, and defaults to the center of the image.Parameters: - data (2D np.array) –
- origin (tuple) – The coordinate of the image center, relative to bottom-left
- Jacobian (boolean) – Include
r
intensity scaling in the coordinate transform. This should be included to account for the changing pixel size that occurs during the transform. - dr (float) – Radial coordinate spacing for the grid interpolation tests show that there is not much point in going below 0.5
- dt (float) – Angular coordinate spacing (in degrees)
if
dt=None
, dt will be set such that the number of theta values is equal to the height of the image.
Returns: - output (2D np.array) – The polar image (r, theta)
- r_grid (2D np.array) – meshgrid of radial coordinates
- theta_grid (2D np.array) – meshgrid of theta coordinates
Notes
Adapted from: http://stackoverflow.com/questions/3798333/image-information-along-a-polar-coordinate-system
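A brief usage sketch (the sample image stands in for real data):

import abel

im = abel.tools.analytical.sample_image(n=361)
polar, r_grid, theta_grid = abel.tools.polar.reproject_image_into_polar(im, Jacobian=True)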
-
abel.tools.polar.
index_coords
(data, origin=None)[source]¶ Creates x & y coordinates for the indices in a numpy array
Parameters: - data (numpy array) – 2D data
- origin ((x,y) tuple) – defaults to the center of the image. Specify origin=(0,0) to set the origin to the bottom-left corner of the image.
Returns: x, y
Return type: arrays
-
abel.tools.polar.
cart2polar
(x, y)[source]¶ Transform Cartesian coordinates to polar
Parameters: y (x,) – Cartesian coordinates Returns: r, theta – Polar coordinates Return type: floats or arrays
abel.tools.symmetry module¶
-
abel.tools.symmetry.
get_image_quadrants
(IM, reorient=True, symmetry_axis=None, use_quadrants=(True, True, True, True))[source]¶ Given an image (m,n) return its 4 quadrants Q0, Q1, Q2, Q3 as defined below.
Parameters: - IM (2D np.array) – Image data shape (rows, cols)
- reorient (boolean) – Reorient quadrants to match the orientation of Q0 (top-right)
- symmetry_axis (int or tuple) – can have values of
None
,0
,1
, or(0,1)
and specifies no symmetry, vertical symmetry axis, horizontal symmetry axis, and both vertical and horizontal symmetry axes. Quadrants are averaged. See Note. - use_quadrants (boolean tuple) – Include quadrant (Q0, Q1, Q2, Q3) in the symmetry combination(s) and final image
Returns: Q0, Q1, Q2, Q3 – shape: (
rows//2+rows%2, cols//2+cols%2
) all oriented in the same direction as Q0 ifreorient=True
Return type: tuple of 2D np.arrays
Notes
The symmetry_axis keyword averages quadrants like this:
 +--------+--------+
 | Q1   * | *   Q0 |
 |   *    |    *   |
 |  *     |     *  |                 cQ1 | cQ0
 +--------o--------+ --(output) -->  ----o----
 |  *     |     *  |                 cQ2 | cQ3
 |   *    |    *   |
 | Q2  *  | *   Q3 |      cQi == averaged combined quadrants
 +--------+--------+

 symmetry_axis = None - individual quadrants
 symmetry_axis = 0 (vertical) - average Q0+Q1, and Q2+Q3
 symmetry_axis = 1 (horizontal) - average Q1+Q2, and Q0+Q3
 symmetry_axis = (0, 1) (both) - combine and average all 4 quadrants
The end results look like this:
(0) symmetry_axis = None

    returned image   Q1 | Q0
                     ----o----
                     Q2 | Q3

(1) symmetry_axis = 0

    Combine:  Q01 = Q0 + Q1, Q23 = Q2 + Q3

    returned image   Q01 | Q01
                     -----o-----
                     Q23 | Q23

(2) symmetry_axis = 1

    Combine:  Q12 = Q1 + Q2, Q03 = Q0 + Q3

    returned image   Q12 | Q03
                     -----o-----
                     Q12 | Q03

(3) symmetry_axis = (0, 1)

    Combine all quadrants:  Q = Q0 + Q1 + Q2 + Q3

    returned image   Q | Q
                     ---o---   all quadrants equivalent
                     Q | Q
-
abel.tools.symmetry.
put_image_quadrants
(Q, odd_size=True, symmetry_axis=None)[source]¶ Reassemble an image from its 4 quadrants Q = (Q0, Q1, Q2, Q3). This is the reverse process to get_image_quadrants(reorient=True).
Note: the quadrants should all be oriented as Q0, the upper right quadrant
Parameters: - Q (tuple of np.array (Q0, Q1, Q2, Q3)) –
Image quadrants all oriented as Q0 shape (
rows//2+rows%2, cols//2+cols%2
+--------+--------+
| Q1   * | *   Q0 |
|   *    |    *   |
|  *     |     *  |
+--------o--------+
|  *     |     *  |
|   *    |    *   |
| Q2  *  | *   Q3 |
+--------+--------+
- odd_size (boolean) – Whether the final image has an odd or even pixel size. An odd size will trim 1 row from Q1, Q0, and 1 column from Q1, Q2.
- symmetry_axis (int or tuple) –
impose image symmetry
symmetry_axis = 0 (vertical) - Q0 == Q1 and Q3 == Q2
symmetry_axis = 1 (horizontal) - Q2 == Q1 and Q3 == Q0
Returns: IM – Reassembled image of shape (rows, cols):
symmetry_axis =  None          0            1          (0,1)

 Q1 | Q0       Q1 | Q1       Q1 | Q0       Q1 | Q1
 ----o----  or ----o----  or ----o----  or ----o----
 Q2 | Q3       Q2 | Q2       Q1 | Q0       Q1 | Q1
Return type: np.array
abel.tools.vmi module¶
-
abel.tools.vmi.
angular_integration
(IM, origin=None, Jacobian=False, dr=1, dt=None)[source]¶ Angular integration of the image.
Returns the one-dimensional intensity profile as a function of the radial coordinate.
Parameters: - IM (2D np.array) – The data image.
- origin (tuple) – Image center coordinate relative to bottom-left corner
defaults to
rows//2+rows%2,cols//2+cols%2
. - Jacobian (boolean) – Include \(r\sin(\theta)\) in the angular sum (integration).
Also,
Jacobian=True
is passed toabel.tools.polar.reproject_image_into_polar()
, which includes another value ofr
, thus providing the appropriate total Jacobian of \(r^2\sin(\theta)\). - dr (float) – Radial coordinate grid spacing, in pixels (default 1).
- dt (float) – Theta coordinate grid spacing in degrees.
if
dt=None
, dt will be set such that the number of theta values is equal to the height of the image (which should typically ensure good sampling.)
Returns: - r (1D np.array) – radial coordinates
- speeds (1D np.array) – Integrated intensity array (vs radius).
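For example, a minimal sketch producing a speed distribution from a transformed sample image:

import abel

im = abel.tools.analytical.sample_image(n=361)
recon = abel.transform(im, method='hansenlaw', direction='inverse')['transform']
r, speeds = abel.tools.vmi.angular_integration(recon, dr=1)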
-
abel.tools.vmi.
calculate_angular_distributions
(IM, radial_ranges=None)[source]¶ Intensity variation in the angular coordinate, theta.
This function is the theta-coordinate complement to
abel.tools.vmi.angular_integration()
, optionally (and more usefully) returning intensity vs. angle for defined radial ranges.
Parameters: - IM (2D np.array) – Image data
- radial_ranges (list of tuples) –
[(r0, r1), (r2, r3), ...]
Evaluate the intensity vs angle for the radial rangesr0_r1
,r2_r3
, etc.
Returns: - intensity_vs_theta (2D np.array) – Intensity vs angle distribution for each selected radial range.
- theta (1D np.array) – Angle coordinates, referenced to vertical direction.
-
abel.tools.vmi.
anisotropy_parameter
(theta, intensity, theta_ranges=None)[source]¶ Evaluate anisotropy parameter beta, for I vs theta data.
\(I = \frac{\sigma_{total}}{4\pi} \left[ 1 + \beta P_2(\cos\theta) \right]\)     Eq. (1)
where
P2(x)=(3x^2-1)/2
is a 2nd order Legendre polynomial.Cooper and Zare “Angular distribution of photoelectrons” J Chem Phys 48, 942-943 (1968)
Parameters: - theta (1D np.array) – Angle coordinates, referenced to the vertical direction.
- intensity (1D np.array) – Intensity variation (with angle)
- theta_ranges (list of tuples) – Angular ranges over which to fit
[(theta1, theta2), (theta3, theta4)]
. Allows data to be excluded from fit
Returns: - (beta, error_beta) (tuple of floats) – The anisotropy parameters and the errors associated with each one.
- (amplitude, error_amplitude) (tuple of floats) – Amplitude of signal and an error for each amplitude. Compare this with the data to check the fit.
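A hedged sketch combining calculate_angular_distributions() with this function (the radial range of 70-100 pixels is arbitrary and should be matched to the feature of interest in your data):

import abel

im = abel.tools.analytical.sample_image(n=361)
recon = abel.transform(im, method='hansenlaw', direction='inverse')['transform']
intensity, theta = abel.tools.vmi.calculate_angular_distributions(
    recon, radial_ranges=[(70, 100)])
(beta, beta_err), (amp, amp_err) = abel.tools.vmi.anisotropy_parameter(
    theta, intensity[0])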
abel.benchmark module¶
-
class
abel.benchmark.
AbelTiming
(n=[301, 501], n_max_bs=700, transform_repeat=1)[source]¶ Bases:
object
-
abel.benchmark.
is_symmetric
(arr, i_sym=True, j_sym=True)[source]¶ Takes in an array of shape (n, m) and checks whether it is symmetric
Parameters: - arr (1D or 2D array) –
- i_sym (boolean) – check symmetry with respect to the 1st axis
- j_sym (boolean) – check symmetry with respect to the 2nd axis
Returns: - a binary array with the symmetry condition for the corresponding quadrants.
- The global validity can be checked with array.all()
- Note: if both i_sym=True and j_sym=True, the input array is checked for polar symmetry.
- See https://github.com/PyAbel/PyAbel/issues/34#issuecomment-160344809 for the definition of the center of the image.
-
abel.benchmark.
absolute_ratio_benchmark
(analytical, recon, kind=u'inverse')[source]¶ Check the absolute ratio between an analytical function and the result of an inverse Abel reconstruction.
Parameters: - analytical (one of the classes from abel.analytical, initialized) –
- recon (1D ndarray) – a reconstruction (i.e. inverse Abel transform) given by some PyAbel implementation
Transform Methods¶
Contents:
Comparison of Abel Transform Methods¶
Introduction¶
Each new Abel transform method claims to be the best, providing a lower-noise, more accurate transform. The point of PyAbel is to provide an easy platform to try several Abel transform methods and determine which provides the best results for a specific dataset.
So far, we have found that for a high-quality dataset, all of the transform methods produce good results.
Speed benchmarks¶
The abel.benchmark.AbelTiming
class provides the ability to benchmark the relative speed of the Abel transform algorithms.
Transform quality¶
...coming soon! ...
BASEX¶
Introduction¶
The BASEX (BAsis Set EXpansion) transform utilizes well-behaved functions (i.e., functions which have a known analytic Abel inverse) to transform images. In the current iteration of PyAbel, these functions (called basis functions) are modified Gaussian-type functions. This technique was developed in 2002 at UCLA by Dribinski, Ossadtchi, Mandelshtam, and Reisler [Dribinski2002].
How it works¶
This technique is based on expressing line-of-sight projection images (raw_data
) as sums of functions which have known analytic Abel inverses. The provided raw images are expanded in a basis set composed of these basis functions with some weighting coefficients determined through a least-squares fitting process.
These weighting coefficients are then applied to the (known) analytic inverse of these basis functions, which directly provides the Abel inverse of the raw images. Thus, the transform can be completed using simple linear algebra.
In the current iteration of PyAbel, these basis functions are modified Gaussians (see Eqs. 14 and 15 in Dribinski et al. 2002). The process of evaluating these functions is computationally intensive, and the basis set generation process can take several seconds to minutes for larger images (larger than ~1000x1000 pixels). However, once calculated, these basis sets can be reused, and are therefore stored on disk and loaded quickly for future use. The transform then proceeds very quickly, since each raw image inversion is a simple matrix multiplication step.
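Conceptually, and ignoring the caching and numerical details of the actual PyAbel implementation, the per-row linear algebra amounts to a least-squares fit of the projection onto the Abel-projected basis functions, followed by a weighted sum of the analytically inverted basis functions (an illustrative sketch only):

import numpy as np

def basis_expansion_inverse(projection, G, F):
    # G: columns are the forward-Abel-transformed basis functions
    # F: columns are the corresponding original (inverse) basis functions
    # weighting coefficients from a least-squares fit to the projection row
    coeffs = np.linalg.lstsq(G, projection, rcond=None)[0]
    # the reconstructed slice is the same weighted sum of the original bases
    return F.dot(coeffs)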
When to use it¶
According to Dribinski et al. BASEX has several advantages:
- For synthesized noise-free projections, BASEX reconstructs an essentially exact and artifact-free image, eschewing the need for interpolation procedures, which may introduce additional errors or assumptions.
- BASEX is computationally cheap and only requires matrix multiplication once the basis sets have been generated and saved to disk.
- The current basis set is composed of the modified Gaussian functions which are highly localized, uniform in coverage, and sufficiently narrow. This allows for resolution of very sharp features in the raw data with the basis functions. Moreover, the reconstruction procedure does not contribute to noise in the reconstructed image; noise appears in the image only when it exists in the projection.
- Resolution of images reconstructed with BASEX are superior to those obtained with the Fourier-Hankel method, particularly for noisy projections. However, to obtain maximum resolution, it is important to properly center the projections prior to transforming with BASEX.
- BASEX reconstructed images have an exact analytical expression, which allows for an analytical, higher resolution, calculation of the speed distribution, without increasing computation time.
How to use it¶
The recommended way to complete the inverse Abel transform using the BASEX algorithm for a full image is to use the abel.transform()
function:
abel.transform(myImage, method='basex', direction='inverse')
Note that the forward BASEX transform is not yet implemented in PyAbel.
If you would like to access the BASEX algorithm directly (to transform a right-side half-image), you can use abel.basex.basex_transform()
.
Notes¶
More information about interpreting the equations in the paper and implementing the up/down asymmetric transform is discussed in PyAbel Issue #54
Direct¶
Introduction¶
This method attempts a direct integration of the Abel transform integral. It makes no assumptions about the data (apart from cylindrical symmetry), but it typically requires fine sampling to converge. Such methods are typically inefficient, but thanks to this Cython implementation (by Roman Yurchuk), this ‘direct’ method is competitive with the other methods.
How it works¶
Information about the algorithm and the numerical optimizations is contained in PR #52
When to use it¶
When a robust forward transform is required, this method works quite well. It is not typically recommended for the inverse transform, but it can work well for smooth functions that are finely sampled.
How to use it¶
To complete the forward or inverse transform of a full image with the direct method, simply use the abel.transform()
function:
abel.transform(myImage, method='direct', direction='forward')
abel.transform(myImage, method='direct', direction='inverse')
If you would like to access the Direct algorithm directly (to transform a right-side half-image), you can use abel.direct.direct_transform()
.
Citation¶
[1] Dribinski et al, 2002 (Rev. Sci. Instrum. 73, 2634), (pdf)
Hansen-Law¶
Introduction¶
The Hansen and Law transform is the work of E. W. Hansen and P.-L. Law [1].
From the abstract:
... new family of algorithms, principally for Abel inversion, that are recursive and hence computationally efficient. The methods are based on a linear, space-variant, state-variable model of the Abel transform. The model is the basis for deterministic algorithms, applicable when data are noise free, and least-squares estimation (Kalman filter) algorithms, which accommodate the noisy data case.
The key advantage of the algorithm is its computational simplicity that amounts to only a few lines of code.
How it works¶

The Hansen and Law method makes use of a coordinate transformation, which is used to model the Abel transform, and derive reconstruction filters. The Abel transform is treated as a system modeled by a set of linear differential equations.

The algorithm iterates along each row of the image, starting at the outer edge and ending at the center.
to be continued...
When to use it¶
The Hansen-Law algorithm offers one of the fastest, most robust methods for both the forward and inverse transforms. It requires reasonably fine sampling of the data to provide exact agreement with the analytical result, but otherwise this method is a hidden gem of the field.
How to use it¶
To complete the forward or inverse transform of a full image with the hansenlaw method
, simply use the abel.transform()
function:
abel.transform(myImage, method='hansenlaw', direction='forward')
abel.transform(myImage, method='hansenlaw', direction='inverse')
If you would like to access the Hansen-Law algorithm directly (to transform a right-side half-image), you can use abel.hansenlaw.hansenlaw_transform()
.
Historical Note¶
The Hansen and Law algorithm was almost lost to the scientific community. It was rediscovered by Jason Gascooke (Flinders University, South Australia) for use in his velocity-map image analysis and written up in his PhD thesis:
J. R. Gascooke, PhD Thesis: “Energy Transfer in Polyatomic-Rare Gas Collisions and Van Der Waals Molecule Dissociation”, Flinders University (2000). Unfortunately, not available in electronic format.
Three Point¶
Introduction¶
The “Three Point” Abel transform method exploits the observation that the value of the Abel inverted data at any radial position r
is primarily determined from changes in the projection data in the neighborhood of r
. This technique was developed by Dasch [1].
How it works¶
The projection data (raw data \(\mathbf{P}\)) is expanded as a quadratic function of \(r - r_{j*}\) in the neighborhood of each data point in \(\mathbf{P}\). In other words, \(\mathbf{P}'(r) = dP/dr\) is estimated using a 3-point approximation (to the derivative), similar to central differencing. Doing so enables an analytical integration of the inverse Abel integral around each point \(r_j\). The result of this integration is expressed as a linear operator \(\mathbf{D}\), operating on the projection data \(\mathbf{P}\) to give the underlying radial distribution \(\mathbf{F}\).
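In matrix form this is simply an application of the D operator to each row of the (half) image; a brief sketch, assuming an odd image width (basis_dir=None skips caching the operator matrix to disk):

import abel

P = abel.tools.analytical.sample_image(n=101)[:, 50:]  # right half-image, center column first
D = abel.three_point.get_bs_three_point_cached(P.shape[1], basis_dir=None)
F = abel.three_point.three_point_core_transform(P, D)  # applies the D operator to each row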
When to use it¶
Dasch recommends this method based on its speed of implementation, robustness in the presence of sharp edges, and low noise. He also notes that this technique works best for cases where the real difference between adjacent projections is much greater than the noise in the projections. This is important, because if the projections are oversampled (raw data \(\mathbf{P}\) taken with data points very close to each other), the spacing between adjacent projections is decreased, and the real difference between them becomes comparable with the noise in the projections. In such situations, the deconvolution is highly inaccurate, and the projection data \(\mathbf{P}\) must be smoothed before this technique is used. (Consider smoothing with scipy.ndimage.filters.gaussian_filter.)
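For example, a light Gaussian smoothing of oversampled projection data before transforming (the smoothing width sigma=2 is illustrative and should be matched to your data):

import abel
from scipy.ndimage.filters import gaussian_filter

projection = abel.tools.analytical.sample_image(n=361)  # stand-in for oversampled, noisy data
smoothed = gaussian_filter(projection, sigma=2)
recon = abel.transform(smoothed, method='three_point', direction='inverse')['transform']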
How to use it¶
To complete the inverse transform of a full image with the three_point method
, simply use the abel.transform()
function:
abel.transform(myImage, method='three_point', direction='inverse')
Note that the forward Three Point transform is not yet implemented in PyAbel.
If you would like to access the Three Point algorithm directly (to transform a right-side half-image), you can use abel.three_point.three_point_transform()
.
Example¶
import numpy as np
import matplotlib.pyplot as plt
import abel
n = 101
center = n//2
r_max = 4
r = np.linspace(-1*r_max, r_max, n)
dr = r[1]-r[0]
fig, axes = plt.subplots(ncols = 2)
fig.set_size_inches(8,4)
axes[0].set_xlabel("Lateral position, x")
axes[0].set_ylabel("F(x)")
axes[0].set_title("Original LOS signal")
axes[1].set_xlabel("Radial position, r")
axes[1].set_ylabel("f(r)")
axes[1].set_title("Inverted radial signal")
Mat = np.sqrt(np.pi)*np.exp(-1*r*r)
Left_Mat = Mat[:center+1][::-1]
Right_Mat = Mat[center:]
AnalyticAbelMat = np.exp(-1*r*r)
DaschAbelMat_Left = abel.three_point.three_point_transform(Left_Mat)[0]/dr
DaschAbelMat_Right = abel.three_point.three_point_transform(Right_Mat)[0]/dr
axes[0].plot(r, Mat, 'r', label = r'$\sqrt{\pi}e^{-r^2}$')
axes[0].legend()
axes[1].plot(r, AnalyticAbelMat, 'k', lw = 1.5, alpha = 0.5, label = r'$e^{-r^2}$')
axes[1].plot(r[:center+1], DaschAbelMat_Left[::-1], 'r--.', label = '3-pt Abel')
axes[1].plot(r[center:], DaschAbelMat_Right, 'r--.')
box = axes[1].get_position()
axes[1].set_position([box.x0, box.y0, box.width * 0.8, box.height])
axes[1].legend(loc='center left', bbox_to_anchor=(1, 0.5))
fig.tight_layout()
plt.show()
(Source code, png, hires.png, pdf)

Notes¶
The original citation [1] contained two typos in Eq. (7). A corrected form of these equations is presented in Karl Martin’s 2002 PhD thesis [2]. PyAbel uses the corrected version of the algorithm.
For more information on the PyAbel implementation of the three-point algorithm, please see Issue #61 and Pull Request #64.
Citation¶
[1] Dasch, Applied Optics, Vol 31, No 8, March 1992, Pg 1146-1152.
[2] Martin, Karl. PhD Thesis, University of Texas at Austin. Acoustic Modification of Sooting Combustion. 2002: https://www.lib.utexas.edu/etd/d/2002/martinkm07836/martinkm07836.pdf
Onion Peeling (not implemented)¶
Introduction¶
The onion peeling method has been preliminarily ported to Python, but not yet incorporated into PyAbel.
See the discussion here: https://github.com/PyAbel/PyAbel/issues/56
How it works¶
It doesn’t quite work reliably yet.
When to use it¶
Probably not until somebody finishes it.
How to use it¶
Fix the bugs!
Example¶
Polar Onion Peeling (not implemented)¶
Introduction¶
The polar onion peeling (POP) method is still under development.
See the discussion here: https://github.com/PyAbel/PyAbel/issues/30
How it works¶
It doesn’t exist in PyAbel yet!
When to use it¶
When you implement it! :)
How to use it¶
Code it!
Example¶
Put it here!
Fourier-Hankel¶
Introduction¶
The Fourier-Hankel method breaks the Abel transform into a Fourier transform and a Hankel transform. It takes advantage of the fact that there are fast numerical implementations of the Fourier and Hankel transforms to provide a quick algorithm. It is known to produce artifacts in the transform [Dribinski2002]
This method is not yet implemented in PyAbel. See the discussion in Issue #26 for more information.
How it works¶
It doesn’t work in PyAbel yet.
When to use it¶
To compare with other methods? Very large images?
How to use it¶
Implement it!
Example¶
Notes¶
Citation¶
Examples¶
Contents:
Example: Direct Gaussian¶
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import matplotlib.pyplot as plt
from time import time
import sys
from abel.direct import direct_transform
from abel.tools.analytical import GaussianAnalytical
n = 101
r_max = 30
sigma = 10
ref = GaussianAnalytical(n, r_max, sigma, symmetric=False)
fig, ax = plt.subplots(1,2)
ax[0].set_title('Forward transform of a Gaussian')
ax[1].set_title('Inverse transform of a Gaussian')
ax[0].plot(ref.r, ref.abel, 'b', label='Analytical transform')
recon = direct_transform(ref.func, dr=ref.dr, direction="forward",
                         correction=True, backend='C')
ax[0].plot(ref.r, recon, '--o', c='red', label='direct')
recon = direct_transform(ref.func, dr=ref.dr, direction="forward",
                         correction=True, backend='Python')
ax[0].plot(ref.r, recon, ':d', c='k', label='direct naive')
ax[1].plot(ref.r, ref.func, 'b', label='Original function')
recon = direct_transform(ref.abel, dr=ref.dr, direction="inverse",
                         correction=True)
ax[1].plot(ref.r, recon, '--o', c='red', label='direct')
recon = direct_transform(ref.abel, dr=ref.dr, direction="inverse",
                         correction=False)
ax[1].plot(ref.r, recon, ':d', c='k', label='direct - naive')
for axi in ax:
    axi.set_xlim(0, 20)
    axi.legend()
plt.show()
Example: HansenLaw¶
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np
import abel
import scipy.misc
import matplotlib.pylab as plt
# Hansen and Law inverse Abel transform of a velocity-map image
# O2- photodetachment at 454 nm.
# The spectrum was recorded in 2010
# ANU / The Australian National University
# J. Chem. Phys. 133, 174311 (2010) DOI: 10.1063/1.3493349
# image file in examples/data
filename = 'data/O2-ANU1024.txt.bz2'
# Load as a numpy array
print('Loading ' + filename)
IM = np.loadtxt(filename)
# use plt.imread(filename) to load image formats (.png, .jpg, etc)
rows, cols = IM.shape # image size
# Image center should be mid-pixel, i.e. odd number of colums
if cols % 2 == 0:
    print("HL: even pixel width image, re-adjust image centre")
    # re-center image based on horizontal and vertical slice profiles
    # covering the radial range [300:400] pixels from the center
    IM = abel.tools.center.center_image(IM, center="com", odd_size=True)
    rows, cols = IM.shape   # new image size
c2 = cols//2 # half-image
print('image size {:d}x{:d}'.format(rows, cols))
# Step 2: perform the Hansen & Law transform!
print('Performing Hansen and Law inverse Abel transform:')
AIM = abel.transform(IM, method='hansenlaw',
                     use_quadrants=(True, True, True, True),
                     symmetry_axis=None,
                     verbose=True)['transform']
rs, speeds = abel.tools.vmi.angular_integration(AIM, dr=1)
# Set up some axes
fig = plt.figure(figsize=(15, 4))
ax1 = plt.subplot(131)
ax2 = plt.subplot(132)
ax3 = plt.subplot(133)
# Plot the raw data
im1 = ax1.imshow(IM, origin='lower', aspect='auto')
fig.colorbar(im1, ax=ax1, fraction=.1, shrink=0.9, pad=0.03)
ax1.set_xlabel('x (pixels)')
ax1.set_ylabel('y (pixels)')
ax1.set_title('velocity map image')
# Plot the 2D transform
im2 = ax2.imshow(AIM, origin='lower', aspect='auto', vmin=0,
                 vmax=AIM[:c2-50, :c2-50].max())
fig.colorbar(im2, ax=ax2, fraction=.1, shrink=0.9, pad=0.03)
ax2.set_xlabel('x (pixels)')
ax2.set_ylabel('y (pixels)')
ax2.set_title('Hansen Law inverse Abel')
# Plot the 1D speed distribution
ax3.plot(rs, speeds/speeds[200:].max())
ax3.axis(xmax=500, ymin=-0.05, ymax=1.1)
ax3.set_xlabel('Speed (pixel)')
ax3.set_ylabel('Intensity')
ax3.set_title('Speed distribution')
# Prettify the plot a little bit:
plt.subplots_adjust(left=0.06, bottom=0.17, right=0.95, top=0.89, wspace=0.35,
                    hspace=0.37)
# Save an image of the plot
plt.savefig(filename[:-7]+"png", dpi=150)
# Show the plots
plt.show()
Example: O2 PES PAD¶
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np
import abel
import matplotlib.pylab as plt
from scipy.ndimage.interpolation import shift
# This example demonstrates Hansen and Law inverse Abel transform
# of an image obtained using a velocity map imaging (VMI) photoelectron
# spectrometer to record the photoelectron angular distribution resulting
# from photodetachment of O2- at 454 nm.
# Measured at The Australian National University
# J. Chem. Phys. 133, 174311 (2010) DOI: 10.1063/1.3493349
# image file
filename = 'data/O2-ANU1024.txt.bz2'
# numpy handles .gz or plain .txt extensions
# Load image as a numpy array
print('Loading ' + filename)
IM = np.loadtxt(filename)
# use scipy.misc.imread(filename) to load image formats (.png, .jpg, etc)
rows, cols = IM.shape # image size
# Image center should be mid-pixel, i.e. odd number of colums
if cols % 2 != 1:
    print("HL: even pixel width image, re-adjusting image centre")
    IM = shift(IM, (-1/2, -1/2))[:-1, :-1]
    rows, cols = IM.shape   # new image size
r2 = rows//2 # half-height image size
c2 = cols//2 # half-width image size
print ('image size {:d}x{:d}'.format(rows, cols))
# Hansen & Law inverse Abel transform
print('Performing Hansen and Law inverse Abel transform:')
AIM = abel.transform(IM, method="hansenlaw", direction="inverse",
                     symmetry_axis=None)['transform']
# PES - photoelectron speed distribution -------------
print('Calculating speed distribution:')
r, speed = abel.tools.vmi.angular_integration(AIM)
# normalize to max intensity peak
speed /= speed[200:].max() # exclude transform noise near centerline of image
# PAD - photoelectron angular distribution ------------
print('Calculating angular distribution:')
# radial ranges (of spectral features) to follow intensity vs angle
# view the speed distribution to determine radial ranges
r_range = [(93, 111), (145, 162), (255, 280), (330, 350), (350, 370),
           (370, 390), (390, 410), (410, 430)]
# map to intensity vs theta for each radial range
intensities, theta = abel.tools.vmi.calculate_angular_distributions(AIM,
                                                    radial_ranges=r_range)
print("radial-range anisotropy parameter (beta)")
for rr, intensity in zip(r_range, intensities):
    # evaluate anisotropy parameter from least-squares fit to
    # intensity vs angle
    beta, amp = abel.tools.vmi.anisotropy_parameter(theta, intensity)
    result = " {:3d}-{:3d} {:+.2f}+-{:.2f}".format(*rr+beta)
    print(result)
# plots of the analysis
fig = plt.figure(figsize=(15, 4))
ax1 = plt.subplot(131)
ax2 = plt.subplot(132)
ax3 = plt.subplot(133)
# join 1/2 raw data : 1/2 inversion image
vmax = IM[:, :c2-100].max()
AIM *= vmax/AIM[:, c2+100:].max()
JIM = np.concatenate((IM[:, :c2], AIM[:, c2:]), axis=1)
rr = r_range[-3]
intensity = intensities[-3]
beta, amp = abel.tools.vmi.anisotropy_parameter(theta, intensity)
# draw a 1/2 circle representing this radial range
# for rw in range(rows):
# for cl in range(c2,cols):
# circ = (rw-r2)**2 + (cl-c2)**2
# if circ >= rr[0]**2 and circ <= rr[1]**2:
# JIM[rw,cl] = vmax
# Prettify the plot a little bit:
# Plot the raw data
im1 = ax1.imshow(JIM, origin='lower', aspect='auto', vmin=0, vmax=vmax)
fig.colorbar(im1, ax=ax1, fraction=.1, shrink=0.9, pad=0.03)
ax1.set_xlabel('x (pixels)')
ax1.set_ylabel('y (pixels)')
ax1.set_title('velocity map image| inverse Abel ')
# Plot the 1D speed distribution
ax2.plot(speed)
ax2.plot((rr[0], rr[0], rr[1], rr[1]), (1, 1.1, 1.1, 1), 'r-') # red highlight
ax2.axis(xmax=450, ymin=-0.05, ymax=1.2)
ax2.set_xlabel('radial pixel')
ax2.set_ylabel('intensity')
ax2.set_title('Speed distribution')
# Plot anisotropy variation
ax3.plot(theta, intensity, 'r',
         label="expt. data r=[{:d}:{:d}]".format(*rr))

def P2(x):   # 2nd order Legendre polynomial
    return (3*x*x-1)/2

def PAD(theta, beta, amp):
    return amp*(1 + beta*P2(np.cos(theta)))
ax3.plot(theta, PAD(theta, beta[0], amp[0]), 'b', lw=2, label="fit")
ax3.annotate("$\\beta = ${:+.2f}+-{:.2f}".format(*beta), (-2, -1.1))
ax3.legend(loc=1, labelspacing=0.1, fontsize='small')
ax3.axis(ymin=-2, ymax=12)
ax3.set_xlabel("angle $\\theta$ (radians)")
ax3.set_ylabel("intensity")
ax3.set_title("anisotropy parameter")
# Plot the angular distribution
plt.subplots_adjust(left=0.06, bottom=0.17, right=0.95, top=0.89,
                    wspace=0.35, hspace=0.37)
# Save an image of the plot
# plt.savefig(filename[:-7]+"png", dpi=150)
# Show the plots
plt.show()
Example: HansenLaw Xenon¶
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np
import matplotlib.pyplot as plt
import abel
import scipy.misc
# This example demonstrates Hansen and Law inverse Abel transform
# of an image obtained using a velocity map imaging (VMI) photoelectron
# spectrometer to record the photoelectron angular distribution resulting
# from photodetachment of O2- at 454 nm.
# This spectrum was recorded in 2010
# ANU / The Australian National University
# J. Chem. Phys. 133, 174311 (2010) DOI: 10.1063/1.3493349
# Before you start, centre of the NxN numpy array should be the centre
# of image symmetry
# +----+----+
# | | |
# +----o----+
# | | |
# +----+----+
# Specify the path to the file
#filename = 'data/O2-ANU1024.txt.bz2'
filename = 'data/Xenon_ATI_VMI_800_nm_649x519.tif'
# Name the output files
name = filename.split('.')[0].split('/')[1]
output_image = name + '_inverse_Abel_transform_HansenLaw.png'
output_text = name + '_speeds_HansenLaw.dat'
output_plot = name + '_comparison_HansenLaw.pdf'
# Step 1: Load an image file as a numpy array
print('Loading ' + filename)
#im = np.loadtxt(filename)
im = plt.imread(filename)
(rows,cols) = np.shape(im)
print ('image size {:d}x{:d}'.format(rows,cols))
#im = abel.tools.center.center_image (im, (340,245))
# Step 2: perform the Hansen & Law transform!
print('Performing Hansen and Law inverse Abel transform:')
recon = abel.transform(im, method="hansenlaw", direction="inverse",
                       symmetry_axis=None, verbose=True)['transform']
r, speeds = abel.tools.vmi.angular_integration(recon)
# save the transform in 8-bit format:
#scipy.misc.imsave(output_image,recon)
# save the speed distribution
#np.savetxt(output_text,speeds)
## Finally, let's plot the data
# Set up some axes
fig = plt.figure(figsize=(15,4))
ax1 = plt.subplot(131)
ax2 = plt.subplot(132)
ax3 = plt.subplot(133)
# Plot the raw data
im1 = ax1.imshow(im, origin='lower', aspect='auto')
fig.colorbar(im1, ax=ax1, fraction=.1, shrink=0.9, pad=0.03)
ax1.set_xlabel('x (pixels)')
ax1.set_ylabel('y (pixels)')
ax1.set_title('velocity map image')
# Plot the 2D transform
im2 = ax2.imshow(recon, origin='lower', aspect='auto')
fig.colorbar(im2, ax=ax2, fraction=.1, shrink=0.9, pad=0.03)
ax2.set_xlabel('x (pixels)')
ax2.set_ylabel('y (pixels)')
ax2.set_title('Hansen Law inverse Abel')
# Plot the 1D speed distribution
ax3.plot(speeds)
ax3.set_xlabel('Speed (pixel)')
ax3.set_ylabel('Yield (log)')
ax3.set_title('Speed distribution')
#ax3.set_yscale('log')
# Prettify the plot a little bit:
plt.subplots_adjust(left=0.06, bottom=0.17, right=0.95, top=0.89, wspace=0.35,
                    hspace=0.37)
# Save an image of the plot
plt.savefig(output_plot, dpi=150)
# Show the plots
plt.show()
Example: Basex gaussian¶
#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
import abel
# This example performs a BASEX transform of a simple 1D Gaussian function and compares
# this to the analytical inverse Abel transform
fig, ax = plt.subplots(1, 1)
plt.title('Abel transforms of a Gaussian function')
# Analytical inverse Abel:
n = 101
r_max = 20
sigma = 10
ref = abel.tools.analytical.GaussianAnalytical(n, r_max, sigma,symmetric=False)
ax.plot(ref.r, ref.func, 'b', label='Original signal')
ax.plot(ref.r, ref.abel*0.05, 'r', label='Direct Abel transform x0.05 [analytical]')
center = n//2
# BASEX Transform:
# Calculate the inverse abel transform for the centered data
recon = abel.basex.basex_transform(ref.abel, verbose=True, basis_dir=None,
                                   dr=ref.dr, direction='inverse')
ax.plot(ref.r, recon, 'o', color='red', label='Inverse transform [BASEX]',
        ms=5, mec='none', alpha=0.5)
ax.legend()
ax.set_xlim(0,20)
ax.set_xlabel('x')
ax.set_ylabel('f(x)')
plt.legend()
plt.show()
Example: Basex Photoelectron¶
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import os.path
import numpy as np
import matplotlib.pyplot as plt
import abel
# This example demonstrates a BASEX transform of an image obtained using a
# velocity map imaging (VMI) photoelectron spectrometer to record the
# photoelectron angular distribution resulting from above threshold ionization (ATI)
# in xenon gas using a ~40 femtosecond, 800 nm laser pulse.
# This spectrum was recorded in 2012 in the Kapteyn-Murnane research group at
# JILA / The University of Colorado at Boulder
# by Dan Hickstein and co-workers (contact DanHickstein@gmail.com)
# http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.109.073004
#
# Before you start your own transform, identify the central pixel of the image.
# It's nice to use a program like ImageJ for this.
# http://imagej.nih.gov/ij/
# Specify the path to the file
filename = os.path.join('data', 'Xenon_ATI_VMI_800_nm_649x519.tif')
# Name the output files
output_image = filename[:-4] + '_Abel_transform.png'
output_text = filename[:-4] + '_speeds.txt'
output_plot = filename[:-4] + '_comparison.pdf'
# Step 1: Load an image file as a numpy array
print('Loading ' + filename)
raw_data = plt.imread(filename).astype('float64')
# Step 2: Specify the center in y,x (vert,horiz) format
center = (245,340)
# or, use automatic centering
# center = 'com'
# center = 'gaussian'
# Step 3: perform the BASEX transform!
print('Performing the inverse Abel transform:')
recon = abel.transform(raw_data, direction='inverse', method='basex',
                       center=center, transform_options={'basis_dir': './'},
                       verbose=True)['transform']
speeds = abel.tools.vmi.angular_integration(recon)
# Set up some axes
fig = plt.figure(figsize=(15,4))
ax1 = plt.subplot(131)
ax2 = plt.subplot(132)
ax3 = plt.subplot(133)
# Plot the raw data
im1 = ax1.imshow(raw_data,origin='lower',aspect='auto')
fig.colorbar(im1,ax=ax1,fraction=.1,shrink=0.9,pad=0.03)
ax1.set_xlabel('x (pixels)')
ax1.set_ylabel('y (pixels)')
# Plot the 2D transform
im2 = ax2.imshow(recon,origin='lower',aspect='auto',clim=(0,2000))
fig.colorbar(im2,ax=ax2,fraction=.1,shrink=0.9,pad=0.03)
ax2.set_xlabel('x (pixels)')
ax2.set_ylabel('y (pixels)')
# Plot the 1D speed distribution
ax3.plot(*speeds)
ax3.set_xlabel('Speed (pixel)')
ax3.set_ylabel('Yield (log)')
ax3.set_yscale('log')
ax3.set_ylim(1e2,1e5)
# Prettify the plot a little bit:
plt.subplots_adjust(left=0.06,bottom=0.17,right=0.95,top=0.89,wspace=0.35,hspace=0.37)
# Show the plots
plt.show()
Example: All_Dribinski¶
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# This example compares the available inverse Abel transform methods
# for the Ominus sample image
#
# Note it transforms only the Q0 (top-right) quadrant
# using the fundamental transform code
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import numpy as np
import abel
import collections
import matplotlib.pylab as plt
from time import time
fig, (ax1,ax2) = plt.subplots(1, 2, figsize=(8,4))
# inverse Abel transform methods -----------------------------
# dictionary of method: function()
transforms = {
    "direct": abel.direct.direct_transform,
    "hansenlaw": abel.hansenlaw.hansenlaw_transform,
    "basex": abel.basex.basex_transform,
    "three_point": abel.three_point.three_point_transform,
}
# sort dictionary:
transforms = collections.OrderedDict(sorted(transforms.items()))
# number of transforms:
ntrans = len(transforms)
IM = abel.tools.analytical.sample_image(n=301, name="dribinski")
h, w = IM.shape
# forward transform:
fIM = abel.transform(IM, direction="forward", method="hansenlaw")['transform']
Q0, Q1, Q2, Q3 = abel.tools.symmetry.get_image_quadrants(fIM, reorient=True)
Q0fresh = Q0.copy() # keep clean copy
print ("quadrant shape {}".format(Q0.shape))
# process Q0 quadrant using each method --------------------
iabelQ = [] # keep inverse Abel transformed image
for q, method in enumerate(transforms.keys()):
    Q0 = Q0fresh.copy()   # top-right quadrant of O2- image
    print("\n------- {:s} inverse ...".format(method))
    t0 = time()
    # inverse Abel transform using 'method'
    IAQ0 = transforms[method](Q0, direction="inverse")
    print(" {:.4f} sec".format(time()-t0))
    iabelQ.append(IAQ0)   # store for plot
    # polar projection and speed profile
    radial, speed = abel.tools.vmi.angular_integration(IAQ0, origin=(0, 0),
                                                       Jacobian=False)
    # normalize image intensity and speed distribution
    IAQ0 /= IAQ0.max()
    speed /= speed.max()
    # method label for each quadrant
    annot_angle = -(45+q*90)*np.pi/180   # -ve because numpy coords from top
    if q > 3:
        annot_angle += 50*np.pi/180   # shared quadrant - move the label
    annot_coord = (h/2+(h*0.9)*np.cos(annot_angle)/2 - 50,
                   w/2+(w*0.9)*np.sin(annot_angle)/2)
    ax1.annotate(method, annot_coord, color="yellow")
    # plot speed distribution
    ax2.plot(radial, speed, label=method)
# reassemble image, each quadrant a different method
# for < 4 images pad using a blank quadrant
blank = np.zeros(IAQ0.shape)
for q in range(ntrans, 4):
    iabelQ.append(blank)
# more than 4, split quadrant
if ntrans == 5:
    # split last quadrant into 2 = upper and lower triangles
    tmp_img = np.tril(np.flipud(iabelQ[-2])) +\
              np.triu(np.flipud(iabelQ[-1]))
    iabelQ[3] = np.flipud(tmp_img)
im = abel.tools.symmetry.put_image_quadrants((iabelQ[0], iabelQ[1],
                                               iabelQ[2], iabelQ[3]),
                                              odd_size=True)
ax1.imshow(im, vmin=0, vmax=0.15)
ax1.set_title('Inverse Abel comparison')
ax2.set_xlim(0, 200)
ax2.set_ylim(-0.5,2)
ax2.legend(loc=0, labelspacing=0.1, frameon=False)
ax2.set_title('Angular integration')
ax2.set_xlabel('Radial coordinate (pixel)')
ax2.set_ylabel('Integrated intensity')
plt.suptitle('Dribinski sample image')
plt.tight_layout()
plt.savefig('example_all_dribinski.png', dpi=100)
plt.show()