SiegPy: Siegert states with Python

This Python module aims at providing the tools to show how Siegert (or resonant) states can conveniently replace the usual continuum of unoccupied states in some numerical problems in quantum mechanics (e.g., for LR-TDDFT or in the GW approximation in many-body perturbation theory).

Note

It currently focuses on the 1D Square-Well Potential (SWP) case. It will be generalized to 1D potentials with compact support in future releases.

Warning

It requires python>=3.4.

For an overview of what is possible with SiegPy, see the tutorial made of notebooks here. For a detailed documentation of the classes and their associated methods, go to Code Documentation.

Table of contents

Getting started


SiegPy: “Siegert states with Python”

This module provides the tools to find the numerical Siegert (or resonant) states of 1D potentials with compact support, as well as the analytical ones of the 1D Square-Well Potential:

>>> from siegpy import SWPBasisSet, SWPotential
>>> pot = SWPotential(5, 5)
>>> siegerts = SWPBasisSet.find_Siegert_states(pot, re_k_max=5, im_k_max=2, step_re=3)
>>> len(siegerts)
18
>>> siegerts.plot_energies()
>>> siegerts.plot_wavefunctions()

A basis set made of such states is discrete and can later be compared to the exact results (i.e. using bound and continuum states) for:

  • its completeness, compared to a basis set using the usual continuum states,
  • its ability to reproduce the response function (i.e., to provide an approximation of the Green’s function with discrete states),
  • its ability to reproduce the time-propagation of an initial wavepacket.
>>> from siegpy import Rectangular
>>> r = Rectangular(-1.5, 1.5)
>>> siegerts.plot_completeness_convergence(r)
>>> time_grid = [0, 0.1, 0.25, 0.5, 1.0, 2.0, 3.0]
>>> import numpy as np
>>> xgrid = np.linspace(-3, 3, 201)
>>> siegerts.grid = xgrid
>>> siegerts.Siegert_propagation(r, time_grid)

Installation

Currently, git clone this repository and run pip install . in the SiegPy folder.

This module requires Python 3.4+ and the following packages, which will be installed when running the previous command if necessary:

  • numpy
  • scipy
  • matplotlib
  • mpmath

Documentation

The documentation can be found here.

IPython notebooks are stored in the ‘doc/notebooks’ folder and are part of the documentation, serving as tutorials for the SiegPy module. They can also be considered an introduction to Siegert states. They are meant to be read in a specific order, which is given in the doc/notebooks.rst file.

The whole documentation can be compiled manually. This requires the installation of some extra packages (such as sphinx, nbsphinx, sphinx_rtd_theme, …), which can be done by running pip install .[doc]. You can then go to the doc folder and run the make html command (or alternatively python3 -m sphinx . build -jN, where N is the number of processors used for the compilation). This command creates the code documentation and runs all the notebooks in order to create a tutorial.

For developers

For developers:

  • Run pip3 install -e .[test] to get all the packages required for testing. Running pytest in the SiegPy folder will then launch all the tests.
    • Unit tests are performed using pytest.
    • pytest-cov shows which part of the code is currently not covered by unit tests (this is performed by running the pytest command).
    • flake8 is the tool used to check for PEP8 compliance (run flake8 in the SiegPy folder).
  • If you modify notebooks, install and use nbstripout in order to clean the output cells before each git commit and keep the repository size small. See https://github.com/kynan/nbstripout for details.
  • Documentation is created using sphinx (http://www.sphinx-doc.org/). The use of restructured text in the docstrings is therefore recommended.

Tutorial

This tutorial is made of notebooks that are available in the SiegPy/doc/notebooks folder after cloning the project. The aim of this series of notebooks is to present the most relevant capabilities of the SiegPy module while giving an overview of Siegert states and of how useful they can be to reproduce results otherwise obtained with the cumbersome continuum of states.

1D Square-Well Potential: the analytical case

Find the Siegert and continuum states of a 1D Square-Well Potential

The goal of this tutorial is to present the basic objects of the SiegPy module that allow one to find the Siegert states of a 1D Square-Well Potential (SWP).

Import some classes of the SiegPy module

After the installation of the SiegPy module with pip (see the online documentation), you can import it as any other Python package.

To begin with the tutorial, from siegpy we import:

  • the SWPotential class, to describe a 1D SWP,
  • the SWPBasisSet class, to represent a basis set made of the eigenstates of a 1D SWP,
  • the SWPSiegert and the SWPContinuum classes, to represent the two types of eigenstates of the 1D SWP, namely its Siegert states and its continuum states.
[1]:
from siegpy import SWPotential, SWPBasisSet, SWPSiegert, SWPContinuum
import numpy as np  # We also import numpy
Define a 1D Square-Well Potential.

A 1D SWP is characterized by its width l and depth V0. For convenience, it is always centered at \(x = 0\). A space grid may also be given, so that the potential can be plotted.

[2]:
# Potential parameters
V0 = 8. # Depth of the potential
l  = 3. # Width of the potential

# Space grid parameters
npts = 401 # Number of points of the grid
xmax = l   # Extension of the grid
xgrid = np.linspace(-xmax, xmax, npts)

# Define a 1D SW potential
potential = SWPotential(l, V0, grid=xgrid)

You can plot it, print it and check it is a SWPotential instance:

[3]:
potential.plot()
print(potential)
print(isinstance(potential, SWPotential))
_images/notebooks_find_Siegerts_1DSWP_6_0.png
1D Square-Well Potential of width 3.00 and depth 8.00
True
Find the Siegert states

The search for the wavenumbers of the Siegert states is limited to a certain part of the complex wavenumber plane: here, the absolute value of the real part (resp. imaginary part) of the Siegert wavenumbers must not exceed re_kmax (resp. im_kmax).

To find the wavenumbers, one has to define a grid of input guesses made of complex wavenumbers. This grid is defined by a grid step on the real axis, re_hk, and an optional one on the imaginary axis, im_hk. The search for the bound and anti-bound states on the imaginary axis is carried out automatically (since the number of states on this axis and the range over which they are to be found are determined by the depth and width of the potential).

In the example below, siegerts is a basis set, and its elements are Siegert states. A Siegert state of the 1D SWP is an instance of the SWPSiegert class.

[4]:
# Look for Siegert states in the complex plane
re_kmax = 10.
im_kmax = 3.
re_hk = 1.
im_hk = 2.
siegerts = SWPBasisSet.find_Siegert_states(potential, re_kmax, re_hk, im_kmax,
                                           im_hk=im_hk, grid=xgrid)

The basis set is made of Siegert states and is a SWPBasisSet instance:

[5]:
# siegerts is a SWPBasisSet instance
assert isinstance(siegerts, SWPBasisSet)

# Each state in the basis set is an instance of the SWPSiegert class
assert all([isinstance(state, SWPSiegert) for state in siegerts])

You can easily iterate over the states in the basis set:

[6]:
print("Number of Siegert states in the basis set:", len(siegerts))

for i, state in enumerate(siegerts):
    print("State n°{}: {}".format(i+1, state))
Number of Siegert states in the basis set: 20
State n°1: Even bound eigenstate of energy -7.598+0.000j
State n°2: Odd bound eigenstate of energy -6.405+0.000j
State n°3: Even bound eigenstate of energy -4.470+0.000j
State n°4: Odd bound eigenstate of energy -1.931+0.000j
State n°5: Even anti-bound eigenstate of energy -7.205+0.000j
State n°6: Odd anti-bound eigenstate of energy -4.732+0.000j
State n°7: Even resonant eigenstate of energy 0.302-0.704j
State n°8: Odd resonant eigenstate of energy 5.054-2.584j
State n°9: Even resonant eigenstate of energy 10.921-4.194j
State n°10: Odd resonant eigenstate of energy 17.899-5.844j
State n°11: Even resonant eigenstate of energy 25.985-7.561j
State n°12: Odd resonant eigenstate of energy 35.177-9.348j
State n°13: Even resonant eigenstate of energy 45.474-11.201j
State n°14: Even anti-resonant eigenstate of energy 0.302+0.704j
State n°15: Odd anti-resonant eigenstate of energy 5.054+2.584j
State n°16: Even anti-resonant eigenstate of energy 10.921+4.194j
State n°17: Odd anti-resonant eigenstate of energy 17.899+5.844j
State n°18: Even anti-resonant eigenstate of energy 25.985+7.561j
State n°19: Odd anti-resonant eigenstate of energy 35.177+9.348j
State n°20: Even anti-resonant eigenstate of energy 45.474+11.201j

Note that the states are ordered by type (bound, then anti-bound, then resonant, and finally anti-resonant states), and that they are stored from the lowest to the highest energy for each type. You can easily create a basis set containing only a specific type of Siegert state:

[7]:
bnds = siegerts.bounds
assert isinstance(bnds, SWPBasisSet)
len(bnds)
[7]:
4
[8]:
abnds = siegerts.antibounds
len(abnds)
[8]:
2
[9]:
res = siegerts.resonants
len(res)
[9]:
7
[10]:
ares = siegerts.antiresonants
len(ares)
[10]:
7

This way, you can check that the total number of states corresponds to the 20 Siegert states of the original basis set. You could also use the attributes even or odd to create a SWPBasisSet instance made of the Siegert states of a given parity.
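
A quick check along those lines, reusing the subsets defined in the cells above (even and odd being the parity attributes just mentioned):

assert len(bnds) + len(abnds) + len(res) + len(ares) == len(siegerts)
# Parity-based subsets
evens = siegerts.even
odds = siegerts.odd
assert len(evens) + len(odds) == len(siegerts)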

Plot the Siegert states and the basis sets

Three types of plots are provided for that purpose:

  • The first one plots the wavefunction of any individual Siegert state.
  • The second one represents the wavefunctions of all the Siegert states in the basis set on a single plot.
  • The third one plots the wavenumbers or the energies of the states in the basis set.
Wavefunction of a Siegert state

Given that a grid was passed to the find_Siegert_states method, each Siegert state can be plotted with the plot method, just like the potential. You may also modify the range of the plot by specifying the optional arguments xlim and ylim.

[11]:
for state in siegerts:
    # Note the use of the representation of a Siegert state as title
    state.plot(xlim=(-3*l/4., 3*l/4.), ylim=(-1, 1), title=repr(state))
[Output: 20 wavefunction plots, one per Siegert state in the basis set]

Note that the resonant and anti-resonant states come in pairs: the wavefunction of an anti-resonant state is the complex conjugate of the corresponding resonant wavefunction, and the same relation holds for their energies. The wavefunction of any Siegert state that is not bound diverges for infinite \(x\).
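
This pairing can be checked numerically. A minimal sketch, assuming the resonants and antiresonants attributes list the partner states in the same order (as in the listing above):

import numpy as np

for res_state, ares_state in zip(siegerts.resonants, siegerts.antiresonants):
    assert np.isclose(res_state.energy, np.conj(ares_state.energy))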

Plot all the wavefunctions of the basis set

A SWPBasisSet instance has a plot_wavefunctions method to plot the wavefunctions of all the Siegert states in the basis set (except for the anti-bound states, for clarity). Each wavefunction is translated by the corresponding energy (or absolute value of the energy for resonant and anti-resonant states).

[12]:
siegerts.plot_wavefunctions()
_images/notebooks_find_Siegerts_1DSWP_24_0.png

As you can see, the wavefunctions of a pair of resonant and anti-resonant states have the same real part and an opposite imaginary part.

The number of resonant wavefunctions to plot may be set with nres. You may use file_save to define where to save the plot (the available file formats depend on your matplotlib backend).

[13]:
siegerts.plot_wavefunctions(nres=2, file_save='wavefunctions_2_res.pdf')
_images/notebooks_find_Siegerts_1DSWP_26_0.png

Note how the resonant couples are related to the bound states: the parity oscillates between even and odd for the bound states, and this ordering continues with the resonant states. Also note that the number of antinodes increases from bottom to top; in particular, the last bound state is odd and has four antinodes inside the potential, while the first resonant couple is even and exhibits five antinodes.
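
A minimal probe of this parity pattern, assuming basis sets can be concatenated with + (as done in a later notebook) and using the parity attribute shown further below:

print([state.parity for state in siegerts.bounds + siegerts.resonants])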

Finally, xlim and ylim are other optional parameters to set the plot range. If no resonant states are requested, only the bound states are plotted:

[14]:
siegerts.plot_wavefunctions(nres=0, file_save='wavefunctions_no_res.pdf',
                            ylim=(-V0-0.5, 0.5))
_images/notebooks_find_Siegerts_1DSWP_28_0.png
Plot the wavenumbers and energies of the basis set

The wavenumber and energy plots can also be saved in a file by defining the optional argument file_save. Again, the range of the axis of the plot can be defined by xlim and ylim.

[15]:
siegerts.plot_wavenumbers(file_save="poles_wn.pdf")
_images/notebooks_find_Siegerts_1DSWP_30_0.png
[16]:
siegerts.plot_energies(xlim=(-V0, 20), ylim=(-7, 7), file_save="poles_en.pdf")
_images/notebooks_find_Siegerts_1DSWP_31_0.png

The four types of Siegert states are clearly visible:

  • the bound states have positive, purely imaginary wavenumbers (and therefore negative, purely real energies),
  • the anti-bound states have negative, purely imaginary wavenumbers (and therefore negative, purely real energies),
  • the wavenumber of a resonant state has a positive real part and a negative imaginary part (and therefore its energy has a positive real part and a negative imaginary part),
  • the wavenumber of an anti-resonant state has a negative real part and a negative imaginary part (and therefore its energy has a positive real part and a positive imaginary part).
Variation of the potential depth

It is then easy to compare the Siegert states found for different SW potentials. The example given here shows that a couple of resonant/anti-resonant states collapses into two anti-bound states as the depth of the potential is increased.

[17]:
# Define some potential depths
depths = np.arange(4.65, 4.751, 0.01) # Multiple potential depths
l = 3.                                # One potential width

# Parameters for the search of Siegert states in the complex
# wavenumber plane
re_kmax = 10.
im_kmax = 3.
re_hk = 1.
im_hk = 2.

# Loop over the depths to create multiple potentials and
# find their Siegert states
basis_sets = []
for V0 in depths:
    pot = SWPotential(l, V0)
    basis_sets.append(SWPBasisSet.find_Siegert_states(pot, re_kmax, re_hk,
                                                      im_kmax, im_hk=im_hk))
[18]:
# Plot the wavenumbers for all the potentials
xl=3.5
for siegerts in basis_sets:
    siegerts.plot_wavenumbers(xlim=(-xl,xl), title="$V_0 = {:.2f}$".format(siegerts.potential.depth))
[Output: 11 wavenumber plots, one per potential depth]

As you can see, a resonant couple has coalesced into two anti-bound states, one moving down the imaginary axis, the other moving up:

[19]:
print("Anti-bound states:")
for siegerts in basis_sets:
    abnds = siegerts.antibounds
    print("depth:", abnds[0].potential.depth)
    for ab in abnds:
        print("...", ab.wavenumber)
Anti-bound states:
depth: 4.65
... (-0-2.73258994206498j)
depth: 4.66
... (-0-2.7364789069706705j)
depth: 4.67
... (-0-2.7403612638709727j)
depth: 4.68
... (-0-2.744237049146454j)
depth: 4.6899999999999995
... (-0-2.748106298836105j)
depth: 4.699999999999999
... (-0-2.75196904864183j)
depth: 4.709999999999999
... (-0-2.755825333932863j)
... (-0-0.7091523632651646j)
... (-0-0.6240913937381729j)
depth: 4.719999999999999
... (-0-2.7596751897501117j)
... (-0-0.8106026103043074j)
... (-0-0.5216987792464509j)
depth: 4.729999999999999
... (-0-2.763518650810427j)
... (-0-0.8654958483570949j)
... (-0-0.4658655887420741j)
depth: 4.739999999999998
... (-0-2.7673557515107996j)
... (-0-0.9080864452359473j)
... (-0-0.422337443385848j)
depth: 4.749999999999998
... (-0-2.7711865259324946j)
... (-0-0.9441225291582832j)
... (-0-0.3853662040064116j)

If the potential were made even deeper, the anti-bound state moving up would cross the real axis and become a bound state, keeping the same number of antinodes inside the potential as the original resonant state. It is as if the potential were not yet attractive enough to accommodate all the Siegert states as bound states.
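
One way to check this is to repeat the search for a deeper potential and count the bound and anti-bound states. A sketch (the exact counts depend on the chosen depth):

deeper_pot = SWPotential(l, depths[-1] + 0.5)
deeper_siegerts = SWPBasisSet.find_Siegert_states(deeper_pot, re_kmax, re_hk,
                                                  im_kmax, im_hk=im_hk)
print(len(deeper_siegerts.bounds), len(deeper_siegerts.antibounds))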

Find the continuum states

The continuum states are known analytically. There are infinitely many of them and they have positive energies (hence the continuum). This continuum can be discretized over a given grid of wavenumbers thanks to the find_continuum_states method, given a (strictly positive) maximal wavenumber kmax and a grid step hk. An optional minimal wavenumber kmin can also be passed as an argument.

[20]:
# Let us go back to the initial potential
V0 = 8.
l  = 3.
potential = SWPotential(l, V0, grid=xgrid)

# Find some continuum states of this potential, whose wavenumbers range
# from 1 to 2 with a wavenumber grid step of 0.25
hk = 0.25
kmax = 2
continuum = SWPBasisSet.find_continuum_states(potential, kmax, hk, kmin=1)

You can check that the result is a SWPBasisSet instance whose states are SWPContinuum instances that have the expected wavenumbers:

[21]:
# continuum is a SWPBasisSet instance:
assert isinstance(continuum, SWPBasisSet)

# All the states in continuum are SWPContinuum instances:
assert all([isinstance(state, SWPContinuum) for state in continuum])

# Their wavenumbers correspond to the expected values:
continuum.wavenumbers
[21]:
[1.0, 1.0, 1.25, 1.25, 1.5, 1.5, 1.75, 1.75, 2.0, 2.0]

Note that each wavenumber is repeated twice: this is because there is an even and an odd continuum state for each positive wavenumber.
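
A quick way to see this, assuming the continuum states expose the same parity attribute as the Siegert states used later in this notebook:

print([(state.wavenumber, state.parity) for state in continuum])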

Continuum states around a resonance

The antinodes of the continuum wavefunctions reach a maximal amplitude inside the potential for energies corresponding to the absolute values of the resonant energies. This is only true for the continuum states of the same parity as the resonance.

Around the first resonance
[22]:
# Find some of the Siegert states once again
siegerts = SWPBasisSet.find_Siegert_states(potential, 4, re_hk, im_kmax)

# Get the first resonant state
ires = 0
res = siegerts.resonants[ires]
print("Resonance of absolute energy: {:.3f} (parity: {})"
      .format(abs(res.energy), res.parity))

# Define a grid of wavenumbers around that resonance
# and find the continuum states
hk = 0.25
kmid = abs(res.wavenumber)
kmin = kmid-hk
kmax = kmid+3*hk/2
continuum = SWPBasisSet.find_continuum_states(potential, kmax, hk,
                                              kmin=kmin, grid=xgrid)

# Plot the even continuum states
for c in continuum.even:
    c.abs().plot(ylim=(0,0.6), title=repr(c))
Resonance of absolute energy: 0.766 (parity: e)
[Output: 3 plots of the absolute value of the even continuum wavefunctions]

As you can see above, the maximum amplitude is reached at the resonance absolute energy for the even continuum states, while this is not the case for the odd continuum states below:

[23]:
for c in continuum.odd:
    c.abs().plot(ylim=(0,0.6), title=repr(c))
[Output: 3 plots of the absolute value of the odd continuum wavefunctions]
Around the second resonance

The second resonance being odd, it is the odd continuum wavefunction that reaches a maximal amplitude for a wavenumber equal to the absolute value of the resonant wavenumber:

[24]:
# Get the second resonant state
ires = 1
res = siegerts.resonants[ires]
print("Resonance of absolute energy: {:.3f} (parity: {})"
      .format(abs(res.energy), res.parity))

# Define a grid of wavenumbers around that resonance
# and find the continuum states
hk = 0.25
kmid = abs(res.wavenumber)
kmin = kmid-hk
kmax = kmid+3*hk/2
continuum = SWPBasisSet.find_continuum_states(potential, kmax, hk,
                                              kmin=kmin, grid=xgrid)

# Plot the odd continuum states
for c in continuum.odd:
    c.abs().plot(ylim=(0,0.6), title=repr(c))
Resonance of absolute energy: 5.676 (parity: o)
[Output: 3 plots of the absolute value of the odd continuum wavefunctions around the second resonance]

This concludes this tutorial showing the basics of the SiegPy module. You should be able to:

  • define a square-well potential with the SWPotential class,
  • find Siegert and continuum states that are eigenstates of the Hamiltonian involving a square-well potential and store them into a SWPBasisSet instance,
  • use some of their attributes and methods, especially to plot some quantities of interest.

You should also know that:

  • there are four different types of Siegert states,
  • the Siegert and continuum states of the SWP are instances of the SWPSiegert and SWPContinuum classes, respectively.
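
As a recap, the main calls used in this tutorial can be condensed into a short, self-contained sketch (same parameters as above):

import numpy as np
from siegpy import SWPotential, SWPBasisSet

xgrid = np.linspace(-3., 3., 401)
potential = SWPotential(3., 8., grid=xgrid)
siegerts = SWPBasisSet.find_Siegert_states(potential, 10., 1., 3., im_hk=2., grid=xgrid)
continuum = SWPBasisSet.find_continuum_states(potential, 2., 0.25, kmin=1., grid=xgrid)
siegerts.plot_wavenumbers()
siegerts.plot_wavefunctions(nres=2)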

Completeness relation: an introduction

One of the main interests of using Siegert states in solving quantum mechanical problems is that they form a complete basis set made of discrete states only (you may think of it as a generalization of the quantized bound states), which may suitably replace the cumbersome continuum of unoccupied states usually studied.

The simplest way of observing this is by comparing the following exact completeness relation (CR), which involves the bound (subscript \(b\)) and the continuum states of the 1D SWP (subscript \(p\)):

\(\text{CR}(k) = \sum_b \frac{\left\langle g|\varphi_b\right\rangle^2}{\left\langle g | g \right\rangle} + \sum_{p=\pm} \int_0^k \text{d} k_1 \frac{\left\langle g |\varphi_p\right\rangle \left\langle \varphi_p | g\right\rangle}{\left\langle g | g \right\rangle}\)

to the one approximated by the so-called Mittag-Leffler Expansion (MLE):

\(CR_{MLE}(k) = \frac{1}{2} \sum_{S=a,b} \frac{\left( g | \varphi_S \right)^2}{ \left\langle g | g \right\rangle } + \frac{1}{2} \sum_{S=c,d}^{|k_S| \leq |k|} \frac{\left( g | \varphi_S \right)^2}{ \left\langle g | g \right\rangle }\)

where \(g\) is a test function, \(\varphi_S\) is a Siegert state (where the subscripts \(a\), \(b\), \(c\), \(d\) stand for anti-bound, bound, resonant, anti-resonant) of wavenumber \(k_S\) and \(\varphi_p\) is a continuum state (where the subscripts + and - stand for even and odd continuum states) of wavenumber \(k_1\).

The exact CR is a continuous function of \(k\), while the MLE of the CR is evaluated at discrete values of \(k\), being the absolute values of the resonant/anti-resonant wavenumbers (and k=0 to get the bound/anti-bound contribution to the MLE of the CR).

This comparison will be performed here in the case of the 1D Square-Well Potential (1DSWP).

To perform such a study, one has to:

  • Define a 1D SW potential,
  • Create two basis sets: one made of Siegert states only, the other made of continuum and bound states,
  • Define a test function, which can be a rectangular function or a Gaussian function
  • Perform the exact evaluation of the CR and the approximated Mittag-Leffler expansion (MLE) of the CR, and compare them.
Initialization: find continuum and Siegert states

Before studying the completeness relation, we need to create two basis sets: one made of Siegert states only, the other made of bound and continuum states.

Import useful modules and classes
[1]:
# Make the notebook aware of some classes of the SiegPy module
from siegpy import SWPotential, SWPBasisSet, Rectangular, Gaussian
# Other imports
import numpy as np
import matplotlib.pyplot as plt
Define a 1D Square-Well Potential.

A 1D SW potential is characterized by its width l and depth V0.

[2]:
V0 = 10  # Depth of the potential
l = np.sqrt(2) * np.pi  # Width of the potential
potential = SWPotential(l, V0)
Create a basis-set made of Siegert states only.

The search of the wavenumbers of the Siegert states is limited to a certain part of the complex wavenumber plane. Here, the absolute value of the real part (resp. imaginary part) of the Siegert wavenumbers must not exceed re_kmax (resp. im_kmax).

To find the wavenumbers, one has to define a grid of input guesses made of complex wavenumbers. This grid is defined by a grid step on the real axis, re_hk, and an optional one on the imaginary axis, im_hk. The search for the bound and anti-bound states on the imaginary axis is carried out automatically (since the number of states on this axis and the range over which they are to be found are determined by the depth and width of the potential).

[3]:
re_kmax = 20.
im_kmax = 3.
re_hk = 1.
im_hk = 2.
siegerts = SWPBasisSet.find_Siegert_states(potential, re_kmax, re_hk,
                                           im_kmax, im_hk=im_hk, analytic=True)
Create an exact basis-set, made of bound and continuum states.

The bound states can be taken from the previous calculation finding the Siegert states (given that bound states are nothing but a particular type of Siegert states), while the continuum states are discretized over a grid of real wavenumbers. This grid is characterized by the grid step h_k and the maximal wavenumber k_max.

[4]:
# Find the continuum states
hk = 0.01   # Grid step for the continuum states wavenumbers
kmax = 20.  # Maximal wavenumber of the continuum state to find.
continuum = SWPBasisSet.find_continuum_states(potential, kmax, hk)
[5]:
# Create the exact basis set
exact = siegerts.bounds + continuum
Define test functions to evaluate the completeness relation.

The completeness relation is evaluated using a test function \(g\). For the MLE of the CR to hold, the test function must lie in region \(II\) (inside the potential, where \(|x| \leq l/2\)).

Two types of test functions are implemented to yield analytical scalar product with the Siegert and continuum states of the 1D SWP:

  • the rectangular function (Rectangular class), that can obviously lie exactly within region \(II\).
  • the Gaussian function (Gaussian class), that lies inside region \(II\) only approximately, when its \(\sigma\) is small enough so that it may be safely considered as 0 outside of region \(II\).
[6]:
xc = 0.0      # Center of the test functions
h = 3.0       # Amplitude of the test functions
a = l/8.      # Width of the rectangular function
sigma = l/20. # Width of the Gaussian
gauss = Gaussian(sigma, xc, h=h)  # Gaussian test function
rect = Rectangular.from_width_and_center(a, xc, h=h)   # Rectangular test function
Computation of the completeness relation

The computation of the completeness relation can be performed “exactly” (i.e., analytically with a discretized grid of continuum states) or approximately using the MLE, for Gaussian or rectangular test functions.

Case 1: Gaussian test function
Compute the convergence of the completeness relation as a function of the number of states included
  • The convergence of the MLE of the CR as a function of the number of resonant couples included in the completeness relation is computed thanks to the MLE_completeness_convergence method applied to the basis set made of Siegert states. This method returns the grid of wavenumbers and the convergence of the CR as a tuple.
  • The convergence of the exact CR as a function of the number of continuum states included in the completeness relation is computed thanks to the exact_completeness_convergence method applied to the exact basis set made of bound and continuum states. This method also returns the grid of wavenumbers and the convergence of the CR as a tuple.
[7]:
k_MLE_g, CR_MLE_gauss = siegerts.MLE_completeness_convergence(gauss)
k_exa_g, CR_exact_gauss = exact.exact_completeness_convergence(gauss)
Compare both convergences
[8]:
#plt.axhline(0, color='black')
plt.axhline(1, color='black', lw=1.5)
plt.plot(k_exa_g, np.real(CR_exact_gauss), color='#d73027', label='Exact')
plt.plot(k_MLE_g, np.real(CR_MLE_gauss), color='#4575b4', label='MLE',
         ls='', marker='.', ms=10)
plt.xlabel("$k$")
plt.ylabel("$CR$")
plt.legend()
plt.savefig('CR_gauss.pdf')
plt.title(repr(gauss))
plt.show()
_images/notebooks_completeness_relation_15_0.png

The exact completeness relation tends to 1, and so does the approximated MLE of the CR, which closely follows the exact result. This shows that finding the resonant states up to a certain absolute value of the wavenumber (or energy) is equivalent to finding the continuum states up to a certain wavenumber (or energy). Only a few Siegert states are sufficient to reproduce the influence of a continuum of eigenstates on the completeness relation.

Note that half of the resonant couples have zero influence on the completeness relation. This is explained by the parity of the corresponding states: the Gaussian function is even (because it is centered at 0), so the scalar product with the odd Siegert states obviously vanishes. You would also see that the continuum contribution to the CR is only due to the even continuum states.
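
A sketch of this parity argument, assuming the even attribute introduced in the previous notebook also applies here: the MLE evaluated with the even Siegert states only should reach (nearly) the same values as the full expansion.

k_MLE_even, CR_MLE_even = siegerts.even.MLE_completeness_convergence(gauss)
print(np.real(CR_MLE_gauss[-1]), np.real(CR_MLE_even[-1]))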

Case 2: Rectangular test function.

You could actually have obtained the same plot with a single line (and without needing to compute the exact basis set made of continuum states beforehand) by applying the plot_completeness_convergence method to the Siegert states basis set. If the basis set does not contain any continuum states, then a basis set of continuum states is created. If the optional parameters hk and nres are not specified, default values are used. Another important option is that you can provide an output filename in order to save the plot, via the file_save optional argument. We present these functionalities with the rectangular test function defined earlier:

[9]:
siegerts.plot_completeness_convergence(rect, file_save='CR_rect.pdf')
_images/notebooks_completeness_relation_18_0.png

As you can see, the exact completeness relation also tends to one, though more slowly than in the case of the Gaussian test function. This is explained by the discontinuities of the rectangular function, which are more difficult to represent using basis sets made of continuous wavefunctions (e.g., those of the continuum and of the Siegert states).

Another optional argument of the plot_completeness_convergence method is nres, which sets the number of resonant states used to perform the convergence:

[10]:
siegerts.plot_completeness_convergence(rect, nres=6)
_images/notebooks_completeness_relation_21_0.png

This concludes this notebook! By now, you should be able to test the convergence of the completeness relation for any Gaussian and rectangular test functions using the Gaussian and Rectangular classes.

You also learned that the Mittag-Leffler expansion is the natural expansion that uses Siegert states instead of continuum states in the completeness relation, so that a discrete basis set can accurately reproduce the result of the cumbersome continuum of states of positive energy.

Still, we only studied cases where the convergence of the completeness relation with Siegert states is closely related to the exact result obtained with continuum states. The next notebook will show that there are indeed some problematic cases…

Completeness relation: some problematic cases

This notebook directly follows the completeness_relation.ipynb notebook, where we computed the convergence of the completeness relation as a function of the number of continuum states or of resonant couples included in the exact completeness relation (CR) and in the approximated Mittag-Leffler expansion (MLE), respectively.

We saw that the MLE of the CR follows the exact CR. However, this is not always the case, and we’ll see that it actually depends on the test function.

Initialization: create a large basis set of Siegert states

In order to have a clearer view of that problem, larger basis sets are required. Knowing that finding the Siegert states is the most time-consuming operation, a binary file named siegerts.dat containing such a large basis set is provided for convenience.

This large basis set was obtained by applying the find_Siegert_states class method with a large value for re_k_max, and it was written to a binary file by applying the write method to it, with the name of the file as a parameter:

# Find the Siegert states
siegerts = SWPBasisSet.find_Siegert_states(SWPotential(4.442882938158366, 10), 200, 1, 3)
# Write the basis set in a file named siegerts.dat
siegerts.write("siegerts.dat")

These lines created the siegerts.dat binary file that can then be read (these operations use the pickle module).

Import useful classes
[1]:
# Make the notebook aware of some classes of the SiegPy module
from siegpy import SWPBasisSet, Rectangular
Read the Siegert states to create a larger basis set of Siegert states

The binary file is read using the from_file class method of the SWPBasisSet class.

[2]:
siegerts_large = SWPBasisSet.from_file('siegerts.dat')
l = siegerts_large.potential.width
Case 1: Large, centered, rectangular test functions

When the centered test function becomes large compared to the width of the potential, the MLE of the CR no longer follows the exact result. It is easy to show that fact by plotting the convergence of the CR for multiple rectangular functions of increasing width.

To that end, a_over_l is the ratio of the width \(a\) of the rectangular test function to the potential width \(l\). A loop over several values of this parameter creates large rectangular test functions that are then used to plot the convergence of the completeness relation:

[3]:
x_c = 0
for a_over_l in [0.8, 0.9, 0.95, 0.99, 0.999]:
    large_rect = Rectangular.from_width_and_center(a_over_l*l, x_c)
    siegerts_large.plot_completeness_convergence(large_rect)
[Output: 5 completeness-convergence plots, one per rectangular test-function width]

When the width of the centered rectangular test function gets close to the width of the potential, the MLE of the CR oscillates around the exact CR. The amplitude of these oscillations increases while their frequency decreases as the test function gets wider.

For the smallest rectangular test functions, the amplitudes of the oscillations seem to decrease for increasing wavenumbers: one can conjecture that the MLE of the CR still converges to the exact value of 1, given a large enough number of Siegert states (this number increasing as the test function gets larger, or actually closer to the border of region \(II\), as shown in the next section).
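
One way to probe this conjecture is to look at the tail of the MLE convergence for the widest test function of the loop above (a sketch using the documented MLE_completeness_convergence method; large_rect is the last rectangular function created above):

import numpy as np

k_tail, cr_tail = siegerts_large.MLE_completeness_convergence(large_rect)
print(np.real(np.asarray(cr_tail)[-5:]))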

Case 2: Non-centered rectangular test functions getting closer to the border of region \(II\)

What was observed in the previous case is also seen for test functions that are not as large as the previous ones, but that come close to the border of region \(II\) instead.

This time, the width of the test function is kept constant, while its center is moved away from the center of region \(II\). For a given width, there is a maximal value \(x_{c, max}\) for the test function to remain in region \(II\). The ratio \(x_c / x_{c, max}\) is therefore the parameter that is varied to create the different test functions.

[4]:
a = l / 4.
x_c_max = (l - a) / 2
for x_c_ratio in [0.8, 0.9, 0.95, 0.99, 0.999]:
    rect = Rectangular.from_width_and_center(a, x_c_ratio*x_c_max)
    siegerts_large.plot_completeness_convergence(rect)
    # Uncomment the following lines to compare the exact CR obtained using
    # the different test functions:
    #k_grid, cr = siegerts_large.exact_completeness_convergence(rect, hk=0.1, kmax=20)
    #print(cr)
[Output: 5 completeness-convergence plots, one per rectangular test-function center]

The pattern of the convergence is more complex for uncentered test functions, but the behavior is similar: the convergence of the MLE of the CR to the exact result gets harder to reach as the rectangular test function gets closer to the border of region \(II\), even for a test function that provides a good convergence of the CR when centered in region \(II\).

Explanation:

You may notice that the convergence of the exact CR is not greatly modified by the position of the center of the rectangular test function. *How can the MLE of the CR be so different as the test function gets closer to the border, while the exact convergence of the CR barely varies?*

This observation actually gives a clue: remember (from the previous notebook) that the continuum states anti-nodes have constant amplitudes in region \(II\) because their wavenumber is real, whereas resonant states have complex wavenumbers, leading to anti-nodes of increasing amplitude closer to the border of region \(II\). Also remember that the amplitude of the anti-nodes close to the border of region \(II\) gets even larger for the highest resonances.

Therefore, while the position of a test function of a given width may not have a large influence on the scalar products with the continuum states, the scalar product between the test function and the resonant states is clearly more dependent on that test function center, this influence being more critical as the test function gets close to the border of region \(II\). From that, one understands why the worst case scenario regarding the convergence of the MLE of the CR is that of a thin test function close to the border of region \(II\).

Completeness relation: a wider picture

The SiegPy module only provides the ability to study the convergence of the completeness relation one test function at a time, but you may want a wider picture, involving the results of many test functions. This notebook aims at showing how easily you can get that kind of information.

To that end, the same test example is presented for the two types of test functions (rectangular and Gaussian). It consists in comparing the convergence of the completeness relation for the exact CR and the MLE of the CR while varying the width of the test functions and the number of states used in the CR.

Initialization: create two basis sets

Again, it is more convenient to use a lot of Siegert states, and we will reuse the binary file siegerts.dat to save the computation time of finding the Siegert states. Finding the continuum states is not as time consuming, since they can have any real, positive wavenumber.

Import useful modules and classes
[1]:
# Make the notebook aware of some of the SiegPy module classes
from siegpy import SWPBasisSet, Rectangular, Gaussian
# Other imports
import numpy as np
import matplotlib.pyplot as plt
Read a data file containing a lot of Siegert states

This allows the creation of a large basis set without using the time-consuming find_Siegert_states method.

[2]:
siegerts = SWPBasisSet.from_file('siegerts.dat')
l = siegerts[0].potential.width
Create an exact basis-set, made of bound and continuum states.

The bound states can be taken from the previous calculation finding the Siegert states (given that bound states are nothing but a particular type of Siegert states), while the continuum states are discretized over a grid of real wavenumbers. This grid is characterized by the grid step h_k and the maximal wavenumber k_max.

[3]:
# Find the continuum states
h_k = 0.1 # Grid step for the continuum states wavenumbers
k_max = abs(siegerts.resonants[-1].wavenumber)+h_k # Maximal wavenumber
continuum = SWPBasisSet.find_continuum_states(siegerts[0].potential, k_max, h_k)
[4]:
# Create the exact basis set
exact = siegerts.bounds + continuum
Case 1: centered rectangular functions of varying width
Define the test functions

The widths \(a\) of the rectangular functions are varied relative to the width of the potential \(l\). In order to have the widest picture possible, a logarithmic scale is used for the x-axis of the plot, representing the width factors \(a/l\).

[5]:
x_c = 0. #center of the test functions
width_factors = 10**(np.linspace(-4, -0.01, 25))
rects = [Rectangular.from_width_and_center(a_over_l*l, x_c) for a_over_l in width_factors]
print(["{:.3e}".format(rect.width/l) for rect in rects])
['1.000e-04', '1.466e-04', '2.150e-04', '3.153e-04', '4.624e-04', '6.780e-04', '9.943e-04', '1.458e-03', '2.138e-03', '3.135e-03', '4.597e-03', '6.741e-03', '9.886e-03', '1.450e-02', '2.126e-02', '3.117e-02', '4.571e-02', '6.703e-02', '9.829e-02', '1.441e-01', '2.113e-01', '3.099e-01', '4.545e-01', '6.664e-01', '9.772e-01']
Compute the convergence of the MLE completeness relation for each rectangular function
[6]:
CR_MLE_conv = [siegerts.MLE_completeness_convergence(rect) for rect in rects]
Compute the exact completeness relation for each rectangular function
[7]:
CR_exact_conv = [exact.exact_completeness_convergence(rect) for rect in rects]
Plot the results

We now present the evolution of the completeness relation evaluated for multiple rectangular functions centered in region \(II\) and of various widths \(a\). The plot therefore uses \(\frac{a}{l}\) as the x-axis (where \(l\) is the width of the potential). There are 4 different colors, each representing a different number of resonant couples \(N_{res}\) used in the evaluation of the CR.

[8]:
def find_icont(ires, exact, siegerts):
    """
    Function returning the index of the continuum states corresponding to
    the absolute value of the wavenumbers of the resonant states
    given their index ires.
    """
    cont_wn = [c.wavenumber for c in exact.continuum.even]
    sieg_wn = np.array([abs(s.wavenumber) for s in siegerts.resonants])
    ires = [i-1 for i in ires]
    kres = sieg_wn[ires]
    return [(np.abs(cont_wn-k)).argmin() for k in kres]


# Number of resonant states to use in the completeness relation:
ires = [25, 50, 100, 250]
# Corresponding index of the continuum states:
icont = find_icont(ires, exact, siegerts)
# Set some global values
colors = ['#41b6c4', '#1d91c0', '#225ea8', '#081d58']
ms_res = 5  # Marker size for resonant states
ms_cont = 10  # Marker size for continuum states
# Plot the expected result: the CR should tend to 1
plt.axhline(1, color='k', ls='-', lw=1.5)
# Plot the points corresponding to the Siegert basis set
for j, i in enumerate(ires):
    CR = [CR_conv[i] for k, CR_conv in CR_MLE_conv]
    plt.plot(width_factors, np.real(CR), color=colors[j],
             marker='o', ls='', label='$N_{res}$ = '+str(i), ms=ms_res)
# Plot the points corresponding to the exact basis set
for j, i in enumerate(icont):
    CR = [CR_conv[i] for k, CR_conv in CR_exact_conv]
    plt.plot(width_factors, np.real(CR), color=colors[j],
             marker='o', mfc='none', ls='', ms=ms_cont)
# Set values for the plot
plt.xlabel('$a/l$')
plt.ylabel('$CR$')
ymax = 1.1
plt.ylim(0, ymax)
plt.xscale('log')
plt.plot(10**(-4), 2*ymax, label='Exact', marker='o',
         mfc='none', ls='', ms=ms_cont, color='k')
plt.plot(10**(-4), 2*ymax, label='MLE', marker='o',
         ls='', color='k', ms=ms_res)
plt.legend()
plt.savefig('CR_rect_multiple_centered.pdf')
plt.show()
_images/notebooks_completeness_relation_multiple_test_functions_16_0.png

The main point is that the exact CR and the approximate MLE of the CR follow each other closely: a basis set of Siegert states with the resonant states up to a certain energy is as complete as a basis set made of the bound states and the continuum states up to the same energy.

For almost every width \(a\), increasing the number of resonant couples taken into account leads to an increase of the completeness relation. This is no longer the case when the width of the test function is close to the width of the potential: these are the problematic cases presented in the previous notebooks.

Not surprisingly, the basis sets are more efficient at reproducing large test functions: highly oscillating Siegert states (i.e., those of high energy) are required to reproduce thin rectangular functions.

Case 2: centered Gaussian functions of varying width

The width of the Gaussian function is varied relative to the width of the potential.

Define the test functions
[9]:
x_c = 0. #center of the test functions
width_factors = 10**(np.linspace(-4, -0.01, 25))
gauss = [Gaussian(width_factor*l, x_c) for width_factor in width_factors]
Compute the convergence of the MLE completeness relation for each Gaussian function
[10]:
CR_MLE_conv_g = [siegerts.MLE_completeness_convergence(g) for g in gauss]
/Users/maximemoriniere/Documents/SiegPy/siegpy/swpeigenstates.py:550: RuntimeWarning: invalid value encountered in cdouble_scalars
  expkp * (erf(zkmp) + 1) - expkm * (erf(zkpm) - 1))
/Users/maximemoriniere/Documents/SiegPy/siegpy/swpeigenstates.py:552: RuntimeWarning: invalid value encountered in cdouble_scalars
  term2 = factor2 * (expqp * (erf(zqpp) - erf(zqmp)) -
/Users/maximemoriniere/Documents/SiegPy/siegpy/swpeigenstates.py:553: RuntimeWarning: invalid value encountered in cdouble_scalars
  expqm * (erf(zqmm) - erf(zqpm)))
Compute the exact completeness relation for each Gaussian function
[11]:
CR_exact_conv_g = [exact.exact_completeness_convergence(g) for g in gauss]
/Users/maximemoriniere/Documents/SiegPy/siegpy/swpeigenstates.py:814: RuntimeWarning: invalid value encountered in cdouble_scalars
  termk1 = expkp * (fm * (erf(zkpp) - 1.) - fp * (erf(zkmp) + 1.))
/Users/maximemoriniere/Documents/SiegPy/siegpy/swpeigenstates.py:815: RuntimeWarning: invalid value encountered in cdouble_scalars
  termk2 = expkm * (fp * (erf(zkpm) - 1.) - fm * (erf(zkmm) + 1.))
/Users/maximemoriniere/Documents/SiegPy/siegpy/swpeigenstates.py:818: RuntimeWarning: invalid value encountered in cdouble_scalars
  termq1 = expqp * (erf(zqpp) - erf(zqmp))
/Users/maximemoriniere/Documents/SiegPy/siegpy/swpeigenstates.py:819: RuntimeWarning: invalid value encountered in cdouble_scalars
  termq2 = expqm * (erf(zqpm) - erf(zqmm))

The RuntimeWarning is due to the fact that infinite values appear somewhere in the evaluation of the scalar products between eigenstates and a Gaussian test function. This causes the absence of some points in the plot.
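
If needed, the affected test functions can be identified explicitly. A minimal sketch:

# Width factors whose MLE convergence contains non-finite values
bad = [wf for wf, (k, cr) in zip(width_factors, CR_MLE_conv_g)
       if not np.all(np.isfinite(cr))]
print(bad)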

Plot the results

The results are plotted in the same manner as the previous case:

[12]:
# Plot the expected result: the CR should tend to 1
plt.axhline(1, color='k', ls='-', lw=1.5)
# Plot the points corresponding to the Siegert basis set
for j, i in enumerate(ires):
    CR = [CR_conv[i] for k, CR_conv in CR_MLE_conv_g]
    plt.plot(width_factors, np.real(CR), color=colors[j],
             marker='o', ls='', label='$N_{res}$ = '+str(i), ms=ms_res)
# Plot the points corresponding to the exact basis set
for j, i in enumerate(icont):
    CR = [CR_conv[i] for k, CR_conv in CR_exact_conv_g]
    plt.plot(width_factors, np.real(CR), color=colors[j],
             marker='o', mfc='none', ls='', ms=ms_cont)
# Set values for the plot
plt.xlabel(r'$\sigma/l$')
plt.ylabel('$CR$')
ymax = 1.1
plt.ylim(0, ymax)
plt.xscale('log')
plt.plot(10**(-4), 2*ymax, label='Exact', marker='o',
         mfc='none', ls='', ms=ms_cont, color='k')
plt.plot(10**(-4), 2*ymax, label='MLE', marker='o',
         ls='', color='k', ms=ms_res)
plt.legend()
plt.savefig('CR_gauss_multiple_centered.pdf')
plt.show()
_images/notebooks_completeness_relation_multiple_test_functions_26_0.png

The conclusions are the same as before:

  • the exact CR and the approximate MLE of the CR follow each other closely,
  • for a given width \(\sigma\), increasing the number of resonant couples taken into account leads to an increase of the completeness relation. There are however numerical instabilities for the largest numbers of resonant states when \(\sigma\) is too large and the Gaussian spreads over multiple regions. This is shown by the absence of some points in the plot for large values of \(\sigma / l\),
  • the basis sets are more efficient to reproduce large test functions.

Comparing both plots, you can see it is easier to exactly reproduce a Gaussian than a rectangular function: for a large set of test functions, the CR is exactly converged to 1. For width ratios larger than \(10^{-2}\), fewer than 25 resonant couples are even enough to reach convergence.

Strength function: an introduction

We saw that the exact completeness relation using the bound and continuum states can be efficiently approximated using a discrete basis set made of Siegert states. Still, the completeness relation is not a quantity of physical interest: it would be interesting to find such a quantity that strongly relies on the continuum states, in order to replace them by Siegert states.

The first example we will provide in this tutorial concerns the strength function (SF):

\(S(k) = - \frac{1}{\pi} \Im \left\langle g | G(k) | g \right\rangle\)

where \(G\) is the Green’s function (or resolvent) of the Hamiltonian of the considered system and \(g\) is a test function.

The exact Green’s function may be written as follows:

\(G(k) = \sum_b \frac{\left| \varphi_b \right\rangle \left\langle \varphi_b \right|}{k^2/2 - {k_b}^2/2} + \sum_{p = \pm} \int_0^{+\infty} \text{d} k_1 \frac{\left| \varphi_p \right\rangle \left\langle \varphi_p \right|}{k^2/2 - {k_1}^2/2}\)

where the first sum runs over the bound states of the system (of wavenumber \(k_b\)), while the second one runs over the continuum states of wavenumber (\(k_1\)), which can be even (subscript \(+\)) or odd (subscript \(-\)).

We will compare this exact result with the one obtained by using the approximated Green’s function given by the Mittag-Leffler expansion (MLE):

\(G_{MLE}(k) = \sum_{S = a, b, c, d} \frac{\left| \varphi_S \right) \left( \varphi_S \right|}{k_S (k - k_S)}\)

where the sum runs over all the Siegert states of the system (of wavenumber \(k_S\), where \(S\) represents a type of Siegert states: \(a\) for anti-bound, \(b\) for bound, \(c\) for resonant or capturing states and \(d\) for anti-resonant or decaying states).

We still consider the case of a 1D Square-Well potential, where the Siegert and continuum states are known analytically. It is therefore possible to get the exact strength function and to compare it to the MLE of the SF.

Initialization: create the basis sets
Import some modules and classes
[1]:
# Make the notebook aware of some of the SiegPy module classes
from siegpy import SWPBasisSet, Rectangular, Gaussian, SWPotential
# Other imports
import numpy as np
import matplotlib.pyplot as plt
Create a basis-set made of Siegert states only
[2]:
siegerts = SWPBasisSet.from_file("siegerts.dat", nres=25)
Create an exact basis-set, made of bound and continuum states

The bound states are taken from the Siegert basis set while the continuum states are discretized over a grid of real wavenumbers.

[3]:
# Find the potential of the basis set made of Siegert states
potential = siegerts.potential
l = potential.width

# Find some continuum states
k_max = abs(siegerts.resonants.wavenumbers[-1])
h_k = 0.01
continuum = SWPBasisSet.find_continuum_states(potential, k_max, h_k)

# Create the exact basis set
exact = siegerts.bounds + continuum

This step is actually optional when running a real strength function calculation (but is mandatory for the purpose of this notebook), as using a dense basis set over a large range of wavenumbers leads to prohibitively long computation times to get a converged strength function. It is actually better to build a continuum basis set “on the fly” while computing each desired point of the strength function.

Define test functions to evaluate the strength function.

The strength function (SF) is evaluated using a test function \(g\). For the MLE of the SF to hold, this test function must lie in region \(II\) (inside the potential, where \(|x| \leq l/2\)). The cases of a Gaussian test function and of a rectangular test function are studied.

[4]:
x_c = 0.0  # Center of the test functions
a = l/2.  # Width of the rectangular function
sigma = l/20.  # Width of the Gaussian
test_gauss = Gaussian(sigma, x_c)  # Gaussian test function
test_rect = Rectangular.from_width_and_center(a, x_c)  # Rectangular test function
Create the grid of wavenumbers where the strength function is evaluated

kgrid defines the wavenumber grid used for plotting. It is not mandatory to keep the same dense grid used to initialize the continuum states. It is even strongly advised to make this grid as small as possible: this saves computation time, since a costly integration over the continuum states is performed for each point \(k\) in kgrid to evaluate the exact strength function.

[5]:
step = 0.1  # Grid-step for plotting
k_max = 10  # Maximal wavenumber for plotting
kgrid = np.arange(step, k_max, step)
Case 1: strength function for a Gaussian test function

Both exact and approximate strength functions are calculated, the latter being very cheap to obtain, whereas the exact SF takes a long time due to the integrations over the continuum, which must extend to a large k_max and be very dense (small h_k). It is very difficult to get a result that is converged with respect to the refinement of the continuum basis (i.e., smaller h_k and larger k_max). Learning from the numerical convergence of this scheme, it was possible to devise another scheme allowing a better convergence (using smaller eta and h_k) within a reasonable amount of time, by avoiding the need to discretize a dense basis set of continuum states over the whole range of wavenumbers.

Note that there is no need to use a basis made of continuum states to compute the exact strength function “on the fly”. Some default values for h_k, eta and a tolerance parameter are passed to this method. They were found to give a good compromise between accuracy and speed for a large number of cases. Such default values for eta and h_k would not allow the other method to produce an exact SF in a limited amount of time.

Compared to the exact calculations, computing the MLE of the SF is cheap.

[6]:
SF_MLE_gauss = siegerts.MLE_strength_function(test_gauss, kgrid)
SF_exact_gauss = exact.exact_strength_function(test_gauss, kgrid)
SF_exact_gauss_other = siegerts.exact_strength_function_OTF(test_gauss, kgrid)

The MLE of the SF perfectly reproduces the exact, converged SF (evaluated with the minimal “on the fly” continuum basis set). The exact SF using the whole continuum basis set is not yet converged, even though it gives rather accurate results, especially for large \(k\).

[7]:
# Compare the exact SF to the MLE of the SF
plt.plot(kgrid, SF_exact_gauss, lw=4, label='Exact')
plt.plot(kgrid, SF_exact_gauss_other, color='#d73027', lw=5, label='Exact (converged)')
plt.plot(kgrid, SF_MLE_gauss, color='#000000', ls='--', label='MLE')
plt.xlabel("$k$")
plt.ylabel("$R(k)$")
plt.title(repr(test_gauss))
plt.legend()
plt.show()
_images/notebooks_strength_function_16_0.png

The same plot can be obtained by applying the plot_strength_function method to a basis set made of Siegert states. The exact converged result is then computed. This method requires the same parameters: a test function and a wavenumber grid. You may also pass some other arguments, such as the filename file_save to save the plot in a file.

[8]:
siegerts.plot_strength_function(test_gauss, kgrid, file_save='SF_gauss_centered.pdf')
_images/notebooks_strength_function_18_0.png

The main advantage of using the MLE of the strength function is that you gain substantial physical insight into which resonance causes a particular peak of the strength function. Each peak of the strength function is indeed related to one particular resonant couple (in blue). The only exception is the first peak, which corresponds to the sum of the bound and anti-bound states contributions (in red).

Plotting these individual contributions is triggered by the optional nres parameter, which specifies the number of resonant couples used. Note that if nres is set to 0, only the peak corresponding to the bound and anti-bound states contributions to the strength function is plotted.

[9]:
siegerts.plot_strength_function(test_gauss, kgrid, exact=False, nres=6)
_images/notebooks_strength_function_20_0.png

As you can see, the position and the shape of a particular peak is related to one particular resonance.

Case 2: strength function for a rectangular test function

The same study can be done using the previously defined rectangular test function:

[10]:
siegerts.plot_strength_function(test_rect, kgrid, file_save='SF_rect_centered_half_width.pdf')
_images/notebooks_strength_function_23_0.png

Regarding the relation between a peak and a resonant couple, it may happen that the contributions of some resonant/anti-resonant couples are purely negative. A particular example is when the rectangular function is centered and its width is half the potential width, as is the case here. Half of these couples give a contribution leading to a peak, while the contributions of the other half lead to zeros of the strength function:

[11]:
siegerts.plot_strength_function(test_rect, kgrid, exact=False, nres=8)
_images/notebooks_strength_function_25_0.png
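To see which couples contribute negatively, the contribution of each resonant/anti-resonant couple can be computed separately. Below is a minimal sketch reusing the MLE_strength_function method on two-state basis sets; it assumes SWPBasisSet is imported as in the other notebooks and that the returned contributions are plain numpy arrays:

res = siegerts.resonants
ares = siegerts.antiresonants
for i in range(4):
    # Basis set made of the i-th resonant/anti-resonant couple only
    couple = SWPBasisSet(states=[res[i], ares[i]])
    contrib = couple.MLE_strength_function(test_rect, kgrid)
    sign = "purely negative" if np.all(contrib <= 0) else "partly positive"
    print("Couple {}: {} contribution".format(i+1, sign))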

Another “funny” example is when the width of the rectangular function gets close to the width of the potential: the contribution of each resonant couple is then negative, as if it were “boring” through the bound/anti-bound states contribution. In this particular case, only the zeros of the strength function are related to the resonant couples:

[12]:
rect = Rectangular.from_width_and_center(0.999*l, x_c)
siegerts.plot_strength_function(rect, kgrid, nres=8)
_images/notebooks_strength_function_27_0.png

Remember that the test function extends close to the border of region \(II\). This means it is a problematic case for the completeness relation but, as you can see, it is not problematic at all when it comes to reproducing the strength function.

This concludes the first notebook about the strength function using Siegert states. You saw that the strength function can be well approximated by using the Mittag-Leffler expansion of the Green’s function, even in some cases that were problematic for the completeness relation. By using Siegert states, you may also gain some physical insight, by relating a peak to a particular resonance.

You should now be able to compute the strength function of any 1D SW potential using any analytic test function available, such as the Gaussian and rectangular functions.

Strength function: convergence of the Mittag-Leffler expansion

We saw in the strength_function.ipynb notebook that the strength function (SF) can be approximated by using a Mittag-Leffler expansion (MLE) of the Green’s function. In this notebook, we’ll see how the MLE of the SF converges to the exact result when the number of resonant couples used is increased.

Initialization

This part is very similar to the previous notebook, but don’t forget to run the cells before jumping to the next section!

Import some modules and classes
[1]:
# Make the notebook aware of some of the SiegPy module classes
from siegpy import SWPBasisSet, Rectangular, Gaussian, SWPotential
# Other imports
import numpy as np
import matplotlib.pyplot as plt
Create a basis-set made of Siegert states only
[2]:
siegerts = SWPBasisSet.from_file("siegerts.dat", nres=25)
Define test functions to evaluate the strength function
[3]:
l = siegerts.potential.width  # width of the potential
x_c = 0.0  # Center of the test functions
a = l/2  # Width of the rectangular function
sigma = l/20  # Width of the Gaussian
test_gauss = Gaussian(sigma, x_c)  # Gaussian test function
test_rect = Rectangular.from_width_and_center(a, x_c)  # Rectangular test function
Create the grid of wavenumbers where the strength function is evaluated

It is not mandatory to keep the same grid as for the continuum states: kgrid only defines the grid used for plotting, and it is strongly advised to keep it as coarse as possible to save computation time, since an integration over all continuum states is performed for each of its points.

[4]:
h_k = 0.05  # Grid-step for plotting
k_max = 10  # Maximal wavenumber for plotting
kgrid = np.arange(h_k, k_max, h_k)
Tests of convergence of the MLE of the strength function

We present the evolution of the MLE of the strength function as a function of the number of resonant couples included in the approximated Green’s function.

We will see that, with only a few resonant couples considered, it is possible to get a correct evaluation of the strength function over a rather large range of wavenumbers.

Case 1: Gaussian test function

The first step consists in evaluating the strength function for basis sets of increasing size. To that end, the contribution of all the bound and anti-bound states is evaluated separately (stored in b_a_MLE_g), while the strength function associated to each resonant couple is stored in the list r_MLE_g.

[5]:
# Create basis sets with bound and antibound states, with resonant states
# and with anti-resonant states.
bnds_abnds = siegerts.bounds + siegerts.antibounds
res = siegerts.resonants
ares = siegerts.antiresonants

# Contribution of the bound and anti-bound states to the MLE of the SF
b_a_MLE_g = bnds_abnds.MLE_strength_function(test_gauss, kgrid)
# Contribution of each resonant couple to the MLE of the SF
r_MLE_g   = [SWPBasisSet(states=[res[i], ares[i]]).MLE_strength_function(test_gauss, kgrid) for i in range(len(res))]

After some matplotlib manipulation, you get the following plot:

[6]:
#Plot the reference, with all siegert states
plt.plot(kgrid, b_a_MLE_g + sum(r_MLE_g), color='k', lw=5,
         label='$N_{res}$ = '+str(len(res)))
#Plot the first: bound states contribution only
colors = ['#0c2c84', '#1d91c0', '#7fcdbb', '#c7e9b4']
plt.plot(kgrid, b_a_MLE_g, color=colors[0], label='$N_{res}$ = 0')
#Plot the rest
for i in [2, 4, 6]:
    plt.plot(kgrid, b_a_MLE_g + sum(r_MLE_g[:i]),
             color=colors[i//2], label='$N_{res}$ = '+str(i))
plt.xlabel("$k$")
plt.ylabel("$R(k)$")
plt.title('Gaussian test function')
plt.legend()
plt.show()
_images/notebooks_convergence_MLE_strength_function_13_0.png

As you can see, using a handful of resonant couples leads to an almost converged result over a rather large range of wavenumbers. Remember that the exact result would require a rather dense basis set of continuum states to reach the same convergence, while the physical information on the resonances would be absent. The approximation of the Green’s function by Siegert states is very promising.
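This convergence can also be quantified. The following sketch, which assumes the contributions computed above are plain numpy arrays (as the plots suggest), prints the maximal deviation of the truncated MLE from the reference using all resonant couples:

reference = b_a_MLE_g + sum(r_MLE_g)
for n in [0, 2, 4, 6, 8]:
    truncated = b_a_MLE_g + sum(r_MLE_g[:n])
    max_dev = np.max(np.abs(truncated - reference))
    print("N_res = {}: maximal deviation = {:.2e}".format(n, max_dev))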

Case 2: rectangular test function

You can easily repeat that same study for a rectangular test function:

[7]:
# Contribution of the bound and anti-bound states to the MLE of the SF
b_a_MLE_r = bnds_abnds.MLE_strength_function(test_rect, kgrid)
# Contribution of each resonant couple to the MLE of the SF
r_MLE_r   = [SWPBasisSet([res[i], ares[i]]).MLE_strength_function(test_rect, kgrid) for i in range(len(res))]
[8]:
#Plot the reference, with all siegert states
plt.plot(kgrid, b_a_MLE_r + sum(r_MLE_r), color='k', lw=5,
         label='$N_{res}$ = '+str(len(res)))
#Plot the first: bound states contribution only
colors = ['#0c2c84', '#1d91c0', '#41b6c4', '#7fcdbb', '#c7e9b4']
plt.plot(kgrid, b_a_MLE_r, color=colors[0], label='$N_{res}$ = 0')
#Plot the rest
for i in [2, 4, 6, 8]:
    plt.plot(kgrid, b_a_MLE_r + sum(r_MLE_r[:i]),
             color=colors[i//2], label='$N_{res}$ = '+str(i))
plt.xlabel("$k$")
plt.ylabel("$R(k)$")
plt.title('Rectangular test function')
plt.legend()
plt.show()
_images/notebooks_convergence_MLE_strength_function_17_0.png

Note that, even if the strength function erroneously reaches negative values, the position and the shape of the first peaks are still very well reproduced by only a few resonant couples.
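If you want to locate where such a truncated MLE of the SF becomes negative, a small numpy check is enough (again assuming the contributions computed above are plain numpy arrays):

# Truncated MLE using the first 4 resonant couples
SF_truncated = b_a_MLE_r + sum(r_MLE_r[:4])
negative_k = kgrid[SF_truncated < 0]
if len(negative_k) > 0:
    print("Truncated MLE is negative for {} grid points, e.g. around k = {:.2f}"
          .format(len(negative_k), negative_k[0]))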

Time propagation: an introduction

After the strength function, the time propagation of an initial wavepacket is another example of how relevant the Siegert states can be in order to reproduce quantities of physical interest.

The exact time propagation of an initial wavepacket \(f\) using the bound and continuum states is given by:

\(f_{exact}(x, t) = \sum_b \left\langle \varphi_b | f \right\rangle \varphi_b(x) e^{- i E_b t} + \sum_{p=\pm} \int_0^\infty \text{d} k \left\langle \varphi_p | f \right\rangle \varphi_p(x) e^{- i E(k) t}\)

It will be compared to a Siegert states expansion of this time-propagation:

\(f_{S}(x, t) = \sum_{S=?} \alpha_S \left( \varphi_S | f \right\rangle \varphi_S(x) e^{- i E_S t}\)

where the important part is the value of the weight \(\alpha_S\): it will be shown later that the exact Siegert states expansion has a precise definition for these weights, which differs from the Mittag-Leffler expansion (for which each state is counted with a 1/2 weight) except at the initial time \(t=0\).

The test function \(f\) may be a rectangular or a Gaussian function. We will focus on the latter, since the completeness relation is more easily fulfilled for a Gaussian, and this greatly influences the quality of the time-propagation (both for the exact and the Siegert states expansions, since it reduces the wavenumber (or energy) range of the states needed for the time propagation).

This comparison is performed in the case of the 1D Square-Well Potential (1DSWP), where the continuum and Siegert states are known analytically.

Initialization

Before actually performing the time propagation of an initial wavepacket, some initial actions must be performed, to define the potential and basis sets made of the corresponding eigenstates. The main difference here is that the eigenstates must be discretized over a space grid in order for the time propagation of a wavepacket to be performed.

Import some modules and classes

This does not require any different module imports: no surprise there.

[1]:
# Make the notebook aware of some of the SiegPy module classes
from siegpy import Gaussian, SWPBasisSet
# Other imports
import numpy as np
import matplotlib.pyplot as plt
Define a 1D Square-Well Potential and create a basis set made of Siegert states only

To save computation time, the potential and the basis set of Siegert states are read from the file siegerts.dat. This is no different from the previous notebooks.

[2]:
# Read the Siegert states from a data file
siegerts = SWPBasisSet.from_file('siegerts.dat', nres=25)

# Define the potential
potential = siegerts[0].potential

The difference actually comes from the need to discretize each eigenstate over a grid (remember the equations in the introduction: they require \(\varphi_p(x)\) and \(\varphi_S(x)\)).

Given that:

  • the Siegert states expansion is supposed to be valid in region \(II\) (\(|x| \leq l/2\)),
  • the scalar products between Siegert states and the initial wavepacket are still computed analytically (each state has the attribute analytic set to True),

there is no need to use a large, dense grid: it need only be dense enough for plotting.

[3]:
# Discretize the Siegert states over a space grid in region II:
l = potential.width
xgrid = np.linspace(-l/2, l/2, 201)
siegerts.grid = xgrid

In case you do not have access to a previous data file, the commands are similar, but come in a different order:

# You also need to import the SWPotential class
from siegpy import SWPBasisSet, SWPotential
# Define a potential
potential = SWPotential(5, 10)
# Create a grid
l = potential.width
xgrid = np.linspace(-l/2, l/2, 201)
# Create the basis set with the grid
siegerts = SWPBasisSet.find_Siegert_states(potential, 10, 1, 3, grid=xgrid)

If the data file contains eigenstates that are already discretized, it is even simpler:

# You don't need to import the SWPotential class
from siegpy import SWPBasisSet
# Read the Siegert states from a data file
siegerts = SWPBasisSet.from_file('siegerts.dat')
Create an exact basis set

The exact basis set is made of the bound and continuum states of the system. The bound states of the siegerts basis set may readily be reused, while a set of continuum states still has to be computed. This is done using the find_continuum_states class method of SWPBasisSet (do not forget to discretize the states over a grid by specifying the grid optional parameter):

[4]:
# Find the continuum states of the potential
h_k = 0.05
k_max = 20
cont = SWPBasisSet.find_continuum_states(potential, k_max, h_k, grid=xgrid)

# Make the exact basis set
exact = cont + siegerts.bounds
Define a test function

The test function \(f\) is a Gaussian that must mostly spread in region \(II\) (inside the potential, where \(|x| \leq l/2\)). We start with a centered Gaussian.

[5]:
sigma = l/20. # width of the Gaussian
x_c = 0.0     # center of the Gaussian
gauss = Gaussian(sigma, x_c, grid=xgrid)
Exact and Mittag-Leffler expansion of the time propagation of a Gaussian wavepacket

To perform the time propagation, the time_grid parameter is the last one to define. It should be a list or a numpy array containing the times for which the propagated wavepacket is evaluated. Note that all the times in time_grid must be positive (if not, all times are translated so that they all are positive).
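As an illustration, such a translation simply amounts to shifting the whole time grid; here is a minimal numpy sketch of one way this can be done (the exact convention used internally by SiegPy may differ):

raw_time_grid = np.array([-0.5, 0.0, 0.5, 1.0])
shifted_time_grid = raw_time_grid - min(raw_time_grid.min(), 0.0)
print(shifted_time_grid)  # [0.  0.5 1.  1.5]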

Computation

The exact time propagation of the Gaussian wavepacket is computed by applying the exact_propagation method to the exact basis set. The Mittag-Leffler expansion of this time propagation is computed by applying the MLE_propagation method to the Siegert states basis set.

[6]:
# Definition of the time grid
time_grid = [0.0, 0.05, 0.1, 0.15, 0.3, 0.4, 0.5, 0.6, 0.7, 1.0, 1.5]

# Evaluation of the exact time propagation
exact_tp = exact.exact_propagation(gauss, time_grid)

# Evaluation of the Mittag-Leffler expansion of the time propagation
MLE_tp = siegerts.MLE_propagation(gauss, time_grid)

Note:

Rather than using the exact basis set exact defined above, the exact_propagation method may be applied to the basis set siegerts, since the continuum states may be created by this method, provided that values for the optional parameters hk (for the wavenumber grid step) and kmax (for the wavenumber range) are given:

# Evaluation of the exact time propagation, creating the continuum states in place
exact_tp = siegerts.exact_propagation(gauss, time_grid, kmax=k_max, hk=h_k)
Plotting

The results of the exact_propagation and MLE_propagation methods are arrays of wavepackets \(f(x, t)\), evaluated at the different times \(t\) in time_grid. We can compare both results for each time:

[7]:
h = gauss.amplitude
k0 = gauss.momentum
for i, t in enumerate(time_grid):
    plt.ylim(-0.6, 1.0)
    plt.title("t = {}".format(t))
    plt.plot(xgrid, np.real(exact_tp[i]), color='k', label='Re[$f_{exact}(t)$]', lw=5)
    plt.plot(xgrid, np.imag(exact_tp[i]), color='grey', label='Im[$f_{exact}(t)$]', lw=5)
    plt.plot(xgrid, np.real(MLE_tp[i]), color='#2166ac', label='Re[$f_{MLE}(t)$]')
    plt.plot(xgrid, np.imag(MLE_tp[i]), color='#d1e5f0', label='Im[$f_{MLE}(t)$]')
    # plt.plot(gauss.grid, np.real(gauss.wf), color='k')
    plt.xlabel("$x$")
    plt.ylabel("$f(x, t)$")
    plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
    plt.show()
_images/notebooks_time_propagation_17_0.png
_images/notebooks_time_propagation_17_1.png
_images/notebooks_time_propagation_17_2.png
_images/notebooks_time_propagation_17_3.png
_images/notebooks_time_propagation_17_4.png
_images/notebooks_time_propagation_17_5.png
_images/notebooks_time_propagation_17_6.png
_images/notebooks_time_propagation_17_7.png
_images/notebooks_time_propagation_17_8.png
_images/notebooks_time_propagation_17_9.png
_images/notebooks_time_propagation_17_10.png

While the Mittag-Leffler expansion seems to reproduce the time propagation exactly for short times, it starts diverging from the exact result as soon as the propagated wavepacket reaches the border of region \(II\). Both regimes can be understood:

  • We know that the MLE of the initial Gaussian wavepacket is correct, since the completeness relation for this test function is fulfilled. This is why the Gaussian test function is well reproduced for \(t = 0\). Furthermore, the MLE is supposed to hold while the test function is in region \(II\). This is the reason why the MLE is valid for short times.
  • From the second formula of the introduction, it is inferred that the MLE of the time propagation of a wavepacket must diverge for long positive times because of the anti-resonant states contribution: the exponential term \(e^{-i E_S t}\) diverges because of the positive imaginary part of the anti-resonant states energies (see the short numerical check below). Note that the resonant states contribution would make the MLE of the time propagation diverge for long negative times.
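Here is the short numerical check mentioned above: the modulus of the exponential factor of an anti-resonant state grows with time. This sketch assumes that each Siegert state exposes its complex energy through an energy attribute, as the numerical eigenstates do later in the tutorial:

# Exponential factor of the first anti-resonant state over the time grid
E_ar = siegerts.antiresonants[0].energy  # assumed attribute name
for t in time_grid:
    print("t = {:4.2f}   |exp(-i E t)| = {:.3e}".format(t, abs(np.exp(-1j * E_ar * t))))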
Another type of expansion: the Berggren Expansion

The Mittag-Leffler expansion, despite giving correct results for short times, cannot be the correct Siegert states expansion: another expansion must be found. The so-called Berggren expansion (which is discussed in the literature, e.g. 1, 2) may be a good candidate: it requires the use of the bound and resonant states only (all counted with a weight 1, compared to the 1/2 weight of the MLE). The divergence due to the anti-resonant states for large positive times therefore disappears.

This expansion can be applied to the Siegert states basis set via the Berggren_propagation method, using the same time grid and Gaussian test function:

[8]:
# Evaluation of the Berggren expansion of the time propagation
Ber_tp = siegerts.Berggren_propagation(gauss, time_grid)

The result of this expansion is then compared to the exact time propagation:

[9]:
for i, t in enumerate(time_grid):
    plt.ylim(-0.6, 1.0)
    plt.title("t = {}".format(t))
    plt.plot(xgrid, np.real(exact_tp[i]), color='k', label='Re[$f_{exact}(t)$]', lw=5)
    plt.plot(xgrid, np.imag(exact_tp[i]), color='grey', label='Im[$f_{exact}(t)$]', lw=5)
    plt.plot(xgrid, np.real(Ber_tp[i]), color='#d73027', label='Re[$f_{Ber}(t)$]')
    plt.plot(xgrid, np.imag(Ber_tp[i]), color='#fc8d59', label='Im[$f_{Ber}(t)$]')
    #plt.plot(gauss.grid, np.real(gauss.wf), color='k')
    plt.xlabel("$x$")
    plt.ylabel("$f(x, t)$")
    plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
    plt.show()
_images/notebooks_time_propagation_22_0.png
_images/notebooks_time_propagation_22_1.png
_images/notebooks_time_propagation_22_2.png
_images/notebooks_time_propagation_22_3.png
_images/notebooks_time_propagation_22_4.png
_images/notebooks_time_propagation_22_5.png
_images/notebooks_time_propagation_22_6.png
_images/notebooks_time_propagation_22_7.png
_images/notebooks_time_propagation_22_8.png
_images/notebooks_time_propagation_22_9.png
_images/notebooks_time_propagation_22_10.png

In the short time limit, this expansion is not as good as the Mittag-Leffler expansion. This is particularly evident at \(t=0\): the initial Gaussian wavepacket is complex if the Berggren expansion is used. This shows that the Berggren expansion is not the natural Siegert states expansion, while the Mittag-Leffler expansion is. For the latter, the imaginary parts of the resonant and anti-resonant states contributions are opposite, so that their sum vanishes. The Berggren expansion lacks this anti-resonant states contribution at short times.

However, and as expected, the Berggren expansion does not diverge for large times. The agreement with the exact time propagation is even very good for larger times.

This is actually expected: the resonant states contributions to the time propagation tend to 0 for large times (due to the negative imaginary part of their energies), and this is also true for the continuum states (this is known as the RAGE theorem in the mathematical community). The agreement at very long times is therefore expected, since only the bound states contributions remain.
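To make this long-time argument concrete, the bound states contribution alone can be isolated thanks to the custom-weights mechanism described in the notes at the end of this notebook. This is only a sketch: it assumes that weight keys omitted from the dictionary default to a zero weight, as the Berggren example below suggests:

# Contribution of the bound states alone to the time propagation
bound_weights = {'b': 1}
bound_tp = siegerts.Siegert_propagation(gauss, time_grid, weights=bound_weights)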

Still, the Berggren expansion is very satisfactory for an intermediate time range, when the continuum states contributions are still relatively high compared to the bound states contributions. There might be a transition between the Mittag-Leffler regime and the Berggren regime.

Exact Siegert expansion

We found which expansion should hold in the two limits (short and long times), and a transition from a Mittag-Leffler expansion to a Berggren expansion is expected. This transition was explained in Santra et al., PRA 71 (2005). To achieve that, the authors analytically derived the correct weights for the Siegert states (the \(\alpha_S\) in the second equation of the introduction of this notebook). Such weights are time-dependent and also depend on the Siegert state (see eq. 69 of this paper). This exact Siegert states expansion amounts exactly to the Mittag-Leffler expansion at the initial time \(t=0\) and is somehow related to the Berggren expansion for \(t \neq 0\), since only the bound and resonant states have an exponential decay.

This exact Siegert states propagation is implemented in SiegPy. It can be called directly via the exact_Siegert_propagation method of a SWPBasisSet instance:

[10]:
exact_S_tp = siegerts.exact_Siegert_propagation(gauss, time_grid)
[11]:
for i, t in enumerate(time_grid):
    plt.ylim(-0.6, 1.0)
    plt.title("t = {}".format(t))
    plt.plot(xgrid, np.real(exact_tp[i]), color='k', label='Re[$f_{exact}(t)$]', lw=5)
    plt.plot(xgrid, np.imag(exact_tp[i]), color='grey', label='Im[$f_{exact}(t)$]', lw=5)
    plt.plot(xgrid, np.real(exact_S_tp[i]), color='#276419', label='Re[$f_{S, exact}(t)$]')
    plt.plot(xgrid, np.imag(exact_S_tp[i]), color='#b8e186', label='Im[$f_{S, exact}(t)$]')
    #plt.plot(gauss.grid, np.real(gauss.wf), color='k')
    plt.xlabel("$x$")
    plt.ylabel("$f(x, t)$")
    plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
    plt.show()
_images/notebooks_time_propagation_26_0.png
_images/notebooks_time_propagation_26_1.png
_images/notebooks_time_propagation_26_2.png
_images/notebooks_time_propagation_26_3.png
_images/notebooks_time_propagation_26_4.png
_images/notebooks_time_propagation_26_5.png
_images/notebooks_time_propagation_26_6.png
_images/notebooks_time_propagation_26_7.png
_images/notebooks_time_propagation_26_8.png
_images/notebooks_time_propagation_26_9.png
_images/notebooks_time_propagation_26_10.png

As expected, the exact Siegert states expansion gives results that are in excellent agreement with the exact time propagation using the bound and continuum states. This agreement is valid for any time considered (if the exact basis set is dense enough, see the convergence_exact_time_propagation notebook).

Notes:

1- Even though the MLE_propagation and Berggren_propagation methods should be preferred, the Siegert_propagation method actually allows you to reproduce their results by specifying a value for the optional parameter weights:

# MLE expansion
MLE_weights = {'b': 1/2, 'ab': 1/2, 'r': 1/2, 'ar': 1/2}
MLE_tp = siegerts.Siegert_propagation(gauss, time_grid,
                                      weights=MLE_weights)

# Berggren expansion
Ber_weights = {'b': 1, 'r': 1}
Ber_tp = siegerts.Siegert_propagation(gauss, time_grid,
                                      weights=Ber_weights)

Using the Siegert_propagation method in this manner allows you to define another Siegert states expansion, or to isolate the contribution of a particular type of Siegert states to the time-propagation without creating another basis set.
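For instance, here is a minimal sketch isolating the anti-resonant states contribution (again assuming that omitted weight keys default to a zero weight):

# Contribution of the anti-resonant states alone to the time propagation
ar_weights = {'ar': 1}
ar_tp = siegerts.Siegert_propagation(gauss, time_grid, weights=ar_weights)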

2- In order to get the plots, you can also use the plot_propagation method. Its parameters are the test function and the time_grid. Its optional arguments exact and exact_Siegert are set to True by default, meaning that both exact expansions are plotted. You can therefore plot only one of these expansions (or neither) by setting one or both of these arguments to False. Two other optional arguments, MLE and Berggren, are set to False by default; set them to True to also plot the MLE and the Berggren expansions, respectively.

Here are some use cases, allowing to reproduce the three series of plots presented above in a more concise manner:

# Limit the y-axis range of the plots
my_ylim=(-0.6, 1.0)
# Plot the exact result and the MLE
siegerts.plot_propagation(gauss, time_grid, exact_Siegert=False, MLE=True, ylim=my_ylim)
# Plot the exact result and the Berggren expansion
siegerts.plot_propagation(gauss, time_grid, exact_Siegert=False, Berggren=True, ylim=my_ylim)
# Plot both exact expansions
siegerts.plot_propagation(gauss, time_grid, ylim=my_ylim)

Note that it is not mandatory to have continuum states in the basis set when plotting the exact expansion, since they can be quickly created on the fly (see the note in the exact and MLE section of this notebook). The above example would actually compute the same continuum basis set and its contributions to the time-propagation three times (this is not a problem because these computations are inexpensive; the gain in concision is well worth the repeated computation).

Conclusion

You should now be able to perform the most important Siegert states expansions of the time propagation of a wavepacket, and also to compare them to the exact result using the bound and continuum states.

You saw that:

  • there is an exact Siegert states expansion that is not the Mittag-Leffler expansion,
  • it however is the same as the Mittag-Leffler expansion for the initial time \(t=0\),
  • it is somehow related to the Berggren expansion for \(t \neq 0\).

Time propagation: wavepacket with an initial momentum

This notebook is very similar to the time_propagation.ipynb notebook: the goal is to reproduce the exact time propagation of an initial wavepacket \(f\):

\(f_{exact}(x, t) = \sum_b \left\langle \varphi_b | f \right\rangle \varphi_b(x) e^{- i E_b t} + \sum_{p=\pm} \int_0^\infty \text{d} k \left\langle \varphi_p | f \right\rangle \varphi_p(x) e^{- i E(k) t}\)

with a Siegert states expansion:

\(f_{S}(x, t) = \sum_{S=?} \alpha_S \left( \varphi_S | f(t=0) \right\rangle \varphi_S(x) e^{- i E_S t}\)

where \(\alpha_S\) corresponds to a weight.

The main difference with the previous notebook is the use of a non-centered (\(x_c \neq 0\)) Gaussian wavefunction \(f\) with an initial momentum \(k_0\):

\(f = h e^{-\frac{(x-x_c)^2}{2 \sigma^2}} e^{i k_0 x}\)

We already presented an exact Siegert expansion, where the weight \(\alpha_S\) depends both on the time and on the state (in contrast with the Mittag-Leffler or Berggren expansions). We will therefore mainly use this exact Siegert expansion here.

Initialization: import useful modules and classes
[1]:
# Make the notebook aware of some of the SiegPy module classes
from siegpy import SWPBasisSet, Gaussian
# Other imports
import numpy as np
import matplotlib.pyplot as plt
Case 1: small initial momentum

Nothing is very different from the previous notebook: the main difference resides in the initial momentum of the test function, and in the fact that the plot_propagation method presented at the end of the previous notebook will be used throughout this one (meaning that there is no need to compute the continuum states beforehand).

Define a 1D Square-Well Potential and create a basis set made of Siegert states only

To save computation time, the potential and the basis set of Siegert states are read from the file siegerts.dat. A grid is finally defined and applied to discretize the eigenstates.

[2]:
# Read the potential from a data file:
filename = 'siegerts.dat'
siegerts = SWPBasisSet.from_file(filename, nres=25)

# Define a space grid in region II:
potential = siegerts[0].potential
l = potential.width
xgrid = np.linspace(-l/2, l/2, 201)

# Discretize the Siegert states over the grid:
siegerts.grid = xgrid
Define a test function

The test function \(f\) is a Gaussian that must mostly spread in region \(II\) (inside the potential, where \(|x| \leq l/2\)). It is non-centered (\(x_c \neq 0\)) and has an initial momentum (\(k_0 \neq 0\)). This means that the test function is complex-valued.

[3]:
sigma = l/20. # width of the Gaussian
x_c   = -l/4. # center of the Gaussian
k_0   = 1.    # initial momentum
gauss = Gaussian(sigma, x_c, k0=k_0, grid=xgrid)
gauss.plot()
_images/notebooks_time_propagation_initial_momentum_7_0.png
Make sure that both basis sets are complete for this test function
[4]:
siegerts.plot_completeness_convergence(gauss)
_images/notebooks_time_propagation_initial_momentum_9_0.png

Using the continuum states up to \(k = 20\) is enough for the basis set to be considered as complete. In comparison, using only 25 resonant couples (plus the bound and anti-bound states) leads to the same result.

Exact and exact Siegert expansion of the time propagation of a Gaussian wavepacket

The propagation of the wavepacket is evaluated for the times in time_grid, using the same protocol as in the previous notebook:

[5]:
# Definition of the time grid
time_grid = [0.0, 0.05, 0.1, 0.15, 0.3, 1.0, 2.0, 3.0]
# Plot the time propagation of both exact expansions
siegerts.plot_propagation(gauss, time_grid)
_images/notebooks_time_propagation_initial_momentum_12_0.png
_images/notebooks_time_propagation_initial_momentum_12_1.png
_images/notebooks_time_propagation_initial_momentum_12_2.png
_images/notebooks_time_propagation_initial_momentum_12_3.png
_images/notebooks_time_propagation_initial_momentum_12_4.png
_images/notebooks_time_propagation_initial_momentum_12_5.png
_images/notebooks_time_propagation_initial_momentum_12_6.png
_images/notebooks_time_propagation_initial_momentum_12_7.png

Again, for any time, and as expected, both exact expansions give the same time propagation of the initial wavepacket. The non-zero initial momentum does not change that conclusion.

Case 2: larger initial momentum

The case of a large initial momentum exhibits the overcompleteness of the Siegert states basis set: we will show that different Siegert states expansions may lead to very similar and (almost) correct time propagations under certain conditions.

To that end, the four types of expansions presented in the previous notebook will be used:

  • the exact time propagation using the bound and continuum states (used as the reference),
  • the Mittag-Leffler expansion (weight \(\alpha_S = 1/2\) for all Siegert states),
  • the Berggren expansion (weight \(\alpha_S = 1\) for bound and resonant states, 0 for the others),
  • the exact Siegert expansion, using the time- and state-dependent weights \(\alpha_S\) from Eq. 69 of Santra et al., *PRA* **71** (2005).

We will see that the Berggren expansion, which reproduces such an initial wavepacket rather accurately, keeps the same accuracy at any time, while the Mittag-Leffler expansion is still accurate in the short time limit only. We will nevertheless conclude that the best agreement with the reference time propagation is obtained by the exact Siegert expansion.

New initial wavepacket

The same Gaussian is used, except that its initial momentum is larger.

[6]:
k_0 = 20. # initial momentum
gauss = Gaussian(sigma, x_c, k0=k_0, grid=xgrid)
gauss.plot()
_images/notebooks_time_propagation_initial_momentum_16_0.png
Make sure that both basis sets are complete for this test function
[7]:
siegerts.plot_completeness_convergence(gauss)
_images/notebooks_time_propagation_initial_momentum_18_0.png

When compared to the same Gaussian with small or no initial momentum, the convergence of the completeness relation is very different. The basis sets previously defined are no longer complete for this initial state.

As you can see, the bound and anti-bound states contributions to the completeness relation are negligible. The same is true for the contributions of the continuum or resonant states with wavenumbers below \(k=10\). More continuum and Siegert states have to be included in both basis sets!

[8]:
# Extend the Siegert states basis set
siegerts = SWPBasisSet.from_file(filename, grid=xgrid, nres=50)

# Test the exact and MLE of the completeness relation for the new basis sets
siegerts.plot_completeness_convergence(gauss)
_images/notebooks_time_propagation_initial_momentum_20_0.png

The continuum states (or resonant couples) whose wavenumbers (or absolute values of the wavenumbers) lie in the 10 to 30 range (i.e., centered on the initial momentum) are the most important for reproducing the initial wavepacket. All other continuum or Siegert states have a negligible influence on the completeness relation.
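A quick way to check how many resonant states fall into this window is sketched below; note that the wavenumber attribute name is an assumption made for this sketch (see the code documentation for the actual name):

relevant = [s for s in siegerts.resonants if 10 <= abs(s.wavenumber) <= 30]
print("{} resonant states with 10 <= |k| <= 30".format(len(relevant)))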

Time propagation of the Gaussian wavepacket
Exact vs MLE

As expected, the Mittag-Leffler expansion (MLE) still diverges for long times, which is why it is not plotted beyond a certain time. It is nevertheless correct for \(t=0\) and rather good for very short times, as long as the wavepacket has not reached the border of region \(II\):

[9]:
time_grid_MLE = np.linspace(0, 0.1, 5)
siegerts.plot_propagation(gauss, time_grid_MLE, exact_Siegert=False, MLE=True)
_images/notebooks_time_propagation_initial_momentum_24_0.png
_images/notebooks_time_propagation_initial_momentum_24_1.png
_images/notebooks_time_propagation_initial_momentum_24_2.png
_images/notebooks_time_propagation_initial_momentum_24_3.png
_images/notebooks_time_propagation_initial_momentum_24_4.png
Exact vs Berggren expansion

We use a longer time_grid, because the Berggren expansion does not diverge as time increases (contrary to the MLE):

[10]:
time_grid = np.linspace(0, 1, 9)
siegerts.plot_propagation(gauss, time_grid, exact_Siegert=False, Berggren=True)
_images/notebooks_time_propagation_initial_momentum_26_0.png
_images/notebooks_time_propagation_initial_momentum_26_1.png
_images/notebooks_time_propagation_initial_momentum_26_2.png
_images/notebooks_time_propagation_initial_momentum_26_3.png
_images/notebooks_time_propagation_initial_momentum_26_4.png
_images/notebooks_time_propagation_initial_momentum_26_5.png
_images/notebooks_time_propagation_initial_momentum_26_6.png
_images/notebooks_time_propagation_initial_momentum_26_7.png
_images/notebooks_time_propagation_initial_momentum_26_8.png

The first important point concerning the Berggren expansion is that it expresses the initial wavepacket very accurately at \(t=0\). It is as if there were more than one correct Siegert states expansion for such a test function. This actually points to the overcompleteness of the Siegert states basis set, a point that has been extensively discussed in the literature. The Berggren expansion keeps a very good accuracy for all times, though it shows some very small deviations in the long time limit.

Also note that the momentum of the initial wavepacket is large enough for the propagation to be almost reflection-free at the border of region \(II\): the wavepacket is almost entirely transmitted out of region \(II\). This is why the y-axis range is not constrained after a certain time.

Exact and exact Siegert states expansion
[11]:
siegerts.plot_propagation(gauss, time_grid)
_images/notebooks_time_propagation_initial_momentum_29_0.png
_images/notebooks_time_propagation_initial_momentum_29_1.png
_images/notebooks_time_propagation_initial_momentum_29_2.png
_images/notebooks_time_propagation_initial_momentum_29_3.png
_images/notebooks_time_propagation_initial_momentum_29_4.png
_images/notebooks_time_propagation_initial_momentum_29_5.png
_images/notebooks_time_propagation_initial_momentum_29_6.png
_images/notebooks_time_propagation_initial_momentum_29_7.png
_images/notebooks_time_propagation_initial_momentum_29_8.png

The exact Siegert states expansion uses time- and state-dependent weights for the time-propagation of a wavepacket (its \(t=0\) limit is nothing but the Mittag-Leffler expansion). It gives excellent agreement with the exact result at all times, even the longer ones, contrary to the Berggren expansion.

Conclusion

The momentum of the initial state does not modify the conclusions of the previous notebooks: the Siegert states expansion presented in Santra et al., *PRA* **71** (2005) gives the best agreement with the reference result, whatever the test function and whatever the time (long or short).

We also presented a case where the overcompleteness of the Siegert states basis set is clearly seen: when the initial momentum is large, the Berggren expansion gives a very good agreement with the exact result.

Time propagation: error estimation of the Siegert states expansions

The time_propagation_initial_momentum.ipynb notebook showed the overcompleteness of the Siegert states basis set, where we saw that the Berggren expansion and the one presented in Santra et al., *PRA* **71** (2005) give very similar results.

The goal of this notebook is to assess the quality of the different Siegert states expansions when it comes to evaluating the time-propagation of an initial wavepacket. The same initial wave-packets as in the previous notebook will be used: the interest of the present notebook is only to quantify what was presented in the previous one.

Three different Siegert states expansions will be considered: the Mittag-Leffler expansion (MLE), the Berggren expansion and the exact Siegert states expansion (see notebook time_propagation.ipynb for more details on these expansions). They will be compared to the exact time-propagation using the continuum states.

Initialization
Import useful modules and classes
[ ]:
# Make the notebook aware of some of the SiegPy module classes
from siegpy import SWPBasisSet, Gaussian, SWPotential
# Other imports
import numpy as np
Define functions to estimate and show the errors

The absolute and relative errors of different Siegert states expansions with respect to the reference result using bound and continuum states are computed for various times.
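More precisely, for each time \(t\), the errors printed below correspond to the following integrals, evaluated by trapezoidal integration over the space grid in the code that follows:

\(\epsilon_{abs}(t) = \int \left| f_{exact}(x, t) - f_{S}(x, t) \right| \text{d}x , \qquad \epsilon_{rel}(t) = \epsilon_{abs}(t) \Big/ \int \left| f_{exact}(x, t) \right| \text{d}x\)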

[ ]:
def print_errors(exact, siegert_exp, dx):
    """
    Function printing the absolute and relative error of a given
    Siegert states expansion of the time propagation of a wave-packet
    for different times.

    :param exact: Exact time-propagation of a wavepacket.
    :type exact: 2D numpy.array
    :param siegert_exp: Siegert states expansion of the time-
                        propagation of the same wavepacket.
    :type siegert_exp: 2D numpy.array
    :param dx: Space grid-step.
    :type dx: float
    """
    # Print lines to initialize the table
    title = "  t    abs. error   rel. error"
    line = "-"*len(title)
    print(line)
    print(title)
    print(line)
    # Get the error estimations for each time (time_grid is taken from the notebook's global scope)
    for i, t in enumerate(time_grid):
        abs_err = np.trapz(np.abs(exact[i] - siegert_exp[i]), dx=dx)
        norm = np.trapz(np.abs(exact[i]), dx=dx)
        #abs_err = np.trapz(np.abs(exact[i] - siegert_exp[i])**2, dx=dx)
        #norm = np.trapz(np.abs(exact[i])**2, dx=dx)
        rel_err = abs_err / norm
        print("{:.2f}   {: .3e}   {: .3e}".format(t, abs_err, rel_err))
    # Print lines to finalize the table
    print(line)
    print()


def SSE_errors(filename, nres, k_max, h_k, nx, test_func, time_grid,
               exact=True, MLE=True, Ber=True):
    """
    Function performing the time-propagation of an initial wave-packet
    using different Siegert states expansions and printing the absolute
    and relative errors when compared to the exact time-propagation of
    this initial wavepacket.

    :param filename: Name of the file where to look for the Siegert states.
    :type filename: str
    :param nres: Number of resonant states to use in the Siegert states expansions.
    :type nres: int
    :param k_max: Wavenumber of the last continuum state involved in the exact calculations.
    :type k_max: float
    :param h_k: Wavenumber grid step of the continuum basis set.
    :type h_k: float
    :param nx: Number of space grid points.
    :type nx: int
    :param test_func: Initial wave-packet.
    :type test_func: Wavefunction
    :param time_grid: Times at which the propagated wavepacket is evaluated.
    :type time_grid: list or numpy.array
    :param exact: Optional, if True the errors of the exact Siegert expansion are printed.
    :type exact: bool
    :param MLE: Optional, if True the errors of the Mittag-Leffler expansion are printed.
    :type MLE: bool
    :param Ber: Optional, if True the errors of the Berggren expansion are printed.
    :type Ber: bool
    """
    # Siegert states basis set initialized from a file
    siegerts = SWPBasisSet.from_file(filename, nres=nres)

    # Read the potential from a data file
    potential = siegerts[0].potential
    l = potential.width

    # Define a space grid
    xgrid = np.linspace(-l/2, l/2, nx)
    dx = xgrid[1] - xgrid[0]

    # Discretize the Siegert states over the grid
    for s in siegerts:
        s.grid = xgrid

    # Exact basis set (named so that the `exact` flag is not overwritten)
    cont = SWPBasisSet.find_continuum_states(potential, k_max, h_k, grid=xgrid)
    exact_basis = cont + siegerts.bounds

    # Evaluation of the exact time propagation
    exact_tp = exact_basis.exact_propagation(test_func, time_grid)

    if exact:
        # Evaluation of the exact Siegert expansion of the time propagation
        exact_S_tp = siegerts.exact_Siegert_propagation(test_func, time_grid)
        # Find the error with respect to the exact time propagation
        print("         Exact Siegert")
        print_errors(exact_tp, exact_S_tp, dx)

    if MLE:
        # Evaluation of the Mittag-Leffler expansion of the time propagation
        MLE_tp = siegerts.MLE_propagation(test_func, time_grid)
        # Find the error with respect to the exact time propagation
        print("              MLE")
        print_errors(exact_tp, MLE_tp, dx)

    if Ber:
        # Evaluation of the Berggren expansion of the time propagation
        Ber_tp = siegerts.Berggren_propagation(test_func, time_grid)
        # Find the error with respect to the exact time propagation
        print("            Berggren")
        print_errors(exact_tp, Ber_tp, dx)
Define the filename of the data file
[ ]:
# Siegert states basis set initialized from a file
filename = 'siegerts.dat'
siegerts = SWPBasisSet.from_file(filename)
Case 1: initial wave-packet with a small initial momentum

The first initial wave-packet considered uses a small initial momentum.

Define the test function
[ ]:
l = siegerts.potential.width  # width of the potential
sigma = l/20.  # width of the Gaussian
x_c   = -l/4.  # center of the Gaussian
k_0   = 1.  # initial momentum
gauss_small_momentum = Gaussian(sigma, x_c, k0=k_0)
Estimation of the errors of the Siegert states expansions

As a starting point, let us use parameters allowing for a moderate accuracy of the error measurements. The Siegert states and the exact basis sets have a similar extension in wavenumber (\(|k_{res, max}| \approx k_{max}\)) and the space grid is made of a limited number of points.

[5]:
# Definition of the time grid
time_grid = [0.0, 0.25, 0.5, 0.75, 1.0, 2.0, 3.0]

SSE_errors(filename, 50, 40, 0.01, 201, gauss_small_momentum, time_grid)
         Exact Siegert
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    8.722e-06    1.566e-05
0.25    6.219e-07    5.567e-07
0.50    4.425e-07    3.656e-07
0.75    3.630e-07    3.132e-07
1.00    3.156e-07    2.864e-07
2.00    2.260e-07    2.090e-07
3.00    1.863e-07    1.684e-07
------------------------------

              MLE
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    8.722e-06    1.566e-05
0.25    6.329e-01    5.665e-01
0.50    2.953e+03    2.441e+03
0.75    9.029e+08    7.791e+08
1.00    3.302e+14    2.996e+14
2.00    5.546e+36    5.130e+36
3.00    9.162e+58    8.281e+58
------------------------------

            Berggren
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    1.575e-01    2.828e-01
0.25    7.557e-02    6.764e-02
0.50    4.717e-02    3.898e-02
0.75    3.292e-02    2.841e-02
1.00    2.455e-02    2.228e-02
2.00    1.082e-02    1.001e-02
3.00    6.360e-03    5.749e-03
------------------------------

Let us see the influence of using denser wavenumber and space grids, which should give more accurate results:

[6]:
SSE_errors(filename, 50, 40, 0.005, 1601, gauss_small_momentum, time_grid)
         Exact Siegert
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    8.562e-06    1.538e-05
0.25    6.024e-07    5.392e-07
0.50    4.262e-07    3.522e-07
0.75    3.481e-07    3.004e-07
1.00    3.016e-07    2.736e-07
2.00    2.136e-07    1.976e-07
3.00    1.746e-07    1.578e-07
------------------------------

              MLE
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    8.562e-06    1.538e-05
0.25    6.329e-01    5.665e-01
0.50    2.953e+03    2.440e+03
0.75    9.028e+08    7.791e+08
1.00    3.302e+14    2.996e+14
2.00    5.545e+36    5.130e+36
3.00    9.160e+58    8.280e+58
------------------------------

            Berggren
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    1.574e-01    2.828e-01
0.25    7.556e-02    6.764e-02
0.50    4.717e-02    3.897e-02
0.75    3.292e-02    2.840e-02
1.00    2.455e-02    2.228e-02
2.00    1.082e-02    1.001e-02
3.00    6.360e-03    5.749e-03
------------------------------

As you can see, the previous results were almost converged: the evaluation of the errors is only slightly modified by the densification of the grids (especially for the exact Siegert states expansion).

Combining the use of denser wavenumber and space grids with an increase of the range of the continuum states wavenumber grid (together with an increase of the number of resonant couples), the results should be even more accurate:

[7]:
SSE_errors(filename, 78, 60, 0.005, 1601, gauss_small_momentum, time_grid)
         Exact Siegert
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    8.542e-06    1.534e-05
0.25    6.023e-07    5.391e-07
0.50    4.261e-07    3.521e-07
0.75    3.481e-07    3.004e-07
1.00    3.016e-07    2.736e-07
2.00    2.136e-07    1.976e-07
3.00    1.746e-07    1.578e-07
------------------------------

              MLE
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    8.542e-06    1.534e-05
0.25    6.873e+01    6.153e+01
0.50    1.511e+11    1.248e+11
0.75    5.496e+20    4.743e+20
1.00    1.836e+30    1.665e+30
2.00    1.995e+68    1.845e+68
3.00    2.288e+106    2.069e+106
------------------------------

            Berggren
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    1.574e-01    2.828e-01
0.25    7.556e-02    6.764e-02
0.50    4.717e-02    3.897e-02
0.75    3.292e-02    2.840e-02
1.00    2.455e-02    2.228e-02
2.00    1.082e-02    1.001e-02
3.00    6.360e-03    5.749e-03
------------------------------

The errors of the Berggren and exact Siegert states expansions are the same as earlier, showing that the results are converged in terms of the extension of the basis sets. However, the MLE is even more wrong and diverges even more quickly; this is not a surprise, since more divergent terms are added to the sum.

We are now convinced that the error evaluation is sufficiently converged for our purpose. As expected, the results give the same conclusions as in the previous notebook:

  • It is clear that the MLE diverges in the long time limit.
  • The exact Siegert states expansion produces the best result, whatever the time.
  • The errors induced by the use of the Berggren expansion decrease over time, making it a rather good expansion in the long-time limit, but not as good as the exact Siegert states expansion.
Case 2: initial wave-packet with large initial momentum

We saw in the previous notebook that the Berggren expansion was very good for all times in the case of an initial wave-packet with a large initial momentum. What are the relative and absolute errors in that case?

Define the test function
[ ]:
k_0 = 20.  # initial momentum
gauss_large_momentum = Gaussian(sigma, x_c, k0=k_0)
Estimation of the errors of the Siegert states expansions

We start with the same parameters leading to moderate accuracy results:

[9]:
SSE_errors(filename, 50, 40, 0.01, 201, gauss_large_momentum, time_grid)
         Exact Siegert
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    1.441e-04    2.588e-04
0.25    5.216e-06    5.375e-05
0.50    4.181e-06    3.773e-04
0.75    3.731e-06    1.190e-03
1.00    3.338e-06    2.036e-03
2.00    2.074e-06    2.424e-03
3.00    1.822e-06    1.985e-03
------------------------------

              MLE
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    1.441e-04    2.588e-04
0.25    2.464e+02    2.539e+03
0.50    5.505e+06    4.969e+08
0.75    7.641e+11    2.438e+14
1.00    3.037e+17    1.852e+20
2.00    4.225e+39    4.939e+42
3.00    7.028e+61    7.657e+64
------------------------------

            Berggren
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    5.335e-04    9.582e-04
0.25    2.731e-04    2.814e-03
0.50    1.774e-04    1.601e-02
0.75    1.270e-04    4.053e-02
1.00    9.689e-05    5.911e-02
2.00    4.642e-05    5.427e-02
3.00    2.899e-05    3.158e-02
------------------------------

Using denser wavenumber and space grids, we show that the previous results could already be considered as almost converged with respect to those parameters:

[10]:
SSE_errors(filename, 50, 40, 0.005, 1601, gauss_large_momentum, time_grid)
         Exact Siegert
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    1.440e-04    2.586e-04
0.25    5.207e-06    5.365e-05
0.50    4.216e-06    3.806e-04
0.75    3.762e-06    1.201e-03
1.00    3.340e-06    2.037e-03
2.00    2.099e-06    2.454e-03
3.00    1.772e-06    1.931e-03
------------------------------

              MLE
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    1.440e-04    2.586e-04
0.25    2.464e+02    2.539e+03
0.50    5.505e+06    4.969e+08
0.75    7.639e+11    2.438e+14
1.00    3.036e+17    1.852e+20
2.00    4.225e+39    4.939e+42
3.00    7.027e+61    7.656e+64
------------------------------

            Berggren
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    5.332e-04    9.575e-04
0.25    2.731e-04    2.815e-03
0.50    1.774e-04    1.602e-02
0.75    1.270e-04    4.053e-02
1.00    9.690e-05    5.911e-02
2.00    4.643e-05    5.427e-02
3.00    2.899e-05    3.158e-02
------------------------------

The impact of these parameters (wavenumber and space grid densities) is smaller than in the case of an initial wavepacket with a smaller initial momentum. The previous results could already be considered as converged with respect to those parameters.

Finally, increasing the number of states while using the dense wavenumber and space grids leads to the following results:

[11]:
SSE_errors(filename, 78, 60, 0.005, 1601, gauss_large_momentum, time_grid)
         Exact Siegert
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    1.522e-05    2.733e-05
0.25    7.945e-07    8.187e-06
0.50    5.606e-07    5.060e-05
0.75    4.574e-07    1.460e-04
1.00    3.960e-07    2.416e-04
2.00    2.799e-07    3.271e-04
3.00    2.285e-07    2.489e-04
------------------------------

              MLE
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    1.522e-05    2.733e-05
0.25    2.767e+02    2.851e+03
0.50    1.249e+11    1.127e+13
0.75    4.538e+20    1.448e+23
1.00    1.517e+30    9.255e+32
2.00    1.649e+68    1.928e+71
3.00    1.892e+106    2.061e+109
------------------------------

            Berggren
------------------------------
  t    abs. error   rel. error
------------------------------
0.00    4.602e-04    8.264e-04
0.25    2.731e-04    2.815e-03
0.50    1.774e-04    1.602e-02
0.75    1.270e-04    4.052e-02
1.00    9.689e-05    5.910e-02
2.00    4.643e-05    5.428e-02
3.00    2.899e-05    3.158e-02
------------------------------

These parameters impact the quality of the error, especially for short times. This is not surprising, because the completeness relation for the initial wave-packet with a larger initial momentum converges at a larger wavenumber (see the previous notebook): the convergence with respect to the highest wavenumber of the basis set is more difficult to reach.

The results point to the same conclusions as in the first case, mainly showing that even if the Berggren expansion is in very good agreement at all times, the exact Siegert states expansion gives better results.

Conclusion

This concludes our study of the error estimation, showing that the exact Siegert states expansion truly is the most efficient when it comes to reproducing the time propagation of a wavepacket, whatever the time and the wavepacket.

For more details on the convergence of the error estimation with respect to the different parameters, you might consider going through the time_propagation_error_estimation_convergence.ipynb notebook.

1D Square-Well Potential: the numerical case

Numerical potential: bound and continuum states

The present notebook introduces how to find the eigenvalues of a numerical Hamiltonian using SiegPy. We will focus on the 1D Square-Well Potential (1DSWP) case, for comparisons with analytical results to be possible.

Initialization

The initialization only consists in importing all the necessary modules and classes and defining the potential to be used throughout the notebook.

Import classes and modules

You should be familiar with every import from SiegPy except the last three.

We’ll see that finding numerical eigenstates of a potential requires a Hamiltonian instance, the latter taking a coordinate mapping as argument to be initialized, hence the UniformCoordMap import (we will soon explain what we mean by that). BasisSet is the most general class defining a basis set in SiegPy. The SWPBasisSet class extensively used throughout the previous notebooks actually inherits from the BasisSet class.

[1]:
from siegpy import (SWPotential, SWPBasisSet, BasisSet,
                    Hamiltonian, UniformCoordMap)
# We also import numpy and matplotlib
import numpy as np
import matplotlib.pyplot as plt
Define a potential

We first read a basis set made of analytical Siegert states from a file, in order to easily compare our future numerical results with the analytical ones. We can then reuse the same potential, except that it has to be discretized over a grid.

[2]:
# Read the analytical basis set made of Siegert states
siegerts = SWPBasisSet.from_file("siegerts.dat")

# Find its potential, and discretize it over a grid
pot = siegerts.potential
l = pot.width
xmax = 7.5
xgrid = np.linspace(-xmax, xmax, 501)
pot.grid = xgrid
pot.plot()
_images/notebooks_find_numerical_bound_and_continuum_5_0.png
Finding numerical eigenstates

Finding numerical solutions of a Hamiltonian is done in two steps:

  • first, a Hamiltonian is defined in matrix form,
  • and then its eigenvalues and eigenvectors are found in order to create a BasisSet instance.
Define the Hamiltonian

The Hamiltonian is made of the sum of the kinetic and potential terms. In order to write it in matrix form, the Laplacian operator also has to be defined in matrix form.

This requires the use of a set of filters, i.e. a set of coefficients (see here for some finite difference filters up to order 8). For example, the finite difference filter of order 1 for the Laplacian reads [1, -2, 1], meaning that the second derivative of a function \(f\) at a given grid point \(x_n\) reads \(f''(x_n) \approx \frac{f(x_{n-1}) - 2 f(x_n) + f(x_{n+1})}{h^2}\). The default filters are more elaborate Daubechies wavelet filters (named Sym8_filters).
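As a quick illustration of that order-1 stencil (and not of how SiegPy builds its Hamiltonian internally, since Daubechies wavelet filters are used by default), it can be applied to a known function with plain numpy:

# Apply the [1, -2, 1] stencil to f(x) = sin(x) on the grid defined above
h = xgrid[1] - xgrid[0]
f = np.sin(xgrid)
laplacian_f = (np.roll(f, 1) - 2*f + np.roll(f, -1)) / h**2
# Away from the grid edges, this approximates f''(x) = -sin(x):
print(np.max(np.abs(laplacian_f[1:-1] + np.sin(xgrid[1:-1]))))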

The main goal of the SiegPy module is to study the Siegert states of a given potential, and this requires the use of a Complex Scaling (CS) method, consisting of a given coordinate mapping \(F\) such that the usual position \(x\) is transformed into \(F(x)\). A Hamiltonian therefore requires a coordinate mapping as argument. However, we are only interested in finding the usual solution of the Hamiltonian, i.e. we want \(F(x) = x\). This is easily achieved by taking a so-called uniform coordinate mapping \(F(x) = x e^{i \theta}\) with \(\theta = 0\). Such a coordinate mapping is defined in the first line of the following cell.

Notes:

  • The Laplacian filter is not the only one used: the gradient filter is also often required to define the virial (see below) and Hamiltonian operators, hence the plural “filters” used above. The wavelet filters also require a so-called magic filter.
  • Even though Daubechies wavelets filters are used as default, finite difference filters of order 2 and 8 are also provided in SiegPy (they are named FD2_filters and FD8_filters respectively). To use them, you only need to import them from SiegPy and specify the filters optional argument of the Hamiltonian class.
  • More filters can easily be created thanks to the Filters and WaveletFilters classes (imported with the command from siegpy.filters import Filters, WaveletFilters)
[3]:
# Initialize the coordinate mapping $F: x \mapsto x$
cm = UniformCoordMap(0)

# Initialize the Hamiltonian
ham = Hamiltonian(pot, cm)
# To use another set of filters, uncomment the following lines:
# from siegpy import FD2_filters
# ham = Hamiltonian(pot, cm, filters=FD2_filters)
Solve the Hamiltonian

You then only need to apply the solve method to the Hamiltonian instance ham, and this returns a BasisSet instance. It doesn’t get any simpler than this!

[4]:
basis = ham.solve()
type(basis)
[4]:
siegpy.basisset.BasisSet

The basis set is made of Eigenstate instances. For example, the first state corresponds to the bound state with the lowest energy:

[5]:
first_state = basis[0]
print(type(first_state))
print(first_state.energy)
first_state.plot()
<class 'siegpy.eigenstates.Eigenstate'>
(-9.796086879584973+0j)
_images/notebooks_find_numerical_bound_and_continuum_12_1.png

The basis set is made of the same number of states as the dimension of the Hamiltonian matrix, that is the number of grid points.

[6]:
len(basis) == len(ham.matrix) == len(xgrid)
[6]:
True

The states are divided in two categories: the bound and continuum states:

[7]:
len(basis.bounds), len(basis.continuum)
[7]:
(7, 494)

The first category is made of the states with negative energy, while the other is made of the states of positive energy.
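
This can be checked directly; a small sketch, reusing the basis variable defined above:

# Bound states should all have a negative real energy,
# continuum states a positive one
print(all(state.energy.real < 0 for state in basis.bounds))
print(all(state.energy.real > 0 for state in basis.continuum))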

Create plots of the basis set

We can plot the energies of the basis set. The energy range is limited here to show that the spectrum of continuum states cannot be as regular as it was in the analytical case:

[8]:
basis.plot_energies(xlim=(-10, 10))
_images/notebooks_find_numerical_bound_and_continuum_19_0.png

A virial value is attributed to each state in the basis set. This will allow us to discriminate Siegert states from the other types of states (the lower the virial, the more likely the state can be considered a Siegert state). You can already see that the bound states have virial values orders of magnitude below those of the continuum states (see the short example after the notes below).

Notes:

  • The eigenstates in the basis set are sorted by increasing virial value.
  • See here for a reference on how the virial operator is defined.
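
For instance, the virial expectation values of the first few states can be printed directly (a short sketch; the virial attribute of the eigenstates is also used later in this tutorial):

for state in basis[:5]:
    print(state.energy, state.virial)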

A broader view is given here, where the wavenumbers of all the states are plotted:

[9]:
basis.plot_wavenumbers()
_images/notebooks_find_numerical_bound_and_continuum_22_0.png

The continuum states wavenumbers extend to large values along the real axis. It is interesting to note that the virial values of all the continuum states are orders of magnitude above those of the bound states: the expectation values of the virial operator allow one to separate Siegert states from other types of states. This virial operator actually measures whether the position-momentum commutator, when applied to a given eigenstate \(\varphi\), satisfies \(\left\langle \varphi | [\hat{X}, \hat{P}] | \varphi \right\rangle = i \left\langle \varphi | \varphi \right\rangle\). This equality is verified numerically for bound states, but not for continuum states.

The wavefunctions can also be plotted. You can set the number of states to plot with the nstates optional argument. You can also plot the potential by passing it as argument:

[10]:
basis.plot_wavefunctions(nstates=9)
_images/notebooks_find_numerical_bound_and_continuum_24_0.png

Here, the first 9 states correspond to the seven bound states of the potential and the first two continuum states, which are almost degenerate.

Comparison with the analytical bound states

The numerical and analytical bound states can be easily compared:

[11]:
# Create a basis set made of the numerical bound states only
numerical_bounds = basis.bounds
# Create a basis set made of the analytical bound states only
exact_bounds = siegerts.bounds
exact_bounds.grid = xgrid  # Use the same grid
[12]:
# Loop over both basis sets to plot the bound state wavefunctions
# and compare their energies
for i in range(len(exact_bounds)):
    title = "Num. energy = {0.real}\nExact energy = {1.real}"\
            .format(numerical_bounds[i].energy,
                    exact_bounds[i].energy)
    plt.plot(numerical_bounds[i].grid,
             np.real(numerical_bounds[i].values), label="Num. WF")
    plt.plot(exact_bounds[i].grid,
             np.real(exact_bounds[i].values), label="Exact WF")
    plt.legend()
    plt.title(title)
    plt.show()
_images/notebooks_find_numerical_bound_and_continuum_28_0.png
_images/notebooks_find_numerical_bound_and_continuum_28_1.png
_images/notebooks_find_numerical_bound_and_continuum_28_2.png
_images/notebooks_find_numerical_bound_and_continuum_28_3.png
_images/notebooks_find_numerical_bound_and_continuum_28_4.png
_images/notebooks_find_numerical_bound_and_continuum_28_5.png
_images/notebooks_find_numerical_bound_and_continuum_28_6.png

Except for a phase factor for some bound states, the numerical bound state wavefunctions coincide with the analytical ones. The energies are also in good agreement. Note that the agreement is worse for the highest energy bound states. This is due to the fact that the grid is finite: extending it to larger values (i.e., increasing xmax) and increasing the density of grid points would lead to an even better correspondence (and lower virials).
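
A rough sketch of such a convergence study is given below; the grid values are purely illustrative:

# Illustrative only: a wider and denser grid should improve the agreement
xgrid_fine = np.linspace(-10, 10, 1001)
pot.grid = xgrid_fine
basis_fine = Hamiltonian(pot, UniformCoordMap(0)).solve()
for state in basis_fine.bounds:
    print(state.energy)
# Restore the grid used in the rest of this notebook
pot.grid = xgrid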

Numerical continuum states

Let us plot the lowest virial continuum states:

[13]:
numerical_continuum = basis.continuum
for state in numerical_continuum[:10]:
    title = "Num. energy = {0.real:.3f}".format(state.energy)
    state.plot(title=title)
_images/notebooks_find_numerical_bound_and_continuum_31_0.png
_images/notebooks_find_numerical_bound_and_continuum_31_1.png
_images/notebooks_find_numerical_bound_and_continuum_31_2.png
_images/notebooks_find_numerical_bound_and_continuum_31_3.png
_images/notebooks_find_numerical_bound_and_continuum_31_4.png
_images/notebooks_find_numerical_bound_and_continuum_31_5.png
_images/notebooks_find_numerical_bound_and_continuum_31_6.png
_images/notebooks_find_numerical_bound_and_continuum_31_7.png
_images/notebooks_find_numerical_bound_and_continuum_31_8.png
_images/notebooks_find_numerical_bound_and_continuum_31_9.png

In general, the lower the virial value, the lower the continuum state energy, but there may be some exceptions. Just like the analytical continuum states, they exhibit a constant oscillation amplitude in each region, the amplitudes being the same in regions \(I\) and \(III\).

Notes:

  • The oscillations of the wavefunctions in regions \(I\) and \(III\) (outside of the potential) are associated with a particle-in-a-box behaviour: there is an increasing number of sinusoidal oscillations in these regions (for a given number of nodes in region \(II\)). For example, the first and third continuum states are very similar in region \(II\), but not in regions \(I\) and \(III\), where the latter wavefunction exhibits one more node in each of these regions. This comes from the fact that the grid is finite: it is as if the finite square-well potential were embedded in a larger, infinite square-well potential.
  • Numerical continuum states come in pairs, but they are not degenerate. See, for instance, the first and second states.
Conclusion

This notebook focused on presenting how to define a numerical Hamiltonian and how to solve it to find bound and continuum states. This requires the definition of a coordinate mapping \(F\), which is simply \(F: x \mapsto x\) here. More refined coordinate mappings will be presented in other notebooks, making it possible to find numerical resonant states of a numerical potential.

We saw that the methods used to plot the energies, wavenumbers and wavefunctions in the analytical case can still be used.

Finally, the bound states found are comparable to the analytical ones.

Completeness relation using numerical bound and continuum states

The completeness relation (CR) of a numerical basis set made of bound and continuum states will be studied, just as in the analytical case.

Two test functions will be used: one with and one without an initial momentum.

Initialization

The outline is not very different from the analytical case: one needs to import some modules and classes, create a basis set (here, made of numerical eigenstates of a Hamiltonian), define a test function and then use the plot_completeness_convergence method of the basis set.

Import modules and classes

You should now be acquainted with all the SiegPy classes required for this notebook:

[1]:
from siegpy import (SWPotential, SWPBasisSet, Gaussian,
                    Hamiltonian, UniformCoordMap, BasisSet)
# We also import numpy and matplotlib
import numpy as np
import matplotlib.pyplot as plt
Define a potential
[2]:
# Read the analytical basis set made of Siegert states
siegerts = SWPBasisSet.from_file("siegerts.dat")

# Find its potential, and discretize it over a grid
pot = siegerts.potential
l = pot.width
xmax = 7.5
xgrid = np.linspace(-xmax, xmax, 501)
pot.grid = xgrid
pot.plot()
_images/notebooks_completeness_relation_numerical_bound_and_continuum_5_0.png
Solve a Hamiltonian to create a basis set
[3]:
# Initialize the coordinate mapping $F: x \mapsto x$
cm = UniformCoordMap(0)

# Initialize the Hamiltonian
ham = Hamiltonian(pot, cm)

# Solve it to create a basis set
basis = ham.solve()
Test function without initial momentum

The completeness relation of the numerical basis set made of bound and continuum states will first be tested against a test function without an initial momentum.

Definition of the test function
[4]:
g = Gaussian(l/12, 0, grid=xgrid)
g.plot()
_images/notebooks_completeness_relation_numerical_bound_and_continuum_10_0.png
Plot the convergence of the completeness relation

The convergence of the CR can be plotted using the plot_completeness_convergence method. The range of wavenumbers can be limited by setting the klim optional argument, as in the cell below.

[5]:
basis.plot_completeness_convergence(g, klim=(0, 20))
_images/notebooks_completeness_relation_numerical_bound_and_continuum_12_0.png
Comparison with the analytical results

The data used to create the previous plot can be retrieved by applying the completeness_convergence method. The analytical result is obtained in the same manner (using the analytical basis set created from a file at the beginning of the notebook), for both analytical and numerical results to be compared. The convergence of the analytical Mittag-Leffler expansion (MLE) is also reported, as a reminder.

[6]:
# Use all the numerical eigenstates
kgrid_num, CR_conv_num = basis.completeness_convergence(g)
# Use only a limited number of Siegert states to find the analytical MLE of the CR
kgrid_MLE, CR_conv_MLE = siegerts.MLE_completeness_convergence(g, nres=25)
# Use only a limited range of continuum states for the exact CR
kgrid_exact, CR_conv_exact = siegerts.exact_completeness_convergence(g, hk=0.05, kmax=20)
# Set the plot
plt.axhline(1, lw=1.5, color='k')
plt.plot(kgrid_exact, np.real(CR_conv_exact), label='Exact', color='#d73027')
plt.plot(kgrid_num, np.real(CR_conv_num), label='Num.', ls='', marker='.', ms=10, color='k')
plt.plot(kgrid_MLE, np.real(CR_conv_MLE), label='MLE', ls='', marker='.', ms=10, color='b')
plt.xlabel("$k$")
plt.ylabel("$CR$")
plt.xlim(0, 20)
plt.legend()
plt.show()
_images/notebooks_completeness_relation_numerical_bound_and_continuum_14_0.png

The numerical results are comparable to the analytical ones. Note that it requires more continuum than resonant states to reach the completeness of the basis set for this test function.

Note:

Using a smaller grid step would increase the density of continuum states above the threshold energy (i.e., 0).
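
A hedged sketch of such a test (the number of grid points is arbitrary; this reuses the pot, xmax and xgrid variables defined above):

# Same box, smaller grid step: more continuum states are obtained
xgrid_dense = np.linspace(-xmax, xmax, 1001)
pot.grid = xgrid_dense
basis_dense = Hamiltonian(pot, UniformCoordMap(0)).solve()
print(len(basis_dense.continuum))
# Restore the original grid before going on with the notebook
pot.grid = xgrid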

Test function with initial momentum

The second test function is the same as the previous one, except for the initial momentum, which is now non-zero. The test function therefore becomes complex.

Definition of the test function
[7]:
g_k0 = Gaussian(l/12, 0, k0=10, grid=xgrid)
g_k0.plot()
_images/notebooks_completeness_relation_numerical_bound_and_continuum_18_0.png
Comparison with the analytical results

Let us directly compare the numerical and analytical results:

[8]:
# Use only a limited number of Siegert states to find the analytical MLE of the CR
kgrid_MLE, CR_conv_MLE = siegerts.MLE_completeness_convergence(g_k0, nres=25)
# Use only a limited range of continuum states for the exact CR
kgrid_exact, CR_conv_exact = siegerts.exact_completeness_convergence(g_k0, hk=0.05, kmax=20)
# Use all the numerical eigenstates
kgrid_num, CR_conv_num = basis.completeness_convergence(g_k0)
# Set the plot
plt.axhline(1, lw=1.5, color='k')
plt.plot(kgrid_exact, np.real(CR_conv_exact), label='Exact', color='#d73027')
plt.plot(kgrid_num, np.real(CR_conv_num), label='Num.', ls='', marker='.', ms=10, color='k')
plt.plot(kgrid_MLE, np.real(CR_conv_MLE), label='MLE', ls='', marker='.', ms=10, color='b')
plt.xlabel("$k$")
plt.ylabel("$CR$")
plt.xlim(0, 20)
plt.legend()
plt.show()
_images/notebooks_completeness_relation_numerical_bound_and_continuum_20_0.png

Again, the analytical and numerical completeness relations are comparable, even for a complex test function.

Conclusion

The completeness of a numerical basis set is studied in a very similar manner to that of an analytical basis set: the same procedure, if not the same objects and methods, is used in both cases.

This allows us to show that the numerical continuum states (which actually form a discrete basis set) reproduce the analytical results.

Numerical potential: Siegert states

The present notebook introduces how to find the Siegert states of a numerical Hamiltonian using SiegPy. We will still focus on the 1D Square-Well Potential (1DSWP) case, so that comparisons with analytical results are possible. It is very similar to finding the numerical bound and continuum states, as presented in a previous notebook: the main difference is the use of another coordinate mapping.

Initialization

Again, the initialization consists in importing some useful classes and modules and defining a potential.

Import classes and modules

The only modification with respect to the previous notebook is the use of a different coordinate mapping, namely ErfKGCoordMap instead of UniformCoordMap. For now, it is enough to say that this type of coordinate mapping is more efficient at finding Siegert states.

[1]:
from siegpy import (SWPotential, SWPBasisSet, BasisSet,
                    Hamiltonian, ErfKGCoordMap)
# We also import numpy and matplotlib
import numpy as np
import matplotlib.pyplot as plt
Define a potential

We use the same potential as in the previous notebook.

[2]:
# Read the analytical basis set made of Siegert states
siegerts = SWPBasisSet.from_file("siegerts.dat")

# Find its potential, and discretize it over a grid
pot = siegerts.potential
l = pot.width
xmax = 7.5
xgrid = np.linspace(-xmax, xmax, 501)
pot.grid = xgrid
pot.plot()
_images/notebooks_find_numerical_Siegert_5_0.png
Finding numerical eigenstates

Finding numerical Siegert states of a Hamiltonian is done in the same manner as in the previous notebook, namely:

  • first, a Hamiltonian is defined in matrix form,
  • and then its eigenvalues and eigenvectors are found in order to create a BasisSet instance.
Define the Hamiltonian

As pointed out earlier, the main modification in this notebook is the use of another type of complex scaling. Instead of simply using a uniform complex scaling of the space variable (\(x \mapsto x e^{i \theta}\), with \(\theta > 0\)), it has been shown that using a (smooth) exterior complex scaling is a better solution to find high-quality numerical Siegert states. The idea is simple: an inner region is left unscaled, while the complex scaling is (smoothly) turned on in the outer regions. This also has the advantage of leaving the inner potential unscaled.

The coordinate mapping ErfKGCoordMap used in this notebook is one of several such mappings: it is of the type \(x \mapsto (x \pm x_0) e^{i \theta g(x)} \mp x_0\), with \(g\) being a function smoothly going from 0 to 1 around \(\mp x_0\). In this case, \(g: x \mapsto 1 + \frac{1}{2} \left[ \text{erf}(\lambda (x-x_0)) - \text{erf}(\lambda (x+x_0))\right]\).
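
To visualize the switching function \(g\), here is a standalone sketch based on the formula above (not on SiegPy internals), using the parameter values of the next cells:

import numpy as np
import matplotlib.pyplot as plt
from scipy.special import erf

# Smooth switching function g(x): 0 in the inner region, going to 1 in the outer regions
x = np.linspace(-7.5, 7.5, 501)
x0, lbda = 6.0, 1.5
g = 1 + 0.5 * (erf(lbda * (x - x0)) - erf(lbda * (x + x0)))
plt.plot(x, g)
plt.xlabel("$x$")
plt.ylabel("$g(x)$")
plt.show()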

Other types of smooth exterior coordinate mappings provided in the SiegPy module are TanhKGCoordMap, ErfSimonCoordMap and TanhSimonCoordMap (see SiegPy documentation for more details).

[3]:
# Initialize a coordinate mapping
x0 = 6.0
lbda = 1.5
cm = ErfKGCoordMap(0.6, x0, lbda)

# Initialize the Hamiltonian with the potential and the coordinate mapping
ham = Hamiltonian(pot, cm)
Solving the Hamiltonian

For the moment, nothing new here: you can easily solve the Hamiltonian to create a basis set:

[4]:
basis = ham.solve()

You can then plot the energies, wavenumbers and wavefunctions as usual:

[5]:
basis.plot_energies()
_images/notebooks_find_numerical_Siegert_12_0.png

The spectrum is more complex, but, as you can see, there are lots of states that have a low virial value. They are located near the origin. By zooming in, we get the following picture:

[6]:
basis.plot_energies(xlim=(-100, 3500), ylim=(-150, 5))
_images/notebooks_find_numerical_Siegert_14_0.png

The states with the lowest virial seem to be bound and resonant states. This is confirmed by zooming in even more closely:

[7]:
basis.plot_energies(xlim=(-10, 20), ylim=(-4.25, 0.25))
_images/notebooks_find_numerical_Siegert_16_0.png

However, if you plot the wavefunctions while limiting the number of resonant states via the nres optional argument (as usual), you get nothing:

[8]:
basis.plot_wavefunctions(nres=2)
_images/notebooks_find_numerical_Siegert_18_0.png

This is because we did not provide a maximal virial value to sort the Siegert states from the rest of the eigenstates: all states have an unknown type:

[9]:
len(basis.unknown)
[9]:
501

In this case, it is possible to specify the nstates optional argument, which plots the first states of the basis set (i.e., those with the lowest virial):

[10]:
basis.plot_wavefunctions(nstates=9)
_images/notebooks_find_numerical_Siegert_22_0.png

The states with the lowest virial values correspond to the bound and resonant states of lowest energies in this basis set (but it might not always be the case, especially if the grid is not dense enough or if the coordinate mapping parameters are not well chosen). Note that the resonant state wavefunctions tend to zero at the edges of the simulation box: this is due to the smooth exterior complex scaling, which compensates for the diverging wavefunctions, making the numerical resonant states square-integrable.

Solving the Hamiltonian: virial filtering

We saw that the Siegert states have the lowest virial values in the basis set. It might therefore be interesting to define a maximal virial to sort the Siegert states from the rest of the states (of unknown type). This can be set while solving the Hamiltonian by providing a positive value to the max_virial argument:

[11]:
basis_filtered = ham.solve(max_virial=3*10**(-6))

This allows one to separate bound and resonant states from the rest of the states:

[12]:
len(basis_filtered.bounds), len(basis_filtered.resonants), len(basis_filtered.unknown)
[12]:
(7, 30, 464)

The plot of the wavenumbers or energies then only represents the Siegert states if the show_unknown optional argument is set to False:

[13]:
basis_filtered.plot_energies(show_unknown=False)
_images/notebooks_find_numerical_Siegert_29_0.png

To plot all the other states, you must set the optional argument show_unknown of plot_energies or plot_wavenumbers to True.

Regarding the plot_wavefunctions, using the nres optional argument gives the expected result:

[14]:
basis_filtered.plot_wavefunctions(nres=2)
_images/notebooks_find_numerical_Siegert_32_0.png
Comparison of the numerical Siegert states with the analytical ones

Only bound and resonant states were found with the current complex scaling (i.e., the anti-bound and anti-resonant states are missing), and we will compare their wavefunctions and energies with the analytical ones.

Bound states

Let us first compare the numerical bound states to the analytical ones:

[15]:
# Discretize the analytical siegert states over the same grid as the numerical ones
siegerts.grid = xgrid
# Plot the analytical and numerical bound states
for i in range(7):
    analytical_bnd = siegerts.bounds[i]
    numerical_bnd = basis_filtered.bounds[i]
    title = "Num. energy = {0.real}\nExact energy = {1.real}"\
            .format(numerical_bnd.energy, analytical_bnd.energy)
    plt.plot(analytical_bnd.grid, np.real(analytical_bnd.values), label="Re[$WF_{analytical}$]")
    plt.plot(analytical_bnd.grid, np.imag(analytical_bnd.values), label="Im[$WF_{analytical}$]")
    plt.plot(numerical_bnd.grid,np.real(numerical_bnd.values), label="Re[$WF_{numerical}$]")
    plt.plot(numerical_bnd.grid,np.imag(numerical_bnd.values), label="Im[$WF_{numerical}$]")
    plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
    plt.title(title)
    plt.show()
_images/notebooks_find_numerical_Siegert_35_0.png
_images/notebooks_find_numerical_Siegert_35_1.png
_images/notebooks_find_numerical_Siegert_35_2.png
_images/notebooks_find_numerical_Siegert_35_3.png
_images/notebooks_find_numerical_Siegert_35_4.png
_images/notebooks_find_numerical_Siegert_35_5.png
_images/notebooks_find_numerical_Siegert_35_6.png

The energies are almost the same as the ones obtained numerically in the previous notebook (without a coordinate mapping), the main difference occurring for the bound state of highest energy: this state exhibits a longer exponential tail, which is affected by the smooth exterior coordinate mapping. This is actually the case for all the other states, but the lower the bound state energy (i.e., the shorter the exponential tail), the smaller the influence of the coordinate mapping.

Resonant states

The resonant states obtained can also be compared to the analytical resonant states:

[16]:
# Loop over the first three resonant states
for i in range(3):
    analytical_res = siegerts.resonants[i]
    numerical_res = basis_filtered.resonants[i]
    title = "Num. energy = {0:.3f}\nExact energy = {1:.3f}"\
            .format(numerical_res.energy, analytical_res.energy)
    plt.plot(analytical_res.grid, np.real(analytical_res.values), label="Re[$WF_{analytical}$]")
    plt.plot(analytical_res.grid, np.imag(analytical_res.values), label="Im[$WF_{analytical}$]")
    plt.plot(numerical_res.grid, -np.real(numerical_res.values), label="Re[$- WF_{numerical}$]")
    plt.plot(numerical_res.grid, -np.imag(numerical_res.values), label="Im[$- WF_{numerical}$]")
    plt.title(title)
    plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
    plt.show()
_images/notebooks_find_numerical_Siegert_39_0.png
_images/notebooks_find_numerical_Siegert_39_1.png
_images/notebooks_find_numerical_Siegert_39_2.png

The wavefunctions of the numerical resonant states are the same as the analytical ones (save for an irrelevant minus sign): their shape and normalization are well reproduced in the unscaled region. The main differences come from the energies, even though they are consistent with the analytical results, and from the exponential divergence of the wavefunction, which is “absorbed” by the coordinate mapping.

Conclusion

This notebook presented how to find numerical Siegert states (namely, bound and resonant states). This was possible thanks to the application of a smooth exterior complex scaling. The expectation value of the virial operator has been presented as an important quantity allowing one to separate numerical Siegert states from the rest of the numerical eigenstates. It was shown that the bound and resonant states found numerically are comparable to the analytical ones (in terms of energy and wavefunction, including their normalization).

Completeness relation with numerical Siegert states

The different types of states of the numerical basis set have been presented in the previous notebook; let us now use them to study the completeness relation (CR). The same two tests as in the notebook about the completeness of a basis set made of bound and continuum states will be presented: a Gaussian test function will be used, with and without an initial momentum.

Initialization

There should be nothing new in this section.

Import classes and modules
[1]:
from siegpy import (SWPotential, SWPBasisSet, Gaussian,
                    Hamiltonian, ErfKGCoordMap, BasisSet)
# We also import numpy and matplotlib
import numpy as np
import matplotlib.pyplot as plt
Define a potential
[2]:
# Read the analytical basis set made of Siegert states
siegerts = SWPBasisSet.from_file("siegerts.dat")

# Find its potential, and discretize it over a grid
pot = siegerts.potential
l = pot.width
xmax = 7.5
xgrid = np.linspace(-xmax, xmax, 501)
pot.grid = xgrid
pot.plot()
_images/notebooks_completeness_relation_numerical_Siegert_5_0.png
Solve a Hamiltonian to create a basis set
[3]:
# Initialize a coordinate mapping
x0 = 6.0
lbda = 1.5
cm = ErfKGCoordMap(0.6, x0, lbda)

# Initialize the Hamiltonian with the potential and the coordinate mapping
ham = Hamiltonian(pot, cm)

# Solve the Hamiltonian
basis_filtered = ham.solve(max_virial=3*10**(-6))
Test function without an initial momentum

We start by defining the test function:

Definition of the test function
[4]:
g = Gaussian(l/12, 0, grid=xgrid)
g.plot()
_images/notebooks_completeness_relation_numerical_Siegert_10_0.png
Comparison with the analytical Berggren expansion

Only the bound and resonant states are found numerically, so the numerical results cannot be compared directly with the analytical Mittag-Leffler expansion of the CR. However, the Berggren_completeness_convergence method makes it possible to compare them with the analytical Berggren expansion of the CR. We use the exact CR, computed with the analytical bound and continuum states, as a reference.

[5]:
# Compute the Berggren expansion completeness convergence numerically
kgrid_Ber_num, CR_conv_Ber_num = basis_filtered.Berggren_completeness_convergence(g)
# Compute the Berggren expansion completeness convergence analytically
kgrid_Ber_an, CR_conv_Ber_an = siegerts.Berggren_completeness_convergence(g, nres=25)
# Use only a limited range of continuum states for the exact CR
kgrid_exact, CR_conv_exact = siegerts.exact_completeness_convergence(g, hk=0.05, kmax=20)
# Set the plot of the real part of the CR
plt.plot(kgrid_exact, np.real(CR_conv_exact), label='Exact', color='#d73027')
plt.plot(kgrid_Ber_an, np.real(CR_conv_Ber_an), label='An. Berggren', ls='', marker='s', ms=5, color='g')
plt.plot(kgrid_Ber_num, np.real(CR_conv_Ber_num), label='Num. Berggren', ls='', marker='.', ms=10, color='g')
plt.xlabel("$k$")
plt.ylabel("Re[$CR$]")
plt.xlim(0, 20)
plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
plt.show()
# Set the plot of the imaginary part of the CR
plt.plot(kgrid_exact, np.imag(CR_conv_exact), label='Exact', color='#d73027')
plt.plot(kgrid_Ber_an, np.imag(CR_conv_Ber_an), label='An. Berggren', ls='', marker='s', ms=5, color='g')
plt.plot(kgrid_Ber_num, np.imag(CR_conv_Ber_num), label='Num. Berggren', ls='', marker='.', ms=10, color='g')
plt.xlabel("$k$")
plt.ylabel("Im[$CR$]")
plt.xlim(0, 20)
plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
plt.show()
_images/notebooks_completeness_relation_numerical_Siegert_12_0.png
_images/notebooks_completeness_relation_numerical_Siegert_12_1.png

As expected, the Berggren expansion of the completeness relation gives a complex result and does not reproduce the analytical exact result. Still, the agreement with the analytical Berggren expansion is very good: this quantifies the quality of the numerical Siegert states compared to the analytical ones.

The question now is: what is the influence of the states of unknown type on the completeness relation? This question can be answered by applying the completeness_convergence method to the filtered basis set (where the contribution of all the states of unknown type is added to the bound state contributions of the CR; this only amounts to a translation of the Berggren expansion of the CR). This result is then compared with the analytical exact CR and the analytical MLE of the CR:

[6]:
# Use only a limited number of Siegert states to find the analytical MLE of the CR
kgrid_MLE, CR_conv_MLE = siegerts.MLE_completeness_convergence(g, nres=25)
# Use all the numerical eigenstates of low virial
kgrid_num, CR_conv_num = basis_filtered.completeness_convergence(g)
# Set the plot of the real part of the CR
plt.plot(kgrid_exact, np.real(CR_conv_exact), label='Exact', color='#d73027')
plt.plot(kgrid_MLE, np.real(CR_conv_MLE), label='MLE', ls='', marker='s', ms=7, color='b')
plt.plot(kgrid_num, np.real(CR_conv_num), label='Num. all states', ls='', marker='.', ms=10, color='k')
plt.xlabel("$k$")
plt.ylabel("$CR$")
plt.xlim(0, 20)
plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
plt.show()
# Set the plot of the imaginary part of the CR
plt.plot(kgrid_exact, np.imag(CR_conv_exact), label='Exact', color='#d73027')
plt.plot(kgrid_MLE, np.imag(CR_conv_MLE), label='MLE', ls='', marker='s', ms=7, color='b')
plt.plot(kgrid_num, np.imag(CR_conv_num), label='Num. all states', ls='', marker='.', ms=10, color='k')
plt.xlabel("$k$")
plt.ylabel("Im[$CR$]")
plt.xlim(0, 20)
plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
plt.show()
_images/notebooks_completeness_relation_numerical_Siegert_14_0.png
_images/notebooks_completeness_relation_numerical_Siegert_14_1.png

The numerical results very nicely fit the analytical MLE of the CR! It is as if the states of unknown type gave the remaining contribution leading to the Mittag-Leffler Expansion of the CR. For instance, see how the imaginary part of all the non-resonant states exactly compensates the imaginary part of the resonant states’ contributions.

The question now is: which states of unknown type account for this compensation?

One could first think of the states of unknown type with the lowest virial:

[7]:
basis_lower_filter_1 = BasisSet(states=[s for s in basis_filtered if np.abs(s.virial) < 8*10**(-6)])
len(basis_lower_filter_1), len(basis_lower_filter_1.resonants)
[7]:
(42, 30)

The basis set is now smaller, and the extra states correspond to states that we could already easily consider as resonant states:

[8]:
basis_lower_filter_1.plot_energies()
_images/notebooks_completeness_relation_numerical_Siegert_18_0.png

Let us look at the impact on the completeness relation of these extra resonant states:

[9]:
# Use all the numerical eigenstates of low virial
kgrid_num_lower_1, CR_conv_num_lower_1 = basis_lower_filter_1.completeness_convergence(g)
# Set the plot of the real part of the CR
plt.plot(kgrid_exact, np.real(CR_conv_exact), label='Exact', color='#d73027')
plt.plot(kgrid_MLE, np.real(CR_conv_MLE), label='MLE', ls='', marker='s', ms=7, color='b')
plt.plot(kgrid_Ber_num, np.real(CR_conv_Ber_num), label='Num. Berggren', ls='', marker='.', ms=10, color='g')
plt.plot(kgrid_Ber_an, np.real(CR_conv_Ber_an), label='An. Berggren', ls='', marker='s', ms=5, color='g')
plt.plot(kgrid_num_lower_1, np.real(CR_conv_num_lower_1), label='Num. all states', ls='', marker='.', ms=10, color='k')
plt.xlabel("$k$")
plt.ylabel("$CR$")
plt.xlim(0, 20)
plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
plt.show()
# Set the plot of the imaginary part of the CR
plt.plot(kgrid_exact, np.imag(CR_conv_exact), label='Exact', color='#d73027')
plt.plot(kgrid_MLE, np.imag(CR_conv_MLE), label='MLE', ls='', marker='s', ms=7, color='b')
plt.plot(kgrid_Ber_num, np.imag(CR_conv_Ber_num), label='Num. Berggren', ls='', marker='.', ms=10, color='g')
plt.plot(kgrid_Ber_an, np.imag(CR_conv_Ber_an), label='An. Berggren', ls='', marker='s', ms=5, color='g')
plt.plot(kgrid_num_lower_1, np.imag(CR_conv_num_lower_1), label='Num. all states', ls='', marker='.', ms=10, color='k')
plt.xlabel("$k$")
plt.ylabel("Im[$CR$]")
plt.xlim(0, 20)
plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
plt.show()
_images/notebooks_completeness_relation_numerical_Siegert_20_0.png
_images/notebooks_completeness_relation_numerical_Siegert_20_1.png

This means that using more resonant states in the expansion keeps on giving results that are close to the analytical Berggren expansion of the CR. This is actually not that surprising.

What happens if the virial limit is set to a higher value?

[10]:
basis_lower_filter_2 = BasisSet(states=[s for s in basis_filtered if np.abs(s.virial) < 1.5*10**(-3)])
len(basis_lower_filter_2), len(basis_lower_filter_2.resonants)
[10]:
(81, 30)

The basis set is then larger, and the extra states correspond to more states that could be considered as resonant states, as well as some states located along the imaginary energy axis:

[11]:
basis_lower_filter_2.plot_energies(show_unknown=True)
_images/notebooks_completeness_relation_numerical_Siegert_24_0.png

Let us look at their impact on the expansion of the completeness relation:

[12]:
# Use all the numerical eigenstates of low virial
kgrid_num_lower_2, CR_conv_num_lower_2 = basis_lower_filter_2.completeness_convergence(g)
# Set the plot of the real part of the CR
plt.plot(kgrid_exact, np.real(CR_conv_exact), label='Exact', color='#d73027')
plt.plot(kgrid_MLE, np.real(CR_conv_MLE), label='MLE', ls='', marker='s', ms=7, color='b')
plt.plot(kgrid_num_lower_2, np.real(CR_conv_num_lower_2), label='Num. all states', ls='', marker='.', ms=10, color='k')
plt.xlabel("$k$")
plt.ylabel("$CR$")
plt.xlim(0, 20)
plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
plt.show()
# Set the plot of the imaginary part of the CR
plt.plot(kgrid_exact, np.imag(CR_conv_exact), label='Exact', color='#d73027')
plt.plot(kgrid_MLE, np.imag(CR_conv_MLE), label='MLE', ls='', marker='s', ms=7, color='b')
plt.plot(kgrid_num_lower_2, np.imag(CR_conv_num_lower_2), label='Num. all states', ls='', marker='.', ms=10, color='k')
plt.xlabel("$k$")
plt.ylabel("Im[$CR$]")
plt.xlim(0, 20)
plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
plt.show()
_images/notebooks_completeness_relation_numerical_Siegert_26_0.png
_images/notebooks_completeness_relation_numerical_Siegert_26_1.png

These states of low virial along the imaginary axis are the relevant ones to reproduce the MLE of the CR! This means that one should not look only for the resonant states when solving the numerical Hamiltonian: any state of low virial should be kept, whether it truly is a Siegert state or not.

One way of separating them from the other states of unknown type that could be considered as resonant states is to look for states with small real energies. Let us look at the wavefunctions of some of them:

[13]:
# Find the real energy of the last resonant state of the basis set
max_re = max([s.energy.real for s in basis_lower_filter_2.resonants])
# Plot the first 5 states whose energies are along the imaginary axis
max_states = 5
counter = 0
for i in range(43, len(basis_lower_filter_2)):
    if basis_lower_filter_2[i].energy.real < max_re and counter < max_states:
        counter += 1
        basis_lower_filter_2[i].plot(ylim=(-0.25, 0.25),
                                     title="i={}: {}".format(i, basis_lower_filter_2[i]))
_images/notebooks_completeness_relation_numerical_Siegert_28_0.png
_images/notebooks_completeness_relation_numerical_Siegert_28_1.png
_images/notebooks_completeness_relation_numerical_Siegert_28_2.png
_images/notebooks_completeness_relation_numerical_Siegert_28_3.png
_images/notebooks_completeness_relation_numerical_Siegert_28_4.png

These states actually look like the numerical continuum states, in terms of nodes in each region (inside or outside the potential). It is as if the coordinate mapping were forcing them to rotate, leaving them with complex wavefunctions.

Their contribution to the completeness relation is nonetheless important: it would be interesting to see how it is influenced by the grid step of the potential.

Test function with an initial momentum

Let us now move to the case of a test function with an initial momentum:

Definition of the test function
[14]:
g_k0 = Gaussian(l/12, 0, k0=10, grid=xgrid)
g_k0.plot()
_images/notebooks_completeness_relation_numerical_Siegert_32_0.png
Comparison with analytical results

The convergence of the completeness relation is studied in the same manner as in the first case:

  • Analytically, using the MLE or the Berggren expansions and the exact expansion using bound and continuum states as references,
  • Numerically, using the Berggren expansion or using all the states in the basis set.
[15]:
# Use only a limited number of Siegert states to find the analytical MLE of the CR
kgrid_MLE, CR_conv_MLE = siegerts.MLE_completeness_convergence(g_k0, 25)
# Use only a limited range of continuum states for the exact CR
kgrid_exact, CR_conv_exact = siegerts.exact_completeness_convergence(g_k0, hk=0.05, kmax=20)
# Use all the numerical eigenstates of low virial
kgrid_num, CR_conv_num = basis_filtered.completeness_convergence(g_k0)
# Compute the Berggren expansion completeness convergence numerically
kgrid_Ber_num, CR_conv_Ber_num = basis_filtered.Berggren_completeness_convergence(g_k0)
# Compute the Berggren expansion completeness convergence analytically
kgrid_Ber_an, CR_conv_Ber_an = siegerts.Berggren_completeness_convergence(g_k0, nres=25)
# Set the plot
plt.plot(kgrid_exact, np.real(CR_conv_exact), label='Exact', color='#d73027')
plt.plot(kgrid_MLE, np.real(CR_conv_MLE), label='An. MLE', ls='', marker='s', ms=7, color='b')
plt.plot(kgrid_Ber_an, np.real(CR_conv_Ber_an), label='An. Berggren', ls='', marker='s', ms=5, color='g')
plt.plot(kgrid_num, np.real(CR_conv_num), label='Num. all states', ls='', marker='.', ms=13, color='k')
plt.plot(kgrid_Ber_num, np.real(CR_conv_Ber_num), label='Num. Berggren', ls='', marker='.', ms=10, color='g')
plt.xlabel("$k$")
plt.ylabel("$CR$")
plt.xlim(0, 20)
plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
plt.show()
# Set the plot
plt.plot(kgrid_exact, np.imag(CR_conv_exact), label='Exact', color='#d73027')
plt.plot(kgrid_MLE, np.imag(CR_conv_MLE), label='An. MLE', ls='', marker='s', ms=7, color='b')
plt.plot(kgrid_Ber_an, np.imag(CR_conv_Ber_an), label='An. Berggren', ls='', marker='s', ms=5, color='g')
plt.plot(kgrid_num, np.imag(CR_conv_num), label='Num. all states', ls='', marker='.', ms=13, color='k')
plt.plot(kgrid_Ber_num, np.imag(CR_conv_Ber_num), label='Num. Berggren', ls='', marker='.', ms=10, color='g')
plt.xlabel("$k$")
plt.ylabel("$CR$")
plt.xlim(0, 20)
plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
plt.show()
_images/notebooks_completeness_relation_numerical_Siegert_34_0.png
_images/notebooks_completeness_relation_numerical_Siegert_34_1.png

In this case, the numerical expansion is not really affected by the extra states, since both numerical expansions superimpose. This means that the expansion found numerically is related to the Berggren expansion rather than the Mittag-Leffler expansion. This could be a little disturbing, but remember that it was shown in the notebook treating the time propagation of an initial state with an initial momentum that a Berggren expansion can very well approximate such a phenomenon: the numerical Siegert states found here should produce correct results when used in the time-propagation context.

Conclusion

This notebook focused on the completeness relation, and it was shown that keeping only the states with the lowest virial values made it possible to use a limited number of states to describe the completeness relation in a similar manner to the analytical case.

Virial operators

We saw in previous notebooks that the expectation values of the virial operators were able to discriminate the Siegert states from any other type of states. All the previous notebooks only used one type of virial operator, but there are two definitions in SiegPy. The goal of this notebook is to present the second type of virial operator, and how it compares to the default one.

Initialization

The initialization is simple: we only import the useful modules and classes and then define a potential (which is a 1D Square-Well Potential discretized over a space grid).

Import some modules and classes
[1]:
from siegpy import (Hamiltonian, SWPotential, ErfKGCoordMap,
                    SWPBasisSet)
import numpy as np
Define a potential
[2]:
# The potential is read from a file...
siegerts = SWPBasisSet.from_file("siegerts.dat")
pot = siegerts.potential
# ... and then discretized over a grid
l = pot.width
xmax = 7.5
xgrid = np.linspace(-xmax, xmax, 501)
pot.grid = xgrid
Choosing the virial operator

The default virial operator gives one (and only one) value. It actually measures whether the position-momentum commutator is correctly evaluated when applied to a given eigenstate \(\varphi\), i.e., it is such that \(\left\langle \varphi | [\hat{X}, \hat{P}] | \varphi \right\rangle = i \left\langle \varphi | \varphi \right\rangle\). It is referred to as the Generalized Complex Virial Theorem (GCVT).

The other is available for SmoothExtCoordMap instances only, for which a virial operator \(\hat{W_\xi} = \frac{\text{d} \hat{H}}{\text{d} \xi}\) per parameter \(\xi\) (here the angle \(\theta\), the inflexion point \(x_0\) and the sharpness \(\lambda\)) is defined (\(\hat{H}\) is the Hamiltonian considered). These virial operators measure the stability of each eigenvalue with respect to a given parameter change. Each virial operator then gives rise to one virial expectation value per parameter, namely \(v_\xi = \left\langle \varphi | \hat{W_\xi} | \varphi \right\rangle\). These three values are finally mapped into one final virial value \(v\) by using the following equation \(v = \sum_{\xi \in [\theta, x_0, \lambda]} | v_\xi |^2\).
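
As a purely illustrative arithmetic example of the last formula, with hypothetical per-parameter expectation values (these numbers are not computed by SiegPy):

# Hypothetical per-parameter virial expectation values v_theta, v_x0, v_lbda
v_theta, v_x0, v_lbda = 1e-8 + 2e-9j, -3e-9 + 1e-9j, 5e-10j
# Combined virial value: v = sum over the parameters of |v_xi|**2
v = sum(abs(v_xi)**2 for v_xi in (v_theta, v_x0, v_lbda))
print(v)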

The choice of the type of virial operator used is done while initializing the coordinate mapping, by setting the optional parameter GCVT either to True or False.

[3]:
# Initial parameters
theta = 0.6
x0 = 6.0
lbda = 1.5
# Default virial operator: GCVT=True
cm_1 = ErfKGCoordMap(theta, x0, lbda, grid=xgrid)
# Other type of virial operator: GCVT=False
cm_2 = ErfKGCoordMap(theta, x0, lbda, grid=xgrid, GCVT=False)
Comparing the virial operators
Define and solve two Hamiltonians

One Hamiltonian per type of virial operator is defined and then solved.

[4]:
ham_1 = Hamiltonian(pot, cm_1)
ham_2 = Hamiltonian(pot, cm_2)
[5]:
basis_1 = ham_1.solve()
basis_2 = ham_2.solve()
Plot the energies

The plot_energies method of the basis sets allows one to see how the virial values are distributed over the energy spectrum. Note that both basis sets are made of the same states: only their associated virial expectation values differ.

[6]:
basis_1.plot_energies()
basis_2.plot_energies()
_images/notebooks_influence_virials_siegerts_13_0.png
_images/notebooks_influence_virials_siegerts_13_1.png

As you can see, the distribution of the virial expectation values differs greatly between the two virial operators:

  • The range of virial values is much larger for the second type of virial operator.
  • Most importantly, the resonant states with the lowest virial expectation values are very different: for the default virial operator, the lowest energy resonant states have the lowest virial expectation values, whereas for the second type of virial operator the lowest values are reached far away from the 0 energy:
[11]:
for state in basis_2[:12]:
    print(state.energy, state.virial)
(-9.79608688842+4.10071842408e-14j) 1.14043002476e-10
(-9.18591925228-3.37715726075e-14j) 6.82940051154e-10
(-8.17464337214+4.21396607849e-14j) 3.18687656971e-09
(-6.77260946384+2.61389816265e-14j) 1.80012213375e-08
(1346.81487901-70.0870431848j) 1.33383817064e-07
(1309.64971825-68.7242290591j) 1.43891530556e-07
(1384.60670122-71.4821234553j) 1.53861789445e-07
(-4.99980420562-1.44406627299e-11j) 1.64899673628e-07
(1273.09631514-67.3909816174j) 1.88483607705e-07
(1423.04138812-72.9122466557j) 2.49175889283e-07
(1237.1410264-66.0847007287j) 2.74099981855e-07
(1201.77142174-64.80290349j) 4.14687367264e-07
[12]:
for state in basis_1[:12]:
    print(state.energy, state.virial)
(-9.79608688842+4.10071842408e-14j) 7.97704509282e-10
(-9.18591925228-3.37715726075e-14j) 3.17680862089e-09
(-8.17464337214+4.21396607849e-14j) 7.08919431335e-09
(-6.77260946384+2.61389816265e-14j) 1.24276304178e-08
(-4.99980420562-1.44406627299e-11j) 1.89478510917e-08
(-2.90018622708-6.14397704638e-09j) 2.59530586763e-08
(-0.626085020601+7.33146819469e-06j) 4.82648903582e-08
(1.87104013925-0.944793379409j) 5.54727310476e-08
(5.52735942521-1.73866474126j) 6.89860589855e-08
(9.68066945927-2.4642996033j) 8.75410584435e-08
(14.3306084456-3.18636534093j) 1.08537828898e-07
(19.4768829199-3.92000638226j) 1.31971445351e-07

Zooming in closer to the 0 energy, the differences can be spotted more easily in this energy range:

[8]:
basis_1.plot_energies(xlim=(-10, 50), ylim=(-7.5, 0.5))
basis_2.plot_energies(xlim=(-10, 50), ylim=(-7.5, 0.5))
_images/notebooks_influence_virials_siegerts_18_0.png
_images/notebooks_influence_virials_siegerts_18_1.png

The variation of the bound state virial values is much larger for the second type of virial operator, making it more difficult (if not impossible) to sort the first resonant states out from the rest of the eigenstates by their virial expectation value.

Conclusion

The default virial operator, which measures whether the position-momentum commutator is correctly evaluated for each eigenstate, is better than the other one when it comes to separating the Siegert states from the rest of the numerical eigenstates. It is therefore advised to keep using this default virial operator.

Influence of the filters

Only wavelet filters were used in the previous notebooks, but some finite difference filters are also provided in SiegPy. The goal of this notebook is to compare the results obtained with these other filters to the default results.

Initialization

The initialization only consists in importing the relevant classes and modules and defining a potential and a particular coordinate mapping.

Import some modules and classes
[1]:
from siegpy import (Hamiltonian, SWPotential, ErfKGCoordMap, Gaussian,
                    SWPBasisSet, FD2_filters, FD8_filters)
import numpy as np
Define a potential
[2]:
# The potential is read from a file...
siegerts = SWPBasisSet.from_file("siegerts.dat")
pot = siegerts.potential
# ... and then discretized over a grid
l = pot.width
xmax = 7.5
xgrid = np.linspace(-xmax, xmax, 501)
pot.grid = xgrid
Define a coordinate mapping
[3]:
theta = 0.6
x0 = 6.0
lbda = 1.5
cm = ErfKGCoordMap(theta, x0, lbda, grid=xgrid)
Compare the filters

Three families of filters are available within SiegPy, presented here by increasing filter length:

  • The finite difference filters of order 2 FD2_filters,
  • The finite difference filters of order 8 FD8_filters,
  • The wavelet filters Sym8_filters (used as default).

The filters to be used are specified when the Hamiltonian is initialized:

[4]:
ham_1 = Hamiltonian(pot, cm, filters=FD2_filters)
ham_2 = Hamiltonian(pot, cm, filters=FD8_filters)
ham_3 = Hamiltonian(pot, cm) # default Sym8_filters

The three Hamiltonians are then solved as usual:

[5]:
basis_1 = ham_1.solve()
basis_2 = ham_2.solve()
basis_3 = ham_3.solve()

The eigenstates can then be compared by plotting the spectrum and the virial expectation value for each state:

[6]:
basis_1.plot_energies()
basis_2.plot_energies()
basis_3.plot_energies()
_images/notebooks_influence_filters_siegerts_13_0.png
_images/notebooks_influence_filters_siegerts_13_1.png
_images/notebooks_influence_filters_siegerts_13_2.png

The ranges of the spectra are very different and increase with the length of the filters used. Note that the finite difference filters of order 2 give low virial expectation values compared to the other filters.

Let us zoom in for a more detailed view on the resonant states found using each filter:

[7]:
basis_1.plot_energies(xlim=(-100, 4000), ylim=(-300, 5))
basis_2.plot_energies(xlim=(-100, 4000), ylim=(-300, 5))
basis_3.plot_energies(xlim=(-100, 4000), ylim=(-300, 5))
_images/notebooks_influence_filters_siegerts_15_0.png
_images/notebooks_influence_filters_siegerts_15_1.png
_images/notebooks_influence_filters_siegerts_15_2.png

It is not a surprise to witness a larger number of resonant states of low virial when using the largest filters. However, the same bound and resonant states around the 0 energy seem to be found: let us zoom in again to have a clearer view on this part of the spectrum:

[8]:
basis_1.plot_energies(xlim=(-10, 50), ylim=(-7.5, 0.5))
basis_2.plot_energies(xlim=(-10, 50), ylim=(-7.5, 0.5))
basis_3.plot_energies(xlim=(-10, 50), ylim=(-7.5, 0.5))
_images/notebooks_influence_filters_siegerts_17_0.png
_images/notebooks_influence_filters_siegerts_17_1.png
_images/notebooks_influence_filters_siegerts_17_2.png

It is worth noting that the same bound and resonant states are found using the three filters. The only (important) difference concerns the expectation values of the virial operators, which are not sufficiently low for the resonant states found using both finite difference filters. Still, using the basis set obtained with the finite difference filters of order 8 allows one to get the same results as the default wavelet filters when it comes to studying the convergence of the completeness relation:

[9]:
g = Gaussian(l/12, 0, grid=xgrid)
titles = ['FD8 filters', 'Sym8 filters']
for i, basis in enumerate([basis_2, basis_3]):
    basis.max_virial = 10**(-5)
    basis.plot_completeness_convergence(g, xlim=(0, 15), title=titles[i])
_images/notebooks_influence_filters_siegerts_19_0.png
_images/notebooks_influence_filters_siegerts_19_1.png

The difference is that the resonant states have lower virial expectation values with the wavelet filters, meaning that it is easier to reach higher energies (or, equivalently, higher wavenumbers).

Conclusion

The quality of the results obtained depends on the family of filters used. The default wavelet family Sym8_filters is the best choice, as it allows more Siegert states to be discriminated from the rest of the numerical eigenstates.

Influence of the coordinate mapping parameters: case study of the ErfKGCoordMap

Another important set of parameters when solving Hamiltonians numerically with SiegPy is the choice of the coordinate mapping. There are many coordinate mappings available in the SiegPy module, but let us focus first on one type of smooth, exterior coordinate mapping, namely the ErfKGCoordMap.

This type of coordinate mapping depends on three parameters:

  • the complex scaling angle \(\theta\),
  • the inflexion point \(x_0\),
  • the sharpness parameter \(\lambda\).

The goal of the present notebook is to study the influence of these parameters on the numerical spectrum obtained after the Hamiltonian diagonalization.

Initialization

The initialization consists in importing the relevant modules and classes and defining a potential that will be used throughout the notebook.

Import some modules and classes
[25]:
from siegpy import (Hamiltonian, SWPotential, SWPBasisSet,
                    ErfKGCoordMap)
import numpy as np
import matplotlib.pyplot as plt
Define a potential
[2]:
# The potential is read from a file...
siegerts = SWPBasisSet.from_file("siegerts.dat")
pot = siegerts.potential
# ... and then discretized over a grid
l = pot.width
xmax = 7.5
xgrid = np.linspace(-xmax, xmax, 501)
pot.grid = xgrid
Influence of \(\theta\)

Let us first look at the influence of the \(\theta\) parameter by varying it, while the other parameters are constant.

Define the coordinate mappings
[3]:
thetas = [0.2, 0.4, 0.6, 0.8, 1.0]
x0 = 6.0
lbda = 1.5
cms_tta = [ErfKGCoordMap(theta, x0, lbda, grid=xgrid) for theta in thetas]

All the coordinate mappings are plotted below:

[30]:
for i, cm in enumerate(cms_tta):
    lab = str(cm.theta)
    plt.plot(np.real(cm.values), np.imag(cm.values), label=lab)
    plt.xlabel("Re[$F$]")
    plt.ylabel("Im[$F$]")
    plt.title(r"Varying $\theta$")
    plt.legend(loc=6, bbox_to_anchor=(1, .5))
plt.show()
_images/notebooks_influence_coordMap_siegerts_1_10_0.png
[41]:
for part in [{"str": "Re", "func": np.real},
             {"str": "Im", "func": np.imag}]:
    for i, cm in enumerate(cms_tta):
        lab = str(cm.theta)
        plt.plot(cm.grid, part["func"](cm.values), label=lab)
        plt.xlabel("x")
        plt.ylabel(part["str"]+"[$F$]")
        plt.title(r"Varying $\theta$")
        plt.legend(loc=6, bbox_to_anchor=(1, .5))
    plt.show()
_images/notebooks_influence_coordMap_siegerts_1_11_0.png
_images/notebooks_influence_coordMap_siegerts_1_11_1.png
Define the Hamiltonians
[4]:
hams_tta = [Hamiltonian(pot, cm) for cm in cms_tta]
Solve the Hamiltonians
[5]:
basissets_tta = [ham.solve() for ham in hams_tta]
Compare the results

Let us first have a look at the global picture, by plotting the energies and virial expectation values of all the eigenstates:

[6]:
for basis in basissets_tta:
    tit = r"$\theta$ = {}".format(basis.coord_map.theta)
    basis.plot_energies(title=tit)
_images/notebooks_influence_coordMap_siegerts_1_17_0.png
_images/notebooks_influence_coordMap_siegerts_1_17_1.png
_images/notebooks_influence_coordMap_siegerts_1_17_2.png
_images/notebooks_influence_coordMap_siegerts_1_17_3.png
_images/notebooks_influence_coordMap_siegerts_1_17_4.png

While the overall distribution of energies in the complex energy plane is modified by the complex scaling angle \(\theta\), the virial expectation values seem to be consistent over the whole range of angles. In particular, the resonant states do not seem to be largely affected by the angle modification. Let us zoom in for a clearer view:

[7]:
for basis in basissets_tta:
    tit = r"$\theta$ = {}".format(basis.coord_map.theta)
    basis.plot_energies(xlim=(-100, 4000), ylim=(-300, 5), title=tit)
_images/notebooks_influence_coordMap_siegerts_1_19_0.png
_images/notebooks_influence_coordMap_siegerts_1_19_1.png
_images/notebooks_influence_coordMap_siegerts_1_19_2.png
_images/notebooks_influence_coordMap_siegerts_1_19_3.png
_images/notebooks_influence_coordMap_siegerts_1_19_4.png

The highest energy resonant states seem to be unaffected by the complex scaling angle. Let us zoom in again, to look at what happens close to the 0 energy:

[8]:
for basis in basissets_tta:
    tit = r"$\theta$ = {}".format(basis.coord_map.theta)
    basis.plot_energies(xlim=(-10, 50), ylim=(-7.5, 0.5), title=tit)
_images/notebooks_influence_coordMap_siegerts_1_21_0.png
_images/notebooks_influence_coordMap_siegerts_1_21_1.png
_images/notebooks_influence_coordMap_siegerts_1_21_2.png
_images/notebooks_influence_coordMap_siegerts_1_21_3.png
_images/notebooks_influence_coordMap_siegerts_1_21_4.png

There are many things to be said here:

  • Focusing on the \(\theta = 0.2\) case, one sees that, while the highest energy resonant states are found with a low virial expectation value, those closest to the 0 energy are somehow hidden among the other solutions. This is due to the fact that the complex scaling angle is too small for these resonant states to be “unveiled” by the coordinate mapping. There is actually a critical angle \(\theta_c\) associated with each resonant state, given by \(\theta_c = \frac{\text{arctan}(E_I/E_R)}{2}\), where \(E_R\) (respectively \(E_I\)) is the real (respectively imaginary) part of the resonant state energy. If the complex scaling angle is higher than this number, then the resonant state appears as a single eigenstate:
[9]:
for i in range(3):
    en = siegerts.resonants[i].energy  # Energy of the resonant state
    theta_c = np.arctan(abs(en.imag / en.real)) / 2  # critical angle
    print("Resonant state n°{}: theta_c = {}".format(i, theta_c))
Resonant state n°0: theta_c = 0.22854546301537398
Resonant state n°1: theta_c = 0.1516923699784277
Resonant state n°2: theta_c = 0.12463899292846521

This explains why the first resonant state does not appear as a single state if \(\theta = 0.2\), while all the others appear as single states (the imaginary part of the resonant states increases more slowly than the real part, so the critical angle decreases with the resonance index). For all the higher complex scaling angles, this first resonant state appears as a single eigenstate of the complex scaled Hamiltonian.

Another interesting point is that the energies of the “continuum” states close to the 0 energy are rotated proportionally to the complex scaling angle.

Finally, when the complex scaling angle becomes too high, the bound states (which are otherwise not greatly modified by the complex scaling angle) start exhibiting higher virial expectation values, especially those with the highest energy.

The complex scaling angle therefore has to be chosen high enough to unveil all the important resonant states, while still allowing for a good discrimination of the Siegert states.
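
A possible rule of thumb, following the critical-angle formula above (a hedged sketch reusing the siegerts basis set and the numpy import from the beginning of this notebook; the 0.05 margin is arbitrary):

# Critical angles of the first 10 analytical resonant states
crit_angles = [np.arctan(abs(s.energy.imag / s.energy.real)) / 2
               for s in siegerts.resonants[:10]]
# Choose theta slightly above the largest critical angle of interest
theta_choice = max(crit_angles) + 0.05
print(theta_choice)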

Influence of \(x_0\)

Let us now study the influence of the inflexion point on the spectrum of the complex scaled Hamiltonians. The same approach is followed: only \(x_0\) is varied, while the other parameters are kept constant.

Define the coordinate mappings
[10]:
theta = 0.6
x0s = [5.0, 5.5, 6.0, 6.5, 7.0]  # Only x0 varies
lbda = 1.5
cms_x0 = [ErfKGCoordMap(theta, x0, lbda, grid=xgrid) for x0 in x0s]
[29]:
for i, cm in enumerate(cms_x0):
    lab = str(cm.x0)
    plt.plot(np.real(cm.values), np.imag(cm.values), label=lab)
    plt.xlabel("Re[$F$]")
    plt.ylabel("Im[$F$]")
    plt.title(r"Varying $x_0$")
    plt.legend(loc=6, bbox_to_anchor=(1, .5))
plt.show()
_images/notebooks_influence_coordMap_siegerts_1_28_0.png
[42]:
for part in [{"str": "Re", "func": np.real},
             {"str": "Im", "func": np.imag}]:
    for i, cm in enumerate(cms_x0):
        lab = str(cm.x0)
        plt.plot(cm.grid, part["func"](cm.values), label=lab)
        plt.xlabel("x")
        plt.ylabel(part["str"]+"[$F$]")
        plt.title(r"Varying $x_0$")
        plt.legend(loc=6, bbox_to_anchor=(1, .5))
    plt.show()
_images/notebooks_influence_coordMap_siegerts_1_29_0.png
_images/notebooks_influence_coordMap_siegerts_1_29_1.png
Define the Hamiltonians
[11]:
hams_x0 = [Hamiltonian(pot, cm) for cm in cms_x0]
Solve the Hamiltonians
[12]:
basissets_x0 = [ham.solve() for ham in hams_x0]
Compare the results

Let us first have a look at the global picture, by plotting the energies and virial expectation values of all the eigenstates:

[13]:
for basis in basissets_x0:
    tit = "$x_0$ = {}".format(basis.coord_map.x0)
    basis.plot_energies(title=tit)
_images/notebooks_influence_coordMap_siegerts_1_35_0.png
_images/notebooks_influence_coordMap_siegerts_1_35_1.png
_images/notebooks_influence_coordMap_siegerts_1_35_2.png
_images/notebooks_influence_coordMap_siegerts_1_35_3.png
_images/notebooks_influence_coordMap_siegerts_1_35_4.png

Again, the distribution of the eigenstates varies greatly with \(x_0\), but not the real energy range, nor the range of the expectation values of the virial operator.

[14]:
for basis in basissets_x0:
    tit = "$x_0$ = {}".format(basis.coord_map.x0)
    basis.plot_energies(xlim=(-100, 4000), ylim=(-300, 5), title=tit)
_images/notebooks_influence_coordMap_siegerts_1_37_0.png
_images/notebooks_influence_coordMap_siegerts_1_37_1.png
_images/notebooks_influence_coordMap_siegerts_1_37_2.png
_images/notebooks_influence_coordMap_siegerts_1_37_3.png
_images/notebooks_influence_coordMap_siegerts_1_37_4.png

Zooming in a little closer shows little modification of the Siegert states spectrum, and this is further evidenced by zooming in even more closely:

[15]:
for basis in basissets_x0:
    tit = "$x_0$ = {}".format(basis.coord_map.x0)
    basis.plot_energies(xlim=(-10, 50), ylim=(-7.5, 0.5), title=tit)
_images/notebooks_influence_coordMap_siegerts_1_39_0.png
_images/notebooks_influence_coordMap_siegerts_1_39_1.png
_images/notebooks_influence_coordMap_siegerts_1_39_2.png
_images/notebooks_influence_coordMap_siegerts_1_39_3.png
_images/notebooks_influence_coordMap_siegerts_1_39_4.png

It nevertheless seems important to leave a sufficiently large “buffer” region at the end of the discretization grid in order to let the coordinate mapping be fully effective (a degradation of the energies and virial values can be observed for \(x_0 = 7.0\)).
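As a quick sanity check (a minimal sketch reusing the xgrid and x0s variables defined above, not a SiegPy feature), one can print the width of the buffer region left between \(x_0\) and the edge of the grid for each case:

[ ]:
# Buffer left beyond +/- x0 for the scaling to reach its asymptotic form
for x0 in x0s:
    print("x0 = {}: buffer width = {:.2f}".format(x0, xgrid[-1] - x0))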

The difference is actually mostly seen on the wavefunctions, especially those of the resonant states:

[16]:
for i, basis in enumerate(basissets_x0):
    basis.max_virial = 10**(-6)
    tit = r"$x_0 = {}$".format(x0s[i])
    basis.plot_wavefunctions(nres=2, title=tit)
_images/notebooks_influence_coordMap_siegerts_1_41_0.png
_images/notebooks_influence_coordMap_siegerts_1_41_1.png
_images/notebooks_influence_coordMap_siegerts_1_41_2.png
_images/notebooks_influence_coordMap_siegerts_1_41_3.png
_images/notebooks_influence_coordMap_siegerts_1_41_4.png

The larger the value of \(x_0\), the larger the extension of the resonant state wavefunction. This is exactly the role of this parameter, so there is no surprise here.

However, it may have an influence on the bound states wavefunctions, especially for those of highest energy, which have longer exponential tails:

[17]:
for i, basis in enumerate(basissets_x0):
    basis.max_virial = 10**(-6)
    last_bound = basis.bounds[-1]
    tit = r"$x_0 = {}$; E = {:.4f}".format(x0s[i], last_bound.energy)
    last_bound.plot(title=tit)
_images/notebooks_influence_coordMap_siegerts_1_43_0.png
_images/notebooks_influence_coordMap_siegerts_1_43_1.png
_images/notebooks_influence_coordMap_siegerts_1_43_2.png
_images/notebooks_influence_coordMap_siegerts_1_43_3.png
_images/notebooks_influence_coordMap_siegerts_1_43_4.png

This is not a problem since it does not greatly affect the energy and the wavefunctions inside the potential.

Influence of \(\lambda\)

Let us finally have a look at the last parameter: the sharpness parameter.

Define the coordinate mappings
[18]:
theta = 0.6
x0 = 6.0
lbdas = [0.1, 0.5, 1.5, 2.5, 5.0, 10.0]
cms_lbda = [ErfKGCoordMap(theta, x0, lbda, grid=xgrid) for lbda in lbdas]

For the smallest sharpness value, the coordinate mapping almost corresponds to the uniform coordinate mapping \(F: x \mapsto x e^{i \theta}\), while the largest values exhibit sharp transitions:

[28]:
for i, cm in enumerate(cms_lbda):
    lab = str(cm.lbda)
    plt.plot(np.real(cm.values), np.imag(cm.values), label=lab)
    plt.xlabel("Re[$F$]")
    plt.ylabel("Im[$F$]")
    plt.title(r"Varying $\lambda$")
    plt.legend(loc=6, bbox_to_anchor=(1, .5))
plt.show()
_images/notebooks_influence_coordMap_siegerts_1_49_0.png
[43]:
for part in [{"str": "Re", "func": np.real},
             {"str": "Im", "func": np.imag}]:
    for i, cm in enumerate(cms_lbda):
        lab = str(cm.lbda)
        plt.plot(cm.grid, part["func"](cm.values), label=lab)
        plt.xlabel("x")
        plt.ylabel(part["str"]+"[$F$]")
        plt.title(r"Varying $\lambda$")
        plt.legend(loc=6, bbox_to_anchor=(1, .5))
    plt.show()
_images/notebooks_influence_coordMap_siegerts_1_50_0.png
_images/notebooks_influence_coordMap_siegerts_1_50_1.png
Define the Hamiltonians
[19]:
hams_lbda = [Hamiltonian(pot, cm) for cm in cms_lbda]
Solve the Hamiltonians
[20]:
basissets_lbda = [ham.solve() for ham in hams_lbda]
Compare the results

Let us first have a look at the global picture, by plotting the energies and virial expectation values of all the eigenstates:

[21]:
for basis in basissets_lbda:
    tit = "$\lambda$ = {}".format(basis.coord_map.lbda)
    basis.plot_energies(title=tit)
_images/notebooks_influence_coordMap_siegerts_1_56_0.png
_images/notebooks_influence_coordMap_siegerts_1_56_1.png
_images/notebooks_influence_coordMap_siegerts_1_56_2.png
_images/notebooks_influence_coordMap_siegerts_1_56_3.png
_images/notebooks_influence_coordMap_siegerts_1_56_4.png
_images/notebooks_influence_coordMap_siegerts_1_56_5.png

There are two limiting cases: either the sharpness parameter is too large or it is too small, leading to wrong or degraded results, where the bound and/or the resonant states are not well reproduced.

[22]:
for basis in basissets_lbda:
    tit = "$\lambda$ = {}".format(basis.coord_map.lbda)
    basis.plot_energies(xlim=(-100, 4000), ylim=(-300, 5), title=tit)
_images/notebooks_influence_coordMap_siegerts_1_58_0.png
_images/notebooks_influence_coordMap_siegerts_1_58_1.png
_images/notebooks_influence_coordMap_siegerts_1_58_2.png
_images/notebooks_influence_coordMap_siegerts_1_58_3.png
_images/notebooks_influence_coordMap_siegerts_1_58_4.png
_images/notebooks_influence_coordMap_siegerts_1_58_5.png

By zooming in more closely, it appears that if \(\lambda\) is too small, then the energies can be completely wrong, even though the range of the virial values is consistent with the other calculations, while, if it gets too large, the range of resonant states becomes so small that none can be discriminated from the rest of the states, even though \(\theta\) is large enough for the resonant states to be singled out:

[44]:
for basis in basissets_lbda:
    tit = "$\lambda$ = {}".format(basis.coord_map.lbda)
    basis.plot_energies(xlim=(-10, 50), ylim=(-7.5, 0.5), title=tit)
_images/notebooks_influence_coordMap_siegerts_1_60_0.png
_images/notebooks_influence_coordMap_siegerts_1_60_1.png
_images/notebooks_influence_coordMap_siegerts_1_60_2.png
_images/notebooks_influence_coordMap_siegerts_1_60_3.png
_images/notebooks_influence_coordMap_siegerts_1_60_4.png
_images/notebooks_influence_coordMap_siegerts_1_60_5.png

The last two plots show that a degradation of the virial values, or even of the energies, can happen if the transition is too sharp.

Conclusions

We saw the influence of each parameter on the smooth exterior complex scaling. Let us summarize the main results:

  • For \(\theta\):
    • if it is too small, some resonant states might not be singled out from the rest of the numerical eigenstates (\(\theta\) must be larger than the critical angle of the resonant states one is interested in),
    • if it is too large, then the bound and resonant states might be less well reproduced (degraded virial values).
  • For \(x_0\):
    • if it is too large (with respect to the grid extension), then the coordinate mapping cannot reach its asymptotic value, therefore leading to a degradation of the results.
    • On the contrary, if it is too small, then the buffer region is large and most of the grid points are of no use: these cases correspond to a poor use of the computational resources.
  • For \(\lambda\):
    • if it is too small, the coordinate mapping tends to the uniform complex scaling, and the energies are all aligned.
    • if it is too large, the transition between the absence and the presence of the scaling is too sharp (the coordinate mapping amounts to the bare exterior complex scaling) and the resonant states cannot be singled out from the rest of the numerical eigenstates, even though \(\theta\) is larger than their critical angle.

This shows that one has to be careful with the values of the smooth exterior coordinate mapping parameters, but a generic approach can be defined: make sure that the buffer zone is large enough for the asymptotic values of the coordinate mapping to be reached, with a not too sharp transition around \(\pm x_0\) and a large enough angle for the resonant states to be singled out.
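These rules of thumb can be gathered in a small helper function. The sketch below is not part of SiegPy; it only encodes the checks discussed in this notebook (buffer width, critical angles, sharpness), and the thresholds used (a buffer of at least one length unit, a transition width estimated as \(1/\lambda\)) are arbitrary choices for illustration. The critical angles are estimated from the analytical siegerts basis set already in memory:

[ ]:
def check_coord_map_parameters(theta, x0, lbda, xgrid, resonant_energies):
    """Rough sanity checks for the smooth exterior scaling parameters."""
    # 1. Buffer zone: the grid must extend well beyond +/- x0
    buffer_width = xgrid[-1] - x0
    if buffer_width < 1.0:
        print("Warning: the buffer width ({:.2f}) may be too small".format(buffer_width))
    # 2. Critical angles: theta must exceed arctan(|E_I/E_R|)/2 for the
    #    resonant states of interest
    thetas_c = [np.arctan(abs(en.imag / en.real)) / 2 for en in resonant_energies]
    if theta < max(thetas_c):
        print("Warning: theta = {} is below the largest critical angle "
              "({:.3f})".format(theta, max(thetas_c)))
    # 3. Sharpness: the transition region (roughly 1/lbda wide) should fit
    #    inside the buffer zone
    if 1 / lbda > buffer_width:
        print("Warning: the transition around x0 does not fit in the buffer")

# Check the reference parameters of this notebook against the first three
# resonant states of the analytical basis set
energies = [siegerts.resonants[i].energy for i in range(3)]
check_coord_map_parameters(0.6, 6.0, 1.5, xgrid, energies)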

Influence of the grid for continuum states

Last but not least, the grid is a very important parameter when it comes to finding the eigenstates of a Hamiltonian matrix. Two parameters characterize a 1D grid: the grid spacing (the distance between two grid points) and the grid extension (the range spanned by the \(x\)-axis). Let us find out how these parameters influence the continuum and bound states found numerically.

Initialization
Import some modules and classes
[1]:
from siegpy import (Hamiltonian, SWPotential, UniformCoordMap,
                    SWPBasisSet)
import numpy as np
from copy import deepcopy
Define a potential
[2]:
siegerts = SWPBasisSet.from_file("siegerts.dat")
pot = siegerts.potential
Influence of the grid step

Having two parameters to consider, we’ll vary one while keeping the other constant. Let us begin with the grid step. The grid extension being constant, this amounts to using different numbers of grid points.

Define the same potential with multiple grid steps
[3]:
# Set the grid extension
xmax = 7.5
# Set the various grids
factors = [1, 2, 4, 8, 16]
npts_list = [f*50 + 1 for f in factors]
print('npts_list:', npts_list)
xgrids = [np.linspace(-xmax, xmax, npts) for npts in npts_list]
# List the grid steps
hx_list = [g[1]-g[0] for g in xgrids]
print("grid spacings:", hx_list)
# Discretize the potential over the multiple grid steps
pots = []
for xgrid in xgrids:
    pots.append(deepcopy(pot))
    pots[-1].grid = xgrid
npts_list: [51, 101, 201, 401, 801]
grid spacings: [0.29999999999999982, 0.15000000000000036, 0.075000000000000178, 0.037499999999999645, 0.018749999999999822]
Define the coordinate mapping

The same coordinate mapping will be used throughout this notebook.

[4]:
theta = 0.0
cm = UniformCoordMap(theta)
Define the Hamiltonians
[5]:
hams_hx = [Hamiltonian(pot, cm) for pot in pots]
Solve the Hamiltonians
[6]:
basissets_hx = [ham.solve() for ham in hams_hx]
Compare the spectra

As usual, the easiest way is to plot the energies of the eigenstates and their virial values:

[7]:
for i, basis in enumerate(basissets_hx):
    tit = "$h_x$ = {:.3e}; npts = {}".format(hx_list[i], len(basis))
    basis.plot_energies(title=tit)
_images/notebooks_influence_grid_continuum_16_0.png
_images/notebooks_influence_grid_continuum_16_1.png
_images/notebooks_influence_grid_continuum_16_2.png
_images/notebooks_influence_grid_continuum_16_3.png
_images/notebooks_influence_grid_continuum_16_4.png

By decreasing the grid step (i.e., increasing the number of grid points), the energy range of continuum states increases, and so does the range of virial values (lower virial values are obtained if more grid points are used, i.e. the quality of the eigenstates is better).
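As a rough, back-of-the-envelope orientation (not a SiegPy result), the largest kinetic energy a uniform grid of spacing \(h_x\) can represent is of the order of \(\pi^2 / (2 h_x^2)\) in atomic units (the largest representable wavenumber being of the order of \(\pi/h_x\)), which is why halving the grid step roughly quadruples the accessible energy range:

[ ]:
# Rough upper bound on the representable kinetic energy for each grid step
for hx in hx_list:
    print("hx = {:.3e}: E_max of the order of {:.0f}".format(hx, np.pi**2 / (2 * hx**2)))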

Let us zoom in close to the 0 energy, to have a clearer view:

[8]:
for i, basis in enumerate(basissets_hx):
    tit = "$h_x$ = {:.3e}; npts = {}".format(hx_list[i], len(basis))
    basis.plot_energies(xlim=(-10, 10), title=tit)
_images/notebooks_influence_grid_continuum_18_0.png
_images/notebooks_influence_grid_continuum_18_1.png
_images/notebooks_influence_grid_continuum_18_2.png
_images/notebooks_influence_grid_continuum_18_3.png
_images/notebooks_influence_grid_continuum_18_4.png

This shows that the spectrum in this range is only slightly modified by the grid spacing: the results are almost converged in this energy range, even for the lowest number of grid points. Still, it requires many more grid points to reach virial values that are low enough for a good discrimination of the bound and continuum states.

Finally, the next cell shows the convergence of some low energy bound and continuum states as a function of the decreasing grid spacing:

[10]:
# Show convergence of bound states energies
for i in range(7):
    print("Bound state n° {}:".format(i+1))
    for j, basis in enumerate(basissets_hx):
        print("hx = {:.3e}; E = {:.4e}".format(hx_list[j], basis.bounds[i].energy.real))
# Show convergence of continuum states energies
for i in 0, 10, 20, 30:
    print("Continuum state n° {}:".format(i+1))
    for j, basis in enumerate(basissets_hx):
        print("hx = {:.3e}; E = {:.4e}".format(hx_list[j], basis.continuum[i].energy.real))
Bound state n° 1:
hx = 3.000e-01; E = -9.8030e+00
hx = 1.500e-01; E = -9.7871e+00
hx = 7.500e-02; E = -9.7926e+00
hx = 3.750e-02; E = -9.7955e+00
hx = 1.875e-02; E = -9.7939e+00
Bound state n° 2:
hx = 3.000e-01; E = -9.2128e+00
hx = 1.500e-01; E = -9.1499e+00
hx = 7.500e-02; E = -9.1720e+00
hx = 3.750e-02; E = -9.1836e+00
hx = 1.875e-02; E = -9.1771e+00
Bound state n° 3:
hx = 3.000e-01; E = -8.2322e+00
hx = 1.500e-01; E = -8.0936e+00
hx = 7.500e-02; E = -8.1434e+00
hx = 3.750e-02; E = -8.1694e+00
hx = 1.875e-02; E = -8.1549e+00
Bound state n° 4:
hx = 3.000e-01; E = -6.8672e+00
hx = 1.500e-01; E = -6.6289e+00
hx = 7.500e-02; E = -6.7175e+00
hx = 3.750e-02; E = -6.7633e+00
hx = 1.875e-02; E = -6.7381e+00
Bound state n° 5:
hx = 3.000e-01; E = -5.1297e+00
hx = 1.500e-01; E = -4.7772e+00
hx = 7.500e-02; E = -4.9152e+00
hx = 3.750e-02; E = -4.9856e+00
hx = 1.875e-02; E = -4.9474e+00
Bound state n° 6:
hx = 3.000e-01; E = -3.0481e+00
hx = 1.500e-01; E = -2.5889e+00
hx = 7.500e-02; E = -2.7830e+00
hx = 3.750e-02; E = -2.8806e+00
hx = 1.875e-02; E = -2.8288e+00
Bound state n° 7:
hx = 3.000e-01; E = -7.3081e-01
hx = 1.500e-01; E = -2.8027e-01
hx = 7.500e-02; E = -4.9385e-01
hx = 3.750e-02; E = -6.0378e-01
hx = 1.875e-02; E = -5.4730e-01
Continuum state n° 1:
hx = 3.000e-01; E = 1.5328e-01
hx = 1.500e-01; E = 1.6071e-01
hx = 7.500e-02; E = 1.6387e-01
hx = 3.750e-02; E = 1.6528e-01
hx = 1.875e-02; E = 1.6652e-01
Continuum state n° 11:
hx = 3.000e-01; E = 4.2292e+00
hx = 1.500e-01; E = 4.4525e+00
hx = 7.500e-02; E = 4.5239e+00
hx = 3.750e-02; E = 5.4710e+00
hx = 1.875e-02; E = 5.5180e+00
Continuum state n° 21:
hx = 3.000e-01; E = 1.3607e+01
hx = 1.500e-01; E = 1.5289e+01
hx = 7.500e-02; E = 1.5444e+01
hx = 3.750e-02; E = 1.5531e+01
hx = 1.875e-02; E = 1.5622e+01
Continuum state n° 31:
hx = 3.000e-01; E = 2.9786e+01
hx = 1.500e-01; E = 2.9469e+01
hx = 7.500e-02; E = 2.8429e+01
hx = 3.750e-02; E = 2.8670e+01
hx = 1.875e-02; E = 2.8809e+01
Influence of the grid extension

Let us now review how the grid extension influences the eigenstates.

Define the same potential with multiple grid extensions
[11]:
# Set the grid spacing
hx = 0.0375
# Set the various grids via the grid extension
factors = range(1, 6)
xmax_list = [f*100*hx for f in factors]
print('xmax_list:', xmax_list)
xgrids = [np.arange(-xmax, xmax+hx/2, hx) for xmax in xmax_list]
# List the number of points
npts_list = [len(g) for g in xgrids]
print('npts_list:', npts_list)
# Discretize the potential over the multiple grid steps
pots = []
for xgrid in xgrids:
    pots.append(deepcopy(pot))
    pots[-1].grid = xgrid
xmax_list: [3.75, 7.5, 11.25, 15.0, 18.75]
npts_list: [201, 401, 601, 801, 1001]
Define the Hamiltonians
[12]:
# The same uniform coordinate mapping is used
hams_xmax = [Hamiltonian(pot, cm) for pot in pots]
Solve the Hamiltonians
[13]:
basissets_xmax = [ham.solve() for ham in hams_xmax]
Compare the spectra

As usual, the easiest way is to plot the energies of the eigenstates and their virial values:

[19]:
for basis in basissets_xmax:
    tit = "xmax = {:.2f}; npts = {}".format(basis[0].grid[-1], len(basis))
    basis.plot_energies(title=tit)
_images/notebooks_influence_grid_continuum_29_0.png
_images/notebooks_influence_grid_continuum_29_1.png
_images/notebooks_influence_grid_continuum_29_2.png
_images/notebooks_influence_grid_continuum_29_3.png
_images/notebooks_influence_grid_continuum_29_4.png

By increasing the grid extension while keeping the same grid step, the energy range of continuum states is not modified and neither is the range of virial values, even though the number of eigenstates is larger.

Let us zoom in close to the 0 energy, to have a clearer view:

[16]:
for basis in basissets_xmax:
    tit = "xmax = {:.2f}; npts = {}".format(basis[0].grid[-1], len(basis))
    basis.plot_energies(xlim=(-10, 10), title=tit)
_images/notebooks_influence_grid_continuum_31_0.png
_images/notebooks_influence_grid_continuum_31_1.png
_images/notebooks_influence_grid_continuum_31_2.png
_images/notebooks_influence_grid_continuum_31_3.png
_images/notebooks_influence_grid_continuum_31_4.png

These plots show that the spectrum in this range is largely modified by the grid extension: there is a convergence of the bound states, but not of the continuum states. The latter fact is known as the continuum collapse, and is related to the confinement (i.e., it is as if the 1D square-well potential were embedded inside a larger, infinite square-well potential). It causes trouble whenever one is interested in a finer sampling of the high energy part of the continuum, because using a larger grid extension only causes an accumulation of states just above the threshold energy (here 0). This is actually one motivation for using Siegert states instead of continuum states: thanks to the smooth exterior complex scaling, there is no need to largely increase the grid extension, because the resonant states then resemble bound states. This will be presented in the next notebook.
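The collapse can be rationalized with a crude particle-in-a-box picture (a toy estimate that ignores the square well itself): if the whole grid is seen as an infinite box of length \(L = 2 x_{max}\), its levels are \(E_n = n^2 \pi^2 / (2 L^2)\) in atomic units, so they all scale as \(1/x_{max}^2\) and pile up just above the threshold when the grid extension grows:

[ ]:
# Toy model: first levels of an infinite box of length L = 2*xmax
for xmax in xmax_list:
    L = 2 * xmax
    levels = ["{:.4f}".format(n**2 * np.pi**2 / (2 * L**2)) for n in range(1, 4)]
    print("xmax = {:5.2f}: first box levels = {}".format(xmax, ", ".join(levels)))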

The bound states being normalizable, they can however easily converge with respect to the grid extension: it has to be large enough to accommodate the exponential tails of the highest energy bound states.

These facts are evidenced by the next cells:

[17]:
# Show convergence of bound states energies
for i in range(7):
    print("Bound state n° {}:".format(i+1))
    for j, basis in enumerate(basissets_xmax):
        print("xmax = {:.2f}; E = {:.4e}".format(xmax_list[j], basis.bounds[i].energy.real))
Bound state n° 1:
xmax = 3.75; E = -9.7955e+00
xmax = 7.50; E = -9.7955e+00
xmax = 11.25; E = -9.7955e+00
xmax = 15.00; E = -9.7955e+00
xmax = 18.75; E = -9.7955e+00
Bound state n° 2:
xmax = 3.75; E = -9.1836e+00
xmax = 7.50; E = -9.1836e+00
xmax = 11.25; E = -9.1836e+00
xmax = 15.00; E = -9.1836e+00
xmax = 18.75; E = -9.1836e+00
Bound state n° 3:
xmax = 3.75; E = -8.1694e+00
xmax = 7.50; E = -8.1694e+00
xmax = 11.25; E = -8.1694e+00
xmax = 15.00; E = -8.1694e+00
xmax = 18.75; E = -8.1694e+00
Bound state n° 4:
xmax = 3.75; E = -6.7633e+00
xmax = 7.50; E = -6.7633e+00
xmax = 11.25; E = -6.7633e+00
xmax = 15.00; E = -6.7633e+00
xmax = 18.75; E = -6.7633e+00
Bound state n° 5:
xmax = 3.75; E = -4.9855e+00
xmax = 7.50; E = -4.9856e+00
xmax = 11.25; E = -4.9856e+00
xmax = 15.00; E = -4.9856e+00
xmax = 18.75; E = -4.9856e+00
Bound state n° 6:
xmax = 3.75; E = -2.8798e+00
xmax = 7.50; E = -2.8806e+00
xmax = 11.25; E = -2.8806e+00
xmax = 15.00; E = -2.8806e+00
xmax = 18.75; E = -2.8806e+00
Bound state n° 7:
xmax = 3.75; E = -5.8033e-01
xmax = 7.50; E = -6.0378e-01
xmax = 11.25; E = -6.0378e-01
xmax = 15.00; E = -6.0378e-01
xmax = 18.75; E = -6.0378e-01

The convergence of the bound states is easier for the ones of lowest energy, because they require a smaller grid extension to accommodate their exponential tail.

The situation is completely different for the continuum states:

[18]:
# Show convergence of continuum states energies
for i in 0, 10, 20, 30:
    print("Continuum state n° {}:".format(i+1))
    for j, basis in enumerate(basissets_xmax):
        msg = "xmax = {:.2f}; E = {:.4e}".format(xmax_list[j], basis.continuum[i].energy.real)
        show_plots = False  # Set to True to show plots instead
        if show_plots:
            basis.continuum[i].plot(title=msg)
        else:
            print(msg)
Continuum state n° 1:
xmax = 3.75; E = 1.2714e+00
xmax = 7.50; E = 1.6528e-01
xmax = 11.25; E = 5.8330e-02
xmax = 15.00; E = 2.9455e-02
xmax = 18.75; E = 1.7712e-02
Continuum state n° 11:
xmax = 3.75; E = 1.9194e+01
xmax = 7.50; E = 5.4710e+00
xmax = 11.25; E = 1.8837e+00
xmax = 15.00; E = 1.0282e+00
xmax = 18.75; E = 6.2960e-01
Continuum state n° 21:
xmax = 3.75; E = 6.1785e+01
xmax = 7.50; E = 1.5531e+01
xmax = 11.25; E = 6.4879e+00
xmax = 15.00; E = 3.1642e+00
xmax = 18.75; E = 1.8083e+00
Continuum state n° 31:
xmax = 3.75; E = 1.1872e+02
xmax = 7.50; E = 2.8670e+01
xmax = 11.25; E = 1.2183e+01
xmax = 15.00; E = 7.0408e+00
xmax = 18.75; E = 4.1276e+00

The continuum collapse is clearly evidenced here: the energy of the \(n\)-th continuum state decreases when the grid extension increases. By plotting the wavefunctions of these states, one could also see that they are not always related to each other.

Conclusion

The grid step and the grid extension are the two parameters to consider in order to reach a convergence of the numerical eigenstates (at least when using grids where the points are evenly distributed). They have two distinct influences on the obtained spectra:

  • Decreasing the grid step allows for states of better quality (i.e., of lower virial), and the bound and continuum states converge to given values, even though the convergence is rather slow (e.g., it may require a large number of grid points). It also makes it possible to describe continuum states of higher energy.
  • Increasing the grid extension allows for a convergence of the bound states only (the grid extension must be large enough for their exponential tails to vanish), but not of the continuum states, for which an accumulation of states above the zero energy is observed (known as the continuum collapse), while the range of energies and virial values is left unchanged. This means that there is no need to increase the grid extension once the bound states are converged with respect to this parameter and the discretization of the continuum states reaches high enough energies with a sufficient precision.

Influence of the grid for Siegert states

The previous notebook introduced the influence of two grid parameters, the grid spacing and the grid extension, on the spectrum of bound and continuum states. Let us apply the same approach when a smooth exterior complex scaling is used, and see how these parameters influence the spectrum of Siegert states found.

Initialization
Import some modules and classes
[1]:
from siegpy import (Hamiltonian, SWPotential, ErfKGCoordMap,
                    SWPBasisSet)
import numpy as np
from copy import deepcopy
Define a potential
[2]:
siegerts = SWPBasisSet.from_file("siegerts.dat")
pot = siegerts.potential
Influence of the grid step

As in the previous notebook, let us first consider the influence of the grid step on the eigenstates found numerically.

Define the same potential with multiple grid steps
[3]:
# Set the grid extension
xmax = 7.5
# Set the various grids
factors = [1, 2, 4, 8, 16]
npts_list = [f*50 + 1 for f in factors]
print('npts_list:', npts_list)
xgrids = [np.linspace(-xmax, xmax, npts) for npts in npts_list]
# List the grid steps
hx_list = [g[1]-g[0] for g in xgrids]
print("grid spacings:", hx_list)
# Discretize the potential over the multiple grid steps
pots = []
for xgrid in xgrids:
    pots.append(deepcopy(pot))
    pots[-1].grid = xgrid
npts_list: [51, 101, 201, 401, 801]
grid spacings: [0.29999999999999982, 0.15000000000000036, 0.075000000000000178, 0.037499999999999645, 0.018749999999999822]
Define the coordinate mapping

The same smooth exterior coordinate mapping will be used throughout this notebook:

[4]:
theta = 0.6
x0 = 6.0
lbda = 1.5
cm = ErfKGCoordMap(theta, x0, lbda)
Define the Hamiltonians
[5]:
hams_hx = [Hamiltonian(pot, cm) for pot in pots]
Solve the Hamiltonians
[6]:
basissets_hx = [ham.solve() for ham in hams_hx]
Compare the spectra

Again, the easiest way of comparing the spectra is to plot the energies of the eigenstates and their virial values:

[7]:
for i, basis in enumerate(basissets_hx):
    tit = "$h_x$ = {:.3e}; npts = {}".format(hx_list[i], len(basis))
    basis.plot_energies(title=tit, show_unknown=True)
_images/notebooks_influence_grid_siegerts_16_0.png
_images/notebooks_influence_grid_siegerts_16_1.png
_images/notebooks_influence_grid_siegerts_16_2.png
_images/notebooks_influence_grid_siegerts_16_3.png
_images/notebooks_influence_grid_siegerts_16_4.png

It should not be a surprise to see that energies (and wavefunctions) can be complex after the application of smooth exterior complex scaling. Apart from that, the spectra are globally comparable with the previous case (when finding numerical bound and continuum states): by decreasing the grid step (i.e., increasing the number of grid points), the energy range increases, and so does the range of virial values (lower virial values are obtained if more grid points are used, i.e. the quality of the numerical eigenstates gets better).

Also note how the minimal virial values and the maximal real energy are comparable to those found in the case of the uniform complex scaling.

Another way of comparing the basis sets is to define the same range for the energy plot, in order to see how the resonant states are affected by the grid step decrease:

[24]:
for i, basis in enumerate(basissets_hx):
    tit = "$h_x$ = {:.3e}; npts = {}".format(hx_list[i], len(basis))
    basis.plot_energies(xlim=(-12, 1000), ylim=(-100, 1), title=tit, show_unknown=True)
_images/notebooks_influence_grid_siegerts_18_0.png
_images/notebooks_influence_grid_siegerts_18_1.png
_images/notebooks_influence_grid_siegerts_18_2.png
_images/notebooks_influence_grid_siegerts_18_3.png
_images/notebooks_influence_grid_siegerts_18_4.png

Decreasing the grid step increases the range of resonant states that can be discriminated from the rest of the states. Still, note that once the grid step is sufficient to describe a resonant state, decreasing it further does not greatly affect the position of that resonant state. This is the equivalent of what was observed for the continuum states, where the low energy continuum states were not largely altered by the increase of the number of grid points.

Zooming even more closely gives:

[22]:
for i, basis in enumerate(basissets_hx):
    tit = "$h_x$ = {:.3e}; npts = {}".format(hx_list[i], len(basis))
    basis.plot_energies(xlim=(-12, 25), ylim=(-5, 1), title=tit, show_unknown=True)
_images/notebooks_influence_grid_siegerts_20_0.png
_images/notebooks_influence_grid_siegerts_20_1.png
_images/notebooks_influence_grid_siegerts_20_2.png
_images/notebooks_influence_grid_siegerts_20_3.png
_images/notebooks_influence_grid_siegerts_20_4.png

The bound and resonant states can be more easily discriminated from the rest of the states as the number of grid points increases. Note how using only 201 grid points gives a plot similar to the one obtained with 801 points (apart from the virial values): the effect of decreasing the grid step is the same whether a smooth exterior complex scaling is used or not.

Influence of the grid extension

It is now time to study the influence of the grid extension on the spectrum of resonant states.

Define the same potential with multiple grid extensions

The grid spacing will now be fixed, while the grid extension is varied. The minimum grid extension will approximately correspond to the inflection point of the smooth exterior coordinate mapping.

[10]:
# Set the grid spacing
hx = 0.0375
# Set the various grids via the grid extension
factors = range(2, 5)
xmax_list = [5.5, 6.0, 6.5, 7.0] + [f*100*hx for f in factors]
print('xmax_list:', xmax_list)
xgrids = [np.arange(-xmax, xmax+3*hx/4, hx) for xmax in xmax_list]
# List the number of points
npts_list = [len(g) for g in xgrids]
print('npts_list:', npts_list)
# Discretize the potential over the multiple grid steps
pots = []
for xgrid in xgrids:
    pots.append(deepcopy(pot))
    pots[-1].grid = xgrid
xmax_list: [5.5, 6.0, 6.5, 7.0, 7.5, 11.25, 15.0]
npts_list: [295, 321, 348, 375, 401, 601, 801]
Define the Hamiltonians
[11]:
# The same smooth exterior coordinate mapping is used
hams_xmax = [Hamiltonian(pot, cm) for pot in pots]
Solve the Hamiltonians
[12]:
basissets_xmax = [ham.solve(max_virial=10**-5) for ham in hams_xmax]
Compare the spectra
[25]:
for basis in basissets_xmax:
    tit = "xmax = {:.2f}; npts = {}".format(basis[0].grid[-1], len(basis))
    basis.plot_energies(title=tit, show_unknown=True)
_images/notebooks_influence_grid_siegerts_30_0.png
_images/notebooks_influence_grid_siegerts_30_1.png
_images/notebooks_influence_grid_siegerts_30_2.png
_images/notebooks_influence_grid_siegerts_30_3.png
_images/notebooks_influence_grid_siegerts_30_4.png
_images/notebooks_influence_grid_siegerts_30_5.png
_images/notebooks_influence_grid_siegerts_30_6.png

In the same manner as in the absence of coordinate mapping, increasing the grid extension does not modify the (real) energy range nor the range of the virial values. The main difference comes from the fact that the additional states scatter over the whole complex plane.

Let us zoom in a given area of the complex energy plane:

[14]:
for basis in basissets_xmax:
    tit = "xmax = {:.2f}; npts = {}".format(basis[0].grid[-1], len(basis))
    basis.plot_energies(xlim=(-100, 2000), ylim=(-350, 25), title=tit, show_unknown=True)
_images/notebooks_influence_grid_siegerts_32_0.png
_images/notebooks_influence_grid_siegerts_32_1.png
_images/notebooks_influence_grid_siegerts_32_2.png
_images/notebooks_influence_grid_siegerts_32_3.png
_images/notebooks_influence_grid_siegerts_32_4.png
_images/notebooks_influence_grid_siegerts_32_5.png
_images/notebooks_influence_grid_siegerts_32_6.png

The increase of the grid extension does not seem to affect the quality of the Siegert states, apart from the lowest values of \(x_{max}\): the same resonant states are found, with the same virial expectation values. Only some “branches” see an increase of the density of states.

This is actually a very good point in favour of the use of resonant states instead of the continuum states: they correspond to physically meaningful states (they share low virial expectation values with the bound states and form complete basis sets) and increasing the grid extension has no effect on their numerical computation. Even though there is a need for a “buffer” zone at both ends of the grid for the smooth exterior complex scaling to be fully effective, there is no need to increase the grid extension to large values. It might even be more difficult to converge (with respect to \(x_{max}\)) a bound state of high energy than a resonant state.

Zooming around the zero energy gives:

[26]:
for basis in basissets_xmax:
    tit = "xmax = {:.2f}; npts = {}".format(-basis[0].grid[0], len(basis))
    basis.plot_energies(xlim=(-10, 50), ylim=(-7.5, 0.5), title=tit, show_unknown=True)
_images/notebooks_influence_grid_siegerts_34_0.png
_images/notebooks_influence_grid_siegerts_34_1.png
_images/notebooks_influence_grid_siegerts_34_2.png
_images/notebooks_influence_grid_siegerts_34_3.png
_images/notebooks_influence_grid_siegerts_34_4.png
_images/notebooks_influence_grid_siegerts_34_5.png
_images/notebooks_influence_grid_siegerts_34_6.png

The last plots further show how the increase of the grid extension does not affect the numerical Siegert states.

Conclusion

The grid step is a very important parameter: it allows one to reach resonant states of higher energy (just as it extends the energy range of the continuum states in the absence of coordinate mapping) and it also decreases the virial operator expectation values, meaning that the numerical Siegert states are more easily discriminated from the rest of the states (and are of better quality, with lower virial values).

While the conclusions for the grid step are not very different from the case without a coordinate mapping, increasing the grid extension does not affect the resonant states as it does the continuum states: once the grid extension is large enough to discriminate the Siegert states from the rest of the states, there is no need to keep increasing it. The only limitation is the need for a buffer zone at both ends of the grid to let the coordinate mapping reach its asymptotic values. In practice, the requirement of such a buffer zone might not be dramatic, as converging the bound states of highest energy might anyway require larger grid extensions.

Other numerical potentials

Symbolic Woods-Saxon potential

The Square-Well Potential (SWP) used in all the previous notebooks can actually be considered as a Woods-Saxon Potential (WSP) with an infinitely large sharpness parameter.

In this notebook, we will therefore obtain the resonant states of multiple WS potentials with varying sharpness parameters and compare the obtained spectra with the one of the SW potential, which corresponds to an infinite sharpness parameter.
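For reference, a commonly used symmetric 1D Woods-Saxon form is \(V(x) = -V_0 / \left(1 + e^{\lambda(|x| - l/2)}\right)\), which tends to a square well of width \(l\) and depth \(V_0\) when the sharpness \(\lambda \to \infty\). The exact parametrization used by the WoodsSaxonPotential class may differ, so the snippet below is only a plain NumPy illustration of that limit, not a SiegPy call:

[ ]:
# Plain NumPy illustration of the sharp-potential limit (the actual
# WoodsSaxonPotential parametrization may differ slightly)
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-6, 6, 601)
l, V0 = np.sqrt(2)*np.pi, 10
for lbda in [2, 10, 50]:
    # Clip the exponent to avoid overflow warnings for sharp potentials
    V = -V0 / (1 + np.exp(np.clip(lbda * (np.abs(x) - l/2), -700, 700)))
    plt.plot(x, V, label=r"$\lambda$ = {}".format(lbda))
plt.plot(x, np.where(np.abs(x) < l/2, -V0, 0), "k--", label="square well")
plt.xlabel("$x$")
plt.ylabel("$V(x)$")
plt.legend()
plt.show()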

Initialization

Some more matplotlib-related lines are required because a plotting function will be defined at the end of the notebook. The class describing the Woods-Saxon potential is also imported from SiegPy.

[1]:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
from matplotlib import cm
from siegpy import (SWPotential, Hamiltonian, BasisSet,
                    ErfKGCoordMap, WoodsSaxonPotential)
Smooth potential

The first Woods-Saxon potential has the smallest sharpness parameter considered in this notebook, and is discretized over a grid.

[2]:
# Define a grid
xmax = 9.0
xgrid = np.linspace(-xmax, xmax, 1201)
# Define the parameters of the first Woods-Saxon potential
V0 = 10
l = np.sqrt(2)*np.pi
lbda1 = 4
# Definition of the first Woods-Saxon potential
sym_pot_ws1 = WoodsSaxonPotential(l, V0, lbda1, grid=xgrid)
sym_pot_ws1.plot()
_images/notebooks_woods-saxon_potential_7_0.png

This potential is then used to define a Hamiltonian that is finally solved.

[3]:
# Definition of the Hamiltonian to find the resonant states of this potential
cm_ws = ErfKGCoordMap(np.pi/4, xmax-1.5, 1.0)
ham_ws1 = Hamiltonian(sym_pot_ws1, cm_ws)
basis_ws1 = ham_ws1.solve(max_virial=10**(-3))

Let us finally look at some plots:

[4]:
# Definition of all the parameters to get consistent plots for
# all the potentials considered throughout this notebook
re_e_max = 21
im_e_min = -13
im_k_max = np.sqrt(2*V0)
re_k_max = np.sqrt(2*re_e_max)
re_e_lim = (-V0, re_e_max)
im_e_lim = (im_e_min, -im_e_min/30)
re_k_lim = (-re_k_max/30, re_k_max)
im_k_lim = (-im_k_max, im_k_max)
# Plot the energies, wavenumbers and some eigenstates of the potential
basis_ws1.plot_energies(xlim=re_e_lim, ylim=im_e_lim)
basis_ws1.plot_wavenumbers(xlim=re_k_lim, ylim=im_k_lim)
basis_ws1.plot_wavefunctions(nstates=7)
_images/notebooks_woods-saxon_potential_11_0.png
_images/notebooks_woods-saxon_potential_11_1.png
_images/notebooks_woods-saxon_potential_11_2.png

Bound states are found, and they seem rather similar to the ones found for the Square-Well potential, with low virial values. On the other hand, the resonant states are more difficult to find because the imaginary parts of their energies are large (in absolute value), so that large complex scaling angles are required. It is therefore not surprising that they have rather high virial values, not so far away from those of the rotated continuum states.
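Relating this to the critical angle introduced for the Square-Well potential, one can estimate \(\theta_c = \arctan(|E_I/E_R|)/2\) for the resonant states that were actually classified (a small sketch; if no resonant state passed the virial threshold, the loop prints nothing):

[ ]:
# Critical angles of the resonant states found for this smooth potential
for i in range(len(basis_ws1.resonants)):
    en = basis_ws1.resonants[i].energy
    theta_c = np.arctan(abs(en.imag / en.real)) / 2
    print("Resonant state n°{}: theta_c = {:.3f}".format(i+1, theta_c))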

Sharper potential

Let us now consider the case of a sharper potential:

[5]:
# A larger sharpness parameter gives a sharper potential.
# All the other parameters are the same as in the previous case.
lbda2 = 10
sym_pot_ws2 = WoodsSaxonPotential(l, V0, lbda2, grid=xgrid)
sym_pot_ws2.plot()
_images/notebooks_woods-saxon_potential_15_0.png

The new potential is used to define another Hamiltonian, that is finally solved:

[6]:
ham_ws2 = Hamiltonian(sym_pot_ws2, cm_ws)
basis_ws2 = ham_ws2.solve(max_virial=10**(-6))

Let us look at some plots:

[7]:
basis_ws2.plot_energies(xlim=re_e_lim, ylim=im_e_lim)
basis_ws2.plot_wavenumbers(xlim=re_k_lim, ylim=im_k_lim)
basis_ws2.plot_wavefunctions(nres=0)
_images/notebooks_woods-saxon_potential_19_0.png
_images/notebooks_woods-saxon_potential_19_1.png
_images/notebooks_woods-saxon_potential_19_2.png

This time, the resonant states are more easily told apart from the rest of the eigenstates: the imaginary part of their energy is closer to the \(\text{Im}[E] = 0\) axis, and a lower complex scaling angle than in the previous case would therefore be sufficient. Note that the virial values of the resonant states are also smaller than in the previous case.

Sharp potential

The last Woods-Saxon that will be considered will be even sharper:

[8]:
# The sharpness parameter is increased again
lbda3 = 50
sym_pot_ws3 = WoodsSaxonPotential(l, V0, lbda3, grid=xgrid)
sym_pot_ws3.plot()
_images/notebooks_woods-saxon_potential_23_0.png

A new Hamiltonian is created, and it is solved again, allowing the production of more plots:

[9]:
ham_ws3 = Hamiltonian(sym_pot_ws3, cm_ws)
basis_ws3 = ham_ws3.solve(max_virial=10**(-6))
[10]:
basis_ws3.plot_energies(xlim=re_e_lim, ylim=im_e_lim)
basis_ws3.plot_wavenumbers(xlim=re_k_lim, ylim=im_k_lim)
basis_ws3.plot_wavefunctions(nres=0)
_images/notebooks_woods-saxon_potential_26_0.png
_images/notebooks_woods-saxon_potential_26_1.png
_images/notebooks_woods-saxon_potential_26_2.png

Again, it can be seen that the resonant states of this sharp potential have a smaller imaginary part of the energy (in absolute value) than those of the previous two potentials. They also have lower virial values.

Square-Well Potential

Let us now study the case of the Square-Well potential that is the limiting case of the Woods-Saxon potentials presented above, in the limit where the sharpness parameter is infinitely large:

[11]:
# The same width, depth and grid are used
sw_pot = SWPotential(l, V0, grid=xgrid)
ham_sw = Hamiltonian(sw_pot, cm_ws)
basis_sw = ham_sw.solve(max_virial=10**(-6))
[12]:
basis_sw.plot_energies(xlim=re_e_lim, ylim=im_e_lim)
basis_sw.plot_wavenumbers(xlim=re_k_lim, ylim=im_k_lim)
basis_sw.plot_wavefunctions(nres=0)
_images/notebooks_woods-saxon_potential_31_0.png
_images/notebooks_woods-saxon_potential_31_1.png
_images/notebooks_woods-saxon_potential_31_2.png

This is not different from all the plots obtained in the previous notebooks. The states that are obtained seem close to those of the sharpest Woods-Saxon potential considered, which is not that surprising. One detail still deserves attention: the virial values are much higher in this case.

Comparison of all the results
Siegert states and virial values

In order to plot only the bound and resonant states of all the basis sets while using only one scale for the virial, a new function has to be defined:

[13]:
def scatter_plot(data_kwd, basissets, xlim=None, ylim=None, title=None,
                 file_save=None, show_unknown=False):
    # Object-oriented plots
    fig = plt.figure()
    ax = fig.add_subplot(111)
    # Add the real and imaginary axes
    ax.axhline(0, color='black', lw=1)  # black line y=0
    ax.axvline(0, color='black', lw=1)  # black line x=0
    # Define the min and max values of the virial, in order to
    # use the same normalized scale for the virial values.
    allstates = BasisSet()
    for b in basissets:
        allstates += b
    vmin = np.min(np.log10(allstates.virials))
    vmax = np.max(np.log10(allstates.virials))
    norm = matplotlib.colors.Normalize(vmin=vmin, vmax=vmax)
    # Define the markers used for each basis set (not more than four
    # basis sets can be considered ; you can still add more markers here,
    # but the plot might become difficult to read)
    markers = ["o", "s", "d", "^"]
    # Loop over the basis sets to plot them
    for i, b in enumerate(basissets):
        # Select which states have to be plotted and store their virial
        to_plot = b.bounds + b.resonants + b.continuum
        if show_unknown is True or len(b.unknown) == len(b):
            to_plot += b.unknown
        virials = np.log10(to_plot.virials[::-1])
        # Set the x-axis label and get the data to be plotted, according
        # to the value of data_kwd.
        if data_kwd == 'wavenumbers':
            data_label = 'k'
            plot_data = to_plot.wavenumbers[::-1]
        elif data_kwd == 'energies':
            data_label = 'E'
            plot_data = to_plot.energies[::-1]
        # Else: raise a value error
        else:
            raise ValueError("data_kwd must be equal to 'wavenumbers' "
                             "or 'energies' (here '{}')".format(data_kwd))
        # Plot all the data
        if len(plot_data) != 0:
            # normalized scale for the colors
            c = plt.cm.CMRmap(norm(virials))
            ax.scatter(np.real(plot_data), np.imag(plot_data), c=c,
                       marker=markers[i], s=30, label="$V_{}$".format(i+1))
    # Set the colorbar and its title
    m = cm.ScalarMappable(cmap=plt.cm.CMRmap, norm=norm)
    m.set_array([])
    plt.colorbar(m, label="$log_{10}$(|virial|)")
    # Set axis labels and range and the plot title
    ax.set_xlabel("Re[${}$]".format(data_label))
    ax.set_ylabel("Im[${}$]".format(data_label))
    if xlim is not None:
        ax.set_xlim(xlim)
    ax.set_xticklabels(ax.get_xticks())
    if ylim is not None:
        ax.set_ylim(ylim)
    ax.set_yticklabels(ax.get_yticks())
    if title is not None:
        ax.set_title(title)
    plt.legend()
    # Save the plot and show it
    if file_save is not None:
        fig.savefig(file_save)
    plt.show()

This new function is finally used to plot the most interesting states:

[14]:
data_kwd = 'energies'
# Uncomment the next line to plot the wavenumbers
# data_kwd = 'wavenumbers'
if data_kwd == 'energies':
    re_lim = re_e_lim
    im_lim = im_e_lim
elif data_kwd == 'wavenumbers':
    re_lim = re_k_lim
    im_lim = im_k_lim
basissets = [basis_ws1, basis_ws2, basis_ws3, basis_sw]
scatter_plot(data_kwd, basissets, show_unknown=False,
             xlim=re_lim, ylim=im_lim)
_images/notebooks_woods-saxon_potential_38_0.png

\(V_1\) (resp. \(V_2\) and \(V_3\)) corresponds to the first (resp. second and third) Woods-Saxon potential, while \(V_4\) corresponds to the SW potential. The trend for the resonant states is clearly visible, along with the fact that the virial values of the SWP are smaller than those of the sharpest Woods-Saxon potential, while their energies almost coincide. It is also clear that the resonant states of the smoothest potential are more difficult to separate from the rest of the rotated continuum states.

From there, it is clear that the smoother the Woods-Saxon potential, the higher the absolute value of the imaginary part of the resonant state energies: this means that it will be more difficult to identify the resonant states of smooth potentials, and that these resonant states also have very short lifetimes (the lifetime being inversely proportional to the absolute value of the imaginary part of the resonant state energy).
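More precisely, with the usual convention \(E = E_R - i\Gamma/2\) for a resonance, the lifetime reads \(\tau = 1/\Gamma = 1/(2|E_I|)\) in atomic units, hence the shorter lifetimes for the resonances of the smooth potentials, whose energies lie deeper in the lower half of the complex plane.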

Focus on the bound states

Let us focus on the bound states energies found for all the potentials studied above. It will also be interesting to compare with the analytical results for the 1D Square-Well potential:

[15]:
# Find the analytical Siegert states of the SWP (here, read from a file)
exact_siegerts = BasisSet.from_file("siegerts.dat")

The energies of the bound states for each basis set (i.e., for each potential) are represented in the following plot:

[16]:
# Plot the bound states
basissets = [basis_ws1, basis_ws2, basis_ws3, basis_sw, exact_siegerts]
for i, basis in enumerate(basissets):
    bounds = BasisSet(states=basis.bounds[:7])
    plt.scatter([i+1]*len(bounds), np.real(bounds.energies), marker="_", s=2500)
plt.xlim(0.5, len(basissets)+0.5)
plt.xlabel("Potential $V_i$")
plt.ylabel("Re[Energy]")
plt.show()
_images/notebooks_woods-saxon_potential_44_0.png

In this plot, \(V_5\) corresponds to the analytical SWP case. It is worth noting that the smoother the potential, the more compact its spectrum of bound states: the low energy bound states tend to lie higher than those of the sharper potentials (because the potential width is smaller in this energy range), while the very high energy bound states tend to be more bound (because the potential has a larger width in this energy range). We also see that the results converge to those of the analytical SW potential.

Another way to look at the energies of the bound states found for each potential is to print them in a table. This is done below:

[17]:
# Maximal number of states
i_max = 7
# Print the header of the table
header = " phi  |"
for i in range(len(basissets)):
    header += "|       V{}      ".format(i+1)
print(header)
print("-"*len(header))
# Print the table
energies = np.array([[en for en in np.sort((basis.siegerts).energies[:i_max])]
                     for basis in basissets])
for i in range(i_max):
    line = "phi_{} |".format(i+1)
    for en in energies.T[i]:
        line += "| {: .3f} ".format(en)
    print(line)
 phi  ||       V1      |       V2      |       V3      |       V4      |       V5
---------------------------------------------------------------------------------------
phi_1 || -9.674-0.000j | -9.774-0.000j | -9.793+0.000j | -9.795+0.000j | -9.794+0.000j
phi_2 || -8.795-0.000j | -9.103+0.000j | -9.173+0.000j | -9.181-0.000j | -9.177+0.000j
phi_3 || -7.519-0.000j | -8.007+0.000j | -8.147+0.000j | -8.163-0.000j | -8.154+0.000j
phi_4 || -5.960+0.000j | -6.523-0.000j | -6.725-0.000j | -6.753-0.000j | -6.737+0.000j
phi_5 || -4.224+0.000j | -4.702-0.000j | -4.930-0.000j | -4.970-0.000j | -4.945+0.000j
phi_6 || -2.435-0.000j | -2.635+0.000j | -2.811+0.000j | -2.859+0.000j | -2.826+0.000j
phi_7 || -0.809+0.000j | -0.555-0.000j | -0.539-0.000j | -0.581-0.000j | -0.545+0.000j
Conclusions

We presented the use of the Woods-Saxon potential in SiegPy and compared the Siegert states obtained when varying the sharpness of the potential, with the Square-Well potential case as a limit of an infinite sharpness parameter.

Obtaining the numerical resonant states of the smoothest potential proves to be more difficult because their energies have imaginary parts that are larger (in absolute value). This means that large complex scaling angles have to be used, and the virial values obtained remain rather high compared to those of sharper potentials. However, if the potential gets too sharp, the virial values increase again. It was also possible to see how the numerical bound states converge to the ones of the SW potential when the sharpness parameter is increased.

Multiple Gaussian potentials

In this notebook, we will present another type of potentials, made of Gaussian functions. There will be two or four Gaussian functions used to define each potential.

The initial potential to be studied in this notebook is made of two Gaussian wells, and we will see the influence of two parameters on the obtained spectra:

  • the addition of two Gaussian barriers (one on each side of the two initial Gaussian wells)
  • the distance between the two Gaussian wells
Initialization

The same initialization is required as for the notebook about the Woods-Saxon potential, the only difference being that two other classes of symbolic potentials are imported: TwoGaussianPotential and FourGaussianPotential.

[1]:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
from matplotlib import cm
from siegpy import (Hamiltonian, ErfKGCoordMap, BasisSet,
                    TwoGaussianPotential, FourGaussianPotential)
Two identical Gaussian wells

The first potential is made of two identical Gaussian wells that are not too broadly separated:

[2]:
# Define a grid
xmax = 11.5
xgrid = np.linspace(-xmax, xmax, 1201)
# Define a depth, sigma and center for the Gaussian potential
sigma = 0.4
xc = 2
V0 = 10
# Define a Potential with two Gaussian functions
sym_pot1 = TwoGaussianPotential(sigma, -xc, -V0, sigma, xc, -V0, grid=xgrid)
sym_pot1.plot()
_images/notebooks_gaussian_potentials_6_0.png

The depth and sigma defined here will be used throughout this notebook.

The potential is then used to define a Hamiltonian, that is finally solved in order to create a basis set:

[3]:
cm_g = ErfKGCoordMap(0.6, 10.0, 1.0)
ham1 = Hamiltonian(sym_pot1, cm_g)
basis1 = ham1.solve(max_virial=10**(-6))

Finally, let us show some plots:

[4]:
# Definition of all the parameters to get consistent plots for
# all the potentials considered throughout this notebook
re_e_max = 15
im_e_min = -6
im_k_max = np.sqrt(2*V0)
re_k_max = np.sqrt(2*re_e_max)
re_e_lim = (-V0, re_e_max)
im_e_lim = (im_e_min, -im_e_min/30)
re_k_lim = (-re_k_max/30, re_k_max)
im_k_lim = (-im_k_max, im_k_max)
# Plot the energies, wavenumbers and some eigenstates of the potential
basis1.plot_energies(xlim=re_e_lim, ylim=im_e_lim)
basis1.plot_wavenumbers(xlim=re_k_lim, ylim=im_k_lim)
basis1.plot_wavefunctions(nres=4, xlim=(-4, 4), ylim=(-V0-1, 5))
_images/notebooks_gaussian_potentials_11_0.png
_images/notebooks_gaussian_potentials_11_1.png
_images/notebooks_gaussian_potentials_11_2.png

Both bound and resonant states are found and can be separated from the rest of the states, even if the virial values of the resonant states are rather high.

Note that the bound states actually are degenerate: the separation between the two Gaussian wells is large enough to avoid a large overlap between the bound state wavefunctions of each well. It is easy to see that, by using simple linear combinations (addition and subtraction) of these degenerate bound state wavefunctions of a given energy, one gets the bound states of each individual Gaussian well.
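As a toy illustration of that last point (plain NumPy only, with normalized Gaussians standing in for the single-well bound state wavefunctions rather than the actual SiegPy eigenstates), the symmetric and antisymmetric combinations play the role of the degenerate eigenstates, and adding or subtracting them recovers the states localized in each well:

[ ]:
# Toy check: localizing the degenerate "eigenstates" by linear combination
x = np.linspace(-6, 6, 601)
g_left = np.exp(-(x + 2)**2 / (2 * 0.4**2))   # left-well bound state stand-in
g_right = np.exp(-(x - 2)**2 / (2 * 0.4**2))  # right-well bound state stand-in
s = (g_left + g_right) / np.sqrt(2)  # symmetric "eigenstate"
a = (g_left - g_right) / np.sqrt(2)  # antisymmetric "eigenstate"
print(np.allclose((s + a) / np.sqrt(2), g_left))   # True
print(np.allclose((s - a) / np.sqrt(2), g_right))  # True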

Some resonant states can also be separated from the rotated continuum states thanks to their virial values, even though only a few of them are found in this case. The first resonant state has a rather long lifetime (proportional to the inverse of the absolute value of the imaginary part of its energy), while the next resonant states are associated with decreasing lifetimes.

It is worth noting that the resonant state wavefunctions have anti-nodes centered between both Gaussian wells, and that the number of anti-nodes in this region keeps increasing, starting from one anti-node for the first resonant state. This is more clearly presented below:

[5]:
# Sort the resonant states by increasing real energy
res = basis1.resonants.states
res.sort(key=lambda x: x.energy.real)
# Plot each one of them
for i, state in enumerate(res):
    xmax = 3.5; ymax = 1.5
    tit = "Resonant state n˚{}".format(i+1)
    state.plot(title=tit, xlim=(-xmax, xmax), ylim=(-ymax, ymax))
_images/notebooks_gaussian_potentials_13_0.png
_images/notebooks_gaussian_potentials_13_1.png
_images/notebooks_gaussian_potentials_13_2.png
_images/notebooks_gaussian_potentials_13_3.png
_images/notebooks_gaussian_potentials_13_4.png
_images/notebooks_gaussian_potentials_13_5.png
Increase of the separation between both potentials

Let us consider a similar case with the same Gaussian wells, the main difference being that they are separated by a larger distance, which should lead to even less overlap between the bound states of each well:

[6]:
# Center of the new Gaussian wells
xc2 = 4
# Definition of the new potential
sym_pot2 = TwoGaussianPotential(sigma, -xc2, -V0, sigma, xc2, -V0, grid=xgrid)
sym_pot2.plot()
_images/notebooks_gaussian_potentials_16_0.png

As usual, this potential is used to define a Hamiltonian, which is in turn solved, so that a basis set is created:

[7]:
ham2 = Hamiltonian(sym_pot2, cm_g)
basis2 = ham2.solve(max_virial=10**(-8.7))

The same plots are done for this new potential:

[8]:
basis2.plot_energies(xlim=re_e_lim, ylim=im_e_lim)
basis2.plot_wavenumbers(xlim=re_k_lim, ylim=im_k_lim)
basis2.plot_wavefunctions(nres=0)
_images/notebooks_gaussian_potentials_20_0.png
_images/notebooks_gaussian_potentials_20_1.png
_images/notebooks_gaussian_potentials_20_2.png

The spectrum of bound states is the same as in the previous case, while the spectrum of resonant states is very different: the number of resonant states in the same energy range is larger and they are associated with longer lifetimes. This shows that it is easier to obtain low energy resonant states when the distance between the two potential wells is increased. This effect is actually similar to what is known for the bound states of a potential well of increasing width: the energies of the low energy bound states tend to get closer to each other. The same phenomenon occurs here for the resonant states: it is as if they were trapped between the two potential wells, which effectively act as a single potential well for the resonant states.

The resonant wavefunctions follow the same pattern as the one found for the previous potential, with an increasing number of anti-nodes between both wells for the resonant states of increasing energy:

[9]:
# Sort the resonant states by increasing real energy
res = basis2.resonants.states
res.sort(key=lambda x: x.energy.real)
# Plot each one of them
for i, state in enumerate(res[:6]):
    xmax = 6.5; ymax = 1.5
    tit = "Resonant state n˚{}".format(i+1)
    state.plot(title=tit, xlim=(-xmax, xmax), ylim=(-ymax, ymax))
_images/notebooks_gaussian_potentials_22_0.png
_images/notebooks_gaussian_potentials_22_1.png
_images/notebooks_gaussian_potentials_22_2.png
_images/notebooks_gaussian_potentials_22_3.png
_images/notebooks_gaussian_potentials_22_4.png
_images/notebooks_gaussian_potentials_22_5.png
Gaussian barriers around the two Gaussian wells

Let us finally consider a last case where a potential barrier is added on each side of the initial Gaussian wells. These barriers are Gaussian functions that have the same center as the Gaussian wells used to define the previous potential:

[10]:
# Height of the extra Gaussian barriers
V1 = 0.2 * V0
# Definition of the potential
sym_pot3 = FourGaussianPotential(sigma, -xc2, V1, sigma, -xc, -V0,
                                 sigma, xc, -V0, sigma, xc2, V1, grid=xgrid)
sym_pot3.plot()
_images/notebooks_gaussian_potentials_25_0.png

A Hamiltonian is created from this potential, and solved to give a basis set:

[11]:
ham3 = Hamiltonian(sym_pot3, cm_g)
basis3 = ham3.solve(max_virial=10**(-9))

The basis set is then used to create some plots:

[12]:
basis3.plot_energies(xlim=re_e_lim, ylim=im_e_lim)
basis3.plot_wavenumbers(xlim=re_k_lim, ylim=im_k_lim)
basis3.plot_wavefunctions(nres=7, ylim=(-11, 7.5))
_images/notebooks_gaussian_potentials_29_0.png
_images/notebooks_gaussian_potentials_29_1.png
_images/notebooks_gaussian_potentials_29_2.png

Again, the bound states and the resonant states can easily be separated from the rest of the states. The virial values are even larger for the resonant states. Again, the bound states are degenerate, and they are the same as the ones found for the other potentials, which have no barriers.

The first three resonant states can be considered as quasi-bound, in the sense that their lifetime is very large and their energy lies below the maximum of the Gaussian barriers. Also note how their wavefunctions do not tend to diverge, especially compared to the other resonant states of higher energy, which are nothing but the same type of resonant states found for the other potentials. Also note how the number of anti-nodes of the resonant state wavefunctions still increases with the resonant energy: the presence of the barriers does not affect this property of the resonant states, whether they are quasi-bound or not. The first resonant state wavefunctions are presented below:

[13]:
# Sort the resonant states by increasing real energy
res = basis3.resonants.states
res.sort(key=lambda x: x.energy.real)
# Plot each one of them
for i, state in enumerate(res[:6]):
    xmax = 9.5; ymax = 1.5
    tit = "Resonant state n˚{}".format(i+1)
    state.plot(title=tit, xlim=(-xmax, xmax), ylim=(-ymax, ymax))
_images/notebooks_gaussian_potentials_31_0.png
_images/notebooks_gaussian_potentials_31_1.png
_images/notebooks_gaussian_potentials_31_2.png
_images/notebooks_gaussian_potentials_31_3.png
_images/notebooks_gaussian_potentials_31_4.png
_images/notebooks_gaussian_potentials_31_5.png
Comparison of the eigenstates

To get a clearer view of the spectra obtained with the various potentials, the same function as in the Woods-Saxon potential case is defined:

[14]:
def scatter_plot(data_kwd, basissets, xlim=None, ylim=None, title=None,
                 file_save=None, show_unknown=False):
    # Object-oriented plots
    fig = plt.figure()
    ax = fig.add_subplot(111)
    # Add the real and imaginary axes
    ax.axhline(0, color='black', lw=1)  # black line y=0
    ax.axvline(0, color='black', lw=1)  # black line x=0
    # Define the min and max values of the virial, in order to
    # use the same normalized scale for the virial values.
    allstates = BasisSet()
    for b in basissets:
        allstates += b
    vmin = np.min(np.log10(allstates.virials))
    vmax = np.max(np.log10(allstates.virials))
    norm = matplotlib.colors.Normalize(vmin=vmin, vmax=vmax)
    # Define the markers used for each basis set (not more than four
    # basis sets can be considered; you can still add more markers here,
    # but the plot might become difficult to read)
    markers = ["o", "s", "d", "^"]
    # Loop over the basis sets to plot them
    for i, b in enumerate(basissets):
        # Select which states have to be plotted and store their virial
        to_plot = b.bounds + b.resonants + b.continuum
        if show_unknown is True or len(b.unknown) == len(b):
            to_plot += b.unknown
        virials = np.log10(to_plot.virials[::-1])
        # Set the x-axis label and get the data to be plotted, according
        # to the value of data_kwd.
        if data_kwd == 'wavenumbers':
            data_label = 'k'
            plot_data = to_plot.wavenumbers[::-1]
        elif data_kwd == 'energies':
            data_label = 'E'
            plot_data = to_plot.energies[::-1]
        # Else: raise a value error
        else:
            raise ValueError("data_kwd must be equal to 'wavenumbers' "
                             "or 'energies' (here '{}')".format(data_kwd))
        # Plot all the data
        if plot_data != []:
            # normalized scale for the colors
            c = plt.cm.CMRmap(norm(virials))
            ax.scatter(np.real(plot_data), np.imag(plot_data), c=c,
                       marker=markers[i], s=30, label="$V_{}$".format(i+1))
    # Set the colorbar and its title
    m = cm.ScalarMappable(cmap=plt.cm.CMRmap, norm=norm)
    m.set_array([])
    plt.colorbar(m, label="$log_{10}$(|virial|)")
    # Set axis labels and range and the plot title
    ax.set_xlabel("Re[${}$]".format(data_label))
    ax.set_ylabel("Im[${}$]".format(data_label))
    if xlim is not None:
        ax.set_xlim(xlim)
    ax.set_xticklabels(ax.get_xticks())
    if ylim is not None:
        ax.set_ylim(ylim)
    ax.set_yticklabels(ax.get_yticks())
    if title is not None:
        ax.set_title(title)
    # Save the plot and show it
    if file_save is not None:
        fig.savefig(file_save)
    plt.legend()
    plt.show()

It is finally used to plot the Siegert states of all the potentials, using the same scale of virial values for each of them:

[15]:
data_kwd = 'energies'
# Uncomment the next line to plot the wavenumbers
# data_kwd = 'wavenumbers'
if data_kwd == 'energies':
    re_lim = re_e_lim
    im_lim = im_e_lim
elif data_kwd == 'wavenumbers':
    re_lim = re_k_lim
    im_lim = im_k_lim
basissets = [basis1, basis2, basis3]
scatter_plot(data_kwd, basissets, show_unknown=False,
             xlim=re_lim, ylim=im_lim)
_images/notebooks_gaussian_potentials_36_0.png

This shows that the spectrum of resonant states can vary greatly while the bound states remain almost unchanged. These differences can be caused by the distance between the two wells (which tends to stabilize more resonant states close to the threshold energy \(E=0\)) or by the presence of barriers (which creates quasi-bound states, i.e. very stable resonant states).

The energy of the lowest Siegert states is also presented in the following table:

[16]:
# Print the header of the table
header = "   phi   |"
for i in range(len(basissets)):
    header += "|       V{}      ".format(i+1)
print(header)
print("-"*len(header))
# Print the table:
# - first, the bound states
bnd_energies = np.array([[en for en in np.sort(basis.bounds.energies)]
                         for basis in basissets])
for i in range(len(bnd_energies.T)):
    line = " phi_b_{} |".format(i+1)
    for en in bnd_energies.T[i]:
        line += "| {: .3f} ".format(en)
    print(line)
# - then, the resonant states
n_res = 5
res_energies = np.array([[en for en in np.sort(basis.resonants.energies[:n_res])]
                         for basis in basissets])
for i in range(n_res):
    line = " phi_r_{} |".format(i+1)
    for en in res_energies.T[i]:
        line += "| {: .3f} ".format(en)
    print(line)
   phi   ||       V1      |       V2      |       V3
----------------------------------------------------------
 phi_b_1 || -6.636+0.000j | -6.636+0.000j | -6.635+0.000j
 phi_b_2 || -6.636-0.000j | -6.636+0.000j | -6.635+0.000j
 phi_b_3 || -1.404-0.000j | -1.391+0.000j | -1.376+0.000j
 phi_b_4 || -1.377-0.000j | -1.391+0.000j | -1.347+0.000j
 phi_r_1 ||  0.381-0.156j |  1.376-0.395j |  1.149-0.035j
 phi_r_2 ||  1.270-0.833j |  2.074-0.688j |  1.775-0.122j
 phi_r_3 ||  2.439-1.802j |  2.837-1.038j |  2.627-0.286j
 phi_r_4 ||  3.922-2.764j |  3.651-1.373j |  3.661-0.568j
 phi_r_5 ||  5.872-3.810j |  4.587-1.642j |  4.809-0.959j

Defining a new symbolic potential

In this notebook, we’ll focus on the possibility of easily studying other symbolic potentials rather than on the physical information gained from the presented results. This is made possible thanks to the SymbolicPotential class, which is actually the base class for the WoodsSaxonPotential, TwoGaussianPotential and FourGaussianPotential classes.

Initialization

The SymbolicPotential class is imported from SiegPy:

[1]:
import numpy as np
import matplotlib.pyplot as plt
from siegpy import SymbolicPotential, Hamiltonian, UniformCoordMap
A simple example

This potential is of the form \(-\frac{a}{x^2+b}\). The simplest way of initializing a symbolic potential is to define a string where "x" is the only variable (all the parameters must be evaluated in the string). Another possibility is to directly use the sympy module to define a symbolic function (where x is the only allowed variable, or symbol).

[2]:
# Define a grid
xmax = 9.0
xgrid = np.linspace(-xmax, xmax, 1001)
# Define the symbolic function as a string
a = 1
b = 0.1
sym_func = "-{} / (x**2 + {})".format(a, b)
# Initialize the symbolic potential and plot it
pot = SymbolicPotential(sym_func, grid=xgrid)
pot.plot()
_images/notebooks_other_symbolic_potentials_7_0.png
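
As an illustration of the second possibility mentioned above, the same potential could also be built from a sympy expression (a minimal sketch, reusing the values of a and b defined in the previous cell; remember that x must be the only remaining symbol):

# Sketch: define the same potential from a sympy expression instead of a string
# (a and b are the Python numbers defined above, so x is the only symbol left)
from sympy.abc import x
sym_func_sympy = -a / (x**2 + b)
pot_sympy = SymbolicPotential(sym_func_sympy, grid=xgrid)
pot_sympy.plot()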

From there, the usual way of finding the eigenstates and plotting the results can be achieved:

[3]:
cm = UniformCoordMap(0)
ham = Hamiltonian(pot, cm)
basis = ham.solve()
[4]:
basis.plot_energies(xlim=(-10, 5), ylim=(-5, 1))
basis.plot_wavenumbers(xlim=(-1, 10), ylim=(-15, 1))
basis.plot_wavefunctions(nstates=3)
_images/notebooks_other_symbolic_potentials_10_0.png
_images/notebooks_other_symbolic_potentials_10_1.png
_images/notebooks_other_symbolic_potentials_10_2.png

Note how this potential has a bound state close to zero energy: this state might require a more extended grid to be considered converged, as sketched below.
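
A minimal sketch of such a convergence check is given below (the extended grid parameters are arbitrary): the problem is solved again on a larger grid and the energy of the least bound state is compared.

# Sketch: re-solve the same problem on a more extended grid to check the
# convergence of the weakly bound state (grid parameters are arbitrary)
xgrid_large = np.linspace(-2*xmax, 2*xmax, 2001)
pot_large = SymbolicPotential(sym_func, grid=xgrid_large)
ham_large = Hamiltonian(pot_large, UniformCoordMap(0))
basis_large = ham_large.solve()
# Compare the energy of the least bound state in both basis sets
print(max(basis.bounds.energies, key=lambda en: en.real))
print(max(basis_large.bounds.energies, key=lambda en: en.real))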

Strength function from numerical basis sets

This notebook shows how the strength function can be approximated from a basis set made of the eigenstates of a numerical Hamiltonian. Given that only the bound and resonant states are found numerically, it is not possible to compute the peak due to the bound and anti-bound states. Still, using the relations between the resonant and anti-resonant wavenumbers and wavefunctions, it is possible to build the contribution of the resonant couples to the strength function, as will be shown below.

Two main examples will be studied:

  • the first one concerns the 1D Square-Well potential, where a comparison with the analytic result is possible. We’ll see that the numerical and analytical results are in rather good agreement.
  • we’ll then proceed to the case of potentials made of two Gaussian wells, where we’ll see the effect of the distance between the two wells on the strength function.
Initialization
[1]:
import numpy as np
import matplotlib.pyplot as plt
from siegpy import (Hamiltonian, ErfKGCoordMap, SWPotential,
                    SWPBasisSet, Gaussian, TwoGaussianPotential,
                    BasisSet)
Square-Well potential case

Let us first start by finding the resonant states of a numerical Hamiltonian. The usual procedure is used:

[2]:
# Define a grid
xmax = 7.5
xgrid = np.linspace(-xmax, xmax, 501)
[3]:
# Define a potential, and discretize it over the grid
l = np.pi * np.sqrt(2)
V0 = 10
pot = SWPotential(l, V0, grid=xgrid)
[4]:
# Initialize a coordinate mapping
x0 = 6.0
lbda = 1.5
cm = ErfKGCoordMap(0.6, x0, lbda)
[5]:
# Initialize the Hamiltonian
ham = Hamiltonian(pot, cm)
[6]:
# Find the eigenstates of the Hamiltonian
# with a virial selection to separate the
# Siegert states from the other eigenstates
basis = ham.solve(max_virial=10**(-6))

Once the basis set is initialized, one can define a test function and compute the strength function based on the resonant couples on a given wavenumber grid:

[7]:
# Define the test function
xc = 0  # Center of the test function
sigma = l/20.  # Width of the Gaussian
gauss1 = Gaussian(sigma, xc, grid=xgrid)
[8]:
# Define the wavenumber grid for the strength function
step = 0.1  # Grid-step for plotting
k_max = 10  # Maximal wavenumber for plotting
kgrid = np.arange(step, k_max, step)

The same plot_strength_function method can be used, even though only the Mittag-Leffler Expansion of the strength function is available here, in contrast to the analytical case. The contribution of a given number of resonant couples can still be highlighted:

[9]:
# Plot the MLE strength function
basis.plot_strength_function(gauss1, kgrid, nres=5)
_images/notebooks_numerical_strength_function_13_0.png

Since the test function is centered in an even potential, the contribution of the odd resonant couples is obviously 0. The second and fourth resonant states are not odd, which is why they have a non-zero contribution.
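
This can be checked directly from the scalar products of the numerical resonant states with the test function (a small sketch using the basis set and test function defined above; the values corresponding to the odd resonant states should be numerically zero):

# Sketch: scalar products of the resonant states with the centered Gaussian;
# the odd resonant states should give (numerically) vanishing values
print(np.abs(basis.resonants.scal_prod(gauss1)))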

It is then possible to compare with the analytical result.

Remember the strength function is given by: \(S(k) = - \frac{1}{\pi} \Im \left\langle g | G(k) | g \right\rangle\)

where \(G\) is the Green’s function (or resolvent) of the Hamiltonian of the considered system and \(g\) is the test function. The exact Green’s function may be written as follows:

\(G(k) = \sum_b \frac{\left| \varphi_b \right\rangle \left\langle \varphi_b \right|}{k^2/2 - {k_b}^2/2} + \sum_{p = \pm} \int_0^{+\infty} \text{d} k_1 \frac{\left| \varphi_p \right\rangle \left\langle \varphi_p \right|}{k^2/2 - {k_1}^2/2}\)

and approximated by: \(G_{MLE}(k) = \sum_{S = a, b, c, d} \frac{\left| \varphi_S \right) \left( \varphi_S \right|}{k_S (k - k_S)}\).

This means that the numerical strength function depends on both the quality of the scalar product with the test function and on the position of the wavenumbers in the complex wavenumber plane. To make sure that the errors are not due to an insufficiently dense discretization grid, a basis set using the non-analytical scalar products for the analytical Siegert states is created (i.e., analytic = False), along with the usual basis set using the analytical scalar product formulas with the test functions (i.e., analytic = True).

[10]:
# Read the analytical basis set made of Siegert states
siegerts = SWPBasisSet.from_file("siegerts.dat", nres=20)
siegerts_not_an = SWPBasisSet.from_file("siegerts.dat", nres=20, analytic=False, grid=xgrid)

First, we can plot the exact contribution of the resonant couples only, using the analytical scalar product:

[11]:
res_couples = siegerts.resonants + siegerts.antiresonants
res_couples.plot_strength_function(gauss1, kgrid, exact=False, nres=5)
_images/notebooks_numerical_strength_function_18_0.png

The numerical results seem to reproduce the exact ones. To get a clearer view, let us plot both results in the same figure, along with the exact result including the bound and anti-bound states and the exact result obtained with analytic = False.

[12]:
# Compute the various strength functions
sf = basis.MLE_strength_function(gauss1, kgrid)
exact_sf = siegerts.MLE_strength_function(gauss1, kgrid)
exact_res_sf = res_couples.MLE_strength_function(gauss1, kgrid)
res_couples_not_an = siegerts_not_an.resonants + siegerts_not_an.antiresonants
exact_res_sf_not_an = res_couples_not_an.MLE_strength_function(gauss1, kgrid)
# Plot them in the same figure
plt.plot(kgrid, sf, label="Numerical")
plt.plot(kgrid, exact_sf, label="Exact (with b/ab)")
plt.plot(kgrid, exact_res_sf, label="Exact", lw=4)
plt.plot(kgrid, exact_res_sf_not_an, label="Exact (non-an. scal. prod.)")
plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
plt.show()
_images/notebooks_numerical_strength_function_20_0.png

The peak corresponding to the bound and anti-bound states is missing, but a very good agreement is still obtained for all the other peaks. We also see that the discretization grid is dense enough to give accurate non-analytic scalar products (the green and red curves are superimposed): the error is mainly due to the fact that the numerical Siegert state wavenumbers and wavefunctions do not exactly correspond to the analytical ones.

Off-centered test function

If the same Gaussian is off-centered, the peaks associated with the odd resonant couples now show up. This means that there are more features in the spectrum:

[13]:
# Define the off-centered test function
xc2 = l/8
gauss2 = Gaussian(sigma, xc2, grid=xgrid)
[14]:
# Numerical MLE of the strength function
basis.plot_strength_function(gauss2, kgrid, nres=5)
_images/notebooks_numerical_strength_function_24_0.png
[15]:
# Analytical strength function (except for bound and anti-bound peak)
res_couples = siegerts.resonants + siegerts.antiresonants
res_couples.plot_strength_function(gauss2, kgrid, exact=False, nres=5)
_images/notebooks_numerical_strength_function_25_0.png
[16]:
# Compute the various strength functions
sf = basis.MLE_strength_function(gauss2, kgrid)
exact_sf = siegerts.MLE_strength_function(gauss2, kgrid)
exact_res_sf = res_couples.MLE_strength_function(gauss2, kgrid)
res_couples_not_an = siegerts_not_an.resonants + siegerts_not_an.antiresonants
exact_res_sf_not_an = res_couples_not_an.MLE_strength_function(gauss2, kgrid)
# Plot them in the same figure
plt.plot(kgrid, sf, label="Numerical")
plt.plot(kgrid, exact_sf, label="Exact (with b/ab)")
plt.plot(kgrid, exact_res_sf, label="Exact", lw=4)
plt.plot(kgrid, exact_res_sf_not_an, label="Exact (non-an. scal. prod.)")
plt.legend(loc=6, bbox_to_anchor=(1, 0.5))
plt.show()
_images/notebooks_numerical_strength_function_26_0.png

The conclusion drawn for the other test function still holds: the numerical strength function reproduces the exact one! It is possible to gain insight into the strength function by using only a few numerical Siegert states.

Two Gaussian functions potential

Given the previous section, it can confidently be said that the numerical Siegert states give high-quality results for the strength function. New potentials can therefore be safely studied: let us focus on the potential made of two Gaussian wells.

The influence of the distance between the two potential wells will be studied: the strength function will be presented for two cases, using the same test function. The first step therefore consists in obtaining these strength functions, following the same procedure as above for both potentials.

Gaussian wells close to each other

Let us first compute the strength function for the case of two Gaussian wells close to each other:

[17]:
# Define a grid
xmax = 11.5
xgrid = np.linspace(-xmax, xmax, 1201)
# Define a depth, sigma and center for the Gaussian potential
sigma = 0.4
xc = 2
V0 = 10
# Define a Potential with two Gaussian functions
sym_pot1 = TwoGaussianPotential(sigma, -xc, -V0, sigma, xc, -V0, grid=xgrid)
sym_pot1.plot()
_images/notebooks_numerical_strength_function_30_0.png
[18]:
# Create and solve the Hamiltonian
cm_g = ErfKGCoordMap(0.6, 10.0, 1.0)
ham1 = Hamiltonian(sym_pot1, cm_g)
basis1 = ham1.solve(max_virial=10**(-6))
[19]:
# Define the wavenumber grid
step1 = 0.025  # Grid-step for plotting
kmax1 = 6  # Maximal wavenumber for plotting
kgrid1 = np.arange(step1, kmax1, step1)
# Make sure the test function is discretized on the right grid
gauss2.grid = xgrid
# Plot the MLE of the strength function
basis1.plot_strength_function(gauss2, kgrid1, nres=5)
_images/notebooks_numerical_strength_function_32_0.png
Gaussian wells far from each other

The strength function for the two Gaussian wells further from each other is obtained in a similar fashion:

[20]:
# Center of the new Gaussian wells
xc2 = 4
# Definition of the new potential
sym_pot2 = TwoGaussianPotential(sigma, -xc2, -V0, sigma, xc2, -V0, grid=xgrid)
sym_pot2.plot()
_images/notebooks_numerical_strength_function_34_0.png
[21]:
ham2 = Hamiltonian(sym_pot2, cm_g)
basis2 = ham2.solve(max_virial=10**(-8.7))
[22]:
# Define the wavenumber grid
step2 = 0.01  # Grid-step for plotting
kmax2 = 2  # Maximal wavenumber for plotting
kgrid2 = np.arange(step2, kmax2, step2)
# Plot the MLE of the strength function
basis2.plot_strength_function(gauss2, kgrid2, nres=5)
_images/notebooks_numerical_strength_function_36_0.png
Discussion

Two very different strength functions have just been obtained, revealing the influence of the distance between the two Gaussian wells, which is actually related to the positions of the resonant states in the complex energy or wavenumber plane. Let us plot the energies of the resonant states of both basis sets to see this:

[23]:
# Plot the energies of the resonant states of both basis sets
for basis in basis1, basis2:
    basis.plot_energies(xlim=(0, kmax1), ylim=(-4, 0), show_unknown=False)
_images/notebooks_numerical_strength_function_39_0.png
_images/notebooks_numerical_strength_function_39_1.png

As discussed in a previous notebook, increasing the distance between both Gaussian wells leads to denser (more Siegert states per energy range) and more stable Siegert states (the imaginary part of their energy is closer to 0). This is reflected in the strength function: the peak associated with a more stable state has a higher intensity and a smaller width; it actually tends to a Dirac delta for a bound state.

This explains why the strength function obtained for the largest distance between the Gaussian wells has more intense and narrower peaks than the other one (the first peaks scarcely overlap). There are also more peaks per energy range.
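
This can be quantified by comparing the imaginary parts of the lowest resonant energies of both basis sets (a small sketch using the basis sets obtained above):

# Sketch: imaginary parts of the lowest resonant energies of both basis sets;
# values closer to 0 correspond to more stable (longer-lived) resonant states
for label, basis in zip(("close wells", "distant wells"), (basis1, basis2)):
    lowest_res = sorted(basis.resonants.energies, key=lambda en: en.real)[:3]
    print(label, ["{:.3f}".format(en.imag) for en in lowest_res])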

Conclusion

This notebook showed that the strength function of any 1D potential can readily be obtained from numerical Siegert states thanks to SiegPy. Even though the peak due to the bound and anti-bound state contributions cannot be reproduced, all other peaks can, and they are in very good agreement with the exact strength function, which is known for the 1D SWP. This indicates that the results for other potentials can be trusted.

Code Documentation

This section focuses on the documentation of the SiegPy module and its API. You will find here the documentation for the most useful classes and methods, including some examples.

1D Square-Well Potential Case

The classes that are specific to the 1D Square-Well Potential are documented below.

SWPBasisSet

The SWPBasisSet class, representing a basis set made of eigenstates of a 1D Square-Well Potential (SWP), is defined below.

class siegpy.swpbasisset.SWPBasisSet(states=None)[source]

Bases: siegpy.analyticbasisset.AnalyticBasisSet

Class representing a basis set in the case of 1D Square Well Potential. It mostly implements the abstract methods of its parent class.

Parameters:states (list of SWPEigenstate instances) – Eigenstates of a 1D SW potential.
Raises:ValueError – If all states are not SWPEigenstate instances.

Example

A basis set is initialized by a list of states:

>>> pot = SWPotential(4.442882938158366, 10)
>>> bnd1 = SWPSiegert(4.42578048382546j, 'e', pot)
>>> bnd2 = SWPSiegert(4.284084610255061j, 'o', pot)
>>> bs = SWPBasisSet(states=[bnd1, bnd2])
>>> assert bs[0] == bnd1
>>> assert bs[1] == bnd2

The basis set can be empty:

>>> bs = SWPBasisSet()
>>> assert bs.is_empty
classmethod find_Siegert_states(pot, re_kmax, re_hk, im_kmax, im_hk=None, analytic=True, grid=None, bounds_only=False)[source]

The Siegert state wavenumbers are found using the Muller scheme of the mpmath findroot method, which allows one to find a complex root of a function, starting from a wavenumber as input guess.

To find the Siegert states in a given portion of the wavenumber complex plane, a grid of input guess wavenumbers is therefore required. The parameters specifying this grid are listed below:

Parameters:
  • pot (SWPotential) – 1D Square-Well Potential for which we look for the Siegert states.
  • re_kmax (float) – Maximal value for the real part of the resonant states.
  • re_hk (float) – Real part of the grid step for initial roots.
  • im_kmax (float) – (Absolute) maximal value for the imaginary part of the resonant and anti-resonant states.
  • im_hk (float) – Imaginary part of the grid step for the initial roots (optional, except in the cases where the imaginary part of the resonant states becomes bigger (in absolute value) than the imaginary part of the first bound state).
  • analytic (bool) – If True, scalar products with the Siegert states will be computed analytically (default to True).
  • grid (numpy array or list or set) – Discretization grid of the wavefunctions of the Siegert states (optional).
  • bounds_only (bool) – If True, only the bound states have to be found (default to False).
Returns:

Sorted basis set with all the Siegert states found in the user-defined range.

Return type:

SWPBasisSet

Examples

Read a basis set from a file as a reference:

>>> from siegpy.swpbasisset import SWPBasisSet
>>> bs_1 = SWPBasisSet.from_file("doc/notebooks/siegerts.dat", nres=3)
>>> len(bs_1.resonants)
3

To find the Siegert states of a given potential, proceed as follows:

>>> pot = bs_1.potential
>>> bs = SWPBasisSet.find_Siegert_states(pot, 4.5, 1.5, 1.0, im_hk=1.0)
>>> bs == bs_1
True

The previous test shows that all the Siegert states of the reference are recovered, and this includes the resonant states whose wavenumber can have a real part up to 4.5 and an imaginary part as low as -1.0.

Warning

It is not ensured that all Siegert states in the defined range are found: you may want to check the grid_step values.

For instance, if the grid step along the real wavenumber axis is too large, the reference results are not recovered:

>>> bs = SWPBasisSet.find_Siegert_states(pot, 4.5, 4.5, 1.0, im_hk=1.0)
>>> bs == bs_1
False
classmethod find_continuum_states(pot, kmax, hk, kmin=None, even_only=False, analytic=True, grid=None)[source]

Initialize a BasisSet instance made of SWPContinuum instances. The basis set has \(2*n_k\) elements if even_only=False, \(n_k\) elements otherwise (where \(n_k\) is the number of continuum states defined by the grid step hk and the minimal and maximal values of the wavenumber grid kmin and kmax).

Parameters:
  • pot (SWPotential) – 1D Square-Well Potential for which we look for the continuum states.
  • kmax (float) – Wavenumber of the last continuum state.
  • hk (float) – Grid step of the wavenumber grid.
  • kmin (float) – Wavenumber of the first continuum state (optional)
  • even_only (bool) – If True, only even continuum states are created (default to False)
  • analytic (bool) – If True, the scalar products will be computed analytically (default to True).
  • grid (numpy array or list or set) – Discretization grid of the wavefunctions of the continuum states (optional).
Returns:

Basis set of all continuum states defined by the grid of wavenumbers.

Return type:

SWPBasisSet

Raises:

WavenumberError – If hk, kmin or kmax is not strictly positive.

Examples

Let us start by defining a potential:

>>> from siegpy.swpbasisset import SWPBasisSet
>>> bs_ref = SWPBasisSet.from_file("doc/notebooks/siegerts.dat")
>>> pot = bs_ref.potential

The continuum states are found, given a potential and a grid of initial wavenumbers (note that the minimal and maximal wavenumbers cannot be 0):

>>> hk = 1; kmax = 3
>>> bs = SWPBasisSet.find_continuum_states(pot, kmax, hk)
>>> bs.wavenumbers
[1.0, 1.0, 2.0, 2.0, 3.0, 3.0]

It is possible to find only the even continuum states:

>>> p = pot
>>> bs = SWPBasisSet.find_continuum_states(p, kmax, hk, even_only=True)
>>> bs.wavenumbers
[1.0, 2.0, 3.0]
>>> assert len(bs.even) == 3 and bs.odd.is_empty

The minimal wavenumber can be set:

>>> bs = SWPBasisSet.find_continuum_states(pot, kmax, hk, kmin=3)
>>> bs.wavenumbers
[3.0, 3.0]
parity
Returns:None if both parities are present, 'e' or 'o' if all the states are even or odd, respectively.
Return type:None or str
even
Returns:All the even states of the basis set.
Return type:SWPBasisSet
odd
Returns:All the odd states of the basis set.
Return type:SWPBasisSet
plot_wavefunctions(nres=None, xlim=None, ylim=None, title=None, file_save=None)[source]

Plot the bound, resonant and anti-resonant wavefunctions of the basis set along with the potential. The continuum and anti-bound states, if any are present in the basis set, are not plotted.

The wavefunctions are translated along the y-axis by their energy (for bound states) or absolute value of their energy (for resonant and anti-resonant states).

Parameters:
  • nres (int) – Number of resonant and antiresonant wavefunctions to plot.
  • xlim (tuple(float or int, float or int)) – Range of the x axis of the plot (optional)
  • ylim (tuple(float or int, float or int)) – Range of the y axis of the plot (optional)
  • title (str) – Plot title (optional).
  • file_save (str) – Filename of the plot to be saved (optional)
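
Example

A minimal sketch (the file path, grid and plot ranges are only indicative):

>>> import numpy as np
>>> from siegpy.swpbasisset import SWPBasisSet
>>> xgrid = np.linspace(-6, 6, 301)
>>> siegerts = SWPBasisSet.from_file("doc/notebooks/siegerts.dat", nres=3, grid=xgrid)
>>> siegerts.plot_wavefunctions(nres=2, xlim=(-6, 6))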
continuum_contributions_to_CR(test, hk=None, kmax=None)[source]

Evaluate the continuum contributions to the completeness relation.

This is an overriding of the inherited siegpy.analyticbasisset.AnalyticBasisSet.continuum_contributions_to_CR() method in order to account for the parity of the continuum states of the Square-Well potential.

Parameters:
  • test (Function) – Test function.
  • hk (float) – Grid step for the wavenumbers of the “on-the-fly” continuum basis sets (optional).
  • kmax (float) – Maximal wavenumber of the on-the-fly continuum basis set (optional).
Returns:

Contribution of each continuum state of the basis set to the exact completeness relation.

Return type:

numpy array

Raises:

BasisSetError – If the basis set has less odd continuum states than even continuum states when the test function is not even.

static _evaluate_integrand(q, k, test, eta, potential)[source]

Evaluate the integrand used to compute the strength function “on-the-fly.”

Parameters:
  • q (float) – Wavenumber of the continuum state considered.
  • k (float) – Wavenumber for which the strength function is evaluated.
  • test (Function) – Test function.
  • eta (float) – Infinitesimal for integration (if None, default to 10 times the value of the grid-step of the continuum basis set).
  • potential (Potential) – Potential of the currently studied analytical case.
Returns:

Value of the integrand.

Return type:

complex

_propagate(test, time_grid, weights=None)[source]

Evaluate the time-propagation of a test wavepacket as the matrix product of two matrices: one to account for the time dependence of the propagation of the wavepacket (mat_time), the other for its space dependence (mat_space).

This is an overriding of the inherited siegpy.basisset.BasisSet._propagate() method to take into account the parity of the eigenstates in the 1D SWP case.

Parameters:
  • test (Function) – Test function.
  • time_grid (numpy array or list of positive numbers) – Times for which the propagation is evaluated. It must contain positive numbers only.
  • weights (dict) – Dictionary of the weights to use for the time-propagation. Keys correspond to a type of Siegert states (‘ab’ for anti-bounds, ‘b’ for bounds, ‘r’ for resonants and ‘ar’ for anti-resonants) and the corresponding value is the weight to use for all the states of the given type (optional).
Returns:

Propagated wavepacket for the different times of time_grid.

Return type:

2D numpy array

Raises:

BasisSetError – If the basis set has less odd continuum states than even continuum states when the test function is not even.

_add_one_continuum_state()[source]

Add two continuum states to the basis set, depending on the already existing continuum states.

This is an overriding of the inherited siegpy.analyticbasisset.AnalyticBasisSet._add_one_continuum_state() method to account for the parity of the continuum states in the 1D SWP case.

Returns:The same basis set with one more continuum state.
Return type:SWPBasisSet

SWPotential

The SWPotential class representing a 1D Square-Well Potential (1DSWP) is defined below.

class siegpy.swpotential.SWPotential(l, V0, grid=None)[source]

Bases: siegpy.functions.Rectangular, siegpy.potential.Potential

A 1D SW potential is nothing but a rectangular function that is considered as a potential. This class therefore inherits from and extends the Rectangular and Potential classes.

A 1D SWP is initialized from a width \(l\) and a depth \(V_0\). If a grid is given, the potential is discretized over this grid.

Parameters:
  • l (float) – Width of the potential.
  • V0 (float) – Depth of the potential.
  • grid (numpy array or list or set) – Discretization grid of the potential (optional).
Raises:

ValueError – If the depth and width of the potential are not strictly positive.

Examples

A SWPotential instance represents nothing but a centered rectangular function with a negative amplitude:

>>> xgrid = [-4, -2, 0, 2, 4]
>>> pot = SWPotential(5, 10, grid=xgrid)
>>> pot.width == 5 and pot.amplitude == -10
True
>>> pot.center == 0 and pot.momentum == 0
True
>>> pot.values
array([  0.+0.j, -10.+0.j, -10.+0.j, -10.+0.j,   0.+0.j])

Rather than dealing with a negative amplitude, the positive depth of the SW potential is mainly used:

>>> pot.depth
10
classmethod from_center_and_width(xc, a, k0=0.0, h=1.0, grid=None)[source]

Class method overriding the inherited siegpy.functions.Rectangular.from_center_and_width() class method.

Parameters:
  • xc (float) – Center of the potential (must be 0).
  • a (float) – Width of the potential.
  • k0 (float) – Initial momentum of the potential (must be 0).
  • h (float) – Depth of the potential (default to 1).
  • grid (list or numpy array) – Discretization grid (optional).
Returns:

An initialized 1D SW potential.

Return type:

SWPotential

Raises:

ValueError – If the initial momentum or the center is not zero.

Examples

>>> pot = SWPotential.from_center_and_width(0, 5, h=-10)
>>> pot.width == 5 and pot.depth == 10
True

A Potential cannot have a non-zero center nor an initial momentum:

>>> SWPotential.from_center_and_width(2, 5)
Traceback (most recent call last):
ValueError: A SWPotential must be centered.
>>> SWPotential.from_center_and_width(0, 5, k0=1)
Traceback (most recent call last):
ValueError: A SWPotential cannot have an initial momentum.

The same is valid for the from_width_and_center() class method:

>>> SWPotential.from_width_and_center(5, 2)
Traceback (most recent call last):
ValueError: A SWPotential must be centered.
>>> SWPotential.from_width_and_center(5, 0, k0=1)
Traceback (most recent call last):
ValueError: A SWPotential cannot have an initial momentum.
depth
Returns:Depth of the 1D SW potential.
Return type:float
__eq__(other)[source]
Parameters:other (object) – Any other object.
Returns:True if two SW potentials have the same width and depth.
Return type:bool

Examples

>>> SWPotential(5, 10) == SWPotential(5, 10, grid=[-1, 0, 1])
True
>>> SWPotential(5, 10) == Potential([-1, 0, 1], [-10, -10, -10])
False
__add__(other)[source]

Override of the addition, taking care of the case of the addition of two SW potentials with the same width.

Parameters:other (Potential) – Another potential
Returns:The sum of both potentials.
Return type:Potential or Function

Examples

The addition of two Square-Well potentials of the same width keeps the analyticity:

>>> xgrid = [-4, -2, 0, 2, 4]
>>> pot = SWPotential(5, 10, grid=xgrid)
>>> pot += SWPotential(5, 5)
>>> isinstance(pot, SWPotential)
True
>>> pot.amplitude == -15 and pot.depth == 15 and pot.width == 5
True
>>> pot.values
array([  0.+0.j, -15.+0.j, -15.+0.j, -15.+0.j,   0.+0.j])

In any other case, the analyticity is lost, but the result still is a potential:

>>> pot += Potential(xgrid, [5, 5, 5, 5, 5])
>>> isinstance(pot, SWPotential)
False
>>> isinstance(pot, Potential)
True
>>> pot.values
array([  5.+0.j, -10.+0.j, -10.+0.j, -10.+0.j,   5.+0.j])
complex_scaled_values(coord_map)[source]

Evaluates the complex scaled SW potential. It actually amounts to the potential without complex scaling, as the Square-Well Potential is a piecewise function (the complex scaling has no effect on it).

Parameters:coord_map (CoordMap) – Coordinate mapping.
Returns:Complex scaled values of the potential.
Return type:numpy array

SWPEigenstates

Here are defined the classes allowing one to create and use the eigenstates of the 1-dimensional (1D) Square Well Potential (SWP) case.

The two classes are the SWPSiegert and SWPContinuum classes.

Both classes derive from the SWPEigenstate abstract class, which forces the scalar product to be redefined so that it is computed analytically when possible.

class siegpy.swpeigenstates.SWPEigenstate(k, parity, potential, grid, analytic)[source]

Bases: siegpy.analyticeigenstates.AnalyticEigenstate

This is the base class for any eigenstate of the 1D Square-Well Potential (1DSWP). It defines some generic methods, while leaving some others to be defined by its subclasses, that are:

  • the SWPSiegert class, defining the Siegert states of the Hamiltonian defined by the 1D SWP,
  • the SWPContinuum class, defining the continuum states of the same problem.

In addition to those of an AnalyticEigenstate, one of the main characteristics of a SWPEigenstate is its parity (the eigenstate must be an even or odd function).

Parameters:
  • k (complex) – Wavenumber of the state.
  • parity (str) – Parity of the state ('e' for even, 'o' for odd).
  • potential (SWPotential) – 1D Square-Well Potential giving rise to this eigenstate.
  • grid (list or set or numpy array) – Discretization grid.
  • analytic (bool) – If True, the scalar products must be computed analytically.
Raises:

TypeError – If the potential is not a SWPotential instance.

parity
Returns:Parity of the eigenstate ('e' if it is even or 'o' if it is odd).
Return type:str
is_even
Returns:True if the eigenstate is even.
Return type:bool
is_odd
Returns:True if the eigenstate is odd.
Return type:bool
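
Example

A short sketch, reusing the bound state defined in the SWPBasisSet example above:

>>> from siegpy.swpotential import SWPotential
>>> from siegpy.swpeigenstates import SWPSiegert
>>> pot = SWPotential(4.442882938158366, 10)
>>> bnd = SWPSiegert(4.42578048382546j, 'e', pot)
>>> bnd.parity
'e'
>>> bnd.is_even and not bnd.is_odd
True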
_compute_values(grid)[source]

Evaluate the wavefunction of the eigenstate discretized over the whole grid.

Parameters:grid (list or set or numpy array) – Discretization grid.
Returns:Wavefunction discretized over the grid.
Return type:numpy array
_even_wf_1(grid_1)[source]

Note

This is an abstract method.

Evaluate the even eigenstate wavefunction over the grid points in region I.

Parameters:grid_1 (numpy array) – Grid in region I.
Returns:Even eigenstate wavefunction discretized over the grid in region I.
Return type:numpy array
_even_wf_2(grid_2)[source]

Note

This is an abstract method.

Evaluate the even eigenstate wavefunction over the grid points in region II.

Parameters:grid_2 (numpy array) – Grid in region II.
Returns:Even eigenstate wavefunction discretized over the grid in region II.
Return type:numpy array
_odd_wf_1(grid_1)[source]

Note

This is an abstract method.

Evaluate the odd eigenstate wavefunction over the grid points in region I.

Parameters:grid_1 (numpy array) – Grid in region I.
Returns:Odd eigenstate wavefunction discretized over the grid in region I.
Return type:numpy array
_odd_wf_2(grid_2)[source]

Note

This is an abstract method.

Evaluate the odd eigenstate wavefunction over the grid points in region II.

Parameters:grid_2 (numpy array) – Grid in region II.
Returns:Odd eigenstate wavefunction discretized over the grid in region II.
Return type:numpy array
scal_prod(other, xlim=None)[source]

Evaluate the scalar product of an eigenstate with another function. It can be computed analytically or not.

Note

If it has to be computed analytically, a TypeError may be raised if the analytic scalar product with the test function other is analytically unknown (at present, only scalar products with Gaussian and rectangular functions are possible).

Parameters:
  • other (Function) – Another Function.
  • xlim (tuple(float or int, float or int)) – Range of the x axis for the integration (optional).
Returns:

Result of the scalar product.

Return type:

complex

Raises:

TypeError – If the value of the analytical scalar product with the other function is unknown.
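
Example

A minimal sketch with a Gaussian test function (the Gaussian parameters are arbitrary, and it is assumed that no grid is required when the scalar product is computed analytically):

>>> from siegpy import Gaussian
>>> from siegpy.swpotential import SWPotential
>>> from siegpy.swpeigenstates import SWPSiegert
>>> pot = SWPotential(4.442882938158366, 10)
>>> bnd = SWPSiegert(4.42578048382546j, 'e', pot)
>>> sp = bnd.scal_prod(Gaussian(0.25, 0.0))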

_scal_prod_with_Gaussian(gaussian)[source]
Parameters:gaussian (Gaussian) – Gaussian function.
Returns:Value of the analytic scalar product of an eigenstate with a Gaussian test function.
Return type:complex or float
_scal_prod_with_Rectangular(rect)[source]
Parameters:rect (Rectangular) – Rectangular function.
Returns:Value of the analytic scalar product of an eigenstate with a Rectangular test function.
Return type:complex or float
_sp_R_1(rect)[source]

Note

This is an abstract method.

Parameters:rect (Rectangular) – Rectangular function.
Returns:Value of the analytic scalar product between an eigenstate and a rectangular function spreading over region I.
Return type:float or complex
_sp_R_2(rect)[source]
Parameters:rect (Rectangular) – Rectangular function.
Returns:Value of the analytic scalar product between an eigenstate and a rectangular function spreading over region II.
Return type:float or complex
_sp_R_2_other_cases(rect)[source]

Note

This is an abstract method.

Parameters:rect (Rectangular) – Rectangular function.
Returns:Value of the analytic scalar product of an eigenstate with a rectangular test function in region II when the result is not obviously 0.
Return type:complex or float
_sp_R_3(rect)[source]
Parameters:rect (Rectangular) – Rectangular function.
Returns:Value of the analytic scalar product of an eigenstate with a rectangular test function in region III.
Return type:complex or float
class siegpy.swpeigenstates.SWPSiegert(ks, parity, potential, grid=None, analytic=True)[source]

Bases: siegpy.swpeigenstates.SWPEigenstate, siegpy.analyticeigenstates.AnalyticSiegert

This class defines a Siegert state in the case of the 1D SWP.

Parameters:
  • ks (complex) – Wavenumber of the Siegert state.
  • parity (str) – Parity of the Siegert state ('e' for even, 'o' for odd).
  • potential (SWPotential) – 1D Square-Well Potential giving rise to this eigenstate.
  • grid (list or set or numpy array) – Discretization grid (optional).
  • analytic (bool) – If True, the scalar products must be computed analytically (default to True).
Raises:

ParityError – If the parity is inconsistent with the Siegert state wavenumber.

_even_wf_1(grid_1)[source]

Evaluate the even Siegert state wavefunction over the grid points in region I.

Parameters:grid_1 (numpy array) – Grid in region I.
Returns:Even Siegert state wavefunction discretized over the grid in region I.
Return type:numpy array
_even_wf_2(grid_2)[source]

Evaluate the even Siegert state wavefunction over the grid points in region II.

Parameters:grid_2 (numpy array) – Grid in region II.
Returns:Even Siegert state wavefunction discretized over the grid in region II.
Return type:numpy array
_odd_wf_1(grid_1)[source]

Evaluate the odd Siegert state wavefunction over the grid points in region I.

Parameters:grid_1 (numpy array) – Grid in region I.
Returns:Odd Siegert state wavefunction discretized over the grid in region I.
Return type:numpy array
_odd_wf_2(grid_2)[source]

Evaluate the odd Siegert state wavefunction over the grid points in region II.

Parameters:grid_2 (numpy array) – Grid in region II.
Returns:Odd Siegert state wavefunction discretized over the grid in region II.
Return type:numpy array
_sp_even_gauss(gaussian, expkp, expkm, expqp, expqm, zkpp, zkmp, zkpm, zkmm, zqpp, zqmp, zqpm, zqmm)[source]
Parameters:gaussian (Gaussian) – Gaussian function.
Returns:Analytical value of the c-product between an even Siegert state and a Gaussian test function.
Return type:float or complex
_sp_odd_gauss(gaussian, expkp, expkm, expqp, expqm, zkpp, zkmp, zkpm, zkmm, zqpp, zqmp, zqpm, zqmm)[source]
Parameters:gaussian (Gaussian) – Gaussian function.
Returns:Analytical value of the c-product between an odd Siegert state and a Gaussian test function.
Return type:float or complex
_sp_R_1(rect)[source]
Parameters:rect (Rectangular) – Rectangular function.
Returns:Value of the analytic scalar product of a Siegert state with a rectangular function spreading over region I.
Return type:complex or float
_sp_R_2_other_cases(rect)[source]
Parameters:rect (Rectangular) – Rectangular function.
Returns:Value of the analytic scalar product of a Siegert state with a rectangular function spreading over region II.
Return type:complex or float
MLE_strength_function(test, kgrid)[source]

Evaluate the contribution of a Siegert state to the Mittag-Leffler expansion of the strength function, for a given test function discretized on a grid of wavenumbers kgrid.

Parameters:
  • test (Wavefunction) – Test function.
  • kgrid (numpy array) – Discretization grid of the wavenumber.
Returns:

Contribution of the Siegert state to the strength function.

Return type:

numpy array
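
Example

A hedged sketch of the contribution of a single resonant state (the file path and parameters are only indicative):

>>> import numpy as np
>>> from siegpy import Gaussian
>>> from siegpy.swpbasisset import SWPBasisSet
>>> siegerts = SWPBasisSet.from_file("doc/notebooks/siegerts.dat", nres=2)
>>> res = siegerts.resonants[0]
>>> kgrid = np.arange(0.1, 5.0, 0.1)
>>> contrib = res.MLE_strength_function(Gaussian(0.25, 0.0), kgrid)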

class siegpy.swpeigenstates.SWPContinuum(k, parity, potential, grid=None, analytic=True)[source]

Bases: siegpy.swpeigenstates.SWPEigenstate, siegpy.analyticeigenstates.AnalyticContinuum

Class defining a continuum state of the 1D Square-Well potential.

Parameters:
  • k (complex) – Wavenumber of the continuum state.
  • parity (str) – Parity of the continuum state ('e' for even, 'o' for odd).
  • potential (SWPotential) – 1D Square-Well Potential giving rise to this eigenstate.
  • grid (list or set or numpy array) – Discretization grid (optional).
  • analytic (bool) – If True, the scalar products must be computed analytically (default to True).
Raises:

ParityError – If the parity is not 'e' (even) or 'o' (odd).

_even_wf_1(grid_1)[source]

Evaluate the even eigenstate wavefunction over the grid points in region I.

Parameters:grid_1 (numpy array) – Grid in region I.
Returns:Even eigenstate wavefunction discretized over the grid in region I.
Return type:numpy array
_even_wf_2(grid_2)[source]

Evaluate the even continuum state wavefunction over the grid points in region II.

Parameters:grid_2 (numpy array) – Grid in region II.
Returns:Even continuum state wavefunction discretized over the grid in region II.
Return type:numpy array
_odd_wf_1(grid_1)[source]

Evaluate the odd eigenstate wavefunction over the grid points in region I.

Parameters:grid_1 (numpy array) – Grid in region I.
Returns:Odd eigenstate wavefunction discretized over the grid in region I.
Return type:numpy array
_odd_wf_2(grid_2)[source]

Evaluate the odd continuum state wavefunction over the grid points in region II.

Parameters:grid_2 (numpy array) – Grid in region II.
Returns:Odd continuum state wavefunction discretized over the grid in region II.
Return type:numpy array
_sp_even_gauss(gaussian, expkp, expkm, expqp, expqm, zkpp, zkmp, zkpm, zkmm, zqpp, zqmp, zqpm, zqmm)[source]
Parameters:gaussian (Gaussian) – Gaussian function.
Returns:Analytical value of the c-product between an even continuum state and a Gaussian test function.
Return type:float or complex
_sp_odd_gauss(gaussian, expkp, expkm, expqp, expqm, zkpp, zkmp, zkpm, zkmm, zqpp, zqmp, zqpm, zqmm)[source]
Parameters:gaussian (Gaussian) – Gaussian function.
Returns:Analytical value of the c-product between an odd continuum state and a Gaussian test function.
Return type:float or complex
_sp_R_1(rect)[source]
Parameters:rect (Rectangular) – Rectangular function.
Returns:Value of the analytic scalar product of a continuum state with a Rectangular function spreading over region I.
Return type:complex or float
_sp_R_2_other_cases(rect)[source]
Parameters:rect (Rectangular) – Rectangular function.
Returns:Value of the analytic scalar product of an even continuum state with a Rectangular function spreading over region II.
Return type:complex or float
exception siegpy.swpeigenstates.ParityError[source]

Bases: Exception

Error thrown if the parity of an eigenstate is incorrect.

General 1D Case

The classes useful for a generic 1D case are described below.

Note

Finding the Siegert states of a generic 1D potential is not yet implemented.

BasisSet

class siegpy.basisset.BasisSet(states=None, potential=None, coord_map=None, max_virial=0)[source]

Bases: object

This class is arguably the most important one in the whole module, since it defines all the main methods allowing one to study the Siegert states of a given Hamiltonian (and to compare their relevance with that of the traditional continuum states).

Parameters:
  • states (list of Eigenstate instances or Eigenstate or None) – Eigenstates of a Hamiltonian. If None, it means that the BasisSet instance will be empty.
  • potential (Potential or None) – Potential leading to the eigenstates.
  • coord_map (CoordMap or None) – Coordinate mapping used to find the eigenstates.
  • max_virial (float or None) – Maximal virial used to discriminate between Siegert states and other eigenstates.
Raises:

ValueError – If the value of states is invalid.

potential
Returns:Potential used to find the eigenstates.
Return type:Potential
coord_map
Returns:Coordinate mapping used to find the eigenstates.
Return type:CoordMap
max_virial

If updated, max_virial updates the Siegert_type attribute of the Siegert states in the basis set.

Returns:Maximal virial for a state to be considered as a Siegert state.
Return type:float
states
Returns:States of a BasisSet instance.
Return type:list

Example

>>> BasisSet().states
[]
write(filename)[source]

Write the basis set in a binary file (using pickle).

Parameters:filename (str) – Name of the file to be written.

Example

>>> BasisSet().write("tmp.dat")
classmethod from_file(filename)[source]

Initialize a basis set from a binary file.

Parameters:filename (str) – Name of a file containing a basis set.
Raises:NameError – If the file does not exist.
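
Example

A minimal write/read round-trip sketch (the filename is arbitrary):

>>> bs = BasisSet()
>>> bs.write("tmp.dat")
>>> BasisSet.from_file("tmp.dat").is_empty
True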
__add__(states)[source]

Add a list of states or the states of another BasisSet instance to a BasisSet instance.

Parameters:states (list of Eigenstate instances or BasisSet) – Eigenstates of a Hamiltonian or another basis set.
Returns:A new basis set.
Return type:BasisSet
Raises:TypeError – If states is not a basis set or a list of eigenstates.
__len__()[source]
Returns:Length of the list of states
Return type:int
__eq__(other)[source]
Returns:True if both basis sets contain the same states.
Return type:bool
is_empty
Returns:True if the basis set is empty.
Return type:bool
is_not_empty
Returns:True if the basis set is not empty.
Return type:bool
bounds
Returns:Basis set made of all the bound states of the current basis set.
Return type:BasisSet
antibounds
Returns:Basis set made of all the anti-bound states of the current basis set.
Return type:BasisSet
resonants
Returns:Basis set made of all the resonant states of the current basis set.
Return type:BasisSet
antiresonants
Returns:Basis set made of all the anti-resonant states of the current basis set.
Return type:BasisSet
siegerts
Returns:Basis set made of all the Siegert states of the current basis set.
Return type:BasisSet
continuum
Returns:Basis set made of all the continuum states of the current basis set.
Return type:BasisSet
unknown
Returns:Basis set made of all the states of unknown type of the current basis set.
Return type:BasisSet
plot_wavefunctions(nres=None, nstates=None, xlim=None, ylim=None, title=None, file_save=None)[source]

Plot the bound, resonant and anti-resonant wavefunctions of the basis set along with the potential. The continuum states, if any in the basis set, are not plotted.

The wavefunctions are translated along the y-axis by their energy (for bound states) or absolute value of their energy (for resonant and anti-resonant states).

Parameters:
  • nres (int) – Number of resonant wavefunctions to plot (optional).
  • nstates (int) – Number of wavefunctions to plot (optional).
  • xlim (tuple(float or int, float or int)) – Range of the x axis of the plot (optional).
  • ylim (tuple(float or int, float or int)) – Range of the y axis of the plot (optional).
  • title (str) – Plot title (optional).
  • file_save (str) – Filename of the plot to be saved (optional).
energies
Returns:The energies of the states in the current basis set.
Return type:list
wavenumbers
Returns:The wavenumbers of the states in the current basis set.
Return type:list
virials
Returns:Virial values of the states in the current basis set.
Return type:list
no_coord_map
Returns:True if the coordinate mapping is such that \(x \mapsto x\).
Return type:bool
plot_wavenumbers(xlim=None, ylim=None, title=None, file_save=None, show_unknown=True)[source]

Plot the wavenumbers of the Siegert states in the basis set.

Parameters:
  • xlim (tuple(float or int, float or int)) – Range of the x axis of the plot (optional).
  • ylim (tuple(float or int, float or int)) – Range of the y axis of the plot (optional).
  • title (str) – Plot title (optional).
  • file_save (str) – Filename of the plot to be saved (optional).
  • show_unknown (bool) – If True, plot the data of the states with an unknown type.
plot_energies(xlim=None, ylim=None, title=None, file_save=None, show_unknown=True)[source]

Plot the energies of the Siegert states in the basis set.

Parameters:
  • xlim (tuple(float or int, float or int)) – Range of the x axis of the plot (optional).
  • ylim (tuple(float or int, float or int)) – Range of the y axis of the plot (optional).
  • title (str) – Plot title (optional).
  • file_save (str) – Filename of the plot to be saved (optional).
  • show_unknown (bool) – If True, plot the data of the states with an unknown type.
scal_prod(test)[source]

Method computing the scalar product \(\left\langle \varphi | test \right\rangle\) for each state \(\varphi\) in the basis set.

Parameters:test (Function) – Test function.
Returns:Scalar product of all the states in the basis set with the test function.
Return type:numpy array
completeness_convergence(test, klim=None)[source]

Evaluate the convergence of the completeness relation for the current basis set using a given test function.

Parameters:
  • test (Function) – Test function.
  • klim (tuple(float or int, float or int)) – Wavenumber range where the completeness convergence must be computed (optional).
Returns:

A tuple made of the array of the states wavenumbers and the array of the convergence of the completeness relation using all the states (both have the same length).

Return type:

tuple(numpy array, numpy array)

Raises:

ValueError – If the wavenumber klim range covers negative values.

Berggren_completeness_convergence(test, klim=None)[source]

Method evaluating the convergence of the CR using the Berggren expansion.

Parameters:
  • test (Function) – Test function.
  • klim (tuple(float or int, float or int)) – Wavenumber range where the completeness convergence must be computed (optional).
Returns:

A tuple made of the array of the states wavenumbers and the array of the convergence of the Berggren completeness relation (both have the same length).

Return type:

tuple(numpy array, numpy array)

plot_completeness_convergence(test, klim=None, title=None, file_save=None)[source]

Plot the convergence of the completeness relation using all or the first nstates in the basis set.

Parameters:
  • test (Function) – Test function.
  • klim (tuple(float or int, float or int)) – Wavenumber range where the completeness convergence must be computed and range of the x axis of the plot (optional).
  • title (str) – Plot title (optional).
  • file_save (str) – Filename of the plot to be saved (optional).
MLE_strength_function(test, kgrid)[source]

Warning

Only the peaks due to the resonant couples can be produced at the moment. Numerical anti-bound states are required for the true MLE of the strength function to be computed from a numerical basis set.

Evaluate the Mittag-Leffler expansion strength function of the basis set for a given test function, discretized on a grid of wavenumbers kgrid.

Parameters:
  • test (Function) – Test function.
  • kgrid (numpy array) – Wavenumbers for which the strength function is evaluated.
Returns:

MLE of the strength function evaluated on the kgrid.

Return type:

numpy array

plot_strength_function(test, kgrid, nres=None, title=None, file_save=None)[source]

Plot the Mittag-Leffler Expansion of the strength function for a given test function.

The MLE of the strength function evaluated using 1, …, nres resonant couples can also be plotted.

Parameters:
  • test (Function) – Test function.
  • kgrid (numpy array) – Wavenumbers for which the strength function is evaluated.
  • nres (int) – Number of resonant couples contributions to be plotted (default to None, meaning that none are plotted; if nres=0, then only the sum of the bound and anti-bound states contributions is plotted).
  • title (str) – Plot title (optional).
  • file_save (str) – Filename of the plot to be saved (optional).
Berggren_propagation(test, time_grid)[source]

Evaluate the Berggren Expansion of the time-propagation of a test wavepacket over a given time grid. Only bound and resonant states are used, with a weight 1.

Parameters:
  • test (Function) – Test function.
  • time_grid (numpy array or list of positive numbers) – Times for which the propagation is evaluated. It must contain positive numbers only.
Returns:

Berggren expansion of the propagated wavepacket for the different times of time_grid.

Return type:

2D numpy array
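
Example

A hedged sketch using an analytical basis set read from a file and an arbitrary Gaussian wavepacket (it is assumed that the basis-set grid must be set for the propagated wavepacket to be discretized):

>>> import numpy as np
>>> from siegpy import Gaussian
>>> from siegpy.swpbasisset import SWPBasisSet
>>> siegerts = SWPBasisSet.from_file("doc/notebooks/siegerts.dat", nres=5)
>>> xgrid = np.linspace(-3, 3, 201)
>>> siegerts.grid = xgrid
>>> wp = siegerts.Berggren_propagation(Gaussian(0.25, 0.0, grid=xgrid), [0.0, 0.5, 1.0])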

_propagate(test, time_grid)[source]

Evaluate the contribution of all the states to the time propagation of the initial wavepacket test for all the times in time_grid.

Note

The same default weight is used for all the states.

Parameters:
  • test (Function) – Test function.
  • time_grid (numpy array or list of positive numbers) – Times for which the propagation is evaluated. It must contain positive numbers only.
Returns:

Propagated wavepacket for the different times of time_grid.

Return type:

2D numpy array

Potential

class siegpy.potential.Potential(grid, values)[source]

Bases: siegpy.functions.Function

Class defining a generic 1D potential.

Examples

A Potential is a function:

>>> pot = Potential([-1, 0, 1], [-3, 2, 0])
>>> pot.grid
array([-1,  0,  1])
>>> pot.values
array([-3,  2,  0])

The main difference is that only Potential instances can be added to a potential:

>>> pot += Function([1, 2, 3], [0, -1, 0])
Traceback (most recent call last):
TypeError: Cannot add a <class 'siegpy.functions.Function'> to a Potential
Parameters:
  • grid (list or set or numpy array) – Discretization grid.
  • values (list or set or numpy array) – Values of the function evaluated on the grid points.
Raises:

ValueError – If the grid is not made of reals or if the grid and values arrays have incoherent lengths.

Example

Note that both grid and values are converted to numpy arrays:

>>> f = Function([-1, 0, 1], [1, 2, 3])
>>> f.grid
array([-1,  0,  1])
>>> f.values
array([1, 2, 3])

__add__(other)[source]

Add two potentials.

Parameters:other (Potential) – Another potential.
Returns:Sum of both potentials.
Return type:Potential
Raises:TypeError – If other is not a Potential instance.

Examples

Two potentials can be added:

>>> pot1 = Potential([1, 2, 3], [1, 1, 1])
>>> pot2 = Potential([1, 2, 3], [0, -1, 0])
>>> pot = pot1 + pot2
>>> pot.grid
array([1, 2, 3])
>>> pot.values
array([1, 0, 1])

The previous potentials are unchanged:

>>> pot1.values
array([1, 1, 1])
>>> pot2.values
array([ 0, -1,  0])
complex_scaled_values(coord_map)[source]

Evaluate the complex scaled value of the potential given a coordinate mapping.

Parameters:coord_map (CoordMap) – Coordinate mapping used.
Raises:NotImplementedError

Symbolic Potential

The SymbolicPotential class is defined below, along with some child classes:

class siegpy.symbolicpotentials.SymbolicPotential(sym_func, grid=None)[source]

Bases: siegpy.potential.Potential

A Symbolic potential is defined by a symbolic function using the sympy package. The symbolic function can be encoded in a string. The function must be a function of x only (no other parameters).

Parameters:
  • sym_func (str or sympy symbolic function) – Symbolic function of the potential.
  • grid (list or numpy array or NoneType) – Discretization grid of the potential (optional).
Raises:

ValueError – If a parameter of the symbolic function is not \(x\).

Examples

The symbolic function can be a string:

>>> f = "1 / (x**2 + 1)"
>>> pot = SymbolicPotential(f)

Updating the grid automatically updates the values of the potential:

>>> xgrid = [-1, 0, 1]
>>> pot.grid = xgrid
>>> pot.values
array([ 0.5,  1. ,  0.5])

The symbolic function must be a function of x only:

>>> from sympy.abc import a, b, x
>>> f = a / (x**2 + b)
>>> SymbolicPotential(f)
Traceback (most recent call last):
ValueError: The only variable of the analytic function should be x.

This means you must assign values to the parameters beforehand:

>>> pot = SymbolicPotential(f.subs([(a, 1), (b, 1)]), grid=xgrid)
>>> pot.values
array([ 0.5,  1. ,  0.5])
symbolic
Returns:Symbolic function of the potential.
Return type:Sympy symbolic function
grid
Returns:Values of the grid.
Return type:NoneType or numpy array
_compute_values(grid)[source]

Evaluate the values of the potential for a given grid.

Parameters:grid (list or numpy array) – Discretization grid or complex scaled grid.
Returns:Values of the potential according to its grid.
Return type:numpy array
complex_scaled_values(coord_map)[source]

Evaluate the complex scaled potential for a given coordinate mapping.

Parameters:coord_map (CoordMap) – Coordinate mapping.
Returns:Complex scaled values of the potential.
Return type:numpy array
__add__(other)[source]
Parameters:other (Potential) – Another potential.
Returns:Sum of both potentials.
Return type:SymbolicPotential or Potential
Raises:ValueError – If the other potential is not symbolic and both potentials have no grid.
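
A hedged sketch of the addition of two symbolic potentials (the symbolic functions and the grid are hypothetical values):

>>> from siegpy.symbolicpotentials import SymbolicPotential
>>> xgrid = [-1, 0, 1]
>>> pot1 = SymbolicPotential("-1 / (x**2 + 1)", grid=xgrid)
>>> pot2 = SymbolicPotential("-1 / (x**4 + 1)", grid=xgrid)
>>> pot = pot1 + pot2  # sum of both potentials, evaluated on the same grid
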
class siegpy.symbolicpotentials.WoodsSaxonPotential(l, V0, lbda, grid=None)[source]

Bases: siegpy.symbolicpotentials.SymbolicPotential

This class defines a symmetric and smooth Woods-Saxon potential of the form:

\[V(x) = V_0 \left( \frac{1}{1 + e^{\lambda(x+l/2)}} - \frac{1}{1 + e^{\lambda(x-l/2)}} \right)\]

where \(V_0\) is the potential depth, \(l\) is the potential characteristic width and \(\lambda\) is the sharpness parameter.

Parameters:
  • l (float) – Characteristic width of the potential.
  • V0 (float) – Potential depth (if negative, it corresponds to a potential barrier).
  • lbda (float) – Sharpness of the potential.
  • grid (numpy array) – Discretization grid of the potential (optional).
Raises:

ValueError – If the width or the sharpness parameter is strictly negative.

width
Returns:Width of the potential.
Return type:float
depth
Returns:Depth of the potential.
Return type:float
sharpness
Returns:Sharpness of the potential.
Return type:float
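
A minimal construction sketch (the width, depth, sharpness and grid values are hypothetical):

>>> import numpy as np
>>> from siegpy.symbolicpotentials import WoodsSaxonPotential
>>> xgrid = np.linspace(-10, 10, 401)
>>> pot = WoodsSaxonPotential(5.0, 10.0, 3.0, grid=xgrid)  # width l, depth V0, sharpness lbda
>>> width, depth = pot.width, pot.depth  # give back the l and V0 parameters
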
class siegpy.symbolicpotentials.MultipleGaussianPotential(sym_func, grid=None)[source]

Bases: siegpy.symbolicpotentials.SymbolicPotential

This class avoids some code repetition inside the classes TwoGaussianPotential and FourGaussianPotential.

Parameters:
  • sym_func (str or sympy symbolic function) – Symbolic function of the potential.
  • grid (list or numpy array or NoneType) – Discretization grid of the potential (optional).
Raises:

ValueError – If a parameter of the symbolic function is not \(x\).

classmethod from_Gaussians(*args, **kwargs)[source]

Note

This is an abstract class method.

Initialization of the potential from Gaussian functions rather than from the parameters defining those Gaussian functions.

Returns:Potential initialized from multiple Gaussian functions.
Return type:MultipleGaussianPotential
gaussians

Note

This is an abstract property.

Returns:All the Gaussian functions used to create the potential.
Return type:list
sigmas
Returns:Sigmas of the Gaussian functions of the potential.
Return type:list
centers
Returns:Centers of the Gaussian functions of the potential.
Return type:list
amplitudes
Returns:Amplitudes of the Gaussian functions of the potential.
Return type:list
class siegpy.symbolicpotentials.TwoGaussianPotential(sigma1, xc1, h1, sigma2, xc2, h2, grid=None)[source]

Bases: siegpy.symbolicpotentials.MultipleGaussianPotential

This class defines a potential made of the sum of two Gaussian functions.

Parameters:
  • sigma1 (float) – Sigma of the first Gaussian function.
  • xc1 (float) – Center of the first Gaussian function.
  • h1 (float) – Amplitude of the first Gaussian function.
  • sigma2 (float) – Sigma of the second Gaussian function.
  • xc2 (float) – Center of the second Gaussian function.
  • h2 (float) – Amplitude of the second Gaussian function.
  • grid (numpy array) – Discretization grid of the potential (optional).
classmethod from_Gaussians(gauss1, gauss2, grid=None)[source]

Initialization of a TwoGaussianPotential instance from two Gaussian functions.

Parameters:
  • gauss1 (Gaussian) – First Gaussian function.
  • gauss2 (Gaussian) – Second Gaussian function.
  • grid (numpy array) – Discretization grid of the potential (optional).
Returns:

Potential initialized from two Gaussian functions.

Return type:

TwoGaussianPotential

gaussian1
Returns:First Gaussian function of the potential.
Return type:Gaussian
gaussian2
Returns:Second Gaussian function of the potential.
Return type:Gaussian
gaussians
Returns:Both Gaussian functions of the potential.
Return type:list
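
A hedged construction sketch (all numerical parameters and the grid are hypothetical values):

>>> import numpy as np
>>> from siegpy.symbolicpotentials import TwoGaussianPotential
>>> xgrid = np.linspace(-8, 8, 321)
>>> pot = TwoGaussianPotential(1.0, -2.0, -5.0, 1.0, 2.0, -5.0, grid=xgrid)
>>> centers = pot.centers  # the xc1 and xc2 values
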
class siegpy.symbolicpotentials.FourGaussianPotential(sigma1, xc1, h1, sigma2, xc2, h2, sigma3, xc3, h3, sigma4, xc4, h4, grid=None)[source]

Bases: siegpy.symbolicpotentials.MultipleGaussianPotential

This class defines a potential made of the sum of four Gaussian functions.

Parameters:
  • sigma1 (float) – Sigma of the first Gaussian function.
  • xc1 (float) – Center of the first Gaussian function.
  • h1 (float) – Amplitude of the first Gaussian function.
  • sigma2 (float) – Sigma of the second Gaussian function.
  • xc2 (float) – Center of the second Gaussian function.
  • h2 (float) – Amplitude of the second Gaussian function.
  • sigma3 (float) – Sigma of the third Gaussian function.
  • xc3 (float) – Center of the third Gaussian function.
  • h3 (float) – Amplitude of the third Gaussian function.
  • sigma4 (float) – Sigma of the fourth Gaussian function.
  • xc4 (float) – Center of the fourth Gaussian function.
  • h4 (float) – Amplitude of the fourth Gaussian function.
  • grid (numpy array) – Discretization grid of the potential (optional).
classmethod from_Gaussians(gauss1, gauss2, gauss3, gauss4, grid=None)[source]

Initialization of a FourGaussianPotential instance from four Gaussian functions.

Parameters:
  • gauss1 (Gaussian) – First Gaussian function.
  • gauss2 (Gaussian) – Second Gaussian function.
  • gauss3 (Gaussian) – Third Gaussian function.
  • gauss4 (Gaussian) – Fourth Gaussian function.
  • grid (numpy array) – Discretization grid of the potential (optional).
Returns:

Potential initialized from four Gaussian functions.

Return type:

FourGaussianPotential

gaussian1
Returns:First Gaussian function of the potential.
Return type:Gaussian
gaussian2
Returns:Second Gaussian function of the potential.
Return type:Gaussian
gaussian3
Returns:Third Gaussian function of the potential.
Return type:Gaussian
gaussian4
Returns:Fourth Gaussian function of the potential.
Return type:Gaussian
gaussians
Returns:Four Gaussian functions of the potential.
Return type:list

CoordMap

Various classes are defined to represent different coordinate mappings. They define the coordinate transformation (and its derivatives) so that the Reflection-Free Complex Absorbing Potentials and the virial operator can be easily discretized over the discretization grid.

The CoordMap class is the most basic one. It is an abstract class, requiring only a complex scaling angle \(\theta\) and a discretization grid.

The Uniform Complex Scaling (UCS) transformation \(F_{UCS}: x \mapsto x e^{i \theta}\) is easily derived from it in the UniformCoordMap class.

Another well-known transformation is the Exterior Complex Scaling (ECS), which leaves the potential unscaled inside a region \([-x_0, x_0]\) (i.e., \(F_{ECS}: x \mapsto x\)), while amounting to the UCS outside (i.e. \((x - x_0) e^{i \theta}\) for \(x > x_0\)). This has the advantage of leaving the innermost potential unscaled.

However, the most efficient coordinate transformations discussed in the literature (when it comes to finding Siegert states numerically) are known as Smooth Exterior Complex Scaling (SECS). Contrary to the usual ECS, there are smooth transitions between both regimes, hence their name. The sharpness of these transitions is then controlled by another parameter, \(\lambda\). The abstract base class SmoothExtCoordMap allows for the representation of such coordinate transformations.

A SECS generally relies on a function \(q\) that smoothly goes from 0 to 1 around \(\pm x_0\). This is why a SmoothFuncCoordMap class is also defined as an abstract base class deriving from the SmoothExtCoordMap class. In practice, all the SECS implemented in SiegPy derive from the SmoothFuncCoordMap class.

There are two main possibilities to define a smooth coordinate transformation using a smooth function \(q\):

  • \(F_{KG}: x \mapsto x e^{i \theta q(x)}\), that will be called the Kalita-Gupta (KG) coordinate transformation,
  • \(F_{S}: x \mapsto F_S(x)\) such that its derivative with respect to \(x\) is \(F_S^\prime = f_S: x \mapsto 1 + (e^{i \theta} - 1) q(x)\). This will be labeled as the Simon coordinate transformation.

This is the reason why two other abstract base classes were defined: SimonCoordMap and KGCoordMap. They both require a smooth function \(q\) as a parameter.

Two main types of smooth functions are implemented in SiegPy: one based on the error function \(\text{erf}\), the other on \(\tanh\), hence giving rise to four classes that can readily be used to find Siegert states numerically: ErfSimonCoordMap, TanhSimonCoordMap, ErfKGCoordMap and TanhKGCoordMap.

See smoothfunctions for more details on the smooth functions.
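
As a hedged illustration, one of these concrete coordinate mappings can be instantiated and evaluated on a grid as follows (the complex scaling angle, inflection point, sharpness and grid are hypothetical values):

>>> import numpy as np
>>> from siegpy.coordinatemappings import ErfKGCoordMap
>>> xgrid = np.linspace(-8, 8, 321)
>>> cm = ErfKGCoordMap(0.4, 6.0, 1.5, grid=xgrid)
>>> F = cm.values     # complex scaled coordinates
>>> f = cm.f_values   # first derivative of the coordinate mapping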

class siegpy.coordinatemappings.CoordMap(theta, grid, GCVT)[source]

Bases: object

Note

This is an abstract class.

Base class of all the other coordinate mappings.

Parameters:
  • theta (float) – Complex scaling angle.
  • grid (numpy array or None) – Discretization grid.
  • GCVT (bool) – Stands for Generalized Complex Virial Theorem. If it is set to True, then only one virial value is computed, else there is one per coordinate mapping parameter.
theta
Returns:Complex scaling angle.
Return type:float
GCVT
Returns:Parameter stating which virial operator(s) have to be used.
Return type:bool
grid
Returns:Discretization grid.
Return type:numpy array
_to_be_updated(new_grid)[source]
Parameters:new_grid (numpy array) – New discretization grid.
Returns:True if the grid has to be updated.
Return type:bool
values
Returns:Values of the coordinate transformation.
Return type:numpy array
f_values
Returns:Values of the first derivative of the coordinate transformation.
Return type:numpy array
f_dx_values
Returns:Values of the second derivative of the coordinate transformation.
Return type:numpy array
f_dx2_values
Returns:Values of the third derivative of the coordinate transformation.
Return type:numpy array
dxi_values
Returns:Values of the first derivative with respect to all the parameters of the coordinate transformation.
Return type:numpy array
f_dxi_values
Returns:Values of the first derivative with respect to all the parameters of the first derivative of the coordinate transformation.
Return type:numpy array
V0_values
Returns:Values of the first additional RF-CAP.
Return type:numpy array
V1_values
Returns:Values of the second additional RF-CAP.
Return type:numpy array
V2_values
Returns:Values of the third additional RF-CAP.
Return type:numpy array
U0_values
Returns:Values of the first additional virial operator potential.
Return type:numpy array
U1_values
Returns:Values of the second additional virial operator potential.
Return type:numpy array
U2_values
Returns:Values of the third additional virial operator potential.
Return type:numpy array
U11_values
Returns:Values of the fourth additional virial operator potential.
Return type:numpy array
_update_all_values()[source]

Update the values of the coordinate mapping and its derivatives with respect to \(x\) and with respect to the coordinate mapping parameters, as well as related quantities such as the Reflection-Free Complex Absorbing Potential and the virial operator.

_update_x_deriv_values()[source]

Update the values of the coordinate mapping and its first three derivatives with respect to \(x\).

_update_x_deriv_from_grid()[source]

Update the values of the coordinate mapping and its first three derivatives with respect to \(x\), if the grid is not None.

_update_param_deriv_values()[source]

Update the values of the derivative with respect to the coordinate mapping parameters of the coordinate mapping and its first derivative with respect to \(x\).

_update_param_deriv_from_grid()[source]

Update the values of the derivative with respect to the coordinate mapping parameters of the coordinate mapping and its first derivative with respect to \(x\), if the grid is not None.

_get_values(*args)[source]

Note

This is an abstract method.

Evaluate the values of the coordinate mapping with respect to \(x\).

_get_f_values(*args)[source]

Note

This is an abstract method.

Evaluate the values of the first derivative of the coordinate mapping with respect to \(x\).

_get_f_dx_values(*args)[source]

Note

This is an abstract method.

Evaluate the values of the second derivative of the coordinate mapping with respect to \(x\).

_get_f_dx2_values(*args)[source]

Note

This is an abstract method.

Evaluate the values of the third derivative of the coordinate mapping with respect to \(x\).

_get_dxi_values(*args)[source]

Note

This is an abstract method.

Evaluate the values of the first derivative of the coordinate mapping with respect to the coordinate mapping parameters.

_get_f_dxi_values(*args)[source]

Note

This is an abstract method.

Evaluate the values of the first derivative with respect to the coordinate mapping parameters of the first derivative with respect to \(x\) of the coordinate mapping.

_update_RFCAP_values()[source]

Update the various potentials used to define the Reflection-Free Complex Absorbing Potentials

_update_virial_values()[source]

Update the values of the different potentials used to define the various virial operators.

class siegpy.coordinatemappings.UniformCoordMap(theta, GCVT=True, grid=None)[source]

Bases: siegpy.coordinatemappings.CoordMap

The uniform coordinate transformation corresponds to: \(x \mapsto x e^{i \theta}\).

Parameters:
  • theta (float) – Complex scaling angle.
  • GCVT (bool) – Stands for Generalized Complex Virial Theorem. If it is set to True, then only one virial value is computed, else there is one per parameter (here, theta). Defaults to True.
  • grid (numpy array) – Discretization grid (optional).
_get_values()[source]
Returns:Values of the coordinate mapping with respect to \(x\).
Return type:numpy array
_get_f_values()[source]
Returns:Values of the first derivative of the coordinate mapping with respect to \(x\).
Return type:numpy array
_get_f_dx_values()[source]
Returns:Values of the second derivative of the coordinate mapping with respect to \(x\).
Return type:numpy array
_get_f_dx2_values()[source]
Returns:Values of the third derivative of the coordinate mapping with respect to \(x\).
Return type:numpy array
_get_dxi_values()[source]
Returns:Values of the first derivative of the coordinate mapping with respect to the coordinate mapping parameters.
Return type:numpy array
_get_f_dxi_values()[source]
Returns:Values of the first derivative with respect to the coordinate mapping parameters of the first derivative with respect to \(x\) of the coordinate mapping.
Return type:numpy array
class siegpy.coordinatemappings.SmoothExtCoordMap(theta, x0, lbda, GCVT, grid)[source]

Bases: siegpy.coordinatemappings.CoordMap

Note

This is an abstract class.

This is the base class for all the other classes implementing a particular type of smooth exterior complex scaling.

Warning

This class must be used to create child classes if no smooth function is actually required.

Parameters:
  • theta (float) – Complex scaling angle.
  • x0 (float) – Inflection point.
  • lbda (float) – Sharpness parameter
  • GCVT (bool) – Stands for Generalized Complex Virial Theorem. If it is set to True, then only one virial value is computed, else there are three, one for each parameter (here, theta, x0 and lbda).
  • grid (numpy array or None) – Discretization grid.
Raises:

ValueError – If x0 or lbda is not positive.

x0
Returns:Inflection point.
Return type:float
lbda
Returns:Sharpness parameter.
Return type:float
class siegpy.coordinatemappings.SmoothFuncCoordMap(theta, smooth_func, GCVT, grid)[source]

Bases: siegpy.coordinatemappings.SmoothExtCoordMap

Note

This is an abstract class.

Class used to define methods that are not necessarily shared by all SmoothExtCoordMap child classes, because a smooth exterior coordinate mapping could be defined without the explicit use of a smooth function.

Warning

This class must be used to create child classes if a smooth function is required.

Parameters:
  • theta (float) – Complex scaling angle.
  • smooth_func (SmoothFunction) – Smooth function \(q\).
  • GCVT (bool) – Stands for Generalized Complex Virial Theorem. If it is set to True, then only one virial value is computed, else there is one per coordinate mapping parameter (including theta here).
  • grid (numpy array or None) – Discretization grid.
smooth_func
Returns:Smooth function \(q\).
Return type:SmoothFunction
_update_all_values()[source]

Update the grid and values of the smooth function before using the parent method updating all other values.

_get_gp_and_gm()[source]

Note

This function only avoids code repetition in child classes.

Returns:Two numpy arrays that are often used by the child classes and depend only on the instance properties.
Return type:tuple made of two numpy arrays
class siegpy.coordinatemappings.SimonCoordMap(theta, smooth_func, GCVT, grid)[source]

Bases: siegpy.coordinatemappings.SmoothFuncCoordMap

Note

This is an abstract class.

This class allows the representation of the smooth exterior coordinate mapping \(F: x \mapsto F(x)\) such that its derivative with respect to \(x\) is equal to \(1 + (e^{i \theta} - 1) q(x)\), where \(q\) is a smooth function.

Parameters:
  • theta (float) – Complex scaling angle.
  • smooth_func (SmoothFunction) – Smooth function \(q\).
  • GCVT (bool) – Stands for Generalized Complex Virial Theorem. If it is set to True, then only one virial value is computed, else there is one per coordinate mapping parameter (including theta here).
  • grid (numpy array or None) – Discretization grid.
_update_x_deriv_from_grid()[source]

Update the values of the coordinate mapping and its first three derivatives with respect to \(x\), if the grid is not None.

_update_param_deriv_from_grid()[source]

Update the values of the derivative with respect to the coordinate mapping parameters of the coordinate mapping and its first derivative with respect to \(x\).

_get_values(m)[source]
Parameters:m (numpy array) – Values of the intermediary function that is the factor of the \((e^{i \theta} - 1)\) term in the coordinate mapping function.
Returns:Values of the coordinate mapping with respect to \(x\).
Return type:numpy array
_get_f_values()[source]
Returns:Values of the first derivative of the coordinate mapping with respect to \(x\).
Return type:numpy array
_get_f_dx_values()[source]
Returns:Values of the second derivative of the coordinate mapping with respect to \(x\).
Return type:numpy array
_get_f_dx2_values()[source]
Returns:Values of the third derivative of the coordinate mapping with respect to \(x\).
Return type:numpy array
_get_dxi_values(m, m_dx0, m_dl)[source]
Parameters:
  • m (numpy array) – Values of the intermediary function.
  • m_dx0 (numpy array) – First derivative of the intermediary function with respect to x0.
  • m_dl (numpy array) – First derivative of the intermediary function with respect to the sharpness parameter.
Returns:

Values of the first derivative of the coordinate mapping with respect to the coordinate mapping parameters.

Return type:

numpy array

_get_f_dxi_values()[source]
Returns:Values of the first derivative with respect to the coordinate mapping parameters of the first derivative with respect to \(x\) of the coordinate mapping.
Return type:numpy array
_get_m_values(grid, lbda, gp, gm)[source]

Note

This is an abstract method.

Parameters:
  • grid (numpy array) – Discretization grid.
  • lbda (float) – Sharpness parameter.
  • gp (numpy array) – Intermediary value.
  • gm (numpy array) – Intermediary value.
Returns:

The values of an intermediate function.

Return type:

numpy array

_get_m_dx0_values(grid, lbda, gp, gm)[source]

Note

This is an abstract method.

Parameters:
  • grid (numpy array) – Discretization grid.
  • lbda (float) – Sharpness parameter.
  • gp (numpy array) – Intermediary value.
  • gm (numpy array) – Intermediary value.
Returns:

The values of the first derivative with respect to x0 of an intermediate function.

Return type:

numpy array

_get_m_dl_values(grid, lbda, gp, gm)[source]

Note

This is an abstract method.

Parameters:
  • grid (numpy array) – Discretization grid.
  • lbda (float) – Sharpness parameter.
  • gp (numpy array) – Intermediary value.
  • gm (numpy array) – Intermediary value.
Returns:

The values of the first derivative with respect to the sharpness parameter lambda of an intermediate function.

Return type:

numpy array

class siegpy.coordinatemappings.TanhSimonCoordMap(theta, x0, lbda, GCVT=True, grid=None)[source]

Bases: siegpy.coordinatemappings.SimonCoordMap

This class defines the smooth exterior coordinate mapping of the Simon type using the smooth function based on \(\tanh\).

Parameters:
  • theta (float) – Complex scaling angle.
  • x0 (float) – Inflection point.
  • lbda (float) – Sharpness parameter
  • GCVT (bool) – Stands for Generalized Complex Virial Theorem. If it is set to True, then only one virial value is computed, else there is one per coordinate mapping parameter (here, theta, x0 and lbda). Defaults to True.
  • grid (numpy array or None) – Discretization grid (optional).
_get_m_values(gp, gm)[source]
Parameters:
  • gp (numpy array) – Intermediary value.
  • gm (numpy array) – Intermediary value.
Returns:

The values of an intermediate function.

Return type:

numpy array

_get_m_dx0_values(gp, gm)[source]
Parameters:
  • gp (numpy array) – Intermediary value.
  • gm (numpy array) – Intermediary value.
Returns:

The values of the first derivative with respect to x0 of an intermediate function.

Return type:

numpy array

_get_m_dl_values(gp, gm)[source]
Parameters:
  • gp (numpy array) – Intermediary value.
  • gm (numpy array) – Intermediary value.
Returns:

The values of the first derivative with respect to the sharpness parameter lambda of an intermediate function.

Return type:

numpy array

class siegpy.coordinatemappings.ErfSimonCoordMap(theta, x0, lbda, GCVT=True, grid=None)[source]

Bases: siegpy.coordinatemappings.SimonCoordMap

This class defines the smooth exterior coordinate mapping of the Simon type using the smooth function based on \(\text{erf}\).

Parameters:
  • theta (float) – Complex scaling angle.
  • x0 (float) – Inflection point.
  • lbda (float) – Sharpness parameter
  • GCVT (bool) – Stands for Generalized Complex Virial Theorem. If it is set to True, then only one virial value is computed, else there is one per coordinate mapping parameter (here, theta, x0 and lbda). Defaults to True.
  • grid (numpy array or None) – Discretization grid (optional).
_get_m_values(gp, gm)[source]
Parameters:
  • gp (numpy array) – Intermediary value.
  • gm (numpy array) – Intermediary value.
Returns:

The values of an intermediate function.

Return type:

numpy array

_get_m_dx0_values(gp, gm)[source]
Parameters:
  • gp (numpy array) – Intermediary value.
  • gm (numpy array) – Intermediary value.
Returns:

The values of the first derivative with respect to x0 of an intermediate function.

Return type:

numpy array

_get_m_dl_values(gp, gm)[source]
Parameters:
  • gp (numpy array) – Intermediary value.
  • gm (numpy array) – Intermediary value.
Returns:

The values of the first derivative with respect to the sharpness parameter lambda of an intermediate function.

Return type:

numpy array

class siegpy.coordinatemappings.KGCoordMap(theta, smooth_func, GCVT, grid)[source]

Bases: siegpy.coordinatemappings.SmoothFuncCoordMap

Note

This is an abstract class.

This class allows the representation of the smooth exterior coordinate mapping \(F: x \mapsto x e^{i \theta q(x)}\), that we will call the Kalita-Gupta (KG) coordinate mapping. \(q\) represents the smooth function.

Parameters:
  • theta (float) – Complex scaling angle.
  • smooth_func (SmoothFunction) – Smooth function \(q\).
  • GCVT (bool) – Stands for Generalized Complex Virial Theorem. If it is set to True, then only one virial value is computed, else there is one per coordinate mapping parameter (including theta here).
  • grid (numpy array or None) – Discretization grid.
_update_x_deriv_from_grid()[source]

Update the values of the coordinate mapping and its first three derivatives with respect to \(x\), if the grid is not None.

_update_param_deriv_from_grid()[source]

Update the values of the derivative with respect to the coordinate mapping parameters of the coordinate mapping and its first derivative with respect to \(x\).

_get_values()[source]
Returns:Values of the coordinate mapping with respect to \(x\).
Return type:numpy array
_get_f_values(F)[source]
Parameters:F (numpy array) – Values of the coordinate mapping.
Returns:Values of the first derivative of the coordinate mapping with respect to \(x\).
Return type:numpy array
_get_f_dx_values(F, f)[source]
Parameters:
  • F (numpy array) – Values of the coordinate mapping.
  • f (numpy array) – Values of the first derivative of the coordinate mapping with respect to \(x\).
Returns:

Values of the second derivative of the coordinate mapping with respect to \(x\).

Return type:

numpy array

_get_f_dx2_values(F, f, f_dx)[source]
Parameters:
  • F (numpy array) – Values of the coordinate mapping.
  • f (numpy array) – Values of the first derivative of the coordinate mapping with respect to \(x\).
  • f_dx (numpy array) – Values of the second derivative of the coordinate mapping with respect to \(x\).
Returns:

Values of the third derivative of the coordinate mapping with respect to \(x\).

Return type:

numpy array

_get_dxi_values(*args)[source]

Does nothing, as its role is already performed by the _update_param_deriv_from_grid() method

_get_f_dxi_values(F, F_dth, F_dx0, F_dl)[source]
Parameters:
  • F (numpy array) – Values of the coordinate mapping.
  • F_dth (numpy array) – Values of the first derivative of the coordinate mapping with respect to the complex scaling angle.
  • F_dx0 (numpy array) – Values of the first derivative of the coordinate mapping with respect to x0.
  • F_dl (numpy array) – Values of the first derivative of the coordinate mapping with respect to the sharpness parameter.
Returns:

Values of the first derivative with respect to the coordinate mapping parameters of the first derivative with respect to \(x\) of the coordinate mapping.

Return type:

numpy array

class siegpy.coordinatemappings.TanhKGCoordMap(theta, x0, lbda, GCVT=True, grid=None)[source]

Bases: siegpy.coordinatemappings.KGCoordMap

This class defines the smooth exterior coordinate mapping of the Kalita-Gupta type using the smooth function based on \(\tanh\).

Parameters:
  • theta (float) – Complex scaling angle.
  • x0 (float) – Inflection point.
  • lbda (float) – Sharpness parameter
  • GCVT (bool) – Stands for Generalized Complex Virial Theorem. If it is set to True, then only one virial value is computed, else there is one per coordinate mapping parameter (here, theta, x0 and lbda). Defaults to True.
  • grid (numpy array or None) – Discretization grid (optional).
class siegpy.coordinatemappings.ErfKGCoordMap(theta, x0, lbda, GCVT=True, grid=None)[source]

Bases: siegpy.coordinatemappings.KGCoordMap

This class defines the smooth exterior coordinate mapping of the Kalita-Gupta type using the smooth function based on \(\text{erf}\).

Parameters:
  • theta (float) – Complex scaling angle.
  • x0 (float) – Inflection point.
  • lbda (float) – Sharpness parameter
  • GCVT (bool) – Stands for Generalized Complex Virial Theorem. If it is set to True, then only one virial value is computed, else there is one per coordinate mapping parameter (here, theta, x0 and lbda). Defaults to True.
  • grid (numpy array) – Discretization grid (optional).

Hamiltonian

class siegpy.hamiltonian.Hamiltonian(potential, coord_map, filters=<siegpy.filters.WaveletFilters object>)[source]

Bases: object

A Hamiltonian has to be defined when one is interested in numerically finding the Siegert states of a potential whose eigenstates are not known analytically.

A Hamiltonian is defined by a potential and a coordinate mapping, which gives rise to extra potentials to be added, known as the Reflection-Free Complex Absorbing Potentials (RF-CAP).

Filters defining the gradient and Laplacian operators are also required. The default value corresponds to Daubechies wavelet filters.

Parameters:
  • potential (Potential) – Potential studied.
  • coord_map (CoordMap) – Coordinate mapping used.
  • filters (Filters) – Filters used to define the Hamiltonian matrix.
potential
Returns:Potential considered.
Return type:Potential
coord_map
Returns:Complex scaling considered.
Return type:CoordMap
filters
Returns:Filters used to define the matrices.
Return type:Filters
gradient_matrix
Returns:Gradient matrix used to define the matrices.
Return type:2D numpy array
laplacian_matrix
Returns:Laplacian matrix used to define the matrices.
Return type:2D numpy array
magic_matrix
Returns:Magic filter matrix used to define the matrices.
Return type:2D numpy array
matrix
Returns:Hamiltonian matrix.
Return type:2D numpy array
virial_matrix
Returns:Virial operator matrix.
Return type:2D numpy array
_find_hamiltonian_matrix()[source]

Evaluate the Hamiltonian matrix.

Returns:Hamiltonian matrix.
Return type:2D numpy array
_find_virial_matrix()[source]

Evaluate the virial operator matrix.

Returns:Virial operator matrix.
Return type:2D numpy array
_build_virial_matrix(U0, U1, U2, U11)[source]

Build the matrix operator given a set of virial potentials.

Parameters:
  • U0 (numpy array) – First virial potential.
  • U1 (numpy array) – Second virial potential.
  • U2 (numpy array) – Third virial potential.
  • U11 (numpy array) – Fourth virial potential.
Returns:

Virial operator as a matrix.

Return type:

2D numpy array

solve(max_virial=None)[source]

Find the eigenstates of the potential and evaluate the virial for each of them. This is the main method of the Hamiltonian class.

Parameters:max_virial – Maximal virial value for a state to be considered as a Siegert state.
Returns:Basis set made of the eigenstates of the Hamiltonian.
Return type:BasisSet
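
A hedged end-to-end sketch of the numerical workflow (the potential, coordinate mapping and threshold values are hypothetical; the WoodsSaxonPotential and ErfKGCoordMap classes documented in this section are used, together with the default Daubechies wavelet filters):

>>> import numpy as np
>>> from siegpy.symbolicpotentials import WoodsSaxonPotential
>>> from siegpy.coordinatemappings import ErfKGCoordMap
>>> from siegpy.hamiltonian import Hamiltonian
>>> xgrid = np.linspace(-10, 10, 501)
>>> pot = WoodsSaxonPotential(5.0, 10.0, 3.0, grid=xgrid)
>>> cm = ErfKGCoordMap(0.4, 6.0, 1.5, grid=xgrid)
>>> ham = Hamiltonian(pot, cm)  # default wavelet filters
>>> basis = ham.solve(max_virial=10**(-3))  # threshold below which a state is considered a Siegert state
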
_pot_to_mat(potential_values)[source]

Convert the potential values to a diagonal matrix.

Parameters:potential_values (numpy array) – Values of a potential.
Returns:Potential as a diagonal matrix.
Return type:2D numpy array

Eigenstate

class siegpy.eigenstates.Eigenstate(grid, values, energy, Siegert_type='U', virial=None)[source]

Bases: siegpy.functions.Function

Class defining an eigenstate. The main difference with the Function class is that an Eigenstate instance is associated with an energy, and possibly a Siegert type and a virial value.

Parameters:
  • grid (list or set or numpy array) – Discretization grid.
  • values (list or set or numpy array) – Function evaluated on the grid points.
  • energy (float or complex) – Energy of the eigenstate.
  • Siegert_type (str) – Type of the Eigenstate (default to ‘U’ for unknown).
  • virial (float or complex) – Value of the virial theorem for the eigenstate (optional).

Examples

An Eigenstate instance has several attributes:

>>> wf = Eigenstate([0, 1, 2], [1, 2, 3], 1.0)
>>> wf.grid
array([0, 1, 2])
>>> wf.values
array([1, 2, 3])
>>> wf.energy
1.0
energy
Returns:Energy of the eigenstate.
Return type:float or complex
wavenumber
Returns:Wavenumber of the eigenstate
Return type:complex
virial
Returns:Virial of the eigenstate.
Return type:float or complex
Siegert_type
Returns:The type of the state.
Return type:str or NoneType
scal_prod(other, xlim=None)[source]

Override of the siegpy.functions.Function.scal_prod() method to take into account the c-product for resonant and anti-resonant states.

Parameters:
  • other (Function) – Another function.
  • xlim (tuple(float or int, float or int)) – Range of the x-axis for the integration (optional).
Returns:

Value of the scalar product

Return type:

float

SmoothFunction

The SmoothFunction class and its methods are defined hereafter.

It is used as the base class of two more specific smooth functions:

class siegpy.smoothfunctions.SmoothFunction(x0, lbda, c0=1, cp=1, cm=-1, grid=None)[source]

Bases: object

Note

This is an abstract class.

Smooth functions are used when a smooth complex scaling is applied to the potential. The aim of this class is to easily update the values of the smooth function (and its derivatives) when the Reflection-Free Complex Absorbing Potentials and the virial operator are defined.

The smooth functions \(q\) used here are of the form:

\[q(x) = c_0 + c_+ r(\lambda (x - x_0)) - c_- r(\lambda (x + x_0))\]

The smooth function should be 0 on a large part of the \([-x_0, x_0]\) range of the grid, while tending to 1 at both infinities.

The initialization of a smooth function requires the value of \(x_0\) and of the parameter \(\lambda\), indicating how smoothly the function goes from 0 to 1 (the larger, the sharper).

Parameters:
  • x0 (float) – Inflection point.
  • lbda (float) – Sharpness parameter (the function is smoother for smaller values).
  • c0 (float) – Constant term \(c_0\) of the smooth function definition (default to 1).
  • cp (float) – Constant term \(c_+\) of the smooth function definition (default to 1).
  • cm (float) – Constant term \(c_-\) of the smooth function definition (default to -1).
  • grid (numpy array) – Discretization grid of the smooth function (optional).
Raises:

ValueError – If x0 or lbda are not strictly positive.

x0
Returns:Inflection point.
Return type:float
lbda
Returns:Sharpness parameter.
Return type:float
grid
Returns:Discretization grid of the smooth function.
Return type:numpy array
_to_be_updated(new_grid)[source]
Returns:True if the grid has to be updated.
Return type:bool
_update_all_values()[source]

Update the values of the smooth function and all of its derivatives.

values
Returns:Values of the smooth function.
Return type:numpy array
dx_values
Returns:Values of the first derivative of the smooth function.
Return type:numpy array
dx2_values
Returns:Values of the second derivative of the smooth function.
Return type:numpy array
dx3_values
Returns:Values of the third derivative of the smooth function.
Return type:numpy array
dxi_values
Returns:Values of the first derivative of the smooth function with respect to both parameters \(x_0\) and \(\lambda\).
Return type:numpy array
dx_dxi_values
Returns:Values of the first derivative with respect to both parameters \(x_0\) and \(\lambda\) of the first derivative of the smooth function.
Return type:numpy array
_get_r_values(grid)[source]

Note

This is an abstract method.

Parameters:grid (numpy array) – Discretization grid.
Returns:Values of the function \(r\), evaluated on a given set of grid points.
Return type:numpy array
_get_r_dx_values(grid)[source]

Note

This is an abstract method.

Parameters:grid (numpy array) – Discretization grid.
Returns:Values of the first derivative of the function \(r\), evaluated on a given set of grid points.
_get_r_dx2_values(grid)[source]

Note

This is an abstract method.

Parameters:grid (numpy array) – Discretization grid.
Returns:Values of the second derivative of the function \(r\), evaluated on a given set of grid points.
_get_r_dx3_values(grid)[source]

Note

This is an abstract method.

Parameters:grid (numpy array) – Discretization grid.
Returns:Values of the third derivative of the function \(r\), evaluated on a given set of grid points.
class siegpy.smoothfunctions.ErfSmoothFunction(x0, lbda, c0=1, cp=1, cm=-1, grid=None)[source]

Bases: siegpy.smoothfunctions.SmoothFunction

In this case, the function \(r\) corresponds to the error function \(\text{erf}\).

The smooth functions \(q\) used here are of the form:

\[q(x) = c_0 + c_+ r(\lambda (x - x_0)) - c_- r(\lambda (x + x_0))\]

The smooth function should be 0 on a large part of the \([-x_0, x_0]\) range of the grid, while tending to 1 at both infinities.

The initialization of a smooth function requires the value of \(x_0\) and of the parameter \(\lambda\), indicating how smoothly the function goes from 0 to 1 (the larger, the sharper).

Parameters:
  • x0 (float) – Inflection point.
  • lbda (float) – Sharpness parameter (the function is smoother for smaller values).
  • c0 (float) – Constant term \(c_0\) of the smooth function definition (default to 1).
  • cp (float) – Constant term \(c_+\) of the smooth function definition (default to 1).
  • cm (float) – Constant term \(c_-\) of the smooth function definition (default to -1).
  • grid (numpy array) – Discretization grid of the smooth function (optional).
Raises:

ValueError – If x0 or lbda are not strictly positive.

_get_r_values(grid)[source]
Parameters:grid (numpy array) – Discretization grid.
Returns:Values of the function \(\text{erf}\), evaluated on a given set of grid points.
_get_r_dx_values(grid)[source]
Parameters:grid (numpy array) – Discretization grid.
Returns:Values of the first derivative of the function \(\text{erf}\), evaluated on a given set of grid points.
_get_r_dx2_values(grid)[source]
Parameters:grid (numpy array) – Discretization grid.
Returns:Values of the second derivative of the function \(\text{erf}\), evaluated on a given set of grid points.
_get_r_dx3_values(grid)[source]
Parameters:grid (numpy array) – Discretization grid.
Returns:Values of the third derivative of the function \(\text{erf}\), evaluated on a given set of grid points.
class siegpy.smoothfunctions.TanhSmoothFunction(x0, lbda, c0=1, cp=1, cm=-1, grid=None)[source]

Bases: siegpy.smoothfunctions.SmoothFunction

In this case, the function \(r\) corresponds to the hyperbolic tangent \(\tanh\).

The smooth functions \(q\) used here are of the form:

\[q(x) = c_0 + c_+ r(\lambda (x - x_0)) - c_- r(\lambda (x + x_0))\]

The smooth function should be 0 on a large part of the \([-x_0, x_0]\) range of the grid, while tending to 1 at both infinities.

The initialization of a smooth function requires the value of \(x_0\) and of the parameter \(\lambda\), indicating how smoothly the function goes from 0 to 1 (the larger, the sharper).

Parameters:
  • x0 (float) – Inflection point.
  • lbda (float) – Sharpness parameter (the function is smoother for smaller values).
  • c0 (float) – Constant term \(c_0\) of the smooth function definition (default to 1).
  • cp (float) – Constant term \(c_+\) of the smooth function definition (default to 1).
  • cm (float) – Constant term \(c_-\) of the smooth function definition (default to -1).
  • grid (numpy array) – Discretization grid of the smooth function (optional).
Raises:

ValueError – If x0 or lbda are not strictly positive.
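
A hedged sketch evaluating such a smooth function and its derivatives on a grid (x0, lbda and the grid are hypothetical values):

>>> import numpy as np
>>> from siegpy.smoothfunctions import TanhSmoothFunction
>>> xgrid = np.linspace(-10, 10, 401)
>>> q = TanhSmoothFunction(6.0, 1.5, grid=xgrid)
>>> q_values = q.values  # values of the smooth function on the grid
>>> q_dx = q.dx_values   # values of its first derivative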

_get_r_values(grid)[source]
Parameters:grid (numpy array) – Discretization grid.
Returns:Values of the function \(\tanh\), evaluated on a given set of grid points.
_get_r_dx_values(grid)[source]
Parameters:grid (numpy array) – Discretization grid.
Returns:Values of the first derivative of the function \(\tanh\), evaluated on a given set of grid points.
_get_r_dx2_values(grid)[source]
Parameters:grid (numpy array) – Discretization grid.
Returns:Values of the second derivative of the function \(\tanh\), evaluated on a given set of grid points.
_get_r_dx3_values(grid)[source]
Parameters:grid (numpy array) – Discretization grid.
Returns:Values of the third derivative of the function \(\tanh\), evaluated on a given set of grid points.

Filters

The Filters class is defined below.

The WaveletFilters class inherits from the previous class and is specific to Daubechies wavelets, where a so-called magic filter has to be defined, in addition to the gradient and laplacian filters.

Both classes are used to define some families of filters:

  • FD2_filters, for the Finite Difference filters of order 2,
  • FD8_filters, for the Finite Difference filters of order 8,
  • Sym8_filters, for a family of Daubechies wavelets filters.

These three families of filters are easily available:

>>> from siegpy import FD2_filters, FD8_filters, Sym8_filters
class siegpy.filters.Filters(grad_filter, laplac_filter)[source]

Bases: object

This class specifies a family of filters that are useful to describe numerical Hamiltonians. It allows the definition of the gradient and Laplacian operators in matrix form by means of filters and a specified number of grid points.

Parameters:
  • grad_filter (numpy array or list) – Gradient filter.
  • laplac_filter (numpy array or list) – Laplacian filter.
grad_filter
Returns:Gradient filter.
Return type:numpy array
laplac_filter
Returns:Laplacian filter.
Return type:numpy array
fill_gradient_matrix(npts)[source]

Fill a gradient matrix of dimension npts.

Parameters:npts (int) – Number of grid points.
Returns:Gradient matrix.
Return type:2D numpy array
fill_laplacian_matrix(npts)[source]

Fill a Laplacian matrix of dimension npts.

Parameters:npts (int) – Number of grid points.
Returns:Laplacian matrix.
Return type:2D numpy array
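
A hedged sketch using one of the predefined filter families (the number of grid points is arbitrary):

>>> from siegpy import FD2_filters
>>> grad_mat = FD2_filters.fill_gradient_matrix(10)     # gradient operator in matrix form
>>> laplac_mat = FD2_filters.fill_laplacian_matrix(10)  # Laplacian operator in matrix form
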
static _fill_matrix(_filter, npts)[source]

This method creates a matrix operator, given a filter and a number of grid points.

Parameters:
  • _filter (numpy array) – Filter of an operator.
  • npts (int) – Number of grid points.
Returns:

Operator in matrix form, filled using the given filter.

Return type:

numpy array

Raises:

ValueError – If the filter is larger than the discretization grid.

class siegpy.filters.WaveletFilters(grad_filter, laplac_filter, magic_filter)[source]

Bases: siegpy.filters.Filters

This class defines the methods specific to the Daubechies wavelet filters.

The main difference with respect to the parent class is the requirement of a so-called magic filter.

Parameters:
  • grad_filter (numpy array or list) – Gradient filter.
  • laplac_filter (numpy array or list) – Laplacian filter.
  • magic_filter (numpy array or list) – Magic filter.
magic_filter
Returns:Magic filter.
Return type:numpy array
fill_magic_matrix(npts)[source]

Fill a magic filter matrix of dimension npts.

Parameters:npts (int) – Number of grid points.
Returns:Magic filter matrix.
Return type:2D numpy array

Abstract classes for further analytic cases

Some abstract classes are provided in order to simplify the implementation of more analytic cases. Subclassing them should allow for minimum code writing while preserving all the SiegPy functionalities.

They were, for instance, used to define the classes that are specific to the 1D Square-Well Potential case.

AnalyticBasisSet

This abstract base class is meant to implement the main methods available to all basis set classes that are specific to an analytical case. This ensures that all analytical cases have the same basic API, while avoiding code repetition.

As an example, it is used to define the SWPBasisSet class, which is the specific basis set class for the analytical 1D Square-Well potential case. You might notice that only a few methods were implemented there.

class siegpy.analyticbasisset.AnalyticBasisSet(states=None)[source]

Bases: siegpy.basisset.BasisSet

This should be the base class for the specific basis set classes of any analytic cases available in SiegPy.

Its methods allow computing most of the main quantities, while forcing the child classes to implement the other ones.

For instance, any relevant Siegert state expansion (such as the Mittag-Leffler Expansion (MLE), which uses all Siegert states with a weight 1/2, or the Berggren expansion, which uses only bound and resonant states with a weight 1) of the quantities of interest (such as the completeness relation (CR), the zero operator, the strength function and the time-propagation) must therefore be defined here, as they should be valid for any analytical case.

On the other hand, all the methods used to define the analytical eigenstates must be implemented by the child classes.

An AnalyticBasisSet instance is initialized from a list of AnalyticEigenstate instances.

Parameters:states (list) – Analytic eigenstates of a Hamiltonian. If None, it means that the BasisSet instance is empty.
Raises:ValueError – If any state comes from a different potential than the others.
classmethod from_file(filename, grid=None, analytic=True, nres=None, kmax=None, bounds_only=False)[source]

Read a basis set from a binary file.

Parameters:
  • filename (str) – Name of a file containing a basis set.
  • grid (numpy array or list or set) – Discretization grid of the wavefunctions of the Siegert states (optional).
  • analytic (bool) – If True, the scalar products will be computed analytically.
  • nres (int) – Number of resonant couples in the Siegert basis set created (optional).
  • kmax (float) – Value of the maximal continuum wavenumber in the returned basis set, also containing the bound states, if there were any in the file (optional).
  • bounds_only (bool) – If True, the basis set returned contains only bound states.
Returns:

The basis set read from the file.

Return type:

AnalyticBasisSet

Examples

All the examples given below are for the SWP analytical case, but could easily be adapted to any other analytical case.

A basis set is read from a file in the following manner:

>>> from siegpy import SWPBasisSet
>>> filename = "doc/notebooks/siegerts.dat"
>>> bs = SWPBasisSet.from_file(filename)

It contains a certain number of states, with a given discretization grid (here, the grid is None) and analyticity:

>>> len(bs)
566
>>> bs.grid
>>> assert bs.analytic == True

It is possible to update the grid (and the values of the eigenstates at the same time) in the 1D SW potential case:

>>> bs = SWPBasisSet.from_file(filename, grid=[-2, -1, 0, 1, 2])
>>> bs.grid
array([-2, -1,  0,  1,  2])
>>> print(bs[0].values)
[ 0.18053285+0.j  0.51185847+0.j  0.63921710-0.j  0.51185847-0.j
  0.18053285-0.j]

The analyticity of the states can also be updated:

>>> bs = SWPBasisSet.from_file(filename, analytic=False)
>>> assert bs.analytic == False

If required, only the bound states are read:

>>> bounds = SWPBasisSet.from_file(filename, bounds_only=True)
>>> assert len(bounds) == len(bs.bounds)

The number of resonant and antiresonant states can also be chosen:

>>> siegerts = SWPBasisSet.from_file(filename, nres=5)
>>> assert len(siegerts.bounds) == len(bs.bounds)
>>> assert len(siegerts.antibounds) == len(bs.antibounds)
>>> assert len(siegerts.resonants) == 5
>>> assert len(siegerts.antiresonants) == 5

If the basis set contains continuum states, it is also possible to keep only the states whose wavenumber is smaller than kmax (in addition to the bound states).

classmethod find_Siegert_states()[source]

Note

This is an abstract class method.

Initialize a basis made of Siegert states found analytically.

Returns:A basis set made of AnalyticSiegert instances.
Return type:AnalyticBasisSet
classmethod find_continuum_states(*args, **kwargs)[source]

Note

This is an abstract class method.

Initialize a basis made of continuum states found analytically.

Returns:A basis set made of AnalyticContinuum instances.
Return type:AnalyticBasisSet
grid
Returns:Value of the discretization grid of all the states in the basis set.
Return type:numpy array or None
Raises:ValueError – If at least one state has a different grid than the others or if the basis set is empty.

Examples

In the following example, the states have no grid:

>>> from siegpy import SWPBasisSet
>>> siegerts = SWPBasisSet.from_file("doc/notebooks/siegerts.dat")
>>> siegerts.grid is None
True

You can also use the grid attribute to update the grid (and therefore all the values of the wavefunctions) of the states in the analytic basis set at the same time:

>>> xgrid = [-1, 0, 1]
>>> siegerts.grid = xgrid
>>> siegerts[-1].grid
array([-1,  0,  1])
analytic
Returns:Value of the analytic attribute of all the states in the basis set.
Return type:bool
Raises:ValueError – If some states have different value for analytic or if the basis set is empty.

Examples

In the following example, all the states require analytic scalar products:

>>> from siegpy import SWPBasisSet
>>> siegerts = SWPBasisSet.from_file("doc/notebooks/siegerts.dat")
>>> siegerts.analytic
True

You can also use the analytic attribute to update the values for all the states in the basis set at the same time:

>>> siegerts.analytic = False
>>> all([state.analytic == False for state in siegerts])
True
potential
Returns:Potential of all the states in the basis set.
Return type:Potential
Raises:ValueError – If some states come from a different potential or if the basis set is empty.

Example

>>> from siegpy import SWPBasisSet
>>> siegerts = SWPBasisSet.from_file("doc/notebooks/siegerts.dat")
>>> siegerts.potential
1D Square-Well Potential of width 4.44 and depth 10.00
plot_wavefunctions(nres=None, xlim=None, ylim=None, title=None, file_save=None)[source]

Plot the bound, resonant and anti-resonant wavefunctions of the basis set along with the potential. The continuum and anti-bound states, if any are present in the basis set, are not plotted.

The wavefunctions are translated along the y-axis by their energy (for bound states) or absolute value of their energy (for resonant and anti-resonant states).

Parameters:
  • nres (int) – Number of resonant and antiresonant wavefunctions to plot.
  • xlim (tuple(float or int, float or int)) – Range of the x axis of the plot (optional)
  • ylim (tuple(float or int, float or int)) – Range of the y axis of the plot (optional)
  • title (str) – Plot title (optional).
  • file_save (str) – Filename of the plot to be saved (optional)
Raises:

ValueError – If the minimum of the potential cannot be found.

plot_wavenumbers(xlim=None, ylim=None, title=None, file_save=None)[source]

Plot the wavenumbers of the Siegert states in the basis set.

Parameters:
  • xlim (tuple(float or int, float or int)) – Range of the x axis of the plot (optional).
  • ylim (tuple(float or int, float or int)) – Range of the y axis of the plot (optional).
  • title (str) – Plot title (optional).
  • file_save (str) – Filename of the plot to be saved (optional).
plot_energies(xlim=None, ylim=None, title=None, file_save=None)[source]

Plot the energies of the Siegert states in the basis set.

Parameters:
  • xlim (tuple(float or int, float or int)) – Range of the x axis of the plot (optional).
  • ylim (tuple(float or int, float or int)) – Range of the y axis of the plot (optional).
  • title (str) – Plot title (optional).
  • file_save (str) – Filename of the plot to be saved (optional).
MLE_contributions_to_CR(test)[source]

Evaluate the contribution of each state of the basis set to the completeness relation, according to the Mittag-Leffler Expansion.

Each element of the array is defined by:

\[\frac{ \left\langle test | \varphi_S \right) \left( \varphi_S | test \right\rangle } {2 \left\langle test | test \right\rangle}\]

where \(\varphi_S\) is a Siegert state of the basis set.

Parameters:test (Function) – Test function
Returns:Contribution of each state of the basis set to the completeness relation according to the Mittag-Leffler Expansion of the completeness relation.
Return type:numpy array
MLE_contributions_to_zero_operator(test)[source]

Evaluate the contribution of each state of the basis set to the zero operator, according to the Mittag-Leffler Expansion.

Each element of the returned array is defined by:

\[\frac{ \left\langle test | \varphi_S \right) \left( \varphi_S | test \right\rangle } {2 k_S \left\langle test | test \right\rangle}\]

where \(k_S\) is the wavenumber of the Siegert state \(\varphi_S\).

Parameters:test (Function) – Test function.
Returns:Contribution of each state of the basis set to the zero operator according to the Mittag-Leffler Expansion of the zero operator.
Return type:numpy array
MLE_completeness(test, nres=None)[source]

Evaluate the value of the Mittag-Leffler Expansion of the completeness relation using all Siegert states in the basis set for a given test function.

Returns the result of the following sum over all Siegert states \(\varphi_S\):

\[\sum_S \frac{ \left\langle test | \varphi_S \right) \left( \varphi_S | test \right\rangle } {2 \left\langle test | test \right\rangle}\]
Parameters:
  • test (Function) – Test function.
  • nres (int) – Number of (anti-)resonant states to use (optional).
Returns:

Evaluation of the completeness of the basis set using the Mittag-Leffler Expansion.

Return type:

float
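
A hedged usage sketch (the test function width and the number of resonant states are hypothetical; the basis-set file is the one used in the examples above):

>>> from siegpy import SWPBasisSet, Rectangular
>>> siegerts = SWPBasisSet.from_file("doc/notebooks/siegerts.dat")
>>> test = Rectangular(-1, 1)
>>> cr = siegerts.MLE_completeness(test)               # using all Siegert states read
>>> cr_10 = siegerts.MLE_completeness(test, nres=10)   # using only the first 10 (anti-)resonant states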

MLE_zero_operator(test, nres=None)[source]

Evaluate the value of the Mittag-Leffler Expansion of the zero operator using all Siegert states in the basis set for a given test function.

Returns the result of the following sum over all Siegert states:

\[\sum_S \frac{ \left\langle test | \varphi_S \right) \left( \varphi_S | test \right\rangle } { 2 k_S \left\langle test | test \right\rangle }\]

where \(k_S\) is the wavenumber of the Siegert state \(\varphi_S\).

Parameters:
  • test (Function) – Test function.
  • nres (int) – Number of (anti-)resonant states to use (optional).
Returns:

Evaluation of the zero operator of the basis set using the Mittag-Leffler Expansion.

Return type:

float

exact_completeness(test, hk=None, kmax=None)[source]

Evaluate the exact completeness relation using the bound states of the basis set and the continuum states defined by either hk and kmax (respectively the grid-step and maximum wavenumber of the grid of continuum states) or all the continuum states in the basis set.

Parameters:
  • test (Function) – Test function.
  • hk (float) – Grid step for the wavenumbers of the “on-the-fly” continuum basis sets (optional).
  • kmax (float) – Maximal wavenumber of the “on-the-fly” continuum basis set (optional).
Returns:

Value of the exact completeness relation, using bound and continuum states.

Return type:

float

continuum_contributions_to_CR(test, hk=None, kmax=None)[source]

Note

This is an abstract class method.

Evaluate the continuum contributions to the completeness relation.

Parameters:
  • test (Function) – Test function.
  • hk (float) – Grid step for the wavenumbers of the “on-the-fly” continuum basis sets (optional).
  • kmax (float) – Maximal wavenumber of the “on-the-fly” continuum basis set (optional).
Returns:

Contribution of each continuum state of the basis set to the exact completeness relation.

Return type:

numpy array

MLE_completeness_convergence(test, nres=None)[source]

Evaluate the convergence of the Mittag-Leffler Expansion of the completeness relation for the basis set, given a test function.

Parameters:
  • test (Function) – Test function.
  • nres (int) – Number of (anti-)resonant states to use (default: use all of them).
Returns:

Two arrays of the same length. The first one is made of the absolute value of the resonant wavenumbers, while the second one is made of the values of the convergence of the MLE of the completeness relation.

Return type:

tuple(numpy array, numpy array)

Berggren_completeness_convergence(test, nres=None)[source]

Evaluate the convergence of the CR using the Berggren expansion.

Parameters:
  • test (Function) – Test function.
  • nres (int) – Number of resonant states to use (default: use all of them).
Returns:

Two arrays of the same length. The first one is made of the absolute value of the resonant wavenumbers, while the second one is made of the values of the convergence of the Berggren expansion of the completeness relation.

Return type:

tuple(numpy array, numpy array)

MLE_zero_operator_convergence(test, nres=None)[source]

Evaluate the convergence of the Mittag-Leffler Expansion of the zero operator for the basis set, given a test function.

Parameters:
  • test (Function) – Test function.
  • nres (int) – Number of (anti-)resonant states to use (default: use all of them).
Returns:

Two arrays of the same length. The first one is made of the absolute value of the resonant wavenumbers, while the second one is made of the values of the convergence of the MLE of the zero operator.

Return type:

tuple(numpy array, numpy array)

exact_completeness_convergence(test, hk=None, kmax=None)[source]

Evaluate the convergence of the exact completeness relation (using the continuum and bound states of the basis set) for a given test function.

A set of continuum states is defined on-the-fly if the values of both hk and kmax are not None (otherwise, the continuum states of the basis set, if any, are used).

Parameters:
  • test (Function) – Test function.
  • hk (float) – Grid step for the wavenumbers of the “on-the-fly” continuum basis sets (optional).
  • kmax (float) – Maximal wavenumber of the “on-the-fly” continuum basis set (optional).
Returns:

Two arrays of the same length. The first one is made of the continuum wavenumbers, while the second is made of the values of the convergence of the exact completeness relation.

Return type:

tuple(numpy array, numpy array)
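
As an illustration (siegerts and g as in the earlier sketch; hk and kmax are arbitrary), the convergence data returned by these methods can be retrieved and compared directly:

>>> kgrid_MLE, conv_MLE = siegerts.MLE_completeness_convergence(g)
>>> kgrid_exact, conv_exact = siegerts.exact_completeness_convergence(g, hk=0.01, kmax=10)
>>> len(kgrid_MLE) == len(conv_MLE)
True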

plot_completeness_convergence(test, hk=None, kmax=None, nres=None, exact=True, MLE=True, title=None, file_save=None)[source]

Plot the convergence of both the Mittag-Leffler Expansion and the exact completeness relation for a given test function.

The exact convergence is computed on-the-fly (i.e. without the need to have continuum states in the basis set):

  • if hk and kmax are not None,
  • if exact and MLE are set to True and if the basis set does not contain enough continuum states to reach the absolute value of the last resonant wavenumber used (kmax is therefore defined by the wavenumber of the last resonant state used in the MLE of the CR, and hk is set to a default value)
Parameters:
  • test (Function) – Test function.
  • hk (float) – Grid step for the wavenumbers of the “on-the-fly” continuum basis sets (optional).
  • kmax (float) – Maximal wavenumber of the “on-the-fly” continuum basis set (optional).
  • nres (int) – Number of resonant couples contributions to be plotted (optional).
  • exact (bool) – If True, allows the plot of the convergence of the exact completeness relation.
  • MLE (bool) – If True, allows the plot of the convergence of the Mittag-Leffler Expansion of the completeness relation.
  • title (str) – Plot title (optional).
  • file_save (str) – Filename of the plot to be saved (optional).
MLE_strength_function(test, kgrid)[source]

Evaluate the Mittag-Leffler expansion of the strength function for a given test function. The test function is discretized on a grid of wavenumbers kgrid.

Parameters:
  • test (Function) – Test function.
  • kgrid (numpy array) – Wavenumbers for which the strength function is evaluated.
Returns:

MLE of the strength function evaluated over the wavenumber grid kgrid.

Return type:

numpy array

exact_strength_function(test, kgrid, eta=None)[source]

Evaluate the exact strength function over a given wavenumber grid kgrid for a given test function.

eta is an infinitesimal used to make the integration over the continuum states possible (the poles along the real axis being avoided).

Parameters:
  • test (Function) – Test function.
  • kgrid (numpy array) – Wavenumbers for which the strength function is evaluated.
  • eta (float) – Infinitesimal for integration (if None, defaults to 10 times the grid step of the continuum basis set).
Returns:

Exact strength function evaluated on the grid of wavenumbers kgrid.

Return type:

numpy array

exact_strength_function_OTF(test, kgrid, hk=0.0001, eta=None, tol=0.0001)[source]

Evaluate the exact strength function “on-the-fly” over a given wavenumber grid kgrid for a given test function.

It is not necessary to have continuum states in the basis set to compute the exact strength function, because they can be computed “on-the-fly” (hence OTF). This even leads to a quicker evaluation of the strength function, because the integral to be computed for each point of kgrid has an integrand that peaks around that particular value of kgrid and quickly vanishes away from its maximum.

Such a minimal approximate integrand is constructed by using only the continuum states around the considered point of kgrid, given the tolerance parameter tol. This makes it possible to reach smaller values of hk and eta, and therefore more precise results, in a shorter amount of time.

Parameters:
  • test (Function) – Test function.
  • kgrid (numpy array) – Wavenumbers for which the strength function is evaluated.
  • hk (float) – Grid step for the wavenumbers of the “on-the-fly” continuum basis sets (optional).
  • eta (float) – Infinitesimal for integration (if None, defaults to 10 times the grid step of the continuum basis set).
  • tol (float) – Tolerance value to truncate the integrand function (ratio of the value of a possible new point of the integrand to the maximal value of the integrand).
Returns:

Approximate exact strength function evaluated on the grid of wavenumbers kgrid.

Return type:

numpy array
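
A possible usage sketch (siegerts and g as in the earlier sketch; the wavenumber grid and the hk and tol values are arbitrary) comparing the MLE of the strength function with its “on-the-fly” exact evaluation:

>>> import numpy as np
>>> kgrid = np.linspace(0.05, 5, 100)
>>> sf_MLE = siegerts.MLE_strength_function(g, kgrid)
>>> sf_exact = siegerts.exact_strength_function_OTF(g, kgrid, hk=0.0001, tol=0.0001)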

static _evaluate_integrand(q, k, test, eta, potential)[source]

Note

This is an abstract class method.

Evaluate the integrand used to compute the strength function “on-the-fly”. It must be implemented in the child class.

Parameters:
  • q (float) – Wavenumber of the continuum state considered.
  • k (float) – Wavenumber for which the strength function is evaluated.
  • test (Function) – Test function.
  • eta (float) – Infinitesimal for integration (if None, defaults to 10 times the grid step of the continuum basis set).
  • potential (Potential) – Potential of the currently studied analytical case.
plot_strength_function(test, kgrid, nres=None, exact=True, MLE=True, bnds_abnds=False, title=None, file_save=None)[source]

Plot the convergence of both the Mittag-Leffler Expansion and the exact strength function for a given test function.

The exact strength function is computed on-the-fly (without the need to have continuum states in the basis set).

Parameters:
  • test (Function) – Test function.
  • kgrid (numpy array) – Wavenumbers for which the strength function is evaluated.
  • nres (int) – Number of resonant couples contributions to be plotted (default to None, meaning that none are plotted; if nres=0, then only the sum of the bound and anti-bound states contributions is plotted).
  • exact (bool) – If True, allows the plot of the exact strength function.
  • MLE (bool) – If True, allows the plot of the Mittag-Leffler Expansion of the strength function.
  • bnds_abnds (bool) – If True, allows to plot the individual contributions of the bound and anti-bound states.
  • title (str) – Plot title (optional).
  • file_save (str) – Filename of the plot to be saved (optional).
exact_propagation(test, time_grid)[source]

Evaluate the exact time-propagation of a wavepacket test over a given time grid (made of positive numbers only).

Parameters:
  • test (Function) – Test function.
  • time_grid (numpy array or list of positive numbers) – Times for which the propagation is evaluated. It must contain positive numbers only.
Returns:

Exact propagated wavepacket for the different times of time_grid.

Return type:

2D numpy array

exact_propagation_OTF(test, time_grid, kmax, hk)[source]

Evaluate the exact time-propagation of a given test function for a given time grid after an on-the-fly creation of the continuum states given the values of kmax and hk.

Parameters:
  • test (Function) – Test function.
  • time_grid (numpy array or list of positive numbers) – Times for which the propagation is evaluated. It must contain positive numbers only.
  • hk (float) – Grid step for the wavenumbers of the “on-the-fly” continuum basis set.
  • kmax (float) – Maximal wavenumber of the “on-the-fly” continuum basis set.
Returns:

Exact propagated wavepacket for the different times of time_grid.

Return type:

2D numpy array
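
A short sketch of how this on-the-fly propagation might be called (illustrative values only; siegerts and g as in the earlier sketch, with a space grid set so that the propagated wavepacket can be discretized):

>>> import numpy as np
>>> siegerts.grid = np.linspace(-3, 3, 201)
>>> time_grid = [0.0, 0.5, 1.0]
>>> wavepackets = siegerts.exact_propagation_OTF(g, time_grid, kmax=10, hk=0.05)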

MLE_propagation(test, time_grid)[source]

Evaluate the Mittag-Leffler Expansion of the time-propagation of a test wavepacket over a given time grid.

Parameters:
  • test (Function) – Test function.
  • time_grid (numpy array or list of positive numbers) – Times for which the propagation is evaluated. It must contain positive numbers only.
Returns:

MLE of the propagated wavepacket for the different times of time_grid.

Return type:

2D numpy array

Berggren_propagation(test, time_grid)[source]

Evaluate the Berggren Expansion of the time-propagation of a test wavepacket over a given time grid.

Parameters:
  • test (Function) – Test function.
  • time_grid (numpy array or list of positive numbers) – Times for which the propagation is evaluated. It must contain positive numbers only.
Returns:

Berggren expansion of the propagated wavepacket for the different times of time_grid.

Return type:

2D numpy array

Siegert_propagation(test, time_grid, weights=None)[source]

Evaluate a user-defined expansion over the Siegert states of the time-propagation of a test wavepacket over a given time grid.

The user chooses the weight of each type of Siegert state of the basis set. If no weights are passed, then the exact Siegert states expansion is performed (a usage sketch is given after this entry).

Parameters:
  • test (Function) – Test function.
  • time_grid (numpy array or list of positive numbers) – Times for which the propagation is evaluated. It must contain positive numbers only.
  • weights (dict) – Dictionary of the weights to use for the time-propagation. Keys correspond to a type of Siegert states (‘ab’ for anti-bounds, ‘b’ for bounds, ‘r’ for resonants and ‘ar’ for anti-resonants) and the corresponding value is the weight to use for all the states of the given type (optional).
Returns:

Siegert states expansion of the propagated wavepacket for the different times of time_grid.

Return type:

2D numpy array

Raises:

KeyError – If an invalid key is found in weights.
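
For instance (siegerts and g as in the earlier sketch, with arbitrary times), a Berggren-like propagation keeping only the bound and resonant states could be requested through the weights dictionary:

>>> time_grid = [0.0, 0.5, 1.0]
>>> wp = siegerts.Siegert_propagation(g, time_grid, weights={'b': 1, 'ab': 0, 'r': 1, 'ar': 0})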

exact_Siegert_propagation(test, time_grid)[source]

Evaluate the exact Siegert propagation of the initial wavepacket test for the times of time_grid. In contrast with all other expansions for the propagation, the time-evolution of the Siegert states is not only due to an exponential, but also to state- and time-dependent weights (see eq. 69 of Santra et al., PRA 71 (2005)).

There is no need to define any weight, since the correct weight is known analytically.

Parameters:
  • test (Function) – Test function.
  • time_grid (numpy array or list of positive numbers) – Times for which the propagation is evaluated. It must contain positive numbers only.
Returns:

Exact Siegert expansion of the propagated wavepacket for the different times of time_grid.

Return type:

2D numpy array

Raises:

BasisSetError – If the basis set contains no Siegert states.

_propagate(test, time_grid, weights=None)[source]

Evaluate the time-propagation of a test wavepacket as the product of two matrices: one accounting for the time dependence of the propagation of the wavepacket (mat_time), the other for its space dependence (mat_space).

The contribution of the continuum states to the time propagation of the wavepacket requires a numerical integration that is performed using the composite Simpson’s rule.

The contribution of the Siegert states to the time propagation of the wavepacket is computed through a discrete sum over all the Siegert states, with a user-defined weight for each type of Siegert states.

Parameters:
  • test (Function) – Test function.
  • time_grid (numpy array or list of positive numbers) – Times for which the propagation is evaluated. It must contain positive numbers only.
  • weights (dict) – Dictionary of the weights to use for the time-propagation. Keys correspond to a type of Siegert states (‘ab’ for anti-bounds, ‘b’ for bounds, ‘r’ for resonants and ‘ar’ for anti-resonants) and the corresponding value is the weight to use for all the states of the given type (optional).
Returns:

Propagated wavepacket for the different times of time_grid.

Return type:

2D numpy array

Raises:

BasisSetError – If the basis set contains neither Siegert nor continuum states.

_add_one_continuum_state()[source]

Note

This is an abstract class method.

Add a continuum state to the basis set, depending on the already existing continuum states.

Returns:Instance of the child class with one more continuum state.
Return type:AnalyticBasisSet
plot_propagation(test, time_grid, exact=True, exact_Siegert=True, MLE=False, Berggren=False, xlim=None, ylim=None, title=None, file_save=None)[source]

Plot the time propagation of a wavepacket using either the exact expansion (using bound and continuum states), the exact Siegert states expansion, the Mittag-Leffler Expansion or the Berggren expansion over a given time grid.

Any of these expansions can be turned on or off by setting the corresponding optional arguments (exact, exact_Siegert, MLE and Berggren respectively) to True or False. By default, both exact expansions are plotted.

Parameters:
  • test (Function) – Test function.
  • time_grid (numpy array or list of positive numbers) – Times for which the propagation is evaluated. It must contain positive numbers only.
  • exact (bool) – If True, allows the plot of the exact time-propagation (using bound and continuum states).
  • exact_Siegert (bool) – If True, allows the plot of the exact time-propagation (using all Siegert states).
  • MLE (bool) – If True, allows the plot of the Mittag-Leffler Expansion of the time-propagation.
  • Berggren (bool) – If True, allows the plot of the Berggren expansion of the time-propagation.
  • xlim (tuple(float or int, float or int)) – Range of the x axis of the plot (optional).
  • ylim (tuple(float or int, float or int)) – Range of the y axis of the plot (optional).
  • title (str) – Plot title (optional).
  • file_save (str) – Base name of the files where the plots are to be saved (optional).
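
As a final illustrative sketch for this class (siegerts and g as in the earlier sketches; all values arbitrary), the various expansions of the time-propagation can be compared graphically:

>>> siegerts.plot_propagation(g, [0.0, 0.5, 1.0], exact=True, exact_Siegert=True, MLE=False, Berggren=True, xlim=(-3, 3))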

AnalyticEigenstates

The classes documented below are the base classes you should consider subclassing before adding a new analytical case to the SiegPy module.

The AnalyticEigenstate class is the base class for two other classes:

  • AnalyticSiegert
  • AnalyticContinuum

If your analytic case is already well defined by these two classes, you may only subclass them. However, if you need to add some methods or attributes to both classes, you may also subclass AnalyticEigenstate.

For instance, the parity of the analytic eigenstates in the 1D Square-Well Potential case made subclassing AnalyticEigenstate mandatory.

class siegpy.analyticeigenstates.AnalyticEigenstate(k, potential, grid, analytic)[source]

Bases: siegpy.functions.AnalyticFunction, siegpy.eigenstates.Eigenstate

Class gathering the attributes and methods necessary to define an analytic eigenstate of any analytical problem (such as the 1D Square-Well Potential case).

Note

To consider a case as analytic, the wavefunctions of the Siegert and continuum states should be known analytically. The analytic scalar product of both types of eigenstates with at least one type of test function should also be defined.

Parameters:
  • k (complex) – Wavenumber of the eigenstate.
  • potential (Potential) – Potential for which the analytic eigenstate is known.
  • grid (list or set or numpy array) – Discretization grid.
  • analytic (bool) – If True, the scalar products must be computed analytically.
Raises:

WavenumberError – If the inferred Siegert type is unknown.

potential
Returns:Potential for which the eigenstate is a solution.
Return type:Potential
wavenumber
Returns:Wavenumber of the eigenstate.
Return type:complex or float
energy
Returns:Energy of the eigenstate.
Return type:complex or float
_find_Siegert_type(k)[source]

Note

This is an asbtract method.

Parameters:k (float or complex) – Wavenumber of the eigenstate
Returns:Type of the eigenstate from its wavenumber k ('b' for bound states, 'ab' for anti-bound states, 'r' for resonant states, 'ar' for anti-resonant states, and None for continuum states).
Return type:str or NoneType
analytic
Returns:Analyticity of the scalar products.
Return type:bool
__eq__(other)[source]

Two analytic eigenstates are the same if they share the same wavenumber, potential and Siegert_type.

Parameters:other (object) – Another object
Returns:True if both eigenstates are the same.
Return type:bool
scal_prod(other, xlim=None)[source]

Note

This is an asbtract method.

Evaluate the scalar product between the state and a test function other. It ensures that the numerical scalar product is computed when the analytical scalar product with test functions is not implemented in the child class.

Parameters:
  • other (Function) – Test function.
  • xlim (tuple(float, float)) – Bounds of the space grid defining the interval over which the scalar product must be computed.
Returns:

Value of the scalar product of the state with a test function.

Return type:

float or complex

class siegpy.analyticeigenstates.AnalyticSiegert(k, potential, grid, analytic)[source]

Bases: siegpy.analyticeigenstates.AnalyticEigenstate

Class specifying the abstract method used to find the type of the analytic eigenstate, in the case where it is a Siegert state.

Parameters:
  • k (complex) – Wavenumber of the eigenstate.
  • potential (Potential) – Potential for which the analytic eigenstate is known.
  • grid (list or set or numpy array) – Discretization grid.
  • analytic (bool) – If True, the scalar products must be computed analytically.
Raises:

WavenumberError – If the inferred Siegert type is unknown.

_find_Siegert_type(k)[source]
Parameters:k (complex) – Wavenumber of the Siegert state.
Returns:The type of the Siegert state, namely:
  • 'b' for a bound state
  • 'ab' for an anti-bound state
  • 'r' for a resonant state
  • 'ar' for an anti-resonant state
Return type:str
Raises:WavenumberError – If the wavenumber is equal to zero.
class siegpy.analyticeigenstates.AnalyticContinuum(k, potential, grid, analytic)[source]

Bases: siegpy.analyticeigenstates.AnalyticEigenstate

Class specifying the abstract method used to find the type of the analytic eigenstate, in the case where it is a continuum state.

Parameters:
  • k (complex) – Wavenumber of the eigenstate.
  • potential (Potential) – Potential for which the analytic eigenstate is known.
  • grid (list or set or numpy array) – Discretization grid.
  • analytic (bool) – If True, the scalar products must be computed analytically.
Raises:

WavenumberError – If the inferred Siegert type is unknown.

_find_Siegert_type(k)[source]
Parameters:k (float) – Wavenumber of the continuum state.
Returns:None, given that a continuum state is not a Siegert state.
Return type:NoneType
Raises:WavenumberError – If the wavenumber is imaginary.

The Function class and its children

The Function class is an important base class of the SiegPy module, some of its children being the Eigenstate class and the Potential class.

This section therefore presents this class and some of its children that were not documented elsewhere, including the Test functions used in the Tutorial.

Functions

This page contains the definitions of the Function and the AnalyticFunction classes.

class siegpy.functions.Function(grid, values)[source]

Bases: object

Class defining a 1-dimensional (1D) function from its grid and the corresponding values.

A function can be plotted, and different types of operations can be performed (e.g., scalar product with another function, addition of another function). It is also possible to compute its norm, and return its conjugate and its absolute value as another function.

Parameters:
  • grid (list or set or numpy array) – Discretization grid.
  • values (list or set or numpy array) – Values of the function evaluated on the grid points.
Raises:ValueError – If the grid is not made of reals or if the grid and values arrays have incoherent lengths.

Example

Note that both grid and values are converted to numpy arrays:

>>> f = Function([-1, 0, 1], [1, 2, 3])
>>> f.grid
array([-1, 0, 1])
>>> f.values
array([1, 2, 3])

grid
Returns:Grid of a Function instance.
Return type:numpy array

Warning

The grid of a Function instance cannot be modified:

>>> f = Function([-1, 0, 1], [1, 2, 3])
>>> f.grid = [-1, 1]
Traceback (most recent call last):
AttributeError: can't set attribute

values
Returns:Values of a Function instance.
Return type:numpy array

Warning

The values of a Function instance cannot be modified:

>>> f = Function([-1, 0, 1], [1, 2, 3])
>>> f.values = [3, 0, 1]
Traceback (most recent call last):
AttributeError: can't set attribute
    
__eq__(other)[source]

Two functions are equal if they have the same grid and values.

Parameters:other (object) – Another object.
Returns:True if other is a Function instance with the same grid and values.
Return type:bool

Examples

>>> f = Function([1, 2, 3], [1, 1, 1])
>>> Function([1, 2, 3], [1, 1, 1]) == f
True
>>> Function([1, 2, 3], [-1, 0, 1]) == f
False
>>> Function([-1, 0, 1], [1, 1, 1]) == f
False
>>> f == 1
False

__add__(other)[source]

Add two functions, if both grids are the same.

Parameters:

other (Function) – Another function.

Returns:

Sum of both functions.

Return type:

Function

Raises:ValueError – If the grids of both functions differ.

Examples

Two functions can be added:

>>> f1 = Function([1, 2, 3], [1, 1, 1])
>>> f2 = Function([1, 2, 3], [0, -1, 0])
>>> f = f1 + f2

The grid of the new function is the same, and its values are the sum of both values:

>>> f.grid
array([1, 2, 3])
>>> f.values
array([1, 0, 1])

It leaves the other functions unchanged:

>>> f1.values
array([1, 1, 1])
>>> f2.values
array([ 0, -1, 0])

is_even
Returns:True if the function is even, False if not.
Return type:bool

Examples

>>> Function([1, 2, 3], [1j, 1j, 1j]).is_even
False
>>> Function([-1, 0, 1], [1j, 1j, 1j]).is_even
True

is_odd
Returns:True if the function is odd, False if not.
Return type:bool

Examples

>>> Function([-1, 0, 1], [-1j, 0j, 1j]).is_odd
True
>>> Function([1, 2, 3], [-1j, 0j, 1j]).is_odd
False

plot(xlim=None, ylim=None, title=None, file_save=None)[source]

Plot the real and imaginary parts of the function.

Parameters:
  • xlim (tuple(float or int, float or int)) – Range of the x axis of the plot (optional).
  • ylim (tuple(float or int, float or int)) – Range of the y axis of the plot (optional).
  • title (str) – Title for the plot.
  • file_save (str) – Filename of the plot to be saved (optional).
abs()[source]
Returns:Absolute value of the function.
Return type:Function

Example

Applying the abs() method to a Function instance returns a new Function instance (i.e., the initial one is unchanged):

>>> f = Function([1, 2, 3], [1j, 1j, 1j])
>>> f.abs().values
array([ 1., 1., 1.])
>>> f.values
array([ 0.+1.j, 0.+1.j, 0.+1.j])

conjugate()[source]
Returns:Conjugate of the function.
Return type:Function

Example

Applying the conjugate() method to a Function instance returns a new Function instance (i.e. the initial one is unchanged):

>>> f = Function([1, 2, 3], [1j, 1j, 1j])
>>> f.conjugate().values
array([ 0.-1.j, 0.-1.j, 0.-1.j])
>>> f.values
array([ 0.+1.j, 0.+1.j, 0.+1.j])

scal_prod(other, xlim=None)[source]

Evaluate the usual scalar product of two functions f and g:

\(\langle f | g \rangle = \int f^*(x) g(x) \text{d}x\)

where * represents the conjugation.

Note

The trapezoidal integration rule is used.
Parameters:
  • other (Function) – Another function.
  • xlim (tuple(float or int, float or int)) – Range of the x-axis for the integration (optional).
Returns:

Value of the scalar product

Return type:

float

Raises:ValueError – If the grids of both functions are different or if the interval given by xlim is not inside the one defined by the discretization grid.

Example

>>> grid = [-1, 0, 1]
>>> f = Function(grid, [1, 0, 1])
>>> g = Function(grid, np.ones_like(grid))
>>> f.scal_prod(g)
1.0

norm()[source]
Returns:Norm of the function.
Return type:float

Example

>>> Function([-1, 0, 1], [2j, 0, 2j]).norm()
(4+0j)

class siegpy.functions.AnalyticFunction(grid=None)[source]

Bases: siegpy.functions.Function

Note

This is an abstract class. A child class must implement the _compute_values() method. Examples of such child classes are the Rectangular and Gaussian classes presented below.

The main change with respect to the siegpy.functions.Function class is that the grid is an optional parameter, so that, if it is modified, then the values of the function are updated accordingly.

Parameters:grid (list or set or numpy array) – Discretization grid (optional).
evaluate(grid)[source]

Wrapper around _compute_values() to evaluate the analytic function either for a grid of points or for a single point of the 1-dimensional space.

Parameters:grid (int or float or numpy array) – Discretization grid.
Returns:Values of the function for all the grid points.
Return type:float or complex or numpy array
_compute_values(grid)[source]

Note

This is an abstract method.

Compute the values of the function given a discretization grid that has been converted to a numpy array by the evaluate() method.

Parameters:grid (numpy array) – Discretization grid.
Returns:Values of the analytic function over the provided grid.
Return type:numpy array
grid

grid still is an attribute, but it is more powerful than for a Function instance: when the grid of an AnalyticFunction instance is updated, so are its values.

__add__(other)[source]
Parameters:other (Function, or any class inheriting from it) – Another Function.
Returns:Sum of an AnalyticFunction instance with another function. It requires that at least one of both functions has a grid that is not None.
Return type:Function
Raises:ValueError – If both functions have grids set to None.
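
To make the expected interface concrete, a minimal, hypothetical subclass (the Linear class below is only an illustration, not part of SiegPy) might implement _compute_values() as follows; updating its grid then automatically updates its values:

>>> import numpy as np
>>> from siegpy.functions import AnalyticFunction
>>> class Linear(AnalyticFunction):
...     """Hypothetical analytic function f(x) = slope * x."""
...     def __init__(self, slope, grid=None):
...         self.slope = slope  # set before the grid is processed
...         super().__init__(grid)
...     def _compute_values(self, grid):
...         # grid has already been converted to a numpy array by evaluate()
...         return self.slope * grid
...
>>> f = Linear(2.0)
>>> f.evaluate(1.5) == 3.0
True
>>> f.grid = [-1, 0, 1]  # setting the grid also sets the values
>>> np.array_equal(f.values, [-2.0, 0.0, 2.0])
True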

Test functions

Below are presented the classes that are mainly used as test functions when it comes to evaluating the completeness relation, the response function or the time-propagation.

Such classes are the Rectangular and the Gaussian functions documented below.

class siegpy.functions.Rectangular(xl, xr, k0=0.0, h=1.0, grid=None)[source]

Bases: siegpy.functions.AnalyticFunction

A rectangular function is characterized by:

  • a left border xl
  • a right border xr
  • an initial momentum k0
  • an amplitude h

In addition to the behaviour of an analytic function:

  • the norm is computed analytically,
  • the attribute is_even returns True if the rectangular function is centered, even if the discretization grid is not centered,
  • the equality and addition of two rectangular functions are also defined.
Parameters:
  • xl (float) – Left border of the rectangular function.
  • xr (float) – Right border of the rectangular function.
  • k0 (float) – Initial momentum of the rectangular function (default to 0).
  • h (float) – Maximal amplitude of the rectangular function (default to 1).
  • grid (list or set or numpy array) – Discretization grid (optional).
Raises:ValueError – If the amplitude h is zero or if the width is not strictly positive.

Examples

A rectangular function has several attributes:

>>> r = Rectangular(-4, 2, k0=5)
>>> r.xl
-4
>>> r.xr
2
>>> r.width
6
>>> r.center
-1.0
>>> r.momentum
5
>>> r.amplitude
1.0

If no discretization grid is passed, then the attributes grid and values are set to None:

>>> r.grid is None and r.values is None
True

If a grid is passed, then the rectangular function is discretized (meaning its attribute values is not None):

>>> r1 = Rectangular(-4, 2, k0=5, grid=[-1, 0, 1])
>>> r2 = Rectangular(-4, 2, k0=5, h=2, grid=[-1, 0, 1])
>>> np.array_equal(r2.values, 2*r1.values)
True

Note

The only way of modifying the values of a Rectangular instance is by setting a new grid:

>>> r.grid = [-1, 0, 1]
>>> assert np.array_equal(r.grid, r1.grid)
>>> assert np.array_equal(r.values, r1.values)
>>> r.values = [2, 1, 2]
Traceback (most recent call last):
AttributeError: can't set attribute

Warning

A rectangular function must have a strictly positive width:

>>> Rectangular(1, -1)
Traceback (most recent call last):
ValueError: The width must be strictly positive.
>>> Rectangular(1, 1)
Traceback (most recent call last):
ValueError: The width must be strictly positive.
    
classmethod from_center_and_width(xc, width, k0=0.0, h=1.0, grid=None)[source]

Initialization of a rectangular function centered on xc, of width width, with an amplitude h and an initial momentum k0.

Parameters:
  • xc (float) – Center of the rectangular function.
  • width (float) – Width of the rectangular function.
  • k0 (float) – Initial momentum of the rectangular function (default to 0).
  • h (float) – Maximal amplitude of the rectangular function (default to 1.).
  • grid (list or set or numpy array) – Discretization grid (optional).
Returns:An initialized rectangular function.
Return type:Rectangular

Example

>>> r = Rectangular.from_center_and_width(4, 2)
>>> r.xl
3.0
>>> r.xr
5.0

classmethod from_width_and_center(width, xc, k0=0.0, h=1.0, grid=None)[source]

Similar to the class method from_center_and_width().

Returns:An initialized rectangular function.
Return type:Rectangular

Example

>>> r = Rectangular.from_width_and_center(4, 2)
>>> r.xl
0.0
>>> r.xr
4.0

xl
Returns:Left border of the rectangular function.
Return type:float
xr
Returns:Right border of the rectangular function.
Return type:float
amplitude
Returns:Amplitude of the rectangular function.
Return type:float
width
Returns:Width of the rectangular function.
Return type:float
center
Returns:Center of the rectangular function.
Return type:float
momentum
Returns:Momentum of the rectangular function.
Return type:float
__eq__(other)[source]
Parameters:other (object) – Another object.
Returns:True if both objects are Rectangular functions with the same amplitude, width, center and momentum.
Return type:bool

Examples

Two rectangular functions with different amplitudes are different:

>>> r1 = Rectangular(-4, 2, k0=5, grid=[-1, 0, 1])
>>> r2 = Rectangular(-4, 2, k0=5, h=2, grid=[-1, 0, 1])
>>> r1 == r2
False

Two rectangular functions that are identical, except for the grid, are considered to be the same:

>>> r3 = Rectangular(-4, 2, k0=5, grid=[-1, -0.5, 0, 0.5, 1])
>>> r1 == r3
True

__add__(other)[source]

Add another function to a rectangular function.

Parameters:other (Function) – Another function.
Returns:Sum of a rectangular function with another function.
Return type:Rectangular or Function

Example

Two rectangular functions differing only by the amplitude can be added to give another rectangular function:

>>> r1 = Rectangular(-1, 1)
>>> r2 = Rectangular(-1, 1, h=3)
>>> r = r1 + r2
>>> assert r.amplitude == 4
>>> assert r.center == r1.center
>>> assert r.width == r1.width
>>> assert r.momentum == r1.momentum

Two different rectangular functions that were not discretized cannot be added:

>>> r1 + Rectangular(-2, 2)
Traceback (most recent call last):
ValueError: Two analytic functions not discretized cannot be added.

Finally, if the rectangular function is not discretized over a grid, but the other function is, then the addition can be performed:

>>> r = r1 + Function([-1, 0, 1], [1, 2, 3])
>>> r.grid
array([-1, 0, 1])
>>> r.values
array([ 2.+0.j, 3.+0.j, 4.+0.j])

is_even
Returns:True if the rectangular function is even.
Return type:bool

Examples

A centered rectangular function is even, even if the grid is not centered:

>>> Rectangular(-2, 2, grid=[-2, -1, 0]).is_even
True

A non-centered rectangular function cannot be even:

>>> Rectangular(-2, 0).is_even
False

A centered rectangular function with an initial momentum cannot be even:

>>> Rectangular(-2, 2, k0=1).is_even
False

is_odd
Returns:False, as a rectangular function can’t be odd.
Return type:bool
_compute_values(grid)[source]

Evaluation of the Rectangular function for each grid point.

Parameters:grid (numpy array) – Discretization grid.
Returns:Value of the Rectangular function over the grid.
Return type:float

Examples

>>> r = Rectangular(-5, 1, h=-3.5)
>>> r.evaluate(-1) == -3.5
True
>>> r.evaluate(-10) == 0
True

abs()[source]
Returns:Absolute value of the Rectangular function.
Return type:Rectangular

Example

The absolute value of a Rectangular function is nothing but the same Rectangular function without initial momentum:

>>> r = Rectangular(-5, 1, h=4, k0=-6)
>>> assert r.abs() == Rectangular(-5, 1, h=4)

conjugate()[source]
Returns:Conjugate of the Rectangular function.
Return type:Rectangular

Example

The conjugate of a Rectangular is the same Rectangular function with a negative momentum:

>>> r = Rectangular(-5, 1, h=4, k0=-6)
>>> assert r.conjugate() == Rectangular(-5, 1, h=4, k0=6)

norm()[source]
Returns:Analytic norm of a rectangular function, which is equal to its width times its amplitude squared.
Return type:float

Examples

>>> r = Rectangular(-1, 1, h=3)
>>> r.norm()
18

The norm does not depend on the discretization grid:

>>> r.norm() == Rectangular(-1, 1, h=3, grid=[-1, 0, 1]).norm()
True

split(sw_pot)[source]

Split the rectangular function into three other rectangular functions, each one spreading over only one of the three regions defined by a 1D Square-Well potential SWP. If the original rectangular function does not spread over one particular region, the returned value for this region is None.

Parameters:sw_pot (SWPotential) – 1D Square-Well Potential
Returns:Three rectangular functions.
Return type:tuple of length 3
Raises:TypeError – If sw_pot is not a SWPotential instance.
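
A brief illustration of split() (the SWPotential parameters below are arbitrary and only follow the call signature used elsewhere in this documentation):

>>> from siegpy import SWPotential, Rectangular
>>> pot = SWPotential(5, 5)
>>> r = Rectangular(-10, 10)
>>> parts = r.split(pot)  # one (possibly None) Rectangular per potential region
>>> len(parts)
3
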
class siegpy.functions.Gaussian(sigma, xc, k0=0.0, h=1.0, grid=None)[source]

Bases: siegpy.functions.AnalyticFunction

A Gaussian function is characterized by:

  • a width sigma
  • a center xc
  • an initial momentum k0
  • an amplitude h

In addition to the behaviour of an analytic function:

  • the norm is computed analytically,
  • the attribute is_even returns True if the Gaussian function is centered, even if the discretization grid is not centered,
  • the equality and addition of two Gaussian functions are also defined.
Parameters:
  • sigma (strictly positive float) – Width of the Gaussian function.
  • xc (float) – Center of the Gaussian function.
  • k0 (float) – Initial momentum of the Gaussian function (default to 0).
  • h (float) – Maximal amplitude of the Gaussian function (default to 1).
  • grid (list or set or numpy array) – Discretization grid (optional).
Raises:ValueError – If sigma is not strictly positive.

Examples

A Gaussian function is characterized by various attributes:

>>> g = Gaussian(4, 2, k0=5)
>>> g.sigma
4
>>> g.center
2
>>> g.momentum
5
>>> g.amplitude
1.0

If no discretization grid is passed, then the attributes grid and values are set to None:

>>> g.grid is None and g.values is None
True

If a grid is given, the Gaussian is discretized:

>>> g1 = Gaussian(4, 2, k0=5, grid=[-1, 0, 1])
>>> g2 = Gaussian(4, 2, k0=5, h=2, grid=[-1, 0, 1])
>>> np.array_equal(g2.values, 2*g1.values)
True

Note

The only way to modify the values of a Gaussian is by setting its grid:

>>> g.grid = [-1, 0, 1]
>>> assert np.array_equal(g.grid, g1.grid)
>>> assert np.array_equal(g.values, g1.values)
>>> g.values = [2, 1, 2]
Traceback (most recent call last):
AttributeError: can't set attribute

Warning

Finally, a Gaussian function must have a strictly positive sigma:

>>> Gaussian(-1, 1)
Traceback (most recent call last):
ValueError: The Gaussian must have a strictly positive sigma.
>>> Gaussian(0, 1)
Traceback (most recent call last):
ValueError: The Gaussian must have a strictly positive sigma.
    
sigma
Returns:Width of the Gaussian function.
Return type:float
center
Returns:Center of the Gaussian function.
Return type:float
momentum
Returns:Momentum of the Gaussian function.
Return type:float
amplitude
Returns:Amplitude of the Gaussian function.
Return type:float
_params

Shortcut to get the main parameters of a Gaussian function.

Returns:sigma, center and amplitude of the Gaussian function.
Return type:tuple
__eq__(other)[source]
Parameters:other (object) – Another object.
Returns:True if both Gaussian functions have the same sigma, center, momentum and amplitude.
Return type:bool

Examples

Two Gaussian functions with different amplitudes are different:

>>> g1 = Gaussian(4, 2, k0=5, grid=[-1, 0, 1])
>>> g2 = Gaussian(4, 2, k0=5, h=2, grid=[-1, 0, 1])
>>> g1 == g2
False

If only the grid differs, then the two Gaussian functions are the same:

>>> g3 = Gaussian(4, 2, k0=5, grid=[-1, -0.5, 0, 0.5, 1])
>>> g1 == g3
True

__add__(other)[source]

Add another function to a Gaussian function.

Parameters:other (Function) – Another function.
Returns:Sum of a Gaussian function with another function.
Return type:Gaussian or Function

Example

Two Gaussian functions differing only by the amplitude can be added to give another Gaussian function:

>>> g1 = Gaussian(1, 0)
>>> g2 = Gaussian(1, 0, h=3)
>>> g = g1 + g2
>>> assert g.amplitude == 4
>>> assert g.center == g1.center
>>> assert g.sigma == g1.sigma
>>> assert g.momentum == g1.momentum

Two different Gaussian functions that were not discretized cannot be added:

>>> g1 + Gaussian(1, 1)
Traceback (most recent call last):
ValueError: Two analytic functions not discretized cannot be added.

Finally, if the Gaussian function is not discretized over a grid, but the other function is, then the addition can be performed:

>>> g = g1 + Function([-1, 0, 1], [1, 2, 3])
>>> g.grid
array([-1, 0, 1])
>>> g.values
array([ 1.60653066+0.j, 3.00000000+0.j, 3.60653066+0.j])

is_even
Returns:True if the Gaussian function is even.
Return type:bool

Examples

A centered Gaussian with no initial momentum is even, even if its grid is not centered:

>>> Gaussian(2, 0, grid=[-2, -1, 0]).is_even
True

A non-centered Gaussian is not even:

>>> Gaussian(2, 2).is_even
False

A centered Gaussian with a non-zero initial momentum is not even:

>>> Gaussian(2, 0, k0=1).is_even
False

is_odd
Returns:False, as a Gaussian function can’t be odd.
Return type:bool
is_inside(sw_pot, tol=1e-05)[source]

Check if the Gaussian function can be considered as inside the 1D Square-Well potential, given a tolerance value that must be larger than the values of the Gaussian function at the border of the potential.

Parameters:
  • sw_pot (SWPotential) – 1D potential.
  • tol (float) – Tolerance value (default to \(10^{-5}\)).
Returns:

True if the Gaussian function is inside the 1D SWP.

Return type:

bool

Raises:

TypeError – If sw_pot is not a SWPotential instance.

abs()[source]
Returns:Absolute value of the Gaussian function.
Return type:Gaussian

Example

The absolute value of a Gaussian function is nothing but the same Gaussian function without initial momentum:

>>> g = Gaussian(5, -1, h=4, k0=-6)
>>> assert g.abs() == Gaussian(5, -1, h=4)

conjugate()[source]
Returns:Conjugate of the Gaussian function.
Return type:Gaussian

Example

The conjugate of a Gaussian is the same Gaussian function with a negative momentum:

>>> g = Gaussian(5, -1, h=4, k0=-6)
>>> assert g.conjugate() == Gaussian(5, -1, h=4, k0=6)

norm()[source]
Returns:Analytic norm of the Gaussian function.
Return type:float

Example

The norm of a Gaussian function does not depend on the discretization grid:

>>> g1 = Gaussian(2, 0)
>>> g2 = Gaussian(2, 0, grid=[-1, 0, 1])
>>> assert g1.norm() == g2.norm()

_compute_values(grid)[source]

Evaluate the Gaussian function for each grid point \(x\): \(h\, e^{-\frac{(x - x_c)^2}{2 \sigma^2}} e^{i k_0 x}\)

Parameters:grid (numpy array) – Discretization grid.
Returns:Value of the Gaussian function over the grid.
Return type:float or complex

Example

>>> g = Gaussian(5, -1, h=-3.5)
>>> g.evaluate(-1) == -3.5
True
