This is a demo showing how FRApy can be used to fit a simple velocity model, in this case an arctangent model. We do not fit the velocity dispersion, but we can use our model to calculate how much of the measured velocity dispersion comes from beam smearing (i.e. from purely observational effects, not from the turbulence of the gas in the galaxy).
The data being fit is a very interesting and highly magnified galaxy at z=0.725, nicknamed 'the dragon' for obvious visual reasons:
More information about this galaxy and its kinematic analysis was published in Patricio et al. 2018
# For prettiness
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
%matplotlib notebook
import matplotlib.pylab as plt
# Temporary work-around
import sys
frapy_path = '../'
sys.path.append(frapy_path)
sys.path
# Import FRApy
from frapy import Observation,Velocity_Arctangent,fit_model,make_input_parameters,Output
Let's start by loading our observations (i.e. data), in this case a velocity map of this galaxy observed with MUSE.
We'll use the Observation class to do this.
obs = Observation(z=0.725,
data_path='Demo_data/A370_velocity.fits',
unc_path='Demo_data/A370_velocity_uncertainty.fits',
seeing = 0.7/0.2)  # seeing FWHM in pixels: 0.7" seeing / 0.2" per pixel
obs.info()
obs.plot(data_lim=(-200,200),unc_lim=(0,20))
Now let's create a velocity model, in this case the arctangent model.
model = Velocity_Arctangent(zlens=0.32,
dplx_path='Demo_data/A370_dplx.fits',
dply_path='Demo_data/A370_dply.fits')
model.lensing_info()
We can also check what the model parameters are:
model_parameters = model.model_parameters(verbose=True)
print(model_parameters)
The lensing displacement maps are generally not at the redshift of our source, nor aligned with our data. Using the model's create_displacement_maps_for_object method (passing the observation object we created into it), we fix this problem by producing displacement maps at the correct redshift and aligned with our data, ready to be used.
model.create_displacement_maps_for_object(obs,correct_z=False)
With the displacement maps in place, we can now obtain a distance map:
model.cx = 40
model.cy = 70
model.q = 0.9
dist = model.make_distance_map()
plt.figure()
plt.imshow(dist,origin='lower')
ang = model.make_azimuthal_map()
plt.figure()
plt.imshow(ang,origin='lower')
We can also produce the actual model we are interested in (most of the time we won't need the distance map; it is just an intermediate step common to all models):
model.r_t = 5
model.pa = 190
dummy_gradient = model.make_model()
plt.figure()
plt.imshow(dummy_gradient,origin='lower')
The model values are now also available in model.data, and they can be directly shown with model.plot().
model.data
model.plot()
Now we will try to fit the model to the data. We use the emcee sampler to do this.
A couple of things to notice:
We assume uniform priors for all parameters (maybe not the smartest choice, but perhaps the safest).
We are maximising the following (log)-probability function:
$$ln(probability) = ln(priors) + ln(likelihood)$$

with

$$ln(likelihood) = -\frac{1}{2} \big( \frac{(data-model)^2}{uncertainty^2} + ln(2\, \pi\, uncertainty^2)\big)$$

We need both the model and the observation objects we made before, but we also have to 'tell' the fitter which parameters are going to be fit and how much they are allowed to vary.
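FRApy evaluates this probability internally; for illustration, the log-likelihood above can be written in a few lines of numpy (a minimal sketch, not FRApy's actual code):

```python
import numpy as np

def ln_likelihood(data, model, uncertainty):
    """Gaussian log-likelihood, summed over all valid pixels."""
    resid2 = (data - model) ** 2 / uncertainty ** 2
    norm = np.log(2 * np.pi * uncertainty ** 2)
    return -0.5 * np.nansum(resid2 + norm)

# Toy check: a perfect model leaves only the normalisation term
data = np.array([1.0, 2.0, 3.0])
unc = np.ones_like(data)
print(ln_likelihood(data, data, unc))  # ≈ -2.757, i.e. -1.5*ln(2*pi)
```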
We do this using a nested dictionary, in the form:
{parameter_name1:{'value':X, 'min':Y, 'max':Z},
parameter_name2:{'value':A, 'min':B, 'max':C},
...
}
You can build your own, or use the make_input_parameters auxiliary function to do this for you.
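A hand-built dictionary (here for just the two centre coordinates, using the same illustrative values as below) would look like:

```python
# Hand-built input parameters: one nested dict per fitted parameter
my_par = {'cx': {'value': 24, 'min': 22, 'max': 26},
          'cy': {'value': 70, 'min': 68, 'max': 72}}
print(my_par['cx'])  # {'value': 24, 'min': 22, 'max': 26}
```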
Note: the parameter names have to be exactly the names of the parameters of the model you're using. You can check this using model.model_parameters().
# We can access all the possible parameter names to put in 'name' with:
#parameter_names = model.model_parameters()
input_par = make_input_parameters(name = ('cx', 'cy', 'q', 'pa', 'v_t', 'r_t'),
value = ( 24, 70, 0.7, 140, 200, 4.0),
minimum = ( 22, 68, 0.5, 90, 50, 1.0),
maximum = ( 26, 72, 0.99, 190, 900, 7.0))
print(input_par)
out = fit_model(obs,model,input_par,'a370_arctang_vel',nsteps=2500,nwalkers=24)
results = Output('a370_arctang_vel')
results.check_convergence()
results.make_cornerplot(start=1000)
best_param = results.best_parameters(start=1000)
Using this dictionary, we can produce the 'best' model of this fit.
(Or you can directly modify the model's parameters)
best_model, residuals = results.plot_solution(best_param)
Plotting it with nicer colours...
fig, ax = plt.subplots(1,3,figsize=(12,5))
cax = ax[0].imshow(obs.data,origin='lower',cmap='RdBu',vmax=250,vmin=-250)
plt.colorbar(cax,ax=ax[0],fraction=0.03)
convolved_model = best_model.convolve_with_seeing(obs.seeing/2.355)  # FWHM -> sigma
cax = ax[1].imshow(convolved_model,origin='lower',cmap='RdBu',vmax=250,vmin=-250)
plt.colorbar(cax,ax=ax[1],fraction=0.03)
cax = ax[2].imshow(residuals,origin='lower',cmap='PiYG',vmax=100,vmin=-100)
plt.colorbar(cax,ax=ax[2],fraction=0.03)
dummy = [x.axis('off') for x in ax]
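The factor 2.355 above converts the seeing FWHM (stored in pixels in obs.seeing) to the Gaussian sigma that convolve_with_seeing appears to expect, since for a Gaussian FWHM = 2√(2 ln 2) σ ≈ 2.3548 σ:

```python
import numpy as np

# FWHM = 2*sqrt(2*ln 2) * sigma for a Gaussian profile
fwhm_to_sigma = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
print(1.0 / fwhm_to_sigma)               # ≈ 2.3548
# The demo's seeing: 0.7" FWHM sampled at 0.2"/pixel
sigma_pix = (0.7 / 0.2) * fwhm_to_sigma
print(sigma_pix)                         # ≈ 1.486 pixels
```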
Some basic goodness of fit metrics can also be calculated:
chi2_dof = results.goodness_of_fit(best_param)
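The same statistic can also be computed by hand from the data, best-model and uncertainty maps (a sketch; FRApy's exact pixel-masking conventions may differ):

```python
import numpy as np

def reduced_chi2(data, model, uncertainty, n_params):
    """Chi-square per degree of freedom over the finite pixels."""
    ok = np.isfinite(data) & np.isfinite(model) & np.isfinite(uncertainty)
    chi2 = np.sum(((data[ok] - model[ok]) / uncertainty[ok]) ** 2)
    return chi2 / (ok.sum() - n_params)

# Toy check: one-sigma residuals everywhere give chi2/dof = N / (N - n_params)
data = np.ones(100)
print(reduced_chi2(data, np.zeros(100), np.ones(100), n_params=6))  # ≈ 1.064
```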
We have included 4 kinematic models in FRApy (see Epinat for details):
* Arctangent (Puech 2008)
* Flat
* Exponential
* Isothermal sphere
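For reference, the arctangent rotation curve fitted in this demo rises from zero at the centre to the asymptotic velocity v_t, with the turnover radius set by r_t. A common parametrisation (FRApy's exact normalisation may differ) is:

```python
import numpy as np

def v_arctangent(r, v_t, r_t):
    """Arctangent rotation curve: v(0) = 0 and v -> v_t as r -> infinity."""
    return v_t * (2.0 / np.pi) * np.arctan(r / r_t)

r = np.array([0.0, 5.0, 50.0])
print(v_arctangent(r, v_t=200.0, r_t=5.0))  # 0 at the centre, approaching v_t far out
```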
TO BE COMPLETED