Introduction¶
Measurements in High Energy Physics (HEP) rely on determining the compatibility of observed collision events with theoretical predictions. The relationship between them is often formalised in a statistical model \(f(\bm{x}|\fullset)\) describing the probability of data \(\bm{x}\) given model parameters \(\fullset\). Given observed data, the likelihood \(\mathcal{L}(\fullset)\) then serves as the basis to test hypotheses on the parameters \(\fullset\). For measurements based on binned data (histograms), the \(\HiFa{}\) family of statistical models has been widely used in both Standard Model measurements [intro-4] as well as searches for new physics [intro-5]. In this package, a declarative, plain-text format for describing \(\HiFa{}\)-based likelihoods is presented that is targeted for reinterpretation and long-term preservation in analysis data repositories such as HEPData [intro-3].
HistFactory¶
Statistical models described using \(\HiFa{}\) [intro-2] center around the simultaneous measurement of disjoint binned distributions (channels) observed as event counts \(\channelcounts\). For each channel, the overall expected event rate [1] is the sum over a number of physics processes (samples). The sample rates may be subject to parametrised variations, both to express the effect of free parameters \(\freeset\) [2] and to account for systematic uncertainties as a function of constrained parameters \(\constrset\). The degree to which the latter can cause a deviation of the expected event rates from the nominal rates is limited by constraint terms. In a frequentist framework these constraint terms can be viewed as auxiliary measurements with additional global observable data \(\auxdata\), which paired with the channel data \(\channelcounts\) completes the observation \(\bm{x} = (\channelcounts,\auxdata)\). In addition to the partition of the full parameter set into free and constrained parameters \(\fullset = (\freeset,\constrset)\), a separate partition \(\fullset = (\poiset,\nuisset)\) will be useful in the context of hypothesis testing, where a subset of the parameters is declared as parameters of interest \(\poiset\) and the remaining ones as nuisance parameters \(\nuisset\).
Thus, the overall structure of a \(\HiFa{}\) probability model is a product of the analysis-specific model term describing the measurements of the channels and the analysis-independent set of constraint terms:

\[f(\bm{x}\,|\,\fullset) = f(\bm{x}\,|\,\freeset,\constrset) = \prod_{c\,\in\,\mathrm{channels}}\prod_{b\,\in\,\mathrm{bins}_c} \mathrm{Pois}\left(n_{cb}\,\middle|\,\nu_{cb}(\freeset,\constrset)\right) \prod_{\singleconstr\,\in\,\constrset} c_\singleconstr(a_\singleconstr\,|\,\singleconstr),\]

where within a certain integrated luminosity we observe \(n_{cb}\) events given the expected rate of events \(\nu_{cb}(\freeset,\constrset)\) as a function of unconstrained parameters \(\freeset\) and constrained parameters \(\constrset\). The latter have corresponding one-dimensional constraint terms \(c_\singleconstr(a_\singleconstr|\,\singleconstr)\) with auxiliary data \(a_\singleconstr\) constraining the parameter \(\singleconstr\). The event rates \(\nu_{cb}\) are defined as

\[\nu_{cb}\left(\freeset,\constrset\right) = \sum_{s\,\in\,\mathrm{samples}} \nu_{scb}\left(\freeset,\constrset\right) = \sum_{s\,\in\,\mathrm{samples}} \left(\prod_{\kappa\,\in\,\bm{\kappa}} \kappa_{scb}\left(\freeset,\constrset\right)\right) \left(\nu_{scb}^0\left(\freeset,\constrset\right) + \sum_{\Delta\,\in\,\bm{\Delta}} \Delta_{scb}\left(\freeset,\constrset\right)\right).\]
The total rates are the sum over sample rates \(\nu_{scb}\), each determined from a nominal rate \(\nu_{scb}^0\) and a set of multiplicative and additive rate modifiers, denoted \(\bm{\kappa}(\fullset)\) and \(\bm{\Delta}(\fullset)\) respectively. These modifiers are functions of (usually a single one of) the model parameters. Starting from constant nominal rates, one can derive the per-bin event rate modification by iterating over all sample rate modifications as shown above.
As summarised in Modifiers and Constraints, rate modifications are defined in \(\HiFa{}\) for bin \(b\), sample \(s\), and channel \(c\). Each modifier is represented by a parameter \(\phi \in \{\gamma, \alpha, \lambda, \mu\}\). By convention bin-wise parameters are denoted with \(\gamma\) and interpolation parameters with \(\alpha\). The luminosity \(\lambda\) and scale factors \(\mu\) affect all bins equally. For constrained modifiers, the implied constraint term is given as well as the necessary input data required to construct it. \(\sigma_b\) corresponds to the relative uncertainty of the event rate, whereas \(\delta_b\) is the event rate uncertainty of the sample relative to the total event rate \(\nu_b = \sum_s \nu^0_{sb}\).
Modifiers implementing uncertainties are paired with a corresponding default constraint term on the parameter limiting the rate modification. The available modifiers may affect only the total number of expected events of a sample within a given channel, i.e. only change its normalisation, while holding the distribution of events across the bins of a channel, i.e. its “shape”, invariant. Alternatively, modifiers may change the sample shapes. Here \(\HiFa{}\) supports both correlated and uncorrelated bin-by-bin shape modifications. In the former, a single nuisance parameter affects the expected sample rates within the bins of a given channel, while the latter introduces one nuisance parameter for each bin, each with their own constraint term. For the correlated shape and normalisation uncertainties, \(\HiFa{}\) makes use of interpolating functions, \(f_p\) and \(g_p\), constructed from a small number of evaluations of the expected rate at fixed values of the parameter \(\alpha\) [3]. For the remaining modifiers, the parameter directly affects the rate.
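For illustration, one common choice for the additive interpolation is the piecewise-linear scheme (interpcode=0 in [intro-2], revisited in the Fundamentals notebooks below); the following is a sketch of that scheme, with \(\Delta_{scb,\alpha=\pm1}\) the recorded rate shifts relative to nominal:

\[f_p\left(\alpha\,\middle|\,\Delta_{scb,\alpha=-1},\Delta_{scb,\alpha=1}\right) = \begin{cases} \alpha\,\Delta_{scb,\alpha=1} & \alpha \geq 0\\ -\alpha\,\Delta_{scb,\alpha=-1} & \alpha < 0 \end{cases}\]

which satisfies the identity condition \(f_p(\alpha=0)=0\) of footnote [3].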
Description | Modification | Constraint Term \(c_\singleconstr\) | Input
---|---|---|---
Uncorrelated Shape | \(\kappa_{scb}(\gamma_b) = \gamma_b\) | \(\prod_b \mathrm{Pois}\left(r_b = \sigma_b^{-2}\middle|\,\rho_b = \sigma_b^{-2}\gamma_b\right)\) | \(\sigma_{b}\)
Correlated Shape | \(\Delta_{scb}(\alpha) = f_p\left(\alpha\middle|\,\Delta_{scb,\alpha=-1},\Delta_{scb,\alpha = 1}\right)\) | \(\mathrm{Gaus}\left(a = 0\middle|\,\alpha,\sigma = 1\right)\) | \(\Delta_{scb,\alpha=\pm1}\)
Normalisation Unc. | \(\kappa_{scb}(\alpha) = g_p\left(\alpha\middle|\,\kappa_{scb,\alpha=-1},\kappa_{scb,\alpha=1}\right)\) | \(\mathrm{Gaus}\left(a = 0\middle|\,\alpha,\sigma = 1\right)\) | \(\kappa_{scb,\alpha=\pm1}\)
MC Stat. Uncertainty | \(\kappa_{scb}(\gamma_b) = \gamma_b\) | \(\prod_b \mathrm{Gaus}\left(a_{\gamma_b} = 1\middle|\,\gamma_b,\delta_b\right)\) | \(\delta_b^2 = \sum_s\delta^2_{sb}\)
Luminosity | \(\kappa_{scb}(\lambda) = \lambda\) | \(\mathrm{Gaus}\left(l = \lambda_0\middle|\,\lambda,\sigma_\lambda\right)\) | \(\lambda_0,\sigma_\lambda\)
Normalisation | \(\kappa_{scb}(\mu_b) = \mu_b\) | |
Data-driven Shape | \(\kappa_{scb}(\gamma_b) = \gamma_b\) | |
Given the likelihood \(\mathcal{L}(\fullset)\), constructed from observed data in all channels and the implied auxiliary data, measurements in the form of point and interval estimates can be defined. The majority of the parameters are nuisance parameters — parameters that are not the main target of the measurement but are necessary to correctly model the data. A small subset of the unconstrained parameters may be declared as parameters of interest, for which measurements and hypothesis tests are performed, e.g. using profile likelihood methods [intro-1]. The Symbol Notation table provides a summary of all the notation introduced in this documentation.
Symbol | Name
---|---
\(f(\bm{x} | \fullset)\) | model
\(\mathcal{L}(\fullset)\) | likelihood
\(\bm{x} = \{\channelcounts, \auxdata\}\) | full dataset (including auxiliary data)
\(\channelcounts\) | channel data (or event counts)
\(\auxdata\) | auxiliary data
\(\nu(\fullset)\) | calculated event rates
\(\fullset = \{\freeset, \constrset\} = \{\poiset, \nuisset\}\) | all parameters
\(\freeset\) | free parameters
\(\constrset\) | constrained parameters
\(\poiset\) | parameters of interest
\(\nuisset\) | nuisance parameters
\(\bm{\kappa}(\fullset)\) | multiplicative rate modifier
\(\bm{\Delta}(\fullset)\) | additive rate modifier
\(c_\singleconstr(a_\singleconstr | \singleconstr)\) | constraint term for constrained parameter \(\singleconstr\)
\(\sigma_\singleconstr\) | relative uncertainty in the constrained parameter
Declarative Formats¶
While flexible enough to describe a wide range of LHC measurements, the design of the \(\HiFa{}\) specification is sufficiently simple to admit a declarative format that fully encodes the statistical model of the analysis. This format defines the channels, all associated samples, their parameterised rate modifiers and implied constraint terms as well as the measurements. Additionally, the format represents the mathematical model, leaving the implementation of the likelihood minimisation to be analysis-dependent and/or language-dependent. Originally XML was chosen as a specification language to define the structure of the model while introducing a dependence on \(\Root{}\) to encode the nominal rates and required input data of the constraint terms [intro-2]. Using this specification, a model can be constructed and evaluated within the \(\RooFit{}\) framework.
This package introduces an updated form of the specification based on the ubiquitous plain-text JSON format and its schema-language JSON Schema. Described in more detail in Likelihood Specification, this schema fully specifies both structure and necessary constrained data in a single document and thus is implementation independent.
Additional Material¶
Footnotes¶
- [1]
Here rate refers to the number of events expected to be observed within a given data-taking interval defined through its integrated luminosity. It often appears as the input parameter to the Poisson distribution, hence the name “rate”.
- [2]
These free parameters frequently include the signal strength of a given process, i.e. its cross-section normalised to a particular reference cross-section such as that expected from the Standard Model or a given BSM scenario.
- [3]
This is usually constructed from the nominal rate and measurements of the event rate at \(\alpha=\pm1\), where the value of the modifier at \(\alpha=\pm1\) must be provided and the value at \(\alpha=0\) corresponds to the identity operation of the modifier, i.e. \(f_{p}(\alpha=0) = 0\) and \(g_{p}(\alpha = 0)=1\) for additive and multiplicative modifiers respectively. See Section 4.1 in [intro-2].
Bibliography¶
- intro-1
Glen Cowan, Kyle Cranmer, Eilam Gross, and Ofer Vitells. Asymptotic formulae for likelihood-based tests of new physics. Eur. Phys. J. C, 71:1554, 2011. arXiv:1007.1727, doi:10.1140/epjc/s10052-011-1554-0.
- intro-2
Kyle Cranmer, George Lewis, Lorenzo Moneta, Akira Shibata, and Wouter Verkerke. HistFactory: A tool for creating statistical models for use with RooFit and RooStats. Technical Report CERN-OPEN-2012-016, New York U., New York, Jan 2012. URL: https://cds.cern.ch/record/1456844.
- intro-3
Eamonn Maguire, Lukas Heinrich, and Graeme Watt. HEPData: a repository for high energy physics data. J. Phys. Conf. Ser., 898(10):102006, 2017. arXiv:1704.05473, doi:10.1088/1742-6596/898/10/102006.
- intro-4
ATLAS Collaboration. Measurements of Higgs boson production and couplings in diboson final states with the ATLAS detector at the LHC. Phys. Lett. B, 726:88, 2013. arXiv:1307.1427, doi:10.1016/j.physletb.2014.05.011.
- intro-5
ATLAS Collaboration. Search for supersymmetry in final states with missing transverse momentum and multiple \(b\)-jets in proton–proton collisions at \(\sqrt{s} = 13\,\TeV\) with the ATLAS detector. ATLAS-CONF-2018-041, 2018. URL: https://cds.cern.ch/record/2632347.
Likelihood Specification¶
The structure of the JSON specification of models follows closely the original XML-based specification [likelihood-2].
Workspace¶
{
    "$schema": "http://json-schema.org/draft-06/schema#",
    "$id": "https://scikit-hep.org/pyhf/schemas/1.0.0/workspace.json",
    "type": "object",
    "properties": {
        "channels": { "type": "array", "items": {"$ref": "defs.json#/definitions/channel"}, "minItems": 1 },
        "measurements": { "type": "array", "items": {"$ref": "defs.json#/definitions/measurement"}, "minItems": 1 },
        "observations": { "type": "array", "items": {"$ref": "defs.json#/definitions/observation"}, "minItems": 1 },
        "version": { "const": "1.0.0" }
    },
    "additionalProperties": false,
    "required": ["channels", "measurements", "observations", "version"]
}
The overall document in the above code snippet describes a workspace, which includes
channels: The channels in the model, which include a description of the samples within each channel and their possible parametrised modifiers.
measurements: A set of measurements, which define among others the parameters of interest for a given statistical analysis objective.
observations: The observed data, with which a likelihood can be constructed from the model.
A workspace consists of the channels and one set of observed data, but can include multiple measurements. Given a JSON file, one can quickly check that it conforms to the provided workspace specification as follows:
import json
import requests
import jsonschema

workspace = json.load(open('/path/to/analysis_workspace.json'))
# if no exception is raised, it found and parsed the schema
schema = requests.get('https://scikit-hep.org/pyhf/schemas/1.0.0/workspace.json').json()
# If no exception is raised by validate(), the instance is valid.
jsonschema.validate(instance=workspace, schema=schema)
Channel¶
A channel is defined by a channel name and a list of samples [likelihood-1].
{
    "channel": {
        "type": "object",
        "properties": {
            "name": { "type": "string" },
            "samples": { "type": "array", "items": {"$ref": "#/definitions/sample"}, "minItems": 1 }
        },
        "required": ["name", "samples"],
        "additionalProperties": false
    }
}
The Channel specification consists of a list of channel descriptions. Each channel, an analysis region encompassing one or more measurement bins, consists of a name field and a samples field (see Channel), which holds a list of sample definitions (see Sample). Each sample definition in turn has a name field, a data field for the nominal event rates for all bins in the channel, and a modifiers field holding the list of modifiers for the sample.
Sample¶
A sample is defined by a sample name, the sample event rate, and a list of modifiers [likelihood-1].
{
    "sample": {
        "type": "object",
        "properties": {
            "name": { "type": "string" },
            "data": { "type": "array", "items": {"type": "number"}, "minItems": 1 },
            "modifiers": {
                "type": "array",
                "items": {
                    "anyOf": [
                        { "$ref": "#/definitions/modifier/histosys" },
                        { "$ref": "#/definitions/modifier/lumi" },
                        { "$ref": "#/definitions/modifier/normfactor" },
                        { "$ref": "#/definitions/modifier/normsys" },
                        { "$ref": "#/definitions/modifier/shapefactor" },
                        { "$ref": "#/definitions/modifier/shapesys" },
                        { "$ref": "#/definitions/modifier/staterror" }
                    ]
                }
            }
        },
        "required": ["name", "data", "modifiers"],
        "additionalProperties": false
    }
}
Modifiers¶
The modifiers applicable to a given sample are encoded as a list of JSON objects with three fields: a name field, a type field denoting the class of the modifier, and a data field providing the necessary input data as listed in Modifiers and Constraints.
Based on the declared modifiers, the set of parameters and their constraint terms are derived implicitly, as each type of modifier unambiguously defines the constraint terms it requires. Correlated shape modifiers and normalisation uncertainties have compatible constraint terms, and thus modifiers that share parameters can be declared by re-using a name [1] for multiple modifiers. That is, a variation of a single parameter causes a shift within sample rates due to both shape and normalisation variations.
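For instance, a correlated shape variation and a normalisation uncertainty driven by the same underlying systematic can share a single nuisance parameter by repeating the name. A sketch of such a declaration (the name and data values here are placeholders):

[
    { "name": "corr_jes", "type": "histosys", "data": {"hi_data": [11.0, 12.0], "lo_data": [9.0, 8.0]} },
    { "name": "corr_jes", "type": "normsys", "data": {"hi": 1.1, "lo": 0.9} }
]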
We review the structure of each modifier type below.
Normalisation Uncertainty (normsys)¶
The normalisation uncertainty modifies the sample rate by an overall factor \(\kappa(\alpha)\), constructed as the interpolation between the downward (“lo”) and upward (“hi”) variations as well as the nominal setting, i.e. \(\kappa(-1) = \kappa_{\alpha=-1}\), \(\kappa(0) = 1\), and \(\kappa(+1) = \kappa_{\alpha=+1}\). In the modifier definition we record \(\kappa_{\alpha=+1}\) and \(\kappa_{\alpha=-1}\) as floats. An example is shown below:
{ "name": "mod_name", "type": "normsys", "data": {"hi": 1.1, "lo": 0.9} }
An example of a normalisation uncertainty modifier with scale factors recorded for the up/down variations of an \(n\)-bin channel.
MC Statistical Uncertainty (staterror)¶
As the sample counts are often derived from Monte Carlo (MC) datasets, they necessarily carry an uncertainty due to the finite sample size of the datasets. As explained in detail in [likelihood-2], adding uncertainties for each sample would yield a very large number of nuisance parameters with limited utility. Therefore a set of bin-wise scale factors \(\gamma_b\) is introduced to model the overall uncertainty in the bin due to MC statistics. The constraint term is constructed as a set of Gaussian constraints with a central value equal to unity for each bin in the channel. The scales \(\sigma_b\) of the constraint are computed from the individual uncertainties of samples defined within the channel relative to the total event rate of all samples: \(\delta_{csb} = \sigma_{csb}/\sum_s \nu^0_{scb}\). As not all samples within a channel are estimated from MC simulations, only the samples with a declared statistical uncertainty modifier enter the sum. An example is shown below:
{ "name": "mod_name", "type": "staterror", "data": [0.1] }
An example of a statistical uncertainty modifier.
Luminosity (lumi)¶
Sample rates derived from theory calculations, as opposed to data-driven estimates, are scaled to the integrated luminosity corresponding to the observed data. As the luminosity measurement is itself subject to an uncertainty, it must be reflected in the rate estimates of such samples. As this modifier is global in nature, no additional per-sample information is required and thus the data field is nulled. This uncertainty is relevant, in particular, when the parameter of interest is a signal cross-section. The luminosity uncertainty \(\sigma_\lambda\) is provided as part of the parameter configuration included in the measurement specification discussed in Measurements. An example is shown below:
{ "name": "mod_name", "type": "lumi", "data": null }
An example of a luminosity modifier.
Unconstrained Normalisation (normfactor)¶
The unconstrained normalisation modifier scales the event rates of a sample by a free parameter \(\mu\). Common use cases are the signal rate of a possible BSM signal or simultaneous in-situ measurements of background samples. Such parameters are frequently the parameters of interest of a given measurement. No additional per-sample data is required. An example is shown below:
{ "name": "mod_name", "type": "normfactor", "data": null }
An example of a normalisation modifier.
Data-driven Shape (shapefactor)¶
In order to support data-driven estimation of sample rates (e.g. for multijet backgrounds), the data-driven shape modifier adds free, bin-wise multiplicative parameters. Similarly to the normalisation factors, no additional data is required as no constraint is defined. An example is shown below:
{ "name": "mod_name", "type": "shapefactor", "data": null }
An example of a data-driven shape modifier.
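Correlated Shape (histosys)¶
The correlated shape modifier applies an additive, bin-wise shift \(\Delta_{scb}(\alpha)\) to the sample rates, interpolated between the rates recorded at the downward and upward variations of a single nuisance parameter, as listed in Modifiers and Constraints. A sketch of the declaration, with the full event rate arrays at \(\alpha=\pm1\) recorded under the hi_data and lo_data fields of the sample schema (values here are placeholders):

{ "name": "mod_name", "type": "histosys", "data": {"hi_data": [20.0, 15.0], "lo_data": [10.0, 10.0]} }

An example of a correlated shape modifier.
Uncorrelated Shape (shapesys)¶
The uncorrelated shape modifier introduces one constrained bin-wise scale factor \(\gamma_b\) per bin of the channel, with the Poisson constraint terms listed in Modifiers and Constraints. A sketch of the declaration, with the data field holding the absolute uncertainty of the event rate in each bin (cf. the toy example below):

{ "name": "mod_name", "type": "shapesys", "data": [5.0, 12.0] }

An example of an uncorrelated shape modifier.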
Data¶
The data provided by the analysis are the observed data for each channel (or region). This data is provided as a mapping from channel name to an array of floats, which provide the observed rates in each bin of the channel. The auxiliary data is not included as it is an input to the likelihood that does not need to be archived and can be determined automatically from the specification. An example is shown below:
{ "chan_name_one": [10, 20], "chan_name_two": [4, 0]}
An example of channel data.
Measurements¶
Given the data and the model definitions, a measurement can be defined. In the current schema, a measurement defines the name of the parameter of interest as well as parameter set configurations [2]. Here, the remaining information not covered through the channel definition is provided, e.g. for the luminosity parameter. For all modifiers, the default settings can be overridden where possible:
inits: Initial value of the parameter.
bounds: Interval bounds of the parameter.
auxdata: Auxiliary data for the associated constraint term.
sigmas: Associated uncertainty of the parameter.
An example is shown below:
{
    "name": "MyMeasurement",
    "config": {
        "poi": "SignalCrossSection",
        "parameters": [
            { "name": "lumi", "auxdata": [1.0], "sigmas": [0.017], "bounds": [[0.915, 1.085]], "inits": [1.0] },
            { "name": "mu_ttbar", "bounds": [[0, 5]] },
            { "name": "rw_1CR", "fixed": true }
        ]
    }
}
An example of a measurement. This measurement, which scans over the parameter of interest SignalCrossSection, sets configurations for the luminosity modifier, changes the default bounds for the normfactor modifier named mu_ttbar, and specifies that the modifier rw_1CR is held constant (fixed).
Observations¶
The observations are the data against which hypothesis tests are evaluated, e.g. to determine the compatibility of the signal-plus-background hypothesis with the background-only hypothesis. They are specified as a list of objects, with each object structured as
name: the channel for which the observations are recorded
data: the bin-by-bin observations for the named channel
An example is shown below:
{
    "name": "channel1",
    "data": [110.0, 120.0]
}
An example of an observation. This observation, recorded for the 2-bin channel channel1, has values 110.0 and 120.0.
Toy Example¶
{
    "channels": [
        {
            "name": "singlechannel",
            "samples": [
                {
                    "name": "signal",
                    "data": [5.0, 10.0],
                    "modifiers": [ { "name": "mu", "type": "normfactor", "data": null } ]
                },
                {
                    "name": "background",
                    "data": [50.0, 60.0],
                    "modifiers": [ { "name": "uncorr_bkguncrt", "type": "shapesys", "data": [5.0, 12.0] } ]
                }
            ]
        }
    ],
    "observations": [
        { "name": "singlechannel", "data": [50.0, 60.0] }
    ],
    "measurements": [
        { "name": "Measurement", "config": {"poi": "mu", "parameters": []} }
    ],
    "version": "1.0.0"
}
In the above example, we demonstrate a simple measurement of a single two-bin channel with two samples: a signal sample and a background sample. The signal sample has an unconstrained normalisation factor \(\mu\), while the background sample carries an uncorrelated shape systematic controlled by parameters \(\gamma_1\) and \(\gamma_2\). The background uncertainty for the bins is 10% and 20% respectively.
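A minimal sketch of how such a workspace can be consumed with the pyhf Python API (assuming a recent pyhf release is installed; the file path is a placeholder):

import json
import pyhf

# load the toy workspace from above (placeholder path)
with open('toy_workspace.json') as f:
    spec = json.load(f)

workspace = pyhf.Workspace(spec)
model = workspace.model(measurement_name='Measurement')
data = workspace.data(model)  # observed counts plus auxiliary data
# maximum likelihood estimate of (mu, gamma_1, gamma_2)
bestfit_pars = pyhf.infer.mle.fit(data, model)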
Additional Material¶
Footnotes¶
- [1]
The name of a modifier specifies the parameter set it is controlled by. Modifiers with the same name share parameter sets.
- [2]
In this context a parameter set corresponds to a named lower-dimensional subspace of the full parameters \(\fullset\). In many cases these are one-dimensional subspaces, e.g. a specific interpolation parameter \(\alpha\) or the luminosity parameter \(\lambda\). For multi-bin channels, however, all bin-wise nuisance parameters of e.g. the uncorrelated shape modifiers are grouped under a single name. Therefore in general a parameter set definition provides arrays of initial values, bounds, etc.
Bibliography¶
- likelihood-1
Histfactory definitions schema. Accessed: 2019-06-20. URL: https://scikit-hep.org/pyhf/schemas/1.0.0/defs.json.
- likelihood-2
Kyle Cranmer, George Lewis, Lorenzo Moneta, Akira Shibata, and Wouter Verkerke. HistFactory: A tool for creating statistical models for use with RooFit and RooStats. Technical Report CERN-OPEN-2012-016, New York U., New York, Jan 2012. URL: https://cds.cern.ch/record/1456844.
Fundamentals¶
Notebooks:
[1]:
%pylab inline
from ipywidgets import interact
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
Populating the interactive namespace from numpy and matplotlib
Piecewise Linear Interpolation¶
References: https://cds.cern.ch/record/1456844/files/CERN-OPEN-2012-016.pdf
We wish to understand interpolation using the piecewise linear function. This is interpcode=0 in the above reference. This function is defined as (nb: vector denotes bold)

\[\eta_s(\vec{\alpha}) = \sigma_{sb}^0(\vec{\alpha}) + \sum_{p\,\in\,\text{Syst}} I_\text{lin.}\left(\alpha_p;\,\sigma_{sb}^0,\,\sigma_{psb}^+,\,\sigma_{psb}^-\right)\]

with

\[I_\text{lin.}\left(\alpha;\,I^0,\,I^+,\,I^-\right) = \begin{cases} \alpha\left(I^+ - I^0\right) & \alpha \geq 0\\ \alpha\left(I^0 - I^-\right) & \alpha < 0 \end{cases}\]
In this notebook, we’ll demonstrate the technical implementation of these interpolations starting from simple dimensionality and increasing the dimensions as we go along. In all situations, we’ll consider a single systematic that we wish to interpolate, such as Jet Energy Scale (JES).
Let’s define the interpolate function. This function will produce the deltas we would like to calculate and sum with the nominal measurement to determine the interpolated measurement’s value.
[2]:
def interpolate_deltas(down, nom, up, alpha):
    delta_up = up - nom
    delta_down = nom - down
    if alpha > 0:
        return delta_up * alpha
    else:
        return delta_down * alpha
Why are we calculating deltas? This is some additional foresight that you, the reader, may not have yet. Multiple interpolation schemes exist but they all rely on calculating the change with respect to the nominal measurement (the delta).
Case 1: The Single-binned Histogram¶
Let’s first start with considering evaluating the total number of events after applying JES corrections. This is the single-bin case. Code that runs through event selection will vary the JES parameter and provide three histograms, each with a single bin. These three histograms represent the nominal-, up-, and down- variations of the JES nuisance parameter.
When processing, we find that there are 10 events nominally, and when we vary the JES parameter downwards, we only measure 8 events. When varying upwards, we measure 15 events.
[3]:
down_1 = np.array([8])
nom_1 = np.array([10])
up_1 = np.array([15])
We would like to generate a function \(f(\alpha_\text{JES})\) that linearly interpolates the number of events for us so we can scan the phase-space for calculating PDFs. The interpolate_deltas() function defined above does this for us.
[4]:
alphas = np.linspace(-1.,1.)
deltas = [interpolate_deltas(down_1, nom_1, up_1, alpha) for alpha in alphas]
deltas[:5]
[4]:
[array([-2.]),
array([-1.91836735]),
array([-1.83673469]),
array([-1.75510204]),
array([-1.67346939])]
So now that we’ve generated the deltas from the nominal measurement, we can plot this to see how the linear interpolation works in the single-bin case, where we plot the measured values in black, and the interpolation in dashed, blue.
[5]:
plt.plot(alphas,[nom_1+delta for delta in deltas], linestyle='--')
plt.scatter((-1,0,1),(down_1,nom_1,up_1), color='k')
plt.xlabel(r'$\alpha_\mathrm{JES}$')
plt.ylabel(r'Events')
[5]:
Text(0,0.5,'Events')

Here, we can imagine building a 1-dimensional tensor (column-vector) of measurements as a function of \(\alpha_\text{JES}\) with each row in the column vector corresponding to a given \(\alpha_\text{JES}\) value.
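As a small sketch of that idea using the functions and data defined above, one can stack the interpolated yields for a grid of \(\alpha_\text{JES}\) values:

# one row per alpha value, one column per bin of the histogram
alphas_grid = np.linspace(-1, 1, 5)
column = np.array([nom_1 + interpolate_deltas(down_1, nom_1, up_1, a) for a in alphas_grid])
print(column.shape)  # (5, 1)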
Case 2: The Multi-binned Histogram¶
Now, let’s increase the computational difficulty a little by increasing the dimensionality. Assume instead of a single-bin measurement, we have more measurements! We are good physicists after all. Imagine continuing on the previous example, where we add more bins, perhaps because we got more data. Imagine that this was binned by collection year, where we observed 10 events in the first year, 10.5 the next year, and so on…
[6]:
down_hist = np.linspace(8,10,11)
nom_hist = np.linspace(10,13,11)
up_hist = np.linspace(15,20,11)
Now, we still need to interpolate. Just like before, we have varied JES upwards and downwards to determine the corresponding histograms of variations. In order to interpolate, we need to interpolate by bin for each bin in the three histograms we have here (or three measurements if you prefer).
Let’s go ahead and plot these histograms as a function of the bin index with black as the nominal measurements, red and blue as the down and up variations respectively. The black points are the measurements we have, and for each bin, we would like to interpolate to get an interpolated histogram that represents the measurement as a function of \(\alpha_\text{JES}\).
[7]:
def plot_measurements(down_hist, nom_hist, up_hist):
    bincenters = np.arange(len(nom_hist))
    for i, h in enumerate(zip(up_hist, nom_hist, down_hist)):
        plt.scatter([i] * len(h), h, color='k', alpha=0.5)
    for c, h in zip(['r', 'k', 'b'], [down_hist, nom_hist, up_hist]):
        plt.plot(bincenters, h, color=c, linestyle='-', alpha=0.5)
    plt.xlabel('Bin index in histogram')
    plt.ylabel('Events')

plot_measurements(down_hist, nom_hist, up_hist)

What does this look like if we evaluate at a single \(\alpha_\text{JES} = 0.5\)? We’ll write a function that interpolates and then plots the interpolated values as a function of bin index, in green, dashed.
[8]:
def plot_interpolated_histogram(alpha, down_hist, nom_hist, up_hist):
    bincenters = np.arange(len(nom_hist))
    interpolated_vals = [nominal + interpolate_deltas(down, nominal, up, alpha)
                         for down, nominal, up in zip(down_hist, nom_hist, up_hist)]
    plot_measurements(down_hist, nom_hist, up_hist)
    plt.plot(bincenters, interpolated_vals, color='g', linestyle='--')

plot_interpolated_histogram(0.5, down_hist, nom_hist, up_hist)

We can go one step further in visualization and see what it looks like for different \(\alpha_\text{JES}\) using ipywidgets’ interactivity. Change the slider to get an idea of how the interpolation works.
[9]:
x = interact(lambda alpha: plot_interpolated_histogram(alpha, down_hist, nom_hist, up_hist), alpha = (-1,1,0.1))
The magic in plot_interpolated_histogram() happens to be that for a given \(\alpha_\text{JES}\), we iterate over all measurements bin-by-bin to calculate the interpolated value
[nominal + interpolate_deltas(down, nominal, up, alpha) for down, nominal, up in zip(...hists...)]
So you can imagine that we’re building up a 2-dimensional tensor with each row corresponding to a different \(\alpha_\text{JES}\) and each column corresponding to the bin index of the histograms (or measurements). Let’s go ahead and build a 3-dimensional representation of our understanding so far!
[10]:
def interpolate_alpha_range(alphas, down_hist, nom_hist, up_hist):
    at_alphas = []
    for alpha in alphas:
        interpolated_hist_at_alpha = [nominal + interpolate_deltas(down, nominal, up, alpha)
                                      for down, nominal, up in zip(down_hist, nom_hist, up_hist)]
        at_alphas.append(interpolated_hist_at_alpha)
    return np.array(at_alphas)
And then with this, we are interpolating over all histograms bin-by-bin and producing a 2-dimensional tensor with each row corresponding to a specific value of \(\alpha_\text{JES}\).
[11]:
alphas = np.linspace(-1,1,11)
interpolated_vals_at_alphas = interpolate_alpha_range(alphas, down_hist, nom_hist, up_hist)
print(interpolated_vals_at_alphas[alphas==-1])
print(interpolated_vals_at_alphas[alphas==0])
print(interpolated_vals_at_alphas[alphas==1])
[[ 8. 8.2 8.4 8.6 8.8 9. 9.2 9.4 9.6 9.8 10. ]]
[[10. 10.3 10.6 10.9 11.2 11.5 11.8 12.1 12.4 12.7 13. ]]
[[15. 15.5 16. 16.5 17. 17.5 18. 18.5 19. 19.5 20. ]]
We have a way to generate the 2-dimensional tensor. Let’s go ahead and add in all dimensions. Additionally, we’ll add in some extra code to show the projection of the 2-d plots that we made earlier to help understand the 3-d plot a bit better. Like before, let’s plot specifically colored lines for \(\alpha_\text{JES}=0.5\) as well as provide an interactive session.
[13]:
def plot_wire(alpha):
    alphas = np.linspace(-1, 1, 51)
    at_alphas = interpolate_alpha_range(alphas, down_hist, nom_hist, up_hist)
    bincenters = np.arange(len(nom_hist))
    x, y = np.meshgrid(bincenters, alphas)
    z = np.asarray(at_alphas)
    bottom = np.zeros_like(x)
    fig = plt.figure(figsize=(10, 10))
    ax1 = fig.add_subplot(111, projection='3d')
    ax1.plot_wireframe(x, y, z, alpha=0.3)
    x, y = np.meshgrid(bincenters, [alpha])
    z = interpolate_alpha_range([alpha], down_hist, nom_hist, up_hist)
    ax1.plot_wireframe(x, y, z, edgecolor='g', linestyle='--')
    ax1.set_xlim(0, 10)
    ax1.set_ylim(-1.0, 1.5)
    ax1.set_zlim(0, 25)
    ax1.view_init(azim=-125)
    ax1.set_xlabel('Bin Index')
    ax1.set_ylabel(r'$\alpha_\mathrm{JES}$')
    ax1.set_zlabel('Events')
    # add in 2D plot goodness
    for c, h, zs in zip(['r', 'k', 'b'], [down_hist, nom_hist, up_hist], [-1.0, 0.0, 1.0]):
        ax1.plot(bincenters, h, color=c, linestyle='-', alpha=0.5, zdir='y', zs=zs)
        ax1.plot(bincenters, h, color=c, linestyle='-', alpha=0.25, zdir='y', zs=1.5)
    ax1.plot(bincenters, z.T, color='g', linestyle='--', zdir='y', zs=alpha)
    ax1.plot(bincenters, z.T, color='g', linestyle='--', alpha=0.5, zdir='y', zs=1.5)
    plt.show()

plot_wire(0.5)
interact(plot_wire, alpha=(-1, 1, 0.1))

[13]:
<function __main__.plot_wire>
Tensorizing Interpolators¶
This notebook will introduce some tensor algebra concepts about being able to convert from calculations inside for-loops into a single calculation over the entire tensor. It is assumed that you have some familiarity with what interpolation functions are used for in pyhf.
To get started, we’ll load up some functions we wrote whose job is to generate sets of histograms and alphas that we will compute interpolations for. This allows us to generate random, structured input data that we can use to test the tensorized form of the interpolation function against the original one we wrote. For now, we will consider only the numpy backend for simplicity, but np can be replaced by pyhf.tensorlib to achieve identical functionality.
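For instance, a minimal sketch of swapping in the tensorlib operations (assuming pyhf is installed; pyhf.tensorlib defaults to the numpy backend):

import pyhf

tb = pyhf.tensorlib  # numpy backend by default
x = tb.astensor([1.0, 2.0])
y = tb.astensor([3.0, 4.0])
# the same einsum call we use with np below
print(tb.einsum('i,j->ij', x, y))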
The function random_histosets_alphasets_pair will produce a pair (histogramssets, alphasets) of histograms and alphas for those histograms that represents the type of input we wish to interpolate on.
[1]:
import numpy as np

def random_histosets_alphasets_pair(nsysts=150, nhistos_per_syst_upto=300, nalphas=1, nbins_upto=1):
    def generate_shapes(histogramssets, alphasets):
        h_shape = [len(histogramssets), 0, 0, 0]
        a_shape = (len(alphasets), max(map(len, alphasets)))
        for hs in histogramssets:
            h_shape[1] = max(h_shape[1], len(hs))
            for h in hs:
                h_shape[2] = max(h_shape[2], len(h))
                for sh in h:
                    h_shape[3] = max(h_shape[3], len(sh))
        return tuple(h_shape), a_shape

    def filled_shapes(histogramssets, alphasets):
        # pad our shapes with NaNs
        histos, alphas = generate_shapes(histogramssets, alphasets)
        histos, alphas = np.ones(histos) * np.nan, np.ones(alphas) * np.nan
        for i, syst in enumerate(histogramssets):
            for j, sample in enumerate(syst):
                for k, variation in enumerate(sample):
                    histos[i, j, k, :len(variation)] = variation
        for i, alphaset in enumerate(alphasets):
            alphas[i, :len(alphaset)] = alphaset
        return histos, alphas

    nsyst_histos = np.random.randint(1, 1 + nhistos_per_syst_upto, size=nsysts)
    nhistograms = [np.random.randint(1, nbins_upto + 1, size=n) for n in nsyst_histos]
    random_alphas = [np.random.uniform(-1, 1, size=nalphas) for n in nsyst_histos]
    random_histogramssets = [
        [  # all histos affected by systematic $nh
            [  # sample $i, systematic $nh
                np.random.uniform(10 * i + j, 10 * i + j + 1, size=nbin).tolist() for j in range(3)
            ] for i, nbin in enumerate(nh)
        ] for nh in nhistograms
    ]
    h, a = filled_shapes(random_histogramssets, random_alphas)
    return h, a
The (slow) interpolations¶
In all cases, the way we do interpolations is as follows:

1. Loop over both the histogramssets and alphasets simultaneously (e.g. using python’s zip())
2. Loop over all histogram sets in the set of histogram sets that correspond to the histograms affected by a given systematic
3. Loop over all of the alphas in the set of alphas
4. Loop over all the bins in the histogram sets simultaneously (e.g. using python’s zip())
5. Apply the interpolation across the same bin index
This is already exhausting to think about, so let’s put this in code form. Depending on the kind of interpolation being done, we’ll pass in func as an argument to the top-level interpolation loop to switch between linear (interpcode=0) and non-linear (interpcode=1).
[2]:
def interpolation_looper(histogramssets, alphasets, func):
    all_results = []
    for histoset, alphaset in zip(histogramssets, alphasets):
        all_results.append([])
        set_result = all_results[-1]
        for histo in histoset:
            set_result.append([])
            histo_result = set_result[-1]
            for alpha in alphaset:
                alpha_result = []
                for down, nom, up in zip(histo[0], histo[1], histo[2]):
                    v = func(down, nom, up, alpha)
                    alpha_result.append(v)
                histo_result.append(alpha_result)
    return all_results
And we can also define our linear and non-linear interpolations we’ll consider in this notebook that we wish to tensorize.
[3]:
def interpolation_linear(histogramssets, alphasets):
    def summand(down, nom, up, alpha):
        delta_up = up - nom
        delta_down = nom - down
        if alpha > 0:
            delta = delta_up * alpha
        else:
            delta = delta_down * alpha
        return nom + delta
    return interpolation_looper(histogramssets, alphasets, summand)

def interpolation_nonlinear(histogramssets, alphasets):
    def product(down, nom, up, alpha):
        delta_up = up / nom
        delta_down = down / nom
        if alpha > 0:
            delta = delta_up ** alpha
        else:
            delta = delta_down ** (-alpha)
        return nom * delta
    return interpolation_looper(histogramssets, alphasets, product)
We will also define a helper function that allows us to pass in two functions we wish to compare the outputs for:
[4]:
def compare_fns(func1, func2):
    h, a = random_histosets_alphasets_pair()

    def _func_runner(func, histssets, alphasets):
        return np.asarray(func(histssets, alphasets))

    old = _func_runner(func1, h, a)
    new = _func_runner(func2, h, a)
    return (np.all(old[~np.isnan(old)] == new[~np.isnan(new)]), (h, a))
For the rest of the notebook, we will detail in explicit form how the linear interpolator gets tensorized, step-by-step. The same sequence of steps will be shown for the non-linear interpolator – but it is left up to the reader to understand the steps.
Tensorizing the Linear Interpolator¶
Step 0¶
Step 0 requires converting the innermost conditional check on alpha > 0 into something tensorizable. This also means the calculation itself is going to become tensorized. So we will convert from
if alpha > 0:
    delta = delta_up * alpha
else:
    delta = delta_down * alpha
to
delta = np.where(alpha > 0, delta_up*alpha, delta_down*alpha)
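As a quick standalone check of the np.where semantics used here (a sketch, not part of the original notebook):

import numpy as np

alpha = -0.5
delta_up, delta_down = 5.0, 2.0
# np.where(cond, x, y) picks x where cond is truthy, else y
print(np.where(alpha > 0, delta_up * alpha, delta_down * alpha))  # -1.0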
Let’s make that change now, and let’s check to make sure we still do the calculation correctly.
[5]:
# get the internal calculation to use tensorlib backend
def new_interpolation_linear_step0(histogramssets, alphasets):
    all_results = []
    for histoset, alphaset in zip(histogramssets, alphasets):
        all_results.append([])
        set_result = all_results[-1]
        for histo in histoset:
            set_result.append([])
            histo_result = set_result[-1]
            for alpha in alphaset:
                alpha_result = []
                for down, nom, up in zip(histo[0], histo[1], histo[2]):
                    delta_up = up - nom
                    delta_down = nom - down
                    delta = np.where(alpha > 0, delta_up * alpha, delta_down * alpha)
                    v = nom + delta
                    alpha_result.append(v)
                histo_result.append(alpha_result)
    return all_results
And does the calculation still match?
[6]:
result, (h,a) = compare_fns(interpolation_linear, new_interpolation_linear_step0)
print(result)
True
[7]:
%%timeit
interpolation_linear(h,a)
189 ms ± 6.14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
[8]:
%%timeit
new_interpolation_linear_step0(h,a)
255 ms ± 11.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Great! We’re a little bit slower right now, but that’s expected. We’re just getting started.
Step 1¶
In this step, we would like to remove the innermost zip() call over the histogram bins by calculating the interpolation between the histograms in one fell swoop. This means, instead of writing something like
for down, nom, up in zip(histo[0], histo[1], histo[2]):
    delta_up = up - nom
    ...
one can instead write
delta_up = histo[2] - histo[1]
...
taking advantage of the automatic broadcasting of operations on input tensors. This sort of feature of the tensor backends allows us to speed up code, such as interpolation.
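For instance, a small sketch illustrating the broadcasting with made-up numbers:

import numpy as np

histo = np.array([[8.0, 9.0],     # down variation
                  [10.0, 11.0],   # nominal
                  [15.0, 16.0]])  # up variation
delta_up = histo[2] - histo[1]  # array([5., 5.]): all bins at once, no zip()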
[9]:
# update the delta variations to remove the zip() call and remove most-nested loop
def new_interpolation_linear_step1(histogramssets, alphasets):
    all_results = []
    for histoset, alphaset in zip(histogramssets, alphasets):
        all_results.append([])
        set_result = all_results[-1]
        for histo in histoset:
            set_result.append([])
            histo_result = set_result[-1]
            for alpha in alphaset:
                alpha_result = []
                deltas_up = histo[2] - histo[1]
                deltas_dn = histo[1] - histo[0]
                calc_deltas = np.where(alpha > 0, deltas_up * alpha, deltas_dn * alpha)
                v = histo[1] + calc_deltas
                alpha_result.append(v)
                histo_result.append(alpha_result)
    return all_results
And does the calculation still match?
[10]:
result, (h,a) = compare_fns(interpolation_linear, new_interpolation_linear_step1)
print(result)
True
[11]:
%%timeit
interpolation_linear(h,a)
188 ms ± 7.14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
[12]:
%%timeit
new_interpolation_linear_step1(h,a)
492 ms ± 42.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Great!
Step 2¶
In this step, we would like to move the giant array of the deltas calculated to the beginning, outside of all loops, and then only take a subset of it for the calculation itself. This allows us to figure out the entire structure of the input for the rest of the calculations as we slowly move towards including einsum() calls (einstein summation). This means we would like to go from
for histo in histoset:
    delta_up = histo[2] - histo[1]
    ...
to
all_deltas = ...
for nh, histo in enumerate(histoset):
    deltas = all_deltas[nh]
    ...
Again, we are taking advantage of the automatic broadcasting of operations on input tensors to calculate all the deltas in a single action.
[13]:
# figure out the giant array of all deltas at the beginning and only take subsets of it for the calculation
def new_interpolation_linear_step2(histogramssets, alphasets):
    all_results = []
    allset_all_histo_deltas_up = histogramssets[:, :, 2] - histogramssets[:, :, 1]
    allset_all_histo_deltas_dn = histogramssets[:, :, 1] - histogramssets[:, :, 0]
    for nset, (histoset, alphaset) in enumerate(zip(histogramssets, alphasets)):
        set_result = []
        all_histo_deltas_up = allset_all_histo_deltas_up[nset]
        all_histo_deltas_dn = allset_all_histo_deltas_dn[nset]
        for nh, histo in enumerate(histoset):
            alpha_deltas = []
            for alpha in alphaset:
                alpha_result = []
                deltas_up = all_histo_deltas_up[nh]
                deltas_dn = all_histo_deltas_dn[nh]
                calc_deltas = np.where(alpha > 0, deltas_up * alpha, deltas_dn * alpha)
                alpha_deltas.append(calc_deltas)
            set_result.append([histo[1] + d for d in alpha_deltas])
        all_results.append(set_result)
    return all_results
And does the calculation still match?
[14]:
result, (h,a) = compare_fns(interpolation_linear, new_interpolation_linear_step2)
print(result)
True
[15]:
%%timeit
interpolation_linear(h,a)
179 ms ± 12.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[16]:
%%timeit
new_interpolation_linear_step2(h,a)
409 ms ± 20.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Great!
Step 3¶
In this step, we get to introduce einstein summation to generalize the calculations we perform across many dimensions in a more concise, straightforward way. See this blog post for some more details on einstein summation notation. In short, it allows us to write, for example, matrix multiplication \(C_{ij} = \sum_k A_{ik} B_{kj}\) as np.einsum('ik,kj->ij', A, B), a much more elegant way to express many kinds of common tensor operations such as dot products, transposes, outer products, and so on. This step is generally the hardest, as one needs to figure out the corresponding einsum that keeps the calculation preserved (and matching). To some extent it requires a lot of trial and error until you get a feel for how einstein summation notation works.
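As a quick standalone check that the 'i,j->ij' pattern used below is just an outer product (a sketch):

import numpy as np

alphaset = np.array([-1.0, 0.5, 1.0])
deltas = np.array([2.0, 4.0])
outer = np.einsum('i,j->ij', alphaset, deltas)  # shape (3, 2)
print(np.allclose(outer, np.outer(alphaset, deltas)))  # True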
As a concrete example of a conversion, we wish to go from something like
for nh, histo in enumerate(histoset):
    for alpha in alphaset:
        deltas_up = all_histo_deltas_up[nh]
        deltas_dn = all_histo_deltas_dn[nh]
        calc_deltas = np.where(alpha > 0, deltas_up * alpha, deltas_dn * alpha)
        ...
to get rid of the loop over alpha
for nh, histo in enumerate(histoset):
    alphas_times_deltas_up = np.einsum('i,j->ij', alphaset, all_histo_deltas_up[nh])
    alphas_times_deltas_dn = np.einsum('i,j->ij', alphaset, all_histo_deltas_dn[nh])
    masks = np.einsum('i,j->ij', alphaset > 0, np.ones_like(all_histo_deltas_dn[nh]))
    alpha_deltas = np.where(masks, alphas_times_deltas_up, alphas_times_deltas_dn)
    ...
In this particular case, we need an outer product that multiplies across the alphaset to the corresponding histoset for the up/down variations. Then we just need to select from either the up variation calculation or the down variation calculation based on the sign of alpha. Try to convince yourself that the einstein summation does what the for-loop does, but a little bit more concisely, and perhaps more clearly! How does the function look now?
[17]:
# remove the loop over alphas, starts using einsum to help generalize to more dimensions
def new_interpolation_linear_step3(histogramssets, alphasets):
    all_results = []
    allset_all_histo_deltas_up = histogramssets[:, :, 2] - histogramssets[:, :, 1]
    allset_all_histo_deltas_dn = histogramssets[:, :, 1] - histogramssets[:, :, 0]
    for nset, (histoset, alphaset) in enumerate(zip(histogramssets, alphasets)):
        set_result = []
        all_histo_deltas_up = allset_all_histo_deltas_up[nset]
        all_histo_deltas_dn = allset_all_histo_deltas_dn[nset]
        for nh, histo in enumerate(histoset):
            alphas_times_deltas_up = np.einsum('i,j->ij', alphaset, all_histo_deltas_up[nh])
            alphas_times_deltas_dn = np.einsum('i,j->ij', alphaset, all_histo_deltas_dn[nh])
            masks = np.einsum('i,j->ij', alphaset > 0, np.ones_like(all_histo_deltas_dn[nh]))
            alpha_deltas = np.where(masks, alphas_times_deltas_up, alphas_times_deltas_dn)
            set_result.append([histo[1] + d for d in alpha_deltas])
        all_results.append(set_result)
    return all_results
And does the calculation still match?
[18]:
result, (h,a) = compare_fns(interpolation_linear, new_interpolation_linear_step3)
print(result)
True
[19]:
%%timeit
interpolation_linear(h,a)
166 ms ± 11.6 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[20]:
%%timeit
new_interpolation_linear_step3(h,a)
921 ms ± 133 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Great! Note that we’ve been getting a little bit slower during these steps. It will all pay off in the end when we’re fully tensorized! A lot of the internal steps are overkill with the heavy einstein summation and broadcasting at the moment, especially for how many loops in we are.
Step 4¶
Now in this step, we will move the einstein summations to the outer loop, so that we’re calculating it once! This is the big step, but a little bit easier because all we’re doing is adding extra dimensions into the calculation. The underlying calculation won’t have changed. At this point, we’ll also rename from i and j to a and b for alpha and bin (as in the bin in the histogram). To continue the notation as well, here’s a summary of the dimensions involved:

- s will be for the set under consideration (e.g. the modifier)
- a will be for the alpha variation
- h will be for the histogram affected by the modifier
- b will be for the bin of the histogram

So we wish to move the einsum code from
for nset, (histoset, alphaset) in enumerate(zip(histogramssets, alphasets)):
    ...
    for nh, histo in enumerate(histoset):
        alphas_times_deltas_up = np.einsum('i,j->ij', alphaset, all_histo_deltas_up[nh])
        ...
to
all_alphas_times_deltas_up = np.einsum('...', alphasets, allset_all_histo_deltas_up)
for nset, (histoset, alphaset) in enumerate(zip(histogramssets, alphasets)):
    ...
    for nh, histo in enumerate(histoset):
        ...
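A small shape check of the 'sa,shb->shab' pattern used below (a sketch with random inputs):

import numpy as np

alphasets = np.random.uniform(-1, 1, size=(150, 1))  # (s, a)
deltas = np.random.uniform(size=(150, 300, 1))       # (s, h, b)
out = np.einsum('sa,shb->shab', alphasets, deltas)
print(out.shape)  # (150, 300, 1, 1), i.e. (s, h, a, b)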
So how does this new function look?
[21]:
# move the einsums to outer loops to get ready to get rid of all loops
def new_interpolation_linear_step4(histogramssets, alphasets):
    allset_all_histo_deltas_up = histogramssets[:, :, 2] - histogramssets[:, :, 1]
    allset_all_histo_deltas_dn = histogramssets[:, :, 1] - histogramssets[:, :, 0]
    allset_all_histo_nom = histogramssets[:, :, 1]
    allsets_all_histos_alphas_times_deltas_up = np.einsum('sa,shb->shab', alphasets, allset_all_histo_deltas_up)
    allsets_all_histos_alphas_times_deltas_dn = np.einsum('sa,shb->shab', alphasets, allset_all_histo_deltas_dn)
    allsets_all_histos_masks = np.einsum('sa,s...u->s...au', alphasets > 0, np.ones_like(allset_all_histo_deltas_dn))
    allsets_all_histos_deltas = np.where(allsets_all_histos_masks, allsets_all_histos_alphas_times_deltas_up, allsets_all_histos_alphas_times_deltas_dn)
    all_results = []
    for nset, histoset in enumerate(histogramssets):
        all_histos_deltas = allsets_all_histos_deltas[nset]
        set_result = []
        for nh, histo in enumerate(histoset):
            set_result.append([d + histoset[nh, 1] for d in all_histos_deltas[nh]])
        all_results.append(set_result)
    return all_results
And does the calculation still match?
[22]:
result, (h,a) = compare_fns(interpolation_linear, new_interpolation_linear_step4)
print(result)
True
[23]:
%%timeit
interpolation_linear(h,a)
160 ms ± 5 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[24]:
%%timeit
new_interpolation_linear_step4(h,a)
119 ms ± 3.19 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Great! And look at that huge speed up in time already, just from moving the multiple, heavy einstein summation calculations up through the loops. We still have some more optimizing to do as we still have explicit loops in our code. Let’s keep at it, we’re almost there!
Step 5¶
The hard part is mostly over. We have to now think about the nominal variations. Recall that we were trying to add the nominals to the deltas in order to compute the new value. In practice, we’ll return the delta variation only, but we’ll show you how to get rid of this last loop. In this case, we want to figure out how to change code like
all_results = []
for nset, histoset in enumerate(histogramssets):
    all_histos_deltas = allsets_all_histos_deltas[nset]
    set_result = []
    for nh, histo in enumerate(histoset):
        set_result.append([d + histoset[nh, 1] for d in all_histos_deltas[nh]])
    all_results.append(set_result)
to get rid of that most-nested loop
all_results = []
for nset, histoset in enumerate(histogramssets):
    # look ma, no more loops inside!
So how does this look?
[25]:
# slowly getting rid of our loops to build the right output tensor -- gotta think about nominals
def new_interpolation_linear_step5(histogramssets, alphasets):
    allset_all_histo_deltas_up = histogramssets[:, :, 2] - histogramssets[:, :, 1]
    allset_all_histo_deltas_dn = histogramssets[:, :, 1] - histogramssets[:, :, 0]
    allset_all_histo_nom = histogramssets[:, :, 1]
    allsets_all_histos_alphas_times_deltas_up = np.einsum('sa,shb->shab', alphasets, allset_all_histo_deltas_up)
    allsets_all_histos_alphas_times_deltas_dn = np.einsum('sa,shb->shab', alphasets, allset_all_histo_deltas_dn)
    allsets_all_histos_masks = np.einsum('sa,s...u->s...au', alphasets > 0, np.ones_like(allset_all_histo_deltas_dn))
    allsets_all_histos_deltas = np.where(allsets_all_histos_masks, allsets_all_histos_alphas_times_deltas_up, allsets_all_histos_alphas_times_deltas_dn)
    all_results = []
    for nset, (_, alphaset) in enumerate(zip(histogramssets, alphasets)):
        all_histos_deltas = allsets_all_histos_deltas[nset]
        noms = histogramssets[nset, :, 1]
        all_histos_noms_repeated = np.einsum('a,hn->han', np.ones_like(alphaset), noms)
        set_result = all_histos_deltas + all_histos_noms_repeated
        all_results.append(set_result)
    return all_results
And does the calculation still match?
[26]:
result, (h,a) = compare_fns(interpolation_linear, new_interpolation_linear_step5)
print(result)
True
[27]:
%%timeit
interpolation_linear(h,a)
160 ms ± 8.28 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[28]:
%%timeit
new_interpolation_linear_step5(h,a)
1.57 ms ± 75.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Fantastic! And look at the speed up. We’re already faster than the for-loop and we’re not even done yet.
Step 6¶
The final frontier. Also probably the best Star Wars episode. In any case, we have one more for-loop that needs to die in a slab of carbonite. This should be much easier now that you’re more comfortable with tensor broadcasting and einstein summations.
What does the function look like now?
[29]:
def new_interpolation_linear_step6(histogramssets, alphasets):
    allset_allhisto_deltas_up = histogramssets[:, :, 2] - histogramssets[:, :, 1]
    allset_allhisto_deltas_dn = histogramssets[:, :, 1] - histogramssets[:, :, 0]
    allset_allhisto_nom = histogramssets[:, :, 1]
    # x is a dummy index
    allsets_allhistos_alphas_times_deltas_up = np.einsum('sa,shb->shab', alphasets, allset_allhisto_deltas_up)
    allsets_allhistos_alphas_times_deltas_dn = np.einsum('sa,shb->shab', alphasets, allset_allhisto_deltas_dn)
    allsets_allhistos_masks = np.einsum('sa,sxu->sxau', np.where(alphasets > 0, np.ones(alphasets.shape), np.zeros(alphasets.shape)), np.ones(allset_allhisto_deltas_dn.shape))
    allsets_allhistos_deltas = np.where(allsets_allhistos_masks, allsets_allhistos_alphas_times_deltas_up, allsets_allhistos_alphas_times_deltas_dn)
    allsets_allhistos_noms_repeated = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_allhisto_nom)
    set_results = allsets_allhistos_deltas + allsets_allhistos_noms_repeated
    return set_results
And does the calculation still match?
[30]:
result, (h,a) = compare_fns(interpolation_linear, new_interpolation_linear_step6)
print(result)
True
[31]:
%%timeit
interpolation_linear(h,a)
156 ms ± 6.29 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[32]:
%%timeit
new_interpolation_linear_step6(h,a)
468 µs ± 37.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
And we’re done tensorizing it. There are some more improvements that could be made to make this interpolation calculation even more robust – but for now we’re done.
Tensorizing the Non-Linear Interpolator¶
This is very, very similar to what we’ve done for the case of the linear interpolator. As such, we will provide the resulting functions for each step, and you can see how things perform all the way at the bottom. Enjoy and learn at your own pace!
[33]:
def interpolation_nonlinear(histogramssets,alphasets):
all_results = []
for histoset, alphaset in zip(histogramssets,alphasets):
all_results.append([])
set_result = all_results[-1]
for histo in histoset:
set_result.append([])
histo_result = set_result[-1]
for alpha in alphaset:
alpha_result = []
for down,nom,up in zip(histo[0],histo[1],histo[2]):
delta_up = up/nom
delta_down = down/nom
if alpha > 0:
delta = delta_up**alpha
else:
delta = delta_down**(-alpha)
v = nom*delta
alpha_result.append(v)
histo_result.append(alpha_result)
return all_results
def new_interpolation_nonlinear_step0(histogramssets,alphasets):
all_results = []
for histoset, alphaset in zip(histogramssets,alphasets):
all_results.append([])
set_result = all_results[-1]
for histo in histoset:
set_result.append([])
histo_result = set_result[-1]
for alpha in alphaset:
alpha_result = []
for down,nom,up in zip(histo[0],histo[1],histo[2]):
delta_up = up/nom
delta_down = down/nom
delta = np.where(alpha > 0, np.power(delta_up, alpha), np.power(delta_down, np.abs(alpha)))
v = nom*delta
alpha_result.append(v)
histo_result.append(alpha_result)
return all_results
def new_interpolation_nonlinear_step1(histogramssets,alphasets):
all_results = []
for histoset, alphaset in zip(histogramssets,alphasets):
all_results.append([])
set_result = all_results[-1]
for histo in histoset:
set_result.append([])
histo_result = set_result[-1]
for alpha in alphaset:
alpha_result = []
deltas_up = np.divide(histo[2], histo[1])
deltas_down = np.divide(histo[0], histo[1])
bases = np.where(alpha > 0, deltas_up, deltas_down)
exponents = np.abs(alpha)
calc_deltas = np.power(bases, exponents)
v = histo[1] * calc_deltas
alpha_result.append(v)
histo_result.append(alpha_result)
return all_results
def new_interpolation_nonlinear_step2(histogramssets,alphasets):
all_results = []
allset_all_histo_deltas_up = np.divide(histogramssets[:,:,2], histogramssets[:,:,1])
allset_all_histo_deltas_dn = np.divide(histogramssets[:,:,0], histogramssets[:,:,1])
for nset,(histoset, alphaset) in enumerate(zip(histogramssets,alphasets)):
set_result = []
all_histo_deltas_up = allset_all_histo_deltas_up[nset]
all_histo_deltas_dn = allset_all_histo_deltas_dn[nset]
for nh,histo in enumerate(histoset):
alpha_deltas = []
for alpha in alphaset:
alpha_result = []
deltas_up = all_histo_deltas_up[nh]
deltas_down = all_histo_deltas_dn[nh]
bases = np.where(alpha > 0, deltas_up, deltas_down)
exponents = np.abs(alpha)
calc_deltas = np.power(bases, exponents)
alpha_deltas.append(calc_deltas)
set_result.append([histo[1]*d for d in alpha_deltas])
all_results.append(set_result)
return all_results
def new_interpolation_nonlinear_step3(histogramssets,alphasets):
all_results = []
allset_all_histo_deltas_up = np.divide(histogramssets[:,:,2], histogramssets[:,:,1])
allset_all_histo_deltas_dn = np.divide(histogramssets[:,:,0], histogramssets[:,:,1])
for nset,(histoset, alphaset) in enumerate(zip(histogramssets,alphasets)):
set_result = []
all_histo_deltas_up = allset_all_histo_deltas_up[nset]
all_histo_deltas_dn = allset_all_histo_deltas_dn[nset]
for nh,histo in enumerate(histoset):
# bases and exponents need to have an outer product, to esentially tile or repeat over rows/cols
bases_up = np.einsum('a,b->ab', np.ones(alphaset.shape), all_histo_deltas_up[nh])
bases_dn = np.einsum('a,b->ab', np.ones(alphaset.shape), all_histo_deltas_dn[nh])
exponents = np.einsum('a,b->ab', np.abs(alphaset), np.ones(all_histo_deltas_up[nh].shape))
masks = np.einsum('a,b->ab',alphaset > 0,np.ones(all_histo_deltas_dn[nh].shape))
bases = np.where(masks, bases_up, bases_dn)
alpha_deltas = np.power(bases, exponents)
set_result.append([histo[1]*d for d in alpha_deltas])
all_results.append(set_result)
return all_results
def new_interpolation_nonlinear_step4(histogramssets,alphasets):
    all_results = []
    allset_all_histo_nom = histogramssets[:,:,1]
    allset_all_histo_deltas_up = np.divide(histogramssets[:,:,2], allset_all_histo_nom)
    allset_all_histo_deltas_dn = np.divide(histogramssets[:,:,0], allset_all_histo_nom)
    bases_up = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_all_histo_deltas_up)
    bases_dn = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_all_histo_deltas_dn)
    exponents = np.einsum('sa,shb->shab', np.abs(alphasets), np.ones(allset_all_histo_deltas_up.shape))
    masks = np.einsum('sa,shb->shab',alphasets > 0,np.ones(allset_all_histo_deltas_up.shape))
    bases = np.where(masks, bases_up, bases_dn)
    allsets_all_histos_deltas = np.power(bases, exponents)
    all_results = []
    for nset,histoset in enumerate(histogramssets):
        all_histos_deltas = allsets_all_histos_deltas[nset]
        set_result = []
        for nh,histo in enumerate(histoset):
            set_result.append([histoset[nh,1]*d for d in all_histos_deltas[nh]])
        all_results.append(set_result)
    return all_results
def new_interpolation_nonlinear_step5(histogramssets,alphasets):
    all_results = []
    allset_all_histo_nom = histogramssets[:,:,1]
    allset_all_histo_deltas_up = np.divide(histogramssets[:,:,2], allset_all_histo_nom)
    allset_all_histo_deltas_dn = np.divide(histogramssets[:,:,0], allset_all_histo_nom)
    bases_up = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_all_histo_deltas_up)
    bases_dn = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_all_histo_deltas_dn)
    exponents = np.einsum('sa,shb->shab', np.abs(alphasets), np.ones(allset_all_histo_deltas_up.shape))
    masks = np.einsum('sa,shb->shab',alphasets > 0,np.ones(allset_all_histo_deltas_up.shape))
    bases = np.where(masks, bases_up, bases_dn)
    allsets_all_histos_deltas = np.power(bases, exponents)
    all_results = []
    for nset,(_,alphaset) in enumerate(zip(histogramssets,alphasets)):
        all_histos_deltas = allsets_all_histos_deltas[nset]
        noms = allset_all_histo_nom[nset]
        all_histos_noms_repeated = np.einsum('a,hn->han',np.ones_like(alphaset),noms)
        set_result = all_histos_deltas * all_histos_noms_repeated
        all_results.append(set_result)
    return all_results
def new_interpolation_nonlinear_step6(histogramssets,alphasets):
    all_results = []
    allset_all_histo_nom = histogramssets[:,:,1]
    allset_all_histo_deltas_up = np.divide(histogramssets[:,:,2], allset_all_histo_nom)
    allset_all_histo_deltas_dn = np.divide(histogramssets[:,:,0], allset_all_histo_nom)
    bases_up = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_all_histo_deltas_up)
    bases_dn = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_all_histo_deltas_dn)
    exponents = np.einsum('sa,shb->shab', np.abs(alphasets), np.ones(allset_all_histo_deltas_up.shape))
    masks = np.einsum('sa,shb->shab',alphasets > 0,np.ones(allset_all_histo_deltas_up.shape))
    bases = np.where(masks, bases_up, bases_dn)
    allsets_all_histos_deltas = np.power(bases, exponents)
    allsets_allhistos_noms_repeated = np.einsum('sa,shb->shab', np.ones(alphasets.shape), allset_all_histo_nom)
    set_results = allsets_all_histos_deltas * allsets_allhistos_noms_repeated
    return set_results
[34]:
result, (h,a) = compare_fns(interpolation_nonlinear, new_interpolation_nonlinear_step0)
print(result)
True
[35]:
%%timeit
interpolation_nonlinear(h,a)
149 ms ± 9.45 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[36]:
%%timeit
new_interpolation_nonlinear_step0(h,a)
527 ms ± 29.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
[37]:
result, (h,a) = compare_fns(interpolation_nonlinear, new_interpolation_nonlinear_step1)
print(result)
True
[38]:
%%timeit
interpolation_nonlinear(h,a)
150 ms ± 5.21 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[39]:
%%timeit
new_interpolation_nonlinear_step1(h,a)
456 ms ± 17.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
[40]:
result, (h,a) = compare_fns(interpolation_nonlinear, new_interpolation_nonlinear_step2)
print(result)
True
[41]:
%%timeit
interpolation_nonlinear(h,a)
154 ms ± 4.49 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[42]:
%%timeit
new_interpolation_nonlinear_step2(h,a)
412 ms ± 31 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
[43]:
result, (h,a) = compare_fns(interpolation_nonlinear, new_interpolation_nonlinear_step3)
print(result)
True
[44]:
%%timeit
interpolation_nonlinear(h,a)
145 ms ± 5.15 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[45]:
%%timeit
new_interpolation_nonlinear_step3(h,a)
1.28 s ± 74.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
[46]:
result, (h,a) = compare_fns(interpolation_nonlinear, new_interpolation_nonlinear_step4)
print(result)
True
[47]:
%%timeit
interpolation_nonlinear(h,a)
147 ms ± 8.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[48]:
%%timeit
new_interpolation_nonlinear_step4(h,a)
120 ms ± 3.06 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[49]:
result, (h,a) = compare_fns(interpolation_nonlinear, new_interpolation_nonlinear_step5)
print(result)
True
[50]:
%%timeit
interpolation_nonlinear(h,a)
151 ms ± 5.29 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[51]:
%%timeit
new_interpolation_nonlinear_step5(h,a)
2.65 ms ± 57.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
[52]:
result, (h,a) = compare_fns(interpolation_nonlinear, new_interpolation_nonlinear_step6)
print(result)
True
[53]:
%%timeit
interpolation_nonlinear(h,a)
156 ms ± 3.35 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
[54]:
%%timeit
new_interpolation_nonlinear_step6(h,a)
1.49 ms ± 16 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Examples¶
Notebooks:
ShapeFactor¶
[1]:
%pylab inline
Populating the interactive namespace from numpy and matplotlib
[2]:
import logging
import json
import pyhf
from pyhf import Model
logging.basicConfig(level = logging.INFO)
[3]:
def prep_data(sourcedata):
    spec = {
        'channels': [
            {
                'name': 'signal',
                'samples': [
                    {
                        'name': 'signal',
                        'data': sourcedata['signal']['bindata']['sig'],
                        'modifiers': [
                            {
                                'name': 'mu',
                                'type': 'normfactor',
                                'data': None
                            }
                        ]
                    },
                    {
                        'name': 'bkg1',
                        'data': sourcedata['signal']['bindata']['bkg1'],
                        'modifiers': [
                            {
                                'name': 'coupled_shapefactor',
                                'type': 'shapefactor',
                                'data': None
                            }
                        ]
                    }
                ]
            },
            {
                'name': 'control',
                'samples': [
                    {
                        'name': 'background',
                        'data': sourcedata['control']['bindata']['bkg1'],
                        'modifiers': [
                            {
                                'name': 'coupled_shapefactor',
                                'type': 'shapefactor',
                                'data': None
                            }
                        ]
                    }
                ]
            }
        ]
    }
    pdf = Model(spec)
    data = []
    for c in pdf.spec['channels']:
        data += sourcedata[c['name']]['bindata']['data']
    data = data + pdf.config.auxdata
    return data, pdf
[4]:
source = {
    "channels": {
        "signal": {
            "binning": [2,-0.5,1.5],
            "bindata": {
                "data": [220.0, 230.0],
                "bkg1": [100.0, 70.0],
                "sig": [20.0, 20.0]
            }
        },
        "control": {
            "binning": [2,-0.5,1.5],
            "bindata": {
                "data": [200.0, 300.0],
                "bkg1": [100.0, 100.0]
            }
        }
    }
}

data, pdf = prep_data(source['channels'])
print('data: {}'.format(data))

init_pars = pdf.config.suggested_init()
print('expected data: {}'.format(pdf.expected_data(init_pars)))

par_bounds = pdf.config.suggested_bounds()
INFO:pyhf.pdf:Validating spec against schema: /home/jovyan/pyhf/src/pyhf/data/spec.json
INFO:pyhf.pdf:adding modifier mu (1 new nuisance parameters)
INFO:pyhf.pdf:adding modifier coupled_shapefactor (2 new nuisance parameters)
data: [220.0, 230.0, 200.0, 300.0]
expected data: [120. 90. 100. 100.]
[5]:
print('initialization parameters: {}'.format(pdf.config.suggested_init()))
unconpars = pyhf.infer.mle.fit(data, pdf)
print('parameters post unconstrained fit: {}'.format(unconpars))
initialization parameters: [1.0, 1.0, 1.0]
parameters post unconstrained fit: [0.99981412 2.00002042 3.00006469]
/home/jovyan/pyhf/src/pyhf/tensor/numpy_backend.py:173: RuntimeWarning: divide by zero encountered in log
return n * np.log(lam) - lam - gammaln(n + 1.0)
[6]:
def plot_results(testmus, cls_obs, cls_exp, poi_tests, test_size = 0.05):
    plt.plot(poi_tests,cls_obs, c = 'k')
    for i,c in zip(range(5),['grey','grey','grey','grey','grey']):
        plt.plot(poi_tests, cls_exp[i], c = c)
    plt.plot(testmus,[test_size]*len(testmus), c = 'r')
    plt.ylim(0,1)

def invert_interval(test_mus, cls_obs, cls_exp, test_size=0.05):
    crossing_test_stats = {'exp': [], 'obs': None}
    for cls_exp_sigma in cls_exp:
        crossing_test_stats['exp'].append(
            np.interp(
                test_size, list(reversed(cls_exp_sigma)), list(reversed(test_mus))
            )
        )
    crossing_test_stats['obs'] = np.interp(
        test_size, list(reversed(cls_obs)), list(reversed(test_mus))
    )
    return crossing_test_stats

poi_tests = np.linspace(0, 5, 61)
tests = [pyhf.infer.hypotest(poi_test, data, pdf, init_pars, par_bounds, return_expected_set=True)
         for poi_test in poi_tests]
cls_obs = np.array([test[0] for test in tests]).flatten()
cls_exp = [np.array([test[1][i] for test in tests]).flatten() for i in range(5)]
print('\n')
plot_results(poi_tests, cls_obs, cls_exp, poi_tests)
invert_interval(poi_tests, cls_obs, cls_exp)
[6]:
{'exp': [0.741381412468345,
0.9949353526744877,
1.3845144105754894,
1.9289946435921614,
2.594077794516857],
'obs': 2.1945970333027187}
XML Import/Export¶
[1]:
# NB: pip install pyhf[xmlio]
import pyhf
[2]:
!ls -lavh ../../../validation/xmlimport_input
total 1752
drwxr-xr-x 7 kratsg staff 238B Oct 16 22:20 .
drwxr-xr-x 21 kratsg staff 714B Apr 4 14:26 ..
drwxr-xr-x 6 kratsg staff 204B Feb 27 17:13 config
drwxr-xr-x 7 kratsg staff 238B Feb 27 23:41 data
-rw-r--r-- 1 kratsg staff 850K Oct 16 22:20 log
drwxr-xr-x 17 kratsg staff 578B Nov 15 12:24 results
-rw-r--r-- 1 kratsg staff 21K Oct 16 22:20 scan.pdf
Importing¶
In order to convert HistFactory XML+ROOT to the pyhf JSON spec for likelihoods, you need to point the command-line interface pyhf xml2json at the top-level XML file. Additionally, as the HistFactory XML specification often uses relative paths, you might need to specify the base directory --basedir relative to which all other files are located, as specified in the top-level XML. The command will be of the format
pyhf xml2json {top-level XML} --basedir {base directory}
This will print the JSON representation of the XML+ROOT specified. If you wish to store this as a JSON file, you simply need to redirect it
pyhf xml2json {top-level XML} --basedir {base directory} > spec.json
[3]:
!pyhf xml2json --hide-progress ../../../validation/xmlimport_input/config/example.xml --basedir ../../../validation/xmlimport_input | tee xml_importexport.json
{
"channels": [
{
"name": "channel1",
"samples": [
{
"data": [
20.0,
10.0
],
"modifiers": [
{
"data": {
"hi": 1.05,
"lo": 0.95
},
"name": "syst1",
"type": "normsys"
},
{
"data": null,
"name": "SigXsecOverSM",
"type": "normfactor"
}
],
"name": "signal"
},
{
"data": [
100.0,
0.0
],
"modifiers": [
{
"data": null,
"name": "lumi",
"type": "lumi"
},
{
"data": [
5.000000074505806,
0.0
],
"name": "staterror_channel1",
"type": "staterror"
},
{
"data": {
"hi": 1.05,
"lo": 0.95
},
"name": "syst2",
"type": "normsys"
}
],
"name": "background1"
},
{
"data": [
0.0,
100.0
],
"modifiers": [
{
"data": null,
"name": "lumi",
"type": "lumi"
},
{
"data": [
0.0,
10.0
],
"name": "staterror_channel1",
"type": "staterror"
},
{
"data": {
"hi": 1.05,
"lo": 0.95
},
"name": "syst3",
"type": "normsys"
}
],
"name": "background2"
}
]
}
],
"data": {
"channel1": [
122.0,
112.0
]
},
"toplvl": {
"measurements": [
{
"config": {
"parameters": [
{
"auxdata": [
1.0
],
"bounds": [
[
0.5,
1.5
]
],
"fixed": true,
"inits": [
1.0
],
"name": "lumi",
"sigmas": [
0.1
]
},
{
"fixed": true,
"name": "alpha_syst1"
}
],
"poi": "SigXsecOverSM"
},
"name": "GaussExample"
},
{
"config": {
"parameters": [
{
"auxdata": [
1.0
],
"bounds": [
[
0.5,
1.5
]
],
"fixed": true,
"inits": [
1.0
],
"name": "lumi",
"sigmas": [
0.1
]
},
{
"fixed": true,
"name": "alpha_syst1"
}
],
"poi": "SigXsecOverSM"
},
"name": "GammaExample"
},
{
"config": {
"parameters": [
{
"auxdata": [
1.0
],
"bounds": [
[
0.5,
1.5
]
],
"fixed": true,
"inits": [
1.0
],
"name": "lumi",
"sigmas": [
0.1
]
},
{
"fixed": true,
"name": "alpha_syst1"
}
],
"poi": "SigXsecOverSM"
},
"name": "LogNormExample"
},
{
"config": {
"parameters": [
{
"auxdata": [
1.0
],
"bounds": [
[
0.5,
1.5
]
],
"fixed": true,
"inits": [
1.0
],
"name": "lumi",
"sigmas": [
0.1
]
},
{
"fixed": true,
"name": "alpha_syst1"
}
],
"poi": "SigXsecOverSM"
},
"name": "ConstExample"
}
],
"resultprefix": "./results/example"
}
}
Exporting¶
In order to convert the pyhf JSON to the HistFactory XML+ROOT spec for likelihoods, you need to point the command-line interface pyhf json2xml at the JSON file you want to convert. As everything is specified in a single file, there is no need to deal with base directories or looking up additional files. This will produce output XML+ROOT in the --output-dir=./ directory (your current working directory), storing XML configs under --specroot=config and the data file under --dataroot=data. The XML configs are prefixed with --resultprefix=FitConfig by default, so that the top-level XML file will be located at {output dir}/{prefix}.xml. The command will be of the format
pyhf json2xml {JSON spec}
Note that the output directory must already exist.
[4]:
!mkdir -p output
!pyhf json2xml xml_importexport.json --output-dir output
!ls -lavh output/*
/Users/jovyan/pyhf/src/pyhf/writexml.py:120: RuntimeWarning: invalid value encountered in true_divide
attrs['HistoName'], np.divide(modifierspec['data'], sampledata).tolist()
-rw-r--r-- 1 kratsg staff 822B Apr 9 09:36 output/FitConfig.xml
output/config:
total 8
drwxr-xr-x 3 kratsg staff 102B Apr 9 09:36 .
drwxr-xr-x 5 kratsg staff 170B Apr 9 09:36 ..
-rw-r--r-- 1 kratsg staff 1.0K Apr 9 09:36 FitConfig_channel1.xml
output/data:
total 96
drwxr-xr-x 3 kratsg staff 102B Apr 9 09:36 .
drwxr-xr-x 5 kratsg staff 170B Apr 9 09:36 ..
-rw-r--r-- 1 kratsg staff 46K Apr 9 09:36 data.root
[5]:
!rm xml_importexport.json
!rm -rf output/
[1]:
import pyhf
import pandas
import numpy as np
import altair as alt
Visualization with Altair¶
Altair is a Python API for generating Vega visualization specifications. We demonstrate how to use this to build an interactive chart of pyhf results.
Preparing the data¶
Altair reads the data as a pandas dataframe, so we create one.
[2]:
model = pyhf.simplemodels.hepdata_like([7], [20], [5])
data = [25] + model.config.auxdata
[3]:
muscan = np.linspace(0, 5, 31)
results = [
pyhf.infer.hypotest(mu, data, model, return_expected_set=True) for mu in muscan
]
[4]:
data = np.concatenate(
    [
        muscan.reshape(-1, 1),
        np.asarray([r[0] for r in results]).reshape(-1, 1),
        np.asarray([r[1] for r in results]).reshape(-1, 5),
    ],
    axis=1,
)
df = pandas.DataFrame(data, columns=["mu", "obs"] + [f"exp_{i}" for i in range(5)])
df.head()
[4]:
 | mu | obs | exp_0 | exp_1 | exp_2 | exp_3 | exp_4 |
---|---|---|---|---|---|---|---|
0 | 0.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 |
1 | 0.166667 | 0.885208 | 0.670809 | 0.771258 | 0.870322 | 0.949235 | 0.989385 |
2 | 0.333333 | 0.795986 | 0.438838 | 0.581516 | 0.743696 | 0.890881 | 0.975022 |
3 | 0.500000 | 0.726450 | 0.279981 | 0.428500 | 0.623443 | 0.825621 | 0.956105 |
4 | 0.666667 | 0.672216 | 0.174235 | 0.308524 | 0.512383 | 0.754629 | 0.931866 |
Defining the Chart¶
We need two filled areas for the 1 and 2 sigma bands and two lines for the expected and observed CLs values. For interactivity we add a hovering label of the observed result.
[5]:
band1 = (
    alt.Chart(df)
    .mark_area(opacity=0.5, color="green")
    .encode(x="mu", y="exp_1", y2="exp_3")
)
band2 = (
    alt.Chart(df)
    .mark_area(opacity=0.5, color="yellow")
    .encode(x="mu", y="exp_0", y2="exp_4")
)
line1 = alt.Chart(df).mark_line(color="black").encode(x="mu", y="obs")
line2 = (
    alt.Chart(df).mark_line(color="black", strokeDash=[5, 5]).encode(x="mu", y="exp_2")
)
nearest = alt.selection_single(
    nearest=True, on="mouseover", fields=["mu"], empty="none"
)
point = (
    alt.Chart(df)
    .mark_point(color="black")
    .encode(x="mu", y="obs", opacity=alt.condition(nearest, alt.value(1), alt.value(0)))
    .add_selection(nearest)
)
text = line1.mark_text(align="left", dx=5, dy=-5).encode(
    text=alt.condition(nearest, "obs", alt.value(" "))
)
band2 + band1 + line1 + line2 + point + text
[5]:
Hello World, pyhf style¶
Two bin counting experiment with a background uncertainty
[1]:
import pyhf
Returning the observed and expected \(\mathrm{CL}_{s}\)
[2]:
pdf = pyhf.simplemodels.hepdata_like(signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0])
CLs_obs, CLs_exp = pyhf.infer.hypotest(1.0, [51, 48] + pdf.config.auxdata, pdf, return_expected=True)
print('Observed: {}, Expected: {}'.format(CLs_obs, CLs_exp))
Observed: 0.05290116224852556, Expected: 0.06445521290832805
Returning the observed \(\mathrm{CL}_{s}\), \(\mathrm{CL}_{s+b}\), and \(\mathrm{CL}_{b}\)
[3]:
CLs_obs, p_values = pyhf.infer.hypotest(1.0, [51, 48] + pdf.config.auxdata, pdf, return_tail_probs=True)
print('Observed CL_s: {}, CL_sb: {}, CL_b: {}'.format(CLs_obs, p_values[0], p_values[1]))
Observed CL_s: 0.05290116224852556, CL_sb: 0.023599998519978738, CL_b: 0.4461149342826869
A reminder that \(\mathrm{CL}_{s}\) is the ratio \(\mathrm{CL}_{s+b} / \mathrm{CL}_{b}\):
[4]:
assert CLs_obs == p_values[0]/p_values[1]
Returning the expected \(\mathrm{CL}_{s}\) band values
[5]:
import numpy as np
[6]:
CLs_obs, CLs_exp_band = pyhf.infer.hypotest(1.0, [51, 48] + pdf.config.auxdata, pdf, return_expected_set=True)
print('Observed CL_s: {}\n'.format(CLs_obs))
for p_value, n_sigma in enumerate(np.arange(-2,3)):
    print('Expected CL_s{}: {}'.format(' ' if n_sigma==0 else '({} σ)'.format(n_sigma),CLs_exp_band[p_value]))
Observed CL_s: 0.05290116224852556
Expected CL_s(-2 σ): 0.0026064088679947964
Expected CL_s(-1 σ): 0.013820657528619273
Expected CL_s : 0.06445521290832805
Expected CL_s(1 σ): 0.23526103626937836
Expected CL_s(2 σ): 0.5730418174887743
Multi-bin Poisson¶
[1]:
%pylab inline
Populating the interactive namespace from numpy and matplotlib
[2]:
import logging
import json
import pyhf
from pyhf import Model, optimizer
from pyhf.simplemodels import hepdata_like
from scipy.interpolate import griddata
import scrapbook as sb
[3]:
def plot_results(testmus, cls_obs, cls_exp, poi_tests, test_size=0.05):
    plt.plot(poi_tests, cls_obs, c="k")
    for idx, color in zip(range(5), 5 * ["grey"]):
        plt.plot(poi_tests, cls_exp[idx], c=color)
    plt.plot(testmus, [test_size] * len(testmus), c="r")
    plt.ylim(0, 1)

def invert_interval(test_mus, cls_obs, cls_exp, test_size=0.05):
    crossing_test_stats = {'exp': [], 'obs': None}
    for cls_exp_sigma in cls_exp:
        crossing_test_stats['exp'].append(
            np.interp(
                test_size, list(reversed(cls_exp_sigma)), list(reversed(test_mus))
            )
        )
    crossing_test_stats['obs'] = np.interp(
        test_size, list(reversed(cls_obs)), list(reversed(test_mus))
    )
    return crossing_test_stats

def plot_histo(ax, binning, data):
    bin_width = (binning[2]-binning[1])/binning[0]
    bin_leftedges = np.linspace(binning[1],binning[2],binning[0]+1)[:-1]
    bin_centers = [le + bin_width/2. for le in bin_leftedges]
    ax.bar(bin_centers,data,1, alpha=0.5)

def plot_data(ax, binning, data):
    errors = [math.sqrt(d) for d in data]
    bin_width = (binning[2]-binning[1])/binning[0]
    bin_leftedges = np.linspace(binning[1],binning[2],binning[0]+1)[:-1]
    bin_centers = [le + bin_width/2. for le in bin_leftedges]
    ax.bar(bin_centers,data,0, yerr=errors, linewidth=0, error_kw = dict(ecolor='k', elinewidth = 1))
    ax.scatter(bin_centers, data, c = 'k')
[4]:
validation_datadir = '../../validation/data'
[5]:
source = json.load(open(validation_datadir + '/1bin_example1.json'))
pdf = hepdata_like(source['bindata']['sig'], source['bindata']['bkg'], source['bindata']['bkgerr'])
data = source['bindata']['data'] + pdf.config.auxdata
init_pars = pdf.config.suggested_init()
par_bounds = pdf.config.suggested_bounds()
poi_tests = np.linspace(0, 5, 41)
tests = [pyhf.infer.hypotest(poi_test, data, pdf, init_pars, par_bounds, return_expected_set=True)
         for poi_test in poi_tests]
cls_obs = np.array([test[0] for test in tests]).flatten()
cls_exp = [np.array([test[1][i] for test in tests]).flatten() for i in range(5)]
plot_results(poi_tests, cls_obs, cls_exp, poi_tests)
invert_interval(poi_tests, cls_obs, cls_exp)
[5]:
{'exp': [1.081060678053728,
1.4517179965651272,
2.0200754881420275,
2.834863384648174,
3.848756749431549],
'obs': 2.3800254370628036}
[6]:
source = {
    "binning": [2,-0.5,1.5],
    "bindata": {
        "data": [120.0, 145.0],
        "bkg": [100.0, 150.0],
        "bkgerr": [15.0, 20.0],
        "sig": [30.0, 45.0]
    }
}
my_observed_counts = source['bindata']['data']
pdf = hepdata_like(source['bindata']['sig'], source['bindata']['bkg'], source['bindata']['bkgerr'])
data = my_observed_counts + pdf.config.auxdata
binning = source['binning']
nompars = pdf.config.suggested_init()
bonlypars = [x for x in nompars]
bonlypars[pdf.config.poi_index] = 0.0
nom_bonly = pdf.expected_data(bonlypars, include_auxdata = False)
nom_sb = pdf.expected_data(nompars, include_auxdata = False)
init_pars = pdf.config.suggested_init()
par_bounds = pdf.config.suggested_bounds()
print(init_pars)
bestfit_pars = pyhf.infer.mle.fit(data, pdf, init_pars, par_bounds)
bestfit_cts = pdf.expected_data(bestfit_pars, include_auxdata = False)
[1.0, 1.0, 1.0]
[7]:
f, axarr = plt.subplots(1,3,sharey=True)
f.set_size_inches(12,4)
plot_histo(axarr[0], binning, nom_bonly)
plot_data(axarr[0], binning, my_observed_counts)
axarr[0].set_xlim(binning[1:])
plot_histo(axarr[1], binning, nom_sb)
plot_data(axarr[1], binning, my_observed_counts)
axarr[1].set_xlim(binning[1:])
plot_histo(axarr[2], binning, bestfit_cts)
plot_data(axarr[2], binning, my_observed_counts)
axarr[2].set_xlim(binning[1:])
plt.ylim(0,300);
[8]:
##
## DUMMY 2D thing
##

def signal(m1, m2):
    massscale = 150.
    minmass = 100.
    countscale = 2000
    effective_mass = np.sqrt(m1**2 + m2**2)
    return [countscale*np.exp(-(effective_mass-minmass)/massscale), 0]

def CLs(m1,m2):
    signal_counts = signal(m1, m2)
    pdf = hepdata_like(signal_counts, source['bindata']['bkg'], source['bindata']['bkgerr'])
    try:
        cls_obs, cls_exp_set = pyhf.infer.hypotest(1.0, data, pdf, init_pars, par_bounds, return_expected_set=True)
        return cls_obs, cls_exp_set, True
    except AssertionError:
        print('fit failed for mass points ({}, {})'.format(m1, m2))
        return None, None, False
[9]:
nx, ny = 15, 15
grid = grid_x, grid_y = np.mgrid[100:1000:complex(0, nx), 100:1000:complex(0, ny)]
X = grid.T.reshape(nx * ny, 2)
results = [CLs(m1, m2) for m1, m2 in X]
[10]:
X = np.array([x for x,(_,_,success) in zip(X,results) if success])
yobs = np.array([obs for obs, exp, success in results if success]).flatten()
yexp = [np.array([exp[i] for obs, exp, success in results if success]).flatten() for i in range(5)]
[11]:
int_obs = griddata(X, yobs, (grid_x, grid_y), method='linear')
int_exp = [griddata(X, yexp[i], (grid_x, grid_y), method='linear') for i in range(5)]
plt.contourf(grid_x, grid_y, int_obs, levels = np.linspace(0,1))
plt.colorbar()
plt.contour(grid_x, grid_y, int_obs, levels = [0.05], colors = 'w')
for level in int_exp:
plt.contour(grid_x, grid_y, level, levels = [0.05], colors = 'w', linestyles = 'dashed')
plt.scatter(X[:,0], X[:,1], c = yobs, vmin = 0, vmax = 1);
[12]:
sb.glue("number_2d_successpoints", len(X))
Multibin Coupled HistoSys¶
[1]:
%pylab inline
Populating the interactive namespace from numpy and matplotlib
[2]:
import logging
import json
import pyhf
from pyhf import Model
logging.basicConfig(level=logging.INFO)
[3]:
def prep_data(sourcedata):
    spec = {
        "channels": [
            {
                "name": "signal",
                "samples": [
                    {
                        "name": "signal",
                        "data": sourcedata["signal"]["bindata"]["sig"],
                        "modifiers": [
                            {"name": "mu", "type": "normfactor", "data": None}
                        ],
                    },
                    {
                        "name": "bkg1",
                        "data": sourcedata["signal"]["bindata"]["bkg1"],
                        "modifiers": [
                            {
                                "name": "coupled_histosys",
                                "type": "histosys",
                                "data": {
                                    "lo_data": sourcedata["signal"]["bindata"]["bkg1_dn"],
                                    "hi_data": sourcedata["signal"]["bindata"]["bkg1_up"],
                                },
                            }
                        ],
                    },
                    {
                        "name": "bkg2",
                        "data": sourcedata["signal"]["bindata"]["bkg2"],
                        "modifiers": [
                            {
                                "name": "coupled_histosys",
                                "type": "histosys",
                                "data": {
                                    "lo_data": sourcedata["signal"]["bindata"]["bkg2_dn"],
                                    "hi_data": sourcedata["signal"]["bindata"]["bkg2_up"],
                                },
                            }
                        ],
                    },
                ],
            },
            {
                "name": "control",
                "samples": [
                    {
                        "name": "background",
                        "data": sourcedata["control"]["bindata"]["bkg1"],
                        "modifiers": [
                            {
                                "name": "coupled_histosys",
                                "type": "histosys",
                                "data": {
                                    "lo_data": sourcedata["control"]["bindata"]["bkg1_dn"],
                                    "hi_data": sourcedata["control"]["bindata"]["bkg1_up"],
                                },
                            }
                        ],
                    }
                ],
            },
        ]
    }
    pdf = Model(spec)
    data = []
    for c in pdf.spec["channels"]:
        data += sourcedata[c["name"]]["bindata"]["data"]
    data = data + pdf.config.auxdata
    return data, pdf
[4]:
validation_datadir = "../../validation/data"
[5]:
source = json.load(open(validation_datadir + "/2bin_2channel_coupledhisto.json"))
data, pdf = prep_data(source["channels"])
print(data)
init_pars = pdf.config.suggested_init()
par_bounds = pdf.config.suggested_bounds()
unconpars = pyhf.infer.mle.fit(data, pdf, init_pars, par_bounds)
print("parameters post unconstrained fit: {}".format(unconpars))
conpars = pyhf.infer.mle.fixed_poi_fit(0.0, data, pdf, init_pars, par_bounds)
print("parameters post constrained fit: {}".format(conpars))
pdf.expected_data(conpars)
[170.0, 220.0, 110.0, 105.0, 0.0]
parameters post unconstrained fit: [1.53170588e-12 2.21657891e+00]
parameters post constrained fit: [0. 2.21655133]
[5]:
array([116.08275666, 133.24826999, 183.24826999, 98.08967672,
2.21655133])
[6]:
def plot_results(test_mus, cls_obs, cls_exp, poi_tests, test_size=0.05):
    plt.plot(poi_tests, cls_obs, c="k")
    for i, c in zip(range(5), ["grey", "grey", "grey", "grey", "grey"]):
        plt.plot(poi_tests, cls_exp[i], c=c)
    plt.plot(poi_tests, [test_size] * len(test_mus), c="r")
    plt.ylim(0, 1)
[7]:
def invert_interval(test_mus, cls_obs, cls_exp, test_size=0.05):
    crossing_test_stats = {"exp": [], "obs": None}
    for cls_exp_sigma in cls_exp:
        crossing_test_stats["exp"].append(
            np.interp(
                test_size, list(reversed(cls_exp_sigma)), list(reversed(test_mus))
            )
        )
    crossing_test_stats["obs"] = np.interp(
        test_size, list(reversed(cls_obs)), list(reversed(test_mus))
    )
    return crossing_test_stats
[8]:
poi_tests = np.linspace(0, 5, 61)
tests = [
    pyhf.infer.hypotest(
        poi_test, data, pdf, init_pars, par_bounds, return_expected_set=True
    )
    for poi_test in poi_tests
]
cls_obs = np.array([test[0] for test in tests]).flatten()
cls_exp = [np.array([test[1][i] for test in tests]).flatten() for i in range(5)]
[9]:
print("\n")
plot_results(poi_tests, cls_obs, cls_exp, poi_tests)
invert_interval(poi_tests, cls_obs, cls_exp)
[9]:
{'exp': [0.3654675198094938,
0.4882076670368835,
0.683262284467055,
0.9650584704888153,
1.3142329292131938],
'obs': 0.3932476110375604}
[1]:
%pylab inline
Populating the interactive namespace from numpy and matplotlib
[2]:
import os
import pyhf
import pyhf.readxml
from ipywidgets import interact, fixed
Binned HEP Statistical Analysis in Python¶
HistFactory¶
HistFactory is a popular framework to analyze binned event data and is commonly used in High Energy Physics. At its core it is a template for building a statistical model from individual binned distributions (‘Histograms’) and variations on them (‘Systematics’) that represent auxiliary measurements (for example an energy scale of the detector which affects the shape of a distribution).
pyhf¶
pyhf is a work-in-progress standalone implementation of the HistFactory p.d.f. template and an implementation of the test statistics and asymptotic formulae described in the paper by Cowan, Cranmer, Gross, Vitells: Asymptotic formulae for likelihood-based tests of new physics [arXiv:1007.1727].
Models can be defined using a JSON specification, but existing models based on the XML + ROOT file scheme are readable as well.
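For orientation, a minimal sketch of such a JSON-style specification (with illustrative yields only): a single channel containing a signal sample scaled by a free normfactor and an unmodified background sample.
import pyhf

spec = {
    'channels': [
        {
            'name': 'singlechannel',
            'samples': [
                {
                    'name': 'signal',
                    'data': [5.0, 10.0],
                    'modifiers': [{'name': 'mu', 'type': 'normfactor', 'data': None}],
                },
                {'name': 'background', 'data': [50.0, 60.0], 'modifiers': []},
            ],
        }
    ]
}
pdf = pyhf.Model(spec)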
The Demo¶
The input data for the statistical analysis was generated using the containerized workflow engine yadage (see the demo from KubeCon 2018 [youtube]). Similar to Binder, this utilizes modern container technology for reproducible science. Below you see the execution graph leading up to the model input data at the bottom.
[3]:
import base64
from IPython.core.display import display, HTML
anim = base64.b64encode(open('workflow.gif','rb').read()).decode('ascii')
HTML('<img src="data:image/gif;base64,{}">'.format(anim))
[3]:
Read in the Model from XML and ROOT¶
The ROOT files are read using scikit-hep’s uproot module.
[4]:
spec = pyhf.readxml.parse('meas.xml',os.getcwd())
workspace = pyhf.Workspace(spec)
From the meas.xml spec, we construct a probability density function (p.d.f.). As the model includes systematics, it will be a simultaneous joint p.d.f. of the main model (Poisson) and the constraint model. The latter is defined by the implied “auxiliary measurements”.
[5]:
pdf = workspace.model(measurement_name='meas')
data = workspace.data(pdf)
# what is the measurement?
workspace.get_measurement(measurement_name='meas')
[5]:
{'name': 'meas',
'config': {'poi': 'SigXsecOverSM',
'parameters': [{'name': 'lumi',
'auxdata': [1.0],
'bounds': [[0.5, 1.5]],
'inits': [1.0],
'sigmas': [0.1]},
{'name': 'SigXsecOverSM',
'bounds': [[0.0, 3.0]],
'inits': [1.0],
'fixed': False}]}}
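As a quick sanity check of this simultaneous structure, a sketch reusing the pdf and workspace from above: the observation splits into the channel event counts and the auxiliary data of the constraint terms.
actual_data = workspace.data(pdf, with_aux=False)  # channel event counts only
aux_data = pdf.config.auxdata                      # global observables of the constraint terms
print(len(actual_data), len(aux_data))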
The p.d.f. is built from one data-driven “qcd” (or multijet) estimate and two Monte Carlo-based background samples, and is parametrized by five parameters: one parameter of interest SigXsecOverSM and four nuisance parameters that affect the shape of the two Monte Carlo background estimates (both weight-only and shape systematics).
[6]:
print('Samples:\n {}'.format(workspace.samples))
print('Parameters:\n {}'.format(workspace.parameters))
Samples:
['mc1', 'mc2', 'qcd', 'signal']
Parameters:
['SigXsecOverSM', 'lumi', 'mc1_shape_conv', 'mc1_weight_var1', 'mc2_shape_conv', 'mc2_weight_var1']
[7]:
par_name_dict = {k: v["slice"].start for k, v in pdf.config.par_map.items()}
all_par_settings = {
    n[0]: tuple(m)
    for n, m in zip(
        sorted(reversed(list(par_name_dict.items())), key=lambda x: x[1]),
        pdf.config.suggested_bounds(),
    )
}
default_par_settings = {n[0]: sum(tuple(m)) / 2.0 for n, m in all_par_settings.items()}

def get_mc_counts(pars):
    # combine additive (deltas) and multiplicative (factors) modifications
    # of the nominal rates into the expected per-bin sample counts
    deltas, factors = pdf._modifications(pars)
    allsum = pyhf.tensorlib.concatenate(
        deltas + [pyhf.tensorlib.astensor(pdf.nominal_rates)]
    )
    nom_plus_delta = pyhf.tensorlib.sum(allsum, axis=0)
    nom_plus_delta = pyhf.tensorlib.reshape(
        nom_plus_delta, (1,) + pyhf.tensorlib.shape(nom_plus_delta)
    )
    allfac = pyhf.tensorlib.concatenate(factors + [nom_plus_delta])
    return pyhf.tensorlib.product(allfac, axis=0)
animate_plot_pieces = None

def init_plot(fig, ax, par_settings):
    global animate_plot_pieces

    nbins = sum(list(pdf.config.channel_nbins.values()))
    x = np.arange(nbins)
    data = np.zeros(nbins)
    items = []
    for i in [3, 2, 1, 0]:
        items.append(ax.bar(x, data, 1, alpha=1.0))
    animate_plot_pieces = (
        items,
        ax.scatter(x, workspace.data(pdf, with_aux=False), c="k", alpha=1.0, zorder=99),
    )

def animate(ax=None, fig=None, **par_settings):
    global animate_plot_pieces
    items, obs = animate_plot_pieces
    pars = pyhf.tensorlib.astensor(pdf.config.suggested_init())
    for k, v in par_settings.items():
        pars[par_name_dict[k]] = v

    mc_counts = get_mc_counts(pars)
    rectangle_collection = zip(*map(lambda x: x.patches, items))

    for rectangles, binvalues in zip(rectangle_collection, mc_counts[:, 0].T):
        offset = 0
        for sample_index in [3, 2, 1, 0]:
            rect = rectangles[sample_index]
            binvalue = binvalues[sample_index]
            rect.set_y(offset)
            rect.set_height(binvalue)
            offset += rect.get_height()

    fig.canvas.draw()

def plot(ax=None, order=[3, 2, 1, 0], **par_settings):
    pars = pyhf.tensorlib.astensor(pdf.config.suggested_init())
    for k, v in par_settings.items():
        pars[par_name_dict[k]] = v

    mc_counts = get_mc_counts(pars)
    bottom = None
    # nb: bar_data[0] because evaluating only one parset
    for i, sample_index in enumerate(order):
        data = mc_counts[sample_index][0]
        x = np.arange(len(data))
        ax.bar(x, data, 1, bottom=bottom, alpha=1.0)
        bottom = data if i == 0 else bottom + data
    ax.scatter(x, workspace.data(pdf, with_aux=False), c="k", alpha=1.0, zorder=99)
Interactive Exploration of a HistFactory Model¶
One advantage of a pure-Python implementation of HistFactory is the ability to explore the p.d.f. interactively within the setting of a notebook. Try moving the sliders and observe the effect on the samples. For example, changing the parameter of interest SigXsecOverSM (or µ) controls the overall normalization of the (BSM) signal sample (µ=0 for background-only and µ=1 for the nominal signal-plus-background hypothesis).
[8]:
%matplotlib notebook
fig, ax = plt.subplots(1, 1)
fig.set_size_inches(10, 5)
ax.set_ylim(0, 1.5 * np.max(workspace.data(pdf, with_aux=False)))
init_plot(fig, ax, default_par_settings)
interact(animate, fig=fixed(fig), ax=fixed(ax), **all_par_settings);
[9]:
nominal = pdf.config.suggested_init()
background_only = pdf.config.suggested_init()
background_only[pdf.config.poi_index] = 0.0
best_fit = pyhf.infer.mle.fit(data, pdf)
/Users/jovyan/pyhf/src/pyhf/tensor/numpy_backend.py:253: RuntimeWarning: invalid value encountered in log
return n * np.log(lam) - lam - gammaln(n + 1.0)
Fitting¶
We can now fit the statistical model to the observed data. The best fit of the signal strength is close to the background-only hypothesis.
[10]:
f,(ax1,ax2,ax3) = plt.subplots(1,3, sharey=True, sharex=True)
f.set_size_inches(18,4)
ax1.set_ylim(0,1.5*np.max(workspace.data(pdf, with_aux=False)))
ax1.set_title(u'nominal signal + background µ = 1')
plot(ax = ax1, **{k: nominal[v] for k,v in par_name_dict.items()})
ax2.set_title(u'nominal background-only µ = 0')
plot(ax = ax2, **{k: background_only[v] for k,v in par_name_dict.items()})
ax3.set_title(u'best fit µ = {:.3g}'.format(best_fit[pdf.config.poi_index]))
plot(ax = ax3, **{k: best_fit[v] for k,v in par_name_dict.items()});
Interval Estimation (Computing Upper Limits on µ)¶
A common task in the statistical evaluation of High Energy Physics data analyses is the estimation of confidence intervals of parameters of interest. The general strategy is to perform a series of hypothesis tests and then invert the tests in order to obtain an interval with the correct coverage properties.
A common figure of merit is a modified p-value, CLs. Here we compute an upper limit based on a series of CLs tests.
[11]:
def invert_interval(test_mus, hypo_tests, test_size=0.05):
    cls_obs = np.array([test[0] for test in hypo_tests]).flatten()
    cls_exp = [
        np.array([test[1][i] for test in hypo_tests]).flatten() for i in range(5)
    ]
    crossing_test_stats = {"exp": [], "obs": None}
    for cls_exp_sigma in cls_exp:
        crossing_test_stats["exp"].append(
            np.interp(
                test_size, list(reversed(cls_exp_sigma)), list(reversed(test_mus))
            )
        )
    crossing_test_stats["obs"] = np.interp(
        test_size, list(reversed(cls_obs)), list(reversed(test_mus))
    )
    return crossing_test_stats
[12]:
mu_tests = np.linspace(0, 1, 16)
hypo_tests = [
    pyhf.infer.hypotest(
        mu,
        data,
        pdf,
        pdf.config.suggested_init(),
        pdf.config.suggested_bounds(),
        return_expected_set=True,
    )
    for mu in mu_tests
]
[13]:
import pyhf.contrib.viz.brazil
fig, (ax1) = plt.subplots(1, 1)
fig.set_size_inches(7, 5)
ax1.set_title(u"Hypothesis Tests")
ax1.set_ylabel(u"CLs")
ax1.set_xlabel(u"µ")
pyhf.contrib.viz.brazil.plot_results(ax1, mu_tests, hypo_tests);
[14]:
results = invert_interval(mu_tests, hypo_tests)
print('Observed Limit: {:.2f}'.format(results['obs']))
print('-----')
for i, n_sigma in enumerate(np.arange(-2,3)):
    print('Expected Limit{}: {:.3f}'.format('' if n_sigma==0 else '({} σ)'.format(n_sigma),results['exp'][i]))
Observed Limit: 0.60
-----
Expected Limit(-2 σ): 0.266
Expected Limit(-1 σ): 0.363
Expected Limit: 0.505
Expected Limit(1 σ): 0.707
Expected Limit(2 σ): 0.956
Talks¶
We are always interested in talking about pyhf. See the abstract and a list of previously given presentations, and feel free to invite us to your next conference/workshop/meeting!
Abstract¶
The HistFactory p.d.f. template [CERN-OPEN-2012-016] is per-se independent of its implementation in ROOT and it is useful to be able to run statistical analysis outside of the ROOT, RooFit, RooStats framework. pyhf is a pure-python implementation of that statistical model for multi-bin histogram-based analysis and its interval estimation is based on the asymptotic formulas of “Asymptotic formulae for likelihood-based tests of new physics” [1007.1727]. pyhf supports modern computational graph libraries such as TensorFlow and PyTorch in order to make use of features such as auto-differentiation and GPU acceleration.
Presentations¶
This list will be updated with talks given on pyhf:
Matthew Feickert. Likelihood preservation and statistical reproduction of searches for new physics. CHEP 2019, Nov 2019. URL: https://indico.cern.ch/event/773049/contributions/3476143/.
Matthew Feickert. pyhf: pure-Python implementation of HistFactory. PyHEP 2019 Workshop, Oct 2019. URL: https://indico.cern.ch/event/833895/contributions/3577824/.
Matthew Feickert, Lukas Heinrich, Giordon Stark, and Kyle Cranmer. pyhf: a pure Python implementation of HistFactory with tensors and autograd. DIANA Meeting - pyhf, October 2018. URL: https://indico.cern.ch/event/759480/.
Matthew Feickert, Lukas Heinrich, Giordon Stark, and Kyle Cranmer. pyhf: pure-Python implementation of HistFactory models with autograd. (Internal) Joint Machine Learning & Statistics Fora Meeting, September 2018. URL: https://indico.cern.ch/event/757657/contributions/3141134/.
Lukas Heinrich. Gaussian Process Shape Estimation and Systematics. (Internal) Joint Machine Learning & Statistics Fora Meeting, Dec 2018. URL: https://indico.cern.ch/event/777561/contributions/3234669/.
Lukas Heinrich. HEP in the Cloud Computing and Open Science Era. EP-IT Data science seminar, Oct 2019. URL: https://indico.cern.ch/event/840837/.
Lukas Heinrich. Likelihoods associated with statistical fits used in searches for new physics on HEPData and use of RECAST. (Internal) ATLAS Weekly Meeting, Nov 2019. URL: https://indico.cern.ch/event/864395/contributions/3642165/.
Lukas Heinrich. Traditional inference with machine learning tools. 1st Pan-European Advanced School on Statistics in High Energy Physics, Oct 2019. URL: https://indico.desy.de/indico/event/22731/session/4/contribution/19.
Lukas Heinrich. pyhf: Full Run-2 ATLAS likelihoods. (Internal) Joint Machine Learning & Statistics Fora Meeting, May 2019. URL: https://indico.cern.ch/event/817483/contributions/3412907/.
Lukas Heinrich, Matthew Feickert, Giordon Stark, and Kyle Cranmer. pyhf: A standalone HistFactory Implementation. (Re)interpreting the results of new physics searches at the LHC Workshop, May 2018. URL: https://indico.cern.ch/event/702612/contributions/2958658/.
Giordon Stark. Likelihood Preservation and Reproduction. West Coast LHC Jamboree 2019, Oct 2019. URL: https://indico.cern.ch/event/848030/contributions/3616614/.
Giordon Stark. New techniques for use of public likelihoods for reinterpretation of search results. 27th International Conference on Supersymmetry and Unification of Fundamental Interactions (SUSY2019), May 2019. URL: https://indico.cern.ch/event/746178/contributions/3396797/.
Tutorials¶
This list will be updated with tutorials and schools given on pyhf:
Lukas Heinrich. Introduction to pyhf. (Internal) ATLAS Induction Day + Software Tutorial, Oct 2019. URL: https://indico.cern.ch/event/831761/contributions/3484275/.
Posters¶
This list will be updated with posters presented on pyhf:
Matthew Feickert, Lukas Heinrich, Giordon Stark, and Kyle Cranmer. pyhf: a pure Python statistical fitting library for High Energy Physics with tensors and autograd. July 2019. 18th Scientific Computing with Python Conference (SciPy 2019). URL: http://conference.scipy.org/proceedings/scipy2019/slides.html, doi:10.25080/Majora-7ddc1dd1-019.
Matthew Feickert, Lukas Heinrich, Giordon Stark, and Kyle Cranmer. pyhf: pure Python implementation of HistFactory. November 2019. 24th International Conference on computing in High Energy & Nuclear Physics (CHEP 2019). URL: https://indico.cern.ch/event/773049/contributions/3476180/.
Lukas Heinrich, Matthew Feickert, Giordon Stark, and Kyle Cranmer. pyhf: auto-differentiable binned statistical models. 19th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2019), March 2019. URL: https://indico.cern.ch/event/708041/contributions/3272095/.
Installation¶
To install, we suggest first setting up a virtual environment
# Python3
python3 -m venv pyhf
# Python2
virtualenv --python=$(which python) pyhf
and activating it
source pyhf/bin/activate
Install latest stable release from PyPI…¶
… with NumPy backend¶
pip install pyhf
… with TensorFlow backend¶
pip install pyhf[tensorflow]
… with PyTorch backend¶
pip install pyhf[torch]
… with JAX backend¶
pip install pyhf[jax]
… with all backends¶
pip install pyhf[backends]
… with xml import/export functionality¶
pip install pyhf[xmlio]
Install latest development version from GitHub…¶
… with NumPy backend¶
pip install --ignore-installed -U "git+https://github.com/scikit-hep/pyhf.git#egg=pyhf"
… with TensorFlow backend¶
pip install --ignore-installed -U "git+https://github.com/scikit-hep/pyhf.git#egg=pyhf[tensorflow]"
… with PyTorch backend¶
pip install --ignore-installed -U "git+https://github.com/scikit-hep/pyhf.git#egg=pyhf[torch]"
… with JAX backend¶
pip install --ignore-installed -U "git+https://github.com/scikit-hep/pyhf.git#egg=pyhf[jax]"
… with all backends¶
pip install --ignore-installed -U "git+https://github.com/scikit-hep/pyhf.git#egg=pyhf[backends]"
… with xml import/export functionality¶
pip install --ignore-installed -U "git+https://github.com/scikit-hep/pyhf.git#egg=pyhf[xmlio]"
Updating pyhf¶
Rerun the installation command. Because the upgrade flag -U is used, the libraries will be updated.
Developing¶
To develop, we suggest using virtual environments together with pip
or using pipenv. Once the environment is activated, clone the repo from GitHub
git clone https://github.com/scikit-hep/pyhf.git
and install all necessary packages for development
pip install --ignore-installed -U -e .[complete]
Then setup the Git pre-commit hook for Black by running
pre-commit install
Publishing¶
Publishing to PyPI and TestPyPI is automated through the PyPA’s PyPI publish GitHub Action. To publish a release to PyPI one simply needs to run
bumpversion [major|minor|patch]
to update the release version and get a tagged commit and then push the commit
and tag to master
with
git push origin master --tags
FAQ¶
Frequently Asked Questions about pyhf and its use.
Questions¶
Is it possible to set the backend from the CLI?¶
Yes. Use the --backend option for pyhf cls to specify a tensor backend. The default backend is NumPy. For more information see pyhf cls --help.
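For example, to run with the PyTorch backend on a hypothetical workspace file workspace.json:
pyhf cls workspace.json --backend pytorch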
Troubleshooting¶
import torch or import pyhf causes a Segmentation fault (core dumped)¶
This may be the result of a conflict with the NVIDIA drivers installed on your machine. Try uninstalling and completely removing all of them from your machine
# On Ubuntu/Debian
sudo apt-get purge nvidia*
and then installing the latest versions.
Translations¶
One key goal of pyhf is to provide seamless translations between other statistical frameworks and pyhf. This page details the various ways to translate from a tool you might already be using as part of an existing analysis to pyhf. Many of these solutions involve extracting out the HistFactory workspace and then running pyhf xml2json, which provides a single JSON workspace that can be loaded directly into pyhf.
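Loading such a JSON workspace into pyhf might then look like the following sketch, where workspace.json stands in for the output of pyhf xml2json:
import json
import pyhf

with open('workspace.json') as spec_file:  # hypothetical output of `pyhf xml2json`
    spec = json.load(spec_file)
workspace = pyhf.Workspace(spec)
model = workspace.model()  # build the p.d.f. from the default measurement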
HistFitter¶
In order to go from HistFitter to pyhf, the first step is to extract out the HistFactory workspaces. Assuming you have an existing configuration file, config.py, you likely run an exclusion fit like so:
HistFitter.py -f -D "before,after,corrMatrix" -F excl config.py
The names of the output workspace files depend on several parameters you define in your config.py:
analysisName is from configMgr.analysisName
prefix is defined in configMgr.addFitConfig({prefix})
measurementName is the first measurement you define via fitConfig.addMeasurement(name={measurementName},...)
channelName are the names of channels you define via fitConfig.addChannel("cuts", [{channelName}], ...)
cachePath is where HistFitter stores the cached histograms, defined by configMgr.histCacheFile, which defaults to data/{analysisName}.root
To dump the HistFactory workspace, you will modify the above to skip the fit -f and plotting -D, so you end up with
HistFitter.py -wx -F excl config.py
The -w flag tells HistFitter to (re)create the HistFactory workspace stored in results/{analysisName}/{prefix}_combined_{measurementName}.root.
The -x flag tells HistFitter to dump the XML files into config/{analysisName}/, with the top-level file being {prefix}.xml and all other files being {prefix}_{channelName}_cuts.xml.
Typically, prefix = 'FitConfig' and measurementName = 'NormalMeasurement'. For example, if the following exists in your config.py
from configManager import configMgr
# ...
configMgr.analysisName = '3b_tag21.2.27-1_RW_ExpSyst_36100_multibin_bkg'
configMgr.histCacheFile = 'cache/{0:s}.root'.format(configMgr.analysisName)
# ...
fitConfig = configMgr.addFitConfig("Excl")
# ...
channel = fitConfig.addChannel("cuts", ['SR_0L'], 1, 0.5, 1.5)
# ...
meas1=fitConfig.addMeasurement(name="DefaultMeasurement",lumi=1.0,lumiErr=0.029)
meas1.addPOI("mu_SIG1")
# ...
meas2=fitConfig.addMeasurement(name="DefaultMeasurement",lumi=1.0,lumiErr=0.029)
meas2.addPOI("mu_SIG2")
Then, you expect the following files to be made:
config/3b_tag21.2.27-1_RW_ExpSyst_36100_multibin_bkg/Excl.xml
config/3b_tag21.2.27-1_RW_ExpSyst_36100_multibin_bkg/Excl_SR_0L_cuts.xml
cache/3b_tag21.2.27-1_RW_ExpSyst_36100_multibin_bkg.root
results/3b_tag21.2.27-1_RW_ExpSyst_36100_multibin_bkg/Excl_combined_DefaultMeasurement.root
These are all the files you need in order to use pyhf xml2json. At this point, you could run
pyhf xml2json config/3b_tag21.2.27-1_RW_ExpSyst_36100_multibin_bkg/Excl.xml
which will read all of the XML files and load the histogram data from the histogram cache.
The HistFactory workspace in results/ contains all of the information necessary to rebuild the XML files again. For debugging purposes, the pyhf developers will often ask for your workspace file, which means results/3b_tag21.2.27-1_RW_ExpSyst_36100_multibin_bkg/Excl_combined_DefaultMeasurement.root. If you want to generate the XML, you can open this file in ROOT and run DefaultMeasurement->PrintXML(), which puts all of the XML files into the current directory you are in.
Command Line API¶
pyhf¶
pyhf [OPTIONS] COMMAND [ARGS]...
Options:
--version    Show the version and exit.
cls¶
pyhf cls [OPTIONS] [WORKSPACE]
Options:
--output-file <output_file>    The location of the output json file. If not specified, prints to screen.
--measurement <measurement>
-p, --patch <patch>
--testpoi <testpoi>
--teststat <teststat>    Options: q | qtilde
--backend <backend>    The tensor backend used for the calculation. Options: numpy | pytorch | tensorflow | jax | np | torch | tf
--optimizer <optimizer>
--optconf <optconf>
Arguments:
WORKSPACE    Optional argument
combine¶
Combine two workspaces into a single workspace. See pyhf.workspace.Workspace.combine() for more information.
pyhf combine [OPTIONS] [WORKSPACE_ONE] [WORKSPACE_TWO]
Options:
--output-file <output_file>    The location of the output json file. If not specified, prints to screen.
Arguments:
WORKSPACE_ONE    Optional argument
WORKSPACE_TWO    Optional argument
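For example, with two hypothetical workspace files:
pyhf combine workspace_one.json workspace_two.json --output-file combined.json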
inspect¶
pyhf inspect [OPTIONS] [WORKSPACE]
Options:
--output-file <output_file>    The location of the output json file. If not specified, prints to screen.
--measurement <measurement>
Arguments:
WORKSPACE    Optional argument
json2xml¶
pyhf json2xml [OPTIONS] [WORKSPACE]
Options:
--output-dir <output_dir>
--specroot <specroot>
--dataroot <dataroot>
--resultprefix <resultprefix>
-p, --patch <patch>
Arguments:
WORKSPACE    Optional argument
prune¶
Prune components from the workspace. See pyhf.workspace.Workspace.prune() for more information.
pyhf prune [OPTIONS] [WORKSPACE]
Options:
--output-file <output_file>    The location of the output json file. If not specified, prints to screen.
-c, --channel <channel>
-s, --sample <sample>
-m, --modifier <modifier>
-t, --modifier-type <modifier_type>    Options: histosys | lumi | normfactor | normsys | shapefactor | shapesys | staterror
--measurement <measurement>
Arguments:
WORKSPACE    Optional argument
rename¶
Rename components of the workspace. See pyhf.workspace.Workspace.rename() for more information.
pyhf rename [OPTIONS] [WORKSPACE]
Options:
--output-file <output_file>    The location of the output json file. If not specified, prints to screen.
-c, --channel <channel>
-s, --sample <sample>
-m, --modifier <modifier>
--measurement <measurement>
Arguments:
WORKSPACE    Optional argument
xml2json¶
Entrypoint XML: The top-level XML file for the PDF definition.
pyhf xml2json [OPTIONS] ENTRYPOINT_XML
Options:
--basedir <basedir>    The base directory for the XML files to point relative to.
--output-file <output_file>    The location of the output json file. If not specified, prints to screen.
--track-progress, --hide-progress
Arguments:
ENTRYPOINT_XML    Required argument
Python API¶
Top-Level¶
NumPy backend for pyhf
scipy.optimize-based Optimizer using finite differences.
NumPy backend for pyhf
scipy.optimize-based Optimizer using finite differences.
Get the current backend and the associated optimizer
Probability Distribution Functions (PDFs)¶
The Normal distribution with mean \(\mu\) and standard deviation \(\sigma\).
The Poisson distribution with rate parameter \(\lambda\).
A probability density corresponding to the joint distribution of a batch of identically distributed random variables.
A probability density corresponding to the joint distribution of multiple non-identical component distributions.
Making Models from PDFs¶
The main pyhf model class.
A JSON-serializable object that is built from an object that follows the workspace.json schema.
Backends¶
The computational backends for which pyhf provides interfaces for vector-based calculations.
NumPy backend for pyhf
PyTorch backend for pyhf
TensorFlow backend for pyhf
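A minimal sketch of querying the active backend/optimizer pair via the accessor documented above:
import pyhf

tensorlib, optimizer = pyhf.get_backend()  # current backend and associated optimizer
print(type(tensorlib).__name__, type(optimizer).__name__)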
Optimizers¶
PyTorch Optimizer Backend.
scipy.optimize-based Optimizer using finite differences.
Tensorflow Optimizer Backend.
MINUIT Optimizer Backend.
Modifiers¶
Interpolators¶
The piecewise-linear interpolation strategy.
The piecewise-exponential interpolation strategy.
The quadratic interpolation and linear extrapolation strategy.
The polynomial interpolation and exponential extrapolation strategy.
The piecewise-linear interpolation strategy, with polynomial at \(\left|a\right| < 1\).
Inference¶
Compute \(p\)-values and test statistics for a single value of the parameter of interest.
The test statistic, \(q_{\mu}\), for establishing an upper limit on the strength parameter, \(\mu\), as defined in Equation (14) in [1007.1727].
Twice the negative log-likelihood.
Run an unconstrained maximum likelihood fit.
Run a maximum likelihood fit with the POI value fixed.
Compute the Asimov dataset (expected yields at best-fit values) for a given POI value.
Compute p-values from test-statistic values.
Compute the expected \(p\)-values CLsb, CLb and CLs for data corresponding to a given percentile of the alternate hypothesis.
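A minimal usage sketch of this inference API, reusing the simple model from the Hello World example:
import pyhf

model = pyhf.simplemodels.hepdata_like(
    signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]
)
data = [51, 48] + model.config.auxdata
bestfit_pars = pyhf.infer.mle.fit(data, model)  # unconstrained maximum likelihood fit
CLs_obs = pyhf.infer.hypotest(1.0, data, model)  # CLs at the tested POI value mu = 1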
Exceptions¶
Various exceptions, apart from standard Python exceptions, that are raised when using the pyhf API.
InvalidMeasurement is raised when a specified measurement is invalid given the specification.
InvalidSpecification is raised when a specification does not validate against the given schema.
InvalidWorkspaceOperation is raised when an operation on a workspace fails.
InvalidModel is raised when a given model does not have the right configuration, even though it validates correctly against the schema.
InvalidModifier is raised when an invalid modifier is requested.
InvalidInterpCode is raised when an invalid/unimplemented interpolation code is requested.
MissingLibraries is raised when an import fails due to missing additional, non-default libraries.
InvalidOptimizer is raised when trying to set an optimizer that does not exist.
InvalidPdfParameters is raised when trying to evaluate a pdf with invalid parameters.
InvalidPdfData is raised when trying to evaluate a pdf with invalid data.
Utilities¶
Use and Citations¶
Updating list of citations and use cases of pyhf:
Lukas Heinrich, Holger Schulz, Jessica Turner, and Ye-Ling Zhou. Constraining A₄ Leptonic Flavour Model Parameters at Colliders and Beyond. 2018. arXiv:1810.05648.
ATLAS Collaboration. Reproducing searches for new physics with the ATLAS experiment through publication of full statistical likelihoods. Geneva, Aug 2019. URL: https://cds.cern.ch/record/2684863.
Roadmap (2019-2020)¶
This is the pyhf 2019 into 2020 Roadmap (Issue #561).
Overview and Goals¶
We will loosely follow Seibert’s Hierarchy of Needs (Stan Seibert, SciPy 2019).
As a general overview that will include:
Improvements to docs
Add lots of examples
Add at least 5 well documented case studies
Issue cleanup
Adding core feature support
“pyhf evolution”: integration with columnar data analysis systems
GPU support and testing
Publications
Submit pyhf to JOSS
Submit pyhf to pyOpenSci
Start pyhf paper in 2020
Align with IRIS-HEP Analysis Systems NSF milestones
Time scale¶
The roadmap will be executed over mostly Quarter 3 of 2019 through Quarter 1 of 2020, with some projects continuing into Quarter 2 of 2020
2019-Q3
2019-Q4
2020-Q1
(2020-Q2)
Roadmap¶
Documentation and Deployment
[ ] Add docstrings to all functions and classes (Issues #38, #349) [2019-Q3]
[ ] Greatly revise and expand examples (Issues #168, #202, #212, #325, #342, #349, #367) [2019-Q3 → 2019-Q4]
[ ] Add small case studies with published sbottom likelihood from HEPData
[x] Move to scikit-hep GitHub organization [2019-Q3]
[ ] Develop a release schedule/criteria [2019-Q4]
[x] Automate deployment with GitHub Actions (Issue #508), replacing the original plan of an Azure pipeline (talk with Henry Schreiner) (Issue #517) [2019-Q3]
[ ] Finalize logo and add it to website (Issue #453) [2019-Q3 → 2019-Q4]
[ ] Write submission to JOSS (Issue #502) and write submission to pyOpenSci [2019-Q4 → 2020-Q2]
[ ] Contribute to IRIS-HEP Analysis Systems Milestones “Initial roadmap for ecosystem coherency” and “Initial roadmap for high-level cyberinfrastructure components of analysis system” [2019-Q4 → 2020-Q2]
Revision and Maintenance
[x] Add tests using HEPData published sbottom likelihoods (Issue #518) [2019-Q3]
[x] Add CI with GitHub Actions and Azure Pipelines (PR #527, Issue #517) [2019-Q3]
[ ] Investigate rewrite of pytest fixtures to use modern pytest (Issue #370) [2019-Q3 → 2019-Q4]
[x] Factorize out the statistical fitting portion into pyhf.infer (PR #531) [2019-Q3 → 2019-Q4]
[ ] Bug squashing at large [2019-Q3 → 2020-Q2]
[ ] Unexpected use cases (Issues #324, #325, #529)
[ ] Computational edge cases (Issues #332, #445)
[ ] Make sure that all backends reproduce sbottom results [2019-Q4 → 2020-Q2]
Development
[x] Batch support (PR #503) [2019-Q3]
[x] Add ParamViewer support (PR #519) [2019-Q3]
[x] Add setting of NPs constant/fixed (PR #653) [2019-Q3]
[x] Implement pdf as subclass of distributions (PR #551) [2019-Q3]
[x] Add sampling with toys (PR #558) [2019-Q3]
[ ] Make general modeling choices (e.g., Issue #293) [2019-Q4 → 2020-Q1]
[ ] Add “discovery” test stats (p0) (PR #520) [2019-Q4 → 2020-Q1]
[ ] Add better Model creation [2019-Q4 → 2020-Q1]
[ ] Add background model support (Issue #514) [2019-Q4 → 2020-Q1]
[ ] Develop interface for the optimizers similar to tensor/backend [2019-Q4 → 2020-Q1]
[x] Migrate to TensorFlow v2.0 (PR #541) [2019-Q4]
[ ] Drop Python 2.7 support at end of 2019 (Issue #469) [2019-Q4 (last week of December 2019)]
[ ] Finalize public API [2020-Q1]
[ ] Integrate pyfitcore/Statisfactory API [2020-Q1]
Research
[ ] Add pyfitcore/Statisfactory integrations (Issue #344, zfit Issue 120) [2019-Q4]
[ ] Hardware acceleration scaling studies (Issues #93, #301) [2019-Q4 → 2020-Q1]
[ ] Speedup through Numba (Issue #364) [2019-Q3 → 2019-Q4]
[ ] Dask backend (Issue #259) [2019-Q3 → 2020-Q1]
[ ] Attempt to use pyhf as fitting tool for full Analysis Systems pipeline test in early 2020 [2019-Q4 → 2020-Q1]
[ ] pyhf should satisfy IRIS-HEP Analysis Systems Milestone “GPU/accelerator-based implementation of statistical and other appropriate components” [2020-Q1 → 2020-Q2] and contributes to “Benchmarking and assessment of prototype analysis system components” [2020-Q3 → 2020-Q4]
Roadmap as Gantt Chart¶
Presentations During Roadmap Timeline¶
[x] Talk at IRIS-HEP Institute Retreat (September 12-13th, 2019)
[x] Talk at PyHEP 2019 (October 16-18th, 2019)
[x] Talk at CHEP 2019 (November 4-8th, 2019)
[x] Poster at CHEP 2019 (November 4-8th, 2019)
pure-python fitting/limit-setting/interval estimation HistFactory-style¶
The HistFactory p.d.f. template [CERN-OPEN-2012-016] is per-se independent of its implementation in ROOT and sometimes, it’s useful to be able to run statistical analysis outside of ROOT, RooFit, RooStats framework.
This repo is a pure-python implementation of that statistical model for multi-bin histogram-based analysis and its interval estimation is based on the asymptotic formulas of “Asymptotic formulae for likelihood-based tests of new physics” [arXiv:1007.1727]. The aim is also to support modern computational graph libraries such as PyTorch and TensorFlow in order to make use of features such as autodifferentiation and GPU acceleration.
Hello World¶
>>> import pyhf
>>> pdf = pyhf.simplemodels.hepdata_like(signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0])
>>> CLs_obs, CLs_exp = pyhf.infer.hypotest(1.0, [51, 48] + pdf.config.auxdata, pdf, return_expected=True)
>>> print('Observed: {}, Expected: {}'.format(CLs_obs, CLs_exp))
Observed: [0.05290116], Expected: [0.06445521]
What does it support¶
Implemented variations:
[x] HistoSys
[x] OverallSys
[x] ShapeSys
[x] NormFactor
[x] Multiple Channels
[x] Import from XML + ROOT via uproot
[x] ShapeFactor
[x] StatError
[x] Lumi Uncertainty
Computational Backends:
[x] NumPy
[x] PyTorch
[x] TensorFlow
[x] JAX
Available Optimizers
NumPy | Tensorflow | PyTorch |
---|---|---|
SLSQP (scipy.optimize) | Newton’s Method (autodiff) | Newton’s Method (autodiff) |
MINUIT (iminuit) | . | . |
Todo¶
[ ] StatConfig
[ ] Non-asymptotic calculators
Results obtained from this package are validated against output computed from HistFactory workspaces.
A two bin example¶
bin 1: nobs = 100, b = 100, db = 15., nom_sig = 30.
bin 2: nobs = 145, b = 150, db = 20., nom_sig = 45.
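A sketch of how this two bin example maps onto the Hello World API above (the yields are the ones listed here):
import pyhf

pdf = pyhf.simplemodels.hepdata_like(
    signal_data=[30.0, 45.0], bkg_data=[100.0, 150.0], bkg_uncerts=[15.0, 20.0]
)
CLs_obs, CLs_exp = pyhf.infer.hypotest(
    1.0, [100, 145] + pdf.config.auxdata, pdf, return_expected=True
)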
Installation¶
To install pyhf from PyPI with the NumPy backend run
pip install pyhf
and to install pyhf with additional backends run
pip install pyhf[tensorflow,torch]
or a subset of the options.
To uninstall run
pip uninstall pyhf
Authors¶
Please check the contribution statistics for a list of contributors