# Welcome to the Docs for Forest-Benchmarking¶

Forest-Benchmarking is an **open source** library for performing quantum characterization, verification, and validation
(QCVV) of quantum computers using pyQuil.

To get started, see the Installation Guide and Quick Start Guide below.

**Note**: To join our user community, connect to the Rigetti Slack workspace at https://rigetti-forest.slack.com.

## Contents¶

### Installation Guide¶

**A few terms to orient you as you are installing:**

- **pyQuil**: An open source Python library to help you write and run quantum programs written in *Quil*.
- **Quil**: The Quantum Instruction Language.
- **QVM**: The Quantum Virtual Machine is an open source implementation of a quantum abstract machine on classical hardware. The QVM lets you use a regular computer to simulate a small quantum computer and execute Quil programs.
- **QPU**: Quantum processing unit. This refers to the physical hardware chip on which we run quantum programs.
- **Quil Compiler**: The open source compiler, `quilc`, compiles arbitrary Quil programs to the supported lattice and instruction set for our architecture.
- **Quantum Cloud Services**: Quantum Cloud Services offers users an access point to our physical quantum computers and the ability to schedule compute time on our QPUs. If you’d like access to our quantum computers for QCVV research, please email support@rigetti.com.

#### Step 1. Install pyQuil and the Forest SDK¶

```
pip install pyquil
```

Next, install the Forest SDK, which includes the Rigetti Quil Compiler (`quilc`) and the Quantum Virtual Machine (`qvm`). Request the Forest SDK here. You’ll receive an email right away with the download links for macOS, Linux (.deb), Linux (.rpm), and Linux (bare-bones).

If you don’t already have Jupyter or JupyterLab, now would be a good time to install that too.

```
pip install jupyterlab
```

#### Step 2. Install forest-benchmarking¶

forest-benchmarking can be installed from source or via the Python package manager PyPI.

**Note**: NumPy and SciPy must be pre-installed for installation to be successful, due to cvxpy.

**Source**

```
git clone https://github.com/rigetti/forest-benchmarking.git
cd forest-benchmarking/
pip install numpy scipy
pip install -e .
```

**PyPI**

```
pip install numpy scipy
pip install forest-benchmarking
```

#### Step 3. Build the docs (optional)¶

We use Sphinx to build the documentation. To do this, first install the requirements:

```
pip install sphinx
pip install sphinx_rtd_theme
pip install nbsphinx
pip install nbsphinx_link
```

then navigate into Forest-Benchmarking’s docs directory and run:

```
make clean
make html
```

To view the docs, navigate to the newly created `docs/_build/html` directory and open the `index.html` file in a browser.

### Quick Start Guide¶

Below we will assume that you are developing in a Jupyter notebook.

#### Getting ready to benchmark (QVM)¶

The first thing you need to do is open up a terminal and run:

```
$ quilc -S
$ qvm -S
$ jupyter lab
```

Inside the notebook we need to get some basic pyQuil objects, namely a `QuantumComputer`, as well as a `BenchmarkConnection` for some routines. We’ll also need to be able to construct a pyQuil `Program`.

```
from pyquil import get_qc, Program
from pyquil.api import get_benchmarker
from pyquil.gates import *

noisy_qc = get_qc('2q-qvm', noisy=True)
bm = get_benchmarker()
```

Now we are ready to run through some simple examples. We’ll start with state tomography on the plus state.

```
from forest.benchmarking.tomography import do_tomography
# prepare the plus state
qubit = 1
state_prep = Program(H(qubit))
# tomograph the noisy plus state
state_estimate, _, _ = do_tomography(noisy_qc, state_prep, qubits=[qubit], kind='state')
```

Process tomography is quite similar (note that this will take a long time with the default arguments).

```
# specify a process
qubits = [0, 1]
process = Program(CNOT(*qubits))
# tomograph the noisy process CNOT
process_estimate, _, _ = do_tomography(noisy_qc, process, qubits, kind='process')
```

If we only care about the fidelity of our state or process, then we can turn to Direct Fidelity Estimation (DFE) to save time / runs on the quantum computer. Here we use the `BenchmarkConnection` object `bm` to do some of the operations with the Clifford group.

```
from forest.benchmarking.direct_fidelity_estimation import do_dfe
# fidelity of a state preparation
(fidelity_est, std_err), _, _ = do_dfe(noisy_qc, bm, state_prep, qubits=[qubit], kind='state')
# process fidelity
(proc_fidelity_est, std_err), _, _ = do_dfe(noisy_qc, bm, process, qubits, kind='process')
```

Finally, we can get estimates of the average error rate for our Clifford gates using Randomized Benchmarking (RB). Here again we use `bm` to generate random sequences of Clifford gates (compiled to native gates).

```
from forest.benchmarking.randomized_benchmarking import do_rb
# simultaneous single qubit RB on q0 and q1
qubit_groups = [(0,), (1,)]
num_sequences_per_depth = 20
depths = [2, 10, 20] * num_sequences_per_depth
rb_decays, _, _ = do_rb(noisy_qc, bm, qubit_groups, depths)
```

These are just a few examples! Peruse the examples notebooks to see more.

#### Getting ready to benchmark (QPU)¶

Todo

QMI then re log into qcs and document getting forest benchmarking working

- log into qcs
- ?

### Observable Estimation and Error Mitigation¶

The module `observable_estimation` is at the heart of forest-benchmarking. It provides a convenient way to construct experiments that measure observables and to mitigate errors associated with the readout (measurement) process.

#### Observable Estimation¶

There are many cases in Forest-Benchmarking, and QCVV broadly, where we are interested in estimating the expectations of some set of Pauli observables at the end of some circuit (aka `pyquil.Program`). Additionally, we may be interested in running the same circuit after preparing different starting states. The `observable_estimation.py` module is designed to facilitate such experiments. In it you will find

- the `ObservablesExperiment` class, which is the high-level structure housing the several `ExperimentSetting`s you wish to run around its core (unchanging) `program`
- the `ExperimentSetting` dataclass, which specifies the preparation `in_state` to prepare and the `observable` to measure for a particular experimental run. The `in_state` is represented by a `TensorProductState` while the `observable` is a `pyquil.PauliTerm`
- the function `estimate_observables()`, which packages the steps to compose a list of `pyquil.Program`s comprising an `ObservablesExperiment`, run each program on the provided quantum computer, and return the `ExperimentResult`s
- the `ExperimentResult` dataclass, which records the `mean` and associated `std_err` estimated for an `ExperimentSetting`, along with some other metadata

In many cases this core functionality allows you to easily specify and run experiments to get the observable expectations of interest without explicitly dealing with shot data, state and measurement preparation programs, qc memory addressing, etc.

Later in this notebook we will discuss additional functionality that enables grouping compatible settings to run in parallel and ‘calibrating’ observable estimates, but for now let’s dive into an example to build familiarity.

The procedure which likely leverages this abstraction most fruitfully is quantum process tomography. If you aren’t familiar with tomography at all, check out this notebook for some background. The goal is to experimentally determine/characterize some unknown process running on our quantum computer (QC). Typically, we have some ideal operation in mind, specified by a `pyquil.Program`, that we want to run on the QC. Running process tomography will help us understand what the actual experimental implementation of this process is on our noisy QC. Since process tomography involves running our process on many different start states and measuring several different observables, an `ObservablesExperiment` should considerably facilitate implementation of the procedure.

##### Create the experiment¶

First, we need to specify our process. For simplicity say we are interested in the QC’s implementation of a bit flip on qubit 0.

```
from pyquil import Program
from pyquil.gates import X
process = Program(X(0)) # bit flip on qubit 0
```

Now we need to specify our various `ExperimentSettings`. For now I’ll make use of a helper in `forest.benchmarking.tomography.py` that does this for us.

```
from forest.benchmarking.tomography import _pauli_process_tomo_settings
tomo_expt_settings = list(_pauli_process_tomo_settings(qubits = [0]))
print(tomo_expt_settings)
```

```
[ExperimentSetting[X+_0→(1+0j)*X0], ExperimentSetting[X+_0→(1+0j)*Y0], ExperimentSetting[X+_0→(1+0j)*Z0], ExperimentSetting[X-_0→(1+0j)*X0], ExperimentSetting[X-_0→(1+0j)*Y0], ExperimentSetting[X-_0→(1+0j)*Z0], ExperimentSetting[Y+_0→(1+0j)*X0], ExperimentSetting[Y+_0→(1+0j)*Y0], ExperimentSetting[Y+_0→(1+0j)*Z0], ExperimentSetting[Y-_0→(1+0j)*X0], ExperimentSetting[Y-_0→(1+0j)*Y0], ExperimentSetting[Y-_0→(1+0j)*Z0], ExperimentSetting[Z+_0→(1+0j)*X0], ExperimentSetting[Z+_0→(1+0j)*Y0], ExperimentSetting[Z+_0→(1+0j)*Z0], ExperimentSetting[Z-_0→(1+0j)*X0], ExperimentSetting[Z-_0→(1+0j)*Y0], ExperimentSetting[Z-_0→(1+0j)*Z0]]
```

The first element of this list is displayed as `ExperimentSetting[X+_0→(1+0j)*X0]`. The symbols inside `ExperimentSetting` encode information about the preparation and measurement; they should be interpreted as `"prepare"→"measure"`.

So the preparation encoding of this particular setting, `X+_0`, tells us that this setting will prepare the `+` eigenstate of `X`, i.e. the state \(|+\rangle = \frac{1}{\sqrt 2}\left( |0\rangle + |1 \rangle\right)\), on qubit 0. Furthermore, after the program is run on this state, we will measure the observable \(X\) on qubit 0. `(1+0j)` is just a multiplicative coefficient, in this case \(1\). This coefficient will automatically be factored into the estimates of the expectation and std_err for the corresponding observable output by `estimate_observables`.

Now we need to pair up our settings with our process program for the complete experiment description.

```
from forest.benchmarking.observable_estimation import ObservablesExperiment
tomography_experiment = ObservablesExperiment(settings=tomo_expt_settings, program=process)
print(tomography_experiment)
```

```
X 0
0: X+_0→(1+0j)*X0
1: X+_0→(1+0j)*Y0
2: X+_0→(1+0j)*Z0
3: X-_0→(1+0j)*X0
4: X-_0→(1+0j)*Y0
5: X-_0→(1+0j)*Z0
6: Y+_0→(1+0j)*X0
7: Y+_0→(1+0j)*Y0
8: Y+_0→(1+0j)*Z0
9: Y-_0→(1+0j)*X0
10: Y-_0→(1+0j)*Y0
11: Y-_0→(1+0j)*Z0
12: Z+_0→(1+0j)*X0
13: Z+_0→(1+0j)*Y0
14: Z+_0→(1+0j)*Z0
15: Z-_0→(1+0j)*X0
16: Z-_0→(1+0j)*Y0
17: Z-_0→(1+0j)*Z0
```

The first line is our process program, i.e. `tomography_experiment.program`, and each subsequent indexed line is one of our settings. Now we can proceed to get estimates of expectations for the observable in each setting (given the preparation, of course).

##### Run the experiment¶

We’ll need to get a quantum computer object to run our experiment on. We’ll get both an ideal Quantum Virtual Machine (QVM) which simulates our program/circuit without any noise, as well as a noisy QVM with a built-in default noise model.

```
from pyquil import get_qc
ideal_qc = get_qc('2q-qvm', noisy=False)
noisy_qc = get_qc('2q-qvm', noisy=True)
```

Let’s predict some of the outcomes when we simulate our program on `ideal_qc`, where our process `X(0)` is implemented perfectly, without noise.

First take setting `0: X+_0→(1+0j)*X0`. We prepare the plus eigenstate of \(X\), after which we run our program `X(0)`, which does nothing to this particular state. When we measure the \(X\) observable on the plus eigenstate we should get an expectation of exactly 1.

Now for setting `12: Z+_0→(1+0j)*X0` we prepare the plus eigenstate of \(Z\), a.k.a. the state \(|0\rangle\), after which we perform our bit flip `X(0)`, which sends the state to \(|1\rangle\). When we measure \(X\) on this state we expect our results to be mixed 50/50 between plus and minus outcomes, so our expectation should converge to 0 for a large number of shots.
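
These predictions can be checked with a little linear algebra, independent of the QVM. Below is a quick sketch in plain Python (no pyQuil required; the helper `expectation_of_x` is ours, not the library’s) computing \(\langle\psi|X|\psi\rangle\) for the two states in question:

```python
from math import sqrt

def expectation_of_x(state):
    """<psi|X|psi> for a single-qubit state [a0, a1] with real amplitudes.
    X swaps the basis states, so the expectation reduces to 2 * a0 * a1."""
    a0, a1 = state
    return 2 * a0 * a1

plus = [1 / sqrt(2), 1 / sqrt(2)]   # |+> = H|0>, used by setting 0
one = [0.0, 1.0]                    # |1> = X|0>, the state measured in setting 12

print(expectation_of_x(plus))  # 1, up to float rounding
print(expectation_of_x(one))   # exactly 0
```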

```
from forest.benchmarking.observable_estimation import estimate_observables
results = list(estimate_observables(ideal_qc, tomography_experiment, num_shots=500))
for idx, result in enumerate(results):
    if idx == 0:
        print('\nWe expect the result to be 1.0 +- 0.0')
        print(result, '\n')
    elif idx == 12:
        print('\nWe expect the result to be around 0.0. Try increasing the num_shots to see if it converges.')
        print(result, '\n')
    else:
        print(result)
```

```
We expect the result to be 1.0 +- 0.0
X+_0→(1+0j)*X0: 1.0 +- 0.0
X+_0→(1+0j)*Y0: 0.036 +- 0.04469237071357929
X+_0→(1+0j)*Z0: 0.064 +- 0.044629676225578875
X-_0→(1+0j)*X0: -1.0 +- 0.0
X-_0→(1+0j)*Y0: 0.104 +- 0.04447884890596878
X-_0→(1+0j)*Z0: 0.024 +- 0.04470847794322683
Y+_0→(1+0j)*X0: -0.072 +- 0.044605291165959224
Y+_0→(1+0j)*Y0: -1.0 +- 0.0
Y+_0→(1+0j)*Z0: -0.028 +- 0.04470382533967311
Y-_0→(1+0j)*X0: 0.012 +- 0.04471813949618208
Y-_0→(1+0j)*Y0: 1.0 +- 0.0
Y-_0→(1+0j)*Z0: 0.028 +- 0.04470382533967311
We expect the result to be around 0.0. Try increasing the num_shots to see if it converges.
Z+_0→(1+0j)*X0: 0.056 +- 0.044651181395344956
Z+_0→(1+0j)*Y0: -0.068 +- 0.04461784396404649
Z+_0→(1+0j)*Z0: -1.0 +- 0.0
Z-_0→(1+0j)*X0: 0.024 +- 0.04470847794322683
Z-_0→(1+0j)*Y0: -0.132 +- 0.04433003496502118
Z-_0→(1+0j)*Z0: 1.0 +- 0.0
```

The first number after the setting is the estimated expectation for that observable; the number after `+-` is the standard error.

If we perform the same experiment but simulate with the somewhat noisy QVM, then we no longer expect the run of setting `0: X+_0→(1+0j)*X0` to yield an estimate of `1.0` with certainty.

```
# this time use the noisy_qc
noisy_results = list(estimate_observables(noisy_qc, tomography_experiment, num_shots=500))
for idx, result in enumerate(noisy_results):
    if idx == 0:
        print('\nWe expect the result to be less than 1.0 due to noise.')
        print(result, '\n')
    elif idx == 12:
        print('\nWe expect the result to be around 0.0, but it may be biased due to readout error.')
        print(result, '\n')
    else:
        print(result)
```

```
We expect the result to be less than 1.0 due to noise.
X+_0→(1+0j)*X0: 0.952 +- 0.01368911976717276
X+_0→(1+0j)*Y0: 0.16 +- 0.044145214916228456
X+_0→(1+0j)*Z0: 0.04 +- 0.044685568140060604
X-_0→(1+0j)*X0: -0.812 +- 0.026101953949848274
X-_0→(1+0j)*Y0: 0.032 +- 0.04469845634918503
X-_0→(1+0j)*Z0: 0.02 +- 0.04471241438347967
Y+_0→(1+0j)*X0: 0.12 +- 0.04439819816163715
Y+_0→(1+0j)*Y0: -0.808 +- 0.02634904172830579
Y+_0→(1+0j)*Z0: 0.04 +- 0.04468556814006061
Y-_0→(1+0j)*X0: 0.016 +- 0.04471563484956912
Y-_0→(1+0j)*Y0: 0.924 +- 0.01710111107501498
Y-_0→(1+0j)*Z0: 0.12 +- 0.044398198161637155
We expect the result to be around 0.0, but it may be biased due to readout error.
Z+_0→(1+0j)*X0: 0.1 +- 0.04449719092257398
Z+_0→(1+0j)*Y0: 0.08 +- 0.0445780214904161
Z+_0→(1+0j)*Z0: -0.768 +- 0.028641787653706254
Z-_0→(1+0j)*X0: 0.092 +- 0.044531696576708156
Z-_0→(1+0j)*Y0: 0.02 +- 0.04471241438347967
Z-_0→(1+0j)*Z0: 0.94 +- 0.015257784898208521
```

##### Error mitigation¶

In the last example we saw the estimated expectations deviate from the ideal. In fact, if you set `num_shots` to some large number, say 500,000, then you’ll notice that the results have expectations grouped around three distinct values, roughly `.95`, `-.82`, and `.064`. These values are what one would predict given that the states measured are \(|+\rangle\), \(|-\rangle\), and some equal superposition, respectively, and given that the readout noise model has the default parameters \(\Pr(\rm{measure } \ 0 \ | \ \rm{actually }\ 0) = .975\) and \(\Pr(\rm{measure } \ 1 \ | \ \rm{actually }\ 1) = .911\).
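
These three values can be reproduced with a short back-of-the-envelope calculation. The sketch below is plain Python (the helper name and the parameters `p00`/`p11` are ours, standing in for the two readout probabilities above):

```python
def measured_expectation(true_expectation, p00=0.975, p11=0.911):
    """Expectation actually observed when a state with the given true
    Z-basis expectation is read out through an asymmetric readout channel,
    where p00 = Pr(measure 0 | actually 0) and p11 = Pr(measure 1 | actually 1)."""
    pr0 = (1 + true_expectation) / 2              # ideal probability of outcome 0
    noisy_pr0 = pr0 * p00 + (1 - pr0) * (1 - p11)  # readout errors mix the outcomes
    return 2 * noisy_pr0 - 1

print(measured_expectation(+1))  # eigenstate measured in its own basis: about .95
print(measured_expectation(-1))  # opposite eigenstate: about -.82
print(measured_expectation(0))   # equal superposition: about .064
```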

Note that the parameterization of readout noise assumes that we can only measure in the computational (Z) basis. Indeed, the way we ‘measure X’ or any other non-Z observable is by first performing some pre-measurement rotation and then measuring in the computational basis (we do these rotations so that the plus eigenstate on a single qubit corresponds to 0). Thus, all of our observables are impacted similarly by readout error; some measurements may incur some additional observable-dependent error due to the noisy pre-measurement rotations.

###### Symmetrizing readout¶

The fact that the noise is not symmetric, that is \(\Pr(\rm{measure } \ 0 \ | \ \rm{actually }\ 0) \neq \Pr(\rm{measure } \ 1 \ | \ \rm{actually }\ 1)\), complicates the impact of the noise. For a general state both parameters impact the resulting expectation. Instead of explicitly measuring each such parameter we instead opt to ‘symmetrize’ readout, which has the effect of averaging these parameters into a single ‘symmetrized readout error’. For a single qubit this is achieved by measuring two ‘symmetrized’ programs for each original measurement. The first program is exactly equivalent to the original. The second program is nearly identical, except that we perform a quantum bit flip before measuring the qubit, and post measurement we classically flip the result. We then aggregate these results into one collection of ‘symmetrized readout results’. We can directly calculate how this procedure affects the above calculations:
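
The original worked equations are not reproduced here, but the effect can be sketched numerically in plain Python (the helper names are ours; `p00` and `p11` are the default readout parameters from above). Symmetrizing averages the result of the original program with the classically-flipped result of the bit-flipped program:

```python
def measured_expectation(true_expectation, p00=0.975, p11=0.911):
    # expectation observed through the asymmetric readout channel
    pr0 = (1 + true_expectation) / 2
    return 2 * (pr0 * p00 + (1 - pr0) * (1 - p11)) - 1

def symmetrized_expectation(true_expectation, p00=0.975, p11=0.911):
    """Average the original measurement with the bit-flipped program whose
    outcomes are classically flipped back."""
    original = measured_expectation(true_expectation, p00, p11)
    # flipping the state negates the true expectation; classically flipping
    # the recorded outcomes negates the observed expectation
    flipped = -measured_expectation(-true_expectation, p00, p11)
    return (original + flipped) / 2

print(symmetrized_expectation(+1))  # about .886, i.e. p00 + p11 - 1
print(symmetrized_expectation(-1))  # about -.886
print(symmetrized_expectation(0))   # bias eliminated: 0
```

Under this model all three expectations are scaled by the single factor \(p00 + p11 - 1 = 0.886\), consistent with the calibration estimates of roughly 0.88 to 0.9 printed later in this notebook.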

The last example is perhaps a bit more subtle: the physical state is not changed by the symmetrizing quantum bit flip, but half of the aggregated results are classically flipped, so the readout bias is nonetheless eliminated.

Exhaustively symmetrizing over \(n\) qubits is done similarly by running \(2^n\) different ‘symmetrized’ programs, one for each bitstring, where each corresponding combination of qubits (and their results) is flipped. Again, this has the effect of averaging over asymmetric readout error so that the aggregated results are characterized by a single ‘symmetrized readout error’ parameter. It is this single parameter that we correct for, or ‘calibrate’, below.
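
The bookkeeping can be sketched as follows (plain Python; the names are illustrative, not the library’s internals): enumerate all \(2^n\) flip masks, apply each mask’s bit flips before measurement, and undo them classically afterwards.

```python
from itertools import product

def flip_masks(n):
    """All 2^n bitstrings; mask[i] == 1 means qubit i gets an X before measurement."""
    return list(product([0, 1], repeat=n))

def unflip(measured_bits, mask):
    """Classically undo the pre-measurement flips on a tuple of measured bits."""
    return tuple(bit ^ m for bit, m in zip(measured_bits, mask))

print(flip_masks(2))           # [(0, 0), (0, 1), (1, 0), (1, 1)]
# a bitstring measured in the program run with mask (1, 0):
print(unflip((1, 0), (1, 0)))  # recovers (0, 0)
```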

###### Calibrating estimates of symmetrized observable expectations¶

We know that when running on a QPU we will likely have substantial readout error (which you can estimate here). If we symmetrize the results then the symmetrized readout error will scale all of our expectations in a predictable way. We have included a calibration procedure that can help mitigate the impact of symmetrized readout error by estimating and undoing this scaling.

Specifically, after obtaining symmetrized estimates for expectations of some set of observables during an experiment, you can pass these results on to `calibrate_observable_estimates`. By default, for each observable in the set this will prepare all eigenstates of the observable and estimate the expectations. The `calibration_expectation` recorded is the average magnitude of all of these expectations. Each new `ExperimentResult` stores the original input `expectation` under `raw_expectation` and populates `expectation` with the original expectation scaled by the inverse of the `calibration_expectation`.

The `calibrate_observable_estimates` method itself leverages readout symmetrization. We first create an experiment that prepares one particular positive eigenstate for the observable and measures its expectation. We then symmetrize this program (under default exhaustive symmetrization this essentially prepares all eigenstates) to get back a single estimate of the `calibration_expectation`; this is precisely an estimate of the ‘symmetrized readout error’ parameter, which is what we need to rescale our results to +/- 1.
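
The rescaling step itself is simple arithmetic. A minimal sketch (plain Python; not the library’s implementation) of how a raw symmetrized expectation would be corrected by a calibration expectation:

```python
def calibrate(raw_expectation, calibration_expectation):
    """Undo the uniform scaling introduced by symmetrized readout error."""
    return raw_expectation / calibration_expectation

# with the default noise model the symmetrized scale factor is roughly 0.886,
# so a raw estimate of 0.88 calibrates back to roughly 0.99
print(calibrate(0.88, 0.886))
```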

Let’s get symmetrized results for our experiment above and then calibrate these.

```
from forest.benchmarking.observable_estimation import calibrate_observable_estimates, exhaustive_symmetrization
pre_cal_symm_results = list(estimate_observables(noisy_qc, tomography_experiment, num_shots=500,
                                                symmetrization_method=exhaustive_symmetrization))
post_cal_results = list(calibrate_observable_estimates(noisy_qc, pre_cal_symm_results, num_shots=1000))
for post_cal_res in post_cal_results:
    print(post_cal_res, ' with pre-calibrated expectation ', round(post_cal_res.raw_expectation, 2))
    print('The calibration estimate is ', round(post_cal_res.calibration_expectation, 2))
    print()
```

```
X+_0→(1+0j)*X0: 0.9932279909706546 +- 0.020554498352280456 with pre-calibrated expectation 0.88
The calibration estimate is 0.89
X+_0→(1+0j)*Y0: 0.08463251670378619 +- 0.035125067014716 with pre-calibrated expectation 0.08
The calibration estimate is 0.9
X+_0→(1+0j)*Z0: -0.0502283105022831 +- 0.03606940045580178 with pre-calibrated expectation -0.04
The calibration estimate is 0.88
X-_0→(1+0j)*X0: -1.0135440180586908 +- 0.019679964510975017 with pre-calibrated expectation -0.9
The calibration estimate is 0.89
X-_0→(1+0j)*Y0: 0.0 +- 0.03521467327581714 with pre-calibrated expectation 0.0
The calibration estimate is 0.9
X-_0→(1+0j)*Z0: 0.0091324200913242 +- 0.03609807995469683 with pre-calibrated expectation 0.01
The calibration estimate is 0.88
Y+_0→(1+0j)*X0: -0.0654627539503386 +- 0.03563977179455264 with pre-calibrated expectation -0.06
The calibration estimate is 0.89
Y+_0→(1+0j)*Y0: -0.9710467706013363 +- 0.02025654760268173 with pre-calibrated expectation -0.87
The calibration estimate is 0.9
Y+_0→(1+0j)*Z0: 0.08447488584474885 +- 0.036015104360305916 with pre-calibrated expectation 0.07
The calibration estimate is 0.88
Y-_0→(1+0j)*X0: 0.05191873589164785 +- 0.03565901613379986 with pre-calibrated expectation 0.05
The calibration estimate is 0.89
Y-_0→(1+0j)*Y0: 0.9910913140311804 +- 0.01938346248560857 with pre-calibrated expectation 0.89
The calibration estimate is 0.9
Y-_0→(1+0j)*Z0: -0.0045662100456621 +- 0.036098815026857044 with pre-calibrated expectation -0.0
The calibration estimate is 0.88
Z+_0→(1+0j)*X0: 0.020316027088036117 +- 0.035686630882177495 with pre-calibrated expectation 0.02
The calibration estimate is 0.89
Z+_0→(1+0j)*Y0: -0.017817371937639197 +- 0.03521070663662003 with pre-calibrated expectation -0.02
The calibration estimate is 0.9
Z+_0→(1+0j)*Z0: -0.9817351598173516 +- 0.022032317230781875 with pre-calibrated expectation -0.86
The calibration estimate is 0.88
Z-_0→(1+0j)*X0: -0.024830699774266364 +- 0.03568416614848569 with pre-calibrated expectation -0.02
The calibration estimate is 0.89
Z-_0→(1+0j)*Y0: 0.035634743875278395 +- 0.03519880403696721 with pre-calibrated expectation 0.03
The calibration estimate is 0.9
Z-_0→(1+0j)*Z0: 1.0 +- 0.021324005357311247 with pre-calibrated expectation 0.88
The calibration estimate is 0.88
```

Note that statistical fluctuations may drive the magnitude of calibrated expectations above 1.

##### Estimate compatible observables from the same data¶

We can use fewer runs on a quantum computer to **speed up** estimation of some sets of observables. Consider two different settings on two qubits: `Z+_0 * Z+_1→(1+0j)*Z0Z1` and `Z+_0 * Z+_1→(1+0j)*Z0`. If we think about translating these settings into circuits (assuming they are part of the same `ObservablesExperiment`), then we note that the circuits will be identical even though our observables are different. Indeed, ZZ and ZI commute, so we expect to be able to measure both observables simultaneously. In particular, each of these terms is diagonal in a ‘Tensor Product Basis’, i.e. a basis which is the tensor product of elements of single qubit bases (note we could throw in the term IZ too). In this case the basis is the computational basis.

Another example of observables which are compatible in this way would be \(X0I1\), \(I0Z1\), and \(X0Z1\). Meanwhile, although \(X0Z1\) and \(Z0X1\) commute, they are not simultaneously diagonalized in a *Tensor Product* Basis, so we don’t consider grouping these terms (this might change in the future–feel encouraged to contribute this! :).
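
The compatibility rule can be sketched as a small predicate (plain Python; the dict-of-qubit-to-Pauli representation is illustrative, not the library’s): two Pauli terms share a Tensor Product Basis when, on every qubit, their single-qubit Paulis either agree or at least one is the identity.

```python
def share_tensor_product_basis(term_a, term_b):
    """term_a, term_b: dicts mapping qubit index -> 'X' | 'Y' | 'Z'
    (identity factors are simply absent from the dict)."""
    shared_qubits = set(term_a) & set(term_b)
    return all(term_a[q] == term_b[q] for q in shared_qubits)

# ZZ and ZI are diagonal in the same (computational) basis:
print(share_tensor_product_basis({0: 'Z', 1: 'Z'}, {0: 'Z'}))          # True
# X0I1 and I0Z1 act on disjoint qubits, so they are compatible:
print(share_tensor_product_basis({0: 'X'}, {1: 'Z'}))                  # True
# X0Z1 and Z0X1 commute but clash qubit-by-qubit, so no grouping:
print(share_tensor_product_basis({0: 'X', 1: 'Z'}, {0: 'Z', 1: 'X'}))  # False
```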

We provide a method `group_settings` which will automatically form groups of settings whose observables share a Tensor Product Basis and can be estimated from the same set of shot data. When you pass the resulting experiment with grouped settings into `estimate_observables`, each group of observables is estimated using `num_shots` many results.

We’ll group a 2 qubit tomography experiment to see how many experiment runs we save.

```
from forest.benchmarking.observable_estimation import group_settings
from pyquil.gates import CNOT
two_q_tomo_expt = ObservablesExperiment(list(_pauli_process_tomo_settings(qubits=[0,1])), Program(CNOT(0,1)))
grouped_tomo_expt = group_settings(two_q_tomo_expt)
# print the pre-grouped experiment
print(two_q_tomo_expt)
print()
# print the grouped experiment. There are multiple settings per line
print(grouped_tomo_expt)
```

```
CNOT 0 1
0: X+_0 * X+_1→(1+0j)*X1
1: X+_0 * X+_1→(1+0j)*Y1
2: X+_0 * X+_1→(1+0j)*Z1
3: X+_0 * X+_1→(1+0j)*X0
4: X+_0 * X+_1→(1+0j)*X0X1
5: X+_0 * X+_1→(1+0j)*X0Y1
6: X+_0 * X+_1→(1+0j)*X0Z1
7: X+_0 * X+_1→(1+0j)*Y0
8: X+_0 * X+_1→(1+0j)*Y0X1
9: X+_0 * X+_1→(1+0j)*Y0Y1
... 520 not shown ...
... use e.settings_string() for all ...
530: Z-_0 * Z-_1→(1+0j)*X0Y1
531: Z-_0 * Z-_1→(1+0j)*X0Z1
532: Z-_0 * Z-_1→(1+0j)*Y0
533: Z-_0 * Z-_1→(1+0j)*Y0X1
534: Z-_0 * Z-_1→(1+0j)*Y0Y1
535: Z-_0 * Z-_1→(1+0j)*Y0Z1
536: Z-_0 * Z-_1→(1+0j)*Z0
537: Z-_0 * Z-_1→(1+0j)*Z0X1
538: Z-_0 * Z-_1→(1+0j)*Z0Y1
539: Z-_0 * Z-_1→(1+0j)*Z0Z1
CNOT 0 1
0: X+_0 * X+_1→(1+0j)*X1, X+_0 * X+_1→(1+0j)*X0, X+_0 * X+_1→(1+0j)*X0X1
1: X+_0 * X+_1→(1+0j)*Y1, X+_0 * X+_1→(1+0j)*X0Y1
2: X+_0 * X+_1→(1+0j)*Z1, X+_0 * X+_1→(1+0j)*X0Z1
3: X+_0 * X+_1→(1+0j)*Y0, X+_0 * X+_1→(1+0j)*Y0X1
4: X+_0 * X+_1→(1+0j)*Y0Y1
5: X+_0 * X+_1→(1+0j)*Y0Z1
6: X+_0 * X+_1→(1+0j)*Z0, X+_0 * X+_1→(1+0j)*Z0X1
7: X+_0 * X+_1→(1+0j)*Z0Y1
8: X+_0 * X+_1→(1+0j)*Z0Z1
9: X+_0 * X-_1→(1+0j)*X1, X+_0 * X-_1→(1+0j)*X0, X+_0 * X-_1→(1+0j)*X0X1
... 304 not shown ...
... use e.settings_string() for all ...
314: Z-_0 * Z+_1→(1+0j)*Z0Z1
315: Z-_0 * Z-_1→(1+0j)*X1, Z-_0 * Z-_1→(1+0j)*X0, Z-_0 * Z-_1→(1+0j)*X0X1
316: Z-_0 * Z-_1→(1+0j)*Y1, Z-_0 * Z-_1→(1+0j)*X0Y1
317: Z-_0 * Z-_1→(1+0j)*Z1, Z-_0 * Z-_1→(1+0j)*X0Z1
318: Z-_0 * Z-_1→(1+0j)*Y0, Z-_0 * Z-_1→(1+0j)*Y0X1
319: Z-_0 * Z-_1→(1+0j)*Y0Y1
320: Z-_0 * Z-_1→(1+0j)*Y0Z1
321: Z-_0 * Z-_1→(1+0j)*Z0, Z-_0 * Z-_1→(1+0j)*Z0X1
322: Z-_0 * Z-_1→(1+0j)*Z0Y1
323: Z-_0 * Z-_1→(1+0j)*Z0Z1
```

The number of experimental runs has been reduced from 540 to 324! (`num_shots` measurements will be repeated for each run.) This grouped experiment can be passed to `estimate_observables` as usual.

This potential for grouping motivates the design choice to represent the `_settings` within an `ObservablesExperiment` as a nested `List[List[ExperimentSetting]]`, where the inner lists organize the groups of settings that will be estimated from a shared set of data. When creating an `ObservablesExperiment` we’ve provided the option of passing in a flat `List[ExperimentSetting]`, which will be expanded internally into a `List` of length-1 `List[ExperimentSetting]`s.

Keep in mind that currently (3 July ‘19) the covariance between simultaneously estimated expectations is not recorded, so you may run into trouble if you want the variance of some function of these expectations.

##### Easy Parallelization¶

The ability to group compatible settings for parallel data acquisition on the same run of the QC makes it quite simple to generate a single parallel experiment from a group of `ObservablesExperiment`s which operate on disjoint sets of qubits. The settings across these experiments will automatically be compatible because they operate on different qubits, and the programs can be composed without ambiguity. Even when two settings share a qubit this parallelization can be done, although in this case it may be to less effect. When two programs act on a shared qubit they can no longer be so thoughtlessly composed, since the order of operations on the shared qubit may have a significant impact on the program behaviour.

However, even when the individual experiments act on disjoint sets of qubits, you must be careful not to equate ‘parallel’ with ‘simultaneous’ execution. Physically, the gates specified in a pyQuil `Program` occur as soon as resources are available; meanwhile, measurement happens only after all gates. There is no specification of the exact timing of gates beyond their causal relationships. Therefore, while grouping experiments into parallel operation can be quite beneficial for time savings, **do not** depend on any simultaneous execution of gates on different qubits, and be wary of the fact that measurement happens only after **all** gates have finished. The latter fact may lead to significant time delays for the measurement of a qubit that would be measured promptly in the original program; remember that during this ‘idle time’ the qubit is likely experiencing decoherence, which will likely impact the results.

```
from pyquil.gates import X, CNOT, CZ, Z
# make a couple of different tomography experiments on different qubits
from forest.benchmarking.tomography import generate_state_tomography_experiment
from forest.benchmarking.observable_estimation import merge_disjoint_experiments

disjoint_sets_of_qubits = [(0, 4), (1, 2), (3,)]
programs = [Program(X(0), CNOT(0, 4)), Program(CZ(1, 2)), Program(Z(3))]
expts_to_parallelize = []
for qubits, program in zip(disjoint_sets_of_qubits, programs):
    expt = generate_state_tomography_experiment(program, qubits)
    # group the settings only for fair comparison later (and for generality)
    expts_to_parallelize.append(group_settings(expt))

# get a merged experiment with grouped settings for parallel data acquisition
parallel_expt = merge_disjoint_experiments(expts_to_parallelize)
print(f'Original number of runs: {sum(len(expt) for expt in expts_to_parallelize)}')
print(f'Parallelized number of runs: {len(parallel_expt)}')
```

```
Original number of runs: 21
Parallelized number of runs: 11
```

We also provide a convenient means of re-separating the results via `get_results_by_qubit_groups`. This is demonstrated in the tomography notebooks and used in `randomized_benchmarking.py`.

##### More on `ExperimentSetting`’s `in_state` and `observable`¶

`ExperimentSetting` is the dataclass that specifies a preparation and measurement to be paired with the `program` in an `ObservablesExperiment`.

`in_state` itself has the type `TensorProductState`, a dataclass which also resides in `observable_estimation.py`. You can instantiate a single qubit Pauli eigenstate using methods such as `plusX(qubit)` or `minusY(qubit)`. Tensor products of such states can be built up by multiplying `TensorProductState`s. We also include SIC states, which help reduce the number of runs for process tomography.

```
from forest.benchmarking.observable_estimation import zeros_state, plusX, minusY, TensorProductState
# creating settings
# all zeros, i.e. all plusZ
typical_start_state = zeros_state(qubits = range(6))
print(typical_start_state, '\n')
# X+_2 * Y-_5
xy = plusX(2) * minusY(5)
print(xy, '\n')
# Z-_0 * Y+_3
zy = TensorProductState.from_str('Z-_0 * Y+_3')
print(zy, '\n')
# tensor states together
zyxy = zy * xy
print(zyxy, '\n')
# SIC states. You only need the 4 SIC states on a qubit for process tomography, instead of the 6 Pauli eigenstates.
from forest.benchmarking.observable_estimation import SIC0, SIC3
# SIC0_0 * SIC3_1
sic0_0 = SIC0(0)
sic3_1 = SIC3(1)
sic0sic3 = sic0_0 * sic3_1
print(sic0sic3)
```

```
Z+_0 * Z+_1 * Z+_2 * Z+_3 * Z+_4 * Z+_5
X+_2 * Y-_5
Z-_0 * Y+_3
Z-_0 * Y+_3 * X+_2 * Y-_5
SIC0_0 * SIC3_1
```

```
from pyquil.paulis import PauliTerm, sX, sZ, sY
# creating observables
zy_pauli = sZ(0) * sY(1)
xzzy = .5j * sX(1) * sZ(2) * sZ(4) * sY(5)
print(zy_pauli)
print(xzzy)
print(xzzy[0])
print(xzzy[5])
print(xzzy.coefficient)
```

```
(1+0j)*Z0*Y1
0.5j*X1*Z2*Z4*Y5
I
Y
0.5j
```

```
[12]:
```

```
from forest.benchmarking.observable_estimation import ExperimentSetting
setting = ExperimentSetting(typical_start_state, xzzy)
print(setting)
# the -> separates the start state from the observable
obs_expt = ObservablesExperiment([setting], Program(X(0)))
print('\nIn an experiment with program X(0):\n', obs_expt)
```

```
Z+_0 * Z+_1 * Z+_2 * Z+_3 * Z+_4 * Z+_5→0.5j*X1Z2Z4Y5
In an experiment with program X(0):
X 0
0: Z+_0 * Z+_1 * Z+_2 * Z+_3 * Z+_4 * Z+_5→0.5j*X1Z2Z4Y5
```

##### Breaking `estimate_observables` into parts¶

You may be interested in working directly with `Program`s or shot data instead of an `ObservablesExperiment`. We will step through the decomposition of `estimate_observables` to illustrate the discrete steps that allow you to do this.

###### Construct an `ObservablesExperiment` and turn it into a list of `Program`s¶

```
[13]:
```

```
from forest.benchmarking.observable_estimation import generate_experiment_programs
q = 0
# make ExperimentSettings for tomographing the plus state
expt_settings = [ExperimentSetting(plusX(q), pt) for pt in [sX(q), sY(q), sZ(q)]]
print('ExperimentSettings:\n==================\n',expt_settings,'\n')
# make an ObservablesExperiment.
# We already specified the plus state as initial state, so do an empty (identity) program
obs_expt = ObservablesExperiment(expt_settings, program=Program())
print('ObservablesExperiment:\n======================\n',obs_expt,'\n')
# convert it to a list of programs and a list of qubits for each program
expt_progs, qubits = generate_experiment_programs(obs_expt)
print('Programs and Qubits:\n====================\n')
for prog, qs in zip(expt_progs, qubits):
    print(prog, qs, '\n')
```

```
ExperimentSettings:
==================
[ExperimentSetting[X+_0→(1+0j)*X0], ExperimentSetting[X+_0→(1+0j)*Y0], ExperimentSetting[X+_0→(1+0j)*Z0]]
ObservablesExperiment:
======================
0: X+_0→(1+0j)*X0
1: X+_0→(1+0j)*Y0
2: X+_0→(1+0j)*Z0
Programs and Qubits:
====================
RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0
RZ(-pi/2) 0
RX(-pi/2) 0
[0]
RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0
[0]
RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
[0]
```

###### At this point we can run directly or first symmetrize¶

```
[14]:
```

```
from forest.benchmarking.observable_estimation import _measure_bitstrings
# this is a simple wrapper that adds measure instructions for each qubit in qubits and runs each program.
results = _measure_bitstrings(noisy_qc, expt_progs, qubits, num_shots=5)
for bits in results:
    print(bits)
```

```
[[0]
[0]
[0]
[0]
[0]]
[[0]
[1]
[0]
[1]
[0]]
[[1]
[0]
[1]
[0]
[1]]
```

```
[15]:
```

```
from forest.benchmarking.observable_estimation import exhaustive_symmetrization, consolidate_symmetrization_outputs
symm_progs, symm_qs, flip_arrays, groups = exhaustive_symmetrization(expt_progs, qubits)
print('2 symmetrized programs for each original:\n======================\n')
for prog in symm_progs:
    print(prog)
# now these programs can be run as above
results = _measure_bitstrings(noisy_qc, symm_progs, symm_qs, num_shots=5)
# we now need to consolidate these results using the information in flip_arrays and groups
results = consolidate_symmetrization_outputs(results, flip_arrays, groups)
print('After consolidating we have twice the number of (symmetrized) shots as above:\n======================\n')
# we now have twice the number of symmetrized shots
for bits in results:
    print(bits)
```

```
2 symmetrized programs for each original:
======================
RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0
RZ(-pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0
RZ(-pi/2) 0
RX(-pi/2) 0
RX(pi) 0
RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0
RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0
RX(pi) 0
RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(pi) 0
After consolidating we have twice the number of (symmetrized) shots as above:
======================
[[0]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[0]
[0]]
[[1]
[1]
[0]
[0]
[0]
[0]
[0]
[1]
[1]
[0]]
[[1]
[1]
[0]
[0]
[0]
[1]
[0]
[0]
[1]
[0]]
```

###### Now we can translate our bitarray shots into ExperimentResults with expectation values¶

```
[16]:
```

```
from forest.benchmarking.observable_estimation import shots_to_obs_moments, ExperimentResult
import numpy as np
expt_results_as_obs = []
# we use the settings from the ObservablesExperiment to label our results
for bitarray, meas_qs, settings in zip(results, qubits, obs_expt):
    # settings is a list of settings run simultaneously for a given program;
    # in our case, we just had one setting per program so this isn't strictly necessary
    for setting in settings:
        observable = setting.observable  # get the PauliTerm
        # Obtain statistics from result of experiment
        obs_mean, obs_var = shots_to_obs_moments(bitarray, meas_qs, observable)
        expt_results_as_obs.append(ExperimentResult(setting, obs_mean, std_err=np.sqrt(obs_var),
                                                    total_counts=len(bitarray)))
for res in expt_results_as_obs:
    print(res)
```

```
X+_0→(1+0j)*X0: 1.0 +- 0.0
X+_0→(1+0j)*Y0: 0.2 +- 0.30983866769659335
X+_0→(1+0j)*Z0: 0.2 +- 0.30983866769659335
```
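The conversion from shots to moments can also be sketched by hand. Below is a minimal numpy toy version of what `shots_to_obs_moments` computes for a Pauli observable with a real coefficient (it ignores the qubit bookkeeping and complex-coefficient handling of the real function): each shot's bits are mapped to a \(\pm 1\) eigenvalue via parity, and we return the sample mean and the variance of that mean. Feeding it the 10 symmetrized shots for the \(Y\) observable shown earlier reproduces the `0.2 +- 0.3098...` result.

```python
import numpy as np

def toy_shots_to_moments(bitarray, coeff=1.0):
    """Simplified sketch of shots_to_obs_moments: map each shot (the bits for
    the qubits the observable acts on) to a +/-1 eigenvalue via parity, then
    return the sample mean and the variance of that mean."""
    eigenvalues = (-1.0) ** np.sum(bitarray, axis=1)  # parity -> +/-1
    samples = coeff * eigenvalues
    mean = np.mean(samples)
    var_of_mean = np.var(samples) / len(samples)
    return mean, var_of_mean

# the 10 consolidated symmetrized shots for the Y observable shown above
bits = np.array([[1], [1], [0], [0], [0], [0], [0], [1], [1], [0]])
mean, var = toy_shots_to_moments(bits)
print(mean, np.sqrt(var))  # 0.2 0.30983866769659335
```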

Finally, we note that `calibrate_observable_estimates` can be decomposed in a similar way using calls to `get_calibration_program`. The workflow is essentially the same as above.


#### Data structures¶

`ObservablesExperiment` (settings, …) |
A data structure for experiments involving estimation of the expectation of various observables measured on a core program, possibly with a collection of different preparations. |

`ExperimentSetting` (in_state, observable) |
Input and output settings for an ObservablesExperiment. |

`ExperimentResult` (setting, expectation, …) |
An expectation and standard deviation for the measurement of one experiment setting in an ObservablesExperiment. |

`TensorProductState` ([states]) |
A description of a multi-qubit quantum state that is a tensor product of many _OneQState states. |

`_OneQState` (label, index, qubit) |
A description of a named one-qubit quantum state. |

`to_json` (fn, obj) |
Convenience method to save forest.benchmarking.observable_estimation objects as a JSON file. |

`read_json` (fn) |
Convenience method to read forest.benchmarking.observable_estimation objects from a JSON file. |

#### Functions¶

`estimate_observables` (qc, obs_expt, …) |
Standard wrapper for estimating the observables in an ObservablesExperiment. |

`calibrate_observable_estimates` (qc, …) |
Calibrates the expectation and std_err of the input expt_results and updates those estimates. |

`generate_experiment_programs` (obs_expt, …) |
Generate the programs necessary to estimate the observables in an ObservablesExperiment. |

`group_settings` (obs_expt, method) |
Group settings that are diagonal in a shared tensor product basis (TPB) to minimize number of QPU runs. |

`shots_to_obs_moments` (bitarray, qubits, …) |
Calculate the mean and variance of the given observable based on the bitarray of results. |

`ratio_variance` (a, var_a, …) |
Given random variables ‘A’ and ‘B’, compute the variance on the ratio Y = A/B. |

`merge_disjoint_experiments` (experiments, …) |
Merges the list of experiments into a single experiment that runs the sum of the individual experiment programs and contains all of the combined experiment settings. |

`get_results_by_qubit_groups` (results, …) |
Organizes ExperimentResults by the group of qubits on which the observable of the result acts. |

### Tomography¶

Tomography involves making many projective measurements of a quantum state or process and using them to reconstruct that state or process.

#### Running State Tomography¶

##### Prepare the experiments¶

We wish to perform state tomography on a graph state on qubits 0 and 1 of the `9q-generic-noisy-qvm`.

```
from pyquil import Program, get_qc
from pyquil.gates import *
qubits = [0, 1]
qc = get_qc("9q-generic-noisy-qvm")
state_prep = Program([H(q) for q in qubits])
state_prep.inst(CZ(0,1))
```

The state prep program is thus:

```
H 0
H 1
CZ 0 1
```

We generate the required experiments:

```
from forest.benchmarking.tomography import *
exp_desc = generate_state_tomography_experiment(state_prep, qubits)
```

which in this case are measurements of the following operators:

```
['(1+0j)*X0',
'(1+0j)*Y0',
'(1+0j)*Z0',
'(1+0j)*X1',
'(1+0j)*X1*X0',
'(1+0j)*X1*Y0',
'(1+0j)*X1*Z0',
'(1+0j)*Y1',
'(1+0j)*Y1*X0',
'(1+0j)*Y1*Y0',
'(1+0j)*Y1*Z0',
'(1+0j)*Z1',
'(1+0j)*Z1*X0',
'(1+0j)*Z1*Y0',
'(1+0j)*Z1*Z0']
```

##### Data Acquisition¶

We can then collect data:

```
from forest.benchmarking.observable_estimation import estimate_observables
results = list(estimate_observables(qc, exp_desc))
```

##### Estimate the State¶

Finally, we analyze our data with one of the analysis routines:

```
rho_est = linear_inv_state_estimate(results, qubits)
print(np.real_if_close(np.round(rho_est, 3)))
```

```
[[ 0.263-0.j 0.209-0.014j 0.23 -0.027j -0.203-0.01j ]
[ 0.209+0.014j 0.231+0.j 0.175+0.j -0.168-0.019j]
[ 0.23 +0.027j 0.175-0.j 0.277-0.j -0.173+0.004j]
[-0.203+0.01j -0.168+0.019j -0.173-0.004j 0.229-0.j ]]
```

###### State tomography¶

It is not possible to learn (or estimate) a quantum state from a single experiment due to the no-cloning theorem. In general one needs to re-prepare the state and measure it many times in different bases.

The simplest classical analogy to estimating a quantum state using tomography is estimating the bias of a coin. The bias, denoted by \(p\), is analogous to the quantum state in a way that will be explained shortly.

The coin toss is a random variable \(R\) with outcome heads (denoted by \(1\)) and tails (denoted by \(0\)). Given the bias \(p\), the probability distribution for the random variable \(R\) is

\[\Pr(R = 1) = p, \qquad \Pr(R = 0) = 1 - p.\]

If we have access to \(n_{\rm Tot}\) independent experiments (or measurements) and observe \(n_{\rm H}\) heads then the maximum likelihood estimate for the bias of the coin is

\[p_{\rm Max-Like} = \frac{n_{\rm H}}{n_{\rm Tot}},\]

and the variance of this estimator is \({\rm Var}[p_{\rm Max-Like}] = p(1-p)/n_{\rm Tot}\).
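The coin example is easy to check numerically. Below is a generic numpy sketch (the bias, shot count, and seed are arbitrary illustrative choices): simulate many tosses, form the maximum likelihood estimate \(n_{\rm H}/n_{\rm Tot}\), and note that the standard error shrinks as \(1/\sqrt{n_{\rm Tot}}\).

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.7          # the unknown bias we want to estimate
n_tot = 10_000
flips = rng.random(n_tot) < p_true  # True counts as heads
# maximum likelihood estimate: fraction of heads observed
p_mle = np.sum(flips) / n_tot
# standard error of the estimator, sqrt(p(1-p)/n_Tot)
std_err = np.sqrt(p_mle * (1 - p_mle) / n_tot)
print(f"p_mle = {p_mle:.3f} +- {std_err:.3f}")
```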

The things to learn from this example are:

- it takes many measurements to estimate \(p\); it can’t be done in a single shot because of the classical no-cloning theorem
- \(p_{\rm Max-Like}\) is an estimator; there are other choices of estimators, see e.g. the Beta distribution and Bayes estimators

Let’s take the analogy a little further by defining a “quantum” state

\[\rho = \frac{1}{2}\left[ I + (2p - 1) Z \right],\]

where \(I\) and \(Z\) are the identity and the Pauli-Z matrix. This parameterizes the states along the z-axis of the Bloch sphere, see Figure 1.

Figure 1. A cross section through the Bloch sphere. The states along the z-axis are equivalent to the allowed states (biases) of a coin.

Given the state \(\rho\) the probabilities of measuring zero and one are

\[\Pr(i) = {\rm Tr}[\Pi_i \rho], \qquad \text{so} \quad \Pr(0) = p, \quad \Pr(1) = 1 - p,\]

where the measurement operators \(\Pi_i\) are defined by \(\Pi_i = |i\rangle \langle i|\) for \(i \in \{0, 1 \}\).

The relationship between the \(Z\) expectation value and the coin bias is

\[z = \langle Z \rangle = {\rm Tr}[Z \rho] = 2p - 1.\]

In this analogy the pure states \(|0\rangle\) and \(|1\rangle\) correspond to the coin biases \(p=1\), \(p=0\) and z expectation values \(z=+1\), \(z=-1\) respectively. All other (mixed) states are convex mixtures of these extremal points, e.g. the fair coin, i.e. \(p=1/2\), corresponds to \(z = 0\) and the state

\[\rho = \frac{1}{2} I.\]
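This coin-to-qubit dictionary can be verified directly. Assuming the coin state \(\rho = \frac{1}{2}[I + (2p-1)Z]\) (consistent with \(z = 2p - 1\)), the following numpy check (with an arbitrary bias \(p = 0.25\)) confirms that the Born rule reproduces the coin distribution:

```python
import numpy as np

I = np.eye(2)
Z = np.diag([1.0, -1.0])
Pi = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # |0><0| and |1><1|

p = 0.25  # an arbitrary coin bias
rho = 0.5 * (I + (2 * p - 1) * Z)

# Born rule: Pr(i) = Tr[Pi_i rho] reproduces the coin distribution
pr0 = np.trace(Pi[0] @ rho)
pr1 = np.trace(Pi[1] @ rho)
# and the Z expectation value equals 2p - 1
z = np.trace(Z @ rho)
print(pr0, pr1, z)  # 0.25 0.75 -0.5
```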

The simplest quantum system to tomograph is a single qubit. Like before we parameterize the state with respect to a set of operators

\[\rho = \frac{1}{2}\left( I + x X + y Y + z Z \right),\]

where \(x = \langle X \rangle\), \(y = \langle Y \rangle\), and \(z = \langle Z \rangle\). In the language of our classical coin we have three parameters we need to estimate that are constrained in the following way

\[x^2 + y^2 + z^2 \le 1.\]

The physics of our system means that our default measurement gives us the Z basis statistics. We already constructed an estimator to go from the coin flip statistics to the Z expectation: \(2p -1\).

Now we need to measure the statistics of the operators X and Y. Essentially this means we must rotate our state after we prepare it but before it is measured (or equivalently rotate our measurement basis). If we rotate the state as \(\rho\mapsto U\rho U^\dagger\) and then do our usual Z-basis measurement, this is equivalent to rotating the measured observable as \(Z \mapsto U^\dagger Z U\) and keeping our state \(\rho\) unchanged. This is the distinction between the Heisenberg and Schrödinger pictures. The Heisenberg picture point of view then allows us to see that if we apply a rotation such as \(R_y(\alpha) = \exp(-i \alpha Y /2)\) for \(\alpha = -\pi/2\) then this rotates the observable as \(R_y(\pi/2)ZR_y(-\pi/2)=\cos(\pi/2) Z + \sin(\pi/2) X = X\). Similarly, we could rotate by \(U=R_x(\pi/2)\) to measure the Y observable.
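The two rotation identities quoted above can be checked with a few lines of numpy, using the standard definitions \(R_x(\theta) = \exp(-i\theta X/2)\) and \(R_y(\theta) = \exp(-i\theta Y/2)\):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def rx(theta):
    # R_x(theta) = cos(theta/2) I - i sin(theta/2) X
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

def ry(theta):
    # R_y(theta) = cos(theta/2) I - i sin(theta/2) Y
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * Y

# rotating the state by U = R_y(-pi/2) and measuring Z is the same as
# measuring U^dag Z U = R_y(pi/2) Z R_y(-pi/2) = X on the unrotated state
assert np.allclose(ry(np.pi / 2) @ Z @ ry(-np.pi / 2), X)
# likewise U = R_x(pi/2) turns the Z measurement into a Y measurement
assert np.allclose(rx(-np.pi / 2) @ Z @ rx(np.pi / 2), Y)
print("rotations map Z -> X and Z -> Y as claimed")
```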

In this section we closely follow the reference [PBT].

When thinking about tomography it is useful to introduce the notation of “super kets” \(|O\rangle \rangle = {\rm vec}(O)\) for an operator \(O\). The dual vector is the corresponding “super bra” \(\langle\langle O|\) and represents \(O^\dagger\). This vector space is equipped with the Hilbert-Schmidt inner product \(\langle\langle A | B \rangle\rangle = {\rm Tr}[A^\dagger B]\). For more information see vec and unvec in superoperator_representations.md and the superoperator_tools ipython notebook.

A quantum state matrix on \(n\) qubits, a \(D =2^n\) dimensional Hilbert space, can be represented by

\[\rho = \sum_\alpha x_\alpha B_\alpha,\]

where the coefficients \(x_\alpha\) are defined by \(x_\alpha = \langle\langle B_\alpha | \rho \rangle\rangle\) for a basis of Hermitian operators \(\{ B_\alpha \}\) that is orthonormal under the Hilbert-Schmidt inner product \(\langle\langle B_\alpha | B_\beta\rangle\rangle =\delta_{\alpha,\beta}\).

For many qubits, tensor products of Pauli matrices are the natural Hermitian basis. For two qubits define \(B_5 = X \otimes X\) and \(B_6 = X \otimes Y\) then \(\langle\langle B_5 | B_6\rangle\rangle =0\). It is typical to choose \(B_0 = I / \sqrt {D}\) to be the only traceful element, where \(I\) is the identity on \(n\) qubits.
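A numpy sketch makes the orthonormality concrete: normalizing each two-qubit Pauli tensor product by \(1/\sqrt{D}\) (with \(D = 4\)) gives 16 basis elements whose Gram matrix under the Hilbert-Schmidt inner product is the identity.

```python
import numpy as np
from itertools import product

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
paulis = {'I': I, 'X': X, 'Y': Y, 'Z': Z}

# normalized two-qubit Pauli basis: B_alpha = P1 (x) P2 / sqrt(D), D = 4
D = 4
basis = [np.kron(paulis[a], paulis[b]) / np.sqrt(D)
         for a, b in product('IXYZ', repeat=2)]
# Gram matrix of Hilbert-Schmidt inner products <<B_a|B_b>> = Tr[B_a^dag B_b]
gram = np.array([[np.trace(A.conj().T @ B) for B in basis] for A in basis])
assert np.allclose(gram, np.eye(16))
print("16 basis elements, Gram matrix = identity")
```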

State tomography involves estimating or measuring the expectation values of all the operators \(\{ B_\alpha \}\). If you can reconstruct all the operators \(\{ B_\alpha \}\) then your measurement is said to be **tomographically complete**. To measure these operators we need to do rotations, like in the single qubit case, on many qubits. This is depicted in Figure 2.

Figure 2. The upper half of this diagram shows a simple 2-qubit quantum program: both qubits are initialized in the ∣0⟩ state, transformed to some other state via a process V, and finally measured in the natural qubit basis.

The operators that need to be estimated are programmatically given by `itertools.product(['I', 'X', 'Y', 'Z'], repeat=n_qubits)`. From these measurements, we can reconstruct a density matrix \(\rho\) on `n_qubits`.
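The reconstruction itself is just the basis expansion from above, \(\rho = \frac{1}{D}\sum_P \langle P \rangle P\) over all tensor-product Paulis. This is a bare-math numpy sketch of that linear inversion (not the library's `linear_inv_state_estimate`), using exact expectation values for the two-qubit graph state \(\frac{1}{2}(|00\rangle + |01\rangle + |10\rangle - |11\rangle)\) prepared later in this section:

```python
import numpy as np
from itertools import product

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
paulis = {'I': I, 'X': X, 'Y': Y, 'Z': Z}

D = 4  # two qubits
# target: the two-qubit graph state (1/2)(|00> + |01> + |10> - |11>)
psi = 0.5 * np.array([1, 1, 1, -1], dtype=complex)
rho_true = np.outer(psi, psi.conj())

# linear inversion: rho = (1/D) * sum_P <P> P over all tensor-product Paulis
rho_est = np.zeros((D, D), dtype=complex)
for a, b in product('IXYZ', repeat=2):
    P = np.kron(paulis[a], paulis[b])
    expectation = np.trace(P @ rho_true).real  # noiseless <P>
    rho_est += expectation * P / D
assert np.allclose(rho_est, rho_true)
print("linear inversion recovers rho exactly from noiseless expectations")
```

With finitely many noisy shots the recovered matrix is only an estimate, which is why the estimators discussed below (projection, MLE) exist.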

Most research in quantum state tomography has to do with finding estimators with desirable properties, e.g. minimax tomography, although experiment design is also considered, e.g. adaptive quantum state tomography.

**More information**

See the following references:

- *Initialization and characterization of open quantum systems*.
- *Introduction to Quantum Gate Set Tomography*.
- [PBT] *Practical Bayesian Tomography*.

###### State tomography in `forest.benchmarking`¶

The basic workflow is:

- Prepare a state by specifying a pyQuil program.
- Construct a list of observables that are needed to estimate the state; we collect these into an object called an `ObservablesExperiment`.
- Acquire the data by running the program on a QVM or QPU.
- Apply an estimator to the data to obtain an estimate of the state.
- Compare the estimated state to the true state by a distance measure or visualization.

Below we break these steps down into all their ghastly glory.

###### Construct a state preparation `Program`¶

We’ll construct a two-qubit graph state by Hadamarding all qubits and then applying a controlled-Z operation across the edges of our graph. In the two-qubit case, there’s only one edge. The vector we end up preparing is

\[|\Psi\rangle = \frac{1}{2}\left(|00\rangle + |01\rangle + |10\rangle - |11\rangle\right),\]

which corresponds to the state matrix

\[\rho = |\Psi\rangle\langle\Psi|.\]

```
[1]:
```

```
import numpy as np
from pyquil import Program
from pyquil.gates import *
```

```
[2]:
```

```
# numerical representation of the true state
Psi = (1/2) * np.array([1, 1, 1, -1])
rho_true = np.outer(Psi, Psi.T.conj())
rho_true
```

```
[2]:
```

```
array([[ 0.25, 0.25, 0.25, -0.25],
[ 0.25, 0.25, 0.25, -0.25],
[ 0.25, 0.25, 0.25, -0.25],
[-0.25, -0.25, -0.25, 0.25]])
```

```
[3]:
```

```
# construct the state preparation program
qubits = [0, 1]
state_prep_prog = Program()
for qubit in qubits:
    state_prep_prog += H(qubit)
state_prep_prog += CZ(qubits[0], qubits[1])
print(state_prep_prog)
```

```
H 0
H 1
CZ 0 1
```

###### Construct an `ObservablesExperiment` for state tomography¶

We use the helper function `generate_state_tomography_experiment` to construct a tomographically complete set of measurements.

We can print this out to see the 15 observables or operator measurements we will perform. Note that we could have included an additional observable `I0I1`, but since this trivially gives an expectation of 1 we instead omit this observable in experiment generation and include its contribution by hand in the estimation methods. Be mindful of this if generating your own settings.

```
[4]:
```

```
# import the generate_state_tomography_experiment function
from forest.benchmarking.tomography import generate_state_tomography_experiment
experiment = generate_state_tomography_experiment(program=state_prep_prog, qubits=qubits)
print(experiment)
```

```
H 0; H 1; CZ 0 1
0: Z+_0 * Z+_1→(1+0j)*X1
1: Z+_0 * Z+_1→(1+0j)*Y1
2: Z+_0 * Z+_1→(1+0j)*Z1
3: Z+_0 * Z+_1→(1+0j)*X0
4: Z+_0 * Z+_1→(1+0j)*X0X1
5: Z+_0 * Z+_1→(1+0j)*X0Y1
6: Z+_0 * Z+_1→(1+0j)*X0Z1
7: Z+_0 * Z+_1→(1+0j)*Y0
8: Z+_0 * Z+_1→(1+0j)*Y0X1
9: Z+_0 * Z+_1→(1+0j)*Y0Y1
10: Z+_0 * Z+_1→(1+0j)*Y0Z1
11: Z+_0 * Z+_1→(1+0j)*Z0
12: Z+_0 * Z+_1→(1+0j)*Z0X1
13: Z+_0 * Z+_1→(1+0j)*Z0Y1
14: Z+_0 * Z+_1→(1+0j)*Z0Z1
```

```
[5]:
```

```
# lets peek into the object
print('The object "experiment" is a:')
print(type(experiment),'\n')
print('It has a program attribute:')
print(experiment.program)
print('It also has a list of observables that need to be estimated:')
print(experiment.settings_string())
```

```
The object "experiment" is a:
<class 'forest.benchmarking.observable_estimation.ObservablesExperiment'>
It has a program attribute:
H 0
H 1
CZ 0 1
It also has a list of observables that need to be estimated:
0: Z+_0 * Z+_1→(1+0j)*X1
1: Z+_0 * Z+_1→(1+0j)*Y1
2: Z+_0 * Z+_1→(1+0j)*Z1
3: Z+_0 * Z+_1→(1+0j)*X0
4: Z+_0 * Z+_1→(1+0j)*X0X1
5: Z+_0 * Z+_1→(1+0j)*X0Y1
6: Z+_0 * Z+_1→(1+0j)*X0Z1
7: Z+_0 * Z+_1→(1+0j)*Y0
8: Z+_0 * Z+_1→(1+0j)*Y0X1
9: Z+_0 * Z+_1→(1+0j)*Y0Y1
10: Z+_0 * Z+_1→(1+0j)*Y0Z1
11: Z+_0 * Z+_1→(1+0j)*Z0
12: Z+_0 * Z+_1→(1+0j)*Z0X1
13: Z+_0 * Z+_1→(1+0j)*Z0Y1
14: Z+_0 * Z+_1→(1+0j)*Z0Z1
```

We can simultaneously estimate some of these observables, which saves on run time.

```
[6]:
```

```
from forest.benchmarking.observable_estimation import group_settings
print(group_settings(experiment))
```

```
H 0; H 1; CZ 0 1
0: Z+_0 * Z+_1→(1+0j)*X1, Z+_0 * Z+_1→(1+0j)*X0, Z+_0 * Z+_1→(1+0j)*X0X1
1: Z+_0 * Z+_1→(1+0j)*Y1, Z+_0 * Z+_1→(1+0j)*X0Y1
2: Z+_0 * Z+_1→(1+0j)*Z1, Z+_0 * Z+_1→(1+0j)*X0Z1
3: Z+_0 * Z+_1→(1+0j)*Y0, Z+_0 * Z+_1→(1+0j)*Y0X1
4: Z+_0 * Z+_1→(1+0j)*Y0Y1
5: Z+_0 * Z+_1→(1+0j)*Y0Z1
6: Z+_0 * Z+_1→(1+0j)*Z0, Z+_0 * Z+_1→(1+0j)*Z0X1
7: Z+_0 * Z+_1→(1+0j)*Z0Y1
8: Z+_0 * Z+_1→(1+0j)*Z0Z1
```

PyQuil will run the tomography programs. We will use the QVM, but at this point you could use a QPU instead.

```
[7]:
```

```
from pyquil import get_qc
qc = get_qc('2q-qvm')
```

The next step is to overwrite full `quilc` compilation with a much simpler version that *only* substitutes Rigetti-native gates. We do this because we don’t want to accidentally compile away our tomography circuit or map it to different qubits.

```
[8]:
```

```
from forest.benchmarking.compilation import basic_compile
qc.compiler.quil_to_native_quil = basic_compile
```

Now get the data!

```
[9]:
```

```
from forest.benchmarking.observable_estimation import estimate_observables
results = list(estimate_observables(qc, experiment))
print('ExperimentResult[(input operators)→(output operator): "mean" +- "standard error"]')
results
```

```
ExperimentResult[(input operators)→(output operator): "mean" +- "standard error"]
```

```
[9]:
```

```
[ExperimentResult[Z+_0 * Z+_1→(1+0j)*X1: -0.02 +- 0.04471241438347967],
ExperimentResult[Z+_0 * Z+_1→(1+0j)*Y1: 0.032 +- 0.04469845634918503],
ExperimentResult[Z+_0 * Z+_1→(1+0j)*Z1: 0.068 +- 0.04461784396404649],
ExperimentResult[Z+_0 * Z+_1→(1+0j)*X0: 0.012 +- 0.04471813949618208],
ExperimentResult[Z+_0 * Z+_1→(1+0j)*X0X1: -0.012 +- 0.04471813949618208],
ExperimentResult[Z+_0 * Z+_1→(1+0j)*X0Y1: -0.064 +- 0.044629676225578875],
ExperimentResult[Z+_0 * Z+_1→(1+0j)*X0Z1: 1.0 +- 0.0],
ExperimentResult[Z+_0 * Z+_1→(1+0j)*Y0: 0.088 +- 0.04454786190155483],
ExperimentResult[Z+_0 * Z+_1→(1+0j)*Y0X1: -0.028 +- 0.04470382533967311],
ExperimentResult[Z+_0 * Z+_1→(1+0j)*Y0Y1: 1.0 +- 0.0],
ExperimentResult[Z+_0 * Z+_1→(1+0j)*Y0Z1: 0.012 +- 0.04471813949618208],
ExperimentResult[Z+_0 * Z+_1→(1+0j)*Z0: 0.0 +- 0.044721359549995794],
ExperimentResult[Z+_0 * Z+_1→(1+0j)*Z0X1: 1.0 +- 0.0],
ExperimentResult[Z+_0 * Z+_1→(1+0j)*Z0Y1: -0.108 +- 0.044459779576601605],
ExperimentResult[Z+_0 * Z+_1→(1+0j)*Z0Z1: 0.096 +- 0.044514806525469706]]
```

```
[10]:
```

```
from forest.benchmarking.tomography import linear_inv_state_estimate
rho_est_linv = linear_inv_state_estimate(results, qubits=qubits)
print(np.round(rho_est_linv, 3))
```

```
[[ 0.291-0.j 0.245+0.019j 0.253-0.025j -0.253+0.023j]
[ 0.245-0.019j 0.209-0.j 0.247-0.009j -0.247-0.019j]
[ 0.253+0.025j 0.247+0.009j 0.243+0.j -0.255-0.035j]
[-0.253-0.023j -0.247+0.019j -0.255+0.035j 0.257+0.j ]]
```

```
[11]:
```

```
from forest.benchmarking.operator_tools.project_state_matrix import project_state_matrix_to_physical
rho_phys = project_state_matrix_to_physical(rho_est_linv)
print(np.round(rho_phys, 3))
```

```
[[ 0.277+0.j 0.238+0.001j 0.248-0.024j -0.252+0.007j]
[ 0.238-0.001j 0.222+0.j 0.232-0.016j -0.232-0.009j]
[ 0.248+0.024j 0.232+0.016j 0.245+0.j -0.247-0.027j]
[-0.252-0.007j -0.232+0.009j -0.247+0.027j 0.256+0.j ]]
```

```
[12]:
```

```
from forest.benchmarking.tomography import iterative_mle_state_estimate
rho_mle = iterative_mle_state_estimate(results=results, qubits=qubits)
print(np.around(rho_mle, 3))
```

```
[[ 0.264+0.j 0.248-0.003j 0.256-0.017j -0.259+0.j ]
[ 0.248+0.003j 0.233-0.j 0.241-0.013j -0.243-0.003j]
[ 0.256+0.017j 0.241+0.013j 0.249+0.j -0.251-0.017j]
[-0.259-0.j -0.243+0.003j -0.251+0.017j 0.254-0.j ]]
```

```
[13]:
```

```
rho_mle_maxent = iterative_mle_state_estimate(results=results, qubits=qubits, epsilon=0.1, entropy_penalty=0.05)
print(np.around(rho_mle_maxent, 3))
```

```
[[ 0.267+0.j 0.202+0.002j 0.208-0.015j -0.21 +0.005j]
[ 0.202-0.002j 0.232+0.j 0.198-0.009j -0.199-0.006j]
[ 0.208+0.015j 0.198+0.009j 0.248-0.j -0.206-0.017j]
[-0.21 -0.005j -0.199+0.006j -0.206+0.017j 0.254-0.j ]]
```

```
[14]:
```

```
rho_mle_hedge = iterative_mle_state_estimate(results=results, qubits=qubits, epsilon=.001, beta=.61)
print(np.around(rho_mle_hedge, 3))
```

```
[[ 0.264+0.j 0.247-0.003j 0.255-0.017j -0.258+0.j ]
[ 0.247+0.003j 0.233-0.j 0.24 -0.013j -0.242-0.003j]
[ 0.255+0.017j 0.24 +0.013j 0.249-0.j -0.25 -0.017j]
[-0.258-0.j -0.242+0.003j -0.25 +0.017j 0.254+0.j ]]
```

```
[15]:
```

```
estimates = {
'True State': rho_true,
'Linear Inv': rho_est_linv,
'ProjLinInv': rho_phys,
'plain MLE': rho_mle,
'MLE MaxEnt': rho_mle_maxent,
'MLE Hedge': rho_mle_hedge}
```

```
[16]:
```

```
from forest.benchmarking.distance_measures import fidelity, trace_distance
for key, rho_e in estimates.items():
    fid = np.round(fidelity(rho_true, rho_e), 3)
    print(f"Fidelity(True State, {key}) = {fid}")
```

```
Fidelity(True State, True State) = 1.0
Fidelity(True State, Linear Inv) = 1.0
Fidelity(True State, ProjLinInv) = 0.974
Fidelity(True State, plain MLE) = 0.999
Fidelity(True State, MLE MaxEnt) = 0.861
Fidelity(True State, MLE Hedge) = 0.997
```

```
[17]:
```

```
for key, rho_e in estimates.items():
    dist = np.round(trace_distance(rho_true, rho_e), 3)
    print(f"Tr_dist(True State, {key}) = {dist}")
```

```
Tr_dist(True State, True State) = 0.0
Tr_dist(True State, Linear Inv) = 0.055
Tr_dist(True State, ProjLinInv) = 0.042
Tr_dist(True State, plain MLE) = 0.025
Tr_dist(True State, MLE MaxEnt) = 0.086
Tr_dist(True State, MLE Hedge) = 0.025
```

Notice how the maximum-entropy estimate has a lower purity.

```
[18]:
```

```
from forest.benchmarking.distance_measures import purity
```

```
[19]:
```

```
for key, rho_e in estimates.items():
    p = np.round(purity(rho_e), 3)
    print(f"{key} estimates a purity of {p}")
```

```
True State estimates a purity of 1.0
Linear Inv estimates a purity of 1.01
ProjLinInv estimates a purity of 0.954
plain MLE estimates a purity of 1.0
MLE MaxEnt estimates a purity of 0.75
MLE Hedge estimates a purity of 0.996
```

There are many different ways to visualize states and processes in forest.benchmarking, see state_and_process_plots.ipynb for more options.

```
[20]:
```

```
import matplotlib.pyplot as plt
from forest.benchmarking.plotting.hinton import hinton
fig, (ax1, ax2) = plt.subplots(1, 2)
hinton(rho_true, ax=ax1)
hinton(rho_mle_maxent, ax=ax2)
ax1.set_title('True')
ax2.set_title('Estimated via MLE MaxEnt')
fig.tight_layout()
```

Above we can’t really see any difference, so let’s try a more informative plot.

```
[21]:
```

```
from forest.benchmarking.utils import n_qubit_pauli_basis
from forest.benchmarking.operator_tools.superoperator_transformations import vec, computational2pauli_basis_matrix
from forest.benchmarking.plotting.state_process import plot_pauli_rep_of_state, plot_pauli_bar_rep_of_state
# convert to pauli representation
n_qubits = 2
pl_basis = n_qubit_pauli_basis(n_qubits)
c2p = computational2pauli_basis_matrix(2*n_qubits)
rho_true_pauli = np.real(c2p @ vec(rho_true))
rho_mle_maxent_pauli = np.real(c2p @ vec(rho_mle_maxent))
fig1, (ax3, ax4) = plt.subplots(1, 2, figsize=(10,5))
plot_pauli_bar_rep_of_state(rho_true_pauli.flatten(), ax=ax3, labels=pl_basis.labels, title='True State')
plot_pauli_bar_rep_of_state(rho_mle_maxent_pauli.flatten(), ax=ax4, labels=pl_basis.labels, title='Estimated via MLE MaxEnt')
fig1.tight_layout()
print('Now we see a clear difference:')
```

```
Now we see a clear difference:
```

The `ObservablesExperiment` framework allows for easy parallelization of experiments that operate on disjoint sets of qubits. Below we demonstrate the simple example of tomographing two separate states, respectively prepared by `Program(X(0))` and `Program(H(1))`. To run each experiment in serial would require \(3 + 3 = 6\) experimental runs, but when we run a ‘parallel’ experiment we need only \(3\) runs.

Note that the parallel experiment is not the same as doing tomography on the two-qubit state `Program(X(0), H(1))`, because in the latter case we need to do more data acquisition runs on the qc, and we get more information back. The `ExperimentSetting`s for that experiment are a superset of the parallel settings. We also cannot directly compare a parallel experiment with two serial experiments, because in a parallel experiment ‘cross-talk’ and other multi-qubit effects can impact the overall state; that is, the physics of ‘parallel’ experiments cannot in general be neatly factored into two serial experiments.

See the linked notebook for more explanation and words of caution.

```
[22]:
```

```
from forest.benchmarking.observable_estimation import ObservablesExperiment, merge_disjoint_experiments
disjoint_sets_of_qubits = [(0,),(1,)]
programs = [Program(X(0)), Program(H(1))]
expts_to_parallelize = []
for q, program in zip(disjoint_sets_of_qubits, programs):
    expt = generate_state_tomography_experiment(program, q)
    expts_to_parallelize.append(expt)
# get a merged experiment with grouped settings for parallel data acquisition
parallel_expt = merge_disjoint_experiments(expts_to_parallelize)
print(f'Original number of runs: {sum(len(expt) for expt in expts_to_parallelize)}')
print(f'Parallelized number of runs: {len(parallel_expt)}')
print(parallel_expt)
```

```
Original number of runs: 6
Parallelized number of runs: 3
X 0; H 1
0: Z+_0→(1+0j)*X0, Z+_1→(1+0j)*X1
1: Z+_0→(1+0j)*Y0, Z+_1→(1+0j)*Y1
2: Z+_0→(1+0j)*Z0, Z+_1→(1+0j)*Z1
```

Collect the data. Separate the results by qubit to get back estimates of each state.

```
[23]:
```

```
from forest.benchmarking.observable_estimation import get_results_by_qubit_groups
parallel_results = estimate_observables(qc, parallel_expt)
state_estimates = []
individual_results = get_results_by_qubit_groups(parallel_results, disjoint_sets_of_qubits)
for q in disjoint_sets_of_qubits:
    estimate = iterative_mle_state_estimate(individual_results[q], q, epsilon=.001, beta=.61)
    state_estimates.append(estimate)
pl_basis = n_qubit_pauli_basis(n=1)
fig, axes = plt.subplots(1, len(state_estimates), figsize=(12,5))
for idx, est in enumerate(state_estimates):
    hinton(est, ax=axes[idx])
plt.tight_layout()
```

```
[24]:
```

```
import forest.benchmarking.distance_measures as dm
from forest.benchmarking.tomography import estimate_variance
```

```
[25]:
```

```
from functools import partial
fast_tomo_est = partial(iterative_mle_state_estimate, epsilon=.0005, beta=.5, tol=1e-3)
```

**Purity**

```
[26]:
```

```
mle_est = estimate_variance(results, qubits, fast_tomo_est, dm.purity,
n_resamples=40, project_to_physical=True)
lin_inv_est = estimate_variance(results, qubits, linear_inv_state_estimate, dm.purity,
n_resamples=40, project_to_physical=True)
print("(mean, standard error)")
print(mle_est)
print(lin_inv_est)
```

```
(mean, standard error)
(0.9922772132153728, 3.958627603079607e-06)
(0.9435900456057654, 0.00017697565779827685)
```

**Fidelity**

```
[27]:
```

```
mle_est = estimate_variance(results, qubits, fast_tomo_est, dm.fidelity,
target_state=rho_true, n_resamples=40, project_to_physical=True)
lin_inv_est = estimate_variance(results, qubits, linear_inv_state_estimate, dm.fidelity,
target_state=rho_true, n_resamples=40, project_to_physical=True)
print("(mean, standard error)")
print(mle_est)
print(lin_inv_est)
```

```
(mean, standard error)
(0.9936759136729794, 1.6328600346275622e-06)
(0.968924323915877, 7.241582380286223e-05)
```

We can leverage the `PyQVM` and its `ReferenceDensitySimulator` to perform tomography on known mixed states.

```
[28]:
```

```
from pyquil.pyqvm import PyQVM
from pyquil.reference_simulator import ReferenceDensitySimulator
mixed_qvm = get_qc('1q-pyqvm')
# replace the simulator with a ReferenceDensitySimulator for mixed states
mixed_qvm.qam.wf_simulator = ReferenceDensitySimulator(n_qubits=1, rs=mixed_qvm.qam.rs)
```

```
[29]:
```

```
# we are going to supply the state ourselves, so we don't need a prep program
# we only need to indicate it is a state on qubit 0
qubits = [0]
experiment = generate_state_tomography_experiment(program=Program(), qubits=qubits)
print(experiment)
```

```
0: Z+_0→(1+0j)*X0
1: Z+_0→(1+0j)*Y0
2: Z+_0→(1+0j)*Z0
```

```
[30]:
```

```
# NBVAL_SKIP
# this cell is slow, so we skip it during testing
import forest.benchmarking.operator_tools.random_operators as rand_ops
from forest.benchmarking.distance_measures import infidelity
infidn =[]
trdn =[]
n_shots_max = [400,1000,4000,16000,64000]
number_of_states = 30
for nshots in n_shots_max:
    infid = []
    trd = []
    for _ in range(0, number_of_states):
        # set the initial state of the simulator
        rho_true = rand_ops.bures_measure_state_matrix(2)
        mixed_qvm.qam.wf_simulator.set_initial_state(rho_true).reset()
        # gather data
        resultsy = list(estimate_observables(qc=mixed_qvm, obs_expt=experiment, num_shots=nshots))
        # estimate
        rho_est = iterative_mle_state_estimate(results=resultsy, qubits=qubits, maxiter=100_000)
        infid.append(infidelity(rho_true, rho_est))
        trd.append(trace_distance(rho_true, rho_est))
    infidn.append({'mean': np.mean(np.real(infid)), 'std': np.std(np.real_if_close(infid))})
    trdn.append({'mean': np.mean(np.real(trd)), 'std': np.std(np.real_if_close(trd))})
```

```
[31]:
```

```
# NBVAL_SKIP
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame(infidn)
dt = pd.DataFrame(trdn)
plt.scatter(n_shots_max,df['mean'],label='1-Fidelity')
plt.scatter(n_shots_max,dt['mean'],label='Trace Dist')
plt.plot(n_shots_max,1/np.sqrt(n_shots_max),label='1/sqrt(N)')
plt.yscale('log')
plt.xscale('log')
plt.ylim([0.0000051,0.1])
plt.ylabel('Error')
plt.xlabel('Number of Shots (N)')
plt.title('Avg. error in estimating one Qubit states drawn from Bures measure')
plt.legend()
plt.show()
```


###### Quantum process tomography¶

In quantum computing, process tomography is mostly used to experimentally measure the quality of gates, that is, how close the experimental implementation of a gate is to the ideal gate.

On a single bit there are two processes (or gates or operations) one can perform, `identity` and `not`, which we will denote as \(I\) and \(X\). The `identity` operation behaves as \(I 0 = 0\) and \(I 1 = 1\) while `not` behaves as \(X 0 = 1\) and \(X 1 = 0\). To do classical process tomography we simply need to input a \(0\) and a \(1\) into the circuit implementing a particular operation and compare the output to what we ideally expect.

The operation of a faulty classical gate can be captured by the probability of all output strings \(j\) given all possible input strings \(i\), i.e. \(\Pr({\rm output\ } j| {\rm input\ }i)\). It is convenient to collect these probabilities into a matrix, called a confusion matrix. For a gate \(G\) on a single bit, with \(i,j \in \{0, 1 \}\), it is

\[\begin{pmatrix} \Pr(0|0) & \Pr(1|0) \\ \Pr(0|1) & \Pr(1|1) \end{pmatrix}.\]

The ideal confusion matrix for `identity` is \(\Pr( j | i ) = \delta_{i,j}\) while for `not` it is \(\Pr( j | i ) = 1 -\delta_{i,j}\).
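The ideal and faulty confusion matrices above are easy to check numerically. This is a quick NumPy sketch (not part of forest-benchmarking); the flip probability `p_flip` is a made-up illustrative value:

```python
import numpy as np

# Ideal confusion matrices Pr(output j | input i) for the two classical gates.
# Rows index the input bit i, columns index the output bit j.
conf_identity = np.array([[1.0, 0.0],
                          [0.0, 1.0]])   # Pr(j|i) = delta_{ij}
conf_not = np.array([[0.0, 1.0],
                     [1.0, 0.0]])        # Pr(j|i) = 1 - delta_{ij}

# A hypothetical faulty NOT gate that flips the bit only 95% of the time:
p_flip = 0.95
conf_faulty_not = np.array([[1 - p_flip, p_flip],
                            [p_flip, 1 - p_flip]])

# Each row is a probability distribution over outputs, so rows must sum to 1.
assert np.allclose(conf_faulty_not.sum(axis=1), 1.0)
```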

To motivate the additional complexity of quantum process tomography over classical process tomography consider the following.

We would like to distinguish the following quantum processes

and

To distinguish these processes we estimate the confusion matrices in the standard basis (the Z basis). After many trials the probabilities obey

where \(\Pi_j= |j\rangle \langle j|\) is a measurement operator with \(j\in\{ 0,1 \}\), \(\rho_i=|i\rangle \langle i|\) is the input state with \(i\in\{ 0,1 \}\), and \(G\) is the quantum process, i.e. \(H\) or \(R_Y\).

Using the expression for \(\Pr(j|G, \rho_i)\) we can construct the confusion matrices for each process. Unfortunately the two confusion matrices are identical

However, if we input the states \(\rho_+=|+\rangle\langle +|\) and \(\rho_-=|-\rangle\langle -|\) the confusion matrices become

Instead of using a different input state we could have measured in different bases. A rough way to think about quantum process tomography is you need to input a tomographically complete set of input states and measure the confusion matrix of those states in a tomographically complete basis.

A *tomographically complete* set of operators is an operator basis on the Hilbert space of the system. For a single qubit this is the Pauli operators \(\{ I, X, Y, Z \}\).
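Tomographic completeness of the Pauli basis can be verified concretely: any single-qubit state decomposes uniquely into Pauli components, so estimating all Pauli expectations determines the state. A small NumPy sketch (an illustration, not library code) with an arbitrary example state:

```python
import numpy as np

# The single-qubit Pauli operator basis {I, X, Y, Z}
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
paulis = [I, X, Y, Z]

# An example single-qubit density matrix (made up for illustration)
rho = np.array([[0.75, 0.25], [0.25, 0.25]])

# Expand rho in the Pauli basis: rho = (1/2) * sum_k Tr[P_k rho] P_k
coeffs = [np.trace(P @ rho).real for P in paulis]
reconstructed = 0.5 * sum(c * P for c, P in zip(coeffs, paulis))
assert np.allclose(reconstructed, rho)
```

The coefficients `Tr[P_k rho]` are exactly the observable expectations that state tomography estimates from data.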

The above analogy gets further stretched when we consider imperfect gates (non unitary processes). Indeed understanding quantum process tomography is beyond the scope of this notebook.

Quantum process tomography involves

- preparing a state
- executing a process (the thing you are trying to estimate)
- measuring in a basis

Figure 1. For process tomography, rotations must be prepended and appended to fully resolve the action of V on arbitrary initial states.

The process is kept fixed, while the experimental settings (the preparation and measurement) are varied using pre and post rotations, see Figure 1. To estimate a quantum process matrix on \(n\) qubits requires estimating \(D^4-D^2\) parameters where \(D=2^n\) is the dimension of the Hilbert space.

Programmatically, (prep, measure) tuples are varied over every `itertools.product`

combination of chosen input states and measurement operators.

There are two choices of tomographically complete input states, *SIC* states and *Pauli* states.

The SIC states are the states corresponding to the directions of a SIC POVM. In this case there are only four states, but we still have to measure in the Pauli basis. The scaling of the number of experiments with respect to number of qubits is therefore \(4^n 3^n\).

The alternative is to use \(\pm\) eigenstates of the Pauli operators as our tomographically complete input states. In this case there are six input states, and we still have to measure in the Pauli basis. The scaling of the number of experiments with respect to number of qubits is therefore \(6^n 3^n\).
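The scaling formulas above can be sketched as a pair of small helper functions (illustrative only; the function names are not part of forest-benchmarking). Note they count (preparation, measurement-basis) combinations, whereas the printed experiment settings enumerate individual observables, so for multi-qubit experiments the settings list is longer than the basis count:

```python
def n_process_tomo_settings(n_qubits, in_basis='pauli'):
    """Count (preparation, measurement-basis) combinations:
    4^n * 3^n for SIC inputs, 6^n * 3^n for Pauli eigenstate inputs."""
    n_prep = (6 if in_basis == 'pauli' else 4) ** n_qubits
    return n_prep * 3 ** n_qubits

def n_process_params(n_qubits):
    """Free parameters in an n-qubit process: D^4 - D^2 with D = 2^n."""
    D = 2 ** n_qubits
    return D ** 4 - D ** 2

assert n_process_tomo_settings(1, 'pauli') == 18   # 6 preps x 3 bases
assert n_process_tomo_settings(2, 'sic') == 144    # 16 preps x 9 bases
assert n_process_params(2) == 240                  # 256 - 16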

**More information**

When thinking about process tomography it is necessary to understand superoperators. For more information see superoperator_representations.md and the superoperator_tools ipython notebook.

Also see the following references:

*Initialization and characterization of open quantum systems*.

*Introduction to Quantum Gate Set Tomography*.

*Practical Bayesian Tomography*.

*Self-Consistent Quantum Process Tomography*.

###### Process tomography in `forest.benchmarking`¶

Before reading this section make sure you are familiar with the state tomography ipython notebook.

The basic workflow is:

- Prepare a process that you wish to estimate by specifying a pyQuil program.
- Construct a list of input states and output observables that are needed to estimate the process; we collect this into an object called an `ObservablesExperiment`.
- Acquire the data by running the program on a QVM or QPU.
- Apply an estimator to the data to obtain an estimate of the process.
- Compare the estimated process to the true process by a distance measure or visualization.

Below we break these steps down in all their ghastly glory.

We choose an \(RX(\pi)\) gate, which is represented as a pyQuil `Program`.

The true process is

\[RX(\pi) = \begin{pmatrix} 0 & -i \\ -i & 0 \end{pmatrix},\]

which is \(X\) up to an irrelevant global phase.

```
[1]:
```

```
#some imports
import numpy as np
from pyquil import Program, get_qc
from pyquil.gates import *
qc = get_qc('2q-qvm')
```

```
[2]:
```

```
# numerical representation of the true process
from pyquil.gate_matrices import RX as RX_matrix
from forest.benchmarking.operator_tools import kraus2choi
kraus_true = RX_matrix(np.pi)
print('The Kraus representation is:\n', np.round(kraus_true, 2),'\n')
choi_true = kraus2choi(kraus_true)
print('The Choi representation is:\n', np.real_if_close(np.round(choi_true, 2)))
from pyquil.gate_matrices import X as X_matrix
choi_x_gate = kraus2choi(X_matrix)
print('\n The X gate choi matrix is:\n', np.real_if_close(np.round(choi_x_gate)))
```

```
The Kraus representation is:
[[0.+0.j 0.-1.j]
[0.-1.j 0.+0.j]]
The Choi representation is:
[[0. 0. 0. 0.]
[0. 1. 1. 0.]
[0. 1. 1. 0.]
[0. 0. 0. 0.]]
The X gate choi matrix is:
[[0. 0. 0. 0.]
[0. 1. 1. 0.]
[0. 1. 1. 0.]
[0. 0. 0. 0.]]
```

```
[3]:
```

```
# construct the process program
qubits = [0]
process = Program(RX(np.pi, qubits[0]))
print(process)
```

```
RX(pi) 0
```

###### `ObservablesExperiment` for process tomography¶

See this notebook for more on `ObservablesExperiment` and for a demonstration of grouping compatible experiment settings to reduce the total number of data acquisition runs needed.

Note: An `I` measurement, though possible, is ‘trivial’ in the sense that we know the outcome is always 1; therefore, we omit the settings where \(I\) is ‘measured’. Our estimators add in the contribution of this \(I\) term automatically, so be mindful of this if you are making your own settings.

```
[4]:
```

```
from forest.benchmarking.tomography import generate_process_tomography_experiment
experiment = generate_process_tomography_experiment(process, qubits)
print(experiment)
```

```
RX(pi) 0
0: X+_0→(1+0j)*X0
1: X+_0→(1+0j)*Y0
2: X+_0→(1+0j)*Z0
3: X-_0→(1+0j)*X0
4: X-_0→(1+0j)*Y0
5: X-_0→(1+0j)*Z0
6: Y+_0→(1+0j)*X0
7: Y+_0→(1+0j)*Y0
8: Y+_0→(1+0j)*Z0
9: Y-_0→(1+0j)*X0
10: Y-_0→(1+0j)*Y0
11: Y-_0→(1+0j)*Z0
12: Z+_0→(1+0j)*X0
13: Z+_0→(1+0j)*Y0
14: Z+_0→(1+0j)*Z0
15: Z-_0→(1+0j)*X0
16: Z-_0→(1+0j)*Y0
17: Z-_0→(1+0j)*Z0
```

PyQuil will run the tomography programs.

We will use the QVM but at this point you can use a QPU.

```
[5]:
```

```
from forest.benchmarking.observable_estimation import estimate_observables
results = list(estimate_observables(qc, experiment))
results
```

```
[5]:
```

```
[ExperimentResult[X+_0→(1+0j)*X0: 1.0 +- 0.0],
ExperimentResult[X+_0→(1+0j)*Y0: 0.032 +- 0.04469845634918503],
ExperimentResult[X+_0→(1+0j)*Z0: -0.052 +- 0.0446608553433541],
ExperimentResult[X-_0→(1+0j)*X0: -1.0 +- 0.0],
ExperimentResult[X-_0→(1+0j)*Y0: 0.0 +- 0.044721359549995794],
ExperimentResult[X-_0→(1+0j)*Z0: -0.028 +- 0.04470382533967311],
ExperimentResult[Y+_0→(1+0j)*X0: -0.064 +- 0.044629676225578875],
ExperimentResult[Y+_0→(1+0j)*Y0: -1.0 +- 0.0],
ExperimentResult[Y+_0→(1+0j)*Z0: -0.04 +- 0.04468556814006061],
ExperimentResult[Y-_0→(1+0j)*X0: -0.068 +- 0.04461784396404649],
ExperimentResult[Y-_0→(1+0j)*Y0: 1.0 +- 0.0],
ExperimentResult[Y-_0→(1+0j)*Z0: -0.008 +- 0.04471992844359213],
ExperimentResult[Z+_0→(1+0j)*X0: -0.02 +- 0.04471241438347967],
ExperimentResult[Z+_0→(1+0j)*Y0: -0.004 +- 0.04472100177768829],
ExperimentResult[Z+_0→(1+0j)*Z0: -1.0 +- 0.0],
ExperimentResult[Z-_0→(1+0j)*X0: -0.024 +- 0.04470847794322683],
ExperimentResult[Z-_0→(1+0j)*Y0: 0.016 +- 0.04471563484956912],
ExperimentResult[Z-_0→(1+0j)*Z0: 1.0 +- 0.0]]
```

Sometimes the linear inversion estimate can be unphysical, but we can use `proj_choi_to_physical` to project it onto a physical estimate.

```
[6]:
```

```
from forest.benchmarking.tomography import linear_inv_process_estimate
from forest.benchmarking.operator_tools import proj_choi_to_physical
print('Linear inversion estimate:\n')
choi_lin_inv_est = linear_inv_process_estimate(results, qubits)
print(np.real_if_close(np.round(choi_lin_inv_est, 2)))
print('\n Project the above estimate to a physical estimate:\n')
choi_lin_inv_proj_phys_est = proj_choi_to_physical(choi_lin_inv_est)
print(np.real_if_close(np.round(choi_lin_inv_proj_phys_est, 2)))
```

```
Linear inversion estimate:
[[-0.01-0.j -0.01+0.j -0.01-0.01j -0. -0.01j]
[-0.01-0.j 1.01-0.j 1. +0.01j 0.01+0.01j]
[-0.01+0.01j 1. -0.01j 0.99+0.j -0.02-0.01j]
[-0. +0.01j 0.01-0.01j -0.02+0.01j 0.01+0.j ]]
Project the above estimate to a physical estimate:
[[-0. +0.j -0.01-0.j -0. -0.j -0. -0.j ]
[-0.01+0.j 1. -0.j 0.99+0.01j 0. +0.j ]
[-0. +0.j 0.99-0.01j 0.99-0.j -0.01-0.j ]
[-0. +0.j 0. -0.j -0.01+0.j 0.01+0.j ]]
```

Alternatively, we can compute a maximum likelihood estimate using the PGDB (projected gradient descent with backtracking) algorithm.

```
[7]:
```

```
from forest.benchmarking.tomography import pgdb_process_estimate
choi_mle_est = pgdb_process_estimate(results, qubits)
np.real_if_close(np.round(choi_mle_est, 2))
```

```
[7]:
```

```
array([[-0. +0.j , -0. -0.j , -0. -0.j , -0. -0.j ],
[-0. +0.j , 1. +0.j , 1.01+0.01j, 0. +0.j ],
[-0. +0.j , 1.01-0.01j, 1. +0.j , 0. +0.j ],
[-0. +0.j , 0. -0.j , 0. -0.j , -0. +0.j ]])
```

```
[8]:
```

```
choi_estimates = {
    'True Process': choi_true,
    'Linear Inv': choi_lin_inv_est,
    'ProjLinInv': choi_lin_inv_proj_phys_est,
    'Plain MLE': choi_mle_est
}
```

**Process fidelity**

```
[9]:
```

```
from forest.benchmarking.operator_tools import choi2pauli_liouville
from forest.benchmarking.distance_measures import process_fidelity
# process_fidelity uses pauli liouville rep
pl_true = choi2pauli_liouville(choi_true)
for key, choi_e in choi_estimates.items():
    pl_e = choi2pauli_liouville(choi_e)
    fid = np.round(process_fidelity(pl_true, pl_e), 3)
    print(f"Fidelity(True Process, {key}) = {fid}")
```

```
Fidelity(True Process, True Process) = 1.0
Fidelity(True Process, Linear Inv) = 1.0
Fidelity(True Process, ProjLinInv) = 0.996
Fidelity(True Process, Plain MLE) = 1.003
```

**Diamond norm distance**

```
[10]:
```

```
from forest.benchmarking.distance_measures import diamond_norm_distance
# diamond_norm_distance takes the choi rep
for key, choi_e in choi_estimates.items():
    fid = np.round(diamond_norm_distance(choi_true, choi_e), 3)
    print(f"Diamond_norm_dist(True Process, {key}) = {fid}")
```

```
Diamond_norm_dist(True Process, True Process) = -0.0
Diamond_norm_dist(True Process, Linear Inv) = 0.074
Diamond_norm_dist(True Process, ProjLinInv) = 0.046
Diamond_norm_dist(True Process, Plain MLE) = 0.013
```

```
[11]:
```

```
import matplotlib.pyplot as plt
from forest.benchmarking.plotting.state_process import plot_pauli_transfer_matrix
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12,4))
plot_pauli_transfer_matrix(np.real(choi2pauli_liouville(choi_true)), ax1, title='Ideal')
plot_pauli_transfer_matrix(np.real(choi2pauli_liouville(choi_lin_inv_est)), ax2, title='Lin Inv Estimate')
plot_pauli_transfer_matrix(np.real(choi2pauli_liouville(choi_mle_est)), ax3, title='MLE Estimate')
plt.tight_layout()
```

```
[12]:
```

```
# the process
qubits = [0, 1]
process = Program(CNOT(qubits[0], qubits[1]))
# the experiment object
experiment = generate_process_tomography_experiment(process, qubits, in_basis='sic')
print(experiment)
```

```
CNOT 0 1
0: SIC0_0 * SIC0_1→(1+0j)*X1
1: SIC0_0 * SIC0_1→(1+0j)*Y1
2: SIC0_0 * SIC0_1→(1+0j)*Z1
3: SIC0_0 * SIC0_1→(1+0j)*X0
4: SIC0_0 * SIC0_1→(1+0j)*X0X1
5: SIC0_0 * SIC0_1→(1+0j)*X0Y1
6: SIC0_0 * SIC0_1→(1+0j)*X0Z1
7: SIC0_0 * SIC0_1→(1+0j)*Y0
8: SIC0_0 * SIC0_1→(1+0j)*Y0X1
9: SIC0_0 * SIC0_1→(1+0j)*Y0Y1
... 220 not shown ...
... use e.settings_string() for all ...
230: SIC3_0 * SIC3_1→(1+0j)*X0Y1
231: SIC3_0 * SIC3_1→(1+0j)*X0Z1
232: SIC3_0 * SIC3_1→(1+0j)*Y0
233: SIC3_0 * SIC3_1→(1+0j)*Y0X1
234: SIC3_0 * SIC3_1→(1+0j)*Y0Y1
235: SIC3_0 * SIC3_1→(1+0j)*Y0Z1
236: SIC3_0 * SIC3_1→(1+0j)*Z0
237: SIC3_0 * SIC3_1→(1+0j)*Z0X1
238: SIC3_0 * SIC3_1→(1+0j)*Z0Y1
239: SIC3_0 * SIC3_1→(1+0j)*Z0Z1
```

Here we are going to speed things up by grouping compatible settings to be estimated from the same set of data. See this notebook for more information.

```
[13]:
```

```
from forest.benchmarking.observable_estimation import group_settings
results = list(estimate_observables(qc, group_settings(experiment)))
results[:10]
```

```
[13]:
```

```
[ExperimentResult[SIC0_0 * SIC0_1→(1+0j)*X1: -0.088 +- 0.04454786190155483],
ExperimentResult[SIC0_0 * SIC0_1→(1+0j)*X0: -0.012 +- 0.04471813949618208],
ExperimentResult[SIC0_0 * SIC0_1→(1+0j)*X0X1: 0.02 +- 0.04471241438347967],
ExperimentResult[SIC0_0 * SIC0_1→(1+0j)*Y1: 0.016 +- 0.04471563484956912],
ExperimentResult[SIC0_0 * SIC0_1→(1+0j)*X0Y1: 0.048 +- 0.04466981083461179],
ExperimentResult[SIC0_0 * SIC0_1→(1+0j)*Z1: 1.0 +- 0.0],
ExperimentResult[SIC0_0 * SIC0_1→(1+0j)*X0Z1: -0.048 +- 0.04466981083461179],
ExperimentResult[SIC0_0 * SIC0_1→(1+0j)*Y0: -0.104 +- 0.04447884890596878],
ExperimentResult[SIC0_0 * SIC0_1→(1+0j)*Y0X1: 0.064 +- 0.044629676225578875],
ExperimentResult[SIC0_0 * SIC0_1→(1+0j)*Y0Y1: 0.04 +- 0.04468556814006061]]
```

```
[14]:
```

```
def _print_big_matrix(mat):
    for row in mat:
        for elem in row:
            elem = np.real_if_close(np.round(elem, 3), tol=1e-1)
            if not np.isclose(elem, 0., atol=1e-2):
                print(f'{elem:.1f}', end=' ')
            else:
                print(' . ', end=' ')
        print()
```

```
[15]:
```

```
process_choi_est = pgdb_process_estimate(results, qubits)
_print_big_matrix(process_choi_est)
```

```
1.0 -0.0 . . 0.0 1.0 . . . . . 1.0 . . 1.0 0.0
-0.0 . . . . -0.0 . . . . . -0.0 . . -0.0 .
. . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
0.0 . . . . . . . . . . . . . . .
1.0 -0.0 . . . 1.0 . . . . . 1.0 . . 1.0 .
. . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
. . . . . . . . 0.0 . . . . . . .
. . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
1.0 -0.0 . . . 1.0 . . . . . 1.0 . . 1.0 .
. . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
1.0 -0.0 . . . 1.0 . . . . . 1.0 . . 1.0 .
0.0 . . . . . . . . . . . . . . .
```

```
[16]:
```

```
from pyquil.gate_matrices import CNOT as CNOT_matrix
process_choi_ideal = kraus2choi(CNOT_matrix)
_print_big_matrix(process_choi_ideal)
```

```
1.0 . . . . 1.0 . . . . . 1.0 . . 1.0 .
. . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
1.0 . . . . 1.0 . . . . . 1.0 . . 1.0 .
. . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
1.0 . . . . 1.0 . . . . . 1.0 . . 1.0 .
. . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
1.0 . . . . 1.0 . . . . . 1.0 . . 1.0 .
. . . . . . . . . . . . . . . .
```

```
[17]:
```

```
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,5))
ideal_ptm = choi2pauli_liouville(process_choi_ideal)
est_ptm = choi2pauli_liouville(process_choi_est)
plot_pauli_transfer_matrix(ideal_ptm, ax1, title='Ideal')
plot_pauli_transfer_matrix(est_ptm, ax2, title='Estimate')
plt.tight_layout()
```

The `ObservablesExperiment` framework allows for easy parallelization of experiments that operate on disjoint sets of qubits. Below we demonstrate the simple example of tomographing two separate bit-flip processes, `Program(X(0))` and `Program(X(1))`. To run each experiment in serial would require \(n_1 + n_2 = 2n\) experimental runs (\(n_1 = n_2 = n\) in this case), but when we run a ‘parallel’ experiment we need only \(n\) runs.

Note that the parallel experiment is not the same as doing tomography on the program `Program(X(0), X(1))`, because in the latter case we need to do more data acquisition runs on the qc and we get more information back (we tomographize a two-qubit process instead of two one-qubit processes). The `ExperimentSetting`s for that experiment are a superset of the parallel settings. We also cannot directly compare a parallel experiment with two serial experiments, because in a parallel experiment ‘cross-talk’ and other multi-qubit effects can impact the overall process; that is, the physics of ‘parallel’ experiments cannot in general be neatly factored into two serial experiments.

See the linked notebook for more explanation and words of caution.

```
[18]:
```

```
from forest.benchmarking.observable_estimation import ObservablesExperiment, merge_disjoint_experiments
disjoint_sets_of_qubits = [(0,),(1,)]
programs = [Program(X(*q)) for q in disjoint_sets_of_qubits]
expts_to_parallelize = []
for qubits, program in zip(disjoint_sets_of_qubits, programs):
    expt = generate_process_tomography_experiment(program, qubits)
    # group the settings for fair comparison later
    expts_to_parallelize.append(group_settings(expt))
# get a merged experiment with grouped settings for parallel data acquisition
parallel_expt = merge_disjoint_experiments(expts_to_parallelize)
print(f'Original number of runs: {sum(len(expt) for expt in expts_to_parallelize)}')
print(f'Parallelized number of runs: {len(parallel_expt)}')
print(parallel_expt)
```

```
Original number of runs: 36
Parallelized number of runs: 18
X 0; X 1
0: X+_0→(1+0j)*X0, X+_1→(1+0j)*X1
1: X+_0→(1+0j)*Y0, X+_1→(1+0j)*Y1
2: X+_0→(1+0j)*Z0, X+_1→(1+0j)*Z1
3: X-_0→(1+0j)*X0, X-_1→(1+0j)*X1
4: X-_0→(1+0j)*Y0, X-_1→(1+0j)*Y1
5: X-_0→(1+0j)*Z0, X-_1→(1+0j)*Z1
6: Y+_0→(1+0j)*X0, Y+_1→(1+0j)*X1
7: Y+_0→(1+0j)*Y0, Y+_1→(1+0j)*Y1
8: Y+_0→(1+0j)*Z0, Y+_1→(1+0j)*Z1
9: Y-_0→(1+0j)*X0, Y-_1→(1+0j)*X1
10: Y-_0→(1+0j)*Y0, Y-_1→(1+0j)*Y1
11: Y-_0→(1+0j)*Z0, Y-_1→(1+0j)*Z1
12: Z+_0→(1+0j)*X0, Z+_1→(1+0j)*X1
13: Z+_0→(1+0j)*Y0, Z+_1→(1+0j)*Y1
14: Z+_0→(1+0j)*Z0, Z+_1→(1+0j)*Z1
15: Z-_0→(1+0j)*X0, Z-_1→(1+0j)*X1
16: Z-_0→(1+0j)*Y0, Z-_1→(1+0j)*Y1
17: Z-_0→(1+0j)*Z0, Z-_1→(1+0j)*Z1
```

Collect the data. Separate the results by qubit to get back estimates of each process.

We use a noisy qvm so you can see slight differences between the two estimates.

```
[24]:
```

```
from forest.benchmarking.observable_estimation import get_results_by_qubit_groups
noisy_qc = get_qc('2q-qvm', noisy=True)
parallel_results = estimate_observables(noisy_qc, parallel_expt)
individual_results = get_results_by_qubit_groups(parallel_results, disjoint_sets_of_qubits)
process_estimates = []
for q in disjoint_sets_of_qubits:
    estimate = pgdb_process_estimate(individual_results[q], q)
    process_estimates.append(estimate)
fig, axes = plt.subplots(1, len(process_estimates), figsize=(12,5))
for idx, est in enumerate(process_estimates):
    plot_pauli_transfer_matrix(choi2pauli_liouville(est), axes[idx], title=f'Estimate {idx}')
plt.tight_layout()
```


- `do_tomography`(qc, program, qubits, kind, …): A wrapper around experiment generation, data acquisition, and estimation that runs a tomography experiment and returns the state or process estimate along with the experiment and results.

#### State Tomography¶

- `generate_state_tomography_experiment`(…): Generate an ObservablesExperiment containing the experimental settings required to characterize a quantum state.
- `linear_inv_state_estimate`(results, qubits): Estimate a quantum state using linear inversion.
- `iterative_mle_state_estimate`(results, qubits): Given tomography data, use one of three iterative algorithms to return an estimate of the state.
- `estimate_variance`(results, qubits, …[, …]): Use a simple bootstrap-like method to return an error bar on some functional of the quantum state.

#### Process Tomography¶

- `generate_process_tomography_experiment`(…): Generate an ObservablesExperiment containing the experiment settings required to characterize a quantum process.
- `linear_inv_process_estimate`(results, qubits): Estimate a quantum process using linear inversion.
- `pgdb_process_estimate`(results, qubits[, …]): Provide an estimate of the process via Projected Gradient Descent with Backtracking [PGD].

### Direct Fidelity Estimation¶

Direct fidelity estimation (DFE) uses knowledge about the ideal target state or process to perform a tailored set of measurements to quickly certify a quantum state or a quantum process at lower cost than full tomography.

#### Direct Fidelity Estimation¶

Using a method known as direct fidelity estimation (DFE), see [DFE1] and [DFE2], it is possible to estimate the fidelity between

- a target pure state \(\rho_\psi = |\psi\rangle\langle \psi|\) and its experimental realization \(\sigma\),
- a target unitary \(U\) and its experimental realization \(U_e\).

This can be done with a small number (relative to state and process tomography) of simple experimental settings that is independent of the system size. Such methods are useful for the experimental study of larger quantum information processing units.

In this notebook we explore some state and process DFE using the forest.benchmarking module `direct_fidelity_estimation.py`

.

##### Simplistic state DFE example¶

Suppose we have tried to prepare the state \(|0\rangle\), with state matrix \(\rho_0 = |0\rangle \langle 0 |\), but in fact prepared the state

The usual way to quantify how close \(\sigma\) and \(\rho_0\) are is to use quantum state tomography to estimate \(\sigma\) and then calculate the fidelity between \(\sigma\) and \(\rho_0\).

DFE provides a way to directly estimate the fidelity without first estimating the state \(\sigma\). To see this, first note that since \(\rho_0\) is a pure state (our estimate \(\sigma\) in general is not) we can write the fidelity as \(F(\rho_0, \sigma) = \langle 0 |\sigma|0 \rangle\). This is equivalent to

\[F(\rho_0, \sigma) = {\rm Tr}[\rho_0 \sigma]\]

using the [cyclic property of the trace](https://en.wikipedia.org/wiki/Trace_(linear_algebra%29#Cyclic_property). Next we parameterize the pure state as

\[\rho_0 = \frac{1}{2}(I + Z).\]

Finally we arrive at

\[F(\rho_0, \sigma) = \frac{1}{2}\big(1 + \langle Z \rangle_\sigma\big).\]

This result shows that we only need to estimate one observable \(\langle Z \rangle \rightarrow \tilde z\) in order to estimate the fidelity between \(\rho_0\) and \(\sigma\) in this particular example.
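The single-observable fidelity formula is easy to check numerically. A minimal NumPy sketch, assuming a made-up noisy preparation `sigma` for illustration:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
rho_0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # the target pure state |0><0|

# A hypothetical noisy preparation: mostly |0><0| with a small |1><1| component
sigma = np.array([[0.97, 0.0], [0.0, 0.03]])

z_expectation = np.trace(Z @ sigma).real      # the one observable we estimate
fid_dfe = 0.5 * (1 + z_expectation)           # F = (1 + <Z>)/2
fid_direct = np.trace(rho_0 @ sigma).real     # F = <0|sigma|0> = Tr[rho_0 sigma]
assert np.isclose(fid_dfe, fid_direct)        # both give 0.97
```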

##### Generalizing state DFE, somewhat¶

The code that we’ve developed has some assumptions baked in that are important to understand. We have implemented DFE for a restricted subset of states and processes using these assumptions.

To start, we assume that our start state is always the pure state \(|0 \rangle\) on all qubits. On \(n\) qubits this state can be decomposed as

\[|0\rangle\langle 0|^{\otimes n} = \frac{1}{2^n} \sum_{P_k \in \{I, Z\}^{\otimes n}} P_k,\]

where we have expanded the tensor multiplication into one large sum over all possible combinations of \(I\) and \(Z\) Pauli terms. From this start point we apply the user specified program \(U\), or rather the noisy implementation \(\tilde U\), to prepare the state \(\sigma\).
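This decomposition of \(|0\rangle\langle 0|^{\otimes n}\) into \(I\)/\(Z\) tensor products can be verified directly; a short NumPy sketch (illustration only, here for \(n = 2\)):

```python
import numpy as np
from itertools import product

I = np.eye(2)
Z = np.diag([1.0, -1.0])

n = 2
rho_0 = np.zeros((2**n, 2**n))
rho_0[0, 0] = 1.0                     # |0...0><0...0| on n qubits

# Sum all 2^n tensor products of I and Z, normalized by 2^n
total = np.zeros((2**n, 2**n))
for combo in product([I, Z], repeat=n):
    P = combo[0]
    for factor in combo[1:]:
        P = np.kron(P, factor)
    total += P
decomposition = total / 2**n
assert np.allclose(decomposition, rho_0)
```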

Meanwhile we want to calculate the fidelity between \(\sigma\) and the ideally prepared state \(\rho = U \rho_0 U^\dagger\). By linearity we can calculate \(\rho\) by conjugating each Pauli term \(P_k\) in the sum above by the ideal program \(U\).

**The assumption** we make is that the ideal prep program \(U\) is an element of the Clifford group. Since the Clifford group is the normalizer of the Pauli group, these conjugated terms \(U P_k U^\dagger\) will again be some Pauli \(P'_k\) (in the code we use a `pyquil.BenchmarkConnection`

to quickly calculate the resultant Pauli). Following the example above we arrive at the set of Paulis \(\{P'_k\}_k\) whose
expectations we need to estimate with respect to the physical state \(\sigma\). The estimate is the average of all these Pauli expectations (see Eqn. 1 of [DFE1]). Note that we don’t need to estimate the expectation of the all \(I\) term since we know its expectation is 1. We exclude this term in experiment generation but include it in the analysis ‘by hand’.

##### Process DFE¶

The average gate fidelity between an experimental process \(\mathcal E\) and the ideal (unitary) process \(\mathcal U\) is given by

where the processes are represented by linear superoperators acting on vectorized density matrices, and \(d\) is the dimension of the Hilbert space \(\mathcal H\) that \(\mathcal E\) and \(\mathcal U\) act on. If you are unfamiliar with these terms, see the superoperator tools notebook and superoperator_representations.md.

Using the \(d^2\) dimensional orthonormal Pauli basis \(\{P_k\}\) for superoperators on \(\mathcal H\)–e.g. \(\{I/\sqrt{2}, X/\sqrt{2}, Y/\sqrt{2}, Z/\sqrt{2} \}\) for a single qubit–we can expand the trace and insert an identity superoperator between \(\mathcal U\) and \(\mathcal E\); this amounts to re-casting these superoperators in the Pauli-Liouville representation (aka Pauli Transfer Matrix). (again, see superoperator_representations.md if you are unfamiliar with vec notation \(P_k \iff \left| P_k \rangle\rangle\langle\langle P_k\right|\)):

Now we switch representations by unveccing \(| P_k \rangle\rangle\) and representing \(\mathcal U\) by its unitary action on the matrix \(P_k\).

Finally we can decompose the \(P_k\) acted on by \(\mathcal E\) into a sum over projectors onto each eigenvector \(\left|\phi \rangle\langle \phi \right|\) with the correct sign for the eigenvalue (which is \(\pm 1\) for Paulis).

At this point our **assumption** that \(U\) is a Clifford element again comes into play and allows us to easily compute the conjugated Pauli \(U P_k U^\dagger = \sigma_k / \sqrt{d}\). Inserting this assumption gives a simple picture where we need to estimate the expectation of each \(\sigma_k\) Pauli for the state which results from applying our noisy circuit \(\mathcal E\) to each eigenstate of \(P_k\).

The final estimate of \(F(\mathcal U,\mathcal E)\) follows from estimating these expectations, plugging them in to the sum to get \({\rm Tr} [\mathcal E \mathcal U^\dagger]\) which we insert into the equation at the beginning of the section.

##### Process DFE by reduction to state DFE¶

We can also understand process DFE by appealing to the Choi–Jamiołkowski isomorphism which says that we can represent our channel superoperators as state matrices (i.e. density matrices) \(J(\mathcal E)\) and \(J(\mathcal U)\). This allows us to rewrite the average gate fidelity as

Finally, since \(J(\mathcal U)\) is a pure state, we employ the same fact from above about fidelity between states to note that \({\rm Tr}[ J(\mathcal E)⋅J(\mathcal U)] = F(J(\mathcal E),J(\mathcal U))\). We have reduced process DFE to state DFE, but we need to dive into the particulars of \(J(\cdot)\) to understand what we actually need to measure.

For \(U\) acting on \(n\) qubits the state \(J(\mathcal U)\) over \(2n\) qubits is given by

where \(|k \rangle\) is the state over \(n\) qubits corresponding to the binary representation of \(k\). The state matrix for the maximally entangled state on which \(\textrm{Id} \otimes \mathcal U\) acts can be decomposed as:

where the sum is over the complete Pauli basis (with normalization factored out) for a \(d\) dimensional Hilbert space. This gives us

Where we have split the operator \(P_j\) into the sum of the projectors onto the plus and minus eigenstates. We again **assume** that \(U\) is an element of the Clifford group, so we can easily compute the measurement observables \(U P_j U^\dagger\) whose expectation we estimate with respect to the various states \(\mathcal E\left(|P_j^\pm \rangle \langle P_j^\pm |\right)\) obtained from the \(d\) different preparations of eigenvectors.
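The Choi-state picture of process DFE can be sketched numerically. Below is an illustrative NumPy-only example (not library code): we build normalized Choi states for an ideal \(X\) gate and for a hypothetical noisy version that applies \(X\) only with probability `p`, then recover the process fidelity as \({\rm Tr}[J(\mathcal E) J(\mathcal U)]\) (valid since \(J(\mathcal U)\) is pure) and convert it to average gate fidelity with the standard relation \(F_{\rm avg} = (d F_{\rm pro} + 1)/(d+1)\):

```python
import numpy as np

def choi_state(kraus_ops, d):
    """Normalized Choi state J(E) = (Id (x) E)(|Omega><Omega|),
    where |Omega> = (1/sqrt(d)) sum_k |k>|k> is maximally entangled."""
    omega = np.zeros((d * d, 1), dtype=complex)
    for k in range(d):
        omega[k * d + k] = 1.0
    omega /= np.sqrt(d)
    rho = omega @ omega.conj().T
    J = np.zeros((d * d, d * d), dtype=complex)
    for K in kraus_ops:
        big = np.kron(np.eye(d), K)   # channel acts on the second factor
        J += big @ rho @ big.conj().T
    return J

d = 2
X = np.array([[0, 1], [1, 0]], dtype=complex)
p = 0.9   # hypothetical: X is applied with probability 0.9, identity otherwise

J_ideal = choi_state([X], d)
J_noisy = choi_state([np.sqrt(p) * X, np.sqrt(1 - p) * np.eye(2)], d)

F_pro = np.trace(J_noisy @ J_ideal).real      # = F(J(E), J(U)) since J(U) is pure
F_avg = (d * F_pro + 1) / (d + 1)
```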

*Practical Characterization of Quantum Devices without Tomography.*

*Direct Fidelity Estimation from Few Pauli Measurements.*

```
[1]:
```

```
from pyquil.paulis import ID
from pyquil.gates import I, X, MEASURE, H, CNOT, RY, CZ
from pyquil import Program, get_qc
from pyquil.api import get_benchmarker
from forest.benchmarking.direct_fidelity_estimation import (
    generate_exhaustive_state_dfe_experiment,
    generate_exhaustive_process_dfe_experiment,
    generate_monte_carlo_state_dfe_experiment,
    generate_monte_carlo_process_dfe_experiment,
    acquire_dfe_data,
    estimate_dfe)
import numpy as np
from matplotlib import pyplot
```

```
[2]:
```

```
# noiseless QVM
qvm = get_qc("9q-generic-qvm", as_qvm=True, noisy=False)
# noisy QVM
noisy_qvm = get_qc("9q-generic-qvm", as_qvm=True, noisy=True)
bm = get_benchmarker()
```

##### Direct fidelity estimation in `forest.benchmarking`¶

The basic workflow is:

- Prepare a *state* or a *process* by specifying a pyQuil program.
- Construct a list of observables that are needed to estimate the fidelity; we collect this into an object called an `ObservablesExperiment`.
- Acquire the data by running the program on a QVM or QPU.
- Apply an estimator to the data to obtain an estimate of the fidelity between the ideal and measured state or process.
- Visualize if you wish.

##### Two quick examples¶

###### Step 1. Specify a state preparation or unitary process with a `Program`¶

This is the object we will do DFE on.

The process we choose is

and the state is

Perhaps we should emphasize that while the `Program` that we pass into the DFE module is the same whether we are doing process DFE on \(\tilde U\) or state DFE on \(\tilde \Psi\), the specific experiment construction method we choose in the next step selects one or the other and results in different `ExperimentSettings` around the same `Program`.

```
[3]:
```

```
p = Program()
prep_prog = p.inst(H(0), CNOT(0,1))
print(prep_prog)
```

```
H 0
CNOT 0 1
```

```
[4]:
```

```
from pyquil.gate_matrices import I as Imatrix, H as Hmatrix, CNOT as CNOTmatrix
U_ideal = CNOTmatrix @ np.kron(Hmatrix, Imatrix)
print(U_ideal)
rho_ideal = U_ideal @ np.array([[1], [0], [0], [0]]) @ np.array([[1], [0], [0], [0]]).T @ U_ideal.conj().T
print(rho_ideal)
```

```
[[ 0.70710678 0. 0.70710678 0. ]
[ 0. 0.70710678 0. 0.70710678]
[ 0. 0.70710678 0. -0.70710678]
[ 0.70710678 0. -0.70710678 0. ]]
[[0.5 0. 0. 0.5]
[0. 0. 0. 0. ]
[0. 0. 0. 0. ]
[0.5 0. 0. 0.5]]
```

###### Step 2. Construct an `ObservablesExperiment` for DFE¶

We use the helper functions

- `generate_exhaustive_state_dfe_experiment`
- `generate_exhaustive_process_dfe_experiment`

to construct a (tomographically incomplete) set of preparations + measurements (i.e. `ExperimentSettings`) for state and process DFE, respectively.

```
[5]:
```

```
qubits = [0,1]
# state dfe
state_exp = generate_exhaustive_state_dfe_experiment(bm, prep_prog, qubits)
# process dfe
process_exp = generate_exhaustive_process_dfe_experiment(bm, prep_prog, qubits)
```

For state DFE we can print this experiment object out to see the \(d - 1 = 2^2-1= 3\) observables whose expectations we will estimate (i.e. the measurements we will perform). Note that we could have included an additional observable `I0I1`, but since this trivially gives an expectation of 1 we instead omit it in experiment generation and include its contribution by hand in the estimation methods. Be mindful of this if generating your own settings.

```
[6]:
```

```
# Let's take a look into one of these experiment objects
print('The type of the object is:', type(state_exp),'\n')
print('The program is:')
print(state_exp.program)
print('There are three different settings:')
print(state_exp.settings_string())
```

```
The type of the object is: <class 'forest.benchmarking.observable_estimation.ObservablesExperiment'>
The program is:
H 0
CNOT 0 1
There are three different settings:
0: Z+_0 * Z+_1→(1+0j)*Z0Z1
1: Z+_0 * Z+_1→(1+0j)*X0X1
2: Z+_0 * Z+_1→(-1+0j)*Y0Y1
```

Let’s also look at a few of the settings for process DFE. Here our omission of trivial identity terms is somewhat more subtle. There should be \(d^2 - 1 = 15\) total unique observables measured. For each of the \((d-1)^2 = 9\) observables with a non-identity Pauli on both qubits we have \(d = 4\) different initial preparations of eigenstates of the pre-circuit-conjugated observable. Meanwhile, for the remaining \(2(d - 1) = 6\) observables, which have exactly one identity term, there are only \(2\) different preparations. This yields a total of \(9\cdot 4 + 6 \cdot 2 = 48\) preparations.

Note that we have incorporated the minus sign into the observables paired with preparations of the negative eigenstates. This means that for ideal, noiseless data collection we will get expectations all equal to positive 1 (we have chosen our observables so that the ideal state is always in the positive eigenvalue subspace).
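The setting count above can be checked with a few lines of plain arithmetic, mirroring the text:

```python
n_qubits = 2
d = 2 ** n_qubits  # dimension, 4
# observables with a non-identity Pauli on both qubits: d preparations each
settings_both = (d - 1) ** 2 * d          # 9 * 4 = 36
# observables with exactly one identity factor: 2 preparations each
settings_one_identity = 2 * (d - 1) * 2   # 6 * 2 = 12
print(settings_both + settings_one_identity)  # -> 48, matching the list below
```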

```
[7]:
```

```
print('The program is, again:')
print(process_exp.program)
print('There are more settings:')
print(process_exp.settings_string())
```

```
The program is, again:
H 0
CNOT 0 1
There are more settings:
0: Z+_0 * X+_1→(1+0j)*X1
1: Z+_0 * X-_1→(-1+0j)*X1
2: Z+_0 * Y+_1→(1+0j)*Z0Y1
3: Z+_0 * Y-_1→(-1+0j)*Z0Y1
4: Z+_0 * Z+_1→(1+0j)*Z0Z1
5: Z+_0 * Z-_1→(-1+0j)*Z0Z1
6: X+_0 * Z+_1→(1+0j)*Z0
7: X+_0 * X+_1→(1+0j)*Z0X1
8: X+_0 * X-_1→(-1+0j)*Z0X1
9: X+_0 * Y+_1→(1+0j)*Y1
10: X+_0 * Y-_1→(-1+0j)*Y1
11: X+_0 * Z+_1→(1+0j)*Z1
12: X+_0 * Z-_1→(-1+0j)*Z1
13: X-_0 * Z+_1→(-1+0j)*Z0
14: X-_0 * X+_1→(-1+0j)*Z0X1
15: X-_0 * X-_1→(1+0j)*Z0X1
16: X-_0 * Y+_1→(-1+0j)*Y1
17: X-_0 * Y-_1→(1+0j)*Y1
18: X-_0 * Z+_1→(-1+0j)*Z1
19: X-_0 * Z-_1→(1+0j)*Z1
20: Y+_0 * Z+_1→(-1+0j)*Y0X1
21: Y+_0 * X+_1→(-1+0j)*Y0
22: Y+_0 * X-_1→(1-0j)*Y0
23: Y+_0 * Y+_1→(1+0j)*X0Z1
24: Y+_0 * Y-_1→(-1+0j)*X0Z1
25: Y+_0 * Z+_1→(-1+0j)*X0Y1
26: Y+_0 * Z-_1→(1-0j)*X0Y1
27: Y-_0 * Z+_1→(1-0j)*Y0X1
28: Y-_0 * X+_1→(1-0j)*Y0
29: Y-_0 * X-_1→(-1+0j)*Y0
30: Y-_0 * Y+_1→(-1+0j)*X0Z1
31: Y-_0 * Y-_1→(1+0j)*X0Z1
32: Y-_0 * Z+_1→(1-0j)*X0Y1
33: Y-_0 * Z-_1→(-1+0j)*X0Y1
34: Z+_0 * Z+_1→(1+0j)*X0X1
35: Z+_0 * X+_1→(1+0j)*X0
36: Z+_0 * X-_1→(-1+0j)*X0
37: Z+_0 * Y+_1→(1+0j)*Y0Z1
38: Z+_0 * Y-_1→(-1+0j)*Y0Z1
39: Z+_0 * Z+_1→(-1+0j)*Y0Y1
40: Z+_0 * Z-_1→(1-0j)*Y0Y1
41: Z-_0 * Z+_1→(-1+0j)*X0X1
42: Z-_0 * X+_1→(-1+0j)*X0
43: Z-_0 * X-_1→(1+0j)*X0
44: Z-_0 * Y+_1→(-1+0j)*Y0Z1
45: Z-_0 * Y-_1→(1+0j)*Y0Z1
46: Z-_0 * Z+_1→(1-0j)*Y0Y1
47: Z-_0 * Z-_1→(-1+0j)*Y0Y1
```

###### Step 3. Acquire the data¶

We will use the QVM, but at this point you could use a QPU.

We use a simple wrapper, `acquire_dfe_data`, around the standard `estimate_observables` method to run each `ExperimentSetting` in our `ObservablesExperiment`. By default `acquire_dfe_data` includes an additional error mitigation step that attempts to calibrate the estimate of each observable expectation in order to combat readout error.

Note that `acquire_dfe_data` returns a `list` of `ExperimentResult`s, which is a dataclass defined in the module `observable_estimation.py`.

The details of the dataclass and error mitigation strategies are given in the observable estimation ipython notebook.

```
[8]:
```

```
# get some NOISELESS data
results = acquire_dfe_data(qvm, process_exp, num_shots=1000)
```

```
[9]:
```

```
# look at the results -- we expect all expectations to be one since there is no noise.
print("Operator Expectations")
print([res.expectation for res in results])
print('\n')
print("Calibration Expectations")
print([res.calibration_expectation for res in results])
```

```
Operator Expectations
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
Calibration Expectations
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```

```
[10]:
```

```
# get some NOISY data
n_results_proce = acquire_dfe_data(noisy_qvm, process_exp, num_shots=1000)
n_results_state = acquire_dfe_data(noisy_qvm, state_exp, num_shots=1000)
```

```
[11]:
```

```
# look at it
print("Noisy Operator Expectations")
print([np.round(res.expectation, 4) for res in n_results_proce])
print('\n')
# given some readout noise we scale up our estimates by the inverse calibration expectation
print("Noisy Calibration Expectations")
print([res.calibration_expectation for res in n_results_proce])
```

```
Noisy Operator Expectations
[0.9853, 0.9649, 0.9674, 0.9643, 0.9774, 0.9722, 1.0182, 0.9877, 0.9741, 0.9896, 0.9839, 0.9809, 0.9708, 0.9738, 0.9748, 0.9657, 0.9919, 0.9781, 0.963, 0.9719, 0.9553, 0.9487, 0.9777, 0.9547, 0.9713, 0.9789, 0.9527, 0.9572, 0.971, 0.9443, 0.9783, 0.9522, 0.968, 0.936, 0.9745, 0.9989, 0.973, 0.9661, 0.993, 0.9968, 0.98, 0.9573, 0.9573, 0.9831, 0.9533, 0.9725, 0.9412, 0.9664]
Noisy Calibration Expectations
[0.884, 0.884, 0.7975, 0.7975, 0.773, 0.773, 0.877, 0.7725, 0.7725, 0.869, 0.869, 0.891, 0.891, 0.877, 0.7725, 0.7725, 0.869, 0.869, 0.891, 0.891, 0.7935, 0.897, 0.897, 0.784, 0.784, 0.7815, 0.7815, 0.7935, 0.897, 0.897, 0.784, 0.784, 0.7815, 0.7815, 0.7845, 0.889, 0.889, 0.781, 0.781, 0.774, 0.774, 0.7845, 0.889, 0.889, 0.781, 0.781, 0.774, 0.774]
```
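Schematically, the calibration works by dividing the raw expectation by the calibration expectation, so that readout attenuation common to both cancels. A minimal numeric sketch with hypothetical values:

```python
# hypothetical raw estimate, attenuated by readout error
raw_expectation = 0.85
# the same observable measured on a state that ideally gives expectation 1
calibration_expectation = 0.884
# the corrected estimate (as reported by the library after calibration)
corrected = raw_expectation / calibration_expectation
print(round(corrected, 4))  # -> 0.9615
```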

###### Step 4. Apply estimators to the data, i.e. “do DFE”¶

This just performs the calculations we did at the top of the notebook.

**Process DFE**

```
[12]:
```

```
# estimate using NOISELESS data
fid_est, fid_std_err = estimate_dfe(results, 'process')
print('Fidelity point estimate is: ',fid_est)
print('The standard error of the fidelity point estimate is: ', fid_std_err)
```

```
Fidelity point estimate is: 1.0
The standard error of the fidelity point estimate is: 0.0
```

```
[13]:
```

```
# estimate using NOISY data
nfid_est, nfid_std_err = estimate_dfe(n_results_proce, 'process')
print('Fidelity point estimate is', np.round(nfid_est, 4))
print('The std error of the fidelity point estimate is', np.round(nfid_std_err, 4))
```

```
Fidelity point estimate is 0.9784
The std error of the fidelity point estimate is 0.0019
```

**State DFE**

```
[14]:
```

```
# estimate using NOISY data
nfid_est_state, nfid_std_err_state = estimate_dfe(n_results_state, 'state')
print('Fidelity point estimate is', np.round(nfid_est_state, 4))
print('The std error of the fidelity point estimate is', np.round(nfid_std_err_state, 4))
```

```
Fidelity point estimate is 0.9884
The std error of the fidelity point estimate is 0.0081
```

###### Step 5. Visualize¶

**State DFE**

We will start with state DFE as it is the simpler case.

Our strategy here will be to reconstruct the state using a state tomography estimator. This is possible since the settings used in DFE are a strict subset of the tomography settings. In fact, this makes the strategy of DFE clear: we only need to estimate those components of the ideal state that are non-zero in order to compute the fidelity.

```
[15]:
```

```
from forest.benchmarking.tomography import iterative_mle_state_estimate
```

```
[16]:
```

```
rho_est = iterative_mle_state_estimate(n_results_state, qubits=[0,1])
np.round(rho_est, 3)
```

```
[16]:
```

```
array([[0.495+0.j, 0. +0.j, 0. +0.j, 0.494+0.j],
[0. +0.j, 0.005+0.j, 0.005+0.j, 0. +0.j],
[0. +0.j, 0.005+0.j, 0.005+0.j, 0. +0.j],
[0.494+0.j, 0. +0.j, 0. +0.j, 0.495+0.j]])
```

```
[17]:
```

```
import matplotlib.pyplot as plt
from forest.benchmarking.utils import n_qubit_pauli_basis
from forest.benchmarking.operator_tools.superoperator_transformations import vec, computational2pauli_basis_matrix
from forest.benchmarking.plotting.state_process import plot_pauli_rep_of_state, plot_pauli_bar_rep_of_state
# convert to pauli representation
n_qubits = 2
pl_basis = n_qubit_pauli_basis(n_qubits)
c2p = computational2pauli_basis_matrix(2*n_qubits)
rho_true_pauli = np.real(c2p @ vec(rho_ideal))
rho_mle_pauli = np.real(c2p @ vec(rho_est))
fig1, (ax3, ax4) = plt.subplots(1, 2, figsize=(10,5))
title_res = f"Estimated via DFE data using MLE \n" f"DFE Fidelity = {np.round(nfid_est_state, 5)} ± {np.round(nfid_std_err_state, 6)}"
plot_pauli_bar_rep_of_state(rho_true_pauli.flatten(), ax=ax3, labels=pl_basis.labels, title='Ideal State')
plot_pauli_bar_rep_of_state(rho_mle_pauli.flatten(), ax=ax4, labels=pl_basis.labels, title=title_res)
fig1.tight_layout()
```

To be clear, each entry on the X axis corresponds to an observable. If we ran full state tomography on the noisy state then the plot on the right would likely have small values for each entry due to noise. However, we note that to compute the fidelity we only need to estimate those entries for which the ideal state is non-zero. The plot on the right shows that we estimated exactly these four entries.

**Process DFE**

The same principle applies to process DFE, but now we must remember that for processes we vary over input states and measurements both.

```
[18]:
```

```
from forest.benchmarking.tomography import pgdb_process_estimate
choi_mle_est = pgdb_process_estimate(n_results_proce, qubits)
# sneak peak at part of the estimated process
np.real_if_close(np.round(choi_mle_est, 2))[0:4]
```

```
[18]:
```

```
array([[ 0.5 , 0. , -0. , 0.5 , -0. , 0.49, 0.49, -0. , 0.49,
0. , -0. , -0.49, -0. , 0.39, -0.39, -0. ],
[ 0. , 0. , -0. , 0. , 0.01, 0. , 0. , -0.01, 0. ,
0.01, 0.01, -0. , 0.03, 0. , -0. , 0.03],
[-0. , -0. , 0. , -0. , -0.01, -0. , -0. , 0.01, -0. ,
-0.01, -0.01, 0. , -0.03, -0. , 0. , -0.03],
[ 0.5 , 0. , -0. , 0.5 , -0. , 0.49, 0.49, -0. , 0.49,
0. , -0. , -0.49, -0. , 0.39, -0.39, -0. ]])
```

```
[19]:
```

```
from forest.benchmarking.plotting.state_process import plot_pauli_transfer_matrix
from forest.benchmarking.operator_tools import choi2pauli_liouville, kraus2pauli_liouville
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,5))
title_res = f"Estimated via DFE data using MLE \n" f"DFE Fidelity = {np.round(nfid_est, 5)} ± {np.round(nfid_std_err, 6)}"
plot_pauli_transfer_matrix(np.real(kraus2pauli_liouville(U_ideal)), ax1, title='Ideal')
plot_pauli_transfer_matrix(np.real(choi2pauli_liouville(choi_mle_est)), ax2, title=title_res)
plt.tight_layout()
```

##### State fidelity between \(\left|1\right\rangle\) and \(|\theta\rangle = R_y(\theta)\left|1\right\rangle\)¶

In this section we check that state DFE is working correctly by comparing with an analytical calculation. Essentially we would like to prepare \(|1\rangle\) but we simulate an incorrect preparation \(|\theta\rangle = R_y(\theta)|1\rangle\). The fidelity in that case is

\[F(\theta) = |\langle 1 | R_y(\theta) | 1 \rangle|^2 = \cos^2(\theta/2) = \tfrac{1}{2}\left(1 + \cos\theta\right)\]

So the point of this section is to try to “experimentally” plot the fidelity expression as a function of \(\theta\).
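The analytic expression can be checked numerically; a small sketch computing \(|\langle 1|R_y(\theta)|1\rangle|^2\) and comparing it to \(\tfrac{1}{2}(1+\cos\theta)\), the "Theory" curve plotted later:

```python
import numpy as np

def ry(theta):
    # matrix for R_y(theta) = exp(-i * theta * Y / 2)
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def fidelity_to_one(theta):
    # |<1| R_y(theta) |1>|^2
    one = np.array([0.0, 1.0])
    return np.abs(one @ ry(theta) @ one) ** 2

thetas = np.linspace(0, np.pi, 10)
# agrees with (1 + cos(theta)) / 2 at every angle
assert np.allclose([fidelity_to_one(t) for t in thetas], 0.5 + 0.5 * np.cos(thetas))
```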

```
[20]:
```

```
qubit = 0
ideal_prep_program = Program(X(qubit))
# generate state dfe experiment to estimate fidelity to |1 >
experiment = generate_exhaustive_state_dfe_experiment(bm, ideal_prep_program, [qubit])
```

To simulate the error we will modify the ideal program in the experiment to be `RY(theta, 0) X(0)` for various angles \(\theta\); this is the case of a unitary rotation error occurring after our ideal preparation `X(0)`.

Of course, for characterizing the actual noise on a real QPU our program would simply remain `X(0)`, since this is the state preparation we hope will prepare the state \(|1 \rangle\) but which in practice will be affected by noise. Here we must append our own noise `RY(theta)` to the program for the purposes of simulation.

```
[21]:
```

```
points = 10
res = []
res_std_err = []
# loop over different angles
for theta in np.linspace(0, np.pi, points):
    # reuse the same experiment object but modify its program
    # field to do the "noisy" program
    experiment.program = ideal_prep_program + RY(theta, qubit)
    ry_state_data = acquire_dfe_data(qvm, experiment, num_shots=1000)
    fid_est, fid_std_err = estimate_dfe(ry_state_data, 'state')
    res.append(fid_est)
    res_std_err.append(2*fid_std_err)
```

```
[22]:
```

```
pyplot.errorbar(np.linspace(0, np.pi, points), res, res_std_err, fmt=".", label="Simulation")
pyplot.plot(np.linspace(0, np.pi, points), (1/2+1/2*np.cos(np.linspace(0, np.pi, points))), label="Theory")
pyplot.xlabel(r"$\theta$")
pyplot.ylabel("Fidelity")
pyplot.legend()
pyplot.title(r"State fidelity between $\left|1\right\rangle$ and $R_y(\theta)\left|1\right\rangle$")
pyplot.show()
```

As expected, the fidelity decays from the ideal value of 1 with increasing \(\theta\).

##### Process fidelity between \(I\) and \(R_y(\theta)\)¶

Like the state fidelity example above we will construct a DFE experiment to estimate the fidelity between the ideal identity operation and a ‘noisy identity’ where we insert the unitary error \(R_y(\theta)\). Then we will show that process DFE is consistent with the analytical calculation.

The process fidelity is easy to calculate using Eqn 5 from arXiv:quant-ph/0701138. The fidelity between the unitary \(U_0\) and the quantum operation specified by the Kraus operators \(\{ G_k \}\) is

\[\bar F = \frac{\sum_k |{\rm Tr}[M_k]|^2 + d}{d^2 + d}\]

where \(M_k = U_0^\dagger G_k\).

In the context we care about we have a single Kraus operator \(\{ G_k \} = \{R_y(\theta)\}\) and \(U_0=I \implies M_k = R_y(\theta)\). So the fidelity becomes

\[\bar F(\theta) = \frac{|{\rm Tr}[R_y(\theta)]|^2 + 2}{6} = \frac{4\cos^2(\theta/2) + 2}{6} = \frac{1 + 2\cos^2(\theta/2)}{3}\]

where we used \(R_y(\theta) = \exp[-i \theta Y /2]= \cos(\theta/2) I - i \sin(\theta/2)Y\).
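We can verify the closed form numerically; a short sketch evaluating the fidelity expression for a single Kraus operator \(R_y(\theta)\) with \(d = 2\) and comparing it to the theory curve \((1 + 2\cos^2(\theta/2))/3\) plotted later:

```python
import numpy as np

def avg_fidelity_identity_vs_ry(theta):
    # (|Tr[M]|^2 + d) / (d^2 + d) with M = Ry(theta) and d = 2
    ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                   [np.sin(theta / 2),  np.cos(theta / 2)]])
    d = 2
    return (np.abs(np.trace(ry)) ** 2 + d) / (d ** 2 + d)

thetas = np.linspace(0, np.pi, 10)
closed_form = (1 + 2 * np.cos(thetas / 2) ** 2) / 3
assert np.allclose([avg_fidelity_identity_vs_ry(t) for t in thetas], closed_form)
```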

```
[23]:
```

```
# here our ideal program is the Identity process
qubit = 0
ideal_process_program = Program(I(qubit))
# generate process DFE experiment to estimate fidelity to I
expt = generate_exhaustive_process_dfe_experiment(bm, ideal_process_program, [qubit])
```

Now, as with state DFE, we will simulate a noisy implementation of the identity by modifying the program that is actually run in the experiment to be `RY(theta, 0)` instead of just the identity.

As above, we emphasize that modifying the program in this way only makes sense if you are inserting your own simulation of noise. To characterize real noise on a QPU the program should not be changed after generating the experiment, since the generation code tailors the experiment to the provided program.

**Note:** in some of the cells below there is a comment `# NBVAL_SKIP` that is used to speed up our tests by skipping that particular cell.

```
[24]:
```

```
# NBVAL_SKIP
num_points = 10
thetas = np.linspace(0, np.pi, num_points)
res = []
res_std_err = []
for theta in thetas:
    # modify the experiment object to do the noisy program instead
    expt.program = ideal_process_program + RY(theta, qubit)
    ry_proc_data = acquire_dfe_data(qvm, expt, num_shots=500)
    fid_est, fid_std_err = estimate_dfe(ry_proc_data, 'process')
    res.append(fid_est)
    res_std_err.append(2*fid_std_err)
```

```
[25]:
```

```
# NBVAL_SKIP
pyplot.errorbar(thetas, res, res_std_err, fmt=".", label="Simulation")
pyplot.plot(thetas, (1 + 2*np.cos(thetas/2)**2)/3, label="Theory")
pyplot.xlabel(r"$\theta$")
pyplot.ylabel("Fidelity")
pyplot.legend()
pyplot.ylim(0.25, 1.05)
pyplot.title(r"Process fidelity between $I$ and $R_y(\theta)$")
pyplot.show()
```

##### Advanced¶

###### Monte Carlo Sampling of large graph states¶

We can do Monte Carlo (random) sampling of `ExperimentSettings` for large states or processes using

`generate_monte_carlo_state_dfe_experiment`

`generate_monte_carlo_process_dfe_experiment`

Generally you need to specify the number of terms you would like to sample to obtain the desired time savings.

```
[26]:
```

```
import networkx as nx
from matplotlib import pyplot as plt
from forest.benchmarking.entangled_states import create_graph_state
```

We will demonstrate state DFE on a graph state over 5 qubits. First, we will take some subgraph of the larger QC topology.

```
[27]:
```

```
nx.draw(noisy_qvm.qubit_topology(), with_labels=True)
# we will do a subgraph
graph = nx.from_edgelist([(0, 1), (0, 3), (1, 2), (1, 4), (3, 4)])
```

```
/home/kylegulshen/anaconda3/lib/python3.6/site-packages/networkx/drawing/nx_pylab.py:518: MatplotlibDeprecationWarning:
The iterable function was deprecated in Matplotlib 3.1 and will be removed in 3.3. Use np.iterable instead.
if not cb.iterable(width):
/home/kylegulshen/anaconda3/lib/python3.6/site-packages/networkx/drawing/nx_pylab.py:565: MatplotlibDeprecationWarning:
The is_numlike function was deprecated in Matplotlib 3.0 and will be removed in 3.2. Use isinstance(..., numbers.Number) instead.
if cb.is_numlike(alpha):
```

We use a helper to create the prep program.

```
[28]:
```

```
graph_prep_prog = create_graph_state(graph)
qubits = list(graph_prep_prog.get_qubits())
print(graph_prep_prog)
```

```
H 0
H 1
H 3
H 2
H 4
CZ 0 1
CZ 0 3
CZ 1 2
CZ 1 4
CZ 3 4
```

Generate both exhaustive and Monte Carlo experiments for comparison.

```
[29]:
```

```
gstate_exp_exh = generate_exhaustive_state_dfe_experiment(bm, graph_prep_prog, qubits)
gstate_exp_mc = generate_monte_carlo_state_dfe_experiment(bm, graph_prep_prog, qubits, n_terms=8)
```

```
[30]:
```

```
num_exh_exp = len(list(gstate_exp_exh.setting_strings()))
num_mc_exp = len(list(gstate_exp_mc.setting_strings()))
print(f'In exhaustive state DFE there are {num_exh_exp} experiments.\n' )
print(f'In monte carlo state DFE we chose {num_mc_exp} experiments.' )
```

```
In exhaustive state DFE there are 31 experiments.
In monte carlo state DFE we chose 8 experiments.
```
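The exhaustive count follows from the \(2^n - 1\) non-trivial stabilizer-group elements of an \(n\)-qubit stabilizer state; a one-line check:

```python
n_qubits = 5
# one ExperimentSetting per non-trivial stabilizer element of the graph state
num_exhaustive_settings = 2 ** n_qubits - 1
print(num_exhaustive_settings)  # -> 31, matching the output above
```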

Run each experiment and compare estimates and runtimes.

```
[31]:
```

```
# NBVAL_SKIP
# because of NBVAL_SKIP the `%%time` magic won't work, so we time manually
from time import time
start_time = time()
graph_state_mc_data = acquire_dfe_data(noisy_qvm, gstate_exp_mc, num_shots=500)
fid_est, fid_std_err = estimate_dfe(graph_state_mc_data, 'state')
print(f'The five qubit graph fidelity estimate is {fid_est}.\n')
print('Monte-Carlo took ', np.round(time()-start_time, 2), 'seconds.')
```

```
The five qubit graph fidelity estimate is 0.9526655134837161.
Monte-Carlo took 45.75 seconds.
```

```
[32]:
```

```
# NBVAL_SKIP
start_time = time()
graph_state_exh_data = acquire_dfe_data(noisy_qvm, gstate_exp_exh, num_shots=500)
fid_est, fid_std_err = estimate_dfe(graph_state_exh_data, 'state')
print(f'The five qubit graph fidelity estimate is {fid_est}.\n')
print('Exhaustive took ', np.round(time()-start_time, 2), 'seconds.')
```

```
The five qubit graph fidelity estimate is 0.9513485834466823.
Exhaustive took 205.28 seconds.
```

###### DFE measurements that include spectator qubits to signal cross-talk¶

Suppose you wanted to characterize a CZ gate.

It turns out that in many superconducting-qubit architectures several kinds of cross-talk are at play, that is, noise which affects qubits not directly involved in the gate.

One way to measure such noise is to do the gate you care about and additionally specify spectator qubits not involved in the gate when creating the experiment. If the actual process is not the identity on those qubits then DFE will pick it up.

Of course, such experiments are best done on the QPU or at the least with a noise model that includes these effects.

```
[33]:
```

```
prog = Program(CZ(0,1))
qubits = [0,1,2]
```

```
[34]:
```

```
print('The CZ gate is: CZ(0,1)')
nx.draw(nx.from_edgelist([(0, 1), (1, 2)]), with_labels=True)
```

```
The CZ gate is: CZ(0,1)
```


```
[35]:
```

```
xtalk_proc_exp = generate_monte_carlo_process_dfe_experiment(bm, prog, qubits, n_terms=20)
```

```
[36]:
```

```
len(list(xtalk_proc_exp.setting_strings()))
```

```
[36]:
```

```
20
```

```
[37]:
```

```
xtalk_data = acquire_dfe_data(qvm, xtalk_proc_exp, num_shots=500)
```

```
[38]:
```

```
fid_est, fid_std_err = estimate_dfe(xtalk_data, 'process')
fid_est
```

```
[38]:
```

```
1.0
```

Again, we expect this to be 1.0 unless we are running on a real QPU or incorporate cross-talk into the QVM noise model.

If we wanted to, we could “amplify” the cross-talk error by applying the gate many times:

```
[39]:
```

```
prog = Program([CZ(0,1)]*3)
```

###### Parallel state and process DFE¶

The `ObservablesExperiment` framework allows for easy parallelization of experiments that operate on disjoint sets of qubits. Below we will demonstrate the simple example of performing process DFE on two separate bit flip processes `Program(X(0))` and `Program(X(1))`. To run each experiment in serial would require \(n_1 + n_2 = 2n\) experimental runs (\(n_1 = n_2 = n\) in this case), but when we run a ‘parallel’ experiment we need only \(n\) runs.

Note that the parallel experiment is not the same as doing DFE on the program `Program(X(0), X(1))`, because in the latter case we need to do more data acquisition runs on the qc and we get more information back; even if each process perfectly transforms the 1q states it could still behave erroneously on some 2q states, but we would only catch that if we did 2q DFE. The `ExperimentSetting`s for the 2q experiment are a superset of the parallel 1q settings. We also cannot directly compare a parallel experiment with two serial experiments, because in a parallel experiment ‘cross-talk’ and other multi-qubit effects can impact the overall process; that is, the physics of ‘parallel’ experiments cannot in general be neatly factored into two serial experiments.

See the linked notebook for more explanation and words of caution.

```
[40]:
```

```
from forest.benchmarking.observable_estimation import ObservablesExperiment, merge_disjoint_experiments
disjoint_sets_of_qubits = [(0,),(1,)]
programs = [Program(X(*q)) for q in disjoint_sets_of_qubits]
expts_to_parallelize = []
for qubits, program in zip(disjoint_sets_of_qubits, programs):
    expt = generate_exhaustive_process_dfe_experiment(bm, program, qubits)
    expts_to_parallelize.append(expt)
# get a merged experiment with grouped settings for parallel data acquisition
parallel_expt = merge_disjoint_experiments(expts_to_parallelize)
print(f'Original number of runs: {sum(len(expt) for expt in expts_to_parallelize)}')
print(f'Parallelized number of runs: {len(parallel_expt)}\n')
print(parallel_expt)
```

```
Original number of runs: 12
Parallelized number of runs: 6
X 0; X 1
0: X+_0→(1+0j)*X0, X+_1→(1+0j)*X1
1: X-_0→(-1+0j)*X0, X-_1→(-1+0j)*X1
2: Y+_0→(-1+0j)*Y0, Y+_1→(-1+0j)*Y1
3: Y-_0→(1-0j)*Y0, Y-_1→(1-0j)*Y1
4: Z+_0→(-1+0j)*Z0, Z+_1→(-1+0j)*Z1
5: Z-_0→(1-0j)*Z0, Z-_1→(1-0j)*Z1
```

Collect the data. Separate the results by qubit to get back estimates for each process.

```
[41]:
```

```
from forest.benchmarking.observable_estimation import get_results_by_qubit_groups
parallel_results = acquire_dfe_data(noisy_qvm, parallel_expt, num_shots = 500)
individual_results = get_results_by_qubit_groups(parallel_results, disjoint_sets_of_qubits)
fidelity_estimates = []
process_estimates = []
for q in disjoint_sets_of_qubits:
    fidelity_estimate = estimate_dfe(individual_results[q], 'process')
    fidelity_estimates.append(fidelity_estimate)
    print(fidelity_estimate)
    proc_estimate = pgdb_process_estimate(individual_results[q], q)
    process_estimates.append(proc_estimate)

fig, axes = plt.subplots(1, len(process_estimates), figsize=(12,5))
for idx, est in enumerate(process_estimates):
    plot_pauli_transfer_matrix(choi2pauli_liouville(est), axes[idx], title=f'Estimate {idx}')
plt.tight_layout()
```

```
(1.0048337845674669, 0.0044255804943458)
(0.9980403537811947, 0.004362999583385503)
```


- `do_dfe`(qc, benchmarker, program, qubits, …): A wrapper around experiment generation, data acquisition, and estimation that runs a DFE experiment and returns the (fidelity, std_err) pair along with the experiment and results.

#### State DFE¶

- `generate_exhaustive_state_dfe_experiment`(…): Estimate state fidelity by exhaustive direct fidelity estimation.
- `generate_monte_carlo_state_dfe_experiment`(…): Estimate state fidelity by sampled direct fidelity estimation.

#### Process DFE¶

- `generate_exhaustive_process_dfe_experiment`(…): Estimate process fidelity by exhaustive direct fidelity estimation (DFE).
- `generate_monte_carlo_process_dfe_experiment`(…): Estimate process fidelity by randomly sampled direct fidelity estimation.

#### Data Acquisition¶

- `acquire_dfe_data`(qc, expt, num_shots, …): Acquire data necessary for a direct fidelity estimate (DFE).

#### Analysis¶

- `estimate_dfe`(results, kind): Analyse data from experiments to obtain a direct fidelity estimate (DFE).

### Randomized Benchmarking¶

Randomized benchmarking involves running long sequences of random Clifford group gates which compose to the identity to observe how performance degrades with increasing circuit depth.

#### Randomized Benchmarking¶

In this notebook we explore the subset of methods in `randomized_benchmarking.py` that are related specifically to standard randomized benchmarking.

This includes

- generating pyquil `Program`s that constitute a sequence of random Clifford gates
- grouping sequences on disjoint sets of qubits into ‘simultaneous’ or ‘parallel’ RB experiments
- running these experiments on a quantum computer and isolating the relevant measurement results
- fitting an exponential decay model to the data in order to estimate the RB decay parameter
- converting the estimated RB decay parameter into an estimate of the average Clifford gate error on the given qubits

For information and examples concerning specifically interleaved RB or unitarity RB, please refer to the respective dedicated notebooks in `/examples/`.
##### Motivation and Background¶

Randomized benchmarking is a commonly used protocol for characterizing an ‘average performance’ for gates on a quantum computer. It exhibits efficient scaling in the number of qubits over which the characterized gateset acts and is robust to state preparation and measurement noise. The RB decay parameter which is estimated in this procedure can be related to an estimate of the ‘average gate error’ relative to the ideal, although some care is needed in interpreting this quantity; in particular, note that the estimated gate error is *not* the gate infidelity averaged over the *native gateset* of our QPU. When we say gate error below we refer to this more nuanced notion of average *Clifford* gate error.

The main idea of the protocol is to employ random sequences of gates where the ideal composite operation of the sequence is the identity. To produce such a sequence of depth `m+1`, each of the first `m` gates in the sequence is picked uniformly at random from the Clifford group. Using the group composition and inverse properties, the last gate is then uniquely determined as the Clifford element which inverts the composition of the previous `m` gates. For illustration refer to the snippet from appendix A1 of Logical Randomized Benchmarking.

In the presence of noise the actual sequence of Cliffords \([U^{-1}, U_m, U_{m-1}, \dots, U_2, U_1]\) is affected by noise \(\Lambda\), and there is state preparation error \(\Lambda_P\) and measurement error \(\Lambda_M\), so the circuit does not enact an identity operation. Instead there is some ‘survival probability’ `< 1` of measuring the initial state after enacting the sequence. After estimating this ‘survival probability’ over many independent random sequences of increasing depth \(d\) one can fit an exponential decay of the form (under some assumptions):

\[\bar F(d) = A_0\, p^d + B_0\]

We get this relatively simple form thanks to the fact that averaging over Clifford sequences effectively averages any noise channel \(\Lambda\) to a depolarizing channel; this, in turn, is because the Clifford group forms a unitary 2-design.

Below we’ve saved a plot of 2q RB data. Each data point is the survival probability estimated for a single sequence of a particular depth. The fit parameters \(A_0\) = amplitude, \(p\) = decay, \(B_0\) = baseline are reported in the `variables` section below the plot:

The parameter \(p\) estimated from this fit is the RB ‘decay’, which can be related to the average gate error \(r\) by

\[r = \frac{(d - 1)(1 - p)}{d}, \qquad d = 2^{\text{number of qubits}}\]

A brief summary of the procedure follows:

- Select some set of depths over which you expect the survival probability to decay significantly
- Generate many random sequences for each depth
- Estimate the ‘survival probability’ for each sequence by taking the fraction of outcomes matching the initial state over many shots. Here we use the `ObservablesExperiment` framework, which estimates observable expectation values from which we can calculate the survival probability.
- Fit an exponential decay model to the estimated survival probabilities.
- Extract the decay parameter from the fit and convert it to an ‘average gate error’
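The last two steps can be sketched with the standard formulas (an illustration only; the library's `fit_rb_results` performs the actual fit):

```python
def rb_decay_model(depth, amplitude, decay, baseline):
    # survival probability model: F(d) = A0 * p**d + B0
    return amplitude * decay ** depth + baseline

def rb_decay_to_gate_error(p, n_qubits):
    # average gate error: r = (d - 1) * (1 - p) / d, with d = 2**n_qubits
    d = 2 ** n_qubits
    return (d - 1) * (1 - p) / d

# e.g. a single-qubit decay of p = 0.99 corresponds to r of about 0.005
print(rb_decay_to_gate_error(0.99, n_qubits=1))
```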

Note that the interpretation of RB is an active field of research; see Wallman and Proctor et al. for more information.

The references below are a starting point for general details about the RB protocol:

- Magesan, Gambetta, and Emerson, *Scalable and Robust Randomized Benchmarking of Quantum Processes*, arXiv:1009.3639.
- Knill et al., *Randomized Benchmarking of Quantum Gates*, arXiv:0707.0963.
- Emerson, Alicki, and Życzkowski, *Scalable Noise Estimation with Random Unitary Operators*, arXiv:quant-ph/0503243.

##### A simple single qubit example¶

We’ll start by importing the necessary methods from the `randomized_benchmarking.py` module and setting up a demo quantum computer object to characterize, along with a benchmarker object that will generate our Clifford sequences.

Since our demo uses a quantum virtual machine (QVM) you will need a qvm server. Additionally, we currently rely on a benchmarker object to generate the Clifford sequences, which requires a quilc server.

```
[1]:
```

```
# Run these in a terminal first:
# $ quilc -S
# $ qvm -S
import numpy as np
from pyquil.api import get_benchmarker
from forest.benchmarking.randomized_benchmarking import (generate_rb_sequence,
    generate_rb_experiments, acquire_rb_data,
    get_stats_by_qubit_group, fit_rb_results)
%matplotlib inline
```

```
[2]:
```

```
from pyquil.api import get_qc, get_benchmarker
qc = get_qc("9q-square-noisy-qvm")
bm = get_benchmarker()
```

###### Create a single sequence¶

First we can generate a single sequence of 5 Clifford gates on qubit 0 to inspect. (Note that we won’t have to call this individually to create a typical experiment.)

```
[3]:
```

```
# the results are stochastic and can be seeded with random_seed = #
sequence = generate_rb_sequence(bm, qubits=[0], depth=5)
for gate in sequence:
    print(gate)  # each 'gate' is a separate pyquil Program
```

```
RZ(-pi) 0
RZ(-pi) 0
RX(-pi/2) 0
RZ(-pi/2) 0
RX(-pi/2) 0
RX(pi/2) 0
RZ(pi/2) 0
RX(-pi/2) 0
RX(-pi/2) 0
RZ(pi/2) 0
RZ(-pi/2) 0
RX(-pi/2) 0
```

###### Generate a single qubit RB experiment¶

Now let’s start on a full experiment on a single qubit. For the RB protocol we need to generate many sequences at many different depths, and we need to measure each sequence many times. We use the `ObservablesExperiment` framework, consistent with the rest of forest.benchmarking, to estimate the expectation of the `Z` observable, \(E[Z]\), after running each sequence on our qubit; the survival probability will simply be \((E[Z] + 1)/2\).
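The conversion from a \(Z\) expectation to a survival probability is a one-liner; here is a minimal sketch (the helper name is our own, not part of the library):

```python
def z_expectation_to_survival(z_expectation: float) -> float:
    """Map an estimated E[Z] in [-1, 1] to a survival probability in [0, 1]."""
    return (z_expectation + 1) / 2

# a perfectly surviving |0> state has E[Z] = 1, i.e. survival probability 1.0;
# a fully depolarized state has E[Z] = 0, i.e. survival probability 0.5
print(z_expectation_to_survival(1.0))  # 1.0
print(z_expectation_to_survival(0.0))  # 0.5
```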

Since we will use the same experiment generation for ‘simultaneous’ rb experiments we will need to specify our qubit as belonging to an isolated single-qubit group.

```
[4]:
```

```
qubit_groups = [(2,)] # characterize the 1q gates on qubit 2
num_sequences_per_depth = 10
depths = [d for d in [2,25,50,125] for _ in range(num_sequences_per_depth)] # specify the depth of each sequence
experiments_1q = generate_rb_experiments(bm, qubit_groups, depths)
print(experiments_1q[0])
# shows the overall sequence being generated
# and that we'll be initializing qubit 2 to the zero state and measuring the Z observable.
```

```
RZ(-pi/2) 2; RX(-pi) 2; RZ(-pi/2) 2; RX(-pi) 2
0: Z+_2→(1+0j)*Z2
```

###### Acquire data¶

We can immediately acquire data for these experiments.

```
[5]:
```

```
num_shots = 500
# run the sequences on the qc object initialized above
results_1q = acquire_rb_data(qc, experiments_1q, num_shots, show_progress_bar=True)
print(results_1q[0])
# shows the estimates for each observable on sequence 0
# for now there's only one observable so we get a list of length 1
```

```
100%|██████████| 40/40 [00:23<00:00, 1.71it/s]
```

```
[ExperimentResult[Z+_2→(1+0j)*Z2: 0.964 +- 0.011891509576163995]]
```


###### Analyze and plot¶

We can unpack the results from each `ExperimentResult` and pass these into a fit.

```
[6]:
```

```
# in this case it is simple to unpack the results--there is one result per sequence
expectations = [[res[0].expectation] for res in results_1q]
std_errs = [[res[0].std_err] for res in results_1q]
# we can also use a convenience method, which will be especially helpful with more complicated experiments
stats_q2 = get_stats_by_qubit_group(qubit_groups, results_1q)[(2,)]
# demonstrate equivalence
np.testing.assert_array_equal(expectations, stats_q2['expectation'])
np.testing.assert_array_equal(std_errs, stats_q2['std_err'])
# fit the exponential decay model
fit_1q = fit_rb_results(depths, expectations, std_errs, num_shots)
```

This fit contains an estimate of the RB decay, from which we can get the average gate error.
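The relation used by `rb_decay_to_gate_error` is, up to conventions, the standard depolarizing-channel formula; here is a sketch of that relation (our own re-derivation, not the library code):

```python
def decay_to_avg_gate_error(decay: float, dim: int) -> float:
    """Average gate error r = (d - 1)/d * (1 - p), for RB decay p and Hilbert-space dimension d."""
    return (dim - 1) / dim * (1 - decay)

# for a single qubit (d = 2), a decay of 0.99 corresponds to an
# average error of roughly 0.005 per Clifford
print(decay_to_avg_gate_error(0.99, 2))
```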

We can also plot a figure:

```
[7]:
```

```
from forest.benchmarking.plotting import plot_figure_for_fit
fig, ax = plot_figure_for_fit(fit_1q, xlabel="Sequence Length [Cliffords]", ylabel="Survival Probability", title='RB Decay for q2')
rb_decay_q2 = fit_1q.params['decay'].value
print(rb_decay_q2)
```

```
0.9999979683958936
```

##### Simultaneous RB¶

Running simultaneous and multi-qubit experiments follows the same workflow. Here we’ll demonstrate a simultaneous 1q and 2q experiment. ‘Simultaneous’ has to be qualified somewhat on a real QPU: the physical action of gates is not guaranteed to occur in the order specified by a quil program (a quil program really only specifies causal relationships). Further, one sequence of gates may terminate before another ‘simultaneous’ sequence does. Measurement only occurs after all gates have executed.

###### Generate the simultaneous experiment¶

```
[8]:
```

```
qubit_groups = [(2,), (4,5)] # characterize the 1q gates on qubit 2, and the 2q Cliffords on (4,5)
num_sequences_per_depth = 10
# specify the depth of each simultaneous sequence
depths = [d for d in [2,25,50,125] for _ in range(num_sequences_per_depth)]
experiments_simult = generate_rb_experiments(bm, qubit_groups, depths)
print(experiments_simult[0])
# note that this sequence consists of only 1q gates on qubit 2
# while qubits 4 and 5 should have some CZ gates
```

```
RX(-pi/2) 2; RZ(-pi) 2; CZ 4 5; RX(-pi/2) 5; RX(pi/2) 4; ... 8 instrs not shown ...; CZ 4 5; RZ(pi/2) 5; RX(pi/2) 5; CZ 4 5; RZ(-pi/2) 4
0: Z+_2→(1+0j)*Z2, Z+_4 * Z+_5→(1+0j)*Z5, Z+_4 * Z+_5→(1+0j)*Z4, Z+_4 * Z+_5→(1+0j)*Z4Z5
```

###### Acquire data¶

Collecting this data on a QVM can take a few minutes.

```
[9]:
```

```
num_shots = 500
# run the sequences on the qc object initialized above
results_simult = acquire_rb_data(qc, experiments_simult, num_shots, show_progress_bar=True)
print(results_simult[0])
# shows the estimates for each observable on sequence 0
# there is one observable on q2 and three on qubits (4,5)
```

```
100%|██████████| 40/40 [01:24<00:00, 2.11s/it]
```

```
[ExperimentResult[Z+_2→(1+0j)*Z2: 0.948 +- 0.014233481654184263], ExperimentResult[Z+_4 * Z+_5→(1+0j)*Z5: 0.896 +- 0.019858700863853104], ExperimentResult[Z+_4 * Z+_5→(1+0j)*Z4: 0.9 +- 0.019493588689617924], ExperimentResult[Z+_4 * Z+_5→(1+0j)*Z4Z5: 0.828 +- 0.025076522884961535]]
```


###### Analyze and plot¶

We plot each result. While the one qubit decay has a baseline (ideal horizontal asymptote) of 0.5, the two qubit decay has a baseline of 0.25.

```
[10]:
```

```
stats_simult = get_stats_by_qubit_group(qubit_groups, results_simult)
fits = []
for qubits, stats in stats_simult.items():
    exps = stats['expectation']
    std_errs = stats['std_err']
    # fit the exponential decay model
    fit = fit_rb_results(depths, exps, std_errs, num_shots)
    fits.append(fit)
    fig, ax = plot_figure_for_fit(fit, xlabel="Sequence Length [Cliffords]", ylabel="Survival Probability", title=f'RB Decay for qubits {qubits}')
```

In general we expect that running a simultaneous RB experiment on a real QPU will result in smaller decay values due to effects such as cross-talk; this won’t show up on a QVM unless you create a noise model that captures such effects.

##### Advanced usage¶

###### Modifying the Clifford sequences¶

Experiment generation is broken into two steps for the case where you want the basic functionality provided by the RB module but need to modify the individual Clifford elements in each sequence, e.g. by replacing each element with a logical operation on logical qubits.

```
[11]:
```

```
from forest.benchmarking.randomized_benchmarking import generate_rb_experiment_sequences
# similar to generate_rb_experiments we need a benchmarker and the depths for each sequence.
# unlike generate_rb_experiments we only specify a group of qubits rather than a list of simultaneous qubit groups
qubits = (2,3)
num_sequences_per_depth = 10
depths = [d for d in [2,25,50,125] for _ in range(num_sequences_per_depth)] # specify the depth of each sequence
sequences_23 = generate_rb_experiment_sequences(bm, qubits, depths)
print(sequences_23[0])
# here we see that each sequence is given with the division into Clifford gates rather than as a single program
# if we wished, we could modify these sequences at a Clifford-gate level.
```

```
[<pyquil.quil.Program object at 0x7fbd15e79f28>, <pyquil.quil.Program object at 0x7fbd15e79d68>]
```

```
[12]:
```

```
from forest.benchmarking.randomized_benchmarking import group_sequences_into_parallel_experiments
# now we can collect our modified sequences into an experiment
expt_2q_non_simult = group_sequences_into_parallel_experiments([sequences_23], [qubits])
print(f'A 2q experiment on qubits {qubits}')
print(expt_2q_non_simult[0])
# this generalizes to simultaneous experiments
sequences_01 = generate_rb_experiment_sequences(bm, (0,1), depths)
expt_2q_simult = group_sequences_into_parallel_experiments([sequences_23, sequences_01], [qubits, (0,1)])
print('\nA simultaneous 2q experiment.')
print(expt_2q_simult[0])
```

```
A 2q experiment on qubits (2, 3)
RX(-pi/2) 3; RZ(-pi/2) 2; RX(pi/2) 2; CZ 2 3; RX(-pi/2) 2; ... 2 instrs not shown ...; RX(-pi/2) 2; CZ 2 3; RX(pi/2) 3; RX(pi/2) 2; RZ(-pi/2) 2
0: Z+_2 * Z+_3→(1+0j)*Z3, Z+_2 * Z+_3→(1+0j)*Z2, Z+_2 * Z+_3→(1+0j)*Z2Z3
A simultaneous 2q experiment.
RX(-pi/2) 3; RZ(-pi/2) 2; RX(pi/2) 2; CZ 2 3; RX(-pi/2) 2; ... 14 instrs not shown ...; RX(-pi/2) 0; CZ 0 1; RX(-pi/2) 1; CZ 0 1; RZ(-pi/2) 0
0: Z+_2 * Z+_3→(1+0j)*Z3, Z+_2 * Z+_3→(1+0j)*Z2, Z+_2 * Z+_3→(1+0j)*Z2Z3, Z+_0 * Z+_1→(1+0j)*Z1, Z+_0 * Z+_1→(1+0j)*Z0, Z+_0 * Z+_1→(1+0j)*Z0Z1
```

###### Very fast RB by few point measurements¶

If we have prior information (say from running RB earlier) that \(p = 0.9\), we may want to monitor the decay as a function of time to see if our experiment is drifting.

A Fisher information analysis shows that the optimal sequence length to sample given \(p\) scales as \(d \sim 1/\ln(1/p) \approx 1/(1-p)\).

Suppose the gate drifts with time and one has previously characterized the drift by doing repeated RB. Then one could imagine sampling at sequence lengths that correspond to the mean of the distribution of \(\langle p \rangle\) and \(\langle p \rangle \pm {\rm stdev}(p)\). For example if \(\langle p \rangle = 0.9\) and \({\rm stdev}(p) = 0.05\) then we might want to sample at \(d = [6, 10, 19]\).
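Under the \(1/\ln(1/p)\) scaling from the Fisher information analysis, those depths can be sketched as follows (our own helper, not a library function; the exact integers depend on how you round):

```python
import math

def near_optimal_depth(p: float) -> float:
    """Near-optimal RB sequence length for a prior decay estimate p: d ~ 1/ln(1/p) ~ 1/(1 - p)."""
    return 1 / math.log(1 / p)

mean_p, stdev = 0.9, 0.05
depths = [near_optimal_depth(p) for p in (mean_p - stdev, mean_p, mean_p + stdev)]
print([round(d, 1) for d in depths])  # [6.2, 9.5, 19.5] -- close to the d = [6, 10, 19] quoted above
```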

```
[13]:
```

```
qubit_groups = [(2,)] # characterize the 1q gates on qubit 2
num_sequences_per_depth = 10
depths = [d for d in [6, 10, 19] for _ in range(num_sequences_per_depth)] # specify the depth of each sequence
print(depths)
experiments_1q = generate_rb_experiments(bm, qubit_groups, depths)
```

```
[6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19]
```

```
[14]:
```

```
num_shots = 500
# run the sequences on the qc object initialized above
results_1q = acquire_rb_data(qc, experiments_1q, num_shots, show_progress_bar=True)
print(results_1q[0])
# in this case it is simple to unpack the results--there is one result per sequence
expectations = [[res[0].expectation] for res in results_1q]
std_errs = [[res[0].std_err] for res in results_1q]
# fit the exponential decay model
fit_1q = fit_rb_results(depths, expectations, std_errs, num_shots)
fig, ax = plot_figure_for_fit(fit_1q, xlabel="Sequence Length [Cliffords]", ylabel="Survival Probability", title='RB Decay for q2')
```

```
100%|██████████| 30/30 [00:13<00:00, 2.23it/s]
```

```
[ExperimentResult[Z+_2→(1+0j)*Z2: 0.92 +- 0.017527121840165315]]
```


#### Randomized Benchmarking: Unitarity RB¶

##### Background¶

In this notebook we explore the subset of methods in `randomized_benchmarking.py` that are related specifically to unitarity randomized benchmarking. I suggest reviewing the notebook `examples/randomized_benchmarking.ipynb` first, since we will assume familiarity with its contents and treat unitarity as a modification to the ‘standard’ RB protocol.

In standard RB we are interested in characterizing the average impact of noise on our gates, and we estimate an average error rate per Clifford. Unitarity RB is designed to characterize this noise further. There are two different types of noise that impact the quality of a target operation:

- Coherent, or unitary, noise can arise from imperfect control or improper calibration, such that the resulting operation is unitary but different from the target gate. For example, an extreme case of coherent error would be implementing a perfect X gate when trying to perform a target Z gate. A more common and insidious case of coherent noise is an over- or under-rotation, e.g. performing RX(pi + .001) for a target RX(pi) gate.
- Incoherent noise makes the overall operation non-unitary and can rotate the affected state out of the computational subspace into the larger system + environment space through unwanted interaction with the environment. For a single qubit state represented on the Bloch sphere this has the effect of moving the state vector closer to the center of the sphere. The depolarizing channel is the archetype of incoherent error, which in the single qubit case shrinks the surface of the Bloch sphere uniformly towards the central point. Amplitude damping is another example, where for a single qubit the surface of the Bloch sphere shrinks towards the \(|0 \rangle\) state pole.

The unitarity measurement is meant to provide an estimate of how coherent the noise is. Already in describing the examples we see that the two types of noise affect states in different ways. Focusing on the Bloch sphere representation of single qubit states, fully coherent (unitary) noise rotates the surface of the sphere just like any other unitary operation; meanwhile, incoherent noise shrinks the volume of the sphere and, in general, translates its center. We will use the difference between these two actions on various states to characterize the noise. Just as in RB, we will estimate the average of some quantity (the shifted purity) over groups of random sequences of Clifford gates of increasing depth, fit an exponential decay curve, and extract an estimate of the base decay constant. In this case the decay constant is dubbed the ‘unitarity’ and takes on a value between 0 and 1 inclusive. Purely coherent noise, as in the case of an extraneous unitary gate, yields a unitarity of 1, whereas purely incoherent noise, such as the depolarizing channel, yields a unitarity of 0.

###### Purity¶

The purity \(p\) of a quantum state \(\rho\) is defined as

\(p = {\rm Tr}[\rho^2].\)

The set of pure single qubit states, that is those with \(p=1\), is precisely the set of states which comprise the surface of the Bloch sphere. Indeed, we can relate the purity of a state to the length of the state vector in the generalized Bloch sphere. Using a complete operator basis, such as the Pauli basis normalized so that \({\rm Tr}[\sigma_j \sigma_k] = d\,\delta_{jk}\), we can expand any \(d\)-dimensional quantum state \(\rho\) as a linear combination over basis matrices:

\(\rho = \frac{1}{d}\left(I + \sum_{k=1}^{d^2-1} n_k \sigma_k\right)\)

where we have denoted the traceless Pauli matrices by \(\{\sigma_k\}\). Here we see that the purity is given as

\(p = {\rm Tr}[\rho^2] = \frac{1}{d}\left(1 + \|\vec n\|^2\right)\)

where \(\vec n = (n_1, n_2, \dots, n_{d^2-1})\) is the generalized Bloch vector. The purity of a quantum state lies in the interval \([1/d, 1]\). Following [ECN] we define a shifted (or normalized) purity bounded between 0 and 1:

\(p' = \frac{p - 1/d}{1 - 1/d} = \frac{\|\vec n\|^2}{d-1}.\)

Combining all of the observations above, we expect that the shifted purity \(p'\) of an initially pure state under the action of incoherent noise, such as repeated applications of a depolarizing channel, will decay from 1 to 0 over the course of many applications as the Bloch vector shrinks towards the origin. Indeed, it is the shifted purity that replaces the survival probability from standard RB. To estimate the purity we take the expectation of each traceless Pauli operator \(\sigma_k\) measured after running a given sequence.
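For a single qubit this estimator is concrete: with the Pauli expectations \(\langle X\rangle, \langle Y\rangle, \langle Z\rangle\) as Bloch-vector components, the shifted purity is just the squared length of the Bloch vector. A minimal sketch (our own helper, not the library's `estimate_purity`):

```python
def shifted_purity_1q(exp_x: float, exp_y: float, exp_z: float) -> float:
    """Shifted purity of a single qubit: squared length of the Bloch vector (<X>, <Y>, <Z>)."""
    return exp_x ** 2 + exp_y ** 2 + exp_z ** 2

print(shifted_purity_1q(0.0, 0.0, 1.0))  # pure |0> state: 1.0
print(shifted_purity_1q(0.0, 0.0, 0.0))  # maximally mixed state: 0.0
```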

###### The protocol¶

There are two differences between unitarity RB and standard RB. First and foremost, we estimate the shifted purity of the state after each sequence rather than the survival probability. Secondly, we don’t need the sequence of Cliffords to be self-inverting. The procedure is summarized as follows:

- Select some set of depths over which you expect the survival probability to decay significantly
- Generate many random sequences for each depth. A random sequence is simply depth-many uniform random Clifford elements.
- Estimate the ‘shifted purity’ for each sequence by estimating the expectation of each traceless Pauli observable after running the sequence; the `ObservablesExperiment` framework is used heavily here.
- Fit an exponential decay model to the estimated shifted purities.
- Extract the decay parameter (unitarity) from the fit. From this we can get an upper bound on the standard RB decay. If the noise channel is a depolarizing channel then this upper bound is saturated.

For more information, including a formal definition of unitarity, see

*Estimating the Coherence of Noise*.

We also make use of unitarity in the notebook `examples/randomized_benchmarking_interleaved.ipynb`.

##### The code¶

The basics mirror standard RB. We use the same basic setup and functions with slight name changes.

```
[1]:
```

```
# Needed in a separate terminal:
# $ quilc -S
# $ qvm -S
import numpy as np
from forest.benchmarking.plotting import plot_figure_for_fit
from forest.benchmarking.randomized_benchmarking import *
from pyquil.api import get_benchmarker
from pyquil import Program, get_qc
noisy_qc = get_qc('2q-qvm', noisy=True)
bm = get_benchmarker()
```

Since we are estimating many more observables for each sequence than standard RB this will take longer.

By default all the observables on each sequence are grouped so that compatible observables are estimated simultaneously from the same set of measurements; this speeds up data acquisition. The number of observables being estimated scales as the square of the dimension of the largest qubit group.
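Interpreting that scaling as a count of traceless Pauli observables, an \(n\)-qubit group has \(d^2 - 1\) of them with \(d = 2^n\); a quick sketch (our own helper, not a library function):

```python
def num_traceless_paulis(num_qubits: int) -> int:
    """Number of traceless Pauli observables on n qubits: d**2 - 1, with d = 2**n."""
    dim = 2 ** num_qubits
    return dim ** 2 - 1

print(num_traceless_paulis(1))  # 3: X, Y, Z
print(num_traceless_paulis(2))  # 15
```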

```
[2]:
```

```
# THIS IS SLOW (around a minute). The below comment skips this cell during testing. Please do not remove.
# NBVAL_SKIP
num_sequences_per_depth = 50
num_shots = 25
# select groups of qubits. Here we choose to simultaneously run two
# single qubit experiments on qubit 0 and qubit 1
qubit_groups = [(0,), (1,)]
depths = [2, 8, 10, 20]
depths = [d for d in depths for _ in range(num_sequences_per_depth)]
expts = generate_unitarity_experiments(bm, qubit_groups, depths)
results = acquire_rb_data(noisy_qc, expts, num_shots, show_progress_bar=True)
stats_by_group = get_stats_by_qubit_group(qubit_groups, results)
fits = []
for group in qubit_groups:
    stats = stats_by_group[group]
    fit = fit_unitarity_results(depths, stats['expectation'], stats['std_err'])
    fits.append(fit)
    fig, axs = plot_figure_for_fit(fit, xlabel="Sequence Length [Cliffords]", ylabel="Shifted Purity",
                                   title='Qubit' + str(group[0]))
```

```
100%|██████████| 200/200 [01:43<00:00, 1.93it/s]
```

The default qvm noise model doesn’t produce a nice curve. We’ll fix this below.

##### Advanced functionality: inserting simple-to-analyze noise¶

```
[3]:
```

```
from pyquil import noise

def add_noise_to_sequences(sequences, qubits, kraus_ops):
    """Append the given noise channel to each Clifford gate (program) in each sequence."""
    for seq in sequences:
        for program in seq:
            program.defgate("noise", np.eye(2 ** len(qubits)))
            program.define_noisy_gate("noise", qubits, kraus_ops)
            program.inst(("noise", *qubits))

def depolarizing_noise(num_qubits: int, p: float = 0.95):
    r"""
    Generate the Kraus operators of a depolarizing noise channel.

    :param int num_qubits: either a 1 or 2 qubit channel is supported
    :param float p: parameter in the depolarizing channel as defined by: p $\rho$ + (1-p)/d I
    :return: A list, e.g. [k0, k1, k2, k3], of the Kraus operators that parametrize the map.
    :rtype: list
    """
    num_of_operators = 4 ** num_qubits
    probabilities = [p + (1.0 - p) / num_of_operators] + [(1.0 - p) / num_of_operators] * (num_of_operators - 1)
    return noise.pauli_kraus_map(probabilities)

# get a qc without any noise
quiet_qc = get_qc('1q-qvm', noisy=False)
```

A depolarizing channel applied to each Clifford is especially simple to analyze. In particular, we expect to be able to convert the estimated unitarity to the RB decay parameter which parameterizes the channel.
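Concretely: a depolarizing channel shrinks the Bloch vector by \(p\) per application, so the shifted purity decays by \(p^2\), and for this special case the RB decay can be recovered as the square root of the unitarity. A sketch of this relation (our own helper; the library's `unitarity_to_rb_decay` handles the general conversion):

```python
import math

def depolarizing_unitarity_to_decay(unitarity: float) -> float:
    """For purely depolarizing noise the shifted purity decays as p**2 per Clifford,
    so the RB decay parameter p is recovered as sqrt(unitarity)."""
    return math.sqrt(unitarity)

# a per-Clifford depolarizing parameter of p = 0.95 should yield a
# measured unitarity near 0.95**2 = 0.9025
print(depolarizing_unitarity_to_decay(0.9025))  # 0.95 (up to floating point)
```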

To insert the noise we have to use the individual parts that make up `generate_unitarity_experiments`, similar to what we did in the advanced section for standard RB.

```
[4]:
```

```
# This is also SLOW
qubits = (0,)
single_clifford_p = .95 #p parameter for the depolarizing channel applied to each clifford
kraus_ops = depolarizing_noise(len(qubits), single_clifford_p)
depths = [2, 8, 10, 20]
depths = [d for d in depths for _ in range(num_sequences_per_depth)]
sequences = generate_rb_experiment_sequences(bm, qubits, depths, use_self_inv_seqs=False)
# insert our custom noise
add_noise_to_sequences(sequences, qubits, kraus_ops)
expts = group_sequences_into_parallel_experiments([sequences], [qubits], is_unitarity_expt=True)
results = acquire_rb_data(quiet_qc, expts, num_shots, show_progress_bar=True)
stats = get_stats_by_qubit_group([qubits], results)[qubits]
fit = fit_unitarity_results(depths, stats['expectation'], stats['std_err'])
# plot the raw data, point estimate error bars, and fit
fig, axs = plot_figure_for_fit(fit, xlabel="Sequence Length [Cliffords]", ylabel="Shifted Purity")
```

```
100%|██████████| 200/200 [01:03<00:00, 3.16it/s]
```

```
[5]:
```

```
unitarity = fit.params['decay'].value
print(unitarity)
err = fit.params['decay'].stderr
print(err)
```

```
0.9353341455250523
0.016413752007987215
```

Since the noise is depolarizing, we expect `unitarity_to_rb_decay(unitarity)` to match the input noise parameter `single_clifford_p = .95` up to the error in our estimate.

```
[6]:
```

```
from forest.benchmarking.randomized_benchmarking import unitarity_to_rb_decay
print(f'{unitarity_to_rb_decay(unitarity-err, 2)} '\
f'<? {single_clifford_p} '\
f'<? {unitarity_to_rb_decay(unitarity+err, 2)}')
```

```
0.9586033556779703 <? 0.95 <? 0.9755756749391815
```


#### Randomized Benchmarking: Interleaved RB¶

In this notebook we explore the subset of methods in `randomized_benchmarking.py` that are related specifically to interleaved randomized benchmarking (IRB). I suggest reviewing the notebook `examples/randomized_benchmarking.ipynb` first, since we will assume familiarity with its contents and treat IRB as a modification to the ‘standard’ RB protocol.

In standard RB we are interested in characterizing the average impact of noise on our gates, and we estimate an average error rate per Clifford. Interleaved RB is designed to characterize a particular gate, which is ‘interleaved’ throughout the standard RB sequence. The protocol is a simple extension of standard RB which involves running sequences of random Cliffords with the gate of interest G interleaved after every Clifford. Specifically, we transform a standard RB sequence

\(C_1\, C_2 \cdots C_d\, C_{\rm inv}\)

into an ‘interleaved’ sequence

\(C_1\, G\, C_2\, G \cdots C_d\, G\, C_{\rm inv}'\)

where G is the gate we wish to characterize. Note that we still need to invert the sequence, so G must be an element of the Clifford group, and the final inverting gate \(C_{\rm inv}'\) must be updated to invert the entire sequence of gates, including the interleaved applications of G.

The IRB protocol can be summarized as follows:

- we want to characterize a gate G which acts on the set of qubits Q
- run standard RB on the qubits Q and estimate an rb decay.
- generate another set of sequences with the gate G interleaved in each sequence. Run these sequences exactly as you would for standard RB and estimate an rb decay, which we will call the irb decay. We expect this parameter to be smaller than the standard rb decay due to the effect of the extra applications of G.
- use the two decay parameters to calculate the error on gate G. We can also get lower and upper bounds on the fidelity of G.
- (Optional) run a unitarity experiment on the qubits Q and use this to improve the fidelity bounds from the last step.

The following is a starting point for more information

*Efficient measurement of quantum gate error by interleaved randomized benchmarking*.

##### The code¶

The difference between an RB experiment and an IRB experiment is one flag in `generate_rb_experiments`, so we’ll go through this quickly. Refer to `examples/randomized_benchmarking.ipynb` for a more careful breakdown.

```
[1]:
```

```
# Needed in a separate terminal:
# $ quilc -S
# $ qvm -S
import numpy as np
from pyquil.api import get_benchmarker, get_qc
from forest.benchmarking.randomized_benchmarking import *
from forest.benchmarking.plotting import plot_figure_for_fit
from pyquil.gates import *
from pyquil import Program
%matplotlib inline
```

Get a quantum computer and benchmarker object as usual.

```
[2]:
```

```
bm = get_benchmarker()
qc = get_qc("9q-square-noisy-qvm", noisy=True)
```

###### Generate an RB and IRB experiment¶

We’ll start by selecting our gate and hyperparameters. Then we generate two sets of experiments: the first is the standard RB sequences, and the second has our gate interleaved in each sequence.

```
[3]:
```

```
# Choose your gate
qubits = (0, 1)
gate = Program(CNOT(*qubits))
# Choose your parameters
qubit_groups = [qubits]
depths = [2, 15, 25, 30, 60]
num_sequences = 25
depths = [d for d in depths for _ in range(num_sequences)]
rb_expts = generate_rb_experiments(bm, qubit_groups, depths)
# provide the extra arg for the interleaved gate
inter_expts = generate_rb_experiments(bm, qubit_groups, depths, interleaved_gate=gate)
```

###### Run the standard RB experiment¶

This takes around a minute. We extract the estimated decay parameter and plot the results.

```
[4]:
```

```
# NBVAL_SKIP
# tag this cell to be skipped during testing
# Run the RB Sequences on a QuantumComputer
num_shots=100
rb_results = acquire_rb_data(qc, rb_expts, num_shots, show_progress_bar=True)
# Calculate a fit to a decay curve
stats = get_stats_by_qubit_group(qubit_groups, rb_results)[qubit_groups[0]]
fit = fit_rb_results(depths, stats['expectation'], stats['std_err'], num_shots)
# Extract rb decay parameter
rb_decay = fit.params['decay'].value
rb_decay_error = fit.params['decay'].stderr
# Plot
fig, axs = plot_figure_for_fit(fit, xlabel="Sequence Length [Cliffords]", ylabel="Survival Probability")
```

```
100%|██████████| 125/125 [01:29<00:00, 1.40it/s]
```

###### Now run the interleaved RB experiment¶

This takes a bit longer than the last cell.

```
[5]:
```

```
# NBVAL_SKIP
# tag this cell to be skipped during testing
# Run the RB Sequences on a QuantumComputer
num_shots=100
inter_results = acquire_rb_data(qc, inter_expts, num_shots, show_progress_bar=True)
# Calculate a fit to a decay curve
stats = get_stats_by_qubit_group(qubit_groups, inter_results)[qubit_groups[0]]
fit = fit_rb_results(depths, stats['expectation'], stats['std_err'], num_shots)
# Extract irb decay parameter
irb_decay = fit.params['decay'].value
irb_decay_error = fit.params['decay'].stderr
# Plot
fig, axs = plot_figure_for_fit(fit, xlabel="Sequence Length [Cliffords]", ylabel="Survival Probability")
```

```
100%|██████████| 125/125 [01:41<00:00, 1.23it/s]
```

We expect the irb_decay to be somewhat smaller than the rb_decay, since the extra instances of the interleaved gate add noise.

```
[6]:
```

```
print(rb_decay)
print(irb_decay)
```

```
0.97288649829259
0.950257506322514
```

###### Extracting gate error and bounds¶

Using these two decay values we can estimate the error on our chosen gate.

```
[7]:
```

```
dim = 2**len(qubits)
print(rb_decay_to_gate_error(rb_decay, dim))
gate_error = irb_decay_to_gate_error(irb_decay, rb_decay, dim)
print(gate_error)
```

```
0.020335126280557503
0.017444731741413116
```
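The estimate above follows the standard interleaved-RB relation between the two decays; here is a sketch of that relation (our own re-derivation, not the library code):

```python
def irb_gate_error(irb_decay: float, rb_decay: float, dim: int) -> float:
    """Error of the interleaved gate G: r_G = (d - 1)/d * (1 - irb_decay / rb_decay)."""
    return (dim - 1) / dim * (1 - irb_decay / rb_decay)

# with the decays printed earlier (rb ~ 0.9729, irb ~ 0.9503) and d = 4
print(irb_gate_error(0.950257506322514, 0.97288649829259, 4))  # ~0.01744
```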

We can also get bounds on our estimate using the two decay values.

```
[8]:
```

```
bounds = interleaved_gate_fidelity_bounds(irb_decay, rb_decay, dim)
print(bounds)
```

```
[0.959329747438885, 1.0057807890782888]
```

```
[9]:
```

```
assert bounds[0] < 1 - gate_error < bounds[1]
```

###### Improving the bounds with unitarity¶

To get improved bounds we can run an additional unitarity experiment, following the method described in [U+IRB].

See `examples/randomized_benchmarking_unitarity.ipynb` if unitarity RB is unfamiliar. This is particularly SLOW.

*Efficiently characterizing the total error in quantum circuits*.

```
[10]:
```

```
# NBVAL_SKIP
# tag this cell to be skipped during testing. It is particularly SLOW, around 6 minutes
num_shots = 50
expts = generate_unitarity_experiments(bm, qubit_groups, depths, num_sequences)
results = acquire_rb_data(qc, expts, num_shots, show_progress_bar=True)
stats = get_stats_by_qubit_group(qubit_groups, results)[qubit_groups[0]]
fit = fit_unitarity_results(depths, stats['expectation'], stats['std_err'])
# plot the raw data, point estimate error bars, and fit
fig, axs = plot_figure_for_fit(fit, xlabel="Sequence Length [Cliffords]", ylabel="Shifted Purity")
unitarity = fit.params['decay'].value
```

```
100%|██████████| 125/125 [07:32<00:00, 3.62s/it]
```

Hopefully the `unitarity` can be incorporated to improve our bounds. However, the bounds might be `NaN` depending on the value of the unitarity and the difference between the rb and irb decays. Getting better estimates of each parameter helps prevent this.

```
[11]:
```

```
better_bounds = interleaved_gate_fidelity_bounds(irb_decay, rb_decay, dim, unitarity)
print(better_bounds)
```

```
[0.9597518532587754, 0.9925052728337453]
```


| Function | Description |
| --- | --- |
| `do_rb` (qc, benchmarker, qubit_groups, depths, …) | A wrapper around experiment generation, data acquisition, and estimation that runs a RB experiment on the qubit_groups and returns the rb_decay along with the experiments and results. |

#### Gates and Sequences¶

| Function | Description |
| --- | --- |
| `oneq_rb_gateset` (qubit) | Yield the gateset for 1-qubit randomized benchmarking. |
| `twoq_rb_gateset` (q1, q2) | Yield the gateset for 2-qubit randomized benchmarking. |
| `get_rb_gateset` (qubits) | A wrapper around the gateset generation functions. |
| `generate_rb_sequence` (benchmarker, qubits, …) | Generate a complete randomized benchmarking sequence. |
| `merge_sequences` (sequences) | Takes a list of equal-length “sequences” (lists of Programs) and merges them element-wise, returning the merged outcome. |
| `generate_rb_experiment_sequences` (…[, …]) | Generate the sequences of Clifford gates necessary to run a randomized benchmarking experiment for a single (group of) qubit(s). |
| `group_sequences_into_parallel_experiments` (…) | Consolidates randomized benchmarking sequences on separate groups of qubits into a flat list of ObservablesExperiments which merge parallel sets of distinct sequences. |

#### Standard and Interleaved RB¶

| Function | Description |
| --- | --- |
| `generate_rb_experiments` (benchmarker, …) | Creates a list of ObservablesExperiments which, when run in series, constitute a simultaneous randomized benchmarking experiment on the disjoint qubit_groups. |
| `z_obs_stats_to_survival_statistics` (…[, …]) | Convert expectations of the dim - 1 observables which are the nontrivial combinations of tensor products of I and Z into survival mean and variance, where survival is the all-zeros outcome. |
| `fit_rb_results` (depths, z_expectations, …) | Fits the results of a standard RB or IRB experiment by converting expectations into survival probabilities (probability of measuring zero) and passing these on to the standard fit. |

#### Unitarity or Purity RB¶

| Function | Description |
| --- | --- |
| `generate_unitarity_experiments` (benchmarker, …) | Creates a list of ObservablesExperiments which, when run in series, constitute a simultaneous unitarity experiment on the disjoint qubit_groups. |
| `estimate_purity` (dim, op_expect, renorm) | The renormalized, or ‘shifted’, purity is given in equation (10) of [ECN], where d is the dimension of the Hilbert space, 2**num_qubits. |
| `estimate_purity_err` (dim, op_expect, …[, …]) | Propagate the observed variance in operator expectation to an error estimate on the purity. |
| `fit_unitarity_results` (depths, expectations, …) | Fits the results of a unitarity experiment by first calculating shifted purities and subsequently passing these on to the standard decay fit. |
| `unitarity_to_rb_decay` (unitarity, dimension) | Converts a unitarity decay to a standard RB decay under the assumption that no unitary errors are present. |

#### Data Acquisition¶

| Function | Description |
| --- | --- |
| `acquire_rb_data` (qc, experiments, num_shots, …) | Runs each ObservablesExperiment and returns each group of resulting ExperimentResults. |
| `get_stats_by_qubit_group` (qubit_groups, …) | Organize the results of a simultaneous RB experiment into lists of expectations and std_errs for each sequence; these lists are stored in a dict for each qubit group. |

#### Analysis Helper functions for RB¶

| Function | Description |
| --- | --- |
| `coherence_angle` (rb_decay, unitarity) | Equation 29 of [U+IRB]. |
| `gamma` (irb_decay, unitarity) | Corollary 5 of [U+IRB], second line. |
| `interleaved_gate_fidelity_bounds` (irb_decay, …) | Use the observed rb_decay to place a bound on the fidelity of a particular gate with given interleaved rb decay. |
| `gate_error_to_irb_decay` (irb_error, rb_decay, dim) | For convenience, inversion of Eq. |
| `irb_decay_to_gate_error` (irb_decay, rb_decay, dim) | Eq. |
| `average_gate_error_to_rb_decay` (gate_error, …) | Inversion of eq. |
| `rb_decay_to_gate_error` (rb_decay, dimension) | Eq. |
| `unitarity_to_rb_decay` (unitarity, dimension) | Converts a unitarity decay to a standard RB decay under the assumption that no unitary errors are present. |
| `get_stats_by_qubit_group` (qubit_groups, …) | Organize the results of a simultaneous RB experiment into lists of expectations and std_errs for each sequence; these lists are stored in a dict for each qubit group. |

### Robust Phase Estimation¶

Robust phase estimation (RPE) is a kind of iterative phase estimation formalized by Kimmel, Low, and Yoder, Phys. Rev. A 92, 062315 (2015). It is ideal for measuring gate calibration errors.

#### Robust Phase Estimation¶

```
[1]:
```

```
import numpy as np
from numpy import pi
from forest.benchmarking import robust_phase_estimation as rpe
from pyquil import get_qc, Program
from pyquil.gates import *
import matplotlib.pyplot as plt
%matplotlib inline
# get a quantum computer object. Here we use a noisy qvm.
qc = get_qc("9q-square", as_qvm=True, noisy=True)
```

##### Estimate phase of RZ(angle, qubit)¶

###### Generate RPE experiment¶

```
[2]:
```

```
# we start with determination of an angle of rotation about the Z axis
rz_angle = 2 # we will use an ideal gate with phase of 2 radians
qubit = 0
rotation = RZ(rz_angle, qubit)
# the rotation is about the Z axis; the eigenvectors are the computational basis states
# therefore the change of basis is trivially the identity.
cob = Program()
rz_experiments = rpe.generate_rpe_experiments(rotation, *rpe.all_eigenvector_prep_meas_settings([qubit], cob))
```

```
[3]:
```

```
print(rz_experiments)
```

```
[<forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7f23f0f67b70>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7f23eb3bcf98>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7f23eb3bc780>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7f23eb3bcd30>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7f23eb3bcef0>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7f23eb3bce48>]
```

###### Acquire data¶

```
[4]:
```

```
rz_results = rpe.acquire_rpe_data(qc, rz_experiments)
```

###### Extract the estimate of the phase¶

We hope that the estimated phase is close to our choice of `rz_angle=2`.

```
[5]:
```

```
rpe.robust_phase_estimate(rz_results, [qubit])
```

```
[5]:
```

```
2.0003055903476907
```

##### Estimate phase of RX(angle, qubit)¶

```
[6]:
```

```
rx_angle = pi # radians; only x gates with phases in {-pi, -pi/2, pi/2, pi} are allowed
qubit = 1 # let's use a different qubit
rotation = RX(rx_angle, qubit)
# the rotation has eigenvectors |+> and |->;
# change of basis needs to rotate |0> to the plus state and |1> to the minus state
cob = RY(pi/2, qubit)
rx_experiments = rpe.generate_rpe_experiments(rotation, *rpe.all_eigenvector_prep_meas_settings([qubit], cob))
```

```
[7]:
```

```
rx_results = rpe.acquire_rpe_data(qc, rx_experiments)
# we hope this is close to rx_angle = pi
rpe.robust_phase_estimate(rx_results, [qubit])
```

```
[7]:
```

```
3.1448187563764525
```

##### Hadamard-like rotation¶

We can do any rotation which is expressed in native gates (or, more generally, using gates included in basic_compile() in forest_benchmarking.compilation). There are helper functions to determine the proper change of basis required for such an experiment. Here we use the fact that a Hadamard interchanges X and Z, and so must constitute a rotation about the axis (pi/4, 0) on the Bloch sphere. From this we find the eigenvectors of an arbitrary rotation about this axis, and use those to find the change-of-basis matrix. Note that the change of basis need not be a program or gate; if a matrix is supplied then it will be translated to a program using a qc object’s compiler. This can be done manually with a call to change_of_basis_matrix_to_quil(qc, qubits, matrix) or automatically when acquire_rpe_data is called.

```
[8]:
```

```
# create a program implementing the rotation of a given angle about the "Hadamard axis"
rh_angle = 1.5 # radians
qubit = 0
RH = Program(RY(-pi / 4, qubit)).inst(RZ(rh_angle, qubit)).inst(RY(pi / 4, qubit))
# get the eigenvectors knowing the axis of rotation is pi/4, 0
evecs = rpe.bloch_rotation_to_eigenvectors(pi / 4, 0)
# get a ndarray representing the change of basis transformation
cob_matrix = rpe.get_change_of_basis_from_eigvecs(evecs)
# convert to quil using a qc object's compiler
cob = rpe.change_of_basis_matrix_to_quil(qc, [qubit], cob_matrix)
# create an experiment as usual
rh_experiments = rpe.generate_rpe_experiments(RH, *rpe.all_eigenvector_prep_meas_settings([qubit], cob))
# get the results per usual
rh_results = rpe.acquire_rpe_data(qc, rh_experiments, multiplicative_factor=2)
# the result should be close to our input rh_angle=1.5
rpe.robust_phase_estimate(rh_results, [qubit])
```

```
[8]:
```

```
1.502017186285983
```

##### Group multiple experiments to run simultaneously¶

This will lead to shorter data acquisition times on a QPU.

```
[9]:
```

```
from forest.benchmarking.observable_estimation import ObservablesExperiment
def combine_experiments(expt1, expt2):
    program = expt1.program + expt2.program
    settings = [s1 + s2 for s1, s2 in zip(expt1, expt2)]
    return ObservablesExperiment(settings, program)
experiments = [combine_experiments(rz_expts, rx_expts) for rz_expts, rx_expts in zip(rz_experiments, rx_experiments)]
```

```
[10]:
```

```
results = rpe.acquire_rpe_data(qc, experiments)
```

Iterate through the group and check that the estimates agree with the previous results.

```
[11]:
```

```
# we expect 2 for qubit 0 and pi for qubit 1
print(rpe.robust_phase_estimate(results, [0]))
print(rpe.robust_phase_estimate(results, [1]))
```

```
1.9979230425970458
3.1432726476065986
```

##### Upper bound on the point estimate variance¶

For a given number of depths we can compute a predicted upper bound on the variance of the phase point estimate, assuming standard parameters are used in experiment generation and data acquisition.

```
[12]:
```

```
var = rpe.get_variance_upper_bound(num_depths=6)
rx_estimate = rpe.robust_phase_estimate(results, [1])
# difference between obs and actual should be less than predicted error
print(np.abs(rx_estimate - rx_angle), ' <? ', np.sqrt(var))
```

```
0.0016799940168055194 <? 0.07727878265222556
```

##### Visualize the rotation at each depth by plotting x and y expectations¶

This works best for small angles, where there is less overlap at large depths.

```
[13]:
```

```
angle = pi/16
num_depths = 6
q = 0
cob = Program()
args = rpe.all_eigenvector_prep_meas_settings([q], cob)
expts = rpe.generate_rpe_experiments(RZ(angle,q), *args, num_depths=num_depths)
expt_res = rpe.acquire_rpe_data(qc, expts, multiplicative_factor = 100, additive_error = .1)
observed = rpe.robust_phase_estimate(expt_res, [q])
print("Expected: ", angle)
print("Observed: ", observed)
expected = [(1.0, angle*2**j) for j in range(num_depths)]
x_results = [res for depth in expt_res for res in depth if res.setting.observable[q] == 'X']
y_results = [res for depth in expt_res for res in depth if res.setting.observable[q] == 'Y']
x_exps = [res.expectation for res in x_results]
y_exps = [res.expectation for res in y_results]
x_errs = [res.std_err for res in x_results]
y_errs = [res.std_err for res in y_results]
ax = rpe.plot_rpe_iterations(x_exps, y_exps, x_errs, y_errs, expected)
plt.show()
```

```
Expected: 0.19634954084936207
Observed: 0.1981856603092284
```

##### Multi qubit gates¶

We can also estimate the relative phases between eigenvectors of multi-qubit rotations.

Note that for a particular 2q gate there are only three free eigenvalue phases; the fourth is determined by the special unitary condition, which requires the phases to sum to zero (equivalently, an overall phase can be factored out). Our experiment returns estimates of the relative phases between the eigenvectors, from which each individual phase can be determined (indeed the system is over-determined).

In the example below we demonstrate this procedure for our native CZ gate. After factoring out an overall phase, the ideal relative phases, in the order returned by the estimator below, are (0, pi, 0, pi).

```
[14]:
```

```
from pyquil.gates import CZ
qubits = [0, 1]
rotation = CZ(*qubits)
cob = Program() # CZ is diagonal in computational basis, i.e. no need to change basis
cz_expts = rpe.generate_rpe_experiments(rotation, *rpe.all_eigenvector_prep_meas_settings(qubits, cob), num_depths=7)
cz_res = rpe.acquire_rpe_data(qc, cz_expts, multiplicative_factor = 50.0, additive_error=.1)
results = rpe.robust_phase_estimate(cz_res, qubits)
# we hope to match the ideal results (0, pi, 0, pi)
print(results)
```

```
[0.0003471650832299819, 3.1457528418577763, 0.003676934230035289, 3.143674321584174]
```

We can also alter the experiment by indicating a preparation-and-post-selection state for some of our qubits. In this type of experiment we prepare a superposition of only a subset of the eigenvectors, and so estimate only a subset of the possible relative phases. In the two qubit case below we specify that qubit 0 be prepared in the |1> state, and we throw out any results where qubit 0 is not measured in this state. Meanwhile, qubit 1 is still prepared in the |+> state. The preparation state is thus the superposition of the eigenvectors indexed 2 and 3, and we correspondingly measure the relative phase between them, which is pi in the ideal case.

```
[15]:
```

```
fixed_qubit = 0
fixed_qubit_state = 1
free_rotation_qubit = 1
post_select_args = rpe.pick_two_eigenvecs_prep_meas_settings((fixed_qubit, fixed_qubit_state), free_rotation_qubit)
cz_single_phase = rpe.generate_rpe_experiments(rotation, *post_select_args, num_depths=7)
cz_res = rpe.acquire_rpe_data(qc, cz_single_phase, multiplicative_factor = 50.0)
single_result = rpe.robust_phase_estimate(cz_res, [0, 1])
# we hope the result is close to pi
print(single_result)
```

```
[3.142735897053661]
```

##### Characterizing a universal 1q gate set with approximately orthogonal rotation axes¶

(Here we use simulated artificially imperfect gates)

```
[16]:
```

```
from pyquil import Program
from pyquil.quil import DefGate, Gate
from pyquil.gate_matrices import X as x_mat, Y as y_mat, Z as z_mat
"""
Procedure and notation follows Sections III A, B, and C in
[RPE] Robust Calibration of a Universal Single-Qubit Gate-Set via Robust Phase Estimation
Kimmel et al., Phys. Rev. A 92, 062315 (2015)
https://journals.aps.org/pra/abstract/10.1103/PhysRevA.92.062315
arXiv:1502.02677
"""
q = 0 # pick a qubit
pauli_vector = np.array([x_mat, y_mat, z_mat])
alpha = .01
epsilon = .02
theta = .5
# Section III A of [RPE]
gate1 = RZ(pi/2 * (1 + alpha), q) # assume some small over-rotation by fraction alpha
# let gate 2 be RX(pi/4) with over-rotation by fraction epsilon,
# and with a slight over-tilt of rotation axis by theta in X-Z plane
rx_angle = pi/4 * (1 + epsilon)
axis_unit_vector = np.array([np.cos(theta), 0, -np.sin(theta)])
mtrx = np.add(np.cos(rx_angle / 2) * np.eye(2),
              -1j * np.sin(rx_angle / 2) * np.tensordot(axis_unit_vector, pauli_vector, axes=1))
# Section III B of [RPE]
# get Quil definition for simulated imperfect gate
definition = DefGate('ImperfectRX', mtrx)
# get gate constructor
IRX = definition.get_constructor()
# set gate as program with definition and instruction, compiled into native gateset
gate2 = qc.compiler.quil_to_native_quil(Program([definition, IRX(q)]))
gate2 = Program([inst for inst in gate2 if isinstance(inst, Gate)])
# Section III B of [RPE], eq. III.3
# construct the program used to estimate theta
half = Program(gate1)
for _ in range(4):
    half.inst(IRX(q))
half.inst(gate1)
# compile into native gate set
U_theta = qc.compiler.quil_to_native_quil(Program([definition, half, half]))
U_theta = Program([inst for inst in U_theta if isinstance(inst, Gate)])
gates = [gate1, gate2, U_theta]
cobs = [I(q), RY(pi/2, q), RY(pi/2, q)]
expts = []
for gate, cob in zip(gates, cobs):
    args = rpe.all_eigenvector_prep_meas_settings([q], cob)
    expts.append(rpe.generate_rpe_experiments(gate, *args))
expts_results = [rpe.acquire_rpe_data(qc, expt, multiplicative_factor = 50.0, additive_error=.15) for expt in expts]
results = []
for ress in expts_results:
    result = rpe.robust_phase_estimate(ress, [q])
    results += [result]
print("Expected Alpha: " + str(alpha))
print("Estimated Alpha: " + str(results[0]/(pi/2) - 1))
print()
print("Expected Epsilon: " + str(epsilon))
epsilon_est = results[1]/(pi/4) - 1
print("Estimated Epsilon: " + str(epsilon_est))
print()
print("Expected Theta: " + str(theta))
print("Estimated Theta: " + str(np.sin(results[2]/2)/(2*np.cos(epsilon_est * pi/2))))
```

```
Expected Alpha: 0.01
Estimated Alpha: 0.011438052229321594
Expected Epsilon: 0.02
Estimated Epsilon: 0.018858130827586583
Expected Theta: 0.5
Estimated Theta: 0.42014250201187675
```

##### Customizing the noise model¶

```
[17]:
```

```
from pandas import Series
from pyquil.noise import damping_after_dephasing
from pyquil.quil import Measurement
qc = get_qc("9q-square", as_qvm=True, noisy=False)
def add_damping_dephasing_noise(prog, T1, T2, gate_time):
    p = Program()
    p.defgate("noise", np.eye(2))
    p.define_noisy_gate("noise", [0], damping_after_dephasing(T1, T2, gate_time))
    for elem in prog:
        p.inst(elem)
        if isinstance(elem, Measurement):
            continue  # skip measurement
        p.inst(("noise", 0))
    return p

def add_noise_to_experiments(expts, t1, t2, p00, p11):
    gate_time = 200 * 10 ** (-9)
    for expt in expts:
        expt.program = add_damping_dephasing_noise(expt.program, t1, t2, gate_time).define_noisy_readout(0, p00, p11)
angle = 1
q = 0
RH = Program(RY(-pi / 4, q)).inst(RZ(angle, q)).inst(RY(pi / 4, q))
evecs = rpe.bloch_rotation_to_eigenvectors(pi / 4, 0)
# get a ndarray representing the change of basis transformation
cob_matrix = rpe.get_change_of_basis_from_eigvecs(evecs)
# convert to quil using a qc object's compiler
cob = rpe.change_of_basis_matrix_to_quil(qc, [q], cob_matrix)
# create an experiment as usual
expts = rpe.generate_rpe_experiments(RH, *rpe.all_eigenvector_prep_meas_settings([q], cob), num_depths=7)
# add noise to experiment with desired parameters
add_noise_to_experiments(expts, 25 * 10 ** (-6.), 20 * 10 ** (-6.), .92, .87)
res = rpe.acquire_rpe_data(qc, expts, multiplicative_factor=5., additive_error=.15)
# we hope this is close to angle=1
rpe.robust_phase_estimate(res, [q])
```

```
[17]:
```

```
1.004874836115469
```

You can also change the noise model of the qvm directly.

```
[18]:
```

```
from pyquil.device import gates_in_isa
from pyquil.noise import decoherence_noise_with_asymmetric_ro, _decoherence_noise_model
# noise_model = decoherence_noise_with_asymmetric_ro(gates=gates_in_isa(qc.device.get_isa()), p00=0.92, p11=.87)
T1=20e-6
T2=10e-6
noise_model = _decoherence_noise_model(gates=gates_in_isa(qc.device.get_isa()), T1=T1, T2=T2, ro_fidelity=1.)
qc = get_qc("9q-square", as_qvm=True, noisy=False)
qc.qam.noise_model = noise_model
decohere_expts = rpe.generate_rpe_experiments(RH, *rpe.all_eigenvector_prep_meas_settings([q], cob), num_depths=7)
res = rpe.acquire_rpe_data(qc, decohere_expts, multiplicative_factor=5., additive_error=.15)
rpe.robust_phase_estimate(res, [q])
```

```
[18]:
```

```
1.0015588207846218
```


#### API Reference¶

`do_rpe` (qc, rotation, changes_of_basis, …) |
A wrapper around experiment generation, data acquisition, and estimation that runs robust phase estimation. |

`bloch_rotation_to_eigenvectors` (theta, phi) |
Provides convenient conversion from a 1q rotation about some Bloch vector to the two eigenvectors of rotation that lay along the rotation axis. |

`get_change_of_basis_from_eigvecs` (eigenvectors) |
Generates a unitary matrix that sends each computational basis state to the corresponding eigenvector. |

`change_of_basis_matrix_to_quil` (qc, qubits, …) |
Helper to return a native quil program for the given qc to implement the change_of_basis matrix. |

`generate_rpe_experiments` (rotation, …) |
Generate the ObservablesExperiments needed to perform robust phase estimation of the angle of rotation of the given rotation program. |

`get_additive_error_factor` (M_j, …) |
Calculate the factor in Equation V.17 of [RPE]. |

`num_trials` (depth, max_depth, …) |
Calculate the optimal number of shots per program with a given depth. |

`acquire_rpe_data` (qc, experiments, …) |
Run each experiment in the sequence of experiments. |

**Analysis**

`_p_max` (M_j) |
Calculate an upper bound on the probability of error in the estimate on the jth iteration. |

`_xci` (h) |
Calculate the maximum error in the estimate after h iterations given that no errors occurred in all previous iterations. |

`get_variance_upper_bound` (num_depths, …) |
Equation V.9 in [RPE] |

`estimate_phase_from_moments` (xs, ys, x_stds, …) |
Estimate the phase in an iterative fashion as described in section V. |

`robust_phase_estimate` (results, qubits) |
Provides the estimate of the phase for an RPE experiment with results. |

`plot_rpe_iterations` (xs, ys, x_stds, y_stds, …) |
Creates a polar plot of the estimated location of the state in the plane perpendicular to the axis of rotation for each iteration of RPE. |

### Readout Error Estimation¶

The `readout.py` module allows you to estimate the measurement confusion matrix.

#### Readout Error Estimation¶

In this notebook we explore the module `readout.py`, which enables easy estimation of readout imperfections in the computational basis.

The basic idea is to prepare all possible computational basis states and measure the resulting distribution over output strings. For input bit strings \(i\) and measured output strings \(j\) we want to learn all values of \(\Pr( {\rm detected \ } j | {\rm prepared \ } i)\). It is convenient to collect these probabilities into a matrix, called a confusion matrix. For a single bit, \(i,j \in \{0, 1 \}\), it is

\[C = \begin{pmatrix} \Pr(0|0) & \Pr(0|1) \\ \Pr(1|0) & \Pr(1|1) \end{pmatrix}.\]

Imperfect measurements have \(\Pr( j | i ) = C_{j,i}\) where \(\sum_j \Pr( j | i ) =1\) for any \(i\).
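As a concrete sketch of this definition (using made-up shot counts, not output from the library), a single-bit confusion matrix can be tabulated directly from counts, normalizing each column by the number of shots for that preparation:

```python
import numpy as np

# Hypothetical counts from 1000 shots per prepared state:
# rows index the prepared state i, columns the detected outcome j.
counts = np.array([[973, 27],    # prepared |0>: detected 0, detected 1
                   [93, 907]])   # prepared |1>: detected 0, detected 1

# C[j, i] = Pr(detected j | prepared i), so each column sums to 1.
confusion = counts.T / counts.sum(axis=1)
print(confusion)
assert np.allclose(confusion.sum(axis=0), 1.0)
```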

**In summary the functionality of `readout.py` is:**

- estimation of a single qubit confusion matrix
- estimation of the joint confusion matrix over `n` bits
- marginalization of a joint confusion matrix over `m` qubits to a smaller matrix over a subset of those qubits
- estimation of the joint reset confusion matrix

##### More Details¶

Some of the assumptions in the measurement of the confusion matrix are that the measurement error (aka classical misclassification error) is larger than:

- the ground state preparation error (how thermal the state is)
- the errors introduced by applying \(X\) gates to prepare the bit strings
- the quantum aspects of the measurement errors, e.g. measuring in a slightly wrong basis

When measuring `n` (qu)bits there are \(d= 2^n\) possible input and output strings, so the confusion matrix is \(d \times d\). Given perfect preparation of basis state \(|k\rangle\), its entries are

\[C_{j,k} = \Pr( {\rm detected \ } j \mid {\rm prepared \ } k).\]

The trace of the confusion matrix divided by the number of states is the average probability of correctly reporting the input string,

\[F_{\rm assign} = \frac{1}{d} {\rm Tr}[C] = \frac{1}{d} \sum_k \Pr(k|k);\]

this is sometimes called the “joint assignment fidelity” or “readout fidelity” of our simultaneous qubit readout.

This matrix can be related to the quantum measurement operators (i.e. the POVM). Given the coefficients appearing in the confusion matrix, the equivalent readout POVM is

\[\hat N_j = \sum_k C_{j,k}\, \Pi_k,\]

where we have introduced the bitstring projectors \(\Pi_{k}=|k\rangle \langle k|\).

To be a valid POVM we must have

- \(\hat N_j\ge 0\) for all \(j\), and
- \(\sum_j\hat N_j = I\),

where \(I\) is the identity operator. We can immediately see condition one is true, since each \(\hat N_j\) is diagonal with non-negative entries. Condition two is easy to verify:

\[\sum_j \hat N_j = \sum_j \sum_k C_{j,k}\, \Pi_k = \sum_k \Big( \sum_j C_{j,k} \Big) \Pi_k = \sum_k \Pi_k = I,\]

since each column of \(C\) sums to one.
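Both POVM conditions can be checked numerically. Below is a minimal sketch with a made-up single-qubit confusion matrix, following the projector construction above:

```python
import numpy as np

# Made-up single-qubit confusion matrix: C[j, k] = Pr(detected j | prepared k)
C = np.array([[0.97, 0.09],
              [0.03, 0.91]])
d = C.shape[0]

# Bitstring projectors Pi_k = |k><k|
projectors = [np.outer(np.eye(d)[k], np.eye(d)[k]) for k in range(d)]

# Readout POVM elements N_j = sum_k C[j, k] * Pi_k
povm = [sum(C[j, k] * projectors[k] for k in range(d)) for j in range(d)]

# condition 1: each N_j is positive semidefinite
assert all(np.linalg.eigvalsh(N).min() >= 0 for N in povm)
# condition 2: the POVM elements sum to the identity
assert np.allclose(sum(povm), np.eye(d))

# joint assignment fidelity: average probability of reporting the input string
print(np.trace(C) / d)
```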

##### Simple readout fidelity and confusion matrix estimation¶

The simple example below should help with understanding the more general code in `readout.py`.

```
[1]:
```

```
import numpy as np
from pyquil import get_qc
from forest.benchmarking.readout import estimate_confusion_matrix
qc_1q = get_qc('1q-qvm', noisy=True)
```

`estimate_confusion_matrix` constructs and measures a program for a single qubit for each of the target prep states, either \(| 0 \rangle\) or \(|1 \rangle\). When \(|0 \rangle\) is targeted, we prepare a qubit in the ground state with an identity operation. Similarly, when \(|1\rangle\) is targeted, we prepare a qubit in the excited state with an `X` gate. To estimate the confusion matrix we measure each target state many times (default 1000). When we prepare the \(|0\rangle\) state we expect to measure the \(|0\rangle\) state, so the percentage of the time we do in fact measure \(|0\rangle\) gives us an estimate of `p(0|0)`; the remaining fraction of results where we instead measured \(|1\rangle\) after preparing \(|0\rangle\) gives us an estimate of `p(1|0)`.

```
[2]:
```

```
confusion_matrix_q0 = estimate_confusion_matrix(qc_1q, 0)
print(confusion_matrix_q0)
print(f'readout fidelity: {np.trace(confusion_matrix_q0)/2}')
```

```
[[0.9735 0.0265]
[0.0928 0.9072]]
readout fidelity: 0.94035
```

##### Generalizing to larger groups of qubits¶

**Convenient estimation of (possibly joint) confusion matrices for groups of qubits**

The readout module also includes a convenience function for estimating all \(k\)-qubit confusion matrices for a group of \(n \geq k\) qubits; this generalizes the above example, since one could use it to measure a 1-qubit confusion matrix for a group of 1 qubit.

```
[3]:
```

```
from forest.benchmarking.readout import (estimate_joint_confusion_in_set,
                                         estimate_joint_reset_confusion,
                                         marginalize_confusion_matrix)
qc = get_qc('9q-square-noisy-qvm')
qubits = qc.qubits()
```

```
[16]:
```

```
# get all single qubit confusion matrices
one_q_ro_conf_matxs = estimate_joint_confusion_in_set(qc, use_active_reset=True)
# get all pairwise confusion matrices from subset of qubits.
subset = qubits[:4] # only look at 4 qubits of interest, this will mean (4 choose 2) = 6 matrices
two_q_ro_conf_matxs = estimate_joint_confusion_in_set(qc, qubits=subset, joint_group_size=2, use_active_reset=True)
```

###### Extract the one qubit readout (ro) fidelities¶

```
[17]:
```

```
print('Qubit Readout fidelity')
for qubit in qubits:
    conf_mat = one_q_ro_conf_matxs[(qubit,)]
    ro_fidelity = np.trace(conf_mat)/2  # average P(0 | 0) and P(1 | 1)
    print(f'q{qubit:<3d}{ro_fidelity:15f}')
```

```
Qubit Readout fidelity
q0 0.933500
q1 0.938000
q2 0.934500
q3 0.933000
q4 0.936000
q5 0.942000
q6 0.936500
q7 0.938000
q8 0.939500
```

###### Pick a single two qubit joint confusion matrix and compare marginal to one qubit confusion matrix¶

Comparing a marginalized joint \(n\)-qubit matrix to the directly estimated \(k\)-qubit (\(k<n\)) matrices can help reveal correlated errors.

```
[6]:
```

```
two_q_conf_mat = two_q_ro_conf_matxs[(subset[0],subset[-1])]
print('Two qubit confusion matrix:\n',two_q_conf_mat,'\n')
marginal = marginalize_confusion_matrix(two_q_conf_mat, [subset[0], subset[-1]], [subset[0]])
print('Marginal confusion matrix:\n',marginal,'\n')
print('One qubit confusion matrix:\n', one_q_ro_conf_matxs[(subset[0],)])
```

```
Two qubit confusion matrix:
[[0.947 0.027 0.026 0. ]
[0.091 0.888 0.003 0.018]
[0.083 0.004 0.887 0.026]
[0.01 0.085 0.091 0.814]]
Marginal confusion matrix:
[[0.9765 0.0235]
[0.091 0.909 ]]
One qubit confusion matrix:
[[0.971 0.029]
[0.104 0.896]]
```

##### Plot the confusion matrix¶

Here we make up a two bit confusion matrix that might represent correlated readout errors, that is, a kind of **measurement error crosstalk**.

We can then marginalize and recombine to see how different the independent readout error model would look.

```
[39]:
```

```
confusion_ideal = np.eye(4)
# if the qubits are in |00> there is a probability of flipping
# to |01>, |10>, or |11>.
confusion_correlated_two_q_ro_errors = np.array(
    [[0.925, 0.025, 0.025, 0.025],
     [0.000, 1.000, 0.000, 0.000],
     [0.000, 0.000, 1.000, 0.000],
     [0.000, 0.000, 0.000, 1.000]])
marginal1 = marginalize_confusion_matrix(confusion_correlated_two_q_ro_errors, [0, 1], [0])
marginal2 = marginalize_confusion_matrix(confusion_correlated_two_q_ro_errors, [0, 1], [1])
# take the two marginal confusion matrices and recombine
recombine = np.kron(marginal1, marginal2)
```

```
[40]:
```

```
from forest.benchmarking.plotting.hinton import hinton_real
import itertools
import matplotlib.pyplot as plt
```

```
[41]:
```

```
one_bit_labels = ['0','1']
two_bit_labels = [''.join(x) for x in itertools.product('01', repeat=2)]
```

```
[42]:
```

```
fig0, ax0 = hinton_real(confusion_ideal, xlabels=two_bit_labels, ylabels=two_bit_labels, title='Ideal Confusion matrix')
ax0.set_xlabel("Input bit string", fontsize=14)
ax0.set_ylabel("Output bit string", fontsize=14);
```

```
[43]:
```

```
fig1, ax1 = hinton_real(confusion_correlated_two_q_ro_errors, xlabels=two_bit_labels, ylabels=two_bit_labels, title='Correlated two bit error Confusion matrix')
ax1.set_xlabel("Input bit string", fontsize=14)
ax1.set_ylabel("Output bit string", fontsize=14);
```

```
[44]:
```

```
fig2, ax2 = hinton_real(marginal1, xlabels=one_bit_labels, ylabels=one_bit_labels, title='Marginal Confusion matrix')
ax2.set_xlabel("Input bit string", fontsize=14)
ax2.set_ylabel("Output bit string", fontsize=14);
```

```
[45]:
```

```
fig3, ax3 = hinton_real(recombine, xlabels=two_bit_labels, ylabels=two_bit_labels, title='Marginalized then recombined Confusion matrix')
ax3.set_xlabel("Input bit string", fontsize=14)
ax3.set_ylabel("Output bit string", fontsize=14);
```

##### Estimate confusion matrix for active reset error¶

Similarly to readout, we can estimate a confusion matrix to see how well active reset is able to reset any given bitstring to the ground state (\(|0\dots 0\rangle\)).

```
[7]:
```

```
subset = tuple(qubits[:4])
subset_reset_conf_matx = estimate_joint_reset_confusion(qc, subset,
                                                        joint_group_size=len(subset),
                                                        show_progress_bar=True)
```

```
100%|██████████| 16/16 [00:40<00:00, 2.61s/it]
```

```
[8]:
```

```
for row in subset_reset_conf_matx[subset]:
    pr_success = row[0]
    print(pr_success)
```

```
0.7
0.8999999999999999
0.8999999999999999
0.9999999999999999
0.9999999999999999
0.9999999999999999
0.9999999999999999
0.9999999999999999
0.7999999999999999
0.9999999999999999
0.7
0.9999999999999999
0.7999999999999999
0.8999999999999999
0.7
0.8999999999999999
```

#### Functions¶

`get_flipped_program` (program) |
For symmetrization, generate a program where X gates are added before measurement. |

`estimate_confusion_matrix` (qc, qubit, num_shots) |
Estimate the readout confusion matrix for a given qubit. |

`estimate_joint_confusion_in_set` (qc, qubits, …) |
Measures the joint readout confusion matrix for all groups of size group_size among the qubits. |

`estimate_joint_reset_confusion` (qc, qubits, …) |
Measures a reset ‘confusion matrix’ for all groups of size joint_group_size among the qubits. |

`marginalize_confusion_matrix` (…) |
Marginalize a confusion matrix to get a confusion matrix on only the marginal_subset. |

### Spectroscopic and Analog Measurements of Qubits¶

The protocols in the module `qubit_spectroscopy` are closer to analog protocols than gate-based QCVV protocols.

#### Qubit spectroscopy: \(T_1\) measurement example¶

This notebook demonstrates how to use the module `qubit_spectroscopy` to assess the “energy relaxation” time, \(T_1\), of one or more qubits on a real quantum device using pyQuil. Energy relaxation is one of the major sources of error on a quantum computer.

A \(T_1\) experiment consists of an X pulse, bringing the qubit from \(|0\rangle\) to \(|1\rangle\) (the excited state of the artificial atom), followed by a delay of variable duration.

The physics of the devices is such that we expect the state to decay exponentially with increasing time because of “energy relaxation”. We characterize this decay by the `decay_time` constant, typically denoted \(T_1 = 1/\Gamma\) where \(\Gamma\) is the decay rate. The parameter \(T_1\) is referred to as the qubit’s “relaxation” or “decoherence” time. A sample Quil program at one data point (specified by the duration of the DELAY pragma) for qubit 0 with a 10 us wait would look like

```
DECLARE ro BIT[1]
RX(pi) 0
PRAGMA DELAY 0 "1e-05"
MEASURE 0 ro[0]
```
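To see the shape of the signal being measured, here is a toy model of the expected decay (the \(T_1\) value of 25 us is assumed for illustration, not a measured quantity):

```python
import numpy as np

T1 = 25e-6  # assumed decay time of 25 us (illustrative, not a measured value)
times = np.linspace(0, 60e-6, 15)

# After an X pulse and a delay of length t, the expected excited-state
# population decays as exp(-t / T1)
pr_excited = np.exp(-times / T1)

assert np.isclose(pr_excited[0], 1.0)   # no delay: qubit still in |1>
assert np.all(np.diff(pr_excited) < 0)  # population decreases monotonically
```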

**NB: Since decoherence and dephasing noise are only simulated on gates, and we make use of DELAY pragmas to vary the wait time, we cannot simulate decoherence on the QVM with this experiment as written. This notebook should only be run on a real quantum device.**

setup - imports and relevant units

```
[1]:
```

```
from matplotlib import pyplot as plt
from pyquil.api import get_qc, QuantumComputer
from forest.benchmarking.qubit_spectroscopy import (
generate_t1_experiments,
acquire_qubit_spectroscopy_data,
fit_t1_results, get_stats_by_qubit)
```

We treat SI base units, such as the second, as dimensionless unit quantities, so we define relative units, such as the microsecond, using scientific notation.

```
[2]:
```

```
MICROSECOND = 1e-6
```

**Get a quantum computer**

```
[3]:
```

```
# use this command to get the real QPU
#qc = get_qc('Aspen-1-15Q-A')
qc = get_qc('2q-noisy-qvm') # will run on a QVM, but not meaningfully
qubits = qc.qubits()
qubits
```

```
[3]:
```

```
[0, 1]
```

##### Generate simultaneous \(T_1\) experiments¶

We make an experiment for each desired qubit in `qubits`, specifying the times in seconds at which we would like to measure the decay.

```
[4]:
```

```
import numpy as np
stop_time = 60 * MICROSECOND
num_points = 15
times = np.linspace(0, stop_time, num_points)
expt = generate_t1_experiments(qubits, times)
print(expt)
```

```
[<forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f79202e8>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f7920588>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f7920898>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f7920ba8>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f7920e80>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f78c4198>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f78c4438>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f78c4748>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f78c4ac8>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f78c4dd8>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f78c9128>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f78c9438>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f78c9748>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f78c9a58>, <forest.benchmarking.observable_estimation.ObservablesExperiment object at 0x7fc7f78c9d68>]
```

##### Acquire data¶

Collect our \(T_1\) raw data using `acquire_qubit_spectroscopy_data`.

```
[5]:
```

```
num_shots = 1000
results = acquire_qubit_spectroscopy_data(qc, expt, num_shots)
```

##### Analyze and plot¶

**Use the results to produce estimates of** \(T_1\)

In the cell below we first extract lists of expectations and std_errs from the results, stored separately by qubit. For each qubit we then fit an exponential decay curve and extract the `decay_time` constant, i.e. the \(T_1\). Finally we plot the fitted decay curve over the data for each qubit.

```
[6]:
```

```
from forest.benchmarking.plotting import plot_figure_for_fit
stats_by_qubit = get_stats_by_qubit(results)
for q, stats in stats_by_qubit.items():
    fit = fit_t1_results(np.asarray(times) / MICROSECOND, stats['expectation'],
                         stats['std_err'])
    fig, axs = plot_figure_for_fit(fit, title=f'Q{q} Data and Fit', xlabel=r"Time [$\mu s$]",
                                   ylabel=r"Pr($|1\rangle$)")
    t1 = fit.params['decay_time'].value  # in us
```


#### Qubit spectroscopy: \(T_2^*\) (Ramsey) and \(T_2\)-echo (Hahn echo) measurement example¶

This notebook demonstrates how to assess the dephasing time or spin–spin relaxation, \(T_2^*\), and detuning of one or more qubits on a real quantum device using pyQuil.

A \(T_2^*\) Ramsey experiment measures the dephasing time, \(T_2^*\), of a qubit and the qubit’s detuning, which is a measure of the difference between the qubit’s resonant frequency and the frequency of the rotation pulses being used to perform the \(T_2^*\) Ramsey experiment. Ideally, this detuning would be 0, meaning our pulses are perfectly tailored to address each qubit without enacting any unintended interactions with neighboring qubits. Practically, however, qubits drift, and the pulse parameters need to be updated. We retune our qubits and pulses regularly, but if you want to assess how well tuned the pulses are to the qubits’ frequencies for yourself, you can run a \(T_2^*\) Ramsey experiment and see how big the qubit detuning is.

To design a \(T_2^*\) Ramsey experiment, we need to provide a conservative estimate for how big the detuning is. The Ramsey analysis assumes this simulated detuning value, sometimes referred to as TPPI in nuclear magnetic resonance (NMR), is less than the actual qubit detuning, so for a recently retuned chip, it’s safe to make an estimate of a few megahertz (MHz).

A \(T_2^*\) Ramsey experiment consists of an X/2 pulse, bringing the qubit to the equator on the Bloch sphere, followed by a delay of variable duration, \(t\), during which we expect the qubit to precess about the Z axis on the equator. We then apply a Z rotation through \(2\pi * t * \text{detuning}\), where detuning is the simulated detuning. Finally, we apply another X/2 pulse, which, if the precession from the delay and the manual Z rotation offset each other perfectly, should land the qubit in the excited state. When the precession from the delay and the Z rotation do not offset each other perfectly, the qubit does not land perfectly in the excited state, so the excited state visibility oscillates sinusoidally in time, creating fringes. While this is happening, dephasing also causes the state to contract toward the center of the Bloch sphere, away from its surface. This causes the amplitude of the fringes to decay in time, so we expect an exponentially-decaying sinusoidal waveform as a function of the delay time. We calculate the time decay constant from these fringes, as in \(T_1\) experiments, and call this quantity \(T_2^*\). We also fit to the frequency of the Ramsey fringes to get our calculated detuning.
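The expected signal described above is a sinusoid oscillating at the detuning frequency whose envelope decays with time constant \(T_2^*\). A schematic numpy model of the fringes (the amplitude, baseline, and phase conventions here are assumptions for illustration, not the library's parameterization):

```python
import numpy as np

def ramsey_fringes(t, t2_star, detuning, amplitude=0.5, baseline=0.5):
    """Excited-state visibility vs. delay: fringes at the detuning frequency,
    decaying with time constant T2*."""
    return baseline + amplitude * np.exp(-t / t2_star) * np.cos(2 * np.pi * detuning * t)

MICROSECOND = 1e-6
MHZ = 1e6
# with 5 MHz detuning the fringe period is 0.2 us
p0 = ramsey_fringes(0.0, 10 * MICROSECOND, 5 * MHZ)
p_one_period = ramsey_fringes(0.2 * MICROSECOND, 10 * MICROSECOND, 5 * MHZ)
```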

A sample QUIL program at one data point (specified by the duration of the DELAY pragma) for qubit 0 with a 10us delay and 5 MHz of simulated detuning (so that \(2\pi * t * \text{detuning} = 100\pi\)) would look like

```
DECLARE ro BIT[1]
RX(pi/2) 0
PRAGMA DELAY 0 "1e-05"
RZ(100*pi) 0
RX(pi/2) 0
MEASURE 0 ro[0]
```

**NB: Since decoherence and dephasing noise are only simulated on gates, and we make use of DELAY pragmas to simulate relaxation time, we cannot simulate dephasing on the QVM with this experiment as written. This notebook should only be run on a real quantum device.**

Setup: imports and relevant units

```
[1]:
```

```
from matplotlib import pyplot as plt
from pyquil.api import get_qc
from forest.benchmarking.qubit_spectroscopy import (
generate_t2_star_experiments,
generate_t2_echo_experiments,
acquire_qubit_spectroscopy_data,
fit_t2_results, get_stats_by_qubit)
from forest.benchmarking.plotting import plot_figure_for_fit
```

We treat SI base units, such as the second, as dimensionless unit quantities, so we define relative units, such as the microsecond, using scientific notation.

```
[2]:
```

```
MHZ = 1e6
MICROSECOND = 1e-6
```

**Get a quantum computer**

```
[3]:
```

```
#qc = get_qc('Aspen-1-15Q-A')
qc = get_qc('2q-noisy-qvm') # will run on a QVM, but not meaningfully
qubits = qc.qubits()
qubits
```

```
[3]:
```

```
[0, 1]
```

##### \(T_2^*\) Experiment¶

###### Generate simultaneous \(T_2^*\) experiments¶

We can specify which qubits we want to measure using `qubits` and the maximum delay we’ll use for each using `stop_time`.

```
[4]:
```

```
import numpy as np
stop_time = 13 * MICROSECOND
num_points = 30
times = np.linspace(0, stop_time, num_points)
detune = 5 * MHZ
t2_star_expt = generate_t2_star_experiments(qubits, times, detune)
```

###### Acquire data¶

Collect our \(T_2^*\) raw data using `acquire_qubit_spectroscopy_data`.

```
[5]:
```

```
results = acquire_qubit_spectroscopy_data(qc, t2_star_expt, num_shots=500)
```

###### Analyze and plot¶

**Use the results to fit a curve and produce estimates of \(T_2^*\)**

In the cell below we first extract lists of expectations and std_errs from the results and store them separately by qubit. For each qubit we then fit to an exponentially decaying sinusoid and evaluate the fitted `decay_time` constant, i.e. the \(T_2^*\), as well as the fitted detuning. Finally we plot the curve fit over the data for each qubit. These are the Ramsey fringes for each qubit with respect to increasing delay duration.

**Note:** in some of the cells below there is a comment `# NBVAL_SKIP`; this is used in testing to speed up our tests by skipping that particular cell.

```
[6]:
```

```
# NBVAL_SKIP
stats_by_qubit = get_stats_by_qubit(results)
for q, stats in stats_by_qubit.items():
    fit = fit_t2_results(np.asarray(times) / MICROSECOND, stats['expectation'],
                         stats['std_err'])
    fig, axs = plot_figure_for_fit(fit, title=f'Q{q} Data and Fit', xlabel=r"Time [$\mu s$]",
                                   ylabel=r"Pr($|1\rangle$)")
    t2_star = fit.params['decay_time'].value  # in us
    freq = fit.params['frequency'].value  # in MHz
```

##### \(T_2\)-echo experiment¶

```
[7]:
```

```
detune = 5 * MHZ
t2_echo_expt = generate_t2_echo_experiments(qubits, times, detune)
```

###### Acquire data¶

Collect our \(T_2\)-echo raw data using `acquire_qubit_spectroscopy_data`.

```
[8]:
```

```
echo_results = acquire_qubit_spectroscopy_data(qc, t2_echo_expt)
```

###### Analyze and plot¶

**Use the results to produce estimates of \(T_2\)-echo**

```
[9]:
```

```
# NBVAL_SKIP
stats_by_qubit = get_stats_by_qubit(echo_results)
for q, stats in stats_by_qubit.items():
    fit = fit_t2_results(np.asarray(times) / MICROSECOND, stats['expectation'],
                         stats['std_err'])
    fig, axs = plot_figure_for_fit(fit, title=f'Q{q} Data and Fit', xlabel=r"Time [$\mu s$]",
                                   ylabel=r"Pr($|1\rangle$)")
    t2_echo = fit.params['decay_time'].value  # in us
    freq = fit.params['frequency'].value  # in MHz
```


#### Qubit spectroscopy: Rabi measurement example¶

This notebook demonstrates how to perform a Rabi experiment on a simulated or real quantum device. This experiment tests the calibration of the `RX` pulse by rotating through a full \(2\pi\) radians and evaluating the excited state visibility as a function of the angle of rotation, \(\theta\). The QUIL program for one data point for qubit 0 at, for example, \(\theta=\pi/2\), is

```
DECLARE ro BIT[1]
X 0
RX(pi/2) 0
MEASURE 0 ro[0]
```

The `X 0` is simply to initialize the state to \(|1\rangle\). We expect to see a characteristic “Rabi flop” by sweeping \(\theta\) over \([0, 2\pi)\), thereby completing a full rotation around the Bloch sphere. It should look like \(\dfrac{1-\cos(\theta)}{2}\).
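The ideal curve is easy to write down. A small numpy sketch of the expected excited-state population for `RX(theta)` acting on \(|0\rangle\) (a hand-written model for reference, separate from the library's fitting routine):

```python
import numpy as np

def rabi_flop(theta):
    """Ideal excited-state population (1 - cos(theta)) / 2 for RX(theta) on |0>."""
    return (1 - np.cos(theta)) / 2

# one full Rabi flop over a 2*pi sweep
angles = np.linspace(0, 2 * np.pi, 15)
populations = rabi_flop(angles)
```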

Setup

```
[1]:
```

```
from matplotlib import pyplot as plt
from pyquil.api import get_qc, QuantumComputer
from forest.benchmarking.qubit_spectroscopy import *
```

```
[2]:
```

```
#qc = get_qc('Aspen-1-15Q-A')
qc = get_qc('2q-noisy-qvm') # will run on a QVM, but not meaningfully
qubits = qc.qubits()
qubits
```

```
[2]:
```

```
[0, 1]
```

##### Generate simultaneous Rabi experiments on all qubits¶

```
[3]:
```

```
import numpy as np
from numpy import pi
angles = np.linspace(0, 2*pi, 15)
rabi_expts = generate_rabi_experiments(qubits, angles)
```

##### Acquire data¶

Collect our Rabi raw data using `acquire_qubit_spectroscopy_data`.

```
[4]:
```

```
results = acquire_qubit_spectroscopy_data(qc, rabi_expts, num_shots=500)
```

##### Analyze and plot¶

**Use the results to fit a Rabi curve and estimate parameters**

In the cell below we first extract lists of expectations and std_errs from the results and store them separately by qubit. For each qubit we then fit to a sinusoid and evaluate the period (which should be \(2\pi\)). Finally we plot the Rabi flop.

```
[5]:
```

```
from forest.benchmarking.plotting import plot_figure_for_fit
stats_by_qubit = get_stats_by_qubit(results)
for q, stats in stats_by_qubit.items():
    fit = fit_rabi_results(angles, stats['expectation'], stats['std_err'])
    fig, axs = plot_figure_for_fit(fit, title=f'Q{q} Data and Fit', xlabel="RX angle [rad]",
                                   ylabel=r"Pr($|1\rangle$)")
    frequency = fit.params['frequency'].value  # ratio of actual angle over intended control angle
    amplitude = fit.params['amplitude'].value  # (P(1 given 1) - P(1 given 0)) / 2
    baseline = fit.params['baseline'].value  # amplitude + P(1 given 0)
```


#### Qubit spectroscopy: CZ Ramsey measurement example¶

Similar to a \(T_2^*\) Ramsey experiment, a CZ Ramsey experiment measures fringes resulting from induced Z rotations, which can result from non-unitary CZs. To rectify this non-unitarity, we determine the correction we need to apply to each qubit in the form of `RZ` rotations. If a CZ is perfectly unitary (or has been compensated for adequately with `RZ` pulses), a CZ Ramsey experiment should return 0 radians for each qubit. If, however, some correction is required, these angles will be non-zero.

A sample QUIL program at one data point (specified by the equatorial Z rotation, which maximizes excited state visibility when equal to the required `RZ` correction) between qubits 0 and 1 would look like

```
DECLARE ro BIT[1]
DECLARE theta REAL[1]
RX(pi/2) 0
CZ 0 1
RZ(theta) 0
RX(pi/2) 0
MEASURE 0 ro[0]
```

Setup

```
[1]:
```

```
from typing import Tuple
from matplotlib import pyplot as plt
import numpy as np
from pyquil import Program
from pyquil.api import get_qc, QuantumComputer
from forest.benchmarking.qubit_spectroscopy import *
```

```
[2]:
```

```
#qc = get_qc('Aspen-1-15Q-A')
qc = get_qc('3q-noisy-qvm') # will run on a QVM, but not very meaningfully since there is no cross-talk.
graph = qc.qubit_topology()
edges = list(graph.edges())
edges
```

```
[2]:
```

```
[(0, 1), (0, 2), (1, 2)]
```

```
[3]:
```

```
# if you are on the QPU you can look at `qc.device.specs`
# qc.device.specs
```

##### Generate CZ Ramsey experiments¶

```
[4]:
```

```
import numpy as np
from numpy import pi
angles = np.linspace(0, 2*pi, 15)
edge = (0, 1)
# generate a set of len(angles) experiments for each measurement qubit on edge (0, 1).
edge_0_1_expts = [generate_cz_phase_ramsey_experiments(edge, measure_q, angles) for measure_q in edge]
```

##### Acquire data¶

Collect our Ramsey raw data using `estimate_observables`.

```
[5]:
```

```
from forest.benchmarking.observable_estimation import estimate_observables
# acquire results for each measurement qubit on each edge.
results = []
for angle_expts in zip(*edge_0_1_expts):
    angle_results = []
    for meas_q_expt in angle_expts:
        angle_results += estimate_observables(qc, meas_q_expt, num_shots=500)
    results.append(angle_results)
```

##### Analyze and plot¶

**Use the results to produce estimates of Ramsey-acquired compensatory RZ phases for this edge**

In the cell below we first extract the expectation and std_err for each measurement and store these results in lists separately for each qubit measured. For each qubit measured we then fit a sinusoid to the data, from which we can determine the `offset` of the maximum excited state visibility, which tells us the effective imparted phase. Finally we plot the CZ Ramsey fringes for each qubit with respect to increasing applied contrast phase.
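Schematically, the phase at which a sinusoidal visibility curve peaks can be recovered by projecting the data onto the two quadratures. A numpy-only sketch with made-up numbers, independent of the library's `fit_cz_phase_ramsey_results`:

```python
import numpy as np

def estimate_phase_offset(angles, signal):
    """Phase of a sinusoid baseline + A*cos(angle - offset), estimated by
    projecting the mean-subtracted data onto cos and sin quadratures."""
    centered = signal - np.mean(signal)
    c = np.sum(centered * np.cos(angles))
    s = np.sum(centered * np.sin(angles))
    return np.arctan2(s, c)

angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
true_offset = 0.7  # radians; hypothetical imparted phase
data = 0.5 + 0.4 * np.cos(angles - true_offset)
est = estimate_phase_offset(angles, data)
```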

```
[6]:
```

```
from forest.benchmarking.plotting import plot_figure_for_fit
stats_by_qubit = get_stats_by_qubit(results)
for q, stats in stats_by_qubit.items():
    fit = fit_cz_phase_ramsey_results(angles, stats['expectation'], stats['std_err'])
    fig, axs = plot_figure_for_fit(fit, title=f'Q{q} Data and Fit', xlabel="RZ angle [rad]",
                                   ylabel=r"Pr($|1\rangle$)")
    frequency = fit.params['frequency'].value  # ratio of actual angle over intended control angle
    amplitude = fit.params['amplitude'].value  # (P(1 given 1) - P(1 given 0)) / 2
    baseline = fit.params['baseline'].value  # amplitude + P(1 given 0)
    offset = fit.params['offset'].value  # effective phase imparted on the qubit by the CZ, relative to the true qubit frequency
```


#### General Functions¶

| Function | Description |
|---|---|
| `acquire_qubit_spectroscopy_data(qc, …)` | A standard data acquisition method for all experiments in this module. |
| `get_stats_by_qubit(expt_results)` | Organize the mean and std_err of a single-observable experiment by qubit. |
| `do_t1_or_t2(qc, qubits, times, kind, …)` | A wrapper around experiment generation, data acquisition, and estimation that runs a T1, T2-echo, or T2* experiment on each qubit in `qubits` and returns the rb_decay along with the experiments and results. |

#### T1¶

| Function | Description |
|---|---|
| `generate_t1_experiments(qubits, times)` | Return ObservablesExperiments containing programs which constitute a T1 experiment to measure the decay time from the excited state to the ground state for each qubit in `qubits`. |
| `fit_t1_results(times, z_expectations, …)` | Wrapper for fitting the results of a T1 experiment for a single qubit; simply extracts key parameters and passes on to the standard fit. |

#### T2¶

| Function | Description |
|---|---|
| `generate_t2_star_experiments(qubits, times, …)` | Return ObservablesExperiments containing programs which constitute a T2* experiment to measure the T2* coherence decay time for each qubit in `qubits`. |
| `generate_t2_echo_experiments(qubits, times, …)` | Return ObservablesExperiments containing programs which constitute a T2-echo experiment to measure the T2-echo coherence decay time. |
| `fit_t2_results(times, y_expectations, …)` | Wrapper for fitting the results of an ObservablesExperiment; simply extracts key parameters and passes on to the standard fit. |

#### Rabi¶

| Function | Description |
|---|---|
| `generate_rabi_experiments(qubits, angles)` | Return ObservablesExperiments containing programs which constitute a Rabi experiment. |
| `fit_rabi_results(angles, z_expectations, …)` | Wrapper for fitting the results of a Rabi experiment on a qubit; simply extracts key parameters and passes on to the standard fit. |

#### CZ phase Ramsey¶

| Function | Description |
|---|---|
| `generate_cz_phase_ramsey_experiments(…)` | Return ObservablesExperiments containing programs that constitute a CZ phase Ramsey experiment. |
| `fit_cz_phase_ramsey_results(angles, …)` | Wrapper for fitting the results of an ObservablesExperiment; simply extracts key parameters and passes on to the standard fit. |

### Classical Logic¶

This module allows us to use a “simple” reversible binary adder to benchmark a quantum computer. The code is contained in the module `classical_logic`.

The benchmark is simplistic and not very rigorous, as it does not test any specific feature of the hardware. Further, the whole circuit is classical in the sense that we start and end in computational basis states and all gates simply perform classical NOT, controlled-NOT (CNOT), or doubly controlled NOT (CCNOT, aka a Toffoli gate). Finally, even for the modest task of adding two one-bit numbers, the CZ gate (our fundamental two-qubit gate) count is very high for the circuit. This in turn implies a low probability of the entire circuit working.

#### Ripple Carry Adder¶

In this notebook we use a “simple” reversible binary adder to benchmark a quantum computer. The code is contained in the module `classical_logic`.

The benchmark is simplistic and not very rigorous, as it does not test any specific feature of the hardware. Further, the whole circuit is classical in the sense that we start and end in computational basis states and all gates simply perform classical NOT, controlled-NOT (`CNOT`), or doubly controlled NOT (`CCNOT`, aka a Toffoli gate). Finally, even for the modest task of adding two one-bit numbers, the `CZ` gate (our fundamental two-qubit gate) count is very high for the circuit. This in turn implies a low probability of the entire circuit working.

However, it is very easy to explain the performance of the hardware to non-experts, e.g. *“At the moment quantum hardware is pretty noisy, so much so that when we run circuits to add two classical bits it gives the correct answer 40% of the time.”*

Moreover the simplicity of the benchmark is also its strength. At the bottom of this notebook we provide code for examining the “error distribution”. When run on hardware we can observe that low weight errors dominate which gives some insight that the hardware is approximately doing the correct thing.

The module `classical_logic` is based on the circuits found in [QRCA], *A new quantum ripple-carry addition circuit*.

*(Figures from [QRCA].)*

```
[1]:
```

```
import numpy as np
from pyquil.quil import Program
from pyquil.gates import *
from pyquil.api import get_qc
from forest.benchmarking.classical_logic.ripple_carry_adder import *
import matplotlib.pyplot as plt
import networkx as nx
```

```
[2]:
```

```
# noiseless QVM
qc = get_qc("9q-generic", as_qvm=True, noisy=False)
# noisy QVM
noisy_qc = get_qc("9q-generic-noisy-qvm", as_qvm=True, noisy=True)
```

##### Draw the noiseless qc topology¶

```
[3]:
```

```
nx.draw(qc.qubit_topology(),with_labels=True)
```


##### One bit addition: 1+1 = 10 i.e. 2¶

There is a small bit of setup that needs to happen before creating the program for the circuit. Specifically, you have to pick registers of qubits for the two input numbers, `reg_a` and `reg_b`, a carry bit `c`, and an extra digit `z` that will hold the most significant bit of the answer.

If you have a specific line of qubits in mind for the registers, there is a helper `assign_registers_to_line_or_cycle()` which will provide these registers for you: `c` is assigned to the provided start qubit and assignments go down the line in the circuit diagram above. However, you have to provide a subgraph that is sufficiently simple that the assignment can be done by simply moving to the next neighbor, e.g. the graph is a line or a cycle.

If you don’t care about the particular arrangement of qubits, then you can instead use `get_qubit_registers_for_adder()`, which will find a suitable assignment for you if one exists.

```
[4]:
```

```
# the input numbers
num_a = [1]
num_b = [1]
# There are two easy routes to assign registers
# 1) if you have particular target qubits in mind
target_qubits = [3,6,7,4,1]
start = 3
reg_a, reg_b, c, z = assign_registers_to_line_or_cycle(start,
qc.qubit_topology().subgraph(target_qubits),
len(num_a))
print('Registers c, a, b, z on target qubits', target_qubits,': ', c, reg_a, reg_b, z)
# 2) if you don't care about a particular arrangement
# you can still exclude qubits. Here we exclude 0.
reg_a, reg_b, c, z = get_qubit_registers_for_adder(qc, len(num_a), qubits = list(range(1,10)))
print('Registers c, a, b, z on any qubits excluding q0: ', c, reg_a, reg_b, z)
# given the numbers and registers construct the circuit to add
ckt = adder(num_a, num_b, reg_a, reg_b, c, z)
exe = qc.compile(ckt)
result = qc.run(exe)
print('\nThe answer of 1+1 is 10')
print('The circuit on an ideal qc gave: ', result)
```

```
Registers c, a, b, z on target qubits [3, 6, 7, 4, 1] : 3 [7] [6] 4
Registers c, a, b, z on any qubits excluding q0: 4 [2] [1] 5
The answer of 1+1 is 10
The circuit on an ideal qc gave: [[1 0]]
```

##### Two bit addition¶

We will start with 1+1=2 on a noiseless simulation.

We choose to represent 1 (decimal) as a two digit binary number 01 so the addition becomes

01 + 01 = 010

where the bits are ordered from most significant to least i.e. (MSB…LSB).

The MSB is necessary for representing other two bit additions e.g. 2 + 2 = 4 -> 10 + 10 = 100
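Since the target computation is ordinary binary addition, the ideal outputs can be checked with a few lines of plain Python. A classical ripple-carry sketch using the same MSB-first convention (illustrative only; the library's `adder` builds the quantum circuit):

```python
def classical_ripple_carry_add(num_a, num_b):
    """Add two equal-length MSB-first bit lists; return an MSB-first result
    one bit longer, the extra bit being the final carry."""
    carry = 0
    out_lsb_first = []
    for a, b in zip(reversed(num_a), reversed(num_b)):  # iterate LSB to MSB
        total = a + b + carry
        out_lsb_first.append(total % 2)
        carry = total // 2
    out_lsb_first.append(carry)
    return list(reversed(out_lsb_first))

# 01 + 01 = 010 and 10 + 10 = 100, as in the text
```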

```
[5]:
```

```
# the input numbers
num_a = [0,1]
num_b = [0,1]
#
reg_a, reg_b, c, z = get_qubit_registers_for_adder(qc, len(num_a))
# given the numbers and registers construct the circuit to add
ckt = adder(num_a, num_b, reg_a, reg_b, c, z)
exe = qc.compile(ckt)
result = qc.run(exe)
print('The answer of 01+01 is 010')
print('The circuit on an ideal qc gave: ', result)
```

```
The answer of 01+01 is 010
The circuit on an ideal qc gave: [[0 1 0]]
```

##### Draw the noisy qc topology¶

```
[6]:
```

```
nx.draw(noisy_qc.qubit_topology(),with_labels=True)
```


##### Now try 1+1=2 on a noisy qc¶

The output is now stochastic; try re-running this cell multiple times! Note in particular that the MSB is sometimes (rarely) 1 due to some combination of readout error and error propagation through the CNOT gates.

```
[7]:
```

```
reg_a, reg_b, c, z = get_qubit_registers_for_adder(noisy_qc, len(num_a))
ckt = adder(num_a, num_b, reg_a, reg_b, c, z)
exe = noisy_qc.compile(ckt)
noisy_qc.run(exe)
```

```
[7]:
```

```
array([[0, 1, 1]])
```

##### Get results for all summations of pairs of 2-bit strings¶

Because classical binary addition is easy, we can calculate the ideal output of the circuit. In order to see how well the QPU executes the circuit, we average the circuit over all possible input strings. Here we look at two bit strings, e.g.

| Register a | Register b | a + b + carry |
|---|---|---|
| 00 | 00 | 000 |
| 00 | 01 | 001 |
| 00 | 10 | 010 |
| 00 | 11 | 011 |
| 01 | 00 | 001 |
| \(\vdots\) | \(\vdots\) | \(\vdots\) |
| 11 | 11 | 110 |

The rough measure of goodness is the success probability, which we define as the number of times the QPU correctly returns the string listed in the (a + b + carry) column divided by the total number of trials.
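Concretely, the success probability is just the fraction of shots whose entire output register matches the ideal string. A small sketch on hypothetical shot data (the library's `get_success_probabilities_from_results` does this for you):

```python
import numpy as np

def success_probability(shots, ideal):
    """Fraction of shots whose full output bit string equals the ideal answer."""
    shots = np.asarray(shots)
    return float(np.mean(np.all(shots == np.asarray(ideal), axis=1)))

# four hypothetical shots of a 01+01 run; three returned the correct 010
shots = [[0, 1, 0], [0, 1, 0], [1, 1, 0], [0, 1, 0]]
p = success_probability(shots, [0, 1, 0])
```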

You might wonder how well you can do just by generating a random binary number and reporting that as the answer. If you are adding two \(n\)-bit strings, the probability of getting the correct answer by guessing is

\(\Pr({\rm correct}\, |\, n)= 1/ 2^{n+1}\),

explicitly \(\Pr({\rm correct}\, |\, 1)= 0.25\) and \(\Pr({\rm correct}\, |\, 2)= 0.125\).

A zeroth order performance criterion is to do better than these numbers.
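The guessing baseline can be verified by brute-force enumeration over all possible \((n+1)\)-bit outputs:

```python
from itertools import product

def guess_success_probability(n):
    """Probability that a uniformly random (n+1)-bit guess matches the one
    correct answer; by symmetry the choice of answer string is irrelevant."""
    answer = (0,) * (n + 1)
    guesses = list(product((0, 1), repeat=n + 1))
    return sum(g == answer for g in guesses) / len(guesses)
```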

```
[8]:
```

```
n_bits = 2
nshots = 100
results = get_n_bit_adder_results(noisy_qc, n_bits, use_param_program=False, num_shots = nshots,
show_progress_bar=True)
```

```
100%|██████████| 16/16 [00:35<00:00, 2.21s/it]
```

```
[9]:
```

```
# success probabilities of different input strings
pr_correct = get_success_probabilities_from_results(results)
print('The probability of getting the correct answer for each output in the above table is:')
print(np.round(pr_correct, 4),'\n')
print('The success probability averaged over all inputs is', np.round(np.mean(pr_correct), 5))
```

```
The probability of getting the correct answer for each output in the above table is:
[0.61 0.59 0.63 0.59 0.53 0.53 0.55 0.59 0.5 0.49 0.61 0.54 0.62 0.6
0.6 0.62]
The success probability averaged over all inputs is 0.575
```

```
[10]:
```

```
# For which outputs did we do better than random?
np.asarray(pr_correct) > 1/2**(n_bits+1)
```

```
[10]:
```

```
array([ True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True])
```

##### Get the distribution of the hamming weight of errors¶

Even if the success probability of the circuit is worse than random there might be a way in which the circuit is not absolutely random. This could indicate that the computation is actually doing something ‘close’ to what is desired. To look for such situations we consider the full distribution of errors in our outputs.

The output of our circuit is in the computational basis so all errors manifest as bit flips from the actual answer. The number of bits you need to flip to transform one binary string \(B_1\) to another binary string \(B_2\) is called the Hamming distance. We are interested in the distance \({\rm dist}(B_t, B_o)\) between the true ideal answer \(B_{t}\) and the noisy output answer \(B_{o}\), which is equivalent to the Hamming weight \({\rm wt}(\cdot)\) of the error in our output.

For example, for various ideal answers and measured outputs for 4 bit addition (remember there’s an extra fifth MSB for the answer) we have

\({\rm dist}(00000,00001) = {\rm wt}(00001) = 1\)

\({\rm dist}(00000,10001) = {\rm wt}(10001) = 2\)

\({\rm dist}(11000,10101) = {\rm wt}(01101) = 3\)

\({\rm dist}(00001,11110) = {\rm wt}(11111) = 5\)

In order to see if our near term devices are doing interesting things we calculate the distribution of the Hamming weight of the errors observed in our QPU data with respect to the known ideal output. The entry corresponding to zero Hamming weight is the success probability.
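Hamming distance and weight are straightforward to compute; a quick sketch reproducing the worked examples above:

```python
def hamming_distance(b1, b2):
    """Number of positions at which two equal-length bit strings differ,
    i.e. the Hamming weight of their XOR."""
    assert len(b1) == len(b2)
    return sum(x != y for x, y in zip(b1, b2))
```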

```
[11]:
```

```
distributions = get_error_hamming_distributions_from_results(results)
```

##### Plot distribution of 00+00 and 11+11 and compare to random¶

```
[12]:
```

```
from scipy.special import comb
zeros_distribution = distributions[0]
rand_ans_distr = [comb(n_bits + 1, x)/2**(n_bits + 1) for x in range(len(zeros_distribution))]
x_labels = np.arange(0, len(zeros_distribution))
plt.bar(x_labels, zeros_distribution, width=0.61, align='center')
plt.bar(x_labels, rand_ans_distr, width=0.31, align='center')
plt.xticks(x_labels)
plt.xlabel('Hamming Weight of Error')
plt.ylabel('Relative Frequency of Occurrence')
plt.grid(axis='y', alpha=0.75)
plt.legend(['data','random'])
plt.title('Z basis Error Hamming Wt Distr for 00+00=000')
#name = 'numbits'+str(n_bits) + '_basisZ' + '_shots' + str(nshots)
#plt.savefig(name)
plt.show()
```

```
[13]:
```

```
from scipy.special import comb
ones_distribution = distributions[-1]
rand_ans_distr = [comb(n_bits + 1, x)/2**(n_bits + 1) for x in range(len(zeros_distribution))]
x_labels = np.arange(0, len(ones_distribution))
plt.bar(x_labels, ones_distribution, width=0.61, align='center')
plt.bar(x_labels, rand_ans_distr, width=0.31, align='center')
plt.xticks(x_labels)
plt.xlabel('Hamming Weight of Error')
plt.ylabel('Relative Frequency of Occurrence')
plt.grid(axis='y', alpha=0.75)
plt.legend(['data','random'])
plt.title('Z basis Error Hamming Wt Distr for 11+11=110')
#name = 'numbits'+str(n_bits) + '_basisZ' + '_shots' + str(nshots)
#plt.savefig(name)
plt.show()
```

##### Plot average distribution over all summations; compare to random¶

```
[14]:
```

```
from scipy.special import comb
averaged_distr = np.mean(distributions, axis=0)
rand_ans_distr = [comb(n_bits + 1, x)/2**(n_bits + 1) for x in range(len(averaged_distr))]
x_labels = np.arange(0, len(averaged_distr))
plt.bar(x_labels, averaged_distr, width=0.61, align='center')
plt.bar(x_labels, rand_ans_distr, width=0.31, align='center')
plt.xticks(x_labels)
plt.xlabel('Hamming Weight of Error')
plt.ylabel('Relative Frequency of Occurrence')
plt.grid(axis='y', alpha=0.75)
plt.legend(['data','random'])
plt.title('Z basis Error Hamming Wt Distr Avgd Over {}-bit Strings'.format(n_bits))
#name = 'numbits'+str(n_bits) + '_basisZ' + '_shots' + str(nshots)
#plt.savefig(name)
plt.show()
```

##### Now do the same, but with addition in the X basis¶

In this section we do classical logic in the X basis. This means the inputs to the circuits are no longer \(|0\rangle\) and \(|1\rangle\); instead they are \(|+\rangle = H|0\rangle\) and \(|-\rangle = H|1\rangle\).

Originally all the logic was done with X, CNOT, and Toffoli gates. In this case we have to convert them to the corresponding gates in the X basis. E.g.

CNOT = \(|0\rangle\langle 0|\otimes I + |1\rangle\langle 1|\otimes X\)

becomes

CNOT_in_X_basis = \((H\otimes I)\) CZ \((H\otimes I)\) = \(|+\rangle\langle +|\otimes I + |-\rangle\langle -|\otimes Z\).
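This identity can be checked numerically with 4x4 matrices (a numpy sketch; the qubit ordering here puts the control qubit first in the Kronecker product):

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])  # |0><0|, |1><1|

CZ = np.kron(P0, I2) + np.kron(P1, Z)
HI = np.kron(H, I2)

plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
x_basis_form = np.kron(np.outer(plus, plus), I2) + np.kron(np.outer(minus, minus), Z)

# (H (x) I) CZ (H (x) I)  ==  |+><+| (x) I  +  |-><-| (x) Z
cnot_in_x_basis = HI @ CZ @ HI
identity_holds = bool(np.allclose(cnot_in_x_basis, x_basis_form))
```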

**Note:** in some of the cells below there is a comment `# NBVAL_SKIP`; this is used in testing to speed up our tests by skipping that particular cell.

```
[15]:
```

```
# NBVAL_SKIP
n_bits = 2
# set in_x_basis to true here
results = get_n_bit_adder_results(noisy_qc, n_bits, in_x_basis=True, show_progress_bar=True)
distributions = get_error_hamming_distributions_from_results(results)
averaged_distr = np.mean(distributions, axis=0)
x_labels = np.arange(0, len(averaged_distr))
plt.bar(x_labels, averaged_distr, width=0.61, align='center')
plt.bar(x_labels, rand_ans_distr, width=0.31, align='center')
plt.xticks(x_labels)
plt.xlabel('Hamming Weight of Error')
plt.ylabel('Relative Frequency of Occurrence')
plt.grid(axis='y', alpha=0.75)
plt.legend(['data','random'])
plt.title('X basis Error Hamming Wt Distr Avgd Over {}-bit Strings'.format(n_bits))
#plt.savefig(name)
plt.show()
```

```
100%|██████████| 16/16 [00:35<00:00, 2.23s/it]
```

##### Error probability to random guess probability as a function of number of added bits¶

Here we compare the average probability of the adder working as a function of input size (averaged over all possible input strings) to random guessing. To provide context, we also compare this to the success probability of the best input string (likely the all-zeros input) and the worst input string (likely all ones).

```
[16]:
```

```
# NBVAL_SKIP
summand_lengths = [1,2,3]
avg_n = []
med_n = []
min_n = []
max_n = []
rand_n = []
for n_bits in summand_lengths:
    results = get_n_bit_adder_results(noisy_qc, n_bits, show_progress_bar=True)
    output_len = n_bits + 1
    # success probability averaged over all input strings
    avg_n.append(np.average(get_success_probabilities_from_results(results)))
    # median success probability over all input strings
    med_n.append(np.median(get_success_probabilities_from_results(results)))
    # success probability of the input bit string with the most errors
    min_n.append(np.min(get_success_probabilities_from_results(results)))
    # success probability of the input bit string with the fewest errors
    max_n.append(np.max(get_success_probabilities_from_results(results)))
    # probability of randomly guessing the correct answer
    rand_n.append(1 / 2**output_len)
```

```
100%|██████████| 4/4 [00:02<00:00, 1.62it/s]
100%|██████████| 16/16 [00:34<00:00, 2.15s/it]
100%|██████████| 64/64 [03:27<00:00, 3.24s/it]
```

```
[17]:
```

```
# NBVAL_SKIP
plt.scatter(summand_lengths, avg_n, c='b', label='mean')
plt.scatter(summand_lengths, rand_n, c='m', marker='D', label='random')
plt.scatter(summand_lengths, min_n, c='r', marker='_', label='min/max')
plt.scatter(summand_lengths, max_n, c='r', marker='_')
plt.xticks(summand_lengths)
plt.xlabel('Number of bits added n (n+1 including carry bit)')
plt.ylabel('Probability of working')
plt.legend()
name = 'Pr_suc_fn_nbits' + '_basisZ' + '_shots' + str(nshots)
#plt.savefig(name)
plt.show()
```