Near-Time Gate Optimization¶
This example demonstrates how to maximize gate fidelities by varying pulse parameters in near-time.
This is achieved by defining an experiment from a single randomized benchmarking (RB) pulse sequence and a subsequent state measurement in real time (RT). At the near-time (NT) level, the experiment uses a user callback function that obtains the value of the objective function, in this case the ORBIT fidelity, computes the parameters for the next optimization step, and updates and replaces the pulses defined by the optimized parameters.
We begin with the necessary imports. Note that you will need to install the package scikit-optimize
to run this notebook.
# Package installation for this notebook
%pip install scikit-optimize
import numpy as np
from laboneq.contrib.example_helpers.generate_device_setup import (
generate_device_setup_qubits,
)
from laboneq.contrib.example_helpers.randomized_benchmarking_helper import (
clifford_parametrized,
generate_play_rb_pulses,
make_pauli_gate_map,
)
from laboneq.simple import *
from skopt import Optimizer
Device Setup and Session¶
We prepare both the device setup and session objects needed to run the experiment.
# specify the number of qubits you want to use
number_of_qubits = 2
# generate the device setup and the qubit objects using a helper function
device_setup, qubits = generate_device_setup_qubits(
number_qubits=number_of_qubits,
shfqc=[
{
"serial": "DEV12001",
"zsync": 1,
"number_of_channels": 6,
"readout_multiplex": 6,
"options": None,
}
],
include_flux_lines=False,
server_host="localhost",
setup_name=f"my_{number_of_qubits}_fixed_qubit_setup",
)
q0 = qubits[0]
# use emulation mode - no connection to instruments
use_emulation = True
# create and connect to a LabOne Q session
session = Session(device_setup)
session.connect(do_emulation=use_emulation)
Preparation¶
We define the readout pulse and integration kernel, as well as the mapping of quantum gates from which the RB sequence is constructed. We furthermore note the following:
The Clifford gates used for randomized benchmarking are defined in terms of
$\left\{\hat{I}, \hat{X}, \hat{Y},\hat{X}^{1/2}, \hat{Y}^{1/2}, \hat{X}^{-1/2}, \hat{Y}^{-1/2}\right\}$.
We will come back to them later, when defining which pulses to optimize.
We define RB sequences of a fixed length of
n_rb_sequence_length = 3
gates (plus a recovery gate) in this example. To avoid converging to spurious local minima, a larger number of sequences (samples) is used, each of the same length but composed of different Clifford gates. In this example we set
n_rb_samples = 128
for the number of samples and initialize a pseudo random number generator to compose the actual gate sequence in each sample.
# qubit readout pulse
readout_pulse = pulse_library.const(
uid="readout_pulse",
length=200e-9,
amplitude=0.8,
)
# integration weights for qubit measurement
integration_kernel = pulse_library.const(
uid="readout_weighting_function",
length=200e-9,
amplitude=1.0,
)
# define the set of quantum operations for randomized benchmarking
gate_map = make_pauli_gate_map(
pi_pulse_amp=0.8,
pi_half_pulse_amp=0.42,
excitation_length=64e-9,
pulse_factory=pulse_library.gaussian,
pulse_kwargs={"sigma": 1 / 3},
)
# length of each RB sequence, not including the recovery gate
n_rb_sequence_length = 3
# number of individual RB sequences in each pass
n_rb_samples = 128
# random number generator used to obtain the random sequence of RB samples
prng = np.random.default_rng(seed=42)
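To make the role of the pseudo random number generator concrete, the following standalone sketch draws random Clifford indices for each sample. The actual sequence composition, including the recovery gate, is handled internally by the `generate_play_rb_pulses` helper used below, so this cell is purely illustrative; the group size of 24 refers to the single-qubit Clifford group.

```python
import numpy as np

# Illustrative only: draw random Clifford indices for each RB sample.
# The actual sequence generation (including the recovery gate) is done
# by the generate_play_rb_pulses helper in the experiment below.
n_cliffords = 24  # size of the single-qubit Clifford group
demo_prng = np.random.default_rng(seed=42)
demo_samples = [
    demo_prng.integers(0, n_cliffords, size=3)  # 3 = n_rb_sequence_length
    for _ in range(128)  # 128 = n_rb_samples
]
print(len(demo_samples), demo_samples[0])
```

Because the generator is seeded, the same set of random sequences is reproduced on every run of the notebook.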
To perform optimization steps within a LabOne Q experiment, we implement an outer NT sweep. The values of the sweep parameter correspond to the index of each optimization step. This also requires setting a maximum number of iteration steps at this point, which we define as 12 for this example. Note that in actual experiments, a larger number of steps may be needed to reach convergence.
n_max_steps = 12
optimizer_sweep = LinearSweepParameter(start=0, stop=n_max_steps - 1, count=n_max_steps)
Randomized Benchmarking Experiment¶
We define the RB experiment as follows:
- An RT acquisition loop with cyclic averaging in state discrimination mode directly provides the average index of the measured qubit states. As we perform RB on a qubit starting in its ground state, we can use this quantity directly as the objective function value and minimize it, i.e. the closer this value is to 0, the higher the gate fidelities.
- The RB sequence samples are generated by a helper function using the options defined in the previous section.
- After the RB sequence, the qubit state is measured using the readout pulse and integration kernel defined above.
- The RT acquisition loop is embedded in an NT sweep over iteration indices. At the end of each sweep pass, a user callback function registered under the label
"next NT step"
is called. In the next section we will discuss the definition of this function in detail.
exp = Experiment(
signals=["drive", "measure", "acquire"],
)
with exp.sweep(
uid="optimizer_loop",
parameter=optimizer_sweep,
execution_type=ExecutionType.NEAR_TIME,
):
with exp.acquire_loop_rt(
uid="rb_shots",
count=16,
averaging_mode=AveragingMode.CYCLIC,
acquisition_type=AcquisitionType.DISCRIMINATION,
):
# generate multiple different RB sequences of the same length
for i in range(n_rb_samples):
# randomized benchmarking sample
with exp.section(
uid=f"rb_sample_{i}", play_after=f"rb_measure_{i-1}" if i > 0 else None
):
generate_play_rb_pulses(
exp=exp,
signal="drive",
seq_length=n_rb_sequence_length,
cliffords=clifford_parametrized,
gate_map=gate_map,
rng=prng,
)
# readout and data acquisition
with exp.section(uid=f"rb_measure_{i}", play_after=f"rb_sample_{i}"):
exp.measure(
measure_pulse=readout_pulse,
measure_signal="measure",
acquire_signal="acquire",
handle="rb_results",
integration_kernel=integration_kernel,
reset_delay=1.0e-7,
)
exp.reserve(signal="drive")
# next step: compute result, generate next optimizer step, apply new parameters
exp.call("next NT step", i=optimizer_sweep)
We map the experiment signals to the logical signals of the qubit used:
exp.map_signal("drive", q0.signals["drive"])
exp.map_signal("measure", q0.signals["measure"])
exp.map_signal("acquire", q0.signals["acquire"])
Optimization Parameters¶
From the gate_map
defined in the Section "Preparations", we can directly extract the pulses whose parameters we want to optimize.
We exclude the identity I
gate here.
pulses_to_optimize = {gate_map[k].uid: gate_map[k] for k in gate_map if k != "I"}
In this example, we want to optimize the value of pulse_parameters["sigma"]
for each of these pulses, respectively.
We extract the initial parameter values and also set their ranges.
x_0 = [pulses_to_optimize[k].pulse_parameters["sigma"] for k in pulses_to_optimize]
x_range = [(0.0, 0.4) for _ in x_0]
To update these parameters during the experiment, we update each pulse with its respective new parameter value and then replace the corresponding pulses in the experiment.
Note that updating other parameters, such as the pulse amplitude or even individual waveform samples, can be implemented analogously.
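As a sketch of the amplitude variant, assuming pulse objects with an `amplitude` attribute (here mocked by a small dataclass, since this cell does not touch the session):

```python
from dataclasses import dataclass

# Hypothetical stand-in for a pulse object; in the notebook, the loop body
# would additionally call session.replace_pulse(uid, pulse) to apply the change.
@dataclass
class DemoPulse:
    uid: str
    amplitude: float = 1.0

def amplitude_update(pulses):
    def f(x):
        # loop over pulse uids and new amplitude values
        for p, a in zip(pulses, x):
            pulses[p].amplitude = a
    return f

demo_pulses = {"x180": DemoPulse("x180"), "y90": DemoPulse("y90")}
set_amplitudes = amplitude_update(demo_pulses)
set_amplitudes([0.75, 0.40])
print(demo_pulses["x180"].amplitude)  # 0.75
```

The pulse UIDs and amplitude values here are illustrative; only the update pattern carries over to the experiment.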
Near-Time User Callback Function¶
Objective Function Value¶
We begin by extracting the measurement results at the end of each NT sweep. As each RB sample begins in state 0, we simply average over the measured state indices and use this quantity as objective function value. For other use cases this definition of the objective function value should be adapted.
In emulation mode, we will generate synthetic results that decrease during the course of the optimization.
def objective_function_value(use_emulation):
def f(session: Session, i):
if use_emulation:
# synthetic data decreasing with i
return 0.7 ** (4.0 * i)
else:
# return temporary result
return np.mean(session.results.acquired_results["rb_results"].data[i].real)
return f
Note that we use the average state discrimination result directly as the objective function value here. Any computational steps needed to evaluate more complex objective functions could be added to this function as well.
get_y = objective_function_value(use_emulation)
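For illustration, a more complex objective could, for example, average discriminated results over several acquisition handles; the handle names and the equal weighting below are assumptions for the sketch, not part of the experiment above.

```python
import numpy as np

# Hedged sketch: combine mean state indices from several result handles
# (e.g. RB sequences of different lengths) into one objective value.
def combined_objective(results_by_handle, weights=None):
    weights = weights or {h: 1.0 for h in results_by_handle}
    total = sum(
        weights[h] * np.mean(np.real(np.asarray(data)))
        for h, data in results_by_handle.items()
    )
    return total / sum(weights.values())

y = combined_objective({"rb_len_3": [0, 1, 0, 0], "rb_len_5": [1, 1, 0, 0]})
print(y)  # 0.375
```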
Optimization Step¶
The following function uses an Optimizer
instance from the scikit-optimize library to obtain new parameter values for the next optimization step.
def optimization_step(optimizer: Optimizer, x_0):
def f(i, y):
# set x as initial value x_0 or from previous optimization step
last_x = optimizer.ask() if i > 0 else x_0
# update optimizer with new parameter value and objective function result
optimizer.tell(last_x, y)
# ask optimizer for new parameter values and return new and last parameters
return optimizer.ask(), last_x
return f
Note that any optimizer class can be used here, as long as it supports interrupted operation via an ask-and-tell interface.
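For instance, a minimal random-search optimizer exposing the same ask-and-tell interface could look like this (a sketch, not a recommendation over the Bayesian optimizer used below):

```python
import random

# Minimal ask-and-tell optimizer: uniform random search within bounds.
class RandomSearchOptimizer:
    def __init__(self, dimensions, seed=0):
        self.dimensions = dimensions
        self._rng = random.Random(seed)
        self.best_x, self.best_y = None, float("inf")

    def ask(self):
        # propose a uniformly random point within the parameter bounds
        return [self._rng.uniform(lo, hi) for lo, hi in self.dimensions]

    def tell(self, x, y):
        # record the best observation seen so far
        if y < self.best_y:
            self.best_x, self.best_y = x, y

demo_opt = RandomSearchOptimizer([(0.0, 0.4)] * 2, seed=1)
x = demo_opt.ask()
demo_opt.tell(x, 0.1)
print(demo_opt.best_y)  # 0.1
```

One caveat: the `optimization_step` function above recovers the previous point by calling `ask()` again, which works with scikit-optimize because its `Optimizer` returns the same cached point until `tell()` is called. A drop-in replacement should preserve that behavior; the sketch above does not.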
# Instantiate an optimizer class supporting ask-and-tell interface
optimizer = Optimizer(
dimensions=x_range,
acq_func="EI",
acq_optimizer="sampling",
initial_point_generator="lhs",
)
# generate optimization step function
new_x = optimization_step(
optimizer=optimizer,
x_0=x_0,
)
Pulse Update¶
We define a collection of pulses as a template to apply new parameter values to. The waveforms can then be replaced in the session.
def pulse_update(pulses):
def f(session: Session, x):
# loop over pulse uids and parameter values
for p, s in zip(pulses, x):
# modify pulse
pulses[p].pulse_parameters["sigma"] = s
# assign modified pulse under the same uid
session.replace_pulse(p, pulses[p])
return f
We initialize this function with the previously extracted pulses.
set_x = pulse_update(pulses_to_optimize)
User Callback Function¶
We define the user callback function for the NT sweep from the three steps discussed above. Furthermore, the optimization progress is logged, and a message is displayed once convergence is reached.
def next_nt_step(session: Session, i: float, convergence_criteria=lambda y: y < 1.0e-6):
    # the optimization step index stems from the sweep parameter and needs to be converted to an integer
ii = int(i)
# evaluate new y from results
y = get_y(session, ii)
# obtain new and old x from optimizer
x, last_x = new_x(ii, y)
# update pulses with new x parameters
set_x(session, x)
# log optimization progress
if ii == 0:
        print(f"\n{'i': ^6}|{'y': ^12}|{'x': ^12}")
print(f"{ii: ^6}| {y:10.2G} |", ", ".join([f"{_:8.5f}" for _ in last_x]))
# convergence check
if convergence_criteria(y):
print(f"CONVERGED in iteration {ii}\n")
session.abort_execution()
return {"i": ii, "y": y, "x": last_x}
Finally, we register the near-time callback function with the LabOne Q session.
session.register_neartime_callback(
next_nt_step,
"next NT step",
)
Experiment Run¶
We can now execute the experiment. In emulation mode, convergence is reached after 10 optimization steps due to the chosen convergence criterion and the behavior of the synthetic objective function values.
my_results = session.run(exp)
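After the run, the per-iteration dictionaries returned by `next_nt_step` can be collected into a convergence trace. In LabOne Q, the callback return values are typically available from the results object (e.g. via `my_results.neartime_callback_results`; treat the exact attribute name as an assumption for your version). The post-processing itself is plain Python, sketched here on synthetic stand-in data:

```python
# Hedged sketch on synthetic stand-in data mimicking next_nt_step's return values.
steps = [{"i": i, "y": 0.7 ** (4.0 * i), "x": [0.33, 0.33]} for i in range(11)]
trace = [(s["i"], s["y"]) for s in steps]
best = min(steps, key=lambda s: s["y"])
print(best["i"])  # 10
```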