Qubit Tune-Up
This guide shows you how to use the experiment workflows in the Applications Library to perform single-qubit gate tune-up on an experimental setup containing a quantum processor made of superconducting transmon qubits.
Getting Started
We will start by defining our experimental setup, connecting to the LabOne Q Session, and creating a FolderStore to save our data.
But first, we import numpy, deepcopy, and laboneq.simple.
from copy import deepcopy
import numpy as np
from laboneq.simple import *
Define your experimental setup
Let's define our experimental setup. We will need:
a DeviceSetup
a set of TunableTransmonQubits
a set of TunableTransmonOperations
a QPU
Here, we will be brief. We will mainly provide the code to obtain these objects. To learn more, check out the dedicated tutorials in the Applications Library documentation.
We will use 3 TunableTransmonQubits in this guide. Change this number to the one describing your setup.
number_of_qubits = 3
DeviceSetup
This guide requires a setup that can drive and read out tunable transmon qubits. Your setup could contain an SHFQC+ instrument, or an SHFSG and an SHFQA instrument. Here, we will use an SHFQC+ with 6 signal generation channels and a PQSC.
If you have used LabOne Q before and already have a DeviceSetup for your setup, you can reuse that.
If you do not have a DeviceSetup, you can create one using the code below. Just change the device numbers to the ones in your rack and adjust any other input parameters as needed.
# Setting get_zsync=True below automatically detects the ZSync ports of the PQSC
# that are used by the other instruments in this descriptor.
# Here, we are not connected to instruments, so we set this flag to False.
from laboneq.contrib.example_helpers.generate_descriptor import generate_descriptor
descriptor = generate_descriptor(
    pqsc=["DEV10001"],
    shfqc_6=["DEV12001"],
    number_data_qubits=number_of_qubits,
    multiplex=True,
    number_multiplex=number_of_qubits,
    include_cr_lines=False,
    get_zsync=False,  # set to True when at a real setup
    ip_address="localhost",
)
setup = DeviceSetup.from_descriptor(descriptor, "localhost")
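As a quick sanity check of the generated DeviceSetup, you can list the logical signal groups it contains; each qubit defined in the descriptor should appear as one group (q0, q1, q2). This is just a minimal inspection sketch.
# List the logical signal groups created from the descriptor.
# Each group corresponds to one qubit (q0, q1, q2).
print(list(setup.logical_signal_groups.keys()))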
Qubits
We will generate 3 TunableTransmonQubits from the logical signal groups in our DeviceSetup. The names of the logical signal groups, q0, q1, q2, will be the UIDs of the qubits. Moreover, the qubits will have the same logical signal lines as the ones of the logical signal groups in the DeviceSetup.
from laboneq_applications.qpu_types.tunable_transmon import (
    TunableTransmonQubit,
)
qubits = TunableTransmonQubit.from_device_setup(setup)
for q in qubits:
    print("-------------")
    print("Qubit UID:", q.uid)
    print("Qubit logical signals:")
    for sig, lsg in q.signals.items():
        print(f" {sig:<10} ('{lsg:>10}')")
Configure the qubit parameters to reflect the properties of the qubits on your QPU using the following code:
for q in qubits:
    q.parameters.ge_drive_pulse["sigma"] = 0.25
    q.parameters.readout_amplitude = 0.5
    q.parameters.reset_delay_length = 200e-6
    q.parameters.readout_range_out = -25
    q.parameters.readout_lo_frequency = 7.4e9
qubits[0].parameters.drive_lo_frequency = 6.4e9
qubits[0].parameters.resonance_frequency_ge = 6.3e9
qubits[0].parameters.resonance_frequency_ef = 6.0e9
qubits[0].parameters.readout_resonator_frequency = 7.0e9
qubits[1].parameters.drive_lo_frequency = 6.4e9
qubits[1].parameters.resonance_frequency_ge = 6.5e9
qubits[1].parameters.resonance_frequency_ef = 6.3e9
qubits[1].parameters.readout_resonator_frequency = 7.3e9
qubits[2].parameters.drive_lo_frequency = 6.0e9
qubits[2].parameters.resonance_frequency_ge = 5.8e9
qubits[2].parameters.resonance_frequency_ef = 5.6e9
qubits[2].parameters.readout_resonator_frequency = 7.2e9
Quantum Operations
Create the set of TunableTransmonOperations:
from laboneq_applications.qpu_types.tunable_transmon import TunableTransmonOperations
qops = TunableTransmonOperations()
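To see which operations this set provides (for example, pi pulses and measurement operations), you can list their names. This is a minimal sketch and assumes the quantum-operations object exposes a dict-like keys() accessor.
# List the names of the quantum operations available for tunable transmon qubits.
# Assumes the dict-like keys() accessor of the quantum-operations object.
print(list(qops.keys()))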
QPU
Create the QPU object from the qubits and the quantum operations:
from laboneq.dsl.quantum import QPU
qpu = QPU(qubits, quantum_operations=qops)
Alternatively, load from a file
If you already have a DeviceSetup and a QPU stored in .json files, you can simply load them back using the code below:
from laboneq import serializers
setup = serializers.load(full_path_to_device_setup_file)
qpu = serializers.load(full_path_to_qpu_file)
qubits = qpu.qubits
qops = qpu.quantum_operations
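If you have not created such files yet, you can write the DeviceSetup and QPU defined above to disk with the save counterpart of the load call shown here; this is a minimal sketch, and the file names below are just placeholders.
# Save the DeviceSetup and QPU to .json files for later reuse.
# The file names are examples; choose paths that suit your project.
serializers.save(setup, "device_setup.json")
serializers.save(qpu, "qpu.json")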
Connect to Session
session = Session(setup)
session.connect(do_emulation=True) # do_emulation=False when at a real setup
Create a FolderStore for saving data
The experiment Workflows can automatically save the inputs and outputs of all their tasks to the folder path we specify when instantiating the FolderStore. Here, we choose the current working directory.
# The FolderStore is accessed via the `workflow` namespace of LabOne Q, which was
# imported from `laboneq.simple`.
from pathlib import Path
folder_store = workflow.logbook.FolderStore(Path.cwd())
We disable saving in this guide. To enable it, simply run folder_store.activate().
folder_store.deactivate()
Optional: Configure the LoggingStore
You can also activate/deactivate the LoggingStore, which is used for displaying the Workflow logging information in the notebook; see again the tutorial on Recording Experiment Workflow Results for details.
Displaying the Workflow logging information is activated by default, but here we deactivate it to shorten the outputs, which are not very meaningful in emulation mode.
We recommend that you do not deactivate the Workflow logging in practice.
from laboneq.workflow.logbook import LoggingStore
logging_store = LoggingStore()
logging_store.deactivate()
Single-qubit gate tune-up
Let's now proceed to calibrate our qubits using the experiment workflows from the modules imported below:
from laboneq_applications.experiments import (
    amplitude_rabi,
    drag_q_scaling,
    echo,
    lifetime_measurement,
    qubit_spectroscopy,
    ramsey,
    resonator_spectroscopy,
)
To learn more about what each of these experiments does, check out our experiment how-to guides.
To learn more about experiment Workflows in general and what you can do with them, check out this tutorial.
To learn how to write your own experiment Workflow, check out this tutorial.
To learn more about Workflow, Task, and options, look here.
Note: all the analysis results including the plots will be saved into the folder you have passed to the FolderStore (if the FolderStore is activated). You can also configure each experiment Workflow to display the plots in this notebook by setting options.close_figures(False). We do not do this here because the data and the plots are meaningless in emulation mode.
Note: we will run all the experiments in this notebook with the setting options.update(True). This means that the relevant qubit parameters will be updated to the values extracted from the analysis (for example, the ge_drive_amplitude_pi parameter in an amplitude Rabi experiment). If you're not happy with the new values or you've updated by mistake, you can revert to the original values from before the start of the experiment using the code
experiment_module.update_qubits(qpu, workflow_result.tasks["analysis_workflow"].output["old_parameter_values"])
where experiment_module is one of the experiment modules imported above.
Similarly, in case you've run your experiment with the update option set to False but would still like to update your values, use the code
experiment_module.update_qubits(qpu, workflow_result.tasks["analysis_workflow"].output["new_parameter_values"])
Resonator Spectroscopy
options = resonator_spectroscopy.experiment_workflow.options()
options.update(True) # updates the qubit parameter "readout_resonator_frequency"
# The resonator spectroscopy can only be done on one qubit at a time
qubit_to_measure = qubits[0]
frequencies = qubit_to_measure.parameters.readout_resonator_frequency + np.linspace(-30e6, 30e6, 101)
exp_workflow = resonator_spectroscopy.experiment_workflow(
    session=session,
    qpu=qpu,
    qubit=qubit_to_measure,
    frequencies=frequencies,
    options=options
)
workflow_result = exp_workflow.run()
Check the updated value of the qubit parameter:
qubit_to_measure.parameters.readout_resonator_frequency
Qubit Spectroscopy
options = qubit_spectroscopy.experiment_workflow.options()
options.count(4096)
options.update(True) # updates the qubit parameter "resonance_frequency_ge"
qubits_to_measure = qubits
temporary_parameters = {}
for q in qubits_to_measure:
    temp_pars = deepcopy(q.parameters)
    temp_pars.drive_range = -30
    temp_pars.spectroscopy_amplitude = 1
    temporary_parameters[q.uid] = temp_pars
frequencies = [
    q.parameters.resonance_frequency_ge + np.linspace(-20e6, 20e6, 201)
    for q in qubits_to_measure
]
exp_workflow = qubit_spectroscopy.experiment_workflow(
    session=session,
    qpu=qpu,
    qubits=qubits_to_measure,
    temporary_parameters=temporary_parameters,
    frequencies=frequencies,
    options=options
)
workflow_result = exp_workflow.run()
Check the updated value of the qubit parameter:
[q.parameters.resonance_frequency_ge for q in qubits]
Amplitude Rabi
options = amplitude_rabi.experiment_workflow.options()
# updates the qubit parameters "ge_drive_amplitude_pi" and "ge_drive_amplitude_pi2"
options.update(True)
transition_to_calibrate = "ge"
options.transition(transition_to_calibrate)
options.cal_states(transition_to_calibrate)
qubits_to_measure = qubits
exp_workflow = amplitude_rabi.experiment_workflow(
    session=session,
    qpu=qpu,
    qubits=qubits_to_measure,
    amplitudes=[np.linspace(0, 1, 21) for q in qubits_to_measure],
    options=options
)
workflow_result = exp_workflow.run()
Note that the fit fails for this measurement in emulation mode, so the new qubit parameters were not extracted.
[(q.parameters.ge_drive_amplitude_pi, q.parameters.ge_drive_amplitude_pi2) for q in qubits]
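Whether or not the update succeeded, you can inspect the values that the analysis workflow extracted directly from its output, using the same output fields referenced in the note above. This is a minimal sketch; in emulation mode these entries may be empty because the fits fail.
# Inspect the parameter values recorded by the analysis workflow.
analysis_output = workflow_result.tasks["analysis_workflow"].output
print(analysis_output["new_parameter_values"])
print(analysis_output["old_parameter_values"])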
Ramsey
options = ramsey.experiment_workflow.options()
# updates the qubit parameters "resonance_frequency_ge" and "ge_T2_star"
options.update(True)
transition_to_calibrate = "ge"
options.transition(transition_to_calibrate)
options.cal_states(transition_to_calibrate)
# Note: if neighbouring qubits are physically coupled by a resonator,
# you usually don't want to run Ramsey in parallel on them because
# your result will be skewed by strong residual-ZZ coupling.
# Next-nearest neighbours are typically okay.
qubits_to_measure = [qubits[0], qubits[2]]
delays = [
    np.linspace(0, 1e-6, 51)
    if transition_to_calibrate == "ef"
    else np.linspace(0, 20e-6, 51)
    for q in qubits_to_measure
]
detunings = [
    11.76e6 if transition_to_calibrate == "ef" else 0.673e6
    for q in qubits_to_measure
]
exp_workflow = ramsey.experiment_workflow(
    session=session,
    qpu=qpu,
    qubits=qubits_to_measure,
    delays=delays,
    detunings=detunings,
    options=options
)
workflow_result = exp_workflow.run()
Check the updated values of the qubit parameters:
[q.parameters.resonance_frequency_ge for q in qubits]
[q.parameters.ge_T2_star * 1e6 for q in qubits]
DRAG Calibration
options = drag_q_scaling.experiment_workflow.options()
options.update(True) # updates the qubit parameter 'ge_drive_pulse["beta"]'
transition_to_calibrate = "ge"
options.transition(transition_to_calibrate)
options.cal_states(transition_to_calibrate)
qubits_to_measure = qubits
exp_workflow = drag_q_scaling.experiment_workflow(
    session=session,
    qpu=qpu,
    qubits=qubits_to_measure,
    q_scalings=[np.linspace(-0.03, 0.03, 11) for _ in qubits_to_measure],
    options=options
)
workflow_result = exp_workflow.run()
Note that the fit fails for this measurement in emulation mode, so the new qubit parameters were not extracted.
[q.parameters.ge_drive_pulse["beta"] for q in qubits]
T1
options = lifetime_measurement.experiment_workflow.options()
options.update(True) # updates the qubit parameter "ge_T1"
qubits_to_measure = qubits
exp_workflow = lifetime_measurement.experiment_workflow(
    session=session,
    qpu=qpu,
    qubits=qubits_to_measure,
    delays=[np.linspace(0, 100e-6, 50) for q in qubits_to_measure],
    options=options
)
workflow_result = exp_workflow.run()
Check the updated value of the qubit parameter:
[q.parameters.ge_T1 * 1e6 for q in qubits]
Echo
options = echo.experiment_workflow.options()
options.update(True) # updates the qubit parameter "ge_T2"
# Note: if neighbouring qubits are physically coupled by a resonator,
# you usually don't want to run Echo in parallel on them because
# your result will be skewed by strong residual-ZZ coupling.
# Next-nearest neighbours are typically okay.
qubits_to_measure = [qubits[0], qubits[2]]
exp_workflow = echo.experiment_workflow(
    session=session,
    qpu=qpu,
    qubits=qubits_to_measure,
    delays=[np.linspace(0, 100e-6, 50) for q in qubits_to_measure],
    options=options
)
workflow_result = exp_workflow.run()
Check the updated value of the qubit parameter:
[q.parameters.ge_T2 * 1e6 for q in qubits]
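Finally, you may find it convenient to print a short summary of the calibrated single-qubit parameters for each qubit. This is a minimal sketch using only parameters that appear in the experiments above.
# Summary of the tuned-up single-qubit gate parameters for each qubit.
for q in qubits:
    print(
        q.uid,
        q.parameters.resonance_frequency_ge,
        q.parameters.ge_drive_amplitude_pi,
        q.parameters.ge_T1,
        q.parameters.ge_T2,
    )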