Experiment Workflows¶
This tutorial shows how to use the experiments in the Applications Library, which are all implemented using the Workflow objects of LabOne Q.
A Workflow is a collection of logically connected Tasks or other workflows whose inputs and outputs depend on each other. The parent Workflow automatically distributes options to all its Tasks and saves their inputs and outputs. To learn more about Tasks, check out the tutorial on using Tasks in LabOne Q.
When instantiated, a function decorated with @workflow builds a graph of tasks that will be executed later. This graph may be inspected. The graph of tasks is not executed directly by Python, but by a workflow engine provided by LabOne Q. To learn more about workflows, tasks, options, and the saving functionality of workflows, check out the tutorials in the LabOne Q core manual.
Experiment Workflows have the standard tasks shown in the image below:
Let's see what these tasks are:

- create_experiment for creating the experimental pulse sequence as an instance of the LabOne Q Experiment class. This task is typically unique for every experiment.
- compile_experiment for compiling the Experiment returned by create_experiment.
- run_experiment for running the CompiledExperiment returned by compile_experiment.
- analysis_workflow for running the analysis on the RunExperimentResults returned by run_experiment.
- update_qubits for updating the relevant qubit parameters with the values found in the analysis_workflow.
The Tasks compile_experiment, run_experiment, and update_qubits can be used for all experiments, because they are independent of the details of the experiment being implemented. create_experiment and analysis_workflow typically need to be implemented for every experiment.
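To make this structure concrete, here is a minimal sketch of how these standard tasks might compose into an experiment Workflow. This is an illustration, not the Applications Library implementation: create_my_experiment is a hypothetical experiment-specific task, and the import path of the shared tasks (laboneq.workflow.tasks) is an assumption based on recent LabOne Q versions.
# A minimal sketch (not the Applications Library implementation) of how
# the standard tasks compose into an experiment workflow.
from laboneq import workflow

# Assumption: the shared tasks are importable from laboneq.workflow.tasks.
from laboneq.workflow.tasks import compile_experiment, run_experiment


@workflow.task
def create_my_experiment(qpu, qubit, delays):
    # Hypothetical experiment-specific task: build and return a LabOne Q
    # Experiment for the given qubit and sweep points.
    ...


@workflow.workflow
def my_experiment_workflow(session, qpu, qubit, delays, analyze=True):
    exp = create_my_experiment(qpu, qubit, delays)
    compiled_exp = compile_experiment(session, exp)
    result = run_experiment(session, compiled_exp)
    with workflow.if_(analyze):
        # Conditional logic inside a workflow uses workflow.if_.
        ...
    workflow.return_(result)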
Experiment Workflows also have a few standard input parameters:

- session: a LabOne Q Session.
- qpu: a QPU object containing the most up-to-date knowledge about the parameters of the quantum processor.
- qubits: the list of qubit instances on the qpu on which to run the experiment.
- (the sweep points, if relevant)
- temporary_parameters: for temporarily overwriting the qubit parameters during the execution of the experiment.
- options: an instance of WorkflowOptions.
Let's look at all of this in more detail.
Create a device setup and session¶
First, we create a LabOne Q DeviceSetup, and 6 TunableTransmonQubits and their corresponding TunableTransmonOperations, using the demo QuantumPlatform provided by the Applications Library for running in emulation mode. See the Getting Started tutorial for more details about the QuantumPlatform and how to create your experimental setup and prepare it for running experiments.
import numpy as np
from laboneq.core.exceptions import LabOneQException
from laboneq.simple import *
from laboneq_applications.qpu_types.tunable_transmon import demo_platform
# Create a demonstration QuantumPlatform for a tunable-transmon QPU:
qt_platform = demo_platform(n_qubits=6)
# The platform contains a setup, which is an ordinary LabOne Q DeviceSetup:
setup = qt_platform.setup
# And a tunable-transmon QPU:
qpu = qt_platform.qpu
# Inside the QPU, we have qubits, which is a list of six LabOne Q Application
# Library TunableTransmonQubit qubits:
qubits = qpu.qubits
session = Session(setup)
session.connect(do_emulation=True)
Create a FolderStore for Saving Data¶
The experiment Workflows can automatically save the inputs and outputs of all their tasks to the folder path we specify when instantiating the FolderStore. Here, we choose the current working directory.
# import FolderStore from the `workflow` namespace of LabOne Q, which was imported
# from `laboneq.simple`
from pathlib import Path
folder_store = workflow.logbook.FolderStore(Path.cwd())
We disable saving in this tutorial. To enable it, simply run folder_store.activate().
folder_store.deactivate()
Optional: Configure the LoggingStore¶
You can also activate/deactivate the LoggingStore, which is used for displaying the Workflow logging information in the notebook; see again the tutorial on Recording Experiment Workflow Results for details.
Displaying the Workflow logging information is activated by default, but here we deactivate it to shorten the outputs, which are not very meaningful in emulation mode.
We recommend that you do not deactivate the Workflow logging in practice.
from laboneq.workflow.logbook import LoggingStore
logging_store = LoggingStore()
logging_store.deactivate()
Inspect an experiment Workflow¶
Let's start by inspecting the experiment Workflow for the Ramsey experiment.
from laboneq_applications.experiments import ramsey
Inspect the source code of the ramsey Workflow to see that the tasks follow the standard structure and logic of experiment workflows shown above. Notice that the workflow uses special constructions for conditional logic (with workflow.if_(condition)). Have a look at the Workflow syntax tutorial to learn more about the syntax used by Workflows.
ramsey.experiment_workflow.src
Instantiate the experiment Workflow¶
Let's instantiate the ramsey Workflow for a single qubit.
Note: instantiating the Workflow does not run it. Instantiation only resolves the dependencies of the tasks within the workflow.
experiment_workflow = ramsey.experiment_workflow(
session=session,
qpu=qpu,
qubits=qubits[0],
delays=np.linspace(0, 20e-6, 51),
detunings=0.67e6,
)
Inspect the tree display of the built dependency graph:
experiment_workflow.graph.tree
Run the experiment Workflow¶
To execute the experiment Workflow, we call its run() method:
workflow_result = experiment_workflow.run()
workflow_result.tasks
Inspect an executed experiment Workflow¶
Now that the Workflow has run, we can inspect its inputs and outputs, as well as the inputs and outputs of all its tasks.
Workflow inputs¶
Let's first inspect the input parameters of the ramsey Workflow:
workflow_result.input
Workflow tasks¶
Inspect the tasks of the Workflow. Notice that the update_qubits task does not appear in this task list. This is because the updating functionality is disabled by default. We will see later how to enable it using the options.
for t in workflow_result.tasks:
print(t)
Inspect the source code of the create_experiment task to see how the experiment pulse sequence was created:
workflow_result.tasks["compile_experiment"].src
workflow_result.tasks["compile_experiment"]
workflow_result.tasks["create_experiment"].src
The LabOne Q Experiment object returned by the create_experiment task is found in the output of this task:
workflow_result.tasks["create_experiment"].output
Inspect the pulse sequence using plot_simulation and the LabOne Q CompiledExperiment object returned by the compile_experiment task:
from laboneq.contrib.example_helpers.plotting.plot_helpers import plot_simulation
plot_simulation(
workflow_result.tasks["compile_experiment"].output,
signal_names_to_show=["drive", "measure"],
start_time=0,
length=50e-6,
)
Workflow output - acquired data¶
Inspect the RunExperimentResults containing the acquired data. The RunExperimentResults can be accessed either from the output of the Workflow or from the output of the run_experiment task:
acquired_data = workflow_result.output
acquired_data
workflow_result.tasks["run_experiment"].output
The information in the RunExperimentResults object can be accessed both via standard Python dictionary notation and via dot notation, at any level of the nested structure:
acquired_data.q0.result
acquired_data["q0"].result
Analysis Workflow¶
Let's also inspect the Ramsey analysis Workflow executed as part of the experiment Workflow. First, let's look at the source code. The Ramsey analysis workflow contains the following tasks:

- calculate_qubit_population for interpreting the raw data into qubit population.
- fit_data for fitting an exponentially decaying cosine model to the qubit population as a function of the delay times.
- extract_qubit_parameters for extracting the new qubit frequency and the $T_2^*$ value from the exponentially decaying cosine fit.
- plot_raw_complex_data_1d for plotting the raw data.
- plot_population for plotting the qubit population and the fit results.
ramsey.analysis_workflow.src
Let's check that these tasks were actually run in the analysis workflow:
analysis_workflow_results = workflow_result.tasks["analysis_workflow"]
for t in analysis_workflow_results.tasks:
print(t)
All the inputs and outputs of these tasks can be inspected. For example, let's get back the fit results returned by the fit_data task and the final Ramsey figures returned by the plot_population task:
fit_results_per_qubit = analysis_workflow_results.tasks["fit_data"].output
ramsey_figures_per_qubit = analysis_workflow_results.tasks["plot_population"].output
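As a quick sketch of what can be done with these outputs: in the Applications Library analyses, the fit results are typically lmfit model results keyed by qubit UID, so the best-fit parameter values might be inspected as follows (treat the fit-object type as an assumption; best_values is lmfit's attribute).
# A sketch, assuming fit_results_per_qubit maps qubit UIDs to lmfit
# ModelResult objects; best_values is lmfit's dictionary of best-fit
# parameter values.
for qubit_uid, fit_result in fit_results_per_qubit.items():
    print(qubit_uid, fit_result.best_values)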
We can access the qubit parameters extracted by the analysis from the output of the analysis workflow. Notice that the analysis workflow collects both the original qubit parameters with which the experiment was run (old_parameter_values) and the new ones extracted from the analysis (new_parameter_values).
from pprint import pprint
qubit_parameters = analysis_workflow_results.output
pprint(qubit_parameters) # noqa: T203
Manually Updating the Qubit Parameters¶
The run above did not update the qubit parameters with the values in qubit_parameters["new_parameter_values"] because updating is disabled by default (we will see in the next section how to enable it via the experiment-workflow options). We can check this by inspecting the resonance_frequency_ge parameter of the qubit, which will still have the original value collected by the analysis in qubit_parameters["old_parameter_values"]:
qubits[0].parameters.resonance_frequency_ge
In practice, we sometimes want to disable automatic updating if we are not sure that the experiment runs correctly. In this case, we can still update the qubit parameters manually after the experiment has run, using the update_qubits task:
ramsey.update_qubits(qpu, qubit_parameters["new_parameter_values"])
Similarly, if we had accidentally updated our qubit parameters during the experiment run, we can revert them using the same task and old_parameter_values:
ramsey.update_qubits(qpu, qubit_parameters["old_parameter_values"])
Change the options¶
We can change the options of the ramsey experiment Workflow by using the options feature of Workflows (see the Options tutorial in the LabOne Q core manual for more details).
Let's start by creating the Workflow options:
options = ramsey.experiment_workflow.options()
options
Using workflow.show_fields, you can also read a description of each of the options fields, as well as their default values and the tasks that use them within the Ramsey experiment workflow.
workflow.show_fields(options)
Note that the experiments in the Applications Library collect the acquired data in an instance of the new results class, RunExperimentResults. To return an instance of the standard LabOne Q Results, you can set options.return_legacy_results(True).
Here, we specify new values for some of our options. Note that below, we are changing the value of these options fields for all the tasks inside the Ramsey workflow. To change the options for only a subset of the tasks, see the Options tutorial in the LabOne Q core manual.
options.count(2048) # change the counts
options.use_cal_traces(False) # remove the calibration traces
options.update(True) # the experiment workflow updates the qubit frequency
# and T2_star time with the new values from the analysis
Inspect the current value of an options field:
options.count
Run the Workflow with these options. Here, we also run the Ramsey experiment on all 6 qubits in parallel.
ramsey_workflow_result_options = ramsey.experiment_workflow(
session=session,
qpu=qpu,
qubits=qubits,
delays=[np.linspace(0, 20e-6, 51) for q in qubits],
detunings=[0.67e6 for q in qubits],
options=options, # pass the options
).run()
If we inspect the simulated pulse sequence, we'll notice that the pulses are executed in parallel on all the qubits in the experiment and that the calibration traces are no longer there.
from laboneq.contrib.example_helpers.plotting.plot_helpers import plot_simulation
plot_simulation(
ramsey_workflow_result_options.tasks["compile_experiment"].output,
signal_names_to_show=["drive"],
start_time=0,
length=50e-6,
)
Qubits with temporarily modified parameters¶
The qubits inside the qpu contain the source of ground truth for an experiment and the best state of knowledge of the quantum system being operated. This means that the parameters of the qubits, and any other parameters of the QPU, define the configuration used by all the experiments in the Applications Library.
It is possible to run an experiment workflow using qubits with temporarily modified parameters. This is useful for testing or debugging purposes. To do this, we first clone the parameters of the qubits and then modify the ones we want. The cloned parameters are then passed to the experiment workflow.
Let's run the Ramsey experiment workflow with a set of temporary qubit parameters.
from copy import deepcopy
temporary_parameters = deepcopy(qubits[0].parameters)
temporary_parameters.ge_drive_length = 1000e-9 # 51ns in the original qubits
result_unmodified = ramsey.experiment_workflow(
session=session,
qpu=qpu,
qubits=qubits[0],
delays=np.linspace(0, 20e-6, 51),
detunings=0.67e6,
).run()
result_modified = ramsey.experiment_workflow(
session=session,
qpu=qpu,
qubits=qubits[0],
temporary_parameters={
qubits[0].uid: temporary_parameters
}, # pass temporary parameters
delays=np.linspace(0, 10e-6, 51),
detunings=1e6,
).run()
# compare the two pulse sequences
from laboneq.contrib.example_helpers.plotting.plot_helpers import plot_simulation
plot_simulation(
result_unmodified.tasks["compile_experiment"].output,
signal_names_to_show=["drive", "measure"],
start_time=0,
length=5e-6,
)
plot_simulation(
result_modified.tasks["compile_experiment"].output,
signal_names_to_show=["drive", "measure"],
start_time=0,
length=5e-6,
)
Debugging experiment Workflows¶
Inspect after an error¶
If an error occurs during the execution of the experiment Workflow, we can inspect the tasks that have run up to the task that produced the error using recover(). This is particularly useful to inspect the experiment pulse sequence in case of a compilation or measurement error.
Let's introduce a run-time error by exceeding the waveform memory.
# here we catch the exception so that the notebook can keep executing
try:
ramsey_result_error = ramsey.experiment_workflow(
session=session,
qpu=qpu,
qubits=qubits[0],
delays=np.linspace(0, 50e-6, 10001),
detunings=0.67e6,
).run()
except LabOneQException as e:
print("ERROR: ", e)
ramsey_result_error = ramsey.experiment_workflow.recover()
for t in ramsey_result_error.tasks:
print(t)
Inspect the experiment section tree by calling:
ramsey_result_error.tasks["create_experiment"].output
Run until a task¶
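You can also execute an experiment Workflow only up to a given task by passing the until argument to the run() method. Here, we run the Ramsey Workflow only until compile_experiment; the subsequent tasks, including the analysis, are not executed: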
ramsey_result_partial = ramsey.experiment_workflow(
session=session,
qpu=qpu,
qubits=qubits[0],
delays=np.linspace(0, 50e-6, 50),
detunings=0.67e6,
).run(until="compile_experiment")
for task in ramsey_result_partial.tasks:
print(task)
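Since the partial run stops after compile_experiment, its output (the CompiledExperiment) can be inspected before committing to a measurement, for example with plot_simulation as above:
# Inspect the compiled pulse sequence produced by the partial run:
plot_simulation(
    ramsey_result_partial.tasks["compile_experiment"].output,
    signal_names_to_show=["drive", "measure"],
    start_time=0,
    length=50e-6,
)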