Recording Experiment Workflow Results¶
While running an experiment workflow, you will usually want to keep a record of what took place -- a kind of digital lab book. LabOne Q provides workflow logbooks for just this task.
Each workflow run creates its own logbook. The logbook records the tasks being run and may also be used to store additional data such as device settings, LabOne Q experiments, qubits, and the results of experiments and analyses.
Logbooks need to be stored somewhere, and within the Applications Library, this place is called a logbook store.
Currently the Applications Library supports two kinds of stores:
- The FolderStore, which writes logbooks to a folder on disk. It is used to keep a permanent record of the experiment workflow.
- The LoggingStore, which logs what is happening using Python's logging. It provides a quick overview of the steps performed by a workflow.
We'll look at each of these in more detail shortly, but first let us set up a quantum platform to run some experiments on so we have something to record.
Setting up a quantum platform¶
Build your LabOne Q DeviceSetup, qubits and Session as normal. Here we import a demonstration tunable transmon quantum platform from the library and the amplitude Rabi experiment:
import numpy as np
from laboneq.simple import *
from laboneq_applications.experiments import amplitude_rabi
from laboneq_applications.qpu_types.tunable_transmon import demo_platform
# Create a demonstration QuantumPlatform for a tunable-transmon QPU:
qt_platform = demo_platform(n_qubits=6)
# The platform contains a setup, which is an ordinary LabOne Q DeviceSetup:
setup = qt_platform.setup
# And a tunable-transmon QPU:
qpu = qt_platform.qpu
# Inside the QPU, we have the quantum elements: a list of six LabOne Q Applications
# Library TunableTransmonQubit qubits:
qubits = qpu.quantum_elements
session = Session(setup)
session.connect(do_emulation=True)
The LoggingStore¶
When you import the laboneq_applications library it automatically creates a default LoggingStore for you. This logging store is used whenever a workflow is executed and logs information about:
- the start and end of workflows
- the start and end of tasks
- any errors that occur
- comments (ad hoc messages from tasks, more on these later)
- any data files that would be saved if a folder store was in use (more on these later too)
These logs aren't saved to disk, but they let us see which events are recorded and what would be saved if we did have a folder store active.
An example of logging¶
Let's run the amplitude Rabi experiment and take a look:
amplitudes = np.linspace(0.0, 0.9, 10)
options = amplitude_rabi.experiment_workflow.options()
options.count(10)
options.averaging_mode(AveragingMode.CYCLIC)
rabi_tb = amplitude_rabi.experiment_workflow(
session,
qpu,
qubits[0],
amplitudes,
options=options,
)
The workflow has not yet been executed, but when you run the next cell, you should see messages like:
──────────────────────────────────────────────────────────────────────────────
Workflow 'amplitude_rabi': execution started
──────────────────────────────────────────────────────────────────────────────
appear in the logs beneath the cell.
result = rabi_tb.run()
And that's all there is to the basic logging functionality.
Advanced logging uses¶
If you need to create a logging store of your own you can do so as follows:
from laboneq.workflow.logbook import LoggingStore
logging_store = LoggingStore()
The logging store created above won't be active unless you run:
logging_store.activate()
And you deactivate it with:
logging_store.deactivate()
You can access the default logging store by importing it from laboneq.workflow.logbook:
from laboneq.workflow.logbook import DEFAULT_LOGGING_STORE
DEFAULT_LOGGING_STORE
You can also inspect all the active logbook stores:
from laboneq.workflow.logbook import active_logbook_stores
active_logbook_stores()
The FolderStore¶
Using the folder store¶
The FolderStore saves workflow results on disk and is likely the most important logbook store you'll use.
You can import it as follows:
from laboneq.workflow.logbook import FolderStore
To create a folder store you'll need to pick a folder on disk to store logbooks in. Here we select ./experiment_store as the folder name, but you should pick your own.
Within the folder, logbooks are organized into folders by date and then stored in sub-folders whose names start with a timestamp followed by the name of the workflow. For example, the folder layout might look as follows:
experiment_store
└── 20240728
└── 20240728T175500-amplitude-rabi
└── ...
└── 20240728T175900-amplitude-rabi
└── ...
Each logbook created by a workflow has its own sub-folder. If necessary, a counter is appended to make the sub-folder name unique.
Timestamps are in local time.
The folder store will need to be activated before workflows will use it automatically.
folder_store = FolderStore("./experiment_store")
folder_store.activate()
Now let's run the amplitude Rabi workflow. As before we'll see the task events being logged. Afterwards we'll explore the folder to see what has been written to disk.
result = rabi_tb.run()
If you no longer wish to automatically store workflow results in the folder store, you can deactivate it with:
folder_store.deactivate()
Exploring what was written to disk¶
Here we will use Python's pathlib functionality to explore what has been written to disk, but you can also use whatever ordinary tools you prefer (terminal, file navigator).
import json
from pathlib import Path
Remember that above we requested that the folder store use a folder named experiment_store. Let's list the logbooks that were created in that folder:
store_folder = Path("experiment_store")
amplitude_rabi_folders = sorted(store_folder.glob("*/*-amplitude-rabi"))
Our amplitude Rabi experiment is the most recent one run, so let's look at the files within the most recent folder. Note that the logbook folder names start with a timestamp followed by the name of the workflow, which allows us to easily order them by time and to find the workflow we're looking for:
amplitude_rabi_folder = amplitude_rabi_folders[-1]
amplitude_rabi_files = sorted(amplitude_rabi_folder.iterdir())
amplitude_rabi_files
Let us look at the file log.jsonl. This is the log of what took place. The log is stored in a format called "JSONL", which means each line of the log is a simple Python dictionary stored as JSON. Larger objects and certain types of data are stored as separate files in the same folder.
Let's open the file and list the logs:
experiment_log = amplitude_rabi_folder / "log.jsonl"
logs = [json.loads(line) for line in experiment_log.read_text().splitlines()]
logs
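Each entry is a plain Python dictionary, so you can explore it with ordinary Python. For example, to get a quick overview of which fields appear across the log entries (a small convenience snippet, not part of the library):
# Collect the set of keys used across all log entries:
sorted({key for entry in logs for key in entry})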
Workflow timestamps and names¶
The timestamps and workflow names used by the folder store can be accessed via the execution_info utility function inside a task, as follows:
from laboneq.workflow import (
execution_info,
task,
workflow,
)
@task
def folder_logger_timestamp_and_workflow_name():
    info = execution_info()  # Returns a WorkflowExecutionInfoView object
    return {"workflow": info.workflows[0], "start_time": info.start_time}

@workflow
def timestamp_and_name_workflow():
    folder_logger_timestamp_and_workflow_name()

wf = timestamp_and_name_workflow()
result = wf.run()
print(result.tasks["folder_logger_timestamp_and_workflow_name"].output)
The output of execution_info() has two attributes:
- .workflows: a list of the active workflow names, where the outermost workflow is the first element and the innermost workflow is the last element.
- .start_time: a datetime.datetime object specifying the start time of the outermost workflow.
If the task is not called from within a workflow execution context, .start_time will be None and .workflows will be an empty list.
The folder store uses this same information when creating its sub-folder names. The timestamp strings are generated with the strftime format YYYYMMDDTHHMMSS (i.e., strftime("%Y%m%dT%H%M%S")) after converting the start time from UTC to local time.
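For illustration, the sub-folder timestamp could be reproduced from this information roughly as follows. This is a sketch only: the task and workflow names are ours, and we assume that a naive start_time is in UTC, as described above.
from datetime import timezone
from laboneq.workflow import execution_info, task, workflow

@task
def folder_style_timestamp():
    info = execution_info()
    start = info.start_time
    if start.tzinfo is None:
        # Assumption: a naive start_time is in UTC.
        start = start.replace(tzinfo=timezone.utc)
    # Apply the same strftime format the folder store uses for its
    # sub-folder names, after converting to local time.
    return start.astimezone().strftime("%Y%m%dT%H%M%S")

@workflow
def folder_style_timestamp_workflow():
    folder_style_timestamp()

result = folder_style_timestamp_workflow().run()
print(result.tasks["folder_style_timestamp"].output)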
Loading back data from a file¶
Currently, the FolderStore cannot be used to load back data from a saved file. The intention is that the FolderStore saves data in standard formats which can be loaded by other standard tools.
For example, to load back a LabOne Q object saved by a Workflow, the standard tool would be the LabOne Q serializer's load function:
from laboneq import serializers
my_object = serializers.load(path_to_file)
Here, path_to_file is the full path to the data file.
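For artifacts saved in other standard formats, the matching standard tools apply. Here is a sketch; the folder and file names below are hypothetical, so check your own logbook folder for the actual names:
import json
import numpy as np
from pathlib import Path

# Hypothetical logbook folder and file names:
logbook_folder = Path("experiment_store/20240728/20240728T175500-amplitude-rabi")
sweep_points = np.load(logbook_folder / "sweep_points.npy")   # a saved numpy array
with open(logbook_folder / "fit_summary.json") as f:          # a saved JSON artifact
    fit_summary = json.load(f)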
How the folder store saves data¶
When saving task or workflow input or output values, the folder store first checks whether the value is a simple type that can be stored directly in the log file, i.e. log.jsonl.
Types that are considered simple are:
- None, int, float, bool -- stored as is.
- complex -- stored as {"real": obj.real, "imag": obj.imag}.
- str -- stored as is if len(obj) <= 1000, otherwise not stored.
- datetime -- stored as a UTC timestamp.
- date -- stored as str(obj).
- list -- stored if len(obj) <= 10 and all elements are considered simple, otherwise not stored.
- dict -- stored if len(obj) <= 10, all keys are strings and all values are considered simple, otherwise not stored.
- tuple -- stored as for dict if obj is a namedtuple, otherwise stored as for list.
- laboneq.dsl.session.Session -- explicitly marked as not to be serialized, since the Session object is stateful (it holds a connection to the controlled device).
If the type is not considered simple, it is saved as a file using the folder store serializer (see below) and a reference to the file is saved in the log. A file reference looks like {"filename": ..., "description": ...}
where the filename specifies the path to the file relative to the folder the logbook is being saved in.
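As an illustration, here is a small sketch (the task and workflow names are ours, not part of the library) of two tasks: one whose output consists only of simple values, which would be recorded directly in log.jsonl, and one whose output is a NumPy array, which is not a simple type and would instead be saved as a separate file and referenced from the log:
import numpy as np
from laboneq.workflow import task, workflow

@task
def simple_output():
    # A small dictionary of simple values: recorded directly in log.jsonl.
    return {"count": 10, "frequency": 6.5e9, "drive": 0.5 + 0.1j}

@task
def array_output():
    # A numpy.ndarray is not a simple type: it is saved via the folder
    # store serializer (as an .npy file) and referenced from the log.
    return np.linspace(0.0, 1.0, 101)

@workflow
def demo_output_saving():
    simple_output()
    array_output()

result = demo_output_saving().run()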
The folder store serializer¶
If a task or workflow input or output value is not considered simple, or an object is stored directly using save_artifact
(see Store data from within tasks), then it is saved to disk using the folder store serializer.
Task inputs are saved individually using the name of the input parameter. Task outputs are stored as a single object, unless the output is a dictionary, in which case each element is saved individually using the key as its name.
The folder store serializer is distinct from the LabOne Q serializer. The LabOne Q serializer saves LabOne Q objects. The folder store serializer saves a much wider range of objects. It stores objects in standard formats that other tools can load. For LabOne Q objects, the standard format is that produced by the LabOne Q serializer, so the folder store uses that for LabOne Q objects.
The folder store serializer only saves objects. Objects are intended to be loaded using standard tools. For example, JSON files can be loaded with standard JSON libraries, PNGs can be loaded with image viewers, stored LabOne Q objects can be loaded with LabOne Q.
Types that are saved by the folder store and the formats they are stored in are:
- str -- serialized as a UTF-8 encoded text file (.txt)
- bytes -- serialized as a binary file (.dat)
- PIL.Image -- saved using PIL.Image.save (default format is PNG)
- matplotlib.figure.Figure -- saved using matplotlib.figure.Figure.savefig (default format is PNG)
- numpy.ndarray -- saved using numpy.save (.npy)
- lmfit.model.ModelResult -- saved as JSON using the dictionary from lmfit.model.ModelResult.summary (.json)
- uncertainties.core.Variable -- the value, std_dev and tag attributes are saved as a JSON dictionary (.json)
- uncertainties.core.AffineScalarFunc -- the value and std_dev attributes are saved as a JSON dictionary (.json)
- sklearn.base.BaseEstimator -- the estimator parameters are saved as a JSON dictionary (.json)
- LabOne Q objects -- see the list of supported LabOne Q objects below, saved using the LabOne Q serializer (.json)
- list -- saved as numpy.ndarray, except for lists of QuantumElement or lists of QuantumParameters, which are saved using the LabOne Q serializer
- tuple -- saved as a JSON list, except for tuples of QuantumElement or tuples of QuantumParameters, which are saved using the LabOne Q serializer
- dict -- saved as a JSON dict, except for dictionaries of QuantumElement or dictionaries of QuantumParameters, which are saved using the LabOne Q serializer
The types of LabOne Q objects that are supported by the folder store are:
- CompiledExperiment
- DeviceSetup
- Experiment
- QPU
- QuantumParameters (and lists, tuples, and dictionaries of these)
- QuantumElement (and lists, tuples, and dictionaries of these)
- Results
- TaskOptions
- WorkflowOptions
When the folder store saves objects as JSON, it uses its own extended JSON serializer that supports the following types:
- None, int, float, bool, str, numpy.integer -- stored directly as the JSON equivalent
- complex -- stored as {"real": c.real, "imag": c.imag}
- dict -- supported if keys are strings and values are other supported objects
- list -- of other supported objects
- tuple -- of other supported objects
- numpy.ndarray -- most dtypes are stored directly as lists of appropriate types; complex arrays are stored in a custom structure (see the next entry); object arrays are not supported
- complex numpy.ndarray -- stored as {"description": ..., "data": ...} where data is a list of [real(v0), imag(v0), real(v1), imag(v1), ...] corresponding to numpy's .view(dtype=float_dtype), and description is a string describing the format
- lmfit.model.ModelResult -- stored as the dictionary from lmfit.model.ModelResult.summary
- uncertainties.core.Variable -- the value, std_dev and tag attributes are stored as a dictionary
- uncertainties.core.AffineScalarFunc -- the value and std_dev attributes are stored as a dictionary
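As a quick illustration of the layout of the data list for a complex array (the exact description string is left to the implementation):
import numpy as np

arr = np.array([1 + 2j, 3 - 4j])
# Interleaved real/imag values, matching numpy's float view of the complex array:
arr.view(np.float64).tolist()  # -> [1.0, 2.0, 3.0, -4.0]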
Raising exceptions when inputs and outputs cannot be stored¶
By default a folder store logs a warning if a task input or output cannot be saved, but this behavior can be changed by setting the save_mode option when creating the folder store or by calling the .save_mode method.
The supported save modes are:
- WARN: A warning is logged when an input or output cannot be saved (the default mode).
- RAISE: An exception is raised when an input or output cannot be saved.
- SKIP: Task inputs and outputs are not saved.
Let's create a new folder store that doesn't save task inputs and outputs, and then inspect the save mode that was set:
folder_store_skip = FolderStore("./experiment_store", save_mode="skip")
folder_store_skip.save_mode()
We can also modify the save mode of an existing folder store:
folder_store_skip.save_mode("raise")
In the remaining sections, we'll look at how to write ad hoc comments into the logs and how to save data files to disk.
Logging comments from within tasks¶
Logbooks allow tasks to add their own messages to the logbook as comments.
This is done by calling the comment(...) function within a task.
We'll work through an example below:
from laboneq.workflow import comment, task, workflow
Let's write a small workflow and a tiny task that just writes a comment to the logbook:
@task
def log_a_comment(msg):
    comment(msg)

@workflow
def demo_comments():
    log_a_comment("Activating multi-state discrimination! <sirens blare>")
    log_a_comment("Analysis successful! <cheers>")
Now when we run the workflow we'll see the comments appear in the logs:
wf = demo_comments()
result = wf.run()
Above you should see the two comments. They look like this:
Comment: Activating multi-state discrimination! <sirens blare>
...
Comment: Analysis successful! <cheers>
In addition to comment(...), the logbook supports a function log(level: int, message: str, *args: object), which logs a message at the specified logging level, similar to Python's logging module. This function is useful for messages that are not regular user comments but that let tasks give feedback about issues which are still important to record.
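For example, a task might record a warning like this (a sketch; we assume log can be imported from laboneq.workflow alongside comment, and the task and workflow names are ours):
import logging
from laboneq.workflow import log, task, workflow

@task
def check_amplitude(amplitude):
    if amplitude > 0.8:
        # Record a warning-level message in the logbook, analogous to
        # Python's logging levels.
        log(logging.WARNING, "Pi-pulse amplitude %s is close to full scale.", amplitude)
    return amplitude

@workflow
def demo_log_levels():
    check_amplitude(0.95)

result = demo_log_levels().run()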
Store data from within tasks¶
Logbooks also allow files to be saved to disk using the function save_artifact.
Here we will create a figure with matplotlib and save it to disk. The folder store will automatically save it as a PNG.
The kinds of objects the folder store serializes are described above in The folder store serializer.
import PIL
from laboneq.workflow import save_artifact
from matplotlib import pyplot as plt
Let's write a small workflow that plots the sine function and saves the plot using save_artifact:
@task
def sine_plot():
    fig = plt.figure()
    plt.title("A sine wave")
    x = np.linspace(0, 2 * np.pi, 100)
    y = np.sin(x)
    plt.plot(x, y)
    save_artifact("Sine Plot", fig)

@workflow
def demo_saving():
    sine_plot()
Since we deactivated the folder store, let's activate it again now:
folder_store.activate()
And run our workflow:
wf = demo_saving()
result = wf.run()
You can see in the logs that an artifact was created:
Artifact: 'Sine Plot' of type 'Figure' logged
Now let's load the image from disk.
First we need to find the logbook folder created for our workflow:
demo_saving_folders = sorted(store_folder.glob("*/*-demo-saving"))
demo_saving_folder = demo_saving_folders[-1]
demo_saving_folder
And let's list its contents:
sorted(demo_saving_folder.iterdir())
And finally let's load the saved image using PIL:
PIL.Image.open(demo_saving_folder / "Sine Plot.png")
Saving an object also generates an entry in the folder store log.
We can view it by opening the log:
experiment_log = demo_saving_folder / "log.jsonl"
logs = [json.loads(line) for line in experiment_log.read_text().splitlines()]
logs
As you can see above, the log records the name (artifact_name) and type (artifact_type) of the saved object, and the name of the file it was written to (artifact_files).
Saving an artifact may write multiple files to disk.
The artifact_metadata contains additional user-supplied information about the saved object, while artifact_options records the options used to save it. For example, we could have elected to save the figure in another file format. We'll see how to use both next.
Specifying metadata and options when saving¶
Let's again make a small workflow that saves a plot, but this time we'll add some options and metadata.
@task
def sine_plot_with_options():
    fig = plt.figure()
    plt.title("A sine wave")
    x = np.linspace(0, 2 * np.pi, 100)
    y = np.sin(x)
    plt.plot(x, y)
    [ax] = fig.get_axes()
    save_artifact(
        "Sine Plot",
        fig,
        metadata={
            "title": ax.get_title(),
        },
        options={
            "format": "jpg",
        },
    )

@workflow
def demo_saving_with_options():
    sine_plot_with_options()
And run the workflow to save the plot:
wf = demo_saving_with_options()
result = wf.run()
Again we open the workflow folder and load the saved image:
demo_saving_with_options_folders = sorted(
store_folder.glob("*/*-demo-saving-with-options")
)
demo_saving_with_options_folder = demo_saving_with_options_folders[-1]
demo_saving_with_options_folder
sorted(demo_saving_with_options_folder.iterdir())
Now when we load the image it is very slightly blurry, because it was saved as a JPEG which uses lossy compression:
PIL.Image.open(demo_saving_with_options_folder / "Sine Plot.jpg")
And if we view the logs we can see that the title was recorded in the artifact_metadata:
experiment_log = demo_saving_with_options_folder / "log.jsonl"
logs = [json.loads(line) for line in experiment_log.read_text().splitlines()]
logs
The supported options for saving artifacts depend on the type of artifact. For our matplotlib figure example, the options are forwarded to matplotlib.pyplot.savefig and are documented in the Matplotlib documentation, with the following changes to the default values:
- format is set to "png" by default
- bbox_inches is set to "tight" by default
In the same way, the options for a PIL.Image.Image are forwarded to PIL.Image.Image.save and are documented in the Pillow documentation, with the format defaulting to "PNG". For a numpy.ndarray, the options are forwarded to numpy.save and are documented in the NumPy documentation, with allow_pickle set to False by default.
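For instance, a numpy array artifact could be saved like this (a sketch; the task, workflow, and artifact names are ours), with any options forwarded to numpy.save:
import numpy as np
from laboneq.workflow import save_artifact, task, workflow

@task
def save_raw_counts():
    counts = np.linspace(0.0, 1.0, 50)
    # Options for numpy arrays are forwarded to numpy.save; the default
    # allow_pickle=False means only plain (non-object) dtypes are written.
    save_artifact("Raw Counts", counts)

@workflow
def demo_array_artifact():
    save_raw_counts()

result = demo_array_artifact().run()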
We're done!