New serializers in LabOne Q¶
The new LabOne Q serialization framework allows for more flexibility in the serialization process. It has a versioning scheme that keeps track of changes in the LabOne Q data structures and ensures backwards compatibility between different versions of LabOne Q.
The new serialization framework also makes it possible to serialize and deserialize objects that are not part of the standard LabOne Q library. Such features are useful, for example, when users want to implement new quantum elements or quantum operations classes.
When the structure of an object changes, a new version of its serialization can be added to the serializer. The serializer then handles the different versions of the object automatically and thus maintains backwards compatibility.
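To build intuition for how this version dispatch can work, here is a minimal sketch. This is not LabOne Q code: the class and field names below are hypothetical, and the real framework's behaviour may differ in detail. The stored __version__ field selects which decoder method handles the payload:

```python
# Minimal sketch of version dispatch (not the LabOne Q implementation):
# the stored "__version__" selects which from_dict_vX method decodes the payload.
class SketchSerializer:
    VERSION = 2  # current version, used when writing new data

    @classmethod
    def from_dict(cls, data):
        version = data["__version__"]
        handler = getattr(cls, f"from_dict_v{version}")
        return handler(data["__data__"])

    @classmethod
    def from_dict_v1(cls, payload):
        # version 1 stored the field under the old name "uid"
        return {"name": payload["uid"]}

    @classmethod
    def from_dict_v2(cls, payload):
        # version 2 renamed the field to "name"
        return {"name": payload["name"]}
```

With this pattern, data written by either version of the serializer loads correctly under the newest one.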
The currently supported objects for serialization are:
- Python built-in data types
- Numpy arrays
- QPU
- QuantumParameters
- QuantumElement
- Results
- Workflow
- DeviceSetup
- Calibration
- Experiment
- CompiledExperiment
What is new in the serialization framework¶
Session¶
The serialization for Session objects is no longer supported in the new serialization framework.
CompiledExperiment¶
As was already the case in the old serialization framework, the new serialization framework does not promise backward compatibility for CompiledExperiment: loading a CompiledExperiment produced with a previous version of LabOne Q is not guaranteed to succeed. Users are advised to save the Experiment object instead and recreate the CompiledExperiment by calling session.compile() or the task compile_experiment().
Saving and loading methods¶
The new serialization framework uses a centralised interface for saving and loading LabOne Q data classes: the functions save, load, from_dict, to_dict, from_json, and to_json, which can be imported from laboneq.simple. These centralised functions supersede the .save and .load class methods of the LabOne Q data classes (for example, Experiment.save and Experiment.load), which are no longer supported.
Saving and loading any LabOne Q data class in the new serialization framework becomes simply:
from laboneq.simple import *
save(laboneq_object, filepath) # save
laboneq_object = load(filepath) # load
Let's have a look at how the new serializers work.
First, we import the serialization functions from laboneq.simple:
from __future__ import annotations
from laboneq.simple import from_dict, to_dict, to_json, from_json, save, load
If you want to serialize an object to a dictionary form, simply call to_dict(). For example, to serialize a QuantumElement object:
from laboneq.simple import QuantumElement
q0 = QuantumElement("q0")
serialized_q0 = to_dict(q0)
serialized_q0
Let's have a look at what is contained in the returned dictionary.
The most important field is __data__, which contains information required to initialize the serialized objects again.
The meta fields __serializer__ and __version__ help to reload the object with correct versioning. We will learn more about these fields in the next section.
The field __creator__ records the version of LabOne Q that performed the serialization. This is not crucial for the serialization process but can be useful for troubleshooting.
Please note that the returned dictionary is not directly JSON-serializable, as it may contain numpy arrays, which require a third-party library such as orjson to convert to JSON. If you want to serialize the object directly to JSON, consider using to_json, which is explained shortly.
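To see why, here is a quick illustration using only the standard library (bytes stands in for a numpy array here, since the json module cannot encode either):

```python
import json

# JSON-native values serialize fine with the standard library:
ok = json.dumps({"uid": "q0", "signals": {"drive": "q0/drive"}})

# A value the json module does not understand (bytes here, standing in
# for a numpy array) raises TypeError -- hence the need for an encoder
# such as orjson, or for LabOne Q's to_json():
try:
    json.dumps({"samples": b"\x00\x01"})
    needs_special_encoder = False
except TypeError:
    needs_special_encoder = True
```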
To load the object back, we use the from_dict function:
loaded_q0 = from_dict(serialized_q0)
loaded_q0
The functions to_json and from_json can be used in a similar way to convert objects to/from byte strings. Serializing objects to byte strings can be useful when sending them over a network.
byte_string_q0 = to_json(q0)
byte_string_q0
loaded_q0 = from_json(byte_string_q0)
loaded_q0
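Such a byte string can be written to a network connection as-is. In the sketch below, a local socketpair stands in for a real network connection, and a hand-written byte string stands in for the output of to_json():

```python
import socket

# A hand-written byte string standing in for the output of to_json():
payload = b'{"__serializer__": "QuantumElementSerializer", "__version__": 1}'

# socketpair() returns two connected sockets, emulating sender and receiver:
sender, receiver = socket.socketpair()
sender.sendall(payload)
sender.close()

# Read until the sender closes the connection:
chunks = []
while True:
    chunk = receiver.recv(4096)
    if not chunk:
        break
    chunks.append(chunk)
receiver.close()
received = b"".join(chunks)
```

On the receiving side, the reassembled byte string could then be passed to from_json() to reconstruct the object.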
Last but not least, we can convert objects to byte strings and save them to a file by using save:
save(q0, "q0.json")
And to load it back,
loaded_q0 = load("q0.json")
loaded_q0
Custom serializers¶
In the new serialization framework, serialization is decoupled from the data classes. A serializer class must be written for each data class that needs support for serialization.
LabOne Q provides a global default serializer registry that already contains serializers for the LabOne Q objects listed at the top of this notebook. To serialize LabOne Q objects that are not in this list, and hence not supported directly by one of the new serializers, you can write a new serialization class and add it into the serializer registry. Let's have a look at how to do this.
Writing and registering new serializers¶
A serializer must be written for any new class that does not have an existing serializer implemented for it or its parent classes.
We will learn how to write a new serializer class by writing one for the QuantumElement class and calling it QuantumElementSerializer.
Let's start by importing the needed modules and objects:
import attrs
from laboneq.serializers.base import VersionedClassSerializer
from laboneq.serializers.serializer_registry import serializer
from laboneq.serializers.types import (
    DeserializationOptions,
    JsonSerializableType,
    SerializationOptions,
)
from laboneq.serializers.core import import_cls
The new serializer class must inherit from VersionedClassSerializer and must define the two class variables SERIALIZER_ID and VERSION.
Specifying SERIALIZER_ID as the path to the class could be helpful when the serializer is not registered in the global serializer_registry. In this case, the serialization engine imports the class of the object using the path specified in SERIALIZER_ID.
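For intuition, resolving such a dotted path looks roughly like this. The real helper is laboneq.serializers.core.import_cls; the function below is a simplified stand-in:

```python
import importlib

def import_cls_sketch(path: str):
    # Split "package.module.ClassName" into module and class parts,
    # import the module, and fetch the class from it.
    module_name, _, cls_name = path.rpartition(".")
    return getattr(importlib.import_module(module_name), cls_name)
```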
We should not forget to add our new serializer to serializer_registry. This can be done via the decorator @serializer.
@serializer(types=[QuantumElement], public=True)
class QuantumElementSerializer(VersionedClassSerializer[QuantumElement]):
    SERIALIZER_ID = "laboneq.serializers.implementations.QuantumElementSerializer"
    VERSION = 1
In addition, we need to implement the following methods for the serializer: to_dict and from_dict_vx, where x is the version of the serializer.
Let's first look at the to_dict method, which returns a dictionary with three compulsory fields: __serializer__, __version__, and __data__.
The first two are metadata required for selecting the right serializer with the correct version.
The field __data__ contains the information required to load the object properly. Inspecting the definition of the QuantumElement class, we see that we need the following attributes to create a QuantumElement instance: uid, signals, and parameters. Because uid and signals are Python primitive data types, we can simply assign the corresponding values obj.uid and obj.signals. For a structured type such as parameters, however, we store both its class name and its serialized form.
Finally, the name of the class we are serializing goes into quantum_element_class.
@classmethod
def to_dict(
    cls, obj: QuantumElement, options: SerializationOptions | None = None
) -> JsonSerializableType:
    return {
        "__serializer__": cls.serializer_id(),
        "__version__": cls.version(),
        "__data__": {
            "quantum_element_class": f"{obj.__class__.__module__}.{obj.__class__.__name__}",
            "uid": obj.uid,
            "signals": obj.signals,
            "parameter_class": f"{obj.parameters.__class__.__module__}.{obj.parameters.__class__.__name__}",
            "parameters": attrs.asdict(obj.parameters),
        },
    }
Let's continue with the deserializing method from_dict_v1, which initializes a new QuantumElement object with inputs taken from the fields of __data__.
@classmethod
def from_dict_v1(
    cls,
    serialized_data: JsonSerializableType,
    options: DeserializationOptions | None = None,
) -> QuantumElement:
    data = serialized_data["__data__"]
    qe_cls = import_cls(data["quantum_element_class"])
    param_cls = import_cls(data["parameter_class"])
    return qe_cls(
        uid=data["uid"],
        signals=data["signals"],
        parameters=param_cls(**from_dict(data["parameters"])),
    )
Adding a new version to an existing serializer and dealing with API changes¶
Now let's imagine we'd like to rename parameters to attributes. This certainly breaks the backwards compatibility of the QuantumElement class and requires us to update its serializer, QuantumElementSerializer.
We first need to increase the serializer's VERSION to 2 and update to_dict accordingly.
@serializer(types=[QuantumElement], public=True)
class QuantumElementSerializer(VersionedClassSerializer[QuantumElement]):
    SERIALIZER_ID = "laboneq.serializers.implementations.QuantumElementSerializer"
    VERSION = 2

    @classmethod
    def to_dict(
        cls, obj: QuantumElement, options: SerializationOptions | None = None
    ) -> JsonSerializableType:
        return {
            "__serializer__": cls.serializer_id(),
            "__version__": cls.version(),
            "__data__": {
                "quantum_element_class": f"{obj.__class__.__module__}.{obj.__class__.__name__}",
                "uid": obj.uid,
                "signals": obj.signals,
                "attribute_class": f"{obj.attributes.__class__.__module__}.{obj.attributes.__class__.__name__}",
                "attribute": attrs.asdict(obj.attributes),
            },
        }
We then add from_dict_v2 using the new signature of the QuantumElement class. Note that from_dict_v1 should be kept as well (adapted to pass the old parameters payload to the new attributes argument), so that data saved with version 1 of the serializer can still be loaded.
@classmethod
def from_dict_v2(
    cls,
    serialized_data: JsonSerializableType,
    options: DeserializationOptions | None = None,
) -> QuantumElement:
    data = serialized_data["__data__"]
    qe_cls = import_cls(data["quantum_element_class"])
    attr_cls = import_cls(data["attribute_class"])
    return qe_cls(
        uid=data["uid"],
        signals=data["signals"],
        attributes=attr_cls(**from_dict(data["attribute"])),
    )
Performance improvement¶
The speed of saving and loading has been enhanced in the new serialization framework.
To illustrate this improvement, we compare the performance of the old serializer and the new one when saving and loading randomized benchmarking experiments. These experiments are based on examples from our how-to guide.
For a realistic comparison, we set the experiment parameters as follows: max_sequence_exponent=10, n_sequences_per_length=10, chunk_count=10.
The comparison was performed between the new serializer in LabOne Q release 2.56 and the old serializer that existed in LabOne Q up until version 2.55.
| Task | Speed up |
|---|---|
| to_dict(experiment) | 2x |
| from_dict(experiment) | 3x |
| save(experiment) | 2x |
| load(experiment) | 2x |
| save(compiled_experiment) | 1.5x |
| load(compiled_experiment) | 1.9x |
Deprecation¶
The old LabOne Q serialization framework was removed in LabOne Q version 2.55.0, released on June 19th.
The old and the new serializers use different serialization formats. The new serializers could still load data saved by the old serializers while both frameworks coexisted, but this is no longer guaranteed now that the old serializers have been removed. Hence, we strongly advise users to migrate to the new serialization framework as soon as possible.
Migration path¶
- In your codebase, replace the calls to the class methods .save() and .load() with calls to the new functions save() and load() available in laboneq.simple.
- Use the new functions save() and load() available in laboneq.simple to serialize LabOne Q objects.
- Migrate saved objects by loading them into the corresponding classes using the .load() class methods of the old serialization framework, and then saving them again into the new serialization format using the save() function available in laboneq.simple.
Is there a risk that your data is lost?¶
No! Even after the old serialization framework is removed, you can still retrieve objects saved with the old serializers by downgrading LabOne Q to version 2.54.0 and loading your objects via their .load() method. For example, to load a saved Experiment object:
from laboneq.dsl.experiment import Experiment
experiment = Experiment.load(filepath)