{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "fe3f6d82fee59fb2",
   "metadata": {},
   "source": "# Experiment Workflow Automation"
  },
  {
   "cell_type": "markdown",
   "id": "92336f765378fd1a",
   "metadata": {},
   "source": [
    "In the [Automation](https://docs.zhinst.com/labone_q_user_manual/core/functionality_and_concepts/07a_automation/tutorials/00_automation.html) tutorial, we introduced the LabOne Q Automation framework and showed how it can be used to automate a wide variety of experiment suites. In particular, we explained how we can write our own automation subclasses to customize the automation framework and automate arbitrary Python routines.\n",
    "\n",
     "In the LabOne Q Applications Library, we use [experiment workflows](https://docs.zhinst.com/labone_q_user_manual/applications_library/tutorials/sources/experiment_workflows.html) to standardize and structure our experiments. For this use case, we have already written the automation subclasses, so that you do not have to. In this tutorial, we explain how to use them to automate a suite of experiment workflows.\n",
    "\n",
    "<img src=\"../../how-to-guides/images/labone_q_automation.svg\" align=\"left\">"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "690f4ba6b20046c7",
   "metadata": {},
   "source": "## Aim"
  },
  {
   "cell_type": "markdown",
   "id": "d704069da6d6168c",
   "metadata": {},
   "source": "The goal of experiment workflow automation is to automate the tedious human intervention involved when running an experiment suite, such as single-qubit tune-up. Experiment workflows are the building blocks that partially automate this process by creating, compiling, and running the experiment, analyzing the results, and updating the QPU if necessary. The remaining human step is to evaluate the analysis of the results and make decisions, such as assessing whether the experiment was successful, which parameters to update, and which experiment to run next. This step is captured by the experiment workflow [evaluate task](https://docs.zhinst.com/labone_q_user_manual/applications_library/tutorials/sources/experiment_workflows.html#evaluate-task) together with the automation framework."
  },
  {
   "cell_type": "markdown",
   "id": "637139212d915b64",
   "metadata": {},
   "source": "## Subpackage structure"
  },
  {
   "cell_type": "markdown",
   "id": "184c4e6ba52ea263",
   "metadata": {},
    "source": "In order to work with experiment workflows, we define the subclasses `WorkflowAutomation`, `WorkflowLayer`, and `WorkflowNode` in the `automation.workflow` directory. To help keep the logic subclasses organized, we also define the `WorkflowLogic` class in the same directory. The standard set of evaluation tasks is defined in the `tasks.evaluation` file; evaluation tasks specific to workflow automation may also be written in this directory. The user is, of course, free to create other directories in the automation subpackage in their own copy of the Applications Library and to define their own custom subclasses, using ours as a template."
  },
  {
   "cell_type": "markdown",
   "id": "5256d8e1acced1e4",
   "metadata": {},
   "source": "## Example problem"
  },
  {
   "cell_type": "markdown",
   "id": "d355b4a23e53fb79",
   "metadata": {},
   "source": [
    "In order to explain experiment workflow automation in more concrete terms, let us look at a simple example.\n",
    "\n",
     "Consider an experiment suite consisting of a qubit spectroscopy experiment on four qubits, repeated over three successive layers. We define an experiment as successful if the $r^2$ value of the results fit is above a certain threshold. Once a set of experiments is successful, we move on to the next layer; once all three layers have succeeded, the experiment suite has passed."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ff10f06e33fdf32f",
   "metadata": {},
   "source": "## Imports"
  },
  {
   "cell_type": "code",
   "id": "initial_id",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "import numpy as np\n",
    "from laboneq.automation.logic import FixedParameterUpdate\n",
    "from laboneq.automation.serialization import load_automation_parameters_from_file\n",
    "from laboneq.automation.web_viewer import start_web_viewer\n",
    "from laboneq.simple import *\n",
    "\n",
    "from laboneq_applications.automation import WorkflowAutomation, WorkflowLayer\n",
    "from laboneq_applications.automation.workflow.logic import AdaptFrequencyRange\n",
    "from laboneq_applications.experiments import qubit_spectroscopy\n",
    "from laboneq_applications.qpu_types.tunable_transmon import demo_platform"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "1c8dc876be95214",
   "metadata": {},
   "source": "## Setting up the quantum platform"
  },
  {
   "cell_type": "markdown",
   "id": "1633c6b55ba174c0",
   "metadata": {},
   "source": [
    "In order to initialize a `WorkflowAutomation` instance, there are two additional arguments to be aware of. There is a compulsory argument `session`, which is the LabOne Q `Session` object, and an optional argument `qpu`, which is the `QPU` object. If the `qpu` argument is passed to the `WorkflowAutomation`, then this QPU is used by default throughout the automation graph. If the `qpu` argument is not set, then a QPU needs to be passed directly to each layer.\n",
    "\n",
    "For this example, let us set up a demo session and QPU containing six tunable transmon qubits. The UIDs for the qubits are: `q0`, `q1`, `q2`, `q3`, `q4`, `q5`."
   ]
  },
  {
   "cell_type": "code",
   "id": "92d1604c726f836f",
   "metadata": {},
   "source": [
    "qt_platform = demo_platform(n_qubits=6)\n",
    "setup = qt_platform.setup\n",
    "qpu = qt_platform.qpu\n",
    "qubits = qpu.quantum_elements\n",
    "session = Session(setup)\n",
    "session.connect(do_emulation=True)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "b5b17a276601cf5",
   "metadata": {},
   "source": "## Creating the folder store"
  },
  {
   "cell_type": "markdown",
   "id": "eeb0b30441c97562",
   "metadata": {},
   "source": "Since we will be running experiment workflows, we can enable the folder store to store the experiment output. Experiment workflows run as part of an automation will have their output structured in a nested directory tree, designed to make viewing automation output easier."
  },
  {
   "cell_type": "code",
   "id": "99e927f34a6e81c6",
   "metadata": {},
   "source": [
    "from pathlib import Path\n",
    "\n",
    "folder_store = workflow.logbook.FolderStore(Path.cwd())"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "4285f69b2a3a7cba",
   "metadata": {},
   "source": "We disable saving in this tutorial. To enable it, simply run `folder_store.activate()`."
  },
  {
   "cell_type": "code",
   "id": "8dcb8315acee669c",
   "metadata": {},
   "source": "folder_store.deactivate()",
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "22952c7ea8179f3a",
   "metadata": {},
   "source": "## Constructing the workflow automation parameters"
  },
  {
   "cell_type": "markdown",
   "id": "8fc17b4d7c05be63",
   "metadata": {},
   "source": [
    "As in the [Automation](https://docs.zhinst.com/labone_q_user_manual/core/functionality_and_concepts/07a_automation/tutorials/00_automation.html) tutorial, we have the option of either setting initial automation parameters when initializing the automation, or passing all of the necessary parameters to the layers individually. The `parameters` property on the automation object displays the layer parameters for each added layer.\n",
    "\n",
    "The workflow automation parameters dictionary has the following structure:\n",
    "\n",
    "   - The primary key must be the layer key (as for the automation parameters dictionary in the `Automation` base class).\n",
    "   - The secondary key must be the parameters type, i.e., one of:\n",
    "      - `workflow_parameters`\n",
    "            - The primary key of this subdictionary is the node key. Parameters that are not \"per-quantum-element\" go under the special primary key `__common__`.\n",
    "      - `evaluation_parameters`\n",
    "      - `temporary_parameters`\n",
    "      - `options`\n",
    "\n",
    "Here is an example of how this looks in yaml format:\n",
    "\n",
     "```yaml\n",
    "amplitude_fine_layer:\n",
    "    workflow_parameters:\n",
    "        q0:\n",
    "            repetitions: [1, 2, 3, 4]\n",
    "        q1:\n",
    "            repetitions: [1, 2, 3, 4]\n",
    "        __common__:\n",
    "            amplification_qop: x180\n",
    "            target_angle: 1.0\n",
    "            phase_offset: 0.0\n",
    "    evaluation_parameters:\n",
    "        fit_r2_thresholds:\n",
    "            q0: 0.9\n",
    "            q1: 0.91\n",
    "    temporary_parameters:\n",
    "        q0:\n",
    "            readout_resonator_frequency: 7e9\n",
    "    options:\n",
    "        evaluate: True\n",
    "        update: True\n",
    "```\n",
    "\n",
     "We recommend setting workflow automation parameters when initializing the `WorkflowAutomation`. When there are many parameters, it may be easier to store them in a yaml file and convert them to a dictionary using the `automation.serialization.load_automation_parameters_from_file` function, as shown in the [Automation](https://docs.zhinst.com/labone_q_user_manual/core/functionality_and_concepts/07a_automation/tutorials/00_automation.html) tutorial.\n",
    "\n",
    "At the beginning, we do not know exactly what parameters we should use but we can pass in at least the right structure with some rough initial values as a starting point. We know that we are expecting three layers, which we can call `qs1`, `qs2`, `qs3`, and we know that we are only interested in the first four qubits, `q0`, `q1`, `q2`, `q3`. We also know that we will be using the qubit spectroscopy experiment workflow, which takes an array of frequencies per qubit, and we will need the `evaluate` and `update` options set to fully benefit from the automation framework."
   ]
  },
  {
   "cell_type": "code",
   "id": "9c73d8d0aa23db3a",
   "metadata": {},
   "source": [
    "auto_params = {}\n",
    "for layer_key in [\"qs1\", \"qs2\", \"qs3\"]:\n",
    "    auto_params[layer_key] = {\"workflow_parameters\": {}}\n",
    "    for q in qubits[:4]:\n",
    "        auto_params[layer_key][\"workflow_parameters\"][q.uid] = {\n",
    "            \"frequencies\": np.linspace(6.1e9, 6.6e9, 101)\n",
    "        }\n",
    "    auto_params[layer_key][\"options\"] = {\n",
    "        \"evaluate\": True,\n",
    "        \"update\": True,\n",
    "        \"count\": 2048,\n",
    "        \"active_reset\": True,\n",
    "    }"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "bf224f1355cdc6dc",
   "metadata": {},
   "source": "Equivalently, we could load the parameters from a [yaml file](initial_parameters.yml), as shown below."
  },
  {
   "cell_type": "code",
   "id": "e9aaf9fc4b6c0065",
   "metadata": {},
   "source": [
    "auto_params = load_automation_parameters_from_file(\"initial_parameters.yml\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "426561ec47060df7",
   "metadata": {},
   "source": "## Constructing the automation graph"
  },
  {
   "cell_type": "markdown",
   "id": "4a12165b80a838c7",
   "metadata": {},
   "source": "We are now ready to initialize our workflow automation object."
  },
  {
   "cell_type": "code",
   "id": "47c33a0d60621a42",
   "metadata": {},
   "source": [
    "auto = WorkflowAutomation(session, qpu, name=\"example\", parameters=auto_params)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "fd6400350209d824",
   "metadata": {},
    "source": "We can now define our workflow layers and add them to the automation. In this case, the function of each workflow layer is the workflow builder of the experiment."
  },
  {
   "cell_type": "code",
   "id": "be72fec0a890ff3f",
   "metadata": {},
   "source": [
    "qs1 = WorkflowLayer(\n",
    "    qubit_spectroscopy.experiment_workflow,\n",
    "    [\"q0\", \"q1\", \"q2\", \"q3\"],\n",
    "    key=\"qs1\",\n",
    "    depends_on={\"root\"},\n",
    ")\n",
    "auto.add_layer(qs1)\n",
    "qs2 = WorkflowLayer(\n",
    "    qubit_spectroscopy.experiment_workflow,\n",
    "    [\"q0\", \"q1\", \"q2\", \"q3\"],\n",
    "    key=\"qs2\",\n",
    "    depends_on={\"qs1\"},\n",
    ")\n",
    "auto.add_layer(qs2)\n",
    "qs3 = WorkflowLayer(\n",
    "    qubit_spectroscopy.experiment_workflow,\n",
    "    [\"q0\", \"q1\", \"q2\", \"q3\"],\n",
    "    key=\"qs3\",\n",
    "    depends_on={\"qs2\"},\n",
    ")\n",
    "auto.add_layer(qs3)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "4802912dd258e2bf",
   "metadata": {},
   "source": "## Viewing the automation graph"
  },
  {
   "cell_type": "markdown",
   "id": "ac260a051c0ceb49",
   "metadata": {},
   "source": "In this tutorial, we will view the automation graph interactively using the web viewer."
  },
  {
   "cell_type": "code",
   "id": "1bb7f97478841b2b",
   "metadata": {
    "tags": [
     "nbval-skip"
    ]
   },
   "source": [
    "start_web_viewer(auto)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "9bac66ae0c7be86",
   "metadata": {},
   "source": "## Accessing the automation graph"
  },
  {
   "cell_type": "markdown",
   "id": "da6677e8a2b6f90a",
   "metadata": {},
   "source": [
    "In addition to the access options outlined in the [Automation](https://docs.zhinst.com/labone_q_user_manual/core/functionality_and_concepts/07a_automation/tutorials/00_automation.html) tutorial, `WorkflowAutomation` has a number of additional features to improve ease-of-use.\n",
    "\n",
     "One customization is granular access to the workflow automation parameters. Apart from viewing the layer automation parameters all at once, the `WorkflowLayer` also has the following attributes to get/set/delete part of the parameters dictionary:\n",
     "\n",
     "- `workflow_parameters` (element + common workflow parameters)\n",
     "- `element_workflow_parameters` (workflow parameters that are set per quantum element)\n",
    "- `common_workflow_parameters` (workflow parameters that apply to all quantum elements)\n",
    "- `evaluation_parameters`\n",
    "- `temporary_parameters`\n",
    "- `options`\n",
    "\n",
    "Another customization is aliases to generic base class names. For example, on the `WorkflowLayer` class we have the following aliases from `AutomationLayer`:\n",
    "\n",
    "- `function` <-> `workflow_builder`\n",
    "- `node_keys` <-> `quantum_elements`\n",
    "- `results` <-> `workflow_results`\n",
    "\n",
    "We can demonstrate how some of these look below."
   ]
  },
  {
   "cell_type": "code",
   "id": "947890173d594f48",
   "metadata": {},
   "source": [
    "auto[\"qs1\"].workflow_parameters"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "id": "73d82292bd0dfbe0",
   "metadata": {},
   "source": [
    "auto[\"qs1\"].workflow_builder"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "6b8cbe24fcff35a8",
   "metadata": {},
   "source": "## Running the automation graph"
  },
  {
   "cell_type": "markdown",
   "id": "8777e2cfadff150b",
   "metadata": {},
   "source": "We are now ready to run the automation graph."
  },
  {
   "cell_type": "code",
   "id": "4faa6fdee1152ee",
   "metadata": {},
   "source": [
    "auto.run()"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "7db0a3ec33ee5792",
   "metadata": {},
   "source": [
    "We can see that the entire automation graph passed.\n",
    "\n",
    "If running with an activated folder store, notice also how the workflow output has been saved in the folder store. Previously, with [workflows](https://docs.zhinst.com/labone_q_user_manual/applications_library/tutorials/sources/logbooks.html#the-folderstore), we had a date folder `{date}` with `{timestamp}-{workflow_name}` subfolders. If a workflow is run as part of an automation, however, we now have the directory structure `{date}`/`{automation_timestamp}-{automation_name}`/`{layer_key}`/`{timestamp}-{workflow_name}`/. Moreover, if a layer is run sequentially, or a layer is run explicitly using `run_layer` with the `node_keys` argument, then the directory structure is `{date}`/`{automation_timestamp}-{automation_name}`/`{layer_key}`/`{node_keys}`/`{timestamp}-{workflow_name}`/. This provides easier access to the workflow results and plots.\n",
    "\n",
     "Let us take a closer look at what happened when we ran the automation. We can see that the `qubit_spectroscopy` workflow has an evaluation task, and that this evaluation task uses the standard task `evaluate_parameter_and_fit_r2_thresholds`. As a reminder, this task declares `success` when the $r^2$ value of the fit is above a certain threshold, and `update` when the change in a parameter is above a certain threshold. Since we did not set any evaluation parameters, we are using the default thresholds set in the evaluation task. We can check these below."
   ]
  },
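  {
   "cell_type": "markdown",
   "id": "3f1a9c2b7e5d4a01",
   "metadata": {},
   "source": [
    "As a side note, the nested directory layout described above can be sketched with plain string formatting. The timestamps and names below are purely illustrative, not output of the framework:\n",
    "\n",
    "```python\n",
    "# Purely illustrative names; real runs use actual timestamps.\n",
    "date = \"20250101\"\n",
    "automation_dir = \"120000-example\"  # {automation_timestamp}-{automation_name}\n",
    "layer_key = \"qs1\"\n",
    "workflow_dir = \"120005-qubit_spectroscopy\"  # {timestamp}-{workflow_name}\n",
    "\n",
    "path = \"/\".join([date, automation_dir, layer_key, workflow_dir])\n",
    "print(path)  # 20250101/120000-example/qs1/120005-qubit_spectroscopy\n",
    "```"
   ]
  },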
  {
   "cell_type": "code",
   "id": "4af7eafbcd26057",
   "metadata": {},
   "source": [
    "auto[\"qs1\"].workflow_results[(\"q0\", \"q1\", \"q2\", \"q3\")].tasks[\"evaluate_experiment\"].src"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "bee4136579d03754",
   "metadata": {},
    "source": "We can see that the default `fit_r2_threshold` is 0.99. Let us check what $r^2$ value we actually obtained for this layer, for one of the qubits (since they are all identical):"
  },
  {
   "cell_type": "code",
   "id": "1bb972023cdd10a",
   "metadata": {},
   "source": [
    "auto[\"qs1\"].workflow_results[(\"q0\", \"q1\", \"q2\", \"q3\")].tasks[\"analysis_workflow\"].tasks[\n",
    "    \"fit_data\"\n",
    "].output[\"q0\"]"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "1f03b8082142fd44",
   "metadata": {},
   "source": "From the fit data, we can see that we have an $r^2$ value of 0.99911648 > 0.99 and so our experiment is a `success`. Let us confirm that the experiment was indeed marked as a success."
  },
  {
   "cell_type": "code",
   "id": "ccfed5a56aa170c",
   "metadata": {},
   "source": [
    "auto[\"qs1\"].workflow_results[(\"q0\", \"q1\", \"q2\", \"q3\")].tasks[\n",
    "    \"evaluate_experiment\"\n",
    "].output[\"q0\"]"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "6456c9725a45da24",
   "metadata": {},
    "source": "The default `parameter` (`resonance_frequency_ge`) and `parameter_threshold` (2e8) for `qubit_spectroscopy` determine the `update` flag. In this case, the flag is `False`, which means that `resonance_frequency_ge` changed by less than 2e8 Hz (200 MHz). Let us confirm that this is the case:"
  },
  {
   "cell_type": "code",
   "id": "50d76e1387750069",
   "metadata": {},
   "source": [
    "print(\n",
    "    \"Old value = \",\n",
    "    auto[\"qs1\"]\n",
    "    .workflow_results[(\"q0\", \"q1\", \"q2\", \"q3\")]\n",
    "    .tasks[\"analysis_workflow\"]\n",
    "    .output[\"old_parameter_values\"][\"q0\"],\n",
    ")\n",
    "print(\n",
    "    \"New value = \",\n",
    "    auto[\"qs1\"]\n",
    "    .workflow_results[(\"q0\", \"q1\", \"q2\", \"q3\")]\n",
    "    .tasks[\"analysis_workflow\"]\n",
    "    .output[\"new_parameter_values\"][\"q0\"],\n",
    ")"
   ],
   "outputs": [],
   "execution_count": null
  },
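  {
   "cell_type": "markdown",
   "id": "5b2e8d1c4f7a9b02",
   "metadata": {},
   "source": [
    "The update decision itself amounts to a simple threshold comparison. The sketch below uses hypothetical frequency values, not the numbers printed above:\n",
    "\n",
    "```python\n",
    "parameter_threshold = 2e8  # default threshold for qubit_spectroscopy\n",
    "old_value = 6.40e9  # hypothetical old resonance_frequency_ge\n",
    "new_value = 6.41e9  # hypothetical new value from the fit\n",
    "\n",
    "# Update only if the parameter changed by more than the threshold\n",
    "update = abs(new_value - old_value) > parameter_threshold\n",
    "print(update)  # False: a 10 MHz change is below the 200 MHz threshold\n",
    "```"
   ]
  },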
  {
   "cell_type": "markdown",
   "id": "9ae5481569595b32",
   "metadata": {},
    "source": "Since this is only a minor change in the parameter, we have decided not to update the quantum element in the QPU in this case. This is useful, for example, to avoid updating the QPU due to noise. Let us check the QPU parameters to verify that this parameter has not been updated:"
  },
  {
   "cell_type": "code",
   "id": "ba88c6fe77d0bf68",
   "metadata": {},
   "source": [
    "qpu[\"q0\"].parameters.resonance_frequency_ge"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "aaf2715a954f71e8",
   "metadata": {},
   "source": "## Adding automation logic"
  },
  {
   "cell_type": "markdown",
   "id": "c0797d521a74d86b",
   "metadata": {},
    "source": "Now that we understand how the automation interacts with the experiment workflows, let us test our new knowledge. Let us set the $r^2$ threshold for the first layer to 1, so that the layer definitely fails. Then, by adding logic to the layer, we can iteratively reduce the threshold until the layer passes. Up to our iteration resolution, this will tell us the highest $r^2$ threshold for which the layer passes."
  },
  {
   "cell_type": "markdown",
   "id": "da8543b31941d202",
   "metadata": {},
    "source": "As in the [Automation](https://docs.zhinst.com/labone_q_user_manual/core/functionality_and_concepts/07a_automation/tutorials/00_automation.html) tutorial, we can use the standard logic class `FixedParameterUpdate` for this purpose. To demonstrate how this works, we can reset the automation graph, add evaluation parameters and logic to the first layer, and then rerun the graph."
  },
  {
   "cell_type": "code",
   "id": "67cb4ce173af034b",
   "metadata": {},
   "source": [
    "auto.reset()"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "id": "86a3601dd2bc6484",
   "metadata": {},
   "source": [
    "auto[\"qs1\"].evaluation_parameters = {\n",
    "    \"fit_r2_thresholds\": {\"q0\": 1, \"q1\": 1, \"q2\": 1, \"q3\": 1}\n",
    "}"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "id": "78da12712f5c61e8",
   "metadata": {},
   "source": [
    "auto[\"qs1\"].logic = FixedParameterUpdate(\n",
    "    new_layer_key=\"qs1\",\n",
    "    parameter_changes={\n",
    "        \"evaluation_parameters\": {\n",
    "            \"fit_r2_thresholds\": {\n",
    "                \"q0\": -0.0003,\n",
    "                \"q1\": -0.0003,\n",
    "                \"q2\": -0.0003,\n",
    "                \"q3\": -0.0003,\n",
    "            }\n",
    "        },\n",
    "    },\n",
    ")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "id": "bd5d66310d845135",
   "metadata": {},
   "source": [
    "auto.run()"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "7961f38006899e5b",
   "metadata": {},
   "source": "Again, the automation graph has passed completely. Let us look at the fail/pass counts to understand what has happened."
  },
  {
   "cell_type": "code",
   "id": "a2a040816deda216",
   "metadata": {},
   "source": [
    "print(f\"Fail count for layer qs1 = {auto['qs1'].fail_count}\")\n",
    "print(f\"Pass count for layer qs1 = {auto['qs1'].pass_count}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "a68096f738b58fa8",
   "metadata": {},
    "source": "Here we can see that, on the first attempt, layer `qs1` fails because 0.99911648 < 1. The logic then reduces the $r^2$ threshold by 0.0003 and returns to layer `qs1`. On the second attempt, the layer fails because 0.99911648 < 0.9997, and on the third attempt because 0.99911648 < 0.9994. On the fourth attempt, the layer passes because 0.99911648 > 0.9991. Hence, the final $r^2$ threshold that we expect is 0.9991. Let us verify that this is indeed the threshold now stored in the automation parameters."
  },
  {
   "cell_type": "code",
   "id": "8470af31afd61da7",
   "metadata": {},
   "source": [
    "auto[\"qs1\"].evaluation_parameters"
   ],
   "outputs": [],
   "execution_count": null
  },
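  {
   "cell_type": "markdown",
   "id": "7c4d2f9a1e8b6c03",
   "metadata": {},
   "source": [
    "The bookkeeping of this iteration can be reproduced in plain Python. This is only a sketch of the arithmetic described above, not of the automation framework itself:\n",
    "\n",
    "```python\n",
    "fit_r2 = 0.99911648  # r^2 value of the fit (constant in emulation)\n",
    "threshold = 1.0  # initial fit_r2_threshold\n",
    "step = 0.0003  # decrement applied by the logic on each failure\n",
    "\n",
    "fails = 0\n",
    "while fit_r2 < threshold:  # each failed attempt lowers the threshold\n",
    "    fails += 1\n",
    "    threshold = round(threshold - step, 10)\n",
    "# the next attempt passes\n",
    "print(fails, threshold)  # 3 0.9991\n",
    "```"
   ]
  },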
  {
   "cell_type": "markdown",
   "id": "c52f6495764f90f6",
   "metadata": {},
    "source": "This is a simple example to demonstrate how the automation works in emulation mode. In practice, rather than relaxing the evaluation parameters until a layer passes, we would more likely update the workflow parameters of the experiment itself."
  },
  {
   "cell_type": "markdown",
   "id": "d23c379294bdd069",
   "metadata": {},
   "source": "### Adaptive automation logic"
  },
  {
   "cell_type": "markdown",
   "id": "41eb896fbc68894e",
   "metadata": {},
   "source": [
    "In the above example, we showed how we can perform a fixed parameter update when an experiment fails. However, in some cases, the parameter update that should be performed is dependent on the results of the previous experiment, regardless of whether it passed or failed. In this section, we present a simple example, where the automation logic uses the results of the previous run.\n",
    "\n",
     "Let us consider the case where, after a layer run fails, we want to update the frequency range by a given multiplier (preserving the midpoint) that depends on the frequency range of the last run. For this, we can use the `AdaptFrequencyRange` subclass in `automation.logic`. For this class, the dictionary `range_thresholds` specifies which multiplier to apply to a given frequency range. In this example, if the frequency range is between 0 and 200 MHz, we multiply the range by 1.1; if the range is between 200 MHz and 400 MHz, we multiply it by 1.2; and so on. We can set this logic to run for 3 iterations, regardless of whether the layer passes or fails.\n",
    "\n",
    "Let us reset the automation graph, apply this logic to layer `qs1`, and rerun the graph."
   ]
  },
  {
   "cell_type": "code",
   "id": "f3572a538bc08c8c",
   "metadata": {},
   "source": [
    "auto.reset()"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "id": "beb8065e5a2d4ecb",
   "metadata": {},
   "source": [
    "del auto[\"qs1\"].evaluation_parameters  # reset the evaluation parameters"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "id": "a880816f25769add",
   "metadata": {},
   "source": [
    "auto[\"qs1\"].logic = AdaptFrequencyRange(\n",
    "    new_layer_key=\"qs1\",\n",
    "    range_thresholds={\n",
    "        0: 1.1,\n",
    "        200e6: 1.2,\n",
    "        400e6: 1.3,\n",
    "        600e6: 1.4,\n",
    "    },\n",
    "    iterations=3,\n",
    ")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "id": "c1a617fe548a1555",
   "metadata": {},
   "source": [
    "auto.run()"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "6c31f41124c0e6f3",
   "metadata": {},
   "source": "The automation graph again ran through successfully. As before, let us examine the fail/pass counts to see what happened."
  },
  {
   "cell_type": "code",
   "id": "7d841fbaac6cbadf",
   "metadata": {},
   "source": [
    "print(f\"Fail count for layer qs1 = {auto['qs1'].fail_count}\")\n",
    "print(f\"Pass count for layer qs1 = {auto['qs1'].pass_count}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "9d1df66a1c13e96",
   "metadata": {},
    "source": "Now we can see that layer `qs1` passed four times. Let us check the initial frequency range by printing the frequency range of the next layer, which has not been changed."
  },
  {
   "cell_type": "code",
   "id": "af189932d37b7ae2",
   "metadata": {},
   "source": [
    "frequencies = auto[\"qs2\"].workflow_parameters[\"q0\"][\"frequencies\"]\n",
    "print(f\"Initial frequency range = {max(frequencies) - min(frequencies)}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "8752185182cc31d8",
   "metadata": {},
    "source": "The original frequency range was 500 MHz. Therefore, on the first logic iteration, this was multiplied by 1.3 to give 650 MHz. On the second iteration, this was multiplied by 1.4, since 650 MHz > 600 MHz, which gives 910 MHz. Finally, on the third iteration, this was again multiplied by 1.4, which gives 1274 MHz. Once the three logic iterations are complete, the layer is run through normally, which gives the fourth pass. Let us verify that the final frequency range for layer `qs1` is indeed 1274 MHz."
  },
  {
   "cell_type": "code",
   "id": "b3fd26c2dcda1a3b",
   "metadata": {},
   "source": [
    "frequencies = auto[\"qs1\"].workflow_parameters[\"q0\"][\"frequencies\"]\n",
    "print(f\"Final frequency range = {max(frequencies) - min(frequencies)}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
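  {
   "cell_type": "markdown",
   "id": "9e6f3a8b5d2c7d04",
   "metadata": {},
   "source": [
    "The range arithmetic above can also be reproduced directly. The helper below is a sketch of the lookup behaviour described earlier (apply the multiplier of the largest threshold not exceeding the current range), not the actual implementation of `AdaptFrequencyRange`:\n",
    "\n",
    "```python\n",
    "range_thresholds = {0: 1.1, 200e6: 1.2, 400e6: 1.3, 600e6: 1.4}\n",
    "\n",
    "def multiplier(frequency_range):\n",
    "    # pick the multiplier of the largest threshold below the range\n",
    "    key = max(t for t in range_thresholds if t <= frequency_range)\n",
    "    return range_thresholds[key]\n",
    "\n",
    "f_range = 500e6  # initial 500 MHz frequency range\n",
    "for _ in range(3):  # three logic iterations\n",
    "    f_range *= multiplier(f_range)\n",
    "print(round(f_range / 1e6))  # 1274 MHz\n",
    "```"
   ]
  },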
  {
   "cell_type": "markdown",
   "id": "6bb2eb1c7d7eb486",
   "metadata": {},
   "source": "As before, this was a contrived example to demonstrate how adaptive logic works. In practice, we would probably adapt our parameters based on the *outputs* of the layer executable, rather than the *inputs*. For example, if we notice that the $r^2$ value of the failed results fit is close to 0, we might want to make more drastic changes to the workflow parameters than if the fit is closer to 1."
  },
  {
   "cell_type": "markdown",
   "id": "f0cbac52bc3954e2",
   "metadata": {},
   "source": "## Saving/loading parameters"
  },
  {
   "cell_type": "markdown",
   "id": "66dcbba6124115fb",
   "metadata": {},
   "source": "As in the [Automation](https://docs.zhinst.com/labone_q_user_manual/core/functionality_and_concepts/07a_automation/tutorials/00_automation.html) tutorial, we can save the stateful automation parameters using `save_parameters`."
  },
  {
   "cell_type": "code",
   "id": "fa59fd4baede0eed",
   "metadata": {},
   "source": [
    "# Uncomment the line below to save the automation parameters\n",
    "# auto.save_parameters()"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "7e6239070a3c873",
   "metadata": {},
   "source": "Notice that, by default, the automation parameters are saved as a yaml file in the timestamped automation directory created by the folder store. If the folder store is not activated, the `save_parameters` method will create a folder-store-compatible directory tree, unless specified otherwise. The automation parameters themselves also have a timestamp and so can be saved multiple times in the same directory."
  },
  {
   "cell_type": "code",
   "id": "c8ebef0e7682c114",
   "metadata": {},
   "source": [
    "# Uncomment the line below to save the automation parameters\n",
    "# auto.save_parameters()"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "id": "fdc68329921b1ae8",
   "metadata": {},
   "source": "Similarly, the parameters can be loaded using the `load_parameters` method."
  },
  {
   "cell_type": "markdown",
   "id": "bfdab10d7aeb075d",
   "metadata": {},
   "source": "This tutorial covered the details of how to use the LabOne Q Automation framework together with workflows. With these tools, it is possible to run, for example, automated tune-up routines for single-qubit and two-qubit gates, as well as randomized benchmarking, and other experiment suites."
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
