Commit
Tutorial updates for single-param wildcard and procedural error bars.
sserita committed Apr 4, 2024
1 parent 3e55098 commit 49fc6c6
Showing 3 changed files with 267 additions and 4 deletions.
95 changes: 92 additions & 3 deletions jupyter_notebooks/Tutorials/algorithms/GST-Protocols.ipynb
@@ -138,6 +138,95 @@
"custom_gauge_opt_model = results_TP2.estimates['GSTwithMyGO'].models['my_gauge_opt']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "### Wildcard parameters\n",
    "\n",
    "When a GST fit is poor, pyGSTi can quantify the remaining model violation as *wildcard error*: an error budget added on top of the model's predictions until they become consistent with the data. By default, one wildcard parameter is assigned to each primitive operation, which highlights the operations that contribute most to the unmodeled error. We enable this by including the 'wildcard' action in the protocol's `badfit_options`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"proto = pygsti.protocols.GateSetTomography(\n",
" target_model_TP, name=\"GSTwithPerGateWildcard\",\n",
" badfit_options={'actions': ['wildcard']}\n",
" )\n",
"\n",
    "# Artificially unset the threshold so that wildcard runs. YOU WOULD NOT DO THIS IN PRODUCTION RUNS\n",
"proto.badfit_options.threshold = None\n",
"\n",
"results_pergate_wildcard = proto.run(data, disable_checkpointing=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The wildcard can be retrieved by looking at unmodeled_error in the estimates\n",
"results_pergate_wildcard.estimates['GSTwithPerGateWildcard'].parameters['unmodeled_error']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another common form of wildcard is to have one parameter for SPAM and one for all the other gates."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"op_label_dict = {k:0 for k in target_model_TP.operations} # Assign all gates to value 0\n",
"op_label_dict['SPAM'] = 1 # Assign SPAM to value 1\n",
"\n",
"proto = pygsti.protocols.GateSetTomography(\n",
    "    target_model_TP, name=\"GSTwithGlobalGateWildcard\",\n",
" badfit_options={'actions': ['wildcard'], 'wildcard_primitive_op_labels': op_label_dict}\n",
" )\n",
"\n",
    "# Artificially unset the threshold so that wildcard runs. YOU WOULD NOT DO THIS IN PRODUCTION RUNS\n",
"proto.badfit_options.threshold = None\n",
"\n",
"results_globalgate_wildcard = proto.run(data, disable_checkpointing=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "Unfortunately, both of these wildcard strategies share the same problem: they are not unique. It is possible to \"slosh\" wildcard strength from one parameter to another and obtain another valid wildcard solution, which makes it difficult to make quantitative statements about relative wildcard strengths.\n",
"\n",
"In order to avoid this, we have also introduced a 1D wildcard solution. This takes some reference weighting for the model operations and scales a single wildcard parameter ($\\alpha$) up until the model fits the data. Since there is only one parameter, this does not have any of the ambiguity of the above wildcard strategies. Currently, the reference weighting used is the diamond distance from the noisy model to the target model, with the intuition that \"noisier\" operations are more likely to contribute to model violation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"proto = pygsti.protocols.GateSetTomography(\n",
    "    target_model_TP, name=\"GSTwith1DWildcard\",\n",
" badfit_options={'actions': ['wildcard1d'], 'wildcard1d_reference': 'diamond distance'}\n",
" )\n",
"\n",
    "# Artificially unset the threshold so that wildcard runs. YOU WOULD NOT DO THIS IN PRODUCTION RUNS\n",
"proto.badfit_options.threshold = None\n",
"\n",
"results_1d_wildcard = proto.run(data, disable_checkpointing=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -487,9 +576,9 @@
],
"metadata": {
"kernelspec": {
-  "display_name": "gst_checkpointing",
+  "display_name": "pygsti",
   "language": "python",
-  "name": "gst_checkpointing"
+  "name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -501,7 +590,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-  "version": "3.9.13"
+  "version": "3.11.5"
}
},
"nbformat": 4,
174 changes: 174 additions & 0 deletions jupyter_notebooks/Tutorials/reporting/ProceduralErrorBars.ipynb
@@ -0,0 +1,174 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Procedural Error Bars\n",
"\n",
"One other way we can use the `pygsti.report.reportables` module described in the [ModelAnalysisMetrics tutorial](ModelAnalysisMetrics.ipynb) is to procedurally generate error bars for any quantity you want.\n",
"\n",
    "First, let's simulate a noisy GST experiment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pygsti\n",
"from pygsti.modelpacks import smq1Q_XY\n",
"from pygsti.report import reportables as rptbl, modelfunction as modelfn"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"target_model = smq1Q_XY.target_model()\n",
"\n",
    "L = 128\n",
    "edesign = smq1Q_XY.create_gst_experiment_design(L)\n",
    "\n",
    "noisy_model = target_model.randomize_with_unitary(.1)\n",
    "noisy_model = noisy_model.depolarize(.05)\n",
    "\n",
    "N = 64\n",
    "dataset = pygsti.data.simulate_data(noisy_model, edesign, N)\n",
    "\n",
    "gst_proto = pygsti.protocols.StandardGST(modes=['full TP', 'CPTPLND', 'Target'], verbosity=2)\n",
    "data = pygsti.protocols.ProtocolData(edesign, dataset)\n",
"results = gst_proto.run(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's compute error bars on the CPTP estimate, and then get a 95% confidence interval \"view\" from the `ConfidenceRegionFactory`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"crfact = results.estimates['CPTPLND'].add_confidence_region_factory('stdgaugeopt', 'final')\n",
"crfact.compute_hessian(comm=None, mem_limit=3.0*(1024.0)**3) #optionally use multiple processors & set memlimit\n",
"crfact.project_hessian('intrinsic error')\n",
"\n",
"crf_view = results.estimates['CPTPLND'].confidence_region_factories['stdgaugeopt','final'].view(95)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "Finally, we can construct `pygsti.report.ModelFunction` objects from a function that computes some observable from a model. Evaluating such an object against the confidence-region view extracted above gives error bars on that quantity of interest.\n",
"\n",
"One common thing to check is error bars on the process matrices. The `ModelFunction` in this case only needs to return the operation:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"final_model = results.estimates['CPTPLND'].models['stdgaugeopt'].copy()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def get_op(model, lbl):\n",
" return model[lbl]\n",
"get_op_modelfn = modelfn.modelfn_factory(get_op)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rptbl.evaluate(get_op_modelfn(final_model, (\"Gxpi2\", 0)), crf_view)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rptbl.evaluate(get_op_modelfn(final_model, (\"Gypi2\", 0)), crf_view)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"But we can also create model functions that perform more complicated actions, such as computing other reportables."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Note that when creating ModelFunctions in this way, the model where you want the quantity evaluated must be the first argument\n",
"def ddist(model, ideal_model, lbl, basis):\n",
" return rptbl.half_diamond_norm(model[lbl], ideal_model[lbl], basis)\n",
"ddist_modelfn = modelfn.modelfn_factory(ddist)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rptbl.evaluate(ddist_modelfn(final_model, target_model, (\"Gxpi2\", 0), 'pp'), crf_view)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rptbl.evaluate(ddist_modelfn(final_model, target_model, (\"Gypi2\", 0), 'pp'), crf_view)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "pygsti",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
2 changes: 1 addition & 1 deletion pygsti/protocols/gst.py
@@ -595,7 +595,7 @@ class GSTBadFitOptions(_NicelySerializable):
Actions to take when a GST fit is unsatisfactory. Allowed actions include:
    * 'wildcard': Find an admissible wildcard model.
-    * 'ddist_wildcard': Fits a single parameter wildcard model in which
+    * 'wildcard1d': Fits a single parameter wildcard model in which
the amount of wildcard error added to an operation is proportional
to the diamond distance between that operation and the target.
    * 'robust': scale data according to our "robust statistics v1" algorithm,
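The 'wildcard1d' action described in this docstring can be sketched abstractly: given per-operation reference weights (e.g. diamond distances to the target), a single parameter α is grown until the per-operation budgets α·w reconcile the model with the data. The sketch below is illustrative only, not pyGSTi code; `fit_is_acceptable` is a hypothetical stand-in for the real fit criterion.

```python
# Illustrative sketch of the 'wildcard1d' idea: a single parameter alpha
# scales fixed per-operation reference weights until the fit is acceptable.
# NOT pyGSTi API; `fit_is_acceptable` is a hypothetical stand-in criterion.

def find_alpha(weights, fit_is_acceptable, alpha_max=1.0, tol=1e-6):
    """Bisect for the smallest alpha whose budgets {op: alpha*w} fix the fit."""
    lo, hi = 0.0, alpha_max
    # Grow hi until the fit becomes acceptable (assumed to happen eventually).
    while not fit_is_acceptable({op: hi * w for op, w in weights.items()}):
        hi *= 2
    # Standard bisection: acceptability is monotone in alpha.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fit_is_acceptable({op: mid * w for op, w in weights.items()}):
            hi = mid
        else:
            lo = mid
    return hi

# Toy criterion: the total budget must cover a fixed model violation.
weights = {'Gxpi2': 0.02, 'Gypi2': 0.01, 'SPAM': 0.03}  # e.g. diamond distances
violation = 0.012
acceptable = lambda budgets: sum(budgets.values()) >= violation
alpha = find_alpha(weights, acceptable)
```

Because there is a single degree of freedom, the resulting budgets inherit their relative sizes from the fixed weights, which is what removes the "sloshing" ambiguity of the per-operation strategies.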
