Add zero count outcomes #2889
Conversation
…42/pennylane into add_zero_count_outcomes
Most of the tests pass, but there is an error when running the code in line 904 of test_new_return_types.py that I'm not sure how to handle. I've gotten myself down a bit of a rabbit hole trying to sort out what qml.execute is doing, how qnode.tape works, and what is being passed to the function as meas1 and meas2 when the tests are run. Another set of eyes and/or a rundown of what this function is doing would be a big help.
Hi @lillian542, thanks again! For building the docs: we have a continuous integration check for building the docs; it's at the bottom of this PR. See the mention of it in the CI checks section of our docs.
This test file is related to a major change that is experimental for now. If working it out doesn't come along, we can handle it with the marker. I'll leave a more detailed review on the PR today, thank you! 🙂
Alright, that's what I've done for now. I'm not quite sure what is going on; it seems to be passed an operator where…
Great, looking forward to hearing your feedback!
Hi @lillian542, thank you! Overall looking great, I had some questions.
While reading the PR, a major thing came to mind that was not mentioned in the issue. Looking at the concrete implementation and considering the cost of obtaining eigenvalues, it might be best to preserve the previous behaviour and add zero count outcomes as an optional feature. A keyword argument, e.g. `all_outcomes`, can be added to `qml.counts`, with `all_outcomes=False` as the default. Since this was not originally mentioned in the issue, we can go forward with this PR as is and the team can implement this kwarg logic (let me know).

Reasons for having a sparse dict by default (`all_outcomes=False`):

- `qml.eigvals` can be costly to use for sizable Hamiltonians;
- For many qubits, the length of the output for `qml.counts` grows significantly.

In return, having zero count outcomes (`all_outcomes=True`) is still great to have when users would like to see all outcomes in the dictionary (e.g., users who are learning quantum computing).
Nice work!! 💯 I will review the tests as soon as all tests pass.
In the last commits to `master` there were some additions that require changes in this PR:

- Add `qml.measurements.AllCounts` here, in `pennylane/_device.py` (line 732 in b904c96):
  `and qml.measurements.Counts not in return_types`
- Should we add `AllCounts` here, in `pennylane/devices/default_mixed.py` (line 580 in b904c96)?
  `if obs.return_type in (Sample, Counts):`
- Add `AllCounts` in these two lines of `pennylane/tape/tape.py` (lines 481 and 482 in b904c96); see the sketch after this list:
  `self.is_sampled = any(m.return_type in (Sample, Counts) for m in self.measurements)`
  `self.all_sampled = all(m.return_type in (Sample, Counts) for m in self.measurements)`
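For illustration, a minimal sketch of the kind of extension requested in the last bullet, assuming `AllCounts` is importable from `pennylane.measurements` alongside `Sample` and `Counts`; the helper functions below are hypothetical stand-ins for the tape-level checks, not PennyLane's internal code:

```python
from pennylane.measurements import AllCounts, Counts, Sample

def contains_sampling(measurements):
    """True if any measurement has a sampled return type, now including AllCounts."""
    return any(m.return_type in (Sample, Counts, AllCounts) for m in measurements)

def all_sampling(measurements):
    """True only if every measurement has a sampled return type, including AllCounts."""
    return all(m.return_type in (Sample, Counts, AllCounts) for m in measurements)
```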
@AlbertMitjans Sorry for the delay, it's been a busy few days. I think everything is ready to go now.
Great job! 💯
I'll approve it once these small changes are applied!
Co-authored-by: Albert Mitjans <[email protected]>
…42/pennylane into add_zero_count_outcomes
Changes done! @AlbertMitjans
Some more changes.
Please next time make sure that all tests pass.
Hi @lillian542, just double-checking: how is it going here? Could we help in some way to push this toward merging?
@antalszava @AlbertMitjans I'm sorry for my slow response; you've caught me at an extremely busy time. I've made the most recently suggested changes. I'm not sure how to set the checks to run to make sure there are no further issues that need to be addressed. Can I do that on my end, and if so, how? I'll make sure to prioritise responding quickly so we can get this wrapped up.
No worries there! Let us know if we can help out even with development. :) Just merged … Otherwise, the tests can be run locally via … Let us know if we can help further!
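The exact commands in the comment above were lost in extraction. As a rough, hedged sketch, the relevant tests could be run locally with pytest along these lines; the test path and flags are assumptions, not taken from the original comment:

```python
# Run the measurement-related tests locally; the path is illustrative.
import pytest

pytest.main(["tests/test_measurements.py", "-x", "-q"])
```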
Hi @lillian542, as mentioned in the GitHub action, the file … Finally, commit the changes. Hope this helps. :)
Good job!! 🚀
Context:
The ability to compute the counts of outcomes from raw samples has recently been added to PennyLane (#2876 #2686 #2839).
The output of the new measurement is a dictionary where the keys are the outcomes and the values are their counts. This PR adds the option to include possible outcomes that were not observed as entries in the dictionary, with value 0.
Description of the Change:
Adds the kwarg `all_outcomes` to the `counts` function, with a default value of `False`. Calling `counts(all_outcomes=True)` returns all possible outcomes, including unobserved outcomes.

Uses the number of wires measured (if counting outcomes of measured computational basis states) or the eigenvalues of the observable to determine all possible outcomes, populates a dictionary with all values set to 0, and then updates it with counts where relevant.

The variable `all_outcomes` is saved on the `MeasurementProcess` object that is returned by the `counts` function, as `MeasurementProcess.return_type.all_outcomes`. This variable can then be accessed by `_samples_to_counts` as `obs.return_type.all_outcomes` when creating the counts dictionary.
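As a rough illustration of the dictionary-building step described above (not PennyLane's actual internals; the function name, signature, and rounding choice are made up for this sketch):

```python
from itertools import product

import numpy as np
import pennylane as qml

def zero_filled_counts(samples, wires=None, obs=None):
    """Build a counts dict that starts with every possible outcome set to 0."""
    if obs is None:
        # Computational-basis measurement: outcomes are all bitstrings on the measured wires.
        outcomes = ["".join(bits) for bits in product("01", repeat=len(wires))]
        observed = ["".join(str(int(b)) for b in s) for s in samples]
    else:
        # Observable measurement: outcomes are the observable's eigenvalues.
        outcomes = list(np.round(qml.eigvals(obs), 8))
        observed = list(np.round(samples, 8))

    counts = {outcome: 0 for outcome in outcomes}
    for o in observed:
        counts[o] += 1
    return counts

# Example: two measured wires, raw samples given as lists of bits.
print(zero_filled_counts([[0, 0], [1, 0], [0, 0]], wires=[0, 1]))
# {'00': 2, '01': 0, '10': 1, '11': 0}
```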
Benefits:

If `all_outcomes=True`, querying the dictionary about a possible outcome that was not observed does not throw a `KeyError`, and printing the dictionary shows all possible outcomes. A `KeyError` is raised only if the user tries to access results for an outcome that does not match the possibilities for the measured system.
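Illustrative only: a dictionary of the shape described above, assuming two measured wires and `all_outcomes=True` (the numbers are made up):

```python
counts = {"00": 57, "01": 0, "10": 43, "11": 0}

print(counts["01"])  # 0: unobserved but possible outcomes are present, no KeyError

try:
    counts["21"]
except KeyError:
    print("'21' is not a valid outcome for a two-qubit system")
```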
Possible Drawbacks:

None I can think of.
Related GitHub Issues:
(#2864)