
Add zero count outcomes #2889

Merged: 74 commits into PennyLaneAI:master, Sep 8, 2022

Conversation


@lillian542 lillian542 commented Aug 3, 2022

Context:
The ability to compute the counts of outcomes from raw samples has recently been added to PennyLane (#2876 #2686 #2839).

The output of the new measurement is a dictionary where keys are the outcomes and the values are counts. This PR adds the option to include possible outcomes that were not observed as entries in the dictionary, with value 0.

Description of the Change:
Adds the kwarg all_outcomes to the counts function, with a default value of False. Calling counts(all_outcomes=True) returns all possible outcomes, including those that were not observed.

Uses the number of wires measured (if counting outcomes of measured computational basis states) or the eigenvalues of the observable to determine all possible outcomes, populates a dictionary with all values set to 0, and then updates it with counts where relevant.

The variable all_outcomes is saved on the MeasurementProcess object that is returned by the counts function, as MeasurementProcess.return_type.all_outcomes. This variable can then be accessed by _samples_to_counts as obs.return_type.all_outcomes when creating the counts dictionary.
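The population logic described above can be sketched as follows. This is a minimal standalone illustration with hypothetical names (a samples_to_counts helper that takes plain bitstrings), not the actual PennyLane implementation:

```python
from collections import Counter
from itertools import product

def samples_to_counts(samples, num_wires, all_outcomes=False):
    """Turn raw bitstring samples, e.g. ["00", "11", "00"], into a counts dict."""
    counts = dict(Counter(samples))
    if all_outcomes:
        # enumerate every computational basis state on the measured wires,
        # start all counts at 0, then overwrite with the observed counts
        full = {"".join(bits): 0 for bits in product("01", repeat=num_wires)}
        full.update(counts)
        return full
    return counts
```

With all_outcomes=True, unobserved basis states such as "01" then appear in the dictionary with value 0 instead of raising a KeyError on lookup.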

Benefits:
If all_outcomes=True, querying the dictionary for a possible outcome that was not observed does not raise a KeyError, and printing the dictionary shows all possible outcomes. A KeyError is raised only if the user tries to access results for an outcome that does not match the possibilities for the measured system.

Possible Drawbacks:
None I can think of.

Related GitHub Issues:
(#2864)


@lillian542 lillian542 marked this pull request as draft August 3, 2022 19:21
@lillian542 (Contributor Author)

Most of the tests pass, but there is an error when running

res = qml.execute_new(tapes=[qnode.tape], device=dev, gradient_fn=None)

on line 904 of test_new_return_types.py that I'm not sure how to handle.

I've gotten myself down a bit of a rabbit hole trying to sort out what qml.execute is doing, how qnode.tape works, and what is being passed to the function as meas1 and meas2 when the tests are run. Another set of eyes and/or a rundown of what this function is doing would be a big help.

@antalszava (Contributor)

Hi @lillian542, thanks again!

For building the docs: we have a continuous integration check for building the docs, it's at the bottom of this PR. See a mention of it in the CI checks section of our docs.

@antalszava (Contributor)

Most of the tests pass, but there is an error when running res = qml.execute_new(tapes=[qnode.tape], device=dev, gradient_fn=None) on line 904 of test_new_return_types.py that I'm not sure how to handle.
I've gotten myself down a bit of a rabbit hole trying to sort out what qml.execute is doing, how qnode.tape works, and what is being passed to the function as meas1 and meas2 when the tests are run. Another set of eyes and/or a rundown of what this function is doing would be a big help.

This test file is related to a major change that is experimental for now. If you can't work it out, the pytest.mark.xfail marker can be put on the failing test cases. We're actively working on this feature and can resolve the issues if this is the only blocker. 👍

I'll leave a more detailed review on the PR today, thank you! 🙂

@lillian542 (Contributor Author)

If you can't work it out, the pytest.mark.xfail marker can be put on the failing test cases.

Alright, that's what I've done for now. I'm not quite sure what is going on: the function seems to be passed an operator where observable.name is ['PauliZ', 'PauliZ'], and calling observable.compute_eigvals() raises an EigvalsUndefinedError. That's as far as I've gotten.
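For reference, putting the marker on a failing case looks roughly like this (the test name and reason string here are illustrative, not the actual names in test_new_return_types.py):

```python
import pytest

# known failure under the experimental new return types;
# remove the marker once compute_eigvals works for this observable
@pytest.mark.xfail(reason="compute_eigvals raises EigvalsUndefinedError")
def test_counts_all_outcomes_new_return_types():
    raise NotImplementedError
```

With the marker applied, pytest reports the case as xfail instead of failing the suite, so CI stays green while the underlying issue is resolved.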

I'll leave a more detailed review on the PR today, thank you! 🙂

Great, looking forward to hearing your feedback!

@lillian542 lillian542 marked this pull request as ready for review August 4, 2022 17:34
@antalszava antalszava (Contributor) left a comment

Hi @lillian542, thank you! Overall looking great, I had some questions.

While reading the PR, a major thing came to mind that was not mentioned in the issue. Looking at the concrete implementation and considering the cost of obtaining eigenvalues, it might be best to preserve the previous behaviour and add zero count outcomes as an optional feature. A keyword argument could be added to qml.counts, e.g., all_outcomes, with all_outcomes=False as the default. Since this was not originally mentioned in the issue, we can go forward with this PR as is and the team can implement the kwarg logic (let me know).

Reasons for having a sparse dict by default (all_outcomes=False):

  • qml.eigvals can be costly to use for sizable Hamiltonians;
  • For many qubits, the length of the output for qml.counts grows significantly.

That said, having zero count outcomes (all_outcomes=True) is still great to have when users would like to see all outcomes in the dictionary (e.g., users who are learning quantum computing).

@AlbertMitjans AlbertMitjans (Contributor) left a comment

Nice work!! 💯 I will review the tests as soon as all tests pass.

In the last commits to master there were some additions that require changes in this PR.

@lillian542 (Contributor Author)

@AlbertMitjans Sorry for the delay, it's been a busy few days. I think everything is ready to go now.

@AlbertMitjans AlbertMitjans (Contributor) left a comment

Great job! 💯

I'll approve it once these small changes are applied!

@lillian542 (Contributor Author)

Changes done! @AlbertMitjans

@AlbertMitjans AlbertMitjans (Contributor) left a comment

Some more changes.

Please next time make sure that all tests pass.

@antalszava (Contributor)

Hi @lillian542, just checking in: how is it going here? Could we help in some way to push this toward merging?

@lillian542 (Contributor Author)

@antalszava @AlbertMitjans I'm sorry for my slow response; you've caught me at an extremely busy time. I've made the most recently suggested changes.

I'm not sure how to set the checks to run to make sure there are no further issues that need to be addressed. Can I do that on my end, and if so, how?

I'll make sure to prioritise responding quickly so we can get this wrapped up.

@antalszava (Contributor)

@antalszava @AlbertMitjans I'm sorry for my slow response; you've caught me at an extremely busy time. I've made the most recently suggested changes.

I'm not sure how to set the checks to run to make sure there are no further issues that need to be addressed. Can I do that on my end, and if so, how?

No worries there! Let us know if we can help out even with development. :)

I've just merged master in, and it seems that only the formatting check is failing.

Otherwise, the tests can be run locally via pytest tests, see more here: https://docs.pennylane.ai/en/stable/development/guide/tests.html.
And see our documentation section on running the CI checks: https://docs.pennylane.ai/en/stable/development/guide/pullrequests.html#continuous-integration-checks

Let us know if we can help further!

@AlbertMitjans (Contributor)

Hi @lillian542, as mentioned in the GitHub Actions check, the file pennylane/tape/tape.py needs reformatting. To do so, please run the following (installing black first if needed: pip install black):

black -l 100 pennylane/

Then commit the changes.

Hope this helps. :)

@AlbertMitjans AlbertMitjans (Contributor) left a comment

Good job!! 🚀

@AlbertMitjans AlbertMitjans merged commit 54f76a9 into PennyLaneAI:master Sep 8, 2022