
TOML files update #826

Merged
merged 12 commits into master from eochoa/2024-07-30/skip_initial_state_prep on Aug 2, 2024
Conversation

erick-xanadu
Contributor

@erick-xanadu erick-xanadu commented Jul 30, 2024

Context: We would like Catalyst to optimize the first instance of qml.StatePrep by setting the state vector directly in PennyLane lightning.

Description of the Change: To achieve this, we add a flag to the TOML files that denotes the value of skip_initial_state_prep used during decomposition. This flag already exists in PennyLane and skips the initial state preparation.

There are other ways to achieve this, but this is the least invasive. Some alternatives that were discussed:

  1. Adding StatePrep to the list of supported operations: this is not a good idea, as StatePrep would then never be decomposed.
  2. Creating a new operation and mapping StatePrep directly onto it, only in the context of Catalyst. This would require adding the new qml Operation to the device's list of supported operations, even though it would only ever appear when using Catalyst.

Instead, the change creates no new qml Operation, only a JAXPR/MLIR operation: StatePrep is bound to this JAXPR operation and lowered through MLIR. So, no new qml operations.

Benefits: Optimization

Possible Drawbacks: More flags. No one likes flags.

Related GitHub Issues: Will be followed by pull request PennyLaneAI/catalyst#955 in Catalyst.

[sc-69069]

@erick-xanadu erick-xanadu requested a review from dime10 July 30, 2024 19:17
Contributor

Hello. You may have forgotten to update the changelog!
Please edit .github/CHANGELOG.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.
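For instance, a minimal entry covering this change (wording and link format are illustrative, not the actual changelog text) might look like:

```markdown
* Add a TOML flag denoting whether `skip_initial_state_prep` should be used
  during decomposition, letting Catalyst set the state vector directly for
  the first `qml.StatePrep`.
  [(#826)](https://github.com/PennyLaneAI/pennylane-lightning/pull/826)
```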

@maliasadi maliasadi self-requested a review July 30, 2024 19:17
@erick-xanadu
Contributor Author

@dime10 , please let me know if you want a different flag name. I think this is appropriate.

@AmintorDusko
Contributor

Hey @erick-xanadu, could you please update your branch? (merge master)

Contributor

@dime10 dime10 left a comment

Thanks Erick!

@dime10 dime10 requested a review from maliasadi August 1, 2024 14:35

codecov bot commented Aug 2, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 86.52%. Comparing base (fb4e6c7) to head (31048f2).
Report is 84 commits behind head on master.

❗ There is a different number of reports uploaded between BASE (fb4e6c7) and HEAD (31048f2): HEAD has 3 fewer uploads than BASE (7 vs. 10).
Additional details and impacted files
@@             Coverage Diff             @@
##           master     #826       +/-   ##
===========================================
- Coverage   96.94%   86.52%   -10.43%     
===========================================
  Files         113       17       -96     
  Lines       18012     1914    -16098     
===========================================
- Hits        17462     1656    -15806     
+ Misses        550      258      -292     


@erick-xanadu erick-xanadu requested a review from a team August 2, 2024 14:46
Contributor

@vincentmr vincentmr left a comment

LGTM, but please update the change log.

@erick-xanadu
Contributor Author

@vincentmr , done!

@erick-xanadu erick-xanadu requested a review from vincentmr August 2, 2024 14:51
Contributor

@vincentmr vincentmr left a comment

LGTM, cheers @erick-xanadu .

@erick-xanadu erick-xanadu merged commit cfc3006 into master Aug 2, 2024
70 of 71 checks passed
@erick-xanadu erick-xanadu deleted the eochoa/2024-07-30/skip_initial_state_prep branch August 2, 2024 15:43
multiphaseCFD added a commit that referenced this pull request Aug 2, 2024
multiphaseCFD pushed a commit that referenced this pull request Aug 5, 2024