
Direct Support to probabilities, expectation values and variances in PL-Lightning #185

Merged: 23 commits merged into master from feature/measures on Dec 15, 2021

Conversation

AmintorDusko
Contributor

@AmintorDusko AmintorDusko commented Dec 3, 2021


Context:
PennyLane-Lightning doesn't offer direct support for calculating probabilities, expectation values, and variances.
Description of the Change:
This PR introduces a C++ class template Measures that implements such calculations.
Calculations can be performed using the already defined gates, or with any operator provided as a matrix.
Benefits:
A direct implementation in C++ expands the optimization options, and this class makes it possible to perform such calculations for any operator provided in matrix form.
Possible Drawbacks:
None that I'm aware of.
Related GitHub Issues:
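The description above can be made concrete with a minimal sketch of such direct measurement calculations in C++. The function names and signatures here are illustrative assumptions, not the actual Measures API; a variance then follows from two expectation calls via Var(A) = ⟨A²⟩ − ⟨A⟩².

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Probabilities directly from a state vector: |amplitude|^2 per basis state.
template <typename T>
auto probabilities(const std::vector<std::complex<T>> &state)
    -> std::vector<T> {
    std::vector<T> probs(state.size());
    for (std::size_t i = 0; i < state.size(); i++) {
        probs[i] = std::norm(state[i]);
    }
    return probs;
}

// Expectation value <psi|A|psi> for an operator A given as a dense
// row-major matrix, matching the "any operator provided as a matrix" case.
template <typename T>
auto expectation_value(const std::vector<std::complex<T>> &matrix,
                       const std::vector<std::complex<T>> &state) -> T {
    const std::size_t dim = state.size();
    std::complex<T> expval{0, 0};
    for (std::size_t i = 0; i < dim; i++) {
        std::complex<T> row_sum{0, 0};
        for (std::size_t j = 0; j < dim; j++) {
            row_sum += matrix[i * dim + j] * state[j];
        }
        expval += std::conj(state[i]) * row_sum;
    }
    return expval.real(); // real-valued for Hermitian operators
}
```

For example, the Pauli-Z matrix applied to the |0⟩ state gives expectation value 1 and probabilities (1, 0).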

@github-actions
Contributor

github-actions bot commented Dec 3, 2021

Hello. You may have forgotten to update the changelog!
Please edit .github/CHANGELOG.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

@codecov

codecov bot commented Dec 3, 2021

Codecov Report

Merging #185 (5c030be) into master (5b65f2a) will not change coverage.
The diff coverage is n/a.

❗ Current head 5c030be differs from pull request most recent head 701ea6c. Consider uploading reports for the commit 701ea6c to get more accurate results.

@@           Coverage Diff           @@
##           master     #185   +/-   ##
=======================================
  Coverage   99.64%   99.64%           
=======================================
  Files           4        4           
  Lines         278      278           
=======================================
  Hits          277      277           
  Misses          1        1           


@github-actions
Contributor

github-actions bot commented Dec 3, 2021

Test Report (C++) on Ubuntu

       1 files  ±  0         1 suites  ±0   0s ⏱️ ±0s
   378 tests +10     378 ✔️ +10  0 💤 ±0  0 ±0 
2 233 runs  +28  2 233 ✔️ +28  0 💤 ±0  0 ±0 

Results for commit 701ea6c. ± Comparison against base commit 5b65f2a.

♻️ This comment has been updated with latest results.

Member

@mlxd mlxd left a comment


Thanks for putting this together @AmintorDusko
I have some comments and questions:

@AmintorDusko
Contributor Author

I will work on every request and suggestion. Thanks for your feedback @mlxd .

@AmintorDusko AmintorDusko requested a review from mlxd December 6, 2021 22:22
@AmintorDusko
Contributor Author

Please let me know your thoughts.

@AmintorDusko
Contributor Author

Hi @mlxd. Following our discussion, I updated the probabilities calculations.
Please let me know your thoughts.

Member

@mlxd mlxd left a comment


Thanks for this @AmintorDusko.
I have left some comments and queries.

Comment on lines 980 to 990
template <typename T>
auto transpose_state_tensor(const std::vector<T> &tensor,
                            const std::vector<size_t> &new_axes)
    -> std::vector<T> {
    std::vector<T> transposed_tensor(tensor.size());
    for (size_t ind = 0; ind < tensor.size(); ind++) {
        transposed_tensor[ind] =
            tensor[transposed_state_index(ind, new_axes)];
    }
    return transposed_tensor;
}
Member


I worry this may be slow for large data sets. Can you run some numbers on this and see how it performs for various numbers of qubits?

Contributor Author


Yes. Sure.

Contributor Author

@AmintorDusko AmintorDusko Dec 10, 2021


I explored some alternative strategies for this transposition, and at least for now, this seems to be the most efficient. On the other hand, it doesn't seem to be too slow.
I checked the performance for 100 runs, up to 22 qubits.
I applied the transpose_state_tensor function to vectors with the size of the equivalent state vector (or tensor), which shows exactly the information we are interested in. For 22 qubits, we have a vector with 4,194,304 elements, and the mean time is 0.89 seconds.

[plot: mean transpose_state_tensor time vs. number of qubits, 100 runs]

A loop like that, with no data dependency, is low-hanging fruit for OpenMP parallelisation, but for the sizes presented here it would be overkill. If you think we should proceed with some parallelisation strategy, we could impose a threshold based on the number of qubits.
The code was compiled with GCC 11 using the -Ofast flag.
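The threshold-based OpenMP strategy mentioned above could be sketched as follows. This is not the PR's implementation: the threshold value is an assumption, and the index helper is passed in as a callable purely so the snippet is self-contained.

```cpp
#include <cstddef>
#include <vector>

// Sketch: the permutation loop parallelised with OpenMP, enabled only
// above a qubit-count threshold where thread start-up overhead pays off.
// `IndexFn` stands in for the existing transposed_state_index helper.
template <typename T, typename IndexFn>
auto transpose_state_tensor_omp(const std::vector<T> &tensor,
                                const std::vector<std::size_t> &new_axes,
                                IndexFn transposed_state_index,
                                std::size_t num_qubits,
                                std::size_t threshold = 20)
    -> std::vector<T> {
    std::vector<T> transposed(tensor.size());
    // The `if` clause keeps execution serial below the threshold, avoiding
    // thread overhead for small states. Compiled without -fopenmp, the
    // pragma is simply ignored and the loop is always serial.
#pragma omp parallel for if (num_qubits >= threshold)
    for (std::size_t ind = 0; ind < tensor.size(); ind++) {
        transposed[ind] = tensor[transposed_state_index(ind, new_axes)];
    }
    return transposed;
}
```

Since each iteration writes a distinct element and only reads the input, the loop is race-free without any synchronisation.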

Member


Thanks Amintor. I do think we should ensure this is fast for <=30 qubits (for large node usage). We can add parallelism as a next pass though.

Contributor Author


Hi @mlxd,
I extended the benchmark to 26 qubits (memory limits this range).
I optimised the code that determines the transposed index, and the function that transposes the tensor accordingly.
The best method so far is still based on transposing the tensor after calculating the probabilities.
Anyway, with this new approach to the tensor transposition, I observed a huge increase in speed for larger numbers of qubits. I'm comparing the two methods here:
[plot: transpose time of previous vs. new algorithm, by number of qubits]
I also plotted the ratio between the two times:
[plot: ratio of previous to new algorithm times, by number of qubits]

For 26 qubits, the previous algorithm takes 18.4 seconds, while the new one takes 2.3 seconds. The new code is approximately 8 times faster. This qubit state vector has 67,108,864 elements. I'm performing the calculation with double precision and compiling with the -Ofast flag.

We can observe an outlier at 12 qubits, where the previous algorithm seems to outperform the new one. The ratio here is 0.85, which may be due to the number of samples (100). Also, we are talking about small numbers, and these could be strongly affected by fluctuations.
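The index optimisation discussed above hinges on the fact that qubit axes map to bits of the flat state index, so permuting axes is a bit permutation. A self-contained sketch of one way this can be computed (the bit-ordering convention and implementation here are assumptions, not the exact code in this PR):

```cpp
#include <cstddef>
#include <vector>

// Compute the source index for position `ind` of the transposed tensor.
// Qubit 0 is taken as the most significant bit of the flat index; entry
// `axis` of `new_axes` names which original qubit lands on axis `axis`.
inline auto transposed_state_index(std::size_t ind,
                                   const std::vector<std::size_t> &new_axes)
    -> std::size_t {
    const std::size_t num_qubits = new_axes.size();
    std::size_t new_index = 0;
    for (std::size_t axis = 0; axis < num_qubits; axis++) {
        // Extract the bit contributed by original qubit new_axes[axis]
        // and place it at bit position `axis` of the transposed index.
        const std::size_t bit =
            (ind >> (num_qubits - 1 - new_axes[axis])) & 1U;
        new_index |= bit << (num_qubits - 1 - axis);
    }
    return new_index;
}
```

For two qubits with axes swapped ({1, 0}), index 1 (binary 01) maps to 2 (binary 10) and vice versa, while the identity permutation {0, 1} leaves every index unchanged.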

Contributor Author


In another run, the 12-qubit case performed better with the new algorithm. I think it is reasonable to assume that measuring times in these fast cases is especially prone to fluctuations. It seems that starting at 20 qubits would be a good choice.

@AmintorDusko
Contributor Author

Hi @mlxd. Let me know what you think.

@mlxd
Member

mlxd commented Dec 14, 2021

Thanks Amintor. I think I am happy to merge this in as the initial pass. We can add the Python bindings in a follow-up PR.
Also, I think it best to never use -Ofast and to favour -O2 or -O3 instead. We never intend to ship anything with -Ofast, so we should ensure all benchmark numbers are taken without it.

Member

@mlxd mlxd left a comment


After the change listed here, I think this is good to go.

Comment on lines 984 to 985
// transposed_tensor[ind] = tensor[transposed_state_index(ind,
// new_axes)];
Member


Suggested change
// transposed_tensor[ind] = tensor[transposed_state_index(ind,
// new_axes)];

@AmintorDusko
Contributor Author

Thanks Amintor. I think I am happy to merge this in as the initial pass. We can add the Python bindings in a follow-up PR. Also, I think it best to never use -Ofast and to favour -O2 or -O3 instead. We never intend to ship anything with -Ofast, so we should ensure all benchmark numbers are taken without it.

Makes sense. Thanks!

@AmintorDusko AmintorDusko merged commit ab533a0 into master Dec 15, 2021
@AmintorDusko AmintorDusko deleted the feature/measures branch December 15, 2021 17:47