Add a callback to get_parameter_data to follow data loading #4688
Conversation
Hi all, I am writing to keep the thread alive and see if anyone has comments on it. |
Hi all, I am back again, hoping to see someone on the thread. |
Sorry for the delay. I have been very busy with other things. I left some comments inline.
In general I am curious about the types of datasets that you are loading and the slowdown you are seeing when loading them. Ideally, loading a dataset should be fast enough that a progress bar is not needed in the first place. Can you say something about the typical number of rows and parameters in your datasets?
test.zip |
Thanks, this is starting to look ready. Could you resolve the conflicts either by merging or rebasing against master, revert the changes to `many` and `many_many`, and write a small changelog note? Then it should be ready to go. |
Moved the appending of data in the `get_parameter_tree_values` function.
All should be done. When I benchmarked the merged code I saw that:
|
@edumur Looks like some mypy issues have crept in. Will you be able to have a look? |
Codecov Report
@@ Coverage Diff @@
## master #4688 +/- ##
==========================================
+ Coverage 68.27% 68.33% +0.06%
==========================================
Files 339 339
Lines 32087 32123 +36
==========================================
+ Hits 21906 21952 +46
+ Misses 10181 10171 -10 |
I had to add |
The types look good. I left a few comments inline. There is now a test.db file which should not be committed. |
iteration is directly taken from config
Hi all, All things seem correct (I think). |
Darker runs the black formatter on only the part of the code that you have changed. We do this because we would like to move to a more consistent formatting but not rewrite all our code in one go, losing history. You can run these lints automatically by installing the pre-commit hooks defined in https://github.com/QCoDeS/Qcodes/blob/master/.pre-commit-config.yaml using the pre-commit tool: https://pre-commit.com/ For now don't worry about it and I will run it when merging. |
* Handle small databases with fewer than 100 rows
* Handle a number of rows to download that is not commensurable with the progress percentage
* Adapt tests for small databases with fewer than 100 rows
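The chunking logic these commits describe can be sketched as below. The function name `chunk_boundaries` and its signature are illustrative assumptions, not the actual QCoDeS implementation; it only demonstrates how a table can be split into roughly percentage-sized chunks while handling small tables and non-commensurable row counts:

```python
def chunk_boundaries(nb_rows: int, step_percent: int = 10) -> list[tuple[int, int]]:
    """Split nb_rows rows into (offset, limit) pairs so that each fetched
    chunk corresponds to roughly step_percent % of progress.

    Hypothetical helper for illustration only.
    """
    # For very small tables the ideal chunk size would round down to
    # zero; clamp it to at least one row per request.
    chunk = max(1, nb_rows * step_percent // 100)
    boundaries = []
    offset = 0
    while offset < nb_rows:
        # The final chunk shrinks to absorb the remainder when nb_rows
        # is not an exact multiple of the chunk size.
        limit = min(chunk, nb_rows - offset)
        boundaries.append((offset, limit))
        offset += limit
    return boundaries
```

For example, 1000 rows at 10 % steps yields ten chunks of 100 rows, while 103 rows yields ten chunks of 10 rows plus one final chunk of 3.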
I really appreciate you spending time checking this. I tested it with various databases and various percentages, and so far it has worked 🤞. |
Hi all,
In the effort to use QCoDeS as the data loader of the pyplotter, I added a callback to `get_parameter_data`. This is useful to track the progress of the data download. Since sqlite3 does not allow tracking the progress of data loading, we compute how many sqlite requests correspond to a certain percentage of progress, which is dictated by a config parameter. We then perform `x` SQL requests instead of one, running the callback after each. When used, the whole process adds an overhead of ~13% on my laptop, which is not negligible.
However, I would argue that keeping track of the download progress for a GUI is precious and is worth the overhead.
To test the download time
I kept the change minimal hoping other people will have comments.
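The mechanism described above can be sketched as follows. The function `load_with_progress` and the `callback(percent)` signature are illustrative assumptions rather than the actual QCoDeS API; in the real change the number of requests is derived from a config parameter:

```python
import sqlite3
from typing import Callable


def load_with_progress(
    conn: sqlite3.Connection,
    table: str,
    nb_requests: int,
    callback: Callable[[float], None],
) -> list[tuple]:
    """Fetch all rows of `table` in `nb_requests` chunks, invoking
    `callback` with the completed percentage after each chunk.

    Hypothetical sketch of the progress-callback idea, not QCoDeS code.
    """
    (nb_rows,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    # Ceiling division so the chunks cover every row.
    chunk = max(1, -(-nb_rows // nb_requests))
    rows: list[tuple] = []
    offset = 0
    while offset < nb_rows:
        rows.extend(
            conn.execute(
                f"SELECT * FROM {table} LIMIT ? OFFSET ?", (chunk, offset)
            ).fetchall()
        )
        offset = min(offset + chunk, nb_rows)
        # Report progress after each of the x requests.
        callback(100 * offset / nb_rows)
    return rows


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (x REAL)")
conn.executemany("INSERT INTO data VALUES (?)", [(float(i),) for i in range(1000)])
progress: list[float] = []
rows = load_with_progress(conn, "data", 10, progress.append)
```

The overhead the description mentions comes from issuing many small `SELECT` statements instead of a single one; with 10 requests on 1000 rows, the callback fires at 10 %, 20 %, ..., 100 %.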