tickstore query slowly #69
For a performance comparison with "pure" pymongo, see:
We should use it to store many ticks in one record via a pandas DataFrame, right?
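A minimal sketch of that idea, using only pandas: collapse a stream of per-tick dicts into a single DatetimeIndex DataFrame and write it in one call. The `library` handle and the `'EQUITY.TICK'` symbol name are hypothetical, so the write is left commented out.

```python
import pandas as pd

# Ticks as they might arrive from a feed, one dict per tick.
raw_ticks = [
    {"time": "2016-01-04 09:30:00.000", "price": 100.00, "size": 200},
    {"time": "2016-01-04 09:30:00.010", "price": 100.01, "size": 150},
    {"time": "2016-01-04 09:30:00.020", "price": 100.00, "size": 300},
]

# Collapse them into one DatetimeIndex DataFrame: a single "record"
# holding many ticks, rather than one document per tick.
ticks = pd.DataFrame(raw_ticks)
ticks["time"] = pd.to_datetime(ticks["time"])
ticks = ticks.set_index("time")

# Hypothetical tickstore write, one call for the whole batch:
# library.write('EQUITY.TICK', ticks)
print(ticks.shape)  # (3, 2)
```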
Let's use the same file for benchmarking: https://drive.google.com/file/d/0B8iUtWjZOTqla3ZZTC1FS0pkZXc/view?usp=sharing (see also pydata/pandas-datareader#153). I wonder if they (the Man AHL Arctic dev team) shouldn't use Monary instead of pymongo. Read this: https://pypi.python.org/pypi/Monary/0.4.0.post2 and see https://bitbucket.org/djcbeach/monary/issues/19/use-pandas-series-dataframe-and-panel-with
I think there's quite a lot of overlap between what Monary does and Arctic. Monary makes it fast to marshal primitive types (numpy ints, floats, etc.) into and out of MongoDB. We do something similar, except we do compression and batching on the client side. A lot of the win (in network and disk I/O terms) comes from financial data being highly compressible. Because we batch in the client, we end up performing few pymongo operations relative to the number of ticks/rows. For profiling, perhaps try:
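The snippet that originally followed isn't preserved in this capture. As a stand-in, here is a generic way to profile a read path with Python's built-in cProfile; `read_ticks` is a placeholder for whatever query you are actually timing.

```python
import cProfile
import io
import pstats

def read_ticks():
    # Placeholder for the real query, e.g. library.read('EQUITY.TICK', ...)
    return sum(range(1_000_000))

profiler = cProfile.Profile()
profiler.enable()
read_ticks()
profiler.disable()

# Dump the top 10 entries by cumulative time to see where the call spends it.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(10)
print("function calls" in stream.getvalue())  # True
```

If the time turns out to be dominated by many small pymongo round trips rather than decompression or deserialization, that points back at the batching issue discussed above.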
Thanks for your comments. I made a mistake: I should not have inserted single rows into Arctic, but written them in batches instead. Happy new year. XD
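To make the contrast concrete, here is a sketch of the anti-pattern versus the batched pattern. The Arctic calls are commented out since they need a running MongoDB, and `library` / `'EQUITY.TICK'` are hypothetical names; the access pattern is what matters.

```python
import numpy as np
import pandas as pd

n = 10_000
index = pd.date_range("2016-01-04", periods=n, freq="s")
df = pd.DataFrame(
    {"price": np.random.rand(n), "size": np.random.randint(1, 500, n)},
    index=index,
)

# Slow anti-pattern: one write (one server round trip) per tick.
# for ts, row in df.iterrows():
#     library.write('EQUITY.TICK', row.to_frame().T)  # ~10,000 calls

# Fast pattern: hand over the whole batch; Arctic compresses and chunks
# it client-side, so the server sees only a handful of operations.
# library.write('EQUITY.TICK', df)                    # one call

print(df.shape)  # (10000, 2)
```

The batched form is what lets the client-side compression mentioned above pay off: compressing 10,000 rows at once yields far better ratios and far fewer pymongo operations than 10,000 tiny writes.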
Arctic claims it can query millions of rows per second per client, but when I tried it with our team I found it only managed thousands of rows per second. Here is the code. Has anyone hit the same problem, or am I using it the wrong way?
Thanks.