Source Stripe: sync freeze with 80k records #6417
Comments
I've run the connector locally and got the following output: logs-24-2.txt. I think it may be a destination issue. Working on debugging and testing to pinpoint the exact source of the issue.
@sherifnada so the connector fails (see extended logs in the files attached above).
This failure happens after a different number of BQ tables is created each time. It seems the issue is within the Java core, and I'm not sure how to fix it. I would also try to run normalization locally to make sure it's not the issue.
I've run the normalization process separately. So in total: the source, destination, and normalization modules all work fine. It seems that the issue is with the Java core, and I'm not sure how to fix that. @sherifnada, perhaps you know who may help me with that?
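For reference, one way to run the Python source on its own is via the Airbyte CDK entrypoint. A minimal sketch follows; the config and catalog paths are placeholders, and it assumes a checkout of the source-stripe connector with airbyte-cdk installed:

```python
# run_source_stripe.py -- minimal sketch; assumes the airbyte-cdk package and the
# source_stripe module from the connector checkout are importable.
import sys

from airbyte_cdk.entrypoint import launch
from source_stripe import SourceStripe

if __name__ == "__main__":
    # Invoke as, e.g.:
    #   python run_source_stripe.py read --config secrets/config.json \
    #       --catalog integration_tests/configured_catalog.json
    launch(SourceStripe(), sys.argv[1:])
```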
To reproduce the bug #6417 (comment), you need to set up the Stripe -> BigQuery connection using the credentials from our LastPass account. Then just run the sync and wait for some time. It will fail, but each time it fails at a different point in the sync.
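If it helps, the sync can also be kicked off outside the UI. A rough sketch against a local Airbyte deployment follows (the base URL and connection id are assumptions for illustration, using the OSS config API endpoint):

```python
import requests

# Assumed local deployment; adjust the base URL and connection id to your setup.
AIRBYTE_API = "http://localhost:8000/api/v1"
CONNECTION_ID = "<stripe-to-bigquery-connection-uuid>"

# Kick off a sync for the connection and print the created job,
# which can then be watched in the UI until it fails.
resp = requests.post(f"{AIRBYTE_API}/connections/sync", json={"connectionId": CONNECTION_ID})
resp.raise_for_status()
print(resp.json())
```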
@htrueman thanks for the great summary. I'll pass this on to the Airbyte team.
@htrueman I'm not able to reproduce this. I set up Stripe and BQ in prod and all my syncs have been successful.
Were you seeing this on all syncs?
I'd also be curious to know if you're hitting max RAM, CPU, or disk space usage during your tests.
I did all the same as described above and still get the same error: logs-28-0.txt. IDK, perhaps it's a local issue on my side or something.
@htrueman were you running this locally?
Yes. I don't have access to the cloud to test it there.
Cool, can you give me details of how you set it up (Docker or Kube; if Kube, what kind of Kube cluster), and what kind of resources the cluster had? I might be able to reproduce it this way. I was confused since the original issue description mentions this was run in GCP.
Well, I've got a pretty simple setup:
@davinchia @htrueman are there further steps for this ticket?
This looks like it's on the connector roadmap, so I think the only thing left is to wait for implementation.
We have improved performance for streams with substreams: invoice_line_items, subscription_items, bank_accounts.
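For context on why those substreams dominate sync time: each child record has to be listed per parent object, so the number of API round trips grows with the number of parents. A rough illustration of that pattern with the official stripe Python library (the env-var key handling is just an assumption for the sketch, not how the connector reads credentials):

```python
import os

import stripe

stripe.api_key = os.environ["STRIPE_API_KEY"]  # assumption: key supplied via env var

total_lines = 0
# Parent stream: paginate through every invoice, 100 per page.
for invoice in stripe.Invoice.list(limit=100).auto_paging_iter():
    # Substream: line items are paginated per invoice, so tens of thousands of
    # invoices mean tens of thousands of extra requests on top of the parent stream.
    for line in invoice.lines.auto_paging_iter():
        total_lines += 1

print(f"{total_lines} invoice line items fetched")
```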
Environment
Current Behavior
Note that for one user the Stripe connector can't finish the sync, or it took 17h to sync 80k records. Another user commented that they are able to sync 500k records in 4h. I opened this issue to record this and to investigate further whether there is any option in Stripe that we need to tell users to enable.
Slack convo.
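For scale, the two reports above work out to very different throughput; rough arithmetic:

```python
# Back-of-the-envelope throughput from the two reports above.
slow = 80_000 / 17    # ~4,700 records/hour for the affected user
fast = 500_000 / 4    # 125,000 records/hour for the other user
print(f"{slow:,.0f} vs {fast:,.0f} records/hour (~{fast / slow:.0f}x gap)")
```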
Expected Behavior
Tell us what should happen.
Logs
If applicable, please upload the logs from the failing operation.
For sync jobs, you can download the full logs from the UI by going to the sync attempt page and
clicking the download logs button at the top right of the logs display window.
LOG
Steps to Reproduce
Are you willing to submit a PR?
Remove this with your answer.