[IMPROVE] Can't complete openbb.build() when using Spark due to failure in renaming temporary cache file #6694
Comments
I think this is likely a limitation related to the specific host service. Multiprocessing can turn into a nightmare pretty fast. It might help if you can disable that in the host service. Otherwise, dynamic command execution should get around it.

Ruff has apparently resolved the referenced issue, but this does not appear to be the same error. "No such file or directory" suggests it is a file system error. Even if Ruff fails, the static assets will still have been generated and built.

The error message, "This portal is not running", implies that async event loops are not being handled somewhere in the pipeline. This can be caused by $PATH issues that introduce system packages into the environment instead of isolating the environment completely and only calling packages from within it. This is often a problem with an incorrectly configured Anaconda Navigator installation. See this for a case comparable to Databricks.

Potential Solution

In situations like this, it is better to run functions using dynamic command execution. For Streamlit Cloud apps, you have to do it this way because they do not provide access to the file system site-packages where the static assets are stored.

This is async, and you need to manage it carefully throughout the entire pipeline. You might want to add this to the code after the import blocks:

import nest_asyncio
nest_asyncio.apply()
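For context, a minimal sketch of how the environment-isolation check and the nest_asyncio patch suggested above could be combined in a single notebook cell; nest_asyncio is applied after the import block as described, openbb.build() is taken from the issue title, and the exact checks shown are illustrative rather than part of the original reply:

import shutil
import sys

import nest_asyncio
import openbb

# Sanity-check environment isolation: the interpreter and ruff should both resolve
# to the same isolated environment, not to system-level installs leaking in via $PATH.
print(sys.executable)
print(shutil.which("ruff"))

# Patch the running event loop so asyncio calls made during the build can run inside
# the notebook's already-running loop.
nest_asyncio.apply()

# Rebuild the static Python interface; in the reported setup this is the call that
# fails while renaming the temporary cache file.
openbb.build()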
Unfortunately this doesn't work; it results in the same error.
This works, thanks a lot! Am able to use obb to pull data from the provider.
This issue was moved to a discussion. You can continue the conversation there.
Describe the bug

When installing the multpl openbb extension and building the Python interface as recommended by the docs, I encounter an error when executing the build command because the temporary cache file cannot be renamed, and subsequently I encounter an error pulling data from the multpl provider.

I only encounter this issue when using a Spark cluster (specifically in Databricks); when I run the same code on my local desktop there are no issues, the build succeeds, and I can pull data as usual from the multpl provider. I believe the failure on a Spark cluster is due to multiple processes running simultaneously with multiple ruff invocations, as described in this issue.

I am wondering whether renaming the cache file is critical to the build step and can be ignored if it fails, so the build can still succeed and I can continue using obb to pull data from the given provider.

To Reproduce
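The reproduction steps themselves were not captured in this transcript. As a rough sketch of the setup described above, run from a Databricks notebook cell; the openbb-multpl package name is an assumption based on the usual openbb-<provider> naming, and none of this is copied from the original report:

# Install the OpenBB Platform plus the multpl extension (package name assumed).
# %pip install openbb openbb-multpl

import openbb

# Rebuild the static Python interface so the new extension is exposed on `obb`.
# On the Databricks/Spark cluster this is the step that fails while renaming the
# temporary ruff cache file ("No such file or directory"); on a local desktop it
# completes normally.
openbb.build()

# After a successful build, data can be pulled as usual through the `obb` interface.
from openbb import obb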
Screenshots
Two screenshots were attached to the original issue (the hosted image links have expired and are not reproduced).
Desktop (see more details here):