Iteratively open and close different Kedro sessions from code #4087
Comments
Hey @jround1, would it be possible to give some more context or a more concrete example so we can understand what you are trying to do with your testing? You could also bring the issue up on the questions channel on the Kedro Slack: https://slack.kedro.org/
@lrcouto sure thing! Let me try again...
And pipeline_registry.py only registers the pipelines defined for the specific project.
This creates a leaner session/context that is easier to manage between projects and in the Kubeflow integration.
What I would like to do: iterate over multiple Kedro sessions and their contexts from code (here, pytest) and run checks against each project.
The problem: when I create a second session from pytest, it just returns the first session again.
Let me know if you have more questions or if it would be more appropriate in another forum. Thanks!
Hi @jround1, your setup is quite complex, so I'm not entirely sure I understand it all 100%. I think the problem here is that a session is created at the project level, not per pipeline.
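If the root cause is that project settings are configured once per process, then re-bootstrapping before each session might behave as intended. The following is only a sketch under that assumption; the project names and paths are illustrative, not from the original report, and it assumes two Kedro projects exist at those locations:

```python
# Sketch only: assumes two sibling Kedro projects exist at these paths.
from pathlib import Path

from kedro.framework.session import KedroSession
from kedro.framework.startup import bootstrap_project

for project in ["example_project1", "example_project2"]:  # illustrative names
    # bootstrap_project re-points kedro.framework.project at this project's
    # settings before the session is created
    metadata = bootstrap_project(Path(project))
    with KedroSession.create(project_path=metadata.project_path) as session:
        context = session.load_context()
        # inspect context.catalog, context.params, ... per project
        print(project, sorted(context.catalog.list())[:5])
```

Whether this sidesteps the caching described in the report would need to be verified against 0.18.10.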
Thanks @merelcht
Moving this to discussions.
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
Description
From code, I would like to iterate over multiple Kedro sessions and their contexts in order to test each project. I am able to iterate over the different projects from the CLI, but via pytest the second created session just returns the first. I understand there may be magic for notebooks with %reload_kedro.
We have different Kedro sessions/contexts because we only load the parts of the data catalog and parameters that we need for the pipeline we are running (patterns in settings.py), which eases integration with Kubeflow. Then we want to test that there are no undefined node inputs/outputs and that all catalog entries are being used, project by project.
Steps to Reproduce
Below is an example of the fixture and test. To save space, I won't copy the second fixture and test (session2 & test_project_catalog2), which use example_project2 and example_pipeline2:
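The original fixture code did not survive extraction, so below is a minimal sketch of what such a fixture/test pair might look like. The project path, the use of bootstrap_project, and the catalog assertion are illustrative assumptions, not the reporter's actual code:

```python
# Hypothetical reconstruction -- the reporter's actual fixture was not captured.
# Project paths and assertions are illustrative.
from pathlib import Path

import pytest
from kedro.framework.session import KedroSession
from kedro.framework.startup import bootstrap_project


@pytest.fixture
def session1():
    project_path = Path("example_project1")  # assumed location of project 1
    bootstrap_project(project_path)          # configures kedro.framework.project
    with KedroSession.create(project_path=project_path) as session:
        yield session


def test_project_catalog1(session1):
    context = session1.load_context()
    catalog_entries = set(context.catalog.list())
    # In practice this would check that every pipeline input/output has a
    # catalog entry and that every entry is used -- the report's actual
    # assertions were not captured.
    assert catalog_entries
```

Running this requires a real Kedro project on disk, so it cannot execute standalone; the second fixture/test would follow the same shape with example_project2.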
from settings.py:
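The settings.py snippet was likewise lost in extraction. As a sketch of the kind of pattern-based config loading the description refers to, a fragment like the following could restrict each project to its own catalog and parameter files; the pattern strings and pipeline name are invented for illustration:

```python
# Hypothetical settings.py fragment -- pattern strings are illustrative only.
from kedro.config import TemplatedConfigLoader

CONFIG_LOADER_CLASS = TemplatedConfigLoader
CONFIG_LOADER_ARGS = {
    "config_patterns": {
        # Load only the catalog/parameter files for this project's pipeline
        "catalog": ["catalog_example_pipeline1*"],
        "parameters": ["parameters_example_pipeline1*"],
    }
}
```

This is a config fragment that only takes effect inside a Kedro project, so it is not runnable on its own.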
Expected Result
session1 and session2 should be different. session2 is correct if I remove session1 or call it first.
Actual Result
session2 is just session1 again.
Your Environment
Kedro version: 0.18.10