cylc clean #3887
Note: @dpmatthews has suggested that we might not want to expose the "cycle aware housekeeping" via the CLI, which makes some sense as it would be nicer to configure housekeeping in the workflow than to have a dedicated housekeep task. So (4) might not be related to the CLI; however, we would expect it to share the same logic as the …
For part 1, …

Without checking the DB, …

It's an acceptable half-way-house until part (2) is implemented. Not good enough for …
From part 2:
What if the global config has changed in between …
If the install target has been changed for one platform then it will have been changed for all platforms that used the same install target, so knowing what it was before the change won't be any help.
A few quick examples of platforms config and clean locations:
I'm not sure what "clean up on a platform" means; if the …
```
[platforms]
    [[foo]]
        install target = a
        hosts = foo
    [[bar]]
        install target = a
        hosts = bar
```
Ah, ok, I mean "pick a host from that platform then invoke the clean script on that platform over SSH".
None whatsoever, which is the point. The important thing is that we only clean up on one of them rather than both.
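The deduplication described above can be sketched roughly like this (the names and data shapes are illustrative, not the real cylc-flow internals): build the `{install_target: [platform, ...]}` map mentioned in the issue, then pick a single platform per install target so the remote clean runs once per filesystem rather than once per platform.

```python
def platforms_by_install_target(platforms):
    """Group platform names by their install target.

    Returns a {install_target: [platform_name, ...]} lookup.
    """
    lookup = {}
    for name, config in platforms.items():
        # Convention: the install target defaults to the platform name.
        target = config.get('install target', name)
        lookup.setdefault(target, []).append(name)
    return lookup


# Hypothetical config mirroring the [platforms] example above:
platforms = {
    'foo': {'install target': 'a', 'hosts': ['foo']},
    'bar': {'install target': 'a', 'hosts': ['bar']},
}

lookup = platforms_by_install_target(platforms)
# Both platforms share install target 'a', so clean on just one of them:
chosen = {target: names[0] for target, names in lookup.items()}
print(lookup)   # {'a': ['foo', 'bar']}
print(chosen)   # {'a': 'foo'}
```

With that `chosen` map in hand, the remote clean would pick a host from the chosen platform and invoke the clean script there over SSH, as described above.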
Yep, stuff like …
From the team meeting today, it sounds like we'll need a …

(I just ran into a case where the workflow stopped responding, I did Ctrl+C, it said it shut down, but the contact file was left over so it looked like it was still running)
^ That one
What if the contact file is left over, but the workflow is actually stopped? Should I be using a more sophisticated method than …
No. It goes to the server the flow started on, queries the process ID and checks to ensure the command matches the one the flow was started with.
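A minimal, self-contained sketch of that check (the field names and helper are hypothetical; the real check reads the contact file and runs on the scheduler host over SSH): the workflow only counts as running if the recorded PID still exists *and* its command line matches the one recorded at start-up, so a recycled PID running something else doesn't fool us.

```python
def workflow_really_running(contact, process_table):
    """Decide whether a workflow with a leftover contact file is alive.

    ``contact`` holds the PID and command recorded at start-up;
    ``process_table`` maps live PIDs to their command lines (stand-in
    for querying the scheduler host).
    """
    pid = contact['WORKFLOW_PID']           # hypothetical field name
    recorded_cmd = contact['WORKFLOW_COMMAND']
    actual_cmd = process_table.get(pid)
    # Alive only if the PID exists and runs the same command:
    return actual_cmd == recorded_cmd


contact = {
    'WORKFLOW_PID': 4242,
    'WORKFLOW_COMMAND': 'cylc play myflow',
}

# PID gone: the contact file is stale and can be removed.
print(workflow_really_running(contact, {}))                          # False
# PID recycled by an unrelated process: still stale.
print(workflow_really_running(contact, {4242: 'sleep 100'}))         # False
# PID present and command matches: genuinely running, don't clean.
print(workflow_really_running(contact, {4242: 'cylc play myflow'}))  # True
```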
Ah wait, the bug I faced was #3994. I did Ctrl+C on the unresponsive workflow and it said … Anyway, rebasing the topic branch onto master solved this.
If the user has multiple run dirs in a dir under cylc-run, e.g. …

What should happen if they run …? I'm guessing it will have to iterate over the subdirectories to find run dirs, because the run dirs may use symlink dirs, and the database needs to be looked up for remote installs.
We could decide to support only run dirs, at least initially, and consider a follow-up to handle nesting.
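One way the nested-run-dir scan could look (a sketch only; treating a `.service` marker directory as "this is a run dir" is an assumption about the run-dir layout): walk the tree under `cylc-run` and stop descending as soon as a directory looks like a run dir, so nested run dirs at any depth are found without recursing into them.

```python
import tempfile
from pathlib import Path


def find_run_dirs(root):
    """Yield run directories at any depth under ``root``.

    A directory counts as a run dir if it contains a ``.service``
    marker directory (an assumed convention); we don't descend into it.
    """
    stack = [Path(root)]
    while stack:
        path = stack.pop()
        if (path / '.service').is_dir():
            yield path                      # run dir found; stop here
        else:
            stack.extend(p for p in path.iterdir() if p.is_dir())


# Demo on a throwaway tree: one nested run dir, one top-level run dir.
root = Path(tempfile.mkdtemp())
(root / 'a' / 'run1' / '.service').mkdir(parents=True)
(root / 'b' / '.service').mkdir(parents=True)

found = sorted(p.relative_to(root).as_posix() for p in find_run_dirs(root))
print(found)  # ['a/run1', 'b']
```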
With the universal ID this sort of thing may become more implicit, e.g.: …
What if the directory has had flow.cylc deleted, for example, and the user just wants to remove it anyway? I suppose removing it anyway could be part of the behaviour of …
A bit facetious; we don't need to worry about that. If they delete the … We wouldn't expect users to do much, if any, manual fiddling with the Cylc-managed …
As far as I can see, if you run … If this is the case, I can see a couple of possible solutions: …
Yes.
Not quite. But also, if you don't want that to happen, don't use …
Currently toying with this, along with other things, in a cylc-admin proposal. Opinions welcome, but do note it's a WIP and the document lays out a rough plan for what could be implemented rather than what will be implemented (in order to ensure the interface is forward compatible).
Maybe …
Covered to some extent in this proposal: cylc/cylc-admin#118. Examples: …
Even if we don't offer …

… is a pretty strong reason to keep it available publicly.
@dpmatthews suggested a possible: …
... the thinking being that if a user does a partial clean and then restarts a workflow, it's good to have some evidence of why things might not be working.
As part of part 3 (targeted clean), I think that perhaps globs should not match the possible symlink dirs? E.g. if a user does …

The main reason I am asking is that it would make the implementation easier. Otherwise, as it stands, doing …

Update: probably the best thing to do is just rejig the logic so that …
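A rough sketch of the symlink-aware removal this discussion is circling around (the helper name is illustrative, not the cylc-flow implementation): when a path to be cleaned is a symlink dir, remove the link's *target* first and then the link itself, so the real storage is freed rather than leaving an orphaned directory behind the dangling link.

```python
import shutil
import tempfile
from pathlib import Path


def remove_dir_and_target(path: Path):
    """Delete ``path``; if it is a symlink, delete its target too."""
    if path.is_symlink():
        target = path.resolve()
        # Remove the real directory behind the link first...
        shutil.rmtree(target, ignore_errors=True)
        # ...then remove the now-dangling link itself.
        path.unlink()
    elif path.exists():
        shutil.rmtree(path)


# Demo: a 'log' symlink dir pointing at real storage elsewhere.
base = Path(tempfile.mkdtemp())
real = base / 'real_log'
real.mkdir()
link = base / 'log'
link.symlink_to(real)

remove_dir_and_target(link)
print(link.exists(), real.exists())  # False False
```

Cleaning the known symlink dirs this way first, then globbing for anything else, matches the "rejig the logic" suggestion in the update above.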
The important 8.0.0 tasks have been completed, pending documented follow-up issues. Bumping the remainder of this issue back to 8.x.
A new command for housekeeping workflows and their files.

1. Remove stopped workflows on the local scheduler filesystem. (cylc clean - initial implementation #3961)
2. Remove stopped workflows on all filesystems. (cylc clean 2: remote clean #4017; attempt after platforms: platform and host selection methods and intelligent fallback #3827) … `task_jobs` table in the database. … `{install_target: [platform, ...]}` …
3. A targeted version of (1) & (2), e.g. delete just the `log` directory. (cylc clean 3: targeted clean #4237)
4. … `log`, `work` and `share` on running or stopped workflows. … `x`. … `rose_prune` functionality. … `CYLC_TASK_CYCLE_POINT` variable.