Allow ignoring files by size #330
Comments
Sounds nice. I already had a use case like the following for such a feature: imagine a really slow connection between a production server and its (remote) backup repo server. One would then do multiple backup runs with an increasing "exclude size" (and finally no size exclusion at all). By doing that, backing up your big unimportant files won't delay the backup of your small important files.
I know this use case and have had it myself before. But implementing this as a feature in attic/forks would violate the KISS principle IMHO, especially as long as you can generate exclude files with a simple 'find /home/foo -size +1000k' … I would say it is not needed in attic/forks.
@dragetd good idea. :)
Using a find command before running attic forces you to read the directory structure once with find and then once again with attic. With a big set of data in mind, that is a problem in my opinion. I understand that the KISS principle is an important concern, but performance is quite an important one too.
Also worth noting that separating the listing and the processing of files creates a race condition: if you generate an exclusion list, you might back up files that were created in the meantime but should be excluded based on your criteria. On the other hand, if you generate an inclusion list, you will miss all new files created between the listing and processing stages.
It would be great to be able to ignore files by their size, for example to exclude files larger than 10 MB.
We can do it now with a combination of --exclude-from and a find command, but that forces everything to be read twice, which can be very slow; a sketch of that workaround is shown below.
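A minimal sketch of that workaround, assuming GNU find, a hypothetical repository at /mnt/backup/repo, archive name "docs", and source directory /home/foo (none of these names come from the discussion above); only the --exclude-from option and the find size test are taken from the comments:

```sh
# Pass 1: find walks the whole tree and lists every regular file larger than 10 MB.
# Each absolute path is then used as an exclude pattern; plain paths match the
# corresponding file, but paths containing fnmatch metacharacters (* ? [) would need escaping.
find /home/foo -type f -size +10M > /tmp/excludes.txt

# Pass 2: attic walks the same tree again, skipping the paths listed above.
attic create --exclude-from /tmp/excludes.txt /mnt/backup/repo::docs /home/foo
```

The two separate tree walks are exactly the double read and the race window discussed in the comments: any large file created after the find pass will still be backed up.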