
Docker: nc-backup: free space check failed #858

Closed
langfingaz opened this issue Mar 26, 2019 · 9 comments

@langfingaz
Contributor

langfingaz commented Mar 26, 2019

System information
Arch Linux with docker and docker-compose installed. Formatted with btrfs.
Running NextcloudPi from Docker_x86.

Problem Found
When I run nc-backup (excluding data, without compression, path=/data/ncp-backups) I get this error:

[ nc-backup ]
check free space...
free space check failed. Need 1731093668 Bytes
Maintenance mode already disabled

Even though there's enough space left, as can be seen with df -h from inside the nextcloudpi container.

root@57159478c0ec:/# df -h
Filesystem         Size  Used Avail Use% Mounted on
/dev/mapper/crypt  238G   22G  216G   9% /
tmpfs               64M     0   64M   0% /dev
tmpfs              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/crypt  238G   22G  216G   9% /data
shm                 64M     0   64M   0% /dev/shm
tmpfs              3.9G     0  3.9G   0% /proc/asound
tmpfs              3.9G     0  3.9G   0% /proc/acpi
tmpfs              3.9G     0  3.9G   0% /proc/scsi
tmpfs              3.9G     0  3.9G   0% /sys/firmware
/dev/mapper/data   2.8T  1.7T  1.2T  60% /data/nextcloud/data

Research
The ncp-backup script checks the space left with free=$( df "$destdir" | tail -1 | awk '{ print $4 }' ). Running it with my chosen backup path returns only approx. 200 MB of space instead of the approx. 216 GB that are actually free:

root@57159478c0ec:/# df "/data/ncp-backups" | tail -1 | awk '{ print $4 }'  
226067952

It looks like the above method does not work properly in this use case (inside a Docker container on btrfs).
Any ideas how the free-space check could still be performed?
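For what it's worth, the mismatch seems to come from units: without flags, `df` reports available space in 1K-blocks, while the script compares that number against a byte count. A minimal sketch of the difference (using /tmp as an illustrative path, not the actual backup destination):

```shell
#!/bin/bash
destdir="/tmp"   # illustrative; the real script uses the chosen backup path

# The 4th column of plain `df` is in 1K-blocks ...
free_kb=$( df "$destdir" | tail -1 | awk '{ print $4 }' )
# ... while `df -B1` reports actual bytes
free_bytes=$( df -B1 "$destdir" | tail -1 | awk '{ print $4 }' )

ratio=$(( free_bytes / free_kb ))
echo "bytes / 1K-blocks ratio: $ratio"
```

The ratio comes out at roughly 1024, which matches the "only approx. 200 MB instead of 216 GB" symptom above.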

@nachoparker
Member

What does df "/data/ncp-backups" give you?

@langfingaz
Contributor Author

root@6f2cf01ef85b:/data# df "/data/ncp-backups"
Filesystem        1K-blocks     Used Available Use% Mounted on
/dev/mapper/crypt 249492804 23350888 224859800  10% /data

So I would guess it's off by a factor of 1000 in my case.

@nachoparker
Member

Yes, that's wrong. Please run sudo ncp-update devel and verify that it is fixed.

@nachoparker nachoparker self-assigned this Mar 27, 2019
@langfingaz
Contributor Author

Thanks for the quick reply!

The output of nc-backup is still the same after the update. May I ask which files/scripts should have changed? I ask because the /data/bin/ncp-backup file did not change for me, in case that was the one you updated.
Here's the output from running the update on the devel branch:

root@6f2cf01ef85b:/# ncp-update devel
INFO: updating to development branch 'devel'
Downloading updates
Performing updates
Config value squareSizes for app previewgenerator set to 32
Config value widthSizes for app previewgenerator set to 128 256 512
Config value heightSizes for app previewgenerator set to 128 256
System config value jpeg_quality set to string 60
Running unattended-upgrades
Unattended upgrades active: yes (autoreboot true)
--2019-03-27 11:45:41--  https://packages.sury.org/php/apt.gpg
Resolving packages.sury.org (packages.sury.org)... 185.172.148.132, 2a0b:4d07:101::1
Connecting to packages.sury.org (packages.sury.org)|185.172.148.132|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1769 (1.7K) [application/octet-stream]
Saving to: '/etc/apt/trusted.gpg.d/php.gpg'

/etc/apt/trusted 100%[========>]   1.73K  --.-KB/s    in 0s      

2019-03-27 11:45:41 (18.8 MB/s) - '/etc/apt/trusted.gpg.d/php.gpg' saved [1769/1769]

Running nc-scan-auto
Restarting periodic command scheduler: cronStopping periodic command scheduler: cron.
Starting periodic command scheduler: cron.
automatic scans enabled
Running nc-autoupdate-ncp
automatic NextCloudPi updates enabled
Running nc-notify-updates
Restarting periodic command scheduler: cronStopping periodic command scheduler: cron.
Starting periodic command scheduler: cron.
update web notifications enabled
Running nc-update-nc-apps-auto
automatic app updates enabled
NextCloudPi updated to version v1.10.8

@nachoparker
Member

Oh sorry, that's right. Please delete /usr/local/bin/ncp/BACKUPS/nc-backup.sh and run the update again.

@langfingaz
Contributor Author

Even after I deleted BACKUPS/nc-backup.sh, the new version was not installed - ncp-backup did not change. Anyway, I just executed the install() method manually, and in the new version the free space is now read correctly :)

But somehow there's still another bug left. The required space for a dataless backup is now approx. 1.7 TB - which is roughly how much my data and database are together, but maybe that's just a coincidence. The MySQL dump, however, should only be approx. 1 GB, I guess. Could it be that the "required space" is now 1000x too high?

Running nc-backup
check free space...
free space check failed. Need 1790804602597 Bytes
Maintenance mode already disabled
Done. Press any key...

When I run nc-backup with data included, the required space changes to an even higher value...

Running nc-backup
check free space...
free space check failed. Need 3581352539882 Bytes
Maintenance mode already disabled
Done. Press any key...

@nachoparker
Member

You can see the code, the dataless backup reads the size of the /var/www/nextcloud folder, plus 100MiB extra.

BUT now that you mention it, we should also include the size of the database.
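A rough sketch of that estimate with the database dump included (variable names like `$basedir` and `$dbbackup` follow the snippets quoted in this thread; the paths here are temp-file stand-ins so the snippet is self-contained, not the actual ncp-backup code):

```shell
#!/bin/bash
# Stand-ins for the real paths ($basedir/nextcloud and the mysqldump file)
basedir=$(mktemp -d)
mkdir -p "$basedir/nextcloud"
printf 'app code' > "$basedir/nextcloud/index.php"
dbbackup=$(mktemp)
printf -- '-- dump' > "$dbbackup"

# Size of the nextcloud folder plus the DB dump, plus 100 MiB of headroom
nsize=$( du -sb "$basedir/nextcloud" | awk '{ print $1 }' )
dbsize=$( du -sb "$dbbackup" | awk '{ print $1 }' )
size=$(( nsize + dbsize + 100*1024*1024 ))
echo "need $size bytes"

rm -rf "$basedir" "$dbbackup"
```

Note the headroom is in bytes (100*1024*1024), since `du -sb` reports bytes.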

@langfingaz
Contributor Author

langfingaz commented Mar 31, 2019

Sorry, but I'm just learning bash. As I didn't know the && conditional, I didn't fully understand the code of the free space check at first.

But I gave it another try, and here are some ideas for improving the backup script:

  1. As you said, include the database size in the $size variable.
  2. In the "space-check" section we have to check $includedata and whether $datadir is a subdirectory of the $basedir/nextcloud folder.
  3. At the tar command, check again whether $datadir is a subfolder of nextcloud and, in that case, do not add it twice to the backup.

Here is a first attempt at points 1 and 2. I have not tested it yet; maybe I'll be able to do that tomorrow.

dbsize=$(du -sb "$dbbackup" | awk '{ print $1 }')
dsize=$(du -sb "$datadir" | awk '{ print $1 }')
nsize=$(du -sb "$basedir/nextcloud" | awk '{ print $1 }')

# database dump plus 100 MiB of extra headroom (du -sb reports bytes)
size=$(( dbsize + 100*1024*1024 ))

# check if datadir is a subdir of nextcloud
if [[ "$datadir" == "$basedir/nextcloud"* ]]; then
	if [ "$includedata" == "yes" ]; then
		# datadir is already contained in nsize, don't add it twice!
		size=$(( size + nsize ))
	else
		# subtract datadir in case of a dataless backup
		size=$(( size + nsize - dsize ))
	fi
else
	if [ "$includedata" == "yes" ]; then
		# as datadir lies outside the nextcloud folder, add it to size
		size=$(( size + nsize + dsize ))
	else
		size=$(( size + nsize ))
	fi
fi

free=$( df -B1 "$destdir" | tail -1 | awk '{ print $4 }' )

[ "$size" -ge "$free" ] && {
  echo "free space check failed. Need $size Bytes"
  exit 1
}

Edit: My subdirectory check will break with symlinks. Probably there's a better way to implement this.
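One symlink-safe way to do the check (a sketch, assuming GNU coreutils realpath is available) is to canonicalize both paths first and then do a prefix match:

```shell
#!/bin/bash
# Sketch: is $datadir inside $basedir/nextcloud, even through symlinks?
# Paths here are temp-dir stand-ins for the real ncp locations.
basedir=$(mktemp -d)
mkdir -p "$basedir/nextcloud/data"
ln -s "$basedir/nextcloud/data" "$basedir/datalink"
datadir="$basedir/datalink"      # a symlink pointing inside nextcloud/

# Resolve symlinks before comparing
nc_real=$( realpath "$basedir/nextcloud" )
data_real=$( realpath "$datadir" )

case "$data_real/" in
  "$nc_real"/*) inside=yes ;;    # datadir resolves to a path under nextcloud/
  *)            inside=no  ;;
esac
echo "$inside"

rm -rf "$basedir"
```

The trailing slash on the case subject keeps a sibling like nextcloud2 from matching the nextcloud prefix.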

@nachoparker
Member

Looks good!

Actually, somebody recently brought this up as well. Let's keep the conversation there:

#864

Thanks!
