Update docker2singularity to Singularity v2.5.1 #30

Merged
merged 11 commits into master on Oct 4, 2018

Conversation

vsoch commented May 31, 2018

hey @chrisfilo and @ewels and @0xaf1f (interested parties!) here is an update for Singularity 2.5.1, if you'd care to give it a test. I've closed the PR for 2.4 (and it can live on the v2.4 branch). I've also added a changelog (for human notes) to coincide with these updates.

vsoch changed the title from "Update docker2singulraity to Singularity v2.5.1" to "Update docker2singularity to Singularity v2.5.1" on May 31, 2018

vsoch commented May 31, 2018

This will close #26

vsoch commented May 31, 2018

Note that I just revamped the argument parsing to add the option to give a custom name to the container:

docker run -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/test:/output \
    --privileged -t --rm docker2singularity --name meatballs ubuntu:14.04

Image Format: squashfs
Docker Image: ubuntu:14.04
Container Name: meatballs
Inspected Size: 223 MB

(1/10) Creating a build sandbox...
(2/10) Exporting filesystem...
(3/10) Creating labels...
(4/10) Adding run script...
(5/10) Setting ENV variables...
(6/10) Adding mount points...
(7/10) Fixing permissions...
(8/10) Stopping and removing the container...
(9/10) Building squashfs container...
Building image from sandbox: /tmp/meatballs.build
Building Singularity image...
Singularity container built: /tmp/meatballs.simg
Cleaning up...
(10/10) Moving the image to the output folder...
     65,077,279 100%  395.10MB/s    0:00:00 (xfr#1, to-chk=0/1)
Final Size: 63MB

$ ls /tmp/test/
meatballs.simg

Also notice more verbosity up front to tell the user about the choices. This will close #29
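
For the curious, the parsing works roughly along these lines (a simplified sketch, not the verbatim script; variable names here are illustrative):

#!/bin/bash
image_format="squashfs"   # default, as reported up front
container_name=""         # defaults to a name based on the URI
mount_points=""
docker_image=""

while [ $# -gt 0 ]; do
    case "$1" in
        -f|--folder)   image_format="folder";  shift ;;
        -w|--writable) image_format="ext3";    shift ;;
        -n|--name)     container_name="$2";    shift 2 ;;
        -m|--mount)    mount_points="$2";      shift 2 ;;
        -h|--help)     echo "USAGE: docker2singularity [options] docker_image_name"; exit 0 ;;
        *)             docker_image="$1";      shift ;;
    esac
done

echo "Image Format: ${image_format}"
echo "Docker Image: ${docker_image}"
echo "Container Name: ${container_name:-${docker_image%%:*}}"   # strip the tag if no --name given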

vsoch commented May 31, 2018

I also just updated the usage so it's a single function, and it displays cleanly:

Here it is with --help:

$ docker run -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/test:/output \
    --privileged -t --rm docker2singularity --help
USAGE: docker2singularity [-m "/mount_point1 /mount_point2"] [options] docker_image_name

OPTIONS:

          Image Format
              --folder   -f   build development sandbox (folder)
              --writable -w   non-production writable image (ext3)         
                              Default is squashfs (recommended)
              --name     -n   provide basename for the container (default based on URI)
              --mount    -m   provide list of custom mount points (in quotes!)
              --help     -h   show this help and exit

and with no arguments, it does the same:

$ docker run -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/test:/output \
    --privileged -t --rm docker2singularity
USAGE: docker2singularity [-m "/mount_point1 /mount_point2"] [options] docker_image_name

OPTIONS:

          Image Format
              --folder   -f   build development sandbox (folder)
              --writable -w   non-production writable image (ext3)         
                              Default is squashfs (recommended)
              --name     -n   provide basename for the container (default based on URI)
              --mount    -m   provide list of custom mount points (in quotes!)
              --help     -h   show this help and exit
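
For reference, the whole help text can come from one function shaped roughly like this (a simplified sketch, not the exact script):

usage() {
    cat <<'EOF'
USAGE: docker2singularity [-m "/mount_point1 /mount_point2"] [options] docker_image_name

OPTIONS:

          Image Format
              --folder   -f   build development sandbox (folder)
              --writable -w   non-production writable image (ext3)
                              Default is squashfs (recommended)
              --name     -n   provide basename for the container (default based on URI)
              --mount    -m   provide list of custom mount points (in quotes!)
              --help     -h   show this help and exit
EOF
}

# Printed for --help, and also when no arguments are given:
[ $# -eq 0 ] && { usage; exit 1; }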
              

ewels commented May 31, 2018

Many thanks for doing this @vsoch - especially so fast, it looks great!

I just tried checking out this branch and building the container, then running a test, but got lots of "Directory renamed before its status could be extracted" errors (output pasted below).

Any ideas what's going on here?

Thanks,

Phil

$ docker --version
Docker version 18.03.1-ce, build 9ee9f40

$ sw_vers
ProductName:	Mac OS X
ProductVersion:	10.13.4
BuildVersion:	17E202

$ docker run \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /Users/philewels/GitHub/nf-core/testing/nf-core-methylseq-1.0/singularity-images:/output \
    --privileged -t --rm singularityware/docker2singularity \
    --name nf-core-methylseq-1.0.simg \
    nfcore/methylseq:1.0

Image Format: squashfs
Docker Image: nfcore/methylseq:1.0
Container Name: nf-core-methylseq-1.0.simg

Unable to find image 'nfcore/methylseq:1.0' locally
1.0: Pulling from nfcore/methylseq
c73ab1c6897b: Pulling fs layer
4da7db3cf22b: Pulling fs layer
2eec397d44ab: Pulling fs layer
9c36f5b9109d: Pulling fs layer
36ec3a10a78b: Pulling fs layer
ba4d49b04360: Pulling fs layer
13954a37d16d: Pulling fs layer
e24ade3b0cc4: Pulling fs layer
36ec3a10a78b: Waiting
ba4d49b04360: Waiting
13954a37d16d: Waiting
e24ade3b0cc4: Waiting
9c36f5b9109d: Waiting
c73ab1c6897b: Verifying Checksum
c73ab1c6897b: Download complete
9c36f5b9109d: Verifying Checksum
9c36f5b9109d: Download complete
c73ab1c6897b: Pull complete
36ec3a10a78b: Verifying Checksum
36ec3a10a78b: Download complete
4da7db3cf22b: Verifying Checksum
4da7db3cf22b: Download complete
2eec397d44ab: Verifying Checksum
2eec397d44ab: Download complete
13954a37d16d: Verifying Checksum
13954a37d16d: Download complete
ba4d49b04360: Verifying Checksum
ba4d49b04360: Download complete
4da7db3cf22b: Pull complete
2eec397d44ab: Pull complete
9c36f5b9109d: Pull complete
36ec3a10a78b: Pull complete
ba4d49b04360: Pull complete
13954a37d16d: Pull complete
e24ade3b0cc4: Verifying Checksum
e24ade3b0cc4: Download complete
e24ade3b0cc4: Pull complete
Digest: sha256:a71a60e509856576d819b720a91d2cd4dcf352cbb2592746dcd776fbc17cdb48
Status: Downloaded newer image for nfcore/methylseq:1.0
Inspected Size: 3147 MB

(1/10) Creating a build sandbox...
(2/10) Exporting filesystem...
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/share/terminfo/X: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/share/terminfo/Q: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/share/terminfo/P: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/share/terminfo/L: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/share/terminfo/A: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/share/terminfo/8: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/share/terminfo/7: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/share/terminfo/6: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/share/terminfo/5: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/share/terminfo/4: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/share/terminfo/3: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/share/terminfo/2: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/share/terminfo/1: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/lib: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/include/ncursesw: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/include/ncurses: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-6.0-h9df7e31_2/include: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/z: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/x: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/w: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/v: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/u: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/t: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/s: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/r: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/p: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/o: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/n: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/m: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/l: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/k: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/j: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/i: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/h: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/g: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/f: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/e: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/d: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/c: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/b: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/a: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/X: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/Q: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/P: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/N: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/L: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/A: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/9: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/8: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/7: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/6: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/5: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/4: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/3: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/2: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo/1: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share/terminfo: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/share: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/lib: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/include/ncursesw: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/include/ncurses: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10/include: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/ncurses-5.9-10: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libstdcxx-ng-7.2.0-hdf63c60_3/x86_64-conda_cos6-linux-gnu/sysroot/lib: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libstdcxx-ng-7.2.0-hdf63c60_3/x86_64-conda_cos6-linux-gnu/sysroot: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libstdcxx-ng-7.2.0-hdf63c60_3/x86_64-conda_cos6-linux-gnu: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libstdcxx-ng-7.2.0-hdf63c60_3: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libstdcxx-ng-7.2.0-h7a57d05_2/x86_64-conda_cos6-linux-gnu/sysroot/lib: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libstdcxx-ng-7.2.0-h7a57d05_2/x86_64-conda_cos6-linux-gnu/sysroot: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libstdcxx-ng-7.2.0-h7a57d05_2/x86_64-conda_cos6-linux-gnu: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libgfortran-ng-7.2.0-hdf63c60_3/x86_64-conda_cos6-linux-gnu/sysroot/lib: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libgfortran-ng-7.2.0-hdf63c60_3/x86_64-conda_cos6-linux-gnu/sysroot: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libgfortran-ng-7.2.0-hdf63c60_3/x86_64-conda_cos6-linux-gnu: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libgfortran-ng-7.2.0-hdf63c60_3: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libgcc-ng-7.2.0-hdf63c60_3/x86_64-conda_cos6-linux-gnu/sysroot/lib: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libgcc-ng-7.2.0-hdf63c60_3/x86_64-conda_cos6-linux-gnu/sysroot: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libgcc-ng-7.2.0-hdf63c60_3/x86_64-conda_cos6-linux-gnu: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libgcc-ng-7.2.0-hdf63c60_3: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libgcc-ng-7.2.0-h7cc24e2_2/x86_64-conda_cos6-linux-gnu/sysroot/lib: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libgcc-ng-7.2.0-h7cc24e2_2/x86_64-conda_cos6-linux-gnu/sysroot: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/libgcc-ng-7.2.0-h7cc24e2_2/x86_64-conda_cos6-linux-gnu: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/fontconfig-2.12.1-6/etc/fonts/conf.d: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/fontconfig-2.12.1-6/etc/fonts: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/fontconfig-2.12.1-6/etc: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/fontconfig-2.12.1-6: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/fastqc-0.11.7-pl5.22.0_2/bin: Directory renamed before its status could be extracted
tar: opt/conda/pkgs/fastqc-0.11.7-pl5.22.0_2: Directory renamed before its status could be extracted
tar: opt/conda/include/ncursesw: Directory renamed before its status could be extracted
tar: opt/conda/include/ncurses: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/x86_64-conda_cos6-linux-gnu/sysroot/lib: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/x86_64-conda_cos6-linux-gnu/sysroot: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/x86_64-conda_cos6-linux-gnu: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/z: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/x: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/w: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/v: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/u: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/t: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/s: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/r: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/p: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/o: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/n: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/m: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/l: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/k: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/j: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/i: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/h: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/g: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/f: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/e: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/d: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/c: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/b: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/a: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/X: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/Q: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/P: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/N: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/L: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/A: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/9: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/8: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/7: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/6: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/5: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/4: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/3: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/2: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo/1: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/share/terminfo: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/jre/lib/amd64/server: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/jre/lib/amd64: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/jre/lib: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/jre: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/include/ncursesw: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/include/ncurses: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/include: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/etc/fonts/conf.d: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/etc/fonts: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/etc: Directory renamed before its status could be extracted
tar: opt/conda/envs/nfcore-methylseq-1.0/bin: Directory renamed before its status could be extracted
tar: lib64: Directory renamed before its status could be extracted
tar: etc/systemd/system/timers.target.wants: Directory renamed before its status could be extracted
tar: etc/systemd/system/multi-user.target.wants: Directory renamed before its status could be extracted
tar: etc/systemd/system: Directory renamed before its status could be extracted
tar: etc/systemd: Directory renamed before its status could be extracted
tar: etc/ssl/certs: Directory renamed before its status could be extracted
tar: etc/ssl: Directory renamed before its status could be extracted
tar: etc/sgml: Directory renamed before its status could be extracted
tar: etc/rcS.d: Directory renamed before its status could be extracted
tar: etc/rc6.d: Directory renamed before its status could be extracted
tar: etc/rc5.d: Directory renamed before its status could be extracted
tar: etc/rc4.d: Directory renamed before its status could be extracted
tar: etc/rc3.d: Directory renamed before its status could be extracted
tar: etc/rc2.d: Directory renamed before its status could be extracted
tar: etc/rc0.d: Directory renamed before its status could be extracted
tar: etc/profile.d: Directory renamed before its status could be extracted
tar: etc/lighttpd/conf-enabled: Directory renamed before its status could be extracted
tar: etc/lighttpd: Directory renamed before its status could be extracted
tar: etc/alternatives: Directory renamed before its status could be extracted
tar: etc: Directory renamed before its status could be extracted
tar: bin: Directory renamed before its status could be extracted
tar: Exiting with failure status due to previous errors

vsoch commented May 31, 2018

I think we are seeing this: apptainer/singularity#1570

dtrudg commented May 31, 2018

As in the linked Singularity issue - I'm pretty sure this is not a Singularity bug. Under certain conditions, tar run under docker, with overlayfs as the storage driver, will have this issue due to an overlayfs bug.

Notice here that Docker for Mac is being used. Try the following:

you can add {"storage-driver":"aufs"} in the advanced daemon preferences pane and see if that makes a difference.

docker/for-mac#1219

vsoch commented May 31, 2018

Thanks @dctrud! I was just writing you a note here but you are too speedy :) @ewels, would you care to try this fix and let us know if it resolves the issue?

ewels commented Jun 1, 2018

Thanks both! I'll take a look. It makes me a little anxious though, as I'm writing this into a general-use tool for others to use. Is it likely that everyone (on Mac?) will have to apply the same fix?

vsoch commented Jun 1, 2018

Yeah, it's definitely not ideal - I use Ubuntu, but I would guess that other Mac users would need to do this as well (given that most Macs are the same). For your tool, is there any reason not to try using singularity natively, or within Vagrant, without the need for Docker?

ewels commented Jun 1, 2018

Nothing major - I'm trying to write a helper tool to reduce the work required to fetch singularity images for our nextflow pipelines (http://nf-co.re/). We and many of our users use clusters with no internet connection, so you have to download the pipeline and containers locally first and transfer them via whatever secure method.

The majority of people don't already have singularity installed locally, and to be honest they don't really need it installed locally if all they're doing is pulling a container. So in my helper tool I try with singularity first, and if that's not installed I was planning to use docker if available. But if people need to start playing around with docker settings to get this to work, then that sort of defeats the purpose of making it super user-friendly and easy.

I'm now wondering if this is all overkill and I need to take a step back. If we start putting singularity images up on singularity-hub then presumably we could just use the API to get a download URL and pull that directly from the Python helper without any build processes. This would be far simpler.

So, going off on a tangent now, but is there an easy way to set up automated builds on singularity hub from Docker build scripts? We definitely want to have support for both Docker and Singularity in the pipelines, and I'd prefer to have to maintain only one set of build scripts instead of two if possible.

vsoch commented Jun 1, 2018

If you are using singularity in the nextflow pipeline, you need it installed. Having docker in there also means needing that installed (and maybe it already is?), but giving user space access introduces a security issue. I would try first the simpler of the two solutions, which is using singularity to pull the images. As for docker and singularity compatibility, I would actually host them on Docker Hub and then provide the docker URI to singularity (e.g., singularity pull docker://ubuntu); that way you can have the image in both formats with just a Dockerfile. Another thing you could try is to host a local singularity registry (akin to your own hub): you would be able to build how and where you like, and then push to it.
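
For example, roughly like this (myuser/mytool is just a placeholder):

# Build and push once from a single Dockerfile:
docker build -t myuser/mytool:1.0 .
docker push myuser/mytool:1.0

# The same image then works for both runtimes:
docker run --rm myuser/mytool:1.0
singularity pull docker://myuser/mytool:1.0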

ewels commented Jun 1, 2018

If you are using singularity in the nextflow pipeline, you need it installed.

The nextflow pipeline is running on the HPC, where singularity is installed, yes. But the HPC has no internet connection, so the normal singularity pull doesn't work. This script is for running on the user's laptop to get the image prior to transferring it to the HPC and running the pipeline.
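
Concretely, the laptop-side step is something like this (hostname and path are just placeholders):

# On the laptop, with internet access (Singularity 2.x syntax):
singularity pull --name nf-core-methylseq-1.0.simg docker://nfcore/methylseq:1.0

# Then copy the image over to the air-gapped HPC:
scp nf-core-methylseq-1.0.simg user@hpc.example.org:/scratch/singularity-images/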

Having docker in there also means needing that installed (and maybe it already is?)

Yup, my approach is to use whatever is installed - try using singularity first, then try docker, then throw an error.
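
Roughly this kind of fallback (a sketch with made-up variable names):

image="nfcore/methylseq:1.0"
out="nf-core-methylseq-1.0.simg"

if command -v singularity >/dev/null 2>&1; then
    singularity pull --name "$out" "docker://$image"
elif command -v docker >/dev/null 2>&1; then
    docker run -v /var/run/docker.sock:/var/run/docker.sock \
        -v "$(pwd):/output" --privileged -t --rm \
        singularityware/docker2singularity --name "$out" "$image"
else
    echo "ERROR: neither singularity nor docker is available" >&2
    exit 1
fi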

but giving user space access introduces a security issue.

I don't really understand this, sorry 😕

I would try first the simpler of the two solutions, which is using singularity to pull the images.

I absolutely agree 😀 This is what I've done - see nf-core/tools/pull/53 and the relevant code

As for docker and singularity compatibility, I would actually host them on Docker Hub and then provide the docker URI to singularity (e.g., singularity pull docker://ubuntu); that way you can have the image in both formats with just a Dockerfile.

Yup, this is also what we're doing already. It works beautifully! ✨ In fact, nextflow does this for us. But this requires a singularity pull command, which is how I got here in the first place. My rambling train of thought about using singularity hub was thinking about a way to get a singularity image file that could be easily fetched with as few dependencies as possible (e.g. just curl), purely for the purpose of enabling the simple transfer of the image to the HPC.

Another thing you could try is to host a local singularity registry (akin to your own hub) and you would be able to build how and where you like, and then push to it.

This is cool! I didn't know that this was possible, or at least as easy as the docs make it sound! I am sort of trying to avoid using any non-freeware solutions for the nf-core project, as it is a community-based collaborative effort and I'd prefer to avoid relying on any one group's resources. However, if we manage to get any money for the project (I applied for a small grant just recently) then we could perhaps set something up.

vsoch commented Jun 1, 2018

This is cool! I didn't know that this was possible, or at least as easy as the docs make it sound! I am sort of trying to avoid using any non-freeware solutions for the nf-core project, as it is a community-based collaborative effort and I'd prefer to avoid relying on any one group's resources. However, if we manage to get any money for the project (I applied for a small grant just recently) then we could perhaps set something up.

What do you mean by "non-freeware"? Singularity Registry is openly available and free to use: no paying, and the code is all there for you to use. For any changes / updates that you need, as with Singularity, you can post issues and I respond pretty quickly. Having more people contribute and help would make it stronger!

Another idea that would actually work but I haven't played around with it is to (somehow, somewhere) expose the raw download links to the containers on Singularity Hub. They are JUST files on Google storage, and the singularity client simply gets them with pull by making a request to the Singularity Hub API that has the manifests. Arguably, if we had a reasonable solution to serve this information elsewhere, there isn't any reason you couldn't do the same. For most containers, you can find the metadata endpoint at a URI like http://www.singularity-hub.org/api/container//vsoch/hello-world and then parse the image field to get the image in storage. What are your thoughts on this? What I'd like to avoid is overly stressing the Singularity Hub server for these queries. It can handle a good number and likely isn't stressed with image caching, but repeated stress from many thousands of nodes, or something like that, would not be ideal.
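
Something like this, for example (assuming jq is installed; the exact response shape may differ, but the idea is to parse the image field):

# Ask the API for the container metadata and extract the storage URL:
url=$(curl -s "https://www.singularity-hub.org/api/container/vsoch/hello-world" | jq -r '.image')

# Then fetch the image file directly, with no Singularity installed:
curl -L -o hello-world.simg "$url"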

vsoch commented Jun 1, 2018

Oh! Another option (but another dependency) is to install Singularity Global Client to do the pull --> https://github.com/singularityhub/sregistry-cli. It will run without Singularity and make those same API calls to pull the image. A bonus is that it acts as a little local database manager, so a user that pulls an image once can then easily see where / what images they have pulled before.
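
For example (subcommand names per the sregistry-cli docs, so worth double-checking there):

# Install the client (it runs without Singularity itself):
pip install sregistry

# Pull from Singularity Hub, then list the local image database:
sregistry pull shub://vsoch/hello-world
sregistry images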

vsoch commented Jun 1, 2018

https://singularityhub.github.io/sregistry-cli/

ewels commented Jun 1, 2018

What do you mean by "non-freeware"? Singularity Registry is openly available and free to use: no paying, and the code is all there for you to use. For any changes / updates that you need, as with Singularity, you can post issues and I respond pretty quickly. Having more people contribute and help would make it stronger!

Sorry, poor choice of words. I meant that we need to run a server for this, which takes resources. This isn't impossible, but so far we've got by with GitHub / Travis / Docker Hub etc., which all work without this.

Another idea that would actually work but I haven't played around with it is to (somehow, somewhere) expose the raw download links to the containers on Singularity Hub.

Yes - this was exactly my idea too 😁 I was playing around with exactly this earlier this morning and getting super excited at how good the API is and how easy it would be to call the API from within my little helper tool and access the raw download link. I had found my way to https://www.singularity-hub.org/api/containers/2157/ but it's the same thing I think. Singularity Global Client looks properly cool - this could definitely be useful (though it could potentially be a bit overkill for what could be about five lines of python).

What I'd like to avoid is overly stressing the Singularity Hub server for these queries. It can handle a good number and likely isn't stressed with image caching, but repeated stress from many thousands of nodes, or something like that, would not be ideal.

Sure - I don't think that we would be operating on that scale here. This is going to be one-off manual use by people before they move to another system for their analysis, so downloads will be in the tens or hundreds per month max.

I got to this point after my initial post above, and later today on gitter we were discussing how to make it happen (preferably without needing to co-maintain both a Dockerfile and a Singularity file, though that would not be so difficult if necessary). See https://gitter.im/nf-core/Lobby for our chat logs! 💬 🎉

vsoch commented Jun 3, 2018

@ewels I know this wasn't for your exact use case, but would you have a few moments to test the PR against some other images and let us know if it works for you?

pierlauro commented

@ewels I know this wasn't for your exact use case, but would you have a few moments to test the PR against some other images and let us know if it works for you?

If it can help, I tested it on several images and it worked perfectly.

vsoch commented Oct 3, 2018

@chrisfilo, given the security issues with the older versions, do you have any issue with merging this PR to become the default for master?

chrisgorgo commented

Please go ahead.

vsoch commented Oct 3, 2018

I'm no longer a maintainer on this repository, or on singularity-python, which is primarily for Singularity Hub. @gmkurtzer, would you care to explain why?

vsoch requested a review from chrisgorgo on October 4, 2018 at 00:18

vsoch commented Oct 4, 2018

And we are back! @chrisfilo, I added a review request for you so you can (officially) approve, and then I'll merge into master. Thanks @ewels for confirming that things look good.

chrisgorgo commented

@vsoch I don't have the capacity to review and test this PR, but please go ahead with merging if you wish

vsoch commented Oct 4, 2018

OK, no worries! I'm going to set up a CircleCI build and deploy for this repo, and I'll use this PR to test it again. Thanks for your past review @chrisfilo!

vsoch commented Oct 4, 2018

holy crap, did that just work on the first try? THIS HAS NEVER HAPPENED BEFORE. Excuse me while I go and look out the window for a flying 🐷 ... :)

vsoch merged commit e47ef0d into master on Oct 4, 2018

vsoch commented Oct 4, 2018

Note that:

  • the branch for v2.5 will be kept (and not deleted) in case it needs future work
  • we now have an automated build and deploy to Docker Hub: https://circleci.com/gh/singularityware/docker2singularity/6. The old "automated build" is disabled (but containers built with it are maintained)
  • all new images are tagged with the version of Singularity, and then "latest"
