runc checkpoint fails #87

Closed
rajasec opened this issue Jul 4, 2015 · 34 comments

@rajasec
Contributor

rajasec commented Jul 4, 2015

After starting my container with runc (latest OCF format, 0.1.1), I tried to checkpoint my running container using

runc checkpoint

Error thrown:
criu failed: Type NOTIFY errno 0

@mbdas

mbdas commented Jul 5, 2015

Can confirm the same

@mapk0y
Contributor

mapk0y commented Jul 5, 2015

@rajasec
CRIU (Checkpoint/Restore In Userspace) writes its log by default to /var/run/ocf/${container_id}/criu.work/dump.log.
If you look at that log, it should tell you in more detail why the error came out.
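
A minimal sketch of pulling the failure out of that log, assuming the path mapk0y mentions (later comments in this thread show newer runc putting it under /run/runc/<container_id>/criu.work/dump.log instead):

    container_id=mycontainer                              # hypothetical container name
    log=/var/run/ocf/${container_id}/criu.work/dump.log   # path from the comment above
    tail -n 40 "$log"                                     # the failing step is usually near the end
    grep -n 'Error' "$log"                                # or jump straight to the Error lines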

@mbdas

mbdas commented Jul 5, 2015

(00.001638) Error (mount.c:630): 81:./dev/console doesn't have a proper root mount
(00.001644) Unlock network
(00.001648) Running network-unlock scripts
(00.001654) RPC
(00.001703) Unfreezing tasks into 1
(00.001705) Unseizing 11993 into 1
(00.001705) Error (cr-dump.c:1947): Dumping FAILED.

@avagin
Contributor

avagin commented Jul 6, 2015

External terminals are not supported yet. It's not hard to implement if you need them.
http://www.criu.org/Inheriting_FDs_on_restore

@mbdas

mbdas commented Jul 6, 2015

The example I tried was with the Docker busybox-based rootfs, with runc launching sh in it via the spec config.json. Since this is the example mentioned in this project, it would be good to make it work across the other commands as well. From initial research it looks like Docker's creation of the console may be buggy and shares a mapping with the host. But I also tried the rootfs from Jerome's link in the test Dockerfile, where the console looks like it is set up differently, and it fails the same way.

So it would be good to have a working sample for criu in runc in the project.

@avagin
Contributor

avagin commented Jul 8, 2015

https://gist.github.com/avagin/5cb08eab7d66868ea73a

runc checkpoint/restore work fine with this config file.

root@ubuntu:/home/avagin/git/go/src/github.com/opencontainers/runc# ./runc --criu /home/avagin/git/criu/criu checkpoint
root@ubuntu:/home/avagin/git/go/src/github.com/opencontainers/runc# echo $?
0
root@ubuntu:/home/avagin/git/go/src/github.com/opencontainers/runc# ./runc --criu /home/avagin/git/criu/criu restore /busybox/spec.json
Wed Jul 8 12:56:03 UTC 2015
Wed Jul 8 12:56:04 UTC 2015
Wed Jul 8 12:56:05 UTC 2015

@rajasec
Contributor Author

rajasec commented Jul 8, 2015

Let me try it out.

@mbdas

mbdas commented Jul 8, 2015

I can confirm it works for my setup. Comparing with your gist, the two changes needed were to set terminal to false and readonly to false. Thx for your help!
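
For reference, a hedged sketch of making those two changes with jq (jq itself is an assumption, and the field paths follow the config.json layouts pasted later in this thread; an older 0.1.1 spec file may place them differently):

    # set process.terminal and root.readonly to false, as described above
    jq '.process.terminal = false | .root.readonly = false' config.json > config.json.new \
      && mv config.json.new config.json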

@rajasec
Contributor Author

rajasec commented Jul 25, 2015

@avagin
I tried with terminal set to false, where I could do the checkpoint, but not with terminal set to true (which is the problem).
Since the pipes and file descriptors are created when terminal is false, checkpoint is able to dump the state correctly.
Whenever we set the console or terminal to true, all the external file descriptors come up as /dev/null.

@rajasec
Contributor Author

rajasec commented Jul 25, 2015

With terminal set to false in config.json:
"external_descriptors":["pipe:[97765]","pipe:[97766]","pipe:[97767]"]}

With terminal set to true in config.json:
"external_descriptors":["/dev/null","/dev/null","/dev/null"]}

This is the fd information from /proc
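
One way to see what those external descriptors actually point at is to inspect the container init's stdio on the host; a rough sketch (the PID is a placeholder for whatever init PID runc recorded for the container):

    init_pid=12345                         # hypothetical: the container init's PID on the host
    ls -l /proc/${init_pid}/fd/0 /proc/${init_pid}/fd/1 /proc/${init_pid}/fd/2
    # with terminal=false these resolve to pipe:[...] entries; with terminal=true to a pty
    cat /proc/${init_pid}/fdinfo/0         # pos, flags and the mnt_id criu later tries to resolve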

@rajasec
Contributor Author

rajasec commented Aug 9, 2015

@avagin
I ran runc checkpoint with terminal set to false. It used to dump fine on Ubuntu 14.04 systems, but on Ubuntu 15.04 (kernel 3.19) checkpoint throws the following messages.
I've taken the latest criu (1.6).
(00.127824) Something is mounted on top of ./sys/fs/cgroup
(00.127880) Error (mount.c:1005): Can't create a temporary directory: Read-only file system

@rajasec
Contributor Author

rajasec commented Aug 9, 2015

@avagin
I ran runc checkpoint with terminal set to false, but on Ubuntu 15.04 (kernel 3.19) checkpoint throws the following messages.
(00.127824) Something is mounted on top of ./sys/fs/cgroup
(00.127880) Error (mount.c:1005): Can't create a temporary directory: Read-only file system

I had cloned the latest runc too, and I've taken the latest criu (1.6).

@mbdas

mbdas commented Aug 9, 2015

Did you set the readonly attribute to false in the spec file as well?

@LK4D4
Contributor

LK4D4 commented Aug 10, 2015

@rajasec Probably you need to mount /tmp as tmpfs inside the container. I had the same issue.
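
A sketch of what that could look like in config.json, assuming jq is available; the entry simply mirrors the tmpfs mounts already used for /dev and /dev/shm in the configs pasted further down this thread:

    # add a tmpfs mount on /tmp so criu has a writable place for its temporary directory
    jq '.mounts += [{"destination": "/tmp", "type": "tmpfs", "source": "tmpfs",
                     "options": ["nosuid", "nodev", "mode=1777"]}]' \
       config.json > config.json.new && mv config.json.new config.json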

@rajasec
Contributor Author

rajasec commented Aug 10, 2015

@LK4D4
I have modified the config to mount a tmpfs on /tmp, and with that it dumps the container. This was working earlier on the 14.04 system; it may have something to do with the new Ubuntu 15.04 version (need to check systemd).
But my real issue is with containers connected to a tty, where it fails to dump:
1) /dev/console doesn't have a proper mount;
even if I unmount /dev/console inside the container, it fails to dump because the container's /proc/1/mountinfo does not contain the mnt_id which criu is looking for.

@avagin
Contributor

avagin commented Aug 12, 2015

@rajasec It fails because someone has opened /dev/console. External terminals are not supported yet.

@rajasec
Contributor Author

rajasec commented Aug 12, 2015

@avagin
/proc/<pid>/fdinfo/<fd> contains an mnt_id which isn't in /proc/<pid>/mountinfo.
Is this problem only for overlayfs in Docker, or for runc containers as well?
I'm seeing the mnt_id missing from the pid's mountinfo.

@avagin
Contributor

avagin commented Aug 12, 2015

It's a problem of overlayfs in the Linux kernel, which was fixed in 4.2.
Could you try to update criu? We have added a workaround for overlayfs recently:
xemul/criu@dbaab31

@rajasec
Contributor Author

rajasec commented Aug 13, 2015

@avagin
Since a Docker container mounts with overlayfs, criu looks for the overlayfs mount point in its parser (overlayfs_parse).
It looks like the mountinfo of a runc container does not use overlayfs, so the mnt_id problem is still there.
Is the mnt_id problem only with overlayfs, or with any filesystem? I'm seeing it even on ext4 with the rootfs mounted on /.

@avagin
Contributor

avagin commented Aug 13, 2015

Could you show mountinfo for a runc container and dump.log for it?

@rajasec
Contributor Author

rajasec commented Aug 14, 2015

Kernel ver:3.19

runc container info

/proc/1/mountinfo output

220 97 8:5 /home/raj/go/pkg/src/github.com/opencontainers/runc/rootfs / ro,relatime - ext4 /dev/disk/by-uuid/f5a191f4-afa5-48d6-a118-47a753d5a074 rw,errors=remount-ro,data=ordered
221 220 0:44 / /proc rw,relatime - proc proc rw
222 220 0:45 / /dev rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755
223 222 0:46 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666
224 222 0:47 / /dev/shm rw,nosuid,nodev,noexec,relatime - tmpfs shm rw,size=65536k
225 222 0:41 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw
226 220 0:48 / /sys ro,nosuid,nodev,noexec,relatime - sysfs sysfs ro
227 226 0:49 / /sys/fs/cgroup ro,nosuid,nodev,noexec,relatime - tmpfs tmpfs ro,mode=755
228 227 0:22 /user.slice/user-1000.slice/session-c2.scope /sys/fs/cgroup/systemd ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd
229 227 0:24 /user.slice/user-1000.slice/session-c2.scope/runc /sys/fs/cgroup/cpuset ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,cpuset,clone_children
230 227 0:25 /user.slice/user-1000.slice/session-c2.scope/runc /sys/fs/cgroup/perf_event ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event
231 227 0:26 /user.slice/user-1000.slice/session-c2.scope/runc /sys/fs/cgroup/freezer ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,freezer
232 227 0:27 /user.slice/user-1000.slice/session-c2.scope/runc /sys/fs/cgroup/net_cls,net_prio ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,net_cls,net_prio
233 227 0:28 /user.slice/user-1000.slice/session-c2.scope/runc /sys/fs/cgroup/cpu,cpuacct ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,cpu,cpuacct
234 227 0:29 /user.slice/user-1000.slice/session-c2.scope/runc /sys/fs/cgroup/devices ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,devices
235 227 0:30 /user.slice/user-1000.slice/session-c2.scope/runc /sys/fs/cgroup/blkio ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,blkio
236 227 0:31 /user.slice/user-1000.slice/session-c2.scope/runc /sys/fs/cgroup/hugetlb ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb
237 227 0:32 /user.slice/user-1000.slice/session-c2.scope/runc /sys/fs/cgroup/memory ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,memory
112 221 0:44 /sys /proc/sys ro,nosuid,nodev,noexec,relatime - proc proc rw
190 221 0:44 /sysrq-trigger /proc/sysrq-trigger ro,nosuid,nodev,noexec,relatime - proc proc rw
191 221 0:44 /irq /proc/irq ro,nosuid,nodev,noexec,relatime - proc proc rw
192 221 0:44 /bus /proc/bus ro,nosuid,nodev,noexec,relatime - proc proc rw
193 221 0:45 /null /proc/kcore rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755

You can see that mnt_id 162 is not available in mountinfo

For testing purposes I unmounted /dev/console in the container, so that it tries to parse the fds.

proc/1/fdinfo# cat 0
pos: 0
flags: 0100002
mnt_id: 162

Dump.log (from the latest criu), which confirms that 162 is not there (not a problem with criu, I feel):
(00.069192) Dumping opened files (pid: 6465)
(00.069195) ----------------------------------------
(00.069214) Sent msg to daemon 14 0 0
pie: __fetched msg: 14 0 0
pie: __sent ack msg: 14 14 0
pie: Daemon waits for command
(00.069255) Wait for ack 14 on daemon socket
(00.069263) Fetched ack: 14 14 0
(00.069316) 6465 fdinfo 0: pos: 0x 0 flags: 100002/0
(00.069351) tty: Dumping tty 19 with id 0x1
(00.069358) Error (files-reg.c:816): Can't lookup mount=162 for fd=0 path=/5
(00.069381) ----------------------------------------
(00.069385) Error (cr-dump.c:1255): Dump files (pid: 6465) failed with -1
(00.069441) Waiting for 6465 to trap
(00.069461) Daemon 6465 exited trapping
(00.069472) Sent msg to daemon 6 0 0
pie: __fetched msg: 6 0 0
pie: 1: new_sp=0x7fa69cc8b008 ip 0x7fa69c36f80e

For pipes (with terminal set to false) there is a different flow, where criu reads the mnt_id of / (in this case mnt_id 220 was used):
220 97 8:5 /home/raj/go/pkg/src/github.com/opencontainers/runc/rootfs / ro,relatime - ext4

For a tty, it tries to read from /proc/<pid>/fdinfo/<fd>:
if (dump_one_reg_file(lfd, id, p))
return -1;
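
A quick way to reproduce the mismatch described above from inside the container (PID 1 and mnt_id 162 are the values from this comment, so they are only illustrative):

    # list the mount IDs the kernel reports for init, then look for the one fdinfo claims
    awk '{print $1}' /proc/1/mountinfo | grep -x 162 \
      || echo "mnt_id 162 from /proc/1/fdinfo/0 is not present in /proc/1/mountinfo"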

@rajasec
Contributor Author

rajasec commented Aug 15, 2015

@avagin
The above information is from the latest runc and the latest criu.

@avagin
Contributor

avagin commented Aug 15, 2015

I already said that criu can't dump/restore containers with external terminals. You unmounted /dev/console, but it's not enough. It looks like the init process has opened /dev/console.

Could you execute the following command in your container and show output:
ls -l /proc/1/fd

@rajasec
Contributor Author

rajasec commented Aug 17, 2015

@avagin
lrwx------ 1 root root 64 Aug 17 16:20 0 -> /2
lrwx------ 1 root root 64 Aug 17 16:20 1 -> /2
lrwx------ 1 root root 64 Aug 17 16:20 2 -> /2
lrwx------ 1 root root 64 Aug 17 16:20 255 -> /2

The above info belongs to /bin/bash, which is process ID 1 in this sample.

I completely agree with you that criu can't dump/restore containers with an external terminal.

Clarification needed from you:
why is the mnt_id of the fds not available in /proc/1/mountinfo?

@ivinpolosony

I can confirm that the (00.127880) Error (mount.c:1005): Can't create a temporary directory: Read-only file system error gets solved by changing

"root": {
                "path": "rootfs",
                "readonly": false
        } 

Although on restore there are no terminal messages, even though runc events shows otherwise.

@avagin
Contributor

avagin commented Aug 25, 2015

@rajasec As I already said, we don't support external terminals. In your case all descriptors point to a pty.
You can add support for external terminals to criu and runc. I think we need to use the same approach that was used for external pipes:
http://www.criu.org/Inheriting_FDs_on_restore
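
For external pipes, that wiki page boils down to keeping a replacement file open on some descriptor in the process that invokes criu restore and mapping it onto the dumped resource by its string key. A very rough sketch of the bare criu invocation (not what runc drives internally; the image directory, the replacement file, and the reuse of the pipe:[97765] string from rajasec's state dump above are all illustrative):

    exec 3< /tmp/replacement-stdin          # hypothetical replacement for the old stdin pipe, held on fd 3
    criu restore -D ./checkpoint --inherit-fd 'fd[3]:pipe:[97765]'
    # fd[3] names the descriptor criu inherits from the caller; pipe:[97765] is the resource
    # recorded at dump time, so the restored task gets the replacement wherever it had that pipe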

@crosbymichael
Member

ttys are not supported for c/r, and we cannot make any changes in runc until this has some type of support in criu.

@wcczlimited

Does that mean checkpoint can only work on a container which is detached, and that an interactive container is impossible to checkpoint and restore?

@avagin
Contributor

avagin commented Apr 14, 2016

I added support for external terminals in criu:
xemul/criu@4bab48f

@crosbymichael
Member

@avagin nice!

@kingsin-fzj

kingsin-fzj commented Feb 10, 2017

@avagin hello,
I ran into a problem with runc: checkpoint/restore works fine in the normal case, but after using the Riddler tool to add the network, the restore fails:

(0.251052) 1:Warn (libnetlink.c:54): ERROR -17 reported by netlink

(0.251063) 1: Error (net.c:779): Can't restore link

I tried adding --empty-ns network at restore time, and then it reported this error:

(0.285898) 1: Error (criu/mount.c:3406): mnt: Unable to find real path for /run/runc/test/criu-root

() Error (criu/cr-restore.c:1020): 28059 killed signal by 9: Killed

(0.332568) mnt: Switching to new ns to clean ghosts

(0.332624) uns: calling exit_usernsd (-1, 1)

() uns: calls 0x457c70 (28055, -1, 1), (daemon)

(0.332669) uns: `- daemon exits w/ 0

(0.332940) uns: daemon stopped

Versions involved:

criu --version: Version: 2.6
runc --version: runc version 1.0.0-rc2
spec: 1.0.0-rc3

What should I do now? Can you help me solve it?

@avagin
Contributor

avagin commented Feb 14, 2017

Could you show a config file for this container and the output of "ip a" from the container?

@kingsin-fzj

kingsin-fzj commented Feb 20, 2017

@avagin,

That previous question is OK now; the problem now is:
(00.681993) 1: Restoring fd 0 (state -> prepare)
(00.681996) 1: Restoring fd 1 (state -> prepare)
(00.681998) 1: Restoring fd 2 (state -> prepare)
(00.682001) 1: Restoring fd 0 (state -> create)
(00.682009) 1: Creating pipe pipe_id=0x327060f id=0x7
(00.682015) 1: Found id pipe:[52889103] (fd 0) in inherit fd list
(00.682019) 1: File pipe:[52889103] will be restored from fd 3 dumped from inherit fd 0
(00.682045) 1: Error (criu/pipes.c:224): Unable to reopen the pipe /proc/self/fd/3: Permission denied
(00.718520) mnt: Switching to new ns to clean ghosts
(00.718567) uns: calling exit_usernsd (-1, 1)
(00.718604) uns: daemon calls 0x4592a0 (4752, -1, 1)
(00.718623) uns: `- daemon exits w/ 0
(00.718932) uns: daemon stopped

my config :
{
"ociVersion": "1.0.0-rc3",
"platform": {
"os": "linux",
"arch": "amd64"
},
"process": {
"terminal": false,
"user": {
},
"args": [
"sh"
],
"env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"TZ=Asia/Shanghai",
"TERM=xterm"
],
"cwd": "/",
"capabilities": [
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FSETID",
"CAP_FOWNER",
"CAP_MKNOD",
"CAP_NET_RAW",
"CAP_SETGID",
"CAP_SETUID",
"CAP_SETFCAP",
"CAP_SETPCAP",
"CAP_NET_BIND_SERVICE",
"CAP_SYS_CHROOT",
"CAP_KILL",
"CAP_AUDIT_WRITE"
],
"rlimits": [
{
"type": "RLIMIT_NOFILE",
"hard": 1024,
"soft": 1024
}
],
"noNewPrivileges": true,
"apparmorProfile": "docker-default"
},
"root": {
"path": "rootfs",
"readonly": false
},
"hostname": "Test",
"mounts": [
{
"destination": "/proc",
"type": "proc",
"source": "proc"
},
{
"destination": "/dev",
"type": "tmpfs",
"source": "tmpfs",
"options": [
"nosuid",
"strictatime",
"mode=755",
"size=65536k"
]
},
{
"destination": "/dev/pts",
"type": "devpts",
"source": "devpts",
"options": [
"nosuid",
"noexec",
"newinstance",
"ptmxmode=0666",
"mode=0620"
]
},
{
"destination": "/dev/shm",
"type": "tmpfs",
"source": "shm",
"options": [
"nosuid",
"noexec",
"nodev",
"mode=1777",
"size=65536k"
]
},
{
"destination": "/sys",
"type": "sysfs",
"source": "sysfs",
"options": [
"nosuid",
"noexec",
"nodev"
]
},
{
"destination": "/sys/fs/cgroup",
"type": "cgroup",
"source": "cgroup",
"options": [
"nosuid",
"noexec",
"nodev",
"relatime"
]
},
{
"destination": "/etc/hosts",
"type": "bind",
"source": "/etc/hosts",
"options": [
"rbind",
"rprivate",
"ro"
]
},
{
"destination": "/etc/resolv.conf",
"type": "bind",
"source": "/etc/resolv.conf",
"options": [
"rbind",
"rprivate",
"ro"
]
}
],
"hooks": {
"prestart": [
{
"path": "/usr/bin/netns"
}
]
},
"linux": {
"uidMappings": [
{
"hostID": 1000,
"containerID": 0,
"size": 65536
}
],
"gidMappings": [
{
"hostID": 1000,
"containerID": 0,
"size": 65536
}
],
"resources": {
"devices": [
{
"allow": true,
"type": "c",
"major": 1,
"minor": 3,
"access": "rwm"
},
{
"allow": true,
"type": "c",
"major": 1,
"minor": 5,
"access": "rwm"
},
{
"allow": true,
"type": "c",
"major": 1,
"minor": 7,
"access": "rwm"
},
{
"allow": true,
"type": "c",
"major": 1,
"minor": 9,
"access": "rwm"
},
{
"allow": true,
"type": "c",
"major": 1,
"minor": 8,
"access": "rwm"
}
],
"disableOOMKiller": false,
"oomScoreAdj": 0,
"memory": {
"limit": 0,
"reservation": 0,
"swap": 0,
"kernel": 0,
"kernelTCP": null,
"swappiness": 18446744073709551615
},
"cpu": {
"shares": 0,
"quota": 0,
"period": 0,
"cpus": "",
"mems": ""
},
"pids": {
"limit": 0
},
"blockIO": {
"blkioWeight": 0
}
},
"namespaces": [
{
"type": "ipc"
},
{
"type": "uts"
},
{
"type": "mount"
},
{
"type": "network"
},
{
"type": "pid"
},
{
"type": "user"
}
],
"devices": [
{
"path": "/dev/null",
"type": "c",
"major": 1,
"minor": 3,
"fileMode": 438,
"uid": 0,
"gid": 0
},
{
"path": "/dev/zero",
"type": "c",
"major": 1,
"minor": 5,
"fileMode": 438,
"uid": 0,
"gid": 0
},
{
"path": "/dev/full",
"type": "c",
"major": 1,
"minor": 7,
"fileMode": 438,
"uid": 0,
"gid": 0
},
{
"path": "/dev/urandom",
"type": "c",
"major": 1,
"minor": 9,
"fileMode": 438,
"uid": 0,
"gid": 0
},
{
"path": "/dev/random",
"type": "c",
"major": 1,
"minor": 8,
"fileMode": 438,
"uid": 0,
"gid": 0
}
],
"seccomp": {
"defaultAction": "SCMP_ACT_ERRNO",
"architectures": null,
"syscalls": [
{
"name": "accept",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "accept4",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "access",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "alarm",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "arch_prctl",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "bind",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "brk",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "capget",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "capset",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "chdir",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "chmod",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "chown",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "chown32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "chroot",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "clock_getres",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "clock_gettime",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "clock_nanosleep",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "clone",
"action": "SCMP_ACT_ALLOW",
"args": [
{
"index": 0,
"value": 2080505856,
"valueTwo": 0,
"op": "SCMP_CMP_MASKED_EQ"
}
]
},
{
"name": "close",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "connect",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "creat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "dup",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "dup2",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "dup3",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "epoll_create",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "epoll_create1",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "epoll_ctl",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "epoll_ctl_old",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "epoll_pwait",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "epoll_wait",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "epoll_wait_old",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "eventfd",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "eventfd2",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "execve",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "execveat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "exit",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "exit_group",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "faccessat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fadvise64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fadvise64_64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fallocate",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fanotify_init",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fanotify_mark",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fchdir",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fchmod",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fchmodat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fchown",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fchown32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fchownat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fcntl",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fcntl64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fdatasync",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fgetxattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "flistxattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "flock",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fork",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fremovexattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fsetxattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fstat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fstat64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fstatat64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fstatfs",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fstatfs64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "fsync",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "ftruncate",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "ftruncate64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "futex",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "futimesat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getcpu",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getcwd",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getdents",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getdents64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getegid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getegid32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "geteuid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "geteuid32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getgid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getgid32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getgroups",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getgroups32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getitimer",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getpeername",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getpgid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getpgrp",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getpid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getppid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getpriority",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getrandom",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getresgid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getresgid32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getresuid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getresuid32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getrlimit",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "get_robust_list",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getrusage",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getsid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getsockname",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getsockopt",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "get_thread_area",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "gettid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "gettimeofday",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getuid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getuid32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "getxattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "inotify_add_watch",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "inotify_init",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "inotify_init1",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "inotify_rm_watch",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "io_cancel",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "ioctl",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "io_destroy",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "io_getevents",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "ioprio_get",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "ioprio_set",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "io_setup",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "io_submit",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "kill",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "lchown",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "lchown32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "lgetxattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "link",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "linkat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "listen",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "listxattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "llistxattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "_llseek",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "lremovexattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "lseek",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "lsetxattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "lstat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "lstat64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "madvise",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "memfd_create",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mincore",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mkdir",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mkdirat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mknod",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mknodat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mlock",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mlockall",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mmap",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mmap2",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mprotect",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mq_getsetattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mq_notify",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mq_open",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mq_timedreceive",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mq_timedsend",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mq_unlink",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "mremap",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "msgctl",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "msgget",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "msgrcv",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "msgsnd",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "msync",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "munlock",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "munlockall",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "munmap",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "nanosleep",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "newfstatat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "_newselect",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "open",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "openat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "pause",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "pipe",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "pipe2",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "poll",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "ppoll",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "prctl",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "pread64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "preadv",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "prlimit64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "pselect6",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "pwrite64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "pwritev",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "read",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "readahead",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "readlink",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "readlinkat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "readv",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "recvfrom",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "recvmmsg",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "recvmsg",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "remap_file_pages",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "removexattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "rename",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "renameat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "renameat2",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "rmdir",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "rt_sigaction",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "rt_sigpending",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "rt_sigprocmask",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "rt_sigqueueinfo",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "rt_sigreturn",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "rt_sigsuspend",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "rt_sigtimedwait",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "rt_tgsigqueueinfo",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sched_getaffinity",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sched_getattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sched_getparam",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sched_get_priority_max",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sched_get_priority_min",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sched_getscheduler",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sched_rr_get_interval",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sched_setaffinity",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sched_setattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sched_setparam",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sched_setscheduler",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sched_yield",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "seccomp",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "select",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "semctl",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "semget",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "semop",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "semtimedop",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sendfile",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sendfile64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sendmmsg",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sendmsg",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sendto",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setdomainname",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setfsgid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setfsgid32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setfsuid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setfsuid32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setgid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setgid32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setgroups",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setgroups32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sethostname",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setitimer",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setpgid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setpriority",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setregid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setregid32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setresgid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setresgid32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setresuid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setresuid32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setreuid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setreuid32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setrlimit",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "set_robust_list",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setsid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setsockopt",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "set_thread_area",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "set_tid_address",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setuid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setuid32",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "setxattr",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "shmat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "shmctl",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "shmdt",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "shmget",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "shutdown",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sigaltstack",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "signalfd",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "signalfd4",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sigreturn",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "socket",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "socketpair",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "splice",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "stat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "stat64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "statfs",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "statfs64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "symlink",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "symlinkat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sync",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sync_file_range",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "syncfs",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "sysinfo",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "syslog",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "tee",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "tgkill",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "time",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "timer_create",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "timer_delete",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "timerfd_create",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "timerfd_gettime",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "timerfd_settime",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "timer_getoverrun",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "timer_gettime",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "timer_settime",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "times",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "tkill",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "truncate",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "truncate64",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "ugetrlimit",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "umask",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "uname",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "unlink",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "unlinkat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "utime",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "utimensat",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "utimes",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "vfork",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "vhangup",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "vmsplice",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "wait4",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "waitid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "waitpid",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "write",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "writev",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "modify_ldt",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "breakpoint",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "cacheflush",
"action": "SCMP_ACT_ALLOW"
},
{
"name": "set_tls",
"action": "SCMP_ACT_ALLOW"
}
]
}
}
}

and my container ip:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.19.0.2 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 fe80::e030:98ff:fef6:3e2c prefixlen 64 scopeid 0x20
ether e2:30:98:f6:3e:2c txqueuelen 1000 (Ethernet)
RX packets 48 bytes 6717 (6.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9 bytes 722 (722.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ip a:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen
1000
link/ether 5e:a6:a4:5a:ed:df brd ff:ff:ff:ff:ff:ff
inet 172.19.0.4/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5ca6:a4ff:fe5a:eddf/64 scope link
valid_lft forever preferred_lft forever

stefanberger pushed a commit to stefanberger/runc that referenced this issue Sep 8, 2017
Add runtime state configuration and structs
stefanberger pushed a commit to stefanberger/runc that referenced this issue Sep 8, 2017
The version field was added while 180df9d (Add runtime state
configuration and structs, 2015-07-29, opencontainers#87) was in-flight [1], and it
missed getting documented in the example.

[1]: opencontainers/runtime-spec#87 (comment)

Signed-off-by: W. Trevor King <[email protected]>
@mouri11

mouri11 commented Aug 13, 2020

Hi. I am running runc with the following config.json file.

{
	"ociVersion": "1.0.2-dev",
	"process": {
		"terminal": false,
		"user": {
			"uid": 0,
			"gid": 0
		},
		"args": [
			"sleep","120"
		],
		"env": [
			"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
			"TERM=xterm"
		],
		"cwd": "/",
		"capabilities": {
			"bounding": [
				"CAP_AUDIT_WRITE",
				"CAP_KILL",
				"CAP_NET_BIND_SERVICE"
			],
			"effective": [
				"CAP_AUDIT_WRITE",
				"CAP_KILL",
				"CAP_NET_BIND_SERVICE"
			],
			"inheritable": [
				"CAP_AUDIT_WRITE",
				"CAP_KILL",
				"CAP_NET_BIND_SERVICE"
			],
			"permitted": [
				"CAP_AUDIT_WRITE",
				"CAP_KILL",
				"CAP_NET_BIND_SERVICE"
			],
			"ambient": [
				"CAP_AUDIT_WRITE",
				"CAP_KILL",
				"CAP_NET_BIND_SERVICE"
			]
		},
		"rlimits": [
			{
				"type": "RLIMIT_NOFILE",
				"hard": 1024,
				"soft": 1024
			}
		],
		"noNewPrivileges": true
	},
	"root": {
		"path": "rootfs",
		"readonly": true
	},
	"hostname": "runc",
	"mounts": [
		{
			"destination": "/proc",
			"type": "proc",
			"source": "proc"
		},
		{
			"destination": "/dev",
			"type": "tmpfs",
			"source": "tmpfs",
			"options": [
				"nosuid",
				"strictatime",
				"mode=755",
				"size=65536k"
			]
		},
		{
			"destination": "/dev/pts",
			"type": "devpts",
			"source": "devpts",
			"options": [
				"nosuid",
				"noexec",
				"newinstance",
				"ptmxmode=0666",
				"mode=0620",
				"gid=5"
			]
		},
		{
			"destination": "/dev/shm",
			"type": "tmpfs",
			"source": "shm",
			"options": [
				"nosuid",
				"noexec",
				"nodev",
				"mode=1777",
				"size=65536k"
			]
		},
		{
			"destination": "/dev/mqueue",
			"type": "mqueue",
			"source": "mqueue",
			"options": [
				"nosuid",
				"noexec",
				"nodev"
			]
		},
		{
			"destination": "/sys",
			"type": "sysfs",
			"source": "sysfs",
			"options": [
				"nosuid",
				"noexec",
				"nodev",
				"ro"
			]
		},
		{
			"destination": "/sys/fs/cgroup",
			"type": "cgroup",
			"source": "cgroup",
			"options": [
				"nosuid",
				"noexec",
				"nodev",
				"relatime",
				"ro"
			]
		}
	],
	"linux": {
		"resources": {
			"devices": [
				{
					"allow": false,
					"access": "rwm"
				}
			]
		},
		"namespaces": [
			{
				"type": "pid"
			},
			{
				"type": "network"
			},
			{
				"type": "ipc"
			},
			{
				"type": "uts"
			},
			{
				"type": "mount"
			}
		],
		"maskedPaths": [
			"/proc/acpi",
			"/proc/asound",
			"/proc/kcore",
			"/proc/keys",
			"/proc/latency_stats",
			"/proc/timer_list",
			"/proc/timer_stats",
			"/proc/sched_debug",
			"/sys/firmware",
			"/proc/scsi"
		],
		"readonlyPaths": [
			"/proc/bus",
			"/proc/fs",
			"/proc/irq",
			"/proc/sys",
			"/proc/sysrq-trigger"
		]
	}
}

I am facing the following error while trying to checkpoint the container:

criu failed: type NOTIFY errno 0
log file: /run/runc/mycontainer/criu.work/dump.log

The log file is here.

Any help will be appreciated. Thanks.
