
Dump and restore containers with external terminals #1355

Merged: 9 commits merged into opencontainers:master from avagin:cr-console on May 18, 2017

Conversation

avagin
Contributor

@avagin avagin commented Mar 2, 2017

CRIU (2.12) was extended to report orphaned master pty-s, and this can be used to restore a runc container console.

Another benefit is that startContainer() is now used to restore a container, which allows us to remove a lot of code from restore.go.

Cc: @cyphar

Fixes #1202

@avagin avagin force-pushed the cr-console branch 8 times, most recently from cb19ff6 to ee0df0e Compare March 2, 2017 23:29
@cyphar
Member

cyphar commented Mar 3, 2017

This looks good. However, can you add a --console-socket flag to runc restore so that people can use runc restore -d with --console-socket? You should be able to reuse the existing TTY sending code.

@cyphar cyphar self-requested a review March 3, 2017 01:25
@cyphar
Member

cyphar commented Mar 3, 2017

Actually, since you're using startContainer it should "just work" if you add --console-socket.

@avagin
Contributor Author

avagin commented Mar 3, 2017

@cyphar I added --console-socket. Thanks!

Dockerfile Outdated
@@ -31,10 +33,12 @@ RUN cd /tmp \
&& rm -rf /tmp/bats

# install criu
ENV CRIU_VERSION 1.7
ENV CRIU_VERSION 2.11.1
COPY tests/hacks/0001-criu-allow-to-ignore-ipv6-if-CRIU_NOIPV6-is-set.patch /tmp/
Contributor

I feel a little bit sad about this 😢

Can we make the janky machines load IPv6 modules prior to running all tests? Or maybe we can just disable janky now that we have Travis.

Contributor Author

I don't like this hack either. Do you know who maintains the janky machines?

Member

They're managed by Docker Inc's infra team.

return err
}
return nil
break
Contributor

I don't think we need this break here, as it will break out at the end of the block anyway if we let it continue, just like how the break in DUMP was removed?

Contributor Author

I don't get this comment. What do you mean?

case t == criurpc.CriuReqType_RESTORE:
case t == criurpc.CriuReqType_DUMP:
        break
case t == criurpc.CriuReqType_PRE_DUMP:

Member

@avagin I think he's referring to the break at the end of the for loop.

Contributor Author

@cyphar No, it is about this break. I just found that it isn't required here in Go. But can we keep it, because I, and probably others who read a lot of C code, would otherwise see a bug here?

Member

@cyphar cyphar Mar 5, 2017

But the current code is also "wrong" (if you're trying to make it more friendly to C programmers), because C programmers will assume that CriuReqType_RESTORE falls through to _DUMP. Please just use idiomatic Go to avoid confusion like this for Go programmers.

Contributor Author

@cyphar I didn't think one extra "break" here would be a problem. Fixed.
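For readers following the exchange above, a minimal standalone sketch (not the PR's code) of the Go switch semantics being discussed: cases never fall through implicitly, so a trailing break is a no-op, and continuing into the next case requires an explicit fallthrough.

package main

import "fmt"

func main() {
	t := "DUMP"
	switch {
	case t == "RESTORE":
		fmt.Println("restore") // execution leaves the switch here; no implicit fallthrough
	case t == "DUMP":
		fmt.Println("dump") // a trailing break here would be redundant
	case t == "PRE_DUMP":
		fallthrough // only an explicit fallthrough continues into the next case
	default:
		fmt.Println("other")
	}
}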

// If we got the message CriuReqType_PRE_DUMP it means
// CRIU was successful and we need to forcefully stop CRIU
logrus.Debugf("PRE_DUMP finished. Send close signal to CRIU service")
criuClient.Close()
Contributor

My guess is that criuClient is only a file, so Close doesn't quite do what it should?

criuClientCon can be cast to a unix socket, so maybe calling Close, or even CloseWrite (which is really SHUT_WR, like we do below), should do the correct thing.

Contributor Author

I know what syscall.Shutdown() does. It works fast and reliably. I have called it thousands of times and it always works as expected ;)

Contributor Author

@dqminh I found that net.FileConn() creates a new file descriptor, which is why criuClientCon.Close() didn't close the socket. The current version of the patches doesn't use the raw shutdown() syscall.
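A minimal sketch of the behaviour described above, assuming only the standard library: net.FileConn duplicates the descriptor, so closing the returned conn leaves the original socket open, while CloseWrite on the *net.UnixConn still performs the SHUT_WR half-close because shutdown(2) acts on the socket rather than on a single descriptor.

package main

import (
	"net"
	"os"
)

// halfCloseWrite issues SHUT_WR on the socket behind f without closing f itself.
func halfCloseWrite(f *os.File) error {
	// net.FileConn dup()s the descriptor, so conn.Close() below does not
	// close the original *os.File.
	conn, err := net.FileConn(f)
	if err != nil {
		return err
	}
	defer conn.Close()

	uc, ok := conn.(*net.UnixConn)
	if !ok {
		return nil // this sketch only handles unix sockets
	}
	// The peer sees EOF on reads once the write side is shut down.
	return uc.CloseWrite()
}

func main() {}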

// The current runc pre-dump approach, however, is
// to start criu in PRE_DUMP once for a single pre-dump
// and not for the whole series of pre-dump, pre-dump, ..., dump
if !st.Success() && *req.Type != criurpc.CriuReqType_PRE_DUMP {
Contributor

I think we should still check for DUMP here, right? Pre-dump is only an option and we can still specify DUMP.

Contributor Author

We skip the exit-status check only for pre-dump, because it is not zero in that case.

@dqminh dqminh mentioned this pull request Mar 3, 2017
fds, err := syscall.ParseUnixRights(&scm[0])

process.consoleChan = make(chan *os.File, 1)
process.consoleChan <- os.NewFile(uintptr(fds[0]), "console")
Member

I'm not sure if we want to land this before or after #1356, but we need to make sure this is handled properly given the changes in #1356.

Contributor Author

@cyphar I rebased my changes on #1356 and everything works as expected, so there is no problem with #1356
https://github.com/avagin/runc/tree/cr-console-after-1356

Member

Cool!

CT_ACT_RESTORE
)

func startContainer(context *cli.Context, spec *specs.Spec, action CtAct, criuOpts *libcontainer.CriuOpts) (int, error) {
Member

I don't really like passing criuOpts to startContainer. I do like the action argument though.

Contributor Author

@cyphar do you have any ideas on how else to pass criuOpts to container.Restore?

@avagin
Contributor Author

avagin commented Mar 22, 2017

0.33s$ git-validation -run DCO,short-subject -v -range ${TRAVIS_BRANCH}..FETCH_HEAD
 * a0adae1 "Merge a0cbd8047f97e6fb21f9567bf800889e22744d1d into 6b574d57594aedca4bb45b679788af73ae0d65f9" ... FAIL
  - PASS - merge commits do not require DCO
  - FAIL - commit subject exceeds 90 characters

@avagin
Contributor Author

avagin commented Mar 22, 2017

The problem may be in 36b61ae. (cc: @vbatts)

@vbatts
Member

vbatts commented Mar 22, 2017

@avagin yup. #1383 is up to fix it

@avagin avagin force-pushed the cr-console branch 4 times, most recently from 89b73a7 to 7360326 Compare March 29, 2017 20:16
@dqminh
Contributor

dqminh commented Apr 11, 2017

@avagin I tried the patch but couldn't make it work.

> criu --version
Version: 2.12.1

> runc run ctr
/ #
> runc --debug --log /containers/busybox checkpoint ctr --image-path /containers/busybox --work-path /containers/busybox
criu failed: type NOTIFY errno 0
log file: /containers/busybox/dump.log
dump.log
# dump.log
(00.000042) ========================================
(00.000057) Dumping processes (pid: 8242)
(00.000058) ========================================
(00.000061) Running pre-dump scripts
(00.000063) 	RPC
(00.000160) Pagemap is fully functional
(00.000181) Found anon-shmem device at 5
(00.000185) Reset 8269's dirty tracking
(00.000211)  ... done
(00.000226) Dirty track supported on kernel
(00.000259) Found task size of 7ffffffff000
(00.001639) Found mmap_min_addr 0x10000
(00.001648) irmap: Searching irmap cache in work dir
(00.001654) No irmap-cache image
(00.001655) irmap: Searching irmap cache in parent
(00.001658) irmap: No irmap cache
(00.001665) cpu: fpu:1 fxsr:1 xsave:1
(00.001725) vdso: Parsing at 7fffc73f6000 7fffc73f8000
(00.001730) vdso: PT_LOAD p_vaddr: 0
(00.001732) vdso: DT_HASH: 120
(00.001733) vdso: DT_STRTAB: 298
(00.001734) vdso: DT_SYMTAB: 1a8
(00.001735) vdso: DT_STRSZ: 5e
(00.001736) vdso: DT_SYMENT: 18
(00.001738) vdso: nbucket 3 nchain a bucket 7fffc73f6128 chain 7fffc73f6134
(00.001742) vdso: rt [vdso] 7fffc73f6000-7fffc73f8000 [vvar] 7fffc73f4000-7fffc73f6000
(00.001750) cg-prop: Parsing controller "cpu"
(00.001752) cg-prop: 	Strategy "replace"
(00.001754) cg-prop: 	Property "cpu.shares"
(00.001756) cg-prop: 	Property "cpu.cfs_period_us"
(00.001757) cg-prop: 	Property "cpu.cfs_quota_us"
(00.001758) cg-prop: 	Property "cpu.rt_period_us"
(00.001760) cg-prop: 	Property "cpu.rt_runtime_us"
(00.001761) cg-prop: Parsing controller "memory"
(00.001762) cg-prop: 	Strategy "replace"
(00.001764) cg-prop: 	Property "memory.limit_in_bytes"
(00.001765) cg-prop: 	Property "memory.memsw.limit_in_bytes"
(00.001766) cg-prop: 	Property "memory.swappiness"
(00.001768) cg-prop: 	Property "memory.soft_limit_in_bytes"
(00.001769) cg-prop: 	Property "memory.move_charge_at_immigrate"
(00.001770) cg-prop: 	Property "memory.oom_control"
(00.001771) cg-prop: 	Property "memory.use_hierarchy"
(00.001773) cg-prop: 	Property "memory.kmem.limit_in_bytes"
(00.001774) cg-prop: 	Property "memory.kmem.tcp.limit_in_bytes"
(00.001775) cg-prop: Parsing controller "cpuset"
(00.001777) cg-prop: 	Strategy "replace"
(00.001778) cg-prop: 	Property "cpuset.cpus"
(00.001779) cg-prop: 	Property "cpuset.mems"
(00.001780) cg-prop: 	Property "cpuset.memory_migrate"
(00.001782) cg-prop: 	Property "cpuset.cpu_exclusive"
(00.001783) cg-prop: 	Property "cpuset.mem_exclusive"
(00.001784) cg-prop: 	Property "cpuset.mem_hardwall"
(00.001785) cg-prop: 	Property "cpuset.memory_spread_page"
(00.001787) cg-prop: 	Property "cpuset.memory_spread_slab"
(00.001788) cg-prop: 	Property "cpuset.sched_load_balance"
(00.001789) cg-prop: 	Property "cpuset.sched_relax_domain_level"
(00.001790) cg-prop: Parsing controller "blkio"
(00.001792) cg-prop: 	Strategy "replace"
(00.001793) cg-prop: 	Property "blkio.weight"
(00.001794) cg-prop: Parsing controller "freezer"
(00.001796) cg-prop: 	Strategy "replace"
(00.001797) cg-prop: Parsing controller "perf_event"
(00.001798) cg-prop: 	Strategy "replace"
(00.001800) cg-prop: Parsing controller "net_cls"
(00.001801) cg-prop: 	Strategy "replace"
(00.001802) cg-prop: 	Property "net_cls.classid"
(00.001804) cg-prop: Parsing controller "net_prio"
(00.001805) cg-prop: 	Strategy "replace"
(00.001806) cg-prop: 	Property "net_prio.ifpriomap"
(00.001808) cg-prop: Parsing controller "pids"
(00.001809) cg-prop: 	Strategy "replace"
(00.001810) cg-prop: 	Property "pids.max"
(00.001812) cg-prop: Parsing controller "devices"
(00.001813) cg-prop: 	Strategy "replace"
(00.001814) cg-prop: 	Property "devices.list"
(00.001840) Perparing image inventory (version 1)
(00.001859) Add pid ns 1 pid 8269
(00.001863) Add net ns 2 pid 8269
(00.001867) Add ipc ns 3 pid 8269
(00.001871) Add uts ns 4 pid 8269
(00.001875) Add mnt ns 5 pid 8269
(00.001878) Add user ns 6 pid 8269
(00.001882) Add cgroup ns 7 pid 8269
(00.001883) cg: Dumping cgroups for 8269
(00.001894) cg:  `- New css ID 1
(00.001896) cg:     `- [blkio] -> [/user.slice] [0]
(00.001898) cg:     `- [cpu,cpuacct] -> [/user.slice] [0]
(00.001899) cg:     `- [cpuset] -> [/] [0]
(00.001900) cg:     `- [devices] -> [/user.slice] [0]
(00.001906) cg:     `- [freezer] -> [/] [0]
(00.001907) cg:     `- [memory] -> [/user.slice] [0]
(00.001908) cg:     `- [name=systemd] -> [/user.slice/user-1000.slice/[email protected]/gnome-terminal-server.service] [0]
(00.001909) cg:     `- [net_cls,net_prio] -> [/] [0]
(00.001911) cg:     `- [perf_event] -> [/] [0]
(00.001912) cg:     `- [pids] -> [/user.slice/user-1000.slice/[email protected]] [0]
(00.001913) cg: Set 1 is criu one
(00.001990) Seized task 8242, state 1
(00.002003) Collected (4 attempts, 0 in_progress)
(00.002011) Collected (4 attempts, 0 in_progress)
(00.002014) Collected 8242 in 1 state
(00.002022) Will take pid namespace in the image
(00.002024) Add pid ns 8 pid 8242
(00.002027) Will take net namespace in the image
(00.002028) Add net ns 9 pid 8242
(00.002031) Will take ipc namespace in the image
(00.002032) Add ipc ns 10 pid 8242
(00.002036) Will take uts namespace in the image
(00.002037) Add uts ns 11 pid 8242
(00.002040) Will take mnt namespace in the image
(00.002042) Add mnt ns 12 pid 8242
(00.002049) Lock network
(00.002050) Running network-lock scripts
(00.002051) 	RPC
(00.003718) 	type ext4 source /dev/mapper/hanamura--vg-root mnt_id 268 s_dev 0xfe00001 /containers/busybox/rootfs @ ./ flags 0x200001 options errors=remount-ro,data=ordered
(00.003732) 	type proc source proc mnt_id 269 s_dev 0x29 / @ ./proc flags 0x200000 options 
(00.003737) 	type tmpfs source tmpfs mnt_id 270 s_dev 0x2d / @ ./dev flags 0x1000002 options size=65536k,mode=755
(00.003743) 	type devpts source devpts mnt_id 271 s_dev 0x2e / @ ./dev/pts flags 0x20000a options gid=5,mode=620,ptmxmode=666
(00.003752) 	type tmpfs source shm mnt_id 272 s_dev 0x2f / @ ./dev/shm flags 0x20000e options size=65536k
(00.003756) 	type mqueue source mqueue mnt_id 273 s_dev 0x27 / @ ./dev/mqueue flags 0x20000e options 
(00.003759) 	type sysfs source sysfs mnt_id 274 s_dev 0x30 / @ ./sys flags 0x20000f options 
(00.003763) 	type tmpfs source tmpfs mnt_id 275 s_dev 0x31 / @ ./sys/fs/cgroup flags 0x20000f options mode=755
(00.003768) 	type cgroup source cgroup mnt_id 276 s_dev 0x17 /user.slice/user-1000.slice/[email protected]/gnome-terminal-server.service/ctr @ ./sys/fs/cgroup/systemd flags 0x20000f options xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd
(00.003772) 	type cgroup source cgroup mnt_id 277 s_dev 0x1a /user.slice/ctr @ ./sys/fs/cgroup/cpu,cpuacct flags 0x20000f options cpu,cpuacct
(00.003796) 	type cgroup source cgroup mnt_id 279 s_dev 0x1b /ctr @ ./sys/fs/cgroup/net_cls,net_prio flags 0x20000f options net_cls,net_prio
(00.003800) 	type cgroup source cgroup mnt_id 287 s_dev 0x1c /ctr @ ./sys/fs/cgroup/cpuset flags 0x20000f options cpuset
(00.003804) 	type cgroup source cgroup mnt_id 293 s_dev 0x1d /ctr @ ./sys/fs/cgroup/perf_event flags 0x20000f options perf_event
(00.003809) 	type cgroup source cgroup mnt_id 294 s_dev 0x1e /user.slice/ctr @ ./sys/fs/cgroup/blkio flags 0x20000f options blkio
(00.003812) 	type cgroup source cgroup mnt_id 295 s_dev 0x1f /user.slice/ctr @ ./sys/fs/cgroup/devices flags 0x20000f options devices
(00.003816) 	type cgroup source cgroup mnt_id 299 s_dev 0x20 /ctr @ ./sys/fs/cgroup/freezer flags 0x20000f options freezer
(00.003819) 	type cgroup source cgroup mnt_id 300 s_dev 0x21 /user.slice/ctr @ ./sys/fs/cgroup/memory flags 0x20000f options memory
(00.003823) 	type cgroup source cgroup mnt_id 301 s_dev 0x22 /user.slice/user-1000.slice/[email protected]/ctr @ ./sys/fs/cgroup/pids flags 0x20000f options pids
(00.003827) 	type devpts source devpts mnt_id 231 s_dev 0x2e /0 @ ./dev/console flags 0x20000a options gid=5,mode=620,ptmxmode=666
(00.003839) 	type proc source proc mnt_id 233 s_dev 0x29 /asound @ ./proc/asound flags 0x200001 options 
(00.003843) 	type proc source proc mnt_id 234 s_dev 0x29 /bus @ ./proc/bus flags 0x200001 options 
(00.003847) 	type proc source proc mnt_id 235 s_dev 0x29 /fs @ ./proc/fs flags 0x200001 options 
(00.003850) 	type proc source proc mnt_id 236 s_dev 0x29 /irq @ ./proc/irq flags 0x200001 options 
(00.003853) 	type proc source proc mnt_id 237 s_dev 0x29 /sys @ ./proc/sys flags 0x200001 options 
(00.003861) 	type proc source proc mnt_id 238 s_dev 0x29 /sysrq-trigger @ ./proc/sysrq-trigger flags 0x200001 options 
(00.003865) 	type tmpfs source tmpfs mnt_id 239 s_dev 0x2d /null @ ./proc/kcore flags 0x1000002 options size=65536k,mode=755
(00.003868) 	type tmpfs source tmpfs mnt_id 240 s_dev 0x2d /null @ ./proc/timer_list flags 0x1000002 options size=65536k,mode=755
(00.003872) 	type tmpfs source tmpfs mnt_id 241 s_dev 0x2d /null @ ./proc/sched_debug flags 0x1000002 options size=65536k,mode=755
(00.003876) 	type tmpfs source tmpfs mnt_id 242 s_dev 0x32 / @ ./sys/firmware flags 0x200001 options 
(00.003880) mnt: Building mountpoints tree
(00.003881) mnt: 	Building plain mount tree
(00.003882) mnt: 		Working on 242->274
(00.003883) mnt: 		Working on 241->269
(00.003885) mnt: 		Working on 240->269
(00.003886) mnt: 		Working on 239->269
(00.003887) mnt: 		Working on 238->269
(00.003888) mnt: 		Working on 237->269
(00.003889) mnt: 		Working on 236->269
(00.003890) mnt: 		Working on 235->269
(00.003891) mnt: 		Working on 234->269
(00.003893) mnt: 		Working on 233->269
(00.003894) mnt: 		Working on 231->270
(00.003895) mnt: 		Working on 301->275
(00.003896) mnt: 		Working on 300->275
(00.003897) mnt: 		Working on 299->275
(00.003898) mnt: 		Working on 295->275
(00.003899) mnt: 		Working on 294->275
(00.003900) mnt: 		Working on 293->275
(00.003901) mnt: 		Working on 287->275
(00.003903) mnt: 		Working on 279->275
(00.003904) mnt: 		Working on 277->275
(00.003905) mnt: 		Working on 276->275
(00.003906) mnt: 		Working on 275->274
(00.003907) mnt: 		Working on 274->268
(00.003908) mnt: 		Working on 273->270
(00.003909) mnt: 		Working on 272->270
(00.003910) mnt: 		Working on 271->270
(00.003912) mnt: 		Working on 270->268
(00.003913) mnt: 		Working on 269->268
(00.003914) mnt: 		Working on 268->230
(00.003915) mnt: 	Resorting siblings on 268
(00.003916) mnt: 	Resorting siblings on 274
(00.003917) mnt: 	Resorting siblings on 242
(00.003919) mnt: 	Resorting siblings on 275
(00.003920) mnt: 	Resorting siblings on 301
(00.003921) mnt: 	Resorting siblings on 300
(00.003922) mnt: 	Resorting siblings on 299
(00.003923) mnt: 	Resorting siblings on 295
(00.003924) mnt: 	Resorting siblings on 294
(00.003925) mnt: 	Resorting siblings on 293
(00.003926) mnt: 	Resorting siblings on 287
(00.003927) mnt: 	Resorting siblings on 279
(00.003928) mnt: 	Resorting siblings on 277
(00.003929) mnt: 	Resorting siblings on 276
(00.003930) mnt: 	Resorting siblings on 270
(00.003931) mnt: 	Resorting siblings on 231
(00.003933) mnt: 	Resorting siblings on 273
(00.003934) mnt: 	Resorting siblings on 272
(00.003935) mnt: 	Resorting siblings on 271
(00.003936) mnt: 	Resorting siblings on 269
(00.003937) mnt: 	Resorting siblings on 241
(00.003938) mnt: 	Resorting siblings on 240
(00.003939) mnt: 	Resorting siblings on 239
(00.003940) mnt: 	Resorting siblings on 238
(00.003941) mnt: 	Resorting siblings on 237
(00.003942) mnt: 	Resorting siblings on 236
(00.003943) mnt: 	Resorting siblings on 235
(00.003944) mnt: 	Resorting siblings on 234
(00.003945) mnt: 	Resorting siblings on 233
(00.003946) mnt: Done:
(00.003947) mnt: [./](268->230)
(00.003949) mnt:  [./sys](274->268)
(00.003950) mnt:   [./sys/firmware](242->274)
(00.003952) mnt:   <--
(00.003953) mnt:   [./sys/fs/cgroup](275->274)
(00.003954) mnt:    [./sys/fs/cgroup/pids](301->275)
(00.003955) mnt:    <--
(00.003956) mnt:    [./sys/fs/cgroup/systemd](276->275)
(00.003957) mnt:    <--
(00.003958) mnt:    [./sys/fs/cgroup/cpu,cpuacct](277->275)
(00.003959) mnt:    <--
(00.003960) mnt:    [./sys/fs/cgroup/net_cls,net_prio](279->275)
(00.003962) mnt:    <--
(00.003963) mnt:    [./sys/fs/cgroup/cpuset](287->275)
(00.003964) mnt:    <--
(00.003965) mnt:    [./sys/fs/cgroup/perf_event](293->275)
(00.003966) mnt:    <--
(00.003967) mnt:    [./sys/fs/cgroup/blkio](294->275)
(00.003968) mnt:    <--
(00.003969) mnt:    [./sys/fs/cgroup/devices](295->275)
(00.003970) mnt:    <--
(00.003971) mnt:    [./sys/fs/cgroup/freezer](299->275)
(00.003972) mnt:    <--
(00.003975) mnt:    [./sys/fs/cgroup/memory](300->275)
(00.003977) mnt:    <--
(00.003978) mnt:   <--
(00.003979) mnt:  <--
(00.003980) mnt:  [./proc](269->268)
(00.003981) mnt:   [./proc/sched_debug](241->269)
(00.003982) mnt:   <--
(00.003983) mnt:   [./proc/asound](233->269)
(00.003984) mnt:   <--
(00.003985) mnt:   [./proc/bus](234->269)
(00.003986) mnt:   <--
(00.003988) mnt:   [./proc/fs](235->269)
(00.003989) mnt:   <--
(00.003990) mnt:   [./proc/irq](236->269)
(00.003991) mnt:   <--
(00.003992) mnt:   [./proc/sys](237->269)
(00.003993) mnt:   <--
(00.003994) mnt:   [./proc/sysrq-trigger](238->269)
(00.003995) mnt:   <--
(00.003996) mnt:   [./proc/kcore](239->269)
(00.003997) mnt:   <--
(00.003998) mnt:   [./proc/timer_list](240->269)
(00.004000) mnt:   <--
(00.004001) mnt:  <--
(00.004002) mnt:  [./dev](270->268)
(00.004003) mnt:   [./dev/console](231->270)
(00.004004) mnt:   <--
(00.004005) mnt:   [./dev/pts](271->270)
(00.004006) mnt:   <--
(00.004007) mnt:   [./dev/shm](272->270)
(00.004008) mnt:   <--
(00.004009) mnt:   [./dev/mqueue](273->270)
(00.004011) mnt:   <--
(00.004012) mnt:  <--
(00.004013) mnt: <--
(00.004016) mnt: Found /dev/null mapping for ./proc/sched_debug mountpoint
(00.004018) mnt: Found /dev/null mapping for ./proc/timer_list mountpoint
(00.004019) mnt: Found /dev/null mapping for ./proc/kcore mountpoint
(00.004022) mnt: Found /sys/fs/cgroup/pids mapping for ./sys/fs/cgroup/pids mountpoint
(00.004024) mnt: Found /sys/fs/cgroup/memory mapping for ./sys/fs/cgroup/memory mountpoint
(00.004025) mnt: Found /sys/fs/cgroup/freezer mapping for ./sys/fs/cgroup/freezer mountpoint
(00.004026) mnt: Found /sys/fs/cgroup/devices mapping for ./sys/fs/cgroup/devices mountpoint
(00.004028) mnt: Found /sys/fs/cgroup/blkio mapping for ./sys/fs/cgroup/blkio mountpoint
(00.004029) mnt: Found /sys/fs/cgroup/perf_event mapping for ./sys/fs/cgroup/perf_event mountpoint
(00.004030) mnt: Found /sys/fs/cgroup/cpuset mapping for ./sys/fs/cgroup/cpuset mountpoint
(00.004031) mnt: Found /sys/fs/cgroup/net_cls,net_prio mapping for ./sys/fs/cgroup/net_cls,net_prio mountpoint
(00.004033) mnt: Found /sys/fs/cgroup/cpu,cpuacct mapping for ./sys/fs/cgroup/cpu,cpuacct mountpoint
(00.004034) mnt: Found /sys/fs/cgroup/systemd mapping for ./sys/fs/cgroup/systemd mountpoint
(00.004037) mnt: Inspecting sharing on 242 shared_id 0 master_id 0 (@./sys/firmware)
(00.004039) mnt: Inspecting sharing on 241 shared_id 0 master_id 0 (@./proc/sched_debug)
(00.004040) mnt: 	The mount 240 is bind for 241 (@./proc/timer_list -> @./proc/sched_debug)
(00.004041) mnt: 	The mount 239 is bind for 241 (@./proc/kcore -> @./proc/sched_debug)
(00.004043) mnt: 	The mount 270 is bind for 241 (@./dev -> @./proc/sched_debug)
(00.004044) mnt: Inspecting sharing on 240 shared_id 0 master_id 0 (@./proc/timer_list)
(00.004045) mnt: Inspecting sharing on 239 shared_id 0 master_id 0 (@./proc/kcore)
(00.004046) mnt: Inspecting sharing on 238 shared_id 0 master_id 0 (@./proc/sysrq-trigger)
(00.004048) mnt: 	The mount 237 is bind for 238 (@./proc/sys -> @./proc/sysrq-trigger)
(00.004049) mnt: 	The mount 236 is bind for 238 (@./proc/irq -> @./proc/sysrq-trigger)
(00.004050) mnt: 	The mount 235 is bind for 238 (@./proc/fs -> @./proc/sysrq-trigger)
(00.004051) mnt: 	The mount 234 is bind for 238 (@./proc/bus -> @./proc/sysrq-trigger)
(00.004053) mnt: 	The mount 233 is bind for 238 (@./proc/asound -> @./proc/sysrq-trigger)
(00.004054) mnt: 	The mount 269 is bind for 238 (@./proc -> @./proc/sysrq-trigger)
(00.004055) mnt: Inspecting sharing on 237 shared_id 0 master_id 0 (@./proc/sys)
(00.004056) mnt: Inspecting sharing on 236 shared_id 0 master_id 0 (@./proc/irq)
(00.004058) mnt: Inspecting sharing on 235 shared_id 0 master_id 0 (@./proc/fs)
(00.004059) mnt: Inspecting sharing on 234 shared_id 0 master_id 0 (@./proc/bus)
(00.004060) mnt: Inspecting sharing on 233 shared_id 0 master_id 0 (@./proc/asound)
(00.004061) mnt: Inspecting sharing on 231 shared_id 0 master_id 0 (@./dev/console)
(00.004062) mnt: 	The mount 271 is bind for 231 (@./dev/pts -> @./dev/console)
(00.004065) mnt: Inspecting sharing on 301 shared_id 0 master_id 0 (@./sys/fs/cgroup/pids)
(00.004066) mnt: Inspecting sharing on 300 shared_id 0 master_id 0 (@./sys/fs/cgroup/memory)
(00.004068) mnt: Inspecting sharing on 299 shared_id 0 master_id 0 (@./sys/fs/cgroup/freezer)
(00.004069) mnt: Inspecting sharing on 295 shared_id 0 master_id 0 (@./sys/fs/cgroup/devices)
(00.004070) mnt: Inspecting sharing on 294 shared_id 0 master_id 0 (@./sys/fs/cgroup/blkio)
(00.004071) mnt: Inspecting sharing on 293 shared_id 0 master_id 0 (@./sys/fs/cgroup/perf_event)
(00.004073) mnt: Inspecting sharing on 287 shared_id 0 master_id 0 (@./sys/fs/cgroup/cpuset)
(00.004074) mnt: Inspecting sharing on 279 shared_id 0 master_id 0 (@./sys/fs/cgroup/net_cls,net_prio)
(00.004075) mnt: Inspecting sharing on 277 shared_id 0 master_id 0 (@./sys/fs/cgroup/cpu,cpuacct)
(00.004076) mnt: Inspecting sharing on 276 shared_id 0 master_id 0 (@./sys/fs/cgroup/systemd)
(00.004077) mnt: Inspecting sharing on 275 shared_id 0 master_id 0 (@./sys/fs/cgroup)
(00.004079) mnt: Inspecting sharing on 274 shared_id 0 master_id 0 (@./sys)
(00.004080) mnt: Inspecting sharing on 273 shared_id 0 master_id 0 (@./dev/mqueue)
(00.004081) mnt: Inspecting sharing on 272 shared_id 0 master_id 0 (@./dev/shm)
(00.004082) mnt: Inspecting sharing on 271 shared_id 0 master_id 0 (@./dev/pts)
(00.004083) mnt: Inspecting sharing on 270 shared_id 0 master_id 0 (@./dev)
(00.004085) mnt: Inspecting sharing on 269 shared_id 0 master_id 0 (@./proc)
(00.004086) mnt: Inspecting sharing on 268 shared_id 0 master_id 0 (@./)
(00.004090) Collecting netns 9/8242
(00.004092) Switching to 8242's net for collecting sockets
(00.004160) sk unix: 	Collected: ino 0x5b36c peer_ino 0 family    1 type    5 state  7 name (null)
(00.004513) Collect netlink sock 0x5c206
(00.004516) Collect netlink sock 0x5c214
(00.004517) Collect netlink sock 0x5b36b
(00.004518) Collect netlink sock 0x5c21d
(00.004520) Collect netlink sock 0x5c213
(00.004521) Collect netlink sock 0x5c209
(00.004522) Collect netlink sock 0x5c21e
(00.004523) Collect netlink sock 0x5c207
(00.004524) Collect netlink sock 0x5c208
(00.004533) ========================================
(00.004534) Dumping task (pid: 8242)
(00.004536) ========================================
(00.004537) Obtaining task stat ... 
(00.004554) 
(00.004556) Collecting mappings (pid: 8242)
(00.004557) ----------------------------------------
(00.004650) Dumping path for -3 fd via self 9 [/bin/sh]
(00.004689) vma 6fa000 borrows vfi from previous 400000
(00.004723) Collected, longest area occupies 251 pages
(00.004725) 0x400000-0x4fb000 (1004K) prot 0x5 flags 0x2 fdflags 0 st 0x41 off 0 reg fp  shmid: 0x1
(00.004727) 0x6fa000-0x6fb000 (4K) prot 0x3 flags 0x2 fdflags 0 st 0x41 off 0xfa000 reg fp  shmid: 0x1
(00.004729) 0x6fb000-0x700000 (20K) prot 0x3 flags 0x22 fdflags 0 st 0x201 off 0 reg ap  shmid: 0
(00.004731) 0x103e000-0x1045000 (28K) prot 0x3 flags 0x22 fdflags 0 st 0x221 off 0 reg heap ap  shmid: 0
(00.004732) 0x7ffe29ffc000-0x7ffe2a01e000 (136K) prot 0x3 flags 0x122 fdflags 0 st 0x201 off 0 reg ap  shmid: 0
(00.004734) 0x7ffe2a06f000-0x7ffe2a071000 (8K) prot 0x1 flags 0x22 fdflags 0 st 0x1201 off 0 reg vvar ap  shmid: 0
(00.004736) 0x7ffe2a071000-0x7ffe2a073000 (8K) prot 0x5 flags 0x22 fdflags 0 st 0x209 off 0 reg vdso ap  shmid: 0
(00.004737) 0xffffffffff600000-0xffffffffff601000 (4K) prot 0x5 flags 0x22 fdflags 0 st 0x204 off 0 vsys ap  shmid: 0
(00.004739) ----------------------------------------
(00.004740) 
(00.004741) Collecting fds (pid: 8242)
(00.004742) ----------------------------------------
(00.004754) Found 4 file descriptors
(00.004755) ----------------------------------------
(00.004761) Dump private signals of 8242
(00.004765) Dump shared signals of 8242
(00.004772) Parasite syscall_ip at 0x400000
(00.004874) Set up parasite blob using memfd
(00.004876) Putting parasite blob into 0x7fd7ff338000->0x7fd1cf482000
(00.004890) Dumping GP/FPU registers for 8242
(00.004891) Warn  (criu/arch/x86/crtools.c:138): Will restore 8242 with interrupted system call
(00.004899) xsave runtime structure
(00.004901) -----------------------
(00.004902) cwd:37f swd:0 twd:0 fop:0 mxcsr:1f80 mxcsr_mask:ffff
(00.004903) magic1:46505853 extended_size:344 xstate_bv:7 xstate_size:340
(00.004905) xstate_bv: 7
(00.004906) -----------------------
(00.004907) Putting tsock into pid 8242
(00.004957) Wait for parasite being daemonized...
(00.004959) Wait for ack 2 on daemon socket
pie: 1: Running daemon thread leader
pie: 1: __sent ack msg: 2 2 0
pie: 1: Daemon waits for command
(00.004978) Fetched ack: 2 2 0
(00.004980) Parasite 8242 has been switched to daemon mode
(00.004985) Sent msg to daemon 13 0 0
pie: 1: __fetched msg: 13 0 0
pie: 1: __sent ack msg: 13 13 0
pie: 1: Daemon waits for command
(00.004997) Wait for ack 13 on daemon socket
(00.005000) Fetched ack: 13 13 0
(00.005007) Sent msg to daemon 15 0 0
pie: 1: __fetched msg: 15 0 0
(00.005008) Wait for ack 15 on daemon socket
pie: 1: __sent ack msg: 15 15 0
(00.005013) Fetched ack: 15 15 0
pie: 1: Daemon waits for command
(00.005019) Sent msg to daemon 11 0 0
pie: 1: __fetched msg: 11 0 0
(00.005021) Wait for ack 11 on daemon socket
pie: 1: __sent ack msg: 11 11 0
(00.005026) Fetched ack: 11 11 0
pie: 1: Daemon waits for command
(00.005028) sid=1 pgid=1 pid=1
(00.005041) 
(00.005042) Dumping opened files (pid: 8242)
(00.005043) ----------------------------------------
(00.005047) Sent msg to daemon 12 0 0
pie: 1: __fetched msg: 12 0 0
pie: 1: __sent ack msg: 12 12 0
pie: 1: Daemon waits for command
(00.005055) Wait for ack 12 on daemon socket
(00.005057) Fetched ack: 12 12 0
(00.005068) 8242 fdinfo 0: pos:                0 flags:           100002/0
(00.005077) tty: Dumping tty 9 with id 0x2
(00.005080) Dumping path for 0 fd via self 9 [/dev/pts/0]
(00.005085) Sent msg to daemon 14 0 0
(00.005086) Wait for ack 14 on daemon socket
pie: 1: __fetched msg: 14 0 0
pie: 1: __sent ack msg: 14 14 0
pie: 1: Daemon waits for command
(00.005093) Fetched ack: 14 14 0
(00.005114) fdinfo: type: 0xb flags: 0100002/0 pos:        0 fd: 0
(00.005129) 8242 fdinfo 1: pos:                0 flags:           100002/0
(00.005136) fdinfo: type: 0xb flags: 0100002/0 pos:        0 fd: 1
(00.005143) 8242 fdinfo 2: pos:                0 flags:           100002/0
(00.005147) fdinfo: type: 0xb flags: 0100002/0 pos:        0 fd: 2
(00.005156) 8242 fdinfo 10: pos:                0 flags:          2100002/0x1
(00.005161) tty: Dumping tty 12 with id 0x3
(00.005163) Dumping path for 10 fd via self 12 [/dev/tty]
(00.005169) Sent msg to daemon 14 0 0
pie: 1: __fetched msg: 14 0 0
(00.005170) Wait for ack 14 on daemon socket
pie: 1: __sent ack msg: 14 14 0
pie: 1: Daemon waits for command
(00.005176) Fetched ack: 14 14 0
(00.005181) fdinfo: type: 0xb flags: 02100002/01 pos:        0 fd: 10
(00.005183) ----------------------------------------
(00.005193) Sent msg to daemon 6 0 0
pie: 1: __fetched msg: 6 0 0
(00.005195) Wait for ack 6 on daemon socket
pie: 1: __sent ack msg: 6 6 0
(00.005198) Fetched ack: 6 6 0
pie: 1: Daemon waits for command
(00.005200) 
(00.005202) Dumping pages (type: 69 pid: 8242)
(00.005204) ----------------------------------------
(00.005205)    Private vmas 251/302 pages
(00.005209) pagemap-cache: created for pid 8242 (takes 4096 bytes)
(00.005211) page-pipe: Create page pipe for 302 segs
(00.005212) page-pipe: Will grow page pipe (iov off is 0)
(00.005232) pagemap-cache: filling VMA 400000-4fb000 (1004K) [l:400000 h:600000]
(00.005234) pagemap-cache: 	          400000-4fb000           nr:1     cov:1028096
(00.005236) pagemap-cache: 	simple mode [l:400000 h:4fb000]
(00.005241) page-pipe: Add iov to page pipe (0 iovs, 0/302 total)
(00.005244) Pagemap generated: 1 pages 0 holes
(00.005245) pagemap-cache: filling VMA 6fa000-6fb000 (4K) [l:600000 h:800000]
(00.005247) page-pipe: Add iov to page pipe (1 iovs, 1/302 total)
(00.005249) Pagemap generated: 1 pages 0 holes
(00.005250) pagemap-cache: filling VMA 6fb000-700000 (20K) [l:600000 h:800000]
(00.005252) page-pipe: Add iov to page pipe (2 iovs, 2/302 total)
(00.005254) Pagemap generated: 4 pages 0 holes
(00.005255) pagemap-cache: filling VMA 103e000-1045000 (28K) [l:1000000 h:1200000]
(00.005259) pagemap-cache: 	         103e000-1045000          nr:1     cov:28672
(00.005261) pagemap-cache: 	simple mode [l:103e000 h:1045000]
(00.005264) page-pipe: Add iov to page pipe (3 iovs, 3/302 total)
(00.005265) Pagemap generated: 7 pages 0 holes
(00.005266) pagemap-cache: filling VMA 7ffe29ffc000-7ffe2a01e000 (136K) [l:7ffe29e00000 h:7ffe2a000000]
(00.005270) page-pipe: Add iov to page pipe (4 iovs, 4/302 total)
(00.005271) Pagemap generated: 2 pages 0 holes
(00.005272) pagemap-cache: filling VMA 7ffe2a06f000-7ffe2a071000 (8K) [l:7ffe2a000000 h:7ffe2a200000]
(00.005273) pagemap-cache: 	    7ffe2a06f000-7ffe2a071000     nr:1     cov:8192
(00.005275) pagemap-cache: 	    7ffe2a071000-7ffe2a073000     nr:2     cov:16384
(00.005276) pagemap-cache: 	cache  mode [l:7ffe2a06f000 h:7ffe2a200000]
(00.005279) Pagemap generated: 0 pages 0 holes
(00.005280) page-pipe: Add iov to page pipe (5 iovs, 5/302 total)
(00.005282) page-pipe: Grow pipe 10 -> 20
(00.005283) Pagemap generated: 2 pages 0 holes
(00.005285) page-pipe: Page pipe:
(00.005286) page-pipe: * 1 pipes 6/302 iovs:
(00.005287) page-pipe: 	buf 17 pages, 6 iovs:
(00.005288) page-pipe: 		0x400000 1
(00.005289) page-pipe: 		0x6fa000 2
(00.005290) page-pipe: 		0x6fd000 3
(00.005292) page-pipe: 		0x103e000 7
(00.005293) page-pipe: 		0x7ffe2a01c000 2
(00.005294) page-pipe: 		0x7ffe2a071000 2
(00.005295) page-pipe: * 0 holes:
(00.005296) PPB: 17 pages 6 segs 32 pipe 0 off
(00.005300) Sent msg to daemon 7 0 0
pie: 1: __fetched msg: 7 0 0
(00.005303) Wait for ack 7 on daemon socket
pie: 1: __sent ack msg: 7 7 0
pie: 1: Daemon waits for command
(00.005311) Fetched ack: 7 7 0
(00.005313) Transferring pages:
(00.005314) 	buf 17/6
(00.005315) 	p 0x400000 [1]
(00.005323) 	p 0x6fa000 [2]
(00.005330) 	p 0x6fd000 [3]
(00.005337) 	p 0x103e000 [7]
(00.005350) 	p 0x7ffe2a01c000 [2]
(00.005356) 	p 0x7ffe2a071000 [2]
(00.005366) page-pipe: Killing page pipe
(00.005370) ----------------------------------------
(00.005373) Sent msg to daemon 6 0 0
(00.005374) Wait for ack 6 on daemon socket
pie: 1: __fetched msg: 6 0 0
pie: 1: __sent ack msg: 6 6 0
pie: 1: Daemon waits for command
(00.005379) Fetched ack: 6 6 0
(00.005383) Sent msg to daemon 8 0 0
pie: 1: __fetched msg: 8 0 0
(00.005384) Wait for ack 8 on daemon socket
pie: 1: __sent ack msg: 8 8 0
pie: 1: Daemon waits for command
(00.005393) Fetched ack: 8 8 0
(00.005409) Sent msg to daemon 9 0 0
pie: 1: __fetched msg: 9 0 0
(00.005411) Wait for ack 9 on daemon socket
pie: 1: __sent ack msg: 9 9 0
(00.005414) Fetched ack: 9 9 0
pie: 1: Daemon waits for command
(00.005417) Sent msg to daemon 10 0 0
pie: 1: __fetched msg: 10 0 0
(00.005419) Wait for ack 10 on daemon socket
pie: 1: __sent ack msg: 10 10 0
(00.005422) Fetched ack: 10 10 0
pie: 1: Daemon waits for command
(00.005424) 
(00.005427) Dumping core (pid: 8242)
(00.005428) ----------------------------------------
(00.005429) Obtaining personality ... 
(00.005436) Sent msg to daemon 3 0 0
pie: 1: __fetched msg: 3 0 0
(00.005438) Wait for ack 3 on daemon socket
pie: 1: __sent ack msg: 3 3 0
pie: 1: Daemon waits for command
(00.005446) Fetched ack: 3 3 0
(00.005449) 8242 has 0 sched policy
(00.005452) 	dumping 0 nice for 8242
(00.005453) dumping /proc/8242/loginuid
(00.005457) dumping /proc/8242/oom_score_adj
(00.005465) Sent msg to daemon 17 0 0
pie: 1: __fetched msg: 17 0 0
(00.005466) Wait for ack 17 on daemon socket
pie: 1: __sent ack msg: 17 17 0
pie: 1: Daemon waits for command
(00.005497) Fetched ack: 17 17 0
(00.005503) cg: Dumping cgroups for 8242
(00.005529) cg:  `- New css ID 2
(00.005531) cg:     `- [blkio] -> [/user.slice/ctr] [0]
(00.005533) cg:     `- [cpu,cpuacct] -> [/user.slice/ctr] [0]
(00.005534) cg:     `- [cpuset] -> [/ctr] [0]
(00.005535) cg:     `- [devices] -> [/user.slice/ctr] [0]
(00.005536) cg:     `- [freezer] -> [/ctr] [0]
(00.005537) cg:     `- [memory] -> [/user.slice/ctr] [0]
(00.005538) cg:     `- [name=systemd] -> [/user.slice/user-1000.slice/[email protected]/gnome-terminal-server.service/ctr] [0]
(00.005544) cg:     `- [net_cls,net_prio] -> [/ctr] [0]
(00.005545) cg:     `- [perf_event] -> [/ctr] [0]
(00.005546) cg:     `- [pids] -> [/user.slice/user-1000.slice/[email protected]/ctr] [0]
(00.036852) cg: adding cgroup /proc/self/fd/10/user.slice/ctr
(00.036882) cg: Dumping value 500 from /proc/self/fd/10/user.slice/ctr/blkio.weight
(00.036895) cg: Dumping value 0 from /proc/self/fd/10/user.slice/ctr/cgroup.clone_children
(00.036903) cg: Dumping value 0 from /proc/self/fd/10/user.slice/ctr/notify_on_release
(00.036910) cg: Dumping value  from /proc/self/fd/10/user.slice/ctr/cgroup.procs
(00.036917) cg: Dumping value  from /proc/self/fd/10/user.slice/ctr/tasks
(00.064806) cg: adding cgroup /proc/self/fd/10/user.slice/ctr
(00.064831) cg: Dumping value 1024 from /proc/self/fd/10/user.slice/ctr/cpu.shares
(00.064840) cg: Dumping value 100000 from /proc/self/fd/10/user.slice/ctr/cpu.cfs_period_us
(00.064848) cg: Dumping value -1 from /proc/self/fd/10/user.slice/ctr/cpu.cfs_quota_us
(00.064852) cg: Couldn't open /proc/self/fd/10/user.slice/ctr/cpu.rt_period_us. This cgroup property may not exist on this kernel
(00.064856) cg: Couldn't open /proc/self/fd/10/user.slice/ctr/cpu.rt_runtime_us. This cgroup property may not exist on this kernel
(00.064864) cg: Dumping value 0 from /proc/self/fd/10/user.slice/ctr/cgroup.clone_children
(00.064872) cg: Dumping value 0 from /proc/self/fd/10/user.slice/ctr/notify_on_release
(00.064878) cg: Dumping value  from /proc/self/fd/10/user.slice/ctr/cgroup.procs
(00.064885) cg: Dumping value  from /proc/self/fd/10/user.slice/ctr/tasks
(00.064891) cg: Dumping value 0 from /proc/self/fd/10/user.slice/ctr/cgroup.clone_children
(00.064897) cg: Dumping value 0 from /proc/self/fd/10/user.slice/ctr/notify_on_release
(00.064903) cg: Dumping value  from /proc/self/fd/10/user.slice/ctr/cgroup.procs
(00.064908) cg: Dumping value  from /proc/self/fd/10/user.slice/ctr/tasks
(00.096818) cg: adding cgroup /proc/self/fd/10/ctr
(00.096839) cg: Dumping value 0-3 from /proc/self/fd/10/ctr/cpuset.cpus
(00.096847) cg: Dumping value 0 from /proc/self/fd/10/ctr/cpuset.mems
(00.096858) cg: Dumping value 0 from /proc/self/fd/10/ctr/cpuset.memory_migrate
(00.096865) cg: Dumping value 0 from /proc/self/fd/10/ctr/cpuset.cpu_exclusive
(00.096872) cg: Dumping value 0 from /proc/self/fd/10/ctr/cpuset.mem_exclusive
(00.096879) cg: Dumping value 0 from /proc/self/fd/10/ctr/cpuset.mem_hardwall
(00.096886) cg: Dumping value 0 from /proc/self/fd/10/ctr/cpuset.memory_spread_page
(00.096897) cg: Dumping value 0 from /proc/self/fd/10/ctr/cpuset.memory_spread_slab
(00.096904) cg: Dumping value 1 from /proc/self/fd/10/ctr/cpuset.sched_load_balance
(00.096911) cg: Dumping value -1 from /proc/self/fd/10/ctr/cpuset.sched_relax_domain_level
(00.096918) cg: Dumping value 0 from /proc/self/fd/10/ctr/cgroup.clone_children
(00.096925) cg: Dumping value 0 from /proc/self/fd/10/ctr/notify_on_release
(00.096931) cg: Dumping value  from /proc/self/fd/10/ctr/cgroup.procs
(00.096937) cg: Dumping value  from /proc/self/fd/10/ctr/tasks
(00.140822) cg: adding cgroup /proc/self/fd/10/user.slice/ctr
(00.140850) cg: Dumping value c *:* m
b *:* m
c 1:3 rwm
c 1:8 rwm
c 1:7 rwm
c 5:0 rwm
c 1:5 rwm
c 1:9 rwm
c 5:1 rwm
c 136:* rwm
c 5:2 rwm
c 10:200 rwm from /proc/self/fd/10/user.slice/ctr/devices.list
(00.140860) cg: Dumping value 0 from /proc/self/fd/10/user.slice/ctr/cgroup.clone_children
(00.140867) cg: Dumping value 0 from /proc/self/fd/10/user.slice/ctr/notify_on_release
(00.140873) cg: Dumping value  from /proc/self/fd/10/user.slice/ctr/cgroup.procs
(00.140880) cg: Dumping value  from /proc/self/fd/10/user.slice/ctr/tasks
(00.180791) cg: adding cgroup /proc/self/fd/10/ctr
(00.180815) cg: Dumping value 0 from /proc/self/fd/10/ctr/cgroup.clone_children
(00.180827) cg: Dumping value 0 from /proc/self/fd/10/ctr/notify_on_release
(00.180834) cg: Dumping value  from /proc/self/fd/10/ctr/cgroup.procs
(00.180841) cg: Dumping value  from /proc/self/fd/10/ctr/tasks
(00.204827) cg: adding cgroup /proc/self/fd/10/user.slice/ctr
(00.204859) cg: Dumping value 9223372036854771712 from /proc/self/fd/10/user.slice/ctr/memory.limit_in_bytes
(00.204872) cg: Couldn't open /proc/self/fd/10/user.slice/ctr/memory.memsw.limit_in_bytes. This cgroup property may not exist on this kernel
(00.204880) cg: Dumping value 60 from /proc/self/fd/10/user.slice/ctr/memory.swappiness
(00.204889) cg: Dumping value 9223372036854771712 from /proc/self/fd/10/user.slice/ctr/memory.soft_limit_in_bytes
(00.204899) cg: Dumping value 0 from /proc/self/fd/10/user.slice/ctr/memory.move_charge_at_immigrate
(00.204913) cg: Dumping value 0 from /proc/self/fd/10/user.slice/ctr/memory.oom_control
(00.204922) cg: Dumping value 1 from /proc/self/fd/10/user.slice/ctr/memory.use_hierarchy
(00.204929) cg: Dumping value 9223372036854771712 from /proc/self/fd/10/user.slice/ctr/memory.kmem.limit_in_bytes
(00.204937) cg: Dumping value 9223372036854771712 from /proc/self/fd/10/user.slice/ctr/memory.kmem.tcp.limit_in_bytes
(00.204945) cg: Dumping value 0 from /proc/self/fd/10/user.slice/ctr/cgroup.clone_children
(00.204952) cg: Dumping value 0 from /proc/self/fd/10/user.slice/ctr/notify_on_release
(00.204958) cg: Dumping value  from /proc/self/fd/10/user.slice/ctr/cgroup.procs
(00.204965) cg: Dumping value  from /proc/self/fd/10/user.slice/ctr/tasks
(00.228770) cg: adding cgroup /proc/self/fd/10/user.slice/user-1000.slice/[email protected]/gnome-terminal-server.service/ctr
(00.228793) cg: Dumping value 0 from /proc/self/fd/10/user.slice/user-1000.slice/[email protected]/gnome-terminal-server.service/ctr/cgroup.clone_children
(00.228803) cg: Dumping value 1 from /proc/self/fd/10/user.slice/user-1000.slice/[email protected]/gnome-terminal-server.service/ctr/notify_on_release
(00.228810) cg: Dumping value  from /proc/self/fd/10/user.slice/user-1000.slice/[email protected]/gnome-terminal-server.service/ctr/cgroup.procs
(00.228818) cg: Dumping value  from /proc/self/fd/10/user.slice/user-1000.slice/[email protected]/gnome-terminal-server.service/ctr/tasks
(00.244813) cg: adding cgroup /proc/self/fd/10/ctr
(00.244836) cg: Dumping value 0 from /proc/self/fd/10/ctr/net_cls.classid
(00.244845) cg: Dumping value 0 from /proc/self/fd/10/ctr/cgroup.clone_children
(00.244856) cg: Dumping value 0 from /proc/self/fd/10/ctr/notify_on_release
(00.244863) cg: Dumping value  from /proc/self/fd/10/ctr/cgroup.procs
(00.244869) cg: Dumping value  from /proc/self/fd/10/ctr/tasks
(00.244878) cg: Dumping value lo 0
wlp3s0 0
docker0 0 from /proc/self/fd/10/ctr/net_prio.ifpriomap
(00.244884) cg: Dumping value 0 from /proc/self/fd/10/ctr/cgroup.clone_children
(00.244889) cg: Dumping value 0 from /proc/self/fd/10/ctr/notify_on_release
(00.244894) cg: Dumping value  from /proc/self/fd/10/ctr/cgroup.procs
(00.244899) cg: Dumping value  from /proc/self/fd/10/ctr/tasks
(00.284780) cg: adding cgroup /proc/self/fd/10/ctr
(00.284805) cg: Dumping value 0 from /proc/self/fd/10/ctr/cgroup.clone_children
(00.284813) cg: Dumping value 0 from /proc/self/fd/10/ctr/notify_on_release
(00.284820) cg: Dumping value  from /proc/self/fd/10/ctr/cgroup.procs
(00.284826) cg: Dumping value  from /proc/self/fd/10/ctr/tasks
(00.312739) cg: adding cgroup /proc/self/fd/10/user.slice/user-1000.slice/[email protected]/ctr
(00.312768) cg: Dumping value max from /proc/self/fd/10/user.slice/user-1000.slice/[email protected]/ctr/pids.max
(00.312778) cg: Dumping value 0 from /proc/self/fd/10/user.slice/user-1000.slice/[email protected]/ctr/cgroup.clone_children
(00.312787) cg: Dumping value 0 from /proc/self/fd/10/user.slice/user-1000.slice/[email protected]/ctr/notify_on_release
(00.312794) cg: Dumping value  from /proc/self/fd/10/user.slice/user-1000.slice/[email protected]/ctr/cgroup.procs
(00.312801) cg: Dumping value  from /proc/self/fd/10/user.slice/user-1000.slice/[email protected]/ctr/tasks
(00.312816) cg: Set 2 is root one
(00.312848) ----------------------------------------
(00.312855) Waiting for 8242 to trap
(00.312867) Daemon 8242 exited trapping
(00.312877) Sent msg to daemon 5 0 0
(00.312882) Force no-breakpoints restore
(00.312891) 8242 was trapped
(00.312900) 8242 is going to execute the syscall 45
(00.312910) 8242 was trapped
(00.312913) `- Expecting exit
(00.312919) 8242 was trapped
(00.312922) 8242 is going to execute the syscall 186
(00.312929) 8242 was trapped
(00.312931) `- Expecting exit
(00.312937) 8242 was trapped
(00.312940) 8242 is going to execute the syscall 1
pie: 1: __fetched msg: 5 0 0
(00.312950) 8242 was trapped
(00.312951) `- Expecting exit
(00.312956) 8242 was trapped
(00.312958) 8242 is going to execute the syscall 186
(00.312963) 8242 was trapped
(00.312964) `- Expecting exit
(00.312969) 8242 was trapped
(00.312971) 8242 is going to execute the syscall 186
(00.312975) 8242 was trapped
(00.312977) `- Expecting exit
(00.312982) 8242 was trapped
(00.312984) 8242 is going to execute the syscall 1
pie: 1: 1: new_sp=0x7fd1cf489008 ip 0x4a58c8
(00.312989) 8242 was trapped
(00.312991) `- Expecting exit
(00.312995) 8242 was trapped
(00.312998) 8242 is going to execute the syscall 3
(00.313006) 8242 was trapped
(00.313008) `- Expecting exit
(00.313012) 8242 was trapped
(00.313014) 8242 is going to execute the syscall 3
(00.313019) 8242 was trapped
(00.313020) `- Expecting exit
(00.313025) 8242 was trapped
(00.313027) 8242 is going to execute the syscall 15
(00.313033) 8242 was stopped
(00.313040) 8242 was trapped
(00.313042) 8242 is going to execute the syscall 186
(00.313047) 8242 was trapped
(00.313048) `- Expecting exit
(00.313053) 8242 was trapped
(00.313055) 8242 is going to execute the syscall 1
(00.313059) 8242 was trapped
(00.313061) `- Expecting exit
(00.313065) 8242 was trapped
(00.313067) 8242 is going to execute the syscall 11
(00.313085) 8242 was stopped
(00.313103) 
(00.313104) Dumping mm (pid: 8242)
(00.313106) ----------------------------------------
(00.313107) 0x400000-0x4fb000 (1004K) prot 0x5 flags 0x2 fdflags 0 st 0x41 off 0 reg fp  shmid: 0x1
(00.313110) 0x6fa000-0x6fb000 (4K) prot 0x3 flags 0x2 fdflags 0 st 0x41 off 0xfa000 reg fp  shmid: 0x1
(00.313112) 0x6fb000-0x700000 (20K) prot 0x3 flags 0x22 fdflags 0 st 0x201 off 0 reg ap  shmid: 0
(00.313114) 0x103e000-0x1045000 (28K) prot 0x3 flags 0x22 fdflags 0 st 0x221 off 0 reg heap ap  shmid: 0
(00.313116) 0x7ffe29ffc000-0x7ffe2a01e000 (136K) prot 0x3 flags 0x122 fdflags 0 st 0x201 off 0 reg ap  shmid: 0
(00.313118) 0x7ffe2a06f000-0x7ffe2a071000 (8K) prot 0x1 flags 0x22 fdflags 0 st 0x1201 off 0 reg vvar ap  shmid: 0
(00.313119) 0x7ffe2a071000-0x7ffe2a073000 (8K) prot 0x5 flags 0x22 fdflags 0 st 0x209 off 0 reg vdso ap  shmid: 0
(00.313121) 0xffffffffff600000-0xffffffffff601000 (4K) prot 0x5 flags 0x22 fdflags 0 st 0x204 off 0 vsys ap  shmid: 0
(00.313123) Obtaining task auvx ...
(00.313183) Dumping path for -3 fd via self 12 [/]
(00.313194) Dumping task cwd id 0x4 root id 0x4
(00.313228) mnt: Dumping mountpoints
(00.313230) mnt: 	242: 32:/ @ ./sys/firmware
(00.313236) mnt: Path `/sys/firmware' resolved to `./sys/firmware' mountpoint
(00.315058) mnt: 	241: 2d:/null @ ./proc/sched_debug
(00.315064) mnt: 	240: 2d:/null @ ./proc/timer_list
(00.315066) mnt: 	239: 2d:/null @ ./proc/kcore
(00.315068) mnt: 	238: 29:/sysrq-trigger @ ./proc/sysrq-trigger
(00.315070) mnt: 	237: 29:/sys @ ./proc/sys
(00.315071) mnt: 	236: 29:/irq @ ./proc/irq
(00.315073) mnt: 	235: 29:/fs @ ./proc/fs
(00.315075) mnt: 	234: 29:/bus @ ./proc/bus
(00.315076) mnt: 	233: 29:/asound @ ./proc/asound
(00.315078) mnt: 	231: 2e:/0 @ ./dev/console
(00.315081) Error (criu/tty.c:2222): tty: Unable to find a master for /0
(00.315124) Unlock network
(00.315126) Running network-unlock scripts
(00.315127) 	RPC
(00.316749) Unfreezing tasks into 1
(00.316760) 	Unseizing 8242 into 1
(00.316772) Error (criu/cr-dump.c:1641): Dumping FAILED.

@avagin
Contributor Author

avagin commented Apr 11, 2017

@dqminh I cannot reproduce this issue, and it looks strange.

In CRIU this error is reported if the OrphanPtsMaster isn't set:
https://github.com/xemul/criu/blob/14e0bf7baf0f3a47acaf86ea880d312f608d

but runc now always sets it: https://github.com/avagin/runc/blob/cr-console/libcontainer/container_linux.go#L694

Could you check that you are using runc from my repo (the cr-console branch)?

Thank you for the feedback.
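For context, a minimal sketch of the option mentioned above; the field and import names are taken from the linked cr-console branch and may differ from the final code.

package main

import (
	"github.com/golang/protobuf/proto"
	criurpc "github.com/opencontainers/runc/libcontainer/criurpc"
)

func main() {
	// Ask CRIU to report orphaned pty masters back over RPC instead of
	// failing with "Unable to find a master for /N" (see dump.log above).
	opts := &criurpc.CriuOpts{
		OrphanPtsMaster: proto.Bool(true),
	}
	_ = opts // the real code embeds this in the CriuReq sent to criu swrk
}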

@avagin
Contributor Author

avagin commented Apr 14, 2017

@dqminh maybe you can send me the container config; I will try to reproduce it on my host.

@rh-atomic-bot

271/278 passed on RHEL - Failed.
261/276 passed on CentOS - Failed.
276/277 passed on Fedora - Failed.
Log - https://aos-ci.s3.amazonaws.com/opencontainers/runc/runc-integration-tests-prs/327/fullresults.xml

@rh-atomic-bot

270/278 passed on RHEL - Failed.
260/276 passed on CentOS - Failed.
276/277 passed on Fedora - Failed.
Log - https://aos-ci.s3.amazonaws.com/opencontainers/runc/runc-integration-tests-prs/328/fullresults.xml

defer master.Close()

// While we can access console.master, using the API is a good idea.
if err := utils.SendFd(process.ConsoleSocket, master); err != nil {
Contributor

So it's still a two-step socket translation? What #1356 tried to do was avoid this; can the master be sent directly to process.consoleSocket in the container?

Contributor Author

We get this master from CRIU, and CRIU doesn't have access to process.consoleSocket.

Contributor

But we can give it access, right?

You can do something like this in criuSwrk: https://github.com/opencontainers/runc/blob/v1.0.0-rc3/libcontainer/container_linux.go#L373-L378

In criu, you can do something similar to: https://github.com/opencontainers/runc/blob/v1.0.0-rc3/libcontainer/factory_linux.go#L251-L258

Then you'll have access to process.consoleSocket in criu.

Contributor Author

Unfortunately it isn't that easy. When file descriptors are restored, we have to be sure that the restored descriptors do not overlap with the criu service descriptors, so the number of service descriptors is limited. That is one of the reasons why we can't pass extra file descriptors to criu restore.

Another reason is that we now have a very generic interface for handling external resources, and it allows handling any number of external terminals; it would be impossible to pass a separate unix socket for each of them.

I understand your point, but I'm afraid there is no way to make this more optimal.
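As a rough illustration of the second hop being discussed (this is not runc's utils.SendFd, whose signature may differ): the master received from CRIU is re-sent to the console socket as SCM_RIGHTS ancillary data, matching the receive side shown in the earlier diff (ParseUnixRights plus os.NewFile).

package main

import (
	"net"
	"os"
	"syscall"
)

// sendFd passes an already-open terminal master over a unix socket.
func sendFd(socket, master *os.File) error {
	conn, err := net.FileConn(socket)
	if err != nil {
		return err
	}
	defer conn.Close()

	uc, ok := conn.(*net.UnixConn)
	if !ok {
		return syscall.EINVAL
	}
	// Encode the descriptor as ancillary data; the receiver recovers it with
	// ParseSocketControlMessage and ParseUnixRights, as in the diff above.
	oob := syscall.UnixRights(int(master.Fd()))
	_, _, err = uc.WriteMsgUnix([]byte(master.Name()), oob, nil)
	return err
}

func main() {}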

@@ -1,3 +1,5 @@
syntax = "proto2";
Contributor

Can we use proto3?

Contributor Author

No, we can't, because criu doesn't support proto3. It is part of the CRIU API: https://github.com/xemul/criu/blob/master/images/rpc.proto

Contributor

Is that because CRIU uses protobuf-c? Just curious.

Contributor Author

Yes, it is. I tried to change proto2 to proto3 and got this error:
[avagin@laptop criu]$ make
PBCC images/rpc.pb-c.c
rpc.proto:1:10: Unrecognized syntax identifier "proto3". This parser only recognizes "proto2".

default:
return fmt.Errorf("unable to parse the response %s", resp.String())
}

break
}

syscall.Shutdown(fds[0], syscall.SHUT_WR)
Contributor

We are moving from syscall to sys/unix in runc; maybe you can use sys/unix here?

Contributor Author

Ok, will fix. Thanks
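A minimal sketch of the suggested swap, assuming nothing beyond golang.org/x/sys/unix, which exposes the same shutdown(2) wrapper and constants as the syscall package:

package main

import "golang.org/x/sys/unix"

// shutdownWrite stops further writes on fd; the peer then reads EOF.
func shutdownWrite(fd int) error {
	return unix.Shutdown(fd, unix.SHUT_WR)
}

func main() {}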

@rh-atomic-bot

269/278 passed on RHEL - Failed.
260/276 passed on CentOS - Failed.
275/277 passed on Fedora - Failed.
Log - https://aos-ci.s3.amazonaws.com/opencontainers/runc/runc-integration-tests-prs/334/fullresults.xml

@avagin
Contributor Author

avagin commented Apr 24, 2017

ping @dqminh @cyphar

avagin added 5 commits May 1, 2017 21:45
Signed-off-by: Andrei Vagin <[email protected]>
Currently startContainer() is used to create and to run a container.
In the next patch it will be used to restore a container.

Signed-off-by: Andrei Vagin <[email protected]>
avagin added 4 commits May 2, 2017 04:48
CRIU was extended to report orphaned master pty-s via RPC.

Signed-off-by: Andrei Vagin <[email protected]>
A freezer cgroup allows dumping processes faster.

If a user wants to checkpoint a container and its storage,
they have to pause the container, and in this case we need to pass
the path of its freezer cgroup to "criu dump".

Signed-off-by: Andrei Vagin <[email protected]>
We have two test cases, with and without pre-dump. Terminal and
pre-dump features are orthogonal, so we can modify one of these test cases.

Signed-off-by: Andrei Vagin <[email protected]>
@crosbymichael
Member

crosbymichael commented May 18, 2017

LGTM

Approved with PullApprove

@crosbymichael
Member

ping @mrunalp @cyphar @hqhq

@mrunalp
Contributor

mrunalp commented May 18, 2017

LGTM

Approved with PullApprove

@mrunalp mrunalp merged commit 6394544 into opencontainers:master May 18, 2017