
Unable to restart service docker: Job for docker.service failed because the control process exited with error code. #14161

Closed
soltysh opened this issue May 12, 2017 · 11 comments
Assignees
Labels
dependency/docker · kind/test-flake · priority/P2 · vendor-update

Comments

@soltysh
Contributor

soltysh commented May 12, 2017

Seen in https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin/1387/

Checking the docker service logs gave me this:

signal: aborted (core dumped): panic: runtime error: invalid memory address or nil pointer dereference [recovered]
    panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x61224c]

goroutine 1 [running]:
panic(0x6ec1c0, 0xc420014050)
    /usr/lib/golang/src/runtime/panic.go:500 +0x1a1 fp=0xc420092bf0 sp=0xc420092b60
github.com/urfave/cli.HandleAction.func1(0xc420093748)
    /builddir/build/BUILD/docker-92b10e401221d31096291aa3fb7b5c413eeb5539/runc-7f4b00e035c25f3a8d0dabc1658fe831a3bb6d13/Godeps/_workspace/src/github.com/urfave/cli/app.go:478 +0x247 fp=0xc420092c90 sp=0xc420092bf0
runtime.call32(0x0, 0x768d28, 0xc42000c0f0, 0x800000008)
    /usr/lib/golang/src/runtime/asm_amd64.s:479 +0x4c fp=0xc420092cc0 sp=0xc420092c90
panic(0x6ec1c0, 0xc420014050)
    /usr/lib/golang/src/runtime/panic.go:458 +0x243 fp=0xc420092d50 sp=0xc420092cc0
runtime.panicmem()
    /usr/lib/golang/src/runtime/panic.go:62 +0x6d fp=0xc420092d80 sp=0xc420092d50
runtime.sigpanic()
    /usr/lib/golang/src/runtime/sigpanic_unix.go:24 +0x214 fp=0xc420092dd8 sp=0xc420092d80
github.com/coreos/go-systemd/dbus.(*Conn).startJob(0x0, 0x0, 0x749062, 0x29, 0xc420130e40, 0x2, 0x2, 0x0, 0x0, 0x0)
    /builddir/build/BUILD/docker-92b10e401221d31096291aa3fb7b5c413eeb5539/runc-7f4b00e035c25f3a8d0dabc1658fe831a3bb6d13/Godeps/_workspace/src/github.com/coreos/go-systemd/dbus/methods.go:47 +0xcc fp=0xc420092e60 sp=0xc420092dd8
github.com/coreos/go-systemd/dbus.(*Conn).StopUnit(0x0, 0xc4200bbef0, 0x4d, 0x73c478, 0x7, 0x0, 0x73bc9b, 0x6, 0x3)
    /builddir/build/BUILD/docker-92b10e401221d31096291aa3fb7b5c413eeb5539/runc-7f4b00e035c25f3a8d0dabc1658fe831a3bb6d13/Godeps/_workspace/src/github.com/coreos/go-systemd/dbus/methods.go:99 +0x14b fp=0xc420092ee8 sp=0xc420092e60
github.com/opencontainers/runc/libcontainer/cgroups/systemd.(*Manager).Destroy(0xc420130de0, 0x0, 0x0)
    /builddir/build/BUILD/docker-92b10e401221d31096291aa3fb7b5c413eeb5539/runc-7f4b00e035c25f3a8d0dabc1658fe831a3bb6d13/Godeps/_workspace/src/github.com/opencontainers/runc/libcontainer/cgroups/systemd/apply_systemd.go:193 +0xee fp=0xc420092f40 sp=0xc420092ee8
github.com/opencontainers/runc/libcontainer.destroy(0xc4200c4240, 0xc420092fe0, 0xc420092ff0)
    /builddir/build/BUILD/docker-92b10e401221d31096291aa3fb7b5c413eeb5539/runc-7f4b00e035c25f3a8d0dabc1658fe831a3bb6d13/Godeps/_workspace/src/github.com/opencontainers/runc/libcontainer/state_linux.go:46 +0x7d fp=0xc420092fb8 sp=0xc420092f40
github.com/opencontainers/runc/libcontainer.(*stoppedState).destroy(0xc4200b8060, 0x769428, 0xc4200c42c0)
    /builddir/build/BUILD/docker-92b10e401221d31096291aa3fb7b5c413eeb5539/runc-7f4b00e035c25f3a8d0dabc1658fe831a3bb6d13/Godeps/_workspace/src/github.com/opencontainers/runc/libcontainer/state_linux.go:99 +0x2e fp=0xc420092fe0 sp=0xc420092fb8
github.com/opencontainers/runc/libcontainer.(*linuxContainer).Destroy(0xc4200c4240, 0x0, 0x0)
    /builddir/build/BUILD/docker-92b10e401221d31096291aa3fb7b5c413eeb5539/runc-7f4b00e035c25f3a8d0dabc1658fe831a3bb6d13/Godeps/_workspace/src/github.com/opencontainers/runc/libcontainer/container_linux.go:432 +0x92 fp=0xc420093010 sp=0xc420092fe0
main.destroy(0xab1680, 0xc4200c4240)
    /builddir/build/BUILD/docker-92b10e401221d31096291aa3fb7b5c413eeb5539/runc-7f4b00e035c25f3a8d0dabc1658fe831a3bb6d13/utils_linux.go:121 +0x35 fp=0xc420093050 sp=0xc420093010
main.glob..func3(0xc4200c23c0, 0x0, 0x0)
    /builddir/build/BUILD/docker-92b10e401221d31096291aa3fb7b5c413eeb5539/runc-7f4b00e035c25f3a8d0dabc1658fe831a3bb6d13/delete.go:77 +0x559 fp=0xc420093298 sp=0xc420093050
runtime.call32(0xc4200b08d0, 0x768df0, 0xc4200b6c00, 0x800000018)
    /usr/lib/golang/src/runtime/asm_amd64.s:479 +0x4c fp=0xc4200932c8 sp=0xc420093298
reflect.Value.call(0x6dd040, 0x768df0, 0x13, 0x73b489, 0x4, 0xc420093708, 0x1, 0x1, 0x4d17a8, 0x7312e0, ...)
    /usr/lib/golang/src/reflect/value.go:434 +0x5c8 fp=0xc420093618 sp=0xc4200932c8
reflect.Value.Call(0x6dd040, 0x768df0, 0x13, 0xc420093708, 0x1, 0x1, 0xac1700, 0xc4200936e8, 0x4da786)
    /usr/lib/golang/src/reflect/value.go:302 +0xa4 fp=0xc420093680 sp=0xc420093618
github.com/urfave/cli.HandleAction(0x6dd040, 0x768df0, 0xc4200c23c0, 0x0, 0x0)
    /builddir/build/BUILD/docker-92b10e401221d31096291aa3fb7b5c413eeb5539/runc-7f4b00e035c25f3a8d0dabc1658fe831a3bb6d13/Godeps/_workspace/src/github.com/urfave/cli/app.go:487 +0x1e0 fp=0xc420093730 sp=0xc420093680
github.com/urfave/cli.Command.Run(0x73bde5, 0x6, 0x0, 0x0, 0x0, 0x0, 0x0, 0x74d057, 0x61, 0x0, ...)
    /builddir/build/BUILD/docker-92b10e401221d31096291aa3fb7b5c413eeb5539/runc-7f4b00e035c25f3a8d0dabc1658fe831a3bb6d13/Godeps/_workspace/src/github.com/urfave/cli/command.go:191 +0xc3b fp=0xc420093af8 sp=0xc420093730
github.com/urfave/cli.(*App).Run(0xc4200d4000, 0xc42000c1c0, 0x4, 0x4, 0x0, 0x0)
    /builddir/build/BUILD/docker-92b10e401221d31096291aa3fb7b5c413eeb5539/runc-7f4b00e035c25f3a8d0dabc1658fe831a3bb6d13/Godeps/_workspace/src/github.com/urfave/cli/app.go:240 +0x611 fp=0xc420093cb8 sp=0xc420093af8
main.main()
    /builddir/build/BUILD/docker-92b10e401221d31096291aa3fb7b5c413eeb5539/runc-7f4b00e035c25f3a8d0dabc1658fe831a3bb6d13/main.go:137 +0xbd6 fp=0xc420093f38 sp=0xc420093cb8
runtime.main()
    /usr/lib/golang/src/runtime/proc.go:183 +0x1f4 fp=0xc420093f90 sp=0xc420093f38
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc420093f98 sp=0xc420093f90

goroutine 17 [syscall, locked to thread]:
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc42003afb8 sp=0xc42003afb0

goroutine 2 [force gc (idle)]:
runtime.gopark(0x769338, 0xac02e0, 0x73efbd, 0xf, 0x769214, 0x1)
    /usr/lib/golang/src/runtime/proc.go:259 +0x13a fp=0xc420026748 sp=0xc420026718
runtime.goparkunlock(0xac02e0, 0x73efbd, 0xf, 0xc420000114, 0x1)
    /usr/lib/golang/src/runtime/proc.go:265 +0x5e fp=0xc420026788 sp=0xc420026748
runtime.forcegchelper()
    /usr/lib/golang/src/runtime/proc.go:224 +0xa8 fp=0xc4200267c0 sp=0xc420026788
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4200267c8 sp=0xc4200267c0
created by runtime.init.3
    /usr/lib/golang/src/runtime/proc.go:213 +0x35

goroutine 3 [GC sweep wait]:
runtime.gopark(0x769338, 0xac0540, 0x73e44c, 0xd, 0x436714, 0x1)
    /usr/lib/golang/src/runtime/proc.go:259 +0x13a fp=0xc420026f38 sp=0xc420026f08
runtime.goparkunlock(0xac0540, 0x73e44c, 0xd, 0x14, 0x1)
    /usr/lib/golang/src/runtime/proc.go:265 +0x5e fp=0xc420026f78 sp=0xc420026f38
runtime.bgsweep(0xc420016070)
    /usr/lib/golang/src/runtime/mgcsweep.go:63 +0xb6 fp=0xc420026fb8 sp=0xc420026f78
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc420026fc0 sp=0xc420026fb8
created by runtime.gcenable
    /usr/lib/golang/src/runtime/mgc.go:195 +0x61

goroutine 4 [finalizer wait]:
runtime.gopark(0x769338, 0xadbfd8, 0x73eb52, 0xe, 0x14, 0x1)
    /usr/lib/golang/src/runtime/proc.go:259 +0x13a fp=0xc420027708 sp=0xc4200276d8
runtime.goparkunlock(0xadbfd8, 0x73eb52, 0xe, 0x14, 0x1)
    /usr/lib/golang/src/runtime/proc.go:265 +0x5e fp=0xc420027748 sp=0xc420027708
runtime.runfinq()
    /usr/lib/golang/src/runtime/mfinal.go:158 +0xaf fp=0xc4200277c0 sp=0xc420027748
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4200277c8 sp=0xc4200277c0
created by runtime.createfing
    /usr/lib/golang/src/runtime/mfinal.go:139 +0x73

goroutine 18 [syscall]:
runtime.notetsleepg(0xadc260, 0xffffffffffffffff, 0x1)
    /usr/lib/golang/src/runtime/lock_futex.go:205 +0x42 fp=0xc420022750 sp=0xc420022720
os/signal.signal_recv(0x0)
    /usr/lib/golang/src/runtime/sigqueue.go:116 +0x157 fp=0xc420022780 sp=0xc420022750
os/signal.loop()
    /usr/lib/golang/src/os/signal/signal_unix.go:22 +0x22 fp=0xc4200227c0 sp=0xc420022780
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4200227c8 sp=0xc4200227c0
create
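
For what it's worth, the first hexadecimal argument in the (*Conn).startJob and (*Conn).StopUnit frames above is the method receiver, and it is 0x0 -- i.e. runc's systemd cgroup manager appears to be calling into go-systemd with a nil *dbus.Conn. A minimal, self-contained sketch of that failure mode (illustrative only; the type and method names are stand-ins, not the actual runc/go-systemd code):

```go
package main

import "fmt"

// conn stands in for a D-Bus connection wrapper such as go-systemd's
// dbus.Conn; the field and method names here are hypothetical stand-ins.
type conn struct {
	jobs map[string]chan string
}

// startJob dereferences its receiver, so calling it on a nil *conn
// triggers "runtime error: invalid memory address or nil pointer
// dereference" -- the same failure mode shown in the trace above.
func (c *conn) startJob(unit string) error {
	c.jobs[unit] = make(chan string) // nil receiver dereferenced here -> SIGSEGV
	return nil
}

func main() {
	var c *conn // e.g. a systemd D-Bus connection that was never established
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r) // prints the runtime error seen in the log
		}
	}()
	_ = c.startJob("docker.service")
}
```

If that reading is right, the panic comes from a D-Bus connection that was never established (or was already torn down) before the Destroy path ran, rather than from systemd itself.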
@stevekuznetsov
Contributor

/cc @runcom @rhatdan @lsm5

@stevekuznetsov
Contributor

@soltysh reports this is happening on every job -- @sdodson can you please triage to someone who can determine what change in the installer is causing this? No change happened recently to the underlying Docker.

@sdodson
Member

sdodson commented May 12, 2017

@stevekuznetsov when did CI jobs get docker-1.12.6-19? If that correlates with when this started happening, then that's the root cause. Hmm, this may be different.

@sdodson
Member

sdodson commented May 12, 2017

The last successful job ran docker-1.12.6-16 and the first failing job ran 1.12.6-19, so I suspect this is https://bugzilla.redhat.com/show_bug.cgi?id=1447536 and the fix is to ensure we use docker-1.12.6-25 or newer.
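
For anyone wiring this into a pre-merge check: a minimal sketch of gating on the minimum fixed build, assuming NVRs of the shape 1.12.6-NN[.suffix] and treating -25 as the threshold from the comment above (the parsing and helper names are illustrative, not part of our existing tooling):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minimumRelease is the first docker-1.12.6 build taken to carry the fix,
// per the comment above; it is an assumption for illustration only.
const minimumRelease = 25

// releaseOf extracts the leading numeric release from a version-release
// string such as "1.12.6-19" or "1.12.6-25.git62520c0.el7".
func releaseOf(vr string) (int, error) {
	parts := strings.SplitN(vr, "-", 2)
	if len(parts) != 2 {
		return 0, fmt.Errorf("unexpected version-release %q", vr)
	}
	release := strings.SplitN(parts[1], ".", 2)[0]
	return strconv.Atoi(release)
}

func main() {
	for _, vr := range []string{"1.12.6-16", "1.12.6-19", "1.12.6-25.git62520c0.el7"} {
		rel, err := releaseOf(vr)
		if err != nil {
			fmt.Println(err)
			continue
		}
		if rel >= minimumRelease {
			fmt.Printf("%s: ok (release >= %d)\n", vr, minimumRelease)
		} else {
			fmt.Printf("%s: affected, needs docker-1.12.6-%d or newer\n", vr, minimumRelease)
		}
	}
}
```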

@stevekuznetsov
Contributor

when did CI jobs get docker-1.12.6-19?

05-May-2017 03:23

@stevekuznetsov
Contributor

The last job that was successful was docker-1.12.6-16, the first job that failed was 1.12.6-19.

I see -- let me see what we can do immediately.

@stevekuznetsov
Contributor

I've removed the readiness specification from the newest AMI, so we should be back to docker-1.12.6-16 for new VMs being provisioned. We'll need to extend our Docker vetting job to do a full install.

@sdodson
Member

sdodson commented May 12, 2017

It's been green since @stevekuznetsov reverted to the older docker in the image. Closing.

@sdodson sdodson closed this as completed May 12, 2017
@stevekuznetsov
Contributor

I've re-enabled new AMI jobs for the base stage -- we have docker-1.12.6-25.git62520c0.el7 now available in our tested repo.

@sdodson
Member

sdodson commented May 25, 2017 via email

@stevekuznetsov
Contributor

@sdodson I am just using the same pipeline as before -- looking at the RHEL7 Next mirrors we have and triggering off of new Brew builds -- I run tests against whatever latest package is available.
