This repository has been archived by the owner on May 6, 2020. It is now read-only.

Sharing docker socket #1101

Open
vladostp opened this issue May 7, 2018 · 7 comments

Comments

@vladostp

vladostp commented May 7, 2018

Description of problem

I am unable to use a Docker socket shared between the host machine and a Docker container.

Expected result

The Docker client inside the container should connect to the Docker daemon on the host machine and launch other containers.

Actual result

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
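For context, a minimal reproduction, assuming an image that ships the Docker CLI (the image name and tag here are illustrative):

```shell
# Hypothetical reproduction: bind-mount the host daemon's socket into a
# container and run a Docker client inside it. cc-runtime is the default
# runtime on this host, so the container runs inside a VM.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:18.03 docker ps
# Under cc-runtime this fails with:
#   Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
#   Is the docker daemon running?
```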


Meta details

Running cc-collect-data.sh version 3.0.23 (commit 64d2226) at 2018-05-07.11:12:21.315124301+0200.


Runtime is /usr/bin/cc-runtime.

cc-env

Output of "/usr/bin/cc-runtime cc-env":

[Meta]
  Version = "1.0.9"

[Runtime]
  Debug = false
  [Runtime.Version]
    Semver = "3.0.23"
    Commit = "64d2226"
    OCI = "1.0.1"
  [Runtime.Config]
    Path = "/usr/share/defaults/clear-containers/configuration.toml"

[Hypervisor]
  MachineType = "pc"
  Version = "QEMU emulator version 2.7.1(2.7.1+git.d4a337fe91-11.cc), Copyright (c) 2003-2016 Fabrice Bellard and the QEMU Project developers"
  Path = "/usr/bin/qemu-lite-system-x86_64"
  Debug = false
  BlockDeviceDriver = "virtio-scsi"

[Image]
  Path = "/usr/share/clear-containers/cc-20640-agent-6f6e9e.img"

[Kernel]
  Path = "/usr/share/clear-containers/vmlinuz-4.14.22-86.container"
  Parameters = ""

[Proxy]
  Type = "ccProxy"
  Version = "Version: 3.0.23+git.3cebe5e"
  Path = "/usr/libexec/clear-containers/cc-proxy"
  Debug = false

[Shim]
  Type = "ccShim"
  Version = "shim version: 3.0.23 (commit: 205ecf7)"
  Path = "/usr/libexec/clear-containers/cc-shim"
  Debug = false

[Agent]
  Type = "hyperstart"
  Version = "<<unknown>>"

[Host]
  Kernel = "4.15.0-20-generic"
  Architecture = "amd64"
  VMContainerCapable = true
  [Host.Distro]
    Name = "Ubuntu"
    Version = "18.04"
  [Host.CPU]
    Vendor = "GenuineIntel"
    Model = "Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz"

Runtime config files

Runtime default config files

/usr/share/defaults/clear-containers/configuration.toml

Runtime config file contents

Config file /etc/clear-containers/configuration.toml not found
Output of "cat "/usr/share/defaults/clear-containers/configuration.toml"":

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "config/configuration.toml.in"
# XXX: Project:
# XXX:   Name: Intel® Clear Containers
# XXX:   Type: cc

[hypervisor.qemu]
path = "/usr/bin/qemu-lite-system-x86_64"
kernel = "/usr/share/clear-containers/vmlinuz.container"
image = "/usr/share/clear-containers/clear-containers.img"
machine_type = "pc"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""

# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per POD/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 1


# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per POD/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for POD/VM.
# If unspecified then it will be set 2048 MiB.
#default_memory = 2048

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's 
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons. 
# This flag prevents the block device from being passed to the hypervisor, 
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is either virtio-scsi or 
# virtio-blk.
block_device_driver = "virtio-scsi"

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically 
# result in memory pre allocation
#enable_hugepages = true

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
# 
# Default false
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
# 
#disable_nesting_checks = true

[proxy.cc]
path = "/usr/libexec/clear-containers/cc-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[shim.cc]
path = "/usr/libexec/clear-containers/cc-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[agent.cc]
# There is no field for this section. The goal is only to be able to
# specify which type of agent the user wants to use.

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
#   - bridged
#     Uses a linux bridge to interconnect the container interface to
#     the VM. Works for most cases except macvlan and ipvlan.
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
internetworking_model="macvtap"

Agent

version:

unknown

Logfiles

Runtime logs

Recent runtime problems found in system journal:

time="2018-05-04T15:22:26.232738531+02:00" level=error msg="Invalid command \"s\"" name=cc-runtime pid=16375 source=runtime
time="2018-05-04T15:22:30.504395165+02:00" level=error msg="Missing container ID, should at least provide one" command=ps name=cc-runtime pid=16388 source=runtime
time="2018-05-04T15:29:25.30371846+02:00" level=error msg="ERROR received from VM agent, control msg received : Process could not be started: container_linux.go:296: starting container process caused \"exec: \\\"bash\\\": executable file not found in $PATH\"" command=exec name=cc-runtime pid=17252 source=runtime
time="2018-05-04T15:29:58.21553719+02:00" level=error msg="Container not ready or running, impossible to signal the container" command=kill name=cc-runtime pid=17291 source=runtime
time="2018-05-04T15:29:58.274075152+02:00" level=error msg="Container fc00042b628ead65064ee001d7ab804cd90ceb077194eecdfa8c0a0b8c69a92f not ready or running, cannot send a signal" command=kill name=cc-runtime pid=17315 source=runtime
time="2018-05-04T15:42:59.377761137+02:00" level=error msg="Container 9b784dda97b994a83f26196453b84ac9703e088f949f5dd9e2c08f1a75f9d6ab not ready or running, cannot send a signal" command=kill name=cc-runtime pid=18557 source=runtime
time="2018-05-04T15:42:59.433846096+02:00" level=error msg="Container 9b784dda97b994a83f26196453b84ac9703e088f949f5dd9e2c08f1a75f9d6ab not ready or running, cannot send a signal" command=kill name=cc-runtime pid=18576 source=runtime
time="2018-05-04T15:45:22.947479056+02:00" level=error msg="Environment value cannot be empty" command=exec name=cc-runtime pid=19110 source=runtime
time="2018-05-04T15:45:32.456978028+02:00" level=error msg="Environment value cannot be empty" command=exec name=cc-runtime pid=19159 source=runtime
time="2018-05-04T15:45:45.031651395+02:00" level=error msg="Environment value cannot be empty" command=exec name=cc-runtime pid=19190 source=runtime
time="2018-05-04T15:47:09.691453248+02:00" level=error msg="Container not ready or running, impossible to signal the container" command=kill name=cc-runtime pid=19355 source=runtime
time="2018-05-04T15:47:09.755685832+02:00" level=error msg="Container 34bf841a84856db9b1e8244ccf875604ba4d35b71f1c3c43e803500e0a1e4e6e not ready or running, cannot send a signal" command=kill name=cc-runtime pid=19378 source=runtime
time="2018-05-04T15:47:45.805983278+02:00" level=error msg="Container 6567e8dc4b8f6d3047dd1de02e0d886a36750dbcf5ddf020edc52a935ec8ad3b not ready or running, cannot send a signal" command=kill name=cc-runtime pid=19507 source=runtime
time="2018-05-04T15:47:45.857566916+02:00" level=error msg="Container 6567e8dc4b8f6d3047dd1de02e0d886a36750dbcf5ddf020edc52a935ec8ad3b not ready or running, cannot send a signal" command=kill name=cc-runtime pid=19527 source=runtime
time="2018-05-04T15:48:50.149905799+02:00" level=error msg="Container 4c374adb1f3037877ad6ac14284d2490018de547c41fca8535450a4523497d18 not ready or running, cannot send a signal" command=kill name=cc-runtime pid=19858 source=runtime
time="2018-05-04T15:48:50.184040809+02:00" level=error msg="Container 4c374adb1f3037877ad6ac14284d2490018de547c41fca8535450a4523497d18 not ready or running, cannot send a signal" command=kill name=cc-runtime pid=19886 source=runtime
time="2018-05-04T16:06:38.127831925+02:00" level=error msg="Environment value cannot be empty" command=exec name=cc-runtime pid=29850 source=runtime
time="2018-05-04T16:14:09.731176815+02:00" level=error msg="Environment value cannot be empty" command=exec name=cc-runtime pid=30638 source=runtime
time="2018-05-04T16:15:06.821662627+02:00" level=error msg="Environment value cannot be empty" command=exec name=cc-runtime pid=30749 source=runtime
time="2018-05-04T16:16:12.658027046+02:00" level=error msg="Container 0c706eb2035fd1da51fdb140a20c3eef80802513430985bc178cf6f6c3d9ff4a not ready or running, cannot send a signal" command=kill name=cc-runtime pid=30825 source=runtime
time="2018-05-04T16:16:12.709815038+02:00" level=error msg="Container 0c706eb2035fd1da51fdb140a20c3eef80802513430985bc178cf6f6c3d9ff4a not ready or running, cannot send a signal" command=kill name=cc-runtime pid=30845 source=runtime
time="2018-05-04T16:58:19.58610628+02:00" level=error msg="Container 8ebfc2adc9fab0be2fd0659c30fcfae8e862f7c899b9d6b65cac95eebe7c79bc not ready or running, cannot send a signal" command=kill name=cc-runtime pid=32493 source=runtime
time="2018-05-04T16:58:19.653154991+02:00" level=error msg="Container 8ebfc2adc9fab0be2fd0659c30fcfae8e862f7c899b9d6b65cac95eebe7c79bc not ready or running, cannot send a signal" command=kill name=cc-runtime pid=32513 source=runtime
time="2018-05-04T18:01:31.601205579+02:00" level=error msg="Container 8ebfc2adc9fab0be2fd0659c30fcfae8e862f7c899b9d6b65cac95eebe7c79bc not ready or running, cannot send a signal" command=kill name=cc-runtime pid=2205 source=runtime
time="2018-05-04T18:01:31.674636078+02:00" level=error msg="Container 8ebfc2adc9fab0be2fd0659c30fcfae8e862f7c899b9d6b65cac95eebe7c79bc not ready or running, cannot send a signal" command=kill name=cc-runtime pid=2226 source=runtime
time="2018-05-07T10:03:49.506367111+02:00" level=error msg="Container not ready or running, impossible to signal the container" command=kill name=cc-runtime pid=1752 source=runtime
time="2018-05-07T10:03:49.529574923+02:00" level=error msg="Container 8ebfc2adc9fab0be2fd0659c30fcfae8e862f7c899b9d6b65cac95eebe7c79bc not ready or running, cannot send a signal" command=kill name=cc-runtime pid=1777 source=runtime
time="2018-05-07T10:03:54.039306686+02:00" level=error msg="Container not ready or running, impossible to signal the container" command=kill name=cc-runtime pid=2384 source=runtime
time="2018-05-07T10:03:54.07792048+02:00" level=error msg="Container 8ebfc2adc9fab0be2fd0659c30fcfae8e862f7c899b9d6b65cac95eebe7c79bc not ready or running, cannot send a signal" command=kill name=cc-runtime pid=2452 source=runtime
time="2018-05-07T10:05:56.544273114+02:00" level=error msg="Container not ready or running, impossible to signal the container" command=kill name=cc-runtime pid=4137 source=runtime
time="2018-05-07T10:05:56.596127703+02:00" level=error msg="Container 523f331786b47a285323d1a1de44dc3a0ab870d6f6f48ae3a597e20d71eb4638 not ready or running, cannot send a signal" command=kill name=cc-runtime pid=4159 source=runtime

Proxy logs

Recent proxy problems found in system journal:

time="2018-05-04T15:29:55.305823079+02:00" level=error msg="timeout waiting for process with token eQfpXEc_UaoTFPSlZa5di_VdM96e2I6PicA-vOlRtYI=" name=cc-proxy pid=17136 section=io source=proxy vm=fc00042b628ead65064ee001d7ab804cd90ceb077194eecdfa8c0a0b8c69a92f
time="2018-05-04T15:29:55.306346571+02:00" level=error msg="error serving client: timeout waiting for process with token eQfpXEc_UaoTFPSlZa5di_VdM96e2I6PicA-vOlRtYI=" client=8 name=cc-proxy pid=17136 source=proxy
time="2018-05-04T15:29:58.011329423+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/fc00042b628ead65064ee001d7ab804cd90ceb077194eecdfa8c0a0b8c69a92f/proxy.sock->@: use of closed network connection" client=4 name=cc-proxy pid=17136 source=proxy
time="2018-05-04T15:42:59.044508787+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/9b784dda97b994a83f26196453b84ac9703e088f949f5dd9e2c08f1a75f9d6ab/proxy.sock->@: use of closed network connection" client=4 name=cc-proxy pid=18281 source=proxy
time="2018-05-04T15:47:09.507736301+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/34bf841a84856db9b1e8244ccf875604ba4d35b71f1c3c43e803500e0a1e4e6e/proxy.sock->@: use of closed network connection" client=4 name=cc-proxy pid=19293 source=proxy
time="2018-05-04T15:47:45.51236896+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/6567e8dc4b8f6d3047dd1de02e0d886a36750dbcf5ddf020edc52a935ec8ad3b/proxy.sock->@: use of closed network connection" client=4 name=cc-proxy pid=18700 source=proxy
time="2018-05-04T15:48:49.80878528+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/4c374adb1f3037877ad6ac14284d2490018de547c41fca8535450a4523497d18/proxy.sock->@: use of closed network connection" client=4 name=cc-proxy pid=19784 source=proxy
time="2018-05-04T16:16:12.333700254+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/0c706eb2035fd1da51fdb140a20c3eef80802513430985bc178cf6f6c3d9ff4a/proxy.sock->@: use of closed network connection" client=4 name=cc-proxy pid=20080 source=proxy
time="2018-05-04T16:58:07.795669293+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/8ebfc2adc9fab0be2fd0659c30fcfae8e862f7c899b9d6b65cac95eebe7c79bc/proxy.sock->@: use of closed network connection" client=8 name=cc-proxy pid=30961 source=proxy
time="2018-05-04T16:58:19.25801127+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/8ebfc2adc9fab0be2fd0659c30fcfae8e862f7c899b9d6b65cac95eebe7c79bc/proxy.sock->@: use of closed network connection" client=4 name=cc-proxy pid=30961 source=proxy
time="2018-05-04T17:07:07.741312922+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/8ebfc2adc9fab0be2fd0659c30fcfae8e862f7c899b9d6b65cac95eebe7c79bc/proxy.sock->@: use of closed network connection" client=8 name=cc-proxy pid=492 source=proxy
time="2018-05-04T18:01:31.290229535+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/8ebfc2adc9fab0be2fd0659c30fcfae8e862f7c899b9d6b65cac95eebe7c79bc/proxy.sock->@: use of closed network connection" client=4 name=cc-proxy pid=492 source=proxy
time="2018-05-07T10:03:49.33472548+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/8ebfc2adc9fab0be2fd0659c30fcfae8e862f7c899b9d6b65cac95eebe7c79bc/proxy.sock->@: use of closed network connection" client=4 name=cc-proxy pid=1670 source=proxy
time="2018-05-07T10:03:52.032557455+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/8ebfc2adc9fab0be2fd0659c30fcfae8e862f7c899b9d6b65cac95eebe7c79bc/proxy.sock->@: use of closed network connection" client=4 name=cc-proxy pid=1989 source=proxy
time="2018-05-07T10:05:56.413488144+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/523f331786b47a285323d1a1de44dc3a0ab870d6f6f48ae3a597e20d71eb4638/proxy.sock->@: use of closed network connection" client=4 name=cc-proxy pid=4106 source=proxy
time="2018-05-07T11:12:17.230232501+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/8ebfc2adc9fab0be2fd0659c30fcfae8e862f7c899b9d6b65cac95eebe7c79bc/proxy.sock->@: use of closed network connection" client=8 name=cc-proxy pid=2593 source=proxy

Shim logs

Recent shim problems found in system journal:

time="2018-05-04T15:29:55.306612648+0200" level="error" pid=3 function="read_wire_data" line=395 source="shim" name="cc-shim" msg="Failed to read from fd: Connection reset by peer"
time="2018-05-04T15:29:55.306890319+0200" level="warning" pid=3 function="reconnect_to_proxy" line=1020 source="shim" name="cc-shim" msg="Reconnecting to cc-proxy (timeout 10 s)"

Container manager details

Have docker

Docker

Output of "docker version":

Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   9ee9f40
 Built:        Thu Apr 26 07:17:38 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   9ee9f40
  Built:        Thu Apr 26 07:15:45 2018
  OS/Arch:      linux/amd64
  Experimental: false

Output of "docker info":

Containers: 7
 Running: 1
 Paused: 0
 Stopped: 6
Images: 5
Server Version: 18.03.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: cc-runtime runc
Default Runtime: cc-runtime
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 64d2226 (expected: 4fc53a81fb7c994640722ac585fa9ca548971871)
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.15.0-20-generic
Operating System: Ubuntu 18.04 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.43GiB
Name: vladost-pro
ID: ASF7:OQ55:OE5A:2H6H:5IWD:75WL:KAEM:27TS:7VC5:BURM:XOF4:ZB4Y
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 28
 Goroutines: 39
 System Time: 2018-05-07T11:12:21.743146934+02:00
 EventsListeners: 0
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

Output of "systemctl show docker":

Type=notify
Restart=on-failure
NotifyAccess=main
RestartUSec=100ms
TimeoutStartUSec=infinity
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Mon 2018-05-07 10:03:49 CEST
WatchdogTimestampMonotonic=18196688
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=1177
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestamp=Mon 2018-05-07 10:03:44 CEST
ExecMainStartTimestampMonotonic=13750493
ExecMainExitTimestampMonotonic=0
ExecMainPID=1177
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -D --add-runtime cc-runtime=/usr/bin/cc-runtime --default-runtime=cc-runtime ; ignore_errors=no ; start_time=[Mon 2018-05-07 10:03:44 CEST] ; stop_time=[n/a] ; pid=1177 ; code=(null) ; status=0/0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/docker.service
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=80
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=yes
DelegateControllers=cpu cpuacct io blkio memory devices pids
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=no
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=infinity
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=infinity
LimitNPROCSoft=infinity
LimitMEMLOCK=16777216
LimitMEMLOCKSoft=16777216
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=62622
LimitSIGPENDINGSoft=62622
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=docker.service
Names=docker.service
Requires=system.slice docker.socket sysinit.target
Wants=network-online.target
WantedBy=multi-user.target
ConsistsOf=docker.socket
Conflicts=shutdown.target
Before=multi-user.target shutdown.target
After=firewalld.service network-online.target system.slice systemd-journald.socket basic.target docker.socket sysinit.target
TriggeredBy=docker.socket
Documentation=https://docs.docker.com
Description=Docker Application Container Engine
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/docker.service
DropInPaths=/etc/systemd/system/docker.service.d/clear-containers.conf
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Mon 2018-05-07 10:03:49 CEST
StateChangeTimestampMonotonic=18196690
InactiveExitTimestamp=Mon 2018-05-07 10:03:44 CEST
InactiveExitTimestampMonotonic=13750517
ActiveEnterTimestamp=Mon 2018-05-07 10:03:49 CEST
ActiveEnterTimestampMonotonic=18196690
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Mon 2018-05-07 10:03:44 CEST
ConditionTimestampMonotonic=13749959
AssertTimestamp=Mon 2018-05-07 10:03:44 CEST
AssertTimestampMonotonic=13749959
Transient=no
Perpetual=no
StartLimitIntervalUSec=1min
StartLimitBurst=3
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=4eb342f27db048718acca470f6eb080d
CollectMode=inactive

No kubectl


Packages

Have dpkg
Output of "dpkg -l|egrep "(cc-oci-runtime|cc-proxy|cc-runtime|cc-shim|kata-proxy|kata-runtime|kata-shim|clear-containers-image|linux-container|qemu-lite|qemu-system-x86)"":

ii  cc-proxy                              3.0.23+git.3cebe5e-27               amd64        
ii  cc-runtime                            3.0.23+git.64d2226-27               amd64        
ii  cc-runtime-bin                        3.0.23+git.64d2226-27               amd64        
ii  cc-runtime-config                     3.0.23+git.64d2226-27               amd64        
ii  cc-shim                               3.0.23+git.205ecf7-27               amd64        
ii  clear-containers-image                20640-48                            amd64        Clear containers image
ii  linux-container                       4.14.22-86                          amd64        linux kernel optimised for container-like workloads.
ii  qemu-lite                             2.7.1+git.d4a337fe91-11             amd64        linux kernel optimised for container-like workloads.

No rpm


@devimc

devimc commented May 8, 2018

Hmm, that's weird. Have you tried with Kata Containers?

@amshinde
Contributor

amshinde commented May 8, 2018

You would see that error if the Docker daemon is not running or you do not have the right privileges. What's the output of systemctl status docker?
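A minimal sketch of the host-side checks for the permissions case (the group name and socket path are the Docker defaults):

```shell
# Is the daemon up, and can the current user reach its socket?
systemctl status docker        # should report "active (running)"
ls -l /var/run/docker.sock     # usually srw-rw---- root:docker
id -nG | grep -w docker        # is this user in the docker group?
```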

@jodh-intel
Contributor

The systemctl show output above shows that dockerd is running with PID 1177, so this looks like an EPERM issue.

@vladostp
Author

vladostp commented May 9, 2018

Thanks!
When I launch a container with -v /var/run/docker.sock:/var/run/docker.sock, the proxy gives this error:
time="2018-05-07T11:12:17.230232501+02:00" level=error msg="error serving client: write unix /run/virtcontainers/pods/8ebfc2adc9fab0be2fd0659c30fcfae8e862f7c899b9d6b65cac95eebe7c79bc/proxy.sock->@: use of closed network connection" client=8 name=cc-proxy pid=2593 source=proxy

@jodh-intel
Contributor

Hi @vladostp - as @devimc says, please can you try with Kata Containers:

I've just tried this:

$ docker run -ti -v /var/run/docker.sock:/var/run/docker.sock --runtime kata-runtime busybox echo ok
ok

@vladostp
Author

vladostp commented May 9, 2018

@jodh-intel I get the same result with Kata Containers:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
docker run -ti -v /var/run/docker.sock:/var/run/docker.sock --runtime kata-runtime busybox echo ok
works fine, but if you take a Docker image with a Docker client inside and try to run docker ps inside the container, it fails.

@vladostp
Author

vladostp commented May 18, 2018

Using the dockerd Remote API is a working solution. Thanks to everyone!
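For reference, a sketch of that workaround (the port and host address are illustrative, and exposing the API over plain TCP is insecure without TLS):

```shell
# Host side: serve the Remote API on TCP as well as the unix socket.
#   dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
# Inside the VM-based container: point the client at the host's address
# instead of the (unreachable) shared unix socket. 172.17.0.1 is a
# hypothetical docker0 bridge address.
export DOCKER_HOST=tcp://172.17.0.1:2375
docker ps
```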
