/var/log/messages
=================
 - Various RPC errors. 
    ... rpc error: code = # desc = xxx ...
 
 - container kill failed because of 'container not found' or 'no such process': "Cannot kill container ###: rpc error: code = 2 desc = no such process"
    Despite the error, the containers are actually killed and the pods destroyed. However, this error likely triggers
    a problem with rogue interfaces staying on the OpenVSwitch bridge.
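    A minimal sketch for spotting such rogue interfaces (assuming the bridge is named br0 and ovs-vsctl is available; adjust both to the actual setup):

        #!/usr/bin/env python
        # Sketch: report OVS bridge ports whose backing veth device no longer exists.
        import os
        import subprocess

        BRIDGE = "br0"  # assumption: name of the OpenVSwitch bridge

        ports = subprocess.check_output(["ovs-vsctl", "list-ports", BRIDGE]).decode().split()
        for port in ports:
            # A veth left behind by a failed pod cleanup has no kernel netdev anymore.
            if port.startswith("veth") and not os.path.exists("/sys/class/net/" + port):
                print("stale port on %s: %s" % (BRIDGE, port))
                # to actually remove it:
                # subprocess.check_call(["ovs-vsctl", "del-port", BRIDGE, port])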

 - containerd: unable to save f7c3e6c02cdbb951670bc7ff925ddd7efd75a3bb5ed60669d4b182e5337dec23:d5b9394468235f7c9caca8ad4d97e7064cc49cd59cadd155eceae84545dc472a starttime: read /proc/81994/stat: no such process
   containerd: f7c3e6c02cdbb951670bc7ff925ddd7efd75a3bb5ed60669d4b182e5337dec23:d5b9394468235f7c9caca8ad4d97e7064cc49cd59cadd155eceae84545dc472a (pid 81994) has become an orphan, killing it
    Seems to be a bug in docker 1.12.x which is resolved in 1.13.0-rc2. According to the issue, there are no side effects.
        https://github.com/moby/moby/issues/28336

 - W0625 03:49:34.231471   36511 docker_sandbox.go:337] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "...": Unexpected command output nsenter: cannot open /proc/63586/ns/net: No such file or directory
 - W0630 21:40:20.978177    5552 docker_sandbox.go:337] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "...": CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "..."
    Probably referred to in the following bug report and can accordingly be ignored...
        https://bugzilla.redhat.com/show_bug.cgi?id=1434950 
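    A quick sketch to double-check that such a warning really refers to an already-terminated container (pass the container id from the log line as the first argument; relies on docker inspect):

        #!/usr/bin/env python
        # Sketch: check whether the network namespace of a container still exists.
        import os
        import subprocess
        import sys

        container_id = sys.argv[1]  # container id taken from the warning in the log

        try:
            pid = subprocess.check_output(
                ["docker", "inspect", "--format", "{{.State.Pid}}", container_id]
            ).decode().strip()
        except subprocess.CalledProcessError:
            print("container %s is gone - the warning referred to a terminated container" % container_id)
            sys.exit(0)

        ns_path = "/proc/%s/ns/net" % pid
        if pid == "0" or not os.path.exists(ns_path):
            print("no network namespace at %s - warning is harmless" % ns_path)
        else:
            print("network namespace still present: %s" % ns_path)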

 - E0630 14:05:40.304042    5552 glusterfs.go:148] glusterfs: failed to get endpoints adei-cfg[an empty namespace may not be set when a resource name is provided]
   E0630 14:05:40.304062    5552 reconciler.go:367] Could not construct volume information: MountVolume.NewMounter failed for volume "kubernetes.io/glusterfs/4
    Looks like a configuration issue (the endpoints are referenced by name but without a namespace). Probably can be ignored...
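    A small sketch (assuming kubectl access; the 'default' namespace is just a guess) to check whether the adei-cfg endpoints object actually exists where the pod expects it:

        #!/usr/bin/env python
        # Sketch: verify the glusterfs endpoints object exists in the pod's namespace.
        import subprocess

        name = "adei-cfg"      # endpoints name from the error message
        namespace = "default"  # assumption: namespace of the pod mounting the volume

        rc = subprocess.call(["kubectl", "get", "endpoints", name, "-n", namespace])
        if rc != 0:
            print("endpoints %s not found in namespace %s - the volume definition probably"
                  " points at a missing or misplaced endpoints object" % (name, namespace))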

 - kernel: SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
    There are no adverse effects. It is a potential kernel issue, but it can safely be ignored; nothing is going to break.
        https://bugzilla.redhat.com/show_bug.cgi?id=1425278


 - E0625 03:59:52.438970   23953 watcher.go:210] watch chan error: etcdserver: mvcc: required revision has been compacted
    Seems fine and can be ignored.

    
/var/log/openvswitch/ovs-vswitchd.log
=====================================
 - bridge|WARN|could not open network device veth7d33a20f (No such device)
    Indicates a pod-cleanup failure and may cause problems during pod scheduling.
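    A throwaway sketch to see how many distinct devices the log complains about (log path as above):

        #!/usr/bin/env python
        # Sketch: collect the device names from the 'could not open network device' warnings.
        import re

        pattern = re.compile(r"could not open network device (\S+) \(No such device\)")
        devices = set()
        with open("/var/log/openvswitch/ovs-vswitchd.log") as log:
            for line in log:
                match = pattern.search(line)
                if match:
                    devices.add(match.group(1))

        print("%d distinct devices reported missing" % len(devices))
        for dev in sorted(devices):
            print("  " + dev)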