kube-dns stops working after a random amount of time

Tags: dns, kubernetes, weave

Every time I initialize a new cluster, everything works perfectly for anywhere from 3 days to about a month. Then kube-dns simply stops working. I can shell into the kube-dns container and it appears to be running fine, though I don't really know what to look for. I can ping a hostname from inside it, which resolves and is reachable, so the kube-dns container itself still has working DNS. It just doesn't provide it to the other containers in the cluster. The failure shows up both in containers that had been running since before the outage (so they used to be able to resolve + ping hostnames, but now cannot resolve, yet can still ping by IP) and in newly created containers.

I'm not sure whether it's related to elapsed time, or to the number of jobs or pods created. The most recent failure happened after 32 pods were up and 20 jobs had been created.

If I delete the kube-dns pod with the following command, DNS starts working again:

kubectl delete pod --namespace kube-system kube-dns-<pod_id>
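(Equivalently, without looking up the pod ID, assuming the stock k8s-app=kube-dns label on the kube-dns pods:)

kubectl delete pod --namespace kube-system -l k8s-app=kube-dns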
From the same container that had been running since before the failure started:

bash-4.4# env
PACKAGES= dumb-init musl libc6-compat linux-headers build-base bash git ca-certificates python3 python3-dev
HOSTNAME=vda-test
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
PWD=/
HOME=/root
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_PORT=443
ALPINE_VERSION=3.7
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
TERM=xterm
SHLVL=1
KUBERNETES_SERVICE_PORT=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_SERVICE_HOST=10.96.0.1
OLDPWD=/root
_=/usr/bin/env

bash-4.4# ifconfig
eth0 Link encap:Ethernet HWaddr 22:5E:D5:72:97:98
inet addr:10.44.0.2 Bcast:10.47.255.255 Mask:255.240.0.0
UP BROADCAST RUNNING MULTICAST MTU:65535 Metric:1
RX packets:1645 errors:0 dropped:0 overruns:0 frame:0
TX packets:1574 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:718909 (702.0 KiB) TX bytes:150313 (146.7 KiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

bash-4.4# ip route
default via 10.44.0.0 dev eth0
10.32.0.0/12 dev eth0 scope link src 10.44.0.2

bash-4.4# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local dc1.int.company.com dc2.int.company.com dc3.int.company.com
options ndots:5

bash-4.4# ping 10.44.0.0
PING 10.44.0.0 (10.44.0.0): 56 data bytes
64 bytes from 10.44.0.0: seq=0 ttl=64 time=0.130 ms
64 bytes from 10.44.0.0: seq=1 ttl=64 time=0.097 ms
64 bytes from 10.44.0.0: seq=2 ttl=64 time=0.072 ms
64 bytes from 10.44.0.0: seq=3 ttl=64 time=0.102 ms
64 bytes from 10.44.0.0: seq=4 ttl=64 time=0.116 ms
64 bytes from 10.44.0.0: seq=5 ttl=64 time=0.099 ms
64 bytes from 10.44.0.0: seq=6 ttl=64 time=0.167 ms
64 bytes from 10.44.0.0: seq=7 ttl=64 time=0.086 ms
--- 10.44.0.0 ping statistics ---
8 packets transmitted, 8 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.108/0.167 ms

bash-4.4# ping somehost.env.dc1.int.company.com
ping: bad address 'somehost.env.dc1.int.company.com'

bash-4.4# ping 10.112.17.2
PING 10.112.17.2 (10.112.17.2): 56 data bytes
64 bytes from 10.112.17.2: seq=0 ttl=63 time=0.523 ms
64 bytes from 10.112.17.2: seq=1 ttl=63 time=0.319 ms
64 bytes from 10.112.17.2: seq=2 ttl=63 time=0.304 ms
--- 10.112.17.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.304/0.382/0.523 ms

bash-4.4# ping worker1.env
ping: bad address 'worker1.env'

bash-4.4# ping 10.112.5.50
PING 10.112.5.50 (10.112.5.50): 56 data bytes
64 bytes from 10.112.5.50: seq=0 ttl=64 time=0.095 ms
64 bytes from 10.112.5.50: seq=1 ttl=64 time=0.073 ms
64 bytes from 10.112.5.50: seq=2 ttl=64 time=0.083 ms
--- 10.112.5.50 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.073/0.083/0.095 ms
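Since /etc/resolv.conf points at 10.96.0.10, a direct query against that service IP would separate "DNS server unreachable" from "resolution failing"; for example, assuming busybox's nslookup is available in this Alpine image:

# Query the cluster DNS service IP directly, bypassing ping's resolver path
nslookup somehost.env.dc1.int.company.com 10.96.0.10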
Here are some commands run from inside the kube-dns container:

/ # ifconfig
eth0 Link encap:Ethernet HWaddr 9A:24:59:D1:09:52
inet addr:10.32.0.2 Bcast:10.47.255.255 Mask:255.240.0.0
UP BROADCAST RUNNING MULTICAST MTU:65535 Metric:1
RX packets:4387680 errors:0 dropped:0 overruns:0 frame:0
TX packets:4124267 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1047398761 (998.8 MiB) TX bytes:1038950587 (990.8 MiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:4352618 errors:0 dropped:0 overruns:0 frame:0
TX packets:4352618 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:359275782 (342.6 MiB) TX bytes:359275782 (342.6 MiB)

/ # ping somehost.env.dc1.int.company.com
PING somehost.env.dc1.int.company.com (10.112.17.2): 56 data bytes
64 bytes from 10.112.17.2: seq=0 ttl=63 time=0.430 ms
64 bytes from 10.112.17.2: seq=1 ttl=63 time=0.252 ms
--- somehost.env.dc1.int.company.com ping statistics ---
2 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.208/0.274/0.430 ms

/ # netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53152 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:sunproxyadmin 10.32.0.1:58424 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53174 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:sunproxyadmin 10.32.0.1:58468 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:sunproxyadmin 10.32.0.1:58446 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53096 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:sunproxyadmin 10.32.0.1:58490 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53218 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53100 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53158 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53180 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:sunproxyadmin 10.32.0.1:58402 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53202 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53178 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:sunproxyadmin 10.32.0.1:58368 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53134 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53200 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53136 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53130 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53222 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53196 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:48230 10.96.0.1:https ESTABLISHED
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53102 TIME_WAIT
netstat: /proc/net/tcp6: No such file or directory
netstat: /proc/net/udp6: No such file or directory
netstat: /proc/net/raw6: No such file or directory
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags Type State I-Node Path
Version/OS info from the master and worker nodes:

kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

uname -a
Linux master1.env.dc1.int.company.com 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

It's hard to tell without access to the cluster, but when you create a pod, kube-proxy creates several iptables rules on your nodes so that the pod can be reached. My guess is that one or more of those iptables rules are messing things up for your new and existing pods.

Then, when you delete and re-create the kube-dns pod, those iptables rules get deleted and re-created, which brings things back to normal.
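One way to test this guess, assuming shell access on the node running a failing pod, would be to dump the NAT rules kube-proxy manages for the cluster DNS service IP (10.96.0.10 here) before and after a failure and compare:

# Show the kube-proxy NAT rules handling the DNS service IP; the DNAT
# targets should point at the current kube-dns pod IP (10.32.0.2 above)
sudo iptables-save -t nat | grep -E '10\.96\.0\.10|kube-dns'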

Here are a few things you could try (an example command for the last item follows this list):

  • Upgrade to K8s 1.11, which uses CoreDNS
  • Try reinstalling with a different podCidr
  • Try restarting your overlay-network pods (the Calico pods, for example)

All of these can cause downtime and could break your cluster, so it's safer to create a new cluster first and test there.
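For the last item, a minimal sketch of restarting the Weave overlay pods (the name=weave-net label is assumed from the stock Weave DaemonSet manifest; adjust the selector for Calico or another CNI):

# Confirm the label, then delete the pods; the DaemonSet re-creates them
kubectl --namespace kube-system get pods --show-labels | grep weave
kubectl --namespace kube-system delete pod -l name=weave-net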

Could you provide iptables from a node before and after this error happens? When the error occurs, is kube-dns reachable? Also, for troubleshooting, you could create a pod with the nslookup utility installed and try querying the DNS from it.

UPDATE: I found that in a working environment, I can shell into the kube-dns pod --> kubedns container and ping external addresses by hostname or by IP. In the environment where DNS has started failing (but connections still work when using IPs), if I shell into the kubedns container I cannot ping external addresses even by IP.

@Brent212 how did you fix this issue? I'm running into exactly the same problem.

@Boban, I believe simply updating to K8s 1.15.3 fixed it. Actually, the Calico pods were showing 1/2 running, so killing those pods fixed it.
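A minimal sketch of the nslookup-pod suggestion above (the busybox:1.28 image is an assumption; its nslookup applet works against the cluster DNS):

# One-off debug pod that queries the cluster DNS and is removed on exit
kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default.svc.cluster.local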