Installing Kubernetes 1.10.1 with kubeadm (offline installation of docker, kubeadm, kubectl, kubelet, and the dashboard on CentOS 7.3)


References:
https://kubernetes.io/docs/tasks/tools/install-kubeadm/
https://blog.csdn.net/yitaidn/article/details/79937316
https://www.jianshu.com/p/9c7e1c957752
https://www.jianshu.com/p/3ec8945a864f

Environment: three CentOS 7.3 virtual machines

10.10.31.202 k8s-master
10.10.31.203 k8s-node1
10.10.31.204 k8s-node2

Environment setup:

1. The system is CentOS 7.3. Do not run yum update; it would upgrade the system to 7.5, and this tutorial was written for 7.3.

[root@szy-k8s-node2 ~]# yum update   # do NOT run this; it upgrades to 7.5, and this tutorial targets 7.3

2. Disable the firewall, SELinux, and swap (on all nodes):

setenforce 0   # disables SELinux temporarily; the change is lost after a reboot
swapoff -a     # swap must be off for kubelet to run correctly
systemctl stop firewalld
systemctl disable firewalld   # disable the firewall

Sample output:
[root@szy-k8s-master ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@szy-k8s-master ~]# swapoff -a 
[root@szy-k8s-master ~]# setenforce 0
[root@szy-k8s-master ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config   # permanent change; takes effect after a reboot
[root@szy-k8s-master ~]# /usr/sbin/sestatus   # check SELinux status
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          disabled
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28
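The setenforce step can be spot-checked by parsing the sestatus output. A minimal sketch; the sample text here mirrors the output shown above, and on a real host you would pipe `/usr/sbin/sestatus` into the function instead:

```shell
# Extract the "Current mode" field from sestatus-style output.
selinux_mode() {
  awk -F': *' '/^Current mode/ {print $2}'
}

# Sample standing in for real `/usr/sbin/sestatus` output.
sample='SELinux status:                 enabled
Current mode:                   permissive
Mode from config file:          disabled'

mode=$(printf '%s\n' "$sample" | selinux_mode)
echo "SELinux mode: $mode"
[ "$mode" = "permissive" ] || echo "WARNING: setenforce 0 did not take effect"
```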

Component installation:

1. Installing docker. This step frequently runs into trouble (version and OS incompatibilities and the like). Either install from yum, or use the docker-packages.tar archive; install on every node.

yum install -y docker  
systemctl enable docker && systemctl start docker 


I used the second method (the offline archive):
链接:https://pan.baidu.com/s/1nV_lOOJhNJpqGBq9heNWug 密码:zkfr

tar -xvf docker-packages.tar
cd docker-packages
rpm -Uvh *            # or: yum localinstall *.rpm
docker version        # check the version after installation


Note: if the docker installation fails, remove it and retry:
yum remove docker
yum remove docker-selinux
If that does not remove everything cleanly, do the following:

1. List the installed docker packages:
[root@szy-k8s-node2 docker-packages]# rpm -qa|grep docker
docker-common-1.13.1-63.git94f4240.el7.centos.x86_64
docker-client-1.13.1-63.git94f4240.el7.centos.x86_64
2. Remove each of them:
yum -y remove docker-common-1.13.1-63.git94f4240.el7.centos.x86_64
yum -y remove docker-client-1.13.1-63.git94f4240.el7.centos.x86_64
3. Delete docker's data directory (images and containers):
rm -rf /var/lib/docker
Then reinstall.
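The per-package removal above can be collapsed into one loop. A sketch, with sample package names standing in for real `rpm -qa` output; on an actual node, use the commented-out rpm and yum lines instead:

```shell
# Remove every installed docker-* package in one pass.
# Sample list standing in for: pkgs=$(rpm -qa | grep '^docker')
pkgs='docker-common-1.13.1-63.git94f4240.el7.centos.x86_64
docker-client-1.13.1-63.git94f4240.el7.centos.x86_64'

removed=""
while read -r p; do
  echo "would remove: $p"
  removed="$removed $p"
  # yum -y remove "$p"   # uncommented on a real node
done <<EOF
$pkgs
EOF
```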

2. Configure the Aliyun registry mirror (accelerator) for docker. Aliyun's console provides the exact snippet for your account.


Run docker info and note the Cgroup Driver line:
Cgroup Driver: cgroupfs
docker and kubelet must use the same cgroup driver. If docker's driver is not cgroupfs, run:

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://XXXX.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
systemctl daemon-reload && systemctl restart docker
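A malformed daemon.json keeps docker from restarting, so it is worth validating the file as JSON before the restart. A sketch writing to a scratch directory; the mirror URL is the same placeholder as above, to be replaced with your own accelerator address:

```shell
# Write daemon.json to a scratch dir and sanity-check that it parses as
# JSON before pointing docker at it and restarting.
tmpdir=$(mktemp -d)
tee "$tmpdir/daemon.json" > /dev/null <<-'EOF'
{
  "registry-mirrors": ["https://XXXX.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
if python3 -m json.tool "$tmpdir/daemon.json" > /dev/null 2>&1; then
  echo "daemon.json is valid JSON"
else
  echo "daemon.json is malformed -- fix it before restarting docker"
fi
```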

Offline installation of kubeadm, kubectl, and kubelet

链接:https://pan.baidu.com/s/13zfZKfARUN2s96fPil-8VQ 密码:10am
Use the file kube-packages-1.10.1.tar; install on every node.
kubeadm is the cluster deployment tool.
kubectl is the cluster management CLI; the cluster is administered through its commands.
kubelet is the per-node service that manages containers on each k8s node through docker.

tar -xvf kube-packages-1.10.1.tar
cd kube-packages-1.10.1
rpm -Uvh *   # or: yum localinstall *.rpm

On every kubernetes node, set kubelet to use cgroupfs, matching dockerd; otherwise kubelet will fail to start.

By default kubelet uses cgroup-driver=systemd; change it to cgroup-driver=cgroupfs:
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Reload the unit files and restart the kubelet service:
systemctl daemon-reload && systemctl restart kubelet
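The sed one-liner above is easy to verify in isolation. Here it runs against a sample line in the same shape as the one found in 10-kubeadm.conf:

```shell
# Demonstrate the cgroup-driver substitution on a sample config line.
sample='Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"'
fixed=$(printf '%s\n' "$sample" | sed 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g')
echo "$fixed"
# -> Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
```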

Disable swap and set the bridge iptables sysctls, otherwise kubeadm will complain later:

swapoff -a
vi /etc/fstab   # comment out the swap line
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
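After `sysctl --system`, both bridge values should read 1. A self-contained sketch that checks the file contents (written to a scratch file here); on a real host you would check the live values with `sysctl net.bridge.bridge-nf-call-iptables` instead:

```shell
# Verify that both bridge-nf sysctl keys in k8s.conf are set to 1.
conf=$(mktemp)
cat <<EOF > "$conf"
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
if awk -F' = ' '$2 != 1 {bad=1} END {exit bad}' "$conf"; then
  echo "bridge sysctls set"
fi
```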

Import the images

Run on every node:

docker load -i k8s-images-1.10.tar.gz 
#11 images in total:
k8s.gcr.io/etcd-amd64:3.1.12 
k8s.gcr.io/kube-apiserver-amd64:v1.10.1 
k8s.gcr.io/kube-controller-manager-amd64:v1.10.1 
k8s.gcr.io/kube-proxy-amd64:v1.10.1 
k8s.gcr.io/kube-scheduler-amd64:v1.10.1 
k8s.gcr.io/pause-amd64:3.1 
k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8 
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8 
k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8 
k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3 
quay.io/coreos/flannel:v0.9.1-amd64
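After `docker load`, it is worth confirming every expected image made it in. A sketch; `have` stands in for the output of `docker images --format '{{.Repository}}:{{.Tag}}'` on a real node, and only three of the eleven images are listed to keep the sample short:

```shell
# Check that each expected image appears in the docker image list.
expected='k8s.gcr.io/etcd-amd64:3.1.12
k8s.gcr.io/kube-apiserver-amd64:v1.10.1
k8s.gcr.io/pause-amd64:3.1'

# On a real node: have=$(docker images --format '{{.Repository}}:{{.Tag}}')
have="$expected"

missing=0
while read -r img; do
  printf '%s\n' "$have" | grep -qxF "$img" || { echo "missing: $img"; missing=1; }
done <<EOF
$expected
EOF
[ "$missing" -eq 0 ] && echo "all expected images present"
```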


Deploy the master with kubeadm init (run only on the master node)

This is the simplest, quickest deployment: etcd, the apiserver, the controller-manager, and the scheduler all run as containers on the master. etcd is a single instance without certificates, and its data is mounted from /var/lib/etcd on the master.
Note that the init command must specify the version and the pod network range:

kubeadm init --kubernetes-version=v1.10.1 --pod-network-cidr=10.244.0.0/16

[root@szy-k8s-master kubernetes]# kubeadm init --kubernetes-version=v1.10.1 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.10.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.04.0-ce. Max validated version: 17.03
        [WARNING Service-Kubelet]: kubelet service is not enabled,please run 'systemctl enable kubelet.service'
        [WARNING Firewalld]: firewalld is active,please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [szy-k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.31.202]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [szy-k8s-master] and IPs [10.10.31.202]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 23.002425 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node szy-k8s-master as master by adding a label and a taint
[markmaster] Master szy-k8s-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: ou9izo.w3o32jgx1kg7lypl
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster,you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.10.31.202:6443 --token ou9izo.w3o32jgx1kg7lypl --discovery-token-ca-cert-hash sha256:a1e4b696d1bfd4a86b4e352dec58a44facbbe0022bb65200c1d9f8e9414f53c8

Write down the join command; the worker nodes will need it later.
Run the suggested commands to save the kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
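The long sha256 value in the join command is not secret material: it is the hash of the cluster CA's public key, which joining nodes use to pin the API server. The recipe below reproduces the computation against a throwaway self-signed certificate standing in for /etc/kubernetes/pki/ca.crt on the master. Note also that the bootstrap token expires (after 24 hours by default in this version); `kubeadm token create --print-join-command` on the master prints a fresh join command.

```shell
# Recompute the discovery-token-ca-cert-hash for a CA certificate.
# A throwaway self-signed cert is generated here; on the master, point
# the `openssl x509` step at /etc/kubernetes/pki/ca.crt instead.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=kubernetes' \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
hash=$(openssl x509 -pubkey -noout -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"
```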

At this point kubectl get node already shows the master. NotReady is expected, because no network plugin has been deployed yet:

[root@szy-k8s-master k8s]# kubectl get node
NAME             STATUS     ROLES     AGE       VERSION
szy-k8s-master   NotReady   master    3m        v1.10.1
[root@szy-k8s-master k8s]# 

Check all pods with kubectl get pod --all-namespaces.
kube-dns also depends on the pod network, so Pending is normal at this stage:

[root@szy-k8s-master k8s]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                     READY     STATUS    RESTARTS   AGE
kube-system   etcd-szy-k8s-master                      1/1       Running   0          2m
kube-system   kube-apiserver-szy-k8s-master            1/1       Running   0          2m
kube-system   kube-controller-manager-szy-k8s-master   1/1       Running   0          2m
kube-system   kube-dns-86f4d74b45-gp8zc                0/3       Pending   0          3m
kube-system   kube-proxy-kqnfs                         1/1       Running   0          3m
kube-system   kube-scheduler-szy-k8s-master            1/1       Running   0          2m

Configure the KUBECONFIG variable

[root@szy-k8s-master k8s]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
[root@szy-k8s-master k8s]# source /etc/profile
[root@szy-k8s-master k8s]# echo $KUBECONFIG
/etc/kubernetes/admin.conf

Deploy the flannel network

k8s supports several network plugins: flannel, calico, Open vSwitch, and others.
flannel is used here; once you are familiar with k8s deployment, you can try the other options.

kube-flannel.yml is in the current k8s directory:

[root@szy-k8s-master k8s]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
[root@szy-k8s-master k8s]# kubectl get node
NAME             STATUS    ROLES     AGE       VERSION
szy-k8s-master   Ready     master    6m        v1.10.1
[root@szy-k8s-master k8s]# 
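One thing worth checking before applying the manifest: the Network value in kube-flannel.yml's net-conf.json must match the --pod-network-cidr given to kubeadm init, or pods will get addresses flannel does not route. A sketch against a sample of that section; grep the real kube-flannel.yml the same way:

```shell
# Extract the flannel Network CIDR from a net-conf.json fragment and
# compare it with the pod CIDR used at kubeadm init time.
netconf='{ "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }'
cidr=$(printf '%s\n' "$netconf" | sed -n 's/.*"Network": *"\([^"]*\)".*/\1/p')
if [ "$cidr" = "10.244.0.0/16" ]; then
  echo "flannel Network matches --pod-network-cidr"
fi
```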

kubeadm join: adding the worker nodes

  1. Join the nodes to the cluster
    Use the join command generated earlier by kubeadm init. After joining, go back to the master and check that the nodes registered:
kubeadm join 10.10.31.202:6443 --token ou9izo.w3o32jgx1kg7lypl --discovery-token-ca-cert-hash sha256:a1e4b696d1bfd4a86b4e352dec58a44facbbe0022bb65200c1d9f8e9414f53c8

Output on node1:

[root@szy-k8s-node1 ~]# kubeadm join 10.10.31.202:6443 --token ou9izo.w3o32jgx1kg7lypl --discovery-token-ca-cert-hash sha256:a1e4b696d1bfd4a86b4e352dec58a44facbbe0022bb65200c1d9f8e9414f53c8
[preflight] Running pre-flight checks.
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.04.0-ce. Max validated version: 17.03
        [WARNING Service-Kubelet]: kubelet service is not enabled,please run 'systemctl enable kubelet.service'
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "10.10.31.202:6443"
[discovery] Created cluster-info discovery client,requesting info from "https://10.10.31.202:6443"
[discovery] Requesting info from "https://10.10.31.202:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots,will use API Server "10.10.31.202:6443"
[discovery] Successfully established connection with API Server "10.10.31.202:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Output on node2:

[root@szy-k8s-node2 ~]# kubeadm join 10.10.31.202:6443 --token ou9izo.w3o32jgx1kg7lypl --discovery-token-ca-cert-hash sha256:a1e4b696d1bfd4a86b4e352dec58a44facbbe0022bb65200c1d9f8e9414f53c8
[preflight] Running pre-flight checks.
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.04.0-ce. Max validated version: 17.03
        [WARNING Service-Kubelet]: kubelet service is not enabled,please run 'systemctl enable kubelet.service'
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "10.10.31.202:6443"
[discovery] Created cluster-info discovery client,requesting info from "https://10.10.31.202:6443"
[discovery] Requesting info from "https://10.10.31.202:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots,will use API Server "10.10.31.202:6443"
[discovery] Successfully established connection with API Server "10.10.31.202:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Problems I ran into:
Problem 1: I got the error "[discovery] Failed to request cluster info,will try again:"

[discovery] Failed to request cluster info,will try again: [Get https://10.10.31.202:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.10.31.202:6443: getsockopt:no route to host]

Fix: the firewall was not disabled (or the earlier disable step never took effect):

systemctl stop firewalld 
systemctl disable firewalld #关闭防火墙

Problem 2: [discovery] Failed to request cluster info,will try again: [Get https://10.10.31.202:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]
Fix: the master node is missing the KUBECONFIG variable.

# run on the master
export KUBECONFIG=$HOME/.kube/config
# on the node, run kubeadm reset, then join again
kubeadm reset
kubeadm join 10.10.31.202:6443 --token ou9izo.w3o32jgx1kg7lypl --discovery-token-ca-cert-hash sha256:a1e4b696d1bfd4a86b4e352dec58a44facbbe0022bb65200c1d9f8e9414f53c8

Deploy the kubernetes UI, Dashboard

The dashboard is the official k8s management UI for inspecting and deploying applications; its display language follows the browser's locale.
The official dashboard defaults to HTTPS, which Chrome rejects because of the self-signed certificate. This deployment was modified for convenience to serve plain HTTP, which Chrome accepts.
Three yaml files need to be applied:

kubectl apply -f kubernetes-dashboard-http.yaml
kubectl apply -f admin-role.yaml
kubectl apply -f kubernetes-dashboard-admin.rbac.yaml

Actual output:

[root@szy-k8s-master k8s]# ll
total 1066600
-rwxr-xr-x. 1 root root       357 Jun 25 11:18 admin-role.yaml
drwxr-xr-x. 2 root root      4096 Apr 11 19:30 docker-packages
-rwxr-xr-x. 1 root root  39577600 Jun 25 11:18 docker-packages.tar
-rwxr-xr-x. 1 root root 999385088 Jun 25 11:18 k8s-images-1.10.tar.gz
-rwxr-xr-x. 1 root root      2801 Jun 25 11:18 kube-flannel.yml
drwxr-xr-x. 2 root root       190 Apr 25 16:27 kube-packages-1.10.1
-rwxr-xr-x. 1 root root  53207040 Jun 25 11:18 kube-packages-1.10.1.tar
-rwxr-xr-x. 1 root root       281 Jun 25 11:18 kubernetes-dashboard-admin.rbac.yaml
-rwxr-xr-x. 1 root root      4267 Jun 25 11:18 kubernetes-dashboard-http.yaml
[root@szy-k8s-master k8s]# kubectl apply -f kubernetes-dashboard-http.yaml
serviceaccount "kubernetes-dashboard" created
role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
deployment.apps "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[root@szy-k8s-master k8s]# kubectl apply -f admin-role.yaml
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
[root@szy-k8s-master k8s]# kubectl apply -f kubernetes-dashboard-admin.rbac.yaml
clusterrolebinding.rbac.authorization.k8s.io "dashboard-admin" created

Final check:

[root@szy-k8s-master kubelet]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                     READY     STATUS    RESTARTS   AGE
kube-system   etcd-szy-k8s-master                      1/1       Running   0          5h
kube-system   kube-apiserver-szy-k8s-master            1/1       Running   0          5h
kube-system   kube-controller-manager-szy-k8s-master   1/1       Running   0          5h
kube-system   kube-dns-86f4d74b45-2zw26                3/3       Running   162        5h
kube-system   kube-flannel-ds-ht6pl                    1/1       Running   0          5h
kube-system   kube-flannel-ds-npgfq                    1/1       Running   1          24m
kube-system   kube-flannel-ds-rfxv9                    1/1       Running   0          24m
kube-system   kube-proxy-8pg78                         1/1       Running   0          24m
kube-system   kube-proxy-gxcvh                         1/1       Running   0          24m
kube-system   kube-proxy-jgqtp                         1/1       Running   0          5h
kube-system   kube-scheduler-szy-k8s-master            1/1       Running   0          5h
kube-system   kubernetes-dashboard-5c469b58b8-hr4bk    1/1       Running   0          21m
[root@szy-k8s-master kubelet]# 

The dashboard is reachable at:

http://10.10.31.202:31000
http://10.10.31.203:31000
http://10.10.31.204:31000
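The 31000 in these URLs is the Service's NodePort, which is why the dashboard answers on every node's IP. A sketch that pulls the port out of a sample `kubectl get svc` line; the sample values (cluster IP, age) are illustrative, and on the master you would run `kubectl get svc -n kube-system kubernetes-dashboard` for the real line:

```shell
# Parse the NodePort out of a `kubectl get svc` output line.
svc='kubernetes-dashboard   NodePort   10.103.66.204   <none>   80:31000/TCP   21m'
port=$(printf '%s\n' "$svc" | sed -n 's/.*:\([0-9][0-9]*\)\/TCP.*/\1/p')
echo "dashboard NodePort: $port"
```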

[root@szy-k8s-master kubelet]# kubectl version
Client Version: version.Info{Major:"1",Minor:"10",GitVersion:"v1.10.1",GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b",GitTreeState:"clean",BuildDate:"2018-04-12T14:26:04Z",GoVersion:"go1.9.3",Compiler:"gc",Platform:"linux/amd64"}
Server Version: version.Info{Major:"1",BuildDate:"2018-04-12T14:14:26Z",Platform:"linux/amd64"}
[root@szy-k8s-master kubelet]#
