Installing a Kubernetes 1.5.3 Cluster on CentOS 7 (Local Install)


Prepare the virtual machines

Create four virtual machines on OpenStack: one as the Kubernetes master and three as Kubernetes nodes.

Because this setup relies on Google's RPM repository (packages.cloud.google.com) and container registry (gcr.io), the required RPM packages and container images need to be downloaded in advance.

Log in to all four machines and install kubelet and kubeadm

The following packages will be installed:

docker: the container runtime, required by Kubernetes
kubelet: the core Kubernetes agent; it runs on every node in the cluster and starts containers and pods
kubectl: the command-line tool for controlling the cluster; it only needs to be installed on kube-master
kubeadm: the cluster bootstrapping tool

Run the following commands on all nodes:

tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

setenforce 0
# Upgrade the kernel to keep the Dashboard from failing to start, see https://github.com/rancher/rancher/issues/7436
yum update -y kernel
yum install -y docker-engine-1.12.6 docker-engine-selinux-1.12.6

systemctl enable docker && systemctl start docker

reboot
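
Once the machines come back up, it is worth a quick check that the pinned Docker version is installed and running before moving on. A minimal sanity check might look like this:

# Docker should be active after the reboot and report version 1.12.6
systemctl is-active docker
docker version --format '{{.Server.Version}}'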

Download the required RPM packages

Download with yum

yum install -y --downloadonly --downloaddir=/root/kubernetes-el7-x86_64 kubelet kubeadm kubectl kubernetes-cni
tar czvf kubernetes-el7-x86_64.tar.gz kubernetes-el7-x86_64
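
The archive then has to reach every machine. One simple way is an scp loop like the sketch below; the hostnames are only the names used later in this walkthrough, so adjust them to your environment:

# Copy the packaged RPMs to each machine (hostnames are examples from this walkthrough)
for host in k8s-master k8s-node1 k8s-node2 k8s-node3; do
  scp kubernetes-el7-x86_64.tar.gz root@${host}:/tmp/
done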

Alternatively, download through a browser (requires access to packages.cloud.google.com)

# The URLs of the RPM packages to download can be found on the page below
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/primary.xml
# Download the following links in a browser
https://packages.cloud.google.com/yum/pool/5612db97409141d7fd839e734d9ad3864dcc16a630b2a91c312589a0a0d960d0-kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm
https://packages.cloud.google.com/yum/pool/93af9d0fbd67365fa5bf3f85e3d36060138a62ab77e133e35f6cadc1fdc15299-kubectl-1.5.1-0.x86_64.rpm
https://packages.cloud.google.com/yum/pool/8a299eb1db946b2bdf01c5d5c58ef959e7a9d9a0dd706e570028ebb14d48c42e-kubelet-1.5.1-0.x86_64.rpm
https://packages.cloud.google.com/yum/pool/567600102f687e0f27bd1fd3d8211ec1cb12e71742221526bb4e14a412f4fdb5-kubernetes-cni-0.3.0.1-0.07a8a2.x86_64.rpm
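
On a machine that can reach packages.cloud.google.com, the same four RPMs can also be fetched from the command line. The sketch below simply wgets the URLs listed above into the same directory used by the yum method:

# Fetch the four RPMs listed above
mkdir -p /root/kubernetes-el7-x86_64 && cd /root/kubernetes-el7-x86_64
for url in \
  https://packages.cloud.google.com/yum/pool/5612db97409141d7fd839e734d9ad3864dcc16a630b2a91c312589a0a0d960d0-kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm \
  https://packages.cloud.google.com/yum/pool/93af9d0fbd67365fa5bf3f85e3d36060138a62ab77e133e35f6cadc1fdc15299-kubectl-1.5.1-0.x86_64.rpm \
  https://packages.cloud.google.com/yum/pool/8a299eb1db946b2bdf01c5d5c58ef959e7a9d9a0dd706e570028ebb14d48c42e-kubelet-1.5.1-0.x86_64.rpm \
  https://packages.cloud.google.com/yum/pool/567600102f687e0f27bd1fd3d8211ec1cb12e71742221526bb4e14a412f4fdb5-kubernetes-cni-0.3.0.1-0.07a8a2.x86_64.rpm ; do
  wget "$url"
done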

Download the packages, upload them to all four machines, and install

# Install dependencies
# yum install -y ebtables socat

# tar xzvf /tmp/kubernetes-el7-x86_64.tar.gz
kubernetes-el7-x86_64/
kubernetes-el7-x86_64/567600102f687e0f27bd1fd3d8211ec1cb12e71742221526bb4e14a412f4fdb5-kubernetes-cni-0.3.0.1-0.07a8a2.x86_64.rpm
kubernetes-el7-x86_64/5612db97409141d7fd839e734d9ad3864dcc16a630b2a91c312589a0a0d960d0-kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm
kubernetes-el7-x86_64/8a299eb1db946b2bdf01c5d5c58ef959e7a9d9a0dd706e570028ebb14d48c42e-kubelet-1.5.1-0.x86_64.rpm
kubernetes-el7-x86_64/93af9d0fbd67365fa5bf3f85e3d36060138a62ab77e133e35f6cadc1fdc15299-kubectl-1.5.1-0.x86_64.rpm

# cd kubernetes-el7-x86_64/

# rpm -ivh *
warning: 5612db97409141d7fd839e734d9ad3864dcc16a630b2a91c312589a0a0d960d0-kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID 3e1ba8d5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:kubelet-1.5.1-0                  ################################# [ 25%]
   2:kubernetes-cni-0.3.0.1-0.07a8a2  ################################# [ 50%]
   3:kubectl-1.5.1-0                  ################################# [ 75%]
   4:kubeadm-1.6.0-0.alpha.0.2074.a092################################# [100%]

# systemctl enable kubelet && systemctl start kubelet

Pre-download the container images, using hub.docker.com as a proxy

images=(kube-proxy-amd64:v1.5.3 kube-scheduler-amd64:v1.5.3 kube-controller-manager-amd64:v1.5.3 kube-apiserver-amd64:v1.5.3 etcd-amd64:3.0.14-kubeadm kube-discovery-amd64:1.0 pause-amd64:3.0 kubedns-amd64:1.9 dnsmasq-metrics-amd64:1.0 kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2)
for imageName in ${images[@]} ; do
  docker pull ist0ne/$imageName
  docker tag ist0ne/$imageName gcr.io/google_containers/$imageName
  docker rmi ist0ne/$imageName
done

Check the downloaded images

# docker images
REPOSITORY                                               TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/kube-proxy-amd64                v1.5.3              932ee3606ada        12 days ago         173.5 MB
gcr.io/google_containers/kube-scheduler-amd64            v1.5.3              cb0ce9bb60f9        12 days ago         54 MB
gcr.io/google_containers/kube-controller-manager-amd64   v1.5.3              25304c6f1bb2        12 days ago         102.8 MB
gcr.io/google_containers/kube-apiserver-amd64            v1.5.3              93d8b30a8f27        12 days ago         125.9 MB
gcr.io/google_containers/etcd-amd64                      3.0.14-kubeadm      856e39ac7be3        3 months ago        174.9 MB
gcr.io/google_containers/kubedns-amd64                   1.9                 26cf1ed9b144        3 months ago        47 MB
gcr.io/google_containers/dnsmasq-metrics-amd64           1.0                 5271aabced07        3 months ago        14 MB
gcr.io/google_containers/kube-dnsmasq-amd64              1.4                 3ec65756a89b        5 months ago        5.126 MB
gcr.io/google_containers/kube-discovery-amd64            1.0                 c5e0c9a457fc        5 months ago        134.2 MB
gcr.io/google_containers/exechealthz-amd64               1.2                 93a43bfb39bf        5 months ago        8.375 MB
gcr.io/google_containers/pause-amd64                     3.0                 99e59f495ffa        9 months ago        746.9 kB
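
If a machine cannot reach hub.docker.com either, one possible workaround (not part of the original walkthrough) is to pull the images once on a machine that can, then move them over with docker save/load, for example:

# On a machine that already has the images: export all gcr.io/google_containers images to one archive
docker save -o k8s-images-v1.5.3.tar $(docker images --format '{{.Repository}}:{{.Tag}}' | grep '^gcr.io/google_containers/')

# Copy the archive to the offline node (hostname is an example) and import it there
scp k8s-images-v1.5.3.tar root@k8s-node1:/tmp/
ssh root@k8s-node1 'docker load -i /tmp/k8s-images-v1.5.3.tar'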

Initialize k8s-master

[root@k8s-master ~]# kubeadm init --use-kubernetes-version v1.5.3
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.3
[tokens] Generated token: "976234.e91451d4305bc282"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 30.791025 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 17.501927 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 3.502132 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=976234.e91451d4305bc282 192.168.101.146

[root@k8s-master ~]# kubectl get nodes
NAME                   STATUS         AGE
k8s-master.novalocal   Ready,master   1m

Join the k8s-nodes to the cluster

[root@k8s-node1 ~]# kubeadm join --token=976234.e91451d4305bc282 192.168.101.146
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Starting the kubelet service
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://192.168.101.146:9898/cluster-info/v1/?token-id=976234"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://192.168.101.146:6443]
[bootstrap] Trying to connect to endpoint https://192.168.101.146:6443
[bootstrap] Detected server version: v1.5.3
[bootstrap] Successfully established connection with endpoint "https://192.168.101.146:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:k8s-node1.novalocal | CA: false
Not before: 2017-02-28 04:44:00 +0000 UTC Not After: 2018-02-28 04:44:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

All three nodes have joined the cluster:

[root@k8s-master ~]# kubectl get nodes
NAME                   STATUS         AGE
k8s-master.novalocal   Ready,master   2m
k8s-node1.novalocal    Ready          37s
k8s-node2.novalocal    Ready          35s
k8s-node3.novalocal    Ready          31s

Deploy the pod network

For pods on different nodes to communicate, a pod network add-on must be installed. Weave Net is used here; Calico or Canal would also work.

[root@k8s-master ~]# kubectl apply -f https://git.io/weave-kube
daemonset "weave-net" created

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                           READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-swbl5                         1/1       Running   0          57m
kube-system   etcd-k8s-master.novalocal                      1/1       Running   0          56m
kube-system   kube-apiserver-k8s-master.novalocal            1/1       Running   0          58m
kube-system   kube-controller-manager-k8s-master.novalocal   1/1       Running   0          58m
kube-system   kube-discovery-1769846148-x0wzj                1/1       Running   0          57m
kube-system   kube-dns-2924299975-lvxmt                      4/4       Running   0          57m
kube-system   kube-proxy-krr45                               1/1       Running   0          55m
kube-system   kube-proxy-lnr90                               1/1       Running   0          55m
kube-system   kube-proxy-m7wch                               1/1       Running   0          57m
kube-system   kube-proxy-td1jr                               1/1       Running   0          55m
kube-system   kube-scheduler-k8s-master.novalocal            1/1       Running   0          56m
kube-system   weave-net-b034k                                2/2       Running   0          50m
kube-system   weave-net-mncwx                                2/2       Running   0          50m
kube-system   weave-net-mpsqn                                2/2       Running   0          50m
kube-system   weave-net-r4c88                                2/2       Running   0          50m

Wait a while for kube-dns to finish deploying.
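
One way to wait is to poll until the kube-dns pod reports all four of its containers ready, as in the listing above; a simple loop such as the following would do:

# Poll every 10 seconds until the kube-dns pod shows 4/4 containers ready
until kubectl get pods -n kube-system | grep kube-dns | grep -q '4/4'; do
  echo "waiting for kube-dns..."
  sleep 10
done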

Deploy a microservices demo

Create the sock-shop namespace

[root@k8s-master ~]# kubectl create namespace sock-shop
namespace "sock-shop" created

Deploy the sock-shop services

[root@k8s-master ~]# kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"
namespace "sock-shop" configured
deployment "cart-db" created
service "cart-db" created
deployment "cart" created
service "cart" created
deployment "catalogue-db" created
service "catalogue-db" created
deployment "catalogue" created
service "catalogue" created
deployment "front-end" created
service "front-end" created
deployment "orders-db" created
service "orders-db" created
deployment "orders" created
service "orders" created
deployment "payment" created
service "payment" created
deployment "queue-master" created
service "queue-master" created
deployment "rabbitmq" created
service "rabbitmq" created
deployment "shipping" created
service "shipping" created
deployment "user-db" created
service "user-db" created
deployment "user" created
service "user" created
deployment "zipkin" created
service "zipkin" created
deployment "zipkin-MysqL" created
service "zipkin-MysqL" created
deployment "zipkin-cron" created

Check the front-end service deployment

[root@k8s-master ~]# kubectl describe svc front-end -n sock-shop
Name:           front-end
Namespace:      sock-shop
Labels:         name=front-end
Selector:       name=front-end
Type:           NodePort
IP:         10.108.209.191
Port:           <unset> 80/TCP
NodePort:       <unset> 30001/TCP
Endpoints:      10.44.0.2:8079
Session Affinity:   None
No events.

[root@k8s-master ~]# kubectl get pods -n sock-shop
NAME                            READY     STATUS              RESTARTS   AGE
cart-2733362716-m3mq2           1/1       Running             0          45m
cart-db-2053818980-fdmw9        1/1       Running             0          45m
catalogue-3179692907-4bt8v      1/1       Running             0          45m
catalogue-db-2290683463-4g5vl   1/1       Running             0          45m
front-end-2489554388-7r98g      1/1       Running             0          45m
orders-3248148685-1k1g2         1/1       Running             0          45m
orders-db-3277638702-9t6vl      1/1       Running             0          45m
payment-1230586184-q8b75        1/1       Running             0          45m
queue-master-1190579278-l0pkg   1/1       Running             0          45m
rabbitmq-3472039365-rv21j       1/1       Running             0          45m
shipping-595972932-2v8pr        1/1       Running             0          45m
user-937712604-8m7zd            1/1       Running             0          45m
user-db-431019311-0qvfr         1/1       Running             0          45m
zipkin-3759864772-b8v93         1/1       Running             0          45m
zipkin-cron-1577918700-66s4n    1/1       Running             0          45m
zipkin-mysql-1199230279-pnd6m   1/1       Running             0          45m

Open an OpenStack firewall (security group) rule to allow access to TCP port 30001, then visit http://10.101.1.175:30001/, where 10.101.1.175 is the external IP of kube-master.
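
Before opening a browser, a quick curl against the NodePort can confirm that the front-end is actually serving (the IP is the master's external address from above):

# Expect an HTTP response (200 OK) from the sock-shop front-end on NodePort 30001
curl -I http://10.101.1.175:30001/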

References: http://kubernetes.io/docs/getting-started-guides/kubeadm/

http://yoyolive.com/2017/02/27/Kubernetes-1-5-3-Local-Install/

https://mritd.me/2016/10/29/set-up-kubernetes-cluster-by-kubeadm/#22%E9%95%9C%E5%83%8F%E4%BB%8E%E5%93%AA%E6%9D%A5
