Setting Up a Kubernetes 1.8.5 Cluster on CentOS 7.4


Environment

Role          OS          IP              Hostname   Docker version
master, node  CentOS 7.4  192.168.0.210   node210    17.11.0-ce
node          CentOS 7.4  192.168.0.211   node211    17.11.0-ce
node          CentOS 7.4  192.168.0.212   node212    17.11.0-ce

1. Base environment configuration (run on all servers)

a. Disable SELinux

sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
setenforce 0
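
A quick check that SELinux is no longer enforcing (the SELINUX=disabled setting itself only takes effect after a reboot):

getenforce
# Expected output: Permissive (or Disabled after a reboot)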

b. Install Docker

curl -sSL https://get.docker.com/ | sh

c. Configure a Docker registry mirror (for faster image pulls in China)

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://e2a6d434.m.daocloud.io

d. Enable Docker to start on boot

systemctl enable docker.service
systemctl restart docker
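
To verify that Docker is running and that the registry mirror was applied (the exact layout of the docker info output varies slightly between Docker versions):

docker version
docker info | grep -A1 "Registry Mirrors"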

2. Kubernetes certificate preparation (run on the master)

a. To save deployment time when copying files to the node machines, set up passwordless SSH from the master

ssh-keygen -t rsa
ssh-copy-id 192.168.0.211
ssh-copy-id 192.168.0.212
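
Before relying on scp later in the walkthrough, it is worth confirming that passwordless login actually works, for example:

ssh 192.168.0.211 hostname
ssh 192.168.0.212 hostname
# Each command should print the remote hostname without asking for a password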

b. Download the certificate generation tools

yum -y install wget
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
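
A quick check that all three tools are installed and on the PATH:

cfssl version
which cfssljson cfssl-certinfo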

c. Create the CA certificates

# Prepare the working directory

mkdir /root/ssl
cd /root/ssl

# Create the CA certificate configuration
vim ca-config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

# Create the CA certificate signing request
vim ca-csr.json

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "JIANGXI",
      "L": "NANCHANG",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

# Generate the CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
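
The command above writes ca.pem, ca-key.pem and ca.csr into the current directory; the generated CA certificate can be inspected with:

ls ca*
cfssl-certinfo -cert ca.pem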

# Create the Kubernetes certificate signing request
# (replace the IP addresses and hostnames below with those of your own hosts)
vim kubernetes-csr.json

{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "192.168.0.210",
      "192.168.0.211",
      "192.168.0.212",
      "10.254.0.1",
      "kubernetes",
      "node210",
      "node211",
      "node212",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "JIANGXI",
            "OU": "System"
        }
    ]
}

# Generate the Kubernetes certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
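
It is worth confirming that every IP and hostname listed in kubernetes-csr.json ended up in the certificate's subject alternative names, since etcd and the API server will reject peers that are missing from the list. One way to check, using the openssl client that ships with CentOS:

openssl x509 -noout -text -in kubernetes.pem | grep -A1 "Subject Alternative Name"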

# Create the admin certificate signing request
vim admin-csr.json

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "JIANGXI",
      "L": "NANCHANG",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

# Generate the admin certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# Create the kube-proxy certificate signing request
vim kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "JIANGXI",
      "L": "NANCHANG",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

# Generate the kube-proxy certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Distribute the certificates

mkdir -p /etc/kubernetes/ssl
cp -r *.pem /etc/kubernetes/ssl

cd /etc
scp -r kubernetes/ 192.168.0.211:/etc/
scp -r kubernetes/ 192.168.0.212:/etc/
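
To confirm the certificates landed on both nodes:

ssh 192.168.0.211 ls /etc/kubernetes/ssl
ssh 192.168.0.212 ls /etc/kubernetes/ssl
# Each should list the admin, ca, kube-proxy and kubernetes .pem files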

3. Install and configure the etcd cluster

a. Download etcd and distribute it to the nodes

wget https://github.com/coreos/etcd/releases/download/v3.2.11/etcd-v3.2.11-linux-amd64.tar.gz
tar zxf etcd-v3.2.11-linux-amd64.tar.gz
mv etcd-v3.2.11-linux-amd64/etcd* /usr/local/bin
scp -r /usr/local/bin/etc* 192.168.0.211:/usr/local/bin/
scp -r /usr/local/bin/etc* 192.168.0.212:/usr/local/bin/

b. Create the etcd systemd unit file

vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
  --name ${ETCD_NAME} \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster infra1=https://192.168.0.210:2380,infra2=https://192.168.0.211:2380,infra3=https://192.168.0.212:2380 \
  --initial-cluster-state new \
  --data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

c. Create the required directories

mkdir -p /var/lib/etcd/
mkdir /etc/etcd

d. Edit the etcd configuration file

vim /etc/etcd/etcd.conf

On node210, /etc/etcd/etcd.conf is:

# [member]
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.210:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.210:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.210:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.210:2379"

On node211, /etc/etcd/etcd.conf is:

# [member]
ETCD_NAME=infra2
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.211:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.211:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.211:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.211:2379"

On node212, /etc/etcd/etcd.conf is:

# [member]
ETCD_NAME=infra3
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.212:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.212:2379"

#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.212:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.212:2379"

# Run on all nodes to start etcd

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

If etcd fails to start, check /var/log/messages to troubleshoot.
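
journalctl shows the same errors scoped to the etcd unit, which is usually faster to read than /var/log/messages:

journalctl -u etcd --no-pager -n 50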

e. Verify that the cluster is healthy

Verify that etcd started successfully:
etcdctl \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  cluster-health
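
etcdctl can also list the cluster members to confirm that all three peers joined, using the same TLS flags:

etcdctl \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  member list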

4. Configure Kubernetes

a. Download the precompiled Kubernetes binaries and distribute them

wget https://dl.k8s.io/v1.8.5/kubernetes-server-linux-amd64.tar.gz
tar zxf kubernetes-server-linux-amd64.tar.gz 
cp -rf kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kubectl,kubefed,kubelet,kube-proxy,kube-scheduler} /usr/local/bin/
scp -r kubernetes/server/bin/{kubelet,kube-proxy} 192.168.0.211:/usr/local/bin/
scp -r kubernetes/server/bin/{kubelet,kube-proxy} 192.168.0.212:/usr/local/bin/

# To find the latest Kubernetes release, go to https://github.com/kubernetes/kubernetes/releases and open the corresponding CHANGELOG-x.x.md file, which lists the binary download URLs

b. Create the TLS bootstrapping token

cd /etc/kubernetes
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# Once the nodes have been configured and registered, kubectl get nodes should report:
NAME            STATUS    ROLES     AGE       VERSION
192.168.0.210   Ready     <none>    14h       v1.8.5
192.168.0.211   Ready     <none>    14h       v1.8.5
192.168.0.212   Ready     <none>    14h       v1.8.5
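
Getting from this token to registered nodes involves the API server and kubelet bootstrap configuration. As a rough sketch of the pieces that consume token.csv (the flag and role binding below are standard Kubernetes 1.8 TLS bootstrapping steps, shown as assumptions rather than the author's exact configuration):

# Assumption: kube-apiserver is started with the token file enabled for bootstrap authentication
#   --token-auth-file=/etc/kubernetes/token.csv

# Assumption: the kubelet-bootstrap user needs permission to submit certificate signing requests
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap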

c. Install and configure kube-proxy

# Create the kube-proxy systemd unit file
vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# The kube-proxy configuration files are as follows:

node210: vim /etc/kubernetes/proxy

###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.0.210 --hostname-override=192.168.0.210 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

node211: vim /etc/kubernetes/proxy

###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.0.211 --hostname-override=192.168.0.211 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

node212: vim /etc/kubernetes/proxy

###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.0.212--hostname-override=192.168.0.212 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

# Start the kube-proxy service

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
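
Once kube-proxy is running it maintains the service NAT rules, so a quick sanity check is to look for the KUBE-* chains in the nat table (chain names may vary slightly between versions):

iptables -t nat -nL | grep KUBE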

d. Set the default iptables FORWARD policy to ACCEPT on all nodes

vim /usr/lib/systemd/system/forward.service

[Unit]
Description=iptables forward
Documentation=http://iptables.org/
After=network.target docker.service

[Service]
Type=forking
ExecStart=/usr/sbin/iptables -P FORWARD ACCEPT
ExecReload=/usr/sbin/iptables -P FORWARD ACCEPT
ExecStop=/usr/sbin/iptables -P FORWARD ACCEPT
PrivateTmp=true

[Install]
WantedBy=multi-user.target

# Start the forward service

systemctl daemon-reload
systemctl enable forward
systemctl start forward
systemctl status forward

7. Verify that the cluster works

a. Create a deployment

kubectl run nginx --replicas=2 --labels="run=nginx-service" --image=nginx --port=80

b. Expose the deployment via a NodePort so it is reachable from outside the cluster

kubectl expose deployment nginx --type=NodePort --name=nginx-service

c. Check the service status

kubectl describe svc nginx-service
Name:                   nginx-service
Namespace:              default
Labels:                 run=nginx-service
Annotations:            <none>
Selector:               run=nginx-service
Type:                   NodePort
IP:                     10.254.84.99
Port:                   <unset>  80/TCP
NodePort:               <unset>  30881/TCP
Endpoints:              172.30.1.2:80,172.30.54.2:80
Session Affinity:       None
Events:                 <none>

d. Check that the pods are running

kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-2317272628-nsfrr   1/1       Running   0          1m
nginx-2317272628-qbbgg   1/1       Running   0          1m

e. From outside the cluster, the nginx welcome page should be reachable at http://192.168.0.210:30881, http://192.168.0.211:30881 and http://192.168.0.212:30881.
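
The same check can be done from the command line of any machine that can reach the nodes, for example:

curl -I http://192.168.0.210:30881
# An HTTP/1.1 200 OK response means the NodePort service is reachable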

If the page is not reachable, run iptables -nL to check whether the FORWARD chain policy is set to ACCEPT.
