Deployment network layout
Host 117 | k8s master + k8s minion + flannel + docker
Host 110 | etcd
Host 73  | k8s minion + docker + flannel
Installing Docker
Docker requires Linux kernel 3.10 or later. Check the running kernel with:

uname -rs   # show the Linux kernel version

The official installation docs are straightforward; the steps below can be copied directly.
$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
yum install docker-engine        # install the Docker engine
systemctl enable docker.service  # start Docker on boot
systemctl start docker           # start the Docker service
systemctl stop docker            # stop the Docker service
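With the service running, a quick sanity check confirms the client can reach the daemon (hello-world needs Internet access to pull; this check is not part of the original notes):

docker version                # client and daemon versions should both be reported
docker run --rm hello-world   # pulls and runs a minimal test container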
Installing and starting etcd
cd /opt/kubernetes/bin
vi start_etcd.sh

nohup ./etcd --name etcd001 \
  --initial-advertise-peer-urls http://192.168.161.110:2380 \
  --listen-peer-urls http://192.168.161.110:2380 \
  --listen-client-urls http://192.168.161.110:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.161.110:2379 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster etcd001=http://192.168.161.110:2380 \
  --initial-cluster-state new &
./start_etcd.sh
List the etcd cluster members:
./etcdctl member list
Check the health of the etcd cluster:
./etcdctl cluster-health
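Beyond the health check, a simple write/read round trip verifies the store is usable (/message is just an arbitrary test key, not used elsewhere in this setup):

./etcdctl set /message hello
./etcdctl get /message    # should print "hello"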
Starting the k8s master node (host 117)
The master node runs three services: kube-apiserver, kube-controller-manager, and kube-scheduler.
cd /opt/kubernetes/master
[root@linux-117 master]# vi start_k8s_master.sh

#!/bin/sh
nohup ./kube-apiserver \
  --insecure-bind-address=0.0.0.0 \
  --insecure-port=8080 \
  --cors_allowed_origins=.* \
  --etcd_servers=http://192.168.161.110:2379 \
  --v=4 --logtostderr=true \
  --log_dir=/opt/kubernetes/logs/k8s/apiserver \
  --service-cluster-ip-range=10.10.10.0/24 &

nohup ./kube-controller-manager \
  --master=192.168.161.117:8080 \
  --enable-hostpath-provisioner=false \
  --v=1 --logtostderr=true \
  --allocate-node-cidrs=true --cluster-cidr=10.1.0.0/16 \
  --log_dir=/opt/kubernetes/logs/k8s/controller-manager &

nohup ./kube-scheduler \
  --master=192.168.161.117:8080 \
  --v=1 --logtostderr=true \
  --log_dir=/opt/kubernetes/logs/k8s/scheduler &
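Before moving on, it is worth confirming the API server answers (a quick sanity check, not from the original writeup):

curl http://192.168.161.117:8080/version                        # should return the build version as JSON
./kubectl -s http://192.168.161.117:8080 get componentstatuses  # scheduler, controller-manager, etcd should be Healthy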
The --service-cluster-ip-range flag specifies the subnet from which cluster-internal service IPs are allocated: services deployed through k8s get their internal access address from this range. For example, the my-Nginx-serv service deployed below receives the internal IP 10.10.10.112. (--cluster-cidr, by contrast, is the pod network range, and must match the flannel network configured later.)
[root@linux-117 master]# ./kubectl get service my-Nginx-serv
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-Nginx-serv 10.10.10.112 <nodes> 8080/TCP 4d
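The original notes do not show how my-Nginx-serv was created; a minimal sketch would be a deployment exposed as a NodePort service (the image and port here are illustrative assumptions):

./kubectl run my-Nginx-serv --image=nginx --port=8080                  # in k8s 1.3, kubectl run creates a deployment
./kubectl expose deployment my-Nginx-serv --port=8080 --type=NodePort  # yields a cluster IP plus a node port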
Starting the k8s minion node
The minion node runs two services, kubelet and kube-proxy. Note that Kubernetes 1.3 uses kubelet; the old kubecfg tool no longer exists.
cd /opt/kubernetes/bin
vi k8s.minion.sh

#!/bin/sh
# start the minion
nohup ./kubelet --address=0.0.0.0 \
  --port=10250 \
  --v=1 \
  --log_dir=/opt/kubernetes/logs/k8s/kubelet \
  --hostname_override=192.168.161.73 \
  --container-runtime=docker \
  --api_servers=http://192.168.161.117:8080 \
  --logtostderr=false >> kubelet.log 2>&1 &

nohup ./kube-proxy \
  --master=192.168.161.117:8080 \
  --log_dir=/opt/kubernetes/logs/k8s/proxy \
  --v=1 --logtostderr=false >> proxy.log 2>&1 &
./k8s.minion.sh
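If the minion started cleanly, it registers with the master within a few seconds; check from the master host:

./kubectl get nodes    # 192.168.161.73 should be listed with STATUS Ready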
Configuring the flanneld network

On the server host (110), start flanneld in listen mode, then register the overlay network range in etcd:

cd /opt/kubernetes/bin
nohup ./flanneld --listen=0.0.0.0:8888 >> /opt/kubernetes/logs/flanneld.log 2>&1 &

# On the etcd server, set the overlay network range:
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'

Then start flanneld on each minion host, pointing at the server, and load the subnet environment file it writes:

# Run on host 1:
nohup ./flanneld -etcd-endpoints=http://192.168.161.110:2379 -remote=192.168.161.110:8888 >> /opt/kubernetes/flanneld.log 2>&1 &
source /run/flannel/subnet.env

# Run on host 2:
nohup ./flanneld -etcd-endpoints=http://192.168.161.110:2379 -remote=192.168.161.110:8888 >> /opt/kubernetes/flanneld.log 2>&1 &
source /run/flannel/subnet.env
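To confirm each host actually obtained a lease from the 10.1.0.0/16 range, list the subnets flannel has recorded in etcd (run on the etcd host):

./etcdctl ls /coreos.com/network/subnets    # one entry per flannel host, e.g. /coreos.com/network/subnets/10.1.59.0-24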
Configuring the Docker subnet and restarting the minion nodes
Stop the running docker process, run ./mk-docker-opts.sh, then restart docker on the flannel subnet:
source /run/flannel/subnet.env
ifconfig docker0 ${FLANNEL_SUBNET}
setsid docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} \
  --insecure-registry=192.168.161.117:5000 \
  --registry-mirror=https://0ai1grsq.mirror.aliyuncs.com > docker.log 2>&1 &
./k8s.minion.sh   # restart kubelet and kube-proxy
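docker takes its --bip and --mtu values from the variables flannel wrote to /run/flannel/subnet.env. On host 73 the file would look roughly like this (values inferred from the ifconfig output below, not copied from the original environment):

FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.59.1/24
FLANNEL_MTU=1472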
Once everything is up, check each minion host's subnet. Host 73 was allocated the 10.1.59.* subnet and host 117 the 10.1.83.* subnet, both carved out of the 10.1.0.0/16 network registered earlier with:

etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
Check the subnet configuration on host 73:
[root@linux-73 ~]# ifconfig -a
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1472
inet 10.1.59.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::42:75ff:fe18:4dd prefixlen 64 scopeid 0x20<link>
ether 02:42:75:18:04:dd txqueuelen 0 (Ethernet)
RX packets 1623171 bytes 625325902 (596.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1518744 bytes 474482055 (452.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel0: flags=4305<UP,POINTOPOINT,NOARP,MULTICAST> mtu 1472
inet 10.1.59.0 netmask 255.255.0.0 destination 10.1.59.0
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 879524 bytes 61983130 (59.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 895098 bytes 530931772 (506.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Check the subnet configuration on host 117:
[root@linux-5f117 ~]# ifconfig -a
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1472
inet 10.1.83.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::42:aeff:fe27:e5d1 prefixlen 64 scopeid 0x20<link>
ether 02:42:ae:27:e5:d1 txqueuelen 0 (Ethernet)
RX packets 1929341 bytes 1078499314 (1.0 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1851177 bytes 1376911611 (1.2 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel0: flags=4305<UP,POINTOPOINT,NOARP,MULTICAST> mtu 1472
inet 10.1.83.0 netmask 255.255.0.0 destination 10.1.83.0
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 880550 bytes 521809015 (497.6 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 884839 bytes 61525762 (58.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
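With both bridges on the same 10.1.0.0/16 overlay, cross-host connectivity is easy to verify; for example, ping host 117's docker0 address from host 73 (a check of my own, not from the original notes):

[root@linux-73 ~]# ping -c 3 10.1.83.1    # replies confirm the flannel tunnel between 73 and 117 works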
Deploying the Kubernetes dashboard
[root@linux-5f117 master]# more kubernetes-dashboard.yaml

# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI.
#
# Example usage: kubectl create -f <this_file>

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: designer9418/kubernetes-dashboard-amd64
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=http://192.168.161.117:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
kubectl create -f kubernetes-dashboard.yaml
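Then check that the dashboard pod and service came up in the kube-system namespace:

./kubectl get pods --namespace=kube-system                           # the kubernetes-dashboard pod should reach Running
./kubectl get service kubernetes-dashboard --namespace=kube-system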
Open http://192.168.161.117:8080/ui in a browser to access the dashboard.
Firewall
Check the firewall status on CentOS 7:
systemctl status firewalld
Temporarily stop the firewall (it comes back up automatically after a reboot):
systemctl stop firewalld
Permanently disable the firewall (it will not start automatically after a reboot):
systemctl disable firewalld
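If disabling the firewall outright is too blunt, an alternative sketch is to open only the ports this deployment uses (adjust to your own topology):

firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client and peer traffic
firewall-cmd --permanent --add-port=8080/tcp        # kube-apiserver insecure port
firewall-cmd --permanent --add-port=8888/tcp        # flanneld --listen port
firewall-cmd --permanent --add-port=10250/tcp       # kubelet
firewall-cmd --reload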