After building a Docker cluster with Swarm we ran into a number of problems. Swarm is nice, but it is still at an early stage of development and lacks functionality, so we moved to Kubernetes to solve them.
Kubernetes vs. Swarm
Replication and health maintenance
Being "big and complete" also means more complexity: Kubernetes is considerably harder to deploy and use than Swarm. Swarm, by comparison, is lightweight and integrates more naturally with the Docker engine. In spirit I still root for Swarm, but right now it simply lacks too many features. SwarmKit, released a few days ago, adds a lot of management functionality; once it matures I may well go back to Swarm.
Core K8s concepts
pod
The pod is the smallest unit of deployment in k8s; containers run inside pods. A pod can run several containers, and the containers in one pod share network and storage and can reach each other directly.
replication controller
The replication controller (rc) creates identical replicas of a pod and runs them on different nodes. Pods are normally not created on their own but through an rc; the rc controls and manages the pod lifecycle and keeps the pods healthy.
service
Every time a container is re-created it gets a different IP address, so we need service discovery and load balancing in front of the pods. That is what a Service does: once created it exposes a fixed port and is bound to the matching pods. A minimal manifest sketch follows this section.
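To make the three concepts concrete, here is a minimal sketch of an rc plus a Service (the nginx image and the run=demo label are placeholders for illustration, not something used in this cluster). Save it to a file and create it with kubectl:
# cat demo.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: demo
spec:
  replicas: 2                     # the rc keeps two identical pod replicas running
  selector:
    run: demo
  template:
    metadata:
      labels:
        run: demo
    spec:
      containers:
      - name: demo
        image: nginx              # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  selector:
    run: demo                     # the Service binds to pods carrying this label
  ports:
  - port: 80
    targetPort: 80
# kubectl create -f demo.yaml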
Core K8s components
apiserver
Provides the external REST API and runs on the master node; after validating a request it updates the data stored in etcd.
scheduler
The scheduler runs on the master node. It watches for changes through the apiserver and, when a pod has to be run, picks a suitable node with its scheduling algorithm.
controller-manager
The controller manager runs on the master node and periodically runs several controllers:
1) the replication controller manager, which manages and records the state of all rcs;
2) the service endpoint manager, which keeps the pods bound to each service up to date and unbinds pods that have failed;
3) the node controller, which periodically health-checks and monitors the cluster nodes;
4) the resource quota manager, which tracks cluster resource usage.
kubelet (minion node)
Manages and maintains all containers on its node, for example creating new containers as instructed and garbage-collecting unused images.
kube-proxy (minion node)
Load-balances client requests across the pods behind a service; it is the concrete implementation of the Service abstraction and hides the fact that pod IPs keep changing. The proxy forwards the traffic by manipulating iptables rules.
Workflow: a request goes to the apiserver, the scheduler picks a node, that node's kubelet starts the pod, and kube-proxy routes service traffic to it.
K8s installation
1. Host planning table:
IP address      Role                      Installed packages                                                Services to start (in order)
192.168.20.60   k8s-master (also minion)  kubernetes-v1.2.4, etcd-v2.3.2, flannel-v0.5.5, docker-v1.11.2    etcd, flannel, docker, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy
192.168.20.61   k8s-minion1               kubernetes-v1.2.4, etcd-v2.3.2, flannel-v0.5.5, docker-v1.11.2    etcd, flannel, docker, kubelet, kube-proxy
192.168.20.62   k8s-minion2               kubernetes-v1.2.4, etcd-v2.3.2, flannel-v0.5.5, docker-v1.11.2    etcd, flannel, docker, kubelet, kube-proxy
System environment: CentOS 7.2
# yum update
# Stop firewalld and use iptables instead
systemctl stop firewalld.service
systemctl disable firewalld.service
yum -y install iptables-services
systemctl restart iptables.service
systemctl enable iptables.service
# Disable SELinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
# tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
# yum install docker-engine
# Use the internal private registry
# vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/docker daemon --insecure-registry=192.168.4.231:5000 -H fd://
# Start docker
systemctl start docker
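A quick sanity check (hedged): confirm the daemon really got the --insecure-registry flag and that the private registry answers. 192.168.4.231:5000/pause:2.0 is an image used later in this post; substitute any image that actually exists in your registry.
ps -ef | grep 'docker daemon'
docker pull 192.168.4.231:5000/pause:2.0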
3. Install the etcd cluster (provides storage and strong consistency guarantees for k8s)
tar zxf etcd-v2.3.2-linux-amd64.tar.gz
cd etcd-v2.3.2-linux-amd64
cp etcd* /usr/local/bin/
# vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
[Service]
Environment=ETCD_NAME=k8s-master  # node name, must be unique; on the minion nodes simply use the host name
Environment=ETCD_DATA_DIR=/var/lib/etcd  # data directory; if the cluster gets into trouble you can delete it and reconfigure
Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=http://192.168.20.60:7001  # peer URL; change to the local IP on the other machines
Environment=ETCD_LISTEN_PEER_URLS=http://192.168.20.60:7001  # peer listen URL; change to the local IP on the other machines
Environment=ETCD_LISTEN_CLIENT_URLS=http://192.168.20.60:4001,http://127.0.0.1:4001  # client listen URLs; change to the local IP on the other machines
Environment=ETCD_ADVERTISE_CLIENT_URLS=http://192.168.20.60:4001  # advertised client URL; change to the local IP on the other machines
Environment=ETCD_INITIAL_CLUSTER_TOKEN=etcd-k8s-1  # cluster token, identical on all three nodes
Environment=ETCD_INITIAL_CLUSTER=k8s-master=http://192.168.20.60:7001,k8s-minion1=http://192.168.20.61:7001,k8s-minion2=http://192.168.20.62:7001  # full cluster membership
Environment=ETCD_INITIAL_CLUSTER_STATE=new
ExecStart=/usr/local/bin/etcd
[Install]
WantedBy=multi-user.target
# Start the service
systemctl start etcd
# Check that the etcd cluster is working
[root@k8s-minion2 etcd]# etcdctl cluster-health
member 2d3a022000105975 is healthy: got healthy result from http://192.168.20.61:4001
member 34a68a46747ee684 is healthy: got healthy result from http://192.168.20.62:4001
member fe9e66405caec791 is healthy: got healthy result from http://192.168.20.60:4001
cluster is healthy  # this line means the cluster started correctly
# Now set the overlay network range that will be carved up between the hosts
etcdctl set /coreos.com/network/config '{ "Network": "172.20.0.0/16" }'
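A hedged check: read the key back and make sure every etcd member returns the same value.
etcdctl get /coreos.com/network/config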
4. Install and start flannel (connects the container networks so containers can reach each other across hosts)
tar zxf flannel-0.5.5-linux-amd64.tar.gz
mv flannel-0.5.5 /usr/local/flannel
cd /usr/local/flannel
# vi /usr/lib/systemd/system/flanneld.service
[Unit]
Description=flannel
After=etcd.service
After=docker.service
[Service]
EnvironmentFile=/etc/sysconfig/flanneld
ExecStart=/usr/local/flannel/flanneld \
  -etcd-endpoints=${FLANNEL_ETCD} $FLANNEL_OPTIONS
[Install]
WantedBy=multi-user.target
# vi /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.20.60:4001,http://192.168.20.61:4001,http://192.168.20.62:4001"
# Start the service
systemctl start flanneld
mk-docker-opts.sh -i
source /run/flannel/subnet.env
ifconfig docker0 ${FLANNEL_SUBNET}
systemctl restart docker
# Verify that it worked
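A hedged way to verify: each host should now have a flannel0 interface inside 172.20.0.0/16 and a docker0 sitting on that host's own subnet, and containers on different hosts should be able to ping each other.
ip addr show flannel0
ip addr show docker0
cat /run/flannel/subnet.env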
5. Install Kubernetes
1. Download the release
cd /usr/local/
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes/server/
tar zxf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kubectl kube-scheduler kube-controller-manager kube-proxy kubelet /usr/local/bin/
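A quick, hedged sanity check that the binaries are on the PATH and match the expected release:
kubectl version --client
kubelet --version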
2. Register the systemd services
# vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/kubelet
User=root
ExecStart=/usr/local/bin/kubelet \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_ALLOW_PRIV \
  $KUBELET_ADDRESS \
  $KUBELET_PORT \
  $KUBELET_HOSTNAME \
  $KUBELET_API_SERVER \
  $KUBELET_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
# vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-proxy Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/kube-proxy
User=root
ExecStart=/usr/local/bin/kube-proxy \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_MASTER \
  $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
3. Create the configuration files
mkdir /etc/kubernetes
vi /etc/kubernetes/config
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.20.60:4001"
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
#KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_ALLOW_PRIV="--allow-privileged=true"
vi /etc/kubernetes/kubelet
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.20.60"  # on master and minion nodes alike, fill in the local IP
# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.20.60:8080"
# Add your own!
KUBELET_ARGS="--cluster-dns=192.168.20.64 --cluster-domain=cluster.local"  # used later by the DNS add-on
vi /etc/kubernetes/kube-proxy
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
# Add your own!
KUBE_PROXY_ARGS="--proxy-mode=userspace"  # proxy mode; userspace is used here. The iptables mode is more efficient, but make sure your kernel and iptables versions meet its requirements, otherwise it will fail.
On choosing a proxy mode there is a good write-up by an overseas developer.
4. The services above have to be started on every node. The master node additionally needs kube-apiserver, kube-controller-manager and kube-scheduler.
4.1 Configure these services
# vi /usr/lib/systemd/system/kube-apiserver.service
[Unit]
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=root
ExecStart=/usr/local/bin/kube-apiserver \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_ETCD_SERVERS \
  $KUBE_API_ADDRESS \
  $KUBE_API_PORT \
  $KUBELET_PORT \
  $KUBE_ALLOW_PRIV \
  $KUBE_SERVICE_ADDRESSES \
  $KUBE_ADMISSION_CONTROL \
  $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
# vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=root
ExecStart=/usr/local/bin/kube-controller-manager \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_MASTER \
  $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
# vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=root
ExecStart=/usr/local/bin/kube-scheduler \
  $KUBE_LOGTOSTDERR \
  $KUBE_LOG_LEVEL \
  $KUBE_MASTER \
  $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
vi /etc/kubernetes/apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.20.0/24"
#KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
vi /etc/kubernetes/controller-manager
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
vi /etc/kubernetes/scheduler
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
# Add your own!
KUBE_SCHEDULER_ARGS=""
More configuration options are described in the official documentation:
http://kubernetes.io/docs/admin/kube-proxy/
# Start the master services
systemctl start kubelet
systemctl start kube-proxy
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
# Start the minion services
systemctl start kubelet
systemctl start kube-proxy
# Check that the nodes registered correctly
[root@k8s-master bin]# kubectl get no
NAME            STATUS    AGE
192.168.20.60   Ready     24s
192.168.20.61   Ready     46s
192.168.20.62   Ready     35s
# Restart commands
systemctl restart kubelet
systemctl restart kube-proxy
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
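A hedged convenience step, so the stack also comes back after a reboot: enable the units as well (run only the ones actually installed on each node).
systemctl enable etcd flanneld docker kubelet kube-proxy
systemctl enable kube-apiserver kube-controller-manager kube-scheduler   # master only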
# Filling a pothole
The pause image on gcr.io is blocked, and without it k8s cannot run any pod; you get this error:
image pull failed for gcr.io/google_containers/pause:2.0
Use the image from Docker Hub instead, or pull it into the local registry and re-tag it; every node needs this image.
docker pull kubernetes/pause
docker tag kubernetes/pause gcr.io/google_containers/pause:2.0
[root@k8s-master addons]# docker images
REPOSITORY                       TAG   IMAGE ID       CREATED        SIZE
192.168.4.231:5000/pause         2.0   2b58359142b0   9 months ago   350.2 kB
gcr.io/google_containers/pause   2.0   2b58359142b0   9 months ago   350.2 kB
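A hedged alternative to pulling from Docker Hub on every node: push the image into the private registry once, then pull and re-tag it on each node with the gcr.io name the kubelet expects.
docker tag kubernetes/pause 192.168.4.231:5000/pause:2.0
docker push 192.168.4.231:5000/pause:2.0
# on every node:
docker pull 192.168.4.231:5000/pause:2.0
docker tag 192.168.4.231:5000/pause:2.0 gcr.io/google_containers/pause:2.0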
5. The official source tree ships a few add-ons, e.g. the monitoring dashboard and DNS
5.1 Dashboard add-on
cd /usr/local/kubernetes/cluster/addons/
cd /usr/local/kubernetes/cluster/addons/dashboard
This directory contains two files:
=============================================================
dashboard-controller.yaml  # defines the deployment: replica count, image, resource limits and so on
apiVersion: v1
kind: ReplicationController
metadata:
  # Keep the name in sync with image version and
  # gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-v1.0.1
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    version: v1.0.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1                     # number of replicas
  selector:
    k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: v1.0.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.4.231:5000/kubernetes-dashboard:v1.0.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://192.168.20.60:8080  # important: without this argument the dashboard looks for the apiserver on localhost instead of the master; also mind the indentation (spaces) throughout this file
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
========================================================
dashboard-service.yaml  # exposes the dashboard to the outside
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
=========================================================
kubectl create -f ./                              # create the resources
kubectl --namespace=kube-system get po            # check that the system pods start
kubectl --namespace=kube-system get po -o wide    # see which node each pod landed on
# To delete everything again, run:
kubectl delete -f ./
Open http://192.168.20.60:8080/ui/ in a browser.
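If the UI does not come up, a hedged first check is whether the dashboard pod is actually Running (the label selector comes from the manifest above):
kubectl --namespace=kube-system get pods -l k8s-app=kubernetes-dashboard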
5.2 DNS add-on
# Plain IP addresses are hard to remember; inside the cluster, DNS can bind names to IPs and keep them updated automatically.
cd /usr/local/kubernetes/cluster/addons/dns
cp skydns-rc.yaml.in /opt/dns/skydns-rc.yaml
cp skydns-svc.yaml.in /opt/dns/skydns-svc.yaml
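The *.yaml.in files are templates; a hedged sanity check is to make sure no unresolved {{ ... }} placeholders are left after copying, and to fill in any that are.
grep -n '{{' /opt/dns/skydns-rc.yaml /opt/dns/skydns-svc.yaml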
# /opt/dns/skydns-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: 192.168.4.231:5000/etcd-amd64:2.2.1   # I always pull the official images into the local registry first
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 500Mi
          requests:
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: 192.168.4.231:5000/kube2sky:1.14      # from the local registry
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters; this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            # Kube2sky watches all pods.
            memory: 200Mi
          requests:
            memory: 50Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube2sky"
        - --domain=cluster.local                       # pitfall: must match the value set in /etc/kubernetes/kubelet
        - --kube_master_url=http://192.168.20.60:8080  # the master node
      - name: skydns
        image: 192.168.4.231:5000/skydns:2015-10-13-8c72f8c
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters; this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 200Mi
          requests:
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local.                       # another pitfall: the trailing "." is required
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: 192.168.4.231:5000/exechealthz:1.0      # from the local registry
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            memory: 20Mi
          requests:
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null   # the same domain pitfall again
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default   # Don't use cluster DNS.
========================================================================
# /opt/dns/skydns-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 192.168.20.100       # must match the --cluster-dns address set in /etc/kubernetes/kubelet
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
==================================================================
# Start it
cd /opt/dns/
kubectl create -f ./
# Check the pod status; once it shows 4/4 containers running, move on to verification.
kubectl --namespace=kube-system get pod -o wide
The verification steps below are copied from the official docs:
URL: https://github.com/kubernetes/kubernetes/blob/release-1.2/cluster/addons/dns/README.md
How do I test if it is working?
First deploy DNS as described above.
1 Create a simple Pod to use as a test environment.
Create a file named busybox.yaml with the following contents:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
Then create a pod using this file:
kubectl create -f busybox.yaml
2 Wait for this pod to go into the running state.
You can get its status with:
kubectl get pods busybox
You should see:
NAME      READY     STATUS    RESTARTS   AGE
busybox   1/1       Running   0          <some-time>
3 Validate DNS works
Once that pod is running, you can exec nslookup in that environment:
kubectl exec busybox -- nslookup kubernetes.default
You should see something like:
Server:    10.0.0.10
Address 1: 10.0.0.10
Name:      kubernetes.default
Address 1: 10.0.0.1
If you see that, DNS is working correctly.
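A hedged extra check for this particular cluster: other services should resolve through the same DNS under <service>.<namespace>.svc.cluster.local, for example the dashboard service deployed earlier.
kubectl exec busybox -- nslookup kubernetes-dashboard.kube-system.svc.cluster.local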
5.3 The manage add-on
mkdir /opt/k8s-manage
cd /opt/k8s-manage
================================================
# cat k8s-manager-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: k8s-manager
  namespace: kube-system
  labels:
    app: k8s-manager
spec:
  replicas: 1
  selector:
    app: k8s-manager
  template:
    metadata:
      labels:
        app: k8s-manager
    spec:
      containers:
      - image: mlamina/k8s-manager:latest
        name: k8s-manager
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 80
          name: http
=================================================
# cat k8s-manager-svr.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-manager
  namespace: kube-system
  labels:
    app: k8s-manager
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: k8s-manager
=================================================
# Start it
kubectl create -f ./
Open in a browser:
http://192.168.20.60:8080/api/v1/proxy/namespaces/kube-system/services/k8s-manager
Worked example
1. Deploy zookeeper, ActiveMQ, redis and mongodb
mkdir /opt/service/
cd /opt/service
==========================================================
# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-amq-rds-mgd            # service name
  labels:
    run: zk-amq-rds-mgd
spec:
  type: NodePort
  ports:
  - port: 2181                    # service port
    nodePort: 31656               # port exposed on the nodes for external access
    targetPort: 2181              # port inside the container
    protocol: TCP                 # protocol
    name: zk-app                  # port name
  - port: 8161
    nodePort: 31654
    targetPort: 8161
    protocol: TCP
    name: amq-http
  - port: 61616
    nodePort: 31655
    targetPort: 61616
    protocol: TCP
    name: amq-app
  - port: 27017
    nodePort: 31653
    targetPort: 27017
    protocol: TCP
    name: mgd-app
  - port: 6379
    nodePort: 31652
    targetPort: 6379
    protocol: TCP
    name: rds-app
  selector:
    run: zk-amq-rds-mgd
---
#apiVersion: extensions/v1beta1
apiVersion: v1
kind: ReplicationController
metadata:
  name: zk-amq-rds-mgd
spec:
  replicas: 2                     # two replicas
  template:
    metadata:
      labels:
        run: zk-amq-rds-mgd
    spec:
      containers:
      - name: zookeeper           # application name
        image: 192.168.4.231:5000/zookeeper:0524   # local image
        imagePullPolicy: IfNotPresent              # the scheduler picks a suitable node, which pulls the image and starts the container
        ports:
        - containerPort: 2181     # service port inside the container
        env:
        - name: LANG
          value: en_US.UTF-8
        volumeMounts:
        - mountPath: /tmp/zookeeper       # mount point inside the container
          name: zookeeper-d               # volume name; must match the volume defined below
      - name: activemq
        image: 192.168.4.231:5000/activemq:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8161
        - containerPort: 61616
        volumeMounts:
        - mountPath: /opt/apache-activemq-5.10.2/data
          name: activemq-d
      - name: mongodb
        image: 192.168.4.231:5000/mongodb:3.0.6
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 27017
        volumeMounts:
        - mountPath: /var/lib/mongo
          name: mongodb-d
      - name: redis
        image: 192.168.4.231:5000/redis:2.8.25
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /opt/redis/var
          name: redis-d
      volumes:
      - hostPath:
          path: /mnt/mfs/service/zookeeper/data    # host mount point; I use distributed shared storage (MooseFS) so the replicas see consistent data
        name: zookeeper-d
      - hostPath:
          path: /mnt/mfs/service/activemq/data
        name: activemq-d
      - hostPath:
          path: /mnt/mfs/service/mongodb/data
        name: mongodb-d
      - hostPath:
          path: /mnt/mfs/service/redis/data
        name: redis-d
===========================================================================================
kubectl create -f ./
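A hedged way to check the result: both replicas should be Running, the Service should list all five ports, and every NodePort should answer on any node's IP (nc is assumed to be installed).
kubectl get rc,svc,po -o wide
nc -z 192.168.20.60 31656 && echo "zookeeper NodePort reachable"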
Reference: http://my.oschina.net/jayqqaa12/blog/693919
复制集与健康维护@H_301_4@
K8s 核心概念简介
@H_301_4@
pod
k8s 中创建的最小部署单元就是pod 而容器是运行在pod里面的 pod可以运行多个容器 pod内的容器可以共享网络和存储相互访问
@H_301_4@@H_404_15@
replication controller
复制及控制器:对多个pod创建相同的副本运行在不同节点 一般不会创建单独的pod而是与rc配合创建 rc控制并管理pod的生命周期维护pod的健康
service
每个容器重新运行后的ip地址都不是固定的 所以要有一个服务方向和负载均衡来处理 service就可以实现这个需求 service创建后可以暴露一个固定的端口 与相应的pod 进行绑定
K8s 核心组件简介@H_301_4@@H_404_15@
apiserver
提供对外的REST API服务 运行在 master节点 对指令进行验证后 修改etcd的存储数据
shcheduler
调度器运行在master节点,通过apiserver定时监控数据变化 运行pod时通过自身调度算法选出可运行的节点
controller-manager
控制管理器运行在master节点 分别几大管理器定时运行 分别为
1)replication controller 管理器 管理并保存所有的rc的的状态
2 ) service Endpoint 管理器 对service 绑定的pod 进行实时更新操作 对状态失败的pod进行解绑
3)Node controller 管理器 定时对集群的节点健康检查与监控
4)资源配额管理器 追踪集群资源使用情况
kuctrl (子节点)
管理维护当前子节点的所有容器 如同步创建新容器 回收镜像垃圾等
kube-proxy (子节点)
对客户端请求进行负载均衡并分配到service后端的pod 是service的具体实现保证了ip的动态变化 proxy 通过修改iptable 实现路由转发
工作流程
k8s 安装过程:@H_301_4@@H_404_15@
一、主机规划表:@H_301_4@
IP地址@H_301_4@
角色@H_301_4@
安装软件包@H_301_4@
启动服务及顺序@H_301_4@
192.168.20.60@H_301_4@
k8s-master兼minion@H_301_4@
kubernetes-v1.2.4、etcd-v2.3.2、flannel-v0.5.5、docker-v1.11.2@H_301_4@
@H_301_4@@H_301_4@
etcd@H_301_4@
flannel@H_301_4@
docker@H_301_4@
kube-apiserver@H_301_4@
kube-controller-manager@H_301_4@
kube-scheduler@H_301_4@
kubelet@H_301_4@
kube-proxy@H_301_4@
192.168.20.61@H_301_4@
k8s-minion1@H_301_4@
kubernetes-v1.2.4、etcd-v2.3.2、flannel-v0.5.5、docker-v1.11.2@H_301_4@
etcd@H_301_4@
flannel@H_301_4@
docker@H_301_4@
kubelet@H_301_4@
kube-proxy@H_301_4@
192.168.20.62@H_301_4@
k8s-minion2@H_301_4@
kubernetes-v1.2.4、etcd-v2.3.2、flannel-v0.5.5、docker-v1.11.2@H_301_4@
etcd@H_301_4@
flannel@H_301_4@
docker@H_301_4@
kubelet@H_301_4@
kube-proxy@H_301_4@
系统环境: CentOS-7.2@H_301_4@
#yum update
#关闭firewalld,安装iptables
systemctlstopfirewalld.service
systemctldisablefirewalld.service@H_301_4@
yum-yinstalliptables-services
systemctlrestartiptables.service
systemctlenableiptables.service
#关闭selinux
sed-i"s/SELINUX=enforcing/SELINUX=disabled/g"/etc/selinux/config@H_301_4@
setenforce0@H_301_4@
#tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
#yuminstalldocker-engine
#使用内部私有仓库@H_301_4@
#vi@H_301_4@/usr/lib/systemd/system/docker.service@H_301_4@@H_301_4@
ExecStart=/usr/bin/dockerdaemon--insecure-registry=192.168.4.231:5000-Hfd://@H_301_4@
#启动docker@H_301_4@
systemctl start docker@H_301_4@
三、安装etcd集群(为k8s提供存储功能和强一致性保证@H_301_4@)@H_301_4@
tarzxfetcd-v2.3.2-linux-amd64.tar.gz@H_301_4@
cdetcd-v2.3.2-linux-amd64@H_301_4@
cp etcd* /usr/local/bin/@H_301_4@
#vi/usr/lib/systemd/system/etcd.service@H_301_4@
Description=etcd@H_301_4@
[Service]@H_301_4@
Environment=ETCD_NAME=k8s-master@H_301_4@#节点名称,唯一。minion节点就对应改为主机名就好@H_301_4@@H_301_4@
Environment=ETCD_DATA_DIR=/var/lib/etcd #存储数据路径@H_301_4@,如果集群出现问题,可以删除这个目录重新配。@H_301_4@
Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=@H_301_4@http://192.168.20.60:7001 #监听地址,其他机器按照本机IP地址修改@H_301_4@
Environment=ETCD_LISTEN_PEER_URLS=@H_301_4@http://192.168.20.60:7001#监听地址,其他机器按照本机IP地址修改@H_301_4@
Environment=ETCD_LISTEN_CLIENT_URLS=@H_301_4@http://192.168.20.60:4001,http://127.0.0.1:4001 #对外监听地址,其他机器按照本机IP地址修改@H_301_4@
Environment=ETCD_ADVERTISE_CLIENT_URLS=@H_301_4@http://192.168.20.60:4001#对外监听地址,其他机器按照本机IP地址修改@H_301_4@
Environment=ETCD_INITIAL_CLUSTER_TOKEN=etcd-k8s-1@H_301_4@ #集群名称,三台节点统一@H_301_4@
Environment=ETCD_INITIAL_CLUSTER=k8s-master=@H_301_4@http://192.168.20.60:7001,k8s-minion1=http://192.168.20.61:7001,k8s-minion2=http://192.168.20.62:7001 #集群监控@H_301_4@
Environment=ETCD_INITIAL_CLUSTER_STATE=new@H_301_4@
ExecStart=/usr/local/bin/etcd@H_301_4@
[Install]@H_301_4@
WantedBy=multi-user.target@H_301_4@
#启动服务@H_301_4@
systemctl start etcd@H_301_4@
#检查etcd集群是否正常工作@H_301_4@
[@H_301_4@root@k8s-monion2etcd]#etcdctlcluster-health@H_301_4@
member2d3a022000105975ishealthy:gothealthyresultfrom@H_301_4@http://192.168.20.61:4001
member34a68a46747ee684ishealthy:gothealthyresultfrom@H_301_4@http://192.168.20.62:4001
memberfe9e66405caec791ishealthy:gothealthyresultfrom@H_301_4@http://192.168.20.60:4001
clusterishealthy@H_301_4@#出现这个说明已经正常启动了。@H_301_4@
@H_301_4@
@H_301_4@
@H_502_586@#然后设置一下打通的内网网段范围@H_301_4@
etcdctl set /coreos.com/network/config '{ "Network": "172.20.0.0/16" }'@H_301_4@
四、安装启动Flannel(打通容器间网络,可实现容器跨主机互连)@H_301_4@@H_301_4@
tarzxfflannel-0.5.5-linux-amd64.tar.gz@H_301_4@
mvflannel-0.5.5/usr/local/flannel@H_301_4@
cd /usr/local/flannel@H_301_4@
#vi/usr/lib/systemd/system/flanneld.service@H_301_4@
[Unit]@H_301_4@
Description=flannel@H_301_4@
After=etcd.service@H_301_4@
After=docker.service@H_301_4@
[Service]@H_301_4@
EnvironmentFile=/etc/sysconfig/flanneld@H_301_4@
ExecStart=/usr/local/flannel/flanneld\@H_301_4@
-etcd-endpoints=${FLANNEL_ETCD}$FLANNEL_OPTIONS@H_301_4@
[Install]@H_301_4@
WantedBy=multi-user.target@H_301_4@
#vi /etc/sysconfig/flanneld@H_301_4@
FLANNEL_ETCD="@H_301_4@http://192.168.20.60:4001,http://192.168.20.61:4001,http://192.168.20.62:4001"@H_301_4@
@H_301_4@
#启动服务@H_301_4@
systemctlstartflanneld@H_301_4@
mk-docker-opts.sh-i@H_301_4@
source/run/flannel/subnet.env@H_301_4@
ifconfigdocker0${FLANNEL_SUBNET}@H_301_4@
systemctlrestartdocker@H_301_4@
#验证是否成功@H_301_4@
五、安装kubernets@H_301_4@
1.下载源码包
cd /usr/local/
git clone
https://github.com/kubernetes/kubernetes.git
cd kubernetes/server/
tarzxf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cpkube-apiserverkubectlkube-schedulerkube-controller-managerkube-proxy
kubelet
/usr/local/bin/
2.注册系统服务
#vi/usr/lib/systemd/system/kubelet.service@H_301_4@
[Unit]
Description=KubernetesKubeletServer
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/kubelet
User=root
ExecStart=/usr/local/bin/kubelet\
$KUBE_LOGTOSTDERR\
$KUBE_LOG_LEVEL\
$KUBE_ALLOW_PRIV\
$KUBELET_ADDRESS\
$KUBELET_PORT\
$KUBELET_HOSTNAME\
$KUBELET_API_SERVER\
$KUBELET_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
#vi/usr/lib/systemd/system/kube-proxy.service@H_301_4@
[Unit]
Description=KubernetesKube-proxyServer
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/kube-proxy
User=root
ExecStart=/usr/local/bin/kube-proxy\
$KUBE_LOGTOSTDERR\
$KUBE_LOG_LEVEL\
$KUBE_MASTER\
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
3.创建配置文件
mkdir /etc/kubernetes
vi /etc/kubernetes/config
#Commaseparatedlistofnodesintheetcdcluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.20.60:4001"
#loggingtostderrmeanswegetitinthesystemdjournal
KUBE_LOGTOSTDERR="--logtostderr=true"
#journalmessagelevel,0isdebug
KUBE_LOG_LEVEL="--v=0"
#Shouldthisclusterbeallowedtorunprivilegeddockercontainers
#KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_ALLOW_PRIV="--allow-privileged=true"
vi/etc/kubernetes/kubelet
#Theaddressfortheinfoservertoserveon
KUBELET_ADDRESS="--address=0.0.0.0"
#Theportfortheinfoservertoserveon
KUBELET_PORT="--port=10250"
#Youmayleavethisblanktousetheactualhostname
KUBELET_HOSTNAME="--hostname-override=192.168.20.60@H_301_4@" #master,minion节点填本机的IP
#Locationoftheapi-server
KUBELET_API_SERVER="--api-servers=http://192.168.20.60:8080"
#Addyourown!
KUBELET_ARGS="--cluster-dns=192.168.20.64@H_301_4@--cluster-domain=cluster.local@H_301_4@" #后面使用dns插件会用到@H_301_4@
vi/etc/kubernetes/kube-proxy
#Howthereplicationcontrollerandschedulerfindthekube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
#Addyourown!
KUBE_PROXY_ARGS="--proxy-mode=userspace" @H_301_4@ #代理模式,这里使用userspace。而iptables模式效率比较高,但要注意你的内核版本和iptables的版本是否符合要求,要不然会出错。@H_301_4@
关于代理模式的选择,可以看国外友人的解释:
4.以上服务需要在所有节点上启动,下面的是master节点另外需要的服务:@H_301_4@
kube-apiserver
、
kube-controller-manager
、
kube-scheduler
4.1、配置相关服务
#vi/usr/lib/systemd/system/kube-apiserver.service@H_301_4@
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=root
ExecStart=/usr/local/bin/kube-apiserver\
$KUBE_LOGTOSTDERR\
$KUBE_LOG_LEVEL\
$KUBE_ETCD_SERVERS\
$KUBE_API_ADDRESS\
$KUBE_API_PORT\
$KUBELET_PORT\
$KUBE_ALLOW_PRIV\
$KUBE_SERVICE_ADDRESSES\
$KUBE_ADMISSION_CONTROL\
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
#vi/usr/lib/systemd/system/kube-controller-manager.service@H_301_4@
[Unit]
Description=KubernetesControllerManager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=root
ExecStart=/usr/local/bin/kube-controller-manager\
$KUBE_LOGTOSTDERR\
$KUBE_LOG_LEVEL\
$KUBE_MASTER\
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
#vi/usr/lib/systemd/system/kube-scheduler.service@H_301_4@
[Unit]
Description=KubernetesSchedulerPlugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=root
ExecStart=/usr/local/bin/kube-scheduler\
$KUBE_LOGTOSTDERR\
$KUBE_LOG_LEVEL\
$KUBE_MASTER\
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
vi /etc/kubernetes/apiserver
#Theaddressonthelocalservertolistento.
KUBE_API_ADDRESS="--address=0.0.0.0"
#Theportonthelocalservertolistenon.
KUBE_API_PORT="--port=8080"
#Howthereplicationcontrollerandschedulerfindthekube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
#Portkubeletslistenon
KUBELET_PORT="--kubelet-port=10250"
#Addressrangetouseforservices
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.20.0/24"
#KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
#Addyourown!
KUBE_API_ARGS=""
vi /etc/kubernetes/controller-manager
#Howthereplicationcontrollerandschedulerfindthekube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
#Addyourown!
KUBE_CONTROLLER_MANAGER_ARGS=""
vi /etc/kubernetes/scheduler
#Howthereplicationcontrollerandschedulerfindthekube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
#Addyourown!
KUBE_SCHEDULER_ARGS=""
更多配置项可以参考官方文档:
http://kubernetes.io/docs/admin/kube-proxy/
#启动master服务
systemctlstartkubelet
systemctlstartkube-proxy
systemctlstartkube-apiserver
systemctlstartkube-controller-manager
systemctlstartkube-scheduler
#启动minion服务
systemctlstartkubelet
systemctlstartkube-proxy
#检查服务是否启动正常
[@H_301_4@root@k8s-masterbin]#kubectlgetno@H_301_4@
NAMESTATUSAGE
192.168.20.60Ready24s
192.168.20.61Ready46s
192.168.20.62Ready35s
#重启命令
systemctlrestartkubelet
systemctlrestartkube-proxy
systemctlrestartkube-apiserver
systemctlrestart kube-controller-manager
systemctlrestartkube-scheduler@H_301_4@
#填坑
pause gcr.io 被墙,没有这个镜像k8s应用不了,报下面错误:
image pull Failed for gcr.io/google_containers/pause:2.0@H_301_4@
使用docker hub的镜像代替,或者下到本地仓库,然后再重新打tag,并且每个节点都需要这个镜像
docker pull@H_301_4@kubernetes/pause@H_301_4@
dockertag @H_301_4@kubernetes/pause @H_301_4@gcr.io/google_containers/pause:2.0@H_301_4@
[root@k8s-masteraddons]#dockerimages
REPOSITORYTAGIMAGEIDCREATEDSIZE
192.168.4.231:5000/pause2.02b58359142b09monthsago350.2kB
gcr.io/google_containers/pause@H_301_4@2.02b58359142b09monthsago350.2kB
5.官方源码包里有一些插件,如:监控面板、dns@H_301_4@
cd /usr/local/kubernetes/cluster/addons/
cd /usr/local/kubernetes/cluster/addons/dashboard@H_301_4@
下面有两个文件:
=============================================================
dashboard-controller.yaml #用来设置部署应用,如:副本数,使用镜像,资源控制等等
apiVersion:v1
kind:ReplicationController@H_301_4@
Metadata:
#Keepthenameinsyncwithimageversionand
#gce/coreos/kube-manifests/addons/dashboardcounterparts
name:kubernetes-dashboard-v1.0.1
namespace:kube-system
labels:
k8s-app:kubernetes-dashboard
version:v1.0.1
kubernetes.io/cluster-service:"true"
spec:
replicas:1 #副本数量
selector:
k8s-app:kubernetes-dashboard
template:
Metadata:
labels:
k8s-app:kubernetes-dashboard
version:v1.0.1
kubernetes.io/cluster-service:"true"
spec:
containers:
-name:kubernetes-dashboard
image:192.168.4.231:5000/kubernetes-dashboard:v1.0.1
resources:
#keeprequest=limittokeepthiscontaineringuaranteedclass
limits:
cpu:100m
memory:50Mi
requests:
cpu:100m
memory:50Mi
ports:
-containerPort:9090
args:@H_301_4@
---apiserver-host=http://192.168.20.60:8080@H_301_4@#这里需要注意,不加这个参数,会默认去找localhost,而不是去master那里取。还有就是这个配置文件各项缩减问题,空格。@H_301_4@
livenessProbe:
httpGet:
path:/
port:9090
initialDelaySeconds:30
timeoutSeconds:30
========================================================
dashboard-service.yaml #提供外部访问服务@H_301_4@@H_301_4@
apiVersion:v1
kind:Service@H_301_4@
Metadata:
name:kubernetes-dashboard
namespace:kube-system
labels:
k8s-app:kubernetes-dashboard
kubernetes.io/cluster-service:"true"
spec:
selector:
k8s-app:kubernetes-dashboard
ports:
-port:80
targetPort:9090
=========================================================
kubectlcreate-f./ #创建服务
kubectl--namespace=kube-systemgetpo #查看系统服务启动状态

kubectl--namespace=kube-systemgetpo-owide #查看系统服务起在哪个节点

#若想删除,可以执行下面命令
kubectldelete-f./
在浏览器输入: http://192.168.20.60:8080/ui/
5.2、DNS 插件安装@H_301_4@@H_301_4@@H_301_4@@H_301_4@@H_404_15@
#使用ip地址方式不太容易记忆,集群内可以使用dns绑定ip并自动更新维护。@H_301_4@@H_404_15@
cd@H_301_4@@H_301_4@/usr/local/kubernetes/cluster/addons/dns@H_301_4@@H_301_4@
cp@H_301_4@skydns-rc.yaml.in /opt/dns/@H_301_4@skydns-rc.yaml@H_301_4@@H_301_4@
cp@H_301_4@skydns-svc.yaml.in /opt/dns/@H_301_4@skydns-svc.yaml@H_301_4@@H_301_4@
@H_301_4@
#/opt/dns/skydns-rc.yaml 文件@H_301_4@
apiVersion:v1@H_301_4@
kind:ReplicationController@H_301_4@
name:kube-dns-v11@H_301_4@
namespace:kube-system@H_301_4@
labels:@H_301_4@
k8s-app:kube-dns@H_301_4@
version:v11@H_301_4@
kubernetes.io/cluster-service:"true"@H_301_4@
spec:@H_301_4@
selector:@H_301_4@
k8s-app:kube-dns@H_301_4@
version:v11@H_301_4@
template:@H_301_4@
labels:@H_301_4@
k8s-app:kube-dns@H_301_4@
version:v11@H_301_4@
kubernetes.io/cluster-service:"true"@H_301_4@
spec:@H_301_4@
containers:@H_301_4@
-name:etcd@H_301_4@
image:192.168.4.231:5000/etcd-amd64:2.2.1 #我都是先把官方镜像下载到本地仓库@H_301_4@@H_301_4@
resources:@H_301_4@
#TODO:Setmemorylimitswhenwe'veprofiledthecontainerforlarge@H_301_4@
#clusters,thensetrequest=limittokeepthiscontainerin@H_301_4@
#guaranteedclass.Currently,thiscontainerfallsintothe@H_301_4@
#"burstable"categorysothekubeletdoesn'tbackofffromrestartingit.@H_301_4@
limits:@H_301_4@
memory:500Mi@H_301_4@
requests:@H_301_4@
memory:50Mi@H_301_4@
command:@H_301_4@
-/usr/local/bin/etcd@H_301_4@
--data-dir@H_301_4@
-/var/etcd/data@H_301_4@
--listen-client-urls@H_301_4@
-http://127.0.0.1:2379,http://127.0.0.1:4001@H_301_4@
--advertise-client-urls@H_301_4@
-http://127.0.0.1:2379,http://127.0.0.1:4001@H_301_4@
--initial-cluster-token@H_301_4@
-skydns-etcd@H_301_4@
volumeMounts:@H_301_4@
-name:etcd-storage@H_301_4@
mountPath:/var/etcd/data@H_301_4@
-name:kube2sky@H_301_4@
image:192.168.4.231:5000/kube2sky:1.14 @H_301_4@#本地仓库取@H_301_4@@H_301_4@
resources:@H_301_4@
#TODO:Setmemorylimitswhenwe'veprofiledthecontainerforlarge@H_301_4@
#clusters,thiscontainerfallsintothe@H_301_4@
#"burstable"categorysothekubeletdoesn'tbackofffromrestartingit.@H_301_4@
limits:@H_301_4@
#Kube2skywatchesallpods.@H_301_4@
memory:200Mi@H_301_4@
requests:@H_301_4@
memory:50Mi@H_301_4@
livenessProbe:@H_301_4@
httpGet:@H_301_4@
path:/healthz@H_301_4@
port:8080@H_301_4@
scheme:HTTP@H_301_4@
initialDelaySeconds:60@H_301_4@
timeoutSeconds:5@H_301_4@
successThreshold:1@H_301_4@
failureThreshold:5@H_301_4@
readinessProbe:@H_301_4@
httpGet:@H_301_4@
path:/readiness@H_301_4@
port:8081@H_301_4@
scheme:HTTP@H_301_4@
#wepollonpodstartupfortheKubernetesmasterserviceand@H_301_4@
#onlysetupthe/readinessHTTPserveroncethat'savailable.@H_301_4@
initialDelaySeconds:30@H_301_4@
timeoutSeconds:5@H_301_4@
args:@H_301_4@
#command="/kube2sky"@H_301_4@
---domain=@H_301_4@cluster.local @H_301_4@#一个坑,要和/etc/kubernetes/kubelet 内的一致@H_301_4@
---kube_master_url=@H_301_4@http://192.168.20.60:8080@H_301_4@ @H_301_4@#master管理节点@H_301_4@
-name:skydns@H_301_4@
image:192.168.4.231:5000/skydns:2015-10-13-8c72f8c@H_301_4@
resources:@H_301_4@
#TODO:Setmemorylimitswhenwe'veprofiledthecontainerforlarge@H_301_4@
#clusters,thiscontainerfallsintothe@H_301_4@
#"burstable"categorysothekubeletdoesn'tbackofffromrestartingit.@H_301_4@
limits:@H_301_4@
memory:200Mi@H_301_4@
requests:@H_301_4@
memory:50Mi@H_301_4@
args:@H_301_4@
#command="/skydns"@H_301_4@
--machines=http://127.0.0.1:4001@H_301_4@
--addr=0.0.0.0:53@H_301_4@
--ns-rotate=false@H_301_4@
--domain=@H_301_4@cluster.local. @H_301_4@#另一个坑!! 后面要带"."@H_301_4@
ports:@H_301_4@
-containerPort:53@H_301_4@
name:dns@H_301_4@
protocol:UDP@H_301_4@
-containerPort:53@H_301_4@
name:dns-tcp@H_301_4@
protocol:TCP@H_301_4@
-name:healthz@H_301_4@
image:192.168.4.231:5000/exechealthz:1.0 #本地仓库取镜像@H_301_4@@H_301_4@
resources:@H_301_4@
#keeprequest=limittokeepthiscontaineringuaranteedclass@H_301_4@
limits:@H_301_4@
memory:20Mi@H_301_4@
requests:@H_301_4@
memory:20Mi@H_301_4@
args:@H_301_4@
--cmd=nslookupkubernetes.default.@H_301_4@svc.@H_301_4@cluster.local@H_301_4@127.0.0.1>/dev/null #还是这个坑@H_301_4@@H_301_4@
--port=8080@H_301_4@
ports:@H_301_4@
-containerPort:8080@H_301_4@
protocol:TCP@H_301_4@
volumes:@H_301_4@
-name:etcd-storage@H_301_4@
emptyDir:{}@H_301_4@
dnsPolicy:Default#Don'tuseclusterDNS.@H_301_4@
========================================================================
#/opt/dns/skydns-svc.yaml 文件@H_301_4@
@H_301_4@
apiVersion:v1@H_301_4@
kind:Service@H_301_4@
name:kube-dns@H_301_4@
namespace:kube-system@H_301_4@
labels:@H_301_4@
k8s-app:kube-dns@H_301_4@
kubernetes.io/cluster-service:"true"@H_301_4@
kubernetes.io/name:"KubeDNS"@H_301_4@
spec:@H_301_4@
selector:@H_301_4@
k8s-app:kube-dns@H_301_4@
clusterIP:192.168.20.100@H_301_4@
ports:@H_301_4@
-name:dns@H_301_4@
port:53@H_301_4@
protocol:UDP@H_301_4@
-name:dns-tcp@H_301_4@
port:53@H_301_4@
protocol:TCP@H_301_4@
==================================================================@H_301_4@
#启动@H_301_4@
cd /opt/dns/@H_301_4@
kubectl create -f ./@H_301_4@
@H_301_4@
#查看pod启动状态,看到4/4 个服务都启动完成,那就可以进行下一步验证阶段。@H_301_4@
kubectl--namespace=kube-systemgetpod-owide@H_301_4@@H_301_4@
转官网的验证方法:@H_301_4@@H_301_4@
网址:https://github.com/kubernetes/kubernetes/blob/release-1.2/cluster/addons/dns/README.md#userconsent#
How do I test if it is working?
First deploy DNS as described above.
1 Create a simple Pod to use as a test environment.@H_404_15@
Create a file named busyBox.yaml with the following contents:
apiVersion:v1kind:PodMetadata:name:busyBoxnamespace:defaultspec:containers:-image:busyBoxcommand:-sleep-"3600"imagePullPolicy:IfNotPresentname:busyBoxrestartPolicy:Always
Then create a pod using this file:
kubectlcreate-fbusyBox.yaml
2 Wait for this pod to go into the running state.@H_404_15@
You can get its status with:
kubectlgetpodsbusyBox
You should see:
NAMEREADYSTATUSRESTARTSAGE
busyBox1/1Running0<some-time>
3 Validate DNS works@H_404_15@
Once that pod is running,you can exec nslookup in that environment:
kubectlexecbusyBox--nslookupkubernetes.default
You should see something like:
Server:10.0.0.10
Address1:10.0.0.10
Name:kubernetes.default
Address1:10.0.0.1
If you see that,DNS is working correctly.
5.3、manage插件@H_301_4@@H_301_4@@H_301_4@
@H_301_4@
mkdir /opt/k8s-manage@H_301_4@
cd /opt/k8s-manage@H_301_4@
================================================@H_301_4@
#catk8s-manager-rc.yaml
apiVersion:v1
kind:ReplicationController
Metadata:
name:k8s-manager
namespace:kube-system
labels:
app:k8s-manager
spec:
replicas:1
selector:
app:k8s-manager
template:
Metadata:
labels:
app:k8s-manager
spec:
containers:
-image:mlamina/k8s-manager:latest@H_301_4@
name:k8s-manager
resources:
limits:
cpu:100m
memory:50Mi
ports:
-containerPort:80
name:http
=================================================@H_301_4@@H_301_4@
#catk8s-manager-svr.yaml
apiVersion:v1
kind:Service
Metadata:
name:k8s-manager
namespace:kube-system
labels:
app:k8s-manager
spec:
ports:
-port:80
targetPort:http
selector:
app:k8s-manager@H_301_4@
=================================================@H_301_4@@H_301_4@
#启动@H_301_4@
kubectl create -f ./@H_301_4@@H_301_4@
@H_301_4@
浏览器访问:@H_301_4@
http://192.168.20.60:8080/api/v1/proxy/namespaces/kube-system/services/k8s-manager
@H_301_4@
实例演示@H_301_4@
1.搭zookeeper、activeMQ、redis、mongodb服务@H_301_4@@H_301_4@
mkdir /opt/service/@H_301_4@@H_301_4@
cd /opt/service@H_301_4@@H_301_4@
==========================================================@H_301_4@@H_301_4@
#cat service.yaml@H_301_4@@H_301_4@
@H_301_4@
apiVersion:v1@H_301_4@
kind:Service
Metadata:
name:zk-amq-rds-mgd #服务名称
labels:
run:zk-amq-rds-mgd
spec:
type:NodePort
ports:
-port:2181 #标识@H_301_4@
nodePort:31656#master节点对外服务端口@H_301_4@
targetPort:2181 #容器内部端口@H_301_4@
protocol:TCP #协议类型@H_301_4@
name:zk-app #表示名@H_301_4@
-port:8161@H_301_4@
nodePort:31654@H_301_4@
targetPort:8161@H_301_4@
protocol:TCP@H_301_4@
name:amq-http@H_301_4@
-port:61616@H_301_4@
nodePort:31655@H_301_4@
targetPort:61616@H_301_4@
protocol:TCP@H_301_4@
name:amq-app@H_301_4@
-port:27017@H_301_4@
nodePort:31653@H_301_4@
targetPort:27017@H_301_4@
protocol:TCP@H_301_4@
name:mgd-app@H_301_4@
-port:6379@H_301_4@
nodePort:31652@H_301_4@
targetPort:6379@H_301_4@
protocol:TCP@H_301_4@
name:rds-app@H_301_4@
selector:
run:zk-amq-rds-mgd
---
#apiVersion:extensions/v1beta1
apiVersion:v1
kind:ReplicationController
Metadata:
name:zk-amq-rds-mgd
spec:
replicas:2 #两个副本@H_301_4@
template:
Metadata:
labels:
run:zk-amq-rds-mgd
spec:
containers:
-name:zookeeper #应用名称@H_301_4@
image:192.168.4.231:5000/zookeeper:0524 #使用本地镜像@H_301_4@
imagePullPolicy:IfNotPresent #自动分配到性能好的节点,去拉镜像并启动容器。@H_301_4@
ports:@H_301_4@
-containerPort:2181 #容器内部服务端口@H_301_4@
env:@H_301_4@
-name:LANG@H_301_4@
value:en_US.UTF-8@H_301_4@
volumeMounts:@H_301_4@
-mountPath:/tmp/zookeeper #容器内部挂载点@H_301_4@
@H_301_4@name:zookeeper-d@H_301_4@#挂载名称,要与下面配置外部挂载点一致@H_301_4@@H_301_4@
-name:activemq@H_301_4@
image:192.168.4.231:5000/activemq:v2@H_301_4@
imagePullPolicy:IfNotPresent@H_301_4@
ports:@H_301_4@
-containerPort:8161@H_301_4@
-containerPort:61616@H_301_4@
volumeMounts:@H_301_4@
-mountPath:/opt/apache-activemq-5.10.2/data@H_301_4@
name:activemq-d@H_301_4@
-name:mongodb@H_301_4@
image:192.168.4.231:5000/mongodb:3.0.6@H_301_4@
imagePullPolicy:IfNotPresent@H_301_4@
ports:@H_301_4@
-containerPort:27017@H_301_4@
volumeMounts:@H_301_4@
-mountPath:/var/lib/mongo@H_301_4@
name:mongodb-d@H_301_4@
-name:redis@H_301_4@
image:192.168.4.231:5000/redis:2.8.25@H_301_4@
imagePullPolicy:IfNotPresent@H_301_4@
ports:@H_301_4@
-containerPort:6379@H_301_4@
volumeMounts:@H_301_4@
-mountPath:/opt/redis/var@H_301_4@
name:redis-d@H_301_4@
volumes:
-hostPath:@H_301_4@
path:/mnt/mfs/service/zookeeper/data #宿主机挂载点,我这里用了分布式共享存储(MooseFS),这样可以保证多个副本数据的一致性。@H_301_4@
@H_301_4@name:zookeeper-d@H_301_4@@H_301_4@
-hostPath:@H_301_4@
path:/mnt/mfs/service/activemq/data@H_301_4@
-hostPath:@H_301_4@
path:/mnt/mfs/service/mongodb/data@H_301_4@
-hostPath:@H_301_4@
path:/mnt/mfs/service/redis/data@H_301_4@
name:redis-d@H_301_4@
===========================================================================================@H_301_4@@H_301_4@
kubectl create -f ./@H_301_4@@H_301_4@
参考文献:@H_301_4@http://my.oschina.net/jayqqaa12/blog/693919#userconsent#
apiserver
提供对外的REST API服务 运行在 master节点 对指令进行验证后 修改etcd的存储数据
shcheduler
调度器运行在master节点,通过apiserver定时监控数据变化 运行pod时通过自身调度算法选出可运行的节点
controller-manager
控制管理器运行在master节点 分别几大管理器定时运行 分别为
1)replication controller 管理器 管理并保存所有的rc的的状态
2 ) service Endpoint 管理器 对service 绑定的pod 进行实时更新操作 对状态失败的pod进行解绑
3)Node controller 管理器 定时对集群的节点健康检查与监控
4)资源配额管理器 追踪集群资源使用情况
kuctrl (子节点)
管理维护当前子节点的所有容器 如同步创建新容器 回收镜像垃圾等
kube-proxy (子节点)
对客户端请求进行负载均衡并分配到service后端的pod 是service的具体实现保证了ip的动态变化 proxy 通过修改iptable 实现路由转发
工作流程
k8s 安装过程:@H_301_4@@H_404_15@
一、主机规划表:@H_301_4@
IP地址@H_301_4@
角色@H_301_4@
安装软件包@H_301_4@
启动服务及顺序@H_301_4@
192.168.20.60@H_301_4@
k8s-master兼minion@H_301_4@
kubernetes-v1.2.4、etcd-v2.3.2、flannel-v0.5.5、docker-v1.11.2@H_301_4@
@H_301_4@@H_301_4@
etcd@H_301_4@
flannel@H_301_4@
docker@H_301_4@
kube-apiserver@H_301_4@
kube-controller-manager@H_301_4@
kube-scheduler@H_301_4@
kubelet@H_301_4@
kube-proxy@H_301_4@
192.168.20.61@H_301_4@
k8s-minion1@H_301_4@
kubernetes-v1.2.4、etcd-v2.3.2、flannel-v0.5.5、docker-v1.11.2@H_301_4@
etcd@H_301_4@
flannel@H_301_4@
docker@H_301_4@
kubelet@H_301_4@
kube-proxy@H_301_4@
192.168.20.62@H_301_4@
k8s-minion2@H_301_4@
kubernetes-v1.2.4、etcd-v2.3.2、flannel-v0.5.5、docker-v1.11.2@H_301_4@
etcd@H_301_4@
flannel@H_301_4@
docker@H_301_4@
kubelet@H_301_4@
kube-proxy@H_301_4@
系统环境: CentOS-7.2@H_301_4@
#yum update
#关闭firewalld,安装iptables
systemctlstopfirewalld.service
systemctldisablefirewalld.service@H_301_4@
yum-yinstalliptables-services
systemctlrestartiptables.service
systemctlenableiptables.service
#关闭selinux
sed-i"s/SELINUX=enforcing/SELINUX=disabled/g"/etc/selinux/config@H_301_4@
setenforce0@H_301_4@
#tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
#yuminstalldocker-engine
#使用内部私有仓库@H_301_4@
#vi@H_301_4@/usr/lib/systemd/system/docker.service@H_301_4@@H_301_4@
ExecStart=/usr/bin/dockerdaemon--insecure-registry=192.168.4.231:5000-Hfd://@H_301_4@
#启动docker@H_301_4@
systemctl start docker@H_301_4@
三、安装etcd集群(为k8s提供存储功能和强一致性保证@H_301_4@)@H_301_4@
tarzxfetcd-v2.3.2-linux-amd64.tar.gz@H_301_4@
cdetcd-v2.3.2-linux-amd64@H_301_4@
cp etcd* /usr/local/bin/@H_301_4@
#vi/usr/lib/systemd/system/etcd.service@H_301_4@
Description=etcd@H_301_4@
[Service]@H_301_4@
Environment=ETCD_NAME=k8s-master@H_301_4@#节点名称,唯一。minion节点就对应改为主机名就好@H_301_4@@H_301_4@
Environment=ETCD_DATA_DIR=/var/lib/etcd #存储数据路径@H_301_4@,如果集群出现问题,可以删除这个目录重新配。@H_301_4@
Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=@H_301_4@http://192.168.20.60:7001 #监听地址,其他机器按照本机IP地址修改@H_301_4@
Environment=ETCD_LISTEN_PEER_URLS=@H_301_4@http://192.168.20.60:7001#监听地址,其他机器按照本机IP地址修改@H_301_4@
Environment=ETCD_LISTEN_CLIENT_URLS=@H_301_4@http://192.168.20.60:4001,http://127.0.0.1:4001 #对外监听地址,其他机器按照本机IP地址修改@H_301_4@
Environment=ETCD_ADVERTISE_CLIENT_URLS=@H_301_4@http://192.168.20.60:4001#对外监听地址,其他机器按照本机IP地址修改@H_301_4@
Environment=ETCD_INITIAL_CLUSTER_TOKEN=etcd-k8s-1@H_301_4@ #集群名称,三台节点统一@H_301_4@
Environment=ETCD_INITIAL_CLUSTER=k8s-master=@H_301_4@http://192.168.20.60:7001,k8s-minion1=http://192.168.20.61:7001,k8s-minion2=http://192.168.20.62:7001 #集群监控@H_301_4@
Environment=ETCD_INITIAL_CLUSTER_STATE=new@H_301_4@
ExecStart=/usr/local/bin/etcd@H_301_4@
[Install]@H_301_4@
WantedBy=multi-user.target@H_301_4@
#启动服务@H_301_4@
systemctl start etcd@H_301_4@
#检查etcd集群是否正常工作@H_301_4@
[@H_301_4@root@k8s-monion2etcd]#etcdctlcluster-health@H_301_4@
member2d3a022000105975ishealthy:gothealthyresultfrom@H_301_4@http://192.168.20.61:4001
member34a68a46747ee684ishealthy:gothealthyresultfrom@H_301_4@http://192.168.20.62:4001
memberfe9e66405caec791ishealthy:gothealthyresultfrom@H_301_4@http://192.168.20.60:4001
clusterishealthy@H_301_4@#出现这个说明已经正常启动了。@H_301_4@
@H_301_4@
@H_301_4@
@H_502_586@#然后设置一下打通的内网网段范围@H_301_4@
etcdctl set /coreos.com/network/config '{ "Network": "172.20.0.0/16" }'@H_301_4@
四、安装启动Flannel(打通容器间网络,可实现容器跨主机互连)@H_301_4@@H_301_4@
tarzxfflannel-0.5.5-linux-amd64.tar.gz@H_301_4@
mvflannel-0.5.5/usr/local/flannel@H_301_4@
cd /usr/local/flannel@H_301_4@
#vi/usr/lib/systemd/system/flanneld.service@H_301_4@
[Unit]@H_301_4@
Description=flannel@H_301_4@
After=etcd.service@H_301_4@
After=docker.service@H_301_4@
[Service]@H_301_4@
EnvironmentFile=/etc/sysconfig/flanneld@H_301_4@
ExecStart=/usr/local/flannel/flanneld\@H_301_4@
-etcd-endpoints=${FLANNEL_ETCD}$FLANNEL_OPTIONS@H_301_4@
[Install]@H_301_4@
WantedBy=multi-user.target@H_301_4@
#vi /etc/sysconfig/flanneld@H_301_4@
FLANNEL_ETCD="@H_301_4@http://192.168.20.60:4001,http://192.168.20.61:4001,http://192.168.20.62:4001"@H_301_4@
@H_301_4@
#启动服务@H_301_4@
systemctlstartflanneld@H_301_4@
mk-docker-opts.sh-i@H_301_4@
source/run/flannel/subnet.env@H_301_4@
ifconfigdocker0${FLANNEL_SUBNET}@H_301_4@
systemctlrestartdocker@H_301_4@
#验证是否成功@H_301_4@
五、安装kubernets@H_301_4@
1.下载源码包
cd /usr/local/
git clone
https://github.com/kubernetes/kubernetes.git
cd kubernetes/server/
tarzxf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cpkube-apiserverkubectlkube-schedulerkube-controller-managerkube-proxy
kubelet
/usr/local/bin/
2.注册系统服务
#vi/usr/lib/systemd/system/kubelet.service@H_301_4@
[Unit]
Description=KubernetesKubeletServer
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/kubelet
User=root
ExecStart=/usr/local/bin/kubelet\
$KUBE_LOGTOSTDERR\
$KUBE_LOG_LEVEL\
$KUBE_ALLOW_PRIV\
$KUBELET_ADDRESS\
$KUBELET_PORT\
$KUBELET_HOSTNAME\
$KUBELET_API_SERVER\
$KUBELET_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
#vi/usr/lib/systemd/system/kube-proxy.service@H_301_4@
[Unit]
Description=KubernetesKube-proxyServer
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/kube-proxy
User=root
ExecStart=/usr/local/bin/kube-proxy\
$KUBE_LOGTOSTDERR\
$KUBE_LOG_LEVEL\
$KUBE_MASTER\
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
3.创建配置文件
mkdir /etc/kubernetes
vi /etc/kubernetes/config
#Commaseparatedlistofnodesintheetcdcluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.20.60:4001"
#loggingtostderrmeanswegetitinthesystemdjournal
KUBE_LOGTOSTDERR="--logtostderr=true"
#journalmessagelevel,0isdebug
KUBE_LOG_LEVEL="--v=0"
#Shouldthisclusterbeallowedtorunprivilegeddockercontainers
#KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_ALLOW_PRIV="--allow-privileged=true"
vi/etc/kubernetes/kubelet
#Theaddressfortheinfoservertoserveon
KUBELET_ADDRESS="--address=0.0.0.0"
#Theportfortheinfoservertoserveon
KUBELET_PORT="--port=10250"
#Youmayleavethisblanktousetheactualhostname
KUBELET_HOSTNAME="--hostname-override=192.168.20.60@H_301_4@" #master,minion节点填本机的IP
#Locationoftheapi-server
KUBELET_API_SERVER="--api-servers=http://192.168.20.60:8080"
#Addyourown!
KUBELET_ARGS="--cluster-dns=192.168.20.64@H_301_4@--cluster-domain=cluster.local@H_301_4@" #后面使用dns插件会用到@H_301_4@
vi/etc/kubernetes/kube-proxy
#Howthereplicationcontrollerandschedulerfindthekube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
#Addyourown!
KUBE_PROXY_ARGS="--proxy-mode=userspace" @H_301_4@ #代理模式,这里使用userspace。而iptables模式效率比较高,但要注意你的内核版本和iptables的版本是否符合要求,要不然会出错。@H_301_4@
关于代理模式的选择,可以看国外友人的解释:
4.以上服务需要在所有节点上启动,下面的是master节点另外需要的服务:@H_301_4@
kube-apiserver
、
kube-controller-manager
、
kube-scheduler
4.1、配置相关服务
#vi/usr/lib/systemd/system/kube-apiserver.service@H_301_4@
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=root
ExecStart=/usr/local/bin/kube-apiserver\
$KUBE_LOGTOSTDERR\
$KUBE_LOG_LEVEL\
$KUBE_ETCD_SERVERS\
$KUBE_API_ADDRESS\
$KUBE_API_PORT\
$KUBELET_PORT\
$KUBE_ALLOW_PRIV\
$KUBE_SERVICE_ADDRESSES\
$KUBE_ADMISSION_CONTROL\
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
#vi/usr/lib/systemd/system/kube-controller-manager.service@H_301_4@
[Unit]
Description=KubernetesControllerManager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=root
ExecStart=/usr/local/bin/kube-controller-manager\
$KUBE_LOGTOSTDERR\
$KUBE_LOG_LEVEL\
$KUBE_MASTER\
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
#vi/usr/lib/systemd/system/kube-scheduler.service@H_301_4@
[Unit]
Description=KubernetesSchedulerPlugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=root
ExecStart=/usr/local/bin/kube-scheduler\
$KUBE_LOGTOSTDERR\
$KUBE_LOG_LEVEL\
$KUBE_MASTER\
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
vi /etc/kubernetes/apiserver
#Theaddressonthelocalservertolistento.
KUBE_API_ADDRESS="--address=0.0.0.0"
#Theportonthelocalservertolistenon.
KUBE_API_PORT="--port=8080"
#Howthereplicationcontrollerandschedulerfindthekube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
#Portkubeletslistenon
KUBELET_PORT="--kubelet-port=10250"
#Addressrangetouseforservices
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.20.0/24"
#KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
#Addyourown!
KUBE_API_ARGS=""
vi /etc/kubernetes/controller-manager
#Howthereplicationcontrollerandschedulerfindthekube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
#Addyourown!
KUBE_CONTROLLER_MANAGER_ARGS=""
vi /etc/kubernetes/scheduler
#Howthereplicationcontrollerandschedulerfindthekube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
#Addyourown!
KUBE_SCHEDULER_ARGS=""
更多配置项可以参考官方文档:
http://kubernetes.io/docs/admin/kube-proxy/
#启动master服务
systemctlstartkubelet
systemctlstartkube-proxy
systemctlstartkube-apiserver
systemctlstartkube-controller-manager
systemctlstartkube-scheduler
#启动minion服务
systemctlstartkubelet
systemctlstartkube-proxy
#检查服务是否启动正常
[@H_301_4@root@k8s-masterbin]#kubectlgetno@H_301_4@
NAMESTATUSAGE
192.168.20.60Ready24s
192.168.20.61Ready46s
192.168.20.62Ready35s
# Restart commands
systemctl restart kubelet
systemctl restart kube-proxy
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
# Pitfalls
gcr.io is blocked, and without the pause image Kubernetes cannot run any pod; you will see this error:
image pull failed for gcr.io/google_containers/pause:2.0
Use the Docker Hub image instead, or pull it into the local registry and re-tag it. Every node needs this image.
docker pull kubernetes/pause
docker tag kubernetes/pause gcr.io/google_containers/pause:2.0
[root@k8s-master addons]# docker images
REPOSITORY                        TAG   IMAGE ID       CREATED        SIZE
192.168.4.231:5000/pause          2.0   2b58359142b0   9 months ago   350.2 kB
gcr.io/google_containers/pause    2.0   2b58359142b0   9 months ago   350.2 kB
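One way to avoid repeating the Docker Hub pull on every node is to push the image into the private registry at 192.168.4.231:5000 used throughout this setup and re-tag it locally on each node. This is a sketch of that workflow under the assumption that the registry accepts pushes, not something prescribed by Kubernetes itself.

# On a machine that can reach Docker Hub: push the pause image into the private registry
docker pull kubernetes/pause
docker tag kubernetes/pause 192.168.4.231:5000/pause:2.0
docker push 192.168.4.231:5000/pause:2.0

# On every node: pull from the private registry and re-tag to the name the kubelet expects
docker pull 192.168.4.231:5000/pause:2.0
docker tag 192.168.4.231:5000/pause:2.0 gcr.io/google_containers/pause:2.0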
5. The official source tarball ships a few add-ons, e.g. the monitoring dashboard and DNS.
cd /usr/local/kubernetes/cluster/addons/
cd /usr/local/kubernetes/cluster/addons/dashboard
This directory contains two files:
=============================================================
dashboard-controller.yaml    # defines the deployment: replica count, image, resource limits, and so on
apiVersion: v1
kind: ReplicationController
metadata:
  # Keep the name in sync with image version and
  # gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-v1.0.1
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    version: v1.0.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1    # number of replicas
  selector:
    k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: v1.0.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.4.231:5000/kubernetes-dashboard:v1.0.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://192.168.20.60:8080    # note: without this flag the dashboard defaults to localhost instead of the master; also watch the indentation (spaces) throughout this file
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
========================================================
dashboard-service.yaml    # exposes the dashboard for external access
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
=========================================================
kubectl create -f ./                              # create the resources
kubectl --namespace=kube-system get po            # check that the system pods started
kubectl --namespace=kube-system get po -o wide    # see which node each system pod landed on

# To delete them again, run:
kubectl delete -f ./
Open in a browser: http://192.168.20.60:8080/ui/
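If the UI does not load, the following checks (not in the original post, just standard kubectl and curl usage) confirm that the Service is wired to a running pod and reachable through the apiserver proxy:

# Confirm the dashboard Service exists and has endpoints
kubectl --namespace=kube-system get svc kubernetes-dashboard
kubectl --namespace=kube-system describe svc kubernetes-dashboard | grep -i endpoints
# Fetch the UI through the apiserver proxy
curl -I http://192.168.20.60:8080/ui/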
5.2 Installing the DNS add-on
# IP addresses are hard to remember; inside the cluster, DNS can bind names to IPs and keep the records updated automatically.
cd /usr/local/kubernetes/cluster/addons/dns
cp skydns-rc.yaml.in /opt/dns/skydns-rc.yaml
cp skydns-svc.yaml.in /opt/dns/skydns-svc.yaml

# /opt/dns/skydns-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: 192.168.4.231:5000/etcd-amd64:2.2.1    # I pull the official images into the local registry first
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't back off from restarting it.
          limits:
            memory: 500Mi
          requests:
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: 192.168.4.231:5000/kube2sky:1.14    # from the local registry
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters. This container falls into the
          # "burstable" category so the kubelet doesn't back off from restarting it.
          limits:
            # Kube2sky watches all pods.
            memory: 200Mi
          requests:
            memory: 50Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube2sky"
        - --domain=cluster.local    # pitfall: must match the value in /etc/kubernetes/kubelet
        - --kube_master_url=http://192.168.20.60:8080    # the master node
      - name: skydns
        image: 192.168.4.231:5000/skydns:2015-10-13-8c72f8c
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters. This container falls into the
          # "burstable" category so the kubelet doesn't back off from restarting it.
          limits:
            memory: 200Mi
          requests:
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local.    # another pitfall: the trailing "." is required
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: 192.168.4.231:5000/exechealthz:1.0    # from the local registry
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            memory: 20Mi
          requests:
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null    # the same pitfall again
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default    # Don't use cluster DNS.
========================================================================
# /opt/dns/skydns-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 192.168.20.100
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
==================================================================
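One consistency check worth doing before starting the add-on (easy to miss, and implied by the pitfalls noted above): the kubelet's --cluster-dns value should be the clusterIP of the kube-dns Service, and --cluster-domain should match the --domain flags in skydns-rc.yaml. A small grep sketch, assuming the file locations used in this post:

# These two values should agree (kubelet flag vs. Service clusterIP)
grep cluster-dns /etc/kubernetes/kubelet
grep clusterIP /opt/dns/skydns-svc.yaml
# And the domain must match everywhere
grep cluster-domain /etc/kubernetes/kubelet
grep domain= /opt/dns/skydns-rc.yaml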
# Start it
cd /opt/dns/
kubectl create -f ./

# Check the pod status; once all 4/4 containers are up you can move on to verification.
kubectl --namespace=kube-system get pod -o wide
The verification procedure below is reproduced from the official docs:
https://github.com/kubernetes/kubernetes/blob/release-1.2/cluster/addons/dns/README.md
How do I test if it is working?
First deploy DNS as described above.
1. Create a simple Pod to use as a test environment.
Create a file named busybox.yaml with the following contents:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
Then create a pod using this file:
kubectl create -f busybox.yaml
2. Wait for this pod to go into the running state.
You can get its status with:
kubectl get pods busybox
You should see:
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          <some-time>
3. Validate that DNS works.
Once that pod is running, you can exec nslookup in that environment:
kubectl exec busybox -- nslookup kubernetes.default
You should see something like:
Server:    10.0.0.10
Address 1: 10.0.0.10
Name:      kubernetes.default
Address 1: 10.0.0.1
If you see that, DNS is working correctly.
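The addresses in the upstream example come from a different cluster; in this setup the DNS service IP is 192.168.20.100 and the domain is cluster.local. As an extra check (my own sketch, using names deployed earlier in this post), fully-qualified service names should also resolve:

# Resolve the dashboard service deployed earlier; the answer should come from 192.168.20.100
kubectl exec busybox -- nslookup kubernetes-dashboard.kube-system.svc.cluster.local
# The kubernetes apiserver service itself should also resolve
kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local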
5.3 The "manage" add-on

mkdir /opt/k8s-manage
cd /opt/k8s-manage
================================================
# cat k8s-manager-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: k8s-manager
  namespace: kube-system
  labels:
    app: k8s-manager
spec:
  replicas: 1
  selector:
    app: k8s-manager
  template:
    metadata:
      labels:
        app: k8s-manager
    spec:
      containers:
      - image: mlamina/k8s-manager:latest
        name: k8s-manager
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 80
          name: http
=================================================
# cat k8s-manager-svr.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-manager
  namespace: kube-system
  labels:
    app: k8s-manager
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: k8s-manager
=================================================
# Start it
kubectl create -f ./

Open in a browser:
http://192.168.20.60:8080/api/v1/proxy/namespaces/kube-system/services/k8s-manager
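As with the dashboard, the following standard kubectl checks (not part of the original walkthrough) confirm that the manager pod and service are running if the page does not load:

# Verify the k8s-manager pod and service in the kube-system namespace
kubectl --namespace=kube-system get po -l app=k8s-manager
kubectl --namespace=kube-system get svc k8s-manager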
Worked example
1. Setting up zookeeper, activeMQ, redis and mongodb services
mkdir /opt/service/
cd /opt/service
==========================================================
# cat service.yaml

apiVersion: v1
kind: Service
metadata:
  name: zk-amq-rds-mgd    # service name
  labels:
    run: zk-amq-rds-mgd
spec:
  type: NodePort
  ports:
  - port: 2181          # service port
    nodePort: 31656     # external port exposed on the nodes (reached via the master)
    targetPort: 2181    # port inside the container
    protocol: TCP       # protocol
    name: zk-app        # port name
  - port: 8161
    nodePort: 31654
    targetPort: 8161
    protocol: TCP
    name: amq-http
  - port: 61616
    nodePort: 31655
    targetPort: 61616
    protocol: TCP
    name: amq-app
  - port: 27017
    nodePort: 31653
    targetPort: 27017
    protocol: TCP
    name: mgd-app
  - port: 6379
    nodePort: 31652
    targetPort: 6379
    protocol: TCP
    name: rds-app
  selector:
    run: zk-amq-rds-mgd
---
#apiVersion: extensions/v1beta1
apiVersion: v1
kind: ReplicationController
metadata:
  name: zk-amq-rds-mgd
spec:
  replicas: 2    # two replicas
  template:
    metadata:
      labels:
        run: zk-amq-rds-mgd
    spec:
      containers:
      - name: zookeeper    # application name
        image: 192.168.4.231:5000/zookeeper:0524    # image from the local registry
        imagePullPolicy: IfNotPresent    # the pod is scheduled onto a node with spare capacity, which pulls the image (if not already present) and starts the container
        ports:
        - containerPort: 2181    # service port inside the container
        env:
        - name: LANG
          value: en_US.UTF-8
        volumeMounts:
        - mountPath: /tmp/zookeeper    # mount point inside the container
          name: zookeeper-d            # volume name; must match the host volume defined below
      - name: activemq
        image: 192.168.4.231:5000/activemq:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8161
        - containerPort: 61616
        volumeMounts:
        - mountPath: /opt/apache-activemq-5.10.2/data
          name: activemq-d
      - name: mongodb
        image: 192.168.4.231:5000/mongodb:3.0.6
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 27017
        volumeMounts:
        - mountPath: /var/lib/mongo
          name: mongodb-d
      - name: redis
        image: 192.168.4.231:5000/redis:2.8.25
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /opt/redis/var
          name: redis-d
      volumes:
      - hostPath:
          path: /mnt/mfs/service/zookeeper/data    # host mount point; I use distributed shared storage (MooseFS) here, so the data stays consistent across replicas
        name: zookeeper-d
      - hostPath:
          path: /mnt/mfs/service/activemq/data
        name: activemq-d
      - hostPath:
          path: /mnt/mfs/service/mongodb/data
        name: mongodb-d
      - hostPath:
          path: /mnt/mfs/service/redis/data
        name: redis-d
===========================================================================================
kubectl create -f ./
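To confirm that the NodePort services are reachable from outside the cluster, a quick probe can be run against any node IP. The zookeeper "ruok" check via nc and the redis-cli ping below are illustrative assumptions on my part, not part of the original write-up.

# List the service and see which node ports were assigned, and where the pods landed
kubectl get svc zk-amq-rds-mgd
kubectl get po -o wide | grep zk-amq-rds-mgd

# Probe zookeeper through its NodePort (31656) on the master's IP; "imok" means zookeeper answered
echo ruok | nc 192.168.20.60 31656

# Probe redis through its NodePort (31652), assuming redis-cli is installed locally
redis-cli -h 192.168.20.60 -p 31652 ping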
References: http://my.oschina.net/jayqqaa12/blog/693919