Kubernetes is Google's open-source container cluster management system. Built on top of Docker, it provides a container scheduling service with resource scheduling, load balancing and failover, service registration, dynamic scaling, and related features; the latest release at the time of writing is 1.0.6. This article walks through building a Kubernetes platform on CentOS 7.

Environment: CentOS 7.1.1503 + etcd 2.2.0 + Kubernetes 1.0.6 + Docker 1.7.1
I. Environment Setup

1. Install etcd (can be deployed standalone)
[root@docker3 ~]# wget https://github.com/coreos/etcd/releases/download/v2.2.0/etcd-v2.2.0-linux-amd64.tar.gz
[root@docker3 ~]# tar -zxvf etcd-v2.2.0-linux-amd64.tar.gz
[root@docker3 ~]# cd etcd-v2.2.0-linux-amd64
[root@docker3 ~]# cp etcd* /usr/local/bin/
[root@docker3 ~]# etcd -peer-bind-addr 0.0.0.0:2380 -bind-addr 0.0.0.0:2379 &
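Before moving on, it is worth confirming that etcd is actually serving requests. A minimal sanity check, assuming the client port 2379 set by the flags above:

```shell
# Sanity-check the etcd client endpoint started above (port 2379 assumed).
ETCD=http://127.0.0.1:2379

# /version returns JSON such as {"etcdserver":"2.2.0",...}
curl -s "$ETCD/version"; echo

# Write and read back a test key through the etcd v2 key-space API.
curl -s -X PUT "$ETCD/v2/keys/ping" -d value=pong > /dev/null
curl -s "$ETCD/v2/keys/ping"; echo
```

If both calls come back with JSON rather than connection errors, the daemon is up and the key space is writable.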
2. Install Kubernetes (master)
[root@docker3 ~]# yum -y install kubernetes
To upgrade Kubernetes to v1.0.6, simply overwrite the installed binaries:
[root@docker3 ~]# wget https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.0.6/kubernetes.tar.gz
[root@docker3 ~]# tar zxf kubernetes.tar.gz
[root@docker3 ~]# tar zxf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
[root@docker3 ~]# cp kubernetes/server/bin/kube* /usr/bin
3. Configure the master-side Kubernetes files
[root@docker3 ~]# cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.0.5:8080"
The kubernetes apiserver file:
[root@docker3 ~]# cat /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet_port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://192.168.0.5:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
The kubernetes controller-manager file:
[root@docker3 ~]# cat /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager

KUBELET_ADDRESSES="--machines=192.168.0.5"

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
4. Start the master-side services
[root@docker3 ~]# systemctl start kube-apiserver.service kube-controller-manager.service kube-scheduler.service
[root@docker3 ~]# systemctl enable kube-apiserver.service kube-controller-manager.service kube-scheduler.service
[root@docker3 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.6", GitCommit:"388061f00f0d9e4d641f9ed4971c775e1654579d", GitTreeState:"clean"}
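Before registering any nodes, you can also verify the apiserver is answering requests. A small check, assuming the master address 192.168.0.5:8080 configured above:

```shell
# Check the freshly started apiserver (address from the config above).
MASTER=http://192.168.0.5:8080

# /healthz answers the literal string "ok" when the apiserver is ready.
apiserver_healthy() {
  [ "$(curl -s "$1/healthz")" = "ok" ]
}

if apiserver_healthy "$MASTER"; then
  echo "apiserver is healthy"
else
  echo "apiserver is not answering yet" >&2
fi
```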
5. Install Kubernetes (nodes)
[root@docker3 ~]# yum -y install kubernetes docker

Note: if you are not testing multiple nodes, installing Docker alone is sufficient.
6. Configure the slave-side kubelet file
[root@docker3 ~]# cat /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=192.168.0.5"

# location of the api-server
KUBELET_API_SERVER="--api_servers=http://192.168.0.5:8080"

# Add your own!
KUBELET_ARGS=""
7. Start the node-side services
[root@docker3 ~]# systemctl enable docker.service kubelet.service kube-proxy.service
[root@docker3 ~]# systemctl start docker.service kubelet.service kube-proxy.service
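To confirm the node-side daemons actually came up, systemctl can report each unit's state in one pass (a small sketch; unit names taken from the commands above):

```shell
# Report the active-state of each node-side unit just started.
for unit in docker kubelet kube-proxy; do
  state=$(systemctl is-active "$unit" 2>/dev/null)
  echo "$unit: ${state:-unknown}"
done
```

Any unit that does not print "active" should be inspected with `journalctl -u <unit>` before continuing.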
8. Check the nodes
[root@docker3 ~]# kubectl get nodes
NAME          LABELS                               STATUS
192.168.0.5   kubernetes.io/hostname=192.168.0.5   Ready
That completes the installation. If the nodes list comes back empty, re-check the configuration in the steps above.
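When scripting against this output, it can be handy to count how many nodes are Ready; a sketch that keys off the STATUS column of the 1.0.x output format shown above:

```shell
# Count nodes whose last column (STATUS) reads Ready; NR > 1 skips the header.
kubectl get nodes | awk 'NR > 1 && $NF == "Ready" { n++ } END { print n + 0 }'
```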
9. Create a test pod
[root@docker3 ~]# cat apache-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: apache
spec:
  containers:
  - name: apache
    image: docker.io/fedora/apache
    ports:
    - containerPort: 80
      hostPort: 8888
[root@docker3 ~]# kubectl create -f apache-pod.yaml
In theory, that is all it takes.

But! Thanks to the great GFW, on the slave you will see an error roughly like this:
HTTP Error: statusCode=404 No such image: gcr.io/google_containers/pause:0.8.0
The workaround is to run the following commands on the slave:
docker pull docker.io/kubernetes/pause
docker tag docker.io/kubernetes/pause gcr.io/google_containers/pause:0.8.0
Check the pod:
[root@docker3 ~]# kubectl get pod
NAME      READY     STATUS    RESTARTS   AGE
apache    0/1       Running   0          8s
Open a browser and visit http://192.168.0.5:8888/

II. Hands-on: build a webserver cluster with Kubernetes and observe its load balancing

Create a ReplicationController
[root@docker3 ~]# cat webserver-repl.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: webserver
spec:
  replicas: 2
  selector:
    app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: docker.io/yorko/webserver
        command: ["/bin/sh", "-c", "/usr/bin/supervisord -c /etc/supervisord.conf"]
        ports:
        - containerPort: 80
[root@docker3 ~]# kubectl create -f webserver-repl.yaml
[root@docker3 ~]# kubectl get pod
NAME              READY     STATUS    RESTARTS   AGE
apache            1/1       Running   0          16m
webserver-51vo1   1/1       Running   0          11s
webserver-xeh61   1/1       Running   0          11s
Create a Service whose selector matches the ReplicationController's selector
[root@docker3 ~]# cat webserver-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: webserver
spec:
  ports:
  - port: 8008
    targetPort: 80
    protocol: TCP
  selector:
    app: webserver
[root@docker3 ~]# kubectl create -f webserver-service.yaml
[root@docker3 ~]# kubectl get service
NAME         LABELS                                    SELECTOR        IP(S)           PORT(S)
kubernetes   component=apiserver,provider=kubernetes   <none>          10.254.0.1      443/TCP
webserver    <none>                                    app=webserver   10.254.69.214   8008/TCP
Inspect the iptables forwarding rules that were generated:
[root@docker3 ~]# iptables -nL -t nat | tail -1
DNAT       tcp  --  0.0.0.0/0            10.254.69.214        /* default/webserver: */ tcp dpt:8008 to:192.168.0.5:44893
Open a browser and visit http://192.168.0.5:44893/info.php

Refresh the browser and you will see the backend behind the proxy change; the default algorithm is random round-robin.
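The same rotation can be observed from the shell by hitting the service's cluster IP in a loop (IP and port from the `kubectl get service` output above; run this on a cluster host, since the 10.254.0.0/16 range is only routable there):

```shell
# Hit the webserver service several times; each response may come from a
# different pod, illustrating kube-proxy's balancing.
SVC=http://10.254.69.214:8008

for i in 1 2 3 4 5; do
  curl -s "$SVC/info.php" | head -1
done
```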

III. Test automatic pod replication and destruction, and watch Kubernetes maintain 2 replicas

Delete one replica:
[root@docker3 ~]# kubectl delete pod webserver-51vo1
A replacement replica is generated automatically, so 2 copies are always maintained:
[root@docker3 ~]# kubectl get pod
NAME              READY     STATUS    RESTARTS   AGE
apache            1/1       Running   0          54m
webserver-xeh61   1/1       Running   0          38m
webserver-yn8au   0/1       Pending   0          2s
[root@docker3 ~]# kubectl get pod
NAME              READY     STATUS    RESTARTS   AGE
apache            1/1       Running   0          54m
webserver-xeh61   1/1       Running   0          38m
webserver-yn8au   1/1       Running   0          11s
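Beyond self-healing, the same ReplicationController can be resized on demand with `kubectl scale` (a sketch; the rc name comes from the yaml above, and on very old builds the equivalent subcommand was `resize`):

```shell
# Resize the webserver ReplicationController; the controller then converges
# the pod count to the new replica count.
kubectl scale rc webserver --replicas=3

# Count webserver pods that have reached Running (same awk trick as before).
kubectl get pod | awk '$1 ~ /^webserver-/ && $3 == "Running" { n++ } END { print n + 0 }'
```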
That wraps up the tests. Whether to put an haproxy/nginx reverse proxy in front is entirely up to you.

References:
http://blog.liuts.com/post/247/
http://segmentfault.com/a/1190000002886795
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/walkthrough/k8s201.md