
Kubernetes Installation and Deployment Study Notes (Part 2)

1 Source Code Analysis

Once all the prerequisites are in place, starting and stopping the Kubernetes cluster each takes a single command (run from the cluster directory):

KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
KUBERNETES_PROVIDER=ubuntu ./kube-down.sh

Let's walk through what this command actually does.
#!/bin/bash

# Copyright 2014 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Bring up a Kubernetes cluster.
#
# If the full release name (gs://<bucket>/<release>) is passed in then we take
# that directly.  If not then we assume we are doing development stuff and take
# the defaults in the release config.

set -o errexit
set -o nounset
set -o pipefail

KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..

if [ -f "${KUBE_ROOT}/cluster/env.sh" ]; then
    source "${KUBE_ROOT}/cluster/env.sh"
fi

source "${KUBE_ROOT}/cluster/kube-env.sh"
source "${KUBE_ROOT}/cluster/kube-util.sh"

echo "... Starting cluster using provider: $KUBERNETES_PROVIDER" >&2

echo "... calling verify-prereqs" >&2
verify-prereqs

echo "... calling kube-up" >&2
kube-up

echo "... calling validate-cluster" >&2
validate-cluster

echo -e "Done, listing cluster services:\n" >&2
"${KUBE_ROOT}/cluster/kubectl.sh" cluster-info
echo

exit 0

Script call structure:

kube-up.sh
- env.sh
- kube-env.sh
- kube-util.sh
  - util.sh
    - config-default.sh
    - build.sh

First, the KUBERNETES_PROVIDER environment variable is set to ubuntu (by default this variable is initialized in kube-env.sh), which tells the scripts to use those under cluster/ubuntu. The script that does the real work is util.sh, which defines the functions executed later, such as verify-prereqs, kube-up, and validate-cluster.
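The provider dispatch is just path construction. A minimal sketch of the idea (this is not the actual contents of kube-env.sh or kube-util.sh, only an illustration of how the variable selects the provider directory):

```shell
# Sketch only: kube-env.sh defaults KUBERNETES_PROVIDER if it is unset, and
# kube-util.sh then sources the matching provider-specific util.sh.
KUBERNETES_PROVIDER="ubuntu"              # what we export before running kube-up.sh
: "${KUBERNETES_PROVIDER:=gce}"           # kube-env.sh-style default (no effect here)
PROVIDER_UTILS="cluster/${KUBERNETES_PROVIDER}/util.sh"
echo "would source: ${PROVIDER_UTILS}"
```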

(1) verify-prereqs

Purpose: establish secure communication between client and server over ssh, using ssh-agent so that you don't have to type the password every time.

Process: first run ssh-add -L, which lists the public keys for all loaded identities. If echo $? shows a return value of 2, ssh-agent is not running, so start it with ssh-agent. Then run ssh-add -L again; a return value of 1 ("The agent has no identities.") means no identity is loaded, so run ssh-add to add a default identity. Finally, run ssh-add -L one more time to verify the identity was added. For ssh-agent usage see reference [3]: "Anyway, here is how to set up a pair of keys for passwordless authentication via ssh-agent."

# Verify ssh prereqs
function verify-prereqs {
  local rc

  rc=0
  ssh-add -L 1> /dev/null 2> /dev/null || rc="$?"
  # "Could not open a connection to your authentication agent."
  if [[ "${rc}" -eq 2 ]]; then
    eval "$(ssh-agent)" > /dev/null
    trap-add "kill ${SSH_AGENT_PID}" EXIT
  fi

  rc=0
  ssh-add -L 1> /dev/null 2> /dev/null || rc="$?"
  # "The agent has no identities."
  if [[ "${rc}" -eq 1 ]]; then
    # Try adding one of the default identities, with or without passphrase.
    ssh-add || true
  fi
  # Expect at least one identity to be available.
  if ! ssh-add -L 1> /dev/null 2> /dev/null; then
    echo "Could not find or add an SSH identity."
    echo "Please start ssh-agent, add your identity, and retry."
    exit 1
  fi

}

Adding the following to /etc/profile starts the ssh-agent service automatically on every terminal login.
# ssh-agent auto-load

SSH_ENV="$HOME/.ssh/environment"

function start_agent {
     echo "Initialising new SSH agent..."
     /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}"
     echo succeeded
     chmod 600 "${SSH_ENV}"
     . "${SSH_ENV}" > /dev/null
     /usr/bin/ssh-add;
}

# Source SSH settings, if applicable

if [ -f "${SSH_ENV}" ]; then
     . "${SSH_ENV}" > /dev/null
     #ps ${SSH_AGENT_PID} doesn't work under cygwin
     ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || {
         start_agent;
     }
else
     start_agent;
fi

(2) kube-up

Purpose: bootstrap a Kubernetes cluster on Ubuntu.

Process: first load the default config-default.sh, then check whether ubuntu/binaries/master/kube-apiserver exists; if not, run build.sh to download the required binaries. setClusterInfo derives the master and node IPs from the node information configured in config-default.sh. With the cluster info set, kube-up calls provision-master/provision-node/provision-masterandnode according to each node's configured role (a/i/ai), copying the required configuration files and binaries to each target node via scp (into the ~/kube directory), thereby deploying the master, node, or combined master-node machines.


# Instantiate a kubernetes cluster on ubuntu
function kube-up() {
  source "${KUBE_ROOT}/cluster/ubuntu/${KUBE_CONFIG_FILE-"config-default.sh"}"

  # ensure the binaries are well prepared
  if [ ! -f "ubuntu/binaries/master/kube-apiserver" ]; then
    echo "No local binaries for kube-up, downloading... "
    "${KUBE_ROOT}/cluster/ubuntu/build.sh"
  fi

  setClusterInfo
  ii=0

  for i in ${nodes}
  do
    {
      if [ "${roles[${ii}]}" == "a" ]; then
        provision-master
      elif [ "${roles[${ii}]}" == "ai" ]; then
        provision-masterandnode
      elif [ "${roles[${ii}]}" == "i" ]; then
        provision-node $i
      else
        echo "unsupported role for ${i}. please check"
        exit 1
      fi
    }

    ((ii=ii+1))
  done
  wait

  verify-cluster
  detect-master
  export CONTEXT="ubuntu"
  export KUBE_SERVER="http://${KUBE_MASTER_IP}:8080"

  source "${KUBE_ROOT}/cluster/common.sh"

  # set kubernetes user and password
  load-or-gen-kube-basicauth

  create-kubeconfig
}

About build.sh:

This script can also be run on its own, before kube-up.sh. To pin the versions of the binaries it downloads, set the corresponding environment variables first, for example by sourcing the script below before running build.sh:

#cd kubernetes/cluster/ubuntu
#source init_version.sh

#init_version.sh
#!/bin/bash

export KUBE_VERSION=1.1.2
export FLANNEL_VERSION=0.5.5
export ETCD_VERSION=2.2.1

echo "done"
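build.sh substitutes these version variables into its download URLs. A hedged sketch of the kind of URL it assembles for etcd (the exact paths live in build.sh itself; this only illustrates the version substitution):

```shell
# Illustrative only: how a version variable typically ends up in a release URL.
ETCD_VERSION=${ETCD_VERSION:-2.2.1}
ETCD_URL="https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/etcd-v${ETCD_VERSION}-linux-amd64.tar.gz"
echo "$ETCD_URL"
```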

About setClusterInfo:

The nodes and roles variables are both initialized in config-default.sh; setClusterInfo then derives MASTER_IP and NODE_IPS from them.

# From user input set the necessary k8s and etcd configuration information
function setClusterInfo() {
  # Initialize NODE_IPS in setClusterInfo function
  # NODE_IPS is defined as a global variable, and is concatenated with other nodeIP	
  # When setClusterInfo is called for many times, this could cause potential problems
  # Such as, you will have NODE_IPS=192.168.0.2,192.168.0.3,192.168.0.2,192.168.0.3 which is obviously wrong
  NODE_IPS=""
  
  ii=0
  for i in $nodes; do
    nodeIP=${i#*@}

    if [[ "${roles[${ii}]}" == "ai" ]]; then
      MASTER_IP=$nodeIP
      MASTER=$i
      NODE_IPS="$nodeIP"
    elif [[ "${roles[${ii}]}" == "a" ]]; then
      MASTER_IP=$nodeIP
      MASTER=$i
    elif [[ "${roles[${ii}]}" == "i" ]]; then
      if [[ -z "${NODE_IPS}" ]];then
        NODE_IPS="$nodeIP"
      else
        NODE_IPS="$NODE_IPS,$nodeIP"
      fi
    else
      echo "unsupported role for ${i}. please check"
      exit 1
    fi

    ((ii=ii+1))
  done

}
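To see what setClusterInfo produces, here is a self-contained re-run of its logic with hypothetical nodes/roles values (the real values come from config-default.sh; the IPs below are made up):

```shell
# Hypothetical cluster: one combined master-node ("ai") and two nodes ("i").
nodes="root@192.168.0.2 root@192.168.0.3 root@192.168.0.4"
roles=("ai" "i" "i")

NODE_IPS=""
ii=0
for i in $nodes; do
  nodeIP=${i#*@}   # strip the user@ prefix, keeping only the IP
  case "${roles[${ii}]}" in
    ai) MASTER_IP=$nodeIP; MASTER=$i; NODE_IPS="$nodeIP" ;;
    a)  MASTER_IP=$nodeIP; MASTER=$i ;;
    i)  if [ -z "$NODE_IPS" ]; then NODE_IPS="$nodeIP"; else NODE_IPS="$NODE_IPS,$nodeIP"; fi ;;
  esac
  ((ii=ii+1))
done

echo "MASTER_IP=$MASTER_IP"
echo "NODE_IPS=$NODE_IPS"
```

Role "a" sets the master IP without contributing to NODE_IPS, while "ai" does both; this is also why resetting NODE_IPS="" at the top matters when the function is called more than once.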

About provision-masterandnode:
function provision-masterandnode() {
  # copy the binaries and scripts to the ~/kube directory on the master
  echo "Deploying master and node on machine ${MASTER_IP}"
  echo "SSH_OPTS=$SSH_OPTS"
  echo "MASTER=$MASTER"
  echo "SERVICE_CLUSTER_IP_RANGE=$SERVICE_CLUSTER_IP_RANGE"
  echo "ADMISSION_CONTROL=$ADMISSION_CONTROL"
  echo "SERVICE_NODE_PORT_RANGE=$SERVICE_NODE_PORT_RANGE"
  echo "NODE_IPS=$NODE_IPS"
  echo "DNS_SERVER_IP=$DNS_SERVER_IP"
  echo "DNS_DOMAIN=$DNS_DOMAIN"
  echo "FLANNEL_NET=$FLANNEL_NET"
  echo
  ssh $SSH_OPTS $MASTER "mkdir -p ~/kube/default"
  # scp order matters
  scp -r $SSH_OPTS ubuntu/config-default.sh ubuntu/util.sh ubuntu/minion/* ubuntu/master/* ubuntu/reconfDocker.sh ubuntu/binaries/master/ ubuntu/binaries/minion "${MASTER}:~/kube"

  # remote login to the node and use sudo to configue k8s
  ssh $SSH_OPTS -t $MASTER "source ~/kube/util.sh; 
                            setClusterInfo; 
                            create-etcd-opts; 
                            create-kube-apiserver-opts "${SERVICE_CLUSTER_IP_RANGE}" "${ADMISSION_CONTROL}" "${SERVICE_NODE_PORT_RANGE}"; 
                            create-kube-controller-manager-opts "${NODE_IPS}"; 
                            create-kube-scheduler-opts; 
                            create-kubelet-opts "${MASTER_IP}" "${MASTER_IP}" "${DNS_SERVER_IP}" "${DNS_DOMAIN}";
                            create-kube-proxy-opts "${MASTER_IP}";
                            create-flanneld-opts "127.0.0.1"; 
                            sudo -p '[sudo] password to start master: ' cp ~/kube/default/* /etc/default/ && sudo cp ~/kube/init_conf/* /etc/init/ && sudo cp ~/kube/init_scripts/* /etc/init.d/ ; 
                            sudo mkdir -p /opt/bin/ && sudo cp ~/kube/master/* /opt/bin/ && sudo cp ~/kube/minion/* /opt/bin/; 
                            sudo service etcd start; 
                            sudo FLANNEL_NET=${FLANNEL_NET} -b ~/kube/reconfDocker.sh "ai";"
}
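Note the quoting in the ssh command above: the outer double-quoted string is closed and reopened around each variable (e.g. "${SERVICE_CLUSTER_IP_RANGE}"), so the adjacent quoted strings concatenate and the values are expanded on the local machine before the command text is sent to the remote shell. A minimal local demonstration, substituting bash -c for ssh:

```shell
# Demonstration only: bash -c stands in for ssh. ${SERVICE_CLUSTER_IP_RANGE}
# expands in the *outer* shell, so the inner shell sees the literal value.
SERVICE_CLUSTER_IP_RANGE="192.168.3.0/24"
result=$(bash -c "echo range is "${SERVICE_CLUSTER_IP_RANGE}"")
echo "$result"
```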

About verify-cluster:
function verify-cluster {
  ii=0

  for i in ${nodes}
  do
    if [ "${roles[${ii}]}" == "a" ]; then
      verify-master
    elif [ "${roles[${ii}]}" == "i" ]; then
      verify-node $i
    elif [ "${roles[${ii}]}" == "ai" ]; then
      verify-master
      verify-node $i
    else
      echo "unsupported role for ${i}. please check"
      exit 1
    fi

    ((ii=ii+1))
  done

  echo
  echo "Kubernetes cluster is running.  The master is running at:"
  echo
  echo "  http://${MASTER_IP}:8080"
  echo

}

About verify-master:
function verify-master(){
  # verify master has all required daemons
  printf "Validating master"
  local -a required_daemon=("kube-apiserver" "kube-controller-manager" "kube-scheduler")
  local validated="1"
  local try_count=1
  local max_try_count=3
  until [[ "$validated" == "0" ]]; do
    validated="0"
    local daemon
    for daemon in "${required_daemon[@]}"; do
      ssh $SSH_OPTS "$MASTER" "pgrep -f ${daemon}" >/dev/null 2>&1 || {
        printf "."
        validated="1"
        ((try_count=try_count+1))
        if [[ ${try_count} -gt ${max_try_count} ]]; then
          printf "\nWarning: Process \"${daemon}\" failed to run on ${MASTER}, please check.\n"
          exit 1
        fi
        sleep 2
      }
    done
  done
  printf "\n"

}

About verify-node:
function verify-node(){
  # verify node has all required daemons
  printf "Validating ${1}"
  local -a required_daemon=("kube-proxy" "kubelet" "docker")
  local validated="1"
  local try_count=1
  local max_try_count=3
  until [[ "$validated" == "0" ]]; do
    validated="0"
    local daemon
    for daemon in "${required_daemon[@]}"; do
      ssh $SSH_OPTS "$1" "pgrep -f $daemon" >/dev/null 2>&1 || {
        printf "."
        validated="1"
        ((try_count=try_count+1))
        if [[ ${try_count} -gt ${max_try_count} ]]; then
          printf "\nWarning: Process \"${daemon}\" failed to run on ${1}, please check.\n"
          exit 1
        fi
        sleep 2
      }
    done
  done
  printf "\n"
}
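Both verify-master and verify-node boil down to the same retry pattern: probe, and on failure increment a counter, give up past max_try_count, otherwise sleep and re-probe. Stripped of ssh, the skeleton looks like this (check_ok is a stand-in probe that fails once, then succeeds):

```shell
attempts=0
check_ok() {                 # stand-in for: ssh $SSH_OPTS "$1" "pgrep -f $daemon"
  attempts=$((attempts+1))
  [ "$attempts" -ge 2 ]      # fail on the first call, succeed afterwards
}

try_count=1
max_try_count=3
until check_ok; do
  printf "."                 # progress dot, as in verify-master/verify-node
  try_count=$((try_count+1))
  if [ "$try_count" -gt "$max_try_count" ]; then
    echo "process failed to run, please check"
    exit 1
  fi
  sleep 1
done
printf "\nvalidated\n"
```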

3 Testing

You can operate Kubernetes from the command line with the kubectl tool.

root@<host>:~/k8s/test/kubernetes/k8s_1.1.3/kubernetes/cluster/ubuntu/binaries# ./kubectl --help
kubectl controls the Kubernetes cluster manager.

Find more information at https://github.com/kubernetes/kubernetes.

Usage: 
  kubectl [flags]
  kubectl [command]

Available Commands: 
  get            Display one or many resources
  describe       Show details of a specific resource or group of resources
  create         Create a resource by filename or stdin
  replace        Replace a resource by filename or stdin.
  patch          Update field(s) of a resource by stdin.
  delete         Delete resources by filenames, stdin, resources and names, or by resources and label selector.
  edit           Edit a resource on the server
  apply          Apply a configuration to a resource by filename or stdin
  namespace      SUPERSEDED: Set and view the current Kubernetes namespace
  logs           Print the logs for a container in a pod.
  rolling-update Perform a rolling update of the given ReplicationController.
  scale          Set a new size for a Replication Controller.
  attach         Attach to a running container.
  exec           Execute a command in a container.
  port-forward   Forward one or more local ports to a pod.
  proxy          Run a proxy to the Kubernetes API server
  run            Run a particular image on the cluster.
  stop           Deprecated: Gracefully shut down a resource by name or filename.
  expose         Take a replication controller, service or pod and expose it as a new Kubernetes Service
  autoscale      Auto-scale a replication controller
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  config         config modifies kubeconfig files
  cluster-info   Display cluster info
  api-versions   Print the supported API versions on the server, in the form of "group/version".
  version        Print the client and server version information.
  help           Help about any command

Flags:
      --alsologtostderr[=false]: log to standard error as well as files
      --api-version="": The API version to use when talking to the server
      --certificate-authority="": Path to a cert. file for the certificate authority.
      --client-certificate="": Path to a client key file for TLS.
      --client-key="": Path to a client key file for TLS.
      --cluster="": The name of the kubeconfig cluster to use
      --context="": The name of the kubeconfig context to use
      --insecure-skip-tls-verify[=false]: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.
      --kubeconfig="": Path to the kubeconfig file to use for CLI requests.
      --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace
      --log-dir="": If non-empty, write log files in this directory
      --log-flush-frequency=5s: Maximum number of seconds between log flushes
      --logtostderr[=true]: log to standard error instead of files
      --match-server-version[=false]: Require server version to match client version
      --namespace="": If present, the namespace scope for this CLI request.
      --password="": Password for basic authentication to the API server.
  -s, --server="": The address and port of the Kubernetes API server
      --stderrthreshold=2: logs at or above this threshold go to stderr
      --token="": Bearer token for authentication to the API server.
      --user="": The name of the kubeconfig user to use
      --username="": Username for basic authentication to the API server.
      --v=0: log level for V logs
      --vmodule=: comma-separated list of pattern=N settings for file-filtered logging


Use "kubectl [command] --help" for more information about a command.


The components involved:

- kube* binaries
- etcd
- flannel

4 References

[1] https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/ubuntu.md
[2] https://github.com/kubernetes/kubernetes/releases
[3] http://mah.everybody.org/docs/ssh
