Notes on Deploying k8s on Ubuntu 14.04

2015-12-27 wcdj

0 Prerequisites

This article mainly follows reference [1] to try deploying k8s on Ubuntu 14.04.


1 Source Code Analysis

Once all the prerequisites are in place, starting and stopping k8s are both very simple, each a single command (run from the cluster directory):

KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
KUBERNETES_PROVIDER=ubuntu ./kube-down.sh

Let's look at what these commands actually do, starting with kube-up.sh:
#!/bin/bash

# Copyright 2014 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Bring up a Kubernetes cluster.
#
# If the full release name (gs://<bucket>/<release>) is passed in then we take
# that directly.  If not then we assume we are doing development stuff and take
# the defaults in the release config.

set -o errexit
set -o nounset
set -o pipefail

KUBE_ROOT=$(dirname "${BASH_SOURCE}")/..

if [ -f "${KUBE_ROOT}/cluster/env.sh" ]; then
    source "${KUBE_ROOT}/cluster/env.sh"
fi

source "${KUBE_ROOT}/cluster/kube-env.sh"
source "${KUBE_ROOT}/cluster/kube-util.sh"

echo "... Starting cluster using provider: $KUBERNETES_PROVIDER" >&2

echo "... calling verify-prereqs" >&2
verify-prereqs

echo "... calling kube-up" >&2
kube-up

echo "... calling validate-cluster" >&2
validate-cluster

echo -e "Done, listing cluster services:\n" >&2
"${KUBE_ROOT}/cluster/kubectl.sh" cluster-info
echo

exit 0

Script call structure:

kube-up.sh
- env.sh
- kube-env.sh
- kube-util.sh
  - util.sh
    - config-default.sh
    - build.sh

First, the KUBERNETES_PROVIDER environment variable is set to ubuntu (by default this variable is initialized in kube-env.sh), which tells the framework to use the scripts under cluster/ubuntu. The script that does the real work is util.sh, which defines the functions called later, such as verify-prereqs, kube-up, and validate-cluster.
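For example, instead of prefixing every command with the variable, the provider can be exported once (kubectl.sh is the wrapper invoked at the end of kube-up.sh; the get nodes call is just a quick sanity check):

cd kubernetes/cluster
export KUBERNETES_PROVIDER=ubuntu   # make cluster/ubuntu the active provider
./kube-up.sh                        # verify-prereqs -> kube-up -> validate-cluster
./kubectl.sh get nodes              # confirm the cluster answers
./kube-down.sh                      # tear the cluster down again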

(1) verify-prereqs

Purpose: establish secure communication between the client and the servers over ssh, and use ssh-agent so you do not have to type the passphrase every time.

Process: first run ssh-add -L, which lists the public keys of all identities loaded in the agent. If the exit code (echo $?) is 2, ssh-agent is not running, so ssh-agent is started. ssh-add -L is then run again; if the exit code is 1 ("The agent has no identities."), ssh-add is run to load the default identity. Finally ssh-add -L is checked once more to confirm an identity is available. See reference [3] for how to set up a key pair for passwordless authentication via ssh-agent; a minimal example is sketched below.
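For completeness, here is a minimal sketch of the one-time key setup from the deployment machine to the nodes (the user name and IP are placeholders; repeat for each machine listed in $nodes):

ssh-keygen -t rsa -b 2048        # generate a key pair, optionally with a passphrase
ssh-copy-id vcap@192.168.0.2     # install the public key on a node; repeat per node
ssh-add                          # load the default identity into ssh-agent
ssh vcap@192.168.0.2 true        # should now succeed without a password prompt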

# Verify ssh prereqs
function verify-prereqs {
  local rc

  rc=0
  ssh-add -L 1> /dev/null 2> /dev/null || rc="$?"
  # "Could not open a connection to your authentication agent."
  if [[ "${rc}" -eq 2 ]]; then
    eval "$(ssh-agent)" > /dev/null
    trap-add "kill ${SSH_AGENT_PID}" EXIT
  fi

  rc=0
  ssh-add -L 1> /dev/null 2> /dev/null || rc="$?"
  # "The agent has no identities."
  if [[ "${rc}" -eq 1 ]]; then
    # Try adding one of the default identities, with or without passphrase.
    ssh-add || true
  fi
  # Expect at least one identity to be available.
  if ! ssh-add -L 1> /dev/null 2> /dev/null; then
    echo "Could not find or add an SSH identity."
    echo "Please start ssh-agent, add your identity, and retry."
    exit 1
  fi

}

Add the following to /etc/profile so that ssh-agent is started automatically on every terminal login.
# ssh-agent auto-load

SSH_ENV="$HOME/.ssh/environment"

function start_agent {
     echo "Initialising new SSH agent..."
     /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}"
     echo succeeded
     chmod 600 "${SSH_ENV}"
     . "${SSH_ENV}" > /dev/null
     /usr/bin/ssh-add;
}

# Source SSH settings, if applicable

if [ -f "${SSH_ENV}" ]; then
     . "${SSH_ENV}" > /dev/null
     #ps ${SSH_AGENT_PID} doesn't work under cygwin
     ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || {
         start_agent;
     }
else
     start_agent;
fi

(2) kube-up

Purpose: bring up (initialize) a k8s cluster on ubuntu.

Process: first source config-default.sh, then check whether ubuntu/binaries/master/kube-apiserver exists; if it does not, run build.sh to download the required binaries. setClusterInfo then derives the master and node IPs from the node list configured in config-default.sh. With the cluster information in place, provision-master, provision-node, or provision-masterandnode is called according to each node's configured role (a/i/ai); these functions scp the required configuration files and binaries to each target machine (into ~/kube) and complete the deployment of the master, node, or combined master-and-node. A sketch of the relevant config-default.sh values is shown below.
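For reference, a minimal sketch of the values config-default.sh needs for a two-machine cluster. The user names, IPs and ranges are placeholders, and the shipped config-default.sh should be consulted for the exact syntax; the variable names are the ones consumed by setClusterInfo and provision-masterandnode below:

# Hypothetical layout: machine 1 is master+node ("ai"), machine 2 is a node ("i").
export nodes="vcap@192.168.0.2 vcap@192.168.0.3"
export roles=("ai" "i")
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24   # virtual IPs handed out to services
export FLANNEL_NET=172.16.0.0/16                 # overlay network for pod traffic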


# Instantiate a kubernetes cluster on ubuntu
function kube-up() {
  source "${KUBE_ROOT}/cluster/ubuntu/${KUBE_CONFIG_FILE-"config-default.sh"}"

  # ensure the binaries are well prepared
  if [ ! -f "ubuntu/binaries/master/kube-apiserver" ]; then
    echo "No local binaries for kube-up, downloading... "
    "${KUBE_ROOT}/cluster/ubuntu/build.sh"
  fi

  setClusterInfo
  ii=0

  for i in ${nodes}
  do
    {
      if [ "${roles[${ii}]}" == "a" ]; then
        provision-master
      elif [ "${roles[${ii}]}" == "ai" ]; then
        provision-masterandnode
      elif [ "${roles[${ii}]}" == "i" ]; then
        provision-node $i
      else
        echo "unsupported role for ${i}. please check"
        exit 1
      fi
    }

    ((ii=ii+1))
  done
  wait

  verify-cluster
  detect-master
  export CONTEXT="ubuntu"
  export KUBE_SERVER="http://${KUBE_MASTER_IP}:8080"

  source "${KUBE_ROOT}/cluster/common.sh"

  # set kubernetes user and password
  load-or-gen-kube-basicauth

  create-kubeconfig
}

About build.sh:

This script can be run before kube-up.sh. The exact versions to download can be pinned through environment variables; source the following script before running build.sh:

cd kubernetes/cluster/ubuntu
source init_version.sh

# init_version.sh
#!/bin/bash

export KUBE_VERSION=1.1.2
export FLANNEL_VERSION=0.5.5
export ETCD_VERSION=2.2.1

echo "done"
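After sourcing the versions, run build.sh from the same directory. The layout shown below is an assumption based on the kube-apiserver check in kube-up and on the scp paths in provision-masterandnode:

./build.sh                          # download and unpack etcd, flannel and the k8s release
ls binaries/master binaries/minion  # kube-up expects e.g. binaries/master/kube-apiserver here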

About setClusterInfo:

The nodes and roles variables are both initialized in config-default.sh; MASTER_IP and NODE_IPS are then derived from them. A worked example follows the function.

# From user input set the necessary k8s and etcd configuration information
function setClusterInfo() {
  # Initialize NODE_IPS in setClusterInfo function
  # NODE_IPS is defined as a global variable, and is concatenated with other nodeIP	
  # When setClusterInfo is called for many times, this could cause potential problems
  # Such as, you will have NODE_IPS=192.168.0.2,192.168.0.3,192.168.0.2,192.168.0.3 which is obviously wrong
  NODE_IPS=""
  
  ii=0
  for i in $nodes; do
    nodeIP=${i#*@}

    if [[ "${roles[${ii}]}" == "ai" ]]; then
      MASTER_IP=$nodeIP
      MASTER=$i
      NODE_IPS="$nodeIP"
    elif [[ "${roles[${ii}]}" == "a" ]]; then
      MASTER_IP=$nodeIP
      MASTER=$i
    elif [[ "${roles[${ii}]}" == "i" ]]; then
      if [[ -z "${NODE_IPS}" ]];then
        NODE_IPS="$nodeIP"
      else
        NODE_IPS="$NODE_IPS,$nodeIP"
      fi
    else
      echo "unsupported role for ${i}. please check"
      exit 1
    fi

    ((ii=ii+1))
  done

}
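A quick worked example (values are hypothetical, matching the sketch above):

# Suppose config-default.sh defines:
#   nodes="vcap@192.168.0.2 vcap@192.168.0.3"
#   roles=("ai" "i")
# Then after setClusterInfo:
#   MASTER=vcap@192.168.0.2   MASTER_IP=192.168.0.2
#   NODE_IPS=192.168.0.2,192.168.0.3
# The IP is stripped out of user@IP with bash parameter expansion:
i="vcap@192.168.0.2"
nodeIP=${i#*@}     # remove everything up to and including the first '@'
echo "$nodeIP"     # prints 192.168.0.2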

About provision-masterandnode:
function provision-masterandnode() {
  # copy the binaries and scripts to the ~/kube directory on the master
  echo "Deploying master and node on machine ${MASTER_IP}"
  echo "SSH_OPTS=$SSH_OPTS"
  echo "MASTER=$MASTER"
  echo "SERVICE_CLUSTER_IP_RANGE=$SERVICE_CLUSTER_IP_RANGE"
  echo "ADMISSION_CONTROL=$ADMISSION_CONTROL"
  echo "SERVICE_NODE_PORT_RANGE=$SERVICE_NODE_PORT_RANGE"
  echo "NODE_IPS=$NODE_IPS"
  echo "DNS_SERVER_IP=$DNS_SERVER_IP"
  echo "DNS_DOMAIN=$DNS_DOMAIN"
  echo "FLANNEL_NET=$FLANNEL_NET"
  echo
  ssh $SSH_OPTS $MASTER "mkdir -p ~/kube/default"
  # scp order matters
  scp -r $SSH_OPTS ubuntu/config-default.sh ubuntu/util.sh ubuntu/minion/* ubuntu/master/* ubuntu/reconfDocker.sh ubuntu/binaries/master/ ubuntu/binaries/minion "${MASTER}:~/kube"

  # remote login to the node and use sudo to configue k8s
  ssh $SSH_OPTS -t $MASTER "source ~/kube/util.sh; \
                            setClusterInfo; \
                            create-etcd-opts; \
                            create-kube-apiserver-opts "${SERVICE_CLUSTER_IP_RANGE}" "${ADMISSION_CONTROL}" "${SERVICE_NODE_PORT_RANGE}"; \
                            create-kube-controller-manager-opts "${NODE_IPS}"; \
                            create-kube-scheduler-opts; \
                            create-kubelet-opts "${MASTER_IP}" "${MASTER_IP}" "${DNS_SERVER_IP}" "${DNS_DOMAIN}";
                            create-kube-proxy-opts "${MASTER_IP}";\
                            create-flanneld-opts "127.0.0.1"; \
                            sudo -p '[sudo] password to start master: ' cp ~/kube/default/* /etc/default/ && sudo cp ~/kube/init_conf/* /etc/init/ && sudo cp ~/kube/init_scripts/* /etc/init.d/ ; \
                            sudo mkdir -p /opt/bin/ && sudo cp ~/kube/master/* /opt/bin/ && sudo cp ~/kube/minion/* /opt/bin/; \
                            sudo service etcd start; \
                            sudo FLANNEL_NET=${FLANNEL_NET} -b ~/kube/reconfDocker.sh "ai";"
}
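After provisioning, a quick sanity check can be run against the master over ssh (the host is a placeholder; only the etcd Upstart job is started explicitly by the script above, so the other daemons are simply checked by name with pgrep, the same way verify-master does):

ssh vcap@192.168.0.2 "service etcd status; \
                      pgrep -lf 'kube-apiserver|kube-controller-manager|kube-scheduler'"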

About verify-cluster:
function verify-cluster {
  ii=0

  for i in ${nodes}
  do
    if [ "${roles[${ii}]}" == "a" ]; then
      verify-master
    elif [ "${roles[${ii}]}" == "i" ]; then
      verify-node $i
    elif [ "${roles[${ii}]}" == "ai" ]; then
      verify-master
      verify-node $i
    else
      echo "unsupported role for ${i}. please check"
      exit 1
    fi

    ((ii=ii+1))
  done

  echo
  echo "Kubernetes cluster is running.  The master is running at:"
  echo
  echo "  http://${MASTER_IP}:8080"
  echo

}

About verify-master:
function verify-master(){
  # verify master has all required daemons
  printf "Validating master"
  local -a required_daemon=("kube-apiserver" "kube-controller-manager" "kube-scheduler")
  local validated="1"
  local try_count=1
  local max_try_count=3
  until [[ "$validated" == "0" ]]; do
    validated="0"
    local daemon
    for daemon in "${required_daemon[@]}"; do
      ssh $SSH_OPTS "$MASTER" "pgrep -f ${daemon}" >/dev/null 2>&1 || {
        printf "."
        validated="1"
        ((try_count=try_count+1))
        if [[ ${try_count} -gt ${max_try_count} ]]; then
          printf "\nWarning: Process \"${daemon}\" failed to run on ${MASTER}, please check.\n"
          exit 1
        fi
        sleep 2
      }
    done
  done
  printf "\n"

}

About verify-node:
function verify-node(){
  # verify node has all required daemons
  printf "Validating ${1}"
  local -a required_daemon=("kube-proxy" "kubelet" "docker")
  local validated="1"
  local try_count=1
  local max_try_count=3
  until [[ "$validated" == "0" ]]; do
    validated="0"
    local daemon
    for daemon in "${required_daemon[@]}"; do
      ssh $SSH_OPTS "$1" "pgrep -f $daemon" >/dev/null 2>&1 || {
        printf "."
        validated="1"
        ((try_count=try_count+1))
        if [[ ${try_count} -gt ${max_try_count} ]]; then
          printf "\nWarning: Process \"${daemon}\" failed to run on ${1}, please check.\n"
          exit 1
        fi
        sleep 2
      }
    done
  done
  printf "\n"
}
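The same check can be done by hand for a single machine, which helps when verify-node keeps printing dots (the host is a placeholder):

for d in kube-proxy kubelet docker; do
  ssh vcap@192.168.0.3 "pgrep -f ${d}" >/dev/null 2>&1 \
    && echo "${d}: running" || echo "${d}: NOT running"
done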

3 Testing

The kubectl tool can be used to drive k8s from the command line; a short smoke test follows the help output below.

root@gerryyang:~/k8s/test/kubernetes/k8s_1.1.3/kubernetes/cluster/ubuntu/binaries# ./kubectl --help
kubectl controls the Kubernetes cluster manager.

Find more information at https://github.com/kubernetes/kubernetes.

Usage: 
  kubectl [flags]
  kubectl [command]

Available Commands: 
  get            Display one or many resources
  describe       Show details of a specific resource or group of resources
  create         Create a resource by filename or stdin
  replace        Replace a resource by filename or stdin.
  patch          Update field(s) of a resource by stdin.
  delete         Delete resources by filenames, stdin, resources and names, or by resources and label selector.
  edit           Edit a resource on the server
  apply          Apply a configuration to a resource by filename or stdin
  namespace      SUPERSEDED: Set and view the current Kubernetes namespace
  logs           Print the logs for a container in a pod.
  rolling-update Perform a rolling update of the given ReplicationController.
  scale          Set a new size for a Replication Controller.
  attach         Attach to a running container.
  exec           Execute a command in a container.
  port-forward   Forward one or more local ports to a pod.
  proxy          Run a proxy to the Kubernetes API server
  run            Run a particular image on the cluster.
  stop           Deprecated: Gracefully shut down a resource by name or filename.
  expose         Take a replication controller, service or pod and expose it as a new Kubernetes Service
  autoscale      Auto-scale a replication controller
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  config         config modifies kubeconfig files
  cluster-info   Display cluster info
  api-versions   Print the supported API versions on the server, in the form of "group/version".
  version        Print the client and server version information.
  help           Help about any command

Flags:
      --alsologtostderr[=false]: log to standard error as well as files
      --api-version="": The API version to use when talking to the server
      --certificate-authority="": Path to a cert. file for the certificate authority.
      --client-certificate="": Path to a client key file for TLS.
      --client-key="": Path to a client key file for TLS.
      --cluster="": The name of the kubeconfig cluster to use
      --context="": The name of the kubeconfig context to use
      --insecure-skip-tls-verify[=false]: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.
      --kubeconfig="": Path to the kubeconfig file to use for CLI requests.
      --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace
      --log-dir="": If non-empty, write log files in this directory
      --log-flush-frequency=5s: Maximum number of seconds between log flushes
      --logtostderr[=true]: log to standard error instead of files
      --match-server-version[=false]: Require server version to match client version
      --namespace="": If present, the namespace scope for this CLI request.
      --password="": Password for basic authentication to the API server.
  -s, --server="": The address and port of the Kubernetes API server
      --stderrthreshold=2: logs at or above this threshold go to stderr
      --token="": Bearer token for authentication to the API server.
      --user="": The name of the kubeconfig user to use
      --username="": Username for basic authentication to the API server.
      --v=0: log level for V logs
      --vmodule=: comma-separated list of pattern=N settings for file-filtered logging


Use "kubectl [command] --help" for more information about a command.
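A minimal smoke test once the cluster is up might look like the following (the image and names are only illustrative; in k8s 1.1 kubectl run creates a ReplicationController):

./kubectl get nodes                              # every node should report Ready
./kubectl run nginx --image=nginx --replicas=2   # start two nginx pods via an RC
./kubectl get pods
./kubectl expose rc nginx --port=80              # publish them as a cluster-internal service
./kubectl get services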


The binaries involved in the deployment fall into three groups: the kube* components (kube-apiserver, kube-controller-manager, kube-scheduler on the master; kubelet, kube-proxy on the nodes), etcd, and flannel.

4 References

[1] https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/ubuntu.md
[2] https://github.com/kubernetes/kubernetes/releases
[3] http://mah.everybody.org/docs/ssh