Quickly Installing and Configuring OpenShift on AWS RHEL 7

Introduction to OpenShift

As microservice architectures see ever wider adoption, Docker and Kubernetes have become indispensable. Red Hat OpenShift 3 is a container application platform built on Docker and Kubernetes for developing and deploying enterprise applications.

OpenShift Editions

OpenShift Dedicated (Enterprise)

  • Private, high-availability OpenShift clusters hosted on Amazon Web Services or Google Cloud Platform
  • Delivered as a hosted service and supported by Red Hat

OpenShift Container Platform (Enterprise)

  • Across cloud and on-premise infrastructure
  • Customizable, with full administrative control

OKD
The open source community edition of OpenShift (Origin Community Distribution of Kubernetes)

OpenShift Architecture

[Figure: OpenShift architecture diagram]

  • Master Node components: the API Server (handles requests from clients, including nodes, users, administrators, and other infrastructure systems), the Controller Manager Server (includes the scheduler and replication controller), and the OpenShift client tool (oc)
  • Compute Nodes (Application Nodes) run the deployed applications
  • Infra Nodes run the router, the image registry, and other infrastructure services
  • etcd can be deployed on the Master Node or separately; it stores shared data: master state and image, build, and deployment metadata
  • A Pod is the smallest Kubernetes object and can run one or more containers

Installation Plan

Software Environment

  • AWS RHEL 7.5
  • OKD 3.10
  • Ansible 2.6.3
  • Docker 1.13.1
  • Kubernetes 1.10

With Ansible, installing OpenShift requires only configuring some node information and parameters to complete the cluster installation, which greatly speeds things up.

Hardware Requirements

Masters

  • Minimum 4 vCPU
  • Minimum 16 GB RAM
  • Minimum 40 GB disk space for /var/
  • Minimum 1 GB disk space for /usr/local/bin/
  • Minimum 1 GB disk space for the temporary directory

Nodes

  • Minimum 1 vCPU
  • Minimum 8 GB RAM
  • Minimum 15 GB disk space for /var/
  • Minimum 1 GB disk space for /usr/local/bin/
  • Minimum 1 GB disk space for the temporary directory

Installation Types

                     RPM-based Installations            System Container Installations
Delivery Mechanism   RPM packages using yum             System container images using docker
Service Management   systemd                            docker and systemd units
Operating System     Red Hat Enterprise Linux (RHEL)    RHEL Atomic Host

RPM installations install and configure services through the package manager; system container installations use system container images, with each service running in its own container.
Starting with OKD 3.10, the RPM method is used to install OKD components on Red Hat Enterprise Linux (RHEL), and the system container method is used on RHEL Atomic Host. Both installation types provide the same functionality; the choice depends on your operating system and your preferred methods of service management and system upgrades.

This article uses the RPM installation method.

Node ConfigMaps

ConfigMaps define node configurations; OKD 3.10 ignores the openshift_node_labels value. The following ConfigMaps are created by default:

  • node-config-master
  • node-config-infra
  • node-config-compute
  • node-config-all-in-one
  • node-config-master-infra

During cluster installation we select node-config-master, node-config-infra, and node-config-compute; a quick way to inspect them after installation is shown below.
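
Once the cluster is up you can inspect these ConfigMaps yourself. A minimal check, assuming they live in the openshift-node project as in a default OKD 3.10 install:

# oc get configmaps -n openshift-node
# oc describe configmap node-config-compute -n openshift-node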

Environment Scenarios

  • One Master, Compute, and Infra Node each, with etcd deployed on the master
  • Three Masters, Compute, and Infra Nodes each, with etcd deployed on the masters

To get familiar with the OpenShift installation quickly, we start with the first scenario and, once it succeeds, install the second. Ansible usually runs on a separate machine, so the two scenarios require creating 4 and 10 EC2 instances respectively.

Preparation

Update the System

# yum update

Red Hat Subscription

Installing OpenShift requires a Red Hat account with an RHEL subscription. Run the following commands in order to enable the required repos (replace the pool ID with one from your own subscription):

# subscription-manager register
# subscription-manager list --available
# subscription-manager attach --pool=8a85f98b62dd96fc0162f04efb0e6350
# subscription-manager repos --list
# subscription-manager repos --enable rhel-7-server-ansible-2.6-rpms
# subscription-manager repos --enable rhel-7-server-rpms
# subscription-manager repos --enable rhel-7-server-extras-rpms

Check SELinux

Check /etc/selinux/config and make sure it contains the following:

SELINUX=enforcing
SELINUXTYPE=targeted
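
To confirm the running mode without a reboot, a quick check:

# getenforce
# sestatus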

Configure DNS

To use cleaner names, set up an additional DNS server and give the EC2 instances proper domain names, for example:

master1.itrunner.org    A   10.64.33.100
master2.itrunner.org    A   10.64.33.103
node1.itrunner.org      A   10.64.33.101
node2.itrunner.org      A   10.64.33.102
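
You can verify that the records resolve against the DNS server, for example with dig (provided by the bind-utils package installed below); the expected answer is the A record above:

$ dig +short master1.itrunner.org @10.164.18.18
10.64.33.100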

The EC2 instances must be configured to use this DNS server. Create the dhclient.conf file:

# vi /etc/dhcp/dhclient.conf

Add the following content:

supersede domain-name-servers 10.164.18.18;

A reboot is required for the configuration to take effect. After rebooting, /etc/resolv.conf should contain:

# Generated by NetworkManager
search cn-north-1.compute.internal
nameserver 10.164.18.18

OKD uses dnsmasq. After a successful installation all nodes are configured automatically: /etc/resolv.conf is modified so that the nameserver becomes the node's own IP. Pods use their node as the DNS server, and the node forwards the requests.

# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
# Generated by NetworkManager
search cluster.local cn-north-1.compute.internal itrunner.org
nameserver 10.64.33.100

Configure the Hostname

hostnamectl set-hostname --static master1.itrunner.org

Edit the /etc/cloud/cloud.cfg file and add the following at the bottom:

preserve_hostname: true
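
To confirm the hostname is set correctly, check:

$ hostnamectl status
$ hostname -f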

Install Base Packages

Required on all nodes.

# yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct

Install Docker

Required on all nodes.

# yum install docker
# systemctl enable docker
# systemctl start docker

Verify the Docker installation:

# docker info

Install Ansible

Required only on the Ansible EC2 instance.

# yum install ansible

Ansible must be able to reach all the other machines to perform the installation, so passwordless SSH login is required. Copy the key into the ec2-user ~/.ssh directory and set its permissions:

$ cd .ssh/
$ chmod 600 *

After configuring, test the connection to each host one by one:

ssh master1.itrunner.org

If you log in with a password, or with a key that requires a passphrase, use keychain (see the sketch below).
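
A minimal keychain setup, assuming the key file is ~/.ssh/id_rsa (adjust to your own key) and that keychain is available, e.g. from EPEL. Add this to ~/.bash_profile so the agent and cached passphrase persist across logins:

eval $(keychain --eval --agents ssh ~/.ssh/id_rsa)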

Configure Security Groups

Security Group          Ports
All OKD Hosts           tcp/22 from the host running the installer/Ansible
etcd Security Group     tcp/2379 from masters; tcp/2380 from etcd hosts
Master Security Group   tcp/8443 from 0.0.0.0/0; tcp/53, udp/53, tcp/8053, udp/8053 from all OKD hosts
Node Security Group     tcp/10250 from masters; udp/4789 from nodes
Infrastructure Nodes    tcp/443, tcp/80 from 0.0.0.0/0

Configure ELBs

ELBs are needed for the second scenario.
When using external ELBs, the inventory file does not define the lb group; instead the three parameters openshift_master_cluster_hostname, openshift_master_cluster_public_hostname, and openshift_master_default_subdomain must be specified (see the later sections).
openshift_master_cluster_hostname and openshift_master_cluster_public_hostname load-balance the masters, so their ELBs point at the Master Nodes. openshift_master_cluster_hostname is for internal use, while openshift_master_cluster_public_hostname is for external access (the Web Console). The two can be set to the same domain name, but the ELB behind openshift_master_cluster_hostname must then be configured as passthrough.
For security, in production openshift_master_cluster_hostname and openshift_master_cluster_public_hostname should be set to two different domain names.
openshift_master_default_subdomain defines the domain under which applications deployed on OpenShift are exposed; its ELB points at the Infra Nodes.
In total, three ELBs therefore need to be created:

  • openshift_master_cluster_hostname: requires a Network Load Balancer, protocol TCP, default port 8443; the targets must be registered by IP.
  • openshift_master_cluster_public_hostname: an ELB/ALB, protocol HTTPS, default port 8443.
  • openshift_master_default_subdomain: an ELB/ALB, protocol HTTPS on default port 443 and protocol HTTP on default port 80.

For ease of use, openshift_master_cluster_public_hostname and openshift_master_default_subdomain are usually set to company domain names rather than using the AWS ELB DNS names directly; a sketch of creating the internal NLB with the AWS CLI follows.
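
As an illustration, the internal NLB behind openshift_master_cluster_hostname could be created with the AWS CLI roughly as follows; the subnet, VPC, ARNs, and names are placeholders for your own values, and the target IPs reuse the master addresses from the DNS section:

$ aws elbv2 create-load-balancer --name openshift-master-internal \
    --type network --scheme internal --subnets subnet-0123456789abcdef0
$ aws elbv2 create-target-group --name openshift-masters --protocol TCP \
    --port 8443 --target-type ip --vpc-id vpc-0123456789abcdef0
$ aws elbv2 register-targets --target-group-arn <target-group-arn> \
    --targets Id=10.64.33.100 Id=10.64.33.103
$ aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> \
    --protocol TCP --port 8443 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>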

Install OpenShift

Download openshift-ansible

$ cd ~
$ git clone https://github.com/openshift/openshift-ansible
$ cd openshift-ansible
$ git checkout release-3.10

Configure the Inventory File

The inventory file defines hosts and configuration information; the default file is /etc/ansible/hosts.
Scenario One
One master, one compute, and one infra node, with etcd deployed on the master.

# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=ec2-user

# If ansible_ssh_user is not root, ansible_become must be set to true
ansible_become=true

openshift_deployment_type=origin
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Defining htpasswd users
#openshift_master_htpasswd_users={'user1': '<pre-hashed password>', 'user2': '<pre-hashed password>'}
# or
#openshift_master_htpasswd_file=<path to local pre-generated htpasswd file>

# host group for masters
[masters]
master1.itrunner.org

# host group for etcd
[etcd]
master1.itrunner.org

# host group for nodes, includes region info
[nodes]
master1.itrunner.org openshift_node_group_name='node-config-master'
compute1.itrunner.org openshift_node_group_name='node-config-compute'
infra1.itrunner.org openshift_node_group_name='node-config-infra'

Scenario Two
Three masters, three compute, and three infra nodes. In non-production environments the load balancer does not have to be an external ELB; HAProxy can be used instead. etcd can be deployed separately or co-located with the masters.

  1. Multiple Masters Using Native HA with External Clustered etcd
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin

# uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_group_name='node-config-master'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'
  2. Multiple Masters Using Native HA with Co-located Clustered etcd
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin

# uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
master1.example.com
master2.example.com
master3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_group_name='node-config-master'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'
  3. ELB Load Balancer

With an external ELB, the lb group is not defined; openshift_master_cluster_hostname, openshift_master_cluster_public_hostname, and openshift_master_default_subdomain must be specified.

# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
# Since we are providing a pre-configured LB VIP, no need for this group
#lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=ec2-user

# If ansible_ssh_user is not root, ansible_become must be set to true
ansible_become=true

openshift_deployment_type=origin
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Defining htpasswd users
#openshift_master_htpasswd_users={'user1': '<pre-hashed password>', 'user2': '<pre-hashed password>'}
# or
#openshift_master_htpasswd_file=<path to local pre-generated htpasswd file>

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-master-internal-123456b57ac7be6c.elb.cn-north-1.amazonaws.com.cn
openshift_master_cluster_public_hostname=openshift.itrunner.org
openshift_master_default_subdomain=apps.itrunner.org
#openshift_master_api_port=443
#openshift_master_console_port=443

# host group for masters
[masters]
master1.itrunner.org
master2.itrunner.org
master3.itrunner.org

# host group for etcd
[etcd]
master1.itrunner.org
master2.itrunner.org
master3.itrunner.org

# Since we are providing a pre-configured LB VIP, no need for this group
#[lb]
#lb.itrunner.org

# host group for nodes, includes region info
[nodes]
master[1:3].itrunner.org openshift_node_group_name='node-config-master'
node1.itrunner.org openshift_node_group_name='node-config-compute'
node2.itrunner.org openshift_node_group_name='node-config-compute'
infra-node1.itrunner.org openshift_node_group_name='node-config-infra'
infra-node2.itrunner.org openshift_node_group_name='node-config-infra'

Run the Installation

With everything ready, installing OpenShift with Ansible is very simple: just run the two playbooks prerequisites.yml and deploy_cluster.yml.

$ ansible-playbook ~/openshift-ansible/playbooks/prerequisites.yml
$ ansible-playbook ~/openshift-ansible/playbooks/deploy_cluster.yml

If you are not using the default inventory file, specify its location with -i:

$ ansible-playbook [-i /path/to/inventory] ~/openshift-ansible/playbooks/prerequisites.yml
$ ansible-playbook [-i /path/to/inventory] ~/openshift-ansible/playbooks/deploy_cluster.yml

If an error occurs during deployment, fix it, re-run the playbook named in the error message to verify the fix, and then run deploy_cluster.yml again; an illustration follows.
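
For example, if node configuration failed, the message might point at a playbook such as the following (path shown purely for illustration; use whatever the installer actually suggests):

$ ansible-playbook [-i /path/to/inventory] ~/openshift-ansible/playbooks/openshift-node/config.yml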

Verify the Installation

  1. Verify that all nodes were installed successfully. On a master, run:
# oc get nodes
  2. Verify the Web Console

In scenario one, access the web console via the master hostname: https://master1.itrunner.org:8443/console
In scenario two, access the web console via the domain name: https://openshift.itrunner.org:8443/console
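
It is also worth confirming that the router and registry pods came up on the Infra Nodes; in a default install they run in the default project:

# oc get pods -n default -o wide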

Users and Permissions

Create two users:

# htpasswd /etc/origin/master/htpasswd admin
# htpasswd /etc/origin/master/htpasswd developer

Log in as system:admin:

# oc login -u system:admin

Grant roles to the users:

# oc adm policy add-cluster-role-to-user cluster-admin admin
# oc adm policy add-role-to-user admin admin
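
To confirm the binding took effect, for example:

# oc get clusterrolebindings | grep cluster-admin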

CLI Configuration File
The oc login command automatically creates and manages the CLI configuration file ~/.kube/config.
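
For example, to inspect the current context and the stored configuration:

$ oc config current-context
$ oc config view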

Uninstall OpenShift

  • Uninstall all nodes

Use the inventory file from the installation:

$ ansible-playbook ~/openshift-ansible/playbooks/adhoc/uninstall.yml
  • Uninstall specific nodes

Create a new inventory file and configure the nodes to be uninstalled:

[OSEv3:children]
nodes 

[OSEv3:vars]
ansible_ssh_user=ec2-user
openshift_deployment_type=origin

[nodes]
node3.example.com openshift_node_group_name='node-config-infra'

Run the uninstall.yml playbook, specifying the new inventory file:

$ ansible-playbook -i /path/to/new/file ~/openshift-ansible/playbooks/adhoc/uninstall.yml

References

OpenShift
OpenShift Github
OpenShift Documentation
OKD
OKD Latest Documentation
Ansible Documentation
External Load Balancer Integrations with OpenShift Enterprise 3
Red Hat OpenShift on AWS
Docker Documentation
Kubernetes Documentation
Kubernetes中文社區 (Kubernetes Chinese Community)
Kubernetes: Unified Log Management with EFK (Chinese)
SSL For Free
