Building a Kubernetes (k8s) 1.4.6 Cluster from Source on CentOS 7.2
1. Preliminary Work
1.1 Prerequisites
- Prepare at least two physical or virtual machines with CentOS 7.2 installed (three KVM virtual machines were used for this guide);
- Set each machine's hostname with:
hostnamectl set-hostname k8s-mst
Role | IP | Hostname |
---|---|---|
Master | 192.168.3.87 | k8s-mst |
Node | 192.168.3.88 | k8s-nod1 |
Node | 192.168.3.89 | k8s-nod2 |
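As an optional convenience (not part of the original steps), the table above can be mirrored into /etc/hosts on every machine so the nodes can reach each other by hostname:

```shell
# Optional: let every node resolve the cluster hostnames locally.
# The IPs and hostnames below come from the table above.
cat <<'EOF' >> /etc/hosts
192.168.3.87 k8s-mst
192.168.3.88 k8s-nod1
192.168.3.89 k8s-nod2
EOF
```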
To avoid conflicts with Docker's iptables rules, disable the firewall on the Node machines:
systemctl stop firewalld
systemctl disable firewalld
To keep the clocks of all nodes in sync, install NTP on every node:
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd
1.2 Building Kubernetes from Source (preferably on a Node)
1.2.1 Install docker-engine
The following steps install a relatively recent version.
- Add the yum repository:
sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/experimental/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
Install docker-engine:
sudo yum install -y docker-engine
Start the Docker daemon:
sudo systemctl start docker
- You can check the daemon's status with:
sudo systemctl status docker
1.2.2 Install Golang
sudo yum install -y golang
1.2.3 Download Kubernetes
- If you use git clone, enter the kubernetes directory after the download completes and run git checkout v1.4.6; if you downloaded a release package instead, unpack it with tar -xvf kubernetes-1.4.6.tar.gz and enter the kubernetes directory.
1.2.4 Build Kubernetes
Edit the hosts file
The build pulls images from the Google Container Registry (gcr), which is unreachable from mainland China without a workaround, so adjust your hosts file accordingly; see https://github.com/racaljk/hosts for reference.
Adjust the target-platform settings
Edit hack/lib/golang.sh for your platform (linux/amd64): comment out every entry other than linux/amd64 in KUBE_SERVER_PLATFORMS, KUBE_CLIENT_PLATFORMS, and KUBE_TEST_PLATFORMS to shorten the build.
Build the source
In the Kubernetes root directory, run make release-skip-tests. The build takes a while; once it succeeds, the executables are in the _output directory.
2. Master Configuration
2.1 Install etcd and edit its configuration
2.1.1 Install etcd
yum -y install etcd
2.1.2 Edit the etcd configuration file /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
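A sketch of applying those three settings in one step; it backs up the packaged file first (paths are the defaults shipped by the CentOS etcd package):

```shell
# Back up the stock config, then write the settings used in this guide
mkdir -p /etc/etcd
[ -f /etc/etcd/etcd.conf ] && cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
cat > /etc/etcd/etcd.conf <<'EOF'
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
EOF
```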
2.1.3 Start etcd
sudo systemctl start etcd
2.1.4 Configure the cluster network in etcd
etcdctl mk /k8s/network/config '{"Network":"172.17.0.0/16"}'
2.2 Kubernetes Environment Configuration
2.2.1 Copy the executables
Copy kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl from the _output/release-stage/server/linux-amd64/kubernetes/server/bin/ directory to /usr/bin/ on the Master node.
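The copy step can be scripted; this sketch assumes it runs from the Kubernetes source root on the build machine (when the Master is a different machine, scp the files over instead):

```shell
# Copy the master-side binaries into /usr/bin, warning about any that are missing
BIN_DIR=_output/release-stage/server/linux-amd64/kubernetes/server/bin
for bin in kube-apiserver kube-controller-manager kube-scheduler kubectl; do
  if [ -f "${BIN_DIR}/${bin}" ]; then
    cp "${BIN_DIR}/${bin}" /usr/bin/ && chmod +x "/usr/bin/${bin}"
  else
    echo "missing: ${BIN_DIR}/${bin} (build the source first)" >&2
  fi
done
```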
2.2.2 Create the service and configuration files (shell script)
Adjust MASTER_ADDRESS and ETCD_SERVERS below to match your setup.
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
MASTER_ADDRESS=${1:-"192.168.3.87"}
ETCD_SERVERS=${2:-"http://192.168.3.87:2379"}
SERVICE_CLUSTER_IP_RANGE=${3:-"10.254.0.0/16"}
ADMISSION_CONTROL=${4:-"NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"}
cat <<EOF >/etc/kubernetes/config
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=true"
# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=0"
# --allow-privileged=false: If true, allow privileged containers.
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"
EOF
cat <<EOF >/etc/kubernetes/apiserver
# --insecure-bind-address=127.0.0.1: The IP address on which to serve the --insecure-port.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# --insecure-port=8080: The port on which to serve unsecured, unauthenticated access.
KUBE_API_PORT="--insecure-port=8080"
# --kubelet-port=10250: Kubelet port
NODE_PORT="--kubelet-port=10250"
# --etcd-servers=[]: List of etcd servers to watch (http://ip:port),
# comma separated. Mutually exclusive with -etcd-config
KUBE_ETCD_SERVERS="--etcd-servers=${ETCD_SERVERS}"
# --advertise-address=<nil>: The IP address on which to advertise
# the apiserver to members of the cluster.
KUBE_ADVERTISE_ADDR="--advertise-address=${MASTER_ADDRESS}"
# --service-cluster-ip-range=<nil>: A CIDR notation IP range from which to assign service cluster IPs.
# This must not overlap with any IP ranges assigned to nodes for pods.
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE}"
# --admission-control="AlwaysAdmit": Ordered list of plug-ins
# to do admission control of resources into cluster.
# Comma-delimited list of:
# LimitRanger, AlwaysDeny, SecurityContextDeny, NamespaceExists,
# NamespaceLifecycle, NamespaceAutoProvision,
# AlwaysAdmit, ServiceAccount, ResourceQuota, DefaultStorageClass
KUBE_ADMISSION_CONTROL="--admission-control=${ADMISSION_CONTROL}"
# Add your own!
KUBE_API_ARGS=""
EOF
KUBE_APISERVER_OPTS=" \${KUBE_LOGTOSTDERR} \\
\${KUBE_LOG_LEVEL} \\
\${KUBE_ETCD_SERVERS} \\
\${KUBE_API_ADDRESS} \\
\${KUBE_API_PORT} \\
\${NODE_PORT} \\
\${KUBE_ADVERTISE_ADDR} \\
\${KUBE_ALLOW_PRIV} \\
\${KUBE_SERVICE_ADDRESSES} \\
\${KUBE_ADMISSION_CONTROL} \\
\${KUBE_API_ARGS}"
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver ${KUBE_APISERVER_OPTS}
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
EOF
KUBE_CONTROLLER_MANAGER_OPTS=" \${KUBE_LOGTOSTDERR} \\
\${KUBE_LOG_LEVEL} \\
\${KUBE_MASTER} \\
\${KUBE_CONTROLLER_MANAGER_ARGS}"
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager ${KUBE_CONTROLLER_MANAGER_OPTS}
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/kubernetes/scheduler
###
# kubernetes scheduler config
# Add your own!
KUBE_SCHEDULER_ARGS=""
EOF
KUBE_SCHEDULER_OPTS=" \${KUBE_LOGTOSTDERR} \\
\${KUBE_LOG_LEVEL} \\
\${KUBE_MASTER} \\
\${KUBE_SCHEDULER_ARGS}"
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler ${KUBE_SCHEDULER_OPTS}
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
2.2.3 Start the Kubernetes services (shell script)
for svc in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $svc
systemctl enable $svc
systemctl status $svc
done
3. Node Configuration
3.1 Install flannel and edit its configuration
3.1.1 Install flannel
yum -y install flannel
3.1.2 Edit the flannel configuration file /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.3.87:2379"
FLANNEL_ETCD_KEY="/k8s/network"
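The same two lines applied as a heredoc, pointing flannel at the Master's etcd and at the /k8s/network key created in section 2.1.4:

```shell
# Write the flannel daemon configuration used by this guide
mkdir -p /etc/sysconfig
cat > /etc/sysconfig/flanneld <<'EOF'
FLANNEL_ETCD="http://192.168.3.87:2379"
FLANNEL_ETCD_KEY="/k8s/network"
EOF
```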
3.1.3 Start flannel
systemctl restart flanneld
systemctl enable flanneld
systemctl status flanneld
3.1.4 Upload the network configuration
Create a config.json file with the following content:
{
"Network": "172.17.0.0/16",
"SubnetLen": 24,
"Backend": {
"Type": "vxlan",
"VNI": 7890
}
}
Then upload the configuration to the etcd server:
curl -L http://192.168.3.87:2379/v2/keys/k8s/network/config -XPUT --data-urlencode value@config.json
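It is worth validating config.json locally before uploading; this sketch recreates the file and checks it with python's json.tool (on CentOS 7.2 the interpreter may be `python` rather than `python3`). The read-back curl is shown as a comment because it needs the etcd server to be running:

```shell
# Recreate the flannel network config from above and validate it locally
cat > config.json <<'EOF'
{
  "Network": "172.17.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "vxlan",
    "VNI": 7890
  }
}
EOF
python3 -m json.tool < config.json > /dev/null && echo "config.json is valid JSON"
# Confirm the upload afterwards (requires etcd to be reachable):
# curl -L http://192.168.3.87:2379/v2/keys/k8s/network/config
```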
3.2 Kubernetes Environment Configuration
3.2.1 Copy the executables
Copy kube-proxy and kubelet from the _output/release-stage/server/linux-amd64/kubernetes/server/bin/ directory to /usr/bin/ on each Node.
3.2.2 Create the service and configuration files (shell script)
Adjust MASTER_ADDRESS and NODE_HOSTNAME below to match your setup.
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
MASTER_ADDRESS=${1:-"192.168.3.87"}
NODE_HOSTNAME=${2:-"k8s-nod"}
cat <<EOF >/etc/kubernetes/config
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=true"
# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=0"
# --allow-privileged=false: If true, allow privileged containers.
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"
EOF
cat <<EOF >/etc/kubernetes/proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS=""
EOF
KUBE_PROXY_OPTS=" \${KUBE_LOGTOSTDERR} \\
\${KUBE_LOG_LEVEL} \\
\${KUBE_MASTER} \\
\${KUBE_PROXY_ARGS}"
cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy ${KUBE_PROXY_OPTS}
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/kubernetes/kubelet
# --address=0.0.0.0: The IP address for the Kubelet to serve on (set to 0.0.0.0 for all interfaces)
KUBELET__ADDRESS="--address=0.0.0.0"
# --port=10250: The port for the Kubelet to serve on. Note that "kubectl logs" will not work if you set this flag.
KUBELET_PORT="--port=10250"
# --hostname-override="": If non-empty, will use this string as identification instead of the actual hostname.
KUBELET_HOSTNAME="--hostname-override=${NODE_HOSTNAME}"
# --api-servers=[]: List of Kubernetes API servers for publishing events,
# and reading pods and services. (ip:port), comma separated.
KUBELET_API_SERVER="--api-servers=${MASTER_ADDRESS}:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
EOF
KUBELET_OPTS=" \${KUBE_LOGTOSTDERR} \\
 \${KUBE_LOG_LEVEL} \\
 \${KUBELET__ADDRESS} \\
 \${KUBELET_PORT} \\
 \${KUBELET_HOSTNAME} \\
 \${KUBELET_API_SERVER} \\
 \${KUBE_ALLOW_PRIV} \\
 \${KUBELET_POD_INFRA_CONTAINER} \\
 \${KUBELET_ARGS}"
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet ${KUBELET_OPTS}
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
3.2.3 Start the Kubernetes services (shell script)
for svc in kube-proxy kubelet docker; do
systemctl restart $svc
systemctl enable $svc
systemctl status $svc
done
4. Verifying the Setup and Creating the Dashboard
4.1 Verify the environment
Run kubectl get nodes on the Master node; the output should look like this:
[root@k8s-mst ~]# kubectl get nodes
NAME STATUS AGE
nod1 Ready 3h
nod2 Ready 3h
4.2 Set Up the Dashboard
4.2.1 Create the kube-system namespace
Create a file kubernetes-namespace.json with the following content:
{
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {
"name": "kube-system"
}
}
Then create the kube-system namespace with the command kubectl create -f kubernetes-namespace.json.
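The manifest can be written and applied in one go; the kubectl line is shown as a comment because it needs the apiserver from section 2 to be up:

```shell
# Write the namespace manifest (same content as above)
cat > kubernetes-namespace.json <<'EOF'
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-system"
  }
}
EOF
# kubectl create -f kubernetes-namespace.json   # run this on the Master
```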
4.2.2 Create the dashboard
Create a kubernetes-dashboard.yaml file:
# Configuration to deploy release version of the Dashboard UI.
# Example usage: kubectl create -f <this_file>
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      # Comment out the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.2
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=http://192.168.3.87:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
Notes:
- Change --apiserver-host=http://192.168.3.87:8080 to your own Master's address;
- The image gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.2 comes from the Google container registry, so the Node must be able to reach gcr.io (e.g. via the hosts-file workaround from section 1.2.4) for the pull to succeed.
Create the kubernetes-dashboard Deployment and Service with the command:
kubectl create -f kubernetes-dashboard.yaml
If a Node cannot pull the image, running the following command afterwards:
kubectl get pod --namespace="kube-system"
will show a result like this:
[root@mst ~]# kubectl get pod --namespace="kube-system"
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-47291540-lcuox 0/1 ErrImagePull 0 1m
4.2.3 Use the dashboard
Once it is running successfully:
[root@k8s-mst ~]# kubectl get pod --namespace="kube-system"
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-3856900779-226mr 1/1 Running 0 2m
[root@k8s-mst ~]# kubectl describe pod kubernetes-dashboard-3856900779-226mr --namespace="kube-system"
Name: kubernetes-dashboard-3856900779-226mr
Namespace: kube-system
Node: nod1/192.168.3.91
Start Time: Tue, 22 Nov 2016 20:33:04 +0800
Labels: app=kubernetes-dashboard
pod-template-hash=3856900779
Status: Running
IP: 172.18.0.2
Controllers: ReplicaSet/kubernetes-dashboard-3856900779
Containers:
kubernetes-dashboard:
Container ID: docker://d36cff522129c73f0de370857124697659662c99d370af548a1367604bac7014
Image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.2
Image ID: docker://sha256:c0e4ba8968ee756368cbe5f64f39b0ef8e128de90d0bdfe1d040f0773055e68a
Port: 9090/TCP
Args:
--apiserver-host=http://192.168.3.87:8080
State: Running
Started: Tue, 22 Nov 2016 20:35:00 +0800
Ready: True
Restart Count: 0
Liveness: http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Volume Mounts: <none>
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
No volumes.
QoS Class: BestEffort
Tolerations: dedicated=master:Equal:NoSchedule
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4m 4m 1 {default-scheduler } Normal Scheduled Successfully assigned kubernetes-dashboard-3856900779-226mr to nod1
3m 3m 1 {kubelet nod1} spec.containers{kubernetes-dashboard} Normal Pulling pulling image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.2"
3m 2m 2 {kubelet nod1} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
2m 2m 1 {kubelet nod1} spec.containers{kubernetes-dashboard} Normal Pulled Successfully pulled image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.2"
2m 2m 1 {kubelet nod1} spec.containers{kubernetes-dashboard} Normal Created Created container with docker id d36cff522129; Security:[seccomp=unconfined]
2m 2m 1 {kubelet nod1} spec.containers{kubernetes-dashboard} Normal Started Started container with docker id d36cff522129
[root@k8s-mst ~]# kubectl describe service kubernetes-dashboard --namespace="kube-system"
Name: kubernetes-dashboard
Namespace: kube-system
Labels: app=kubernetes-dashboard
Selector: app=kubernetes-dashboard
Type: NodePort
IP: 10.254.196.154
External IPs: 192.168.3.87
Port: <unset> 80/TCP
NodePort: <unset> 31437/TCP
Endpoints: 172.18.0.2:9090
Session Affinity: None
Once everything is up, you can use the dashboard by pointing a browser at 192.168.3.89:31437 (a Node IP plus the NodePort shown above).