Setting up a Hadoop cluster with docker-compose
Download the Docker images
First, pull the five Docker images that will be used:
docker pull bde2020/hadoop-namenode:1.1.0-hadoop2.7.1-java8
docker pull bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
docker pull bde2020/hadoop-resourcemanager:1.1.0-hadoop2.7.1-java8
docker pull bde2020/hadoop-historyserver:1.1.0-hadoop2.7.1-java8
docker pull bde2020/hadoop-nodemanager:1.1.0-hadoop2.7.1-java8
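To confirm that all five images were pulled, you can list them with a quick filter (a minimal check, assuming a standard Docker CLI):
# Show the bde2020 Hadoop images and their tags
docker images | grep bde2020/hadoop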
Configure the Hadoop parameters
Create a hadoop.env file with the following content:
CORE_CONF_fs_defaultFS=hdfs://namenode:8020
CORE_CONF_hadoop_http_staticuser_user=root
CORE_CONF_hadoop_proxyuser_hue_hosts=*
CORE_CONF_hadoop_proxyuser_hue_groups=*
HDFS_CONF_dfs_webhdfs_enabled=true
HDFS_CONF_dfs_permissions_enabled=false
YARN_CONF_yarn_log___aggregation___enable=true
YARN_CONF_yarn_resourcemanager_recovery_enabled=true
YARN_CONF_yarn_resourcemanager_store_class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
YARN_CONF_yarn_resourcemanager_fs_state___store_uri=/rmstate
YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
YARN_CONF_yarn_log_server_url=http://historyserver:8188/applicationhistory/logs/
YARN_CONF_yarn_timeline___service_enabled=true
YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
YARN_CONF_yarn_resourcemanager_system___metrics___publisher_enabled=true
YARN_CONF_yarn_resourcemanager_hostname=resourcemanager
YARN_CONF_yarn_timeline___service_hostname=historyserver
YARN_CONF_yarn_resourcemanager_address=resourcemanager:8032
YARN_CONF_yarn_resourcemanager_scheduler_address=resourcemanager:8030
YARN_CONF_yarn_resourcemanager_resource___tracker_address=resourcemanager:8031
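These variables follow the naming convention of the bde2020 images: the prefix (CORE_CONF_, HDFS_CONF_, YARN_CONF_) selects the target file (core-site.xml, hdfs-site.xml, yarn-site.xml), and in the remainder a single underscore becomes a dot while a triple underscore becomes a dash, so YARN_CONF_yarn_log___aggregation___enable ends up as the property yarn.log-aggregation-enable. Once the cluster is running you can spot-check the generated configuration; the config path below is an assumption about where the image's entrypoint writes it:
# Assumes the generated config lives under /etc/hadoop inside the container
sudo docker exec namenode grep -A1 'fs.defaultFS' /etc/hadoop/core-site.xml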
Create the docker-compose file
Create a docker-compose.yml file with the following content:
version: "2"

services:
  namenode:
    image: bde2020/hadoop-namenode:1.1.0-hadoop2.7.1-java8
    container_name: namenode
    volumes:
      - hadoop_namenode:/hadoop/dfs/name
    environment:
      - CLUSTER_NAME=test
    env_file:
      - ./hadoop.env

  resourcemanager:
    image: bde2020/hadoop-resourcemanager:1.1.0-hadoop2.7.1-java8
    container_name: resourcemanager
    depends_on:
      - namenode
      - datanode1
      - datanode2
      - datanode3
    env_file:
      - ./hadoop.env

  historyserver:
    image: bde2020/hadoop-historyserver:1.1.0-hadoop2.7.1-java8
    container_name: historyserver
    depends_on:
      - namenode
      - datanode1
      - datanode2
      - datanode3
    volumes:
      - hadoop_historyserver:/hadoop/yarn/timeline
    env_file:
      - ./hadoop.env

  nodemanager1:
    image: bde2020/hadoop-nodemanager:1.1.0-hadoop2.7.1-java8
    container_name: nodemanager1
    depends_on:
      - namenode
      - datanode1
      - datanode2
      - datanode3
    env_file:
      - ./hadoop.env

  datanode1:
    image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
    container_name: datanode1
    depends_on:
      - namenode
    volumes:
      - hadoop_datanode1:/hadoop/dfs/data
    env_file:
      - ./hadoop.env

  datanode2:
    image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
    container_name: datanode2
    depends_on:
      - namenode
    volumes:
      - hadoop_datanode2:/hadoop/dfs/data
    env_file:
      - ./hadoop.env

  datanode3:
    image: bde2020/hadoop-datanode:1.1.0-hadoop2.7.1-java8
    container_name: datanode3
    depends_on:
      - namenode
    volumes:
      - hadoop_datanode3:/hadoop/dfs/data
    env_file:
      - ./hadoop.env

volumes:
  hadoop_namenode:
  hadoop_datanode1:
  hadoop_datanode2:
  hadoop_datanode3:
  hadoop_historyserver:
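Because YAML is indentation-sensitive, it is worth validating the file before starting anything; docker-compose can parse it and print the resolved configuration:
# Parse docker-compose.yml and print the resolved configuration (syntax errors are reported here)
sudo docker-compose config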
Create and start the Hadoop cluster
sudo docker-compose up
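Running docker-compose up in the foreground ties the cluster to your terminal; if you prefer to run it in the background, a common alternative is detached mode:
# Start the cluster in the background, then follow the namenode's logs
sudo docker-compose up -d
sudo docker-compose logs -f namenode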
Once the Hadoop cluster is running, the following commands show information about its containers.
# List the containers in the cluster and the ports they expose
sudo docker-compose ps
Name              Command                  State   Ports
---------------------------------------------------------------
datanode1         /entrypoint.sh /run.sh   Up      50075/tcp
datanode2         /entrypoint.sh /run.sh   Up      50075/tcp
datanode3         /entrypoint.sh /run.sh   Up      50075/tcp
historyserver     /entrypoint.sh /run.sh   Up      8188/tcp
namenode          /entrypoint.sh /run.sh   Up      50070/tcp
nodemanager1      /entrypoint.sh /run.sh   Up      8042/tcp
resourcemanager   /entrypoint.sh /run.sh   Up      8088/tcp
# Look up the namenode's IP address
sudo docker inspect namenode | grep IPAddress
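If you only need the address itself, for example in a script, docker inspect also accepts a Go template; this sketch assumes the containers are attached to a single (default) compose network:
# Print just the namenode's IP address
sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' namenode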
You can also check the cluster status at http://<namenode-ip>:50070.
Submit a job
To submit a job, we first need to log in to one of the cluster's nodes; here we log in to the namenode.
sudo docker exec -it namenode /bin/bash
Prepare the data and submit the job
cd /opt/hadoop-2.7.1
# Create the user directory
hdfs dfs -mkdir /user
hdfs dfs -mkdir /user/root
# Prepare the input data
hdfs dfs -mkdir input
hdfs dfs -put etc/hadoop/*.xml input
# Submit the job
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar grep input output 'dfs[a-z.]+'
# View the job output
hdfs dfs -cat output/*
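You can also list the output directory first to see the part files the job produced:
# List the files written by the example job
hdfs dfs -ls output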
Clean up the data
hdfs dfs -rm input/*
hdfs dfs -rmdir input/
hdfs dfs -rm output/*
hdfs dfs -rmdir output/
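On Hadoop 2.7 the same cleanup can be done in one step with a recursive delete, which also works when the directories are not empty:
# Recursively delete the input and output directories in one command
hdfs dfs -rm -r input output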
Stop the cluster
The cluster can be stopped with CTRL+C, or with “sudo docker-compose stop”.
Stopping the cluster does not delete the containers it created; “sudo docker-compose rm” removes the stopped containers, and “sudo docker-compose down” stops and removes them in one step.
After the containers are removed, “sudo docker volume ls” shows the volumes the cluster used, and they can be deleted with “sudo docker volume rm <volume name>”.
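As a sketch, the volumes can also be removed explicitly; note that docker-compose usually prefixes volume names with the project name (by default the directory holding docker-compose.yml), so check “sudo docker volume ls” for the exact names:
# Remove containers, network and named volumes in one step
sudo docker-compose down -v
# Or remove individual volumes (adjust the project prefix to what docker volume ls shows)
sudo docker volume rm hadoop_namenode hadoop_datanode1 hadoop_datanode2 hadoop_datanode3 hadoop_historyserver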