
Building a HAWQ Data Warehouse with CentOS 7 + Ambari 2.7.0 + HDP 3.0, Part 02: Installing HDP with ambari-server

This post documents the process of installing HDP with ambari-server. Compared with installing CDH through Cloudera Manager, Ambari is noticeably harder to use ~_~: more of the process requires user intervention, or, to put it more charitably, it is more customizable.

First, before installing, run the following command on every host node to clear the yum cache and avoid installation failures caused by repo problems.

yum clean all
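If passwordless root SSH is already set up between the nodes (as in the earlier posts of this series), the cache can be cleared on all nodes in one pass from a single host. A minimal sketch, assuming the five hosts are named ep-bd01 through ep-bd05:

for h in ep-bd01 ep-bd02 ep-bd03 ep-bd04 ep-bd05; do
    # wipe cached metadata and packages, then rebuild the metadata cache
    ssh root@$h "yum clean all && yum makecache"
done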

The installation process follows below.

I. Installation steps:

1. Log in to the ambari-server management console by pointing a browser at http://ep-bd01:8080; the default username and password are both admin.
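Before opening the browser it can be worth confirming that the server is actually up. A quick sanity check on the ambari-server host, assuming the default admin/admin credentials and port 8080:

# check the ambari-server daemon
ambari-server status

# hit the REST API with the default credentials; an HTTP 200 response
# (with an empty cluster list at this stage) means the server is reachable
curl -i -u admin:admin http://ep-bd01:8080/api/v1/clusters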

2. Click the "LAUNCH INSTALL WIZARD" button, give the cluster a name (EPBD here), and go to the next step.

4. Select HDP version 3.0.0.0 and configure the repo addresses.

In this step Ambari automatically lists the repo IDs of the HDP version configured in the local repository.

Next come the repository settings. Choose a local repository here, delete every operating system entry other than "Redhat7", and use as the repository base URLs the values configured earlier in hdp-local.repo:

http://ep-bd01/hdp/HDP/centos7/3.0.0.0-1634, http://ep-bd01/hdp/HDP-GPL/centos7/3.0.0.0-1634, and http://ep-bd01/hdp/HDP-UTILS/centos7/1.1.0.22

Then check "Use RedHat Satellite/Spacewalk". At this point the repository names can be edited; make sure they stay consistent with the hdp.repo file configured earlier, then click Next.
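For reference, an hdp-local.repo along the following lines would match the three base URLs above. This is only a sketch of the file set up in the previous post; the section names here are illustrative, and what matters is that the names shown in the wizard match whatever your actual repo file uses:

# /etc/yum.repos.d/hdp-local.repo on every node (sketch; section names are examples)
cat > /etc/yum.repos.d/hdp-local.repo <<'EOF'
[HDP-3.0.0.0]
name=HDP-3.0.0.0
baseurl=http://ep-bd01/hdp/HDP/centos7/3.0.0.0-1634
enabled=1
gpgcheck=0

[HDP-GPL-3.0.0.0]
name=HDP-GPL-3.0.0.0
baseurl=http://ep-bd01/hdp/HDP-GPL/centos7/3.0.0.0-1634
enabled=1
gpgcheck=0

[HDP-UTILS-1.1.0.22]
name=HDP-UTILS-1.1.0.22
baseurl=http://ep-bd01/hdp/HDP-UTILS/centos7/1.1.0.22
enabled=1
gpgcheck=0
EOF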

5. Under [Target Hosts], fill in the list of hosts in the cluster. Hosts can be written using square brackets with a range of numeric suffixes; click "Pattern Expressions" for details on the syntax.

For host registration you can choose the SSH method, which requires providing the private key used for passwordless SSH access;

or you can choose "Perform manual registration on hosts and do not use SSH", which requires ambari-agent to be installed on every host in advance, as I did in the previous post, so this is the method I chose. In a side-by-side test, registering hosts over SSH was slightly faster.
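For the manual route, the agent has to be installed, pointed at the ambari-server host, and started on every node before registration. A rough sketch of those steps, assuming the Ambari repo is already configured on the nodes and ep-bd01 is the ambari-server host:

# on every host node
yum install -y ambari-agent

# point the agent at the ambari-server host
# (the hostname= setting lives in the [server] section of the agent config)
sed -i 's/^hostname=.*/hostname=ep-bd01/' /etc/ambari-agent/conf/ambari-agent.ini

# start the agent and have it come back after a reboot
ambari-agent start
systemctl enable ambari-agent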

Proceed to the next step, "REGISTER AND CONFIRM". Ambari may warn that the host names are not fully qualified domain names (FQDNs); this can be ignored, just continue.
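The FQDN warning goes away if each node has a fully qualified hostname that every other node can resolve. A hedged sketch, using a made-up domain and an example IP purely for illustration:

# give the node a fully qualified name (domain is an example, adjust to your own)
hostnamectl set-hostname ep-bd01.epbd.local

# make sure every node resolves every other node's FQDN, e.g. via /etc/hosts
echo "192.168.58.11  ep-bd01.epbd.local  ep-bd01" >> /etc/hosts   # example IP

# verify
hostname -f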

7. Next comes choosing the filesystem and services; accept the defaults here and click Next.

Note: after being beaten down by countless failures, I ended up deselecting the Ranger and Ranger KMS services; I never found the cause. Here is the log from one of the failures:

stderr: 
2018-08-17 12:04:22,639 - The 'ranger-kms' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (3.0.0.0-1634). This is the version that will be reported.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms_server.py", line 137, in <module>
    KmsServer().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms_server.py", line 52, in install
    self.configure(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms_server.py", line 94, in configure
    kms.kms()
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms.py", line 150, in kms
    create_parents = True
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 177, in action_create
    raise Fail("Applying %s failed, looped symbolic links found while resolving %s" % (self.resource, path))
resource_management.core.exceptions.Fail: Applying Directory['/usr/hdp/current/ranger-kms/conf'] failed, looped symbolic links found while resolving /usr/hdp/current/ranger-kms/conf
 stdout:
2018-08-17 12:04:22,355 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=None -> 3.0
2018-08-17 12:04:22,358 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-08-17 12:04:22,359 - Group['kms'] {}
2018-08-17 12:04:22,360 - Group['livy'] {}
2018-08-17 12:04:22,360 - Group['spark'] {}
2018-08-17 12:04:22,360 - Group['ranger'] {}
2018-08-17 12:04:22,360 - Group['hdfs'] {}
2018-08-17 12:04:22,360 - Group['zeppelin'] {}
2018-08-17 12:04:22,360 - Group['hadoop'] {}
2018-08-17 12:04:22,361 - Group['users'] {}
2018-08-17 12:04:22,361 - Group['knox'] {}
2018-08-17 12:04:22,361 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-17 12:04:22,362 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-17 12:04:22,363 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-17 12:04:22,363 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-17 12:04:22,364 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-17 12:04:22,365 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2018-08-17 12:04:22,366 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-17 12:04:22,366 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-17 12:04:22,367 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None}
2018-08-17 12:04:22,368 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2018-08-17 12:04:22,368 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None}
2018-08-17 12:04:22,369 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['kms', 'hadoop'], 'uid': None}
2018-08-17 12:04:22,370 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-17 12:04:22,370 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2018-08-17 12:04:22,371 - User['druid'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-17 12:04:22,372 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2018-08-17 12:04:22,373 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2018-08-17 12:04:22,373 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-17 12:04:22,374 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2018-08-17 12:04:22,375 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-17 12:04:22,376 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-17 12:04:22,376 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-17 12:04:22,377 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-17 12:04:22,378 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'knox'], 'uid': None}
2018-08-17 12:04:22,378 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-08-17 12:04:22,379 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-08-17 12:04:22,383 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-08-17 12:04:22,383 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-08-17 12:04:22,384 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-08-17 12:04:22,385 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-08-17 12:04:22,385 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2018-08-17 12:04:22,391 - call returned (0, '1015')
2018-08-17 12:04:22,392 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1015'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-08-17 12:04:22,395 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1015'] due to not_if
2018-08-17 12:04:22,396 - Group['hdfs'] {}
2018-08-17 12:04:22,396 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2018-08-17 12:04:22,396 - FS Type: HDFS
2018-08-17 12:04:22,396 - Directory['/etc/hadoop'] {'mode': 0755}
2018-08-17 12:04:22,406 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-08-17 12:04:22,407 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-08-17 12:04:22,419 - Repository['HDP-3.0-repo-1'] {'append_to_file': False, 'base_url': 'http://ep-bd01/hdp/HDP/centos7/3.0.0.0-1634', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-08-17 12:04:22,424 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://ep-bd01/hdp/HDP/centos7/3.0.0.0-1634\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-08-17 12:04:22,424 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-08-17 12:04:22,424 - Repository['HDP-3.0-GPL-repo-1'] {'append_to_file': True, 'base_url': 'http://ep-bd01/hdp/HDP-GPL/centos7/3.0.0.0-1634', 'action': ['create'], 'components': [u'HDP-GPL', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-08-17 12:04:22,427 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://ep-bd01/hdp/HDP/centos7/3.0.0.0-1634\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-3.0-GPL-repo-1]\nname=HDP-3.0-GPL-repo-1\nbaseurl=http://ep-bd01/hdp/HDP-GPL/centos7/3.0.0.0-1634\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-08-17 12:04:22,427 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-08-17 12:04:22,444 - Repository['HDP-UTILS-1.1.0.22-repo-1'] {'append_to_file': True, 'base_url': 'http://ep-bd01/hdp/HDP-UTILS/centos7/1.1.0.22', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-08-17 12:04:22,
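The traceback above points at a symlink loop under /usr/hdp/current/ranger-kms/conf. I did not track down the root cause and simply dropped the service, but if you would rather investigate, something along these lines on the failing host should show whether the conf link and its target point back at each other (a diagnostic sketch only, not a verified fix):

# see where the conf link points
ls -l /usr/hdp/current/ranger-kms/
ls -l /usr/hdp/current/ranger-kms/conf

# try to resolve the full chain; on a loop this fails instead of printing a path
readlink -f /usr/hdp/current/ranger-kms/conf

# if /etc/ranger/kms/conf and the versioned ranger-kms directory turn out to
# point at each other, remove the bad link and recreate it toward the real
# configuration directory before retrying the install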
