
Big Data Environment Setup (3) - cdh5.11.1 - Hue Installation


1. Introduction

Hue is an open-source Apache Hadoop UI system that evolved from Cloudera Desktop. Cloudera later contributed it to the Hadoop community of the Apache Software Foundation. It is built on the Python web framework Django.

With Hue, we can interact with a Hadoop cluster through a visual interface in a web browser to analyze and process data, for example browsing data on HDFS, running MapReduce jobs, and viewing data in HBase.

2. Installation

(1) Download

http://archive.cloudera.com/cdh5/cdh/5/

Download the Hue build that ships with CDH 5.11.1 (version 3.9.0) from this page, upload it to the server, and extract it into the app directory, as sketched below.
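The exact download and unpacking commands are not shown in the original; a minimal sketch, assuming the tarball name hue-3.9.0-cdh5.11.1.tar.gz and the /home/hadoop/app base directory used for the other components in this guide:

# download the CDH 5.11.1 build of Hue and unpack it into the app directory
cd /home/hadoop/app
wget http://archive.cloudera.com/cdh5/cdh/5/hue-3.9.0-cdh5.11.1.tar.gz
tar -zxvf hue-3.9.0-cdh5.11.1.tar.gz
# rename so that the /home/hadoop/app/hue paths used later in this guide match
mv hue-3.9.0-cdh5.11.1 hue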

(2) Required components

MySQL needs to be installed first (a sketch of pointing Hue at it follows the package list below).

The following packages also need to be installed:

sudo yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi gcc gcc-c++ krb5-devel libtidy libxml2-devel libxslt-devel openldap-devel python-devel python-simplejson sqlite-devel gmp-devel -y
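The original never shows how Hue actually uses this MySQL instance. As a hedged sketch only, the [[database]] block under [desktop] in hue.ini (the file edited in step (4) below) can point Hue's own metadata store at MySQL instead of the default SQLite file; the host, database name, user and password here are hypothetical:

[desktop]
  [[database]]
    # assumed values: adjust to an existing MySQL database and account
    engine=mysql
    host=hadoop001
    port=3306
    user=hue
    password=hue_password
    name=hue

With Hue 3.x the tables are then typically created after the build in step (3), e.g. with build/env/bin/hue syncdb --noinput followed by build/env/bin/hue migrate.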

(3) Build

Go to the Hue root directory and run

make apps
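A quick way to confirm the build succeeded is to check that the virtualenv and launcher scripts referenced later in step (5) were generated:

# after a successful "make apps" the supervisor used in step (5) exists here
ls /home/hadoop/app/hue/build/env/bin/
# expect to see, among others, the hue and supervisor scripts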

(4) Configuration

Basic configuration: open the desktop/conf/hue.ini file.

[desktop]

  # Set this to a random string, the longer the better.
  # This is used for secure hashing in the session store.
  secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o

  # Webserver listens on this address and port
  http_host=hadoop001
  http_port=8888

  # Time zone name
  time_zone=Asia/Shanghai

  # Enable or disable Django debug mode.
  django_debug_mode=false

  # Enable or disable backtrace for server error
  http_500_debug_mode=false

  # Enable or disable memory profiling.
  ## memory_profiler=false

  # Server email for internal error messages
  ## django_server_email='[email protected]'

  # Email backend
  ## django_email_backend=django.core.mail.backends.smtp.EmailBackend

  # Webserver runs as this user
  server_user=hue
  server_group=hue

  # This should be the Hue admin and proxy user
  ## default_user=hue

  # This should be the hadoop cluster admin
  #default_hdfs_superuser=hadoop

Configure Hue to integrate with Hadoop

First, set up the proxy user on the Hadoop side by adding the following to Hadoop's core-site.xml:

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

Adding these two properties is enough.
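On a multi-node cluster the modified core-site.xml has to be present on every node before the restart below; a sketch, assuming the same Hadoop install path on each node (the host list is illustrative):

# copy the updated core-site.xml to the other nodes
for host in hadoop002 hadoop004; do
  scp /home/hadoop/app/hadoop/etc/hadoop/core-site.xml \
      $host:/home/hadoop/app/hadoop/etc/hadoop/
done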

Then restart the Hadoop cluster:

sbin/stop-dfs.sh

sbin/stop-yarn.sh

sbin/start-dfs.sh

sbin/start-yarn.sh
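Before wiring Hue up, it can help to confirm that WebHDFS answers at the address hue.ini will point to below (hadoop001:50070); an optional check using the standard WebHDFS REST API:

# list the HDFS root through WebHDFS as the hue user; a JSON FileStatuses
# response means the webhdfs_url configured below should work
curl "http://hadoop001:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hue"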

Next, configure the Hadoop integration on the Hue side in hue.ini:

[hadoop]

  # Configuration for HDFS NameNode
  # ------------------------------------------------------------------------
  [[hdfs_clusters]]
    # HA support by using HttpFs

    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://hadoop001:8020

      # NameNode logical name.
      ## logical_name=

      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      # Default port is 14000 for HttpFs.
      webhdfs_url=http://hadoop001:50070/webhdfs/v1

      # Change this if your HDFS cluster is Kerberos-secured
      ## security_enabled=false

      # Default umask for file and directory creation, specified in an octal value.
      ## umask=022

      # Directory of the Hadoop configuration
      hadoop_conf_dir=/home/hadoop/app/hadoop/etc/hadoop

  # Configuration for YARN (MR2)
  # ------------------------------------------------------------------------
  [[yarn_clusters]]

    [[[default]]]
      # Enter the host on which you are running the ResourceManager
      resourcemanager_host=hadoop002

      # The port where the ResourceManager IPC listens on
      resourcemanager_port=8032

      # Whether to submit jobs to this cluster
      submit_to=True

      # Resource Manager logical name (required for HA)
      ## logical_name=

      # Change this if your YARN cluster is Kerberos-secured
      ## security_enabled=false

      # URL of the ResourceManager API
      resourcemanager_api_url=http://hadoop002:8088

      # URL of the ProxyServer API
      proxy_api_url=http://hadoop002:8088

      # URL of the HistoryServer API
      history_server_api_url=http://hadoop002:19888

      # In secure mode (HTTPS), if SSL certificates from Resource Manager's
      # Rest Server have to be verified against certificate authority
      ## ssl_cert_ca_verify=False

    # HA support by specifying multiple clusters
    # e.g.

    # [[[ha]]]
      # Resource Manager logical name (required for HA)
      ## logical_name=my-rm-name

  # Configuration for MapReduce (MR1)

Configure Hue to integrate with Hive

[beeswax]

  # Host where HiveServer2 is running.
  # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
  hive_server_host=hadoop001

  # Port where HiveServer2 Thrift server runs on.
  hive_server_port=10000

  # Hive configuration directory, where hive-site.xml is located
  hive_conf_dir=/home/hadoop/app/hive/conf

  # Timeout in seconds for thrift calls to Hive service
  server_conn_timeout=120

 

(5) Start Hue

First start the Hive metastore service and the hiveserver2 service:

nohup hive --service metastore &

nohup hive --service hiveserver2 &

Then start Hue:

nohup /home/hadoop/app/hue/build/env/bin/supervisor &
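Optionally, confirm that the three services are listening before opening the web UI; a sketch, assuming the default metastore port 9083 and an illustrative hadoop login user for beeline:

# metastore (9083), HiveServer2 (10000) and Hue (8888) should all be listening
netstat -tlnp | grep -E '9083|10000|8888'
# probe HiveServer2 directly through its Thrift port
beeline -u jdbc:hive2://hadoop001:10000 -n hadoop -e "show databases;"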

(6) Access Hue

http://hadoop001:8888 (the host and port set by http_host and http_port above)

 

A problem you may run into:

Failed to contact an active Resource Manager: YARN RM returned a failed response: { "RemoteException" : { "message" : "User: hue is not allowed to impersonate admin", "exception" : "AuthorizationException", "javaClassName" : "org.apache.hadoop.security.authorize.AuthorizationException" } } (error 403)

This error is caused by a mismatch between the proxy user configured in Hadoop's core-site.xml and the user configured in Hue's configuration file.

For example, Hadoop's core-site.xml is configured like this:

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

The proxy user here is hue,

while Hue is configured like this:

# Webserver runs as this user
#server_user=hue
#server_group=hue

Uncommenting server_user and server_group and setting both to hue resolves the error.
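After editing hue.ini, Hue has to be restarted for the new server_user/server_group to take effect; a sketch, assuming Hue was started with nohup as in step (5):

# stop the running supervisor and start it again with the new configuration
pkill -f 'build/env/bin/supervisor'
nohup /home/hadoop/app/hue/build/env/bin/supervisor &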
