
Hadoop 2.7.1 + HBase 1.2.1 Cluster Setup (1): Compiling the Hadoop 2.7.1 Source

        The binary package currently offered on the official download site was built for 32-bit systems. After installing it on 64-bit Linux you keep seeing the warning "WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable", and since the site does not provide a 64-bit package, the only option is to compile and package a 64-bit build yourself.

        How do you tell whether your Hadoop is a 32-bit or 64-bit build? My Hadoop is installed under /opt/hadoop-2.7.1/, so the file to inspect is libhadoop.so.1.0.0 under /opt/hadoop-2.7.1/lib/native, which reveals the word size of the build. Mine is 64-bit because I had already compiled Hadoop myself.
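        For example, a minimal check is to run the file command on that native library (my own sketch, assuming the install path above):

# "ELF 64-bit LSB shared object" means a 64-bit build;
# a 32-bit build would report "ELF 32-bit" instead.
file /opt/hadoop-2.7.1/lib/native/libhadoop.so.1.0.0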


 

2. The right way to build Hadoop 2.7.1 (per the official docs)

        There are plenty of articles online about building Hadoop, each with its own grab-bag of preparation steps, but very few explain why those steps are needed, so beginners can only follow along passively.

        When your operating system is 64-bit Linux but the hadoop 2.7.1 package from the official site is 32-bit, you have to consider building and packaging it yourself to get a hadoop 2.7.1 distribution for a 64-bit OS. The problem is that the many build guides online rarely tell you why each step is done.

        When you run into this problem, the most reliable reference is the official build documentation, which lives in the file BUILDING.txt in the root of the hadoop 2.7.1 source tree. Here I unpacked the hadoop 2.7.1 source to /opt/hadoop-2.7.1-src/.


        The important parts of BUILDING.txt are discussed below:

        1) Prerequisites for the build

From BUILDING.txt, Requirements:
* Unix System
* JDK 1.7+
* Maven 3.0 or later
* Findbugs 1.3.9 (if running findbugs)
* ProtocolBuffer 2.5.0
* CMake 2.6 or newer (if compiling native code), must be 3.0 or newer on Mac
* Zlib devel (if compiling native code)
* openssl devel ( if compiling native hadoop-pipes and to get the best HDFS encryption performance )
* Jansson C XML parsing library ( if compiling libwebhdfs )
* Linux FUSE (Filesystem in Userspace) version 2.6 or above ( if compiling fuse_dfs )
* Internet connection for first build (to fetch all Maven and Hadoop dependencies)
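        Before installing anything, it is worth checking which of these tools are already on the machine; the quick checks below are my own sketch (exact versions will differ per system):

java -version      # needs JDK 1.7 or newer
mvn -version       # needs Maven 3.0 or later
protoc --version   # needs libprotoc 2.5.0
cmake --version    # needs CMake 2.6 or newer (3.0+ on Mac)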

        2) Overview of the Hadoop Maven modules

  - hadoop-project           (Parent POM for all Hadoop Maven modules. All plugins & dependencies versions are defined here.)

  - hadoop-project-dist      (Parent POM for modules that generate distributions.)

  - hadoop-annotations       (Generates the Hadoop doclet used to generate the Javadocs)

  - hadoop-assemblies        (Maven assemblies used by the different modules)

  - hadoop-common-project    (Hadoop Common)

  - hadoop-hdfs-project      (Hadoop HDFS)

  - hadoop-mapreduce-project (Hadoop MapReduce)

  - hadoop-tools             (Hadoop tools like Streaming, Distcp, etc.)

  - hadoop-dist              (Hadoop distribution assembler)

        3) Where to run the Maven build from

From BUILDING.txt, Where to run Maven from?
It can be run from any module. The only catch is that if not run from utrunk
all modules that are not part of the build run must be installed in the local
Maven cache or available in a Maven repository.

        You can build a single module, or build everything from the top-level module. The only difference is that building a single module merely installs the compiled jars into the local Maven repository, whereas a top-level build installs every module into the local Maven repository and also packages the Hadoop tar.gz distribution for this machine, as shown in the sketch below.
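        For example, a single module can be built and installed into the local Maven repository like this (a sketch only; hadoop-common is just an illustration):

# Build one module and install its jar into the local Maven repository.
# The full tar.gz distribution is only produced by a top-level build.
cd /opt/hadoop-2.7.1-src/hadoop-common-project/hadoop-common
mvn install -DskipTests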

        4) About Snappy

From BUILDING.txt, Snappy build options:
Snappy is a compression library that can be utilized by the native code.
It is currently an optional component, meaning that Hadoop can be built with
or without this dependency.

* Use -Drequire.snappy to fail the build if libsnappy.so is not found.
If this option is not specified and the snappy library is missing,
we silently build a version of libhadoop.so that cannot make use of snappy.
This option is recommended if you plan on making use of snappy and want
to get more repeatable builds.

* Use -Dsnappy.prefix to specify a nonstandard location for the libsnappy
header files and library files. You do not need this option if you have
installed snappy using a package manager.
* Use -Dsnappy.lib to specify a nonstandard location for the libsnappy library
files. Similarly to snappy.prefix, you do not need this option if you have
installed snappy using a package manager.
* Use -Dbundle.snappy to copy the contents of the snappy.lib directory into
the final tar file. This option requires that -Dsnappy.lib is also given,
and it ignores the -Dsnappy.prefix option.

        Hadoop can compress stored files with specific codecs and transparently decompress them back to the original format when a client reads the data. The compression formats Hadoop currently supports include LZO and SNAPPY. SNAPPY is not supported by default; to enable it you must first install the snappy library on Linux and then build Hadoop to obtain the installation package.

        Snappy is currently the most widely used compression format, and it is also the preferred codec for HBase, the distributed column-oriented store used later in this series. Since HBase depends on a Hadoop environment, it is worth compiling Snappy support in now if you plan to run HBase with Snappy compression.
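        Putting the options above together, a build that insists on Snappy and bundles it into the tar would look roughly like the sketch below (the -Dsnappy.lib path is an assumption; point it at wherever libsnappy.so actually lives):

# Fail the build if libsnappy.so is missing and copy the snappy libraries
# into the final tar file (bundle.snappy requires snappy.lib to be set).
mvn package -Pdist,native -DskipTests -Dtar -Drequire.snappy -Dbundle.snappy -Dsnappy.lib=/usr/lib64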

        5) Choosing the build command

From BUILDING.txt ----------------------------------------------------------------------------------
Building distributions:
Create binary distribution without native code and without documentation:
$ mvn package -Pdist -DskipTests -Dtar
Create binary distribution with native code and with documentation:
$ mvn package -Pdist,native,docs -DskipTests -Dtar
Create source distribution:
$ mvn package -Psrc -DskipTests
Create source and binary distributions with native code and documentation:
$ mvn package -Pdist,native,docs,src -DskipTests -Dtar
Create a local staging version of the website (in /tmp/hadoop-site)
$ mvn clean site; mvn site:stage -DstagingDirectory=/tmp/hadoop-site
----------------------------------------------------------------------------------

        Roughly: the first command builds a binary distribution without native code or documentation; the second builds one with native code and documentation; the third produces a source distribution; the fourth produces both source and binary distributions with native code and docs; and the last stages a local copy of the project website under /tmp/hadoop-site. Since the whole point here is getting 64-bit native libraries, the build in section 4 uses the second command.

        6) Single-node and cluster installation guides

From BUILDING.txt ----------------------------------------------------------------------------------
Installing Hadoop
Look for these HTML files after you build the document by the above commands.
* Single Node Setup:
hadoop-project-dist/hadoop-common/SingleCluster.html
* Cluster Setup:
hadoop-project-dist/hadoop-common/ClusterSetup.html
----------------------------------------------------------------------------------

        7) Maven memory settings for building Hadoop

From BUILDING.txt ----------------------------------------------------------------------------------
If the build process fails with an out of memory error, you should be able to fix
it by increasing the memory used by maven -which can be done via the environment
variable MAVEN_OPTS.
Here is an example setting to allocate between 256 and 512 MB of heap space to
Maven
export MAVEN_OPTS="-Xms256m -Xmx512m"
----------------------------------------------------------------------------------

        In short: if the Maven build fails with an out-of-memory error, raise the memory available to Maven first, e.g. on Linux run export MAVEN_OPTS="-Xms256m -Xmx512m". The same trick also helps later when building the Spark source.

3. Preparing the prerequisites for building Hadoop

        1) Unix System (a Linux operating system; install it yourself, or use a virtual machine if you have no spare hardware)

        2) JDK 1.7+

#1. OpenJDK is not recommended; use the JDK from the Oracle website instead.

#2. First remove any old JDK or the OpenJDK bundled with the system
#Check the existing Java version with: java -version
#List the matching packages with: rpm -qa | grep gcj
#Then remove them, e.g.: rpm -e --nodeps java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64

#3. Install jdk-7u65-linux-x64.gz
#Download jdk-7u65-linux-x64.gz to /opt/java/jdk-7u65-linux-x64.gz and unpack it
cd /opt/java/
tar -zxvf jdk-7u65-linux-x64.gz
#Configure the system environment variables
vi /etc/profile
#Append the following at the end of the file
export JAVA_HOME=/opt/java/jdk1.7.0_65
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
#Apply the changes
source /etc/profile

#4. Verify that the JDK is configured correctly
java -version

        3) Maven 3.0 or later

#1. Download apache-maven-3.3.3.tar.gz to /opt/ and unpack it
cd /opt
tar zxvf apache-maven-3.3.3.tar.gz

#2. Configure environment variables
vi /etc/profile
#Add the following
MAVEN_HOME=/opt/apache-maven-3.3.3
export MAVEN_HOME
export PATH=${PATH}:${MAVEN_HOME}/bin

#3. Apply the changes
source /etc/profile

#4. Verify that Maven is installed correctly
mvn -version

#5. Configure the Maven repository. By default Maven downloads dependency jars and plugins from
#the central repository, which is hosted overseas; inside China a domestic mirror is much faster,
#so here I point Maven at the oschina mirror.
#If you are not familiar with Maven, read up on it first. It is best not to modify
#/opt/apache-maven-3.3.3/conf/settings.xml, because that file applies to every user; instead edit
#the settings.xml in the current user's home directory, e.g. for the hadoop user that file is
#/home/hadoop/.m2/settings.xml. Add the following content to it:
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <!-- Dependencies are downloaded to /home/hadoop/.m2/repository by default; here I store them in /opt/maven-localRepository instead -->
  <localRepository>/opt/maven-localRepository</localRepository>
  <pluginGroups></pluginGroups>
  <proxies></proxies>
  <servers></servers>
  <mirrors>
  <!--add by aperise start-->
    <mirror>
        <id>nexus-osc</id>
        <mirrorOf>*</mirrorOf>
        <name>Nexus osc</name>
        <url>http://maven.oschina.net/content/groups/public/</url>
    </mirror>
  <!--add by aperise end-->
  </mirrors>
  <profiles>
  <!--add by aperise start-->
        <profile>
            <id>jdk-1.7</id>
            <activation>
                <jdk>1.7</jdk>
            </activation>
            <repositories>
                <repository>
                    <id>nexus</id>
                    <name>local private nexus</name>
                    <url>http://maven.oschina.net/content/groups/public/</url>
                    <releases>
                        <enabled>true</enabled>
                    </releases>
                    <snapshots>
                        <enabled>false</enabled>
                    </snapshots>
                </repository>
            </repositories>
            <pluginRepositories>
                <pluginRepository>
                    <id>nexus</id>
                    <name>local private nexus</name>
                    <url>http://maven.oschina.net/content/groups/public/</url>
                    <releases>
                        <enabled>true</enabled>
                    </releases>
                    <snapshots>
                        <enabled>false</enabled>
                    </snapshots>
                </pluginRepository>
            </pluginRepositories>
        </profile>
  <!--add by aperise end-->
  </profiles>
</settings>
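        To confirm that Maven actually picks up this file, one quick check (my own suggestion, not from the original walkthrough) is to print the effective settings:

# Shows the merged settings Maven will use; the oschina mirror and the
# custom localRepository should both appear in the output.
mvn help:effective-settings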

        4) Findbugs 3.0.1 (if running findbugs)

#1. Install
tar zxvf findbugs-3.0.1.tar.gz
#2. Configure environment variables
vi /etc/profile
#Add the following:
export FINDBUGS_HOME=/opt/findbugs-3.0.1
export PATH=$PATH:$FINDBUGS_HOME/bin
#3. Apply the changes
source /etc/profile
#4. Run findbugs to check that the installation works
findbugs

        5) ProtocolBuffer 2.5.0

#1. Install (cmake is needed first, so complete prerequisite 6 before this step)
tar zxvf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure --prefix=/usr/local/protobuf
make
make check
make install
#2. Configure environment variables
vi /etc/profile
#Add the following:
export PATH=$PATH:/usr/local/protobuf/bin
export PKG_CONFIG_PATH=/usr/local/protobuf/lib/pkgconfig/
#3. Apply the changes
source /etc/profile
#4. Run protoc --version to check that the installation works
protoc --version

         6) CMake 2.6 or newer (if compiling native code), must be 3.0 or newer on Mac

#1. Prerequisites
yum install gcc-c++
yum install ncurses-devel
#2. Install
#Option 1: simply yum install cmake
#Option 2: download the tar.gz and build from source
#Download cmake-3.3.2.tar.gz, then build and install it
tar -zxv -f cmake-3.3.2.tar.gz
cd cmake-3.3.2
./bootstrap
make
make install
#3. Run cmake to check that the installation works
cmake

        7) Zlib devel (if compiling native code)

yum -y install gcc gcc-c++ make autoconf automake libtool cmake zlib-devel pkgconfig openssl-devel

        8) openssl devel (if compiling native hadoop-pipes and to get the best HDFS encryption performance)

yum install openssl-devel

        9) Jansson C XML parsing library (if compiling libwebhdfs)

        10) Linux FUSE (Filesystem in Userspace) version 2.6 or above (if compiling fuse_dfs)

        11) Internet connection for first build (to fetch all Maven and Hadoop dependencies)

        Items 9, 10 and 11 above are optional; install them only if your situation requires them.

        While setting up my environment I installed quite a few packages; at one point I also ran the following command to pull in some missing libraries:

yum -y install gcc gcc-c++ make autoconf automake libtool zlib-devel pkgconfig openssl-devel

4. Building the Hadoop source


        Go to the source root directory /opt/hadoop-2.7.1-src and run:

cd /opt/hadoop-2.7.1-src
export MAVEN_OPTS="-Xms256m -Xmx512m"
mvn package -Pdist,native,docs -DskipTests -Dtar
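        If you do not need the documentation, the docs profile can be dropped to shorten the build; this is a variation on the BUILDING.txt commands quoted earlier, not the command used for the run shown below:

# Binary distribution with native 64-bit libraries but without documentation.
mvn package -Pdist,native -DskipTests -Dtar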

        The build takes half an hour or more and downloads all the dependency jars from the public Internet, so be patient. When it finishes, the output looks like this:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Common ............................... SUCCESS [02:27 min]
[INFO] Apache Hadoop NFS .................................. SUCCESS [  4.841 s]
[INFO] Apache Hadoop KMS .................................. SUCCESS [ 15.176 s]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [  0.055 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [03:36 min]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [ 21.601 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [  4.182 s]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [  3.577 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [  0.036 s]
[INFO] hadoop-yarn ........................................ SUCCESS [  0.033 s]
[INFO] hadoop-yarn-api .................................... SUCCESS [01:53 min]
[INFO] hadoop-yarn-common ................................. SUCCESS [ 23.525 s]
[INFO] hadoop-yarn-server ................................. SUCCESS [  0.042 s]
[INFO] hadoop-yarn-server-common .......................... SUCCESS [  8.896 s]
[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [ 11.562 s]
[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [  3.324 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [  6.115 s]
[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [ 14.149 s]
[INFO] hadoop-yarn-server-tests ........................... SUCCESS [  3.887 s]
[INFO] hadoop-yarn-client ................................. SUCCESS [  5.333 s]
[INFO] hadoop-yarn-server-sharedcachemanager .............. SUCCESS [  2.249 s]
[INFO] hadoop-yarn-applications ........................... SUCCESS [  0.032 s]
[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [  1.915 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [  1.450 s]
[INFO] hadoop-yarn-site ................................... SUCCESS [  0.049 s]
[INFO] hadoop-yarn-registry ............................... SUCCESS [  4.165 s]
[INFO] hadoop-yarn-project ................................ SUCCESS [  4.168 s]
[INFO] hadoop-mapreduce-client ............................ SUCCESS [  0.077 s]
[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [ 15.869 s]
[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [ 15.401 s]
[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [  2.696 s]
[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [  5.780 s]
[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [  4.528 s]
[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [  3.592 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [  1.262 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [  3.969 s]
[INFO] hadoop-mapreduce ................................... SUCCESS [  3.829 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [  2.999 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [  7.995 s]
[INFO] Apache Hadoop Archives ............................. SUCCESS [  1.425 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [  4.508 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [  3.023 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [  1.896 s]
[INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [  1.633 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [  2.256 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [  1.738 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [  3.198 s]
[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [  8.421 s]
[INFO] Apache Hadoop Azure support ........................ SUCCESS [  2.808 s]
[INFO] Apache Hadoop Client ............................... SUCCESS [ 10.124 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [  0.097 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [  3.395 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 10.150 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [  0.035 s]
[INFO] Apache Hadoop Distribution ......................... SUCCESS [01:48 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14:12 min
[INFO] Finished at: 2015-11-04T16:08:14+08:00
[INFO] Final Memory: 139M/1077M
[INFO] ------------------------------------------------------------------------

        After the build, the Hadoop installation package can be found here:

        cd /opt/hadoop-2.7.1-src/hadoop-dist/target

        ls

        antrun                    hadoop-2.7.1.tar.gz            maven-archiver

        dist-layout-stitching.sh  hadoop-dist-2.7.1.jar          test-dir

        dist-tar-stitching.sh     hadoop-dist-2.7.1-javadoc.jar

        hadoop-2.7.1              javadoc-bundle-options
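        As a final sanity check (my own addition, reusing the file command from earlier), confirm that the freshly built native library really is 64-bit before deploying it:

# "ELF 64-bit LSB shared object" confirms the rebuilt native library is 64-bit.
file /opt/hadoop-2.7.1-src/hadoop-dist/target/hadoop-2.7.1/lib/native/libhadoop.so.1.0.0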

5. Sharing the dependency packages for the Hadoop build environment
