
centos7: installing a Java runtime with yum, and a first look at hadoop

Install Java; give Hadoop a trial run in standalone mode

Installing the Java runtime environment

1. Test machine details:
[root@node2 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@node2 ~]# uname -r
3.10.0-327.el7.x86_64
2. Configure the EPEL repository, then install OpenJDK with yum
yum search java | grep -i JDK
yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel
3. Set the JAVA_HOME environment variable
[root@node2 ~]# cat /etc/profile.d/java_home.sh

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64
export PATH=$PATH:$JAVA_HOME/bin
Apply the configuration:
source /etc/profile.d/java_home.sh  or  . /etc/profile.d/java_home.sh
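Hard-coding the full RPM version string in JAVA_HOME breaks on every minor JDK update. As an alternative sketch (the `resolve_home` helper is a hypothetical name, assuming GNU `readlink -f` as shipped on CentOS 7), the home directory can be derived from whichever `java` is on the PATH:

```shell
# Sketch: derive JAVA_HOME from the java binary on PATH instead of
# hard-coding the versioned RPM directory.
resolve_home() {
    # resolve symlink chains (e.g. /usr/bin/java -> /etc/alternatives/java)
    # and strip the trailing /bin/<binary> to get the install root
    dirname "$(dirname "$(readlink -f "$(command -v "$1")")")"
}

# usage (would go in /etc/profile.d/java_home.sh):
#   export JAVA_HOME="$(resolve_home java)"
```

This survives `yum update java-1.8.0-openjdk` because it follows the `alternatives` symlinks at login time rather than pinning one version.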
4. Verify that Java is installed and configured correctly
[root@node2 ~]# java -version
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)

5. Write a small Java program, compile it, and print hello world

[root@node2 ~]# cat helloworld.java
public class helloworld {
        public static void main(String[] args){
                System.out.println("hello world!");
        }
}

[root@node2 ~]# javac helloworld.java   # compiling produces the class file helloworld.class
[root@node2 ~]# java helloworld         # run it

hello world!

  1. How do you run .jar or .war Java applications?
    An executable jar runs with:
    java -jar /path/to/*.jar [arg1] [arg2]
    A .war, by contrast, is normally deployed to a servlet container such as Tomcat rather than launched with java -jar.
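`java -jar` only works when the archive's manifest names an entry point. For the helloworld class above, the META-INF/MANIFEST.MF inside such a jar would contain something like the fragment below (a sketch; `jar cfe hello.jar helloworld helloworld.class` generates an equivalent manifest automatically):

```text
Manifest-Version: 1.0
Main-Class: helloworld
```

Without a Main-Class attribute, `java -jar` fails with "no main manifest attribute", which is why plain library jars cannot be run this way.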

#############################################################################

Next, a first look at Hadoop. Official site: http://hadoop.apache.org/

What is Apache Hadoop?
The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing.
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models.
It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failure.

Running Hadoop in standalone mode

Download the binary tarball from the official site, extract it under /usr/local, create a symlink named hadoop in the same directory, add its bin and sbin directories to PATH, and apply the change:

[jerry@node2 ~]$ cat /etc/profile.d/hadoop.sh 
export PATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin
[root@node2 ~]# hadoop
Usage: hadoop [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
 or    hadoop [OPTIONS] CLASSNAME [CLASSNAME OPTIONS]
  where CLASSNAME is a user-provided Java class

  OPTIONS is none or any of:

--buildpaths                     attempt to add class files from build tree
--config dir                     Hadoop config directory
--debug                          turn on shell script debug mode
--help                           usage information
--hostnames list[,of,host,names] hosts to use in slave mode
--hosts filename                 list of hosts to use in slave mode
--loglevel level                 set the log4j level for this command
--workers                        turn on worker mode

  SUBCOMMAND is one of:

    Admin Commands:

daemonlog     get/set the log level for each daemon

    Client Commands:

archive       create a Hadoop archive
checknative   check native Hadoop and compression libraries availability
classpath     prints the class path needed to get the Hadoop jar and the required libraries
conftest      validate configuration XML files
credential    interact with credential providers
distch        distributed metadata changer
distcp        copy file or directories recursively
dtutil        operations related to delegation tokens
envvars       display computed Hadoop environment variables
fs            run a generic filesystem user client
gridmix       submit a mix of synthetic job, modeling a profiled from production load
jar <jar>     run a jar file. NOTE: please use "yarn jar" to launch YARN applications, not this command.
jnipath       prints the java.library.path
kdiag         Diagnose Kerberos Problems
kerbname      show auth_to_local principal conversion
key           manage keys via the KeyProvider
rumenfolder   scale a rumen input trace
rumentrace    convert logs into a rumen trace
s3guard       manage metadata on S3
trace         view and modify Hadoop tracing settings
version       print the version

    Daemon Commands:

kms           run KMS, the Key Management Server

SUBCOMMAND may print help when invoked w/o parameters or with -h.

By default Hadoop runs in non-distributed mode as a single Java process, which is convenient for debugging. You can run the bundled grep example to get a feel for it: it takes the files in the input folder as input, counts the occurrences of words matching the regular expression wo[a-z.]+, and writes the result to the output folder.
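What the grep example computes can be mimicked on a small input with ordinary shell tools, which is a handy way to sanity-check the regular expression before submitting the job (plain grep/sort/uniq here, not Hadoop; the sample text is made up):

```shell
# Count occurrences of words matching wo[a-z.]+, the same tally
# the Hadoop grep example produces for its input directory.
printf 'hello work\nwork hard\nwonder\n' \
    | grep -oE 'wo[a-z.]+' \
    | sort | uniq -c | sort -rn
# prints "2 work" on the first line, then "1 wonder"
```

The extended regex syntax (`grep -E`) matches what the Hadoop example expects, so a pattern validated this way can be passed to the job unchanged.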

To run the job again, delete the output folder first (Hadoop will not overwrite existing result files):

  # cd /usr/local/hadoop/
  # mkdir input
  # cp etc/hadoop/*.xml   input
  # bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar grep input output 'wo[a-z.]+'
  # cat output/*
         1  work
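Because the job aborts when the output directory already exists, repeated experiments are easier with a small wrapper that clears stale results first. A hedged sketch (`rerun_grep` is a hypothetical helper name; the jar path is the one used above, and the hadoop command is only invoked when you call the function):

```shell
# Sketch: remove previous results, then resubmit the grep example.
# Assumes the current directory is the Hadoop install root.
rerun_grep() {
    rm -rf output   # Hadoop refuses to overwrite an existing 'output' dir
    hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar \
        grep input output "$1"
}

# usage: rerun_grep 'wo[a-z.]+'
```

Keeping the regex as the sole argument makes it easy to try several patterns in a row without tripping over the leftover output directory each time.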

[root@node2 /usr/local/hadoop]# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar grep /etc/passwd output 'root'
[root@node2 /usr/local/hadoop]# cat output/*
