Integrating Hadoop + Spark + Maven on Windows

1. Download apache-maven-3.5.0-bin.tar and set the MAVEN_HOME environment variable.

2. Download hadoop-2.6.0.tar and set the HADOOP_HOME environment variable.
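
Before moving on, it can help to confirm that both variables are actually visible to the JVM. The class below is only a quick check written for this walkthrough (EnvCheck is a hypothetical name, not part of the project):

    public class EnvCheck {
        public static void main(String[] args) {
            // print MAVEN_HOME and HADOOP_HOME as the JVM sees them
            for (String name : new String[] {"MAVEN_HOME", "HADOOP_HOME"}) {
                String value = System.getenv(name);
                System.out.println(name + " = " + (value == null ? "<not set>" : value));
            }
        }
    }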

3. POM configuration. Add the following dependencies to pom.xml.

    <dependency> <!-- Spark SQL dependency -->
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.11</artifactId>
      <version>2.2.0</version>
    </dependency>

    <dependency> <!-- Spark core dependency -->
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.11</artifactId>
      <version>2.2.0</version>
      <scope>provided</scope>
    </dependency>

    <dependency> <!-- Hadoop client dependency -->
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.6.0</version>
    </dependency>

    <dependency>
      <groupId>com.google.collections</groupId>
      <artifactId>google-collections</artifactId>
      <version>1.0</version>
    </dependency>
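
These dependencies all sit inside the <dependencies> element of pom.xml. Below is a minimal skeleton showing where they go; the project coordinates (com.example / spark-demo) are placeholders, and the compiler properties are an assumption added because the SimpleApp code in step 4 uses Java 8 lambdas.

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>        <!-- placeholder -->
      <artifactId>spark-demo</artifactId>   <!-- placeholder -->
      <version>1.0-SNAPSHOT</version>

      <properties>
        <!-- the lambda syntax in step 4 requires at least Java 8 -->
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
      </properties>

      <dependencies>
        <!-- the four dependencies listed above go here -->
      </dependencies>
    </project>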

4. Create the SimpleApp class.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SimpleApp {
    public static void main(String[] args) {
        // "local" means Spark runs against a local, in-process master
        SparkConf conf = new SparkConf().setAppName("app demo").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // test.txt is placed in the project root directory
        JavaRDD<String> lines = sc.textFile("test.txt");
        JavaRDD<Integer> lineLengths = lines.map(s -> s.length());
        int totalLength = lineLengths.reduce((a, b) -> a + b);

        System.out.println("length:" + totalLength);

        sc.close();
    }
}
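
Since the POM also declares spark-sql_2.11, the same line-length total can be computed with the newer SparkSession/Dataset API. The sketch below is an optional alternative under the same assumptions (test.txt in the project root); SimpleSqlApp is a hypothetical class name, not part of the original walkthrough.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.length;
    import static org.apache.spark.sql.functions.sum;

    public class SimpleSqlApp {
        public static void main(String[] args) {
            // SparkSession is the entry point of the spark-sql module; "local" again means an in-process master
            SparkSession spark = SparkSession.builder()
                    .appName("app demo sql")
                    .master("local")
                    .getOrCreate();

            // read test.txt into a Dataset of lines (single "value" column)
            Dataset<String> lines = spark.read().textFile("test.txt");

            // sum the length of every line; sum() yields a bigint, read back as a long
            Row result = lines.select(sum(length(col("value")))).first();
            System.out.println("length:" + result.getLong(0));

            spark.stop();
        }
    }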


5. Run the Maven command eclipse:clean eclipse:eclipse to generate the Eclipse project files and set up the Maven project.

6. Run SimpleApp.
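
For example, if test.txt contains the two lines hello and spark (a hypothetical input file, used only for illustration), the line lengths 5 and 5 sum to 10 and the console prints length:10.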