
A quick start with the Spark big data compute engine

Spark quick start
The Spark framework is written in Scala and runs on the Java Virtual Machine (JVM). Client applications can be written in Python, Java, Scala, or R.
Download Spark: visit http://spark.apache.org/downloads.html
Choose a pre-built package and download it.
Extract Spark

Open a terminal, change to the directory containing the downloaded Spark archive, and extract it. For example:

```
cd ~
tar -xf spark-2.2.2-bin-hadoop2.7.tgz -C /opt/module/
cd /opt/module/spark-2.2.2-bin-hadoop2.7
ls
```

Note: in the tar command, the x flag tells tar to extract and the f flag specifies the archive file name.

Main directories of the Spark distribution

- README.md: brief instructions for getting started with Spark
- bin: a set of executables for interacting with Spark in various ways
- core, streaming, python, ...: source code of Spark's main components
- examples: Spark programs you can read and run, very helpful for learning the Spark API

Running the examples and the interactive shells

Run an example:

```
./bin/run-example SparkPi 10
```

Scala shell:

```
./bin/spark-shell --master local[2]
# The --master option sets the execution mode: local runs locally with a single thread,
# local[N] runs locally with N threads.
```

Python shell:

```
./bin/pyspark --master local[2]
```

R shell:

```
./bin/sparkR --master local[2]
```

The spark-submit script supports applications written in several languages:

```
./bin/spark-submit examples/src/main/python/pi.py 10
./bin/spark-submit examples/src/main/r/dataframe.R
...
```
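To confirm which execution mode an interactive session ended up with, you can inspect the SparkContext that the shell creates for you. A minimal check inside spark-shell; the printed values are illustrative and depend on the --master flag you passed:

```
scala> sc.master               // master URL the shell was started with
res0: String = local[2]

scala> sc.defaultParallelism   // with local[2] this is typically 2
res1: Int = 2
```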
Interactive analysis with the Spark shell

Scala

Use the spark-shell script for interactive analysis.

Basics
```
scala> val textFile = spark.read.textFile("README.md")
textFile: org.apache.spark.sql.Dataset[String] = [value: string]

scala> textFile.count() // Number of items in this Dataset
res0: Long = 126 // May be different from yours as README.md will change over time, similar to other outputs

scala> textFile.first() // First item in this Dataset
res1: String = # Apache Spark
```

Use the filter transformation to return a new Dataset that is a subset of the original:

```
scala> val linesWithSpark = textFile.filter(line => line.contains("Spark"))
linesWithSpark: org.apache.spark.sql.Dataset[String] = [value: string]
```

Transformations and actions can also be chained together:

```
scala> textFile.filter(line => line.contains("Spark")).count() // How many lines contain "Spark"?
res3: Long = 15
```

Advanced

Use Dataset transformations and actions to find the line with the most words:

```
scala> textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b)
res4: Long = 15
```

Count how many times each word occurs:

```
scala> val wordCounts = textFile.flatMap(line => line.split(" ")).groupByKey(identity).count()
wordCounts: org.apache.spark.sql.Dataset[(String, Long)] = [value: string, count(1): bigint]

scala> wordCounts.collect()
res6: Array[(String, Long)] = Array((means,1), (under,2), (this,3), (Because,1), (Python,2), (agree,1), (cluster.,1), ...)
```
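The standalone applications later in this article call .cache() on their Datasets; the same in-memory caching can be tried from the shell. A minimal sketch reusing the linesWithSpark Dataset defined above (the res numbers and counts are illustrative):

```
scala> linesWithSpark.cache()   // mark the Dataset to be kept in memory once it is first computed
res7: linesWithSpark.type = [value: string]

scala> linesWithSpark.count()   // first action computes and caches the data
res8: Long = 15

scala> linesWithSpark.count()   // later actions reuse the cached data
res9: Long = 15
```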
Python
Use the pyspark script for interactive analysis.

Basics:

```
>>> textFile = spark.read.text("README.md")

>>> textFile.count()  # Number of rows in this DataFrame
126

>>> textFile.first()  # First row in this DataFrame
Row(value=u'# Apache Spark')
```

Use filter to select a subset of the rows:

```
>>> linesWithSpark = textFile.filter(textFile.value.contains("Spark"))
```

Transformations and actions can also be chained together:

```
>>> textFile.filter(textFile.value.contains("Spark")).count()  # How many lines contain "Spark"?
15
```

Advanced: find the line with the most words:

```
>>> from pyspark.sql.functions import *
>>> textFile.select(size(split(textFile.value, "\s+")).name("numWords")).agg(max(col("numWords"))).collect()
[Row(max(numWords)=15)]
```

Count how many times each word occurs:

```
>>> wordCounts = textFile.select(explode(split(textFile.value, "\s+")).alias("word")).groupBy("word").count()
>>> wordCounts.collect()
[Row(word=u'online', count=1), Row(word=u'graphs', count=1), ...]
```
Standalone applications
Besides interactive use, Spark can also be linked into standalone programs written in Java, Scala, or Python.

The main difference between a standalone application and the shell is that the application must initialize its own SparkSession.

Scala: count the lines containing the word "a" and the word "b", respectively.
```
/* SimpleApp.scala */
import org.apache.spark.sql.SparkSession

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md" // Should be some file on your system
    val spark = SparkSession.builder.appName("Simple Application").getOrCreate()
    val logData = spark.read.textFile(logFile).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println(s"Lines with a: $numAs, Lines with b: $numBs")
    spark.stop()
  }
}
```
Running the application
Use spark-submit to run your application:

```
$ YOUR_SPARK_HOME/bin/spark-submit \
  --class "SimpleApp" \
  --master local[4] \
  target/scala-2.11/simple-project_2.11-1.0.jar
...
Lines with a: 46, Lines with b: 23
```
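The jar path above implies the application was packaged with sbt. Below is a minimal build.sbt sketch that would produce a jar with that name; the project name, Scala version, and Spark version are assumptions based on the download step and the scala-2.11 path, not taken from the original article:

```
// build.sbt -- minimal sbt build for the SimpleApp example (illustrative)
name := "Simple Project"   // sbt derives target/scala-2.11/simple-project_2.11-1.0.jar from this
version := "1.0"
scalaVersion := "2.11.8"   // assumed Scala 2.11 release matching the jar path

// spark-sql provides SparkSession and the Dataset API used by SimpleApp.scala
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.2.2"
```

With this layout, running sbt package should produce the jar that the spark-submit command above references.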
Java: count the lines containing the word "a" and the word "b", respectively.
```
/* SimpleApp.java */
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;

public class SimpleApp {
  public static void main(String[] args) {
    String logFile = "YOUR_SPARK_HOME/README.md"; // Should be some file on your system
    SparkSession spark = SparkSession.builder().appName("Simple Application").getOrCreate();
    Dataset<String> logData = spark.read().textFile(logFile).cache();

    long numAs = logData.filter(s -> s.contains("a")).count();
    long numBs = logData.filter(s -> s.contains("b")).count();

    System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);

    spark.stop();
  }
}
```

**Running the application**

Use spark-submit to run your application:

```
$ YOUR_SPARK_HOME/bin/spark-submit \
  --class "SimpleApp" \
  --master local[4] \
  target/simple-project-1.0.jar
...
Lines with a: 46, Lines with b: 23
```

Python: count the lines containing the word "a" and the word "b", respectively.

Add pyspark to the install_requires section of your setup.py script:

```
install_requires=[
    'pyspark=={site.SPARK_VERSION}'
]
```

```
"""SimpleApp.py"""
from pyspark.sql import SparkSession

logFile = "YOUR_SPARK_HOME/README.md"  # Should be some file on your system
spark = SparkSession.builder.appName("Simple Application").getOrCreate()
logData = spark.read.text(logFile).cache()

numAs = logData.filter(logData.value.contains('a')).count()
numBs = logData.filter(logData.value.contains('b')).count()

print("Lines with a: %i, lines with b: %i" % (numAs, numBs))

spark.stop()
```
Running the application

```
$ YOUR_SPARK_HOME/bin/spark-submit \
  --master local[4] \
  SimpleApp.py
...
Lines with a: 46, Lines with b: 23
```
Source: https://www.itjmd.com/news/show-4240.html