
【Python3 Spark Big Data Analysis and Scheduling in Practice】Spark Core Course Notes (1)

Table of Contents

Architecture

Notes

Spark Core: Advanced Spark Core

Spark Core Concepts

Application

User program built on Spark. Consists of a driver program and executors on the cluster.

A Spark-based application = 1 driver + executors

Application jar

A jar containing the user's Spark application. In some cases, users will want to create an "uber jar" containing their application along with its dependencies. The user's jar should never include Hadoop or Spark libraries, however, these will be added at runtime.

Applies to Java/Scala; Python does not have this.

Driver program

The process running the main() function of the application and creating the SparkContext
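
A minimal sketch of the Application and Driver program entries above, assuming PySpark is available: the script below is the driver program (it runs main() and creates the SparkContext), while the work triggered by the action runs in the application's executors. The app name is hypothetical, and the master URL is assumed to be supplied by spark-submit.

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("driver-sketch")   # hypothetical app name
sc = SparkContext(conf=conf)                     # created by the driver program

rdd = sc.parallelize(range(100), 4)              # data distributed across the executors
print(rdd.sum())                                 # an action: executors run the tasks, the driver receives the result

sc.stop()                                        # the application ends and its executors are released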

Cluster manager

An external service for acquiring resources on the cluster (e.g. standalone manager, Mesos, YARN)

spark-submit --master local[2]
spark-submit --master spark://hadoop000:7077
spark-submit --master yarn
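
The master can also be set in code rather than on the spark-submit command line; a minimal sketch (only one of the three URLs would be used, and hadoop000 is the example host from above; the app name is hypothetical):

from pyspark import SparkConf, SparkContext

# Choose exactly one master URL, mirroring the spark-submit examples above:
#   local[2]                 - run locally with 2 threads (no cluster manager)
#   spark://hadoop000:7077   - Spark's standalone cluster manager
#   yarn                     - Hadoop YARN
conf = SparkConf().setAppName("master-sketch").setMaster("local[2]")
sc = SparkContext(conf=conf)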

Deploy mode

Distinguishes where the driver process runs. In "cluster" mode, the framework launches the driver inside of the cluster. In "client" mode, the submitter launches the driver outside of the cluster.

Where the driver runs: client = locally (on the submitting machine), cluster = inside the cluster
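
For example, with a hypothetical application script app.py, the two modes are selected with the standard --deploy-mode flag:

spark-submit --master yarn --deploy-mode client app.py
spark-submit --master yarn --deploy-mode cluster app.py

In client mode the driver runs on the machine where spark-submit was invoked; in cluster mode the driver is launched on a node inside the cluster.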

Worker node

Any node that can run application code in the cluster

Executor

A process launched for an application on a worker node, that runs tasks and keeps data in memory or disk storage across them. Each application has its own executors.
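
A hedged sketch of how an application requests resources for its executors; the values are purely illustrative, and spark.executor.instances applies when running on YARN:

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("executor-sketch")            # hypothetical app name
        .set("spark.executor.memory", "2g")       # memory per executor process
        .set("spark.executor.cores", "2")         # task threads per executor
        .set("spark.executor.instances", "4"))    # number of executors (YARN)
sc = SparkContext(conf=conf)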

Task

A unit of work that will be sent to one executor

Job

A parallel computation consisting of multiple tasks that gets spawned in response to a Spark action (e.g. save, collect); you'll see this term used in the driver's logs.

Stage

Each job gets divided into smaller sets of tasks called stages that depend on each other (similar to the map and reduce stages in MapReduce); you'll see this term used in the driver's logs.

Relationship between job, stage, and task

1 action = 1 job = x stages; 1 stage = n tasks (1 task runs on 1 worker node); tasks are threads <=> executors are processes
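
A small sketch of how this plays out, using a hypothetical word count in local mode; the exact stage split depends on where shuffles occur:

from pyspark import SparkContext

sc = SparkContext("local[2]", "job-stage-task-sketch")   # hypothetical app name
lines = sc.parallelize(["a b", "b c", "a c"], 4)         # 4 partitions -> 4 tasks per stage

counts = (lines.flatMap(lambda line: line.split())
               .map(lambda w: (w, 1))
               .reduceByKey(lambda a, b: a + b))          # shuffle -> a new stage boundary

result = counts.collect()                                 # 1 action -> 1 job (here made of 2 stages)
print(result)
sc.stop()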

Spark Execution Architecture and Notes

Architecture

Spark applications run as independent sets of processes on a cluster, coordinated by the SparkContext object in your main program (called the driver program). 

Specifically, to run on a cluster, the SparkContext can connect to several types of cluster managers (either Spark’s own standalone cluster manager, Mesos or YARN), which allocate resources across applications. Once connected, Spark acquires executors on nodes in the cluster, which are processes that run computations and store data for your application. Next, it sends your application code (defined by JAR or Python files passed to SparkContext) to the executors. Finally, SparkContext sends tasks to the executors to run.

Notes

There are several useful things to note about this architecture:

  1. Each application gets its own executor processes, which stay up for the duration of the whole application and run tasks in multiple threads. This has the benefit of isolating applications from each other, on both the scheduling side (each driver schedules its own tasks) and executor side (tasks from different applications run in different JVMs). However, it also means that data cannot be shared across different Spark applications (instances of SparkContext) without writing it to an external storage system. (One application's data cannot be used by other applications.)
  2. Spark is agnostic to the underlying cluster manager. As long as it can acquire executor processes, and these communicate with each other, it is relatively easy to run it even on a cluster manager that also supports other applications (e.g. Mesos/YARN). When writing a Spark program, you do not need to care which cluster manager it will run on.
  3. The driver program must listen for and accept incoming connections from its executors throughout its lifetime (e.g., see spark.driver.port in the network config section). As such, the driver program must be network addressable from the worker nodes. The driver and the executors must be able to communicate with each other (a configuration sketch follows this list).
  4. Because the driver schedules tasks on the cluster, it should be run close to the worker nodes, preferably on the same local area network. If you’d like to send requests to the cluster remotely, it’s better to open an RPC to the driver and have it submit operations from nearby than to run a driver far away from the worker nodes. The driver is best run on the same local area network as the worker nodes.
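
A configuration sketch for notes 3 and 4 above; the host and port values are hypothetical, while spark.driver.host and spark.driver.port are the standard network settings the notes refer to:

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("driver-network-sketch")        # hypothetical app name
        .set("spark.driver.host", "192.168.1.10")   # an address the worker nodes can reach (hypothetical)
        .set("spark.driver.port", "7078"))          # fix the port executors connect back to (default is random)
sc = SparkContext(conf=conf)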

Distinguishing Important Spark and Hadoop Concepts