
Elastic-Job: a getting-started example

Overview

Elastic-Job is a distributed scheduling solution composed of two independent sub-projects: Elastic-Job-Lite and Elastic-Job-Cloud.

Elastic-Job-Lite is positioned as a lightweight, decentralized solution that provides coordination services for distributed jobs in the form of a jar package. Elastic-Job-Cloud is built on a self-developed Mesos Framework and additionally provides resource governance, application distribution, and process isolation.

Feature list

1. Job sharding

  • Split an overall job into multiple sub-tasks
  • Scale job processing capacity elastically by adding or removing servers
  • Distributed coordination, with fully automatic discovery and handling of job servers going online or offline
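The even split behind this sharding can be sketched in plain Java. Elastic-Job Lite's default strategy (AverageAllocationJobShardingStrategy, used again in the XML configuration later) distributes shard items evenly across servers and hands any remainder to the first servers in order. The sketch below is a simplified, self-contained illustration of that behaviour, not the framework's own code; the class and method names are invented for this example.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AverageAllocationSketch {

    // Distribute shardingTotal items evenly over serverCount servers:
    // each server first gets floor(total / servers) consecutive items,
    // then the remainder is handed out one item per server, in order.
    public static Map<Integer, List<Integer>> averageAllocate(int serverCount, int shardingTotal) {
        Map<Integer, List<Integer>> result = new LinkedHashMap<>();
        int itemsPerServer = shardingTotal / serverCount;
        int next = 0;
        for (int server = 0; server < serverCount; server++) {
            List<Integer> items = new ArrayList<>();
            for (int i = 0; i < itemsPerServer; i++) {
                items.add(next++);
            }
            result.put(server, items);
        }
        int remainder = shardingTotal % serverCount;
        for (int server = 0; server < remainder; server++) {
            result.get(server).add(next++);
        }
        return result;
    }

    public static void main(String[] args) {
        // 3 servers, 8 shard items: the two leftover items go to servers 0 and 1
        System.out.println(averageAllocate(3, 8)); // {0=[0, 1, 6], 1=[2, 3, 7], 2=[4, 5]}
    }
}
```

When a server joins or leaves, the framework re-runs this allocation over the surviving servers, which is what makes capacity elastic.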

2. Multiple job types

  • Time-driven jobs
  • Data-driven jobs (TBD)
  • Support for both resident (long-running) and transient jobs
  • Multi-language job support

3. Cloud native

  • Integrates seamlessly with scheduling platforms such as Mesos or Kubernetes
  • Jobs do not depend on stateful components such as IP addresses, disks, or data
  • Sensible resource scheduling, with resource allocation based on Netflix's Fenzo

4. Fault tolerance

  • Periodic self failure detection and automatic recovery
  • Guaranteed uniqueness of distributed job shards
  • Support for failover and re-triggering of missed jobs

5. Job aggregation

  • Identical jobs are aggregated to the same executor for unified processing
  • Saves system resources and initialization overhead
  • Dynamically allocates additional resources to newly assigned jobs

6. Ease of use

  • A full-featured operations platform
  • Job execution history tracking
  • One-click dump of registry center data for backup and troubleshooting

Next, let's implement a small example.

Build tool

Gradle

The project structure is as follows:

(screenshot: project structure)

Adding the dependencies

In the build.gradle file:

dependencies {
    // elastic-job
    compile(
            [group: 'com.dangdang', name: 'elastic-job-lite-core', version: '2.1.5'],
            [group: 'com.dangdang', name: 'elastic-job-lite-spring', version: '2.1.5']
    )
}

SimpleJob: a simple job

import com.dangdang.ddframe.job.api.ShardingContext;
import com.dangdang.ddframe.job.api.simple.SimpleJob;

public class MyElasticSimpleJob implements SimpleJob {

    @Override
    public void execute(ShardingContext context) {
        switch (context.getShardingItem()) {
            case 0:
                System.out.println("do something by sharding item 0");
                break;
            case 1:
                System.out.println("do something by sharding item 1");
                break;
            case 2:
                System.out.println("do something by sharding item 2");
                break;
            // case n: ...
        }
    }
}

DataflowJob: a dataflow job

import java.util.ArrayList;
import java.util.List;

import com.dangdang.ddframe.job.api.ShardingContext;
import com.dangdang.ddframe.job.api.dataflow.DataflowJob;

public class MyElasticDataflowJob implements DataflowJob<String> {

    @Override
    public List<String> fetchData(ShardingContext context) {
        switch (context.getShardingItem()) {
            case 0: 
                // get data from database by sharding item 0
                List<String> data1 = new ArrayList<>();
                data1.add("get data from database by sharding item 0");
                return data1;
            case 1: 
                // get data from database by sharding item 1
                List<String> data2 = new ArrayList<>();
                data2.add("get data from database by sharding item 1");
                return data2;
            case 2: 
                // get data from database by sharding item 2
                List<String> data3 = new ArrayList<>();
                data3.add("get data from database by sharding item 2");
                return data3;
            // case n: ...
        }
        return null;
    }

    @Override
    public void processData(ShardingContext shardingContext, List<String> data) {
        int count = 0;
        // process data
        // ...
        for (String string : data) {
            count++;
            System.out.println(string);
            if (count > 10) {
                return;
            }
        }
    }

}
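With streaming processing enabled (the DATAFLOW configuration later in this post passes true for its streaming flag), the framework keeps calling fetchData and processData in a loop until fetchData returns null or an empty list, then waits for the next cron trigger. Here is a simplified, self-contained sketch of that driving loop; the Dataflow interface and runStreaming helper are invented for illustration and are not the framework's API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class StreamingLoopSketch {

    // Minimal stand-in for a dataflow job: fetch a batch, process it.
    public interface Dataflow<T> {
        List<T> fetch();
        void process(List<T> data);
    }

    // Simplified streaming driver: keep fetching and processing until a
    // fetch returns no data. Returns how many batches were processed.
    public static <T> int runStreaming(Dataflow<T> job) {
        int rounds = 0;
        List<T> data = job.fetch();
        while (data != null && !data.isEmpty()) {
            job.process(data);
            rounds++;
            data = job.fetch();
        }
        return rounds;
    }

    public static void main(String[] args) {
        List<String> backlog = new ArrayList<>(Arrays.asList("a", "b", "c", "d", "e"));
        int rounds = runStreaming(new Dataflow<String>() {
            public List<String> fetch() {
                // drain up to 2 items per fetch, simulating paged reads
                List<String> page = new ArrayList<>(backlog.subList(0, Math.min(2, backlog.size())));
                backlog.removeAll(page);
                return page;
            }
            public void process(List<String> data) {
                System.out.println("processing " + data);
            }
        });
        System.out.println("rounds=" + rounds); // rounds=3
    }
}
```

This is why a streaming fetchData must eventually return an empty result (as MyElasticDataflowJob's default branch does), otherwise one trigger would process forever.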

Testing the two job types above

import java.net.InetAddress;
import java.net.UnknownHostException;

import com.dangdang.ddframe.job.config.JobCoreConfiguration;
import com.dangdang.ddframe.job.config.dataflow.DataflowJobConfiguration;
import com.dangdang.ddframe.job.config.simple.SimpleJobConfiguration;
import com.dangdang.ddframe.job.lite.api.JobScheduler;
import com.dangdang.ddframe.job.lite.config.LiteJobConfiguration;
import com.dangdang.ddframe.job.reg.base.CoordinatorRegistryCenter;
import com.dangdang.ddframe.job.reg.zookeeper.ZookeeperConfiguration;
import com.dangdang.ddframe.job.reg.zookeeper.ZookeeperRegistryCenter;
import com.job.task.MyElasticDataflowJob;
import com.job.task.MyElasticSimpleJob;

public class JobDemo {

    public static void main(String[] args) throws UnknownHostException {
        System.out.println("Start...");
        System.out.println(InetAddress.getLocalHost());
        new JobScheduler(createRegistryCenter(), createSimpleJobConfiguration()).init();
        new JobScheduler(createRegistryCenter(), createDataflowJobConfiguration()).init();
    }

    private static CoordinatorRegistryCenter createRegistryCenter() {
        CoordinatorRegistryCenter regCenter = new ZookeeperRegistryCenter(
                new ZookeeperConfiguration("127.0.0.1:2181", "new-elastic-job-demo"));
        regCenter.init();
        return regCenter;
    }

    private static LiteJobConfiguration createSimpleJobConfiguration() {
        // Define the core job configuration: name, cron expression, total shard count
        JobCoreConfiguration simpleCoreConfig =
                JobCoreConfiguration.newBuilder("SimpleJobDemo", "0/15 * * * * ?", 10).build();
        // Define the SIMPLE job type configuration
        SimpleJobConfiguration simpleJobConfig =
                new SimpleJobConfiguration(simpleCoreConfig, MyElasticSimpleJob.class.getCanonicalName());
        // Define the Lite root configuration
        return LiteJobConfiguration.newBuilder(simpleJobConfig).build();
    }

    private static LiteJobConfiguration createDataflowJobConfiguration() {
        // Define the core job configuration: name, cron expression, total shard count
        JobCoreConfiguration dataflowCoreConfig =
                JobCoreConfiguration.newBuilder("DataflowJob", "0/30 * * * * ?", 10).build();
        // Define the DATAFLOW job type configuration (true enables streaming processing)
        DataflowJobConfiguration dataflowJobConfig =
                new DataflowJobConfiguration(dataflowCoreConfig, MyElasticDataflowJob.class.getCanonicalName(), true);
        // Define the Lite root configuration
        return LiteJobConfiguration.newBuilder(dataflowJobConfig).build();
    }
}

Run result

(screenshot: console output)

Now let's implement both job types via configuration files.

Create the elastic.xml configuration file

Configure elastic-job's parameters through the configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:reg="http://www.dangdang.com/schema/ddframe/reg"
    xmlns:job="http://www.dangdang.com/schema/ddframe/job"
    xsi:schemaLocation="http://www.springframework.org/schema/beans 
                        http://www.springframework.org/schema/beans/spring-beans.xsd 
                        http://www.dangdang.com/schema/ddframe/reg 
                        http://www.dangdang.com/schema/ddframe/reg/reg.xsd 
                        http://www.dangdang.com/schema/ddframe/job 
                        http://www.dangdang.com/schema/ddframe/job/job.xsd 
                        ">

    <!-- Configure the job registry center. baseSleepTimeMilliseconds: initial retry interval (ms);
    maxSleepTimeMilliseconds: maximum retry interval (ms); maxRetries: maximum number of retries -->
    <reg:zookeeper id="regCenter" server-lists="192.168.6.175:12181"
        namespace="elastic-job" base-sleep-time-milliseconds="1000"
        max-sleep-time-milliseconds="3000" max-retries="3" />

    <!-- Configure the simple job -->
    <job:simple id="JobSimpleJob" class="com.job.task.MyElasticSimpleJob"
        registry-center-ref="regCenter" cron="0/30 * * * * ?"
        sharding-total-count="3" sharding-item-parameters="0=A,1=B,2=C" />

    <!-- Configure the dataflow job; job-parameter here carries a paging parameter.
    sharding-total-count: total number of job shards
    sharding-item-parameters: shard index and parameter separated by '=', multiple pairs separated by ',';
    shard indexes start at 0 and must be less than the total shard count
    job-parameter: custom job parameter, passed through to the job's business methods to implement
    parameterized jobs, e.g. the amount of data fetched per run, or the primary key a job instance reads from the database
    job-sharding-strategy-class: fully qualified class name of the sharding strategy; defaults to the average allocation strategy
    streaming-process: whether to process data in streaming mode
    reconcile-interval-minutes: interval (minutes) of the service that repairs inconsistent job server state;
    any value less than 1 disables the repair
    event-trace-rdb-data-source: reference to the data source bean used for job event tracing
    -->

    <job:dataflow id="JobDataflow" class="com.job.task.MyElasticDataflowJob"
        registry-center-ref="regCenter" cron="0/10 * * * * ?" sharding-total-count="3"
        sharding-item-parameters="0=a,1=b,2=c" job-sharding-strategy-class="com.dangdang.ddframe.job.lite.api.strategy.impl.AverageAllocationJobShardingStrategy"
        job-parameter="100" streaming-process="true" reconcile-interval-minutes="10"
        overwrite="true" event-trace-rdb-data-source="dataSource"/>

</beans>
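The sharding-item-parameters value maps each shard index to a business parameter that the job can read back at runtime (via ShardingContext). A minimal sketch of how such a string breaks down into a map; this is a hypothetical helper written for illustration, not the framework's own parser.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ShardingItemParametersSketch {

    // Parse "0=A,1=B,2=C" into {0 -> "A", 1 -> "B", 2 -> "C"}.
    public static Map<Integer, String> parse(String value) {
        Map<Integer, String> result = new LinkedHashMap<>();
        for (String pair : value.split(",")) {
            String[] kv = pair.trim().split("=", 2);
            result.put(Integer.parseInt(kv[0].trim()), kv[1].trim());
        }
        return result;
    }

    public static void main(String[] args) {
        // the value used by the simple job configuration above
        System.out.println(parse("0=A,1=B,2=C")); // {0=A, 1=B, 2=C}
    }
}
```

Each job instance only ever sees the parameters for the shard indexes it was assigned, which is how "0=A,1=B,2=C" ends up spread across up to three servers.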

Configure the data source (mysql.xml)

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">

    <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"
        destroy-method="close">
        <property name="driverClass" value="com.mysql.jdbc.Driver" />
        <property name="jdbcUrl" value="jdbc:mysql://127.0.0.1:3306/for_test?useUnicode=yes&amp;characterEncoding=UTF-8" />
        <property name="user" value="admin" />
        <property name="password" value="super" />
        <property name="minPoolSize" value="3" />
        <property name="maxPoolSize" value="20" />
        <property name="acquireIncrement" value="1" />
        <property name="testConnectionOnCheckin" value="true" />
        <property name="maxIdleTimeExcessConnections" value="240" />
        <property name="idleConnectionTestPeriod" value="300" />
    </bean>


</beans>

Create the applicationContext.xml file

Integrate elastic-job with Spring:

<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context" xmlns:task="http://www.springframework.org/schema/task"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context.xsd
        http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd ">

    <task:scheduler id="taskScheduler" pool-size="10" />
    <task:executor id="taskExecutor" />
    <task:annotation-driven executor="taskExecutor" scheduler="taskScheduler" />

    <import resource="elastic.xml" />
    <import resource="mysql.xml"/>
</beans>

Configure web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">

    <display-name>elastic-job</display-name>

    <!-- Set up the web application's environment parameters (context) -->
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:applicationContext.xml</param-value>
    </context-param>

    <!-- The listener element registers a Listener implementation that monitors application events -->
    <listener>
        <listener-class>
            org.springframework.web.context.ContextLoaderListener
        </listener-class>
    </listener>

</web-app>

Run result

(screenshot: console output)