
Using the WebMagic crawler framework with Spring Boot (repost)


Original article: http://www.jianshu.com/p/c3fc3129407d

1. The WebMagic crawler framework

WebMagic is a simple and flexible crawler framework. With WebMagic you can quickly develop an efficient, easy-to-maintain crawler.

1.1 Official site and documentation

The official documentation is clear and well written, so it is worth reading directly; you can also follow the walkthrough below. The addresses are:

Official site: http://webmagic.io

Chinese documentation: http://webmagic.io/docs/zh/

English: http://webmagic.io/docs/en

2. Integrating WebMagic with Spring Boot

The integration of Spring Boot and WebMagic consists of three modules: the crawling module (Processor); the persistence module (Pipeline), which writes the crawled data to the database; and the scheduled-task module (Scheduled), which is responsible for crawling the site on a timer.

2.1 Maven dependencies

<dependency> 
    <groupId>us.codecraft</groupId>
    <artifactId>webmagic-core</artifactId>
    <version>0.5.3</version>
</dependency>
<dependency>
    <groupId>us.codecraft</groupId>
    <artifactId>webmagic-extension</artifactId>
    <version>0.5.3</version>
</dependency>
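
A note on the dependencies: 0.5.3 was current when the original article was written, and newer WebMagic releases are available. Depending on the version, the WebMagic artifacts may also pull in the slf4j-log4j12 logging binding, which conflicts with Spring Boot's default Logback setup. If mvn dependency:tree shows that binding, an exclusion along the following lines (a sketch only, verify against your own dependency tree) resolves the conflict:

<dependency>
    <groupId>us.codecraft</groupId>
    <artifactId>webmagic-core</artifactId>
    <version>0.5.3</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
    </exclusions>
</dependency>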

2.2 The crawling module: Processor

The Processor below crawls the Jianshu home page: it parses the page, extracts each article's link and title, and puts them into WebMagic's Page, from which the persistence module later picks them up and writes them to the database. The code is as follows:

package com.shang.spray.common.processor;

import com.shang.spray.entity.News;
import com.shang.spray.entity.Sources;
import com.shang.spray.pipeline.NewsPipeline;
import us.codecraft.webmagic.Page;
import us.codecraft.webmagic.Site;
import us.codecraft.webmagic.Spider;
import us.codecraft.webmagic.processor.PageProcessor;
import us.codecraft.webmagic.selector.Selectable;

import java.util.List;

/**
 * Crawler for the Jianshu home page.
 * Created by shang on 16/9/9.
 */
public class JianShuProcessor implements PageProcessor {

    private Site site = Site.me()
            .setDomain("jianshu.com")
            .setSleepTime(100)
            .setUserAgent("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36");

    public static final String list = "http://www.jianshu.com";

    @Override
    public void process(Page page) {
        if (page.getUrl().regex(list).match()) {
            List<Selectable> list = page.getHtml().xpath("//ul[@class='article-list thumbnails']/li").nodes();
            for (Selectable s : list) {
                String title=s.xpath("//div/h4/a/text()").toString();
                String link=s.xpath("//div/h4").links().toString();
                News news=new News();
                news.setTitle(title);
                news.setInfo(title);
                news.setLink(link);
                news.setSources(new Sources(5));
                page.putField("news"+title, news);
            }
        }
    }

    @Override
    public Site getSite() {
        return site;
    }

    // Standalone entry point for testing the processor. Note that NewsPipeline is
    // instantiated directly here, so its @Autowired repository is not injected;
    // inside the application the Spring-managed pipeline is used instead (see 2.4).
    public static void main(String[] args) {
        Spider spider = Spider.create(new JianShuProcessor());
        spider.addUrl("http://www.jianshu.com");
        spider.addPipeline(new NewsPipeline());
        spider.thread(5);
        spider.setExitWhenComplete(true);
        spider.start();
    }
}
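
The News and Sources entities referenced above are not included in the original post. Below is a minimal sketch of News, with its fields inferred from the setters used in the processor and in the pipeline of section 2.3; the field names and types are assumptions and the real entity may differ.

package com.shang.spray.entity;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import java.util.Date;

/**
 * Minimal sketch of the News entity (fields inferred from this post; an assumption).
 */
@Entity
public class News {

    @Id
    @GeneratedValue
    private Long id;

    private String title;       // article title
    private String info;        // short description (the post reuses the title)
    private String link;        // article URL
    private String author;
    private Integer typeId;
    private Integer sort;
    private Integer status;
    private Boolean explicitLink;
    private Date createDate;
    private Date modifyDate;

    @ManyToOne
    private Sources sources;    // crawl source, e.g. new Sources(5) for Jianshu

    // Getters and setters omitted for brevity; the code in this post relies on them.
}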

2.3 The persistence module: Pipeline

The persistence module works together with the Spring Data repository layer. The class implements WebMagic's Pipeline interface; in its process method it takes the data collected by the crawling module and saves it to the database through the repository's save method. The code is as follows:

package com.shang.spray.pipeline;

import com.shang.spray.entity.News;
import com.shang.spray.entity.Sources;
import com.shang.spray.repository.NewsRepository;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.jpa.domain.Specification;
import org.springframework.stereotype.Repository;
import us.codecraft.webmagic.ResultItems;
import us.codecraft.webmagic.Task;
import us.codecraft.webmagic.pipeline.Pipeline;

import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Map;

/**
 * Pipeline that persists crawled news items.
 * Created by shang on 16/8/22.
 */
@Repository
public class NewsPipeline implements Pipeline {

    @Autowired
    protected NewsRepository newsRepository;

    @Override
    public void process(ResultItems resultItems, Task task) {
        for (Map.Entry<String, Object> entry : resultItems.getAll().entrySet()) {
            if (entry.getKey().contains("news")) {
                News news=(News) entry.getValue();
                Specification<News> specification=new Specification<News>() {
                    @Override
                    public Predicate toPredicate(Root<News> root, CriteriaQuery<?> criteriaQuery, CriteriaBuilder criteriaBuilder) {
                        return criteriaBuilder.and(criteriaBuilder.equal(root.get("link"),news.getLink()));
                    }
                };
                if (newsRepository.findOne(specification) == null) { // only save the item if its link is not already in the database
                    news.setAuthor("水花");
                    news.setTypeId(1);
                    news.setSort(1);
                    news.setStatus(1);
                    news.setExplicitLink(true);
                    news.setCreateDate(new Date());
                    news.setModifyDate(new Date());
                    newsRepository.save(news);
                }
            }

        }
    }
}
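
The NewsRepository interface is not shown in the original article. For findOne(Specification) to be available it has to extend JpaSpecificationExecutor; below is a minimal sketch, assuming the Spring Data JPA generation used at the time of writing, where findOne(Specification) returns null (rather than an Optional) when nothing matches, as the null check above expects.

package com.shang.spray.repository;

import com.shang.spray.entity.News;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.JpaSpecificationExecutor;

/**
 * Minimal sketch of the repository assumed by NewsPipeline. JpaSpecificationExecutor
 * provides findOne(Specification), which the pipeline uses for the duplicate-link check.
 */
public interface NewsRepository extends JpaRepository<News, Long>, JpaSpecificationExecutor<News> {
}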

2.4 The scheduled-task module: Scheduled

Spring Boot's built-in scheduling annotation, @Scheduled(cron = "0 0 0/2 * * ?"), runs the crawl every two hours starting from midnight each day. The scheduled task simply invokes the WebMagic crawling module (the Processor). The code is as follows:

package com.shang.spray.common.scheduled;

import com.shang.spray.common.processor.DevelopersProcessor;
import com.shang.spray.common.processor.JianShuProcessor;
import com.shang.spray.common.processor.ZhiHuProcessor;
import com.shang.spray.entity.Config;
import com.shang.spray.pipeline.NewsPipeline;
import com.shang.spray.service.ConfigService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.jpa.domain.Specification;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import us.codecraft.webmagic.Spider;

import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;


/**
 * Scheduled crawling tasks for news sources.
 * Created by shang on 16/8/22.
 */
@Component
public class NewsScheduled {
    @Autowired
    private NewsPipeline newsPipeline;

    /**
     * Jianshu home page
     */
    @Scheduled(cron = "0 0 0/2 * * ? ") // runs every 2 hours, starting at midnight
    public void jianShuScheduled() {
        System.out.println("----開始執行簡書定時任務");
        Spider spider = Spider.create(new JianShuProcessor());
        spider.addUrl("http://www.jianshu.com");
        spider.addPipeline(newsPipeline);
        spider.thread(5);
        spider.setExitWhenComplete(true);
        spider.start();
        spider.stop();
    }

}
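
One thing to watch in the task above: Spider.start() launches the crawl asynchronously, so the stop() call that follows it may signal the spider to stop before it has fetched anything. If the crawl is meant to finish inside the scheduled method, WebMagic's synchronous run() method, which blocks until the URL queue is drained, is the simpler choice. A sketch of that variant (same processor and pipeline, only the execution style changes):

    @Scheduled(cron = "0 0 0/2 * * ? ") // runs every 2 hours, starting at midnight
    public void jianShuScheduled() {
        System.out.println("---- starting the Jianshu scheduled crawl");
        Spider.create(new JianShuProcessor())
                .addUrl("http://www.jianshu.com")
                .addPipeline(newsPipeline)
                .thread(5)
                .run(); // blocks until the crawl completes, so no explicit stop() is needed
    }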

2.5 Enabling scheduled tasks in Spring Boot

Enable scheduling by adding the @EnableScheduling annotation to the Spring Boot Application class. The code is as follows:

package com.shang.spray;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.context.web.SpringBootServletInitializer;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;

/**
 * Application entry point.
 * Created by shang on 16/7/8.
 */
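// Note: @SpringBootApplication already combines @Configuration, @EnableAutoConfiguration
// and @ComponentScan, so the three annotations listed separately below are redundant
// (though harmless); they are kept here only to mirror the original post.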
@Configuration
@EnableAutoConfiguration
@ComponentScan
@SpringBootApplication
@EnableScheduling
public class SprayApplication extends SpringBootServletInitializer{

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(SprayApplication.class);
    }

    public static void main(String[] args) throws Exception {
        SpringApplication.run(SprayApplication.class, args);
    }
}
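
The original post also does not show the data-source configuration that the Spring Data JPA repository needs. A minimal application.properties sketch is given below; the MySQL URL, schema name and credentials are placeholders and purely an assumption, so adjust them to your environment.

spring.datasource.url=jdbc:mysql://localhost:3306/spray?useUnicode=true&characterEncoding=utf8
spring.datasource.username=root
spring.datasource.password=changeme
spring.jpa.hibernate.ddl-auto=update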

3. Closing remarks

WebMagic is the crawler framework I used to crawl site data for the 水花一現 ("spray") project. After comparing several other crawler frameworks I chose this one because it is easy to learn and yet quite powerful. Only the basic features are used here; the framework offers many more capabilities that this post does not touch. If you are interested, have a look at the official documentation!
