

Open-source project name:

crawler

Open-source project URL:

https://gitee.com/xbynet/crawler

Open-source project introduction:

crawler

A simple and flexible web crawler framework for Java.

Features:

1. The code is simple, easy to understand, and highly customizable
2. The API is simple and easy to use
3. Supports file download and partial (chunked) content fetching
4. Requests and responses support rich content and options; each request is highly customizable
5. Supports running your own operations before and after the network request in the downloader
6. Selenium + PhantomJS support
7. Redis support
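The before/after request hooks (feature 5) can be illustrated with a minimal, self-contained sketch. The class and method names below are illustrative stand-ins, not the framework's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Illustrative sketch of a downloader with before/after hooks, loosely
// modeled on the framework's RequestAction idea. These names are
// hypothetical; the real framework passes HttpClient objects instead.
public class HookedDownloader {
    public interface RequestAction {
        void before(String url);          // runs before the network call
        void after(String url, int code); // runs after the response arrives
    }

    private final List<RequestAction> actions = new ArrayList<>();

    public void addAction(RequestAction a) { actions.add(a); }

    // The actual fetch is injected as a Supplier so the sketch stays
    // self-contained and does no real network I/O.
    public int download(String url, Supplier<Integer> fetch) {
        for (RequestAction a : actions) a.before(url);
        int code = fetch.get();
        for (RequestAction a : actions) a.after(url, code);
        return code;
    }

    public static void main(String[] args) {
        HookedDownloader d = new HookedDownloader();
        d.addAction(new RequestAction() {
            public void before(String url) { System.out.println("before " + url); }
            public void after(String url, int code) { System.out.println("after " + url + " -> " + code); }
        });
        d.download("https://example.com", () -> 200);
    }
}
```

This mirrors how the demo below registers a `RequestAction` on a `Request` to run code around the actual HTTP call.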

Future:

1. Complete the code comments and tests

Install:

This is a plain Maven Java SE project, so you can download the code and package the jar yourself.
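Assuming a standard Maven layout, building the jar locally would look something like this (command fragment, not verified against the repository's actual build configuration):

```shell
# clone the repository and build the jar
git clone https://gitee.com/xbynet/crawler.git
cd crawler
mvn package -DskipTests
# the packaged jar should appear under target/
```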

Demo:

```java
import java.nio.file.Paths;
import java.util.List;
import java.util.Map;
import java.util.UUID;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpUriRequest;
import org.apache.http.client.protocol.HttpClientContext;
import org.apache.http.impl.client.BasicCookieStore;
import org.apache.http.impl.client.CloseableHttpClient;

import net.xby1993.crawler.http.DefaultDownloader;
import net.xby1993.crawler.http.FileDownloader;
import net.xby1993.crawler.http.HttpClientFactory;
import net.xby1993.crawler.parser.JsoupParser;
import net.xby1993.crawler.scheduler.DefaultScheduler;
// The framework's core types (Processor, Request, Response, Site, Spider,
// Const, RequestAction) must also be imported from their packages.

public class GithubCrawler extends Processor {
	@Override
	public void process(Response resp) {
		String currentUrl = resp.getRequest().getUrl();
		System.out.println("CurrentUrl:" + currentUrl);
		int respCode = resp.getCode();
		System.out.println("ResponseCode:" + respCode);
		System.out.println("type:" + resp.getRespType().name());
		String contentType = resp.getContentType();
		System.out.println("ContentType:" + contentType);
		Map<String, List<String>> headers = resp.getHeaders();
		System.out.println("ResponseHeaders:");
		for (String key : headers.keySet()) {
			List<String> values = headers.get(key);
			for (String str : values) {
				System.out.println(key + ":" + str);
			}
		}
		JsoupParser parser = resp.html();
		// Parted (chunked) fetching is supported; a parent response links
		// all the part responses together.
		// System.out.println("isParted:" + resp.isPartResponse());
		// Response parent = resp.getParentResponse();
		// resp.addPartRequest(null);
		// Map<String, Object> extras = resp.getRequest().getExtras();
		if (currentUrl.equals("https://github.com/xbynet")) {
			String avatar = parser.single("img.avatar", "src");
			String dir = System.getProperty("java.io.tmpdir");
			String savePath = Paths.get(dir, UUID.randomUUID().toString())
					.toString();
			boolean avatarDownloaded = download(avatar, savePath);
			System.out.println("avatar:" + avatar + ", saved:" + savePath);
			// System.out.println("avatar downloaded status:" + avatarDownloaded);
			String name = parser.single(".vcard-names > .vcard-fullname",
					"text");
			System.out.println("name:" + name);
			List<String> reponames = parser.list(
					".pinned-repos-list .repo.js-repo", "text");
			List<String> repoUrls = parser.list(
					".pinned-repo-item .d-block >a", "href");
			System.out.println("reponame:url");
			if (reponames != null) {
				for (int i = 0; i < reponames.size(); i++) {
					String tmpUrl = "https://github.com" + repoUrls.get(i);
					System.out.println(reponames.get(i) + ":" + tmpUrl);
					Request req = new Request(tmpUrl).putExtra("name", reponames.get(i));
					resp.addRequest(req);
				}
			}
		} else {
			Map<String, Object> extras = resp.getRequest().getExtras();
			String name = extras.get("name").toString();
			System.out.println("repoName:" + name);
			String shortDesc = parser.single(".repository-meta-content", "allText");
			System.out.println("shortDesc:" + shortDesc);
		}
	}

	public void start() {
		Site site = new Site();
		Spider spider = Spider.builder(this).threadNum(5).site(site)
				.urls("https://github.com/xbynet").build();
		spider.run();
	}

	public static void main(String[] args) {
		new GithubCrawler().start();
	}

	// A fully configured variant showing most of the available options.
	public void startCompleteConfig() {
		String pcUA = "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36";
		String androidUA = "Mozilla/5.0 (Linux; Android 5.1.1; Nexus 6 Build/LYZ28E) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.23 Mobile Safari/537.36";
		Site site = new Site();
		site.setEncoding("UTF-8").setHeader("Referer", "https://github.com/")
				.setRetry(3).setRetrySleep(3000).setSleep(50).setTimeout(30000)
				.setUa(pcUA);
		Request request = new Request("https://github.com/xbynet");
		HttpClientContext ctx = new HttpClientContext();
		BasicCookieStore cookieStore = new BasicCookieStore();
		ctx.setCookieStore(cookieStore);
		request.setAction(new RequestAction() {
			@Override
			public void before(CloseableHttpClient client, HttpUriRequest req) {
				System.out.println("before-haha");
			}

			@Override
			public void after(CloseableHttpClient client,
					CloseableHttpResponse resp) {
				System.out.println("after-haha");
			}
		}).setCtx(ctx).setEncoding("UTF-8")
				.putExtra("somekey", "I can use in the response by your own")
				.setHeader("User-Agent", pcUA).setMethod(Const.HttpMethod.GET)
				.setPartRequest(null).setEntity(null)
				.setParams("appkeyqqqqqq", "1213131232141").setRetryCount(5)
				.setRetrySleepTime(10000);
		Spider spider = Spider.builder(this).threadNum(5)
				.name("Spider-github-xbynet")
				.defaultDownloader(new DefaultDownloader())
				.fileDownloader(new FileDownloader())
				.httpClientFactory(new HttpClientFactory()).ipProvider(null)
				.listener(null).pool(null).scheduler(new DefaultScheduler())
				.shutdownOnComplete(true).site(site).build();
		spider.run();
	}
}
```
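The demo passes data between crawl steps via extras: `putExtra` attaches metadata when a request is enqueued, and `getExtras` reads it back when the response is processed. The pattern is just a per-request key/value map; a framework-free sketch (the `Request` type here is illustrative, not the framework's actual class):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for the Request "extras" pattern used in the demo:
// attach metadata when enqueueing a URL, read it back in the processor.
public class ExtrasDemo {
    public static class Request {
        public final String url;
        private final Map<String, Object> extras = new HashMap<>();
        public Request(String url) { this.url = url; }
        public Request putExtra(String key, Object value) {
            extras.put(key, value);
            return this; // fluent, like the framework's builder-style API
        }
        public Map<String, Object> getExtras() { return extras; }
    }

    // Later, in the processor, the metadata travels with the request.
    public static String describe(Request req) {
        Object name = req.getExtras().get("name");
        return req.url + " (repo: " + name + ")";
    }

    public static void main(String[] args) {
        Request req = new Request("https://github.com/xbynet/crawler")
                .putExtra("name", "crawler");
        System.out.println(describe(req));
    }
}
```

This is how the demo's repo-list page hands each repository's name to the per-repository page handler without re-parsing it.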

Examples:

  • Github (GitHub personal profile information)
  • OSChinaTweets (OSChina tweets)
  • Qiushibaike (Qiushibaike jokes)
  • Neihanshequ (Neihanshequ jokes)
  • ZihuRecommend (Zhihu recommendations)

More Examples: Please see here

Thanks:

webmagic: this project borrows code from webmagic in many places and draws heavily on its design; many thanks.
xsoup: used as the underlying XPath processor
JsonPath: used as the underlying JSONPath processor
Jsoup: used as the underlying HTML/XML processor
HttpClient: used as the underlying HTTP client

