Implementing a Distributed Crawler with scrapy-redis
Published: 2019-06-29


Scrapy is a fast, high-level screen-scraping and web-crawling framework written in Python, used to crawl web sites and extract structured data from their pages. It has a wide range of uses, such as data mining, monitoring, and so on.

The Scrapy framework by itself already covers a large share of crawling work. For larger-scale crawls you can reach for Python's multithreading or multiprocessing, but if you have several servers, distributed crawling is the best and most efficient solution.

Scrapy-redis is a Scrapy component built on Redis. It provides a shared queue of pending URLs together with request-fingerprint deduplication, both kept in Redis. The principle: Redis maintains a common URL queue; the crawler processes on different machines all push the URLs they discover into this Redis queue and fetch new URLs from it, and the scraped data is saved into one shared database.

I previously took Cui Qingcai's Zhihu crawler course, but how to build a distributed crawler with scrapy-redis never became entirely clear to me. So below, a distributed crawler is set up with MongoDB and Redis.

The scrapy-redis distributed architecture diagram:

The Scheduler fetches request URLs from Redis and hands them to the Downloader, which downloads the web pages. The pages are passed to the Spiders, where the extraction logic runs, and finally the structured Item objects go through the Item Pipeline and are saved into the Redis database.

The item processes on the other machines work just like the single process shown in the diagram, while the Master crawler maintains the URL queue in the Redis database.
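Conceptually, the sharing rests on two Redis structures: a queue of pending requests that every crawler process pops from, and a set of request fingerprints used for deduplication. The snippet below is not scrapy-redis itself, only a stripped-down sketch of that idea; it assumes a reachable Redis instance and the redis Python package, and the key names demo:requests and demo:dupefilter are made up for illustration:

import hashlib

import redis

r = redis.StrictRedis(host='localhost', port=6379)

def push_url(url):
    """Queue a URL only if its fingerprint has not been seen before."""
    fingerprint = hashlib.sha1(url.encode('utf-8')).hexdigest()
    if r.sadd('demo:dupefilter', fingerprint):   # SADD returns 1 only for a new member
        r.lpush('demo:requests', url)

def pop_url():
    """Any crawler process on any machine can take the next URL to fetch."""
    url = r.rpop('demo:requests')
    return url.decode('utf-8') if url else None

push_url('https://www.zhihu.com/api/v4/members/someone')
push_url('https://www.zhihu.com/api/v4/members/someone')   # duplicate, filtered out
print(pop_url())

scrapy-redis implements the same idea with serialized Scrapy requests instead of bare URLs, which is what the settings later in this article enable.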

Prerequisites:

1. One Linux machine (the author uses an Alibaba Cloud ECS running CentOS 7.2; for the ECS setup process see the earlier article on installing Alibaba Cloud ECS)
2. Redis: the Windows Redis build plus RedisDesktopManager for Windows, and a Linux Redis build
3. Anaconda (Windows) and Anaconda (Linux)
4. MongoDB (Linux)
5. Robomongo 0.9.0 (a MongoDB GUI management tool)

Install the Redis client on Windows and the Redis server on Linux.

The versions installed are the Windows Redis build and the Redis GUI tool RedisDesktopManager.

  • Installing Redis and RedisDesktopManager on Windows is straightforward: just click through the installer.
  • To verify that Redis works, open a DOS command window, change into the Redis installation directory (the author's is the redis folder on drive D), and run the server startup command (redis-server), as shown in the figure:

The Redis binary distribution also includes the command-line client (redis-cli). Open another terminal and run the command shown in the figure to connect to the local Windows Redis database.

If you are not fond of the DOS command window and find it awkward to manage, RedisDesktopManager comes in handy. After installing it, launch it as shown below and enter the connection details in the figure to connect to the local Redis database:

  • At this point the Windows Redis installation is complete. The road ahead still feels long.
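If you prefer to check the local server from code rather than the DOS window, a minimal sketch with the redis Python package (installed in the Anaconda step further below):

import redis

# Assumes the Windows Redis server started above is listening on the default port 6379.
r = redis.StrictRedis(host='127.0.0.1', port=6379)
print(r.ping())   # True means the local server is reachable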

Install Redis on Alibaba Cloud ECS:

Log in to the Alibaba Cloud ECS terminal with Xshell and run the following command to install Redis:

[author@iZpq90f23ft5jyj3s7fmduhZ ~]# yum -y install redis

The Alibaba Cloud instance runs CentOS 7.2; if your own machine runs Ubuntu, install with:

[author@iZpq90f23ft5jyj3s7fmduhZ ~]$ sudo apt-get install redis

After installation, Redis starts automatically. Run the following command to check that it is running:

[author@iZpq90f23ft5jyj3s7fmduhZ ~]# ps -aux|grep redis
root     13925  0.0  0.0 112648   964 pts/0    R+   14:42   0:00 grep --color=auto redis
redis    29418  0.0  0.6 151096 11912 ?        Ssl  Sep22   1:25 /usr/bin/redis-server *:6379

Running Redis without a password is no different from streaking down the street. The MongoDB database-theft incidents at home and abroad a while back are still fresh in memory, so set a password for Redis. The default configuration file lives under /etc/; open it as shown below and change a few entries:

[author@iZpq90f23ft5jyj3s7fmduhZ ~]# vim /etc/redis.conf

# bind 127.0.0.1        (comment out the bind directive; to accept connections only from specific IPs, set your own IP here instead)
requirepass xxxxxxx     (xxxxxxx is the password you choose; remove the leading # in front of requirepass)
port 6379               (the port used to connect to Redis; it can be changed, but the author keeps the default)
protected-mode no       (change the default value yes to no)

After making the changes, save and exit, then restart the Redis service:

[author@iZpq90f23ft5jyj3s7fmduhZ ~]# service redis restart

Connect to the Redis instance on Alibaba Cloud using RedisDesktopManager on Windows:

Surprises are never expected: the connection fails. The cause is Alibaba Cloud's security rules; you must open port 6379 in the security group before you can connect.

Log in to the Alibaba Cloud console and add a security group rule, as shown in the figure. The authorized object 0.0.0.0/0 means any IP address is allowed to connect to Redis, and the port range 6379/6379 means only port 6379 is opened.

Once the security group is configured, enter the IP address and password in RedisDesktopManager and you can connect to the Redis database on Alibaba Cloud:
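The connection can also be verified from Python with the redis package. In this sketch the host IP and password are placeholders for your own ECS address and the requirepass value set earlier:

import redis

# Placeholders: substitute your ECS public IP and the requirepass password.
r = redis.StrictRedis(host='your-ecs-ip', port=6379, password='xxxxxxx')
r.set('conn_test', 'ok')      # a simple write/read round trip
print(r.get('conn_test'))     # prints b'ok' when the security group rule and password are correct
r.delete('conn_test')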

Install Anaconda:

The Windows installation is simple: download the installer and click through it. Anaconda ships with a Python interpreter; the author chose the Python 3.6 build. Run the following command on Windows to see which packages Anaconda installed:

C:\Users\Username> conda list

Because installing the Scrapy framework on Windows is troublesome and often fails with obscure dependency errors, Anaconda is used instead: it can quickly install the scrapy, scrapy-redis, pymongo, and redis packages. You can of course also install these packages directly with pip.

conda install scrapy
conda install scrapy-redis
conda install pymongo
conda install redis

The Linux installer script can be downloaded on Windows and then uploaded to the Alibaba Cloud ECS with FileZilla. After uploading it, run the command below. The Linux Anaconda installer requires pressing Enter manually and asks whether to add the conda command to the environment variables; whenever it asks a question during the process, simply answer yes:

[author@iZpq90f23ft5jyj3s7fmduhZ ~]# bash Anaconda3-4.4.0-Linux-x86_64.sh

After installing Anaconda, typing python in the terminal shows that the interpreter is Python 3.6 (the default Python on Alibaba Cloud ECS CentOS 7.2 is Python 2.7). Then use Anaconda to install the pymongo, redis, scrapy, and scrapy-redis packages:

[author@iZpq90f23ft5jyj3s7fmduhZ ~]# python
Python 3.6.1 |Anaconda 4.4.0 (64-bit)| (default, May 11 2017, 13:09:58)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>>
[author@iZpq90f23ft5jyj3s7fmduhZ ~]# conda install scrapy
[author@iZpq90f23ft5jyj3s7fmduhZ ~]# conda install scrapy-redis
[author@iZpq90f23ft5jyj3s7fmduhZ ~]# conda install pymongo
[author@iZpq90f23ft5jyj3s7fmduhZ ~]# conda install redis
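On either machine, a quick way to confirm that the four packages are importable from the Anaconda Python is a small check like the following (a sketch; the version lookup is defensive because not every package exposes __version__):

import pymongo
import redis
import scrapy
import scrapy_redis

# If any import above fails, the corresponding conda/pip install did not succeed.
for module in (pymongo, redis, scrapy, scrapy_redis):
    print(module.__name__, getattr(module, '__version__', 'unknown'))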

Install MongoDB on Alibaba Cloud ECS:

Download the package from the MongoDB official website; once the download finishes, upload it to the Alibaba Cloud ECS with FileZilla.

Run the following commands on the Alibaba Cloud ECS to install MongoDB. In the db.createUser call, db is the database the crawler will use later. For more detail on db.createUser, consult the official MongoDB documentation.

[author@iZpq90f23ft5jyj3s7fmduhZ ~]# tar -vxzf mongodb-linux-x86_64-amazon-3.4.9.tgz
[author@iZpq90f23ft5jyj3s7fmduhZ ~]# mv mongodb-linux-x86_64-amazon-3.4.9 mongodb
[author@iZpq90f23ft5jyj3s7fmduhZ ~]# cd mongodb
[author@iZpq90f23ft5jyj3s7fmduhZ mongodb~]# mkdir db
[author@iZpq90f23ft5jyj3s7fmduhZ mongodb~]# mkdir logs
[author@iZpq90f23ft5jyj3s7fmduhZ mongodb~]# cd logs
[author@iZpq90f23ft5jyj3s7fmduhZ logs~]# touch mongodb.log
[author@iZpq90f23ft5jyj3s7fmduhZ logs~]# cd ..
[author@iZpq90f23ft5jyj3s7fmduhZ mongodb~]# cd bin
[author@iZpq90f23ft5jyj3s7fmduhZ mongodb bin~]# touch mongodb.conf      (create the MongoDB config file pointing at the log and data paths)

# Contents of mongodb.conf:
dbpath=/author/mongodb/db
logpath=/author/mongodb/logs/mongodb.log
port=27017
fork=true
nohttpinterface=true
##############################

[author@iZpq90f23ft5jyj3s7fmduhZ mongodb bin~]# ./mongod --config mongodb.conf      (start MongoDB)
[author@iZpq90f23ft5jyj3s7fmduhZ mongodb bin~]# ./mongo                             (start the MongoDB shell client)
MongoDB shell version v3.4.9
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.9
> db.createUser({user:"xxx",pwd:"xxx",roles:[{role:"readWrite",db:"zhihu"}]})
[author@iZpq90f23ft5jyj3s7fmduhZ mongodb bin~]# kill -9 pid      (pid is the mongod process id; find it with ps -aux|grep mongod)
[author@iZpq90f23ft5jyj3s7fmduhZ mongodb bin~]# ./mongod --config mongodb.conf --auth      (restart MongoDB with --auth so authentication is required)
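Before wiring MongoDB into the crawler, it is worth confirming that the account just created with db.createUser can actually log in. A minimal sketch using pymongo (the same 3.x-era API the pipeline below relies on); the user, password, and host are placeholders for your own values:

import pymongo

# Placeholders: the user/password from db.createUser above; use 127.0.0.1 when run on the ECS itself,
# or the ECS public IP once port 27017 is opened in the security group.
client = pymongo.MongoClient('mongodb://127.0.0.1:27017')
db = client['zhihu']
db.authenticate('xxx', 'xxx')        # raises OperationFailure if the credentials are rejected
print(db.collection_names())         # an empty list for a brand-new database
client.close()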

Install the Robomongo GUI tool on Windows:

Installing Robomongo is straightforward and will not be described further. After installation, connect using the username just created and the zhihu database. You may find that the connection times out; as with Redis, the cause is Alibaba Cloud's security rules, so add another security group rule that opens port 27017.

All the required tools are finally installed. As the saying goes, to do a good job one must first sharpen one's tools; the hardships are hard to put into words.

Source code of the scrapy-redis crawler

The original source code comes from another author. Packet-capture analysis showed that Zhihu's JSON format has changed, and the MongoDB instance installed here requires authentication, so parts of the code were rewritten.

The settings.py configuration file:

# -*- coding: utf-8 -*-

# Scrapy settings for zhihuuser project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'zhihuuser'

SPIDER_MODULES = ['zhihuuser.spiders']
NEWSPIDER_MODULE = 'zhihuuser.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = 'zhihuuser (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36',
    'authorization': 'oauth c3cef7c66a1843f8b3a9e6a1e3160e20'
}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'zhihuuser.middlewares.ZhihuuserSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'zhihuuser.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'zhihuuser.pipelines.MongoPipeline': 300,
    # 'zhihuuser.pipelines.JsonWriterPipeline': 300,
    'scrapy_redis.pipelines.RedisPipeline': 301
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

MONGO_URI = 'hostIP'
MONGO_DATABASE = 'zhihu'
MONGO_USER = "username"
MONGO_PASS = "password"

SCHEDULER = "scrapy_redis.scheduler.Scheduler"
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
REDIS_URL = 'redis://username:pass@hostIP:6379'

The pipelines.py file:

# -*- coding: utf-8 -*-

# Define your item pipelines here
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import pymongo


class MongoPipeline(object):
    collection_name = "users"

    def __init__(self, mongo_uri, mongo_db, mongo_user, mongo_pass):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db
        self.mongo_user = mongo_user
        self.mongo_pass = mongo_pass

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE'),
            mongo_user=crawler.settings.get("MONGO_USER"),
            mongo_pass=crawler.settings.get("MONGO_PASS")
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]
        self.db.authenticate(self.mongo_user, self.mongo_pass)

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # self.db[self.collection_name].update({'url_token': item['url_token']}, {'$set': dict(item)}, True)
        # return item
        self.db[self.collection_name].insert(dict(item))
        return item


# import json
# class JsonWriterPipeline(object):
#     def __init__(self):
#         self.file = open('data.json', 'w', encoding='UTF-8')
#     def process_item(self, item, spider):
#         # self.file.write("starting to write\n")
#         line = json.dumps(dict(item)) + "\n"
#         self.file.write(line)
#         return item

The items.py file (Zhihu's JSON data format has changed, so this part was rewritten):

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

from scrapy import Item, Field


class ZhihuuserItem(Item):
    allow_message = Field()
    answer_count = Field()
    articles_count = Field()
    avatar_url_template = Field()
    badge = Field()
    employments = Field()
    follower_count = Field()
    gender = Field()
    headline = Field()
    id = Field()
    is_advertiser = Field()
    is_blocking = Field()
    is_followed = Field()
    is_following = Field()
    url = Field()
    url_token = Field()
    user_type = Field()

zhihu.py, the spider:

# -*- coding: utf-8 -*-

import json

from scrapy import Spider, Request
from zhihuuser.items import ZhihuuserItem


class ZhihuSpider(Spider):
    name = "zhihu"
    allowed_domains = ["www.zhihu.com"]
    start_urls = ['http://www.zhihu.com/']
    # URL template for the list of users a given user follows (followees)
    follows_url = "https://www.zhihu.com/api/v4/members/{user}/followees?include={include}&offset={offset}&limit={limit}"
    # URL template for a user's detailed information
    user_url = "https://www.zhihu.com/api/v4/members/{user}?include={include}"
    # the seed user to start from
    start_user = "zhang-yu-meng-7"
    # include parameter for the user-detail request
    user_query = 'locations,employments,gender,educations,business,voteup_count,thanked_Count,follower_count,following_count,cover_url,following_topic_count,following_question_count,following_favlists_count,following_columns_count,answer_count,articles_count,pins_count,question_count,commercial_question_count,favorite_count,favorited_count,logs_count,marked_answers_count,marked_answers_text,message_thread_token,account_status,is_active,is_force_renamed,is_bind_sina,sina_weibo_url,sina_weibo_name,show_sina_weibo,is_blocking,is_blocked,is_following,is_followed,mutual_followees_count,vote_to_count,vote_from_count,thank_to_count,thank_from_count,thanked_count,description,hosted_live_count,participated_live_count,allow_message,industry_category,org_name,org_homepage,badge[?(type=best_answerer)].topics'
    # include parameter for the followees request
    follows_query = 'data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics'
    # URL template and include parameter for the followers request
    followers_url = 'https://www.zhihu.com/api/v4/members/{user}/followers?include={include}&offset={offset}&limit={limit}'
    followers_query = 'data[*].answer_count,articles_count,gender,follower_count,is_followed,is_following,badge[?(type=best_answerer)].topics'

    def start_requests(self):
        yield Request(self.user_url.format(user=self.start_user, include=self.user_query), self.parse_user)
        yield Request(self.followers_url.format(user=self.start_user, include=self.followers_query, limit=20, offset=0), self.parse_followers)
        yield Request(self.follows_url.format(user=self.start_user, include=self.follows_query, limit=20, offset=0), self.parse_follows)

    # save a user's detailed information
    def parse_user(self, response):
        result = json.loads(response.text)
        item = ZhihuuserItem()
        for field in item.fields:
            if field in result.keys():
                item[field] = result.get(field)
        yield item

    # parse the list of users this user follows and schedule their detail pages
    def parse_follows(self, response):
        results = json.loads(response.text)
        if 'data' in results.keys():
            for result in results.get('data'):
                yield Request(self.user_url.format(user=result.get('url_token'), include=self.user_query), self.parse_user)
        if 'paging' in results.keys() and results.get('paging').get('is_end') == False:
            next_page = results.get('paging').get('next')
            yield Request(next_page, self.parse_follows)

    # parse the list of this user's followers and schedule their detail pages
    def parse_followers(self, response):
        results = json.loads(response.text)
        if 'data' in results.keys():
            for result in results.get('data'):
                yield Request(self.user_url.format(user=result.get('url_token'), include=self.user_query), self.parse_user)
        if 'paging' in results.keys() and results.get('paging').get('is_end') == False:
            next_page = results.get('paging').get('next')
            yield Request(next_page, self.parse_followers)

Start the crawler processes on Windows and Linux respectively, then inspect the data collected:

Start the crawler on Windows:

scrapy crawl zhihu

Start the crawler on the Alibaba Cloud Linux instance:

scrapy crawl zhihu

Check Redis:
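With both crawlers running, you can confirm in RedisDesktopManager (or from code) that they really share one scheduler. By default scrapy-redis keeps its data under keys named after the spider: zhihu:requests (the pending request queue), zhihu:dupefilter (request fingerprints), and zhihu:items (items written by RedisPipeline). A small sketch, again with host and password as placeholders:

import redis

# Placeholders: the ECS IP and the requirepass password configured earlier.
r = redis.StrictRedis(host='your-ecs-ip', port=6379, password='xxxxxxx')

key_type = r.type('zhihu:requests')   # zset for the default priority queue, list for FIFO/LIFO queues
pending = r.zcard('zhihu:requests') if key_type == b'zset' else r.llen('zhihu:requests')
print('pending requests :', pending)
print('seen fingerprints:', r.scard('zhihu:dupefilter'))
print('items in redis   :', r.llen('zhihu:items'))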

Check the MongoDB database:


