5. scrapy-redis usage overview

1. Downloading and installing scrapy-redis
On Windows: pip install scrapy-redis, or python -m pip install scrapy-redis
2. What scrapy-redis does and its features
Role: scrapy-redis provides Redis-backed components for Scrapy. You can start multiple spider instances that all share a single redis queue, which makes it a good fit for broad, multi-domain crawls. It also supports distributed post-processing: scraped items are pushed into a redis queue, so any number of post-processing processes can share that items queue.
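Because the items end up as serialized entries in a redis list, a separate worker process can consume them independently of the crawl. Below is a minimal sketch of such a post-processing worker, assuming the default REDIS_ITEMS_KEY pattern ('<spider>:items') and the default JSON item serializer shown in the settings further down; the spider name myspider and the connection values are placeholders.
import json
import redis

# Minimal post-processing worker sketch (not part of scrapy-redis itself).
# Assumes the default REDIS_ITEMS_KEY pattern ('<spider>:items') and the
# default JSON item serializer; key name and connection are placeholders.
r = redis.StrictRedis(host='localhost', port=6379)

while True:
    # BLPOP blocks until an item arrives; returns (key, value) or None on timeout.
    popped = r.blpop('myspider:items', timeout=30)
    if popped is None:
        continue
    _, data = popped
    item = json.loads(data)
    print(item)  # replace with real post-processing (store to a DB, index, etc.)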
3. Requirements
Python 2.7 or 3.4+, Redis >= 2.8, Scrapy >= 1.1, redis-py >= 2.10. Official usage examples: https://github.com/rmax/scrapy-redis
4. Usage
Use the following settings in your project:
# Enables scheduling storing requests queue in redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Ensure all spiders share same duplicates filter through redis.
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Default requests serializer is pickle, but it can be changed to any module
# with loads and dumps functions. Note that pickle is not compatible between
# python versions.
# Caveat: In python 3.x, the serializer must return strings keys and support
# bytes as values. Because of this reason the json or msgpack module will not
# work by default. In python 2.x there is no such issue and you can use
# 'json' or 'msgpack' as serializers.
#SCHEDULER_SERIALIZER = "scrapy_redis.picklecompat"

# Don't cleanup redis queues, allows to pause/resume crawls.
#SCHEDULER_PERSIST = True

# Schedule requests using a priority queue. (default)
#SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'

# Alternative queues.
#SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.FifoQueue'
#SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.LifoQueue'

# Max idle time to prevent the spider from being closed when distributed crawling.
# This only works if queue class is SpiderQueue or SpiderStack,
# and may also block the same time when your spider start at the first time (because the queue is empty).
#SCHEDULER_IDLE_BEFORE_CLOSE = 10

# Store scraped item in redis for post-processing.
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300
}

# The item pipeline serializes and stores the items in this redis key.
#REDIS_ITEMS_KEY = '%(spider)s:items'

# The items serializer is by default ScrapyJSONEncoder. You can use any
# importable path to a callable object.
#REDIS_ITEMS_SERIALIZER = 'json.dumps'

# Specify the host and port to use when connecting to Redis (optional).
#REDIS_HOST = 'localhost'
#REDIS_PORT = 6379

# Specify the full Redis URL for connecting (optional).
# If set, this takes precedence over the REDIS_HOST and REDIS_PORT settings.
#REDIS_URL = 'redis://user:pass@hostname:9001'

# Custom redis client parameters (i.e.: socket timeout, etc.)
#REDIS_PARAMS  = {}
# Use custom redis client class.
#REDIS_PARAMS['redis_cls'] = 'myproject.RedisClient'

# If True, it uses redis' ``SPOP`` operation. You have to use the ``SADD``
# command to add URLs to the redis queue. This could be useful if you
# want to avoid duplicates in your start urls list and the order of
# processing does not matter.
#REDIS_START_URLS_AS_SET = False

# Default start urls key for RedisSpider and RedisCrawlSpider.
#REDIS_START_URLS_KEY = '%(name)s:start_urls'

# Use other encoding than utf-8 for redis.
#REDIS_ENCODING = 'latin1'
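
The block above lists every available option. As a point of reference, a minimal settings.py that only enables distributed scheduling, shared deduplication, pause/resume persistence and the redis item pipeline could look like the sketch below; the connection URL is a placeholder.
# Minimal scrapy-redis settings sketch (values are examples).
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER_PERSIST = True  # keep the redis queues so crawls can be paused/resumed

ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,  # push scraped items into redis
}

REDIS_URL = 'redis://localhost:6379'  # or set REDIS_HOST / REDIS_PORT instead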

5. Feeding a spider from redis
The class scrapy_redis.spiders.RedisSpider enables the spider to read urls from redis. The urls in the redis queue are processed one after another; if the first request yields more requests, the spider processes those requests before fetching another url from redis. Create a myspider.py file:
from scrapy_redis.spiders import RedisSpider

class MySpider(RedisSpider):
    name = 'myspider'

    def parse(self, response):
        # do stuff
        pass
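
By default a RedisSpider reads its start urls from the '<name>:start_urls' list (see REDIS_START_URLS_KEY above). If a different list is wanted, the redis_key class attribute can be set on the spider; here is a small variation of the example above, with an illustrative key name.
from scrapy_redis.spiders import RedisSpider

class MySpider(RedisSpider):
    name = 'myspider'
    # Read seeds from a custom redis list instead of the default
    # '<name>:start_urls' (key name chosen for illustration).
    redis_key = 'myspider:seed_urls'

    def parse(self, response):
        # do stuff
        pass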

Run the spider:
scrapy runspider myspider.py

Then push urls into redis:
redis-cli lpush myspider:start_urls http://baidu.com
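
The same seeding can be done from Python with redis-py, which is convenient when pushing many urls at once; this sketch simply mirrors the redis-cli command above and assumes redis is running on localhost.
import redis

# Seed start urls for MySpider programmatically (equivalent to redis-cli lpush).
# The key follows the default REDIS_START_URLS_KEY pattern: '<name>:start_urls'.
r = redis.StrictRedis(host='localhost', port=6379)
urls = [
    'http://baidu.com',
    # add more seed urls here as needed
]
for url in urls:
    r.lpush('myspider:start_urls', url)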