1. Creating a scrapy project
scrapy startproject first                  # create the project
cd first
scrapy genspider chouti dig.chouti.com     # generate a spider file
scrapy crawl chouti --nolog                # run the spider with logging suppressed
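For reference, genspider produces a skeleton roughly like this (the exact template varies slightly by Scrapy version):

    import scrapy

    class ChoutiSpider(scrapy.Spider):
        name = 'chouti'
        allowed_domains = ['dig.chouti.com']
        start_urls = ['http://dig.chouti.com/']

        def parse(self, response):
            # response handling goes here
            pass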
Fix for garbled Chinese output on Windows (add at the top of the script):
import sys, io
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')
Scrapy's built-in selectors replace bs4:
- response.xpath
- from scrapy.selector import HtmlXPathSelector   (legacy API, deprecated in newer Scrapy)
  hxs = HtmlXPathSelector(response)   # takes the response as its argument
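A minimal parse sketch using response.xpath (the selectors here are illustrative, not real chouti markup):

    def parse(self, response):
        # one block per post; adjust the selectors to the real page
        for row in response.xpath('//div[@class="news-item"]'):
            title = row.xpath('./a/text()').extract_first()
            href = row.xpath('./a/@href').extract_first()
            print(title, href)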
The spider scrapes the data; parse yields item objects.
items act as the model: they declare the fields to persist.
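A sketch of items.py (the field names title/href are assumptions for this example):

    import scrapy

    class FirstItem(scrapy.Item):
        # one Field per value you want to persist
        title = scrapy.Field()
        href = scrapy.Field()

The spider then does: yield FirstItem(title=title, href=href)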
Pipelines do the persistence; they must be registered in settings.py.
pipeline hooks:
from_crawler(cls, crawler)   runs first; use it to read settings
    path = crawler.settings.get("PATH")   # setting names must be uppercase
    return cls(path)
__init__(self, path)   initializer
    self.path = path
open_spider(self, spider)   runs once when the spider opens, before items flow
close_spider(self, spider)   runs once when the spider closes
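Putting the hooks together, a file-writing pipeline might look like this (PATH, FirstPipeline, and news.log are assumed names; register it in settings.py with ITEM_PIPELINES = {'first.pipelines.FirstPipeline': 300} and PATH = 'news.log'):

    class FirstPipeline(object):
        def __init__(self, path):
            self.path = path
            self.f = None

        @classmethod
        def from_crawler(cls, crawler):
            # runs first; read config from settings (keys must be uppercase)
            path = crawler.settings.get("PATH")
            return cls(path)

        def open_spider(self, spider):
            # runs once when the spider opens
            self.f = open(self.path, 'a', encoding='utf-8')

        def process_item(self, item, spider):
            self.f.write(str(dict(item)) + '\n')
            return item   # hand the item to the next pipeline

        def close_spider(self, spider):
            # runs once when the spider closes
            self.f.close()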
Second-level downloads (following extracted links):
from scrapy.http import Request
yield Request(url=page_url, callback=self.parse, meta={'cookiejar': True})
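A sketch of a parse that yields items, then schedules the next pages back through itself (the pagination selector is illustrative):

    from scrapy.http import Request

    def parse(self, response):
        # ... yield items for the current page first ...
        for href in response.xpath('//div[@id="page-area"]//a/@href').extract():
            page_url = response.urljoin(href)
            # second-level download: the same callback handles the next page
            yield Request(url=page_url, callback=self.parse)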
Dropping an item so it is not passed to the next pipeline's process_item:
from scrapy.exceptions import DropItem
raise DropItem()
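For example, a dedup pipeline that drops items it has already seen (minimal sketch; assumes the href field from the item example above):

    from scrapy.exceptions import DropItem

    class DedupPipeline(object):
        def __init__(self):
            self.seen = set()

        def process_item(self, item, spider):
            if item['href'] in self.seen:
                # stops here; later pipelines never get this item
                raise DropItem("duplicate item")
            self.seen.add(item['href'])
            return item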
Ways to get cookies:
response.headers.getlist("Set-Cookie")
from scrapy.http.cookies import CookieJar
cookie_jar = CookieJar()
cookie_jar.extract_cookies(response, response.request)
cookie_jar._cookies.items()
Letting scrapy manage cookies automatically:
meta={'cookiejar': True}
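A sketch of pulling cookies manually in a callback (e.g. after a login request), using the CookieJar import fixed above; note that _cookies is a private attribute:

    from scrapy.http.cookies import CookieJar

    def parse(self, response):
        cookie_jar = CookieJar()
        cookie_jar.extract_cookies(response, response.request)
        # nested dict: {domain: {path: {name: Cookie}}}
        for domain, paths in cookie_jar._cookies.items():
            print(domain, paths)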
De-duplication
settings.py: DUPEFILTER_CLASS = 'first.dupefilters.MyDupeFilter'   # dotted path to your own filter class
Getting a unique fingerprint for a request:
from scrapy.utils.request import request_fingerprint
unique = request_fingerprint(request)   # takes a Request object, not a bare url string
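A minimal custom dupe filter built on request_fingerprint (put it wherever DUPEFILTER_CLASS points; the class name MyDupeFilter matches the setting above):

    from scrapy.dupefilters import BaseDupeFilter
    from scrapy.utils.request import request_fingerprint

    class MyDupeFilter(BaseDupeFilter):
        def __init__(self):
            self.visited = set()

        def request_seen(self, request):
            # fingerprint is computed from the Request, not the bare url
            fp = request_fingerprint(request)
            if fp in self.visited:
                return True   # scrapy drops this request as a duplicate
            self.visited.add(fp)
            return False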
USER_AGENT: configurable in settings.py
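For example, in settings.py (the UA string is just an example):

    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'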