While working through the generator section of Liao Xuefeng's Python tutorial, consider the following code:
def odd():
    print('step 1')
    yield 1
    print('step 2')
    yield 3
    print('step 3')
    yield 5

if __name__ == "__main__":
    o = odd()
    for index in o:
        print(index)
The output is:
step 1
1
step 2
3
step 3
5
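The same sequence can be reproduced by driving the generator by hand with next(), which makes the pause-and-resume behavior explicit (a small sketch reusing the odd() function above):

```python
def odd():
    print('step 1')
    yield 1
    print('step 2')
    yield 3
    print('step 3')
    yield 5

o = odd()
print(next(o))  # prints 'step 1', then 1
print(next(o))  # resumes after the first yield: prints 'step 2', then 3
print(next(o))  # resumes after the second yield: prints 'step 3', then 5
# A fourth next(o) would raise StopIteration, which is exactly
# the signal a for loop uses to know the generator is exhausted.
```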
A generator function runs each time next() is called on it: it returns when execution reaches a yield statement, and on the next call it resumes right after the yield it last returned from. A for loop is simply calling next() under the hood, and once you see that, the output above follows directly. In spiders built on the Scrapy framework you will often run into yield used the same way:
def start_requests(self):
    self.log('------' + __name__ + ' start requests ------')
    if self.task_running is False:
        return
    apps = appinfo_mq.query_star_ids(self.market, self.country, self.start_id,
                                     self.start_index, self.keyword_count - self.start_index)
    header = CommentsSpider.headers
    # apps = ['548984223']  # file manager app, kept for testing
    if apps is not None:
        log_file = open(self.log_path, 'a')
        for app in apps:
            app = app.replace('id', '')
            log_file.write(str(app) + '---')
            self.page_index[str(app)] = 1
            self.is_first[str(app)] = True
            new_url = CommentsSpider.url.format(app, 1)
            yield Request(new_url, headers=header, meta={'app_id': app})
        log_file.close()
    else:
        yield None
It is called like this:
for req in self.start_requests():
    if req is not None:
        self.crawler.engine.crawl(req, spider=self)
        self.no_keyword = False
    else:
        self.task_running = False
        self.no_keyword = True
        timer.check_keyword_recover(self.request_action)
        break
Our start_requests() method produces a generator; the loop pulls one Request() out at a time and hands each to the engine, self.crawler.engine, to crawl. Request() is Scrapy's built-in wrapper for a network request. By putting all the requests into a generator, the spider can process them lazily and flexibly later on.