Preface
2017: time to set a small goal, and learn Python in depth~
Scraping target
Packages used
import codecs  # optional, used to specify the character set of the output file
import requests  # fetches the page data
from bs4 import BeautifulSoup  # parses the fetched HTML
Implementation steps
1. Write a minimal fetch script
url = 'https://www.douban.com/'
webPage = requests.get(url).text
soup = BeautifulSoup(webPage,"html.parser")
print(soup.title) # <title>豆瓣</title>
2. Analyze the scraping target
- Inspecting the page with F12 shows that the movies are rendered as a list inside the grid_view ordered list; each "row" has the poster image on the left and the movie title plus other descriptive text on the right.
- Analyze the pagination mechanism, focusing on how the URL differs from page to page.
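The pagination URLs follow a predictable pattern; a minimal sketch, assuming (from clicking through the pages) that each page shows 25 movies and the offset is carried in a `start` query parameter:

```python
# Sketch of the Top250 pagination pattern (assumption: 25 movies per page,
# offset carried in the "start" query parameter of the page URL).
BASE = 'http://movie.douban.com/top250'

def page_url(page):
    """Return the URL of a 1-based page number."""
    return '{}?start={}&filter='.format(BASE, (page - 1) * 25)

urls = [page_url(p) for p in range(1, 11)]  # the 10 pages covering the Top 250
```

Rather than generating these URLs up front, the crawler below simply follows the "next page" link until it disappears, which amounts to the same traversal.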
3. Scrape the movie titles
import codecs  # optional, used to specify the character set of the output file
import requests  # fetches the page data
from bs4 import BeautifulSoup  # parses the fetched HTML

DOWNLOAD_URL = 'http://movie.douban.com/top250/'

def download_page(url):
    # send a browser-like User-Agent so the request is not rejected
    return requests.get(url, headers={
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36'
    }).content

def parse_html(html):
    soup = BeautifulSoup(html, "html.parser")
    movie_list_soup = soup.find('ol', attrs={'class': 'grid_view'})
    movie_name_list = []
    print("Page currently being parsed: " + soup.find('span', attrs={'class': 'thispage'}).getText())
    # each page holds 25 movies, so derive the rank offset from the page number
    top_num = 1 + (int(soup.find('span', attrs={'class': 'thispage'}).getText()) - 1) * 25
    for movie_li in movie_list_soup.find_all('li'):
        detail = movie_li.find('div', attrs={'class': 'hd'})
        movie_name_list.append("## Top " + str(top_num))
        # title: concatenate the main and alternate title spans
        movie_name = ""
        for sp in detail.find_all('span', attrs={'class': 'title'}):
            movie_name += sp.text
        movie_name_list.append("### " + movie_name)
        top_num += 1
    # follow the "next page" link; on the last page it is absent
    next_page = soup.find('span', attrs={'class': 'next'}).find('a')
    if next_page:
        return movie_name_list, DOWNLOAD_URL + next_page['href']
    return movie_name_list, None

def main():
    url = DOWNLOAD_URL
    with codecs.open('douban_moviesList_top250.md', 'wb', encoding='utf-8') as fp:
        while url:
            html = download_page(url)
            movies, url = parse_html(html)
            fp.write(u'{movies}\n'.format(movies='\n'.join(movies)))

if __name__ == '__main__':
    main()

(Screenshot: the scraped movie titles)
4. Build on the titles and parse the remaining fields
- Poster download sketch
img = movie_li.find('div', attrs={'class': 'pic'}).find('a').find('img')
try:
    img_req = requests.get(img["src"], timeout=20)
    img_localhost = 'douban_moviesList_top250\\' + str(top_num) + '.jpg'
    with open(img_localhost, 'wb') as f:
        f.write(img_req.content)
    # embed the saved poster into the markdown output
    movie_name_list.append('![](douban_moviesList_top250/' + str(top_num) + '.jpg "douban_moviesList_top250")')
except requests.exceptions.ConnectionError:
    print('[Error] image could not be downloaded; dead link: ' + img["src"])
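The snippet above writes into a douban_moviesList_top250 folder and assumes it already exists. A small sketch that creates the folder first and builds the path portably with os.path.join instead of a hard-coded backslash (an adjustment not in the original):

```python
import os

# create the output folder before the crawl loop runs
save_dir = 'douban_moviesList_top250'
os.makedirs(save_dir, exist_ok=True)  # no error if the folder already exists

top_num = 1  # example rank; in the crawler this comes from parse_html
img_localhost = os.path.join(save_dir, str(top_num) + '.jpg')
```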
Summary
Since most Douban pages are static, the crawl is fairly straightforward; the main techniques involved are the pagination loop, exception handling for image downloads, and the use of find and find_all in BeautifulSoup......
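The find / find_all distinction mentioned above is easy to see on a tiny inline document (the HTML string here is a made-up example, not real Douban markup):

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

html = ('<ol class="grid_view">'
        '<li><span class="title">A</span></li>'
        '<li><span class="title">B</span></li>'
        '</ol>')
soup = BeautifulSoup(html, 'html.parser')

first = soup.find('span', attrs={'class': 'title'})       # first match (or None)
titles = soup.find_all('span', attrs={'class': 'title'})  # list of all matches
```

In the crawler this is why find is used to pick out single containers (the grid_view list, the hd div) while find_all drives the loops over repeated elements (the li rows, the title spans).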
Well, that's it for now. I will absolutely not admit that I could scrape Douban girls or anything like that......
Full code

Douban Top250

