1. Exception Handling
The URLError class comes from the error module of the urllib library. It inherits from OSError and is the base class of the error module: any exception raised by the request module can be handled through it.
from urllib import request, error

try:
    response = request.urlopen('http://cuiqingcai.com/index.htm')
except error.HTTPError as e:
    # HTTPError is the subclass, so it must be caught first
    print(e.reason, e.code, e.headers, sep='\n')
except error.URLError as e:
    print(e.reason)
else:
    print('Request Successfully')
The output is as follows:
Not Found
404
Server: nginx/1.10.3 (Ubuntu)
Date: Tue, 09 Apr 2019 07:25:19 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: close
Vary: Cookie
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, max-age=0
Link: <https://cuiqingcai.com/wp-json/>; rel="https://api.w.org/"
- Think of it this way: URLError vs. HTTPError ==> HTTPError is a subclass of URLError.
- URLError has the attribute reason; HTTPError has reason (the error cause), headers (the response headers), and code (the status code).
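The subclass relationship can be verified directly in code:

```python
import urllib.error

# HTTPError is a subclass of URLError, which is why the HTTPError
# branch must come before the URLError branch in a try/except.
print(issubclass(urllib.error.HTTPError, urllib.error.URLError))  # True
```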
import socket
import urllib.request
import urllib.error

try:
    # an extremely short timeout to force a timeout error
    response = urllib.request.urlopen('https://www.baidu.com', timeout=0.01)
except urllib.error.URLError as e:
    print(type(e.reason))
    if isinstance(e.reason, socket.timeout):
        print('TIME OUT')
2. Parsing URLs
The parse module of the urllib library defines a standard interface for handling URLs, e.g. extracting, combining, and transforming the various parts of a URL.
!!! ***scheme (protocol)://netloc (domain)/path (access path);params (parameters)?query (query string)#fragment (anchor)***
-
urlencode()!!! This function was mentioned earlier and is important: it is used to build GET request parameters.
from urllib.parse import urlencode
params = {
'name': 'germey',
'age': 22
}
base_url = 'http://www.baidu.com?'
url = base_url + urlencode(params)
print(url)
The output is as follows:
http://www.baidu.com?name=germey&age=22
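urlencode() can also handle sequence values. A small sketch (the parameter names here are made up):

```python
from urllib.parse import urlencode

# With doseq=True a list value is expanded into repeated key=value pairs.
params = {'name': 'germey', 'hobby': ['python', 'spider']}
print(urlencode(params, doseq=True))  # name=germey&hobby=python&hobby=spider
```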
urllib.parse.urlparse(urlstring, scheme='', allow_fragments=True):
This method identifies a URL and splits it into its parts. urlstring is required: the URL to be parsed; scheme is the default protocol (e.g. http, https), used only when the URL itself carries none; allow_fragments controls whether the fragment is recognized.
Example 0:
from urllib.parse import urlparse
result = urlparse('http://www.baidu.com/index.html;user?id=5#comment')
print(type(result), result)
The output is as follows:
<class 'urllib.parse.ParseResult'> ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html', params='user', query='id=5', fragment='comment')
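The ParseResult can be turned back into the original string with geturl(); a quick check:

```python
from urllib.parse import urlparse

url = 'http://www.baidu.com/index.html;user?id=5#comment'
result = urlparse(url)
# geturl() reassembles the six parsed parts back into a URL string.
print(result.geturl() == url)  # True
```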
Example 1:
from urllib.parse import urlparse
result = urlparse('www.baidu.com/index.html;user?id=5#comment', scheme='https')
print(result)
"""
Output:
ParseResult(scheme='https', netloc='', path='www.baidu.com/index.html', params='user', query='id=5', fragment='comment')
"""
Example 2:
from urllib.parse import urlparse
result = urlparse('http://www.baidu.com/index.html;user?id=5#comment', scheme='https')
print(result)
"""
Output:
ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html', params='user', query='id=5', fragment='comment')
"""
Example 3:
from urllib.parse import urlparse
result = urlparse('http://www.baidu.com/index.html;user?id=5#comment', allow_fragments=False)
print(result)
"""
Output:
ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html', params='user', query='id=5#comment', fragment='')
"""
Example 4:
from urllib.parse import urlparse
result = urlparse('http://www.baidu.com/index.html#comment', allow_fragments=False)
print(result)
"""
Output:
ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html#comment', params='', query='', fragment='')
"""
-
urlunparse(data): builds a URL from its parts. The argument must have length 6 and be a list, tuple, or similar iterable.
from urllib.parse import urlunparse
data = ['http', 'www.baidu.com', 'index.html', 'user', 'a=6', 'comment']
print(urlunparse(data))
"""
The result is as follows:
http://www.baidu.com/index.html;user?a=6#comment
"""
-
urljoin(): resolves and merges two links to generate a new URL.
from urllib.parse import urljoin
print(urljoin('http://www.baidu.com', 'FAQ.html'))
print(urljoin('http://www.baidu.com', 'https://cuiqingcai.com/FAQ.html'))
print(urljoin('http://www.baidu.com/about.html', 'https://cuiqingcai.com/FAQ.html'))
print(urljoin('http://www.baidu.com/about.html', 'https://cuiqingcai.com/FAQ.html?question=2'))
print(urljoin('http://www.baidu.com?wd=abc', 'https://cuiqingcai.com/index.php'))
print(urljoin('http://www.baidu.com', '?category=2#comment'))
print(urljoin('www.baidu.com', '?category=2#comment'))
print(urljoin('www.baidu.com#comment', '?category=2'))
The output is as follows:
http://www.baidu.com/FAQ.html
https://cuiqingcai.com/FAQ.html
https://cuiqingcai.com/FAQ.html
https://cuiqingcai.com/FAQ.html?question=2
https://cuiqingcai.com/index.php
http://www.baidu.com?category=2#comment
www.baidu.com?category=2#comment
www.baidu.com?category=2
=> A bit confusing? No worries, there is a rule. Simply put: of the two arguments, the second takes precedence. Whatever it is missing (scheme, netloc, path) is filled in from the first; whatever it supplies overrides the first.
=> Note: everything after the first argument's path (i.e. its params, query, and fragment) plays no role in the result (the last print above makes this clear).
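The same resolution rule applies to relative paths, just like links on a web page; a few illustrative calls:

```python
from urllib.parse import urljoin

base = 'http://www.baidu.com/a/b/c.html'
print(urljoin(base, 'd.html'))     # http://www.baidu.com/a/b/d.html (replaces last segment)
print(urljoin(base, '/d.html'))    # http://www.baidu.com/d.html     (absolute path)
print(urljoin(base, '../d.html'))  # http://www.baidu.com/a/d.html   (goes up one level)
```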
-
urlsplit(): similar to urlparse(). The difference: urlsplit() merges params into path and returns only 5 parts. The result is a named tuple, so values can be fetched by attribute as well as by index.
from urllib.parse import urlsplit
result = urlsplit('http://www.baidu.com/index.html;user?id=5#comment')
print(type(result), result)
print(result.scheme, result[0])
"""
<class 'urllib.parse.SplitResult'> SplitResult(scheme='http', netloc='www.baidu.com', path='/index.html;user', query='id=5', fragment='comment')
http http
"""
from urllib.parse import urlparse
result = urlparse('http://www.baidu.com/index.html;user?id=5#comment')
print(type(result), result)
"""
<class 'urllib.parse.ParseResult'> ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html', params='user', query='id=5', fragment='comment')
"""
-
urlunsplit(): similar to urlunparse(). The only difference: the argument passed in must have length 5.
from urllib.parse import urlunsplit
data = ['http', 'www.baidu.com', 'index.html', 'a=6', 'comment']
print(urlunsplit(data))
"""
http://www.baidu.com/index.html?a=6#comment
"""
from urllib.parse import urlunparse
data = ['http', 'www.baidu.com', 'index.html', 'user', 'a=6', 'comment']
print(urlunparse(data))
"""
http://www.baidu.com/index.html;user?a=6#comment
"""
-
parse_qs(): deserialization, turning a GET query string back into a dictionary.
from urllib.parse import parse_qs
query = 'name=germey&age=22'
print(parse_qs(query))
"""
{'name': ['germey'], 'age': ['22']}
"""
-
parse_qsl(): converts the query parameters into a list of tuples.
from urllib.parse import parse_qsl
query = 'name=germey&age=22'
print(parse_qsl(query))
"""
[('name', 'germey'), ('age', '22')]
"""
-
quote() and unquote(): URL encoding and decoding.
from urllib.parse import quote
keyword = "我愛你"
url = 'https://www.baidu.com/s?wd=' + quote(keyword)
print(url)
"""
https://www.baidu.com/s?wd=%E6%88%91%E7%88%B1%E4%BD%A0
"""
from urllib.parse import unquote
url = 'https://www.baidu.com/s?wd=%E6%88%91%E7%88%B1%E4%BD%A0'
print(unquote(url))
"""
https://www.baidu.com/s?wd=我愛你
"""
3. Analyzing the Robots Protocol (crawler protocol / robots exclusion protocol)
It tells crawlers and search engines which pages may be crawled and which may not. It usually takes the form of a text file named robots.txt.
Example: robots.txt
User-agent: *    (* stands for all crawlers)
Disallow: /      (forbids crawling everything under '/')
Allow: /public/  (a directory that may be crawled)
With the robotparser module of urllib, we can analyze a site's robots.txt.
=> Commonly used methods (only 3 are listed here; there are also parse(), mtime(), and modified(), to be covered when needed):
- set_url(): sets the link to the robots.txt file.
- read(): fetches robots.txt and analyzes it. It must be called!!!
- can_fetch(): argument 1 is a User-agent, argument 2 a URL. Returns True/False, indicating whether that URL may be crawled.
from urllib.robotparser import RobotFileParser
rp = RobotFileParser()
rp.set_url('http://www.itdecent.cn/robots.txt')
rp.read()
print(rp.can_fetch('*', 'http://www.itdecent.cn/p/b67554025d7d'))
print(rp.can_fetch('*', 'http://www.itdecent.cn/search?q=python&page=1&type=collections'))
"""
The result is as follows:
False
False
"""
The reading and analysis can also be done with the parse() method. That one is slightly more involved; I'll stick with the simple option, good enough is good enough.
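For reference, parse() takes an iterable of robots.txt lines, so rules can be fed in directly without a network request. A sketch with made-up rules and a made-up domain:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content, supplied line by line via parse().
rules = [
    'User-agent: *',
    'Disallow: /private/',
    'Allow: /public/',
]

rp = RobotFileParser()
rp.parse(rules)
print(rp.can_fetch('*', 'http://example.com/public/page.html'))   # True
print(rp.can_fetch('*', 'http://example.com/private/page.html'))  # False
```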