Crawler Notes (2): urllib

urllib is Python's built-in HTTP request library.

1) urllib.request: the request module

urllib.request.urlopen(url,data=None,[timeout,]*,cafile=None,capath=None,cadefault=False,context=None)

# Parameters:
# url: the URL to request
# data: the request body, used for POST requests
# timeout: timeout in seconds
# the remaining arguments configure CA certificate validation

Examples:
a) Fetching with GET:

import urllib.request

response = urllib.request.urlopen('http://www.baidu.com')
print(response.read().decode('utf-8'))

b) Fetching with POST:

1. Build the data object first
2. Use bytes together with urllib.parse.urlencode to build a data argument that urlopen accepts


import urllib.parse
import urllib.request

data = bytes(urllib.parse.urlencode({'word': 'hello'}), encoding='utf8')
response = urllib.request.urlopen('http://httpbin.org/post', data=data)
print(response.read())
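The urlencode call above serializes the dict into a form-encoded string before bytes converts it; a quick offline check of what actually gets sent (no server needed, reusing the same {'word': 'hello'} payload):

```python
from urllib.parse import urlencode

# urlencode serializes a dict into application/x-www-form-urlencoded format
payload = urlencode({'word': 'hello'})
print(payload)                          # word=hello
print(bytes(payload, encoding='utf8'))  # b'word=hello'
```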

c) Using the timeout option:

import socket
import urllib.request
import urllib.error

try:
    response = urllib.request.urlopen('http://httpbin.org/get', timeout=0.1)
except urllib.error.URLError as e:
    if isinstance(e.reason, socket.timeout):
        print('TIME OUT')

d) Separating the URL from the data (form):
use request.Request() to wrap the request into a request object

from urllib import request,parse

url = 'http://httpbin.org/post'
headers = {
    'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)',
    'Host': 'httpbin.org'
}
form = {
    'name': 'Kim'
}
data = bytes(parse.urlencode(form), encoding='utf8')
req = request.Request(url=url, data=data, headers=headers, method='POST')
response = request.urlopen(req)  # request.urlopen, since urllib itself was not imported
print(response.read().decode('utf-8'))

e) The add_header() method

from urllib import request,parse

url = 'http://httpbin.org/post'
form = {
    'name': 'Kim'
}
data = bytes(parse.urlencode(form), encoding='utf8')
req = request.Request(url=url, data=data, method='POST')
req.add_header('User-Agent', 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)')
response = request.urlopen(req)  # request.urlopen, since urllib itself was not imported
print(response.read().decode('utf-8'))

Advanced handler usage: build an opener with urllib.request.build_opener(handler) and use it to send requests.

a) Proxy settings: when a site limits requests from a single IP, use a proxy to work around the restriction:

1. Wrap the proxies in a urllib.request.ProxyHandler() object
2. Pass the proxy_handler object to urllib.request.build_opener() to build an opener
3. Call the opener's open() method on the target URL.

import socket
import urllib.error
import urllib.request

proxy_handler = urllib.request.ProxyHandler({
    'http': 'http://180.125.137.126:8000',
    'https': 'http://106.112.169.216:808'
})
opener = urllib.request.build_opener(proxy_handler)
try:
    response = opener.open('http://httpbin.org/get')
    print(response.read().decode('utf-8'))
except urllib.error.URLError as e:
    if isinstance(e.reason, socket.timeout):
        print('TIME OUT')

b) Working with cookies: accessing pages that require a login

使用導(dǎo)入的http.cookiejar
1. http.cookiejar.MozillaCookieJar()獲取火狐瀏覽器格式的cookie
2. urllib.request.HTTPCookieProcessor()制作handler
3. 使用urllib.request.build_opener()來創(chuàng)建opener
4. 使用opener.open()
import http.cookiejar,urllib.request

cookie = http.cookiejar.CookieJar()
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
for item in cookie:
    print(item.name+"="+item.value)

-------------------------------------------------------------------------------------
import http.cookiejar,urllib.request

filename='cookie.txt'
cookie = http.cookiejar.MozillaCookieJar(filename)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
cookie.save(ignore_discard=True,ignore_expires=True)

-------------------------------------------------------------------------------------
Load cookies from a file and attach them to the request (cookie.load):
import http.cookiejar,urllib.request

cookie = http.cookiejar.MozillaCookieJar()
cookie.load('cookie.txt',ignore_discard=True,ignore_expires=True)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
print(response.read().decode('utf-8'))

2) urllib.error: catching errors to keep the crawler robust.

*Catch HTTPError before URLError (HTTPError is a subclass of URLError)

from urllib import request,error

try:
    response = request.urlopen('http://cuiqingcai.com/index.htm')
except error.URLError as e:
    print(e.reason)
------------------------------------------------------------------------------
標(biāo)準(zhǔn)寫法:
from urllib import request,error

try:
    response = request.urlopen('http://cuiqingcai.com/index.htm')
except error.HTTPError as e:
    print(e.reason,e.code,e.headers,sep='\n')
except error.URLError as e:
    print(e.reason)
else:
    print('Request Successfully')

3) urllib.parse: the URL parsing module (splitting URLs)

from urllib.parse import urlparse

result = urlparse('www.baidu.com/index.html;user?id=5#comment', scheme='https')
print(result)

Output: ParseResult(scheme='https', netloc='', path='www.baidu.com/index.html', params='user', query='id=5', fragment='comment')
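As the output shows, without a scheme:// prefix the host ends up in path and netloc stays empty. With the prefix present, urlparse splits the host out correctly; a quick sketch:

```python
from urllib.parse import urlparse

# With an explicit scheme the host lands in netloc rather than path
result = urlparse('https://www.baidu.com/index.html;user?id=5#comment')
print(result.netloc)  # www.baidu.com
print(result.path)    # /index.html
```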

urlunparse:

from urllib.parse import urlunparse

data = ['http', 'www.baidu.com', 'index.html', 'user','a=6', 'comment']
print(urlunparse(data))


Output: http://www.baidu.com/index.html;user?a=6#comment

urlencode:

from urllib.parse import urlencode

params = {
    'name':'germey',
    'age':22
}
base_url = 'http://www.baidu.com?'
url = base_url+urlencode(params)
print(url)


Output: http://www.baidu.com?name=germey&age=22
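The module also provides urljoin, which resolves a relative URL against a base URL, with components of the second argument taking precedence; a minimal sketch:

```python
from urllib.parse import urljoin

# A relative path is resolved against the base URL
print(urljoin('http://www.baidu.com', 'FAQ.html'))
# An absolute URL as the second argument overrides the base entirely
print(urljoin('http://www.baidu.com/about.html', 'https://cuiqingcai.com/FAQ.html'))
```

Output:
http://www.baidu.com/FAQ.html
https://cuiqingcai.com/FAQ.html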