Active Python Content-Sharing Sites: Statistics and a Roundup of Article Links

Tallying the sources cited by the articles and tutorials recommended in each issue of PythonWeekly gives a rough picture of which sites are active hubs for Python content abroad, and lets us compare how the venues publishing quality Python writing have shifted over time.
"Old" covers roughly 2012-2013; "new" covers this year (2017). Result format: [(site root domain, cumulative count), ...].
(The two samples differ in size: "new" is the most recent 20 issues, "old" the first 81. Only article and tutorial links are counted.)
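
The tallying step itself is small; here is a minimal sketch of the approach the full scripts below use, with tldextract pulling the root domain and collections.Counter doing the counting (the article_links list is a made-up stand-in for the links actually scraped):

import tldextract
from collections import Counter

# Hypothetical stand-ins for the scraped article links
article_links = [
    'https://github.com/someuser/somerepo',
    'https://www.youtube.com/watch?v=abcd',
    'https://medium.com/@someone/some-post',
    'https://github.com/otheruser/otherrepo',
]

# tldextract splits a URL into subdomain / domain / suffix; .domain is the root name
host_list = [tldextract.extract(url).domain for url in article_links]

# most_common() yields exactly the [(site root domain, cumulative count), ...] format above
print(Counter(host_list).most_common())
# [('github', 2), ('youtube', 1), ('medium', 1)]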


As the tallies show, GitHub and YouTube have stayed active sharing sites throughout. WordPress's share has shrunk, Blogspot has all but disappeared, and Medium is now riding high. The landscape has changed quite a bit.

  • PythonWeekly's content is quite good, but I'm sometimes too lazy to read it and have missed plenty of issues, and digging through the mailing list one email at a time is a chore. So I wrote a crawler that scrapes only the recommended articles/tutorials section of each issue and generates a quickly skimmable markdown list, which makes it easy to pick out interesting articles and is also convenient to search when opened in a browser. Scanning English titles was still too slow, though, so I signed up for a Baidu Translate API key and machine-translated the titles. It's no Google Translate, but it's passable, and Chinese is simply more comfortable to skim. Click the links below to view the compiled lists. (They wouldn't fit on Jianshu, so they're hosted on GitHub; the rendering is much the same.)

Roundup of recent articles

Roundup of early articles

  • All I could find are the first 81 issues (handled by initNews.py) and the most recent 20 (handled by recentNews.py); PythonWeekly publishes one issue per week. To use the code, swap in your own Baidu Translate API keys.
  • initNews.py and recentNews.py are largely the same. The latter is single-threaded and slower, but with only 20 issues it still finishes quickly. The former uses a thread pool, which speeds things up dramatically: despite four times the volume, it's done almost instantly. (Also, the Baidu Translate API accepts a query with '\n' separators, so several strings can be translated in one request; see the sketch below.)
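
A minimal sketch of that batching trick, mirroring the baidu_translates function in initNews.py below (appid and secretKey are placeholders; replace them with your own credentials):

import random
import requests
from hashlib import md5

appid = 'yourappid'          # placeholder
secretKey = 'yoursecretkey'  # placeholder

def translate_batch(titles):
    # Join the titles with '\n'; the API returns one trans_result entry per line
    query = '\n'.join(titles)
    salt = str(random.randint(32768, 65536))
    # Per the API docs, sign = md5(appid + query + salt + secretKey)
    sign = md5((appid + query + salt + secretKey).encode('utf-8')).hexdigest()
    res = requests.get('http://api.fanyi.baidu.com/api/trans/vip/translate',
                       params={'appid': appid, 'q': query, 'from': 'en',
                               'to': 'zh', 'salt': salt, 'sign': sign})
    return [item['dst'] for item in res.json()['trans_result']]

# e.g. translate_batch(['Python tips', 'Profiling in CPython']) -> two Chinese titles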

Code

  • initNews.py
import requests
from bs4 import BeautifulSoup
import re
# Replace with your own API keys
appid = 'yourappid'
secretKey = 'yoursecretkey'

from fake_useragent import UserAgent
ua = UserAgent()
headers = {'user-agent': ua.chrome}

pythonweekly_init_issues_archive_url = (
    'http://www.pythonweekly.com/archive/')

def get_pythonweekly_init_issues_urls():
    url = pythonweekly_init_issues_archive_url
    res = requests.get(url, headers=headers)
    soup = BeautifulSoup(res.content, 'lxml')
    return [[
                a.text.split(' ')[-1].strip(),
                ''.join([url, a['href']]),
            ] for a in soup.select('li a')]

pythonweekly_init_issues_urls = get_pythonweekly_init_issues_urls()

def get_single_issue_info(issue):
    try:
        # issue = [text, url, list] 
        url = issue[1]
        res = requests.get(url, headers=headers)
        soup = BeautifulSoup(res.content, 'lxml')
        content = soup.select_one('td .defaultText')

        try:
            # Locate the "Articles ..." section header and the header that follows it
            submenus = [i.text for i in content.find_all('strong')]
            for index, menu in enumerate(submenus):
                if re.search('[Aa]rticles', menu):
                    break
            # `menu` and `index` deliberately leak out of the loop
            start_text = [menu, ]
            end_text = submenus[index + 1]
        except Exception:
            # Dirty fallback: hard-code the headers for issues the search misses
            start_text = ['Articles,\xa0Tutorials and Talks',
                          '\xa0Tutorials and Talks',  # handles issues 11 and 12
                          'Articles Tutorials and Talks']
            end_text = 'Interesting Projects, Tools and Libraries'

        # Collect every link between the start header and the end header
        flag = 0
        list_ = []
        for s in content.find_all('span'):
            if not flag:
                if s.text not in start_text:
                    continue
                else:
                    flag = 1
                    continue
            if s.text == end_text:
                break
            try:
                one = [s.text.strip(), s.find('a')['href']]
                # print(one)
                list_.append(one)
            except TypeError:
                pass
        # return list_
        issue.append(list_)
        print('downloaded', issue[0])
    except Exception as e:
        print('wrong: ', issue[0], '\n', e)

# multiprocessing.dummy.Pool is a thread pool, well suited to I/O-bound scraping
from multiprocessing.dummy import Pool
pool = Pool(30)
pool.map(get_single_issue_info, pythonweekly_init_issues_urls)
pythonweekly_init_issues = pythonweekly_init_issues_urls


def baidu_translates(query):
    '''
    http://api.fanyi.baidu.com/api/trans/product/apidoc
    '''
    from hashlib import md5
    import random
    
    url = 'http://api.fanyi.baidu.com/api/trans/vip/translate'
    fromLang = 'en'
    toLang = 'zh'
    salt = random.randint(32768, 65536)

    sign = appid + query + str(salt) + secretKey
    m1 = md5()
    m1.update(sign.encode('utf-8'))
    sign = m1.hexdigest()
    
    params = {'appid':appid,
              'q':query,
              'from':fromLang,
              'to':toLang,
              'salt':str(salt),
              'sign':sign,}
    res = requests.get(url, params=params)
    return res.json()['trans_result']

def get_translate(issue):
    articles = issue[-1]
    try:
        result = baidu_translates('\n'.join([i[0] for i in articles]))
        for index, i in enumerate(articles):
            i.append(result[index]['dst'])
        print('translated', issue[0])
    except Exception:
        print('**translation failed**', issue[0])

pool.map(get_translate, pythonweekly_init_issues)


from jinja2 import Template
table = """
<table>
    {% for issue_num, issue_href, article_lists in issues %}
        {% for article_name, article_href, article_chinese in article_lists %}
        <tr>
            <td><a href='{{issue_href}}'>{{ issue_num }}</a></td>
            <td><a href='{{article_href}}'>{{ article_name }}</a></td>
            <td><a href='{{article_href}}'>{{ article_chinese }}</a></td>
        </tr>
        {% endfor %}
    {% endfor %}
</table>
"""

template = Template(table)
t = template.render(issues=pythonweekly_init_issues)

import time
with open('pythonweekly_init ' + time.ctime().replace(':', '_') + '.html', 'w', encoding='utf-8') as f:
    f.write(t)
    

pool.close()
pool.join()

# https://stackoverflow.com/questions/9626535/get-domain-name-from-url
# get_host = requests.urllib3.util.url.get_host # get_host(i[1])[1]
import tldextract
host_list = [
    tldextract.extract(i[1]).domain
    for *_, articles in pythonweekly_init_issues for i in articles ]

from collections import Counter
counter = Counter(host_list)
print(counter.most_common(20))

with open('pythonweekly_init.md', 'w', encoding='utf-8') as f:
    f.write(u'### PythonWeekly early issues: article and tutorial roundup\n')
    f.write(u'| Issue | English title | Chinese title |\n')
    f.write(u'| ------------- |:-------------:| -----:|\n')
    for issue_num, issue_href, article_lists in pythonweekly_init_issues:
        for article_name, article_href, article_chinese in article_lists:
            f.write(('| [{issue_num}]({issue_href}) '
                     '| [{article_name}]({article_href}) '
                     '| [{article_chinese}]({article_href}) '
                     '| \n').format(**locals()))
  • recentNews.py
import requests
from bs4 import BeautifulSoup
import re
# Replace with your own API keys
appid = 'yourappid'
secretKey = 'yoursecretkey'


from fake_useragent import UserAgent
ua = UserAgent()
headers = {'user-agent': ua.chrome}

pythonweekly_recent_issues_archive_url = (
    'http://us2.campaign-archive2.com/home/'
    '?u=e2e180baf855ac797ef407fc7&id=9e26887fc5')

def get_pythonweekly_recent_issues_urls():
    res = requests.get(pythonweekly_recent_issues_archive_url, headers=headers)
    soup = BeautifulSoup(res.content, 'lxml')
    return [[
                a.text.split(' ')[-1].strip(),
                a['href'],
            ]
            for a in soup.select('li a')]

pythonweekly_recent_issues_urls = get_pythonweekly_recent_issues_urls()

def get_single_issue_info(url):
    res = requests.get(url, headers=headers)
    soup = BeautifulSoup(res.content, 'lxml')
    content = soup.select_one('td .defaultText')

    # Section headers are the spans styled with color #B22222
    submenus = [i.text for i in content.find_all('span', attrs={'style': "color:#B22222"})]
    for index, i in enumerate(submenus):
        if re.search('[Aa]rticles', i):
            break
    # `i` and `index` deliberately leak out of the loop
    start_text = i
    end_text = submenus[index + 1]

    # Collect every link between the start header and the end header
    flag = 0
    list_ = []
    for s in content.find_all('span'):
        if not flag:
            if s.text != start_text:
                continue
            else:
                flag = 1
                continue
        if s.text == end_text:
            break
        try:
            one = [s.text.strip(), s.find('a')['href']]
            # print(one)
            list_.append(one)
        except TypeError:
            pass
    return list_

for i in pythonweekly_recent_issues_urls:
    # [text, url, list] 
    print(i[0])
    i.append(get_single_issue_info(i[1]))

pythonweekly_recent_issues = pythonweekly_recent_issues_urls


def baidu_translate(query):
    '''
    http://api.fanyi.baidu.com/api/trans/product/apidoc
    '''
    from hashlib import md5
    import random

    url = 'http://api.fanyi.baidu.com/api/trans/vip/translate'
    fromLang = 'en'
    toLang = 'zh'
    salt = random.randint(32768, 65536)

    sign = appid + query + str(salt) + secretKey
    m1 = md5()
    m1.update(sign.encode('utf-8'))
    sign = m1.hexdigest()
    
    params = {'appid':appid,
              'q':query,
              'from':fromLang,
              'to':toLang,
              'salt':str(salt),
              'sign':sign,}
    res = requests.get(url, params=params)
    return res.json()['trans_result'][0]['dst']

for *_, articles in pythonweekly_recent_issues:
    for i in articles:
        i.append(baidu_translate(i[0]))
    print('done')


from jinja2 import Template
table = """
<table>
    {% for issue_num, issue_href, article_lists in issues %}
        {% for article_name, article_href, article_chinese in article_lists %}
        <tr>
            <td><a href='{{issue_href}}'>{{ issue_num }}</a></td>
            <td><a href='{{article_href}}'>{{ article_name }}</a></td>
            <td><a href='{{article_href}}'>{{ article_chinese }}</a></td>
        </tr>
        {% endfor %}
    {% endfor %}
</table>
"""

template = Template(table)
t = template.render(issues=pythonweekly_recent_issues)

import time
with open('pythonweekly_recent ' + time.ctime().replace(':', '_') + '.html', 'w', encoding='utf-8') as f:
    f.write(t)

import tldextract
host_list = [
    tldextract.extract(i[1]).domain
    for *_, articles in pythonweekly_recent_issues for i in articles ]

from collections import Counter
counter = Counter(host_list)
print(counter.most_common(20))


with open('pythonweekly_recent.md', 'w', encoding='utf-8') as f:
    f.write(u'### PythonWeekly recent issues: article and tutorial roundup\n')
    f.write(u'| Issue | English title | Chinese title |\n')
    f.write(u'| ------------- |:-------------:| -----:|\n')
    for issue_num, issue_href, article_lists in pythonweekly_recent_issues:
        for article_name, article_href, article_chinese in article_lists:
            f.write(('| [{issue_num}]({issue_href}) '
                     '| [{article_name}]({article_href}) '
                     '| [{article_chinese}]({article_href}) '
                     '| \n').format(**locals()))
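
Each script prints only its own domain tally. To put the two periods side by side (the comparison quoted at the top), here is a small sketch, assuming you first dump each script's host_list to JSON, a step the originals don't include:

import json
from collections import Counter

# Assumed precondition: at the end of each script,
#   json.dump(host_list, open('init_hosts.json', 'w'))    # in initNews.py
#   json.dump(host_list, open('recent_hosts.json', 'w'))  # in recentNews.py
old = Counter(json.load(open('init_hosts.json', encoding='utf-8')))
new = Counter(json.load(open('recent_hosts.json', encoding='utf-8')))

# Side-by-side counts for every domain seen in either period, sorted by the
# recent tally, which makes the risers (medium) and faders (blogspot) obvious
for domain in sorted(set(old) | set(new), key=lambda d: -new[d]):
    print('{:<20} old: {:<4} new: {}'.format(domain, old[domain], new[domain]))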
