Python 3 - Search Baidu Keywords - Identify Ads

Background

  • Goal: search Baidu for specified keywords, then scrape the ads on the first three pages of results.

Tech Stack Summary

time

import time
now = lambda : time.perf_counter()
# pause for 5 seconds
time.sleep(5)
>>> time.time()  # timestamp
1613220602.8661115

>>> time.localtime()  # local time as a struct_time
time.struct_time(tm_year=2021, tm_mon=2, tm_mday=13, tm_hour=20, tm_min=49, tm_sec=57, tm_wday=5, tm_yday=44, tm_isdst=0)

>>> time.asctime(time.localtime())  # human-readable time string
'Sat Feb 13 20:35:31 2021'

>>> time.asctime(time.localtime(time.time()))
'Sat Feb 13 20:58:19 2021'

>>> time.strftime('%Y%m%d', time.localtime())  # format time with a pattern
'20210213'
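The now lambda defined above reappears in the final script to measure elapsed runtime. A minimal sketch of that pattern (the sleep is just a stand-in for real work):

import time

now = lambda : time.perf_counter()

st = now()
time.sleep(0.5)  # stand-in for real work
print('Runtime: {:.3f}s'.format(now() - st))  # prints roughly 'Runtime: 0.500s'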

os

# file / directory methods
>>> import os
>>> os.getcwd()  # current working directory

>>> os.chdir(path)  # change the current working directory to path
>>> os.listdir(path)  # list the files and directories under path
>>> os.mkdir(path)  # create a directory
>>> os.rmdir(path)  # remove an empty directory -- OSError if not empty
>>> os.remove(path)  # delete a file
>>> os.rename(oldName, newName)  # rename a file
>>> os.stat(path)  # get status info for the given path

# os.stat()
# st_atime: last access time; st_mtime: last modification time; st_ctime: creation time (on Windows)
>>> os.stat(os.getcwd())
os.stat_result(st_mode=16749, st_ino=1688849860301781, st_dev=3370681046, st_nlink=1, st_uid=0, st_gid=0, st_size=61440, st_atime=1613227168, st_mtime=1613227168, st_ctime=1523514771)

>>> os.stat(os.getcwd()).st_ctime
1523514771.5762281
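st_ctime and friends are Unix timestamps; feeding one through time.localtime and time.strftime from the previous section makes it readable. A small sketch:

import os
import time

ctime = os.stat(os.getcwd()).st_ctime
print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(ctime)))
# e.g. '2018-04-12 14:32:51'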

os.path

import os
# query file attribute information

os.path.abspath(path)  # return the absolute path
os.path.dirname(path)  # return the directory part of the path
os.path.basename(path)  # return the file name (final component)
os.path.exists(path)  # True if the path exists, False otherwise
os.path.expanduser('~')  # return the current user's home directory
>>> os.path.expanduser('~tt')  # expand to the home directory of user tt
'C:\\Users\\tt'

>>> os.path.getmtime(r't.py')  # last modification time
1612868196.803879

>>> os.path.getsize(r't.py')  # file size in bytes
1862

os.path.isabs(path)  # is it an absolute path?

os.path.isfile(path)  # is it a file?

os.path.isdir(path)  # is it a directory?

>>> os.path.join(os.getcwd(),'t', 'a','c')  # join directories, folders and a file into one path
'c:\\users\\chen.huaiyu\\desktop\\t\\a\\c'

>>> os.path.split(os.path.join(os.path.expanduser('~tt'),r't.py'))  # split the path into (dirname, basename), returned as a tuple
('C:\\Users\\tt', 't.py')

>>> os.path.splitext(os.path.join(os.path.expanduser('~tt'),r't.py'))   # split into (root, extension)
('C:\\Users\\tt\\t', '.py')
>>> os.path.splitext(r't.py')
('t', '.py')
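A small sketch combining several of these calls to build a dated backup name next to an existing file (the file name t.py and the home-directory location are just assumptions for illustration):

import os
import time

path = os.path.join(os.path.expanduser('~'), 't.py')  # assumed file location
root, ext = os.path.splitext(os.path.basename(path))  # ('t', '.py')
backup = os.path.join(os.path.dirname(path),
                      '%s_%s%s' % (root, time.strftime('%Y%m%d'), ext))
print(backup)  # e.g. 'C:\\Users\\tt\\t_20210213.py'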

logging

  • The logging module's log levels, increasing in severity from 1 to 5:
    1. DEBUG: diagnostic detail;
    2. INFO: key checkpoints, to confirm the program runs as expected;
    3. WARNING: something unexpected happened;
    4. ERROR: a serious problem occurred, breaking some functionality;
    5. CRITICAL: a fatal error; the program can no longer run.
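A minimal sketch of how the level threshold filters records (using the module-level functions on the root logger):

import logging

logging.basicConfig(level=logging.WARNING)  # threshold: WARNING and above
logging.debug('diagnosing')        # suppressed: below the threshold
logging.info('checkpoint')         # suppressed: below the threshold
logging.warning('unexpected')      # printed: WARNING:root:unexpected
logging.error('feature broken')    # printed: ERROR:root:feature broken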

  • logging's four components:
    1. Loggers: the interface application code logs through;
    2. Handlers: send log records to the appropriate destination;
    3. Filters: decide which records actually get logged;
    4. Formatters: control the output format of the records.
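A minimal sketch wiring all four components together in code (the logger name 'demo' is arbitrary):

import logging

logger = logging.getLogger('demo')           # Logger: the logging interface
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()            # Handler: send records to stderr
handler.setFormatter(logging.Formatter(      # Formatter: control the layout
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
handler.addFilter(logging.Filter('demo'))    # Filter: only records from 'demo'
logger.addHandler(handler)

logger.info('all four components wired')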

  • Ways to use the logging module:
    1. Call the module-level functions logging provides;

# logging.info(msg, *args)
logging.info('%s is %d years old', 'Tom', 10)

# exc_info: True - append exception info to the log record
# stack_info: defaults to False; True - append stack info to the record
# extra: a dict providing values for custom fields in the message format
>>> import logging
>>> LOG_FORMAT = '%(asctime)s - %(levelname)s - %(user)s[%(ip)s] - %(message)s'
>>> DATE_FORMAT = '%m/%d/%Y %H:%M:%S %p'
>>> logging.basicConfig(format=LOG_FORMAT, datefmt=DATE_FORMAT)
>>> logging.warning('Someone deleted the log file.', exc_info=True, stack_info=True, extra={'user':'Tom', 'ip':'10.10.10.10'})
02/14/2021 21:11:43 PM - WARNING - Tom[10.10.10.10] - Someone deleted the log file.
NoneType: None
Stack (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\...\lib\idlelib\run.py", line 144, in main
    ret = method(*args, **kwargs)
  File "D:\...\lib\idlelib\run.py", line 474, in runcode
    exec(code, self.locals)
  File "<pyshell#70>", line 1, in <module>

2. Use the four components of the logging system: loggers, handlers, filters, formatters.
1) A logger needs handlers to actually output its records;
2) one logger can have multiple handlers;
3) different handlers can send records to different destinations;
4) each handler can have multiple filters;
5) each handler can set its own formatter.
3. The logging processing flow:
1) the logger's level check;
2) the logger's filters;
3) each handler's level check;
4) each handler's filters.
4. Ways to configure logging:
1) create loggers explicitly in code;
2) use a logging config file, read with fileConfig();
3) build a dict of config info and pass it to dictConfig() (a sketch follows below).
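fileConfig() is demonstrated next; dictConfig() gets no example in this note, so here is a minimal hedged sketch of the same simpleExample setup expressed as a dict:

import logging.config

logging.config.dictConfig({
    'version': 1,
    'formatters': {'simpleFormatter': {
        'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'}},
    'handlers': {'consoleHandler': {
        'class': 'logging.StreamHandler',
        'formatter': 'simpleFormatter',
        'level': 'DEBUG'}},
    'loggers': {'simpleExample': {
        'handlers': ['consoleHandler'], 'level': 'DEBUG', 'propagate': False}},
})
logging.getLogger('simpleExample').info('configured via dictConfig')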

# read the logging config file
logging.config.fileConfig('logging.conf')

# create a logger
logger = logging.getLogger('simpleExample')

# emit logs at each level
logger.debug('debug message')
logger.info('info message')
logger.warning('warning message')
logger.error('error message')
logger.critical('critical message')

# Contents of the config file logging.conf
# (comments moved to their own lines: configparser does not strip
# inline comments, so trailing '#' text would corrupt the values):
[loggers]
keys=root,simpleExample

[handlers]
keys=fileHandler,consoleHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=DEBUG
handlers=fileHandler

[logger_simpleExample]
level=DEBUG
handlers=consoleHandler
# qualname is required: the logger's name in the hierarchy, which
# application code passes to getLogger() to obtain this logger
qualname=simpleExample
# propagate=0 stops records from also propagating to ancestor loggers
propagate=0

[handler_consoleHandler]
# StreamHandler sends log records to a stream (here stdout)
class=StreamHandler
args=(sys.stdout,)
level=DEBUG
formatter=simpleFormatter

[handler_fileHandler]
# FileHandler sends log records to a disk file
class=FileHandler
args=('logger.log', 'a')
level=ERROR
formatter=simpleFormatter

[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
datefmt=%Y-%m-%d %H:%M:%S

pandas

import pandas as pd
pd.DataFrame(data, columns=column)  # build a DataFrame
pd.concat([df1, df2], axis=0, ignore_index=True)  # axis=0: concatenate by rows
df.fillna('-', inplace=True)  # fill NaN with '-'
pd.read_csv(path, engine='python', encoding='GBK')  # read csv; engine='python' is required when the file name contains Chinese characters
# pd.read_csv(path, engine='python', encoding='utf-8-sig')  # utf-8 with BOM
pd.read_excel(path)  # read excel
df.merge(df_url, how='left', on='Netloc')  # left join on the Netloc column
df.merge(result, how='left', left_on='廣告主', right_on='搜索詞')  # left join on left_on & right_on
df.append(df1)  # append rows (deprecated in pandas >= 1.4; pd.concat replaces it)
df['URL'].apply(lambda x: urlparse(x).netloc)  # apply a lambda to extract the domain (netloc)
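A tiny self-contained sketch of the concat / fillna / merge calls above, on made-up frames:

import pandas as pd

df1 = pd.DataFrame({'搜索詞': ['a', 'b'], '頁(yè)碼': [1, 2]})
df2 = pd.DataFrame({'搜索詞': ['c'], '頁(yè)碼': [None]})

df = pd.concat([df1, df2], axis=0, ignore_index=True)  # stack by rows
df.fillna('-', inplace=True)                           # None -> '-'

lookup = pd.DataFrame({'搜索詞': ['a', 'c'], 'URL': ['a.com', 'c.com']})
print(df.merge(lookup, how='left', on='搜索詞'))       # left join on 搜索詞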

selenium


# Example 1
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()  # launch the browser
driver.implicitly_wait(10)  # implicit wait: if an element is not found, keep retrying every 0.5s for up to 10s, then raise
driver.maximize_window()  # maximize the browser window
driver.get('https://www.baidu.com')
elem = driver.find_element_by_name('wd')  # locate the search box
elem.clear()  # clear the search box
elem.send_keys('a')  # type 'a'
elem.send_keys(Keys.RETURN)  # press Enter
driver.close()  # close the browser window

# scroll slowly down to the bottom of the page
def scroll():
    ini_height, check_height = 0, 0
    while True:
        driver.execute_script(
            'window.scrollTo({top:536 + %s, behavior:"smooth"})' 
            % check_height)
        time.sleep(0.5)
        check_height = driver.execute_script('return document.documentElement.scrollTop || window.pageYOffset || document.body.scrollTop;')
        if ini_height == check_height:
            break
        ini_height = check_height
        
# Exception classes
from selenium.common.exceptions import NoSuchElementException, StaleElementReferenceException

# Locating elements: find_element_by_*
driver.find_element_by_name(name)
driver.find_element_by_xpath(xpath)

# Writing test cases with Selenium
# (excerpt)
import unittest
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

class PythonOrgSearch(unittest.TestCase):  # subclassing TestCase marks this as a test case

    def setUp(self):  # part of initialization; runs before every test method
        self.driver = webdriver.Chrome()

    def test_search_in_python_org(self):  # test method names must start with test
        driver = self.driver
        driver.get('https://www.baidu.com')
        self.assertIn('百度一下', driver.title)
        elem = driver.find_element_by_name('wd')
        elem.send_keys('Fergus')
        elem.send_keys(Keys.RETURN)
        assert 'No result found' not in driver.page_source

    def tearDown(self):  # runs after every test method, for cleanup
        self.driver.close()
    
if __name__ == "__main__":
    unittest.main()
        
# The driver.get method opens the page at the given URL. WebDriver waits until the page has fully loaded (strictly, until the "onload" event has fired) before returning control to the script. If the page loads a lot of content via AJAX, WebDriver may not know when it has actually finished loading.
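For that AJAX situation an explicit wait is the usual remedy. A minimal sketch with WebDriverWait (the element id 'content' is a made-up example):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('https://www.baidu.com')
try:
    # poll up to 10s until the element is present in the DOM;
    # raises TimeoutException if it never appears
    elem = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, 'content')))  # made-up id
finally:
    driver.quit()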

urlparse

# Extracting the domain
'''parse.urlparse() returns a ParseResult, e.g.
ParseResult(scheme='https', netloc='www.cnblogs.com', path='/angelyan/', params='', query='', fragment='')
scheme: the protocol
netloc: the domain (host, plus port if any)
path: the path
params: parameters for the last path segment
query: the query string, typically the GET parameters of a url
fragment: the anchor, used to jump straight to a given position in the page
'''
from urllib import parse
url = "http://xx.xx.xx:8000/get_account.json?page_size=20&page_index=1&user_id=456"
parse.urlparse(url)
Out[33]: ParseResult(scheme='http', netloc='xx.xx.xx:8000', path='/get_account.json', params='', query='page_size=20&page_index=1&user_id=456', fragment='')

parse.urlparse(url).netloc
Out[34]: 'xx.xx.xx:8000'

Miscellaneous

# Counting
from collections import Counter
dic = Counter()
for i in lis:  # lis: any iterable of hashable items
    dic[i] += 1
# Accumulating
from functools import reduce
all_str = reduce(lambda x, y: x + y, split_netloc)  # flatten a list of lists
# Filtering
filter(lambda x: x[1] < 2 and len(x[0]) > 1,
       sorted(dic.items(), key=lambda kv: (kv[1], kv[0]), reverse=True))
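A runnable sketch of the same count / flatten / filter pipeline on made-up domains (the <= 2 and length > 1 cutoffs mirror getNetlocKeywords in the full script below):

from collections import Counter
from functools import reduce

netloc = ['www.foo.com', 'shop.foo.com', 'www.bar.cn']
split_netloc = [x.split('.') for x in netloc]
all_str = reduce(lambda x, y: x + y, split_netloc)  # flatten to one token list
dic = Counter(all_str)                              # count each token
letters = filter(lambda x: x[1] <= 2 and len(x[0]) > 1,
                 sorted(dic.items(), key=lambda kv: (kv[1], kv[0]),
                        reverse=True))
print([k for k, _ in letters])
# ['www', 'foo', 'com', 'shop', 'cn', 'bar']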

Code

# -*- coding: utf-8 -*-
"""
Created on Sat Jan  9 16:08:57 2021
@author: Fergus
"""
import os
import time
import logging.config
import pandas as pd
from selenium import webdriver
from urllib.parse import urlparse
from selenium.webdriver.common.keys import Keys
#from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import NoSuchElementException, StaleElementReferenceException

now = lambda : time.perf_counter()

# logging setup
PATH = os.path.join(os.path.expanduser('~'), r'CheckAD.conf')
logging.config.fileConfig(PATH)
logger = logging.getLogger('CheckAD')

def scroll():
    # scroll slowly to the bottom of the page
    ini_height, check_height = 0, 0
    while True:
        # smooth-scroll one step per iteration
        driver.execute_script(
                'window.scrollTo({top:536 + %s, behavior:"smooth"})' 
                % check_height)
        time.sleep(0.5)
        check_height = driver.execute_script(
                'return document.documentElement.scrollTop || \
                window.pageYOffset || document.body.scrollTop;')
        if ini_height == check_height:
            break
        ini_height = check_height

def search(keyword):
    try:
        # locate the search box
        elem = driver.find_element_by_name('wd')
    except NoSuchElementException:
        input('Pass the slider CAPTCHA manually, or refresh with F5, then press Enter')
        elem = driver.find_element_by_name('wd')
    finally:
        elem.clear()
        elem.send_keys(keyword)
        elem.send_keys(Keys.RETURN)
        time.sleep(0.3)

def parser(xpath):
    try:
        element = driver.find_element_by_xpath(xpath)
    except NoSuchElementException:
        return '1'  # fall back to '1' when the element is missing
    except StaleElementReferenceException:
        logger.info('ERROR:\n', exc_info=True)
    except Exception as e:
        logger.info('parse: %s' % e)
    else:
        return element.text
    
def parsers(xpath):
    try:
        element = driver.find_elements_by_xpath(xpath)
    except NoSuchElementException:
        logger.info('ERROR:\n', exc_info=True)
    except StaleElementReferenceException:
        logger.info('ERROR:\n', exc_info=True)
    except Exception as e:
        logger.info('parsers: %s' % e, exc_info=True)
    else:
        return element

def getAd():
    # ads
    ## headline & headline url
    ad = parsers('//h3/a[@data-is-main-url="true"]')
    ad_headline = [i.text for i in ad]
    ad_headline_url = [i.get_attribute('data-landurl') for i in ad]
    ## displayed url
    lis = ['京公網(wǎng)安備11000002000001號(hào)', '京ICP證030173號(hào)', '展開', '']  # footer/ICP texts to filter out
    url = parsers('//div/div[2]/a/span[1] | //div/div[3]/a/span[1]')
    explicit_url = [i.text for i in url if i.text not in lis]
    return ad_headline, ad_headline_url, explicit_url

def getBrandAd():
    # brand ads
    ## headline & headline url
    brandad = parsers('//h2/a')
    brandad_headline = [i.text for i in brandad 
                        if '想在此推廣' not in i.text and i.text != '官方']
    brandad_headline_url = [i.get_attribute('ourl') for i in brandad 
                            if i.get_attribute('ourl') is not None]
    ## displayed url
    url = parsers('//span[@class="ec-pc_brand_tip-official-site"]')
    explicit_brandurl = [i.text for i in url]
    return brandad_headline, brandad_headline_url, explicit_brandurl
    

def next_page():
    try:
        driver.find_element_by_partial_link_text('下一頁(yè)').click()
        time.sleep(0.5)
    except NoSuchElementException:
        logger.info('ERROR:\n', exc_info=True)
    except Exception as e:
        input('%s\nPass the slider CAPTCHA manually, or refresh with F5, then press Enter' % e)
        driver.find_element_by_partial_link_text('下一頁(yè)').click()
        time.sleep(0.5)
    finally:
        scroll()

def output(keyword):
    
    global result
    # current page number
    cur_page = parser('//div[@id="page"]/div/strong/span[2]')
    
    # headlines & landing urls & displayed urls
    ad_headline, ad_headline_url, explicit_url = getAd()
    brandad_headline, brandad_headline_url, explicit_brandurl = getBrandAd()
    
    # output - ads: search term, page number, headline & landing page & displayed url
    try:
        rows = len(ad_headline)
        df1 = pd.DataFrame({'搜索詞': [keyword] * rows,
                           '廣告': ['廣告'] * rows,
                           '頁(yè)碼': [cur_page] * rows,
                           '標(biāo)題': ad_headline,
                           '落地頁(yè)': ad_headline_url,
                           '顯式url': explicit_url
                           })
        # mark the keyword even when no ads were found
        if not df1.shape[0]:
            df1 = df1.append(pd.DataFrame([[keyword, '廣告', cur_page, '', 
                                            '','']], columns=df1.columns))
        rows = len(brandad_headline)
        df2 = pd.DataFrame({'搜索詞': [keyword] * rows,
                            '廣告': ['品牌廣告'] * rows,
                            '頁(yè)碼': [cur_page] * rows,
                            '標(biāo)題': brandad_headline,
                            '落地頁(yè)': brandad_headline_url,
                            '顯式url': explicit_brandurl
                            })
        # mark the keyword even when no brand ads were found
        if not df2.shape[0]:
            df2 = df2.append(pd.DataFrame([[keyword, '品牌廣告', cur_page, 
                                            '', '', '']], columns=df1.columns))
        df = df1.append(df2)
        result = result.append(df)
        #print('\n', result, '\nRuntime: {:.3f}Min'.format((now() - st)/60))
    except Exception as e:
        # some fields failed to scrape; record the partial data
        lis1 = []
        lis1.extend([keyword, '廣告', cur_page])
        lis1.append(ad_headline)
        lis1.append(ad_headline_url)
        lis1.append(explicit_url)
        lis2 = []
        lis2.extend([keyword, '品牌廣告', cur_page])
        lis2.append(brandad_headline)
        lis2.append(brandad_headline_url)
        lis2.append(explicit_brandurl)
        lis3 = []
        lis3.append(lis1)
        lis3.append(lis2)
        lis3.append(['Exception: %s | %s' % (keyword, e)])
        err.append(lis3)
        #print(err)
        print('Runtime: {:.3f}Min'.format((now() - st)/60))
    
def connectDB():
    # connect to the DB
    from sqlalchemy import create_engine
    # SQL Server connection string
    ss = 'mssql+pymssql://%s:%s@%s:%s/%s'
    try:
        engine = create_engine(ss % ('sa', 'cs_holly123', '192.168.60.110'
                                    , '1433', 'Account Management'))
    except Exception:
        raise
    else:
        logger.info('Database connected; reading data...')
        return engine 
    
def getDB(deadline):
    with connectDB().begin() as conn:
        # advertisers with spend in the last six months but none in the last 3 days
        sql = '''
SELECT *
 FROM ( SELECT 用戶名, AM, b.廣告主, 網(wǎng)站名稱, URL, ad_hy.[近半年消費(fèi)(AD)]
     , D.近3天消費(fèi)
  FROM basicInfo b
   LEFT JOIN (SELECT 廣告主, ISNULL(sum(HY.sum_),0) '近半年消費(fèi)(AD)'
     FROM basicInfo b
      LEFT JOIN(SELECT 用戶名, sum(金額) sum_
        FROM 消費(fèi)
        WHERE 日期 BETWEEN DATEADD(DD, -200, '{}') AND '{}'
         AND 類別 in ('搜索點(diǎn)擊', '新產(chǎn)品', '自主投放', '超投')
        GROUP BY 用戶名) HY
       ON HY.用戶名 = b.用戶名
     GROUP BY 廣告主) ad_hy
    ON b.廣告主 = ad_hy.廣告主
   LEFT JOIN ( SELECT 廣告主, ISNULL(sum(HY.sum_),0) '近3天消費(fèi)'
     FROM basicInfo b
      LEFT JOIN(SELECT 用戶名, sum(金額) sum_
        FROM 消費(fèi)
        WHERE 日期 BETWEEN DATEADD(DD, -3, '{}') AND '{}'
         AND 類別 in ('搜索點(diǎn)擊', '新產(chǎn)品', '自主投放', '超投')
        GROUP BY 用戶名) HY
       ON HY.用戶名 = b.用戶名
     GROUP BY 廣告主) D
    ON D.廣告主 = b.廣告主) T
 WHERE T.[近半年消費(fèi)(AD)] > 0
  AND T.近3天消費(fèi) = 0
                '''.replace('{}', deadline)  # substitute the deadline into the SQL
        df = pd.DataFrame(conn.execute(sql).fetchall()
                , columns=['用戶名', 'AM', '廣告主', '網(wǎng)站名稱', 'URL'
                           , '近半年有消費(fèi)(AD)', '近3日無消費(fèi)(AD)'])
    return df

def getNetlocKeywords():
    # split the URLs: extract core tokens
    from functools import reduce
    from collections import Counter
    
    global df
    # get the domain
    df['Netloc'] = df['URL'].apply(lambda x: urlparse(x).netloc)
    netloc = list(set(df['Netloc']))
    # extract the core tokens from the domains
    ## split each url on '.', count the tokens, keep words seen <= 2 times & with length > 1
    split_netloc = list(map(lambda x: x.split('.'), netloc))
    all_str = reduce(lambda x,y: x+y, split_netloc)
    ### count
    dic = Counter()
    for i in all_str:
        dic[i] = dic[i] + 1
    ### filter
    letters = filter(lambda x: x[1] <= 2 and len(x[0]) > 1, 
                     sorted(dic.items(), key=lambda kv: (kv[1], kv[0])
                     , reverse=True))
    keywords = [i[0] for i in letters]
    # url, keywords
    df_url = pd.DataFrame([[netloc[n], j] for n, i in enumerate(split_netloc
                                         ) for j in keywords if j in i]
                        , columns=['Netloc', 'Key'])
    # return
    df = df.merge(df_url, how='left', on='Netloc')

def exclude():
    # keywords to exclude
    input('Tips: check the exclusion file on the desktop, press Enter when confirmed\n')
    path = r'c:\users\chen.huaiyu\desktop\excludeKeywords.csv'
    csv = pd.read_csv(path, engine='python', encoding='GBK')
    return set(csv['搜索詞'])

def fromExcel():
    # 1. read the excel; 2. split the keywords; 3. map each split keyword back to its advertiser; 4. return
    from re import split
    from functools import reduce
    
    path = r'c:\users\chen.huaiyu\desktop\賬戶關(guān)鍵詞.xlsx'
    inputKeywords = pd.read_excel(path)
    inputKeywords.drop(columns=inputKeywords.columns[0], inplace=True)
    # split the keyword strings
    keywords = list(set(reduce(lambda x,y: x+y, inputKeywords['關(guān)鍵詞'].apply(
                        lambda x: split(r',|,', x)))))
    keywords.remove('物料已刪除')
    keywords.remove('')
    # pair each keyword with its advertiser
    adAndKeyword = [(inputKeywords['廣告主'][n], j) for n, i in enumerate(
                    inputKeywords['關(guān)鍵詞'].apply(lambda x: split(r',|,',x)))
                for j in keywords if j in i]
    df = pd.DataFrame(adAndKeyword, columns=['廣告主', '搜索關(guān)鍵詞'])
    df = inputKeywords.merge(df, on='廣告主', how='left')
    df.fillna('-', inplace=True)
    df = df.loc[df.apply(lambda x: x['搜索關(guān)鍵詞'] in x['關(guān)鍵詞'], axis=1), :]
    return keywords, df
    
def getKeywords():
    if choice == 'from DB':
        getNetlocKeywords()
        search_words = list((set(df['網(wǎng)站名稱']) | set(df['廣告主'])
                        | set(df['Key'])) - exclude())
    elif choice == 'from Excel':
        search_words, _ = fromExcel()
    return search_words

def combine():
    if choice == 'from DB':
        # merge the query results and write them out
        df_ad = df.merge(result, how='left', left_on='廣告主', right_on='搜索詞')
        df_website = df.merge(result, how='left', left_on='網(wǎng)站名稱'
                              , right_on='搜索詞')
        df_url = df.merge(result, how='left', left_on='Key', right_on='搜索詞')
        merge = pd.concat((df_ad, df_website, df_url), axis=0, ignore_index=True)
        merge.fillna('-', inplace=True)
        #
        # prepare the filter columns
        #
        ## netloc of the landing page
        merge['landurl_netloc'] = merge['落地頁(yè)'].apply(
                                                lambda x: urlparse(x).netloc)
        ## displayed url == netloc of the account URL
        merge['filter1'] = merge['Netloc'] == merge['顯式url']
        ## core token (Key) of the account url appears in the displayed url
        merge['filter2'] = merge.apply(
                            lambda x: x['Key'] in x['顯式url'], axis=1)
        ## Key appears in the landing-page netloc
        merge['filter3'] = merge.apply(
                        lambda x: x['Key'] in x['landurl_netloc'], axis=1)
        merge.to_excel(r'c:/users/chen.huaiyu/desktop/CheckAD.xlsx')
    elif choice == 'from Excel':
        _, DF = fromExcel()
        merge = DF.merge(result, how='left', left_on='搜索關(guān)鍵詞'
                         , right_on='搜索詞')
        merge.to_excel(r'c:/users/chen.huaiyu/desktop/CheckADKey.xlsx')
        
    # error records
    er = pd.DataFrame(err, columns=['搜索詞', '廣告', 'Note'])
    er.to_excel(r'c:/users/chen.huaiyu/desktop/error.xlsx')
    # 
    kw = pd.DataFrame(keywords)
    kw.to_csv(r'c:/users/chen.huaiyu/desktop/kw.csv', encoding='GBK')
    

if __name__ == '__main__':
    
    st = now()
    # set the deadline date
    DL = input('Set the deadline date, e.g. 20210121\n')
    choice = input('Where do the keywords come from? (from DB/from Excel)')
    logger.info('\nStart; deadline set to: %s\n' % DL)
    # read the database
    df = getDB(DL)
    # output
    result = pd.DataFrame()
    err = []
    # search terms
    keywords = getKeywords()
    # launch the browser and start searching
    #chrome_options = Options()
    #chrome_options.add_argument('--headless')  # run without a visible browser window
    #chrome_options.add_argument('--disable-gpu')  # disable GPU acceleration
    driver = webdriver.Chrome()
    driver.implicitly_wait(10)
    driver.maximize_window()
    driver.get('https://www.baidu.com')
    for n, keyword in enumerate(keywords):
        try:
            # search
            search(keyword)
            scroll()
            output(keyword)
            # only the first 3 pages need checking
            for i in range(2):
                next_page()
                output(keyword)
            # progress report
            logger.info('{}\n{} keywords total, finished #{}, {:.0%} done, est. {:.1f} min remaining...'.format(
                time.ctime(), len(keywords), n + 1, (n + 1)/len(keywords)
                , ((now() - st)/(n + 1)*(len(keywords)-n-1))/60))
        except KeyboardInterrupt as e:
            print(e)
            break
        except Exception as e:
            logger.info(e, exc_info=True)
            continue
    # when finished, merge and write out the results
    combine()
    driver.quit()
    