
1. Installation: you can sign in to an account to download MySQL, or click "No thanks, just start my download." to download it directly. If the installer reports that it requires .NET Framework 4.0, search for and install that first, then continue. On the setup-type screen choose the "Developer Default" option, then simply click Next through the remaining screens.
2. Running: the MySQL installer already starts the service, so there is no need to start it from the command line. Open cmd in the bin folder under the installation path and enter the following command to connect to the database:
mysql -uroot -p123456
If no password was set, connect with: mysql -uroot
List the databases: show databases;
Switch to a database: use XX;
List the tables in the current database: show tables;
3. Install a third-party MySQL driver. The example code below imports pymysql, so install that one (mysqlclient is an alternative):
py -3 -m pip install pymysql
4. Implementation, much like the sqlite3 version:
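Before running the examples, it can help to confirm the driver actually installed. A minimal sketch using only the standard library (the helper name `driver_available` is mine, not part of any library):

```python
import importlib.util

def driver_available(name):
    # True if the named module can be imported in this environment
    return importlib.util.find_spec(name) is not None

# sqlite3 ships with Python, so this prints True; pass "pymysql"
# to check whether the MySQL driver was installed successfully
print(driver_available("sqlite3"))
```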
① Create a database for scrapy to use, named scrapydb, and create the data table books:
mysql -uroot
CREATE DATABASE scrapydb;
USE scrapydb;
CREATE TABLE books(
upc CHAR(16) NOT NULL PRIMARY KEY,
name VARCHAR(256) NOT NULL,
price VARCHAR(16) NOT NULL,
review_rating INT,
review_num INT,
stock INT
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

② A simple pymysql example can be run from a separate .py file; it is almost identical to the sqlite3 version:
# -*- coding: utf-8 -*-
import pymysql
# Connect to the database and get a Connection object
# (add password='...' here if your root account has a password)
conn = pymysql.connect(host='localhost', user='root', db='scrapydb',
                       port=3306, charset='utf8')
# Create a Cursor object for executing SQL statements
cursor = conn.cursor()
# Create the data table (IF NOT EXISTS makes the script re-runnable)
cursor.execute("CREATE TABLE IF NOT EXISTS person(name VARCHAR(32), age INT, "
               "sex CHAR(1)) ENGINE=InnoDB DEFAULT CHARSET=utf8")
# Insert one row; pymysql uses %s placeholders for parameters
cursor.execute('INSERT INTO person VALUES(%s,%s,%s)', ('Bruce Lee', 23, 'M'))
# Changes are only written to the database after commit
conn.commit()
# Close the connection
conn.close()
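Since the text keeps comparing with sqlite3, here is the same script against the stdlib sqlite3 module, using an in-memory database so it runs without a server. The main API difference to notice is the placeholder style: sqlite3 uses ? where pymysql uses %s.

```python
import sqlite3

# In-memory database; no server needed, so this runs anywhere
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
# Same table as the pymysql example (minus the MySQL-only ENGINE/CHARSET options)
cursor.execute("CREATE TABLE person(name VARCHAR(32), age INT, sex CHAR(1))")
# sqlite3 uses ? placeholders where pymysql uses %s
cursor.execute("INSERT INTO person VALUES(?,?,?)", ("Bruce Lee", 23, "M"))
conn.commit()
print(cursor.execute("SELECT name, age FROM person").fetchone())
# → ('Bruce Lee', 23)
conn.close()
```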
③ For a project already built with sqlite3, modify the code on that basis. First, pipelines.py:
# Write the scraped data to a MySQL database
import pymysql

class MysqlPipeline(object):
    # Open the database connection
    def open_spider(self, spider):
        db = spider.settings.get('MYSQL_DB_NAME', 'scrapy_default')
        host = spider.settings.get('MYSQL_HOST', 'localhost')
        port = spider.settings.get('MYSQL_PORT', 3306)
        user = spider.settings.get('MYSQL_USER', 'root')
        #passwd = spider.settings.get('MYSQL_PASSWORD', '123456')
        self.db_conn = pymysql.connect(host=host, port=port, db=db,
                                       user=user, charset='utf8')
        self.db_cur = self.db_conn.cursor()

    # Commit and close the database connection
    def close_spider(self, spider):
        self.db_conn.commit()
        self.db_conn.close()

    # Process each item
    def process_item(self, item, spider):
        self.insert_db(item)
        return item

    # Insert one row
    def insert_db(self, item):
        values = (
            item['upc'],
            item['name'],
            item['price'],
            item['review_rating'],
            item['review_num'],
            item['stock'],
        )
        sql = 'INSERT INTO books VALUES(%s,%s,%s,%s,%s,%s)'
        self.db_cur.execute(sql, values)
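The pipeline above runs one INSERT per item and commits only once, in close_spider. An alternative is to buffer rows and flush them in batches with executemany. A minimal sketch of that idea, using the stdlib sqlite3 module as a stand-in for MySQL so it runs without a server (the sample rows are hypothetical, not real scraped data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Same column layout as the books table created earlier
cur.execute("CREATE TABLE books(upc TEXT PRIMARY KEY, name TEXT, price TEXT, "
            "review_rating INT, review_num INT, stock INT)")

# Hypothetical scraped items, shaped like the tuples built in insert_db()
rows = [
    ("upc-0001", "Sample Book A", "£10.00", 4, 12, 5),
    ("upc-0002", "Sample Book B", "£23.50", 2, 3, 19),
]
# executemany sends the whole batch in one call; with pymysql the SQL
# would use %s placeholders instead of sqlite3's ?
cur.executemany("INSERT INTO books VALUES(?,?,?,?,?,?)", rows)
conn.commit()
print(cur.execute("SELECT COUNT(*) FROM books").fetchone()[0])  # → 2
```

In a real pipeline this would mean appending tuples in process_item and flushing the buffer every N items and in close_spider, which reduces round-trips to the server at the cost of losing the buffered rows on a crash.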
④ settings.py:
MYSQL_DB_NAME = 'scrapydb'
MYSQL_HOST = 'localhost'
MYSQL_USER = 'root'
ITEM_PIPELINES = {
    'books.pipelines.BooksPipeline': 300,
    # 'books.pipelines.SQLitePipeline': 400,
    'books.pipelines.MysqlPipeline': 401,
}
⑤ Run the cmd command scrapy crawl books; MySQL Workbench then shows the 1000 scraped rows.

A third-party MySQL administration tool can also be used; downloading the Chinese-language edition directly makes it even easier to use.

