Introduction
This article describes the sync process of a transaction group (txg), following the call chain from txg_sync_thread all the way down to dbuf_sync_indirect and dbuf_sync_leaf (the dbufs these operate on are the dataset's cache). In dbuf_sync_indirect, an indirect block's I/O depends on the scheduled updates of its child blocks' I/Os, but indirect blocks at the same level are independent of one another. Likewise, writes of leaf data blocks are independent of other leaf data blocks. From the analysis of dbuf_sync_{leaf, indirect} we can see that flushing the cache ultimately comes down to processing a list of dirty records. How, then, do dirty records correspond to ZFS objects, that is, dnodes? In the next article we will look at how a VFS write ends up generating dirty records in ZFS.
A normal write ZIO is dispatched asynchronously through the ZIO pipeline, and the pipeline waits for all of its independent child I/Os to complete. Level-0 data blocks are processed concurrently.
Walkthrough
The following traces the function call flow for any pool I/O, from txg_sync_start down to zio_wait (or zio_nowait). Only the core parts of each function are quoted; elided code is marked with ... to keep the listings readable.
When a storage pool is created or imported, txg_sync_start is called to create the txg_sync_thread thread.
void
txg_sync_start(dsl_pool_t *dp)
{
...
tx->tx_sync_thread = thread_create(NULL, 32 << 10, txg_sync_thread, dp, 0, &p0, TS_RUN, minclsyspri);
...
}
While the pool is running, txgs constantly move between states. When a txg enters the syncing state, spa_sync is called; once spa_sync returns, every thread waiting on tx_sync_done_cv is woken up.
static void
txg_sync_thread(void *arg)
{
dsl_pool_t *dp = arg;
spa_t *spa = dp->dp_spa;
...
for (;;) {
...
txg = tx->tx_quiesced_txg;
...
spa_sync(spa, txg);
...
cv_broadcast(&tx->tx_sync_done_cv);
...
}
}
spa_sync calls dsl_pool_sync in a loop until no new dirty data remains, that is, until the meta-objset (MOS) is no longer dirty for this txg.
void
spa_sync(spa_t *spa, uint64_t txg)
{
dsl_pool_t *dp = spa->spa_dsl_pool;
objset_t *mos = spa->spa_meta_objset;
...
do {
...
dsl_pool_sync(dp, txg);
} while (dmu_objset_is_dirty(mos, txg));
}
dsl_pool_sync iterates over all dirty datasets in the pool and calls dsl_dataset_sync twice. The first pass writes out all dirty data blocks; the second writes out the changes produced by the user/group space accounting updates. Each pass hangs its writes under a root ZIO created for the pool and waits on that ZIO synchronously (the synchrony is the zio_wait call).
void
dsl_pool_sync(dsl_pool_t *dp, uint64_t txg)
{
...
dsl_dataset_t *ds;
objset_t *mos = dp->dp_meta_objset;
...
tx = dmu_tx_create_assigned(dp, txg);
/*
* Write out all dirty blocks of dirty datasets.
*/
zio = zio_root(dp->dp_spa, NULL, NULL, ZIO_FLAG_MUSTSUCCEED);
while ((ds = txg_list_remove(&dp->dp_dirty_datasets, txg)) != NULL) {
/*
* We must not sync any non-MOS datasets twice,
* because we may have taken a snapshot of them.
* However, we may sync newly-created datasets on
* pass 2.
*/
ASSERT(!list_link_active(&ds->ds_synced_link));
list_insert_tail(&synced_datasets, ds);
dsl_dataset_sync(ds, zio, tx);
}
VERIFY0(zio_wait(zio));
...
/*
* After the data blocks have been written (ensured by the zio_wait()
* above), update the user/group space accounting.
*/
for (ds = list_head(&synced_datasets); ds != NULL;
ds = list_next(&synced_datasets, ds)) {
dmu_objset_do_userquota_updates(ds->ds_objset, tx);
}
/*
* Sync the datasets again to push out the changes due to
* userspace updates. This must be done before we process the
* sync tasks, so that any snapshots will have the correct
* user accounting information (and we won't get confused
* about which blocks are part of the snapshot).
*/
zio = zio_root(dp->dp_spa, NULL, NULL, ZIO_FLAG_MUSTSUCCEED);
while ((ds = txg_list_remove(&dp->dp_dirty_datasets, txg)) != NULL) {
ASSERT(list_link_active(&ds->ds_synced_link));
dmu_buf_rele(ds->ds_dbuf, ds);
dsl_dataset_sync(ds, zio, tx);
}
VERIFY0(zio_wait(zio));
...
}
dsl_dataset_sync passes the dataset's object set (objset) to dmu_objset_sync, which performs the dataset sync.
void
dsl_dataset_sync(dsl_dataset_t *ds, zio_t *zio, dmu_tx_t *tx)
{
...
dmu_objset_sync(ds->ds_objset, zio, tx);
}
dmu_objset_sync calls dmu_objset_sync_dnodes to write out the dnodes on the objset's dirty-dnode and freed-dnode lists. Note that the special metadata dnodes must be synced first, by calling dnode_sync on them directly.
/* called from dsl */
void
dmu_objset_sync(objset_t *os, zio_t *pio, dmu_tx_t *tx)
{
int txgoff;
...
list_t *newlist = NULL;
dbuf_dirty_record_t *dr;
...
/*
* Create the root block IO
*/
...
zio = arc_write(pio, os->os_spa, tx->tx_txg,
os->os_rootbp, os->os_phys_buf, DMU_OS_IS_L2CACHEABLE(os),
DMU_OS_IS_L2COMPRESSIBLE(os), &zp, dmu_objset_write_ready,
NULL, dmu_objset_write_done, os, ZIO_PRIORITY_ASYNC_WRITE,
ZIO_FLAG_MUSTSUCCEED, &zb);
/*
* Sync special dnodes - the parent IO for the sync is the root block
*/
dnode_sync(DMU_META_DNODE(os), tx);
...
if (DMU_USERUSED_DNODE(os) &&
DMU_USERUSED_DNODE(os)->dn_type != DMU_OT_NONE) {
DMU_USERUSED_DNODE(os)->dn_zio = zio;
dnode_sync(DMU_USERUSED_DNODE(os), tx);
DMU_GROUPUSED_DNODE(os)->dn_zio = zio;
dnode_sync(DMU_GROUPUSED_DNODE(os), tx);
}
...
txgoff = tx->tx_txg & TXG_MASK;
...
if (dmu_objset_userused_enabled(os)) {
newlist = &os->os_synced_dnodes;
/*
* We must create the list here because it uses the
* dn_dirty_link[] of this txg.
*/
list_create(newlist, sizeof (dnode_t),
offsetof(dnode_t, dn_dirty_link[txgoff]));
}
dmu_objset_sync_dnodes(&os->os_free_dnodes[txgoff], newlist, tx);
dmu_objset_sync_dnodes(&os->os_dirty_dnodes[txgoff], newlist, tx);
list = &DMU_META_DNODE(os)->dn_dirty_records[txgoff];
while ((dr = list_head(list)) != NULL) {
ASSERT0(dr->dr_dbuf->db_level);
list_remove(list, dr);
if (dr->dr_zio)
zio_nowait(dr->dr_zio);
}
/*
* Free intent log blocks up to this tx.
*/
zil_sync(os->os_zil, tx);
os->os_phys->os_zil_header = os->os_zil_header;
zio_nowait(zio);
}
For each dirty dnode on the list, dmu_objset_sync_dnodes calls dnode_sync to write the dnode out, first adding it to newlist if newlist is non-NULL (from the caller we can see this means adding it to os->os_synced_dnodes).
static void
dmu_objset_sync_dnodes(list_t *list, list_t *newlist, dmu_tx_t *tx)
{
dnode_t *dn;
while ((dn = list_head(list)) != NULL) {
...
/*
* Initialize dn_zio outside dnode_sync() because the
* meta-dnode needs to set it outside dnode_sync().
*/
dn->dn_zio = dn->dn_dbuf->db_data_pending->dr_zio;
list_remove(list, dn);
if (newlist) {
(void) dnode_add_ref(dn, newlist);
list_insert_tail(newlist, dn);
}
dnode_sync(dn, tx);
}
}
dnode_sync passes the dnode's dirty buffer records to dbuf_sync_list.
void
dnode_sync(dnode_t *dn, dmu_tx_t *tx)
{
...
list_t *list = &dn->dn_dirty_records[txgoff];
...
dbuf_sync_list(list, tx);
}
dbuf_sync_list walks the list of dirty buffer records and, depending on the buffer's level, calls dbuf_sync_indirect (db_level > 0) or dbuf_sync_leaf (level 0).
void
dbuf_sync_list(list_t *list, dmu_tx_t *tx)
{
dbuf_dirty_record_t *dr;
while ((dr = list_head(list)) != NULL) {
...
list_remove(list, dr);
if (dr->dr_dbuf->db_level > 0)
dbuf_sync_indirect(dr, tx);
else
dbuf_sync_leaf(dr, tx);
}
}
ZFS is a copy-on-write (COW) file system, and no block is exempt: every time a data block is rewritten, the indirect blocks pointing at it must be rewritten too. Modifying a data block within a file therefore requires reading those indirect blocks from disk first. A dirty indirect block means some of the data blocks it points to are dirty. The indirect block's own ZIO is issued only after ZIOs have been issued for all of its children.
static void
dbuf_sync_indirect(dbuf_dirty_record_t *dr, dmu_tx_t *tx)
{
dmu_buf_impl_t *db = dr->dr_dbuf;
...
/* Read the block if it hasn't been read yet. */
if (db->db_buf == NULL) {
mutex_exit(&db->db_mtx);
(void) dbuf_read(db, NULL, DB_RF_MUST_SUCCEED);
mutex_enter(&db->db_mtx);
}
...
/* Provide the pending dirty record to child dbufs */
db->db_data_pending = dr;
mutex_exit(&db->db_mtx);
/* doesn't actually execute a write - it just creates
* dr->dr_zio which is executed by zio_nowait before
* returning
*/
dbuf_write(dr, db->db_buf, tx);
zio = dr->dr_zio;
mutex_enter(&dr->dt.di.dr_mtx);
dbuf_sync_list(&dr->dt.di.dr_children, tx);
ASSERT(list_head(&dr->dt.di.dr_children) == NULL);
mutex_exit(&dr->dt.di.dr_mtx);
zio_nowait(zio);
}
dbuf_sync_leaf creates the ZIO for a dirty buffer record and dispatches it asynchronously.
static void
dbuf_sync_leaf(dbuf_dirty_record_t *dr, dmu_tx_t *tx)
{
arc_buf_t **datap = &dr->dt.dl.dr_data;
dmu_buf_impl_t *db = dr->dr_dbuf;
...
/* doesn't actually execute a write - it just creates
* dr->dr_zio which is executed by zio_nowait before
* returning
*/
dbuf_write(dr, *datap, tx);
ASSERT(!list_link_active(&dr->dr_dirty_node));
if (dn->dn_object == DMU_META_DNODE_OBJECT) {
list_insert_tail(&dn->dn_dirty_records[txg&TXG_MASK], dr);
DB_DNODE_EXIT(db);
} else {
/*
* Although zio_nowait() does not "wait for an IO", it does
* initiate the IO. If this is an empty write it seems plausible
* that the IO could actually be completed before the nowait
* returns. We need to DB_DNODE_EXIT() first in case
* zio_nowait() invalidates the dbuf.
*/
DB_DNODE_EXIT(db);
zio_nowait(dr->dr_zio);
}
}