Data Augmentation for NLP

In machine learning, I believe there is one overarching premise: there is never enough data. For all the hype around big data, labeled data in natural language processing is especially scarce, and the quality of annotation is very hard to control. Under these circumstances, data augmentation becomes essential, and it matters a great deal for a model's robustness and generalization.

Different areas of NLP have each developed some augmentation methods of their own; both task-independent and task-specific techniques are surveyed below.

Task-independent data augmentation for NLP

Data augmentation aims to create additional training data by producing variations of existing training examples through transformations, which can mirror those encountered in the real world. In Computer Vision (CV), common augmentation techniques are mirroring, random cropping, shearing, etc. Data augmentation is super useful in CV. For instance, it has been used to great effect in AlexNet (Krizhevsky et al., 2012) [1] to combat overfitting and in most state-of-the-art models since. In addition, data augmentation makes intuitive sense as it makes the training data more diverse and should thus increase a model's generalization ability.
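For concreteness, here is a minimal sketch of such a CV pipeline using torchvision; the library choice and the parameter values are my own illustration, not part of the cited work:

```python
import torchvision.transforms as T

# A typical CV augmentation pipeline: each epoch, every training image
# passes through a slightly different random transformation.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),        # mirroring
    T.RandomResizedCrop(224),             # random cropping
    T.RandomAffine(degrees=0, shear=10),  # shearing
    T.ToTensor(),
])

# Usage: augmented_tensor = augment(pil_image)
```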

However, in NLP, data augmentation is not widely used. In my mind, this is for two reasons:

Data in NLP is discrete. This prevents us from applying simple transformations directly to the input data. Most recently proposed augmentation methods in CV focus on such transformations, e.g. domain randomization (Tobin et al., 2017) [2].

Small perturbations may change the meaning. Deleting a negation may change a sentence’s sentiment, while modifying a word in a paragraph might inadvertently change the answer to a question about that paragraph. This is not the case in CV where perturbing individual pixels does not change whether an image is a cat or dog and even stark changes such as interpolation of different images can be useful (Zhang et al., 2017) [3].

Existing approaches that I am aware of are either rule-based (Li et al., 2017) [5] or task-specific, e.g. for parsing (Wang and Eisner, 2016) [6] or zero-pronoun resolution (Liu et al., 2017) [7]. Xie et al. (2017) [39] replace words with samples from different distributions for language modelling and Machine Translation. Recent work focuses on creating adversarial examples either by replacing words or characters (Samanta and Mehta, 2017; Ebrahimi et al., 2017) [8,9], concatenation (Jia and Liang, 2017) [11], or adding adversarial perturbations (Yasunaga et al., 2017) [10]. An adversarial setup is also used by Li et al. (2017) [16] who train a system to produce sequences that are indistinguishable from human-generated dialogue utterances.
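To make the word-replacement idea concrete, here is a minimal sketch in the spirit of Xie et al. (2017); the function name and the dict-based unigram interface are illustrative assumptions, not the paper's exact formulation:

```python
import random

def replace_words(tokens, unigram_dist, p=0.1):
    # With probability p, replace each token by a word sampled from a
    # unigram distribution (hypothetical interface: word -> probability).
    vocab = list(unigram_dist.keys())
    weights = list(unigram_dist.values())
    return [
        random.choices(vocab, weights=weights, k=1)[0]
        if random.random() < p else tok
        for tok in tokens
    ]

# Example: noise a tokenized sentence while keeping most of the
# original context intact.
print(replace_words("the cat sat on the mat".split(),
                    {"the": 0.5, "a": 0.3, "dog": 0.2}))
```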

Back-translation (Sennrich et al., 2015; Sennrich et al., 2016) [12,13] is a common data augmentation method in Machine Translation (MT) that allows us to incorporate monolingual training data. For instance, when training an EN→FR system, monolingual French text is translated to English using an FR→EN system; the synthetic parallel data can then be used for training. Back-translation can also be used for paraphrasing (Mallinson et al., 2017) [14]. Paraphrasing has been used for data augmentation for QA (Dong et al., 2017) [15], but I am not aware of its use for other tasks.

Three related ways of exploiting monolingual data in MT (a back-translation sketch follows this list):

Back-translation. Translate target-side sentences into the source language, then use the synthetic sentence pairs as additional training data. See Improving Neural Machine Translation Models with Monolingual Data.

Joint learning. See Joint Training for Neural Machine Translation Models with Monolingual Data.

Dual learning. See Dual Learning for Machine Translation.
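As a concrete illustration, here is a minimal back-translation sketch using Hugging Face transformers with the Helsinki-NLP Marian checkpoints; the model name and example sentences are my assumptions, not from the papers above:

```python
from transformers import MarianMTModel, MarianTokenizer

# Scenario: we train EN->FR and have extra monolingual French text.
# An FR->EN model translates it back to English, yielding synthetic
# (English, French) pairs to mix into the parallel training data.
name = "Helsinki-NLP/opus-mt-fr-en"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

monolingual_fr = [
    "Le chat est assis sur le tapis.",
    "La traduction automatique progresse rapidement.",
]

batch = tokenizer(monolingual_fr, return_tensors="pt", padding=True)
generated = model.generate(**batch)
synthetic_en = tokenizer.batch_decode(generated, skip_special_tokens=True)

# Each (synthetic_en[i], monolingual_fr[i]) pair is a new EN->FR example.
for src, tgt in zip(synthetic_en, monolingual_fr):
    print(src, "=>", tgt)
```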



Another method that is close to paraphrasing is generating sentences from a continuous space using a variational autoencoder (Bowman et al., 2016; Guu et al., 2017) [17,19]. If the representations are disentangled as in (Hu et al., 2017) [18], then we are also not too far from style transfer (Shen et al., 2017) [20].
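To see why a continuous latent space helps, consider interpolating between two sentences; `encode` and `decode` below are hypothetical stand-ins for a trained sentence VAE in the style of Bowman et al. (2016):

```python
import numpy as np

def interpolate(encode, decode, s1, s2, steps=5):
    # Walk the straight line between two latent codes and decode each
    # point; a well-trained VAE yields fluent intermediate sentences.
    z1, z2 = encode(s1), encode(s2)
    return [decode((1 - a) * z1 + a * z2)
            for a in np.linspace(0.0, 1.0, steps)]
```

Each decoded sentence is a candidate augmentation that stays close to the originals in meaning while varying the surface form.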

There are a few research directions that would be interesting to pursue:

Evaluation study: Evaluate a range of existing data augmentation methods, as well as techniques that have not been widely used for augmentation such as paraphrasing and style transfer, on a diverse range of tasks including text classification and sequence labelling. Identify which types of data augmentation are robust across tasks and which are task-specific. This could be packaged as a software library to make future benchmarking easier (think CleverHans for NLP).

Data augmentation with style transfer: Investigate whether style transfer can be used to modify various attributes of training examples for more robust learning.

Learn the augmentation: Similar to Dong et al. (2017), we could learn either to paraphrase or to generate transformations for a particular task.

Learn a word embedding space for data augmentation: A typical word embedding space clusters synonyms and antonyms together; using nearest neighbours in this space for replacement is thus infeasible (see the sketch below). Inspired by recent work (Mrk?i? et al., 2017) [21], we could specialize the word embedding space to make it more suitable for data augmentation.
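A small sketch of this failure mode, using gensim; the vectors file path is a placeholder assumption:

```python
from gensim.models import KeyedVectors

# Load pretrained word vectors (path is a placeholder).
kv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

# In a standard embedding space, antonyms sit among the nearest
# neighbours, so naive replacement can flip the meaning:
# neighbours of "good" typically include "bad".
for word, score in kv.most_similar("good", topn=5):
    print(word, round(score, 3))
```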

Adversarial data augmentation: Related to recent work in interpretability (Ribeiro et al., 2016) [22], we could change the most salient words in an example, i.e. those that a model depends on for a prediction (sketched below). This still requires a semantics-preserving replacement method, however.
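A rough, model-agnostic way to find salient words is leave-one-out occlusion; `predict_proba` below is a hypothetical classifier interface mapping a token list to a label-to-probability dict:

```python
def salient_words(tokens, predict_proba, label):
    # Rank words by how much deleting each one lowers the model's
    # confidence in `label` (leave-one-out occlusion).
    base = predict_proba(tokens)[label]
    drops = []
    for i in range(len(tokens)):
        reduced = tokens[:i] + tokens[i + 1:]
        drops.append((base - predict_proba(reduced)[label], tokens[i]))
    return sorted(drops, reverse=True)
```

The top-ranked words are the ones an adversarial augmenter would target for replacement.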


Tutorial

Robust, Unbiased Natural Language Processing

(To be continued...)
