易 AI - AlexNet Paper Deep Dive

Original post: https://makeoptim.com/deep-learning/yiai-paper-alexnet

Paper link

https://papers.nips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf

How to read this article

This article presents the original text of the paper interleaved with reading notes.

The author reads the paper following the method from How to Read Deep Learning Papers; the markers $1 (first pass), $2, $3, and $4 indicate the notes and thoughts recorded during the corresponding reading pass.

ImageNet Classification with Deep Convolutional Neural Networks


$1 The focus of this paper is ImageNet classification using a deep convolutional neural network; we can guess that this network is what came to be known as AlexNet.

Abstract


We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called “dropout” that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.


$1 The abstract tells us that AlexNet is a large network made up of convolutional layers, max-pooling layers, fully-connected layers, and a softmax. The authors used non-saturating neurons, a GPU implementation of convolution, dropout, and other techniques to optimize the network, and achieved strong results in several major competitions.

1 Introduction


Current approaches to object recognition make essential use of machine learning methods. To improve their performance, we can collect larger datasets, learn more powerful models, and use better techniques for preventing overfitting. Until recently, datasets of labeled images were relatively small — on the order of tens of thousands of images (e.g., NORB [16], Caltech-101/256 [8, 9], and CIFAR-10/100 [12]). Simple recognition tasks can be solved quite well with datasets of this size, especially if they are augmented with label-preserving transformations. For example, the current-best error rate on the MNIST digit-recognition task (<0.3%) approaches human performance [4]. But objects in realistic settings exhibit considerable variability, so to learn to recognize them it is necessary to use much larger training sets. And indeed, the shortcomings of small image datasets have been widely recognized (e.g., Pinto et al. [21]), but it has only recently become possible to collect labeled datasets with millions of images. The new larger datasets include LabelMe [23], which consists of hundreds of thousands of fully-segmented images, and ImageNet [6], which consists of over 15 million labeled high-resolution images in over 22,000 categories.


To learn about thousands of objects from millions of images, we need a model with a large learning capacity. However, the immense complexity of the object recognition task means that this problem cannot be specified even by a dataset as large as ImageNet, so our model should also have lots of prior knowledge to compensate for all the data we don’t have. Convolutional neural networks (CNNs) constitute one such class of models [16, 11, 13, 18, 15, 22, 26]. Their capacity can be controlled by varying their depth and breadth, and they also make strong and mostly correct assumptions about the nature of images (namely, stationarity of statistics and locality of pixel dependencies). Thus, compared to standard feedforward neural networks with similarly-sized layers, CNNs have much fewer connections and parameters and so they are easier to train, while their theoretically-best performance is likely to be only slightly worse.


Despite the attractive qualities of CNNs, and despite the relative efficiency of their local architecture, they have still been prohibitively expensive to apply in large scale to high-resolution images. Luckily, current GPUs, paired with a highly-optimized implementation of 2D convolution, are powerful enough to facilitate the training of interestingly-large CNNs, and recent datasets such as ImageNet contain enough labeled examples to train such models without severe overfitting.


The specific contributions of this paper are as follows: we trained one of the largest convolutional neural networks to date on the subsets of ImageNet used in the ILSVRC-2010 and ILSVRC-2012 competitions [2] and achieved by far the best results ever reported on these datasets. We wrote a highly-optimized GPU implementation of 2D convolution and all the other operations inherent in training convolutional neural networks, which we make available publicly. Our network contains a number of new and unusual features which improve its performance and reduce its training time, which are detailed in Section 3. The size of our network made overfitting a significant problem, even with 1.2 million labeled training examples, so we used several effective techniques for preventing overfitting, which are described in Section 4. Our final network contains five convolutional and three fully-connected layers, and this depth seems to be important: we found that removing any convolutional layer (each of which contains no more than 1% of the model’s parameters) resulted in inferior performance.


In the end, the network’s size is limited mainly by the amount of memory available on current GPUs and by the amount of training time that we are willing to tolerate. Our network takes between five and six days to train on two GTX 580 3GB GPUs. All of our experiments suggest that our results can be improved simply by waiting for faster GPUs and bigger datasets to become available.


$2 From the introduction we learn that this paper is mainly about the AlexNet team using GPUs to train a large convolutional neural network that performs image classification on a large dataset. They introduced several new features to improve performance and reduce training time, used techniques to prevent overfitting, and realized that network depth is very important. The new features, the overfitting countermeasures, and the GPU training are therefore the key things for us to study.

2 The Dataset


ImageNet is a dataset of over 15 million labeled high-resolution images belonging to roughly 22,000 categories. The images were collected from the web and labeled by human labelers using Amazon’s Mechanical Turk crowd-sourcing tool. Starting in 2010, as part of the Pascal Visual Object Challenge, an annual competition called the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) has been held. ILSVRC uses a subset of ImageNet with roughly 1000 images in each of 1000 categories. In all, there are roughly 1.2 million training images, 50,000 validation images, and 150,000 testing images.


ILSVRC-2010 is the only version of ILSVRC for which the test set labels are available, so this is the version on which we performed most of our experiments. Since we also entered our model in the ILSVRC-2012 competition, in Section 6 we report our results on this version of the dataset as well, for which test set labels are unavailable. On ImageNet, it is customary to report two error rates: top-1 and top-5, where the top-5 error rate is the fraction of test images for which the correct label is not among the five labels considered most probable by the model.

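To make the top-1/top-5 convention concrete, here is a minimal NumPy sketch (the function name and the fake scores are my own, not from the paper) that computes the fraction of test images whose correct label is missing from the k highest-scoring predictions:

```python
import numpy as np

def top_k_error(logits, labels, k=5):
    """Fraction of samples whose true label is NOT among the k
    highest-scoring classes. logits: (N, C), labels: (N,)."""
    # Indices of the k largest scores per row (order within the k is irrelevant).
    top_k = np.argpartition(logits, -k, axis=1)[:, -k:]
    hits = (top_k == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

logits = np.random.randn(4, 1000)        # stand-in scores for 1000 classes
labels = np.array([3, 17, 42, 999])
print(top_k_error(logits, labels, k=1))  # top-1 error
print(top_k_error(logits, labels, k=5))  # top-5 error
```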

ImageNet consists of variable-resolution images, while our system requires a constant input dimensionality. Therefore, we down-sampled the images to a fixed resolution of 256 × 256. Given a rectangular image, we first rescaled the image such that the shorter side was of length 256, and then cropped out the central 256×256 patch from the resulting image. We did not pre-process the images in any other way, except for subtracting the mean activity over the training set from each pixel. So we trained our network on the (centered) raw RGB values of the pixels.


$3 The authors first introduce the ImageNet dataset, then ILSVRC-2010, on which most of their experiments were run, and additionally explain the notion of the top-5 error rate. The key point is how they handle images of different resolutions: rescale the shorter side, then center-crop.
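
The rescale-shorter-side-then-center-crop pipeline is easy to reproduce. Below is a sketch using Pillow and NumPy; the function name is mine, and `train_mean` is a hypothetical placeholder for the per-pixel mean the paper subtracts, assumed to be precomputed over the training set:

```python
import numpy as np
from PIL import Image

def resize_and_center_crop(img, size=256):
    """Rescale so the shorter side equals `size`, then crop the
    central size x size patch, as described in Section 2."""
    w, h = img.size
    scale = size / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    left = (img.width - size) // 2
    top = (img.height - size) // 2
    return img.crop((left, top, left + size, top + size))

# Mean subtraction (train_mean assumed precomputed over the training set):
# train_mean = np.mean(all_train_images, axis=0)      # shape (256, 256, 3)
# x = np.asarray(resize_and_center_crop(img), np.float32) - train_mean
```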

3 The Architecture


The architecture of our network is summarized in Figure 2. It contains eight learned layers — five convolutional and three fully-connected. Below, we describe some of the novel or unusual features of our network’s architecture. Sections 3.1-3.4 are sorted according to our estimation of their importance, with the most important first.


3.1 ReLU Nonlinearity


The standard way to model a neuron’s output f as a function of its input x is with f(x) = tanh(x) or f(x) = (1 + e^{-x})^{-1}. In terms of training time with gradient descent, these saturating nonlinearities are much slower than the non-saturating nonlinearity f(x) = max(0, x). Following Nair and Hinton [20], we refer to neurons with this nonlinearity as Rectified Linear Units (ReLUs). Deep convolutional neural networks with ReLUs train several times faster than their equivalents with tanh units. This is demonstrated in Figure 1, which shows the number of iterations required to reach 25% training error on the CIFAR-10 dataset for a particular four-layer convolutional network. This plot shows that we would not have been able to experiment with such large neural networks for this work if we had used traditional saturating neuron models.


Figure 1: A four-layer convolutional neural network with ReLUs (solid line) reaches a 25% training error rate on CIFAR-10 six times faster than an equivalent network with tanh neurons (dashed line). The learning rates for each network were chosen independently to make training as fast as possible. No regularization of any kind was employed. The magnitude of the effect demonstrated here varies with network architecture, but networks with ReLUs consistently learn several times faster than equivalents with saturating neurons.


$1 Figure 1 shows that using ReLU speeds up training.

We are not the first to consider alternatives to traditional neuron models in CNNs. For example, Jarrett et al. [11] claim that the nonlinearity f(x) = |tanh(x)| works particularly well with their type of contrast normalization followed by local average pooling on the Caltech-101 dataset. However, on this dataset the primary concern is preventing overfitting, so the effect they are observing is different from the accelerated ability to fit the training set which we report when using ReLUs. Faster learning has a great influence on the performance of large models trained on large datasets.


$3 The authors point to the architecture diagram and highlight several new features, ordered by importance. ReLU, listed first, is naturally the most important. Replacing traditional saturating nonlinear activations (tanh, etc.) with the non-saturating ReLU speeds up training, which helps the performance of large models on large datasets a great deal. To understand in detail why the non-saturating ReLU accelerates training, see reference [20].
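
The saturation argument can be made concrete in a few lines of NumPy: tanh's derivative collapses toward zero for inputs of large magnitude, stalling gradient descent, while ReLU's derivative stays at 1 wherever the unit is active. A small illustration of my own, not code from the paper:

```python
import numpy as np

x = np.linspace(-6, 6, 7)

tanh_grad = 1 - np.tanh(x) ** 2      # -> ~0 for |x| large: saturation
relu_grad = (x > 0).astype(float)    # constant 1 on the active side

for xi, tg, rg in zip(x, tanh_grad, relu_grad):
    print(f"x={xi:+.0f}  d(tanh)/dx={tg:.4f}  d(relu)/dx={rg:.0f}")
```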

3.2 Training on Multiple GPUs


A single GTX 580 GPU has only 3GB of memory, which limits the maximum size of the networks that can be trained on it. It turns out that 1.2 million training examples are enough to train networks which are too big to fit on one GPU. Therefore we spread the net across two GPUs. Current GPUs are particularly well-suited to cross-GPU parallelization, as they are able to read from and write to one another’s memory directly, without going through host machine memory. The parallelization scheme that we employ essentially puts half of the kernels (or neurons) on each GPU, with one additional trick: the GPUs communicate only in certain layers. This means that, for example, the kernels of layer 3 take input from all kernel maps in layer 2. However, kernels in layer 4 take input only from those kernel maps in layer 3 which reside on the same GPU. Choosing the pattern of connectivity is a problem for cross-validation, but this allows us to precisely tune the amount of communication until it is an acceptable fraction of the amount of computation.


The resultant architecture is somewhat similar to that of the “columnar” CNN employed by Cire?an et al. [5], except that our columns are not independent (see Figure 2). This scheme reduces our top-1 and top-5 error rates by 1.7% and 1.2%, respectively, as compared with a net with half as many kernels in each convolutional layer trained on one GPU. The two-GPU net takes slightly less time to train than the one-GPU net.


Figure 2: An illustration of the architecture of our CNN, explicitly showing the delineation of responsibilities between the two GPUs. One GPU runs the layer-parts at the top of the figure while the other runs the layer-parts at the bottom. The GPUs communicate only at certain layers. The network’s input is 150,528-dimensional, and the number of neurons in the network’s remaining layers is given by 253,440–186,624–64,896–64,896–43,264– 4096–4096–1000.


$1 Figure 2 shows the AlexNet architecture and explains how the GPUs cooperate. Note: the figure shows only half of the network; it appears this way in the original paper as well!

$3 The authors describe how they trained the model across multiple GPUs to get past the limit on network size. Mature distributed-training solutions exist today, but at the time this was an outstandingly successful piece of engineering.
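
On a single modern device, the paper's restricted connectivity (a layer that sees only the kernel maps on its own GPU) is commonly approximated with grouped convolutions. A sketch in tf.keras, assuming TensorFlow >= 2.3 for the `groups` argument, with shapes loosely following layers 3 and 4 of the paper:

```python
import tensorflow as tf

# Layer 4 sees only the 192 layer-3 maps on the same GPU; on one device
# this connectivity is equivalent to a convolution with two groups.
x = tf.random.normal([1, 13, 13, 384])                  # layer-3 output
restricted = tf.keras.layers.Conv2D(384, 3, padding="same", groups=2)
full = tf.keras.layers.Conv2D(384, 3, padding="same")   # cross-GPU links

print(restricted(x).shape, full(x).shape)               # identical output shapes
print(restricted.count_params(), full.count_params())   # ~half the weights
```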

3.3 Local Response Normalization


ReLUs have the desirable property that they do not require input normalization to prevent them from saturating. If at least some training examples produce a positive input to a ReLU, learning will happen in that neuron. However, we still find that the following local normalization scheme aids generalization. Denoting by a^i_{x,y} the activity of a neuron computed by applying kernel i at position (x, y) and then applying the ReLU nonlinearity, the response-normalized activity b^i_{x,y} is given by the expression

b^i_{x,y} = a^i_{x,y} \Big/ \left( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \left( a^j_{x,y} \right)^2 \right)^{\beta}

where the sum runs over n “adjacent” kernel maps at the same spatial position, and N is the total number of kernels in the layer. The ordering of the kernel maps is of course arbitrary and determined before training begins. This sort of response normalization implements a form of lateral inhibition inspired by the type found in real neurons, creating competition for big activities amongst neuron outputs computed using different kernels. The constants k, n, α, and β are hyper-parameters whose values are determined using a validation set; we used k = 2, n = 5, α = 10^{-4}, and β = 0.75. We applied this normalization after applying the ReLU nonlinearity in certain layers (see Section 3.5).


This scheme bears some resemblance to the local contrast normalization scheme of Jarrett et al. [11], but ours would be more correctly termed “brightness normalization”, since we do not subtract the mean activity. Response normalization reduces our top-1 and top-5 error rates by 1.4% and 1.2%, respectively. We also verified the effectiveness of this scheme on the CIFAR-10 dataset: a four-layer CNN achieved a 13% test error rate without normalization and 11% with normalization.


$3 The authors found that although ReLU does not require input normalization to prevent saturation, local response normalization still aids generalization, and they demonstrate empirically that it lowers the error rate. Friendly tip: if the math is hard to follow on this pass, it is fine to skip it for now.
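
To make the formula concrete, here is a small NumPy sketch of local response normalization with the paper's hyper-parameters (k = 2, n = 5, alpha = 1e-4, beta = 0.75); the function name is mine, and the loop favors clarity over speed:

```python
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """a: activations of shape (N_kernels, H, W). Normalizes channel i by
    the sum of squares over up to n adjacent channels, per the formula."""
    N = a.shape[0]
    b = np.empty_like(a)
    for i in range(N):
        lo, hi = max(0, i - n // 2), min(N - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b

acts = np.abs(np.random.randn(96, 55, 55))  # e.g. conv-1 output after ReLU
print(local_response_norm(acts).shape)      # (96, 55, 55)
```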

3.4 Overlapping Pooling


Pooling layers in CNNs summarize the outputs of neighboring groups of neurons in the same kernel map. Traditionally, the neighborhoods summarized by adjacent pooling units do not overlap (e.g., [17, 11, 4]). To be more precise, a pooling layer can be thought of as consisting of a grid of pooling units spaced s pixels apart, each summarizing a neighborhood of size z × z centered at the location of the pooling unit. If we set s = z, we obtain traditional local pooling as commonly employed in CNNs. If we set s < z, we obtain overlapping pooling. This is what we use throughout our network, with s = 2 and z = 3. This scheme reduces the top-1 and top-5 error rates by 0.4% and 0.3%, respectively, as compared with the non-overlapping scheme s = 2, z = 2, which produces output of equivalent dimensions. We generally observe during training that models with overlapping pooling find it slightly more difficult to overfit.


$3 The authors found that overlapping pooling lowers the error rate and also helps guard against overfitting.
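
A one-dimensional toy example makes the s/z distinction tangible; the helper below is my own illustration, not the paper's code:

```python
import numpy as np

def max_pool_1d(x, z, s):
    """1-D max pooling: windows of size z placed every s positions."""
    n_out = (len(x) - z) // s + 1
    return np.array([x[i * s:i * s + z].max() for i in range(n_out)])

x = np.arange(11.0)
print(max_pool_1d(x, z=2, s=2))  # traditional: non-overlapping windows
print(max_pool_1d(x, z=3, s=2))  # overlapping (s < z), as in the paper
# Both outputs have the same length here, but with z=3 a single input
# can influence up to two pooling units.
```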

3.5 Overall Architecture


Now we are ready to describe the overall architecture of our CNN. As depicted in Figure 2, the net contains eight layers with weights; the first five are convolutional and the remaining three are fully-connected. The output of the last fully-connected layer is fed to a 1000-way softmax which produces a distribution over the 1000 class labels. Our network maximizes the multinomial logistic regression objective, which is equivalent to maximizing the average across training cases of the log-probability of the correct label under the prediction distribution.


The kernels of the second, fourth, and fifth convolutional layers are connected only to those kernel maps in the previous layer which reside on the same GPU (see Figure 2). The kernels of the third convolutional layer are connected to all kernel maps in the second layer. The neurons in the fully-connected layers are connected to all neurons in the previous layer. Response-normalization layers follow the first and second convolutional layers. Max-pooling layers, of the kind described in Section 3.4, follow both response-normalization layers as well as the fifth convolutional layer. The ReLU non-linearity is applied to the output of every convolutional and fully-connected layer.


The first convolutional layer filters the 224 × 224 × 3 input image with 96 kernels of size 11 × 11 × 3 with a stride of 4 pixels (this is the distance between the receptive field centers of neighboring neurons in a kernel map). The second convolutional layer takes as input the (response-normalized and pooled) output of the first convolutional layer and filters it with 256 kernels of size 5 × 5 × 48. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size 3 × 3 × 256 connected to the (normalized, pooled) outputs of the second convolutional layer. The fourth convolutional layer has 384 kernels of size 3 × 3 × 192 , and the fifth convolutional layer has 256 kernels of size 3 × 3 × 192. The fully-connected layers have 4096 neurons each.


$3 In this part the authors mainly describe the overall architecture, consistent with Figure 2.
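
For reference, here is a hedged single-device sketch of the layer stack in tf.keras. It merges the two-GPU split into ordinary convolutions (so kernel-map counts are the combined 96/256/384/384/256 rather than the per-GPU halves shown in the text), uses a 227×227 input because the stride-4 arithmetic does not work out at the stated 224 (a well-known discrepancy), and folds in the LRN, overlapping-pooling, and dropout placements from Sections 3.3, 3.4, and 4.2:

```python
import tensorflow as tf
from tensorflow.keras import layers

def lrn(x):  # k=2, n=5 (depth_radius=2), alpha=1e-4, beta=0.75 (Sec. 3.3)
    return tf.nn.local_response_normalization(
        x, depth_radius=2, bias=2.0, alpha=1e-4, beta=0.75)

model = tf.keras.Sequential([
    layers.Conv2D(96, 11, strides=4, activation="relu",
                  input_shape=(227, 227, 3)),        # conv1 -> 55x55x96
    layers.Lambda(lrn),
    layers.MaxPooling2D(3, strides=2),               # overlapping, Sec. 3.4
    layers.Conv2D(256, 5, padding="same", activation="relu"),
    layers.Lambda(lrn),
    layers.MaxPooling2D(3, strides=2),
    layers.Conv2D(384, 3, padding="same", activation="relu"),
    layers.Conv2D(384, 3, padding="same", activation="relu"),
    layers.Conv2D(256, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(3, strides=2),
    layers.Flatten(),                                # 6*6*256 = 9216
    layers.Dense(4096, activation="relu"),
    layers.Dropout(0.5),                             # Sec. 4.2
    layers.Dense(4096, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1000, activation="softmax"),
])
model.summary()
```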

4 Reducing Overfitting


Our neural network architecture has 60 million parameters. Although the 1000 classes of ILSVRC make each training example impose 10 bits of constraint on the mapping from image to label, this turns out to be insufficient to learn so many parameters without considerable overfitting. Below, we describe the two primary ways in which we combat overfitting.


4.1 Data Augmentation


The easiest and most common method to reduce overfitting on image data is to artificially enlarge the dataset using label-preserving transformations (e.g., [25, 4, 5]). We employ two distinct forms of data augmentation, both of which allow transformed images to be produced from the original images with very little computation, so the transformed images do not need to be stored on disk. In our implementation, the transformed images are generated in Python code on the CPU while the GPU is training on the previous batch of images. So these data augmentation schemes are, in effect, computationally free.


The first form of data augmentation consists of generating image translations and horizontal reflections. We do this by extracting random 224 × 224 patches (and their horizontal reflections) from the 256×256 images and training our network on these extracted patches. This increases the size of our training set by a factor of 2048, though the resulting training examples are, of course, highly interdependent. Without this scheme, our network suffers from substantial overfitting, which would have forced us to use much smaller networks. At test time, the network makes a prediction by extracting five 224 × 224 patches (the four corner patches and the center patch) as well as their horizontal reflections (hence ten patches in all), and averaging the predictions made by the network’s softmax layer on the ten patches.

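A NumPy sketch of this first augmentation form (function names are mine): random 224×224 crops plus horizontal reflections at training time, and the ten fixed patches at test time. The factor of 2048 comes from (256 - 224)^2 * 2 = 32 * 32 * 2:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop_flip(img, size=224):
    """Training-time: random size x size patch, flipped half the time.
    img: (256, 256, 3) array."""
    y = rng.integers(0, img.shape[0] - size + 1)
    x = rng.integers(0, img.shape[1] - size + 1)
    patch = img[y:y + size, x:x + size]
    return patch[:, ::-1] if rng.random() < 0.5 else patch

def ten_crops(img, size=224):
    """Test-time: four corner patches + center patch, each with its flip."""
    H, W = img.shape[:2]
    offs = [(0, 0), (0, W - size), (H - size, 0),
            (H - size, W - size), ((H - size) // 2, (W - size) // 2)]
    crops = [img[y:y + size, x:x + size] for y, x in offs]
    return crops + [c[:, ::-1] for c in crops]

img = rng.random((256, 256, 3))
print(random_crop_flip(img).shape, len(ten_crops(img)))  # (224, 224, 3) 10
```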

The second form of data augmentation consists of altering the intensities of the RGB channels in training images. Specifically, we perform PCA on the set of RGB pixel values throughout the ImageNet training set. To each training image, we add multiples of the found principal components, with magnitudes proportional to the corresponding eigenvalues times a random variable drawn from a Gaussian with mean zero and standard deviation 0.1. Therefore to each RGB image pixel I_{xy} = [I^R_{xy}, I^G_{xy}, I^B_{xy}]^T we add the following quantity:

[p_1, p_2, p_3][\alpha_1\lambda_1, \alpha_2\lambda_2, \alpha_3\lambda_3]^T

where p_i and λ_i are the ith eigenvector and eigenvalue of the 3 × 3 covariance matrix of RGB pixel values, respectively, and α_i is the aforementioned random variable. Each α_i is drawn only once for all the pixels of a particular training image until that image is used for training again, at which point it is re-drawn. This scheme approximately captures an important property of natural images, namely, that object identity is invariant to changes in the intensity and color of the illumination. This scheme reduces the top-1 error rate by over 1%.

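The PCA color augmentation can be sketched as follows. One deviation, flagged here and in the comments: the paper estimates the RGB covariance once over the entire training set, while this illustrative version estimates it from the single input image:

```python
import numpy as np

def pca_color_jitter(img, rng, sigma=0.1):
    """Add [p1 p2 p3][a1*l1, a2*l2, a3*l3]^T to every pixel, where p_i, l_i
    are eigenvectors/eigenvalues of the RGB covariance and a_i ~ N(0, sigma).
    NOTE: the paper computes the covariance over the whole training set;
    per-image estimation here is for brevity only."""
    flat = img.reshape(-1, 3)
    cov = np.cov(flat, rowvar=False)          # 3x3 RGB covariance
    eigval, eigvec = np.linalg.eigh(cov)      # l_i, p_i (as columns)
    alpha = rng.normal(0.0, sigma, size=3)    # drawn once per image
    delta = eigvec @ (alpha * eigval)         # 3-vector added to each pixel
    return img + delta

rng = np.random.default_rng(0)
img = rng.random((256, 256, 3))
print(pca_color_jitter(img, rng).shape)       # (256, 256, 3)
```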

$3 The authors introduce the two methods they use to overcome overfitting, the first being data augmentation. Here they mainly use two techniques: image translations with horizontal reflections, and altering the intensities of the RGB channels. TensorFlow now ships very mature data-augmentation pipelines, but the authors' practice was very valuable at the time.

4.2 Dropout


Combining the predictions of many different models is a very successful way to reduce test errors [1, 3], but it appears to be too expensive for big neural networks that already take several days to train. There is, however, a very efficient version of model combination that only costs about a factor of two during training. The recently-introduced technique, called “dropout” [10], consists of setting to zero the output of each hidden neuron with probability 0.5. The neurons which are “dropped out” in this way do not contribute to the forward pass and do not participate in back-propagation. So every time an input is presented, the neural network samples a different architecture, but all these architectures share weights. This technique reduces complex co-adaptations of neurons, since a neuron cannot rely on the presence of particular other neurons. It is, therefore, forced to learn more robust features that are useful in conjunction with many different random subsets of the other neurons. At test time, we use all the neurons but multiply their outputs by 0.5, which is a reasonable approximation to taking the geometric mean of the predictive distributions produced by the exponentially-many dropout networks.


We use dropout in the first two fully-connected layers of Figure 2. Without dropout, our network exhibits substantial overfitting. Dropout roughly doubles the number of iterations required to converge.


$3 The authors introduce dropout to overcome overfitting. The observation that "combining the predictions of many different models is a very successful way to reduce test errors" captures the essence of dropout and is worth understanding deeply. For more, you can also read my article on Dropout.
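
A minimal NumPy sketch of dropout as the paper uses it: zero each hidden unit with probability 0.5 during training, and at test time keep every unit but multiply its output by 0.5. (Modern "inverted dropout" scales at training time instead; the version below follows the text.)

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p=0.5, train=True):
    """Paper-style dropout on a vector of hidden activations h."""
    if train:
        mask = rng.random(h.shape) >= p  # resampled on every forward pass
        return h * mask
    # Test time: approximates the geometric mean of the predictions of
    # the exponentially many thinned networks.
    return h * (1.0 - p)

h = rng.standard_normal(8)
print(dropout(h, train=True))   # roughly half the units zeroed
print(dropout(h, train=False))  # all units kept, scaled by 0.5
```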

5 Details of learning


We trained our models using stochastic gradient descent with a batch size of 128 examples, momentum of 0.9, and weight decay of 0.0005. We found that this small amount of weight decay was important for the model to learn. In other words, weight decay here is not merely a regularizer: it reduces the model’s training error. The update rule for weight w was

v_{i+1} := 0.9 \cdot v_i - 0.0005 \cdot \varepsilon \cdot w_i - \varepsilon \cdot \left\langle \frac{\partial L}{\partial w} \Big|_{w_i} \right\rangle_{D_i}, \qquad w_{i+1} := w_i + v_{i+1}

where i is the iteration index, v is the momentum variable, ε is the learning rate, and \langle \frac{\partial L}{\partial w} |_{w_i} \rangle_{D_i} is the average over the ith batch D_i of the derivative of the objective with respect to w, evaluated at w_i.

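A NumPy sketch of one step of this update rule (the function name is mine, and the gradient is a random stand-in for the batch-averaged derivative):

```python
import numpy as np

def sgd_step(w, v, grad, lr, momentum=0.9, weight_decay=0.0005):
    """One update: v <- 0.9*v - 0.0005*lr*w - lr*grad, then w <- w + v.
    `grad` is the gradient of the objective averaged over batch D_i."""
    v = momentum * v - weight_decay * lr * w - lr * grad
    return w + v, v

w = 0.01 * np.random.randn(10)   # weights ~ N(0, 0.01), as in Section 5
v = np.zeros_like(w)
for _ in range(3):
    grad = np.random.randn(10)   # stand-in for the batch-averaged gradient
    w, v = sgd_step(w, v, grad, lr=0.01)
print(w)
```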

We initialized the weights in each layer from a zero-mean Gaussian distribution with standard deviation 0.01. We initialized the neuron biases in the second, fourth, and fifth convolutional layers, as well as in the fully-connected hidden layers, with the constant 1. This initialization accelerates the early stages of learning by providing the ReLUs with positive inputs. We initialized the neuron biases in the remaining layers with the constant 0.


We used an equal learning rate for all layers, which we adjusted manually throughout training. The heuristic which we followed was to divide the learning rate by 10 when the validation error rate stopped improving with the current learning rate. The learning rate was initialized at 0.01 and reduced three times prior to termination. We trained the network for roughly 90 cycles through the training set of 1.2 million images, which took five to six days on two NVIDIA GTX 580 3GB GPUs.


$3 The authors describe some details of training. The weight decay, weight initialization, bias initialization, and heuristic learning-rate schedule are all worth learning from.

6 Results


Our results on ILSVRC-2010 are summarized in Table 1. Our network achieves top-1 and top-5 test set error rates of 37.5% and 17.0%. The best performance achieved during the ILSVRC-2010 competition was 47.1% and 28.2% with an approach that averages the predictions produced from six sparse-coding models trained on different features [2], and since then the best published results are 45.7% and 25.7% with an approach that averages the predictions of two classifiers trained on Fisher Vectors (FVs) computed from two types of densely-sampled features [24].

Table 1: Comparison of results on ILSVRC-2010 test set. In italics are best results achieved by others.


We also entered our model in the ILSVRC-2012 competition and report our results in Table 2. Since the ILSVRC-2012 test set labels are not publicly available, we cannot report test error rates for all the models that we tried. In the remainder of this paragraph, we use validation and test error rates interchangeably because in our experience they do not differ by more than 0.1% (see Table 2). The CNN described in this paper achieves a top-5 error rate of 18.2%. Averaging the predictions of five similar CNNs gives an error rate of 16.4%. Training one CNN, with an extra sixth convolutional layer over the last pooling layer, to classify the entire ImageNet Fall 2011 release (15M images, 22K categories), and then “fine-tuning” it on ILSVRC-2012 gives an error rate of 16.6%. Averaging the predictions of two CNNs that were pre-trained on the entire Fall 2011 release with the aforementioned five CNNs gives an error rate of 15.3%. The second-best contest entry achieved an error rate of 26.2% with an approach that averages the predictions of several classifiers trained on FVs computed from different types of densely-sampled features [7].

Table 2: Comparison of error rates on ILSVRC-2012 validation and test sets. In italics are best results achieved by others. Models with an asterisk were “pre-trained” to classify the entire ImageNet 2011 Fall release. See Section 6 for details.


Finally, we also report our error rates on the Fall 2009 version of ImageNet with 10,184 categories and 8.9 million images. On this dataset we follow the convention in the literature of using half of the images for training and half for testing. Since there is no established test set, our split necessarily differs from the splits used by previous authors, but this does not affect the results appreciably. Our top-1 and top-5 error rates on this dataset are 67.4% and 40.9%, attained by the net described above but with an additional, sixth convolutional layer over the last pooling layer. The best published results on this dataset are 78.1% and 60.9% [19].


$3 The authors report in detail the competition results obtained with this model.

6.1 Qualitative Evaluations


Figure 3 shows the convolutional kernels learned by the network’s two data-connected layers. The network has learned a variety of frequency- and orientation-selective kernels, as well as various colored blobs. Notice the specialization exhibited by the two GPUs, a result of the restricted connectivity described in Section 3.5. The kernels on GPU 1 are largely color-agnostic, while the kernels on GPU 2 are largely color-specific. This kind of specialization occurs during every run and is independent of any particular random weight initialization (modulo a renumbering of the GPUs).

Figure 3: 96 convolutional kernels of size 11×11×3 learned by the first convolutional layer on the 224×224×3 input images. The top 48 kernels were learned on GPU 1 while the bottom 48 kernels were learned on GPU 2. See Section 6.1 for details.


$1 Figure 3 shows the convolutional kernels the two GPUs learned in the first convolutional layer.

In the left panel of Figure 4 we qualitatively assess what the network has learned by computing its top-5 predictions on eight test images. Notice that even off-center objects, such as the mite in the top-left, can be recognized by the net. Most of the top-5 labels appear reasonable. For example, only other types of cat are considered plausible labels for the leopard. In some cases (grille, cherry) there is genuine ambiguity about the intended focus of the photograph.

Figure 4: (Left) Eight ILSVRC-2010 test images and the five labels considered most probable by our model. The correct label is written under each image, and the probability assigned to the correct label is also shown with a red bar (if it happens to be in the top 5). (Right) Five ILSVRC-2010 test images in the first column. The remaining columns show the six training images that produce feature vectors in the last hidden layer with the smallest Euclidean distance from the feature vector for the test image.


$1 Figure 4 shows how the model performs on test images; we can infer that this is the model evaluation.

Another way to probe the network’s visual knowledge is to consider the feature activations induced by an image at the last, 4096-dimensional hidden layer. If two images produce feature activation vectors with a small Euclidean separation, we can say that the higher levels of the neural network consider them to be similar. Figure 4 shows five images from the test set and the six images from the training set that are most similar to each of them according to this measure. Notice that at the pixel level, the retrieved training images are generally not close in L2 to the query images in the first column. For example, the retrieved dogs and elephants appear in a variety of poses. We present the results for many more test images in the supplementary material.


Computing similarity by using Euclidean distance between two 4096-dimensional, real-valued vectors is inefficient, but it could be made efficient by training an auto-encoder to compress these vectors to short binary codes. This should produce a much better image retrieval method than applying auto-encoders to the raw pixels [14], which does not make use of image labels and hence has a tendency to retrieve images with similar patterns of edges, whether or not they are semantically similar.

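A minimal sketch of the retrieval idea: L2 distance between 4096-d feature vectors, returning the six nearest training images. Feature extraction is assumed to have happened elsewhere; the arrays here are random stand-ins:

```python
import numpy as np

def nearest_by_feature(query_feat, train_feats, k=6):
    """Indices of the k training images whose last-hidden-layer (4096-d)
    feature vectors are closest to the query in Euclidean distance."""
    d = np.linalg.norm(train_feats - query_feat, axis=1)
    return np.argsort(d)[:k]

train_feats = np.random.randn(1000, 4096)  # stand-in stored features
query = np.random.randn(4096)
print(nearest_by_feature(query, train_feats))
```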

$3 The authors make several key points:

  1. The two GPUs "divide the labor" over the kernels: one learns color-specific kernels, the other color-agnostic ones.
  2. Probing what the network has learned by comparing the Euclidean distance between the feature-activation vectors of two images; this idea underpins much of today's network-visualization work.
  3. Using an auto-encoder to make the Euclidean-distance comparison between two vectors efficient.

7 Discussion


Our results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging dataset using purely supervised learning. It is notable that our network’s performance degrades if a single convolutional layer is removed. For example, removing any of the middle layers results in a loss of about 2% for the top-1 performance of the network. So the depth really is important for achieving our results.


To simplify our experiments, we did not use any unsupervised pre-training even though we expect that it will help, especially if we obtain enough computational power to significantly increase the size of the network without obtaining a corresponding increase in the amount of labeled data. Thus far, our results have improved as we have made our network larger and trained it longer but we still have many orders of magnitude to go in order to match the infero-temporal pathway of the human visual system. Ultimately we would like to use very large and deep convolutional nets on video sequences where the temporal structure provides very helpful information that is missing or far less obvious in static images.


$2 The discussion section here plays the role of the conclusions section in other papers, and mainly makes three points:

  1. Large, deep convolutional neural networks are very effective.
  2. Depth is very important for neural networks.
  3. The authors intend to apply the network to video sequences, a direction worth exploring ourselves as well.

References

  • [1] R. M. Bell and Y. Koren. Lessons from the Netflix prize challenge. ACM SIGKDD Explorations Newsletter, 9(2):75–79, 2007.

  • [2] A. Berg, J. Deng, and L. Fei-Fei. Large scale visual recognition challenge 2010. www.image-net.org/challenges. 2010.

  • [3] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.

  • [4] D. Cire?an, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. arXiv preprint arXiv:1202.2745, 2012.

  • [5] D. C. Cire?an, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber. High-performance neural networks for visual object classification. arXiv preprint arXiv:1102.0183, 2011.

  • [6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR09, 2009.

  • [7] J. Deng, A. Berg, S. Satheesh, H. Su, A. Khosla, and L. Fei-Fei. ILSVRC-2012, 2012. URL http://www.image-net.org/challenges/LSVRC/2012/.

  • [8] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 106(1):59–70, 2007.

  • [9] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007. URL http://authors.library.caltech.edu/7694.

  • [10] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.

  • [11] K. Jarrett, K. Kavukcuoglu, M. A. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In International Conference on Computer Vision, pages 2146–2153. IEEE, 2009.

  • [12] A. Krizhevsky. Learning multiple layers of features from tiny images. Master’s thesis, Department of Computer Science, University of Toronto, 2009.

  • [13] A. Krizhevsky. Convolutional deep belief networks on CIFAR-10. Unpublished manuscript, 2010.

  • [14] A. Krizhevsky and G. E. Hinton. Using very deep autoencoders for content-based image retrieval. In ESANN, 2011.

  • [15] Y. Le Cun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, L. D. Jackel, et al. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems, 1990.

  • [16] Y. LeCun, F. J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, volume 2, pages II–97. IEEE, 2004.

  • [17] Y. LeCun, K. Kavukcuoglu, and C. Farabet. Convolutional networks and applications in vision. In Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on, pages 253–256. IEEE, 2010.

  • [18] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 609–616. ACM, 2009.

  • [19] T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka. Metric learning for large scale image classification: Generalizing to new classes at near-zero cost. In ECCV - European Conference on Computer Vision, Florence, Italy, October 2012.

  • [20] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proc. 27th International Conference on Machine Learning, 2010.

  • [21] N. Pinto, D. D. Cox, and J. J. DiCarlo. Why is real-world visual object recognition hard? PLoS Computational Biology, 4(1):e27, 2008.

  • [22] N. Pinto, D. Doukhan, J. J. DiCarlo, and D. D. Cox. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS Computational Biology, 5(11):e1000579, 2009.

  • [23] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman. LabelMe: a database and web-based tool for image annotation. International Journal of Computer Vision, 77(1):157–173, 2008.

  • [24] J. Sánchez and F. Perronnin. High-dimensional signature compression for large-scale image classification. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1665–1672. IEEE, 2011.

  • [25] P. Y. Simard, D. Steinkraus, and J. C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition, volume 2, pages 958–962, 2003.

  • [26] S. C. Turaga, J. F. Murray, V. Jain, F. Roth, M. Helmstaedter, K. Briggman, W. Denk, and H. S. Seung. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Computation, 22(2):511–538, 2010.
