[Paper Reading 1] ImageNet Classification with Deep Convolutional Neural Networks

Abstract

We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called “dropout” that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.

1 Introduction

Current approaches to object recognition make essential use of machine learning methods. To improve their performance, we can collect larger datasets, learn more powerful models, and use better techniques for preventing overfitting. Until recently, datasets of labeled images were relatively small — on the order of tens of thousands of images (e.g., NORB [16], Caltech-101/256 [8, 9], and CIFAR-10/100 [12]). Simple recognition tasks can be solved quite well with datasets of this size, especially if they are augmented with label-preserving transformations. For example, the current best error rate on the MNIST digit-recognition task (<0.3%) approaches human performance [4]. But objects in realistic settings exhibit considerable variability, so to learn to recognize them it is necessary to use much larger training sets. And indeed, the shortcomings of small image datasets have been widely recognized (e.g., Pinto et al. [21]), but it has only recently become possible to collect labeled datasets with millions of images. The new larger datasets include LabelMe [23], which consists of hundreds of thousands of fully-segmented images, and ImageNet [6], which consists of over 15 million labeled high-resolution images in over 22,000 categories.

To learn about thousands of objects from millions of images, we need a model with a large learning capacity. However, the immense complexity of the object recognition task means that this problem cannot be specified even by a dataset as large as ImageNet, so our model should also have lots of prior knowledge to compensate for all the data we don’t have. Convolutional neural networks (CNNs) constitute one such class of models [16, 11, 13, 18, 15, 22, 26]. Their capacity can be controlled by varying their depth and breadth, and they also make strong and mostly correct assumptions about the nature of images (namely, stationarity of statistics and locality of pixel dependencies). Thus, compared to standard feedforward neural networks with similarly-sized layers, CNNs have much fewer connections and parameters and so they are easier to train, while their theoretically-best performance is likely to be only slightly worse.

Despite the attractive qualities of CNNs, and despite the relative efficiency of their local architecture, they have still been prohibitively expensive to apply in large scale to high-resolution images. Luckily, current GPUs, paired with a highly-optimized implementation of 2D convolution, are powerful enough to facilitate the training of interestingly-large CNNs, and recent datasets such as ImageNet contain enough labeled examples to train such models without severe overfitting.

The specific contributions of this paper are as follows: we trained one of the largest convolutional neural networks to date on the subsets of ImageNet used in the ILSVRC-2010 and ILSVRC-2012 competitions [2] and achieved by far the best results ever reported on these datasets. We wrote a highly-optimized GPU implementation of 2D convolution and all the other operations inherent in training convolutional neural networks, which we make available publicly. Our network contains a number of new and unusual features which improve its performance and reduce its training time, which are detailed in Section 3. The size of our network made overfitting a significant problem, even with 1.2 million labeled training examples, so we used several effective techniques for preventing overfitting, which are described in Section 4. Our final network contains five convolutional and three fully-connected layers, and this depth seems to be important: we found that removing any convolutional layer (each of which contains no more than 1% of the model’s parameters) resulted in inferior performance.

In the end, the network’s size is limited mainly by the amount of memory available on current GPUs and by the amount of training time that we are willing to tolerate. Our network takes between five and six days to train on two GTX 580 3GB GPUs. All of our experiments suggest that our results can be improved simply by waiting for faster GPUs and bigger datasets to become available.

2 The Dataset

ImageNet is a dataset of over 15 million labeled high-resolution images belonging to roughly 22,000 categories. The images were collected from the web and labeled by human labelers using Amazon’s Mechanical Turk crowd-sourcing tool. Starting in 2010, as part of the Pascal Visual Object Challenge, an annual competition called the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) has been held. ILSVRC uses a subset of ImageNet with roughly 1000 images in each of 1000 categories. In all, there are roughly 1.2 million training images, 50,000 validation images, and 150,000 testing images.

ILSVRC-2010 is the only version of ILSVRC for which the test set labels are available, so this is the version on which we performed most of our experiments. Since we also entered our model in the ILSVRC-2012 competition, in Section 6 we report our results on this version of the dataset as well, for which test set labels are unavailable. On ImageNet, it is customary to report two error rates: top-1 and top-5, where the top-5 error rate is the fraction of test images for which the correct label is not among the five labels considered most probable by the model.
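
To make the two metrics concrete, here is a minimal NumPy sketch of top-1/top-5 error (the function name and toy arrays are illustrative, not from the paper):

```python
import numpy as np

def topk_error(scores, labels, k=5):
    """Fraction of examples whose true label is not among the k
    highest-scoring classes. scores: (N, 1000), labels: (N,)."""
    # Indices of the k largest scores per row (order within the k is irrelevant).
    topk = np.argpartition(scores, -k, axis=1)[:, -k:]
    hits = (topk == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

# Toy usage with random scores for 4 images over 1000 classes.
rng = np.random.default_rng(0)
scores = rng.standard_normal((4, 1000))
labels = rng.integers(0, 1000, size=4)
print(topk_error(scores, labels, k=1), topk_error(scores, labels, k=5))
```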

ImageNet consists of variable-resolution images, while our system requires a constant input dimensionality. Therefore, we down-sampled the images to a fixed resolution of 256 × 256. Given a rectangular image, we first rescaled the image such that the shorter side was of length 256, and then cropped out the central 256 × 256 patch from the resulting image. We did not pre-process the images in any other way, except for subtracting the mean activity over the training set from each pixel. So we trained our network on the (centered) raw RGB values of the pixels.
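
A minimal sketch of this preprocessing, assuming a Pillow/NumPy pipeline (the function name and the `mean_rgb` argument are illustrative; the paper does not publish its preprocessing code):

```python
import numpy as np
from PIL import Image

def preprocess(path, mean_rgb, size=256):
    """Resize so the shorter side is `size`, center-crop size x size,
    and subtract the per-pixel training-set mean (as in Section 2)."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    scale = size / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    w, h = img.size
    left, top = (w - size) // 2, (h - size) // 2
    img = img.crop((left, top, left + size, top + size))
    # mean_rgb is a (256, 256, 3) array of per-pixel means over the training set.
    return np.asarray(img, dtype=np.float32) - mean_rgb
```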

3 The Architecture

The architecture of our network is summarized in Figure 2. It contains eight learned layers — five convolutional and three fully-connected. Below, we describe some of the novel or unusual features of our network’s architecture. Sections 3.1-3.4 are sorted according to our estimation of their importance, with the most important first.

3.1 ReLU Nonlinearity

The standard way to model a neuron’s output f as a function of its input x is with f(x) = tanh(x) or f(x) = (1 + e^{-x})^{-1}. In terms of training time with gradient descent, these saturating nonlinearities are much slower than the non-saturating nonlinearity f(x) = max(0, x). Following Nair and Hinton [20], we refer to neurons with this nonlinearity as Rectified Linear Units (ReLUs). Deep convolutional neural networks with ReLUs train several times faster than their equivalents with tanh units. This is demonstrated in Figure 1, which shows the number of iterations required to reach 25% training error on the CIFAR-10 dataset for a particular four-layer convolutional network. This plot shows that we would not have been able to experiment with such large neural networks for this work if we had used traditional saturating neuron models.
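
For intuition, a small illustrative NumPy comparison of the two activations and their gradients, which is where the saturation problem shows up:

```python
import numpy as np

x = np.linspace(-6, 6, 13)
tanh = np.tanh(x)             # saturates toward ±1 for large |x|
relu = np.maximum(0.0, x)     # f(x) = max(0, x): never saturates for x > 0

# Gradients: tanh'(x) = 1 - tanh(x)^2 vanishes as |x| grows, which slows
# gradient descent; ReLU's gradient is exactly 1 wherever x > 0.
dtanh = 1.0 - tanh ** 2
drelu = (x > 0).astype(float)
print(np.round(dtanh, 4))
print(drelu)
```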

We are not the first to consider alternatives to traditional neuron models in CNNs. For example, Jarrett et al. [11] claim that the nonlinearity f(x) = |tanh(x)| works particularly well with their type of contrast normalization followed by local average pooling on the Caltech-101 dataset. However, on this dataset the primary concern is preventing overfitting, so the effect they are observing is different from the accelerated ability to fit the training set which we report when using ReLUs. Faster learning has a great influence on the performance of large models trained on large datasets.

3.2 Training on Multiple GPUs

A single GTX 580 GPU has only 3GB of memory, which limits the maximum size of the networks that can be trained on it. It turns out that 1.2 million training examples are enough to train networks which are too big to fit on one GPU. Therefore we spread the net across two GPUs. Current GPUs are particularly well-suited to cross-GPU parallelization, as they are able to read from and write to one another’s memory directly, without going through host machine memory. The parallelization scheme that we employ essentially puts half of the kernels (or neurons) on each GPU, with one additional trick: the GPUs communicate only in certain layers. This means that, for example, the kernels of layer 3 take input from all kernel maps in layer 2. However, kernels in layer 4 take input only from those kernel maps in layer 3 which reside on the same GPU. Choosing the pattern of connectivity is a problem for cross-validation, but this allows us to precisely tune the amount of communication until it is an acceptable fraction of the amount of computation.
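
On a single device, this restricted connectivity can be approximated with grouped convolutions. The PyTorch sketch below is my construction, not the authors' two-GPU code; it mimics the layer-3 → layer-4 split described above:

```python
import torch
import torch.nn as nn

# Layer 3 -> layer 4 communicates only within a GPU; on one device this
# connectivity can be emulated with groups=2: the 384 input maps split
# into two halves of 192, and each half of the 384 output kernels sees
# only its own half.
conv4_split = nn.Conv2d(384, 384, kernel_size=3, padding=1, groups=2)

# Layer 2 -> layer 3 communicates across GPUs: every kernel sees all maps.
conv3_full = nn.Conv2d(256, 384, kernel_size=3, padding=1)

x = torch.randn(1, 256, 13, 13)
y = conv4_split(conv3_full(x))
print(y.shape)  # torch.Size([1, 384, 13, 13])
```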

The resultant architecture is somewhat similar to that of the “columnar” CNN employed by Cire?an et al. [5], except that our columns are not independent (see Figure 2). This scheme reduces our top-1 and top-5 error rates by 1.7% and 1.2%, respectively, as compared with a net with half as many kernels in each convolutional layer trained on one GPU. The two-GPU net takes slightly less time to train than the one-GPU net.

3.3 Local Response Normalization

ReLUs have the desirable property that they do not require input normalization to prevent them from saturating. If at least some training examples produce a positive input to a ReLU, learning will happen in that neuron. However, we still find that the following local normalization scheme aids generalization. Denoting by $a^i_{x,y}$ the activity of a neuron computed by applying kernel $i$ at position $(x, y)$ and then applying the ReLU nonlinearity, the response-normalized activity $b^i_{x,y}$ is given by the expression

$$b^i_{x,y} = a^i_{x,y} \Bigg/ \left( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \left( a^j_{x,y} \right)^2 \right)^{\beta}$$

where the sum runs over $n$ “adjacent” kernel maps at the same spatial position, and $N$ is the total number of kernels in the layer. The ordering of the kernel maps is of course arbitrary and determined before training begins. This sort of response normalization implements a form of lateral inhibition inspired by the type found in real neurons, creating competition for big activities amongst neuron outputs computed using different kernels. The constants $k$, $n$, $\alpha$, and $\beta$ are hyper-parameters whose values are determined using a validation set; we used $k = 2$, $n = 5$, $\alpha = 10^{-4}$, and $\beta = 0.75$. We applied this normalization after applying the ReLU nonlinearity in certain layers (see Section 3.5).
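
A direct NumPy transcription of this formula, looping over kernel maps (an illustrative sketch assuming channel-first activations, not the authors' GPU kernel):

```python
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """Response normalization over adjacent kernel maps (channels).
    a: post-ReLU activations of shape (C, H, W)."""
    C = a.shape[0]
    b = np.empty_like(a)
    for i in range(C):
        # Sum of squares over the n neighboring maps, clipped at the edges.
        lo, hi = max(0, i - n // 2), min(C - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b
```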

This scheme bears some resemblance to the local contrast normalization scheme of Jarrett et al. [11], but ours would be more correctly termed “brightness normalization”, since we do not subtract the mean activity. Response normalization reduces our top-1 and top-5 error rates by 1.4% and 1.2%, respectively. We also verified the effectiveness of this scheme on the CIFAR-10 dataset: a four-layer CNN achieved a 13% test error rate without normalization and 11% with normalization.

3.4 Overlapping Pooling

Pooling layers in CNNs summarize the outputs of neighboring groups of neurons in the same kernel map. Traditionally, the neighborhoods summarized by adjacent pooling units do not overlap (e.g., [17, 11, 4]). To be more precise, a pooling layer can be thought of as consisting of a grid of pooling units spaced s pixels apart, each summarizing a neighborhood of size z × z centered at the location of the pooling unit. If we set s = z, we obtain traditional local pooling as commonly employed in CNNs. If we set s < z, we obtain overlapping pooling. This is what we use throughout our network, with s = 2 and z = 3. This scheme reduces the top-1 and top-5 error rates by 0.4% and 0.3%, respectively, as compared with the non-overlapping scheme s = 2, z = 2, which produces output of equivalent dimensions. We generally observe during training that models with overlapping pooling find it slightly more difficult to overfit.
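
A quick PyTorch check (my sketch) that the two schemes really do produce output of equivalent dimensions, e.g. a 55 × 55 map pools to 27 × 27 either way:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 96, 55, 55)                        # e.g. conv1 output maps
overlap = F.max_pool2d(x, kernel_size=3, stride=2)    # z=3, s=2 (overlapping)
plain   = F.max_pool2d(x, kernel_size=2, stride=2)    # z=2, s=2 (traditional)
print(overlap.shape, plain.shape)  # both torch.Size([1, 96, 27, 27])
```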

3.5 Overall Architecture

Now we are ready to describe the overall architecture of our CNN. As depicted in Figure 2, the net contains eight layers with weights; the first five are convolutional and the remaining three are fully-connected. The output of the last fully-connected layer is fed to a 1000-way softmax which produces a distribution over the 1000 class labels. Our network maximizes the multinomial logistic regression objective, which is equivalent to maximizing the average across training cases of the log-probability of the correct label under the prediction distribution.

The kernels of the second, fourth, and fifth convolutional layers are connected only to those kernel maps in the previous layer which reside on the same GPU (see Figure 2). The kernels of the third convolutional layer are connected to all kernel maps in the second layer. The neurons in the fully-connected layers are connected to all neurons in the previous layer. Response-normalization layers follow the first and second convolutional layers. Max-pooling layers, of the kind described in Section 3.4, follow both response-normalization layers as well as the fifth convolutional layer. The ReLU non-linearity is applied to the output of every convolutional and fully-connected layer.

The first convolutional layer filters the 224×224×3 input image with 96 kernels of size 11×11×3 with a stride of 4 pixels (this is the distance between the receptive field centers of neighboring neurons in a kernel map). The second convolutional layer takes as input the (response-normalized and pooled) output of the first convolutional layer and filters it with 256 kernels of size 5×5×48. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size 3×3×256 connected to the (normalized, pooled) outputs of the second convolutional layer. The fourth convolutional layer has 384 kernels of size 3×3×192, and the fifth convolutional layer has 256 kernels of size 3×3×192. The fully-connected layers have 4096 neurons each.
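
Putting Sections 3.1–3.5 together, here is a single-device PyTorch sketch of the stack. This is a reconstruction, not the authors' released code: `groups=2` stands in for the two-GPU split, and a 227×227 input is used because it makes the published layer sizes work out exactly (the paper states 224×224):

```python
import torch
import torch.nn as nn

lrn = lambda: nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0)
features = nn.Sequential(
    nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), lrn(), nn.MaxPool2d(3, 2),
    nn.Conv2d(96, 256, 5, padding=2, groups=2), nn.ReLU(), lrn(), nn.MaxPool2d(3, 2),
    nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, 3, padding=1, groups=2), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1, groups=2), nn.ReLU(), nn.MaxPool2d(3, 2),
)
classifier = nn.Sequential(
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # fed to a 1000-way softmax via the training loss
)
x = torch.randn(1, 3, 227, 227)
h = features(x)                          # (1, 256, 6, 6)
print(classifier(h.flatten(1)).shape)    # (1, 1000)
```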

4 Reducing Overfitting

Our neural network architecture has 60 million parameters. Although the 1000 classes of ILSVRC make each training example impose 10 bits of constraint (log? 1000 ≈ 10) on the mapping from image to label, this turns out to be insufficient to learn so many parameters without considerable overfitting. Below, we describe the two primary ways in which we combat overfitting.

4.1 Data Augmentation

The easiest and most common method to reduce overfitting on image data is to artificially enlarge the dataset using label-preserving transformations (e.g., [25, 4, 5]). We employ two distinct forms of data augmentation, both of which allow transformed images to be produced from the original images with very little computation, so the transformed images do not need to be stored on disk. In our implementation, the transformed images are generated in Python code on the CPU while the GPU is training on the previous batch of images. So these data augmentation schemes are, in effect, computationally free.

The first form of data augmentation consists of generating image translations and horizontal reflections. We do this by extracting random 224×224 patches (and their horizontal reflections) from the 256×256 images and training our network on these extracted patches. This increases the size of our training set by a factor of 2048 (32 × 32 possible translations times 2 reflections), though the resulting training examples are, of course, highly interdependent. Without this scheme, our network suffers from substantial overfitting, which would have forced us to use much smaller networks. At test time, the network makes a prediction by extracting five 224×224 patches (the four corner patches and the center patch) as well as their horizontal reflections (hence ten patches in all), and averaging the predictions made by the network’s softmax layer on the ten patches.
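
A sketch of the training-time crop-and-flip sampling (illustrative NumPy; the paper generated these patches in Python on the CPU while the GPU trained):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop_and_flip(img):
    """img: (256, 256, 3) array. Returns a random 224x224 patch, possibly
    mirrored; the paper counts 32*32 translations x 2 reflections = 2048."""
    top, left = rng.integers(0, 32, size=2)
    patch = img[top:top + 224, left:left + 224]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]   # horizontal reflection
    return patch
```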

The second form of data augmentation consists of altering the intensities of the RGB channels in training images. Specifically, we perform PCA on the set of RGB pixel values throughout the ImageNet training set. To each training image, we add multiples of the found principal components, with magnitudes proportional to the corresponding eigenvalues times a random variable drawn from a Gaussian with mean zero and standard deviation 0.1. Therefore to each RGB image pixel $I_{xy} = [I^R_{xy}, I^G_{xy}, I^B_{xy}]^T$ we add the following quantity:

$$[\mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3]\,[\alpha_1 \lambda_1, \alpha_2 \lambda_2, \alpha_3 \lambda_3]^T$$

where $\mathbf{p}_i$ and $\lambda_i$ are the $i$th eigenvector and eigenvalue of the 3×3 covariance matrix of RGB pixel values, respectively, and $\alpha_i$ is the aforementioned random variable. Each $\alpha_i$ is drawn only once for all the pixels of a particular training image until that image is used for training again, at which point it is re-drawn. This scheme approximately captures an important property of natural images, namely, that object identity is invariant to changes in the intensity and color of the illumination. This scheme reduces the top-1 error rate by over 1%.
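
An illustrative NumPy sketch of this color augmentation. Note one simplification: the paper computes the PCA once over all training-set pixels, whereas for self-containment this sketch computes it from a single image:

```python
import numpy as np

def pca_color_augment(img, rng, sigma=0.1):
    """img: float32 (H, W, 3) RGB. Adds the eigen-perturbation described
    above, with one alpha draw per presentation of the image."""
    flat = img.reshape(-1, 3)
    cov = np.cov(flat, rowvar=False)          # 3x3 RGB covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # lambda_i, p_i (as columns)
    alpha = rng.normal(0.0, sigma, size=3)    # alpha_i ~ N(0, 0.1^2)
    delta = eigvecs @ (alpha * eigvals)       # [p1 p2 p3][a1*l1, a2*l2, a3*l3]^T
    return img + delta                        # broadcast over all pixels
```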

4.2 Dropout

Combining the predictions of many different models is a very successful way to reduce test errors[1, 3], but it appears to be too expensive for big neural networks that already take several days to train. There is, however, a very efficient version of model combination that only costs about a factor of two during training. The recently-introduced technique, called “dropout” [10], consists of setting to zero the output of each hidden neuron with probability 0.5. The neurons which are “dropped out” in this way do not contribute to the forward pass and do not participate in backpropagation. So every time an input is presented, the neural network samples a different architecture, but all these architectures share weights. This technique reduces complex co-adaptations of neurons, since a neuron cannot rely on the presence of particular other neurons. It is, therefore, forced to learn more robust features that are useful in conjunction with many different random subsets of the other neurons. At test time, we use all the neurons but multiply their outputs by 0.5, which is a reasonable approximation to taking the geometric mean of the predictive distributions produced by the exponentially-many dropout networks.
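
A minimal sketch of this form of dropout; note it is the original scale-at-test variant described here, not the "inverted" dropout common in later libraries (the function name is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(h, train, p=0.5):
    """Zero each hidden unit with probability p at training time; at test
    time use all units but scale their outputs by (1 - p)."""
    if train:
        mask = rng.random(h.shape) >= p   # a fresh sub-network every pass
        return h * mask
    return h * (1.0 - p)                  # geometric-mean approximation
```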

We use dropout in the first two fully-connected layers of Figure 2. Without dropout, our network exhibits substantial overfitting. Dropout roughly doubles the number of iterations required to converge.

5 Details of learning

We trained our models using stochastic gradient descent with a batch size of 128 examples, momentum of 0.9, and weight decay of 0.0005. We found that this small amount of weight decay was important for the model to learn. In other words, weight decay here is not merely a regularizer: it reduces the model’s training error. The update rule for weight $w$ was

$$v_{i+1} := 0.9 \cdot v_i - 0.0005 \cdot \epsilon \cdot w_i - \epsilon \cdot \left\langle \frac{\partial L}{\partial w} \Big|_{w_i} \right\rangle_{D_i}$$
$$w_{i+1} := w_i + v_{i+1}$$

where $i$ is the iteration index, $v$ is the momentum variable, $\epsilon$ is the learning rate, and $\langle \frac{\partial L}{\partial w} |_{w_i} \rangle_{D_i}$ is the average over the $i$th batch $D_i$ of the derivative of the objective with respect to $w$, evaluated at $w_i$. We initialized the weights in each layer from a zero-mean Gaussian distribution with standard deviation 0.01. We initialized the neuron biases in the second, fourth, and fifth convolutional layers, as well as in the fully-connected hidden layers, with the constant 1. This initialization accelerates the early stages of learning by providing the ReLUs with positive inputs. We initialized the neuron biases in the remaining layers with the constant 0.
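
The same update as a small NumPy-style function (illustrative; `sgd_step` is not a name from the paper):

```python
def sgd_step(w, v, grad, lr, momentum=0.9, weight_decay=0.0005):
    """One update of the rule above:
    v <- 0.9*v - 0.0005*lr*w - lr*grad ;  w <- w + v.
    grad is the batch-averaged dL/dw evaluated at the current w."""
    v = momentum * v - weight_decay * lr * w - lr * grad
    return w + v, v
```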

We used an equal learning rate for all layers, which we adjusted manually throughout training. The heuristic which we followed was to divide the learning rate by 10 when the validation error rate stopped improving with the current learning rate. The learning rate was initialized at 0.01 and reduced three times prior to termination.

7 Discussion

Our results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging dataset using purely supervised learning. It is notable that our network’s performance degrades if a single convolutional layer is removed. For example, removing any of the middle layers results in a loss of about 2% for the top-1 performance of the network. So the depth really is important for achieving our results.

To simplify our experiments, we did not use any unsupervised pre-training even though we expect that it will help, especially if we obtain enough computational power to significantly increase the size of the network without obtaining a corresponding increase in the amount of labeled data. Thus far, our results have improved as we have made our network larger and trained it longer but we still have many orders of magnitude to go in order to match the inferotemporal pathway of the human visual system. Ultimately we would like to use very large and deep convolutional nets on video sequences where the temporal structure provides very helpful information that is missing or far less obvious in static images.
