ESRGAN - Enhanced Super-Resolution Generative Adversarial Networks論文翻譯——中英文對照

文章作者:Tyan
博客:noahsnail.com | CSDN | 簡書

聲明:作者翻譯論文僅為學(xué)習(xí),如有侵權(quán)請聯(lián)系作者刪除博文,謝謝!

翻譯論文匯總:https://github.com/SnailTyan/deep-learning-papers-translation

ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks

Abstract

The Super-Resolution Generative Adversarial Network (SRGAN) [1] is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN – network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN [2] to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge [3]. The code is available at https://github.com/xinntao/ESRGAN.

摘要

超分辨率生成對抗網(wǎng)絡(luò)(SRGAN)[1]是一項開創(chuàng)性的工作,其能夠在單圖像超分辨率期間生成逼真的紋理。然而,虛幻的細(xì)節(jié)常常伴隨討厭的偽像。為了進(jìn)一步增強視覺質(zhì)量,我們充分研究了SRGAN的三個關(guān)鍵組成部分——網(wǎng)絡(luò)架構(gòu)、對抗損失和感知損失,并對每一個都進(jìn)行了改進(jìn)以取得增強的SRGAN(ESRGAN)。特別地,我們引入了沒有批歸一化的Residual-in-Residual Dense Block(RRDB)作為基本的網(wǎng)絡(luò)構(gòu)建單元。此外,我們借鑒了相對GAN[2]中的思想,讓判別器預(yù)測相對真實性而不是絕對值。最后,我們通過使用激活前的特征改進(jìn)感知損失,這可以對亮度一致性和紋理復(fù)原提供更強的監(jiān)督。得益于這些改進(jìn),相比于SRGAN,提出的ESRGAN一致地取得了更好的視覺質(zhì)量、更多真實自然的紋理,并在PIRM2018-SR Challenge[3]中獲得了第一名。源碼地址:https://github.com/xinntao/ESRGAN。

1 Introduction

Single image super-resolution (SISR), as a fundamental low-level vision problem, has attracted increasing attention in the research community and AI companies. SISR aims at recovering a high-resolution (HR) image from a single low-resolution (LR) one. Since the pioneer work of SRCNN proposed by Dong et al. [4], deep convolutional neural network (CNN) approaches have brought prosperous development. Various network architecture designs and training strategies have continuously improved the SR performance, especially the Peak Signal-to-Noise Ratio (PSNR) value [5,6,7,1,8,9,10,11,12]. However, these PSNR-oriented approaches tend to output over-smoothed results without sufficient high-frequency details, since the PSNR metric fundamentally disagrees with the subjective evaluation of human observers [1].

1 引言

作為一個基本的低級視覺問題,單圖像超分辨率(SISR)在研究領(lǐng)域和AI公司中引起了越來越多的關(guān)注。SISR目標(biāo)是從一張低分辨率(LR)圖像復(fù)原出一張高分辨率(HR)圖像。從Dong等[4]提出SRCNN的開創(chuàng)性工作以來,深度卷積神經(jīng)網(wǎng)絡(luò)(CNN)方法帶來了繁榮的發(fā)展。各種網(wǎng)絡(luò)架構(gòu)設(shè)計和訓(xùn)練策略持續(xù)地改善SR性能,尤其是峰值信噪比(PSNR)的值[5,6,7,1,8,9,10,11,12]。然而,這些面向PSNR的方法趨向于輸出過于平滑的結(jié)果,缺少足夠的高頻細(xì)節(jié),因為PSNR度量從根本上與人類觀察者的主觀評價[1]不符。

Several perceptual-driven methods have been proposed to improve the visual quality of SR results. For instance, perceptual loss [13,14] is proposed to optimize super-resolution model in a feature space instead of pixel space. Generative adversarial network [15] is introduced to SR by [1,16] to encourage the network to favor solutions that look more like natural images. The semantic image prior is further incorporated to improve recovered texture details [17]. One of the milestones in the way pursuing visually pleasing results is SRGAN [1]. The basic model is built with residual blocks [18] and optimized using perceptual loss in a GAN framework. With all these techniques, SRGAN significantly improves the overall visual quality of reconstruction over PSNR-oriented methods.

已經(jīng)提出了一些感知驅(qū)動的方法來改進(jìn)SR結(jié)果的視覺質(zhì)量。例如,提出感知損失[13,14]來優(yōu)化在特征空間而不是像素空間中的超分辨率模型。[1,16]引入生成對抗網(wǎng)絡(luò)[15]到SR中以鼓勵網(wǎng)絡(luò)支持看起來更像自然圖像的解。語義圖像先驗被進(jìn)一步合并以改善恢復(fù)的紋理細(xì)節(jié)[17]。追尋視覺愉悅效果的方法中的里程碑之一是SRGAN[1]?;灸P褪怯脷埐顗K構(gòu)建的[18],并在GAN框架中使用感知損失來進(jìn)行優(yōu)化。通過所有這些技術(shù),與面向PSNR的方法相比,SRGAN顯著改善了重建的整體視覺質(zhì)量。

However, there still exists a clear gap between SRGAN results and the ground-truth (GT) images, as shown in Fig. 1. In this study, we revisit the key components of SRGAN and improve the model in three aspects. First, we improve the network structure by introducing the Residual-in-Residual Dense Block (RRDB), which is of higher capacity and easier to train. We also remove Batch Normalization (BN) [19] layers as in [20] and use residual scaling [21,20] and smaller initialization to facilitate training a very deep network. Second, we improve the discriminator using Relativistic average GAN (RaGAN) [2], which learns to judge “whether one image is more realistic than the other” rather than “whether one image is real or fake”. Our experiments show that this improvement helps the generator recover more realistic texture details. Third, we propose an improved perceptual loss by using the VGG features before activation instead of after activation as in SRGAN. We empirically find that the adjusted perceptual loss provides sharper edges and more visually pleasing results, as will be shown in Sec. 4.4. Extensive experiments show that the enhanced SRGAN, termed ESRGAN, consistently outperforms state-of-the-art methods in both sharpness and details (see Fig. 1 and Fig. 7).

Figure 1

Fig.1: The super-resolution results of ×4 for SRGAN, the proposed ESRGAN and the ground-truth. ESRGAN outperforms SRGAN in sharpness and details.

Figure 7

Fig.7: Qualitative results of ESRGAN. ESRGAN produces more natural textures, e.g., animal fur, building structure and grass texture, and also less unpleasant artifacts, e.g., artifacts in the face by SRGAN.

然而,如圖1所示,SRGAN結(jié)果與真實(GT)圖像之間仍然存在明顯的差距。在本研究中,我們重新審視SRGAN的關(guān)鍵組件,并在三個方面改進(jìn)模型。首先,我們通過引入Residual-in-Residual Dense Block(RRDB)改進(jìn)網(wǎng)絡(luò)架構(gòu),該結(jié)構(gòu)具有更高的容量且更容易訓(xùn)練。我們像[20]中一樣也移除了批歸一化(BN)[19]層,并使用殘差縮放[21,20]和更小的初始化來促進(jìn)訓(xùn)練一個非常深的網(wǎng)絡(luò)。其次,我們使用相對平均GAN(RaGAN)[2]來改進(jìn)判別器,RaGAN學(xué)習(xí)判斷“一張圖像是否比另一張更真實”而不是“一張圖像是真的還是假的”。我們的實驗表明這個改進(jìn)有助于生成器恢復(fù)更真實的紋理細(xì)節(jié)。第三,我們提出了一種改進(jìn)的感知損失,使用激活之前的VGG特征來代替SRGAN中激活之后的VGG特征。從經(jīng)驗上我們發(fā)現(xiàn)調(diào)整之后的感知損失提供了更清晰的邊緣和視覺上更令人滿意的結(jié)果,如4.4節(jié)所示。大量的實驗表明增強的SRGAN(稱為ESRGAN)在清晰度和細(xì)節(jié)方面都始終優(yōu)于最新的方法(見圖1和圖7)。

Figure 1

圖1:SRGAN、提出的ESRGAN與真實圖像的4倍超分辨率結(jié)果。ESRGAN在清晰度和細(xì)節(jié)方面優(yōu)于SRGAN。

Figure 7

圖7:ESRGAN的定性結(jié)果。ESRGAN生成了更自然的紋理,例如,動物皮毛,建筑物結(jié)構(gòu)和草坪紋理,以及更少的令人不快的偽影,例如SRGAN中臉上的偽影。

We take a variant of ESRGAN to participate in the PIRM-SR Challenge [3]. This challenge is the first SR competition that evaluates the performance in a perceptual-quality aware manner based on [22], where the authors claim that distortion and perceptual quality are at odds with each other. The perceptual quality is judged by the non-reference measures of Ma’s score [23] and NIQE [24], i.e., perceptual index = \frac{1}{2}((10 - \text{Ma}) + \text{NIQE}). A lower perceptual index represents a better perceptual quality.

我們采用ESRGAN的一個變種來參加PIRM-SR挑戰(zhàn)賽[3]。這個挑戰(zhàn)是第一個在[22]的基礎(chǔ)上以察覺感知質(zhì)量的方式評估性能的SR競賽,[22]中作者聲稱失真和感知質(zhì)量相互矛盾。感知質(zhì)量是通過Ma分?jǐn)?shù)[23]和NIQE[24]這兩種非參考度量來判斷的,即感知指數(shù) = \frac{1}{2}((10 - \text{Ma}) + \text{NIQE})。更低的感知指數(shù)表示更好的感知質(zhì)量。
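The challenge's ranking metric above is a simple combination of the two no-reference scores. A minimal sketch (the function name and the sample values below are illustrative assumptions, not taken from the paper):

```python
def perceptual_index(ma_score, niqe):
    """Perceptual index used in the PIRM-SR Challenge: lower is better.

    ma_score: Ma's score (higher = better perceptual quality),
    niqe: NIQE score (lower = better perceptual quality).
    """
    return 0.5 * ((10.0 - ma_score) + niqe)

# A method that raises Ma's score or lowers NIQE lowers (improves) the index.
example = perceptual_index(8.0, 4.0)  # hypothetical scores
```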

As shown in Fig. 2, the perception-distortion plane is divided into three regions defined by thresholds on the Root-Mean-Square Error (RMSE), and the algorithm that achieves the lowest perceptual index in each region becomes the regional champion. We mainly focus on region 3 as we aim to bring the perceptual quality to a new high. Thanks to the aforementioned improvements and some other adjustments as discussed in Sec. 4.6, our proposed ESRGAN won the first place in the PIRM-SR Challenge (region 3) with the best perceptual index.

Figure 2

Fig.2: Perception-distortion plane on PIRM self validation dataset. We show the baselines of EDSR [20], RCAN [12] and EnhanceNet [16], and the submitted ESRGAN model. The blue dots are produced by image interpolation.

如圖2所示,通過均方根誤差(RMSE)的閾值,將感知失真平面分成三個區(qū)域,每個區(qū)域中取得最低感知指數(shù)的算法為區(qū)域冠軍。我們主要關(guān)注區(qū)域3,因為我們旨在將感知質(zhì)量提升到新的高度。由于上述的改進(jìn)和4.6節(jié)中討論的一些其它調(diào)整,我們提出的ESRGAN在PIRM-SR挑戰(zhàn)賽(區(qū)域3)中以最好的感知指數(shù)贏得了第一名。

Figure 2

圖2:PIRM自驗證集上的感知失真平面。我們展示了EDSR[20],RCAN[12],EnhanceNet[16]以及提交的ESRGAN模型的基準(zhǔn)線。藍(lán)色的點通過圖像插值生成。

In order to balance the visual quality and RMSE/PSNR, we further propose the network interpolation strategy, which could continuously adjust the reconstruction style and smoothness. Another alternative is image interpolation, which directly interpolates images pixel by pixel. We employ this strategy to participate in region 1 and region 2. The network interpolation and image interpolation strategies and their differences are discussed in Sec. 3.4.

為了平衡視覺質(zhì)量和RMSE/PSNR,我們進(jìn)一步提出了網(wǎng)絡(luò)插值策略,其可以連續(xù)地調(diào)整重建風(fēng)格和平滑度。另一種替代方案是圖像插值,其直接逐像素地插值圖像。我們采用這個策略來參加區(qū)域1和區(qū)域2。網(wǎng)絡(luò)插值和圖像插值策略以及它們的差異在3.4節(jié)中討論。

2 Related Work

We focus on deep neural network approaches to solve the SR problem. As a pioneer work, Dong et al. [4,25] propose SRCNN to learn the mapping from LR to HR images in an end-to-end manner, achieving superior performance against previous works. Later on, the field has witnessed a variety of network architectures, such as a deeper network with residual learning [5], Laplacian pyramid structure [6], residual blocks [1], recursive learning [7,8], densely connected network [9], deep back projection [10] and residual dense network [11]. Specifically, Lim et al. [20] propose EDSR model by removing unnecessary BN layers in the residual block and expanding the model size, which achieves significant improvement. Zhang et al. [11] propose to use effective residual dense block in SR, and they further explore a deeper network with channel attention [12], achieving the state-of-the-art PSNR performance. Besides supervised learning, other methods like reinforcement learning [26] and unsupervised learning [27] are also introduced to solve general image restoration problems.

2 相關(guān)工作

我們專注于解決SR問題的深度神經(jīng)網(wǎng)絡(luò)方法。作為開創(chuàng)性工作,Dong等[4,25]提出了SRCNN以端到端的方式來學(xué)習(xí)從LR到HR圖像的映射,取得了優(yōu)于之前工作的性能。后來,這個領(lǐng)域見證了各種網(wǎng)絡(luò)架構(gòu),例如具有殘差學(xué)習(xí)的更深網(wǎng)絡(luò)[5],拉普拉斯金字塔結(jié)構(gòu)[6],殘差塊[1],遞歸學(xué)習(xí)[7,8],密集連接網(wǎng)絡(luò)[9],深度反向投影[10]和殘差密集網(wǎng)絡(luò)[11]。具體來說,Lim等[20]通過移除殘差塊中不必要的BN層以及擴展模型尺寸提出了EDSR模型,取得了顯著的改善。Zhang等[11]提出在SR中使用有效的殘差密集塊,并且他們進(jìn)一步探索了一個使用通道注意力[12]的更深網(wǎng)絡(luò),取得了最佳的PSNR性能。除了監(jiān)督學(xué)習(xí)之外,也引入了強化學(xué)習(xí)[26]以及無監(jiān)督學(xué)習(xí)[27]等其它方法來解決一般的圖像復(fù)原問題。

Several methods have been proposed to stabilize training a very deep model. For instance, residual path is developed to stabilize the training and improve the performance [18,5,12]. Residual scaling is first employed by Szegedy et al. [21] and also used in EDSR. For general deep networks, He et al. [28] propose a robust initialization method for VGG-style networks without BN. To facilitate training a deeper network, we develop a compact and effective residual-in-residual dense block, which also helps to improve the perceptual quality.

已經(jīng)提出了一些方法來穩(wěn)定非常深模型的訓(xùn)練。例如,開發(fā)殘差路徑來穩(wěn)定訓(xùn)練并改善性能[18,5,12]。殘差縮放由Szegedy等[21]首次采用,EDSR中也有使用。對于一般的深度網(wǎng)絡(luò),He等[28]為沒有BN的VGG風(fēng)格網(wǎng)絡(luò)提出了一個魯棒的初始化方法。為了便于訓(xùn)練更深的網(wǎng)絡(luò),我們開發(fā)了一個簡潔有效的殘差套殘差密集塊,這也有助于改善感知質(zhì)量。

Perceptual-driven approaches have also been proposed to improve the visual quality of SR results. Based on the idea of being closer to perceptual similarity [29,14] perceptual loss [13] is proposed to enhance the visual quality by minimizing the error in a feature space instead of pixel space. Contextual loss [30] is developed to generate images with natural image statistics by using an objective that focuses on the feature distribution rather than merely comparing the appearance. Ledig et al. [1] propose SRGAN model that uses perceptual loss and adversarial loss to favor outputs residing on the manifold of natural images. Sajjadi et al. [16] develop a similar approach and further explored the local texture matching loss. Based on these works, Wang et al. [17] propose spatial feature transform to effectively incorporate semantic prior in an image and improve the recovered textures.

感知驅(qū)動的方法已經(jīng)被提出用來改善SR結(jié)果的視覺質(zhì)量。基于更接近于感知相似度[29,14]的想法提出感知損失[13],通過最小化特征空間而不是像素空間的誤差來增強視覺質(zhì)量。通過使用專注于特征分布而不是只比較外觀的目標(biāo)函數(shù),開發(fā)上下文損失[30]來生成具有自然圖像統(tǒng)計的圖像。Ledig等[1]提出SRGAN模型,使用感知損失和對抗損失來支持位于自然圖像流形的輸出。Sajjadi等[16]開發(fā)了類似的方法并進(jìn)一步探索了局部紋理匹配損失?;谶@些工作,Wang等[17]提出空間特征變換來有效地將語義先驗合并到圖像中并改進(jìn)恢復(fù)的紋理。

Throughout the literature, photo-realism is usually attained by adversarial training with GAN [15]. Recently there are a bunch of works that focus on developing more effective GAN frameworks. WGAN [31] proposes to minimize a reasonable and efficient approximation of Wasserstein distance and regularizes discriminator by weight clipping. Other improved regularization for discriminator includes gradient clipping [32] and spectral normalization [33]. Relativistic discriminator [2] is developed not only to increase the probability that generated data are real, but also to simultaneously decrease the probability that real data are real. In this work, we enhance SRGAN by employing a more effective relativistic average GAN.

在整個文獻(xiàn)中,照片級的真實感通常通過與GAN[15]的對抗訓(xùn)練來獲得。最近有很多工作致力于開發(fā)更有效的GAN框架。WGAN[31]提出最小化Wasserstein距離的一種合理且有效的近似,并通過權(quán)重修剪來正則化判別器。其它對判別器改進(jìn)的正則化包括梯度修剪[32]和譜歸一化[33]。開發(fā)相對判別器[2]不僅是為了提高生成數(shù)據(jù)為真的概率,同時也為了降低真實數(shù)據(jù)為真的概率。在這項工作中,我們通過采用更有效的相對平均GAN來增強SRGAN。

SR algorithms are typically evaluated by several widely used distortion measures, e.g., PSNR and SSIM. However, these metrics fundamentally disagree with the subjective evaluation of human observers [1]. Non-reference measures are used for perceptual quality evaluation, including Ma’s score [23] and NIQE [24], both of which are used to calculate the perceptual index in the PIRM-SR Challenge [3]. In a recent study, Blau et al. [22] find that the distortion and perceptual quality are at odds with each other.

SR算法通常通過幾種廣泛使用的失真度量來進(jìn)行評估,例如PSNR和SSIM。然而,這些度量從根本上與人類觀察者的主觀評估不一致[1]。非參考度量用于感知質(zhì)量評估,包括Ma的分?jǐn)?shù)[23]和NIQE[24],兩者都用于計算PIRM-SR挑戰(zhàn)賽[3]中的感知指數(shù)。在最近的一項研究中,Blau等[22]發(fā)現(xiàn)失真和感知質(zhì)量相互矛盾。

3 Proposed Methods

Our main aim is to improve the overall perceptual quality for SR. In this section, we first describe our proposed network architecture and then discuss the improvements from the discriminator and perceptual loss. At last, we describe the network interpolation strategy for balancing perceptual quality and PSNR.

3 提出的方法

我們的主要目標(biāo)是提高SR的整體感知質(zhì)量。在本節(jié)中,我們首先描述我們提出的網(wǎng)絡(luò)架構(gòu),然后討論判別器和感知損失的改進(jìn)。最后,我們描述用于平衡感知質(zhì)量和PSNR的網(wǎng)絡(luò)插值策略。

3.1 Network Architecture

In order to further improve the recovered image quality of SRGAN, we mainly make two modifications to the structure of generator G: 1) remove all BN layers; 2) replace the original basic block with the proposed Residual-in-Residual Dense Block (RRDB), which combines multi-level residual network and dense connections as depicted in Fig. 4.

Figure 4

Fig.4: Left: We remove the BN layers in residual block in SRGAN. Right: RRDB block is used in our deeper model and \beta is the residual scaling parameter.

3.1 網(wǎng)絡(luò)架構(gòu)

為了進(jìn)一步改進(jìn)SRGAN復(fù)原的圖像質(zhì)量,我們主要對生成器G的架構(gòu)進(jìn)行了兩個修改:1)移除所有的BN層;2)用提出的殘差套殘差密集塊(RRDB)替換原始的基本塊,它結(jié)合了多層殘差網(wǎng)絡(luò)和密集連接,如圖4所示。

Figure 4

圖4:左:我們移除了SRGAN殘差塊中的BN層。右:RRDB塊用在我們的更深模型中,\beta是殘差尺度參數(shù)。

Removing BN layers has proven to increase performance and reduce computational complexity in different PSNR-oriented tasks including SR [20] and deblurring [35]. BN layers normalize the features using mean and variance in a batch during training and use estimated mean and variance of the whole training dataset during testing. When the statistics of training and testing datasets differ a lot, BN layers tend to introduce unpleasant artifacts and limit the generalization ability. We empirically observe that BN layers are more likely to bring artifacts when the network is deeper and trained under a GAN framework. These artifacts occasionally appear among iterations and different settings, violating the needs for a stable performance over training. We therefore remove BN layers for stable training and consistent performance. Furthermore, removing BN layers helps to improve generalization ability and to reduce computational complexity and memory usage.

在不同的面向PSNR的任務(wù)(包括SR[20]和去模糊[35])中,已經(jīng)證實了移除BN層可以提高性能并降低計算復(fù)雜度。BN層在訓(xùn)練中使用一批數(shù)據(jù)的均值和方差對特征進(jìn)行歸一化,并在測試中使用整個訓(xùn)練集估計的均值和方差。當(dāng)訓(xùn)練集和測試集的統(tǒng)計差別很大時,BN層趨向于引入令人不快的偽影并限制泛化能力。我們憑經(jīng)驗觀察到,當(dāng)網(wǎng)絡(luò)較深且在GAN架構(gòu)下訓(xùn)練時,BN層更可能帶來偽影。這些偽影有時會在迭代中間和不同的設(shè)置下出現(xiàn),違背了訓(xùn)練過程中對于穩(wěn)定性能的需求。因此,我們?yōu)榱朔€(wěn)定的訓(xùn)練和一致的性能移除了BN層。此外,移除BN層有助于提高泛化能力并降低計算復(fù)雜度及內(nèi)存使用。
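The train/test statistics mismatch described above can be seen on a toy batch-normalization step (a pure-Python sketch under simplified assumptions; real BN layers also learn a scale and shift, which are omitted here):

```python
import math

def batch_norm(x, mean, var, eps=1e-5):
    # Normalize features with the given statistics (scale/shift omitted).
    return [(v - mean) / math.sqrt(var + eps) for v in x]

features = [2.0, 4.0, 6.0]

# Training: statistics come from the current mini-batch itself,
# so the normalized batch is centered at zero.
batch_mean, batch_var = 4.0, 8.0 / 3.0
train_out = batch_norm(features, batch_mean, batch_var)

# Testing: running estimates from the training set are used instead.
# If the test distribution differs a lot, the outputs shift noticeably,
# which is the kind of mismatch that can produce artifacts.
running_mean, running_var = 0.0, 1.0
test_out = batch_norm(features, running_mean, running_var)
```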

We keep the high-level architecture design of SRGAN (see Fig. 3), and use a novel basic block namely RRDB as depicted in Fig. 4. Based on the observation that more layers and connections could always boost performance [20,11,12], the proposed RRDB employs a deeper and more complex structure than the original residual block in SRGAN. Specifically, as shown in Fig. 4, the proposed RRDB has a residual-in-residual structure, where residual learning is used in different levels. A similar network structure is proposed in [36] that also applies a multilevel residual network. However, our RRDB differs from [36] in that we use dense block [34] in the main path as [11], where the network capacity becomes higher benefiting from the dense connections.

Figure 3

Fig. 3: We employ the basic architecture of SRResNet [1], where most computation is done in the LR feature space. We could select or design “basic blocks” (e.g., residual block [18], dense block [34], RRDB) for better performance.

我們保留了SRGAN的高層架構(gòu)設(shè)計(見圖3),并使用了一個新穎的名為RRDB的基本塊,如圖4所示?;诟嗟膶雍瓦B接總是可以提升性能的觀測[20,11,12],與SRGAN中的原始?xì)埐顗K相比,提出的RRDB采用了更深更復(fù)雜的結(jié)構(gòu)。具體地說,如圖4所示,提出的RRDB具有殘差套殘差的結(jié)構(gòu),其中殘差學(xué)習(xí)用在不同的級別中。[36]中提出了一個類似的網(wǎng)絡(luò)結(jié)構(gòu),其同樣應(yīng)用了多級殘差網(wǎng)絡(luò)。然而,我們的RRDB與[36]的不同之處在于,我們像[11]一樣在主路徑中使用了密集塊[34],受益于密集連接,網(wǎng)絡(luò)容量變得更高。

Figure 3

圖3:我們采用SRResNet[1]的基本架構(gòu),大多數(shù)計算都在LR特征空間進(jìn)行。我們可以為了更佳的性能選擇或設(shè)計“基礎(chǔ)塊”(例如,殘差塊[18],密集塊[34],RRDB)。
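The residual-in-residual idea can be sketched abstractly by treating each dense block as an opaque callable (a structural illustration only: the scaling constant beta = 0.2 follows the official ESRGAN code but is an assumption here, and real blocks operate on feature maps rather than scalars):

```python
def rrdb(x, dense_blocks, beta=0.2):
    """Residual-in-Residual Dense Block: residual learning at two levels."""
    out = x
    for block in dense_blocks:
        # inner level: each dense block sits inside a scaled residual connection
        out = beta * block(out) + out
    # outer level: one more scaled residual connection around the whole stack
    return beta * out + x
```

With `beta` in (0, 1), the residual branch is damped at every level, which is what keeps the very deep stack trainable.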

In addition to the improved architecture, we also exploit several techniques to facilitate training a very deep network: 1) residual scaling [21,20], i.e., scaling down the residuals by multiplying a constant between 0 and 1 before adding them to the main path to prevent instability; 2) smaller initialization, as we empirically find residual architecture is easier to train when the initial parameter variance becomes smaller. More discussion can be found in the supplementary material.

除了改進(jìn)架構(gòu)之外,我們也利用幾種技術(shù)來促進(jìn)訓(xùn)練非常深的網(wǎng)絡(luò):1)殘差縮放[21,20],即在將殘差加到主路徑上之前,通過將其乘以一個0到1之間的常量來縮小殘差以防止不穩(wěn)定性;2)更小的初始化,因為我們憑經(jīng)驗發(fā)現(xiàn)當(dāng)初始參數(shù)方差變得更小時,殘差結(jié)構(gòu)更容易訓(xùn)練。更多討論可在補充材料中找到。
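The two stabilization tricks can each be sketched in a few lines (illustrative only; the constants 0.2 and 0.1 are assumed values, and the paper leaves the exact initialization details to its supplementary material):

```python
import random

def scaled_residual_add(x, residual, scale=0.2):
    # Residual scaling: multiply the residual branch by a constant in (0, 1)
    # before adding it to the main path, damping activation magnitudes.
    return x + scale * residual

def small_init(fan_in, gain=0.1):
    # "Smaller initialization": shrink a standard (Kaiming-style) initializer's
    # standard deviation, which empirically eases training of deep residual nets.
    std = gain * (2.0 / fan_in) ** 0.5
    return [random.gauss(0.0, std) for _ in range(fan_in)]
```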

The training details and the effectiveness of the proposed network will be presented in Sec. 4.

訓(xùn)練細(xì)節(jié)和提出網(wǎng)絡(luò)的有效性將在第4節(jié)中介紹。

3.2 Relativistic Discriminator

Besides the improved structure of generator, we also enhance the discriminator based on the Relativistic GAN [2]. Different from the standard discriminator D in SRGAN, which estimates the probability that one input image x is real and natural, a relativistic discriminator tries to predict the probability that a real image x_r is relatively more realistic than a fake one x_f, as shown in Fig. 5.

Figure 5

Fig. 5: Difference between standard discriminator and relativistic discriminator.

3.2 相對判別器

除了改進(jìn)生成器架構(gòu)之外,我們還在相對GAN[2]的基礎(chǔ)上增強了判別器。不同于SRGAN中估算輸入圖像x是真實自然的概率的標(biāo)準(zhǔn)判別器D,相對判別器嘗試預(yù)測真實圖像x_r比假圖像x_f相對更真實的概率,如圖5所示。

Figure 5

圖5:標(biāo)準(zhǔn)判別器和相對判別器的差異。

Specifically, we replace the standard discriminator with the Relativistic average Discriminator RaD [2], denoted as D_{Ra}. The standard discriminator in SRGAN can be expressed as D(x) = \sigma(C(x)), where \sigma is the sigmoid function and C(x) is the non-transformed discriminator output. Then the RaD is formulated as D_{Ra}(x_r, x_f) = \sigma(C(x_r) - \mathbb{E}_{x_f}[C(x_f)]), where \mathbb{E}_{x_f}[\cdot] represents the operation of taking the average over all fake data in the mini-batch. The discriminator loss is then defined as: L^{Ra}_{D} = -\mathbb{E}_{x_r}[\log(D_{Ra}(x_r, x_f))] - \mathbb{E}_{x_f}[\log(1 - D_{Ra}(x_f, x_r))]. \tag{1}

The adversarial loss for generator is in a symmetrical form: L^{Ra}_{G} = -\mathbb{E}_{x_r}[\log(1 - D_{Ra}(x_r, x_f))] - \mathbb{E}_{x_f}[\log(D_{Ra}(x_f, x_r))], \tag{2}

where x_f = G(x_i) and x_i stands for the input LR image. It is observed that the adversarial loss for generator contains both x_r and x_f. Therefore, our generator benefits from the gradients from both generated data and real data in adversarial training, while in SRGAN only generated part takes effect. In Sec. 4.4, we will show that this modification of discriminator helps to learn sharper edges and more detailed textures.

具體來說,我們用相對平均判別器RaD[2]代替標(biāo)準(zhǔn)判別器,記為D_{Ra}。SRGAN中的標(biāo)準(zhǔn)判別器可表示為D(x) = \sigma(C(x)),其中\sigma是sigmoid函數(shù),C(x)是未經(jīng)變換的判別器輸出。然后RaD用公式表示為D_{Ra}(x_r, x_f) = \sigma(C(x_r) - \mathbb{E}_{x_f}[C(x_f)]),其中\mathbb{E}_{x_f}[\cdot]表示對小批次中所有假數(shù)據(jù)取平均值的操作。然后判別器損失定義為:L^{Ra}_{D} = -\mathbb{E}_{x_r}[\log(D_{Ra}(x_r, x_f))] - \mathbb{E}_{x_f}[\log(1 - D_{Ra}(x_f, x_r))]. \tag{1}

生成器的對抗損失呈對稱形式:L^{Ra}_{G} = -\mathbb{E}_{x_r}[\log(1 - D_{Ra}(x_r, x_f))] - \mathbb{E}_{x_f}[\log(D_{Ra}(x_f, x_r))], \tag{2}

其中x_f = G(x_i),x_i代表輸入LR圖像。可以看出,生成器的對抗損失包含x_r和x_f。因此,在對抗訓(xùn)練中,我們的生成器受益于生成數(shù)據(jù)和真實數(shù)據(jù)兩者的梯度,而在SRGAN中僅生成數(shù)據(jù)部分起作用。在4.4節(jié)中,我們將展示判別器的這種修改有助于學(xué)習(xí)更清晰的邊緣和更細(xì)致的紋理。
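Eqs. (1) and (2) can be sketched directly on raw discriminator outputs C(x). A minimal pure-Python illustration (real implementations work on tensors and usually express this through a binary-cross-entropy loss; the helper names here are assumptions):

```python
import math

def _sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ragan_losses(c_real, c_fake):
    """Relativistic average GAN losses, Eqs. (1) and (2).

    c_real / c_fake: raw (pre-sigmoid) discriminator outputs C(x) for the
    real and fake samples of one mini-batch, as plain lists of floats.
    Returns (L_D^Ra, L_G^Ra).
    """
    mean_r = sum(c_real) / len(c_real)
    mean_f = sum(c_fake) / len(c_fake)
    d_rf = [_sigmoid(c - mean_f) for c in c_real]  # D_Ra(x_r, x_f)
    d_fr = [_sigmoid(c - mean_r) for c in c_fake]  # D_Ra(x_f, x_r)
    loss_d = -(sum(math.log(p) for p in d_rf) / len(d_rf)
               + sum(math.log(1.0 - p) for p in d_fr) / len(d_fr))
    loss_g = -(sum(math.log(1.0 - p) for p in d_rf) / len(d_rf)
               + sum(math.log(p) for p in d_fr) / len(d_fr))
    return loss_d, loss_g
```

Note that both real and fake scores enter the generator loss, which is exactly why gradients from real data reach the generator.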

3.3 Perceptual Loss

We also develop a more effective perceptual loss L_{percep} by constraining on features before activation rather than after activation as practiced in SRGAN.

3.3 感知損失

通過約束激活之前的特征而不是SRGAN中實踐的激活之后的特征,我們還開發(fā)了一種更有效的感知損失L_{percep}。

Based on the idea of being closer to perceptual similarity [29,14], Johnson et al. [13] propose perceptual loss and it is extended in SRGAN [1]. Perceptual loss is previously defined on the activation layers of a pre-trained deep network, where the distance between two activated features is minimized. Contrary to the convention, we propose to use features before the activation layers, which will overcome two drawbacks of the original design. First, the activated features are very sparse, especially after a very deep network, as depicted in Fig. 6. For example, the average percentage of activated neurons for image ‘baboon’ after VGG19-54 layer is merely 11.17%. The sparse activation provides weak supervision and thus leads to inferior performance. Second, using features after activation also causes inconsistent reconstructed brightness compared with the ground-truth image, which we will show in Sec. 4.4.

Figure 6

Fig.6: Representative feature maps before and after activation for image ‘baboon’. With the network going deeper, most of the features after activation become inactive while features before activation contain more information.

基于更接近感知相似[29,14]的想法,Johnson等[13]提出了感知損失并在SRGAN[1]中得到了擴展。之前的感知損失定義在預(yù)訓(xùn)練深度網(wǎng)絡(luò)的激活層上,最小化兩個激活特征之間的距離。與常規(guī)用法相反,我們提出使用激活層之前的特征,這將克服原始設(shè)計的兩個缺點。首先,激活特征非常稀疏,尤其是在非常深的網(wǎng)絡(luò)之后,如圖6所示。例如,圖像“狒狒”在VGG19-54層之后激活神經(jīng)元的平均百分比只有11.17%。稀疏的激活提供了弱監(jiān)督,因此導(dǎo)致性能較差。其次,與真實圖像相比,使用激活之后的特征也會導(dǎo)致重建亮度不一致,這將在4.4節(jié)中展示。

Figure 6

圖6:圖像“狒狒”激活之前和激活之后代表性的特征映射。隨著網(wǎng)絡(luò)加深,大多數(shù)激活之后的特征變得不活躍而激活之前的特征包含更多的信息。
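The sparsity argument can be seen on a toy feature vector (a minimal illustration; the numbers below are made up for the example, not measured VGG activations):

```python
def relu(features):
    # The activation zeroes out every negative response.
    return [max(0.0, v) for v in features]

def active_fraction(features):
    # Fraction of non-zero entries, analogous to the 11.17% reported for
    # image 'baboon' after the VGG19-54 layer.
    return sum(1 for v in features if v != 0.0) / len(features)

before = [-1.2, 0.3, -0.7, 2.1, -0.05, 0.9]  # features before activation
after = relu(before)                          # features after activation
```

Every entry of `before` can supervise the generator, while half of `after` is zero; a loss on pre-activation features therefore provides denser supervision.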

Therefore, the total loss for the generator is: L_G = L_{percep} + \lambda L^{Ra}_G + \eta L_1 \tag{3} where L_1 = \mathbb{E}_{x_i} ||G(x_i) - y||_1 is the content loss that evaluates the 1-norm distance between the recovered image G(x_i) and the ground-truth y, and \lambda, \eta are the coefficients to balance different loss terms.

因此,生成器的全部損失為:L_G = L_{percep} + \lambda L^{Ra}_G + \eta L_1 \tag{3},其中L_1 = \mathbb{E}_{x_i} ||G(x_i) - y||_1是內(nèi)容損失,用來評估恢復(fù)圖像G(x_i)和真實圖像y之間的1范數(shù)距離,\lambda, \eta是平衡不同損失項的系數(shù)。
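Eq. (3) is a plain weighted sum of the three terms. A sketch with the weights reported in Sec. 4.1 as defaults (the function itself and its sample inputs are illustrative, not from the paper):

```python
def generator_total_loss(l_percep, l_ra_g, l1, lam=5e-3, eta=1e-2):
    # L_G = L_percep + lambda * L_G^Ra + eta * L_1   (Eq. 3)
    # lam = 5e-3 and eta = 1e-2 are the values given in Sec. 4.1.
    return l_percep + lam * l_ra_g + eta * l1
```

The small default weights keep the perceptual term dominant while the adversarial and L1 terms act as regularizers.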

We also explore a variant of perceptual loss in the PIRM-SR Challenge. In contrast to the commonly used perceptual loss that adopts a VGG network trained for image classification, we develop a more suitable perceptual loss for SR, the MINC loss. It is based on a VGG network fine-tuned for material recognition [38], which focuses on textures rather than objects. Although the gain of perceptual index brought by MINC loss is marginal, we still believe that exploring perceptual loss that focuses on texture is critical for SR.

我們在PIRM-SR挑戰(zhàn)賽中也探索了感知損失的一個變種。與采用為圖像分類訓(xùn)練的VGG網(wǎng)絡(luò)的常用感知損失不同,我們?yōu)镾R開發(fā)了一種更合適的感知損失——MINC損失。它基于為材料識別[38]微調(diào)的VGG網(wǎng)絡(luò),該網(wǎng)絡(luò)注重于紋理而不是目標(biāo)。盡管MINC損失帶來的感知指數(shù)收益是微不足道的,但我們?nèi)匀徽J(rèn)為,探索注重紋理的感知損失對于SR至關(guān)重要。

3.4 Network Interpolation

To remove unpleasant noise in GAN-based methods while maintaining a good perceptual quality, we propose a flexible and effective strategy – network interpolation. Specifically, we first train a PSNR-oriented network G_{PSNR} and then obtain a GAN-based network G_{GAN} by fine-tuning. We interpolate all the corresponding parameters of these two networks to derive an interpolated model G_{INTERP}, whose parameters are: \theta^{INTERP}_{G} = (1 - \alpha) \theta^{PSNR}_{G} + \alpha \theta^{GAN}_{G} \tag{4} where \theta^{INTERP}_{G}, \theta^{PSNR}_{G} and \theta^{GAN}_{G} are the parameters of G_{INTERP}, G_{PSNR} and G_{GAN}, respectively, and \alpha \in [0, 1] is the interpolation parameter.

3.4 網(wǎng)絡(luò)插值

為了去除基于GAN方法中討厭的噪聲同時保持好的感知質(zhì)量,我們提出了一種靈活有效的策略——網(wǎng)絡(luò)插值。具體來說,我們首先訓(xùn)練一個面向PSNR的網(wǎng)絡(luò)G_{PSNR},然后通過微調(diào)獲得一個基于GAN的網(wǎng)絡(luò)G_{GAN}。我們插值這兩個網(wǎng)絡(luò)的所有對應(yīng)參數(shù)來取得插值模型G_{INTERP},其參數(shù)為:\theta^{INTERP}_{G} = (1 - \alpha) \theta^{PSNR}_{G} + \alpha \theta^{GAN}_{G} \tag{4} 其中\theta^{INTERP}_{G}、\theta^{PSNR}_{G}和\theta^{GAN}_{G}分別是G_{INTERP}、G_{PSNR}和G_{GAN}的參數(shù),\alpha \in [0, 1]為插值參數(shù)。
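Eq. (4) amounts to a per-parameter linear blend of two checkpoints. A sketch on plain dictionaries of floats (an assumption for illustration; real state dicts hold tensors, and both networks must share the same architecture):

```python
def interpolate_params(theta_psnr, theta_gan, alpha):
    """Blend a PSNR-oriented and a GAN-based generator, Eq. (4).

    alpha = 0 reproduces the PSNR model, alpha = 1 the GAN model;
    values in between trade fidelity against perceptual quality.
    """
    assert theta_psnr.keys() == theta_gan.keys()
    assert 0.0 <= alpha <= 1.0
    return {name: (1.0 - alpha) * theta_psnr[name] + alpha * theta_gan[name]
            for name in theta_psnr}
```

Sweeping `alpha` re-blends the weights without any retraining, which is the key practical advantage over re-tuning loss weights.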

The proposed network interpolation enjoys two merits. First, the interpolated model is able to produce meaningful results for any feasible \alpha without introducing artifacts. Second, we can continuously balance perceptual quality and fidelity without re-training the model.

提出的網(wǎng)絡(luò)插值有兩個優(yōu)點。首先,插值模型對于任何可行的\alpha都能產(chǎn)生有意義的結(jié)果而不會引入偽影。其次,我們可以連續(xù)地平衡感知質(zhì)量和保真度,而不必重新訓(xùn)練模型。

We also explore alternative methods to balance the effects of PSNR-oriented and GAN-based methods. For instance, one can directly interpolate their output images (pixel by pixel) rather than the network parameters. However, such an approach fails to achieve a good trade-off between noise and blur, i.e., the interpolated image is either too blurry or noisy with artifacts (see Sec. 4.5). Another method is to tune the weights of content loss and adversarial loss, i.e., the parameter \lambda and \eta in Eq. (3). But this approach requires tuning loss weights and fine-tuning the network, and thus it is too costly to achieve continuous control of the image style.

我們也探索了替代方法來平衡面向PSNR方法和基于GAN方法的影響。例如,可以直接插值它們的輸出圖像(逐像素)而不是網(wǎng)絡(luò)參數(shù)。然而,這種方法無法在噪聲和模糊之間取得良好的權(quán)衡,即插值圖像要么太模糊,要么噪聲太大且?guī)в袀斡?見4.5節(jié))。另一種方法是調(diào)整內(nèi)容損失和對抗損失的權(quán)重,即方程(3)中的參數(shù)\lambda和\eta。但這種方法要求調(diào)整損失權(quán)重并微調(diào)網(wǎng)絡(luò),因此實現(xiàn)圖像風(fēng)格的連續(xù)控制代價很高。

4 Experiments

4.1 Training Details

Following SRGAN [1], all experiments are performed with a scaling factor of ×4 between LR and HR images. We obtain LR images by down-sampling HR images using the MATLAB bicubic kernel function. The mini-batch size is set to 16. The spatial size of cropped HR patch is 128 × 128. We observe that training a deeper network benefits from a larger patch size, since an enlarged receptive field helps to capture more semantic information. However, it costs more training time and consumes more computing resources. This phenomenon is also observed in PSNR-oriented methods (see supplementary material).

4 實驗

4.1 訓(xùn)練細(xì)節(jié)

按照SRGAN[1],所有實驗在LR和HR圖像間均以4倍的尺度系數(shù)進(jìn)行。我們通過使用MATLAB雙三次核函數(shù)對HR圖像進(jìn)行下采樣來獲得LR圖像。小批次大小設(shè)置為16。裁剪的HR圖像塊的空間大小為128×128。我們觀察到,訓(xùn)練更深的網(wǎng)絡(luò)可以從更大的圖像塊大小中獲益,因為擴大的感受野有助于捕獲更多的語義信息。但是,這會花費更多的訓(xùn)練時間并消耗更多的計算資源。這種現(xiàn)象也可以在面向PSNR的方法中觀察到(見補充材料)。

The training process is divided into two stages. First, we train a PSNR-oriented model with the L1 loss. The learning rate is initialized as 2 × 10^{-4} and decayed by a factor of 2 every 2 × 10^5 mini-batch updates. We then employ the trained PSNR-oriented model as an initialization for the generator. The generator is trained using the loss function in Eq. (3) with \lambda = 5×10^{-3} and \eta = 1×10^{-2}. The learning rate is set to 1×10^{-4} and halved at [50k, 100k, 200k, 300k] iterations. Pre-training with pixel-wise loss helps GAN-based methods to obtain more visually pleasing results. The reasons are that 1) it can avoid undesired local optima for the generator; 2) after pre-training, the discriminator receives relatively good super-resolved images instead of extreme fake ones (black or noisy images) at the very beginning, which helps it to focus more on texture discrimination.

訓(xùn)練過程分為兩個階段。首先,我們用L1損失訓(xùn)練一個面向PSNR的模型。學(xué)習(xí)率初始化為2 × 10^{-4},每2 × 10^5次小批次更新后衰減為原來的一半。然后,我們采用訓(xùn)練好的面向PSNR的模型作為生成器的初始化。生成器使用等式(3)中的損失函數(shù)進(jìn)行訓(xùn)練,其中\lambda = 5×10^{-3},\eta = 1×10^{-2}。學(xué)習(xí)率設(shè)置為1×10^{-4},并在[50k, 100k, 200k, 300k]次迭代時減半。使用逐像素?fù)p失進(jìn)行預(yù)訓(xùn)練有助于基于GAN的方法獲得視覺上更好的結(jié)果。原因是:1)它可以避免生成器陷入不希望的局部最優(yōu);2)在預(yù)訓(xùn)練之后,判別器最初收到的是相對好的超分辨率圖像而不是極端假的圖像(黑色或噪聲圖像),這有助于其更關(guān)注紋理判別。
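The generator-stage schedule above can be sketched as a step function over iterations (an illustrative sketch assuming exact halving at each listed milestone; the function name is an assumption):

```python
def generator_lr(iteration, base_lr=1e-4,
                 milestones=(50_000, 100_000, 200_000, 300_000)):
    # Start at 1e-4 and halve at 50k, 100k, 200k and 300k iterations (Sec. 4.1).
    lr = base_lr
    for m in milestones:
        if iteration >= m:
            lr *= 0.5
    return lr
```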

For optimization, we use Adam [39] with \beta_1 = 0.9, \beta_2 = 0.999. We alternately update the generator and discriminator network until the model converges. We use two settings for our generator – one of them contains 16 residual blocks, with a capacity similar to that of SRGAN and the other is a deeper model with 23 RRDB blocks. We implement our models with the PyTorch framework and train them using NVIDIA Titan Xp GPUs.

為了優(yōu)化,我們使用Adam[39],其中\beta_1 = 0.9, \beta_2 = 0.999。我們交替更新生成器和判別器網(wǎng)絡(luò),直到模型收斂。我們?yōu)樯善魇褂昧藘煞N設(shè)置——其中一種包含16個殘差塊,能力類似于SRGAN,另一種是具有23個RRDB塊的更深的模型。我們使用PyTorch框架實現(xiàn)我們的模型,并使用NVIDIA Titan Xp GPU對其進(jìn)行訓(xùn)練。

4.2 Data

For training, we mainly use the DIV2K dataset [40], which is a high-quality (2K resolution) dataset for image restoration tasks. Beyond the training set of DIV2K that contains 800 images, we also seek for other datasets with rich and diverse textures for our training. To this end, we further use the Flickr2K dataset [41] consisting of 2650 2K high-resolution images collected on the Flickr website, and the OutdoorSceneTraining (OST) [17] dataset to enrich our training set. We empirically find that using this large dataset with richer textures helps the generator to produce more natural results, as shown in Fig. 8.

Figure 8

Fig. 8: Overall visual comparisons for showing the effects of each component in ESRGAN. Each column represents a model with its configurations in the top. The red sign indicates the main improvement compared with the previous model.

4.2 數(shù)據(jù)

對于訓(xùn)練,我們主要使用DIV2K數(shù)據(jù)集[40],它是用于圖像復(fù)原任務(wù)的高質(zhì)量(2K分辨率)數(shù)據(jù)集。除了包含800張圖像的DIV2K訓(xùn)練集外,我們也搜尋了其它具有豐富多樣紋理的數(shù)據(jù)集進(jìn)行訓(xùn)練。為此,我們進(jìn)一步使用由Flickr網(wǎng)站上收集的2650張2K高分辨率圖像組成的Flickr2K數(shù)據(jù)集[41],以及OutdoorSceneTraining(OST)數(shù)據(jù)集[17]來豐富我們的訓(xùn)練集。我們憑經(jīng)驗發(fā)現(xiàn),使用這種具有更豐富紋理的大型數(shù)據(jù)集有助于生成器產(chǎn)生更自然的結(jié)果,如圖8所示。

Figure 8

圖8:展示ESRGAN中每個組件效果的整體視覺比較。每一列表示一個模型,其配置在頂部。紅色符號表示與前面模型相比的主要改進(jìn)。

We train our models in RGB channels and augment the training dataset with random horizontal flips and 90 degree rotations. We evaluate our models on widely used benchmark datasets – Set5 [42], Set14 [43], BSD100 [44], Urban100 [45], and the PIRM self-validation dataset that is provided in the PIRM-SR Challenge.

我們在RGB通道上訓(xùn)練模型,并通過隨機水平翻轉(zhuǎn)和90度旋轉(zhuǎn)來增強訓(xùn)練集。我們在廣泛使用的基準(zhǔn)數(shù)據(jù)集Set5[42]、Set14[43]、BSD100[44]、Urban100[45],以及PIRM-SR挑戰(zhàn)賽提供的PIRM自驗證數(shù)據(jù)集上評估我們的模型。
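A minimal sketch of this augmentation, assuming HR/LR image pairs given as NumPy arrays of shape H×W×C (the function name and interface are illustrative, not taken from the released code):

```python
import random
import numpy as np

def augment(lr_img, hr_img, rng=random):
    """Random horizontal flip and random 90-degree rotation,
    applied identically to the LR/HR pair."""
    if rng.random() < 0.5:                 # random horizontal flip
        lr_img, hr_img = np.fliplr(lr_img), np.fliplr(hr_img)
    k = rng.randrange(4)                   # 0, 90, 180 or 270 degrees
    return np.rot90(lr_img, k), np.rot90(hr_img, k)
```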

4.3 Qualitative Results

We compare our final models on several public benchmark datasets with state-of-the-art PSNR-oriented methods including SRCNN [4], EDSR [20] and RCAN [12], and also with perceptual-driven approaches including SRGAN [1] and EnhanceNet [16]. Since there is no effective and standard metric for perceptual quality, we present some representative qualitative results in Fig. 7. PSNR (evaluated on the luminance channel in YCbCr color space) and the perceptual index used in the PIRM-SR Challenge are also provided for reference.

4.3 定性結(jié)果

我們在一些公開基準(zhǔn)數(shù)據(jù)集上,將最終的模型與最新的面向PSNR的方法(包括SRCNN[4]、EDSR[20]和RCAN[12])以及感知驅(qū)動的方法(包括SRGAN[1]和EnhanceNet[16])進(jìn)行了比較。由于感知質(zhì)量尚無有效的標(biāo)準(zhǔn)度量,我們在圖7中展示了一些具有代表性的定性結(jié)果,同時提供了PSNR(在YCbCr顏色空間的亮度通道上評估)和PIRM-SR挑戰(zhàn)賽中使用的感知指數(shù)供參考。
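For reference, the Y-channel PSNR mentioned above can be computed as below. The BT.601 YCbCr conversion for 8-bit inputs is a common convention in SR evaluation and is assumed here rather than taken from the authors' evaluation script:

```python
import math
import numpy as np

def rgb_to_y(img):
    """Luminance (Y) channel of YCbCr, ITU-R BT.601, inputs in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)
```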

It can be observed from Fig. 7 that our proposed ESRGAN outperforms previous approaches in both sharpness and details. For instance, ESRGAN can produce sharper and more natural baboon’s whiskers and grass textures (see image 43074) than PSNR-oriented methods, which tend to generate blurry results, and than previous GAN-based methods, whose textures are unnatural and contain unpleasing noise. ESRGAN is capable of generating more detailed structures in building (see image 102061) while other methods either fail to produce enough details (SRGAN) or add undesired textures (EnhanceNet). Moreover, previous GAN-based methods sometimes introduce unpleasant artifacts, e.g., SRGAN adds wrinkles to the face. Our ESRGAN gets rid of these artifacts and produces natural results.

從圖7可以看出,我們提出的ESRGAN在清晰度和細(xì)節(jié)方面都優(yōu)于之前的方法。例如,與面向PSNR的方法(更趨向于產(chǎn)生模糊的結(jié)果)和以前的基于GAN的方法(紋理不自然并包含令人不快的噪聲)相比,ESRGAN可以產(chǎn)生更清晰更自然的狒狒胡須和草的紋理(見圖43074)。在建筑物中(見圖102061),ESRGAN能夠產(chǎn)生更詳細(xì)的結(jié)構(gòu)而其它的方法要么不能產(chǎn)生足夠的細(xì)節(jié)(SRGAN),要么添加不必要的紋理(EnhanceNet)。此外,以前基于GAN的方法有時會引入令人不快的偽影,例如SRGAN會在臉上添加皺紋。我們的ESRGAN除去了這些偽影并產(chǎn)生了自然的結(jié)果。

4.4 Ablation Study

In order to study the effects of each component in the proposed ESRGAN, we gradually modify the baseline SRGAN model and compare their differences. The overall visual comparison is illustrated in Fig. 8. Each column represents a model with its configurations shown in the top. The red sign indicates the main improvement compared with the previous model. A detailed discussion is provided as follows.

4.4 消融研究

為了研究提出的ESRGAN中每個組件的效果,我們逐漸修改基準(zhǔn)的SRGAN模型并比較它們的差異。完整的視覺比較如圖8所示。每一列表示一個模型,其配置在頂部。紅色符號表明與前面模型相比的主要改進(jìn)。詳細(xì)討論提供如下。

BN removal. We first remove all BN layers for stable and consistent performance without artifacts. It does not decrease the performance but saves the computational resources and memory usage. For some cases, a slight improvement can be observed from the 2nd and 3rd columns in Fig. 8 (e.g., image 39). Furthermore, we observe that when a network is deeper and more complicated, the model with BN layers is more likely to introduce unpleasant artifacts. The examples can be found in the supplementary material.

移除BN。我們首先移除了所有BN層,以獲得穩(wěn)定、一致且無偽影的性能。這不會降低性能,反而節(jié)省了計算資源和內(nèi)存占用。在某些情況下,從圖8的第2列和第3列可以觀察到輕微的改進(jìn)(例如圖像39)。此外,我們觀察到,當(dāng)網(wǎng)絡(luò)更深更復(fù)雜時,帶BN層的模型更容易引入令人不快的偽影。示例可在補充材料中找到。
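As a rough illustration of the memory saved, a standard BN layer keeps four per-channel quantities (scale, shift, running mean, running variance); the layer counts below are hypothetical, purely to make the point concrete:

```python
def bn_state_count(num_bn_layers, channels):
    """Per-channel BN state: scale, shift, running mean, running variance."""
    return num_bn_layers * channels * 4

# e.g. a hypothetical 16-block generator with two 64-channel BN layers
# per block would carry this many extra stored values:
saved = bn_state_count(16 * 2, 64)
```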

Before activation in perceptual loss. We first demonstrate that using features before activation can result in more accurate brightness of reconstructed images. To eliminate the influences of textures and color, we filter the image with a Gaussian kernel and plot the histogram of its gray-scale counterpart. Fig. 9a shows the distribution of each brightness value. Using activated features skews the distribution to the left, resulting in a dimmer output while using features before activation leads to a more accurate brightness distribution closer to that of the ground-truth.

Figure 9

Fig. 9: Comparison between before activation and after activation.

感知損失在激活之前。我們首先證實了使用激活之前的特征可以使重建圖像的亮度更準(zhǔn)確。為了消除紋理和顏色的影響,我們使用高斯核對圖像進(jìn)行了濾波并繪制了其對應(yīng)灰度圖像的直方圖。圖9a展示了每一個亮度值的分布。使用激活的特征會使分布偏向左,導(dǎo)致了較暗的輸出,而使用激活之前的特征會得到更精確的亮度分布,更接近于真實圖像的亮度分布。
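The brightness analysis described above can be reproduced roughly as follows: blur the gray-scale image with a separable Gaussian, then histogram its brightness values. The kernel size and sigma here are assumed for illustration, not taken from the paper:

```python
import numpy as np

def gaussian_kernel1d(size=5, sigma=1.0):
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(size) - size // 2
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def brightness_histogram(gray, bins=256):
    """Separable Gaussian blur, then a brightness histogram,
    mirroring the Fig. 9a analysis (illustrative sketch)."""
    k = gaussian_kernel1d()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, gray)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    hist, _ = np.histogram(blurred, bins=bins, range=(0.0, 256.0))
    return hist
```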

Figure 9

圖9:激活之前和激活之后的比較。

We can further observe that using features before activation helps to produce sharper edges and richer textures as shown in Fig. 9b (see bird feather) and Fig. 8 (see the 3rd and 4th columns), since the dense features before activation offer a stronger supervision than a sparse activation could provide.

我們可以進(jìn)一步觀察到,使用激活之前的特征有助于產(chǎn)生更清晰的邊緣和更豐富的紋理,如圖9b(見鳥羽)和圖8(見第三列和第四列)所示,因為與稀疏激活提供的特征相比,激活之前的密集特征能提供更強的監(jiān)督。
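The sparsity argument can be seen with a toy example: after ReLU, every negative response collapses to zero, so fewer feature entries carry a supervision signal (illustration only; the values are made up):

```python
def relu(features):
    """Element-wise ReLU over a list of feature responses."""
    return [max(0.0, f) for f in features]

before = [-1.2, 0.3, -0.7, 2.1, -0.1, 0.9]   # dense pre-activation features
after = relu(before)                          # sparse post-activation features

dense_count = sum(1 for f in before if f != 0.0)
sparse_count = sum(1 for f in after if f != 0.0)
```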

RaGAN. RaGAN uses an improved relativistic discriminator, which is shown to benefit learning sharper edges and more detailed textures. For example, in the 5th column of Fig. 8, the generated images are sharper with richer textures than those on their left (see the baboon, image 39 and image 43074).

RaGAN。RaGAN使用改進(jìn)的相對判別器,證明了其有利于學(xué)習(xí)更清晰的邊緣和更細(xì)致的紋理。例如,在圖8的第5列中,生成的圖像比其左側(cè)的圖像更清晰,具有更豐富的紋理(見狒狒,圖39和圖43074)。
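The relativistic average discriminator follows [2]: instead of estimating the absolute probability that an input is real, it estimates how much more realistic a real image is than the fake images on average. A plain-Python sketch, with C(·) given as raw discriminator logits (an assumption of this illustration):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def d_ra(real_logit, fake_logits):
    """Relativistic average discriminator:
    D_Ra(x_r, x_f) = sigmoid(C(x_r) - mean_f C(x_f))."""
    mean_fake = sum(fake_logits) / len(fake_logits)
    return sigmoid(real_logit - mean_fake)
```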

Deeper network with RRDB. A deeper model with the proposed RRDB can further improve the recovered textures, especially for regular structures like the roof of image 6 in Fig. 8, since the deep model has a strong representation capacity to capture semantic information. Also, we find that a deeper model can reduce unpleasing noises, as in image 20 in Fig. 8.

具有RRDB的更深網(wǎng)絡(luò)。具有提出的RRDB的更深模型可以進(jìn)一步改善恢復(fù)的紋理,尤其是像圖8中圖像6的屋頂這樣的常規(guī)結(jié)構(gòu),因為深度模型具有強大的表示能力來捕獲語義信息。我們還發(fā)現(xiàn),更深的模型可以減少像圖8中圖像20這樣的令人不快的噪聲。

In contrast to SRGAN, which claimed that deeper models are increasingly difficult to train, our deeper model shows its superior performance with easy training, thanks to the improvements mentioned above especially the proposed RRDB without BN layers.

與SRGAN聲稱更深的模型越來越難以訓(xùn)練相反,得益于上述改進(jìn),尤其是提出的不含BN層的RRDB,我們更深的模型易于訓(xùn)練并展現(xiàn)出了優(yōu)越的性能。

4.5 Network Interpolation

We compare the effects of the network interpolation and image interpolation strategies in balancing the results of a PSNR-oriented model and a GAN-based method. We apply simple linear interpolation on both schemes. The interpolation parameter \alpha is chosen from 0 to 1 with an interval of 0.2.

4.5 網(wǎng)絡(luò)插值

我們比較了網(wǎng)絡(luò)插值和圖像插值策略在平衡面向PSNR模型與基于GAN方法的結(jié)果方面的作用。我們在這兩種方案中都應(yīng)用了簡單的線性插值。插值參數(shù)\alpha在0到1之間以0.2為間隔選取。
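Network interpolation blends the two models' weights directly, here sketched with plain dicts of floats standing in for the networks' state dicts:

```python
def interpolate_params(psnr_params, gan_params, alpha):
    """theta_interp = (1 - alpha) * theta_PSNR + alpha * theta_GAN,
    applied parameter by parameter."""
    return {name: (1 - alpha) * psnr_params[name] + alpha * gan_params[name]
            for name in psnr_params}

# alpha swept from 0 to 1 with an interval of 0.2, as in the text
alphas = [round(0.2 * i, 1) for i in range(6)]
```

With alpha = 0 the result is the PSNR-oriented model, with alpha = 1 the GAN-based model, and intermediate values trade fidelity against perceptual quality.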

As depicted in Fig. 10, the pure GAN-based method produces sharp edges and richer textures but with some unpleasant artifacts, while the pure PSNR-oriented method outputs cartoon-style blurry images. By employing network interpolation, unpleasing artifacts are reduced while the textures are maintained. By contrast, image interpolation fails to remove these artifacts effectively.

Figure 10

Fig. 10: The comparison between network interpolation and image interpolation.

如圖10所示,單純的基于GAN的方法會產(chǎn)生清晰的邊緣和更豐富的紋理,但帶有一些令人不快的偽影,而單純的面向PSNR方法會輸出卡通風(fēng)格的模糊圖像。通過采用網(wǎng)絡(luò)插值,在減少令人不快的偽影的同時保持了紋理。相比之下,圖像插值不能有效消除這些偽影。

Figure 10

圖10:網(wǎng)絡(luò)插值和圖像插值的比較。

Interestingly, it is observed that the network interpolation strategy provides a smooth control of balancing perceptual quality and fidelity in Fig. 10.

有趣的是,在圖10中觀察到網(wǎng)絡(luò)插值策略提供了對平衡感知質(zhì)量和保真度的平滑控制。

4.6 The PIRM-SR Challenge

We take a variant of ESRGAN to participate in the PIRM-SR Challenge [3]. Specifically, we use the proposed ESRGAN with 16 residual blocks and also empirically make some modifications to cater to the perceptual index. 1) The MINC loss is used as a variant of perceptual loss, as discussed in Sec. 3.3. Despite the marginal gain on the perceptual index, we still believe that exploring perceptual loss that focuses on texture is crucial for SR. 2) Pristine dataset [24], which is used for learning the perceptual index, is also employed in our training; 3) a high weight of loss L_1 up to \eta = 10 is used due to the PSNR constraints; 4) we also use back projection [46] as post-processing, which can improve PSNR and sometimes lower the perceptual index.

4.6 PIRM-SR挑戰(zhàn)賽

我們采用ESRGAN的一個變種來參加PIRM-SR挑戰(zhàn)賽[3]。具體來說,我們使用提出的具有16個殘差塊的ESRGAN,并根據(jù)經(jīng)驗進(jìn)行了一些修改來迎合感知指數(shù)。1)使用MINC損失作為感知損失的一個變種,如3.3節(jié)所述。盡管在感知指數(shù)上收益有限,但我們?nèi)哉J(rèn)為探索專注于紋理的感知損失對于SR至關(guān)重要;2)我們的訓(xùn)練中也使用了用于學(xué)習(xí)感知指數(shù)的Pristine數(shù)據(jù)集[24];3)由于PSNR約束,L_1損失使用了高達(dá)\eta = 10的權(quán)重;4)我們還使用反向投影[46]作為后處理,它可以提高PSNR,有時還能降低感知指數(shù)。
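Back projection [46] iteratively pushes the SR estimate toward consistency with the LR input. A toy version using nearest-neighbour resampling (the real post-processing would use proper resampling kernels; this only shows the update rule):

```python
import numpy as np

def downsample(img, scale):
    """Nearest-neighbour downsampling by strided indexing."""
    return img[::scale, ::scale]

def upsample(img, scale):
    """Nearest-neighbour upsampling by repeating pixels."""
    return img.repeat(scale, axis=0).repeat(scale, axis=1)

def back_projection(sr, lr, scale=4, iters=3):
    """Add the upsampled LR residual back to the SR estimate."""
    sr = np.asarray(sr, float).copy()
    for _ in range(iters):
        residual = lr - downsample(sr, scale)
        sr += upsample(residual, scale)
    return sr
```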

For other regions 1 and 2 that require a higher PSNR, we use image interpolation between the results of our ESRGAN and those of a PSNR-oriented method RCAN [12]. The image interpolation scheme achieves a lower perceptual index (lower is better) although we observed more visually pleasing results by using the network interpolation scheme. Our proposed ESRGAN model won the first place in the PIRM-SR Challenge (region 3) with the best perceptual index.

對于其它需要較高PSNR的區(qū)域1和2,我們在ESRGAN的結(jié)果和面向PSNR方法RCAN[12]的結(jié)果之間使用圖像插值。盡管通過使用網(wǎng)絡(luò)插值方案我們觀察到了視覺上更令人滿意的效果,但圖像插值方案取得了較低的感知指數(shù)(越低越好)。我們提出的ESRGAN模型以最好的感知指數(shù)贏得了PIRM-SR挑戰(zhàn)賽(區(qū)域3)的第一名。

5 Conclusion

We have presented an ESRGAN model that achieves consistently better perceptual quality than previous SR methods. The method won the first place in the PIRM-SR Challenge in terms of the perceptual index. We have formulated a novel architecture containing several RRDB blocks without BN layers. In addition, useful techniques including residual scaling and smaller initialization are employed to facilitate the training of the proposed deep model. We have also introduced the use of relativistic GAN as the discriminator, which learns to judge whether one image is more realistic than another, guiding the generator to recover more detailed textures. Moreover, we have enhanced the perceptual loss by using the features before activation, which offer stronger supervision and thus restore more accurate brightness and realistic textures.

5 結(jié)論

我們提出了一種ESRGAN模型,其感知質(zhì)量始終優(yōu)于以前的SR方法。就感知指數(shù)而言,該方法在PIRM-SR挑戰(zhàn)賽中獲得了第一名。我們構(gòu)建了一種包含若干不帶BN層的RRDB塊的新穎架構(gòu)。此外,我們采用了包括殘差縮放和較小初始化在內(nèi)的有用技術(shù),以促進(jìn)所提出深度模型的訓(xùn)練。我們還引入了相對GAN作為判別器,其學(xué)習(xí)判斷一張圖像是否比另一張更真實,從而引導(dǎo)生成器恢復(fù)更細(xì)致的紋理。此外,我們通過使用激活之前的特征增強了感知損失,它提供了更強的監(jiān)督,從而恢復(fù)了更精確的亮度和更真實的紋理。
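Residual scaling, one of the training techniques mentioned, multiplies the residual branch by a small constant before the skip addition. The value beta = 0.2 below is an assumption for illustration (it matches common practice for residual scaling, not a figure quoted in this section):

```python
def scaled_residual(x, branch, beta=0.2):
    """Scale the residual branch output by beta before adding it back,
    which helps stabilize the training of very deep models."""
    return x + beta * branch(x)
```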

Acknowledgement. This work is supported by SenseTime Group Limited, the General Research Fund sponsored by the Research Grants Council of the Hong Kong SAR (CUHK 14241716, 14224316, 14209217), National Natural Science Foundation of China (U1613211) and Shenzhen Research Program (JCYJ20170818164704758, JCYJ20150925163005055).

致謝。這項工作由商湯科技支持,香港特別行政區(qū)研究資助局(CUHK 14241716、14224316、14209217),中國國家自然科學(xué)基金(U1613211)和深圳研究計劃(JCYJ20170818164704758,JCYJ20150925163005055)贊助。

References

  1. Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR. (2017)

  2. Jolicoeur-Martineau, A.: The relativistic discriminator: a key element missing from standard gan. arXiv preprint arXiv:1807.00734 (2018)

  3. Blau, Y., Mechrez, R., Timofte, R., Michaeli, T., Zelnik-Manor, L.: The PIRM challenge on perceptual super resolution. https://www.pirm2018.org/PIRM-SR.html (2018)

  4. Dong, C., Loy, C.C., He, K., Tang, X.: Learning a deep convolutional network for image super-resolution. In: ECCV. (2014)

  5. Kim, J., Kwon Lee, J., Mu Lee, K.: Accurate image super-resolution using very deep convolutional networks. In: CVPR. (2016)

  6. Lai, W.S., Huang, J.B., Ahuja, N., Yang, M.H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: CVPR. (2017)

  7. Kim, J., Kwon Lee, J., Mu Lee, K.: Deeply-recursive convolutional network for image super-resolution. In: CVPR. (2016)

  8. Tai, Y., Yang, J., Liu, X.: Image super-resolution via deep recursive residual network. In: CVPR. (2017)

  9. Tai, Y., Yang, J., Liu, X., Xu, C.: Memnet: A persistent memory network for image restoration. In: ICCV. (2017)

  10. Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. In: CVPR. (2018)

  11. Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: CVPR. (2018)

  12. Zhang, Y., Li, K., Li, K., Wang, L., Zhong, B., Fu, Y.: Image super-resolution using very deep residual channel attention networks. In: ECCV. (2018)

  13. Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: ECCV. (2016)

  14. Bruna, J., Sprechmann, P., LeCun, Y.: Super-resolution with deep convolutional sufficient statistics. In: ICLR. (2015)

  15. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: NIPS. (2014)

  16. Sajjadi, M.S., Schölkopf, B., Hirsch, M.: Enhancenet: Single image super-resolution through automated texture synthesis. In: ICCV. (2017)

  17. Wang, X., Yu, K., Dong, C., Loy, C.C.: Recovering realistic texture in image super-resolution by deep spatial feature transform. In: CVPR. (2018)

  18. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR. (2016)

  19. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: ICML. (2015)

  20. Lim, B., Son, S., Kim, H., Nah, S., Lee, K.M.: Enhanced deep residual networks for single image super-resolution. In: CVPRW. (2017)

  21. Szegedy, C., Ioffe, S., Vanhoucke, V.: Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261 (2016)

  22. Blau, Y., Michaeli, T.: The perception-distortion tradeoff. In: CVPR. (2017)

  23. Ma, C., Yang, C.Y., Yang, X., Yang, M.H.: Learning a no-reference quality metric for single-image super-resolution. CVIU 158 (2017) 1–16

  24. Mittal, A., Soundararajan, R., Bovik, A.C.: Making a completely blind image quality analyzer. IEEE Signal Process. Lett. 20(3) (2013) 209–212

  25. Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convolutional networks. TPAMI 38(2) (2016) 295–307

  26. Yu, K., Dong, C., Lin, L., Loy, C.C.: Crafting a toolchain for image restoration by deep reinforcement learning. In: CVPR. (2018)

  27. Yuan, Y., Liu, S., Zhang, J., Zhang, Y., Dong, C., Lin, L.: Unsupervised image super-resolution using cycle-in-cycle generative adversarial networks. In: CVPRW. (2018)

  28. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: ICCV. (2015)

  29. Gatys, L., Ecker, A.S., Bethge, M.: Texture synthesis using convolutional neural networks. In: NIPS. (2015)

  30. Mechrez, R., Talmi, I., Shama, F., Zelnik-Manor, L.: Maintaining natural image statistics with the contextual loss. arXiv preprint arXiv:1803.04626 (2018)

  31. Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein gan. arXiv preprint arXiv:1701.07875 (2017)

  32. Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., Courville, A.C.: Improved training of wasserstein gans. In: NIPS. (2017)

  33. Miyato, T., Kataoka, T., Koyama, M., Yoshida, Y.: Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957 (2018)

  34. Huang, G., Liu, Z., Weinberger, K.Q., van der Maaten, L.: Densely connected convolutional networks. In: CVPR. (2017)

  35. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: CVPR. (2017)

  36. Zhang, K., Sun, M., Han, X., Yuan, X., Guo, L., Liu, T.: Residual networks of residual networks: Multilevel residual networks. IEEE Transactions on Circuits and Systems for Video Technology (2017)

  37. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)

  38. Bell, S., Upchurch, P., Snavely, N., Bala, K.: Material recognition in the wild with the materials in context database. In: CVPR. (2015)

  39. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. In: ICLR. (2015)

  40. Agustsson, E., Timofte, R.: Ntire 2017 challenge on single image super-resolution: Dataset and study. In: CVPRW. (2017)

  41. Timofte, R., Agustsson, E., Van Gool, L., Yang, M.H., Zhang, L., Lim, B., Son, S., Kim, H., Nah, S., Lee, K.M., et al.: Ntire 2017 challenge on single image super-resolution: Methods and results. In: CVPRW. (2017)

  42. Bevilacqua, M., Roumy, A., Guillemot, C., Alberi-Morel, M.L.: Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In: BMVC, BMVA press (2012)

  43. Zeyde, R., Elad, M., Protter, M.: On single image scale-up using sparse-representations. In: International Conference on Curves and Surfaces, Springer (2010)

  44. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: ICCV. (2001)

  45. Huang, J.B., Singh, A., Ahuja, N.: Single image super-resolution from transformed self-exemplars. In: CVPR. (2015)

  46. Timofte, R., Rothe, R., Van Gool, L.: Seven ways to improve example-based single image super resolution. In: CVPR. (2016)

  47. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: International Conference on Artificial Intelligence and Statistics. (2010)
