Summary of Accepted CVPR 2024 Papers on Super-Resolution

CVPR 2024

The CVPR 2024 conference will be held June 17–21, 2024 in Seattle, USA.

This year there were 11,532 valid submissions, of which 2,719 papers were accepted, for an acceptance rate of 23.6%. The 2,719 accepted papers break down as follows:

  • Poster papers: 2,305 in total (84.8%). The CVPR organizing committee will present these in poster format.

  • Poster (Highlight) papers: those the area chairs judged especially interesting or innovative; 324 this year (11.9%).

  • Oral papers: 90 in total (3.3%). Authors present their work in a 15-minute talk.
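As a quick sanity check, the percentages quoted above can be recomputed from the raw counts. This is a minimal sketch using only the numbers stated in this post:

```python
# Sanity-check the CVPR 2024 acceptance statistics quoted above.
submissions = 11532
accepted = 2719
poster, highlight, oral = 2305, 324, 90

def pct(part: int, whole: int) -> float:
    """Percentage of `part` within `whole`, rounded to one decimal place."""
    return round(part / whole * 100, 1)

print(pct(accepted, submissions))   # acceptance rate: 23.6
print(pct(poster, accepted))        # poster share:    84.8
print(pct(highlight, accepted))     # highlight share: 11.9
print(pct(oral, accepted))          # oral share:      3.3

# The three presentation formats account for every accepted paper.
assert poster + highlight + oral == accepted
```

All four percentages match the figures reported above, and the three categories sum exactly to the accepted-paper count.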

The papers accepted in the super-resolution area are summarized below; corrections for any omissions are welcome.

Image Super-Resolution

  1. Self-Adaptive Reality-Guided Diffusion for Artifact-Free Super-Resolution
  2. Text-guided Explorable Image Super-resolution
  3. Training Generative Image Super-Resolution Models by Wavelet-Domain Losses Enables Better Control of Artifacts
  4. CFAT: Unleashing Triangular Windows for Image Super-resolution
  5. AdaBM: On-the-Fly Adaptive Bit Mapping for Image Super-Resolution
  6. Transcending the Limit of Local Window: Advanced Super-Resolution Transformer with Adaptive Token Dictionary
  7. SinSR: Diffusion-Based Image Super-Resolution in a Single Step
  8. Image Processing GNN: Breaking Rigidity in Super-Resolution
    • Paper:
    • Code:
    • Keywords: GNN
  9. Low-Res Leads the Way: Improving Generalization for Super-Resolution by Self-Supervised Learning
  10. Boosting Flow-based Generative Super-Resolution Models via Learned Prior
  11. Learning Coupled Dictionaries from Unpaired Data for Image Super-Resolution
  12. Navigating Beyond Dropout: An Intriguing Solution towards Generalizable Image Super-Resolution
  13. CoDi: Conditional Diffusion Distillation for Higher-Fidelity and Faster Image Generation

Blind SR / Real-World / Reference-Based

  1. A Dynamic Kernel Prior Model for Unsupervised Blind Image Super-Resolution
  2. CDFormer: When Degradation Prediction Embraces Diffusion Model for Blind Image Super-Resolution
    • Paper:
    • Code:
    • Keywords: Diffusion; Blind
  3. Diffusion-based Blind Text Image Super-Resolution
  4. SeD: Semantic-Aware Discriminator for Image Super-Resolution
  5. Uncertainty-Aware Source-Free Adaptive Image Super-Resolution with Wavelet Augmentation Transformer
  6. Universal Robustness via Median Random Smoothing for Real-World Super-Resolution
    • Paper:
    • Code:
    • Keywords: Real-World
  7. SeeSR: Towards Semantics-Aware Real-World Image Super-Resolution
  8. CoSeR: Bridging Image and Language for Cognitive Super-Resolution
  9. Building Bridges across Spatial and Temporal Resolutions: Reference-Based Super-Resolution via Change Priors and Conditional Diffusion Model

Video Super-Resolution

  1. FMA-Net: Flow Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring (Oral)
  2. Learning Spatial Adaptation and Temporal Coherence in Diffusion Models for Video Super-Resolution
  3. Enhancing Video Super-Resolution via Implicit Resampling-based Alignment (Highlight)
  4. Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution
  5. Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention

Datasets

  1. Continuous Optical Zooming: A Benchmark for Arbitrary-Scale Image Super-Resolution in Real World
    • Paper:
    • Code:
    • Keywords: Real World; Arbitrary-Scale

Special Scenarios

  1. Super-Resolution Reconstruction from Bayer-Pattern Spike Streams
  2. Rethinking Diffusion Model for Multi-Contrast MRI Super-Resolution
  3. Bilateral Event Mining and Complementary for Event Stream Super-Resolution
  4. APISR: Anime Production Inspired Real-World Anime Super-Resolution
  5. CycleINR: Cycle Implicit Neural Representation for Arbitrary-Scale Volumetric Super-Resolution of Medical Data
  6. Neural Super-Resolution for Real-time Rendering with Radiance Demodulation
  7. PFStorer: Personalized Face Restoration and Super-Resolution
  8. Learning Large-Factor EM Image Super-Resolution with Generative Priors

Summary

Judging from this year's accepted papers, diffusion models and the incorporation of semantic text information are the dominant trends. Plain, general-purpose super-resolution has all but disappeared; accepted work almost always targets a specific scenario or setting.
