A roundup of recent CVPR / ICCV / ECCV papers on camera pose estimation, visual localization, and SLAM

CVPR-2018

1. CodeSLAM: estimates depth for keyframes in monocular SLAM; a network learns a compact, optimisable code representation of the depth map from a single image
2. MapNet: an allocentric (world-centred) spatial memory for mapping environments, built on an RNN; also supports relocalization
3. P2P-flyingcamera: solves the Perspective-2-Point (P2P) problem for flying-camera photo composition
4. Unknown-Principal-Point: camera pose estimation when the principal point is unknown
5. GeoNet: unsupervised learning of monocular depth, together with optical flow and camera pose, from monocular video
6. Nonminimal-Global-Optimal-Solution: a certifiably globally optimal solution to the non-minimal relative pose problem
7. HybridPoseEstimation: hybrid pose estimation combining 2D-3D and 2D-2D matches
8. PolarimetricSLAM: dense monocular SLAM with a polarimetric camera
9. ICE-BA: a bundle adjustment algorithm for visual-inertial SLAM
10. Geometric-MapNet: self-supervised, geometry-aware learning of maps for camera localization, exploiting geometric constraints between images
11. SingleCameraLocalization: given a 3D model of the buildings and a single image, a CNN predicts the pose from which the photo was taken
12. DeLS-3D: multi-sensor fusion; GPS/IMU gives a coarse camera pose used to render a 3D semantic map, the resulting label map and the image are fed to a CNN for a coarse pose, an RNN refines it, and finally the pose and image go to a segmentation CNN that outputs pixel-level semantic labels
13. Semantic-Localization: a generative model for descriptor learning that encodes both 3D geometry and semantics, used for visual localization
14. InLoc: dense feature extraction and matching for indoor camera localization
15. BenchmarkLocalization: a benchmark for camera localization under drastic condition changes of the same scene
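
Several entries above (the non-minimal relative pose solver and the hybrid matching work, for instance) rest on the epipolar constraint x2ᵀ E x1 = 0 with the essential matrix E = [t]× R. A minimal numpy sketch of that constraint, using made-up poses and points rather than anything from the papers:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def essential_matrix(R, t):
    """E = [t]x R relates normalized image points: x2^T E x1 = 0."""
    return skew(t) @ R

# Ground-truth relative pose between two cameras (illustrative values)
angle = 0.1
R = np.array([[np.cos(angle), -np.sin(angle), 0],
              [np.sin(angle),  np.cos(angle), 0],
              [0, 0, 1]])
t = np.array([1.0, 0.2, 0.0])

E = essential_matrix(R, t)

# A 3D point seen by both cameras: X2 = R @ X1 + t
X1 = np.array([0.5, -0.3, 4.0])
X2 = R @ X1 + t

# Normalized image coordinates (projection onto the z = 1 plane)
x1 = X1 / X1[2]
x2 = X2 / X2[2]

residual = x2 @ E @ x1   # vanishes for a consistent (R, t)
print(abs(residual) < 1e-9)
```

Relative pose solvers run this logic in reverse: given enough point pairs (x1, x2), they recover the (R, t) that makes all such residuals small.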

references

[1]Bloesch M, Czarnowski J, Clark R, et al. CodeSLAM: Learning a Compact, Optimisable Representation for Dense Visual SLAM[J]. arXiv preprint arXiv:1804.00874, 2018.
[2]Henriques J F, Vedaldi A. MapNet: An allocentric spatial memory for mapping environments[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8476-8484.
[3]Lan Z, Hsu D, Lee G H. Solving the Perspective-2-Point Problem for Flying-Camera Photo Composition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 4588-4596.
[4]Larsson V, Kukelova Z, Zheng Y. Camera Pose Estimation With Unknown Principal Point[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 2984-2992.
[5]Yin Z, Shi J. GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2018.
[6]Briales J, Kneip L, Gonzalez-Jimenez J. A Certifiably Globally Optimal Solution to the Non-Minimal Relative Pose Problem[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 145-154.
[7]Camposeco F, Cohen A, Pollefeys M, et al. Hybrid Camera Pose Estimation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 136-144.
[8]Yang L, Tan F, Li A, et al. Polarimetric Dense Monocular SLAM[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 3857-3866.
[9]Liu H, Chen M, Zhang G, et al. ICE-BA: Incremental, Consistent and Efficient Bundle Adjustment for Visual-Inertial SLAM[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 1974-1982.
[10]Brahmbhatt S, Gu J, Kim K, et al. Geometry-Aware Learning of Maps for Camera Localization[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 2616-2625.
[11]Brachmann E, Rother C. Learning Less Is More: 6D Camera Localization via 3D Surface Regression[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
[12]Wang P, Yang R, Cao B, et al. Dels-3d: Deep localization and segmentation with a 3d semantic map[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 5860-5869.
[13]Schönberger J L, Pollefeys M, Geiger A, et al. Semantic Visual Localization[J]. ISPRS Journal of Photogrammetry and Remote Sensing (JPRS), 2018.
[14]Taira H, Okutomi M, Sattler T, et al. InLoc: Indoor Visual Localization with Dense Matching and View Synthesis[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 7199-7209.
[15]Sattler T, Maddern W, Toft C, et al. Benchmarking 6dof outdoor visual localization in changing conditions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8601-8610.


CVPR-2017

1. NID-SLAM: monocular SLAM using the Normalised Information Distance (NID) metric, avoiding the sensitivity of photometric metrics to illumination, weather, and structural scene changes
2. CNN-SLAM: direct monocular SLAM that fuses CNN-predicted depth with measured depth
3. MistyThreePoints: relative camera pose from three points in underwater (misty) images
4. RegressionForests: camera relocalization using pre-trained regression forests adapted on the fly
5. RankConstraintFMatrix: a rank constraint on multi-view fundamental matrices, applied to camera location recovery
6. GeometricLossLocalization: deep learning for camera pose regression with a geometric reprojection-error loss
7. EventVIO: a VIO algorithm for event cameras in an EKF framework
8. 3D-ModelsAreNotNecessary: visual localization without a high-precision 3D model; an image database plus local 3D reconstructions suffice
9. ContextualFeatureReweight: image geo-localization, i.e. recovering the geographic location where a photo was taken (not the same as full pose); a contextual reweighting network predicts which parts of the image matter most
10. Cross-View-ImageMatching: matching images across very different viewpoints for image geo-localization
11. TwoPointsLocalization: localizing a query image in a 3D scene as a 2D-3D matching problem; two point correspondences constrain the camera position to a torus, and adding a direction of triangulation approximately pins down the camera position
12. DSAC: camera localization; replaces the deterministic hypothesis selection in RANSAC with probabilistic selection, yielding what the authors call a differentiable counterpart of RANSAC
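
DSAC's core trick (entry 12) is to swap RANSAC's hard argmax hypothesis selection for a soft, probabilistic one. A toy numpy sketch on 2D line fitting; the data, scoring kernel, and softmax readout here are illustrative choices, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: points on the line y = 2x + 1, plus gross outliers
n_in, n_out = 80, 20
x = rng.uniform(-1, 1, n_in + n_out)
y = 2 * x + 1 + rng.normal(0, 0.01, n_in + n_out)
y[n_in:] = rng.uniform(-5, 5, n_out)      # corrupt the last points

def fit_line(i, j):
    """Hypothesis from a minimal set of two points: slope and intercept."""
    a = (y[j] - y[i]) / (x[j] - x[i])
    return a, y[i] - a * x[i]

def score(a, b, tau=0.05):
    """Soft inlier count: residuals mapped through a smooth kernel."""
    r = np.abs(y - (a * x + b))
    return np.sum(1.0 / (1.0 + (r / tau) ** 2))

# Sample minimal-set hypotheses as in RANSAC
hyps, scores = [], []
for _ in range(64):
    i, j = rng.choice(len(x), size=2, replace=False)
    if abs(x[i] - x[j]) < 1e-6:
        continue
    h = fit_line(i, j)
    hyps.append(h)
    scores.append(score(*h))

# DSAC's key idea: replace RANSAC's argmax selection with a softmax
# distribution over hypotheses, making selection probabilistic (and
# differentiable, so a scoring network can be trained end to end).
s = np.array(scores)
p = np.exp(s - s.max())
p /= p.sum()

# Expected model under the hypothesis distribution (a smooth readout)
a_est = sum(pi * h[0] for pi, h in zip(p, hyps))
b_est = sum(pi * h[1] for pi, h in zip(p, hyps))
print(a_est, b_est)   # roughly 2 and 1 on this toy data
```

Because the selection is a distribution rather than a hard pick, gradients can flow through the scoring function, which is what makes the method trainable end to end.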

references

[1]Pascoe G, Maddern W, Tanner M, et al. NID-SLAM: Robust Monocular SLAM Using Normalised Information Distance[C]//Conference on Computer Vision and Pattern Recognition. 2017.
[2]Tateno K, Tombari F, Laina I, et al. CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017.
[3]Palmér T, Åström K, Frahm J M. The Misty Three Point Algorithm for Relative Pose[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 2786-2794.
[4]Cavallari T, Golodetz S, Lord N A, et al. On-the-fly adaptation of regression forests for online camera relocalisation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.
[5]Sengupta S, Amir T, Galun M, et al. A New Rank Constraint on Multi-view Fundamental Matrices, and Its Application to Camera Location Recovery[C]//CVPR. 2017: 2413-2421.
[6]Kendall A, Cipolla R. Geometric loss functions for camera pose regression with deep learning[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017: 6555-6564.
[7]Zhu A Z, Atanasov N, Daniilidis K. Event-Based Visual Inertial Odometry[C]//CVPR. 2017: 5816-5824.
[8]Sattler T, Torii A, Sivic J, et al. Are large-scale 3D models really necessary for accurate visual localization?[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017: 6175-6184.
[9]Kim H J, Dunn E, Frahm J M. Learned Contextual Feature Reweighting for Image Geo-Localization[C]//CVPR. 2017: 3251-3260.
[10]Tian Y, Chen C, Shah M. Cross-view image matching for geo-localization in urban environments[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017: 1998-2006.
[11]Camposeco F, Sattler T, Cohen A, et al. Toroidal constraints for two-point localization under high outlier ratios[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017: 6700-6708.
[12]Brachmann E, Krull A, Nowozin S, et al. DSAC: Differentiable RANSAC for camera localization[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017: 2492-2500.


ICCV-2017

1. StereoDSO: the DSO algorithm extended to stereo cameras
2. VO-PPA: visual odometry on a pixel processor array
3. ScaleRecovery: estimates depth with deep convolutional neural fields and uses it to recover metric scale in monocular VO
4. SpaceTimeLocalizationMapping: maps dynamic scenes; introduces a generative probabilistic model over 4D structures that accounts for position as well as spatial and temporal extent
5. Global2D-3DMatching: a global 2D-3D matching algorithm for camera localization in a large-scale 3D map; builds a Markov network over the 3D map that considers not only visual similarity but also global consistency
6. InlierSetMaximization: correspondence between a single image and a 3D scene; proposes a globally optimal inlier set cardinality maximisation that jointly estimates the optimal camera pose and the optimal point correspondences, using branch-and-bound (BnB) search over the 6D pose space, similar to the Go-ICP algorithm published in T-PAMI
7. DistributedOptimizationBA: distributed bundle adjustment for large-scale SfM, deriving a distributed formulation from the classical ADMM optimization algorithm
8. P4PfrMinimalSolvers: a minimal solver for the P4Pfr problem
9. EdgeSLAM: detects edge points in the image, tracks them with optical flow, and uses three-view geometry to refine the point correspondences
10. DepthPredictions: monocular SLAM combining CNN depth prediction with sparse point tracking; a 3D mesh map representation is updated with as-rigid-as-possible transformations
11. IntegerArithmetic: a square-root filter built on EKF SfM that can replace floating-point operations with integer arithmetic
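
Entry 5 improves on plain per-descriptor 2D-3D matching by adding global consistency; the baseline it starts from, nearest-neighbour descriptor matching with Lowe's ratio test, can be sketched as follows (synthetic random descriptors, not real features):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic descriptors: 3D map points and query-image features.
# Each query descriptor is a noisy copy of one map descriptor (made-up data).
map_desc = rng.normal(size=(50, 32))
map_desc /= np.linalg.norm(map_desc, axis=1, keepdims=True)
truth = rng.choice(50, size=10, replace=False)
query_desc = map_desc[truth] + rng.normal(scale=0.05, size=(10, 32))
query_desc /= np.linalg.norm(query_desc, axis=1, keepdims=True)

def match_ratio_test(q, db, ratio=0.8):
    """2D-3D matching: accept a nearest neighbour only if it is clearly
    better than the second nearest (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(q):
        dist = np.linalg.norm(db - d, axis=1)
        j1, j2 = np.argsort(dist)[:2]
        if dist[j1] < ratio * dist[j2]:
            matches.append((i, j1))
    return matches

matches = match_ratio_test(query_desc, map_desc)
correct = sum(truth[i] == j for i, j in matches)
print(len(matches), correct)
```

The ratio test rejects ambiguous matches but looks at each feature in isolation; the Markov-network formulation in entry 5 instead scores whole sets of matches for mutual geometric consistency.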

references

[1]Wang R, Schworer M, Cremers D. Stereo dso: Large-scale direct sparse visual odometry with stereo cameras[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 3903-3911.
[2]Bose L, Chen J, Carey S J, et al. Visual Odometry for Pixel Processor Arrays[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 4604-4612.
[3]Yin X, Wang X, Du X, et al. Scale Recovery for Monocular Visual Odometry Using Depth Estimated with Deep Convolutional Neural Fields[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 5870-5878.
[4]Lee M, Fowlkes C C. Space-Time Localization and Mapping[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 3912-3921.
[5]Liu L, Li H, Dai Y. Efficient global 2d-3d matching for camera localization in a large-scale 3d map[C]//Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE, 2017: 2391-2400.
[6]Campbell D, Petersson L, Kneip L, et al. Globally-optimal inlier set maximisation for simultaneous camera pose and feature correspondence[C]//The IEEE International Conference on Computer Vision (ICCV). 2017.
[7]Zhang R, Zhu S, Fang T, et al. Distributed very large scale bundle adjustment by global camera consensus[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 29-38.
[8]Larsson V, Kukelova Z, Zheng Y. Making minimal solvers for absolute pose estimation compact and robust[C]//2017 IEEE International Conference on Computer Vision (ICCV). IEEE, 2017: 2335-2343.
[9]Maity S, Saha A, Bhowmick B. Edge SLAM: Edge Points Based Monocular Visual SLAM[C]//ICCV Workshops. 2017: 2408-2417.
[10]Mukasa T, Xu J, Stenger B. 3D Scene Mesh from CNN Depth Predictions and Sparse Monocular SLAM[C]//Computer Vision Workshop (ICCVW), 2017 IEEE International Conference on. IEEE, 2017: 912-919.
[11]Ahuja N A, Subedar M, Tickoo O, et al. A Factorization Approach for Enabling Structure-from-Motion/SLAM Using Integer Arithmetic[C]//Computer Vision Workshop (ICCVW), 2017 IEEE International Conference on. IEEE, 2017: 554-562.


ECCV-2018

1. SemanticMatch: visual localization using semantic information for matching
2. EventSemi-Dense: semi-dense 3D reconstruction with a stereo event camera
3. TimeOffset: models a time-varying camera-IMU time offset within an optimization-based VIO algorithm
4. GoodLineCutting: a method for extracting the most informative sub-segments of lines; studies how line cutting affects the information gain of pose estimation in line-based least-squares problems
5. Shape-from-Template: rolling shutter distortion can be interpreted as a virtual deformation of a template captured by a global shutter camera; in analogy to Shape-from-Template, local differential constraints are proposed
6. PointsLinesMinimalSolution: a minimal solver using both points and lines, with a closed-form solution
7. VSO: uses semantic information to achieve medium-term point tracking; frame-to-frame tracking is short-term, and loop closure is long-term
8. RollingShutterDSO: the DSO algorithm for rolling shutter cameras
9. DeepTAM: both keyframe-based dense camera tracking and depth map estimation are learned; a network estimates small pose increments between the current image and a synthesized viewpoint, and generating many pose hypotheses yields more accurate predictions; the mapping stage likewise uses learned depth prediction
10. DeepDSO: the DSO algorithm with learned depth prediction
11. ADVIO: an authentic dataset for VIO
12. LinearRGBDSLAM: RGB-D SLAM in a linear Kalman filter framework; rotation is nonlinear, but the structural regularity of Manhattan worlds allows the remaining estimation to be kept linear
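
Entry 12's appeal is that once rotation is handled via Manhattan-world structure, the remaining state estimation fits a plain linear Kalman filter, with no EKF linearization. A toy 1D constant-velocity filter illustrating those standard linear KF equations (all models and noise levels are made up, not the paper's):

```python
import numpy as np

# Constant-velocity model for camera translation along one axis.
# State: [position, velocity]; measurement: noisy position (e.g. from depth).
dt = 0.1
F = np.array([[1, dt], [0, 1]])      # state transition
H = np.array([[1.0, 0.0]])           # we observe position only
Q = 1e-4 * np.eye(2)                 # process noise covariance
R = np.array([[1e-2]])               # measurement noise covariance

x = np.zeros(2)                      # initial state estimate
P = np.eye(2)                        # initial covariance

rng = np.random.default_rng(2)
true_pos, true_vel = 0.0, 1.0
for k in range(100):
    true_pos += true_vel * dt
    z = true_pos + rng.normal(scale=0.1)   # noisy position measurement

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q

    # Update (standard linear KF equations -- no linearization needed)
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(x)   # estimates approach true position 10.0 and velocity 1.0
```

Because every step above is a matrix product, the filter is fast and its covariance stays consistent, which is exactly the property the linear RGB-D SLAM formulation is after.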

references

[1]Toft C, Stenborg E, Hammarstrand L, et al. Semantic match consistency for long-term visual localization[C]//European Conference on Computer Vision. Springer, Cham, 2018: 391-408.
[2]Zhou Y, Gallego G, Rebecq H, et al. Semi-dense 3d reconstruction with a stereo event camera[C]//European Conference on Computer Vision. Springer, Cham, 2018: 242-258.
[3]Ling Y, Bao L, Jie Z, et al. Modeling Varying Camera-IMU Time Offset in Optimization-Based Visual-Inertial Odometry[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 484-500.
[4]Zhao Y, Vela P A. Good Line Cutting: Towards Accurate Pose Tracking of Line-Assisted VO/VSLAM[C]//European Conference on Computer Vision. Springer, Cham, 2018: 527-543.
[5]Lao Y, Ait-Aider O, Bartoli A. Rolling Shutter Pose and Ego-motion Estimation using Shape-from-Template[C]//European Conference on Computer Vision. Springer, Cham, 2018: 477-492.
[6]Miraldo P, Dias T, Ramalingam S. A Minimal Closed-Form Solution for Multi-Perspective Pose Estimation using Points and Lines[C]//European Conference on Computer Vision. Springer, Cham, 2018: 490-507.
[7]Lianos K N, Schonberger J L, Pollefeys M, et al. VSO: Visual Semantic Odometry[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 234-250.
[8]Schubert D, Demmel N, Usenko V, et al. Direct Sparse Odometry with Rolling Shutter[C]//European Conference on Computer Vision. Springer, Cham, 2018: 699-714.
[9]Zhou H, Ummenhofer B, Brox T. Deeptam: Deep tracking and mapping[C]//European Conference on Computer Vision. Springer, Cham, 2018: 851-868.
[10]Yang N, Wang R, Stückler J, et al. Deep virtual stereo odometry: Leveraging deep depth prediction for monocular direct sparse odometry[C]//European Conference on Computer Vision. Springer, Cham, 2018: 835-852.
[11]Cortés S, Solin A, Rahtu E, et al. ADVIO: An authentic dataset for visual-inertial odometry[C]//European Conference on Computer Vision. Springer, Cham, 2018: 425-440.
[12]Kim P, Coltin B, Jin Kim H. Linear RGB-D SLAM for Planar Environments[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 333-348.
