Reinforcement learning is currently a popular research direction. Classifying the different reinforcement learning methods and their papers helps us understand which method is appropriate for which application scenario. This article categorizes reinforcement learning algorithms and lists the corresponding papers. All citation counts below are Google Scholar figures at the time of writing.
1. Model-Free RL
a. Deep Q-Learning family
Algorithm: DQN
Paper: Playing Atari with Deep Reinforcement Learning
Venue: NIPS Deep Learning Workshop, 2013
Link: https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf
Google Scholar citations: 5942
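Since several later entries in this list are small modifications of the DQN update, a minimal sketch of that update is a useful reference point. The network sizes, hyperparameters, and replay-sampled tensor shapes below are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of the core DQN update (shapes/hyperparameters illustrative).
import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 4, 2, 0.99
q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())  # periodically re-synced frozen copy
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(s, a, r, s_next, done):
    # s, s_next: (B, obs_dim); a: (B,) long; r, done: (B,) float
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a)
    with torch.no_grad():
        # Bellman target: r + gamma * max_a' Q_target(s', a')
        target = r + gamma * (1 - done) * target_net(s_next).max(1).values
    loss = nn.functional.smooth_l1_loss(q_sa, target)      # Huber-style TD loss
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```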
Algorithm: DRQN (Deep Recurrent Q-Learning)
Paper: Deep Recurrent Q-Learning for Partially Observable MDPs
Venue: AAAI Fall Symposium Series, 2015
Link: https://arxiv.org/abs/1507.06527
Google Scholar citations: 877
Algorithm: Dueling DQN
Paper: Dueling Network Architectures for Deep Reinforcement Learning
Venue: ICML, 2016
Link: https://arxiv.org/abs/1511.06581
Google Scholar citations: 1728
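The dueling architecture changes only the network head: Q is decomposed into a state value V(s) and per-action advantages A(s, a). A minimal sketch (layer sizes are illustrative; the mean subtraction is the aggregation the paper recommends for identifiability):

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, obs_dim=4, n_actions=2, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantages A(s, a)

    def forward(self, s):
        h = self.trunk(s)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A separately identifiable.
        return v + a - a.mean(dim=1, keepdim=True)
```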
Algorithm: Double DQN
Paper: Deep Reinforcement Learning with Double Q-learning
Venue: AAAI, 2016
Link: https://arxiv.org/abs/1509.06461
Google Scholar citations: 3213
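Relative to the DQN sketch above, Double DQN changes only the target: the online network selects the next action and the target network evaluates it. A sketch reusing the notation of the DQN example (the networks are assumed, not from the paper's code):

```python
import torch

def double_dqn_target(q_net, target_net, r, s_next, done, gamma=0.99):
    # The online net picks argmax_a' Q(s', a'); the target net evaluates
    # that action. Decoupling selection from evaluation reduces the
    # max-operator overestimation bias of vanilla DQN.
    with torch.no_grad():
        best_a = q_net(s_next).argmax(dim=1, keepdim=True)
        q_eval = target_net(s_next).gather(1, best_a).squeeze(1)
        return r + gamma * (1 - done) * q_eval
```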
Algorithm: Prioritized Experience Replay (PER)
Paper: Prioritized Experience Replay
Venue: ICLR, 2016
Link: https://arxiv.org/abs/1511.05952
Google Scholar citations: 1914
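The core of proportional PER fits in a few lines. The sketch below uses a flat probability array for clarity; the paper uses a sum-tree so that sampling is O(log N). alpha and beta are the paper's symbols, while the small priority floor is an illustrative choice:

```python
import numpy as np

def per_sample(priorities, batch_size, alpha=0.6, beta=0.4):
    """Proportional prioritized sampling with importance-sampling weights."""
    probs = priorities ** alpha
    probs /= probs.sum()                       # P(i) proportional to p_i^alpha
    idx = np.random.choice(len(priorities), batch_size, p=probs)
    # IS weights correct the bias introduced by non-uniform sampling.
    weights = (len(priorities) * probs[idx]) ** (-beta)
    weights /= weights.max()                   # normalize for stability
    return idx, weights

# After computing TD errors for the sampled transitions:
# priorities[idx] = np.abs(td_errors) + 1e-6   # keep priorities strictly positive
```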
Algorithm: Rainbow DQN
Paper: Rainbow: Combining Improvements in Deep Reinforcement Learning
Venue: AAAI, 2018
Link: https://arxiv.org/abs/1710.02298
Google Scholar citations: 903
b. Policy Gradients family
Algorithm: A3C
Paper: Asynchronous Methods for Deep Reinforcement Learning
Venue: ICML, 2016
Link: https://arxiv.org/abs/1602.01783
Google Scholar citations: 4739
Algorithm: TRPO
Paper: Trust Region Policy Optimization
Venue: ICML, 2015
Link: https://arxiv.org/abs/1502.05477
Google Scholar citations: 3357
Algorithm: GAE
Paper: High-Dimensional Continuous Control Using Generalized Advantage Estimation
Venue: ICLR, 2016
Link: https://arxiv.org/abs/1506.02438
Google Scholar citations: 1264
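GAE reduces to a single backward recursion over a trajectory. A minimal sketch; the `values` array is assumed to hold T+1 entries, with a bootstrap value for the final state appended:

```python
import numpy as np

def gae(rewards, values, dones, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation via the backward recursion
    A_t = delta_t + gamma*lam*(1 - done_t)*A_{t+1},
    where delta_t = r_t + gamma*V(s_{t+1})*(1 - done_t) - V(s_t)."""
    T = len(rewards)
    adv = np.zeros(T)
    last = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] * (1 - dones[t]) - values[t]
        last = delta + gamma * lam * (1 - dones[t]) * last
        adv[t] = last
    return adv  # lambda=1 recovers Monte Carlo advantages; lambda=0 the 1-step TD error
```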
Algorithm: PPO-Clip, PPO-Penalty
Paper: Proximal Policy Optimization Algorithms
Venue: arXiv preprint, 2017
Link: https://arxiv.org/abs/1707.06347
Google Scholar citations: 4059
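The PPO-Clip objective itself is only a few lines. A sketch under the assumption that per-step log-probabilities and advantages have already been computed:

```python
import torch

def ppo_clip_loss(logp_new, logp_old, adv, eps=0.2):
    """PPO-Clip surrogate: maximize E[min(r_t * A_t, clip(r_t, 1-eps, 1+eps) * A_t)]
    where r_t = pi_new(a_t|s_t) / pi_old(a_t|s_t) is the probability ratio."""
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * adv
    # The min removes the incentive to move the ratio outside [1-eps, 1+eps].
    return -torch.min(unclipped, clipped).mean()  # negated: optimizers minimize
```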
Algorithm: PPO-Penalty
Paper: Emergence of Locomotion Behaviours in Rich Environments
Venue: arXiv preprint, 2017
Link: https://arxiv.org/abs/1707.02286
Google Scholar citations: 528
Algorithm: ACKTR
Paper: Scalable Trust-Region Method for Deep Reinforcement Learning Using Kronecker-Factored Approximation
Venue: NIPS, 2017
Link: https://arxiv.org/abs/1708.05144
Google Scholar citations: 408
Algorithm: ACER
Paper: Sample Efficient Actor-Critic with Experience Replay
Venue: ICLR, 2017
Link: https://arxiv.org/abs/1611.01224
Google Scholar citations: 486
Algorithm: SAC
Paper: Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
Venue: ICML, 2018
Link: https://arxiv.org/abs/1801.01290
Google Scholar citations: 1447
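SAC's key change to the critic is the soft Bellman target: an entropy bonus is folded into the backup and twin critics are min-combined. A sketch assuming `next_q1`, `next_q2`, and `next_logp` were computed for an action freshly sampled from the current policy at s' (variable names are illustrative):

```python
import torch

def sac_critic_target(reward, done, next_q1, next_q2, next_logp,
                      alpha=0.2, gamma=0.99):
    """Soft Bellman backup: the entropy bonus -alpha * log pi(a'|s') is part
    of the target, and the min over twin critics curbs overestimation."""
    with torch.no_grad():
        soft_v = torch.min(next_q1, next_q2) - alpha * next_logp
        return reward + gamma * (1 - done) * soft_v
```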
c. Deterministic Policy Gradients family
Algorithm: DPG
Paper: Deterministic Policy Gradient Algorithms
Venue: ICML, 2014
Link: http://proceedings.mlr.press/v32/silver14.pdf
Google Scholar citations: 1991
Algorithm: DDPG
Paper: Continuous Control with Deep Reinforcement Learning
Venue: ICLR, 2016
Link: https://arxiv.org/abs/1509.02971
Google Scholar citations: 5539
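Two DDPG ingredients are worth illustrating: Polyak-averaged target networks and the deterministic actor update that ascends Q(s, mu(s)). A sketch assuming a `critic(s, a)` module and an `actor(s)` module (both assumptions, not the paper's code):

```python
import torch

def polyak_update(target_net, online_net, tau=0.005):
    """Soft target update: theta_target <- tau*theta + (1-tau)*theta_target,
    applied by DDPG to both the target actor and the target critic."""
    with torch.no_grad():
        for p_t, p in zip(target_net.parameters(), online_net.parameters()):
            p_t.mul_(1 - tau).add_(tau * p)

def ddpg_actor_loss(critic, actor, s):
    # Deterministic policy gradient: ascend Q(s, mu(s)) w.r.t. actor params.
    return -critic(s, actor(s)).mean()
```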
Algorithm: TD3
Paper: Addressing Function Approximation Error in Actor-Critic Methods
Venue: ICML, 2018
Link: https://arxiv.org/abs/1802.09477
Google Scholar citations: 839
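TD3's target computation combines twin critics with target policy smoothing. A sketch with illustrative noise and action-bound parameters (the paper's defaults are similar but should be checked against the original):

```python
import torch

def td3_target(target_actor, target_q1, target_q2, r, s_next, done,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, act_limit=1.0):
    """TD3 target: clipped-noise target policy smoothing plus the minimum
    of twin target critics; both address critic overestimation."""
    with torch.no_grad():
        a_next = target_actor(s_next)
        noise = (torch.randn_like(a_next) * noise_std).clamp(-noise_clip, noise_clip)
        a_next = (a_next + noise).clamp(-act_limit, act_limit)
        q_min = torch.min(target_q1(s_next, a_next), target_q2(s_next, a_next))
        return r + gamma * (1 - done) * q_min
```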
d. Distributional RL family
Algorithm: C51
Paper: A Distributional Perspective on Reinforcement Learning
Venue: ICML, 2017
Link: https://arxiv.org/abs/1707.06887
Google Scholar citations: 600
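The distinctive step in C51 is projecting the shifted support r + gamma*z back onto the fixed atom grid. A readable, deliberately unvectorized sketch; `z` is the atom support array and `next_probs` the target network's distribution at s':

```python
import numpy as np

def project_categorical(next_probs, rewards, dones, z, gamma=0.99):
    """C51 projection of the shifted support onto the fixed atoms z
    (batch loop kept for clarity; practical code vectorizes this)."""
    v_min, v_max = z[0], z[-1]
    dz = z[1] - z[0]
    batch, n_atoms = next_probs.shape
    proj = np.zeros_like(next_probs)
    tz = np.clip(rewards[:, None] + gamma * (1 - dones[:, None]) * z[None, :],
                 v_min, v_max)
    b = (tz - v_min) / dz                        # fractional atom index
    lo, hi = np.floor(b).astype(int), np.ceil(b).astype(int)
    for i in range(batch):
        for j in range(n_atoms):
            if lo[i, j] == hi[i, j]:             # lands exactly on an atom
                proj[i, lo[i, j]] += next_probs[i, j]
            else:                                # split mass between neighbors
                proj[i, lo[i, j]] += next_probs[i, j] * (hi[i, j] - b[i, j])
                proj[i, hi[i, j]] += next_probs[i, j] * (b[i, j] - lo[i, j])
    return proj  # training minimizes cross-entropy against this projection
```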
Algorithm: QR-DQN
Paper: Distributional Reinforcement Learning with Quantile Regression
Venue: AAAI, 2018
Link: https://arxiv.org/abs/1710.10044
Google Scholar citations: 188
Algorithm: IQN
Paper: Implicit Quantile Networks for Distributional Reinforcement Learning
Venue: ICML, 2018
Link: https://arxiv.org/abs/1806.06923
Google Scholar citations: 139
Algorithm: Dopamine (a research framework rather than a single algorithm)
Paper: Dopamine: A Research Framework for Deep Reinforcement Learning
Venue: ICLR, 2019
Link: https://openreview.net/forum?id=ByG_3s09KX
Google Scholar citations: 107
e. Policy Gradients with Action-Dependent Baselines family
Algorithm: Q-Prop
Paper: Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic
Venue: ICLR, 2017
Link: https://arxiv.org/abs/1611.02247
Google Scholar citations: 259
Algorithm: Stein Control Variates
Paper: Action-dependent Control Variates for Policy Optimization via Stein's Identity
Venue: ICLR, 2018
Link: https://arxiv.org/abs/1710.11198
Google Scholar citations: 46
Algorithm: N/A (analysis paper)
Paper: The Mirage of Action-Dependent Baselines in Reinforcement Learning
Venue: ICML, 2018
Link: https://arxiv.org/abs/1802.10031
Google Scholar citations: 66
f. Path-Consistency Learning family
Algorithm: PCL
Paper: Bridging the Gap Between Value and Policy Based Reinforcement Learning
Venue: NIPS, 2017
Link: https://arxiv.org/abs/1702.08892
Google Scholar citations: 223
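The quantity PCL regresses to zero is the d-step path consistency error. A sketch of that scalar for one sub-trajectory, under the entropy-regularized formulation of the paper (tau is the entropy temperature; `values` is assumed to hold d+1 value estimates):

```python
import numpy as np

def path_consistency_error(values, rewards, log_pis, gamma=0.99, tau=0.1):
    """d-step path consistency over a sub-trajectory s_0..s_d:
    C = -V(s_0) + gamma^d * V(s_d)
        + sum_t gamma^t * (r_t - tau * log pi(a_t|s_t)).
    PCL minimizes C^2 jointly over value and policy parameters."""
    d = len(rewards)
    discounts = gamma ** np.arange(d)
    return (-values[0] + gamma ** d * values[-1]
            + np.sum(discounts * (rewards - tau * log_pis)))
```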
Algorithm: Trust-PCL
Paper: Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Venue: ICLR, 2018
Link: https://arxiv.org/abs/1707.01891
Google Scholar citations: 68
g. Other Directions for Combining Policy Learning and Q-Learning
Algorithm: PGQL
Paper: Combining Policy Gradient and Q-learning
Venue: ICLR, 2017
Link: https://arxiv.org/abs/1611.01626
Google Scholar citations: 58
Algorithm: Reactor
Paper: The Reactor: A Fast and Sample-Efficient Actor-Critic Agent for Reinforcement Learning
Venue: ICLR, 2018
Link: https://arxiv.org/abs/1704.04651
Google Scholar citations: 42
Algorithm: IPG
Paper: Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning
Venue: NIPS, 2017
Link: http://papers.nips.cc/paper/6974-interpolated-policy-gradient-merging-on-policy-and-off-policy-gradient-estimation-for-deep-reinforcement-learning
Google Scholar citations: 117
Algorithm: N/A (analysis paper)
Paper: Equivalence Between Policy Gradients and Soft Q-Learning
Venue: arXiv preprint, 2017
Link: https://arxiv.org/abs/1704.06440
Google Scholar citations: 170
h. Evolutionary Algorithms
Algorithm: ES
Paper: Evolution Strategies as a Scalable Alternative to Reinforcement Learning
Venue: arXiv preprint, 2017
Link: https://arxiv.org/abs/1703.03864
Google Scholar citations: 802
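The ES estimator in this paper is simple enough to state in full. A sketch of one update with antithetic sampling and fitness shaping, both of which the paper uses; `fitness_fn`, the population size, and the learning rate here are illustrative assumptions:

```python
import numpy as np

def es_step(theta, fitness_fn, pop_size=50, sigma=0.1, lr=0.01, rng=np.random):
    """One ES update with the gradient estimate
    g ~= 1/(n*sigma) * sum_i F(theta + sigma*eps_i) * eps_i,
    using mirrored noise pairs to reduce variance."""
    eps = rng.randn(pop_size, theta.size)
    eps = np.concatenate([eps, -eps])                  # antithetic sampling
    returns = np.array([fitness_fn(theta + sigma * e) for e in eps])
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # fitness shaping
    grad = (eps.T @ returns) / (len(eps) * sigma)
    return theta + lr * grad

# Example: maximize a toy quadratic fitness.
# theta = es_step(np.zeros(3), lambda w: -np.sum((w - 1.0) ** 2))
```

Because only scalar returns and random seeds need to be communicated between workers, this estimator parallelizes almost perfectly, which is the paper's main selling point.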