Academic Papers
Research results published in top academic conferences and journals
2023
Policy Representation via Diffusion Probability Model for Reinforcement Learning
L Yang, Z Huang, F Lei, Y Zhong, Y Yang, C Fang, S Wen, B Zhou, Z Lin
Popular reinforcement learning (RL) algorithms tend to produce a unimodal policy distribution, which limits the expressiveness of complex policies and degrades exploration. The diffusion probability model is powerful at learning complicated multimodal distributions and has shown promise for RL. In this paper, we formally build a theoretical foundation for policy representation via the diffusion probability model and provide practical implementations of diffusion policies for online model-free RL. We propose DIPO, the first algorithm to solve model-free online RL problems with the diffusion model.
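To illustrate the idea behind a diffusion policy, here is a minimal sketch of drawing an action by DDPM-style reverse diffusion. The noise predictor `eps_model(state, a_t, t)` and the linear beta schedule are assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def sample_action(eps_model, state, T=50, action_dim=2, rng=None):
    """Sample an action by reverse diffusion, starting from Gaussian noise.

    eps_model(state, a_t, t) is a hypothetical trained noise predictor.
    """
    rng = np.random.default_rng() if rng is None else rng
    betas = np.linspace(1e-4, 0.02, T)          # assumed linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    a = rng.standard_normal(action_dim)         # start from pure noise a_T
    for t in reversed(range(T)):
        z = rng.standard_normal(action_dim) if t > 0 else 0.0
        eps = eps_model(state, a, t)            # predicted noise at step t
        # standard DDPM posterior-mean step
        a = (a - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        a = a + np.sqrt(betas[t]) * z           # inject noise except at final step
    return a
```

Because the chain starts from noise and denoises conditioned on the state, repeated calls can land in different modes of a multimodal action distribution, which is the expressiveness argument the abstract makes.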
2024
Langevin Policy for Safe Reinforcement Learning
F Lei, L Yang, S Wen, Z Huang, Z Zhang, C Pang
This paper formulates the Langevin policy for safe RL and proposes Langevin Actor-Critic (LAC) to accelerate policy inference. Instead of a parametric policy, the proposed Langevin policy is a stochastic process that infers actions directly, acting as a numerical solver for the continuous-time Langevin dynamics over actions. Extensive empirical results show the effectiveness and superiority of LAC on MuJoCo-based and Safety Gym tasks.
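The core mechanism can be sketched as unadjusted Langevin dynamics that push an action sample toward high critic values while injecting noise. The gradient oracle `grad_q(state, a)` (the critic's gradient with respect to the action) and the fixed step size are illustrative assumptions, not the paper's exact solver:

```python
import numpy as np

def langevin_action(grad_q, state, steps=100, step_size=1e-2, action_dim=2, rng=None):
    """Infer an action by Langevin dynamics targeting p(a|s) ∝ exp(Q(s, a)).

    grad_q(state, a) is a hypothetical gradient of the critic w.r.t. the action.
    """
    rng = np.random.default_rng() if rng is None else rng
    a = rng.standard_normal(action_dim)         # random initial action
    for _ in range(steps):
        noise = rng.standard_normal(action_dim)
        # Euler discretization of the Langevin SDE: drift up the Q-gradient plus noise
        a = a + 0.5 * step_size * grad_q(state, a) + np.sqrt(step_size) * noise
    return a
```

With enough steps and a small step size, the iterates approximately sample from the Boltzmann distribution over Q, so inference is performed by the stochastic process itself rather than by a parametric policy network.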