Standard Reinforcement Learning

  • JMLR 2017: Non-parametric Policy Search with Limited Information Loss.

    Learning complex control policies from non-linear and redundant sensory input is an important challenge for reinforcement learning algorithms. Non-parametric methods that approximate value functions or transition models can address this problem by adapting to the complexity of the dataset. Yet, many current non-parametric approaches rely on unstable greedy maximization of approximate value functions, which might lead to poor convergence or oscillations in the policy update. A more robust policy update can be obtained by limiting the information loss between successive state-action distributions. In this paper, we develop a policy search algorithm with policy updates that are both robust and non-parametric. Our method can learn non-parametric control policies for infinite horizon continuous Markov decision processes with non-linear and redundant sensory representations.
    We investigate how we can use approximations of the kernel function to reduce the time requirements of the demanding non-parametric computations. In our experiments, we show the strong performance of the proposed method, and how it can be approximated efficiently. Finally, we show that our algorithm can learn a real-robot underpowered swing-up task directly from image data.
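    The kernel approximation mentioned in the abstract can be illustrated with random Fourier features, a standard way to replace an RBF kernel with an explicit finite-dimensional feature map so that Gram-matrix computations scale with the number of features rather than the number of samples. This is a generic sketch of the technique, not the paper's implementation; the function name and parameters below are illustrative.

    ```python
    import numpy as np

    def rff_features(X, n_features=200, bandwidth=1.0, seed=0):
        """Random Fourier features approximating the RBF kernel
        k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)).
        Illustrative sketch; names and defaults are not from the paper."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        # Frequencies drawn from the kernel's spectral density, plus random phases.
        W = rng.normal(scale=1.0 / bandwidth, size=(d, n_features))
        b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    # The Gram matrix of the explicit features approximates the exact kernel matrix,
    # and the approximation error shrinks as n_features grows.
    X = np.random.default_rng(1).normal(size=(50, 3))
    Z = rff_features(X, n_features=5000)
    K_approx = Z @ Z.T
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K_exact = np.exp(-sq_dists / 2.0)
    err = np.abs(K_approx - K_exact).max()  # small for large n_features
    ```

    With an explicit map like this, downstream operations that would otherwise require an n-by-n Gram matrix can work in the feature space directly, which is the kind of saving the abstract refers to.
    
    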

    • H. van Hoof, G. Neumann, and J. Peters, “Non-parametric policy search with limited information loss,” Journal of Machine Learning Research, vol. 18, iss. 73, pp. 1-46, 2018.

      @article{lirolem28020,
      month = {December},
      pages = {1--46},
      author = {Herke van Hoof and Gerhard Neumann and Jan Peters},
      publisher = {Journal of Machine Learning Research},
      title = {Non-parametric policy search with limited information loss},
      number = {73},
      volume = {18},
      year = {2018},
      journal = {Journal of Machine Learning Research},
      url = {http://eprints.lincoln.ac.uk/28020/},
      abstract = {Learning complex control policies from non-linear and redundant sensory input is an important
      challenge for reinforcement learning algorithms. Non-parametric methods that
      approximate value functions or transition models can address this problem by adapting
      to the complexity of the dataset. Yet, many current non-parametric approaches rely on
      unstable greedy maximization of approximate value functions, which might lead to poor
      convergence or oscillations in the policy update. A more robust policy update can be obtained
      by limiting the information loss between successive state-action distributions. In this
      paper, we develop a policy search algorithm with policy updates that are both robust and
      non-parametric. Our method can learn non-parametric control policies for infinite horizon
      continuous Markov decision processes with non-linear and redundant sensory representations.
      We investigate how we can use approximations of the kernel function to reduce the
      time requirements of the demanding non-parametric computations. In our experiments, we
      show the strong performance of the proposed method, and how it can be approximated
      efficiently. Finally, we show that our algorithm can learn a real-robot underpowered swing-up
      task directly from image data.}
      }