Movement Representations

We are developing new probabilistic movement primitive representations that can capture the variability of the demonstrations and have several other beneficial properties such as joint coordination, variable stiffness control and flexible adaptation of the movement. A non-exhaustive list of papers can be found below.

Papers:

  • IJRR 2017: Learning Movement Primitive Libraries through Probabilistic Segmentation

    Movement primitives are a well-established approach for encoding and executing movements. While the primitives themselves have been extensively researched, the concept of movement primitive libraries has not received similar attention. Libraries of movement primitives represent the skill set of an agent: primitives can be queried and sequenced in order to solve specific tasks. The goal of this work is to segment unlabeled demonstrations into a representative set of primitives. Our method differs from current approaches by taking advantage of the often neglected mutual dependencies between the segments contained in the demonstrations and the primitives to be encoded. By exploiting this mutual dependency, we show that we can improve both the segmentation and the movement primitive library. Based on probabilistic inference, our approach segments the demonstrations while learning a probabilistic representation of movement primitives. We demonstrate the method on two real-robot applications. First, the robot segments sequences of different letters into a library that explains the observed trajectories. Second, the robot segments demonstrations of a chair assembly task into a movement primitive library, which is subsequently used to assemble the chair in an order not present in the demonstrations. A minimal illustrative sketch of the alternation between segment assignment and library re-learning is given after this entry.

    • R. Lioutikov, G. Neumann, G. Maeda, and J. Peters, “Learning movement primitive libraries through probabilistic segmentation,” International Journal of Robotics Research (IJRR), vol. 36, iss. 8, pp. 879-894, 2017.
      [BibTeX] [Abstract] [Download PDF]

      Movement primitives are a well established approach for encoding and executing movements. While the primitives themselves have been extensively researched, the concept of movement primitive libraries has not received similar attention. Libraries of movement primitives represent the skill set of an agent. Primitives can be queried and sequenced in order to solve specific tasks. The goal of this work is to segment unlabeled demonstrations into a representative set of primitives. Our proposed method differs from current approaches by taking advantage of the often neglected, mutual dependencies between the segments contained in the demonstrations and the primitives to be encoded. By exploiting this mutual dependency, we show that we can improve both the segmentation and the movement primitive library. Based on probabilistic inference our novel approach segments the demonstrations while learning a probabilistic representation of movement primitives. We demonstrate our method on two real robot applications. First, the robot segments sequences of different letters into a library, explaining the observed trajectories. Second, the robot segments demonstrations of a chair assembly task into a movement primitive library. The library is subsequently used to assemble the chair in an order not present in the demonstrations.

      @article{lirolem28021,
      title = {Learning movement primitive libraries through probabilistic segmentation},
      journal = {International Journal of Robotics Research (IJRR)},
      pages = {879--894},
      author = {Rudolf Lioutikov and Gerhard Neumann and Guilherme Maeda and Jan Peters},
      volume = {36},
      year = {2017},
      number = {8},
      publisher = {SAGE},
      month = {July},
      abstract = {Movement primitives are a well established approach for encoding and executing movements. While the primitives
      themselves have been extensively researched, the concept of movement primitive libraries has not received similar
      attention. Libraries of movement primitives represent the skill set of an agent. Primitives can be queried and sequenced
      in order to solve specific tasks. The goal of this work is to segment unlabeled demonstrations into a representative
      set of primitives. Our proposed method differs from current approaches by taking advantage of the often neglected,
      mutual dependencies between the segments contained in the demonstrations and the primitives to be encoded. By
      exploiting this mutual dependency, we show that we can improve both the segmentation and the movement primitive
      library. Based on probabilistic inference our novel approach segments the demonstrations while learning a probabilistic
      representation of movement primitives. We demonstrate our method on two real robot applications. First, the robot
      segments sequences of different letters into a library, explaining the observed trajectories. Second, the robot segments
      demonstrations of a chair assembly task into a movement primitive library. The library is subsequently used to assemble the chair in an order not present in the demonstrations.},
      url = {http://eprints.lincoln.ac.uk/28021/}
      }
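
    The segmentation paper above interleaves two coupled problems: assigning segments of the demonstrations to primitives and re-estimating the primitives from their assigned segments. The following is a minimal, hypothetical sketch of that alternation (a hard-EM style loop over pre-cut segments, with simple radial-basis-function primitives); it is not the probabilistic inference used in the paper, and all function and variable names are illustrative.

    ```python
    # Hedged sketch: alternate between assigning candidate segments to primitives
    # and re-fitting each primitive, to illustrate the mutual dependency exploited
    # in the paper. Primitives are simple Gaussians over basis-function weights.
    import numpy as np

    def rbf_features(T=50, K=8, width=0.1):
        t = np.linspace(0, 1, T)
        c = np.linspace(0, 1, K)
        Phi = np.exp(-0.5 * ((t[:, None] - c[None, :]) / width) ** 2)
        return Phi / Phi.sum(axis=1, keepdims=True)            # (T, K)

    def fit_weights(segment, Phi):
        """Resample a 1-D segment to T points and ridge-regress its basis weights."""
        y = np.interp(np.linspace(0, 1, Phi.shape[0]),
                      np.linspace(0, 1, len(segment)), segment)
        return np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1]), Phi.T @ y)

    def learn_library(segments, n_primitives=2, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        Phi = rbf_features()
        W = np.array([fit_weights(s, Phi) for s in segments])   # (N, K) weight vectors
        labels = rng.integers(n_primitives, size=len(W))
        for _ in range(iters):
            # "M-step": re-learn each primitive's mean weights from its segments
            mus = np.array([W[labels == k].mean(axis=0) if np.any(labels == k)
                            else W[rng.integers(len(W))] for k in range(n_primitives)])
            # "E-step": reassign each segment to the primitive that explains it best
            dists = ((W[:, None, :] - mus[None, :, :]) ** 2).sum(axis=-1)
            labels = dists.argmin(axis=1)
        return mus, labels

    # Toy demo: noisy copies of two letter-like strokes (a ramp and a bump).
    t = np.linspace(0, 1, 60)
    segs = [t + 0.02 * np.random.randn(60) for _ in range(5)] + \
           [np.sin(np.pi * t) + 0.02 * np.random.randn(60) for _ in range(5)]
    _, labels = learn_library(segs)
    print("segment assignments:", labels)
    ```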

  • AuRo 2018: Using Probabilistic Movement Primitives in Robotics

    Movement Primitives are a well-established paradigm for modular movement representation and generation. They provide a data-driven representation of movements and support generalization to novel situations, temporal modulation, sequencing of primitives, and controllers for executing the primitive on physical systems. However, while many MP frameworks exhibit some of these properties, there is a need for a unified framework that implements all of them in a principled way. In this paper, we show that this goal can be achieved by using a probabilistic representation. Our approach models trajectory distributions learned from stochastic movements. Probabilistic operations, such as conditioning, can be used to achieve generalization to novel situations or to combine and blend movements in a principled way. We derive a stochastic feedback controller that reproduces the encoded variability of the movement and the coupling of the degrees of freedom of the robot. We evaluate and compare our approach on several simulated and real robot scenarios. A short sketch of the conditioning operation on a learned trajectory distribution is given after this entry.

    • A. Paraschos, C. Daniel, J. Peters, and G. Neumann, “Using probabilistic movement primitives in robotics,” Autonomous Robots, vol. 42, iss. 3, pp. 529-551, 2018.
      [BibTeX] [Abstract] [Download PDF]

      Movement Primitives are a well-established paradigm for modular movement representation and generation. They provide a data-driven representation of movements and support generalization to novel situations, temporal modulation, sequencing of primitives and controllers for executing the primitive on physical systems. However, while many MP frameworks exhibit some of these properties, there is a need for a unified framework that implements all of them in a principled way. In this paper, we show that this goal can be achieved by using a probabilistic representation. Our approach models trajectory distributions learned from stochastic movements. Probabilistic operations, such as conditioning can be used to achieve generalization to novel situations or to combine and blend movements in a principled way. We derive a stochastic feedback controller that reproduces the encoded variability of the movement and the coupling of the degrees of freedom of the robot. We evaluate and compare our approach on several simulated and real robot scenarios.

      @article{lirolem27883,
      author = {Alexandros Paraschos and Christian Daniel and Jan Peters and Gerhard Neumann},
      pages = {529--551},
      journal = {Autonomous Robots},
      title = {Using probabilistic movement primitives in robotics},
      month = {March},
      year = {2018},
      publisher = {Springer Verlag},
      number = {3},
      volume = {42},
      abstract = {Movement Primitives are a well-established
      paradigm for modular movement representation and
      generation. They provide a data-driven representation
      of movements and support generalization to novel situations,
      temporal modulation, sequencing of primitives
      and controllers for executing the primitive on physical
      systems. However, while many MP frameworks exhibit
      some of these properties, there is a need for a unified framework that implements all of them in a principled
      way. In this paper, we show that this goal can be
      achieved by using a probabilistic representation. Our
      approach models trajectory distributions learned from
      stochastic movements. Probabilistic operations, such as
      conditioning can be used to achieve generalization to
      novel situations or to combine and blend movements in
      a principled way. We derive a stochastic feedback controller
      that reproduces the encoded variability of the
      movement and the coupling of the degrees of freedom
      of the robot. We evaluate and compare our approach
      on several simulated and real robot scenarios.},
      url = {http://eprints.lincoln.ac.uk/27883/}
      }

    This paper is the extended journal version of

    • A. Paraschos, G. Neumann, and J. Peters, “A probabilistic approach to robot trajectory generation,” in 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2013, pp. 477-483.
      [BibTeX] [Abstract] [Download PDF]

      Motor Primitives (MPs) are a promising approach for the data-driven acquisition as well as for the modular and re-usable generation of movements. However, a modular control architecture with MPs is only effective if the MPs support co-activation as well as continuously blending the activation from one MP to the next. In addition, we need efficient mechanisms to adapt an MP to the current situation. Common approaches to movement primitives lack such capabilities or their implementation is based on heuristics. We present a probabilistic movement primitive approach that overcomes the limitations of existing approaches. We encode a primitive as a probability distribution over trajectories. The representation as a distribution has several beneficial properties. It allows encoding a time-varying variance profile. Most importantly, it allows performing new operations: a product of distributions for the co-activation of MPs, and conditioning for generalizing the MP to different desired targets. We derive a feedback controller that reproduces a given trajectory distribution in closed form. We compare our approach to the existing state of the art and present real robot results for learning from demonstration.

      @inproceedings{lirolem25693,
      year = {2013},
      publisher = {IEEE},
      month = {October},
      booktitle = {13th IEEE-RAS International Conference on Humanoid Robots (Humanoids)},
      title = {A probabilistic approach to robot trajectory generation},
      pages = {477--483},
      author = {A. Paraschos and G. Neumann and J. Peters},
      url = {http://eprints.lincoln.ac.uk/25693/},
      abstract = {Motor Primitives (MPs) are a promising approach for the data-driven acquisition as well as for the modular and re-usable generation of movements. However, a modular control architecture with MPs is only effective if the MPs support co-activation as well as continuously blending the activation from one MP to the next. In addition, we need efficient mechanisms to adapt a MP to the current situation. Common approaches to movement primitives lack such capabilities or their implementation is based on heuristics. We present a probabilistic movement primitive approach that overcomes the limitations of existing approaches. We encode a primitive as a probability distribution over trajectories. The representation as distribution has several beneficial properties. It allows encoding a time-varying variance profile. Most importantly, it allows performing new operations: a product of distributions for the co-activation of MPs, and conditioning for generalizing the MP to different desired targets. We derive a feedback controller that reproduces a given trajectory distribution in closed form. We compare our approach to the existing state of the art and present real robot results for learning from demonstration.},
      }

    • A. Paraschos, C. Daniel, J. Peters, and G. Neumann, “Probabilistic movement primitives,” in Advances in Neural Information Processing Systems, (NIPS), 2013.
      [BibTeX] [Abstract] [Download PDF]

      Movement Primitives (MP) are a well-established approach for representing modular and re-usable robot movement generators. Many state-of-the-art robot learning successes are based on MPs, due to their compact representation of the inherently continuous and high dimensional robot movements. A major goal in robot learning is to combine multiple MPs as building blocks in a modular control architecture to solve complex tasks. To this effect, a MP representation has to allow for blending between motions, adapting to altered task variables, and co-activating multiple MPs in parallel. We present a probabilistic formulation of the MP concept that maintains a distribution over trajectories. Our probabilistic approach allows for the derivation of new operations which are essential for implementing all aforementioned properties in one framework. In order to use such a trajectory distribution for robot movement control, we analytically derive a stochastic feedback controller which reproduces the given trajectory distribution. We evaluate and compare our approach to existing methods on several simulated as well as real robot scenarios.

      @inproceedings{lirolem25785,
      title = {Probabilistic movement primitives},
      journal = {Advances in Neural Information Processing Systems},
      booktitle = {Advances in Neural Information Processing Systems, (NIPS)},
      month = {December},
      author = {A. Paraschos and C. Daniel and J. Peters and G. Neumann},
      year = {2013},
      abstract = {Movement Primitives (MP) are a well-established approach for representing modular
      and re-usable robot movement generators. Many state-of-the-art robot learning
      successes are based on MPs, due to their compact representation of the inherently
      continuous and high dimensional robot movements. A major goal in robot learning
      is to combine multiple MPs as building blocks in a modular control architecture
      to solve complex tasks. To this effect, a MP representation has to allow for
      blending between motions, adapting to altered task variables, and co-activating
      multiple MPs in parallel. We present a probabilistic formulation of the MP concept
      that maintains a distribution over trajectories. Our probabilistic approach
      allows for the derivation of new operations which are essential for implementing
      all aforementioned properties in one framework. In order to use such a trajectory
      distribution for robot movement control, we analytically derive a stochastic feedback
      controller which reproduces the given trajectory distribution. We evaluate
      and compare our approach to existing methods on several simulated as well as
      real robot scenarios.},
      url = {http://eprints.lincoln.ac.uk/25785/}
      }
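
    The adaptation described in the ProMP papers above relies on Gaussian conditioning of a learned distribution over basis-function weights, for example to make the movement pass through a new via-point. Below is a minimal single-DoF sketch of that operation under simplifying assumptions (radial basis features, independent demonstrations); function names and parameters are illustrative and not the authors' implementation.

    ```python
    # Hedged sketch of the core ProMP adaptation step: condition a Gaussian over
    # basis-function weights on passing through a via-point y* at phase t*.
    import numpy as np

    def rbf(t, K=10, width=0.05):
        c = np.linspace(0, 1, K)
        phi = np.exp(-0.5 * (t - c) ** 2 / width)
        return phi / phi.sum()                                   # (K,)

    def learn_promp(demos, K=10):
        """Fit mean and covariance of the weight distribution from demonstrations."""
        T = demos.shape[1]
        Phi = np.stack([rbf(t, K) for t in np.linspace(0, 1, T)])    # (T, K)
        W = np.linalg.lstsq(Phi, demos.T, rcond=None)[0].T            # (N, K)
        return W.mean(axis=0), np.cov(W.T) + 1e-6 * np.eye(K)

    def condition(mu_w, Sigma_w, t_star, y_star, sigma_y=1e-4):
        """Gaussian conditioning on observing y_star at phase t_star."""
        phi = rbf(t_star, len(mu_w))
        S = phi @ Sigma_w @ phi + sigma_y
        gain = Sigma_w @ phi / S                                      # (K,)
        mu_new = mu_w + gain * (y_star - phi @ mu_w)
        Sigma_new = Sigma_w - np.outer(gain, phi @ Sigma_w)
        return mu_new, Sigma_new

    # Toy demo: noisy reaching trajectories, then force passage through y=0.8 at t=0.5.
    T, N = 100, 20
    t = np.linspace(0, 1, T)
    demos = np.stack([t ** 2 + 0.05 * np.random.randn(T) for _ in range(N)])
    mu_w, Sigma_w = learn_promp(demos)
    mu_c, _ = condition(mu_w, Sigma_w, 0.5, 0.8)
    print("mean at t=0.5 before/after:", rbf(0.5) @ mu_w, rbf(0.5) @ mu_c)
    ```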

     

  • RAL & IROS 2017: Probabilistic prioritization of movement primitives

    Movement prioritization is a common approach to combine controllers of different tasks for redundant robots, where each task is assigned a priority. The priorities of the tasks are often hand-tuned or the result of an optimization, but seldom learned from data. This paper combines Bayesian task prioritization with probabilistic movement primitives to prioritize full motion sequences that are learned from demonstrations. Probabilistic movement primitives (ProMPs) can encode distributions of movements over full motion sequences and provide control laws to exactly follow these distributions. The probabilistic formulation allows for a natural application of Bayesian task prioritization. We extend the ProMP controllers with an additional feedback component that accounts for inaccuracies in following the distribution and allows for a more robust prioritization of primitives. We demonstrate how the task priorities can be obtained from imitation learning and how different primitives can be combined to solve even unseen task combinations. Due to the prioritization, our approach can efficiently learn a combination of tasks without requiring individual models per task combination. Further, our approach can adapt an existing primitive library by prioritizing additional controllers, for example for implementing obstacle avoidance. Hence, the need to retrain the whole library is avoided in many cases. We evaluate our approach on reaching movements under constraints with redundant simulated planar robots and two physical robot platforms, the humanoid robot “iCub” and a KUKA LWR robot arm. A short sketch of the precision-weighted fusion of controllers that underlies such prioritization is given after this entry.

    • A. Paraschos, R. Lioutikov, J. Peters, and G. Neumann, “Probabilistic prioritization of movement primitives,” IEEE Robotics and Automation Letters, vol. PP, iss. 99, 2017.
      [BibTeX] [Abstract] [Download PDF]

      Movement prioritization is a common approach to combine controllers of different tasks for redundant robots, where each task is assigned a priority. The priorities of the tasks are often hand-tuned or the result of an optimization, but seldomly learned from data. This paper combines Bayesian task prioritization with probabilistic movement primitives to prioritize full motion sequences that are learned from demonstrations. Probabilistic movement primitives (ProMPs) can encode distributions of movements over full motion sequences and provide control laws to exactly follow these distributions. The probabilistic formulation allows for a natural application of Bayesian task prioritization. We extend the ProMP controllers with an additional feedback component that accounts for inaccuracies in following the distribution and allows for a more robust prioritization of primitives. We demonstrate how the task priorities can be obtained from imitation learning and how different primitives can be combined to solve even unseen task-combinations. Due to the prioritization, our approach can efficiently learn a combination of tasks without requiring individual models per task combination. Further, our approach can adapt an existing primitive library by prioritizing additional controllers, for example, for implementing obstacle avoidance. Hence, the need of retraining the whole library is avoided in many cases. We evaluate our approach on reaching movements under constraints with redundant simulated planar robots and two physical robot platforms, the humanoid robot “iCub” and a KUKA LWR robot arm.

      @article{lirolem27901,
      volume = {PP},
      year = {2017},
      number = {99},
      publisher = {IEEE},
      month = {July},
      journal = {IEEE Robotics and Automation Letters},
      title = {Probabilistic prioritization of movement primitives},
      booktitle = {Proceedings of the International Conference on Intelligent Robots and Systems (IROS), and IEEE Robotics and Automation Letters (RA-L)},
      author = {Alexandros Paraschos and Rudolf Lioutikov and Jan Peters and Gerhard Neumann},
      abstract = {Movement prioritization is a common approach
      to combine controllers of different tasks for redundant robots,
      where each task is assigned a priority. The priorities of the
      tasks are often hand-tuned or the result of an optimization,
      but seldomly learned from data. This paper combines Bayesian
      task prioritization with probabilistic movement primitives to
      prioritize full motion sequences that are learned from demonstrations.
      Probabilistic movement primitives (ProMPs) can
      encode distributions of movements over full motion sequences
      and provide control laws to exactly follow these distributions.
      The probabilistic formulation allows for a natural application of
      Bayesian task prioritization. We extend the ProMP controllers
      with an additional feedback component that accounts for inaccuracies
      in following the distribution and allows for a more
      robust prioritization of primitives. We demonstrate how the
      task priorities can be obtained from imitation learning and
      how different primitives can be combined to solve even unseen
      task-combinations. Due to the prioritization, our approach can
      efficiently learn a combination of tasks without requiring individual
      models per task combination. Further, our approach can
      adapt an existing primitive library by prioritizing additional
      controllers, for example, for implementing obstacle avoidance.
      Hence, the need of retraining the whole library is avoided in
      many cases. We evaluate our approach on reaching movements
      under constraints with redundant simulated planar robots and
      two physical robot platforms, the humanoid robot iCub and
      a KUKA LWR robot arm.},
      url = {http://eprints.lincoln.ac.uk/27901/}
      }
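
    The prioritization above builds on the idea that each primitive or controller proposes a command together with a confidence, and that the commands are fused by weighting them with their precisions (a product of Gaussian policies). The snippet below is a small illustrative sketch of that fusion rule only, with made-up names and numbers; the paper's full method additionally learns the priorities from imitation and couples them with ProMP feedback controllers.

    ```python
    # Hedged sketch of precision-weighted control fusion: each primitive proposes
    # a control with a covariance, and the combined command is the mean of the
    # product of those Gaussians.
    import numpy as np

    def fuse_controls(means, covariances):
        """Product of Gaussian control policies: precision-weighted average."""
        precisions = [np.linalg.inv(S) for S in covariances]
        P = sum(precisions)                                    # combined precision
        u = np.linalg.solve(P, sum(Pi @ m for Pi, m in zip(precisions, means)))
        return u, np.linalg.inv(P)

    # Toy example: a reaching primitive (confident in x) and an obstacle-avoidance
    # controller (confident in y) for a 2-D task space.
    u_reach, S_reach = np.array([1.0, 0.0]), np.diag([0.01, 1.0])
    u_avoid, S_avoid = np.array([0.0, 0.5]), np.diag([1.0, 0.01])
    u, S = fuse_controls([u_reach, u_avoid], [S_reach, S_avoid])
    print("fused control:", u)   # close to [1.0, 0.5]: each task dominates where it is confident
    ```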

  • IJRR 2017: Phase estimation for fast action recognition and trajectory generation in human–robot collaboration

    This paper proposes a method to achieve fast and fluid human–robot interaction by estimating the progress of the movement of the human. The method allows the progress, also referred to as the phase of the movement, to be estimated even when observations of the human are partial and occluded, a problem typically found when using motion capture systems in cluttered environments. By leveraging the framework of Interaction Probabilistic Movement Primitives, phase estimation makes it possible to classify the human action and to generate a corresponding robot trajectory before the human finishes his/her movement. The method is therefore suited for semi-autonomous robots acting as assistants and coworkers. Since observations may be sparse, our method is based on computing the probability of different phase candidates to find the phase that best aligns the Interaction Probabilistic Movement Primitives with the current observations. The method is fundamentally different from approaches based on Dynamic Time Warping, which must rely on a consistent stream of measurements at runtime. The resulting framework can achieve phase estimation, action recognition and robot trajectory coordination using a single probabilistic representation. We evaluated the method using a seven-degree-of-freedom lightweight robot arm equipped with a five-finger hand in single- and multi-task collaborative experiments, and we compare the accuracy achieved by phase estimation with our previous method based on Dynamic Time Warping. A short sketch of the phase-candidate scoring idea is given after this entry.

    • G. Maeda, M. Ewerton, G. Neumann, R. Lioutikov, and J. Peters, “Phase estimation for fast action recognition and trajectory generation in human–robot collaboration,” The International Journal of Robotics Research, vol. 36, iss. 13-14, pp. 1579-1594, 2017.
      [BibTeX] [Abstract] [Download PDF]

      This paper proposes a method to achieve fast and fluid human–robot interaction by estimating the progress of the movement of the human. The method allows the progress, also referred to as the phase of the movement, to be estimated even when observations of the human are partial and occluded; a problem typically found when using motion capture systems in cluttered environments. By leveraging on the framework of Interaction Probabilistic Movement Primitives, phase estimation makes it possible to classify the human action, and to generate a corresponding robot trajectory before the human finishes his/her movement. The method is therefore suited for semi-autonomous robots acting as assistants and coworkers. Since observations may be sparse, our method is based on computing the probability of different phase candidates to find the phase that best aligns the Interaction Probabilistic Movement Primitives with the current observations. The method is fundamentally different from approaches based on Dynamic Time Warping that must rely on a consistent stream of measurements at runtime. The resulting framework can achieve phase estimation, action recognition and robot trajectory coordination using a single probabilistic representation. We evaluated the method using a seven-degree-of-freedom lightweight robot arm equipped with a five-finger hand in single and multi-task collaborative experiments. We compare the accuracy achieved by phase estimation with our previous method based on dynamic time warping.

      @article{lirolem26734,
      volume = {36},
      month = {December},
      year = {2017},
      publisher = {SAGE},
      number = {13-14},
      journal = {The International Journal of Robotics Research},
      title = {Phase estimation for fast action recognition and trajectory generation in human--robot collaboration},
      author = {Guilherme Maeda and Marco Ewerton and Gerhard Neumann and Rudolf Lioutikov and Jan Peters},
      pages = {1579--1594},
      url = {http://eprints.lincoln.ac.uk/26734/},
      abstract = {This paper proposes a method to achieve fast and fluid human--robot interaction by estimating the progress of the movement of the human. The method allows the progress, also referred to as the phase of the movement, to be estimated even when observations of the human are partial and occluded; a problem typically found when using motion capture systems in cluttered environments. By leveraging on the framework of Interaction Probabilistic Movement Primitives, phase estimation makes it possible to classify the human action, and to generate a corresponding robot trajectory before the human finishes his/her movement. The method is therefore suited for semi-autonomous robots acting as assistants and coworkers. Since observations may be sparse, our method is based on computing the probability of different phase candidates to find the phase that best aligns the Interaction Probabilistic Movement Primitives with the current observations. The method is fundamentally different from approaches based on Dynamic Time Warping that must rely on a consistent stream of measurements at runtime. The resulting framework can achieve phase estimation, action recognition and robot trajectory coordination using a single probabilistic representation. We evaluated the method using a seven-degree-of-freedom lightweight robot arm equipped with a five-finger hand in single and multi-task collaborative experiments. We compare the accuracy achieved by phase estimation with our previous method based on dynamic time warping.},
      }
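
    The phase estimation above boils down to scoring a set of candidate phases (temporal scalings) by how well the partial observations fit the learned trajectory model and keeping the best candidate. The sketch below illustrates that idea for a single degree of freedom with a deterministic mean and isotropic noise; it is a simplified stand-in, not the Interaction ProMP machinery of the paper, and all names are illustrative.

    ```python
    # Hedged sketch of phase estimation: score a grid of candidate speeds by the
    # likelihood of sparse, possibly occluded observations under the learned mean.
    import numpy as np

    def phase_log_likelihood(obs_t, obs_y, mean_fn, alpha, sigma=0.05):
        """Log-likelihood of observations if the movement runs at speed alpha."""
        z = np.clip(alpha * obs_t, 0.0, 1.0)            # wall-clock time -> phase
        resid = obs_y - mean_fn(z)
        return -0.5 * np.sum(resid ** 2) / sigma ** 2

    def estimate_phase_speed(obs_t, obs_y, mean_fn):
        alphas = np.linspace(0.5, 2.0, 61)              # candidate temporal scalings
        scores = [phase_log_likelihood(obs_t, obs_y, mean_fn, a) for a in alphas]
        return alphas[int(np.argmax(scores))]

    # Toy demo: the true movement runs 1.3x faster than the nominal primitive,
    # and only a few occluded samples from its first half are observed.
    mean_fn = lambda z: np.sin(np.pi * z)               # nominal learned mean
    true_alpha = 1.3
    obs_t = np.array([0.05, 0.12, 0.20, 0.31])
    obs_y = mean_fn(np.clip(true_alpha * obs_t, 0, 1)) + 0.01 * np.random.randn(4)
    print("estimated speed:", estimate_phase_speed(obs_t, obs_y, mean_fn))
    ```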

  • Auro 2017: Probabilistic movement primitives for coordination of multiple human–robot collaborative tasks

    This paper proposes an interaction learning method for collaborative and assistive robots based on movement primitives. The method allows for both action recognition and human–robot movement coordination. It uses imitation learning to construct a mixture model of human–robot interaction primitives. This probabilistic model allows the assistive trajectory of the robot to be inferred from human observations. The method is scalable in relation to the number of tasks and can learn nonlinear correlations between the trajectories that describe the human–robot interaction. We evaluated the method experimentally with a lightweight robot arm in a variety of assistive scenarios, including the coordinated handover of a bottle to a human, and the collaborative assembly of a toolbox. Potential applications of the method are personal caregiver robots, control of intelligent prosthetic devices, and robot coworkers in factories. A short sketch of the recognize-then-condition idea behind interaction primitives is given after this entry.

    • G. J. Maeda, G. Neumann, M. Ewerton, R. Lioutikov, O. Kroemer, and J. Peters, “Probabilistic movement primitives for coordination of multiple human–robot collaborative tasks,” Autonomous Robots, vol. 41, iss. 3, pp. 593-612, 2017.
      [BibTeX] [Abstract] [Download PDF]

      This paper proposes an interaction learning method for collaborative and assistive robots based on movement primitives. The method allows for both action recognition and human–robot movement coordination. It uses imitation learning to construct a mixture model of human–robot interaction primitives. This probabilistic model allows the assistive trajectory of the robot to be inferred from human observations. The method is scalable in relation to the number of tasks and can learn nonlinear correlations between the trajectories that describe the human–robot interaction. We evaluated the method experimentally with a lightweight robot arm in a variety of assistive scenarios, including the coordinated handover of a bottle to a human, and the collaborative assembly of a toolbox. Potential applications of the method are personal caregiver robots, control of intelligent prosthetic devices, and robot coworkers in factories.

      @article{lirolem25744,
      author = {G. J. Maeda and G. Neumann and M. Ewerton and R. Lioutikov and O. Kroemer and J. Peters},
      pages = {593--612},
      title = {Probabilistic movement primitives for coordination of multiple human--robot collaborative tasks},
      journal = {Autonomous Robots},
      month = {March},
      year = {2017},
      publisher = {Springer},
      number = {3},
      note = {Special Issue on Assistive and Rehabilitation Robotics},
      volume = {41},
      abstract = {This paper proposes an interaction learning method for collaborative and assistive robots based on movement primitives. The method allows for both action recognition and human--robot movement coordination. It uses imitation learning to construct a mixture model of human--robot interaction primitives. This probabilistic model allows the assistive trajectory of the robot to be inferred from human observations. The method is scalable in relation to the number of tasks and can learn nonlinear correlations between the trajectories that describe the human--robot interaction. We evaluated the method experimentally with a lightweight robot arm in a variety of assistive scenarios, including the coordinated handover of a bottle to a human, and the collaborative assembly of a toolbox. Potential applications of the method are personal caregiver robots, control of intelligent prosthetic devices, and robot coworkers in factories.},
      url = {http://eprints.lincoln.ac.uk/25744/}
      }
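
    The interaction primitives above combine two ingredients: recognizing the task from the observed human motion and conditioning a joint human–robot model to obtain the robot's trajectory. The sketch below shows both steps for a single Gaussian per task over concatenated human and robot basis weights; it is a simplified illustration with synthetic data, not the paper's mixture-model implementation, and all names are made up.

    ```python
    # Hedged sketch of the recognize-then-condition idea: model human and robot
    # weights jointly as a Gaussian per task, pick the task whose human marginal
    # best explains the observation, then infer the robot weights by conditioning.
    import numpy as np

    def condition_robot(mu, Sigma, w_human, dh):
        """Mean of p(w_robot | w_human) for a joint Gaussian over [human; robot]."""
        mu_h, mu_r = mu[:dh], mu[dh:]
        S_hh, S_rh = Sigma[:dh, :dh], Sigma[dh:, :dh]
        gain = S_rh @ np.linalg.inv(S_hh + 1e-6 * np.eye(dh))
        return mu_r + gain @ (w_human - mu_h)

    def recognize(w_human, models, dh):
        """Pick the task whose human marginal gives the observation highest density."""
        def logpdf(x, m, S):
            d = x - m
            return -0.5 * (d @ np.linalg.solve(S, d) + np.linalg.slogdet(S)[1])
        scores = [logpdf(w_human, mu[:dh], Sig[:dh, :dh]) for mu, Sig in models]
        return int(np.argmax(scores))

    # Toy demo with two "tasks" (e.g. handover vs. assembly), 3 human + 3 robot weights.
    rng = np.random.default_rng(0)
    dh, models = 3, []
    for shift in (0.0, 2.0):                          # two tasks with different means
        W = shift + rng.normal(size=(50, 6)) @ np.diag([1, 1, 1, 0.5, 0.5, 0.5])
        W[:, dh:] += 0.8 * W[:, :dh]                  # robot weights correlate with human
        models.append((W.mean(axis=0), np.cov(W.T) + 1e-6 * np.eye(6)))
    w_human_obs = models[1][0][:dh] + 0.1             # observation near task 1
    task = recognize(w_human_obs, models, dh)
    print("recognized task:", task)
    print("inferred robot weights:", condition_robot(*models[task], w_human_obs, dh))
    ```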

  • NIPS 2013: Probabilistic Movement Primitives

    Movement Primitives (MP) are a well-established approach for representing modular and re-usable robot movement generators. Many state-of-the-art robot learning successes are based on MPs, due to their compact representation of the inherently continuous and high-dimensional robot movements. A major goal in robot learning is to combine multiple MPs as building blocks in a modular control architecture to solve complex tasks. To this end, an MP representation has to allow for blending between motions, adapting to altered task variables, and co-activating multiple MPs in parallel. We present a probabilistic formulation of the MP concept that maintains a distribution over trajectories. Our probabilistic approach allows for the derivation of new operations which are essential for implementing all aforementioned properties in one framework. In order to use such a trajectory distribution for robot movement control, we analytically derive a stochastic feedback controller which reproduces the given trajectory distribution. We evaluate and compare our approach to existing methods on several simulated as well as real robot scenarios. A short sketch of the blending (co-activation) of primitives is given after this entry.

    • A. Paraschos, C. Daniel, J. Peters, and G. Neumann, “Probabilistic movement primitives,” in Advances in Neural Information Processing Systems, (NIPS), 2013.
      [BibTeX] [Abstract] [Download PDF]

      Movement Primitives (MP) are a well-established approach for representing modular and re-usable robot movement generators. Many state-of-the-art robot learning successes are based on MPs, due to their compact representation of the inherently continuous and high dimensional robot movements. A major goal in robot learning is to combine multiple MPs as building blocks in a modular control architecture to solve complex tasks. To this effect, a MP representation has to allow for blending between motions, adapting to altered task variables, and co-activating multiple MPs in parallel. We present a probabilistic formulation of the MP concept that maintains a distribution over trajectories. Our probabilistic approach allows for the derivation of new operations which are essential for implementing all aforementioned properties in one framework. In order to use such a trajectory distribution for robot movement control, we analytically derive a stochastic feedback controller which reproduces the given trajectory distribution. We evaluate and compare our approach to existing methods on several simulated as well as real robot scenarios.

      @inproceedings{lirolem25785,
      title = {Probabilistic movement primitives},
      journal = {Advances in Neural Information Processing Systems},
      booktitle = {Advances in Neural Information Processing Systems, (NIPS)},
      month = {December},
      author = {A. Paraschos and C. Daniel and J. Peters and G. Neumann},
      year = {2013},
      abstract = {Movement Primitives (MP) are a well-established approach for representing modular
      and re-usable robot movement generators. Many state-of-the-art robot learning
      successes are based on MPs, due to their compact representation of the inherently
      continuous and high dimensional robot movements. A major goal in robot learning
      is to combine multiple MPs as building blocks in a modular control architecture
      to solve complex tasks. To this effect, a MP representation has to allow for
      blending between motions, adapting to altered task variables, and co-activating
      multiple MPs in parallel. We present a probabilistic formulation of the MP concept
      that maintains a distribution over trajectories. Our probabilistic approach
      allows for the derivation of new operations which are essential for implementing
      all aforementioned properties in one framework. In order to use such a trajectory
      distribution for robot movement control, we analytically derive a stochastic feedback
      controller which reproduces the given trajectory distribution. We evaluate
      and compare our approach to existing methods on several simulated as well as
      real robot scenarios.},
      url = {http://eprints.lincoln.ac.uk/25785/}
      }
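
    Besides conditioning, the NIPS paper highlights blending and co-activation of primitives by combining their trajectory distributions with time-varying activations. The sketch below illustrates that idea in one dimension: at each step the means are combined with activation-weighted precisions, so whichever primitive is active (and confident) dominates smoothly. This is an illustrative sketch with made-up numbers, not the paper's derivation.

    ```python
    # Hedged sketch of blending two primitives over time via activation-weighted
    # precisions; the active, low-variance primitive dominates the blended mean.
    import numpy as np

    def blend(means, variances, activations):
        """means, variances, activations: arrays of shape (n_primitives, T)."""
        prec = activations / variances                 # activation-weighted precisions
        var_blend = 1.0 / prec.sum(axis=0)
        mean_blend = var_blend * (prec * means).sum(axis=0)
        return mean_blend, var_blend

    # Toy demo: primitive A holds 0, primitive B holds 1; activation hands over A -> B.
    T = 100
    t = np.linspace(0, 1, T)
    mean_a, var_a = np.zeros(T), 0.01 * np.ones(T)
    mean_b, var_b = np.ones(T), 0.01 * np.ones(T)
    act_a, act_b = 1.0 - t, t
    mean, var = blend(np.stack([mean_a, mean_b]),
                      np.stack([var_a, var_b]),
                      np.stack([act_a, act_b]) + 1e-6)  # avoid zero activation
    print("blended start/end:", mean[0], mean[-1])      # ~0 at start, ~1 at end
    ```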