dmp
Package Summary
- Author: Scott Niekum
- License: BSD
- Source: git https://github.com/sniekum/dmp.git (branch: master)
Overview
This package provides a general implementation of Dynamic Movement Primitives (DMPs). A good reference on DMPs can be found here, but this package implements a more stable reformulation of DMPs also described in the referenced paper. Current capabilities include the learning of multi-dimensional DMPs from example trajectories and generation of full and partial plans for arbitrary starting and goal points. Eventually, a wider selection of function approximators will be added, in addition to native support for reinforcement learning.
A few notes:
- This implementation is agnostic toward what is being generated by the DMP, i.e., force, acceleration, or any other quantity.
- We implement N-dimensional DMPs as N separate DMPs linked together by a single shared phase system, as in the paper referenced above (see the sketch after these notes).
- Function approximation is done with a simple local linear interpolation scheme, but code for a global function approximator using the Fourier basis is also provided, along with an additional local approximation scheme using radial basis functions. Linear interpolation is recommended unless the robot is learning from a large amount of data that cannot reasonably be stored in full.
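To make the shared phase system concrete, here is a minimal sketch, in plain NumPy, of one first-order phase variable driving N critically damped transformation systems. This is not the package's internal code: the decay rate alpha, the gains, and the forcing term f are illustrative placeholders.

import numpy as np

def integrate_dmp(x0, goal, f, K=100.0, tau=1.0, dt=0.01, steps=200):
    """Roll out an N-dimensional DMP from x0 toward goal.

    f(s) stands in for the learned forcing term; it should return an N-vector.
    """
    D = 2.0 * np.sqrt(K)            # critical damping (see Parameters below)
    alpha = 4.0                     # phase decay rate: an assumed value
    x = np.array(x0, dtype=float)   # positions, one entry per dimension
    v = np.zeros_like(x)            # velocities
    g = np.array(goal, dtype=float)
    s = 1.0                         # the single phase variable shared by all dims
    path = [x.copy()]
    for _ in range(steps):
        a = K * (g - x) - D * v + f(s)   # one transformation system per dimension
        v += (a / tau) * dt
        x += (v / tau) * dt
        s += (-alpha * s / tau) * dt     # phase decays from 1 toward 0
        path.append(x.copy())
    return np.array(path)

With f(s) = 0 this reduces to N independent springs that pull the state straight to the goal; the learned forcing term, driven by the shared phase, is what reproduces the demonstrated shape.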
Using DMPs
First, the DMP server must be running. Type:
roslaunch dmp dmp.launch
Now, let's look at some sample code to learn a DMP from demonstration, set it as the active DMP on the server, and use it to plan, given a new start and goal:
#!/usr/bin/env python
import roslib
roslib.load_manifest('dmp')
import rospy
import numpy as np
from dmp.srv import *
from dmp.msg import *


#Learn a DMP from demonstration data
def makeLFDRequest(dims, traj, dt, K_gain, D_gain, num_bases):
    demotraj = DMPTraj()

    for i in range(len(traj)):
        pt = DMPPoint()
        pt.positions = traj[i]
        demotraj.points.append(pt)
        demotraj.times.append(dt * i)

    k_gains = [K_gain] * dims
    d_gains = [D_gain] * dims

    print("Starting LfD...")
    rospy.wait_for_service('learn_dmp_from_demo')
    try:
        lfd = rospy.ServiceProxy('learn_dmp_from_demo', LearnDMPFromDemo)
        resp = lfd(demotraj, k_gains, d_gains, num_bases)
    except rospy.ServiceException as e:
        print("Service call failed: %s" % e)
        return None
    print("LfD done")

    return resp


#Set a DMP as active for planning
def makeSetActiveRequest(dmp_list):
    try:
        sad = rospy.ServiceProxy('set_active_dmp', SetActiveDMP)
        sad(dmp_list)
    except rospy.ServiceException as e:
        print("Service call failed: %s" % e)


#Generate a plan from a DMP
def makePlanRequest(x_0, x_dot_0, t_0, goal, goal_thresh,
                    seg_length, tau, dt, integrate_iter):
    print("Starting DMP planning...")
    rospy.wait_for_service('get_dmp_plan')
    try:
        gdp = rospy.ServiceProxy('get_dmp_plan', GetDMPPlan)
        resp = gdp(x_0, x_dot_0, t_0, goal, goal_thresh,
                   seg_length, tau, dt, integrate_iter)
    except rospy.ServiceException as e:
        print("Service call failed: %s" % e)
        return None
    print("DMP planning done")

    return resp


if __name__ == '__main__':
    rospy.init_node('dmp_tutorial_node')

    #Create a DMP from a 2-D trajectory
    dims = 2
    dt = 1.0
    K = 100
    D = 2.0 * np.sqrt(K)
    num_bases = 4
    traj = [[1.0, 1.0], [2.0, 2.0], [3.0, 4.0], [6.0, 8.0]]
    resp = makeLFDRequest(dims, traj, dt, K, D, num_bases)

    #Set it as the active DMP
    makeSetActiveRequest(resp.dmp_list)

    #Now, generate a plan
    x_0 = [0.0, 0.0]          #Plan starting at a different point than demo
    x_dot_0 = [0.0, 0.0]
    t_0 = 0
    goal = [8.0, 7.0]         #Plan to a different goal than demo
    goal_thresh = [0.2, 0.2]
    seg_length = -1           #Plan until convergence to goal
    tau = 2 * resp.tau        #Desired plan should take twice as long as demo
    dt = 1.0
    integrate_iter = 5        #dt is rather large, so this is > 1
    plan = makePlanRequest(x_0, x_dot_0, t_0, goal, goal_thresh,
                           seg_length, tau, dt, integrate_iter)

    print(plan)
When executed, this produces:
Starting LfD...
LfD done
Starting DMP planning...
DMP planning done
plan:
  points:
    -
      positions: [0.0, 0.0]
      velocities: [0.0, 0.0]
    -
      positions: [0.8100075124021597, 0.7087565733518839]
      velocities: [2.492278234944417, 2.1807434555763616]
    -
      positions: [-3.2149288213481824, -6.1594619175155705]
      velocities: [-1.0934970518496467, -2.769417766916274]
    -
      positions: [0.8827255030673307, -1.937261989255104]
      velocities: [5.431303987123971, 6.50827015230193]
    -
      positions: [6.090630509119033, 4.786163676480743]
      velocities: [2.993882533676429, 3.953989716003782]
    -
      positions: [6.599551969911872, 5.5741095433201595]
      velocities: [-0.14593240997752455, -0.20901919742813732]
    -
      positions: [7.843761135045964, 7.064792076582948]
      velocities: [1.077860218553606, 1.237107151074346]
  times: [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
at_goal: 1
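The response fields shown above can be read directly off the returned message. For example, a short loop over the planned waypoints (field names as seen in the output above):

for pt, t in zip(plan.plan.points, plan.plan.times):
    print("t=%.1f  positions=%s  velocities=%s" % (t, pt.positions, pt.velocities))
print("Reached goal region: %s" % bool(plan.at_goal))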
Parameters
DMPs have several parameters for both learning and planning that require a bit of explanation.
LearnDMPFromDemo Parameters
- k_gains: A list of proportional gains (essentially spring constants), one per dimension of the DMP. The exact values are not critical; 100 usually works well for controlling the PR2.
- d_gains: A list of damping gains, one per dimension of the DMP. These should almost always be set for critical damping (D = 2*sqrt(K)); a short worked example follows this list.
- num_bases: The number of basis functions to use (this does not apply to linear interpolation-based function approximation). More complex nonlinear functions require more bases, but too many can cause overfitting (although overfitting does not matter when desired trajectories are the same length as the demo trajectory; it only becomes a problem when tau is modified). Also, usually no more than 200 basis functions should be used, or things start to slow down considerably.
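As a quick worked example, using the same values as the tutorial above, the gain lists for a 2-D DMP with critical damping would be built like this:

import numpy as np

dims = 2
K = 100.0
D = 2.0 * np.sqrt(K)    # critical damping: D = 2*sqrt(100) = 20.0
k_gains = [K] * dims    # [100.0, 100.0]
d_gains = [D] * dims    # [20.0, 20.0]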
GetDMPPlan Parameters
- x_0: The starting state from which to begin planning. This should be set to the current state for each generated plan, if doing piecewise planning / replanning.
- x_dot_0: The first derivative of the state from which to begin planning.
- t_0: The time in seconds from which to begin the plan. Normally 0, unless doing piecewise planning.
- goal: The goal that the DMP should converge to.
- goal_thresh: A per-dimension threshold; planning stops once the plan comes within this distance of the goal in every dimension, unless seg_length seconds of plan are generated first.
- seg_length: The length of the plan segment in seconds. This can be used to do piecewise, incremental planning and replanning (see the sketch after this list). Otherwise, set to -1 to plan until convergence to the goal.
- tau: This can be interpreted as the desired length of the entire DMP generated movement in seconds (not just the segment being generated currently). Normally, if you want to execute at the same speed as the demonstration, just use the value of tau that LearnDMPFromDemo returns. Otherwise, scale tau accordingly, but performance may suffer, since the function approximator must now generalize / interpolate.
- dt: The time resolution of the plan in seconds.
- integrate_iter: The number of numerical integration iterations used when converting acceleration to velocity to position. This can usually be 1, unless dt is fairly large (e.g., greater than 1 second), in which case it should be larger.
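To illustrate piecewise planning, here is a hypothetical replanning loop built on the makePlanRequest() helper from the tutorial above. The segment length and the way t is advanced are illustrative assumptions; on a real robot, x and x_dot should come from the measured state rather than the tail of the previous plan:

seg_length = 0.5    # plan half a second at a time (illustrative value)
x = [0.0, 0.0]      # current state; on a robot, read this from sensors
x_dot = [0.0, 0.0]
t = 0.0
while True:
    plan = makePlanRequest(x, x_dot, t, goal, goal_thresh,
                           seg_length, tau, dt, integrate_iter)
    # ...execute the planned segment on the robot here...
    last = plan.plan.points[-1]
    x = list(last.positions)
    x_dot = list(last.velocities)
    t += seg_length     # assumes t_0 should advance with each segment
    if plan.at_goal:
        break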
Nodes
dmp_server
Services
- learn_dmp_from_demo (dmp/LearnDMPFromDemo) - Given a demonstration trajectory and DMP parameters, returns a learned multi-dimensional DMP.
- set_active_dmp (dmp/SetActiveDMP) - Sets the active multi-dimensional DMP that will be used for planning.
- get_dmp_plan (dmp/GetDMPPlan) - Creates a full or partial plan from a start state to a goal state, using the currently active DMP.