== Which data types does tf use? ==
tf can deal with data objects describing poses, vectors, points, etc. You can find a complete list of the data types [[tf/data types|here]].

== How do I visualize tf? ==
tf comes with a large set of visualization and debugging tools. You can find a list of the tools [[tf/Debugging tools|here]].

== Frames are missing in my tf tree. Why? ==
You expected some frames to be in your tf tree, but [[tf/Debugging tools|view_frames]] shows you they are not there. There are several possible causes:

 * No one is publishing these frames. To look at the raw publishing data, type:
{{{
$ rostopic echo /tf
}}}
 * The frames are published incorrectly. tf expects the frames to form a '''tree''', which means that each frame can '''only have one parent'''. If you publish, for example, both the transforms
  * from "A (parent) to B (child)" and
  * from "C (parent) to B (child)",
 then B has two parents: A and C. tf cannot deal with this.

== Can I transform a Point Cloud? ==
Yes, it's in the `tf` package right now; it will be moving to a "point cloud" package soon. Use the `transformPointCloud` method of the [[http://docs.ros.org/api/tf/html/c++/classtf_1_1TransformListener.html|tf::TransformListener]] class.

== Can I set transforms from a matrix? ==
A [[http://www.bulletphysics.com/Bullet/BulletFull/classbtTransform.html|btTransform]] can be constructed using
 * a [[http://www.bulletphysics.com/Bullet/BulletFull/classbtQuaternion.html|btQuaternion]] and a [[http://www.bulletphysics.com/Bullet/BulletFull/classbtVector3.html|btVector3]], OR
 * a [[http://www.bulletphysics.com/Bullet/BulletFull/classbtMatrix3x3.html|btMatrix3x3]] and a [[http://www.bulletphysics.com/Bullet/BulletFull/classbtVector3.html|btVector3]].

A [[http://www.bulletphysics.com/Bullet/BulletFull/classbtMatrix3x3.html|btMatrix3x3]] takes 9 values in its constructor (in row-major order), so you can do:
{{{
#!cplusplus
// The nine matrix entries below are placeholders; a real rotation
// matrix must be orthonormal. x, y and z are the translation.
tf::Transform tr = tf::Transform(btMatrix3x3(1, 2, 3,
                                             4, 5, 6,
                                             7, 8, 9),
                                 btVector3(x, y, z));
}}}

== Can I set a transform from axis/angle data? ==
Yes, it's easy to do with [[eigen]]. E.g., to convert the output of PCL's [[http://pointclouds.org/documentation/tutorials/planar_segmentation.php#planar-segmentation|planar model segmenter]] to a quaternion and then to (the orientation component of) a transform:
{{{
#!cplusplus
#include <cmath>
#include <Eigen/Geometry>
#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <geometry_msgs/TransformStamped.h>
...
pcl::SACSegmentation<pcl::PointXYZ> seg;
pcl::PointIndices inliers;
pcl::ModelCoefficients coefficients;
...
seg.segment(inliers, coefficients);
...
// values[0..2] hold the unit plane normal. Rotate the x axis onto it:
// the angle is acos(n_x) and the axis is x_hat cross n, normalized.
Eigen::AngleAxisf aa(acos(coefficients.values[0]),
                     Eigen::Vector3f(0,
                                     -coefficients.values[2],
                                     coefficients.values[1]).normalized());
Eigen::Quaternionf q(aa);

geometry_msgs::TransformStamped tx;
tx.transform.rotation.x = q.x();
tx.transform.rotation.y = q.y();
tx.transform.rotation.z = q.z();
tx.transform.rotation.w = q.w();
}}}

== What is the meaning of this error message? ==
Take a look at the [[tf/Errors explained|errors explained]] page.

== Can I convert between different rotation representations? ==
Take a look at the Bullet [[http://www.bulletphysics.com/Bullet/BulletFull/classbtMatrix3x3.html|btMatrix3x3]] and [[http://www.bulletphysics.com/Bullet/BulletFull/classbtQuaternion.html|btQuaternion]] data types.

== Can I declare static transforms? ==
There is no such thing as a static transform in tf (there will be in tf2). tf is designed as a distributed system in which communications are not guaranteed, so it is built to be robust to the loss of any message. There are several ways to set a static transform. There is a command line tool, [[tf#static_transform_publisher|static_transform_publisher]], which works well when integrated into [[roslaunch]] files (see the example below). For examples of how to broadcast transforms in code, take a look at the tutorials about adding a frame [[tf/Tutorials/Adding a frame (C++)|(C++)]] [[tf/Tutorials/Adding a frame (Python)|(Python)]].
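For reference, an invocation might look like the sketch below; the offsets, the frame names `base_link` and `laser`, and the 100 ms publishing period are placeholder values:
{{{
$ rosrun tf static_transform_publisher 1 0 0 0 0 0 base_link laser 100
}}}
The arguments are x y z yaw pitch roll, followed by the parent frame id, the child frame id, and the publishing period in milliseconds; the same invocation can be wrapped in a `<node>` entry of a [[roslaunch]] file.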
== Do transforms expire? ==
Some code (such as `amcl_player.cc`) refers to an expiration time for transforms. How does the concept of transforms that expire interact with the transform library's attempts at interpolation/extrapolation? Is this documented anywhere?

The concept of expiring transforms is not actually supported. What [[amcl]] and a few others do is date their transforms in the future. This works around the problem that a chain of transforms is only as current as its oldest link: future dating a transform allows a slowly published transform to be looked up in the intervening period between broadcasts.

This technique is dangerous in general, for it causes tf to lag behind the actual information: tf will assume the information is correct until its timestamp in the future passes. However, for transforms which change only very slowly this lag is not observable, which is why it is valid for `amcl` to use it, and why it can safely be used for static transforms. I highly recommend being careful with future dating any measurements. Another available technique is to allow extrapolation; either technique can creep up on you and cause unusual behavior that is hard to trace.

== Can I set different expiration limits for specific transforms? ==
After all, I would expect different transforms to update at rates that are potentially orders of magnitude different.

In the current implementation, the limit is simply how far you future date your data for that transform. Remember, this will cause the transform to lag by however much you future date the values. I do not recommend this for any value which is expected to change at a rate greater than a slight drift correction (à la amcl_player).

== How does tf deal with interpolation and extrapolation? ==
Our experimentation has shown that interpolation is fine, but extrapolation almost always ends up becoming more of a problem than a solution. If you are having trouble with data arriving before the transforms it needs are available, I suggest using the `tf::MessageFilter` class in tf, which queues incoming data until the required transforms are available; see the sketch below. Having tried allowing "just a little" extrapolation, we have found that waiting for accurate data to become available is a much better approach.
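As an illustration, here is a minimal sketch of that pattern, close to the usual `tf::MessageFilter` usage; the topic name `point_in` and the target frame `base_link` are placeholder choices:
{{{
#!cplusplus
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <tf/message_filter.h>
#include <message_filters/subscriber.h>
#include <geometry_msgs/PointStamped.h>
#include <boost/bind.hpp>

class PointTransformer
{
public:
  PointTransformer() : target_frame_("base_link"),
                       sub_(nh_, "point_in", 10),
                       filter_(sub_, listener_, target_frame_, 10)
  {
    // The callback only fires once the transform from the message's
    // frame_id to target_frame_ is available at the message's timestamp.
    filter_.registerCallback(boost::bind(&PointTransformer::callback, this, _1));
  }

  void callback(const geometry_msgs::PointStamped::ConstPtr& point_in)
  {
    geometry_msgs::PointStamped point_out;
    listener_.transformPoint(target_frame_, *point_in, point_out);
    // ... use point_out, now expressed in target_frame_ ...
  }

private:
  ros::NodeHandle nh_;
  std::string target_frame_;
  tf::TransformListener listener_;
  message_filters::Subscriber<geometry_msgs::PointStamped> sub_;
  tf::MessageFilter<geometry_msgs::PointStamped> filter_;
};

int main(int argc, char** argv)
{
  ros::init(argc, argv, "point_transformer");
  PointTransformer t;
  ros::spin();
  return 0;
}
}}}
The filter holds each incoming message until the needed transform exists at the message's timestamp, so the callback never has to wait or extrapolate.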
== How can tf be used in a 3D-mapping framework? ==
An example serves to illustrate this better. Given 4 sensing devices:
 1. Hokuyo -- [[hokuyo_node]] package
 2. !SwissRanger SR4000 -- [[swissranger]] package
 3. Videre STOC -- [[stoc_driver]] package
 4. Thermal FLIR camera -- [[flir_driver]] package

The first 3 produce point clouds, and the last one produces images. After calibration, you get the rotation matrices needed to transform the !SwissRanger point cloud into the frame of the left camera on the STOC, the thermal image into the left STOC frame, and so on. While collecting data, you constantly need to apply these rotation matrices to the point clouds generated by some sensors, to annotate your 3D points with texture (RGB), thermal information, etc. This is exactly where you want to use `tf::TransformListener`. The point cloud aggregator will be receiving messages over ROS from all of the sensors, in their respective data types.

However, say the Hokuyo is mounted on the base, the Videre stereo camera is on the head, and the !SwissRanger is on the left arm. The aggregating process has no idea where all these coordinate frames actually are in space. But the mechanism control process knows where the head and arms are in relation to the base, and the localization process knows where the base is in relation to the world. If they are all publishing transforms using their own broadcaster instances, all the aggregator node needs to do is instantiate a `TransformListener`, and it will be able to relate the data from the Videre to the data from the Hokuyo and the !SwissRanger. The `TransformListener` class extracts all that information from the ROS network and provides it automatically to the `Transformer` base class, which can then provide the aggregator with any transform it asks for. The goal is that the end user doesn't have to worry about collecting any transforms: they are automatically cached in time and can provide interpolated (or, if desired, extrapolated) results.

What makes this easy for the developer/user is the use of frame ids. Frame ids are simply strings which uniquely identify coordinate frames. When the system is operating, if a point cloud arrives from the Videre, it will be in the "videre_camera_frame". To use it in whatever frame you want, simply transform it to that frame's id and use the result.

== What is the tf threading model? ==
A [[http://www.ros.org/doc/api/tf/html/classtf_1_1TransformListener.html|tf::TransformListener]] class is continuously listening for incoming coordinate transforms from [[http://www.ros.org/doc/api/tf/html/classtf_1_1TransformBroadcaster.html|tf::TransformBroadcaster]]s. Any blocking call would severely disrupt this flow of information. Therefore, a transform listener, by default, spins its own thread. This particularly helps with calls to functions like `tf::Transformer::waitForTransform`, which can block callbacks while waiting to hear back from tf (see the sketch after the list below).

 * The APIs of `tf::Transformer`, `tf::TransformListener` and `tf::TransformSender` all use mutex locks to be thread safe.
 * `tf::Transformer::waitForTransform` requires a dedicated listening thread (provided by default by [[http://www.ros.org/doc/api/tf/html/classtf_1_1TransformListener.html|tf::TransformListener]]).
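For illustration, here is a minimal sketch of that blocking pattern; the frame names `base_link` and `laser` and the one-second timeout are placeholder choices:
{{{
#!cplusplus
#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "tf_lookup_example");
  tf::TransformListener listener;  // starts its own listening thread

  tf::StampedTransform transform;
  try
  {
    // Block for up to one second until the transform becomes
    // available, then look up the latest available value.
    listener.waitForTransform("base_link", "laser",
                              ros::Time(0), ros::Duration(1.0));
    listener.lookupTransform("base_link", "laser",
                             ros::Time(0), transform);
  }
  catch (tf::TransformException& ex)
  {
    ROS_ERROR("%s", ex.what());
    return 1;
  }

  ROS_INFO("laser origin in base_link: x=%f", transform.getOrigin().x());
  return 0;
}
}}}
Because the listener spins its own thread, the `waitForTransform` call can block here without starving the flow of incoming transforms.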
== How can I use Euler angles with tf data types? ==
See [[geometry/RotationMethods]].

== Why don't frame_ids use ROS namespaces? ==
There is already a hierarchy defined by the tf graph through its child/parent relationships, whereas for ROS names the hierarchy is defined by the name itself. Namespaces determine how communication channels are connected, while tf frame_ids describe how coordinate frames are physically connected. Thus, if you push down a set of nodes relating to 2D navigation, they will work in their own namespace and not have collisions on topics or services; the base, however, is still connected to the body in the same way, despite the namespace push down. Pushing frame_ids down namespaces therefore doesn't make sense.

There are a few other considerations. tf frames are not necessarily connected to any specific node. For example, on the PR2 the base_laser frame is published by the robot_state_publisher based on the URDF, but the hokuyo_node publishes scans in the base_laser frame, and rviz then transforms the data into some arbitrary frame for viewing. If any of them disagree about the full name of "base_laser", this will not work, for the tree in tf will no longer be connected.

== What is the purpose of tf_prefix? ==
The reason for the tf_prefix is to allow multiple similar robots to work together. For example, suppose you have two robots traveling to a conference on a truck. RobotA has 2 links, RobotB has 2 links, and there are also the world and the truck, with a bed to carry the robots.

The links for each robot are defined as follows:
{{{
right -> base
left -> base
}}}
But there are two of these robots, so we differentiate between them:
{{{
/a/left -> /a/base
/a/right -> /a/base
/b/left -> /b/base
/b/right -> /b/base
}}}
Next we have the truck, which in the "one robot case" would look like this:
{{{
bed -> chassis
chassis -> map
}}}
and we could connect the single robot with:
{{{
base -> bed
}}}
However, if we have 2 robots, we now need:
{{{
/a/base -> bed
/b/base -> bed
}}}

== Why do I see negative average delay in tf_monitor? ==
Negative average delay comes from transforms being timestamped in the future when sent, or from network delays when running in simulation. In simulation, time is sent over the '''/clock''' topic and may arrive after transforms sent from a different node with the same timestamp. A few nodes stamp their data in the future, most notably AMCL. This is valid for AMCL and other similar nodes publishing a small correction transform which is expected to remain approximately static and never have inherent velocity. The value is that AMCL can update at a slow rate while not preventing transforms through its correction from being used at a high rate.
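As an illustration of such future dating, a broadcaster along the following lines would show a negative average delay in `tf_monitor`. This is a sketch only: the `map` and `odom` frame names, the 2 Hz rate, and the one-second future offset are placeholder choices, and, as noted above, future dating is only appropriate for slowly changing correction transforms:
{{{
#!cplusplus
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "future_dated_broadcaster");
  ros::NodeHandle node;
  tf::TransformBroadcaster broadcaster;

  ros::Rate rate(2.0);  // a slow update rate, like a localization correction
  while (node.ok())
  {
    // An identity correction transform, standing in for a real estimate.
    tf::Transform correction(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0, 0, 0));
    // Stamp the transform in the future so that lookups through it keep
    // succeeding between the (slow) broadcasts.
    ros::Time future_stamp = ros::Time::now() + ros::Duration(1.0);
    broadcaster.sendTransform(
        tf::StampedTransform(correction, future_stamp, "map", "odom"));
    rate.sleep();
  }
  return 0;
}
}}}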