This page summarizes how the tf2 package was designed.

Design Overview

[Figure: tf2 design overview diagram (tf2.png)]

Design Goals

The high-level goal is to free developers and users from having to worry about which coordinate frame any particular piece of data is stored in.

A distributed system

  • Value: No bottleneck process, and all processes are one step away from the data for minimal latency.

    Implementation: Everything is broadcast and reassembled at the end consumer points. There can be multiple sources of tf data. Sources do not need to be synchronized, because interpolation is used, and data can arrive out of order.
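
The interpolation mentioned above can be sketched in plain C++. The StampedValue type below is a hypothetical one-dimensional stand-in; the real tf2 interpolates 3-D translations linearly and rotations with spherical linear interpolation.

```cpp
#include <cassert>

// Illustrative sketch only: a hypothetical 1-D timestamped value.
struct StampedValue {
  double stamp;  // time in seconds
  double value;  // e.g. one translation component
};

// Linearly interpolate between two stored samples that bracket the query
// time t, so data sources never need to publish at the same instant.
double interpolate(const StampedValue& a, const StampedValue& b, double t) {
  double ratio = (t - a.stamp) / (b.stamp - a.stamp);
  return a.value + ratio * (b.value - a.value);
}
```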

Only transform data between coordinate frames at the time of use

  • Value: Computational efficiency, bandwidth savings, and simplicity.

    Implementation: Transforms are stored as published and only composed into a net transform when a specific query is made.

Support queries on data which are timestamped at times other than the current time

  • Value: Handle data processing lag gracefully.

    Implementation: The interface class stores all transform data in memory and traverses the tree on request.
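
A timestamped query first has to find, for each link, the two stored samples that bracket the requested time. A minimal sketch, assuming a std::map keyed by timestamp (the container and names are illustrative, not tf2's actual internals):

```cpp
#include <cassert>
#include <iterator>
#include <map>
#include <stdexcept>

// The two stamps that bracket a query time.
struct Bracket { double before, after; };

// Look up the bracketing samples in a time-sorted buffer. Queries outside
// the stored history cannot be answered and raise an error.
Bracket findBracket(const std::map<double, double>& buffer, double t) {
  auto after = buffer.lower_bound(t);  // first sample with stamp >= t
  if (after == buffer.begin() || after == buffer.end())
    throw std::runtime_error("query time outside stored history");
  auto before = std::prev(after);
  return {before->first, after->first};
}
```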

Only have to know the name of the coordinate frame to work with data

  • Value: Ease of use for users/developers.

    Implementation: Use string frame_ids as unique identifiers.

The system doesn't need to know about the configuration beforehand and can handle reconfiguration on the fly

  • Value: Generic system for any configuration.

    Implementation: Use a directed tree structure. It allows fast traversal (O(n), where n is the depth of the tree) when evaluating a transform, and it can be reconfigured simply by redefining a link. It does not require any structure verification or maintenance of the data structure, beyond keeping a time-sorted linked list of data for each link.
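
The parent-link traversal can be sketched as follows (the parent map and function name are hypothetical, not tf2's actual data structure). Each frame stores only its parent, so relinking a frame is a single map update and no global verification is needed.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// Walk from a frame up to the root by following parent links. The cost is
// O(n) in the depth of the tree; computing a transform between two frames
// amounts to walking both paths to their common ancestor.
std::vector<std::string> pathToRoot(
    const std::unordered_map<std::string, std::string>& parent,
    std::string frame) {
  std::vector<std::string> path{frame};
  auto it = parent.find(frame);
  while (it != parent.end()) {
    frame = it->second;
    path.push_back(frame);
    it = parent.find(frame);
  }
  return path;
}
```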

Core is ROS agnostic

  • Value: Code reuse.

    Implementation: The core library is a pure C++ class. A second class provides the ROS interface and instantiates the core library.

Thread Safe Interface

  • Value: Can be used in a multithreaded program.

    Implementation: Mutexes around the data storage for each frame and around the frame_id lookup map. Each is locked and unlocked individually, so neither can block the other.
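
A minimal sketch of the lookup-map half of that scheme, with hypothetical class and member names (the real implementation additionally guards each frame's data store with its own mutex, locked independently of the map mutex):

```cpp
#include <cassert>
#include <mutex>
#include <string>
#include <unordered_map>

// Illustrative only: maps string frame_ids to compact integer ids under a
// dedicated mutex, so map lookups never block per-frame data access.
class FrameRegistry {
 public:
  int lookupOrAdd(const std::string& frame_id) {
    std::lock_guard<std::mutex> lock(map_mutex_);  // guards only the map
    auto it = ids_.find(frame_id);
    if (it != ids_.end()) return it->second;
    int id = static_cast<int>(ids_.size());
    ids_.emplace(frame_id, id);
    return id;
  }

 private:
  std::mutex map_mutex_;
  std::unordered_map<std::string, int> ids_;
};
```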

Multi-Robot Support

  • Value: Can be used with multiple robots that have the same or similar configurations.

    Implementation: Use a tf_prefix similar to a namespace for each robot.

Native Datatype Interfaces

  • Value: Users can interact with tf2_ros in their native datatypes; the conversion is handled implicitly by the library.

    Implementation: There is a tf2::convert(A, B) templated method that converts from type A to type B using the geometry_msgs types as the common factor.

 template <class A, class B>
 void convert(const A& a, B& b)
 {
   fromMsg(toMsg(a), b);
 }

As long as a datatype provides the methods msgType toMsg(datatype) and fromMsg(msgType, datatype), it can be automatically converted to any other datatype that defines the same methods with a matching msgType.

All tf2_ros interfaces can then be called with a native type in and a native type out. Note that the input and output types do not need to match.
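
For illustration, here is the pattern with entirely hypothetical types: VecMsg stands in for a geometry_msgs type, and MyVec and YourVec are two "native" datatypes from different libraries.

```cpp
#include <cassert>

// Hypothetical types demonstrating the toMsg/fromMsg pattern.
struct VecMsg  { double x, y, z; };  // plays the role of the common message type
struct MyVec   { double v[3]; };     // native type of library A
struct YourVec { double a, b, c; };  // native type of library B

VecMsg toMsg(const MyVec& in) { return {in.v[0], in.v[1], in.v[2]}; }
void fromMsg(const VecMsg& m, YourVec& out) { out = {m.x, m.y, m.z}; }

// Same shape as the tf2::convert shown above: route through the message type.
template <class A, class B>
void convert(const A& a, B& b) { fromMsg(toMsg(a), b); }
```

Because the message type is the common factor, adding one toMsg/fromMsg pair makes a datatype convertible to every other datatype already registered for the same message.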

Known Limitations of tf

Experience with the current implementation has made a number of limitations evident. Some of the most noticeable ones are listed below.

tf_prefix is confusing and counterintuitive

  • It can be set up and used, but doing so requires a strong knowledge of the system. It also requires many components to be compliant, which becomes invasive.
    • Solution: tf_prefix is no longer supported (frame_ids should not start with /)

The direction of the graph can be confusing

  • When setting up publishers it is common to set up one or more transforms backwards, which results in a bifurcated tree.
  • This also limits the structure and requires some non-intuitive relationships when there are multiple natural parent frames.
    • Not addressed by tf2

tf messages do not deal with low bandwidth networks well

  • Within a single well-connected network tf works fine, but on wireless and lossy low-bandwidth networks the tf messages start to take up a large fraction of the available bandwidth. This is partly due to the many-to-many nature of the topic and partly because there is no way to choose the desired data rate for each consumer.
    • Solution: Add support for the /tf_static topic, which uses latched publishing so static transforms are sent only once per subscriber.

tf doesn't keep a long history

  • The default storage timescale is 10 seconds, which is fine for live operation, but storing anything longer requires transforming the data into a known fixed frame so that it remains transformable later.
    • Solution: A long history can be kept in a specialized node and queried remotely.

Wiki: tf2/Design (last edited 2015-12-01 08:07:31 by Hauptmech)