This page summarizes how the tf package was designed.
Contents
- Design Goals
- A distributed system
- Only transform data between coordinate frames at the time of use
- Support queries on data which are timestamped at times other than the current time
- Only have to know the name of the coordinate frame to work with data
- The system doesn't need to know about the configuration beforehand and can handle reconfiguring on the fly
- Core is ROS agnostic
- Thread Safe Interface
- Multi-Robot Support
- Known Limitations
Design Goals
The high-level goal was to free developers and users from having to worry about which coordinate frame any specific piece of data is stored in.
A distributed system
Value: No bottleneck process, and every consumer is only one hop from each data source, minimizing latency.
Implementation: Everything is broadcast and reassembled at the end consumer points. There can be multiple sources of tf information. Interpolation removes the need for data to be synchronized, and data can arrive out of order.
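The interpolation idea can be sketched as follows. This is a minimal illustration of interpolating between timestamped samples, not tf's actual implementation (tf interpolates full transforms, including spherical linear interpolation of rotations; here scalar-tuple translations stand in to keep it short):

```python
from bisect import bisect_left

def interpolate(buffer, query_time):
    """Linearly interpolate a translation from a time-sorted list of
    (time, translation) samples. Queries outside the buffered range
    clamp to the nearest sample."""
    times = [t for t, _ in buffer]
    i = bisect_left(times, query_time)
    if i == 0:
        return buffer[0][1]
    if i == len(buffer):
        return buffer[-1][1]
    (t0, p0), (t1, p1) = buffer[i - 1], buffer[i]
    alpha = (query_time - t0) / (t1 - t0)
    return tuple(a + alpha * (b - a) for a, b in zip(p0, p1))

# Samples may arrive out of order; keeping the buffer sorted by time
# lets consumers query at any timestamp in between without requiring
# the producers to be synchronized.
samples = sorted([(0.0, (0.0, 0.0)), (2.0, (4.0, 0.0)), (1.0, (2.0, 2.0))])
print(interpolate(samples, 1.5))  # (3.0, 1.0)
```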
Only transform data between coordinate frames at the time of use
Value: Computational efficiency, bandwidth efficiency, and simplicity.
Implementation: Data is kept in its original frame and transformed only when a consumer requests it in another frame.
Support queries on data which are timestamped at times other than the current time
Value: Handle data processing lag gracefully.
Implementation: Interface class stores all transform data in memory and traverses tree on request.
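The in-memory, time-limited storage described above might look roughly like this. This is a hypothetical sketch (the class and method names are illustrative, not the tf API): timestamped entries are kept per link, entries older than the cache duration are pruned, and a lookup returns the stored entry closest in time to the requested stamp:

```python
import collections

class TransformBuffer:
    """Keeps timestamped transform data per link in memory.
    Entries older than cache_time (relative to the newest entry)
    are pruned, mirroring tf's limited history."""
    def __init__(self, cache_time=10.0):
        self.cache_time = cache_time
        self.data = collections.defaultdict(list)  # link -> [(stamp, value)]

    def insert(self, link, stamp, value):
        buf = self.data[link]
        buf.append((stamp, value))
        buf.sort(key=lambda e: e[0])          # tolerate out-of-order arrival
        cutoff = buf[-1][0] - self.cache_time
        self.data[link] = [e for e in buf if e[0] >= cutoff]

    def lookup(self, link, stamp):
        """Return the stored value closest in time to the requested stamp."""
        buf = self.data[link]
        if not buf:
            raise LookupError(link)
        return min(buf, key=lambda e: abs(e[0] - stamp))[1]
```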
Only have to know the name of the coordinate frame to work with data
Value: Ease of use for users/developers.
Implementation: Use string frame_ids as unique identifiers.
The system doesn't need to know about the configuration before hand and can handle reconfiguring on the fly
Value: Generic system for any configuration.
Implementation: Use a directed tree structure. This allows fast traversal (O(n), where n is the depth of the tree) when evaluating a transform, and the tree can be reconfigured simply by redefining a link. No structure verification or maintenance of the data structure is required, beyond maintaining a sorted linked list of data for each link.
Core is ROS agnostic
Value: Code reuse.
Implementation: Core library is C++ class. A second class provides ROS interface and instantiates the core library.
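The layering can be sketched like this (a hypothetical illustration of the pattern, not tf's classes; in tf the core is C++ and the wrapper speaks ROS messages): the core accepts only plain types, and a thin adapter unpacks transport-specific messages before delegating:

```python
class TransformerCore:
    """Transport-agnostic core: plain types in, plain types out.
    Knows nothing about ROS."""
    def __init__(self):
        self.store = {}  # child frame -> (parent frame, offset)

    def set_transform(self, child, parent, offset):
        self.store[child] = (parent, offset)

    def lookup(self, child):
        return self.store[child]

class RosTransformer:
    """Thin adapter: unpacks a message-like dict and delegates to the
    core, so the core can be reused outside ROS unchanged."""
    def __init__(self):
        self.core = TransformerCore()

    def handle_message(self, msg):
        self.core.set_transform(msg["child_frame_id"],
                                msg["frame_id"],
                                msg["offset"])
```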
Thread Safe Interface
Value: Can be used in a multithreaded program.
Implementation: Mutexes around the data storage for each frame, and a separate mutex around the frame_id lookup map. Each is locked and unlocked individually, so neither can block the other.
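A minimal sketch of this locking scheme (illustrative names, not the tf API): one lock guards only the lookup map, and each frame's data buffer carries its own lock, so inserting into one frame never blocks access to another:

```python
import threading

class FrameStore:
    """One mutex for the frame_id lookup map, plus one mutex per
    frame's data buffer; the two kinds of locks never nest across
    frames, so neither can block the other."""
    def __init__(self):
        self.map_lock = threading.Lock()
        self.frames = {}  # frame_id -> (lock, data list)

    def _frame(self, frame_id):
        with self.map_lock:  # protects only the lookup map
            if frame_id not in self.frames:
                self.frames[frame_id] = (threading.Lock(), [])
            return self.frames[frame_id]

    def insert(self, frame_id, entry):
        lock, data = self._frame(frame_id)
        with lock:  # protects only this frame's data buffer
            data.append(entry)
```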
Multi-Robot Support
Value: Can be used with multiple robots with the same or similar configurations.
Implementation: Use a tf_prefix similar to a namespace for each robot.
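The prefixing rule can be sketched as follows (a simplified illustration in the spirit of tf's frame name resolution, where a leading `/` marks an already-fully-qualified frame name):

```python
def resolve(tf_prefix, frame_id):
    """Qualify a frame_id with a robot's tf_prefix. Frame names that
    already start with '/' are treated as fully qualified and left alone."""
    if frame_id.startswith("/"):
        return frame_id
    if tf_prefix:
        return "/" + tf_prefix.strip("/") + "/" + frame_id
    return "/" + frame_id

# Two robots with identical internal frame names stay distinct:
print(resolve("robot1", "base_link"))  # /robot1/base_link
print(resolve("robot2", "base_link"))  # /robot2/base_link
```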
Known Limitations
Experience with the current implementation has revealed a number of limitations. Below are some of the most commonly noticed ones.
tf_prefix is confusing and counterintuitive
- It can be set up and used, but doing so requires a strong knowledge of the system. It also requires many components to be compliant, which gets invasive.
The direction of the graph can be confusing
- When setting up publishers it is common to set up one or more transforms backwards, which results in a bifurcated tree.
- This also limits the structure and requires some non-intuitive relationships when there are multiple natural parent frames.
tf messages do not deal with low bandwidth networks well
- Within a single well-connected network tf works fine, but as you transition to wireless, lossy, low-bandwidth networks, tf messages start to consume a large fraction of the available bandwidth. This is partly due to the many-to-many nature of the system and partly because there is no way to choose the desired data rate for each consumer.
tf doesn't keep a long history
- The default storage duration is 10 seconds, which is fine for live operation, but keeping anything longer than that requires transforming the data into a known fixed frame before storing it, so that it remains transformable later.