Note: This tutorial assumes that you have completed the previous tutorials: ROS tutorials.

Please ask about problems and questions regarding this tutorial on answers.ros.org. Don't forget to include in your question the link to this page, the versions of your OS & ROS, and also add appropriate tags.

# Tuning the world model: new objects

**Description:** The probability of new objects entering the scene is varied. As a result, new objects appear sooner or later. This tutorial demonstrates the effect of this probability.

**Tutorial Level:** ADVANCED

**Next Tutorial:** Tuning the world model: object propagation models

## Goal

The world state estimate generated by wire depends on the probabilistic models underlying the world model. This tutorial shows how the probability of new objects influences the world state estimate. The two videos below show the output for two different settings. First, for a normal p(new object), the result is as expected:

If p(new objects) is decreased, the output changes, as shown in the second video:

## Approach

For each of the measurements provided to the world modeling algorithm, all possible data association explanations are considered:

- The measurement originates from an object not yet present in the world model (new object)
- The measurement originates from any of the objects already present in the world model
- The measurement represents a false positive (clutter)

This yields a hypothesis tree in which each leaf represents a hypothesis and each hypothesis represents a possible world state. Each hypothesis is assigned a probability of being correct, and the most probable hypothesis is returned as the world state estimate.
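To illustrate how the tree grows, the branching over the three association options can be sketched in a few lines of Python. This is a simplified sketch, not the wire implementation; the function and variable names are hypothetical and hypothesis probabilities are omitted:

```python
# Hypothetical sketch of hypothesis-tree growth (not the wire API):
# each incoming measurement extends every leaf hypothesis with all
# possible associations: new object, any existing object, or clutter.

def expand(hypotheses, measurement_id):
    """Return the child hypotheses created by one new measurement."""
    children = []
    for assignments, objects in hypotheses:
        # Option 1: the measurement originates from a new object
        children.append((assignments + [(measurement_id, "new")],
                         objects + [measurement_id]))
        # Option 2: the measurement matches an object already in the model
        for obj in objects:
            children.append((assignments + [(measurement_id, obj)], objects))
        # Option 3: the measurement is a false positive (clutter)
        children.append((assignments + [(measurement_id, "clutter")], objects))
    return children

# Start from a single empty hypothesis and feed two measurements.
leaves = [([], [])]
leaves = expand(leaves, "m1")  # 2 leaves: new or clutter
leaves = expand(leaves, "m2")  # each leaf branches again
print(len(leaves))             # 5 leaf hypotheses after two measurements
```

Note how the second measurement produces more branches under the leaf that already contains an object, since "existing object" associations become possible there.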

For the calculation of these probabilities, a Bayesian update law is used. The probabilistic models that represent the probability of a measurement being any of the above options are crucial for the quality of the algorithm's output. The models are defined in the models/world_object_models.xml file in the wire_core package and are object class dependent:

```
1 <knowledge>
2
3 <prior_new value="0.14" />
4 <prior_existing value="0.14" />
5 <prior_clutter value="0.72" />
6
7 <object_class name="object">
8
9 <behavior_model attribute="position" model="state_estimators/PositionEstimator">
10 <pnew type="uniform" dimensions="3" density="0.0001" />
11 <pclutter type="uniform" dimensions="3" density="0.0001" />
12
13 <param name="max_acceleration" value="8" />
14 <param name="fixed_pdf_cov" value="0.1" />
15 </behavior_model>
16
17 <behavior_model attribute="color" model="state_estimators/DiscreteEstimator">
18 <pnew type="discrete" domain_size="100" />
19 <pclutter type="discrete" domain_size="100" />
20 </behavior_model>
21
22 <behavior_model attribute="class_label" model="state_estimators/DiscreteEstimator">
23 <pnew type="discrete" domain_size="100" />
24 <pclutter type="discrete" domain_size="100" />
25 </behavior_model>
26
27 <behavior_model attribute="shape" model="state_estimators/DiscreteEstimator">
28 <pnew type="discrete" domain_size="10" />
29 <pclutter type="discrete" domain_size="10" />
30 </behavior_model>
31
32 </object_class>
33
34 <object_class name="person" base="object">
35
36 <behavior_model attribute="name" model="state_estimators/DiscreteEstimator">
37 <pnew type="discrete" domain_size="10" />
38 <pclutter type="discrete" domain_size="10" />
39 </behavior_model>
40
41 </object_class>
42
43 </knowledge>
```

## The code explained

The prior probabilities are independent of the object class. The values do not have to sum up to one; only their relative values matter. In general, it is useful to set the prior for false positives (clutter) greater than the prior for new objects.
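Since only the ratios matter, the priors can be thought of as unnormalized weights. A hypothetical normalization sketch (not the actual wire code), using the values from the file above:

```python
# Only the relative magnitudes of the priors matter: they can be
# normalized internally, so they do not have to sum to one.
def normalize(priors):
    total = sum(priors.values())
    return {key: value / total for key, value in priors.items()}

# Values from world_object_models.xml
weights = normalize({"new": 0.14, "existing": 0.14, "clutter": 0.72})
print(round(weights["clutter"], 2))  # 0.72
```

Scaling all three values by the same factor would leave the normalized weights, and hence the behavior of the world model, unchanged.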

```
7 <object_class name="object">
```

Definition of the default behavior model set, i.e., the models used for objects of unknown class or for instances that are not associated with a class label.

```
9 <behavior_model attribute="position" model="state_estimators/PositionEstimator">
10 <pnew type="uniform" dimensions="3" density="0.0001" />
11 <pclutter type="uniform" dimensions="3" density="0.0001" />
12
13 <param name="max_acceleration" value="8" />
14 <param name="fixed_pdf_cov" value="0.1" />
15 </behavior_model>
```

The first behavior model is for the continuous attribute position. The position will be estimated using a position estimator. This estimator is a multiple-model estimator that combines (i) a Kalman filter with a constant-velocity motion model and (ii) a uniform distribution.

After a position update, the position is represented by the Kalman filter estimate. During propagation, the 'trust' shifts towards the uniform distribution, which represents the 'object lost' hypothesis. The rate of this transition from Kalman filter to 'object lost' depends on the process noise of the Kalman filter, which is set by means of the max_acceleration parameter: the higher the maximum acceleration, the faster the uncertainty increases during propagation and the lower the 'trust' in the motion model. Custom models can be used as well (see the source code for examples).
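The role of max_acceleration can be sketched with a simplified 1-D variance propagation. This is an assumed, illustrative model (the standard discrete white-noise acceleration term), not the actual wire code; the function name is hypothetical:

```python
# Rough 1-D sketch of why max_acceleration controls how fast position
# uncertainty grows during propagation (assumption, not wire's code).
# With a constant-velocity model, unmodeled acceleration acts as
# process noise on the position with variance ~ a_max^2 * dt^4 / 4.
def predicted_position_variance(pos_var, vel_var, a_max, dt):
    """Position variance after predicting dt seconds ahead."""
    return pos_var + vel_var * dt**2 + (a_max**2) * dt**4 / 4.0

slow = predicted_position_variance(0.01, 0.1, a_max=1.0, dt=1.0)
fast = predicted_position_variance(0.01, 0.1, a_max=8.0, dt=1.0)
print(fast > slow)  # larger max_acceleration -> faster uncertainty growth
```

With a_max=8 (as in the file above), the predicted variance grows far more quickly than with a_max=1, so the Kalman estimate loses 'trust' sooner.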

Finally, the behavior model includes probabilistic models for new objects and clutter measurements. The type is uniform, which means that both new objects and clutter measurements can appear anywhere with equal probability. The densities represent the height of the uniform distribution.
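Because the distributions are uniform, the position likelihood of a measurement reduces to the configured density, so the association scores for new objects and clutter reduce to prior times density. A hypothetical sketch (not the wire source) combining the priors and densities from the file above:

```python
# Sketch (assumption, not the wire source): with uniform pnew/pclutter
# models, the position likelihood is simply the configured density,
# independent of where in space the measurement falls.
density = 0.0001                     # density="0.0001" for both models
prior_new, prior_clutter = 0.14, 0.72

score_new = prior_new * density      # unnormalized 'new object' score
score_clutter = prior_clutter * density
# With equal densities the priors decide: clutter is favoured here.
print(score_clutter > score_new)
```

This is why decreasing p(new object), as in the second video, makes the world model explain measurements as clutter for longer before committing to a new object.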

```
7 <object_class name="object">
8
9 <behavior_model attribute="position" model="state_estimators/PositionEstimator">
10 <pnew type="uniform" dimensions="3" density="0.0001" />
11 <pclutter type="uniform" dimensions="3" density="0.0001" />
12
13 <param name="max_acceleration" value="8" />
14 <param name="fixed_pdf_cov" value="0.1" />
15 </behavior_model>
16
17 <behavior_model attribute="color" model="state_estimators/DiscreteEstimator">
18 <pnew type="discrete" domain_size="100" />
19 <pclutter type="discrete" domain_size="100" />
20 </behavior_model>
```

The second behavior model is for the discrete attribute color, which is estimated by a discrete estimator. A probability mass function over 100 possible values will be maintained. If a measurement states that the color is red with a probability of 0.5, each of the other 99 possible colors gets a probability of 0.5/99 ≈ 0.005.
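The spreading of the remaining probability mass described above amounts to:

```python
# Discrete measurement model: mass 0.5 on the measured color "red",
# the remainder spread uniformly over the other 99 domain values.
domain_size = 100
p_measured = 0.5
p_other = (1.0 - p_measured) / (domain_size - 1)
print(round(p_other, 3))  # roughly 0.005 per remaining color
```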

```
22 <behavior_model attribute="class_label" model="state_estimators/DiscreteEstimator">
23 <pnew type="discrete" domain_size="100" />
24 <pclutter type="discrete" domain_size="100" />
25 </behavior_model>
26
27 <behavior_model attribute="shape" model="state_estimators/DiscreteEstimator">
28 <pnew type="discrete" domain_size="10" />
29 <pclutter type="discrete" domain_size="10" />
30 </behavior_model>
```

The third and fourth behavior models are for the discrete attributes class label and shape; the estimators work as explained above.
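The remaining lines of the file, repeated here from the full listing above, define the model set for the person class:

```
34 <object_class name="person" base="object">
35
36 <behavior_model attribute="name" model="state_estimators/DiscreteEstimator">
37 <pnew type="discrete" domain_size="10" />
38 <pclutter type="discrete" domain_size="10" />
39 </behavior_model>
40
41 </object_class>
```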

Here the behavior models for an object of class person are defined. By setting the base to object, all models from the default model set are inherited unless they are overridden here, i.e., only the models that differ have to be defined.

## Data

In order to be able to reproduce the result shown in the video above, make sure that you have downloaded and compiled the wire packages:

```
$ git clone https://github.com/tue-robotics/wire.git
$ catkin_make
```

Download the bag-file containing the data (demo03.bag) and decompress the file:

```
$ rosbag decompress demo03.bag
```

The bag file contains tf data, object detections, and both RGB and depth images. The images are only included for ease of interpretation and inspection; they are not used by wire.

## Reproducing the result

First, start a ROS core and set the use_sim_time parameter to true:

```
$ roscore
$ rosparam set use_sim_time true
```

Then launch wire_core:

```
$ roslaunch wire_core start.launch
```

In a second terminal, launch the visualization:

```
$ roslaunch wire_tutorials rviz_wire_kinetic.launch
```

Finally, play back the data:

```
$ rosbag play demo03.bag --clock
```

The results should be similar to the result in the first video above. By changing the probabilistic models (as indicated above), the result shown in the second video can be obtained.