The STRANDS Project Logo

Welcome to STRANDS documentation!

This site contains the documentation for the software and data produced by the EU STRANDS Project. For more information on the scientific aims of the project, please see our IEEE RAM overview article or the STRANDS Project website.

The project created autonomous mobile robots which were successfully deployed for long periods in real user environments. In the process of this we created a great deal of open source software for AI and robotics applications. This software is all available via the STRANDS GitHub organisation. This site provides a single location where the documentation from across that organisation can be viewed. It is also the main location for software tutorials and guides for creating systems, and provides an entry point into using our software for new users.

Please note that a large amount of this site is automatically generated from our code and package documentation, so the structure is currently not perfect. Our scripts for automatically generating this site are available here.

Getting Started

If you wish to understand or reuse the full STRANDS system, you should follow the STRANDS system tutorial. If you want to set things up as fast as possible, see the quick start instructions. Both of these will leave you with a system which has ROS and STRANDS packages installed, and can run a simulation which uses some of the core STRANDS subsystems.

Core Subsystems

A STRANDS system is formed of many components which provide various pieces of functionality, ranging from navigation to user interaction. A list of all packages with a brief overview of their purpose can be found here. The following sections give a brief overview of some of the packages which form the core of the system.

STRANDS Navigation

Navigation forms the core of the movement capabilities of robots using the system. Our work provides a monitored navigation system which detects failures in navigation and triggers recovery behaviours, and a topological navigation system where navigation nodes (waypoints) are linked by edges which the robot can traverse. Topological navigation underpins many of the other STRANDS capabilities.
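As a flavour of how other components use it, the following is a minimal sketch of sending the robot to a topological node from Python. It assumes the indigo-era strands_navigation interface, i.e. a GotoNodeAction served on the topological_navigation action server with a target node name; check the topological navigation documentation for the exact message names in your release.

#!/usr/bin/env python
# Hedged sketch: drive to a topological node via the topological_navigation
# action server. GotoNodeAction/GotoNodeGoal and the 'target' field are
# assumed from the indigo-era strands_navigation interface.
import rospy
import actionlib
from topological_navigation.msg import GotoNodeAction, GotoNodeGoal

rospy.init_node('topological_navigation_example')

client = actionlib.SimpleActionClient('topological_navigation', GotoNodeAction)
client.wait_for_server()

goal = GotoNodeGoal()
goal.target = 'WayPoint7'  # a node name from your topological map
client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("Topological navigation result: %s", client.get_result())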

STRANDS Executive

The STRANDS executive controls the execution of tasks requested by users or generated by the system itself, prioritising them using various metrics such as expected completion time, probability of successful completion, and so on. It provides facilities for long-term task routines, task scheduling and task planning under uncertainty. There is a STRANDS Executive tutorial which covers the main parts of the system and an overview document.

Person Detection and Tracking

When operating in populated spaces it is crucial to be able to detect and track people. STRANDS produced an indoor multi-person tracker which fuses upper-body and leg detections to track multiple people. We have also produced a wheelchair and walking aid detector.

3D Mapping and Vision

One of the major outputs of the project is a collection of systems for discovering and learning about objects in everyday environments. These are collected together into the STRANDS 3D Mapping collection, described here.

Semantic Object Maps (SOMa)

The outputs of person detection and 3D mapping are stored in our Semantic Object Map (SOMa) which captures the information the robot gathers over long durations in a central store which supports a range of visualisations and queries. This is described here. SOMa is backed by our integration of MongoDB into ROS: MongoDB Store.
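As a quick illustration of the storage layer, here is a small sketch of storing and retrieving a ROS message with the MessageStoreProxy Python client provided by mongodb_store (it assumes mongodb_store.launch is already running).

#!/usr/bin/env python
# Store a message in MongoDB Store and read it back by name.
import rospy
from geometry_msgs.msg import Pose
from mongodb_store.message_store import MessageStoreProxy

rospy.init_node('mongodb_store_example')
msg_store = MessageStoreProxy()

p = Pose()
p.position.x = 1.0
stored_id = msg_store.insert_named("example pose", p)

# query_named returns a (message, meta) pair
pose, meta = msg_store.query_named("example pose", Pose._type)
rospy.loginfo("Retrieved pose %s: %s", stored_id, pose)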

Long-Term Data Processing (FreMEn and QSRLib)

After data is collected in SOMa our systems process it using various techniques. Major outputs of STRANDS include FreMEn, which provides frequency-based modelling of the temporal dimension of spatial representations, and QSRLib, a library for generating qualitative spatial relations from sensor data.

Datasets

You can find the datasets generated by the project here.

Documentation contents

STRANDS quick setup

These instructions get a STRANDS system running quickly, without any custom setup. For a custom setup you should look at the more detailed instructions here.

ROS and package setup

Installing ROS

The first step is to follow the instructions found at http://wiki.ros.org/indigo/Installation/Ubuntu. The full install of ROS with the package ros-indigo-desktop-full should contain all the base ROS packages that are required.

Installing STRANDS packages

To install the strands packages, you should follow the instructions at https://github.com/strands-project-releases/strands-releases/wiki.

Database

Many of our packages rely on the mongodb_store database package. You therefore need to start this prior to the rest of the system. The first time you run it, you need to create a directory to hold the data. This can be shared across multiple systems, or you can use one for each system you build.

One time setup, with $DATA_DIR being where you’re storing your databases:

mkdir $DATA_DIR/strands_sim_example

Launch the database

roslaunch mongodb_store mongodb_store.launch db_path:=$DATA_DIR/strands_sim_example

Topological Map Creation

The STRANDS navigation system relies on continuous navigation (using move_base) and higher-level topological navigation. We have already created a topological map for the Birmingham simulation environment. You can add this to the database as follows (assuming you have already launched mongodb_store as above).

rosrun topological_utils insert_map.py $(rospack find strands_morse)/bham/maps/cs_lg_sim.tplg cs_lg cs_lg

If you wish to create your own topological map, follow the topological navigation README.

Simulation

The following launches a simulation of a SCITOS A5 in the MORSE simulator. The map is based on the lower ground floor of the School of Computer Science, University of Birmingham.

roslaunch strands_morse bham_cs_morse.launch

Visual Docking

The robot docks on its charging station using a visual docking process. This needs to be calibrated the first time you use the system. When you launch the simulation, the robot should already be on the docking station. From this position (and assuming the mongodb_store database is running) do the following in two terminals. Launch the charging nodes:

roslaunch scitos_docking charging.launch

Then calibrate the docking software

rosrun scitos_docking visual_charging_client calibrate 100

After this you can kill the charging.launch file as it will be launched again later as part of the full system.

UI

The STRANDS system relies on a number of UI components to communicate with the world (e.g. asking for help with navigation actions). Therefore once you have the database running, launch the UI nodes:

roslaunch strands_bringup strands_ui.launch

If you want to see what the UI is showing (which will be nothing yet), open a browser at http://localhost:8090.

Tasks

The STRANDS system allows you to write tasks which the robot executes at nodes in the topological map. For a detailed description of how this works, see the documentation for the strands_executive framework.

Start the executive framework with

roslaunch task_executor task-scheduler.launch

The simplest task is to wait somewhere for some time. The following example script will add the task for the robot to wait at WayPoint7 for 20 seconds:

rosrun task_executor example_add_client.py WayPoint7 20

If you add multiple tasks the scheduler will optimise their order of execution taking into account time windows and travel time.
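The example_add_client.py script hides the details of task creation. The sketch below shows roughly what such a client might do, assuming the documented strands_executive interfaces (a Task message with start_node_id, max_duration and a time window, plus the task_executor/add_tasks and task_executor/set_execution_status services); see the executive tutorial for the authoritative API.

#!/usr/bin/env python
# Hedged sketch: add a task at a waypoint to the STRANDS executive.
import rospy
from strands_executive_msgs.msg import Task
from strands_executive_msgs.srv import AddTasks, SetExecutionStatus

rospy.init_node('add_task_example')

task = Task()
task.start_node_id = 'WayPoint7'            # where the task should be executed
task.max_duration = rospy.Duration(20)      # how long the task may take
task.start_after = rospy.get_rostime()      # execution time window
task.end_before = task.start_after + rospy.Duration(60 * 60)
# task.action is left empty here for a plain wait task; set it to the name of
# an action server to have the executive call your own behaviour instead.

rospy.wait_for_service('task_executor/add_tasks')
add_tasks = rospy.ServiceProxy('task_executor/add_tasks', AddTasks)
set_status = rospy.ServiceProxy('task_executor/set_execution_status', SetExecutionStatus)
add_tasks([task])
set_status(True)   # tell the executor to start executing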

Routine Behaviours

To create repetitive long-term behaviour, you can use a routine script. One example of this is a patrol routine which visits every waypoint in the environment in regular windows. The execution of the following will start this process and trigger actions when the robot is idle.

rosrun routine_behaviours patroller_routine_node.py

You can still add tasks during routine execution and the scheduler will fit them in as appropriate.

Detailed STRANDS system setup

This readme will guide you through the setup and use of a system which uses the STRANDS packages. It is assumed that you have a system that is running Ubuntu 14.04. If you just want to get something running quickly, see the quick setup page.

ROS and package setup

Installing ROS

The first step is to follow the instructions found at http://wiki.ros.org/indigo/Installation/Ubuntu. The full install of ROS with the package ros-indigo-desktop-full should contain all the base ROS packages that are required.

Installing STRANDS packages

To install the strands packages, you should follow the instructions at https://github.com/strands-project-releases/strands-releases/wiki.

Custom system setup

Database

While the system is running, it will store data in a mongodb database so that various components can access it. Depending on what tasks you are running, the database can grow quite large (on the order of hundreds of gigabytes), so you should create the database on a drive with a lot of space.

The database will be automatically created by the database node, but requires the creation of a directory before it is launched. Running the following will initialise the database. The database is launched in the same way when the full system is running - it should be run before any other part of the system runs, as many of them require it to be up.

DATA_DIR=/my/data/directory
mkdir -p $DATA_DIR/my_database_dir
roslaunch mongodb_store mongodb_store.launch db_path:=$DATA_DIR/my_database_dir

You can see more information about the database system at https://github.com/strands-project/mongodb_store/tree/hydro-devel/mongodb_store

Using a simulation

You can use a simulated environment in order to test the system, either with existing simulated environments, or you can set things up to simulate the environment you wish to operate in. Later on in this document, there are instructions on how to set up a metric map and topological map. These can both be applied in a simulated environment as well.

Existing simulations

Simulations are based on the strands_morse package. If you look in there, you’ll find several real environments which have already been set up. We’ll look at the strands_morse/bham directory here.

The maps directory contains maps, as you might expect. There are yaml and pgm map pairs for the ROS grid maps. The tplg file is a yaml export of the topological map for that area.

The urdf directory contains urdf files for the environment, which make use of dae files in the meshes directory.

The launch directory contains some launch files which can be used to launch everything with just one command.

The top level python files are used to construct various environments, setting up the robot position, model and so on.

Before starting the simulation, you’ll need to add the topological map to your database using

rosrun topological_utils load_yaml_map.py `rospack find strands_morse`/bham/maps/cs_lg_sim.tplg

Once this is done, run

roslaunch strands_morse bham_cs_morse.launch

This will bring up the blender window with the environment loaded and populated with objects and a robot.

roslaunch strands_morse bham_cs_nav2d.launch

This will bring up all the navigation and related nodes.

Finally, use rviz to visualise the map and topological map:

roslaunch strands_morse bham_default_rviz.launch

To allow the robot to move, you’ll have to move it backwards first - you can do this by pressing the down arrow on your keyboard when focusing the blender window. Once you’ve done that, you can click the green arrows to make the robot navigate there, or you can specify a navigation location using the 2d nav goal tool in rviz.

Custom simulations

To set up a custom simulation, you’ll first need to generate a 3D environment to use. A simple environment is easy to construct. You’ll need to have GIMP, Inkscape and Blender installed to create it.

If you want to skip the details, you can find the files created by this section here. You will still have to add the topological maps to your mongo database with rosrun topological_utils load_yaml_map.py maps/basic_map.tpl.

PNG map

The first step is to use GIMP to create an image of the walls in the map that you want to use. The image below is made using the pencil tool and holding ctrl+shift to make straight lines. It has a scale of 35px/cm. We’ll use this later to scale the environment to our robot.

You can make something like this map using blueprints for the area you are working in and the process below should be similar.

SVG map

Once you have an image which contains only black and white pixels, open inkscape and import the image, with the embed option.

Make sure the image is selected, and then open the trace bitmap dialogue with alt+shift+b. Select the single scan colour quantisation option, with 2 colours, and check the box to invert the image. Click the update button on the right hand side and you should see the result of the tracing. This tracing will convert the image into a vector graphic which can be used in blender. You can fiddle with the options until you get a result that looks good. Once you’re satisfied, press the OK button. Then, save the image as an svg file.

Blender model

Open Blender, and delete the cube that is in the space with the delete key. Then, with file>import, import the svg that you just created. You should see it in the space as some black lines. You can select it on the top right hand side, where it will exist as a curve. The image we started with had a scale of 35px/cm, which will be very small for our robot, which is around 80cm across (assuming that we’re using the Scitos G5).

On the right hand toolbar, you should see a set of icons - a camera, some photos, a chain link, a cube, and so on. Select the cube icon. This will bring up a set of options which include scaling. In the image, some openings which could represent doors are approximately 50 pixels wide. We’ll make these openings 1.5 metres wide, to make them easy to get through. This means that each pixel has to be 0.03 (1.5/50) metres. At 35px/cm, each pixel in the image was 0.000286 metres. So, in order to get the size we want, we should scale each pixel by approximately 105 (0.03/0.000286). We’ll apply this scaling to both the x and y axes.
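If you want to sanity-check those numbers, the scale factor follows directly from the desired door width and the image resolution:

# Quick check of the Blender scale factor: 50 px door openings should come
# out at 1.5 m, and the source image is 35 px/cm.
door_px = 50.0
door_m = 1.5
target_m_per_px = door_m / door_px        # 0.03 m per pixel
image_m_per_px = 0.01 / 35.0              # ~0.000286 m per pixel at 35 px/cm
scale = target_m_per_px / image_m_per_px  # ~105
print(round(scale))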

Once that scaling is done, we also need to make some proper 3D walls with height. Select the curve icon on the right hand side, and you should see under the geometry section the option to extrude. Set this value to 1.5 and it should be OK. Since extruding goes in both vertical directions, shift the model up by 1.5 in the transform options in the cube section.

We also need a floor, so add a plane in using add>mesh>plane. Scale it so that it covers approximately the area needed to cover all the floor space in the map, and transform it so that it sits below the imported map. You may wish to translate or rotate the map so that it sits in the positive quadrant of the space - when it is imported it will sit on the positive x axis but negative y axis.

There also needs to be a light source so you can see what is going on. Instead of the weak lamp that currently exists in the space, you should add a sun (add>lamp>sun), which provides much more light.

The final step is to convert the curve to a mesh so that it is correctly displayed. With the curve selected, press alt+c, which will bring up a conversion menu. Use the “mesh from curve” option, and then save the blender file.
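If you prefer scripting this step, the same conversion can be done from Blender’s Python console. A small sketch, assuming the Blender 2.7x API and that your imported curve object is called 'Curve' (check the actual name in the outliner):

# Run inside Blender's Python console (2.7x API assumed).
import bpy

curve_obj = bpy.data.objects['Curve']         # the imported SVG curve (name may differ)
bpy.context.scene.objects.active = curve_obj  # make it the active object
curve_obj.select = True
bpy.ops.object.convert(target='MESH')         # equivalent to alt+c > "mesh from curve"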

You can find example files created from this process here.

Creating the simulation files

Now that we have all these models, we need to create some files to run everything. We’ll put everything into a new package for convenience.

First, create a new ros package in your workspace.

roscd
cd src
catkin_create_pkg basic_example
cd basic_example
mkdir scripts

In the scripts directory, create a script simulator.sh that will be used to run the simulation. Its basic contents should look something like the following. It sets up various paths and then runs another file which defines what the simulation environment actually looks like:

#!/bin/bash
environment_name="basic_example"
strands_morse=`rospack find strands_morse`
example=`rospack find basic_example`
path="$example"
common="$strands_morse/strands_sim"

PYTHONPATH="$path/src:$common/src:$PYTHONPATH"
MORSE_RESOURCE_PATH="$strands_morse:$common/data:$common/robots:$path:$MORSE_RESOURCE_PATH"
export MORSE_RESOURCE_PATH PYTHONPATH
added=`$strands_morse/morse_config.py $environment_name $path`
echo "Running morse on $path with PYTHONPATH=$PYTHONPATH and MORSE_RESOURCE_PATH=$MORSE_RESOURCE_PATH"
PATH=/opt/strands-morse-simulator/bin:$PATH

morse run basic_example `rospack find basic_example`/example_sim.py

Don’t forget to run chmod +x scripts/simulator.sh to make it executable.

In the top level directory, create the simulation definition (example_sim.py)

#! /usr/bin/env morseexec

import sys
import subprocess
import os
import random

from morse.builder import *
from strands_sim.builder.robots import Scitosa5

# Other options for cameras are WITH_CAMERAS or WITHOUT_DEPTHCAMS. For those set fastmode=False below
robot = Scitosa5(with_cameras = Scitosa5.WITHOUT_CAMERAS)
# Specify the initial position and rotation of the robot
robot.translate(x=2,y=2, z=0)
robot.rotate(z=-1.57)

# Specify where the model of the environment is
model_file=os.path.join(os.path.dirname(os.path.abspath( __file__)),'maps/basic_map.blend')
# Create the environment with the model file, and use fast mode - you can do
# this to speed things up a little when you're using the scitos A5 without
# cameras.
env = Environment(model_file,fastmode=True)
# Place the camera in the environment
env.set_camera_location([0, 0, 10])
# Aim the camera so that it's looking at the environment
env.set_camera_rotation([0.5, 0, -0.5])

Download the basic map created above from github into the maps directory.

roscd basic_example
mkdir maps
cd maps
wget https://github.com/strands-project/strands_documentation/raw/master/resources/basic_map.blend

Create a launch file which will be used to launch the simulator (launch/basic_example.launch)

<launch>

   <!-- Scitos robot -->
   <include file="$(find strands_morse)/launch/scitos.launch"/>

   <node pkg="basic_example" type="simulator.sh" respawn="false" name="basic_example" output="screen"/>

</launch>

Finally, compile the package with catkin build basic_example. You should then be able to run roslaunch basic_example basic_example.launch, and see a robot in the world.

At this point, the robot will not be able to move. The following file (launch/basic_example_nav.launch) will launch the required parts of the strands system.

<launch>
  <!-- declare arg to be passed in -->
  <arg name="with_chest_xtion" default="false"/>
  <arg name="mon_nav_config_file"  default="" />
  <arg name="max_bumper_recoveries" default=".inf"/>
  <arg name="wait_reset_bumper_duration" default="0.0"/>
  <arg name="topological_navigation_retries" default="3"/>
  <arg name="topological_map_name" default="basic_map"/>
  <arg name="map" default="$(find strands_morse)/basic_example/maps/basic_map.yaml"/>

  <!-- 2D Navigation -->
  <include file="$(find strands_movebase)/launch/movebase.launch">
      <arg name="map" value="$(arg map)"/>
      <arg name="with_chest_xtion" value="$(arg with_chest_xtion)"/>
  </include>

  <node pkg="monitored_navigation" type="monitored_nav.py" name="monitored_nav" output="screen" args="$(arg mon_nav_config_file)">
    <param name="wait_reset_bumper_duration" value="$(arg wait_reset_bumper_duration)"/>
    <rosparam param="/monitored_navigation/recover_states/recover_bumper" subst_value="True">[True, $(arg max_bumper_recoveries)]</rosparam>
  </node>

  <node pkg="topological_navigation" type="map_manager.py" name="topological_map_manager" args="$(arg topological_map_name)" respawn="true"/>
  <node pkg="topological_navigation" name="topological_localisation" type="localisation.py" output="screen" respawn="true"/>
  <node pkg="topological_navigation" type="visualise_map.py" name="visualise_map" args="$(arg topological_map_name)" respawn="true"/>

  <node pkg="topological_navigation" name="topological_navigation" type="navigation.py" output="screen" respawn="true">
    <param name="retries" type="int" value="$(arg topological_navigation_retries)"/>
  </node>

  <node pkg="tf" type="static_transform_publisher" name="env_broadcaster"
        args="0 0 0 0 0 0 /odom /map 200">
  </node>
</launch>

You can also use the following launch file (launch/basic_example_rviz.launch) to launch an rviz instance with various interactive markers set up.

<launch>
  <node pkg="rviz" type="rviz" name="rviz" args="-d $(find strands_morse)/basic_example/default.rviz"/>
</launch>

To use this, you’ll first have to construct a pgm map. You can do this by colouring the image you used to create the simulation map with the correct colours for ROS map usage (e.g. the basic map). Alternatively, you can also use gmapping - see below for instructions. You should save the map to maps/basic_map.pgm and maps/basic_map.yaml, or save it elsewhere and point the above launch file to the correct location. If you make a map from the image, you will have to create a corresponding yaml file to describe it and give the scaling of the image and some other details. See map server for details on the yaml format.
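If you create the pgm by hand, the accompanying yaml only needs the standard map_server keys. The following is a minimal sketch that writes one from Python; the resolution and origin values are placeholders that you must adapt to your own map.

#!/usr/bin/env python
# Write a minimal map_server-style yaml for a hand-made pgm map.
import yaml

map_metadata = {
    'image': 'basic_map.pgm',       # pgm file sitting next to this yaml
    'resolution': 0.05,             # metres per pixel (placeholder)
    'origin': [0.0, 0.0, 0.0],      # x, y, yaw of the lower-left corner
    'negate': 0,
    'occupied_thresh': 0.65,
    'free_thresh': 0.196,
}

with open('maps/basic_map.yaml', 'w') as f:
    yaml.safe_dump(map_metadata, f, default_flow_style=False)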

If you use gmapping, you can use rosrun teleop_twist_keyboard teleop_twist_keyboard.py to control the motion of the robot. You may have to install this package first. You should also run rviz so that you can see the map being constructed and make sure you haven’t missed any part of it. You can leave the map as it is, or trim it to remove some of the excess parts if your map is small. In that case you will need to change the origin of the map so that it corresponds with where you want your origin to be.

You should follow the instructions in the topological map section below to create a topological map for the environment. Once you’ve created it and inserted it into the mongo database, you should change the default map_name to the name of the map in your database. You can find an example here.

When running the system, you may have to set the position of the robot in rviz to the correct location on the map, as the origin of the map there and in the simulation does not align.

You can find documentation for the MORSE simulator here, which gives more details about what you can do in the example_sim.py file.

Metric map

The metric map is a 2D map of the operating environment, where each cell of a grid is populated with a value which represents whether that cell is occupied by an obstacle, is empty, or has an unknown occupancy state. The quickest and easiest way to map your environment is using the ROS gmapping package. How you use this package will depend on the type of robot you have. The package requires that you have laser and odometry data being published to the ROS system.

Assuming that your laser data is being published on /base_scan, and odometry on /odom, you can start the mapping process as below. The maxUrange parameter defines a threshold on the distance of usable range measurements received. For example, setting this to 20 will discard any readings received which are beyond 20 metres.

rosrun gmapping slam_gmapping scan:=base_scan maxUrange:=20

While this runs, you can observe the map being built in the rviz utility by adding a display for the /map topic. Push the robot around in your operation area. You should try to move relatively slowly. You should also try to ensure that you revisit previously mapped areas after going around the environment, so that the map can be properly adjusted.

Once you are happy with your map, you should save it using the map_server (http://wiki.ros.org/map_server):

rosrun map_server map_saver -f my_map map:=my_map_topic

This will produce a .pgm file and a .yaml file. The .pgm file contains an image of the map, which you can manipulate with an image editing program if necessary. The .yaml file contains information about the map. If something strange is happening with your map, then it might be worth checking that this file is set up to point to the correct .pgm file. You can also adjust the resolution of the map and its origin in the .yaml file.

At this point, you may wish to clean up the image to remove dynamic obstacles from the map. In GIMP, you can do this by using the pencil tool with white selected.

Along with the base map, it is also possible to provide a “nogo” map, which is used to more strictly define which areas are passable and which are not. You can use this to restrict the robot’s movement in an area where there are no walls to obstruct the robot’s motion. The nogo map should duplicate the base map. You can then draw obstacles onto the map in black in the places where you would like to have a phantom obstacle. The pencil tool in GIMP is again useful. We recommend creating a new GIMP file with the nogo map on a new layer so that it can be more easily modified if necessary. GIMP can export the file to a .pgm file.

Once you are happy with your map, you can use

rosrun map_server map_server my_map.yaml map:=mymap

to make the map available on the /mymap topic.

Adding to the map

Sometimes it may be necessary to remap parts of the map due to changes in the environment. You can use gmapping to do this, and then stitch the images together in an image editing program. Sometimes, you may need to rotate the images. You should change the interpolation settings when you do this to “none” to prevent blurring. If you somehow end up with a blurred map, it is possible to fix it as follows.

Using GIMP, first make two duplicates of the layer containing the map. In the first duplicated layer, use colours>threshold to extract the black regions. A lower threshold of between 190 and 203 seems to be effective, with the upper at 255. You should tweak the lower threshold so that the grey unknown areas are white, and most of the obstacles are black. Then, using select>by colour, select the white part of the layer and cut and paste it onto a new layer (Ctrl+X, Ctrl+V). When you paste, a floating selection layer will come up. Right click this floating selection in the layer list, and send it to a new layer. You now have two layers, one with free space, and one with obstacles.

In the other duplicated layer, do the same thing, but now extract the obstacles and unknown regions by thresholding. A lower threshold of between 230 and 240 should work. Select the white region with the colour select tool again, and delete it. Select the black pixels, and then use select>shrink to shrink the selection by a couple of pixels; 2 or 3 should be sufficient. With this selection still active, create a new layer. Use the pipette tool to sample the “unknown” cell colour from the original map. Paint over the selected area with the pencil tool so that it has the “unknown” colour. Arrange the three layers so that the obstacles are on top, unknown regions below, and free space below that. Finally, merge the three layers by right clicking the obstacle layer and clicking “merge down”. Do this again to merge the newly created layer and the free space layer.
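If you would rather not do this by hand, the same de-blurring can be approximated outside GIMP. The sketch below uses numpy and Pillow with thresholds similar to those suggested above and the usual map_server grey values (254 free, 205 unknown, 0 occupied); the file names and thresholds are placeholders to tune for your map.

#!/usr/bin/env python
# Rough alternative to the GIMP procedure: classify each pixel of a blurred
# map into free / unknown / occupied by thresholding.
import numpy as np
from PIL import Image

img = np.array(Image.open('blurred_map.pgm'))   # placeholder input file

clean = np.full_like(img, 205)   # start with everything "unknown"
clean[img > 235] = 254           # bright pixels become free space
clean[img < 200] = 0             # dark pixels become obstacles

Image.fromarray(clean).save('clean_map.pgm')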

Topological map

Once you have a metric map, you need to set up a topological map for the robot to use for path planning and other actions. The topological map is made up of nodes, which represent some point of interest or navigation location, and edges, which are connections between the nodes. The easiest way to set up the topological map is using the strands utilities created for that purpose.

If you already have a map, you can add it into the database with

rosrun topological_utils load_yaml_map.py /path/to/my.yaml

This yaml file can be produced for a map that exists in the database using

rosrun topological_utils map_to_yaml.py map_pointset my_topmap.yaml

You can see which topological maps already exist in the database with

rosrun topological_utils list_maps

If you haven’t yet created a topological map you can add an empty topological map to the database with

rosrun topological_utils insert_empty_map.py my_pointset_name

Modifying the map

The best way to modify the map is to use the topological_rviz_tools package. This provides some tools and a panel in rviz which will allow you to quickly and easily modify things. If you need more direct access, you can always dump the map from the database (with map_to_yaml.py), edit the file, and then replace the values in the database with the modified values. This can result in internal inconsistencies, so it is not recommended.

You can launch the rviz tool as follows:

roslaunch topological_rviz_tools strands_rviz_topmap.launch map:=/path/to/map.yaml topmap:=topmap_pointset db_path:=/path/to/db

For the basic_example, if you’ve already created the metric map and added an empty topological map to the database: /path/to/map.yaml will be the path to your metric map yaml, topmap_pointset will be the name you used for my_pointset_name in rosrun topological_utils insert_empty_map.py my_pointset_name and /path/to/db is the path to your mongo db.

Once you have added a new node to the map, you should delete the temp_node. For instructions on using the rviz topological map editor, see the readme here.

If you intend to use the mdp-executor for planning (as we will in this tutorial), with a node where the robot can charge (such as the Scitos docking stations), you should ensure that the charging node’s localise_by_topic field looks like this:

localise_by_topic: '{"topic": "/battery_state", "field": "charging", "val": true, "localise_anywhere": false}'

This will ensure that the robot does not dock and undock repeatedly due to inaccurate localisation. The system will assume that it is at the charging station node as long as the charging field of the battery state message is true.

Launching the core nodes

In order to run the system, core nodes need to run. In general, this is the navigation, executive and database. You will also need to ensure that there are nodes providing odometry data and laser scans from your robot setup on the /odom and /scan topics. You should also ensure that you have battery data being published on the /battery_status topic using the Scitos message format:

std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
float32 voltage
float32 current
int8 lifePercent
int16 lifeTime
bool charging
bool powerSupplyPresent
float32[] cellVoltage

If you wish to use your own battery message, you will need to change some things in the routine classes in strands_executive_behaviours. You will need to modify this file in order to set things up for your required voltages.
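For testing parts of the system without a real robot, you can publish this message yourself. Below is a rough sketch of a fake battery publisher; it assumes the message type above is scitos_msgs/BatteryState, and uses the /battery_status topic named in this section (the docking section refers to /battery_state, so use whichever your system actually expects).

#!/usr/bin/env python
# Publish fake battery data in the Scitos format shown above.
# scitos_msgs/BatteryState and the topic name are assumptions; adjust as needed.
import rospy
from scitos_msgs.msg import BatteryState

rospy.init_node('fake_battery_publisher')
pub = rospy.Publisher('/battery_status', BatteryState, queue_size=1)

rate = rospy.Rate(1)
while not rospy.is_shutdown():
    msg = BatteryState()
    msg.header.stamp = rospy.get_rostime()
    msg.voltage = 24.0
    msg.current = -1.5
    msg.lifePercent = 80
    msg.lifeTime = 120
    msg.charging = False
    msg.powerSupplyPresent = False
    pub.publish(msg)
    rate.sleep()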

We’ll assume here that the system is a scitos A5 robot.

The first thing to start is roscore as usual. We prefer to start roscore independently of other launch files so that they can be restarted if necessary without breaking the system.

After that, the robot drivers should be started

roslaunch --wait strands_bringup strands_robot.launch with_mux:=false with_magnetic_barrier:=false

Then, the database.

roslaunch --wait strands_bringup strands_core.launch db_path:=$DB_PATH

The navigation requires the UI to be started before it is fully initialised.

HOST_IP=$EXTERNAL_UI_IP roslaunch --wait strands_bringup strands_ui.launch

The EXTERNAL_UI_IP is the IP at which the interface will be displayed. Rather than localhost, you should specify the IP address assigned to the machine. You can check this with ifconfig. You should then open a browser and access EXTERNAL_UI_IP:8090. For example, if you have the IP 10.0.11.161, then you would access 10.0.11.161:8090.

Basic navigation is launched with

roslaunch --wait strands_bringup strands_navigation.launch positionUpdate:=false map:=$NAV_MAP with_no_go_map:=$WITH_NO_GO no_go_map:=$NOGO_MAP topological_map:=$TOP_MAP

NAV_MAP is the map to use for navigation, and should point to a yaml file, such as that created by the map_saver.

NOGO_MAP is a map that is used to specify nogo areas. It should point to a yaml map. This can be used to draw lines in open space which the robot will not cross, which can be useful for doorways or other areas which the robot should not enter.

TOP_MAP is the name of the topological map corresponding to the navigation map. This name should exist in the database that has been loaded above.

Finally, the executive deals with tasks.

roslaunch --wait task_executor mdp-executor.launch interruptible_wait:=false combined_sort:=true

Routine

The routine allows tasks to be scheduled on a regular basis. A task can be pretty much anything you define. You can schedule tasks to be performed within a specific time window each day. The routine also defines when the robot is active. You can specify when the robot should be active and when it should remain on the charging station for the day.

While you can set up your own routine in a python script, it is also possible to do it using the automated_routine package. You will need to set up a yaml file containing various settings for timings, actions and so on. An example with comments can be found here.

The routine requires that other parts of the system are already running, so it should be launched last.

roslaunch --wait automated_routine automated_routine.launch routine_config:=$ROUTINE_CONFIG

ROUTINE_CONFIG refers to the location of the yaml file which defines the routine.

To have a task run, all you need is an action server which will perform the required action, and an action definition corresponding to it. The task objects created by the routine define parameters for populating the action goal, and which action server the populated goal should be passed to in order for the task to be executed.

To see an example of what more complex code for a custom task might look like, see here. You can see more about tasks here.

Example task in simulation

It’s also possible to run the routine in simulation. You’ll need to run the executor first with roslaunch strands_morse basic_example_executor.launch. The routine makes use of the file strands_morse/basic_example/conf/basic_routine.yaml. If you follow the instructions below to create a basic test action, you can leave this as is, but if you’d like to do something else you can modify it however you like.

Here is a small example task that you can use to test the routine. Create a package in your workspace with catkin_create_pkg print_string rospy std_msgs message_generation.

In the scripts directory, create a print_string.py script and make sure it’s executable with chmod +x print_string.py

cd print_string
mkdir scripts
cd scripts
touch print_string.py
chmod +x print_string.py

The script should contain the following code:

#! /usr/bin/env python
import rospy
import actionlib
from print_string.msg import PrintMessageAction

class print_string(object):

    def __init__(self):
        self.server = actionlib.SimpleActionServer('print_string_action', PrintMessageAction, self.process_request, False)
        self.server.start()

    def process_request(self, req):
        rospy.loginfo("Hello, here's a message at waypoint {0}: {1}".format(req.waypoint, req.message))
        self.server.set_succeeded()

if __name__ == '__main__':
    rospy.init_node('print_string_action')
    ps = print_string()
    rospy.loginfo("Waiting for action requests.")
    rospy.spin()

Tasks will be created by the routine which will send the robot to the waypoints requested in the routine definition, and then a string will be printed wherever you run this script. The PrintMessage action is defined as follows:

string waypoint
string message
---
bool result
---

You should create an action directory in the package and create a file PrintMessage.action with the above contents.

You’ll also need to populate the CMakeLists.txt and package.xml files like this:

cmake_minimum_required(VERSION 2.8.3)
project(print_string)

find_package(catkin REQUIRED COMPONENTS
  rospy
  std_msgs
  message_generation
  actionlib_msgs
  actionlib
)

add_action_files(
  DIRECTORY action
  FILES
  PrintMessage.action
)

generate_messages(
  DEPENDENCIES
  std_msgs  # Or other packages containing msgs
  actionlib_msgs
)

catkin_package()

include_directories(
  ${catkin_INCLUDE_DIRS}
)

<?xml version="1.0"?>
<package>
  <name>print_string</name>
  <version>0.0.0</version>
  <description>The print_string package</description>

  <maintainer email="me@mail.net">me</maintainer>

  <license>TODO</license>

  <buildtool_depend>catkin</buildtool_depend>
  <build_depend>actionlib</build_depend>
  <build_depend>actionlib_msgs</build_depend>
  <build_depend>rospy</build_depend>
  <build_depend>message_generation</build_depend>
  <run_depend>rospy</run_depend>
  <run_depend>message_runtime</run_depend>
  <run_depend>actionlib</run_depend>
  <run_depend>actionlib_msgs</run_depend>
</package>

Compile the package with catkin build print_string, and then run the script with rosrun print_string print_string.py
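Before wiring it into the routine, you can test the action server directly with a small client. A sketch, assuming the package layout above (the PrintMessageGoal type is generated from PrintMessage.action):

#!/usr/bin/env python
# Send a single test goal to the print_string action server.
import rospy
import actionlib
from print_string.msg import PrintMessageAction, PrintMessageGoal

rospy.init_node('print_string_test_client')

client = actionlib.SimpleActionClient('print_string_action', PrintMessageAction)
client.wait_for_server()

goal = PrintMessageGoal(waypoint='WayPoint1', message='hello from the test client')
client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("Action finished with state %d", client.get_state())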

Finally, launch the routine with roslaunch strands_morse basic_example_routine.launch. You should see activity in the executor window and in the routine. You can monitor tasks currently in the routine with rosrun task_executor schedule_status.py.

Tmux

During the project we have found tmux to be very useful, as it allows persistent terminal sessions which can be accessed remotely. Here is a short tmuxinator script that can be used to start the sessions:

# ~/.tmuxinator/strands.yml

name: strands
root: ~/
pre_window: source `rospack find strands_bringup`/conf/env_vars.sh
windows:
  - ros: roscore
  - robot: roslaunch --wait strands_bringup strands_robot.launch with_mux:=false with_magnetic_barrier:=false
  - core:
      panes:
        - HOSTNAME=$DB_MACHINE roslaunch --wait strands_bringup strands_core.launch machine:=$DB_MACHINE user:=$RUNTIME_USER db_path:=$DB_PATH
        - HOST_IP=$EXTERNAL_UI_IP $DISPLAY_SETTING roslaunch --wait strands_bringup strands_ui.launch mary_machine:=$MARY_MACHINE mary_machine_user:=$RUNTIME_USER
  - navigation: roslaunch --wait strands_bringup strands_navigation.launch positionUpdate:=false map:=$NAV_MAP with_no_go_map:=$WITH_NOGO_MAP no_go_map:=$NOGO_MAP topological_map:=$TOP_MAP chest_xtion_machine:=$CHEST_CAM_MACHINE
  - executive:
      panes:
        - roslaunch --wait task_executor mdp-executor.launch interruptible_wait:=false combined_sort:=true
        - roslaunch --wait automated_routine automated_routine.launch routine_config:=$ROUTINE_CONFIG

It can also be found here.

Here is the script that runs in each tmux pane before the commands are passed:

#!/usr/bin/env bash

export EXTERNAL_UI_IP=10.0.11.161

# Database path
export DB_PATH=/data/y4_pre_dep/mongo
# Path to yaml files specifying defaults to load when the db is started
export DB_DEFAULTS=/data/y4_pre_dep/defaults

# Topological map to use. This value should exist in the database
export TOP_MAP=lg_march2016

# Location of the map to use for navigation
export NAV_MAP=/home/strands/tsc_y4_ws/maps/lg_march2016/cropped.yaml

# Whether or not to use nogo map
export WITH_NOGO_MAP=false

# Location of the map to use to define no-go areas
#export NOGO_MAP=

export ROUTINE_CONFIG=`rospack find automated_routine`/conf/bham_routine.yml

The file for environment variable setup can be found here.

STRANDS Packages

Here you can find all the documentation generated by the STRANDS project, aggregated from the github repositories.

aaf_deployment/info_terminal

The info_terminal package

robblog

The robblog package

scitos_2d_navigation

This package contains components for using the ROS move base together
with the Scitos G5 robot.

scitos_common

This package contains robot-specific definitions of the SCITOS robot such
as the URDF description of the robot's kinematics and dynamics and 3D
models of robot components.

sensortag

The sensortag package

soma

The soma package

strands_data_to_qsrlib

The strands_data_to_qsrlib package

strands_morse

The strands_morse package

strands_movebase

This package contains components for using the ROS move base together
with the Scitos G5 robot. There are options for running obstacle avoidance
with only the laser, or with an additional depth-sensing camera
mounted at the front. The additional nodes in the package are for processing
the incoming clouds from the camera for obstacle avoidance.

strands_navigation/topological_navigation

The topological_navigation package

strands_perception_people

The strands_perception_people metapackage

strands_perception_people/perception_people_launch

The perception_people_launch package

strands_perception_people/visual_odometry

The visual_odometry package

strands_qsr_lib/qsr_lib

The qsr_lib package

strands_ui

The strands_ui metapackage

v4r

The Vision for Robotics Library of ACIN TUW (open source STRANDS version)

v4r_ros_wrappers

The v4r_ros_wrappers package

Getting the source code

If you want to get the source code of all released packages from strands_desktop, you can use rosinstall_generator to generate a rosinstall file which can be added/merged into a catkin workspace using wstool (http://wiki.ros.org/wstool).

ROSDISTRO_INDEX_URL=https://raw.github.com/strands-project/rosdistro/strands-devel/index.yaml rosinstall_generator strands_desktop --rosdistro indigo --deps --upstream --exclude-path ~/code/ros-install-osx/indigo_desktop_full_ws/src/ > ~/strands_ws/strands_desktop.rosinstall

Note that you need to change paths in this example to fit yours.

This creates an installation file for wstool. In this case the --exclude-path is used to ignore packages you already have (e.g. the ones produced by the main ROS installation) and --upstream gives you git repos where the selected branch is the tagged release, i.e. the most recently released branch.

Setting up a robot from scratch

Get the code

TODO link this to appropriate tutorial

You have several options for obtaining the STRANDS system:

  • you can install it via Ubuntu packages
  • you can obtain all source code
  • you can clone only the repositories which you are interested in

I. Basic STRANDS system running on ROS Indigo and Ubuntu Trusty

a) Code

For this step, you need:

If you are using a SCITOS robot, you can also use:

If you are using another robot, you will need to provide the functionality covered by the three aforementioned repositories.

b) Changes

Even if you use a SCITOS robot, your exact robot configuration will differ. Therefore, you must change scitos.xacro and scitos_calibration.xacro in scitos_common/scitos_description/urdf/ to fit your robot.

Chest camera for improved navigation

Our robot is equipped with a depth camera on its chest in order to detect objects in front of the robot and improve its navigation. Its position and angle are important for the correct functioning of the navigation. You could put the chest camera calibration into the scitos.xacro. However, we experienced that the chest camera is the first thing to be “attacked” by kids during demos or by people while transferring the robot. Therefore, we do not have the chest camera specified in scitos_common/scitos_description but in strands_movebase/strands_description, in order to load its position from calibration data stored in the database. Details about this calibration are provided later in this tutorial.

c) Compiling

If you are using a SCITOS robot, your `~/.bashrc` file should look something like this:

source /opt/ros/indigo/setup.bash
source /localhome/strands/strands_ws/devel/setup.bash
export MIRA_PATH="/opt/MIRA:/opt/SCITOS:/opt/MIRA-commercial:/localhome/strands/strands_ws/src"

When you compile the aforementioned code, it will probably complain about the version of MIRA. In that case, you can change the version from 0.23.1 to 0.29 (or your version) in scitos_drivers/scitos_mira/CMakeLists.txt, line 88.

If your SCITOS robot is very similar to the STRANDS robots, the rest should compile without issues.

d) Running

First, create a folder mongodb_store where the mongo database will save its data. This folder can be anywhere, but you must ensure that the user running the system has write access to it.

To test whether everything works, you need to modify two launch files in strands_system/strands_bringup:

  • strands_robot.launch (especially if you are using a different robot or sensors). See the different parameters of the launch file.
  • strands_cameras.launch (we use a separate launch file to launch the cameras on the robot). We use two Asus Xtion cameras, one on a pan-tilt unit on top of the robot and one as a chest camera, as explained above. In our setting, the chest camera is plugged into the main control PC, where navigation should run, while the head camera is plugged into an additional PC with a good graphics card. If your setup is similar, you need to provide this launch file with the head_ip and chest_ip parameters and with the head_camera and chest_camera boolean flags to enable/disable them.

Then, run:

roscore
roslaunch --wait strands_bringup strands_robot.launch
roslaunch strands_bringup strands_cameras.launch head_camera:=true head_ip:=(specify)
rosrun rviz rviz

If you display the robot model in rviz, it will not look correct. This is because the /map frame is missing. Hence, run, for example, rosrun gmapping slam_gmapping to provide the frame.

To check:

  • your robot model and TF, checking that all transformations look correct.
  • the data published by the laser scanner (if you have one), the cameras’ RGB images and the registered point clouds.
  • that you can move the robot with a joystick, if you have one. If you use a Logitech gamepad as we do and you are using our code, you need to keep the LB button pressed while moving the robot.

If everything looks good, you are ready for the next step!

II. Mapping and Navigation

a) Code

For this step, you need:

aaf_sim

This package contains files that are necessary for running STRANDS simulations on the University of Lincoln environments.

Setting up Autonomous Patrolling Simulation

  1. Calibrate charging station parameters:
  • Launch strands_datacentre: roslaunch mongodb_store mongodb_store.launch db_path:=/opt/strands/mongodb_store
  • Launch the simulation: roslaunch strands_morse aaf_sim_morse.launch
  • Launch scitos_docking: roslaunch scitos_docking charging.launch
  • Drive the robot to the charging station
  • Calibrate the charging parameters by running: rosrun scitos_docking visual_charging_client calibrate 100
  2. Insert waypoints into the database:

Launching Autonomous Patrolling Simulation

If all the previous steps are done, launch the simulation by running:

rosrun aaf_simulation start.sh

Original page: https://github.com/strands-project/aaf_deployment/blob/indigo-devel/aaf_simulation/Readme.md

Walking Group

This AAF task is meant to accompany the walking group, driving in front of the therapist and the patients and providing entertainment during rests.

Functionality

The interaction between the robot and therapist is based on the so-called key card (TODO: Upload key-card) which has to be worn around the therapist’s neck. There can be several therapists, each wearing a key card, but there has to be at least one. The key card has to be worn visibly, as the robot relies on it to identify the therapist.

  • The robot will be waiting for the group to start at the predefined starting location. During this phase it will provide entertainment, i.e. play music, play videos, or show images via a simple to use and clearly structured interface which is supposed to be used by the patients.
  • To start the group, the therapist shows the robot the key card in the correct orientation. This will trigger the so-called guide interface. In the guide interface,
  • the therapist can toggle playing of music while the group is walking along the corridors. This will drown out all other auditory feedback. If music is turned off, the robot will play jingles and nature sounds at predefined locations and events.
  • the next waypoint can be chosen, either by just pressing “Weiter” which lets the robot drive to the next waypoint in the tour, or by selecting them from a list of all possible waypoints.
  • the guide interface can be cancelled which results in showing the entertainment interface again.
  • the whole tour can be cancelled which results in the robot going away, doing a science.
  • During the actual walking, the robot is driving in front of the group, playing either music or jingles and nature sounds. After a predefined distance the robot stops, waiting for the therapist to be “close” (using the key card to measure distance), and then shows a large “Weiter” button, which can be pressed by the patients to send the robot off and to elicit interaction.
  • When a resting area is reached, the robot waits for the guide to come “close” (using the key card to measure distance), and then shows two buttons, i.e. “Weiter” and “Rast”, which either send the robot to the next waypoint or show the entertainment interface.

This will continue until the final waypoint is reached. During the whole tour, whenever the guide shows the key card in the specific orientation, the robot will stop and show the guide interface. This way, the music can be toggled, the waypoint chosen or the tour preempted at any point in time.

Usage

The whole behaviour is run via the axlaunch server, starting the components, and the abstract task server to create schedulable tasks.

Configuration

The most basic and important configuration happens via a datacentre entry storing the waypoints at which to rest and the distance at which the robot should stop at regular intervals. An example file could look like this:

slow:
    stopping_distance: 3.0
    waypoints:
        1: "WayPoint34"
        2: "WayPoint14"
        3: "WayPoint26"
        4: "WayPoint15"
        5: "WayPoint46"
fast:
    stopping_distance: 5.0
    waypoints:
        1: "WayPoint34"
        2: "WayPoint14"
        3: "WayPoint26"
        4: "WayPoint15"
        5: "WayPoint46"

slow and fast are internally used identifiers. They don’t have to have specific names but have to be unique and consistent with the group entry described below. The stopping_distance is the distance after which the robot stops and waits for the therapist to come close; it then shows a button which can be pressed by the patients to send it off again. The waypoints entry specifies the waypoints at which the group usually rests. Waypoints not specified in this list can never be selected as resting areas during a tour; the index specifies the order of points.

This file has to be inserted into the datacentre using a provided script:

$ rosrun aaf_walking_group insert_yaml.py --help
usage: insert_yaml.py [-h] -i INPUT [--collection_name COLLECTION_NAME]
                      [--meta_name META_NAME]
                      dataset_name

positional arguments:
  dataset_name          The name of the dataset. Saved in meta information
                        using 'meta_name'

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        Input yaml file
  --collection_name COLLECTION_NAME
                        The collection name. Default: aaf_walking_group
  --meta_name META_NAME
                        The name of the meta filed to store 'dataset_name' in.
                        Default: waypoint_set

shows its functionality. The defaults for meta_name and collection_name can be changed, but will then have to be specified at start-up. Otherwise, the defaults in the launch file correspond with the defaults in the script. The dataset_name is a mandatory argument which is used to identify the configuration you stored. With -i <my_file> you specify the input file. Once that has been inserted we can move on to the second config file, loaded on start-up. An example file could look like this:

walking_group:
    walking_group_slow:
        group: 'slow'
        start_location: 'WayPoint34'
        max_duration: 3600
        parameters:
            - 'music_set_group'
            - 'waypoint_set'
        values:
            - 'my_set'
            - 'aaf_waypoints'
    walking_group_fast:
        group: 'fast'
        start_location: 'WayPoint34'
        max_duration: 3600
        parameters:
            - 'music_set_group'
            - 'waypoint_set'
        values:
            - 'my_set'
            - 'aaf_waypoints'

walking_group_slow and walking_group_fast are used to create two action servers with the corresponding names, so these names have to be unique. The number of action servers is dynamic and can be changed by adding another entry to this file. group is used to identify the specific set of waypoints from the datacentre (see above); slow or fast in our case. start_location is the waypoint to which the scheduler sends the robot at the beginning of the task and should be the same as the first waypoint in the list above; WayPoint34 in our case. The max_duration is another argument for the scheduler which tells it how long the group task lasts in the worst case. If this time is reached it will assume that the task failed and preempt it. The parameters and values fields are used together to create the start-up parameters for the internally used launch file:

  • head_machine default="localhost": The machine to which the head_camera is connected. The social card reader and position of card will be started there.
  • head_user default="": The user of the machine to which the head_camera is connected. The social card reader and position of card will be started there.
  • waypoint_set default="aaf_waypoints": The dataset_name used when inserting the yaml file using insert_yaml.py
  • meta_name default="waypoint_set": The meta_name used when inserting the yaml file using insert_yaml.py
  • collection_name default="aaf_walking_group": The collection_name used when inserting the yaml file using insert_yaml.py
  • waypoint_sounds_file default="$(find aaf_walking_group)/conf/waypoint_sounds.yaml": The waypoint sounds file, describing the waypoints at which to play sounds and which sounds to play; see below.
  • music_set_group default="aaf_walking_group_music": The media server music set to play during the walking phase and via the entertainment interface.
  • music_set_waypoints default="aaf_waypoint_sounds": The media server music set containing the waypoint sounds.
  • music_set_jingles default="aaf_jingles": The media server music set containing the jingles used.
  • music_set_recovery default="aaf_walking_group_recovery_sounds": The media server music set containing the jingles used.
  • video_set default="aaf_walking_group_videos": The media server video set containing the videos shown during entertainment.
  • image_set default="aaf_walking_group_pictures": The media server image set containing the pictures shown during entertainment.

Configuring the recovery behaviours

Recovery behaviours are dynamically turned on and off during start-up and after the end of the walking group to prevent some of them from kicking in, making the robot drive backwards. Additionally, a custom recovery behaviour, playing a sound when in trouble, is added. To tell the navigation which behaviours should be used during the group, we create a so-called whitelist which could look like this:

recover_states:
  sleep_and_retry: [true, .inf]
  walking_group_help: [true, .inf]

This enables the sleep_and_retry and walking_group_help recovery states and sets the possible retries to infinite. Every behaviour not in this list will be disabled during the group and re-enabled afterwards.

Configuring the media sets

The video and image set can contain any form of images and videos and just have to be passed by name during start-up. The music_set_group can contain any kind of music and just has to be passed by name during start-up. The jingles and waypoint sets are a bit more special. The jingles used have to have the following filenames:

  • jingle_stop.mp3: Played when the robot stops and waits for someone to press the “Weiter” button.
  • jingle_patient_continue.mp3: Sound played when someone presses the “Weiter” button
  • jingle_therapist_continue.mp3: Sound played when the robot starts navigating after a therapist interaction.
  • jingle_waypoint_reached.mp3: Sound played when a resting point is reached.

Currently these names are hard coded. For the waypoint sounds, we provide a config file loaded from the waypoint_sounds_file parameter. An example file could look like this:

waypoint_sounds: '{"Lift1": "great_tit.mp3", "WayPoint7": "crested_tit.mp3", "Cafeteria": "northern_tit.mp3", "WayPoint8": "marsh_tit.mp3"}'

The keys of the dictionary are topological waypoint names and the values are the filenames of the music played when reaching that waypoint. In this case we play a selection of bird sounds.

Running

Start the managing action server(s):

roslaunch aaf_walking_group task_servers.launch

This launch file has one parameter, config_file, which specifies the location of the yaml file used to set the parameters needed to run the behaviour. This is the file described above, which sets the parameters in the walking_group namespace.

Once the launch file is started, it provides the respective number of action servers, which take an empty goal since everything is defined in the config file. These can easily be scheduled using the Google Calendar interface since they inherit from the abstract task server. As an example, using all the above config files we have two action servers, one called walking_group_slow and the other walking_group_fast. Creating a new event in the robot’s Google Calendar named either walking_group_slow or walking_group_fast schedules the task and starts the specific action server; no additional configuration is required. This makes it easy to schedule these events; the only thing to observe is that the actual time window is larger than the max_duration set in the config file. Alternatively, the duration can be overridden in the calendar by using yaml-style arguments in the event description.
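
As an illustrative sketch, an event description overriding the duration could look like the line below; the exact keys accepted in the event description are not listed in this README, so max_duration is an assumption based on the config file parameter of the same name.

max_duration: 5400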

Component Behaviour

Once started, the action servers don’t do anything until they receive a goal. When a new goal is sent, they:

  • start the necessary components using the axlaunch server
  • start the task immediately after the components are launched

After the task is preempted or finishes successfully, the components are stopped so that they do not use any memory or CPU while the task is not running.

The servers communicate via a topic whether an instance of the walking group is already running. When trying to start a second instance, e.g. slow is already running and you want to start fast, the goal will be aborted.

Testing

To start the state machine, run the launch file and configuration described above. Then use, e.g.

rosrun actionlib axclient.py /walking_group_fast

To emulate the therapist being close, publish:

rostopic pub /socialCardReader/QSR_generator std_msgs/String "data: 'near'"

To emulate the therapist showing the key card in the specified orientation, publish:

rostopic pub /socialCardReader/commands std_msgs/String "data: 'PAUSE_WALK'"

Original page: https://github.com/strands-project/aaf_deployment/blob/indigo-devel/aaf_walking_group/README.md


Overview

The Infremen package contains a waypoint proposer for the Info Terminal. It tries to establish the probability of people being present at the individual waypoints and times, and uses this information to navigate to those waypoints when people are likely to be present.

Every midnight, it processes the information about the interactions it achieved during the day and creates dynamic probabilistic models of people presence at the individual waypoints. These models are then used to create a schedule that determines at which waypoints and times the robot offers its Info Terminal service.

Practical

You can edit the schedule manually.

Parameters

The collectionName determines what name will be used when saving and retrieving the interaction history from the mongodb. The scheduleDirectory determines the place to store text files with the schedules.

Dynamically reconfigurable parameters

The explorationRatio determines the balance between exploration (trying to establish the probabilistic functions of people presence) and exploitation (trying to meet people to actually interact). If the battery level drops below the minimalBatteryLevel, the robot will go and recharge itself. The robot will not leave the waypoint if there was an interaction during the last interactionTimeout seconds. The taskDuration determines how long the robot waits for an interaction. The maxTaskNumber sets how many Info Terminal tasks are maintained at the same time. Setting maxTaskNumber causes the Infremen to stop proposing tasks.
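
Since these parameters are dynamically reconfigurable, they can be changed at runtime via dynamic_reconfigure; as a sketch (the node name /infremen is an assumption, not given in this README):

rosrun dynamic_reconfigure dynparam set /infremen explorationRatio 0.5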

Original page: https://github.com/strands-project/aaf_deployment/blob/indigo-devel/info_terminal/infremen/README.md

This is a public Wiki, no confidential information to go in here!

AAF Deployment 2016

Discussions of improvements in Google doc shared via mailing list (can’t link here, as it’s public)

  • Can run during night in Administration Wing, including the lobby (effectively y1 area)
  • see issues

AAF Press Dates

  • 05.04. 13:00 CET Short interview for local press (“Die ganze Woche”)
  • 05.04. 15:00 CET Filming/short interview for RT (Russian Television abroad)
  • 06.04. 13:00 Short interview and photo for local press (“Bezirksblatt”)
  • 14.04. 14:30 Interviews for Austrian radio

AAF Deployment 2015

A number of pages on strands_management (restricted access) describe the AAF scenario in greater detail:

  • General Scenario description
  • Bellbot (led by Yiannis)
  • Walking Group (led by Christian D)
  • Info Terminal (led by Michi)
  • Setup Network etc. at AAF
  • General task and Deliverables for yr2
  • Overview of STRANDS packaging and releasing procedures

Tasks that can be scheduled

Besides the normal info_task that is scheduled by infremen every 10 minutes to run for 5 minutes, we can schedule tasks in the Google Calendar. Here’s a list of possible tasks; the “Where” field of an event must name a valid waypoint.

General structure

  • The deployment shall be based on released packages on indigo amd64
  • This aaf_deployment repository is meant solely for AAF-specific packages, anything that has a wider use outside the AAF deployment shall go in dedicated packages, e.g. in strands_apps.
  • Contributions are via pull requests only and rigorous code review will take place, in particular after the feature freeze milestone.

Milestones

  1. AAF 2015 Feature Freeze 2/4/15:
  • All features implemented for the different tasks
  • A “global” scheduling approach working
  • Basic GUIs in place with functionality
  • The system will be run continuously at UoL and Vienna from this point onwards, with bug fixes being integrated as soon as they arrive.
  2. AAF 2015 Pre-Deployment 13/4/15:
  • Henry will spend 3 days at AAF, based on the released packages
  • The pre-deployment will comprise a staff training session, to empower AAF members of staff to operate the robot and feel confident about using it
  • The interfaces and tasks need to be completed by this milestone, as staff training will be based on these interfaces. Minor design tweaks are allowed based on feedback from staff. No structural changes in program logic or code structure beyond this point. No new features to be added either.
  3. AAF 2015 Deployment 11/5/15:
  • Start of the actual deployment for 30 days (including weekends)

Henry@AAF

  • For the deployment, Henry will be remotely administered by the STRANDS team (check https://github.com/strands-project/strands_management/wiki/Y2-Integration-Scenarios for details on how to log in)
  • On site, two laptops are provided for staff interactions and robot monitoring, one in Tobias’ and Denise’s AAF office, and one at the reception desk
  • The control interface (web service) shall be running on these laptops, providing:
    • scheduling bellbot tasks
    • displaying the robots current screen
    • seeing the robot on a map
    • scheduling other tasks
    • monitoring the robot’s health (battery, disk space)
  • The laptop in the AAF office is also the control-PC, running mongodb replication and serving the websites for the control interfaces (see above)
  • The docking station will be in the reception for Henry to approach whenever needed autonomously
  • An additional charging opportunity is available in the AAF office by cable; a non-interruptible “maintenance” task shall be implemented to call the robot into the office for maintenance and charging, where it will be explicitly released again for other jobs by clicking in the GUI
  • (Video-)Recording will be disabled in the therapy wing (AAF to indicate the topological nodes that should be excluded)
  • An additional webcam shall be placed above the screen for recording interactions. It shall continuously record data at a low framerate (using the data_compression node) whenever permitted.
  • Data will be uploaded during charging period to the control-PC
Setup Henry

  • Networking: when the WIFI network connection drops, we shall try to reconnect automatically by adding nm-connect.sh to root’s crontab (sudo crontab -e); see the sketch below
  • Voice: a script was added that sets Henry’s voice
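
A minimal sketch of such a crontab entry, assuming the script lives at /home/strands/nm-connect.sh and should run every minute (both path and interval are assumptions, not taken from the deployment notes):

* * * * * /home/strands/nm-connect.sh >/dev/null 2>&1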

Topics to log:

ROS message printouts (above warning priority):

  • /rosout

Robot status:

  • /tf - The tf description
  • /robot_pose - Also in tf but might as well
  • /cmd_vel - Velocity sent to motors
  • /goal - Might be the move base goal
  • /mileage - Distance traveled
  • /motor_status - Is motor stop on? Free run etc.
  • /head/actual_state - Positions of eyes
  • /head/cmd_light_state - Eye lights
  • /head/commanded_state - Eye position commands?
  • /barrier_status - Magnetic strip?
  • /battery_state - Is charging, battery percentage etc.
  • /bumper - Is bumper pressed?
  • /charger_status - Charging status
  • /rfid - Magnetic strip etc.
  • /diagnostics - Network status, joystick driver status, PTU status etc.
  • /SetPTUState/goal - PTU position command
  • /ResetPtu/goal - PTU reset command
  • /EBC/parameter_updates - cfg changes of ebc
  • /Charger/parameter_updates - cfg changes of charger

Topological nav:

  • /topological_navigation/Route - current route through nodes
  • /topological_navigation/Statistics - statistics on top nav (already logged but might as well)
  • /current_node - current node in topo nav
  • /current_edge - current edge in topo nav
  • /closest_node - closest node in topo nav

Monitored nav:

  • /do_backtrack/goal - Command to do the backtrack recovery

Communication:

  • /speak/goal - mary tts speak command
  • /mary_tts/speak - same?
  • /strands_emails/goal - send email command
  • /strands_image_tweets/goal - send image tweet command
  • /pygame_player_negotiation - highest audio priority + node?

Docking servers:

  • /chargingServer/goal
  • /chargingServer/result
  • /chargingServer/cancel
  • /docking/goal
  • /docking/result
  • /docking/cancel
  • /undocking/goal
  • /undocking/result
  • /undocking/cancel

Move base:

  • /map_updates
  • /move_base/NavfnROS/plan - local plan?
  • /move_base/current_goal - current goal
  • /move_base/DWAPlannerROS/global_plan - global plan
  • /move_base/DWAPlannerROS/local_plan - local plan
  • /move_base/goal - current goal
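
As a sketch, a subset of the topics listed above could be captured with rosbag for offline inspection (the deployment itself may use a different logging mechanism, e.g. mongodb logging; the bag name prefix is arbitrary):

rosbag record -o aaf_log /rosout /tf /robot_pose /battery_state /current_node /topological_navigation/Statistics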

Already logged (won’t be logged by this):

Task executor:

  • /execute_policy_mode/Statistics - Scheduling stats, times etc?
  • /task_executor/events - ?
  • /current_schedule - Current schedule

Monitored nav:

  • /monitored_navigation/monitored_nav_event - (Already logged by Bruno?)
  • /monitored_navigation/srv_pause_requested - ?
  • /monitored_navigation/stuck_on_carpet - Stuck on carpet state
  • /monitored_navigation/pause_requested - Nav pause

Original page: https://github.com/strands-project/aaf_deployment/wiki/Topics-to-log-during-deployments

3D ANNOTATION TOOL

Author: Adrià Gallart del Burgo

Co-author: Balasubramanian Rajasekaran, Akshaya Thippur

Emails: bara@kth.se, akshaya@kth.se

3D Annotation Tool is an application designed to annotate objects in a point cloud scene. Initially, it was developed with the aim of annotating the objects on a table.

Contents

  • data: folder that contains some examples of .pcd files and annotations (.xml file).
  • documentation: folder that contains the user guide.
  • icons: folder with all the icons used.
  • src: this folder contains the source code.
  • annotation_tool.pro: QtCreator project file.
  • CMakeLists.txt: instructions for the installation.
  • README.txt: readme file.
  • info_app.xml: .xml file with information used for the application.
  • info_objects.xml: .xml file with the list of objects annotated by the user.

System requirements

3D Annotation Tool requires ROS Indigo and catkin as prerequisites.

If ros-indigo-desktop-full is installed on your machine, you can proceed with the installation of the 3D Annotation Tool.

Otherwise, make sure the following packages are installed:

  1. Install the catkin library: sudo apt-get install ros-indigo-catkin
  2. Make sure that the pcl package is installed: sudo apt-get install libpcl-1.7-all-dev
  3. Install the vtk-qt library: sudo apt-get install libvtk5.8-qt4

Installation

To install the annotation tool:

With catkin:

  1. Git clone or extract the files into your catkin workspace (/catkin_workspace_path/src/3d_annotation_tool/)
  2. cd /catkin_workspace_path/
  3. catkin_make

Using the tool

To use the annotation tool:

With catkin:

  1. cd /catkin_workspace_path/devel/lib/3d_annotation_tool/
  2. cd annotation-tool
  3. ./Annotation_tool

Original page: https://github.com/strands-project/annotation_tool_kth/blob/master/annotation-tool/README.md

The 3D Annotation Tool is an application designed to manually annotate objects in a point cloud 3D image. Initially, it was developed with the aim to annotate the objects on a desktop. However, the tool can be used to annotate any objects in 3D standing on a supporting plane.

This is the tool that was used to annotate the KTH-3D-Total dataset - https://strands.pdc.kth.se/public/kth-3d-total/readme.html

KTH-3D-TOTAL: A 3D dataset for discovering spatial structures for long-term autonomous learning

Thippur, Akshaya and Ambrus, Rares and Agrawal, Gaurav and Del Burgo, Adria Gallart and Ramesh, Janardhan Haryadi and Jha, Mayank Kumar and Akhil, Malepati Bala Siva Sai and Shetty, Nishan Bhavanishankar and Folkesson, John and Jensfelt, Patric 13th International Conference of Control Automation Robotics & Vision (ICARCV), 2014

The format of the point cloud supported is .pcd.

3D Annotation Tool has been developed by Adria Gallart del Burgo at KTH in 2013 and modified and maintained by Akshaya Thippur since then.

The tool can perform the following:

  • Plane detection
  • Plane segmentation
  • Object annotation
  • Data file generation (XML)

The objects annotated are stored together per scene in their XML file. The following is recorded for each annotated object:

  • position
  • pose
  • minimum oriented bounding box
  • comprising pixel indices in 3D
  • length, breadth, height

The tool also allows for loading a previous annotation and modifying it.

Original page: https://github.com/strands-project/annotation_tool_kth/blob/master/README.md

rgbd_grabber

This node grabs images published by the openni ROS node (RGB and depth) and saves them to disk. By default the files are saved in a folder named after the timestamp generated at the start of the program; however, the user can force the creation of new folders where subsequent files will be saved.

Options supported (at runtime, on the console):

  • s: save one file.
  • a: start saving a sequence of files.
  • z: stop saving a sequence of files.
  • n: create a new folder.
  • i: increase the frame skip by 1 (default is 0, i.e. save all frames).
  • u: decrease the frame skip by 1.

The node saves both RGB and depth streams as png files. The naming convention is RGB_SEQ_.png and Depth_SEQ_.png, where SEQ is the sequence number and takes values between 0001 and 9999 (thus a maximum of 9999 images can be saved per folder). In addition, each folder contains an index.txt file where each line contains one file name and the timestamp at which it was saved (taken from the appropriate ROS messages). The depth and RGB streams are synchronized based on the timestamps of the original ROS messages.

Note that when in the save sequence mode, if the lens of the device is covered, the node will automatically skip the incomming frames, as well as create a new folder where the new frames will be put, once the lens has been uncovered.


Original page: https://github.com/strands-project/annotation_tool_kth/blob/master/rgbd_grabber/README.md

Automated benchmarking

Some of the datasets can be used easily through our automated benchmarking system for robotics experiments. The system is based on open source, freely-available tools commonly used in software development. While it allows for a seamless and fair comparison of a newly developed method with the original one, it does not require disclosure of either the original code or the evaluation datasets. Apart from the description of the system, we provide two use cases, where researchers were able to compare their methods to the original ones in a matter of minutes, without running the methods on their own hardware.




image1
Automated benchmarking system workflow.



Read more in our paper. We attached a bibtex record for your convenience.


In the future, the system will extend to all datasets in the LCAS-STRANDS long-term dataset collection.

Original page: https://lcas.lincoln.ac.uk/owncloud/shared/datasets/automated.html

Long-term indoor dataset

This dataset contains laser scans, odometry, and AMCL results of a SCITOS-G5 robot which was roaming an indoor environment for more than 100 days. The robot served as an info-terminal, assistant-therapist and bellboy in a care home in Vienna from November 2016 to April 2017, autonomously covering over 100 km.




image1
Map of the AAF environment



Dataset structure and download

The data, which are organised in archive files of one-day length, are available here. Moreover, the events when the robot got mislocalised are available here. The overall report on the robot navigation activity at the AAF is available as a Google Docs spreadsheet.


Condition of use

If you use the dataset for your research, please cite our paper, for which we originally collected the data. We attached a bibtex record for your convenience.


This dataset is part of the larger LCAS-STRANDS long-term dataset collection.

Original page: https://lcas.lincoln.ac.uk/owncloud/shared/datasets/aaf.html

STRANDS logo

STRANDS Data Sets

STRANDS data sets are provided for testing and benchmarking robotics and computer vision algorithms.

Automated benchmarking data sets: Automated benchmarking system at the University of Lincoln (tkrajnik@lincoln.ac.uk)

Object Recognition RGB-D Data Sets:

TUW data sets: Several Ground truth and annotated data sets from TUW. This data is targeted towards object instance recognition.

3DNet Dataset: The 3DNet dataset is a free resource for object class recognition and 6DOF pose estimation from point cloud data. Alternative link: 3DNet Dataset

Long Term Mapping Data Sets:

MHT RGB-D: RGB-D image data of three locations in the MHT lab collected by a robot every 5 min over 16 days by the University of Lincoln. This data is described in Krajnik et al., Spectral analysis for long-term robotic mapping, ICRA 2014. (tkrajnik@lincoln.ac.uk)

Witham Wharf: RGB-D snapshots of eight locations collected by a mobile robot over the period of one year by the University of Lincoln. Data is described in: Krajnik et al., Long-Term Topological Localization for Service Robots in Dynamic Environments using Spectral Maps, IROS 2014. (tkrajnik@lincoln.ac.uk) Alternative link: Witham Wharf Dataset

Small office data sets: Kinect depth images every 5 seconds between April 2014 and May 2016. Data is provided by the University of Lincoln. This data is described in Krajnik et al., FROctomap: an efficient spatio-temporal environment representation, Advances in Autonomous Robotics Systems, 2014. (tkrajnik@lincoln.ac.uk)

Meta rooms: RGB-D data with registration over several days and places. (raambrus@kth.se)

G4S meta rooms: RGB-D data 150 sweeps with 18 images per sweep. This data is not public. (c.j.c.burbridge@cs.bham.ac.uk)

Activity Data Sets:

LAD: The Leeds Activity Dataset–Breakfast (LAD–Breakfast) is currently composed of 15 annotated videos, representing five different people having breakfast or another simple meal; it is recorded within a lab setting, but is as realistic as possible (real food, eating and drinking). The videos were recorded using an ASUS Xtion PRO LIVE RGB and Depth sensor. The objects in the videos were tracked using an OpenNI-based tool created by our RACE project partner, the University of Aveiro (this data set has been co-created by RACE and STRANDS), and the ground truth provided makes reference to the objects tracked. The videos have been annotated with activities at various levels of complexity and abstraction, in order to build a hierarchy of activities. The videos were not scripted in detail: each person could choose what to eat or drink and when, and the objects they needed were on the table. The videos are quite varied in terms of duration, type and sequence of activities, and number and type of visible objects. High level activities such as “preparing coffee” or “consuming meal” are composed of low level ones (e.g. “pickup kettle”, “pour kettle”, “putdown kettle”). (strands@comp.leeds.ac.uk) Alternative link: LAD Dataset

Long-term people activity data set: Several weeks of person activity in two different environments. Described in Coppola et al.: Learning temporal context for activity recognition, In ECAI, 2015.(tkrajnik@lincoln.ac.uk)

Mobile robot observing kitchen activities data set:

Extended Train Robots data set:

Leeds Robotic Commands data set:

Autonomous Long-term Learning Data Sets:

Longterm: 8 locations over more than one month; contains 51 stitched point clouds, with camera poses and camera parameters as well as registered and corrected poses and parameters. (raambrus@kth.se)

Longterm Labeled: This dataset contains a subset of the observations from the longterm dataset (longterm dataset above). In addition to the raw data, this dataset also stores for each observation object annotations (masks and labels). (raambrus@kth.se)

Moving Labeled: This dataset extends the longterm dataset with more locations within the same office environment at KTH. The dataset contains a subset of the labels and these objects are consistently located in different positions in multiple rooms. (nbore@kth.se)

KTH-3D-TOTAL: RGB-D Data with objects on desktops annotated. 20 Desks, 3 times per day, over 19 days. Data provided by KTH and described in, Thippur et al., “KTH-3D-TOTAL: A 3D Dataset for Discovering Spatial Structures for Long-Term Autonomous Learning”, International Conference on Control, Automation, Robotics and Vision, ICARCV 2014. (akshaya@kth.se)

Marathon: Strands Marathon 2014 data, from all universities; contains metric sweeps and mongodb databases with other system logs. (raambrus@kth.se)

People Data Sets:

People tracks: The data was collected at University of Birmingham library. It is composed of human tracks which can be used for human motion analysis. Contact: ferdian.jovan@gmail.com

Long-term object people presence data set: Several weeks of person/object presence in three different environments. Used for robot search in Krajnik et al.: Where’s waldo at time t ? using spatio-temporal models for mobile robot search. In ICRA 2014. Used for life-long exploration in Santos et al.: Spatio-temporal exploration strategies for long-term autonomy of mobile robots. Robotics and Autonomous Systems, 2016.(tkrajnik@lincoln.ac.uk)

Long-term robot navigation data set: Data from a mobile robot that served as an info-terminal, assistant-therapist and bellboy in a care home in Vienna for more than 120 days, covering over 100km. (tkrajnik@lincoln.ac.uk)

STRANDS robot


Contact
John Folkesson
School of Computer Science and Communication
KTH Royal Institute of Technology
SE-10044 Stockholm
Sweden

image2 image3

This project is funded by the European Community’s Seventh Framework Programme, Cognitive Systems and Robotics, project reference 600623.

This Website is maintained by the Automation and Control Institute (ACIN), TU Vienna

Impressum

Original page: http://strands.acin.tuwien.ac.at/data.html

KTH-3D-TOTAL

image3 image4 image5

Sample table tops recorded in the dataset

This dataset contains RGB-D data with annotated objects on desktops. In total 20 desks have been recorded, 3 times per day over 19 days.

Download

This dataset is available for download in a single archive file (~ 177 GB). As an alternative, the individual folders and files can be obtained from here, and would have to be downloaded manually.


Condition of use

If you use the dataset for your research, please cite our paper that describes it:

KTH-3D-TOTAL: A 3D dataset for discovering spatial structures for long-term autonomous learning
Thippur, Akshaya and Ambrus, Rares and Agrawal, Gaurav and Del Burgo, Adria Gallart and Ramesh,
Janardhan Haryadi and Jha, Mayank Kumar and Akhil, Malepati Bala Siva Sai and Shetty,
Nishan Bhavanishankar and Folkesson, John and Jensfelt, Patric
Control Automation Robotics & Vision (ICARCV), 2014 13th International Conference on

We attached a bibtex record for your convenience.

Original page: https://strands.pdc.kth.se/public/kth-3d-total/readme.html

KTH Longterm Dataset

image0

KTH Scitos G5 robot - Rosie

The data was collected autonomously by a Scitos G5 robot with an RGB-D camera on a pan-tilt, navigating through the KTH office environment over a period of approximately 30 days. Each observation consists of a set of 51 RGB-D images obtained by moving the pan-tilt in a pattern, in increments of 20 degrees horizontally and 25 vertically. Waypoints are visited between 80 and 100 times and a total of approximately 720 observations are collected at the eight waypoints that can be seen in the figure below. The data is a part of the Strands EU FP7 project.

image1

image2

WayPoint positions on the map

Observations overlayed on the 2D map

Dataset structure

The data is structured in folders as follows: YYYYMMDD/patrol_run_YYY/room_ZZZ, where:

  • YYYYMMDD represents the year, month & day when those particular observations were acquired. Each such folder contains the patrol runs the robot collected on that specific date.
  • patrol_run_YYY represents one of the patrol runs collected by the robot.
  • room_ZZZ represents a particular observation collected during a patrol run.

Each folder of the type YYYYMMDD/patrol_run_YYY/room_ZZZ contains the following files:

  • room.xml - contains information relevant for the observation (described in the next section)
  • complete_cloud.pcd - the point cloud of the observation (obtained by merging the individual point clouds together)
  • intermediate_cloud*.pcd - ordered point clouds, each corresponding to an RGB and depth image acquired by the camera while conducting the sweep (51 such point clouds for each observation)

The room.xml file accompanying an observation contains the following (relevant) fields:

RoomLogName - identifier which associates the observation with the folder structure

RoomRunNumber - identifier which denotes when the observation was acquired during the patrol run (i.e. 0 - first, 1 - second, etc.)

RoomStringId - identifier which corresponds to the waypoint at which the observation was acquired.

RoomLogStartTime / RoomLogEndTime - acquisition time

Centroid - observation centroid in the map frame

RoomCompleteCloud - complete cloud filename

RoomIntermediateClouds

RoomIntermediateCloud - intermediate cloud filename

  • RoomIntermediateCloudTransform - transform from the RGB-D sensor frame to the map frame, as given by the robot odometry
  • RoomIntermediateCloudTransformRegistered - transform which corrects the pose of the intermediate clouds so that they are well registered with respect to each other

Parsing

A parser is provided here (can be installed with `` sudo apt-get install ros-indigo-metaroom-xml-parser``) which reads in the data and returns C++ data structures encapsulating the low-level data from the disk. For more information please refer to the parser README (or here for a list of supported methods). Information about how to use the Strands package repository can be found here.

Download

This dataset is available for download in a single archive file (~ 300 GB). As an alternative, the individual folders and files can be obtained from here, and would have to be downloaded manually.


Condition of use

If you use the dataset for your research, please cite our paper that describes it:

Unsupervised learning of spatial-temporal models of objects in a long-term autonomy scenario
Ambrus, Rares and Ekekrantz, Johan and Folkesson, John and Jensfelt, Patric
Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on

We attached a bibtex record for your convenience.

Original page: https://strands.pdc.kth.se/public/KTH_longterm_dataset_registered/readme.html

KTH Longterm Dataset Labels

image0

KTH Scitos G5 robot - Rosie

This dataset contains a subset of observations from this dataset - 88 observations acquired at WayPoint16. Each observation consists of a set of 17 RGB-D images (originally 51, however this dataset only contains the RGB-D clouds corresponding to a single height value of the PTU) obtained by moving the pan-tilt in a horizontal pattern, in increments of 20 degrees. In addition to the raw sensor data, each observation contains object annotations (masks and labels). The data is a part of the Strands EU FP7 project.

Dataset structure

The data is structured in folders as follows: YYYYMMDD/patrol_run_YYY/room_ZZZ, where:

  • YYYYMMDD represents the year, month & day when those particular observations were acquired. Each such folder contains the patrol runs the robot collected on that specific date.
  • patrol_run_YYY represents one of the patrol runs collected by the robot.
  • room_ZZZ represents a particular observation collected during a patrol run.

Each folder of the type YYYYMMDD/patrol_run_YYY/room_ZZZ contains the following files:

  • room.xml - contains information relevant for the observation (described in the next section)
  • complete_cloud.pcd - the point cloud of the observation (obtained by merging the individual point clouds together)
  • intermediate_cloudXXXX.pcd - ordered point clouds, each corresponding to an RGB and depth image acquired by the camera while conducting the sweep (17 such point clouds for each observation, with XXXX going from 0000 to 0016)
  • rgb_XXXX.jpg - RGB image generated from the corresponding intermediate_cloudXXXX.pcd
  • depth_XXXX.png - depth image generated from the corresponding intermediate_cloudXXXX.pcd
  • rgb_XXXX_label_#.jpg - binary mask corresponding to one of the objects annotated in the image rgb_XXXX.jpg
  • rgb_XXXX_label_#.pcd - point cloud generated from object mask rgb_XXXX_label_#.jpg
  • rgb_XXXX_label_#.txt - object label corresponding to annotation rgb_XXXX_label_#.jpg
  • rgb_XXXX_label_#.xml - xml file storing the labelled object data (created with data from rgb_XXXX_label_#.pcd, rgb_XXXX_label_#.txt)
  • rgb_XXXX_object_#.jpg - rgb mask corresponding to one of the objects annotated in the image rgb_XXXX.jpg

The description of the room.xml file accompanying an observation can be found here.

Each object xml file (rgb_XXXX_label_#.xml) contains the following data:

  • label - object label
  • filename - object pcd file
  • Centroid - object centroid, in the map frame
  • LogTime - time when the object was observed

Parsing

A parser is provided here (can be installed with `` sudo apt-get install ros-indigo-metaroom-xml-parser``) which reads in the data and returns C++ data structures encapsulating the low-level data from the disk. For more information please refer to the parser README (or here for a list of supported methods). Information about how to use the Strands package repository can be found here.

Assuming the parser has been installed, the labelled data can be visualized with the following sample application:

`` rosrun metaroom_xml_parser load_labelled_data /path/to/data WayPoint16 ``

This loads all the observations along with the labelled data from the path specified and displays each observation along with the corresponding labelled objects in a visualizer window.
image4 image5 image6

Observation (red) with labelled objects (RGB)

Labelling

For more information on the labelling tool used, please refer to this page.

Download

This dataset is available for download in a single archive file (~ 15 GB). As an alternative, the individual folders and files can be obtained from here, and would have to be downloaded manually.


Condition of use

If you use the dataset for your research, please cite our paper that describes it:

Unsupervised learning of spatial-temporal models of objects in a long-term autonomy scenario
Ambrus, Rares and Ekekrantz, Johan and Folkesson, John and Jensfelt, Patric
Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on

We attached a bibtex record for your convenience.

Original page: https://strands.pdc.kth.se/public/KTH_longterm_dataset_labels/readme.html

KTH Moving Objects Dataset

image0

KTH Scitos G5 robot - Rosie

This dataset extends KTH Longterm Dataset Labels with more locations within the same office environment at KTH. The dataset contains a subset of the labels and these objects are consistently located in different positions in multiple rooms. The label subset annotated in this dataset is {chair6, chair1, water_boiler, backpack1, pillow, trash_bin, backpack3, chair2, hanger_jacket, backpack2}. Each observation consists of a set of 17 RGB-D images obtained by moving the pan-tilt in a horizontal pattern, in increments of 20 degrees. In addition to the raw sensor data, each observation contains object annotations (masks and labels). The data is a part of the Strands EU FP7 project.

Dataset structure

The data is structured in folders as follows: YYYYMMDD/patrol_run_YYY/room_ZZZ, where:

  • YYYYMMDD represents the year, month & day when those particular observations were acquired. Each such folder contains the patrol runs the robot collected on that specific date.
  • patrol_run_YYY represents one of the patrol runs collected by the robot.
  • room_ZZZ represents a particular observation collected during a patrol run.

Each folder of the type YYYYMMDD/patrol_run_YYY/room_ZZZ contains the following files:

  • room.xml - contains information relevant for the observation (described in the next section)
  • surfel_map.pcd - the surfel map constructed by using the data fusion component of Elastic Fusion
  • complete_cloud.pcd - the point cloud of the observation (obtained by thresholding the confidence of surfel_map.pcd at 0.3)
  • intermediate_cloudXXXX.pcd - ordered point clouds, each corresponding to an RGB and depth image acquired by the camera while conducting the sweep (17 such point clouds for each observation, with XXXX going from 0000 to 0016)
  • rgb_XXXX.jpg - RGB image generated from the corresponding intermediate_cloudXXXX.pcd
  • depth_XXXX.png - depth image generated from the corresponding intermediate_cloudXXXX.pcd
  • rgb_XXXX_label_#.jpg - binary mask corresponding to one of the objects annotated in the image rgb_XXXX.jpg
  • rgb_XXXX_label_#.pcd - point cloud generated from object mask rgb_XXXX_label_#.jpg
  • rgb_XXXX_label_#.txt - object label corresponding to annotation rgb_XXXX_label_#.jpg
  • rgb_XXXX_label_#.xml - xml file storing the labelled object data (created with data from rgb_XXXX_label_#.pcd, rgb_XXXX_label_#.txt)
  • rgb_XXXX_object_#.jpg - rgb mask corresponding to one of the objects annotated in the image rgb_XXXX.jpg

The description of the room.xml file accompanying an observation can be found here.

Each object xml file (rgb_XXXX_label_#.xml) contains the following data:

  • label - object label
  • filename - object pcd file
  • Centroid - object centroid, in the map frame
  • LogTime - time when the object was observed

Parsing

A parser is provided here (can be installed with `` sudo apt-get install ros-indigo-metaroom-xml-parser``) which reads in the data and returns C++ data structures encapsulating the low-level data from the disk. For more information please refer to the parser README (or here for a list of supported methods). Information about how to use the Strands package repository can be found here.

Assuming the parser has been installed, the labelled data can be visualized with the following sample application:

`` rosrun metaroom_xml_parser load_labelled_data /path/to/data WayPoint10 ``

This loads all the observations along with the labelled data from the path specified and displays each observation along with the corresponding labelled objects in a visualizer window.
image4 image5 image6

Observation (red) with labelled objects (RGB)

Labelling

For more information on the labelling tool used, please refer to this page.

Download

This dataset is available for download in a single archive file (~ 6 GB). As an alternative, the individual folders and files can be obtained from here, and would have to be downloaded manually.


Strands Robot Marathon 2014

image1
The Strands project

This dataset was acquired by the Strands robots participating in the Strands Robot Marathon 2014. The data is a part of the Strands EU FP7 project.

Dataset structure

The dataset contains all the data logged by the robots during operation, exported from the mongodb database. For information on the relevant collections please refer to the Strands project.
Separately, data corresponding to observations acquired by the robot with the Pan-Tilt unit and the RGB-D sensor have been exported and can be found here. For a description of the format as well as how to parse the data please refer to this readme file.

Download

This dataset is available for download in a single archive file (~ 136 GB). As an alternative, the individual folders and files can be obtained from here, and would have to be downloaded manually.


Original page: https://strands.pdc.kth.se/public/Marathon_2014/readme.html

KTH Meta-rooms dataset

image0

image1

KTH Scitos G5 robot - Rosie

2D Map with waypoints

The data was collected autonomously by a Scitos G5 robot with an RGB-D camera on a pan-tilt, navigating through the KTH office environment over a period of 7 days. Three waypoints have been defined on the 2D map, and each one is visited once per day. Each observation consists of a set of 28 RGB-D images obtained by moving the pan-tilt in a pattern, in increments of 60 degrees horizontally and 30 degrees vertically. The data is a part of the Strands EU FP7 project.

image2

image3

image4

image5

Observations acquired by the robot

Dataset structure

The data is structured in folders as follows: YYYYMMDD/patrol_run_YYY/room_ZZZ, where:

  • YYYYMMDD represents the year, month & day when those particular observations were acquired. Each such folder contains the patrol runs the robot collected on that specific date.
  • patrol_run_YYY represents one of the patrol runs collected by the robot.
  • room_ZZZ represents a particular observation collected during a patrol run.

Each folder of the type YYYYMMDD/patrol_run_YYY/room_ZZZ contains the following files:

  • room.xml - contains information relevant for the observation (described in the next section)
  • complete_cloud.pcd - the point cloud of the observation (obtained by merging the individual point clouds together)
  • intermediate_cloud*.pcd - ordered point clouds, each corresponding to an RGB and depth image acquired by the camera while conducting the sweep (28 such point clouds for each observation)

The room.xml file accompanying an observation contains the following (relevant) fields:

RoomLogName - identifier which associates the observation with the folder structure

RoomRunNumber - identifier which denotes when the observation was acquired during the patrol run (i.e. 0 - first, 1 - second, etc.)

RoomStringId - identifier which corresponds to the waypoint at which the observation was acquired.

RoomLogStartTime / RoomLogEndTime - acquisition time

Centroid - observation centroid in the map frame

RoomCompleteCloud - complete cloud filename

RoomIntermediateClouds

RoomIntermediateCloud - intermediate cloud filename

  • RoomIntermediateCloudTransform - transform from the RGB-D sensor frame to the map frame, as given by the robot odometry

Parsing

A parser is provided here (can be installed with `` sudo apt-get install ros-indigo-metaroom-xml-parser``) which reads in the data and returns C++ data structures encapsulating the low-level data from the disk. For more information please refer to the parser README (or here for a list of supported methods). Information about how to use the Strands package repository can be found here.

Download

This dataset is available for download in a single archive file (~ 12 GB). As an alternative, the individual folders and files can be obtained from here, and would have to be downloaded manually.


Condition of use

If you use the dataset for your research, please cite our paper that describes it:

Meta-rooms: Building and maintaining long term spatial models in a dynamic world
Ambrus, Rares and Bore, Nils and Folkesson, John and Jensfelt, Patric
Intelligent Robots and Systems (IROS), 2014 IEEE/RSJ International Conference on

We attached a bibtex record for your convenience.

Original page: https://strands.pdc.kth.se/public/metric_sweeps_201312/readme.html

MHT Building RGB-D and laser dataset

The MHT building dataset was collected for the purpose of testing RGB-D mapping of changing environments. The dataset consists of 2D (laser scanner) and 3D (Asus Xtion) data collected by a SCITOS-G5 mobile robot. We provide the gathered data in the form of rosbags.


Dataset purpose

The intended use was to create a dataset to benchmark spatio-temporal representations of changing environments. The robot was patrolling a small office (see the video below) every 5 minutes. Each patrol started and ended at a charging station. During each patrol, the robot continuously collected its laser scans and odometric data. Moreover, it stopped at three different locations, took snapshots using its RGB-D sensor and attempted to detect people presence.


 
MHT office night collections with 3D sweeps

Dataset structure

The provided archives contain rosbags, which are zipped into separate files according to the day of the data collection and the data type. Each rosbag with a 3D prefix contains a depth/color image, camera information, robot position, tf data, laser scan and person detections gathered by the robot at a location and time that are encoded in the rosbag name, which contains day, month, year, hour, minute and location id. For example, 3D_23-08-13-15-20_place_2.bag contains data gathered at location 2 on August 23 2013 at 15:20. Each rosbag with a 2D prefix contains AMCL position estimates, robot odometry, tf data and laser scans. Day, month, year, hour and minute are part of the bag file name.
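
For example, a single bag from the archive can be inspected and replayed with the standard rosbag tools:

rosbag info 3D_23-08-13-15-20_place_2.bag
rosbag play 3D_23-08-13-15-20_place_2.bag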



Conditions of use

If you use the dataset for your research, please cite our paper that describes the data collection in detail. We attached a bibtex record for your convenience.


This dataset is part of the larger LCAS-STRANDS long-term dataset collection.

Original page: https://lcas.lincoln.ac.uk/owncloud/shared/datasets/mht_rgbd.html

Object/people presence aka where is Waldo? datasets

This dataset contains information about people and object presence gathered in three different environments. The dataset is divided into three smaller ones according to the environment they were gathered in:

  • Aruba dataset of person presence collected at a smart apartment by the Center for Advanced Studies in Adaptive Systems, CASAS,
  • Brayford dataset of person presence created at the premises of the Lincoln Centre for Autonomous Systems (LCAS),
  • KTH dataset of object presence created by the Centre of Autonomous Systems of the Royal Institute of Technology in Stockholm, CAS/KTH.

Dataset structure

Aruba

image0

image1

Aruba apartment visualisation

Aruba sensor layout (see CASAS)

The Aruba folder contains locations.min, which contains the location (room) of a person in a small apartment every minute for 16 weeks. The locations.names file indicates which room corresponds to which number. Example: number 0 on line 10 of locations.min means that in the 10th minute after midnight of the first day, the person was in the Master bedroom. Aruba was extracted from the CASAS datasets.

Brayford

image2

Brayford office locations and topological structure

The Brayford directory contains the presence of people every 10 minutes in eight different areas of an open-plan office. Again, the locations.names file describes the locations. Example: having ‘0’ on line 6 of file ‘learning_06.txt’ means that there was no-one at 1:00AM in the kitchenette.

KTH

image3

KTH dataset: object retrieval from point cloud data

The KTH data contain the presence of objects observed by a mobile robot in different rooms of CAS/KTH. The observations are stored in *.occ files that contain an ‘epoch time’ plus the presence of a given object at that time. A description of the individual objects is also provided in the clusters_meaning file. Example: line 50 of the test/31.occ file indicates that ‘Johan’s laptop’ was present at time 1411317123 (which is Sunday September 21, 2014 18:32:03).
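
For example, on a GNU/Linux shell such an epoch timestamp can be converted to a human-readable date with:

date -d @1411317123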


Download

All of these datasets are available for download in a single archive file. After you unzip the file, you get three folders which correspond to the individual datasets.


Condition of use

If you use the dataset for your research, please cite our paper that describes it. We attached a bibtex record for your convenience. If you use the Aruba subset, you must also acknowledge the original CASAS paper.


This dataset is part of the larger LCAS-STRANDS long-term dataset collection.

Original page: https://lcas.lincoln.ac.uk/owncloud/shared/datasets/presence.html

KTH Track Dataset

image0

UoB Scitos G5 robot - Bob

The track data were collected autonomously by a Scitos G5 robot with an RGB-D camera on a pan-tilt unit and a laser scanner mounted on the base of the robot, navigating through the University of Birmingham library for approximately ten hours. The data contain 6251 human tracks. Each track is a sequence of human positions stitched together based on the chronology of the positions; a track is obtained by detecting and tracking a person passing in front of the robot. All collected tracks overlaid on the UoB library map can be seen in the figure below. The chronology of each track is shown by the transition of the colour from blue to green. The data are part of the STRANDS EU FP7 project.

image1

Human tracks overlaid on the 2D map


Dataset structure

The data contain 6251 human tracks. Each track is stored in a cell C:sub:`n`, and each cell is a 4 x N matrix.

First and second rows correspond to (x, y) coordinates in meters for each captured position.

Third and fourth rows correspond to the second and nanosecond of the epoch/unix timestamp information of each position. The timestamp has the format ssssssssss.nnnnnnnnn where s is the second and n is the nanosecond of the timestamp.

The unix timestamp and corresponding date of a column j in a cell Cn are calculated (in MATLAB) using:

  • ``TSn(j) = Cn(3,j) + Cn(4,j)*1e-09``
  • ``timestamp = str2num(num2str(TSn(j), '%.10g'))``
  • ``date = datestr(timestamp/86400 + datenum(1970,1,1))``
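
The same computation can be done in Python after loading the .mat file. This is a minimal sketch in which the file name tracks.mat and the cell-array variable name 'C' are assumptions, so inspect the keys of the loaded dictionary first:

import scipy.io
from datetime import datetime, timezone

mat = scipy.io.loadmat('tracks.mat')     # assumed file name
print(mat.keys())                        # check the actual variable name here
tracks = mat['C'].squeeze()              # assumed variable name for the cell array
track = tracks[0]                        # first track: a 4 x N matrix
x, y = track[0], track[1]                # positions in metres
ts = track[2] + track[3] * 1e-9          # seconds + nanoseconds -> unix timestamps
print(datetime.fromtimestamp(ts[0], tz=timezone.utc))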

Download

This dataset is available for download as a .mat file (~8.5 MB).


Condition of use

If you use the dataset for your research, please cite our paper that describes it:

Real-Time Multisensor People Tracking for Human-Robot Spatial Interaction
Dondrup, Christian and Bellotto, Nicola and Jovan, Ferdian and Hanheide, Marc
Robotics and Automation (ICRA), 2015 IEEE International Conference on

We attached a bibtex record for your convenience.

For further questions regarding the dataset, you can contact Ferdian Jovan (fxj345[at]cs.bham.ac.uk).

Original page: https://strands.pdc.kth.se/public/TrackData/readme.html

Long-term person activity datasets

This dataset contains information about a given person's activity over several weeks. It was used to evaluate which types of spatio-temporal models improve the accuracy of activity classification over time. The dataset contains information about human activity in two different environments:

  • Aruba dataset contains person activity collected at a smart apartment by the Center for Advanced Studies in Adaptive Systems, CASAS,
  • Witham Wharf dataset shows activity at the Lincoln Centre for Autonomous System, LCAS.

Dataset structure

Aruba

image0

image1

Aruba apartment visualisation

Aruba sensor layout (see CASAS)

The Aruba folder contains activity.min, which indicates the activity performed by a home-bound person in a small apartment every minute for 16 weeks. In addition, locations.min contains the person's location (room) minute by minute. The location.names and activity.names files indicate which rooms and activities correspond to which number in the activity.min and location.min files. Example: number 0 on line 10 of location.min and number 2 on the 10th line of activity.min indicate that in the 10th minute after midnight of the first day, the person was Eating in the Master bedroom. The Aruba dataset was extracted from the CASAS datasets.

Witham Wharf

image2

image3

Witham office overview

Witham office topological layout

The Witham Wharf directory contains the activity of a particular student in an open-plan office over three weeks. Again, the locations.names and activity.names files describe the locations and activities, which are stored in the location.min and activity.min files.

These contain the student’s activity and location on a minute-by-minute basis.

Download

All of these datasets are available for download in a single archive file. After you unzip the file, you get two folders which correspond to the individual datasets.


Condition of use

If you use the dataset for your research, please cite our paper that describes it. We attached a bibtex record for your convenience. If you use the Aruba subset, you must also acknowledge the original CASAS paper.


This dataset is part of the larger LCAS-STRANDS long-term dataset collection.

Original page: https://lcas.lincoln.ac.uk/owncloud/shared/datasets/activity.html

MHT building lecturer office

This dataset contains depth images captured by a stationary Asus Xtion sensor located in a small lecturer office at the University of Lincoln. The sensor has been capturing a depth image every 5 seconds since April 2014, and the depth data are compressed and uploaded automatically every day. The data are recorded simply as a stream of 320x240x16b images. To convert them to the ROS-compatible image_raw format, you will need to use the convert_raw utility, which is part of the froctomap package.




image2 image3
Typical afternoon depth map. Typical midnight depth map.



Dataset structure and download

The depth images, organised in one-day chunks, are here. Each zip archive contains a xx.3D file with the depth image taken every 5 seconds and a xx.1D file that contains timestamps and the distance of the central pixel of the depth image taken at 30Hz. The date and time encoded in the file name denote the moment when the data collection started. Note that since a frame is captured every 5 seconds, the first frame of the 2016-02-25_23:59:56.3D file is actually taken at midnight on Feb 26 2016. The dataset collection terminated in May 2016, when the room started to be occupied by another researcher.

To obtain a ROS-compatible depth data stream, unzip the downloaded file and use the convert_raw utility, which is part of the froctomap package. Calling rosrun froctomap convert_raw xxx.3D will publish the depth images on the /camera/depth/image_raw topic with the timestamp from the data collection time. To change the timestamp to the current time, simply modify the code around line 97 of convert_raw.cpp.
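
If you prefer to inspect the raw files directly in Python rather than via convert_raw, the sketch below may help. The binary layout (a headerless stream of consecutive little-endian 320x240 frames of 16-bit depth values) is an assumption inferred from the description above; the froctomap utility remains the reference implementation:

import numpy as np

W, H = 320, 240
# assumed layout: headerless little-endian 16-bit depth values, frame after frame
depth = np.fromfile('2016-02-25_23:59:56.3D', dtype='<u2')
frames = depth.reshape(-1, H, W)         # one 240x320 depth image every 5 seconds
print(frames.shape, frames[0].min(), frames[0].max())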


Conditions of use

If you use the dataset for your research, please cite our paper that describes it. We attached a bibtex record for your convenience.


This dataset is part of the larger LCAS-STRANDS long-term dataset collection.

Original page: https://lcas.lincoln.ac.uk/owncloud/shared/datasets/greg-office.html

3DNet Dataset

The 3DNet dataset is a free resource for object class recognition and 6DOF pose estimation from point cloud data. 3DNet provides large-scale hierarchical CAD-model databases of increasing size and difficulty, with 10, 60 and 200 object classes, together with evaluation datasets that contain thousands of scenes captured with an RGB-D sensor.

When using this database, please cite:
Walter Wohlkinger, Aitor Aldoma Buchaca, Radu Rusu, Markus Vincze. “3DNet: Large-Scale Object Class Recognition from CAD Models”. In IEEE International Conference on Robotics and Automation (ICRA), 2012. (PDF) (bibtex)

image0

image1

  • The basic dataset consists of common, simple, geometrically distinguishable but partially similar objects.

    Cat10_ModelDatabase.zip (42 MB)

    Cat10_TestDatabase.zip (3.6 GB)

  • The Cat60 model database consists of the Cat10 database with 50 additional classes. The classes in this database are still distinguishable by shape only, but also include subcategories (chair, office-chair, armchair and car, convertible, pickup, formula-car).

    Cat60_ModelDatabase.zip (222 MB)

    Cat60_TestDatabase.zip (3.7 GB)

  • This database adds objects which are similar in shape but can be uniquely distinguished when using color as an additional cue. One further important aspect of objects and object classes, size, was not used and not needed in the previous category databases. To successfully distinguish between the 200 categories, the real-world size of the objects becomes important.

    Cat200_ModelDatabase.zip (569 MB)

Original page: https://repo.acin.tuwien.ac.at/tmp/permanent/3d-net.org/


TUW Object Instance Recognition Dataset

This website provides annotated RGB-D point clouds of indoor environments. The TUW dataset contains sequences of point clouds in 15 static and 3 partly dynamic environments. Each view of a scene presents multiple objects; some object instances occur multiple times and are highly occluded in certain views. The model database consists of 17 models with a maximum extent of 30 cm, which are partly symmetric and/or lack distinctive surface texture. The dataset consists of the reconstructed 3D object models, the individual key frames of the models, test scenes and the 6DOF pose of each object present in the respective view. Each point cloud is represented by RGB color, depth and normal information.

Furthermore, we provide annotations for the Willow and Challenge datasets.

TUW

image2

image3

This dataset is composed of 15 multi-view sequences of static indoor scenes totalling 163 RGB-D frames (and 3 dynamic scenes with 61 views in total). The number of objects in the different sequences amounts to 162, resulting in 1911 object instances (some of them totally occluded in some frames).

Download:

Ground-truth annotation (created by [1] with manual verification)


Willow and Challenge Dataset

image4

image5

image6

image7

The Willow dataset is composed of 24 multi-view sequences totalling 353 RGB-D frames. The number of objects in the different sequences amounts to 110, resulting in 1628 object instances (some of them totally occluded in some frames). The Challenge dataset is composed of 39 multi-view sequences totalling 176 RGB-D frames. The number of objects in the different sequences amounts to 97, resulting in 434 object instances.

Download

Ground-truth annotation (created by [1] with manual verification):

Code is available at github.com/strands-project/v4r (for ROS wrappers and tutorials see github.com/strands-project/v4r_ros_wrappers).
For object learning data check out strands.pdc.kth.se/public/ICRA-2016-Data/.
The research leading to these results has received funding from the European Community’s Seventh Framework Programme FP7/2007-2013 under grant agreement No. 600623, STRANDS and No. 610532, SQUIRREL.
[1] Aitor Aldoma, Thomas Fäulhammer, Markus Vincze, “Automation of Ground-Truth Annotation for Multi-View RGB-D Object Instance Recognition Datasets”, IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2014 (bibtex)
[2] Thomas Fäulhammer, Aitor Aldoma, Michael Zillich, Markus Vincze, “Temporal Integration of Feature Correspondences For Enhanced Recognition in Cluttered And Dynamic Environments”, IEEE Int. Conf. on Robotics and Automation (ICRA), 2015 (PDF) (bibtex)
[3] Thomas Fäulhammer, Michael Zillich, Markus Vincze, “Multi-View Hypotheses Transfer for Enhanced Object Recognition in Clutter”, IAPR Conference on Machine Vision Applications (MVA), 2015 (PDF) (bibtex)

Original page: https://repo.acin.tuwien.ac.at/tmp/permanent/dataset_index.php

Witham Wharf RGB-D dataset

The Witham Wharf dataset was collected for the purpose of testing RGB-D localization in changing environments. It consists of one training set and three testing sets that capture the changes of eight locations in an open-plan office over one year. We provide the gathered data in the form of rosbags and color images.


Dataset purpose

The intended use was to create a dataset to benchmark visual and RGB-D localization in changing environments. See and hear the 2 minute video below. If you are interested only in how the dataset was collected, simply skip the first minute of the video or click here.


 
LCAS Witham Wharf dataset collection process (time 1:13-1:32)

Dataset structure

Rosbags

The provided archives contain rosbags, which are zipped into separate files according to the day of the data collection. Each rosbag contains a depth/color image, camera information, robot position, tf data and laser scan captured by the robot at a location and time that is encoded in the rosbag name, which contains day, month, year, hour, minute and location id. For example, 3D_16-11-13-15-20_place_3.bag contains data gathered at location 3 on Nov 16 2013 at 15:20.
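
The rosbags can be inspected with the standard rosbag Python API. For example, the following sketch lists the topics and message counts of one bag (substitute any bag from the archives):

import rosbag

with rosbag.Bag('3D_16-11-13-15-20_place_3.bag') as bag:
    info = bag.get_type_and_topic_info()
    for topic, data in info.topics.items():
        print(topic, data.msg_type, data.message_count)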

Color images

Color images of the dataset are arranged into eight folders according to the location they were captured at. Images were captured on a regular basis every 10 minutes and their filenames correspond to the time they were captured at. For example, 00000.bmp was captured at midnight on the first day, 00012.bmp at 2:10 am on the first day and 00145.bmp at 0:10 am on the second day of the data collection. Please note that sometimes the robot got stuck and failed to capture an image at a given location. For our method, we substituted these missing images with the last previously captured image.


Download

Name                     Length   Dates             Images             Rosbags
November 2013 training   7 days   10-16/Nov/2013    Nov 2013 training  Nov 2013 training
November 2013 testing    1 day    17/Nov/2013       Nov 2013 testing   Nov 2013 testing
February 2014 testing    1 day    02/Feb/2014       Feb 2014 testing   Feb 2014 testing
December 2014 testing    1 day    14/Dec/2014       Dec 2014 testing   Dec 2014 testing
Complete dataset         10 days  all of the above  Complete dataset   Complete dataset

Additional information

If you need to transform the point clouds to a global coordinate frame, you can use the ww_transforms package, which provides a map of the environment and additional transforms that complement the tf information that is contained in the rosbags. The robot also performed 360°×90° 3D sweeps on an hourly basis at three different locations of the Witham Wharf.


Conditions of use

If you use the dataset for your research, please cite our paper that describes the data collection in detail. We attached a bibtex record for your convenience.


This dataset is part of the larger LCAS-STRANDS long-term dataset collection.

Original page: https://lcas.lincoln.ac.uk/owncloud/shared/datasets/wharf_rgbd.html

g4s_deployment

Join the chat at https://gitter.im/strands-project/g4s_deployment. G4S and security-specific deployment files and packages.

Useful commands

Commit some changes as yourself:

GIT_COMMITTER_NAME="Nick Hawes" GIT_COMMITTER_EMAIL="n.a.hawes@cs.bham.ac.uk" git commit --author "Nick Hawes <n.a.hawes@cs.bham.ac.uk>"

Add a repo to the wstool-controlled ws:

wstool set soma --git https://github.com/strands-project/soma --version=2.0

Original page: https://github.com/strands-project/g4s_deployment/blob/indigo-devel/README.md

The following is needed to ensure that remote hosts can connect to the control gui:

sudo apt-get install apache2
sudo cp $(rospack find tsc_control_ui)/apache/000-default.conf /etc/apache2/sites-available/
sudo a2enmod proxy_wstunnel proxy_http proxy_html rewrite headers
sudo service apache2 reload

Setting up remote access via a reverse ssh tunnel (or some other magic I don't understand)

sudo apt-get install proxytunnel autossh

Include this in your ~/.ssh/config (the commented out line is the normal one when not behind a proxy)

Host harek-tunnel
    User strands
    ProxyCommand /usr/bin/proxytunnel -v -p webcache.cs.bham.ac.uk:3128 -r 194.80.55.142:443 -X -d localhost:22
    # ProxyCommand /usr/bin/proxytunnel -p 194.80.55.142:443 -E -d localhost:22
    Compression yes
    LocalForward *:22222 localhost:22
    ServerAliveInterval 120
    TCPKeepAlive no

autossh -M 32323 strands@harek-tunnel -R32280:localhost:80

sudo ln -s $(rospack find strands_webtools) /var/www/html/webtools
sudo ln -s $(rospack find tsc_control_ui)/www/* /var/www/html/

Original page: https://github.com/strands-project/g4s_deployment/blob/indigo-devel/tsc_control_ui/README.md

  • Before running run_report.py for the first time, open the map you want to use in GIMP and re-save it as a ppm file.
  • Run the node with four parameters:
  • 1.: absolute path to the folder where the ppm map and the original yaml file can be found, for example “/home/strands/tsc_ws/…./”; IMPORTANT: the path must end with “/”
  • 2.: name of the ppm map, such as “lg.ppm”
  • 3.: name of the yaml file, such as “lg.yaml”
  • 4.: absolute path where the output file should be saved, for example “/home/strands/tsc_ws/….”

Original page: https://github.com/strands-project/g4s_deployment/blob/indigo-devel/tsc_temperature_logger/README.md

  • Install control PC
  • Get PC and robot on network
  • Update hosts etc files on both machines
  • create new directories for pre-deploy
  • Run both mongos in replication mode before starting code
  • build SLAM map whilst also rosbagging
  • crop map and create nogo map, sketching in glass walls
  • topological map (in parallel to nogo map). nodes for surface observations, learning, recording + consent, spatio-temporal exploration, doors
  • Run headless script on bettyr
  • Check symlink to ~/.semanticMap on bettyr
  • And the same for skeletons
  • Set up replication on control PC

Original page: https://github.com/strands-project/g4s_deployment/blob/indigo-devel/tsc_bringup/CHECKLIST.md

TSC bringup

OBJECT LEARNING

To start the object learning action server run:

roslaunch tsc_bringup object_learning.launch model_path:=/path/to/models record_run:=false

If the parameter record_run is set to true a rosbag of the robot navigating around the object will be recorded using rosbag_openni_compression. The rosbag is saved as tracking.bag in the folder where the sweep is saved.

Note recording a rosbag can take up to 1GB of disk space per run.

Note: the xml of the current sweep where the rosbag is saved is published on the topic /local_metric_map/room_observations when the sweep finishes.

To trigger an action run:

rosrun actionlib axclient.py /learn_object

The argument needed is the name of the node (from the topological map) at which to run the learn_object action.

Nodes started

The launch file above is equivalent to the following:

  • roslaunch semantic_map_launcher semantic_map.launch
  • dependencies installed through strands_3d_mapping
  • roslaunch observation_registration_launcher observation_registration.launch
  • dependencies installed through strands_3d_mapping
  • roslaunch learn_objects_action learn_objects_dependencies.launch
  • a list of dependencies can be found here

Debug topics

For debugging, the following topics are useful:

  • /local_metric_map/merged_point_cloud_downsampled - the point cloud of the sweep
  • /local_metric_map/dynamic_clusters - the detected dynamic clusters
  • /object_manager/requested_object_mask - the mask of the dynamic cluster which will be learned
  • /object_learning/learned_object_xml - the xml of the dynamic cluster learning, pointing to the additional views and masks (among other things)
  • /additional_view_registration/registered_view_cloud - the point cloud of the registered additional views
  • /incremental_object_learning/learned_model - learned model using the RAL16 method
  • /object_learning/learned_object_model - learned model using the IROS16 method

Point Cloud History Search & Incremental Model Building

Note that you need the newest versions of strands_3d_mapping and quasimodo to be able to run this. quasimodo will shortly be merged into strands_3d_mapping. For a lot more info check the quasimodo readme.

In the tsc_start.sh tmux session, tab 10, there are two panes that start the entire pipeline for retrieval and incremental model building. This needs to be started via ssh to set the display variable correctly. Note that you need to run both

rosrun tsc_bringup tsc_headless.sh

for starting up a virtual display (be sure to type the password), and

DISPLAY=:0 roslaunch tsc_bringup tsc_quasimodo.launch data_path:=/home/strands/.semanticMap

If you want some data to query for, we have uploaded all of the metric map data from the Birmingham deployment week to the server. Otherwise you will have to perform at least 20 sweeps with the robot to be able to query. Get the already collected data with:

wget https://strands.pdc.kth.se/public/semanticMap_BHAM.tar.gz
tar -zxvf semanticMap_BHAM.tar.gz

Note that the folder is 16GB on disk. If you do not want to replace the already existing ~/.semanticMap data you can just use the downloaded folder directly. Then you need to change the data_path argument of tsc_quasimodo.launch to /path/to/Birmingham/semanticMap. If you want to search for the data collected on the robot, it should be the default /home/strands/.semanticMap.

If you want to add previously collected metric maps to the retrieval representation (as opposed to processing them as they are collected), you can also set the parameter add_previous_maps:=true. Processing only has to be done once. If they have been processed previously (as is the case for the data on the server), they are loaded automatically. You will then have to wait to collect new metric maps until the processing of the previously collected maps has finished.

Debug topics

  • /quasimodo_retrieval/visualization - An image showing the result of the retrieval component. The leftmost image shows the masked RGB image of the query object and to the right are rendered views of the ten closest matches, represented as 3D surfel clouds.
  • /models/fused - The fused model of the additional view observation that is queried at the moment. Published as sensor_msgs/PointCloud2, displayed in relation to /map.
  • /retrieval_processing/segmentation_cloud - The segmentation of a metric map cloud, published after a sweep is finished, published as sensor_msgs/PointCloud2

Note

You can manually trigger a search (i.e. without using the incremental object building framework) of an object with additional views by starting

rosrun quasimodo_retrieval quasimodo_retrieve_observation

and then, in another terminal specifying the path to the xml of the additional views:

rostopic pub /object_learning/learned_object_xml std_msgs/String "data: '/path/to/.semanticMap/201422/patrol_run_56/room_0/2016-Apr-22 14:58:33.536964_object_0.xml'"

You can also use soma to visualize the queries over time.

Original page: https://github.com/strands-project/g4s_deployment/blob/indigo-devel/tsc_bringup/README.md

Welcome to the g4s_deployment wiki!

TSC deployment remote services:

Most are accessible with the usual credentials handed out by @hawesie

Be responsible with the amount of data you transfer!

lamor15

Join the chat at https://gitter.im/strands-project/lamor15. A repository for the STRANDS summer school, to be cloned by participants.

Please see the Wiki for details.

Original page: https://github.com/strands-project/lamor15/blob/ICRAI/README.md

Welcome to the STRANDS Summer School on “Long-term Autonomy for Mobile Robots” (LAMoR)

Participant Information

Make sure you read the general information for participants about accommodation, travel, etc.

For online help join the chat at https://gitter.im/strands-project/lamor15

[[Individual Computer Setup]]

Make sure you accomplish the setup detailed on the “[[Individual Computer Setup]]” page prior to coming to Lincoln.

As we’ll be working in teams, it is recommended to have a github.com account. This will help you to obtain the tutorial material and to collaborate with your team.

[[Working with the STRANDS robots]]

Programme

The school's programme is composed of morning research presentations by leading academics in their fields, followed by hands-on programming tutorials to get to know the software components used to program the robots. The afternoons are dedicated to programming in teams to solve tasks on the robot during the course of the summer school. In these teams of 6-7 you will be working with one of four mobile Scitos G5 platforms, each equipped with two Kinect sensors, a SICK laser scanner, a touch screen, a pan-tilt camera, speakers, a microphone, an actuated head for human-robot interaction, and a differential drive to move about.

You will be working together in your team to implement one of a number of [[possible tasks]] (in C++ or Python), culminating in the opportunity to present your achievements and yourselves to the participants of the ECMR conference during a dedicated poster and demonstration session at the ECMR welcome reception.

Day 1 - 27/8: Mapping and Navigation

  • Learning Objectives for the day:
  1. Understanding basics and creation of metric and topological mapping
  2. Get to know the challenges and solution of 4D (i.e. temporal) mapping
  3. Navigate in those maps
  4. Understand, create, and visualise “meta rooms” to perceive the dynamic environment
  • Lecture materials 1: Part 1, Part 2
  • [[Tutorial materials 1]]
  • Social event: Pub quiz at the tower bars starting at 18.00. If you want to eat there as well, please arrive around 17.30 as their kitchen will close at 18.00.

Day 2 - 28/8: Scheduling and Execution

  • Learning Objectives for the day:
  1. scheduling tasks
  2. creating routines

Day 3 - 29/8: Computer Vision

  • Learning Objectives for the day:
  1. Modelling 3D objects
  2. Modelling object recogniser performance
  3. Searching for objects in the environment
  • Lecture materials 3
  • [[Tutorial materials 3]]
  • Social event: Social dinner at the Wig & Mitre starting 19.00. The distance might seem short but includes a rather steep hill, please allow some extra time for that.

Day 4 - 30/8: People tracking and Activities

  • Learning Objectives for the day:
  1. using and customising the people tracker
  2. using the qsr_lib to generate QSRs of arbitrary objects offline
  3. adding a new QSR to the qsr_lib
  4. understanding QTC and writing an online QTC creator for a robot and a human
  • Hacking tasks:
  • creating a node that generates QSRs between people or objects
  • creating your own QSR in the library
  • Lecture materials 4
  • [[Tutorial materials 4]]
  • Social event: Self organised night out to explore the city’s night life.

Day 5 - 31/8: Sightseeing, Hacking, and Demonstration

  • Morning opportunity to go sightseeing
  • Afternoon, finalising implementation of tasks in teams
  • around 16:00, demonstration by groups of achievements

(Day 6 - 1/9: Free)

  • Participants not staying on for ECMR to depart

(Day 7 - 2/9: First day of ECMR)

  • 17:30 ECMR welcome reception with robot demonstrations and posters from LAMoR teams.

Photos / Social Media

If you take any pictures during the summer school that you're happy to share, either send them to marc@hanheide.net, or let me know where I can download them. If you use Twitter, Facebook, or Instagram, please hash-tag with #LAMoR15. An album of collected photos is here

Original page: https://github.com/strands-project/lamor15/wiki

1. Some Previous Steps

  1. Enable the LCAS package repositories:

  2. Add the LCAS public key to verify packages:

    curl -s http://lcas.lincoln.ac.uk/repos/public.key | sudo apt-key add -

  3. Add the LCAS repository:

    sudo apt-add-repository http://lcas.lincoln.ac.uk/repos/release

  4. update your index:

    sudo apt-get update

  5. Install the package “uol_cmp3641m” which will install all required packages:

    sudo apt-get install ros-indigo-turtlebot-gazebo

2. Launch Simulation

  1. Define TURTLEBOT_GAZEBO_WORLD_FILE:

    export TURTLEBOT_GAZEBO_WORLD_FILE=/opt/ros/indigo/share/turtlebot_gazebo/empty.world

  2. Launch Simulation:

    roslaunch turtlebot_gazebo turtlebot_world.launch world_file:=$(rospack find turtlebot_gazebo)/worlds/empty.world

  3. Launch RVIZ:

    rosrun rviz rviz

  4. Keyboard teleop:

    roslaunch kobuki_keyop keyop.launch

3. Create 2D Map

  1. run gmapping after all steps in 2:

    rosrun gmapping slam_gmapping

4. Create Topological Map

Note: the robot_pose_publisher script also needs to be running: https://github.com/LCAS/Rusty/blob/indigo-devel/rusty_simulation/scripts/robot_pose_publisher.py

  1. Launch MongoDB:

    roslaunch mongodb_store mongodb_store.launch db_path:=/path/to/mongodb

  2. Launch Topological Navigation:

    roslaunch topological_navigation topological_navigation_empty_map.launch map:=Name_of_your_map

  3. Launch RViz:

  4. Add Interactive markers to edit your map

Original page: https://github.com/strands-project/lamor15/wiki/ICRAI-workshop

The school organisers prepared the robots so that they are available to the individual groups with all software and hardware readily installed. In order to be able to work in a team, each participant should follow the installation instructions detailed below on their own PC:

  1. Install Ubuntu Linux 14.04LTS 64bit on your computer. Please make sure that you have exactly this version installed: 14.04, 64bit. Download the image from here: http://www.ubuntu.com/download/desktop. Note that you can install Ubuntu 14.04 alongside an existing operating system (even on a MacBook).
  2. Follow the instructions at https://github.com/strands-project-releases/strands-releases/wiki#using-the-strands-repository to install both ROS and the STRANDS software packages. We assume a basic understanding of Unix-like operating systems and shell usage here. If you need help using the command line, this might be a good start: https://help.ubuntu.com/community/UsingTheTerminal. The relevant installation steps are copied below for your convenience:
  3. Enable the ROS repositories: Accomplish steps 1.1-1.3 in http://wiki.ros.org/indigo/Installation/Ubuntu#Installation. No need to do the following steps after 1.3.
  4. Enable the STRANDS repositories:
    1. Add the STRANDS public key to verify packages: curl -s http://lcas.lincoln.ac.uk/repos/public.key | sudo apt-key add -
    2. Add the STRANDS repository: sudo apt-add-repository http://lcas.lincoln.ac.uk/repos/release
  5. update your index: sudo apt-get update
  6. install required packages: sudo apt-get install ros-indigo-desktop-full ros-indigo-strands-desktop
  7. Try out the “MORSE” simulator (run all the following in your terminal):
  8. configure ROS: source /opt/ros/indigo/setup.bash
  9. launch the simulator: roslaunch strands_morse uol_mht_morse.launch You should see the Morse simulator popping up with our robot platform being configured. If your graphics card cannot handle the simulation properly, please try the “fast mode”, using this command instead: roslaunch strands_morse uol_mht_morse.launch env:=uol_mht_nocam_fast

If your laptop uses an NVidia graphics card it might be worth looking at: https://wiki.ubuntu.com/Bumblebee to use it to its full potential. You should be all set now!

Original page: https://github.com/strands-project/lamor15/wiki/Individual-Computer-Setup

Setting up an Ubuntu machine to connect to the robot’s VPN

IPs might be different for your robot. Please ask someone from the STRANDS team.

Connecting via Network-Manager

In order to connect a random PC to the robot, it first needs to be able to connect to the robot via TCP port 1723. The onboard PC (with local IP 192.168.0.100 and remote IP as assigned by the WiFi DHCP server, the remote IP is referred to as ``WLAN_IP`` from now on) acts as a gateway.

  1. Make sure the Network-Manager PPTP plugin is installed: sudo apt-get install network-manager-pptp-gnome
  2. Create a new VPN connection in Network-Manager using PPTP protocol
  3. Set ``WLAN_IP`` as the gateway address (ask a member of the STRANDS team to get it)
  4. set username and password (ask a member of the STRANDS team to get it)
  5. In the “VPN” tab, choose “Advanced”, and select “Use Point-to-Point encryption (MPPE)”
  6. In the “IPv4 Settings” tab, choose “Address Only”
  7. Click on “Routes” in the “IPv4 Settings” tab, and select “Use this connection only for resources on its own network”
  8. Still in “Routes” add a static route with
  • Address: 192.168.0.0
  • Netmask: 24
  • Gateway: 0.0.0.0
  9. Save and connect to the new VPN network… Tadaaa, you should be connected. (Note: this connection is only used to connect to the robot; all other network traffic on your computer still goes via the default route, not the robot!)

configure /etc/hosts

In order for ROS to find all machines, you need to add the robot's IP addresses to /etc/hosts on your own machine. The following is an example taken from Linda; please ask a member of the team for the correct IP range:

192.168.0.100 linda
192.168.0.101 left-cortex
192.168.0.102 right-cortex

192.168.0.230   vpn-00
192.168.0.231   vpn-01
192.168.0.232   vpn-02
192.168.0.233   vpn-03
192.168.0.234   vpn-04
192.168.0.235   vpn-05
192.168.0.236   vpn-06
192.168.0.237   vpn-07
192.168.0.238   vpn-08
192.168.0.239   vpn-09
192.168.0.240   vpn-10
192.168.0.241   vpn-11
192.168.0.242   vpn-12
192.168.0.243   vpn-13
192.168.0.244   vpn-14
192.168.0.245   vpn-15
192.168.0.246   vpn-16
192.168.0.247   vpn-17
192.168.0.248   vpn-18
192.168.0.249   vpn-19
192.168.0.250   vpn-20
192.168.0.251   vpn-21
192.168.0.252   vpn-22

Enjoy

You should be all set now. Just run

  1. export ROS_MASTER_URI=http://<robot_name>:11311/ and
  2. export ROS_HOSTNAME=vpn-XX with vpn-XX being your assigned name and XX matching your assigned IP (see above)

You should then be able to run e.g. rosrun rviz rviz and see all data streams displayed.

convenience scripts

At UOL we use a little script to set our desktop PCs to use the robot ROS infrastructure, called robot-overlord.sh. Simply download it and run source robot-overlord.sh. This works for Linda. Please adjust robot name appropriately if you work on one of the other robots.

Original page: https://github.com/strands-project/lamor15/wiki/Robot-VPN-to-connect-from-Desktop-PCs-and-Laptops

Overall setup

(not all of these need to be done on all machines; some are set up already)

Network setups

Bob

Betty

Linda

  • robot network is 192.168.0.0/24
  • main PC linda: 192.168.0.100
  • side PC left-cortex: 192.168.0.101
  • side PC right-cortex: 192.168.0.102

Lucie

  • robot network is 10.0.0.0/24
  • main PC lucie01: 10.0.0.1
  • side PC lucie02: 10.0.0.2

Changes for LAMoR

Bob

  • Created users on all PCs
    • lamor -> uid=2000(lamor) gid=2000(lamor) groups=2000(lamor)
    • lamor-admin -> uid=2001(lamor-admin) gid=2001(lamor-admin) groups=2001(lamor-admin),27(sudo)
  • Removed http_proxy from /etc/environment, .bashrc, and in the GUI
  • Generated ssh keys for lamor and lamor-admin users (ssh-keygen -t rsa), and transferred them between machines: ssh-add, then ssh-copy-id
  • Updated packages with sudo apt-get update and sudo apt-get upgrade on bob, bobr and bobl

Betty

  • Created users on all PCs
    • lamor -> uid=2000(lamor) gid=2000(lamor) groups=2000(lamor)
    • lamor-admin -> uid=2001(lamor-admin) gid=2001(lamor-admin) groups=2001(lamor-admin),27(sudo)
  • generated ssh keys for lamor and lamor-admin users (ssh-keygen -t rsa), and transferred them between machines: ssh-add, then ssh-copy-id
  • Removed http_proxy from /etc/environment, .bashrc, and in the GUI
  • Updated packages with sudo apt-get update and sudo apt-get upgrade on betty, bettyr and bettyl

Linda

  • Created users on all PCs
    • lamor -> uid=2000(lamor) gid=2000(lamor) groups=2000(lamor)
    • lamor-admin -> uid=2001(lamor-admin) gid=2001(lamor-admin) groups=2001(lamor-admin),27(sudo)
  • generated ssh keys for lamor and lamor-admin users (ssh-keygen -t rsa), and transferred them between machines: ssh-add, then ssh-copy-id
  • Updated packages with sudo apt-get update and sudo apt-get upgrade on linda, left-cortex and right-cortex

Lucie

  • installed openssh-server on side-PC
  • installed dnsmasq
  • set up routed network
  • Created users on all PCs
    • lamor -> uid=2000(lamor) gid=2000(lamor) groups=2000(lamor)
    • lamor-admin -> uid=2001(lamor-admin) gid=2001(lamor-admin) groups=2001(lamor-admin),27(sudo)
  • put all hosts in all /etc/hosts
    • lucie01 in lucie02
    • lucie02 in lucie01
  • installed vim (sudo apt-get install vim) on lucie02
  • generated ssh keys for lamor and lamor-admin users (ssh-keygen -t rsa) on Lucie02, and transferred them to Lucie01: ssh-add, then ssh-copy-id
  • set up NFS and VPN

Original page: https://github.com/strands-project/lamor15/wiki/Robot-configuration

Using the Strands Navigation System

The STRANDS navigation system is composed of three layers of components:

  • Action Layer: this layer contains all the navigation actions that the robot can use, i.e. the actions that move the robot in the environment, for example docking and undocking, door passing, or move_base.
  • Monitored Navigation Layer: this layer monitors the actions in the action layer and, in case of failure, can activate specific recovery behaviours.
  • Topological Navigation Layer: this layer connects the different locations in the environment by means of navigation actions. To do so it uses an occupancy grid map and a topological map that defines locations the robot can navigate to and the means of reaching them.

The goal of this tutorial is to show you how to set up a working navigation system, from creating the occupancy grid map to setting up topological navigation.

Creating occupancy grid map

To create the map we are going to use gmapping; however, you can use maps created with other methods. We will first look at an example using a simulation.

  1. The first step is to launch a simulation. For convenience there is a tmux script in the lamor_bringup package; to run it execute: rosrun lamor_bringup sim_start.sh

  2. Once this is started, switch to tab 2 (ctrl+b, 2) and press enter; this will open a screen like this: Sim screenshot

    Please note that the robot is on the charging station. This is not a coincidence: it ensures all maps are zeroed at the charging station.

  3. The next step is to start gmapping; open a new tab and run: rosrun gmapping slam_gmapping

  4. Run RViz: rviz (RViz screenshot)

  5. Click on the MORSE window and use the arrow keys to drive the robot; you should see in RViz how the map is being created: RViz screenshot

  6. Once you are happy with your map, you can save it by running rosrun map_server map_saver; this will create the map.pgm and map.yaml files. You may notice that this map is a bit too big for your liking; you can solve this by running rosrun map_server crop_map map.yaml, which will create cropped.pgm and cropped.yaml. The final result should look something like this map. You can rename your map as you wish, but don't forget that the image field in the yaml file must correspond to the name (and path, if you wish) of your map image.

Creating the Topological Map

RViz can be used to edit topological maps at system runtime; one easy way of creating a map is to insert a basic topological map and edit it using RViz. The following steps will guide you through this process.

  1. It is necessary to insert a basic map in the DB and run the navigation system based on it: rosrun topological_utils insert_map.py $(rospack find topological_utils)/support/empty.tmap topological_map_name environment_name
  2. Once this is done you can run RViz (rosrun rviz rviz) and create two marker array interfaces, /topological_node_zones_array and /topological_edges_array; optionally an additional marker array interface, /topological_nodes_array, can be created. This will allow the visualization of the topological map.
  3. After this the map can be edited using interactive markers:
  • Adding Nodes: for this there is an interactive marker topic called /name_of_your_map/add_rm_node/update that will put a green box on top of the robot. Drive the robot to where you want to add a new node and click on the green box; this will add a node there, connected to all nearby nodes by a move_base action.
  • Editing Node Position: with /name_of_your_map_markers/update it is possible to move the waypoints around and change their final goal orientation.
  • Removing Unnecessary Edges: to remove edges there is another marker /name_of_your_map_edges/update that will allow you to remove edges by pressing on the red arrows.
  • Change the Influence Zones: Finally it is possible to change the influence zones with /name_of_your_map_zones/update and moving the vertices of such zones around.

Creating the topological map in simulation

  1. Run the tmux script tab by tab until tab 3 rosrun lamor_bringup sim_start.sh
  2. Afterwards run: roslaunch topological_utils create_topological_map.launch met_map:=bandl top_map:=bl_sim or alternatively you can insert the empty map and run rosrun lamor_bringup sim_start.sh Tabs 0, 1, 2, 4

Local metric map / semantic map

Description

This set of nodes allows the creation, calibration, storage and querying of local metric maps as well as the detected dynamic elements. Recorded data is stored in ~/.semanticMap/ and the number of observations stored for each waypoint can be specified as a launch file parameter, as well as whether the observations should be stored in mongodb. Detailed information about how the individual nodes function, input/output topics and parameters can be found in the corresponding readme files.

You cannot run this part of the tutorial in *fast-mode* (wireframe) simulation. You need the full simulation. Please either work with someone who has it running or use the provided desktop machines.

  • Install dependencies
sudo apt-get install ros-indigo-semantic-map-launcher
sudo apt-get install ros-indigo-metaroom-xml-parser
sudo apt-get install ros-indigo-scitos-ptu

Also, please create the following folder (this folder should be created automatically, however as a precaution please create it yourself):

mkdir ~/.semanticMap/
  • Run dependencies:

Please make sure you are still running the components from the previous tutorial (mongo_db, simulator or robot, 2d_navigation, topological navigation, head_camera, etc). In addition, please start:

rosrun scitos_ptu ptu_action_server_metric_map.py
  • if you are running in simulation, you need to emulate the RGBD cameras correctly using this command (note that you cannot run the local metric mapping if you are using the fast-mode (wireframe) simulation, as you won’t have the camera topics needed):
roslaunch strands_morse generate_camera_topics.launch
  • System nodes (note that this command starts all the relevant nodes, and you should not start any other nodes when moving on to the next sections):
roslaunch semantic_map_launcher semantic_map.launch

Doing a metric sweep (package cloud_merge)

The underlying data used by the nodes in this package is obtained by performing a sweep of the Pan-Tilt Unit and collecting RGBD data at a number of positions during the sweep.

rosrun cloud_merge do_sweep.py

roslaunch cloud_merge cloud_merge.launch

rosrun scitos_ptu ptu_action_server_metric_map.py

Prior to starting a sweep, please make sure you are well localized on the map (you can do this by looking at your robot’s position in Rviz). To start a sweep use the do_sweep action server:

rosrun actionlib axclient.py /do_sweep

This action server takes as input a string, with the following values defined: complete, medium, short, shortest. Internally the action server from scitos_ptu called ptu_action_server_metric_map.py is used, so make sure that is running.

The behavior is the following:

  • If sweep type is complete, the sweep is started with parameters -160 20 160 -30 30 30 -> 51 positions
  • If sweep type is medium, the sweep is started with parameters -160 20 160 -30 30 -30 -> 17 positions
  • If sweep type is short, the sweep is started with parameters -160 40 160 -30 30 -30 -> 9 positions
  • If sweep type is shortest, the sweep is started with parameters -160 60 140 -30 30 -30 -> 6 positions (there might be blank areas with this sweep type, depending on the environment).

The cloud_merge_node acquires data from the RGBD sensor as the PTU unit does a sweep, stopping at various positions. As the PTU stops at a position, a number of RGBD frames are collected and averaged, with the purpose of reducing noise. Each of these frames is converted to the global frame of reference and merged to form an observation point cloud, which is further processed by the semantic_map semantic_map_node.
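
As an illustration of the averaging step only (this is not the cloud_merge implementation), averaging several noisy depth frames taken from the same PTU position reduces per-pixel noise roughly by the square root of the number of frames:

import numpy as np

# five synthetic noisy depth frames of a flat surface 2 m away (illustration only)
frames = np.stack([np.random.normal(2.0, 0.02, (240, 320)) for _ in range(5)])
averaged = frames.mean(axis=0)           # per-pixel mean over the collected frames
print(frames.std(), averaged.std())      # the averaged frame shows lower noise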

If the sweep intermediate positions have been calibrated (using the calibrate_sweeps calibrate_sweep_as action server) and the parameter register_and_correct_sweep is set to true, the collected sweeps are also registered. Note that this registration is for the intermediate point clouds making up the sweep, and not between two sweeps.

The observations are stored on the disk, in the folder

~/.semanticMap/

The acquired observations are indexed according to the waypoint at which the sweeps were performed. This information is acquired from the /current_node topic, published by the topological navigation system.

Intermediate cloud calibration (package calibrate_sweeps)

As described earlier, each observation consists of a number of point clouds acquired at intermediate positions during a sweep. However, the exact pose of the camera optical center (camera frame of reference) with respect to the body of the robot (the odometry frame of reference) is not known a priori, and hence misalignment errors often occur when merging the intermediate clouds. To account for this, a calibration procedure has been developed that considers data from all observations of a particular type recorded so far.

rosrun calibrate_sweeps calibrate_sweep_as

To start the calibration procedure, call:

rosrun actionlib axclient.py /calibrate_sweeps

The calibration process uses only observations recorded with the type complete if using the do_sweeps.py action server from the cloud_merge package, i.e. with 51 positions.

If using the ptu_action_server_metric_map.py action server from the scitos_ptu package to record observations, the parameters are -160 20 160 -30 30 30.

Sweeps recorded with different parameters are ignored for the calibration. For registration, sweeps with different parameters are also processed if their parameters are a subset of the complete sweep type parameters (e.g. complete sweep type parameters are -160 20 160 -30 30 30; an example subset of those would be -160 40 160 -30 30 30, i.e. fewer pan stops).

The calibration results are saved in ~/.ros/semanticMap. These are:

  • registration_transforms.txt the result of the 51 transforms for the intermediate poses.
  • registration_transforms_raw.txt legacy - contains the same data as above in a different format, needed by the strands_sweep_registration package.
  • camera_params.txt contains the optimized camera parameters. This is currently disabled, and the stored camera parameters should be the same as the input camera parameters.
  • sweep_paramters.txt the sweep parameters used by the calibration (-160 20 160 -30 30 30)

Metarooms and dynamic clusters (package semantic_map)

A meta-room contains only those parts of the scene which are observed to be static, and it is created incrementally as the robot re-observes the same location over time. Over time, as the robot visits the same location multiple times, the static part is modelled, and it is used to extract what is dynamic from subsequent observations.

This package takes room observations as they are constructed by the cloud_merge package and extracts the corresponding metaroom and also computes dynamic clusters at each waypoint. Note that one metaroom is constructed for each waypoint.

Some data is stored on the disk, in the folder

~/.semanticMap/metarooms
roslaunch semantic_map semantic_map.launch

Once a new sweep has been performed at a waypoint, the corresponding metaroom is loaded (if it exists, otherwise a metaroom is initialized with the new observation), and it is used to extract the dynamic clusters/objects at that waypoint.

The NDT algorithm is used to register observations taken at the same waypoint. The initial guess is provided by the AMCL robot pose; however, depending on the scene, the amount of change between observations and sensor noise, the observations are sometimes poorly registered, resulting in poor dynamic clusters. One alternative in that case is to reset the metarooms, using the service described in the next section.

This service resets the metarooms at specific waypoints.

Message type:

string[] waypoint_id
bool initialize
---

If initialize is set to True, the Meta-Rooms at the specific waypoints are re-initialized with the latest observations collected at those waypoints. Otherwise, the Meta-Rooms are just deleted.

The default behavior of this node is to return all the dynamic clusters in an observation, as compared to the corresponding metaroom. However, sometimes this may mean a lot of information, and for that the parameter newest_dynamic_clusters is provided (default value False) - if set to True, the node will compute dynamic clusters by comparing the latest sweep with the previous one (as opposed to comparing the latest sweep to the metaroom), thus reporting only the latest dynamic clusters.

Requesting dynamic clusters (package object_manager)

This package allows interaction with the dynamic clusters segmented by the semantic_map package

rosrun object_manager object_manager_node

Message type:

string waypoint_id
---
string[] object_id
sensor_msgs/PointCloud2[] objects
geometry_msgs/Point[] centroids

Given a waypoint id, this service returns all the dynamic clusters segmented at that waypoint, with their ids, point clouds and centroids.

Service topic: ObjectManager/DynamicObjectsService

The point cloud corresponding to all the dynamic clusters is also published on the topic /object_manager/objects.
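
A minimal Python client for the DynamicObjectsService could look like the sketch below; the srv module and class name (object_manager.srv.DynamicObjectsService) and the waypoint id 'WayPoint1' are assumptions, so check the package's srv definitions for the actual names:

import rospy
from object_manager.srv import DynamicObjectsService   # assumed srv module and type name

rospy.init_node('dynamic_objects_client')
rospy.wait_for_service('/ObjectManager/DynamicObjectsService')
get_objects = rospy.ServiceProxy('/ObjectManager/DynamicObjectsService',
                                 DynamicObjectsService)
resp = get_objects(waypoint_id='WayPoint1')             # hypothetical waypoint id
for object_id, centroid in zip(resp.object_id, resp.centroids):
    print(object_id, centroid.x, centroid.y, centroid.z)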

Message type:

string waypoint_id
string object_id
---
sensor_msgs/PointCloud2 object_cloud
int32[] object_mask
geometry_msgs/Transform transform_to_map
int32 pan_angle
int32 tilt_angle

Given a waypoint id and a cluster id (which should correspond to the ids received after calling the DynamicObjectsService), this service returns the point cloud corresponding to that dynamic cluster in the camera frame of reference and a transform to get the point cloud in the map frame of reference. In addition, it returns a set of angles (pan_angle and tilt_angle) to which to turn the PTU, and a set of indices representing the image pixels corresponding to the dynamic cluster in the image obtained after turning the PTU to the specified angles. After calling this service, the requested dynamic cluster is “selected”, and after receiving the start_viewing message on the object_learning/status topic, additional views received on the /object_learning/object_view topic will be added and logged together with this cluster.

Service topic: ObjectManager/GetDynamicObjectService

The point cloud corresponding to the requested dynamic cluster is also published on the topic /object_manager/requested_object.

The cluster mask is also published as an image on the topic: /object_manager/requested_object_mask

Note that the clusters are logged to the database when calling the DynamicObjectsService or the GetDynamicObjectService (if the log_to_db argument is set to True). Calling these services multiple times does not affect (negatively) the logging.

Accessing observation data online (package semantic_map_publisher)

This package provides an interface to observation data previously recorded and stored on the disk. The data can be queried using the services described below (note that only some of the services available are described below; for more information please refer to the package description).

rosrun semantic_map_publisher semantic_map_publisher

Message type:

---
string[] waypoint_id
int32[] observation_count

Returns a list of waypoints along with the number of observations collected at those waypoints.

Service name: SemanticMapPublisher/WaypointInfoService

ObservationService

Message type:

string waypoint_id
float64 resolution
---
sensor_msgs/PointCloud2 cloud

Given a waypoint, and a resolution, this service returns the latest observation collected at that waypoint as a PointCloud with the specified resolution.

Service name: SemanticMapPublisher/ObservationService

WaypointTimestampService

Message type:

string waypoint_id
---
string[] waypoint_timestamps

Given a waypoint, this service returns the timestamps of all the observations collected at that waypoint, as a list.

Service name: SemanticMapPublisher/WaypointTimestampService

ObservationInstanceService

Message type:

string waypoint_id
int64 instance_number # convention 0 - oldest available
float64 resolution
---
sensor_msgs/PointCloud2 cloud
string observation_timestamp

Given a waypoint id, an instance number and a resolution, this service returns a particular instance from the observations collected at that waypoint, with the desired resolution, along with the timestamp of the observation (as opposed to ObservationService, which returns the latest observation at that waypoint).

Service name: SemanticMapPublisher/ObservationInstanceService

Accessing data stored on the disk (package metaroom_xml_parser)

The metaroom_xml_parser package is used to parse previously saved room observations. The data will be read into an appropriate data structure containing: merged point cloud, individual point clouds, individual RGB and depth images and corresponding camera parameters.

Utilities

A number of utilities are provided by this package, for easy data manipulation. The definitions can be seen in the file load_utilities.h

Merged cloud utilities

The complete cloud datatype is:

template <class PointType> boost::shared_ptr<pcl::PointCloud<PointType>>

The utilities for loading only the merged cloud are:

  • loadMergedCloudFromSingleSweep # returns one cloud
  • loadMergedCloudFromMultipleSweeps # returns a vector of merged clouds, one for each sweep
  • loadMergedCloudForTopologicalWaypoint # same as above

Intermediate cloud utilities

The intermediate cloud datatype is:

template <class PointType>
struct IntermediateCloudCompleteData
{
    std::vector<boost::shared_ptr<pcl::PointCloud<PointType>>>  vIntermediateRoomClouds;
    std::vector<tf::StampedTransform>                           vIntermediateRoomCloudTransforms;
    std::vector<image_geometry::PinholeCameraModel>             vIntermediateRoomCloudCamParams;
    std::vector<cv::Mat>                                        vIntermediateRGBImages; // type CV_8UC3
    std::vector<cv::Mat>                                        vIntermediateDepthImages; // type CV_16UC1
};

The utilities for loading the intermediate clouds are:

  • loadIntermediateCloudsFromSingleSweep # just the point clouds
  • loadIntermediateCloudsCompleteDataFromSingleSweep # complete data, with transforms and images
  • loadIntermediateCloudsFromMultipleSweeps
  • loadIntermediateCloudsCompleteDataFromMultipleSweeps
  • loadIntermediateCloudsForTopologicalWaypoint
  • loadIntermediateCloudsCompleteDataForTopologicalWaypoint

Sweep XML utilities

The sweep XML is an std::string

The utilities for finding sweep XMLs are:

  • getSweepXmls # takes a folder in which to search as argument; returns a vector<string>
  • getSweepXmlsForTopologicalWaypoint

Dynamic cluster utilities

The dynamic clusters type is:

template <class PointType> std::vector<boost::shared_ptr<pcl::PointCloud<PointType>>>

The dynamic cluster utilities are:

  • loadDynamicClustersFromSingleSweep
  • loadDynamicClustersFromMultipleSweeps
  • loadDynamicClustersForTopologicalWaypoint

Data collected so far in STRANDS and available online

A number of datasets (in the formats described above) have been collected during STRANDS and made publicly available. For more information look at:

https://strands.pdc.kth.se/public/

Debugging / troubleshooting

If for some reason (e.g. poor registration) the meta-rooms have diverged, the reported dynamic clusters will not be correct. In that case, use the ClearMetaroomService service to reset the meta-rooms:

This service resets the Meta-Rooms at specific waypoints.

Message type:

string[] waypoint_id
bool initialize
---

If initialize is set to True, the Meta-Rooms at the specific waypoints are re-initialized with the latest observations collected at those waypoints. Otherwise, the Meta-Rooms are just deleted.

In case there are issues (i.e. nodes crashing) and restarting the nodes does not solve the problems, you can try deleting the data stored in ~/.semanticMap. Note that this operation cannot be undone, and that data will be lost (if you want to keep that data be sure to move it somewhere else).

If the issues are still not solved, delete the data logged in MongoDB in the metric_maps database (using e.g. Robomongo).

Original page: https://github.com/strands-project/lamor15/wiki/Tutorial-materials-1

For an up-to-date tutorial please check out https://github.com/strands-project/v4r_ros_wrappers/blob/master/Tutorial.md


Overview

In this tutorial you will learn how to recognise trained 3D object models. You will first model objects by rotating them in front of an RGB-D camera. These objects can then be recognised by a recognition component via a ROS service call. You will also learn how much you can (or can’t) trust recogniser responses.

This tutorial requires installation of a specific vision library, which is part of the STRANDS repositories.

The V4R (Vision for Robotics) library

This is now released as Ubuntu packages as well, so there should be no need to install this from source as detailed below. Just install with ``sudo apt-get install ros-indigo-v4r-ros-wrappers`` and you should be ready to go! Jump straight to [[Tutorial-materials-3#object-modelling]].

This library is part of the STRANDS repositories (https://github.com/strands-project/v4r). The library itself is independent of ROS, so it is built outside the ROS catkin workspace. There are ROS wrappers (https://github.com/strands-project/v4r_ros_wrappers) which can then be placed inside the normal catkin workspace.

Dependencies

  • openni drivers: sudo apt-get install libopenni-dev libopenni-sensor-primesense0
  • Qt 4 (http://qt-project.org/): sudo apt-get install libqt4-dev
  • Boost (http://www.boost.org/): comes with ubuntu
  • Point Cloud Library 1.7.x (http://pointclouds.org/): comes with ROS
  • Eigen3 (http://eigen.tuxfamily.org/): sudo apt-get install libeigen3-dev
  • OpenCV 2.x (http://opencv.org/): comes with ROS
  • Ceres Solver 1.9.0 (http://ceres-solver.org/): sudo apt-get install libceres-dev
  • OpenGL GLSL mathematics (libglm-dev): sudo apt-get install libglm-dev
  • GLEW - The OpenGL Extension Wrangler Library (libglew-dev): sudo apt-get install libglew-dev
  • libraries for sparse matrices computations (libsuitesparse-dev): sudo apt-get install libsuitesparse-dev

Installation

Clone https://github.com/strands-project/v4r somewhere on your computer, e.g. ~/somewhere/v4r/, and build it using cmake:

cd ~/somewhere
git clone https://github.com/strands-project/v4r
cd v4r
mkdir build
cd build
cmake ..
make
sudo make install (optional)

Now you can install the STRANDS ROS wrappers. Clone v4r_ros_wrappers into your catkin workspace:

cd my_catkin_ws/src
git clone https://github.com/strands-project/v4r_ros_wrappers.git
cd ..

Then call catkin_make once. If you chose not to install the V4R library (the optional step above) you will get an error regarding V4RModules.cmake. This is easy to fix:

cd my_catkin_ws/build
ccmake ../src

Locate the option V4R_DIR and set it according to where you built the V4R library, e.g.:

V4R_DIR   /home/somewhere/v4r/build

Then call catkin_make again, and everything should now compile fine.

Object modelling

First you have to model all the objects you want to recognise later. You use an offline tool for this, the RTM Toolbox.

  • Place the object on a flat surface on a newspaper or something similar. This allows you to rotate the object without touching it. The texture on the newspaper also helps view registration. The pictures below were taken with the object on a turn table, which is the most convenient way of rotating the object.

  • Start the modelling tool: ~/somewhere/v4r/bin/RTMT

  • Press “Camera Start”: You should now see the camera image

    image0

  • Press “ROI Set” + click on the flat surface next to the object

    image1

  • Press “Tracker Start”: you should now see the tracking quality bar in the top left

    image2

  • Rotate the object 360 degrees; the program will generate a number of keyframes.

    IMPORTANT: Do not touch the object itself while moving it. image3 image4

  • Press “Tracker Stop”

  • Press “Camera Stop”

  • Press “Pose Optimize” (optional): bundle adjustment of all cameras (be patient!)

  • Press “Object Segment”: The object should already be segmented correctly thanks to the ROI set previously

    image5 You can check the segmentation (< > buttons), and you can click
    to undo wrong segmentations or to add areas.

  • Press “Object …Ok”

    image6 image7

  • Press “Store for Recognizer”: saves the point clouds in a format for object recognition. You will be asked for a model name.

    By default the program will store models in various subfolders of

    the folder “./data”, which will be created if not present. This can be changed in the configuration options (see below).

  • Press “Store for Tracker”: save a different model suitable for tracking

  • If the 3D point cloud visualization is activated, +/- can be used to increase/decrease the size of the dots

This is a convenient way to model objects with the STRANDS robots. Put the object on something elevated (a trash can in this case) to bring it within a good distance of the robot’s head camera.

Configuration options:

  • Set data folder and model name:

    (File -> Preferences -> Settings -> Path and model name)

  • Configure the number of keyframes to be selected using a camera rotation and a camera translation threshold:

    (File -> Preferences -> Settings -> Min. delta angle, Min. delta

    camera distance)

Troubleshooting

  • If you press any of the buttons in the wrong order, just restart. Recovery is futile.
  • If you do not get an image, there is a problem with the OpenNI device driver. Check the file /etc/openni/GlobalDefaults.ini, set UsbInterface=2 (i.e. BULK).

Object recognition

With the models created above you can now call the object recognition service within the STRANDS system.

Start the object recogniser on the side PC with the head camera attached. To do this, you must SSH into the side PC without X forwarding then run:

export DISPLAY=:0
roslaunch singleview_object_recognizer recognition_service.launch

There are some parameters to set in the launch file:

<launch>
  <arg name="data_dir" default="/home/somewhere/v4r/data" />
  <arg name="do_sift" default="true" />
  <arg name="do_shot" default="false" />
  <arg name="do_ourcvfh" default="false" />
  <arg name="chop_z" default="3.0" />
  <arg name="cg_size_thresh" default="5" />
  <arg name="knn_sift" default="3" />

  ...
</launch>

The ones you might want to change are: chop_z, which sets the maximum distance at which objects are searched for (note that RGB-D data from the Asus or Kinect gets worse with distance); and cg_size_thresh, where higher values (5 or 6; the minimum is 3) increase speed at the cost of possibly missing objects if they are e.g. half occluded.
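Since these are declared as launch file arguments, you can override them on the command line rather than editing the file, e.g. roslaunch singleview_object_recognizer recognition_service.launch chop_z:=2.5 cg_size_thresh:=6 to restrict the search distance and speed up correspondence grouping.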

The recogniser offers a service /recognition_service/sv_recognition, defined in v4r_ros_wrappers/recognition_srv_definitions/srv:

sensor_msgs/PointCloud2 cloud  # input scene point cloud
float32[] transform            # optional transform to world coordinate system
std_msgs/String scene_name     # optional name for multiview recognition
std_msgs/String view_name      # optional name for multiview recognition
---
std_msgs/String[] ids                 # name of the recognised object model
geometry_msgs/Transform[] transforms  # 6D object pose
float32[] confidence                  # ratio of visible points
geometry_msgs/Point32[] centroid      # centroid of the cluster
object_perception_msgs/BBox[] bbox    # bounding box of the cluster
sensor_msgs/PointCloud2[] models_cloud  # point cloud of the model transformed into camera coordinates

All you have to provide is a point cloud. The recogniser will return arrays of ids (the names you gave during modelling), transforms (the 6D object poses), as well as confidences, bounding boxes and the segmented point clouds corresponding to the recognised portions of the scene.
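As an illustration, a minimal Python client might look like the sketch below. The service type name recognize is an assumption; check the recognition_srv_definitions package for the exact .srv name, and adjust the point cloud topic to your robot.

#!/usr/bin/env python
# Minimal sketch of a recognition client.
# Assumption: the service type is `recognize` in recognition_srv_definitions.srv.
import rospy
from sensor_msgs.msg import PointCloud2
from recognition_srv_definitions.srv import recognize

rospy.init_node('sv_recognition_client_example')
# Grab a single point cloud from the camera (adjust the topic to your robot)
cloud = rospy.wait_for_message('/camera/depth_registered/points', PointCloud2)
rospy.wait_for_service('/recognition_service/sv_recognition')
sv_recognition = rospy.ServiceProxy('/recognition_service/sv_recognition', recognize)
response = sv_recognition(cloud=cloud)
for obj_id, conf in zip(response.ids, response.confidence):
    print obj_id.data, conf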

There is a test ROS component for you as an example:

rosrun singleview_object_recognizer test_single_view_recognition_from_file _topic:=/camera/depth_registered/points

where you have to set the topic to the respective RGB-D source of your robot, e.g. the head_xtion.

The recogniser publishes visualisation information on the following topics:

  • /recognition_service/sv_recogniced_object_instances_img: the original image with overlaid bounding boxes of recognised objects
  • /recognition_service/sv_recogniced_object_instances: the model point clouds of the recognised objects, in the camera frame

The following picture shows an example where R2-D2 is detected in a shelf, with the debug picture and recognised model displayed in rviz. image8
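To quickly inspect the overlay image without rviz you can, for example, run rosrun image_view image_view image:=/recognition_service/sv_recogniced_object_instances_img (assuming the standard image_view package is installed).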

Recognition performance

The modelling tool provides full 3D object models from all the views you provided, which in principle allows the recogniser to recognise the object in any condition from any view. Practically, however, recognition performance depends on several factors:

  • Distance: Performance can decline quite rapidly with distance. First, because the object features on which the recogniser depends become too small (there is no way it could detect an object just a few pixels large). Second, because the depth data, on which a pose verification step in the recogniser depends, becomes more and more noisy (and close to useless beyond 3 m or so).
  • Lighting conditions: In principle the object features are lighting invariant. Practically, different characteristics of the camera used for modelling and the camera used for recognition can affect the appearance of the object features.
  • Motion blur: The robot might be moving while it takes a picture. Motion blur will deteriorate object feature extraction.
  • Occlusions: Objects might not be entirely visible. The recogniser does not need all object features, so it can handle a certain amount of occlusion. How much depends on the object and how many features it has.
  • Object specifics: Some objects are harder to detect than others, e.g. because they have few features, are small, have a somewhat shiny surface, or are non-rigid.

Before using the recogniser in any object search scenario it is therefore important to gather some statistics about the recognition performance. The recogniser’s confidence value can be a useful measure, but don’t trust it too much - it is not an actual probability.

Useful aspects to learn are:

  • How fast the recognition rate (in how many cases the object is found) drops with distance.
  • How the false positive rate and the confidence measure are related.

Troubleshooting

  • If you get an error like this

    terminate called after throwing an instance of 'flann::FLANNException'
      what():  Saved index does not contain the dataset and no dataset was provided.
    [recognition_service-2] process has died [pid 17534, exit code -6, cmd /home/mz/work/STRANDS/code/catkin_ws/devel/lib/singleview_object_recognizer/recognition_service __name:=recognition_service __log:=/home/mz/.ros/log/61fb16b8-4afc-11e5-801d-503f56004d09/recognition_service-2.log].
    log file: /home/mz/.ros/log/61fb16b8-4afc-11e5-801d-503f56004d09/recognition_service-2*.log
    

    locate the file sift_flann.idx, probably right in your catkin workspace or in ~/.ros, and remove it.

References

When referencing this work, please cite:

  1. J. Prankl, A. Aldoma Buchaca, A. Svejda, M. Vincze: RGB-D Object Modelling for Object Recognition and Tracking. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015.
  2. T. Fäulhammer, A. Aldoma, M. Zillich, M. Vincze: Temporal Integration of Feature Correspondences For Enhanced Recognition in Cluttered And Dynamic Environments. IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 2015.
  3. T. Fäulhammer, M. Zillich, M. Vincze: Multi-View Hypotheses Transfer for Enhanced Object Recognition in Clutter. IAPR International Conference on Machine Vision Applications (MVA), Tokyo, Japan, 2015.
  4. A. Aldoma Buchaca, F. Tombari, J. Prankl, A. Richtsfeld, L. di Stefano, M. Vincze: Multimodal Cue Integration through Hypotheses Verification for RGB-D Object Recognition and 6DOF Pose Estimation. IEEE International Conference on Robotics and Automation (ICRA), 2013.
  5. J. Prankl, T. Mörwald, M. Zillich, M. Vincze: Probabilistic Cue Integration for Real-time Object Pose Tracking. Proc. International Conference on Computer Vision Systems (ICVS), 2013.

For further information check out this site.

Original page: https://github.com/strands-project/lamor15/wiki/Tutorial-materials-3

Tutorial on Qualitative Spatial relations and human tracking

Human tracking

The STRANDS project uses a Kalman Filter based human tracker that provides you with the x,y,theta position of the humans and their velocity vector. The tracker can be found here: https://github.com/strands-project/strands_perception_people/tree/indigo-devel/bayes_people_tracker and the whole framework can be installed via apt-get install ros-indigo-strands-perception-people or found on github at: https://github.com/strands-project/strands_perception_people

A short video explaining how it works can be found here.

Using the emulator in simulation

These are generic instructions, please see descriptions at the bottom of the page on how to use the provided tmux script.

Starting the people tracker emulator in simulation requires the simulator to be running a human environment:

roslaunch strands_morse uol_bl_morse.launch env:=uol_bl_human

for the full simulation or

roslaunch strands_morse uol_bl_morse.launch env:=uol_bl_human_fast

for the fast wireframe mode. In the fast mode, the human can be seen everywhere in the environment, whereas in the normal mode the robot can only see the human when its camera can see it. The human can be moved via the W,A,S,D keys (press F5 to toggle between moving the camera and human) or automatically via publishing command velocities as a geometry_msgs/Twist message to /human/motion. Have a look at [[Working with the STRANDS robots]] for more hotkeys. In addition to the simulator, run the navigation

roslaunch lamor_bringup uol_bl_nav2d.launch

When both of these are running, you can launch the emulator:

rosrun people_tracker_emulator posestamped_to_ppl_tracker_msgs.py

Please see below for a description of the provided output and the visualisations.

Using the people tracker on the robot

Starting the people tracker on the robot is as easy as running:

roslaunch perception_people_launch people_tracker_robot.launch machine:=$HEAD_PC user:=lamor

This is already provided on tab 6 of your robot tmux.

The most important parameters are:

  • machine default = localhost: Determines on which machine this node should run. Ideally this machine should be connected to the Asus Xtion it uses. Use machine:=$HEAD_PC
  • user default = “”: The user used for the ssh connection if machine is not localhost. Use user:=lamor
  • tf_target_frame default = /map: The coordinate system into which the localisations should be transformed
  • log default = false: Log people and robot locations together with tracking and detection results to the message_store database, into the people_perception collection. Disabled by default because enabling it keeps the perception running continuously.
Provided output

The tracker publishes the following:

  • pose: A geometry_msgs/PoseStamped for the closest person. Orientations are based on velocity and therefore undefined for stationary people.
  • pose_array: A geometry_msgs/PoseArray for all detections. Orientations are based on velocity and therefore undefined for stationary people.
  • people: A people_msgs/People for all detections. Can be used for layered costmaps.
  • marker_array: A visualization_msgs/MarkerArray showing little stick figures for every detection. Figures are oriented according to the direction of velocity.
  • positions: A bayes_people_tracker/PeopleTracker message. See below.

std_msgs/Header header
string[] uuids             # Unique uuid5 (NAMESPACE_DNS) person id as string. Id is based on system time on start-up and tracker id. Array index matches ids array index
geometry_msgs/Pose[] poses # The real world poses of the detected people in the given target frame. Default: /map. Array index matches ids/uuids array index
float64[] distances        # The distances of the detected persons to the robot (polar coordinates). Array index matches ids array index.
float64[] angles           # Angles of the detected persons to the coordinate frames z axis (polar coordinates). Array index matches ids array index.
float64 min_distance       # The minimal distance in the distances array.
float64 min_distance_angle # The angle according to the minimal distance.

The poses will be published in a given target_frame (see below) but the distances and angles will always be relative to the robot in the /base_link tf frame.
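For example, a minimal subscriber that prints the distance and angle to the closest tracked person could look like the sketch below. It assumes the PeopleTracker message is published on /people_tracker/positions, in line with the other /people_tracker topics mentioned here.

#!/usr/bin/env python
# Minimal sketch: print distance and angle to the closest tracked person.
# Assumption: the PeopleTracker message is published on /people_tracker/positions.
import rospy
from bayes_people_tracker.msg import PeopleTracker

def callback(msg):
    if msg.uuids:  # empty if no one is currently tracked
        rospy.loginfo("Closest person: %.2f m at %.2f rad (%d people tracked)",
                      msg.min_distance, msg.min_distance_angle, len(msg.uuids))

rospy.init_node('people_tracker_listener')
rospy.Subscriber('/people_tracker/positions', PeopleTracker, callback)
rospy.spin()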

Visualising the output
Tracker visualisation

The tracker provides a pose array on /people_tracker/pose_array, which gives you the position and orientation of the human (green arrow), and a marker array on /people_tracker/marker_array, visualising the human position with a stick figure. They can easily be added in rviz to achieve the visualisation above.

Configuring the tracker

The people tracker uses sensor fusion to achieve the best possible results. Adding your own detectors just requires altering this config file:

bayes_people_tracker:
    filter_type: "UKF"                                         # The Kalman filter type: EKF = Extended Kalman Filter, UKF = Unscented Kalman Filter
    cv_noise_params:                                           # The noise for the constant velocity prediction model
        x: 1.4
        y: 1.4
    detectors:                                                 # Add detectors under this namespace
        upper_body_detector:                                   # Name of detector (used internally to identify them). Has to be unique.
            topic: "/upper_body_detector/bounding_box_centres" # The topic on which the geometry_msgs/PoseArray is published
            cartesian_noise_params:                            # The noise for the Cartesian observation model
                x: 0.5
                y: 0.5
            matching_algorithm: "NNJPDA"                       # The algorithm to match different detections. NN = Nearest Neighbour, NNJPDA = NN Joint Probability Data Association
        leg_detector:                                          # Name of detector (used internally to identify them). Has to be unique.
            topic: "/to_pose_array/leg_detector"               # The topic on which the geometry_msgs/PoseArray is published
            cartesian_noise_params:                            # The noise for the Cartesian observation model
                x: 0.2
                y: 0.2
            matching_algorithm: "NNJPDA"                       # The algorithm to match different detections. NN = Nearest Neighbour, NNJPDA = NN Joint Probability Data Association

New detectors are added under the parameter namespace bayes_people_tracker/detectors.

Tracker Parameters

The tracker offers two configuration parameters:

  • filter_type: Specifies which variant of the Kalman filter to use. Currently an Extended and an Unscented Kalman filter are implemented, chosen via EKF and UKF, respectively.
  • cv_noise_params: Used for the constant velocity prediction model; x and y specify the standard deviation of the x and y velocity.

Detector Parameters

  • For every detector you have to create a new namespace where the name is used as an internal identifier for this detector. Therefore it has to be unique. In this case it is upper_body_detector
  • The topic parameter specifies the topic under which the detections are published. The type has to be geometry_msgs/PoseArray. See to_pose_array in detector_msg_to_pose_array/README.md if your detector does not publish a PoseArray.
  • The cartesian_noise_params parameter is used for the Cartesian observation model; x and y specify the standard deviation of the observation noise.
  • matching_algorithm specifies the algorithm used to match detections from different sensors/detectors. Currently there are two different algorithms, both based on the Mahalanobis distance of the detections (the default is NNJPDA if the parameter is misspelled):
  • NN: Nearest Neighbour
  • NNJPDA: Nearest Neighbour Joint Probability Data Association

All of these are just normal ROS parameters and can either be specified by the parameter server or using the yaml file in the provided launch file.

Using a detector that does not publish a PoseArray

Not all detectors publish a PoseArray, e.g. the currently used leg_detector. The detector_msg_to_pose_array node can circumvent this:

This small node takes in an arbitrary message from a topic and extracts poses according to a given identifier. The found poses are published as a geometry_msgs/PoseArray. The node is used to transform the output of any people detector into a pose array for the people_tracker. The node is configured using the detectors.yaml in the config directory:

to_pose_array:
    detectors:                                  # Add detectors under this namespace
        leg_detector:                           # Name of detector (used internally to identify them). Has to be unique.
            topic: "/people_tracker_measurements"  # The topic on which the detector publishes its measurements
            point_name: "pos"                   # The name of the point containing the coordinates in question

The parameter namespace to_pose_array/detectors can contain as many sub-namespaces as needed. The above example shows the namespace to_pose_array/detectors/leg_detector, which contains the information to parse the messages generated by the ROS indigo package leg_detector.

  1. topic: this string is the topic name under which the messages containing the positions are published.

  2. point_name: this string specifies the identifier for the detected positions. In this case the leg_detector publishes a message which contains data like:

    pos.x
    pos.y
    pos.z
    

    The message is parsed for all occurrences of the pos identifier and the result is published as a PoseArray.

Make the robot look at you

There is an easy way to see if the tracker is working: just make the robot look at you when its camera picks you up. Start the strands_gazing node:

rosrun strands_gazing gaze_at_pose

This creates an action server that needs a runtime in seconds (0 for inf) and a topic name on which a single pose is published:

rosrun actionlib axclient.py /gaze_at_pose

As a topic use: /upper_body_detector/closest_bounding_box_centre and set the time to 0. Once started, the robot should look at the closest person in front of its camera and also blink occasionally.

QTC and online qtc creation

QTC comes in many different variants. Here we will focus on two of them:

QTC Basic (QTC_B) simplified

QTC_B Simplified

QTC Double Cross (QTC_C) Simplified

QTC_C Simplified

See the slides for a recap of the tutorial and a brief overview of how it works.

Online QSR creator

In this tutorial we aim to create a ROS node that produces QTC state chains using the qsr_lib. The two agents we’ll be using are the robot and the simulated human.

To start your simulation with the human:

  • Make sure there is no tmux session running: tmux ls should not show any results.
  • Update the lamor repository: git pull and your installation: sudo apt-get update, sudo apt-get upgrade
  • Run the human simulation file: rosrun lamor_bringup sim_human_start.sh
  • Tab 0 starts roscore and htop automatically
  • Switch to tab 1 Ctrl + b, 1 and start the datacentre
  • Switch to tab 2 Ctrl + b, 2 and start the simulation. The default environment is the fast one. If you want the full experience run it with env:=uol_bl_human instead.
  • Switch to tab 3 Ctrl + b, 3 and start the 2d navigation. Don’t forget to specify your map map:=<my_map>
  • Switch to tab 4 Ctrl + b, 4 and start the topological navigation. Don’t forget to specify your topological map dataset:=<my_topo_map>
  • Switch to tab 5 Ctrl + b, 5 and start the people tracker emulator. You might have to install this component manually: ``sudo apt-get install ros-indigo-people-tracker-emulator``. I opened an issue to fix this already.
  • Switch to tab 6 Ctrl + b, 6 and start the qsr_lib ros server.

To visualise the results, start rviz and in addition to the map and robot model, add a MarkerArray: /people_tracker/marker_array and a PoseArray: /people_tracker/pose_array

Now, we’ll start with our node. The complete example can be found here

Python editor suggestions: Spyder, fisa-dev plugin package for vim, sublime

The most important parts explained:

  • Create a new package:

catkin_create_pkg online_qsr_creator rospy geometry_msgs bayes_people_tracker qsr_lib

  • Getting all the correct imports:

import rospy
from geometry_msgs.msg import Pose
from bayes_people_tracker.msg import PeopleTracker
from qsrlib_io.world_trace import Object_State, World_Trace
from qsrlib_ros.qsrlib_ros_client import QSRlib_ROS_Client
from qsrlib.qsrlib import QSRlib_Request_Message
try:
    import cPickle as pickle
except:
    import pickle

  • Setting the correct parameters for QTC:

__parameters = {"qtcs": {                       # Namespace
                    "validate": True,           # Create valid state chains
                    "no_collapse": False,       # Collapse similar states
                    "quantisation_factor": 0.01
                },
                "for_all_qsrs": {               # Global namespace
                    "qsrs_for": [("human", "robot")]  # Compute QSRs for the human-robot pair only
                }}

  • Getting the robot pose:

# Robot pose is set in a separate callback because it is not stamped and cannot be synchronised
def robot_cb(self, msg):
    self.robot_pose = msg

  • The human callback:

def ppl_cb(self, msg):
    self.__buffer["human"].append(msg.poses[0])
    self.__buffer["robot"].append(self.robot_pose)

    # QTC needs at least two instances in time to work
    ob = []
    if len(self.__buffer["human"]) > 1:
        # Creating the world trace for both agents over all timestamps
        world = World_Trace()
        for idx, (human, robot) in enumerate(zip(self.__buffer["human"], self.__buffer["robot"])):
            ob.append(Object_State(
                name="human",
                timestamp=idx,
                x=human.position.x,
                y=human.position.y
            ))
            ob.append(Object_State(
                name="robot",
                timestamp=idx,
                x=robot.position.x,
                y=robot.position.y
            ))

        world.add_object_state_series(ob)

        # Creating the qsr_lib request message
        qrmsg = QSRlib_Request_Message(
            which_qsr="qtcbs",
            input_data=world,
            dynamic_args=self.__parameters
        )
        cln = QSRlib_ROS_Client()
        req = cln.make_ros_request_message(qrmsg)
        res = cln.request_qsrs(req)
        out = pickle.loads(res.data)

        # Printing the result, publishing might be more useful though ;)
        qsr_array = []
        for t in out.qsrs.get_sorted_timestamps():
            for k, v in out.qsrs.trace[t].qsrs.items():
                qsr_array.append(v.qsr.values()[0])

        print qsr_array

    rospy.sleep(0.3)  # Worst smoothing ever, I'm sure you can do better
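For completeness, the callbacks above need to be hooked up to the right topics. A minimal sketch of the wiring, assuming the robot publishes its unstamped pose on /robot_pose and the tracker emulator publishes on /people_tracker/positions (the complete example linked above shows the actual setup):

class OnlineQSRCreator(object):
    def __init__(self):
        self.__buffer = {"human": [], "robot": []}
        self.robot_pose = Pose()
        rospy.Subscriber("/robot_pose", Pose, self.robot_cb)
        rospy.Subscriber("/people_tracker/positions", PeopleTracker, self.ppl_cb)

    # robot_cb, ppl_cb and __parameters as defined above

if __name__ == "__main__":
    rospy.init_node("online_qsr_creator")
    OnlineQSRCreator()
    rospy.spin()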

Have a look at the usage example for a more detailed explanation. If you are looking into developing your own QSR, please refer to the guide for developers.

Tasks

  1. Try to run the sim_human_start.sh tmux file and run the people tracker emulator, visualising the output in rviz.
  2. Try to get the people tracker to work on the robot and make it look at you.
  3. Start the qsr_lib and run the example client to request a few QSRs.
  4. Have a look at the first of the references below and see if you find a QSR you might want to add to the library. If so, we are more than happy for you to give it a go and contribute.
  5. Have a go at programming your own online qsr creator, creating QSRs between people instead of between the robot and a person, as done in the example. Maybe you can find a useful application for this in your final task? For example, two persons being close together and stationary might be a group.

In order to connect other machines to the robot’s ROS infrastructure and to talk to all the hosts in that infrastructure, any external computers need to connect to the robot using a VPN (Virtual Private Network) to obtain an IP address in the robot subnet (192.168.0.x/24). We have set up a PPTP-based VPN for this purpose, allocating IPs in the range 192.168.0.230-253. (The IP prefix could differ, e.g. it could also be 10.0.0.x etc.)

Setting up an Ubuntu machine to connect to the robot’s VPN

connecting via Network-Manager

  1. Make sure the Network-Manager PPTP plugin is installed: sudo apt-get install network-manager-pptp-gnome
  2. Create a new VPN connection in Network-Manager using PPTP protocol
  3. Set ``WLAN_IP`` as the gateway address (check on the robot using ifconfig to find out which one that is)
  4. Set the user (e.g. lamor) and password (e.g. lamor2015)
  5. In the “VPN” tab, choose “Advanced”, and select “Use Point-to-Point encryption (MPPE)”
  6. In the “IPv4 Settings” tab, choose “Address Only”
  7. Click on “Routes” in the “IPv4 Settings” tab, and select “Use this connection only for resources on its own network”
  8. Still in “Routes” add a static route with
  • Address: 192.168.0.0 (or 10.0.0.0 if your robot has a different local net)
  • Netmask: 24
  • Gateway: 0.0.0.0
  9. Save and connect to the new VPN network… Tadaaa, you should be connected. (Note: This connection is only used to connect to the robot; all other network traffic on your computer still goes via the default route, not the robot!)
  10. In order to use this VPN with ROS, make sure you run the ros-network.sh script in each of the terminals you want to use. The argument should be the local IP of the ROS_MASTER, e.g. ./ros-network.sh 192.168.0.100

Original page: https://github.com/strands-project/lamor15/wiki/VPN-to-the-robot

Rules and guidelines on how to use the STRANDS robots

Important security guidelines

  • The robots are never to be left alone. While the robot is operating outside of your allocated coding area, someone has to accompany the robot at all times. Even though we have a few security measures in place, we don’t want the robot falling down the stairs.
  • Engage the emergency brake when the robot is unsupervised. See the picture of the button below. Even if you are absolutely sure that nothing is running, the robot might still just go off because someone forgot to kill their software.
  • Always put the robot on the charging station when it is not used. Our robots’ batteries last for approx. 8 hours, but you really don’t want to sit idle because the robot ran out of battery. Remember to put it back on charge when you’re not running anything.
  • Handle the robot with care. These robots are expensive machines and our project depends on them working properly. Please do not drive the robots into obstacles or mistreat them otherwise. The robot does have a bumper sensor in the bottom which kills the motors if pressed but be careful when driving around, regardless.

Emergency button: Emergency Button

Connecting to the robot

Please follow this guide on how to set-up remote access to the robot: [[Robot-VPN-to-connect-from-Desktop-PCs-and-Laptops]]

The username for the robot is lamor. Please ask a member of the team for the password.

Basic commands

ROS

The most basic commands to be run on the robot:

  • When executing something, make sure your workspace is sourced so that all the environment variables point to the right code:

source ~/<your_workspace>/devel/setup.bash

  • Running ros nodes either via rosrun:

rosrun your_package your_node

or roslaunch if you created a launch file for it or are using a package that provides one:

roslaunch your_package your_launch_file.launch

  • Building your workspace, either:

cd ~/<your_workspace>; catkin_make

or

catkin_make -C ~/<your_workspace>

For more basic ROS commands please have a look at the tutorials.

Starting the simulator
  • Starting morse:

roslaunch strands_morse uol_bl_morse.launch

this will start the default summer school environment which resembles the location of the school.

  • Starting the simulator in fast (wireframe) mode:

roslaunch strands_morse uol_bl_morse.launch env:=uol_bl_fast

this does not allow the use of openni but is considerably faster

  • Starting the simulator with the human model

roslaunch strands_morse uol_bl_morse.launch env:=uol_bl_human

or

roslaunch strands_morse uol_bl_morse.launch env:=uol_bl_human_fast

for the fast mode. The human can be controlled via the WASD keys.

  • The robot is controlled with the arrow keys.
  • To switch between human control and moving the camera via WASD through the environment, press F5
  • Holding Ctrl and moving the mouse rotates the camera
  • Press F11 to reset robot and human to their start positions
Tmux

On our robots we use a tool called tmux that allows you to run things in the background, have multiple terminals in one, and detach from them without killing the processes.

  • The main components controlling the hardware run in a tmux session named after the user, lamor. To attach to it, run

tmux a -t lamor

but please do not start or stop random components without consulting a member of the STRANDS team.

  • To start your own session, run tmux new -s <session_name>
  • You can then reattach to this session via tmux a -t <session_name>
  • List all current tmux sessions: tmux ls
  • In the following a few hotkeys that are useful in tmux:
  • The tmux magic key combination is Ctrl + b, from here on referred to as tmux-key. This has to be pressed to enter the control mode in tmux. The tmux-key should be pressed separately from the actual control shortcut, so press tmux-key first and then press any of the hotkeys.
  • Detaching from the current session: tmux-key, d
  • Creating a new window: tmux-key, c
  • Switching windows: next: tmux-key, n, previous: tmux-key, p, switch to a specific window: tmux-key, 0-9, get an overview of all windows and select one: tmux-key, w
  • Switching into scroll mode: tmux-key, [
  • Splitting the current window: vertically: tmux-key, %, horizontally: tmux-key, ", switching between windows on split screen: tmux-key, <arrow_key>, zooming in and out of a window: tmux-key, z

A full list of all tmux commands can be found in the official manual or by using man tmux

We highly advise that you do use tmux to ensure that a bad network connection does not kill your programme and to check what is already running on the robot.

Please note that only one person at a time can use the same tmux window. If you attach to a session and see someone typing something, do not just interfere but find out who is using this session.

The joypad

All robots also function as expensive remote controlled cars and can be operated via the provided joypad. Have a look at the cheat sheet to find out how it works. The most important one is the ‘deadman switch’ which has to be pressed in order for the robot to accept any commands and the emergency stop button which kills the motors immediately. Also the start button can be used to reset the bumper sensor and enable the motors again.

What is not mentioned on the sheet is the X button. When pressed, the robot will search for its charging station by turning on the spot and looking for the ‘Robot Station’ sign. If found, the robot will attempt to dock. If the robot is already charging it will undock and calibrate its position. Please wait for the process to finish, i.e. until the robot has closed and opened its eyes twice. Both of these activities are automated, do not require you to do anything, and you shouldn’t interfere with them either.

Calibrating the most important components

3D obstacle avoidance

If your robot refuses to drive, the reason might be that the 3D obstacle avoidance is miscalibrated and it sees obstacles where there are none. This can happen when the so-called ‘chest cam’ (the camera under the robot’s head, facing downwards) has been moved. If this happens, the costmaps in rviz will show a lot of obstacles all around the robot. To recalibrate, make sure the space in front of the robot is free and run the following directly on the robot, as it needs a graphical interface:

rosrun calibrate_chest calibrate_chest

This will open a window showing the point cloud. You can zoom out using the touchpad and close the window via q. After the camera has been calibrated, the new parameters have to be published:

rosrun calibrate_chest chest_calibration_publisher

This will terminate after 10 seconds and update the calibration parameters. This should have removed the obstacles on the costmaps. If not, you can try to remove them yourself via:

rosservice call /move_base/clear_costmaps
Calibrating the autonomous docking

This only has to be done once. Make sure that the robot is standing on its station and charging (the LED on the charging station is on). Also make sure that the robot can see the ‘Robot Station’ sign. Run:

rosrun scitos_docking visual_charging_client 100

Useful packages:

Packages that are not really part of the scientific efforts of STRANDS but useful for application design:

VPN to the robots

The instructions for connecting external machines to the robot’s ROS infrastructure via the PPTP VPN are identical to those in the “Setting up an Ubuntu machine to connect to the robot’s VPN” section above (see the VPN-to-the-robot page).

Original page: https://github.com/strands-project/lamor15/wiki/Working-with-the-STRANDS-robots

The participants shall be working towards a general task, using the functionalities and packages they learn about. The resulting system shall be shown off at the ECMR welcome reception, giving participants who stay on a chance to talk about their achievements and their own research. Hence, the tasks selected should be demo-able in the end.


The following are suggestions only:

  • Security Guard: Patrol, spot suspicious changes / activities.
  • “Stalker bot”: Approach and engage with people, entertain them, identify groups, etc.
  • Paparazzi: Take photos and tweet them.
  • “Voting Machine”: The robots could serve as voting terminals for the papers that go to RAS. The robots could try to understand where the people move around during coffee breaks and then use this knowledge to hunt them for their votes during the last day. As a side effect of the hunt, the robots could take plenty of pictures and tweet them on the conference site. Identity of the voters will be established by means of taking their pics when voting (or we could use circles printed at the back of the name-tags, which are easy to generate).
  • “Burglar alarm”: Check for 3 known objects whether they are still at their designated location, or have been taken away.
  • …

Original page: https://github.com/strands-project/lamor15/wiki/possible-tasks

Exercise 1

In this first exercise you will use our MDP/LTL planning framework to make the robot drive around the map, and also play with the edges of the underlying map to see how different long-term robot experiences influence its behaviour when creating navigation policies.

Background

You must first run some basic elements from the STRANDS system. You should ideally run each of these in a separate terminal where you have sourced both the ROS and your local workspace setup.bash files, as described in tutorial_prep.md.

MongoDB Store

First, check the db directory exists (which you should’ve created following tutorial_prep.md). The following should not list any files or report an error:

ls `rospack find planning_tutorial`/db

If that is ok, then launch mongodb_store using that directory as its data path:

roslaunch mongodb_store mongodb_store.launch db_path:=`rospack find planning_tutorial`/db

MORSE Simulation

In another terminal, launch our simplified simulation of the Transport Systems Catapult (TSC).

roslaunch strands_morse tsc_morse.launch

If you press the ‘h’ key in MORSE you can see a list of available keyboard commands.

2D and Topological Navigation

We have predefined a simple topological map for you to use in this tutorial. The first time (and only the first time!) you want to use topological navigation in the TSC simulation, you must add this map to the mongodb store. Do it with the following command:

rosrun topological_utils load_yaml_map.py `rospack find planning_tutorial`/maps/plan_tut_top_map.yaml

You can check the result of this with the following command which should print a description of the topological map plan_tut.

rosrun topological_utils list_maps

If this was successful you can launch the 2D (amcl and move_base) and topological localisation and navigation for your simulated robot.

roslaunch planning_tutorial tsc_navigation.launch

To see everything running, launch the ROS visualisation tool rviz with the provided config file:

rviz -d `rospack find planning_tutorial`/plan_tut.rviz

If you click on a green arrow in a topological node, the robot should start working its way there. Feel free to add whatever extra visualisation parts you want to this (or ask us what the various bits are if you’re new to robotics).

Edge Prediction and MDP Planning

For this tutorial we’re going to interfere with the expected duration and success probabilities for navigating edges in the topological map. Usually these are computed from data gathered as navigation happens, but for now we’re going to fix them. To launch a node which reports fixed predictions for map edges, do the following in a new terminal (the argument is the file which contains the values to be reported):

rosrun topological_navigation manual_edge_predictions.py `rospack find planning_tutorial`/maps/plan_tut_edges.yaml

Once this is running you can launch the MDP-based task executive system in (yet another!) new terminal:

roslaunch mdp_plan_exec mdp_plan_exec_extended.launch

Exercise 1a

With all this up and running, you’re now ready to give the robot some tasks. For now we’re only going to worry about navigation tasks, i.e. reaching a node in the topological graph. If you open up the file $WS_ROOT_DIR/src/planning_tutorial/scripts/ltl_nav.py you will see some code which creates an MDP task to achieve an LTL goal. You can execute this file as follows:

rosrun planning_tutorial ltl_nav.py

As this runs you should see the robot move around, as is appropriate for the task (i.e. following the optimal policy given the edge predictions). Parts of the policy are visualised as large arrows in rviz under the MarkerArray topic topological_edges_policies (this is part of the preconfigured rviz file you launched above).

The important part of this file is the goal specification, i.e.:

goal_formula = '(F "WayPoint2")'

In this case F means “eventually” and "WayPoint2" stands for the robot being at WayPoint2. Try editing this formula to try more complex goals involving other waypoints, LTL operators or Boolean connectives. For inspiration, take a look at the list below:

  • goal_formula = '(F "WayPoint2") & (F "WayPoint7") ' Eventually reach WayPoint2 and eventually reach WayPoint7. Choose best ordering to do so.
  • goal_formula = '(F "WayPoint2") | (F "WayPoint7") ' Eventually reach WayPoint2 or eventually reach WayPoint7. Choose best one to visit considering your current position.
  • goal_formula = '(F ("WayPoint2" & (F "WayPoint7"))) ' Eventually reach WayPoint2 and, after that, eventually reach WayPoint7, i.e. unlike the first example the order of visits is fixed.
  • goal_formula = '((!"WayPoint7") U "WayPoint5") ' Avoid WayPoint7 until you reach WayPoint5. Compare this policy with the one obtained for '(F "WayPoint5") '.
  • goal_formula = '(F ("WayPoint1" & (X "WayPoint2"))) ' Eventually reach WayPoint1 and immediately after (X) go to WayPoint2. This allows us to “tell” the robot to navigate through some edges, e.g., for environmental exploration.

These specifications can also be connected using boolean connectives, e.g., '((!"WayPoint7") U "WayPoint5") & (F ("WayPoint1" & (X "WayPoint2")))'.

Exercise 1b

So far the robot is using the default, static edge durations and probabilities which we provided earlier. Now we’ll play with these probabilities to observe the changes in the robot’s behaviour. If you kill the manual_edge_predictions.py node (CTRL-c in its terminal) and then edit the yaml file $WS_ROOT_DIR/src/planning_tutorial/maps/plan_tut_edges.yaml, you can alter the expected duration (in seconds) and the success probability of each edge in the map. After you’ve made your edits, restart the node as before and check if the robot creates policies which respect the new edge information you provided.

Original page: https://github.com/strands-project/planning_tutorial/blob/indigo-devel/doc/exercise_1.md

Exercise 2

In this exercise you will use our MDP/LTL planning framework to get the robot to perform 3D mapping sweeps of a simple environment. This example connects many of the different elements of our robot system, and may seem complex at first. If something is unclear, or you want more information, please just ask.

Background

You must first run some basic elements from the STRANDS system. You should ideally run each of these in a separate terminal where you have sourced both the ROS and your local workspace setup.bash files, as described in tutorial_prep.md.

MongoDB Store

(If you still have the database running from Exercise 1 you can skip this step)

First, check the db directory exists (which you should’ve created following tutorial_prep.md). The following should not list any files or report an error:

ls `rospack find planning_tutorial`/db

If that is ok, then launch mongodb_store using that directory as its data path:

roslaunch mongodb_store mongodb_store.launch db_path:=`rospack find planning_tutorial`/db

MORSE Simulation

In another terminal, launch our object simulation taken from the ALOOF Project.

roslaunch strands_morse aloof_morse.launch

If you press the ‘h’ key in MORSE you can see a list of available keyboard commands.

2D and Topological Navigation

We have predefined a simple topological map for you to use in this tutorial. The first time (and only the first time!) you want to use topological navigation in the ALOOF simulation, you must add this map to the mongodb store. Do it with the following command:

rosrun topological_utils load_yaml_map.py `rospack find strands_morse`/aloof/maps/aloof_top_map.yaml

You can check the result of this with the following command which should print a description of the topological map aloof.

rosrun topological_utils list_maps

If this was successful you can launch the 2D (amcl and move_base) and topological localisation and navigation for your simulated robot. Note that in this configuration, the statistics for edge transitions in the topological map will be learnt from experience (as opposed to manually specified as in Exercise 1).

roslaunch strands_morse aloof_navigation.launch

To see everything running, launch the ROS visualisation tool rviz with the provided config file:

rviz -d `rospack find planning_tutorial`/plan_tut.rviz

You’ll find that the robot is not localised at the correct place compared to the simulation. Therefore use the 2D Pose Estimate button and click (then hold and rotate to set angle) on the correct part of the map.

If you click on a green arrow in a topological node, the robot should start working its way there. Feel free to add whatever extra visualisation parts you want to this (or ask us what the various bits are if you’re new to robotics).

MDP Planning

Next launch the MDP-based task executive system in (yet another!) new terminal:

roslaunch mdp_plan_exec mdp_plan_exec_extended.launch

3D Mapping Nodes

Our 3D mapping framework makes use of an approach called meta-rooms which builds 3D maps at waypoints in the environment. Before you run this for the first time you need to create somewhere for the system to store maps. Do this with the following command:

mkdir ~/.semanticMap

If this was successful, you can launch the meta-room nodes with the following command:

roslaunch planning_tutorial meta_rooms.launch

Exercise 2a

In Exercise 1 you exploited the fact that the execution framework automatically creates an MDP for navigation across the topological map. In this exercise we will extend this MDP with additional actions which connect ROS actionlib servers to actions in the MDP.

In this part we will walk through an example where the outcome of the invocation of an action server is tied to a non-deterministic outcome of an action in an MDP. After showing you how to do this, the next step will be for you to edit the file to change how the action is encoded in the MDP.

All of the code for this part of the exercise is in planning_tutorial/script/sweep_at_waypoints.py so only the important fragments will be included here.

The task we will undertake is to trigger the /do_sweep action server with the argument medium (signifying the number of views to take in the sweep) at a set of waypoints in the topological map. The key idea is that we associate the action server with an MDP action which we add to the navigation MDP. To make sure we only complete the action once, we connect the successful completion of this action to a state variable in our MDP specification, which can be used in pre- and post-conditions for our action.

We start by creating a name for our action. As we care about the success of an action at a waypoint, we need a different action for each waypoint. This is captured in the naming of the action, which should be different for each waypoint, i.e.

action_name = 'do_sweep_at_' + waypoint

Next we create the MDP state variable for tracking the success of the action. This state variable will form part of our goal statement, e.g. if we have the state variable executed_do_sweep_at_WayPoint1 the goal to eventually make this true would be (F executed_do_sweep_at_WayPoint1=1).

state_var_name = 'executed_' + action_name
# Create the state var object, initialise to 0 (false)
state_var = MdpStateVar(name = state_var_name, init_val = 0, min_range = 0, max_range = 1)

Following this we need to encode the possible outcomes of the action and the way they can change the state in the MDP. Although there is no restriction on the number of outcomes an action can have (over one), we will use two: succeeded or not succeeded. The /do_sweep action reports its outcomes when it completes, so we will use this return signal to tell the MDP which outcome occurred (allowing it to update its internal state correctly). Each outcome has a probability of occurring (which is used when creating a policy to solve the MDP), and a probability distribution over how long it might take to reach the outcome. In this example we make up these values, but in our full system we can learn them from experience at execution time.

successful_outcome = MdpActionOutcome(
            # the probability of this outcome occurring
            probability = 0.99,
            # the effects of this action on the MDP state, in this case setting the state variable to 1
            post_conds = [StringIntPair(string_data = state_var_name, int_data = 1)],
            # how long it will take to reach this outcome, and with what probability
            duration_probs = [1.0],
            durations = [240.0],
            # And the action server outcome which will tell us if this outcome occurred. In this case if the action server returns with SUCCEEDED
            status = [GoalStatus(status = GoalStatus.SUCCEEDED)])

Below is a similar encoding for the unsuccessful outcome, i.e. the cases where the action server reports that it has aborted or was preempted.

unsuccessful_outcome = MdpActionOutcome(probability = (1 - successful_outcome.probability),
            post_conds = [StringIntPair(string_data = state_var_name, int_data = 0)],
            duration_probs = [1.0],
            durations = [120.0],
            status = [GoalStatus(status = GoalStatus.ABORTED), GoalStatus(status = GoalStatus.PREEMPTED)])

The final step is to link the MDP action to the actionlib server. This is done in the code below. The action_server argument is used to configure the action server client, and the add_string_argument call is used to embed the parameters for the call into the action itself (see strands_executive_msgs.mdp_action_utils for more options).

# Finally we tie all this together to an actionlib server
action = MdpAction(name=action_name,
        # This is the actual action server topic to be used
        action_server='/do_sweep',
        # The action will only be attempted if the preconditions are satisfied. In this case we can't have succeeded in the action before
        pre_conds=[StringIntPair(string_data=state_var_name, int_data=0)],
        # These are the possible outcomes we defined above
        outcomes=[successful_outcome, unsuccessful_outcome],
        # And this is where we can execute the action.
        waypoints = [waypoint])
add_string_argument(action, 'medium')

For the purposes of the tutorial you should understand the overall approach we are taking for this, and be able to map this understanding into the code. To see it working you can run:

rosrun planning_tutorial sweep_at_waypoints.py

Feel free to edit the code as you wish and play around with the file. You could add more waypoints to the list, or add additional LTL navigation goals (as in Exercise 1) to the string mdp_spec.ltl_task to change the robot’s behaviour (e.g. get it to do a sweep without ever visiting a given waypoint).
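For orientation, the state variable, action and goal above are bundled into a single specification before being sent to the executive. A minimal sketch, assuming the MdpDomainSpec message from strands_executive_msgs with fields vars, actions and ltl_task (as used by sweep_at_waypoints.py):

from strands_executive_msgs.msg import MdpDomainSpec

mdp_spec = MdpDomainSpec()
mdp_spec.vars.append(state_var)    # the executed_do_sweep_at_<waypoint> variable
mdp_spec.actions.append(action)    # the MDP action tied to /do_sweep
# Goal: eventually have executed the sweep at this waypoint
mdp_spec.ltl_task = '(F executed_do_sweep_at_' + waypoint + '=1)'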

Exercise 2b

To successfully complete the following you need to be competent with both Python and ROS. If you’re not comfortable with these, please pair up with someone who is.

To test your understanding, create a copy of planning_tutorial sweep_at_waypoints.py and re-write it such that, instead of completing a sweep at every waypoint, the problem is to simply complete a single sweep at any waypoint on the map. However, you should be able to specify that the chance of the action succeeding is different at different waypoints (with you inventing the numbers), and this should result in the robot choosing the waypoint with the highest chance of success for the sweep.

Exercise 2c

What happens when you combine your answer to 2b with the ability to change the probabilities of success on the edges of the topological map? Talk to one of the lecturers for help with getting this running if you want to try it.

Original page: https://github.com/strands-project/planning_tutorial/blob/indigo-devel/doc/exercise_2.md

Exercise 3 - UNFINISHED

In this exercise you will use our MDP/LTL planning framework to encode an object search task in a simple environment. This example connects many of the different elements of our robot system, and may seem complex at first. If something is unclear, or you want more information, please just ask.

Background

You must first run some basic elements from the STRANDS system. You should ideally run each of these in a separate terminal where you have sourced both the ROS and your local workspace setup.bash files, as described in tutorial_prep.md.

MongoDB Store

(If you still have the database running from Exercise 1 you can skip this step)

First, check that the db directory exists (you should have created it when following tutorial_prep.md). The following should complete without reporting an error (it may not list any files, since the directory can be empty):

ls `rospack find planning_tutorial`/db

If that is ok, then launch mongodb_store using that directory as its data path:

roslaunch mongodb_store mongodb_store.launch db_path:=`rospack find planning_tutorial`/db

MORSE Simulation

In another terminal, launch our object simulation taken from the ALOOF Project.

roslaunch strands_morse aloof_morse.launch

If you press the ‘h’ key in MORSE you can see a list of available keyboard commands.

2D and Topological Navigation

We have predefined a simple topological map for you to use in this tutorial. The first time (and only the first time!) you want to use topological navigation in the ALOOF simulation, you must add this map to the mongodb store. Do it with the following command:

rosrun topological_utils load_yaml_map.py `rospack find strands_morse`/aloof/maps/aloof_top_map.yaml

You can check the result of this with the following command which should print a description of the topological map aloof.

rosrun topological_utils list_maps

If this was successful you can launch the 2D (amcl and move_base) and topological localisation and navigation for your simulated robot. Note that in this configuration, the statistics for edge transitions in the topological map will be learnt from experience (as opposed to manually specified as in Exercise 1).

roslaunch strands_morse aloof_navigation.launch

To see everything running, launch the ROS visualisation tool rviz with the provided config file:

rviz -d `rospack find planning_tutorial`/plan_tut.rviz

You’ll find that the robot is not localised at the correct place compared to the simulation. Therefore use the 2D Pose Estimate button and click (then hold and rotate to set angle) on the correct part of the map.

If you click on a green arrow in a topological node, the robot should start working its way there. Feel free to add whatever extra visualisation parts you want to this (or ask us what the various bits are if you’re new to robotics).

MDP Planning

Next launch the MDP-based task executive system in (yet another!) new terminal:

roslaunch mdp_plan_exec mdp_plan_exec_extended.launch

Semantic Map

Our object search framework makes use of a semantic map of the environment to know where to look for objects. There is a predefined map in this repository. Before you run the semantic mapping launch file for the first time, load the predefined map into mongodb with the following command.

mkdir ~/.semanticMap
mongorestore --port 62345 `rospack find planning_tutorial`/maps/soma_dump

If this was successful, you can launch the semantic map nodes with the following command:

roslaunch planning_tutorial aloof_semantic_map.launch

After you’ve done this you should see some blue and yellow regions appear in RViz.

Exercise 3a

In Exercise 1 you exploited the fact that the execution framework automatically creates an MDP for navigation across the topological map. In this exercise we will extend this MDP with additional actions which connect ROS actionlib servers to actions in the MDP.

In order for the robot to search for objects, it first needs to execute a meta-room sweep in each room where it may need to look. This allows it to build a 3D map of each room for reasoning about supporting surfaces and views.

We connect the

Original page: https://github.com/strands-project/planning_tutorial/blob/indigo-devel/doc/exercise_3.md

Support

If you have difficulty following the steps in this document, please either contact Nick Hawes (n.a.hawes@cs.bham.ac.uk) or use the gitter chat at https://gitter.im/strands-project/planning_tutorial

Computer Setup

To take part in the practical session of the tutorial, you will need to have your own laptop configured with the following software.

  1. Install Ubuntu Linux 14.04LTS 64bit on your computer. Please make sure that you have exactly this version installed: 14.04 for 64bit. Download the image from here: http://releases.ubuntu.com/14.04.4/ (the image download is ubuntu-14.04.4-desktop-amd64.iso). Note that you can install Ubuntu 14.04 alongside an existing operating system (even on a MacBook), or run it in a virtual machine.
  2. Within Ubuntu, follow the instructions at https://github.com/strands-project-releases/strands-releases/wiki#using-the-strands-repository to install both ROS and the STRANDS software packages. We assume a basic understanding of Unix-like operating systems and shell usage here. If you need help using the command line, this might be a good start: https://help.ubuntu.com/community/UsingTheTerminal. The relevant installation steps are copied below for your convenience:
    1. Enable the ROS repositories: Follow steps 1.1-1.3 in http://wiki.ros.org/indigo/Installation/Ubuntu#Installation. There is no need to do the steps after 1.3. In short, this is:
      1. Configure your Ubuntu repositories to allow “restricted,” “universe,” and “multiverse.”
      2. sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
      3. sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net --recv-key 0xB01FA116
    2. Enable the STRANDS repositories:
      1. Add the STRANDS public key to verify packages: curl -s http://lcas.lincoln.ac.uk/repos/public.key | sudo apt-key add -
      2. Add the STRANDS repository: sudo apt-add-repository http://lcas.lincoln.ac.uk/repos/release
    3. update your index: sudo apt-get update
    4. install required packages: sudo apt-get install ros-indigo-desktop-full ros-indigo-strands-desktop python-catkin-tools ros-indigo-semantic-map-launcher ros-indigo-door-pass
  3. Try out the “MORSE” simulator (run all the following in your terminal):
    1. configure ROS: source /opt/ros/indigo/setup.bash
    2. launch the simulator: roslaunch strands_morse tsc_morse.launch You should see the Morse simulator popping up with our robot platform being configured. If your laptop uses an NVidia graphics card it might be worth looking at: https://wiki.ubuntu.com/Bumblebee to use it to its full potential. You should be all set now!
  4. Add the line source /opt/ros/indigo/setup.bash to your .bashrc if you want ROS to be loaded every time you open a terminal. You will be opening a few terminals, so this is advisable.

Configuration

Next, create the local directory where you will do your work for this tutorial. This will contain a database and a ROS catkin workspace. The following are the basic steps we assume you have executed:

export WS_ROOT_DIR=~/rob_plan_ws
mkdir -p $WS_ROOT_DIR/src

Next clone this repository into the newly created src dir and create a directory within it for our database later on:

cd $WS_ROOT_DIR/src
git clone https://github.com/strands-project/planning_tutorial.git
mkdir planning_tutorial/db

Finally, if that was all successful, you should be able to build your workspace using catkin tools (you can also use catkin_make if you prefer that approach).

cd $WS_ROOT_DIR
catkin init
catkin build

Assuming that was all successful you now need to source your workspace so you have access to the packages there. For this you can do

source $WS_ROOT_DIR/devel/setup.bash

(e.g. source ~/rob_plan_ws/devel/setup.bash if you used the default from above). As with the other setup.bash file mentioned above, you need to source this file in every terminal you open; it is therefore highly advisable to add this command to the end of your .bashrc file so that it is done automatically. You can test if the file was sourced by running roscd (with no arguments). If everything was sourced correctly, this should take you to the directory $WS_ROOT_DIR/devel/.

Tutorial Packages

For the tutorial itself you will be asked to write some simple Python scripts to control a robot in ROS via our planning and execution packages. You will be taught about these packages in the tutorial itself, but we will not have time to cover the basics of Python. If you are not familiar with Python, it would certainly help to run through a quick Python tutorial (there are many online) to get comfortable with the language. That said, we will try our hardest to ensure that you can access as much of the tutorial material as possible without knowing Python.

Original page: https://github.com/strands-project/planning_tutorial/blob/indigo-devel/doc/tutorial_prep.md

Deployment

Create a new non-privileged user “qrobot”:

# adduser qrobot --gecos ""

Log in again as the newly created user and navigate to its home directory. Clone this repository into the “repo” folder:

$ git clone https://github.com/strands-project/qrobot repo

Navigate to the “repo” folder:

$ cd repo

Run the production bootstrap and update scripts as root (note that since the “qrobot” user is not in sudoers, you need to su as the admin user first):

$ su %username%
# ./scripts/bootstrap.sh production
# ./scripts/update.sh

Configuration

Several configuration variables of the backend server are exposed in the “config/flask.py” file. After making changes you need to restart the application server:

# supervisorctl restart qrobot

Updates

To update to the latest version run the update script as root from the “repo” folder:

# ./scripts/update.sh

Original page: https://github.com/strands-project/qrobot/blob/master/README.md

Robblog – A robot blogging tool

Robblog is a tool which converts entries from the mongodb_store into blog posts for Jekyll. It runs as a ros node and provides its own web server, binding to a host and port of your choice.

Installation

Robblog currently uses Jekyll (version 2) to generate pages. To install the dependencies under Ubuntu 14.04 do:

sudo apt-get install ruby1-dev nodejs
sudo -E gem install jekyll -v 2.5.3

On OS X do:

brew install ruby nodejs
gem install jekyll -v 2.5.3

Usage

Robblog has two parts: clients which add robblog/RobblogEntry entries to mongodb_store, and the blog engine which creates blog posts from these entries.

Running the Robblog engine

The configuration information you need to run Robblog is as follows: the mongodb_store collection to monitor for new entries; the path on the system where Jekyll will create your blog; and the hostname and port to serve the blog at. The code snippet below shows how to create and configure your blog. In this example the mongodb_store collection is provided as an argument to EntryConverter ('robblog'), the system path is the blog_path variable which is defined relative to the y1_interfaces package (roslib.packages.get_pkg_dir('y1_interfaces') + '/content'), and the host details are obtained from local parameters if provided.

#!/usr/bin/env python

import rospy
import roslib
import robblog
import robblog.utils

from datetime import *

if __name__ == '__main__':
    rospy.init_node("robblog")

    server_host = rospy.get_param("~host_ip", "127.0.0.1")
    server_port = rospy.get_param("~port", "4000")

    # where are the blog files going to be put
    blog_path = roslib.packages.get_pkg_dir('y1_interfaces') + '/content'

    # initialise blog
    robblog.utils.init_blog(blog_path)

    # and start serving content at this place
    proc = robblog.utils.serve(blog_path, server_host, server_port)

    try:
        converter = robblog.utils.EntryConverter(blog_path=blog_path, collection='robblog')

        while not rospy.is_shutdown():
            converter.convert()
            rospy.sleep(1)

    except Exception, e:
        rospy.logfatal(e)
    finally:
        proc.terminate()

Adding entries

To add entries to robblog, clients should add instances of the robblog/RobblogEntry message type to mongodb_store (usually via the message store proxy). The message type is simply:

string title
string body

The title is translated into the title of the blog post and the name of the markdown file containing the post in the Jekyll install. The body should be Markdown.
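
As a minimal sketch of a client (assuming the robblog and mongodb_store packages are built in your workspace, and reusing the 'robblog' collection name from the engine example above), adding an entry can be as simple as:

#!/usr/bin/env python

import rospy
from mongodb_store.message_store import MessageStoreProxy
from robblog.msg import RobblogEntry

if __name__ == '__main__':
    rospy.init_node('robblog_client_example')

    # insert into the same collection that the blog engine monitors
    msg_store = MessageStoreProxy(collection='robblog')

    entry = RobblogEntry(title='Hello from the robot', body='A *Markdown* formatted body.')
    entry_id = msg_store.insert(entry)
    rospy.loginfo('Stored blog entry with id %s' % entry_id)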

Rules for creating entries

Title choice

To match Jekyll’s structuring rules, each entry is converted into a file which is named using the date (YYYY-MM-DD) plus the title (with spaces converted to hyphens). If you create two entries on the same day with the same title, this will only create one blog entry in Jekyll, despite there being two different entries in mongodb_store. Until Robblog is updated to fix this, if you do plan to create multiple entries with the same title on the same day, it is worth adding a counter or some other unique characters to the title.

Illegal characters

As titles are turned into filenames, you need to avoid characters which are illegal in filenames. Currently no sanitisation is done. As far as we have seen, characters within the body of the entry are fine.

Images

Images can be included in entries. To include one, you must add the image to mongodb_store, retaining the object ID you receive in response. You can then use standard Markdown image inclusion, but replace the image URL with the object ID wrapped in ObjectID(), e.g. ![My helpful screenshot](ObjectID(<object id>)).

This ID is used to automatically create a jpeg image to include in the blog post.

Example

The following example (also available in full here) adds a few entries to mongodb_store then serves them. Once you’ve run this go to http://localhost:4040 and you should see the blog entries.

#!/usr/bin/env python

import rospy
import roslib
from mongodb_store.message_store import MessageStoreProxy
from robblog.msg import RobblogEntry
import robblog.utils
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError

from datetime import *

if __name__ == '__main__':
    rospy.init_node("robblog_example")

    blog_collection = 'example_blog'

    # Create some blog entries
    msg_store = MessageStoreProxy(collection=blog_collection)

    create_entries = True

    robblog_path =  roslib.packages.get_pkg_dir('robblog')

    if create_entries:
        e1 = RobblogEntry(title='Test Title 1', body='blah blah')
        e2 = RobblogEntry(title='Test Title 2', body='blah blah')
        e3 = RobblogEntry(title='Test Title 3', body='blah blah')
        msg_store.insert(e1)
        msg_store.insert(e2)
        msg_store.insert(e3)

        # add a complex markdown example
        with open(robblog_path + '/data/example.md' , 'r') as f:
            e4 = RobblogEntry(title='Markdown Example', body=f.read())
            msg_store.insert(e4)
            # print e4

        # add an example with an image
        cv_image = cv2.imread(robblog_path + '/data/rur.jpg')
        bridge = CvBridge()
        img_msg = bridge.cv2_to_imgmsg(cv_image)
        img_id = msg_store.insert(img_msg)
        e5 = RobblogEntry(title='Image Test', body='This is what a robot looks like.\n\n![My helpful screenshot](ObjectID(%s))' % img_id)
        msg_store.insert(e5)


    serve = True

    if serve:
        # where are the blog files going to be put
        blog_path = robblog_path + '/content'

        # initialise blog
        robblog.utils.init_blog(blog_path)
        proc = robblog.utils.serve(blog_path, 'localhost', '4040')

        try:
            converter = robblog.utils.EntryConverter(blog_path=blog_path, collection=blog_collection)


            while not rospy.is_shutdown():
                # supply True to convert() to force all pages to be regenerated
                converter.convert()
                rospy.sleep(1)

        except Exception, e:
            rospy.logfatal(e)
        finally:
            proc.terminate()

Original page: https://github.com/strands-project/robblog/blob/hydro-devel/README.md

scitos_2d_navigation

The scitos_2d_navigation stack holds common configuration options for running the 2D navigation stack on a Scitos G5 robot.

Usage

  • To be able to use this package you first have to create a map with gmapping. Run it with rosrun gmapping slam_gmapping and save the resulting map with rosrun map_server map_saver.
  • Each launch file takes the argument map which is the path to the map saved with gmapping.
  • To just launch the DWA planner together with AMCL localization do roslaunch scitos_2d_navigation amcl.launch map:=/path/to/map.yaml.
  • If you want to launch it with the 3d obstacle avoidance, provide the additional argument with_camera:=true. Make sure that you have a depth camera publishing on the topic chest_xtion, for more details see https://github.com/strands-project/scitos_common.

Original page: https://github.com/strands-project/scitos_2d_navigation/blob/hydro-devel/README.md

ptu_follow_frame

This package provides a ROS node that controls the PTU so as to keep the ptu_mount frame X axis pointing to the origin of a user supplied TF frame.

It is activated and deactivated using two service calls:

  • /ptu_follow_frame/set_following : ptu_follow_frame/StartFollowing call this with a string representation of the target frame, for example map to have the ptu frame follower track that frame.
  • /ptu_follow_frame/stop_following : std_srvs/Empty call this to stop the ptu frame follower from tracking whatever frame it currently tracks.

During tracking, the PTU will be in velocity control mode and no other node should try to control it.
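
As a minimal sketch of driving these services from Python (the request field layout used for StartFollowing below is an assumption; check rossrv show ptu_follow_frame/StartFollowing for the actual definition):

#!/usr/bin/env python

import rospy
from std_srvs.srv import Empty
from ptu_follow_frame.srv import StartFollowing

if __name__ == '__main__':
    rospy.init_node('ptu_follow_example')

    rospy.wait_for_service('/ptu_follow_frame/set_following')
    set_following = rospy.ServiceProxy('/ptu_follow_frame/set_following', StartFollowing)
    set_following('map')  # track the origin of the map frame; the request layout is an assumption

    rospy.sleep(10.0)     # let the PTU follow the frame for a while

    stop_following = rospy.ServiceProxy('/ptu_follow_frame/stop_following', Empty)
    stop_following()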

Original page: https://github.com/strands-project/scitos_apps/blob/hydro-devel/ptu_follow_frame/README.md

scitos_cmd_vel_mux

This package provides a launch and parameter file that is tailored to a scitos A5 robot using the navigation stack and a teleoperation node like scitos_teleop.

Installation

  • Run catkin_make
  • Dependencies are installed via rosdep: rosdep install scitos_cmd_vel_mux

Usage

  • Run

    roslaunch scitos_cmd_vel_mux mux.launch
    
  • Remap your navigation stack /cmd_vel output to /cmd_vel_mux/input/navigation

  • Run

    roslaunch scitos_teleop teleop_joystick_mux.launch
    

    This remaps the joystick output to /cmd_vel_mux/input/joystick and now the joystick will always have priority as soon as you press the dead-man-switch. Or you can run any other teleoperation node and do the remapping yourself.

Original page: https://github.com/strands-project/scitos_apps/blob/hydro-devel/scitos_cmd_vel_mux/README.md

scitos_dashboard

This package provides an rqt_robot_dashboard for the Scitos robot. The dashboard displays motor and battery status, and allows the motors to be stopped or free-run enabled.

Installation

This should not require any additional packages to be installed…

Running

On your off-robot pc:

export ROS_MASTER_URI=http://bob:11311   # or not-bob
rosrun scitos_dashboard scitos_dashboard

This brings up a small dashboard window, which is an rqt docking pane:

ScreenShot

From left to right, the widgets are:

  • Diagnostics: this provides information about the hardware of the robot - see http://www.ros.org/wiki/robot_monitor?distro=groovy for info.
  • ROS Console: provides a view of all ros console (INFO, DEBUG, WARN etc) messages, with optional filters - see http://www.ros.org/wiki/rqt_console
  • Motor Status: clicking this allows you to select free run, or stop the motors. If it is green, then the robot is OK to drive but can’t be pushed; if it is yellow then it is in free run and can be pushed; if it is red then the motor stop is enabled and must be reset before it can drive.
  • Battery Status: this shows the battery state as a percentage in a tooltip, and changes icon when plugged in.
  • Robot Mileage: shown in metres.

Original page: https://github.com/strands-project/scitos_apps/blob/hydro-devel/scitos_dashboard/README.md

Overview

The docking service detects and guides the robot to its charging station. The service is based on detection of the ‘o’ letters of the ‘ROBOT STATION’ tag in the robot camera image. Prior to use, the method has to establish the docking station reference frame relative to the robot position.

To install

Dependencies are:

  • libgsl0-dev
  • libblas-dev

sudo apt-get install libgsl0-dev libblas-dev

To setup:

  1. Print the station tag on A4 paper. Do not let the printer resize it.
  2. Center the robot at the charging station.
  3. Display the robot camera image and fix the tag on the wall approximately in the image centre.
  4. Tell the system to measure its position relative to the charger by calling the calibration routine via actionlib. This can be done using a GUI as follows:
rosrun actionlib axclient.py /chargingServer

Then complete the Goal text field as follows:

Command: calibrate
Timeout: 1000

And then press the SEND GOAL button.

The robot should move its eyelids to indicate progress of the calibration process.

To use:

  • Just point the robot approximately in the direction of the charging station and call the node via actionlib as above, substituting charge for calibrate.
  • To leave the charging station, use undock instead.
  • Set the lightEBC parameter to the name of an EBC port you have attached a light to, e.g. Port0_12V_Enabled.

Original page: https://github.com/strands-project/scitos_apps/blob/hydro-devel/scitos_docking/README.md

Scitos Teleop

The scitos_teleop package is designed to work with a Logitech Wireless Gamepad F710.

Install udev rule for gamepad

To copy the udev rule to /etc/udev/rules.d run

rosrun scitos_teleop create_udev_rules

This will make the joypad available as /dev/input/rumblepad.

Running and using the teleop node

  • Source the corresponding setup.bash: source <your_catkin_workspace>/devel/setup.bash
  • Launch the rumblepad control: roslaunch scitos_teleop teleop_joystick.launch
  • If the simulator or scitos_node is running, you should now be able to control the robot using the joypad.
  • Please also have a look at https://github.com/strands-project/scitos_apps/tree/master/scitos_cmd_vel_mux, which provides a nice way of giving the joystick priority over the navigation stack.
  • Controlling the robot (if you do not press any buttons, the rumblepad control will not interfere with any autonomous behaviour but can be used to emergency stop the robot or reset the bumper after a crash): you can find a cheat sheet in the doc directory of scitos_teleop.
  • Dead-Man-Switch: top left shoulder button, keep pressed to move robot or head.
  • Moving the robot: move the robot with the left joystick or D-Pad (toggle between those with the “Mode” button).
  • Emergency stop: lower left shoulder button. Has to be pressed down completely. Be aware that the gamepad turns itself off after a certain time and that this button does not turn it back on automatically; you have to press one of the other buttons in order to turn it back on.
  • Freerun: “Back” button on the pad. (Move robot around manually)
  • Reset/Re-enable motors: “Start” button on the pad. (Use after emergency stop or bumper crash)
  • Move the head including eyes: use right joystick.
  • Move head to zero position: top right shoulder button.
  • Turn head 180 degrees: Y button.
  • Move eye lids: use lower right shoulder button.

Troubleshooting

If you get a message like [ERROR] [1372933726.815471480]: Couldn't open joystick /dev/.... Will retry every second. you can export the joystick device, e.g. export JOYSTICK_DEVICE=/dev/input/js1, and start the launch file again. If you installed the udev rule as mentioned above this should not happen; try following the instructions for installing the udev rule again.

Original page: https://github.com/strands-project/scitos_apps/blob/hydro-devel/scitos_teleop/README.md

The Scitos touch package

This package contains nodes meant to be run on the touch screen of a Scitos A5 robot. It currently contains an rqt plugin, rqt_emergency_stop, to disable and re-enable the motors of a Scitos robot. After installation it should be available in the rqt plugins menu.

Original page: https://github.com/strands-project/scitos_apps/blob/hydro-devel/scitos_touch/README.md

  • [[Scitos-teleop]]
  • [[Scitos-cmd_vel_mux]]

Original page: https://github.com/strands-project/scitos_apps/wiki

Installation

  • Run catkin_make

  • Source the environment

  • Run rosdep

    rosdep install scitos_cmd_vel_mux
    

Usage

  • Run

    roslaunch scitos_cmd_vel_mux mux.launch
    
  • Remap your navigation stack /cmd_vel output to /cmd_vel_mux/input/navigation

  • Run

    roslaunch scitos_teleop teleop_joystick_mux.launch
    

    This runs the scitos_teleop and remaps the joystick output to /cmd_vel_mux/input/joystick. Now the joystick will always have priority as soon as you press the dead-man-switch.

Inputs

Sorted by priority of incoming commands (high to low):

  • /cmd_vel_mux/input/joystick: For teleoperation
  • /cmd_vel_mux/input/webapp: For teleoperation over the browser
  • /cmd_vel_mux/input/navigation: For the navigation stack
  • /cmd_vel_mux/input/default: For everything that is not aware of the arbitration

To add input topics or change priorities modify: param/mux.yaml

Original page: https://github.com/strands-project/scitos_apps/wiki/Scitos-cmd_vel_mux

Set-up

This is just a simple example set-up and might differ from yours. If you are familiar with ROS and/or are using one of the set-up files from strands_systems you can skip this part.

  • Create a ros workspace: mkdir -p ros-ws/src && cd ros-ws/src
  • Clone the github repository: git clone https://github.com/strands-project/scitos_apps.git
  • Set up the catkin workspace: in ros-ws/src run catkin_init_workspace
  • Change to the root directory of the repository, which is ros-ws in our case.
  • Make sure that scitos_apps is in the same workspace as scitos_common, or is overlaying the scitos_common workspace, to enable the emergency stop for the robot.
  • Run catkin_make in ros-ws (catkin_make builds all binary files and creates environment variables like the setup.bash).

Now everything should be built and you can go to the next part which describes the usage of the scitos_teleop package.

Usage:

Launch files:

  • Source the environment variables: source devel/setup.bash
  • Run one of the following launch files:
    • teleop_joystick_core.launch: launches the core functionalities of the rumblepad: emergency stop, motor reset, freerun, and publishing of the action button topic.
    • teleop_joystick_just_[base/head].launch: launches teleop_joystick_core.launch and the control for the head or base, depending on the file you choose.
    • teleop_joystick.launch: launches all the functionalities of the rumblepad and gives you full control.
    • teleop_joystick_mux.launch: launches all the functionalities of the rumblepad and relies on the scitos_cmd_vel_mux (see [[Scitos-cmd_vel_mux]]).

Buttons: See Cheat Sheet

Original page: https://github.com/strands-project/scitos_apps/wiki/Scitos-teleop

scitos_common

This package contains robot-specific definitions of the SCITOS robot such as the URDF description of the robot’s kinematics and dynamics and 3D models of robot components.

Original page: https://github.com/strands-project/scitos_common/blob/indigo-devel/README.md

The strands launch files are divided into different levels, with one launch file per level:

strands_core launches every system that has to run on the robot before actually starting the robot hardware.

strands_robot launches the robot hardware.

strands_navigation launches the navigation system.

To use the strands launch files you can create a launch file for each of these levels which includes the corresponding strands launch file and sets the needed parameters. Some examples follow:

  1. Launching strands_core
<launch>
    <!-- Remote Launching -->
    <arg name="machine" default="localhost" />
    <arg name="user" default="" />

    <!-- Datacentre -->
    <arg name="db_path" default="/opt/strands/ros_datacentre"/>


    <include file="$(find scitos_bringup)/launch/strands_core.launch">
        <arg name="machine"  value="$(arg machine)"/>
        <arg name="user"  value="$(arg user)"/>
        <arg name="db_path" default="$(arg db_path)"/>
    </include>
</launch>
  2. Launching strands_robot
<launch>
    <arg name="machine" default="localhost" />
    <arg name="user" default="" />

    <arg name="scitos_config" default="$(find scitos_mira)/resources/SCITOSDriver-with-udev.xml"/>

    <arg name="laser" default="/dev/laser" />

    <arg name="head_camera" default="true" />
    <arg name="head_ip" default="left-cortex" />
    <arg name="head_user" default="strands" />

    <arg name="chest_camera" default="true" />
    <arg name="chest_ip" default="right-cortex" />
    <arg name="chest_user" default="strands" />

    <arg name="with_mux" default="false"/>
    <arg name="js" default="$(optenv JOYSTICK_DEVICE /dev/js1)" />


    <!-- Robot -->
    <include file="$(find scitos_bringup)/launch/strands_robot.launch">
        <arg name="machine"  value="$(arg machine)"/>
        <arg name="user"  value="$(arg user)"/>

        <!-- SCITOS G5 Robot -->
        <arg name="scitos_config" value="$(arg scitos_config)"/>

        <!-- SICK S300 -->
        <arg name="laser"  value="$(arg laser)"/>

        <!-- Head Xtion Camera -->
        <arg name="head_camera" value="$(arg head_camera)" />
        <arg name="head_ip"     value="$(arg head_ip)" />
        <arg name="head_user"   value="$(arg head_user)" />

        <!-- Chest Xtion Camera -->
        <arg name="chest_camera" value="$(arg chest_camera)" />
        <arg name="chest_ip"     value="$(arg chest_ip)" />
        <arg name="chest_user"   value="$(arg chest_user)" />

        <!-- cmd vel mux -->
        <arg name="with_mux" value="$(arg with_mux)"/>

        <!--- Teleop Joystick -->
        <arg name="js" value="$(arg js)" />
    </include>


</launch>
  3. Launching strands_navigation
<launch>
  <arg name="machine" default="localhost" />
  <arg name="user" default=""/>

  <arg name="with_camera" default="true"/>

  <arg name="camera" default="chest_xtion"/>
  <arg name="camera_ip" default="right-cortex"/>
  <arg name="camera_user" default="strands"/>


  <arg name="map" default="/opt/strands/maps/WW_2014_06_18-cropped.yaml"/>
  <arg name="with_no_go_map" default="true"/>
  <arg name="no_go_map" default="/opt/strands/maps/WW_2014_06_18-nogo.yaml"/>
  <arg name="with_mux" default="false" />

  <arg name="topological_map" default="WW_2014_06_18"/>


  <!-- STRANDS navigation -->
  <include file="$(find scitos_bringup)/launch/strands_navigation.launch" >
    <arg name="machine" value="$(arg machine)"/>
    <arg name="user" value="$(arg user)"/>
    <!-- <arg name="remote" value="$(arg remote)"/> -->

    <!-- strands_movebase -->
    <arg name="with_camera" value="$(arg with_camera)"/>
    <arg name="camera" value="$(arg camera)"/>
    <arg name="camera_ip" value="$(arg camera_ip)"/>
    <arg name="camera_user" value="$(arg camera_user)"/>

    <arg name="map" value="$(arg map)"/>
    <arg name="with_no_go_map" value="$(arg with_no_go_map)"/>
    <arg name="no_go_map" value="$(arg no_go_map)"/>

    <arg name="with_mux" value="$(arg with_mux)"/>

    <arg name="topological_map" value="$(arg topological_map)"/>

  </include>


</launch>

Original page: https://github.com/strands-project/scitos_drivers/blob/indigo-devel/scitos_bringup/README.md

scitos_mira

Scitos G5 drivers that interface ROS to MIRA.

Using the robot

Various aspects of the robot are exposed as ROS services, published topics, subscribed topics and dynamic_reconfigure parameters. These are divided into 5 “modules”:

  • Drive - for the control of the motors.
  • EBC - for controlling the power for extra devices (pan tilt and cameras).
  • Head - for controlling the HRI head.
  • Display - for the small status display on the base.
  • Charger - for battery monitoring and charging control.

Drive
Published topics
  • /odom (nav_msgs::Odometry) The odometric position of the robot (Odometry.pose), and its linear/angular velocity (Odometry.twist.linear.x, Odometry.twist.angular.z). This is also published as a TF between /odom and /base_footprint_link.
  • /bumper (std_msgs::Bool) State of the robot’s bumper, published regularly, not only when the state changes.
  • /mileage (std_msgs::Float32) The distance in metres that the robot has travelled since the beginning of time.
  • /charger_status (scitos_msgs::ChargerStatus) Detailed information about the current status of the charging circuit.
  • /motor_status (scitos_msgs::MotorStatus) The state of the motors, free-run mode, emergency button status, bumper status.
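
As a minimal sketch of listening to these topics from Python (assuming a standard rospy installation):

#!/usr/bin/env python

import rospy
from nav_msgs.msg import Odometry
from std_msgs.msg import Bool

def odom_cb(msg):
    rospy.loginfo('Robot at x=%.2f y=%.2f' % (msg.pose.pose.position.x, msg.pose.pose.position.y))

def bumper_cb(msg):
    if msg.data:
        rospy.logwarn('Bumper pressed!')

if __name__ == '__main__':
    rospy.init_node('drive_topics_example')
    rospy.Subscriber('/odom', Odometry, odom_cb)
    rospy.Subscriber('/bumper', Bool, bumper_cb)
    rospy.spin()
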
Subscribed topics
  • /cmd_vel (geometry_msgs::Twist) Any velocity published on this topic will be sent to the robot motor controller. Twist.linear.x corresponds to the desired linear velocity; Twist.angular.z corresponds to the angular velocity.
Services
  • /reset_motorstop (scitos_msgs::ResetMotorStop) This service has an empty request and empty response. It turns off the motor stop, which is engaged when the robot bumps into something. It can only be turned off if the robot is no longer in collision.
  • /reset_odometry (scitos_msgs::ResetOdometry) This empty request/response service sets the robot odometry to zero.
  • /emergency_stop (scitos_msgs::EmergencyStop) This empty request/response service stops the robot. It is equivalent to the bumper being pressed - the motor stop is engaged, and can be reset with /reset_motorstop.
  • /enable_motors (scitos_msgs::EnableMotors) This service takes a std_msgs::Bool enabled in the request, and gives an empty response. Disabling the motors is the same as placing the robot into “Free Run” mode from the status display.
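
As a minimal sketch of commanding the robot and resetting the motor stop from Python (assuming the scitos_msgs package is available in your workspace):

#!/usr/bin/env python

import rospy
from geometry_msgs.msg import Twist
from scitos_msgs.srv import ResetMotorStop

if __name__ == '__main__':
    rospy.init_node('drive_example')

    cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.sleep(1.0)  # give the publisher time to connect

    # drive forward slowly for ~5 seconds
    twist = Twist()
    twist.linear.x = 0.1
    rate = rospy.Rate(10)
    for _ in range(50):
        cmd_pub.publish(twist)
        rate.sleep()

    # if the bumper was hit, clear the motor stop once the robot is no longer in collision
    rospy.wait_for_service('/reset_motorstop')
    reset_motorstop = rospy.ServiceProxy('/reset_motorstop', ResetMotorStop)
    reset_motorstop()
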
EBC
Reconfigure parameters
  • Port0_5V_Enabled (bool) Is 5V enabled at port 0
  • Port0_5V_MaxCurrent (float) Max current for port 0 5V
  • Port0_12V_Enabled (bool) Is 12V enabled at port 0
  • Port0_12V_MaxCurrent (float) Max current for port 0 12V
  • Port0_24V_Enabled (bool) Is 24V enabled at port 0
  • Port0_24V_MaxCurrent (float) Max current for port 0 24V
  • Port1_5V_Enabled (bool) Is 5V enabled at port 1
  • Port1_5V_MaxCurrent (float) Max current for port 1 5V
  • Port1_12V_Enabled (bool) Is 12V enabled at port 1
  • Port1_12V_MaxCurrent (float) Max current for port 1 12V
  • Port1_24V_Enabled (bool) Is 24V enabled at port 1
  • Port1_24V_MaxCurrent (float) Max current for port 1 24V
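
As a minimal sketch of changing these parameters from Python via dynamic_reconfigure (the reconfigure server name used below is an assumption; list the available servers with rosrun dynamic_reconfigure dynparam list):

#!/usr/bin/env python

import rospy
from dynamic_reconfigure.client import Client

if __name__ == '__main__':
    rospy.init_node('ebc_example')

    # the server name 'scitos_mira' is an assumption
    client = Client('scitos_mira', timeout=10)
    client.update_configuration({'Port0_12V_Enabled': True, 'Port0_12V_MaxCurrent': 2.5})
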
Charger
Published topics
  • /battery_state (scitos_msgs::BatteryState)
  • /charger_status (scitos_msgs::ChargerStatus)
Reconfigure parameters

There are some parameters, but they are currently not implemented for fear of messing up…

Display
Published topics
  • /user_menu_selected (std_msgs::Int8) This topic is published when one of the user sub-menus (as set using the dynamic reconfigure parameters) is selected. The value 0..3 indicates which menu item was “clicked”.
Reconfigure parameters
  • EnableUserMenu (string) Is the user menu entry in the status display enabled
  • UserMenuName (string) The name of the user menu entry in the main menu of the status display
  • UserMenuEntryName1 (string) The name of the first sub menu entry in the user menu of the status display
  • UserMenuEntryName2 (string) The name of the second sub menu entry in the user menu of the status display
  • UserMenuEntryName3 (string) The name of the third sub menu entry in the user menu of the status display
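
As a minimal sketch of reacting to menu selections on the status display from Python:

#!/usr/bin/env python

import rospy
from std_msgs.msg import Int8

def menu_cb(msg):
    # msg.data is 0..3, indicating which configured entry was clicked
    rospy.loginfo('User selected menu entry %d' % msg.data)

if __name__ == '__main__':
    rospy.init_node('display_menu_example')
    rospy.Subscriber('/user_menu_selected', Int8, menu_cb)
    rospy.spin()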

Original page: https://github.com/strands-project/scitos_drivers/blob/indigo-devel/scitos_mira/README.md

scitos_pc_monitor

This package provides a node that monitors the health of the embedded scitos pc. Status updates are published as diagnostics messages on /diagnostics. These messages are collated by a diagnostics aggregator, which then allows them to be viewed on the scitos_dashboard.

Running

The monitor is started automatically when you use scitos_bringup. To start it independently (assuming a roscore is already running somewhere, e.g. from scitos_bringup):

rosrun scitos_pc_monitor pc_monitor.py

This then publishes on /diagnostics. To make use of the information, launch the diagnostics aggregator:

roslaunch scitos_bringup diagnostic_agg.launch

and view the messages on the dashboard on your off-board pc:

rosrun scitos_dashboard scitos_dashboard.py

See https://github.com/strands-project/scitos_apps/tree/master/scitos_dashboard for dashboard documentation.

Original page: https://github.com/strands-project/scitos_drivers/blob/indigo-devel/scitos_pc_monitor/README.md

Network configuration

Bob (hostname bob) has the following network interfaces:

  • eth0 connects to the internal bob network [192.168.0.*], statically assigned IP 192.168.0.100
  • eth1 is unused, but can be connected to the robotics lab wired network for maintenance. Dynamic IP.
  • wlan0 is connected to whatever wireless network is in use, usually ccarobot.

The additional onboard PC (hostname bobl) has a single wired network connection eth0 to the internal bob network. Network manager is removed and setup is in /etc/network/interfaces.

Internal “bob network”

The bob network [192.168.0.*] is supplied with an outside connection by NAT through bob. bob is the main machine on this network, providing routing to other networks and DNS services. bobl is able to access any machine accessible from bob over the wlan0 interface by default. However, to access bobl from a machine that has access to bob (woody for example) you first need to add bob as a route in that machine’s routing table:

sudo route add -net 192.168.0.0/24 dev eth0 gw bob

Hostnames and IP addresses

To enable ROS communication between your machine and bob network hosts, add an entry in /etc/hosts on bob and then update the internal network dns server:

sudo /etc/init.d/dnsmasq restart

Then both bobl and bob will be able to find your IP; editing /etc/hosts on bobl is not required. You might also want to add bob as an additional DNS server on your machine so that you do not have to maintain your own hosts file.

Time

The time on bob and bobl is synchronised using chrony. bob is the master, and can be used by off-board machines to synchronise with:

sudo ntpdate bob

ROS communication

bobl looks to bob for the ros core. On any off-board machine, set the ROS_MASTER_URI to bob and make sure you add your hostname as above. In order to have full access to topics published by bobl you must first set the route as above.

Ping of life

Fixed machines sometimes lose communication with bob due to some unknown problem with the ccarobot network routers. This has always been fixable by pinging the lost machine from Bob, which then allows the machine to ping/ssh/etc Bob. As a work-around, and to avoid regular manual pinging, the script /usr/bin/ping-hosts can be used to ping all hosts in Bob’s hosts file. This is also run automatically every 2 minutes, logging to /var/log/ping-hosts.

strands@bob:~$ cat /var/log/ping-hosts
----------------------------------------------------------------------
Wed, 14 Jan 2015 11:14:01 +0000
localhost                    live
bob                          live
bobl                         live
bobr                         down
cca-1128                     live
heatwave                     live
woody                        live

Configuration files

Original page: https://github.com/strands-project/scitos_robot/wiki/Bob’s-Network-Configuration

References

The following list includes important references used for implementations within libForest:

  • Online Random Forests / Decision Trees:

    A. Saffari, C. Leistner, J. Santner, M. Godec. On-Line Random Forests. International Conference on Computer Vision Workshops, 2009.

  • Variable Importance:

    G. Louppe, L. Wehenkel, A. Sutera, P. Geurts. Understanding Variable Importance in Forests of Randomized Trees. Advances in Neural Information Processing Systems, 2013.

    G. Louppe. Understanding Random Forests. PhD thesis, Universite de Liege, Belgium, 2014.

  • Density Forests:

    A. Criminisi, J. Shotton. Density Forests. In Decision Forests for Computer Vision and Medical Image Analysis, Springer, 2013.

  • Kullback-Leibler Divergence:

    F. Perez-Cruz. Kullback-Leibler Divergence Estimation of Continuous Distributions. International Symposium on Information Theory, 2008.

  • Kernel Density Estimation:

    B. E. Hansen. Lecture Notes on Nonparametrics. University of Wisconsin, 2009.

    P. B. Stark. Statistics 240 Lecture Notes, part 10: Density Estimation. University of California Berkeley, 2008.

    M. C. Jones, J. S. Marron, S. J. Sheather. A Brief Survey of Bandwidth Selection for Density Estimation. Journal of the American Statistical Association, 91(433), 1996.

    B. A. Turlach. Bandwidth Selection in Kernel Density Estimation: A Review. C.O.R.E. and Institut de Statistique, Universite Catholique de Louvain, Belgium.

  • K-Means:

    D. Arthur, S. Vassilvitskii. k-means++: The Advantages of Careful Seeding. Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, 2007.

    C. Elkan. Using the Triangle Inequality to Accelerate k-Means. International Conference on Machine Learning, 2003.

    J. Han, M. Kamber, J. Pei. Data Mining: Concepts and Techniques. Morgan Kaufmann Publishers Inc., San Francisco, CA, 2005.

Original page: https://github.com/strands-project/semantic_segmentation/blob/master/src/backend/third-party/libforest/docs/references.md

sensortag

  • Make sure the sensor is on (i.e. the green LED blinks; if not, press the on/off button) and run sudo hcitool lescan. This command scans for bluetooth devices and you should see a line something like B0:B4:48:BE:CC:06 CC2650 SensorTag; the address might be different depending on the sensor.
  • Copy this address.
  • Run rosrun sensortag sensortagRead.py COPIED_ADDRESS

Original page: https://github.com/strands-project/sensortag/blob/master/README.md

SOMA

Semantic Object Map (SOMA) package. SOMA can include objects, regions of interest (ROIs) and trajectories. It is based on MongoDB for storing high-level data obtained from the perceptual pipeline of a robot. The extracted data can be stored along with spatial and temporal information, which can later be used to build high-level queries with spatio-temporal constraints.

Prerequisites

  • MongoDB (>=2.6)
  • ROS mongodb_store package
  • ROS navigation stack (only map server)
  • Qt5 (sudo apt-get install qtbase5-dev)

Getting started (general steps)

  1. Start the ros core:

    $ roscore
    
  2. Launch the ROS datacentre:

    $ roslaunch mongodb_store mongodb_store.launch db_path:=<path_to_db>
    

    By default, the SOMA data are stored in somadata database. The collections under this database are object for SOMA objects, roi for SOMA rois and map for 2D occupancy maps.

SOMA map manager

  1. SOMA is based on the assumption that all the data are defined with respect to a 2D global map frame, so it is mandatory to publish a 2D map using the SOMA map manager before using SOMA. This node is used for storing, reading and publishing the 2D map:

    $ rosrun soma_map_manager soma_map_manager_node.py --mapname <map_name>
    

    If there are any stored 2D occupancy maps in the datacentre, the name of the map can be passed as an argument to the map manager. Alternatively, the user can choose the map to be published from the outputted list. If there are no stored maps, it will wait for a 2D map to be published from map_server. Run the map_server with a 2D map: $ rosrun map_server map_server <map.yaml> where map.yaml specifies the map you want to load. After running the map_server, you should save the published map using the SOMA map manager.

  2. If you want to check the published map, start RVIZ, add a Map display type and subscribe to the soma/map topic:

$ rosrun rviz rviz

SOMA ROI manager

  1. If you want to create SOMA ROIs, run the SOMA ROI manager:

    $ rosrun soma_roi_manager soma_roi_node.py <config_name>
    

    where config_name denotes an object configuration name. By default, the configuration file soma_roi_manager/config/default.json is used to initialize the list of available ROI types. Alternatively, the following command can be used to use a different configuration file:

    $ rosrun soma_roi_manager soma_roi.py -t /path/to/config/file <config>
    

    2D map information will be gathered from soma/map_info service of SOMA map manager.

  2. In RVIZ, add an InteractiveMarker display type, and subscribe to the /soma_roi/update topic:

  3. Add, delete, modify ROIs in RVIZ using the interactive marker and the context menu (right-mouse-click)

marker

ROS Services

The other nodes can communicate with SOMA using the SOMA service calls. In order to use these services, one should run the soma data manager.

SOMA data manager

  1. Run the soma data manager:

    $ rosrun soma_manager data_manager_node.py --object_collection_name <collection_name> --object_db_name <db_name>

The object_db_name and object_collection_name parameters are optional and can be used to define the database and collection used for data storage.

SOMA query manager

  1. Run the soma query manager:

    $ rosrun soma_query_manager query_manager_node
    <object_db_name> <object_collection_name> <roi_db_name> <roi_collection_name>
    

By default the data is stored under the default db and collections:

                    object     ROI        map
  db_name           somadata   somadata   somadata
  collection_name   object     roi        map

Object insertion

One or multiple SOMA objects can be inserted using the SOMA service call /soma/insert_objects. The unique mongodb ids and a boolean value are returned. The boolean return value determines whether the request was successfully completed or not.

Object deletion

One or multiple SOMA objects can be deleted using the SOMA service call /soma/delete_objects. The SOMA object ids are used for deletion. The boolean return value determines whether the request was successfully completed or not.

Object update

A SOMA object can be updated using the SOMA service call /soma/update_object. The boolean return value determines whether the request was successfully completed or not.

Object query

SOMA objects can be queried using the SOMA service call /soma/query_objects. The query request should be filled according to the spatio-temporal constraints. The results are returned based on the query type and constraints.

ROI query

SOMA ROIs can be queried using the SOMA service call /soma/query_rois. The query request should be filled according to the spatio-temporal constraints. The results are returned based on the query type and constraints.

Original page: https://github.com/strands-project/soma/blob/indigo-devel/README.md

SOMA Low-Level Sensory Datastore

This package provides a totally optional set of helper tools for the task of composing SOMA objects from individual observations of real-world objects. The tools here provide a custom set of ROS messages and services. The classic use-case for this package is that of collecting multiple observations of single or multiple objects before merging them into combined object models. In this use-case, the raw data and low-level segmented observations are stored in the data structures provided by this package, and are used as the source material to create high-level SOMA objects. However, the implementation of how these things are achieved – data collection, segmentation, processing – is left up to the application developer.

Messages

The messages are organised in a graph structure. The conceptual root node is the Scene message, which contains various forms of unprocessed robot perception data – RGB-D data, odometry, arbitrary metadata etc. – that a system may use in further processing. This represents the raw sensor output of a Robot performing some task such as taking multiple views of a surface, whereby each view would be represented as a Scene message. Multiple Scenes can be grouped together by sharing a common episode_id.

Given a Scene, processing such as segmentation is usually applied to extract objects. Each of these objects can be represented using the Segment message. There may be multiple Segments in a single Scene, and the same Segment representing the same object may be observed in multiple Scenes during a given episode. The message types support this sort of relationship. For each segment, any number of Observations can be attached to a Segment. An Observation represents the data produced by any processing performed on the Scene, and is intended to store things like cropped images and masked point clouds that describe specific observations of objects, and the Segment links these observations together to the same object across multiple views.

Services

There are multiple services designed to make it easy to insert and extract data from the database. These are generally straightforward in use, and there are examples in the /tests/ folder you can try out. The only service that may bear further explanation is InsertSceneAuto, which allows you to specify robot_pose and camera_info topics, and will automatically record and insert these along with ~2 seconds of output from the /tf topic to your Scene message. Recording a brief burst of /tf allows you to re-use it later and re-calculate things like frame transforms. Transforming to and from a tf transformer is very easy, see the TransformationStore class in server.py for examples. This is the recommended way of putting scenes into the database, as it requires the least amount of effort on the part of the user.

In general, all fields in all messages are optional, and arbitrary extra relationships between messages can be encoded by using the meta_data fields, which are intended to be filled by JSON objects.
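
As a minimal sketch of how a meta_data field might be filled and read back (the key names here are illustrative, not taken from the soma_llsd message definitions):

import json

# encode arbitrary extra relationships as a JSON string before storing it
meta = {'episode_id': 'patrol_2016_05_01', 'linked_segment': 'segment_42'}
meta_data_field = json.dumps(meta)

# ... and decode it again after reading the message back from the database
recovered = json.loads(meta_data_field)
print(recovered['linked_segment'])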

Services are deliberately kept bare-bones, as in contrast to the main SOMA services that provide feature-rich query services, it is intended that users of the LLSD perform more complicated query tasks by using MongoDB queries.
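
As a minimal sketch of such a direct query with pymongo (the port matches the mongodb_store launch used elsewhere in this documentation; the database and collection names below are assumptions, so check your own mongodb_store configuration for the real ones):

import pymongo

client = pymongo.MongoClient('localhost', 62345)
collection = client['message_store']['soma_llsd_scenes']  # assumed names

# find a handful of scenes recorded during one episode
for doc in collection.find({'episode_id': 'patrol_2016_05_01'}).limit(5):
    print(doc['_id'])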

Original page: https://github.com/strands-project/soma/blob/indigo-devel/soma_llsd/README.md

SOMA Trajectory

SOMA Trajectory is a package to query and display human trajectories. It includes a simple visual interface to query spatio-temporal constraints.

Prerequisites

  • MongoDB (>=2.6)
  • mongodb_store
  • pymongo
  • shapely

Getting started (general steps)

  1. Start the ros core:

    $ roscore
    
  2. Launch the ROS datacentre:

    $ roslaunch mongodb_store mongodb_store.launch db_path:=<path_to_db>
    

SOMA map manager

  1. Run the soma map manager for storing, reading and publishing the 2D map. Running this node is essential for running the robot_state_viewer_node: $ rosrun soma_map_manager soma_map_manager_node.py --mapname <map_name> If there are any stored 2D occupancy maps in the datacentre then the map manager will let you choose the map to be published, or you can pass the name of the stored map as the <map_name> argument. If not, it will wait for map_server. Run the map_server with a 2D map: $ rosrun map_server map_server <map.yaml> where map.yaml specifies the map you want to load. After running the map_server, you should save the published map using the soma map manager.

  2. If you want to check the published map, start RVIZ, add a Map display type and subscribe to the soma/map topic:

    $ rosrun rviz rviz
    

SOMA Trajectory Visualizer

You can run the visualizer by calling rosrun soma_trajectory soma_trajectory_manager.py and rosrun soma_trajectory visualiser.py

  1. Add an InteractiveMarkers display type in RVIZ and subscribe to the soma_trajectory topic. Trajectories will appear once the query submit button on the visualisation is pressed.
  2. Run rviz to display the results of the queries. You can choose the time interval to be inspected in terms of hours, from the hour the first trajectory was obtained to the hour the last trajectory was obtained. You can also execute temporal periodic queries by setting days of the week and hours of a day. Whenever a checkbox inside the temporal periodic query is ticked, the regular query with the time interval will be ignored. A simple analysis is displayed in the message box at the bottom of the visualiser for each query.

marker

Original page: https://github.com/strands-project/soma/blob/indigo-devel/soma_trajectory/README.md

SOMA Visualizer

SOMA Visualizer is a GUI for querying and visualizing SOMA objects.

Using the visual interface, advanced queries with spatio-temporal constraints can be specified. The returned SOMA objects are displayed in rviz as point clouds.

Prerequisites

  • MongoDB (>=2.6)
  • mongodb_store
  • Qt5

Getting started (general steps)

  1. Start the ros core:

    $ roscore
    
  2. Launch the ROS datacentre:

    $ roslaunch mongodb_store mongodb_store.launch db_path:=<path_to_db>
    

SOMA map manager

  1. Run the soma map manager for storing, reading and publishing the 2D map. Running this node is essential for running the robot_state_viewer_node: $ rosrun soma_map_manager soma_map_manager_node.py --mapname <map_name> If there are any stored 2D occupancy maps in the datacentre then the map manager will let you choose the map to be published, or you can pass the name of the stored map as the <map_name> argument. If not, it will wait for map_server. Run the map_server with a 2D map: $ rosrun map_server map_server <map.yaml> where map.yaml specifies the map you want to load. After running the map_server, you should save the published map using the soma map manager.

  2. If you want to check the published map, start RVIZ, add a Map display type and subscribe to the soma/map topic:

    $ rosrun rviz rviz
    

SOMA Visualizer

You can run the visualizer by calling roslaunch soma_visualizer soma_visualizer.launch

  1. Add a MarkerArray display type in RVIZ and subscribe to the soma_roi_marker_array topic. The drawer will draw when a region is selected in the visualizer. Add a pointcloud2 display type and subscribe to /soma_visualizer_node/world_state_3d to visualize query results.
  2. Run rviz to display the results of the queries. You can go back and forth between robot states using the slider. You can choose the time interval to be inspected in terms of days, hours and minutes. You can also execute advanced queries by setting dates, times, etc., enabling them using the checkboxes and pressing the Query button. When you make changes to the query constraints, make sure to press the Query button to update the query. You can also export the executed query in JSON format using the Export JSON button. You can reload data from the database using the Reload button. The returned objects are also displayed in the table view. You can double click on any of the rows to see the detailed information and images of that object. If there are multiple images, you can go through them by pressing the left and right buttons.

For any query, if more than 50 objects are matched, only the first 50 of them are returned with point clouds and images, for performance reasons.

marker

Original page: https://github.com/strands-project/soma/blob/indigo-devel/soma_visualizer/README.md

EXAMPLE OF A ROI-object KB

{
  "1": {                                # ROI ID
    "name": "kitchen table",            # optional name of the ROI
    "pos_objects": ["cup", "plate"]     # allowed objects (if missing, everything except neg_objects is allowed)
  },
  "2": {                                # ROI ID
    "name": "kitchen counter",          # optional name of the ROI
    "neg_objects": ["unknown"]          # not-allowed objects (if missing, everything except pos_objects is not allowed)
  },
  "3": {                                # ROI ID
    "name": "meeting room table",       # optional name of the ROI
    "pos_objects": ["projector"],       # allowed objects
    "neg_objects": ["cup"]              # not-allowed objects
  }
}

Original page: https://github.com/strands-project/soma/blob/indigo-devel/soma_utils/config/README.txt

STRANDS will produce intelligent mobile robots that are able to run for months in dynamic human environments. We will provide robots with the longevity and behavioural robustness necessary to make them truly useful assistants in a wide range of domains. Such long-lived robots will be able to learn from a wider range of experiences than has previously been possible, creating a whole new generation of autonomous systems able to extract and exploit the structure in their worlds.

Our approach is based on understanding 3D space and how it changes over time, from milliseconds to months. We will develop novel approaches to extract spatio-temporal structure from sensor data gathered during months of autonomous operation. Extracted structure will include reoccurring 3D shapes, objects, people, and models of activity. We will also develop control mechanisms which exploit these structures to yield adaptive behaviour in highly demanding, realworld security and care scenarios.

You can find out more on our website.

Original page: https://github.com/strands-project/soma/blob/indigo-devel/soma_utils/content/robblog/about.md


STRANDS distro docker image(s)

To run an interactive session in the fully installed STRANDS base system, simply make sure you have docker installed on your machine, and then run

docker run -it --rm strands/strands-base /bin/bash

to launch an interactive session. Most STRANDS packages are available inside the container; however, access to local hardware (e.g. the GPU) is not directly possible. There is more documentation on this at http://wiki.ros.org/docker/Tutorials/Hardware%20Acceleration

But this is enough to have a play with some STRANDS software and connect it to other ROS components as required.

Running on a Linux host

If you want to run this with your local user and actually have the docker container access your X server, run something like:

docker run -it --rm \
    --user=`id -u` \
    --env="DISPLAY" \
    --workdir="/home/$USER" \
    --volume="/home/$USER:/home/$USER" \
    --volume="/etc/group:/etc/group:ro" \
    --volume="/etc/passwd:/etc/passwd:ro" \
    --volume="/etc/shadow:/etc/shadow:ro" \
    --volume="/etc/sudoers.d:/etc/sudoers.d:ro" \
    --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    strands/strands-base /bin/bash

Running on OSX/Windows

This is a useful guide for running X-enabled docker images on OSX: https://blog.bennycornelissen.nl/bwc-gui-apps-in-docker-on-osx/

and here is a gist to run this on a Mac: https://gist.github.com/marc-hanheide/d9b4bb6057665acf7524c7b79827f1c8

Requirements:

  • install docker on OSX: https://docs.docker.com/docker-for-mac/
  • create a docker machine: docker-machine create --driver virtualbox --virtualbox-memory 2048 docker-vm
  • source docker-x.sh from https://gist.github.com/marc-hanheide/d9b4bb6057665acf7524c7b79827f1c8
  • run docker_run strands/strands-base

Builds

Building locally

Build locally (from the root of this repository) via docker build --tag ros:strands --network host .

Automated builds on hub.docker.com

This repository is set up to automatically release a STRANDS docker image to the official Docker Hub repository at https://hub.docker.com/r/strands/strands-base/

Original page: https://github.com/strands-project/strands-docker/blob/master/README.md

calibrate_sweeps

Implements an action server for calibrating the intermediate sweep positions. Uses the strands_sweep_registration package internally.

Start node

rosrun calibrate_sweeps calibrate_sweep_as

Call action server

rosrun actionlib axclient.py /calibrate_sweeps

Action

#goal
int32 min_num_sweeps
int32 max_num_sweeps
string sweep_location
string save_location
---
#result
string calibration_file
---
#feedback
int32 percentage_done
  • min_num_sweeps - the minimum number of sweeps needed to start the calibration
  • max_num_sweeps - the maximum number of sweeps to use when doing the calibration
  • sweep_location - where to look for the sweeps. If this argument is left empty, the default path is ~/.semanticMap/
  • save_location - where to save the registered sweeps after the calibration has finished. If this argument is left empty, the default path is the same as sweep_location
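For scripted use, the goal can also be sent from a small Python node instead of axclient. The following is a minimal, unverified sketch: the action and goal class names (CalibrateSweepsAction, CalibrateSweepsGoal) are assumptions, so check the calibrate_sweeps package for the exact generated message names.

#!/usr/bin/env python
# Hedged sketch: calling the /calibrate_sweeps action server from Python.
# The imported action/goal class names are assumptions -- verify them against
# the calibrate_sweeps package before use.
import rospy
import actionlib
from calibrate_sweeps.msg import CalibrateSweepsAction, CalibrateSweepsGoal  # assumed names

if __name__ == '__main__':
    rospy.init_node('calibrate_sweeps_client')
    client = actionlib.SimpleActionClient('/calibrate_sweeps', CalibrateSweepsAction)
    client.wait_for_server()
    goal = CalibrateSweepsGoal(min_num_sweeps=5,
                               max_num_sweeps=20,
                               sweep_location='',   # empty -> default ~/.semanticMap/
                               save_location='')    # empty -> same as sweep_location
    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo('Calibration file: %s', client.get_result().calibration_file)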

Sweeps used

The calibration process uses only sweeps recorded with the type complete if using the do_sweep.py action server from the cloud_merge package, i.e. with 51 positions.

If using the ptu_action_server_metric_map.py action server from the scitos_ptu package, the parameters are -160 20 160 -30 30 30.

Sweeps recorded with different parameters are ignored for the calibration. For registration, sweeps with different parameters are also processed if their parameters are a subset of the complete sweep type parameters (e.g. complete sweep type parameters are -160 20 160 -30 30 30; an example subset of those would be -160 40 160 -30 30 30, i.e. fewer pan stops).

Results

The calibration results are saved in ~/.ros/semanticMap. These are:

  • registration_transforms.txt - the resulting 51 transforms for the intermediate poses.
  • registration_transforms_raw.txt - legacy; contains the same data as above in a different format, needed by the strands_sweep_registration package.
  • camera_params.txt - contains the optimized camera parameters. This is currently disabled, and the stored camera parameters should be the same as the input camera parameters.
  • sweep_paramters.txt - the sweep parameters used by the calibration (-160 20 160 -30 30 30)

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/calibrate_sweeps/README.md

Package for building local metric maps

cloud_merge_node

Dependencies

Make sure you have Qt installed on the robot by getting the rqt packages:

sudo apt-get install ros-hydro-qt-ros
sudo apt-get install ros-hydro-rqt
sudo apt-get install ros-hydro-qt-build

Description

The cloud_merge_node acquires data from the RGBD sensor as the PTU unit does a sweep, stopping at various positions provided as input to the scitos_ptu ptu_pan_tilt_metric_map.py action server. (As an alternative, one can use the do_sweep.py action server from this package, which provides a higher-level interface for doing a sweep.)

As the PTU stops at a position, a number of RGBD frames are collected and averaged, with the purpose of reducing noise. Each of these frames is converted to the global frame of reference and merged to form an observation point cloud, which is further processed by the semantic_map semantic_map_node.

If the sweep intermediate positions have been calibrated (using the calibrate_sweeps calibrate_sweep_as action server) and the parameter register_and_correct_sweep is set to true, the collected sweeps are also registered. Note that this registration is for the intermediate point clouds making up the sweep, and not between two sweeps.

The observations are stored on the disk, in the folder

~/.semanticMap/

To start the cloud_merge_node, run:

roslaunch cloud_merge cloud_merge.launch
Input topics
  • /ptu/log : this topic provides information about the sweep (i.e. parameters, started, position reached, finished).
  • /current_node : the waypoint id received on this topic is associated with the sweep collected
Output topics
  • /local_metric_map/intermediate_point_cloud - RGBD point cloud corresponding to an intermediate position
  • /local_metric_map/merged_point_cloud - merged point cloud with resolution specified by the voxel_size_table_top parameter
  • /local_metric_map/merged_point_cloud_downsampled - merged point cloud with resolution specified by the voxel_size_observation parameter
  • /local_metric_map/depth/depth_filtered - averaged depth frames corresponding to an intermediate position
  • /local_metric_map/rgb/rgb_filtered - averaged RGB frames corresponding to an intermediate position
  • /local_metric_map/depth/camera_info - camera info message corresponding to the image published on the /local_metric_map/depth/depth_filtered topic
  • /local_metric_map/rgb/camera_info - camera info message corresponding to the image published on the /local_metric_map/rgb/rgb_filtered topic
  • /local_metric_map/room_observations - string message containing the absolute path to the xml file corresponding to the collected sweep. This is used by the semantic_map semantic_map_node to trigger a Meta-Room update.
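As a quick illustration of consuming these outputs, the rospy sketch below subscribes to the merged point cloud and the sweep xml notification. The message types (sensor_msgs/PointCloud2 and std_msgs/String) are inferred from the descriptions above rather than taken from the node's source, so verify them before relying on this.

#!/usr/bin/env python
# Minimal sketch: listening to two of the cloud_merge output topics.
# Message types are inferred from the topic descriptions above -- verify
# against the running node (e.g. with `rostopic info`).
import rospy
from sensor_msgs.msg import PointCloud2
from std_msgs.msg import String

def on_merged_cloud(msg):
    rospy.loginfo('Merged cloud received: %d x %d points', msg.width, msg.height)

def on_room_observation(msg):
    rospy.loginfo('New sweep xml: %s', msg.data)

if __name__ == '__main__':
    rospy.init_node('cloud_merge_listener')
    rospy.Subscriber('/local_metric_map/merged_point_cloud', PointCloud2, on_merged_cloud)
    rospy.Subscriber('/local_metric_map/room_observations', String, on_room_observation)
    rospy.spin()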
Parameters:
  • save_intermediate (true/false)- whether to save the intermediate point clouds to disk; default true
  • cleanup (true/false) - whether to remove previously saved data from ~/.semanticMap/; default false
  • generate_pointclouds (true/false) - generate point clouds from RGBD images or use the point clouds produced by the camera driver directly; default true. Note that setting false here has not been used for a while and might not work as expected.
  • log_to_db (true/false) - whether to log data to mongodb database; default true
  • voxel_size_table_top (double) - the cell size to downsample the merged point cloud to before being published for detecting table tops; default 0.01 m
  • voxel_size_observation (double) - the cell size to downsample the merged point cloud to for visualisation purposes in rviz; default 0.03 m
  • point_cutoff_distance (double) - maximum distance after which data should be discarded when constructing the merged point cloud; default 4.0 m
  • max_instances (int) - how many instances of each observation to keep stored on disk; default 2
  • input_cloud - name of the topic for the RGBD input point clouds (this is used when generate_pointclouds is false); default /depth_registered/points. Note: this has not been used for a while and might not work as expected.
  • input_rgb - name of the topic for the RGB image (this is used when generate_pointclouds is true); default /head_xtion/rgb/image_color
  • input_depth - name of the topic for the depth image (this is used when generate_pointclouds is true); default /head_xtion/depth/image_raw
  • input_caminfo - name of the topic for the camera parameters (this is used when generate_pointclouds is true); default /head_xtion/rgb/camera_info
Extracting data from mongodb

After logging some data, you can extract it from the database and save it to disk in a folder of your choice using:

rosrun semantic_map load_from_mongo /path/where/to/save/

After extracting data from the database, you can load all the recorded observations into appropriate data structures (containing the waypoint_id, merged cloud, individual point clouds, individual RGB and depth images and camera parameters):

rosrun metaroom_xml_parser load_multiple_files /path/where/to/load/from/

(Note the / at the end of the path in the command above).

do_sweep.py

To start the action server manually:

rosrun cloud_merge do_sweep.py

Use:

rosrun actionlib axclient.py /do_sweep

This action server takes as input a string, with the following values defined: “complete”, “medium”, “short”, “shortest”. Internally the action server from scitos_ptu ptu_action_server_metric_map.py is used, so make sure that is running.

The behavior is the following:

  • If sweep type is complete, the sweep is started with parameters -160 20 160 -30 30 30 -> 51 positions
  • If sweep type is medium, the sweep is started with parameters -160 20 160 -30 30 -30 -> 17 positions
  • If sweep type is short, the sweep is started with parameters -160 40 160 -30 30 -30 -> 9 positions
  • If sweep type is shortest, the sweep is started with parameters -160 60 140 -30 30 -30 -> 6 positions (there might be blank areas with this sweep type, depending on the environment).
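The same sweep types can also be requested programmatically. The sketch below is only an assumption-laden illustration: both the action class names and the goal field name (type) are guesses, so consult the action definition in the cloud_merge package for the real names.

#!/usr/bin/env python
# Hypothetical sketch: triggering a sweep through the /do_sweep action.
# The action class names and the goal field name ('type') are assumptions.
import rospy
import actionlib
from cloud_merge.msg import SweepAction, SweepGoal  # assumed names

if __name__ == '__main__':
    rospy.init_node('do_sweep_client')
    client = actionlib.SimpleActionClient('/do_sweep', SweepAction)
    client.wait_for_server()
    client.send_goal(SweepGoal(type='medium'))  # 'complete', 'medium', 'short' or 'shortest'
    client.wait_for_result()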

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/cloud_merge/README.md

WARNING - SURF features disabled in this version

Section 1: License
Section 2: Dependencies
Section 3: BUILD
Section 4: SOURCE CODE
Section 5: EXAMPLES
Section 6: TODOLIST

Section 1: License

License: BSD

Section 2: Dependencies

Dependencies required:
1: Install ros hydro from: http://wiki.ros.org/hydro/Installation/
2: Install ros-pcl with: apt-get install ros-hydro-perception-pcl
3: Install ros-opencv2: apt-get install ros-hydro-opencv2

Optional (recommended):
4: Install ros openni: apt-get install ros-hydro-openni-launch

Section 3: BUILD

Place in catkin src folder. Use catkin_make from catkin folder.

Section 4: SOURCE CODE

Found in installation directory + /src/. This section summarizes the contents of the src directory. For each folder a summary of the contents is provided, with a short list of important files that the user is likely to interact with.

File: ekz.h Info: To include the library just include ekz.h. Includes other files for the library.

Folder:core Info: Contains core classes for the library. Important Files:

Calibration.h //Calibration class, controls focal length etc

FrameInput.h //Contains the input of one frame. Got functions like getXYZ(int w, int h), getPointCloud() etc.

RGBDFrame.h //Representation of a frame. Contains extracted information for the frame such as segmentation and keypoints etc.

Transformation.h //Represents a transformation between two frames.

Folder: FeatureDescriptor Info: Contains different types of feature descriptors used by the library. Important files: FeatureDescriptor.h //Base class for Feature descriptors, contains functions such as distance(FeatureDescriptor * other_descriptor).

Folder:FeatureExtractor Info: Contains different types of feature extractors to be chosen from by the library. Important files: FeatureExtractor.h //Core class for feature extractors OrbExtractor.h //Class used to extract Orb keypoints SurfExtractor.h //Class used to extract Surf keypoints

Folder: FrameMatcher Info: Contains registration algorithms that take two frames as inputs (no initial guess given). Important files: FrameMatcher.h //Core class for Frame matchers AICK.h //AICK base implementation without heuristic matching algorithm. bowAICK.h //AICK implementation with heuristic matching algorithm. Faster than AICK.h.

Folder:Map Info: Contains Map3D classes. Important files: Map3D.h //Basic map class. Contains many usefull functions to reduce the complexity for the user. Registers frames added sequentially.

Folder: mygeometry Info: Contains geometry classes such as planes and points. Also contains the Keypoints base class.

Folder:RGBDSegmentation Info: Contains RGBD segmentation algorithms to be used on the RGBDFrames. Currently unused.

Folder: TransformationFilter Info: Contains registration algorithms that take two frames as inputs with an initial transformation and improve the solution. Currently unused.

Folder:apps Info:Contains example code of how to use the library. See Section 5 for details.

Section 5: EXAMPLES

Found in installation directory + /src/apps/. Unzip testdata.7z to gain access to some test data to run the examples with.

====================image_recorder.cpp==================== Summary: Records data from the rostopic /camera/depth_registered/points and stores it as .png image pairs. Input: path where to store recorded data. Output: png image pairs with RGBD data captured from the rostopic /camera/depth_registered/points. USAGE: Run roscore. Run roslaunch openni_launch openni.launch. Run the image_recorder program with an argument telling the recorder where to store the data.

====================pcd_recorder.cpp==================== Summary: Records data from the rostopic /camera/depth_registered/points and stores it in .pcd files. Input: path where to store recorded data. Output: captured point clouds stored as .pcd files. USAGE: Run roscore. Run roslaunch openni_launch openni.launch. Run the pcd_recorder program with an argument telling the recorder where to store the data.

====================example_register_pcd_map.cpp==================== Summary: Minimalistic example for registering data provided in .pcd files sequentially using a Map3D object. Input: a set of paths to pcd files. Output: .pcd file of aligned data. USAGE: Run the example_register_pcd_map program with a set of paths to the pcd files to be registered.

====================example_register_images_map.cpp==================== Summary: Minimalistic example for registering data provided in .png files sequentially using a Map3D object. Input: a path to a folder where png files with the correct names are located Output: .pcd file of aligned data USAGE: Run example_register_images_map program with an argument telling the program where to find the data

====================example_register_pcd_standalone.cpp==================== Summary: Example for registering data provided in .pcd files sequentially. Input: a set of paths to pcd files. Output: .pcd file of aligned data. USAGE: Run the example_register_pcd_standalone program with a set of paths to the pcd files to be registered.

====================example_register_images_standalone.cpp==================== Summary: Example for registering data provided in .png files sequentially. Input: a path to a folder where png files with the correct names are located Output: .pcd file of aligned data USAGE: Run example_register_images_map program with an argument telling the program where to find the data

====================example_register_images_fast_map.cpp==================== Summary: Example for registering data provided in .png files sequentially using a Map3D object, using ORB features and AICK with bag of words. Input: a path to a folder where png files with the correct names are located, and a path+fileprefix to a folder where a pre-trained bag of words model is located. Output: .pcd file of aligned data. USAGE: Run the example_register_images_fast_map program with an argument telling the program where to find the data.

====================example_bow_images.cpp==================== Summary: Example for training a bag of words model for data provided in .png files using a Map3Dbow. Input: a path to a folder where png files with the correct names are located, a path/name for the output, the number of files to read and a number to control what part of the given frames will be used. Output: .pcd file of aligned data. USAGE: Run the example_bow_images program with a path to a folder where png files with the correct names are located, a path/name for the output, the number of files to read and a number to control what part of the given frames will be used.

Section 6: TODOLIST

Use trees to speed up word association during frame generation. Provide more map types. Give option to provide initial guess for poses in map.

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/ekz-public-lib/README.txt

scitos_3d_mapping

Tools for building 3D maps and using these maps for navigation and visualization.

Start the system

Start all the nodes in this repository using:

roslaunch semantic_map_launcher semantic_map.launch

Data acquisition

To collect sweeps, use the action server from: cloud_merge do_sweep.py

To start the action server manually (already launched with roslaunch semantic_map_launcher semantic_map.launch):

rosrun cloud_merge do_sweep.py

Use:

rosrun actionlib axclient.py /do_sweep

This action server takes as input a string, with the following values defined: “complete”, “medium”, “short”, “shortest”. Internally the action server from scitos_ptu called ptu_action_server_metric_map.py is used, so make sure that is running.

The behavior is the following:

  • If sweep type is complete, the sweep is started with parameters -160 20 160 -30 30 30 -> 51 positions
  • If sweep type is medium, the sweep is started with parameters -160 20 160 -30 30 -30 -> 17 positions
  • If sweep type is short, the sweep is started with parameters -160 40 160 -30 30 -30 -> 9 positions
  • If sweep type is shortest, the sweep is started with parameters -160 60 140 -30 30 -30 -> 6 positions (there might be blank areas with this sweep type, depending on the environment).

Calibrate sweep poses

Once a number of sweeps of type “complete” have been collected, you can run the calibration routine which will compute the registration transformations for the 51 poses. Afterwards, you can execute sweeps of any type (from the types defined above) and the correct transformations will be loaded so that the sweeps are registered.

To start the action server manually (already launched with roslaunch semantic_map_launcher semantic_map.launch):

rosrun calibrate_sweeps calibrate_sweep_as

Use:

rosrun actionlib axclient.py /calibrate_sweeps

(Here you have to specify the minimum and maximum number of sweeps to use for the optimization. To get good registration results you should have collected > 5 sweeps. Note that only sweeps of type “complete” are used here, all others are ignored).

Once the calibration has been executed, the parameters are saved in ~/.ros/semanticMap/ from where they are loaded whenever needed. All sweeps recorded up to this point are automatically corrected using the registered sweeps.

Meta-Rooms

The Meta-Rooms are created by the semantic_map semantic_map_node. To start, run:

roslaunch semantic_map semantic_map.launch

For more information check out the semantic_map package.

The dynamic clusters are published on the /local_metric_map/dynamic_clusters topic and the Meta-Rooms are published on the /local_metric_map/metaroom topic.

Reinitialize the Meta-Rooms

After the calibration you can re-initialize the metarooms (in general a good idea, as the registration between the sweeps should be better now that the poses have been calibrated).

rosservice call /local_metric_map/ClearMetaroomService "waypoint_id:
- 'WayPointXYZ'
initialize: true"

Set the argument initialize to true and provide all the waypoints for which you want to re-initialize the metarooms in the waypoint_id list.

Access individual dynamic clusters

The package object_manager allows access to individual dynamic clusters, via a number of services. To start use:

rosrun object_manager object_manager_node

For more information check out the object_manager package.

semantic_map_publisher

The package semantic_map_publisher provides a number of services for accessing previously collected data which is stored on the disk. To start use:

rosrun semantic_map_publisher semantic_map_publisher

For more information check out the semantic_map_publisher package.

Accessing saved data

The package metaroom_xml_parser provides a number of utilities for reading previously saved sweep data. These include utilities for accessing:

  • merged point clouds
  • individual point clouds
  • dynamic clusters
  • labelled data
  • sweep xml files.

Check out the metaroom_xml_parser package for more information.

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/README.md

This package provides an action server for performing object learning, as demonstrated at the Y2 review and published in (RA-L reference). It depends heavily on the STRANDS setup, in particular the robot PTU configuration and topological navigation.

Dependencies

There are a large number of STRANDS package dependencies for this package:

Starting the server node

A lot of nodes need to be running before this action server will start. If any are missing then a warning will be printed out. The required nodes are as follows:

Standard nodes/launches:

  • strands_bringup / strands_core is needed to provide MongoDB access with message store
  • strands_bringup / strands_robot is needed for robot motion and to enable the PTU to be controlled by velocity
  • strands_bringup / strands_cameras is needed to provide head_xtion, using the latest OpenNI2 launch approach
  • strands_bringup / strands_navigation is required for monitored_navigation etc.

Additional specific nodes to launch:

  • metric mapping: roslaunch semantic_map_launcher semantic_map.launch
  • metric map ptu: rosrun scitos_ptu ptu_action_server_metric_map.py
  • map down projector: rosrun semantic_map_to_2d semantic_map_2d_server
  • camera tracker: rosrun camera_tracker camera_track_service _camera_topic:=/head_xtion/depth_registered
  • object learning node: rosrun incremental_object_learning incremental_object_learning_service
  • ptu tracker: rosrun ptu_follow_frame ptu_follow.py
  • transform manager: rosrun static_transform_manager static_tf_services.py
  • view planner: rosrun object_view_generator view_points_service.py _map_topic:=/waypoint_map

Finally the learn object action can be started:

rosrun learn_objects_action server.py _model_path:=/home/strands/test_models _record_run:=False

The node can be started with the following parameters:

  • model_path: this is non-optional and must provide the directory where the learnt model should be stored.
  • rois_file: this is the optional path to a file detailing the SOMA roi to use for which waypoint. If not provided then no SOMA regions will be needed. (See below for an example file.)
  • debug_mode: by default this is False, but if set to True then the action will step through the various parts of the learning process. At each stage the action server will need confirmation to proceed, supplied over a ROS topic.
  • planning_method: this selects which planning method to use to acquire the additional object views. Currently just the default ‘ral16’ is working, but there is a skeleton method ‘infogain’ that is ready to add the nbv_planning code to.
  • record_run: this denotes whether or not to record the ROS topics as the robot navigates around a cluster collecting additional views. By default this is False.

Launch file

All the dependencies, including the learn_objects_action action server can be started with:

roslaunch learn_objects_action learn_objects_dependencies.launch model_path:=/path/to/models/folder

Triggering a learning session

The object learning is triggered by the action /learn_object. This takes the waypoint name as the only field in its goal definition. This should be the waypoint that the robot is already at when triggering the learning.

Once started, the robot will perform a short metric map sweep, calculate the difference between the new map and the previous one at that location, select a cluster to learn based on how many views it can get of it, then drive around the object acquiring views. These views will be stitched together by the incremental object learning code from V4r, and finally a model will be saved in the folder specified by the model_path parameter.

RViz monitoring

There are several topics that can be monitored in RViz to get an idea of what is happening:

  • /waypoint_map shows the free space that the robot can plan in. Add it as a costmap to make it easier to see on top of the base map.
  • /object_view_goals : PoseArray shows the view points that the robot will try and reach.
  • /local_metric_map/dynamic_clusters : PointCloud2 shows the difference between this map and the last one. One of the shown clusters will be the learning target.
  • /local_metric_map/observations : PointCloud2 shows the new observations of the target object as they arrive.

Debug mode

When in debug mode the action server will wait for confirmation to proceed between states, and confirmation of which dynamic cluster to select. If running in debug mode then it is best to look at the code to see what is required.

SOMA Rois file format

The optional file detailing which ROI to use for which region should be of the following format:

WayPointName: 'conf/map/region_no'
OtherPointName: 'conf/map/another_region'
WayPointN: ''

where every waypoint is covered; waypoints that should not be constrained to a region are given empty strings.

Limitations

  • Sometimes, if the object is too far behind the robot, the robot's turn will be too fast and the camera tracker will fail. This results in bad models due to failed registration.
  • If the object crosses the back of the robot while driving, then the PTU has to do a full 360 degree spin to keep tracking it. During this the camera tracker will likely fail. Therefore runs with objects in front of the waypoint are more likely to work well.
  • If monitored navigation fails to move the robot, only one re-attempt is made. If that fails the action fails.
  • The PTU tilt angle is very restricted. Very often objects are too low down, so the PTU cannot see them at a reasonably close distance, resulting in tracking of the object without it actually being in view. Make sure objects to learn are at chest height.

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/learn_objects_action/README.md

Package for parsing saved room observations

Description

The metaroom_xml_parser package is used to parse previously saved room observations. The data will be read into an appropriate data structure containing: merged point cloud, individual point clouds, individual RGB and depth images and corresponding camera parameters.

Usage

  • Parsing one file

The load_single_file application reads in one room observation.

rosrun metaroom_xml_parser load_single_file /path/to/xml
  • Parsing multiple observations

The ‘load_multiple_files’ application reads in multiple room observations and returns a vector. It takes as input the root folder where the observations are stored.

rosrun metaroom_xml_parser load_multiple_files /path/to/folder
  • Parsing labelled data

The load_labelled_data application reads labelled data for observations taken at a particular waypoint. The waypoint id and the folder where the observations are stored are taken in as parameters.

rosrun metaroom_xml_parser load_labelled_data /path/to/sweeps WayPointXYZ

Utilities

A number of utilities are provided by this package, for easy data manipulation. The definitions can be seen in the file load_utilities.h

Merged cloud utilities

The complete cloud datatype is:

template <class PointType> boost::shared_ptr<pcl::PointCloud<PointType>>

The utilities for loading only the merged cloud are:

  • loadMergedCloudFromSingleSweep # returns one cloud
  • loadMergedCloudFromMultipleSweeps # returns a vector of merged clouds, one for each sweep
  • loadMergedCloudForTopologicalWaypoint # same as above

Intermediate cloud utilities

The intermediate cloud datatype is:

template <class PointType>
struct IntermediateCloudCompleteData
{
    std::vector<boost::shared_ptr<pcl::PointCloud<PointType>>>  vIntermediateRoomClouds;
    std::vector<tf::StampedTransform>                           vIntermediateRoomCloudTransforms;
    std::vector<image_geometry::PinholeCameraModel>             vIntermediateRoomCloudCamParams;
    std::vector<tf::StampedTransform>                           vIntermediateRoomCloudTransformsRegistered;
    std::vector<image_geometry::PinholeCameraModel>             vIntermediateRoomCloudCamParamsCorrected;
    std::vector<cv::Mat>                                        vIntermediateRGBImages; // type CV_8UC3
    std::vector<cv::Mat>                                        vIntermediateDepthImages; // type CV_16UC1
};

The utilities for loading the intermediate clouds are:

  • loadIntermediateCloudsFromSingleSweep # just the point clouds
  • loadIntermediateCloudsCompleteDataFromSingleSweep # complete data, with transforms and images
  • loadIntermediateCloudsFromMultipleSweeps
  • loadIntermediateCloudsCompleteDataFromMultipleSweeps
  • loadIntermediateCloudsForTopologicalWaypoint
  • loadIntermediateCloudsCompleteDataForTopologicalWaypoint

Sweep XML utilities

The sweep XML is an std::string

The utilities for finding sweep XMLs are:

  • getSweepXmls # takes a folder where to search as argument. Returns a vector<string>
  • getSweepXmlsForTopologicalWaypoint

Dynamic cluster utilities

The dynamic clusters type is:

template <class PointType> std::vector<boost::shared_ptr<pcl::PointCloud<PointType>>>

The dynamic cluster utilities are:

  • loadDynamicClustersFromSingleSweep
  • loadDynamicClustersFromMultipleSweeps
  • loadDynamicClustersForTopologicalWaypoint

Labelled data utilities

The labelled data type is:

template <class PointType>
struct LabelledData
{
    boost::shared_ptr<pcl::PointCloud<PointType>>               completeCloud;
    tf::StampedTransform                                        transformToGlobal;
    tf::Vector3                                                 sweepCenter;
    std::vector<boost::shared_ptr<pcl::PointCloud<PointType>>>  objectClouds;
    std::vector<std::string>                                    objectLabels;
    boost::posix_time::ptime                                    sweepTime;
    std::string                                                 waypoint;

};

The labelled data utilities are:

  • loadLabelledDataFromSingleSweep

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/metaroom_xml_parser/README.md

This package provides an implementation of the paper

“A probabilistic framework for next best view estimation in a cluttered environment.”, Potthast & Sukhatme, 2014

Instead of using a simple 3D array to store occupancy probabilities, an octomap is used. This is more efficient and avoids the need for the Markov random field gap filling method in the paper.

The code provides two interfaces, a C++ class and a ROS node. The C++ class is documented in source, with an example for running offline on PCD files.

Running an example on PCDs

The example program “nbv_pcds” can be used to run offline with some logged PCD files. Run this with a single command line argument:

rosrun nbv_planning nbv_pcds path_to_yaml

Where the YAML file supplied gives the location of the pointclouds, target volume and sensor model. See test_files/out.yaml for an example YAML file. The pointclouds supplied need to contain the pose of the camera within the VIEWPOINT field. Compatible point clouds can be captured using scripts/capture_some_clouds.py. The output of the program will be the view scores and the selected view. To visualise the progress in RViz, subscribe to /nbv_planner/views, /nbv_planner/octomap, and /nbv_planner/volume.

Running and using the planning as a ROS node

This package provides a single ROS node that can be used to do NBV planning. To start the node:

rosrun nbv_planning nbv_server _camera_info_topic:=/head_xtion/depth/camera_info

The camera info topic needs to be correct so that the planner can get the intrinsic parameters of the camera.

/nbv_planner/set_target_volume

This sets the volume that views should be selected for. The request takes the centroid of a bounding box, and the extents. See the service definition here.

/nbv_planner/update

This updates the current knowledge of the target volume, so that the next best view can be selected based on updated information from the last selected view. See the service definition here.

/nbv_planner/select_next_view

This returns the view that should be used next, based on how much information gain is predicted to be achieved by it. This returns the view as a pose (for the camera), the index of the view and the score. As an argument it takes a boolean to say if the selected view should be disabled after selection - i.e. not selected again later. See the service definition here.

/nbv_planner/set_views

This service sets the ‘candidate’ views that the planner should select from. Each view should be a geometry_msgs/Pose, which is the pose of the camera not the robot. See the service definition here.

RViz

There are some topics to visualise the planning:

  • /nbv_planner/octomap : this is an octomap_msgs/Octomap, subscribe using the RViz plugin to see the current volume knowledge
  • /nbv_planner/volume : this is a MarkerArray showing the target region to select views for
  • /nbv_planner/views : this is a MarkerArray showing the candidate views the planner is working with.

Limitations

This code has not been thoroughly tested. In particular, there are likely to be bugs in relation to formulae (6) and (7), and the advantage of the method stated in the paper over “simple_inf_gain” was not so apparent in tests that I carried out. Nonetheless, hopefully this package can be the basis for a better implementation or alternative method.

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/nbv_planning/README.md

object_manager

This package allows interaction with the dynamic clusters segmented by the semantic_map package.

Parameters

  • log_objects_to_db - whether the clusters segmented should be logged to mongodb. The default value is True
  • object_folder - the folder where to look for dynamic clusters. The default path is ~/.semanticMap/
  • min_object_size - clusters with fewer points than this threshold are discarded. The default value is 500.
  • additional_views_topic - the topic on which the additional views are published. The default is /object_learning/object_view
  • additional_views_status_topic - the topic on which status messages when collecting additional views are published. The default is /object_learning/status. The topic messages supported are start_viewing (only accepted if a cluster has been previously selected) and stop_viewing

DynamicObjectsService

Message type:

string waypoint_id
---
string[] object_id
sensor_msgs/PointCloud2[] objects
geometry_msgs/Point[] centroids

Given a waypoint id, this service returns all the dynamic clusters segmented at that waypoint, with their ids, point clouds and centroid.

Service topic: ObjectManager/DynamicObjectsService

The point cloud corresponding to all the dynamic clusters is also published on the topic /object_manager/objects.

GetDynamicObjectService

Message type:

string waypoint_id
string object_id
---
sensor_msgs/PointCloud2 object_cloud
int32[] object_mask
geometry_msgs/Transform transform_to_map
int32 pan_angle
int32 tilt_angle

Given a waypoint id and a cluster id (which should correspond to the ids received after calling the DynamicObjectsService), this service returns the point cloud corresponding to that dynamic cluster in the camera frame of reference and a transform to get the point cloud in the map frame of reference. In addition, it returns a set of angles (pan_angle and tilt_angle) to which to turn the PTU, and a set of indices representing the image pixels corresponding to the dynamic cluster in the image obtained after turning the PTU to the specified angles. After calling this service, the requested dynamic cluster is “selected”, and after receiving the start_viewing message on the object_learning/status topic, additional views received on the /object_learning/object_view topic will be added and logged together with this cluster.

Service topic: ObjectManager/GetDynamicObjectService

The point cloud corresponding to the requested dynamic cluster is also published on the topic /object_manager/requested_object.

The cluster mask is also published as an image on the topic: /object_manager/requested_object_mask

Note that the clusters are logged to the database when calling the DynamicObjectsService or the GetDynamicObjectService (if the log_to_db argument is set to True). Calling these services multiple times does not affect (negatively) the logging.
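A minimal Python client for these two services could look like the following sketch; the srv import path (object_manager.srv) and the example waypoint are assumptions, while the service names and fields are taken from the definitions above.

#!/usr/bin/env python
# Hedged sketch: list the dynamic clusters at a waypoint, then request one of them.
# The srv import path and class names are assumptions -- check the object_manager package.
import rospy
from object_manager.srv import DynamicObjectsService, GetDynamicObjectService  # assumed

if __name__ == '__main__':
    rospy.init_node('dynamic_object_client')
    rospy.wait_for_service('ObjectManager/DynamicObjectsService')
    get_all = rospy.ServiceProxy('ObjectManager/DynamicObjectsService', DynamicObjectsService)
    resp = get_all(waypoint_id='WayPointXYZ')  # example waypoint name
    rospy.loginfo('Found %d clusters: %s', len(resp.object_id), resp.object_id)

    if resp.object_id:
        get_one = rospy.ServiceProxy('ObjectManager/GetDynamicObjectService', GetDynamicObjectService)
        obj = get_one(waypoint_id='WayPointXYZ', object_id=resp.object_id[0])
        rospy.loginfo('Cluster mask has %d pixels; PTU pan/tilt: %d/%d',
                      len(obj.object_mask), obj.pan_angle, obj.tilt_angle)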

Export logged dynamic clusters from mongodb

rosrun object_manager load_objects_from_mongo /path/where/to/export/data/

The data exported is saved according to the sweeps where the clusters were extracted (i.e. YYYYMMDD/patrol_run_#/room_#/...)

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/object_manager/README.md

This package contains a service providing node that generates view poses around an object/region/arbitrary point of interest. The poses it generates are for the robot base, no PTU angles are provided - these are either easily calculated separately to have the PTU point at the target, or the ptu_follow_frame package can be used in conjunction with the static_transform_manager package to have the PTU always looking at the target point. The generated poses are collision checked against a 2D map to be reasonably sure that the robot is capable of navigating to those positions. In the STRANDS work using this code we use a down projected 3D map (generated by the semantic_map_to_2d package) to effectively collision check in 3D.

The node provides two methods of generating the target points: one in which the aim is to generate a sequence of views, i.e. a robot trajectory, and the other is just to generate a set of view points. The latter method is the initial step of the former; details are available in !!RA-L paper!!. The method to use is selected via a parameter in the service call.

Running

Start the service node with:

rosrun object_view_generator view_points_service.py

This will start the service with the default parameter values, namely:

  • map_topic : default '/map' - This specifies the topic to pull the map from for testing candidate poses for collision. The topic type must be nav_msgs/OccupancyGrid, and a fresh copy of the map will be waited for at each service call.
  • is_costmap : default False - This specifies the map type, as costmaps usually have different value meanings than standard maps. This allows the inflated costmap to be used in place of the static 2D map if desired.

Usage

Once started the node will provide a single service /generate_object_views with the following request fields:

  • geometry_msgs/Pose target_pose - The pose of the target to generate views for. Currently only the position is used, the orientation can be left default.
  • float32 min_dist - This is the minimum distance that the view poses should be from the target.
  • float32 max_dist - The maximum distance that the view poses should be from the object.
  • int16 number_views - How many views (maximum) should be generated. If some of the views are not valid then fewer than this number might be returned.
  • float32 inflation_radius - How much space should the robot footprint be expanded by when collision checking the candidate poses.
  • bool return_as_trajectory - This selects the method for generating the views. If True, then only a smaller subset of connected views will be returned, thereby returning a trajectory. If False then a larger set of all possible views is provided, but the path between these views is not considered when selecting them.
  • string SOMA_region - This optional parameter, if supplied, will cause the generator to restrict views to the specified SOMA region. If specified then the format must be “soma_map/soma_config/region_number” - the parsing of this might be sensitive :-). If you supply an empty string then SOMA will not be required on your system.

The return of the service is as follows:

  • geometry_msgs/PoseArray goals - This is simply the list of goals.

For convenience the service handle will also publish the generated goals on the topic /object_view_goals of type PoseArray.
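A hedged Python sketch of a service call is shown below; the srv class name (GenerateViews) is a placeholder, so look up the real service type in the object_view_generator package before using it. The field names match the request definition above.

#!/usr/bin/env python
# Hedged sketch of calling /generate_object_views. The srv class name below
# is a placeholder -- replace it with the real type from object_view_generator/srv.
import rospy
from geometry_msgs.msg import Pose
from object_view_generator.srv import GenerateViews  # placeholder name

if __name__ == '__main__':
    rospy.init_node('view_generator_client')
    rospy.wait_for_service('/generate_object_views')
    generate = rospy.ServiceProxy('/generate_object_views', GenerateViews)

    target = Pose()
    target.position.x, target.position.y = 2.0, 1.0   # only the position is used

    resp = generate(target_pose=target,
                    min_dist=1.0,
                    max_dist=2.5,
                    number_views=10,
                    inflation_radius=0.4,
                    return_as_trajectory=False,
                    SOMA_region='')   # empty string -> no SOMA region constraint
    rospy.loginfo('Got %d candidate view poses', len(resp.goals.poses))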

Testing

The test client can be started with

rosrun object_view_generator test_with_rviz.py

This provides a simple text interface to call the service with different parameters. Subscribe to /object_view_goals in RViz to see the results.

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/object_view_generator/README.md

Package for building metarooms and extracting dynamic clusters

semantic_map_node

Description

The local metric map consists of a series of meta-rooms, each corresponding to a different location. A meta-room contains only those parts of the scene which are observed to be static, and it is created incrementally as the robot re-observes the same location over time.

This package takes room observations as they are constructed by the cloud_merge package and extracts the corresponding metaroom and also computes dynamic clusters at each waypoint.

Some data is stored on the disk, in the folder

~/.semanticMap/metarooms

To start this node run:

roslaunch semantic_map semantic_map.launch

Input topics

  • /local_metric_map/room_observations : on this topic the observation xml is published. After receiving the xml, the appropriate Meta-Room will be loaded and updated.

Output topics

  • /local_metric_map/metaroom - RGBD point cloud corresponding to a meta-room, published after a new observation has been processed.
  • /local_metric_map/dynamic_clusters - RGBD point cloud corresponding to the dynamic clusters, published after a new observation has been processed.

ClearMetaroomService

This service resets the Meta-Rooms at specific waypoints.

Message type:

string[] waypoint_id
bool initialize
---

If initialize is set to True, the Meta-Rooms at the specific waypoints are re-initialized with the latest observations collected at those waypoints. Otherwise, the Meta-Rooms are just deleted.
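For example, the service can be called from Python roughly as follows; the srv import (semantic_map.srv.ClearMetaroomService) and the waypoint name are assumptions.

#!/usr/bin/env python
# Hedged sketch: re-initialising the Meta-Rooms at selected waypoints.
# The srv import path/class name is an assumption -- check the semantic_map package.
import rospy
from semantic_map.srv import ClearMetaroomService  # assumed

if __name__ == '__main__':
    rospy.init_node('clear_metaroom_client')
    rospy.wait_for_service('/local_metric_map/ClearMetaroomService')
    clear = rospy.ServiceProxy('/local_metric_map/ClearMetaroomService', ClearMetaroomService)
    clear(waypoint_id=['WayPointXYZ'], initialize=True)  # example waypoint name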

Parameters

  • save_intermediate : whether to save intermediate Meta-Room update steps to the disk. Default false
  • log_to_db : log the Meta-Rooms to mongodb. Default false
  • update_metaroom : update the Meta-Rooms (if false they will only be initialized and used as they are for dynamic cluster computation). Default true
  • min_object_size : a dynamic cluster will be reported only if it has more points than this threshold. Default 500
  • newest_dynamic_clusters : compute dynamic clusters by comparing the latest sweep with the previous one (as opposed to comparing the latest sweep to the metaroom). Default false

Export sweeps from mongodb to the disk

rosrun semantic_map load_from_mongo /path/where/to/export/data/

Import sweeps from the disk into mongodb

rosrun semantic_map add_to_mongo /path/where/to/load/data/from/

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/semantic_map/README.md

semantic_map_launcher

This launches the metric mapping / semantic mapping nodes from the strands_3d_mapping repository.

roslaunch semantic_map_launcher semantic_map.launch

Nodes started

  • cloud_merge
  • semantic_map
  • calibrate_sweep_as
  • semantic_map_publisher
  • object_manager
  • do_sweep
  • ptu_action_server_metric_map
  • dynamic_object_compute_mask_server

Parameters

The parameters accepted by the launch file are:

  • save_intermediate_clouds : whether to save the intermediate point clouds from the sweeps to the disk. Default true
  • save_intermediate_images : whether to save all the images making up an intermediate cloud to the disk (this takes a lot of space!!). Default false
  • log_to_db : log the sweeps to mongodb. Default true
  • log_objects_to_db : log the dynamic clusters to mongodb. Default true
  • cleanup : at startup, delete everything in the ~/.semanticMap/ folder. Default false
  • max_instances : maximum number of sweeps per waypoint to keep in the ~/.semanticMap/ folder. Default: 10
  • cache_old_data : if there are more sweeps per waypoint than the max_instances parameter, delete them or move them to the cache folder ~/.semanticMap/cache/. Default false, i.e. delete older sweeps.
  • update_metaroom : update the metaroom with new sweeps. Default true
  • newest_dynamic_clusters : compute dynamic clusters by comparing the latest sweep with the previous one (as opposed to comparing the latest sweep to the metaroom). Default true
  • min_object_size : the minimum number of points for a cluster to be reported. Default 500
  • segmentation_method : the segmentation method used to segment the object from the additional views collected by the robot. Supported methods: convex_segmentation and meta_room. Default: meta_room.

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/semantic_map_launcher/README.md

semantic_map_publisher

This package provides an interface to observation data previously recorded and stored on the disk. The data can be queried using the services described below.

WaypointInfoService

Message type:

---
string[] waypoint_id
int32[] observation_count

Returns a list of waypoints along with the number of observations collected at those waypoints.

Service name: SemanticMapPublisher/WaypointInfoService

SensorOriginService

Message type:

string waypoint_id
---
geometry_msgs/Vector3 origin

Given a waypoint this service returns the origin from where the latest observation was acquired at that waypoint.

Service name: SemanticMapPublisher/SensorOriginService

ObservationService

Message type:

string waypoint_id
float64 resolution
---
sensor_msgs/PointCloud2 cloud

Given a waypoint, and a resolution, this service returns the latest observation collected at that waypoint as a PointCloud with the specified resolution.

Service name: SemanticMapPublisher/ObservationService
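As an illustration, a Python client for this service might look like the sketch below; the srv import path and the example waypoint/resolution values are assumptions, while the service name and fields are taken from the definition above.

#!/usr/bin/env python
# Hedged sketch: fetching the latest observation at a waypoint as a point cloud.
# The srv import path/class name is an assumption -- check semantic_map_publisher.
import rospy
from semantic_map_publisher.srv import ObservationService  # assumed

if __name__ == '__main__':
    rospy.init_node('observation_client')
    rospy.wait_for_service('SemanticMapPublisher/ObservationService')
    get_observation = rospy.ServiceProxy('SemanticMapPublisher/ObservationService', ObservationService)
    resp = get_observation(waypoint_id='WayPointXYZ', resolution=0.05)  # example values
    rospy.loginfo('Cloud with %d points', resp.cloud.width * resp.cloud.height)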

ObservationOctomapService

Message type:

string waypoint_id
float64 resolution
---
octomap_msgs/Octomap octomap

Same as ObservationService but returns the latest observation as an Octomap.

Service name: SemanticMapPublisher/ObservationOctomapService

WaypointTimestampService

Message type:

string waypoint_id
---
string[] waypoint_timestamps

Given a waypoint, this service returns the timestamps of all the observations collected at that waypoint, as a list.

Service name: SemanticMapPublisher/WaypointTimestampService

ObservationInstanceService

Message type:

string waypoint_id
int64 instance_number # convention 0 - oldest available
float64 resolution
---
sensor_msgs/PointCloud2 cloud
string observation_timestamp

Given a waypoint id, an instance number and a resolution, this service returns a particular instance from the observations collected at that particular waypoint, with the desired resolution, along with the timestamp of the observation (as opposed to ObservationService, which returns the latest observation at that waypoint).

Service name: SemanticMapPublisher/ObservationInstanceService

ObservationOctomapInstanceService

Message type:

string waypoint_id
int64 instance_number # convention 0 - oldest available
float64 resolution
---
octomap_msgs/Octomap octomap
string observation_timestamp

Same as ObservationInstanceService, but returns the observation instance as an Octomap.

Service name: SemanticMapPublisher/ObservationOctomapInstanceService

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/semantic_map_publisher/README.md

This package contains a single map server node that publishes a down-projected metric map. A service is provided for changing which waypoint is currently being published.

Usage

Start it with

rosrun semantic_map_to_2d semantic_map_2d_server

This then provides the service /set_waypoint with the following format [TODO: This should be moved to the node's namespace…]

Request:

  • string waypoint : The name of the waypoint to switch the map to.

Response:

  • bool is_ok : If the map was switched or not
  • string response : Some textual description of what went wrong if not ok.

The map server publishes the map as a nav_msgs::OccupancyGrid on the topic /waypoint_map.
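A possible Python client is sketched below; the srv class name (ChangeWaypoint) is a placeholder and should be replaced with the actual service type defined in semantic_map_to_2d.

#!/usr/bin/env python
# Hedged sketch: switching the published down-projected map to another waypoint.
# The srv class name is a placeholder -- check the semantic_map_to_2d srv definitions.
import rospy
from semantic_map_to_2d.srv import ChangeWaypoint  # placeholder name

if __name__ == '__main__':
    rospy.init_node('set_waypoint_client')
    rospy.wait_for_service('/set_waypoint')
    set_waypoint = rospy.ServiceProxy('/set_waypoint', ChangeWaypoint)
    resp = set_waypoint(waypoint='WayPointXYZ')  # example waypoint name
    if not resp.is_ok:
        rospy.logwarn('Could not switch map: %s', resp.response)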

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/semantic_map_to_2d/README.md

object_3d_retrieval

This contains our implementation of a vocabulary tree (Nister & Stewenius) together with some modifications for our paper. It also includes some functions e.g. for extracting segments & features and querying on our data structure.

To use the system, follow the instructions in scripts/menu.py. There are submenus, for example for querying that can be run separately, i.e. scripts/training_menu.py and scripts/querying_menu.py.

Instructions for stepping through menu.py

If you build the project and go to your build folder, you should see the python scripts mentioned above. To use the scripts you will need data captured according to the meta room description, defined in strands_3d_mapping.

When you run menu.py, you are first asked for the data path. This is the base folder of your sweeps and it should contain folders with different dates as names. Once you enter the name you should get the menu itself:

Please supply the data path: /home/nbore/Data/KTH_longterm_dataset_labels

Please make sure to do the following in order both for the noise data folder and for the annotated data. Note that you should choose the a or b option consistently.

Working on data path /home/nbore/Data/KTH_longterm_dataset_labels

1.  Set data path (if you want to change)
2.  Init sweep segment folders
3a. Create convex segments (alternative to 3b & 5b)
3b. Create convex segments and supervoxels (alternative to 3a & 5a)
4.  Extract PFHRGB features
5a. Segment keypoints into subsegments (alternative to 5b)
5b. Create PFHRGB feature for supervoxels (alternative to 5a)
6.  Extract SIFT from sweeps (for re-weighting)
7.  Vocabulary training menu
8.  Querying & benchmarking menu
9.  Exit

Please enter an option 1-9(a/b):

Now you should step through the numbers in order. Note that if you choose e.g. 3a you should also choose 5a. Start by inputting 2. This will set up all the folders. Then choose either 3a or 3b. This will perform a convex segmentation of the data; it will take a while. Then input 4 to extract features, which will also take a while. Then, depending on whether you chose a or b previously, input 5a or 5b. Finally, input 6 to finish the data processing by extracting SIFT features.

You are now ready to go on to training the vocabulary tree representation!

Instructions for running training_menu.py

To go into the training menu, either input 7 in the previous menu or run training_menu.py separately. This time you are asked for the path to your vocabulary representation. Simply give the path to some empty folder:

Please supply the path to the vocabulary: /home/nbore/Data/KTH_longterm_dataset_labels_convex

Please make sure to do the following in order

Working on vocabulary path /home/nbore/Data/KTH_longterm_dataset_labels_convex

1. Set vocabulary path (if you want to change)
2. Initialize vocabulary folders and files
3. Build vocabulary tree representation
4. Exit

Please enter an option 1-4:

First, input 2. This will ask you for paths to an annotated meta room data set and another “noise” data set which does not need to be annotated. It will also ask if you want a “standard” or “incremental” type vocabulary. The two types are detailed in the paper, basically the standard works on the convex segments extracted earlier while incremental is more flexible. Once this is done, simply enter 3 to train and build the representation. You are now ready to query the representation, see https://github.com/nilsbore/quasimodo/tree/master/quasimodo_retrieval for information on the ROS interface. Also check out example usage in https://github.com/nilsbore/quasimodo/tree/master/quasimodo_test.

Happy querying!

Dependencies

Right now, we do not use the debian package of strands_3d_mapping (see below), instead you should compile https://github.com/RaresAmbrus/strands_3d_mapping manually in a catkin workspace, be sure to check out the hydro-devel branch. There is a commented section in the cmake file where you can set the variable parser_workspace, which should point to the catkin workspace where your strands_3d_mapping checkout lives. See the lines https://github.com/nilsbore/dynamic_object_retrieval/blob/dynamic/CMakeLists.txt#L52 and https://github.com/nilsbore/dynamic_object_retrieval/blob/dynamic/benchmark/CMakeLists.txt#L31.

In the future, you will instead use the packaged version of strands_3d_mapping from the STRANDS project http://strands.acin.tuwien.ac.at/. Follow the instructions on https://github.com/strands-project-releases/strands-releases/wiki to add the debian package repository. Then install the metaroom_xml_parser by typing sudo apt-get install ros-indigo-metaroom-xml-parser.

Tested with Ubuntu 14.04 with ROS Indigo and corresponding OpenCV + PCL, QT4. The repos https://github.com/nilsbore/k_means_tree, https://github.com/USCiLab/cereal and https://github.com/mp3guy/Stopwatch are included in the repo as subtrees.

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/dynamic_object_retrieval/README.md

This whole directory is a subtree of https://github.com/mp3guy/Stopwatch . As such, it is created by Thomas Whelan et al. (https://github.com/mp3guy) without any changes by us. The code is re-distributed here to be able to use it within the larger package structure.

Since some of the code is attributed to the b-human code release, we include the following license, which applies to those parts:

LICENSE

Copyright (c) 2016 B-Human. All rights reserved.

Preamble: B-Human releases most of the software it uses at RoboCup competitions to allow teams participating in the Standard Platform League that do not have the resources to develop a complete robot soccer system on their own, but still have important contributions to make to the goals of RoboCup. We intend to enable such teams to benchmark their own scientific approaches in RoboCup competitions. We also hope that the scientific community will benefit from their work through the publication of their findings. A second reason for B-Human releasing its code is that source code is the most solid documentation of how problems were actually solved.

Parts of this distribution were not developed by B-Human. This license doesn’t apply to these parts, the rights of the copyright owners remain.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  3. The end-user documentation included with the redistribution, if any, must include the following acknowledgment: “This product includes software developed by B-Human (http://www.b-human.de).” Alternately, this acknowledgment may appear in the software itself, if and wherever such third-party acknowledgments normally appear.
  4. If the source code or parts of the source code shall be used for a RoboCup competition, the competing program must differ in at least multiple major parts from the original distribution. “Major parts” means own contributions to the RoboCup goal, which are potentially publishable and usually manifest themselves as new modules (i.e. source code) and not just as a parameterization of existing technology (e.g. walking parameters, kicks, behavior options).
  5. For each B-Human code release from which parts are used in a RoboCup competition, the usage shall be announced in the SPL mailing list (currently robocup-nao@cc.gatech.edu) one month before the first competition in which you are using it. The announcement shall name which parts of this code are used. It shall also contain a description of the own contribution that addresses the criterions mentioned above.
  6. If you are using this source code or parts of this source code and happen to meet members of B-Human at a RoboCup competition, please provide these members with a few bottles of your favorite beer.
  7. Bug fixes regarding existing code shall be sent back to B-Human via GitHub pull request (https://github.com/bhuman).

THIS SOFTWARE IS PROVIDED BY B-HUMAN ``AS IS’’ AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL B-HUMAN NOR ITS MEMBERS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ——————————————————————

Stopwatch

Easy to use simple benchmarking tool.

Sends UDP packets to localhost, which StopwatchViewer receives and displays stats on, including plots (credit to Fiachra Matthews here).

StopwatchViewer needs Qt4 to build. Simply include Stopwatch.h in whatever code you want to benchmark and use it as follows:

#include <string>
#include <stdio.h>
#include <unistd.h>  // needed for sleep()
#include "Stopwatch.h"

int main(int argc, char *argv[])
{
  //This stops duplicate timings on multiple runs
  Stopwatch::getInstance().setCustomSignature(32434);

  STOPWATCH("Timing1",

  if(argc >= 1)
  {
    sleep(1);
  }

  );


  TICK("Timing2");

  while(argc > 0)
  {
    sleep(2);
    argc--;
  }

  TOCK("Timing2");

  Stopwatch::getInstance().sendAll();
}

Then just watch the stats in StopwatchViewer.

Uses some code from the B-Human code release (http://www.b-human.de/).

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/dynamic_object_retrieval/dynamic_object_retrieval/stopwatch/README.md

Additional view registration

This package provides services for the registration of additional views of an object. The services are defined here.

  • ObjectAdditionalViewRegistrationService
string observation_xml # can be blank
string object_xml
---
geometry_msgs/Transform[] additional_view_transforms
int32[] additional_view_correspondences
geometry_msgs/Transform observation_transform
int32 observation_correspondences
  • AdditionalViewRegistrationService
string observation_xml # can be blank
sensor_msgs/PointCloud2[] additional_views
geometry_msgs/Transform[] additional_views_odometry_transforms
---
geometry_msgs/Transform[] additional_view_transforms
int32[] additional_view_correspondences
geometry_msgs/Transform observation_transform
int32 observation_correspondences

Two interfaces are provided: the first takes as input an object_xml file which points to the additional views acquired by the robot (loaded with the metaroom_xml_parser, see https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/metaroom_xml_parser/include/metaroom_xml_parser/load_utilities.h#L99), while the second one takes the additional views and the initial odometry poses directly as input. Note that the system will attempt registration even when the odometry transforms are not available; however, in general the results are expected to be worse. Both interfaces take as input a second parameter, observation_xml, which points to an observation that the additional views should be registered to.

The result is stored in additional_view_transforms - corresponding to the transforms which align the views with each other, along with additional_view_correspondences denoting the number of SIFT correspondences used to compute the transforms. Similarly, observation_transform stores the transform which aligns the additional views to the observation, and observation_correspondences denotes the number of correspondences used to compute it.
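As a minimal sketch, the first interface could be called from Python roughly as follows. The service type comes from the observation_registration_services package described further below; the service name used here is an assumption, so check rosservice list for the name actually advertised by the server node.

#!/usr/bin/env python
# Hedged sketch: call the object additional view registration service.
# The service *name* below is an assumption -- verify it with `rosservice list`.
import rospy
from observation_registration_services.srv import ObjectAdditionalViewRegistrationService

if __name__ == '__main__':
    rospy.init_node('additional_view_registration_client')
    srv_name = '/object_additional_view_registration_server'  # assumed name
    rospy.wait_for_service(srv_name)
    register = rospy.ServiceProxy(srv_name, ObjectAdditionalViewRegistrationService)
    # observation_xml may be blank; object_xml points to the object with additional views
    resp = register(observation_xml='', object_xml='/path/to/object.xml')
    rospy.loginfo('%d view transforms computed, correspondences per view: %s',
                  len(resp.additional_view_transforms),
                  str(resp.additional_view_correspondences))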

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/observation_registration/additional_view_registration_server/README.md

observation_registration

This contains packages (nodes) for the registration of:

  • observations
  • additional views

These packages should be called using the services defined here.

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/observation_registration/README.md

Observation registration server

This node provides a service which registers two observations. The service used is defined here:

string source_observation_xml
string target_observation_xml
---
geometry_msgs/Transform transform
int32 total_correspondences

The service returns the transform which aligns the source observation to the target observation. The underlying registration computes correspondences between pairs of images from the source and target observations, from which the registration transformation is then computed.

Test observation registration server

For convenience, a test routine is provided in this package which takes as input two observation xml files, calls the observation_registration_server and visualizes the registration results. To run it, execute:

rosrun observation_registration_server test_observation_registration XML_1 XML_2

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/observation_registration/observation_registration_server/README.md

Observation registration services

This package defines a number of service messages, which should be included by the packages using observation/additional view registration (thus avoiding a dependency on the registration packages, which in turn depend on CUDA).

ObservationRegistrationService

string source_observation_xml
string target_observation_xml
---
geometry_msgs/Transform transform
int32 total_correspondences

This service returns the 3D transformation aligning the source observation to the target observation (i.e. source * transform = target). The number of correspondences in total_correspondences gives an indication of the quality of the registration (specifically, if total_correspondences = 0 the registration has failed).
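For illustration, a minimal Python client for this service might look like the sketch below. The service name is an assumption (check rosservice list against the running observation_registration_server), while the request and response fields are those listed above.

import rospy
from observation_registration_services.srv import ObservationRegistrationService

rospy.init_node('observation_registration_client')
srv_name = '/observation_registration_server'  # assumed name -- check `rosservice list`
rospy.wait_for_service(srv_name)
register = rospy.ServiceProxy(srv_name, ObservationRegistrationService)
resp = register(source_observation_xml='/path/to/source/room.xml',
                target_observation_xml='/path/to/target/room.xml')
if resp.total_correspondences == 0:
    rospy.logwarn('registration failed')
else:
    rospy.loginfo('source-to-target transform: %s', str(resp.transform))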

ObjectAdditionalViewRegistrationService

string observation_xml # can be blank
string object_xml
---
geometry_msgs/Transform[] additional_view_transforms
int32[] additional_view_correspondences
geometry_msgs/Transform observation_transform
int32 observation_correspondences

This service registers the additional views acquired for a specific object a) with respect to each other and b) with an observation. The first input to this service is the object_xml file, pointing to the object containing the additional views to be registered; the object is loaded from disk using the metaroom_xml_parser. The second parameter is the observation_xml of an observation previously acquired using the strands_3d_mapping pipeline. If this parameter is not blank, the additional views of the object will also be registered to the observation’s intermediate clouds. The underlying registration is done using SiftGPU and the Ceres optimization engine.

The registered poses of the object additional views are recorded in the vector additional_view_transforms, while the number of correspondences used per view is stored in additional_view_correspondences. Similarly, the transform that aligns the views to the observation is stored in observation_transform and the number of correspondences used to compute it in observation_correspondences.

AdditionalViewRegistrationService

string observation_xml # can be blank
sensor_msgs/PointCloud2[] additional_views
geometry_msgs/Transform[] additional_views_odometry_transforms
---
geometry_msgs/Transform[] additional_view_transforms
int32[] additional_view_correspondences
geometry_msgs/Transform observation_transform
int32 observation_correspondences

This service can be used for the same purpose as the previous one. The difference is that, instead of passing an object_xml file, one can pass a number of additional_views (in the form of point clouds), and, optionally (though strongly advised), odometry poses for the additional views, which the optimizer will use as initial guesses for the additional view poses.

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/observation_registration/observation_registration_services/README.md

A GPU implementation of David Lowe’s Scale Invariant Feature Transform

Changchang Wu

http://cs.unc.edu/~ccwu

University of North Carolina at Chapel Hill

  1. SIFT

    SIFTGPU is an implementation of SIFT for the GPU. SiftGPU uses the GPU to process pixels and features in parallel in Gaussian pyramid construction, DoG keypoint detection and descriptor generation for SIFT. A compact feature list is efficiently built through a GPU/CPU mixed reduction.

    SIFTGPU is inspired by Andrea Vedaldi’s sift++ and Sudipta N Sinha et al’s GPU-SIFT. Many parameters of sift++ (for example, number of octaves, number of DoG levels, edge threshold, etc.) are available in SiftGPU.

    SIFTGPU also includes a GPU exhaustive/guided SIFT matcher, SiftMatchGPU. It basically multiplies the descriptor matrix on the GPU and finds the closest feature matches on the GPU. GLSL/CUDA/CG implementations are all provided.

    NEW: The latest SIFTGPU also enables you to use multiple GPUs and GPUs on different computers. Check doc/manual.pdf for more information. You can modify some macro definitions in SimpleSIFT.cpp and speed.cpp to enable the testing of the new functions.

  2. Requirements

    The default implementation uses GLSL, and it requires a GPU that has large memory and supports dynamic branching. For nVidia graphics cards, you can optionally use the CG (requires fp40) or CUDA implementation. You can try the different implementations to find out the fastest one for different image sizes and parameters.

    The GLSL version may not work on ATI cards at the moment. They compiled successfully with ATI Catalyst 8.9, but not any more with the 9.x versions.

    SiftGPU uses the DevIL image library, GLEW and GLUT. You’ll need to make sure your system has all the depending libraries. SiftGPU should be able to run on any operating system that supports the above libraries.

    For Windows systems, Visual Studio solutions are provided as msvc/SiftGPU.dsw, msvc/SiftGPU.sln and msvc/SiftGPU_CUDA_Enabled.sln. The Linux/Mac makefile is in the Linux folder of the package.

  3. Help

    Use -help to get parameter information. Check /doc/manual.pdf for samples and explanations. In the VC workspace, there is a project called SimpleSIFT that gives an example of simple SiftGPU usage. There are more examples of different ways of using SiftGPU in manual.pdf.

    Check /doc/manual.pdf for help on the viewer.

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/observation_registration/siftgpu/README.txt

quasimodo

Quasimodo - Qu(erying) A(nd) S(patio-Temporal) I(ncremental) Mod(el building of) O(bjects)

Build & Setup

The packages distributed here do not require any special setup; simply build them in your catkin workspace. Note that you will need the 2.0 branch of https://github.com/strands-project/soma (which in turn requires an up-to-date version of https://github.com/strands-project/mongodb_store). However, our packages require graphics for data fusion and visualization; the following describes how to set that up on a computer without a display.

Headless Display & ElasticFusion

This package is meant to run on a robot with no screen but with an Nvidia graphics card. This requires some setup every time the package is run (see the last two commands below).

Right now, this package has a few dependencies that have to be installed manually. In the future, our ElasticFusion fork will be replaced by a new version implemented in CUDA. For now, you need to clone the repo https://github.com/stevenlovegrove/Pangolin anywhere in your home folder. Follow the build instructions in its readme.

Then you need to install our fork of ElasticFusion. For project members, please contact one of the maintainers and we will give you access to the code. Note that you need at least CUDA version 7 to run ElasticFusion. To get a graphics driver compatible with this version, the easiest way (under Ubuntu 14.04) is to sudo apt-get install nvidia-352.

To run these programs on a headless computer, we need to perform the following steps. First, do

sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024

and then restart the computer. Further, we need to point to the new display that will be provided by the X server by typing

export DISPLAY=:0

If you are going to run the programs multiple times, you might consider putting this in your .bashrc. Note that this means you will have to set DISPLAY manually again if you want to ssh with -X. Then, every time you have restarted the computer and want to run the nodes, you need to run

sudo service lightdm stop
sudo /usr/bin/X :0 &

This will kill the previous X server and start a new one that works in a headless state.

Apart from these acrobatics, all you should need to do is catkin_make in your workspace.

General Info & Launching

The packages in this repo provide functionality for building a database of objects from observations in an unsupervised manner. They build on the Metaroom paradigm for data collection. The libraries used for retrieving point clouds from several weeks of data collection can be found in https://github.com/strands-project/strands_3d_mapping/tree/hydro-devel/dynamic_object_retrieval.

For launching the pipeline for building the representation and querying on the robot, do

roslaunch quasimodo_launch quasimodo.launch data_folder:=/path/to/.semanticMap

Here, /path/to/.semanticMap is typically located in ~/.semanticMap. Please type the complete path. This will launch the necessary nodes and launch files, both for maintaining a database of object models and for retrieving point clouds across weeks of exploration.

Visualization

Retrieval component

The easiest way to visualize the output of the retrieval (point cloud history search) pipeline is to look at the image published on the /quasimodo_retrieval/visualization topic. The leftmost image shows the masked RGB image of the query object and to the right are rendered views of the ten closest matches represented as 3D surfel clouds.

You can manually trigger a search (i.e. without using the incremental object building framework) of an object with additional views by starting

rosrun quasimodo_retrieval quasimodo_retrieve_observation

and then, in another window, specify the path to the xml of the additional views:

rostopic pub /object_learning/learned_object_xml std_msgs/String "data: '/path/to/.semanticMap/201422/patrol_run_56/room_0/2016-Apr-22 14:58:33.536964_object_0.xml'"

You can also use soma to visualize the queries over time.

Model building component

@jekekrantz Add some stuff here!

Detailed description of packages, nodes, launch files and messages

retrieval_processing

This package runs in conjunction with the metaroom nodes online on the robot. As metarooms are collected, the package continuously segments, extracts features and adds them to a feature vocabulary representation. To launch the entire pipeline, do

roslaunch retrieval_processing processing.launch data_folder:=~/.semanticMap
Nodes
Launch files
  • processing.launch - this file launches all of the above nodes. By default it does not launch retrieval_simulate_observations; this is triggered with the parameter simulate_observations:=true. It is important to set the parameter data_folder:=/path/to/metarooms to the location of the metarooms, typically ~/.semanticMap.

quasimodo_retrieval

This package provides the nodes for retrieving point clouds from the memory created by retrieval_processing. Launch everything simply with

roslaunch quasimodo_retrieval retrieval.launch vocabulary_path:=/path/to/vocabulary

where the vocabulary is most often located in ~/.semanticMap/vocabulary.

Nodes
  • quasimodo_retrieval_node - provides the service /query_normal_cloud and subscribes to the topic /models/new. If something is published on the topic, it returns the result on /retrieval_result.
  • quasimodo_visualization_server - this node simply subscribes to /retrieval_result and visualizes the query result using the tools in the package object_3d_benchmark, https://github.com/strands-project/strands_3d_mapping/tree/hydro-devel/dynamic_object_retrieval/benchmark. The resulting image is published on /visualization_image.
  • quasimodo_retrieve_observation - allows the system to bypass the model building component, instead searching for results directly using the retrieval framework. Simply publish something like rostopic pub /object_learning/learned_object_xml std_msgs/String "data: '/path/to/.semanticMap/201422/patrol_run_56/room_0/2016-Apr-22 14:58:33.536964_object_0.xml'" to retrieve more views of that object.
Other Nodes
  • quasimodo_visualize_model - this node simply visualizes the topic /models/new by integrating it into a point cloud and showing it in a PCL visualizer
  • quasimodo_retrieval_publisher - this node queries for all the labeled objects in a particular metaroom sweep, given by the parameter data_path.
  • quasimodo_retrieval_server - a barebones version of quasimodo_retrieval_node; it simply returns the retrieved clouds without loading any images or object masks
Launch files
  • retrieval.launch - launches quasimodo_retrieval_node, quasimodo_visualization_server and a node for fusing the incoming RGB-D frames. Takes the parameter vocabulary_path, most often this is ~/.semanticMap/vocabulary.

quasimodo_optimization

This package is a general tool for optimizing some value by evaluating a metric that comes from analyzing a rosbag. The package uses dynamic_reconfigure to play back the rosbag with different parameters and record the values associated with the parameters.

Can be launched with

roslaunch quasimodo_optimizer optimizer.launch

Afterwards, run plot_values.py in the folder where you ran the launch file.

Nodes
  • optimizer.py - steps through the parameters and plays back the rosbags for every parameter configuration
  • rosbag_player.py - an action server for playing back ros bags on demand
  • plot_values.py - plots the values as a heat map in parameter space
Launch files
  • optimizer.launch - launches optimizer.py and rosbag_player.py.

quasimodo_msgs

All the message and service types required for the Quasimodo framework.

Message types
  • image_array.msg - an array of images
  • int_array.msg - an array of ints
  • model.msg - a model object, consisting of point clouds, frames, camera parameters and relative transforms
  • retrieval_query.msg - message type for querying quasimodo_retrieval
  • retrieval_result.msg - message type for the result of querying quasimodo_retrieval
  • retrieval_query_result.msg - a combined message for querying and result
  • rgbd_frame.msg - RGB images, depth images and camera parameters
  • string_array.msg - an array of strings
Service types
  • cloud_from_model.srv - service for fusing models into clouds
  • fuse_models.srv - several models to one fused model
  • get_model.srv - get model for identifier
  • index_frame.srv - add frame to model data base
  • model_from_frame.srv - turn frame into model
  • query_cloud.srv - query retrieval using retrieval_query.msg
  • simple_query_cloud.srv - query retrieval using sensor_msgs/PointCloud2 pointcloud with normals
  • visualize_query.srv - visualize a retrieval_result.msg

quasimodo_brain

This package controls the flow of data in the quasimodo system and maintains the database of object models. It relies heavily on the quasimodo_models package. The package also contains loaders for different formats of data, such as, for example, the metarooms.

roslaunch quasimodo_brain modelserver.launch
roslaunch quasimodo_brain robot_listener.launch
Nodes
  • preload_object_data - Reads data in the metarooms format. Upon request, publishes data for the modelserver. Input: paths to a set of folders containing data.
  • robot_listener - Listens to a topic. Whenever it receives the path to an xml file, it reads data in the metarooms format from the file and publishes data for the modelserver. Input: topic name to listen to.
  • modelserver - Listens to data from input modules, uses the quasimodo_models package to register and merge models into more complete models and thereby maintain the database of objects. Input: ‘-v’ for visualization, ‘-p /path/to/folder’ to set a folder where the database is read/stored, ‘-m’ initializes the database with the data from /path/to/folder, ‘-massreg_timeout value’ sets the stopping time for the multiview registration, ‘-occlusion_penalty value’ sets the penalty value for occlusions (controlling how likely the database is to merge models).
Launch files
  • modelserver.launch - this file launches the modelserver node without the visualization flag.
  • robot_listener.launch - this file launches the robot_listener node with the topic name set to “/some/topic”.
  • brain.launch - launches the modelserver and the preload_object_data nodes, with automatic restart.

quasimodo_models

This package contains libraries for registering, splitting, merging and optimizing quasimodo object models. Quasimodo object models contain RGBD frames, segmentation masks, depth images and relative poses between the data for the frames.

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/quasimodo/README.md

Quasimodo Retrieval

This package contains nodes to interface with the object_3d_retrieval and object_3d_benchmark packages. They are meant to be used together with the scripts in object_3d_benchmark to build a feature representation that can be used by these nodes for querying. Please look at the package quasimodo_test (specifically the node test_msgs) for an example of this in action.

Retrieval Server

The retrieval server basically takes a point cloud and queries for similar objects observed previously within our feature representation. The definition of the service looks like:

sensor_msgs/PointCloud2 cloud
sensor_msgs/Image image
sensor_msgs/CameraInfo camera
---
retrieval_result result

Please check the message quasimodo_msgs/retrieval_result for the exact format of the return type. Invoke a server instance that can be called with the above service definition by running

rosrun quasimodo_retrieval quasimodo_retrieval_server _vocabulary_path:=/path/to/vocabulary

Visualization Server

The idea is that the output from the retrieval server can be sent on to another server for visualization. The original query data, together with the result from the retrieval server, is passed on to the visualization server. It offers the following service:

sensor_msgs/Image image
sensor_msgs/CameraInfo camera
geometry_msgs/Transform room_transform
retrieval_result result
---
sensor_msgs/Image image

Run the node by simply typing

rosrun quasimodo_retrieval quasimodo_visualization_server

Original page: https://github.com/strands-project/strands_3d_mapping/blob/hydro-devel/quasimodo/quasimodo_retrieval/README.md

The repo contains the catkin package octomap_launch. To be able to run it you need the octomap stack. You can get it by running sudo apt-get install ros-groovy-octomap-mapping. If you also want to visualize the maps you create, you can get the octovis visualizer tool by running sudo apt-get install ros-groovy-octovis.

roslaunch octomap_launch octomap.launch map:=(PATH_TO_MAP)/map.yaml - for running the scitos nav stack package for localization and octomap for map building. You need to have the scitos drivers, including the laser scanner and the state_publisher, up and running before doing this.

If you want to view the map while building it using octomap, you can visualize it in rviz by adding a MarkerArray on topic occupied_cells_vis_array.

To save the map created by octomap, just do rosrun octomap_server octomap_saver -f (FILENAME).bt. You can view the map file by just typing octovis (FILENAME).bt in the command line. There is an example 3D map created on the robot in octomap_launch/example_maps/map3d.bt that you can try this out with.

Original page: https://github.com/strands-project/strands_3d_mapping/wiki/Get-up-and-running

Overview

This package provides a script to report marathon run attempts to the STRANDS Robot Marathon webserver. The script monitors the robot’s odometry to find out how far the robot has moved. Starting the script signals the beginning of a run, and ending the script finishes the run.

Pre-requisites

  • The robot needs to have internet access at the beginning of the run. This is so that it can close any outstanding previous attempts and request a new run key from the server.
  • The robot must publish odometry data as nav_msgs/Odometry messages on the /odom topic.
  • The mongodb_store must be running.
  • Your marathon teamname/password must be set in ~/.marathon_auth

Setting up .marathon_auth

The team name and password supplied to you must be placed in the file ~/.marathon_auth in the format:

team: "your_team_name"
password: "your_pa33w0rd"

Original page: https://github.com/strands-project/strands_apps/blob/indigo-devel/marathon_reporter/README.md

Odometry Mileometer

This very simple node subscribes to /odom and calculates the travelled distance since start in metres. The result is published on /odom_mileage.

Start with rosrun odometry_mileage odometry_mileage

If the parameter saved_mileage is present on the rosparam server (e.g. via the datacentre), then the node will use the stored mileage to initialise itself. The node will also check against the actual /mileage topic on startup and will choose the larger of the two (parameter or published value). The mileage topic might actually be higher than the odom_mileage topic after restarting the robot, because at that point the mileage is read directly from the EEPROM, whereas the odometry_mileage node might not have been running the whole time (e.g. if the system crashed).

While the node is running, the mileage parameter is constantly set to the current value and saved to the datacentre. The default save interval is every 500 odometry messages, which is roughly every 10 seconds. This can be changed via the save_interval parameter.

Original page: https://github.com/strands-project/strands_apps/blob/indigo-devel/odometry_mileage/README.md

roslaunch_axserver: Running roslaunch files via the actionlib interface

goal definition

This component allows you to launch ROS launch files via an actionlib server. A simple goal (of type launchGoal) would look like this:

pkg: 'strands_morse'
launch_file: 'uol_mht_nav2d.launch'
monitored_topics: ['/map']

This would launch the launch file uol_mht_nav2d.launch in the package strands_morse.

monitoring the roslaunch process

The action server implements a monitor thread that will provide feedback on the actionlib feedback channel as a boolean (the parameter ready in the feedback), indicating whether all the topics given via monitored_topics (in the example above just one: /map) are actually advertised. This allows you to wait until all components have been launched. If monitored_topics is empty, the feedback is always ready=True.

state transitions

The goal status will be set to ACTIVE when it was possible to launch a roslaunch process (and will be set to REJECTED if this was not possible). If the roslaunch process terminates with an error (e.g. when the given roslaunch file doesn’t exist), the goal is set to ABORTED. If the roslaunch process terminates successfully, then the goal is set to SUCCESS (usually, we will never see this, as the roslaunch process will likely be running until we actively preempt it).

using in Python

In order to use this in your code, include something like the following in your Python code:

import rospy
import roslaunch_axserver.msg
import actionlib

...

    client = actionlib.SimpleActionClient('launchServer', roslaunch_axserver.msg.launchAction)

    # Waits until the action server has started up and started
    # listening for goals.
    client.wait_for_server()

    # Creates a goal to send to the action server.
    goal = roslaunch_axserver.msg.launchGoal(pkg='pkg', launch_file='filename', monitored_topics=[])

    # Sends the goal to the action server.
    client.send_goal(goal)

The server is reentrant (it is not a SimpleActionServer), meaning that you can run several roslaunch processes in parallel.
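To make use of the monitoring feedback described above, you can also pass a feedback callback when sending the goal. The sketch below assumes the generated feedback type follows the usual actionlib naming (launchFeedback) and exposes the ready field mentioned earlier.

import rospy
import actionlib
import roslaunch_axserver.msg

def feedback_cb(feedback):
    # 'ready' becomes True once all monitored_topics are advertised
    rospy.loginfo('launch ready: %s', feedback.ready)

rospy.init_node('launch_client_example')
client = actionlib.SimpleActionClient('launchServer', roslaunch_axserver.msg.launchAction)
client.wait_for_server()
goal = roslaunch_axserver.msg.launchGoal(pkg='strands_morse',
                                         launch_file='uol_mht_nav2d.launch',
                                         monitored_topics=['/map'])
client.send_goal(goal, feedback_cb=feedback_cb)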

Original page: https://github.com/strands-project/strands_apps/blob/indigo-devel/roslaunch_axserver/README.md

static_transform_manager

This provides a transformation manager node that broadcasts static transformations at 30Hz, in the same way that static transformations can be created using static_tf_broadcaster. Using this node, static transformations can be set up and removed more easily than by managing static_tf_broadcasters as subprocesses.

Usage

rosrun static_transform_manager static_tf_services.py

will start the manager node. This provides two services:

  • /static_transforms_manager/set_tf
  • /static_transforms_manager/stop_tf

set_tf takes a single argument geometry_msgs/TransformStamped transform, and returns a debug string response and a bool success. The supplied transform is broadcast at 30Hz.

stop_tf takes a single argument string child_frame_id which is the child frame to stop broadcasting.
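As an illustration, set_tf could be called from Python roughly as below. The service type name SetTransformation used here is hypothetical (look up the actual .srv definition in the package), and the response field names follow the description above.

import rospy
from geometry_msgs.msg import TransformStamped
# NOTE: the srv type name is hypothetical -- replace it with the type actually
# defined in the static_transform_manager package.
from static_transform_manager.srv import SetTransformation

rospy.init_node('static_tf_example')

tf_msg = TransformStamped()
tf_msg.header.stamp = rospy.get_rostime()
tf_msg.header.frame_id = 'map'
tf_msg.child_frame_id = 'my_static_frame'
tf_msg.transform.translation.x = 1.0
tf_msg.transform.rotation.w = 1.0  # identity rotation

rospy.wait_for_service('/static_transforms_manager/set_tf')
set_tf = rospy.ServiceProxy('/static_transforms_manager/set_tf', SetTransformation)
result = set_tf(tf_msg)
rospy.loginfo('success: %s (%s)', result.success, result.response)  # field names assumed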

Example

rosrun static_transform_manager rviz_click_to_tf.py will listen to points clicked in rviz and turn them into a transformation in the TF tree.

Original page: https://github.com/strands-project/strands_apps/blob/indigo-devel/static_transform_manager/README.md

watchdog_node

This package provides a ‘watchdog’ node that monitors for specified conditions, and takes a specified action if any condition becomes true.

Usage

rosrun watchdog_node watchdog_node _config:=/path/to/config.yaml

where config.yaml is the configuration specifying a list of watchdogs in terms of ‘monitors’ and ‘actions’:

- name: check_battery
  description: Monitors the battery level.
  restart_timeout: 30
  monitors:
      - monitor_type: TopicFieldCondition
        topic: /battery_state/lifePercent
        condition: "{value} < 97"
  actions:
      - action_type: SendEmail
        to_addresses:  [cburbridge@gmail.com]
        from_address: cburbridge@gmail.com
        message: "Warning: robot battery is critically low."
        server: mail-relay.cs.bham.ac.uk
        port: 25
- name: check_scitos_node
  description: Checks the /odom topic, kills scitos node if not published for 20s.
  restart_timeout: 30
  monitors:
      - monitor_type: TopicAlive
        topic: /odom
        max_duration: 20
  actions:
      - action_type: KillNode
        node_name: /scitos_mira

Each watchdog must have the following fields:

  • name: any name chosen by the user
  • description: some hopefully useful description string
  • restart_timeout: how many seconds to wait after this watchdog has fired before restarting the watchdog. If the value is -1 then the watchdog will stop running after it has been fired.
  • monitors: a list of monitors that make up the watchdog. If any one of the monitors fires then the watchdog will fire and stop, restarting after restart_timeout.
  • actions: a list of actions to take when this watchdog fires.

Monitors

The following monitors are available:

- type:  TopicPublished
  description:  This monitor triggers the actions if a message is published on a given topic
  configuration fields:
    -  topic  :  The topic to listen to

- type:  TopicAlive
  description:  This monitor triggers the actions if there are no messages on the given topic for a given period
  configuration fields:
    -  topic  :  The topic to monitor
    -  max_duration  :  The maximum number of seconds to accept not receiving a message

- type:  TopicFieldCondition
  description:  This monitor checks a field in a given topic and triggers the actions when a condition is met.
  configuration fields:
    -  topic  :  The topic & field to watch, eg /topic/field
    -  condition  :  A python expression containing {value} as a place holder for the value of the topic field. For example '{value} < 30'

Actions

The following actions are available:

- type:  KillNode
  description:  Sends a specified node a kill signal.
  configuration fields:
    -  node_name  :  The rosgraph name of the node to kill.

- type:  SendEmail
  description:  Sends an email using simple unsecure SMTP.
  configuration fields:
    -  to_addresses  :  A list of email addresses to send to.
    -  from_address  :  The email address to be sent from.
    -  message  :  The message body of the email.
    -  server  :  The SMTP server.
    -  port  :  The port the SMTP server uses.

Developing new actions/monitors

Monitors

Monitors are classes that derive from the base class watchdog_node.MonitorType. They must provide name, description and config_keys fields at the class level and implement a start() and stop() method. start() will be called when the monitor is started or restarted following the firing of the watchdog. stop() will be called when the watchdog fires or is shut down.

In order to signal that the watchdog should fire, the class should call self.set_invalid().

TopicPublished provides a straightforward example.

Note that all modules defining monitors need to be imported in monitors/__init__.py
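As a rough sketch of this pattern (the config_keys format and the way configured values reach the instance are assumptions based on the description above, not taken from the package source), a custom monitor might look like:

import rospy
import rosnode
from watchdog_node import MonitorType  # base class described above

class NodeCountLow(MonitorType):
    """ Hypothetical monitor that fires if too few ROS nodes are running. """
    name = 'NodeCountLow'
    description = 'Fires when fewer than min_nodes ROS nodes are alive.'
    config_keys = ['min_nodes']  # format assumed

    def start(self):
        # Assumes configured values are made available as attributes by the framework.
        self._timer = rospy.Timer(rospy.Duration(5), self._check)

    def stop(self):
        self._timer.shutdown()

    def _check(self, event):
        if len(rosnode.get_node_names()) < int(self.min_nodes):
            self.set_invalid()  # signal that the watchdog should fire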

Actions

Actions are classes that derive from the base class watchdog_node.ActionType. Similar to monitors, they must provide name, description and config_keys fields at the class level. They must also provide an execute(self) method that will be called when the watchdog fires.

KillNode provides a straightforward example.

Note that all modules defining actions need to be imported in actions/__init__.py
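Similarly, a custom action following this description might be sketched as below; again, how the configured value reaches the instance is an assumption.

import subprocess
from watchdog_node import ActionType  # base class described above

class RestartService(ActionType):
    """ Hypothetical action that restarts a system service when the watchdog fires. """
    name = 'RestartService'
    description = 'Restarts the configured system service.'
    config_keys = ['service_name']  # format assumed

    def execute(self):
        # Assumes the configured value is available as an attribute.
        subprocess.call(['sudo', 'service', self.service_name, 'restart'])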

Original page: https://github.com/strands-project/strands_apps/blob/indigo-devel/watchdog_node/README.md

STRANDS data tools that work with QSRlib

This package aims to provide easy parsing of various datasets, mainly keeping them in a QSRlib-friendly format.

It might end up becoming part of QSRlib given its functionality and strong dependency on it.

There are usually two types of files:

  • readers: responsible for reading from the raw data or from saved files and keeping the raw data (and other info as needed) in some QSRlib-friendly format.
  • keepers: responsible for taking a reader and making QSRs via QSRlib. They usually have the option to load the QSRs directly from some file.

Available tools for the following datasets

Usage and Help

Installation

You need to have QSRlib installed somewhere where it can be found by the readers and the keepers. The easiest way is probably to modify your PYTHONPATH, or if you are using an IDE then check its documentation on how to resolve dependencies.

CAD120 reader

cad120_data_reader.py provides the class CAD120_Data_Reader. In most cases it is enough to just call the constructor without any of the optional arguments, and if you have a suitable config.ini then things should go smoothly.

About the config.ini

Create a config.ini based on the following template, which tells the reader where the CAD_120 folder is. If the notion of corrected labeling.txt files makes no sense to you, then just use the same path for both <path1> and <path2>.

[cad120_data_reader]
path = <path1>/CAD_120
corrected_labeling_path = <path2>/CAD_120
; use load_from_files=True in the constructor to load from the following files
sub_sequences_filename = <pickle_file.p>
sub_time_segmentation_filename = <pickle_file.p>
ground_truth_tracks_filename = <pickle_file.p>

Just make sure that your program can find your config.ini. If you are not familiar how to do this then an easy way is to pass the directory of config.ini in the constructor, e.g.:

reader = CAD120_Data_Reader(config_path=<path string to config.ini>)
CAD120 keeper

cad120_qsr_keeper.py provides the class CAD120_QSR_Keeper. If you want to make QSRs from the reader then you need to pass some parameters; see the main part for an example, and you will also need a local.ini file. If you want to load the QSRs from a file, simply call it with the -l argument.

Running cad120_qsr_keeper.py -h will also give sufficient help.

About the local.ini

[local]
reader_ini = <reader config.ini file>
reader_load = true

Set reader_load to true if you want the reader to load the data from the files given in the config.ini.

Original page: https://github.com/strands-project/strands_data_to_qsrlib/blob/master/README.md

STRANDS data tools that work with QSRlib

Trajectory reader

traj_data_reader.py provides the class Trajectory_Data_Reader. The config.ini needs to include QSR options. It can optionally contain Activity Graph options.

About the config.ini

Create a config.ini based on the following template:

[trajectory_data_reader]
path = <path1>
; use load_from_files=True in the constructor to load from the following files
date = date
qsr = qtcb
q = 0_01
v = False
n = True

You can add any parameters you like in the config file, but you also need to initialise them in the data_reader config.

Original page: https://github.com/strands-project/strands_data_to_qsrlib/blob/master/src/novelTrajectories/README.md

The STRANDS Executive Framework – Tutorial

This tutorial was originally written for the STRANDS Project Long-term Autonomy for Mobile Robots Summer School 2015.

Preliminaries

In addition to assuming a working knowledge of ROS, this tutorial also assumes you are comfortable working with:

This tutorial also assumes you have the strands_executive software installed. You should have done this when preparing for the summer school. If not, follow the installation instructions.

Tasks

The executive framework schedules and manages the execution of tasks. An instance of a task object describes the desired execution of an actionlib action server, usually within a time window and at a given robot location. In the first part of this tutorial you will create instances of tasks and manage their execution via the executive framework.

Note, whilst any actionlib server can be used with the execution framework, it is important that the server correctly responds to preemption requests.

The Wait Action

To save the work of implementing an action server, we will work with one which is provided by the execution framework: the wait action. This action server simply waits either until a particular time or for a given duration before returning. The definition of the action is as follows:

# the time to wait until
time wait_until
# if wait until is 0 then try waiting for this long, if this is 0 too, wait until explicitly woken or cancelled
duration wait_duration
---
---
# feedback message
float32 percent_complete

The server has three behaviours. If wait_until is specified, then the server will wait until this time before returning. If wait_until is not specified (i.e. it is a default time instance, one with 0 in both fields) then wait_duration is used to determine how long the server should wait before returning. If this is also unspecified (i.e. it is a default duration instance, one with 0 in both fields) then the server simply waits until the Empty service /wait_action/end_wait is called and then returns.

To see this in action, start the wait server as follows (assuming roscore is already running):

rosrun wait_action wait_node.py

You can then use the actionlib GUI client to explore different values:

rosrun actionlib axclient.py /wait_node

The wait_duration argument is self explanatory. The wait_until argument is seconds since epoch in UTC (which is one hour behind BST if you’re doing this at the summer school).
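Before involving the executive framework, you can also call the wait action directly with a standard actionlib client. The snippet below is a sketch: it assumes the action type is wait_action.msg.WaitAction (with the goal fields defined above) and that the server namespace matches the /wait_action prefix used for task.action later in this tutorial.

import rospy
import actionlib
from wait_action.msg import WaitAction, WaitGoal  # type names assumed

rospy.init_node('wait_action_client')
client = actionlib.SimpleActionClient('/wait_action', WaitAction)
client.wait_for_server()
# A zero wait_until means the wait_duration below is used instead.
goal = WaitGoal(wait_until=rospy.Time(), wait_duration=rospy.Duration(10))
client.send_goal(goal)
client.wait_for_result()
rospy.loginfo('wait finished with state %s', client.get_state())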

Task Specification

Now we have an action server we’d like to execute, we can write a task to have its execution managed by the executive framework. To create a task, first create an instance of the Task message type. Examples are given in Python, as the helper functions currently only exist for Python, but C++ is also possible (and C++ helpers will be added if someone asks for them in the future).

#!/usr/bin/env python

import rospy

from strands_executive_msgs.msg import Task
task = Task()

To specify that this task should trigger the wait action, set the action field to the topic prefix of the action server (i.e. the string you use in the action server constructor, or the one you used as the last argument to axclient previously):

task.action = '/wait_action'

You must also set the maximum length of time you expect your task to execute for. This is used by the execution framework to determine whether your task is executing correctly, and by the scheduler to work out execution times. The duration is a rospy.Duration instance and is defined in seconds. The max_duration field is not connected with any argument to the action server itself.

max_wait_secs = 60 * 60  # one hour
task.max_duration = rospy.Duration(max_wait_secs)

As our action server requires arguments (i.e. wait_until or wait_duration), we must add these to the task too. Arguments must be added in the order they are defined in your action message. Arguments are added to the task using the helper functions from strands_executive_msgs.task_utils. For our wait_action, this means we must add a value for wait_until then a value for wait_duration (as this is the order defined in the action definition included above). The following code specifies an action that waits for 10 seconds.

from strands_executive_msgs import task_utils

task_utils.add_time_argument(task, rospy.Time())
task_utils.add_duration_argument(task, rospy.Duration(10))

Tasks can be assigned any argument type that is required by the action server. This is done via the mongodb_store and is explained in more detail here.

Task Execution

We have now specified enough information to allow the task to be executed. For this to happen we must do three things. First, we must start up the execution system. The strands_executive_tutorial package contains a launch file which runs everything you need to get started: mongodb_store; a dummy topological navigation system (we will replace this with the robot’s navigation system later); and the executive framework itself. Run this in a separate terminal as follows:

roslaunch strands_executive_tutorial tutorial_dependencies.launch

Second, we must tell the execution system that it can start executing tasks. This is done using the SetExecutionStatus service, which can be used to pause and resume execution at runtime. The following code provides functions which return the correct ROS service, after waiting for it to exist.

from strands_executive_msgs.srv import AddTasks, DemandTask, SetExecutionStatus

def get_service(service_name, service_type):
    rospy.loginfo('Waiting for %s service...' % service_name)
    rospy.wait_for_service(service_name)
    rospy.loginfo("Done")
    return rospy.ServiceProxy(service_name, service_type)

def get_execution_status_service():
    return get_service('/task_executor/set_execution_status', SetExecutionStatus)

The resulting service can then be used to enable execution:

set_execution_status = get_execution_status_service()
set_execution_status(True)

Note that this only needs to be done once after the execution system has been started. If you don’t set it to True then tasks can be added but nothing will happen. Once it has been set to True, it won’t be set to False unless the service is called again (which will pause execution), or the task executor is shut down and started again.

Finally, we can request the immediate execution of the task using the DemandTask service. This service interrupts any task that is currently executing (provided the associated action server supports pre-emption) and starts the execution of the specified one. The following code demonstrates this.

def get_demand_task_service():
    return get_service('/task_executor/demand_task', DemandTask)

demand_task = get_demand_task_service()
demand_task(task)

If you run all of the code provided so far (structured in an appropriate way), you should see some output like the following from the launch file you previously started:

[INFO] [WallTime: 1439285826.115682] State machine starting in initial state 'TASK_INITIALISATION' with userdata:
    ['task']
[INFO] [WallTime: 1439285826.118216] State machine transitioning 'TASK_INITIALISATION':'succeeded'-->'TASK_EXECUTION'
[INFO] [WallTime: 1439285826.118692] Concurrence starting with userdata:
    ['task']
[INFO] [WallTime: 1439285826.118842] Action /wait_action started for task 1
[WARN] [WallTime: 1439285826.121560] Still waiting for action server '/wait_action' to start... is it running?
[INFO] [WallTime: 1439285826.155626] Connected to action server '/wait_action'.
[INFO] [WallTime: 1439285826.192815] target wait time: 2015-08-11 10:37:16
[INFO] [WallTime: 1439285836.196404] waited until: 2015-08-11 10:37:16
[INFO] [WallTime: 1439285836.207261] Concurrent state 'MONITORED' returned outcome 'succeeded' on termination.
[INFO] [WallTime: 1439285837.166333] Concurrent state 'MONITORING' returned outcome 'preempted' on termination.
[INFO] [WallTime: 1439285837.181308] Concurrent Outcomes: {'MONITORED': 'succeeded', 'MONITORING': 'preempted'}
[INFO] [WallTime: 1439285837.181774] Action terminated with outcome succeeded
[INFO] [WallTime: 1439285837.186778] State machine transitioning 'TASK_EXECUTION':'succeeded'-->'TASK_SUCCEEDED'
[INFO] [WallTime: 1439285837.190386] Execution of task 1 succeeded
[INFO] [WallTime: 1439285837.190591] State machine terminating 'TASK_SUCCEEDED':'succeeded':'succeeded'

Most of this is output from the task execution node as it steps through the finite state machine used for task execution. The following lines show the task is working as we want:

[INFO] [WallTime: 1439285826.192815] target wait time: 2015-08-11 10:37:16
[INFO] [WallTime: 1439285836.196404] waited until: 2015-08-11 10:37:16
Task Scheduling

We can now extend our task specification to include information about when the task should be executed. This is an essential part of our long-term autonomy approach, which requires behaviours to happen at times specified either by a user or by the robot’s own subsystems. Each task should be given a time window during which task execution should occur. The task scheduler which is part of the executive framework sequences all the tasks it is managing to ensure that the time window of every task is respected. If this is not possible then the most recently added tasks are dropped (unless you are using priorities, in which case lower priority tasks are dropped until a schedule can be found).

The opening time of the task’s execution window is specified using the start_after field. The code below sets the start window to ten seconds into the future.

task.start_after = rospy.get_rostime() + rospy.Duration(10)

The execution window should provide enough time to execute the task, but should also offer some slack as other tasks added to the system may have to be executed first. The closing time of the execution window is specified using the end_before field. The code below sets the end of the window such that the window is three times the maximum duration of the task.

task.end_before = task.start_after + rospy.Duration(task.max_duration.to_sec() * 3)

Previously we demanded task execution using the DemandTask service. This ignores the time window of the task and executes it immediately. To respect the time window we must use the service /task_executor/add_tasks of type AddTasks.

def get_add_tasks_service():
    return get_service('/task_executor/add_tasks', AddTasks)

This adds a list of tasks to the executive framework, which in turn triggers scheduling and updates the robot’s execution schedule (assuming execution status has already been set to True).

add_tasks = get_add_tasks_service()
add_tasks([task])

When this script is executed, rather than the immediate execution of your task as previously, you should see the executor delay for some time (not quite ten seconds, as there is an assumption of a minimum travel time before task execution) before executing the wait action. The output should be similar to that shown below, which shows the scheduler creating a schedule (which only contains the wait task) and then the executor delaying execution until the start window of the task is open.

[ INFO] [1439548004.349319000]: SCHEDULER: Going to solve
[ INFO] [1439548004.397583000]: SCHEDULER: found a solution
[INFO] [WallTime: 1439548004.398210] start window: 11:26:53.412277
[INFO] [WallTime: 1439548004.398465]          now: 11:26:44.398102
[INFO] [WallTime: 1439548004.398639]  travel time:  0:00:02
[INFO] [WallTime: 1439548004.398825] need to delay 7.14174938 for execution
[INFO] [WallTime: 1439548004.399264] Added 1 tasks into the schedule to get total of 1
Execution Information

You can use the following sources of information to inspect the current state of the executive framework.

The current execution status can be obtained using the service GetExecutionStatus on /task_executor/get_execution_status. A return value of true means the execution system is running, whereas false means that the execution system has either not been started or it has been paused.

To see the execution schedule, subscribe to the topic /current_schedule which gets the list of tasks in execution order. If currently_executing is true then this means the first element of execution_queue is the currently active task. If it is false then the system is delaying until it starts executing that task.

To just get the currently active task, use the service strands_executive_msgs/GetActiveTask on /task_executor/get_active_task. If the returned task has a task_id of 0 then there is no active task (as you can’t return None over a service).
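For example, the two services above can be queried from Python as in the sketch below; the srv types are assumed to live in strands_executive_msgs.srv, and the response field names (status, task) are inferred from the descriptions above rather than taken from the definitions.

import rospy
from strands_executive_msgs.srv import GetExecutionStatus, GetActiveTask

rospy.init_node('executive_inspector')

rospy.wait_for_service('/task_executor/get_execution_status')
get_execution_status = rospy.ServiceProxy('/task_executor/get_execution_status',
                                          GetExecutionStatus)
rospy.loginfo('execution running: %s', get_execution_status().status)  # field name assumed

rospy.wait_for_service('/task_executor/get_active_task')
get_active_task = rospy.ServiceProxy('/task_executor/get_active_task', GetActiveTask)
task = get_active_task().task  # field name assumed
if task.task_id == 0:
    rospy.loginfo('no active task')
else:
    rospy.loginfo('active task %s is executing %s', task.task_id, task.action)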

To get print outs in the terminal describing the operation of the executive framework, you can use the following scripts:

rosrun task_executor schedule_status.py

schedule_status.py prints out the current schedule and execution information. For example, for the above wait action, you might see something like

[INFO] [WallTime: 1439549489.280268] Waiting to execute /wait_action (task 3)
[INFO] [WallTime: 1439549489.280547] Execution to start at 2015-08-14 11:51:39
[INFO] [WallTime: 1439549489.280785] A further 0 tasks queued for execution
[INFO] [WallTime: 1439549494.295445]
rosrun task_executor task_status.py

task_status.py prints out the events that occur internally in the framework. For one of our wait tasks, you might see the following:

task 3    /wait_action    ADDED   14/08/15 11:51:29
task 3    /wait_action    TASK_STARTED    14/08/15 11:51:37
task 3    /wait_action    EXECUTION_STARTED   14/08/15 11:51:37
task 3    /wait_action    EXECUTION_SUCCEEDED   14/08/15 11:51:47
task 3    /wait_action    TASK_SUCCEEDED    14/08/15 11:51:47

You can also subscribe to these events on the topic /task_executor/events.

Exercise 1
  1. Given what you know so far, create a script which schedules a configurable number of wait action tasks. All tasks should have the same duration and time window, but you should make sure that the time window is created such that it allows all the tasks to be executed. Use schedule_status.py and task_status.py to monitor the progress of execution and debug your script if necessary.
  2. Optional. Create an action server which prints out all the details of a task, then uses this in place of the wait action for 1(a). See the task documentation for how to add user-defined message types as task arguments.
Tasks with Locations

So far we have ignored the robot entirely. Now we will extend our task to feature the location of the robot when the task is performed. Each task can optionally have a start location and an end location. Locations are names of nodes in the robot’s topological map (which you should have worked with yesterday). If the start location is provided, the task executive drives the robot to this location before starting the execution of the task action. If navigation fails, the task will not be executed. The end location describes the location the executive system expects the robot to be at when the task execution is completed. If left blank, this is assumed to be the same as the start location. The start and end locations are used by the scheduler to determine what order to execute the tasks.

For this part of the tutorial, we will work with the UOL MHT simulation and the associated topological map. From here on the examples assume you are running the correct mongodb store, simulation and navigation launch files for this simulated setup.

Since you now have a simulated robot running (see note above), you should not use the tutorial_dependencies launch file. Instead, launch the executive framework as follows:

roslaunch task_executor task-scheduler.launch scheduler_version:=1

The scheduler_version:=1 parameter ensures the optimal version of the scheduler is used (the original Coltin et al. algorithm). This version produces optimal schedules (including the travel time between locations), although it starts to take a long time to produce schedules once you go above 20 or so tasks in a single scheduling problem. You can omit this parameter to run the non-optimal version which uses preprocessing to produce non-optimal schedules quickly.

To configure a task’s location, set the following fields in the task object:

task.start_node_id = 'Waypoint3'
task.end_node_id = 'Waypoint3'

Make sure you add the above lines to your existing code before the task is sent to the scheduler. You can then use the add_tasks service as before, and you should see the robot drive to Waypoint3 before executing a wait action (assuming you’re extending the code from above).

Note that now that the robot must travel between locations, you need to make sure your time windows for execution are updated appropriately. The time window specifies the time in which the action is executed, not the navigation time. However, whereas you could previously easily fit two 10 second wait tasks into a 30 second time window (if you gave them both the same window), if it takes 120 seconds to travel between the two locations then the scheduler will not be able to include them both in the same schedule, as the travel time between them will place one task outside its time window.
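For example, reusing the helpers defined earlier in this tutorial, two wait tasks at different waypoints can share one generous window that leaves room for travel time (the waypoint names below are assumed to exist in the simulated topological map):

# Sketch: two wait tasks at different waypoints sharing one generous time window.
def make_wait_task(node_id, start_after, window):
    task = Task()
    task.action = '/wait_action'
    task.max_duration = rospy.Duration(10)
    task_utils.add_time_argument(task, rospy.Time())
    task_utils.add_duration_argument(task, rospy.Duration(10))
    task.start_node_id = node_id
    task.end_node_id = node_id
    task.start_after = start_after
    task.end_before = start_after + window
    return task

start_after = rospy.get_rostime() + rospy.Duration(10)
window = rospy.Duration(10 * 60)  # plenty of slack for travel between waypoints
tasks = [make_wait_task('Waypoint2', start_after, window),
         make_wait_task('Waypoint3', start_after, window)]

add_tasks = get_add_tasks_service()
add_tasks(tasks)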

Exercise 2
  1. Extend your previous multi-task code to include locations in the tasks, with the robot travelling to at least three different locations.
Changes to Execution

You now know how to have a list of tasks scheduled and executed. If a task is demanded using the DemandTask service (as in the first part of this tutorial), the currently executing task is interrupted and the demanded one is executed in its place. In parallel, a new schedule is created which contains the remaining scheduled tasks and, if possible (i.e. if the time windows can be respected), the interrupted one. In all cases, if not all tasks can be scheduled, some are dropped until a valid schedule can be produced.

You can also pause the execution of the current task by sending a value of false to the SetExecutionStatus service. This will pause the task execution in the current state it is in, either navigation or action execution, until a value of true is sent to the service.

If you have written a task which should not be interrupted by these methods, you can create a service which informs the executive whether or not interruption is possible. The instructions for this are here.

Exercise 3
  1. Use your previous code to start the simulated robot executing scheduled tasks. Once it is executing, experiment with setting the execution status and demand tasks. Monitor the effects of these interactions using
rosrun task_executor schedule_status.py

You can alter the execution status from the command line using

rosservice call /task_executor/set_execution_status "status: true"
Routines

For long-term operation, we often want the robot to execute tasks according to some routine, e.g. visit all the rooms in the building between 9am and 11am. The STRANDS executive framework supports the creation of such routines through two mechanisms. The DailyRoutine and DailyRoutineRunner classes (described here) support the creation of routines of daily repeating tasks. The RobotRoutine class (described here) builds on these objects to provide managed routine behaviour for a robot. This class also manages the battery level of the robot, provides a hook for when the robot is idle (i.e. it has no tasks to execute in the near future), and also manages the difference between day and night for the robot (at night the robot docks on a charger and can then only execute tasks without movement).

The PatrolRoutine class provides an example subclass of RobotRoutine which simply generates a patrol behaviour which creates tasks to wait at every node in the topological map. You can see this in operation by running:

rosrun routine_behaviours patroller_routine_node.py

With this node running you should see that the robot creates a schedule to visit all the nodes, or it fails to create one (if the time windows are not satisfiable) and instead visits a random node when idle.

Before going further, we can use the code from this node to understand the basic properties of a routine.

import rospy

from datetime import time
from dateutil.tz import tzlocal

from routine_behaviours.patrol_routine import PatrolRoutine


if __name__ == '__main__':
    rospy.init_node("patroller_routine")

    # start and end times -- all times should be in local timezone
    localtz = tzlocal()
    start = time(8,00, tzinfo=localtz)
    end = time(20,00, tzinfo=localtz)

    idle_duration=rospy.Duration(20)

    routine = PatrolRoutine(daily_start=start, daily_end=end, idle_duration=idle_duration)

    routine.create_routine()

    routine.start_routine()

    rospy.spin()

Looking at the code in more detail:

start = time(8,00, tzinfo=localtz)
end = time(20,00, tzinfo=localtz)

The above code defines the working day of the robot. No task execution will happen before start, and when end is reached, all execution will cease and the robot will dock for the night.

idle_duration=rospy.Duration(20)

The above code is used to configure how long the robot should not be executing a task before it considers itself idle.

routine = PatrolRoutine(daily_start=start, daily_end=end, idle_duration=idle_duration)

This creates the routine object, passing the defined values.

routine.create_routine()

routine.start_routine()

create_routine() runs code which specifies the tasks and populates the routine within the object. You can see the code for this here. Following this, start_routine triggers the execution of the populated routine from the previous step.

If you want to create a routine that performs tasks at all nodes in the map, or a selection of them, you can create a subclass of PatrolRoutine and override methods such as create_patrol_routine and create_patrol_task to create task-specific behaviour. For the final part of this tutorial we will eschew this approach, and instead create a simple routine from RobotRoutine. Use the following code as a structure for your routine.

#!/usr/bin/env python

import rospy

from routine_behaviours.robot_routine import RobotRoutine

from datetime import time, date, timedelta
from dateutil.tz import tzlocal

from strands_executive_msgs.msg import Task
from strands_executive_msgs import task_utils


class ExampleRoutine(RobotRoutine):
    """ Creates a routine which simply visits nodes. """

    def __init__(self, daily_start, daily_end, idle_duration=rospy.Duration(5), charging_point = 'ChargingPoint'):
        # super(PatrolRoutine, self).__init__(daily_start, daily_end)
        RobotRoutine.__init__(self, daily_start, daily_end, idle_duration=idle_duration, charging_point=charging_point)

    def create_routine(self):
        pass

    def on_idle(self):
        """
            Called when the routine is idle. Default is to trigger travel to the charging point. As idleness is determined by the current schedule, if this call doesn't ultimately cause a task schedule to be generated this will be called repeatedly.
        """
        rospy.loginfo('I am idle')



if __name__ == '__main__':

    rospy.init_node('tutorial_4')

    # start and end times -- all times should be in local timezone
    localtz = tzlocal()
    start = time(8,00, tzinfo=localtz)
    end = time(20,00, tzinfo=localtz)

    # how long to stand idle before doing something
    idle_duration=rospy.Duration(20)

    routine = ExampleRoutine(daily_start=start, daily_end=end, idle_duration=idle_duration)

    routine.create_routine()

    routine.start_routine()

    rospy.spin()

This is similar to the patroller_routine_node code we saw previously, except we now see the (currently empty) implementation of some parts of the routine class. If you run this (alongside the executive framework, simulation, navigation etc.) you should see something like the following:

[INFO] [WallTime: 1440522933.015355] Waiting for task_executor service...
[INFO] [WallTime: 1440522933.018245] Done
[INFO] [WallTime: 1440522933.023142] Fetching parameters from dynamic_reconfigure
[INFO] [WallTime: 1440522933.027528] Config set to 30, 10
[INFO] [WallTime: 1440522933.533364] Current day starts at 2015-08-25 08:00:00+01:00
[INFO] [WallTime: 1440522933.533773] Current day ends at 2015-08-25 20:00:00+01:00
[INFO] [WallTime: 1440522933.547498] extra_tasks_for_today
[INFO] [WallTime: 1440522933.547924] Scheduling 0 tasks now and 0 later
[INFO] [WallTime: 1440522933.548184] triggering day start cb at 2015-08-25 18:15:33.547396+01:00
[INFO] [WallTime: 1440522933.548409] Good morning
[INFO] [WallTime: 1440522933.554520] Scheduling 0 tasks now and 0 later
[INFO] [WallTime: 1440522958.590649] I am idle

The I am idle line should appear after 20 seconds, as configured by idle_duration. Nothing else should happen, as we have not yet added tasks.

To add tasks, we will fill in the create_routine method. In this method we will use the instance of DailyRoutine contained within RobotRoutine. Tasks are added using the repeat_every* methods from this object. These take a list of tasks and store them such that they can be correctly instantiated with start and end times every day. When creating tasks for a routine, you should not specify start_after or end_before as these will be determined by the routine itself. Let’s assume we have a function called wait_task_at_waypoint which creates a Task object to perform a 10 second wait at a given waypoint. We can then get the routine to schedule two such wait tasks to be performed every day.

def create_routine(self):
    daily_wps = ['WayPoint6', 'WayPoint3']
    daily_wp_tasks = [wait_task_at_waypoint(wp) for wp in daily_wps]
    self.routine.repeat_every_day(daily_wp_tasks)
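
The wait_task_at_waypoint helper is only assumed by this tutorial; a minimal sketch might look like the following (any goal arguments your wait action server needs, e.g. the wait duration itself, are omitted and would be added with task_utils):

import rospy
from strands_executive_msgs.msg import Task

def wait_task_at_waypoint(waypoint, wait_secs=10):
    # a short wait at the given waypoint; start_after/end_before are filled
    # in by the routine, so only the location and max duration are set here
    return Task(action='wait_action',
                start_node_id=waypoint, end_node_id=waypoint,
                max_duration=rospy.Duration(wait_secs))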

If you use this code in your previous script, you should see the robot execute the two tasks, then go idle. If you wait long enough (until the start of the next day!) the robot will repeat this execution. Rather than wait that long, let’s add some tasks with a shorter repeat rate into create_routine.

minute_wps = ['WayPoint5']
minute_wp_tasks = [wait_task_at_waypoint(wp) for wp in minute_wps]
self.routine.repeat_every_delta(minute_wp_tasks, delta=timedelta(minutes=5))

This causes a task to wait at the given waypoint to be scheduled at five minute intervals throughout the day. These tasks will be scheduled alongside the ones from the daily_wp_tasks (provided you didn’t remove the code). repeat_every_delta is the most flexible of the routine creation tools, but you can also use other methods from `DailyRoutine <http://strands-project.github.io/strands_executive/task_executor/html/classtask__executor_1_1task__routine_1_1DailyRoutine.html>`__.
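
One such method is repeat_every, which takes an explicit start and end time so that a task only repeats within part of the day. A sketch, borrowing the positional signature from the DailyRoutine example later in this document (the exact argument names, e.g. the start_time mentioned in Exercise 4, may differ):

from datetime import time
from dateutil.tz import tzlocal

localtz = tzlocal()
morning = (time(9, 0, tzinfo=localtz), time(12, 0, tzinfo=localtz))

# one wait task, repeated once every morning
self.routine.repeat_every(wait_task_at_waypoint('WayPoint4'), *morning)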

Exercise 4
  1. Create a routine that mixes tasks with hourly and daily repeats.
  2. Add tasks that either happen in the morning or afternoon. You will need to use the start_time argument to repeat_every.
  3. Once you have all of this working in simulation, move your code to your group’s robot. Create a routine to patrol the nodes in your topological map. Extend this code to create meta-room maps at certain waypoints at regular intervals. You can also try creating your own tasks/action servers, or scheduling the execution of existing ones (speech, PTU movement, topological edge traversal etc.).

Google Calendar Routine

This package reads dates from a Google Calendar and submits them to the scheduling service via the `AddTasks <https://github.com/strands-project/strands_executive/blob/hydro-devel/strands_executive_msgs/srv/AddTasks.srv>`__ service.

This requires the `task_scheduler <https://github.com/strands-project/strands_executive#running-scheduled-patrols>`__ to be running, including its requirements.

Start it with rosrun gcal_routine gcal_routine_node.py. You may have to also enable task execution via rosservice call /task_executor/set_execution_status "status: true"

The gcal_routine submits tasks to the scheduler that it queries from Google Calendar using an API key via the official Google REST API. Google Calendar resolves all recurrences, so this node gets a “linearised” view of events. By default, the title of the event on GCal describes the action and the Where field describes the start and end place id. Further arguments of the `Task <https://github.com/strands-project/strands_executive/blob/hydro-devel/strands_executive_msgs/msg/Task.msg>`__ structure can be set as YAML in the description of the event. In addition, to make the user experience easier, tasks can be configured via a predefined template YAML file like this:

{
    'wait': {
        'action': 'wait_action',
        'max_duration': {secs: 300, nsecs: 0},
    },
    'patrol': {
        'action': 'wait_action',
        'max_duration': {secs: 30, nsecs: 0}
    }
}

This allows pre-defined values to be set depending on the action entered in the title of the event, e.g. if wait is entered, then the attributes of the `Task <https://github.com/strands-project/strands_executive/blob/hydro-devel/strands_executive_msgs/msg/Task.msg>`__ structure are automatically populated from the template file.

The following parameters are recognised:

  • ~calendar: The ID of the google calendar, defaults to henry.strands%40hanheide.net (find the correct calendar ID in the calendar settings on Google Calendar)
  • ~key: The private access key to gain access to GCal, defaults to AIzaSyC1rqV2yecWwV0eLgmoQH7m7PdLNX1p6a0 (This is the App key for the henry.strands@hanheide.net account. You may want to use other accounts.)
  • ~available_tasks_conf_file: The path to a yaml file providing the template as shown above
  • ~pre_start_window_min: Number of minutes that tasks are submitted to the scheduler prior to their start_after time. (default: 15 minutes)
  • ~routine_update_wait_sec: seconds to sleep between checks for new tasks that are ready to be sent to the scheduler (default: 10s)
  • ~gcal_poll_wait_sec: seconds between polls to the Google calendar. Default is 60s (i.e. 1 poll per minute). You don’t want this to be too small or Google will ban the account due to high bandwidth. Once per minute is a good start.
  • ~priority: Priority to assign to task. Default is 2.

All events in a Google calendar are translated into UTC time for the generation of tasks. So it should work in any time zone.

Here is an example call:

rosrun  gcal_routine gcal_routine_node.py _stand_alone_test:=true _calendar:="hanheide.net_6hgulf44ij7ctjrf2iscj0m24o@group.calendar.google.com" _pre_start_window_min:=30 _gcal_poll_wait_sec:=10 _routine_update_wait_sec:=1

Original page: https://github.com/strands-project/strands_executive/blob/indigo-devel/gcal_routine/README.md

The STRANDS Executive Framework

An executive framework for a mobile robot. The basic unit of behaviour for the framework is a task as defined by strands_executive_msgs/Task. The framework triggers and manages the execution of tasks plus navigation between them. The framework includes a task scheduler to optimise the execution time of a list of tasks.

Installation

Binary

If you are using Ubuntu, the easiest way to install the executive framework is to add the STRANDS apt releases repository. You can then run sudo apt-get install ros-indigo-task-executor. This will install the framework and its dependencies alongside your existing ROS install under /opt/ros/indigo.

Source

To compile from source you should clone this repository into your catkin workspace and compile as normal. For dependencies you will also need (at least) the following repositories: strands_navigation and mongodb_store. Source installs have been tested on Ubuntu 12.04, 14.04 and OS X.

Runtime Dependencies

For the executive framework to function correctly, you must have the mongodb_store nodes running. These are used by the framework to store tasks with arbitrary arguments.

roslaunch mongodb_store mongodb_store.launch

or, specifying the path where the database should be stored:

roslaunch mongodb_store mongodb_store.launch db_path:=/...

Currently the framework abstracts over navigation actions using the STRANDS topological navigation framework. Therefore you must have this framework running. For testing, or if you’re not running the full topological navigation system, you can run a simple simulated topological system with the following command:

roslaunch topological_utils dummy_topological_navigation.launch

This produces a map with 9 nodes: ChargingPoint in the centre, with v_-2, v_-1 to v_2 running vertically and h_-2 to h_2 running horizontally, joining ChargingPoint in the middle.

Running the executive framework

To start the executive framework, launch the following launch file.

roslaunch task_executor task-scheduler-top.launch

This launches using topological navigation for moving the robot. If you wish to use the MDP execution (which has additional runtime dependencies) replace top with mdp in the launch file name.

Running scheduled patrols

To test the executive framework you can try running the robot around the topological map.

Start the executive framework:

roslaunch task_executor task-scheduler.launch

With this up and running you can start the robot running continuous patrols using:

rosrun task_executor continuous_patrolling.py

If this runs successfully, then your basic system is up and running safely.

Tasks

This executive framework schedules and manages the execution of tasks. A task maps directly to the execution of an actionlib action server, allowing you to reuse or integrate your desired robot behaviours in a widely used ROS framework.

Most task instances will contain both the name of a topological map node where the task should be executed, plus the name of a SimpleActionServer to be called at the node and its associated arguments. Tasks must contain one of these, but not necessarily both.

To create a task, first create an instance of the Task message type. Examples are given in Python, as the helper functions currently only exist for Python, but C++ is also possible (and C++ helpers will be added if someone asks for them).

from strands_executive_msgs.msg import Task
task = Task()

Then you can set the node id for where the task will be executed (or you can do this inline in the constructor):

task.start_node_id = 'WayPoint1'

If you don’t add a start node id then the task will be executed wherever the robot is located when it starts executing the task. If your task will end at a different location than it starts you can also specify end_node_id. This allows the scheduler to make better estimates of travel time between tasks.

To add the name of the action server, do:

task.action = 'do_dishes'

Where ‘do_dishes’ is replaced by the action name argument you would give to the actionlib.SimpleActionClient constructor. If you do not specify an action, the executive will assume the task is to simply visit the location indicated by start_node_id.

You must also set the maximum length of time you expect your task to execute for. This is used by the execution framework to determine whether your task is executing correctly, and by the scheduler to work out execution times. The duration is a rospy.Duration instance and is defined in seconds.

# wash dishes for an hour
dishes_duration = 60 * 60
task.max_duration = rospy.Duration(dishes_duration)

You can also specify the time window during which the task should be executed.

# don't start the task until 10 minutes in the future
task.start_after = rospy.get_rostime() + rospy.Duration(10 * 60)
# and give a window of three times the max execution time in which to execute
task.end_before = task.start_after + rospy.Duration(task.max_duration.to_sec() * 3)

If the goal of the actionlib server related to your task needs arguments, you must then add them to the task in the order they are used in your goal type constructor. Arguments are added to the task using the provided helper functions from strands_executive_msgs.task_utils. For example, for the following action which is available under task_executor/action/TestExecution.action, you need to supply a string argument followed by a pose, then an int then a float.

# something to print
string some_goal_string
# something to test typing
geometry_msgs/Pose test_pose
# something for numbers
int32 an_int
float32 a_float
---
---
# feedback message
float32 percent_complete

To add the string, do the following

from strands_executive_msgs import task_utils
task_utils.add_string_argument(task, 'my string argument goes here')

For the pose, this must be added to the mongodb_store message store and then the ObjectID of the pose is used to communicate its location. This is done as follows

from geometry_msgs.msg import Pose
from mongodb_store.message_store import MessageStoreProxy
msg_store = MessageStoreProxy()

p = Pose()
object_id = msg_store.insert(p)
task_utils.add_object_id_argument(task, object_id, Pose)

Ints and floats can be added as follows

task_utils.add_int_argument(task, 24)
task_utils.add_float_argument(task, 63.678)
Adding a Task

Tasks can be added to the task executor for future execution via the add_tasks service. These tasks are queued or scheduled for execution, and may not be executed immediately.

add_tasks_srv_name = '/task_executor/add_tasks'
set_exe_stat_srv_name = '/task_executor/set_execution_status'
rospy.wait_for_service(add_tasks_srv_name)
rospy.wait_for_service(set_exe_stat_srv_name)
add_tasks_srv = rospy.ServiceProxy(add_tasks_srv_name, strands_executive_msgs.srv.AddTasks)
set_execution_status = rospy.ServiceProxy(set_exe_stat_srv_name, strands_executive_msgs.srv.SetExecutionStatus)

try:
    # add task to the execution framework
    task_id = add_tasks_srv([task])
    # make sure the executive is running -- this only needs to be done once for the whole system not for every task
    set_execution_status(True)
except rospy.ServiceException, e:
    print "Service call failed: %s"%e
Demanding a Task

If you want your task to be executed immediately, pre-empting the current task execution (or navigation to that task), you can use the demand_task service:

demand_task_srv_name = '/task_executor/demand_task'
set_exe_stat_srv_name = '/task_executor/set_execution_status'
rospy.wait_for_service(demand_task_srv_name)
rospy.wait_for_service(set_exe_stat_srv_name)
demand_task_srv = rospy.ServiceProxy(demand_task_srv_name, strands_executive_msgs.srv.DemandTask)
set_execution_status = rospy.ServiceProxy(set_exe_stat_srv_name, strands_executive_msgs.srv.SetExecutionStatus)

try:
    # demand task execution immedidately
    task_id = demand_task_srv(task)
    # make sure the executive is running -- this only needs to be done once for the whole system not for every task
    set_execution_status(True)
except rospy.ServiceException, e:
    print "Service call failed: %s"%e
Execution Information

The current execution status can be obtained using the service strands_executive_msgs/GetExecutionStatus typically on /task_executor/get_execution_status. True means the execution system is running, false means that the execution system has either not been started or it has been paused (see below).

To see the full schedule, subscribe to the topic /current_schedule which provides the list of tasks in execution order. If currently_executing is true, the first element of execution_queue is the currently active task. If it is false, the system is waiting until it is time to start executing that task.

To just get the currently active task, use the service strands_executive_msgs/GetActiveTask on /task_executor/get_active_task. If the returned task has a task_id of 0 then there is no active task (as you can’t return None over a service).
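
As a sketch, a minimal monitor of /current_schedule might look like this (it assumes the strands_executive_msgs/ExecutionStatus message type listed for this topic later in this document):

import rospy
from strands_executive_msgs.msg import ExecutionStatus

def schedule_cb(msg):
    # when currently_executing is true, the head of the queue is the active task
    if msg.currently_executing and len(msg.execution_queue) > 0:
        rospy.loginfo('executing task %s', msg.execution_queue[0].task_id)
    else:
        rospy.loginfo('%s task(s) queued, none executing yet', len(msg.execution_queue))

rospy.init_node('schedule_watcher')
rospy.Subscriber('/current_schedule', ExecutionStatus, schedule_cb)
rospy.spin()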

Interruptibility at Execution Time

By default the execution of tasks is interruptible (via actionlib preempt). Interruptions happen if another task is demanded while a task is running, or if the task exceeds its execution duration. If you do not wish your task to be interrupted in these conditions you can provide the IsTaskInterruptible.srv service at the name <task name>_is_interruptible, e.g. do_dishes_is_interruptible from the example above. You can change the return value at runtime as this will be checked prior to interruption.

Here’s an example from the node which provides the wait_action.

class WaitServer:
    def __init__(self):
        self.server = actionlib.SimpleActionServer('wait_action', WaitAction, self.execute, False)
        self.server.start()
        # this is not necessary in this node, but included for testing purposes
        rospy.Service('wait_action_is_interruptible', IsTaskInterruptible, self.is_interruptible)

    def is_interruptible(self, req):
        # rospy.loginfo('Yes, interrupt me, go ahead')
        # return True
        rospy.loginfo('No, I will never stop')
        return False

Creating a Routine

Our use case for task execution is that the robot has a daily routine which is a list of tasks which it carries out every day. Such a routine can be created with the task_routine.DailyRoutine object, which is configured with start and end times for the robot’s daily activities:

# some useful times
localtz = tzlocal()

# the time the robot will be active
start = time(8,30, tzinfo=localtz)
end = time(17,00, tzinfo=localtz)
midday = time(12,00, tzinfo=localtz)

morning = (start, midday)
afternoon = (midday, end)

routine = task_routine.DailyRoutine(start, end)

Tasks are then added using the repeat_every* methods. These take the given task and store it such that it can be correctly instantiated with start and end times every day:

# do this task every day
routine.repeat_every_day(task)
# and every two hours during the day
routine.repeat_every_hour(task, hours=2)
# once in the morning
routine.repeat_every(task, *morning)
# and twice in the afternoon
routine.repeat_every(task, *afternoon, times=2)

The DailyRoutine declares the structure of the routine. The routine tasks must be passed to the DailyRoutineRunner to manage the creation of specific task instances and their addition to the task executor.

# this uses the newer AddTasks service which accepts tasks as a batch
add_tasks_srv_name = '/task_executor/add_tasks'
add_tasks_srv = rospy.ServiceProxy(add_tasks_srv_name, AddTasks)


# create the object which will talk to the scheduler
runner = task_routine.DailyRoutineRunner(start, end, add_tasks_srv)
# pass the routine tasks on to the runner which handles the daily instantiation of actual tasks
runner.add_tasks(routine.get_routine_tasks())

# Set the task executor running (if it's not already)
set_execution_status(True)

Original page: https://github.com/strands-project/strands_executive/blob/indigo-devel/README.md

MDP Plan Exec

An optimal, high-level robot route planner and travel time estimator, using probabilistic model-checking techniques.

Overview

This package provides single-robot optimal route planning capabilities, along with execution time expectations, based on the method presented in:

B. Lacerda, D. Parker, and N. Hawes. Optimal and Dynamic Planning for Markov Decision Processes with Co-Safe LTL Specifications. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), Chicago, Illinois, USA, September 2014.

This uses MDP models of the environment and an extension of the probabilistic model checker tool PRISM that allows for communication with a ROS node via sockets, to generate the cost-optimal policies.

In order to fill the costs and probabilities of the MDP models with predictions from real-world data, we use the approach presented in:

J. Fentanes, B. Lacerda, T. Krajník, N. Hawes, and M. Hanheide. Now or later? Predicting and Maximising Success of Navigation Actions from Long-Term Experience. In 2015 IEEE International Conference on Robotics and Automation (ICRA 2015), Seattle, Washington, USA, May 2015.

Usage

You can use this package within the scheduled_task_executor node. This provides a complete framework for scheduling and executing tasks while taking into account the time of day. If you wish to use it in a standalone fashion, you can do roslaunch mdp_plan_exec mdp_plan_exec.launch topological_map:=t where t is the name of the map saved in the `mongo_db <https://github.com/strands-project/mongodb_store>`__ using the `topological_utils <https://github.com/strands-project/strands_navigation/tree/indigo-devel/topological_utils>`__ package.

Nodes

special_waypoints_manager.py

Dynamically manages special waypoints defined, for example, by an end user.

Services

/mdp_plan_exec/add_delete_special_waypoint (strands_executive_msgs/AddDeleteSpecialWaypoint)

Adds “special” waypoints. These can be “forbidden” waypoints (i.e., waypoints to avoid during task execution, e.g., for safety reasons), or “safe” waypoints, to which the robot should navigate when told to get out of the way by an end-user. These sets of waypoints are used to automatically generate the co-safe LTL specifications for the action server provided by the mdp_travel_time_estimator.py node (see below). The boolean is_addition defines if the service call is adding or removing a waypoint, and the waypoint_type constant defines if the waypoint in the service call is forbidden or safe.

/mdp_plan_exec/get_special_waypoints (strands_executive_msgs/GetSpecialWaypoints)

Returns the current lists of forbidden and safe waypoints, along with the corresponding LTL subformulas. For safe waypoints the corresponding subformula is the disjunction of all waypoints in the safe waypoint list, and for forbidden waypoints the corresponding subformula is the conjunction of the negations of all waypoints in the forbidden waypoints list.

mdp_travel_time_estimator.py

Provides expected times to a target waypoint, by performing value iteration on a product MDP obtained from the MDP model representing the topological map.

Services

/mdp_plan_exec/get_expected_travel_times_to_waypoint (strands_executive_msgs/GetExpectedTravelTimesToWaypoint)

Given a string target_waypoint identifying an element of the topological map, and a ROS timestamp, this service returns the expected times to travel from all waypoints to the target waypoint. These are encoded by the vectors string[] source_waypoints and float64[] travel_times, where travel_times[i] represents the expected travel time from source_waypoints[i] to target_waypoint.
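
A sketch of calling this service from Python (the positional argument order of target waypoint followed by timestamp is an assumption; the response field names are those documented above):

import rospy
from strands_executive_msgs.srv import GetExpectedTravelTimesToWaypoint

rospy.init_node('travel_time_query')
srv_name = '/mdp_plan_exec/get_expected_travel_times_to_waypoint'
rospy.wait_for_service(srv_name)
get_times = rospy.ServiceProxy(srv_name, GetExpectedTravelTimesToWaypoint)

# expected travel time from every waypoint to WayPoint1, using the current time
resp = get_times('WayPoint1', rospy.get_rostime())
for wp, secs in zip(resp.source_waypoints, resp.travel_times):
    print '%s -> WayPoint1: %.1fs' % (wp, secs)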

mdp_policy_executor.py

Generates and executes a policy for a given input task, by performing value iteration on a product MDP obtained from the MDP model representing the topological map.

Actions

/mdp_plan_exec/execute_policy (strands_executive_msgs/ExecutePolicy)

Given a task_type and target_id, generates a cost-optimal policy and executes the navigation actions associated with it. The specification is a co-safe LTL formula generated taking into account the task_type. In the following, forbidden_waypoints_ltl_string and safe_waypoints_ltl_string are the strings obtained from the get_special_waypoints service:

  • task_type = GOTO_WAYPOINT. This generates and executes a policy for formula F target_id (i.e., “reach target_id”) if there are no forbidden waypoints, or for formula forbidden_waypoints_ltl_string U target_id (i.e., reach target_id while avoiding the forbidden waypoints).
  • task_type = LEAVE_FORBIDDEN_AREA. This generates and executes a policy for formula F forbidden_waypoints_ltl_string (i.e., get out of the forbidden waypoints as soon as possible). It should be called when the robot ends up in forbidden areas of the environment due to some failure in execution.
  • task_type = GOTO_CLOSEST_SAFE_WAYPOINT. This generates and executes a policy for formula F safe_waypoints_string (i.e., reach a safe waypoint as soon as possible). It should be called when the robot is sent to a safe zone, for example by an end-user.
  • task_type = COSAFE_LTL. This generates and executes a policy for formula target_id. In this case, target_id is a general co-safe LTL formula written in the PRISM specification language.

The action server also provides the expected execution time as feedback.

Original page: https://github.com/strands-project/strands_executive/blob/indigo-devel/mdp_plan_exec/README.md

Scheduler

A task scheduler implemented using mixed integer programming.

Overview

This package provides single-robot task scheduling capabilities based on the algorithm presented in:

L. Mudrova and N. Hawes. Task scheduling for mobile robots using interval algebra. In 2015 IEEE International Conference on Robotics and Automation (ICRA 2015), Seattle, Washington, USA, May 2015.

This extends prior work presented in

B. Coltin, M. Veloso, and R. Ventura. Dynamic user task scheduling for mobile robots. In AAAI Workshop on Automated Action Planning for Autonomous Mobile Robots. AAAI, 2011.

Usage

If you want your tasks to be executed, then you should use the scheduler within the scheduled_task_executor node. If you wish to use it in a standalone fashion, you can call the get_schedule service described below, and demonstrated in the scheduler/tests/test_scheduler_node.py.

Libraries

The library libscheduler provides the scheduling functionality outside of ROS. Use this if you want to include the scheduler in your own C++ code without using ROS. An example of its usage is provided in the scheduler_example executable, compiled from scheduler/src/main.cpp.

Nodes

scheduler_node

The scheduler_node provides task scheduling as a ROS service. Examples of using this service are available in the tests for the scheduler scheduler/tests/test_scheduler_node.py.

Services

get_schedule (strands_executive_msgs/GetSchedule)

Given a list of tasks (Task[] tasks), this service call returns the order in which they should be executed and the desired execution start time of each task. Execution order is specified by the return value uint64[] task_order which contains the ids of input tasks in the order they should be executed. Execution times are provided by time[] execution_times which is in the same order as task_order and states the ideal time for task execution assuming all the input durations to the scheduler are as expected. The DurationMatrix argument specifies the travel time between task locations. It is a 2D array (implemented as a list of lists) where durations[i].durations[j] specifies the travel time between the end location of task i and the start location of task j.
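
As a rough sketch, a standalone call might look like the following (the request field names tasks and durations, and the message import path for DurationMatrix, are assumptions based on the description above; see scheduler/tests/test_scheduler_node.py for a definitive example):

import rospy
from strands_executive_msgs.msg import DurationMatrix
from strands_executive_msgs.srv import GetSchedule, GetScheduleRequest

rospy.init_node('schedule_client')
rospy.wait_for_service('get_schedule')
get_schedule = rospy.ServiceProxy('get_schedule', GetSchedule)

tasks = []                          # fill with strands_executive_msgs/Task instances
duration_matrix = DurationMatrix()  # fill durations[i].durations[j] with travel times

req = GetScheduleRequest()
req.tasks = tasks
req.durations = duration_matrix
resp = get_schedule(req)

for task_id, start in zip(resp.task_order, resp.execution_times):
    print 'task %d should start at %.0f' % (task_id, start.to_sec())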

Parameters

~save_problems (bool)

If true, the scheduling problems received by the scheduler (via get_schedule) are stored into the mongodb_store message store collection ‘scheduling_problems’. This allows for later replay via rosrun scheduler replay_scheduling_problems.py. Defaults to true.

~scheduler_version (int)

Switches the version of the algorithm the scheduler runs. If not specified, the Mudrova and Hawes algorithm is used. If the value 1 is specified, then the original Coltin et al. algorithm is used.

~output_file (string)

Path to a desired output file. If set, output from the scheduler is saved to this file.

~timeout (int)

How many seconds to allow the scheduler to run before timing out and returning (still potentially returning a result). The default value is to not have a timeout.

Original page: https://github.com/strands-project/strands_executive/blob/indigo-devel/scheduler/README.md

Task Executive

A set of nodes and libraries for defining and executing long-term task-level behaviour.

Overview

The task executor is the central element of the strands_executive framework. It receives tasks from clients, queues them for execution, then manages their execution and the required navigation between tasks. This package provides two executors: the simple fifo_task_executor which simply executes tasks in the order it receives them (ignoring their timing constraints) and the scheduled_task_executor which uses the scheduler node to create and maintain an execution schedule based on the time windows of the tasks. This package also provides the task_routine Python module which facilitates the creation of daily task routines with repeating temporal structure.

Modules

For the task routine module see the description on the overview page, and the API documentation.

Runtime Dependencies

For the executive framework to function correctly, you must have the mongodb_store nodes running. These are used by the framework to store tasks with arbitrary arguments.

roslaunch mongodb_store mongodb_store.launch

or, specifying the path where the database should be stored:

roslaunch mongodb_store mongodb_store.launch db_path:=/...

Currently the task executive abstracts over navigation actions using the STRANDS topological navigation framework. Therefore you must have this framework running. For testing, or if you’re not running the full topological navigation system, you can run a simple simulated topological system with the following command:

roslaunch topological_utils dummy_topological_navigation.launch

This produces a map with 9 nodes: ChargingPoint in the centre, with v_-2, v_-1 to v_2 running vertically and h_-2 to h_2 running horizontally, joining ChargingPoint in the middle.

Nodes

scheduled_task_executor.py

This node receives tasks via the services, schedules them for execution, then executes them in the order defined by the schedule.

Execution Status

When the executor is started, it will not start executing any tasks until the execution status is set to True via a call to the /task_executor/set_execution_status (strands_executive_msgs/SetExecutionStatus) service. If execution status is set to False then execution pauses, interrupting any currently executing task.

Task Addition and Scheduling

Tasks are added using the add_task (single task) and add_tasks (multiple tasks) services. All received tasks are added to a queue for scheduling which is monitored by the executor. When the queue contains tasks, the new tasks plus the tasks already scheduled are sent to the scheduler node. If a schedule is successfully created, it replaces the previous schedule and execution continues. If scheduling fails then the newly added tasks are dropped by the executor and the previous schedule is reinstated. Adding new tasks does not interrupt the currently executing task.

Task Demanding

If a task should be executed immediately, the demand_task (strands_executive_msgs/DemandTask) service can be used. This interrupts the currently executing task and replaces it with the demanded task. The schedule is then recreated to respect the execution constraints of the demanded task, and, if possible the interrupted task is included in this new schedule.

Interruptibility

By default the execution of tasks is interruptible (via actionlib preempt). If you do not wish your task to be interrupted in this way you can provide the IsTaskInterruptible.srv service at the name <task name>_is_interruptible, e.g. do_dishes_is_interruptible from the example above. You can change the return value at runtime as this will be checked prior to interruption.

Here’s an example from the node which provides the wait_action.

class WaitServer:
    def __init__(self):
        self.server = actionlib.SimpleActionServer('wait_action', WaitAction, self.execute, False)
        self.server.start()
        # this is not necessary in this node, but included for testing purposes
        rospy.Service('wait_action_is_interruptible', IsTaskInterruptible, self.is_interruptible)

    def is_interruptible(self, req):
        # rospy.loginfo('Yes, interrupt me, go ahead')
        # return True
        rospy.loginfo('No, I will never stop')
        return False
Task Execution and Monitoring

When a task is executed it passes through two phases: navigation and action execution. If the task has a start_node_id set then the executor uses topological navigation to move the robot to this start node. Before doing so it obtains an estimate of the travel time from the topological_navigation/travel_time_estimator service. If the travel time greatly exceeds this estimate, the topological navigation action is preempted and the task execution is failed. If the topological_navigation action reports anything but success on completion then the task execution is failed. If it reports success then the task moves on to the action execution phase. This phase triggers the action server described by the task. If execution of the action server greatly exceeds the max_duration of the task, it is preempted and execution is considered failed. The overall execution state machine is pictured below.

Task executor state machine


Services

task_executor/add_tasks (strands_executive_msgs/AddTasks)

Add a list of tasks to be scheduled for execution.

task_executor/add_task (strands_executive_msgs/AddTask)

Add a single task to be scheduled for execution.

task_executor/demand_task (strands_executive_msgs/DemandTask)

Triggers the immediate execution of a task, interrupting the currently executing task.

task_executor/set_execution_status (strands_executive_msgs/SetExecutionStatus)

Sets the execution status of the executor. Set to false to pause execution. Starts at false so must be set to true on start-up.

task_executor/get_execution_status (strands_executive_msgs/GetExecutionStatus)

Gets the current execution status of the executor.

task_executor/clear_schedule (std_srvs/Empty)

Clears all tasks scheduled for execution. Cancels any active task.

task_executor/cancel_task (strands_executive_msgs/CancelTask)

Removes the task with the given id from the schedule. If this task is currently executing, execution is interrupted.

task_executor/get_active_task (strands_executive_msgs/GetActiveTask)

Gets the task which is currently executing.

Published Topics

task_executor/events (strands_executive_msgs/TaskEvent)

Events that happen as the task executor passes through its state machine for each task.

current_schedule (strands_executive_msgs/ExecutionStatus)

The list of upcoming tasks and what is currently being executed.
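
A minimal sketch of listening to the events topic above (the fields of TaskEvent are not listed here, so the message is simply logged whole):

import rospy
from strands_executive_msgs.msg import TaskEvent

def on_task_event(event):
    # TaskEvent fields are not documented in this README, so log the whole message
    rospy.loginfo('task event: %s', event)

rospy.init_node('task_event_logger')
rospy.Subscriber('task_executor/events', TaskEvent, on_task_event)
rospy.spin()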

fifo_task_executor.py

A greatly simplified task executor that executes tasks in the order they are added, and only supports task addition and very little else when compared to the scheduled_task_executor.

schedule_status.py

Prints a summary of the current_schedule topic.

Original page: https://github.com/strands-project/strands_executive/blob/indigo-devel/task_executor/README.md

Routine Behaviours

Overview

This package provides tools to generate long-term routine behaviours for mobile robots. The RobotRoutine class in this package provides facilities for extending the strands_executive/task_routine with useful runtime behaviours.

API documentation: http://strands-project.github.io/strands_executive_behaviours/routine_behaviours/html

Idle Behaviours

When the robot is inactive for a configurable amount of time, with sufficient battery charge, the method RobotRoutine.on_idle() is called. Overriding this method allows you to generate specific behaviour when the robot is idle. For example, PatrolRoutine.on_idle() moves the robot to a random node when idle.

Night Behaviours

When the robot’s routine for the day is complete, we assume that the robot is then parked at a charging station. As the routine is complete, tasks which are scheduled as part of the normal routine are not sent to the robot. To provide tasks which are executed by the robot overnight, you can use RobotRoutine.add_night_task. This adds a task which will be sent for execution every night after the routine finishes. These tasks cannot involve movement and therefore must either have an empty start_node_id or be performed at the charging station. A system can have multiple night tasks, but currently no checking is done to ensure the night tasks fit within the time available between the end of one routine day and the start of the next.
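
A minimal sketch of adding such a task (the wait_action server and the ChargingPoint waypoint name are assumptions; routine is a RobotRoutine subclass instance such as the ones shown earlier in this document):

import rospy
from strands_executive_msgs.msg import Task

# a 10 minute wait at the charging point, run each night after the routine day ends
night_task = Task(action='wait_action',
                  start_node_id='ChargingPoint',
                  max_duration=rospy.Duration(10 * 60))
routine.add_night_task(night_task)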

Original page: https://github.com/strands-project/strands_executive_behaviours/blob/hydro-devel/routine_behaviours/README.md

Spatio-temporal topological exploration

This package provides the nodes necessary to run spatio-temporal exploration over the nodes in the topological map.

Parameters

  • tag_node: nodes where it is not possible to build a ground truth. The exploration task is set by an exploration routine.
  • schedule_directory: directory where the schedules are saved.
  • grids_directory: directory where the 3D grids are saved.
  • resolution: voxel size.
  • dimX: number of cells along the axis X.
  • dimY: number of cells along the axis Y.
  • dimZ: number of cells along the axis Z.
  • sweep_type: do_sweep action type (complete, medium, short, shortest).
  • taskDuration: exploration task time duration in seconds.

Running

roslaunch topological_exploration topological_exploration.launch

Original page: https://github.com/strands-project/strands_exploration/blob/indigo-devel/spatiotemporal_exploration/README.md

Strands gazing package

Since gazing is an important part of human-robot interaction, this package provides general applications to perform gazing behaviours.

Gaze at pose

This node takes a given geometry_msgs/PoseStamped and moves the head to gaze at this position. This is implemented as an action server.

Parameters
  • head_pose Default: /head/commanded_state: The topic to which the joint state messages for the head position are published.
  • head_frame Default: /head_base_frame: The target coordinate frame.
Running
rosrun strands_gazing gaze_at_pose [_parameter:=value]

To start the actual gazing you have to use the actionlib client architecture, e.g. rosrun actionlib axclient.py /gaze_at_pose

  • goal
    • int32 runtime_sec: The time the gazing should be executed in seconds. Set to 0 for infinite run time (has to be cancelled to stop). Cancelling a goal will trigger the head to move to the zero position.
    • topic_name: The name of the topic on which to listen for the poses. Currently /move_base/current_goal and /upper_body_detector/closest_bounding_box_centre are used, but really everything that publishes a stamped pose can be used.
  • result
    • bool expired: true if run time is up, false if cancelled.
  • feedback
    • float32 remaining_time: The remaining run time.
    • geometry_msgs/Pose target: The pose at which the robot currently gazes.

Notes

Since some of the topics that publish a PoseStamped do this only once, e.g. /move_base/current_goal, the transformation runs in an infinite loop. This means that move_base publishes the pose once, and inside the loop the transform will be recomputed for the updated robot position using the received pose from move_base. As soon as a new pose is received, the transform loop will use this one to calculate the transform. This still runs in real time but accounts for nodes that are lazy publishers.

Original page: https://github.com/strands-project/strands_hri/blob/hydro-devel/strands_gazing/README.md

Human aware velocities

This package adjusts the velocity of the robot in the presence of humans.

Human aware navigation

This node sets a navigation target and adjusts the velocity move_base/DWAPlannerROS uses to move the robot to a goal. This is achieved by using the dynamic reconfigure callback of the DWAPlannerROS to set max_vel_x, max_rot_vel, and max_trans_vel according to the distance of the robot to the closest human. As a consequence the robot will not be able to move linearly or rotate if a human is too close, and will gradually slow down when approaching humans. The rotation speed is only adjusted to prevent the robot from spinning in place when it is unable to move forward, because this behaviour was observed even when the clearing rotation recovery behaviour is turned off.

This is implemented as an action server.

Parameters
  • pedestrian_locations Default: /pedestrian_localisation/localisations: The topic on which the actual pedestrian locations are published by the pedestrian localisation package (bayes_people_tracker/PeopleTracker).
  • /human_aware_navigation/max_dist Default: 5.0: maximum distance in metres to a human before altering the speed.
  • /human_aware_navigation/min_dist Default: 1.2: minimum distance in metres to a human. Robot stops at that distance.
  • /human_aware_navigation/timeout Default: 5.0: time in seconds after which the speed is reset if no human is detected any more.
Running
rosrun strands_human_aware_velocity human_aware_planner_velocity.py

To start the actual functionality you have to use the actionlib client architecture, e.g. rosrun actionlib axclient.py /human_aware_planner_velocities. The goal type is move_base/MoveBaseGoal.

Human aware cmd_vel

A node to adjust the velocity of the robot by taking the /cmd_vel topic and republishing it. In order for this to work you have to remap the /cmd_vel output of your node or navigation stack to the input topic of this node. It relies on the output of the strands_pedestrian_localisation package to provide the actual position of humans in the vicinity of the robot.

Parameters
  • pedestrian_location Default: /pedestrian_localisation/localisations: The topic on which the actual pedestrian locations are published by the pedestrian localisation package (bayes_people_tracker/PeopleTracker).
  • cmd_vel_in Default: /human_aware_cmd_vel/input/cmd_vel: The topic to which the original /cmd_vel should be published.
  • cmd_vel_out Default: /cmd_vel: The modified /cmd_vel.
  • threshold Default: 3: Threshold in seconds to determine which person detections from the cache are too old to use.
  • max_speed Default: 0.7: The maximum speed the robot should achieve.
  • max_dist Default: 5.0: The distance at which the node starts taking detections of humans into account.
  • min_dist Default: 1.5: The cmd_vel will be 0.0 if there is a person closer to the robot than this.
Running
rosrun strands_human_aware_velocity human_aware_cmd_vel [_parameter:=value]

Original page: https://github.com/strands-project/strands_hri/blob/hydro-devel/strands_human_aware_navigation/README.md

Package Description

This package contains simple human following.

The main function of this package is making the robot follow humans based on the information from passive people detection.

The State Machine inside the package contains:

Start stage - Wandering stage - Following stage - Local-searching stage

Getting Started

Before you can launch this package, you need:

roscore
strands robot bringup
strands robot camera
strands ui
strands navigation
perception people tracker

To start this module, simply launch:

roslaunch strands_human_following simple_follow.launch [wandering_mode, config_file]
rosrun strands_human_following simple_follow_client.py

There are two parameters that can be changed:

  • wandering_mode: [wait, normal] with wait as default
  • config_file: where the configuration of this module is stored. By default it is stored in /strands_hri/strands_human_following/conf/default.yaml.

Configurations

To change the settings of this package, find:

/strands_hri/strands_human_following/conf/default.yaml

Parameters you can change are listed below:

wander_area            : (polygon)
follow_area            : (polygon)
max_t_frames(tolerance): (int)
alpha                  : (double)
distance(safe distance): (double)

The wander area is the area where the robot will wander around when there is no person detected. The follow area is the area where the robot will follow a person; the robot will not follow outside the follow area even when there are persons detected. Both the wander area and the follow area are polygons defined by points [x,y].

max_t_frames is the maximum number of frames for which the robot searches for a person after losing them. alpha is the factor determining how far the pan-tilt unit rotates when the robot follows a person. distance is the distance kept between the robot and a person while following.

Tasks and Routines

There is an example called routine_example.py showing how to create a task and add it to the scheduler. First of all,

roslaunch task_executor task-scheduler.launch

must be up and running. Create the task as follows:

task = Task(start_node_id=[WayPoint], max_duration=[DurationOfTheTask], action='simple_follow')

task_utils.add_int_argument(task, [DurationOfHumanFollowing])

task_utils is needed since SimpleFollow.action has one integer argument that determines how long the human following runs.

Problems

Cannot keep up with a human’s normal walking pace.

Cannot follow when people walk towards or past the robot.

Original page: https://github.com/strands-project/strands_hri/blob/hydro-devel/strands_human_following/README.md

Visualise speech package

This package provides a control node for the head lights of the robot to compensate for the fact that it does not have a mouth for speaker identification. At the current stage the node uses pulseaudio to read the output levels of the soundcard and translates this into control commands for the head lights. The louder the output, the more lights are illuminated.

Installation

  • make sure that the submodules are there before running catkin_make
  • cd strands_hri
  • git submodule init
  • git submodule update

Usage

Cannot be run remotely. Needs to access the robots hardware.

This node is an action server and needs a goal to start doing anything. The only parameter of the goal is the runtime of the system:

  • seconds: run time in seconds. 0 for infinite (until preempted).

Run with: rosrun strands_visualise_speech sound_to_light.py or better: have a look at the strands_hri_utils package to run together with the convenience action server provided in there.

Troubleshooting:

  • The audio grabber assumes that your sound sink is called alsa_output.usb-Burr-Brown_from_TI_USB_Audio_CODEC-00-CODEC.analog-stereo; if it is not, the node won’t do anything. You can check if the name is correct by looking at the output: the node will list all available sinks when started. Re-run the node with the parameter _sink_name:=<your sink>. If the output states setting up peak recording using <your sink name> it is configured correctly.
  • If the programme states Connection failed, make sure that no other user is logged in, hogging the sound resource.

Problems:

  • The light control is very slow and therefore the lights are a bit delayed.
  • The publish rate had to be artificially reduced to 4 Hz because it would otherwise interfere with all the hardware control commands.

Original page: https://github.com/strands-project/strands_hri/blob/hydro-devel/strands_visualise_speech/README.md

bham

The package provides a simulation of the CS building at Birmingham. It comprises a Blender model of the building (data/CS_Building.blend) and a couple of Morse actuators to simulate the four-level lift.

To test it out, get bham on your ROS path and run:

rosrun bham simulator.sh

and after that has loaded up:

rosrun bham lift_controller.py

In the current simulation a B21 is sitting in the centre of floor 1. It can be controlled through ROS using /b21/cmd_vel.

The lift is controlled by publishing a std_msgs/Bool message on “/lift_sim/[command|call]floor”, where floor is [B|G|1|2]. These topics correspond to a user pressing the “call” button on the outside of the lift, or the “command” button on the inside of the lift to go to “floor”. The lift travels in the order requested, and waits 8 seconds before closing the door.
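
For example, a minimal Python publisher to call the lift to floor 1 might look like this (a sketch based on the topic layout above):

import rospy
from std_msgs.msg import Bool

rospy.init_node('lift_caller')
# latched so the single message is not lost if the controller subscribes late
pub = rospy.Publisher('/lift_sim/call1', Bool, latch=True)
pub.publish(Bool(data=True))
rospy.sleep(1.0)  # give the message time to reach the lift controller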

Original page: https://github.com/strands-project/strands_morse/blob/indigo-devel/bham/README.md

strands_morse

A MORSE-based simulation for the STRANDS project

Prerequisites

For running the simulation you would need:

We recommend installing the morse-blender-bundle from our Debian server:

  • Set-up instructions: Using the STRANDS repository
  • Install the bundle: sudo apt-get install morse-blender-2.65-py-3.3
  • Source the environment: source /opt/morse-blender-2.65-py-3.3/.bashrc (you might want to add this to your ~/.bashrc)
  • Check if morse has installed correctly: morse check

Please refer to the respective installation guides to install them on your system.

The current software has been tested under the following configuration:

  • MORSE (strands-project/morse)
  • Blender 2.63a
  • ROS Groovy
  • Python 3.2.3 (3.3.0)
  • Ubuntu 12.04 LTS

Please note: Using a depth camera requires Python 3.3.0 and a corresponding Blender version (>2.65).

Getting Started

Start some terminals and run the commands below:

  1. Fire up roscore:

    $ roscore
    
  2. Run the MORSE simulation:

    $ roslaunch strands_morse bham_cs_morse.launch
    
  3. (optional) Launch the 2D navigation:

    $ roslaunch strands_morse bham_cs_nav2d.launch
    
  4. (optional) To visualize the environment model in RVIZ run

    $ rosrun rviz rviz
    
    $ roslaunch strands_morse bham_cs_rviz.launch
    

and add a new display of type RobotModel in RVIZ. Set the robot_description to /env/env_description and TF prefix to env. (requires strands_utils)

The commands above use the lower ground floor of the Computer Science building by default. For simulating a different level of the building please run:

$ roslaunch strands_morse bham_cs_morse.launch env:=cs_1

Other available environments are: cs_lg, cs_ug, cs_1, cs_2, cs (for the whole building)

Similarly, you have to run the navigation and visualization with an extra argument as follows:

$ roslaunch strands_morse bham_cs_nav2d.launch env:=cs_1

$ roslaunch strands_morse bham_cs_rviz.launch env:=cs_1

To run the MORSE simulation of the TUM kitchen with a ScitosA5:

$ roslaunch strands_morse tum_kitchen_morse.launch

Alternatively:

$ roslaunch strands_morse tum_kitchen_morse.launch env:=tum_kitchen_WITHOUT_CAMERAS

The Scitos robot model is equipped with a video, depth and semantic camera by default.

If you wish to start the robot without any camera provide Scitosa5.WITHOUT_CAMERAS in the robot’s constructor (see example below). Starting the robot without any camera is useful to run MORSE in fast_mode.

If you only want to disable the depth camera, provide Scitosa5.WITHOUT_DEPTHCAM in the constructor. This might be useful if you run MORSE on MacOSX, because users have reported problems when using a depth camera.

Example usage in the MORSE builder script:

  • with cameras:

    robot = Scitosa5()
    # Alternatively
    # robot = Scitosa5(Scitosa5.WITH_CAMERAS)
    
  • without cameras

    robot = Scitosa5(Scitosa5.WITHOUT_CAMERAS)
    
  • without depth camera

    robot = Scitosa5(Scitosa5.WITHOUT_DEPTHCAMS)
    
  • with OpenNI stack generating point clouds and images, be sure to launch roslaunch strands_morse generate_camera_topics.launch and

    robot = Scitosa5(Scitosa5.WITH_OPENNI)
    

Original page: https://github.com/strands-project/strands_morse/blob/indigo-devel/README.md

BHAM CS Building Simulation with Moving Lift

Setup

The following setup is required in addition to the standard STRANDS Morse install.

PySide

PySide is needed for the control manager GUI. It must be installed from source for the STRANDS install of Python 3.3:

  1. Install system dependencies:
sudo apt-get install build-essential git cmake libqt4-dev libphonon-dev libxml2-dev libxslt1-dev qtmobility-dev
  2. Install Python 3.3 setuptools:
cd /opt/strands/tmp  # OR THE PLACE YOU CHOSE
wget https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py
python3.3 ez_setup.py
  3. Build & install PySide (this takes a while):
wget https://pypi.python.org/packages/source/P/PySide/PySide-1.2.1.tar.gz
tar -xvzf PySide-1.2.1.tar.gz
cd PySide-1.2.1
python3.3 setup.py install --qmake=/usr/bin/qmake-qt4

Using the Lift

  1. Start Morse with all floors of the CS building:
roslaunch strands_morse bham_cs_morse.launch env:=cs
  2. Start the ‘Lift Controller’ - a small program that maintains a queue of lift calls and commands through ROS, and passes them on to Morse.
rosrun strands_morse lift_controller.py
  3. Launch the control interface:
rosrun strands_morse bham_control_gui.py

The control GUI allows you to:

  • Hide / show different floors of the building - top row of buttons
  • Call the lift to a floor (i.e. press the button outside the lift) - second row of buttons
  • Command the lift to a floor (i.e. press a button inside the lift) - bottom row of buttons

How the Lift works

The lift acts as a “robot” within the environment, with morse.core.actuator.Actuators to control the doors and the lift platform. These actuators provide services to open and close doors, and to move the lift platform between floors, exposed as Morse RPC functions using the basic morse socketstream ‘middleware’.

The ‘Lift Controller’ uses the morse middleware through the pymorse library to command the lift between floors. It receives floor requests through ROS topics, subscribing to:

/lift_sim/[command|call]<floor> : std_msgs/Bool

where floor is [B|G|1|2]. For example, /lift_sim/callG.

The topics correspond to a user pressing the “call” button on the outside of the lift on a given floor, or the “command” button on the inside of the lift to go to that floor.

The controller maintains a simple queue, moves the lift in the order requested, and waits a minimum of 8 seconds before closing the doors between floor visits.
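As a small illustration, the lift can be called from a ROS node by publishing on one of the topics above; a minimal sketch, assuming std_msgs/Bool as the message type:

    #!/usr/bin/env python
    # Minimal sketch: "press" the call button on floor G by publishing on the
    # corresponding lift topic (message type assumed to be std_msgs/Bool).
    import rospy
    from std_msgs.msg import Bool

    rospy.init_node('lift_call_example')
    pub = rospy.Publisher('/lift_sim/callG', Bool, queue_size=1)
    rospy.sleep(1.0)               # give the publisher time to connect
    pub.publish(Bool(data=True))   # call the lift to floor G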

Original page: https://github.com/strands-project/strands_morse/wiki/BHAM-Morse

Simple install script for MORSE and ROS for STRANDS

The easiest way to install MORSE is to use the install script here. This gets all of the necessary packages, and installs Python 3.3, Blender, and MORSE in a user directory. It also installs ROS Groovy in /opt.

  1. Download the script setup.sh

  2. Create a directory in home under which packages will be installed. The directory will later contain the sub-directories bin/ include/ lib/ opt/ share/ src/ tmp/, following the usual filesystem structure.

  3. Open a terminal, cd to the newly created directory and execute the install script. During install, your password will be requested to install ROS in /opt using the official packages. After all the components are installed, your directory will have a .bashrc file inside it. Sourcing this file sets the PATH etc in the shell session. The install script sets up automatic sourcing of this file at the end of your ~/.bashrc file.

  4. Open a new terminal and run

    morse check

This should tell you that your environment is setup correctly. :-)

  5. Get yourself a GitHub account (https://github.com) and send your account name to marc@hanheide.net

The manual way

Issues and solutions

  • Cannot find a MORSE environment or simulation scene matching <strands_sim>!
  • Add the path to strands_sim in ~/.morse/config. This is done by going into the strands_sim directory.
  • To easily achieve this do:
    • run morse create test: this will create the setup file you need by creating a dummy MORSE project.
    • You can then remove the newly created dummy project with rm -rf test/
    • Now it’s time to edit the config file that was created so that it points to your strands_morse project: gedit ~/.morse/config
    • Change the path and name to fit your installation in ~/.morse/config. For me this file looks like this: [sites] strands_sim = /home/johane/strands/strands_morse/strands_sim
  • Run the BHAM MORSE sim with ROS:
  • Create a directory like strands/sim/src and cd to that
  • Clone the git repository to this location: git clone git://github.com/strands-project/strands_morse.git and cd strands_morse
  • create a src dir and move everything into it: mkdir src && mv * src
  • go to strands/sim/src and run catkin_init_workspace
  • go to strands/sim and run catkin_make
  • source the ros environment: source devel/setup.bash
  • run it: rosrun strands_sim simulator.sh

Original page: https://github.com/strands-project/strands_morse/wiki/MORSE-on-Ubuntu

calibrate_chest

The new way to use this package is through the calibration_server action server. To use it, run rosrun calibrate_chest calibration_server and, in another terminal, rosrun actionlib axclient.py /calibrate_chest. To compute a new transformation between the camera and the floor, enter the command calibrate. If you want to publish an already saved calibration (the calibration is saved in mongodb between runs), enter the command publish. When you run the calibrate command, it will check that you are within 3 degrees of the desired 46 degree angle of the chest camera; otherwise, no calibration will be stored.

calibrate_chest node (legacy)

  • Run rosrun calibrate_chest calibrate_chest with the datacentre running if you want to store the parameters there. Make sure that the largest visible plane for the chest camera is the floor. Also, note that the chest camera has to be running and publishing on topic chest_xtion.
  • To use these parameters when you launch the bringup next time, be sure to have the datacentre running.
  • The urdf can be updated manually by doing rosrun calibrate_chest chest_calibration_publisher if you don’t want to restart the larger system (e.g. strands_movebase.launch, which includes this).

Example

When running rosrun calibrate_chest calibrate_chest the node tries to find the largest visible plane and determine the angle and height of the chest camera. It will display a window of the current point cloud, with the points belonging to the floor coloured in red. It should look something like the following; you might need a mouse to rotate the view:

Calibration point cloud example

Close the window to save the calibration.

Original page: https://github.com/strands-project/strands_movebase/blob/indigo-devel/calibrate_chest/README.md

strands_movebase

A repository for the STRANDS-augmented movebase, including 3D obstacle avoidance, etc. Relies on scitos_2d_navigation (https://github.com/strands-project/scitos_2d_navigation) if it is configured to use only laser scan input for navigation.

Usage

  • roslaunch strands_movebase movebase.launch map:=/path/to/map.yaml
  • To be able to use this package you have to create a map with gmapping. This can be done with rosrun gmapping slam_gmapping; save the map with rosrun map_server map_saver.
  • Each launch file takes the argument map which is the path to the map saved with gmapping.
  • Optionally, provide with_no_go_map:=true and no_go_map, the path to a map annotated with no-go areas.
  • If you do not want to launch it with the 3d obstacle avoidance, provide the additional argument with_camera:=false.
  • Provide camera:=<camera_namespace> if you have an OpenNI camera publishing on another namespace than the default chest_xtion.
  • Optionally provide z_obstacle_threshold:=<value>, where <value> is the height in metres above the floor at which points are considered obstacles. Making this larger than the default 0.1 improves navigation robustness but may lead to small obstacles being missed.
  • The same goes for z_stair_threshold:=<value>, the distance below the floor at which points are considered negative obstacles. Again, increasing it improves robustness, but make sure you don’t have any stairs with smaller gaps than this.

If you run with the camera option, be sure that you have a depth camera publishing on the camera_namespace topic. The camera also needs a valid TF transform connecting it to base_link. For more details on the Strands solution, see https://github.com/strands-project/strands_movebase/tree/hydro-devel/calibrate_chest and https://github.com/strands-project/strands_movebase/tree/hydro-devel/strands_description.

Original page: https://github.com/strands-project/strands_movebase/blob/indigo-devel/README.md

Message Store Map Server

This package provides tools for serving OccupancyGrid maps from the mongodb_store.

Running the datacentre

All of the below assumes you have the mongodb_store nodes running. Do this with:

roslaunch mongodb_store datacentre.launch

Adding maps to the datacentre

At the moment maps must be added using the message_store_map_saver executable. This loads a map described by a yaml file (using code from the main map_server) and then inserts the map into the message store with the name up to the last “.” in the yaml file name. Usage is:

rosrun message_store_map_switcher message_store_map_saver <map.yaml>

E.g.

rosrun message_store_map_switcher message_store_map_saver cs_lg.yaml

This results in a map named cs_lg being added to the message store. It currently uses the default database and collection (both named message_store).

Running the map server

The message_store_map_server can be started as follows:

rosrun message_store_map_switcher message_store_map_server.py

This will start it without any map being published. If you want to start with a default map you can use the -d command line parameter:

rosrun message_store_map_switcher message_store_map_server.py -d cs_lg

This can be overridden with the ROS parameter “default_map”.

Switching maps

To switch to another map from the message store, you must use the message_store_map_switcher/SwitchMap service at /switch_map. This accepts a string for the map name and returns a boolean indicating whether the map was switched. For example:

rosservice call /switch_map "cs_lg"
result: True
rosservice call /switch_map "not_cs_lg"
result: False
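The same service can be called from Python; a minimal sketch (the request is passed positionally since the exact field names in SwitchMap.srv are not shown here):

    #!/usr/bin/env python
    # Sketch of switching maps from a ROS node via the /switch_map service.
    import rospy
    from message_store_map_switcher.srv import SwitchMap

    rospy.init_node('switch_map_example')
    rospy.wait_for_service('/switch_map')
    switch_map = rospy.ServiceProxy('/switch_map', SwitchMap)
    response = switch_map('cs_lg')   # single string argument: the map name
    rospy.loginfo('Switch result: %s', response)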

Original page: https://github.com/strands-project/strands_navigation/blob/indigo-devel/message_store_map_switcher/README.md

The monitored navigation package

This package provides an actionlib server that executes and monitors a continuous navigation actionlib server (e.g., move_base), and recovers from failures, possibly by asking for human help. It receives a goal in the form:

string action_server
geometry_msgs/PoseStamped target_pose

where action_server is an actionlib server that receives a geometry_msgs/PoseStamped goal (as move_base).

To run the bare-bone monitored navigation state machine you can do

rosrun monitored_navigation monitored_nav.py
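A goal can also be sent programmatically. The sketch below assumes the action is exposed under the monitored_navigation name with the type strands_navigation_msgs/MonitoredNavigationAction; check the package if those names differ.

    #!/usr/bin/env python
    # Sketch of sending a monitored navigation goal from Python (action type and
    # server name are assumptions based on the goal definition above).
    import rospy
    import actionlib
    from geometry_msgs.msg import PoseStamped
    from strands_navigation_msgs.msg import MonitoredNavigationAction, MonitoredNavigationGoal

    rospy.init_node('monitored_nav_client_example')
    client = actionlib.SimpleActionClient('monitored_navigation', MonitoredNavigationAction)
    client.wait_for_server()

    goal = MonitoredNavigationGoal()
    goal.action_server = 'move_base'            # continuous navigation server to monitor
    goal.target_pose = PoseStamped()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.pose.position.x = 1.0
    goal.target_pose.pose.orientation.w = 1.0

    client.send_goal(goal)
    client.wait_for_result()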

Running monitored_navigation within the STRANDS system

To launch the monitored_navigation with the default STRANDS recovery behaviours and human help interfaces, you need to have strands_recovery_behaviours (https://github.com/strands-project/strands_recovery_behaviours) installed and do:

roslaunch strands_recovery_behaviours strands_monitored_nav.launch

Configuring monitors and recovery behaviours, and interfaces for asking help from humans

The monitored_navigation package provides a skeleton state machine where one can add smach implementations of monitor and recovery behaviours. Furthermore, it allows the robot to ask for human help via pre-defined interfaces.

  • The monitors and recoveries should be subclasses of the MonitorState (src/monitored_navigation/monitor_state.py) and RecoverStateMachine (src/monitored_navigation/recover_state_machine.py) classes respectively. A monitor runs in parallel with the execution of the continuous navigation action. When it outputs 'invalid', the continuous navigation, and any other monitors that are running, are preempted, and the state machine jumps to the corresponding recovery behaviour. Examples of STRANDS specific behaviours can be found here.
  • The interfaces for asking help should be subclasses of the UIHelper (src/monitored_navigation/ui_helper.py) class. The ask_help, being_helped, help_finished and help_failed methods need to be defined for the specific UI being used. Help is asked for by a RecoverStateMachine via a service call to '/monitored_navigation/human_help/'. Service definition here. The ask_help, being_helped, help_finished and help_failed methods receive data from the service request and do something appropriate, depending on the UI they are implementing (say something, send an email, …). Examples of STRANDS specific helpers can be found here.
At initialization

A yaml file that defines the monitor and recovery behaviours that should be added to the state machine at startup can be provided by doing

rosrun monitored_navigation monitored_nav.py path_to_yaml_file

The yaml file specifies the nav_recovery mechanism and the monitor_recovery_pairs.

  • The nav_recovery is specified by:
    • package: The package where the recovery behaviour can be found
    • recovery_file: The file where the RecoverStateMachine subclass implementation can be found (without the .py extension)
    • recovery_class: The name of the recovery class. This should be a subclass of RecoverStateMachine
  • The monitor_recovery_pairs is a list where each element is specified by:
    • name: The name to be used as an identifier of the monitor/recovery pair
    • package: The package where the monitor and recovery behaviour can be found
    • monitor_file: The file where the MonitorState subclass implementation can be found (without the .py extension)
    • monitor_class: The name of the monitor class. This should be a subclass of MonitorState
    • recovery_file: The file where the RecoverStateMachine subclass implementation can be found (without the .py extension)
    • recovery_class: The name of the recovery class. This should be a subclass of RecoverStateMachine
  • The human_help is a list where each element is specified by:
    • package: The package where the helper implementation can be found
    • helper_file: The file where the UIHelper subclass implementation can be found (without the .py extension)
    • recovery_class: The name of the helper class. This should be a subclass of UIHelper

An example yaml file for the STRANDS-specific configuration can be found here.
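For illustration, the structure described above can be sketched as the equivalent Python dictionary (package, file and class names below are placeholders, not real STRANDS components):

    # Illustration only: the layout monitored_nav.py expects from the YAML file,
    # written as a Python dictionary. All names are placeholders.
    import yaml

    config = {
        'nav_recovery': {
            'package': 'my_recovery_pkg',
            'recovery_file': 'backtrack',            # backtrack.py
            'recovery_class': 'Backtrack',           # subclass of RecoverStateMachine
        },
        'monitor_recovery_pairs': [{
            'name': 'bumper',
            'package': 'my_recovery_pkg',
            'monitor_file': 'bumper_monitor',        # bumper_monitor.py
            'monitor_class': 'BumperMonitor',        # subclass of MonitorState
            'recovery_file': 'bumper_recovery',
            'recovery_class': 'BumperRecovery',      # subclass of RecoverStateMachine
        }],
        'human_help': [{
            'package': 'my_helper_pkg',
            'helper_file': 'speech_helper',          # speech_helper.py
            'recovery_class': 'SpeechHelper',        # subclass of UIHelper (field name as listed above)
        }],
    }

    print(yaml.dump(config, default_flow_style=False))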

At run-time

Monitors and recovery behaviours can also be added and removed at runtime (only if the monitored navigation action server is not running). To do this, the monitored_navigation node provides the following services, based on the DynClassLoaderDef message. The meaning of the message fields is the same as described above for the yaml file:

  • /monitored_navigation/add_monitor_recovery_pair. Add a monitor/recovery pair. Service definition is here
  • /monitored_navigation/del_monitor_recovery_pair. Remove a monitor/recovery pair, by name. Service definition is here
  • /monitored_navigation/set_monitor_recovery_pairs. Set a list of monitor/recovery pairs. The ones currently being used are removed. Service definition is here
  • /monitored_navigation/get_monitor_recovery_pairs. Get current monitor/recovery pairs. Service definition is here
  • /monitored_navigation/set_nav_recovery. Set the recovery state machine for navigation. The current recovery for navigation is replaced by the new one. Service definition is here
  • /monitored_navigation/get_nav_recovery. Get the current recovery state machine for navigation. Service definition is here
  • /monitored_navigation/add_helper. Add a human helper interface. Service definition is here
  • /monitored_navigation/del_helper. Remove a human helper interface, by name. Service definition is here
  • /monitored_navigation/set_helpers. Set a list of human helper interfaces. The ones currently being used are removed. Service definition is here
  • /monitored_navigation/get_helpers. Get the list of current human helper interfaces. Service definition is here

Original page: https://github.com/strands-project/strands_navigation/blob/indigo-devel/monitored_navigation/README.md

Usage

First make sure that an occupancy grid map (nav_msgs/OccupancyGrid) is published on a topic, e.g. /map. Second, launch the service as follows:

roslaunch nav_goal_generation nav_goal_generation.launch

The default map topic is /map. It can be overridden by passing an argument as follows:

roslaunch nav_goal_generation nav_goal_generation.launch map:=/projected_map

If the map argument is a costmap, you should also set the flag is_costmap to true. Then the inflation radius in the service call is ignored (a costmap is already inflated):

roslaunch nav_goal_generation nav_goal_generation.launch map:=/move_base/global_costmap/costmap  is_costmap:=true

You can send a service request as follows:

rosservice call /nav_goals '{n: 100, inflation_radius: 0.5, roi: {points: [[0,0,0],[2,-2,0],[-2,-2,0]]}}'

whereby the first argument is the number of goal locations to be generated (here 100), the second argument is the inflation radius of the robot’s footprint (here 0.5), and the third argument is a ROI specified as a list of points (at least three). The result of the pose generation is additionally published on the topic /nav_goals in order to visualize the result in RViz.

If the service is called with an empty ROI, the full map is considered as ROI by default.

rosservice call /nav_goals '{n: 100, inflation_radius: 0.5, roi: {}}'

If a specified ROI includes a point that is outside the map, its conflicting coordinates are automatically adjusted to the map’s bounding box.
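The service can also be called from Python. The sketch below assumes the service type is importable as nav_goals_generator/NavGoals; the request fields follow the rosservice call above.

    #!/usr/bin/env python
    # Sketch of requesting goal poses from the /nav_goals service; the service
    # type name (NavGoals in nav_goals_generator) is an assumption.
    import rospy
    from geometry_msgs.msg import Point32, Polygon
    from nav_goals_generator.srv import NavGoals

    rospy.init_node('nav_goals_example')
    rospy.wait_for_service('/nav_goals')
    nav_goals = rospy.ServiceProxy('/nav_goals', NavGoals)

    roi = Polygon(points=[Point32(0, 0, 0), Point32(2, -2, 0), Point32(-2, -2, 0)])
    response = nav_goals(100, 0.5, roi)   # n, inflation_radius, roi
    rospy.loginfo('Response: %s', response)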

Known Issues

The service fails if there is no map topic available or no message has been published on this topic after the service has been started. As a workaround, the map server could be started after this service.

Original page: https://github.com/strands-project/strands_navigation/blob/indigo-devel/nav_goals_generator/README.md

Topological logging based on a white list of nodes

Due to privacy protection we might not be allowed to record certain data in specific areas, like the east wing of AAF for example. This package contains a node that uses the topological map information to trigger logging of sensitive data.

Usage

The logging is based on a so-called white list of nodes, e.g. this AAF example:

nodes:
    - "Eingang"
    - "Rezeption"
    - "WayPoint2"
    - "WayPoint17"
    - "WayPoint35"

This file is in YAML format. The node has to be started with _white_list_file:=<file_name>.

It monitors the topics for the current edge and current node. Whenever the robot is at a node that is in the white list, the script publishes true, and false otherwise. In addition, it uses the look-up service of topological navigation to get the edges between these nodes; when the robot is on such an edge, it also publishes true, and false otherwise. The publishing rate is 30 Hz and can be changed with the _publishing_rate parameter.

To allow logging on all nodes and edges use:

nodes:
    - "ALL"

which you can find in the conf dir.

The node publishes a bool and a “stamped bool”, which is the custom LoggingManager.msg containing a header and a bool. The stamped bool is useful in combination with the ApproximateTimeSynchronizer. Let’s say you want to log camera images:

import rospy
import message_filters
from topological_logging_manager.msg import LoggingManager
from sensor_msgs.msg import Image
...

class SaveImages():
    def __init__(self):
        ...

        subs = [
            message_filters.Subscriber(rospy.get_param("~rgb", "/head_xtion/rgb/image_rect_color"), Image),
            message_filters.Subscriber(rospy.get_param("~logging_manager", "/logging_manager/log_stamped"), LoggingManager),
        ]

        ts = message_filters.ApproximateTimeSynchronizer(subs, queue_size=5, slop=0.1)
        ts.registerCallback(self.cb)
...

    def cb(self, img, log):
        if log:
            ...
...

If the message is not published, the callback will not be triggered; if it is published, you can check whether recording is allowed (true) or not (false).

The normal bool can be used as a flag for everything else.
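A minimal sketch of using the plain bool as such a flag (the topic name is the default from the parameter list below):

    #!/usr/bin/env python
    # Sketch: keep a "recording allowed" flag updated from the std_msgs/Bool
    # output of the logging manager, defaulting to "do not record".
    import rospy
    from std_msgs.msg import Bool

    class RecordingFlag(object):
        def __init__(self):
            self.allowed = False   # default: do not record
            rospy.Subscriber('/logging_manager/log', Bool, self.cb)

        def cb(self, msg):
            self.allowed = msg.data

    if __name__ == '__main__':
        rospy.init_node('recording_flag_example')
        RecordingFlag()
        rospy.spin()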

Parameters

  • white_list_file: The yaml file containing the node names.
  • current_node_topic default="/current_node": The topic where the current node is published.
  • closest_node_topic default="/closest_node": The topic where the closest node is published.
  • use_closest_node default="true": If true, the closest node is used to determine the result. If false, only being in the influence zone of the node will give true.
  • edge_topic default="/current_edge": The topic where the current edge is published.
  • check_topic_rate default="1": The rate at which the edge and node topics are checked to see if they are still alive.
  • publishing_rate default="30": Rate at which the result is published; 30 to match camera output.
  • bool_publisher_topic default="/logging_manager/log": The topic on which the std_msgs/Bool is published.
  • bool_stamped_publisher_topic default="/logging_manager/log_stamped": The topic on which the topological_logging_manager/LoggingManager is published.

Security

To ensure that this works as reliably as possible, the default should always be not to record. As mentioned above, using the “stamped bool” achieves this quite easily. To ensure that the node always publishes the correct values, it has topic monitors for the two used topics that check every second whether they are still alive. The results are published in a different thread because the edges and nodes are only published when they change; the monitors are therefore used to detect if the navigation died.

Original page: https://github.com/strands-project/strands_navigation/blob/indigo-devel/topological_logging_manager/README.md

topological_rviz_tools

Rviz tool for creating a STRANDS topological map

Usage

This rviz toolset can be launched using

roslaunch topological_rviz_tools strands_rviz_topmap.launch map:=/path/to/map.yaml topmap:=topmap_pointset db_path:=/path/to/db standalone:=true

map specifies the yaml file for the map you are using, and topmap the corresponding pointset that exists in the database. db_path points to the database you want to use. If standalone is true, everything needed will be run automatically. If false, it is assumed that other parts of the STRANDS system are running (navigation and mongodb_store in particular), and only a few additional components are run.

You can also add the node tool, edge tool and topological map panel to rviz individually by using the buttons for adding tools or panels. You will need to run the above launch file with standalone:=false for this to work correctly.

When you launch with a database which contains a topological map, you should see something like the following:

Annotated rviz_tools image

You can move nodes around by clicking the arrow at the centre of topological nodes and dragging. The ring around the node allows you to change the orientation of the node. You can delete edges using the red arrows in the middle of edges.

The following sections give a little more detail about the tools and panel labelled in the image above.

1. The edge tool

Use this tool to create edges between nodes. Left click to select the start point of the edge, then click again to create an edge. The edge will be created between the nodes closest to the two clicks, but only if there are nodes within some distance of the clicks. Left clicking will create a bidirectional edge, whereas right clicking will create an edge only from the node closest to the first click to the second one. This tool stays on until you press escape.

The shortcut is e.

2. The node tool

This tool allows you to add nodes to the topological map. Click the tool and then click on the map to add a node in that location. Edges will automatically be added between the new node and any nodes in close proximity.

The shortcut is n.

3. Add tag button

This button allows you to add tags to nodes. You can select multiple nodes, and the tag you enter in the dialog box will be added to all of them.

4. Remove button

With this button, you can remove edges, tags, and nodes from the topological map. You can select multiple elements and they will all be removed at once.

5. Topological map panel

You can see all the elements of the topological map here. You can edit the following elements:

  • Node name
  • Node pose
  • Node tags
  • Node yaw tolerance
  • Node xy tolerance
  • Edge action
  • Edge velocity

Ctrl-click allows you to select multiple distinct elements. Shift-click will select elements between the previously selected element and the current one.

Original page: https://github.com/strands-project/strands_navigation/blob/indigo-devel/topological_rviz_tools/README.md

Topological Navigation

This node provides support for topological navigation in the STRANDS system.

This module requires:

  • move_base
  • strands_datacentre
  • monitored_navigation

You will also need to create a topological map.

Launching Topological Localisation and Navigation

  1. Once your map is inserted in the DB you can launch your topological navigation nodes using: roslaunch topological_navigation topological_navigation.launch map:=topological_map_name node_by_node:=false

note: When node-by-node navigation is active, the robot will cancel the topological navigation when it reaches the influence zone of a node and will navigate to the node’s waypoint, aborting the action.

note: Statistics are being recorded in the nav_stats collection within the autonomous_patrolling

note: Every action server for the actions stored in the topological map has to be running. For example, if the ramp_climb action is required you will need the ramp_climb server running; you can run it using: rosrun ramp_climb ramp_climb

Create Topological Maps

There are different ways of creating topological maps:

  1. Simultaneous Metric and Topological Mapping
  2. From WayPoint File
  3. Using RViz
  4. Using Gamepad
  5. Edit using MongoDB Client

  6. Using Localise by Topic

Simultaneous Metric and Topological Mapping

@To improve

To follow this process you will need to place the robot in the charging station and reset the odometry. Once it is there you can launch the mapping by running:

roslaunch topological_utils mapping.launch met_map:='name_of_metric_map' top_map:='name_of_topological_map'

Once this is done undock the robot using the Visual_charging_server.

When the robot is undocked you can move it around using the gamepad. When you wish to create a new topological node, press button ‘B’; once you are done, save your metric map by pressing ‘A’.

To edit the topological map you can use the following methods.

Creation of the Topological map From WayPoint File

  1. (Modified) The first step is to create the waypoint file by running: rosrun topological_utils joy_add_waypoint.py name_of_the_waypoint_file.csv
  2. Next create a topological map file from the waypoint file using: rosrun topological_utils tmap_from_waypoints.py input_file.csv output_file.tmap. This will create a topological map in which every waypoint in the file is a node, each waypoint is connected to every other waypoint using move_base, and each node has an octagonal influence area. For example:

From a normal waypoint file like this:

    STON
    -2.3003,0.120533,-4.88295e-05,0,0,-0.0530802,0.99859
    -7.95369,-3.03503,-4.4791e-05,0,0,0.928319,-0.371784
    -8.91935,2.94528,0.00139425,0,0,0.473654,0.880711
    -2.68708,-2.23003,-0.000478224,0,0,-0.759842,0.650108

it will create a topological map file like this:

    node: ChargingPoint
        waypoint: -2.3003,0.120533,-4.88295e-05,0,0,-0.0530802,0.99859
        edges:
            WayPoint1, move_base
            WayPoint2, move_base
            WayPoint3, move_base
        vertices:
            1.380000,0.574000  0.574000,1.380000  -0.574000,1.380000  -1.380000,0.574000
            -1.380000,-0.574000  -0.574000,-1.380000  0.574000,-1.380000  1.380000,-0.574000
    node: WayPoint1
        waypoint: -7.95369,-3.03503,-4.4791e-05,0,0,0.928319,-0.371784
        edges:
            ChargingPoint, move_base
            WayPoint2, move_base
            WayPoint3, move_base
        vertices:
            1.380000,0.574000  0.574000,1.380000  -0.574000,1.380000  -1.380000,0.574000
            -1.380000,-0.574000  -0.574000,-1.380000  0.574000,-1.380000  1.380000,-0.574000
    node: WayPoint2
        waypoint: -8.91935,2.94528,0.00139425,0,0,0.473654,0.880711
        edges:
            ChargingPoint, move_base
            WayPoint1, move_base
            WayPoint3, move_base
        vertices:
            1.380000,0.574000  0.574000,1.380000  -0.574000,1.380000  -1.380000,0.574000
            -1.380000,-0.574000  -0.574000,-1.380000  0.574000,-1.380000  1.380000,-0.574000
    node: WayPoint3
        waypoint: -2.68708,-2.23003,-0.000478224,0,0,-0.759842,0.650108
        edges:
            ChargingPoint, move_base
            WayPoint1, move_base
            WayPoint2, move_base
        vertices:
            1.380000,0.574000  0.574000,1.380000  -0.574000,1.380000  -1.380000,0.574000
            -1.380000,-0.574000  -0.574000,-1.380000  0.574000,-1.380000  1.380000,-0.574000

  3. Editing the topological map: let’s say that you want to make sure that a node is reached from another node using a specific action like ramp_climb; then you will need to edit the topological map file. You can also edit the vertices of the influence area so it can be defined using different kinds of polygons. For example, let’s suppose that in the previous map you want WayPoint3 to be reached only from WayPoint1 using ramp_climb, and that you want to define rectangular areas around your waypoints and a triangular area around the ChargingPoint; in that case your map file should look like this:

    node: ChargingPoint
        waypoint: -2.3003,0.120533,-4.88295e-05,0,0,-0.0530802,0.99859
        edges:
            WayPoint1, move_base
            WayPoint2, move_base
        vertices:
            0.0,1.5  -1.5,-1.5  1.5,-1.5
    node: WayPoint1
        waypoint: -7.95369,-3.03503,-4.4791e-05,0,0,0.928319,-0.371784
        edges:
            ChargingPoint, move_base
            WayPoint2, move_base
            WayPoint3, ramp_climb
        vertices:
            1.380000,0.574000  -1.380000,0.574000  -1.380000,-0.574000  1.380000,-0.574000
    node: WayPoint2
        waypoint: -8.91935,2.94528,0.00139425,0,0,0.473654,0.880711
        edges:
            ChargingPoint, move_base
            WayPoint1, move_base
        vertices:
            0.574000,1.380000  -0.574000,1.380000  -0.574000,-1.380000  0.574000,-1.380000
    node: WayPoint3
        waypoint: -2.68708,-2.23003,-0.000478224,0,0,-0.759842,0.650108
        edges:
            WayPoint1, ramp_climb
        vertices:
            1.5,1.5  -1.5,1.5  -1.5,-1.5  1.5,-1.5

In this case the topological navigation node will always call ramp_climb when reaching or leaving WayPoint3; you can also edit the names of the nodes to fit your needs.

note: At the moment only move_base and ramp_climb can be included.

note: The polygons of the influence area will be created using the convex-hull approach so they may not correspond exactly to the vertices previously defined.

  4. Once you are happy with your topological map you have to insert it in the strands_datacentre using: rosrun topological_utils insert_map.py topological_map.tmap topological_map_name map_name

Using RViz

RViz can be used to edit topological maps at system runtime; however, it is also possible to insert a basic topological map and edit it using RViz. The following steps will guide you through this process.

  1. It is necessary to insert a basic map in the DB and run the topological system based on it. For this, a launch file is provided and it can be used in the following way:
  • Launch 2d navigation using for example: roslaunch strands_movebase movebase.launch map:=/path/to/your/map.yaml with_chest_xtion:=false
  • Launch empty topological map: roslaunch topological_navigation topological_navigation_empty_map.launch map:='name_of_topological_map'

This will create a basic map and run the topological system with it.

  2. Once this is done you can run RViz (rosrun rviz rviz) and create two marker array interfaces, /topological_node_zones_array and /topological_edges_array; optionally an additional marker array interface /topological_nodes_array can be created. This will allow the visualization of the topological map.
  3. After this the map can be edited using interactive markers:
  • Adding Nodes: for this there is an interactive marker topic called /name_of_your_map/add_rm_node/update that will put a green box on top of the robot. Drive the robot to where you want to add a new node and press on the green box; this will add a node there, connected to all nearby nodes by a move_base action.
  • Editing Node Position: with /name_of_your_map_markers/update it is possible to move the waypoints around and change their final goal orientation.
  • Removing Unnecessary Edges: to remove edges there is another marker /name_of_your_map_edges/update that will allow you to remove edges by pressing on the red arrows.
  • Change the Influence Zones: finally it is possible to change the influence zones with /name_of_your_map_zones/update by moving the vertices of such zones around.

Using Gamepad

This method is basically the same as the previous method (follow steps 1 and 2, also 3 if/when needed) with the addition that in this case it is possible to add nodes at the current robot position by pressing button ‘B’ on the Gamepad.

Edit using MongoDB Client

@TODO

Using Localise by Topic

Localise by topic is a JSON string defined in the topological map which is empty by default ("localise_by_topic" : ""), which means that the robot will use its pose to obtain its location on the topological map.

However, in some specific cases it might be necessary to localise the robot by means of a specific topic. In these cases the localise by topic string should be defined with the following mandatory fields:

  • topic: the name of the topic that will be used for localisation
  • field: the field of the topic that will be compared
  • val: the value of field that will be used to set the current node (/current_node)

A typical localise by topic string will look like this:

'{"topic":"/battery_state","field":"charging","val":true}'

There are also some optional fields that can be set:

  • localise_anywhere: (default value true) If set to true, topological localisation will assume the robot to be at the node whenever localisation by topic corresponds to the values in the string; otherwise this will only happen when the robot is in the influence area of the specific node.
  • persistency: (default value 10) The number of consecutive messages with the correct value that must arrive before localisation by topic is confirmed.
  • timeout: @todo
  • Please note: when localise by topic is active the robot will never assume this node to be the closest node unless it is also the current node.
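For illustration, the localise_by_topic string shown above can be generated with Python’s json module, which also takes care of the quoting:

    # Sketch: build a localise_by_topic string from the mandatory fields above.
    import json

    localise_by_topic = json.dumps({
        'topic': '/battery_state',   # topic used for localisation
        'field': 'charging',         # message field that is compared
        'val': True,                 # value that marks the robot as being at the node
    })
    print(localise_by_topic)         # e.g. {"topic": "/battery_state", "field": "charging", "val": true}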

Original page: https://github.com/strands-project/strands_navigation/blob/indigo-devel/topological_navigation/README.md

Topological Navigation tests

The topological navigation tests are designed to optimise the parameter set of the DWA planner and to check the functionalities of topological navigation in its execute policy mode. The tests are run automatically on our Jenkins server but can also be run locally on your machine to test a new parameter set.

Run test

Tests for your whole workspace can be run with catkin_make test, which assumes that strands_navigation is present in your ROS workspace. If you only want to run the tests described here, use catkin_make test --pkg topological_navigation. This will only run the critical topological navigation tests. A set of supplementary tests can be run with:

roslaunch topological_navigation navigation_scenarios.test
rosrun topological_navigation topological_navigation_tester_supplementary.py

To run the tests from an installed version of topological_navigation do:

roslaunch topological_navigation navigation_scenarios.test

and

rosrun topological_navigation topological_navigation_tester_critical.py

for the critical tests or

rosrun topological_navigation topological_navigation_tester_supplementary.py

for the supplementary tests to test parameter sets.

This will start all the tests and report the result to the terminal and a log file.

Test Scenarios, and Infrastructure

Infrastructure

In order to run the tests, the commands above will start the following nodes:

  • strands_morse using the move_base_arena environment
  • mongodb_store in test mode, creating a new and empty temporary datacentre
  • scitos_2d_navigation with the correct slam map for the tests
  • A static transform publisher with the transformation from map to world
  • topological_navigation:
  • monitored_navigation with all recovery behaviours disabled
  • map_manager.py
  • fremenserver
  • localisation.py
  • navigation.py
  • execute_policy_server.py
  • visualise_map.py
  • travel_time_estimator.py
  • topological_prediction.py
  • get_simple_policy.py
  • The test scenario server which is responsible for the test control
  • The ros test file topological_navigation_tester.py
Test Scenarios

All the test scenarios are based on a simple topological map that has to have a node called Start and a node called End. The robot will always be teleported (in simulation) or pushed (on the real robot) to this node and then use policy execution to traverse the edges towards the node called End. If the node End cannot be reached, the navigation will fail and the scenario_server will trigger the graceful death behaviour, trying to navigate the robot back to the node Start.

The topological maps used for the tests presented here can be found in strands_morse/mba/maps and are loaded into the datacentre (if they haven’t already been inserted) by the scenario_server. The tests currently run are:

Static:

  • System critical:
  • mb_test0: Traversing a 2m wide l-shaped corridor.
  • Supplementary:
  • mb_test1: The robot starting 10cm away from a wall facing it straight on (0 degrees)
  • mb_test2: The robot starting 10cm away from a wall facing it turned to the left (-45 degrees)
  • mb_test3: The robot starting 10cm away from a wall facing it turned to the right (+45 degrees)
  • mb_test4: Traversing a 1.55m wide straight corridor with .55m wide chairs on one side.
  • mb_test5: Traversing a 2.1m wide straight corridor with .55m wide chairs on both sides.
  • mb_test6: Cross a 80cm wide door.
  • mb_test7: Cross a 70cm wide door.
  • mb_test8: The robot is trapped in a corner and has to reach a goal behind it.
  • mb_test9: Traverse a 1m wide straight corridor.
  • mb_test10: Traverse a 1m wide l-shaped corridor.
  • mb_test11: Wheelchair blocking an intermediate node in open space.
  • mb_test12: Wheelchair blocking final node in open space. Graceful fail has to be true to pass this test as the node itself can never be reached precisely.
  • mb_test13: Human blocking an intermediate node in open space.
  • mb_test14: Human blocking final node in open space. Graceful fail has to be true to pass this test as the node itself can never be reached precisely.
  • mb_test15: Non-SLAM map chairs on one side of 2m wide l-shaped corridor.
  • mb_test16: Non-SLAM map wheelchairs on one side of 2m wide l-shaped corridor.
  • mb_test17: Non-SLAM map wheelchairs block 2m wide l-shaped corridor after intermediate waypoint. Graceful fail has to be true to pass this test as the node itself can never be reached.
  • mb_test18: Non-SLAM map static humans block 2m wide l-shaped corridor after intermediate waypoint. Graceful fail has to be true to pass this test as the node itself can never be reached.

Dynamic

Coming soon

Scenario Server control

The scenario server can also be used without the unit test file for tests on the real robot. Running

roslaunch topological_navigation navigation_scenarios.test robot:=true map_dir:="path_to_topologicalmaps"

will start only the scenario server, the simple policy generation, and the joypad control. This assumes that everything on the robot, up to topological navigation, is running. The map_dir argument specifies a directory which holds the topological maps that you want to use for testing. If this is given, the maps will automatically be inserted into the datacentre. If the maps have been inserted previously, the server will print a warning and skip the insertion. If the map_dir argument is omitted, no maps are inserted. The scenario server then offers 3 simple services:

  • /scenario_server/load <map_name> expects a string which is the name of the topological map you want to test. Keep in mind that this map has to have the nodes Start and End.
  • /scenario_server/reset is an empty service and is called to reset the data recording and the robot position. The robot position, however, cannot as easily be changed in real life as it can be in simulation, hence the robot has to be pushed to the starting node and then confirmed via the A button on the joypad. The reasoning behind having to push the robot is that the starting position might not be easily reachable via move_base. The server will print +++ Please push the robot to 'Start' +++ where Start is the name of the starting node, until the node is reached. Once the node is reached, the server will print +++ Please confirm correct positioning with A button on joypad: distance 0.65m 180.06deg +++ where distance represents the distance of the current /robot_pose to the metric coordinates of the node. In simulation this will just teleport the robot to the correct node.
  • /scenario_server/start starts the policy execution. It returns:

bool nav_success
bool graceful_fail
bool human_success
float64 min_distance_to_human
float64 mean_speed
float64 distance_travelled
float64 travel_time

Joypad control

For convenience, a joypad control of the reset and start service is provided. The load service still has to be called manually to tell the server which map to use. The joypad control then offers an easy way to interact with the scenario server during the tests:

  • A: Toggle between start and reset.
  • B: Confirm current selection.

If in doubt, press A and look for the output on the terminal.

Creating Topological maps for testing

The topological map used for each test has to be a different one and the nodes have to conform to a specific naming scheme. By default, the start node has to be called Start and the goal has to be called End. This can be changed in topological_navigation/tests/conf/scenario_server.yaml. The easiest way to create these maps is:

  1. Start topological navigation with roslaunch topological_navigation topological_navigation_empty_map.launch map:=<map_name> where map_name will be the name of your new map and cannot be the name of an existing one.
  2. Drive the robot to the positions of the nodes you want to create and use the add_rm_node interactive marker in rviz to create a new node.
  3. Use the interactive edges marker in rviz to delete unwanted edges.
  4. Rename the start and end node to Start and End using rosrun topological_utils rename_node <old_name> <new_name> <map_name>
  5. This map can now be loaded with /scenario_server/load <map_name>

Creating scenarios with obstacles that are not in the SLAM map

For this purpose, there are Chairs, OfficeChairs, WheelChairs, and StaticHumans present in the current test environment that can be positioned via topological nodes. The exact obstacle types are defined in conf/scenario_server.yaml under obstacle_types; the names used have to be the variable name of the obstacle in the MORSE environment. Obstacle types have to be lower case, node names can be camel case to enhance readability. Currently, in the test environment, each object has 10 instances so you can only use 10 of any single object. The objects are positioned according to topological nodes following a naming scheme: ObstacleStaticHuman1 for example positions a static human model on this node. Obstacle is the obstacle_node_prefix defined in conf/scenario_server.yaml, StaticHuman is the identifier of the obstacle type (if this is omitted, an arbitrary object will be used), and 1 is just an arbitrary number to make the node name unique. When creating the topological map, make sure that all edges from and to obstacle nodes are removed. Additionally, the nodes need to define a localise_by_topic json string so the robot will never localise itself based on these nodes. In the current simulation, we use the charging topic, because the robot will never charge and hence never localise itself at these nodes. We have to use an existing topic, otherwise topological navigation fails. Example node:

- meta:
     map: mb_arena
     node: ObstacleStaticHuman1
     pointset: mb_test16
   node:
     edges: []
     localise_by_topic: '{"topic": "/battery_state", "field": "charging", "val": true}'
     map: mb_arena
     name: ObstacleStaticHuman1
     pointset: mb_test16
     pose:
       orientation:
         w: 0.816770076752
         ...
       position:
        x: -4.58532047272
        ...
     verts:
     - x: 0.689999997616
       y: 0.287000000477
     ...
     xy_goal_tolerance: 0.3
     yaw_goal_tolerance: 0.1

Where the important bit is the localise_by_topic: '{"topic": "/battery_state", "field": "charging", "val": true}' entry.

After the map has been loaded, the obstacles will be spawned at (or rather, moved to) the respective nodes before the robot starts navigating. Before each test the arena is cleared to make sure that no obstacles linger.

Original page: https://github.com/strands-project/strands_navigation/blob/indigo-devel/topological_navigation/tests/README.md

Topological Map

A topological map in the STRANDS system is published on the /topological_map topic; its ROS message type is defined in the following way:

string name          # topological map name generally should be the same as in the pointset field
string map           # 2D Metric map name (not used currently)
string pointset      # The name of the group of nodes that compose the topological map, typically the same as name
string last_updated  # Last time the map was modified
strands_navigation_msgs/TopologicalNode[] nodes  # The group of nodes that composes the topological map

Topological Nodes

The nodes are the locations to which the robot should navigate in its working environment; these locations correspond to coordinates in the robot’s map frame. The nodes are defined as a ROS message in the following way (https://github.com/strands-project/strands_navigation/blob/indigo-devel/strands_navigation_msgs/msg/TopologicalNode.msg):

string name                 # Node name
string map                  # 2D Metric map name (not used currently)
string pointset             # The name of the group of nodes that compose the topological map
geometry_msgs/Pose pose     # Node X,Y,Z,W coordinates in the map frame
float64 yaw_goal_tolerance  # The tolerance in radians for the waypoint position
float64 xy_goal_tolerance   # The tolerance in meters for the waypoint position
Vertex[] verts              # X,Y coordinates for the vertices of the influence zone relative to the node pose
Edge[] edges                # The group of outward oriented edges connecting this node to others
string localise_by_topic    # The configuration for localisation by topic

Edges

The edges are the definitions of the connections between nodes. Typically these connections are included in the node message, which means that the edges defined for each node are the ones for which that node is the point of origin. They are also defined as a ROS message type in the following way:

string edge_id                        # An identifier for the edge
string node                           # Name of destination node
string action                         # Name of navigation action, e.g. move_base
float64 top_vel                       # Maximum speed in [m/s]
string map_2d                         # Name of the 2D map
float64 inflation_radius              # Obstacle inflation radius on this edge, 0 for default
string recovery_behaviours_config     # Not currently used, typically a json string
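Putting these definitions together, a node can subscribe to the map and walk its nodes and edges. A minimal sketch, assuming the map message type is strands_navigation_msgs/TopologicalMap:

    #!/usr/bin/env python
    # Sketch: print every node of the current topological map and its outgoing edges.
    import rospy
    from strands_navigation_msgs.msg import TopologicalMap   # assumed message type

    def map_cb(msg):
        rospy.loginfo('Map %s (pointset %s) has %d nodes', msg.name, msg.pointset, len(msg.nodes))
        for node in msg.nodes:
            targets = [e.node for e in node.edges]
            rospy.loginfo('  %s -> %s', node.name, targets)

    rospy.init_node('topological_map_listener')
    rospy.Subscriber('/topological_map', TopologicalMap, map_cb)
    rospy.spin()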

YAML format

The topological map can also be defined in YAML format; this is used to export and manually modify maps. A map in this format would look like this:

- meta:
    map: mb_arena
    node: Start
    pointset: mb_test2
  node:
    edges:
    - action: move_base
      edge_id: Start_End
      inflation_radius: 0.0
      map_2d: mb_arena
      node: End
      recovery_behaviours_config: ''
      top_vel: 0.55
    localise_by_topic: ''
    map: mb_arena
    name: Start
    pointset: mb_test2
    pose:
      orientation:
        w: 0.92388
        x: 0
        y: 0
        z: 0.38268
      position:
        x: -1.5
        y: 2.6
        z: 0.0
    verts:
    - x: 0.689999997616
      y: 0.287000000477
    - x: 0.287000000477
      y: 0.689999997616
    - x: -0.287000000477
      y: 0.689999997616
    - x: -0.689999997616
      y: 0.287000000477
    - x: -0.689999997616
      y: -0.287000000477
    - x: -0.287000000477
      y: -0.689999997616
    - x: 0.287000000477
      y: -0.689999997616
    - x: 0.689999997616
      y: -0.287000000477
    xy_goal_tolerance: 0.3
    yaw_goal_tolerance: 0.1
- meta:
    map: mb_arena
    node: End
    pointset: mb_test2
  node:
    edges:
    - action: move_base
      edge_id: End_Start
      inflation_radius: 0.0
      map_2d: mb_arena
      node: Start
      recovery_behaviours_config: ''
      top_vel: 0.55
    localise_by_topic: ''
    map: mb_arena
    name: End
    pointset: mb_test2
    pose:
      orientation:
        w: 0.70711
        x: 0.0
        y: 0.0
        z: -0.70711
      position:
        x: -1.5
        y: -0.5
        z: 0.0
    verts:
    - x: 0.689999997616
      y: 0.287000000477
    - x: 0.287000000477
      y: 0.689999997616
    - x: -0.287000000477
      y: 0.689999997616
    - x: -0.689999997616
      y: 0.287000000477
    - x: -0.689999997616
      y: -0.287000000477
    - x: -0.287000000477
      y: -0.689999997616
    - x: 0.287000000477
      y: -0.689999997616
    - x: 0.689999997616
      y: -0.287000000477
    xy_goal_tolerance: 0.3
    yaw_goal_tolerance: 0.1

All the fields correspond to the definitions presented before; please note that each node has a “meta” field.

Each node is stored in the datacentre with additional meta information. In the YAML version of a topological map this meta information MUST be included with at least the three fields shown in the map; however, any other meta information can be added as long as it is included in valid YAML format, for example:

- meta:
    map: mb_arena
    node: End
    pointset: mb_test2
    example: 'this is an example field'

Original page: https://github.com/strands-project/strands_navigation/wiki/Topological-Map-Definition

Topological maps are stored in the topological_maps collection in the message_store database. Using Robomongo (or the mongodb tool) one can easily run mongodb queries and updates to inspect or modify the topological maps. See http://docs.mongodb.org/manual/reference/method/db.collection.update/#db.collection.update for more information about update.

Robomongo

To run queries in robomongo connect to the mongodb_store database (usually running on port 62345), and then open a shell in the message_store database by right clicking on it.

Searching for Tags

db.getCollection('topological_maps').find(
    {pointset:'aaf_predep', '_meta.tag':'no_go'}
)

Changing Actions of Edges

This will find all entries of pointset aaf_predep where the action is defined as move_base. Then it will set the first matching entry in the edges array to human_aware_navigation.

Note that this only replaces the first matching value in the array, hence it needs to be executed repeatedly until no more changes occur. MongoDB unfortunately doesn’t yet allow changing all of them at once (see https://jira.mongodb.org/browse/SERVER-1243).

db.topological_maps.update(
    {pointset:'aaf_predep', 'edges.action':'move_base'},
    {$set: {'edges.$.action':'human_aware_navigation'}},
    {multi:true}
)
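The same update can also be scripted, for example with pymongo, by repeating it until no more documents are modified (working around the limitation noted above). The host and port below follow the Robomongo instructions; adjust them for your datacentre.

    # Sketch (pymongo 3+): repeat the update until nothing changes any more.
    import pymongo

    client = pymongo.MongoClient('localhost', 62345)
    topo_maps = client['message_store']['topological_maps']

    while True:
        result = topo_maps.update_many(
            {'pointset': 'aaf_predep', 'edges.action': 'move_base'},
            {'$set': {'edges.$.action': 'human_aware_navigation'}})
        if result.modified_count == 0:
            break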

Original page: https://github.com/strands-project/strands_navigation/wiki/Useful-Mongodb-Queries-and-Updates

People Tracker

This package uses the bayes_tracking library developed by Nicola Bellotto (University of Lincoln), please cite with: 10.5281/zenodo.15825 and [1]

The people_tracker uses a single config file to add an arbitrary amount of detectors. The file config/detectors.yaml contains the necessary information for the upper_body_detector and the ROS leg_detector (see to_pose_array in detector_msg_to_pose_array/README.md):

bayes_people_tracker:
    filter_type: "UKF"                                         # The Kalman filter type: EKF = Extended Kalman Filter, UKF = Unscented Kalman Filter
    cv_noise_params:                                           # The noise for the constant velocity prediction model
        x: 1.4
        y: 1.4
    detectors:                                                 # Add detectors under this namespace
        upper_body_detector:                                   # Name of detector (used internally to identify them). Has to be unique.
            topic: "/upper_body_detector/bounding_box_centres" # The topic on which the geometry_msgs/PoseArray is published
            cartesian_noise_params:                            # The noise for the cartesian observation model
                x: 0.5
                y: 0.5
            matching_algorithm: "NNJPDA"                       # The algorithm to match different detections. NN = Nearest Neighbour, NNJPDA = NN Joint Probability Data Association
        leg_detector:                                          # Name of detector (used internally to identify them). Has to be unique.
            topic: "/to_pose_array/leg_detector"               # The topic on which the geometry_msgs/PoseArray is published
            cartesian_noise_params:                            # The noise for the cartesian observation model
                x: 0.2
                y: 0.2
            matching_algorithm: "NNJPDA"                       # The algorithm to match different detections. NN = Nearest Neighbour, NNJPDA = NN Joint Probability Data Association

New detectors are added under the parameter namespace bayes_people_tracker/detectors. Let’s have a look at the upper body detector as an example:

Tracker Parameters

The tracker offers two configuration parameters:

  • filter_type: specifies which variant of the Kalman filter to use. Currently, it implements an Extended and an Unscented Kalman filter, which can be chosen via EKF and UKF, respectively.
  • cv_noise_params: used for the constant velocity prediction model; specifies the standard deviation of the x and y velocity.

Detector Parameters

  • For every detector you have to create a new namespace where the name is used as an internal identifier for this detector. Therefore it has to be unique. In this case it is upper_body_detector
  • The topic parameter specifies the topic under which the detections are published. The type has to be geometry_msgs/PoseArray. See to_pose_array in detector_msg_to_pose_array/README.md if your detector does not publish a PoseArray.
  • The cartesian_noise_params parameter is used for the Cartesian observation model; it specifies the standard deviation of x and y.
  • matching_algorithm specifies the algorithm used to match detections from different sensors/detectors. Currently there are two different algorithms which are based on the Mahalanobis distance of the detections (default being NNJPDA if parameter is misspelled):
  • NN: Nearest Neighbour
  • NNJPDA: Nearest Neighbour Joint Probability Data Association

All of these are just normal ROS parameters and can be either specified by the parameter server or using the yaml file in the provided launch file.

Message Type:

The tracker publishes the following:

  • pose: A geometry_msgs/PoseStamped for the closest person.
  • pose_array: A geometry_msgs/PoseArray for all detections.
  • people: A people_msgs/People for all detections. Can be used for layered costmaps.
  • marker_array: A visualization_msgs/MarkerArray showing little stick figures for every detection. Figures are oriented according to the direction of velocity.
  • positions: A bayes_people_tracker/PeopleTracker message. See below.

std_msgs/Header header
string[] uuids             # Unique uuid5 (NAMESPACE_DNS) person id as string. Id is based on system time on start-up and tracker id. Array index matches ids array index
geometry_msgs/Pose[] poses # The real world poses of the detected people in the given target frame. Default: /map. Array index matches ids/uuids array index
float64[] distances        # The distances of the detected persons to the robot (polar coordinates). Array index matches ids array index.
float64[] angles           # Angles of the detected persons to the coordinate frames z axis (polar coordinates). Array index matches ids array index.
float64 min_distance       # The minimal distance in the distances array.
float64 min_distance_angle # The angle according to the minimal distance.

The poses will be published in a given target_frame (see below) but the distances and angles will always be relative to the robot in the /base_link tf frame.
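A minimal consumer sketch for this message (the topic is the default /people_tracker/positions from the parameter list below):

    #!/usr/bin/env python
    # Sketch: log how many people are tracked and how close the nearest one is.
    import rospy
    from bayes_people_tracker.msg import PeopleTracker

    def cb(msg):
        if msg.uuids:
            rospy.loginfo('%d people tracked, closest at %.2f m (%.2f rad)',
                          len(msg.uuids), msg.min_distance, msg.min_distance_angle)

    rospy.init_node('people_tracker_listener')
    rospy.Subscriber('/people_tracker/positions', PeopleTracker, cb)
    rospy.spin()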

Running

Parameters:

  • target_frame: Default: /base_link: the tf frame in which the tracked poses will be published.
  • position: Default: /people_tracker/positions: The topic under which the results are published as bayes_people_tracker/PeopleTracker.
  • pose: Default: /people_tracker/pose: The topic under which the closest detected person is published as a geometry_msgs/PoseStamped.
  • pose_array: Default: /people_tracker/pose_array: The topic under which the detections are published as a geometry_msgs/PoseArray.
  • people: Default: /people_tracker/people: The topic under which the results are published as people_msgs/People.
  • marker: Default: /people_tracker/marker_array: A visualisation marker array.

You can run the node with:

roslaunch bayes_people_tracker people_tracker.launch

This is the recommended way of launching it since this will also read the config file and set the right parameters for the detectors.

Updating old bag files

With version >1.1.8 the message type of the people tracker has been changed to include the velocities of humans as a Vector3. To update old rosbag files just run rosbag check; this should tell you that there is a rule file to update the bag. Then run rosbag fix to update your old bag file and change the message type to the new version. The velocities will all be 0, but apart from that everything should work as intended.

[1] N. Bellotto and H. Hu, “Computationally efficient solutions for tracking people with a mobile robot: an experimental evaluation of bayesian filters,” Autonomous Robots, vol. 28, no. 4, pp. 425–438, 2010.

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/bayes_people_tracker/README.md

Logging package

This package contains a logging node to save the detections to the message_store.

All the information given on how to run the nodes should only be used if you need to run them separately. In normal cases please refer to the perception_people_launch package to start the whole perception pipeline.

Logging

This node uses the Logging.msg to save the detected people in the message store, together with their real-world position, the robot’s pose, the upper body detector and people tracker results, and the tf transform used to create the real-world coordinates.

Run with:

roslaunch bayes_people_tracker_logging logging.launch

Parameters:

  • log: Default: true: This convenience parameter allows starting the whole system without logging the data.

Updating old database entries

This assumes that your mongodb_store is running.

With version >1.1.8 the message type of the people tracker has been changed to include the velocities of humans as a Vector3. To update old database entries just run rosrun bayes_people_tracker_logging migrate.py which will update entries in place. Please be careful and create a copy of the people_perception collection in the message store before running this.

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/bayes_people_tracker_logging/README.md

Message conversion package

This package contains a node to convert messages of people detectors to a pose_array which is naturally understood by the bayes_people_tracker. Some detectors use custom message types which makes it hard to use them otherwise.

All the information given on how to run the nodes should only be used if you need to run them separately. In normal cases please refer to the perception_people_launch package to start the whole perception pipeline.

to_pose_array

Small node that takes in an arbitrary message from a topic and extracts a pose according to a given identifier. The found poses are published as a geometry_msgs/PoseArray. The node is used to transform the output of any people detector to a pose array for the people_tracker. The node is configured using the detectors.yaml in the config directory:

to_pose_array:
    detectors:                                  # Add detectors under this namespace
        leg_detector:                           # Name of detector (used internally to identify them; has to be unique)
            topic: "/people_tracker_measurements"  # The topic on which the geometry_msgs/PoseArray is published
            point_name: "pos"                   # The name of the point containing the coordinates in question

The parameter namespace to_pose_array/detectors can contain as many sub namespaces as needed. The above example shows the namespace to_pose_array/detectors/leg_detector which contains the information to parse the messages generated by the ros hydro package leg_detector.

  1. topic: this string is the topic name under which the messages containing the positions are published.

  2. point_name: this string specifies the identifier for the detected positions. In this case the leg_detector publishes a message which contains data like:

    pos.x
    pos.y
    pos.z
    

    The message is parsed for all occurrences of the pos identifier and the result is published as a PoseArray (a conceptual sketch of this extraction is shown below).
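The following Python sketch is a conceptual illustration only, not the node's actual implementation: it walks an arbitrary ROS message, collects every field with the configured identifier (here pos) and fills a geometry_msgs/PoseArray.

# Conceptual sketch only: recursively search a ROS message for fields named
# like the configured identifier and convert them into a PoseArray.
from geometry_msgs.msg import Pose, PoseArray

def find_points(msg, name, found):
    for slot in getattr(msg, '__slots__', []):
        value = getattr(msg, slot)
        if slot == name:
            found.append(value)
        elif isinstance(value, (list, tuple)):
            for item in value:
                find_points(item, name, found)
        elif hasattr(value, '__slots__'):
            find_points(value, name, found)
    return found

def to_pose_array(msg, identifier='pos'):
    out = PoseArray()
    for point in find_points(msg, identifier, []):
        pose = Pose()
        pose.position.x, pose.position.y, pose.position.z = point.x, point.y, point.z
        pose.orientation.w = 1.0   # no orientation information in the detections
        out.poses.append(pose)
    return out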

Run with

roslaunch detector_msg_to_pose_array to_pose_array.launch

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/detector_msg_to_pose_array/README.md

Ground Plane

This package estimates the ground plane using depth images.

It can also be used with a fixed ground plane which is simply rotated according to the PTU tilt angle. In this case the optimal ground plane is assumed, represented by the normal vector [0,-1,0] and a distance of 1.7 (the distance of the origin of the camera coordinate system to the plane in metres). These values can be changed in the config/fixed_gp.yaml file. This set-up assumes the head_xtion camera and is therefore specific to the robot, but it also prevents failures due to wrongly estimated planes in cases where the camera cannot see the ground. All this is necessary because the camera on the head of the robot is rather high and therefore has a high risk of not seeing the actual ground plane. At the current state of development it is advised to use the fixed-plane set-up when running it on the robot.
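As an illustration of the fixed-plane idea (not the node's actual code), the assumed plane can be rotated by the PTU tilt angle roughly as follows; the rotation axis and sign convention here are assumptions made for this sketch:

import math

def rotate_fixed_plane(tilt_rad, normal=(0.0, -1.0, 0.0), distance=1.7):
    # Rotate the assumed ground-plane normal about the camera x-axis by the
    # PTU tilt angle; the distance to the plane stays the same.
    nx, ny, nz = normal
    ny_rot = ny * math.cos(tilt_rad) - nz * math.sin(tilt_rad)
    nz_rot = ny * math.sin(tilt_rad) + nz * math.cos(tilt_rad)
    return (nx, ny_rot, nz_rot), distance

print(rotate_fixed_plane(math.radians(20.0)))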

Run

Parameters for estimation:

  • load_params_from_file default = true: false tries to read parameters from datacentre, true reads parameters from YAML file specified by param_file
  • param_file default = $(find ground_plane_estimation)/config/estimated_gp.yaml: The config file containing all the essential parameters. Only used if load_params_from_file == true.
  • machine default = localhost: Determines on which machine this node should run.
  • user default = “”: The user used for the ssh connection if machine is not localhost.
  • queue_size default = 5: The synchronisation queue size
  • config_file default = “”: The global config file. Can be found in ground_plane_estimation/config
  • camera_namespace default = /head_xtion: The camera namespace.
  • depth_image default = /depth/image_rect: camera_namespace + depth_image = depth image topic
  • camera_info_rgb default = /rgb/camera_info: camera_namespace + camera_info_rgb = rgb camera info topic
  • ground_plane default = /ground_plane: The estimated ground plane

Parameters for the fixed ground plane:

  • load_params_from_file default = true: false tries to read parameters from datacentre, true reads parameters from YAML file specified by param_file
  • param_file default = $(find ground_plane_estimation)/config/fixed_gp.yaml: The config file containing all the essential parameters. Only used if load_params_from_file == true.
  • machine default = localhost: Determines on which machine this node should run.
  • user default = “”: The user used for the ssh connection if machine is not localhost.
  • ptu_state default = /ptu/state: The current angles of the ptu
  • ground_plane default = /ground_plane: The rotated ground plane

rosrun:

rosrun ground_plane_estimation ground_plane_estimated [_parameter_name:=value]

or

rosrun ground_plane_estimation ground_plane_fixed [_parameter_name:=value]

roslaunch:

roslaunch ground_plane_estimation ground_plane_estimated.launch [parameter_name:=value]

or

roslaunch ground_plane_estimation ground_plane_fixed.launch [parameter_name:=value]

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/ground_plane_estimation/README.md

Human Trajectory

This is a ROS package that extracts human pose information from the people tracker and stitches the poses for each human together into trajectories. This package can be run online, by subscribing to /people_tracker/positions, or offline, by retrieving human poses from either the perception_people collection or the people_trajectory collection.

Run this package by typing

roslaunch human_trajectory human_trajectory.launch

The online stitching provides both mini-batch trajectory messages, where trajectories are split into chunks, and complete batch trajectory messages, which are only published once a person is no longer detected.
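A minimal sketch of the stitching idea, assuming the default people tracker topic, simply appends every published pose to a per-UUID list:

import rospy
from bayes_people_tracker.msg import PeopleTracker

trajectories = {}  # uuid -> list of geometry_msgs/Pose

def callback(msg):
    # uuids and poses are index-aligned in the PeopleTracker message
    for uuid, pose in zip(msg.uuids, msg.poses):
        trajectories.setdefault(uuid, []).append(pose)

if __name__ == '__main__':
    rospy.init_node('trajectory_stitcher_sketch')
    rospy.Subscriber('/people_tracker/positions', PeopleTracker, callback)
    rospy.spin()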

Parameters

  • with_logging_manager: the option (true/false) to subscribe to logging manager and get permission to store obtained data
  • path_visualisation: the option to visualise each detected trajectory in rviz using Path. However, it only works for online construction.
  • online_construction: the option (true/false) to stitch human poses online from other packages (bayes_people_tracker) or offline from perception_people or people_trajectory collection in mongodb.
  • tracker_topic: the name of the people tracker topic, default is /people_tracker/positions
  • logging_manager_topic: the name of the logging manager topic, default is /logging_manager/log_stamped

Note

The offline retrieval limits the number of trajectories obtained from mongodb to 10000 trajectories. To work around this limitation, the OfflineTrajectories class provides a query that can be passed to mongodb to obtain specific trajectories.
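If you need more control, the collection can also be queried directly with pymongo; the host, port, database and collection names below are assumptions made for this sketch and have to match your mongodb_store set-up:

import pymongo

client = pymongo.MongoClient('localhost', 62345)            # assumed datacentre port
collection = client['message_store']['people_trajectory']   # assumed db/collection names

query = {'uuid': {'$exists': True}}   # replace with a more specific filter
for doc in collection.find(query).limit(100):
    print(doc.get('uuid'))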

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/human_trajectory/README.md

strands_perception_people

Please see perception_people_launch/README.md for start-up information.

When using the default STRANDS perception pipeline, please cite:

@inproceedings{dondrup2015tracking,
  title={Real-time multisensor people tracking for human-robot spatial interaction},
  author={Dondrup, Christian and Bellotto, Nicola and Jovan, Ferdian and Hanheide, Marc},
  publisher={ICRA/IEEE},
  booktitle={Workshop on Machine Learning for Social Robotics at International Conference on Robotics and Automation (ICRA)},
  year={2015}
}

This package contains the people perception pipeline. It comprises two detectors:

  • Upper body detector
  • Leg Detector: http://wiki.ros.org/leg_detector

Deprecated and moved to the attic branch:

  • Ground HOG feature detector

Two trackers:

  • Bayesian People Tracker
  • Pedestrian Tracker (currently deprecated)

And a lot of utility and helper nodes. See https://www.youtube.com/watch?v=zdnvhQU1YNo for a concise explanation.

Please refer to the READMEs of the specific packages.

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/README.md

Pedestrian Tracking

This package uses the data generated by ground_hog, visual_odometry and upper_body_detector to track people. It is also possible to omit the ground_hog detections.

Run

Parameters:

  • load_params_from_file default = true: false tries to read parameters from datacentre, true reads parameters from YAML file specified by config_file
  • config_file default = $(find mdl_people_tracker)/config/mdl_people_tracker.yaml: The config file containing all the essential parameters. Only used if load_params_from_file == true.
  • machine default = localhost: Determines on which machine this node should run.
  • user default = “”: The user used for the ssh connection if machine is not localhost.
  • queue_size default = 20: The synchronisation queue size
  • camera_namespace default = /head_xtion: The camera namespace.
  • rgb_image default = /rgb/image_rect_color: camera_namespace + rgb_image = rgb image topic
  • camera_info_rgb default = /rgb/camera_info: camera_namespace + camera_info_rgb = rgb camera info topic
  • ground_plane default = /ground_plane: The ground plane published by the upper_body_detector
  • ground_hog default = /groundHOG/detections: The ground_hog detections
  • upper_body_detections default = /upper_body_detector/detections: The upper_body_detector detections
  • visual_odometry default = /visual_odometry/motion_matrix: The visual_odometry data
  • pedestrain_array default = /mdl_people_tracker/people_array: The resulting tracking data.
  • people_markers default = /mdl_people_tracker/marker_array: A visualisation array for rviz
  • people_poses default = /mdl_people_tracker/pose_array: A PoseArray of the detected people

rosrun:

rosrun mdl_people_tracker mdl_people_tracker [_parameter_name:=value]

roslaunch:

roslaunch mdl_people_tracker mdl_people_tracker_with_HOG.launch [parameter_name:=value]

To run the tracker without the groundHOG feature extraction running use:

rosrun mdl_people_tracker mdl_people_tracker _ground_hog:=""

or

roslaunch mdl_people_tracker mdl_people_tracker.launch [parameter_name:=value]

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/mdl_people_tracker/README.md

Odometry to motion_matrix package

This package contains a tool for converting the robot’s odometry into a motion matrix, to substitute for the visual odometry.

All the information given on how to run the nodes should only be used if you need to run them separately. In normal cases please refer to the perception_people_launch package to start the whole perception pipeline.

odom2visual

This node creates a motion matrix from the robot’s odometry using the Eigen library, to substitute for the visual odometry.
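Conceptually the conversion looks like the following rospy sketch (the actual node is implemented in C++ with Eigen): the odometry pose is turned into a 4x4 homogeneous matrix.

import rospy
import tf.transformations as tft
from nav_msgs.msg import Odometry

def odom_to_matrix(odom):
    # Build a 4x4 homogeneous transform from a nav_msgs/Odometry pose.
    q = odom.pose.pose.orientation
    p = odom.pose.pose.position
    matrix = tft.quaternion_matrix([q.x, q.y, q.z, q.w])  # rotation part
    matrix[0:3, 3] = [p.x, p.y, p.z]                       # translation part
    return matrix

if __name__ == '__main__':
    rospy.init_node('odom_matrix_sketch')
    rospy.Subscriber('/odom', Odometry,
                     lambda m: rospy.loginfo("\n%s", odom_to_matrix(m)))
    rospy.spin()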

Run with:

roslaunch odometry_to_motion_matrix odom2visual.launch

Parameters:

  • odom: Default: /odom: The topic on which the robot’s odometry is published
  • motion_parameters: Default: /visual_odometry/motion_matrix: The topic on which the resulting motion matrix is published

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/odometry_to_motion_matrix/README.md

Upper body skeleton estimator

This package estimates the 3d positions of the 9 upper body skeleton joints.

Launching

Use the following launchfile for launching:

roslaunch rwth_upper_body_skeleton_random_walk fixed.launch

The skeleton detector requires the output of the upper body detector as a starting point for rough person segmentation from background and will not work if upper bodies are not detected.

Run

Dependencies

This node needs upper_body_detector/upper_body_detector.launch to run, which in turn needs ground_plane_estimation/ground_plane_fixed.launch.

Parameters

  • depth_image_msg default = /head_xtion/depth/image_rect_meters: Depth image frames.
  • upper_body_msg default = /upper_body_detector/detections: The detected upper bodies
  • rgb_image_msg default = /head_xtion/rgb/image_rect_color: RGB image frames.

roslaunch

roslaunch rwth_upper_body_skeleton_random_walk fixed.launch  [parameter_name:=value]

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/rwth_upper_body_skeleton_random_walk/README.md

Head orientation estimator

This package estimates the orientation of the head of people detected by the upper body detector.

For the G4S Y1 deployment

For the G4S Y1 deployment we won’t run the actual estimation; we’ll only run the logging code, which will run 24/7 but can be paused/resumed by service calls.

Everything is logged into the heads collection.

Launching

Use the g4s launchfile:

roslaunch strands_head_orientation g4s.launch

Directly edit the parameters in the launchfile if they need adapting. In particular, please check expected_runtime and modify it if the expected runtime is not 3 weeks.

There is an intentionally hardcoded limit on the number of detections and images to be logged (g_maxStored* in store_detections.cpp), which amounts to roughly 40 GB and 10 GB, respectively.

Pausing and resuming

In total, ~1 CPU core is used by this node and its dependencies. It can be paused and later on resumed if all available power is needed by someone else.

Pausing

Send a message of type strands_head_orientation/StopHeadAnalysis to the stop_head_analysis service and all activity will be paused.

The stop executable in this package does exactly that; see its source for an example, or just execute it with recording as a parameter:

rosrun strands_head_orientation stop recording

Resuming

For resuming from the paused state, send a message of type strands_head_orientation/StartHeadAnalysis to the start_head_analysis service.

Again, the start executable and its source can be helpful:

rosrun strands_head_orientation start recording

Poking

You can also check for the current state by sending a message of type strands_head_orientation/IsHeadAnalysisRunning to the status_head_analysis service.

Yet again, the status executable and its source help you:

rosrun strands_head_orientation status recording

Anything below this line can be ignored for the g4s scenario.


Install

Please download and extract one of the available model files into the models folder. The larger models will have better prediction but will run slower. By default:

cd strands_head_orientation/models
wget https://omnomnom.vision.rwth-aachen.de/strands/data/ghmodels-l/model-small.tar.bz2
tar xjv < model-small.tar.bz2
ln -s default-0.1 default

This should create the models/default link pointing to the models/default-0.1 folder with a bunch of files.

Run

Dependencies

This node needs upper_body_detector/upper_body_detector.launch to run, which in turn needs ground_plane_estimation/ground_plane_fixed.launch. Or just run the people tracker which starts both of the above.

Parameters

  • queue_size default = 10: The synchronisation queue size
  • camera_namespace default = /head_xtion: The camera namespace.
  • upper_body_detections default = /upper_body_detector/detections: The detected upper bodies
  • head_ori default = /head_orientation/head_ori: Publishing an orientation array for each detected upper body.
  • model_dir default = “”: The learned model to use to do predictions. Models can be found in strands_head_orientation/models/.
  • autostart default = true: Whether to start analyzing detections right away or wait for a signal.

rosrun

rosrun strands_head_orientation strands_head_orientation [_parameter_name:=value]

roslaunch

roslaunch strands_head_orientation head_orientation.launch [parameter_name:=value]

Start/stop

This node should be running permanently. While running, its analysis can be started and stopped; when stopped it should hardly use any resources, except for some memory.

Its activity can be controlled on demand through calls to the following services:

  • start_head_analysis: starts the analysis of all incoming detections.
  • stop_head_analysis: stops the analysis of any detections.
  • status_head_analysis: returns whether the analysis is currently started.

For manual usage, you can use the start, stop and status executables:

rosrun strands_head_orientation start
rosrun strands_head_orientation status
rosrun strands_head_orientation stop
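The same services can also be called programmatically; the following rospy sketch assumes the service requests carry no fields (check the .srv definitions in strands_head_orientation if in doubt):

import rospy
from strands_head_orientation.srv import (StartHeadAnalysis, StopHeadAnalysis,
                                          IsHeadAnalysisRunning)

if __name__ == '__main__':
    rospy.init_node('head_analysis_control')
    rospy.wait_for_service('start_head_analysis')
    start = rospy.ServiceProxy('start_head_analysis', StartHeadAnalysis)
    stop = rospy.ServiceProxy('stop_head_analysis', StopHeadAnalysis)
    status = rospy.ServiceProxy('status_head_analysis', IsHeadAnalysisRunning)
    start()                                    # start analysing detections
    rospy.loginfo("status: %s", status())      # query the current state
    stop()                                     # pause again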

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/strands_head_orientation/README.md

Upper Body Detector

This package detects the upper bodies of persons using the depth image.

Run

Parameters:

  • load_params_from_file default = true: false tries to read parameters from datacentre, true reads parameters from YAML file specified by config_file
  • config_file default = $(find upper_body_detector)/config/upper_body_detector.yaml: The config file containing all the essential parameters. Only used if load_params_from_file == true.
  • template_file default = $(find upper_body_detector)/config/upper_body_template.yaml: The upper body template file. Read from the database if load_params_from_file == true.
  • machine default = localhost: Determines on which machine this node should run.
  • user default = “”: The user used for the ssh connection if machine is not localhost.
  • queue_size default = 20: The synchronisation queue size
  • config_file default = “”: The global config file. Can be found in strands_upper_body_detector/config
  • template_file default = “”: The template file. Can be found in config.
  • camera_namespace default = /head_xtion: The camera namespace.
  • depth_image default = /depth/image_rect: camera_namespace + depth_image = depth image topic
  • rgb_image default = /rgb/image_rect_color: camera_namespace + rgb_image = rgb image topic
  • camera_info_depth default = /depth/camera_info: camera_namespace + camera_info_depth = depth camera info topic
  • ground_plane default = /ground_plane: The estimated/fixed ground plane
  • upper_body_detections default = /upper_body_detector/detections: The detected upper bodies
  • upper_body_bb_centres default = /upper_body_detector/bounding_box_centres: Publishing a pose array of the centres of the bounding boxes
  • upper_body_image default = /upper_body_detector/image: The resulting image showing the detections as a bounding box
  • upper_body_markers default = /upper_body_detector/marker_array: A visualisation array for rviz

rosrun:

rosrun upper_body_detector upper_body_detector [_parameter_name:=value]

roslaunch:

roslaunch upper_body_detector upper_body_detector.launch [parameter_name:=value]

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/upper_body_detector/README.md

Vision/People Logging package

This package contains a logging node to save the detections to the message_store.

All the information given on how to run the nodes should only be used if you need to run them separately. In normal cases please refer to the perception_people_launch package to start the whole perception pipeline.

Logging

This node uses the LoggingUBD.msg to save the detected people in the message store, together with the robot’s pose and the tf transform which can be used to create the real-world coordinates.

Run with:

roslaunch vision_people_logging logging_ubd.launch

Parameters:

  • log: Default: true: This convenience parameter allows starting the whole system without logging the data.

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/vision_people_logging/README.md

DROW detector

This is the DROW detector package (http://arxiv.org/abs/1603.02636). Given a laser scan it will detect two types of mobility aids and publish these as a pose array.

As input only a laser scan (of type LaserScan) is needed. Currently there are three output topics (all of type PoseArray): two class-specific ones for wheelchairs and walkers, and one class-agnostic one that summarizes both. Our pretrained model was trained on data from a laser scanner with a field of view of 225 degrees and an angular resolution of 0.5 degrees, mounted at a height of ~37 cm. The model will generalize to different fields of view and should be robust to slight height deviations. In order to apply it to different angular resolutions or completely different heights you will need to train it on your own data, though. See the paper for further info.
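A minimal consumer of the detector output might look like the following sketch, which subscribes to the default class-agnostic topic listed under Params below:

import rospy
from geometry_msgs.msg import PoseArray

def callback(msg):
    rospy.loginfo("%d mobility aid detections", len(msg.poses))

if __name__ == '__main__':
    rospy.init_node('drow_listener')
    rospy.Subscriber('/mobility_aid_detections', PoseArray, callback)
    rospy.spin()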

Params

  • laser_topic default: /scan the input laser scan.
  • wheelchair_detection_topic default: /wheelchair_detections the topics wheelchair detections will be published on.
  • walker_detection_topic default: /walker_detections the topics walker detections will be published on.
  • class_agnostic_detection_topic default: /mobility_aid_detections the topic where both types of mobility aid detections will be published to jointly.
  • threshold default: 0.5 the detection threshold. Increasing it will increase precision at the cost of recall, and vice versa.
  • use_cudnn default: false determines whether we force cuDNN for the convolutions, which is faster but harder to set up (see dependencies).
  • network_param_file no default, the path to the network parameter file.

Dependencies

Running the detector

In order to run the detector, make sure all dependencies are installed and download the network model from our server:

$ roscd wheelchair_detector
$ cd scripts
$ sh get_network_parameter_file.sh

The launch file is configured in such a way that it will look for the network in the resources folder, but you can put it anywhere you like it to be as long as you change the launch file accordingly.

In order to launch the node, you will need to provide the theano flags:

$ THEANO_FLAGS='cuda.root=/usr/local/cuda/,floatX=float32,device=gpu0' roslaunch wheelchair_detector drow.launch

If you have cuda somewhere else, make sure to adapt the flag. All of these can also be set in a theano config once instead.

TODO

  • Make the voting parameters public. This is especially important for lasers with a larger FoV.
  • Add a project page with training code, data, etc.
  • Change the name to DROW?

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/wheelchair_detector/README.md

This code uses dense vectors to store data instances. If most feature values are non-zero, the training/testing time is faster than with the standard libsvm, which implements sparse vectors.

Experimental Setting:

We select four concepts, animal, people, sky, and weather, from MediaMill Challenge Problem for this experiment. All instances are 120-dimension dense vectors. We compare the standard libsvm and this modification.

The experimental procedure performs parameter selection (30 parameter sets), model training (with the best parameters), and test data prediction. There are 30,993 and 12,914 records in the training (used in the parameter selection and model training stages) and testing (used in the prediction stage) sets, respectively. The following tables show the results.

  1. parameter-selection execution time (sec.):

                original libsvm    dense repr.    change ratio
     animal     2483.23            1483.02        -40.3%
     people     22844.26           13893.21       -39.2%
     sky        13765.39           8460.06        -38.5%
     weather    2240.01            1325.32        -40.1%
     AVERAGE    10083.17           6540.46        -39.1%

  2. model-training execution time (sec.):

                original libsvm    dense repr.    change ratio
     animal     113.238            70.085         -38.1%
     people     725.995            451.244        -37.8%
     sky        1234.881           784.071        -36.5%
     weather    123.179            76.532         -37.9%
     AVERAGE    549.323            345.483        -37.1%

  3. prediction execution time (sec.):

                original libsvm    dense repr.    change ratio
     animal     12.226             6.895          -43.6%
     people     99.268             54.855         -44.7%
     sky        78.069             42.417         -45.7%
     weather    10.843             6.495          -44.9%
     AVERAGE    50.102             27.666         -44.8%

Overall, dense-representation libsvm saves on average 39.1%, 37.1%, and 44.8% of the execution time in parameter selection, model training and prediction stages, respectively. This modification is thus useful for dense data sets.

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/opencv_warco/src/opencv_warco/libsvm/README.dense

opencv_warco

A C++, OpenCV-based reimplementation of Tosato’s WARCO image classification algorithm.

This is an unfinished experiment, still highly in flux. It will not work. It will break. It might microwave your cat. It might classify pictures of heads according to their gaze direction.

License: MIT

For ease of use, this bundles jsoncpp which is MIT/Public Domain.

Copyright (c) 2013 Lucas Beyer

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/opencv_warco/src/opencv_warco/README.md

Attic directory containing deprecated launch files

These launch files are used to start the people perception using HOG feature detection. This is currently deprecated and not supported. The code for this can be found in the attic branch. This directory is not installed and is therefore only there to preserve the files themselves. Following are the instructions on how to use these files:

people_tracker_robot_with_HOG.launch

This version of the tracking does rely on the ground_hog feature extraction and is therefore only usable on PCs with an NVIDIA graphics card. It also relies on the fixed ground plane assumption made for the robot. To use this you have to run it remotely on a machine talking to the rosmaster on the robot, e.g. a laptop inside the robot.

Parameters:

  • load_params_from_file default = true: false tries to read parameters from datacentre, true reads parameters from YAML file specified by param_file
  • machine default = localhost: Determines on which machine this node should run.
  • user default = “”: The user used for the ssh connection if machine is not localhost.
  • gp_queue_size default = 5: The ground plane sync queue size
  • gh_queue_size default = 20: The ground HOG sync queue size
  • ubd_queue_size default = 5: The upper body detector sync queue size
  • pt_queue_size default = 10: The people tracking sync queue size
  • ptu_state default = /ptu/state: The ptu state topic
  • camera_namespace default = /head_xtion: The camera namespace.
  • rgb_image default = /rgb/image_rect_color: camera_namespace + rgb_image = rgb image topic
  • depth_image default = /depth/image_rect: camera_namespace + depth_image = depth image topic
  • mono_image default = /rgb/image_mono: camera_namespace + mono_image = mono image topic
  • camera_info_rgb default = /rgb/camera_info: camera_namespace + camera_info_rgb = rgb camera info topic
  • camera_info_depth default = /depth/camera_info: camera_namespace + camera_info_depth = depth camera info topic
  • ground_plane default = /ground_plane: The fixed ground plane
  • upper_body_detections default = /upper_body_detector/detections: The detected upper bodies
  • upper_body_bb_centres default = /upper_body_detector/bounding_box_centres: Publishing a pose array of the centres of the bounding boxes
  • upper_body_markers default = /upper_body_detector/marker_array: A visualisation array for rviz
  • upper_body_image default = /upper_body_detector/image: The detected upper body image
  • visual_odometry default = /visual_odometry/motion_matrix: The odometry. This takes the real odometry and only follows naming conventions for the ease of use.
  • pedestrain_array default = /mdl_people_tracker/people_array: The detected and tracked people
  • people_markers default = /mdl_people_tracker/marker_array: A visualisation array for rviz
  • people_poses default = /mdl_people_tracker/pose_array: A PoseArray of the detected people
  • tf_target_frame default = /map: The coordinate system into which the localisations should be transformed
  • pd_positions default = /people_tracker/positions: The poses of the tracked people
  • pd_marker default = /people_tracker/marker_array: A marker array to visualise found people in rviz
  • log default = false: Log people and robot locations together with tracking and detection results to the message_store database into the people_perception collection. Disabled by default because if it is enabled the perception is running continuously.

Running:

roslaunch perception_people_launch people_tracker_robot_with_HOG.launch [parameter_name:=value]

people_tracker_standalone_with_HOG.launch

This depends on the strands_ground_hog package which has to be built with the libcudaHOG. See README file of 3rd_party directory. It also uses the /camera namespace as a default and estimates the groundplane because it is not supposed to be run on the robot but on an external PC with a different set-up.

Parameters:

  • load_params_from_file default = true: false tries to read parameters from datacentre, true reads parameters from YAML file specified by param_file
  • machine default = localhost: Determines on which machine this node should run.
  • user default = “”: The user used for the ssh connection if machine is not localhost.
  • gh_queue_size default = 10: The ground hog sync queue size
  • gp_queue_size default = 5: The ground plane sync queue size
  • vo_queue_size default = 5: The visual odometry sync queue size
  • ubd_queue_size default = 5: The upper body detector sync queue size
  • pt_queue_size default = 10: The people tracking sync queue size
  • camera_namespace default = /camera: The camera namespace.
  • rgb_image default = /rgb/image_rect_color: camera_namespace + rgb_image = rgb image topic
  • depth_image default = /depth/image_rect: camera_namespace + depth_image = depth image topic
  • mono_image default = /rgb/image_mono: camera_namespace + mono_image = mono image topic
  • camera_info_rgb default = /rgb/camera_info: camera_namespace + camera_info_rgb = rgb camera info topic
  • camera_info_depth default = /depth/camera_info: camera_namespace + camera_info_depth = depth camera info topic
  • ground_plane default = /ground_plane: The estimated ground plane
  • ground_hog_detections default = /groundHOG/detections: The ground HOG detections
  • upper_body_detections default = /upper_body_detector/detections: The detected upper bodies
  • upper_body_bb_centres default = /upper_body_detector/bounding_box_centres: Publishing a pose array of the centres of the bounding boxes
  • upper_body_markers default = /upper_body_detector/marker_array: A visualisation array for rviz
  • ground_hog_image default = /groundHOG/image: The ground HOG image
  • upper_body_image default = /upper_body_detector/image: The detected upper body image
  • visual_odometry default = /visual_odometry/motion_matrix: The visual odometry
  • people_markers default = /mdl_people_tracker/marker_array: A visualisation array for rviz
  • people_poses default = /mdl_people_tracker/pose_array: A PoseArray of the detected people
  • tf_target_frame default = “”: The coordinate system into which the localisations should be transformed. As this might not run on a robot and therefore no tf is available, this is an empty string.

Running:

roslaunch perception_people_launch people_tracker_standalone_with_HOG.launch [parameter_name:=value]

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/perception_people_launch/attic/README.md

Launch package

This convenience package contains launch files to start-up the whole people tracker system.

General remarks

All the packages rely heavily on the synchronisation of rgb and depth images and the data generated by the other nodes. The synchronisation is realised using a queue which saves a predefined number of messages on which the synchronisation is performed. As a rule of thumb: the faster your machine, the shorter the queue can be to prevent unnecessary use of memory. You can set queue sizes using:

roslaunch perception_people_launch file.launch gh_queue_size:=11 vo_queue_size:=22 ubd_queue_size:=33 pt_queue_size:=44

This will overwrite the default values (gh = ground_hog, vo = visual_odometry, ubd = upper_body_detector, pt = mdl_people_tracker).

The whole pipeline is designed to unsubscribe from everything if there is no subscriber to its published topics. This causes the nodes not to use any CPU when no one is listening on the published topics. It might result in a 1-2 second delay after subscribing to one of the topics before the first data is published. Also, setting log to true when starting the perception pipeline will cause it to always run and log data.

Running on robot

These launch files will make use of the fixed ground plane, which is just rotated according to the PTU tilt, and of the robot odometry instead of the visual odometry. Additionally, the necessary parameters are assumed to be provided by the parameter server (see the mongodb_store config_manager documentation on default parameters); therefore the load_params_from_file parameter is set to false and the nodes will query the config parameters from the parameter server. The standalone version on the other hand uses the provided config files. If parameters are not present in the parameter server on the robot but you want to launch the people perception, run with load_params_from_file:=true.

people_tracker_robot.launch

This version of the tracking does not rely on the ground_hog feature extraction and is therefore usable on PCs with no NVIDIA graphics card (like the embedded robot PC). However, this has the drawback that the system only relies on depth data to detect people, which limits the distance at which persons can be detected to approx. 5 metres. Where possible the ground_hog detection should be used to enhance tracking results. It also uses the fixed ground plane assumption because it is meant to be executed on the robot’s head xtion camera.

Parameters:

  • load_params_from_file default = true: false tries to read parameters from datacentre, true reads parameters from YAML file specified by param_file
  • machine default = localhost: Determines on which machine this node should run.
  • user default = “”: The user used for the ssh connection if machine is not localhost.
  • gp_queue_size default = 5: The ground plane sync queue size
  • ubd_queue_size default = 5: The upper body detector sync queue size
  • pt_queue_size default = 10: The people tracking sync queue size
  • ptu_state default = /ptu/state: The ptu state topic
  • camera_namespace default = /head_xtion: The camera namespace.
  • rgb_image default = /rgb/image_rect_color: camera_namespace + rgb_image = rgb image topic
  • depth_image default = /depth/image_rect: camera_namespace + depth_image = depth image topic
  • mono_image default = /rgb/image_mono: camera_namespace + mono_image = mono image topic
  • camera_info_rgb default = /rgb/camera_info: camera_namespace + camera_info_rgb = rgb camera info topic
  • camera_info_depth default = /depth/camera_info: camera_namespace + camera_info_depth = depth camera info topic
  • ground_plane default = /ground_plane: The fixed ground plane
  • upper_body_detections default = /upper_body_detector/detections: The detected upper bodies
  • upper_body_bb_centres default = /upper_body_detector/bounding_box_centres: Publishing a pose array of the centres of the bounding boxes
  • upper_body_markers default = /upper_body_detector/marker_array: A visualisation array for rviz
  • upper_body_image default = /upper_body_detector/image: The detected upper body image
  • visual_odometry default = /visual_odometry/motion_matrix: The odometry. This takes the real odometry and only follows naming conventions for the ease of use.
  • mdl_people_array default = /mdl_people_tracker/people_array: The detected and tracked people
  • mdl_people_markers default = /mdl_people_tracker/marker_array: A visualisation array for rviz
  • mdl_people_poses default = /mdl_people_tracker/pose_array: A PoseArray of the detected people
  • tf_target_frame default = /map: The coordinate system into which the localisations should be transformed
  • bayes_people_positions default = /people_tracker/positions: The poses of the tracked people
  • bayes_people_pose default = /people_tracker/pose: The topic under which the closest detected person is published as a geometry_msgs/PoseStamped
  • bayes_people_pose_array default = /people_tracker/pose_array: The topic under which the detections are published as a geometry_msgs/PoseArray
  • bayes_people_people default = /people_tracker/people: The topic under which the results are published as people_msgs/People
  • pd_marker default = /people_tracker/marker_array: A marker array to visualise found people in rviz
  • log default = false: Log people and robot locations together with tracking and detection results to the message_store database into the people_perception collection. Disabled by default because if it is enabled the perception is running continuously.
  • with_mdl_tracker default = false: Starts the mdl people tracker in addition to the bayes tracker
  • with_laser_filter default = true: Starts the laser filter to reduce false positives from the leg detector
  • with_tracker_filter_map default = false: Use a special map to filter the tracker results instead of just the map used for navigation.
  • tracker_filter_map: The map to use instead of the navigation map to filter the tracker results.
  • tracker_filter_positions default = /people_tracker_filter/positions: The filtered tracker results.
  • tracker_filter_pose default = /people_tracker_filter/pose: The filtered pose for the closest person.
  • tracker_filter_pose_array default = /people_tracker_filter/pose_array: The filtered pose array.
  • tracker_filter_people default = /people_tracker_filter/people: The filtered people message.
  • tracker_filter_marker default = /people_tracker_filter/marker_array: The filtered marker array.

Running:

roslaunch perception_people_launch people_tracker_robot.launch [parameter_name:=value]

people_tracker_standalone.launch

This version of the tracking does not rely on the ground_hog feature extraction and is therefore usable on PCs with no NVIDIA graphics card. However, this has the drawback that the system only relies on depth data to detect people which limits the distance at which persons can be detected to approx. 5 meters. Where possible the ground_hog detection should be used to enhance tracking results.

It also uses the /camera namespace as a default and estimates the groundplane because it is not supposed to be run on the robot but on an external PC with a different set-up.

Parameters:

  • load_params_from_file default = true: false tries to read parameters from datacentre, true reads parameters from YAML file specified by param_file
  • machine default = localhost: Determines on which machine this node should run.
  • user default = “”: The user used for the ssh connection if machine is not localhost.
  • gh_queue_size default = 20: The ground plane sync queue size
  • vo_queue_size default = 5: The visual odometry sync queue size
  • ubd_queue_size default = 5: The upper body detector sync queue size
  • pt_queue_size default = 10: The people tracking sync queue size
  • camera_namespace default = /camera: The camera namespace.
  • rgb_image default = /rgb/image_rect_color: camera_namespace + rgb_image = rgb image topic
  • depth_image default = /depth/image_rect: camera_namespace + depth_image = depth image topic
  • mono_image default = /rgb/image_mono: camera_namespace + mono_image = mono image topic
  • camera_info_rgb default = /rgb/camera_info: camera_namespace + camera_info_rgb = rgb camera info topic
  • camera_info_depth default = /depth/camera_info: camera_namespace + camera_info_depth = depth camera info topic
  • ground_plane default = /ground_plane: The estimated ground plane
  • upper_body_detections default = /upper_body_detector/detections: The detected upper bodies
  • upper_body_bb_centres default = /upper_body_detector/bounding_box_centres: Publishing a pose array of the centres of the bounding boxes
  • upper_body_markers default = /upper_body_detector/marker_array: A visualisation array for rviz
  • upper_body_image default = /upper_body_detector/image: The detected upper body image
  • visual_odometry default = /visual_odometry/motion_matrix: The visual odometry
  • pedestrain_array default = /mdl_people_tracker/people_array: The detected and tracked people
  • people_markers default = /mdl_people_tracker/marker_array: A visualisation array for rviz
  • people_poses default = /mdl_people_tracker/pose_array: A PoseArray of the detected people
  • tf_target_frame default = “”: The coordinate system into which the localisations should be transformed. As this might not run on a robot and therefore no tf is available, this is an empty string.

Running:

roslaunch perception_people_launch people_tracker_standalone.launch [parameter_name:=value]

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/perception_people_launch/README.md

This software is constructed according to the Pods software policies and templates. The policies and templates can be found at:

http://sourceforge.net/projects/pods

About this project

Name: libfovis

Authors: Albert Huang (ashuang@gmail.com), Abraham Bachrach (abachrach@gmail.com), Daniel Maturana (dimatura@gmail.com)

Summary: Visual odometry with a depth camera (e.g., Kinect/Primesense) or stereo camera.

Requirements:

CMake (http://www.cmake.org)
Eigen 3 (http://eigen.tuxfamily.org)
Intel SSE2

Fovis was developed on Ubuntu, but may work with other platforms.

Build Instructions

For system-wide installation:

$ mkdir build
$ cd build
$ cmake ..
$ make
$ sudo make install
$ sudo ldconfig

This usually installs Fovis to /usr/local or something like that.

For use in the source directory:

$ mkdir build
$ make

Documentation

Fovis is documented using Doxygen (http://www.doxygen.org). To build the documentation:

$ cd doc
$ doxygen

Following that, open up doc/html/index.html in your web browser.

License

fovis is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

fovis is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with fovis. If not, see http://www.gnu.org/licenses/.

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/visual_odometry/3rd_party/fovis/README

This example produces visual odometry estimates for a Microsoft Kinect using libfreenect.

You must have libfreenect installed. It is strongly recommended that you install libfreenect from source to get the latest version possible. For more information on libfreenect, see here: http://openkinect.org/wiki/Getting_Started

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/visual_odometry/3rd_party/fovis/examples/fv-example-freenect/README

This example obtains depth images using the OpenNI API. It’s designed to work with either a Microsoft Kinect, or PrimeSense sensor, although I haven’t actually tested it on a PrimeSense.

You must have OpenNI installed, and either the Kinect or PrimeSense.

OpenNI: http://www.openni.org

Kinect modules: As of Apr 21, 2011, there’s no “official” repository of Kinect modules for OpenNI, but there are community-developed forks of the PrimeSense module. One promising such fork is here:

https://github.com/avin2/SensorKinect

PrimeSense modules: http://www.openni.org

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/visual_odometry/3rd_party/fovis/examples/fv-example-openni/README

Visual Odometry

This package calculates the visual odometry using depth and mono images.

Run

Parameters:

  • queue_size default = 20: The synchronisation queue size
  • camera_namespace default = /head_xtion: The camera namespace.
  • depth_image default = /depth/image_rect: camera_namespace + depth_image = depth image topic
  • mono_image default = /rgb/image_mono: camera_namespace + mono_image = mono image topic
  • camera_info_depth default = /depth/camera_info: camera_namespace + camera_info_depth = depth camera info topic
  • motion_parameters default = /visual_odometry/motion_matrix: The visual odometry

rosrun:

rosrun visual_odometry visual_odometry [_parameter_name:=value]

roslaunch:

roslaunch visual_odometry visual_odometry.launch [parameter_name:=value]

Troubleshooting

If you get an error message that states:

/usr/lib/gcc/i686-linux-gnu/4.6/include/emmintrin.h:32:3: error: #error "SSE2 instruction set not enabled"

or similar, you have to add set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -msse -msse2 -msse3") to the CMakeLists.txt file after the

if(CMAKE_COMPILER_IS_GNUCXX)
    set(CMAKE_CXX_FLAGS "-O3")        ## Optimize
endif()

statement so that it looks like:

if(CMAKE_COMPILER_IS_GNUCXX)
    set(CMAKE_CXX_FLAGS "-O3")        ## Optimize
endif()

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -msse -msse2 -msse3")

Original page: https://github.com/strands-project/strands_perception_people/blob/indigo-devel/visual_odometry/README.md

This page provides the training data for both real and simulated scenes and also describes the format of the data sets.

Data format

The training data is described in JSON (http://en.wikipedia.org/wiki/JSON), a human-readable text format to transmit data objects consisting of key:value pairs.

Mandatory data structures

The file contains a list of scenes whereby each scene is represented by at least the following data structures:

  • scene_id, a unique identifier for the scene
  • objects, a list of object IDs for objects that are present in the scene. The IDs are used as a key to retrieve the information about an object from all other data structures. The ID is not consistent across different scenes.
  • type, a dictionary for ID:type pairs, denoting the type of an object.
  • position, a dictionary for ID:position pairs. The position is given in the coordinate frame of the table, whereby the bottom-left corner of the table is the origin.
  • orientation, a dictionary for ID:orientation pairs. The orientation is represented as a quaternion in the reference frame of the table.
  • bbox, a dictionary for ID:bbox pairs. The bounding box (bbox) is given by eight points (x,y,z) in the reference frame of the table.

Optional data structures

In addition to the above mentioned data structures, a scene description might contain more information. This additional information depends on the origin of the data.

For the real-world data, a scene description also includes the following data:

  • front-face, a dictionary for ID:front-face pairs. The front-face is a subset of the bounding box, denoting the intrinsic front face of an object.
  • tableID, a unique ID for the table. This ID is identical across scenes.
  • tableType, a type description of the table. Examples include: StudentDesk, ProfessorDesk, and StaffDesk.
  • date, the date and time when the data collection took place. The format is as follows: YYYY:MM:DD HH:MM. This timestamp is identical for all scenes collected in a single data collection run.

For the simulated data, a scene description also includes the following data:

  • tablePose, the position and orientation of the table in the world frame.
  • cameraPose, the position and orientation of the camera view in the world frame.

Example

The following example includes the mandatory data structures for one scene.

[
  {
    "scene_id": "scene1",
    "type": {
      "monitor": "Monitor",
      "cup": "Cup",
      "keyboard": "Keyboard",
      "pencil": "Pencil",
      "telephone": "Telephone",
      "pc": "PC"
    },
    "orientation": {
      "monitor": [
        0.9989563872446908,
        0.0,
        0.0,
        0.04567424200832827
      ],
      "cup": [
        0.6135444651092773,
        0.0,
        0.0,
        0.7896601733237981
      ],
      "keyboard": [
        0.9960360851196927,
        0.0,
        0.0,
        0.08895008229021645
      ],
      "pencil": [
        -0.6901722579088913,
        0.0,
        0.0,
        0.7236451163470551
      ],
      "telephone": [
        0.9998801896319474,
        0.0,
        0.0,
        0.015479224191834938
      ],
      "pc": [
        0.9996278322010864,
        0.0,
        0.0,
        0.02727997597060249
      ]
    },
    "position": {
      "monitor": [
        0.2808067798614502,
        0.8186612129211426,
        0.2287198305130005
      ],
      "cup": [
        0.29055213928222656,
        1.423274040222168,
        0.0857805609703064
      ],
      "keyboard": [
        0.6437058448791504,
        0.8054618835449219,
        0.02877497673034668
      ],
      "pencil": [
        0.8851318359375,
        0.7548675537109375,
        0.0085601806640625
      ],
      "telephone": [
        0.6974833011627197,
        0.4091925621032715,
        0.0903427004814148
      ],
      "pc": [
        0.3361496925354004,
        0.384305477142334,
        0.24617063999176025
      ]
    },
    "bbox": {
      "monitor": [
        [
          0.17071890830993652,
          0.5517492294311523,
          0.004999995231628418
        ],
        [
          0.17071890830993652,
          0.5517492294311523,
          0.45243966579437256
        ],
        [
          0.12404227256774902,
          1.0611200332641602,
          0.45243966579437256
        ],
        [
          0.12404227256774902,
          1.0611200332641602,
          0.004999995231628418
        ],
        [
          0.43757128715515137,
          0.576202392578125,
          0.004999995231628418
        ],
        [
          0.43757128715515137,
          0.576202392578125,
          0.45243966579437256
        ],
        [
          0.39089465141296387,
          1.0855731964111328,
          0.45243966579437256
        ],
        [
          0.39089465141296387,
          1.0855731964111328,
          0.004999995231628418
        ]
      ],
      "cup": [
        [
          0.3886749744415283,
          1.3467788696289062,
          0.004999995231628418
        ],
        [
          0.3886749744415283,
          1.3467788696289062,
          0.16656112670898438
        ],
        [
          0.24104976654052734,
          1.309129238128662,
          0.16656112670898438
        ],
        [
          0.24104976654052734,
          1.309129238128662,
          0.004999995231628418
        ],
        [
          0.3400545120239258,
          1.5374188423156738,
          0.004999995231628418
        ],
        [
          0.3400545120239258,
          1.5374188423156738,
          0.16656112670898438
        ],
        [
          0.1924293041229248,
          1.4997692108154297,
          0.16656112670898438
        ],
        [
          0.1924293041229248,
          1.4997692108154297,
          0.004999995231628418
        ]
      ],
      "keyboard": [
        [
          0.5719075202941895,
          0.5088963508605957,
          0.004999995231628418
        ],
        [
          0.5719075202941895,
          0.5088963508605957,
          0.05254995822906494
        ],
        [
          0.4729793071746826,
          1.0583620071411133,
          0.05254995822906494
        ],
        [
          0.4729793071746826,
          1.0583620071411133,
          0.004999995231628418
        ],
        [
          0.8144323825836182,
          0.5525617599487305,
          0.004999995231628418
        ],
        [
          0.8144323825836182,
          0.5525617599487305,
          0.05254995822906494
        ],
        [
          0.7155041694641113,
          1.102027416229248,
          0.05254995822906494
        ],
        [
          0.7155041694641113,
          1.102027416229248,
          0.004999995231628418
        ]
      ],
      "pencil": [
        [
          0.8866786956787109,
          0.8634657859802246,
          0.004999995231628418
        ],
        [
          0.8866786956787109,
          0.8634657859802246,
          0.012120306491851807
        ],
        [
          0.8938477039337158,
          0.863126277923584,
          0.012120306491851807
        ],
        [
          0.8938477039337158,
          0.863126277923584,
          0.004999995231628418
        ],
        [
          0.877265453338623,
          0.6647830009460449,
          0.004999995231628418
        ],
        [
          0.877265453338623,
          0.6647830009460449,
          0.012120306491851807
        ],
        [
          0.8844344615936279,
          0.6644434928894043,
          0.012120306491851807
        ],
        [
          0.8844344615936279,
          0.6644434928894043,
          0.004999995231628418
        ]
      ],
      "telephone": [
        [
          0.630591869354248,
          0.35100269317626953,
          -0.012934625148773193
        ],
        [
          0.630591869354248,
          0.35100269317626953,
          0.15775072574615479
        ],
        [
          0.6271209716796875,
          0.4630756378173828,
          0.15775072574615479
        ],
        [
          0.6271209716796875,
          0.4630756378173828,
          -0.012934625148773193
        ],
        [
          0.7880260944366455,
          0.3558783531188965,
          -0.012934625148773193
        ],
        [
          0.7880260944366455,
          0.3558783531188965,
          0.15775072574615479
        ],
        [
          0.784555196762085,
          0.46795129776000977,
          0.15775072574615479
        ],
        [
          0.784555196762085,
          0.46795129776000977,
          -0.012934625148773193
        ]
      ],
      "pc": [
        [
          0.11110544204711914,
          0.2659010887145996,
          0.004999935626983643
        ],
        [
          0.11110544204711914,
          0.2659010887145996,
          0.4873412847518921
        ],
        [
          0.0995481014251709,
          0.4774947166442871,
          0.4873412847518921
        ],
        [
          0.0995481014251709,
          0.4774947166442871,
          0.004999935626983643
        ],
        [
          0.5727512836456299,
          0.29111623764038086,
          0.004999935626983643
        ],
        [
          0.5727512836456299,
          0.29111623764038086,
          0.4873412847518921
        ],
        [
          0.5611939430236816,
          0.5027098655700684,
          0.4873412847518921
        ],
        [
          0.5611939430236816,
          0.5027098655700684,
          0.004999935626983643
        ]
      ]
    },
    "objects": [
      "monitor",
      "pc",
      "keyboard",
      "pencil",
      "telephone",
      "cup"
    ]
  }
]

Original page: https://github.com/strands-project/strands_qsr/wiki/Data-sets

The evaluation client calls the ROS service ‘group_classification’ for each scene in a supplied JSON scenes file (see here). The perception for each scene is read from a perceptions file (Simulated Perception).

The output of the classification is stored to a flat text file for later processing.

Evaluation client usage

Usage: rosrun strands_qsr_evalution evaluation_client.py [options]

Options:
  -h, --help            show this help message and exit
  -p FILE, --perception_filename=FILE
                        read perceptions from FILE
  -s FILE, --scenes=FILE
                        load the scenes from FILE
  -o FILE, --output=FILE
                        store results in FILE
  -O, --overwrite       overwrite output file if already exists.

For example:

rosrun strands_qsr_evalution evaluation_client.py -s data/simulation/bham_office_desk_500.json -p simulated_perceptions.json -o run1_results.txt

All object classes that exist in the specified simulated perceptions file will be considered, i.e. to test on a subset of the object classes in a given scenes file, create a simulated perception file with only the desired object types.

Output format

The results file looks like this:

--
scene1
Pencil  Monitor Cup Telephone   PC  Keyboard
Pencil  Laptop  Cup Mouse   PC  Keyboard
Pencil  Cup Cup Mouse   PC  Keyboard
--
scene2
Pencil  Monitor Monitor Telephone   Bottle  Keyboard    Pencil
Pencil  Monitor Monitor Telephone   Bottle  Cup Stapler
Pencil  Monitor Monitor Telephone   Bottle  Cup Stapler
--
...

Each labelled scene is specified by 5 lines:

  1. A separator ‘--’
  2. The scene id as read from the scenes file
  3. A tab-separated list of ground truth labels
  4. A tab-separated list of perception labels
  5. A tab-separated list of relation+perception labels

So, if perception is 100% accurate then lines 3 and 4 (the ground truth and perception labels) will be identical.
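For later processing it can help to parse the results file back into per-scene records. The following is a minimal, hypothetical sketch (the file name and the accuracy computation are illustrative and not part of the package):

#!/usr/bin/env python
# Sketch: parse an evaluation results file in the format described above
# (separator, scene id, then ground truth / perception / relational label
# lines) and compute a simple relational-label accuracy.

def parse_results(filename):
    with open(filename) as f:
        lines = [line.rstrip('\n') for line in f if line.strip()]
    scenes = []
    for i in range(0, len(lines), 5):
        # lines[i] is the '--' separator
        scene_id = lines[i + 1]
        ground_truth = lines[i + 2].split()   # labels are tab separated
        perception = lines[i + 3].split()
        relational = lines[i + 4].split()
        scenes.append((scene_id, ground_truth, perception, relational))
    return scenes

if __name__ == '__main__':
    scenes = parse_results('run1_results.txt')
    correct = sum(g == r for _, gt, _, rel in scenes for g, r in zip(gt, rel))
    total = sum(len(gt) for _, gt, _, _ in scenes)
    print('relational label accuracy: %.3f' % (correct / float(total)))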

Original page: https://github.com/strands-project/strands_qsr/wiki/Evaluation-Client

Simulating perception

To simulate the perception for each scene in a scenes file, use the ‘generate_perception.py’ script inside the strands_qsr_evaluation package. This will generate a perceptions file that contains artificial confidence values for the class of every object inside every scene.

Usage

rosrun strands_qsr_evalution generate_perception.py [options]

Options:
  -h, --help            show this help message and exit
  -o FILE, --out=FILE   write scene perceptions to FILE
  -s FILE, --scenes=FILE
                        load the scenes from FILE
  -p P_TYPE, --perception=P_TYPE
                        generate perceptions of type P_TYPE
  -t OBJ_CLASS, --object_type=OBJ_CLASS
                        conside objects of type OBJ_CLASS
  -a, --all_objects     conside all objects in scenes (replaces -t)
  -O, --overwrite       overwrite output file if already exists.

For example:

rosrun strands_qsr_evalution generate_perception.py -o perceptions.json -s data/simulation/bham_office_desk_500.json -p gt_60 -t Mouse -t Keyboard -t Mug -t Monitor -t Headphone -t Pencil -t Keys -t Lamp -t Laptop

Perception models

The perception model used to generate the artificial object class confidences can be specified using the -p option. Available models:

  • random - every possible class is given equal confidence
  • gt_N, where N is 1..100 - ground truth from 1 to 100%; the correct object label is given confidence N while the remaining 100-N is equally distributed among the other labels
  • a ‘library’ model

Library models are specified at the bottom of strands_qsr_evaluation/src/perception.py as global variables. These are matrices that give p(perception_label | true_label). To define a new perception model, create a new variable at the bottom of the file using the existing models as a template. The Gnumeric spreadsheet in the package can also be used as a guide.
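To illustrate the idea (this is not the exact variable layout used in perception.py; the class names and values below are invented), a library model is essentially a confusion matrix whose rows are true labels, whose columns are perceived labels, and whose rows each sum to 1:

# Illustrative sketch only: class names and values are made up, and the
# exact data structure expected by perception.py may differ.
import numpy as np

CLASSES = ['Monitor', 'Keyboard', 'Cup']

# p(perception_label | true_label): row = true label, column = perceived label.
MY_LIBRARY_MODEL = np.array([
    [0.8, 0.1, 0.1],   # a Monitor is perceived as a Monitor 80% of the time
    [0.1, 0.8, 0.1],   # a Keyboard is perceived as a Keyboard 80% of the time
    [0.2, 0.1, 0.7],   # a Cup is perceived as a Cup 70% of the time
])

assert np.allclose(MY_LIBRARY_MODEL.sum(axis=1), 1.0)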

Perceptions file data format

A JSON representation of a dictionary: key = scene_id, value = another dictionary where the key is the object id in the scene and the value is a dictionary with the entries ‘type’ (a list of possible types for the object) and ‘confidence’ (a list of the confidences associated with those types).
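As a rough illustration of that structure (the scene id, object ids, class names and confidence values below are invented):

# Sketch of the perceptions file structure described above; all ids, class
# names and confidence values here are invented for illustration.
import json

perceptions = {
    'scene1': {
        'object_0': {'type': ['Monitor', 'Keyboard'], 'confidence': [0.7, 0.3]},
        'object_1': {'type': ['Cup', 'Mouse'],        'confidence': [0.6, 0.4]},
    }
}

with open('perceptions_example.json', 'w') as f:
    json.dump(perceptions, f, indent=2)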

Original page: https://github.com/strands-project/strands_qsr/wiki/perception

Generate documentation

From within the docs folder run the following:

To autogenerate the packages and modules documentation from their docstrings:

sphinx-apidoc ../qsr_lib/src/ -o rsts/api/ -e

NOTE: sphinx-apidoc does not overwrite any files, so if you want to regenerate the documentation for a package or a module then delete first its rst file in the docs/rsts/api folder.

Then simply run:

make html

qsrs.rst

The table in qsrs.rst is generated from qsrs_table.md as writing tables in reST is a pain. Markdown files can be converted to reST using pandoc. The actual command, run from within the docs/rsts/handwritten/qsrs folder, is:

pandoc --from=markdown --to=rst --output=qsrs_table.rst qsrs_table.md

If pandoc converts “`” to “``” then do two string replacements:

  1. “``” to “`”
  2. “`” followed by 2 spaces to “`” followed by 6 spaces, i.e. ‘backtick+2 spaces’ to ‘backtick+6 spaces’

Original page: https://github.com/strands-project/strands_qsr_lib/blob/master/docs/README.md

The QSR Probabilistic Representation library

This library provides functionality to create probabilistic models of QSR state chains, e.g. as produced by the qsr_lib. This can for example be used for classification and sampling of new state chains. Currently only HMMs are supported.

Usage

The recommended language is Python 2.7 using ROS Indigo. Even though the library uses ROS only for communication, the provided infrastructure currently relies on ROS services. Python is recommended because the provided ros_client is implemented in Python, and using the services directly might be a bit confusing at first since they rely on JSON-parsable strings.

For a Python usage example, please see example_ros_client.py in scripts. Try rosrun qsr_prob_rep example_hmm_client.py -h or rosrun qsr_prob_rep example_pf_client.py -h for usage information, depending on whether you want to create a Hidden Markov Model or a Particle Filter.

Currently implemented functionality

HMM

  • create: This takes the desired qsr_type and a JSON-parsable list of lists of QSR state chains and returns the XML representation of the trained HMM as a string. This function is easiest to use when reading the state chains from files, as is done in the example client. The resulting XML can either be written to disk, kept in memory, or stored in a datacentre. The XML string is used in all other functionalities to load the HMM.
  • sample: Given an HMM as an XML string, the desired number and length of samples, and the QSR the HMM models, this function produces sample state chains and returns them as numpy arrays.
  • loglikelihood: Given an HMM and a (list of) state chain(s), this function calculates the cumulative log-likelihood for the state chain(s) to be produced by the given HMM. This might produce -inf if production is impossible.

Particle Filter

  • create: This takes a model consisting of several transition probability and observation probability matrices in a dictionary, plus a state look-up table, to create a particle filter. Create an instance of the PfRepRequestCreate class, filling in all the necessary information. The required model can be built with the helper class qsrrep_pf.pf_model.PfModel, which includes tests for model sanity. Have a look at the example client for how to use this. The look-up table has to have as many states as the matrices have rows and columns, and should be a simple list or numpy array of the states that will be observed. The index of a state in the look-up table has to correspond to the index of this state in the given matrices. Have a look at the qsrrep_utils.qtc_model_generation class for inspiration on how to create one. This returns a uuid identifying your particle filter.
  • The necessary files can be created directly from the HMM by exporting the emission matrix as the observation probability matrix and the transitions as the transition probability matrix. This, however, requires that you have enough training data to learn sensible emissions and transitions at the same time. If not, create your own observation probability matrix as is done in qsrrep_utils.qtc_model_generation. The create_pf_models.py script can help to generate the files from the HMM.
  • predict: Takes the uuid of an existing particle filter and the number of sample generations to predict and returns the most likely next states and model including their probabilities.
  • update: Runs the Bayesian update for the particle filter identified by the given uuid, using the provided observation.
  • list_filters: Lists all currently active filters.
  • remove: Removes the particle filter with the given uuid from memory.

General usage advice

The ros_client for Python hides a lot of the complexity from the user. Just create an instance of the correct request class and call ROSClient().call_service, and the right service will be called for you. All the JSON parsing happens in the background and you don’t have to worry about it. Have a look at the example clients to see how to do this.
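A minimal sketch of what this typically looks like for the HMM create functionality is shown below. The HMMRepRequestCreate class name, its argument names and the ROSClient import path are assumptions based on the naming scheme described in this README, so check the example clients for the authoritative usage.

# Sketch only: class name, argument names and import paths are assumptions
# following the HMMRepRequest<Functionality> naming scheme; see
# example_hmm_client.py for the authoritative usage.
import rospy
from qsrrep_lib.rep_io import HMMRepRequestCreate   # assumed class name
from qsrrep_ros.ros_client import ROSClient         # assumed import path

rospy.init_node('qsr_prob_rep_example')

# Two short QTCB state chains (a list of lists of QSR states).
state_chains = [['-,-', '-,0', '0,0'], ['0,0', '+,0', '+,+']]

request = HMMRepRequestCreate(qsr_type='qtcb', qsr_seq=state_chains)
response = ROSClient().call_service(request)        # XML string of the trained HMM
print(response)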

Currently implemented QSRs
  • QTCB: The basic variant of the Qualitative Trajectory Calculus. For more information on the implementation see [1]
  • QTCC: The double-cross variant of the Qualitative Trajectory Calculus. For more information on the implementation see [1]
  • QTCBC: The mixture representation of the basic and double-cross variant of the Qualitative Trajectory Calculus. For more information on the implementation see [1]
  • RCC3: A very simple rcc3 representation using uniform transition and emission matrices.

For Developers

The following is currently heavily outdated and needs updating!!!

Adding a new QSR

Adding a new QSR based on the simple RCC3 example:

  • Create a new file in src/qsrrep_hmms or copy the rcc3_hmm.py
  • Import: from qsrrep_hmms.hmm_abstractclass import HMMAbstractclass
  • Create a new class that inherits from HMMAbstractclass
  • In the __init__(self) function:
  • Call the superclass constructor: super(self.__class__, self).__init__()
  • Set self.num_possible_states to the number of possible states your QSR has
  • Overwrite the two functions _qsr_to_symbol and _symbol_to_qsr. See rcc3 example for inspiration.
  • Add your new QSR to src/qsrrep_lib/rep_lib.py:
  • Import your new file from qsrrep_hmms
  • Add your new QSR to the dictionary of available QSRs hmm_types_available following the examples given
  • Enjoy your new QSR HMM!

The example client is agnostic of the QSR implemented; after a simple restart of the server, you can use your new QSR with the example client immediately.
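A rough skeleton of such a class, following the steps above, is shown below. The state count and the bodies of the two conversion methods are placeholders, not a working QSR; see rcc3_hmm.py for a complete example.

# Skeleton only: num_possible_states and the two conversion methods below are
# placeholders; see rcc3_hmm.py for a complete, working example.
from qsrrep_hmms.hmm_abstractclass import HMMAbstractclass


class MyQsrHmm(HMMAbstractclass):
    def __init__(self):
        # Call the superclass constructor as described above.
        super(self.__class__, self).__init__()
        # Number of possible states of the new QSR (placeholder value).
        self.num_possible_states = 8

    def _qsr_to_symbol(self, qsr_data):
        # Map each QSR state chain to a chain of integer symbols
        # in the range [0, num_possible_states).
        raise NotImplementedError("fill in for your QSR")

    def _symbol_to_qsr(self, symbols):
        # Inverse mapping: integer symbols back to QSR states.
        raise NotImplementedError("fill in for your QSR")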

Adding a more specialised QSR

Sometimes a uniformly distributed transition and emission matrix (as used in the above example by default) is just not precise enough, as in the case of QTC. Please have a look at src/qsrrep_hmms/qtc_hmm_abstractclass.py for how to deal with that.

Basic instructions:

  • Follow above instructions to create a new QSR
  • Overwrite the _create_transition_matrix and _create_emission_matrix
Adding a new functionality
  • Add your new functionality to qsrrep_hmms.hmm_abstractclass.py following the examples of _create, _sample, and _log_likelihood.
  • Add a getter for your new functionality in qsrrep_hmms.hmm_abstractclass.py following the examples of get_hmm, get_samples, and get_log_likelihood.
  • Create a request class following the naming scheme: HMMRepRequestFunctionality in qsrrep_lib.rep_io.py where Functionality is replaced by your new functionality name.
  • Inherit from HMMRepRequestAbstractclass
  • Define the _const_function_pointer to use your function in rep_lib.py.
    • Make the pointer look like the one in HMMRepRequestAbstractclass and replace my_function with the function name in rep_lib.py (implemented later on)
    • Override __init__, defining a custom function header and adding the variables to the variable self.kwargs, following the example of the other classes in this file.
  • Create a response class in qsrrep_lib.rep_io.py following the naming scheme HMMReqResponseFunctionality where Functionality should be the same as for the request class.
  • Override the get function to make sure it returns a string (str or json dump)
  • Add your new functionality to available_services in the bottom of qsrrep_lib.rep_io.py.
    • The string key will be used to create the service name
    • The value should be a list where the first entry is your request class and the second the response class.
  • Add a new function in qsrrep_lib.rep_lib.py that calls your getter function from qsrrep_hmms.hmm_abstractclass.py and returns your response from qsrrep_lib.rep_io.py
  • The ros server will automatically create a service for your new functionality and the ros_client will know how to deal with it given your request class is used as an input for it.
  • Add the new functionality to the example_ros_client.py using proper argument parsing, as done for the other functions.

[1] Dondrup, C.; Bellotto, N.; Hanheide, M.; Eder, K.; Leonards, U. A Computational Model of Human-Robot Spatial Interactions Based on a Qualitative Trajectory Calculus. In: Robotics 2015, 4, 63-102.

Original page: https://github.com/strands-project/strands_qsr_lib/blob/master/qsr_prob_rep/README.md

Datasets description

  • data1.csv: A set of hand-drawn int 2D-points for 2 objects (“o1”, “o2”) with fixed width and length (i.e. bounding boxes are provided). Tried to cover all possible cardir and RCC states.
  • data2.csv: A set of 10K random integer 2D-points (random.randint(0, 50)) for 3 objects (“o1”, “o2”, “o3”) and random float width and length (random.randint(3, 6)) (i.e. bounding boxes are provided).
  • data3.csv: A set of 10K random integer 2D-points (random.randint(0, 50)) for 3 objects (“o1”, “o2”, “o3”), but no width and length are given (i.e. no bounding boxes are provided).
  • data4.csv: A set of 10K random float 2D-points (random.uniform(0, 50)) for 3 objects (“o1”, “o2”, “o3”) and random float width and length (random.randint(3, 6)) (i.e. bounding boxes are provided).
  • data2_first100.csv: First 100 instances of data2.
  • data3_first100.csv: First 100 instances of data3.
  • data4_first100.csv: First 100 instances of data4.

Running the tests

rosrun qsr_lib unittests_generate_ground_truth.py -i <world name> -o <output filename> <qsr>

E.g.

rosrun qsr_lib unittests_generate_ground_truth.py -i data1 -o qsr_lib/tests/data/data1_rcc4_defaults.txt rcc4

-i options: data1 | data2 | data3 | data4

Types of generated data

  • Defaults. E.g.

    rosrun qsr_lib unittests_generate_ground_truth.py -i data1 -o qsr_lib/tests/data/data1_rcc4_defaults.txt rcc4
    

Original page: https://github.com/strands-project/strands_qsr_lib/blob/master/qsr_lib/tests/data/README.md

QSR_Lib

QSR_Lib can be run standalone in Python; a ROS server is also provided, together with an example client.

Notes for users

  1. Running it standalone in Python: just run example.py in $STRANDS_QSR_LIB/qsr_lib/scripts/standalone

  2. Running it via ROS

    $ rosrun qsr_lib qsrlib_ros_server.py
    

example client:

$ rosrun qsr_lib example_ros_client.py

see qsrlib_ros_client.py for more details

QTC

QTC describes the movement of 2 agents in 2D space. The implemented version is described in [1,2]. Both agents need to have their x,y coordinates described for at least two timesteps. Please see qsr_lib/share/qtc_data or the example_ros_client.py for a very short example.

To run the ROS example client using QTC:

rosrun qsr_lib example_ros_client.py <qtc-variant>

This has a few options

  • Mandatory:
  • <qtc-variant>: Can be either qtcb, qtcc, or qtcbc. Please refer to [1,2] for additional information about the variants
  • Optional:
  • -h|--help: to display usage information
  • -i|--input FILE: read x,y coordinates from a csv file. See qsr_lib/share/qtc_data/qtc_example.csv
  • --quantisation_factor FLOAT: Determines the minimum distance one of the agents has to diverge from the double cross to be counted as movement. Please refer to [1] to read more about smoothing and quantisation of QTC.
  • --validate: Creates a QTC representation that is valid according to the CND [3]
  • --no_collapse: Does not collapse similar adjacent states, as is done by default in QTC since it does not represent discrete time but a sequence of states
  • --distance_threshold: Sets a threshold distance for the transition from QTCB to QTCC and vice-versa. Only for QTCBC [2].

To create a valid QTCC state chain with a quantisation factor of 1 (in the same unit as x,y) one would run:

rosrun qsr_lib example_ros_client.py qtcc --validate --quantisation_factor 1 -i my_file.csv

To create a QTCB state for each transition from one timestep to the other without validating them, one would run:

rosrun qsr_lib example_ros_client.py qtcb -i my_file.csv --no_collapse

To create a QTCBC state chain, one would run:

rosrun qsr_lib example_ros_client.py qtcbc -i my_file.csv --distance_threshold 1.5 --validate

Notes for QSR developers

You need to change two files:

  1. Make a copy of $STRANDS_QSR_LIB/qsr_lib/scripts/standalone/makers/maker_qsr_rcc3_rectangle_bounding_boxes_2d.py, rename it to something suitable (preferably keeping the maker_qsr prefix) and edit that file.

Change the 3 attribute variables:

  • self.qsr_type : a string
  • self.required_fields : a list of strings; what type and order of input data the QSR requires
  • self.all_possible_relations : a list of strings; what QSRs it provides

Change the method:

  • make : see in the code where to start and finish your code

Optionally, also change the following:

  • custom_help
  • custom_checks

  2. Edit qsrlib.py in $STRANDS_QSR_LIB/qsr_lib/scripts/standalone/ (see the sketch below):
     1. Import your new QSR class.
     2. Append to the dictionary (self.__const_qsrs_available) of the QSRlib class an appropriate entry for your new QSR, using as key the same string you have assigned to YOUR_QSR_CLASS.qsr_type and as value the class name WITHOUT parentheses at the end.
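A rough sketch of what these two edits amount to is shown below. The class name, qsr_type string, fields and relations are placeholders, and the actual base class and make() signature should be taken from the RCC3 maker you copied.

# --- new maker, e.g. makers/maker_qsr_my_relation.py (sketch only) ---
# Attribute names follow the description above; copy the real base class and
# make() signature from the RCC3 maker you duplicated.

class QsrMyRelation(object):                          # placeholder base class
    def __init__(self):
        self.qsr_type = 'my_relation'                 # string identifier
        self.required_fields = ['x', 'y']             # input data the QSR needs
        self.all_possible_relations = ['near', 'far'] # relations it can output

    def make(self, *args, **kwargs):
        # compute and return the QSRs for the given input data
        pass

# --- registration in qsrlib.py (sketch only) ---
# from makers.maker_qsr_my_relation import QsrMyRelation
# self.__const_qsrs_available['my_relation'] = QsrMyRelation  # no parentheses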

That’s it! For any trouble, preferably open a ticket via the GitHub issue system or, alternatively, if you are unable to do so, contact me directly at y.gatsoulis@leeds.ac.uk

Cheers! Yianni

[1] Christian Dondrup, Nicola Bellotto, and Marc Hanheide. “A probabilistic model of human-robot spatial interaction using a qualitative trajectory calculus.” 2014 AAAI Spring Symposium Series. 2014.

[2] Christian Dondrup, Nicola Bellotto, and Marc Hanheide. “Social distance augmented qualitative trajectory calculus for Human-Robot Spatial Interaction.” Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on. IEEE, 2014.

[3] Matthias Delafontaine. “Modelling and analysing moving objects and travelling subjects: Bridging theory and practice”. Diss. Ghent University, 2011.

Original page: https://github.com/strands-project/strands_qsr_lib/blob/master/qsr_lib/README.md

Overview

The package allows commands to be issued to the robot by presenting it specially designed cards. While the first card allows only one type of command to be issued (i.e. take a photo), the second one allows four different commands.

You will need to print (no rescaling!) the cards.pdf file, cut out the cards and fold the ‘operator’ one. Presenting the pattern to the robot’s camera causes it to publish the command on the /socialCardReader/command topic.

Original page: https://github.com/strands-project/strands_social/blob/hydro-devel/social_card_reader/README.md

strands_tweets

Utility package containing a node that can be used to manage twitter accounts from ROS

Installation

Install Twython via pip

$ pip install twython

or, with easy_install

$ easy_install twython

Starting Out

  • Go to https://dev.twitter.com/apps and register an application

  • If the application is already registered, hit Test OAuth and skip the two following steps.

  • Go to the settings tab and change permissions to Read, Write and Access direct messages

  • Go back to the Details tab and at the bottom hit the Create Access Token button

  • Go to the OAuth tool tab, get the Consumer key, Consumer secret, Access token and Access token secret, and save them in /opt/strands/strands_catkin_ws/src/strands_deployment/strands_parameters/defaults/twitter_params.yaml with the following format:

    twitter:
      appKey: ''
      appSecret: ''
      oauthToken: ''
      oauthTokenSecret: ''

  • Launch the mongodb_store: roslaunch mongodb_store mongodb_store.launch

  • Save the parameters on your locals collection:

rosservice call /config_manager/save_param /twitter/appKey

rosservice call /config_manager/save_param /twitter/appSecret

rosservice call /config_manager/save_param /twitter/oauthToken

rosservice call /config_manager/save_param /twitter/oauthTokenSecret

  • Now you are ready to go!

Tweeting

Text Only

Run the strands_tweets node: rosrun strands_tweets tweet.py

You can send a tweet by calling the /strands_tweets/Tweet service like this:

rosservice call /strands_tweets/Tweet 'Whatever you want to say' false

You can also send a tweet using the tweet_test client like this:

rosrun strands_tweets tweet_test.py 'Whatever you want to say'

You can also tweet using actions; try running:

rosrun actionlib axclient.py /strands_tweets

Fill the text field with the text of your tweet and press SEND GOAL

Tweeting Images

You can tweet images using a script provided especially for this:

rosrun strands_tweets tweet_image_test.py 'text for the tweet, less than 140 characters' /path/to/an/image.png

Using strands_tweets in your code

Goal definition

strands_tweets is an action server with the following goal

#goal definition
string text
bool force
bool with_photo
sensor_msgs/Image photo
---
#result definition
bool success
---
#feedback
string tweet
  • text: is a field that contains the text to be tweeted
  • force: is a boolean to tell the action server to tweet the text even if it is longer than 140 characters; in that case it will send as many tweets as needed to send the whole string. Note: this doesn’t work when tweeting images
  • with_photo: is a boolean to tell the action server that the tweet contains an image
  • photo: is the image to be tweeted with the text when ‘with_photo’ is set to true
Using the action server (python)
  • Import the necessary modules

    import actionlib
    import strands_tweets.msg
    

    if using images you’ll need to import this too:

from sensor_msgs.msg import Image
  • Create action server client

    client = actionlib.SimpleActionClient('strands_tweets', strands_tweets.msg.SendTweetAction)
    client.wait_for_server()
    

    strands_tweets is the action server name

  • Create goal definition (text only)

    tweetgoal = strands_tweets.msg.SendTweetGoal()
    tweetgoal.text = text
    tweetgoal.force = False
    tweetgoal.with_photo = False
    
  • Create goal definition (with image)

    tweetgoal = strands_tweets.msg.SendTweetGoal()
    tweetgoal.text = text
    tweetgoal.with_photo = True
    tweetgoal.photo = image_from_ros_msg
    

    Note: the image HAS to be a ‘sensor_msgs.msg.Image’ ROS message

  • Send goal

    client.send_goal(tweetgoal)
    # Waits for the server to finish performing the action.
    client.wait_for_result()
    # Prints out the result of executing the action
    ps = client.get_result()
    print ps
    
  • Where can I see an example of this?

You can check the test nodes for this code in :

https://raw.githubusercontent.com/strands-project/strands_social/hydro-devel/strands_tweets/scripts/tweet_image_test.py

and

https://raw.githubusercontent.com/strands-project/strands_social/hydro-devel/strands_tweets/scripts/tweet_test.py

Using the action server (C++)

There are no C++ examples, but following the steps in this README using the actionlib API for C++ should do; this tutorial should help: http://wiki.ros.org/actionlib_tutorials/Tutorials/SimpleActionClient

Original page: https://github.com/strands-project/strands_social/blob/hydro-devel/strands_tweets/README.md

STRANDS Tabletop Perception

Tabletop perception for STRANDS. The tabletop perception is realised as a ROS action server. Given the information about a table in the environment, i.e. its pose and its shape specified as a polygon, the robot plans multiple views to perceive the tabletop, segments the data from the depth camera at each of the planned views, and classifies the extracted segments for a given set of object classes. The individual components of the system are described below.

The information about the tables is stored in the ros datacentre (MongoDB). This information can either be added through autonomous table detection or through a manual process using a marker (cf. the descriptions below).

Finding/Storing Tables

Tables are stored in the datacentre as strands_perception_msgs/Table messages, using the MessageStore system.

Manually inserting tables into the datacentre:

There are a few possible ways this can be done:

OPTION A) Use RoboMongo or any MongoDB client to create a new Table message inside the message_store collection inside the message_store database. Create a document that looks like:

 {
    "_id" : ObjectId("533c147c9f9d51517be039af"),
    "header" : {
        "stamp" : {
            "secs" : 0,
            "nsecs" : 0
        },
        "frame_id" : "/map",
        "seq" : 0
    },
    "pose" : {
        "pose" : {
            "position" : {
                "y" : -5.604931354522705,
                "x" : 5.736222267150879,
                "z" : 1.433120727539062
            },
            "orientation" : {
                "y" : 0.6713822484016418,
                "x" : 0.7393708229064941,
                "z" : 0.04276063665747643,
                "w" : 0.02735294029116631
            }
        },
        "covariance" : [
            0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0
        ]
    },
    "table_id" : "lg_table_8",
    "tabletop" : {
        "points" : [
            {
                "y" : 0,
                "x" : 0,
                "z" : 0
            },
            {
                "y" : 1,
                "x" : 0,
                "z" : 0
            },
            {
                "y" : 1,
                "x" : 1,
                "z" : 0
            },
            {
                "y" : 0.2000000029802322,
                "x" : 1,
                "z" : 0
            },
            {
                "y" : 0,
                "x" : 0,
                "z" : 0
            }
        ]
    },
    "_meta" : {
        "stored_type" : "strands_perception_msgs/Table",
        "stored_class" : "strands_perception_msgs.msg._Table.Table"
    }
}

OPTION B) Create a Table message in your program and use the message store proxy classes to insert it:

from strands_perception_msgs.msg import Table
from geometry_msgs.msg import PoseWithCovariance, Polygon

from mongodb_store.message_store import MessageStoreProxy

my_table = Table()
my_table.table_id = "MagicTable1"
my_table.header.frame_id = "/map"  # The parent frame that the table is in

table_pose = PoseWithCovariance()  # The transformation to the table frame
# Fill in the table position...
my_table.pose = table_pose

polygon = Polygon()                # The table top surrounding polygon in the table frame
# Fill in the points in the polygon....
my_table.tabletop = polygon

_msg_store = MessageStoreProxy()
# Store the table
_msg_store.insert(my_table)
Semi-automatic table insertion:

The manual_table_storer package provides a script to create tables at locations given by a calibration board/chessboard. This avoids the need to work out where the table is in space, but does still require the manual measurements of the table plane polygon.

To do so:

  1. Checkout and compile the chessboard detection code into your catkin workspace:

roscd
cd ../src
git clone https://github.com/cburbridge/chessboards
cd ..
catkin_make

  2. Measure your chosen table’s top, choosing an origin point. +z will point down, so +y will be clockwise to +x. Write down the co-ordinates of the table top:
[image: table top coordinates]

  3. Add your new table top to the top of the store.py file:
TABLES["LGType4"]=[(0,0,0),
                   (0.4,0,0),
                   (0.4,0.6,0),
                   (-1.0,.6,0),
                   (-1,-1.0,0),
                   (-0.4,-1,0),
                   (-0.4,-0.6,0)]
  4. Launch the table store program with the table type and the new name for the table:
rosrun manual_table_storer store.py LGType my_magic_table
  5. Print out an A3 calibration pattern, found in chessboards/chessboards/boards/A3 cal.pdf. Stick it to some card.
  6. Place the calibration pattern onto the table, with the centre of the board at your origin and the x & y axes aligned with your axes. See the image above.
  7. Make sure your robot is well localised in the 2D map, then run the chessboard detector:
roslaunch chessboard_pose detect_a3_8_5.launch

When the image shows the chessboard highlighted in rainbow colours, it has been found. At that point, the storer node will store and exit.

Autonomous table detection (KTH, Nils)

See README.

Table Visualisation and Tweaking

Once tables are inside the datacentre, they can be manually moved about using the visualise_tables package.

To visualise the tables, without the option to move them (safer):

rosrun visualise_tables visualise.py

To visualise the tables with interactive markers enabled to move the table:

rosrun visualise_tables visualise.py edit

In RViz, add an InteractiveMarkers display and set the Update Topic to /table_markers/update. If enabled, dragging the table about updates the datacentre.

View planning for tabletop perception (BHAM, Lars)

  1. Make sure that you have the table information available in the ros datacentre

  2. Make sure that you have a octomap server running with a local 3D map:

    roslaunch perceive_tabletop_action octomap.launch
    
  3. Launch the view planning components and action server:

    roslaunch perceive_tabletop_action perceive_tabletop.launch
    

3D Object recognition (TUW, Aitor, Thomas, Michael)

Run the ‘perceive tabletop’ action

Run the tabletop action client with a table id (known by the ros datacentre):

rosrun perceive_tabletop_action PerceiveTabletopActionClient.py test_lg_1

Visualisation of the view planning in RViz:

[image: view planning visualisation in RViz]

Appendix: table and object data management using MongoDB (BHAM, Chris)

En route.

Original page: https://github.com/strands-project/strands_tabletop_perception/blob/hydro-devel/README.md

The multi-view object classifier is based on the object_classifier ros package (read the instructions there on how to train it).

You can launch the multi-view service, single-view service and segmentation module as follows:

roslaunch mv_object_classifier mv_classifier_from_files_demo.launch models_dir:=/media/DATA/Cat200_ModelDatabase_small/ trainingdir:=/media/DATA/Cat200_ModelDatabase_smalltrained/ chop_z:=1 mv_visualize_output:=true

and then:

rosrun mv_object_classifier mv_object_classifier_demo_from_file _folder:=${DATA}

where ${DATA} should point to the data/mv_seq1/ folder within this package, or any folder containing PCD files that will be used as input for the multi-view classifier.

Original page: https://github.com/strands-project/strands_tabletop_perception/blob/hydro-devel/mv_object_classifier/README.md

This classifier relies on object shape and is based on the approach of [W. Wohlkinger and M. Vincze, Ensemble of Shape Functions for 3D Object Classification].

It uses CAD data for training, such as https://repo.acin.tuwien.ac.at/tmp/permanent/Cat200_ModelDatabase.zip.

In order to speed up training, please select a subset of the training data you want to use and put it in a folder Cat200_ModelDatabase__small.

The classifier can then be started by:

roslaunch openni_launch openni.launch depth_registration:=true
roslaunch object_classifier classifier_demo.launch models_dir:=your_dataset_dir/Cat200_ModelDatabase__small/ topic:=/camera/depth_registered/points training_dir:=your_training_dir

where models_dir is your training directory with the CAD models of the classes, training_dir is the directory containing the trained data (if it exists; otherwise the data will be re-trained) and topic is the camera topic.

The launch file also lets you choose which segmentation type you want to use. The pcl_object_segmenter segments objects on a table plane, whereby the highest plane parallel to the largest plane (usually the floor) is considered as the table. In contrast, the object_rgb_segmenter segments all objects within a distance chop_z from the camera; segmentation takes a bit longer in this case.

Original page: https://github.com/strands-project/strands_tabletop_perception/blob/hydro-devel/object_classifier/README.md

This package filters points from a point cloud and can be used for processing training data for the object classifier. It saves the indices of the points of the point cloud that are on the highest table plane parallel to the largest plane (this will mostly be the floor) and within a distance chop_z_max from the camera. It saves these point indices as a .pcd file in the same folder as the input files, adding the filename prefix object_indices_. This file is later used by the object classifier to train the objects.

To gather input data, put your training objects individually on an empty plane (e.g. floor) and grab the point clouds with a pcd grabber. Put them into a directory structure like:

_my_input_dir
|
+-- _apple
|   +-- 000.pcd
|   +-- 001.pcd
+-- _banana
|   +-- 000.pcd
|   +-- 001.pcd
|   +-- 002.pcd
+-- _mug
|   +-- 000.pcd
|   +-- 001.pcd
+-- _mouse
|   +-- 000.pcd
|   +-- 001.pcd
|   +-- 002.pcd
|   +-- 003.pcd

Finally, filter the pcd files in my_input_dir by running:

rosrun pcd_filter pcd_filter_node _input_dir:=~/my_input_dir _chop_z_max:=1.1 _visualize:=true _force_refilter:=true

where _visualize can be set to visualize the filtered point clouds and _force_refilter forces a new filtering process for already filtered .pcd files (i.e. .pcd files for which corresponding object_indices_ files already exist).

Original page: https://github.com/strands-project/strands_tabletop_perception/blob/hydro-devel/pcd_filter/README.md

QSR KB and Visualisation

Knowledge representation of and reasoning about QSRs using SWI-Prolog. Specialized predicates can be used to visualize QSRs in RViz.

Starting the KB service

First the knowledge base service has to be started:

roslaunch qsr_kb kb_bringup.launch

(Please note that SWI-Prolog needs to be installed; on Ubuntu run: sudo apt-get install swi-prolog)

The knowledge base service provides a simple tell and ask interface to a Prolog engine. Queries and answers are exchanged using strings.

Adding knowledge about QSRs to the KB

New perception events with different classifications can be added using the create_event(Evt,QSRs,Loc,Classifications,Poses) predicate, where Evt denotes the event that will be generated, QSRs denotes a list of QSRs between segmented object clusters, Loc denotes the location where the event occurred, Classifications denotes a list of different classification results (incl. probabilities) for all segmented object clusters, and Poses denotes a list of object poses.

create_event(EVT,
             % QSRs, (Note: object names such as 'keyboard' are only used for a better understanding, there is no meaning attached to them)
             [['left-of', 'keyboard', 'cup'], ['left-of', 'monitor', 'cup'],
              ['behind', 'keyboard', 'cup'], ['in-front-of', 'monitor', 'cup'],
              ['in-front-of', 'keyboard', 'monitor'], ['right-of', 'cup', 'monitor']],
             % Loc
             'table27',
             % Classifications (here bottom-up [BU], and top-down [TD])
             [['BU', [['keyboard', 'Keyboard', 0.8], ['keyboard', 'Monitor', 0.2],
                      ['cup', 'Cup', 0.4], ['cup', 'Mouse', 0.5], ['cup', 'Keyboard', 0.1],
                      ['monitor', 'Keyboard', 0.1], ['monitor', 'Monitor', 0.9],
                      ['mouse', 'Cup', 0.9], ['mouse', 'Mouse', 0.1]]],
              ['TD', [['keyboard', 'Keyboard', 0.9], ['keyboard', 'Monitor', 0.1],
                      ['cup', 'Cup', 0.6], ['cup', 'Mouse', 0.2], ['cup', 'Keyboard', 0.2],
                      ['monitor', 'Keyboard', 0.1], ['monitor', 'Monitor', 0.9],
                      ['mouse', 'Cup', 0.1], ['mouse', 'Mouse', 0.9]]]],
             % Poses
             [ ['monitor', [[1.0,0.0,0.0],[1,0,0,0]]],
               ['cup', [[0.5,1.0,0.0],[1,0,0,0]]],
               ['mouse', [[0.5,-0.5,0.0],[1,0,0,0]]],
               ['keyboard', [[0.0,0.0,0.0],[1,0,0,0]]] ]).

Querying QSRs from the KB and visualise the result

QSRs can be queried using predicates such as qsr(Rel, Obj1, Obj2, QSR) and qsrT(Rel, Cls1, Cls2, QSR), which use segmented object clusters and object classes (or types) respectively to retrieve QSRs. For more options please refer to the source code in qsr.pl.

QSRs cannot be visualized directly. Using the vis(QSRLst) predicate, where QSRLst denotes a list of QSRs, they can be cached for visualisation. To visualise the different QSRs, one has to iterate over the set of solutions via the next_vis(X) predicate (see below).

qsrT(Rel,Cls1,Cls2,QSR), vis([QSR]).

To visualize the next solution use the following predicate:

next_vis(X).

If there is no more solution the predicate will return an empty list and clear the visualization in RVIZ.

The list of cached QSRs can be cleared using the predicate new_vis.

Original page: https://github.com/strands-project/strands_tabletop_perception/blob/hydro-devel/qsr_kb/README.md

table_detection

This package uses the primitive_extraction package to extract planes and determines which of the planes are tables based on size, angle with respect to the floor, shape and height above the floor. A list of detected tables is kept locally and in the mongodb_store, and new tables overlapping with old ones are merged by finding the common convex hull. All currently found tables can be found by querying all strands_perception_msgs/Table from the mongodb_store.
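As a minimal sketch of that query, assuming the tables were inserted with the default MessageStoreProxy settings shown earlier in this document:

# Sketch: query all strands_perception_msgs/Table messages from mongodb_store.
# Assumes the default message_store collection/database were used when the
# tables were inserted.
import rospy
from strands_perception_msgs.msg import Table
from mongodb_store.message_store import MessageStoreProxy

rospy.init_node('list_tables')
msg_store = MessageStoreProxy()

# query() returns a list of (message, meta) tuples for the given message type.
for table, meta in msg_store.query(Table._type):
    print(table.table_id)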

The table detection is launched by:

roslaunch table_detection detection.launch cloud:=/pointcloud2/topic

where /pointcloud2/topic is a point cloud in the frame of reference of a global map. Currently, the package outputs three visualization topics in RViz:

  • /primitive_extraction/primitive_marker_array - All detected planes from the last processed point cloud as visualization_msgs/MarkerArray.
  • /table_detection/primitive_marker_array - All detected tables from the last processed point cloud as visualization_msgs/MarkerArray.
  • /primitives_to_tables/table_markers - Tracked tables that have been determined to be new instances or merged with an old one, published as visualization_msgs/Marker.

The launch file has a bunch of options and parameters, notably for determining what is a table:

  • min_height - Minimum height of a table plane.
  • max_height - Maximum height of a table plane.
  • max_angle - Maximum deviation of table normal from straight up.
  • min_side_ratio - Minimum ratio of shortest side divided by longest side.
  • min_area - Minimum area of table plane.

Original page: https://github.com/strands-project/strands_tabletop_perception/blob/hydro-devel/table_detection/README.md

This package visualizes all points of the point clouds in directory my_input_dir that have a corresponding file with saved point indices (file prefix: object_indices_).

_my_input_dir
|
+-- _apple
|   +-- 000.pcd
|   +-- 001.pcd
+-- _banana
|   +-- 000.pcd
|   +-- 001.pcd
|   +-- 002.pcd
+-- _mug
|   +-- 000.pcd
|   +-- 001.pcd
+-- _mouse
|   +-- 000.pcd
|   +-- 001.pcd
|   +-- 002.pcd
|   +-- 003.pcd

For instance:

To show all point clouds of all classes, run:

rosrun visualize_pcd_indices visualize_pcd_indices_node _dir:=my_input_dir

To show all point clouds of a particular class (e.g. banana), run:

rosrun visualize_pcd_indices visualize_pcd_indices_node _dir:=my_input_dir/banana

One possibility to compute the indices of the point cloud above a table plane is to use the catkin package pcd_filter (https://github.com/strands-project/strands_tabletop_perception/tree/hydro-devel/pcd_filter).

Original page: https://github.com/strands-project/strands_tabletop_perception/blob/hydro-devel/visualize_pcd_indices/README.md

STRANDS User Interfaces

User interfaces for the robots, displayed using a web browser.

Installation

You can add the STRANDS UI source to your catkin workspace (if it’s not there already) using wstool:

wstool merge https://raw.github.com/hawesie/strands_ui/master/strands_ui_rosinstall.yaml
wstool update strands_ui twitter_bootstrap

rosdep should give you most requirements as usual (e.g. rosdep install --from-paths src --ignore-src --rosdistro groovy -y -r), but we don’t yet have a rosdep for web.py which is required by strands_webserver. This can be installed using, e.g.,

sudo apt-get install python-webpy

or

sudo pip install web.py

You can then run catkin_make as usual.

strands_webserver

The strands_webserver is both a webserver and a ROS node which can receive input from other nodes. This allows these other nodes to control what the webserver displays and receive feedback from user interaction with the pages.

Running

To run the server, first run rosbridge if you don’t have it running already.

roslaunch rosbridge_server rosbridge_websocket.launch

Then run the server

rosrun strands_webserver strands_webserver

This will launch the webserver on localhost port 8090, so browse to http://localhost:8090 and you should see something like

Ready; display = 1

Displaying content

The webserver manages individual displays separately, one for each request to its URL. These can be published to individually by specifying their number, or you can just send 0 to publish to all of them.

Interaction with strands_webserver typically happens by publishing instructions to it for display. These can either be URLs or bits of html which will be injected into its main page. The URLs are loaded in a full size iframe. The bits of html are injected as the innerHTML of the content node in the main page (strands_webserver/data/main.html).

The example below shows how to publish a simple html page to the webserver: first a normal URL (given that it is happy to appear in an iframe), then a page defined locally.

#! /usr/bin/env python

import sys
import os
import roslib
import rospy

import strands_webserver.client_utils

if __name__ == '__main__':
    rospy.init_node("strands_webserver_demo")
    # The display to publish on, defaulting to all displays
    display_no = rospy.get_param("~display", 0)

    # display a start-up page
    strands_webserver.client_utils.display_url(display_no, 'http://strands-project.eu')

    # sleep for 5 seconds
    rospy.sleep(5.)

    # tell the webserver where it should look for web files to serve
    http_root = os.path.join(roslib.packages.get_pkg_dir("strands_webserver"), "data")
    strands_webserver.client_utils.set_http_root(http_root)

    # start with a basic page pretending things are going normally
    strands_webserver.client_utils.display_relative_page(display_no, 'example-page.html')

To display arbitrary HTML within the content node in the main page (strands_webserver/data/main.html) you can use calls like the following

strands_webserver.client_utils.display_content(display_no, '<p>Hello world</p>')

Note that any script nodes will not be evaluated in this injected html.

Using the Webloader action server

The webloader action server offers 4 ways of manipulating webpages shown by the webserver (all of this of course only works if you exclusively use the webloader server to display pages):

  • Loading relative or content-generated pages
  • Reloading the current page
  • Going back to the previous page
  • Showing a temporary page for a specific number of seconds

Usage still requires your main program to set

http_root = os.path.join(roslib.packages.get_pkg_dir("my_package"), "directory")

Currently all these pages are sent to all displays. Possible future extensions could therefore include a history for each display.

Each of the above-mentioned actions has its own action file. Reloading and going back are just empty actions. Loading a page requires specifying goal.relative=BOOL to toggle between relative and content-generated pages, and goal.page=STRING, which is either the name of the relative page or the content for the generated page. For the temporary page you also have to specify goal.timeout=INT, after which the previous page will be shown again.

This action server was developed to manage the user interface for the NHM deployment, where people could trigger tweeting an image. The image was displayed on a temporary page and afterwards the display was reset to the previous page. This had the advantage of showing arbitrary pages without having to care about what should be displayed afterwards.

Auto-generated button pages

The strands_webserver package also provides functionality for automatically generating pages with buttons which can call services. Such a call might look like this

notice = 'Help me, I am <em>stuck</em>'
buttons = [('No', 'trigger_pleading'), ('Sure', 'party_time')]
service_prefix = '/caller_services'
content = strands_webserver.page_utils.generate_alert_button_page(notice, buttons, service_prefix)
strands_webserver.client_utils.display_content(display_no, content)

In this, notice is used to generate a large banner notice and buttons maps between a button label (e.g. No) and the service call which will be triggered when the button is pressed (e.g. '/caller_services/trigger_pleading'). This service is of type std_srvs/Empty. Instead of generate_alert_button_page you can use generate_button_page, which accepts arbitrary html instead of a banner notice.

The full example is as follows (also available as strands_webserver/scripts/strands_webserver_demo.py).

#! /usr/bin/env python

import sys
import os
import roslib
import rospy

import strands_webserver.page_utils
import strands_webserver.client_utils
import std_srvs.srv


def trigger_pleading(req):
      print 'please, please help'


def party_time(req):
      print 'woo, off I go baby'


if __name__ == '__main__':
    rospy.init_node("strands_webserver_demo")
    # The display to publish on, defaulting to all displays
    display_no = rospy.get_param("~display", 0)

    # display a start-up page
    strands_webserver.client_utils.display_url(display_no, 'http://strands-project.eu')

    # sleep for 5 seconds
    rospy.sleep(5.)

    # tell the webserver where it should look for web files to serve
    http_root = os.path.join(roslib.packages.get_pkg_dir("strands_webserver"), "data")
    strands_webserver.client_utils.set_http_root(http_root)

    # start with a basic page pretending things are going normally
    strands_webserver.client_utils.display_relative_page(display_no, 'example-page.html')

    # sleep for 5 seconds
    rospy.sleep(5.)

    # now ask for help
    name = 'Help me, I am <em>stuck</em>'
    buttons = [('No', 'trigger_pleading'), ('Sure', 'party_time')]
    service_prefix = '/caller_services'
    content = strands_webserver.page_utils.generate_alert_button_page(name, buttons, service_prefix)
    strands_webserver.client_utils.display_content(display_no, content)
    rospy.Service('/caller_services/trigger_pleading', std_srvs.srv.Empty, trigger_pleading)
    rospy.Service('/caller_services/party_time', std_srvs.srv.Empty, party_time)
    rospy.spin()

Includes

Need to describe where the webserver fetches javascript and css from, and the standard includes available.

marathon_touch_gui

This package uses the strands_webserver to create an interface for the patroller during the marathon event. It’s not very pretty, but it’s a start. There is a main page (map, pause button) which is displayed using strands_webserver.client_utils.display_relative_page and two pages for recovery methods which are generated using strands_webserver.page_utils.generate_alert_button_page. These are wrapped up in marathon_touch_gui.client for ease of use. They can be called as follows:

# Setup -- must be done before other marathon_touch_gui calls
marathon_touch_gui.client.init_marathon_gui()

# Show the main page of the GUI
marathon_touch_gui.client.display_main_page(displayNo)

rospy.sleep(2)

# All callback services should be under this prefix
service_prefix = '/patroller'

# Do something like this on bumper collision

# when the human gives the 'ok' then /patroller/bumper_recovered is called
# note that this service must be provided by some other node
on_completion = 'bumper_recovered'
marathon_touch_gui.client.bumper_stuck(displayNo, service_prefix, on_completion)

rospy.sleep(2)

# Do something like this on move_base failure

# when the human gives the 'ok' then /patroller/robot_moved is called
# note that this service must be provided by some other node
on_completion = 'robot_moved'
marathon_touch_gui.client.nav_fail(displayNo, service_prefix, on_completion)

rospy.sleep(2)

# Return to main page
marathon_touch_gui.client.display_main_page(displayNo)

The full example is in marathon_touch_gui/scripts/demo.py

Running

To run the marathon GUI, first launch strands_webserver plus rosbridge and the necessary additional publishers (with HOST_IP set to an external ip for your machine if you want this to be externally accessible, else leave blank for 127.0.0.1):

HOST_IP=10.0.11.158 roslaunch marathon_touch_gui marathon_gui_dependencies.launch

This will launch the webserver on localhost port 8090, so browse to http://localhost:8090.

Then you can call the marathon_touch_gui functions. To test you can cycle through them with

rosrun marathon_touch_gui demo.py

A simple interface for the MARY TTS system. Offers a service /ros_mary and an actionlib interface /speak that accepts a simple text argument and plays the audio using PulseAudio.

  • launch it: roslaunch mary_tts ros_mary.launch
  • make the robot speak: rosservice call /ros_mary 'Welcome to the school of computer science. Please follow me!'
  • in order to use the actionlib client you can run rosrun actionlib axclient.py /speak
  • switching voices: rosservice call /ros_mary/set_voice "dfki-obadiah-hsmm"
  • switching languages: rosservice call /ros_mary/set_locale "en_US"

Available languages and voices:

  • it: None
  • te: None
  • en_US: cmu-slt-hsmm (female), dfki-obadiah-hsmm (male), dfki-prudence-hsmm (female), dfki-poppy-hsmm (female), dfki-spike-hsmm (male)
  • tr: None
  • ru: None
  • de: bits1-hsmm (female), bits3-hsmm (male), dfki-pavoque-neutral-hsmm (male)
  • sv: None

Installing new voices: Use strands_ui/mary_tts/marytts-5.0/bin/marytts-component-installer.sh

Trouble shooting

If you experience errors like this:

Traceback (most recent call last):
  File "/opt/ros/hydro/lib/mary_tts/marybridge.py", line 363, in <module>
    player.play(the_sound)
  File "/opt/ros/hydro/lib/mary_tts/marybridge.py", line 284, in play
    pa.strerror(ctypes.byref(error))))
Exception: Could not create pulse audio stream: 30873552!
[WARN] [WallTime: 1416493711.323501] mary speach action failed; maybe took too long (more than 10 seconds), maybe pulse is broke.

It means that mary was started when it could not determine which pulse resource to use. This could have multiple reasons:

  • mary was started remotely without logging in as the same user on the PC or robot. Only one user can access pulse on a PC; who that user is, is decided by who is currently logged in. If no one is logged in then pulse is not running, therefore you have to log in to the PC before starting mary remotely.
  • mary was started as a different user than the one that is logged in. If user1 is logged in and user2 logs in remotely starting mary, mary won’t work because pulse is held by user1.
  • If you are using tmux, as done by most of the STRANDS systems, not only do the two points above apply, but if you start tmux via ssh with activated X forwarding, mary will try to access the pulse resource on the remote machine. Try to always start the tmux session on the robot or PC that is supposed to run mary, as the user that is supposed to run mary and is currently logged in. If you want to start it remotely, make sure not to use X forwarding.

If MARY server is only binding to IP6, you can force it to bind to IP4 (from http://superuser.com/a/453329):

export _JAVA_OPTIONS="-Djava.net.preferIPv4Stack=true"

Original page: https://github.com/strands-project/strands_ui/blob/hydro-devel/mary_tts/README.md

Overview

This package provides a web server for serving media files (photos, music, videos) to robot applications. The content is uploaded and managed through a web interface, and is stored in the mongodb database using GridFS.

Running

rosrun mongodb_media_server server.py

Editing the available content

Go to http://localhost:8027

The ‘File Manager’ tab allows you to upload files of any type. The ‘BLAH Sets’ tabs allow you to edit the data groupings. For example, the ‘Photo Sets’ can be thought of as albums of images to be shown together; likewise with Music and Video.

Accessing the content

Every file is accessible through:

http://localhost:8027/get_media/MEDIAKEY

where MEDIAKEY is the id of the file, or

http://localhost:8027/get_media_by_name/name

where name is the filename of the media file.

The IDs of the files are retrievable via the Python api. For example, to retrieve the items in a photo album called ‘AAF-Deploy-InfoTerm’:

from mongodb_media_server import MediaClient

mc = MediaClient('localhost', 62345) # or rospy.get_param('db_host') etc
file_set = mc.get_set("Photo/AAF-Deploy-InfoTerm")
for f in file_set:
    print "Media name:", f[0]
    print "Media type:", f[1]
    print "Media key:", f[2]
    print "Media URL: http://localhost:8027/get_media/" + f[2]

In this way it is straightforward to include managed media in a web.py application.

Alternatively, the file can also be read in python rather than through the web server interface.

Please see src/mongodb_media_server/media_client.py for the API.

To Do

A more complete API for both the web interface (to provide file sets to webpages not working on web.py), and for the python API (to provide file set creation etc.)

Original page: https://github.com/strands-project/strands_ui/blob/hydro-devel/mongodb_media_server/README.md

Robot talk

A simple interface for managing short text phrases a robot might use in a conversation via text and/or text-to-speech output. The text phrases are grouped with respect to topics and have associated weights. This allows a user to sample a random text phrase for a given topic according to the weights. All information is stored in MongoDB. The commandline tool rtalk.py can be used for managing the contents of the MongoDB collection. The RobotTalkProxy provides a Python API for interacting with the collection.
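The weights are effectively normalised into sampling probabilities; in the example output further below, weights of 1.0, 2.0 and 0.5 yield probabilities of roughly 0.29, 0.57 and 0.14. The following is a minimal sketch of that weighting scheme, not the actual RobotTalkProxy implementation:

# Illustration of weighted sampling as used conceptually by robot_talk;
# this is not the actual RobotTalkProxy code.
import random

phrases = [('Hi!', 1.0), ('Hello!', 2.0), ('Hey!', 0.5)]

total = sum(weight for _, weight in phrases)
r = random.uniform(0, total)
for text, weight in phrases:
    r -= weight
    if r <= 0:
        print(text)  # e.g. 'Hello!' with probability 2.0 / 3.5 ~= 0.57
        break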

Commandline interface rtalk.py

Please run the following command for help:

$ rosrun robot_talk rtalk.py -h
usage: rtalk.py [-h]
                {add,remove,update,list,search,play,play_random,topics} ...

positional arguments:
  {add,remove,update,list,search,play,play_random,topics}
                        sub-command -h|--help
    add                 add -h|--help
    remove              remove -h|--help
    update              update -h|--help
    list                list -h|--help
    search              search -h|--help
    play                play -h|--help
    play_random         play_random -h|--help
    topics              topics -h|--help

optional arguments:
  -h, --help            show this help message and exit

RobotTalkProxy (Python API)

Please refer to the file run_example.py for an example of using the API.

$ rosrun robot_talk run_example.py

Adding entry: ID: 1 - Topic: Greeting - Text: Hi! - Weight 1.0
Adding entry: ID: 2 - Topic: Greeting - Text: Hello! - Weight 2.0
Adding entry: ID: 3 - Topic: Greeting - Text: Hey! - Weight 0.5

Listing entries for topic: Greeting
====  ========  ======  ========
  ID  Topic     Text      Weight
====  ========  ======  ========
   1  Greeting  Hi!          1
   2  Greeting  Hello!       2
   3  Greeting  Hey!         0.5
====  ========  ======  ========
Total number: 3

EXAMPLE 1: Get random text for a given topic. Here 'Greeting'
Chosen ID:  1 - Probability: 0.285714285714
Text: Hi!

Chosen ID:  2 - Probability: 0.571428571429
Text: Hello!

Chosen ID:  1 - Probability: 0.285714285714
Text: Hi!

Chosen ID:  3 - Probability: 0.142857142857
Text: Hey!

Chosen ID:  1 - Probability: 0.285714285714
Text: Hi!

Chosen ID:  2 - Probability: 0.571428571429
Text: Hello!

Chosen ID:  1 - Probability: 0.285714285714
Text: Hi!

Chosen ID:  1 - Probability: 0.285714285714
Text: Hi!

Chosen ID:  2 - Probability: 0.571428571429
Text: Hello!

Chosen ID:  2 - Probability: 0.571428571429
Text: Hello!

EXAMPLE 2: Playing random text (via marytts) for a given topic
Chosen ID:  2 - Probability: 0.571428571429
Now playing: Hello!

Removing entry ID: 1
Removal successful.
Removing entry ID: 2
Removal successful.
Removing entry ID: 3
Removal successful.

Original page: https://github.com/strands-project/strands_ui/blob/hydro-devel/robot_talk/README.md

The STRANDS webserver package

Quite a lot of information is missing here…

The blueprint_edited.xcf file contains several layers which were used to construct the .pgm map and the Blender file used in the simulator. The original blueprint file (as a vector) was imported into GIMP at a scale of 5x the base image. The original blueprint has a scale of 1:250.

Original page: https://github.com/strands-project/thermo_fisher/blob/master/map/readme.md

Human Trajectory

This is a ROS package that classifies human poses from bayes_people_tracker_logging in mongodb into human movements or non-human movements. The classification is based on poses and the velocity of poses, which are fed to a KNN classifier. This package can only be run offline, retrieving human poses from perception_people.

Run this package by typing

rosrun human_movement_identifier classifier.py [train_ratio] [accuracy (1/0)]

where [train_ratio] is the training data ratio, between 0.0 and 1.0; setting 0.9 means that 0.1 of all the data will be used as test data. [accuracy] is the option to choose between reporting the accuracy of this classifier (1) or just classifying the test data (0).
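
To make the effect of [train_ratio] concrete, the split amounts to something like the following sketch (illustrative only, not the actual classifier.py code):

import random

# Hypothetical helper: split logged pose samples into training and test sets.
def split_data(samples, train_ratio=0.9):
    shuffled = list(samples)
    random.shuffle(shuffled)
    n_train = int(round(train_ratio * len(shuffled)))
    return shuffled[:n_train], shuffled[n_train:]   # e.g. 90% train, 10% test

train, test = split_data(range(100), train_ratio=0.9)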

Original page: https://github.com/strands-project/trajectory_behaviours/blob/master/human_trajectory_classifier/README.md

Relational Learner

A ROS package that uses qualitative spatio-temporal relations to learn and classify human trajectories as statistically novel or not.

Prerequisites

  • roscore
  • mongodb_store
  • strands_perception_people
  • soma_trajectory
  • qsrlib

Getting started

  1. Run the mongodb datacentre: $ roslaunch mongodb_store mongodb_store.launch db_path:=<path>, e.g. $ roslaunch mongodb_store mongodb_store.launch db_path:=/home/strands/mongodb_store/bham_trajectory_store/, where <path> specifies the path to your mongodb_store.

Also run the mongodb robot pose logging tool: $ rosrun mongodb_log mongodb_log.py /robot_pose

  2. Make sure people perception is running to publish detected trajectories: $ roslaunch perception_people_launch people_tracker_robot.launch
  3. Also run the online trajectory stitching tool: $ rosrun human_trajectory trajectory_publisher.py [publish_interval] [online(1)/offline(0)] - see here for more details.

Alternatively to steps 2 and 3, you can run soma_trajectory and obtain test trajectories from the mongodb store: $ rosrun soma_trajectory trajectory_query_service.py

  4. Make sure the QSRLib Service is running: $ rosrun qsr_lib qsrlib_ros_server.py

  5. Run the Episodes Service: $ rosrun relational_learner episode_server.py

  6. Run the Novelty Service:

    $ rosrun relational_learner novelty_server.py
    
  7. Run the Episodes node: $ rosrun relational_learner episodes_client.py

  8. Run the Novelty node: $ rosrun relational_learner novelty_client.py

Note:

This package can be run offline by running soma_trajectory instead of people perception in steps 2 and 3. In this case, step 7 becomes: $ rosrun relational_learner episodes_client_OT.py

which queries one region’s trajectories from mongodb_store instead of subscribing to published trajectories.

Original page: https://github.com/strands-project/trajectory_behaviours/blob/master/relational_learner/README.md

Multi-modal RGB-D Object Instance Detector

In this tutorial you will learn how to detect objects in 2.5D RGB-D point clouds. The proposed multi-pipeline recognizer will detect 3D object models and estimate their 6DoF pose in the camera’s field of view.

Object Model Database

The model folder is structured as follows:

object-model-database
│
└───object-name-1
│   │    [3D_model.pcd]
│   │
│   └─── views
│        │   cloud_000000.pcd
│        │   cloud_00000x.pcd
│        │   ...
│        │   object_indices_000000.txt
│        │   object_indices_00000x.txt
│        │   ...
│        │   pose_000000.txt
│        │   pose_00000x.txt
│        │   ...
│   │
│   └─── [trained-pipeline-1]*
│        │   ...
│   │
│   └─── [trained-pipeline-x]*
│        │   ...
│
└───object-name-2
│   │    [3D_model.pcd]
│   │
│   └─── views
│        │   cloud_000000.pcd
│        │   cloud_00000x.pcd
│        │   ...
│        │   object_indices_000000.txt
│        │   object_indices_00000x.txt
│        │   ...
│        │   pose_000000.txt
│        │   pose_00000x.txt
│        │   ...
│   │
│   └─── [trained-pipeline-1]*
│        │   ...
│   │
│   └─── [trained-pipeline-x]*
│        │   ...
│        │   ...
│ ...

* this data / folder will be generated

Objects are trained from several training views. Each training view is represented as an organized point cloud (cloud_xyz.pcd), the segmentation mask of the object (object_indices_xyz.txt) and a camera pose (pose_xyz.txt) that aligns the point cloud into a common coordinate system when multiplied by the given 4x4 homogeneous transform. The training views for each object are stored inside a views folder that is part of the object model folder. The object model folder is named after the object and contains all information about the object, i.e. it initially contains the views and a complete 3D model (3D_model.pcd) of the object that is in the same coordinate system as the views. The complete model is used for hypotheses verification and visualization purposes.
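
As a sketch of the convention described above (using numpy; this is an illustration, not code from the recognizer), a training view can be brought into the common model coordinate system by applying the 4x4 homogeneous transform stored in its pose file:

import numpy as np

# Load the 4x4 homogeneous camera pose of one training view.
T = np.loadtxt("pose_000000.txt").reshape(4, 4)

# Hypothetical (x, y, z) points taken from cloud_000000.pcd.
points = np.array([[0.1, 0.0, 0.8],
                   [0.2, 0.1, 0.9]])

# Append a 1 to each point (homogeneous coordinates) and apply the transform.
points_h = np.hstack([points, np.ones((len(points), 1))])
aligned = points_h.dot(T.T)[:, :3]   # points in the common coordinate system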

At the first run of the recognizer (or whenever retraining is desired), the object model will be trained with the features of the selected pipeline. The training might take some time, and the extracted features and keypoints are stored in a subfolder named after the feature descriptor (e.g. SIFT keypoints for object-name-1 will be stored in model-database/object-name-1/sift/keypoints.pcd). If these files already exist, they will be loaded directly from disk and the training stage is skipped. Please note that the trained directory needs to be deleted whenever you update your object model (e.g. by adding/removing training views), or the argument --retrain needs to be set when calling the program.

If you have not obtained appropriate data yet, you can download example model files together with test clouds by running

./scripts/get_TUW.sh

from your v4r root directory. The files (2.43GB) will be extracted into data/TUW.

Usage

Assuming you built the ObjectRecognizer app, you can now run the recognizer. If you run it for the first time, it will automatically extract the descriptors of choice from your model data (-m). The test directory can be specified by the argument -t. The program accepts either a specific file or a folder with test clouds. To use 2D features, these test clouds need to be organized. The -z argument defines the cut-off distance in meters for the object detection (note that the sensor noise increases with the distance to the camera). Objects further away than this value will be neglected.

Computed feature matches will be grouped into geometrically consistent clusters from which a pose can be estimated using Singular Value Decomposition. The cluster size -c needs to be equal to or greater than 3, with a higher number giving more reliable pose estimates (since in general more keypoints are extracted from high-resolution images, also consider increasing this threshold to avoid false positives and long runtimes).
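
The snippet below sketches this kind of SVD-based rigid pose estimation (the Kabsch algorithm) for a cluster of corresponding 3D keypoints; it is only an illustration and not the actual V4R implementation:

import numpy as np

def estimate_pose(model_pts, scene_pts):
    """model_pts, scene_pts: (N, 3) arrays of corresponding keypoints, N >= 3."""
    mu_m, mu_s = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - mu_m).T.dot(scene_pts - mu_s)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T.dot(U.T)
    if np.linalg.det(R) < 0:                         # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T.dot(U.T)
    t = mu_s - R.dot(mu_m)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t                       # transform: model -> scene
    return T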

Parameters for the recognition pipelines and the hypotheses verification are loaded from the cfg/ folder by default. For a full list of available parameters, you can use -h. Example command:

./build/bin/ObjectRecognizer -m data/TUW/TUW_models -t data/TUW/test_set -z 2.5 --do_sift true --do_shot false --do_esf false --do_alexnet false -c 5 -v

Recognized results will be stored in a single text file, where each detected object is one line starting with the name (same as the folder name) of the found object model, followed by the confidence (disabled at the moment and always set to -1), and the object pose as a 4x4 homogeneous transformation matrix in row-major order aligning the object represented in the model coordinate system with the current camera view. Example output:

object_08 (-1.): 0.508105 -0.243221 0.826241 0.176167 -0.363111 -0.930372 -0.0505756 0.0303915 0.781012 -0.274319 -0.561043 1.0472 0 0 0 1
object_10 (-1.): 0.509662 0.196173 0.837712 0.197087 0.388411 -0.921257 -0.0205712 -0.171647 0.767712 0.335861 -0.545726 1.07244 0 0 0 1
object_29 (-1.): -0.544767 -0.148158 0.825396 0.0723312 -0.332103 0.941911 -0.0501179 0.0478761 -0.770024 -0.301419 -0.562326 0.906379 0 0 0 1
object_34 (-1.): 0.22115 0.501125 0.83664 0.0674237 0.947448 -0.313743 -0.0625169 -0.245826 0.231161 0.806498 -0.544174 0.900966 0 0 0 1
object_35 (-1.): 0.494968 0.0565292 0.86707 0.105458 0.160923 -0.986582 -0.0275425 -0.104025 0.85388 0.153165 -0.497424 0.954036 0 0 0 1
object_35 (-1.): -0.196294 -0.374459 0.906228 0.1488 -0.787666 0.610659 0.0817152 -0.331075 -0.583996 -0.697765 -0.414817 1.01101 0 0 0 1

To visualize the results, add the argument -v. This will visualize the input scene, the generated hypotheses and the verified ones (from bottom to top). For further parameter information, call the program with -h or have a look at the doxygen documentation.

References

Original page: https://github.com/strands-project/v4r/blob/master/apps/ObjectRecognizer/ReadMe.md

If you use V4R in your work, please cite the appropriate modules:

RTM toolbox

Prankl et al., RGB-D Object Modelling for Object Recognition and Tracking. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015

@inproceedings{prankl2015rgb,
  title={RGB-D object modelling for object recognition and tracking},
  author={Prankl, Johann and Aldoma, Aitor and Svejda, Alexander and Vincze, Markus},
  booktitle={Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on},
  pages={96--103},
  year={2015},
  organization={IEEE}
}

Object Instance Recognition

Thomas Fäulhammer, Michael Zillich, Johann Prankl, Markus Vincze, “A Multi-Modal RGB-D Object Recognizer”, IAPR International Conference on Pattern Recognition (ICPR), 2016.

@inproceedings{faeulhammer2016_ICPR,
  title={A Multi-Modal RGB-D Object Recognizer},
  author={F{\"a}ulhammer, Thomas and Zillich, Michael and Prankl, Johann and Vincze, Markus},
  booktitle={Pattern Recognition (ICPR), International Conference on},
  year={2016},
  organization={IAPR}
}

Multi-View Object Instance Recognition

  • if parameter transfer_feature_matches is set to true

Fäulhammer et al., “Temporal Integration of Feature Correspondences For Enhanced Recognition in Cluttered And Dynamic Environments”, IEEE Int. Conf. on Robotics and Automation (ICRA), 2015

@inproceedings{faeulhammer2015_featInt,
  title={Temporal Integration of Feature Correspondences For Enhanced Recognition in Cluttered And Dynamic Environments},
  author={F{\"a}ulhammer, Thomas and Aldoma, Aitor and Zillich, Michael and Vincze, Markus},
  booktitle={Proc.\ of the International Conference on Robotics and Automation (ICRA)},
  year={2015},
  organization={IEEE}
}

  • else (default)

Fäulhammer et al., “Multi-View Hypotheses Transfer for Enhanced Object Recognition in Clutter”, IAPR Conference on Machine Vision Applications (MVA), 2015

@inproceedings{faulhammer2015multi,
  title={Multi-View Hypotheses Transfer for Enhanced Object Recognition in Clutter},
  author={F{\"a}ulhammer, Thomas and Zillich, Michael and Vincze, Markus},
  booktitle={IAPR Conference on Machine Vision Applications (MVA)},
  year={2015}
}

Incremental object learning

Fäulhammer et al., “Autonomous Learning of Object Models on a Mobile Robot”, IEEE Robotics and Automation Letters, 2016

@ARTICLE{faeulhammer_object_learning_ral,
  author={Thomas F\"{a}ulhammer and Rare\cb{s} Ambru\cb{s} and Chris Burbridge and Michael Zillich and John Folkesson and Nick Hawes and Patric Jensfelt and Markus Vincze},
  journal={IEEE Robotics and Automation Letters},
  title={Autonomous Learning of Object Models on a Mobile Robot},
  year={2016},
  volume={2},
  number={1},
  pages={26-33},
  keywords={Autonomous Agents;RGB-D Perception;Object detection;segmentation;categorization;Visual Learning;Motion and Path Planning},
  doi={10.1109/LRA.2016.2522086},
  ISSN={2377-3766},
  month={1}
}

ESF Object Classification

Walter Wohlkinger and Markus Vincze, “Ensemble of Shape Functions for 3D Object Classification”, IEEE International Conference on Robotics and Biomimetics (IEEE-ROBIO), 2011

@inproceedings{wohlkinger2011ensemble,
  title={Ensemble of shape functions for 3d object classification},
  author={Wohlkinger, Walter and Vincze, Markus},
  booktitle={Robotics and Biomimetics (ROBIO), 2011 IEEE International Conference on},
  pages={2987--2992},
  year={2011},
  organization={IEEE}
}

Original page: https://github.com/strands-project/v4r/blob/master/CITATION.md

Contributing to V4R

Please take a moment to review this document in order to make the contribution process easy and effective for everyone involved.

Following these guidelines helps to communicate that you respect the time of the developers managing and developing this open source project. In return, they should reciprocate that respect in addressing your issue or assessing patches and features.

Dependencies

V4R is an open-source project with the goal of being easily installed on different platforms by providing released Debian packages for Ubuntu systems. To allow packaging the V4R library, all dependencies need to be defined in package.xml. The required names for specific packages can be found here or here. Packages not included in this list need to be added as 3rdparty libraries to V4R. Whenever possible, try to depend on packaged libraries. This especially applies to PCL and OpenCV; currently this means contributing your code such that it is compatible with PCL 1.7.2 and OpenCV 2.4.9. Also, even though V4R stands for Vision for Robotics, our library is independent of ROS. If you need a ROS component, put your core algorithms into the V4R library and create wrapper interfaces in the separate v4r_ros_wrappers repository.

Dependency Installation (Ubuntu 14.04 & Ubuntu 16.04)

If you are running a freshly installed Ubuntu 14.04 or Ubuntu 16.04, you can use ‘setup.sh’ to install all necessary dependencies. Under the hood, ‘./setup.sh’ runs a Python script which associates the dependencies listed in the package.xml file with the corresponding Debian packages and installs them right away.

Ubuntu 14.04, ‘Trusty’

cd /wherever_v4r_is_located/v4r
./setup.sh

Ubuntu 16.04, ‘Xenial’

cd /wherever_v4r_is_located/v4r
./setup.sh xenial kinetic

After executing ./setup.sh within your v4r root directory you should be able to compile v4r on your machine with

cd /wherever_v4r_is_located/v4r
mkdir build && cd build
cmake ..
make -j8

If you don’t want to use ‘setup.sh’ for any reason, click here to jump to “How to Build V4R without using ‘./setup.sh’? (Ubuntu 14.04)”.

Using the issue tracker

The issue trackers (internal or STRANDS) are the preferred channels for submitting pull requests and bug reports, but please respect the following restrictions:

  • Please do not derail or troll issues. Keep the discussion on topic and respect the opinions of others.

Pull requests

Pull requests let you tell others about changes you’ve pushed to a repository on GitHub. Once a pull request is sent, interested parties can review the set of changes, discuss potential modifications, and even push follow-up commits if necessary. Therefore, this is the preferred way of pushing your changes - do not push your changes directly onto the master branch! Also, keep your pull requests small and focussed on a specific issue/feature. Do not accumulate months of changes into a single pull request! Such a pull request can not be reviewed!

Good pull requests - patches, improvements, new features - are a fantastic help. They should remain focused in scope and avoid containing unrelated commits.

Checklist

Please use the following checklist to make sure that your contribution is well prepared for merging into V4R:

  1. Source code adheres to the coding conventions described in V4R Style Guide. But if you modify existing code, do not change/fix style in the lines that are not related to your contribution.
  2. Commit history is tidy (no merge commits, commits are squashed into logical units).
  3. Each contributed file has a license text on top.

Bug reports

A bug is a demonstrable problem that is caused by the code in the repository. Good bug reports are extremely helpful - thank you!

Guidelines for bug reports:

  1. Check if the issue has been reported — use GitHub issue search.
  2. Check if the issue has been fixed — try to reproduce it using the latest master branch in the repository.
  3. Isolate the problem — ideally create a reduced test case.

A good bug report shouldn’t leave others needing to chase you up for more information. Please try to be as detailed as possible in your report. What is your environment? What steps will reproduce the issue? What would you expect to be the outcome? All these details will help people to fix any potential bugs. After the report of a bug, a responsible person will be selected and informed to solve the issue.

License

V4R is MIT licensed, and by submitting a patch, you agree to allow V4R to license your work under the terms of the MIT License. The body of the license should be inserted as a C++ comment at the top of each .h and .cpp file:

/******************************************************************************
 * Copyright (c) 2016 Firstname Lastname
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to
 * deal in the Software without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 *
 ******************************************************************************/

Please note that if the academic institution or company you are affiliated with does not allow you to give up these rights, you may insert an additional copyright line.

Structure

The repository consists of several folders and files containing specific parts of the library. This section gives a short introduction to the most important ones.

./3rdparty

See Dependencies.

./apps

Bigger code examples and tools (=more than only one file) such as RTMT. Apps depend on modules.

./cmake

Several cmake macros.

./docs

Tutorials and further documentations.

./modules

Contains all core components of the library and is organized in logical sub folders which are further called ‘packages’. A package holds the source files which are located in ‘./src’. The corresponding header files are located in ‘./include/v4r/package_name/’

By following this structure, new modules can be easily added to the corresponding CMakeLists.txt with

v4r_define_module(package_name REQUIRED components)

i.e.

v4r_define_module(change_detection REQUIRED v4r_common pcl opencv)
  • ./modules/common — anything that can be reused by other packages

  • ./modules/core — core is used by every module and only includes macros. To make your modules visible to other modules, you have to add the macros include and ‘V4R_EXPORTS’ to your header files:

    #include <v4r/core/macros.h>
    ...
    class V4R_EXPORTS ...
    
./samples

  • ./samples/examples: short code pieces that demonstrate how to use a module.
  • ./samples/tools: small tools with only one file.

Other Files

CITATION.md — This file includes BibTeX-encoded references. They can be used to cite the appropriate modules if you use V4R in your work.

CONTRIBUTING.md — The file you are reading at the moment.

Documentation

ALWAYS document your code. We use Doxygen. A nice introduction to Doxygen can be found here.

The Doxygen documentation has to be compiled locally on your system for the moment. However, it will be available online on GitLab quite soon; Bajo will find a nice solution for that using the CI system.

How to Build V4R without using ‘./setup.sh’? (Ubuntu 14.04)

As already mentioned, V4R uses a package.xml file and rosdep to install all necessary dependencies. This can easily be done using ./setup.sh. However, if you don’t want to use ‘setup.sh’ for any reason, follow the instructions below.

  1. Do all the necessary git magic to download v4r on your machine, then open a console.

  2. Install and initialize rosdep, cmake and build-essential

    sudo apt-get update
    sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list'
    wget http://packages.ros.org/ros.key -O - | sudo apt-key add -
    sudo apt-get update
    sudo apt-get install -y python-rosdep build-essential cmake
    sudo rosdep init
    rosdep update
    
  3. Install dependencies and compile v4r

    cd /wherever_v4r_is_located/v4r
    rosdep install --from-paths . -i -y -r --rosdistro indigo
    mkdir build && cd build
    cmake ..
    make -j8
    

Original page: https://github.com/strands-project/v4r/blob/master/CONTRIBUTING.md


The library itself is independent of ROS, so it is built outside ROS catkin. There are wrappers for ROS (https://github.com/strands-project/v4r_ros_wrappers), which can then be placed inside the normal catkin workspace.

Dependencies:

All dependencies are stated in package.xml (https://github.com/strands-project/v4r/blob/master/package.xml). There are two options to use the SIFT recognizer:

  • Use the V4R third-party library SiftGPU (this requires a decent GPU - see www.cs.unc.edu/~ccwu/siftgpu) [default]
  • Use the OpenCV non-free SIFT implementation (this requires the non-free module of OpenCV, which can be installed from source). This option is enabled if BUILD_SIFTGPU is disabled in cmake.

Installation:

In order to use V4R in ROS, use the v4r_ros_wrappers.

From Ubuntu Package

Simply install it with sudo apt-get install ros-indigo-v4r after enabling the STRANDS repositories.

From Source

cd ~/somewhere
git clone 'https://rgit.acin.tuwien.ac.at/root/v4r.git'
cd v4r
./setup.sh
mkdir build
cd build
cmake ..
make
sudo make install (optional)

Original page: https://github.com/strands-project/v4r/blob/master/Readme.md

Random Forest Training:

// define labels to be trained
std::vector<int> labels;

// for NYU depth dataset v2 this could be...
labels.push_back(21); // wall
labels.push_back(11); // floor
labels.push_back(3);  // cabinet
labels.push_back(5);  // chair
labels.push_back(19); // table
labels.push_back(4);  // ceiling

// load training data from files
ClassificationData trainingData;
trainingData.LoadFromDirectory("myTrainingDataDirectory", labels);

// define Random Forest
// parameters:
//   int nTrees
//   int maxDepth (-1 ... find depth automatically)
//   float baggingRatio
//   int nFeaturesToTryAtEachNode
//   float minInformationGain
//   int nMinNumberOfPointsToSplit
Forest rf(10, 20, 0.1, 200, 0.02, 5);

// train forest
// parameters:
//   ClassificationData data
//   bool refineWithAllDataAfterwards
//   int verbosityLevel (0 - quiet, 3 - most detail)
rf.TrainLarge(trainingData, false, 3);

// save after training
rf.SaveToFile("myforest");

// Hard classification, returning label ID:
std::vector<float> featureVector;
// assuming featureVector contains values...
int ID = rf.ClassifyPoint(featureVector);

// Soft classification, returning probabilities for each label
std::vector<float> labelProbabilities = rf.SoftClassify(featureVector);


Directory structure of training data:

mytrainingdirectory
 - 0001.data
 - 0002.data
 - 0003.data
 ...
 - 1242.data
 - categories.txt

For every label (the example above has 1242 labels) there is a corresponding data file containing all feature vectors of that label. Such a file looks like this:

2.46917e-05 0.000273396 0.000204452 0.823049 0.170685 0.988113 0.993125
3.20674e-05 0.000280648 0.000229576 0.829844 0.207543 0.987969 0.992765
3.73145e-05 0.000279801 0.000257583 0.831597 0.235013 0.987825 0.9925
...

Each row contains one feature vector (here 7-dimensional).

IMPORTANT: Values are separated by spaces, vectors by newlines. All columns must have the same constant width (here the width is 12); the code assumes this for speed.
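
As an illustration of this format (a hypothetical helper in Python, not part of the library), the following writes feature vectors with a constant column width of 12 characters:

# Hypothetical helper: write one label's feature vectors in the fixed-width
# format described above (one vector per line, each column 12 characters wide).
def write_label_file(filename, feature_vectors):
    with open(filename, "w") as f:
        for vec in feature_vectors:
            f.write("".join("%-12.6g" % v for v in vec) + "\n")

write_label_file("0001.data", [[2.46917e-05, 0.000273396, 0.000204452, 0.823049, 0.170685, 0.988113, 0.993125]])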

The file categories.txt is necessary to define the label names and has a simple format like this:

1 floor
2 wall
3 ceiling
4 table
5 chair
6 furniture
7 object

Original page: https://github.com/strands-project/v4r/blob/master/modules/ml/README


More Information: Please see

http://en.wiki.mcneel.com/default.aspx/McNeel/OpenNURBS.html

for information about openNURBS, including supported compilers, build instructions, and a description of the examples.

Legal Stuff:

The openNURBS Initiative provides CAD, CAM, CAE, and computer graphics software developers the tools to accurately transfer 3-D geometry between applications.

The tools provided by openNURBS include:

* C++ source code libraries to read and write the file format.

* Quality assurance and revision control.

* Various supporting libraries and utilities.

* Technical support.

Unlike other open development initiatives, alliances, or consortia:

* Commercial use is encouraged.

* The tools, support, and membership are free.

* There are no restrictions. Neither copyright nor copyleft
  restrictions apply.

* No contribution of effort or technology is required from
  the members, although it is encouraged.

For more information, please see http://www.openNURBS.org.

The openNURBS toolkit uses zlib for mesh and bitmap compression. The zlib source code distributed with openNURBS is a subset of what is available from zlib. The zlib code itself has not been modified. See ftp://ftp.freesoftware.com/pub/infozip/zlib/zlib.html for more details.

Zlib has a generous license that is similar to the one for openNURBS. The zlib license shown below was copied from the zlib web page ftp://ftp.freesoftware.com/pub/infozip/zlib/zlib_license.html on 20 March 2000.

zlib.h -- interface of the 'zlib' general purpose compression library
version 1.1.3, July 9th, 1998

Copyright (C) 1995-1998 Jean-loup Gailly and Mark Adler

This software is provided 'as-is', without any express or implied
warranty.  In no event will the authors be held liable for any damages
arising from the use of this software.

Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:

1. The origin of this software must not be misrepresented; you must not
   claim that you wrote the original software. If you use this software
   in a product, an acknowledgment in the product documentation would be
   appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
   misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.

Jean-loup Gailly        Mark Adler
jloup@gzip.org          madler@alumni.caltech.edu


The data format used by the zlib library is described by RFCs (Request for
Comments) 1950 to 1952 in the files ftp://ds.internic.net/rfc/rfc1950.txt
(zlib format), rfc1951.txt (deflate format) and rfc1952.txt (gzip format).

Copyright (c) 1993-2011 Robert McNeel & Associates. All Rights Reserved. Rhinoceros is a registered trademark of Robert McNeel & Associates.

THIS SOFTWARE IS PROVIDED “AS IS” WITHOUT EXPRESS OR IMPLIED WARRANTY. ALL IMPLIED WARRANTIES OF FITNESS FOR ANY PARTICULAR PURPOSE AND OF MERCHANTABILITY ARE HEREBY DISCLAIMED.

Original page: https://github.com/strands-project/v4r/blob/master/3rdparty/opennurbs/readme.txt

Keypoint based Object Recognition in Monocular Images

IMKRecognizer is able to recognize objects in monocular images and to estimate their pose using a PnP method. Object models, i.e. a sequence of RGB-D keyframes, the corresponding poses and an object mask, can be created with RTMT, which supports OpenNI cameras (ASUS, Kinect).

Usage

For object recognition, the model directory with the object data stored in sub-directories needs to be provided. For an RGB-D keyframe this can look like: “data/models/objectname1/views/frame_xxxx.pcd”. When the recognizer is started for the first time, the keyframes are loaded, and interest points are detected, clustered and stored in a concatenated ‘imk_objectname1_objectname2.bin’ file. In case any recognition parameter is changed, this file must be deleted; it will then be recreated the next time the recognizer is run.

bin/example-imkRecognizeObject-file --help
bin/example-imkRecognizeObject-file -d data/models/ -n objectname1 objectname2 ... -f ~/testimages/image_%04d.jpg -s 0 -e 5 -t 0.5

References

  • J. Prankl, T. Mörwald, M. Zillich, M. Vincze: “Probabilistic Cue Integration for Real-time Object Pose Tracking”; in: Proceedings of the 9th International Conference on Computer Vision Systems, (2013), 10 pages.

Original page: https://github.com/strands-project/v4r/blob/master/docs/IMKRecognizer.md

Object Classification by ESF

In this tutorial you will learn how to classify objects using the ESF descriptor. In our example, we will render views from mesh files and then classify point clouds by classifying extracted segments. We use the Cat10 models from 3dNet for training and the test clouds of 3dNet for testing.

Rendering training views from mesh files

In the first step, we will render training views (point clouds, *.pcd) from mesh files (*.ply). The example program example_depth_map_renderer produces a set of training views (in the output folder given by argument -o) from a mesh input file (provided by argument -i). The training views are rendered by placing virtual cameras on a 3D sphere with radius -r around the mesh file, sampled on an icosahedron with subdivision -s. Camera parameters are provided by the arguments --fx, --fy (focal lengths in x and y direction), --cx, --cy (central point of projection in x and y direction), and --width, --height (image width and height). To visualize, add the argument -v; for help, -h.

Note that the program above renders single .ply files only. To render a bunch of .ply files, there is a tool called tool_create_classification_db_from_ply_files in the V4R library.

If you have not obtained data yet, you can download them using the script file. Go to the v4r root directory and run

./scripts/get_3dNet_Cat10.sh

This will download and extract the Cat10 models (42MB).

object-model-database
│
└───class-name-1
│   │
│   └─── instance-name-1
│   │    │
│   │    └─── views
│   │    │    │   cloud_000000.pcd
│   │    │    │   cloud_00000x.pcd
│   │    │    │   ...
│   │    │    │   object_indices_000000.txt
│   │    │    │   object_indices_00000x.txt
│   │    │    │   ...
│   │    │    │   pose_000000.txt
│   │    │    │   pose_00000x.txt
│   │    │    │   ...
│   │    │
│   │    └─── [trained-pipeline-1]*
│   │    │    │   ...
│   │    │
│   │    └─── [trained-pipeline-x]*
│   │         │   ...
│   │
│   └─── instance-name-x
│        │
│        └─── views
│        │    │   cloud_000000.pcd
│        │    │   cloud_00000x.pcd
│        │    │   ...
│        │    │   object_indices_000000.txt
│        │    │   object_indices_00000x.txt
│        │    │   ...
│        │    │   pose_000000.txt
│        │    │   pose_00000x.txt
│        │    │   ...
│        │
│        └─── [trained-pipeline-1]*
│        │    │   ...
│        │
│        └─── [trained-pipeline-x]*
│             │   ...
│
└───class-name-2
│   │
│   └─── instance-name-1
│   │    │
│   │    └─── views
│   │    │    │   cloud_000000.pcd
│   │    │    │   cloud_00000x.pcd
│   │    │    │   ...
│   │    │    │   object_indices_000000.txt
│   │    │    │   object_indices_00000x.txt
│   │    │    │   ...
│   │    │    │   pose_000000.txt
│   │    │    │   pose_00000x.txt
│   │    │    │   ...
│   │    │
│   │    └─── [trained-pipeline-1]*
│   │    │    │   ...
│   │    │
│   │    └─── [trained-pipeline-x]*
│   │         │   ...
│   │
│   └─── instance-name-x
│        │
│        └─── views
│        │    │   cloud_000000.pcd
│        │    │   cloud_00000x.pcd
│        │    │   ...
│        │    │   object_indices_000000.txt
│        │    │   object_indices_00000x.txt
│        │    │   ...
│        │    │   pose_000000.txt
│        │    │   pose_00000x.txt
│        │    │   ...
│        │
│        └─── [trained-pipeline-1]*
│        │    │   ...
│        │
│        └─── [trained-pipeline-x]*
│             │   ...
│ ...

* this data / folder will be generated

Training a classifier

If you also want some annotated example point clouds, you can obtain the test set from 3dNet (3.6GB) by running

./scripts/get_3dNet_test_data.sh

The files will be extracted in the data/3dNet directory.

Usage

Assuming you built the example samples, you can now run the classifier. If you run it for the first time, it will automatically render views by placing a virtual camera on an artificial sphere around the mesh models in -i and store them in the directory given by the -m argument. These views are then used for training the classifier, in our case by extracting ESF descriptors. For testing, it will segment the point cloud given by the argument -t by your method of choice (default: searching for a dominant plane and running Euclidean clustering for the points above it). Each segment is then described by ESF and matched by nearest-neighbour search to one of your learned object classes. The results will be stored in a text file which has the same name as the input cloud, just replacing the suffix .pcd with .anno_test, in the output directory specified by -o.

./build/bin/example-esf_object_classifier -i data/3dNet/Cat10_ModelDatabase -m data/3dNet/Cat10_Views -t data/3dNet/Cat10_TestDatabase/pcd_binary/ -o /tmp/3dNet_ESF_results
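
For intuition, the per-segment classification described above boils down to nearest-neighbour matching of ESF descriptors; the following is an illustrative sketch only (hypothetical data and class names, not V4R code):

import numpy as np

# Hypothetical training data: one 640-dimensional ESF descriptor per rendered
# view, each labelled with its object class.
train_descriptors = np.random.rand(100, 640)
train_labels = ["apple"] * 50 + ["mug"] * 50

def classify_segment(descriptor, train_descriptors, train_labels):
    # Nearest-neighbour search in descriptor space (L2 distance).
    distances = np.linalg.norm(train_descriptors - descriptor, axis=1)
    return train_labels[int(np.argmin(distances))]

print(classify_segment(np.random.rand(640), train_descriptors, train_labels))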

References

Original page: https://github.com/strands-project/v4r/blob/master/docs/ObjectClassification.md

Multi-modal RGB-D Object Instance Recognizer

In this tutorial you will learn how to detect objects in 2.5D RGB-D point clouds. The proposed multi-pipeline recognizer will detect 3D object models and estimate their 6DoF pose in the camera’s field of view.

Object Model Database

The model folder is structured as follows:

object-model-database
│
└───object-name-1
│   │    [3D_model.pcd]
│   │
│   └─── views
│        │   cloud_000000.pcd
│        │   cloud_00000x.pcd
│        │   ...
│        │   object_indices_000000.txt
│        │   object_indices_00000x.txt
│        │   ...
│        │   pose_000000.txt
│        │   pose_00000x.txt
│        │   ...
│   │
│   └─── [trained-pipeline-1]*
│        │   ...
│   │
│   └─── [trained-pipeline-x]*
│        │   ...
│
└───object-name-2
│   │    [3D_model.pcd]
│   │
│   └─── views
│        │   cloud_000000.pcd
│        │   cloud_00000x.pcd
│        │   ...
│        │   object_indices_000000.txt
│        │   object_indices_00000x.txt
│        │   ...
│        │   pose_000000.txt
│        │   pose_00000x.txt
│        │   ...
│   │
│   └─── [trained-pipeline-1]*
│        │   ...
│   │
│   └─── [trained-pipeline-x]*
│        │   ...
│        │   ...
│ ...

* this data / folder will be generated

Objects are trained from several training views. Each training view is represented as an organized point cloud (cloud_xyz.pcd), the segmentation mask of the object (object_indices_xyz.txt) and a camera pose (pose_xyz.txt) that aligns the point cloud into a common coordinate system when multiplied by the given 4x4 homogeneous transform. The training views for each object are stored inside a views folder that is part of the object model folder. The object model folder is named after the object and contains all information about the object, i.e. it initially contains the views and a complete 3D model (3D_model.pcd) of the object that is in the same coordinate system as the views. The complete model is used for hypotheses verification and visualization purposes.

At the first run of the recognizer (or whenever retraining is desired), the object model will be trained with the features of the selected pipeline. The training might take some time, and the extracted features and keypoints are stored in a subfolder named after the feature descriptor (e.g. SIFT keypoints for object-name-1 will be stored in model-database/object-name-1/sift/keypoints.pcd). If these files already exist, they will be loaded directly from disk and the training stage is skipped. Please note that the trained directory needs to be deleted whenever you update your object model (e.g. by adding/removing training views), or the argument --retrain needs to be set when calling the program.

If you have not obtained appropriate data yet, you can download example model files together with test clouds by running

./scripts/get_TUW.sh

from your v4r root directory. The files (2.43GB) will be extracted into data/TUW.

Usage

Assuming you built the ObjectRecognizer app, you can now run the recognizer. If you run it for the first time, it will automatically extract the descriptors of choice from your model data (-m). The test directory can be specified by the argument -t. The program accepts either a specific file or a folder with test clouds. To use 2D features, these test clouds need to be organized.

Example command:

./build/bin/ObjectRecognizer -m data/TUW/TUW_models -t data/TUW/test_set -z 2.5 --or_do_sift true --or_do_shot false --or_remove_planes 1 -v

Parameters:

Parameters for the recognition pipelines and the hypotheses verification are loaded from the cfg/ folder by default, but most of them can also be set from the command line. For a list of available command line parameters, you can use -h. Note that all parameters will be initialized from the cfg/*.xml files first and then updated with the given command line arguments; therefore, command line arguments always overwrite the parameters from the XML files.

Let us now describe some of the parameters:

The -z argument defines the cut-off distance in meters for the object detection (note that the sensor noise increases with the distance to the camera). Objects further away than this value will be neglected.

Use --or_do_sift true --or_do_shot false to enable or disable the local recognition pipelines of SIFT and SHOT, respectively.

Computed feature matches will be grouped into geometrically consistent clusters from which a pose can be estimated using Singular Value Decomposition. The cluster size --or_cg_thresh needs to be equal to or greater than 3, with a higher number giving more reliable pose estimates (since in general more keypoints are extracted from high-resolution images, also consider increasing this threshold to avoid false positives and long runtimes).

If the objects in your test configuration are always standing on a clearly visible surface plane (e.g. ground floor or table top), we recommend removing points on this plane by setting the parameter --or_remove_planes 1.

To visualize the results, use command line argument -v.

To visualize intermediate hypotheses verification results, you can use --hv_vis_model_cues.

./build/bin/ObjectRecognizer -m data/TUW/TUW_models -t data/TUW/test_set -z 2.5 --or_do_sift true --or_do_shot false --or_remove_planes 1 -v

Multiview Recognizer

If you want to use multiview recognition, enable the entry

<use_multiview_>1</use_multiview_>

in the cfg/multipipeline_config.xml file.

Additionally, you can also enable

<use_multiview_hv_>1</use_multiview_hv_>
<use_multiview_with_kp_correspondence_transfer_>1</use_multiview_with_kp_correspondence_transfer_>

In most settings, however, we recommend leaving use_multiview_with_kp_correspondence_transfer_ disabled to save computation time.

The object recognizer will then treat all *.pcd files within a folder as observations belonging to the same multi-view sequence. An additional requirement of the multi-view recognizer is that all observations need to be aligned into a common reference frame. Therefore, each *.pcd file needs to be associated with a relative camera pose (coarse point cloud registration is not done inside the recognizer!). We use the header fields sensor_origin and sensor_orientation within the point cloud structure to read this pose. The registration can thus be checked by viewing all clouds with the PCL tool pcd_viewer /path/to/my/mv_observations/*.pcd. Please check that the registration of your clouds is correct before using the multi-view recognizer!
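
As a sketch of this convention (assuming ASCII .pcd files; this helper is illustrative and not part of the recognizer), the pose that PCL exposes as sensor_origin and sensor_orientation is stored in the VIEWPOINT header line of the PCD format, i.e. the translation followed by the orientation quaternion:

# Hypothetical helper: overwrite the VIEWPOINT line (tx ty tz qw qx qy qz)
# of an ASCII .pcd file with a given relative camera pose.
def set_viewpoint(pcd_in, pcd_out, translation, quaternion_wxyz):
    with open(pcd_in) as f_in, open(pcd_out, "w") as f_out:
        for line in f_in:
            if line.startswith("VIEWPOINT"):
                values = list(translation) + list(quaternion_wxyz)
                line = "VIEWPOINT " + " ".join(str(v) for v in values) + "\n"
            f_out.write(line)

# e.g. camera translated 0.5 m along x, identity orientation:
set_viewpoint("view_000.pcd", "view_000_posed.pcd", (0.5, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0))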

Output

Recognized results will be stored in a single text file, where each detected object is one line starting with the name (same as the folder name) of the found object model, followed by the confidence (the value in brackets, between 0 for poor object hypotheses and 1 for very high confidence), and the object pose as a 4x4 homogeneous transformation matrix in row-major order aligning the object represented in the model coordinate system with the current camera view. Example output:

object_08 (0.251965): 0.508105 -0.243221 0.826241 0.176167 -0.363111 -0.930372 -0.0505756 0.0303915 0.781012 -0.274319 -0.561043 1.0472 0 0 0 1
object_10 (0.109282): 0.509662 0.196173 0.837712 0.197087 0.388411 -0.921257 -0.0205712 -0.171647 0.767712 0.335861 -0.545726 1.07244 0 0 0 1
object_29 (0.616981): -0.544767 -0.148158 0.825396 0.0723312 -0.332103 0.941911 -0.0501179 0.0478761 -0.770024 -0.301419 -0.562326 0.906379 0 0 0 1
object_34 (0.565967): 0.22115 0.501125 0.83664 0.0674237 0.947448 -0.313743 -0.0625169 -0.245826 0.231161 0.806498 -0.544174 0.900966 0 0 0 1
object_35 (0.60515): 0.494968 0.0565292 0.86707 0.105458 0.160923 -0.986582 -0.0275425 -0.104025 0.85388 0.153165 -0.497424 0.954036 0 0 0 1
object_35 (0.589806): -0.196294 -0.374459 0.906228 0.1488 -0.787666 0.610659 0.0817152 -0.331075 -0.583996 -0.697765 -0.414817 1.01101 0 0 0 1

To visualize the results, add the argument -v. This will visualize the input scene, the generated hypotheses and the verified ones (from bottom to top). For further parameter information, call the program with -h or have a look at the doxygen documentation.
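
The result file is plain text, so it can be parsed easily; below is a small illustrative sketch (not part of V4R) that converts one line back into the model name, confidence and 4x4 pose matrix:

import numpy as np

# Parse one result line: "<name> (<confidence>): <16 row-major pose values>"
def parse_result_line(line):
    name, rest = line.split(" (", 1)
    confidence, matrix = rest.split("):", 1)
    pose = np.array([float(v) for v in matrix.split()]).reshape(4, 4)
    return name, float(confidence), pose

name, confidence, pose = parse_result_line(
    "object_08 (0.251965): 0.508105 -0.243221 0.826241 0.176167 "
    "-0.363111 -0.930372 -0.0505756 0.0303915 0.781012 -0.274319 "
    "-0.561043 1.0472 0 0 0 1")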

References

Original page: https://github.com/strands-project/v4r/blob/master/docs/ObjectDetection.md

RTMT Recognition Modeling & Tracking Toolbox

In this tutorial you will learn how to model objects with an RGB-D camera. These 3D models can be used for training an object recognizer or for object tracking.

Object modelling

  • Place the object on a flat surface, on a newspaper or something similar. This allows you to rotate the object without touching it. The texture on the newspaper also helps view registration. The pictures below were taken with the object on a turntable, which is the most convenient way of rotating the object. You can also model objects without a turntable, by putting the object on the floor for instance and moving your camera. In this case just skip the step defining the ROI.

  • Start the modelling tool: ~/somewhere/v4r/bin/RTMT

  • Press “Camera Start”: You should now see the camera image

    image0

  • If you use a turntable like shown in the image, select the region of interest by pressing “ROI Set” and clicking on the flat surface next to the object. If you model the object without a turntable, skip this step.

    image1

  • Press Tracker Start: you now see the tracking quality bar top left

    image2

  • Rotate 360 degrees, the program will generate a number of keyframes.

    IMPORTANT: Do not touch the object itself while moving it. Also, if you selected a ROI, try not to move your fingers above the turntable, otherwise they might be added to the object model.

    image3 image4

  • Press “Tracker Stop”

  • Press “Camera Stop”

  • Press “Pose Optimize” (optional): bundle adjustment of all cameras (be patient!)

  • Press “Object Segment”: The object should already be segmented correctly thanks to the ROI set previously

    image5

    You can check the segmentation (< > buttons). Highlighted areas are segments that will be clustered into the object model. If some highlighted areas do not belong to the object (e.g. the supporting plane), you can click on these wrong segmentations to undo them. You can also click on dark areas to add those segments to the object model. These areas should automatically be projected into the other frames.

  • Press “Object …Ok”

    image6 image7

  • Press “Store for Recognizer”: saves the point clouds in a format for object recognition. You will be asked for a model name.

    By default the program will store models in various subfolders of the folder “./data”, which will be created if not present. This can be changed in the configuration options (see below).

  • Press “Store for Tracker”: save a different model suitable for tracking

  • If the 3D point cloud visualization is activated, +/- can be used to increase/decrease the size of the dots.

This is a convenient way to model objects with the STRANDS robots. Put the objects on something elevated (a trash can in this case) to bring it within a good distance to the robot’s head camera.

Configuration options:
  • Set data folder and model name:

    (File -> Preferences -> Settings -> Path and model name)

  • Configure the number of keyframes to be selected, using a camera rotation and a camera translation threshold:

    (File -> Preferences -> Settings -> Min. delta angle, Min. delta camera distance)

Trouble shooting
  • If you press any of the buttons in the wrong order, just restart. Recovery is futile.
  • If you do not get an image, there is a problem with the OpenNI device driver. Check the file /etc/openni/GlobalDefaults.ini, set UsbInterface=2 (i.e. BULK).
  • If the plane supporting the object is not removed completely, try to increase the inlier distance for dominant plane segmentation in File -> Preferences -> Postprocessing.

References

When referencing this work, please cite:

  1. J. Prankl, A. Aldoma Buchaca, A. Svejda, M. Vincze, RGB-D Object Modelling for Object Recognition and Tracking. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015.
  2. Thomas Fäulhammer, Aitor Aldoma, Michael Zillich and Markus Vincze Temporal Integration of Feature Correspondences For Enhanced Recognition in Cluttered And Dynamic Environments IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 2015.
  3. Thomas Fäulhammer, Michael Zillich and Markus Vincze Multi-View Hypotheses Transfer for Enhanced Object Recognition in Clutter, IAPR International Conference on Machine Vision Applications (MVA), Tokyo, Japan, 2015.
  4. A. Aldoma Buchaca, F. Tombari, J. Prankl, A. Richtsfeld, L. di Stefano, M. Vincze, Multimodal Cue Integration through Hypotheses Verification for RGB-D Object Recognition and 6DOF Pose Estimation. IEEE International Conference on Robotics and Automation (ICRA), 2013.
  5. J. Prankl, T. Mörwald, M. Zillich, M. Vincze, Probabilistic Cue Integration for Real-time Object Pose Tracking. Proc. International Conference on Computer Vision Systems (ICVS). 2013.

For further information check out this site.

Original page: https://github.com/strands-project/v4r/blob/master/docs/ObjectModeling.md

1. Naming

1.1. Files

All files should be under_scored.

  • Header files have the extension .h
  • Templated implementation files have the extension .hpp
  • Source files have the extension .cpp

1.2. Directories

All directories and subdirectories should be under_scored.

  • Header files should go under include/v4r/module_name/
  • Templated implementation files should go under include/v4r/module_name/impl/
  • Source files should go under src/

1.3. Includes

All include statements are made with angle brackets, e.g.:

#include <v4r/module_name/file_name.h>
#include <v4r/module_name/impl/file_name.hpp>

Also keep the list of includes clean and orderly by first including system level, then external libraries, then v4r internal. I.e. as a general rule: the more general include files go first. Sort includes within a group alphabetically.

1.4. Defines & Macros & Include guards

Macros should all be ALL_CAPITALS_AND_UNDERSCORED. To avoid the problem of double inclusion of header files, guard each header file with a #pragma once statement placed just past the MIT license.

// the license
#pragma once
// the code

You might still encounter include guards in existing files, whose names are mapped from their include path, e.g.: v4r/filters/bilateral.h becomes V4R_FILTERS_BILATERAL_H_.

// the license

#ifndef V4R_MODULE_NAME_FILE_NAME_H_
#define V4R_MODULE_NAME_FILE_NAME_H_

// the code

#endif

1.5. Namespaces

Put all your code into the namespace v4r and avoid using sub-namespaces whenever possible.

namespace v4r
{
    ...
}

1.6. Classes / Structs

Class names (and other type names) should be CamelCased. Exception: if the class name contains a short acronym, the acronym itself should be all capitals. Class and struct names are preferably nouns: PFHEstimation instead of EstimatePFH.

Correct examples:

class ExampleClass;
class PFHEstimation;

1.7. Functions / Methods

Functions and class method names should be camelCased, and arguments are under_scored. Function and method names are preferably verbs, and the name should make clear what it does: checkForErrors() instead of errorCheck(), dumpDataToFile() instead of dataFile().

Correct usage:

int
applyExample (int example_arg);

1.8. Variables

Variable names should be under_scored.

int my_variable;

Give meaningful names to all your variables.

1.8.1. Constants

Constants should be ALL_CAPITALS, e.g.:

const static int MY_CONSTANT = 1000;

1.8.2. Member variables

Variables that are members of a class are under_scored_, with a trailing underscore added, e.g.:

int example_int_;

2. Indentation and Formatting

V4R uses a variant of the Qt style formatting. The standard indentation for each block is 4 spaces. If possible, apply this measure for your tabs and other spacings.

2.1. Namespaces

In a header file, the contents of a namespace should be indented, e.g.:

namespace v4r
{
    class Foo
    {
        ...
    };
}

2.2. Classes

The template parameters of a class should be declared on a different line, e.g.:

template <typename T>
class Foo
{
    ...
}

2.3. Functions / Methods

The return type of each function declaration must be placed on a different line, e.g.:

void
bar ();

Same for the implementation/definition, e.g.:

void
bar ()
{
  ...
}

or

void
Foo::bar ()
{
  ...
}

or

template <typename T> void
Foo<T>::bar ()
{
  ...
}

2.4. Braces

Braces, both open and close, go on their own lines, e.g.:

if (a < b)
{
    ...
}
else
{
    ...
}

Braces can be omitted if the enclosed block is a single-line statement, e.g.:

if (a < b)
    x = 2 * a;

3. Structuring

3.1. Classes and API

For most classes in V4R, it is preferred that the interface (all public members) contains no variables and only two types of methods:

  • The first method type is the get/set type that allows to manipulate the parameters and input data used by the class.
  • The second type of methods is actually performing the class functionality and produces output, e.g. compute, filter, segment.

3.2. Passing arguments

For getter/setter methods the following rules apply:

  • If large amounts of data needs to be set it is preferred to pass either by a const reference or by a const boost shared_pointer instead of the actual data.
  • Getters always need to pass exactly the same types as their respective setters and vice versa.
  • For getters, if only one argument needs to be passed this will be done via the return keyword. If two or more arguments need to be passed they will all be passed by reference instead.

For the compute, filter, segment, etc. type methods the following rules apply:

  • The output arguments are preferably non-pointer type, regardless of data size.
  • The output arguments will always be passed by reference.

3.3 Use const

To allow clients of your class to immediately see which variable can be altered and which are used for read only access, define input arguments const. The same applies for member functions which do not change member variables.

3.3 Do not clutter header files

To reduce compile time, amongst other things, put your definitions into separate .cpp or .hpp files. Define functions inline in the header only when they are small, say, 10 lines or fewer.

3.4 Fix warnings

To keep the compiler output non-verbose and reduce potential conflicts, avoid warnings produced by the compiler. If you encounter warnings from other parts in the library, either try to fix them directly or report them using the issue tracker.

3.5 Check input range and handle exceptions

To avoid confusing runtime errors and undefined behaviour, check the input to your interfaces for potential conflicts and provide meaningful error messages. If possible, try to catch these exceptions and return in a well-defined state.

3.6 Document

V4R uses Doxygen for documentation of classes and interfaces. Any code in V4R must be documented using Doxygen format.

3.7 Template your classes/functions whenever appropriate

As many classes and functions in V4R depend on e.g. templated point clouds, allow your classes to accommodate the various types by using template classes and functions.

Original page: https://github.com/strands-project/v4r/blob/master/docs/v4r_style_guide.md

For developers

This repository is a ROS wrapper only and shall therefore only include ROS interfaces (service calls, test nodes, visualizations, etc.) to objects / functions defined in the V4R library (e.g. by deriving from classes defined in the V4R library). Please do not commit any essential code into this repository - this should go directly into V4R library (so that people can use it also without ROS). This also means that external dependencies should already be resolved by the V4R library.

As a convention, please create separate folders for service/message definitions (e.g. do not put your *.cpp/*.hpp files together with *.msg/*.srv files). In your package.xml file, please fill in the maintainer (and optionally author) field with your name and e-mail address.

License

The V4R libary and v4r_ros_wrappers are published under the MIT license.

Please add this to the top of each of your files.

/******************************************************************************
 * Copyright (c) 2015 Author's name
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to
 * deal in the Software without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 *
 ******************************************************************************/

Original page: https://github.com/strands-project/v4r_ros_wrappers/blob/master/CONTRIBUTING.md

This repository contains ROS wrappers for the V4R library.

Installation

From Ubuntu Packages

Simply install with sudo apt-get install ros-indigo-v4r-ros-wrappers after enabling the STRANDS repositories.

From Source

Make sure you install the V4R library first.

Then clone this repository and build it with catkin_make. You might get an error regarding V4RModules.cmake. This is easy to fix:

cd my_catkin_ws/build
ccmake ../src

Locate the option V4R_DIR and set it according to where you built/installed the V4R library, e.g.:

V4R_DIR   /home/somewhere/v4r/build

Then call catkin_make again, and everything should compile fine.

Tutorial

A tutorial can be found here.

Troubleshooting

OpenGL not working for SiftGPU

A GPU (ideally NVIDIA) is required for many components. For this to work, the user running the software needs to be allowed to access the X server. The easiest (but, on an unsecured network, very insecure) way to achieve this is to allow access for all users via DISPLAY=:0 sudo xhost +. Alternatively, SIFT can be computed using the non-free module of OpenCV. In this case, link the v4r and v4r_ros_wrappers libraries against the corresponding OpenCV library.

Original page: https://github.com/strands-project/v4r_ros_wrappers/blob/master/Readme.md

usage:

rosrun multiview_object_recognizer multiview_object_recognizer_node -m /path/to/your/models/ [--optional_parameter p]

params (see extended help output with -h):

  • models_dir [in] - Directory containing the object models (REQUIRED)
  • do_sift [in] - if true, does SIFT feature matching
  • do_shot [in] - if true, does SHOT feature matching
  • chop_z [in] - cut-off distance in meters with respect to the camera coordinate system
  • compute_mst [in] - if true, registers the point clouds by SIFT background matching (if scene_to_scene == true), by using the given pose (if use_robot_pose == true) and by common object hypotheses (if hyp_to_hyp == true); from all possible connections a Minimum Spanning Tree is computed. If false, only the given pose is used for each point cloud. The given pose can be extracted from the .pcd’s header fields sensor_origin and sensor_orientation, or from the extracted camera pose, e.g. from the camera_tracker package
  • scene_to_scene [in] - if true, estimates relative camera transform between two viewpoints using SIFT background matching
  • use_go3d [in] - if true, verifies against a reconstructed scene from multiple viewpoints. Otherwise only against the current viewpoint.
  • cg_size_thresh [in] - minimum number of feature correspondences necessary to generate an object hypothesis (a low number increases recall and computation time, a high number increases precision)
  • hv_color_sigma_l [in] - maximum allowed standard deviation of the luminance (L channel in LAB color space) between points of the generated object hypotheses and the scene (a low number increases precision, a high number increases recall)
  • hv_color_sigma_ab [in] - maximum allowed standard deviation of the chrominance (A and B channels in LAB color space) between points of the generated object hypotheses and the scene (a low number increases precision, a high number increases recall)
  • max_vertices_in_graph [in] - maximum number of view points taken into account by the multi-view recognizer
  • transfer_feature_matches [in] - defines the method used for transferring information from other viewpoints (1 = keypoint correspondences (Faeulhammer ea., ICRA2015 paper); 0 = full hypotheses only (Faeulhammer ea., MVA2015 paper))
  • distance_keypoints_get_discarded [in] - defines the squared distance [m²] below which keypoints are considered the same. This avoids redundant keypoints transferred from other viewpoints (only used for extension_mode=0)

A further description of the parameters can be found in the example object_recognizer_multiview in V4R/samples.

Test:

rosrun multiview_object_recognizer test_multiview_object_recognizer_node [_optional_parameter:=p]

Test params (NOTE THAT THESE ARE ROS PARAMETERS):

  • input_method[in] (default: 0) - 0=camera input; 1 = input from disk
  • topic[in] (default: /camera/depth_registered/points) - camera topic being used when input_method=0
  • directory[in] - directory being used to read test .pcd files when input_method=1

Object models and test scenes can be obtained from https://repo.acin.tuwien.ac.at/tmp/permanent/dataset_index.php. To model your own objects, have a look at http://www.acin.tuwien.ac.at/forschung/v4r/software-tools/rtm/.

References:

  • [Thomas Fäulhammer, Aitor Aldoma, Michael Zillich, Markus Vincze, Temporal Integration of Feature Correspondences For Enhanced Recognition in Cluttered And Dynamic Environments, IEEE Int. Conf. on Robotics and Automation (ICRA), 2015]
  • [Thomas Fäulhammer, Michael Zillich, Markus Vincze, Multi-View Hypotheses Transfer for Enhanced Object Recognition in Clutter, IAPR Conference on Machine Vision Applications (MVA), 2015]

Original page: https://github.com/strands-project/v4r_ros_wrappers/blob/master/multiview_object_recognizer/ReadMe.md

This classifier relies on object shape and is based on the approach of [1].

It uses CAD data for training, such as https://repo.acin.tuwien.ac.at/tmp/permanent/Cat200_ModelDatabase.zip (taken from https://repo.acin.tuwien.ac.at/tmp/permanent/3d-net.org). In order to speed up training, please select a subset of the training data you want to use and put it in a folder Cat200_ModelDatabase__small.

Usage:

The classifier can then be started by:

  • roslaunch openni_launch openni.launch depth_registration:=true
  • run segmentation to get some point cloud clusters (if you want to launch both segmentation and classification at the same time, please refer to the launch file in segment_and_classify and add the classification parameters described below)
  • roslaunch object_classifier classifier.launch models_dir:=your_dataset_dir/Cat200_ModelDatabase__small/ topic:=/camera/depth_registered/points training_dir:=your_training_dir

params:

  • models_dir - training directory with the CAD models of the classes,
  • training_dir - directory containing the trained data (if it exists - otherwise the data will be re-trained)

Test:

To test, you can use the test node in segment_and_classify.

References:

  • [1] W. Wohlkinger and M. Vincze, Ensemble of Shape Functions for 3D Object Classification

Original page: https://github.com/strands-project/v4r_ros_wrappers/blob/master/object_classifier/README.md

object_gestalt_segmentation

The current segmentation is based on the work of Ekaterina Potapova and Andreas Richtsfeld. Segmented objects are discarded if they are too tall or too far away from the robot. A service is provided that returns the segmented objects in the current scene. A more sophisticated solution based on attention cues will be provided in the near future.

Technical Maintainer: Edith Langer, TU Wien (https://github.com/edith-langer) - langer@acin.tuwien.ac.at

Contents

  1. Installation Requirements
  2. Execution
  3. Software architecture

1. Installation Requirements:

ROS packages

The ROS packages dependencies can be installed with the command:

rosdep install --from-path object_gestalt_segmentation -i -y

2. Execution:

roslaunch object_gestalt_segmentation startup.launch

Original page: https://github.com/strands-project/v4r_ros_wrappers/blob/master/object_gestalt_segmentation/README.md

usage:

rosrun singleview_object_recognizer recognition_service -m /path/to/your/models/ [--optional_parameter p]

params (see extended help output with -h):

  • models_dir [in] - Directory containing 3D models (saved as *.pcd files)
  • chop_z [in] - cut-off distance in meters with respect to the camera

Test:

rosrun singleview_object_recognizer test_single_view_recognition [_optional_parameter:=p]

Test params (NOTE THAT THESE ARE ROS PARAMETERS):

  • input_method[in] (default: 0) - 0=camera input; 1 = input from disk
  • topic[in] (default: /camera/depth_registered/points) - camera topic being used when input_method=0
  • directory[in] - directory being used to read test .pcd files when input_method=1

Object models and test scenes can be obtained from https://repo.acin.tuwien.ac.at/tmp/permanent/dataset_index.php. To model your own objects, have a look at http://www.acin.tuwien.ac.at/forschung/v4r/software-tools/rtm/.

Original page: https://github.com/strands-project/v4r_ros_wrappers/blob/master/singleview_object_recognizer/ReadMe.md

Overview

In this tutorial you will learn how to recognise and track trained 3D object models. You will first model objects by rotating them in front of an RGB-D camera. These objects can then be recognised by a recognition component via a ROS service call. You will also learn how much you can (or can’t) trust recogniser responses.

This tutorial requires installation of a specific vision library, which is part of the STRANDS repositories.

The V4R (Vision for Robotics) library

This is now released as Ubuntu packages as well, so there should be no need to install it from source as detailed below. Just install with sudo apt-get install ros-indigo-v4r-ros-wrappers and you should be ready to go! Then jump straight to the Object modelling section below.

This library is part of the STRANDS repositories (https://github.com/strands-project/v4r). The library itself is independent of ROS, so it is built outside the ROS catkin workspace. There are ROS wrappers (https://github.com/strands-project/v4r_ros_wrappers), which can then be placed inside the normal catkin workspace.

Dependencies

  • openni drivers: sudo apt-get install libopenni-dev libopenni-sensor-primesense0 libopenni2-dev
  • Qt 4 (http://qt-project.org/): sudo apt-get install libqt4-dev
  • Boost (http://www.boost.org/): comes with ubuntu
  • Point Cloud Library 1.7.x (http://pointclouds.org/): comes with ROS
  • Eigen3 (http://eigen.tuxfamily.org/): sudo apt-get install libeigen3-dev
  • OpenCV 2.x (http://opencv.org/): comes with ROS
  • Ceres Solver 1.9.0 (http://ceres-solver.org/): sudo apt-get install libceres-dev
  • OpenGL GLSL mathematics (libglm-dev): sudo apt-get install libglm-dev
  • GLEW - The OpenGL Extension Wrangler Library (libglew-dev): sudo apt-get install libglew-dev
  • libraries for sparse matrices computations (libsuitesparse-dev): sudo apt-get install libsuitesparse-dev
  • Google Logging Library: sudo apt-get install libgoogle-glog-dev
  • X-related libraries: sudo apt-get install libx11-dev libxinerama-dev libxi-dev libxrandr-dev libassimp-dev

Installation

Clone https://github.com/strands-project/v4r somewhere on your computer, e.g. into ~/somewhere/v4r/, and build it using cmake:

cd ~/somewhere
git clone 'https://github.com/strands-project/v4r'
cd v4r
mkdir build
cd build
cmake ..
make
sudo make install   # optional

Now you can install the STRANDS ros wrappers. Clone v4r_ros_wrappers into your catkin workspace:

cd my_catkin_ws/src
git clone https://github.com/strands-project/v4r_ros_wrappers.git
cd ..

Then call catkin_make once. If you chose not to install the V4R library (the optional step above), you will get an error regarding V4RModules.cmake. This is easy to fix by locating the build/install directory of V4R and setting it appropriately within cmake:

catkin_make -DV4R_DIR=/path/to/your/v4r

It should compile fine now. Finally, source the workspace with

source devel/setup.bash

Object modelling

First you have to model all the objects you want to recognise later. You use an offline tool for this, the RTM Toolbox.

  • Place the object on a flat surface, on a newspaper or something similar. This allows you to rotate the object without touching it. The texture on the newspaper also helps view registration. The pictures below were taken with the object on a turntable, which is the most convenient way of rotating the object. You can also model objects without a turntable, for instance by putting the object on the floor and moving your camera. In this case just skip the step defining the ROI.

  • Start the modelling tool: ~/somewhere/v4r/bin/RTMT

  • Press “Camera Start”: You should now see the camera image

  • Press “ROI Set” + click on the flat surface next to the object

  • Press “Tracker Start”: you should now see the tracking quality bar at the top left

  • Rotate the object 360 degrees; the program will generate a number of keyframes.

    IMPORTANT: Do not touch the object itself while moving it.

  • Press “Tracker Stop”

  • Press “Camera Stop”

  • Press “Pose Optimize” (optional): bundle adjustment of all cameras (be patient!)

  • Press “Object Segment”: The object should already be segmented correctly thanks to the ROI set previously

    You can check the segmentation (< > buttons). Highlighted areas are segments that will be clustered into the object model. If some highlighted areas do not belong to the object (e.g. the supporting plane), you can click on these wrong segmentations to undo them. You can also click on dark areas to add those segments to the object model. These areas should automatically be projected into the other frames.

  • Press “Object …Ok”

  • Press “Store for Recognizer”: saves the point clouds in a format suitable for object recognition. You will be asked for a model name.

    By default the program stores models in various subfolders of the folder “./data”, which will be created if not present. This can be changed in the configuration options (see below).

  • Press “Store for Tracker”: saves a different model, suitable for tracking

  • If the 3D point cloud visualization is activated, + / - can be used to increase/decrease the size of the dots

This is a convenient way to model objects with the STRANDS robots. Put the objects on something elevated (e.g. a trash can) to bring them within a good distance of the robot’s head camera.

Configuration options:

  • Set data folder and model name:

    (File -> Preferences -> Settings -> Path and model name)

  • Configure the number of keyframes to be selected, using a camera rotation and a camera translation threshold:

    (File -> Preferences -> Settings -> Min. delta angle, Min. delta camera distance)

Troubleshooting

  • If you press any of the buttons in the wrong order, just restart. Recovery is futile.
  • If you do not get an image, there is a problem with the OpenNI device driver. Check the file /etc/openni/GlobalDefaults.ini, set UsbInterface=2 (i.e. BULK).
  • If the plane supporting the object is not removed completely, try to increase the inlier distance for dominant plane segmentation in File -> Preferences -> Postprocessing.

Object recognition

With the models created above you can now call the object recognition service within the STRANDS system.

Start the object recogniser on the side PC with the head camera attached. To do this, you must SSH into the side PC without X forwarding then run:

export DISPLAY=:0
rosrun singleview_object_recognizer recognition_service -m /path/to/your/model/data

If you want a description of the other parameters, just add -h to the command. The ones you might want to change are:

  • -z 2.5 (neglects all points further away than 2.5 m - this ensures the noise level of the measured points is not too high; note that RGB-D data from the Asus or Kinect gets worse with distance)
  • --knn_sift 3 (increases the number of generated hypotheses)
  • --do_shot true (enables SHOT feature extraction - necessary for objects without visual texture information)
  • -c 5 (increases speed at the cost of possibly missing objects if they are e.g. half occluded)

The recogniser offers a service /recognition_service/sv_recognition, defined in v4r_ros_wrappers/recognition_srv_definitions/srv:

sensor_msgs/PointCloud2 cloud  # input scene point cloud
float32[] transform            # optional transform to world coordinate system
std_msgs/String scene_name     # optional name for multiview recognition
std_msgs/String view_name      # optional name for multiview recognition
---
std_msgs/String[] ids                 # name of the recognised object model
geometry_msgs/Transform[] transforms  # 6D object pose
float32[] confidence                  # ratio of visible points
geometry_msgs/Point32[] centroid      # centroid of the cluster
object_perception_msgs/BBox[] bbox    # bounding box of the cluster
sensor_msgs/PointCloud2[] models_cloud  # point cloud of the model transformed into camera coordinates

All you have to provide is a point cloud. The recogniser will return arrays of ids (the names you gave during modelling), transforms (the 6D object poses), as well as confidences, bounding boxes and the segmented point clouds corresponding to the recognised portions of the scene.
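
As a rough sketch, a minimal C++ client of this service could look like the following. The service type is assumed here to be recognition_srv_definitions/recognize and the camera topic /camera/depth_registered/points; check the package’s srv folder and your robot’s topic names for the actual values.

// Hedged sketch of a one-shot client for /recognition_service/sv_recognition.
#include <ros/ros.h>
#include <ros/topic.h>
#include <sensor_msgs/PointCloud2.h>
#include <recognition_srv_definitions/recognize.h>  // assumed service header

int main(int argc, char **argv)
{
    ros::init(argc, argv, "sv_recognition_client");
    ros::NodeHandle nh;

    // Grab a single scene from the camera (topic name may differ on your robot).
    sensor_msgs::PointCloud2::ConstPtr cloud =
        ros::topic::waitForMessage<sensor_msgs::PointCloud2>(
            "/camera/depth_registered/points", nh);
    if (!cloud)
    {
        ROS_ERROR("No point cloud received");
        return 1;
    }

    ros::ServiceClient client =
        nh.serviceClient<recognition_srv_definitions::recognize>(
            "/recognition_service/sv_recognition");

    recognition_srv_definitions::recognize srv;
    srv.request.cloud = *cloud;  // the point cloud is the only required input

    if (client.call(srv))
    {
        for (size_t i = 0; i < srv.response.ids.size(); ++i)
            ROS_INFO("Recognised '%s' (confidence %.2f)",
                     srv.response.ids[i].data.c_str(),
                     srv.response.confidence[i]);
    }
    else
    {
        ROS_ERROR("Recognition service call failed");
    }
    return 0;
}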

There is a test ROS component for you as an example:

rosrun singleview_object_recognizer test_single_view_recognition_from_file _topic:=/camera/depth_registered/points

where you have to set the topic to the respective RGB-D source of your robot, e.g. the head_xtion.

The recogniser publishes the following visualisation topics:

  • /recognition_service/sv_recogniced_object_instances_img: displays the original image with overlaid bounding boxes of recognised objects
  • /recognition_service/sv_recogniced_object_instances: the model point clouds of the recognised objects, in the camera frame (e.g. R2-D2 detected in a shelf, with the debug picture and recognised model displayed in rviz)

Recognition performance

The modelling tool provides full 3D object models from all the views you provided, which in principle allows the recogniser to recognise the object in any condition from any view. In practice, however, recognition performance depends on several factors:

  • Distance: performance can decline quite rapidly with distance. First, because the object features on which the recogniser depends become too small (there is no way it could detect an object only a few pixels large). Second, because the depth data, on which a pose verification step in the recogniser depends, becomes more and more noisy (and close to useless beyond 3 m or so).
  • Lighting conditions: in principle the object features are lighting invariant. In practice, different characteristics of the camera used for modelling and the camera used for recognition can affect the appearance of the object features.
  • Motion blur: the robot might be moving while it takes a picture. Motion blur will deteriorate object feature extraction.
  • Occlusions: objects might not be entirely visible. The recogniser does not need all object features, so it can handle a certain amount of occlusion. How much depends on the object and how many features it has.
  • Object specifics: some objects are harder to detect than others, e.g. because they have few features, are small, have a somewhat shiny surface, or are non-rigid.

Before using the recogniser in any object search scenario it is therefore important to gather some statistics about the recognition performance. The recogniser’s confidence value can be a useful measure. But don’t trust it too much - it is not an actual probability.

Useful aspects to learn are:

  • how fast the recognition rate (in how many cases the object is found) drops with distance
  • how the false positive rate and the confidence measure are related

Troubleshooting

  • If you get an error like this

    terminate called after throwing an instance of 'flann::FLANNException'
      what():  Saved index does not contain the dataset and no dataset was provided.
    [recognition_service-2] process has died [pid 17534, exit code -6, cmd /home/mz/work/STRANDS/code/catkin_ws/devel/lib/singleview_object_recognizer/recognition_service __name:=recognition_service __log:=/home/mz/.ros/log/61fb16b8-4afc-11e5-801d-503f56004d09/recognition_service-2.log].
    log file: /home/mz/.ros/log/61fb16b8-4afc-11e5-801d-503f56004d09/recognition_service-2*.log
    

    locate the file sift_flann.idx, probably right in your catkin workspace or in ~/.ros, and remove it. Do the same if you get some vector::_M_range_check error. In case you enable SHOT, please also remove shot_omp_flann.idx.

  • Please also make sure that your camera uses VGA resolution for both RGB and depth. You can check the values for color_mode and depth_mode in rosrun rqt_reconfigure rqt_reconfigure (camera -> driver).

  • Also ensure you turned on depth_registration (i.e. start the camera with roslaunch openni2_launch openni2.launch depth_registration:=true)

Object tracker

If you stored your object model for tracking, you can track single objects in real time using the object tracker module. Start the service with

rosrun object_tracker object_tracker_service -m /path/to/your/model/data/object_name/tracking_model.ao

The tracker will start as soon as you call the service

rosservice call /object_tracker/start_recording

This will publish topics for the 3D object pose and the confidence of this estimate:

rostopic echo /object_tracker/object_pose
rostopic echo /object_tracker/object_tracker_confidence

To visualize the pose, you can use RVIZ and check out the image topic /object_tracker/debug_images.

To stop the tracker call

rosservice call /object_tracker/stop_recording
rosservice call /object_tracker/cleanup

References

When referencing this work, please cite:

  1. J. Prankl, A. Aldoma Buchaca, A. Svejda, M. Vincze, RGB-D Object Modelling for Object Recognition and Tracking. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015.
  2. Thomas Fäulhammer, Aitor Aldoma, Michael Zillich and Markus Vincze, Temporal Integration of Feature Correspondences For Enhanced Recognition in Cluttered And Dynamic Environments, IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 2015.
  3. Thomas Fäulhammer, Michael Zillich and Markus Vincze, Multi-View Hypotheses Transfer for Enhanced Object Recognition in Clutter, IAPR International Conference on Machine Vision Applications (MVA), Tokyo, Japan, 2015.
  4. A. Aldoma Buchaca, F. Tombari, J. Prankl, A. Richtsfeld, L. di Stefano, M. Vincze, Multimodal Cue Integration through Hypotheses Verification for RGB-D Object Recognition and 6DOF Pose Estimation. IEEE International Conference on Robotics and Automation (ICRA), 2013.
  5. J. Prankl, T. Mörwald, M. Zillich, M. Vincze, Probabilistic Cue Integration for Real-time Object Pose Tracking. Proc. International Conference on Computer Vision Systems (ICVS). 2013.

For further information check out this site.

Original page: https://github.com/strands-project/v4r_ros_wrappers/blob/master/Tutorial.md