The AVENUE Project

Atanas Georgiev and Peter K. Allen

Overview

Fig.1: A 3-D model of a building

AVENUE stands for Autonomous Vehicle for Exploration and Navigation in Urban Environments. The project targets the automation of the urban site modeling process. The main goal is to build models of complex outdoor urban environments that are not only realistic-looking but also geometrically accurate and photometrically correct. These environments are typified by large 3-D structures that encompass a wide range of geometric shapes and a very large scope of photometric properties.

The models are needed in a variety of applications, such as city planning, urban design, historical preservation and archaeology, fire and police planning, military applications, virtual and augmented reality, geographic information systems, and many others. Currently, such models are typically created by hand, which is extremely slow and error-prone. AVENUE addresses these problems by building a mobile system that autonomously navigates around a site and creates a model with minimal human interaction, if any.

Video: AVENUE in action

The task of the mobile robot is to go to desired locations and acquire requested 3-D scans and images of selected buildings. The locations are determined by the sensor planning (a.k.a. view planning) system and are used by the path planning system to generate reliable trajectories, which the robot then follows. When the robot arrives at the target location, it uses its sensors to acquire the scans and images and forwards them to the modeling system. The modeling system registers and incorporates the new data into the existing partial model of the site (which in the beginning may be empty). After that, the view planning system decides upon the next best data acquisition location, and the above steps repeat. The process starts from a certain location and gradually expands the area it covers until a complete model of the site is obtained.
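
This plan-navigate-acquire-model loop can be summarized in a few lines of code. The C++ sketch below uses hypothetical component names (ViewPlanner, PathPlanner, Robot, Modeler) with trivial stand-in bodies; it illustrates the control flow only and is not the actual implementation.

    // Minimal sketch of the AVENUE acquisition loop; all names are illustrative.
    #include <iostream>
    #include <vector>

    struct Pose { double x = 0, y = 0, heading = 0; };
    struct Scan {};                         // placeholder for a 3-D scan + images
    using SiteModel = std::vector<Scan>;    // the partial model of the site

    // Trivial stand-ins for the planning, navigation and modeling components.
    struct ViewPlanner {
        // Returns false when the model is judged complete.
        bool nextBestView(const SiteModel& model, Pose& target) {
            target = Pose{10.0 * (model.size() + 1), 0, 0};   // dummy location
            return model.size() < 3;        // stop after a few views in this sketch
        }
    };
    struct PathPlanner {
        std::vector<Pose> plan(const Pose& from, const Pose& to) { return {from, to}; }
    };
    struct Robot {
        Pose pose;
        void follow(const std::vector<Pose>& path) { pose = path.back(); }
        Scan acquire() { return Scan{}; }
    };
    struct Modeler {
        void registerAndMerge(SiteModel& model, const Scan& s) { model.push_back(s); }
    };

    int main() {
        ViewPlanner viewPlanner; PathPlanner pathPlanner; Robot robot; Modeler modeler;
        SiteModel model;                    // the model starts out empty
        Pose target;
        while (viewPlanner.nextBestView(model, target)) {        // 1. plan the next view
            robot.follow(pathPlanner.plan(robot.pose, target));  // 2. navigate to it
            modeler.registerAndMerge(model, robot.acquire());    // 3. scan, register, merge
        }
        std::cout << "Model built from " << model.size() << " views\n";
    }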

The entire task is complex and requires the solution of a number of fundamental problems:

The modeling and view planning aspects have been addressed in the work of Ioannis Stamos, a former member of our group.

The problem of the automated data acquisition is further decomposed into:

Mobile Platform

Fig.2: Our mobile platform

The robot that we use is an ATRV-2 model manufactured by Real World Interface, Inc. (now iRobot). It has a maximum payload of 100 kg (220 lbs), and we are trying to make good use of it. To the twelve sonars that come with the robot we have added numerous additional sensors and peripherals:

The robot and all devices above are controlled by an on-board dual Pentium III 500 MHz machine with 512 MB of RAM running Linux.

Software Architecture

Fig.3: Software Architecture

We have designed a distributed object-oriented software architecture that facilitates the coordination of the various components of the system. It is based on Mobility -- a robot integration software framework developed by iRobot -- and makes heavy use of CORBA.

The main building blocks are concurrently executing distributed software components. Components can communicate (via IPC) with one another within the same process, across processes, and even across physical hosts. Components performing related tasks are grouped into servers. A server is a multi-threaded program that handles an entire aspect of the system, such as navigation control or robot interfacing. Each server has a well-defined interface that allows clients to send commands, check the server's status, or obtain data.

The hardware is accessed and controlled by seven servers. A designated server, called NavServer, builds on top of the hardware servers and provides localization and motion control services, as well as a higher-level interface to the robot for remote hosts.
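
As a rough illustration, a NavServer-style interface might look like the plain C++ abstract class below, paired with a do-nothing stub so the example runs. The actual system exposes its interfaces through CORBA and Mobility, so the names and signatures here are illustrative assumptions, not the real IDL.

    // Hypothetical sketch of a higher-level navigation interface.
    #include <iostream>
    #include <string>

    struct Pose2D { double x, y, heading; };

    class NavService {                     // higher-level robot interface
    public:
        virtual ~NavService() = default;
        virtual Pose2D currentPose() = 0;             // localization query
        virtual bool   goTo(const Pose2D& goal) = 0;  // motion command
        virtual void   stop() = 0;
        virtual std::string status() = 0;             // e.g. "idle", "moving"
    };

    // A do-nothing stub so the sketch runs; a real client would obtain a
    // remote reference to the on-board server over the wireless link.
    class StubNavService : public NavService {
        Pose2D pose{0, 0, 0};
    public:
        Pose2D currentPose() override { return pose; }
        bool goTo(const Pose2D& goal) override { pose = goal; return true; }
        void stop() override {}
        std::string status() override { return "idle"; }
    };

    int main() {
        StubNavService nav;
        nav.goTo({12.5, 3.0, 1.57});
        std::cout << "robot at (" << nav.currentPose().x << ", "
                  << nav.currentPose().y << ")\n";
    }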

Components that are too computationally intensive (e.g. the modeling components) or require user interaction (e.g. the user interface) reside on remote hosts and communicate with the robot over the wireless network link.

Localization

Fig.4: Open space localization results

Fig.5: Visual localization results

Our localization system employs two methods. The first method uses odometry, the digital compass/pan-tilt module, and the global positioning sensor for localization in open space. An extended Kalman filter integrates the sensor data and keeps track of the uncertainty associated with it. When the global positioning data is reliable, this method alone is used for localization. When the data deteriorates, the method detects the increased uncertainty and seeks additional data by invoking the second localization method.
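
The switching behavior can be pictured with a minimal sketch that collapses the EKF covariance into a single uncertainty number; the process noise, measurement variance, and threshold below are purely illustrative values, not the ones used on the robot.

    // Toy model of uncertainty growth and GPS correction; numbers are illustrative.
    #include <iostream>

    struct LocalizationState {
        double x = 0, y = 0;     // estimated position
        double variance = 1.0;   // single number standing in for the EKF covariance
    };

    // Grow the uncertainty as the robot moves on odometry alone.
    void predict(LocalizationState& s, double dx, double dy) {
        s.x += dx; s.y += dy;
        s.variance += 0.5;                           // process noise (illustrative)
    }

    // Fuse a GPS fix; an unreliable fix (large measurementVar) barely helps.
    void correctWithGps(LocalizationState& s, double gx, double gy,
                        double measurementVar) {
        double k = s.variance / (s.variance + measurementVar);  // Kalman gain
        s.x += k * (gx - s.x);
        s.y += k * (gy - s.y);
        s.variance *= (1.0 - k);
    }

    int main() {
        const double kUncertaintyLimit = 2.0;        // illustrative threshold
        LocalizationState state;
        for (int step = 0; step < 10; ++step) {
            predict(state, 1.0, 0.0);
            bool gpsReliable = step < 5;             // pretend GPS degrades midway
            if (gpsReliable)
                correctWithGps(state, step + 1.0, 0.0, 0.5);
            if (state.variance > kUncertaintyLimit)
                std::cout << "step " << step
                          << ": uncertainty too high, invoking visual localization\n";
        }
    }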

The second method, called visual localization, is based on camera pose estimation. It is computationally heavier, but it is used only when needed. When invoked, it stops the robot, chooses a nearby building to use, and takes an image of it. The pose estimation is done by matching linear features in the image against a simple and compact model of the building. A database of these models is stored on the on-board computer. No environmental modifications are required.
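
A small sketch of the first step, choosing which stored building model to use, is given below. The building names, model contents, and nearest-model selection rule are illustrative assumptions, and the line matching and pose estimation themselves are only indicated by comments.

    // Illustrative selection of a nearby building model from the on-board database.
    #include <cmath>
    #include <iostream>
    #include <string>
    #include <vector>

    struct BuildingModel {
        std::string name;
        double x, y;     // reference position of the model (illustrative)
        // ... a compact set of 3-D line features would be stored here ...
    };

    const BuildingModel* chooseNearby(const std::vector<BuildingModel>& db,
                                      double robotX, double robotY) {
        const BuildingModel* best = nullptr;
        double bestDist = 1e30;
        for (const auto& b : db) {
            double d = std::hypot(b.x - robotX, b.y - robotY);
            if (d < bestDist) { bestDist = d; best = &b; }
        }
        return best;
    }

    int main() {
        std::vector<BuildingModel> database = {
            {"building_A", 10.0, 40.0}, {"building_B", 55.0, 12.0}};
        // Rough position estimate from the open-space localizer:
        const BuildingModel* target = chooseNearby(database, 50.0, 10.0);
        if (target) {
            std::cout << "stopping robot, imaging " << target->name << "\n";
            // Next: extract linear features from the image, match them against
            // the stored model, and estimate the camera pose from the matches.
        }
    }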

Publications