
Phoenix Mobile Experimental Platform

Research Group Robotics & Process Control





Visual Scene Classification

Active ranging devices such as ultrasonic sensors or laser scanners capture only a small fraction of the relevant properties of the environment. Cameras, in contrast, provide far richer information about the surroundings. However, a basic problem with visual data is extracting information that can actually be used for self-localization.

In the visual scene classifier, which was introduced by G. von Wichert and implemented for the CAROL-project, a neural network is trained on feature-filtered environment images. This self-organizing network autonomously extracts the distinguishing criteria that are later used to classify different video images.
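
The following is a minimal sketch of the idea behind such a self-organizing classifier, assuming the camera images have already been reduced to fixed-length feature vectors (e.g. by simple filter responses). The function names, grid size, and learning parameters are illustrative assumptions and do not reproduce the original CAROL implementation.

    import numpy as np

    def train_som(features, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        """Train a small self-organizing map on (N, dim) feature vectors."""
        rng = np.random.default_rng(seed)
        h, w = grid
        dim = features.shape[1]
        weights = rng.normal(size=(h, w, dim))
        # grid coordinates of the map cells, used for the neighborhood function
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
        n_steps = epochs * len(features)
        step = 0
        for _ in range(epochs):
            for x in rng.permutation(features):
                # best-matching unit (BMU): cell whose prototype is closest to x
                d = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(np.argmin(d), d.shape)
                # learning rate and neighborhood radius decay over training
                frac = step / n_steps
                lr = lr0 * (1.0 - frac)
                sigma = sigma0 * (1.0 - frac) + 0.5
                dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
                influence = np.exp(-dist2 / (2.0 * sigma ** 2))[..., None]
                # pull prototypes in the BMU's neighborhood toward the sample
                weights += lr * influence * (x - weights)
                step += 1
        return weights

    def classify(weights, x):
        """Return the map cell whose prototype best matches feature vector x."""
        d = np.linalg.norm(weights - x, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

After training, each map cell corresponds to a distinct scene appearance, so the cell index returned by classify() can serve as a discrete place label for a new image.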

During self-localization, these classification results can be used to confirm or reject position hypotheses from other sources, or to create completely new position estimates.
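
One simple way to combine such a classification result with position hypotheses is a discrete Bayesian reweighting, sketched below. The hypothesis structure and the likelihood table are assumptions made purely for illustration; they are not taken from the original system.

    def update_hypotheses(hypotheses, observed_class, likelihood):
        """Reweight position hypotheses by how well the observed scene class
        matches the class expected at each hypothesized place.

        hypotheses:     dict mapping place name -> prior probability
        observed_class: scene class returned by the classifier
        likelihood:     dict mapping (place, class) -> P(class | place)
        """
        posterior = {}
        for place, prior in hypotheses.items():
            posterior[place] = prior * likelihood.get((place, observed_class), 1e-6)
        total = sum(posterior.values())
        if total == 0.0:
            return dict(hypotheses)  # observation uninformative; keep priors
        return {place: p / total for place, p in posterior.items()}

Hypotheses whose expected scene class disagrees with the observation lose weight, while consistent ones are confirmed; if no existing hypothesis explains the observation, the classification alone can seed a new position estimate.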


Implementation: Lutz Franken, Klaus Schmitt



Last modified: 15.09.00, Joachim Weber