Phoenix Mobile Experimental Platform
Research Group Robotics & Process Control
Visual Scene Classification

Active ranging devices such as ultrasonic sensors or laser scanners capture only a small fraction of the relevant properties of the environment. Cameras, by contrast, provide far richer environment information. However, a basic problem of using visual data is extracting information that can actually be used for self-localization. In the visual scene classifier, which was introduced by G. von Wichert and implemented for the CAROL project, a neural network is trained with feature-filtered environment images. This self-organizing network autonomously extracts the distinguishing criteria that are later needed to classify different video images. During self-localization, these classification results can be used to confirm or reject position hypotheses from other sources, or to create entirely new position estimates.
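The following is a minimal sketch of the general idea: a self-organizing map whose units act as scene class prototypes, trained on feature-filtered images, with the winning unit used as the scene class of a new frame. The feature filter (a simple gradient histogram), the map size, and the learning schedule are illustrative assumptions; the actual filters and network layout used by von Wichert for CAROL are not described here.

    # Sketch only: toy feature filter and a small 2-D self-organizing map.
    import numpy as np

    def extract_features(image, bins=32):
        """Toy feature filter: histogram of horizontal gradient magnitudes."""
        grad = np.abs(np.diff(image.astype(float), axis=1))
        hist, _ = np.histogram(grad, bins=bins, range=(0, 255), density=True)
        return hist

    class SceneSOM:
        """Self-organizing map whose units serve as scene class prototypes."""
        def __init__(self, rows=4, cols=4, dim=32, seed=0):
            rng = np.random.default_rng(seed)
            self.weights = rng.random((rows * cols, dim))
            self.grid = np.array([(r, c) for r in range(rows) for c in range(cols)])

        def best_unit(self, x):
            # winner = unit whose weight vector is closest to the feature vector
            return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

        def train(self, samples, epochs=50, lr0=0.5, sigma0=2.0):
            for t in range(epochs):
                lr = lr0 * (1.0 - t / epochs)            # decaying learning rate
                sigma = sigma0 * (1.0 - t / epochs) + 0.5  # shrinking neighborhood
                for x in samples:
                    bmu = self.best_unit(x)
                    # units near the winner on the map grid are pulled toward x
                    d = np.linalg.norm(self.grid - self.grid[bmu], axis=1)
                    h = np.exp(-(d ** 2) / (2 * sigma ** 2))
                    self.weights += lr * h[:, None] * (x - self.weights)

        def classify(self, image):
            """Map an image to the index of its best-matching unit (scene class)."""
            return self.best_unit(extract_features(image))

    # Usage: train on feature vectors of recorded environment images, then
    # classify new frames; equal class indices suggest the same place, and a
    # localization module can compare them against its position hypotheses.
    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        images = rng.integers(0, 256, size=(20, 48, 64))  # placeholder frames
        som = SceneSOM()
        som.train([extract_features(img) for img in images])
        print("scene class of frame 0:", som.classify(images[0]))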
Last modified: 15.09.00, Joachim Weber