Vision based persistent localization of a humanoid robot for locomotion tasks

Typical monocular localization schemes solve for matches between reprojected 3D world points and 2D image features in order to estimate the absolute-scale transformation between the camera and the world. Successfully computing such a transformation requires a sufficient number of 3D points whose reprojected pixels are uniformly distributed over the image plane. This work introduces a method to steer the march of a humanoid robot towards directions that are favorable for vision-based localization. To this end, orthogonal diagonalization is performed on the covariance matrices of both the set of 3D world points and the set of their 2D image reprojections. Experiments with the NAO humanoid platform show that our method provides persistence of localization, as the robot tends to walk towards directions that are desirable for successful localization. Additional tests demonstrate how the proposed approach can be incorporated into a control scheme that also considers reaching a target position. This project is part of the thesis of Dr. Pablo A. Martínez.
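To illustrate the core idea, the following is a minimal sketch of the orthogonal diagonalization step the abstract mentions: eigendecomposition of the covariance matrix of a set of landmark points, whose dominant eigenvector indicates the direction of greatest spread. The function name `principal_directions` and the use of this direction as a "favorable" heading are illustrative assumptions, not the exact formulation used in the paper.

```python
import numpy as np

def principal_directions(points):
    """Orthogonally diagonalize the covariance of a point set.

    points: (N, d) array of 3D landmarks or 2D reprojections.
    Returns eigenvalues (descending) and the corresponding unit
    eigenvectors as columns. (Illustrative sketch, not the paper's code.)
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)   # d x d covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending for symmetric cov
    order = np.argsort(eigvals)[::-1]           # reorder to descending
    return eigvals[order], eigvecs[:, order]

# Example: synthetic landmarks spread mostly along the x axis.
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3)) * np.array([5.0, 1.0, 0.5])
vals, vecs = principal_directions(pts)
dominant = vecs[:, 0]  # unit vector along the largest spread of landmarks
```

Under the assumption above, walking along `dominant` keeps the richest distribution of landmarks in view; the same decomposition applied to the 2D reprojections measures how well those points cover the image plane.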

Selected publications on this topic

Pablo A. Martínez-González, Mario Castelán and Gustavo Arechavaleta. Vision based persistent localization of a humanoid robot for locomotion tasks, International Journal of Applied Mathematics and Computer Science, 26(3), 2016.


Posted on 8 December, 2017
