RAVG: Robotics Active Vision Research
PUBLICATIONS
This is the list of publications of this laboratory.
Ramos-Oliveira, Jorge; Baltazar, Arturo; Castelan, Mario: On ray tracing for sharp changing media. Journal Article In: Journal of the Acoustical Society of America, vol. 146, no. 3, pp. 1595-1604, 2019.
Luna-Aguilar, Christian; Morales-Diaz, America; Castelan, Mario; Nadeu, Climent: Incorporation of acoustic sensors in the regulation of a mobile robot. Journal Article In: Advanced Robotics, vol. 33, no. 2, pp. 61-73, 2019, ISSN: 0169-1864.
Rico-Fernandez, Maria; Rios-Cabrera, Reyes; Castelan, Mario; Guerrero-Reyes, Hector; Juarez-Maldonado, Antonio: A contextualized approach for segmentation of foliage in different crop species. Journal Article In: Computers and Electronics in Agriculture, vol. 156, pp. 378-386, 2019, ISSN: 0168-1699.
Lopez-Juarez, Ismael; Rios-Cabrera, Reyes; Hsieh, S J; Howarth, M.: A hybrid non-invasive method for internal/external quality assessment of potatoes. Journal Article In: European Food Research and Technology, 2017, ISSN: 1438-2385.
Arechavaleta, Gustavo; Morales-Diaz, America B.; Perez-Villeda, Hector Manuel; Castelan, Mario: Hierarchical Task-Based Control of Multirobot Systems With Terminal Attractors. Journal Article In: IEEE Transactions on Control Systems Technology, vol. 25, no. 1, pp. 334-341, 2017, ISSN: 1063-6536.
Perez-Alcocer, R. R.; Torres-Mendez, Luz Abril; Olguin-Diaz, Ernesto; Maldonado-Ramirez, Alejandro: Vision-based Autonomous Underwater Vehicle Navigation in Poor Visibility Conditions using a Model-free Robust Control. Journal Article In: 2016.
Hernandez-Rodriguez, Felipe; Castelan, Mario: A photometric sampling method for facial shape recovery. Journal Article In: Machine Vision and Applications, vol. 27, no. 4, pp. 483-497, 2016.
Martinez-Gonzalez, Pablo; Castelan, Mario; Arechavaleta, Gustavo: Vision based persistent localization of a humanoid robot for locomotion tasks. Journal Article In: International Journal of Applied Mathematics and Computer Science, vol. 26, no. 3, 2016.
Delfin, Josafat; Becerra, Hector M; Arechavaleta, Gustavo: Visual Servo Walking Control for Humanoids with Finite-time Convergence and Smooth Robot Velocities. Journal Article In: International Journal of Control, vol. 89, no. 7, pp. 1342-1358, 2016, ISSN: 1366-5820.
Rios-Cabrera, Reyes; Morales-Diaz, America B.; Aviles-Viñas, Jaime F; Lopez-Juarez, Ismael: Robotic GMAW online learning: issues and experiments. Journal Article In: The International Journal of Advanced Manufacturing Technology, vol. 87, no. 5, pp. 2113-2134, 2016, ISSN: 1433-3015.
Aviles-Viñas, Jaime F; Rios-Cabrera, Reyes; Lopez-Juarez, Ismael: On-line learning of welding bead geometry in industrial robots. Journal Article In: The International Journal of Advanced Manufacturing Technology, vol. 83, no. 1, pp. 217-231, 2016, ISSN: 1433-3015.
Maldonado-Ramirez, Alejandro; Torres-Mendez, Luz Abril: Robotic Visual Tracking of Relevant Cues in Underwater Environments with Poor Visibility Conditions. Journal Article In: Journal of Sensors, vol. 2016, 2016.
Sánchez-Escobedo, Dalila; Castelan, Mario; Smith, William A P: Statistical 3D face shape estimation from occluding contours. Journal Article In: Computer Vision and Image Understanding, vol. 142, pp. 111-124, 2016, ISSN: 1077-3142.
Benitez Perez, H.; Lopez-Juarez, Ismael; Garza-Alanis, P. C.; Rios-Cabrera, Reyes; Duran Chavesti, A.: Reconfiguration Distributed Objects in an Intelligent Manufacturing Cell. Journal Article In: IEEE Latin America Transactions, vol. 14, no. 1, pp. 136-146, 2016, ISSN: 1548-0992.
Cortes-Perez, Noel; Torres-Mendez, Luz Abril: A Low-Cost Mirror-Based Active Perception System for Effective Collision Free Underwater Robotic Navigation. Journal Article In: 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 61-68, 2016.
Maldonado-Ramirez, Alejandro; Torres-Mendez, Luz Abril; Castelan, Mario: A bag of relevant regions for visual place recognition in challenging environments. Conference 2016 23rd International Conference on Pattern Recognition (ICPR), 2016.
Delfin, Josafat; Becerra, Héctor M; Arechavaleta, Gustavo: Humanoid Localization and Navigation using a Visual Memory. Conference IEEE-RAS 16th International Conference on Humanoid Robots, IEEE, 2016, ISSN: 2164-0580.
Maldonado-Ramirez, Alejandro; Torres-Méndez, Luz Abril: A bag of relevant regions model for visual place recognition in coral reefs. Proceedings Article In: OCEANS 2016 MTS/IEEE Monterey, pp. 1-5, 2016.
Ponce-Hinestroza, A. N.; Torres-Mendez, Luz Abril; Drews, Paulo: A statistical learning approach for underwater color restoration with adaptive training based on visual attention. Proceedings Article In: OCEANS 2016 MTS/IEEE Monterey, pp. 1-6, IEEE, 2016.
Ponce-Hinestroza, A. N.; Torres-Mendez, Luz Abril; Drews, Paulo: Using a MRF-BP Model with Color Adaptive Training for Underwater Color Restoration. Proceedings Article In: ICPR 2016, IEEE, Cancun, pp. 1-6, 2016.
Mirelez-Delgado, Flabio; Morales-Diaz, America B.; Rios-Cabrera, Reyes; Gutierrez-Flores, Hugo: Towards intelligent robotic agents for cooperative tasks. Proceedings, 2016.
Mirelez-Delgado, Flabio; Morales-Diaz, America B.; Rios-Cabrera, Reyes: Kinematic control for an omnidirectional mobile manipulator. Proceedings, 2016.
Luna-Aguilar, C. A.; Castelan, Mario; Morales-Diaz, America B.; Nadeu, C.: Incorporación de sensores acústicos en el control de regulación a un punto de un robot móvil. Journal Article In: pp. 582-587, 2015.
Aviles-Viñas, Jaime F; Lopez-Juarez, Ismael; Rios-Cabrera, Reyes: Acquisition of welding skills in industrial robots. Journal Article In: Industrial Robot: An International Journal, vol. 42, no. 2, pp. 156-166, 2015.
Navarro-Gonzalez, Jose Luis; Lopez-Juarez, Ismael; Ordaz-Hernandez, Keny; Rios-Cabrera, Reyes: On-line incremental learning for unknown conditions during assembly operations with industrial robots. Journal Article In: Evolving Systems, vol. 6, no. 2, pp. 101-114, 2015, ISSN: 1868-6486.
Navarro-Gonzalez, Jose Luis; Lopez-Juarez, Ismael; Rios-Cabrera, Reyes; Ordaz-Hernandez, Keny: On-line knowledge acquisition and enhancement in robotic assembly tasks. Journal Article In: Robotics and Computer-Integrated Manufacturing, vol. 33, pp. 78-89, 2015, ISSN: 0736-5845, (Special Issue on Knowledge Driven Robotics and Manufacturing).
Castelan, Mario; Cruz-Perez, Elier; Torres-Mendez, Luz Abril: A Photometric Sampling Strategy for Reflectance Characterization and Transference. Journal Article In: Computación y Sistemas, vol. 19, no. 2, pp. 255-272, 2015.
Luna-Aguilar, C. A.; Castelan, Mario; Morales-Diaz, America B.; Nadeu, C.: Incorporación de sensores acústicos en el control de regulación a un punto de un robot móvil. Conference, 2015.
Maldonado-Ramirez, Alejandro; Torres-Mendez, Luz Abril; Rodriguez-Telles, Francisco G: Ethologically inspired reactive exploration of coral reefs with collision avoidance: Bridging the gap between human and robot spatial understanding of unstructured environments. Conference 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2015.
Maldonado-Ramirez, Alejandro; Torres-Mendez, Luz Abril: Autonomous robotic exploration of coral reefs using a visual attention-driven strategy for detecting and tracking regions of interest. Conference OCEANS 2015 - Genova, IEEE, 2015.
Labastida-Valdés, L.; Torres-Mendez, Luz Abril; Hutchinson, S. A.: Using the motion perceptibility measure to classify points of interest for visual-based AUV guidance in a reef ecosystem. Proceedings Article In: OCEANS 2015 - MTS/IEEE Washington, pp. 1-6, 2015.
Romero-Martínez, C. E.; Torres-Mendez, Luz Abril; Martinez-Garcia, Edgar A.: Modeling motor-perceptual behaviors to enable intuitive paths in an aquatic robot. Proceedings Article In: OCEANS 2015 - MTS/IEEE Washington, pp. 1-5, 2015.
Mirelez-Delgado, Flabio; Morales-Diaz, America B.; Rios-Cabrera, Reyes; Perez-Villeda, Hector Manuel: Control servovisual de un Kuka youBot para la manipulación y traslado de objetos. Proceedings Article In: 2015.
Maldonado-Ramirez, Alejandro; Torres-Mendez, Luz Abril: Using supercolor-pixels descriptors for tracking relevant cues in underwater environments with poor visibility conditions. Proceedings Article In: ICRA 2015 Workshop on Visual Place Recognition in Changing Environments, 2015.
González-García, Luis C.; Torres-Mendez, Luz Abril; Martínez, Julieta; Sattar, Junaed; Little, James: Are You Talking to Me? Detecting Attention in First-Person Interactions. Proceedings Article In: pp. 137-142, 2015, ISSN: 2308-4197.
Martinez-Garcia, Edgar A.; Torres-Mendez, Luz Abril; Elara Mohan, Rajesh: Multi-legged robot dynamics navigation model with optical flow. Journal Article In: International Journal of Intelligent Unmanned Systems, vol. 2, no. 2, pp. 121-139, 2014.
Rios-Cabrera, Reyes; Tuytelaars, Tinne: Boosting Masked Dominant Orientation Templates for Efficient Object Detection. Journal Article In: Computer Vision and Image Understanding, vol. 120, pp. 103-116, 2014, ISSN: 1077-3142.
Martinez-Gonzalez, Pablo; Varas, David; Castelan, Mario; Camacho, Margarita; Marques, Ferran; Arechavaleta, Gustavo: 3D shape reconstruction from a humanoid generated video sequence. Conference 2014 IEEE-RAS International Conference on Humanoid Robots, 2014, ISSN: 2164-0572.
Rodriguez-Telles, Francisco G; Perez-Alcocer, Ricardo; Maldonado-Ramirez, Alejandro; Torres-Mendez, Luz Abril; Bikram Dey, Bir; Martinez-Garcia, Edgar A.: Vision-based reactive autonomous navigation with obstacle avoidance: Towards a non-invasive and cautious exploration of marine habitat. Proceedings Article In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 3813-3818, 2014, ISSN: 1050-4729.
Maldonado-Ramirez, Alejandro; Torres-Mendez, Luz Abril; Martinez-Garcia, Edgar A.: Robust detection and tracking of regions of interest for autonomous underwater robotic exploration. Conference Proc. 6th Int. Conf. on Advanced Cognitive Technologies and Applications, 2014.
Estopier-Castillo, Vicente; Arechavaleta, Gustavo; Olguín-Díaz, Ernesto: Generación de Movimientos Humanoides con Dinámica Inversa Jerárquica. Proceedings Article In: Congreso Latinoamericano de Control Automático CLCA 2014, 2014.
Sanchez-Escobedo, Dalila; Castelan, Mario: 3D face shape prediction from a frontal image using cylindrical coordinates and partial least squares. Journal Article In: Pattern Recognition Letters, vol. 34, no. 4, pp. 389-399, 2013, ISSN: 0167-8655, (Advances in Pattern Recognition Methodology and Applications).
Lopez-Juarez, Ismael; Castelan, Mario; Castro-Martínez, Francisco Javier; Peña-Cabrera, Mario; Osorio-Comparan, Roman: Using Object's Contour, Form and Depth to Embed Recognition Capability into Industrial Robots. Journal Article In: Journal of Applied Research and Technology, vol. 11, no. 1, pp. 5-17, 2013, ISSN: 1665-6423.
Rivero-Juarez, Joaquin; Martinez-Garcia, Edgar A.; Torres-Mendez, Luz Abril; Elara Mohan, Rajesh: 3D Heterogeneous Multi-sensor Global Registration. Journal Article In: Procedia Engineering, vol. 64, pp. 1552-1561, 2013, ISSN: 1877-7058.
Rios-Cabrera, Reyes; Tuytelaars, Tinne: Discriminatively Trained Templates for 3D Object Detection: A Real Time Scalable Approach. Proceedings Article In: The IEEE International Conference on Computer Vision (ICCV), 2013.

2019
Journal Articles
@article{Ramos-Oliveira2019,
title = {On ray tracing for sharp changing media},
author = {Ramos-Oliveira, Jorge and Baltazar, Arturo and Castelan, Mario},
url = {https://doi.org/10.1121/1.5125133},
doi = {10.1121/1.5125133},
year = {2019},
date = {2019-07-10},
journal = {Journal of the Acoustical Society of America},
volume = {146},
number = {3},
pages = {1595-1604},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Luna-Aguilar2019,
title = {Incorporation of acoustic sensors in the regulation of a mobile robot},
author = {Luna-Aguilar, Christian and Morales-Diaz, America and Castelan, Mario and Nadeu, Climent},
publisher = {Taylor and Francis},
url = {https://doi.org/10.1080/01691864.2019.1573703},
doi = {10.1080/01691864.2019.1573703},
issn = {0169-1864},
year = {2019},
date = {2019-01-01},
journal = {Advanced Robotics},
volume = {33},
number = {2},
pages = {61-73},
abstract = {This article introduces the incorporation of acoustic sensors for the localization of a mobile robot. The robot is considered as a sound source and its position is located applying a Time Delay of Arrival (TDOA) method. Since the accuracy of this method varies with the microphone array, a navigation acoustic map that indicates the location errors is built. This map also provides the robot with navigation trajectories point-to-point and the control is capable to drive the robot through these trajectories to a desired configuration. The proposed localization method is thoroughly tested using both a 900 Hz square signal and the natural sound of the robot, which is driven near the desired point with an average error of 0.067 m.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Rico-Fernandez2019,
title = {A contextualized approach for segmentation of foliage in different crop species},
author = {Rico-Fernandez, Maria and Rios-Cabrera, Reyes and Castelan, Mario and Guerrero-Reyes, Hector and Juarez-Maldonado, Antonio},
publisher = {Elsevier},
url = {https://doi.org/10.1016/j.compag.2018.11.033},
doi = {10.1016/j.compag.2018.11.033},
issn = {0168-1699},
year = {2019},
date = {2019-01-01},
journal = {Computers and Electronics in Agriculture},
volume = {156},
pages = {378-386},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2017
Journal Articles
@article{Lopez-Juarez2017,
title = {A hybrid non-invasive method for internal/external quality assessment of potatoes},
author = {Lopez-Juarez, Ismael and Rios-Cabrera, Reyes and Hsieh, S J and Howarth, M.},
url = {https://doi.org/10.1007/s00217-017-2936-9},
doi = {10.1007/s00217-017-2936-9},
issn = {1438-2385},
year = {2017},
date = {2017-07-11},
journal = {European Food Research and Technology},
abstract = {Consumers purchase fruits and vegetables based on its quality, which can be defined as a degree of excellence which is the result of a combination of characteristics, attributes and properties that have significance for market acceptability. In this paper, a novel hybrid active imaging methodology for potato quality inspection that uses an optical colour camera and an infrared thermal camera is presented. The methodology employs an artificial neural network (ANN) that uses quality data composed by two descriptors as input. The ANN works as a feature classifier so that its output is the potato quality grade. The input vector contains information related to external characteristics, such as shape, weight, length and width. Internal characteristics are also accounted for in the input vector in the form of excessive sugar content. The extra sugar content of the potato is an important problem for potato growers and potato chip manufacturers. Extra sugar content could result in diseases or wounds in the potato tuber. In general, potato tubers with low sugar content are considered as having a higher quality. The validation of the methodology was made through experimentation which consisted in fusing both, external and internal characteristics in the input vector to the ANN for an overall quality classification. Results using internal data as obtained from an infrared camera and fused with optical external parameters demonstrated the feasibility of the method since the prediction accuracy increased during potato grading.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{7454708,
title = {Hierarchical Task-Based Control of Multirobot Systems With Terminal Attractors},
author = {Arechavaleta, Gustavo and Morales-Diaz, America B. and Perez-Villeda, Hector Manuel and Castelan, Mario },
url = {http://ieeexplore.ieee.org/abstract/document/7454708/},
doi = {10.1109/TCST.2016.2549279},
issn = {1063-6536},
year = {2017},
date = {2017-01-01},
journal = {IEEE Transactions on Control Systems Technology},
volume = {25},
number = {1},
pages = {334 - 341},
abstract = {This brief proposes a hierarchical control scheme based on the definition of a set of multirobot task functions. To deal with the inherent conflicts between tasks, a strict hierarchy is imposed on them. We present a novel scheme that copes with two main difficulties shared in standard task-based controllers: 1) to impose a desired time convergence of tasks and 2) to avoid discontinuous task transitions occurred when a task is inserted or removed in the hierarchical structure. As a result, continuous input references are generated for the low-level control of the group. The validation is achieved in simulation and by performing an experiment with wheeled mobile robots.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2016
Journal Articles
@article{Perez-Alcocer2016,
title = {Vision-based Autonomous Underwater Vehicle Navigation in Poor Visibility Conditions using a Model-free Robust Control},
author = {Perez-Alcocer, R. R. and Torres-Mendez, Luz Abril and Olguin-Diaz, Ernesto and Maldonado-Ramirez, Alejandro },
year = {2016},
date = {2016-06-06},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Hernandez-Rodriguez2016,
title = {A photometric sampling method for facial shape recovery},
author = {Hernandez-Rodriguez, Felipe and Castelan, Mario },
url = {http://link.springer.com/article/10.1007%2Fs00138-016-0755-9},
year = {2016},
date = {2016-04-01},
journal = {Machine Vision and Applications},
volume = {27},
number = {4},
pages = {483-497},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Martinez-Gonzalez2016,
title = {Vision based persistent localization of a humanoid robot for locomotion tasks},
author = {Martinez-Gonzalez, Pablo and Castelan, Mario and Arechavaleta, Gustavo },
url = {https://drive.google.com/file/d/0B-7dVUdTjeJUNGdXd0N6UWRvdk0/view},
year = {2016},
date = {2016-03-26},
journal = {International Journal of Applied Mathematics and Computer Science},
volume = {26},
number = {3},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Delfin2016,
title = {Visual Servo Walking Control for Humanoids with Finite-time Convergence and Smooth Robot Velocities},
author = {Delfin, Josafat and Becerra, Hector M and Arechavaleta, Gustavo },
url = {http://www.tandfonline.com/doi/abs/10.1080/00207179.2015.1129558},
doi = {10.1080/00207179.2015.1129558},
issn = {1366-5820},
year = {2016},
date = {2016-01-10},
journal = {International Journal of Control},
volume = {89},
number = {7},
pages = {1342-1358},
abstract = {In this paper, we address the problem of humanoid locomotion guided from information of a monocular camera. The goal of the robot is to reach a desired location defined in terms of a target image, i.e., a positioning task. The proposed approach allows us to introduce a desired time to complete the positioning task, which is advantageous in contrast to the classical exponential convergence. In particular, finite-time convergence is achieved while generating smooth robot velocities and considering the omnidirectional waking capability of the robot. In addition, we propose a hierarchical task-based control scheme, which can simultaneously handle the visual positioning and the obstacle avoidance tasks without affecting the desired time of convergence. The controller is able to activate or inactivate the obstacle avoidance task without generating discontinuous velocity references while the humanoid is walking. Stability of the closed loop for the two task-based control is demonstrated theoretically even during the transitions between the tasks. The proposed approach is generic in the sense that different visual control schemes are supported. We evaluate a homography-based visual servoing for position-based and image-based modalities, as well as for eye-in-hand and eye-to-hand configurations. The experimental evaluation is performed with the humanoid robot NAO.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Rios-Cabrera2016,
title = {Robotic GMAW online learning: issues and experiments},
author = {Rios-Cabrera, Reyes and Morales-Diaz, America B. and Aviles-Vi\~{n}as, Jaime F and Lopez-Juarez, Ismael },
url = {http://dx.doi.org/10.1007/s00170-016-8618-0},
doi = {10.1007/s00170-016-8618-0},
issn = {1433-3015},
year = {2016},
date = {2016-01-01},
journal = {The International Journal of Advanced Manufacturing Technology},
volume = {87},
number = {5},
pages = {2113--2134},
abstract = {This paper presents three main contributions: (i) an experimental analysis of variables, using well-defined statistical patterns applied to the main parameters of the welding process. (ii) An on-line/off-line learning and testing method, showing that robots can acquire a useful knowledge base without human intervention to learn and reproduce bead geometries. And finally, (iii) an on-line testing analysis including penetration of the bead, that is used to train an artificial neural network (ANN). For the experiments, an optic camera was used in order to measure bead geometry (width and height). Also real-time computer vision algorithms were implemented to extract training patterns. The proposal was carried out using an industrial KUKA robot and a GMAW type machine inside a manufacturing cell. We present an experimental analysis that shows different issues and solutions to build an industrial adaptive system for the robotic welding process.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Aviles-Vinas2016b,
title = {On-line learning of welding bead geometry in industrial robots},
author = {Aviles-Vi\~{n}as, Jaime F and Rios-Cabrera, Reyes and Lopez-Juarez, Ismael },
url = {http://dx.doi.org/10.1007/s00170-015-7422-6},
doi = {10.1007/s00170-015-7422-6},
issn = {1433-3015},
year = {2016},
date = {2016-01-01},
journal = {The International Journal of Advanced Manufacturing Technology},
volume = {83},
number = {1},
pages = {217--231},
abstract = {In this paper, we propose an architecture based on an artificial neural network (ANN), to learn welding skills automatically in industrial robots. With the aid of an optic camera and a laser-based sensor, the bead geometry (width and height) is measured. We propose a real-time computer vision algorithm to extract training patterns in order to acquire knowledge to later predict specific geometries. The proposal is implemented and tested in an industrial KUKA KR16 robot and a GMAW type machine within a manufacturing cell. Several data analysis are described as well as off-line and on-line training, learning strategies, and testing experimentation. It is demonstrated during our experiments that, after learning the skill, the robot is able to produce the requested bead geometry even without any knowledge about the welding parameters such as arc voltage and current. We implemented an on-line learning test, where the whole experiments and learning process take only about 4 min. Using this knowledge later, we obtained up to 95 % accuracy in prediction.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{maldonado2016robotic,
title = {Robotic Visual Tracking of Relevant Cues in Underwater Environments with Poor Visibility Conditions},
author = {Maldonado-Ramirez, Alejandro and Torres-Mendez, Luz Abril},
url = {https://www.hindawi.com/journals/js/2016/4265042/},
year = {2016},
date = {2016-01-01},
journal = {Journal of Sensors},
volume = {2016},
publisher = {Hindawi Publishing Corporation},
abstract = {Using visual sensors for detecting regions of interest in underwater environments is fundamental for many robotic applications. Particularly, for an autonomous exploration task, an underwater vehicle must be guided towards features that are of interest. If the relevant features can be seen from the distance, then smooth control movements of the vehicle are feasible in order to position itself close enough with the final goal of gathering visual quality images. However, it is a challenging task for a robotic system to achieve stable tracking of the same regions since marine environments are unstructured and highly dynamic and usually have poor visibility. In this paper, a framework that robustly detects and tracks regions of interest in real time is presented. We use the chromatic channels of a perceptual uniform color space to detect relevant regions and adapt a visual attention scheme to underwater scenes. For the tracking, we associate with each relevant point superpixel descriptors which are invariant to changes in illumination and shape. The field experiment results have demonstrated that our approach is robust when tested on different visibility conditions and depths in underwater explorations.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{SanchezEscobedo2016111,
title = {Statistical 3D face shape estimation from occluding contours},
author = {S\'{a}nchez-Escobedo, Dalila and Castelan, Mario and Smith, William A P},
url = {http://www.sciencedirect.com/science/article/pii/S1077314215001885},
doi = {10.1016/j.cviu.2015.08.012},
issn = {1077-3142},
year = {2016},
date = {2016-01-01},
journal = {Computer Vision and Image Understanding},
volume = {142},
pages = {111 - 124},
abstract = {This paper addresses the problem of 3D face shape approximation from occluding contours, i.e., the boundaries between the facial region and the background. To this end, a linear regression process that models the relationship between a set of 2D occluding contours and a set of 3D vertices is applied onto the corresponding training sets using Partial Least Squares. The result of this step is a regression matrix which is capable of estimating new 3D face point clouds from the out-of-training 2D Cartesian pixel positions of the selected contours. Our approach benefits from the highly correlated spaces spanned by the 3D vertices around the occluding boundaries of a face and their corresponding 2D pixel projections. As a result, the proposed method resembles dense surface shape recovery from missing data. Our technique is evaluated over four scenarios designed to investigate both the influence of the contours included in the training set and the considered number of contours. Qualitative and quantitative experiments demonstrate that using contours outperform the state of the art on the database used in this article. Even using a limited number of contours provides a useful approximation to the 3D face surface.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{7430073,
title = {Reconfiguration Distributed Objects in an Intelligent Manufacturing Cell},
author = {Benitez Perez, H. and Lopez-Juarez, Ismael and Garza-Alanis, P. C. and Rios-Cabrera, Reyes and Duran Chavesti, A.},
doi = {10.1109/TLA.2016.7430073},
issn = {1548-0992},
year = {2016},
date = {2016-01-01},
journal = {IEEE Latin America Transactions},
volume = {14},
number = {1},
pages = {136-146},
abstract = {A manufacturing system with the abilities of easy reconfiguration and high scalability becomes flexible, dynamic and open to the use of software technologies. To give these abilities to a manufacturing cell formed of three industrial robots and two conveyors, a middleware based on the programming standard Common Object Request Broker Architecture (CORBA) was developed, thus creating a distributed manufacturing cell, allowing us to have a real production with different final products. In order to optimize the production times of the different products to be manufactured, a product scheduler was developed using the algorithm Earliest Deadline First (EDF) and the support algorithm Deferrable Server (DS). Given that failures may occur on any of the specialized modules of the manufacturing system, the self-reconfiguration of the manufacturing system is very desirable. This article proposes an algorithm to solve this problem; the algorithm identifies failures in relation to the time it takes the system to make a product, then modifies the working speed of the plant elements of the specialized modules.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{CortesPerez2016ALM,
title = {A Low-Cost Mirror-Based Active Perception System for Effective Collision Free Underwater Robotic Navigation},
author = {Cortes-Perez, Noel and Torres-Mendez, Luz Abril},
year = {2016},
date = {2016-01-01},
journal = {2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
pages = {61-68},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Conferences
@conference{7899826,
title = {A bag of relevant regions for visual place recognition in challenging environments},
author = {Maldonado-Ramirez, Alejandro and Torres-Mendez, Luz Abril and Castelan, Mario},
doi = {10.1109/ICPR.2016.7899826},
year = {2016},
date = {2016-12-01},
booktitle = {2016 23rd International Conference on Pattern Recognition (ICPR)},
pages = {1358-1363},
abstract = {In this paper, we present a method for vision-based place recognition in environments with a high content of similar features and that are prone to variations in illumination. The high similarity of features makes it difficult to disambiguate between two different places. The novelty of our method relies on using the Bag of Words (BoW) approach to derive an image descriptor from a set of relevant regions, which are extracted using a visual attention algorithm. We name our approach Bag of Relevant Regions (BoRR). The descriptor of each relevant region is built by using a 2D histogram of the chromatic channels of the CIE-Lab color space. We have compared our results with those using state-of-the-art descriptors that include the BoW and demonstrate that our approach performs better in most cases.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{conf:Delfin2016,
title = {Humanoid Localization and Navigation using a Visual Memory},
author = {Delfin, Josafat and Becerra, H\'{e}ctor M and Arechavaleta, Gustavo },
doi = {10.1109/HUMANOIDS.2016.7803354},
issn = {2164-0580},
year = {2016},
date = {2016-11-15},
booktitle = {IEEE-RAS 16th International Conference on Humanoid Robots},
pages = {725-731},
publisher = {IEEE},
abstract = {A visual memory (VM) is a topological map in which a set of key images organized in form of a graph represents an environment. In this paper, a navigation strategy for humanoid robots addressing the problems of localization, visual path planning and path following based on a VM is proposed. Assuming that the VM is given, the main contributions of the paper are: 1) A novel pure vision-based localization method. 2) The introduction of the estimated rotation between key images in the path planning stage to benefit paths with enough visual information and with less effort of robot rotation. 3) The integration of the complete navigation strategy and its experimental evaluation with a Nao robot in an unstructured environment. The humanoid robot is modeled as a holonomic system and the strategy might be used in different scenarios like corridors, uncluttered or cluttered environments.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{maldonado2016bag,
title = {A Bag of Relevant Regions Model for Place Recognition in Coral Reefs},
author = {Maldonado-Ramirez, Alejandro and Torres-Mendez, Luz Abril },
year = {2016},
date = {2016-01-01},
booktitle = {OCEANS 2016},
pages = {1--5},
organization = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Proceedings Articles
@inproceedings{7761188,
title = {A bag of relevant regions model for visual place recognition in coral reefs},
author = {Maldonado-Ramirez, Alejandro and Torres-M\'{e}ndez, Luz Abril},
doi = {10.1109/OCEANS.2016.7761188},
year = {2016},
date = {2016-09-01},
booktitle = {OCEANS 2016 MTS/IEEE Monterey},
pages = {1-5},
abstract = {Vision-based place recognition in underwater environments is a key component for autonomous robotic exploration. However, this task can be very challenging due to the inherent properties of this kind of places, such as color distortion, poor visibility, perceptual aliasing and dynamic illumination. In this paper, we present a method for vision-based place recognition in coral reefs. Our method relies on using the Bag-of-Words (BoW) approach to derive a descriptor, for the whole image, from a set of relevant regions, which are extracted by utilizing a visual attention algorithm. The descriptor for each relevant region is built by using a histogram of the chromatic channels of the CIE-Lab color space. We present results of our method for a place recognition task in real-life videos as well as comparisons of our method against other popular techniques. It can be seen that our approach performs better in most cases.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{7761187,
title = {A statistical learning approach for underwater color restoration with adaptive training based on visual attention},
author = {Ponce-Hinestroza, A. N. and Torres-Mendez, Luz Abril and Drews, Paulo},
doi = {10.1109/OCEANS.2016.7761187},
year = {2016},
date = {2016-09-01},
booktitle = {OCEANS 2016 MTS/IEEE Monterey},
pages = {1-6},
abstract = {In most artificial vision systems the quality of acquired images is directly related with the amount of information that can be obtained from them, and, particularly in underwater robotics applications involving monitoring and inspection tasks this is crucial. Statistical learning methods like Markov Random Fields with Belief Propagation (MRF-BP) provide a solution by using existing essential correlations in training sets. However, as in any restoration/correction method for real applications, it is not possible to have color ground truth available on-line. In this paper, we present a MRF-BP model formulated in the chromatic domain of underwater scenes such that we synthesize the ground truth color to train the model and maximize the capabilities of our method. The generated ground truth introduces some improvements to existing color correction methods and visual attention considerations which also helps to choose a small size training set for the MRF-BP model. Feasibility of our approach is shown from the results in which a good color discrimination is observed even in poor visibility conditions.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{ponce2016icpr,
title = {Using a MRF-BP Model with Color Adaptive Training for Underwater Color Restoration},
author = {Ponce-Hinestroza, A-N and Torres-Mendez, Luz Abril and Drews, Paulo},
year = {2016},
date = {2016-01-01},
booktitle = {ICPR 2016 IEEE Cancun},
pages = {1--6},
organization = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Proceedings
@proceedings{Mirelez-Delgado2016,
title = {Towards intelligent robotic agents for cooperative tasks},
author = {Mirelez-Delgado, Flabio and Morales-Diaz, America B. and Rios-Cabrera, Reyes and Gutierrez-Flores, Hugo},
year = {2016},
date = {2016-06-06},
keywords = {},
pubstate = {published},
tppubtype = {proceedings}
}
@proceedings{Mirelez-Delgado2016b,
title = {Kinematic control for an omnidirectional mobile manipulator},
author = {Mirelez-Delgado, Flabio and Morales-Diaz, America B. and Rios-Cabrera, Reyes},
year = {2016},
date = {2016-06-06},
keywords = {},
pubstate = {published},
tppubtype = {proceedings}
}
2015
Journal Articles
@article{Luna-Aguilar2015,
title = {Incorporaci\'{o}n de sensores ac\'{u}sticos en el control de regulaci\'{o}n a un punto de un robot m\'{o}vil},
author = {Luna-Aguilar, C. A. and Castelan, Mario and Morales-Diaz, America B. and Nadeu, C.},
url = {https://upcommons.upc.edu/handle/2117/102668},
year = {2015},
date = {2015-06-06},
pages = {582-587},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{doi:10.1108/IR-09-2014-0395,
title = {Acquisition of welding skills in industrial robots},
author = {Aviles-Vi\~{n}as, Jaime F and Lopez-Juarez, Ismael and Rios-Cabrera, Reyes },
url = {http://dx.doi.org/10.1108/IR-09-2014-0395},
doi = {10.1108/IR-09-2014-0395},
year = {2015},
date = {2015-01-01},
journal = {Industrial Robot: An International Journal},
volume = {42},
number = {2},
pages = {156-166},
abstract = {Purpose \textendash The purpose of this paper was to propose a method based on an Artificial Neural Network and a real-time vision algorithm, to learn welding skills in industrial robotics. Design/methodology/approach \textendash By using an optic camera to measure the bead geometry (width and height), the authors propose a real-time computer vision algorithm to extract training patterns and to enable an industrial robot to acquire and learn autonomously the welding skill. To test the approach, an industrial KUKA robot and a gas metal arc welding machine were used in a manufacturing cell. Findings \textendash Several data analyses are described, showing empirically that industrial robots can acquire the skill even if the specific welding parameters are unknown. Research limitations/implications \textendash The approach considers only stringer beads. Weave bead and bead penetration are not considered. Practical implications \textendash With the proposed approach, it is possible to learn specific welding parameters regardless of the material, type of robot or welding machine. This is due to the fact that the feedback system produces automatic measurements that are labelled prior to the learning process. Originality/value \textendash The main contribution is that the complex learning process is reduced into an input-process-output system, where the process part is learnt automatically without human supervision, by registering the patterns with an automatically calibrated vision system.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Navarro-Gonzalez2015,
title = {On-line incremental learning for unknown conditions during assembly operations with industrial robots},
author = {Navarro-Gonzalez, Jose Luis and Lopez-Juarez, Ismael and Ordaz-Hernandez, Keny and Rios-Cabrera, Reyes },
url = {http://dx.doi.org/10.1007/s12530-014-9125-x},
doi = {10.1007/s12530-014-9125-x},
issn = {1868-6486},
year = {2015},
date = {2015-01-01},
journal = {Evolving Systems},
volume = {6},
number = {2},
pages = {101--114},
abstract = {The assembly operation using industrial robots can be accomplished successfully in well-structured environments where the mating pair location is known in advance. However, in real-world scenarios there are uncertainties associated with sensing, control and modelling errors that make the assembly task very complex. In addition, there are also unmodeled uncertainties that have to be taken into account for an effective control algorithm to succeed; among these are disturbances, backlash and aging of mechanisms. In this paper, a method to overcome the effect of those uncertainties based on the Fuzzy ARTMAP artificial neural network (ANN) to successfully accomplish the assembly task is proposed. Experimental work is reported using an industrial 6 DOF robot arm in conjunction with a vision system for part location and wrist force/torque sensing data for assembly. Force data is fed into an ANN evolving controller during a typical peg-in-hole (PIH) assembly operation. The controller uses an incremental learning mechanism that is solely guided by the sensed forces. In this article, two approaches are presented in order to compare the incremental learning capability of the manipulator. The first approach uses a primitive knowledge base (PKB) containing 16 primitive movements to learn online the first insertion. During assembly, the manipulator learns new patterns according to the learning criteria, which turn the PKB into an enhanced knowledge base (EKB). During a second insertion the controller effectively uses the EKB and operation improves. The second approach employs minimum information (it contains only the assembly direction) and the process starts from scratch. After several operations, that knowledge base increases by including only the patterns needed to perform the insertion. Experimental results showed that the evolving controller is able to assemble the mating pairs, enhancing its knowledge whenever it is needed depending on the part geometry and level of expertise. Our approach is demonstrated through several PIH operations with different tolerances and part geometries. As the robot's expertise evolves, the PIH operation is carried out faster with shorter assembly trajectories.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{NavarroGonzalez201578b,
title = {On-line knowledge acquisition and enhancement in robotic assembly tasks},
author = {Navarro-Gonzalez, Jose Luis and Lopez-Juarez, Ismael and Rios-Cabrera, Reyes and Ordaz-Hernandez, Keny},
url = {http://www.sciencedirect.com/science/article/pii/S073658451400074X},
doi = {10.1016/j.rcim.2014.08.013},
issn = {0736-5845},
year = {2015},
date = {2015-01-01},
journal = {Robotics and Computer-Integrated Manufacturing},
volume = {33},
pages = {78 - 89},
abstract = {Industrial robots are reliable machines for manufacturing tasks such as welding, painting, assembly, palletizing or kitting operations. They are traditionally programmed by an operator using a teach pendant in a point-to-point scheme with limited sensing capabilities such as industrial vision systems and force/torque sensing. The use of these sensing capabilities is associated with the particular robot controller, operative system and programming language. Today, robots can react to environment changes specific to their task domain but are still unable to learn skills to effectively use their current knowledge. The need for such a skill in unstructured environments where knowledge can be acquired and enhanced is desirable so that robots can effectively interact in multimodal real-world scenarios. In this article we present a multimodal assembly controller (MAC) approach to embed and effectively enhance knowledge into industrial robots working in multimodal manufacturing scenarios such as assembly during kitting operations with varying shapes and tolerances. During learning, the robot uses its vision and force capabilities, resembling a human operator carrying out the same operation. The approach consists of using a MAC based on the Fuzzy ARTMAP artificial neural network in conjunction with a knowledge base. The robot starts the operation having limited initial knowledge about what task it has to accomplish. During the operation, the robot learns the skill for recognising assembly parts and how to assemble them. The skill acquisition is evaluated by counting the steps to complete the assembly, the length of the followed assembly path and compliant behaviour. The performance improves with time so that the robot becomes an expert, demonstrated by the assembly of a kit with different part geometries. The kit is unknown by the robot at the beginning of the operation; therefore, the kit type, location and orientation are unknown, as well as the parts to be assembled, since they are randomly fed by a conveyor belt.},
note = {Special Issue on Knowledge Driven Robotics and Manufacturing},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Castelan2015,
title = {A Photometric Sampling Strategy for Reflectance Characterization and Transference},
author = {Castelan, Mario and Cruz-Perez, Elier and Torres-Mendez, Luz Abril},
url = {http://www.cys.cic.ipn.mx/ojs/index.php/CyS/article/view/1944},
year = {2015},
date = {2015-01-01},
journal = {Computaci\'{o}n y Sistemas},
volume = {19},
number = {2},
pages = {255-272},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Conferences
@conference{Luna-Aguilar2015b,
title = {Incorporaci\'{o}n de sensores ac\'{u}sticos en el control de regulaci\'{o}n a un punto de un robot m\'{o}vil},
author = {Luna-Aguilar, C. A. and Castelan, Mario and Morales-Diaz, America B. and Nadeu, C.},
year = {2015},
date = {2015-06-06},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{maldonado2015ethologically,
title = {Ethologically inspired reactive exploration of coral reefs with collision avoidance: Bridging the gap between human and robot spatial understanding of unstructured environments},
author = {Maldonado-Ramirez, Alejandro and Torres-Mendez, Luz Abril and Rodriguez-Telles, Francisco G},
year = {2015},
date = {2015-01-01},
booktitle = {Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on},
pages = {4872--4879},
organization = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{maldonado2015autonomous,
title = {Autonomous robotic exploration of coral reefs using a visual attention-driven strategy for detecting and tracking regions of interest},
author = {Maldonado-Ramirez, Alejandro and Torres-Mendez, Luz Abril
},
year = {2015},
date = {2015-01-01},
booktitle = {OCEANS 2015-Genova},
pages = {1--5},
organization = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Proceedings Articles
@inproceedings{7404605,
title = {Using the motion perceptibility measure to classify points of interest for visual-based AUV guidance in a reef ecosystem},
author = {Labastida-Vald\'{e}s, L. and Torres-Mendez, Luz Abril and Hutchinson, S. A.},
url = {http://ieeexplore.ieee.org/document/7404605/},
doi = {10.23919/OCEANS.2015.7404605},
year = {2015},
date = {2015-10-01},
booktitle = {OCEANS 2015 - MTS/IEEE Washington},
pages = {1-6},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{7404424,
title = {Modeling motor-perceptual behaviors to enable intuitive paths in an aquatic robot},
author = {Romero-Mart\'{i}nez, C. E. and Torres-Mendez, Luz Abril and Martinez-Garcia, Edgar A.},
url = {http://ieeexplore.ieee.org/document/7404424/},
doi = {10.23919/OCEANS.2015.7404424},
year = {2015},
date = {2015-10-01},
booktitle = {OCEANS 2015 - MTS/IEEE Washington},
pages = {1-5},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{Mireles-Delgado2015,
title = {Control Servovisual de un Kuka youBot para la manipulacion y traslado de objetos},
author = {Mirelez-Delgado, Flabio and Morales-Diaz, America B. and Rios-Cabrera, Reyes and Perez-Villeda, Hector Manuel},
url = {http://amca.mx/memorias/amca2015/articulos/0044_MiCT3-04.pdf},
year = {2015},
date = {2015-01-01},
abstract = {This work presents the implementation of an Image-Based Visual Servoing control on a Kuka youBot omnidirectional mobile manipulator. The vision system is composed of an Asus Xtion Pro RGB-D sensor. The implemented control law has the structure of a classical PD for the mobile platform. The mobile manipulator moves to known 3D points by computing the inverse kinematics. This article demonstrates the effectiveness of the algorithm in localizing the object of interest, as well as in manipulating it to take it from its original place to another desired location.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{Maldonado-Ramirez2015,
title = {Using Supercolor-Pixels Descriptors for Tracking Relevant Cues in Underwater Environments with Poor Visibility Conditions},
author = {Maldonado-Ramirez, Alejandro and Torres-Mendez, Luz Abril},
year = {2015},
date = {2015-00-00},
publisher = {ICRA 2015 Workshop on Visual Place Recognition in Changing Environments},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{Gonzalez-Garcia2015,
title = {Are You Talking to Me? Detecting Attention in First-Person Interactions},
author = {Gonz\'{a}lez-Garc\'{i}a, Luis C. and Torres-Mendez, Luz Abril and Mart\'{i}nez, Julieta and Sattar, Junaed and Little, James},
url = {https://www.researchgate.net/publication/274065286_Are_You_Talking_to_Me_Detecting_Attention_in_First-Person_Interactions},
issn = {2308-4197},
year = {2015},
date = {2015-00-00},
pages = { 137-142},
abstract = {This paper presents an approach for a mobile robot to detect the level of attention of a human in first-person interactions. Determining the degree of attention is an essential task in day-to-day interactions. In particular, we are interested in natural Human-Robot Interactions (HRI's) during which a robot needs to estimate the focus and the degree of the user's attention to determine the most appropriate moment to initiate, continue and terminate an interaction. Our approach is novel in that it uses a linear regression technique to classify raw depth-image data according to three levels of user attention on the robot (null, partial and total). This is achieved by measuring the linear independence of the input range data with respect to a dataset of user poses. We overcome the problem of time overhead that a large database can add to real-time Linear Regression Classification (LRC) methods by including only the feature vectors with the most relevant information. We demonstrate the approach by presenting experimental data from human-interaction studies with a PR2 robot. Results demonstrate our attention classifier to be accurate and robust in detecting the attention levels of human participants.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2014
Journal Articles
@article{doi:10.1108/IJIUS-04-2014-0003,
title = {Multi-legged robot dynamics navigation model with optical flow},
author = {Martinez-Garcia, Edgar A. and Torres-Mendez, Luz Abril and Elara Mohan, Rajesh },
url = {http://dx.doi.org/10.1108/IJIUS-04-2014-0003},
doi = {10.1108/IJIUS-04-2014-0003},
year = {2014},
date = {2014-01-01},
journal = {International Journal of Intelligent Unmanned Systems},
volume = {2},
number = {2},
pages = {121-139},
abstract = {Purpose \textendash The purpose of this paper is to establish analytical and numerical solutions of a navigational law to estimate displacements of hyper-static multi-legged mobile robots, which combines monocular vision (optical flow of regional invariants) and leg dynamics. Design/methodology/approach \textendash In this study the authors propose a Euler-Lagrange equation that controls the legs' joints in order to control the robot's displacements. The robot's rotational and translational velocities are fed back by motion features of visual invariant descriptors. A general analytical solution of a derivative navigation law is proposed for hyper-static robots. The feedback is formulated with the local speed rate obtained from optical flow of visual regional invariants. The proposed formulation includes a data association algorithm aimed to correlate visual invariant descriptors detected in sequential images through monocular vision. The navigation law is constrained by a set of three kinematic equilibrium conditions for navigational scenarios: constant acceleration, constant velocity, and instantaneous acceleration. Findings \textendash The proposed data association method concerns local motions of multiple invariants (enhanced MSER) by minimizing the norm of multidimensional optical flow feature vectors. Kinematic measurements are used as observable arguments in the general dynamic control equation, while the leg joint dynamics model is used to formulate the controllable arguments. Originality/value \textendash The given analysis does not combine sensor data of any kind, but only monocular passive vision. The approach automatically detects environmental invariant descriptors with an enhanced version of the MSER method. Only optical flow vectors and the robot's multi-leg dynamics are used to formulate descriptive rotational and translational motions for self-positioning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{Rios-Cabrera:2014:BMD:2583127.2583285,
title = {Boosting Masked Dominant Orientation Templates for Efficient Object Detection},
author = {Rios-Cabrera, Reyes and Tuytelaars, Tinne},
url = {http://dx.doi.org/10.1016/j.cviu.2013.12.008},
doi = {10.1016/j.cviu.2013.12.008},
issn = {1077-3142},
year = {2014},
date = {2014-01-01},
journal = {Comput. Vis. Image Underst.},
volume = {120},
pages = {103--116},
publisher = {Elsevier Science Inc.},
address = {New York, NY, USA},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Conferences
@conference{7041439,
title = {3D shape reconstruction from a humanoid generated video sequence},
author = {Martinez-Gonzalez, Pablo and Varas, David and Castelan, Mario and Camacho, Margarita and Marques, Ferran and Arechavaleta, Gustavo },
url = {http://ieeexplore.ieee.org/document/7041439/?arnumber=7041439\&tag=1},
doi = {10.1109/HUMANOIDS.2014.7041439},
issn = {2164-0572},
year = {2014},
date = {2014-11-01},
booktitle = {2014 IEEE-RAS International Conference on Humanoid Robots},
pages = {699-706},
abstract = {This paper presents a strategy for estimating the geometry of an object of interest from a monocular video sequence acquired by a walking humanoid robot. The problem is solved using a space carving algorithm, which relies on both the accurate extraction of the occluding boundaries of the object and the precise estimation of the camera pose for each video frame. For data acquisition, a monocular vision-based control has been developed that drives the trajectory of the robot around an object placed on a small table. Due to the stepping of the humanoid, the recorded sequence is contaminated with artefacts that affect the correct extraction of contours along the video frames. To overcome this issue, a method that assigns a fitness score to each frame is proposed, delivering a subset of camera poses and video frames that produce consistent 3D shape estimations of the objects used for experimental evaluation.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{rodriguez2014vision,
title = {Vision-based reactive autonomous navigation with obstacle avoidance: Towards a non-invasive and cautious exploration of marine habitat},
author = {Rodriguez-Telles, Francisco G and Perez-Alcocer, Ricardo and Maldonado-Ramirez, Alejandro and Torres-Mendez, Luz Abril and Bikram Dey, Bir and Martinez-Garcia, Edgar A.},
year = {2014},
date = {2014-01-01},
booktitle = {2014 IEEE International Conference on Robotics and Automation (ICRA)},
pages = {3813--3818},
organization = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{maldonado2014robust,
title = {Robust detection and tracking of regions of interest for autonomous underwater robotic exploration},
author = {Maldonado-Ramirez, Alejandro and Torres-Mendez, Luz Abril and Martinez-Garcia, Edgar A.},
year = {2014},
date = {2014-01-01},
booktitle = {Proc. 6th Int. Conf. on Advanced Cognitive Technologies and Applications},
pages = {165--171},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Proceedings Articles
@inproceedings{6907412,
title = {Vision-based reactive autonomous navigation with obstacle avoidance: Towards a non-invasive and cautious exploration of marine habitat},
author = {Rodr\'{i}guez-Teiles, F. G. and Perez-Alcocer, Ricardo and Maldonado-Ramirez, Alejandro and Torres-Mendez, Luz Abril and Dey, B. B. and Martinez-Garcia, Edgar A.},
url = {http://ieeexplore.ieee.org/document/6907412/},
doi = {10.1109/ICRA.2014.6907412},
issn = {1050-4729},
year = {2014},
date = {2014-05-01},
booktitle = {2014 IEEE International Conference on Robotics and Automation (ICRA)},
pages = {3813-3818},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{Castillo2014,
title = {Generacion de Movimientos Humanoides con Dinamica Inversa Jerarquica},
author = {Estopier-Castillo, Vicente and Arechavaleta, Gustavo and Olgu\'{i}n-D\'{i}az, Ernesto},
url = {http://amca.mx/memorias/amca2014/articulos/0112.pdf},
year = {2014},
date = {2014-00-00},
booktitle = {Congreso Latinoamericano de Control Autom\'{a}tico CLCA 2014},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2013
Journal Articles
@article{SanchezEscobedo2013389,
title = {3D face shape prediction from a frontal image using cylindrical coordinates and partial least squares},
author = {Sanchez-Escobedo, Dalila and Castelan, Mario},
url = {http://www.sciencedirect.com/science/article/pii/S0167865512002929},
doi = {10.1016/j.patrec.2012.09.007},
issn = {0167-8655},
year = {2013},
date = {2013-01-01},
journal = {Pattern Recognition Letters},
volume = {34},
number = {4},
pages = {389 - 399},
abstract = {This paper addresses the problem of linearly approximating 3D shape from intensities in the context of facial analysis. In other words, given a frontal pose grayscale input face, the direct estimation of its 3D structure is sought through a regression matrix. Approaches falling into this category generally assume that both 2D and 3D features are defined under Cartesian schemes, which is not optimal for the task of novel view synthesis. The current article aims to overcome this issue by exploiting the 3D structure of faces through cylindrical coordinates, aided by the partial least squares regression. In the context of facial shape analysis, partial least squares builds a set of basis faces, for both grayscale and 3D shape spaces, seeking for maximizing shared covariance between projections of the data along the basis faces. Experimental tests show how the cylindrical representations are suitable for the purposes of linear regression, resulting in a benefit for the generation of novel facial views, showing a potential use in model based face identification.},
note = {Advances in Pattern Recognition Methodology and Applications},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{LopezJuarez20135,
title = {Using Object’s Contour, Form and Depth to Embed Recognition Capability into Industrial Robots},
author = {Lopez-Juarez, Ismael and Castelan, Mario and Castro-Mart\'{i}nez, Francisco Javier and Pe\~{n}a-Cabrera, Mario and Osorio-Comparan, Roman},
url = {http://www.sciencedirect.com/science/article/pii/S1665642313715116},
doi = {10.1016/S1665-6423(13)71511-6},
issn = {1665-6423},
year = {2013},
date = {2013-01-01},
journal = {Journal of Applied Research and Technology},
volume = {11},
number = {1},
pages = {5 - 17},
abstract = {Robot vision systems can differentiate parts by pattern matching irrespective of part orientation and location. Some manufacturers offer 3D guidance systems using robust vision and laser systems, so that a 3D programmed point can be repeated even if the part is moved, varying its location, rotation and orientation within the working space. Despite these developments, current industrial robots are still unable to recognize objects in a robust manner; that is, to distinguish an object among equally shaped objects taking into account not only the object’s contour but also its form and depth information, which is precisely the major contribution of this research. Our hypothesis establishes that it is possible to integrate a robust invariant object recognition capability into industrial robots by using image features from the object’s contour (boundary object information), its form (i.e., type of curvature or topographical surface information) and depth information (from stereo disparity maps). These features can be concatenated to form an invariant vector descriptor, which is the input to an artificial neural network (ANN) for learning and recognition purposes. In this paper we present the recognition results under different working conditions using a KUKA KR16 industrial robot, which validated our approach.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@article{RIVEROJUAREZ20131552,
title = {3D Heterogeneous Multi-sensor Global Registration},
author = {Rivero-Juarez, Joaquin and Martinez-Garcia, Edgar A. and Torres-Mendez, Luz Abril and Elara Mohan, Rajesh},
url = {http://www.sciencedirect.com/science/article/pii/S1877705813017517},
doi = {10.1016/j.proeng.2013.09.237},
issn = {1877-7058},
year = {2013},
date = {2013-01-01},
journal = {Procedia Engineering},
volume = {64},
pages = {1552 - 1561},
abstract = {This manuscript presents a deterministic model to register heterogeneous 3D data arising from a ring of eight ultrasonic sonars, one high-density LiDAR (light detection and ranging), and a semi-ring of three visual sensors. The three visual sensors are arranged in a cylindrical ring, and although they provide 2D colour images, a radial multi-stereo geometric model is proposed to yield 3D data. All deployed sensors are geometrically placed on board a wheeled mobile robot platform, and data registration is carried out while navigating indoors. The sensor devices are coordinated and synchronized by a home-made distributed sensor suite system. A deterministic mathematical formulation for data registration is used to obtain experimental and numerical results on global mapping. Data registration relies on a geometric model to compute depth information from a semi-circular trinocular stereo sensor, which is proposed to rectify and calibrate three image frames with different orientations and positions but the same projection point.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Proceedings Articles
@inproceedings{Rios-Cabrera_2013_ICCV__B,
title = {Discriminatively Trained Templates for 3D Object Detection: A Real Time Scalable Approach},
author = {Rios-Cabrera, Reyes and Tuytelaars, Tinne},
year = {2013},
date = {2013-12-01},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}