Articulated Object Tracking from Visual Sensory Data for Robotic Manipulation

Name
Anastasia Bolotnikova
Abstract
In order for a robot to manipulate an articulated object, it needs to know the object's state (i.e. its pose); that is to say, where it is and in which configuration. The estimated state is provided as feedback to the controller, which computes the appropriate robot motion to achieve the desired manipulation outcome. This is the main topic of this thesis, in which articulated object state estimation is solved using visual feedback. Vision-based servoing is implemented in a Quadratic Programming task-space control framework to enable a humanoid robot to perform articulated object manipulation. On this basis, we develop a thorough methodology for vision-based articulated object state estimation.
We demonstrate its effectiveness in several real-world experiments involving the HRP-4 humanoid robot. We also propose combining machine learning and edge extraction techniques to achieve markerless, real-time, and robust visual feedback for articulated object manipulation.
Graduation Thesis language
English
Graduation Thesis type
Master - Computer Science
Supervisor(s)
Gholamreza Anbarjafari, Abderrahmane Kheddar
Defence year
2017