This is a series of works that use knowledge base representations to associate objects with their affordances, specifically for grasping purposes.
This is a dataset for visual grasp affordance prediction that promotes more robust and heterogeneous robotic grasping methods. The dataset contains different attributes for 30 different objects. Each object instance is associated not only with semantic descriptions, but also with physical features describing visual attributes, locations, and different grasping regions related to a variety of actions.
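The per-object structure described above (semantic description, visual attributes, location, and action-labelled grasping regions) could be represented roughly as follows. This is a hypothetical sketch: the class and field names (`ObjectInstance`, `GraspRegion`, `bbox`, etc.) are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class GraspRegion:
    # Axis-aligned rectangle in image coordinates: (x, y, width, height).
    bbox: tuple
    # Action this grasping region affords, e.g. "pour" or "handover".
    action: str

@dataclass
class ObjectInstance:
    name: str         # semantic description, e.g. "mug"
    attributes: dict  # visual attributes such as material or colour
    location: str     # typical location context, e.g. "kitchen"
    grasp_regions: list = field(default_factory=list)

# Illustrative instance (values are made up for the example).
mug = ObjectInstance(
    name="mug",
    attributes={"material": "ceramic", "colour": "white"},
    location="kitchen",
    grasp_regions=[
        GraspRegion(bbox=(120, 40, 60, 30), action="pour"),
        GraspRegion(bbox=(90, 80, 40, 40), action="handover"),
    ],
)
```

Keeping the action label attached to each region, rather than to the object as a whole, mirrors the idea that different grasping regions of one object afford different actions.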
We adopt the grasping ground-truth labels from the Robot Learning Lab (RLL) and use their grasping regions [red rectangles in example Figure (a)] to label objects contained in the RLL and Washington RGB-D databases. We use these adaptations as grasp affordance regions corresponding to different actions on an object.
To determine which of the stable grasping regions affords which action, and under which context, we conducted a survey on the Figure Eight platform to assign an affordance label to each region [shown as numbers in the rectangles of Figure (b)].
We present a pipeline for self-assessment of grasp affordance transfer (SAGAT) based on prior experiences. We visually detect a grasp affordance region and extract multiple grasp affordance configuration candidates. Using these candidates, we forward-simulate the outcome of executing the affordance task to analyse the relation between task outcome and grasp candidates.
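The detect-sample-simulate-select loop described above can be sketched as below. Everything here is a toy stand-in under stated assumptions: the detector, the candidate sampler, and the forward-simulation scoring are placeholders (the real pipeline would use a learned detector and a physics simulator), and all function names are hypothetical.

```python
import random

def detect_affordance_region(image):
    # Placeholder for visual grasp affordance detection:
    # returns a bounding box from which candidates are sampled.
    return {"bbox": (100, 50, 60, 40)}

def sample_grasp_candidates(region, n=5):
    # Sample n grasp configurations (x, y, wrist angle) inside the region.
    x, y, w, h = region["bbox"]
    rng = random.Random(0)
    return [(x + rng.uniform(0, w), y + rng.uniform(0, h),
             rng.uniform(-3.14, 3.14)) for _ in range(n)]

def forward_simulate(candidate, task):
    # Stand-in for forward-simulating the affordance task with this grasp;
    # returns a scalar task-outcome score (toy: prefer small wrist rotation).
    _x, _y, theta = candidate
    return 1.0 / (1.0 + abs(theta))

def select_grasp(image, task):
    # Relate simulated task outcomes to grasp candidates and keep the best.
    region = detect_affordance_region(image)
    candidates = sample_grasp_candidates(region)
    scores = [forward_simulate(c, task) for c in candidates]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best], scores[best]

best_grasp, best_score = select_grasp(image=None, task="pour")
```

The point of the structure is that candidate quality is judged by the simulated *task* outcome, not by grasp stability alone, which is what makes the self-assessment transferable across affordance tasks.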