This is a series of works that use knowledge-base representations to associate objects with their affordances, specifically for grasping purposes.
This is a dataset for visual grasp affordance prediction that promotes more robust and heterogeneous robotic grasping methods.
The dataset contains attributes for 30 different objects. Each object instance is associated not only with semantic descriptions,
but also with physical features describing its visual attributes, the locations where it is typically found, and the grasping regions related to a variety of actions.
We adopt the grasping ground-truth labels from the Robot Learning Lab (RLL) and use their
grasping regions [red rectangles in example Figure (a)] to label objects contained in the RLL and Washington RGB-D databases. We use these adapted regions as grasp affordance regions corresponding to different actions on an object.
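To make the annotation concrete, below is a minimal sketch of how one object instance could be stored; the field names and values are illustrative assumptions, not the dataset's actual schema.

```python
# Illustrative sketch of one annotated object instance (field names and
# values are assumptions for exposition, not the dataset's actual format).
mug_instance = {
    "object": "mug",
    "semantic_description": "ceramic cup with a handle, used for hot drinks",
    "visual_attributes": ["cylindrical", "has_handle", "rigid"],
    "locations": ["kitchen", "office"],
    "grasp_regions": [
        # Rectangles adapted from the RLL ground truth, each tied to the
        # actions it affords on this object.
        {"rect": (112, 80, 60, 35), "affords": ["pour", "hand_over"]},   # body
        {"rect": (175, 95, 20, 40), "affords": ["drink", "hand_over"]},  # handle
    ],
}
```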
For the features and labels of indoor locations, we use the AI2Thor and
MIT Indoor Scenes datasets to train our model.
To determine which of the stable grasping regions affords which action, and in which context, we conducted a survey on the Figure Eight platform
to assign an affordance label to each region [shown as numbers in the rectangles of Figure (b)].
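Conceptually, the survey output can be read as a mapping from a grasp region and its context to an affordance label; the entries below are hypothetical examples rather than actual survey results.

```python
# Hypothetical survey-derived labels: the same grasp region can afford a
# different action depending on the location (context) of the object.
region_affordances = {
    # (object, region_id, location) -> affordance label
    ("knife", 1, "kitchen"):  "cut",
    ("knife", 1, "workshop"): "carve",
    ("knife", 2, "kitchen"):  "hand_over",  # grasp near the blade to offer the handle
}
```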
We present a pipeline for self-assessment of grasp affordance transfer (SAGAT) based on prior experiences. We visually detect a grasp affordance region and extract multiple candidate grasp affordance configurations from it.
Using these candidates, we forward-simulate the outcome of executing the affordance task to analyse the relation between task outcome and grasp candidate.
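A minimal sketch of this candidate-evaluation loop is given below; the detector and simulator interfaces (`detect_affordance_region`, `sample_grasp_configurations`, `rollout`) are hypothetical placeholders for the components described above.

```python
# Sketch of ranking grasp candidates by forward-simulated task outcome
# (hypothetical interfaces; not the actual SAGAT implementation).

def rank_grasp_candidates(image, task, detector, simulator, n_candidates=10):
    """Detect a grasp affordance region, sample candidate grasp
    configurations inside it, and rank them by simulated task outcome."""
    region = detector.detect_affordance_region(image, task)
    candidates = detector.sample_grasp_configurations(region, n_candidates)

    scored = []
    for grasp in candidates:
        # Forward-simulate executing the task from this grasp and score
        # how well the simulated outcome matches the task goal.
        outcome = simulator.rollout(grasp, task)
        scored.append((outcome.task_success_score, grasp))

    # The best-scoring candidate is the one transferred for execution.
    scored.sort(key=lambda s: s[0], reverse=True)
    return scored
```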
We propose a heuristic-guided, hierarchically optimised cost function whose optimisation adapts the object configuration to receivers with low arm mobility.
This also ensures that the robot's grasp considers the context of the user's upcoming task, i.e., how the object will be used.
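As an illustration of the idea, the sketch below scores an object configuration with a hierarchy of weighted terms, where higher-priority terms dominate lower-priority ones; the term names, weights, and interfaces are assumptions, not the cost actually used in the work.

```python
# Illustrative hierarchically weighted handover cost (term names, weights,
# and interfaces are assumptions made for exposition).

def handover_cost(config, receiver, task, weights=(100.0, 10.0, 1.0)):
    """Score an object configuration offered to the receiver; lower is better.

    Higher-priority terms get much larger weights, so lower-priority terms
    only break ties among configurations that already satisfy them."""
    w_reach, w_orient, w_task = weights

    # 1) Reachability: penalise configurations outside the receiver's
    #    (possibly reduced) arm workspace.
    c_reach = receiver.reach_penalty(config)

    # 2) Orientation: prefer presenting the object so the receiver can
    #    take it with a comfortable wrist pose.
    c_orient = receiver.orientation_penalty(config)

    # 3) Task context: keep the part the user needs for the upcoming task
    #    (e.g. a handle for pouring) free of the robot's grasp.
    c_task = task.grasp_conflict_penalty(config)

    return w_reach * c_reach + w_orient * c_orient + w_task * c_task
```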