Title | Authors | Perception | Learning input | Affordance type | Method | Evaluation | Task | Platform |
A model-based approach to finding substitute tools in 3d vision data | P. Abelha, F. Guerin, and M. Schoeler | visual | image labels | compound | probabilistic | quantitative | action prediction | Simulation-visual input |
Learning the semantics of object–action relations by observation | E. E. Aksoy, A. Abramov, J. Dorr, K. Ning, B. Dellen, and F. Worgotter | visual | image labels and demonstration | compound | probabilistic | quantitative | action prediction | Simulation-visual input |
Model-free incremental learning of the semantics of manipulation actions | E. E. Aksoy, M. Tamosiunaite, and F. Worgotter | visual | image labels and demonstration | compound | probabilistic | quantitative | action prediction | Simulation-visual input |
Supervised learning of hidden and non-hidden 0-order affordances and detection in real scenes | A. Aldoma, F. Tombari, and M. Vincze | visual | image labels | compound | probabilistic | quantitative | action prediction | Simulation-visual input |
From human instructions to robot actions: Formulation of goals, affordances and probabilistic planning | A. Antunes, L. Jamone, G. Saponaro, A. Bernardino, and R. Ventura | visual | image labels and exploration | primitive | probabilistic and planning | qualitative and quantitative | manipulation | Real robot-iCub |
Learning grasp affordance reasoning through semantic relations | P. Ardon, E. Pairet, R. P. A. Petrick, S. Ramamoorthy, and K. S. Lohan | visual | image labels | compound | probabilistic | qualitative and quantitative | manipulation | Real robot-PR2 |
Self-assessment of grasp affordance transfer | P. Ardon, E. Pairet, Y. Petillot, R. P. Petrick, S. Ramamoorthy, and K. S. Lohan | visual and kinesthetic | image labels and demonstration | compound | deterministic and heuristic | quantitative | manipulation | Real robot-PR2 |
On exploiting haptic cues for self-supervised learning of depth-based robot navigation affordances | J. Baleia, P. Santana, and J. Barata | visual and tactile | image labels and exploration | primitive | deterministic and heuristic | quantitative | navigation | Real robot-Their design |
Learning grasping affordance using probabilistic and ontological approaches | C. Barck-Holst, M. Ralph, F. Holmar, and D. Kragic | visual | image labels | primitive | deterministic and probabilistic | qualitative and quantitative | manipulation | Simulated robot-Barret hand |
Predicting slippage and learning manipulation affordances through Gaussian process regression | Y. Bekiroglu, C. Smith, Y. Karayiannidis, D. Kragic | visual, proprioception and kinesthetic | image labels and exploration | primitive | probabilistic and heuristic | quantitative | manipulation | Real robot-ATI Mini45 |
Grasp affordances from multi-fingered tactile exploration using dynamic potential fields | A. Bierbaum, M. Rambow, T. Asfour, and R. Dillmann | visual and tactile | image labels and exploration | primitive | deterministic and heuristic | qualitative and quantitative | manipulation | Real robot-FRH-4 |
Learning grasping points with shape context | J. Bohg and D. Kragic | visual | image labels | primitive | deterministic and heuristic | quantitative | manipulation | Real robot-Kuka arm |
Learning to grasp and extract affordances: the integrated learning of grasps and affordances (ilga) model | J. Bonaiuto and M. A. Arbib | visual and proprioception | image labels and exploration | primitive | deterministic and heuristic | qualitative and quantitative | manipulation | Simulation-Their own hand design |
Continuous modeling of affordances in a symbolic knowledge base | A. K. Bozcuoglu, Y. Furuta, K. Okada, M. Beetz, and M. Inaba | visual and proprioception | image labels and exploration | compound | probabilistic | quantitative | manipulation | Real robot-PR2 |
Se3-nets: Learning rigid body motion using deep neural networks | A. Byravan and D. Fox | visual | image labels | primitive | probabilistic | quantitative | manipulation | Real robot-Baxter |
Metagrasp: Data efficient grasping by affordance interpreter network | J. Cai, H. Cheng, Z. Zhang, and J. Su | visual | image labels | primitive | probabilistic | quantitative | manipulation | Real robot-UR5 |
Using object affordances to improve object recognition | C. Castellini, T. Tommasi, N. Noceti, F. Odone, and B. Caputo | visual and tactile | image labels and demonstration | primitive | probabilistic | quantitative | manipulation | Real-CyberGlove |
Determining proper grasp configurations for handovers through observation of object movement patterns and inter-object interactions during usage | W. P. Chan, Y. Kakiuchi, K. Okada, and M. Inaba | visual | image labels and demonstration | compound | deterministic | quantitative | manipulation | Real robot-HRP-2 |
An affordance and distance minimization based method for computing object orientations for robot human handovers | W. P. Chan, M. K. Pan, E. A. Croft, and M. Inaba | visual | image labels | compound | deterministic, probabilistic and heuristic | quantitative | manipulation | Real robot-HRP-2 |
Predicting part affordances of objects using two-stream fully convolutional network with multimodal inputs | K. Chaudhary, K. Okada, M. Inaba, and X. Chen | visual | image labels | compound | probabilistic | quantitative | manipulation | Real robot-HRP-2 |
Learning affordance segmentation for real-world robotic manipulation via synthetic images | F.-J. Chu, R. Xu, and P. A. Vela | visual | image labels | compound | probabilistic | qualitative and quantitative | action prediction | Simulation-visual input |
Analyzing differences between teachers when learning object affordances via guided exploration | V. Chu and A. L. Thomaz | visual and kinesthetic | image labels and demonstration | primitive | deterministic | qualitative and quantitative | manipulation | Real robot-Curi |
Learning object affordances by leveraging the combination of human-guidance and self-exploration | V. Chu, T. Fitzgerald, and A. L. Thomaz | visual and kinesthetic | image labels and demonstration | primitive | probabilistic | quantitative | manipulation | Real robot-Curi |
Real-time multisensory affordance-based control for adaptive object manipulation | V. Chu, R. A. Gutierrez, S. Chernova, and A. L. Thomaz | visual and proprioception | image labels and exploration | compound | probabilistic | quantitative | manipulation | Real robot-Kinova |
Ganhand: Predicting human grasp affordances in multi-object scenes | E. Corona, A. Pumarola, G. Alenya, F. Moreno-Noguer, and G. Rogez | visual | image labels | compound | probabilistic | quantitative | manipulation | Simulation-visual input |
Using a sofm to learn object affordances | I. Cos-Aguilera, G. Hayes, and L. Canamero | visual | image labels and exploration | primitive | deterministic and heuristic | quantitative | navigation | Simulated robot-Khepera |
Training agents with interactive reinforcement learning and contextual affordances | F. Cruz, S. Magg, C. Weber, and S. Wermter | visual | image labels and exploration | compound | deterministic and heuristic | quantitative | manipulation | Real robot-iCub |
Multi-modal feedback for affordance-driven interactive reinforcement learning | F. Cruz, G. I. Parisi, and S. Wermter | visual | image labels and exploration | compound | probabilistic | quantitative | manipulation | Real robot-iCub |
A cognitive control architecture for the perception–action cycle in robots and agents | V. Cutsuridis and J. G. Taylor | visual and proprioception | image labels and demonstration | primitive | deterministic | qualitative | manipulation | Simulation-visual input |
Learning affordances for categorizing objects and their properties | N. Dag, I. Atil, S. Kalkan, and E. Sahin | visual | image labels | primitive | deterministic | quantitative | action prediction | Simulation-visual input |
Learning grasp affordances through human demonstration | C. de Granville, J. Southerland, and A. H. Fagg | visual and tactile | image labels and demonstration | primitive | deterministic | quantitative | manipulation | Real robot-P5 glove |
Denoising auto-encoders for learning of objects and tools affordances in continuous space | A. Dehban, L. Jamone, A. R. Kampff, and J. Santos-Victor | visual | image labels and exploration | primitive | probabilistic | quantitative | manipulation | Real robot-iCub |
Learning grasp affordance densities | R. Detry, D. Kraft, O. Kroemer, L. Bodenhagen, J. Peters, N. Kruger | visual | image labels | primitive | probabilistic | quantitative | manipulation | Real robot-Staubli |
Deformable-medium affordances for interacting with multi-robot systems | M. Diana, J.-P. de la Croix, and M. Egerstedt | visual | image labels | primitive | deterministic | qualitative and quantitative | navigation | Real robot-Khepera III |
Affordancenet: An end-to-end deep learning approach for object affordance detection | T.-T. Do, A. Nguyen, and I. Reid | visual | image labels | compound | probabilistic | qualitative and quantitative | manipulation | Real robot-WALK-MAN |
From primitive behaviors to goal-directed behavior using affordances | M. R. Dogar, M. Cakmak, E. Ugur, and E. Sahin | visual | image labels and exploration | primitive | deterministic and planning | quantitative | navigation | Real robot-Kurt-2 |
Using learned affordances for robotic behavior development | M. R. Dogar, E. Ugur, E. Sahin, and M. Cakmak | visual | image labels and exploration | primitive | probabilistic and heuristic | quantitative | navigation | Real robot-Kurt-3 |
Predicting human actions taking into account object affordances | V. Dutta and T. Zielinska | visual | image labels | compound | probabilistic | quantitative | action prediction | Simulation-visual input |
Learning probabilistic discriminative models of grasp affordances under limited supervision | A. N. Erkan, O. Kroemer, R. Detry, Y. Altun, J. Piater, and J. Peters | visual | image labels | primitive | probabilistic and heuristic | quantitative | manipulation | Real robot-Barret hand |
An architecture for online affordance-based perception and whole-body planning | M. Fallon, S. Kuindersma, S. Karumanchi, M. Antone, T. Schneider, H. Dai, C. P. D’Arpino, R. Deits, M. DiCicco, D. Fourie | visual | image labels | compound | deterministic | qualitative and quantitative | manipulation and navigation | Real robot-Atlas |
Demo2vec: Reasoning object affordances from online videos | K. Fang, T.-L. Wu, D. Yang, S. Savarese, and J. J. Lim | visual | image labels and demonstration | compound | probabilistic | quantitative | action prediction | Simulation-visual input |
Learning about objects through action - initial steps towards artificial cognition | P. Fitzpatrick, G. Metta, L. Natale, S. Rao, and G. Sandini | visual | image labels and exploration | primitive | deterministic and heuristic | quantitative | manipulation | Real robot-BabyBot & Cog |
Learning predictive features in affordance based robotic perception systems | G. Fritz, L. Paletta, R. Breithaupt, E. Rome, and G. Dorffner | visual | image labels | primitive | deterministic | quantitative | manipulation | Simulated robot-Kurt2 |
Object recognition using visuo-affordance maps | A. Gijsberts, T. Tommasi, G. Metta, and B. Caputo | visual and tactile | image labels and demonstration | primitive | probabilistic | quantitative | manipulation | Real-CyberGlove |
Learning intermediate object affordances: Towards the development of a tool concept | A. Goncalves, J. Abrantes, G. Saponaro, L. Jamone, and A. Bernardino | visual | image labels and exploration | primitive | probabilistic | quantitative | manipulation | Real robot-iCub |
A behavior-grounded approach to forming object categories: Separating containers from noncontainers | S. Griffith, J. Sinapov, V. Sukhoy, and A. Stoytchev | visual | image labels and exploration | compound | probabilistic | qualitative and quantitative | manipulation | Real robot-WAM |
The affordance template ros package for robot task programming | S. Hart, P. Dinh, and K. A. Hambuchen | visual | image labels | primitive | deterministic | qualitative | manipulation | Real robot-Valkyrie |
Affordance prediction via learned object attributes | T. Hermans, J. M. Rehg, and A. Bobick | visual | image labels and exploration | primitive | probabilistic | quantitative | navigation | Real robot-Pioneer 3 DX |
Learning contact locations for pushing and orienting unknown objects | T. Hermans, F. Li, J. M. Rehg, and A. F. Bobick | visual | image labels and exploration | primitive | probabilistic and heuristic | qualitative and quantitative | manipulation | Real robot-PR2 |
Decoupling behavior, perception, and control for autonomous learning of affordance | T. Hermans, J. M. Rehg, and A. F. Bobick | visual | image labels and exploration | primitive | probabilistic and heuristic | qualitative and quantitative | manipulation | Real robot-PR2 |
Perception and human interaction for developmental learning of objects and affordances | S. Ivaldi, N. Lyubova, D. Gerardeaux-Viret, A. Droniou, S. M. Anzalone, M. Chetouani, D. Filliat, and O. Sigaud | visual | image labels and demonstration | primitive | deterministic and probabilistic | qualitative | manipulation | Real robot-iCub |
Hallucinated humans as the hidden context for labeling 3d scenes | Y. Jiang, H. Koppula, and A. Saxena | visual | image labels and demonstration | compound | deterministic | quantitative | action prediction | Simulation-visual input |
Autonomous detection and experimental validation of affordances | P. Kaiser and T. Asfour | visual and tactile | image labels and exploration | compound | probabilistic, heuristic and planning | quantitative | manipulation and navigation | Real robot-Armar III |
Extracting whole-body affordances from multimodal exploration | P. Kaiser, D. Gonzalez-Aguirre, F. Schultje, J. Borras, N. Vahrenkamp, and T. Asfour | visual | image labels and exploration | primitive | probabilistic and heuristic | qualitative | manipulation and navigation | Simulated robot-Armar III |
Validation of whole-body loco-manipulation affordances for pushability and liftability | P. Kaiser, M. Grotz, E. E. Aksoy, M. Do, N. Vahrenkamp, and T. Asfour | visual | image labels and exploration | primitive | probabilistic and heuristic | qualitative and quantitative | manipulation and navigation | Real robot-Armar III |
Towards a hierarchy of loco-manipulation affordances | P. Kaiser, E. E. Aksoy, M. Grotz, and T. Asfour | visual | image labels and exploration | primitive | probabilistic, heuristic and planning | qualitative | manipulation and navigation | Real robot-Armar III |
Affordance-based multi-contact whole-body pose sequence planning for humanoid robots in unknown environments | P. Kaiser, C. Mandery, A. Boltres, and T. Asfour | visual | image labels and exploration | primitive | probabilistic, heuristic and planning | qualitative and quantitative | manipulation and navigation | Real robot-Armar III |
Interactive open-ended object, affordance and grasp learning for robotic manipulation | S. H. Kasaei, N. Shafii, L. S. Lopes, and A. M. Tome | visual and kinesthetic | image labels and demonstration | primitive | probabilistic | qualitative and quantitative | manipulation | Real robot-Kinova |
Perceiving, learning, and exploiting object affordances for autonomous pile manipulation | D. Katz, A. Venkatraman, M. Kazemi, J. A. Bagnell, and A. Stentz | visual and proprioception | image labels and exploration | primitive | probabilistic and heuristic | qualitative and quantitative | manipulation | Real robot-Barrett |
Semantic labeling of 3d point clouds with object affordance for robot manipulation | D. I. Kim and G. S. Sukhatme | visual | image labels | primitive | probabilistic | quantitative | action prediction | Real robot-PR2 |
Interactive affordance map building for a robotic task | D. I. Kim and G. S. Sukhatme | visual | image labels and exploration | primitive | probabilistic | quantitative | manipulation | Simulated robot-PR2 |
Visual object-action recognition: Inferring object affordances from human demonstration | H. Kjellstrom, J. Romero, and D. Kragic | visual | image labels and demonstration | compound | probabilistic | quantitative | manipulation | Simulation-visual input |
Physically grounded spatio-temporal object affordances | H. S. Koppula and A. Saxena | visual | image labels and demonstration | compound | probabilistic | quantitative | action prediction | Simulation-visual input |
Anticipating human activities using object affordances for reactive robotic response | H. S. Koppula and A. Saxena | visual | image labels and demonstration | compound | probabilistic and planning | quantitative | action prediction | Real robot-PR2 |
Anticipatory planning for human-robot teams | H. S. Koppula, A. Jain, and A. Saxena | visual | image labels and demonstration | compound | probabilistic and planning | qualitative and quantitative | action prediction | Real robot-PR2 |
Learning human activities and object affordances from rgb-d videos | H. S. Koppula, R. Gupta, and A. Saxena | visual | image labels and demonstration | primitive | probabilistic | qualitative and quantitative | action prediction | Real robot-PR2 |
Collision risk assessment for autonomous robots by offline traversability learning | I. Kostavelis, L. Nalpantidis, and A. Gasteratos | visual | image labels and exploration | primitive | probabilistic | quantitative | navigation | Real robot-Kurt 2 |
Learning objects and grasp affordances through autonomous exploration | D. Kraft, R. Detry, N. Pugeault, E. Baseski, J. Piater, and N. Kruger | visual | image labels and exploration | primitive | probabilistic | quantitative | manipulation | Real robot-Staubli |
A flexible hybrid framework for modeling complex manipulation tasks | O. Kroemer and J. Peters | visual and kinesthetic | image labels and demonstration | primitive | deterministic and planning | quantitative | manipulation | Real robot-PA-10 |
A kernel-based approach to direct action perception | O. Kroemer, E. Ugur, E. Oztop, and J. Peters | visual and kinesthetic | image labels and demonstration | compound | probabilistic and heuristic | qualitative and quantitative | manipulation | Real robot-Gifu hand |
Exercising affordances of objects: A part-based approach | S. R. Lakani, A. J. Rodriguez-Sanchez, and J. Piater | visual | image labels | compound | probabilistic | qualitative and quantitative | manipulation | Real robot-Kuka arm |
Towards affordance detection for robot manipulation using affordance for parts and parts for affordance | S. R. Lakani, A. J. Rodriguez-Sanchez, and J. Piater | visual | image labels | compound | probabilistic | qualitative and quantitative | manipulation | Real robot-Kuka arm |
Foot placement selection using non-geometric visual properties | M. A. Lewis, H.-K. Lee, and A. Patla | visual | image labels | primitive | deterministic and probabilistic | quantitative | navigation | Real robot-Their design |
Learning to grasp familiar objects based on experience and objects' shape affordance | C. Liu, B. Fang, F. Sun, X. Li, and W. Huang | visual and proprioception | image labels and demonstration | primitive | probabilistic | quantitative | manipulation | Real robot-BarretHand |
Physical primitive decomposition | Z. Liu, W. T. Freeman, J. B. Tenenbaum, and J. Wu | visual | image labels | primitive | probabilistic | qualitative and quantitative | action prediction | Simulation-visual input |
Affordance-based imitation learning in robots | M. Lopes, F. S. Melo, and L. Montesano | visual | image labels, demonstration and exploration | primitive | probabilistic | quantitative | manipulation | Real robot-Baltazar |
Learning to segment affordances | T. Luddecke and F. Worgotter | visual | image labels | compound | probabilistic | qualitative and quantitative | action prediction | Simulation-visual input |
Context-based affordance segmentation from 2d images for robot actions | T. Luddecke, T. Kulvicius, and F. Worgotter | visual | image labels | primitive | probabilistic | qualitative and quantitative | manipulation | Real robot-Kuka arm |
Multi-model approach based on 3d functional features for tool affordance learning in robotics | T. Mar, V. Tikhanoff, G. Metta, and L. Natale | visual | image labels and demonstration | primitive | probabilistic | quantitative | manipulation | Simulation-visual input |
Self-supervised learning of tool affordances from 3d tool representation through parallel som mapping | T. Mar, V. Tikhanoff, G. Metta, and L. Natale | visual | image labels | primitive | probabilistic | qualitative and quantitative | manipulation | Simulation-visual input |
What can I do with this tool? Self-supervised learning of tool affordances from their 3-d geometry | T. Mar, V. Tikhanoff, and L. Natale | visual | image labels | primitive | probabilistic | qualitative and quantitative | manipulation | Simulation-iCub robot |
Director: A user interface designed for robot operation with shared autonomy | P. Marion, M. Fallon, R. Deits, A. Valenzuela, C. Perez D'Arpino, G. Izatt, L. Manuelli, M. Antone, H. Dai, T. Koolen | visual | image labels | compound | deterministic and planning | qualitative | manipulation and navigation | Real robot-Atlas |
Occluded object search by relational affordances | B. Moldovan and L. De Raedt | visual | image labels | compound | probabilistic | qualitative and quantitative | action prediction | Simulation-visual input |
Learning relational affordance models for robots in multi-object manipulation tasks | B. Moldovan, P. Moreno, M. van Otterlo, J. Santos-Victor, and L. De Raedt | visual | image labels | primitive | probabilistic and heuristic | quantitative | manipulation | Real robot-iCub |
Relational affordances for multiple-object manipulation | B. Moldovan, P. Moreno, D. Nitti, J. Santos-Victor, and L. De Raedt | visual | image labels | primitive | probabilistic | qualitative and quantitative | manipulation | Real robot-iCub |
Learning grasping affordances from local visual descriptors | L. Montesano and M. Lopes | visual | image labels and demonstration | primitive | deterministic | qualitative and quantitative | manipulation | Real robot-Baltazar |
Affordances, development and imitation | L. Montesano, M. Lopes, A. Bernardino, and J. Santos-Victor | visual and proprioception | image labels and demonstration | primitive | probabilistic | quantitative | manipulation | Real robot-Baltazar |
Modeling affordances using bayesian networks | L. Montesano, M. Lopes, A. Bernardino, and J. Santos-Victor | visual | image labels and exploration | primitive | probabilistic | quantitative | manipulation | Real robot-Baltazar |
Learning object affordances: from sensory–motor coordination to imitation | L. Montesano, M. Lopes, A. Bernardino, and J. Santos-Victor | visual and tactile | image labels and exploration | primitive | probabilistic | quantitative | manipulation | Real robot-Baltazar |
Affordance detection of tool parts from geometric features | A. Myers, C. L. Teo, C. Fermuller, and Y. Aloimonos | visual | image labels | compound | probabilistic | quantitative | action prediction | Simulation-visual input |
Ego-topo: Environment affordances from egocentric video | T. Nagarajan, Y. Li, C. Feichtenhofer, and K. Grauman | visual | image labels and demonstration | compound | probabilistic | quantitative | action prediction | Simulation-visual input |
Detecting object affordances with convolutional neural networks | A. Nguyen, D. Kanoulas, D. G. Caldwell, and N. G. Tsagarakis | visual | image labels | compound | probabilistic | qualitative and quantitative | action prediction | Real robot-WALK-MAN |
Object-based affordances detection with convolutional neural networks and dense conditional random fields | A. Nguyen, D. Kanoulas, D. G. Caldwell, and N. G. Tsagarakis | visual | image labels | compound | probabilistic | qualitative and quantitative | manipulation | Real robot-WALK-MAN |
Speeding up affordance learning for tool use, using proprioceptive and kinesthetic inputs | K. N. Nguyen, J. Yoo, and Y. Choe | visual, proprioception and kinesthetic | image labels, demonstration and exploration | primitive | probabilistic and heuristic | qualitative and quantitative | manipulation | Simulation-visual input |
Modeling tool-body assimilation using second-order recurrent neural network | S. Nishide, T. Nakagawa, T. Ogata, J. Tani, T. Takahashi, and H. G. Okuno | visual and proprioception | image labels and exploration | primitive | probabilistic | quantitative | manipulation | Real robot-HRP-2 |
Autonomous acquisition of pushing actions to support object grasping with a humanoid robot | D. Omrcen, C. Boge, T. Asfour, A. Ude, and R. Dillmann | visual and kinesthetic | image labels and exploration | primitive | deterministic and heuristic | qualitative and quantitative | manipulation | Real robot-Armar III |
Affordance graph: A framework to encode perspective taking and effort based affordances for day-to-day human-robot interaction | A. K. Pandey and R. Alami | visual | image labels and demonstration | compound | deterministic, heuristic and planning | quantitative | action prediction | Real robot-PR2 |
Recognizing object affordances in terms of spatio-temporal object-object relationships | A. Pieropan, C. H. Ek, and H. Kjellstrom | visual | image labels and demonstration | compound | probabilistic and heuristic | quantitative | action prediction | Simulation-visual input |
Affordance feasible planning with manipulator wrench spaces | A. Price, S. Balakirsky, A. Bobick, and H. Christensen | visual and proprioception | image labels | primitive | probabilistic and planning | quantitative | manipulation | Real robot-KR5 |
Grasp pose detection with affordance-based task constraint learning in single-view point clouds | K. Qian, X. Jing, Y. Duan, B. Zhou, F. Fang, J. Xia, and X. Ma | visual | image labels | compound | probabilistic | qualitative and quantitative | action prediction | Real robot-UR5 |
Action-grounded push affordance bootstrapping of unknown objects | B. Ridge and A. Ude | visual | image labels and demonstration | primitive | probabilistic | qualitative and quantitative | manipulation | Simulation-visual input |
Where can I do this? Geometric affordances from a single example with the interaction tensor | E. Ruiz and W. Mayol-Cuevas | visual | image labels | compound | probabilistic | qualitative and quantitative | action prediction | Real robot-PR2 |
Dynamic density topological structure generation for real-time ladder affordance detection | A. A. Saputra, W. H. Chin, Y. Toda, N. Takesue, and N. Kubota | visual | image labels | compound | probabilistic | quantitative | manipulation and navigation | Real robot-Their design |
Weakly supervised affordance detection | J. Sawatzky, A. Srikantha, and J. Gall | visual | image labels | compound | probabilistic | qualitative and quantitative | action prediction | Simulation-visual input |
Robobrain: Large-scale knowledge engine for robots | A. Saxena, A. Jain, O. Sener, A. Jami, D. K. Misra, and H. S. Koppula | visual and proprioception | image labels and exploration | compound | probabilistic | quantitative | manipulation | Real robot-PR2 |
Deep effect trajectory prediction in robot manipulation | M. Y. Seker, A. E. Tekden, and E. Ugur | visual and proprioception | image labels and exploration | primitive | probabilistic and heuristic | qualitative and quantitative | manipulation | Real robot-UR10 |
Learning social affordance for human-robot interaction | T. Shu, M. S. Ryoo, and S.-C. Zhu | visual | image labels and demonstration | compound | probabilistic | qualitative and quantitative | action prediction | Real robot-Baxter |
Learning and generalization of behavior-grounded tool affordances | J. Sinapov and A. Stoytchev | visual | image labels and exploration | primitive | probabilistic and heuristic | quantitative | manipulation | Real robot-CRS+ A251 |
Learning task constraints for robot grasping using graphical models | D. Song, K. Huebner, V. Kyrki, and D. Kragic | visual, kinesthetic and tactile | image labels and demonstration | compound | probabilistic | quantitative | manipulation | Real robot-Barret hand |
Embodiment-specific representation of robot grasping using graphical models and latent-space discretization | D. Song, C. H. Ek, K. Huebner, and D. Kragic | visual and tactile | image labels | compound | probabilistic | quantitative | manipulation | Simulated robot-Armar III |
Predicting human intention in visual observations of hand/object interactions | D. Song, N. Kyriazis, I. Oikonomidis, C. Papazov, A. Argyros, D. Burschka, and D. Kragic | visual and tactile | image labels and demonstration | compound | probabilistic | quantitative | manipulation | Real robot-Tombatossals |
Task-based robot grasp planning using probabilistic inference | D. Song, C. H. Ek, K. Huebner, and D. Kragic | visual and tactile | image labels and demonstration | compound | probabilistic | quantitative | manipulation | Real robot-Armar III |
Learning to detect visual grasp affordance | H. O. Song, M. Fritz, D. Goehring, and T. Darrell | visual | image labels | compound | probabilistic | qualitative and quantitative | manipulation | Real robot-PR2 |
Functional object class detection based on learned affordance cues | M. Stark, P. Lies, M. Zillich, J. Wyatt, and B. Schiele | visual | image labels and demonstration | primitive | probabilistic | qualitative and quantitative | manipulation | Simulation-visual input |
Behavior-grounded representation of tool affordances | A. Stoytchev | visual | image labels and exploration | primitive | deterministic and heuristic | qualitative and quantitative | manipulation | Real robot-CRS+ A251 |
Learning the affordances of tools using a behavior-grounded approach | A. Stoytchev | visual | image labels and exploration | primitive | deterministic and heuristic | qualitative and quantitative | manipulation | Real robot-CRS+ A251 |
Pose-aware placement of objects with semantic labels - brandname-based affordance prediction and cooperative dual-arm active manipulation | Y.-S. Su, L.-F. Yu, H.-C. Wang, S.-H. Lu, P.-S. Ser, W.-T. Hsu, W.-C. Lai, B. Xie, H.-M. Huang, T.-Y. Lee | visual | image labels | primitive | probabilistic and heuristic | qualitative and quantitative | manipulation | Real robot-UR10 |
Learning visual object categories for robot affordance prediction | J. Sun, J. L. Moore, A. Bobick, and J. M. Rehg | visual | image labels and demonstration | primitive | probabilistic | quantitative | navigation | Real robot-PeopleBot |
Object-object interaction affordance learning | Y. Sun, S. Ren, and Y. Lin | visual | image labels | compound | probabilistic | quantitative | manipulation | Real robot-FANUC |
A model of shared grasp affordances from demonstration | J. D. Sweeney and R. Grupen | visual | image labels | primitive | probabilistic | quantitative | manipulation | Real robot-Dexter arm |
Knowledge propagation and relation learning for predicting action effects | S. Szedmak, E. Ugur, and J. Piater | visual | image labels | compound | probabilistic | quantitative | action prediction | Simulation-visual input |
Deep affordance-grounded sensorimotor object recognition | S. Thermos, G. T. Papadopoulos, P. Daras, and G. Potamianos | visual | image labels and demonstration | primitive | probabilistic | quantitative | action prediction | Simulation-visual input |
Learning about objects with human teachers | A. L. Thomaz and M. Cakmak | visual and kinesthetic | image labels and demonstration | primitive | deterministic | quantitative | manipulation | Real robot-Bioloid |
Exploring affordances and tool use on the iCub | V. Tikhanoff, U. Pattacini, L. Natale, and G. Metta | visual | image labels | primitive | probabilistic | qualitative and quantitative | manipulation | Real robot-iCub |
Bottom-up learning of object categories, action effects and logical rules: From continuous manipulative exploration to symbolic planning | E. Ugur and J. Piater | visual | image labels and exploration | compound | deterministic, heuristic and planning | qualitative | manipulation | Real robot-Kuka arm |
Refining discovered symbols with multi-step interaction experience | E. Ugur and J. Piater | visual | image labels and exploration | primitive | deterministic and planning | qualitative | manipulation | Real robot-Kuka arm |
Curiosity-driven learning of traversability affordance on a mobile robot | E. Ugur, M. R. Dogar, M. Cakmak, and E. Sahin | visual | image labels and exploration | primitive | deterministic and heuristic | quantitative | manipulation | Real robot-Kurt3D |
The learning and use of traversability affordance using range images on a mobile robot | E. Ugur, M. R. Dogar, M. Cakmak, and E. Sahin | visual | image labels and exploration | primitive | deterministic and heuristic | quantitative | manipulation | Real robot-Kurt3D |
Affordance learning from range data for multi-step planning | E. Ugur, E. Sahin, and E. Oztop | visual | image labels and exploration | primitive | deterministic and planning | quantitative | manipulation | Real robot-Gifu Hand III |
Goal emulation and planning in perceptual space using learned affordances | E. Ugur, E. Oztop, and E. Sahin | visual | image labels and exploration | primitive | deterministic and planning | qualitative and quantitative | manipulation | Real robot-Gifu hand |
Unsupervised learning of object affordances for planning in a mobile manipulation platform | E. Ugur, E. Sahin, and E. Oztop | visual | image labels and exploration | primitive | deterministic and planning | quantitative | navigation | Real robot-Kurt3D |
Staged development of robot skills: Behavior formation, affordance learning and imitation with motionese | E. Ugur, Y. Nagai, E. Sahin, and E. Oztop | visual | image labels and exploration | primitive | deterministic and planning | quantitative | manipulation | Real robot-Gifu hand |
Afrob: The affordance network ontology for robots | K. M. Varadarajan and M. Vincze | visual | image labels | compound | probabilistic | qualitative and quantitative | action prediction | Simulation-visual input |
Socially aware robot navigation system in human-populated and interactive environments based on an adaptive spatial density function and space affordances | A. Vega, L. J. Manso, D. G. Macharet, P. Bustos, and P. Nunez | visual | image labels | primitive | probabilistic | quantitative | navigation | Simulation-visual input |
Incorporating object intrinsic features within deep grasp affordance prediction | M. Veres, I. Cabral, and M. Moussa | visual and proprioception | image labels and exploration | primitive | probabilistic and heuristic | quantitative | manipulation | Simulation-Fanuc arm |
Robot learning and use of affordances in goal-directed tasks | C. Wang, K. V. Hindriks, and R. Babuska | visual | image labels and exploration | primitive | deterministic | quantitative | action prediction | Real robot-NAO |
What can I do around here? Deep functional scene understanding for cognitive robots | C. Ye, Y. Yang, R. Mao, C. Fermuller, and Y. Aloimonos | visual | image labels | compound | probabilistic | qualitative and quantitative | action prediction | Real robot-Baxter |
Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching | A. Zeng, S. Song, K.-T. Yu, E. Donlon, F. R. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu, E. Romo | visual | image labels | primitive | probabilistic | quantitative | manipulation | Real robot-Fanuc arm |
Understanding tools: Task-oriented object modeling, learning and recognition | Y. Zhu, Y. Zhao, and S.-C. Zhu | visual | image labels | compound | probabilistic | quantitative | manipulation and action prediction | Simulation-visual input |