Robot Learning Communicative Behavior

Ph.D. thesis: Humans have the ability to communicate intentions both verbally and through actions. For robots to collaborate successfully with humans and augment human abilities, they need to possess similar abilities. My thesis argues that communication has its roots in behavior: gesture shares essential properties with sensory and motor problem solving while also overlapping significantly with language. Inspired by the co-development of motor behavior and language reported in the psychology literature, I present a unified learning framework for acquiring both simultaneously.

A Possible Human-Robot Collaboration Scenario

This is a simple scenario in which effective communication is necessary for completing the task. Once the robot has acquired a set of skills to handle this task, those skills can be applied to other domains, such as assisting astronauts in construction or repair missions in space, elder care, and child education.

Dyadic Interaction 1: Dexter works alongside a human collaborator in a construction task. The human grabs hold of a pipe while directing Dexter to work on the corner on the far side of the rig. Note that the far-side corner is not within the human's workspace. Dexter uses eye contact to confirm the action.

Dyadic Interaction 2: The human and Dexter each go about completing their own goals.

Dyadic Interaction 3: Dexter observes the human's next move (picking up the next pipe) and decides how to assist the task in a helpful manner.

Realizing such a scenario, however, poses a number of learning challenges:

  • Generating effective expressive behavior
  • Building a robust, scalable knowledge representation of humans
  • Recognizing human behavior and inferring human intentions

    Learning Expressive Communicative Behavior

    key idea: Communication has its roots in manual behavior
    Part 1: Intro

    YouTube link to: Intro

    Part 2: Results and Discussion

    YouTube link to: Results and Discussion

    A Robust, Scalable Knowledge Representation of Humans

    key idea: Humans are characterized in terms of the behavior they afford, rather than by visual appearance alone.
    abstract:
    Inspired by Turing's emphasis on behavior as "the hallmark of humanness" and Gibson's view of how human knowledge is stored and applied, this video presents a novel affordance-based approach to modeling humans for human-robot interaction. It is a principled, grounded approach in which a robot's understanding of humans is learned and represented in terms of the behavior they afford. To demonstrate the feasibility of this approach, the video first shows how a behavioral learning framework (the control basis), designed for robots to acquire general hierarchical programs for object manipulation, can also be used by robots to incrementally learn the behavioral affordances of humans through natural interaction. Second, the video presents how the learned affordances can be captured and represented in a probabilistic hierarchical parse-graph formulation, so that the affordance model can later be applied for recognition. The models presented are acquired incrementally by a bi-manual humanoid robot from its interactions with 18 subjects.
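
    To make the parse-graph formulation concrete, here is a minimal sketch of how such a probabilistic hierarchical affordance model might be represented. The class, its field names, and the toy model at the end are illustrative assumptions rather than the thesis implementation; only the general structure (nonterminals expanding into sub-behaviors with production probabilities) follows the description above.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ParseNode:
        """One node in a hierarchical affordance parse graph. A terminal
        node is a primitive observation or action; a nonterminal expands
        into children with an associated production probability."""
        symbol: str
        prob: float = 1.0                       # P(this expansion | parent)
        children: List["ParseNode"] = field(default_factory=list)

        def likelihood(self) -> float:
            """Probability of this (sub)parse: product of expansion probs."""
            p = self.prob
            for c in self.children:
                p *= c.likelihood()
            return p

    # Hypothetical model: a "human" affords responding to a robot's gesture.
    respond_to_point = ParseNode("respond-to-point", prob=0.7, children=[
        ParseNode("robot-point"),
        ParseNode("human-orient-to-target", prob=0.9),
    ])
    human = ParseNode("human", children=[
        ParseNode("kinematic-features"),        # appearance-based evidence
        respond_to_point,                       # behavior the human affords
    ])
    print(f"parse likelihood: {human.likelihood():.3f}")   # 0.7 * 0.9 = 0.630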

    A learned hierarchical affordance model of humans

    A learned hierarchical affordance model of humans, in which a human is described not only as a set of kinematically related visual features, but also by the behavior they afford. As shown in the figure, the robot has discovered that humans are likely to respond to gestures produced by the robot, such as gaze and pointing.

    Demo Video:

    YouTube link to: Human Tracking and Gesture Recognition using Learned Affordance Model

    Learning to Recognize Human Gestures and Intentions

    key idea: Knowledge gained while learning expressive behavior serves as a behavior template for recognizing the same behavior when it is performed by a human, and for inferring the human's intentions (a toy recognition sketch follows the video link below).

    YouTube link to: Learning Receptive Behavior
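
    As a rough illustration of this key idea, the sketch below matches an observed motion trajectory against behavior templates recorded while the robot produced the gestures itself, choosing the template with the smallest alignment cost. The dynamic-time-warping matcher and the toy templates are assumptions made for illustration, not the recognition machinery used in the videos.

    import numpy as np

    def dtw_cost(a: np.ndarray, b: np.ndarray) -> float:
        """Dynamic-time-warping alignment cost between trajectories a and b,
        each of shape [T, D]."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m] / (n + m)

    def recognize(observed: np.ndarray, templates: dict) -> str:
        """Return the name of the learned template closest to the observation."""
        return min(templates, key=lambda name: dtw_cost(observed, templates[name]))

    # Hypothetical templates recorded while the robot performed each gesture.
    t = np.linspace(0, 1, 50)[:, None]
    templates = {
        "point": np.hstack([t, 0.8 * t]),             # hand travels along a line
        "wave":  np.hstack([t, 0.2 * np.sin(8 * t)]), # oscillating hand motion
    }
    observed = np.hstack([t, 0.75 * t]) + 0.01 * np.random.randn(50, 2)
    print(recognize(observed, templates))             # expected: "point"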

    More Gestures


    YouTube link to: This video shows some more examples of gestures that can be learned using the same approach

    Other Dexter Demonstrations

    Joint work with John Sweeney and Stephen Hart.

    1. A simple task is demonstrated to Dexter through teleoperation. The robot infers the intentions of the human operator by observing his actions, and then replays the task.
    2. The sorting task taught via teleoperation is replayed with previously learned, implicit common-sense knowledge incorporated. Dexter intelligently determines handedness given the position of the object and the target location, without being explicitly taught by the operator, saving the teleoperator considerable effort (a toy sketch of such a handedness rule follows the videos below).


    YouTube link to: Teaching Dexter a sorting task via teleoperation, then replaying it with previously learned common-sense knowledge

    YouTube link to: Dexter handling another sorting task using acquired common sense knowledge
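
    As a toy illustration of the handedness decision described in item 2 above, the sketch below picks an arm by comparing simple reach costs for the object and target positions. The arm-base coordinates and the cost rule are invented for illustration; Dexter's actual choice comes from learned common-sense knowledge rather than a hard-coded rule.

    import numpy as np

    # Hypothetical shoulder positions for a bi-manual robot (meters, robot frame).
    ARM_BASE = {
        "left":  np.array([0.0,  0.25, 0.4]),
        "right": np.array([0.0, -0.25, 0.4]),
    }

    def reach_cost(arm: str, point: np.ndarray) -> float:
        """Toy cost: Euclidean distance from the arm's base to a point."""
        return float(np.linalg.norm(point - ARM_BASE[arm]))

    def choose_hand(obj: np.ndarray, target: np.ndarray) -> str:
        """Pick the arm minimizing total reach cost for the pick and the place."""
        return min(ARM_BASE, key=lambda arm: reach_cost(arm, obj) + reach_cost(arm, target))

    obj = np.array([0.5, 0.3, 0.0])      # object on the robot's left
    target = np.array([0.4, 0.1, 0.0])   # target slightly left of center
    print(choose_hand(obj, target))      # expected: "left"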

    Related Publications

    • Learning Prospective Robot Behavior. AAAI Spring Symposium, 2009. S. Ou and R. Grupen. [pdf]
    • Software Architecture for Robot Learning. ICRA Workshop, Japan, 2009. Joint work with S. Hart, S. Sen, and R. Grupen. [pdf]
    • A Framework for Learning Declarative Structure. Robotics Conference, Philadelphia, 2006. S. Ou et al. [pdf]
