Archive for the ‘Thesis’ Category

Using a Wii-mote to Teach a Humanoid Robot to Pick Things Up

Wednesday, August 1st, 2007

A follow-up experiment with the Wii-mote:
In this video the humanoid robot is controlled by the user via a Wii-mote. Because the user shares the robot's context, the teleoperation process feels very intuitive and natural. After the teaching, the robot can potentially use the training data to sequence its own controllers together to repeat the behavior in a different setting, e.g. a sorting task.
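
For the curious, here is a minimal sketch of what such a tilt-to-motion mapping might look like in C++. This is a hypothetical illustration, not the actual system code: the WiimoteState struct, the sendHandVelocity function, and all gain values are made-up stand-ins for whatever driver and robot API are really in use.

#include <cmath>
#include <cstdio>

// Hypothetical reading from a Wii-mote driver: pitch and roll in radians,
// estimated from the 3-axis accelerometer, plus the B (trigger) button.
struct WiimoteState {
    double pitch;    // forward/back tilt
    double roll;     // left/right tilt
    bool   trigger;  // B button held: "clutch" that enables motion
};

// Illustrative stand-in for the robot interface: command a Cartesian
// velocity (m/s) for the hand. Here it just prints the command.
void sendHandVelocity(double vx, double vy, double vz) {
    std::printf("hand velocity: %.3f %.3f %.3f\n", vx, vy, vz);
}

// One teleoperation step: map controller tilt to hand velocity. The
// trigger acts as a clutch, so the operator can reorient the Wii-mote
// without moving the robot; a small dead zone rejects hand tremor.
void teleopStep(const WiimoteState& wm) {
    if (!wm.trigger) {
        sendHandVelocity(0.0, 0.0, 0.0);
        return;
    }
    const double gain = 0.10;  // m/s per radian of tilt (invented value)
    const double dead = 0.05;  // dead zone, radians (invented value)
    double vx = (std::fabs(wm.pitch) > dead) ? gain * wm.pitch : 0.0;
    double vy = (std::fabs(wm.roll)  > dead) ? gain * wm.roll  : 0.0;
    sendHandVelocity(vx, vy, 0.0);
}

int main() {
    WiimoteState wm{0.4, -0.1, true};  // example controller sample
    teleopStep(wm);
}

The clutch-style trigger is a common teleoperation design choice, since it lets the operator reposition the controller without commanding motion.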


YouTube link to:

Wii-mote Driving Simulated Mobile Robot

Thursday, January 25th, 2007

My first attempt at programming with the Wii-mote. It seems to be a perfect device for creating natural gesture interfaces for controlling robots, especially arm and hand movements.

This is a test on a simulated robot. The next step is to control a real bi-manual humanoid robot (Dexter) and a self-balancing mobile manipulator robot (uBot). Stay tuned!
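
A driving interface can be sketched the same way: tilt the Wii-mote forward and back to drive, side to side to steer. Again, this is a hypothetical illustration rather than the code behind the video; the Tilt struct, gains, and sign conventions are all invented.

#include <algorithm>
#include <cstdio>

// Hypothetical tilt reading from the Wii-mote accelerometer, in radians.
struct Tilt { double pitch, roll; };

// Map tilt to differential-drive wheel speeds: pitching forward drives
// the robot forward, rolling steers. Outputs are clamped to [-1, 1],
// i.e. fractions of the maximum wheel speed.
void tiltToWheels(const Tilt& t, double& left, double& right) {
    const double kFwd  = 2.0;  // forward gain (invented value)
    const double kTurn = 1.0;  // steering gain (invented value)
    double v = kFwd * t.pitch;
    double w = -kTurn * t.roll;  // tilt right => clockwise turn (arbitrary convention)
    left  = std::clamp(v - w, -1.0, 1.0);  // positive w (left turn): slow the left wheel
    right = std::clamp(v + w, -1.0, 1.0);
}

int main() {
    Tilt t{0.3, -0.1};  // example: tilted forward and slightly left
    double l, r;
    tiltToWheels(t, l, r);
    std::printf("left=%.2f right=%.2f\n", l, r);
}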


YouTube link to: Wii-mote Driving a Simulated Robot

Detection and recognition of humans and related social cues

Wednesday, January 17th, 2007

This page contains information on detection and recognition of humans and related social cues, including:

  • face detection
  • face recognition
  • skin color detection
  • hand detection and tracking
  • motion detection and tracking
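
As a taste of how two of these items (face detection and skin color detection) might be approached, here is a small sketch using OpenCV. It assumes a standard OpenCV cascade file (haarcascade_frontalface_alt.xml) and a local test image; the skin-color bounds are a common rough choice, not tuned values from this project.

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Assumes the standard OpenCV frontal-face Haar cascade is available locally.
    cv::CascadeClassifier face;
    if (!face.load("haarcascade_frontalface_alt.xml")) return 1;

    cv::Mat img = cv::imread("test.jpg");  // hypothetical test image
    if (img.empty()) return 1;

    // Face detection on the equalized grayscale image.
    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);
    std::vector<cv::Rect> faces;
    face.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(30, 30));

    // Rough skin-color detection: threshold in HSV space. These bounds
    // are a common starting point and usually need per-setup tuning.
    cv::Mat hsv, skin;
    cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 30, 60), cv::Scalar(25, 180, 255), skin);

    for (const cv::Rect& r : faces)
        cv::rectangle(img, r, cv::Scalar(0, 255, 0), 2);
    cv::imwrite("out.jpg", img);
    cv::imwrite("skin.png", skin);
    return 0;
}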

Dexter Learning Demos

Monday, December 4th, 2006

Learning to Reach: From Manual Skill to Communicative Action

The following demonstrates that it is possible for a robot configured to learn manual skills, with no direct support for communication, to learn the utility of an important communicative action – pointing. This is achieved through several stages:

  1. A ball is always placed within the reachable workspace of the robot. Through exploration the robot discovers the appropriate sequence of primitive actions that leads to the rewarding tactile sensation. This process is analogous to the “motor babbling” stage during infant development.
  2. During this stage, objects are sometimes placed out of reach of the robot. This causes a noticeable change in the previously learned policy’s transition dynamics. The robot autonomously adapts to this change by relating the position of the object to the success of the “reach” action, thus learning a new policy (not to reach for out-of-reach objects) to increase its rate of reward.
  3. The dynamics of the environment are changed yet again during this stage – a human enters the scene and sometimes brings the out-of-reach object within reach when the robot’s actions indicate a request for assistance. Again, the robot notices the change in the environmental dynamics and discovers the appropriate cues and corresponding optimal action sequence so that its rate of reward increases once again.
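
To make the flavor of this reward-driven adaptation concrete, here is a deliberately toy sketch: tabular value learning over three abstract situations and three abstract actions. The real system learns over sequences of closed-loop controllers, so nothing below is the actual learning code; the states, actions, rewards, and learning rate are all invented for illustration.

#include <array>
#include <cstdio>

// Abstract situations and actions; purely illustrative labels, not the
// robot's actual state or action space.
enum State  { IN_REACH, OUT_OF_REACH, OUT_OF_REACH_HUMAN, NSTATES };
enum Action { REACH, POINT, WAIT, NACTIONS };

// One-step reward model standing in for the dynamics the robot discovers:
// touching the ball is rewarding, futile actions are slightly costly, and
// pointing pays off only when a human is present to help.
double reward(State s, Action a) {
    if (s == IN_REACH && a == REACH)           return 1.0;
    if (s == OUT_OF_REACH_HUMAN && a == POINT) return 0.8;
    if (a == WAIT)                             return 0.0;
    return -0.1;
}

int main() {
    std::array<std::array<double, NACTIONS>, NSTATES> Q{};  // value table, zeroed
    const double alpha = 0.1;
    const char* names[NACTIONS] = { "REACH", "POINT", "WAIT" };

    // Sweep all state-action pairs repeatedly, nudging each value
    // estimate toward the observed reward (bandit-style update).
    for (int trial = 0; trial < 9000; ++trial) {
        State  s = static_cast<State>(trial % NSTATES);
        Action a = static_cast<Action>((trial / NSTATES) % NACTIONS);
        Q[s][a] += alpha * (reward(s, a) - Q[s][a]);
    }

    // The greedy policy that emerges mirrors the three stages: reach when
    // the ball is reachable, wait when it is unreachable and the robot is
    // alone, and point when it is unreachable but a human is present.
    for (int s = 0; s < NSTATES; ++s) {
        int best = 0;
        for (int a = 1; a < NACTIONS; ++a)
            if (Q[s][a] > Q[s][best]) best = a;
        std::printf("state %d -> %s\n", s, names[best]);
    }
}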


YouTube link to: Stage 1: Learns to reach

YouTube link to: Stage 2: Learns to differentiate reachable and non-reachable workspace

YouTube link to: Stage 3: Learns to point to request assistance in the presence of human-scale “objects”

Compared with a conventional programming interface, this approach is a natural and intuitive way to program the robot: by manipulating the environment and acting with common sense, the human induces the emergence of appropriate actions, including communicative ones. This makes the negotiation process intuitive enough that potentially even uninformed human subjects can carry it out.

Other Dexter Demonstrations
joint work with John Sweeney and Stephen Hart

  1. A simple task is demonstrated to Dexter through teleoperation. The robot infers the intentions of the human operator by observing his actions, and replays the task at the end.
  2. The sorting task taught via teleoperation is replayed with previously learned implicit common sense knowledge incorporated. Dexter intelligently determines handedness given the position of the object and the target location, without being explicitly taught by the operator, saving the teleoperator considerable effort.
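
The handedness decision can be illustrated with a toy rule of thumb (a hypothetical heuristic, not Dexter's actual criterion): choose the arm on the side of the workspace where the pick-and-place motion is centered.

#include <cstdio>

// Hypothetical 2-D positions on the table plane, in meters, with the
// robot centered at x = 0 (positive x to the robot's right).
struct Pos { double x, y; };

enum Hand { LEFT, RIGHT };

// Illustrative handedness rule in the spirit of the demo: pick the arm
// on the side where most of the motion happens, judged by the midpoint
// of the pick (object) and place (target) locations.
Hand chooseHand(Pos object, Pos target) {
    double mid = 0.5 * (object.x + target.x);
    return (mid >= 0.0) ? RIGHT : LEFT;
}

int main() {
    Pos obj{-0.3, 0.5}, tgt{-0.1, 0.6};  // both on the robot's left
    std::printf("%s hand\n", chooseHand(obj, tgt) == RIGHT ? "right" : "left");
}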


YouTube link to: Dexter is first taught sorting via teleoperation, then replays the task with previously learned common sense knowledge

YouTube link to: Dexter handling another sorting task using acquired common sense knowledge

Fun Stuff

Related work

Face Recognition

Thursday, November 2nd, 2006

This article contains two pieces of Face Recognition code:

HMM Face Recognition

This is my adaptation of the HMM face recognition algorithm described in the paper “Face recognition using an embedded HMM” (1999). The original source was found on the Yahoo OpenCV discussion group. This adapted version streamlines the process of training and testing the algorithm.

Abstract

Hidden Markov Models (HMM) have been successfully used for speech and action recognition, where the data to be modeled is one-dimensional. Although attempts to use these one-dimensional HMMs for face recognition have been moderately successful, images are two-dimensional (2-D). Since 2-D HMMs are too complex for real-time face recognition, in this paper we present a new approach for face recognition using an embedded HMM and compare this approach to the eigenface method for face recognition and to other HMM-based methods. Specifically, an embedded HMM has equal or better performance than previous methods, with reduced computational complexity.
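
To sketch the “embedded” idea in code: the face image is scanned top to bottom in overlapping horizontal strips, and a few low-frequency 2-D DCT coefficients of each strip become one observation in the sequence fed to the HMM. The snippet below is my illustrative reconstruction using the modern OpenCV C++ API, not the code in this download; the strip height, overlap, and coefficient count are arbitrary choices, and since cv::dct only supports even-size arrays, an even-dimensioned face image (e.g. 92x112) is assumed.

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

// Build the observation sequence for a top-to-bottom HMM over a face
// image: slide an overlapping horizontal strip down the image and keep
// a few low-frequency 2-D DCT coefficients from each strip.
std::vector<std::vector<float>> stripObservations(const cv::Mat& gray8u,
                                                  int stripH = 10,
                                                  int overlap = 8,
                                                  int nCoeffs = 6) {
    std::vector<std::vector<float>> seq;
    cv::Mat f;
    gray8u.convertTo(f, CV_32F);  // cv::dct needs floating-point input
    for (int y = 0; y + stripH <= f.rows; y += stripH - overlap) {
        cv::Mat strip = f(cv::Rect(0, y, f.cols, stripH)).clone();
        cv::Mat coeffs;
        cv::dct(strip, coeffs);   // 2-D DCT of the strip (even sizes required)
        std::vector<float> obs;
        for (int i = 0; i < nCoeffs; ++i)  // top-left (low-frequency) coefficients
            obs.push_back(coeffs.at<float>(i / 3, i % 3));
        seq.push_back(obs);
    }
    return seq;
}

int main() {
    cv::Mat face = cv::imread("face.pgm", cv::IMREAD_GRAYSCALE);  // e.g. 92x112
    if (face.empty()) return 1;
    auto seq = stripObservations(face);
    if (seq.empty()) return 1;
    std::printf("%zu observations of length %zu\n", seq.size(), seq.front().size());
}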

Download and Compile

Download and unzip into the C: drive root directory. The unpacked directory structure should look like:

+hmmfaces
++FaceRecognition (Core source for HMM face recognition)
++database (faces database)
++FindFaces (generate training images from video sequence)
++FormatConvert (convert sample images into proper pgm format for training)
++testimages (test images after training)

The face recognition project requires Visual Studio .NET to compile. A Linux implementation is available on request.

Face Recognition
Batch-train the face database by running FaceRecognition with no arguments. (Note: run FaceRecognition inside the Visual Studio debug environment, otherwise the program will crash due to a memory bug.)
After training you may test the result using the images in the testimages directory, a pre-recorded video, or a live webcam stream; see the parameters below for the different test options. When you run the program in test mode, three windows will pop up: the “Video” window shows the live camera feed or test image, the “ID” window displays the recognition result, and the “search” window displays the clipped-out face when testing from a live camera stream.

Usage:
FaceRecognition can recognize faces from a live video camera or from a static input image.
Syntax: FaceRecognition [choice [input_image_file_name] | --help]
choice=1: recognize face from static input image
(input_image_file_name REQUIRED)
choice=2: recognize face from pre-recorded video sequence
choice=3: recognize face from LIVE cam
no argument: FaceRecognition runs in batch training mode
--help: display this help message

Adding more people into the face database

  • Record a video sequence of the person sitting in front of the camera, first looking straight ahead, then slowly moving the head from side to side. This allows the training result to be invariant to head orientation changes.
  • Run the FindFaces program to generate a sequence of training face images from the recorded video sequence.
  • Run the FormatConvert program to convert the training images into the proper pgm format (they need to be in the grayscale P5 format); a minimal sketch of this conversion follows the list.
  • Place the resulting training images into the database directory.
  • Run FaceRecognition without arguments to train, wait 5 seconds, and done!
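
The conversion step itself can be as small as the following sketch (a hypothetical stand-in, not the actual FormatConvert source). OpenCV writes single-channel 8-bit images with a .pgm extension as binary P5 by default, which is the format the trainer expects.

#include <opencv2/opencv.hpp>

// Load an image, convert it to 8-bit grayscale, and save it as PGM.
int main(int argc, char** argv) {
    if (argc != 3) return 1;  // usage: convert <input_image> <output.pgm>
    cv::Mat img = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;
    return cv::imwrite(argv[2], img) ? 0 : 1;  // .pgm => binary P5 by default
}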

Ravela Face Algorithm
The C++ implementation of the Ravela Face Algorithm can be found at the LPR wiki (LPR wiki access required)

Related paper: