Introduction

What is the problem we are trying to solve?

We are trying to build a classifier that can discriminate between three different hand gestures: Grasp, Point, and Push. The main idea is to use this classifier in mixed-reality communication/interaction with an on-screen animated robot character.

What is our approach to the problem?

Our approach was to extract features from the dataset that characterize each gesture uniquely and then apply different classification algorithms for gesture recognition. Besides gesture recognition, we also tried to determine how soon a gesture can be recognized.
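
As a rough illustration of this pipeline, the sketch below computes a few permutation-invariant features per gesture (the hand centroid, its average velocity, and the marker spread) and feeds them to a k-nearest-neighbours classifier. The specific features, the classifier choice, and the use of scikit-learn are assumptions made for this sketch, not a description of the exact method used in this work.

    # Illustrative feature-extraction + classification pipeline (a sketch,
    # not the exact features or classifier used in this report).
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def extract_features(gesture_frames):
        """gesture_frames: array of shape (n_frames, 6, 3) holding raw
        X, Y, Z marker positions. Returns one fixed-length feature vector."""
        positions = np.asarray(gesture_frames)
        centroid = positions.mean(axis=1)     # hand centroid in each frame
        velocity = np.diff(centroid, axis=0)  # frame-to-frame displacement
        spread = positions.std(axis=1)        # how spread out the markers are
        return np.concatenate([centroid.mean(axis=0),
                               velocity.mean(axis=0),
                               spread.mean(axis=0)])

    def evaluate(gestures, labels, k=3):
        """gestures: list of (n_frames, 6, 3) arrays; labels: gesture names
        ("Grasp", "Point", "Push"). Returns mean 5-fold accuracy."""
        features = np.stack([extract_features(g) for g in gestures])
        clf = KNeighborsClassifier(n_neighbors=k)
        return cross_val_score(clf, features, labels, cv=5).mean()

These particular features are deliberately independent of marker ordering, which matters because the markers are unlabelled (see the dataset description below).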

 

Dataset Description

How was the dataset collected?

The dataset was provided by the Robotic Life Group. The data was collected using a Vicon motion tracking system. Two different subjects (Rita and Manu), each wearing a glove with six markers on the right hand, collected 30 examples of each gesture. The features from the Vicon system are the raw marker positions X, Y, Z.
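
To make this concrete, the sketch below shows one possible in-memory layout for the raw recordings; the array shapes, the grouping by subject, and the assumption of 30 examples per subject and gesture are illustrative, not the actual file format provided by the group.

    # Illustrative in-memory layout of the raw Vicon recordings; shapes and
    # grouping are assumptions, not the actual file format.
    import numpy as np

    SUBJECTS = ["Rita", "Manu"]
    GESTURES = ["Grasp", "Point", "Push"]
    N_MARKERS, N_COORDS = 6, 3  # six glove markers, raw X, Y, Z each

    # One example is a variable-length sequence of frames; each frame is an
    # unordered set of six marker positions.
    example = np.zeros((50, N_MARKERS, N_COORDS))  # e.g. a 50-frame gesture

    # Full collection (assuming 30 examples per subject per gesture):
    # 2 subjects x 3 gestures x 30 examples = 180 sequences.
    dataset = {(s, g): [] for s in SUBJECTS for g in GESTURES}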

How were the markers placed?

The markers were placed as shown in the picture below.

Figure: Marker positions on the right hand

What information does the dataset provide?

From the dataset, we know the following:

1. Each gesture is a sequence of frames, each frame containing X, Y, Z coordinates for each of the six markers.

2. The dataset provides neither the identities of the six markers nor the correspondence between the markers from one frame to the next (a sketch of one way to recover this correspondence follows this list).
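
Because the markers are unlabelled and their ordering can change from frame to frame, any feature that tracks an individual marker over time first needs a correspondence between frames. The sketch below shows one simple way to recover it, greedy nearest-neighbour matching between consecutive frames; it is meant to illustrate the problem and is not necessarily the matching used later in this report.

    # Greedy nearest-neighbour matching of unlabelled markers between two
    # consecutive frames. Assumes each marker moves less between frames than
    # the distance separating different markers.
    import numpy as np

    def match_markers(prev_frame, next_frame):
        """prev_frame, next_frame: (6, 3) arrays of X, Y, Z positions.
        Returns next_frame reordered so that row i is the marker judged to
        correspond to row i of prev_frame."""
        remaining = list(range(len(next_frame)))
        reordered = np.empty_like(next_frame)
        for i, p in enumerate(prev_frame):
            # pick the closest not-yet-assigned marker in the next frame
            dists = [np.linalg.norm(next_frame[j] - p) for j in remaining]
            reordered[i] = next_frame[remaining.pop(int(np.argmin(dists)))]
        return reordered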