Final Conclusions

 

Conclusions of Experiment one (All Video Blurred)

 

The easiest emotional state to detect was Anger (73%)

Sadness was the most difficult emotional state to detect from motion (22%)

Neutral, boredom and sadness were confused with one another (around 28%)
Stress was confused with neutral in 25% of the responses
Confusion between joy and boredom was not negligible (19.3%)

 

 

Conclusions of Experiment two (Only Face Blurred)

 

The easiest emotional state to detect was Joy (65%)
The second easiest emotional state to detect was Stress (59%)

Sadness was the most difficult emotional state to detect (30.43%)

Neutral and stress were confused in approximately 23% of the responses
Anger was confused with stress in 19% of the responses
Confusion between boredom and neutral occurred in 36% of the responses (a sketch of how these rates follow from the observer responses is given after this list)
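
The detection and confusion percentages reported for both experiments can be derived from the observers' responses. Below is a minimal sketch of that computation, assuming the responses are available as (shown emotion, perceived emotion) pairs; the function names and the sample data are illustrative assumptions, not the real study data.

    # Sketch: detection and confusion rates from observer responses.
    # Each response is assumed to be a (shown_emotion, perceived_emotion) pair;
    # the sample data below is made up for illustration only.
    from collections import Counter

    EMOTIONS = ["neutral", "sadness", "anger", "joy", "stress", "boredom"]

    def rates(responses):
        totals = Counter(shown for shown, _ in responses)
        pairs = Counter(responses)
        # Fraction of clips of each emotion that observers identified correctly.
        detection = {e: pairs[(e, e)] / totals[e] for e in EMOTIONS if totals[e]}
        # Fraction of clips of emotion t that observers perceived as emotion p.
        confusion = {(t, p): pairs[(t, p)] / totals[t]
                     for t in EMOTIONS if totals[t]
                     for p in EMOTIONS if p != t and pairs[(t, p)]}
        return detection, confusion

    sample = [("anger", "anger"), ("anger", "stress"),
              ("sadness", "neutral"), ("sadness", "sadness")]
    detection, confusion = rates(sample)
    print(detection)   # {'sadness': 0.5, 'anger': 0.5}
    print(confusion)   # {('sadness', 'neutral'): 0.5, ('anger', 'stress'): 0.5}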

 

Conclusions of Movement Analysis

Neutral

The horizontal and vertical movements of the head and of the hand holding the utensil were highly correlated (a sketch of how this and the pauses below can be measured follows this list).

Inactivity times for the horizontal and vertical movements of the head and hands were on the order of seconds (greater than 500 ms).

A moderate number of variations of the signals over time.
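
A minimal sketch, assuming the tracked positions are available as per-frame arrays, of how the correlation and the longer-than-500 ms pauses could be measured; the frame rate, motion threshold and signal names are my assumptions for illustration, not values from the experiment.

    import numpy as np

    FPS = 25            # assumed frame rate of the videos
    MOTION_EPS = 2.0    # assumed per-frame motion (pixels) below which the signal is "still"
    MIN_PAUSE_S = 0.5   # the >500 ms inactivity threshold mentioned above

    def correlation(head, hand):
        """Pearson correlation between two per-frame position signals."""
        return float(np.corrcoef(head, hand)[0, 1])

    def long_pauses(signal, fps=FPS):
        """Durations (seconds) of runs where frame-to-frame motion stays tiny."""
        still = np.abs(np.diff(signal)) < MOTION_EPS
        durations, run = [], 0
        for s in still:
            if s:
                run += 1
            else:
                if run:
                    durations.append(run / fps)
                run = 0
        if run:
            durations.append(run / fps)
        return [d for d in durations if d >= MIN_PAUSE_S]

    # Example on a synthetic 22-second trajectory (the real clips last ~22 s):
    t = np.arange(0, 22, 1 / FPS)
    head_x = 50 + 5 * np.sin(0.8 * t)          # slow horizontal head oscillation
    hand_x = 120 + 5 * np.sin(0.8 * t + 0.2)   # hand roughly following the head
    print(correlation(head_x, hand_x))   # close to 1 -> highly correlated
    print(long_pauses(head_x))           # here the slow synthetic motion counts as one long pause

The same two measurements can be computed for the vertical coordinates and for the hand not holding the utensil.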

Sadness

Less horizontal and vertical movement (fewer variations) in the head and in the hand not holding the utensil.

The hand holding the utensil was kept in a fixed vertical and horizontal position over long periods of time (on the order of seconds).

 

Anger

The hand holding the utensil showed higher variations over time and of larger magnitude; the plot lines are spiky.

Apparent calm followed by compulsive movements of the hand holding the utensil.

Moderate movement of the head.

 

Happiness/Joy

Considerable horizontal and vertical movement of the head, with continuous small-magnitude oscillations.

Horizontal movement of the head and horizontal movement of the hand holding the utensil were correlated.

Stress

High and constant variations in the vertical and horizontal movements of the head and hand holding the utensil.

Boredom

Long periods of overall inactivity.

Slow variations in the vertical and horizontal movement of the head and of the hand holding the utensil (the variation measures are sketched below).
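
Most of the distinctions above come down to how large and how fast the variations of a signal are. Below is a small sketch of two summary statistics that could separate, say, the spiky anger-like traces from the slow boredom-like ones; the thresholds, frame rate and demo signals are assumptions for illustration only.

    import numpy as np

    FPS = 25  # assumed frame rate

    def mean_speed(signal, fps=FPS):
        """Average absolute frame-to-frame change, in position units per second."""
        return float(np.mean(np.abs(np.diff(signal))) * fps)

    def variation_count(signal, min_step=1.0):
        """Number of direction changes whose step size exceeds min_step."""
        d = np.diff(signal)
        direction_change = np.diff(np.sign(d)) != 0
        big_step = np.abs(d[1:]) > min_step
        return int(np.sum(direction_change & big_step))

    t = np.arange(0, 22, 1 / FPS)
    rng = np.random.default_rng(0)
    spiky = 100 + 30 * np.sign(np.sin(6 * t)) + rng.normal(0, 5, t.size)  # anger-like
    flat = 100 + 2 * np.sin(0.3 * t)                                      # boredom-like

    print(mean_speed(spiky), variation_count(spiky))   # both large (spiky plot lines)
    print(mean_speed(flat), variation_count(flat))     # both small (slow variations)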

 

 

Final Conclusions

It could be possible to build a computer vision system to detect the features mentioned in the movement analysis conclusions. Such a system would not be very accurate on its own, because people eat in very different ways. Some subjects mentioned that they would find it easier to detect the emotional state of the person eating if they knew that person well. I think that using a computer vision system to analyze people eating in their homes on a daily basis, in combination with other systems, would give better and more personalized detection.
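
To make that idea a bit more concrete, here is a rough, purely illustrative sketch of what the decision logic of such a system could look like on top of the features sketched in the movement analysis. This is not the actual system; the feature names and thresholds are assumptions.

    # Toy rule-based classifier over the movement features sketched earlier.
    # Feature names and thresholds are illustrative assumptions, not measured values.

    def classify(features):
        """features: dict with keys such as 'head_hand_corr', 'long_pause_count'
        (e.g. len(long_pauses(signal)) from the earlier sketch), 'mean_speed'
        and 'variation_count'."""
        if features["mean_speed"] > 40 and features["variation_count"] > 50:
            return "anger/stress"      # spiky, large-magnitude movement
        if features["long_pause_count"] > 3 and features["mean_speed"] < 5:
            return "sadness/boredom"   # long stillness, few variations
        if features["head_hand_corr"] > 0.7 and features["mean_speed"] > 10:
            return "joy"               # correlated, continuous oscillations
        return "neutral"

    print(classify({"head_hand_corr": 0.9, "long_pause_count": 0,
                    "mean_speed": 15, "variation_count": 20}))   # -> joy

A personalized version could replace the fixed thresholds with values learned from each person's daily recordings, which is the combination suggested above.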

If I were to repeat the experiment analyzing head and hand movements while people are doing other activities, I would expect to find similar features, for example sudden movements with high-magnitude variations for anger and slow movements for sadness.

 

Possible Improvements

If I were to repeat the experiment, I would hire professional actors or, even better, I would record people with hidden cameras and then try to modify their mood.

I would use more subjects for the experiment, probably more than 200.

I would analyze the movement of the head and hands for a longer period of time. Each video lasts 22 seconds; I would increase the duration to more than 5 minutes (probably until the person actually finishes eating).