About jifei

Jifei Ou
1st-year Master's student @ MIT Media Lab | Tangible Media Group

Background
Prior to the Media Lab, I studied product design in Germany. I have a firm background in form-giving and strong fabrication skills, and I have also been working with electronics and programming for three years.

Interests
My main research interest is how the body and tools (digital or physical) can be seamlessly merged so that people can work and live better.

Tele-Gesture

Samvaran Sharma, Anjali Muralidhar, Henry Skupniewicz, Jason Gao, Hayoun Won

DESCRIPTION:

Tele-Gesture is a tangible interface for collaboration that lets users physically point at a detailed 3D object in real life and have their pointing gestures replicated (mirrored) by a robotic finger at a remote unit, where collaborators can view them either at the same time (synchronously) or at a later point in time (asynchronously).

Tele-Gesture demonstrates the importance of giving pointing a physical representation and relaying the associated vector information to the viewer, something prior work based on optical or laser pointing does not provide.
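In software terms, the synchronous and asynchronous modes amount to either forwarding each pointing command to the remote unit as it arrives, or logging the commands with timestamps and replaying them later. A minimal sketch of that idea follows; the class, the callback, and the command format are hypothetical illustrations, not taken from the project code.

```python
import time

class GestureRelay:
    """Forward pointing commands live, or record them for later playback."""

    def __init__(self, send_fn):
        self.send_fn = send_fn   # callable that delivers a command to the remote unit
        self.log = []            # (timestamp, command) pairs for asynchronous replay

    def handle(self, command, live=True):
        self.log.append((time.time(), command))
        if live:                 # synchronous mode: mirror the gesture immediately
            self.send_fn(command)

    def replay(self):
        """Asynchronous mode: play recorded gestures back with their original timing."""
        prev = None
        for timestamp, command in self.log:
            if prev is not None:
                time.sleep(timestamp - prev)   # reproduce the pause between gestures
            self.send_fn(command)
            prev = timestamp

# Example: print commands instead of driving the robotic finger.
relay = GestureRelay(send_fn=print)
relay.handle({"pan": 30, "tilt": 45}, live=True)
```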

USE CASES:

The project addresses the inadequacy of two-dimensional screens for conveying a collaborator's physical emphasis and attention on detailed three-dimensional models or physical prototypes, which matters in many collaborative scenarios.

  1. Two remote architects discussing changes to a building over a 3D-printed physical model can now do so remotely while preserving the role of the body (pointing and gesturing motions of the hand and arm) at the object, in the context of the space around it.
  2. Product designers demonstrating how to use a physical prototype can point at, and even operate, the product remotely, using the Tele-Gesture pointer to press buttons, manipulate interfaces, etc.


TEAM MEMBERS AND CONTRIBUTIONS:

Everyone: Concepting, problem space exploration, design discussions.

Samvaran Sharma: Responsible for the MATLAB machine-vision algorithm that tracks the stylus in real time from webcam input, derives its 3D spatial position, converts it to polar coordinates, and transmits commands to the Arduino at roughly 20–40 Hz (a simplified sketch of this pipeline appears after the team list below). Also responsible for the paper writeup detailing the philosophical approach and project overview.

Jason Gao: Hardware design, prototype assembly, Arduino + electronics

Hayoun Won: Prototype assembly (with Jason, Anjali, and Henry), summary of case studies (with everyone), presentation slides, photography, video (image editing, music, compilation, photography), diagramming of the philosophical approach, project concept, and system overview, and visual ideation of design rationales and user scenarios.

Anjali Muralidhar: Hardware design, prototype assembly

Henry Skupniewicz:  Hardware design, prototype assembly, prototype documentation
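The tracking code referenced above was written in MATLAB and is not reproduced here; the sketch below shows the shape of the same pipeline in Python with OpenCV and pySerial, simplified to a 2D image position rather than a full 3D reconstruction. The serial port, color thresholds, field of view, and command format are all assumptions made for illustration, not values from the project.

```python
import cv2      # OpenCV for webcam capture and color segmentation
import serial   # pySerial for sending commands to the Arduino

arduino = serial.Serial("/dev/ttyUSB0", 115200)  # port and baud rate are placeholders
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Segment the stylus tip by color in HSV space (threshold values are guesses).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (100, 120, 80), (130, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        continue  # stylus not visible in this frame

    # Centroid of the detected blob = stylus tip position in image coordinates.
    cx = m["m10"] / m["m00"]
    cy = m["m01"] / m["m00"]

    # Map image position to pan/tilt angles for the robotic finger.
    # A real system calibrates this mapping; a 60-degree field of view is assumed here.
    h, w = frame.shape[:2]
    pan = (cx / w - 0.5) * 60.0
    tilt = (cy / h - 0.5) * 60.0

    # One ASCII command per frame; the ~20-40 Hz rate is bounded by the camera.
    arduino.write(f"{pan:.1f},{tilt:.1f}\n".encode())
```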

MEDIA:

Video link: http://www.youtube.com/watch?v=y0eU_wMXWRA

Slides: Tele-Gesture Slides Final

Paper: Tele-Gesture

Braynestorm


Braynestorm is an augmented-reality device that seeks to make the initial creative process behind ideation more fun and productive. When researching and brainstorming a topic in preparation for a paper, assignment, or other project, we typically default to our old standbys, the mouse and keyboard, browsing Google for a host of information about our area of interest. This differs greatly from the brainstorming behind sculpture, dance, or art, where an innate physical component of the creative process (in addition to computer-aided research) frees us from the paradigm of screen, mouse, and keyboard and lets us interact with our physical space as we explore different avenues.

Braynestorm seeks to reduce distraction during the creative process and improve efficiency by providing a large volume of information in varied forms and by letting users interact with their data in a tangible way, tapping into the advantages of physical freedom. It does this with a computer-vision-aided process that creates an augmented-reality setup for the user.

The user first types in a relevant keyword or set of keywords for the desired topic and presses "Braynestorm", which prompts the program to automatically scour the internet for visual data (images from Google Images) and semantic data (facts from Wikipedia and news from Google News). When a red object is held up to the camera, the program tracks the object's position and overlays images related to the topic on the video feed. When a blue object is held up, news headlines are displayed, and when a green object is held up, facts are shown.
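As an illustration of this color-triggered overlay, here is a minimal Python/OpenCV sketch. The project's actual thresholds, data fetching, and rendering are not reproduced; the color ranges are rough guesses, the fetched content is replaced with hard-coded placeholders, and image results are drawn as text labels rather than bitmaps.

```python
import cv2

# Content fetched earlier for the keyword; placeholders stand in for the real
# Google Images / Google News / Wikipedia results.
CONTENT = {
    "red":   ["[image result 1]", "[image result 2]"],   # images (shown as labels here)
    "blue":  ["Headline A", "Headline B"],                # news headlines
    "green": ["Fact 1", "Fact 2"],                        # facts
}

# HSV ranges for the trigger colors (rough guesses; red actually wraps around the
# hue axis, so a full version would use two ranges for it).
COLOR_RANGES = {
    "red":   ((0, 120, 80), (10, 255, 255)),
    "blue":  ((100, 120, 80), (130, 255, 255)),
    "green": ((45, 120, 80), (75, 255, 255)),
}

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    for color, (lo, hi) in COLOR_RANGES.items():
        mask = cv2.inRange(hsv, lo, hi)
        m = cv2.moments(mask)
        if m["m00"] / 255 < 2000:   # require ~2000 pixels of the color: a held-up object
            continue
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        # Overlay this color's content next to the tracked object.
        for i, item in enumerate(CONTENT[color]):
            cv2.putText(frame, item, (cx, cy + 25 * i),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2)

    cv2.imshow("Braynestorm", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```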

This lets users physically interact with their information and potential ideas. With a few modifications (such as gestures and other features that let the user pull more information from the web without touching the keyboard or mouse), the device could become very useful in the creative process while demonstrating the power of computer vision, augmented reality, technology, art, and tangible user interfaces.