synchroLight

 

STATEMENT

In today's video communication, the working spaces of the participants are physically detached. This discontinuity leads to many inconveniences, one of which is the difficulty of pointing at or indicating physical objects across the remote sites. We were inspired by the phenomenon that light can pass through one side of a pane of glass and illuminate objects on the other side. Here, the penetrating light becomes a medium that facilitates better communication.

 

SYSTEM DESCRIPTION

For this assignment, we sought to re-create this experience in a remote communication scenario and demonstrate a vision in which light travels through the virtual world and reaches out into the physical one. In our system, a user in a video call takes a flashlight and points at any position on the screen. The 2D coordinate of that position is captured and transmitted to the remote side, where a projector simulates a light beam at the same coordinate. In this way, participants can intuitively point at and annotate content in the physical world.
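As a rough illustration of the coordinate hand-off (a sketch under assumed screen and projector resolutions, not the code we used), the snippet below normalizes the detected flashlight position on the local screen and maps it into the remote projector's pixel space; flashlight detection and the network transport are assumed to happen elsewhere.

```cpp
// Hypothetical sketch of the synchroLight coordinate hand-off: the flashlight
// spot is detected locally as a pixel position, normalized, and re-mapped into
// the remote projector's resolution.
#include <iostream>

struct Point { double x, y; };

// Normalize a detected spot from local screen pixels to [0,1] x [0,1].
Point normalize(Point px, double screenW, double screenH) {
    return { px.x / screenW, px.y / screenH };
}

// Map a normalized coordinate into the remote projector's pixel space.
Point toProjector(Point n, double projW, double projH) {
    return { n.x * projW, n.y * projH };
}

int main() {
    Point spot = { 640, 360 };               // detected flashlight spot (assumed)
    Point n = normalize(spot, 1280, 720);    // local screen resolution (assumed)
    Point beam = toProjector(n, 1024, 768);  // remote projector resolution (assumed)
    std::cout << "Project beam at (" << beam.x << ", " << beam.y << ")\n";
}
```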

 

APPLICATION

The metaphor of penetrating light can also be applied to other scenarios. We envisioned two potential applications.

1. Urp++

As a remote version of Urp, a previous project from the Tangible Media Group, Urp++ allows users to use a phicon as a light source that synchronizes the environmental light across the two workbenches. When the light source changes position, the simulated shadows on both sides respond and move to the correct position. Users can also pass the light source to each other remotely. (See the sketch below.)

 

2. X-ray

In this application, the simulated light beam not only points at the remote physical object but also reveals hidden information inside or behind it. The simulated light becomes an X-ray.

 

PRESENTATION SLIDES

synchroLight Video

 

Contributors:

Jifei Ou - Ideation, Presentation, Physical Prototyping

Sheng Kai Tang - Ideation, Presentation, Software Prototyping

Magonote

Magonote is a concept for a collaborative scratching experience. 'Magonote' (孫の手) is a Japanese word that refers to a backscratcher. The system we propose comprises a Magonote-enabled chair and a stuffed animal through which a remote person can participate in scratching.

Scratching is a reflex response to itches, and it is something for which we do not usually depend on others. As a result, it is easy to miss the casual bonding that can happen through the interaction of scratching someone else. In fact, scratching is a common social grooming activity in a number of primates. From this observation, we wanted to design a novel experience that uses scratching as a social object and a medium for reconnecting with friends and family.

The above concept video demonstrates an example scenario. Arun could really use a good scratch right now. He remembers that his friend Bill used to help him out when he was around, particularly when Arun's hands could not easily reach the itch. Unfortunately, Bill no longer lives in his city. Arun knows what to do in this situation. He comes across a Magonote chair nearby and decides to give it a try. At the same time, in a different city, Bill is reading a book. The stuffed koala that Arun gave him suddenly starts nodding its head. He fetches it closer to see why. He notices that there are LED lights on the back of the koala, and they are blinking. From the blinking pattern, he recognizes it as an incoming 'scratch request'. He acknowledges the request by giving the koala a good scratch. Immediately, the Magonote attached to Arun's chair is activated. Arun realizes that it is Bill. Arun likes the way Bill scratches, but the scratch location is slightly off from the itch location. Arun signals this by rubbing his back against the chair. Bill notices that a single LED at the top-center of the koala is now fading in and out. He starts scratching around that particular LED, and the Magonote arm changes the scratch position accordingly. Once Arun is satisfied with the scratch session, he leaves the chair. At Bill's end, no more lights are blinking, and Bill puts the koala back in its original position.

Here we use the scratch interaction as a metaphor for casual bonding. The stuffed animal is a ghost representation of someone dear to us and a metaphor for attention seeking. The dyadic interaction between the remote users takes place in their personal physical spaces. The capabilities of the chair include transmission of initial presence information, actuation of the robotic Magonote, and scratch-location gesture detection through pressure sensors. The features of the stuffed animal are: presence notification through nodding of the head, scratch-intent notification through LED blinking, scratch-position notification through LED fading, and a scratch-sensing surface. We implemented the LED array controls using an embedded Arduino and simulated the rest using the 'Wizard of Oz' technique.
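To make the Arduino LED behaviour concrete, here is a minimal Arduino-style sketch; the pin assignments, timings, and the way events are triggered are assumptions for illustration, not the exact prototype code.

```cpp
// Illustrative sketch of the koala's LED behaviours: blink the whole array for
// an incoming "scratch request", then fade one LED in and out to mark the
// requested scratch location. Pin numbers and timings are assumptions.
const int LED_PINS[] = {3, 5, 6, 9, 10};  // PWM-capable pins (assumed wiring)
const int NUM_LEDS = 5;

void setup() {
  for (int i = 0; i < NUM_LEDS; i++) pinMode(LED_PINS[i], OUTPUT);
}

void blinkAll(int times) {                // "scratch request" pattern
  for (int t = 0; t < times; t++) {
    for (int i = 0; i < NUM_LEDS; i++) digitalWrite(LED_PINS[i], HIGH);
    delay(250);
    for (int i = 0; i < NUM_LEDS; i++) digitalWrite(LED_PINS[i], LOW);
    delay(250);
  }
}

void fadeLocation(int index) {            // "scratch here" pattern
  for (int b = 0; b <= 255; b += 5) { analogWrite(LED_PINS[index], b); delay(10); }
  for (int b = 255; b >= 0; b -= 5) { analogWrite(LED_PINS[index], b); delay(10); }
}

void loop() {
  blinkAll(3);        // in the prototype these would be triggered by chair input
  fadeLocation(2);    // top-center LED, as in the scenario above
}
```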

Team

Dan Sawada
Research, Design and laser-cutting of Magonote, Prototyping (programming, electronics), Concept video production
Anirudh Sharma
Research, Concept video production, Prototyping (programming, electronics)
Sujoy Kumar Chowdhury
Research, Prototyping (programming, electronics), Interaction Design, Concept video production

Related work

  • Scratch Input by Chris Harrison, Scott Hudson, UIST 2008
  • inTouch by Scott Brave, Andrew Dahley, and Professor Hiroshi Ishii, 1998
  • Hug Shirt by CuteCircuit, 2006

 

Constellation

Project Description: 

The goal of our project, Constellation, is to demonstrate an interface that motivates and guides collaborative motion. Collaborative motion covers a cornucopia of activities, from cooking to dancing to swimming to yoga. We focus specifically on dance, and particularly flash mobs, as an application ripe for our interface. We examined the natural behavior of swarms, nature's collaborative motion, and looked to the motion cues of animals such as bees and ants as the basis of our prototype. We developed a system that tracks the synchronization of movement among proximal users. As more users move their limbs in sync, the corresponding movement indicators (LEDs) become increasingly brighter. This approach creates an incentive-based reward system that encourages users to move in synchrony (e.g., in a flash mob) and also creates an artistic effect that enhances the overall aesthetic experience.
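As an illustration of that LED behaviour, the minimal Arduino-style sketch below maps an already-computed synchrony score to LED brightness; the score source, pin number, and update rate are assumptions rather than our actual implementation.

```cpp
// Illustrative mapping from a synchrony score (0..1) to LED brightness.
// How the score is computed (e.g. from comparing motion data of nearby users)
// is outside this sketch; here it is a placeholder function.
const int LED_PIN = 9;   // PWM pin driving a movement indicator (assumed)

float readSyncScore() {
  // Placeholder: fake a slow oscillation so the sketch runs standalone.
  return 0.5f + 0.5f * sin(millis() / 2000.0f);
}

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  float sync = constrain(readSyncScore(), 0.0f, 1.0f);
  analogWrite(LED_PIN, (int)(sync * 255));  // more synchrony -> brighter LED
  delay(50);
}
```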

Prototype States

Prototype Development

Example Scenario

Group Members

Shawn Conrad

Research, Stop Motion Video, Material Logistics
Lauren Kim
Research, Stop Motion Video [image editing], Presentation
Jacqueline Kory
Research, Stop Motion Video [music, compilation]
Adina Roth
Research, Stop Motion Video [photography], Prototype [fabrication, hardware], Presentation
Jonathan Speiser
Research, Stop Motion Video, Prototype [programming, electronics]

Final PDF: Assignment 2_constellation

What’s cookin’?

Dhairya Dand, Christian Ervin, Robert Hemsley, David Nuñez, Laura Perovich

What’s cookin’? is a collaborative cooking system that helps people create meals together even when they’re apart. It is a collection of augmented kitchen tools, surfaces, and a computational representation of meals.

Video

Cooking is a social, shared, and sensory experience. The best meals bring friends together and engage all of our senses, from the sizzling of sautéing onions to the texture of toasted bread.

However, sometimes it’s inconvenient for people to come together to share the creative process; kitchen space can be limited, heavy cooking tools may be difficult to transport, and commitments in the home, such as childcare, may discourage people from relocating.

Our tool allows people to cook together in a remote, synchronous, collaborative environment by mirroring the social, shared, and sensory nature of the cooking experience. It is particularly suited for collaborations that involve multiple parallel processes that work in isolation but also intersect at various times throughout the collaboration, most notably at the end when a meal is produced.

We demonstrate example interactions through a series of video sketches and physical prototypes based on friends coming together to share a meal. Our tool can also be employed in other environments, such as the industrial kitchen or food television. Our model of collaboration extends to large crafts, such as fashion or woodworking, and serves as a step towards a broader framework where embedding knowledge in objects through interaction creates a Wikipedia for the physical world.

From a Table Away

In our collaborative kitchen environment, the tabletop is the mediator that brings the remote collaborators together and bridges the interaction between our tangible utensils, our physical actions, and the environment. Its embedded ambient displays serve as the communication hub between chefs and render augmented information about objects placed on the surfaces.

To reduce information overload between the local and remote collaborators, we use an ambient display that enables glanceable awareness of the remote user's tabletop activity. Through this, users can be made aware of the remote party's progress through a recipe or be alerted when a collaborator is struggling with a specific task. If a user wishes to transition to interacting directly with the remote workspace, they can adjust the focus of their countertop, bringing the impression of the remote environment into the foreground of their workspace.

The surfaces help to create a shared workspace that enables collaborative teaching and fosters a sense of co-location and teamwork. They provide ambient notifications of progress on the entire meal but do not prevent any individual cook from working on their own tasks. Through this we ensure a shared awareness of the ingredients, the tasks being undertaken, and how these interrelate, which together creates the experience of co-located interaction on one shared tabletop.

Spice Spice Baby

The use of spices within this recipe extends the collaboration from the tabletop into the physical environment. Objects that have shared meaning and affordances are mirrored into the remote physical location, allowing users to seamlessly share interactions and knowledge.

When Robert buys a bottle of wine that has particularly robust notes, What's Cookin'? helps the chefs collaborate to improve the meal by altering the shared recipe to better match the wine. The system alerts the chefs about the wine choice, and Laura, an expert on wine and food pairings, indicates to David that he should adjust the spices in his marinara sauce. Laura locally taps the bottles she recommends, and the necessary types and quantities are mirrored into David's environment. David sees his own spice bottles glow, and as he adds the spices to his pot, each bottle's glow slowly fades out until he has deposited the correct amount.

This tool enables more natural collaboration, as users are able to physically use their bodies to interact with the objects as they would if they were performing the same task locally. This allows the user to draw upon their existing mental models and kinetic memory to recall which spices to use within the interaction. We re-use existing tools and objects, enabling the seamless continuation of existing practices while sharing knowledge between these mirrored objects.

The self-aware bottle also records its interactions, allowing the local user to capture their own interactions and replay the information at a later time.

We created a physical prototype of this interaction that demonstrated the experience with a spice bottle: it glowed to indicate the need to add a spice. A tilt sensor inside the device tracked when the user successfully added spice; the color of the glow changed and eventually faded out in response to the correct number of shakes.
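A rough Arduino-style reconstruction of that behaviour might look like the sketch below; the pin wiring, shake threshold, and debounce timing are assumptions, not the prototype's exact code.

```cpp
// Illustrative spice-bottle behaviour: the bottle glows, each detected shake
// dims the glow, and after enough shakes the glow turns off completely.
const int TILT_PIN = 2;          // tilt switch input (assumed wiring)
const int GLOW_PIN = 9;          // PWM pin driving the glow LED (assumed)
const int SHAKES_NEEDED = 6;     // "correct amount" of spice (assumed)

int shakes = 0;
int lastTilt = HIGH;             // idle state with the internal pull-up

void setup() {
  pinMode(TILT_PIN, INPUT_PULLUP);
  pinMode(GLOW_PIN, OUTPUT);
  analogWrite(GLOW_PIN, 255);    // full glow: "add this spice"
}

void loop() {
  int tilt = digitalRead(TILT_PIN);
  if (tilt != lastTilt) {        // tilt switch changed state -> count one shake
    shakes++;
    int remaining = SHAKES_NEEDED - min(shakes, SHAKES_NEEDED);
    analogWrite(GLOW_PIN, map(remaining, 0, SHAKES_NEEDED, 0, 255));
    delay(150);                  // crude debounce
  }
  lastTilt = tilt;
}
```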

Knead you to knead me

David sees that Laura is frustrated kneading her dough, so he sends a tangible video of his hands kneading, which Laura sees overlaid on her own dough. After observing David, she slides his hands aside and practices kneading in her newly learned style.

Video conferencing is the status quo of remote collaboration today. The question we asked was whether it would be more meaningful and personal if the video could be overlaid on the context, in this case the dough, while at the same time providing tangible feedback. This leads to a co-located collaboration experience while still being remote.

Hack the knife

David is struggling to use his knife, so Laura helps him learn to cut by demonstrating the cutting action at her side. David's learning is augmented by audio and haptic feedback.

The kitchen has been a birthplace of tools: knives, utensils, and spoons have existed from the stone age to our own, and beyond their basic function they have hardly evolved. Each of these tools carries a body of knowledge about how to use it and its best practices. We asked what it would be like to have tools that are self-aware, tools that teach and learn from you. These tools not only connect to you but connect you to other people who use them.

We created a working prototype, Shared Spoons, to explore what it means to embed knowledge in the tools we use. We instrumented two wooden spoons with six-degree-of-freedom accelerometer/gyroscope IMUs, which allowed us to determine the orientation of the spoons in 3D space. The spoons were connected to a 3D rendering package that provided visual feedback showing spoon orientation. As a master chef moves the spoon around, sensor data can be recorded so that, for example, the gesture of "whisk" can be differentiated from "stir." As master chefs stir many, many spoons, a knowledge repository of physical interactions with tools is collaboratively generated. The spoons can also be used synchronously. We demonstrated a scenario where a master chef stirs one of the Shared Spoons while the apprentice stirs the other. As the apprentice matches the velocity and orientation of the master's spoon, the software generates a pleasant tone; when the spoons are not in harmony, a discordant tone sounds.
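The matching logic can be sketched as follows; this is an illustrative host-side check with assumed orientation values and tolerance, not the actual Shared Spoons software.

```cpp
// Illustrative check: compare the two spoons' orientations and pick a consonant
// or dissonant pair of tone frequencies. IMU acquisition happens elsewhere.
#include <cmath>
#include <iostream>

struct Orientation { double pitch, roll, yaw; };   // degrees

// Simple angular distance between two orientations.
double mismatch(const Orientation& a, const Orientation& b) {
    return std::fabs(a.pitch - b.pitch) + std::fabs(a.roll - b.roll) + std::fabs(a.yaw - b.yaw);
}

int main() {
    Orientation master     = {10.0, 5.0, 90.0};    // example readings (assumed)
    Orientation apprentice = {12.0, 7.0, 88.0};

    const double THRESHOLD = 15.0;                 // "in harmony" tolerance (assumed)
    if (mismatch(master, apprentice) < THRESHOLD) {
        std::cout << "In harmony: play 440 Hz + 660 Hz (consonant fifth)\n";
    } else {
        std::cout << "Out of sync: play 440 Hz + 466 Hz (dissonant minor second)\n";
    }
}
```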

Time on my side

By interacting with the handle of the skillet, David sees the remaining cooking time. When David's side of the meal is nearly ready, Laura's kitchen buzzer goes off, telling her it is time to leave for David's home.

Temporal coordination is a key aspect of collaboration, and time is something we as humans aren't good at keeping. What we have here is a collective workspace: the pan, the kitchen buzzer, even the cellphone work in tandem. These objects, spread across distances, collaborate with each other so that we don't need to actively worry about time, allowing us to focus on what matters most: the cooking.

Related Work & Prior Art

Surfaces and Spaces
* ClearBoard (Ishii, 1992)
* Tangible Bits (Ishii, 1997)
* Double DigitalDesk (Wellner, 1993)

Cooking
* CounterActive (Ju, 2001)
* Home of the Future (Microsoft, 2003)
* Counter Intelligence Group (MIT Media Lab)

Objects
* Cooking with Elements (Bonanni)
* IntelligentSpoon (Cheng)
* ChameleonMug (Selker)

Individual Contributions

We assert that all team members shared work equally and fairly, collaborated on group efforts with enthusiasm, and also provided focused support in areas of expertise.

Laura was instrumental as Project Manager for the team and drove the development of our presentation along with the video script. Dhairya worked with Laura on the script for the video and also was responsible for shooting film. Christian was primarily responsible for editing the project video and creating the user interface simulations. David and Robert worked on the physical, working prototypes with David taking lead on the design and development of the Shared Spoons and Robert designing and implementing the augmented spice bottle.

All team members shared responsibilities on project direction and implementation.

[Presentation Slides]

 

RubixCubicle

[Presentation Slides]

This project reflects on the differences between introverts and extroverts, and how each approaches presentation versus private thought and work.  Departing from a traditional office setup of separate cubicles and a conference room, the working space can now be manipulated so that collaboration no longer requires as much separation and sorting into different locations; instead, boundaries can be more fluid.  Walls are movable, allowing individuals to merge workspaces and show each other the native, raw materials they were working on.

Users designate what they wish to keep private even as they open up their environment to another person. The wall itself is a digital object. With the aid of phicons, it uses touchscreen-like features to allow the hiding and showing of certain content, and it automatically organizes information as public (to be shown) or private (to be hidden) when it senses users moving between private and collaborative configurations.

2D physical content can also be transferred into a digital format on the wall.

 

Two scenarios are described to flesh out the nuances of this new collaborative object:
1. Collocated
Body, object, spaces, configuration, private, and public

In this scenario, when working privately, each user has one private ThoughtSpace board and one SharedSpace board.  The SharedSpace board is the "showing platform," on which all content that should be shown during later collaboration is placed.  The ThoughtSpace board is the user's private working space, but importantly, they have the option of using a highlighting phicon (described more below) to designate any work in progress they want to show when collaborating later; thus, ideas at different stages of progress are separated spatially.  All other ThoughtSpace items are hidden automatically by the computer when moving into collaborative configurations, as seen in the second figure below.  The body is engaged when initiating collaboration through the moving of the digital wall object.  Space and information are reconfigured in the transition between private and collaborative modes.

2. Remote
Body, object, private, and public

Each office is equipped with a wall "socket" bordered by an actuating flexible material.  To send an ambient invitation to collaborate remotely, the inviter moves one of their walls into their socket; this causes the invitee's wall socket to become depressed, or "open."  The invitee can then start the collaboration by moving their wall into the revealed socket.

Flipping the desk surface down triggers a full body conferencing mode, allowing the exchange of all gestural and postural information.

When the digital wall is shifted into desk mode, the desk becomes ThoughtSpace and the vertical screen becomes SharedSpace.  The eraser phicon can be used to erase content, and its other side, a highlighter phicon, when used on ThoughtSpace items, causes them to be shown when shifting into collaboration.  The drawer phicon is a handle that, when brought to the intersection of the desk and "monitor" in desk mode and pulled towards the user (like opening a drawer), lets the user pull open a digital drawer for archiving. Lastly, the shelf is a piece of the wall that rotates open, where users store content they want out of view of their actual working space.

Because of the lack of physical proximity, ambient digital shadows are employed to inform users when remote team members are present, and knocking on a teammate's icon is employed as a way to reach them more urgently.

This digital wall does not replace other physical furniture. However, because this interface is meant to be moved around, we imagined that furniture would be sparse; users would have to be "lighter" in the way they inhabit their workspace.  To accommodate this, and to extend the involvement of the body with this interface, we built features of office furniture into the usage of the wall.  These included an eraser/highlighter phicon, a revolving piece of the wall that reveals a digital shelf when flipped open, a desk, and a drawer phicon (described above).

Limitations
Implementation of movable walls would, of course, be constrained by the available architecture of the working environment.  Additionally, it could be hard to add more users in a collocated space; different geometries, and how they affect the efficiency, productivity, and feel of the space, would need to be explored.  Also, the touchscreen aspect has specific advantages and disadvantages compared to smaller devices that use a mouse, stylus, or keyboard.

The Importance of Scale
By using different boards (i.e., different spaces) to designate the privacy and progress of work, we allow the user to represent their ideas spatially, which eases the cognitive burden of organizing the various thoughts and work being built.  Users are forced to step back, much as an artist would to look at a large canvas, affording a bigger-picture view that likely prevents unnecessary obsessive focus on and attachment to certain details.  In addition, the ability to have a workspace and a "gallery" adjacent to it, as an artist often desires, is an interesting feature to add to the working environments of other fields and may yield new insights from the perspective of considering what to show and hide in one's work.

Prototype
https://www.dropbox.com/s/axwcvem0pbki9gt/MIT_PROJECT_checkpoint1-5.apk
A short click highlights an item yellow (highlighter phicon). A long click turns it black. Two long clicks in a row delete the information (eraser phicon). A change in orientation hides information not designated to be shown.
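The click logic can be restated in code roughly as follows; the prototype itself is an Android app, so this C++ sketch (with an assumed 500 ms long-press threshold) only illustrates the rules above.

```cpp
// Hypothetical restatement of the phicon click rules, not the prototype code.
#include <iostream>

enum class Mark { None, Highlight, Private, Deleted };

struct Item {
    Mark mark = Mark::None;
    int consecutiveLongClicks = 0;
};

// duration in milliseconds; the 500 ms long-press threshold is an assumption
void onClick(Item& item, int durationMs) {
    if (durationMs < 500) {                     // short click: highlighter phicon
        item.mark = Mark::Highlight;
        item.consecutiveLongClicks = 0;
    } else {                                    // long click: turn black (private)
        item.mark = Mark::Private;
        if (++item.consecutiveLongClicks >= 2)  // two long clicks: eraser phicon
            item.mark = Mark::Deleted;
    }
}

int main() {
    Item note;
    onClick(note, 200);   // short click -> highlighted yellow
    onClick(note, 800);   // long click  -> black
    onClick(note, 800);   // second long click -> deleted
    std::cout << "final mark = " << static_cast<int>(note.mark) << "\n";
}
```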

Inspiration (Related Work)
Academic studies: Sharlin et al. (Pers Ubiquit Comput 2004), "Tangible user interfaces, humans, and spatiality"; Binder et al. (Pers Ubiquit Comput 2004), "Configurability in a mixed-media environment for design students"

Design projects: ReticularSpaces (Bardram et al., CHI '12); AWE, a robotic wall with reconfigurable surfaces (Green et al., IROS '09)

Commercial: Graham Hill's LifeEdited apartment in Manhattan, 420 ft²
http://www.treehugger.com/modular-design/new-york-times-treehugger-founder-graham-hill-tiny-apartment-convertible-tricks.html

Gary Chang's Domestic Transformer in Hong Kong, 24 different rooms from 330 ft²
http://www.interactivearchitecture.org/gary-chang-reconfigurable-living-spaces-suitcase-house-hotel.html

Team
-Sophia Chang – Concept development, final published slides, presentation compilation, stop motion video photography and editing
-Christine Hsieh – Concept development, unpublished project development slides, stop motion story line/directing and sound, detailed description and conceptual framework write up
-Wenting Guo – Concept development, final published slides, stop motion video editing
-Andrea Miller – Concept development, unpublished project development slides, app development, stop motion production

RubixCubicle 0 from Sophia on Vimeo.

Let’s Play Legos

Group Members: Jesse Austin-Breneman, Zachary Barryte, Eric Jones, Woong Ki Sung, Trygve Wastevedt

Description: Let’s Play Legos is a tangible collaborative play tool that allows remote players to craft a story and share a play space using Legos.  Using a Kinect and the Relief system, a player’s Lego block forms and actions are recreated in a remote location, allowing the players to collaboratively build a structure.  Gestural and audio commands also allow the players to collaboratively construct a soundscape and landscape.

The system is designed to allow two remote players to share a physical play space.  The players can collaboratively build and modify an architectural structure using Legos.  When one player places a block, the Kinect senses the change in position and raises the other player's board accordingly.  Using the same base structure, players can then collaboratively create a story, using audio commands and gestures to change a displayed background as well as trigger sound effects.  The sound effects are localized in the play space through gestures to heighten the "telepresence" effect.  The remote player's hands and character pieces are projected onto the play space from above.
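A rough sketch of that depth-to-actuation mapping is shown below; the grid size, noise floor, and baseline values are assumptions and do not reflect the actual Kinect/Relief pipeline.

```cpp
// Illustrative mapping: compare the current depth grid with a baseline captured
// before play, and convert the difference into pin heights for the remote table.
#include <array>
#include <iostream>

constexpr int GRID = 8;                          // coarse pin grid (assumed size)
using DepthGrid = std::array<std::array<float, GRID>, GRID>;

// Any cell closer to the camera than the baseline is assumed to hold a newly
// placed block; its height drives the matching pin on the remote table.
DepthGrid pinHeights(const DepthGrid& baseline, const DepthGrid& current) {
    DepthGrid heights{};
    for (int r = 0; r < GRID; r++)
        for (int c = 0; c < GRID; c++) {
            float delta = baseline[r][c] - current[r][c];  // mm closer than baseline
            heights[r][c] = delta > 5.0f ? delta : 0.0f;   // 5 mm noise floor (assumed)
        }
    return heights;
}

int main() {
    DepthGrid base{};
    for (auto& row : base) row.fill(800.0f);     // empty table ~800 mm from the Kinect
    DepthGrid now = base;
    now[3][4] = 781.0f;                          // a block raises the surface by 19 mm
    DepthGrid h = pinHeights(base, now);
    std::cout << "pin (3,4) height: " << h[3][4] << " mm\n";
}
```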

The following schematics show the main metaphor used in this system as well as the way it is implemented.  The remote player is projected onto a "window" through which they can reach in and effect change within the shared play space.  This is achieved using two Kinects, actuated tables, and projectors.

The following videos are an imagined user interaction video highlighting the different interactions between the remote players, and a video of the prototype showing the response of the actuated table to changes in a block structure.

User Interaction Video

Actuated table prototype video

For more information please look at the Let’s Play Legos slides.

Contributions:  Woong Ki Sung and Zachary Barryte produced the prototype; Eric Jones, Jesse Austin-Breneman, and Trygve Wastevedt produced the user interaction video.

 

 

Tele-Gesture

Samvaran Sharma, Anjali Muralidhar, Henry Skupniewicz, Jason Gao, Hayoun Won

DESCRIPTION:

Tele-Gesture is a tangible interface for collaboration that allows users to physically point at a detailed 3D object in real life and have their pointing gestures replicated/mirrored at a remote unit by a robotic finger, to be viewed by remote collaborators either at the same time (synchronously) or at another point in time (asynchronously).

Tele-Gesture demonstrates the importance of having a physical representation of pointing and relaying the associated physical vector information to the viewer, which is not provided by prior work using optical / laser-based approaches.
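As an illustration of the geometry involved (not the MATLAB code used in the prototype), the sketch below converts a tracked stylus position into the pan/tilt angles a two-servo robotic finger would need; the coordinate frame and units are assumptions.

```cpp
// Illustrative conversion from a tracked Cartesian position to pan/tilt angles
// for a robotic pointer. Frame conventions and units are assumptions.
#include <cmath>
#include <iostream>

constexpr double kPi = 3.14159265358979323846;

struct PanTilt { double panDeg, tiltDeg; };

// Point from the pointer's base toward a target at (x, y, z) metres.
PanTilt anglesToward(double x, double y, double z) {
    double pan  = std::atan2(y, x);                          // rotation about the vertical axis
    double tilt = std::atan2(z, std::sqrt(x * x + y * y));   // elevation above the table plane
    return { pan * 180.0 / kPi, tilt * 180.0 / kPi };
}

int main() {
    PanTilt cmd = anglesToward(0.30, 0.10, 0.15);            // example tracked position
    std::cout << "pan " << cmd.panDeg << " deg, tilt " << cmd.tiltDeg << " deg\n";
    // In the prototype, values like these were streamed to the Arduino at ~20-40 Hz.
}
```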

USE CASES:

This project addresses the inadequacy of two-dimensional screens for conveying a collaborator’s physical manifestations of emphasis and attention on detailed three-dimensional models or physical prototypes in many collaborative scenarios.

  1. Two architects discussing changes to a building via a 3D-printed physical model can now do so remotely while still preserving the importance of the person's body (pointing and gesturing motions with the hand and arm) directed at the object, in the context of the space around it.
  2. Product designers can demonstrate how to use a physical prototype by using the Tele-Gesture pointer to point at and even operate the product: pressing buttons, manipulating interfaces, and so on.

 

TEAM MEMBERS AND CONTRIBUTIONS:

Everyone: Concepting, problem space exploration, design discussions.

Samvaran Sharma: Responsible for the MATLAB machine-vision algorithm that tracks the stylus in real time via webcam input, derives its 3D spatial position, converts it to polar coordinates, and transmits commands to the Arduino at ~20-40 Hz. Also responsible for the paper write-up detailing the philosophical approach and project overview.

Jason Gao: Hardware design, prototype assembly, Arduino + electronics

Hayoun Won: Prototype assembly (with Jason, Anjali, and Henry), case studies summary (with everyone), presentation slides document, photography, video (image editing, music, compilation, photography), diagramming of the philosophical approach, project concept, and system overview, and visual ideation of design rationales and user scenarios.

Anjali Muralidhar: Hardware design, prototype assembly

Henry Skupniewicz:  Hardware design, prototype assembly, prototype documentation

MEDIA:

Video link: http://www.youtube.com/watch?v=y0eU_wMXWRA

Slides: Tele-Gesture Slides Final

Paper: Tele-Gesture