Wave Alchemy

Life is full of moments that come with obvious or subtle expressions of energy. As human beings, we commonly attach different emotions to such expressions. And yet when we want to capture this energy and interact with it, we are often constrained to flat, 2D encapsulations: video, audio, or photographic recordings. Moreover, in this digital age we often look back at a memory through a screen, with hundreds of files stored away digitally, further removing us from the emotion of the event. What if we had a way to experience this emotional energy again, and dynamically interact with it in infinitely complex ways? Here we present a concept and prototype that explores a novel physical-visual language of dynamic, emotionally expressive waveforms, designed to transform the way we perceive different forms of energy as we go about our daily lives. With the power of computation hidden within the physical materials of the interface, we create an interactive form made of Radical Atoms that can take one form of energy and transmute it into a waveform as its output, or Wave Alchemy.

Team

Dan Sawada – Ideation, physical prototyping, programming, interaction design, video production and editing
Anirudh Sharma – Ideation, video production and editing, programming, partial implementation of a concept that was abandoned
Sujoy Kumar Chowdhury – Ideation, physical prototyping, programming, research, video production
Christine Hsieh – Ideation, research, presentation, physical prototyping, video production, interaction design for a concept that was abandoned
Andrea Miller – Ideation, research, presentation, physical prototyping, video production

Final Paper

CHI-extended-abstract_wave FINAL

Final Presentation

Wave Alchemy presentation slides

 

 

(re)place

STATEMENT

 

The tendency in the creation of digital technologies has been to focus on the design of tools which allow digital information to augment the physical: physical first, digital as an added layer. This direction of thinking is evident in early examples such as the garage door opener, which creates convenience through the automation of a previously existing physical object. However, the same trend also continues within more recent explorations of ways to integrate digital tools into the experience of our physical world.

Augmented reality, a growing area within human-computer interaction, is a distinct example of digital augmentation, as the objects being augmented are often separate from the augmenting tool. Using screens or glasses, digital information can be overlaid on any object viewed in the physical world, augmenting it with additional digital information.

As another example, the Tangible Media Group at MIT has also researched means of augmentation that are more closely connected to the manipulation of the physical. In the Relief project [3], the digitally projected image of a mountain adds visual information for understanding the topographical forms, which can be displayed and manipulated on its form-changing surface.

Both of these current examples overlay a digital image over the physical, enabling access to additional information that cannot be obtained directly from the objects themselves. The inclination to explore the digital as an added layer of augmentation follows the development of all new technologies, which are always created in relation to what exists, with overlay being one of the easiest ways to bring digital affordances into the world. However, as we are now increasingly familiar with the digital, it is also important to reverse the question and ask how physical means might also augment a body of digital information in space.

This paper introduces an interface which selectively materializes digital information within space to allow one to tangibly work with a specific portion of a larger body of digital information. The name (re)place refers both to the replacing of abstract digital information by physical tangibility, as well as the adjustable placement of a physical slice within space as a way of deciding what area to materialize.

 

Replace_(CHI-Formatted Paper)

(re)place Video

 

Contributors:

Sophia Chang -  Ideation, Presentation, Content Generation, Physical Prototyping

Jifei Ou - Ideation, Presentation, Physical Prototyping

Sheng Kai Tang - Ideation, Presentation, Software Prototyping, Physical Prototyping

 

 

synchroLight

 

STATEMENT

In current video communication scenarios, the shared working spaces of the participants are physically detached. This discontinuity leads to many inconveniences and problems; one of them is the difficulty of pointing at or indicating physical objects between remote participants. We were inspired by the phenomenon that light can pass through one side of a pane of glass and illuminate objects on the other side. In this case, the penetrating light becomes a medium that facilitates better communication.

 

SYSTEM DESCRIPTION

For this assignment, we sought to re-create this experience in a remote communication scenario and to demonstrate a vision in which light can travel through the virtual world and reach into the physical one. In our system, a user in a video call takes a flashlight and points it at any position on the screen. This 2D coordinate is captured and transmitted to the remote side, where a projector simulates a light beam at the same coordinate. In this way, participants can intuitively point at and annotate content in the physical world.
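As a rough sketch of how the capture-and-transmit step could work (this is an assumption for illustration, not the prototype's actual implementation; the remote address, port, and brightness threshold are placeholders), a webcam watching the local screen can find the flashlight's bright spot and send a normalized coordinate to the remote side:

import cv2
import json
import socket

REMOTE_ADDR = ("192.168.0.42", 9000)  # placeholder address of the remote projector host

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cap = cv2.VideoCapture(0)  # webcam watching the local screen

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (11, 11), 0)       # suppress sensor noise
    _, max_val, _, max_loc = cv2.minMaxLoc(gray)     # brightest pixel = flashlight spot
    if max_val > 200:                                # only report a clearly lit spot
        h, w = gray.shape
        x, y = max_loc[0] / w, max_loc[1] / h        # normalize so screen size does not matter
        sock.sendto(json.dumps({"x": x, "y": y}).encode(), REMOTE_ADDR)

On the remote side, the projector host would simply map the received (x, y) back to its projection area and render a simulated light beam there.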

 

APPLICATION

The metaphor of penetrating light can be also applied to other scenarios. We envisioned two potential applications.

1. Urp++

As a remote version of Urp, a previous project from the Tangible Media Group, Urp++ allows users to use a phicon as a light source to synchronize the environmental light across the two workbenches. When the light source changes position, the simulated shadows on both sides respond with the correct position. Users can also pass the light source to each other remotely. (see the sketch below)

 

2. X-ray

In this application, the simulated light beam can not only point at the remote physical object, but also reveal hidden information inside or behind it. The simulated light becomes an X-ray.

 

PRESENTATION SLIDES

synchroLight Video

 

Contributors:

Jifei Ou - Ideation, Presentation, Physical Prototyping

Sheng Kai Tang - Ideation, Presentation, Software Prototyping

Obake

Inspired by the tactility of solids and the fluidity of liquids, we have created Obake, which embodies a solid shape-changing form with a liquid exterior affording fluid-like interactions. Obake objects are 3D wireframe shapes with the capacity to morph into new solid forms while maintaining a fluid exterior, enabling new interaction opportunities.

We have outlined eleven key interactions and gestures for an Obake surface, as illustrated in our presentation – intrude, extrude, prod, pull, push, friction, compress, expand, warp, stitch, and s-bend.

We have currently implemented a 2.5D display based on the Obake principles, which we use to demonstrate a geographic data viewer. Current features include exploring layers of the terrain both laterally (soil and water profile) and linearly (vegetation profile), pulling out shapes (buildings) and prodding them (a mountain) to explore detail, using the elastic texture to create paths (a river) and customize them (river thickness), simulating with friction (fire), and morphing display affordances according to shape (physical extrusion of the terrain from a 2D surface to an elevated 3D environment with a depth-based soil profile).

Presentation: Obake

Paper: Obake

Contributions:
Rob Hemsley – Ideation, Presentation, Physical Prototyping
Dhairya Dand –  Ideation, Presentation, Software Prototyping

PURSeus

PURSeus is a bag that has the ability to change size, shape, material, split into multiple bags, automatically deliver items, and understand the user’s schedule and needs. These features eliminate the need for users to have several bags for a variety of uses because PURSeus can understand its contents and transform itself into an all-purpose bag. The theoretical material that forms PURSeus can be extended for use as any type of smart container the user requires.

PURSeus Presentation

PURSeus (CHI-Formatted Paper)

Contributions:
Jason Gao – Physical prototyping, presentation, ideation
Anjali Muralidhar – Physical prototyping, video presentation, presentation, ideation
Samvaran Sharma – MATLAB-based voice recognition, prototyping, ideation
Henry Skupniewicz – Physical prototyping, ideation
Helena Hayoun Won – Video presentation, presentation, ideation

ArTouch

In ArTouch, we explore the ways in which radical atoms can change the paradigms in which we experience art and paintings.  We challenge the convention of simply looking at a painting by inviting the visitor to touch and interact with our art in new tangible ways, creating a richer experience.  In this way, we create not only evolutionary pieces of art, but also art that becomes collaborative as different visitors interact with the piece throughout the course of the day.

The video below presents our vision for a few of the interactions made possible by our system.

In making the video, we concentrated on three main objects for use in interacting with the art — the hands, face, and mouth.  We feel that these “implements” make the interactions natural and ubiquitous.  In this video we highlighted three main interactions.  The first was the ability to reveal hidden layers of a painting by blowing away the top layers.  These underlying layers could reveal an artist’s method for creating a work, or specially created textures left there by the artist for the visitor to discover.  Second, visitors participated in collaboratively creating a work by changing its physical texture.  Using their hands, visitors could smooth out various sections of the piece, transitioning from rough rock, to smooth clay, and finally to the shiny bottom layer.  Over time, the roughness returns, indicating how long it has been since the last interaction and giving new visitors an opportunity to interact with it.  Finally, the last piece of art invited the visitor to explore their own emotional state through abstract textures.  By sensing the visitor’s mood, the painting could change its texture and physical resistance to motion accordingly.

Physical prototype showing underlying mechanisms used to create textures on the left. The right side shows an enlarged area of the image for closer examination of local texture. This area is remotely linked to textures in the first panel as seen in the second image.

In our physical prototype, as seen in the images below, we wanted to highlight some ideas not presented in the vision video or only hinted at.  The first was the idea of remote interactions.  Using a set of textured wheels coupled by an elastic belt, we created a dynamic texture that responds to changes in either picture.  We also explored the idea of interactions that could change based on previous input, through the use of a Peltier device that can heat or cool depending on the state that the previous user determined.

We feel that ArTouch will not only revolutionize how art is experienced in museums, but will also bring more art into personal spaces through the use of remote collaboration.  This collaboration will create a more personal piece of art that can be enjoyed in new ways every day, making for a more exciting, tangible experience.

For more information regarding our project, please take a look at our presentation slides (PDF) or paper (PDF).

Contributions:
Jessie Austin-Breneman – Physical prototyping, presentation
Zachary Barryte – Physical prototyping, presentation
Eric Jones – Physical prototyping, presentation
Woong Ki Sung – Vision video making, presentation
Wenting Guo – Vision video, presentation
Trygve Wastvedt – Vision video, presentation

Radical Textiles

Christian Ervin, David Nunez, Laura Perovich

Video (password protected): https://vimeo.com/55501151

Presentation slides: RadicalTextiles

Final Write-up: Radical Textiles Final Write-up

What if your shirt could change its form, color, and elasticity–instantly?  Or your pile of dirty clothes could morph into sparkly shoes or a bicycle helmet?  Could fabric react to what’s happening in the world… or what was going on in our heads?

In our world of radical textiles, each stitch is a radical atom.  Conceptually, we can think of these stitches as a kind of stem cell of textiles.  They multiply and divide, merge, and change depending on where they are on the body.  They are also “perfect” in that they have ideal knowledge of what they should be and how to act in different situations.

What might this world look like?  In this future, a man could pull a tie out of his shirt collar if he was underdressed, shirts would grow into jackets as soon as we walked out into the cold, and we could design an outfit and fold it into our pocket if we wanted to wear it later.

Though these scenarios may seem far away, there has been a growing interest in computational textiles in recent years leading to a variety of projects, especially around movement, interpersonal relationships, and empowerment.  This includes projects from the Media Lab such as PillowTalk which allows people to connect remotely through soft objects and DressCode that uses textiles as a way to teach programming skills, especially to young girls.

We break down the world of radical textiles along a few dimensions including use cases, modes of interaction, and types of textile changes.  We also explore the implications of a future that includes radical textiles.  They have the potential to improve our world through conservation of materials, the democratization of design, and increasing creativity.  Yet this substantial shift–a transition similar in scale to the recent shift from landlines to cell phones–may have unforeseen consequences and impacts on society.

We assert that all team members shared work equally and fairly, collaborated on group efforts with enthusiasm, and also provided focused support in areas of expertise.  All team members shared responsibilities on project direction and implementation.  Laura had primary responsibility for the development of our presentation, David for the physical prototypes, and Christian for the video editing.

PersonaBench

Title: PersonaBench

PersonaBench Extended Abstract

Description: We explore the design of socially dynamic furniture, which adapts its form to maximize interaction in public spaces. Individuals in today’s public spaces are increasingly isolated by their technological devices; MP3 players, smartphones, and tablets erect social barriers that inhibit interpersonal interactions. By rewarding and promoting emergent cooperative behavior, our furniture is architected to foster and catalyze connections between people.

Final presentation: PersonaBench

Stop-motion video: http://youtu.be/1NtT74SaQh8

Team:
Shawn Conrad
Research, Presentation, Happy/sad bench storyboard, Abstract
Lauren Kim
Research, Presentation, Stop motion video [photoshop], Bench fabrication, Abstract
Jacqueline Kory:
Research, Presentation, Stop motion video [compilation, music], View bench storyboard, Abstract
Adina Roth
Research, Presentation, Stop motion video [photoshop], Bench fabrication, Abstract
Jonathan Speiser
Research, Presentation, Bench fabrication, Abstract

Magonote

Magonote is a concept for a collaborative scratching experience. ‘Magonote’ (〜孫の手〜) is a Japanese word that refers to a backscratcher tool. The system we propose comprises a Magonote-enabled chair and a stuffed animal through which a remote person can participate in scratching.

Scratching is a reflex response to itches, for which we do not usually depend on others, so it is easy to miss the casual bonding that can happen through the interaction of scratching. In fact, scratching is a common social grooming activity in a number of primates. From this observation, we wanted to design a novel experience that uses scratching as a social object and a medium for reconnecting with friends and family.

The above concept video demonstrates an example scenario. Arun could really use a good scratch right now. He remembers that his friend Bill used to help him out when he was around, particularly when Arun’s hands could not reach the itch location very easily. Unfortunately, Bill no longer lives in his city. Arun knows what to do in this situation. He comes across a Magonote chair nearby and decides to give it a try. At the same time, in a different city, Bill is reading a book. The stuffed koala that Arun gave him suddenly starts nodding its head. He brings it closer to see why. He notices that there are some LED lights on the back of the koala, and they are blinking. From the pattern of blinking, he recognizes it as an incoming ‘scratch request’. He acknowledges the request by giving the koala a good scratch. Immediately, the Magonote attached to Arun’s chair is activated. Arun realizes that it is Bill. Arun likes the way Bill scratches, but the scratch location is slightly off from the itch location. Arun signals this by rubbing his back against the chair. Bill notices that a single LED at the top-center of the koala is now fading in and out. He starts scratching around that particular LED, and the Magonote arm changes the scratch position accordingly. Once Arun is satisfied with the scratch session, he leaves the chair. At Bill’s end, no more lights are blinking. Bill puts the koala back in its original position.

Here we have used the scratch interaction as a metaphor for casual bonding. The stuffed animal is a ghost representation of someone dear to us and a metaphor for attention seeking. The dyadic interaction between the remote users takes place in their personal physical spaces. The capabilities of the chair include transmission of initial presence information, actuation of the robotic Magonote, and detection of scratch-location gestures through pressure sensors. The features of the stuffed animal are: presence notification through nodding of the head, scratch-intent notification through LED blinking, scratch-position notification through LED fading, and a scratch-sensing surface. We implemented the LED array controls using an embedded Arduino and simulated the rest using a ‘Wizard of Oz’ technique.
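As an illustration of the notification logic, the simulation sketched below (written for this write-up only; the real prototype drove the LED array from the embedded Arduino, and the array size, blink rate, and state names here are assumptions) blinks all of the koala's LEDs for an incoming scratch request and fades a single LED in and out to mark the itch location:

import math

NUM_LEDS = 8  # assumed size of the LED array on the koala's back

def led_frame(t, state, itch_led=None):
    """Return per-LED brightness values (0-255) for time t in seconds."""
    if state == "request":                          # incoming scratch request: blink everything
        on = int(t * 2) % 2 == 0                    # assumed 2 Hz blink
        return [255 if on else 0] * NUM_LEDS
    if state == "locate" and itch_led is not None:  # fade one LED in and out at the itch spot
        level = int(127 + 127 * math.sin(2 * math.pi * 0.5 * t))
        return [level if i == itch_led else 0 for i in range(NUM_LEDS)]
    return [0] * NUM_LEDS                           # idle: all off

# Example: a few frames of the "locate" animation for LED 3.
for step in range(4):
    print(led_frame(step * 0.5, "locate", itch_led=3))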

Team

Dan Sawada
Research, Design and laser-cutting of Magonote, Prototyping (programming, electronics), Concept video production
Anirudh Sharma
Research, Concept video production, Prototyping (programming, electronics)
Sujoy Kumar Chowdhury
Research, Prototyping (programming, electronics), Interaction Design, Concept video production

Related work

  • Scratch Input by Chris Harrison, Scott Hudson, UIST 2008
  • inTouch by Scott Brave, Andrew Dahley, and Professor Hiroshi Ishii, 1998
  • Hug Shirt by CuteCircuit, 2006

 

Constellation

Project Description: 

The goal of our project, Constellation, is to demonstrate an interface that motivates and guides collaborative motion. Collaborative motion covers a cornucopia of activities, from cooking, to dancing, to swimming, to yoga. We focus specifically on dance, and particularly flash mobs, as an application ripe for our interface. We examined the natural behavior of swarms, nature’s collaborative motion, and looked to the natural cues of motion in animals such as bees and ants as the basis of our prototype. We developed a system that tracks the synchronization of movement among proximal users. As more users move their limbs in sync, the corresponding movement indicators (LEDs) become increasingly brighter. This approach creates an incentive-based reward system that encourages users to move in synchrony (e.g., in a flash mob), and also creates an artistic effect that enhances the overall aesthetic experience.
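A minimal sketch of the synchronization-to-brightness mapping is shown below (an assumed formulation for illustration, not the prototype's code; the correlation threshold and window length are made up): each wearer contributes a short window of accelerometer magnitudes, and LED brightness grows with the number of users whose motion correlates with a reference user.

import numpy as np

def in_sync(a, b, threshold=0.8):
    """Two motion windows count as synchronized if they correlate strongly."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.dot(a, b) / len(a)) > threshold

def led_brightness(windows):
    """Brightness (0-255) grows with the number of users synced to user 0."""
    reference = windows[0]
    synced = sum(in_sync(reference, w) for w in windows[1:])
    return int(255 * synced / max(len(windows) - 1, 1))

# Example: three dancers, two of them performing the same move.
t = np.linspace(0, 2, 100)
move = np.sin(2 * np.pi * 1.5 * t)
print(led_brightness([move, move + 0.05 * np.random.randn(100), np.random.randn(100)]))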

Prototype States

Prototype Development

Example Scenario

Group Members

Shawn Conrad

Research, Stop Motion Video, Material Logistics
Lauren Kim
Research, Stop Motion Video [image editing], Presentation
Jacqueline Kory
Research, Stop Motion Video [music, compilation]
Adina Roth
Research, Stop Motion Video [photography], Prototype [fabrication, hardware], Presentation
Jonathan Speiser
Research, Stop Motion Video, Prototype [programming, electronics]

Final PDF: Assignment 2_constellation

What’s cookin’?

Dhairya Dand, Christian Ervin, Robert Hemsley, David Nuñez, Laura Perovich

What’s cookin’? is a collaborative cooking system that helps people create meals together even when they’re apart. It is a collection of augmented kitchen tools, surfaces, and computational representations of meals.

Video

Cooking is a social, shared, and sensory experience. The best meals bring friends together and engage all of our senses, from the sizzling of sautéing onions to the texture of toasted bread.

However, sometimes it’s inconvenient for people to come together to share the creative process; kitchen space can be limited, heavy cooking tools may be difficult to transport, and commitments in the home, such as childcare, may discourage people from relocating.

Our tool allows people to cook together in a remote, synchronous, collaborative environment by mirroring the social, shared, and sensory nature of the cooking experience. It is particularly suited for collaborations that involve multiple parallel processes that work in isolation but also intersect at various times throughout the collaboration, most notably at the end when a meal is produced.

We demonstrate example interactions through a series of video sketches and physical prototypes based on friends coming together to share a meal. Our tool can also be employed in other environments, such as the industrial kitchen or food television. Our model of collaboration extends to larger crafts, such as fashion or woodworking, and serves as a step towards a broader framework in which embedding knowledge in objects through interaction creates a Wikipedia for the physical world.

From a Table Away

In our collaborative kitchen environment the tabletop is the mediator which brings together the remote collaborators and bridges the interaction between our tangible utensils, our physical interactions and the environment. Its embedded ambient displays serve as the communication hub between chefs and render augmented information about objects placed on the surfaces.

To reduce information overload between the local and remote collaborators, we use an ambient display that enables glanceable awareness of the remote user’s tabletop activity. Through this, users can be made aware of the remote party’s progress through a recipe, or be alerted when a collaborator is struggling with a specific task. If a user wishes to transition to interacting directly with the remote workspace, they can adjust the focus of their countertop, bringing the impression of the remote environment into the foreground of their workspace.

The surfaces help to create a shared workspace that enables collaborative teaching and fosters a sense of co-location and teamwork. It provides ambient notifications of the progress of the entire meal, but does not prevent any individual cook from working on his own tasks. Through this we ensure a shared awareness of the ingredients, the tasks being undertaken, and how these interrelate, which together creates the experience of co-located interaction on one shared tabletop.

Spice Spice Baby

The use of spices within this recipe extends the collaboration from the tabletop into the physical environment. Objects which have shared meaning and affordances are mirrored into the remote physical location allowing users to seamlessly share interactions and knowledge.

When Robert buys a bottle of wine that has particularly robust notes, What’s Cookin’? helps the chefs collaborate to improve the meal by altering the shared recipe to better match this wine. The system alerts the chefs about the wine choice, and Laura, an expert on wine and food pairings, indicates to David that he should adjust the spices in his marinara sauce. Laura locally taps the bottles she recommends, and the necessary types and quantities are mirrored into David’s environment. David sees his own spice bottles glow, and as he adds the spices to his pot, each bottle’s glow slowly fades out until he has deposited the correct amount.

This tool enables more natural collaboration, as users are able to physically use their bodies to interact with the objects as they would if they were performing the same task locally. This allows users to draw upon their existing mental models and kinetic memory to help recall which spices to use within the interaction. We re-use existing tools and objects, and so enable the seamless continuation of existing practices while sharing knowledge between these mirrored objects.

The self-aware bottle also records its interactions, allowing the local user to record their own interactions and replay the information at a later time.

We created a physical prototype of this interaction that demonstrated the experience with a spice bottle; it glowed to indicate the need to add a spice. A tilt sensor inside the device tracked when the user successfully added spice; the color of the glow changed and it eventually dimmed in response to the correct number of shakes.
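The logic behind the glowing bottle can be sketched roughly as follows (a hedged reconstruction; the class prototype used a tilt sensor and an LED, and the class name and shake count here are assumptions): each detected shake counts toward the recommended amount, and the glow dims as the cook gets closer to the target.

class SpiceBottle:
    def __init__(self, shakes_needed):
        self.shakes_needed = shakes_needed
        self.shakes_done = 0

    def on_tilt_event(self):
        """Called whenever the tilt sensor fires (one shake of the bottle)."""
        if self.shakes_done < self.shakes_needed:
            self.shakes_done += 1

    def glow_level(self):
        """Full brightness while spice is still needed; fades to 0 when done."""
        remaining = self.shakes_needed - self.shakes_done
        return int(255 * remaining / self.shakes_needed)

# Example: the remote chef recommends 5 shakes of oregano.
bottle = SpiceBottle(shakes_needed=5)
for _ in range(3):
    bottle.on_tilt_event()
print(bottle.glow_level())  # glow has dimmed but not yet gone out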

Knead you to knead me

David sees that Laura is frustrated kneading her dough, so he sends a tangible video of his hands kneading, which Laura can see overlaid on her dough. After observing David, she slides his hands aside and practices kneading with her newly learned style.

Video conferencing is the status quo of remote collaboration today. The question we asked was whether it would be more meaningful and personal if the video could be overlaid on the context, in this case the dough, while at the same time providing tangible feedback. This leads to a co-located collaboration experience while still being remote.

Hack the knife

David is struggling to use his knife, so Laura helps him learn to cut by mimicking the cutting action on her side. David’s learning is augmented by audio and haptic feedback.

The kitchen has been the birthplace of tools – knives, utensils, spoons – which have existed from the Stone Age to our own; aside from their functionality, they have hardly ever evolved. Each of our tools is associated with a body of knowledge about how to use it and its best practices. We asked what it would be like to have tools that are self-aware, tools that teach and learn from you. These tools not only connect to you but connect you to other people who use them.

We created a working prototype, Shared Spoons, to explore what it means to embed knowledge in the tools we use. We instrumented two wooden spoons with 6 degree of freedom accelerometer / gyroscope IMUs. This allowed us to determine the orientation of the spoons in 3D space. The spoons were connected to a 3D rendering package that provided visual feedback showing spoon orientation. As a master chef moves the spoon around, sensor data can be recorded so that the gesture of “whisk” can be differentiated from “stir,” for example. As master chefs stir many, many spoons, a knowledge repository of physical interactions with tools is collaboratively generated. The spoons can be used synchronously, as well. We demonstrated a scenario where a master chef stirs one of the Shared Spoons while the apprentice stirs the other spoon. As the apprentice “matches” the velocity and orientation of the master’s spoon, the software generates a pleasant tone; when the spoons are not in harmony, a discordant tone sounds.
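The matching-and-tone step can be sketched roughly as below (an illustrative reconstruction, not the prototype's code; the quaternion representation, threshold, and tone frequencies are assumptions): compare the orientations reported by the two spoons' IMUs and pick a consonant tone when they agree, a clashing one when they do not.

import numpy as np

def angle_between(q_master, q_apprentice):
    """Angular difference in degrees between two unit quaternions (w, x, y, z)."""
    dot = abs(float(np.dot(q_master, q_apprentice)))
    return np.degrees(2 * np.arccos(min(dot, 1.0)))

def feedback_tone(q_master, q_apprentice, threshold_deg=20.0):
    """Return a frequency in Hz: a pleasant tone when in harmony, discordant otherwise."""
    if angle_between(q_master, q_apprentice) < threshold_deg:
        return 440.0        # A4: the spoons agree
    return 466.2            # a clashing semitone above

# Example: the apprentice almost matches the master's orientation.
master = np.array([1.0, 0.0, 0.0, 0.0])
apprentice = np.array([0.99, 0.1, 0.0, 0.0]) / np.linalg.norm([0.99, 0.1, 0.0, 0.0])
print(feedback_tone(master, apprentice))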

Time on my side

By interacting with the handle of the skillet, David sees the remaining cooking time. When David’s side is about ready, Laura’s kitchen buzzer goes off, telling her to leave for David’s home.

Temporal coordination is a key aspect of collaboration, and time is something that we as humans aren’t good at keeping. What we have here is a collective workspace – the pan, the kitchen buzzer, even the cellphone – working in tandem. These objects, spread across distances, collaborate with each other so that we don’t need to actively worry about time, allowing us to focus on what’s most important – the cooking.

Related Work & Prior Art

Surfaces and Spaces
* ClearBoard (Ishii, 1992)
* Tangible Bits (Ishii, 1997)
* Double DigitalDesk (Wellner, 1993)

Cooking
* CounterActive (Ju, 2001)
* Home of the Future (Microsoft, 2003)
* Counter Intelligence Group (MIT Media Lab)

Objects
* Cooking with Elements (Bonanni)
* IntelligentSpoon (Cheng)
* ChameleonMug (Selker)

Individual Contributions

We assert that all team members shared work equally and fairly, collaborated on group efforts with enthusiasm, and also provided focused support in areas of expertise.

Laura was instrumental as Project Manager for the team and drove the development of our presentation along with the video script. Dhairya worked with Laura on the script for the video and also was responsible for shooting film. Christian was primarily responsible for editing the project video and creating the user interface simulations. David and Robert worked on the physical, working prototypes with David taking lead on the design and development of the Shared Spoons and Robert designing and implementing the augmented spice bottle.

All team members shared responsibilities on project direction and implementation.

[Presentation Slides]

 

RubixCubicle

[Presentation Slides]

This project reflects on the differences between introverts and extroverts, and how each approaches presentation versus private thought and work.  Departing from a traditional office setup of separate cubicles and a conference room, working space can now be manipulated so that there is less separation and sorting into different locations for collaboration; instead, boundaries can be more fluid.  Walls are moveable, allowing individuals to merge workspaces and show each other the native, raw materials they were working on.

Users designate what they wish to keep private even as they open up their environment to another person. The wall itself is a digital object. With the aid of phicons, it uses touchscreen-like features to allow the hiding and showing of certain content, and it automatically organizes public (to be shown) or private (to be hidden) information accordingly when it senses users moving between private and collaborative configurations.

2D physical content can also be transferred into a digital format on the wall.

 

Two scenarios are described to flesh out the nuances of this new collaborative object:
1. Collocated
Body, object, spaces, configuration, private, and public

In this scenario, when working privately, each user has one private ThoughtSpace board and one SharedSpace board.  The SharedSpace board is the “showing platform”, upon which all content that should be shown during later collaboration is placed.  The ThoughtSpace board is the user’s private working space, but importantly, the user has the option of using a highlighting phicon (described more below) to designate any work in progress they want to show when collaborating later – thus, ideas at different points of progress are separated spatially.  All other ThoughtSpace items are hidden automatically by the computer when moving into collaborative configurations, as seen in the second figure below.  The body is engaged when initiating collaboration through the moving of the digital wall object.  Space and information are reconfigured in the transition between private and collaborative modes.

2. Remote
Body, object, private, and public

Each office is equipped with a wall “socket” that is bordered by an actuating flexible material.  An inviter signals an ambient invitation to initiate a remote collaboration by moving one of their walls into their socket, which then causes the invitee’s wall socket to become depressed, or “open”.  The invitee can then start the collaboration by moving their wall into the revealed socket.

Flipping the desk surface down triggers a full body conferencing mode, allowing the exchange of all gestural and postural information.

When the digital wall is shifted to desk mode, the desk becomes ThoughtSpace and the vertical screen SharedSpace.  The eraser phicon can be used to erase content, and the other side, a highlighter phicon, when used on ThoughtSpace items, causes them to show when shifting into collaboration.  The drawer phicon is a handle that, when brought to the intersection of the desk and “monitor” in desk mode and pulled towards the user (like opening a drawer), allows the user to pull open a digital drawer for archiving. Lastly, the shelf is a piece of the wall that rotates open where users store content they want out of the view of their actual working space.

Because of lack of physical proximity, ambient digital shadows are employed to inform users when remote team-members are present, and knocking on a teammate’s icon is employed as a way to reach them in a more urgent manner.

This digital wall does not replace other physical furniture. However, because of the desired movement of this interface, we imagined that furniture would be sparse – users would have to be “lighter” in the way they inhabited their workspace.  To accommodate this, and to extend the involvement of the body with this interface, we built features of office furniture into the usage of the wall.  This included an eraser/highlighter phicon, a revolving piece of the wall which shows a digital shelf upon flipping it open, a desk, and a drawer phicon (described above).

Limitations
Implementation of moveable walls would, of course, be constrained by the available architecture of the working environment.  Additionally, it could be hard to add more users in a collocated space – different geometries, and how they affect the efficiency, productivity, and feel of the space, need to be explored.  Also, the touchscreen aspect has specific advantages and disadvantages as compared to smaller devices which use a mouse interface or stylus and a keyboard.

The Importance of Scale
By using different boards (i.e. different spaces) to designate the privacy and progress of work, we allow the user to spatially represent their ideas, which cognitively eases the burden of trying to organize various thoughts and work being built.  Users are forced to step back, much as an artist would to look at a large canvas, affording a bigger picture view that likely prevents unnecessary obsessive focus and attachment to certain details.  In addition, the ability to have a workspace and a “gallery” adjacent, as an artist often desires, is an interesting feature to add to the working environment of other fields and may yield new insights from the perspectives of considering showing and hiding in one’s work.

Prototype
https://www.dropbox.com/s/axwcvem0pbki9gt/MIT_PROJECT_checkpoint1-5.apk
Short click highlights yellow (highlighter phicon). Long click turns black. Two long clicks in a row delete the information (eraser phicon). A change in orientation hides information not designated to be seen.

Inspiration (Related Work)
Academic studies: Sharlin et al. (Pers Ubiquit Comput 2004), “Tangible user interfaces, humans, and spatiality”; Binder et al. (Pers Ubiquit Comput 2004), “Configurability in a mixed-media environment for design students”

Design projects: ReticularSpaces (Bardram et al CHI ’12), AWE (robotic wall w/ reconfigurable surfaces) (Green et al IROS ’09)

Commercial:  Manhattan – Graham Hill’s LifeEdited apartment – 420 ft^2
http://www.treehugger.com/modular-design/new-york-times-treehugger-founder-graham-hill-tiny-apartment-convertible-tricks.html

Hong Kong – Gary Chang’s Domestic Transformer – 24 different rooms from 330 ft^2
http://www.interactivearchitecture.org/gary-chang-reconfigurable-living-spaces-suitcase-house-hotel.html

Team
-Sophia Chang – Concept development, final published slides, presentation compilation, stop motion video photography and editing
-Christine Hsieh – Concept development, unpublished project development slides, stop motion story line/directing and sound, detailed description and conceptual framework write up
-Wenting Guo – Concept development, final published slides, stop motion video editing
-Andrea Miller – Concept development, unpublished project development slides, app development, stop motion production

RubixCubicle 0 from Sophia on Vimeo.

Let’s Play Legos

Group Members: Jesse Austin-Breneman, Zachary Barryte, Eric Jones, Woong Ki Sung, Trygve Wastevedt

Description: Let’s Play Legos is a tangible collaborative play tool which allows remote players to craft a story and share a playspace using Legos.  Using a Kinect and the Relief system, a player’s Lego block forms and actions are recreated in a remote location, allowing the players to collaboratively build a structure.  Gestural and audio commands allow the players to also collaboratively construct a soundscape and landscape.

The system is designed to allow two remote players to share a physical play space.  The players can collaboratively build and modify an architectural structure using Legos.  When one player places a block, the Kinect senses the change in position and raises the other player’s board accordingly.  Using the same base structure, players can then collaboratively create a story, using audio commands and gestures to change a displayed background as well as trigger sound effects.  The sound effects are located in the play space through the use of gestures to heighten the “telepresence” effect.  The remote player’s hands and character pieces are projected onto the play space from above.
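One way the depth-to-actuation step could work is sketched below (a simplified assumption, not the class implementation; the pin-grid resolution and the downward-looking camera geometry are made up): the change in the Kinect depth image over the baseplate is averaged per pin and converted into a whole number of brick heights for the remote actuated table.

import numpy as np

PIN_GRID = (12, 12)          # assumed resolution of the actuated table
BRICK_HEIGHT_MM = 9.6        # height of one standard Lego brick

def pin_heights(depth_before, depth_after):
    """Average each depth image over the pin grid and convert the change
    in surface height (in mm) into a whole number of brick heights per pin."""
    h, w = depth_before.shape
    gh, gw = PIN_GRID
    cells_before = depth_before.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    cells_after = depth_after.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    raised_mm = cells_before - cells_after          # camera looks down: closer = taller
    return np.clip(np.round(raised_mm / BRICK_HEIGHT_MM), 0, None).astype(int)

# Example: a 2-brick tower appears in one corner of a flat 480x480 depth image.
before = np.full((480, 480), 1000.0)
after = before.copy()
after[:40, :40] -= 2 * BRICK_HEIGHT_MM
print(pin_heights(before, after)[0, 0])  # -> 2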

The following schematics show the main metaphor used in this system as well as the way it is implemented.  The remote player is projected onto a “window” through which they can reach in and effect change within the shared play space.  This is achieved using two Kinects, actuated tables, and projectors.

The following videos are an imagined user interaction video highlighting the different interactions between the remote players, and a video of the prototype showing the response of the actuated table to changes in a block structure.

User Interaction Video

Actuated table prototype video

For more information please look at the Let’s Play Legos slides.

Contributions:  Woong Ki Sung and Zachary Barryte produced the prototype, Eric Jones, Jesse Austin-Breneman and Trygve Wastevedt produced the user interaction video.

 

 

Tele-Gesture

Samvaran Sharma, Anjali Muralidhar, Henry Skupniewicz, Jason Gao, Hayoun Won

DESCRIPTION:

The Tele-Gesture is a tangible interface for collaboration that allows users to physically point to a detailed 3D object in real life, and have their pointing gestures replicated / mirrored at a remote unit by a robotic finger to be viewed by remote collaborators, either at the same time (synchronously), or at another point in time (asynchronously).

Tele-Gesture demonstrates the importance of having a physical representation of pointing and relaying the associated physical vector information to the viewer, which is not provided by prior work using optical / laser-based approaches.
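A simplified sketch of the track-and-relay pipeline appears below (the actual system, as noted in the contributions, used a MATLAB machine-vision algorithm; this Python version is an assumption for illustration, recovers only pointing angles rather than a full 3D position, and the marker color, field of view, and serial port are placeholders):

import cv2
import serial

arduino = serial.Serial("/dev/ttyUSB0", 115200)    # placeholder serial port for the robotic finger
cap = cv2.VideoCapture(0)
FOV_X_DEG, FOV_Y_DEG = 60.0, 45.0                  # assumed camera field of view

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))  # assumed green stylus tip
    m = cv2.moments(mask)
    if m["m00"] > 0:
        h, w = mask.shape
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid of the tip
        pan = (cx / w - 0.5) * FOV_X_DEG            # horizontal pointing angle
        tilt = (0.5 - cy / h) * FOV_Y_DEG           # vertical pointing angle
        arduino.write(f"{pan:.1f},{tilt:.1f}\n".encode())   # stream angles to the robotic finger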

USE CASES:

This project addresses the inadequacy of two-dimensional screens for conveying a collaborator’s physical manifestations of emphasis and attention on detailed three-dimensional models or physical prototypes in many collaborative scenarios.

  1. Two remote architects discussing changes to a building over a 3D-printed physical model can now do so remotely, while still preserving the role of a person’s body (pointing and gesturing motions with the hand and arm) at the object, in the context of the space around it.
  2. Product designers can demonstrate how to use a physical prototype by pointing at, and even operating, the product with the Tele-Gesture pointer to press buttons, manipulate interfaces, etc.

 

TEAM MEMBERS AND CONTRIBUTIONS:

Everyone: Concepting, problem space exploration, design discussions.

Samvaran Sharma: Responsible for the MATLAB Machine Vision algorithm to track the stylus in real time via webcam input, derive the 3D spatial position, convert to polar coordinates, and transmit commands to the Arduino at ~20-40Hz. Also responsible for the paper writeup detailing the philosophical approach and project overview.

Jason Gao: Hardware design, prototype assembly, Arduino + electronics

Hayoun Won: Prototype assembly (with Jason, Anjali, and Henry), summary of case studies (with everyone), presentation slides document, photography, video (image editing, music, compilation, photography), diagramming the philosophical approach, project concept, and system overview, visual ideation of design rationales, and user scenarios.

Anjali Muralidhar: Hardware design, prototype assembly

Henry Skupniewicz:  Hardware design, prototype assembly, prototype documentation

MEDIA:

Video link: http://www.youtube.com/watch?v=y0eU_wMXWRA

Slides: Tele-Gesture Slides Final

Paper: Tele-Gesture

tinder

tinder is a cognitive prosthetic to address long term ideation.

  • capturing ideas, instantly
  • marinating ideas, perpetually
  • igniting ideas, magically

Capture – available, simple accessories

The capturing interface for tinder is deceptively simple and is an attempt to remove friction from long-term idea generation. Often, we are reluctant to capture new ideas as they appear in our heads because our “inboxes” are clumsy, unavailable, or too complex. When we write down ideas in sketchbooks or on the backs of napkins, they often end up in a jumbled pile of notes, or filed away forever in a dusty file cabinet, or simply discarded after their usefulness has expired. Digital tools exacerbate the problem because they make capturing ideas as easy as bookmarking a web page. We suddenly have an overwhelming backlog of digital detritus, all of it at one point representing interesting ideas but now stifling our creativity. Indeed, by the time we pull out our cell phones to tap out an idea, the initial spark of joy has already left the thought.

tinder provides a very simple capturing interface (i.e. the tinder, itself) – cheap, disposable scraps of magic paper that are always available on our desks or in our pockets; an ordinary pen or pencil is more than adequate to mark these pieces of tinder. Our minds are calmed because capturing an idea requires nothing more than the scrap of paper already at hand.

Marinate – processing, powerful tinderbox

It is good to have holding places for ideas. By putting our thoughts in external holding bins, the ideas have a chance to marinate over the long term. Indeed, this is the heart of long term creative sparks. The longer ideas remain in a holding pattern, the more chance they have to connect in new and beautiful patterns. We grow in wisdom and can apply new contexts and perspective to these ideas. In fact, in early childhood development, the ability to make semantic connections between ideas is critical to normal cognitive growth [Beckage, Smith & Hills, 2011]

The tinderbox is a computationally powerful device disguised as an elegant wooden box. The box is wirelessly connected to cloud-based APIs that provide access to handwriting recognition, machine learning, and semantic network modeling algorithms. It detects and processes the content of every idea placed inside; the system organizes, optimizes, and reshuffles ideas in the semantic map based on everything from what is in current news headlines to the mood of the user. We feel a quiet confidence when we can find our ideas marinating as long as necessary in a brine of computational augmentation.

Ignite – revealing, magic tinder

When we are ready to address our long-term ideas, we simply remove the pieces of tinder from the tinderbox and begin to play. We notice that one of the pieces is glowing. We don’t need to know why, but tinder wants us to notice this spark. (In reality, the system’s algorithms have predicted that this piece of tinder contains a particularly relevant idea).

The heart of tinder is a new material called papyro. This paper-like material has sensor and computational matrices woven among its pulp fibers. Each piece of tinder, made of papyro, has a unique signature and fine-grained proprioception (a sense of position and motion), so it can compute its location relative to other pieces of tinder. Because papyro contains smart ink and luminescent tint, as we spread and shuffle the scraps on our desk, the luminescence spreads across the tinder and is constantly shifting as we reposition and decontextualize these ideas.

Finally, when we have a pile of tinder glowing brightly, we can “ignite” the stack by drawing a matchstick and making a striking gesture. Something amazing happens when a pile of tinder ignites. The nanobots that make up the papyro start to self-assemble into one, larger piece of tinder… a bigger idea that is the synthesis of the others. papyro is paper-like, so I can simply tear off pieces of the idea I don’t like or fold the paper to see multiple sides of an argument. If I want to add another idea, I can push one piece of tinder into the other and the papyro will melt together. If this new idea goes back into the tinderbox, the semantic network model will update, as well.

Presentation PDF: tinder_davidnunez

Digital Clay

My ideation idea is to create a series of discrete units that are able to sense their position in relation to the ones around them and then communicate this data back to a CAD program on the computer.

PDF

Smart magnetic interface

1st Project_Helena Hayoun Won

Smart magnetic interface is a new and important means of understanding relationships with others, their communities, their rules, their habits, and their references to the world. Designing radically new interface experiences for the creation of digital media can lead to unexpected discoveries and shifts in perspective.

I aim to motivate users to tell their stories through new forms of data filtering, and to drive reflection on the outcomes of their behaviors. The key to binding the physical act of collection to the digital opportunity of representation is metadata customization and filtering. These mechanisms for gathering and filtering a visual, gestural, tactile, and verbal story exist as alternatives to the usual fragmented tools.

When using this interface, users play with how an object would collect material relative to the viewpoint of the acting spot. They iterate between trying and collecting in a world of multiple perspectives. The results are entirely new genres of user-created works, in which users finally capture the cherished visual idioms of action.

S-Ray

Slides

HW1-SRay-Final

Introduction

Shape exploration is a process that relies on different kinds of tools and media at different stages. Designers use their minds for imagining, adopt pen and paper for 2D sketching, and manipulate clay for 3D modeling. The time cost of ideation with each medium increases as designers move from abstract to concrete representations. We believe that reducing the time cost of transitioning from one medium to another could help improve the quantity and quality of ideation. Therefore, we propose the idea of the Sublimation Ray (S-Ray), which transmits and receives bidirectional signals to and from Radical Atoms in order to change their phases; Radical Atoms is a new kind of computational material proposed by Prof. Hiroshi Ishii of the Tangible Media Group at the MIT Media Lab. With the S-Ray, designers can change a Radical Atom from gas to liquid to solid on demand, and this mechanism is believed to speed up and enrich the ideation process.

Reference

Hiroshi Ishii, Dávid Lakatos, Leonardo Bonanni, and Jean-Baptiste Labrune. 2012. Radical atoms: beyond tangible bits, toward transformable materials. interactions 19, 1 (January 2012), 38-51. 

Prototypes

Mixed-Reality Model Displayer

Slides: Mixed-Reality Model Displayer

Problem Description

There are times when designers desperately want to change their digital models simply with their hands, so that it is easier and faster to see the instant outcome before getting into any design details – for example, when urban designers are sharing concepts with other members of the design team, or when interior designers are presenting and discussing plans with their customers. The current solution to this problem is tedious, since designers have to open their 3D models in software, add and change models, and finally render them to see the result.

Solution

Mixed-Reality Model Displayer enables designers to manipulate their digital models in a more intuitive way by connecting them to physical models in the real world. Designers are able to put real physical models of any kind behind the screen, so that when viewing through the screen they see their physical models placed in the environment of their digital models. They are also able to rotate the displayer to see a panoramic view of their models, just as they always do in modelling software. This concept could be developed further by equipping the displayer with holoprojection and sensors, so that designers can change not only the physical models but also the digital ones projected at the same time.

1st Class Project: Tangible Foam Cutter

This documentation (1st_assignment_jlab) describes the 1st class project. My idea was a hybrid manual/CNC foam cutter. The cutter could create prototypes out of high-density foam stock based on a CAD model. The user could then interact with the prototype physically and make changes by manually operating the hot-wire cutter. Based on encoders in the handles, the computer could track changes made to the prototype and reflect these in the CAD model.
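A rough sketch of how the encoder feedback might map back to the CAD model (purely illustrative assumptions; the encoder resolution, 2D geometry, and logging format are made up): each handle's encoder counts give one endpoint of the hot wire, and the logged sequence of wire positions forms a cut path that a CAD plug-in could subtract from the model.

COUNTS_PER_MM = 40  # assumed encoder resolution

def wire_endpoints(counts_left, counts_right):
    """Convert (x, y) encoder counts on each handle into endpoint positions in mm."""
    left = (counts_left[0] / COUNTS_PER_MM, counts_left[1] / COUNTS_PER_MM)
    right = (counts_right[0] / COUNTS_PER_MM, counts_right[1] / COUNTS_PER_MM)
    return left, right

cut_path = []  # list of (left, right) endpoint pairs, i.e. ruled-surface segments

def log_manual_move(counts_left, counts_right):
    """Record the current wire position so the CAD model can replay the cut."""
    cut_path.append(wire_endpoints(counts_left, counts_right))

# Example: the user drags the wire 10 mm along x on both handles.
log_manual_move((0, 0), (0, 0))
log_manual_move((400, 0), (400, 0))
print(cut_path)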

Re-Form

Slides: Re-Form

Problem Description

Rapid prototyping and the free-form construction of physical prototypes play a vital role in the ideation and design experimentation process. The current approach, though, is inflexible, forcing a design to become increasingly constrained as it develops from initial sketch to CAD model and on to the first fabricated prototype. This inflexibility prevents the designer from being able to naturally experiment and try new designs and configurations in which their understanding of the physical affordances and textures of the object shapes the design.

Solution

To overcome this, the Re-Form project presents an augmented prototyping workbench where physical prototypes can be rapidly developed in a free-form modelling environment. The project is built around an augmented projection system in which visual elements such as the UI and object textures can be displayed on the physical form of the prototype. Through the use of reverse electrovibration, textures can also be applied to the object, allowing rapid experimentation with both the visual and physical attributes of the prototype. Through these techniques, Re-Form can support Wizard of Oz-style interaction with physical objects, enabling new user interactions to be tested immediately without having to redesign and fabricate each individual adjustment.

QWRKR

QWRKR (“Quirky Worker”) is a simultaneous organizational tool for the prototypical multi-colored post-it note ideation session. It comes as a suite of tools to help make these sessions more productive. [PDF]

Brainstorming produces an abundance of ideas, but team members often find it difficult to synthesize these ideas into actionable results. Additionally, analysis tends to occur after the fact rather than during the process of idea-generation. Typically, a team member is designated as “mediator” or “facilitator” for the ideation session—QWRKR takes on this role, organizing content on the fly and directing the brainstorm session to more productive results.

Faced with a chaotic wall full of ideas, humans tend to rely on simple, intuitive analytical tools to group and organize content, ignoring potentially valuable ideas.

1. Alternate organization schemes

Optical Character Recognition combined with Natural Language processing could discover latent organizational structures in the post-it data. By data mining Wikipedia and other web content, QWRKR could find associations between ideas as the team works, and check popularity to determine what ideas are trending and might be of higher value.

2. Visualizing Content

Visualizing content early can help team members discover unforeseen connections between ideas. This tool temporarily converts, with a single click, a post-it note into its corresponding image on Wikipedia. Clicking an image multiple times cycles through various associated images until appropriate.

3. Computer Vision

By tracking team members’ eye movements, QWRKR can create heat maps of ideas that are getting too much attention, and suggest focus on ideas being overlooked. As an additional association tool, it can track the sequence of focus through the idea set.

4. Semantic Organization

QWRKR can quickly re-arrange post-it data into sentences, creating surprising associations between ideas. These quasi-random reconfigurations could serve as a valuable lateral thinking aid for the team.

5. Idea Accumulation

Clicking on an idea in this mode automatically creates additional associated words or ideas around it, a directed brainstorming tool.

6. Directed Lateral Thinking

Oblique Strategies and other lateral thinking tools can help disrupt ossified thinking in the brainstorming process.

7. The Quirky Randomizer

This tool cycles through all tools at a controlled level of randomness. The ensuing cognitive dissonance can be productive, encouraging the team to reconfigure preconceived notions on the fly.

Design Shaper

Presentation Slides

Slides

Motivation

These days, design activities mostly start from and end up on the computer screen, especially in the phase of developing the forms and shapes of designs. In this situation, the keyboard and mouse are the main tools of designing; sliding and clicking a mouse are the only interactions allowed to us. However, if we look back on our childhood, our designs were developed through physical interactions; we played with clay or Lego blocks with our hands, having physical sensations. I believe this physical sense of interaction is very important in designing because it provides us with a good way of exploring design ambiguity. Even when we do not know what the next step in our design process is, we can be inspired by physically touching, moving, and distorting our design objects, and this physical process sometimes leads the way to unexpected design possibilities. Here, the question was how to restore this physical sense of designing and crafting in today’s digital environment.

Project Descriptions

In this assignment, a new design environment is proposed in order to restore the physical sense of designing and crafting. This design environment detects users’ gestures with a motion-sensing device and responds to them by manipulating digital objects on a large screen. It enables users to regain the physical sense of designing and crafting and to actively explore ambiguities in the design process. In addition, by mapping users’ diverse motion vocabularies to design vocabularies, this design environment could also provide creative ways to explore new design modes. For example, if the environment is set to interpret dancing gestures, users could explore new ways of designing by dancing.
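As one possible gesture-to-design mapping (an assumption sketched for illustration, not the proposed environment's actual implementation), two tracked hand positions from the motion-sensing device can be turned into a scale and a rotation for the on-screen design object, so that stretching or twisting the hands reshapes it:

import math

def gesture_to_transform(left_hand, right_hand, rest_distance=0.4):
    """left_hand/right_hand are (x, y, z) positions in meters from the sensor."""
    dx = right_hand[0] - left_hand[0]
    dy = right_hand[1] - left_hand[1]
    dz = right_hand[2] - left_hand[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    scale = distance / rest_distance                 # pulling hands apart enlarges the object
    rotation_deg = math.degrees(math.atan2(dy, dx))  # tilting the hand axis rotates it
    return {"scale": scale, "rotation_deg": rotation_deg}

# Example: hands 60 cm apart, right hand slightly higher than the left.
print(gesture_to_transform((0.0, 1.0, 1.5), (0.6, 1.1, 1.5)))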

Concept Demo

Strobe Display

Slides- Water Droplets Display

Motivation

The idea behind StrobeDisplay is to create floating physical bits of digital information. The inspiration is derived from an installation at the MIT Museum, where strobes of light on falling water are used to create an illusion of static data points (green dots). These elements could be used to represent information elements, and interesting mid-air visualizations can be created.

Motivation video(Strobes)

Background

Present forms of display work on the x and y axes (pixels). There have been recent attempts to make 2.5D displays, some from the Media Lab itself – Recompose and Relief. Attempts to exploit the z axis in an interface have been few; ZeroN is one example.

AmbientMind

AmbientMind

AmbientMind is the concept for a new architectural health solution to the growing problem of destructive focus on work that results from the many portable devices we carry around with us today.  Because these electronics essentially narrow our bodies and our whole mental and emotional focus to a screen, our capacity to think creatively and be grounded in ourselves is very limited.

The interactional space created for AmbientMind would be designed so that our thinking processes, ideas, and emotions could be captured in an expanded physical space around us, in a circular room not unlike our own bedrooms in size and purpose.  The focus is on a space that is comfortable and carries our identity, but that also has a diverse array of tools allowing us to visualize, document, and store the products of our minds.  Using a combination of mind-mapping programs, electronic whiteboards and post-it notes, and other still-developing digital ideation systems, this room would combine the much-needed physical movement and presence that engender health and wellness with the productivity of having our ideas at hand and being able to flow through them with tools at our fingertips.

By separating the process of ideation, with its intense mental processing and working-memory demands, from the physical embodiment of the working space, AmbientMind allows the ideator to traverse back and forth optimally between the tangible real world and the world she is creating through her work.

Braynestorm

braynestorm

Braynestorm is an augmented-reality device that seeks to make the initial creative process behind ideation more fun and productive. When first researching and brainstorming a topic for a paper, assignment, or other project, we typically default to our old standbys of mouse and keyboard, browsing Google for information about our area of interest. This differs greatly from the brainstorming behind sculpture, dance, or art, where there is an innate physical component to the creative process (in addition to computer-aided research) that frees us from the paradigm of screen, mouse, and keyboard and lets us interact with our physical space as we explore different avenues.

Braynestorm seeks to reduce distraction during the creative process and improve efficiency by providing a large volume of information in varied forms, and allowing users to interact with their data in a tangible way in order to tap into the advantages of physical freedom. It does this by using a computer-vision aided process to create an augmented reality setup for the user.

The user first types in a relevant keyword or set of keywords for the desired topic and presses “Braynestorm”, which prompts the program to automatically scour the internet for visual data (from Google Images) and semantic data (facts from Wikipedia and news from Google News). When a red object is held up to the camera, the program tracks the position of the object and overlays images relating to the topic on the video feed. When a blue object is held up, news headlines are displayed, and when a green object is held up, facts are shown.
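A rough sketch of what the colour-tracking step might look like with OpenCV; the HSV thresholds, blob-size cutoff, and colour-to-content mapping follow the description above, but the exact values are assumptions rather than the authors’ implementation.

```python
import cv2
import numpy as np

# HSV thresholds are illustrative; tune for your lighting. Red wraps around
# the hue axis, so it needs two ranges.
COLOR_RANGES = {
    "images": [((0, 120, 70), (10, 255, 255)),
               ((170, 120, 70), (180, 255, 255))],   # red object -> topic images
    "news":   [((100, 120, 70), (130, 255, 255))],   # blue object -> headlines
    "facts":  [((40, 70, 70), (80, 255, 255))],      # green object -> facts
}

def find_marker(frame_bgr):
    """Return (content_type, (x, y)) for the largest coloured blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    best = None
    for content_type, ranges in COLOR_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        c = max(contours, key=cv2.contourArea)
        area = cv2.contourArea(c)
        if area < 500:                       # ignore specks
            continue
        m = cv2.moments(c)
        centre = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))
        if best is None or area > best[2]:
            best = (content_type, centre, area)
    return None if best is None else best[:2]

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(find_marker(frame))   # e.g. ("news", (312, 204)) when a blue object is held up
cap.release()
```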

This allows users to physically interact with their information and potential ideas, and with a few modifications (such as gestures and other features to allow the user to download more information from the web without touching their keyboard or mouse), this device could grow to be very useful in the creative process, while demonstrating the power of computer vision, augmented reality, technology, art, and tangible user interfaces.

Maquette

Maquette is to three dimensions what I/O Brush is to two: a framework for rapid idea generation using objects and textures at hand in the real world. Digital modeling in three dimensions is a slow process that can never keep pace with the brain’s speed of idea generation. By using physical objects to quickly build up complicated digital forms, three-dimensional modeling can rival sketching as a fast ideation tool.

The device is a hand-held object with two buttons, two depth cameras, and one RGB camera. The sensors act like a Kinect to scan objects and textures in real time. Through the buttons and simple gestures, any found object can be captured into the space of an on-screen digital model. The same real-life object can then be used (moved and rotated) to control its digital representation. Objects can be placed, removed from the model, or booleaned out of it using the buttons on the remote and on-screen menus. It is also possible to use props to perform operations on the model (e.g., a piece of paper can slice through the model, cutting it in half and removing one piece). Finally, as with I/O Brush, textures can be painted onto the model using image data picked up by the RGB sensor.
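The “move and rotate the real object to control its digital twin” step could reduce to applying a tracked rigid transform to the captured vertices. A minimal NumPy sketch, with an invented cube standing in for a scanned object and a hand-written rotation matrix standing in for the tracker output:

```python
import numpy as np

def apply_tracked_pose(vertices, rotation, translation):
    """Apply a tracked rigid-body transform (as the depth cameras would report)
    to the captured object's vertices. `vertices` is (N, 3), `rotation` is a
    3x3 matrix, `translation` a length-3 vector. Purely illustrative."""
    return np.asarray(vertices) @ np.asarray(rotation).T + np.asarray(translation)

# Example: rotate a unit cube's corners 90 degrees about Z and shift it.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], dtype=float)
rot_z_90 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
print(apply_tracked_pose(cube, rot_z_90, [0.5, 0.0, 0.0]))
```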

Slides

Map-It!

ideation by design 

Map-It! is an interactive digital tabletop that promotes the learning of geography. It would be preprogrammed with larger places such as cities, states, and countries, but would also allow for creativity through a create-your-own function. With this function, a user could create a map of their own neighborhood to share with a friend or family member. This map could then be saved to a map piece (probably a fiducial marker or QR code on the bottom of an acrylic or wood piece) and used to pull up the map on someone else’s Map-It!. Using a table rather than an iPad or computer would allow for more detail in the design while still showing the larger picture. Ideally, Map-It! would also be able to map the physical terrain of a domain through hands-on manipulation and save that information; however, I’m not sure how to actually implement that.

insTag: a tool for idea documentation

Video Sketch

insTag on Vimeo

Background

Taking pictures in a meeting or brainstorming session is a common activity that I want to explore for this assignment. It is very interesting that the visual impressions brought by photos can help us associate the notes we have written down with the environment we were in. Those pictures can serve not only for post-meeting organization, but also as a tool for real-time discussion and modification during the meeting.

System Description

insTag is a system that aims to help one intuitively capture ideas and rapidly evolve them with others in a meeting. It has a hacked camera that connects to the cloud. The camera also contains a printer, which prints only the text of the location and time of the moment of shooting. Besides the camera, the system has a webcam, a projector, and a whiteboard.

In a meeting scenario, one can use this camera to take digital pictures of anything he or she finds interesting. A print with the authentic time and location is generated by the camera at the same time. The digital pictures are automatically linked to the prints, on which one can freely write notes. When the prints are pasted on the whiteboard, the webcam recognizes each of them because each timestamp is unique. The linked digital pictures are then projected onto the board for discussion. One can also make layered modifications to a picture’s content on the board: by wiping the print, the webcam takes a picture of the modification and links it with the respective print. The next time the print is pasted on the board, the history of modifications is projected along with it.
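Since the printed timestamp is the key that joins a physical print to its digital photo and modification history, the linkage itself can be modelled very simply; the class, field names, and sample values below are mine, purely to illustrate.

```python
from datetime import datetime

# Toy model of the timestamp linkage: each printed tag carries the capture
# time, which is unique within a meeting, so it can join the physical print
# to the digital photo and to its layered modifications.
class TagStore:
    def __init__(self):
        self.records = {}   # timestamp -> {"photo": ..., "location": ..., "layers": [...]}

    def capture(self, timestamp, location, photo):
        self.records[timestamp] = {"photo": photo, "location": location, "layers": []}
        return f"{location} @ {timestamp.isoformat()}"   # text printed on the tag

    def add_layer(self, timestamp, annotation_image):
        self.records[timestamp]["layers"].append(annotation_image)

    def project(self, timestamp):
        rec = self.records[timestamp]
        return [rec["photo"]] + rec["layers"]   # photo plus its modification history

store = TagStore()
t = datetime(2012, 10, 3, 14, 20, 31)
print(store.capture(t, "E14-348", "photo_0031.jpg"))
store.add_layer(t, "whiteboard_sketch_01.png")
print(store.project(t))
```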

In the end, we have physical tags of the inspiring moments and discussions from a meeting. As physical objects, those tags are easy to organize and retrieve. Most importantly, the process of making and retrieving them is intuitive.

Tree of Ideas

Presentation Slides

Tree of Ideas is a realtime Post-it brainstorm recorder & visualizer.

Nowadays, people use Post-its to jot down simple thoughts and move them around, combining and separating clusters to come up with ideas. However, the biggest issue with this method is the difficulty of documenting the ideation process.

The “Tree of Ideas” addresses this issue by using Post-its with RFID tags to digitally record and visualize the brainstorming process. First, when people jot down ideas, the handwriting is transmitted and recorded on a computer in real time. This handwriting is then annotated with a timestamp and a unique ID. When people move the Post-its around on the specialized board, which has a projector and RFID sensors, the system automatically detects clusters based on the proximity between notes and creates groups. At the same time, the system projects a “tree form” visualization behind the notes. Finally, each time the user adds new notes or changes their positions, the system saves a snapshot of the previous state. After the brainstorming is over, people can refer to the digital notes and look back at the process to have further discussions.
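The proximity-based grouping could be as simple as single-linkage clustering on the note positions reported by the RFID board; a toy sketch with invented coordinates and an invented distance threshold:

```python
from math import dist

# Notes whose centres are within `threshold` of each other (transitively)
# form one cluster. Positions would come from the RFID sensor grid.
def cluster_notes(positions, threshold=0.15):
    ids = list(positions)
    parent = {i: i for i in ids}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in ids:
        for j in ids:
            if i < j and dist(positions[i], positions[j]) <= threshold:
                parent[find(i)] = find(j)

    clusters = {}
    for i in ids:
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

positions = {"note_1": (0.10, 0.20), "note_2": (0.18, 0.22),
             "note_3": (0.70, 0.65), "note_4": (0.72, 0.60)}
print(cluster_notes(positions))   # [['note_1', 'note_2'], ['note_3', 'note_4']]
```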

By using this system, people would be able to easily document the ideation process, without any additional burden of taking photos of the board of notes or manually recording what is being discussed.

Yet another hand gloves based interface

Slide deck: MAS834-Hand-Gloves-Interface-3D-Objects-Sujoy-Project-1

When I was a kid, I was introduced to the number system using the joints in my fingers. The finger joints, or the spaces between them, appear to me as a grid system that we can use for interacting with user interfaces. Of course, there has been a lot of work on using instrumented hands or gloves as user interfaces; some of it depends on cameras or sophisticated gesture-recognition devices like the Kinect. Right after my presentation, I discovered the awesome Minority Report-like UI from Oblong, developed by John Underkoffler, a Tangible Media alumnus. Clearly, there is little room for novel contribution in terms of gestural interaction. However, I am still looking for prior art that makes use of the number-system metaphor for applications like a simple calculator. In the advanced stages of ideation, when we are working on 3D prototypes, such an interaction has the potential to be intuitive and efficient.

 

Dance Formation Creator

Slides: Dance Formation Creator

Project Description

Dance Formation Creator allows users to physically manipulate pieces on a board into formations for a dance, which are then uploaded to a computer. With this tool, there is no longer a need for the traditional paper and pencil, which can look messy and be tedious to use as the choreographer keeps erasing, making changes, or scrapping a formation and starting over. It also makes collaboration easier: two or more choreographers can sit around the board and manipulate the pieces, rather than being constrained to a piece of paper and passing a pencil back and forth.

Once formations are captured into the computer, the computer can provide additional information about a new formation. For example, it can evaluate if a certain dancer moved substantially more than other dancers, thus making the formation change hard to execute for that dancer. This is something that the choreographer may not have caught until rehearsal when the dancers tried the movement themselves.
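For example, the “did one dancer move much more than the others” check might look like this; the stage coordinates and the 2x-average threshold are my assumptions, not the project’s.

```python
from math import dist
from statistics import mean

# How far does each dancer travel between two captured formations, and does
# anyone travel far more than the rest? Coordinates are in metres, made up.
def flag_hard_transitions(formation_a, formation_b, ratio=2.0):
    moved = {d: dist(formation_a[d], formation_b[d]) for d in formation_a}
    avg = mean(moved.values())
    flagged = {d: m for d, m in moved.items() if m > ratio * avg}
    return flagged, moved

a = {"Ana": (0.0, 0.0), "Ben": (2.0, 0.0), "Cal": (4.0, 0.0)}
b = {"Ana": (0.5, 0.5), "Ben": (2.0, 1.0), "Cal": (4.0, 6.0)}
flagged, distances = flag_hard_transitions(a, b)
print(distances)   # per-dancer distance moved
print(flagged)     # {'Cal': 6.0} -- Cal has much further to travel than the others
```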

Dance Formation Creator merges the physical world with the digital by allowing choreographers to manipulate pieces on a board to exercise the creative process of formation creation, and also upload these formations into a computer that can digitally record and evaluate them.

Physical Copy & Paste

Pen and paper let you move ideas from your mind to a notebook fluidly.

Computerized copy and paste lets you quickly remix and combine materials from different sources, but is restricted to the digital domain.

Currently, if you want to incorporate other materials into your physical notebook, you either have to recreate them with your pen (possibly tedious and error-prone), reference them (which requires you to look them up again later), or print/photocopy them and paste them in manually with scissors and glue, which imposes a high-effort barrier to rapid, fluid ideation.

An ideal copy-and-paste solution would let you effortlessly incorporate external physical and digital materials, diagrams, and sketches into your physical notebook, so you can both augment and expand upon your own physically-penned ideas, and interactively annotate these newly incorporated materials.

A demonstration of this concept:

Video

Slides

Smart Bricks

PDF: SmartBricks

Recent studies have shown that peak creativity emerges when people are in solitude, within a comfortable and uninterrupted environment. Contrary to previous belief, group brainstorming can be stifling to many, especially creatives, many of whom are introverts, because of fear of rejection, not being able to speak up to a vocal colleague, or succumbing to peer pressure.

Smart Bricks responds to these issues by empowering individuals to create their own space and share their ideas at their own pace. The bricks come in three sizes, and individuals can use them as modular units to create architectural elements such as walls/whiteboards, furniture, and sketch blocks. In terms of articulating the meeting space, the blocks can be used to create an open meeting space or individual alcoves for private dialogue.

For sharing information, the blocks can be aggregated in certain ways to collect ideas, via a digital pen, from one individual or a shared group, or set up in a systematic way to create an ideas wall. The user also has the option to project their ideas from the block onto the wall.

Research has shown that many eureka ideas come from unexpected environments; thus, by letting the participants create their own space, ideas may be sparked, and individuals would be able to record and share their ideas directly through their environment.

Sticky Blocks

zbarryte_stickyblocks

Interface Summary

Users write/draw/speak ideas into blocks, which they then place in a space. Blocks represent ideas. Ideas can be grouped to form themes. Themes and ideas can be built upon by other ideas, moved around, shuffled, sorted, and so on, and each idea will remember all of its bindings (what themes it is a part of, what ideas it is built on).

Project Goal

Sticky Blocks enables users to physically manipulate their ideas, and to project the associations they make in their head, much as they would with post-it notes.  Unlike post-it notes, Sticky Blocks may form more intricate structures and remember former associations.
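As a very rough sketch of the data model this implies (the class, fields, and sample ideas are mine, not the author’s): each block keeps its content plus every theme and supporting idea it has ever been bound to, even after it is moved or regrouped.

```python
# Minimal data model for the "remembered bindings" described above.
class IdeaBlock:
    def __init__(self, content):
        self.content = content
        self.themes = set()        # every theme this idea has ever been part of
        self.built_on = set()      # ideas this one was stacked on

    def add_to_theme(self, theme):
        self.themes.add(theme)

    def build_on(self, other):
        self.built_on.add(other.content)

lamp = IdeaBlock("solar lamp")
lamp.add_to_theme("off-grid lighting")
battery = IdeaBlock("swappable battery")
battery.build_on(lamp)
battery.add_to_theme("off-grid lighting")
lamp.add_to_theme("camping gear")       # regrouped later; the old binding is kept
print(lamp.themes, battery.built_on)
```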

NoteSync

PDF Slides: NoteSync

Problem Description
Physical media such as pen and paper play a vital role in the formation of an idea, giving users a freedom in form and fidelity not often replicated in purely digital workspaces. Digital media does, however, excel over its physical counterparts in organization and in the ease of collaborating on and sharing ideas. Current trends often seek to migrate information to a purely digital format, creating a definite disconnect between the physical and digital landscapes.

Proposed Solution
NoteSync is a system that seeks to maintain the freedom of physical media while giving the user all the benefits that digital media offers. NoteSync consists of a device that can read and write to physical media such as a notebook and provides a common digital workspace that can be used to share and collaborate on ideas, as illustrated in the video below. Not only will this system enhance collaboration, it will also let users better organize their notebooks by automatically generating tables of contents and indices based on keyword and symbol recognition. By bridging the physical and digital workspaces, NoteSync can offer a richer overall experience for the user.
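One way the index generation could work, assuming the handwriting on each page has already been recognised as text (the handwriting-recognition step is not shown, and the keywords and pages below are invented):

```python
import re
from collections import defaultdict

# Build a keyword -> pages index from recognised text per notebook page.
def build_index(pages, keywords):
    index = defaultdict(list)
    for page_number, text in pages.items():
        words = set(re.findall(r"[a-z']+", text.lower()))
        for keyword in keywords:
            if keyword in words:
                index[keyword].append(page_number)
    return dict(index)

pages = {1: "Sensor brainstorm: capacitive touch ideas",
         2: "Budget notes and supplier list",
         3: "Capacitive sensor wiring diagram"}
print(build_index(pages, keywords=("sensor", "capacitive", "budget")))
# {'sensor': [1, 3], 'capacitive': [1, 3], 'budget': [2]}
```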

 

STORMBOT

Now with more buttons!


First, you have a set of digital screens. A tabletop and some monitors.

You can add and manipulate any data you want on the screens: text, tables, images, audio, video, you name it. Manipulate these data with your hands, touch-screen style. Draw on them with a digital pen, white-board style. Add text with a keyboard, or write with the pen. Annotate, scribble, format, rotate, erase.

Technology already exists to merge physical pieces of paper with digital paper (e.g., ProofRite, PaperProof, various digital pens and dot-patterned paper, and so on), so why not add that technology to this workspace as well? All the things you can do with paper scattered across a desk, you can do here, with more media.

The key here is that you can lay out your data and ideas in a non-linear way. Take pages 3, 15, and 27 from a pdf and put them side by side. Set a clip from an article you read nearby. Draw a line connecting these to a set of audio recordings you have from a lecture. Easily move information between screens. Jot notes everywhere and anywhere.

But there’s a problem. Everything is digital, visual, and constrained to your screen or to pieces of paper.

How can we improve?

Just add robots!


Everything is cooler with robots.

So let’s add a social robot sitting at the table with you to facilitate the brainstorming process. Not as a replacement person, but to help you interact with your data and ideas. It can analyze all your content. It can ask questions. It can search for new or relevant information that you may not have considered. It can direct attention verbally and visually to previous ideas you’ve had to help you connect your ideas (e.g., Staudt, 2009).

The robot will be able to analyze the linguistic and semantic content of your screens. (Until image and video processing gets better, those may have to be tagged with text for now.) A lot of machine learning, AI, natural language processing, and data processing algorithms exist for analyzing large sets of data like these, generating semantic concept nets or category structures, finding similarity dimensions, feature sets, and so on. All this previous work can be leveraged here.

The robot can also be equipped with sensors: eyetracking in the tabletop surface and on the screens (eyetracking monitors already exist, e.g., from Tobii Technologies, and some people are trying to develop eyetracking webcams), cameras, and a microphone. Data from these can be analyzed in order to monitor attention (e.g., Narayanan et al., 2012), affect/emotions (e.g., Calvo & D’Mello, 2010; D’Mello & Graesser, 2010), and other behaviors.

With all this information, the robot, as a social actor, can help you interact with your ideas. It can suggest relevant words or phrases, papers, articles, people who have expertise in areas you’re thinking about, news, video clips, and more. Drawing on the automated tutoring system literature, it can ask questions that will lead to deep thinking about ideas.

But…. robots can’t do everything!


Stormbot is a tool to help you stay on track. It can guide you away from destructive thinking and mental blocks by detecting your frustration. It can help you generate and connect ideas. But it’s still just talking about information represented visually on digital screens.

Buttons

One way of thinking about sorting, categorizing, and organizing ideas is based on their similarity and feature sets. Buttons — actual, physical buttons, like the kind on sweaters, jackets, and bags — are a good visualization of this. Buttons can be round, or square; they can be different colors; they can have different numbers of holes. You can group them based on their feature sets and similarities in a large number of ways: blue buttons with two holes. Red buttons with four holes. Cloth buttons. Metal buttons.

What if we could sort ideas the same way?

Recall that the robot, on the backend, is already analyzing the content of your screens. Why not extend and apply this: determine feature sets. Judge similarity. Group ideas into semantic categories or networks. Basically, assign “buttons” to your ideas.

Physical buttons!

Bring that into the physical world. Have actual, physical buttons linked to each of your ideas on the screen. This is the kind of thing that might be especially helpful as an ideation game for kids, with the robot character leading the game and facilitating the interaction.

The buttons could just be real buttons, or they could be specially 3D-printed, or they could even (and perhaps ideally) be reconfigurable on the fly, incorporating new ideas and sorting schemes as you change data, add ideas, and group ideas together. Whether you’re trying to design software, a new product, come up with a research question, or write a paper, the goal is to allow you to interact with your ideas on another dimension. The buttons, with the robot’s lead, allow you to see similarities and draw connections in a different way.
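As a toy illustration of the backend grouping the button metaphor relies on (the ideas and feature labels are invented), each idea carries a machine-assigned feature set and can be retrieved by any combination of features, just as one might pull out all the blue buttons with two holes:

```python
from collections import defaultdict

# Every idea gets a set of features (its "button"); ideas can then be pulled
# up by any combination of features.
ideas = {
    "solar charger":   {"hardware", "energy", "outdoor"},
    "ride-share app":  {"software", "transport", "social"},
    "folding bike":    {"hardware", "transport", "outdoor"},
    "carpool matcher": {"software", "transport", "social"},
}

def group_by_feature(ideas):
    groups = defaultdict(list)
    for idea, features in ideas.items():
        for feature in features:
            groups[feature].append(idea)
    return dict(groups)

def ideas_with(ideas, *required):
    return [i for i, f in ideas.items() if set(required) <= f]

print(ideas_with(ideas, "transport", "software"))   # like "blue buttons with two holes"
print(sorted(group_by_feature(ideas)["hardware"]))  # ['folding bike', 'solar charger']
```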

Slides available here! [PDF]

BrightIdea

Slides

Problem Description
Ideas can come at any moment, and when working in groups sometimes those ideas occur when not in physical proximity. How can individuals effectively record ideas that occur to them during the day, and how can teams work together and evaluate ideas effectively, when not all together?

Proposed solution
BrightIdea is a system to help users record ideas that occur to them on the spur of the moment and share them with teammates. Each system consists of a base that contains several idea “bulbs”; each bulb contains multiple LEDs of varying color that identify which user made a given contribution. When users have an idea, they record it on the bulb. When they are ready to share the idea they recorded, they place the bulb back in the base. Depending on how good an idea the creator thinks it is, he or she can push the bulb deeper or shallower into the base to adjust the actual brightness of the bulb. Other users who see a lit bulb may then place it in the ‘play’ socket of the base, which replays the idea. Afterwards, they return the bulb to a socket and can similarly indicate how good an idea they think it is by how deeply they push the bulb into the base.
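A minimal sketch of the depth-to-brightness mapping this implies, assuming deeper means brighter (the depth range, colours, and duty-cycle scale are my assumptions):

```python
# How far the bulb is pushed into the base sets the LED duty cycle; the LED
# colour identifies the contributor. Values below are illustrative only.
MAX_DEPTH_MM = 20.0
USER_COLORS = {"alice": (255, 80, 0), "bob": (0, 120, 255)}

def bulb_output(user, depth_mm):
    """Return (rgb_colour, pwm_duty 0..255) for a bulb pushed depth_mm in."""
    clamped = min(max(depth_mm, 0.0), MAX_DEPTH_MM)
    duty = int(255 * clamped / MAX_DEPTH_MM)
    return USER_COLORS[user], duty

print(bulb_output("alice", 5.0))    # dim: the creator is unsure about the idea
print(bulb_output("alice", 18.0))   # bright: the creator rates the idea highly
```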

IdeasInMotion

Slides PDF: Perovich_presentationFinal

Movement-based ideation can be difficult to capture, since motion is fleeting and experiential. IdeasInMotion is a system that supports dance choreographers developing routines by helping them collect their ideas, restructure the moves into a routine, experience the new choreography, and share the results with remote collaborators. The system consists of sensors and haptic-feedback devices in clothing that are used to document, re-experience, and share motion, paired with a computer-based interface for sorting and restructuring the moves. The sensors document the choreographer’s movements as he brainstorms, and a board collects the data and wirelessly sends it to the computer, where it is intelligently divided into segments. Using the computer interface, the choreographer can rearrange the moves to create a new combination. He can then experience the resulting routine through haptic feedback in the clothing. Once he’s pleased with the result, he uses the computer interface to create a mirror image of the routine that represents the follower’s experience. He sends this to his dance partner so she can learn the routine “naturally” by touch, as she would if they were collaborating in person.
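One plausible reading of the “intelligently divided into segments” step, assuming the board reports a per-sample movement-energy value (the numbers and thresholds here are invented), is to treat stretches of low energy as pauses between moves:

```python
# Segment a stream of movement-energy values into candidate "moves".
def segment_moves(energy, still_threshold=0.2, min_length=3):
    segments, current = [], []
    for i, e in enumerate(energy):
        if e > still_threshold:
            current.append(i)
        elif current:
            if len(current) >= min_length:
                segments.append((current[0], current[-1]))
            current = []
    if len(current) >= min_length:
        segments.append((current[0], current[-1]))
    return segments

# Values stand in for accelerometer magnitudes sampled over time.
energy = [0.1, 0.1, 0.9, 1.2, 0.8, 0.7, 0.1, 0.05, 1.1, 1.0, 0.9, 0.1]
print(segment_moves(energy))   # [(2, 5), (8, 10)] -- two candidate moves
```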

SoundSpace / Permea-b-oard / MeetingMaker

PDF: Introduction and Ideas [Slideshow]

SoundSpace is a tool for musicians to meet and create music while apart. Current tools (e.g., GarageBand, web conferencing) ignore the importance of body movements, spatialized sound, and even the vibrations of a performer stomping on the ground. The importance of these channels of communication becomes clear when thinking of the way a lead violinist directs a string quartet solely through emphasized body movements. SoundSpace provides all of these sensory inputs while also adding the ability to record and play back sessions. This is great for rehearsing alone later or revisiting improvised sections that are otherwise difficult to repeat. It can also be used to turn instruments on and off or rearrange portions of the composition, leading to new ways of experimenting.

As a follow-up to ClearBoard, Permea-b-oard latches on to the importance of direct gaze and continuity of space, but suggests that this need not be limited to sharing 2D representations. A space is added at the bottom of the board for working with physical or digital 3D models. While the participants are physically far apart, the interface creates the appearance of a shared space, allowing for direct gesturing, eye contact, and shared manipulation of an object.

MeetingMaker is a tool for meetings or charrettes where a lot of sketching happens that doesn’t get recorded or that happens on one end of the table and can’t be seen by people on the other end. This is often the case in meetings between architects and engineers. At the end of the meeting, people walk away with the sketches they made and important information gets lost.

It starts out with the idea of ‘Shared Design Space‘ where the entire surface of a table can be used to draw. Important images which have been pinned-up can be grabbed and moved to the table digitally using phicons for people to look at or sketch over. Updated drawings can be easily moved back to the pin-up wall. Personal sketches can be moved towards the center of the table to be enlarged and allow everyone to look at or sketch over together.

Project files from a network or server can also be pulled up directly on the table rather than needing to run back to a computer, plot, and bring drawings back physically. These drawings are connected directly to the project’s 3D model, and any sketching that happens over them is directly imported into a layer of the computer model, relating the sketch to the overall building and precluding the need to find a sketch, scan it, import it, and retrace it into the 3D model later. While sketching, the computer can be told to create alignments or snaps for more accurate drawing.

Finally, the audio of the meeting can be recorded and synchronized with the movement of images around the room, allowing for review of the meeting later.

Urban Rediscovery Toolkit

Deck : Urban Rediscovery Toolkit

Ideation has become a closed-space activity: while we have a beautiful world outside to live, breathe, and explore in, we are stuck at boards, pads, and tabs. The Urban Rediscovery Toolkit pushes you out into the open and gives you a tangible tool you can use as a lens to look at the world from a new perspective.

Audio Sketchbook

PDF Slide Deck: Audio Sketchbook

Problem Description
Game jams are sessions of designing, prototyping, and building games constrained by a particular theme and time limit.  Game jams can last between 6 and 48 hours and attract participants with a variety of skills and backgrounds.  From empirical evidence, most of these participants are either game designers or programmers.  Aside from design and programming, games often employ art and music.  But while most games produced through game jams include some form of art, significantly fewer include music or sound effects.  This lack of audio support could stem from a lack of sound-design participants.  If so, how can we encourage more audio designers to participate?  The lack of music could also be a result of poor audio-prototyping tools.  If so, how do we create an easy-to-use prototyping tool for iterating on sound effects and music?

Proposed Solution
To answer each of these questions, I propose a product that allows for better, simpler communication of audio effects.  From empirical evidence, most game jam participants are comfortable sketching designs on paper, whiteboards, blackboards, and computers.  During these sketches, they also make gestures and provide sound effects.  If we capture these sound effects during the sketching process, we can utilize them in prototyping the game.  I propose an “audio sketchbook” tablet app that allows users to sketch images and record audio over those images.

This sketchbook would have three modes, or “layers,” of sketching.  See Figure 1.  The first layer is the drawing layer.  It allows users to draw freely.  Ideally, users could utilize standard digital sketchbook options such as setting the color, thickness, and opacity of their brush.  The second layer is the audio layer.  While a user performs brush strokes, audio is captured.  This interaction takes inspiration from the “Singing Fingers” iOS app (http://singingfingers.com/).  The final layer is the editing layer.  This layer allows users to select sections of the audio layer for editing.  Editing options include copying, “slicing”, “stretching,” and deleting audio sketches.  Through these three layers, users can create, read, update, and delete their line or audio sketches.  By providing a sketching tool for prototyping audio, I hope to give game jam participants a simple way to iterate and produce audio effects with a medium they are already comfortable using.

Figure 1: Layers of Audio Sketchbook
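A minimal sketch of how the audio layer could link each brush stroke to the slice of the recording captured while it was drawn; the class, method names, and sample values are mine, and the “samples” are placeholders for real microphone input.

```python
# Each brush stroke remembers which slice of the recording it spans, so
# selecting the stroke later retrieves (or edits) exactly that audio.
class AudioSketch:
    def __init__(self):
        self.audio = []       # running list of recorded samples
        self.strokes = {}     # stroke_id -> [start_sample, end_sample]

    def begin_stroke(self, stroke_id):
        self.strokes[stroke_id] = [len(self.audio), None]

    def record(self, samples):
        self.audio.extend(samples)

    def end_stroke(self, stroke_id):
        self.strokes[stroke_id][1] = len(self.audio)

    def audio_for(self, stroke_id):
        start, end = self.strokes[stroke_id]
        return self.audio[start:end]

sketch = AudioSketch()
sketch.begin_stroke("whoosh")
sketch.record([0.0, 0.2, 0.5, 0.1])    # microphone samples captured mid-stroke
sketch.end_stroke("whoosh")
print(sketch.audio_for("whoosh"))      # [0.0, 0.2, 0.5, 0.1]
```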