Tangible Interfaces
http://m834.media.mit.edu

Group 2 – TraceFORM
http://m834.media.mit.edu/2017/12/10/group-2-traceform/ (Mon, 11 Dec 2017)

TraceForm

Concept

TraceForm physicalizes spatial visualizations to aid visually impaired students in a learning environment.
Using the tangible medium “Transform”, we turn two-dimensional digital representations of plots, graphs, and sketches into physical representations that visually impaired students can interact with. Such students typically browse digital content with screen reader software and, upon encountering a graph, either skip it automatically or hear a simplistic label such as “graph for velocity and time”. Not being able to easily understand the information a graph communicates can greatly stunt a student's progress. We propose a software and hardware system that accompanies the screen reader of a web page and, upon encountering a graph, offers a physicalized version of that graph on a nearby TraceForm, which then animates the given scenario as a tangible, dynamic representation.
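As a rough sketch of how such a screen-reader companion could be wired up (the page-scanning loop, the scenario lookup table, and send_to_traceform() below are illustrative placeholders, not the system we actually built):

```python
# Hedged sketch of the companion flow: scan a page for known graphs and offer
# to physicalize them. The scenario table and send_to_traceform() are
# hypothetical placeholders, not part of the real TraceForm software.
import urllib.request
from bs4 import BeautifulSoup

SCENARIOS = {  # assumed mapping: image filename -> pre-scripted pin animation
    "pt-graph-constant.png": "constant_velocity.gif",
    "pt-graph-accelerating.png": "accelerating.gif",
}

def send_to_traceform(animation_file):
    """Placeholder for pushing a pre-scripted animation to the nearby table."""
    print(f"[TraceForm] playing {animation_file}")

def accompany_page(url):
    html = urllib.request.urlopen(url).read()
    for img in BeautifulSoup(html, "html.parser").find_all("img"):
        name = (img.get("src") or "").rsplit("/", 1)[-1]
        if name in SCENARIOS:
            label = img.get("alt") or name
            if input(f"Physicalize '{label}' on TraceForm? [y/n] ").lower().startswith("y"):
                send_to_traceform(SCENARIOS[name])
```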

Inspiration, Motivation

Graphical representations are a key element of our everyday lives and education. The use of coordinate systems, bar charts, and similar plots is an essential tool for explaining a broad range of concepts in mathematics, physics, economics, and the social sciences, to name a few. With the rise of graphical user interfaces (GUIs), the cost of interacting with, altering, and visualising data has approached zero, creating a great tool for understanding abstract concepts such as motion profiles or curve derivatives. However, in this evolution, visually impaired students are left even further behind.

We have developed TraceForm with the aim of providing a tool that will enable visually-impaired students to gain the benefits of being able to dynamically explore and alter spatial representations.

Affordances, Design Decisions, Constraints

Transform as an interface

For our initial prototyping, before we decided to focus on visually impaired students as our user group, we faced challenges in clearly communicating the use of the platform's UI. In this initial work we experimented with using Transform as a “physical” calculator that would provide a better understanding of concepts such as division and multiplication. However, in most scenarios it wasn't intuitive how to interact with Transform: its low resolution and the uniform shape and color of all pins provided no signifiers of how to interact with the device.

Furthermore, the affordance of the pins is pushing rather than pulling, since grabbing onto a pin when its surroundings are at the same level is difficult. In this configuration, however, it becomes challenging to get a visual overview of the inputs, and pressing individual pins becomes somewhat awkward.

Our conclusion was that, in general, using Transform as an interface to input and modify data is cumbersome and in most respects inferior to a GUI.

Focus on the visually impaired

At this point we shifted our focus to visually impaired users and the more “passive” application of converting illustrations, diagrams, and plots into representations on the Transform. By doing so we avoided the requirement of an interface upon which the user constructs scenarios. Instead, we could focus on translating graphical spatial representations onto the Transform and allowing simple investigative operations to be performed.
One of Transform's greatest strengths is its ability to dynamically alter its representation in real time, and for that reason we chose to focus on the topic of motion (basic physics of uniform linear velocity and acceleration). The programmable pins allowed us to mimic different types of motion: constant and accelerating movements, fast and slow velocities, and so forth.

The constraint of an array of dynamic pins is that each pin requires an individual motor to control it, which limits the resolution of the table. As a result, in one of our scenarios, in which we compare an accelerating velocity with a constant one, the difference was at first challenging for our blind user tester to notice. Although it is visually easy to spot, the fact that one can only feel a few test points simultaneously made it difficult to compare such a subtle local trend.

Implementation

In order to spend the majority of our effort on concept prototyping rather than technical implementation, we chose to control the Transform in “scripted mode”, essentially meaning that we pre-programmed certain scenarios which were then played out on the table. The Transform accepts a simple grayscale video as input, in which each individual table has a resolution of 24×16 px. The amount of grey determines the height of each pin (with thresholds from 50 to 210). We used a simple online program, “PixelArt”, to draw our scenarios pin by pin, created a storyboard within this app, and turned it into a GIF.
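As a concrete illustration of this scripted mode, the sketch below renders a constant-velocity p-t scenario into 24×16 grayscale frames and saves them as a GIF with Pillow. The resolution and the 50–210 grey range come from the description above; the scenario, frame timing, and library choice are assumptions:

```python
# Hedged sketch: generating a 24x16 grayscale GIF for the table's scripted mode.
# The 24x16 resolution and the 50-210 grey range come from the write-up; the
# constant-velocity scenario and frame duration below are illustrative assumptions.
from PIL import Image

WIDTH, HEIGHT = 24, 16
LOW, HIGH = 50, 210          # grey values mapped to lowest / highest pin

def frame_for_time(t, velocity=2.0):
    """One frame of a p-t graph: plot the trajectory up to time t."""
    img = Image.new("L", (WIDTH, HEIGHT), LOW)
    for x in range(min(int(t), WIDTH)):
        y = int(velocity * x)                       # position = v * t
        if 0 <= y < HEIGHT:
            img.putpixel((x, HEIGHT - 1 - y), HIGH) # raise the pin
    return img

frames = [frame_for_time(t) for t in range(WIDTH + 1)]
frames[0].save("constant_velocity.gif", save_all=True,
               append_images=frames[1:], duration=200, loop=0)
```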

With the help of our blind test user, Matthew Shifrin, we used the pre-programmed scenarios to test different types of interactions. He provided us with valuable feedback and ideas about the heights of pins, how the plots were mapped, and what should happen when you interact with the pins by tapping, holding, or other actions. Because of the ease with which new scenarios could be changed, we were able to adapt, implement, and test different types of operations throughout a session of a few hours with Matthew's help.

Final Experience, Lessons Learned

The final concept was built around the scenario of an online resource on p-t graphs (physics, motion), found here: http://www.physicsclassroom.com/class/1DKin/Lesson-3/The-Meaning-of-Shape-for-a-p-t-Graph.
As the user listens to the text through a text-to-speech program, the software prompts whether the user would like the encountered images physicalized on TraceForm. We let our test user interact with the webpage as he would normally browse a new source of information, using his preferred text-to-speech software, and then provided the physicalizations of the pre-programmed scenarios as he progressed through the page.
Through these tests we were able to identify a range of important learnings:

  • Audio cues: One of the first reactions from our user was the need for points to speak out their coordinates when pressed briefly. The format should be as compact as possible, i.e. “four comma three”. For more in-depth information, such as the units of the axes, the user suggested simply double-tapping or holding the pin for a longer period (see the sketch after this list).
  • Care in the use of pin heights: In the prototype each pin represented a value of 0.5 from the original graphical sketches, and we had chosen to heighten the pins that corresponded to whole numbers (1, 2, 3, etc.). The difference in height was very clear and marked these ticks out as very important. Unless the ticks actually matter, the user suggested they simply be leveled with the rest of the axis or only subtly heightened.
  • Difficult to observe trends at low resolution: By visually inspecting a low-resolution graph it is rather easy to see the trend and interpret whether it is supposed to represent a straight line or an increasingly steep curve. This is not as straightforward when inspecting the graph with one's hands, using only a few contact points at once. As a consequence, the user at times had a hard time distinguishing the graph of an accelerating motion from that of a linear velocity. A higher resolution of the medium would resolve this issue.
  • Coupling motion and graph works well: According to the user, coupling a straight-line motion (constant/accelerating velocity) with a simultaneous 2D plot on a p-t graph provided a good understanding of the concept.
  • Interaction with entire limbs: To fully comprehend the scenarios in which the graph dynamically evolved as an effect of a vehicle's motion, the user positioned his entire forearm on the TraceForm so he could sense the data trend as it emerged. According to the user, this technique worked very well, but it required a somewhat awkward position and might constrain the displayed graph to a relatively straight shape (given the nature of an arm).
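The audio-cue suggestion above maps naturally onto a small gesture classifier. A minimal sketch, assuming placeholder timing thresholds and a speak() stub standing in for the real text-to-speech output:

```python
# Hedged sketch: turning pin presses into spoken coordinates, following the
# tester's suggestions (brief tap -> compact "x comma y", double-tap or long
# hold -> axis units). The timing thresholds and speak() routing are assumptions.
import time

TAP_MAX_S = 0.4         # assumed: presses shorter than this count as a tap
DOUBLE_TAP_GAP_S = 0.5  # assumed: two taps within this window form a double-tap

def speak(text):
    print(f"[voice] {text}")   # in practice, route to the screen reader / TTS

class PinVoice:
    def __init__(self, x_unit="seconds", y_unit="meters"):
        self.units = (x_unit, y_unit)
        self.last_tap_at = 0.0

    def on_release(self, x, y, press_duration):
        now = time.monotonic()
        if press_duration >= TAP_MAX_S:
            # long hold: give the verbose description with units
            speak(f"{x} {self.units[0]}, {y} {self.units[1]}")
        elif now - self.last_tap_at <= DOUBLE_TAP_GAP_S:
            speak(f"axes: {self.units[0]} versus {self.units[1]}")
        else:
            speak(f"{x} comma {y}")   # compact form, e.g. "four comma three"
        self.last_tap_at = now
```

For example, PinVoice().on_release(4, 3, 0.1) would speak the compact form “4 comma 3”.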

In general the feedback on TraceForm was very positive, and our test user expressed his sincere belief that if he had had such a system when he was learning about physics it would have saved him countless frustrating hours.

Final Concept / Video

The final concept can be seen below.

User Testing

We’ve combined important sections of our user testing in four brief videos:
Ideas on audio cues and initial reactions

Suggestions for new features

Using the forearm to sense trends

General feedback 

Team 3 – The Inner Circle
http://m834.media.mit.edu/2017/12/06/team-3-the-inner-circle/ (Thu, 07 Dec 2017)

Johae Song, Yiji He, Hui Yuan, Takatoshi Yoshida, Ruthi Aladjem

The Concept
The Inner Circle is a gesture-operated wearable device designed to create a space for communication, privacy, and intimacy in crowded, noisy locations. The device can be used for planned or serendipitous encounters by groups of two or more.
The idea of an “Inner Circle” is contradictory, depending on the observer's point of view:

On the one hand, it symbolizes the intrinsic need to belong, to have a close group of people to communicate with in an intimate manner. The choice to realize the solution as a mobile, wearable hub signifies the realization that no one can escape the crowd and the noise; finding a quiet location is not an option, and we therefore need to adapt and find a solution for this need in the midst of the crowd.

On the other hand, for someone outside that inner circle, it may seem like an exclusive group that one cannot join or be a part of.

Thus, the sincere need and motivation to find some quiet amongst the crowd can be seen, at the same time, as a bold statement, a protest, and a call for a “quiet revolution”.



 

Team 2 | TraceForm
http://m834.media.mit.edu/2017/12/06/team-2/ (Wed, 06 Dec 2017)

Final Concept

 
User Study:

Let the user explore
 

Criticism
 

New Ideas
 

Positive Feedback

Team 4 | Sound of Shape
http://m834.media.mit.edu/2017/12/06/sound-of-shape/ (Wed, 06 Dec 2017)

Sound of Shape

Caroline Rozendo, Irmandy Wicaksono, Jaleesa Trapp, Ring Runzhou Ye, Ashris Choundhury

 

Concept

A strand-like object that enables users to explore sound by creating shapes. The user is free to explore the possibilities suggested by its signifiers and to model both iconic instruments and their own ideas of playable things.

Inspiration

“Music is not limited to the world of sound. There exists a music of the visual world.” – Oskar Fischinger, 1951
This quote by Oskar Fischinger motivated us to explore the music of the visual world and to extend the exploration to the physical world. Projects we looked to for inspiration include lineFORM, which inspired the shape of our product; soundFORM, which inspired us to manipulate sound through shapes; and the Fabric Keyboard, which inspired us to build a multi-functional, deformable musical interface for sound production and manipulation.

Affordances

The long, bendable strand allows the user to form it into various shapes. The sensitivity of this sound-generating instrument to proximity, taps, and shakes prompts the user to immediately realize its musical behaviour. As the user explores different shapes, the instrument creates different sounds, inviting them to explore various shapes and their relationship to sound.

Constraints

We used physical sensors embedded inside the strand instead of computer vision, as vision sensing would limit the working area of interaction and reduce the instrument's expressiveness and functionality. We would like the user to have freedom in shaping and interacting with this tangible musical interface. However, since each of the sensors is currently wired to the circuitry in our initial prototype, the mobility of the instrument is limited to about 50 cm.
Since an array of uni-directional bend sensors is used on both sides of the strip/strand, the instrument is constrained to 2D shape-making. In addition, the number of hinges, currently ten, limits the variety of shapes that can be formed.

Implementation


As illustrated in the figure above, the strand consists of multiple layers encapsulated in a soft fabric. A long brass strip is used as the base layer and retains its shape under different bends. An array of short styrene blocks on each side of the strip provides structural support for the 20 flex sensors and their wiring, as well as side grooves for the Nitinol wires, two on each side. At the end of the strip, an accelerometer, a proximity sensor, and a magnet are attached and protected by foam.
 
The flex sensors enable the instrument to detect its own shape; the accelerometer enables it to detect taps and shakes (e.g. tambourine, triangle, drum); and the proximity sensor allows touch sensing around the inside of the 2D shape (e.g. piano, harp). Magnets let the two ends of the strand snap together to complete the 2D shape.
 

Musical Mappings

The sensor data is continuously read and, when triggered, converted to match the MIDI protocol. A message is then populated from these data with an address containing a status, channel, note on/off, amplitude, and expression value. In our case, for the proximity mode, note on/off is triggered by initial proximity and the pitch of the note is controlled by the discrete proximity value; for the vibration mode, note on/off is triggered by the accelerometer through light vibration (tap) or strong vibration (shake), and the amplitude is set by the strength of the vibration. This message is then received either by an audio-synthesis environment, such as Pure Data or Max/MSP, or by an audio-sequencer framework such as Ableton Live or GarageBand. In this project, we used Ableton Live. The software then processes the message and generates or controls a particular sound based on its mappings and settings, providing sound feedback to the performer. This enables musicians, sound artists, and interaction designers not only to play and explore the capability of the instrument, but also to change its functionality and express themselves by mapping and experimenting with different sound metaphors.
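As a hedged illustration of the two mappings described above, here is a host-side Python sketch using the mido library. The real system builds its MIDI messages on the Arduino Due and sends them to Ableton over Serial USB; the value ranges, channel numbers, and note choices below are assumptions:

```python
# Hedged sketch of the proximity->pitch and vibration->velocity mappings.
# Channel numbers, note ranges, and thresholds are illustrative assumptions.
import mido

PROXIMITY_CHANNEL = 0   # assumed channel for the proximity (pitch) mode
VIBRATION_CHANNEL = 1   # assumed channel for the tap/shake mode

def proximity_note(distance_mm, out):
    """Presence triggers note-on; the discrete proximity value sets the pitch."""
    note = 48 + max(0, min(36, distance_mm // 5))   # closer hand -> lower note
    out.send(mido.Message('note_on', channel=PROXIMITY_CHANNEL,
                          note=int(note), velocity=90))

def vibration_note(accel_magnitude, out, tap_threshold=1.5, shake_threshold=3.0):
    """Light vibration (tap) or strong vibration (shake) triggers a note whose
    velocity follows the strength of the vibration."""
    if accel_magnitude < tap_threshold:
        return
    velocity = min(127, int(40 * accel_magnitude))
    note = 38 if accel_magnitude < shake_threshold else 45
    out.send(mido.Message('note_on', channel=VIBRATION_CHANNEL,
                          note=note, velocity=velocity))

# Example: out = mido.open_output(); proximity_note(60, out)
```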


In the Ableton Live screen capture below, a percussion instrument is activated; by dragging and dropping a different subset or changing the shape of the strand, the user can play different types of percussion instruments, such as tambourine, vibra, cabasa, chimes, conga, cowbell, etc. The current setting is shown playing a high-frequency bongo.

 

System Design

An Arduino Due was used as the microcontroller: it processes the sensor data from the flex sensors, proximity sensor, and accelerometer, converts it into MIDI messages, and sends these messages to Ableton Live 9 over Serial USB.
Since we used a large number of flex sensors (about 20), two multiplexers (CD74HC4067) are used, each taking one side (10 sensors) of the array. The microcontroller addresses each input channel of both multiplexers simultaneously and then reads them through two ADC pins. Since we need approximately 3 A to power the Nitinol wires for the shape-changing effects, a transistor circuit connecting an external supply (a 6 V battery) was built, which can be switched digitally by the microcontroller.
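The scan pattern is sketched below. The real firmware is C++ on the Arduino Due; this Python version only illustrates the addressing logic, and digital_write(), analog_read(), and the pin numbers are placeholders, not real API calls:

```python
# Hedged sketch of the multiplexer scan: 4 shared select lines address channels
# 0-9 on both CD74HC4067s at once, and each mux output is read on its own ADC pin.
SELECT_PINS = [2, 3, 4, 5]      # placeholder pin numbers for the select lines
ADC_LEFT, ADC_RIGHT = 0, 1      # one ADC pin per multiplexer output

def digital_write(pin, value):  # placeholder for the real GPIO call
    pass

def analog_read(adc_pin):       # placeholder for the real ADC call
    return 0

def scan_flex_sensors():
    """Read all 20 flex sensors, 10 per side of the strand."""
    left, right = [], []
    for channel in range(10):
        for bit, pin in enumerate(SELECT_PINS):
            digital_write(pin, (channel >> bit) & 1)   # set select lines
        left.append(analog_read(ADC_LEFT))             # sensor on one side
        right.append(analog_read(ADC_RIGHT))           # mirrored sensor
    return left + right
```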

Sound-Shape Association


The mapping of shapes to different instruments takes inspiration from a number of articles and research experiments suggesting that the way we connect auditory and visual-tactile perception is not completely arbitrary.
In the “Kiki-Bouba” experiment, conducted for the first time by Kohler in 1929 and replicated several times afterwards, people were asked to match a spiky, angular shape and a round, smooth shape to the nonsense words “kiki” and “bouba”. There was a strong preference for associating “bouba” with the round, smooth shape and “kiki” with the angular shape. In a repetition of Kohler's experiment conducted in 2001 by Vilayanur S. Ramachandran and Edward Hubbard, this preference held for at least 95% of the subjects.
In an article published in Nature in 2016, Chen, Huang, Woods and Spence go a little further, investigating the assumption that this correspondence is universal and analyzing the associations of frequency or amplitude with shapes.
By allowing the user to shape the object in search of the pitch and sound they would like to play with, we use this characteristic of perception to improve the intuitiveness of the interface.

Future Work

As future work, we would like to extend the strand to allow more joints and more possible shapes. We are also interested in integrating more sensors to expand its functionality as a musical instrument. We will also explore the use of shape-memory alloy and incorporate a shape-changing feature, which would enable the instrument to shape itself automatically, help users reach their final or iconic shape, or, even more exciting, adapt its shape and learn its mapping from a given sound. We also plan to improve the usability of the instrument: by making it wireless, users will be able to fully express themselves and explore the relationship between sound and shape. Finally, we would like to collaborate with musicians and sound artists to demonstrate the capability of this novel deformable instrument through musical performance or composition.
 

Process Pictures

Video

https://youtu.be/TEZTWOAdeP4
 
 
 

TG7: Reminisce
http://m834.media.mit.edu/2017/12/06/tg7-reminisce/ (Wed, 06 Dec 2017)

REMINISCE: Translating ambient moments into tangible and shareable memories

Kyung Yun Choi, Darle Shinsato, Shane Zhang, Lingxi Li
Keywords: Tangible Memory, Reminisce, Notebook, Ambiance, Shape-changing display, Origami
Final Presentation Slides

CONCEPT: Tangible Memory Book

A device that records ambiance (in the form of ambient noise) and translates it into tactile movement via a shape-changing sheet.
The recorded ambiance is embedded in the sheet and can be replayed to reminisce about a moment or the memory of a place.
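The actual implementation lives in the GitHub repository linked under Implementation below; as an illustration only, here is a hedged sketch of one plausible way to turn a recorded ambient-noise clip into motion for the sheet. The WAV input, RMS windowing, and 0–90° actuator range are assumptions, not the project's code:

```python
# Hedged sketch: map the loudness envelope of an ambient recording to actuator
# positions for a shape-changing sheet. Assumes a 16-bit mono WAV file.
import wave
import numpy as np

def ambiance_to_motion(wav_path, window_s=0.1, max_angle=90):
    with wave.open(wav_path, "rb") as wav:
        rate = wav.getframerate()
        samples = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)
    window = max(1, int(rate * window_s))
    angles = []
    for start in range(0, len(samples) - window, window):
        chunk = samples[start:start + window].astype(np.float64)
        rms = np.sqrt(np.mean(chunk ** 2)) / 32768.0    # loudness, 0..1
        angles.append(rms * max_angle)                  # louder -> larger fold
    return angles   # replay by streaming these angles to the sheet's actuator
```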


 
 

INSPIRATION/ MOTIVATION

“ We feel that current technology should be merged and coupled with tangible artefacts and physical elements, in order to provide more engaging experiences and overcome some of the limitation of mobile technology. ” 
– Luigina Ciolfi & Marc McLoughlin

Today, we experience memories through digital means (i.e. videos and photos).
Current devices rely on visual and auditory recordings to capture moments

What if these memories were translated into a tangible medium?
 

DESIGN CONSTRAINTS

  • Design an object that can record the ambient sound of a moment, translate it, and recall it as motion through the medium of patterned paper.
  • Have a variety of paper patterns that provide different visual cues.
  • Allow users to share or replay their stored memory through the playback function.
  • Be highly portable and fit well in the palm, akin to a small notebook.

 

 AFFORDANCES

Bookmarking
Sharing

IMPLEMENTATION

Github

 

PROCESSES

Photos link-1
Photos link-2

[Video] Trailer


 

[Video] Final Movie!


 

FINAL REPORT

link

Group 5 | the Third Eye
http://m834.media.mit.edu/2017/12/06/group-5-the-third-eye/ (Wed, 06 Dec 2017)

the Third Eye | ChoongHyo Lee, Jerry Wei-Hua Yao, Seong Ho Yun, Xi Yao Wang

Concept
The Third Eye is a pair of navigation glasses for visually impaired people, helping them understand the environment they are in. The goal of the glasses is to provide a better walking experience for blind people and to let them understand the space around them through different vibration patterns.
Inspiration, Motivation

The most common aid for visually impaired people today is the walking cane. Although canes solve many problems for blind people while they are walking, certain aspects remain unsolved. For example, blind people can only detect objects within the moving range of their cane (a small sector), and they need to actually touch an obstacle in order to know what object they are dealing with, or where it is.
Therefore, most of the time blind people are simply avoiding obstacles on the street. They do not really know what is around them until they touch it with their cane. That is why we wanted to build a device that works WITH the cane, solving the problems the cane has and doing the things the cane cannot do. Wearing the Third Eye enables users to gain a basic understanding and sense of the environment, just as sighted people do.
Design Decisions, Constraints 
We started this project trying to replace the cane with different types of wearable devices. However, we realized that the cane does not only work as an aid for blind people; it also makes them feel more secure and safe while using it. We therefore decided to put more effort into helping blind people understand the environment without touching objects, and we came up with the idea of vibration patterns.
Features 

 

Implementation (and links to code/related materials)


Final Experience, Lessons Learned

  • Research the best location for the vibration motors (the most sensitive spots on the face).
  • The vibration somewhat affects/interferes with the user's hearing experience.
  • Think of more interactions between the glasses and the cane.


Video
https://www.dropbox.com/s/nl0jz7mp5ykhjyn/thirdeye.mp4?dl=0

StoryBoard | Group 6
http://m834.media.mit.edu/2017/12/06/storyboard-group-6/ (Wed, 06 Dec 2017)

StoryBoard

Emily Salvador, Judith Sirera, Yan Liu, Xiaojiao Chen

Concept


We've created an interactive, collaborative desk for children that helps bring their school assignments to life. More specifically, we implemented an example module in which students can collaborate with one another to animate a scene prepared by the teacher.
Students use the desk just as they would otherwise. The camera is designed to be unobtrusive toward the front of the classroom, so the student can clearly see the teacher and the whiteboard/projector at the front of the room. Instead of typing on a laptop, students could write on their desks with styluses or use a regular notebook, since there would be no need for a laptop to take up space on the desk.
Students interact with the table by manipulating and designing physical objects. At this point the output is visual and sound-based, but we could imagine 'smart' primitives like Toio or the Transform topography being applied to this setup. More specifically to our implementation, by using cut-out pieces of construction paper, kids can design and assign their own sounds to shapes, can move the pieces around, and don't risk damaging expensive equipment (construction paper is far less expensive than a computer monitor).

Inspiration

While ideating on our final concept, we researched the following topics: Color and Sound, Telepresence, Animation, and Education. Below is a list of projects that inspired us.
Color and Sound:
https://www.kickstarter.com/projects/364756202/specdrums-music-at-your-fingertips/description
https://www.instagram.com/p/Bax0aGknUPy/?taken-by=hardwareux
https://www.instagram.com/p/BWIMccYALLq/?taken-by=hardwareux
https://musiclab.chromeexperiments.com/Kandinsky
Telepresence:
http://www8.hp.com/us/en/campaigns/sprout-pro/overview.html
https://vimeo.com/44544588
http://tangible.media.mit.edu/project/teamworkstation/
Animation:
https://turtleacademy.com/
https://scratch.mit.edu/
https://en.wikipedia.org/wiki/Adobe_After_Effects
Education:
https://www.playosmo.com/en/

Motivation

Our project is designed to be imaginative, generative, and fun. Whether it's art class or science class, concepts should be presented in more engaging, interactive, tangible ways. Children should be allowed to experiment using a safe, dynamic medium. Additionally, they should be able to share their creations and collaborate with other students in real time. Many schools are shifting toward purchasing computers and laptops for the classroom, but those can be distracting because they physically block students' line of sight to the teacher. Since the desks would be locally networked, the teacher can push out that day's content to the class and engage more directly with each student. There is also more opportunity for collaboration than with a traditional laptop, since laptops are designed to be personal-sized. Children can work on their own desks or partner with each other or the teacher to collaborate on a single work surface.

Description of Affordances, Design Decisions, Constraints 

Affordances

Sensors: Camera – captures x, y coordinates for a wide spectrum of RGB color values; can pick up the projector output on the paper (need to account for feedback).
Actuators: Speakers – output device that lets you hear sound; Projector – allows the user to experience i/o coincidence.
Other: Colored construction paper / watercolor / craft materials – since we're using analog input materials that users are already familiar with, they'll be more likely to quickly adopt our platform. Also, by choosing a low-cost material, we encourage the user to create without worrying about perfection.
Overall Device: By using the physical paradigm of playing through arts and crafts, the user doesn't need to 'pre-learn' how to interact with the device. Instead they can learn about the relationship between color and sound as they naturally explore and create on the platform. Since we use both visual and audio cues, users who are hearing or visually impaired can still interact with the platform.

Constraints

Sensors: Camera – limited brightness, resolution, and frame rate; picks up the projector output on the paper (need to account for feedback).
Actuators: Speakers – limited loudness, no binaural/3D audio; Projector – brightness, resolution, frame rate, and the need to be precisely placed relative to the platform.
Other: Colored construction paper / watercolor / craft materials – since we're using analog input materials that users are familiar with, they'll be more likely to quickly adopt our platform.
Overall Device: Limited time and technical expertise were definitely constraints.

Software Implementation

In terms of software, our project was implemented in Max MSP. Emily wanted to learn more about visual programming and Max, and took this project as an opportunity to do just that. Max MSP was the perfect platform for the programming requirements of this project: the software is primarily used for video and sound processing and manipulation, which is exactly what Emily was looking to work with. The software is broken into three main components: image processing, image-to-sound conversion, and image output.

Image Processing

The software grabs frames from the webcam at a downsampled frame rate (approximately 6 frames/sec) and resolution (24×8 pixels). Then the saturation and brightness of the image are modified to extract and identify colors easily and to minimize noise from the background. Next, the current frame and the previous frame are compared to identify motion: if the pixel value change is great enough, the software registers that position as motion and turns that pixel white. Lastly, the software compares the color image with the motion alpha image and selects the minimum RGB value. That way, if no motion was detected at a pixel, it will be black, and if there was motion, that white pixel value is replaced by the color from the color-detection frame.
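The patch logic above translates readily outside Max MSP. As an illustration only, here is a rough Python/OpenCV equivalent of the pipeline; the saturation gain and motion threshold values are assumptions:

```python
# Hedged sketch of the image-processing stage: downsample, boost saturation,
# frame-difference for motion, then keep color only where motion occurred.
import cv2
import numpy as np

def process_frame(frame_bgr, prev_small, size=(24, 8), motion_thresh=40):
    small = cv2.resize(frame_bgr, size, interpolation=cv2.INTER_AREA)

    # boost saturation so the construction-paper colors are easy to isolate
    hsv = cv2.cvtColor(small, cv2.COLOR_BGR2HSV).astype(np.int16)
    hsv[..., 1] = np.clip(hsv[..., 1] * 2, 0, 255)
    color = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

    # frame difference -> white where motion, black elsewhere
    if prev_small is None:
        prev_small = small
    diff = cv2.absdiff(small, prev_small).max(axis=2)
    motion = np.where(diff > motion_thresh, 255, 0).astype(np.uint8)
    motion_bgr = cv2.merge([motion, motion, motion])

    # per-pixel minimum keeps color only where motion happened
    out = cv2.min(color, motion_bgr)
    return out, small
```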

The above clip shows the motion alpha channel combined with the ultra-saturated camera input.

Pixel to Sound Conversion

For the sound processing, Emily started by writing thresholding logic. For example, if the R pixel value was greater than 200 and the G and B pixel values were each less than 100, the sound processor would register that pixel as RED. There were also threshold values for GREEN, BLUE, and PINK. We avoided using colors that too closely match skin tones (like yellow, orange, and brown), since we didn't want the sound processor to pick up hand motion. When the sound converter registers a hit, it increases the volume on that color's track; if no hits are registered for a color in the frame, its volume slider fades out.
For the PINK color value, the sound sequencer introduces a looped track. This showcases that the user can assign sound loops that trigger when they move the loop-assigned color. For the RED, GREEN, and BLUE color values, there is an additional mapping of sound over the y-axis: if the user moves one of those colors toward the top of the page, the pitch of that instrument rises, and if they move it toward the bottom, the pitch decreases.
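Again as an illustration rather than the actual Max MSP patch, a sketch of the thresholding, fade, and y-axis pitch logic follows; the RED rule comes from the text above, while the other thresholds, the fade rate, and the data layout are assumptions:

```python
# Hedged sketch: classify pixels by color threshold, open/fade volume sliders,
# and derive pitch from the vertical position of each hit.
THRESHOLDS = {
    "RED":   lambda r, g, b: r > 200 and g < 100 and b < 100,
    "GREEN": lambda r, g, b: g > 200 and r < 100 and b < 100,  # assumed
    "BLUE":  lambda r, g, b: b > 200 and r < 100 and g < 100,  # assumed
    "PINK":  lambda r, g, b: r > 200 and b > 150 and g < 120,  # assumed
}

def classify(r, g, b):
    for name, rule in THRESHOLDS.items():
        if rule(r, g, b):
            return name
    return None

def update_mix(frame_rgb, volumes, height=8, fade=0.85):
    """frame_rgb: list of rows of (r, g, b); volumes: dict color -> 0..1."""
    hits = {}
    for y, row in enumerate(frame_rgb):
        for r, g, b in row:
            color = classify(r, g, b)
            if color:
                hits[color] = 1.0 - y / height   # higher on the page -> higher pitch
    for color in volumes:
        if color in hits:
            volumes[color] = 1.0                 # hit: open the volume slider
        else:
            volumes[color] *= fade               # no hit: fade out
    return hits  # hit positions double as the pitch control for R/G/B tracks
```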

Image Output

Lastly, the composited image is output to the reciprocal device, so the users can share and create animations together. An interesting side effect of this setup is that in a high-light situation, the projector output is dim enough to be filtered out of the camera feed (avoiding a feedback loop).

However, in a low-light situation (triggered by switching from high light to low light), the camera tries to detect color in the darkness and starts to "dream". Those colors are output to the projector and cause new colors to be seen by the camera, generating sound. In this low-light situation, the interaction paradigm changes: the user can now block or add light to the StoryBoard surface to contribute to the generative composition. For example, if the user blocks all the light on one table, the other table will eventually stop playing music. If the user shines light (from a flashlight or their phone), the feedback loop begins again, radiating out from where the light was applied.

Hardware Implementation


For the hardware implementation, we bought large cardboard boxes (24″×24″×45″) to create a projection surface that matched the size of the canvas. If this were an actual product, we would use a smaller form factor, but we were working with the projectors we already owned (as opposed to buying short-throw projectors or monitors to embed under the working surface). We cut PVC pipe to hold the camera above the canvas and secured the projectors at the base of the table. We painted the boxes white to make it look like we cared a bit about the aesthetics. We cut holes into the clear acrylic surface to create storage pots for the arts-and-crafts supplies. Lastly, we created demo props using construction paper, pipe cleaners, and popsicle sticks.

Final Experience, Lessons Learned

Overall, this was an awesome opportunity to learn more about telepresence, interaction design, image processing and Max MSP.

Process Pictures

Judith, Xiaojiao and Emily assembling the cardboard boxes


Yan securing the camera rig to the inside of the box


Preparing the Arts and Crafts Props


Team Photo as we carry our StoryBoard devices downstairs

Video

Final Project Template
http://m834.media.mit.edu/2017/12/05/final-project-template/ (Tue, 05 Dec 2017)

A base template for what you should post up here.
Project Name
Concept
Inspiration, Motivation
Description of Affordances, Design Decisions, Constraints 
Implementation (and links to code/related materials)
Final Experience, Lessons Learned
Process Pictures
Video
Attach your report here by end of day Sunday Dec 10

Taeseop Shin
http://m834.media.mit.edu/2017/11/01/taeseop-shin/ (Wed, 01 Nov 2017)

Hello, my name is Taeseop Shin, a second-year M.Arch student at MIT.
I am interested in collaboration between architecture and other fields such as technology, art, and the humanities.

TG7 | Mug-Mate
http://m834.media.mit.edu/2017/11/01/tg7-mug-mate/ (Wed, 01 Nov 2017)

Final presentation
https://docs.google.com/presentation/d/1ZfBWzsqvlTfcOSpAqTo_Xgfb-vm6qcraa6-5djavJys/edit?usp=sharing
First Presentation
https://docs.google.com/presentation/d/1HPAG3BRT-fEUdzM8V3kIZPa7sBrMfWX01kfbleM7678/edit?usp=sharing
Process Photos
https://photos.app.goo.gl/Zjwgyw4nRFFtHmR32
Google Drive (Images, Photos, CAD files, Videos)
https://drive.google.com/drive/folders/0B6JAFz9RI_A9NE5FUlNnd0hDTjg?usp=sharing
