StoryBoard | Group 6

Emily Salvador, Judith Sirera, Yan Liu, Xiaojiao Chen

Concept


We’ve created an interactive, collaborative desk for children that helps bring their school assignments to life.  More specifically, we’ve implemented an example module in which students collaborate with one another to animate a scene prepared by the teacher.
Students use the desk just as they would otherwise.  The camera is positioned so it doesn’t obstruct the student’s view of the front of the classroom, so they can clearly see the teacher and the whiteboard/projector.  Instead of typing on a laptop, students could write on their desks with styluses or use a regular notebook, since there would be no need for a laptop to take up space on their desk.
Students interact with the table by manipulating and designing physical objects.  At this point, the output is visual and sound-based, but we could imagine ‘smart’ primitives like Toio or the Transform topography being applied to this setup.  In our example implementation, kids use cutout pieces of construction paper to design shapes, assign their own sounds to them, and move the pieces around, all without risking damage to expensive equipment (construction paper is far less expensive than a computer monitor).

Inspiration

While ideating on our final concept, we researched the following topics: Color and Sound, Telepresence, Animation, and Education.  Below is a list of projects that inspired us.
Color and Sound:
https://www.kickstarter.com/projects/364756202/specdrums-music-at-your-fingertips/description
https://www.instagram.com/p/Bax0aGknUPy/?taken-by=hardwareux
https://www.instagram.com/p/BWIMccYALLq/?taken-by=hardwareux
https://musiclab.chromeexperiments.com/Kandinsky
Telepresence:
http://www8.hp.com/us/en/campaigns/sprout-pro/overview.html
https://vimeo.com/44544588
http://tangible.media.mit.edu/project/teamworkstation/
Animation:
https://turtleacademy.com/
https://scratch.mit.edu/
https://en.wikipedia.org/wiki/Adobe_After_Effects
Education:
https://www.playosmo.com/en/

Motivation

Our project is designed to be imaginative, generative, and fun.  Whether it’s art class or science class, concepts should be presented in more engaging, interactive, tangible ways.  Children should be allowed to experiment using a safe, dynamic medium.  Additionally, they should be able to share their creations and collaborate with other students in real time.  Many schools are shifting to purchasing computers and laptops for the classroom, but those can be distracting because they physically block students’ line of sight to the teacher.  Since the desks would be locally networked, the teacher can push content out to the class for the day’s lesson and more directly engage with each student.  There would also be more opportunity for collaboration than with a traditional laptop, which is designed to be a personal-sized device.  Children can work on their own desks or partner with each other or the teacher to collaborate on a single work surface.

Description of Affordances, Design Decisions, Constraints 

Affordances

Sensors: Camera – can capture x,y coordinates of a wide spectrum of RGB color values, can pick up the projector output on the paper (need to account for feedback)
Actuators: Speakers – output devices that allow you to hear sound; Projector – allows the user to experience I/O coincidence
Other: Colored Construction Paper/Watercolor/Craft Materials – since we’re using analog input devices that the users are familiar with, they’ll be more likely to quickly adopt our platform.  Also, by choosing a low-cost material, we’re encouraging the user to create without worrying about perfection.
Overall Device: By using the physical paradigm of playing through arts and crafts, the user doesn’t need to ‘pre-learn’ how to interact with our device. Instead, they can learn about the relationship between color and sound as they naturally explore and create on the platform.  Since we’re using both visual and audio cues, users who are hearing impaired or visually impaired can still interact with the platform.

Constraints

Sensors: Camera – brightness, resolution, framerate, can pick up the projector output on the paper (need to account for feedback)
Actuators: Speakers – limited loudness, no binaural/3D audio; Projector – brightness, resolution, framerate, needs to be precisely placed relative to the platform
Other: Colored Construction Paper/Watercolor/Craft Materials – since we’re using analog input devices that the users are familiar with, they’ll be more likely to quickly adopt our platform.
Overall Device: Limited time and technical expertise were definite constraints.

Software Implementation

In terms of software, our project was implemented in Max MSP.  Emily wanted to learn more about visual programming and Max, and took this project as an opportunity to do just that. Max MSP was the perfect platform for the programming requirements of this project: the software is primarily used for video and sound processing and manipulation, which is exactly what Emily was looking to work with.  The software is broken into three main components: image processing, pixel-to-sound conversion, and image output.

Image Processing

The software grabs frames from the webcam at a downsampled frame rate (approximately 6 frames/sec) and resolution (24×8 pixels).  Then, the saturation and brightness of the image are modified to make colors easy to extract and identify and to minimize noise from the background.  Next, the current frame and previous frame are compared to identify motion.  If the pixel value change is large enough, the software registers that position as motion and turns that pixel white.  Lastly, the software compares the color image with the motion alpha image and selects the minimum RGB value at each pixel.  That way, if no motion was detected at a pixel, it stays black, and if there was motion, that white pixel takes on the color from the color-detection frame.

The above clip shows the motion alpha channel combined with the ultra-saturated camera input.
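To make the pipeline concrete, here is a minimal sketch of the same chain written in Python with OpenCV instead of Max MSP.  It is an approximation for illustration only: the camera index, saturation/brightness scaling factors, and motion threshold below are assumptions, not the values tuned in the actual Max patch.

# Illustrative Python/OpenCV re-sketch of the Max MSP image-processing chain.
# The 24x8 working resolution and ~6 fps rate come from the write-up; the
# camera index, scaling factors, and motion threshold are assumed values.
import cv2
import numpy as np

CAP_W, CAP_H = 24, 8        # coarse working resolution from the write-up
MOTION_THRESH = 40          # per-pixel change needed to count as motion (assumed)

cap = cv2.VideoCapture(0)   # webcam index is an assumption
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Downsample to the coarse 24x8 grid (this also suppresses noise).
    small = cv2.resize(frame, (CAP_W, CAP_H), interpolation=cv2.INTER_AREA)

    # Boost saturation and brightness so colors separate cleanly from the background.
    hsv = cv2.cvtColor(small, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * 2.0, 0, 255)   # saturation
    hsv[..., 2] = np.clip(hsv[..., 2] * 1.2, 0, 255)   # brightness
    color = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

    # Frame difference -> binary motion mask (white wherever motion was detected).
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    if prev_gray is None:
        prev_gray = gray
    diff = cv2.absdiff(gray, prev_gray)
    _, motion = cv2.threshold(diff, MOTION_THRESH, 255, cv2.THRESH_BINARY)
    prev_gray = gray

    # Min-combine: black where there was no motion, saturated color where there was.
    composite = cv2.min(color, cv2.cvtColor(motion, cv2.COLOR_GRAY2BGR))

    cv2.imshow("composite", cv2.resize(composite, (480, 160),
                                       interpolation=cv2.INTER_NEAREST))
    if cv2.waitKey(160) & 0xFF == 27:   # ~6 frames/sec; press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()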

Pixel to Sound Conversion

For the sound processing, Emily started by writing thresholding logic.  For example, if the R pixel value was greater than 200, the G pixel value was less than 100, and the B pixel value was less than 100, then the sound processor would register that pixel as RED.  There were also thresholding values for the colors GREEN, BLUE, and PINK.  We avoided using colors that too closely match skin tones (like yellow, orange, and brown) since we didn’t want to pick up hand motion in our sound processor.  When the sound converter registers a hit, it increases the volume on that color value.  If no hits are registered for that color in the frame, the volume slider fades out.
For the PINK color value, the sound sequencer introduces a looped track.  This showcases that the user can assign a sound loop to a color and trigger it by moving that color.  For the RED, GREEN, and BLUE color values, there is an additional mapping of sound over the y-axis: if the user moves one of those colors to the top of the page, the pitch of that instrument rises, and if they move the color to the bottom of the page, the pitch decreases.
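Again, the real logic lives in the Max patch; the short Python sketch below just restates the color-to-sound rules explicitly.  Only the RED thresholds (R > 200, G < 100, B < 100) come from the description above: the GREEN, BLUE, and PINK rules, the fade rate, and the pitch normalization are assumed placeholder values.

# Sketch of the color-to-sound rules described above (the real version is a
# Max MSP patch).  Only the RED thresholds come from the write-up; the other
# rules, the fade rate, and the pitch mapping are assumptions for illustration.
import numpy as np

FADE = 0.85   # per-frame volume decay when a color isn't detected (assumed)

COLOR_RULES = {
    "RED":   lambda r, g, b: r > 200 and g < 100 and b < 100,   # from the write-up
    "GREEN": lambda r, g, b: g > 200 and r < 100 and b < 100,   # assumed
    "BLUE":  lambda r, g, b: b > 200 and r < 100 and g < 100,   # assumed
    "PINK":  lambda r, g, b: r > 200 and b > 150 and g < 120,   # assumed
}

volumes = {name: 0.0 for name in COLOR_RULES}

def process_frame(composite):
    """composite: HxWx3 uint8 BGR frame, black except where motion was detected."""
    h = composite.shape[0]
    hits = {name: [] for name in COLOR_RULES}

    # Classify every non-black pixel against each color rule.
    for y, x in zip(*np.nonzero(composite.any(axis=2))):
        b, g, r = composite[y, x].astype(int)
        for name, rule in COLOR_RULES.items():
            if rule(r, g, b):
                hits[name].append(y)

    # Open the volume slider on a hit, otherwise fade that color out.
    for name in COLOR_RULES:
        volumes[name] = 1.0 if hits[name] else volumes[name] * FADE

    # RED/GREEN/BLUE also map vertical position to pitch (top of page = higher pitch).
    pitches = {}
    for name in ("RED", "GREEN", "BLUE"):
        if hits[name]:
            pitches[name] = 1.0 - float(np.mean(hits[name])) / max(h - 1, 1)
    # PINK instead toggles a looped track while its volume is up (handled by the sequencer).
    return volumes, pitches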

Image Output

Lastly, the composited image is output to the reciprocal device, so the users can share and create animations together.  An interesting side effect of this setup is that in a high-light situation, the projector output is dim enough to be filtered out of the camera feed (avoiding a feedback loop).

However, in a low-light situation (triggered by switching from high light to low light), the camera tries to detect color in the darkness and starts to ‘dream.’  Those colors are output to the projector and cause new colors to be seen by the camera, generating sound.  In this low-light situation, the interaction paradigm changes: now the user can block and add light to the StoryBoard surface to contribute to the generative composition.  For example, if the user blocks all the light on one table, the other table will eventually stop playing music.  If the user shines light (from a flashlight or their phone), the feedback loop begins again, radiating out from where the light was applied.

Hardware Implementation


For the hardware implementation, we bought large cardboard boxes (24″×24″×45″) to create a projection surface that matched the size of the canvas.  If this were an actual product, we would use a smaller form factor, but we were working with the projectors we already owned (as opposed to buying short-throw projectors or monitors to embed under our working surface).  We cut PVC pipe to hold the camera above the canvas and secured the projectors at the base of the table.  We painted the boxes white to make it look like we cared about the aesthetic a bit.  We cut holes into the clear acrylic surface to create storage pots for the arts and crafts supplies.  Lastly, we created demo props using construction paper, pipe cleaners, and popsicle sticks.

Final Experience, Lessons Learned

Overall, this was an awesome opportunity to learn more about telepresence, interaction design, image processing and Max MSP.

Process Pictures

Judith, Xiaojiao and Emily assembling the cardboard boxes


Yan securing the camera rig to the inside of the box


Preparing the Arts and Crafts Props


Team Photo as we carry our StoryBoard devices downstairs

Video

E-MOTION | Group 6

Emily Salvador, Johae Song, Xiyao Wang, Xueting Wu

Concept

We explored how to convey emotional telepresence.


We created a device that allows you to reflect on your affective state and share it with a loved one or friend over long distances.  Select one of four emotional states that describe your mood [LOVE, HAPPY, SAD, ANGRY].  If your emotional state matches your partner’s emotional state, you’ll receive special animations to signal that you and your partner are sharing the same feeling.  If you’re curious about how your partner is feeling, rotate your cube to try to match their emotion.  If you turn the cube on its side, it’s in ‘DO NOT DISTURB’ mode, so the animations won’t wake you up at night or distract you while you’re busy.

We created a rough sketch of how we would embed the fan inside our cube scaffolding.


The two sides of the cube have accessible hand grips for rotating the device and look different from the faces of the cube that convey emotion.  Each emotional face is clearly labeled so the user knows which emotional state they’re trying to share.

Affordances

While in the research stage of our project, we brainstormed the affordances and constraints of the various components of our overall device.  Those components were the Photon, the sensor (accelerometer), and the actuator (fan).

The Photon with Phobot Shield


The Photon allows devices to communicate with one another wirelessly.  Additionally, there’s an LED on the Photon, which is useful for debugging and for conveying binary states.  There are some constraints with using the Photon, however.  Messages can only be sent at one-second intervals, and the message format was limited by the requirements of the assignment: for this project, we were only allowed to send a single array of three integers, each bounded between 0 and 255.
 

MMA8451 Accelerometer


Our team used the accelerometer for our sensor.  The accelerometer allowed us to measure three separate acceleration values along the x-axis, y-axis, and z-axis.  Those values ranged from -8000 to 8000, so we had to rescale them to fit within the 0 to 255 constraint.  If we didn’t, we wouldn’t be able to interface our project with other groups’ projects.  Additionally, the accelerometer can measure orientation (while still).  For example, [0, 0, -Z_Gravity] would correspond to the board lying face up, with a negative Z acceleration due to gravity.  The accelerometer can also measure motion; however, because the Photon only sends messages at one-second intervals, we decided to focus on orientation to minimize latency in our system.
Our team used the fan motor (FAD1-06025) for our actuator.  The fan can move at different speeds, in the forward and reverse directions.  The fan can also be animated by pulsing at different frequencies.  For example, we could create a heartbeat profile by stopping and starting the fan in a fast, slow, fast, slow pattern.  The motor can also be stopped completely using the stop() function.  We can use the motor to control external components if they’re light enough to be moved by the fan’s airflow.  Additionally, we could use the fan to hover or propel a component, if it’s light enough.  The fan can also be used to inflate closed, stretchy materials.  The biggest constraints of the fan are that it’s not very powerful and that it’s challenging to predict the exact motion of the airflow.
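The actual logic runs as Wiring/C++ firmware on the Photon; the short Python sketch below only illustrates the math described above: rescaling raw MMA8451 readings from the -8000 to 8000 range into the 0 to 255 message range, and classifying a resting orientation from the dominant gravity axis.  The mapping of faces to emotions shown here is purely an illustrative assumption.

# Illustrative sketch (the real version is Photon firmware) of rescaling raw
# MMA8451 readings into the 0-255 message range and classifying which face of
# the cube points up from the dominant gravity axis.

RAW_MIN, RAW_MAX = -8000, 8000   # raw accelerometer range from the write-up

def to_message_range(raw):
    """Clamp and rescale one raw axis reading into the 0-255 payload constraint."""
    raw = max(RAW_MIN, min(RAW_MAX, raw))
    return round((raw - RAW_MIN) * 255 / (RAW_MAX - RAW_MIN))

def classify_orientation(x, y, z):
    """Return which face points up, based on the axis with the largest reading.

    The face-to-emotion assignment below is an assumption for illustration; in
    the device, the two side (grip) faces map to DO NOT DISTURB.
    """
    axis, value = max((("x", x), ("y", y), ("z", z)), key=lambda p: abs(p[1]))
    faces = {
        ("z", 1): "LOVE",  ("z", -1): "ANGRY",            # example assignment
        ("y", 1): "HAPPY", ("y", -1): "SAD",
        ("x", 1): "DO NOT DISTURB", ("x", -1): "DO NOT DISTURB",
    }
    return faces[(axis, 1 if value >= 0 else -1)]

# Example: board lying flat with gravity pulling along -z.
payload = [to_message_range(v) for v in (0, 0, -8000)]
print(payload, classify_orientation(0, 0, -8000))   # [128, 128, 0] ANGRY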
We tried to design our device with affordances in mind.  After the check-in presentations, the TAs mentioned that we should think about how to make it clear how users can interact with the device.  We added circular grips to the side panels to indicate that the device can be rotated.  Additionally, the emotional panels look very different from the side panels.  If you turn the device on its side, it goes into “DO NOT DISTURB” mode, which makes sense given that there are no emotional cues on the rotation sides of the cube.  We also wanted this device to be safe, so we completely enclosed the fan so users couldn’t stick their fingers anywhere they could be injured.

Implementation

When brainstorming ideas, we wanted to explore something that conveyed emotional telepresence.  Additionally, we decided early on that we wanted to focus on the orientation of the device rather than motion, since that would convey telepresence over a longer period of time than discrete movements.  That led us to the idea of using a cube as the scaffolding for our interactive telepresence device.

For the physical prototype we wanted to make sure we could create a mechanism that could rotate within the cube to keep the fan orientation constant.

To prototype the cube, Johae and Xiyao designed and built a working model using cardboard and metal rods.  They worked out issues like how to weight the inner mechanism correctly so the fan always points forward.  On the programming side, Emily worked on defining functions that would allow the team to debug quickly and on determining whether orientation was a good way to use the accelerometer.  Additionally, we added a PCB connector board to our accelerometer to make it easy to attach and detach from the Photon.  Xueting designed graphics to help us determine how we would convey the emotional signaling on our device.
We attempted to use paper flaps with the fan, but during prototyping we realized the fan just wasn’t strong enough.  While trying other ideas, we discovered that we could use the fan to power a propeller on the face of the cube.  With that we switched tactics to focus on creating a mechanism for spinning the propeller super fast so the user would see a visualization of their emotion when activated.

Our final device was fabricated with black acrylic.


Our final implementation was made out of black acrylic for a finished look and structural integrity.  For the hardware, Johae and Xiyao worked to make our final project functional and polished.  On the software side, Emily programmed the Photons to trigger the fans at full strength when the accelerometers were in the same orientation.  Visually, if two cubes are oriented the same way, the fans trigger, causing the propellers to spin and display a 3D visualization of the emotion.  Xueting added symmetrical graphics, so when the propeller spun, the shape would be visible in 3D and show the orientation of the device.
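As a rough sketch of that trigger rule (the real version is Photon firmware, and the orientation labels are the illustrative ones from the earlier sketch): when the locally measured orientation matches the one most recently received from the partner cube, the fan runs at full strength; otherwise it stays stopped, and a side-up cube suppresses animations entirely.

# Sketch of the match-and-trigger rule (the real version runs on the Photons).
FAN_MAX = 255   # full-strength fan value (assumed PWM-style scale)

def fan_command(local_orientation, remote_orientation):
    """Return the fan speed given both cubes' orientation labels."""
    if "DO NOT DISTURB" in (local_orientation, remote_orientation):
        return 0            # side-up: no animations at all
    if local_orientation == remote_orientation:
        return FAN_MAX      # shared emotion: spin the propeller
    return 0                # otherwise keep the fan stopped

print(fan_command("HAPPY", "HAPPY"))   # 255
print(fan_command("HAPPY", "SAD"))     # 0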

Emily M Salvador
Hi, I’m Emily!  I’m a first year MAS student in the Object Based Media Group!
While I was in undergrad at MIT, I was a UROP student in the Tangible Media Group.  After graduating, I spent a year at my dream job working in themed entertainment.  I started at Walt Disney Imagineering’s R&D department in California, then moved to Florida to help implement tangible and digital interactives* at Pandora, The World of Avatar at Disney’s Animal Kingdom.  Over the summer, I was working on tangible, musical interactives for the Super Nintendo World project with Universal Creative.
Physical space utility is a huge priority in the themed entertainment industry.  People travel to places like Universal Studios and Walt Disney World because they want to be completely (physically, visually, and sensorially) immersed in worlds that can’t exist anywhere else.
I would love to create a large-scale, interactive, collaborative storytelling platform that encourages imagination and play.  I think it’d be fun to make something magical.
The future looks social and augmented.  There seems to be a huge emphasis from the tech world right now on bringing consumers out of their homes and into physical spaces.  Amazon’s acquisition of Whole Foods allows the company to refine, redefine, and socialize its shopping user flow in a physical location.  I imagine that sometime soon, they’ll begin experimenting with IoT-type retail experiences that would be portable to other stores.  Apple’s head of retail has been quoted as wanting to refresh the Apple Store experience: they want people coming to their stores and collaborating on larger devices with friends in a social context.  Snapchat recently added a map feature that allows users to localize themselves within their physical environment.  I wouldn’t be surprised if there’s special AR content in the pipeline that encourages users to get out of their phones and into the physical world.  I’d love to live in a future of augmented, communal experiences.
*interactives in the themed entertainment industry can be defined as physical and/or digital environments and content that can be influenced by the presence of a user or group of users.
