TraceForm physicalizes spatial visualizations to aid visually impaired students in a learning environment.
Using the tangible medium “Transform”, we can turn two-dimensional digital representations of plots, graphs and sketches into physical representations that visually impaired students can interact with. Such students typically use screen reader software to browse digital content; on encountering a graph, the reader either skips it automatically or reads out a simplistic label such as “graph for velocity and time”. Not being able to easily understand the information a graph communicates can greatly stunt a student’s progress. We propose a software and hardware system that accompanies the screen reader on a web page and, upon encountering a graph, offers a physicalized version of that graph on a nearby TraceForm. It then animates the given scenario as a tangible, dynamic representation.
Graphical representations are a key element of our everyday lives and education. Coordinate systems, bar charts and similar plots are essential tools for explaining a broad range of concepts in mathematics, physics, economics and the social sciences, to name a few. With the rise of graphical user interfaces (GUIs), the cost of interacting with, altering and visualising data has approached zero, creating a great tool for understanding abstract concepts such as motion profiles or curve derivatives. In this evolution, however, visually impaired students are left even further behind.
We have developed TraceForm with the aim of providing a tool that lets visually impaired students benefit from being able to dynamically explore and alter spatial representations.
In our initial prototyping, before we decided to focus on visually impaired students as our user group, we faced challenges in clearly communicating how to use the platform’s UI. In this early work we experimented with using the Transform as a “physical” calculator that would provide a better understanding of concepts such as division and multiplication. In most scenarios, however, interacting with the Transform was not intuitive: its low resolution and the uniform shape and color of the pins provided no signifiers for how to use the device.
Furthermore, the pins afford pushing rather than pulling, since grabbing a pin that sits level with its neighbours is difficult. In this configuration, however, it becomes hard to get a visual overview of the inputs, and pressing individual pins is somewhat awkward.
We concluded that, in general, using the Transform as an interface for inputting and modifying data was cumbersome and, in most respects, inferior to any GUI.
At this point we shifted our focus to visually impaired users and the more “passive” application of converting illustrations, diagrams and plots into representations on the Transform. This removed the need for an interface on which the user constructs scenarios; instead, we could focus on translating graphical spatial representations onto the Transform and allowing simple investigative operations to be performed.
One of the Transform’s greatest strengths is its ability to dynamically alter its representation in real time, so we chose to focus on the topic of motion (basic physics of linear uniform velocity and acceleration). The programmable pins allowed us to mimic different types of motion: constant and accelerated movement, fast and slow velocities, and so forth.
The constraint of an array of dynamic pins is that each pin requires an individual motor, which limits the resolution of the table. As a result, in one of our scenarios, which compares an accelerating object with one moving at constant velocity, the difference was at first hard for our blind tester to notice. Although it is easy to spot visually, the fact that only a few test points can be felt simultaneously made it difficult to compare a subtle local trend.
To spend the majority of our effort on concept prototyping rather than technical implementation, we chose to control the Transform in “scripted mode”, meaning that we pre-programmed certain scenarios which were then played back on the table. The Transform accepts a simple grayscale video as input; each individual table has a resolution of 24×16 px, and the gray value determines the height of each pin (with thresholds from 50 to 210). We used a simple online program, PixelArt, to draw our scenarios pin by pin, created a storyboard within the app, and turned it into a GIF.
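Drawing frames by hand works, but the gray-to-height mapping is simple enough to script. Purely as an illustration (our scenarios were actually drawn in PixelArt), here is a minimal C++ sketch that fills one 24×16 frame of a constant-velocity p-t plot; the frame size and the 50–210 gray range come from the description above, while the curve and its units are assumptions.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>

// One Transform tile: 24 x 16 pins, driven by grayscale values.
constexpr int W = 24, H = 16;
constexpr uint8_t GRAY_MIN = 50, GRAY_MAX = 210;   // height thresholds noted above

// Map a normalized pin height (0 = fully down, 1 = fully up) to a gray value.
uint8_t heightToGray(float h) {
    h = std::clamp(h, 0.0f, 1.0f);
    return static_cast<uint8_t>(GRAY_MIN + h * (GRAY_MAX - GRAY_MIN));
}

// Fill one frame of a p-t graph for constant velocity: position grows linearly
// with time. Columns are time, rows are position; one raised pin per column.
std::array<uint8_t, W * H> constantVelocityFrame(float velocity) {
    std::array<uint8_t, W * H> frame{};            // 0 = background, pins down
    for (int x = 0; x < W; ++x) {
        int y = static_cast<int>(velocity * x);    // position at time x (assumed units)
        if (y >= 0 && y < H)
            frame[(H - 1 - y) * W + x] = heightToGray(1.0f);  // origin at bottom-left
    }
    return frame;
}
```

Writing one such frame per time step and assembling the frames into a grayscale GIF would reproduce the scripted-mode input format described above.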
With the help of our blind test user, Matthew Shifrin, we used the pre-programmed scenarios to test different types of interaction. He provided valuable feedback and ideas on pin heights, how the plots were mapped, and what should happen when interacting with the pins by tapping, holding or performing other actions. Because new scenarios could be swapped in so easily, we were able to adapt, implement and test different types of operations over a session of a few hours with Matthew.
The final concept was built around an online resource on p-t graphs (physics, motion) found here: http://www.physicsclassroom.com/class/1DKin/Lesson-3/The-Meaning-of-Shape-for-a-p-t-Graph.
As the user listens to the text through a text-to-speech program, the software prompts whether the user would like the encountered images physicalized on TraceForm. We let our test user interact with the webpage as he would normally browse a new source of information, using his preferred text-to-speech software, and then provided the physicalizations of the pre-programmed scenarios as he progressed through the page.
Through these tests we were able to identify a range of important learnings.
In general, the feedback on TraceForm was very positive, and our test user expressed his sincere belief that, had he had such a system when he was learning physics, it would have saved him countless frustrating hours.
The final concept can be seen below.
We’ve combined important sections of our user testing in four brief videos:
Ideas on audio cues and initial reaction
Suggestions for new features
Using the forearm to sense trends
General feedback
Caroline Rozendo, Irmandy Wicaksono, Jaleesa Trapp, Ring Runzhou Ye, Ashris Choundhury
A strand-like object that enables users to explore sound by creating shapes. The user can explore the possibilities suggested by its signifiers and model both iconic instruments and their own ideas of playable things.
“Music is not limited to the world of sound. There exists a music of the visual world.” – Oskar Fischinger, 1951
This quote by Oskar Fischinger motivated us to explore the music of the visual world and to extend that exploration to the physical world. Projects we looked to for inspiration include lineFORM, which inspired the shape of our product; soundFORM, which inspired us to manipulate sound through shapes; and the Fabric Keyboard, which inspired us to build a multi-functional, deformable musical interface for sound production and manipulation.
The long, bendable strand allows the user to form it into various shapes. The instrument’s sensitivity to proximity, taps, and shakes prompts the user to immediately realize its musical behaviour. As the user explores different shapes, the instrument creates different sounds, inviting further exploration of the relationship between shape and sound.
We used physical sensors embedded inside the strand instead of computer vision, since vision sensing would limit the working area of interaction and reduce the instrument’s expressiveness and functionality for users. We would like the user to have freedom in shaping and interacting with this tangible musical interface. However, since each sensor is currently wired to the circuitry in our initial prototype, the instrument’s mobility is limited to 50 cm.
Since an array of uni-directional bend sensors is used on both sides of the strip, the instrument is constrained to 2D shape-making. In addition, the number of hinges, currently ten, limits the variety of shapes that can be formed.
As illustrated in the figure above, the strand consists of multiple layers encapsulated in a soft fabric. A long brass strip is used as the base layer and retains its shape when bent. An array of short styrene blocks on each side of the strip provides structural support for the 20 flex sensors and their wiring, as well as side grooves for the Nitinol wires, two on each side. At the end of the strip, an accelerometer, a proximity sensor, and a magnet are attached and protected by foam.
The flex sensors enable the instrument to detect its own shape; the accelerometer lets it detect taps and shakes (e.g. tambourine, triangle, drum); the proximity sensor allows touch sensing around the inside of the 2D shape (e.g. piano, harp); and the magnets let the two ends of the strand snap together to complete the 2D shape.
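The report does not specify the accelerometer part or its wiring, so the following is only a sketch of how the tap and shake detection might work: a threshold on the change in acceleration magnitude between consecutive readings. The analog pins A0–A2, the sampling rate, and the threshold value are placeholders, not the project’s actual setup.

```cpp
// Minimal tap/shake detector: flags a "hit" when the acceleration magnitude
// jumps by more than a threshold between two consecutive readings.
// A0-A2 and TAP_THRESHOLD are assumptions, not values from the project.
const int AX = A0, AY = A1, AZ = A2;
const float TAP_THRESHOLD = 120.0;   // raw ADC units, tune empirically

float lastMag = 0;

void setup() {
  Serial.begin(115200);
}

void loop() {
  float x = analogRead(AX), y = analogRead(AY), z = analogRead(AZ);
  float mag = sqrt(x * x + y * y + z * z);
  if (fabs(mag - lastMag) > TAP_THRESHOLD) {
    Serial.println("tap");           // in the instrument this would trigger a MIDI note
  }
  lastMag = mag;
  delay(10);                         // ~100 Hz sampling
}
```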
In the Ableton Live screen capture below, a percussion instrument is activated. By dragging and dropping a subset or changing the shape of the strand, the user can play different types of percussion instruments, such as tambourine, vibra, cabasa, chimes, conga, and cowbell. The current setting plays a high-pitched bongo.
An Arduino Due is used as the microcontroller: it reads the flex sensors, proximity sensor, and accelerometer, converts the readings into MIDI messages, and sends these messages to Ableton Live 9 over Serial USB.
Since we use a large number of flex sensors (about 20), two multiplexers (CD74HC4067) are used, one for each side’s array of ten sensors. The microcontroller addresses the same input channel on both multiplexers simultaneously and then reads them through two ADC pins. Because roughly 3 A is needed to power the Nitinol wires for the shape-changing effects, a transistor circuit connected to an external supply (a 6 V battery) was built and can be switched digitally by the microcontroller.
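A minimal Arduino-style sketch of this scanning-and-MIDI pipeline is shown below. The shared select lines, pin numbers, bend threshold, note mapping, and the assumption of a serial-to-MIDI bridge on the host are all illustrative choices, not the project’s exact firmware.

```cpp
// Scan 2 x 10 flex sensors through two CD74HC4067 multiplexers sharing select
// lines, and send a MIDI note-on when a sensor bends past a threshold.
// Pin numbers, thresholds, and the note mapping are placeholders.
const int SELECT_PINS[4] = {2, 3, 4, 5};   // S0-S3, shared by both muxes
const int ADC_LEFT  = A0;                  // left-side mux signal pin
const int ADC_RIGHT = A1;                  // right-side mux signal pin
const int NUM_CHANNELS = 10;
const int BEND_THRESHOLD = 600;            // raw ADC value, tune per sensor

bool wasBent[2][NUM_CHANNELS] = {};

void selectChannel(int ch) {
  for (int b = 0; b < 4; ++b)
    digitalWrite(SELECT_PINS[b], (ch >> b) & 1);
}

void sendNoteOn(byte note, byte velocity) {
  // Sent as raw MIDI bytes over USB serial; a serial-to-MIDI bridge
  // (assumed on the host) forwards them to Ableton Live.
  Serial.write(0x90);      // note-on, channel 1
  Serial.write(note);
  Serial.write(velocity);
}

void setup() {
  for (int b = 0; b < 4; ++b) pinMode(SELECT_PINS[b], OUTPUT);
  Serial.begin(115200);
}

void loop() {
  for (int ch = 0; ch < NUM_CHANNELS; ++ch) {
    selectChannel(ch);
    delayMicroseconds(5);                        // let the mux output settle
    int readings[2] = { analogRead(ADC_LEFT), analogRead(ADC_RIGHT) };
    for (int side = 0; side < 2; ++side) {
      bool bent = readings[side] > BEND_THRESHOLD;
      if (bent && !wasBent[side][ch])
        sendNoteOn(60 + side * NUM_CHANNELS + ch, 100);  // placeholder note mapping
      wasBent[side][ch] = bent;
    }
  }
}
```

Sharing the four select lines between the two multiplexers keeps the pin count low: each loop scans ten channels once and reads both sides in parallel on the two ADC inputs, matching the scheme described above.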
The mapping of shapes to different instruments takes inspiration from a number of articles and research experiments suggesting that the way we connect auditory and visual-tactile perception is not completely arbitrary.
In the “Kiki-Bouba” experiment, first conducted by Kohler in 1929 and replicated several times since, people were asked to match a spiky, angular shape and a round, smooth shape to the nonsense words “kiki” and “bouba”. There was a strong preference to associate “bouba” with the round, smooth shape and “kiki” with the angular shape. In a repetition of Kohler’s experiment conducted in 2001 by Vilayanur S. Ramachandran and Edward Hubbard, this preference was shared by at least 95% of the subjects.
In an article published by Nature in 2016, Chen, Huang, Woods and Spence go a little further, investigating the assumption that this correspondence is universal and analyzing the associations of frequency or amplitude with shapes.
By allowing the user to shape the objects in search of the pitch and sound they would like to play with, we use a characteristic of perception to improve the intuitiveness of the interface.
As future work, we would like to extend the strand to allow more joints and more possible shapes. We are also interested in integrating more sensors to expand the strand’s functionality as a musical instrument. We will further explore the use of shape-memory alloy to incorporate a shape-changing feature, which would let the instrument shape itself automatically, help users reach a target or iconic shape, or, even more excitingly, adapt its shape and learn its mapping from a given sound. We also plan to improve the instrument’s usability: by making it wireless, users will be able to fully express themselves and explore the relationship between sound and shape. Finally, we would like to collaborate with musicians and sound artists to demonstrate the capability of this novel deformable instrument through musical performance or composition.
https://youtu.be/TEZTWOAdeP4
Kyung Yun Choi, Darle Shinsato, Shane Zhang, Lingxi Li
Keywords: Tangible Memory, Reminisce, Notebook, Ambiance, Shape-changing display, Origami
Final Presentation Slides
A device that records ambiance (in the form of ambient noise) and translates it into tactile movement via a shape-changing sheet.
The recorded ambiance is embedded in the sheet and can be replayed to reminisce about a moment or memory of a place.
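The report does not detail the electronics, so purely to illustrate the record-and-replay idea, here is a minimal Arduino-style sketch that samples a coarse loudness envelope from an analog microphone and later replays it by driving a hobby servo attached to the sheet. The microphone on A0, the servo on pin 9, the sample rate, and the buffer length are all assumptions.

```cpp
#include <Servo.h>

// Illustrative only: record an ambient-noise envelope, then replay it as motion.
// Pins, sample rate, and buffer length are assumptions, not the project's design.
const int MIC_PIN = A0;
const int SERVO_PIN = 9;
const int NUM_SAMPLES = 300;        // ~30 s at 10 samples per second
const int SAMPLE_MS = 100;

int envelope[NUM_SAMPLES];
Servo sheetActuator;

void setup() {
  sheetActuator.attach(SERVO_PIN);

  // Record: store a coarse loudness envelope of the surroundings.
  for (int i = 0; i < NUM_SAMPLES; ++i) {
    envelope[i] = analogRead(MIC_PIN);      // 0-1023
    delay(SAMPLE_MS);
  }
}

void loop() {
  // Replay: map each stored loudness value to an actuator angle,
  // so the sheet moves the way the place once sounded.
  for (int i = 0; i < NUM_SAMPLES; ++i) {
    int angle = map(envelope[i], 0, 1023, 0, 180);
    sheetActuator.write(angle);
    delay(SAMPLE_MS);
  }
}
```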
“ We feel that current technology should be merged and coupled with tangible artefacts and physical elements, in order to provide more engaging experiences and overcome some of the limitation of mobile technology. ”
– Luigina Ciolfi & Marc McLoughlin
Today, we experience memories through digital means (i.e. videos and photos)
Current devices rely on visual and auditory recordings to capture moments
What if these memories were translated into a tangible medium?
AFFORDANCES
Implementation (and links to code/related materials)
Final Experience, Lessons Learned
Video
https://www.dropbox.com/s/nl0jz7mp5ykhjyn/thirdeye.mp4?dl=0
Emily Salvador, Judith Sirera, Yan Liu, Xiaojiao Chen
We’ve created an interactive, collaborative desk for children that helps bring their school assignments to life. More specifically, we implemented an example module in which students collaborate with one another to animate a scene prepared by the teacher.
Students use the desk just as they would otherwise. The camera is designed not to obstruct the view of the front of the classroom, so the student can clearly see the teacher and the whiteboard or projector. Instead of typing on a laptop, students can write on their desks with styluses or use a regular notebook, since no laptop needs to take up space on the desk.
Students interact with the table by manipulating and designing physical objects. At this point the output is visual and sound based, but we could imagine ‘smart’ primitives like Toio or the Transform topography being applied to this setup. More specifically, in our implementation, by using cut-out pieces of construction paper, kids can design and assign their own sounds to shapes, move the pieces around, and don’t risk damaging expensive equipment (construction paper is far less expensive than a computer monitor).
While ideating on our final concept, we researched the following topics: Color and Sound, Telepresence, Animation, and Education. Below is a list of projects that inspired us.
Color and Sound:
https://www.kickstarter.com/projects/364756202/specdrums-music-at-your-fingertips/description
https://www.instagram.com/p/Bax0aGknUPy/?taken-by=hardwareux
https://www.instagram.com/p/BWIMccYALLq/?taken-by=hardwareux
https://musiclab.chromeexperiments.com/Kandinsky
Telepresence:
http://www8.hp.com/us/en/campaigns/sprout-pro/overview.html
https://vimeo.com/44544588
http://tangible.media.mit.edu/project/teamworkstation/
Animation:
https://turtleacademy.com/
https://scratch.mit.edu/
https://en.wikipedia.org/wiki/Adobe_After_Effects
Education:
https://www.playosmo.com/en/
Our project is designed to be imaginative, generative and fun. Whether in art class or science class, concepts should be presented in more engaging, interactive, tangible ways. Children should be allowed to experiment using a safe, dynamic medium, and they should be able to share their creations and collaborate with other students in real time. Many schools are shifting toward purchasing computers and laptops for the classroom, but those can be distracting because they physically block students’ line of sight to the teacher. Since the desks would be locally networked, the teacher can push content out to the class for the day’s lesson and engage more directly with each student. There would be more opportunity for collaboration than with a traditional laptop, since laptops are designed to be personal-sized. Children can work on their own desks or partner with each other or the teacher to collaborate on a single work surface.
Sensors: Camera – can capture x,y coordinates of a wide spectrum of RGB color values, can pick up the projector output on the paper (need to account for feedback)
Actuators: Speakers – output device that allows you to hear sound, Projector – allows user to experience i/o coincidence
Other: Colored Construction Paper/Watercolor/Craft Materials – since we’re using analog input devices that the users are familiar with, they’ll be more likely to quickly adopt our platform. Also, by choosing a low-cost material, we’re encouraging the user to create without worrying about perfection.
Overall Device: By using the physical paradigm of playing through arts and crafts, the user doesn’t need to ‘pre-learn’ how to interact with our device. Instead they can learn about the relationship between color and sound as they naturally explore and create on the platform. Since we’re using both visual and audio cues, if the user is hearing or visually impaired, they can still interact with the platform.
Sensors: Camera – brightness, resolution, framerate, can pick up the projector output on the paper (need to account for feedback)
Actuators: Speakers – are they loud enough/no binaural/3D audio, Projector – brightness, resolution, framerate, needs to be precisely placed relative to the platform
Other: Colored Construction Paper/Watercolor/Craft Materials – since we’re using analog input devices that the users are familiar with, they’ll be more likely to quickly adopt our platform.
Overall Device: Time and a lack of technical expertise were definite constraints
In terms of software, our project was implemented in Max MSP. Emily wanted to learn more about visual programming and Max, and took this project as an opportunity to do just that. Max MSP was well suited to the programming requirements of this project, as it is primarily used for video and sound processing and manipulation, which is exactly what Emily was looking to work with. Our software is broken into three main components: image processing, image-to-sound conversion, and image output.
The software grabs frames from the webcam at a downsampled frame rate (approximately 6 frames/s) and resolution (24×8 pixels). The saturation and brightness of the image are then adjusted to make colors easy to extract and identify and to minimize background noise. Next, the current and previous frames are compared to identify motion: if a pixel’s value changes enough, the software registers motion at that position and turns the pixel white. Lastly, the software compares the color image with the motion alpha image and takes the per-pixel minimum of the RGB values, so a pixel with no detected motion stays black, while a white motion pixel takes on the color from the color-detection frame.
The above clip shows the motion alpha channel combined with the ultra-saturated camera input.
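The patch itself is built in Max MSP, but the per-pixel logic is straightforward. Re-expressed here as a plain C++ sketch for readability, assuming 24×8 RGB frames stored as flat byte arrays and a placeholder motion threshold:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Downsampled frame size from the description above; RGB, 8 bits per channel.
constexpr int W = 24, H = 8, CH = 3;
using Frame = std::vector<uint8_t>;   // size W * H * CH, flat layout

// Step 1: frame differencing. A pixel that changed enough since the previous
// frame becomes white in the motion mask, otherwise it stays black.
Frame motionMask(const Frame& prev, const Frame& curr, int threshold /* placeholder */) {
    Frame mask(W * H * CH, 0);
    for (int p = 0; p < W * H; ++p) {
        int diff = 0;
        for (int c = 0; c < CH; ++c)
            diff += std::abs(int(curr[p * CH + c]) - int(prev[p * CH + c]));
        if (diff > threshold)
            for (int c = 0; c < CH; ++c) mask[p * CH + c] = 255;
    }
    return mask;
}

// Step 2: composite by taking the per-channel minimum of the saturated color
// frame and the motion mask: no motion -> black, motion -> the detected color.
Frame composite(const Frame& color, const Frame& mask) {
    Frame out(W * H * CH);
    for (int i = 0; i < W * H * CH; ++i)
        out[i] = std::min(color[i], mask[i]);
    return out;
}
```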
For the sound processing, Emily started by writing thresholding logic. For example, if the R value of a pixel was greater than 200 and the G and B values were each less than 100, the sound processor would register that pixel as RED. There were also threshold values for GREEN, BLUE, and PINK. We avoided colors that too closely match skin tone (like yellow, orange, and brown), since we didn’t want hand motion to show up in the sound processor. When the sound converter registers a hit, it increases the volume of that color’s track; if no hits are registered for that color in a frame, the volume slider fades out.
For the PINK color value, the sound sequencer introduces a looped track, showcasing that the user can assign sound loops that trigger when they move the color assigned to that loop. For the RED, GREEN, and BLUE color values, there is an additional mapping of sound over the y-axis: if the user moves one of those colors toward the top of the page, the pitch of that instrument rises, and if they move it toward the bottom, the pitch falls.
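In the same spirit, the color thresholding and the y-axis pitch mapping described above can be sketched in C++. The RED rule uses the thresholds given above; the GREEN, BLUE, and PINK rules and the MIDI note range are assumed for illustration.

```cpp
#include <cstdint>

enum class Hit { NONE, RED, GREEN, BLUE, PINK };

// Classify one pixel. The RED rule matches the thresholds described above;
// the GREEN, BLUE, and PINK rules are analogous placeholders.
Hit classify(uint8_t r, uint8_t g, uint8_t b) {
    if (r > 200 && g < 100 && b < 100) return Hit::RED;
    if (g > 200 && r < 100 && b < 100) return Hit::GREEN;   // assumed
    if (b > 200 && r < 100 && g < 100) return Hit::BLUE;    // assumed
    if (r > 200 && b > 200 && g < 150) return Hit::PINK;    // assumed
    return Hit::NONE;
}

// Map the vertical position of a hit to a pitch: rows near the top of the
// 8-row frame play higher notes. The MIDI note range (48-72) is an assumption.
int rowToMidiNote(int y, int height = 8) {
    const int lowNote = 48, highNote = 72;
    return highNote - (y * (highNote - lowNote)) / (height - 1);
}
```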
Lastly, the composited image is output to the reciprocal device, so the users can share and create animations together. An interesting side effect of this setup is that in a bright room, the projector output is dim enough to be filtered out of the camera feed, avoiding a feedback loop.
In a low-light situation (triggered by switching from high light to low light), however, the camera tries to detect color in the darkness and starts to “dream”. Those colors are output to the projector and cause new colors to be seen by the camera, generating sound. In this low-light situation the interaction paradigm changes: the user can now block or add light on the storyboard surface to contribute to the generative composition. For example, if the user blocks all the light on one table, the other table will eventually stop playing music. If the user shines light (from a flashlight or their phone), the feedback loop begins again, radiating out from where the light was applied.
For the hardware implementation, we bought large cardboard boxes (24″×24″×45″) to create a projection surface that matched the size of the canvas. If this were an actual product, we would use a smaller form factor, but we were working with the projectors we already owned (as opposed to buying short-throw projectors or monitors to embed under the work surface). We cut PVC pipe to hold the camera above the canvas and secured the projectors at the base of the table. We painted the boxes white to make it look like we cared about the aesthetic a bit. We cut holes into the clear acrylic surface to create storage pots for the arts and crafts supplies. Lastly, we created demo props using construction paper, pipe cleaners, and popsicle sticks.
Overall, this was an awesome opportunity to learn more about telepresence, interaction design, image processing and Max MSP.
Judith, Xiaojiao and Emily assembling the cardboard boxes
Yan securing the camera rig to the inside of the box
Preparing the Arts and Crafts Props
Team Photo as we carry our StoryBoard devices downstairs