Hydrogel is 97% water. To make it, we boiled water, added a gelling agent, and poured the mixture into containers of different shapes and sizes. The gel solidified over time (about 5–10 minutes), after which it could be lifted out of the moulds. We used petri dishes, test tubes and ice cube trays. Below are a few pictures of the process.
The Edible Ice Cubes – When these food-colored gel cubes are added to water, the coloring gradually changes the color of the drink. We thought about interactions here, such as noticing whether someone has tampered with your drink and notifying the drinker about it.
Based on these initial explorations, we decided to focus our prototypes on the following:
The three prototypes/interactions we came up with include:
All videos for these interactions can be found in our presentation here.
In nature, jellyfish do not have brains. They process information via sensitive nerve nets that underlie their epidermis, allowing for full radial sensation. We were inspired by their sensitivity, compositional simplicity, and the many affordances of their radial design.
Like jellyfish, we rely on touch in our natural environments. The skin is the largest organ of the human body, approximately 22 square feet of densely packed receptors. The human hand alone contains approximately 100,000 nerves. Jellyfish is an interface that makes full use of our capacity to sense through touch.
Mechanism
Jellyfish is a proposed dynamic interface that transforms flat, screen-based information into three-dimensional, mutable material, using a programmable topology.
3D Viewer
Place Jellyfish over a GUI, and move it around like a puck. The topology of Jellyfish changes according to the detected screen content, to create correlating textures. The base of the puck is a solid ring, which glides easily on surfaces; the top is a translucent skin, stretched over shape-changing wires, that can bend up to 90 degrees at each node, allowing for the creation of a variety of shapes.
Pressing on a node allows the user to deform the shape, and this input also affects the screen content, allowing for hands-on CAD modeling and other applications.
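As a rough illustration of this bidirectional coupling, the sketch below (plain Python) maps the screen content detected under the puck to node bend angles, and pushes a node press back into the screen-side model. The node count, the brightness-based sampling, and all names are illustrative assumptions, not the prototype's actual implementation.

```python
# Illustrative sketch only: the node count, the brightness-based sampling, and
# every name below are assumptions, not the Jellyfish prototype's real code.
from dataclasses import dataclass

NUM_NODES = 16          # assumed number of shape-changing nodes under the skin
MAX_BEND_DEG = 90.0     # each node can bend up to 90 degrees

@dataclass
class Node:
    bend_deg: float = 0.0   # bend angle currently commanded to this node's wire

def sample_screen_heights(screen_patch):
    """Map the pixels under the puck to a target height (0..1) per node.
    Brightness stands in for whatever 'detected screen content' really is."""
    return [min(1.0, max(0.0, px)) for px in screen_patch[:NUM_NODES]]

def update_topology(nodes, screen_patch):
    """Screen -> puck: bend each node toward the height implied by the content."""
    for node, h in zip(nodes, sample_screen_heights(screen_patch)):
        node.bend_deg = h * MAX_BEND_DEG

def on_node_press(nodes, index, pressure, model):
    """Puck -> screen: flatten the pressed node and write the change back
    into the screen-side model (e.g. one control point of a CAD mesh)."""
    nodes[index].bend_deg = max(0.0, nodes[index].bend_deg - pressure * MAX_BEND_DEG)
    model[index] = nodes[index].bend_deg / MAX_BEND_DEG

# Example: a brightness gradient shapes the puck, then a press edits the model.
nodes = [Node() for _ in range(NUM_NODES)]
model = [0.0] * NUM_NODES
update_topology(nodes, screen_patch=[i / NUM_NODES for i in range(NUM_NODES)])
on_node_press(nodes, index=3, pressure=0.5, model=model)
```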
Jellyfish can transform any typical GUI interaction into a tangible experience.
Applications include modeling in CAD software, examining datasets, GIS mapping, game controls, and more.
Our original brainstorms spanned a variety of possibilities: stress-based tongue interfaces; ants as actuators/fabricators; plant-based interactions and personal growth gardens. We decided to focus on a later idea – a tangible interface puck, loosely inspired by the Microsoft Surface Dial, because it would have a wide range of possible applications for productivity and expression.
Unlike the Dial, our puck would be more than an advanced mouse; it would be a direct and tangible connection to the original content. We were inspired by the Radical Atoms discussion and Bret Victor’s talk about the underutilization of many “modes of understanding,” particularly our capacity for tactile understanding. And to achieve this understanding, we would use programmable matter, in the form of changeable topology.
We decided to look to nature for inspiration as to methods of best realizing our vision, and focused on the jellyfish, which has a simple, radial design that affords fluid and rapid shape-changing. A trip to the New England aquarium provided additional inspiration.
When designing the interface, we focused on usability: the puck would fit in one’s hand, glide easily over any screen, and would be manipulatable by all fingers. Inspired by the jellyfish’s fluid-filled hood and underlying musculature, we decided to use a rigid structure in the bottom layer, with a gel-filled encasement on top. This would allow for more dramatic shape shifts in the rigid structure, including sharp edges, but would also afford smooth, organic surfaces if needed, by altering the amount of gel present in the topology.
There was a delay in getting the shape-changing wires we hoped to use for the rigid structure, so we used 3D-printed models to represent different topologies that could be rendered.
The tops snap into the puck interchangeably. We used gel and a plastic film to create a malleable surface atop the underlying structure.
Once the wires arrived, we tested their performance moving a gel layer. We did not achieve the dynamic node structure desired, but did produce movement in the test layer.
WinWon is a winter jacket that addresses the problems of Boston's freezing cold and rainy weather, making winter reasonably convenient and (dare I say) even slightly enjoyable.
The jacket will include the following features:
Sensing
The “vision” or capabilities of the above
But what if we could translate that into a material?
And make it into something that was part of an “everyday” use (at least in Boston winters!)
Udayan’s comment: The body heat regulation through inflation is most interesting to me. Can you also think about interactions around this?
This jacket and inflatable helmet might also be interesting to you.
Hiroshi:
+ Simple, beautiful and transparent design of transformation without computational actuation. I like its aesthetics and mechanical sophistication for delicate folding.
+ On the other hand, there are limitations in scalability when increasing the number of states, and in non-linear transitions among them. I hope the final report discusses the trade-off between your approach and more complex but flexible approaches to designing kinetic transformation.
Ken:
The mechanical transformation technique was interesting to see. The generative software made me imagine the scalability and customisability of the work. I think there could be broad applications for crafts, architecture, package design, etc. For the kitchen application, although the scenario they presented was that dishes change shape before and after the meal, I'm interested to see interactions "while eating." How can transforming dishes change or support the way we eat? The use scenario could be deepened.
Viirj:
I like the approach of using mechanisms with smart materials to generate shape change. I think their work would be stronger if they went deeper into either the user experience or the technical explorations; the presentation didn't seem to emphasize either. If it was about interaction generated by the user, they could have dug into how a user might interact with it, e.g. poking with a knife to activate the mechanism. Also, they could have explored how other inputs, from either the food itself or the soap, can contribute to state transformations in the objects.
Luke:
I tend to agree with Jifei's suggestions in class. Think about how your primitives could apply at different scales to multiple application spaces. It might also pay to think "outside tradition" when it comes to vessels and how they should be geometrically shaped. There is no reason a bowl needs to be round, especially if it is computationally variable. As with most tensions in the TMG space and in materiality, it seems you need to pick between highly situational design (I do this, then this, then that) or letting the materials themselves dictate that space (simple rules, complex output).
Others:
Feedback from Others
Hiroshi:
+ Great challenge given the complexity of data structure, control structure, and editing of the code.
+ I would suggest you compare your programming paradigm (edit code and execute it) with PICO by Dr. James Patten. PICO allows mechanical user intervention to change the course of a simulation using tangible direct manipulation.
Ken:
I think this is a great exploration of using shape displays as an educational tool, which we haven't looked into yet. The concept of abstraction across three levels of programming is interesting. I would like to see this work polished with additional functions, an improved interaction system, and actual user studies. Also, current interactions are limited by the mechanism of shape displays: you can't grab functions and change their order the way you would manipulate bricks. Thus, I want to see what other technologies they imagine using (e.g. modular robotics), and further interactions enabled by those technologies.
Viirj:
Choosing a specific user group could have helped define how much abstraction and what type of interaction is appropriate for programming. It is hard to determine how useful this method of programming is, because programming as a general topic has such a large scope. Most of the tools you listed in your background/related work have a specific audience, which defines how much abstraction and what types of functions and outputs are important. For example, if you have kids programming, why not have them physically manipulate the end result and see how it is represented in code, instead of taking a linear approach to how things are currently programmed? Also, physical programming can contribute to changing specific parameters that are more intuitive in the real world than in the computer's Cartesian space, such as velocity, height, and color.
Luke:
I personally love where this project could go. I agree with Viirj that it might be worth stating what this can't do (at least just for yourselves) to focus on what one should program. It seems you have already done this to an extent by leveraging Logo; however, I am curious whether you can challenge yourselves to think outside the 3D representation of 2D programming. The "cursor" or "turtle" makes sense for 2D shapes; how about a simple "extrude" function? How would this make sense to your chosen audience? There is great potential for this to go beyond understanding programming in general and to take the Processing stance of "destroying the careers of GUI designers by making tangible programming simple." Nice work!
Others:
Feedback from Others
Hiroshi:
+ Assembly is tedious work if there is a single correct answer/goal and a single procedure to reach it. In that case, there is no strong reason for a human to engage. On the other hand, if it is a constructive assembly like LEGO, used to explore shapes and express ideas, it would be more interesting. I would encourage you to clarify the goal; assembly of a car engine might not be the best example.
+ The magnetic approach to guiding the user to connect the right components might have limitations (e.g. in letting users choose the meta-strategy or plan of constructive assembly). If every step is pre-defined and I have to execute that plan, then I would rather ask a robot to assemble it for me.
Ken:
I could see what got them interested in the project and how, but I felt they didn't explore deeply enough in terms of convincing stories, applications, and interaction techniques. I recommend they carefully polish the WHY of the work, and list related works to find the niche. I think there is a large space to explore in defining how people can interact with semi-self-assembling blocks, one that people in modular robotics and programmable matter are not aware of.
Viirj:
Luke:
I agree with Viirj; I think the opportunity here is clear, but the implementation or design motivations need to be a little stronger. What is unique about this concept? It appears to be the way one can either be computationally guided OR have the freedom to override that guidance and force the system to change. In this case, an interaction suggestion and a possible scenario/application space would be useful to describe why. Thinking about application spaces allows us to refocus and consider viable ways to limit the system so that it is more feasible than conceptual. Have a think about examples where computation allows for variation of paths, perhaps drawing from things like GPS routing, where one can take the "slower yet more meaningful route" or even end up somewhere completely different from their intention.
Others:
Feedback from Others
Hiroshi:
+ The metaphor of sand and an archaeological dig is poetic and evocative.
+ However, recording and retrieving personal emotional state changes with video might not be the best application, given the delicate nature of human memory and its tendency to automatically restructure and forget. Perhaps site-specific applications, such as environmental pollution or the history of ancient civilizations and wars, might be interesting.
Ken:
Very nice mapping of sand, time and memory. I enjoyed the presentation. Although the prototype was just a mock-up, the video of the use scenario was convincing and easy to imagine. I loved some of the extended applications, like the one where users can store memories in bottles. As Hiroshi mentioned, sand represents time, and layer by layer it represents history, just as geological layers do. I'm interested to see how you imagine interacting with months and years of memory through this interface. Having seen the movie INSIDE OUT, I was also thinking that some bad memories can change into good ones later. Is there any way to represent such changes in our impressions of memories?
Viirj:
Luke:
I think we can all agree that the connection between the form and the function here is conceptually beautiful. It provides a breath of relaxation from the direction our technology / science fiction suggests we may soon be accustomed to (recording our entire lives – check your Facebook timeline, right?). I would try to think about some technical foundations and ways to manipulate things, describing a couple of examples of how the input might work. Perhaps suggest a combination of your favorite wearable ECG (see Rosalind Picard's Affective Computing group: https://www.youtube.com/watch?v=Q5ujdXhFGSY) and a wearable recorder for the external implementation.
I think stating a little more clearly how this should work is important, both to avoid the assumption that it will record everything and to avoid the designer's bias of assuming you know which memories are "good or bad." It might be worth arguing that the implementation only records moments of intense emotion while the rest is "noise." At that point it becomes important to explain how the interface can give you tangible feedback about what is relevant and what is not, which would require explaining how the system would technically pull this off. Fantastic work, but I feel you need to explain a bit more of the science, design and implementation to connect this to Radical Atoms.
Others:
Feedback from Others
Hiroshi:
+ The potential of ferrofluid is extremely interesting. It was nice to feel the tunable stiffness of this material during your presentation.
+ As Jifei pointed out, many researchers have tested this material, and we need to clarify our original contribution, ideally in terms of interaction design and killer applications. We also need to compare this method with other techniques for stiffness change (e.g. jamming).
Ken:
It was fun to see the great amount of exploration you did. In the presentation, you should have listed related works to show what the main contributions of this work are. What is new and what is not? I was confused. Although it started from a simple exploration of technology, you need to package your work into a presentation that shows originality and concept in this kind of class. I hope to see further applications and interaction techniques in your paper!
Viirj:
Luke:
It’s great to see you exploring various materials in depth, and it seems like you could have used a little more time to nail down the application spaces that make your approach unique. There are a lot of possibilities for a unique contribution here, though not so much on the technical side of things. I feel it would require thinking about new angles for the material, perhaps in combination with other technologies, to pull off (like Viirj said) something a little more than 2.5D forms. You have a material whose stiffness is instantly variable, and I feel this rapid change makes it useful for many different types of interaction. Think about how people interact with stiff and flexible materials in different ways and how this may prove interesting for HCI.
Others:
Feedback from Others
Hiroshi:
+ Very nice demonstration of dynamic tangiblization of music, and of three possible interactions with it: the keypad, the 1st-controller pin array, and the waves themselves.
+ A different metaphor for composing and playing music, such as Toshio Iwai’s music insects, may make it easier to justify the use of I/O-coincident tangibles. http://www.leonardo.info/gallery/gallery343/iwai2.html
+ I hope you will explore a variety of metaphors, representations and interactions beyond sine waves.
Ken:
I think this was another great exploration and beginning for using the shape display in specific applications. I liked the various ways of interacting to compose music and modify sound waves. This is just a random idea, but I also imagined using some kind of gestural interaction to control tempo, just like a conductor leads an orchestra. (The array of pins actually looks like an orchestra…?) Also, utilizing the capability of the shape display, they could think about switching between various physical interfaces as well. Although normal instruments have a single form factor for composing and playing music, the shape display has the possibility of creating different shapes and interfaces that afford different ways to play music, and I think this could be very new.
Viirj:
Luke:
Sound has been shown to be very spatial. I'm drawn to think about Imogen Heap's gloves (which I believe originated in the Opera of the Future group) and to wonder whether pin heights could dynamically change more complex sounds. I realize it is kind of ridiculous to implement such things in a short time frame (I tried something much simpler like this a while back too), but I am curious whether this can be more than a synthesizer (which is arguably similarly tangible). How can you take advantage of the extra dimension provided by the pins? Press piano keys harder and the sound is stronger; this allows for expression in music, something that surpasses most digital implementations and variations. Here you have the ability to vary the output of the sound by assumed displacement, density, etc., which could make for some very interesting sound. Aside from this, here are some wicked examples of stuff this made me think about:
– Beautiful acoustic, mechanical sound I saw at James Patten's lab: http://www.psfk.com/2014/03/stella-artois-chalice-rock-band.html
– Interesting 3D visualization of sound: http://www.georgeandjonathan.com/#2
– Living Instruments – a bunch of birds pluck the strings of guitars: http://www.mbam.qc.ca/en/exhibitions/on-view/celeste-boursier-mougenot/
Others:
Feedback from Others
Team = Haeyoung + Kritika + Marc + Yan + Carolyn
Inspired by atomic structures, and aiming for a middle ground between human-assembled and self-assembling objects, we use magnetism as a connector to realize computer- and human-assisted assembly. Building blocks with programmable on/off magnets create a dialog between the user and the material. Data-driven shapes allow versatile applications.
Technical Overview: We built three blocks: two truncated cones and one cylinder. Each truncated cone has an electromagnet embedded in its face, and the cylinder has metallic disks embedded in its two opposite faces. The electromagnets turn on every fifteen seconds.
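As a rough sketch of the control logic (not the actual firmware), the Python snippet below simulates the fifteen-second magnet cycle; the block names, the print-based driver, and the scheduler are illustrative assumptions. On hardware, set_magnet() would switch a transistor driving the corresponding electromagnet.

```python
# Illustrative sketch only: block names, driver, and scheduler are assumptions.
import time

CYCLE_S = 15  # the electromagnets turn on every fifteen seconds

def set_magnet(block_id, on):
    """Stand-in for the hardware driver that energizes one block's electromagnet."""
    print(f"block {block_id}: magnet {'ON' if on else 'OFF'}")

def run_cycles(block_ids, cycles, step_s=CYCLE_S):
    """Toggle all electromagnets on a fixed schedule: while they are energized the
    blocks hold together; while they are off, the user is free to rearrange them."""
    for _ in range(cycles):
        for b in block_ids:
            set_magnet(b, True)
        time.sleep(step_s)
        for b in block_ids:
            set_magnet(b, False)
        time.sleep(step_s)

if __name__ == "__main__":
    run_cycles(block_ids=["cone_A", "cone_B"], cycles=1, step_s=0.1)  # short step for demo
```

For the computer-guided assembly application, the same loop would instead energize only the faces that the assembly plan says should connect next.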
Applications:
-Computer-Guided Assembly
-Telepresence Assembly
-Construction Telepresence Assembly Feedback Mechanism
Related Works: M-Blocks, Topobo, Triangles.
Team: Meryl Fang, Thomas Sanchez Lengeling, Manisha Mohan, Penny Webb and HyeJi Yang
Final Presentation Slides: MAS.834 Project 2 Presentation
Final Paper: IOTOM_Paper
TRANSFORM as Tangible Programming Environment (PDF)
There is “a dynamic relationship between things and thinking. We tie a knot and find ourselves in a partnership with string in our exploration of space. Objects are able to catalyze self creation.”
– Sherry Turkle
tangible + radical programming
Toolbox
display 1: function manipulation
once loaded, functions can be intuitively manipulated (rotated, scaled, etc.)
display 2: source code
functions are assembled in the middle display and represented as individual “lines of code;” the entire ensemble represents the compiled “source code”
display 3: cursor + result
the final display is a running, 3D representation of the program; a cursor indicates the location of the work being done
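To make the three-display pipeline concrete, here is a minimal Python sketch of a Logo-like program being assembled and run on a pin grid; the function set, the grid size, and the text rendering are illustrative assumptions rather than the TRANSFORM implementation.

```python
# Illustrative sketch only: a toy stand-in for the three displays, not TRANSFORM code.
GRID = 8                                  # assumed pin resolution of the result display
heightmap = [[0] * GRID for _ in range(GRID)]
cursor = {"x": 0, "y": 0}                 # the cursor marks where work is being done

def forward(steps):
    """Display 1 primitive: move the cursor along x, raising each pin it passes."""
    for _ in range(steps):
        heightmap[cursor["y"]][cursor["x"]] = 1
        cursor["x"] = min(GRID - 1, cursor["x"] + 1)

def up(steps):
    """Display 1 primitive: move the cursor along y, raising each pin it passes."""
    for _ in range(steps):
        heightmap[cursor["y"]][cursor["x"]] = 1
        cursor["y"] = min(GRID - 1, cursor["y"] + 1)

# Display 2: the assembled "source code" is an ordered list of function calls.
source_code = [(forward, 3), (up, 2), (forward, 2)]

# Display 3: running the program renders the result around the cursor.
for fn, arg in source_code:
    fn(arg)

for row in reversed(heightmap):           # crude text rendering of the pin heights
    print("".join("#" if h else "." for h in row))
```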