Hydrogel is 97% water. To make the hydrogel, we boiled water, added a gelling agent, and poured the mixture into containers of different shapes and sizes. The gel would solidify over time (about 5–10 minutes), after which it could be picked up from these moulds. We used petri dishes, test tubes, and ice cube trays. Below are a few pictures of the process.
The Edible Ice Cubes – When these food-colored ice cubes of gel are added to water, the coloring naturally starts to change the color of the drink. We thought about interactions here, such as noticing whether someone has tampered with your drink and notifying the drinker.
Based on these initial explorations, we decided to focus our prototypes on the following:
The three prototypes/interactions we came up with include:
All videos for these interactions can be found in our presentation here.
We’ve seen a few textile interfaces which incorporate various aspects of this proposal, namely flexibility/temperature-changing. However, interaction with texture-changing, self-healing, and hydrophobic materials is new. Incorporation into beds/sleeping and all-weather clothing is compelling. What other applications are there? Technological implementation will be difficult depending on the ambition of the scope.
-Dan
Abdulla Alhajri
Lia Bogoev
Amy Loomis
Nono Martínez Alonso
This project is an extension of the current shape display system. UnderCut will allow an extra degree of freedom compared to the current shape display capabilities.
In a quest to find more screen-free time in our days as designers, we propose ways to interact with computational data in real time in the context of physical elements and materials.
Nowadays, we have an enormous number of design tools which allow us to generate complex digital models with extremely simple input (e.g. visual algorithmic environments such as Dynamo or Grasshopper).
Hand-made architectural design “working model.”
What if we could use physical elements as inputs—paper, pens, cardboard—manipulated with our hands so that a modeling environment reacts and “generates” for us?
Teaching a computer to read physical sketches isn’t something novel; algorithms for the automatic interpretation of a rough architectural sketch as a consistent 3D digital model have already been created [Interpreting Physical Sketches as Architectural Models, by Barbara Cutler and Joshua Nasman, as can be seen in images a, b, c, d]. Still, the feedback loop died when sketches were just used as input for the computer to generate things on the screen, inside existing CAD software.
Our approach aims to close the feedback loop and bring the generated information into the physical world through intelligent materials, so the user doesn’t need to interact with a computer at all, but can just play with her hands.
This example shows a simulated digital environment. Our device would allow you to perform the operations that happen inside a CAD program in the real world, by interacting with physical paper and elements, and displaying extrusions and other properties (such as line lengths at scale) on the device.
Moving away from outputting the generated information with a projector, due to the limitations this technology presents, we would have to try to embed certain behaviors into an “intelligent” device or material.
Sample algorithm to convert physical objects captured by a camera into lines.
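As an illustration of the capture step, here is a minimal numpy-only sketch (not the project's actual algorithm): given a binary edge mask of a camera frame, it recovers the dominant straight line by fitting the principal axis of the edge pixels. A real pipeline would first run edge detection (e.g. Canny) on the frame; the function name and synthetic mask below are purely illustrative.

```python
# Minimal sketch of one step in a camera-to-lines pipeline: fit the
# dominant straight line through the "on" pixels of a binary edge mask.
import numpy as np

def fit_dominant_line(mask):
    """Return (centroid, direction) of the best-fit line through mask pixels."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    centroid = pts.mean(axis=0)
    # Principal component of the centered point cloud = line direction.
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    return centroid, vt[0]

# Synthetic mask: a diagonal stroke, standing in for a sketched pencil line.
mask = np.zeros((50, 50), dtype=bool)
for i in range(50):
    mask[i, i] = True

point, direction = fit_dominant_line(mask)
# For a 45-degree stroke, direction is proportional to (1, 1).
```

From segments like these, a modeling environment could rebuild the sketch as digital geometry and drive the physical output.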
—
—
UnderCut seems to be an interesting exploration. The cap on each pin of CooperForm can easily be removed so you can attach a different material. We can think of a variety of extensions to add to each pin to create different materiality or functions.
The architecture one is hard to imagine; draw sketches to convey the idea. I remember you were saying there would be projection on your drawing…? How does it actuate? Can this go beyond Sandscape, or is it just similar?
Idea 2:
Idea 3:
** Slides presented 11/17 in class: Scratch On Transform (PDF) **
Idea 1. Water Idea
Concepts
Application Ideas
Idea 2. Shape Display
Idea 3. Radical Dimension
Motivation
Understanding 4D
Application
Platform
Interface
2. Eye location (3D) + hand gesture (1D) = 4D viewpoint location, resulting in a change of the projected 3D geometry.
3. Tangible interaction – apply force to make the object rotate in 4D space
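The two operations above can be sketched with simple linear algebra. The following is an illustrative example (not from the slides): it rotates a tesseract's vertices in a 4D plane and perspective-projects them to 3D from a viewpoint on the w-axis; the eye distance and rotation plane are assumptions.

```python
# Sketch of 4D rotation + 4D-to-3D perspective projection.
import itertools
import numpy as np

def rotate_xw(points4d, angle):
    """Rotate 4D points in the x-w plane by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.eye(4)
    rot[0, 0], rot[0, 3] = c, -s
    rot[3, 0], rot[3, 3] = s, c
    return points4d @ rot.T

def project_to_3d(points4d, eye_w=3.0):
    """Perspective-project 4D points to 3D from an eye on the w-axis."""
    scale = eye_w / (eye_w - points4d[:, 3])
    return points4d[:, :3] * scale[:, None]

# Tesseract: all 16 sign combinations of (+/-1, +/-1, +/-1, +/-1).
tesseract = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))
shadow = project_to_3d(rotate_xw(tesseract, np.pi / 6))  # 16 points in 3D
```

On a shape display, `shadow` would drive pin heights, and a hand gesture could set the rotation angle, making the 4D rotation tangible.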
—
As for the water one, you can either create a novel shape-changing technique for water, or propose novel compelling applications (of course, having both is ideal!). There are amazing technologies for creating computational ripples [1, 2]. Imagining applications for these machines could be interesting. Thinking about reflection sounds interesting but hard. There are a lot of water-related works, so do a survey and find the niche! I did a water-related one last year, so I can also help.
Portable Shape Display – I would focus on one compelling device/form factor and propose various interaction techniques and concrete applications.
4D – Can people really learn what 4D is with 3D shape-changing interfaces? I would like to see it working, but I imagine the result would just be a random, crazy physical animation. The eye and hand gesture thing makes no sense to me. Using a GUI seems easier for understanding what’s going on.
A tangible platform for music creation and playback in three dimensions.
One way we propose to make music is with a music sequencer: a way to place notes in time. We are interested in being able to touch and manipulate sound in real time.
The basic idea is to arrange musical notes in time on a grid. A cursor sweeps left to right.
Our system would have 2 modes: Compose and Playback modes.
Compose mode: the shape display becomes a canvas to create music. In the upper half, the surface becomes a touchable, sculptable music creation section. Vertically the shape display turns into a musical staff. Think of the grooves on a music box. In the bottom section, the shape display turns into a beat creation section.
Playback mode: the patterns will move across the display. There could also be a waveform that allows the user to tangibly manipulate pitch.
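The compose/playback logic can be sketched in a few lines. This is a minimal illustration, assuming an 8x8 pin grid where a raised pin becomes a note at playback; the grid size and pitch mapping are our own assumptions, not part of the proposal.

```python
# Minimal step-sequencer sketch: rows map to pitches, columns to time steps,
# and the playback cursor sweeps left to right across the grid.
GRID_COLS, GRID_ROWS = 8, 8
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers, C major

def playback(grid):
    """Sweep the cursor left to right; yield the notes under each column."""
    for col in range(GRID_COLS):
        notes = [SCALE[row] for row in range(GRID_ROWS) if grid[row][col]]
        yield col, notes

# Compose mode: pressing pins toggles cells. Here, a two-note pattern.
grid = [[False] * GRID_COLS for _ in range(GRID_ROWS)]
grid[0][0] = True   # C at step 0
grid[4][3] = True   # G at step 3

pattern = {col: notes for col, notes in playback(grid) if notes}
# pattern -> {0: [60], 3: [67]}
```

On the shape display itself, the grid cells would be pin heights rather than booleans, which also makes the pattern sculptable mid-playback.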
Idea 2: Tactile Education
An adaptable play surface that mimics traditional toddler block games, using the shape display to teach toddlers and help them develop their hand/eye coordination. “Adaptable” is the key word: it allows for multiple games without taking up more space. It is more compact than having several separate games, and cleanup is easy. The vision is to eventually be able to download games, like a tactile tablet.
Potential games:
Simon Game – Instead of lights flashing, pins bounce, and the toddler has to push down the pins in the same order
2-player – getting parents or siblings/friends to interact with the toddler using the table.
Shapes – teaching basic shapes, like squares, circles, triangles. It could be expanded to showing letters, or with projection, could teach color.
Sorting – like the shape in holes game, except outlines instead of holes
Recognition – the pins could form several shapes and an audio file could announce the name of one, then the toddler should push down all the pins in that shape. We could also combine projection with this to add color recognition
Matching – Create a shape that matches a computer-formed shape, either from scratch or from a shape that’s almost there, like the “Find the Differences” game
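The core loop of the Simon-style game above is easy to sketch. The following is an illustrative example only; the pin IDs and round structure are assumptions, not a specification of the actual game.

```python
# Sketch of the Simon game logic: the display bounces a growing sequence
# of pins, then checks that the toddler presses them back in order.
import random

def new_round(sequence, pin_ids):
    """Extend the shown sequence by one randomly chosen pin."""
    return sequence + [random.choice(pin_ids)]

def check_presses(sequence, presses):
    """True if the toddler pressed the pins in the shown order."""
    return presses == sequence

pins = ["A", "B", "C", "D"]
seq = []
seq = new_round(seq, pins)   # display bounces the first pin
seq = new_round(seq, pins)   # ...then a sequence of two
ok = check_presses(seq, list(seq))   # a correct replay succeeds
```

The same round/check structure would carry over to the Recognition and Matching games, with the "sequence" replaced by a target shape on the pin grid.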
Our slides can be found here.
—
The sequencer can be an interesting exploration. Considering using the actual sound that shape displays make would be interesting. It could be interesting if you can make music that can only be made with your approach. Here is a cool related work: POCOPOCO.
I'm not sure about the education tools. Just having several different games is weak. I recommend focusing on one compelling scenario.