We spend roughly one-third of our lives sleeping, yet the technology that supports this daily necessity has barely evolved. The technologies we interact with during our waking hours have advanced dramatically, improving our daily lives and expanding our capabilities. While the communication devices and vehicles we use each day would be unrecognizable to people who lived just two hundred years ago, the beds and blankets we use would be familiar to people who lived thousands of years ago, both in the materials they are made of and in their capabilities.
SleepScape seeks to improve your sleep experience by learning your habits and adapting to them over time. Perhaps you are cold when you first get in bed each night, but wake up overheated a few hours later. SleepScape can increase airflow or adjust its position to help regulate your body temperature. It can comfort you when you are restless by hugging your body, or gently wake you in the morning by nudging you. We’ve also imagined extended capabilities, like dream monitoring and interpretation.
SleepScape is a blanket containing a flexible mesh of electroactive polymers and muscle wires. Embedded in this mesh are sensors that track the user’s temperature as well as the ambient temperature in the room. It also contains accelerometers, proximity sensors, and pressure sensors to gather information about the user’s movement and posture relative to the blanket. All of these readings can be tracked for specific areas of the grid.
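To make the grid addressing concrete, here is a minimal sketch of how per-zone readings might be organized. The grid dimensions, field names, and `sample_zone` interface are all assumptions for illustration, not part of the actual design.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ZoneReading:
    """Sensor values for one cell of the blanket's mesh grid (hypothetical schema)."""
    skin_temp_c: float      # user's surface temperature under this zone
    ambient_temp_c: float   # room temperature measured at this zone
    pressure: float         # contact pressure, arbitrary units
    accel: Tuple[float, float, float]  # (x, y, z) movement of this zone
    proximity_cm: float     # distance from the zone to the nearest surface

# The mesh is addressed as a rows x cols grid; each cell reports independently.
GRID_ROWS, GRID_COLS = 8, 6

def read_grid(sample_zone) -> List[List[ZoneReading]]:
    """Poll every zone once. `sample_zone(row, col)` stands in for the real
    sensor-bus read, which this sketch does not specify."""
    return [[sample_zone(r, c) for c in range(GRID_COLS)] for r in range(GRID_ROWS)]
```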
Shape-changing capabilities are activated by electrical impulses. The network of muscle wire is selectively activated to produce movement: different areas gradually contract or expand, allowing for purposeful yet subtle movements, like the blanket rolling off the user when they are too hot.
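As a rough sketch of what that selective activation could look like in software, the routine below ramps up the drive to the wires in an overheated region. The `driver` object, zone list, and duty-cycle values are all hypothetical.

```python
import time

def roll_off_user(driver, hot_zones, step=0.1, max_duty=0.6, interval_s=2.0):
    """Illustrative only: gradually contract the muscle wires bordering an
    overheated region so the blanket curls away from the user. Muscle wire
    (shape-memory alloy) contracts when heated by current, so activation here
    means ramping a PWM duty cycle. `driver.set_duty(zone, duty)` is a
    placeholder for a real wire-driver interface."""
    duty = 0.0
    while duty < max_duty:
        duty = min(duty + step, max_duty)
        for zone in hot_zones:
            driver.set_duty(zone, duty)
        time.sleep(interval_s)  # a slow ramp keeps the motion subtle
```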
Over time, the blanket learns things like the user’s optimal sleep temperature, and regulates it by acting before the user has become too hot or too cold.
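One simple way to implement that anticipatory behavior is to learn a comfort band from past nights and trigger a response just inside its limits. This is a hedged sketch; the thresholds, action names, and learning rule are assumptions, not the product’s actual logic.

```python
class TempRegulator:
    """Illustrative sketch: learn a comfort band of skin temperatures from
    past calm sleep, then act just before the band is exceeded."""

    def __init__(self, margin_c=0.5):
        self.comfort_samples = []   # skin temps logged while the user slept calmly
        self.margin_c = margin_c    # act this far inside the learned limits

    def log_calm_sleep(self, skin_temp_c):
        self.comfort_samples.append(skin_temp_c)

    def decide(self, skin_temp_c):
        if len(self.comfort_samples) < 10:
            return "hold"                      # not enough history yet
        lo, hi = min(self.comfort_samples), max(self.comfort_samples)
        if skin_temp_c >= hi - self.margin_c:
            return "increase_airflow"          # pre-empt overheating
        if skin_temp_c <= lo + self.margin_c:
            return "cover_more"                # pre-empt chill
        return "hold"
```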
Below is an image of the flexible mesh. The colored dots represent sensors embedded in the mesh. As stated above, the movement of the blanket is controlled by strategically activating muscle wires to choreograph specific movements, like the blanket rolling off the user. The center image is a time-lapse of muscle wire contracting. We’ve played around with the idea of the blanket becoming more or less dense to facilitate airflow, as in the animation on the right, but there are easier ways to create heat-activated vents. This capability can be built into the cloth itself, instead of requiring a Pneuduino to pump air in and out.
Extended Capabilities: See Dreamscape
Hydrogel is 97% water. The process of making hydrogel involved boiling water, adding a gelling agent, and then pouring the mixture into containers of different shapes and sizes. The gel would solidify over time (about 5–10 minutes), after which it could be lifted out of these moulds. We used petri dishes, test tubes, and ice cube trays. Below are a few pictures of the process.
The Edible Ice Cubes – When these food-colored ice cubes of gel are added to water, the coloring naturally begins to change the color of the drink. We imagined interactions around this, such as detecting whether someone had tampered with your drink and notifying the drinker.
Based on these initial explorations, we decided to focus our prototypes on the following:
The three prototypes/interactions we came up with include:
All videos for these interactions can be found in our presentation here.
Emobject as…
We picked Andy Ryan’s photograph of Marvin Minsky’s home to illustrate some of these concepts:
We then explored ways to use computer vision to capture information about where people were looking and where they were positioned, relative to the piece. This is a short demo of how we might capture and process viewer data, using facial recognition software and the front-facing camera on an iPhone (the circle is following the movement of Tamer’s face).
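Below is a rough approximation of that tracking step in code. The actual demo used an iPhone’s front-facing camera and its facial recognition software; this sketch substitutes OpenCV’s stock Haar-cascade face detector and a laptop webcam, so treat it as an analogy rather than our implementation.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # front-facing / built-in camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Draw a circle that follows the viewer's face, as in the demo clip.
        cv2.circle(frame, (x + w // 2, y + h // 2), max(w, h) // 2, (0, 255, 0), 2)
    cv2.imshow("viewer tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```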
Finally, we created a small-scale prototype of Ryan’s photograph with a layer of clear thermochromic paint, which revealed different parts of the image based on where viewers were situated. This is a short demo of how this paint could help us change the colors dynamically and direct focus to different areas.
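The logic connecting viewer position to the paint is simple to sketch. Assuming a row of heating elements behind the piece (the zone count and spread are invented for illustration), heating the zones nearest the viewer would turn the clear thermochromic layer transparent there:

```python
N_ZONES = 5  # heating elements behind the piece, left to right (assumed)

def zone_targets(viewer_x_norm, spread=1):
    """Return per-zone heater on/off states for a viewer at normalized
    horizontal position `viewer_x_norm` in [0, 1]. Heating a zone makes the
    clear thermochromic layer go transparent there, revealing the image."""
    center = round(viewer_x_norm * (N_ZONES - 1))
    return [abs(i - center) <= spread for i in range(N_ZONES)]

# e.g. a viewer standing left of center reveals the left zones:
print(zone_targets(0.2))   # [True, True, True, False, False]
```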
(The last audio sample in this clip was Minsky himself.)
Team: Tamer Deif, Shruti Dhariwal, Clara Lee, Jeremy Finch
Thanks to Prof Hiroshi Ishii, Penny Webb, Dan Fitzgerald and Udayan Umapathi
In nature, jellyfish do not have brains. They process information via sensitive nerve nets that underlie their epidermis, allowing for full radial sensation. We were inspired by their sensitivity, compositional simplicity, and the many affordances of their radial design.
Like jellyfish, we rely on touch in our natural environments. The skin is the largest organ of the human body, approximately 22 square feet of densely packed receptors. The human hand alone contains approximately 100,000 nerves. Jellyfish is an interface that makes full use of our capacity to sense through touch.
Mechanism
Jellyfish is a proposed dynamic interface that transforms flat, screen-based information into three-dimensional, mutable material, using a programmable topology.
3D Viewer
Place Jellyfish over a GUI and move it around like a puck. The topology of Jellyfish changes according to the detected screen content to create corresponding textures. The base of the puck is a solid ring, which glides easily on surfaces; the top is a translucent skin, stretched over shape-changing wires, that can bend up to 90 degrees at each node, allowing for the creation of a variety of shapes.
Pressing on a node allows the user to deform the shape, and this input also affects the screen content, allowing for hands-on CAD modeling and other applications.
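To illustrate the two directions of this mapping, here is a minimal sketch. The node grid size, the luminance-to-angle rule, and the `model.lower_vertex` call are all hypothetical; the real sensing and actuation paths are not specified here.

```python
GRID = 9  # nodes per side of the puck's top surface (an assumed density)

def heights_from_patch(patch):
    """Screen -> puck: map the luminance of the screen region under the puck
    to per-node bend angles (0-90 degrees, the node limit described above).
    `patch` is a GRID x GRID list of 0-255 values."""
    return [[(v / 255.0) * 90.0 for v in row] for row in patch]

def press_to_model(row, col, depth_mm, model):
    """Puck -> screen: pressing node (row, col) down by `depth_mm` lowers the
    corresponding vertex of a hypothetical CAD `model` object."""
    model.lower_vertex(row, col, depth_mm)
```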
Jellyfish can transform any typical GUI interaction into a tangible experience.
Applications include modeling in CAD software, examining datasets, GIS mapping, game controls, and more.
Our original brainstorms spanned a variety of possibilities: stress-based tongue interfaces; ants as actuators/fabricators; plant-based interactions and personal growth gardens. We decided to focus on a later idea, a tangible interface puck loosely inspired by the Microsoft Surface Dial, because it would have a wide range of possible applications for productivity and expression.
Unlike the Dial, our puck would be more than an advanced mouse; it would be a direct and tangible connection to the original content. We were inspired by the Radical Atoms discussion and Bret Victor’s talk about the underutilization of many “modes of understanding,” particularly our capacity for tactile understanding. To achieve this understanding, we would use programmable matter in the form of a changeable topology.
We looked to nature for inspiration on how best to realize our vision, and focused on the jellyfish, which has a simple, radial design that affords fluid and rapid shape-changing. A trip to the New England Aquarium provided additional inspiration.
When designing the interface, we focused on usability: the puck would fit in one’s hand, glide easily over any screen, and would be manipulatable by all fingers. Inspired by the jellyfish’s fluid-filled hood and underlying musculature, we decided to use a rigid structure in the bottom layer, with a gel-filled encasement on top. This would allow for more dramatic shape shifts in the rigid structure, including sharp edges, but would also afford smooth, organic surfaces if needed, by altering the amount of gel present in the topology.
There was a delay in getting the shape-changing wires we hoped to use for the rigid structure, so we used 3D-printed models to represent different topologies that could be rendered.
The tops snap interchangeably into the puck. We used gel and a plastic film to create a malleable surface atop the underlying structure.
Once the wires arrived, we tested their performance moving a gel layer. We did not achieve the dynamic node structure desired, but did produce movement in the test layer.