We spend roughly one-third of our lives sleeping, yet the technology that supports this daily necessity has barely evolved. The vast and varied technologies we interact with during our waking hours have improved significantly, enhancing our daily lives and expanding our capabilities. While the communication devices and vehicles we use each day would be unrecognizable to people who lived just two hundred years ago, the beds and blankets we use would be familiar to people who lived thousands of years ago, both in the materials they are made of and in their capabilities.
SleepScape seeks to improve your sleep experience by learning your habits and adapting to them over time. Perhaps you are cold when you first get in bed each night, but wake up overheated a few hours later. SleepScape can increase airflow or adjust its position to help regulate your body temperature. It can comfort you when you are restless by hugging your body, or gently wake you in the morning by nudging you. We've also imagined extended capabilities, like dream monitoring and interpretation.
SleepScape is a blanket containing a flexible mesh of electroactive polymers/muscle wires. Embedded in this mesh are sensors that track the user's temperature as well as the ambient temperature of the room. The mesh also contains accelerometers, proximity sensors, and pressure sensors that gather information about the user's movement and posture relative to the blanket. All of these signals can be tracked per zone of the mesh.
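As a rough sketch, per-zone readings from the mesh might be organized like this (the 8x8 grid size, sensor fields, and `read_zone` call are our illustrative assumptions, not actual hardware specifications):

```python
from dataclasses import dataclass

@dataclass
class ZoneReading:
    body_temp_c: float      # user's skin temperature under this zone
    ambient_temp_c: float   # room temperature near this zone
    pressure: float         # contact pressure, normalized to 0..1
    proximity_mm: float     # distance from blanket to body
    accel: tuple            # (x, y, z) acceleration, for movement

GRID_ROWS, GRID_COLS = 8, 8  # illustrative mesh resolution

def read_mesh(sensor_bus):
    """Poll every zone in the mesh and return a 2D grid of readings.

    `sensor_bus.read_zone` is a hypothetical hardware call standing in
    for however the real mesh would be sampled.
    """
    return [[sensor_bus.read_zone(r, c) for c in range(GRID_COLS)]
            for r in range(GRID_ROWS)]
```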
Shape-changing capabilities are activated by electrical impulses: the network of muscle wire is strategically energized to produce movement. Different areas gradually contract or expand, allowing for purposeful yet subtle movements, like the blanket rolling off users when they are too hot.
Over time, the blanket learns the user's optimum sleep temperature, and regulates it by acting before the user has become too hot or too cold.
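A rough sketch of what this preemptive regulation could look like, assuming a learned setpoint, a simple linear trend forecast, and hypothetical `increase_airflow`/`contract` actuation calls:

```python
# Illustrative control loop: none of the thresholds, the setpoint, or
# the actuation calls below are the actual SleepScape logic.

LEARNED_SETPOINT_C = 32.5   # learned optimum skin temperature (assumed)
LOOKAHEAD_MIN = 15          # act this many minutes before a crossing

def predicted_temp(history, minutes_ahead):
    """Linear extrapolation from the last two (timestamp_min, temp_c)
    samples; assumes history holds at least two readings."""
    (t0, y0), (t1, y1) = history[-2], history[-1]
    slope = (y1 - y0) / (t1 - t0)
    return y1 + slope * minutes_ahead

def regulate(zone, history, blanket):
    forecast = predicted_temp(history, LOOKAHEAD_MIN)
    if forecast > LEARNED_SETPOINT_C + 0.5:
        blanket.increase_airflow(zone)   # open the mesh / vent heat
    elif forecast < LEARNED_SETPOINT_C - 0.5:
        blanket.contract(zone)           # draw the blanket closer
```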
Below is an image of the flexible mesh. The colored dots represent sensors embedded in the mesh. As stated above, the movement of the blanket is controlled by strategically activating muscle wires to choreograph specific movements, like the blanket rolling off the user. The center image is a time-lapse of muscle wire contracting. We've played around with the idea of the blanket becoming more or less dense to facilitate airflow, as in the animation on the right, but there are easier ways to create heat-activated vents. This capability can be built into the cloth itself, instead of requiring a Pneuduino to pump air in and out.
Extended Capabilities: See Dreamscape
Hydrogel is 97% water. To make it, we boiled water, added a gelling agent, and poured the mixture into containers of different shapes and sizes. The gel solidified in about 5–10 minutes, after which it could be lifted from the moulds. We used petri dishes, test tubes, and ice cube trays. Below are a few pictures of the process.
The Edible Ice Cubes – When these food-colored gel cubes are added to water, the coloring gradually changes the color of the drink. We thought about interactions here, such as detecting whether someone has tampered with your drink and notifying the drinker.
Based on these initial explorations, we decided to focus on three prototypes/interactions:
All videos for these interactions can be found in our presentation here.
Emobject as…
We picked Andy Ryan’s photograph of Marvin Minsky’s home to illustrate some of these concepts:
We then explored ways to use computer vision to capture information about where people were looking and where they were positioned relative to the piece. This is a short demo of how we might capture and process viewer data using facial recognition software and the front-facing camera on an iPhone (the circle is following the movement of Tamer's face).
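A minimal sketch of this kind of face-tracking loop, here using OpenCV's bundled Haar cascade and a desktop webcam rather than the iPhone setup from the demo:

```python
import cv2

# Sketch of viewer tracking with OpenCV's stock frontal-face cascade.
# Camera index 0 and the circle overlay are illustrative choices.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        # Follow the viewer's face with a circle, as in the demo video.
        cv2.circle(frame, (x + w // 2, y + h // 2), w // 2, (0, 255, 0), 2)
    cv2.imshow("viewer", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```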
Finally, we created a small-scale prototype (of Ryan’s painting) with a layer of clear thermochromic paint, which revealed different parts of the image based on where the viewers were situated. This is a short demo of how this paint could help us change the colors dynamically and direct focus to different areas.
(The last audio sample in this clip is Minsky himself.)
Team: Tamer Deif, Shruti Dhariwal, Clara Lee, Jeremy Finch
Thanks to Prof Hiroshi Ishii, Penny Webb, Dan Fitzgerald and Udayan Umapathi
In nature, jellyfish do not have brains. They process information via sensitive nerve nets that underlie their epidermis, allowing for full radial sensation. We were inspired by their sensitivity, compositional simplicity, and the many affordances of their radial design.
Like jellyfish, we rely on touch in our natural environments. The skin is the largest organ of the human body, approximately 22 square feet of densely packed receptors. The human hand alone contains approximately 100,000 nerves. Jellyfish is an interface that makes full use of our capacity to sense through touch.
Mechanism
Jellyfish is a proposed dynamic interface that transforms flat, screen-based information into three-dimensional, mutable material, using a programmable topology.
3D Viewer
Place Jellyfish over a GUI and move it around like a puck. The topology of Jellyfish changes according to the detected screen content, creating correlating textures. The base of the puck is a solid ring that glides easily on surfaces; the top is a translucent skin, stretched over shape-changing wires, that can bend up to 90 degrees at each node, allowing for the creation of a variety of shapes.
Pressing on a node allows the user to deform the shape, and this input also affects the screen content, allowing for hands-on CAD modeling and other applications.
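As a rough sketch, the screen-to-topology mapping might look like the following, assuming the puck samples the pixels beneath it and each node's bend angle tracks local brightness (the grid size and the `update_from_topology` write-back call are hypothetical):

```python
import numpy as np

MAX_BEND_DEG = 90  # each node can bend up to 90 degrees

def screen_to_topology(pixels, n=8):
    """Map the patch of screen under the puck to an n x n grid of node
    bend angles. `pixels` is an (H, W) grayscale array; brighter
    regions raise the corresponding nodes. This mapping is an
    illustrative assumption, not the actual Jellyfish firmware.
    """
    h, w = pixels.shape
    angles = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            patch = pixels[i * h // n:(i + 1) * h // n,
                           j * w // n:(j + 1) * w // n]
            angles[i, j] = patch.mean() / 255.0 * MAX_BEND_DEG
    return angles

def on_node_pressed(i, j, angles, screen):
    """Pressing a node flattens it and writes the change back to the
    GUI, e.g. deforming a CAD model under the puck."""
    angles[i, j] = 0.0
    screen.update_from_topology(angles)  # hypothetical write-back call
```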
Jellyfish can transform any typical GUI interaction into a tangible experience.
Applications include modeling in CAD software, examining datasets, GIS mapping, game controls, and more. [expand]
Our original brainstorms spanned a variety of possibilities: stress-based tongue interfaces; ants as actuators/fabricators; plant-based interactions and personal growth gardens. We decided to focus on a later idea, a tangible interface puck loosely inspired by the Microsoft Surface Dial, because it would have a wide range of possible applications for productivity and expression.
Unlike the Dial, our puck would be more than an advanced mouse; it would be a direct and tangible connection to the original content. We were inspired by the Radical Atoms discussion and Bret Victor's talk about the underutilization of many "modes of understanding," particularly our capacity for tactile understanding. To achieve this understanding, we would use programmable matter, in the form of a changeable topology.
We looked to nature for inspiration on how best to realize our vision, and focused on the jellyfish, which has a simple, radial design that affords fluid and rapid shape-changing. A trip to the New England Aquarium provided additional inspiration.
When designing the interface, we focused on usability: the puck would fit in one’s hand, glide easily over any screen, and would be manipulatable by all fingers. Inspired by the jellyfish’s fluid-filled hood and underlying musculature, we decided to use a rigid structure in the bottom layer, with a gel-filled encasement on top. This would allow for more dramatic shape shifts in the rigid structure, including sharp edges, but would also afford smooth, organic surfaces if needed, by altering the amount of gel present in the topology.
There was a delay in getting the shape-changing wires we hoped to use for the rigid structure, so we used 3D-printed models to represent different topologies that could be rendered.
The tops snap interchangeably into the puck. We used gel and a plastic film to create a malleable surface atop the underlying structure.
Once the wires arrived, we tested their ability to move a gel layer. We did not achieve the dynamic node structure we had hoped for, but we did produce movement in the test layer.
What is Umwelt?
Each functional component of an umwelt has a meaning and so represents part of the organism's model of the world. These functional components correspond approximately to perceptual features. The umwelt is also the semiotic world of the organism, comprising all the meaningful aspects of the world for that particular organism: water, food, shelter, potential threats, or points of reference for navigation.
Semiosphere
An organism creates and reshapes its own umwelt when it interacts with the world. This is termed a ‘functional circle’. The umwelt theory states that the mind and the world are inseparable, because it is the mind that interprets the world for the organism.
Consequently, the umwelten of different organisms differ, which follows from the individuality and uniqueness of the history of every single organism. When two umwelten interact, this creates a semiosphere.
What if we could use different programmable materials to expand the umwelt of an organism? Further, what if different organisms could connect in a semiosphere through a material?
Very powerful idea in expanded umwelt. But to what end, and why would we want to? Obviously it's cool to experience more, but we also note that an animal's affordances are perceived through its umwelt. Does an expanded umwelt imply an expanded "palette" of affordances available to us? Which are the most compelling? How is it bidirectionally interactive? Are these materials that we perceive directly, that we use as tools to perceive other elements of the world, or wearable materials that expand our senses all the time?
-Dan
Penny: I like the idea of being able to dynamically adjust your perspective, as a kind of empathy machine or tool for connectivity. I would suggest thinking about it in terms of HCI; for instance, what might a computer's umwelt be like?
Concept:
When we are stressed, our bodies release stress hormones. Popular Science found that "Changes in cortisol and other hormones register in your saliva, indicating not only stress but according to a recent study, possibly also how well you respond to it." Short-term stress can be healthy; long-term stress is dangerous. Constantly telling our bodies that we are in danger, and keeping those response systems activated, is taxing and exhausts our bodies.
I am wondering how some interface could remind our bodies what kind of stress we are experiencing. Or, how we might create a system to help us determine whether the stress we are experiencing is healthy or damaging.
How:
(Brainstorm)
Create something with many input channels: haptics, heat sensors, perspiration, heartbeat, and more.
AND
Output would be a scent to help you calm down in some way, or a reaction based on feedback you provide. You would then input your feelings before and after.
OR
Output: lights that communicate to your brain whether it is good stress or bad stress, prompting you to cognitively acknowledge your feelings. (A rough sketch of this pipeline follows below.)
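A placeholder sketch of how these brainstormed inputs and outputs might fit together; the weights, thresholds, and the short-vs.-sustained heuristic for good vs. bad stress are invented for illustration only:

```python
# Placeholder sketch of the brainstormed pipeline: weighted fusion of
# biometric inputs into a stress score, then a light or scent output.
# All numbers below are made up; reliably classifying good vs. bad
# stress from biometrics is an open problem.

WEIGHTS = {"heart_rate": 0.4, "skin_temp": 0.2,
           "perspiration": 0.3, "movement": 0.1}

def stress_score(readings):
    """Combine normalized (0..1) sensor readings into a single score."""
    return sum(WEIGHTS[k] * readings[k] for k in WEIGHTS)

def respond(readings, onset_minutes):
    score = stress_score(readings)
    if score < 0.5:
        return "no output"
    # Crude heuristic: short spikes read as 'good' stress, sustained
    # elevation as 'bad' stress that calls for a calming cue.
    if onset_minutes < 10:
        return "green light"              # acute, likely healthy stress
    return "blue light + calming scent"   # chronic stress: cue to de-stress
```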
There is a wide body of previous work around wearables for biometrics-based mood detection/augmentation/modulation, so be careful to navigate and position your work within it. Smell is a compelling aspect, if you can clarify the argument that it is the best sense for modulating stress. Combining many biometric monitors to estimate stress levels, especially good vs. bad stress, is a huge task, probably outside the scope of this class. Is there other data we have access to that could be a proxy for stress level? What feedback does the user give, and how is it used?
-Dan
Penny: Think about what it really means to be stressed. What do you do, how do you respond? Do you try to hide away from the stress, do you go for a walk, or do you just ignore it? Have a think about some of these natural responses that people already have when they are stressed, and what we already do to try to 'de-stress'. Perhaps the answer isn't creating a technology that is aware of you through monitoring, but something you turn to when you are stressed.