djfitz – Tangible Interfaces (MAS.834)

inFLux: A Magneto-rheological Material Display (Dec 10, 2015)

inFlux, extended abstract

Kinetic Brush, Idea 3 by Daniel (Nov 3, 2015)

Paint brushes are tools for applying physical color to a surface. Digital paint brushes apply digitally controlled color to digital surfaces, which allows for precise definition of color, the painting of patterns, and brush dynamics that would be impossible with real brushes. ioBrush brought some of that digital capability to physical paintbrushes by enabling a “copy/paste” mode not only for colors, but for patterns and even dynamic video.

The concept of Perfect Red explored one possible 3D version of this idea, enabling digital operations over 3D shapes, including copy/paste, snapping, and so on. Other projects, like jamSheets, explore the possibility of copying the shape of a 3D object in real time to another, more digital, tangible medium.

However, while both Perfect Red and jamSheets are 3D and tangible, they are also static once formed. ioBrush could replicate dynamic content, but it was 2D and intangible. What if we could copy tangible dynamics found in the real world in this way? What if we could “paint” kinetic motions?

This project proposes a small set of sensor/actuator pairs that act as “brushes” for real-world kinetics. The user records the motion of a real-world object (the original) with a sensor and replicates that motion elsewhere with the corresponding actuator (the replicant). The pairs can act either in a record/playback mode or in a real-time transduction mode, and can then be composed into kinetic sculptures or devices.
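
A rough sketch of the two modes, assuming hypothetical sensor_read and actuator_write callables for a single pair (say, a hinge-angle sensor paired with a servo); the sampling rate is an invented figure:

```python
import time

SAMPLE_HZ = 50  # assumed sampling rate, not specified in the post

def record(sensor_read, duration_s):
    """Record/playback mode, step 1: sample the original's motion."""
    trace = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        trace.append(sensor_read())      # e.g. hinge angle in degrees
        time.sleep(1.0 / SAMPLE_HZ)
    return trace

def playback(actuator_write, trace):
    """Record/playback mode, step 2: replay the trace on the replicant."""
    for sample in trace:
        actuator_write(sample)           # e.g. drive a servo to that angle
        time.sleep(1.0 / SAMPLE_HZ)

def transduce(sensor_read, actuator_write, duration_s):
    """Real-time transduction mode: pipe the sensor straight to the actuator."""
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        actuator_write(sensor_read())
        time.sleep(1.0 / SAMPLE_HZ)
```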

The set of sensor/actuator pairs is based on classical mechanisms, devices, and behaviors. These include continuous rotation (wheels), partial rotation (hinges), translation (pistons), vibration, heat/temperature, and phase (solid/liquid/gas).

Extension: The systems above allow for only one-way interaction, from sensor to actuator. However, if the parts of the system were symmetric servomechanisms, two-way interaction would be possible. The user could control the original physical system with its replicant and, more interestingly, link one replicant acting as an output to another acting as an input, physically coupling otherwise separate kinematic systems in the real world through tangible mediums.
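
A minimal sketch of that two-way coupling, assuming a hypothetical node interface (read() returns the current pose as a float, drive(target) servos toward it) on two symmetric servomechanisms; each step nudges both nodes toward their shared average, so pushing on either end moves the other:

```python
import time

def couple(node_a, node_b, gain=0.5, period_s=0.01):
    """Bilateral coupling: both nodes are servoed toward their average
    pose, so either one can act as input or output at any moment."""
    while True:
        a, b = node_a.read(), node_b.read()
        error = b - a
        node_a.drive(a + gain * error)   # pull A toward B
        node_b.drive(b - gain * error)   # pull B toward A
        time.sleep(period_s)
```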

Dynamic Construction Scaffold AKA Perfect Sand Castles, Idea 2 by Daniel (Nov 3, 2015)

Concept and Motivation

Clay (or wet sand) is a natural medium for humans to express 3D shapes by direct physical manipulation of material. The problem with such expression is that it can take years of practice to be able to create precise structures. If you haven’t developed the technique of an expert clay sculptor, you will probably end up with ill-formed blobs. As always, tools can help with this lack of dexterity. For example, buckets and other molds help us to make perfect cylinders for the turrets of sand castles. However, we are limited by the tools – in this case the “scaffolds” – we happen to have on hand. We would need a new bucket for every new kind of castle we want to build. The static nature of our tools fundamentally limits our expression. What if we had a dynamic scaffold that could morph to suit whatever we wanted to build? The shape displays, properly modified, offer one such solution.

Implementation

By covering a shape display with an elastic membrane, then pouring sand over this membrane, we can create a “smart” sandbox. The sandbox can change its topography to create construction guides in 2.5D in real time. For example, it can easily create straight walls in discrete increments for “snap-to-grid” functionality. It can observe the topography of one sand construction (via an overhead depth sensor) and replicate that topography elsewhere (copy-paste). It can rise up in areas to act as a dynamic construction support for free-standing structures (bridges, arches), then lie flat again once the user has built the structures. The smart sandbox can even add dynamic behavior to otherwise static sand structures (a drawbridge over a moat, activating ramps over walls, etc.).
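
A sketch of the copy-paste operation, assuming the overhead depth sensor yields a 2D depth array in millimeters and the shape display exposes a hypothetical set_pin(i, j, height) call; the pin resolution and camera height are invented figures:

```python
import numpy as np

PIN_GRID = (24, 24)      # assumed pin resolution of the shape display
FLOOR_DEPTH_MM = 900.0   # assumed camera-to-sandbox-floor distance

def capture_topography(depth_image, region):
    """Crop the depth map to the source construction and block-average
    it down to the pin grid, converting depth to height above the floor.
    Assumes the cropped region is at least as large as the pin grid."""
    x0, y0, x1, y1 = region
    crop = FLOOR_DEPTH_MM - depth_image[y0:y1, x0:x1]  # depth -> height
    h, w = crop.shape[0] // PIN_GRID[0], crop.shape[1] // PIN_GRID[1]
    blocks = crop[:h * PIN_GRID[0], :w * PIN_GRID[1]]
    return blocks.reshape(PIN_GRID[0], h, PIN_GRID[1], w).mean(axis=(1, 3))

def paste_topography(display, heights):
    """Replicate the captured height field elsewhere on the display."""
    for (i, j), height in np.ndenumerate(heights):
        display.set_pin(i, j, height)
```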

Extension: Full 3D

In the above implementation, the 2.5D constraint of the shape displays translates to a 2.5D constraint on construction. At best, we can achieve some overhangs and arches by using the displays in dynamic support mode. Furthermore, almost all of the actual construction is left to the user: the shape display cannot add or subtract material directly, but can only act as a guide for the user’s manipulation. A natural extension would be a robotic sand deposition/subtraction system able to augment the natural manipulation abilities of the human in a cooperative manner. The user could start making a simple shape and the robot could finish it, or the user could make the rough form of a structure and the robot would refine it. In the case of sand, such deposition is easily accomplished because water, as a binding agent, is easily applied.

Extension: Free-form 3D drawing

Sand and clay produce bulk structures. Another useful paradigm is strut/line-based (wireframe) structures, as with the 3Doodler and other “3D pens”. These mediums have the same problem: the limited dexterity of humans renders them useless for precise structures. Here again the shape displays can act as precise yet dynamic construction tools/scaffolds/guides. This is the 2.5D equivalent of drawing in 2D with a compass and ruler.

Action at a Distance AKA Robot Physgun, Idea 1 by Daniel (Nov 3, 2015)

Concept and Motivation

The natural way of changing the pose (position and orientation) of an object in space is through direct physical manipulation, namely with our hands. However, we are limited to manipulating nearby objects by the length of our reach. This project aims to extend that natural ability to arbitrarily long range by transducing the manipulation through robots, which have no such locality constraint. This allows the user to naturally manipulate objects at a distance. The basic mode of interaction involves the gestural selection of a target object, which a robot picks up, and pose manipulation through a “handle” analog object. What the user does to the handle, the robot does to the target object. The result is telemanipulation, but unlike other robotic manipulation concepts, the user maintains their own reference frame. They carry out the interaction from their perspective, as if they simply had longer arms, not as if they were teleported to whatever position the robot happens to be in. It is more like non-psychic “telekinesis”.

Implementation

The technical implementation is rather straightforward. A gesture sensor (Microsoft Kinect) interprets the user pointing at the object they want to manipulate. Once the object is selected, pose input via the “handle” can be done with an accurate Inertial Measurement Unit (IMU); a smartphone will suffice. The position and state of the robot do not matter in this framework, as long as the robot is close enough to the object to grab it. The job of the robot, then, is to map the pose of the handle onto its own end effector, using standard inverse kinematics.
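
A sketch of the core mapping, assuming poses are 4x4 homogeneous transforms expressed in a shared world frame: the handle’s motion since the moment of grabbing is replayed onto the object’s grabbed pose, and the result becomes the robot’s inverse-kinematics target each control cycle. The SCALE factor anticipates the scaled-up (or scaled-down) manipulation described in the Extension below:

```python
import numpy as np

SCALE = 1.0  # translation scaling; >1 amplifies the user's motion, <1 shrinks it

def handle_to_target(handle_now, handle_at_grab, object_at_grab):
    """Map the handle's pose change onto the grabbed object.
    All arguments are 4x4 homogeneous transforms (numpy arrays);
    the return value is the pose the robot's IK solver should reach."""
    # Motion of the handle relative to where it was at grab time.
    delta = handle_now @ np.linalg.inv(handle_at_grab)
    delta[:3, 3] *= SCALE            # scale translation only, not rotation
    # The same motion, applied to the object at its grabbed pose.
    return delta @ object_at_grab
```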

Extension

By transducing physical manipulation through robotic technology, many more capabilities are enabled. The robot can handle objects that would be too large or heavy to manipulate by hand, yet because the interface is consistent, their manipulation is just as intuitive to the user. The robot can also have a much larger workspace than the human arm, so the translations of the user can be scaled up beyond what the user could do in person. There is no requirement that the robot be stationary, or even grounded (quadcopters), so the user can select any target object within their line of sight and manipulate it without moving themselves, as long as the robot(s) can reach the object. The inverse is also true: because the robot can be more precise than human hands, the user could use the same interface to scale the manipulation down into micro-manipulation, with fine control of translation at very small scales. The objects can still be large, however: you could position a shipping container with millimeter precision as easily as you could a marble.

Smart Pillow (Oct 26, 2015)

Our exploration of the Radical Atoms theme, utilizing pneumatic soft actuation and sensing, led us to develop an interactive “smart” pillow. Smart Pillow is a pillow that transforms based on the user’s needs and emotions. It can sense pressure in different contexts and interact with the user by changing shape.

The pillow has three specific modes of interaction:

  • Automatic shape adjustment during sleep for maximum comfort: The Smart Pillow senses areas of high pressure and reduces its inflation there to relieve them; meanwhile, it detects areas of low pressure and inflates them more, matching the contour of the user’s head and neck to distribute support evenly (a control sketch follows this list).
Sleeping comfortably on the self-adjusting smart pillow.

  • Sleeping/Reading Context Switching: When the user sits up into a reading position, differences in pressure across the pillow activate a wedge-shaped pouch for ideal in-bed reading ergonomics. When the user is done and wants to sleep, sliding down on the pillow reactivates sleep mode.
Reading on the self-adjusting smart pillow.

  • Tactile Emotional Support: When the user clutches Smart Pillow in worry or despair, Smart Pillow gives the user a hug of reassurance and consolation.
Hug reciprocation with the lovable smart pillow.
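
A minimal sketch of the pressure-equalizing loop behind the first mode, assuming a hypothetical pillow interface with a pressures() method (a 2D array of per-cell pressures) and a pump(i, j, delta) method (positive inflates, negative deflates); the set-point and gain are invented figures:

```python
import numpy as np

TARGET_KPA = 4.0  # assumed per-cell comfort set-point
GAIN = 0.1        # proportional gain; small, so adjustment stays gentle

def equalize_step(pillow):
    """One control step: deflate cells where pressure is high, inflate
    cells where it is low, evening out support across head and neck."""
    pressures = np.asarray(pillow.pressures())
    for (i, j), p in np.ndenumerate(pressures):
        pillow.pump(i, j, -GAIN * (p - TARGET_KPA))
```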

The first prototype of Smart Pillow has undergone successful initial testing.

Future versions could incorporate:

  • emotional sensing and response
  • user personalization
  • reading illumination
  • telepresence sleeping
  • bedtime story tangible augmentation
  • wake alarm functionality
Dynamic Tools (Sep 29, 2015)

There is a tool for every purpose…and we end up with a lot of tools.

What if common hand tools could react to the way we try to use them, morphing to become better at what they recognize as the newly intended functionality? This project aims to explore some of those possibilities.


These sets of tools share the same interface (handles) and basic interaction mode (whacking, shoveling) but serve different purposes and functions. The tool could sense subtle changes in the interaction, infer the intended functionality, and morph to better suit it.

Functional recognition and adaptation could be activated by the context of the interaction (hitting a large flat hard surface: an ax; hitting a pointy hard surface: a nail; hitting a large soft surface: dirt), but could also be informed by the nature of the user’s action (sideways whacking, vertical whacking, digging). These two ways of recognizing functionality are often complementary. Consider another example: an eating utensil that senses either when the user is sipping from it (action) or when fluid is being poured into it (context), and reacts by becoming more concave to better function as a fluid-holding, drinkable vessel. (These two functions, holding fluid and being easy to sip from, are distinct but commonly found together in useful human artifacts.)
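
A toy illustration of fusing the two signals, with hypothetical features and hand-picked thresholds standing in for real sensing:

```python
def infer_function(hardness, contact_area_m2, action):
    """Combine context (what was struck) with action (how the tool was
    swung) to guess the intended function. All thresholds are invented."""
    if action == "digging" or hardness < 0.3:
        return "shovel"                 # large soft surface: dirt
    if action == "vertical" and contact_area_m2 < 1e-4:
        return "hammer"                 # pointy hard surface: a nail
    if action == "sideways" and hardness > 0.7:
        return "ax"                     # large flat hard surface: wood
    return "unchanged"                  # ambiguous: keep the current form
```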

Daniel Fitzgerald (Sep 23, 2015)

I am a new research assistant in the Tangible Media Group. My background is in robotics engineering and computer science, and I have experience with computer vision, 3D printing, and soft robotics/devices. I am excited to explore the interplay of these fields and others in the context of tangible media. I also want to learn more about design, storytelling, and presentation to make powerful and compelling projects and demonstrations.

★★★☆ Fabrication & Craft
★☆☆☆ Design
★★★☆ Electronics
★★★★ Programming
☆☆☆☆ Biology
☆☆☆☆ Chemistry
djfitz@media.mit.edu
413-210-7630