Perfect Red explored one possible 3D version of this concept, enabling digital operations over 3D shapes, including copy/paste, snapping, etc. Other projects, like jamSheets, explore the possibility of copying the shape of a 3D object in real time onto another, more digital tangible medium.
However, while both Perfect Red and jamSheets are 3D and tangible, they are also static once formed. I/O Brush could replicate dynamic content, but it was 2D and intangible. What if we could copy tangible dynamics found in the real world in the same way? What if we could “paint” kinetic motions?
This project proposes a small set of sensor/actuator pairs that act as “brushes” for real-world kinetics. The user records the motion of a real-world object (the original) with the sensors and can replicate that motion elsewhere with the corresponding actuator (the replicant). The pairs can operate either in a record/playback mode or in a real-time transduction mode. They can then be composed into kinetic sculptures or devices.
The set of sensor/actuator pairs is based on classical mechanisms, devices, and behaviors. These include continuous rotation (wheels), partial rotation (hinges), translation (pistons), vibration, heat, temperature, and phase (solid/liquid/gas).
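As a rough sketch of how a single pair might be driven, the code below assumes a hypothetical sensor object with a read() method and an actuator with a write() method; the same sample buffer serves record/playback, while the transduction mode wires the sensor straight to the actuator.

```python
import time

class KineticBrush:
    """Sketch of one sensor/actuator pair (e.g., a rotation "brush").

    `sensor.read()` and `actuator.write()` are placeholder interfaces;
    real hardware would expose encoder counts, servo commands, etc.
    """

    def __init__(self, sensor, actuator, sample_hz=100):
        self.sensor = sensor
        self.actuator = actuator
        self.period = 1.0 / sample_hz
        self.recording = []          # list of (timestamp, value) samples

    def record(self, duration_s):
        """Capture the motion of the original object."""
        start = time.time()
        while time.time() - start < duration_s:
            self.recording.append((time.time() - start, self.sensor.read()))
            time.sleep(self.period)

    def playback(self):
        """Replicate the recorded motion on the replicant."""
        start = time.time()
        for t, value in self.recording:
            # Wait until the sample's original timestamp, then reproduce it.
            time.sleep(max(0.0, t - (time.time() - start)))
            self.actuator.write(value)

    def transduce(self, duration_s):
        """Real-time mode: the sensor drives the actuator directly."""
        start = time.time()
        while time.time() - start < duration_s:
            self.actuator.write(self.sensor.read())
            time.sleep(self.period)
```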
Extension: The systems above allow only one-way interaction – sensor to actuator. However, if the parts of the system were symmetric servomechanisms, two-way interaction would be possible. The user could control the original physical system with its replicant and, more interestingly, link one replicant acting as an output to another acting as an input, physically coupling otherwise separate kinematic systems in the real world through tangible media.
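For the two-way extension, one minimal approach (sketched below, assuming hypothetical read_position()/drive_toward() interfaces) is a bilateral coupling loop in which each servomechanism is continually nudged toward the other's state, so pushing on either end moves both.

```python
import time

def couple(node_a, node_b, gain=0.5, steps=1000, dt=0.01):
    """Bilateral coupling sketch: each servomechanism is nudged toward the
    other's position, so moving either one moves both. read_position() and
    drive_toward() are assumed interfaces, not from any real library."""
    for _ in range(steps):
        pos_a = node_a.read_position()
        pos_b = node_b.read_position()
        error = pos_b - pos_a
        # Symmetric correction: neither side is solely the "input" or the "output".
        node_a.drive_toward(pos_a + gain * error)
        node_b.drive_toward(pos_b - gain * error)
        time.sleep(dt)
```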
Clay (or wet sand) is a natural medium for humans to express 3D shapes by direct physical manipulation of material. The problem with such expression is that it can take years of practice to create precise structures. If you haven’t developed the technique of an expert clay sculptor, you will probably end up with ill-formed blobs. As always, tools can help with this lack of dexterity. For example, buckets and other molds help us make perfect cylinders for the turrets of sand castles. However, we are limited by the tools – in this case the “scaffolds” – we happen to have on hand. We would need a new bucket for every new kind of castle we want to build. The static nature of our tools fundamentally limits our expression. What if we had a dynamic scaffold that could morph to suit whatever we wanted to build? The shape displays, properly modified, offer one such solution.
Implementation
By covering a shape display in an elastic membrane, then pouring sand over this membrane, we can create a “smart” sandbox. The sandbox can change its topology to create construction guides in 2.5D in real time. For example, it can easily create straight walls in discrete increments for “snap-to-grid” functionality. It can observe the topology of one sand construction (via an overhead depth sensor) and replicate that topology elsewhere (copy-paste). It can rise in places to act as a dynamic construction support for free-standing structures (bridges, arches), then lie flat again once the user has built them. The smart sandbox can even add dynamic behavior to otherwise static sand structures (a drawbridge over a moat, activating ramps over walls, etc.).
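A sketch of the copy-paste operation might look like the following, assuming a hypothetical depth_camera that returns a height map in millimeters and a shape_display object whose pins accept target heights over a region; NumPy is used only for resampling and the optional snap-to-grid quantization.

```python
import numpy as np

def copy_paste_topology(depth_camera, shape_display, source_region, target_region,
                        grid_step_mm=None):
    """Sandbox copy-paste sketch with assumed interfaces: depth_camera.capture()
    returns a 2D height map (mm), and shape_display drives pins in a region."""
    heights = depth_camera.capture()              # full height map of the sandbox
    patch = heights[source_region]                # crop the observed sand construction

    if grid_step_mm is not None:
        # Optional "snap-to-grid": quantize heights to discrete increments.
        patch = np.round(patch / grid_step_mm) * grid_step_mm

    # Resample the patch to match the pin resolution of the target region.
    rows, cols = shape_display.region_shape(target_region)
    r_idx = np.linspace(0, patch.shape[0] - 1, rows).astype(int)
    c_idx = np.linspace(0, patch.shape[1] - 1, cols).astype(int)
    resampled = patch[np.ix_(r_idx, c_idx)]

    shape_display.set_heights(target_region, resampled)   # raise the pins as a guide
```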
Extension: Full 3D
In the above implementation, the 2.5D constraint of the shape displays translates to a 2.5D constraint on construction. At best, we can achieve some overhangs and arches by using them in dynamic support mode. Furthermore, almost all of the actual construction is left to the user – the shape display cannot add or subtract material directly, but can only act as a guide for the user’s manipulation. A natural extension would be a robotic sand deposition/subtraction system able to augment the natural manipulation abilities of the human in a cooperative manner. The user could start making a simple shape and the robot could finish it, or the user could make the rough form of a structure and the robot would refine it. In the case of sand, such construction is straightforward because water is easily applied as a binding agent.
Extension: Free-form 3D drawing
Sand and clay represent bulk structures. Another useful paradigm is strut/line-based (wireframe) structures, as with the 3Doodler and other “3D pens”. Again, these media have the same problem: the limited dexterity of humans renders them useless for precise structures. Here again the shape displays can act as precise yet dynamic construction tools/scaffolds/guides. This is equivalent to 2D drawing with a compass and ruler, but in 2.5D.
The natural way of changing the pose (position and orientation) of an object in space is through direct physical manipulation, namely with our hands. However, we are limited to manipulating nearby objects by the length of our reach. This project aims to extend that natural ability to arbitrarily long range by transducing the manipulation through robots, which have no such locality constraint. This allows the user to naturally manipulate objects at a distance. The basic mode of interaction involves the gestural selection of a target object, which a robot picks up, and pose manipulation through a “handle” analog object. What the user does to the handle, the robot does to the target object. The result is telemanipulation, but unlike other robotic manipulation concepts, the user maintains their own reference frame. They carry out the interaction from their own perspective, as if they simply had longer arms, not as if they were teleported to whatever position the robot happens to be in. It is more like a non-psychic “telekinesis”.
Implementation
The technical implementation is rather straightforward. A gesture sensor (Microsoft Kinect) interprets the user pointing to the object they want to manipulate. Once the object is selected, pose input via the “handle” can be done with an accurate Inertial Measurement Unit (IMU) – a smartphone will suffice. The position and state of the robot do not matter in this framework, as long as the robot is close enough to the object to grab it. The job of the robot, then, is to map the pose of the handle to its own end effector, using standard inverse kinematics.
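One control cycle of that mapping could be sketched roughly as below; imu.pose(), robot.solve_ik(), and robot.move_to() are assumed, hypothetical interfaces standing in for whatever IMU driver and inverse-kinematics solver the robot actually provides, and the scale parameter anticipates the scaled-up and scaled-down manipulation discussed in the Extension.

```python
def telemanipulation_step(imu, robot, grasp_offset, scale=1.0):
    """One control cycle of the "handle" mapping (hypothetical interfaces):
    imu.pose() returns the handle's position (m) and orientation (quaternion);
    robot.solve_ik() / robot.move_to() wrap standard inverse kinematics.
    `scale` lets the same interface drive larger- or smaller-than-hand motions."""
    handle_pos, handle_quat = imu.pose()

    # Keep the user's reference frame: the robot reproduces the handle's
    # displacement (scaled) relative to where the object was grasped.
    target_pos = [g + scale * p for g, p in zip(grasp_offset, handle_pos)]
    target_quat = handle_quat                      # orientation copied directly

    joint_angles = robot.solve_ik(target_pos, target_quat)
    if joint_angles is not None:                   # pose is reachable
        robot.move_to(joint_angles)
```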
Extension
By transducing physical manipulation through robotic technology, many more capabilities are enabled. The robot can handle objects that would be too large or heavy to manipulate by hand, yet because the interface is consistent, their manipulation is just as intuitive to the user. The robot can also have a much larger workspace than the human arm, so the translations of the user can be scaled up beyond what the user could do in person. There is no requirement that the robot be stationary, or even grounded (quadcopters), so the user can select any target object within their line of sight and manipulate it without moving themselves, as long as the robot(s) can get to it. The inverse is also true – because the robot can be more precise than human hands, the user could use the same interface to scale the manipulation down into micro-manipulation, with fine control of translation at very small scales. The objects can still be large, however – you could position a shipping container with millimeter precision as easily as you could a marble.
The pillow has three specific modes of interaction:
The first prototype of Smart Pillow has undergone successful initial testing.
Future versions are recommended to incorporate:
What if common hand tools could react to the way we try to use them, becoming better at whatever new functionality they recognize we intend? This project aims to explore some of those possibilities.
These sets of tools share the same interface (handles) and basic interaction mode (whacking, shoveling) but serve different purposes and functions. The tool could sense subtle changes in the interaction, infer the intended functionality, and morph to better suit it.
Functional recognition and adaptation could be activated by the context of the interaction (hitting a large flat hard surface (ax), hitting a pointy hard surface (nail), hitting a large soft surface (dirt)), but could also be informed by the nature of the user’s action (sideways whacking, vertical whacking, digging). These two ways of recognizing functionality are often complementary. Consider another example: an eating utensil that senses either when the user is sipping from it (action) or when they are pouring fluid into it (context), and reacts by becoming more concave to better function as a fluid-holding drinkable vessel. (These two functions – holding fluid and being easy to sip from – are distinct but commonly found together in useful human artifacts.)
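As a toy illustration of how action and context cues might combine, the rule-based sketch below uses entirely hypothetical feature names and thresholds; a real tool would presumably learn these from sensor data rather than hard-code them.

```python
def infer_intent(contact_area_cm2, surface_hardness, swing_axis):
    """Toy rule-based sketch combining context (what was hit) with action
    (how the tool was swung). All thresholds are illustrative, not measured."""
    if surface_hardness > 0.8 and contact_area_cm2 < 1.0:
        return "hammer"          # small hard contact: driving a nail
    if surface_hardness > 0.8 and swing_axis == "sideways":
        return "axe"             # broad hard contact with a sideways whack
    if surface_hardness < 0.3 and swing_axis == "vertical":
        return "shovel"          # soft material, digging motion
    return "unknown"

# A tool controller would then morph toward the inferred function, e.g.:
# tool.morph_to(infer_intent(area, hardness, axis))   # hypothetical API
```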