Dynamic Construction Scaffold AKA Perfect Sand Castles (Idea 2 by Daniel)
http://mas834.media.mit.edu/2015/11/03/dynamic-construction-scaffold-aka-perfect-sand-castles/
Tue, 03 Nov 2015

Concept and Motivation

Clay (or wet sand) is a natural medium for humans to express 3D shapes by direct physical manipulation of material. The problem with such expression is that it can take years of practice to create precise structures. If you haven’t developed the technique of an expert clay sculptor, you will probably end up with ill-formed blobs. As always, tools can help with this lack of dexterity. For example, buckets and other molds help us make perfect cylinders for the turrets of sand castles. However, we are limited by the tools – in this case the “scaffolds” – we happen to have on hand. We would need a new bucket for every new kind of castle we want to build. The static nature of our tools fundamentally limits our expression. What if we had a dynamic scaffold that could morph to suit whatever we wanted to build? Shape displays, properly modified, offer one such solution.

Implementation

By covering a shape display in an elastic membrane, then pouring sand over this membrane, we can create a “smart” sandbox. The sandbox can change its topography to create construction guides in 2.5D in real time. For example, it can easily create straight walls in discrete increments for “snap-to-grid” functionality. It can observe the topography of one sand construction (via an overhead depth sensor) and replicate that topography elsewhere (copy-paste). It can rise up in areas to act as a dynamic construction support for free-standing structures (bridges, arches), then lie flat again once the user has built them. The smart sandbox can even add dynamic behavior to otherwise static sand structures (a drawbridge over a moat, actuating ramps over walls, etc.).
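To make the copy-paste interaction concrete, here is a minimal sketch of the sensing-to-pins pipeline. The grid size, pin travel, and array-based depth interface are all assumptions for illustration, not the API of any particular shape display:

```python
import numpy as np

PIN_GRID = (30, 30)      # pins per axis (assumed display resolution)
PIN_TRAVEL_MM = 100.0    # assumed maximum pin extension

def depth_to_pins(height_mm):
    """Downsample an overhead heightmap (2D array, mm) to per-pin heights."""
    h, w = height_mm.shape
    gy, gx = PIN_GRID
    by, bx = h // gy, w // gx
    # Average the sensed heights inside each pin's footprint.
    blocks = height_mm[: gy * by, : gx * bx].reshape(gy, by, gx, bx)
    return np.clip(blocks.mean(axis=(1, 3)), 0.0, PIN_TRAVEL_MM)

def copy_paste(pins, src, dst, size):
    """Replicate the topography of one region of the sandbox at another."""
    (sy, sx), (dy, dx), (hy, hx) = src, dst, size
    out = pins.copy()
    out[dy:dy + hy, dx:dx + hx] = pins[sy:sy + hy, sx:sx + hx]
    return out
```

The same pin array could drive the other guides: quantize heights for snap-to-grid walls, or hold a region high as a temporary support and lower it once the depth sensor shows the structure is self-supporting.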

Extension: Full 3D

In the above implementation, the 2.5D constraint of the shape displays translates to a 2.5D constraint on construction. At best, we can achieve some overhangs and arches by using the display in dynamic support mode. Furthermore, almost all of the actual construction is left to the user – the shape display cannot add or subtract material directly, but can only act as a guide for the user’s manipulation. A natural extension would be a robotic sand deposition/subtraction system that augments the natural manipulation abilities of the human in a cooperative manner. The user could start making a simple shape and the robot could finish it, or the user could make the rough form of a structure and the robot would refine it. Sand makes such construction especially tractable, since water is an easily applied binding agent.
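One way the cooperative loop might work, sketched under the assumption that the observed sand and the target form are both available as heightmaps on the same grid (all names here are hypothetical):

```python
import numpy as np

def material_plan(observed_mm, target_mm, tolerance_mm=2.0):
    """Signed height error per cell: positive = deposit sand, negative = carve."""
    error = target_mm - observed_mm
    error[np.abs(error) < tolerance_mm] = 0.0   # leave near-correct cells alone
    return error

def next_work_cell(plan):
    """Send the robot to the cell with the largest remaining error."""
    idx = np.unravel_index(np.argmax(np.abs(plan)), plan.shape)
    return idx, plan[idx]   # (row, col) and the signed depth to add or remove
```

Re-sensing after each deposition or carving pass closes the loop, so the user’s own hand motions are simply absorbed into the next observation.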

Extension: Free-form 3D drawing

Sand/clay represent bulk structures. Another useful paradigm is strut/line-based (wireframe) structures, as with the 3Doodler and other “3D pens”. These media have the same problem: the limited dexterity of humans makes precise structures all but impossible. Here again, shape displays can act as precise yet dynamic construction tools/scaffolds/guides. This is the equivalent of 2D drawing with a compass and ruler, but in 2.5D.
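As an illustration of the “compass in 2.5D” idea, the sketch below raises a one-pin-wide circular ridge that a 3D pen could be traced against, reusing the assumed pin-grid convention from the sandbox sketch:

```python
import numpy as np

def circle_guide(grid=(30, 30), center=(15.0, 15.0), radius=8.0, ridge_mm=40.0):
    """Raise a one-pin-wide circular ridge for the user to draw against."""
    yy, xx = np.indices(grid)
    dist = np.hypot(yy - center[0], xx - center[1])   # distance to circle center
    pins = np.zeros(grid)
    pins[np.abs(dist - radius) < 0.5] = ridge_mm      # pins within half a pin of the circle
    return pins
```

Straightedges and arbitrary template curves follow the same pattern: compute a distance field to the desired curve and raise the pins along it.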

Action at a Distance AKA Robot Physgun (Idea 1 by Daniel)
http://mas834.media.mit.edu/2015/11/02/action-at-a-distance-aka-robot-physgun-idea-1-by-daniel/
Tue, 03 Nov 2015

Concept and Motivation

The natural way of changing the pose (position and orientation) of an object in space is through direct physical manipulation, namely with our hands. However, we are limited to manipulating nearby objects by the length of our reach. This project aims to extend that natural ability to arbitrarily long range by transducing the manipulation through robots, which have no such locality constraint. This allows the user to naturally manipulate objects at a distance. The basic mode of interaction involves the gestural selection of a target object, which a robot picks up, and pose manipulation through a “handle” analog object. What the user does to the handle, the robot does to the target object. The result is telemanipulation, but unlike other robotic manipulation concepts, the user maintains their own reference frame. They carry out the interaction from their own perspective, as if they simply had longer arms, not as if they were teleported to whatever position the robot happens to be in. It is more like non-psychic “telekinesis”.

Implementation

The technical implementation is rather straightforward. A gesture sensor (Microsoft Kinect) interprets the user pointing to an object they want to manipulate. Once the object is selected, pose input via the “handle” can be captured with an accurate inertial measurement unit (IMU) – a smartphone will suffice. The position and state of the robot do not matter in this framework, as long as the robot is close enough to the object to grab it. The job of the robot, then, is to map the pose of the handle to its own end effector, using standard inverse kinematics.
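A minimal sketch of the handle-to-object mapping, assuming the handle and object poses arrive as 4x4 homogeneous transforms in the user’s frame (the sensing and robot interfaces are stand-ins, not real APIs):

```python
import numpy as np

class HandleMapper:
    """Apply the handle's motion since grasp to the grasped object's pose."""

    def __init__(self, handle_pose0, object_pose0):
        # Initial poses, captured at the moment the robot grabs the object.
        self.handle0_inv = np.linalg.inv(handle_pose0)
        self.object0 = object_pose0

    def target_object_pose(self, handle_pose):
        # The handle's motion so far, expressed in the user's frame...
        delta = handle_pose @ self.handle0_inv
        # ...applied verbatim to the object's starting pose.
        return delta @ self.object0
```

The robot’s controller would then solve inverse kinematics for whatever end-effector pose places the grasped object at the returned target, which is what keeps the interaction in the user’s reference frame rather than the robot’s.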

Extension

By transducing physical manipulation through robotic technology, many more capabilities are enabled. The robot can handle objects that would be too large or heavy to manipulate by hand, yet because the interface is consistent, their manipulation is just as intuitive to the user. The robot can also have a much larger workspace than the human arm, so the translations of the user can be scaled up beyond what the user could do in person. There is no requirement that the robot be stationary, or even grounded (quadcopters), so the user can select any target object within their line of sight and manipulate it without moving themselves, as long as the robot(s) can get to it. The inverse is also true – because the robot can be more precise than human hands, the user could use the same interface to scale the manipulation down into micro-manipulation, with fine control of translation at very small scales. The objects can still be large, however – you can position a shipping container with millimeter precision as easily as you could a marble.
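Continuing the HandleMapper sketch above, scaled manipulation amounts to putting a gain on the translational part of the handle’s motion before it is applied (the gain values are illustrative assumptions):

```python
import numpy as np

def scale_motion(delta, gain):
    """Scale the translational part of a 4x4 motion; leave the rotation intact."""
    scaled = delta.copy()
    scaled[:3, 3] *= gain   # gain > 1: crane-scale moves; gain < 1: micro-moves
    return scaled
```

Dropping scale_motion(delta, gain) in where HandleMapper applies the raw delta would let the same wrist motion drive a shipping container or a microscope stage.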

