
Final Prez & The Start of Class – Robyn

Concept: A professor/lecturer/teacher whose presentation is messing with them/being weird behind their back.

Method: learn a lot about color, make a really interesting presentation and then mess with it.

First: Research, figure out what I was going to say about color!

I started with a substantial stack of books on color. History of color, theory of color, atlas of color, design of color. Even a book called All About Color. Overall I acquired about half of MIT’s library section on the topic of color.

[photo: the stack of color books]

I learned that my favorite color, cerulean blue, is made from cobalt stannate and was invented in 1805 by Andreas Höpfner. I learned that cats and dogs are not actually color blind; they just see less vibrant colors. I learned how much the development of collapsible tin paint tubes in 1841 enabled the explosion of color in recent art history.

I dug through books of color and websites of color, compiling interesting snippets and stories that began to cohere into something interesting. My ideas evolved on a whiteboard, flowing and connecting together.

[photo: whiteboard of ideas]

Unfortunately, I promptly realized that having a ton of info on color doesn’t make a trick. At this point I switched to working on mechanics. (As a result: many interesting mechanics (++), far too large a scope (–).)

The concepts I wanted to play with were:

  • Extra space on screen that is assumed not to exist, because presentations don’t usually use it
  • A connection between how you interact with the screen and what it shows (smudging, moving, etc.)

Final presentation gag components:

  • Spazzy color/projector trouble start
  • Face recognition, spazzy or normal.
  • Presentation picture tilt with computer tilt
  • Presentation drifting around on screen; it drifts off, half the slide on the top and half on the bottom, then starts scrolling the slide top to bottom while I talk about it.
  • Smudging of presentation picture like water
  • Presentation starts drifting again and drifts all the way off the screen.

Video walk-throughs of the technical components of the performance:

Spazzy start, Spazzy Start

Face responding spazzyness, Face Flicker

First slide fade, Slide Fade

Presentation, Prez Slides Start

Tilt (synced with laptop screen tilt), Tilt Slide

(futuristic) Slides sliding around

Presentation, Prez Slides Middle

Ophelia Muddle, Ophelia

End of presentation & thank you, Prez Slides End

(futuristic) slide before thank you slides off screen, end.

 

Video of all components:

GitHub of code components: (coming soon)

 

Challenges I faced and retrospective solutions:

  1. Getting started without a concrete idea of what I was trying to do
    a. Just start doing things (EARLY) and see what happens.
    b. Start with a one-day concept proving sketch of idea & iterate!
  2. Wanting to do too much with too many different sorts of mechanics
    a. REALLY be aware of where the line between achievable and implausible is, and stay on the possible side of it. (Also see 1.b.)

Lessons learned:

1) Split up projects; draft/mock-up super simple proof-of-concept ideas. (Do drafts, lots of early drafts; iterate and develop your idea.)
2) Don’t be intimidated by things I’m not sure I can do (e.g., how to be funny??), just try and see what happens!
3) Technical excellence is worth 0 if it doesn’t work. (Keep it simple, stupid.) I.e., ACTUALLY be realistic about what you can accomplish.
4) Before starting the meat of a programming project, plan what you’re going to do and the best way to structure the program (e.g., if/else vs. switch statements).
5) I can code anything I want, it will just take a looong time.
6) Don’t be intimidated by the actually really fun part: performing for people!

 

Final Project Documentation — Let There Be Light

Magic Final Documentation

###############################################################
Script of the trick:
###############################################################
*magician stands next to a table with a black TV screen (facing the audience) and a black box*

The wizards of legend had amazing powers: the ability to make things disappear, the ability to move things without touching them, the ability to fly. In comparison, the “magic” performed by current magicians is trivial: sleight of hand and clever contraptions to befuddle people. But I wanted to return to the time when being magical meant being powerful. As powerful as a God. And on the first day of creation, what did God say? Let there be light.

*magician waves hand over device sitting on the table in front of him*

*device starts shining brightly*

Of course magic is only magical if it transcends boundaries: the laws of physics, the expectations of common folk, and sometimes… real physical boundaries.

*Bring audience member on stage, hand them a silicon box made to look like steel*

(to audience member) This is a metal box, correct? Go ahead, pick it up and feel it. Can you confirm it’s made of real metal? Awesome, thanks. Go ahead and place that over the device. Now for those of you out there who aren’t big physics nerds, it isn’t possible to send any sort of signal through solid metal using any of the technologies in our cell phones, computers, and everything else. You see, metal reflects electromagnetic waves, so for anyone who thought I was doing something tricky with wireless communication, explain this one.

*wave hand over box*

(to audience member) Now, go ahead and remove the box.

*device is shining, cover it up again*

(repeat the procedure, asking audience members whether to turn it on or off each time so they know you truly have control over whether it turns on or off)

*excuse the audience member back to their seat*

When it comes to light, I’m a bit biased. I have a particular source of light that I prefer above all others: fire. That’s right, I’m a total pyro, proud of it. My RuneScape username was actually irishpyro94 for all of middle school. True story.

Now, as a natural element, fire is inherently more difficult to control than manmade light sources like those I was controlling before. This doesn’t always go well, so I’m going to back up from my target a little bit in case something goes wrong. *looks at people in front row* Um, you guys all turned in your liability waivers, right? Ok, good.

*walk to other side of stage*

Alright, who wants to see some fire!?

*gesture as if shooting a fireball a la Dragon Ball Z, TV screen suddenly turns on, showing a picture of a blazing fire, sound effects*

*applause*

Thank you!

###############################################################
End script
###############################################################

This trick gives the illusion of generating fire and light at will. The script above describes the ideal version of the trick, which I unfortunately didn’t have time to fully implement. Below I describe the technology I used to perform a first approximation of the above.

There are two primary technical components: an infrared-based module for the up-close magic and a Bluetooth-based system for the fire trick. I will discuss the infrared technology first.

########################
INFRARED
########################
IR Transmitter
[photo: IR transmitter circuit]

The magician needs an IR LED attached to his wrist with an appropriate power supply hidden up his sleeve. In my trick, I used a breadboarded Arduino circuit. I powered an Arduino Nano (http://www.arduino.cc/en/Main/ArduinoBoardNano) from a 9V battery, then passed the 5V rail through an appropriately sized resistor to limit the current, then through the IR LED, and finally to ground. I taped this LED to the bottom of my palm near my wrist, positioned so that it was covered by the sleeve of my coat unless I extended my arms in front of me, as I do when flourishing my hands over the device in the trick. The LED I used turned out to be highly directional, only shining in a narrow beam. A better choice would be an omnidirectional light source, though then the magician would need a way to tell the LED when to turn on (as opposed to my setup, where the LED was always on). This could be accomplished with a flesh-colored flex sensor on the magician’s palm that can tell when the hand is flexed open.
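A minimal sketch of that flex-sensor variant might look like the following. This is an untested illustration, not what I built (my LED was simply always on); the pin numbers and threshold are assumptions that would need tuning.

```cpp
// Hypothetical flex-sensor-gated IR transmitter (untested sketch).
// Assumes a flex sensor in a voltage divider on A0 and an IR LED
// (through a current-limiting resistor) on pin 3.
const int FLEX_PIN = A0;
const int IR_LED_PIN = 3;
const int OPEN_THRESHOLD = 512;  // placeholder; calibrate for your sensor

void setup() {
  pinMode(IR_LED_PIN, OUTPUT);
}

void loop() {
  int flex = analogRead(FLEX_PIN);  // 0-1023
  // Shine IR only while the hand is flexed open, i.e. mid-flourish
  digitalWrite(IR_LED_PIN, flex > OPEN_THRESHOLD ? HIGH : LOW);
  delay(10);
}
```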

IR Receiver

[photo: IR receiver circuit]

The device that actually lights up needs only 3 components: a power source, an IR sensor, and a light source. I used an Arduino Nano powered via USB from my computer, though in the ideal performance the device would be battery-powered. I used an IR sensor with an integrated thresholding circuit, but the sensitivity could be tuned better if a discrete photodiode were used, with the output current read by a microcontroller that does the thresholding calculation. For the light source, I used three standard LEDs, one each of red, green, and blue. This results in an interesting iridescent effect because your eye doesn’t know whether to interpret the light as white or as discrete colors.
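In spirit, the receiver logic is just a toggle: each detected wave flips the lights. A hedged sketch, assuming a sensor module that pulls its output LOW on detection and LEDs on pins 9–11 (both assumptions, not my exact wiring):

```cpp
// Toggle three LEDs whenever the IR sensor fires (illustrative sketch).
const int IR_SENSOR_PIN = 2;
const int LED_PINS[] = {9, 10, 11};  // red, green, blue
bool lightsOn = false;

void setup() {
  pinMode(IR_SENSOR_PIN, INPUT);
  for (int i = 0; i < 3; i++) pinMode(LED_PINS[i], OUTPUT);
}

void loop() {
  // Many IR receiver modules drive their output LOW when they see IR.
  if (digitalRead(IR_SENSOR_PIN) == LOW) {
    lightsOn = !lightsOn;  // one wave of the hand = one toggle
    for (int i = 0; i < 3; i++) {
      digitalWrite(LED_PINS[i], lightsOn ? HIGH : LOW);
    }
    delay(500);  // crude debounce so a single wave isn't counted twice
  }
}
```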

Ideally, this device would be packaged in an interesting way, perhaps to look like an orb or crystal ball of some sort, like the one in Gandalf’s staff that he used in Moria and to drive away the Nazgûl during the Gondorian retreat from Osgiliath.

The silicon box mentioned in the script is an interesting touch that I wish I could have implemented. Silicon is transparent at IR wavelengths but opaque to visible light, and it looks like metal; I wanted to leverage that to convince any skeptics in the audience.

Here you can see the IR system in operation.

[video: IR system in operation]

########################
BLUETOOTH
########################
The final part of the trick involves “lighting the TV on fire”. This is done through Bluetooth signalling between the magician and the computer controlling the TV. In the script, I described a TV monitor to display the fire, but in the trick I performed, I used a laptop. Using a larger screen is better for dramatic effect, and if it could be powered wirelessly using resonant magnetic induction, that would add another impressive element to the trick. Then there could be some banter about the energy of the fire powering the screen or something.

Bluetooth transmitter

[photo: Bluetooth transmitter circuit]

On the magician’s side, there is an Arduino Nano wired to an HC-05 Bluetooth module (http://www.amazon.com/Bluetooth-converter-serial-communication-master/dp/B008AVPE6Q). The Arduino continuously reads analog voltage values from a potentiometer voltage divider circuit. These 0-5V voltages are converted to a 10-bit binary value (0-1023 in decimal). This decimal value is sent to the HC-05 using the SoftwareSerial Arduino module, and the HC-05 automatically transfers those values to the device with which it is paired. The magician needs to discreetly rotate the potentiometer dial counterclockwise during the “fireball throwing flourish”, thus signalling to the TV to display fire. Alternatively, there could just be a discreet button the Arduino waits for, which would have been way simpler and more robust than the potentiometer method. Lesson learned.

The code for this is found at https://github.com/colinmcd94/magic/blob/master/magic_analog_tx/magic_analog_tx.ino.
For anyone who wants to use Bluetooth communication for any reason, I have a skeleton file for using the HC-05 with Arduino+SoftwareSerial here: https://github.com/colinmcd94/magic/blob/master/magic_bluetooth/magic_bluetooth.ino.
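For illustration, the button-based alternative mentioned above might look something like this (a sketch, not the repo code; the pins, baud rate, and the idea of reusing the same below-100 fire threshold are my assumptions):

```cpp
// Hypothetical push-button transmitter for the HC-05 (untested sketch).
#include <SoftwareSerial.h>

SoftwareSerial bt(10, 11);  // RX, TX wired to the HC-05's TX, RX
const int BUTTON_PIN = 2;   // discreet button to ground, using INPUT_PULLUP

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  bt.begin(9600);           // HC-05 default baud rate
}

void loop() {
  // Mimic the potentiometer protocol: send a value below the fire
  // threshold (100) when pressed, and a high idle value otherwise.
  bt.println(digitalRead(BUTTON_PIN) == LOW ? 0 : 1023);
  delay(200);
}
```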

Bluetooth receiver
I paired my Mac with the HC-05 module, which is fairly easy to do from System Preferences. If the pairing initially fails, click Options and enter the code 1234 into the box. This is the default code for all HC-05 modules.

[screenshot: pairing the HC-05 in System Preferences]

My computer was running a shell script that connected to the HC-05 serial port using the stty command and received a data packet every 2 seconds or so. This script can be found at https://github.com/colinmcd94/magic/blob/master/fire.sh. You can see the values written to the terminal change as I rotate the potentiometer knob, and when the value falls below 100 (corresponding to about half a volt), my computer opens Safari and shows a video of fire. This is a webpage I wrote, available at http://mcdonnell.mit.edu/human/magicfire.html. It uses bigvideo.js to display a video across the whole screen.

Here, you can see the Bluetooth system in operation. You can see the decimal voltage readings being transferred to the shell script running in Terminal, and the “catching fire” in action.

[video: Bluetooth system in operation]

Genie and the Lamp: 2015 Edition

My final project was a modern take on the Genie and the Lamp story, in which I free a super-intelligent AI from a locked Faraday Cage. Along the way, we have a conversation about the nature of magic and deception.

The role of the AI was played by a Google Nexus Q, a discontinued media streamer/Android device from 2012. The Q is small, black, spherical, and contains a programmable LED ring, all of which (combined with its obscurity/unfamiliarity) give it a very other-worldly, mystical feeling.

[photo: the Nexus Q]

Setup

The Q sat inside a hinged black cubical box, and was attached to the projector and speakers. The performance was scripted and directed by two complementary Android applications (source available here), one running on the Nexus Q, and one running on my phone. The phone served as a remote – the volume buttons advanced the script running on the Q from line to line. This let me talk for as long as I wanted during our conversation, and still have the AI enter at the perfect time.

I also made use of my NFC deck from my previous project. This was just placed on the table as well.

Technicals

The Nexus Q required a bit of hacking. I unlocked it, rooted it, and installed CyanogenMod 11 (based on Android 4.4), just to be able to run the app I developed. I used Google’s Text-To-Speech service (UK English) as the voice of the AI, and used a Parse app for communication between the remote and the Q. I had hoped to use Bluetooth for the remote, since I thought it would be faster and more reliable (especially inside the Media Lab), but the Parse app was simpler to set up and easier to use.

In the middle of the trick, the AI prints out the secret card in ASCII art. I used this service to convert a PNG of the card to ASCII art, 100 pixels wide. The “print” function was randomly timed to simulate the erratic output of terminals. Throughout the performance, the AI also printed small, easter-egg-style comments and status messages for the alert audience members to appreciate.

Content

I tried to make the start a little bit of a surprise. In the story, I just happen to come across the AI, so when starting the performance I pretended to have forgotten something in my seat and was on my way to retrieve it when the closed black box called out to me for help.

From there, I acted out the story of meeting the AI, learning about its past, and receiving an offer for it to grant me one wish, a custom it learned from the internet that it thought to be appropriate in this scenario. I wished to be a magician, and from there began a conversation about deception that became the theme and message of the story: that deception can be used to give people experiences that are valuable because of their impossibility.

Along the way, I performed a card trick involving my phone and an NFC-tagged deck, to demonstrate the philosophy that I was trying to explain to the AI.

Here’s a video of the full performance:

Review

I put considerably more effort into writing a narrative for this performance than for previous ones, and I feel like it paid off in developing a more cohesive trick that both entertained and carried a takeaway message. There were a few points that I wish I could have refined, and a few extra polishes I wish I could have added, like the box closing itself at the end or a more elaborate card trick in the middle, but I was satisfied with where I had gotten the production.

I had a blast this semester, and will definitely be recommending the class to friends interested in magic. Thanks for running a fabulous class!

Final Project – Isa Sobrinho

Nanotech – Drones (?)

The idea was to create power balls that would follow your hand movements.

Initially we thought about using a Kinect sensor and setting it up to track the hand, translating its movement to the drone using Kinect Core Vision.

Then we learned (Luke told us) about the built-in camera in the drone and how to set it up to track your hand as if it were the ground, changing the drone’s position according to your movement.

Turns out, everything changed, since the weight the drone could carry was very restricted. We used carbon rods + mylar to make the cover, but didn’t manage to attach the LEDs.


 

Even though the drone did NOT obey me during the presentation, I did learn a lot from the class and am very grateful to have been part of it.


Power Balls / Magic Balls:

[screenshots: power/magic ball designs]


Used:

Setting Up:

https://github.com/patriciogonzalezvivo/KinectCoreVision

https://www.youtube.com/watch?v=s4-MCdRMd5E

How to make it:

Rolling Spider Drone 

Luke’s exoskeleton

Mylar laser-cut paper cover

Kinect sensors

Final Project Documentation (Kenny Friedman)

Beginnings
My final project is the culmination of projects I’ve done throughout the semester using magic to tell stories. I started with a math-based card trick involving the Fibonacci sequence. Then, I augmented that trick with a fake-Siri and had an argument with my computer for my Trick++. Next, for my midterm project, I used the pre-recorded audio concept and applied it to audio and video across multiple screens (the projector, an iPad, and an iPhone). This was similar to Marco Tempest’s iPod illusion, except with vertical displays of multiple sizes.

Now, I’m taking the same concept of interacting with screens, except the audio and video are no longer pre-recorded. This allows for less required precision, since the act responds to the performer, as well as more options for audience interaction and recovering from mistakes.

I think using techno illusions to convey a concept is a really powerful medium. In the iPod illusion, Marco uses it (aptly) to discuss the concept of deception. I really enjoyed the meta level of using a concept to talk about the concept. Since my technology involves augmented reality (AR), I initially thought of talking about AR by using AR. However, after playing around with the story, I realized that a more general and universal concept like deception, empathy, or time is a better use of techno illusions. I decided to talk about time.

Finally, I tried to step into a magic circle to think about mediums that are rarely used in magic. After looking through different kinds of art and word play, I noticed that I couldn’t find many examples of poetry & magic. And, with the exception of Bo Burnham, I couldn’t find any examples of comedy*, poetry, and magic combined.

*not that my goal was to perform something humorous or anything.

Thoughts Behind the Technology

Marco Tempest’s MultiVid (http://marcotempest.com/screen/Public_MultiVid) is a fantastic piece of software; however, it’s limited to videos on iOS devices. The videos sync, but they can’t interact. I wanted to make a framework that incorporated many aspects of an interactive multimedia performance.

I ended up successfully implementing three interactive multimedia elements. They are (in increasing order of technical impressiveness): (1) timing a video projected on a wall that you can interact with, (2) communicating with a fake artificial intelligence, and (3) knowing the position of a mobile device in free space.

For my performance, I put the least technically interesting one in the middle. Ignoring (for now) the gimmicks & props that I used throughout, there were 3 main parts to my trick, each corresponding to one of the technologies. These three are described below:

1. Interactive Projected Screen

While an interesting trick, this part is the least impressive from a technological standpoint. I created an app that can control pause/play functionality on another device by tapping anywhere on the screen of the first device. This means you can have “chapters” in an interactive screen trick: you don’t have to have a single video that encapsulates the entire performance (as I did for my midterm performance). This capability is possible in Marco’s MultiVid as well, but my version can send multiple commands (instead of simply play/pause), so it would be possible to branch based on audience input (however, I don’t use this functionality in my trick).

The communication between devices is OSC* (see more on OSC below) on both iOS and Mac. (My trick involves just iOS-to-iOS for this section.) Each device runs an instance of a custom app: one app receives data and displays video, and the other is used as the controller.
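As a sketch of the controller side (not my exact app; the IP, port, and message address are placeholders), the openFrameworks/ofxOsc version of “tap anywhere to send the next command” is only a few lines:

```cpp
// Controller app sketch: each tap/click sends a numbered command over OSC.
#include "ofMain.h"
#include "ofxOsc.h"

class ControllerApp : public ofBaseApp {
  ofxOscSender sender;
  int command = 0;
public:
  void setup() {
    sender.setup("192.168.1.42", 9000);  // receiver's IP and port (placeholders)
  }
  void mousePressed(int x, int y, int button) {
    // Numbered commands (not just play/pause) make branching possible
    ofxOscMessage m;
    m.setAddress("/performance/command");
    m.addIntArg(++command);
    sender.sendMessage(m, false);
  }
};
```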

2. Communicating with a fake Artificial Intelligence

In both my Trick++ and my midterm, I had a conversation with a fake AI that pretended to be Siri. In both performances, the conversations were prerecorded. There are three problems with this approach: timing during the performance has to be nearly perfect; changing the script is very difficult after it’s been created the first time (you have to regenerate the entire audio clip); and finally, there were no pause commands using my old method (see midterm documentation), so once the audio was generated, you had to manually insert breaks using an audio editor.

This time, I created an easy-to-use system that procedurally generates the audio during the performance, with easy timing controls (either pause commands or a remote device). The audio is generated using AVSpeechUtterance, which is part of Apple’s native iOS SDK. This solves all three of the original problems with the prerecorded versions. It also enables the possibility of branching during the performance (by pressing different buttons on the remote), but again, that was not part of my performance.

3. Knowing the Position of an iOS Device in Free Space

Here are a bunch of ways not to get this feature to work (or, at least, ways I couldn’t get it working):

  • Accelerometer Data: I first tried to determine the position in free space by playing with the accelerometer data. However, the accelerometer produces acceleration; you need to take the double integral of that to find the distance traveled, and integrating twice amplifies so much noise that the result is impossible to calculate accurately. Holding the device perfectly still will report that the device has traveled meters, so centimeter accuracy over any length of time is impossible. (See the toy simulation after this list.)
  • Vuforia AR: Vuforia is a great AR framework developed by Qualcomm that has nearly perfect target tracking. The targets/markers can be a photo of any object with well-defined (high-contrast) borders. I had used this framework before in a UROP, but not for this purpose. The goal was to find the vectors of a particular marker that the projector was projecting onto the screen: using the camera on the iOS device, it would detect the marker and infer the device’s position from it. I couldn’t get this approach working.
  • Optical Flow: I believe this one is possible if you have a better understanding of linear algebra than I do. I don’t have much matrix experience and couldn’t figure out the math to do this one correctly. It’s probably doable, and someone definitely should do it.
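To see why the accelerometer approach fails, here’s a toy C++ simulation of double-integrating pure sensor noise while the device sits perfectly still (the noise magnitude is a made-up ballpark, purely for illustration):

```cpp
// Double-integrating accelerometer noise: the position estimate drifts
// even though the "device" never moves.
#include <cstdio>
#include <cstdlib>

int main() {
    const double dt = 0.01;     // 100 Hz sample rate
    const double noise = 0.02;  // +/- 0.02 m/s^2 of sensor noise (assumed)
    double v = 0.0, x = 0.0;    // integrated velocity and position
    for (int i = 0; i < 100 * 60; ++i) {  // one simulated minute
        // True acceleration is zero; we only ever read noise.
        double a = noise * (2.0 * rand() / RAND_MAX - 1.0);
        v += a * dt;  // first integration: velocity random-walks
        x += v * dt;  // second integration: position drift compounds
    }
    printf("Apparent travel after 60 s of stillness: %.3f m\n", x);
    return 0;
}
```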

Those three methods all involve the device itself detecting its location. After none of these worked, I moved to a fallback: have a secondary, stationary device detect the movement and wirelessly transfer the data to the moving device. I ended up using OpenCV in openFrameworks to detect a marker on screen, and then transferred the location information via OSC back to the iPad itself.
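A rough sketch of that fallback’s stationary side, under assumptions (a bright marker isolated by a simple brightness threshold; placeholder IP, port, and address; my actual detection may have differed):

```cpp
// Stationary tracker sketch: find the brightest blob and stream its
// centroid to the iPad over OSC.
#include "ofMain.h"
#include "ofxOpenCv.h"
#include "ofxOsc.h"

class TrackerApp : public ofBaseApp {
  ofVideoGrabber cam;
  ofxCvColorImage color;
  ofxCvGrayscaleImage gray;
  ofxCvContourFinder contours;
  ofxOscSender sender;
public:
  void setup() {
    cam.initGrabber(640, 480);
    color.allocate(640, 480);
    gray.allocate(640, 480);
    sender.setup("192.168.1.50", 9001);  // iPad's IP and port (placeholders)
  }
  void update() {
    cam.update();
    if (!cam.isFrameNew()) return;
    color.setFromPixels(cam.getPixels());
    gray = color;                  // convert to grayscale
    gray.threshold(230);           // keep only the bright marker
    contours.findContours(gray, 20, 640 * 480 / 4, 1, false);
    if (contours.nBlobs > 0) {
      ofxOscMessage m;
      m.setAddress("/marker/position");
      m.addFloatArg(contours.blobs[0].centroid.x);
      m.addFloatArg(contours.blobs[0].centroid.y);
      sender.sendMessage(m, false);
    }
  }
};
```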

How To Properly Communicate Between Apple Devices

I did not perform on Final Presentations day because I could not get the devices communicating (which ruins parts 1 and 3 [and sometimes 2] of my performance).

I initially thought that high-level Bluetooth LE protocols would be the way to go, since I’d used them for iOS-to-iOS data transfer before. However, when transferring data from iOS to Mac, it doesn’t work: BLE on OS X is only set up to act as a central device, not a peripheral, and my system needed the Mac to be the peripheral and the iPad to be the central, receiving device.

OSC saved the day, using the built-in OSC framework (ofxOsc) that comes standard in both the iOS and OS X versions of openFrameworks.
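For completeness, the receiving side in ofxOsc is symmetric to the sender sketches above (the port and address are again placeholders):

```cpp
// iPad-side receiver sketch: read marker positions as they arrive.
#include "ofMain.h"
#include "ofxOsc.h"

class ReceiverApp : public ofBaseApp {
  ofxOscReceiver receiver;
  ofPoint markerPos;
public:
  void setup() {
    receiver.setup(9001);  // listen on the agreed port
  }
  void update() {
    while (receiver.hasWaitingMessages()) {
      ofxOscMessage m;
      receiver.getNextMessage(m);
      if (m.getAddress() == "/marker/position") {
        markerPos.set(m.getArgAsFloat(0), m.getArgAsFloat(1));
      }
    }
  }
};
```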

Magic and the Soldier Design Competition

This past Monday I missed class to give a presentation to a panel of nine judges, including the dean of West Point and the head of the MIT Physics Department, as part of the Soldier Design Competition. This program is run through the Institute for Soldier Nanotechnologies and pits teams from West Point against teams from MIT to compete for prizes, trophies, and the occasional defense contract.

The team I was a part of is building a hierarchy of drones that can be used to sense or image important battlefield information and relay it to soldiers on the ground or to remote command-and-control facilities. The hierarchy includes a “nanodrone” less than four inches on a side, a foldable tricopter capable of carrying sensor payloads up to one kilogram, and larger fixed-wing drones with powerful radios for communicating information back to base.

The obvious magical analogy to make here is omniscience. Synthesizing biometric and positional sensor data from individual soldiers, visible and IR battlefield imaging, acoustic gunshot localization, LIDAR environment mapping, and any other sensing capability using drones as a mobile backbone gives troops and generals alike perfect situational awareness at all times. This is the message I alluded to throughout my presentation and stated explicitly on the final slide.

I also used showmanship techniques to emphasize my points. To demonstrate the small size of the nanodrone, I brought one onstage. But unlike the other teams, which hauled large prototypes across the floor, I carried mine discreetly in my suit jacket pocket. As I verbally extolled the small size of these drones, I pulled it out with a flourish and handed it to the judges.