Through the Keyhole and Into the Shire
https://courses.media.mit.edu/2016spring/mass65/2016/05/16/through-the-keyhole-and-into-the-shire/
Mon, 16 May 2016 23:10:04 +0000

doorsofdurin

Historically, one of the oldest references to a magical wand describes the supernatural item in the form of a staff. A staff, if not too ornate, gives its owner a lethal weapon that most observers would take for an ordinary walking stick. For my final project, I assembled a moonlight-emitting staff with the power to unlock a door. The idea of unlocking doors with moonlight stemmed in part from Tolkien’s wondrously entertaining, magical adventure novel The Hobbit. In the story, the protagonists are given the following clue for entering the Lonely Mountain and reclaiming their treasure from the treacherous dragon, Smaug: “‘Stand by the grey stone when the thrush knocks,’ read Elrond, ‘and the setting sun with the last light of Durin’s Day will shine upon the key-hole.’” In the book, the last light refers to the setting sun; the film embellishes the riddle by declaring moonlight alone to be the last light of Durin’s Day.

durins_day_th

Moonlight does, however, play a vital role in the novel, as Durin’s Day is “when the last moon of Autumn and the sun are in the sky together.”
The technique I chose for developing a light source mimicking moonlight was to wire together a multitude of colored LEDs such that the number of each color was proportional to the relative intensity of the moonlight spectrum at that wavelength. With only a finite number of LED colors in the circuit, the light source can only approximate moonlight; with enough LEDs spread fairly evenly across the visible region of the spectrum, however, the approximation should be sufficient. The spectrum I used to determine the count for each color, originally from a paper by Ciocca & Wang, can be viewed at the following link:

http://physics.stackexchange.com/questions/244922/why-does-moonlight-have-a-lower-color-temperature

The Moon’s spectrum, shown as a blue line, was taken with the Moon at an altitude of 57 degrees. The red line on the diagram is the Sun’s spectrum at an altitude of 30 degrees. As the plot shows, the two differ markedly over most of the visible spectrum.

Six colors of LEDs were placed in the circuit: blue at 467 nm, green at 522 nm, yellow at 592 nm, amber yellow at 595 nm, orange at 609 nm, and red at 660 nm. To size the circuit, I first read the relative signal strength for each color off the spectrum. To the nearest tenth, I obtained: blue 0.4, green 0.9, yellow 1.1, amber yellow 1.1, orange 1.1, and red 1.0. Multiplying each number by ten gave the count of LEDs for that color, for a total of 4+9+11+11+11+10 = 56. Since 56 is divisible by 4, I set the circuit up as 14 parallel branches of 4 LEDs each.

Wanting an even spread of the different colors so the source would appear as uniform as possible, I arranged the LED positions anything but randomly. Instead, I played a Sudoku-like game, requiring that no branch contain more than one LED of a given color while limiting the number of same-color LEDs in each of the four rows to 3. To limit the current through the LEDs, a 180 ohm resistor was placed at the head of each branch. A 9 Volt battery and a 3 Volt battery pack were connected in series to supply 12 Volts to the circuit. A switch connected to the positive terminal of the 3 Volt battery pack lets the wizard easily turn the LEDs on and off by closing and opening the circuit. The resistors and LEDs were soldered onto a RadioShack printed circuit board; a picture of the complete circuit is shown below.

For the body of my staff, I selected a six-foot cylindrical wooden pole. The assembled printed circuit board was mounted on top of this pole, along with the batteries. A piece of plastic was taped over the LEDs to diffuse the emitted light, and a plastic bag was further placed over the board to help with the diffusion.

Staff Circuit
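The count arithmetic is simple enough to script. Below is a small C++ sketch that reproduces it; the 2.5 V average forward drop per LED used in the branch-current check is an assumed round figure, not a measured value from my build.

#include <cmath>
#include <cstdio>

int main() {
    // Relative intensities read off the Ciocca & Wang moonlight spectrum,
    // as quoted above, one entry per LED color.
    const char* names[] = {"blue 467 nm", "green 522 nm", "yellow 592 nm",
                           "amber 595 nm", "orange 609 nm", "red 660 nm"};
    const double intensity[] = {0.4, 0.9, 1.1, 1.1, 1.1, 1.0};

    int total = 0;
    for (int i = 0; i < 6; ++i) {
        // The "multiply by ten" rule from the text gives the LED count per color.
        int count = (int)std::lround(intensity[i] * 10.0);
        total += count;
        std::printf("%-14s -> %2d LEDs\n", names[i], count);
    }
    std::printf("total: %d LEDs -> %d branches of 4\n", total, total / 4);

    // Rough per-branch current check: 12 V supply, four LEDs in series
    // (assumed ~2.5 V forward drop each), 180 ohm resistor.
    const double branch_current = (12.0 - 4 * 2.5) / 180.0;
    std::printf("approx. branch current: %.1f mA\n", branch_current * 1000.0);
    return 0;
}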

The door to be unlocked was a circular wooden one with a diameter of 4 inches. The door was part of a rectangular box, all six sides of which were laser cut and attached with Gorilla Glue. The box was designed in Onshape with grooves and protruding edges for the attachment; a picture of the design may be examined below. The shorter side visible on the right of the picture shows a large circular opening; after laser cutting all six sides, I kept the cut-out circle from this piece to serve as the door. The shorter side not visible on the left is solid, without a cut-out. Notice also the small hole on the top piece: this is the entry point for light into the system (in my actual setup, I positioned this hole on the left side). I screwed a hinge onto the door and attached a sliding lock to its interior side. On the front of the door, opposite the lock, were four screws, one of which I fitted with a bolt to form a door handle. The unlocking mechanism was a high-torque servo motor with a line of plastic tubing tied to the moving part of the lock; an Arduino Leonardo microcontroller board drove the motor.

Box

There exist a variety of methods for sensing light. One approach I strongly considered was to compute a Fast Fourier Transform of images of the light using Matlab; the 2D FFT of an image of a source reveals its spatial-frequency structure, which differs between sources. Below are images, and the corresponding shifted FFTs, of both the light fixture atop my staff and an LED flashlight. Observing the differences between the two FFTs, it is clear that a comparator program could have distinguished the two sources (my simulated moonlight versus a sunlight stand-in) in this case. The challenges with this approach, however, are the computer’s lag time for processing the images and the difficulty of running Matlab and Arduino programs in concert. A simpler approach, and the one I implemented, is to use Arduino exclusively, with photodetectors.

Staff

StaffFourier

flashlight

FlashlightFourier
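For anyone who wants to reproduce this comparison outside Matlab, the same log-magnitude FFT can be computed in C++ with OpenCV. This is a minimal sketch under stated assumptions: the file name is illustrative, and the quadrant swap that centers the DC term in the shifted FFTs shown above is omitted for brevity.

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat img = cv::imread("staff_light.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) { std::cerr << "no image\n"; return 1; }

    // Pad to an optimal DFT size, then take a complex 2D DFT.
    cv::Mat padded;
    int m = cv::getOptimalDFTSize(img.rows);
    int n = cv::getOptimalDFTSize(img.cols);
    cv::copyMakeBorder(img, padded, 0, m - img.rows, 0, n - img.cols,
                       cv::BORDER_CONSTANT, cv::Scalar::all(0));
    cv::Mat planes[] = {cv::Mat_<float>(padded),
                        cv::Mat::zeros(padded.size(), CV_32F)};
    cv::Mat complexImg;
    cv::merge(planes, 2, complexImg);
    cv::dft(complexImg, complexImg);

    // Log-magnitude spectrum, normalized to [0, 1] for display.
    cv::split(complexImg, planes);
    cv::Mat mag;
    cv::magnitude(planes[0], planes[1], mag);
    mag += cv::Scalar::all(1);
    cv::log(mag, mag);
    cv::normalize(mag, mag, 0, 1, cv::NORM_MINMAX);

    cv::imshow("log-magnitude FFT", mag);
    cv::waitKey(0);
    return 0;
}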

For the light-sensing circuit, I considered photodiodes, phototransistors, and photocells. Connecting a photodiode in reverse bias to a resistor, without an operational amplifier in the circuit, proved dreadful: the data received by the Arduino was intermittent, and quite often the device would output no signal at all unless I hovered a light source extremely close to it. I quickly concluded that an alternative device was necessary to obtain successful readings. I next tested a phototransistor with peak responsivity at 570 nm. This worked remarkably well: ambient lighting yielded a low analog reading compared to my moonlight staff, and when I switched from the staff to a bright LED, the value jumped by an order of magnitude. Photocell readings for different light sources were more uniform, making sources harder to tell apart, though photocells excelled at reproducing the same value for a given source. My final system features a phototransistor in the configuration drawn below, with a 10 kilo-ohm resistor.

Phototransistor Circuit

In my program, I use a digital bandpass filter: moonlight is declared identified when my given value (the analog read signal multiplied by one million) lies between 650 million and 750 million. Under that condition, the servo activates to unlock the door; a minimal sketch of this logic appears below. The video that follows features me testing a variety of light sources in addition to my staff. Notice that before testing begins, I tilt the box to the right to lock it. If, for whatever reason, I am unable to open the door, I can manually tilt the box to the left to slide the lock back toward the center of the door; this prevents me from being permanently locked out. As shown in the video, when the moonlight staff is placed over the keyhole, the arm on the servo motor rotates and the door unlocks. Upon entering through the door, which resembles that of a Hobbit’s home, one can view the electronics (Arduino Leonardo, breadboard with components, and servo motor) and associated wiring that enable the system to function.
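Here is that sketch, an Arduino (C++) program of the logic just described; the pin assignments and servo angles are assumptions for illustration, not the exact values from my build.

#include <Servo.h>

Servo lockServo;
const int SENSOR_PIN = A0;    // phototransistor with the 10k resistor
const long SCALE = 1000000L;  // the text scales the raw reading by one million

void setup() {
  lockServo.attach(9);        // assumed servo pin
  lockServo.write(0);         // assumed locked position
}

void loop() {
  long value = (long)analogRead(SENSOR_PIN) * SCALE;
  // The "bandpass" window: only the moonlight staff should land in it.
  if (value >= 650000000L && value <= 750000000L) {
    lockServo.write(90);      // rotate the arm to slide the lock open (assumed angle)
  }
  delay(50);
}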

 

The moonlight staff I created fits the natural phenomenon well: just as the Sun is far brighter than the Moon, the colored LEDs on my staff produce light of lesser intensity than bright white LEDs. I have thoroughly enjoyed the freedom to explore a topic that intrigued me greatly after reading Tolkien’s works. In the process, I have learned a good deal of science, especially astronomy, as well as the magical significance of moonlight as a trigger.

The Magic Mirror
https://courses.media.mit.edu/2016spring/mass65/2016/05/14/the-magic-mirror/
Sat, 14 May 2016 21:44:17 +0000

Introduction:
The Magic Mirror is a spin-off of my facial-animation work from last week. However, I made a massive jump from purely digital art, modeling, and animation to a full software application for live facial tracking. The project is inspired by the sassy mirror character in DreamWorks’ Shrek, but any image with a clear picture of a face could be used with the facial-recognition algorithm I’m using.

Resources:
The open-source GitHub repository of what I have done can be found at: https://github.com/emvanb/MagicMirror
This work builds upon Kyle McDonald’s ofxFaceTracker and Arturo Castro’s FaceSubstitution libraries.

My Experiences:
This project was a pretty extreme stretch for me, especially since, when I proposed the idea, I didn’t yet know that facial capture and recognition is such a niche field, with software that costs up to $10,000. As a naive but passionate CS undergrad at MIT, I should have recognized that I was a little in over my head.

When I started this project, I figured it would be difficult but not infeasible. After many hours of researching on the web, and asking companies for free trials of their software, I realized that this mirror might be closer to infeasible than difficult.
I had previously worked on capturing body movements with the XBOX Kinect for another Media Lab project called BodyPaint:
https://www.youtube.com/watch?v=Em9zXr30LlE
It was my mistake to assume that capturing the motion of a face would be similar to capturing an arm or a foot, since facial tracking lives in the details. Instead of using the XBOX Kinect, it seemed that focusing on detailed images from a camera would be my best shot.

I eventually gave up on looking for pre-made applications online, as none of them was free and all of them required some elaborate setup. At that point I stumbled across an openFrameworks add-on I could use for facial tracking, along with a project that was being used to map masks onto people’s faces over a camera feed. It wasn’t quite what I had in mind, but it was close enough. I dug around in the code to turn off any possibility of the camera showing a face, and changed the shader so that the mask didn’t attempt to blend its color with my skin. I also increased the strength of the masked color and turned the background black. I then fixed the modeling of my mask and took a picture of it so that I could import it into the software. After numerous tests I deemed it as good as it was going to get for the presentation, and started training my voice actor to use the app. For some reason, the software was better at picking up my smaller, rounder face than his longer one, but after careful calibration we found which light source was needed, what distance to the screen was necessary, and which head movements were best for not losing tracking.
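Stripped of the mask texturing and shader changes, the core tracking loop looks roughly like the sketch below. This is my paraphrase built on ofxFaceTracker and ofxCv, not the project’s exact code, and the wireframe draw stands in for the textured mask render.

#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"

class MirrorApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxFaceTracker tracker;

    void setup() override {
        cam.setup(1280, 720);
        tracker.setup();
    }
    void update() override {
        cam.update();
        if (cam.isFrameNew()) {
            tracker.update(ofxCv::toCv(cam)); // fit the face model to the new frame
        }
    }
    void draw() override {
        ofBackground(0); // black background: never show the raw camera feed
        if (tracker.getFound()) {
            // Mesh of the tracked face in image coordinates; the mask texture
            // gets bound to this mesh instead of drawing the camera image.
            ofMesh face = tracker.getImageMesh();
            face.drawWireframe(); // placeholder for the textured mask draw
        }
    }
};

int main() {
    ofSetupOpenGL(1280, 720, OF_WINDOW);
    ofRunApp(new MirrorApp());
}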

For the monitor display, I went to Blick to buy fancy gold paper and cardboard to frame a large 48″ HD screen with a half-silvered mirror on top (most of the frame was held together with packing and duct tape from the back, but hey, it worked). I then draped black cloth over the front bottom part of the mirror so the audience couldn’t see the setup behind it. That setup consisted of two lamps with carefully dimmed light in various positions, a laptop connected to the HD TV with an HDMI cable, and my friend Yousef Alowayed, who served as the voice behind the mirror. An extremely special thank you to Yousef: he’s an undergrad just like me, and he took time out of his extremely busy finals week to help me with my grad-class project and listen to my attempts at directing him on how to move his face and where to sit.

Conclusion:
Overall, this project was (in my opinion) a pretty big success. I was able to adapt a program that was very difficult for me to understand until it roughly followed what some insanely expensive industry software does. It was a very valuable learning experience, and quite the hilarious portfolio piece. I would like to thank my professors and mentors Dan Novy and V. Michael Bove for helping and encouraging me throughout this project, especially since they probably already knew it was going to be a long shot. Also, special thanks to Anthony Kawecki and Anthony Occidentale for helping me throughout this class and for listening to me rant nonstop about how my mesh was detaching from my rigging, Maya was failing, and facial tracking was glitching for the past several weeks.

Note: The second movie referenced in the above video is Finding Dory, to be released by PIXAR, not Disney.

Final Project – The Magic Wand
https://courses.media.mit.edu/2016spring/mass65/2016/05/13/final-project-the-magic-wand/
Fri, 13 May 2016 17:47:55 +0000

My final project was a magic wand.

lantern2

More documentation can be found here.

Moonlight over the Doors of Durin
https://courses.media.mit.edu/2016spring/mass65/2016/05/01/moonlight-over-the-doors-of-durin/
Mon, 02 May 2016 03:50:08 +0000

As an admirer of J.R.R. Tolkien’s Middle-earth, I strove to replicate his hidden message on the Doors of Durin, the western entrance to the Dwarven city of Khazad-dûm. These Doors were visible only under starlight and moonlight, and so could not be seen by sunlight during the day. To generate moonlight I took a materials-based approach. My rationale was that moonlight has a spectral signature not limited to a single wavelength; this is quite logical, as moonlight is largely sunlight reflected off the Moon’s surface, and sunlight extends over the entire electromagnetic spectrum. In 1929, the French astronomer Bernard Lyot made a volcanic ash mixture with optical characteristics nearly identical to those of lunar rock; specifically, he demonstrated a near-perfect match in light polarization between the two materials over all phase angles.

I made a model of the Moon by gluing volcanic ash from St. Helena onto a green Styrofoam sphere with a spray adhesive. The circumference of the sphere prior to applying the ash was approximately 31 centimeters; after gluing on the ash, the circumference increased to 33 centimeters. Dividing the 2 centimeter increase in circumference by 2π, the average thickness of the ash layer on my model Moon is therefore about 0.3 centimeters. To elevate the Moon over a surface, I stuck one end of a wooden dowel into the Styrofoam, so that my model looked like a lollipop. The other end of the dowel was put into a green Styrofoam rectangular prism resting on the table. Below is a picture of the constructed model.

Model Moon

To compare my model with the actual Moon, I determined the intrinsic brightness of both objects using the equation:

H = 5 log₁₀(1329 / (D √A))

Here, A is the albedo (the fraction of incident light reflected off the Moon), H is the absolute magnitude (the brightness the object would have if positioned one astronomical unit away; this distance is about the distance from the Earth to the Sun: 1 AU = 149,597,870,700 meters), and D is the Moon’s diameter in kilometers. The Moon’s albedo is approximately 12 percent. Substituting this value into the above equation and calculating the diameter by dividing the circumference by pi and converting to kilometers (D = 1.0504×10⁻⁴ km), the absolute magnitude for my model Moon is H = 37.8129. The absolute magnitude for the real Moon is quoted as H_Moon = 0.25. Objects that are intrinsically brighter have a lower absolute magnitude; for very bright objects the absolute magnitude can even be negative. It is evident from the above equation that diameter and absolute magnitude are related logarithmically. To find how many times brighter one object is than another, simply take the fifth root of 100

100^(1/5) ≈ 2.512

and raise it to the power of the positive difference between the absolute magnitudes.  Thus, the Moon is

2.512^(37.8129 − 0.25)

≈ 1.06145×10¹⁵ times brighter than my sphere made with volcanic ash.
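For anyone checking the numbers, here is a small C++ sketch of the whole calculation, using the albedo/diameter/magnitude relation quoted above:

#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979;
    const double A = 0.12;                 // lunar albedo used above
    const double D = (33.0 / PI) * 1e-5;   // 33 cm circumference -> diameter in km
    const double H_model = 5.0 * std::log10(1329.0 / (D * std::sqrt(A)));
    const double H_moon = 0.25;

    // One magnitude step is the fifth root of 100, about 2.512, in brightness.
    const double ratio = std::pow(100.0, (H_model - H_moon) / 5.0);

    std::printf("H_model = %.4f\n", H_model);               // ~37.8129
    std::printf("Moon / model brightness = %.5e\n", ratio); // ~1.06145e15
    return 0;
}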

To produce an ink invisible under illumination by sunlight but visible under moonlight, I resorted to a contrast scheme: I applied ink of nearly identical color to the parchment I wrote on. The catch is that the ink needed to differ slightly from the paper, with an additional color component exploiting the difference between the Sun’s and Moon’s spectra. Sunlight consists of roughly equal parts red, green, and blue; moonlight possesses a noticeably stronger red component and a weaker blue component. Many people are unaware of this fact, and the reason is the Purkinje effect: in darker settings, objects appear bluer to us because vision shifts to the rods in the human eye.

Knowing that the Purkinje effect would not prove a significant issue with a light source positioned near my Moon to reflect brightly onto my parchment, I tried two different contrast schemes. One used blue paper with a blue ink that had a tinge of black added; the second used black paper with a black ink that had a tinge of red added. The inks were mixed and applied with a paintbrush. Upon writing with them, I quickly realized that they differed in color from their corresponding parchments even before any secondary color was mixed in. Thus, testing was needed to make inks of exactly the same colors as the blue and black parchments. I began with a rough test, simply adding spoonfuls of primary ink to dabs of secondary ink, but soon recognized that a stricter method was necessary. My first attempt at more precision was to use eye droppers and count the drops of each color; this plan was squashed upon noticing that the ink was too viscous to exit the droppers. My alternative approach used a scale to weigh quantities of the primary colors; below is a picture of the scale measuring one gram of blue ink.

Scale

Even a quantity of the secondary color too small for the scale to register went very far in changing the resultant color when mixed with several grams of the primary, and I did not want to devote untold grams of primary ink to each trial. I therefore measured the secondary ink in small depressions made with a chopstick. Shown below are typical quantities of the two inks; notice the three depressions of the black ink.

Ink Quantities

Pictures of the ink tests are shown below. The first picture shows the rough test performed with spoonfuls of blue ink and large dabs of black or white ink. All the character sets are labels, each written with one spoonful of blue ink: B is blue (no additive), 1W is one spoonful of blue ink mixed with one dab of white ink, 2W has two dabs of white ink, …, 4B has four dabs of black ink, and 5B has five dabs of black ink. Don’t be fooled by the apparent disappearance of the 4W and 5W in the photo: from the proper viewing angle, both are quite bright and clearly not the same color as the parchment.

The second picture validates this assertion; notice how bright the 4W and 5W are here. This picture contains additional tests. The W on the top right denotes plain white ink, clearly visible in comparison to all other colors. Looking closely at the middle of the page, one can spot the character sets 1g, 2g, 3g. These represent grams of blue ink mixed with exactly 10 depressions of black ink. By inspection, I judged the 3g to be the least noticeable, so I used the recipe of 3 grams of blue ink mixed with 10 chopstick depressions of black ink to write the hidden message.

The third picture shows the testing done with the black ink. This was less extensive for two reasons: one, the black ink more closely matched the black parchment to begin with; and two, by the time I began testing on the black parchment I had already performed extensive testing on the blue one, so I proceeded directly to using the scale. The code on this sheet is: B, black ink only; 1gB3R, 1 gram of black ink and 3 depressions of red ink; and 1gB4R, 1 gram of black ink and 4 depressions of red ink. To write the hidden message I chose 1gB3R as my recipe, as the extra red in 1gB4R made the ink more visible in ordinary light without any noticeable improvement under moonlight.

Blue Ink Test Start

Blue Ink Test Detailed

Black Ink Test

The secret message I wrote on both the blue and black parchments is the same one that appears on the Doors of Durin.  It is written in the language of the Elves.

Ennyn Durin Aran Moria.

Pedo mellon a minno.

Im Narvi hain echant.

Celebrimbor o Eregion teithant i thiw hin.

 

This translates into English as

 

The Doors of Durin, Lord of Moria.

Speak, friend, and enter.

I, Narvi, made them.

Celebrimbor of Hollin drew these signs.

 

Below are pictures of this message written in Elvish on both colored parchments.

Secret Message

To represent the Sun, I employed an LED flashlight with a luminous flux of 37 lumens. The Sun has a luminous flux of 3.6×10²⁸ lumens, on the order of 10 thousand yottalumens (Ylm). The distance from the Moon to the Sun ranges from about 147 million to 152 million kilometers, so taking a rough average of 150 million kilometers, the LED flashlight must be placed approximately 1.5417×10⁻¹⁶ meters from the model Moon for the ratio of luminous flux to distance to match that of the real system. This is quite fascinating but hardly an issue, as the light loss over reasonable distances of several meters is minimal.
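As a quick check of that arithmetic (note that matching illuminance, rather than the simple flux-to-distance ratio used here, would follow an inverse-square law and give a different distance):

#include <cstdio>

int main() {
    const double flux_sun = 3.6e28;  // lumens
    const double flux_led = 37.0;    // lumens
    const double d_sun    = 1.5e11;  // meters, rough average Moon-Sun distance

    // Equal flux-to-distance ratio, as described in the text.
    const double d_led = d_sun * (flux_led / flux_sun);
    std::printf("flashlight distance: %.4e m\n", d_led); // ~1.5417e-16 m
    return 0;
}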

When each parchment was turned at an angle with respect to the flashlight, the writing appeared invisible. Examine the three pictures below, for the blue and black parchments, to verify this.

Blue Black Invisible

Blue Sunlight

Black Sunlight

To produce moonlight, the flashlight was shined on the volcanic ash sphere. For the effect to be astounding, one should first place the parchment at a viewing angle at which the writing is invisible with the flashlight shining on it, and then merely move the Moon into the light path without shifting the parchment at all. The two pictures below were taken using this approach; the first is the blue parchment and the second is the black parchment, both under moonlight. Observe that since the Moon I constructed is small relative to the parchment, the message is not entirely visible; one must shift the paper toward the Moon to read the content on the right-hand side. Nevertheless, you can still pretend to be Gandalf.

Blue Moonlight

Black Moonlight

The challenge, of course, is to develop an ink that is invisible under sunlight at all viewing angles, not just over a restricted angular range, yet clearly visible under moonlight. I will endeavor to solve this problem for my final project.

Enchanted Object: Magic Mirror
https://courses.media.mit.edu/2016spring/mass65/2016/04/29/enchanted-object-magic-mirror/
Fri, 29 Apr 2016 22:55:38 +0000

For this week’s assignment, I decided to combine my previous knowledge from my Pepper’s ghost project with a new idea: creating the magic mirror from Shrek.

First, I searched online for quite a while for a rigged face. Strangely, it took many hours, and before I found one I had actually resorted to modeling a face from scratch in Maya. Eventually, however, I did find one on Turbosquid.com. It was a robot head, though, so I had to do quite a bit of modeling on an already-rigged head (which is a very dangerous thing to do).

Screen Shot 2016-04-29 at 6.53.52 PM

With a reference picture from a film segment I found on YouTube, I started modeling the face of the mask by deleting unnecessary parts of the robot head and moving each vertex of the mesh, one by one, as carefully as possible. After messing up once and decoupling the model from the rigging, I started over and was able to finish a nicely modeled mask.

Screen Shot 2016-04-25 at 12.39.52 AM

After this, I recorded video of myself acting out yes and no answers to questions. I decided to start with three different voice options for possible questions: yes, no, and maybe.

The next step after recording was converting the video into TIFF image sequences, which I exported from Premiere. I then imported the TIFF sequence into Maya to use as a background for animating the mask on top.

After everything was animated, the next step was rendering. I rendered all the answers with mental ray in Maya and exported them as video through FCheck. I then recombined the sound with the videos in Premiere by taking out the old footage of myself and replacing it with the new mask movies. All the videos were ready for playing. Now for the effect.

To create the magic mirror, I overlaid a half-silvered mirror on top of my laptop screen. Since only light from the brighter side of the mirror passes through at a time, this gives the effect of a mirror that also shows the face from behind the glass. I created a PowerPoint presentation that I could control with a hidden Bluetooth keyboard while watching my users. I would tell them to say “mirror, mirror” to make the mirror appear. All the videos were set to play automatically, so everything went generally smoothly as long as the user asked a yes-or-no question; after the user had asked, I would simply continue the presentation with an answer.

Below are the final rendered clips of the answers. More footage will soon be uploaded of a user actually using the program, but for now I’ve left the half-silvered mirror with Dan for safekeeping.

 

My final project will build off of this, so hopefully it will be even more exciting! I want a live user to be able to speak through the mask to answer questions. More on this to come later 🙂

Big Box of Bigness – How to Blow Something Up
https://courses.media.mit.edu/2016spring/mass65/2016/04/26/big-box-of-bigness-how-to-blow-something-up/
Tue, 26 Apr 2016 18:19:42 +0000

Expellenormous

So you want to make something big? Whether you’re going through a rough breakup and wish your chocolate bar were twice your size, or you have a savior complex and wish to end world hunger with a single fish and a lone loaf of bread, the ability to make anything grow in scale would be undeniably useful. The world of science would abound with antithetically fantastic voyages, GMOs would yield to DMOs (dimensionally modified organisms), and Frank Gehry would finally be out of a job. But how would we bring this wonderful reality about?

Back to Basics

What muggle technology makes things big? Microscopes!

What’s the difference between what a microscope does and what Ant-Man, er, Giant-Man does? The third dimension.

Luckily, a Strange man taught me some perceptual hoo-doo that allows me to essentially add dimensions together, compositing a few two-dimensional realities to synthesize the illusion of an obscured third.

Test Room #1 - Backend Interface

Unfortunately, the same man failed to teach me how to magically capture two-dimensional recordings, so for that part I am restricted to muggle technology.

Dimensional Synthesis Error – Universal Bus Controller (Code 43)

Magic is easy; technology is a bit harder. As it turns out, creating new dimensions is all about your hardware, specifically the capacity of your USB bus controllers. Now, obviously there are easier solutions for processing unholy amounts of video information, but Blackmagic involves delving into the dark arts and truckloads of cash.

(Updates to follow)

Microfilming Apparatus

Technology Development

 

Side Effects May Include: Pinching (Dismemberment), Dual Personality Syndrome, and/or Murder

For those of you looking for a once-a-day solution for natural enhancement, this is not the enchanted object you are looking for. Aside from being fundamentally different from the Engorgement Charm of the Potter-verse, which increases the size of something without increasing its quantity, this magic box has the unique quirk of copying and displacing the objects it enlarges. BYO suicide tank.

Suicide tanks that enabled the teleportation “illusion” featured in Christopher Nolan’s The Prestige

The idea is simple enough: a spell to make something bigger. Yet somewhere along the way it all went horribly sideways, and when you’re already so invested in breaking the laws of nature, it’s all too easy not to stop yourself when you finally realize you’ve gone. too. far.

#OtherSelfie

Applications


Magic Item
https://courses.media.mit.edu/2016spring/mass65/2016/04/25/magic-item/
Mon, 25 Apr 2016 22:25:55 +0000

I’ve made a Harry Potter Remembrall.
rememberaball

More documentation here

Deceptive Jail
https://courses.media.mit.edu/2016spring/mass65/2016/04/04/deceptive-jail/
Mon, 04 Apr 2016 04:10:00 +0000

Webcams give the illusion of transporting a viewer to a new location by displaying a site in real time, with high resolution and continuous motion. When examining a site in person, however, a viewer expects to smell the surrounding air; webcams by themselves do not provide this immersive experience. As a means both to tackle this challenge of heightening webcam sensation and to devise a system to fool criminals, I built a small jail cell incorporating both visual and olfactory perception. The intent is to constantly change a prisoner’s apparent location, leaving the prisoner disoriented and confused. Guards may move the prisoner from the cell to a motion simulator for hours at a time to trick the prisoner into believing that travel is occurring. Under fatigue, prisoners are more likely to cooperate and divulge the truth about the crimes they committed or the whereabouts and upcoming plans of their criminal partners. As a last measure, guards could use a webcam of the prisoner’s home, accompanied by appropriate smells, and say, “Tell us everything you know, and we’ll let you go home.” Having been tricked into believing the deception, the prisoner will be further broken down by the powerful emotions the presentation of the homeland evokes.

The jail was built out of nine jigsaw-shaped exercise mats that fit together nicely at the sides; a picture of the constructed cell is shown below. Two holes were cut in one of the mats. The larger hole was a 25 cm by 15 cm rectangle, sized for my computer screen, through which the webcam feed would be shown. To give the appearance of a jail, I cut three short bars from the mat material and spread them across this viewing window. The smaller hole was a thin 16 cm by 3 cm rectangle for the odors to travel through. Additionally, I positioned a small wooden dowel rod on the top mat for support and taped down some of the outside connected sections with silver duct tape to keep the mats from sliding apart.

Viewing

To funnel scents through the small hole, I heated water in a Vicks humidifier and placed several drops of fragrance in the humidifier’s basin, so that the rising steam carried the odor. I was fortunate to find a tall metal piece in the garbage, which I employed as a funnel. I connected a section of a pizza box to the bottom of this funnel with silver duct tape, applying tape across the entire section to shield it from the moist steam. A hole the size of the humidifier’s opening was cut in the pizza-box section for the steam to exit. The system was constructed in the room seen in the picture above, with measurements taken such that the humidifier rested on the remainder of the pizza box, propping the funnel up high enough to reach the small hole, as shown below.

Humidifier

For completeness, the entire system, both the jail cell on the left and the humidifier/funnel sub-system on the right, is pictured below. Notice also the laptop computer, its screen flipped around, slid against the viewing window to display the webcam.

Whole System

Two different sites were tested with appropriate fragrances; one was a forest scene and the other was an ocean scene.  The link for the webcam of the forest scene is:

http://www.bear.org/website/live-cameras/live-cameras/nabc-webcam.html

The link for the webcam of the ocean scene is:

http://www.mamasbeachcam.com/

For the forest scene, a Eucalyptus fragrance was placed in the humidifier’s basin.  For the ocean scene, the fragrance added was Waikiki Beach Coconut.  The picture below shows the two fragrances; Eucalyptus is on the left and Waikiki Beach Coconut is on the right.

Scents

The next two pictures show webcam images inside the jail of the two scenes; the first is the forest scene and the second is the ocean scene.

Forest scene

Ocean scene

Additionally, I took video recordings of the process of applying the fragrances and watching the webcams.  The forest webcam is fairly static, but while watching the ocean webcam, one can clearly see the waves moving continuously.  The first video below shows the forest scene and the second shows the ocean scene.

 

 

A large challenge while testing my system was the excessive moisture buildup from the humidifier, especially around my laptop. This issue was remedied by placing paper towels and aluminum foil under the laptop for protection; even so, the wall of the cell became thoroughly drenched after about ten minutes of operation. One possible improvement would be to concoct my own fragrances to make the smells as realistic as possible. For the ocean scent, for example, I might blend the juice from canned tuna with salt and sand. It would certainly prove quite torturous for a prisoner to suffer through this magical potion.

Racial Camouflage
https://courses.media.mit.edu/2016spring/mass65/2016/04/02/racial-camouflage/
Sun, 03 Apr 2016 03:58:09 +0000

For this week’s project, I did one small experiment and one larger one.

The Maxi-Pad Wallet

IMG_6971 IMG_6970

I found this idea online a long time ago and wanted to test it out. To create this wallet, I took an unused large maxi-pad, removed the pad, and kept the wrapper. I then added some paper inside to hold the money, and some tape under the flap to hold it together. The result was a wallet that held money tightly but was camouflaged to look exactly like a pad. I wanted to see if anyone would actually believe it. After handing it to some of my (especially male) friends, I found it extremely effective: my guy friends didn’t even want to touch it, while my girl friends didn’t seem to care, since it looked like a regular pad. The only flaw in this design is that I might accidentally throw it away!

 

Racial Profiling Camouflage

IMG_4817 IMG_4813 IMG_4818

This was my main experiment, since I personally find racial profiling to be a very big current problem. Since I am Japanese, Indian, Belgian, and Spanish, not many people can tell my ethnicity on sight. On top of that, depending on the season I wear a completely different set of makeup colors, so I already had all the supplies I needed to present as many skin tones as I could, just from my year-round stock of makeup! The goal of my camouflage was to mask my ethnicity, so that no one who did not know me would know my race.

I researched which facial features are most telling in determining a person’s race and found that the eyebrows, nose, eye shape, mouth, and facial structure all contribute to the appearance of a race. With this in mind, I experimented. I placed black along my eyebrows so that it was very difficult to see their shape, and black around my nose so that its shape and size would be hard to see from farther away. I also patterned the stripes in a swooping fashion to obscure my facial structure. There wasn’t much I could do for the mouth and the eyes, so I did my best by blacking out one eye, making the other as light as possible, and placing stripes down my mouth. Finally, I placed a robe around my head and neck, so it was impossible to see another skin tone or my hair; I suppose this could also be accomplished with a hat and a scarf, or a hoodie.

After showing this design to my friends, they all freaked out: they couldn’t even tell it was me! After I presented it in class, my professors suggested I upload it through my Facebook account to see whether it could tell it was me. Turns out it couldn’t! Facebook’s facial-recognition algorithms could not tell this was me, so I consider the pattern a success. Cons include lots of makeup and strange looks; pros include looking like a Star Wars character for a day!

Screen Shot 2016-04-02 at 11.39.36 PM

#antitag – Anti Facial Recognition Environment and the Many-faced God
https://courses.media.mit.edu/2016spring/mass65/2016/03/29/antitag-anti-facial-recognition-environment-and-the-many-faced-god/
Tue, 29 Mar 2016 18:12:37 +0000

Want to avoid being auto-tagged by Facebook, Google Photos, Flickr, and the like? Want to create a party environment for all your cohorts that ensures all attendees remain unrecognizable to the collective scrutiny of the bots? Whether you’re familiar with the Many-faced God or not, you can benefit from the dark magic that streams from its collective. Here’s how.

Safety Amongst the Herd

The approach of this project is derived from a classic hacktivist tool: the DDoS, or distributed denial of service, attack. Essentially, this tactic can shut down any service the way Star Wars fans shut down Fandango when tickets went on pre-sale: by overloading the servers dedicated to a service with requests, the service becomes effectively unavailable.

Typically this tactic is used to shut down web services of corporations that have been misbehaving or underestimating the power of the internet, but in this case we’ll be applying the concept to render Facebook’s auto-tagging feature effectively useless.

Making A Mask of Masks

To kick things off, let’s just start with a ton of faces. After running this through Facebook’s tagging system, I was surprised at how good a job it did.
Screen Shot 2016-03-28 at 7.44.43 PM

Even though most of the faces were clipped or obscured, the tagging system was able to identify 18 of 22 faces, meaning only 4 of 22 (about 18%) escaped recognition. The anti-establishment won’t settle for an 18% success rate.

To raise the un-recognition rate, we’ll add in some extra facial orifices. Maybe we only need to mess with each face a little bit to throw off the recognition algorithms.

Screen Shot 2016-03-28 at 7.45.11 PM

The tagging system still recognizes 17 of 22 faces, or 17 of 27 if you count based on all the eyes, noses, and mouths present. This raises our un-recognition rate to between 23% and 37%, for an average of 30%. Better, but nothing you want to trust your social life to. Let’s take things a step further.

For the next mask we’ll take the previous mask and overlay a rotated copy of the original image. The result is a nearly unrecognizable hot mess of facial features. Everyone becomes one, and one becomes no one. This pleases the Many-faced God, but what does Facebook’s recognition make of our abomination?

Screen Shot 2016-03-28 at 7.45.39 PM

One face detected, and he doesn’t seem too happy about it. Still, with 54 faces in this mask, that brings its un-recognition rate to 98%. Not perfect, but certainly ready for us to begin expanding the applications at a responsible rate.
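Facebook’s detector can’t be run locally, but a candidate mask can be screened with OpenCV’s stock Haar cascade as a rough first pass. This is a sketch, not what was used above: the Haar cascade is a much weaker detector than Facebook’s, so counts will differ, and the file paths are illustrative.

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::CascadeClassifier detector;
    if (!detector.load("haarcascade_frontalface_default.xml")) {
        std::cerr << "cascade file not found\n";
        return 1;
    }

    cv::Mat img = cv::imread("mask_of_masks.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) { std::cerr << "no image\n"; return 1; }
    cv::equalizeHist(img, img); // standard preprocessing for Haar cascades

    std::vector<cv::Rect> faces;
    detector.detectMultiScale(img, faces, 1.1 /*scale step*/, 3 /*min neighbors*/);
    std::cout << faces.size() << " faces detected\n";
    return 0;
}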

Human Trials

With no funding partners to sponsor testing subjects less conscious of their social media presence, I will have to sacrifice myself to the Many-faced God.

Just Walk Right In

The strength of this strategy over other facial-recognition obfuscation techniques is that it does not require individuals to do anything irregular: no face paint, no fancy clothes or accessories. Instead, the #antitag environment protects the identity of anyone inside it. To achieve this level of obscurity, the masks created earlier are projected all over a room, so that anyone within the room becomes covered in the obfuscatory facial features.

Mask of Masks 1.0

Print

With red boxes indicating the recognition of an incorrect face, and a green box indicating the recognition of the correct face, we can see that this first mask has a 78% success rate at preventing recognition. We can do better.

Mask of Masks 1.1

Print

Let’s skip ahead, just to get a sense of the results we can expect to achieve. This mask has a 100% success rate at preventing correct facial identification, and an 83% success rate at preventing incorrect facial detection.

Mask of Masks 1.2

Print

Adding a third layer of faces achieves total obfuscation. (Coincidentally, total obfuscation is the new Clinton campaign slogan.) No faces recognized. With 82 faces present in one frame, effectively there are none.
