Monthly Archives: December 2013

I am Building E-14


…from cyberspace to space/people interactions (or: making the building’s brain)

This project questions the concept and meaning of CYBERSPACE in order to understand its implications for real space. In the first stage I explore the idea of a “Cartesian Space”: a space in which abstract human data is represented. Here I seek to develop a way to track people in reality and transfer them to cyberspace as the core of a building’s “brain”.

CYBERSPACE: “…A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind… clusters…” (Gibson, 1984)

 

What is human scale in relation to cyberspace? What would that mean?

The concept of Cyberspace was first developed by Gibson in the eighties. The idea was to introduce our minds into a place where we can connect with others and with data, which sounds pretty much like the internet. The idea ran up against the impossibility of transporting our bodies into Cyberspace for obvious technological reasons: computer interfaces were able to translate our thoughts but not our body data.

Today we have depth cameras that can “capture” our bodies. This kind of development may bring unexpected outcomes, such as “The Building’s Brain”.

In 2012, a research group at the MIT Media Lab (Responsive Environments) decided to explore the use of Kinects to gesturally control 25 screens around their building, while also recording the path of every person walking in front of the sensors. I started working with the data collected by this group three months ago and realized these are exceptional findings: this may be the only database of anonymized people tracked inside a building, perhaps in the entire world.

[Images: six top-view Kinect tracking visualizations from different locations in Building E-14]

The images above show six visualizations of six different locations inside Building E-14, seen from above. The Kinect sensing range is a triangle, so the tracking visualizations are triangles too. “Building E-14” may use this information to analyze and control what is happening inside of it. The depth cameras stream and capture every event happening inside, so it is now possible to retrieve information such as how many people are in a space and for how long, and to compare it against schedules, time of day, and protocols. The building’s brain has emerged.
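As a toy illustration (not the Responsive Environments code), here is a minimal sketch of the kind of query the building’s brain could answer from such data: how many distinct people were seen in a zone, and for how long. The record format (person id, zone, timestamp) is my own assumption about how the tracking data might be stored.

```python
from collections import defaultdict

# (person_id, zone, unix_timestamp) samples, e.g. one row per tracked frame
samples = [
    (1, "lobby", 1386000000), (1, "lobby", 1386000030),
    (2, "lobby", 1386000010), (2, "cafe",  1386000200),
]

def occupancy(samples):
    """For each zone: number of distinct people seen and each person's dwell time."""
    seen = defaultdict(dict)                         # zone -> {person: (first, last)}
    for person, zone, t in samples:
        first, last = seen[zone].get(person, (t, t))
        seen[zone][person] = (min(first, t), max(last, t))
    return {
        zone: {
            "people": len(visits),
            "dwell_seconds": {p: last - first for p, (first, last) in visits.items()},
        }
        for zone, visits in seen.items()
    }

print(occupancy(samples))
# With schedules and protocols attached, these counts could be compared against
# the expected occupancy for the time of day.
```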


Yet the idea is not an architectural surveillance state but an interactive system that augments the information available to the building’s occupants, qualifying its spaces so users can choose where to go according to their intentions. For example, Building E-14 may report that a space is too crowded, so a person seeking quiet can choose to go somewhere else, or it may warn people that the doors will close soon. The building’s brain may become a channel of interface between occupants and the architectural context.

For occupants, the way we understand space and socialize might change forever. For designers, this is a remarkable tool, since the interaction between the building and people’s motion can now be recorded and analyzed to assess new designs.

Lobsters: CYBERSPACE
“…Let me think about it,” says Manfred. He closes the dialogue window, opens his eyes again, and shakes his head. Some day he too is going to be a lobster, swimming around and waving his pincers in a cyberspace so confusingly elaborate that his uploaded identity is cryptozoic: a living fossil from the depths of geological time, when mass was dumb and space was unstructured.

Next steps

In 1970, Nicholas Negroponte (founder of the Media Lab) and his Architecture Machine Group built an experiment consisting of a controlled environment (a box) filled with small blocks and inhabited by a family of gerbils. The gerbils were kept under continuous observation in order to understand their behavior inside the environment, and a robotic arm rearranged the blocks according to the processed data, so the environment adapted endlessly to the gerbils’ behavior patterns. Granting that it was just an experiment and that it worked only sufficiently well, it tests the possibility of a cyclical, adaptive, responsive, changing environment.

Consequently, a possible aim of this research would be: if we can track, for example, how many people enter a room during a party, and observe that the number of attendees exceeds regulations, “The Building” may decide to expand that room.
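A toy version of that rule might look like the sketch below, with made-up occupancy numbers, a hypothetical per-room limit, and an (of course) imaginary “expand” action.

```python
ROOM_CAPACITY = {"party_room": 40}      # maximum occupancy allowed by regulations (made up)

def building_decision(room, current_occupancy):
    """Return what 'The Building' would decide to do about a room's occupancy."""
    limit = ROOM_CAPACITY.get(room)
    if limit is not None and current_occupancy > limit:
        return f"{room}: {current_occupancy} people exceeds the limit of {limit}; expand the room."
    return f"{room}: occupancy within limits."

print(building_decision("party_room", 55))
```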


 

 

Ice-9 Spyware: Vonnegut-Inspired Spy Tools

Ice-nine is a polymorph of water that melts at 45.8 °C, which appears in Kurt Vonnegut’s “Cat’s Cradle”. When it comes into contact with liquid water below 45.8 °C, it acts as a seed crystal that eventually turns all of the water into ice.

This is what was envisioned by renowned sci-fi satire author Kurt Vonnegut in his famous book Cat’s Cradle. In the book, Vonnegut imagined ice-nine as a doomsday plot device: a material that, in the wrong hands, could instantly freeze the entire planet over and kill all life.

Inspired by ice-nine, we began to wonder what we could do if we had a material that could transition from liquid to solid states on command.  Here the idea of futuristic spy tools was born…

[Diagram: the ice-nine casting process]

We see this fictional material as more than a killing tool; we see it as a future of fabrication and manufacturing. For example, we could use this state-changing property to instantly make hand tools out of the liquid. What if the liquid could transform into a functional shape (a weapon, a tool, anything), and, more importantly, at the moment we need it? This could be revolutionary for personal fabrication, and it also makes for good science fiction technology.

The process is illustrated above: first, we envision a cup designed with a mold of an object (a tool) hidden discreetly inside. The cup can be filled with the “ice-nine” liquid so it appears that the person is merely enjoying a soft drink, tea, or coffee. When ready, the user agitates the solution or drops a seed crystal into the cup, causing the material to change to the solid state instantly. Once the solution hardens, the user can pull out and crack open the mold to reveal a cast of a ready-to-use object.

Drawing from the influence of spy novels and James Bond movies, we envision a spy who needs a tool to open a secret cabinet inside an embassy, or an assassin who has to sneak a weapon past metal detectors.

After doing some research, we had a good candidate for a material with these properties: sodium acetate. This food-grade compound has a property that was very interesting to us: at room temperature it behaves as a supercooled liquid. That is, at room temperature the compound would prefer to be a solid, but in its pure form it will not crystallize unless a seed crystal is introduced. This was exactly what we were looking for!

For a proof of concept, we designed a tool mold and a weapon mold in Rhinoceros; one integrates with a coffee cup, the other with a tea tumbler. Here’s a 3D model sketch of the knife design:

We fabricated the designs using a 3D printer, and here are the results of what we made (a knife and a wrench):

 

Finally, we had a chance for some experimentation:

We envision that, if we can have robust control over the crystallization and supercooling, a liquid with this state-shifting property could enable a new wave of personal product manufacturing. The process requires little external energy and can produce final shapes very quickly. As a practical application, we could carry the liquid and molds for different hand tools, turn the liquid into the tool we need whenever there is an immediate need, and turn it back into liquid after use: a general-purpose liquid for creating and recycling tools, like the omni-gel seen in the video game Mass Effect.

Ermal Dreshaj and Sang-won Leigh

Tomorrow’s Yesterday, Today


Over the course of the semester I’ve been iterating on the original idea for AgNES. Originally it was meant to be an implementation of Portal’s snarky and evil artificial intelligence, GLaDOS, following a roughly similar design and trying to mimic its functionality. The first demo was an attempt at using the Mac’s built-in text-to-speech synthesizer to “sing” the Portal ending song, “Still Alive”. The second iteration took a more tangible approach: with the help of Travis Rich from the Viral Systems group at the Media Lab, we put together a physical manifestation for AgNES, a small robotic head capable of tracking a user’s movement when tagged with bright colours. The head (a small cardboard box holding an Arduino board and a webcam, mounted on a small servo) was controlled by an attached computer that processed the image from the camera, looked for the desired colour within a threshold in the frame, and commanded movement accordingly.

The third iteration, just a couple of weeks ago, revisited the software component and started going in a different, more sci-fi direction: AgNES became a sort of companion robot for long, solitary travel (essentially deep-space exploration) that could provide a grounding, helpful voice to an imaginary traveller. By giving the traveller stories, facts, and various other voice-mediated interactions, AgNES’s companionship can keep the traveller grounded and relatively sane over long journeys. But AgNES’s personality also became something the user could interface with: based on the five-factor model for the description of personality, AgNES’s personality is made up of five independent cores that can be individually turned on or off. How the cores are configured affects the output the user gets from the various commands, and one can experiment with different configurations to get different results.

By playing around with AgNES’s personality, one is also playing with the conditions necessary for its optimal functioning. As results vary, some cracks in its design are revealed: in the confusion, AgNES begins to unintentionally give out clues as to the identities of its designers and its operators. The user can then follow these clues to learn more about this design and better understand AgNES’s purpose and intentions. This is grounded in yet another underlying science fiction narrative: how future individuals will react to and interact with technologies from the future past.

Future Archaeology and Deep-Space Exploration

Even just today, we’ve already accrued a significant technological past that is hard to access and explore. Floppy disks are a good example: if you stumbled upon a box of old floppies from years ago and wanted to browse for meaningful things within them, getting to that data would be really complicated. If the disks are functional and you can find a drive to read them, there’s still the matter of whether the data is uncorrupted and whether you can still get the software to read it. Not impossible, of course, but as time goes on, increasingly complicated.

Future researchers, probably deprived of access to instruction manuals and other reference materials that help us situate past technologies, will contemplate our present technological world trying to make sense of it just as we look back on archaeological remains and try to make well-informed conjectures about what objects were used for or why they were designed one way over another. While we make an effort to design technologies that are intuitive to use, this intuitiveness is anchored at specific moments in space and time. Thousands of years from now, when behavioural patterns become very different, it is plausible to assume that many of the design conventions in use today will no longer have the same effect. Future archaeologists then face a complicated task of reconstruction.

That is the narrative framing where AgNES comes in. AgNES is designed from the point of view of being this deep-space exploration companion; but as a narrative device, it is also about thinking what would happen if thousands of years from now, future researchers came across this device built only hundreds of years from now. What sense would they make of it? How would they understand it, after stumbling upon it floating through space in a derelict ship, perhaps still powered but no longer in the company of any travellers? In the first of the many Star Trek movies, the crew of the Enterprise stumbles upon a massive entity threatening Earth called VGER, which upon closer inspection turns out to be the Voyager 6 probe, found by an alien species who augmented its design to enable it to fulfil its mission to “collect knowledge and bring it back to its creator”, creating a sentient entity on its way back to Earth in an unrecognisable form. These technologies we’re unleashing on the universe may at some point cycle back and be found again, and it’ll be a challenge to interpret them and make sense of their original context.

AgNES and Meta-AgNES

AgNES works on two levels. As a “present day” object, it is a pseudo-artificial intelligence that provides company and grounding during deep-space travel, with a customisable personality the user can modify. AgNES’s commands are limited, but they provide different forms of entertainment to keep a user distracted over what would presumably be very long sessions of just floating through space. The design of AgNES draws from multiple science fiction sources: the already mentioned Portal was the chief one throughout, but other sources such as Arthur C. Clarke’s 2001 and its own evil AI, HAL 9000, also had a big influence, as did many of the themes we discussed in class related to artificial intelligence and robotics (including Neuromancer by William Gibson and Do Androids Dream of Electric Sheep? by Philip K. Dick). As a companion providing information, there’s also certainly some influence from the Illustrated Primer technology found in Neal Stephenson’s The Diamond Age.

As a “future day” object, the design of AgNES is populated with a series of clues that only become evident when the user begins playing around with the personality configuration. Deactivating certain personality cores opens up areas that would otherwise be forbidden to an “unauthorised” user, where they may find information that can later be explored in more detail using additional commands. The “future day” user can then put together these pieces of the puzzle to come up with their own conjecture as to what AgNES is, where it came from, who it was with, and how it came to be where it is. From this point of view, AgNES plays more like a game where you’re trying to decipher what’s going on with this object by interacting with it, drawing inspiration primarily from the text and point-and-click adventure games especially popular in the early 1990s. The big caveat, however, is that there’s no real resolution to the game: there’s no “win state” as such, and there’s no single correct answer; you can only get as far as the conjecture you draw from the information you received as to who was involved and what happened. Just like future researchers, you can never be fully certain whether your conjecture was actually the case.

Pay No Attention To The Man Behind The Curtain

Training AgNES to track bright colours.

Technically speaking, AgNES is not incredibly complex. There are two pieces running simultaneously. One is the code for AgNES itself, written in Python, managing all the commands, the UI, and the personality cores. The cores themselves are five USB sticks and a hub, used together as a switch: the code detects which cores are plugged in at any given time and changes behaviour accordingly. The first versions of AgNES used the Cmd Python module for a simple command-line interface, while the final iteration uses Tkinter to provide a simple GUI that is less prone to error and displays information more clearly. AgNES’s commands are highly dynamic and often pull randomised content from various sources around the web: for instance, under normal operation, the TIL (“Today I Learnt”) command pulls a random article from Wikipedia and reads the summary out to the user. If the Curiosity core is turned off (meaning a reduced openness-to-experience factor, signalled by a more limited use of language), the command does the same but pulls from the Simple English version of Wikipedia. If, instead, the Empathy core is turned off, the system pulls a generated text from the PoMo (Post Modernism) generator and reads that out: without empathy, AgNES loses any regard for whether the user actually understands the information. And so on. Not all core combinations are meaningful, but those that are pull and parse content from the web using the BeautifulSoup web-scraping library.
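To make the core-switch and TIL ideas concrete, here is a minimal sketch in the same spirit (it is not the actual AgNES code, which lives on GitHub). The core names and the assumption that plugged-in sticks show up as volumes under /Volumes are mine.

```python
# Minimal sketch of the USB-stick "core switch" plus the TIL command.
import os
import requests
from bs4 import BeautifulSoup

CORE_NAMES = {"CURIOSITY", "EMPATHY", "HUMOUR", "DISCIPLINE", "CALM"}  # hypothetical labels

def plugged_in_cores(volumes_dir="/Volumes"):
    """Return the set of personality cores currently plugged in (by USB volume name)."""
    try:
        present = {name.upper() for name in os.listdir(volumes_dir)}
    except FileNotFoundError:
        present = set()
    return CORE_NAMES & present

def til(cores):
    """'Today I Learnt': fetch a random Wikipedia article and return its title + first paragraph."""
    base = ("https://simple.wikipedia.org" if "CURIOSITY" not in cores   # core off -> simpler language
            else "https://en.wikipedia.org")
    page = requests.get(base + "/wiki/Special:Random")
    soup = BeautifulSoup(page.text, "html.parser")
    title = soup.find("h1").get_text()
    first_paragraph = next(
        (p.get_text(" ", strip=True) for p in soup.select("p") if p.get_text(strip=True)), "")
    return f"{title}. {first_paragraph}"

if __name__ == "__main__":
    print(til(plugged_in_cores()))
```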

User testing AgNES's tracking of bright colours.

The other piece is AgNES’s head, described above. The setup remains the same, with the box containing an Arduino board and a webcam, all of it mounted on a small servo. The image handling is done with Processing: it goes over a frame of the image, finds the desired colour (Post-It pink, so it can be as unambiguous as possible) and makes sure it stays within a centre threshold; if it falls outside of that, it signals the servo to move left or right until it readjusts.
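The real implementation uses Processing, but the same tracking loop can be sketched roughly in Python with OpenCV. The HSV bounds for “Post-It pink”, the serial port, and the single-letter servo commands below are all assumptions for illustration.

```python
# Rough Python/OpenCV equivalent of the colour-tracking loop described above.
import cv2
import numpy as np
import serial

arduino = serial.Serial("/dev/tty.usbmodem1411", 9600)   # hypothetical port
cap = cv2.VideoCapture(0)

LOWER_PINK = np.array([150, 80, 120])    # rough HSV bounds; tune for your lighting
UPPER_PINK = np.array([175, 255, 255])
CENTER_MARGIN = 0.15                     # fraction of frame width treated as "centred"

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_PINK, UPPER_PINK)
    moments = cv2.moments(mask)
    if moments["m00"] > 0:                       # some pink was found
        cx = moments["m10"] / moments["m00"]     # x-centroid of the pink blob
        offset = cx / frame.shape[1] - 0.5       # -0.5 (far left) .. +0.5 (far right)
        if offset < -CENTER_MARGIN:
            arduino.write(b"L")                  # nudge servo left until re-centred
        elif offset > CENTER_MARGIN:
            arduino.write(b"R")                  # nudge servo right
    cv2.imshow("tracking", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```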

AgNES wearing its tin foil space helmet.

For its final presentation, both pieces were running off a Mac Mini concealed within a stand, “decorated” to appear as if it were a 1960s sci-fi B-movie prop (meaning lots of aluminum foil). Using an app called Air Display, the Mini used an iPad as an external display with the AgNES GUI running, which made it touch-enabled very easily. (I really wanted to get everything running off a Raspberry Pi, but it proved to be too much for this iteration. The code for AgNES itself runs OK, but the computer vision and the text-to-speech would have been more complicated to pull off, though not impossible: it just needed more time!)

The final prototype setup in all its tin foil glory.

Building AgNES has been great, and especially interesting to think through its implications and the underlying concepts and issues at stake. All of the code for the project is available on GitHub if you want to try it out (though replicating the specific setup might be a bit complicated). Special thanks to everyone who contributed feedback, ideas, and testing, and any comments to improve it are more than welcome!

Prosthetics in sci-fi

A familiar plot point in sci-fi movies is the introduction of prosthetics: artificial, organic-looking limbs that give the main character a presence beyond that of a regular human being.

Luke's prosthetic hand

Image Source: LucasFilm Ltd.

This concept gets taken even further in Warren Ellis’ Transmetropolitan, which takes place about 2,000 years in the future. Humankind has evolved in many ways and technology has grown close to our bodies. The desire and means to augment experiences and capabilities are so elevated in Transmetropolitan that humans can be modified with alien DNA and can even upload their consciousnesses into computers.


Image Source: DC comics, Warren Ellis

Inspired by this paradigm and by the development of biotechnology, we can now see more and more technology incorporated into our daily lives, e.g. Google Glass, tattoo microphones, and wearable tech. Through my research, I began to wonder: can we download our muscle memory? If we can, what has been done out in the world that could inform the process of doing so?

A bio-signal is a general term for all kinds of signals that can be (continually) measured and monitored from biological beings. The term bio-signal is often used to mean bio-electrical signal but in fact, bio-signal refers to both electrical and non-electrical signals. There are many different kinds of bio-signals, but the one that seems to be the most promising and has physiologists the most interested is electromyography.

 

EMG signals are detected over the skin surface and are generated by the electrical activity of the muscle fibers during contraction. Since each movement corresponds to a specific pattern of activation across several muscles, multi-channel EMG recordings, made by placing electrodes on the involved muscles, can be used to identify the movement. This concept has been applied in the development of myoelectric prostheses. A group at the University of Washington has been working with this technology as a means of human-computer interaction.
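The idea behind movement identification can be sketched in a few lines: each gesture produces a characteristic pattern of activity across channels, so a simple per-channel feature (such as RMS) compared against stored templates can tell gestures apart. The templates, channel counts, and synthetic signals below are made up purely to illustrate the principle.

```python
# Toy sketch of multi-channel EMG movement identification (illustrative only).
import numpy as np

def rms_features(window):
    """window: (n_samples, n_channels) EMG segment -> per-channel RMS feature vector."""
    return np.sqrt(np.mean(window ** 2, axis=0))

# Hypothetical per-channel RMS templates for three gestures (4 electrode channels).
TEMPLATES = {
    "rest":  np.array([0.02, 0.02, 0.03, 0.02]),
    "fist":  np.array([0.40, 0.35, 0.10, 0.05]),
    "point": np.array([0.10, 0.08, 0.30, 0.25]),
}

def classify(window):
    """Nearest-template classification of an EMG window."""
    feats = rms_features(window)
    return min(TEMPLATES, key=lambda name: np.linalg.norm(feats - TEMPLATES[name]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_fist = TEMPLATES["fist"] * rng.normal(1.0, 0.1, size=(200, 4))  # synthetic "fist" signal
    print(classify(fake_fist))   # expected: "fist"
```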

Now for the fabrication part of this class (after all, it is called “scifi2scifab”). Recently an article caught my attention: “researchers at the University of Sheffield, Fripp’s company, have developed a process that can print a customized nose or ear within 48 hours. First, the patient’s face is 3D-scanned, then the specific contours are added to a digital model of the new prosthetic part for a perfect fit.” (3ders.org)

Photograph: Fripp Design

As a prototype, I presented a 3D-printed arm (from Thingiverse) that can be controlled by the EMG inputs generated by someone’s muscles. The majority of the arm components are 3D-printed on a MakerBot Replicator and then connected using 7 motors and strings that operate as tendons, influenced by the work of puppeteers. As you can see in the video (follow the link) below, the arm tries to replicate the programmed basic gestures.


See it reacting here! | EMGarm.mov
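For context, a stripped-down control loop for this kind of prototype might look like the sketch below. It is not the project’s actual code: the real arm uses 7 motors, while this toy version drives a handful of hobby servos through standard Firmata, with an assumed serial port, pin assignments, and EMG threshold.

```python
# A minimal sketch: map a single EMG reading to an open/closed gesture on a printed hand.
import time
from pyfirmata import Arduino, util

board = Arduino("/dev/tty.usbmodem1411")        # hypothetical serial port
it = util.Iterator(board)
it.start()

emg = board.get_pin("a:0:i")                    # analog input from the EMG sensor
fingers = [board.get_pin(f"d:{pin}:s") for pin in (3, 5, 6, 9, 10)]  # finger servos (assumed pins)

OPEN_ANGLE, CLOSED_ANGLE = 10, 160
THRESHOLD = 0.5                                 # normalized 0..1 reading; tune per user

while True:
    reading = emg.read()                        # None until the first sample arrives
    if reading is not None:
        angle = CLOSED_ANGLE if reading > THRESHOLD else OPEN_ANGLE
        for servo in fingers:                   # curl or relax all fingers together
            servo.write(angle)
    time.sleep(0.05)
```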

LuvLuv: An Experiment in Modern Dating

This book was not on our class reading list, but it was released halfway through the semester and we were inspired by its near-future musings on the state of social media.

The Circle, by Dave Eggers, follows 24-year-old Mae Holland as she starts her new job at The Circle, a mix of Facebook, Google, Twitter, and other social media and advertising companies. The Circle campus is inspired by modern technology company campuses, with buildings named after historical periods like the Renaissance and the Enlightenment, and employees are encouraged to have all their social activities on campus.

The Circle’s main product is TruYou, a unified operating system that links users’ personal email, social media, banking, and purchasing, resulting in one online identity. According to the company’s public rhetoric, this kind of transparency will usher in a new age of civility.

One of the main principles guiding the Circle is that ‘ALL THAT HAPPENS MUST BE KNOWN’, and for that reason they never delete anything. They implement a CCTV system that covers both private and public spaces, as well as full “transparency” systems in which people wear a streaming camera, on the principle that “SECRETS ARE LIES, SHARING IS CARING, PRIVACY IS THEFT.”

One of the technologies in the book, LuvLuv, captured our attention because it seemed to be something that could easily be implemented today. LuvLuv is a dating application that scrapes all of the known data about an individual in order to provide the searcher with information to help them plan good dates and win over the object of their affection. For example, LuvLuv could advise you of where to take your date to dinner based on their history of allergies, or suggest conversation topics that they would be interested in. This also reminded us of a great short film called Sight, which combines these kinds of dating suggestions with an augmented reality display and gamification elements.

Our incarnation of LuvLuv was an interactive website where we used search results of the online activity of one of our classmates to construct a profile from which someone looking to impress him could find out everything they needed to know. Part of this project was to see how he and the class reacted to this information. Although our (awesome) classmate consented to taking part in some sort of experiment, he did not know the specifics of our project. He was surprised to see how much could be learned about him based only on information he had willingly put up online. We may have gotten him to consider changing his privacy settings! Of course, in the world of The Circle, there are no privacy settings…
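As a toy illustration of the kind of aggregation LuvLuv implies (not our actual site), a sketch might scrape a few public pages, count recurring interest keywords, and turn the top ones into conversation-topic suggestions. The URLs and keyword list here are hypothetical placeholders, not real profile data.

```python
from collections import Counter
import requests
from bs4 import BeautifulSoup

PUBLIC_PAGES = [
    "https://example.com/blog",          # placeholder URLs
    "https://example.com/project-page",
]
INTERESTS = ["cycling", "jazz", "sushi", "hiking", "sci-fi", "coffee"]

def interest_counts(urls):
    """Count how often each interest keyword appears across the scraped pages."""
    counts = Counter()
    for url in urls:
        text = BeautifulSoup(requests.get(url).text, "html.parser").get_text().lower()
        for word in INTERESTS:
            counts[word] += text.count(word)
    return counts

def suggest_topics(counts, n=3):
    """Return the n most frequently mentioned interests as date conversation topics."""
    return [word for word, c in counts.most_common(n) if c > 0]

if __name__ == "__main__":
    print("Suggested conversation topics:", suggest_topics(interest_counts(PUBLIC_PAGES)))
```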

Below are screenshots of LuvLuv in action:

[Screenshots: LuvLuv splash page and search results]

 

by Alexis Hope & Julie Legault

Sensory Fiction

Sensory fiction is about new ways of experiencing and creating stories.

Traditionally, fiction creates and induces emotions and empathy through words and images.  By using a combination of networked sensors and actuators, the Sensory Fiction author is provided with new means of conveying plot, mood, and emotion while still allowing space for the reader’s imagination. These tools can be wielded to create an immersive storytelling experience tailored to the reader.

To explore this idea, we created a connected book and wearable. The ‘augmented’ book portrays the scenery and sets the mood, and the wearable allows the reader to experience the protagonist’s physiological emotions.

The book cover animates to reflect the book’s changing atmosphere, while certain passages trigger vibration patterns.


Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable, whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localized temperature fluctuations.


Our prototype story, ‘The Girl Who Was Plugged In’ by James Tiptree Jr., showcases an incredible range of settings and emotions. The protagonist experiences both deep love and ultimate despair, the freedom of Barcelona sunshine and the captivity of a dark, damp cellar.

The book and wearable support the following outputs (a minimal control sketch follows the list):

  • Light (the book cover has 150 programmable LEDs to create ambient light based on changing setting and mood)
  • Sound
  • Personal heating device to change skin temperature (through a Peltier junction secured at the collarbone)
  • Vibration to influence heart rate
  • Compression system (to convey tightness or loosening through pressurized airbags)
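One way to think about how annotated passages could drive these outputs is the sketch below. It is an assumption, not the actual Sensory Fiction firmware: the serial port, the single-letter command protocol, and the story annotations are all made up for illustration.

```python
# Toy controller: each passage carries target values for the wearable's actuators,
# which are sent to a microcontroller as simple serial commands.
import time
import serial

wearable = serial.Serial("/dev/tty.usbmodem1421", 115200)   # hypothetical port

# (passage text, heartbeat bpm, Peltier temperature delta in C, airbag pressure 0-100)
STORY = [
    ("She stepped into the Barcelona sunshine.",  70,  2, 10),
    ("The cellar door slammed shut behind her.", 110, -3, 80),
]

def set_actuators(bpm, temp_delta, pressure):
    wearable.write(f"H{bpm}\n".encode())          # 'H' = heartbeat vibration rate
    wearable.write(f"T{temp_delta}\n".encode())   # 'T' = Peltier offset at the collarbone
    wearable.write(f"P{pressure}\n".encode())     # 'P' = compression airbag level

for passage, bpm, temp, pressure in STORY:
    print(passage)
    set_actuators(bpm, temp, pressure)
    time.sleep(5)   # crude pacing; a real reader would track page turns instead
```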

View more photos of Sensory Fiction on Flickr

– Felix Heibeck, Alexis Hope, Julie Legault

W – Microwave of the Future

What is “W”?

“W” is a science fiction design concept of a high-end microwave that comes from the not-too-distant future.

What does the microwave of the future look like?

The W uses a revolutionary user interface to tell the hungry user everything he or she would like to know about a food item placed in the microwave. No more buttons or dials: W gets rid of these nuisances and automatically calculates the optimal time required to heat or cook food to perfection!

What else?

The W can access the internet to give the user recipes and calorie information about a food item, video cooking guides and more!

Safety is our top concern, so we’ve designed W to identify materials that are not microwave-safe. If the item placed in the microwave is unsafe, the device will refuse to cook until the object is removed.
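The decision logic behind this behaviour can be sketched as a simple lookup after recognition. The real prototype uses the Vuforia library for object recognition and an openFrameworks UI; the Python stub below, with its made-up item names, times, and `recognize()` stand-in, only illustrates the lookup and safety check.

```python
# Hypothetical lookup table: recognized item -> microwave safety and cook time.
FOOD_DB = {
    "coffee mug":     {"safe": True,  "seconds": 45},
    "frozen burrito": {"safe": True,  "seconds": 150},
    "takeout box":    {"safe": False, "seconds": 0},   # metal handle: not microwave-safe
}

def recognize(frame):
    """Stand-in for the object-recognition step; returns a label for the item seen."""
    return "frozen burrito"

def plan_cook(frame):
    item = recognize(frame)
    info = FOOD_DB.get(item)
    if info is None:
        return f"Unknown item '{item}': please check packaging instructions."
    if not info["safe"]:
        return f"'{item}' is not microwave-safe. Remove it to continue."
    return f"Heating '{item}' for {info['seconds']} seconds."

print(plan_cook(frame=None))
```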

Technical info:

  • Designed in Rhino 3D
  • Wood material
  • iPad Mini for UI overlay
  • Vuforia object-recognition library from Qualcomm
  • openFrameworks user interface

First prototypes

[Photos: first W prototypes]

Design and prototype by

Ermal Dreshaj

Sang-won Leigh