Ali Shtarbanov – Sensory Substitution/Augmentation Ideas

Overarching Theme:

Our perception of reality is limited by the sensory organs our bodies come equipped with, and by the bandwidth of each organ. Our eyes, for example, have receptors sensitive to electromagnetic waves between 400 nm and 700 nm, a tiny fraction of all the EM waves that hit our bodies every second, of most of which we are completely unaware. There are gigabytes of WiFi data and hundreds of cell phone conversations passing through you right now, and you are completely blind to them. In the case of sound, the bandwidth of our ears is between 20 Hz and 20 kHz, and we are oblivious to what happens outside the audible range. Similar bandwidth limitations apply to the other senses. We can build machines to pick up those signals, but we cannot perceive them directly because humans don't come equipped with the proper sensors. Thus our experience of reality is constrained by our biology: our brains are sampling just a little bit of the world.

But other animals have different experiences of reality. Snakes can see infrared, and honeybees include UV in their view of the world. For the blind and deaf tick, the important signals are temperature and odor. For the black ghost knifefish, the world is colored by electric fields. And for the bat, reality is primarily about ultrasonic waves. There is a word for this: umwelt, the slice of the world that a particular organism can pick up on. Each animal presumably assumes that its umwelt is everything there is, and humans are probably the only species that knows its umwelt is an insignificant fraction of what is out there. What would it be like if we could experience the world beyond the confines of our umwelt? Can we create new senses that let us experience more of reality?

Idea 1:

Before going into the topic of new senses for humans, let's consider the problem with our current senses and how we experience information. In today's digital world we are overutilizing the visual sense and, at the same time, underutilizing the haptic sense. When we enter a public space we see people staring at their phones, completely unaware of what is happening outside the confines of a six-inch screen. Can we change that? We could create a handheld device that fits in one's pocket or attaches to the wrist and delivers information from the cell phone in the form of haptic sensations. There are several modalities that could be used for haptic feedback, including change of angular momentum, change of shape, vibration, change of center of mass, and change of temperature. Into each of those modalities one can encode information such as time or notifications. But if we consider all of the modalities together, and all their possible combinations, we can imagine creating an entire haptic language. Imagine if we could 'read' a text message or an e-mail through touch, just by holding a device in our hand and having it provide tactile stimuli corresponding to the content of a message we just received. Although this may seem far-fetched, it is already happening. Blind people do just that when they read braille: they don't think about the little bumps as they move their fingers; the information contained in those bumps comes right off the page without conscious thought. Moreover, David Eagleman has demonstrated a device called the ExtrasensoryVEST, which encodes speech into vibrotactile stimulation on the user's back. After a few weeks of wearing the device, users are able to hear through the sense of touch, again without consciously thinking about the patterns of stimulation.
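As a concrete (and purely hypothetical) starting point for such a haptic language, one could reuse the braille cell as a vibration alphabet: six motors, one per braille dot, pulsed in sequence to spell out a message. The sketch below assumes a made-up hardware layer; pulse_motors is a stub that would be replaced by whatever driver the actual device exposes. It only illustrates the character-to-pattern mapping, not real hardware control.

    import time

    # Standard 6-dot braille cell, reused as a vibration pattern:
    # a 1 means "pulse this motor" during the character's time slot.
    BRAILLE = {
        "a": (1, 0, 0, 0, 0, 0),
        "b": (1, 1, 0, 0, 0, 0),
        "c": (1, 0, 0, 1, 0, 0),
        " ": (0, 0, 0, 0, 0, 0),  # silence between words
        # ...remaining letters would be filled in the same way
    }

    def pulse_motors(pattern, duration_s=0.3):
        """Stub for the hardware layer: pulse each motor whose bit is set."""
        active = [i for i, on in enumerate(pattern) if on]
        print(f"motors {active} on for {duration_s}s")  # replace with a real driver
        time.sleep(duration_s)

    def play_message(text, gap_s=0.15):
        """Render a text message as a sequence of vibrotactile 'letters'."""
        for ch in text.lower():
            pulse_motors(BRAILLE.get(ch, BRAILLE[" "]))  # unknown chars become pauses
            time.sleep(gap_s)  # inter-character gap keeps letters distinct

    play_message("cab")

As with braille itself, the expectation would be that after enough practice the wearer stops decoding patterns consciously and simply perceives the message.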

Idea 2:

Use the haptic feedback device described in the previous paragraph but simply change the input. Rather than having the input be information coming from a cell phone, have it be information in the environment around us that our biological sensors cannot pick up. One example is WiFi networks. What if we could create a sense that lets us walk through a building and EXPERIENCE the distribution of WiFi signals throughout it? For example, if I am in the Media Lab, I want to be able to take a walk and find out how the received power of a particular network varies as I move from place to place, without having to look at a screen, but rather experiencing it in a tactile manner. And the same applies if I am walking around campus.
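The core of this idea is a single mapping: received signal strength in, vibration intensity out. A minimal sketch of that loop follows, assuming a hypothetical read_rssi_dbm that queries the WiFi chipset (the real call is OS-specific) and a set_vibration stub standing in for a motor's PWM duty cycle.

    import time

    def read_rssi_dbm(ssid):
        """Hypothetical sensor read: received power in dBm for one network.
        A real version would query the OS's wireless API or the WiFi chipset."""
        return -60.0  # placeholder reading

    def rssi_to_intensity(rssi_dbm, floor=-90.0, ceil=-30.0):
        """Linearly map a dBm reading onto a 0..1 vibration intensity."""
        clamped = max(floor, min(ceil, rssi_dbm))
        return (clamped - floor) / (ceil - floor)

    def set_vibration(intensity):
        """Stub for the actuator: would set a motor's PWM duty cycle."""
        print(f"vibration intensity: {intensity:.2f}")

    for _ in range(10):  # in a wearable this loop would run continuously
        set_vibration(rssi_to_intensity(read_rssi_dbm("MediaLab-WiFi")))
        time.sleep(0.5)  # re-sample as the wearer walks around

The linear dBm-to-intensity mapping is only one choice; a perceptually tuned (e.g. logarithmic) mapping might feel more natural, which is exactly the kind of question such a prototype would let one explore.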

Idea 3:

“Emotions” for objects. A person can be modeled as a system whose inputs are senses and past experiences and whose outputs are emotions, actions, and behaviors, which in turn feed back into the system as a feedback loop. The same model applies to animals. But what about objects? Well, they are quite static. Can we make objects more emotional and dynamic in response to environmental inputs? What if I had an object that could show some kind of expression depending on the temperature or humidity of the environment in which it is placed? Or what if it could sense its proximity to its “friends” and behave in a way that indicated happiness, but express sadness when placed alone by itself? Or what if I had something that, when set on the floor, could sense the vibrations there, such as when a train is approaching, and start behaving in a certain way based on those vibrations? Or what if an object produced dynamic haptic feedback in response to music? Or if I put it on ice or in the fridge, could it show some emotional response?
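Whatever the physical form, every one of these objects needs the same middle layer: a mapping from sensor inputs to an emotional state to an expressive behavior. The sketch below shows one tick of that loop; all sensor values and behaviors are invented placeholders, and choose_emotion / express are hypothetical names, not an existing API.

    def choose_emotion(temp_c, friends_nearby, floor_vibration):
        """Map environmental inputs onto a simple emotional state."""
        if temp_c < 5:
            return "shivering"    # e.g. placed in the fridge or on ice
        if floor_vibration > 0.7:
            return "startled"     # strong floor vibrations, e.g. a train passing
        if friends_nearby > 0:
            return "happy"        # near its "friend" objects
        return "sad"              # alone, nothing happening

    def express(emotion):
        """Stub for the actuation layer: motion, light, sound, haptics."""
        behaviors = {
            "shivering": "rapid small vibration pulses",
            "startled": "sudden jerk, then stillness",
            "happy": "slow rhythmic pulsing",
            "sad": "dim light, slumped shape",
        }
        print(f"{emotion}: {behaviors[emotion]}")

    # One tick of the sense -> emotion -> expression loop, with made-up readings:
    express(choose_emotion(temp_c=3.0, friends_nearby=0, floor_vibration=0.1))

A richer version would keep internal state across ticks (mood that decays, memory of past encounters), so the object's behavior feeds back into itself the way the person-as-system model above describes.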


The idea of augmenting our senses is powerful, but how does this go beyond wearables with slightly different I/O than we see today? Our phones are powerful dev platforms, to be sure, already loaded with many different sensors, screens, and vibrators; if you can sit down today and write an app for it, it's probably not a fundamentally new interface or interaction. More importantly, why does it make sense to feel WiFi through touch rather than through sight, smell, or any other sense? What is the correspondence/methodology/justification in the sensory mapping? And, as an interaction, what input does the user give to the system in the first place? What is the feedback loop? Why?

The emotional objects idea has an interesting new aspect – objects with personality/agency – but how is this more than an extension of the emotive objects in OBMG?

-Dan