Our team was interested in making art that is more dynamic: art that responds to the unique behaviors of the people viewing it at any moment (the number of people present, their proximity to the piece, where they’re looking within the frame, etc.). This creates new affordances between viewers and the art itself. Our hope is that we could eventually bring art – and its contents – to life.
We picked Andy Ryan’s photograph of Marvin Minsky’s home to illustrate some of these concepts:
We then explored ways to use computer vision to capture information about where people were looking and where they were positioned relative to the piece. This is a short demo of how we might capture and process viewer data, using facial recognition software and the front-facing camera on an iPhone (the circle follows the movement of Tamer’s face).
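For a sense of how that tracking could be wired up, here is a minimal Swift sketch using Apple’s Vision framework, which provides on-device face detection on iOS. The FaceTracker class and onFace callback are illustrative names for this sketch, not the exact code behind the demo.

```swift
import AVFoundation
import Vision
import CoreGraphics

// Minimal face-tracking sketch: front camera in, normalized face box out.
// Names here (FaceTracker, onFace) are illustrative only. A real app also
// needs an NSCameraUsageDescription entry in Info.plist.
final class FaceTracker: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()

    /// Called with the detected face's bounding box, normalized to 0...1.
    var onFace: ((CGRect) -> Void)?

    func start() {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .front),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "faces"))
        guard session.canAddOutput(output) else { return }
        session.addOutput(output)
        session.startRunning()
    }

    // Runs face detection on every frame delivered by the camera.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let request = VNDetectFaceRectanglesRequest { [weak self] request, _ in
            // Report the most prominent face, e.g. to drive the on-screen circle.
            guard let face = (request.results as? [VNFaceObservation])?.first else { return }
            self?.onFace?(face.boundingBox)
        }
        // .leftMirrored approximates the front camera's orientation in portrait.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                            orientation: .leftMirrored,
                                            options: [:])
        try? handler.perform([request])
    }
}
```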
Finally, we created a small-scale prototype of Ryan’s photograph with a layer of clear thermochromic paint, which revealed different parts of the image depending on where viewers were situated. This is a short demo of how this paint could let us change the colors dynamically and direct focus to different areas.
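To sketch the control logic, the snippet below shows one way a detected face position could select which region of the print to heat, assuming a grid of heating elements behind the image (warming a cell clears the thermochromic layer above it and reveals that region). HeaterGrid and zone(for:) are hypothetical names, not the prototype’s code.

```swift
import CoreGraphics

// Hypothetical mapping from a viewer's normalized position (0...1 in both
// axes, e.g. the face box center from the tracker above) to one cell of an
// R×C grid of heating elements behind the print.
struct HeaterGrid {
    let rows: Int
    let cols: Int

    /// Index of the heating element nearest the viewer's position.
    func zone(for position: CGPoint) -> Int {
        // Clamp so detections slightly outside the frame map to an edge cell.
        let x = min(max(position.x, 0), 1)
        let y = min(max(position.y, 0), 1)
        let col = min(Int(x * CGFloat(cols)), cols - 1)
        let row = min(Int(y * CGFloat(rows)), rows - 1)
        return row * cols + col
    }
}

// A viewer centered in front of a 3×4 grid activates the middle cell:
// HeaterGrid(rows: 3, cols: 4).zone(for: CGPoint(x: 0.5, y: 0.5))  // 6
```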
(The last audio sample in this clip is Minsky himself.)
Team: Tamer Deif, Shruti Dhariwal, Clara Lee, Jeremy Finch
Thanks to Prof. Hiroshi Ishii, Penny Webb, Dan Fitzgerald, and Udayan Umapathi