tinder

tinder is a cognitive prosthetic for long-term ideation:

  • capturing ideas, instantly
  • marinating ideas, perpetually
  • igniting ideas, magically

Capture – available, simple accessories

The capturing interface for tinder is deceptively simple and is an attempt to remove friction from long-term idea generation. Often, we are reluctant to capture new ideas as they appear in our heads because our “inboxes” are clumsy, unavailable, or too complex. When we write down ideas in sketchbooks or on the backs of napkins, they often end up in a jumbled pile of notes, or filed away forever in a dusty file cabinet, or simply discarded after their usefulness has expired. Digital tools exacerbate the problem because they make capturing ideas as easy as bookmarking a web page; we suddenly have an overwhelming backlog of digital detritus, all of it at one point representing interesting ideas but now stifling our creativity. Indeed, by the time we pull out our cell phones to tap out an idea, the initial spark of joy has already left the thought.

tinder provides a very simple capturing interface (i.e. the tinder, itself) – cheap, disposable scraps of magic paper that are always available on our desks or in our pockets; an ordinary pen or pencil is more than adequate to mark these pieces of tinder. Our minds are calmed because capturing an idea costs us almost nothing.

Marinate – processing, powerful tinderbox

It is good to have holding places for ideas. By putting our thoughts in external holding bins, the ideas have a chance to marinate over the long term. Indeed, this is the heart of long-term creative sparks. The longer ideas remain in a holding pattern, the more chance they have to connect in new and beautiful patterns. We grow in wisdom and can apply new contexts and perspectives to these ideas. In fact, in early childhood development, the ability to make semantic connections between ideas is critical to normal cognitive growth [Beckage, Smith & Hills, 2011].

The tinderbox is a computationally powerful device disguised as an elegant wooden box. The box is wirelessly connected to cloud-based APIs that provide access to handwriting recognition, machine learning, and semantic network modeling algorithms. It detects and processes the content of every idea placed inside; the system organizes, optimizes, and reshuffles ideas in the semantic map based on everything from what is in current news headlines to the mood of the user. We feel a quiet confidence when we can find our ideas marinating as long as necessary in a brine of computational augmentation.

Ignite – revealing, magic tinder

When we are ready to address our long-term ideas, we simply remove the pieces of tinder from the tinderbox and begin to play. We notice that one of the pieces is glowing. We don’t need to know why, but tinder wants us to notice this spark. (In reality, the system’s algorithms have predicted that this piece of tinder contains a particularly relevant idea).

The heart of tinder is a new material called papyro. This paper-like material has sensor and computational matrices woven among its pulp fibers. Each piece of tinder, made of papyro, has a unique signature and fine-grained proprioception (a sense of position and motion), so it can compute its location relative to other pieces of tinder. Because papyro contains smart ink and luminescent tint, the luminescence spreads across the tinder as we spread and shuffle the scraps on our desk, constantly shifting as we reposition and decontextualize these ideas.

Finally, when we have a pile of tinder glowing brightly, we can “ignite” the stack by drawing a matchstick and making a striking gesture. Something amazing happens when a pile of tinder ignites. The nanobots that make up the papyro start to self-assemble into one larger piece of tinder… a bigger idea that is the synthesis of the others. papyro is paper-like, so we can simply tear off pieces of the idea we don’t like or fold the paper to see multiple sides of an argument. If we want to add another idea, we can push one piece of tinder into the other and the papyro will melt together. If this new idea goes back into the tinderbox, the semantic network model will update as well.

Presentation PDF: tinder_davidnunez

Digital Clay

My Ideation Idea is to create a series of discrete units that are able to sense their position in relation to the ones around them and then communicate this data back to a CAD program on the computer.

PDF

Smart magnetic interface

1st Project_Helena Hayoun Won

Smart magnetic interface is a new and important means of understanding relationships with others, their communities, their rules, their habits, and their references to the world. Designing radically new interface experiences for the creation of digital media can lead to unexpected discoveries and shifts in perspective.

I aim to motivate users to tell their stories through new data filtering and to drive reflection on the outcomes of their behaviors. The key to binding the physical act of collection to the digital opportunity of representation is metadata customization and filtering. These mechanisms for gathering and filtering a visual, gestural, tactile, and verbal story exist as alternatives to the usual fragmented tools.

When using this interface, users play with how an object would collect data relative to the viewpoint of the acting spot. They iterate between trying and collecting in a world of multiple perspectives. The results are entirely new genres of user-created works, in which users finally capture the cherished visual idioms of action.

S-Ray

Slides

HW1-SRay-Final

Introduction

Shape exploration is a process that relies on different kinds of tools and media at different stages. Designers use their brains for imagining, adopt pen and paper for 2D sketching, and manipulate clay for 3D modeling. The time cost of ideation with each medium increases as designers move from abstract to concrete representations. We believe that reducing the cost of transitioning from one medium to another could improve both the quantity and quality of ideation. We therefore propose the Sublimation Ray (S-Ray), which transmits and receives bidirectional signals to and from Radical Atoms to change their phase; Radical Atoms are a new kind of computational material proposed by Prof. Hiroshi Ishii of the Tangible Media Group at the MIT Media Lab. With the S-Ray, designers can change a Radical Atom from gas to liquid to solid on demand, a mechanism we believe will speed up and enrich the ideation process.

Reference

Hiroshi Ishii, Dávid Lakatos, Leonardo Bonanni, and Jean-Baptiste Labrune. 2012. Radical atoms: beyond tangible bits, toward transformable materials. interactions 19, 1 (January 2012), 38-51. 

Prototypes

Mixed-Reality Model Displayer

Slides: Mixed-Reality Model Displayer

Problem Description

There are times when designers desperately want to change their digital models simply with their hands, because it is easier and faster to see the instant outcome before getting into any design details; for example, when urban designers are sharing concepts with other members of the design team, or when interior designers are presenting and discussing plans with their customers. The current solution to this problem is tedious, since designers have to open up their 3D models in software, add and change models, and finally render them to see the result.

Solution

Mixed-Reality Model Displayer enables designers to manipulate their digital models more intuitively by connecting them with physical models in the real world. Designers can put real physical models of any kind behind the screen, so that when viewing through the screen they see their physical models placed in the environment of their digital models. They can also rotate the displayer to see a panoramic view of their models, just as they would in modeling software. This concept can be developed further by equipping the displayer with holographic projection and sensors, so that designers can change not only the physical models but also the digital ones projected at the same time.

1st Class Project: Tangible Foam Cutter

This documentation (1st_assignment_jlab) is of the 1st class project. My idea was for a hybrid manual/CNC foam cutter. The cutter could create prototypes out of high-density foam stock based on a CAD model. Then the user could interact with the prototype physically and make changes by manually operating the hot wire cutter. Based on encoders in the handles, the computer could track changes made to the prototype and reflect these in the CAD model.

Re-Form

Slides: Re-Form

Problem Description

Rapid prototyping and the free-form construction of physical prototypes play a vital role in the ideation and design experimentation process. The current approach, though, is inflexible, forcing a design to become increasingly constrained as it develops from initial sketch to CAD model and on to the first fabricated prototype. This inflexibility prevents the designer from being able to naturally experiment and try new designs and configurations where their understanding of the physical affordances and textures of the object shape the design.

Solution

To overcome this, the Re-Form project presents an augmented prototyping workbench where physical prototypes can be rapidly developed in a free-form modelling environment. The project is built around an augmented projection system in which visual elements such as the UI and object textures can be displayed on the physical form of the prototype. Through the use of reverse electrovibration, textures can also be applied to the object, allowing rapid experimentation with both the visual and physical attributes of the prototype. Through these techniques, Re-Form can support Wizard of Oz-style interaction with physical objects, enabling new user interactions to be tested immediately without having to redesign and fabricate each individual adjustment.

QWRKR

QWRKR (“Quirky Worker”) is a simultaneous organizational tool for the prototypical multi-colored post-it note ideation session. It comes as a suite of tools to help make these sessions more productive. [PDF]

Brainstorming produces an abundance of ideas, but team members often find it difficult to synthesize these ideas into actionable results. Additionally, analysis tends to occur after the fact rather than during the process of idea-generation. Typically, a team member is designated as “mediator” or “facilitator” for the ideation session—QWRKR takes on this role, organizing content on the fly and directing the brainstorm session to more productive results.

Faced with a chaotic wall full of ideas, humans tend to rely on simple, intuitive analytical tools to group and organize content, ignoring potentially valuable ideas.

1. Alternate organization schemes

Optical Character Recognition combined with Natural Language Processing could discover latent organizational structures in the post-it data. By data mining Wikipedia and other web content, QWRKR could find associations between ideas as the team works, and check popularity to determine which ideas are trending and might be of higher value.
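The association step can be sketched with a toy example. This is a minimal illustration, not QWRKR's actual pipeline: it assumes the OCR stage has already produced plain text for each note, and it substitutes a simple bag-of-words cosine similarity for real NLP or Wikipedia mining.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def similarity(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def associate(notes, threshold=0.3):
    """Return pairs of note indices whose texts look related."""
    return [(i, j) for i, j in combinations(range(len(notes)), 2)
            if similarity(notes[i], notes[j]) >= threshold]

notes = ["solar powered phone charger",
         "wind up phone charger",
         "edible packaging"]
print(associate(notes))  # [(0, 1)] — the two charger notes pair up
```

A real system would replace `similarity` with a semantic distance learned from web data, but the grouping logic would have the same shape.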

2. Visualizing Content

Visualizing content early can help team members discover unforeseen connections between ideas. This tool temporarily converts a post-it note, with a single click, into its corresponding image on Wikipedia. Clicking an image multiple times cycles through various associated images until an appropriate one appears.

3. Computer Vision

By tracking team members’ eye movements, QWRKR can create heat maps of ideas that are getting too much attention, and suggest focus on ideas being overlooked. As an additional association tool, it can track the sequence of focus through the idea set.
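The heat-map idea can be sketched roughly as follows, assuming the eye tracker reports gaze coordinates and the positions of the notes on the wall are known (both are hypothetical stand-ins here): fixations are binned to the nearest note, and notes receiving too little attention are flagged.

```python
def attention_counts(fixations, notes):
    """Count gaze fixations landing nearest each note position."""
    counts = {name: 0 for name in notes}
    for fx, fy in fixations:
        nearest = min(notes, key=lambda n: (notes[n][0] - fx) ** 2 + (notes[n][1] - fy) ** 2)
        counts[nearest] += 1
    return counts

def overlooked(counts, ratio=0.5):
    """Notes receiving less than `ratio` of the average attention."""
    mean = sum(counts.values()) / len(counts)
    return [n for n, c in counts.items() if c < ratio * mean]

notes = {"A": (0, 0), "B": (10, 0), "C": (20, 0)}
fixations = [(1, 1), (0, 2), (9, 0), (11, 1), (2, 0), (10, 2)]
counts = attention_counts(fixations, notes)
print(overlooked(counts))  # ['C'] — nobody has looked at note C
```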

4. Semantic Organization

QWRKR can quickly re-arrange post-it data into sentences, creating surprising associations between ideas. These quasi-random reconfigurations could serve as a valuable lateral thinking aid for the team.

5. Idea Accumulation

Clicking on an idea in this mode automatically creates additional associated words or ideas around it, a directed brainstorming tool.

6. Directed Lateral Thinking

Oblique Strategies and other lateral thinking tools can help disrupt ossified thinking in the brainstorming process.

7. The Quirky Randomizer

This tool cycles through all tools at a controlled level of randomness. The ensuing cognitive dissonance can be productive, encouraging the team to reconfigure preconceived notions on the fly.

Design Shaper

Presentation Slides

Slides

Motivation

These days, design activities mostly start from and end up on the computer screen, especially in the phase of developing the forms and shapes of designs. In this situation, the keyboard and mouse are the main tools of designing; sliding and clicking a mouse are the only interactions allowed to us. However, if we look back on our childhood, our designs were developed through physical interactions; we played with clay or Lego blocks with our hands, having physical sensations. I believe this physical sense of interaction is very important in designing because it provides a good way of exploring design ambiguity. Even when we do not know what the next step in our design process is, we can be inspired by physically touching, moving, and distorting our design objects, and this physical process sometimes leads to unexpected design possibilities. Here, the question was how to restore this physical sense of designing and crafting in today’s digital environment.

Project Descriptions

In this assignment, a new design environment is proposed to restore the physical sense of designing and crafting. This environment detects users’ gestures with a motion-sensing device and responds to them by manipulating digital objects on a large screen. It enables users to have the physical sense of designing and crafting and to actively explore ambiguities in the design process. In addition, by mapping users’ diverse motion vocabularies to design vocabularies, this environment could also provide creative ways to explore new design modes. For example, if the environment is set to interpret dancing gestures, users could explore new ways of designing by dancing.

Concept Demo

Strobe Display

Slides- Water Droplets Display

Motivation

The idea behind StrobeDisplay is to create floating physical bits of digital information. The inspiration is derived from an installation at the MIT Museum, where strobes of light on falling water are used to create the illusion of static data points (green dots). These elements could represent information elements, and interesting mid-air visualizations could be created.

Motivation video(Strobes)

Background

Present forms of display render on the x and y axes (pixels). There have been attempts to make 2.5D displays lately, some from the Media Lab itself, such as Recompose and Relief. Attempts to exploit the z axis in an interface have been few; ZeroN is one example.

AmbientMind

AmbientMind

AmbientMind is the concept behind a new architectural health solution to a growing problem: the destructive focus on work that results from the many portable devices we carry around with us today. Because our bodies and our whole mental and emotional focus become narrowed, essentially, to a screen, our capacity to think creatively and be grounded in ourselves is very limited.

The interactional space for AmbientMind would be designed so that our thinking processes, ideas, and emotions could be captured in an expanded physical space around us: a circular room not unlike our own bedrooms in size and purpose. The focus is on a space that is comfortable and carries our identity, but also has a diverse array of tools that allow us to visualize, document, and store the products of our minds. Using a combination of mind-mapping programs, electronic whiteboards and post-it notes, and other still-developing digital ideation systems, this room would combine the much-needed physical movement and presence that engender health and wellness with the productivity of having our ideas at our fingertips and being able to flow through them.

By separating the process of ideation, with its intense mental processing and working-memory demands, from the physical embodiment of the working space, AmbientMind allows the ideator to traverse optimally back and forth between the tangible real world and the world she is creating through her work.

Braynestorm

braynestorm

Braynestorm is an augmented-reality device that seeks to make the initial creative process behind ideation more fun and productive. When initially researching and brainstorming a topic in preparation for a paper, assignment, or other project, we typically default to our old standbys of the mouse and keyboard, browsing Google for a host of information about our area of interest. This differs greatly from the brainstorming procedure behind sculpture, dance, or art, in which there is an innate physical component to the creative process (in addition to computer-aided research) that frees us from the paradigm of screen, mouse, and keyboard and allows us to interact with our physical space as we explore different avenues.

Braynestorm seeks to reduce distraction during the creative process and improve efficiency by providing a large volume of information in varied forms, and allowing users to interact with their data in a tangible way in order to tap into the advantages of physical freedom. It does this by using a computer-vision aided process to create an augmented reality setup for the user.

The user first types in a relevant keyword or set of keywords for the desired topic and presses “Braynestorm”, which prompts the program to automatically scour the internet for visual data (from Google Images) and semantic data (facts from Wikipedia and news from Google News). When a red object is held up to the camera, the program tracks the position of the object and overlays images related to the topic on the video feed. When a blue object is held up, news headlines are displayed, and when a green object is held up, facts are shown.
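The colour-trigger logic can be approximated as follows. This is a hedged sketch, not Braynestorm's implementation: it assumes frames arrive as RGB arrays (e.g. from a webcam) and uses crude channel thresholds in place of proper computer-vision tracking.

```python
import numpy as np

# Map a detected marker colour to the content Braynestorm overlays.
MODES = {"red": "images", "blue": "news", "green": "facts"}

def dominant_mode(frame, min_fraction=0.2):
    """Classify an RGB frame (H x W x 3 uint8) by the colour channel that
    dominates a sufficiently large share of its pixels."""
    r, g, b = frame[..., 0].astype(int), frame[..., 1].astype(int), frame[..., 2].astype(int)
    masks = {
        "red": (r > 150) & (r > g + 50) & (r > b + 50),
        "green": (g > 150) & (g > r + 50) & (g > b + 50),
        "blue": (b > 150) & (b > r + 50) & (b > g + 50),
    }
    total = frame.shape[0] * frame.shape[1]
    for colour, mask in masks.items():
        if mask.sum() / total >= min_fraction:
            return MODES[colour]
    return None  # no marker held up

# A synthetic "camera frame" that is mostly a red object:
frame = np.zeros((60, 60, 3), dtype=np.uint8)
frame[10:50, 10:50] = (200, 30, 30)
print(dominant_mode(frame))  # images
```

A full system would also return the marker's position so the overlay can follow the object; per-frame classification like this is the gating step.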

This allows users to physically interact with their information and potential ideas, and with a few modifications (such as gestures and other features to allow the user to download more information from the web without touching their keyboard or mouse), this device could grow to be very useful in the creative process, while demonstrating the power of computer vision, augmented reality, technology, art, and tangible user interfaces.

Maquette

Like I/O Brush in two dimensions, Maquette is a framework for rapid idea generation using objects and textures at hand in the real world. Digital modeling in three dimensions is a slow process which can never keep pace with the brain’s speed of idea generation. By using physical objects to quickly build up complicated digital forms, three dimensional modeling can rival sketching as a fast ideation tool.

The device is a hand-held object with two buttons, two depth cameras, and one RGB camera. The sensors act like a Kinect to scan objects and textures in real time. Through the use of the buttons and simple gestures, any found object can be captured in the space of a digital model on-screen. The same object in real life can then be used (moved and rotated) to control its digital representation. Objects can be placed, removed from the model, and booleaned out of the model using the buttons on the remote and on-screen menus. It is also possible to use props to perform operations on the model (e.g., a piece of paper can slice through the model, cutting it in half and removing one piece). Finally, as with the I/O Brush, textures can be painted onto the model using image data picked up by the RGB sensor.

Slides

Map-It!

ideation by design 

Map-It! is an interactive digital tabletop that promotes the learning of geography. It would be preprogrammed with larger places such as cities, states, and countries, but would also allow for creativity by integrating a create-your-own function. With this function, a user could create their own map of their neighborhood to share with a friend or family member. This map could then be saved to a map piece (probably a fiducial marker or QR code on the bottom of an acrylic or wood piece) and used to pull up the map on someone else’s Map-It!. The use of a table, as opposed to an iPad or computer, would allow more detail to be integrated into the design while still letting users see the larger picture. Ideally, Map-It! would be able to map the physical terrain of a domain through hands-on manipulation and save that information; however, I’m not sure how to actually implement that.

insTag: a tool for idea documentation

Video Sketch

insTag on Vimeo

Background

Taking pictures in a meeting or brainstorming session is a common activity that I want to explore for this assignment. It is very interesting that the visual impressions brought by photos can help us associate the notes we’ve written down with the environment we were in. And those pictures can serve not only for post-meeting organizing, but also as a tool for real-time discussion and modification during the meeting.

System Description

insTag is a system that aims to help one intuitively capture ideas and rapidly evolve them with others in a meeting. It has a hacked camera that connects to the cloud. The camera also contains a printer, which prints only the location and time of the shooting moment as text. Besides the camera, the system has a webcam, a projector, and a whiteboard.

In a meeting scenario, one can use this camera to take digital pictures of anything he or she finds interesting. A print with the authentic time and location is generated by the camera at the same time. The digital pictures are automatically linked with the prints, on which one is free to write notes. When the prints are pasted on the whiteboard, the webcam recognizes each of them by its unique timestamp. The linked digital pictures are then projected on the board for discussion. One can also make a layered modification of a picture’s content on the board: when the print is wiped, the webcam takes a picture of the modification and links it with the respective print. The next time the print is pasted on the board, the history of modifications will be projected along with it.
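The timestamp-keyed linking between prints and digital pictures could be sketched like this; the class name `TagStore`, the record fields, and the filenames are illustrative assumptions, not part of the insTag design.

```python
class TagStore:
    """Links each printed tag (keyed by its unique timestamp) to a digital
    picture and a layered history of board modifications."""
    def __init__(self):
        self.records = {}

    def capture(self, timestamp, picture, location):
        """Register a new print: the camera pairs a photo with its tag."""
        self.records[timestamp] = {"picture": picture,
                                   "location": location,
                                   "history": []}

    def add_modification(self, timestamp, snapshot):
        """Store a snapshot taken by the webcam when the print is wiped."""
        self.records[timestamp]["history"].append(snapshot)

    def project(self, timestamp):
        """What gets projected when the print is pasted back on the board:
        the original picture followed by its modification layers."""
        rec = self.records[timestamp]
        return [rec["picture"]] + rec["history"]

store = TagStore()
t = "2013-02-14 10:32:05"
store.capture(t, "whiteboard_photo.jpg", "E14-348")
store.add_modification(t, "sketch_layer_1.png")
print(store.project(t))  # ['whiteboard_photo.jpg', 'sketch_layer_1.png']
```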

In the end, we will have physical tags of the inspiring moments and discussions from a meeting. As physical objects, those tags are easy to organize and retrieve. Most importantly, the process of making and retrieving them is intuitive.

Tree of Ideas

Presentation Slides

Tree of Ideas is a realtime Post-it brainstorm recorder & visualizer.

Nowadays, people use Post-its to jot down simple thoughts and move them around, combining and separating clusters to come up with ideas. However, the biggest issue with this method is the difficulty of documenting the ideation process.

The “Tree of Ideas” addresses this problem by using Post-its with RFID tags to digitally record and visualize the brainstorming process. First, when people jot down ideas, the handwriting is transmitted and recorded on a computer in real time. This handwriting is then annotated with a timestamp and a unique ID. When people move the Post-its around on the specialized board, which is equipped with a projector and RFID sensors, the system automatically detects clusters based on the proximity between notes and creates groups. At the same time, the system projects a “tree form” visualization behind the notes. Finally, each time a user adds new notes or changes a position, the system saves a snapshot of the previous state. After the brainstorming is over, people can refer to the digital notes and look back at the process to have further discussions.
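The proximity-based grouping might look like the following sketch, which assumes the RFID sensors report an (x, y) position per note and merges notes transitively whenever two lie within a threshold distance (a simple union-find, not the actual Tree of Ideas code).

```python
def cluster_notes(positions, max_gap=5.0):
    """Group Post-it positions into clusters: two notes share a cluster
    when they are within `max_gap` of each other (transitively)."""
    ids = list(positions)
    parent = {i: i for i in ids}

    def find(i):
        # Union-find root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a in ids:
        for b in ids:
            ax, ay = positions[a]
            bx, by = positions[b]
            if (ax - bx) ** 2 + (ay - by) ** 2 <= max_gap ** 2:
                parent[find(a)] = find(b)

    clusters = {}
    for i in ids:
        clusters.setdefault(find(i), []).append(i)
    return sorted(sorted(c) for c in clusters.values())

positions = {"n1": (0, 0), "n2": (3, 0), "n3": (40, 40), "n4": (42, 41)}
print(cluster_notes(positions))  # [['n1', 'n2'], ['n3', 'n4']]
```

Re-running this after every move, and diffing the result against the previous snapshot, gives exactly the recorded history the system needs.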

By using this system, people would be able to easily document the ideation process, without any additional burden of taking photos of the board of notes or manually recording what is being discussed.

Yet another hand gloves based interface

Slide deck: MAS834-Hand-Gloves-Interface-3D-Objects-Sujoy-Project-1

When I was a kid, I was introduced to the number system using the joints in my fingers. The finger joints, or the spaces between them, appear to me as a grid system that we can use for interacting with user interfaces. Of course, there has been a lot of work on using instrumented hands or hand gloves as user interfaces. Some of it depends on cameras or sophisticated gesture recognition devices like the Kinect. Right after my presentation, I discovered the awesome Minority Report-like UI from Oblong, developed by John Underkoffler, a Tangible Media alumnus. Clearly, there is little room for novel contribution in terms of gestural interaction. However, I am still looking for prior art that makes use of the number system metaphor for applications like a simple calculator. During advanced stages of ideation, when we are working on 3D prototypes, such an interaction has the potential to be intuitive and efficient.

 

Dance Formation Creator

Slides: Dance Formation Creator

Project Description

Dance Formation Creator allows users to physically manipulate pieces on a board into formations for a dance, which are then uploaded to a computer. With this tool, there is no longer a need for the traditional paper and pencil, which can look messy and be tedious to use as the choreographer keeps erasing, making changes, or scrapping a formation and starting over. It also makes collaboration easier, because two or more choreographers can sit around the board and manipulate the pieces, rather than being constrained to a piece of paper and passing a pencil back and forth.

Once formations are captured into the computer, the computer can provide additional information about a new formation. For example, it can evaluate if a certain dancer moved substantially more than other dancers, thus making the formation change hard to execute for that dancer. This is something that the choreographer may not have caught until rehearsal when the dancers tried the movement themselves.

Dance Formation Creator merges the physical world with the digital by allowing choreographers to manipulate pieces on a board to exercise the creative process of formation creation, and also upload these formations into a computer that can digitally record and evaluate them.

Physical Copy & Paste

Pen and paper let you move ideas from your mind to a notebook fluidly.

Computerized copy and paste lets you quickly remix and combine materials from different sources, but is restricted to the digital domain.

Currently, if you want to incorporate other materials into your physical notebook, you either have to recreate them with your pen (possibly tedious and error-prone), reference it (which requires you to look it up again later), or print/photocopy and paste it in manually with scissors and glue, which imposes a high-effort barrier to rapid, fluid ideation.

An ideal copy-and-paste solution would let you effortlessly incorporate external physical and digital materials, diagrams, and sketches into your physical notebook, so you can both augment and expand upon your own physically-penned ideas, and interactively annotate these newly incorporated materials.

A demonstration of this concept:

Video

Slides

Smart Bricks

PDF: SmartBricks

Recent studies have shown that peak creativity emerges when people are in solitude within a comfortable and uninterrupted environment. Contrary to previous belief, group brainstorming can be stifling to many, especially creatives, many of whom are introverts, because of fear of rejection, being unable to speak up over a vocal colleague, or succumbing to peer pressure.

Smart Bricks responds to these issues by empowering individuals to create their own space and share their ideas at their own pace. The bricks come in three sizes, and individuals can use them as modular units to create architectural elements such as walls/whiteboards, furniture, and sketch blocks. In terms of articulating the meeting space, the blocks can be used to create an open meeting space, or individual alcoves for private dialogue.

For sharing information, the blocks can be aggregated in certain ways to collect ideas, via a digital pen, from one individual or a shared group, or set up in a systematic way to create an ideas wall. The user also has the option of projecting their ideas from the block onto the wall.

Research has shown that many eureka ideas come from unexpected environments; thus, by letting participants create their own space, ideas may be sparked, and individuals would be able to record and share their ideas directly through their environment.

Sticky Blocks

zbarryte_stickyblocks

Interface Summary

Users write/draw/speak ideas into blocks, which they then place in a space. Blocks represent ideas. Ideas can be grouped to form themes. Themes and ideas can be built upon by other ideas, moved around, shuffled, sorted, etc., and each idea will remember all of its bindings (what themes it’s a part of, what ideas it’s built on).

Project Goal

Sticky Blocks enables users to physically manipulate their ideas, and to project the associations they make in their head, much as they would with post-it notes.  Unlike post-it notes, Sticky Blocks may form more intricate structures and remember former associations.

NoteSync

PDF Slides: NoteSync

Problem Description
Physical media such as pen and paper have a vital role in the formation of an idea giving users a freedom in form and fidelity not often replicated in purely digital work spaces.  Digital media does, however, excel over its physical counterparts in the areas of organization and ease of collaboration and sharing of ideas.  Current trends often seek to migrate information to a purely digital format creating a definite disconnect between the physical and digital landscapes.

Proposed Solution
NoteSync is a system that seeks to maintain the freedom of physical media while giving the user all the benefits that digital media offers.  NoteSync consists of a device that can read and write to physical media such as a notebook, and provides a common digital workspace that can be used to share and collaborate on ideas, as illustrated in the video below.  Not only will this system enhance collaboration, but it will also let users better organize their notebooks by automatically generating tables of contents and indices based on keyword and symbol recognition.  By bridging the physical and digital workspaces, NoteSync will be able to offer a richer overall experience for the user.

 

STORMBOT

Now with more buttons!


First, you have a set of digital screens. A tabletop and some monitors.

You can add and manipulate any data you want on the screens: text, tables, images, audio, video, you name it. Manipulate these data with your hands, touch-screen style. Draw on them with a digital pen, white-board style. Add text with a keyboard, or write with the pen. Annotate, scribble, format, rotate, erase.

Technology already exists to merge physical pieces of paper with digital paper (e.g., ProofRite, PaperProof, various digital pen and dot-patterned paper systems, and so on), so why not add that technology to this workspace as well? All the things you can do with paper scattered across a desk, you can do here, with more media.

The key here is that you can lay out your data and ideas in a non-linear way. Take pages 3, 15, and 27 from a pdf and put them side by side. Set a clip from an article you read nearby. Draw a line connecting these to a set of audio recordings you have from a lecture. Easily move information between screens. Jot notes everywhere and anywhere.

But there’s a problem. Everything is digital, visual, and constrained to your screen or to pieces of paper.

How can we improve?

Just add robots!


Everything is cooler with robots.

So let’s add a social robot sitting at the table with you to facilitate the brainstorming process. Not as a replacement for a person, but to help you interact with your data and ideas. It can analyze all your content. It can ask questions. It can search for new or relevant information that you may not have considered. It can direct attention verbally and visually to previous ideas you’ve had to help you connect your ideas (e.g., Staudt, 2009).

The robot will be able to analyze the linguistic and semantic content of your screens. (Until image and video processing gets better, those may have to be tagged with text for now.) A lot of machine learning, AI, natural language processing, and data processing algorithms exist for analyzing large sets of data like these, generating semantic concept nets or category structures, finding similarity dimensions, feature sets, and so on. All this previous work can be leveraged here.
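The semantic analysis could start from something as basic as bag-of-words cosine similarity between the text of two screens, before bringing in the heavier NLP machinery mentioned above. A minimal sketch (the `cosine_similarity` helper is a stand-in, not the system's actual pipeline):

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts using raw word counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Screens about the same topic score high; unrelated screens score zero.
print(cosine_similarity("robot guides brainstorm", "robot guides brainstorm"))
print(cosine_similarity("dance routine", "audio prototyping"))
```

Pairwise similarities like these are enough to seed a concept net: connect any two ideas whose score exceeds a threshold.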

The robot can also be equipped with sensors: eyetracking in the tabletop surface and on the screens (eyetracking monitors already exist, e.g., from Tobii Technologies, and some people are trying to develop eyetracking webcams), cameras, a microphone. Data from these can be analyzed in order to monitor attention (e.g., Narayanan et al., 2012), affect/emotions (e.g., Calvo & D’Mello, 2010; D’Mello & Graesser, 2010), and other behaviors.

With all this information, the robot, as a social actor, can help you interact with your ideas. It can suggest relevant words or phrases, papers, articles, people who have expertise in areas you’re thinking about, news, video clips, and more. Drawing on the automated tutoring system literature, it can ask questions that will lead to deep thinking about ideas.

But… robots can’t do everything!


Stormbot is a tool to help you stay on track. It can guide you away from destructive thinking and mental blocks by detecting your frustration. It can help you generate and connect ideas. But it’s still just talking about information represented visually on digital screens.

Buttons

One way of thinking about sorting, categorizing, and organizing ideas is based on their similarity and feature sets. Buttons — actual, physical buttons, like the kind on sweaters, jackets, and bags — are a good visualization of this. Buttons can be round, or square; they can be different colors; they can have different numbers of holes. You can group them based on their feature sets and similarities in a large number of ways: blue buttons with two holes. Red buttons with four holes. Cloth buttons. Metal buttons.

What if we could sort ideas the same way?

Recall that the robot, on the backend, is already analyzing the content of your screens. Why not extend and apply this: determine feature sets. Judge similarity. Group ideas into semantic categories or networks. Basically, assign “buttons” to your ideas.
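Assigning “buttons” could amount to grouping ideas by a chosen combination of features, exactly like sorting buttons by color and hole count. A minimal sketch, assuming each idea already carries a feature dictionary (`group_by_features` and the data format are hypothetical):

```python
from collections import defaultdict

def group_by_features(ideas, keys):
    """Group ideas by their values on the chosen feature keys,
    like sorting buttons by (color, holes)."""
    groups = defaultdict(list)
    for idea in ideas:
        signature = tuple(idea["features"].get(k) for k in keys)
        groups[signature].append(idea["name"])
    return dict(groups)

ideas = [
    {"name": "idea A", "features": {"color": "blue", "holes": 2}},
    {"name": "idea B", "features": {"color": "red", "holes": 4}},
    {"name": "idea C", "features": {"color": "blue", "holes": 2}},
]
print(group_by_features(ideas, ["color", "holes"]))
# {('blue', 2): ['idea A', 'idea C'], ('red', 4): ['idea B']}
```

Changing the key list re-sorts the same ideas along a different dimension, which is exactly the reconfigurability the physical buttons would need.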

Physical buttons!

Bring that into the physical world. Have actual, physical buttons linked to each of your ideas on the screen. This is the kind of thing that might be especially helpful as an ideation game for kids, with the robot character leading the game and facilitating the interaction.

The buttons could just be real buttons, or they could be specially 3D-printed, or they could even (and perhaps ideally) be reconfigurable on the fly, incorporating new ideas and sorting schemes as you change data, add ideas, and group ideas together. Whether you’re trying to design software, a new product, come up with a research question, or write a paper, the goal is to allow you to interact with your ideas on another dimension. The buttons, with the robot’s lead, allow you to see similarities and draw connections in a different way.

Slides available here! [PDF]

BrightIdea

Slides

Problem Description
Ideas can come at any moment, and when working in groups, those ideas sometimes occur when team members are not in physical proximity. How can individuals effectively record ideas that occur to them during the day, and how can teams work together and evaluate ideas effectively when not all together?

Proposed solution
BrightIdea is a system to help users record ideas that occur to them at the spur of the moment and share them with teammates. Each system consists of a base that contains several idea “bulbs”; each bulb contains multiple LEDs of varying color that identify which user made a given contribution. When a user has an idea, they record it on the bulb. When they are ready to share the idea they recorded, they place the bulb back in the base of the unit. Depending on how good an idea the creator thinks they have come up with, they may push the bulb deeper or shallower into the base to adjust the actual brightness of the bulb. Other users who see a lit bulb may then place it in the ‘play’ socket of the base, which replays the idea. Afterwards, those users replace the bulb in the base as well, and similarly indicate how good an idea they think it is by how deeply they push the bulb into the base.
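The depth-to-brightness mapping could be a simple linear scale from how far the bulb is pushed in to an LED PWM duty value. A minimal sketch (the `depth_to_brightness` function and the 20 mm travel are illustrative assumptions, not the device's actual firmware):

```python
def depth_to_brightness(depth_mm, max_depth_mm=20.0, pwm_max=255):
    """Map how far a bulb is pushed into the base (deeper = stronger idea)
    to an 8-bit LED PWM duty value, clamping out-of-range readings."""
    depth = max(0.0, min(depth_mm, max_depth_mm))
    return round(depth / max_depth_mm * pwm_max)

print(depth_to_brightness(0))    # 0   (barely inserted: dim)
print(depth_to_brightness(20))   # 255 (fully seated: full brightness)
```

Averaging the depths chosen by each teammate would then give a crude group rating for the idea.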

IdeasInMotion

Slides PDF: Perovich_presentationFinal

Movement-based ideation can be difficult to capture since motion is fleeting and experiential.  IdeasInMotion is a system that supports dance choreographers developing routines by helping them collect their ideas, restructure the moves into a routine, experience the new choreography, and share the results with remote collaborators.  The system consists of sensors and haptic feedback devices in clothing that are used to document, re-experience, and share motion, paired with a computer-based interface for sorting and restructuring the moves.  The sensors document the choreographer’s movements as he brainstorms, and a board collects the data and wirelessly sends it to the computer, where it is intelligently divided into segments.  Using the computer interface, the choreographer can rearrange the moves to create a new combination.  He can then experience the resulting routine through haptic feedback in the clothing.  Once he’s pleased with the result, he uses the computer interface to create a mirror image of the routine that represents the follower’s experience.  He sends this to his dance partner so she can learn the routine “naturally” by touch–as she would if they were collaborating in person.
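The intelligent division into segments could start from a simple pause detector: cut the sensor stream wherever motion magnitude stays below a threshold for a few consecutive samples. A minimal sketch (`segment_moves` and its parameters are hypothetical placeholders for the real segmentation):

```python
def segment_moves(magnitudes, threshold=0.2, min_pause=3):
    """Split a stream of per-sample motion magnitudes into move segments,
    cutting wherever the dancer is nearly still for `min_pause` samples.
    Returns (start, end) index pairs, end exclusive."""
    segments, start, still = [], None, 0
    for i, m in enumerate(magnitudes):
        if m > threshold:
            if start is None:
                start = i          # a new move begins
            still = 0
        elif start is not None:
            still += 1
            if still >= min_pause:  # long enough pause: close the move
                segments.append((start, i - still + 1))
                start, still = None, 0
    if start is not None:           # stream ended mid-move
        segments.append((start, len(magnitudes)))
    return segments

# Two bursts of movement separated by a three-sample pause:
print(segment_moves([0.5, 0.6, 0.1, 0.1, 0.1, 0.7, 0.8]))
# [(0, 2), (5, 7)]
```

Each (start, end) pair then becomes one draggable “move” in the sorting interface.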

SoundSpace / Permea-b-oard / MeetingMaker

PDF: Introduction and Ideas [Slideshow]

SoundSpace is a tool for musicians to meet and create music while apart. Current tools (e.g., GarageBand, web conferencing) ignore the importance of body movements, spatialized sound, or even the vibrations of a performer stomping on the ground. The importance of these channels of communication becomes clear when thinking of the way a lead violinist directs a string quartet solely through emphasized body movements. SoundSpace provides all of these sensory inputs while also adding the ability to record and play back sessions. This is great for rehearsing alone later or revisiting improvised sections that are otherwise difficult to repeat. It can also be used to turn instruments on and off or rearrange portions of the composition, leading to new ways of experimentation.

As a follow-up to Clearboard, Permea-b-oard latches on to the importance of direct gaze and continuity of space, but suggests that these need not be limited to the sharing of 2D representations. A space is added at the bottom of the board for working with physical or digital 3D models. While the participants are physically located far apart, the interface creates the appearance of a shared space, allowing for direct gesturing, eye contact, and shared manipulation of an object.

MeetingMaker is a tool for meetings or charrettes where a lot of sketching happens that doesn’t get recorded or that happens on one end of the table and can’t be seen by people on the other end. This is often the case in meetings between architects and engineers. At the end of the meeting, people walk away with the sketches they made and important information gets lost.

It starts out with the idea of a ‘Shared Design Space‘ where the entire surface of a table can be used to draw. Important images which have been pinned up can be grabbed and digitally moved to the table using phicons, for people to look at or sketch over. Updated drawings can be easily moved back to the pin-up wall. Personal sketches can be moved toward the center of the table to be enlarged, letting everyone look at or sketch over them together.

Project files from a network or server can also be pulled up directly on the table, rather than needing to run back to a computer, plot, and bring the drawings back physically. These drawings are connected directly to the project’s 3D model, and any sketching that happens over them is directly imported into a layer on the computer model, relating the sketch to the overall building and precluding the need to find a sketch, scan it, import it, and retrace it into the 3D model later. While sketching, the computer can be told to create alignments or snaps for more accurate drawing.

Finally, the audio of the meeting can be recorded and synchronized with the movement of images around the room allowing for review of the meeting later.

Urban Rediscovery Toolkit

Deck : Urban Rediscovery Toolkit

Ideation has become an enclosed activity: while we have a beautiful world outside to live, breathe, and explore in, we are stuck at boards, pads, and tabs. The Urban Rediscovery Toolkit pushes you out into the open and gives you a tangible tool that you can use as a lens to look at the world from a new perspective.

Audio Sketchbook

PDF Slide Deck: Audio Sketchbook

Problem Description
Game jams are sessions of designing, prototyping, and building games constrained by a particular theme and time limit.  Game jams can last between 6 and 48 hours, and attract participants with a variety of skills and backgrounds.  From empirical evidence, most of these participants are either game designers or programmers.  Aside from design and programming, games often employ art and music.  But while most games produced through game jams include some form of art, significantly fewer include music or sound effects.  This lack of audio support could stem from a lack of sound-design participants.  If so, how can we encourage more audio designer participation?  The lack of music could also be a result of poor audio prototyping tools.  If so, how do we create an easy-to-use prototyping tool for iterating on sound effects and music?

Proposed Solution
To answer each of these questions, I propose a product that allows for better, simpler communication of audio effects.  From empirical evidence, most game jam participants are comfortable sketching designs on paper, whiteboards, blackboards, and computers.  While making these sketches, they are also gesturing and vocalizing sound effects.  If we capture these sound effects during the sketching process, we can use them in prototyping the game.  I propose an “audio sketchbook” tablet app that allows users to sketch images and record audio over these images.

This sketchbook would have three modes, or “layers,” of sketching.  See Figure 1.  The first layer is the drawing layer.  It allows users to draw freely.  Ideally, users could utilize standard digital sketchbook options such as setting the color, thickness, and opacity of their brush.  The second layer is the audio layer.  While a user performs brush strokes, audio is captured.  This interaction takes inspiration from the “Singing Fingers” iOS app (http://singingfingers.com/).  The final layer is the editing layer.  This layer allows users to select sections of the audio layer for editing.  Editing options include copying, “slicing,” “stretching,” and deleting audio sketches.  Through these three layers, users can create, read, update, and delete their line and audio sketches.  By providing a sketching tool for prototyping audio, I hope to give game jam participants a simple way to iterate on and produce audio effects with a medium they are already comfortable using.

Figure 1: Layers of Audio Sketchbook
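The editing layer’s operations suggest a simple data model: each stroke pairs its drawn points with the audio recorded along them, and slicing or stretching operates on that recording. A minimal sketch (the `AudioSketch` class and the naive sample-repetition stretch are illustrative assumptions, not the app’s actual implementation):

```python
from dataclasses import dataclass

@dataclass
class AudioSketch:
    """One brush stroke's recorded audio, as a flat list of samples."""
    samples: list

    def slice(self, start, end):
        """Return a new sketch containing only samples[start:end]."""
        return AudioSketch(self.samples[start:end])

    def stretch(self, factor):
        """Stretch by repeating each sample `factor` times (naive time-stretch)."""
        return AudioSketch([s for s in self.samples for _ in range(factor)])

sketch = AudioSketch([1, 2, 3])
print(sketch.slice(1, 3).samples)   # [2, 3]
print(sketch.stretch(2).samples)    # [1, 1, 2, 2, 3, 3]
```

Copy and delete fall out of the same model: copy duplicates the object, delete removes it from the sketchbook’s stroke list.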