Final Project Description

Click here to view final projects

This webpage can contain a link to a PPT with lots of detail, and/or a paper of up to 6 pages (one-column, single-spaced) for a solo project, or up to 8 pages (one-column, single-spaced) for a group project. The paper is optional; if you prefer, you can put all the detail in a PPT, but don't skimp on the details if you don't write a paper. Please include in the web materials: (1) references and background; (2) what you built/accomplished; (3) how you evaluated it (give detail); (4) results of the evaluation - what works or not, and what can be concluded from the results you got. You won't have time to cover all of this in the in-class presentation, so just focus the talk on (3) and (4) above. If (4) is being finished Dec 2-9, then do the best you can to discuss preliminary results, how you are analyzing the results you have so far, etc. Online, you can have additional appendices if needed.

Make sure data is de-identified or that the persons gave full permission to have it put online and made public. Save all your COUHES forms in case we're audited.

The key is to communicate online, by Fri Dec 9, what you've accomplished (which I'll grade after Dec 9) and also to present the gist of it in class Dec 2 or 9. Each solo slot is 15 minutes; please plan to present for 10-12 of those minutes, leaving at least 3-5 minutes for questions and transition.

Due 2pm, Thursday, Nov 17, 2011.

To: nakra (at) tcnj.edu, mas630-staff
Subject: MAS630 HWK7
Include editable file with your name and homework number in filename, so Picard can send you comments,
e.g., LASTNAME-HWK7.{doc,docx,tex}

READ (papers emailed):

Teresa Marrin Nakra and Brett BuSha, "Correlation of Audience and Conductor Emotional Responses in Live Orchestra Concert" (In Preparation)

Teresa Marrin and Rosalind Picard, "The 'Conductor's Jacket': A device for recording expressive musical gestures", Proceedings of the International Computer Music Conference, Oct 1998.

Celine Latulipe, Erin A. Carroll, Danielle Lottridge, "Love, hate, arousal and engagement: exploring audience responses to performing arts" CHI 2011 Session on Performing Arts, Vancouver, BC, Canada.

WRITE:

1. The Nakra-BuSha paper is under preparation for submission, and the lead author would like your help to make it a great paper. Practice being a reviewer and give constructive feedback: (a) What is the main claim/contribution of the paper? Concisely state your understanding of it (this helps the authors see whether it is coming across clearly/appropriately). (b) What part of the paper is most in need of improvement? While some sections are still not completed, please think about the ones that appear complete - how can what is said now be made stronger? Make suggestions and be honest in your critique.

2. The CHI paper has already been published, but if you know EDA you can still find suggestions or criticisms for this paper. Give two criticisms or other suggestions for how the work presented in this paper could be improved.


Due IN CLASS Friday Nov 4, 2011.

Goal: Show that your project is well underway. Clarify what you have done so far, what remains to be done, and where you may encounter risk or need help. Prepare a short presentation:

1. One slide: State goal of project
2. One slide: How much background of related work have you researched so far? (Don't give an overview of all the background work details; just give us an idea of how much you've looked into it.)
3. Confirm that you have taken care of approvals if your project involves human subjects.
4. If you are doing a human subjects study show a graphic of the study design/flow you've planned.
5. Confirm that you have taken care of getting all software/hardware/design materials needed (e.g. if using sensors, show that you have gotten confirmation of logger or streamer sensor availability for when you need it.)
6. What else have you built/tested/started to get done?
7. Show a timeline of what you need to get done by what date.
8. What remains to be done? What are the biggest risks you're likely to face getting things done?
9. What do you need help with? The class and staff can be of help - please let us know how.


Due 2pm, Thursday, Oct 27, 2011.

To: mas630-staff, awebb@draper.com
Subject: MAS630 Homework
Include editable file with your name and homework number in filename, so Picard can send you comments,
e.g., LASTNAME-HWK6.{doc,docx,tex}

READ (emailed to you):

DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129, 74-118.

Kircher, J. C. & Raskin, D. C. (1988). Human versus computerized evaluations of polygraph data in a laboratory setting. Journal of Applied Psychology, 73, 291-302.

Kircher, J. C. & Raskin, D.C. (2002). Computer methods for the psychophysiological detection of deception. In M. Kleiner (Ed.), Handbook of Polygraph Testing (pp. 287-326). San Diego: Academic Press.

Vrij, A., Leal, S., Mann, S., & Granhag, P.A. (2011). A comparison between lying about intentions and past activities: verbal cues and detection accuracy. Applied Cognitive Psychology, 25, 212-218.

Webb, A. K., Honts, C. R., Kircher, J. C., Bernhardt, P., & Cook, A. E. (2009). Effectiveness of pupil diameter in a probable-lie comparison question test for deception. Legal and Criminological Psychology, 14, 279-292.

WRITE:

1. Suppose you are called upon to help design the affect-related part of a lie detection system for use in airports. Today, when you check in as a passenger, it is already routine at the check-in counter to ask each passenger questions such as "Have your bags been out of your immediate control since you packed them? Has any stranger given you a package or present to carry?" It is also routine to walk through a metal detector or a body-scanner, and to gather video of passengers.

(a) Describe how you might augment one of these interactions (or some other part of the experience, if you prefer) to help catch people who might have malicious life-threatening intent. What additional questions would you ask and how would you ask them?

(b) What sensors would you use and what would they measure?

(c) What behavior is your system aimed at detecting? Be specific.

(d) What is the biggest strength of your system?

(e) What is the biggest weakness of your system? (How might people fool it? When will it give false alarms?)

(f) What concerns might the general public or privacy advocates raise about your system?

Be very specific in your answers and point to the readings where appropriate.


Due 2pm, Thursday, Oct 20, 2011.

To: mas630-staff
Subject: MAS630 Homework
Include editable file with your name and homework number in filename, so Picard can send you comments,
e.g., LASTNAME-HWK5.{doc,docx,tex}

READ:

R.W. Picard (2003) What does it mean for a computer to "have" emotions? Chapter in Trappl, Petta and Payr (2003) "Emotions in Humans and Artifacts"

Nass et al. 2005 Improving automotive safety by pairing driver emotion and car voice emotion

J. Klein, Y. Moon, and R. W. Picard, "This computer responds to user frustration" Interacting with Computers, Volume 14, No. 2, (2002), pp. 119-140 (emailed)

T. Bickmore, Picard, R. W. (2004) Towards Caring Machines, Proceedings of CHI, April 2004, Vienna, Italy.

Picard, Affective Computing Chapter 7 - Please at least skim this and think about machines synthesizing emotion and about machines using internal emotional-like mechanisms to bias their own decision-making, perception, language selection, reasoning, and more.

WRITE/COLLECT:

1) In the Nass et al. study, cheery drivers responded best to an energetic voice, and upset ones to a subdued voice. Listen for instances this week where people change their voice to deal with the emotions of another person effectively, and share an example of this with the class. If you can't find an effective example, you can give an example where the interaction was ineffective and tell us why you think it failed. Please do not disclose identifying information.

2) Bring in a short video or movie clip with a scene of a computer/robot that has emotions. In your written assignment, list which clip you're bringing in and answer these questions briefly: (a) What outward signs indicate its emotions, and what emotions do they communicate? (b) What fundamental advances do you think are needed in order for a real robot/computer to have the ability portrayed in the clip? (Offer: Roz can loan DVDs of old Twilight Zone, Hitchhiker's Guide to the Galaxy, I, Robot....)

3) A promoter of a stuffed smiling dinosaur declared "Barney now has emotions!" When I heard this I groaned because Barney only has a mechanical smile. But enough about your professor - what would YOU require a robot to have, as emotional capabilities, in order for you to say "that robot has emotions." Explain your requirements.


Due 2pm, Thursday, Oct 6, 2011.

To: mas630-staff
Subject: MAS630 Homework
Include editable file with your name and homework number in filename, so Picard can send you comments,
e.g., LASTNAME-HWK4.{doc,docx,tex}

Write Project Draft Ideas:

On one or two slides (to share in class Fri, but please email to us by the deadline above), please describe 1-3 project ideas that you are interested in doing. Teams of two are ideal, so if you're a soloist now, please list more than one project idea that interests you and we can suggest some matchmaking before the next class. If you'd like to team with somebody who has certain skills, don't be shy about listing that. List your skills too. At this point you don't have to give much detail about your project plan. By Oct 21, when your final plan is due, I'll want you to describe:

1. What the goal of the project is, what parts exist now, and what parts need to be done to reach the goal. If any of these parts are likely to cause problems, what is your back-up plan?
2. Who is doing what (if group project). Also is this a project that overlaps with another class or thesis work? (This is okay, just tell me. I expect more work to get done for overlapping projects.)
3. It should be obvious what your project has to do with Affective Computing. For example, you can use machine learning to learn about affect, and perhaps classify it, but the emphasis in this class should be less on the machine learning algorithms and more on the way you are handling the affective data and its quirks, and how things change as you change the nature of the data. This is not the machine learning class. Similarly for HCI-related projects: this is not an HCI class, but there are many interesting affect-related issues in most HCI, and you can pull those out here.
4. Are you involving human subjects? Have you got COUHES approval or have you gotten your advisor to add you to an approved protocol, etc.? Or are you exempt? Even being exempt needs COUHES approval.
5. Future (after the project is done): The big win: Learn something you can teach the rest of us. It doesn't have to be big (although that's awesome if it is) but it does have to be "done" in the sense that you have to get some answers. Did you build something and does it work? How do you know it works? Why does it work or not? What do you know now that you didn't? You can't answer this at this early stage of project planning (unless you're cheating; please be better than that!) Nonetheless, pick a project now that you can do that gets you somewhere new, so we can all learn.
6. Schedule time with Roz (through Lillian) and/or Elliott to meet and discuss your ideas before finalizing the project, or catch us right after class. We're here to help you and we like to be involved.

READ:

Niedenthal, PM, Embodying Emotion (18 May 2007) Science 316 (5827), 1002. [DOI: 10.1126/science.1136930]

Ahn, H.I, Teeters, A., Wang, A., Breazeal, C., and Picard, R.W. (2007) "Stoop to Conquer: Posture and affect interact to influence computer users' persistence," The 2nd International Conference on Affective Computing and Intelligent Interaction, September 12-14, 2007, Lisbon, Portugal.

Lerner, J., Small, D. A. , Loewenstein, G. (2004) Heart Strings and Purse Strings, Psychological Science, Vol. 15, No. 5, pp 337-341.

Isen, Daubman, and Nowicki, (1987) "Positive affect facilitates creative problem solving", Journal of Personality and Social Psychology, Vol 52(6), Jun 1987, 1122-1131. doi: 10.1037/0022-3514.52.6.1122

Forgas & Moylan, (1987) After the movies: transient mood and social judgments, Personality and Social Psychology Bulletin, Vol 13(4), Dec 1987, 467-477. doi: 10.1177/0146167287134004

WRITE:

1) Alex has ten questions on which to collect people's opinions. The questions relate to the overall satisfaction that people perceive with her party's politicians and their impact both locally and nationwide. She is not allowed to modify the questions, but she is willing to modify how the poll is conducted in subtle ways to make her party's political candidates look as good as possible. She plans to poll 1000 people nationally by phone and 1000 locally, in person, by some "random" process. Describe three ways Alex might bias the opinions she collects by directly (or better, indirectly) manipulating affective factors. Be clear how you think each of the three manipulations would affect people's answers (i.e., give an example question she might have been given related to the task above, say how the affect could be manipulated, and predict how it would affect the answer. Repeat 3x.)

2) Pick at least one of the physical manipulations in the readings and try it on yourself, on a task of your choice, or on a friend who doesn't know the hypothesis. For example, try browsing something neutral with a pen in your mouth in the two different positions, or try criss-crossing physical affect manipulations with a task having agreeing or opposing affective demands. You can pick something not in the readings as long as it's in the spirit of them. Of course, you know what the physical manipulations aim to do and you are a biased participant, so your results cannot be expected to be the same as those of a naive subject - or can they? Play around. It's ok to just have fun with this item - the key is to see if you pick up on any of the nuances by actually trying it. Jot down what you tried and what you learned.

3) Before you work to brainstorm ideas related to your project, go experience true positive affect. Ask some friend(s) to help you (all in the name of science, really it is, yes). Tell them why this is a good idea based on the above reading. For full credit on this homework, have a friend email me stating that they witnessed you experiencing positive affect, with a line saying why this may matter for your creativity. (So I know you taught them something about what you learned. Make sure they put "MAS630" and your name in the subject line.) Pay attention to things this week that naturally bias your mood. (a) Name a way in which the findings in the readings may apply to you this week. (b) Name a way in which those same findings may not apply directly to you, even if you had experienced the same emotion-eliciting event as in the studies.


Due 2pm, Thursday, Sep 29, 2011.

To: mas630-staff
Subject: MAS630 Homework
Include editable file with your name and homework number in filename, so Picard can send you comments,
e.g., LASTNAME-HWK3.{doc,docx,tex}

READ:

Hoque, M. E., Lane, J. K., el Kaliouby, R., Goodwin, M., Picard, R.W., Exploring Speech Therapy Games with Children on the Autism Spectrum, Proceedings of InterSpeech, Brighton, UK, September 6-10, 2009.

R.W. Picard and J. Scheirer (2001), The Galvactivator: A Glove that Senses and Communicates Skin Conductivity, Proceedings from the 9th International Conference on Human-Computer Interaction, August 2001, New Orleans, LA, pp. 1538-1542.

Jacobs, A. M., Fransen, B., McCurry, J. M., Heckel, F. W. P., Wagner, A. R., & Trafton, J. G. (2009) A preliminary system for recognizing boredom, Poster presented at the 4th annual ACM/IEEE International Conference of Human-Robot Interaction, San Diego, CA.

Kapoor, A., Picard, R.W. Multimodal Affect Recognition in Learning Environments, ACM MM'05, November 6-11, 2005, Singapore.

WRITE:

Suppose that your job is to build interface tools that can help a speaker be better at engaging their audience. Let's assume they already have help with their content and visuals, and now they need help with delivery, and with understanding the overall effectiveness of their speech for engaging the audience. Assume they are giving Powerpoint/Keynote style talks that range from 2 minutes to 50 minutes in length, and that you have these tools available to use (or not):

1. A video camera and an online screen capture tool (both give video and audio output), wearable mic if needed
2. Skin conductance sensors for palm or wrist
3. Motion sensors (can be integrated with above)
4. Dials that measure -100 to 100 (you need to give instructions for what the endpoints mean)
5. Computer sliders that viewers can move
6. Facial analysis tools with possibilities such as el Kaliouby showed
7. Prosody pitch and timing tools such as described by Hoque et al.
8. A commercially available laptop and projector
9. Smartphones and tablets
10. Internet & cellular access
(if there's something else low-cost, commercially available that you need, you can argue for that too.)

(Part I. Speech modulation.)

Some speakers need help to modulate their pitch and timing. You don't want them to talk too fast, or talk too monotone, or sound histrionic. They might need help knowing when to pause more dramatically, say to allow the audience to laugh or process complex thoughts.

What kind of system would you build for helping a person learn how to better modulate the timing and pitch of their speech? Imagine (you don't have to build it) what your system would be and then describe:

(a) What would interacting with your system be like from the viewpoint of the speaker/user? Please be careful to consider the affective qualities of the user experience (not just the affective analysis of the speech tool). Describe each step of the user experience.
(b) Sketch at least one view of the interface you envision the user seeing.
(c) Would your system be live and online, or offline and viewed later? Why?
(d) Would you involve the audience? If so, how?
(e) How would you test if your system works well?

(Part II. Measuring Engagement.)

Some speakers will want to see where in their presentations they engaged parts of their audience and where the audience lost engagement. They will want to identify the engaging/interesting or boring parts so they can know what to keep doing and what to stop doing.

What kind of system would you build for helping a person learn how to be better at engaging their audience? Imagine (you don't have to build it) what your system would be and then describe:

(a) How would you involve the audience in measuring engagement and what would the audience experience be like? Describe each step of the audience experience, making sure that you are paying attention to their affective experience with your system as well as to what affective qualities your system might be measuring.
(b) How would you show the results to the speaker? Sketch what this would look like for the speaker. Describe the way the speaker would interact with the system and what he/she might see.
(c) Would your system be live and online, or offline and viewed later? Why?
(d) How would you test if your system works well?
(e) Describe what you think would be the 3 main weaknesses of your system and its 3 main strengths.

(Part III. Practical Constraints.)

Do you think your two systems are practical for somebody to build within the coming year? If so, great, you're done! If not, jot a few lines about how you think it's worth waiting longer for your idea, i.e., tell us why it's so much better than what could be done with current year technologies, that it's worth more investment.


Due 2pm, Thursday, Sep 22, 2011.

To: mas630-staff
Subject: MAS630 Homework
Include editable file with your name and homework number in filename, so Picard can send you comments,
e.g., LASTNAME-HWK2.{doc,docx,tex}

READ:

M. Madsen, R. el Kaliouby, M. Goodwin, and R.W. Picard, Technology for Just-In-Time In-Situ Learning of Facial Affect for Persons Diagnosed with an Autism Spectrum Disorder, Proceedings of the 10th ACM Conference on Computers and Accessibility (ASSETS), October 13-15, 2008, Halifax, Canada.

Z. Zeng, M. Pantic, G.I. Roisman, and T.S. Huang, A survey of affect recognition methods: Audio, visual, and spontaneous expressions, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, pp. 39-58, Jan 2009.

Why Good Advertising Works Even When You Think It Doesn't, The Atlantic, August 31, 2011.

Volkswagen Ad Gets Better the More You Watch It: Results from the Affectiva Smile Tracker. Feel free to try out the smile expression recognition online (you need a webcam and broadband access to YouTube).

Richard L. Hazlett and Sasha Yassky Hazlett, Emotional Response to TV Commercials: Facial EMG vs. Self Report. Journal of Advertising Research, March-April 1999.

WRITE/INTERACT:

1. Not all people diagnosed with autism have difficulty interpreting facial expressions, but many do. Many people on the autism spectrum find it really hard to process both high-speed visual information and vocal information in a face-to-face conversation. Many also report being hugely stressed looking at other people's eyes. Describe 2 main challenges in bringing (imperfect) facial expression recognition into a form that can help people better process facial expressions in real-time face-to-face social interaction.

2. Suppose you are asked to move a slider on a computer screen while you watch a presentation or advertisement: slide it to the left if you are having negative feelings and to the right if you are having positive feelings. List three strengths related to using such a technology. List three weaknesses.

3. Play with the results dashboards online at Forbes and at Affectiva. (Sorry, YouTube recently took down the Doritos ad so it no longer plays, but you can watch and record your expressions for the other two ads - opt in if you wish; it is not required that you participate: you can just view results from others if you wish.) Can you find any interesting differences in the affective responses of people who saw the ad before vs. those who are seeing it for the first time? If so, describe them. If not, say whether you think repeated viewings of an ad should give a repeatable, consistent response, and why (or why not)?

4. Consider the opt-in technology feeding facial expressions to the dashboards you just played with. It can be seen as an easier way for viewers to give feedback than the standard process of filling out long online surveys to describe their experience, perhaps at many points during an interaction. On the other hand, maybe there are ways this technology could be misused or used for purposes you would not want to participate in. If you wanted to give feedback on a lot of content and had two choices - the expression-reading method, or filling out online surveys (today's standard) - which would you pick? Is there something better you would prefer? What are the potentials for abuse of this technology, and how might its designers prevent that?

5. To train a machine to read facial expressions, we need a large corpus of a variety of expressions labeled by human labelers. Try this video labeling exercise. If you are on the class list, your username is the part of the email address you used for this class before the '@'. There is no password. If you have any problems accessing the site, please contact Micah at micahrye ( at ) mit (dot) edu. Write a few sentences on your experience with the labeling - in what ways was this easy or hard? How could it be improved?

Due 2pm, Thursday, Sep 15, 2011.

To: mas630-staff
Subject: MAS630 Homework
Include editable file with your name and homework number in filename, so Picard can send you comments,
e.g., LASTNAME-HWK1.{doc,docx,tex}

READ:

Affective Computing: Introduction and Chapters 1,2,3.

Ursula Hess, Reginald B. Adams, Jr., and Robert E. Kleck, "The face is not an empty canvas: how facial expressions interact with facial appearance," Phil. Trans. R. Soc. B, 12 December 2009, vol. 364, no. 1535, pp. 3497-3504. doi: 10.1098/rstb.2009.0165

Shlomo Hareli and Ursula Hess "What emotional reactions can tell us about the nature of others: An appraisal perspective on person perception" Cognition and Emotion (2010) Volume: 24, Issue: 1, Publisher: Psychology Press, Pages: 128-140, DOI: 10.1080/02699930802613828

WRITE:

1. Construct a specific example of a human-human interaction that clearly involves affect. Construct its "equivalent" interaction between a person and an affective technology, by using the media equation. Do this for two cases: one where the equivalence seems likely to hold, and one where it seems likely to not hold. Do you think the presence of affect in a human-technology interaction makes the media equation more or less likely to hold? Explain.

2. Argue for or against this statement: "Emotions are just special kinds of thoughts."

3. Pick a least favorite and a most favorite application from Chapter 3 and critique both of them (pros and cons) based on your own personal and unique research perspective. I wrote these fifteen years ago, and while some things have not changed much, I am interested in what you think is most interesting, most likely to succeed or fail, and why.

4. From the two Hess readings, come up with two questions: (i.) A detail about the papers you'd like clarification on; and (ii.) A bigger question you have about the findings/work in this area. Hess will be joining us to present to our class next Friday.