
Machine Perception for Human Machine Interaction
Paul L Ruvolo · Marian S Bartlett · Nicholas J Butko · Claudia Lainscsek · Gwendolen C Littlewort · Jacob Whitehill · Tingfan Wu · Javier R Movellan

Wed Dec 10 07:30 PM -- 12:00 AM (PST)

We present four live demonstrations of state-of-the-art machine perception technologies for real-time interaction between humans and machines.

(1) Computer Expression Recognition Toolbox (CERT). CERT is the first system for fully automated coding of the Facial Action Coding System (FACS). FACS is a method from experimental psychology that decomposes facial expressions into 46 component movements and enables investigation of new relationships between facial movement and internal state. Machine learning applied to the outputs of CERT has been shown to differentiate fake from real pain expressions with greater accuracy than naive human subjects, and to detect driver fatigue in driving simulators by predicting a crash 60 seconds before it occurs.

(2) Real-time detection of auditory moods. Detection of auditory phenomena (e.g., the emotion in a speaker's voice, or the presence of laughter or music) gives robots a valuable tool for inferring the current social mood, and the ability to sense these dynamics in real time opens the door to robots that interact more seamlessly with humans. Our approach adapts state-of-the-art object recognition algorithms to the task of auditory category recognition, achieving excellent performance at little computational cost.

(3) Real-time Infomax approach to visual saliency. While recent years have seen an explosion in models of human visual attention under task-free conditions, all or nearly all existing models are unsuitable for robots because they rely on complicated, slow calculations. We will demonstrate a real-time active camera that orients itself toward the most informative regions of the visual scene. The approach is based on a Bayesian model of visual saliency that fits human eye movements well in open-ended tasks, and it runs efficiently (100 FPS) on a modern low-end computer.

(4) Real-time facial expression analysis for automated tutoring systems. Using the CERT toolbox (1), we investigate machine perception techniques for estimating a subject's perceived level of difficulty and his or her desired playback speed of a video lecture on a second-by-second basis. The demo runs in real time and automatically modulates the speed of a video lecture to match the subject's current grasp of the material.
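The closed loop in demo (4) can be sketched as follows. This is a minimal illustrative model, not CERT's actual pipeline: the expression-channel weights, the tanh squashing, and the speed bounds are all assumptions introduced here for illustration.

```python
import numpy as np

def playback_speed(au_intensities, weights, base=1.0, lo=0.5, hi=2.0):
    """Map facial action unit intensities to a video playback rate.

    Hypothetical linear read-out of perceived difficulty: the weights,
    the tanh squashing, and the [lo, hi] speed bounds are illustrative
    assumptions, not the demo's fitted model.
    """
    # Higher weighted sum = more apparent difficulty -> slow the video down.
    difficulty = float(np.dot(weights, au_intensities))
    return float(np.clip(base - 0.5 * np.tanh(difficulty), lo, hi))
```

In the live demo this kind of estimate would be refreshed continuously, so the lecture speed tracks the viewer's expression on a second-by-second basis.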

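A Bayesian saliency model of the kind used in demo (3) can be read as assigning each location the surprise -log p(feature): regions whose features are improbable under the scene's statistics are informative. A minimal sketch under that assumption, with a raw intensity histogram standing in for the demo's actual feature model:

```python
import numpy as np

def saliency_map(frame, bins=32, eps=1e-8):
    """Self-information saliency: rare feature values are salient.

    Minimal sketch of the -log p(feature) idea using a raw intensity
    histogram (values in [0, 1]) as the feature model; the demo's
    actual Bayesian model uses richer features.
    """
    hist, edges = np.histogram(frame, bins=bins, range=(0.0, 1.0), density=True)
    p = hist * (edges[1] - edges[0])            # probability mass per bin
    idx = np.clip((frame * bins).astype(int), 0, bins - 1)
    return -np.log(p[idx] + eps)                # surprise at each pixel

def most_informative_point(frame):
    """Where an active camera driven by this map would orient next."""
    s = saliency_map(frame)
    return np.unravel_index(np.argmax(s), s.shape)
```

An active camera running such a model simply pans toward the argmax of the saliency map on each frame, which is what makes the approach cheap enough for real-time operation.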
Author Information

Paul L Ruvolo (UC San Diego)
Marian S Bartlett (Apple, Inc.)
Nicholas J Butko (Univ. of California, San Diego)
Claudia Lainscsek (University of California, San Diego)
Gwendolen C Littlewort (University of California, San Diego)
Jacob Whitehill (University of California, San Diego)
Tingfan Wu (University of California, San Diego)
Javier R Movellan (University of California, San Diego)
