
Automatic Emotion Recognition: Quantifying Dynamics and Structure in Human Behavior.

dc.contributor.author: Kim, Yelin
dc.date.accessioned: 2016-09-13T13:54:28Z
dc.date.available: NO_RESTRICTION
dc.date.available: 2016-09-13T13:54:28Z
dc.date.issued: 2016
dc.date.submitted: 2016
dc.identifier.uri: https://hdl.handle.net/2027.42/133459
dc.description.abstract: Emotion is a central part of human interaction, one that strongly influences its overall tone and outcome. Today's human-centered interactive technology can greatly benefit from automatic emotion recognition, as the extracted affective information can be used to measure, transmit, and respond to user needs. However, developing such systems is challenging because emotional expressions and their dynamics are complex: they are inherently multimodal, spanning audio and visual channels, and they mix multiple factors of modulation that arise when a person speaks. To overcome these challenges, this thesis presents data-driven approaches that quantify the underlying dynamics in audio-visual affective behavior. The first set of studies lays the foundation and central motivation of this thesis: we discover that it is crucial to model the complex non-linear interactions between audio and visual emotion expressions, and that dynamic emotion patterns can be used for emotion recognition. Next, this understanding of the complex characteristics of emotion leads us to examine multiple sources of modulation in audio-visual affective behavior. Specifically, we focus on how speech modulates facial displays of emotion, developing a framework that uses speech signals, which alter the temporal dynamics of individual facial regions, to temporally segment and classify facial displays of emotion. Finally, we present methods to discover emotionally salient events in audio-visual data. We demonstrate that different modalities, such as the upper face, lower face, and speech, express emotion with different timings and at different time scales, varying with the emotion type. We further extend this idea to another aspect of human behavior, human action events in videos, showing how transition patterns between events can be used to automatically segment and classify action events. Our experimental results on audio-visual datasets show that the proposed systems not only improve recognition performance but also provide descriptions of how affective behaviors change over time. We conclude this dissertation with future directions in three main research topics: machine adaptation for personalized technology, assistive systems for human-human interaction, and human-centered multimedia content analysis.
dc.language.iso: en_US
dc.subject: emotion recognition
dc.subject: affective computing
dc.subject: multimodal
dc.subject: machine learning
dc.title: Automatic Emotion Recognition: Quantifying Dynamics and Structure in Human Behavior.
dc.type: Thesis (en_US)
dc.description.thesisdegreename: PhD
dc.description.thesisdegreediscipline: Electrical Engineering: Systems
dc.description.thesisdegreegrantor: University of Michigan, Horace H. Rackham School of Graduate Studies
dc.contributor.committeemember: Mower Provost, Emily Kaplan
dc.contributor.committeemember: Lee, Honglak
dc.contributor.committeemember: Hero, Alfred O., III
dc.contributor.committeemember: Corso, Jason
dc.contributor.committeemember: Lyu, Siwei
dc.subject.hlbsecondlevel: Computer Science
dc.subject.hlbsecondlevel: Electrical Engineering
dc.subject.hlbtoplevel: Engineering
dc.description.bitstreamurl: http://deepblue.lib.umich.edu/bitstream/2027.42/133459/1/yelinkim_1.pdf
dc.identifier.orcid: 0000-0002-6503-4637
dc.identifier.name-orcid: Kim, Yelin; 0000-0002-6503-4637 (en_US)
dc.owningcollname: Dissertations and Theses (Ph.D. and Master's)


