- Oxford Library of Psychology
- The Oxford Handbook of Affective Computing
- About the Editors
- Introduction to Affective Computing
- The Promise of Affective Computing
- A Short History of Psychological Perspectives on Emotion
- Neuroscientific Perspectives of Emotion
- Appraisal Models
- Emotions in Interpersonal Life: Computer Mediation, Modeling, and Simulation
- Social Signal Processing
- Why and How to Build Emotion-Based Agent Architectures
- Affect and Machines in the Media
- Automated Face Analysis for Affective Computing
- Automatic Recognition of Affective Body Expressions
- Speech in Affective Computing
- Affect Detection in Texts
- Physiological Sensing of Emotion
- Affective Brain-Computer Interfaces: Neuroscientific Approaches to Affect Detection
- Interaction-Based Affect Detection in Educational Software
- Multimodal Affect Recognition for Naturalistic Human-Computer and Human-Robot Interactions
- Facial Expressions of Emotions for Virtual Characters
- Expressing Emotion Through Posture and Gesture
- Emotional Speech Synthesis
- Emotion Modeling for Social Robots
- Preparing Emotional Agents for Intercultural Communication
- Multimodal Affect Databases: Collection, Challenges, and Chances
- Ethical Issues in Affective Computing
- Research and Development Tools in Affective Computing
- Emotion Data Collection and Its Implications for Affective Computing
- Affect Elicitation for Affective Computing
- Crowdsourcing Techniques for Affective Computing
- Emotion Markup Language
- Machine Learning for Affective Computing: Challenges and Opportunities
- Feeling, Thinking, and Computing with Affect-Aware Learning Technologies
- Enhancing Informal Learning Experiences with Affect-Aware Technologies
- Affect-Aware Reflective Writing Studios
- Emotion in Games
- Autonomous Closed-Loop Biofeedback: An Introduction and a Melodious Application
- Affect in Human-Robot Interaction
- Virtual Reality and Collaboration
- Unobtrusive Deception Detection
- Affective Computing, Emotional Development, and Autism
- Relational Agents in Health Applications: Leveraging Affective Computing to Promote Healing and Wellness
- Cyberpsychology and Affective Computing
Abstract and Keywords
This chapter is from the forthcoming The Oxford Handbook of Affective Computing edited by Rafael Calvo, Sidney K. D'Mello, Jonathan Gratch, and Arvid Kappas. There is no single agreed-upon description of emotions or related terms in the emotion research literature. A generally useful emotion markup language should, therefore, provide a rich set of descriptive mechanisms. EmotionML has been developed at the World Wide Web Consortium by members of the affective computing community with very diverse backgrounds. It provides representations of affective states that aim to satisfy the needs of the majority of emotion researchers and application developers alike. Emotions can be represented in terms of categories, dimensions, appraisals, and action tendencies, with a single <emotion> element containing one or more such descriptors. As it is neither possible to standardize a closed set of emotion terms nor desirable to leave the choice of labels completely undefined, EmotionML provides an “emotion vocabulary” mechanism to flexibly select descriptors. This chapter describes selected aspects of EmotionML 1.0 and the procedure and thinking behind its development.
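To illustrate the descriptive mechanisms the abstract mentions, a minimal EmotionML 1.0 fragment might look as follows. The namespace and the "Big Six" and PAD vocabulary URIs are taken from the W3C EmotionML specification and its companion vocabularies note; the annotated values themselves are invented for illustration and should be treated as a sketch, not as normative usage:

```xml
<emotionml xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6"
           dimension-set="http://www.w3.org/TR/emotion-voc/xml#pad">
  <!-- A single <emotion> element may combine several descriptor types,
       here a category label plus two dimensional ratings. -->
  <emotion>
    <category name="happiness"/>
    <dimension name="pleasure" value="0.9"/>
    <dimension name="arousal" value="0.7"/>
  </emotion>
</emotionml>
```

The `category-set` and `dimension-set` attributes implement the "emotion vocabulary" mechanism: rather than hard-coding a closed list of emotion terms into the language, each document declares which published vocabulary its labels are drawn from.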
Marc Schröder is a Senior Researcher at the DFKI Language Technology Lab.
Paolo Baggia, Loquendo
Felix Burkhardt does tutoring, consulting, research, and development in the fields of human-machine dialog systems, text-to-speech synthesis, speaker classification, ontology-based natural language modeling, voice search, and emotional human-machine interfaces. Originally a speech synthesis expert at the Technical University of Berlin, he wrote his PhD thesis on the simulation of emotional speech by machines, recorded the Berlin acted emotions database "EmoDB", and maintains several open source projects, including the emotional speech synthesizer "Emofilt" and the speech labeling and annotation tool "Speechalyzer". He has been working for Deutsche Telekom AG since 2000, currently at the Telekom Innovation Laboratories in Berlin. He was a member of the European Network of Excellence HUMAINE on emotion-oriented computing and is currently the editor of the W3C Emotion Markup Language specification. He leads the EIT ICT Labs activity on multimodal interaction.
Catherine Pelachaud is a Director of Research at CNRS in the LTCI laboratory, TELECOM ParisTech. She received her PhD in Computer Graphics from the University of Pennsylvania, Philadelphia, USA, in 1991. Her research interests include embodied conversational agents, nonverbal communication (face, gaze, and gesture), expressive behaviors, and socio-emotional agents. She has been a member of the HUMAINE Association committee. She is an associate editor of several journals, including IEEE Transactions on Affective Computing, ACM Transactions on Interactive Intelligent Systems, and the Journal on Multimodal User Interfaces. She has co-edited several books on virtual agents and emotion-oriented systems.
Christian Peter obtained his master's degree in Electrical Engineering in 1996 from the University of Rostock, Germany. From 1997 to 2000 he was a researcher at the Computing Laboratory of Oxford University, UK, working on hardware development and systems design for novel sensor technologies. Since 2000 he has been with Fraunhofer IGD Rostock, focusing his research on the development of intelligent, self-contained, non-obtrusive sensors for affect-related physiological parameters and on the analysis and application of the obtained data.
Enrico Zovato, Loquendo