We will soon be looking for a PhD student in the context of the European Commission-funded European Training Network ANIMATAS (Advancing intuitive human-machine interaction with human-like social capabilities for education in schools).
More information to follow.
Elena Tsankova and Arvid Kappas will present a poster titled “Emotional intelligence may be linked to the ability for trustworthiness encoding in face sketching” at the International Convention of Psychological Science 2017 in Vienna, Austria.
We’ll be happy to see you – stop by if you are around!
Here is the poster abstract:
Trustworthiness and emotion expression are linked on the perceptual level (Oosterhof & Todorov, 2009). Here we show that the two dimensions may be linked on a more abstract level, reflected in the positive correlation between the ability to interpret facial emotion and the quality of trustworthiness encoded in facial sketches.
And the poster summary:
The perception (decoding) of facial trustworthiness is closely linked to the perception of emotion expressions (e.g., Oosterhof & Todorov, 2009). Here we explored the possibility that trustworthiness and emotion are also linked at the encoding level. More precisely, we tested whether emotional intelligence was associated with the capacity to encode trustworthiness in face sketches. Forty young adults (22 female) used an online version of Ultimate Flash Face, software resembling a police sketch tool, to create a trustworthy female, a trustworthy male, an untrustworthy female, and an untrustworthy male face. Each encoder also filled in the Toronto Empathy Questionnaire (TEQ; Spreng, McKinnon, Mar, & Levine, 2009) and completed the revised version of the “Reading the Mind in the Eyes” Test (Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001). The resulting 160 sketches were split into groups of 40, and each group was rated for trustworthiness, femininity-masculinity, attractiveness, and positivity-negativity of facial expression by an independent sample of 20 decoders (10 female) in an online survey. For each encoder we computed encoding quality as a difference score between the trustworthy and untrustworthy sketches, using the average decoder rating for each perceptual dimension (trustworthiness, femininity-masculinity, attractiveness, and positivity-negativity of facial expression). This encoding quality score (EQS) provided an index of how successful the encoding was: the larger the perceptual difference, the more successful the encoding. We then correlated the EQS for each perceptual dimension with the encoders’ TEQ and Eyes Test scores. A positive correlation would indicate a positive link between encoder emotional intelligence and the quality of the generated sketches.
Encoders with higher emotional intelligence (higher TEQ and Eyes Test scores) were expected to have produced sketches that lay further apart on the perceptual dimensions, especially the trustworthiness dimension. We found a positive correlation between encoders’ performance on the Eyes Test and the EQS for femininity-masculinity and attractiveness (ps < .05), as well as a trend toward such a correlation for trustworthiness (p < .07). The encoders’ TEQ scores did not correlate with the EQS on any perceptual dimension. Although further work in this line of research is necessary, the current data suggest that there may indeed be a link between emotional intelligence and trustworthiness at the encoding level as well. Interestingly, not all aspects of emotional intelligence might be involved in the encoding of trustworthiness. In particular, empathy might not be crucial for the encoding of visual cues to trustworthiness, whereas the ability to decode emotions appears to be. This work enhances our knowledge about the shared perceptual basis of trustworthiness and emotion expressions by extending the link between trustworthiness and emotion expression to the encoding level.
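For readers curious about the arithmetic behind the analysis, here is a minimal sketch of the difference-score logic described above: an EQS per encoder on one perceptual dimension, then a Pearson correlation of those scores with a test score. All function names and the toy ratings are illustrative assumptions, not the actual study data or analysis code.

```python
# Illustrative sketch of the EQS (encoding quality score) computation:
# mean decoder rating of an encoder's trustworthy sketches minus that
# of the untrustworthy ones, then correlated with a test score.
from statistics import mean

def eqs(trust_ratings, untrust_ratings):
    """Difference score on one perceptual dimension for one encoder."""
    return mean(trust_ratings) - mean(untrust_ratings)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: three encoders, decoder trustworthiness ratings per sketch.
trust = [[6.1, 5.8], [5.0, 5.2], [6.8, 6.5]]    # trustworthy sketches
untrust = [[2.9, 3.1], [4.1, 4.4], [2.2, 2.0]]  # untrustworthy sketches
eyes_scores = [27, 22, 31]                       # hypothetical Eyes Test scores

scores = [eqs(t, u) for t, u in zip(trust, untrust)]
print(scores)                                    # one EQS per encoder
print(round(pearson(scores, eyes_scores), 3))    # positive = better encoders
```

A larger EQS means the decoders perceived a bigger gap between an encoder's trustworthy and untrustworthy sketches, i.e., more successful encoding.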
On Wednesday, March 16, 2016, 12:45–14:00 Arvid Kappas will give a talk titled “The promises and challenges of affective computing: The case of EMOTE” in the CoWell Brown Bag Research Seminar.
Location: Jacobs University Bremen, Res. IV Conference Room (Room 52)
You will find the abstract of the talk here.
Stop by if you are around!
On January 28, 2016 Arvid Kappas will give a keynote talk titled “When Machines Try to Infer Emotions from Nonverbal Behavior: Issues in Affective Computing” at the Nonverbal Communication preconference of the 17th Annual Convention of the Society for Personality and Social Psychology in San Diego, California.
Check it out if you are around!
We just had a new paper published!
Elena Tsankova and Arvid Kappas show that the smoothness of facial skin affects not only how attractive and healthy but also how trustworthy and competent one appears! The authors also present initial evidence that facial skin influences the perception of physical and psychological traits via slightly different mechanisms.
Check out the full publication here.
And here are a couple of links to the paper in the press:
Over 100 participants in IROS (International Conference on Intelligent Robots and Systems) 2015, which took place in Hamburg, visited the Jacobs campus on September 28! They attended a series of demos in our lab, where we talked about psychophysiological assessment of responses to robots, as well as the Bremen Emotional Sounds Toolkit (BEST). Here are a couple of pictures from the tour of our lab:
IROS (International Conference on Intelligent Robots and Systems) 2015 will take place from September 28 to October 2 in nearby Hamburg. In this context there will be a visit of over 100 roboticists to the Jacobs University campus in Bremen for demos on robotics research. We will be presenting our laboratories for psychophysiological assessment of responses to robots, as well as the Bremen Emotional Sounds Toolkit (BEST).