Sensing emotion in video

28/02/2018

In social settings, people interact in close proximity. When analysing such encounters from video, we are typically interested in distinguishing between a large number of different interactions. Our work focuses on training models for interaction recognition and on detecting such interactions in both space and time from video.

Project lead: dr.ir. Ronald Poppe

When considering many different types of interaction, we face two challenges. First, we need to distinguish between interactions that appear very similar at first glance. Second, it becomes more difficult to obtain sufficient training examples for each specific type of interaction. Our research addresses both issues. We have created a framework that can distinguish very fine differences among interactions. Our method can be refined with body part detectors trained on generic images annotated with pose information. Such resources are widely available. We have introduced a training scheme and a model formulation that allow for the inclusion of this auxiliary data.
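One common way to fold auxiliary pose-annotated data into training is a weighted multi-task objective: an interaction-classification loss on the labelled interaction examples plus a pose-supervision term on the auxiliary data. The sketch below is purely illustrative and not the project's actual formulation; the function names, the L2 pose term, and the weighting factor `alpha` are assumptions for the example.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D array of logits.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def combined_loss(interaction_logits, interaction_label,
                  pose_pred, pose_target, alpha=0.5):
    """Illustrative multi-task loss mixing interaction and pose supervision.

    interaction_logits: 1-D array of class scores for the interaction classes
    interaction_label:  index of the true interaction class
    pose_pred/pose_target: flattened body-part keypoint coordinates
                           (auxiliary pose-annotated data)
    alpha: weight of the auxiliary pose term (an assumed hyperparameter)
    """
    # Cross-entropy on the interaction classification head.
    probs = softmax(interaction_logits)
    ce = -np.log(probs[interaction_label])
    # Mean squared error on the body-part keypoints from auxiliary data.
    pose_l2 = np.mean((pose_pred - pose_target) ** 2)
    return ce + alpha * pose_l2
```

With `alpha = 0` the objective reduces to plain interaction classification; increasing `alpha` lets the widely available pose data shape the shared representation.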