An Opportunity to Investigate the Role of Specific Nonverbal Cues and First Impressions in Interviews Using Deepfake-Based Controlled Video Generation
Abstract
The study of nonverbal cues in dyadic interactions, such as job interviews, mostly relies on video recordings and does not allow researchers to disentangle the role of specific cues. It is thus unclear whether, for instance, an interviewee who smiles while listening to an interviewer is perceived more favorably than one who only gazes at the interviewer. Even with careful curation of interview recordings, analysis of naturalistic situations cannot isolate the effect of specific nonverbal cues on first impressions. Deepfake technology offers a way to address this challenge by creating highly standardized videos of interviewees manifesting a predetermined behavior (i.e., a combination of specific nonverbal cues). Accordingly, we created a set of deepfake videos that enabled us to manipulate the occurrence of three classes of nonverbal attributes (eye contact, nodding, and smiling). The deepfake videos showed interviewees manifesting one of four behaviors while listening to the interviewer: eye contact with smiling and nodding, eye contact with nodding only, eye contact only, and looking distracted. We then tested whether these combinations of nonverbal cues influenced how the interviewees were perceived with respect to personality, confidence, and hireability. Our work demonstrates the potential of deepfake technology for generating behaviorally controlled videos that are useful for psychology experiments.