Identifying the Addressee in Human-Human-Robot Interactions (2008), by Michael Katzenmaier
Abstract
In this work we investigate the power of acoustic and visual cues, and their combination, to identify the addressee in a human-human-robot interaction. Based on eighteen audiovisual recordings of two humans and a (simulated) robot, we discriminate the interaction of the two humans from the interaction of one human with the robot. The paper compares the results of three approaches. The first approach uses purely acoustic cues to identify the addressee; both low-level, feature-based cues and higher-level cues are examined. In the second approach we test whether the human's head pose is a suitable cue. Our results show that visually estimated head pose is the more reliable cue for identifying the addressee in the human-human-robot interaction. In the third approach we combine the acoustic and visual cues, which yields significant improvements.
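The abstract describes combining an acoustic cue and a head-pose cue into a single addressee decision. The paper does not state the fusion rule here, so the following is only a minimal sketch of one plausible scheme, score-level fusion with a fixed weight; the function names, scores, and the weight value are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: score-level fusion of an acoustic cue and a
# visual (head-pose) cue for addressee classification (robot vs. human).
# Scores, weight, and threshold are illustrative assumptions.

def fuse_scores(acoustic_score, headpose_score, visual_weight=0.7):
    """Combine two per-utterance confidences that the robot is addressed.

    acoustic_score, headpose_score: probabilities in [0, 1], each from
    an independent single-cue classifier. visual_weight gives the
    head-pose cue the larger share, reflecting the abstract's finding
    that head pose is the more reliable cue.
    """
    return visual_weight * headpose_score + (1 - visual_weight) * acoustic_score

def classify_addressee(acoustic_score, headpose_score, threshold=0.5):
    """Return 'robot' if the fused confidence exceeds the threshold."""
    fused = fuse_scores(acoustic_score, headpose_score)
    return "robot" if fused > threshold else "human"

# Example: head pose strongly toward the robot, speech ambiguous.
print(classify_addressee(acoustic_score=0.4, headpose_score=0.9))  # robot
```

A weighted sum is only one option; probabilistic product rules or a trained combiner over both cue scores would also fit the description of combining acoustic and visual evidence.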
Publication details
Source: CiteSeerX - Scientific Literature Digital Library and Search Engine (United States)
Keywords: attentive interfaces, focus of attention, head pose estimation
Type: text
Language: English
Links: 10.1.1.6.1719, 10.1.1.28.8271