ABSTRACT
Identifying the Addressee in Human-Human-Robot Interactions Based on Head Pose and Speech (2008)
Michael Katzenmaier
Abstract
In this work we examine the power of acoustic and visual cues, and their combination, to identify the addressee in a human-human-robot interaction. Based on eighteen audio-visual recordings of two humans and a (simulated) robot, we discriminate the interaction of the two humans from the interaction of one human with the robot. The paper compares the results of three approaches. The first approach uses purely acoustic cues to find the addressee; low-level, feature-based cues as well as higher-level cues are examined. In the second approach we test whether the human's head pose is a suitable cue. Our results show that visually estimated head pose is a more reliable cue for identifying the addressee in the human-human-robot interaction. In the third approach we combine the acoustic and visual cues, which leads to significant improvements.
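The abstract does not specify how the acoustic and visual cues are merged. The sketch below is only an illustration of the general idea of score-level fusion for addressee detection; all names, weights, and thresholds are assumptions and do not come from the paper.

    # Illustrative sketch only -- not the authors' implementation. It combines an
    # acoustic cue score and a head-pose cue score into a single addressee decision
    # (ROBOT vs. HUMAN) by weighted score fusion. Weights and threshold are assumed.
    from dataclasses import dataclass

    @dataclass
    class Utterance:
        acoustic_score: float   # confidence from acoustic cues that speech is robot-directed (0..1)
        head_pose_score: float  # confidence from head pose that the speaker faces the robot (0..1)

    def fuse_scores(utt: Utterance, w_acoustic: float = 0.3, w_visual: float = 0.7) -> float:
        # Weighted linear combination of the two cue scores (weights are assumptions).
        return w_acoustic * utt.acoustic_score + w_visual * utt.head_pose_score

    def classify_addressee(utt: Utterance, threshold: float = 0.5) -> str:
        # Decide whether the utterance is addressed to the robot or the other human.
        return "ROBOT" if fuse_scores(utt) >= threshold else "HUMAN"

    if __name__ == "__main__":
        # Example: speech sounds human-directed, but the speaker looks at the robot.
        print(classify_addressee(Utterance(acoustic_score=0.4, head_pose_score=0.9)))  # ROBOT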
Publication details
Source: CiteSeerX Archive - Scientific Literature Digital Library and Search Engine (United States)
Keywords: attentive interfaces, focus of attention, head pose estimation
Type: text
Language: English