Teaching a robot how to play a card game
Multimodal Instruction-Based Learning for Personal Robots
With the kind support of Nuance
The overall aim is the development of human-robot interfaces that allow untrained users to instruct robots using communication methods natural to humans.
This project focuses on card game instructions, in a scenario where the user of a personal robot wishes to play a new card game with the robot and must first explain the rules of the game. Game instructions are a good example of more general instructions to a personal robot, due to the range of instruction types they contain: sequences of actions to perform and rules to apply. The objective is to develop a robot-student able to understand the instructions from the human teacher and to integrate them in a way that supports a game-playing behaviour.
Method: The project starts with the recording of a corpus of instructions between a human teacher and a human student (Figure 1, Publication 1). Starting a robot development project by recording users is an approach termed "corpus-based robotics".
Figure 1: Setup for corpus collection. The teacher communicates with the student (on the left) using spoken instructions and gestures mediated by the touch screen.
One of the problems to be solved was the
synchronization of chunks of verbal instruction and the corresponding
chunks of gestural demonstrations (Publications 3 & 4).
Figure 2: Time-lines of speech and gesture, where diagonal lines indicate which utterance and gesture are paired.
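The utterance-gesture pairing illustrated by the time-lines can be sketched in code. This is a minimal illustration only, assuming each event carries start/end timestamps; it pairs each utterance with the gesture of greatest temporal overlap (nearest onset as a tie-breaker) and is not the method published in Publications 3 & 4:

```python
from dataclasses import dataclass

@dataclass
class Event:
    label: str
    start: float  # seconds
    end: float

def overlap(a: Event, b: Event) -> float:
    """Length of the temporal intersection of two events (0 if disjoint)."""
    return max(0.0, min(a.end, b.end) - max(a.start, b.start))

def pair_events(utterances, gestures):
    """Pair each utterance with the gesture it overlaps most in time;
    if no gesture overlaps, fall back to the one with the nearest onset."""
    pairs = []
    for u in utterances:
        best = max(gestures, key=lambda g: (overlap(u, g), -abs(g.start - u.start)))
        pairs.append((u.label, best.label))
    return pairs

# Hypothetical chunks of a card game instruction session.
utterances = [Event("put the card here", 0.0, 1.8), Event("then draw one", 2.5, 3.4)]
gestures   = [Event("point-at-pile", 0.4, 1.2), Event("tap-deck", 3.0, 3.6)]
print(pair_events(utterances, gestures))
# → [('put the card here', 'point-at-pile'), ('then draw one', 'tap-deck')]
```

In practice the hard part, addressed in the publications, is that speech and gesture chunks are not neatly aligned in time, which is why a simple overlap criterion alone is not sufficient.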
Current work covers:
- The development of a
semi-automatic method for the design of speech recognition grammars starting from a corpus.
- The analysis of game rule instructions to infer and implement the cognitive functions required of a learner robot.
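As an illustration of the first item, a grammar can be derived from corpus utterances by substituting word classes and collecting the resulting sentence templates. The following is a toy sketch of that idea only: the word classes and the loosely JSGF-like output format are hypothetical, not the project's actual method:

```python
from collections import defaultdict

# Hypothetical word classes; a semi-automatic method would derive these
# from the corpus with human review rather than hard-code them.
WORD_CLASSES = {
    "ace": "CARD", "king": "CARD", "queen": "CARD",
    "pile": "PLACE", "deck": "PLACE", "hand": "PLACE",
}

def corpus_to_rules(corpus):
    """Collapse corpus utterances into templates by substituting word
    classes, then emit one grammar rule per distinct template plus one
    rule per word class."""
    templates = defaultdict(list)
    for sentence in corpus:
        tokens = [f"<{WORD_CLASSES[w]}>" if w in WORD_CLASSES else w
                  for w in sentence.lower().split()]
        templates[" ".join(tokens)].append(sentence)
    rules = [f"<utt-{i}> = {t} ;" for i, t in enumerate(sorted(templates))]
    classes = defaultdict(set)
    for word, cls in WORD_CLASSES.items():
        classes[cls].add(word)
    for cls in sorted(classes):
        rules.append(f"<{cls}> = {' | '.join(sorted(classes[cls]))} ;")
    return rules

corpus = ["put the ace on the pile",
          "put the king on the pile",
          "put the queen on the deck"]
for rule in corpus_to_rules(corpus):
    print(rule)
# → <utt-0> = put the <CARD> on the <PLACE> ;
#   <CARD> = ace | king | queen ;
#   <PLACE> = deck | hand | pile ;
```

Here three corpus sentences collapse into a single rule, which is the point of the approach: the grammar generalises beyond the exact utterances recorded.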
Publications:
1. "Corpus Collection for the Design of User-Programmable Robots" (PDF)
Wolf J.C. and Bugmann G. (2005). Proc. TAROS'05, London.
2. "The Impact of Spoken Interfaces on the Design of Service Robots"
Bugmann G., Wolf J.C. and Robinson P. (2005). Industrial Robot, 32:6, 499-504.
3. "Timing of visual and spoken input in robot instructions"
Wolf J.C. and Bugmann G. (2006). Proc. EUROS'06 International Workshop on Vision Based Human-Robot Interaction, 18 March 2006, Palermo.
4. "Speech and Gesture in Multimodal Instruction Systems" (PDF 410KB)
Wolf J.C. and Bugmann G. (2006). Proc. IEEE RO-MAN'06, 6-8 Sept. 2006, Hatfield, UK.
5. "Rules in Human-Robot Instructions" (PDF 258KB)
Wolf J.C. and Bugmann G. (2007). Proc. IEEE RO-MAN'07, Jeju Island, Korea.
6. "Converting Multi-Modal Task Instructions to Rule-Based Robot Instructions" (PDF)
Wolf J.C. and Bugmann G. (2008). Proc. IEEE RO-MAN'08, Munich, Germany, pp. 586-591.