Close relations

Understanding between humans and computers could develop with the emergence of interactive computer agents

Published: Saturday 31 May 1997

Work is currently being carried out all over the world in the field of interactive computer agents - animated characters with 'personalities' who will one day become the 'face' of our computers. The latest idea on this front from MIT's Media Lab in Cambridge, Massachusetts, is Gandalf - a character intended to be capable of face-to-face interaction with people in real time, perceiving their gestures, speech and gaze. He cannot do all that just yet, but the project does show how control of a graphical face can produce some of the behaviour people exhibit in conversation. The eventual aim is to enable people to interact with computers in the same manner they interact with other humans.

Gandalf was developed by MIT graduate Kristinn Thorisson, working with Justine Cassell, head of the MIT Media Lab's Gesture & Narrative Language Group. Currently, to interact with Gandalf, the user must wear a body-tracking suit, an eye tracker and a microphone. Eventually, though, this equipment should become unnecessary as computer-vision systems become able to perceive the user's visual and auditory behaviour on their own. Thorisson explains that Gandalf is built on an architecture for psychosocial dialogue skills that allows the implementation of 'full-duplex' multimodal characters: they accept multimodal input and generate multimodal output in real time, and can be interrupted.
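To make the 'full-duplex' idea concrete, here is a minimal sketch in Python, not drawn from Thorisson's own code: an agent loop that keeps perceiving input while it is still producing output, and lets a new user utterance interrupt a reply in progress. The event names and timings are purely illustrative.

```python
import asyncio

# A minimal sketch (not Thorisson's code): the agent keeps listening for
# multimodal input events while it is still producing output, and a new
# speech event interrupts the reply in progress.

async def perceive(queue):
    """Hypothetical perception loop: pushes (modality, data) events."""
    for event in [("gaze", "at-agent"),
                  ("speech", "Jupiter"),
                  ("speech", "Mars")]:       # the second utterance barges in
        await asyncio.sleep(0.4)             # stand-in for real sensing
        await queue.put(event)

async def respond(topic):
    """Hypothetical output: speak word by word so it can be interrupted."""
    for word in f"Here is what I know about {topic} ...".split():
        print("agent:", word)
        await asyncio.sleep(0.3)

async def dialogue_loop(queue):
    current = None                           # the reply currently being spoken
    while True:
        modality, data = await queue.get()
        if modality == "speech":
            if current and not current.done():
                current.cancel()             # user interrupted: stop talking
            current = asyncio.create_task(respond(data))

async def main():
    queue = asyncio.Queue()
    loop_task = asyncio.create_task(dialogue_loop(queue))
    await perceive(queue)
    await asyncio.sleep(3)                   # let the final reply play out
    loop_task.cancel()

asyncio.run(main())
```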

The architecture is based on three artificial-intelligence (AI) approaches: blackboards, schema theory and behaviour-based AI. Multimodal information streams in from the user and is processed at three different levels, with blackboards used to communicate both immediate and final results between them. An action scheduler then composes particular motor commands and sends them to the agent's animation module.
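A rough Python sketch of the blackboard arrangement described above, again purely illustrative: processing layers post immediate and final results to a shared blackboard, and an action scheduler turns the latest decision into a motor command for the animation module. The layer names and message formats are assumptions, not details from the article.

```python
from collections import defaultdict

class Blackboard:
    """Shared store that the processing layers use to post and read results."""
    def __init__(self):
        self.entries = defaultdict(list)
    def post(self, topic, value):
        self.entries[topic].append(value)
    def latest(self, topic):
        return self.entries[topic][-1] if self.entries[topic] else None

def reactive_layer(inputs, bb):
    # Fast, shallow processing: e.g. notice that the user has started speaking.
    if inputs.get("speech_energy", 0) > 0.5:
        bb.post("user-speaking", True)

def dialogue_layer(bb):
    # Mid-level processing: turn-taking decisions based on immediate results.
    if bb.latest("user-speaking"):
        bb.post("decision", "gaze-at-user")

def content_layer(bb):
    # Slow, deep processing: interpret the utterance and plan a reply.
    if bb.latest("utterance"):
        bb.post("decision", "answer-question")

def action_scheduler(bb):
    # Turns the latest high-level decision into a concrete motor command
    # for the animation module.
    commands = {"gaze-at-user": ("eyes", "look", "user"),
                "answer-question": ("mouth", "speak", "reply")}
    return commands.get(bb.latest("decision"))

bb = Blackboard()
reactive_layer({"speech_energy": 0.8}, bb)
dialogue_layer(bb)
content_layer(bb)
print(action_scheduler(bb))   # -> ('eyes', 'look', 'user')
```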

Part of the work involves generating interactive facial animation in a cartoon style. For this, Thorisson is developing a Toonface system based on an object-oriented approach to graphical faces, which he says allows wacky-looking characters to be constructed rapidly and automatically.

The animation scheme allows a controlling system to address a single feature on the face, or any combination of features, and animate them smoothly from one position to the next. "Any conceivable configuration of any movable facial feature can be achieved instantly without having to add 'examples' into a constantly expanding database. The system employs the notion of 'motors' that operate on the facial features and move them in either one or two dimensions," says Thorisson.
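The 'motor' idea in the quote can be illustrated with a short Python sketch; the class and method names below are invented for illustration and are not the Toonface interface. Each motor moves one feature in one or two dimensions, and a controller can animate any single feature, or any combination, smoothly between positions.

```python
class FacialMotor:
    """Moves one facial feature (e.g. an eyebrow) in one or two dimensions."""
    def __init__(self, feature, position=(0.0, 0.0)):
        self.feature = feature
        self.position = position              # current (x, y) in normalised units
    def frames_to(self, target, steps=10):
        """Yield smoothly interpolated positions from the current position to target."""
        x0, y0 = self.position
        x1, y1 = target
        for i in range(1, steps + 1):
            t = i / steps
            self.position = (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
            yield self.position

class FaceController:
    """Addresses any single motor, or any combination of motors, in one call."""
    def __init__(self, motors):
        self.motors = {m.feature: m for m in motors}
    def animate(self, targets, steps=10):
        # targets: {"left-brow": (0.0, 0.4), "mouth": (0.2, -0.1), ...}
        tracks = {f: self.motors[f].frames_to(p, steps) for f, p in targets.items()}
        for _ in range(steps):
            yield {f: next(track) for f, track in tracks.items()}

face = FaceController([FacialMotor("left-brow"), FacialMotor("mouth")])
for frame in face.animate({"left-brow": (0.0, 0.4), "mouth": (0.2, -0.1)}, steps=3):
    print(frame)
```

Because the positions are computed on demand rather than looked up, no new 'examples' need to be stored as the repertoire of expressions grows, which is the point Thorisson makes above.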

Gandalf can currently answer questions about the planets of the solar system. But future work includes adding more complex natural-language understanding and generation, and an increased ability to follow dialogue in real time.
