
Visualization of multimodal interaction

A continuation of an earlier project. The goal was to discover how different modality channels contribute to the overall success of a dialogue (i.e. system-user interaction).

The implementation takes an EMMA-like XML-based description of the dialogue and analyses, within each dialogue, which types of modalities users employed at each dialogue state. For speech input, the language itself is examined more closely: uni- and bigrams are generated for each dialogue state and combined across all users. In this way a developer can get a better idea of what is going on inside the interaction and can further tune the system to achieve better performance.
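As a rough sketch of the analysis step, the snippet below parses one dialogue log and collects per-state modality counts plus uni-/bigram counts for speech input. It assumes the EMMA-like format stores each user input as an <interpretation> element carrying dialogue-state, medium and tokens attributes; those names are placeholders for whatever the actual log format uses, not a definitive reading of it.

```python
import xml.etree.ElementTree as ET
from collections import Counter

def ngrams(tokens, n):
    """Return the list of n-grams over a token sequence."""
    return list(zip(*(tokens[i:] for i in range(n))))

def analyse_dialogue(path):
    """Collect per-state modality counts and, for speech input,
    per-state uni- and bigram counts from one dialogue log.
    Element/attribute names below are assumptions about the
    EMMA-like format, not the project's actual schema."""
    modalities = {}  # dialogue state -> Counter over modalities
    unigrams = {}    # dialogue state -> Counter over unigrams
    bigrams = {}     # dialogue state -> Counter over bigrams
    for node in ET.parse(path).iter("interpretation"):
        state = node.get("dialogue-state", "unknown")
        medium = node.get("medium", "unknown")
        modalities.setdefault(state, Counter())[medium] += 1
        if medium == "acoustic":  # speech input: inspect the language too
            tokens = node.get("tokens", "").lower().split()
            unigrams.setdefault(state, Counter()).update(ngrams(tokens, 1))
            bigrams.setdefault(state, Counter()).update(ngrams(tokens, 2))
    return modalities, unigrams, bigrams
```

Combining the results across all users then amounts to summing the per-state Counters of each per-user analysis, after which the aggregated counts can be fed into the visualization.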

A web-based interactive version will follow; stay tuned.