A paper presented at the Turn-Taking and Coordination in Human-Machine Interaction symposium (2015 AAAI Spring Symposium Series, Stanford University, Palo Alto, California, March 23-25, 2015). I had long worked on how to visualize multimodal interaction. After several attempts, I arrived at a solution that I found to be … Continue reading
At a recent hackathon we chose to implement a simple tool for visualizing expenses, e.g. from a business trip, and classifying those expenses with appropriate labels into pre-defined classes like food, transport, hotel, etc. The hackathon lasted 1.5 days, and in that time … Continue reading
Designed a complex dashboard for monitoring the operations of field technicians in the oil industry. The work included user interviews, concept generation, and visualization with example data.
A continuation of an earlier project. The goal was to discover how different modality channels contribute to the overall success of a dialogue (i.e. system-user interaction). The implementation takes an EMMA-like XML-based description of the dialogue and analyses, within each dialogue, for each … Continue reading
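As a rough illustration of the kind of processing involved, the sketch below parses an EMMA-like XML dialogue log and tallies events per modality channel. The element and attribute names (`dialogue`, `turn`, `interpretation`, `medium`) are assumptions for the example, not the actual schema used in the project.

```python
# Hypothetical sketch: tally interpretation events per modality channel
# in an EMMA-like XML dialogue description. Element/attribute names are
# assumed for illustration, not taken from the real project data.
import xml.etree.ElementTree as ET
from collections import Counter

SAMPLE = """
<dialogue>
  <turn speaker="user">
    <interpretation medium="voice" confidence="0.92"/>
    <interpretation medium="gesture" confidence="0.75"/>
  </turn>
  <turn speaker="system">
    <interpretation medium="voice" confidence="0.88"/>
  </turn>
</dialogue>
"""

def modality_counts(xml_text):
    """Count interpretation events per modality channel in one dialogue."""
    root = ET.fromstring(xml_text)
    return Counter(
        interp.get("medium")
        for interp in root.iter("interpretation")
    )

counts = modality_counts(SAMPLE)
print(counts)  # e.g. Counter({'voice': 2, 'gesture': 1})
```

Per-dialogue counts like these could then feed a visualization comparing how each channel contributes across the corpus.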
An old project revisited with new tools. Back in 2000 I published a paper on “Visualisation of spoken dialogues”. Since then, visualization tools have developed significantly, and the current effort focuses on multimodal interaction. More info available around May 2012. Figures: © 2012 … Continue reading