The research field of multimodal user interfaces is driven by the vision of a single user interface that addresses and understands all human senses used for communication. The use of a single human sense for communication in an interface is referred to as a modality (e.g., speech or gesture). Multimodal interfaces are a subset of natural user interfaces, which in turn are meant to replace the well-known WIMP paradigm of many graphical user interfaces by making the interaction between user and computer more natural and intuitive.
Related Publications
- Multimodal Interaction Techniques in Scientific Data Visualization: An Analytical Survey
Jannik Fiedler, Stefan Rilling, Manfred Bogen, and Jens Herder. Proceedings of GRAPP 2015, pp. 431-437.
- Interaction Optimization through Pen and Touch, Eye Tracking and Speech Control for the Multimodal Seismic Interpretation Workspace (under review)
Master of Science thesis in Media Informatics by Danyel Kemali, submitted to the University of Applied Sciences in Duesseldorf, Germany, December 2016.