
The New Jersey Institute of Technology's
Electronic Theses & Dissertations Project

Title: Designing multimodal interaction for the visually impaired
Author: Chen, Xiaoyu
View Online: njit-etd2007-063
(xxvi, 319 pages, ~17.0 MB PDF)
Department: Department of Information Systems
Degree: Doctor of Philosophy
Program: Information Systems
Document Type: Dissertation
Advisory Committee: Tremaine, Marilyn M. (Committee chair)
Turoff, Murray (Committee member)
Jones, Quentin (Committee member)
Whitworth, Brian (Committee member)
Glinert, Ephraim P. (Committee member)
Date: 2007-08
Keywords: Multimodal interaction
Speech & touch interface
Accessibility
Interface for visually impaired users
Input dialog design
Availability: Unrestricted
Abstract:

Although multimodal computer input is believed to have advantages over unimodal input, little has been done to understand how to design a multimodal input mechanism to facilitate visually impaired users' information access.

This research investigates sighted and visually impaired users' multimodal interaction choices when given an interaction grammar that supports speech and touch input modalities. It investigates whether task type, working memory load, or the prevalence of errors in a given modality impacts a user's choice. Theories of human memory and attention are used to explain how users coordinate speech and touch input.

Among the abundant findings from this research, the following are the most important in guiding system design: (1) Multimodal input is likely to be used when it is available. (2) Users select input modalities based on the type of task undertaken: they prefer touch input for navigation operations but speech input for non-navigation operations. (3) When errors occur, users prefer to stay in the failing modality rather than switching to another modality for error correction. (4) Despite these common multimodal usage patterns, there is still a high degree of individual difference in modality choice.
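As an illustration only (this sketch is not part of the dissertation, and all type and function names are hypothetical), findings (2) and (3) could be encoded as a default modality-routing policy in a multimodal interface, for example in TypeScript:

// Minimal sketch of a default modality policy based on findings (2) and (3).
type Modality = "speech" | "touch";
interface Operation { name: string; isNavigation: boolean; }

function preferredModality(op: Operation): Modality {
  // Finding (2): users favor touch for navigation operations,
  // speech for non-navigation operations.
  return op.isNavigation ? "touch" : "speech";
}

function modalityAfterError(failed: Modality): Modality {
  // Finding (3): users tend to retry in the failing modality,
  // so the system should not force a switch on a recognition error.
  return failed;
}

// Example: a navigation command defaults to touch; a failed speech
// attempt is retried in speech.
const scroll: Operation = { name: "scroll down", isNavigation: true };
console.log(preferredModality(scroll));    // "touch"
console.log(modalityAfterError("speech")); // "speech"

In such a design, these defaults would only set the initial modality; per finding (4), individual users vary widely and should remain free to override them.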

Additional findings include: (1) Modality switching becomes more prevalent when lower working memory and attentional resources are required for the performance of other concurrent tasks. (2) Higher error rates increase modality switching, but only under duress. (3) Training order affects modality usage: teaching a modality first rather than second increases its use in users' task performance.

In addition to discovering the multimodal interaction patterns above, this research contributes to the field of human-computer interaction design by: (1) presenting the design of an eyes-free multimodal information browser, and (2) presenting a Wizard of Oz method for working with visually impaired users in order to observe their multimodal interaction.

The overall contribution of this work is that it is one of the early investigations into how speech and touch might be combined into a non-visual multimodal system that can be used effectively for eyes-free tasks.


If you have any questions, please contact the ETD Team at libetd@njit.edu.


NJIT's ETD project was given an ACRL/NJ Technology Innovation Honorable Mention Award in spring 2003.