
Human-Centered Optimization of Mobile Sign Language Video Communication

by Jessica Julie Tran

Institution: University of Washington
Degree: PhD
Year: 2014
Keywords: American Sign Language; Deaf community; frame rate/bit rate; Human-Computer Interaction; intelligibility; video compression; Electrical engineering
Record ID: 2045421
Full text PDF: http://hdl.handle.net/1773/26885


The proliferation of mobile devices is greater than ever; however, bandwidth and battery life have not grown accordingly to support mainstream use of mobile video communication. This dissertation contributes to the continued effort of making mobile sign language communication more accessible and affordable to deaf and hard-of-hearing people. I investigate the lower limits at which mobile sign language video can be transmitted while maintaining intelligibility, in order to reduce bandwidth use and extend battery life. This work presents the Human Signal Intelligibility Model (HSIM) to address the lack of uniformity in how intelligibility and comprehension are operationalized for evaluation. The HSIM influenced the design of four web studies: (1) investigating perceived intelligibility of sign language video transmitted at low frame rates and low bit rates below the video transmission standard recommended by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Q.26/16 (at least 25 fps and 100 kbps); (2) investigating the relationship between response time and video intelligibility, which led to the creation of the Intelligibility Response-Time Method; (3) evaluating perceived video quality of different power-saving algorithms that exploit qualities unique to sign language; and (4) comparing objective video quality measures to subjective responses. Results revealed an "intelligibility ceiling effect" for video transmission rates: increasing the frame rate above 10 fps and the bit rate above 60 kbps did not improve perceived video intelligibility. These findings suggest that the recommended ITU-T sign language transmission rates can be relaxed while still providing intelligible American Sign Language (ASL) video, thereby reducing bandwidth and network load.
To investigate how fluent ASL signers adapt to lower video transmission rates, I conducted a laboratory study in which pairs of fluent signers held free-form conversations over an experimental smartphone app transmitting video at frame rates and bit rates well below the ITU-T standard. Participants successfully held intelligible conversations at all frame rates, even though they noticed the lower quality of video transmitted at 5 fps/25 kbps. Moreover, video transmitted at 10 fps/50 kbps or higher did not significantly improve intelligibility, corroborating the web study findings. Finally, I conducted a field study observing everyday use of an experimental smartphone app transmitting video at rates below the ITU-T standard. The field study revealed that participants preferred mobile video chat over texting for gathering in-the-moment information because of its faster response time. Taken together, the findings of this dissertation support the recommendation that intelligible mobile sign language conversations can occur at video transmission rates far below the ITU-T standard, optimizing resource consumption, video intelligibility, and user preference. The thesis of my…