HOSPISIGN: AN INTERACTIVE SIGN LANGUAGE PLATFORM FOR HEARING IMPAIRED

Abstract:
Sign language is the natural medium of communication for the Deaf community. In this study, we developed HospiSign, an interactive communication interface for hospitals that uses computer vision-based sign language recognition methods. The objective of this paper is to review sign language based Human-Computer Interaction applications and to introduce HospiSign in this context. HospiSign is designed to meet deaf visitors at the information desk of a hospital and to assist them during their visit. The interface guides deaf visitors to answer certain questions and to express the purpose of their visit in sign language, without the need for a translator. The system consists of a computer, a touch display that visualizes the interface, and a Microsoft Kinect v2 sensor that captures the users' sign responses. HospiSign recognizes isolated signs within a structured activity diagram using Dynamic Time Warping based classifiers. To evaluate the developed interface, we performed usability tests and found that the system was able to assist its users in real time with high accuracy.
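The abstract does not spell out how the Dynamic Time Warping (DTW) classifiers work, so the sketch below is only a minimal illustration of DTW-based nearest-template sign classification, not the authors' implementation. The function names (dtw_distance, classify_sign) and the use of per-frame Kinect skeleton-joint coordinates as features are assumptions made for this example.

import numpy as np

# Illustrative sketch only; not the HospiSign implementation.
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two sign sequences.
    a, b: arrays of shape (T1, D) and (T2, D), e.g. per-frame
    skeleton-joint coordinates (an assumed feature choice)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame cost
            cost[i, j] = d + min(cost[i - 1, j],      # skip a query frame
                                 cost[i, j - 1],      # skip a template frame
                                 cost[i - 1, j - 1])  # align the two frames
    return cost[n, m]

def classify_sign(query, templates):
    """Return the label of the reference sign whose DTW distance
    to the query sequence is smallest (nearest-template rule).
    templates: list of (label, sequence) pairs."""
    return min(templates, key=lambda t: dtw_distance(query, t[1]))[0]

Because the activity diagram constrains which signs are expected at each step of the dialogue, a nearest-template search over a small candidate set like this can plausibly run in real time.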