2015
Sign Language Recognition with Microsoft Kinect's Depth and Colour Sensors
2015 IEEE INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING APPLICATIONS (ICSIPA)
- Volume: accepted
- First page: 186
- Last page: 190
- Language: English
- Publication type: Research paper (international conference proceedings)
- Publisher: IEEE
In the last few years, many assistive technologies for differently-abled people have been developed, including technologies for recognising sign language so that its users can communicate with others. In this research, we studied sign language recognition using the Microsoft Kinect. Conventionally, the Kinect's depth sensor is used to collect depth and motion features in order to recognise words in sign language. Our proposed method improves on this by adding colour features. The features acquired by the depth and colour sensors were extracted and then learned by a multi-class Support Vector Machine. The learned features were associated with the following words: 'Name', 'No', 'Thank you', 'How many', 'What', 'Where', 'Yes', and 'Your'. An experiment to determine which combination of the three features (depth, motion, and colour) predicted these words most accurately showed that the combination of motion and colour features achieved the highest accuracy, at 95%.
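The classification step described in the abstract can be sketched as a one-vs-rest multi-class SVM over the eight word classes. The sketch below is not the paper's implementation: the feature dimensionality, the Pegasos-style subgradient training, and the randomly generated stand-in feature vectors are all assumptions made purely for illustration.

```python
import random

# Hypothetical sketch: one-vs-rest linear SVMs trained with a simple
# Pegasos-style subgradient method, standing in for the paper's
# multi-class SVM. Feature vectors are random stand-ins for the
# extracted motion/colour descriptors; DIM and all hyperparameters
# are assumptions, not values from the paper.

WORDS = ["Name", "No", "Thank you", "How many", "What", "Where", "Yes", "Your"]
DIM = 16        # assumed length of a combined feature vector
LAMBDA = 0.01   # regularisation strength
EPOCHS = 100

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train_binary_svm(samples, labels):
    """Linear SVM via Pegasos-style subgradient descent (labels in {-1, +1})."""
    w = [0.0] * DIM
    t = 0
    for _ in range(EPOCHS):
        for x, y in zip(samples, labels):
            t += 1
            eta = 1.0 / (LAMBDA * t)
            if y * dot(w, x) < 1:   # margin violated: hinge-loss step
                w = [(1 - eta * LAMBDA) * wi + eta * y * xi
                     for wi, xi in zip(w, x)]
            else:                   # regularisation shrinkage only
                w = [(1 - eta * LAMBDA) * wi for wi in w]
    return w

def train_multiclass(samples, word_ids):
    """One weight vector per word (one-vs-rest)."""
    models = []
    for k in range(len(WORDS)):
        labels = [1 if wid == k else -1 for wid in word_ids]
        models.append(train_binary_svm(samples, labels))
    return models

def predict(models, x):
    scores = [dot(w, x) for w in models]
    return scores.index(max(scores))

# Toy data: one noisy cluster per word, imitating extracted features.
random.seed(0)
centres = [[random.uniform(-3, 3) for _ in range(DIM)] for _ in WORDS]
samples, word_ids = [], []
for k, c in enumerate(centres):
    for _ in range(20):
        samples.append([ci + random.gauss(0, 0.3) for ci in c])
        word_ids.append(k)

models = train_multiclass(samples, word_ids)
accuracy = sum(predict(models, x) == y
               for x, y in zip(samples, word_ids)) / len(samples)
print(f"training accuracy: {accuracy:.2f}")
```

At prediction time, each test vector is scored against all eight models and assigned to the word whose SVM responds most strongly, which is the standard one-vs-rest decision rule for multi-class SVMs.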
- Link information
- ID information
  - Web of Science ID: WOS:000380447200035