• Title/Summary/Keyword: hand language


A study on hand gesture recognition using 3D hand feature (3차원 손 특징을 이용한 손 동작 인식에 관한 연구)

  • Bae Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.4
    • /
    • pp.674-679
    • /
    • 2006
  • In this paper, a gesture recognition system using 3D feature data is described. The system relies on a novel 3D sensor that generates a dense range image of the scene. The main novelty of the proposed system, with respect to other 3D gesture recognition techniques, is its capability for robust recognition of complex hand postures such as those encountered in sign language alphabets. This is achieved by explicitly employing 3D hand features. Moreover, the proposed approach does not rely on colour information and guarantees robust segmentation of the hand under various illumination conditions and scene contents. Several novel 3D image analysis algorithms are presented, covering the complete processing chain: 3D image acquisition, arm segmentation, hand-forearm segmentation, hand pose estimation, 3D feature extraction, and gesture classification. The proposed system is tested in an application scenario involving the recognition of sign-language postures.
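The processing chain listed in this abstract can be sketched end to end on a toy depth map. Everything below (the distance band, the top-half hand/forearm split, the three features) is an illustrative assumption, not the authors' algorithm:

```python
import numpy as np

def segment_arm(depth, near=0.3, far=0.9):
    """Keep only pixels whose range falls in an assumed arm distance band."""
    return (depth > near) & (depth < far)

def split_hand_forearm(mask):
    """Crude hand/forearm split: keep the upper half of the arm mask as the hand."""
    rows = np.where(mask.any(axis=1))[0]
    mid = (rows.min() + rows.max()) // 2
    hand = mask.copy()
    hand[mid:, :] = False
    return hand

def extract_3d_features(depth, hand_mask):
    """Toy 3D features: mean range, range spread, and hand pixel area."""
    vals = depth[hand_mask]
    return np.array([vals.mean(), vals.std(), float(hand_mask.sum())])

depth = np.full((8, 8), 2.0)   # background far from the sensor
depth[:4, 2:6] = 0.5           # synthetic "arm" region closer to the sensor
mask = segment_arm(depth)
hand = split_hand_forearm(mask)
features = extract_3d_features(depth, hand)
print(features.shape)          # (3,)
```

A real system would replace each stub with the paper's corresponding 3D image analysis stage and feed the feature vector to the gesture classifier.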

Sign Language Translation Using Deep Convolutional Neural Networks

  • Abiyev, Rahib H.;Arslan, Murat;Idoko, John Bush
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.2
    • /
    • pp.631-653
    • /
    • 2020
  • Sign language is a natural, visually oriented and non-verbal communication channel between people that facilitates communication through facial/bodily expressions, postures and a set of gestures. It is mainly used for communication with people who are deaf or hard of hearing. In order to understand such communication quickly and accurately, the design of a successful sign language translation system is considered in this paper. The proposed system includes object detection and classification stages. Firstly, the Single Shot MultiBox Detector (SSD) architecture is utilized for hand detection; then a deep learning structure based on Inception v3 plus a Support Vector Machine (SVM), combining the feature extraction and classification stages, is proposed to translate the detected hand gestures. A sign language fingerspelling dataset is used for the design of the proposed model. The obtained results and comparative analysis demonstrate the efficiency of the proposed hybrid structure for sign language translation.
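The hybrid idea in this abstract, a CNN for features and an SVM for the final decision, can be sketched in a self-contained way. The Inception v3 extractor is replaced here by a stub emitting two linearly separable toy features, and the SVM is a minimal linear one trained by sub-gradient descent on the hinge loss; none of this is the authors' actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(label):
    """Stand-in for pooled CNN features of a detected hand crop (toy 2-D blobs)."""
    center = np.array([2.0, 2.0]) if label == 1 else np.array([-2.0, -2.0])
    return center + rng.normal(scale=0.3, size=2)

X = np.stack([cnn_features(l) for l in [1] * 20 + [-1] * 20])
y = np.array([1] * 20 + [-1] * 20)

# Pegasos-style training: hinge-loss sub-gradient with L2 regularization.
w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.1
for _ in range(100):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) < 1:      # margin violated: move toward the sample
            w += lr * (yi * xi - lam * w)
            b += lr * yi
        else:                          # margin satisfied: only shrink weights
            w -= lr * lam * w

accuracy = (np.sign(X @ w + b) == y).mean()
print(accuracy)
```

In the paper's setting, the stub would be replaced by SSD-detected crops passed through Inception v3, with a kernel SVM on top.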

Hierarchical Hidden Markov Model for Finger Language Recognition (지화 인식을 위한 계층적 은닉 마코프 모델)

  • Kwon, Jae-Hong;Kim, Tae-Yong
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.9
    • /
    • pp.77-85
    • /
    • 2015
  • Finger language (fingerspelling) is the part of sign language that expresses vowels and consonants with hand gestures. Korean finger language has 31 gestures, and accurate recognition of each of them requires many learning models. When there are many learning models, searching through them takes a long time, so a real-time recognition system must focus on reducing the search space. To solve this problem, this paper suggests a hierarchical HMM structure that effectively reduces the search space without decreasing the recognition rate. The Korean fingerspelling gestures are divided into three categories according to the direction of the wrist, and a model is searched only within its category. This pre-classification can also distinguish similar Korean fingerspelling gestures and keeps the search space manageable, so the proposed method can be applied to real-time recognition systems. Experimental results demonstrate that the proposed method reduces recognition time to about one third of that of a general HMM recognition method.
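The two-stage search this abstract describes can be sketched as a pre-classification bucket lookup followed by scoring only the models in that bucket. The buckets, gesture labels, and likelihoods below are toy stand-ins for real wrist-direction classification and HMM forward scores:

```python
# Three wrist-direction buckets; full HMM scoring runs only inside one bucket,
# so roughly one third of the models are evaluated per query.
MODELS = {
    "palm_up":   ["ㄱ", "ㄴ"],
    "palm_down": ["ㄷ", "ㄹ"],
    "palm_side": ["ㅁ", "ㅂ"],
}

def wrist_direction(observation):
    """Stage 1: coarse pre-classification by wrist direction (stubbed)."""
    return observation["wrist"]

def hmm_score(model_name, observation):
    """Stage 2: stand-in for an HMM forward-probability evaluation."""
    return 1.0 if model_name == observation["truth"] else 0.1

def recognize(observation):
    bucket = MODELS[wrist_direction(observation)]
    return max(bucket, key=lambda m: hmm_score(m, observation))

obs = {"wrist": "palm_down", "truth": "ㄹ"}
print(recognize(obs))  # ㄹ
```

With 31 gestures split into three categories, each query scores roughly 10 models instead of 31, which matches the roughly threefold speedup the abstract reports.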

Face and Hand Tracking Algorithm for Sign Language Recognition (수화 인식을 위한 얼굴과 손 추적 알고리즘)

  • Park, Ho-Sik;Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.11C
    • /
    • pp.1071-1076
    • /
    • 2006
  • In this paper, we develop a face and hand tracking algorithm for a sign language recognition system. The system is divided into two stages: the initial stage and the tracking stage. In the initial stage, we use skin features to localize the face and hands of the signer. An ellipse model in CbCr space is constructed and used to detect skin color. After the skin regions have been segmented, face and hand blobs are identified by size and facial features, under the assumption that the face moves less than the hands in this signing scenario. In the tracking stage, motion estimation is applied only to the hand blobs, where first and second derivatives are used to predict the position of the hands. We observed tracking-position errors between consecutive frames in which the velocity changed abruptly. To improve tracking performance, our proposed algorithm compensates for these errors by using an adaptive search area to re-compute the hand blobs. The experimental results indicate that our proposed method decreases the prediction error by up to 96.87% with a negligible increase in computational complexity of up to 4%.
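The tracking-stage prediction from first and second derivatives amounts to constant-acceleration extrapolation from the last three positions, and the adaptive search area can be reduced to widening the search window when the prediction error grows. All numbers below are illustrative assumptions, not the paper's parameters:

```python
def predict(p0, p1, p2):
    """Extrapolate the next hand position from the last three (per axis)."""
    v = p2 - p1                # first derivative (velocity)
    a = (p2 - p1) - (p1 - p0)  # second derivative (acceleration)
    return p2 + v + a

print(predict(0.0, 2.0, 4.0))  # uniform motion: 6.0

def search_radius(error, base=8.0, gain=2.0):
    """Adaptive search area: grow the window with the recent prediction error."""
    return base + gain * abs(error)

print(search_radius(3.0))      # 14.0
```

When velocity changes abruptly, the prediction error spikes, the window widens, and the hand blob is re-computed inside the larger area, which is the compensation step the abstract describes.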

Vision-Based Finger Spelling Recognition for Korean Sign Language

  • Park Jun;Lee Dae-hyun
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.6
    • /
    • pp.768-775
    • /
    • 2005
  • Since sign languages are the main means of communication among hearing-impaired people, there are communication difficulties between speaking-oriented people and sign-language-oriented people. Automated sign-language recognition may resolve these problems. In sign languages, finger spelling is used to spell names and words that are not listed in the dictionary. There has been research on gesture and posture recognition using glove-based devices; however, these devices are often expensive, cumbersome, and inadequate for recognizing elaborate finger spelling, and the use of colored patches or gloves also causes discomfort. In this paper, a vision-based finger spelling recognition system is introduced. In our method, captured hand-region images are separated from the background using a skin detection algorithm, assuming that there are no skin-colored objects in the background. Hand postures are then recognized using a two-dimensional grid analysis method. Our recognition system is not sensitive to the size or rotation of the input posture images. By optimizing the weights of the posture features using a genetic algorithm, our system achieved accuracy matching that of systems using devices or colored gloves. We applied our posture recognition system to Korean Sign Language, achieving better than 93% accuracy.
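The two-dimensional grid analysis can be sketched as follows: the segmented hand silhouette is divided into an N x N grid, each cell's occupancy ratio becomes one feature, and classification is a weighted template match. The paper tunes the weights with a genetic algorithm; here they are simply given, and the grid size, templates, and labels are invented for illustration:

```python
import numpy as np

def grid_features(mask, n=4):
    """Occupancy ratio of each cell in an n x n grid over the silhouette."""
    h, w = mask.shape
    cells = mask.reshape(n, h // n, n, w // n)
    return cells.mean(axis=(1, 3)).ravel()   # n*n features in [0, 1]

def classify(feat, templates, weights):
    """Weighted squared-distance match against per-posture templates."""
    dists = {label: np.sum(weights * (feat - t) ** 2)
             for label, t in templates.items()}
    return min(dists, key=dists.get)

hand = np.zeros((8, 8))
hand[2:6, 2:6] = 1.0                         # toy hand silhouette
feat = grid_features(hand)
templates = {"ㄱ": feat, "ㄴ": np.zeros(16)}  # hypothetical posture templates
weights = np.ones(16)                         # GA-optimized in the paper
print(classify(feat, templates, weights))     # ㄱ
```

Because the features are occupancy ratios, the representation is insensitive to uniform scaling of the silhouette, in the spirit of the size-invariance the abstract claims.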


Language Lateralization Using Magnetoencephalography (MEG): A Preliminary Study (뇌자도를 이용한 언어 편재화: 예비 연구)

  • Lee, Seo-Young;Kang, Eunjoo;Kim, June Sic;Lee, Sang-Kun;Kang, Hyejin;Park, Hyojin;Kim, Sung Hun;Lee, Seung Hwan;Chung, Chun Kee
    • Annals of Clinical Neurophysiology
    • /
    • v.8 no.2
    • /
    • pp.163-170
    • /
    • 2006
  • Background: MEG can measure task-specific neurophysiologic activity with good spatial and temporal resolution. Language lateralization by a noninvasive method has been a subject of interest in resective brain surgery. We aimed to develop a paradigm for language lateralization using MEG and to validate its feasibility. Methods: Magnetic fields were recorded during language tasks in 12 neurosurgical candidates and one volunteer with a 306-channel whole-head MEG. The language tasks were word listening, reading, and picture naming. We tested two word-listening paradigms: semantic decision on the meaning of abstract nouns, and recognition of repeated words. The subjects were instructed to name or read silently, and either to respond by pushing a button or not to respond. We determined language dominance from the number of acceptable equivalent current dipoles (ECDs) modeled by sequential single-dipole fitting, and from the mean magnetic field strength computed as a root-mean-square value, in each hemisphere. We also collected clinical data including Wada test results. Results: Magnetic fields evoked by word listening were generally distributed over bilateral temporoparietal areas with variable hemispheric dominance. Language tasks using visual stimuli frequently evoked magnetic fields in the posterior midline area, which made the laterality decision difficult. Responding during the task produced more artifacts and results that differed depending on the responding hand. Laterality decisions based on mean magnetic field strength were more concordant with the Wada test than those based on the ECD count in each hemisphere. Conclusions: A word-listening task without a hand response is the most feasible paradigm for language lateralization using MEG, and the mean magnetic field strength in each hemisphere is a proper index of hemispheric dominance.
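The field-strength comparison this abstract favors can be sketched numerically. The abstract specifies per-hemisphere RMS field strength but not the exact dominance rule, so the standard laterality index LI = (L - R) / (L + R) is an assumption here, and the channel values are invented:

```python
import math

def rms(samples):
    """Root-mean-square summary of per-channel field strengths."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

left = [120.0, 80.0, 100.0]   # toy left-hemisphere channel values (fT)
right = [60.0, 40.0, 50.0]    # toy right-hemisphere channel values (fT)

L, R = rms(left), rms(right)
li = (L - R) / (L + R)        # assumed laterality index: >0 means left-dominant
print("left-dominant" if li > 0 else "right-dominant")
```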


A Machine Learning Approach to Korean Language Stemming

  • Cho, Se-hyeong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.6
    • /
    • pp.549-557
    • /
    • 2001
  • Morphological analysis and POS tagging require a dictionary for the language at hand, but with that approach it is impossible to analyze a language without a dictionary. We also have difficulty if a significant portion of the vocabulary is new or unknown. This paper explores the possibility of learning the morphology of an agglutinative language, in particular Korean, without any prior lexical knowledge of the language. We use unsupervised learning: there is no instructor to guide the outcome of the learner, nor any tagged corpus. The main characteristics of the approach are as follows. First, we use only a raw corpus, without any tags attached or any dictionary. Second, unlike many heuristics that are theoretically ungrounded, this method is based on widely accepted statistical methods. The method is currently applied only to Korean, but since it is essentially language-neutral it can easily be adapted to other agglutinative languages.
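One way to make the dictionary-free idea concrete is a frequency heuristic: split each word at the boundary where both the candidate stem and the candidate suffix recur across the raw corpus. This is a generic sketch of unsupervised stem-suffix segmentation, not the paper's exact statistical model, and the tiny corpus is invented:

```python
from collections import Counter

corpus = ["가방이", "가방을", "학교이", "학교을", "학교에"]  # raw, untagged words

def best_split(word, words):
    """Pick the stem/suffix boundary maximizing stem and suffix frequency."""
    stems = Counter(w[:i] for w in words for i in range(1, len(w)))
    sufs = Counter(w[i:] for w in words for i in range(1, len(w)))
    return max(((word[:i], word[i:]) for i in range(1, len(word))),
               key=lambda p: stems[p[0]] * sufs[p[1]])

print(best_split("가방이", corpus))  # ('가방', '이')
```

Because "가방" recurs as a prefix and "이" recurs as a suffix across the corpus, the split falls at the morpheme boundary without any dictionary, which is the behavior the abstract's approach aims for at scale.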


Sign Language Dataset Built from S. Korean Government Briefing on COVID-19 (대한민국 정부의 코로나 19 브리핑을 기반으로 구축된 수어 데이터셋 연구)

  • Sim, Hohyun;Sung, Horyeol;Lee, Seungjae;Cho, Hyeonjoong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.8
    • /
    • pp.325-330
    • /
    • 2022
  • This paper describes the collection of, and experiments on, datasets for deep learning research on Korean sign language, covering tasks such as sign language recognition, sign language translation, and sign language segmentation. Deep learning research on sign language faces two difficulties. First, sign languages are hard to recognize because they involve multiple modalities, including hand movements, hand directions, and facial expressions. Second, there is a lack of training data: currently, the KETI dataset is the only known Korean sign language dataset for deep learning. Sign language datasets for deep learning research fall into two categories, isolated sign language and continuous sign language, and although several foreign sign language datasets have been collected over time, they too are insufficient for deep learning research on sign language. Therefore, we collected a large-scale Korean sign language dataset and evaluated it using a baseline model named TSPNet, which has SOTA performance in the field of sign language translation. The collected dataset consists of a total of 11,402 image-text pairs. Our experiment with the baseline model on this dataset shows a BLEU-4 score of 3.63, which can serve as a baseline performance figure for the Korean sign language dataset. We hope that our experience of collecting a Korean sign language dataset helps facilitate further research on Korean sign language.
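The BLEU-4 metric reported here can be sketched in its standard single-reference form: the geometric mean of modified 1-4-gram precisions times a brevity penalty. Real evaluations typically use a library such as sacrebleu; the tiny smoothing constant and the sample sentence below are illustrative choices, not the paper's setup:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate, reference):
    precisions = []
    for n in range(1, 5):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)           # avoid log(0)
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)

ref = "오늘 브리핑 을 시작 하겠습니다".split()
print(round(bleu4(ref, ref), 2))  # 1.0 on an exact match
```

A score like 3.63 (i.e., 0.0363 on this 0-1 scale, reported x100) indicates very low n-gram overlap, which is common for early baselines on new sign language translation datasets.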

Form or Function? (형식인가 기능인가?)

  • 이종민
    • Korean Journal of English Language and Linguistics
    • /
    • v.2 no.4
    • /
    • pp.575-587
    • /
    • 2002
  • In this paper we discuss the contrasting natures of formalism and functionalism in linguistics. Though the mainstream of linguistic analysis has focused on form and function, each approach has been challenged by the other's strong points. On the one hand, formal description has been studied in the tradition of generative grammar. On the other hand, functional properties have played a crucial role in frameworks of language use. It seems undesirable to argue for a one-sided bias toward either type of linguistic approach. I try to present a balanced view of these two contrasting approaches, and argue that cooperative work is needed for the mutual growth of linguistic theory.


Betterment of Mobile Sign Language Recognition System (모바일 수화 인식 시스템의 개선에 관한 연구)

  • Park Kwang-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.43 no.4 s.310
    • /
    • pp.1-10
    • /
    • 2006
  • This paper presents the development of a mobile sign language recognition system for daily communication between deaf people, who depend on sign language to access language, and hearing people. The system observes their signing through a cap-mounted camera and accelerometers worn on the wrists. To create a real application working in a mobile environment, a harder recognition problem than the lab environment due to illumination changes and real-time requirements, a robust hand segmentation method is introduced, and HMMs are adopted with a strong grammar. The result shows 99.07% word accuracy in continuous signing.
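The "HMMs with a strong grammar" idea can be sketched as word-level scores pruned by a grammar that only permits certain word-to-word transitions. The grammar, word labels, scores, and greedy decoding below are toy stand-ins; the paper's actual decoder would search over full HMM hypotheses:

```python
# Allowed next words after each word (hypothetical toy sign grammar).
GRAMMAR = {
    "<s>": {"I", "YOU"},
    "I": {"GO", "EAT"},
    "YOU": {"GO"},
    "GO": {"</s>"},
    "EAT": {"</s>"},
}

def decode(step_scores):
    """Greedily pick the best word at each step among grammar-legal options."""
    sentence, prev = [], "<s>"
    for scores in step_scores:
        allowed = {w: s for w, s in scores.items() if w in GRAMMAR[prev]}
        prev = max(allowed, key=allowed.get)
        sentence.append(prev)
    return sentence

steps = [
    {"I": 0.9, "GO": 0.8},     # "GO" is blocked after <s> by the grammar
    {"EAT": 0.6, "YOU": 0.7},  # "YOU" is blocked after "I"
]
print(decode(steps))  # ['I', 'EAT']
```

Pruning illegal transitions is one reason a strong grammar raises word accuracy in continuous signing: high-scoring but ungrammatical words never enter the hypothesis.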