• Title/Summary/Keyword: hand gesture

Search Results: 402

The Relationship between Lexical Retrieval and Coverbal Gestures (어휘인출과 구어동반 제스처의 관계)

  • Ha, Ji-Wan;Sim, Hyun-Sub
    • Korean Journal of Cognitive Science / v.22 no.2 / pp.123-143 / 2011
  • At what point in the process of speech production do gestures become involved? According to the Lexical Retrieval Hypothesis, gestures are involved in lexicalization during the formulating stage. According to the Information Packaging Hypothesis, gestures are involved in the conceptual planning of messages during the conceptualizing stage. We investigated these hypotheses using a game situation from a TV program that required players to engage in lexicalization and conceptualization simultaneously. The transcription of the verbal utterances was augmented with all arm and hand gestures produced by the players. Coverbal gestures were classified into two types: lexical gestures and motor gestures. Concrete words elicited lexical gestures significantly more frequently than abstract words, and abstract words elicited motor gestures significantly more frequently than concrete words. The difficulty of conceptualization for concrete words was significantly correlated with the number of lexical gestures. However, the number of words and word frequency were not correlated with the number of either gesture type. These results support the Information Packaging Hypothesis. Above all, the importance of motor gestures can be inferred from the finding that abstract words elicited motor gestures more frequently than concrete words. Motor gestures, long considered unrelated to verbal production, have been excluded from analysis in many gesture studies. This study revealed that motor gestures appear to be connected to abstract conceptualization.


Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / v.2 no.4 / pp.285-297 / 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera, pointing it at a location specified by the user through spoken words and hand pointing. The system utilizes a separate stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position the user is referring to; it then uses the user's voice commands to fine-tune the location and change the camera's zoom, if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in tele-conferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices the user must otherwise utilize to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.


Noise-robust Hand Region Segmentation In RGB Color-based Real-time Image (RGB 색상 기반의 실시간 영상에서 잡음에 강인한 손영역 분할)

  • Yang, Hyuk Jin;Kim, Dong Hyun;Seo, Yeong Geon
    • Journal of Digital Contents Society / v.18 no.8 / pp.1603-1613 / 2017
  • This paper proposes a method for effectively segmenting the hand region using a widely available RGB webcam. The method applies four empirical preprocessing steps to remove noise. First, Gaussian smoothing removes overall image noise. Next, the RGB image is converted into the HSV and YCbCr color models, global fixed binarization is performed based on statistical values for each color model, and the resulting masks are merged with a bitwise-OR operation. Then, the RDP and flood-fill algorithms perform contour approximation and inner-area filling to remove further noise. Finally, the ROI (hand region) is selected by eliminating residual noise through morphological operations and applying an area threshold proportional to the image size. This study focuses on noise reduction and can serve as a base technology for gesture recognition applications.

Image Processing Based Virtual Reality Input Method using Gesture (영상처리 기반의 제스처를 이용한 가상현실 입력기)

  • Hong, Dong-Gyun;Cheon, Mi-Hyeon;Lee, Donghwa
    • Journal of Korea Society of Industrial Information Systems / v.24 no.5 / pp.129-137 / 2019
  • Ubiquitous computing technology is emerging as information technology advances, and a number of studies are being carried out to increase device miniaturization and user convenience. Some of the proposed devices, however, are inconvenient because they require hand-held operation. To address this, this paper proposes a virtual button that can be used while watching television. A camera is installed at the top of the TV, exploiting the fact that the user watches the video from the front, so that the camera captures the top of the user's head. The background and hand regions are extracted separately from the captured image, the contour of the hand region is extracted, and the fingertip is detected. Once the fingertip is detected, a virtual button interface is rendered at the top of the front-facing video, and a button activates when the detected fingertip, acting as a pointer, is located inside it.
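A minimal sketch of the virtual-button logic described above, assuming the hand mask has already been segmented from the background. The helper names and the topmost-pixel heuristic for the fingertip are illustrative, not the paper's method.

```python
import numpy as np

def fingertip(mask):
    """Return (x, y) of the topmost foreground pixel of a binary hand mask,
    used as the pointer; None if no hand is present."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    i = np.argmin(ys)  # smallest row index = closest to the top of the frame
    return int(xs[i]), int(ys[i])

def button_pressed(point, rect):
    """rect = (x, y, w, h) of a virtual button overlaid on the video;
    the button activates when the pointer falls inside it."""
    if point is None:
        return False
    x, y, w, h = rect
    px, py = point
    return x <= px < x + w and y <= py < y + h
```

In a real loop, `fingertip` would run on each segmented frame and `button_pressed` would be checked against every on-screen button rectangle.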

Study on Hand Gestures Recognition Algorithm of Millimeter Wave (밀리미터파의 손동작 인식 알고리즘에 관한 연구)

  • Nam, Myung Woo;Hong, Soon Kwan
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.7 / pp.685-691 / 2020
  • In this study, an algorithm that recognizes the digits 0 to 9 was developed using data obtained by tracking hand movements with the echo signal of a 77 GHz millimeter-wave radar sensor. The echo signals obtained by detecting hand-gesture motion form a cluster of irregular dots owing to differences in the radar scattering cross-section. A valid center point was obtained from them by applying the K-Means algorithm to the 3D coordinate values, and the obtained center points were connected to produce a numeric image. The recognition rate was compared by feeding both the raw image and an image made to resemble human handwriting via a smoothing technique into a CNN (Convolutional Neural Network) model trained on MNIST (Modified National Institute of Standards and Technology database). The experiment was conducted in two ways. First, recognition experiments using images with and without smoothing yielded average recognition rates of 77.0% and 81.0%, respectively. Second, with the CNN model trained on augmented data, the experiments with and without smoothing yielded average recognition rates of 97.5% and 99.0%, respectively. This study can be applied to various non-contact recognition technologies using radar sensors.
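The center-point extraction can be sketched as follows. This is an illustrative reading of the abstract, not the authors' code: the choice of k = 2 clusters and returning the centroid of the largest cluster (treating the smaller one as clutter) are assumptions.

```python
import numpy as np

def frame_center(points, k=2, iters=20, seed=0):
    """Cluster one frame's irregular radar detections with K-Means on their
    3-D coordinates and return the centroid of the largest cluster as the
    valid hand-center point. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    # Initialize centers from k random detections
    centers = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center
        d = np.linalg.norm(pts[:, None] - centers[None], axis=2)
        lab = d.argmin(axis=1)
        # Move each center to the mean of its assigned points
        for j in range(k):
            if np.any(lab == j):
                centers[j] = pts[lab == j].mean(axis=0)
    counts = np.bincount(lab, minlength=k)
    return centers[counts.argmax()]
```

Connecting the per-frame centers in time order then yields the trajectory image that is passed to the CNN.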

A Study on Children Edutainment Contents Development with Hand Gesture Recognition and Electronic Dice (전자주사위 및 손동작 인식을 활용한 아동용 에듀테인먼트 게임 콘텐츠 개발에 관한 연구)

  • Ok, Soo-Yol
    • Journal of Korea Multimedia Society / v.14 no.10 / pp.1348-1364 / 2011
  • As existing edutainment contents for children mostly consist of educational tools that unilaterally induce educatees to respond passively, content-creation methodologies that enable active and voluntary learning are urgently needed. In this paper, we present the implementation of a tangible 'electronic dice' interface as an interactive tool for behavior-based edutainment contents, and propose a methodology for developing edutainment contents for children utilizing hand-movement recognition based on depth-image information. Also proposed are an authoring and management tool for learning quizzes, which allows educators to set up and manage their learning courseware, and a log-analysis system for real-time monitoring of learning achievement. The behavior-based tangible interface and edutainment contents we propose provide easy-to-operate interaction with a real object, which increases educatees' interest in learning and leads to an active, voluntary attitude toward learning. Furthermore, the authoring and management tool and the log-analysis system allow learning programs to be constructed by achievement level and the learning development of child educatees to be monitored in real time, by analyzing how they behave while solving problems on their own and using the results as evaluation material for lesson plans.

Conceptual Variation of TalYeong-SilJeong in the Medical History (역대(歷代) 의서(醫書)에서 탈영실정(脫營失精)의 의미(意味) 변화(變化))

  • Hong, Sae-Young;Lee, Jae-Hyok
    • Journal of Oriental Neuropsychiatry / v.25 no.2 / pp.203-212 / 2014
  • Objectives: The aim of this study is to shed new light on TalYeong-SilJeong (exhaustion of Yeonggi and loss of Essence) by verifying both the original intention of Hwangjenaegyeong and the conceptual variation that followed. Methods: Of various East Asian medical texts, those referring to TalYeong-SilJeong, including Hwangjenaegyeong itself, were closely examined with respect to their conception of the term. Results: TalYeong-SilJeong was first suggested as a representative tool and accurate diagnostic method of questioning to determine the mental state of a patient. However, medical scholars have assigned it different levels of meaning. Some used the term broadly for mental illnesses, understanding Hwangjenaegyeong's discrimination as a symbolic gesture, while others projected an unchallenged value onto it and wove it into the concrete definition of a disease. Conclusions: The treatment of TalYeong-SilJeong is suggested according to the varying viewpoints of each medical text. By understanding the multiple layers of the conception, a clinician is expected to gain a richer image of the concept on the one hand and an insight toward more effective treatment on the other.

Combining Dynamic Time Warping and Single Hidden Layer Feedforward Neural Networks for Temporal Sign Language Recognition

  • Thi, Ngoc Anh Nguyen;Yang, Hyung-Jeong;Kim, Sun-Hee;Kim, Soo-Hyung
    • International Journal of Contents / v.7 no.1 / pp.14-22 / 2011
  • Temporal Sign Language Recognition (TSLR) from hand motion is an active area of gesture recognition research aimed at facilitating efficient communication with deaf people. TSLR systems consist of two stages: a motion-sensing step, which extracts useful features from signers' motion, and a classification step, which classifies these features as a performed sign. This work focuses on two research problems: the unknown, time-varying nature of sign language signals in the feature extraction stage, and the computational complexity and time consumption in the classification stage due to a very large database of sign sequences. We propose combining Dynamic Time Warping (DTW) with Single hidden Layer Feedforward Neural networks (SLFNs) trained by the Extreme Learning Machine (ELM) to cope with these limitations. DTW has the advantage over other approaches that it can align time series of different lengths to a common size, while ELM is an efficient technique for classifying the warped features. Our experiments demonstrate the efficiency of the proposed method, with recognition accuracy of up to 98.67%. The proposed approach can be generalized to more detailed measurements so as to recognize hand gestures, body motion, and facial expressions.
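The DTW stage can be sketched with the classic dynamic-programming recurrence. The ELM-trained SLFN classifier is omitted, and 1-D sequences stand in for the paper's multidimensional motion features.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D sequences: the
    minimum cumulative cost over all monotone alignments, which lets
    sequences of different lengths and timings be compared directly."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Two performances of the same sign at different speeds yield a small DTW distance, which is what makes the warped features suitable inputs for a fixed-size classifier.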

Development of Web-cam Game using Hand and Face Skin Color (손과 얼굴의 피부색을 이용한 웹캠 게임 개발)

  • Oh, Chi-Min;Aurrahman, Dhi;Islam, Md. Zahidul;Kim, Hyung-Gwan;Lee, Chil-Woo
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집) / 2008.02b / pp.60-63 / 2008
  • The Sony EyeToy, developed for the PlayStation 2, uses a webcam to detect the player: users see themselves on the television and become the actual player in the game, a very different interface from ordinary video games controlled with a joystick. Although the EyeToy is already a commercial product, its interface method remains interesting and can be extended with techniques such as gesture recognition. In this paper, we develop a game interface that combines image processing for human hand and face detection with a game graphics module, and we realize an example balloon-busting game to demonstrate the interface's capabilities. We will open this project to other developers for further development.
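Skin-color detection of the kind that drives such a game is commonly done by thresholding in YCrCb space, where skin tones cluster in a narrow Cr/Cb band largely independent of brightness. A minimal sketch follows; the conversion coefficients are the standard ITU-R BT.601 ones, but the threshold bounds are illustrative assumptions, not the paper's values.

```python
import numpy as np

def rgb_to_ycrcb(rgb):
    """Convert an RGB image (H, W, 3) to YCrCb using BT.601 coefficients."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128
    cb = (b - y) * 0.564 + 128
    return np.stack([y, cr, cb], axis=-1)

def skin_mask(rgb, cr_lo=135, cr_hi=175, cb_lo=85, cb_hi=130):
    """Boolean mask of skin-colored pixels; bounds are placeholder values."""
    ycrcb = rgb_to_ycrcb(rgb)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    return (cr >= cr_lo) & (cr <= cr_hi) & (cb >= cb_lo) & (cb <= cb_hi)
```

Because luminance (Y) is ignored, the same Cr/Cb band tolerates moderate lighting changes, which matters for a webcam game played in an uncontrolled living room.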


Design and Implementation of Personal Communicator based on Embedded Single Board Computer for Controlling of Remote Devices (원격 장치 제어를 위한 임베디드 기술 기반의 개인용 커뮤니케이터 설계 및 구현)

  • Jang, Seong-Sik;Byun, Tae-Young
    • Journal of Korea Society of Industrial Information Systems / v.16 no.2 / pp.99-109 / 2011
  • This paper presents implementation details of a home-appliance control system using a personal communicator based on the LN2440 single-board computer, which recognizes the user's hand gestures and controls remote moving devices, such as a mobile home server or robot, by delivering the appropriate control commands. The paper also covers the design and implementation of a home gateway and mobile home server. The implemented prototype can be used to develop various remote control systems on general-purpose embedded hardware, including remote exploration robots and intelligent wheelchairs.