• Title/Summary/Keyword: Vision-Based User Interface (비전기반 사용자 인터페이스)


A Study on Hand Gesture Recognition with Low-Resolution Hand Images (저해상도 손 제스처 영상 인식에 대한 연구)

  • Ahn, Jung-Ho
    • Journal of Satellite, Information and Communications, v.9 no.1, pp.57-64, 2014
  • Recently, many human-friendly communication methods have been studied for human-machine interfaces (HMI) that do not require any physical device. One of them is vision-based gesture recognition, which this paper deals with. In this paper, we define some gestures for interacting with objects in a predefined virtual world and propose an efficient method to recognize them. For preprocessing, we detect and track both hands and extract their silhouettes from low-resolution hand images captured by a webcam. We model skin color with two Gaussian distributions in RGB color space and use a blob-matching method to detect and track the hands. Applying the flood-fill algorithm, we extract hand silhouettes and recognize the hand shapes Thumb-Up, Palm and Cross by detecting and analyzing their modes. Then, by analyzing the context of the hand movement, we recognize five predefined one-hand or two-hand gestures. Under the assumption that a single main user appears in the scene, which aids accurate hand detection, the proposed gesture recognition method has proved its efficiency and accuracy in many real-time demos.
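A minimal sketch of the two preprocessing steps named in this abstract, skin-color classification with a two-Gaussian RGB model and flood-fill silhouette extraction, using OpenCV. The Gaussian parameters, likelihood threshold, file name and seed point are illustrative assumptions, not values from the paper.

```python
# Sketch: two-Gaussian skin-color model in RGB + flood-fill silhouette extraction.
# All numeric parameters below are placeholders, not trained values from the paper.
import cv2
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Evaluate a 3D Gaussian density for every pixel in x (H x W x 3)."""
    d = x - mean
    inv = np.linalg.inv(cov)
    norm = 1.0 / np.sqrt(((2 * np.pi) ** 3) * np.linalg.det(cov))
    return norm * np.exp(-0.5 * np.einsum("...i,ij,...j->...", d, inv, d))

def skin_mask(frame_bgr, g1, g2, threshold=1e-7):
    """Mark a pixel as skin if either Gaussian assigns it sufficient likelihood."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB).astype(np.float64)
    like = np.maximum(gaussian_pdf(rgb, *g1), gaussian_pdf(rgb, *g2))
    return (like > threshold).astype(np.uint8) * 255

def hand_silhouette(mask, seed):
    """Flood-fill from a seed inside the tracked hand blob to isolate its silhouette."""
    ff_mask = np.zeros((mask.shape[0] + 2, mask.shape[1] + 2), np.uint8)
    cv2.floodFill(mask, ff_mask, seed, 128)
    return (mask == 128).astype(np.uint8) * 255

# Illustrative (mean RGB, covariance) pairs for the two skin-color Gaussians.
g1 = (np.array([190.0, 140.0, 120.0]), np.diag([400.0, 300.0, 300.0]))
g2 = (np.array([150.0, 100.0, 90.0]), np.diag([500.0, 400.0, 400.0]))

frame = cv2.imread("hand.png")                          # hypothetical webcam frame
if frame is not None:
    mask = skin_mask(frame, g1, g2)
    silhouette = hand_silhouette(mask, seed=(80, 60))   # seed supplied by the blob tracker
```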

Expert System for Stress Diagnosis of Cucumber and Tomato Using FoxPro (FoxPro를 이용한 오이와 토마토의 생육장해 진단 전문가 시스템 개발)

  • 고병진;서상룡;최영수
    • Journal of Bio-Environment Control, v.12 no.1, pp.30-37, 2003
  • An expert system was developed for the stress diagnosis of cucumber and tomato using FoxPro. The main design goals were integration with Korean, effective processing of a large amount of information, and easy access for non-experts such as farmers. The inference method was forward chaining based on pattern matching. The knowledge base was expressed with IF-THEN rules organized in the form of a tree. The expert system was also designed so that all information could easily be added or modified in its windows. Tests by farmers showed that the system was reliable enough for practical use, and it is expected that the expert system can be applied directly to the stress diagnosis of other vegetable crops by modifying only the databases.
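A minimal sketch of forward chaining over IF-THEN rules of the kind this abstract describes. The two cucumber rules and symptom names are invented for illustration, and the original system was implemented in FoxPro, not Python.

```python
# Sketch: forward chaining over IF-THEN rules by pattern matching on a fact set.
from dataclasses import dataclass

@dataclass
class Rule:
    conditions: frozenset   # facts that must all be present (the IF part)
    conclusion: str         # fact asserted when the rule fires (the THEN part)

# Hypothetical rules; the real knowledge base is organized as a tree of such rules.
RULES = [
    Rule(frozenset({"leaf_yellowing", "lower_leaves_first"}), "suspect_nitrogen_deficiency"),
    Rule(frozenset({"suspect_nitrogen_deficiency", "crop_cucumber"}),
         "diagnosis: nitrogen deficiency in cucumber"),
]

def forward_chain(facts, rules):
    """Repeatedly fire every rule whose conditions hold until no new fact is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.conditions <= facts and rule.conclusion not in facts:
                facts.add(rule.conclusion)
                changed = True
    return facts

# Example: symptoms entered by a farmer through the questionnaire-style interface.
print(forward_chain({"crop_cucumber", "leaf_yellowing", "lower_leaves_first"}, RULES))
```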

Development and Practicability Evaluation of GIS-Based Cemetery Information Management System (GIS기반 묘지정보관리시스템의 개발 및 실용성 평가)

  • Lee, Jin-Duk;Kim, Myung-Hyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.28 no.2, pp.223-231, 2010
  • The uniqueness of Korean funeral culture has produced problems such as indiscreet cemetery development in forests and an increase in graves of those without surviving family, which are neglected for lack of management. To solve these problems, the government and social organizations have recommended the use of cremation, charnel houses and public cemeteries. The objective of this study is to develop a cemetery information management system whose GIS functions cemetery managers who are GIS laypersons can easily understand and conveniently use. Cemetery tasks must be performed not only in the office but also in the field: in the office, managers carry out GIS functions such as input and modification on a desktop computer, while in the field they perform simple input and inquiry on a touchscreen PDA that can receive GPS signals. Because various open-source software packages were used to build the system, the expense was largely reduced, and adding further functions is expected to make it usable in more cemeteries.
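A minimal sketch of the field-input step described above: appending one surveyed grave location, taken from a GPS fix, to a simple CSV store. The field names and file layout are assumptions; the actual system uses a PDA client and a GIS database.

```python
# Sketch: record a grave point captured in the field with a GPS fix (WGS84 lat/lon).
import csv
from datetime import datetime, timezone

def record_grave_point(path, grave_id, lat, lon, note=""):
    """Append one surveyed grave location to a CSV file used as a toy data store."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([grave_id, f"{lat:.6f}", f"{lon:.6f}", note,
                                datetime.now(timezone.utc).isoformat()])

# Hypothetical field entry: id, GPS fix, free-form note.
record_grave_point("cemetery_points.csv", "G-0042", 36.145210, 128.392775, "family plot, row 3")
```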

Posture Recognition for a Bi-directional Participatory TV Program based on Face Color Region and Motion Map (시청자 참여형 양방향 TV 방송을 위한 얼굴색 영역 및 모션맵 기반 포스처 인식)

  • Hwang, Sunhee;Lim, Kwangyong;Lee, Suwoong;Yoo, Hoyoung;Byun, Hyeran
    • KIISE Transactions on Computing Practices, v.21 no.8, pp.549-554, 2015
  • As intuitive hardware interfaces continue to be developed, it has become more important to recognize the posture of the user. An efficient alternative to adding expensive sensors is to use a computer vision system. This paper proposes a method for recognizing a user's posture in a live bi-directional participatory TV broadcast. The proposed method first estimates the positions of the user's hands by generating a facial color map for the user together with a motion map. The posture is then recognized by computing the relative positions of the face and the hands. The method exhibited 90% accuracy in an experiment recognizing three defined postures during the live broadcast, even when the input images contained a complex background.
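A rough sketch of the two cues this abstract combines, a skin-color map to locate the face and a frame-difference motion map to locate the moving hands, followed by a toy decision rule on their relative positions. The YCrCb skin bounds, thresholds and posture labels are assumptions, not the authors' values.

```python
# Sketch: face position from a skin-color map, hand position from a motion map,
# and a toy posture decision from their relative positions.
import cv2
import numpy as np

def face_centroid(frame_bgr):
    """Take the largest skin-colored blob as the face and return its centroid (x, y)."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(skin)
    if n < 2:
        return None
    return centroids[1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])]

def motion_centroid(prev_gray, gray):
    """Centroid of the thresholded frame difference, used as a crude hand position."""
    diff = cv2.absdiff(prev_gray, gray)
    _, moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    m = cv2.moments(moving, binaryImage=True)
    if m["m00"] == 0:
        return None
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def classify_posture(face, hand):
    """Toy rule: compare the hand position with the face position."""
    if face is None or hand is None:
        return "unknown"
    if hand[1] < face[1]:
        return "hand raised"
    return "hand lowered" if abs(hand[0] - face[0]) < 80 else "hand to the side"
```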

Image Processing Based Virtual Reality Input Method using Gesture (영상처리 기반의 제스처를 이용한 가상현실 입력기)

  • Hong, Dong-Gyun;Cheon, Mi-Hyeon;Lee, Donghwa
    • Journal of Korea Society of Industrial Information Systems, v.24 no.5, pp.129-137, 2019
  • Ubiquitous computing technology is emerging as information technology advances, and a number of studies are being carried out to miniaturize devices and increase user convenience. Some of the proposed devices, however, are not user-friendly because they must be held and operated by hand. To address this inconvenience, this paper proposes a virtual button that can be used while watching television. A camera is installed at the top of the TV and, because the user watches the video from the front, it captures the top of the user's head. The background and the hand region are separated in the captured image, the outline of the extracted hand region is computed, and the fingertip is detected. The detected fingertip then acts as a pointer over a virtual button interface rendered at the top of the front-facing image, and a button is activated when the fingertip lies inside it.
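A minimal sketch of the virtual-button idea: segment the hand against a stored background frame, take the contour point farthest from the hand centroid as the fingertip, and activate the button when that point lies inside the button rectangle. The thresholds and button geometry are placeholders.

```python
# Sketch: background differencing -> hand contour -> fingertip -> virtual button test.
import cv2
import numpy as np

BUTTON = (400, 50, 120, 80)    # hypothetical on-screen button: x, y, width, height

def fingertip(background_gray, frame_gray):
    """Segment the hand by background differencing and return the contour point
    farthest from the hand centroid, a common fingertip heuristic."""
    diff = cv2.absdiff(background_gray, frame_gray)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    center = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    pts = hand.reshape(-1, 2).astype(np.float64)
    return tuple(pts[np.argmax(np.linalg.norm(pts - center, axis=1))].astype(int))

def button_pressed(tip, button=BUTTON):
    """The button activates while the fingertip pointer is inside its rectangle."""
    if tip is None:
        return False
    x, y, w, h = button
    return x <= tip[0] <= x + w and y <= tip[1] <= y + h
```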

A Method of Hand Recognition for Virtual Hand Control of Virtual Reality Game Environment (가상 현실 게임 환경에서의 가상 손 제어를 위한 사용자 손 인식 방법)

  • Kim, Boo-Nyon;Kim, Jong-Ho;Kim, Tae-Young
    • Journal of Korea Game Society, v.10 no.2, pp.49-56, 2010
  • In this paper, we propose a method for controlling a virtual hand by recognizing the user's hand in a virtual reality game environment. We display the virtual hand on the game screen after obtaining the movement and direction of the user's hand from camera input images, so that the movement of the user's hand can serve as an input interface for selecting and moving objects with the virtual hand. As a vision-based hand recognition method, the proposed method transforms the input image from RGB color space to HSV color space and then segments the hand area using double thresholds on the H and S values and connected component analysis. Next, the center of gravity of the hand area is calculated from the zeroth and first moments of the segmented area. Since the center of gravity lies near the center of the hand, the pixels in the segmented image farthest from it can be recognized as fingertips. Finally, the axis of the hand is obtained as the vector from the center of gravity to the fingertips. To increase recognition stability and performance, a method using a history buffer and a bounding box is also presented. Experiments on various input images show that our hand recognition method provides a high level of accuracy and relatively fast, stable results.
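A rough sketch of the recognition steps listed above: HSV double thresholding, largest-connected-component selection, center of gravity from image moments, fingertip as the pixel farthest from that center, and the hand axis as the vector between the two. The H and S thresholds are placeholders, not the paper's values.

```python
# Sketch: HSV thresholding -> largest component -> moments -> fingertip -> hand axis.
import cv2
import numpy as np

def hand_axis(frame_bgr, h_range=(0, 25), s_range=(40, 255)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (h_range[0], s_range[0], 0), (h_range[1], s_range[1], 255))

    # Keep the largest connected component as the hand region.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None
    hand = (labels == 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])).astype(np.uint8)

    # Center of gravity from the zeroth and first moments.
    m = cv2.moments(hand, binaryImage=True)
    cog = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

    # Fingertip: the hand pixel farthest from the center of gravity.
    ys, xs = np.nonzero(hand)
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    tip = pts[np.argmax(np.linalg.norm(pts - cog, axis=1))]

    return cog, tip, tip - cog      # center of gravity, fingertip, hand-axis vector
```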

Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

  • 박강령
    • Journal of the Institute of Electronics Engineers of Korea SP, v.41 no.2, pp.79-88, 2004
  • Gaze detection is the task of locating, by computer vision, the position on a monitor screen where a user is looking. Gaze detection systems have numerous fields of application, such as man-machine interfaces that help the handicapped use computers and view control in three-dimensional simulation programs. In our work, we implement gaze detection with a computer vision system built around a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When the user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. The gaze position due to facial movement is then computed from the normal vector of the plane determined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position due to eye movement. Experimental results show that the facial and eye gaze position on the monitor can be obtained with an RMS error of about 4.8 cm between the computed positions and the real ones.
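A small worked sketch of the final facial-gaze step: fitting a plane to the reconstructed 3D feature points, taking its normal as the gaze direction and intersecting that ray with the monitor plane. The coordinate convention (monitor plane at Z = 0) and the sample feature points are assumptions.

```python
# Sketch: gaze point on the screen from the normal of the facial-feature plane.
import numpy as np

def gaze_on_monitor(features_3d):
    """features_3d: (N, 3) reconstructed facial feature points in monitor coordinates."""
    center = features_3d.mean(axis=0)
    # Plane normal by least squares: smallest singular vector of the centered points.
    _, _, vt = np.linalg.svd(features_3d - center)
    normal = vt[-1]
    if normal[2] > 0:                      # make the normal point toward the monitor
        normal = -normal
    t = -center[2] / normal[2]             # intersect the ray with the plane Z = 0
    return (center + t * normal)[:2]       # (x, y) gaze point on the screen

# Hypothetical eye-corner and nose-bridge points a short distance from the screen (cm).
features = np.array([[ 3.0, 10.0, 52.0],
                     [ 9.0, 10.0, 51.0],
                     [ 6.0,  7.5, 50.0],
                     [ 6.0, 12.0, 50.5]])
print(gaze_on_monitor(features))           # approximate on-screen gaze coordinates
```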

Stereo Vision Based 3D Input Device (스테레오 비전을 기반으로 한 3차원 입력 장치)

  • Yoon, Sang-Min;Kim, Ig-Jae;Ahn, Sang-Chul;Ko, Han-Seok;Kim, Hyoung-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP, v.39 no.4, pp.429-441, 2002
  • This paper concerns extracting 3D motion information from a 3D input device in real time, focused on enabling effective human-computer interaction. In particular, we develop a novel algorithm for extracting 6-degrees-of-freedom motion information from a 3D input device by employing the epipolar geometry of a stereo camera together with color, motion, and structure information, without requiring a camera calibration object. To extract 3D motion, we first determine the epipolar geometry of the stereo camera by computing the perspective projection matrix and the perspective distortion matrix. We then apply the proposed Motion Adaptive Weighted Unmatched Pixel Count algorithm, which performs color transformation, unmatched pixel counting, discrete Kalman filtering, and principal component analysis. The extracted 3D motion information can be applied to controlling virtual objects or aiding a navigation device that controls the user's viewpoint in a virtual reality setting. Since the stereo-vision-based 3D input device is wireless, it provides users with a more natural and efficient interface, effectively realizing a feeling of immersion.
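A compact sketch of two building blocks mentioned in this abstract: triangulating a 3D point from a matched pixel pair in a calibrated stereo pair and smoothing the trajectory with a discrete Kalman filter. The projection matrices, noise covariances and measurements are placeholders; the paper's full pipeline additionally uses color transformation, unmatched-pixel counting and PCA.

```python
# Sketch: stereo triangulation of a tracked point plus discrete Kalman smoothing.
import cv2
import numpy as np

# Hypothetical 3x4 projection matrices (identity intrinsics, 0.1 m baseline along X).
P_left  = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float64)
P_right = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])]).astype(np.float64)

def triangulate(x_left, x_right):
    """3D point for one matched pixel pair given in normalized image coordinates."""
    pts4 = cv2.triangulatePoints(P_left, P_right,
                                 np.array(x_left, np.float64).reshape(2, 1),
                                 np.array(x_right, np.float64).reshape(2, 1))
    return (pts4[:3] / pts4[3]).ravel()

# Simple Kalman filter over the 3D coordinates (state = measurement = x, y, z).
kf = cv2.KalmanFilter(3, 3)
kf.transitionMatrix = np.eye(3, dtype=np.float32)
kf.measurementMatrix = np.eye(3, dtype=np.float32)
kf.processNoiseCov = 1e-4 * np.eye(3, dtype=np.float32)
kf.measurementNoiseCov = 1e-2 * np.eye(3, dtype=np.float32)
kf.errorCovPost = np.eye(3, dtype=np.float32)

for xl, xr in [((0.02, 0.01), (-0.03, 0.01)), ((0.021, 0.012), (-0.029, 0.012))]:
    kf.predict()
    smoothed = kf.correct(triangulate(xl, xr).astype(np.float32).reshape(3, 1))
    print(smoothed.ravel())    # filtered 3D position of the tracked device
```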

An Illumination and Background-Robust Hand Image Segmentation Method Based on the Dynamic Threshold Values (조명과 배경에 강인한 동적 임계값 기반 손 영상 분할 기법)

  • Na, Min-Young;Kim, Hyun-Jung;Kim, Tae-Young
    • Journal of Korea Multimedia Society, v.14 no.5, pp.607-613, 2011
  • In this paper, we propose a hand image segmentation method that uses dynamic threshold values on input images with various lighting and background conditions. First, a moving hand silhouette is extracted using difference images from the camera input. Next, based on an R, G, B histogram analysis of the extracted hand silhouette area, a threshold interval for each of R, G, and B is calculated at run time. Finally, the hand area is segmented by thresholding, and a morphology operation, connected component analysis, and a flood-fill operation are performed for noise removal. Experimental results on various input images show that our hand segmentation method provides a high level of accuracy and relatively fast, stable results without the need for fixed threshold values. The proposed method can be used in the user interfaces of mixed reality applications.
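A minimal sketch of the dynamic-threshold idea: take the moving silhouette from a frame difference, derive per-channel R, G, B intervals from its color histogram at run time, and segment the hand with those intervals plus a morphological clean-up. The percentile band and kernel size are assumptions rather than the paper's rule.

```python
# Sketch: run-time per-channel thresholds from the moving silhouette's color histogram.
import cv2
import numpy as np

def dynamic_rgb_intervals(frame_bgr, prev_bgr, diff_thresh=25, lo_pct=5, hi_pct=95):
    """Estimate per-channel [low, high] thresholds from the pixels that moved."""
    diff = cv2.absdiff(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY))
    moving = diff > diff_thresh
    if not moving.any():
        return None
    samples = frame_bgr[moving]                     # N x 3 (B, G, R) moving pixels
    lo = np.percentile(samples, lo_pct, axis=0)
    hi = np.percentile(samples, hi_pct, axis=0)
    return lo.astype(np.uint8), hi.astype(np.uint8)

def segment_hand(frame_bgr, intervals):
    """Threshold with the run-time intervals, then remove noise morphologically."""
    lo, hi = intervals
    mask = cv2.inRange(frame_bgr, lo, hi)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```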