• Title/Summary/Keyword: Human Computer Interface (휴먼 컴퓨터 인터페이스)


A Study on Smart Touch Projector System Technology Using Infrared (IR) Imaging Sensor (적외선 영상센서를 이용한 스마트 터치 프로젝터 시스템 기술 연구)

  • Lee, Kuk-Seon;Oh, Sang-Heon;Jeon, Kuk-Hui;Kang, Seong-Soo;Ryu, Dong-Hee;Kim, Byung-Gyu
    • Journal of Korea Multimedia Society / v.15 no.7 / pp.870-878 / 2012
  • Recently, the rapid development of computer and sensor technologies has produced various user interface (UI) technologies based on user experience (UX). In this study, we investigate and develop a smart touch projector system technology based on an IR sensor and image processing. In the proposed system, a user can control the computer through control events derived from the gestures of an IR pen used as an input device. In the IR image, we extract the movement (or gesture) of the devised pen and track it to recognize the gesture pattern. Also, to correct the error between the coordinates of the input image sensor and the display device (projector), we propose a coordinate correction algorithm that improves the accuracy of operation. Through this next-generation human-computer interaction technology, events on the equipped computer can be controlled on the projected screen without manipulating the computer directly.
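
The coordinate correction step is not detailed in the abstract. A common way to map positions detected in the IR camera image onto projector (display) coordinates is a planar homography estimated from a few calibration points; the sketch below illustrates only that general idea, with hypothetical calibration values, and is not necessarily the authors' algorithm.

```python
import cv2
import numpy as np

# Hypothetical calibration: four projected markers as seen by the IR camera,
# paired with their known positions in projector/display coordinates.
camera_pts = np.array([[102, 95], [548, 88], [561, 410], [96, 402]], dtype=np.float32)
display_pts = np.array([[0, 0], [1280, 0], [1280, 800], [0, 800]], dtype=np.float32)

# Estimate the camera -> display homography once, at calibration time.
H, _ = cv2.findHomography(camera_pts, display_pts)

def camera_to_display(x, y):
    """Map a tracked IR-pen position from camera coordinates to display coordinates."""
    p = np.array([[[x, y]]], dtype=np.float32)   # shape (1, 1, 2) as cv2 expects
    q = cv2.perspectiveTransform(p, H)
    return float(q[0, 0, 0]), float(q[0, 0, 1])

if __name__ == "__main__":
    # Example: a pen tip detected at (320, 240) in the IR image.
    print(camera_to_display(320, 240))
```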

3D View Controlling by Using Eye Gaze Tracking in First Person Shooting Game (1 인칭 슈팅 게임에서 눈동자 시선 추적에 의한 3차원 화면 조정)

  • Lee, Eui-Chul;Cho, Yong-Joo;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society / v.8 no.10 / pp.1293-1305 / 2005
  • In this paper, we propose a method of manipulating the gaze direction of a 3D FPS game character by using eye gaze detection from successive images captured by a USB camera attached beneath an HMD. The proposed method is composed of three parts. In the first part, we detect the user's pupil center from the successive input images with a real-time image processing algorithm. In the second part, calibration, the geometric relationship between positions gazed at on the monitor and the detected eye positions is determined. In the last part, the final gaze position on the HMD monitor is tracked and the 3D view in the game is controlled by the gaze position based on the calibration information. Experimental results show that our method can be used by handicapped game players who cannot use their hands. Also, it can increase interest and immersion by synchronizing the gaze direction of the game player with that of the game character.

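As a rough illustration of the two computational steps described above, pupil-center detection and eye-to-screen calibration, the following sketch uses simple dark-region thresholding with image moments and a least-squares affine fit. The threshold value and the calibration samples are hypothetical; the paper's actual algorithm may differ.

```python
import cv2
import numpy as np

def pupil_center(eye_gray):
    """Rough pupil-center estimate: the pupil is the darkest blob in an IR eye image."""
    blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
    _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)  # threshold is a guess
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def fit_affine(eye_pts, screen_pts):
    """Least-squares affine map (eye_x, eye_y, 1) -> (screen_x, screen_y)."""
    A = np.hstack([eye_pts, np.ones((len(eye_pts), 1))])
    coeffs, *_ = np.linalg.lstsq(A, screen_pts, rcond=None)
    return coeffs  # shape (3, 2)

def eye_to_screen(coeffs, ex, ey):
    return np.array([ex, ey, 1.0]) @ coeffs

if __name__ == "__main__":
    # Dummy IR eye image: bright background with a dark "pupil" at (235, 170).
    eye_img = np.full((300, 400), 200, np.uint8)
    cv2.circle(eye_img, (235, 170), 12, 10, -1)
    print(pupil_center(eye_img))                  # roughly (235.0, 170.0)

    # Hypothetical calibration: pupil centers recorded while gazing at four screen corners.
    eye = np.array([[210.0, 150.0], [260.0, 148.0], [262.0, 190.0], [208.0, 193.0]])
    screen = np.array([[0.0, 0.0], [1024.0, 0.0], [1024.0, 768.0], [0.0, 768.0]])
    C = fit_affine(eye, screen)
    print(eye_to_screen(C, 235.0, 170.0))         # near the screen centre
```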

Dynamic Gesture Recognition for the Remote Camera Robot Control (원격 카메라 로봇 제어를 위한 동적 제스처 인식)

  • Lee Ju-Won;Lee Byung-Ro
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.7 / pp.1480-1487 / 2004
  • This study proposes a novel gesture recognition method for remote camera robot control. To recognize dynamic gestures, the preprocessing step is image segmentation. Conventional methods for effective object segmentation require a lot of color information about the object (hand) image, and in the recognition step they require many features for each object. To improve on these methods, this study proposes a novel method to recognize dynamic hand gestures consisting of the MMS (Max-Min Search) method to segment the object image, the MSM (Mean Space Mapping) and COG (Center Of Gravity) methods to extract image features, and an MLPNN (Multi Layer Perceptron Neural Network) recognition structure to recognize the dynamic gestures. In the experimental results, the recognition rate of the proposed method was more than 90%, which shows that it can serve as an HCI (Human Computer Interface) device for remote robot control.
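
The MMS and MSM steps are not detailed in the abstract, so the sketch below only illustrates the COG (Center of Gravity) trajectory feature and the shape of an MLP classifier on top of it. The weights are untrained and the five gesture classes are hypothetical; this is an illustration of the pipeline, not the authors' implementation.

```python
import numpy as np

def center_of_gravity(mask):
    """COG (x, y) of a binary hand mask; assumes the mask is non-empty."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

def gesture_features(masks):
    """Concatenate per-frame COGs of a mask sequence into one trajectory feature vector."""
    return np.array([c for m in masks for c in center_of_gravity(m)])

def mlp_forward(x, W1, b1, W2, b2):
    """Two-layer perceptron: sigmoid hidden layer, softmax output over gesture classes."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    z = h @ W2 + b2
    e = np.exp(z - z.max())
    return e / e.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    masks = [rng.random((120, 160)) > 0.7 for _ in range(8)]   # 8 dummy segmented frames
    x = gesture_features(masks)                                 # 16-dim trajectory feature
    W1, b1 = rng.normal(size=(16, 20)), np.zeros(20)            # untrained hidden layer
    W2, b2 = rng.normal(size=(20, 5)), np.zeros(5)              # 5 hypothetical gesture classes
    print(mlp_forward(x, W1, b1, W2, b2))                       # class probabilities
```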

Depth Image based Egocentric 3D Hand Pose Recognition for VR Using Mobile Deep Residual Network (모바일 Deep Residual Network을 이용한 뎁스 영상 기반 1 인칭 시점 VR 손동작 인식)

  • Park, Hye Min;Park, Na Hyeon;Oh, Ji Heon;Lee, Cheol Woo;Choi, Hyoung Woo;Kim, Tae-Seong
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.1137-1140 / 2019
  • Human-computer interface technology is essential in fields such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). In particular, human hand gesture recognition enables intuitive interaction and can serve as a convenient controller in many applications. In this study, for depth-image-based egocentric hand pose recognition, we build a hand gesture database generation system and capture and produce the egocentric-viewpoint database needed to train the hand pose recognizer. We then implement a depth-image-based egocentric Hand Pose Recognition (HPR) deep learning Deep Residual Network for mobile Head Mounted Device (HMD) VR. Finally, we port the trained Residual Network Regressor to an Android mobile device, run the real-time hand pose recognition system in mobile VR, and verify real-time 3D hand pose recognition in mobile VR through interaction with virtual objects.
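
The abstract does not give the network architecture, joint count, or mobile deployment details, so the following is only a minimal sketch of what a depth-image residual-network hand-pose regressor can look like, written in PyTorch with assumed layer sizes and 21 hypothetical joints.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: two 3x3 convolutions with an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)

class HandPoseRegressor(nn.Module):
    """Single-channel depth crop in, (num_joints x 3) joint coordinates out."""
    def __init__(self, num_joints=21):
        super().__init__()
        self.num_joints = num_joints
        self.stem = nn.Sequential(nn.Conv2d(1, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_joints * 3))

    def forward(self, depth):
        return self.head(self.blocks(self.stem(depth))).view(-1, self.num_joints, 3)

if __name__ == "__main__":
    model = HandPoseRegressor()
    depth = torch.randn(1, 1, 96, 96)   # one dummy 96x96 depth crop
    print(model(depth).shape)           # torch.Size([1, 21, 3])
```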

Hand Motion Recognition Algorithm Using Skin Color and Center of Gravity Profile (피부색과 무게중심 프로필을 이용한 손동작 인식 알고리즘)

  • Park, Youngmin
    • The Journal of the Convergence on Culture Technology / v.7 no.2 / pp.411-417 / 2021
  • The field that studies how humans and computers communicate with each other and recognize information is called HCI (Human-Computer Interaction). This study addresses hand gesture recognition for human interaction: it examines the problems of existing recognition methods and proposes an algorithm to improve the recognition rate. The hand region is extracted from the image containing the human hand based on skin color information, and the center of gravity profile is calculated using principal component analysis. We propose a method to increase the recognition rate of hand gestures by comparing the obtained information with predefined shapes. The existing center of gravity profile misrecognizes hand gestures when the hand is deformed by rotation; in this study, the profile is re-defined so that the contour point farthest from the center of gravity is taken as the starting point, which yields a robust algorithm. No sensor gloves or special markers are used for hand gesture recognition, and no separate blue screen is installed. To obtain this result, the nearest feature vector is found to resolve misrecognition, and an appropriate threshold is obtained to distinguish success from failure.
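
A minimal sketch of the two ingredients named above, skin-color segmentation and a center-of-gravity profile that starts at the contour point farthest from the COG, follows. The YCrCb skin bounds and the profile length are common rough choices, not the paper's parameters.

```python
import cv2
import numpy as np

def hand_mask(bgr):
    """Skin segmentation in YCrCb; the Cr/Cb bounds are rough generic values."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

def cog_profile(mask, samples=64):
    """Distances from the hand's center of gravity to its contour, sampled starting at
    the contour point farthest from the COG so the profile is rotation-aligned."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float64)
    cog = contour.mean(axis=0)
    dists = np.linalg.norm(contour - cog, axis=1)
    rolled = np.roll(dists, -int(dists.argmax()))    # farthest point becomes the start
    idx = np.linspace(0, len(rolled) - 1, samples).astype(int)
    profile = rolled[idx]
    return profile / profile.max()                   # scale-normalised profile

if __name__ == "__main__":
    img = np.zeros((200, 200, 3), np.uint8)
    cv2.circle(img, (100, 100), 60, (150, 160, 190), -1)   # a skin-toned dummy blob
    print(cog_profile(hand_mask(img))[:8])
```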

A Study on Manipulating Method of 3D Game in HMD Environment by using Eye Tracking (HMD(Head Mounted Display)에서 시선 추적을 통한 3차원 게임 조작 방법 연구)

  • Park, Kang-Ryoung;Lee, Eui-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.2 / pp.49-64 / 2008
  • Recently, much research on more comfortable input devices based on gaze detection technology has been done in human computer interface. However, system cost becomes high due to the complicated hardware, and complicated user calibration procedures make gaze detection systems difficult to use. In this paper, we propose a new gaze detection method based on 2D analysis and a simple user calibration. Our method uses a small USB (Universal Serial Bus) camera attached to an HMD (Head-Mounted Display), a hot mirror, and an IR (Infra-Red) light illuminator. Because the HMD moves together with the user's head, we can implement a gaze detection system whose performance is not affected by facial movement. In addition, we apply our gaze detection system to a 3D first-person shooting game: the gaze direction of the game character is controlled by our gaze detection method, so the player can target the enemy character and shoot, which increases the immersion and interest of the game. Experimental results showed that the game and the gaze detection system could be operated at real-time speed on one desktop computer with a gaze detection accuracy of 0.88 degrees, and that our gaze detection technology could replace the conventional mouse in the 3D first-person shooting game.
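
The reported 0.88 degrees is an angular accuracy. As a small illustration of how an on-screen gaze error in pixels converts to degrees, with hypothetical screen geometry rather than the paper's HMD optics:

```python
import math

def angular_error_deg(pixel_error, pixel_pitch_mm, viewing_distance_mm):
    """Convert an on-screen gaze error in pixels to an angular error in degrees."""
    return math.degrees(math.atan((pixel_error * pixel_pitch_mm) / viewing_distance_mm))

if __name__ == "__main__":
    # Hypothetical geometry: 0.3 mm pixel pitch viewed from 600 mm.
    for px in (10, 30, 50):
        print(px, "px ->", round(angular_error_deg(px, 0.3, 600.0), 2), "deg")
```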

A Multi Small Humanoid Robot Control for Efficient Robot Performance (효율적인 전시공연을 위한 멀티 소형 휴머노이드 로봇제어)

  • Jang, Jun-Young;Lin, Chi-Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.12 / pp.8933-8939 / 2015
  • In this paper, we design and implement a multi-humanoid robot control method for exhibition performances that maximizes efficiency and user convenience. In recent years, robots have increasingly been used in performances and exhibitions across various genres, including plays, musicals, and orchestra performances. In existing small-humanoid exhibition performances, the sound is played from an external computer or MP3 player while the start button of the communication equipment is pressed to start the robots. Because the sound source and the robot operation are separated, synchronization between the starting point of the sound and the start of the robot motion often fails, and the show frequently has to be restarted. In addition, when a robot loses its center of gravity during the performance, the performance must be interrupted or restarted. To overcome this, a GUI-based human interface S/W was designed to control multiple small humanoid robots efficiently and to maximize user convenience, and Zigbee communication was used to transmit data to the multiple small humanoids. The effectiveness, validity, and user convenience of the method were demonstrated through an actual implementation targeting a number of small humanoid robots.
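
The core design point above is that the sound and the robot start command are issued from a single control path instead of two separate sources. The sketch below is a minimal illustration of that idea, assuming a Zigbee coordinator module on a serial port and a hypothetical one-byte robot addressing frame; the actual robot protocol and GUI are not given in the abstract.

```python
import time
import serial  # pyserial; assumes a Zigbee coordinator module exposed as a serial port

PORT = "/dev/ttyUSB0"          # hypothetical serial port of the Zigbee coordinator
ROBOT_IDS = [1, 2, 3, 4]       # hypothetical addresses of the small humanoids

def start_show(start_sound):
    """Broadcast the start command to every robot and start the sound from one place,
    so robot motion and audio playback share a single starting point."""
    with serial.Serial(PORT, 115200, timeout=1) as link:
        for robot_id in ROBOT_IDS:
            # Hypothetical framing: STX, robot id, "start" opcode, ETX.
            link.write(bytes([0x02, robot_id, 0x01, 0x03]))
        link.flush()
    t0 = time.time()
    start_sound()              # audio starts immediately after the broadcast
    return t0

if __name__ == "__main__":
    start_show(lambda: print("sound playback would start here"))
```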

Face Detection in Color Images Based on Skin Region Segmentation and Neural Network (피부 영역 분할과 신경 회로망에 기반한 칼라 영상에서 얼굴 검출)

  • Lee, Young-Sook;Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.6 no.12 / pp.1-11 / 2006
  • Many research demonstrations and commercial applications have attempted to develop face detection and recognition systems. Human face detection plays an important role in applications such as access control, video surveillance, human computer interfaces, and identity authentication. In general, skin region segmentation presents some special problems: a face connected with the background, faces connected to each other via skin color, and a face divided into several small parts. Many face detection techniques can handle the first and second problems, but in the third case it is not easy to detect a face divided into several regions because of different illumination conditions. Therefore, we propose an efficient modified skin segmentation algorithm to solve this problem, since typical region segmentation algorithms cannot. Our algorithm detects skin regions over the entire image and then generates face candidate regions using our skin segmentation algorithm. For each face candidate, we perform region merging of divided regions into a single region using adjacency between homogeneous regions. We use various searching window sizes to detect faces of different sizes, and a face detection classifier based on a back-propagation algorithm to verify whether a searching window contains a face or not.

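A minimal sketch of the multi-scale searching-window step described above. The window sizes and step fraction are illustrative, and the trained back-propagation classifier is replaced by a stand-in predicate.

```python
import numpy as np

def sliding_windows(image, window_sizes, step_frac=0.25):
    """Yield (x, y, size, patch) for square windows at several sizes, as in multi-scale face search."""
    h, w = image.shape[:2]
    for size in window_sizes:
        step = max(1, int(size * step_frac))
        for y in range(0, h - size + 1, step):
            for x in range(0, w - size + 1, step):
                yield x, y, size, image[y:y + size, x:x + size]

def is_face(patch):
    """Stand-in for the trained back-propagation face/non-face classifier;
    here it only checks mean brightness so the example runs on its own."""
    return patch.mean() > 100

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    gray = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
    hits = [(x, y, s) for x, y, s, p in sliding_windows(gray, (24, 32, 48)) if is_face(p)]
    print(len(hits), "candidate windows passed the stand-in classifier")
```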