• Title/Summary/Keyword: gesture control

Emotion Recognition Method using Gestures and EEG Signals (제스처와 EEG 신호를 이용한 감정인식 방법)

  • Kim, Ho-Duck; Jung, Tae-Min; Yang, Hyun-Chang; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.9 / pp.832-837 / 2007
  • Electroencephalography (EEG) has been used for many years in psychology to record the activity of the human brain. As technology develops, the neural basis of the functional areas involved in emotion processing is gradually being revealed, so we use EEG to measure the fundamental areas of the human brain that control emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language in human communication, and their recognition matters because gestures are a useful communication medium between humans and computers; gesture recognition is typically studied with computer vision. Existing studies on emotion recognition use either EEG signals or gestures alone. In this paper, we use EEG signals and gestures together for human emotion recognition, selecting driver emotion as the specific target. The experimental results show that using both EEG signals and gestures yields higher recognition rates than using either one alone. For both EEG signals and gestures, features are chosen with Interactive Feature Selection (IFS), a feature-selection method based on reinforcement learning.
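
A minimal sketch of the feature-selection idea behind this entry. It is not the authors' Interactive Feature Selection: it replaces the reinforcement-learning criterion with a plain greedy accuracy score, and the data, classifier, and feature count are all invented for illustration.

```python
# Greedy forward feature selection on synthetic "EEG + gesture" features.
# NOT the authors' IFS; only the general shape of iterative selection.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 samples, 10 features (e.g., EEG band powers and
# gesture statistics), 2 emotion classes. Features 0 and 3 are made
# informative on purpose.
n, d = 200, 10
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, 0] += 1.5 * y          # informative feature
X[:, 3] -= 1.0 * y          # informative feature

def score(features):
    """Training accuracy of a nearest-class-centroid classifier."""
    Xs = X[:, features]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return (pred == y).mean()

selected = []
for _ in range(3):  # pick 3 features greedily
    best = max((f for f in range(d) if f not in selected),
               key=lambda f: score(selected + [f]))
    selected.append(best)
    print(f"selected {selected}, accuracy {score(selected):.2f}")
```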

Analysis of Face Direction and Hand Gestures for Recognition of Human Motion (인간의 행동 인식을 위한 얼굴 방향과 손 동작 해석)

  • Kim, Seong-Eun; Jo, Gang-Hyeon; Jeon, Hui-Seong; Choe, Won-Ho; Park, Gyeong-Seop
    • Journal of Institute of Control, Robotics and Systems / v.7 no.4 / pp.309-318 / 2001
  • In this paper, we describe methods for analyzing human gestures. A human interface (HI) system for analyzing gestures extracts the head and hand regions after capturing an image sequence of an operator's continuous behavior with CCD cameras. Because gestures are performed with the operator's head and hand motions, we extract the head and hand regions and compute geometrical information about the extracted skin regions. Head motion is analyzed by obtaining the face direction: we model the head as an ellipsoid in 3D coordinates and locate facial features such as the eyes, nose, and mouth on its surface. Once the center of these feature points is known, the angle of that center on the ellipsoid gives the direction of the face. The hand region obtained from preprocessing may include the arm as well as the hand, so to extract only the hand we find the wrist line that divides the hand and arm regions. After isolating the hand region by the wrist line, we model it as an ellipse for the analysis of hand data; the finger part is represented as a long, narrow shape. We extract hand information such as size, position, and shape.
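
The final modelling step of this entry, fitting the segmented hand as an ellipse, maps directly onto OpenCV. A minimal sketch, with a synthetic blob standing in for the paper's skin-segmented hand region:

```python
# Fit an ellipse to a segmented hand blob and read off its position, size,
# and orientation. The synthetic mask stands in for the segmentation output.
import cv2
import numpy as np

mask = np.zeros((240, 320), np.uint8)
cv2.ellipse(mask, (160, 120), (60, 25), 30, 0, 360, 255, -1)  # fake hand blob

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)            # largest blob = hand
(cx, cy), (major, minor), angle = cv2.fitEllipse(hand)
print(f"center=({cx:.0f},{cy:.0f}) axes=({major:.0f},{minor:.0f}) angle={angle:.0f}")
```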

A Study of Hand Gesture Recognition for Human Computer Interface (컴퓨터 인터페이스를 위한 Hand Gesture 인식에 관한 연구)

  • Chang, Ho-Jung; Baek, Han-Wook; Chung, Chin-Hyun
    • Proceedings of the KIEE Conference / 2000.07d / pp.3041-3043 / 2000
  • The GUI (graphical user interface) has been the dominant platform for HCI (human-computer interaction). The GUI-based style of interaction has made computers simpler and easier to use. However, GUIs do not easily support the natural, intuitive, and adaptive range of interaction that users need. In this paper we study an approach that tracks a hand through an image sequence and recognizes it in each video frame, replacing the mouse as a pointing device for virtual reality. An algorithm for real-time processing is proposed that estimates the position of the hand and segments it, considering the orientation of motion and the color distribution of the hand region.
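
The color-based segmentation this abstract relies on is commonly done by thresholding skin tones in HSV space and keeping the largest blob. The sketch below takes that common approach; the HSV bounds are illustrative, not the authors' model:

```python
# Rough color-based hand localization: threshold skin tones in HSV, clean up
# with morphology, and return the centroid of the largest connected region.
import cv2
import numpy as np

def find_hand(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))   # loose skin-tone band
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return (x + w // 2, y + h // 2)   # hand position to drive the pointer

cap = cv2.VideoCapture(0)             # assumes a webcam is attached
ok, frame = cap.read()
if ok:
    print("hand at", find_hand(frame))
cap.release()
```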

Drone Hand Gesture Control System for DJI Mavic Air (DJI 매빅에이어를 위한 드론 손 제스처 제어 시스템)

  • Hamzah, Mohd Haziq bin; Jung, Jinwoong; Lee, Joohyun; Choo, Hyunseung
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.333-334 / 2018
  • This is a study on controlling a drone (DJI Mavic Air) with simple hand gestures using a Leap Motion controller. The four components involved are a MacBook, a Leap Motion controller, an Android device, and the DJI Mavic Air, connected through USB, Bluetooth, and Wi-Fi. The study's main purpose is to show that by controlling a drone through the Leap Motion, amateur users can easily learn to fly it, and that the longer control range this enables makes tasks such as search-and-rescue missions possible.
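
The abstract does not describe how palm readings become flight commands, so the sketch below is purely hypothetical: a Leap-Motion-style palm position mapped to Mavic Air actions through invented thresholds and command names.

```python
# Hypothetical mapping from a palm reading to drone commands. The PalmReading
# type, thresholds, and command names are all assumptions for illustration;
# the real system forwards commands via the Android device to the drone.
from dataclasses import dataclass

@dataclass
class PalmReading:
    x: float  # left/right, mm from sensor centre
    y: float  # height above sensor, mm
    z: float  # forward/back, mm

def palm_to_command(p: PalmReading, dead_zone: float = 40.0) -> str:
    if p.y > 250:        return "ascend"
    if p.y < 120:        return "descend"
    if p.x > dead_zone:  return "roll_right"
    if p.x < -dead_zone: return "roll_left"
    if p.z < -dead_zone: return "pitch_forward"
    if p.z > dead_zone:  return "pitch_back"
    return "hover"

print(palm_to_command(PalmReading(x=10.0, y=300.0, z=0.0)))  # -> "ascend"
```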

Video event control system by recognition of depth touch (깊이 터치를 통한 영상 이벤트 제어 시스템)

  • Lee, Dong-Seok; Kwon, Soon-Kak
    • Journal of Korea Society of Industrial Information Systems / v.21 no.1 / pp.35-42 / 2016
  • Various events such as stop, playback, capture, and zoom-in/out during video playback are available on small displays such as smartphones. As the display size increases, however, the cost of touch recognition rises, so providing touch events becomes impractical. In this paper, we propose a video event control system that recognizes touches inexpensively from depth information and provides a variety of events, such as toggle and pinch-in/out, through single or multi-touch. The proposed method finds the touch location and the touch path from the depth information of a depth camera and determines the touch gesture type. The touch interface algorithm is implemented on a small single-board system and controls video events by sending the gesture information over UART communication. Simulation results show that the proposed depth-touch method can control various events on a large screen.
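
Recognizing a touch from depth reduces to comparing each frame against the known depth of the background surface and keeping pixels that come within a small band of it. A minimal sketch with synthetic frames and assumed thresholds:

```python
# Depth-touch core idea: pixels whose depth lies within a few millimetres of
# the background surface count as "touching". Frames and thresholds are
# synthetic stand-ins for a real depth camera.
import numpy as np

background = np.full((240, 320), 1500, np.uint16)   # flat surface at 1500 mm
frame = background.copy()
frame[100:110, 150:160] = 1492                      # fingertip 8 mm above surface

TOUCH_MIN, TOUCH_MAX = 2, 12                        # mm band counted as a touch
diff = background.astype(int) - frame.astype(int)
touch_mask = (diff >= TOUCH_MIN) & (diff <= TOUCH_MAX)

ys, xs = np.nonzero(touch_mask)
if xs.size:
    # Tracking this centroid across frames gives the touch path that the
    # paper classifies into gesture types (toggle, pinch-in/out, ...).
    print(f"touch at ({xs.mean():.0f}, {ys.mean():.0f}), {xs.size} px")
```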

Implementation of a DI Multi-Touch Display Using an Improved Touch-Points Detection and Gesture Recognition (개선된 터치점 검출과 제스쳐 인식에 의한 DI 멀티터치 디스플레이 구현)

  • Lee, Woo-Beom
    • Journal of the Institute of Convergence Signal Processing / v.11 no.1 / pp.13-18 / 2010
  • Most research in the multi-touch area is based on FTIR (Frustrated Total Internal Reflection) and simply reuses that earlier approach; moreover, there are no software solutions that improve performance in multi-touch blob detection or user gesture recognition. We therefore implement a multi-touch table-top display based on DI (Diffused Illumination) with improved touch-point detection and user gesture recognition. The proposed method supports simultaneous multi-touch transformation commands for objects in the running application, and system latency is reduced by the proposed pre-testing method in the multi-touch blob detection processing. The implemented device is evaluated with a Flash AS3 application in the TUIO (Tangible User Interface Object) environment, which is based on the OSC (Open Sound Control) protocol. As a result, our system shows a 37% reduction in system latency and succeeds in multi-touch gesture recognition.
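
Touch-blob detection in a DI setup amounts to thresholding the IR camera image and labelling connected components; a cheap size check can reject noise blobs early, loosely in the spirit of the paper's latency-reducing pre-testing. All specifics below are assumptions:

```python
# Label candidate touch blobs in a (synthetic) IR camera image and keep only
# blobs of plausible fingertip size before any further processing.
import cv2
import numpy as np

ir = np.zeros((240, 320), np.uint8)
cv2.circle(ir, (100, 100), 6, 220, -1)   # two fake fingertip blobs
cv2.circle(ir, (200, 150), 7, 230, -1)

_, binary = cv2.threshold(ir, 128, 255, cv2.THRESH_BINARY)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

for i in range(1, n):                     # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]
    if 20 <= area <= 400:                 # pre-test: plausible fingertip size
        cx, cy = centroids[i]
        print(f"touch blob at ({cx:.0f}, {cy:.0f}), area {area}")
```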

Presentation Control System using Gesture Recognition and Sensor (제스처 인식과 센서를 이용한 프레젠테이션 제어 시스템)

  • Chang, Moon-Soo; Kwak, Sun-Dong; Kang, Sun-Mee
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.481-486 / 2011
  • Recently, most presentations have been given on a screen using a computer. This paper shows that the computer can be controlled by gesture recognition alone, without the help of any person or tool. Using image information only would require a high-resolution camera and a computer able to process high-resolution images; this paper instead presents a solution in which a low-resolution camera can be used on stage. An ultrasonic sensor traces the presenter's location, and the low-resolution camera captures only the small area needed. Gestures are defined by the number of fingers and the hand positions, recognized with erosion/dilation and subtraction algorithms. In comparative tests, the proposed system improves on an image-only system by 13%, and the gesture recognition tests show a 98% success rate.
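
The erosion/dilation-and-subtraction step can be sketched directly: morphologically open the hand mask so the fingers vanish, subtract the surviving palm, and count the finger blobs. The mask, kernel size, and area filter below are illustrative:

```python
# Count fingers by opening the hand mask (erode then dilate) to isolate the
# palm, subtracting it, and labelling the leftover finger blobs.
import cv2
import numpy as np

hand = np.zeros((200, 200), np.uint8)
cv2.rectangle(hand, (55, 100), (145, 180), 255, -1)   # palm
for x in (70, 90, 110, 130):                          # four fake fingers
    cv2.rectangle(hand, (x - 5, 40), (x + 5, 110), 255, -1)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))
palm = cv2.dilate(cv2.erode(hand, kernel), kernel)    # opening removes fingers
fingers = cv2.subtract(hand, palm)                    # what is left = fingers
n, _, stats, _ = cv2.connectedComponentsWithStats(fingers)
count = sum(1 for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 100)
print("finger count:", count)                         # -> 4
```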

A Personalized Hand Gesture Recognition System using Soft Computing Techniques (소프트 컴퓨팅 기법을 이용한 개인화된 손동작 인식 시스템)

  • Jeon, Moon-Jin; Do, Jun-Hyeong; Lee, Sang-Wan; Park, Kwang-Hyun; Bien, Zeung-Nam
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.53-59 / 2008
  • Recently, vision-based hand gesture recognition techniques have been developed to help elderly and disabled people control home appliances. The problems that most often lower the hand gesture recognition rate stem from inter-person and intra-person variation. The difficulty caused by inter-person variation can be handled with user-dependent models and a model selection technique, while the difficulty caused by intra-person variation can be handled with fuzzy logic. In this paper, we propose a multivariate fuzzy decision tree learning and classification method for a hand motion recognition system serving multiple users. When a user starts to use the system, the most appropriate recognition model is selected and used for that user.
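
The paper's multivariate fuzzy decision tree is not reproduced here; the toy below only shows the underlying fuzzy-membership idea, i.e. how graded memberships absorb intra-person variation where a crisp threshold would flip its decision. The feature and breakpoints are invented:

```python
# Toy fuzzy-membership illustration: graded memberships change smoothly with
# the feature value instead of flipping at a crisp threshold. Not the paper's
# multivariate fuzzy decision tree.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# Hypothetical feature: hand speed (px/frame) for gestures "point" vs "wave".
for speed in (2.0, 5.0, 8.0):
    mu_point = tri(speed, 0, 2, 6)    # slow movements look like "point"
    mu_wave = tri(speed, 4, 9, 14)    # fast movements look like "wave"
    label = "point" if mu_point > mu_wave else "wave"
    print(f"speed={speed}: point={mu_point:.2f} wave={mu_wave:.2f} -> {label}")
```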

An Extraction Method of Meaningful Hand Gesture for a Robot Control (로봇 제어를 위한 의미 있는 손동작 추출 방법)

  • Kim, Aram; Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.27 no.2 / pp.126-131 / 2017
  • In this paper, we propose a method to extract the meaningful motion from the various hand gestures made while giving commands to a robot. When commanding a robot, a person's hand gestures can be divided into a preparation motion, a main motion, and a finishing motion. The main motion is the one that actually transmits the command; the others are meaningless auxiliary motions needed to perform it. It is therefore necessary to extract only the main motion from the continuous stream of hand gestures. People also move their hands unconsciously, and the robot must judge these actions as meaningless too. In this study, we extract human skeleton data from a depth image obtained with a Kinect v2 sensor and take the hand location data from it. Using a Kalman filter, we track the location of the hand and distinguish meaningful from meaningless motion, then recognize the hand gesture with a hidden Markov model.
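
The Kalman-filter tracking step can be sketched as a constant-velocity filter over the 2D hand position. The segmentation into preparation/main/finishing motions and the HMM classifier are not shown, and the motion and noise parameters are illustrative:

```python
# Constant-velocity Kalman tracker for a 2D hand position: predict with a
# linear motion model, then correct with each noisy measurement.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we observe position only
Q = 0.01 * np.eye(4)          # process noise
R = 4.0 * np.eye(2)           # measurement noise
x = np.zeros(4)               # state: [px, py, vx, vy]
P = np.eye(4)

rng = np.random.default_rng(1)
for t in range(10):
    true_pos = np.array([5.0 * t, 2.0 * t])         # hand moving diagonally
    z = true_pos + rng.normal(0, 2, 2)              # noisy measurement
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    # The estimated speed is one cue for separating main motion from
    # preparation/finishing motion.
    print(f"t={t} estimate=({x[0]:.1f}, {x[1]:.1f}) speed={np.hypot(x[2], x[3]):.1f}")
```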

A Study on the Motion and Voice Recognition Smart Mirror Using Grove Gesture Sensor (그로브 제스처 센서를 활용한 모션 및 음성 인식 스마트 미러에 관한 연구)

  • Hui-Tae Choi; Chang-Hoon Go; Ji-Min Jeong; Ye-Seul Shin; Hyoung-Keun Park
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.6 / pp.1313-1320 / 2023
  • This paper presents the development of a smart mirror whose display is controlled through Grove gesture sensing and which integrates voice recognition. The hardware consists of an LCD monitor combined with an acrylic panel, onto which a semi-mirror film with a reflectance of 37% and a transmittance of 36% is attached, enabling it to function as both a mirror and a display. The proposed smart mirror eliminates the need for users to physically touch the mirror or operate a keyboard, implementing gesture control through the Grove gesture sensor. It also incorporates voice recognition and integrates Google Assistant to display on-screen results corresponding to the user's voice commands.
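
On the mirror side, gesture handling plausibly reduces to a dispatch table. In the sketch below, read_gesture() is a placeholder for the Grove gesture sensor driver (e.g. a PAJ7620-based module), and the gesture names and actions are assumptions:

```python
# Hypothetical gesture-to-action dispatch for the smart mirror display.
ACTIONS = {
    "up":    lambda: print("show weather widget"),
    "down":  lambda: print("hide weather widget"),
    "left":  lambda: print("previous page"),
    "right": lambda: print("next page"),
}

def read_gesture() -> str:
    """Placeholder: a real driver would poll the Grove sensor over I2C."""
    return "up"

gesture = read_gesture()
ACTIONS.get(gesture, lambda: None)()   # ignore unmapped gestures
```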