• Title/Summary/Keyword: 제스처 인식 (gesture recognition)


Primitive Body Model Encoding and Selective / Asynchronous Input-Parallel State Machine for Body Gesture Recognition (바디 제스처 인식을 위한 기초적 신체 모델 인코딩과 선택적 / 비동시적 입력을 갖는 병렬 상태 기계)

  • Kim, Juchang;Park, Jeong-Woo;Kim, Woo-Hyun;Lee, Won-Hyong;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.8 no.1 / pp.1-7 / 2013
  • Body gesture recognition has been one of the most actively studied fields in Human-Robot Interaction (HRI). Most conventional body gesture recognition algorithms use a Hidden Markov Model (HMM) to model gestures, which have spatio-temporal variability. However, HMM-based algorithms have difficulty excluding meaningless gestures. Moreover, conventional algorithms must first perform gesture segmentation and then send the extracted gesture to the HMM for recognition. This separated pipeline introduces a time delay between two consecutive gestures, making the system unsuitable for continuous gesture recognition. To overcome these two limitations, this paper proposes primitive body model encoding, which performs spatio-temporal quantization of motions from a human body model and encodes them into predefined primitive codes for each link of the body model, together with a Selective/Asynchronous Input-Parallel State Machine (SAI-PSM) for multiple simultaneous gesture recognition. Experimental results show that the proposed system excludes meaningless gestures well from continuous body model data while performing multiple simultaneous gesture recognition without any loss in recognition rate compared to the previous HMM-based work.
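The encoding step described in the abstract can be illustrated with a minimal sketch: each body-model link's frame-to-frame motion vector is quantized in time (a stillness threshold) and in space (nearest direction in a codebook). The six-direction codebook, the threshold value, and the function name here are illustrative assumptions, not the paper's actual primitive set.

```python
import math

# Hypothetical primitive codebook: six axis-aligned directions plus "STILL".
PRIMITIVES = {
    "RIGHT": (1, 0, 0), "LEFT": (-1, 0, 0),
    "UP": (0, 1, 0), "DOWN": (0, -1, 0),
    "FWD": (0, 0, 1), "BACK": (0, 0, -1),
}

def encode_motion(delta, still_thresh=0.05):
    """Quantize one link's frame-to-frame motion vector into a primitive code."""
    mag = math.sqrt(sum(d * d for d in delta))
    if mag < still_thresh:  # temporal quantization: tiny motions become "STILL"
        return "STILL"
    # spatial quantization: codebook direction with maximum cosine similarity
    return max(PRIMITIVES,
               key=lambda k: sum(p * d for p, d in zip(PRIMITIVES[k], delta)) / mag)

# A raised-forearm motion encodes as "UP".
print(encode_motion((0.01, 0.4, 0.02)))   # -> UP
```

A state machine like the SAI-PSM would then consume streams of such codes, one stream per link, which is what allows meaningless code sequences to simply never reach an accepting state.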

Platform Independent Game Development Using HTML5 Canvas (HTML5 캔버스를 이용한 플랫폼 독립적인 게임의 구현)

  • Jang, Seok-Woo;Huh, Moon-Haeng
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.12 / pp.3042-3048 / 2014
  • Recently, HTML5 has drawn much attention because it is considered the next-generation web standard and can realize many graphics- and multimedia-related techniques in a web browser without separately installed programs. In this paper, we implement a game that is independent of platforms such as iOS and Android using the HTML5 canvas. In the game, the main character moves up, down, left, and right to avoid colliding with neighboring enemies. If the character collides with an enemy, the HP (hit point) gauge bar decreases; if the character obtains heart items, the gauge bar increases. In the future, we will add various items to the game and diversify its user interfaces by applying computer vision techniques such as gesture recognition.
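The collision/HP mechanic described above is language-agnostic; a minimal sketch is shown here in Python for brevity (the paper's implementation runs in JavaScript on the HTML5 canvas). The box sizes, hit cost, and heal amount are made-up illustration values.

```python
def aabb_overlap(a, b):
    """Axis-aligned bounding-box test; each box is (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def update_hp(hp, character, enemies, hearts, hit_cost=10, heal=5, max_hp=100):
    """Decrease HP on enemy contact, increase it on heart pickup (clamped)."""
    for e in enemies:
        if aabb_overlap(character, e):
            hp -= hit_cost
    for h in hearts:
        if aabb_overlap(character, h):
            hp = min(max_hp, hp + heal)
    return max(0, hp)

# Character touches one enemy and one heart: 100 - 10 + 5 -> 95
print(update_hp(100, (0, 0, 16, 16), [(8, 8, 16, 16)], [(10, 0, 8, 8)]))
```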

Development for Multi-modal Realistic Experience I/O Interaction System (멀티모달 실감 경험 I/O 인터랙션 시스템 개발)

  • Park, Jae-Un;Whang, Min-Cheol;Lee, Jung-Nyun;Heo, Hwan;Jeong, Yong-Mu
    • Science of Emotion and Sensibility / v.14 no.4 / pp.627-636 / 2011
  • The purpose of this study is to develop a multi-modal interaction system that provides a realistic and immersive experience through multi-modal interaction. The system recognizes user behavior, intention, and attention, overcoming the limitations of uni-modal interaction. It is based on gesture interaction methods, intuitive gesture interaction, and attention evaluation technology. The gesture interaction methods were based on sensors selected by analyzing the accuracy of 3-D gesture recognition technology using meta-analysis, the elements of intuitive gesture interaction were derived from experimental results, and the attention evaluation technology was developed through physiological signal analysis. The system is divided into three modules: a motion cognitive system, which uses an accelerometer and flexible sensors to recognize the user's hand and finger movements; an eye gaze detecting system, which detects pupil movements and reactions; and a bio-reaction sensing (attention evaluating) system, which tracks cardiovascular and skin temperature reactions. This study will be used for the development of realistic digital entertainment technology.


NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features (사용자의 신체적 특징과 뇌파 집중 지수를 이용한 가상 모니터 개념의 NUI/NUX)

  • Jeon, Chang-hyun;Ahn, So-young;Shin, Dong-il;Shin, Dong-kyoo
    • Journal of Internet Computing and Services / v.16 no.6 / pp.11-21 / 2015
  • With growing interest in Human-Computer Interaction (HCI), research on HCI has been actively conducted, and with it, research on Natural User Interface/Natural User eXperience (NUI/NUX) that uses a user's gestures and voice. NUI/NUX requires recognition algorithms such as gesture recognition or voice recognition, but these algorithms are complex to implement and need a lot of training time, because they must go through steps including preprocessing, normalization, and feature extraction. Recently, Kinect, launched by Microsoft as an NUI/NUX development tool, has attracted attention, and studies using Kinect have been conducted. In a previous study, the authors implemented a hand-mouse interface with outstanding intuitiveness using the user's physical features; however, it suffered from unnatural mouse movement and low accuracy of the mouse functions. In this study, we designed and implemented a hand-mouse interface that introduces a new concept called the 'virtual monitor', extracting the user's physical features through Kinect in real time. A virtual monitor is a virtual space that can be controlled by the hand mouse, in which coordinates on the virtual monitor are accurately mapped onto coordinates on the real monitor. The hand-mouse interface based on the virtual monitor concept maintains the outstanding intuitiveness of the previous study while enhancing the accuracy of the mouse functions. Furthermore, we increased the accuracy of the interface by recognizing the user's unnecessary actions using a concentration indicator derived from electroencephalogram (EEG) data. To evaluate the intuitiveness and accuracy of the interface, we tested it with 50 people ranging in age from their teens to their fifties. In the intuitiveness experiment, 84% of the subjects learned how to use it within one minute; in the accuracy experiment, the mouse functions achieved accuracies of 80.4% (drag), 80% (click), and 76.7% (double-click). Having verified its intuitiveness and accuracy through these experiments, the proposed hand-mouse interface is expected to be a good example of an interface for controlling systems by hand in the future.
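The core of the virtual monitor idea, mapping a hand position on a user-calibrated virtual plane onto real-monitor pixels, can be sketched as a simple linear mapping. The calibration values and function name below are hypothetical; the paper derives the virtual plane from the user's physical features via Kinect.

```python
def map_virtual_to_real(hand, v_origin, v_size, screen=(1920, 1080)):
    """Linearly map a hand point on the virtual monitor plane to screen pixels.

    hand, v_origin, v_size are (x, y) in the same physical units (e.g. metres);
    v_origin is the virtual monitor's top-left corner, v_size its width/height.
    """
    u = (hand[0] - v_origin[0]) / v_size[0]
    v = (hand[1] - v_origin[1]) / v_size[1]
    # clamp so points slightly off the virtual plane stay on screen
    u, v = min(max(u, 0.0), 1.0), min(max(v, 0.0), 1.0)
    return round(u * (screen[0] - 1)), round(v * (screen[1] - 1))

# Hand at the centre of a 0.4 m x 0.225 m virtual plane -> screen centre
print(map_virtual_to_real((0.2, 0.1125), (0.0, 0.0), (0.4, 0.225)))
```

Keeping the virtual plane's aspect ratio equal to the monitor's (16:9 here) avoids distorting hand motion, which is one plausible reason accurate coordinate mapping improves the feel of the mouse functions.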

Visual Touchless User Interface for Window Manipulation (윈도우 제어를 위한 시각적 비접촉 사용자 인터페이스)

  • Kim, Jin-Woo;Jung, Kyung-Boo;Jeong, Seung-Do;Choi, Byung-Uk
    • Journal of KIISE: Software and Applications / v.36 no.6 / pp.471-478 / 2009
  • Recently, research on user interfaces has advanced remarkably due to the explosive growth of 3-dimensional contents and applications and the widening base of computer users. This paper proposes a novel method to manipulate windows efficiently using only intuitive hand motions. Previous methods have drawbacks such as the burden of expensive devices, the high complexity of gesture recognition, and the need for additional information from markers. To remedy these defects, we propose a novel visual touchless interface. First, we detect the hand region using the hue channel in HSV color space. The distance transform is applied to detect the centroid of the hand, and the curvature of the hand contour is used to determine the positions of the fingertips. Finally, using the hand motion information, we recognize the hand gesture as one of seven predefined motions; the recognized gesture becomes a command to control the window. Because the method adopts a stereo camera, the user can manipulate windows with a sense of depth in the real environment. Intuitive manipulation is also available because the method supports visual touch of the virtual object the user wants to manipulate using only simple hand motions. Finally, the efficiency of the proposed method is verified via an application based on the proposed interface.
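The first step, hue-based hand segmentation, can be sketched as follows. The hue and saturation thresholds are illustrative guesses, not the paper's tuned values, and a real implementation would operate on camera frames rather than nested lists.

```python
import colorsys

def skin_mask(rgb_image, hue_max=0.1):
    """Crude skin segmentation on the hue channel: skin hues cluster near red.

    rgb_image is a list of rows of (r, g, b) tuples in 0..255.
    """
    mask = []
    for row in rgb_image:
        out = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            # keep reddish hues (hue wraps at 1.0) with enough saturation
            # to reject grey background pixels
            out.append(1 if (h <= hue_max or h >= 1 - hue_max) and s > 0.2 else 0)
        mask.append(out)
    return mask

# A skin-toned pixel passes, a blue one does not.
print(skin_mask([[(220, 160, 130), (40, 60, 200)]]))   # -> [[1, 0]]
```

The distance transform of this binary mask then peaks at the palm centre, which is why it is a convenient centroid estimator before the contour-curvature fingertip step.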

Design and Development of Multiple Input Device and Multiscale Interaction for GOCI Observation Satellite Imagery on the Tiled Display (타일드 디스플레이에서의 천리안 해양관측 위성영상을 위한 다중 입력 장치 및 멀티 스케일 인터랙션 설계 및 구현)

  • Park, Chan-Sol;Lee, Kwan-Ju;Kim, Nak-Hoon;Lee, Sang-Ho;Seo, Ki-Young;Park, Kyoung Shin
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.3 / pp.541-550 / 2014
  • This paper describes a multi-scale user-interaction tiled display visualization system that uses multiple input devices for monitoring and analyzing Geostationary Ocean Color Imager (GOCI) observation satellite imagery. The system provides multi-touch screen, Kinect motion sensing, and mobile interfaces so that multiple users can control the satellite imagery either in front of the tiled display screen or far away from it, to view marine environmental or climate changes around the Korean peninsula more effectively. Because loading high-resolution GOCI satellite images requires a large amount of memory, we employed a multi-level image loading technique in which the image is divided into small tiles, to reduce the load on the system and keep user manipulation smooth. The system abstracts common input information from multi-user Kinect motions and gestures, multi-touch points, and mobile interaction information to enable a variety of user interactions for any tiled display application. In addition, the satellite images corresponding to the selected dates are displayed sequentially on the screen, and multiple users can zoom in and out, move the imagery, and select buttons to trigger functions.
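The multi-level loading idea can be sketched as: pick a pyramid level from the zoom factor, then list only the tiles that intersect the viewport. Tile size, level count, and the coordinate convention below are assumptions for illustration, not the system's actual parameters.

```python
import math

def visible_tiles(viewport, zoom, tile_px=256, levels=5):
    """Pick a pyramid level from the zoom factor and list the tiles that
    intersect the viewport, so only a small subset of the image is loaded.

    viewport is (x, y, w, h) in level-0 (full-resolution) pixel coordinates;
    zoom 1.0 means one screen pixel per full-resolution pixel.
    """
    # coarser levels halve the resolution; clamp to the available pyramid
    level = min(levels - 1, max(0, int(math.log2(1 / zoom)))) if zoom < 1 else 0
    scale = 2 ** level
    x, y, w, h = (c / scale for c in viewport)
    first_col, first_row = int(x // tile_px), int(y // tile_px)
    last_col, last_row = int((x + w) // tile_px), int((y + h) // tile_px)
    return level, [(r, c) for r in range(first_row, last_row + 1)
                          for c in range(first_col, last_col + 1)]

level, tiles = visible_tiles((0, 0, 1000, 600), zoom=0.5)
print(level, tiles)   # level 1; the viewport covers a 2x2 block of 256-px tiles
```

Zooming out switches to a coarser level, so the tile count stays roughly constant regardless of how much of the satellite image is on screen.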

Hand Region Tracking and Fingertip Detection based on Depth Image (깊이 영상 기반 손 영역 추적 및 손 끝점 검출)

  • Joo, Sung-Il;Weon, Sun-Hee;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.18 no.8 / pp.65-75 / 2013
  • This paper proposes a method for tracking the hand region and detecting the fingertip using only depth images. To eliminate the influence of lighting conditions and obtain information quickly and stably, the method relies only on depth information; it also uses region growing to identify errors that can occur during tracking, and detects the fingertip in a way that can be applied to the recognition of various gestures. First, the closest point of approach is identified through the process of transferring the center point in order to locate the tracking point, and the region is grown from that point to detect the hand region and its boundary line. Next, the ratio of the invalid boundary obtained by region growing is used to calculate the validity of the tracking region and thereby judge whether tracking is normal. If tracking is normal, the contour line is extracted from the detected hand region, and curvature, RANSAC, and the convex hull are used to detect the fingertip. Lastly, quantitative and qualitative analyses verify the performance in various situations and prove the efficiency of the proposed tracking and fingertip detection algorithm.
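The region-growing step can be sketched on a toy depth map: starting from the tracking point, neighbours are absorbed while their depth stays within a tolerance of the seed depth. The tolerance value and 4-connectivity here are illustrative choices, not the paper's parameters.

```python
from collections import deque

def grow_region(depth, seed, tol=30):
    """4-connected region growing on a depth map: starting from the tracking
    point, absorb neighbours whose depth is within `tol` of the seed depth."""
    rows, cols = len(depth), len(depth[0])
    seed_d = depth[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region
                    and abs(depth[nr][nc] - seed_d) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# The hand (depth ~500) grows from the seed; the background (900) is excluded.
depth = [[500, 510, 900],
         [505, 520, 900],
         [900, 900, 900]]
print(sorted(grow_region(depth, (0, 0))))   # -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The boundary of the grown region is then what the paper's validity check inspects: a high ratio of invalid boundary pixels signals that tracking has drifted off the hand.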

Experience Design Guideline for Smart Car Interface (스마트카의 인터페이스를 위한 경험 디자인 가이드라인)

  • Yoo, Hoon Sik;Ju, Da Young
    • Design Convergence Study / v.15 no.1 / pp.135-150 / 2016
  • Due to the development of communication technology and the expansion of Intelligent Transport Systems (ITS), the car is changing from a simple mechanical device into a second living space with comprehensive convenience functions, evolving into a platform that serves as an interface for this role. As the interface area that provides information to passengers expands, research on smart-car-based user experience is growing in importance. This study proposes guidelines for smart car user experience elements. The elements were defined as function, interaction, and surface, and through discussions with UX/UI experts, 8 representative functions, 14 representative interaction techniques, and 8 glass window locations were specified for these elements. The users' priorities for the experience elements were then analyzed through a questionnaire survey of 100 drivers. The analysis showed that users prioritized the main functions in the order of safety, distance, and sensibility, and the interaction methods in the order of voice recognition, touch, gesture, physical button, and eye tracking. Regarding glass window locations, users prioritized the front of the driver's seat over the back. Demographic analysis by gender found no significant differences except for two functions, showing that common guidelines can be applied to both men and women. Through this user requirement analysis of the individual elements, the study provides guides, in order of priority, on what each element requires to be applied in commercial products.

Historical Evolution of Stage Costumes in Europe since the Second World War (제2차 세계대전 이후 나타난 유럽 무대의상의 사적 분석)

  • Na, In-Wha;Lee, Kyu-Hye
    • Journal of the Korean Society of Clothing and Textiles / v.31 no.12 / pp.1761-1771 / 2007
  • The artificial exaggeration of stage costumes is considered one of the major techniques for enhancing dramatic expression on stage, whether for visual impact or for the symbolic effect of dramatization. In the history of stage dressing, a variety of styles have been tried using different materials and production techniques, which may be viewed as an effort to express dramatic effects more effectively. As this trend became obvious in Europe after the Second World War, this study analyzes stage costume to deepen our understanding of the role of costumes in expressing dramatic effects. To accomplish this, we first summarized the history of stage costume materials and technical advances, and chose five major cases representing the history of stage costume in Europe since the Second World War, based on aesthetic and creative aspects: 1) the simplified stage of Jacques Copeau, 2) the stylized stage of Bertolt Brecht, 3) the essential stage of Grotowski, 4) the measured stage of Robert Wilson, and 5) the post-dramatic stage of Philippe Decouflé. In each case, the historical, material, and dramatic contexts were examined, as well as the different material effects. The results are as follows: 1) Costume for Copeau's simplified stage: its simplicity plays a supporting role to the gestures of the actors (an intensifying effect). 2) Costume for Brecht's stylized stage: its artificial stylization integrates into the play with an importance approximately equal to that of the actors' acting. 3) Costume for Grotowski's essential stage: costumes disappeared to emphasize only the actor's presence on stage. 4) Costume for Robert Wilson's measured stage: costumes made a concrete impression to the extent of obtaining the same importance as the actor's body among the other stage art elements (lighting, sound, props, actor, text, etc.). 5) Costume for Decouflé's post-dramatic stage: in the era of multi-technology, costumes possess multifunctional aspects that substitute for the actors' bodies. This study suggests that stage costumes play a part in dramaturgy important enough that the intent of the dramaturgy can be inferred from the costumes alone. Thus, costume makers are expected to incorporate the appropriate dramatic factors more than before.