• Title/Summary/Keyword: hand gesture

Evaluation of Novel Method of Hand Gesture Input to Define Automatic Scanning Path for UAV SAR Missions (손 제스처를 이용하여 탐색 구조용 무인항공기의 자동 스캐닝 경로를 정의하는 가상현실 입력방법 개발 및 평가)

  • Chang-Geun Oh
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.4
    • /
    • pp.473-480
    • /
    • 2023
  • This study evaluated a novel method of defining the automatic flight path of unmanned aerial vehicles (UAVs) for search and rescue (SAR) missions in a VR environment. The developed VR content contains miniature digital twins of a building on fire and a steep mountain terrain site. Users draw the UAV's scanning path with hand gestures on the surface of the digital twins, and the UAV then flies automatically along the defined path. In human-in-the-loop simulation tests with 19 participants comparing the novel method against a conventional manual flight task, the novel method did not improve mission performance, but participants reported a lower mental workload. Designers may need to target automation support at the vulnerable points of the SAR mission environment while preserving experts' mapping capability.
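
As an illustration only (not the paper's implementation), the following Python sketch shows one way a stroke drawn by hand gestures on a digital-twin surface could become UAV waypoints: the drawn points are offset along the surface normals by a standoff distance and resampled at uniform spacing. All names and parameter values are hypothetical.

```python
import numpy as np

def stroke_to_waypoints(points, normals, standoff=5.0, spacing=2.0):
    """points, normals: (N, 3) arrays sampled from the hand-gesture stroke."""
    points = np.asarray(points, dtype=float)
    normals = np.asarray(normals, dtype=float)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    offset = points + standoff * normals          # keep the UAV clear of the surface

    # Cumulative arc length along the offset stroke.
    seg = np.linalg.norm(np.diff(offset, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])

    # Resample at uniform spacing so waypoints are evenly distributed.
    targets = np.arange(0.0, s[-1], spacing)
    return np.stack([np.interp(targets, s, offset[:, k]) for k in range(3)], axis=1)

if __name__ == "__main__":
    pts = np.array([[0, 0, 10], [2, 0, 10], [4, 1, 11], [6, 1, 12]], dtype=float)
    nrm = np.tile([0.0, 0.0, 1.0], (4, 1))        # stand-in surface normals
    print(stroke_to_waypoints(pts, nrm, standoff=5.0, spacing=1.5))
```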

Interaction Intent Analysis of Multiple Persons using Nonverbal Behavior Features (인간의 비언어적 행동 특징을 이용한 다중 사용자의 상호작용 의도 분석)

  • Yun, Sang-Seok;Kim, Munsang;Choi, Mun-Taek;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.8
    • /
    • pp.738-744
    • /
    • 2013
  • According to cognitive science research, the interaction intent of humans can be estimated by analyzing their outward behaviors. This paper proposes a novel methodology for reliable intention analysis based on this approach. To identify intention, eight behavioral features are extracted from four characteristics of human-human interaction, and a set of core components of nonverbal human behavior is outlined. These nonverbal behaviors are covered by recognition modules built on multimodal sensors: localizing the speaker's sound source in the audition part, recognizing the frontal face and facial expression in the vision part, and estimating human trajectories, body pose and leaning, and hand gestures in the spatial part. As a post-processing step, temporal confidence reasoning improves recognition performance, and an integrated human model quantitatively classifies the intention from the multi-dimensional cues by applying weight factors. Interactive robots can thus make informed engagement decisions to interact effectively with multiple persons. Experimental results show that the proposed scheme works successfully between human users and a robot in human-robot interaction.
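
A minimal sketch of the weighted-cue integration step described above, assuming illustrative cue names and weight factors rather than the paper's actual values:

```python
def interaction_intent_score(cues, weights):
    """cues/weights: dicts keyed by behavioral feature, confidences in [0, 1]."""
    return sum(weights[k] * cues.get(k, 0.0) for k in weights)

# Hypothetical weight factors for the multimodal recognition cues.
weights = {"sound_source_facing": 0.20, "frontal_face": 0.25,
           "facial_expression": 0.15, "approach_trajectory": 0.20,
           "body_lean": 0.10, "hand_gesture": 0.10}

# Example confidences reported by the recognition modules for one person.
cues = {"frontal_face": 0.9, "approach_trajectory": 0.8, "hand_gesture": 0.6}

score = interaction_intent_score(cues, weights)
print("engage" if score > 0.5 else "ignore", round(score, 2))
```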

Design method of Animation Emoticons for Non-Verbal Expression of Emotion (비언어적 감정표현을 위한 애니메이션 이모티콘의 제작방향 제시)

  • Ann, Seong-Hye;Youn, Se-Jin
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2006.11a
    • /
    • pp.200-204
    • /
    • 2006
  • Emoticons are used as an aid to expressing emotion in CMC (computer-mediated communication). They have developed into diverse forms such as text emoticons, image emoticons, and animation emoticons. However, current emoticons represent emotion only in a simple, unspecific way because expressions of emotion have not been adequately grouped. The emoticons in use today therefore need to be classified systematically so that modulated animation emoticons can be produced that represent feelings in specific and diverse ways and can be used conveniently. This paper proposes a design method for a new kind of animation emoticon that overcomes the limits of current animation emoticons, by grouping and analyzing expression images according to faces, gestures (hands), and backgrounds (decoration), focusing on the animation emoticons in messenger programs.

A New Eye Tracking Method as a Smartphone Interface

  • Lee, Eui Chul;Park, Min Woo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.4
    • /
    • pp.834-848
    • /
    • 2013
  • To use a smartphone's functions effectively, many kinds of human-phone interfaces are used, such as touch, voice, and gesture. However, the touch interface, the most important of these, cannot be used by people with hand disabilities or when both hands are busy. Although eye tracking is a superb human-computer interface method, it has not been applied to smartphones because of the small screen size, the frequently changing geometric position between the user's face and the phone screen, and the low resolution of frontal cameras. In this paper, a new eye tracking method is proposed to act as a smartphone user interface. To maximize eye image resolution, a zoom lens and three infrared LEDs are adopted. The proposed method has the following novelties. First, the camera specification and image resolution appropriate for smartphone-based gaze tracking are analyzed. Second, facial movement is allowed as long as one eye region is included in the image. Third, the proposed method operates in both landscape and portrait screen modes. Fourth, only two LED reflection positions are used to calculate the gaze position, based on the 2D geometric relation between the reflection rectangle and the screen. Fifth, a prototype mock-up module was built to confirm the feasibility of applying the method to an actual smartphone. Experimental results showed that the gaze estimation error was about 31 pixels at a screen resolution of 480×800 and that the average hit ratio on a 5×4 icon grid was 94.6%.
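
A hedged sketch of the fourth point, assuming the pupil center and the LED reflections have already been detected: the pupil position is normalized within the rectangle spanned by the reflections and mapped linearly onto the 480×800 screen. The coordinates below are invented for illustration.

```python
def gaze_from_glints(pupil, glint_tl, glint_br, screen=(480, 800)):
    """Map the pupil center, expressed inside the glint rectangle, to screen pixels."""
    gx = (pupil[0] - glint_tl[0]) / (glint_br[0] - glint_tl[0])
    gy = (pupil[1] - glint_tl[1]) / (glint_br[1] - glint_tl[1])
    return gx * screen[0], gy * screen[1]

# Example eye-image coordinates (pixels) of the pupil and the two reflections.
print(gaze_from_glints(pupil=(312, 205), glint_tl=(280, 180), glint_br=(360, 240)))
```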

A real-time robust body-part tracking system for intelligent environment (지능형 환경을 위한 실시간 신체 부위 추적 시스템 -조명 및 복장 변화에 강인한 신체 부위 추적 시스템-)

  • Jung, Jin-Ki;Cho, Kyu-Sung;Choi, Jin;Yang, Hyun S.
    • 한국HCI학회:학술대회논문집
    • /
    • 2009.02a
    • /
    • pp.411-417
    • /
    • 2009
  • We propose a robust body-part tracking system for intelligent environments that does not limit the users' freedom. Unlike previous gesture recognizers, we improved the generality of the system by adding the ability to recognize details such as the difference between long sleeves and short sleeves. For precise tracking of each body part, we obtained images of the hands, head, and feet separately from a single camera, and when detecting each body part we chose the feature appropriate to that part. Using a calibrated camera, we converted the 2D detected body parts into a 3D posture. In the experiments, the system showed advanced hand-tracking performance in real time (50 fps).
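
A minimal sketch of the 2D-to-3D step under common assumptions (a calibrated camera and a detection that lies on a known plane, e.g. the floor for a foot); the calibration values here are hypothetical, not the system's.

```python
import numpy as np

def pixel_to_plane(u, v, K, R, t, plane_n, plane_d):
    """Back-project pixel (u, v) and intersect the viewing ray with plane n.X + d = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in camera frame
    ray_world = R.T @ ray_cam                             # same ray in world frame
    origin = -R.T @ t                                     # camera center in world frame
    s = -(plane_n @ origin + plane_d) / (plane_n @ ray_world)
    return origin + s * ray_world

# Example: camera frame used as world frame (R = I, t = 0), image y axis
# pointing down, floor plane 1.5 m below the camera (y = 1.5).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
foot_3d = pixel_to_plane(350, 400, K, np.eye(3), np.zeros(3),
                         plane_n=np.array([0.0, 1.0, 0.0]), plane_d=-1.5)
print(foot_3d)   # 3D foot position on the floor
```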

Hand Gesture Recognition Result Using Dynamic Training (동적 학습을 이용한 손동작 인식 결과)

  • Jeoung, You-Sun;Park, Dong-Suk;Youn, Young-Ji;Shin, Bo-Kyoung;Kim, Hye-Min;Ra, Sang-Dong;Bae, Cheol-Soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2007.06a
    • /
    • pp.861-864
    • /
    • 2007
  • This paper proposes a new algorithm for vision-based arm gesture recognition in a camera-projector system. The proposed recognition method uses the Fourier transform to classify static gestures, and arm segmentation uses an improved background subtraction method. Most recognition methods are trained and tested on the same subjects and require a training stage before interaction, yet gesture recognition is also needed for interaction in diverse situations that were never trained. Therefore, in this paper, incomplete gestures detected during the recognition task are corrected and applied. As a result, by recognizing gestures independently of the user, the method could quickly be applied online to new users.
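
A minimal sketch of the Fourier-transform idea for static gesture classification, assuming the hand contour has already been segmented by background subtraction; the descriptor normalization and nearest-template matching shown here are generic choices, not necessarily the paper's.

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    """contour: (N, 2) array of boundary points of the segmented hand/arm region."""
    z = contour[:, 0] + 1j * contour[:, 1]     # boundary as a complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                            # drop DC term -> translation invariance
    mags = np.abs(coeffs)
    mags /= mags[1] if mags[1] > 0 else 1.0    # normalize -> scale invariance
    return mags[1:n_coeffs + 1]

def classify(descriptor, templates):
    """Nearest-template matching against per-gesture descriptor templates."""
    return min(templates, key=lambda g: np.linalg.norm(descriptor - templates[g]))

if __name__ == "__main__":
    theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # stand-in contour
    print(fourier_descriptors(circle)[:4])
```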

Implementation of Multi-touch Tabletop Display for Human Computer Interaction (HCI 를 위한 멀티터치 테이블-탑 디스플레이 시스템 구현)

  • Kim, Song-Gook;Lee, Chil-Woo
    • 한국HCI학회:학술대회논문집
    • /
    • 2007.02a
    • /
    • pp.553-560
    • /
    • 2007
  • This paper describes a tabletop display system, and the algorithm implementing it, that recognizes touches from both hands for real-time interaction. The proposed system is built on the FTIR (Frustrated Total Internal Reflection) mechanism and supports multi-touch, multi-user hand-gesture input. The system consists of a beam projector for image projection, an acrylic screen fitted with infrared LEDs, a diffuser, and an infrared camera for image acquisition. The set of gesture commands required to control the system was defined by analyzing the degrees of freedom of input and output at the interaction table and considering convenience, communicativeness, constancy, and completeness. The defined gestures are subdivided according to the number, positions, and movement changes of the fingers the user places on the screen. Images captured by the infrared camera undergo simple morphological operations for noise removal and finger-region detection before entering the recognition stage, where the input gesture commands are compared against predefined hand-gesture models. Specifically, the number of fingers touching the screen is first determined and their regions are located; the center points of these regions are then extracted, and their angles and Euclidean distances are computed. The changes in the positions of the multi-touch points are then compared with the information in the predefined models. The effectiveness of the proposed system is demonstrated by controlling Google Earth.
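
A hedged sketch of the final recognition step described above: the angle and Euclidean distance between two touch centroids are compared frame to frame to separate zoom, rotate, and pan. The thresholds and gesture names are illustrative assumptions, not the system's definitions.

```python
import math

def pair_features(p, q):
    """Distance and angle (degrees) between two touch centroids."""
    dist = math.hypot(q[0] - p[0], q[1] - p[1])
    angle = math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
    return dist, angle

def classify_two_finger(prev_pts, curr_pts, dist_thr=15.0, angle_thr=10.0):
    d0, a0 = pair_features(*prev_pts)
    d1, a1 = pair_features(*curr_pts)
    if abs(d1 - d0) > dist_thr:
        return "zoom_in" if d1 > d0 else "zoom_out"
    if abs(a1 - a0) > angle_thr:
        return "rotate"
    return "pan"

# Two fingertip centroids in consecutive frames (pixels): spreading apart -> zoom_in.
print(classify_two_finger([(100, 100), (200, 100)], [(80, 100), (220, 100)]))
```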

Use of a gesture user interface as a touchless image navigation system in dental surgery: Case series report

  • Rosa, Guillermo M.;Elizondo, Maria L.
    • Imaging Science in Dentistry
    • /
    • v.44 no.2
    • /
    • pp.155-160
    • /
    • 2014
  • Purpose: The purposes of this study were to develop a workstation computer that allowed intraoperative touchless control of diagnostic and surgical images by a surgeon, and to report the preliminary experience with the system in a series of dental surgery cases. Materials and Methods: A custom workstation with a new motion-sensing input device (Leap Motion) was set up in order to use a natural user interface (NUI) to manipulate the imaging software by hand gestures. The system allowed intraoperative touchless control of the surgical images. Results: For the first time in the literature, an NUI system was used in a pilot study during 11 dental surgery procedures, including tooth extractions, dental implant placements, and guided bone regeneration. No complications were reported, and the system performed well and proved useful. Conclusion: The proposed system fulfilled the objective of providing touchless access to and control of the images and a three-dimensional surgical plan, thus allowing the maintenance of sterile conditions. The interaction between surgical staff, working under sterile conditions, and computer equipment has been a key issue, and an NUI solution with touchless control of the images seems close to ideal. The cost of the sensor system is quite low, which could facilitate its incorporation into routine dental surgery practice. This technology has enormous potential in dental surgery and other healthcare specialties.

An ANN-based gesture recognition algorithm for smart-home applications

  • Huu, Phat Nguyen;Minh, Quang Tran;The, Hoang Lai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.5
    • /
    • pp.1967-1983
    • /
    • 2020
  • The goal of this paper is to analyze and build an algorithm to recognize hand gestures for smart-home applications. The proposed algorithm uses image processing techniques combined with an artificial neural network (ANN) to help users interact with computers through common gestures. We use five types of gestures: Stop, Forward, Backward, Turn Left, and Turn Right. Users control devices through a camera connected to a computer. The algorithm analyzes the gestures and performs the appropriate action according to the user's request. The results show that the average accuracy of the proposed algorithm is 92.6 percent for images and more than 91 percent for video, both of which satisfy the performance requirements for real-world applications, specifically for smart-home services. The processing time is approximately 0.098 seconds on datasets at 10 frames/s. However, the accuracy still depends on the number of training images (or videos) and their resolution.
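
A minimal sketch of the pipeline described above, assuming flattened, preprocessed hand images as input; the small MLP and the random placeholder data stand in for the paper's actual network and dataset.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

GESTURES = ["Stop", "Forward", "Backward", "Turn Left", "Turn Right"]

rng = np.random.default_rng(0)
X_train = rng.random((200, 32 * 32))       # placeholder for flattened hand-image features
y_train = rng.integers(0, 5, size=200)     # placeholder labels for the five gestures

# Small feed-forward ANN classifier over the five gesture classes.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
model.fit(X_train, y_train)

frame_features = rng.random((1, 32 * 32))  # one preprocessed camera frame
print(GESTURES[int(model.predict(frame_features)[0])])
```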

Web-based 3D Virtual Experience using Unity and Leap Motion (Unity와 Leap Motion을 이용한 웹 기반 3D 가상품평)

  • Jung, Ho-Kyun;Park, Hyungjun
    • Korean Journal of Computational Design and Engineering
    • /
    • v.21 no.2
    • /
    • pp.159-169
    • /
    • 2016
  • In order to realize the virtual prototyping (VP) of digital products, it is important to provide the people involved in product development with appropriate visualization of and interaction with the products, along with vivid simulation of user interface (UI) behaviors in an interactive 3D virtual environment. In this paper, we propose an approach to web-based 3D virtual experience using Unity and Leap Motion. We adopt Unity as the implementation platform because it easily and rapidly supports product visualization and the design and simulation of UI behaviors, and it allows remote users easy access to the virtual environment. Additionally, we combine Leap Motion with Unity to enable natural and immersive interaction using the user's hand gestures. Based on the proposed approach, we developed a testbed system for web-based 3D virtual experience and applied it to the design evaluation of various digital products. A button-selection test was conducted to investigate the quality of the interaction using Leap Motion, and a preliminary user study was also performed to show the usefulness of the proposed approach.