• Title/Summary/Keyword: Virtual hand

A Study on Virtual Assembly Simulation Using Virtual Reality Technology (가상현실 기술을 이용한 가상 조립 시뮬레이션에 대한 연구)

  • Kim, Yong-Wan;Park, Jin-Ah
    • Journal of Korea Multimedia Society / v.13 no.11 / pp.1715-1727 / 2010
  • Although hand haptic interaction, which provides direct and natural sensation, is the most natural way of interacting with a VR environment, it is still limited by the complexity of the articulated hand and by hardware capabilities. Virtual assembly simulation, the verification of a digital mockup in the product development lifecycle, is one of the most challenging virtual reality applications, and hand haptic interaction remains a major obstacle there because difficulty in initial grasping and non-dexterous manipulation are unsolved problems. In this paper, we propose that common hand haptic interaction involves two separate stages with different requirements, and we present a hand haptic interaction method that lets the user stably grasp a virtual object at the initial-grasping stage and delicately manipulate it during task operation, as intended. The proposed method thus provides robustness through a grasp-quality measure and dexterous manipulation through physics simulation. We conducted experiments to evaluate the effectiveness of the proposed method under two display environments, monoscopic and stereoscopic. A two-way ANOVA shows that the proposed method satisfies both aspects of hand haptic interaction. Finally, we demonstrate an actual application to assembly simulations of relatively complex models.
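The two-stage policy described in this abstract (a stable initial grasp gated by a grasp-quality measure, then physics-based manipulation) can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's algorithm: `grasp_quality` is a simple opposing-normals heuristic over unit contact normals, and the 0.8 threshold is arbitrary.

```python
def grasp_quality(contact_normals):
    """Toy grasp-quality measure: how strongly any pair of finger contact
    normals oppose each other (a force-closure-like heuristic).
    Assumes unit-length normals; returns 1.0 for a perfectly opposing pair."""
    best = 0.0
    for i in range(len(contact_normals)):
        for j in range(i + 1, len(contact_normals)):
            dot = sum(a * b for a, b in zip(contact_normals[i], contact_normals[j]))
            best = max(best, -dot)  # opposing normals have dot product near -1
    return best

def interaction_stage(contact_normals, threshold=0.8):
    """Two-stage switch: attach the object once grasp quality passes the
    threshold (robust initial grasp); otherwise stay with physics contact."""
    return "grasp" if grasp_quality(contact_normals) >= threshold else "physics"
```

With a thumb and index finger pressing from opposite sides the stage switches to "grasp"; two normals at right angles stay in the physics stage.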

Recognition of Virtual Written Characters Based on Convolutional Neural Network

  • Leem, Seungmin;Kim, Sungyoung
    • Journal of Platform Technology / v.6 no.1 / pp.3-8 / 2018
  • This paper proposes a technique, based on a convolutional neural network (CNN), for recognizing online handwritten cursive data obtained by tracing a user's motion trajectory in 3D space. Recognizing virtual characters written in 3D space is difficult because the input includes both character strokes and movement strokes. We divide each syllable into consonant and vowel units with a labeling technique, building on the letter-stroke and movement-stroke localization of our previous study. The coordinate information of the separated consonants and vowels is converted into image data, and Korean handwriting recognition is performed with a convolutional neural network. After training the network on 1,680 syllables written by five writers, accuracy was measured on new writers who did not contribute training data. Phoneme-based recognition achieves 98.9% accuracy with the convolutional neural network. The proposed method drastically reduces the amount of training data required compared to syllable-based learning.
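The coordinate-to-image conversion step this abstract describes can be sketched as below. The resolution, normalization, and function name are illustrative assumptions, not the paper's implementation; the rasterized grid would then be fed to the CNN.

```python
import numpy as np

def strokes_to_image(points, size=28):
    """Rasterize stroke coordinates into a binary image for CNN input.

    points: (N, 2) array of (x, y) stroke coordinates, assumed to contain
    character strokes only (movement strokes already removed, as in the
    paper's labeling step).
    """
    pts = np.asarray(points, dtype=float)
    # Normalize coordinates into [0, size-1]; guard against zero span.
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    span = np.where(maxs - mins > 0, maxs - mins, 1.0)
    scaled = (pts - mins) / span * (size - 1)
    img = np.zeros((size, size), dtype=np.uint8)
    rows = np.clip(np.round(scaled[:, 1]).astype(int), 0, size - 1)
    cols = np.clip(np.round(scaled[:, 0]).astype(int), 0, size - 1)
    img[rows, cols] = 1  # mark each sampled trajectory point
    return img
```

A denser trajectory would simply mark more pixels; interpolating between consecutive samples would give connected strokes.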

How to Reflect User's Intention to Improve Virtual Object Selection Task in VR (VR 환경에서 가상 객체 선택 상호작용 개선을 위한 사용자 의도 반영 방법)

  • Kim, Chanhee;Nam, Hyeongil;Park, Jong-Il
    • Journal of Broadcast Engineering / v.26 no.6 / pp.704-713 / 2021
  • This paper proposes a method that prioritizes the virtual objects to be selected by considering both the geometric relationship between the user's hand and the virtual objects and the user's intention, which is recognized in advance. Picking up virtual objects is an essential and frequently used interaction in VR content, but when virtual objects lie close together, objects other than the one the user intends are often selected. To address this, the method assigns different weights to user intention and to the distance between the user's hand and each virtual object, deriving priorities so that the interaction matches the situation. We conducted experiments while varying the number of virtual objects and the distance between them. The experiments demonstrate that when the density of virtual objects is high and they are close to each other, increasing the weight of situation awareness raises user satisfaction by 20.34%. We expect the proposed method to help interaction techniques better reflect users' intentions.
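The weighted priority this abstract outlines, blending a recognized intention score with hand-to-object proximity, could look roughly like the sketch below. The linear weighting, the proximity formula, and all names are assumptions for illustration; the paper's exact scoring is not given in the abstract.

```python
import math

def selection_score(hand_pos, obj_pos, intent, w_intent=0.5):
    """Blend distance proximity and a recognized intention score in [0, 1]
    into one selection priority; higher scores win."""
    dist = math.dist(hand_pos, obj_pos)
    proximity = 1.0 / (1.0 + dist)  # closer objects score higher
    return w_intent * intent + (1.0 - w_intent) * proximity

def pick_object(hand_pos, objects, w_intent=0.5):
    """objects: list of (name, position, intent_score) tuples."""
    return max(objects,
               key=lambda o: selection_score(hand_pos, o[1], o[2], w_intent))[0]
```

Raising `w_intent` when objects are densely packed reproduces the behavior the experiment varies: the intended object wins even when another object is nearer to the hand.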

Design of a 6-DOF force reflecting hand controller (힘 반향 6자유도 수동조작기의 설계연구)

  • 변현희;김한성;김승호
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1996.10b / pp.1513-1518 / 1996
  • A force-reflecting hand controller can provide the operator of a teleoperation system with more realistic information, such as kinesthetic feedback from a slave robot. In this paper, a new design concept for a force-reflecting 6-DOF hand controller utilizing the kinematic structure of a Stewart Platform is presented. Based on an optimal design technique for the Stewart Platform, a force-reflecting hand controller has been designed and constructed to verify the technical feasibility of the proposed concept. To provide the operator with kinesthetic feedback, a force-mapping algorithm based on the reciprocal product of screws is introduced. Finally, the technical feasibility of the design is demonstrated through experiments with the device in a virtual environment on a real-time graphics system.
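For a Stewart-Platform device, a screw-based force mapping commonly reduces to summing force-weighted leg line screws (the Jacobian-transpose map from leg forces to the platform wrench). The sketch below illustrates that standard relationship in Plücker coordinates; it is a simplification, not the paper's reciprocal-product algorithm, and all names are illustrative.

```python
import numpy as np

def leg_screw(attach_pt, direction):
    """Unit line screw (Plücker coordinates) of one actuated leg:
    (unit direction; moment of the line about the origin)."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    r = np.asarray(attach_pt, dtype=float)
    return np.concatenate([d, np.cross(r, d)])

def platform_wrench(screws, leg_forces):
    """Wrench (force; torque) on the platform as the force-weighted sum of
    leg screws, i.e. w = S @ f with S the 6 x n screw matrix."""
    S = np.stack(screws, axis=1)
    return S @ np.asarray(leg_forces, dtype=float)
```

A single vertical leg through the origin carrying 2 N produces a pure vertical force; offsetting the attachment point adds the expected torque component.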

A Real-Time Pattern Recognition for Multifunction Myoelectric Hand Control

  • Chu, Jun-Uk;Moon, In-Hyuk;Mun, Mu-Seong
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.842-847 / 2005
  • This paper proposes a novel real-time EMG pattern recognition method for controlling a multifunction myoelectric hand from four-channel EMG signals. To cope with the nonstationary nature of the EMG signal, features are extracted by the wavelet packet transform. For dimensionality reduction and nonlinear mapping of the features, we also propose a linear-nonlinear feature projection composed of PCA and a SOFM. The dimensionality reduction by PCA simplifies the classifier structure and reduces the processing time for pattern recognition. The nonlinear mapping by the SOFM transforms the PCA-reduced features into a new feature space with high class separability. Finally, a multilayer neural network is employed as the pattern classifier. We implemented a real-time control system for a multifunction virtual hand. Experimental results show that all processing, including virtual hand control, completes within 125 ms, so the proposed method is applicable to real-time myoelectric hand control without operational delay.
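The PCA stage of the pipeline (the linear half of the linear-nonlinear projection, applied before the SOFM) can be sketched as below. This is a generic eigendecomposition-based PCA, offered as a sketch of the standard technique rather than the paper's implementation; names are illustrative.

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project feature vectors onto their top principal components.

    features: (n_samples, n_features) matrix, e.g. wavelet-packet features.
    Returns the (n_samples, n_components) reduced representation.
    """
    X = features - features.mean(axis=0)          # center each feature
    cov = X.T @ X / (len(X) - 1)                  # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    # Keep the directions of largest variance.
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return X @ top
```

Reducing dimensionality here is what shrinks the classifier and keeps the whole loop inside the 125 ms budget reported in the abstract.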

A Real-time Augmented Reality System using Hand Geometric Characteristics based on Computer Vision (손의 기하학적인 특성을 적용한 실시간 비전 기반 증강현실 시스템)

  • Choi, Hee-Sun;Jung, Da-Un;Choi, Jong-Soo
    • Journal of Korea Multimedia Society / v.15 no.3 / pp.323-335 / 2012
  • In this paper, we propose a computer-vision-based AR (augmented reality) system that uses the user's bare hand. Detecting and tracking correct feature points is important for registering a virtual object onto the real input image. Marker-based AR systems are stable, but they cannot register the virtual object once the marker leaves the camera's field of view, which limits how users can control the virtual object. Our system instead detects fingertips as fiducial features with an adaptive ellipse-fitting method that exploits the geometric characteristics of the hand. It registers the virtual object stably by tracking fingertip movement, ordering fingertips by their distance from the palm center. We verified fingertip-detection accuracy above 82.0%, with only 1.8% and 2.0% error in the fingertip-ordering and tracking steps, respectively. By tracking the camera projection matrix effectively, the system can replace marker-based systems while augmenting virtual objects stably.
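A geometric fingertip-ordering step like the one this abstract mentions can be sketched as an angular sort around the palm center. This is a simplified stand-in under stated assumptions (the ellipse-fitting detection is assumed done upstream, and the paper's actual ordering uses palm-center distance, not necessarily this angular sort).

```python
import numpy as np

def order_fingertips(tips, palm_center):
    """Order detected fingertip points counter-clockwise around the palm
    center so each tip keeps a stable index across frames."""
    tips = np.asarray(tips, dtype=float)
    c = np.asarray(palm_center, dtype=float)
    # Angle of each tip relative to the palm center.
    angles = np.arctan2(tips[:, 1] - c[1], tips[:, 0] - c[0])
    return tips[np.argsort(angles)]
```

A stable ordering is what lets per-fingertip tracking errors be reported separately (1.8% ordering, 2.0% tracking) in the abstract's evaluation.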

Virtual Hand Animation Using Virtual Glove and VRML (Virtual Glove와 VRML을 이용한 Virtual Hand 애니메이션)

  • Ahn, J.Y.;Kim, D.O.;Lee, D.H.;Kim, N.K.;Kim, J.H.;Min, B.G.
    • Proceedings of the KOSOMBE Conference / v.1998 no.11 / pp.283-284 / 1998
  • Virtual reality technology makes users feel immersed in a situation constructed from information input through devices attached to the body. If images taken from CT or MRI are reconstructed in 3D, they can present the shape of a human organ more clearly, so such reconstruction is useful for surgery, which requires examining the spatial relationships among anatomical parts in 3D and confirming the planned information. We developed a technique that reconstructs CT images into 3D data and represents the 3D movement of the fingers following the hand.

Virtual Space Calibration for Laser Vision Sensor Using Circular Jig (원형 지그를 이용한 레이저-비젼 센서의 가상 공간 교정에 관한 연구)

  • 김진대;조영식;이재원
    • Journal of the Korean Society for Precision Engineering / v.20 no.12 / pp.73-79 / 2003
  • Recently, tele-robot operation in unstructured environments has been widely researched. Human interaction with the tele-robot system can improve robot operation and performance in an unknown environment, and exact modeling of the real environment is a fundamental and important step for this interaction. In this paper, we propose an extrinsic-parameter calibration and data-augmentation method that uses only a circular jig in a hand-eye laser virtual environment. Compared to other methods, this algorithm makes estimation and overlay easier. Experimental results using synthetic graphics demonstrate the usefulness of the proposed algorithm.

A Study on the Development of an Electronic Component Assembly Training System Using Leap Motion (Leap Motion을 이용한 전자부품 조립 훈련 시스템 개발에 관한 연구)

  • In-Chul Lee
    • Journal of the Korean Society of Industry Convergence / v.26 no.3 / pp.463-470 / 2023
  • In this study, an electronic-parts assembly training system using Leap Motion was developed, modeled on processes actually used in electronic-product assembly. Built on Leap Motion and Oculus VR equipment, the system transfers the user's hand-movement data in real time and converts it into hand motion in virtual space, so that electronic-parts assembly can be simulated step by step. We confirmed that users can gain experience similar to actual assembly work, avoid errors that may occur during assembly, and improve proficiency. We expect this work to provide direction for the quality improvement and development of education and training programs for virtual-reality-based manufacturing processes.

Image Processing Based Virtual Reality Input Method using Gesture (영상처리 기반의 제스처를 이용한 가상현실 입력기)

  • Hong, Dong-Gyun;Cheon, Mi-Hyeon;Lee, Donghwa
    • Journal of Korea Society of Industrial Information Systems / v.24 no.5 / pp.129-137 / 2019
  • As information technology advances, ubiquitous computing is emerging, and many studies pursue device miniaturization and user convenience. Some proposed devices, however, are inconvenient because they must be held and operated by hand. To address this, this paper proposes a virtual button that can be used while watching television. A camera is installed at the top of the TV and, since the user watches the video from the front, films the user from above the head. The background and hand regions are separated in the captured image, the contour of the hand region is extracted, and the fingertip point is detected. The detected fingertip then acts as a pointer: a virtual button interface is rendered at the top of the front-facing image, and the button activates when the fingertip pointer moves inside the button.
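The activation rule at the end of this abstract, a fingertip pointer entering a button region, amounts to a point-in-rectangle test. A minimal sketch, with illustrative names and an assumed (x, y, width, height) rectangle convention:

```python
def button_active(finger_tip, button_rect):
    """True when the detected fingertip pointer lies inside the virtual
    button's screen rectangle (x, y, width, height)."""
    x, y = finger_tip
    bx, by, bw, bh = button_rect
    return bx <= x <= bx + bw and by <= y <= by + bh
```

In the system described, this test would run each frame against the fingertip point detected from the hand contour.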