• Title/Summary/Keyword: hand tracking


Halbach Array Type Focusing Actuator for Small and Thin Optical Data Storage Device (할바 자석배열을 이용한 초소형 정보저장장치의 초점 구동기 설계)

  • Lee, Sung-Q;Park, Kang-Ho;Paek, Mun-Cheal
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2004.11a / pp.65-69 / 2004
  • Small form factor optical data storage devices are developing rapidly. Since the device is designed for portability and for compatibility with flash memory, its components, such as the disk, head, focusing actuator, and spindle motor, must be assembled within a 5 mm thickness. The focusing actuator itself must be thinner than 2 mm, with a total working range of $\pm 100{\mu}m$ and a resolution of less than $1{\mu}m$. Because the thickness is so tightly limited, it is difficult to place a yoke that closes the magnetic circuit, and difficult to achieve a strong flux density without one. A Halbach array is therefore adopted to increase the magnetic flux on one side without a yoke. The proposed Halbach-array focusing actuator enables a thin actuation structure while sacrificing little flux density compared with a conventional magnet array. The optical head unit is mounted on a swing-arm tracking actuator: the focusing coil is attached to the swing arm, the Halbach magnet array is positioned at the bottom of the deck along the tracking line, and the focusing actuator exerts force according to Fleming's left-hand rule. The working range and resolution of the focusing actuator are analyzed with FEM and verified by experiment.
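The force mechanism named in the abstract (Fleming's left-hand rule) reduces to the Lorentz force on the coil, F = nBIL. A minimal sketch follows; the turn count, flux density, current, and wire length are illustrative assumptions, not values from the paper:

```python
def coil_force(turns, flux_density_t, current_a, wire_length_m):
    """Lorentz force on a current-carrying coil in a magnetic field:
    F = n * B * I * L (Fleming's left-hand rule gives the direction)."""
    return turns * flux_density_t * current_a * wire_length_m

# Illustrative numbers only: 100 turns, 0.5 T, 100 mA, 10 mm of wire per turn.
force_n = coil_force(100, 0.5, 0.1, 0.01)
```

Doubling any one of the four factors doubles the force, which is why a Halbach array's stronger one-sided flux lets the coil stay small.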


Video Editing using Hand Gesture Tracking and Recognition (손동작 추적 및 인식을 이용한 비디오 편집)

  • Bae, Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.1 / pp.102-107 / 2007
  • This paper presents a gesture-driven approach to video editing. Given a lecture video, we adopt novel approaches to automatically detect its content and synchronize it with the electronic slides. The gestures in each synchronized topic (or shot) are then tracked and recognized continuously. By registering shots and slides and recovering their transformation, the regions where the gestures take place can be located. Based on the recognized gestures and their registered positions, the information in the slides can be seamlessly extracted, not only to assist video editing but also to enhance the quality of the original lecture video. In experiments on two videos, the proposed system showed gesture recognition rates of 95.5% and 96.4%.
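The shot-to-slide registration the abstract describes is commonly expressed as a planar homography; once the 3x3 matrix H is recovered, a gesture position in the video frame maps into slide coordinates. A minimal sketch of applying such a homography (the matrix values below are hypothetical, and the paper's exact registration method is not specified here):

```python
def apply_homography(H, point):
    """Map a 2-D point through a 3x3 homography H (projective transform)."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# A pure-translation homography shifting by (5, 3) -- illustrative only.
H = [[1, 0, 5], [0, 1, 3], [0, 0, 1]]
```

In practice H would be estimated from matched features between the video frame and the slide image (e.g. with OpenCV's `findHomography`).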

A Real-time Augmented Reality System using Hand Geometric Characteristics based on Computer Vision (손의 기하학적인 특성을 적용한 실시간 비전 기반 증강현실 시스템)

  • Choi, Hee-Sun;Jung, Da-Un;Choi, Jong-Soo
    • Journal of Korea Multimedia Society / v.15 no.3 / pp.323-335 / 2012
  • In this paper, we propose a computer-vision-based AR (augmented reality) system that uses the user's bare hand. To register a virtual object on the input image, it is important to detect and track the correct feature points. Marker-based AR systems are stable, but they cannot register the virtual object once the marker leaves the camera's field of view, and they tend to constrain how the user can control the virtual object. Our system instead detects fingertips as fiducial features, using an adaptive ellipse-fitting method that exploits the geometric characteristics of the hand. It registers the virtual object stably by tracking fingertip movement, ordering fingertips by their shortest distance from the palm center. We verified that fingertip detection accuracy is over 82.0%, and that fingertip ordering and tracking show errors of only 1.8% and 2.0%, respectively. By tracking the camera projection matrix effectively, the system augments virtual objects stably and can replace marker-based systems.
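The fingertip-ordering step mentioned in the abstract can be sketched as sorting candidate fingertip points by their angle around the palm center. This ordering criterion is an assumption for illustration, not the paper's exact algorithm:

```python
import math

def order_fingertips(palm_center, fingertips):
    """Order fingertip candidates by their angle around the palm center,
    a simple stand-in for the paper's ordering step (assumed, not exact)."""
    cx, cy = palm_center
    return sorted(fingertips,
                  key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
```

With the palm at the origin, fingertips sweep from angle 0 upward, giving a stable left-to-right ordering frame to frame.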

Explosion Casting: An Efficient Selection Method for Overlapped Virtual Objects in Immersive Virtual Environments (몰입 가상현실 환경에서 겹쳐진 가상객체들의 효율적인 선택을 위한 펼침 시각화를 통한 객체 선택 방법)

  • Oh, JuYoung;Lee, Jun
    • The Journal of the Korea Contents Association / v.18 no.3 / pp.11-18 / 2018
  • To interact with a virtual object in an immersive virtual environment, the target object must be selected quickly and accurately. The conventional 3D ray-casting method, which uses the direction of the user's hand or head, lets the user select an object quickly, but its accuracy suffers when the target is occluded by other objects. In this paper, we propose a region-of-interest-based selection method that selects an object from a group of overlapping objects by combining gaze tracking with hand gesture recognition. When the user looks at a group of overlapping objects, the proposed method recognizes the gaze input and sets a region of interest from it. To select an object within the region, the user performs an activation hand gesture; the system then relocates and visualizes all of the objects on a virtual active window, where the user can pick one with a selection gesture. Our experiment verified that users can select an object correctly and accurately.
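The "virtual active window" step, relocating the overlapped objects so each becomes individually selectable, can be sketched as laying the objects out on a flat non-overlapping grid. The grid shape and spacing below are illustrative assumptions, not the paper's layout rule:

```python
def explode_layout(object_ids, cols=3, spacing=1.5):
    """Assign each overlapped object a non-overlapping slot on a 2-D grid
    (the 'virtual active window'); spacing is in world units."""
    layout = {}
    for i, obj in enumerate(object_ids):
        row, col = divmod(i, cols)
        layout[obj] = (col * spacing, -row * spacing)
    return layout
```

Each object keeps its identity but gains a unique, unoccluded position, so a single selection ray or gesture can hit it unambiguously.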

Recognition of Finger Language Using FCM Algorithm (FCM 알고리즘을 이용한 지화 인식)

  • Kim, Kwang-Baek;Woo, Young-Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.6 / pp.1101-1106 / 2008
  • People with hearing difficulties struggle to interact satisfactorily with hearing people because there are few opportunities to communicate: most hearing people cannot understand sign language, the gesture-based system that people with hearing difficulties use as their principal means of communication. In this paper, we propose a finger-language recognition method using the FCM (fuzzy c-means) algorithm to help bridge this gap. In the proposed method, skin regions are extracted from camera images using the YCbCr and HSI color spaces, and the locations of the two hands are traced by applying a 4-directional edge tracking algorithm to the extracted skin regions. Final hand regions are then obtained from the traced regions after noise removal based on morphological information, and are classified and recognized by the FCM algorithm. In experiments on finger-language images acquired by a camera, we verified that the proposed method effectively extracts the two hand regions and recognizes the finger language.
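The FCM classifier at the core of the method alternates two updates: fuzzy memberships computed from distances to cluster centers, then centers recomputed as membership-weighted means. A minimal 1-D sketch, with illustrative data and cluster count (the paper clusters hand-region features, not scalars):

```python
import random

def fcm(data, k, m=2.0, iters=50, seed=0):
    """Minimal 1-D fuzzy c-means: alternate the membership update
    u_ij = 1 / sum_l (d_ij / d_lj)^(2/(m-1)) with the center update
    c_i = sum_j u_ij^m x_j / sum_j u_ij^m."""
    centers = random.Random(seed).sample(data, k)
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - c) or 1e-12 for c in centers]  # guard divide-by-zero
            u.append([1.0 / sum((d[i] / d[l]) ** (2.0 / (m - 1.0))
                                for l in range(k))
                      for i in range(k)])
        centers = [sum(u[j][i] ** m * data[j] for j in range(len(data))) /
                   sum(u[j][i] ** m for j in range(len(data)))
                   for i in range(k)]
    return sorted(centers)
```

Unlike hard k-means, every sample belongs to every cluster with a degree in [0, 1], which makes the classification robust to ambiguous hand shapes.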

A Study on the Design and Implementation of a Camera-Based 6DoF Tracking and Pose Estimation System (카메라 기반 6DoF 추적 및 포즈 추정 시스템의 설계 및 구현에 관한 연구)

  • Do-Yoon Jeong;Hee-Ja Jeong;Nam-Ho Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.5 / pp.53-59 / 2024
  • This study presents the design and implementation of a camera-based 6DoF (six degrees of freedom) tracking and pose estimation system. In particular, we propose a method for accurately estimating the positions and orientations of all of the user's fingers in order to control a 6DoF robotic arm. The system is developed in Python using the Mediapipe and OpenCV libraries: Mediapipe extracts finger keypoints in real time, enabling precise recognition of each finger's joint positions, while OpenCV processes the image data collected from the camera to analyze finger positions and enable pose estimation. The approach is designed to maintain high accuracy despite varying lighting conditions and changes in hand position. The system's performance was validated through experiments evaluating the accuracy of hand gesture recognition and the control capabilities of the robotic arm. The results demonstrate that the system estimates finger positions in real time and enables precise movement of the 6DoF robotic arm, which should contribute to robotic control and natural human-robot interaction.
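Mediapipe's hand module returns 21 (x, y[, z]) keypoints per hand; a common next step when driving a robot arm, and plausibly the one this system takes, is converting triples of keypoints into joint angles. A sketch of that conversion (the landmark coordinates in the test are made up, not Mediapipe output):

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by landmarks a-b-c, e.g. a
    finger's PIP joint from Mediapipe-style (x, y) keypoints."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    # Clamp to [-1, 1] so floating-point noise cannot break acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
```

A fully extended finger yields roughly 180 degrees at each joint, and the angles can be streamed to the arm controller as servo targets.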

An Analysis of Gaze Differences between Pre-service Teachers and Experienced Teachers on Mathematics Lesson Plan (예비교사와 경력교사의 수학 수업지도안에 대한 시선 차이 분석)

  • Son, Taekwon;Lee, Kwang-Ho
    • Education of Primary School Mathematics / v.23 no.1 / pp.1-26 / 2020
  • The purpose of this study was to analyze, through eye tracking, how teachers read and understand mathematics lesson plans, and to suggest implications for pre-service teacher education. The analysis showed that pre-service teachers found the mathematics lesson plans more difficult than experienced teachers did, and read and understood them in sequential order. Experienced teachers, on the other hand, used a hypertext reading strategy, finding key topics and making connections in order to grasp the instructional flow of the lesson plan. Based on these results, several suggestions were drawn for teaching pre-service teachers to read and understand mathematics lesson plans.

A Markerless Augmented Reality Approach for Indoor Information Visualization System (실내 정보 가시화에 의한 u-GIS 시스템을 위한 Markerless 증강현실 방법)

  • Kim, Albert Hee-Kwan;Cho, Hyeon-Dal
    • Journal of Korea Spatial Information System Society / v.11 no.1 / pp.195-199 / 2009
  • Augmented reality is a field of computer research concerned with combining real-world and computer-generated data, blending computer graphics objects into real footage in real time, and it has tremendous potential for visualizing geospatial information. To use augmented reality on mobile systems, however, most research has relied on GPS- or marker-based approaches, and localizing and tracking the current position becomes considerably harder in indoor environments. RF-based tracking and localization have been proposed, but they raise deployment problems for the required sensors and readers. In this paper, we present a novel markerless AR approach for an indoor navigation system that uses only a camera. We will apply this work to a mobile, seamless indoor/outdoor u-GIS system.


Construction of Static 3D Ultrasonography Image by Radiation Beam Tracking Method from 1D Array Probe (1차원 배열 탐촉자의 방사빔추적기법을 이용한 정적 3차원 초음파진단영상 구성)

  • Kim, Yong Tae;Doh, Il;Ahn, Bongyoung;Kim, Kwang-Youn
    • Journal of the Korean Society for Nondestructive Testing / v.35 no.2 / pp.128-133 / 2015
  • This paper describes the construction of a static 3D ultrasonography image by tracking the radiation beam position during hand-held operation of a 1D array probe, to enable point-of-care use. A theoretical model is given for transforming the translational and rotational readings of the sensor mounted on the probe into the reference Cartesian coordinate system. A signal amplification and serial communication interface module was built around a commercially available sensor, and a donut-shaped test phantom was made from silicone putty. While the hand-held probe was moved, a B-mode movie and the sensor signals were recorded. B-mode images were periodically selected from the movie, and the gray levels of each image's pixels were converted to the gray levels of 3D voxels. 3D images and arbitrary 2D cross-sections of B-mode type were then constructed from the voxel data, and they agreed well with the shape of the test phantom.
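The core transform here maps a pixel position in the probe (B-mode) frame into the reference Cartesian frame using the sensor's rotation and translation, p_world = R·p_probe + t. A yaw-only sketch follows; the paper's full model uses all three rotation angles, and the values in the test are illustrative:

```python
import math

def probe_to_world(point_probe, yaw_deg, translation):
    """p_world = R(yaw) * p_probe + t; only rotation about the z-axis is
    shown for brevity (the full model rotates about all three axes)."""
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    x, y, z = point_probe
    rotated = (c * x - s * y, s * x + c * y, z)
    return tuple(r + t for r, t in zip(rotated, translation))
```

Applying this to every selected B-mode pixel accumulates the gray levels into a fixed world-aligned voxel grid, from which 3D and arbitrary cross-sectional images can be rendered.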

Human Spatial Cognition Using Visual and Auditory Stimulation

  • Yu, Mi;Piao, Yong-Jun;Kim, Yong-Yook;Kwon, Tae-Kyu;Hong, Chul-Un;Kim, Nam-Gyun
    • International Journal of Precision Engineering and Manufacturing / v.7 no.2 / pp.41-45 / 2006
  • This paper deals with human spatial cognition using visual and auditory stimulation. More specifically, this investigation observes the relationship between the head and eye motor systems in localizing the direction of a visual target in space, and examines the role of the right-side versus the left-side pinna. For visual stimulation, nineteen red LEDs (light-emitting diodes, brightness $210\;cd/m^2$), arrayed 10 degrees apart in the horizontal plane of the surrounding panel, were used. Physiological parameters such as EOG (electro-oculography), head movement, and their synergic control were measured with a BIOPAC system and a 3SPACE FASTRAK. For auditory stimulation, the function of one pinna was intentionally distorted by inserting a short tube into the ear canal, and the localization error caused by distorting the right or left pinna was investigated. Because a laser pointer showed much less error (0.5%) in localizing target position than the commonly used FASTRAK (30%), the laser pointer was adopted for the pointing task. It was found that harmonic components are not essential for auditory target localization, whereas non-harmonic nearby frequency components are more important in localizing the direction of a sound. The right pinna carries out one of the most important functions in localizing target direction, and a pure tone with only one frequency component is difficult to localize. It was also found that latency is shorter in self-moved tracking (SMT) than in eye-alone tracking (EAT) and eye-hand tracking (EHT). These results can be used in further studies characterizing human spatial cognition.