• Title/Summary/Keyword: Hand Pose

Search results: 105

Fuzzy Rule-based Hand Motion Estimation for a 6-Dimensional Spatial Tracker

  • Lee, Sang-Hoon; Kim, Hyun-Seok; Suh, Il-Hong; Park, Myung-Kwan
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2004.08a / pp.82-86 / 2004
  • A fuzzy rule-based hand-motion estimation algorithm is proposed for a 6-dimensional spatial tracker that employs low-cost accelerometers and gyros. Specifically, the beginning and end of a hand motion must be detected accurately in order to initiate and terminate the integration process that yields the position and pose of the hand from the accelerometer and gyro signals, since errors due to noise and/or hand-shaking motions accumulate during integration. Fuzzy rules for deciding whether a hand motion is occurring are proposed, based on the accelerometer signals and on the sum of the derivatives of the accelerometer and gyro signals. Several experimental results are shown to validate the proposed algorithm.
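
A minimal sketch of the gating idea described in this abstract: a fuzzy "is the hand moving?" degree computed from the accelerometer magnitude and the summed derivatives of the accelerometer and gyro signals, used to start and stop the integration. The membership thresholds, the `trapezoid` shape, and the assumption of gravity-compensated accelerometer data are illustrative choices, not the paper's actual rule base.

```python
import numpy as np

def trapezoid(x, a, b):
    """Membership that rises linearly from 0 at a to 1 at b."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def motion_degree(accel, gyro, dt):
    """Fuzzy degree that the hand is moving, per sample (accel assumed gravity-compensated)."""
    acc_mag = np.linalg.norm(accel, axis=1)
    gyr_mag = np.linalg.norm(gyro, axis=1)
    d_acc = np.abs(np.diff(acc_mag, prepend=acc_mag[0])) / dt   # derivative of accel magnitude
    d_gyr = np.abs(np.diff(gyr_mag, prepend=gyr_mag[0])) / dt   # derivative of gyro magnitude
    mu_acc = trapezoid(acc_mag, 0.3, 1.0)          # rule: "acceleration is large"
    mu_der = trapezoid(d_acc + d_gyr, 0.5, 2.0)    # rule: "sum of derivatives is large"
    return np.maximum(mu_acc, mu_der)              # fuzzy OR of the two rules

def integrate_when_moving(accel, gyro, dt, threshold=0.5):
    """Integrate velocity only while the fuzzy motion degree exceeds a threshold."""
    moving = motion_degree(accel, gyro, dt) > threshold
    vel = np.zeros_like(accel)
    for k in range(1, len(accel)):
        vel[k] = (vel[k - 1] + accel[k] * dt) if moving[k] else 0.0   # reset drift at rest
    return vel
```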

Determination of an Optimal Contact Pose for Object Recognition Using a Robot Hand (로봇 손의 물체 인식을 위한 최적 접촉포즈 결정 알고리즘)

  • 김종익; 한헌수
    • Proceedings of the IEEK Conference / 1999.11a / pp.448-451 / 1999
  • In this paper, we propose a new object representation method and matching algorithm for object recognition using a 3-fingered robot hand. Each fingertip can measure the normal vector and shape of the surface it contacts. An object is represented by an inter-surface description table, in which the features of each surface are stored on the diagonal and the relations between pairs of surfaces are stored in the upper diagonal. Based on this table, a fast and efficient matching algorithm is proposed. The algorithm can be applied to natural quadric objects.
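
A rough sketch of the inter-surface description table idea: per-surface features on the diagonal, pairwise relations in the upper diagonal, and a simple entry-wise matching score. The specific choices here (a scalar surface feature and the angle between contact normals) and the tolerances are illustrative assumptions, not the representation defined in the paper.

```python
import numpy as np

def build_surface_table(normals, features):
    """normals: (n, 3) unit normals of contacted surfaces; features: (n,) scalar surface features."""
    n = len(normals)
    table = np.zeros((n, n))
    for i in range(n):
        table[i, i] = features[i]                        # surface feature on the diagonal
        for j in range(i + 1, n):
            cosang = np.clip(np.dot(normals[i], normals[j]), -1.0, 1.0)
            table[i, j] = np.degrees(np.arccos(cosang))  # relation: angle between normals
    return table

def match_score(measured, model, tol_feat=5.0, tol_rel=10.0):
    """Count table entries of the measured object that agree with a stored model table."""
    diff = np.abs(measured - model)
    feat_ok = np.diag(diff) <= tol_feat
    rel_ok = diff[np.triu_indices_from(diff, k=1)] <= tol_rel
    return int(feat_ok.sum() + rel_ok.sum())
```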

Real-time Human Pose Estimation using RGB-D images and Deep Learning

  • Rim, Beanbonyka; Sung, Nak-Jun; Ma, Jun; Choi, Yoo-Joo; Hong, Min
    • Journal of Internet Computing and Services / v.21 no.3 / pp.113-121 / 2020
  • Human Pose Estimation (HPE), which localizes the human body joints, has high potential for high-level applications in computer vision. The main challenges for real-time HPE are occlusion, illumination change, and the diversity of pose appearance. Feeding a single RGB image into an HPE framework reduces the computation cost, since only a depth-independent device such as a common camera, webcam, or phone camera is required. However, HPE based on a single RGB image cannot overcome the above challenges because of the inherent limitations of color and texture. On the other hand, depth information, which allows an HPE framework to detect the human body parts in 3D coordinates, can be used to address these challenges. However, depth-based HPE requires a depth-dependent device, which imposes space constraints and is costly. In particular, its results are less reliable because it requires pose initialization and its frame tracking is less stable. Therefore, this paper proposes a new HPE method that is robust to self-occlusion. Many body parts can be occluded by other body parts, but this paper focuses only on head self-occlusion. The new method combines an RGB image-based HPE framework with a depth information-based HPE framework. We evaluated the performance of the proposed method with the COCO Object Keypoint Similarity library. By taking advantage of both the RGB image-based and the depth information-based HPE methods, our RGB-D based HPE achieved an mAP of 0.903 and an mAR of 0.938, which shows that it outperforms both the RGB-based and the depth-based HPE.
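
A rough sketch of how 2D keypoints from an RGB pose estimator can be combined with an aligned depth map to obtain 3D joints, together with the COCO Object Keypoint Similarity used for evaluation. The pinhole intrinsics (fx, fy, cx, cy), the per-keypoint sigmas, and the assumption that the depth map is already registered to the RGB image are placeholders, not details from the paper.

```python
import numpy as np

def lift_keypoints_to_3d(keypoints_2d, depth_m, fx, fy, cx, cy):
    """keypoints_2d: (n, 2) pixel coords from an RGB pose estimator; depth_m: aligned depth in meters."""
    joints = []
    for u, v in keypoints_2d.astype(int):
        z = depth_m[v, u]                      # depth sampled at the detected joint
        joints.append(((u - cx) * z / fx,      # back-projection with pinhole intrinsics
                       (v - cy) * z / fy,
                       z))
    return np.array(joints)

def oks(pred, gt, sigmas, area):
    """COCO Object Keypoint Similarity between predicted and ground-truth 2D joints."""
    d2 = np.sum((pred - gt) ** 2, axis=1)
    k2 = (2.0 * sigmas) ** 2
    return float(np.mean(np.exp(-d2 / (2.0 * area * k2 + 1e-9))))
```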

Dynamic Manipulation of a Virtual Object in Marker-less AR system Based on Both Human Hands

  • Chun, Jun-Chul; Lee, Byung-Sung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.4 no.4 / pp.618-632 / 2010
  • This paper presents a novel approach for robustly controlling augmented reality (AR) objects in a marker-less AR system by fingertip tracking and hand pattern recognition. One promising way to build a marker-less AR system is to use parts of the human body, such as the hand or face, in place of traditional fiducial markers. This paper introduces a real-time method to dynamically manipulate the overlaid virtual objects in a marker-less AR system using both hands and a single camera. The bare left hand serves as a virtual marker, and the right hand is used as a hand mouse. To build the marker-less system, we use a skin-color model for hand shape detection and curvature-based fingertip detection on the input video image. The camera pose is estimated from the detected fingertips so that virtual objects can be overlaid on the hand coordinate system. To manipulate the rendered virtual objects dynamically, a vision-based hand control interface is developed that exploits fingertip tracking to move the objects and pattern matching to initiate hand commands. The experiments show that the proposed system can control the objects dynamically and conveniently.
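
A loose sketch of the skin-color segmentation and curvature-based fingertip detection steps mentioned above, written with OpenCV. The YCrCb skin thresholds, the contour step size k, and the angle threshold are illustrative values; a real system would additionally separate fingertips from finger valleys by checking convexity.

```python
import cv2
import numpy as np

def detect_fingertip_candidates(bgr, k=20, angle_thresh_deg=60.0):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # rough skin-color range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return []
    hand = max(contours, key=cv2.contourArea).squeeze(1)       # largest skin blob = hand contour
    tips, n = [], len(hand)
    for i in range(n):
        v1 = hand[(i - k) % n] - hand[i]
        v2 = hand[(i + k) % n] - hand[i]
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle < angle_thresh_deg:        # high-curvature point -> fingertip (or valley) candidate
            tips.append(tuple(int(c) for c in hand[i]))
    return tips
```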

A Study on Development of PC Based In-Line Inspection System with Structure Light Laser (구조화 레이저를 이용한 PC 기반 인-라인 검사 시스템 개발에 관한 연구)

  • Shin Chan-Bai; Kim Jin-Dae; Lim Hak-Kyu; Lee Jeh-Won
    • Journal of the Korean Society for Precision Engineering / v.22 no.11 s.176 / pp.82-90 / 2005
  • Recently, in-line vision inspection has become a growing research area in visual control systems and robotic intelligence, where the exact three-dimensional pose is required. The objective of this article is to study PC-based in-line visual inspection with a hand-eye structure. The paper presents the three-dimensional structured-light measuring principle and a design method for the laser sensor head. Hand-eye laser sensors have been studied for a long time, but the kinematic analysis between the laser sensor and the robot is difficult because a complicated mathematical process is required in real environments. To address this problem, the paper proposes an auto-calibration concept and describes the procedure in detail. A new thinning algorithm and a constrained Hough transform method are also explained. Consequently, the developed in-line inspection module demonstrates successful operation on hole, gap, width, and V-edge features.
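
A minimal sketch of the structured-light measuring principle referred to above: the 3D point on the laser stripe is obtained by intersecting the camera ray through a detected stripe pixel with the calibrated laser plane. The intrinsics and plane parameters are placeholders standing in for the calibrated sensor head; the thinning and constrained Hough steps are not reproduced here.

```python
import numpy as np

def triangulate_stripe_pixel(u, v, fx, fy, cx, cy, plane_n, plane_d):
    """Laser plane in the camera frame: plane_n . X + plane_d = 0 (plane_n is a 3-vector)."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])   # viewing ray through pixel (u, v)
    t = -plane_d / np.dot(plane_n, ray)                   # ray / plane intersection parameter
    return t * ray                                        # 3D point on the laser stripe
```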

A Study on Hand Shape Recognition using Edge Orientation Histogram and PCA (에지 방향성 히스토그램과 주성분 분석을 이용한 손 형상 인식에 관한 연구)

  • Kim, Jong-Min; Kang, Myung-A
    • Journal of Digital Contents Society / v.10 no.2 / pp.319-326 / 2009
  • In this paper, we present an algorithm that recognizes hand shapes in real time using only images, without attaching a separate sensor. Because the hand shape is intricate, the hand is recognized with an edge orientation histogram, which reduces its 2D appearance to a fixed-size descriptor. The method is suited to real-time hand pose recognition because it extracts the hand region accurately using color information even against a complicated background, requires little computation, and is less sensitive to lighting changes. We also describe how principal component analysis (PCA) is used to reduce recognition errors caused by changes in the direction in which the hand shape is presented, so that hand shapes rotated in 3D can still be recognized. The proposed method can be applied to human interface systems that control home appliances or games.
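
A compact sketch of the edge orientation histogram descriptor and the PCA projection described above; the bin count, normalization, and number of principal components are illustrative choices rather than the paper's settings.

```python
import numpy as np

def edge_orientation_histogram(gray, bins=36):
    """Magnitude-weighted histogram of gradient orientations of a grayscale hand image."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)       # orientation folded into [0, 180)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
    return hist / (hist.sum() + 1e-9)

def fit_pca(histograms, n_components=8):
    """histograms: (num_samples, bins) training matrix; returns the mean and principal directions."""
    mean = histograms.mean(axis=0)
    _, _, vt = np.linalg.svd(histograms - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(hist, mean, basis):
    return basis @ (hist - mean)     # low-dimensional descriptor used for matching
```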

A study on hand gesture recognition using 3D hand feature (3차원 손 특징을 이용한 손 동작 인식에 관한 연구)

  • Bae Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.4 / pp.674-679 / 2006
  • In this paper a gesture recognition system using 3D feature data is described. The system relies on a novel 3D sensor that generates a dense range image of the scene. The main novelty of the proposed system, compared with other 3D gesture recognition techniques, is its capability for robust recognition of complex hand postures such as those encountered in sign language alphabets. This is achieved by explicitly employing 3D hand features. Moreover, the proposed approach does not rely on colour information and guarantees robust segmentation of the hand under various illumination conditions and scene contents. Several novel 3D image analysis algorithms are presented, covering the complete processing chain: 3D image acquisition, arm segmentation, hand-forearm segmentation, hand pose estimation, 3D feature extraction, and gesture classification. The proposed system is tested in an application scenario involving the recognition of sign-language postures.
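
A simplified sketch of two stages of the processing chain listed above: segmenting the nearest region of a range image, then extracting basic 3D features from the resulting point cloud. The depth margin and the particular features (centroid, principal axis, PCA-frame extents) are illustrative, not the paper's algorithms.

```python
import numpy as np

def segment_nearest_region(range_image, margin=0.15):
    """Keep pixels within `margin` meters of the closest valid depth (the arm/hand region)."""
    valid = range_image > 0
    nearest = range_image[valid].min()
    return valid & (range_image < nearest + margin)

def hand_features_3d(points):
    """points: (n, 3) hand points; returns centroid, principal axis, and extents as one feature vector."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    local = (points - centroid) @ vt.T            # coordinates in the PCA-aligned frame
    return np.concatenate([centroid, vt[0], np.ptp(local, axis=0)])
```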

Force and Pose control for Anthropomorphic Robotic Hand with Redundancy (여유자유도를 가지는 인간형 로봇 손의 자세 및 힘 제어)

  • Yee, Gun Kyu; Kim, Yong Bum; Kim, Anna; Kang, Gitae; Choi, Hyouk Ryeol
    • The Journal of Korea Robotics Society / v.10 no.4 / pp.179-185 / 2015
  • The versatility of the human hand is what researchers are eager to mimic. As part of this effort, the redundant degrees of freedom in the human hand are considered. In the force domain, however, the redundant joint raises a control issue. To solve this problem, a force control method is proposed for a redundant robotic hand that is similar to the human hand. First, the redundancy of the human hand is analyzed. Then, to resolve the redundancy in the force domain, an artificial minimum-energy point is specified, and its restoring force is applied in the null space to control the configuration of the finger without affecting the fingertip force. Finally, the method is verified experimentally with a commercial robot hand, the Allegro Hand, equipped with a force/torque sensor.
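
A hedged sketch of the null-space idea outlined in the abstract: the fingertip force is commanded through the Jacobian transpose, while a spring toward a chosen "minimum energy" configuration acts only in the null space, so it shapes the finger posture without disturbing the fingertip force. The 3x4 finger Jacobian and the gain are illustrative assumptions.

```python
import numpy as np

def finger_torque(J, f_des, q, q_min_energy, k_null=1.0):
    """J: (3, 4) fingertip Jacobian; f_des: desired fingertip force; q: joint angles of the finger."""
    tau_task = J.T @ f_des                                       # realizes the desired fingertip force
    tau_rest = -k_null * (q - q_min_energy)                      # restoring torque toward the reference posture
    null_proj = np.eye(J.shape[1]) - J.T @ np.linalg.pinv(J.T)   # projector onto the Jacobian null space
    return tau_task + null_proj @ tau_rest                       # posture control that leaves the force untouched
```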

A Decision Tree based Real-time Hand Gesture Recognition Method using Kinect

  • Chang, Guochao; Park, Jaewan; Oh, Chimin; Lee, Chilwoo
    • Journal of Korea Multimedia Society / v.16 no.12 / pp.1393-1402 / 2013
  • Hand gestures are one of the most popular communication methods in everyday life. In human-computer interaction applications, hand gesture recognition provides a natural way of communication between humans and computers. There are two main approaches to hand gesture recognition: glove-based methods and vision-based methods. In this paper, we propose a vision-based hand gesture recognition method using Kinect. Using depth information makes the hand detection process efficient and robust. Finger labeling allows the system to classify poses according to the finger names and the relationships between fingers, which also makes the classification more effective and accurate. Two kinds of gesture sets can be recognized by our system. According to the experiments, the average accuracy on the American Sign Language (ASL) number gesture set is 94.33%, and that on the general gesture set is 95.01%. Since our system runs in real time and has a high recognition rate, it can be embedded into various applications.
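
A toy sketch of the decision-tree classification stage, assuming that features such as the number of extended fingers and per-finger extension flags have already been extracted from the Kinect depth map by the finger-labeling step; the feature layout and gesture labels are hypothetical, and scikit-learn's DecisionTreeClassifier stands in for the paper's tree.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row: [finger_count, thumb, index, middle, ring, pinky] with 1 = extended.
X_train = np.array([
    [0, 0, 0, 0, 0, 0],   # fist
    [1, 0, 1, 0, 0, 0],   # "one"
    [2, 0, 1, 1, 0, 0],   # "two"
    [5, 1, 1, 1, 1, 1],   # open palm / "five"
])
y_train = ["fist", "one", "two", "five"]

clf = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)
print(clf.predict([[2, 0, 1, 1, 0, 0]]))   # -> ['two']
```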

Hand Gesture Interface for Manipulating 3D Objects in Augmented Reality (증강현실에서 3D 객체 조작을 위한 손동작 인터페이스)

  • Park, Keon-Hee; Lee, Guee-Sang
    • The Journal of the Korea Contents Association / v.10 no.5 / pp.20-28 / 2010
  • In this paper, we propose a hand gesture interface for manipulating augmented objects in 3D space using a camera. Generally, a marker is used to detect 3D movement in 2D images. However, marker-based systems have obvious drawbacks: the markers must always appear in the image, or additional equipment is needed to control the objects, which reduces immersion. To overcome this problem, we replace the marker with the planar hand shape by estimating the hand pose, and a Kalman filter is used for robust tracking of the hand shape. The experimental results indicate the feasibility of the proposed algorithm for hand-based AR interfaces.
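
A minimal constant-velocity Kalman filter sketch for smoothing the tracked hand position, in the spirit of the tracking step mentioned above; the state layout and the noise covariances are illustrative tuning values, not the paper's.

```python
import numpy as np

class HandTracker:
    """Constant-velocity Kalman filter over the state [x, y, vx, vy]."""
    def __init__(self, dt=1 / 30, q=1e-2, r=1.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)   # motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)    # we observe (x, y) only
        self.Q, self.R = q * np.eye(4), r * np.eye(2)     # process / measurement noise
        self.x, self.P = np.zeros(4), np.eye(4)

    def update(self, measured_xy):
        # Predict with the constant-velocity model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the measured hand position.
        y = np.asarray(measured_xy, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                 # filtered hand position
```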