• Title/Summary/Keyword: 3D computer vision

3D Facial Landmark Tracking and Facial Expression Recognition

  • Medioni, Gerard;Choi, Jongmoo;Labeau, Matthieu;Leksut, Jatuporn Toy;Meng, Lingchao
    • Journal of Information and Communication Convergence Engineering, v.11 no.3, pp.207-215, 2013
  • In this paper, we address the challenging computer vision problem of obtaining reliable facial expression analysis from a naturally interacting person. We propose a system that combines a 3D generic face model, 3D head tracking, and a 2D tracker to track facial landmarks and recognize expressions. First, we extract facial landmarks from a neutral frontal face, and then we deform a 3D generic face to fit the input face. Next, we use our real-time 3D head tracking module to track a person's head in 3D and predict facial landmark positions in 2D using the projection from the updated 3D face model. Finally, we use the tracked 2D landmarks to update the 3D landmarks. This integrated tracking loop enables efficient tracking of the non-rigid parts of a face in the presence of large 3D head motion. We conducted experiments on facial expression recognition using both frame-based and sequence-based approaches. Our method achieves a 75.9% recognition rate on 8 subjects with 7 key expressions. Our approach is a considerable step toward new applications including human-computer interaction, behavioral science, robotics, and games.
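
The heart of the tracking loop described above is predicting 2D landmark positions by projecting the deformed 3D face model under the current head pose. Below is a minimal sketch of that projection step, assuming a standard pinhole camera model; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def project_landmarks(landmarks_3d, R, t, K):
    """Project 3D face-model landmarks into the 2D image plane.

    landmarks_3d : (N, 3) landmark points in the face-model frame
    R, t         : head pose (rotation matrix, translation vector) from the 3D tracker
    K            : 3x3 camera intrinsic matrix
    Returns an (N, 2) array of predicted 2D landmark positions.
    """
    cam_pts = landmarks_3d @ R.T + t          # model frame -> camera frame
    img_pts = cam_pts @ K.T                   # pinhole projection
    return img_pts[:, :2] / img_pts[:, 2:3]   # perspective divide
```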

A Study on the Determination of 3-D Object's Position Based on Computer Vision Method (컴퓨터 비젼 방법을 이용한 3차원 물체 위치 결정에 관한 연구)

  • 김경석
    • Journal of the Korean Society of Manufacturing Technology Engineers, v.8 no.6, pp.26-34, 1999
  • This study presents an alternative method for determining an object's position based on a computer vision method. The approach develops a vision system model that defines the reciprocal relationship between 3-D real space and the 2-D image plane. The model involves bilinear six-view parameters, which are estimated using the relationship between camera-space locations and the real coordinates of known positions. Based on the parameters estimated independently for each camera, the position of an unknown object is determined using a sequential estimation scheme that combines the data of the unknown points from each camera's 2-D image plane. This vision control method is robust and reliable, overcoming the difficulties of conventional research such as precise calibration of the vision sensor, exact kinematic modeling of the robot, and correct knowledge of the relative position and orientation of the robot and CCD camera. Finally, the developed vision control method is tested experimentally by determining object positions in space using the computer vision system. The results show that the presented method is precise and practical.
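
The abstract describes a two-step scheme: fit a per-camera model from known reference points, then estimate an unknown 3-D point by combining its 2-D observations from several cameras. The sketch below illustrates that idea with a simple affine camera model fitted by least squares; it is an assumption-laden stand-in, since the paper's bilinear six-view parameter model is not detailed in the abstract.

```python
import numpy as np

def fit_camera_model(world_pts, image_pts):
    """Least-squares fit of a linear camera model  [u, v] = A @ [X, Y, Z, 1].

    world_pts : (N, 3) known 3-D reference positions
    image_pts : (N, 2) their observed 2-D image coordinates
    Returns a 2x4 parameter matrix A for this camera.
    """
    X = np.hstack([world_pts, np.ones((len(world_pts), 1))])  # (N, 4)
    W, *_ = np.linalg.lstsq(X, image_pts, rcond=None)          # (4, 2)
    return W.T                                                 # (2, 4)

def estimate_position(camera_models, observations):
    """Recover an unknown 3-D point from its 2-D observations in several cameras."""
    rows, rhs = [], []
    for A, (u, v) in zip(camera_models, observations):
        # A @ [X, Y, Z, 1] = [u, v]  ->  two linear equations in (X, Y, Z)
        rows.append(A[:, :3])
        rhs.append(np.array([u, v]) - A[:, 3])
    M, b = np.vstack(rows), np.hstack(rhs)
    position, *_ = np.linalg.lstsq(M, b, rcond=None)
    return position
```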

Hybrid Real-time Monitoring System Using 2D Vision and 3D Action Recognition (2D 비전과 3D 동작인식을 결합한 하이브리드 실시간 모니터링 시스템)

  • Lim, Jong Heon;Sung, Man Kyu;Lee, Joon Jae
    • Journal of Korea Multimedia Society, v.18 no.5, pp.583-598, 2015
  • Industrial products such as automobiles, which are built from many component parts, require numerous assembly lines, and a large portion of those lines is still operated by manual human work. Such manual work sometimes causes critical errors that lead to defective products, and once assembly is completed it is very hard to verify whether or not the product contains an error. In this paper, to monitor the behavior of manual human work on an assembly line automatically, we propose a real-time hybrid monitoring system that combines a 2D vision sensor tracking technique with 3D motion recognition sensors.

Three Dimensional Geometric Feature Detection Using Computer Vision System and Laser Structured Light (컴퓨터 시각과 레이저 구조광을 이용한 물체의 3차원 정보 추출)

  • Hwang, H.;Chang, Y.C.;Im, D.H.
    • Journal of Biosystems Engineering, v.23 no.4, pp.381-390, 1998
  • An algorithm to extract the 3-D geometric information of a static object was developed using a 2-D computer vision system and a laser structured lighting device. Multiple parallel lines were used as the structured light pattern in this study. The proposed algorithm is composed of three stages. In the first stage, camera calibration, which determines the coordinate transformation between the image plane and the real 3-D world, was performed using six known pairs of points. In the second stage, the height of the object was computed by utilizing the shift of the projected laser beam on the object. Finally, using the height information at each 2-D image point, the corresponding 3-D information was computed from the camera calibration results. For arbitrary geometric objects, the maximum error of the 3-D features extracted with the proposed algorithm was less than 1-2 mm. The results show that the proposed algorithm is accurate for detecting the 3-D geometric features of an object.
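
The second stage of the algorithm above recovers height from the lateral shift of the projected laser line. A toy version of that triangulation is sketched below, assuming the laser sheet is tilted by a known angle relative to a camera looking straight down; the paper's actual geometry and calibration may differ.

```python
import numpy as np

def height_from_laser_shift(shift_px, pixel_size_mm, laser_angle_deg):
    """Estimate object height from the shift of a projected laser line.

    If the laser sheet is inclined by `laser_angle_deg` with respect to the
    camera's viewing direction, a surface raised by h displaces the imaged
    line by roughly h * tan(angle), so the height is shift / tan(angle).
    """
    shift_mm = shift_px * pixel_size_mm   # convert the measured image shift to mm
    return shift_mm / np.tan(np.radians(laser_angle_deg))
```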

Adjustment Algorithms for the Measured Data of Stereo Vision Methods for Measuring the Height of Semiconductor Chips (반도체 칩의 높이 측정을 위한 스테레오 비전의 측정값 조정 알고리즘)

  • Kim, Young-Doo;Cho, Tai-Hoon
    • Journal of the Semiconductor & Display Technology, v.10 no.2, pp.97-102, 2011
  • Many 2D vision algorithms have been applied to inspection. However, these algorithms are limited in inspection applications that require 3D information, such as the height of semiconductor chips. Stereo vision is a well-known method for measuring the distance from the camera to the object, but it is difficult to apply directly to inspection because of its measurement error. In this paper, we propose two adjustment methods that reduce the error of the height data measured by stereo vision. A weight-value-based model is used to minimize the mean squared error, and an average-value-based model, with a simpler concept, is used to reduce the measurement error. The effectiveness of these algorithms is demonstrated through experiments that measure the height of semiconductor chips.
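
The abstract names two adjustment models: a weight-value-based model that minimizes the mean squared error and a simpler average-value-based model. One plausible reading is sketched below, using inverse-variance weights for the first model (a standard MSE-minimizing choice); the paper's exact weight definition is not given in the abstract.

```python
import numpy as np

def average_adjust(heights):
    """Average-value model: simply average repeated stereo height measurements."""
    return float(np.mean(heights))

def weighted_adjust(heights, variances):
    """Weight-value model: weight each measurement by the inverse of its error
    variance, which minimizes the mean squared error of the combined estimate."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(heights, dtype=float)) / np.sum(w))
```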

Analysis of Quantization Error in Stereo Vision (스테레오 비젼의 양자화 오차분석)

  • 김동현;박래홍
    • Journal of the Korean Institute of Telematics and Electronics B, v.30B no.9, pp.54-63, 1993
  • Quantization error, generated by the quantization process of an image, is inherent in computer vision. Especially in stereo vision, quantization error in the 2-D images results in position errors in the reconstructed 3-D scene, so it must be analyzed mathematically. In this paper, we present an analysis of the probability density function (pdf) of the quantization error for a line-based stereo matching scheme. We show that the theoretical pdf of the quantization error in the reconstructed 3-D position information has a more general form than that of the conventional analysis for pixel-based stereo matching schemes. Computer simulations support the theoretical distribution.
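
For context, the conventional pixel-based analysis that this paper generalizes can be summarized as follows: depth is inversely proportional to disparity, so a uniform disparity quantization error induces a depth error whose spread grows with the square of the depth. This is the textbook relation, not the paper's line-based derivation.

```latex
% Depth from disparity (focal length f, baseline B) and the first-order
% effect of a disparity quantization error \Delta d:
\[
  Z = \frac{fB}{d}, \qquad
  \Delta Z \approx \frac{\partial Z}{\partial d}\,\Delta d
          = -\frac{fB}{d^{2}}\,\Delta d
          = -\frac{Z^{2}}{fB}\,\Delta d .
\]
% If \Delta d is uniform on [-q/2, q/2], the induced depth error at a fixed
% depth Z is, to first order, also uniform:
\[
  p_{\Delta Z}(e) = \frac{fB}{q\,Z^{2}}, \qquad |e| \le \frac{q\,Z^{2}}{2fB}.
\]
```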

Stereo Vision-Based 3D Pose Estimation of Product Labels for Bin Picking (빈피킹을 위한 스테레오 비전 기반의 제품 라벨의 3차원 자세 추정)

  • Udaya, Wijenayake;Choi, Sung-In;Park, Soon-Yong
    • Journal of Institute of Control, Robotics and Systems, v.22 no.1, pp.8-16, 2016
  • In the field of computer vision and robotics, bin picking is an important application area in which object pose estimation is necessary. Different approaches, such as 2D feature tracking and 3D surface reconstruction, have been introduced to estimate the object pose accurately. We propose a new approach in which both 2D image features and 3D surface information are used to identify the target object and estimate its pose accurately. First, we introduce a label detection technique using Maximally Stable Extremal Regions (MSERs), where the label detection results are used to identify the target objects separately. Then, the 2D image features in the detected label areas are used to generate 3D surface information. Finally, we calculate the 3D position and orientation of the target objects from the 3D surface information.
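
MSER detection, the first step named in the abstract, is available directly in OpenCV, so a minimal label-candidate detector can be sketched as below. The area filter and its threshold are illustrative assumptions; the paper's actual filtering criteria are not stated in the abstract.

```python
import cv2

def detect_label_candidates(gray_image, min_area=500):
    """Detect candidate label regions using OpenCV's MSER detector.

    gray_image : 8-bit grayscale image
    Returns bounding boxes (x, y, w, h) of maximally stable extremal regions,
    filtered by a minimum area so that only label-sized regions remain.
    """
    mser = cv2.MSER_create()
    _, boxes = mser.detectRegions(gray_image)
    return [(x, y, w, h) for (x, y, w, h) in boxes if w * h >= min_area]
```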

Optimal 3D Grasp Planning for unknown objects (임의 물체에 대한 최적 3차원 Grasp Planning)

  • 이현기;최상균;이상릉
    • Proceedings of the Korean Society of Precision Engineering Conference, 2002.05a, pp.462-465, 2002
  • This paper deals with the synthesis of stable and optimal grasps of unknown objects by a 3-finger hand. Previous robot grasp research has mainly analyzed either unknown 2D objects using a vision sensor or known 3D shapes such as cylindrical or hexahedral objects. Extending the previous work, we propose an algorithm to analyze grasps of unknown 3D objects using a vision sensor. This is achieved in two steps. The first step builds a 3D geometric model of the unknown object by stereo matching, a 3D computer vision technique. The second step finds the optimal grasping points. In this step we choose the 3-finger hand because it has the characteristics of a multi-finger hand and is easy to model. To find the optimal grasping points, a genetic algorithm is used with an objective function that minimizes the admissible fingertip force applied to the object. The algorithm is verified by computer simulation, in which the optimal grasping points of known objects at different angles are checked.
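
As an illustration of the second step, a genetic algorithm searching over candidate triples of contact points might look like the sketch below. The chromosome encoding, crossover, and mutation rates are illustrative; the paper's actual encoding and its admissible-force objective are not specified in the abstract.

```python
import numpy as np

def optimize_grasp(surface_pts, fitness, pop_size=50, n_gen=100, rng=None):
    """Toy genetic algorithm choosing three grasp points on a surface.

    surface_pts : (M, 3) candidate contact points from the 3D model
    fitness     : function scoring a (3, 3) set of contact points (lower is better),
                  e.g. the admissible fingertip force for that grasp
    """
    rng = rng or np.random.default_rng()
    n = len(surface_pts)
    pop = rng.integers(0, n, size=(pop_size, 3))           # each row: 3 point indices
    for _ in range(n_gen):
        scores = np.array([fitness(surface_pts[ind]) for ind in pop])
        parents = pop[np.argsort(scores)][: pop_size // 2]  # keep the better half
        children = parents.copy()
        swap = rng.random(children.shape) < 0.5             # uniform crossover
        children[swap] = parents[rng.permutation(len(parents))][swap]
        mutate = rng.random(children.shape) < 0.05          # random mutation
        children[mutate] = rng.integers(0, n, size=mutate.sum())
        pop = np.vstack([parents, children])
    best = pop[np.argmin([fitness(surface_pts[ind]) for ind in pop])]
    return surface_pts[best]
```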

3D measuring system by using the stereo vision (스테레오비젼을 이용한 3차원 물체 측정 시스템)

  • 조진연;김기범
    • Proceedings of the Korean Society of Precision Engineering Conference, 1997.10a, pp.224-228, 1997
  • Computer vision systems are becoming more important as research on inspection systems, intelligent robots, and diagnostic medical systems is actively performed. In this paper, a 3D measuring system is developed using stereo vision. The relation between the left and right images is obtained using the 8-point algorithm, and the fundamental matrix, epipoles, and a 3D reconstruction algorithm are used to measure 3D dimensions. The 3D measuring system was developed in Visual Basic, in which 3D coordinates are obtained with simple mouse clicks. This software can be applied to construction, home interior design, and rapid measurement systems.
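
The pipeline named in the abstract (8-point algorithm, fundamental matrix, 3D reconstruction) maps onto standard OpenCV calls. A sketch is given below in Python rather than the paper's Visual Basic; the shared intrinsic matrix K is an assumption, and the reconstruction is only determined up to scale.

```python
import cv2
import numpy as np

def reconstruct_points(pts_left, pts_right, K):
    """Reconstruct matched stereo points up to scale.

    pts_left, pts_right : (N, 2) float arrays of matched points, N >= 8
    K                   : 3x3 camera intrinsic matrix (assumed identical for both views)
    """
    F, _ = cv2.findFundamentalMat(pts_left, pts_right, cv2.FM_8POINT)  # 8-point algorithm
    E = K.T @ F @ K                                                    # essential matrix
    _, R, t, _ = cv2.recoverPose(E, pts_left, pts_right, K)            # relative pose
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # projection matrices
    P1 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P0, P1, pts_left.T, pts_right.T)     # homogeneous 3D points
    return (pts4d[:3] / pts4d[3]).T
```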

Essential Computer Vision Methods for Maximal Visual Quality of Experience on Augmented Reality

  • Heo, Suwoong;Song, Hyewon;Kim, Jinwoo;Nguyen, Anh-Duc;Lee, Sanghoon
    • Journal of International Society for Simulation Surgery, v.3 no.2, pp.39-45, 2016
  • Augmented reality is an environment that combines a real-world view with information drawn by a computer. Since the image a user sees through an augmented reality device is a synthetic image composed of a real view and a virtual image, it is important that the computer-generated virtual image harmonize well with the real-view image. In this paper, we review several computer vision and graphics methods that give users a realistic augmented reality experience. To generate a visually harmonized synthetic image consisting of a real and a virtual image, the computer must know the 3D geometry and environmental information such as lighting and material surface reflectivity, and many computer vision methods aim to estimate these. We introduce some of the approaches for acquiring geometric information, the lighting environment, and material surface properties from monocular or multi-view images. We expect this paper to give readers an intuition for the computer vision methods that provide a realistic augmented reality experience.