• Title/Summary/Keyword: 2D Vision


Three Dimensional Geometric Feature Detection Using Computer Vision System and Laser Structured Light (컴퓨터 시각과 레이저 구조광을 이용한 물체의 3차원 정보 추출)

  • Hwang, H.;Chang, Y.C.;Im, D.H.
    • Journal of Biosystems Engineering / v.23 no.4 / pp.381-390 / 1998
  • An algorithm to extract the 3-D geometric information of a static object was developed using a 2-D computer vision system and a laser structured lighting device. Multiple parallel lines were used as the structured light pattern in this study. The proposed algorithm was composed of three stages. In the first stage, camera calibration, which determines the coordinate transformation between the image plane and the real 3-D world, was performed using six known pairs of points. In the second stage, the height of the object was computed by utilizing the shift of the projected laser beam on the object. Finally, using the height information of the 2-D image point, the corresponding 3-D information was computed from the results of the camera calibration. For arbitrary geometric objects, the maximum error of the extracted 3-D features using the proposed algorithm was within 1~2 mm. The results showed that the proposed algorithm is accurate for 3-D geometric feature detection of an object.
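The height-from-shift step in the second stage can be sketched in a few lines. The projection angle, pixel scale, and function name below are illustrative assumptions, not the paper's actual calibration values:

```python
import numpy as np

# Hypothetical sketch: recovering object height from the lateral shift of a
# projected laser line. With the laser sheet inclined at angle theta to the
# vertical, a surface of height h shifts the observed line by h * tan(theta),
# so h = shift / tan(theta). Scale and angle here are assumed values.
def height_from_shift(shift_px, mm_per_px=0.5, theta_deg=45.0):
    shift_mm = shift_px * mm_per_px
    return shift_mm / np.tan(np.radians(theta_deg))

# A 20-pixel shift at 0.5 mm/px and 45 degrees corresponds to ~10 mm height.
print(round(height_from_shift(20), 3))
```

The per-pixel heights obtained this way would then be mapped to 3-D coordinates through the camera calibration, as the abstract describes.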


An Experimental Study on the Optimal Number of Cameras used for Vision Control System (비젼 제어시스템에 사용된 카메라의 최적개수에 대한 실험적 연구)

  • 장완식;김경석;김기영;안힘찬
    • Transactions of the Korean Society of Machine Tool Engineers / v.13 no.2 / pp.94-103 / 2004
  • The vision system model used in this study involves six parameters, which permits a kind of adaptability: the relationship between the camera-space location of manipulable visual cues and the vector of robot joint coordinates is estimated in real time. This vision control method requires a number of cameras to map the 3-D physical space onto 2-D camera planes, and it can be used irrespective of camera location as long as the visual cues are displayed in each camera plane. Thus, this study investigates the optimal number of cameras for the developed vision control system by varying the number of cameras used. The study proceeds in two parts: (a) effectiveness of the vision system model, and (b) the optimal number of cameras. The results provide evidence of the adaptability of the developed vision control method using the optimal number of cameras.

Evaluation of Visual Responses in Viewing a 3D Image (3D 영상 시청 시 시각반응의 평가)

  • Lee, Mu-Hyuk;Son, Jeong-Sik;Kim, Jaedo;Yu, Dong-Sik
    • Journal of Korean Ophthalmic Optics Society / v.17 no.2 / pp.165-170 / 2012
  • Purpose: The aim of this study was to measure and evaluate changes in visual responses when viewing 2D and 3D (three-dimensional) images. Methods: The subjects were 44 college students aged 19 to 25 years with normal binocular vision. The visual responses measured were the CA/C (convergence accommodation/convergence) ratio, convergence-induced PD (interpupillary distance), accommodative responses, and perceived distance when viewing a 3D image. Results: Convergence and accommodative responses when viewing the 3D image were significantly larger (p<0.05) than for the 2D image. A moderate positive correlation was found between the CA/C ratio and accommodative response (r = 0.477, p = 0.001). Subjects with smaller PD showed larger depth perception. Convergence when viewing the 3D image was significantly larger (p<0.05) than that at the cognitive distance. Conclusions: Visual fatigue may be more intense with a larger CA/C ratio and smaller PD when viewing 3D images.

Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il;Kang, Rae-Won;Lee, Sang-Jun;Ju, Woo-Suk;Lee, Joan-Jae
    • Journal of Korea Multimedia Society / v.10 no.6 / pp.716-725 / 2007
  • In this paper we describe a motion tracking algorithm for 3D human animation using a stereo vision system. The motion data of the end effectors of the human body are extracted by following their movement through a segmentation process in the HSI or RGB color model, and blob analysis is then used to detect robust shapes. When two hands or two feet cross at any position and then separate, an adaptive algorithm recognizes whether each is the left or the right one. Real motion is motion in 3-D coordinates, whereas a mono image provides only 2-D coordinates and no distance from the camera. With stereo vision, as with human vision, we can acquire 3-D motion data: horizontal and vertical motion as well as the distance of objects from the camera. This requires a depth value (z axis) in addition to the x- and y-axis coordinates of the mono image to obtain 3-D coordinates. The depth value is calculated from the disparity of the stereo pair, using only the end effectors of the images. The positions of the inner joints are then calculated, and the 3D character can be visualized using inverse kinematics.
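The disparity-to-depth step described above can be sketched as follows; the focal length and baseline values are illustrative assumptions, not the paper's camera setup:

```python
# For a rectified stereo pair, a point matched with disparity d between the
# left and right images lies at depth z = f * B / d, where f is the focal
# length in pixels and B the camera baseline. Values here are assumed.
def depth_from_disparity(disparity_px, focal_px=700.0, baseline_mm=60.0):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# An end-effector blob matched with 35 px of disparity:
print(depth_from_disparity(35.0))  # 700 * 60 / 35 = 1200.0 mm
```

Larger disparities map to nearer points, which is why depth resolution degrades rapidly for distant objects.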


A Study on Optimal Quality Fabrication for the Tactile Sensation of Low Visibility Using 3D Printing

  • Han, Hyeonsu;Ko, Junghyuk
    • Journal of Broadcast Engineering / v.24 no.7 / pp.1237-1245 / 2019
  • Most blind people are low-vision blind due to injury or disease. As their vision decreases, they experience inconvenience in daily life and lose visual memories of their families. The purpose of this study is to use lithophane 3D printing technology to support their daily life and help them remember their families. In addition, we study the conditions under which the manufactured 3D plates can be optimally understood through the tactile sense of the low-vision blind. Three parameters that affect tactile perception of the 3D plates were chosen, and by comparing tactile responses, the optimal conditions were found: (1) a round form perceived as a 3D object, (2) a thin thickness similar to Braille, and (3) a high resolution capable of expressing detail.

Comparison of Autorefraction and Refraction with iTrace for Elementary School Children (초등학생의 자동안굴절계와 iTrace로 측정한 굴절검사 값의 비교)

  • Kim, Hyojin;Lee, Koon-Ja;Kim, Sam-Yi;Kim, Se-Rom
    • Journal of Korean Ophthalmic Optics Society / v.15 no.1 / pp.99-104 / 2010
  • Purpose: Differences in refraction results between autorefraction and iTrace were investigated for elementary school children in Asan City. In the iTrace method, exclusion of accommodation without cycloplegia was used. Methods: The manifest refractive state of 42 eyes of children aged 12~13 years was measured using an autorefractor and iTrace. Refractions for far (more than 5 m) and near (30 cm) vision were measured using iTrace. Spherical equivalents were classified into group 1 (-0.50 D < ~ < +1.00 D) and group 2 (below -0.50 D) according to refractive error. Results: Mean spherical equivalents using the autorefractor and iTrace (far and near vision) were -1.08 D, -0.29 D and -2.34 D, respectively (p<0.01). Compared with far vision using iTrace, autorefraction measured myopia of -0.50 D ~ -1.00 D in 52.4% of all eyes. Autorefraction also measured statistically significantly more myopia than far vision with iTrace in groups 1 and 2. Conclusions: The difference in refractive error between autorefraction and objective refraction measured with iTrace at far vision (more than 5 m) was -0.79 D. Autorefraction showed statistically lower refractive errors than iTrace at far vision.

The Influence of Accommodation on Watching Home 3D TV at Close Distance (가정용 3D TV의 근거리 시청이 조절기능에 미치는 영향)

  • Kim, Jung-Ho;Hwang, Hae-Young;Kang, Ji-Hun;Yu, Dong-Sik;Kim, Jae-Do;Son, Jeong-Sik
    • Journal of Korean Ophthalmic Optics Society / v.18 no.2 / pp.157-163 / 2013
  • Purpose: This study investigated whether watching 2D and 3D images affects accommodative function (AF), and the differences between the changes in AF induced by 2D and 3D viewing. Methods: 50 subjects (30 male, 20 female) in their 20s to 40s ($22.9{\pm}3.93$ years) who were able to watch 3D images participated in this study. Accommodative amplitude (AA) by near point of accommodation (NPA), accommodative response (AR), positive and negative relative accommodation (PRA, NRA), and accommodative facility (AF) were measured before and after watching 2D and 3D images at a 1 m distance for 30 minutes each. Results: AA decreased after watching both 2D and 3D images compared with before, and AA after watching 3D images was significantly lower than after watching 2D images. AR increased after watching both 2D and 3D images compared with before, but there was no difference between 2D and 3D. PRA and NRA were not significantly different before and after watching 2D and 3D images. Accommodation speed by AF increased from before watching ($13.52{\pm}3.32$ cpm) to after watching 2D images ($14.28{\pm}3.21$ cpm) and after watching 3D images ($14.90{\pm}3.27$ cpm). Conclusions: Watching images at close distance affects accommodative function, and the decrease in AA after watching 2D and, to a greater degree, 3D images may contribute to asthenopia in the same order. The increase in AF after watching images, especially 3D images, suggests a possibility for vision therapy, and further detailed VT studies using 3D images are required.

Deep Learning Machine Vision System with High Object Recognition Rate using Multiple-Exposure Image Sensing Method

  • Park, Min-Jun;Kim, Hyeon-June
    • Journal of Sensor Science and Technology / v.30 no.2 / pp.76-81 / 2021
  • In this study, we propose a machine vision system with a high object recognition rate. By utilizing a multiple-exposure image sensing technique, the proposed deep learning-based machine vision system can cover a wide light intensity range without further learning processes for the various light intensity ranges. If the proposed machine vision system fails to recognize object features, the system operates in a multiple-exposure sensing mode and detects the target object hidden in the nearly dark or bright regions. Furthermore, short- and long-exposure images from the multiple-exposure sensing mode are synthesized to obtain accurate object feature information, resulting in image information with a wide dynamic range. Even though the object recognition resources for the deep learning process covered a light intensity range of only 23 dB, the prototype machine vision system with the multiple-exposure imaging method demonstrated object recognition performance over a light intensity range of up to 96 dB.
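The short/long exposure synthesis idea can be sketched as below; the saturation threshold, exposure ratio, and function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Pixels saturated in the long exposure are replaced by short-exposure values
# scaled by the exposure ratio, extending the usable dynamic range of the
# merged image. Threshold and ratio here are assumed for illustration.
def merge_exposures(short_img, long_img, ratio=8.0, sat_level=250):
    short_f = short_img.astype(np.float32) * ratio  # radiance estimate
    long_f = long_img.astype(np.float32)
    saturated = long_img >= sat_level               # blown out in long exposure
    return np.where(saturated, short_f, long_f)

long_img = np.array([[10, 255]], dtype=np.uint8)    # right pixel saturated
short_img = np.array([[1, 40]], dtype=np.uint8)
print(merge_exposures(short_img, long_img))         # [[ 10. 320.]]
```

The merged float image exceeds the 8-bit range of either input, which is the wide-dynamic-range effect the abstract describes.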

Calibration of 3D Coordinates in Orthogonal Stereo Vision (직교식 스테레오 비젼에서의 3차원 좌표 보정)

  • Yoon, Hee-Joo;Seo, Young-Wuk;Bae, Jung-Soo;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.1 / pp.504-507 / 2005
  • In this paper, we propose a calibration technique for 3D coordinates using orthogonal stereo vision. First, we acquire a front image and an upper image from the stereo cameras in real time and extract the coordinates of a moving object from each using differential operations and the ART2 clustering algorithm. Then, we generate the 3D coordinates of the moving object by combining these two sets of coordinates. Finally, we calibrate the 3D coordinates using the orthogonal stereo vision, since the raw 3D coordinates are inaccurate due to perspective. Experimental results show that accurate 3D coordinates of a moving object can be generated by the proposed calibration technique.
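The coordinate-combination step can be sketched as follows. The pixel scale, the averaging of the shared axis, and the function name are illustrative assumptions, and the perspective calibration the paper proposes is not modeled here:

```python
# The front camera observes the object's (x, y) image position and the upper
# camera observes (x, z); merging the two 2-D detections yields a 3-D point.
# Scale and averaging strategy are assumed for illustration.
def combine_orthogonal(front_xy, upper_xz, mm_per_px=1.0):
    fx, fy = front_xy
    ux, uz = upper_xz
    x = 0.5 * (fx + ux) * mm_per_px   # x is seen by both views: average them
    y = fy * mm_per_px
    z = uz * mm_per_px
    return (x, y, z)

print(combine_orthogonal((100, 40), (102, 250)))  # (101.0, 40.0, 250.0)
```

The small disagreement between the two x observations (100 vs. 102) is exactly the perspective-induced error the paper's calibration step is meant to correct.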


Analysis of Quantization Error in Stereo Vision (스테레오 비젼의 양자화 오차분석)

  • 김동현;박래홍
    • Journal of the Korean Institute of Telematics and Electronics B / v.30B no.9 / pp.54-63 / 1993
  • Quantization error, generated by the quantization process of an image, is inherent in computer vision. Because the quantization error in a 2-D image results in position errors in the reconstructed 3-D scene, especially in stereo vision, it is necessary to analyze it mathematically. In this paper, an analysis of the probability density function (pdf) of the quantization error for a line-based stereo matching scheme is presented. We show that the theoretical pdf of the quantization error in the reconstructed 3-D position information has a more general form than the conventional analysis for pixel-based stereo matching schemes. Computer simulation is observed to support the theoretical distribution.
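The effect analyzed above can be illustrated numerically. This is a simple pixel-based Monte Carlo simulation, not the paper's line-based derivation, and the focal length and baseline are assumed values:

```python
import numpy as np

# Disparities quantized to whole pixels propagate into depth z = f * B / d,
# producing a non-uniform depth error whose magnitude grows with distance
# (i.e., with smaller disparity). f and B are assumed for illustration.
rng = np.random.default_rng(0)
f, B = 700.0, 60.0                            # focal length (px), baseline (mm)
true_d = rng.uniform(20.0, 40.0, 100_000)     # true sub-pixel disparities
quant_d = np.round(true_d)                    # disparities after quantization
depth_err = f * B / quant_d - f * B / true_d  # resulting depth error (mm)
print(round(float(np.abs(depth_err).max()), 1))
```

Even a half-pixel disparity error produces depth errors of tens of millimeters at the far end of this range, which is why the pdf of the reconstructed position error is worth deriving analytically.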
