• Title/Summary/Keyword: 2D Vision Sensors


The Position Estimation of a Car Using 2D Vision Sensors (2D 비젼 센서를 이용한 차체의 3D 자세측정)

  • Han, Myung-Chul;Kim, Jung-Kwan
    • Proceedings of the Korean Society of Precision Engineering Conference / 1996.11a / pp.296-300 / 1996
  • This paper presents a 3D position estimation algorithm based on the images of 2D vision sensors, which emit a red laser slit light and receive the resulting line images. Since the sensor usually measures the 2D position of a corner (or edge) of a body, and the measured point is not fixed in the body, additional information about the corner (or edge) is used: the corner (or edge) line is straight and fixed in the body. For a body that moves in a plane, the transformation matrix between the body coordinate frame and the reference coordinate frame is found analytically. For a body in 3D motion, a linearization technique and the least-mean-squares method are used.

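For the planar case this abstract describes, the body-to-reference transform can be recovered in closed form. The sketch below assumes 2D point correspondences between the body frame and the reference frame (the paper instead works from line constraints extracted from the slit images) and solves the least-squares rigid alignment via SVD; all names and inputs are illustrative:

```python
import numpy as np

def planar_rigid_transform(body_pts, ref_pts):
    """Analytic 2D rigid transform (rotation + translation) mapping
    body-frame points onto reference-frame points, via the closed-form
    least-squares (Kabsch) solution."""
    bc = body_pts.mean(axis=0)
    rc = ref_pts.mean(axis=0)
    H = (body_pts - bc).T @ (ref_pts - rc)   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = rc - R @ bc
    return R, t
```

Given exact correspondences the recovered `R` and `t` reproduce the true motion; with noisy measurements the same formula gives the least-squares estimate.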

The Position Estimation of a Body Using 2-D Slit Light Vision Sensors (2-D 슬리트광 비젼 센서를 이용한 물체의 자세측정)

  • Kim, Jung-Kwan;Han, Myung-Chul
    • Journal of the Korean Society for Precision Engineering / v.16 no.12 / pp.133-142 / 1999
  • We introduce algorithms for 2-D and 3-D position estimation using 2-D vision sensors. The sensors used in this research project a red laser slit light onto the body, so it is very convenient to obtain the coordinates of a corner point or edge in the sensor coordinate frame. Since the measured points are normally not fixed in the body coordinate frame, the additional conditions that corner lines or edges are straight and fixed in the body coordinate frame are used to find the position and orientation of the body. For a body in 2-D motion, the solution can be found analytically. For a body in 3-D motion, however, a linearization technique and the least-mean-squares method are used because of the strong nonlinearity.

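A small sketch of the straight-line constraint used above: given noisy 3-D points measured along a corner (or edge) line by the slit-light sensor, a least-squares line (centroid plus principal direction) can be fitted with an SVD. This is an illustrative fragment, not the paper's full pose algorithm:

```python
import numpy as np

def fit_line_3d(points):
    """Least-squares 3D line fit: returns a point on the line (the
    centroid) and the unit direction (principal axis of the cloud)."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    direction = Vt[0]   # right singular vector of the largest singular value
    return centroid, direction
```

The fitted line, expressed in the sensor frame, is what gets matched against the known body-fixed edge line when solving for the pose.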

Correction of Photometric Distortion of a Micro Camera-Projector System for Structured Light 3D Scanning

  • Park, Go-Gwang;Park, Soon-Yong
    • Journal of Sensor Science and Technology / v.21 no.2 / pp.96-102 / 2012
  • This paper addresses photometric distortion problems of a compact 3D scanning sensor that is composed of a micro-size, inexpensive camera-projector system. Recently, many micro-size cameras and projectors have become available. However, erroneous 3D scanning results may arise from the poor and nonlinear photometric properties of these sensors. This paper solves two inherent photometric distortions. First, the response functions of the camera and the projector are derived from the least-squares solutions of passive and active calibration, respectively. Second, vignetting correction of the vision camera is done using a conventional method, while the projector vignetting is corrected using the planar homography between the image planes of the projector and the camera. Experimental results show that the proposed technique enhances the linear properties of the phase patterns generated by the sensor.
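The projector-vignetting step relies on a planar homography between the projector and camera image planes. A minimal sketch of applying such a homography, e.g. to re-index per-pixel gain estimates from camera coordinates into projector coordinates (the 3x3 matrix `H` is assumed known from calibration):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 planar homography, including the
    homogeneous divide. pts is an (N, 2) array."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # lift to homogeneous
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # perspective divide
```

With a pure-translation homography, for example, every point is simply shifted; a general `H` also handles the perspective distortion between the two image planes.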

Hybrid Real-time Monitoring System Using 2D Vision and 3D Action Recognition (2D 비전과 3D 동작인식을 결합한 하이브리드 실시간 모니터링 시스템)

  • Lim, Jong Heon;Sung, Man Kyu;Lee, Joon Jae
    • Journal of Korea Multimedia Society / v.18 no.5 / pp.583-598 / 2015
  • Industrial products such as automobiles, which are built from many composite parts, require numerous assembly lines. A large portion of these assembly lines is still operated manually by human workers. Such manual work sometimes causes critical errors that lead to defective products. Also, once assembly is completed, it is very hard to verify whether or not the product contains an error. In this paper, to automatically monitor the behavior of manual human work on an assembly line, we propose a real-time hybrid monitoring system that combines a 2D vision sensor tracking technique with 3D motion recognition sensors.

Deburring of Irregular Burr using Vision and Force Sensors (비젼과 힘센서를 이용한 불균일 버의 디버링 가공)

  • Choi, G.J.;Kim, Y.W.;Shin, S.W.;Ahn, D.S.
    • Journal of Power System Engineering / v.2 no.3 / pp.83-88 / 1998
  • This paper presents an efficient control algorithm that removes irregular burrs using vision and force sensors. In automated robotic deburring, the reference force should be adapted to the burr profile in order to prevent tool breakage. In this paper, (1) the burr profile is recognized by a vision sensor, and the corresponding reference force is then calculated, and (2) a deburring expert's skill is transferred to the robot. Finally, the performance of the robot is evaluated through simulation and experiment.

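As a loose illustration of adapting the reference force to a vision-measured burr profile, one hypothetical mapping is to lower the commanded contact force where the burr is taller, so the tool is not overloaded. The linear law and every constant below are illustrative assumptions, not values from the paper:

```python
def reference_forces(burr_heights, f_nominal=10.0, k=2.5, f_min=2.0):
    """Hypothetical sketch: reduce the reference contact force (N) where
    the vision sensor reports a taller burr (mm), clamped to a safe
    minimum. The linear law and constants are illustrative only."""
    return [max(f_min, f_nominal - k * h) for h in burr_heights]
```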

Recent Trends and Prospects of 3D Content Using Artificial Intelligence Technology (인공지능을 이용한 3D 콘텐츠 기술 동향 및 향후 전망)

  • Lee, S.W.;Hwang, B.W.;Lim, S.J.;Yoon, S.U.;Kim, T.J.;Kim, K.N.;Kim, D.H.;Park, C.J.
    • Electronics and Telecommunications Trends / v.34 no.4 / pp.15-22 / 2019
  • Recent technological advances in three-dimensional (3D) sensing devices and in machine learning, such as deep learning, have enabled data-driven 3D applications. Research on artificial intelligence has advanced over the past few years, and 3D deep learning has been introduced. This is the result of the availability of high-quality big data, increases in computing power, and the development of new algorithms; before the introduction of 3D deep learning, the main targets of deep learning were one-dimensional (1D) audio files and two-dimensional (2D) images. The research field of deep learning has extended from discriminative models, such as classification, segmentation, and reconstruction models, to generative models, such as style transfer and the generation of non-existent data. Unlike 2D learning data, 3D learning data are not easy to acquire. Although low-cost 3D data acquisition sensors have become increasingly popular owing to advances in 3D vision technology, the generation and acquisition of 3D data are still very difficult. Even when 3D data can be acquired, post-processing remains a significant problem. Moreover, it is not easy to directly apply existing network models, such as convolutional networks, owing to the various ways in which 3D data are represented. In this paper, we summarize technological trends in AI-based 3D content generation.

Deep Learning Machine Vision System with High Object Recognition Rate using Multiple-Exposure Image Sensing Method

  • Park, Min-Jun;Kim, Hyeon-June
    • Journal of Sensor Science and Technology / v.30 no.2 / pp.76-81 / 2021
  • In this study, we propose a machine vision system with a high object recognition rate. By utilizing a multiple-exposure image sensing technique, the proposed deep learning-based machine vision system can cover a wide light intensity range without additional training over the various light intensity ranges. If the proposed machine vision system fails to recognize object features, it switches to a multiple-exposure sensing mode and detects the target object hidden in the nearly dark or overly bright region. Furthermore, short- and long-exposure images from the multiple-exposure sensing mode are synthesized to obtain accurate object feature information, which yields image information with a wide dynamic range. Even though the object recognition resources for the deep learning process covered a light intensity range of only 23 dB, the prototype machine vision system with the multiple-exposure imaging method demonstrated object recognition performance over a light intensity range of up to 96 dB.
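The short/long exposure synthesis step can be sketched as a simple per-pixel fusion: keep the long-exposure value where it is unsaturated, and substitute the gain-scaled short-exposure value where it clips. This is an assumed simplification of the paper's synthesis method; images are floats in [0, 1] and `gain` is the long/short exposure-time ratio:

```python
import numpy as np

def fuse_exposures(short_img, long_img, gain=4.0, threshold=0.9):
    """Sketch of multiple-exposure synthesis: use the long-exposure
    pixel where it is below the saturation threshold, otherwise fall
    back to the short-exposure pixel scaled by the exposure ratio."""
    saturated = long_img >= threshold
    return np.where(saturated, short_img * gain, long_img)
```

The fused result exceeds the [0, 1] range of either input, which is exactly the point: it carries a wider dynamic range than a single exposure.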

2D Map Generation Using Omnidirectional Image Sensor and Stereo Vision for Mobile Robot MAIRO (자율이동로봇 MAIRO의 전방향 이미지센서와 스테레오 비전 시스템을 이용한 2차원 지도 생성)

  • Kim, Kyung-Ho;Lee, Hyung-Kyu;Son, Young-Jun;Song, Jae-Keun
    • Proceedings of the KIEE Conference / 2002.11c / pp.495-500 / 2002
  • Recently, the service robot industry has stood out as an up-and-coming industry of the next generation. In particular, there is much research on self-steering movement (SSM). In order to implement SSM, a robot must effectively recognize its surroundings, detect objects, and build a map of the environment with its sensors. Therefore, many robots carry sonar, infrared, and similar sensors. However, these sensors only give information about the distance between the robot and an object, and their resolution is of inferior quality. In this paper, we introduce a new algorithm that recognizes objects around the robot and builds a two-dimensional map of the surroundings using an omnidirectional vision camera and two stereo vision cameras.

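The map-building step can be illustrated by accumulating range-and-bearing detections into a 2-D occupancy grid centered on the robot. The grid size and cell resolution below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def build_2d_map(detections, grid_size=20, cell=0.5):
    """Sketch: mark occupied cells of a 2D grid from (range, bearing)
    detections around a robot at the grid center. cell is meters/cell;
    bearing is in radians."""
    grid = np.zeros((grid_size, grid_size), dtype=int)
    cx = cy = grid_size // 2
    for r, theta in detections:
        x = r * np.cos(theta)
        y = r * np.sin(theta)
        i = cy + int(round(y / cell))   # row index from the y offset
        j = cx + int(round(x / cell))   # column index from the x offset
        if 0 <= i < grid_size and 0 <= j < grid_size:
            grid[i, j] = 1
    return grid
```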

STEREO VISION-BASED FORWARD OBSTACLE DETECTION

  • Jung, H.G.;Lee, Y.H.;Kim, B.J.;Yoon, P.J.;Kim, J.H.
    • International Journal of Automotive Technology / v.8 no.4 / pp.493-504 / 2007
  • This paper proposes a stereo vision-based forward obstacle detection and distance measurement method. In general, stereo vision-based obstacle detection methods in automotive applications can be classified into two categories: IPM (Inverse Perspective Mapping)-based and disparity histogram-based. The existing disparity histogram-based method was developed for stop-and-go applications. The proposed method extends the scope of the disparity histogram-based method to highway applications by 1) replacing the fixed rectangular ROI (Region Of Interest) with a traveling lane-based ROI, and 2) replacing peak detection with a constant threshold by peak detection using a threshold line and peakness evaluation. In order to increase the true positive rate while decreasing the false positive rate, multiple candidate peaks were generated and then verified by an edge feature correlation method. By testing the proposed method with images captured on the highway, it was shown that the proposed method overcomes problems in previous implementations while being applied successfully to highway collision warning/avoidance conditions. In addition, comparisons with laser radar showed that vision sensors with a wider FOV (Field Of View) provide faster responses to cutting-in vehicles. Finally, we integrated the proposed method into a longitudinal collision avoidance system. Experimental results showed that braking activated by risk assessment, using the state of the ego-vehicle and the measured distance to upcoming obstacles, could successfully prevent collisions.
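The disparity-histogram idea with a threshold line can be sketched as follows: histogram the disparities inside the ROI and keep the bins whose counts exceed a disparity-dependent threshold (nearer objects, i.e. larger disparities, occupy more pixels, so the threshold grows with disparity). The slope and offset of the threshold line here are illustrative, not the paper's tuned values:

```python
import numpy as np

def detect_obstacle_peaks(disparities, max_d=64):
    """Sketch of disparity-histogram obstacle detection: returns the
    disparity bins whose pixel counts rise above a linear threshold
    line, as candidate obstacle disparities."""
    hist, _ = np.histogram(disparities, bins=max_d, range=(0, max_d))
    d = np.arange(max_d)
    threshold_line = 5 + 0.5 * d   # assumed offset/slope, illustrative
    return np.where(hist > threshold_line)[0]
```

In the paper the surviving peaks are then verified by edge-feature correlation before a distance is reported.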

Calibration of VLP-16 Lidar Sensor and Vision Cameras Using the Center Coordinates of a Spherical Object (구형물체의 중심좌표를 이용한 VLP-16 라이다 센서와 비전 카메라 사이의 보정)

  • Lee, Ju-Hwan;Lee, Geun-Mo;Park, Soon-Yong
    • KIPS Transactions on Software and Data Engineering / v.8 no.2 / pp.89-96 / 2019
  • 360-degree 3-dimensional lidar sensors and vision cameras are commonly used in the development of autonomous driving techniques for automobiles, drones, etc. However, existing calibration techniques for obtaining the external transformation between the lidar and camera sensors have the disadvantage that special calibration objects are required or the object size is too large. In this paper, we introduce a simple calibration method between the two sensors using a spherical object. We calculate the sphere center coordinates using four 3-D points selected by RANSAC from the range data of the sphere. The 2-dimensional coordinates of the object center in the camera image are also detected to calibrate the two sensors. Even when the range data are acquired from various angles, the image of the spherical object always maintains a circular shape. The proposed method results in a reprojection error of about 2 pixels, and its performance is analyzed by comparison with existing methods.
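The sphere-fitting step admits a compact linear solution: for four non-coplanar points on a sphere, subtracting pairs of sphere equations eliminates the quadratic term and leaves a 3x3 linear system for the center. A minimal sketch of this step (the RANSAC selection of the four points is omitted):

```python
import numpy as np

def sphere_center(p):
    """Center of the sphere through four non-coplanar 3-D points.
    From |p_i - c|^2 = r^2, subtracting the equation for p_0 gives
    2 (p_i - p_0) . c = |p_i|^2 - |p_0|^2, a linear system in c."""
    p = np.asarray(p, dtype=float)
    A = 2.0 * (p[1:] - p[0])                        # 3x3 coefficient matrix
    b = (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
    return np.linalg.solve(A, b)
```

Inside a RANSAC loop, each random 4-point sample would be fed to this solver and scored by how many remaining range points lie near the resulting sphere.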