• Title/Summary/Keyword: Homography Transformation


Bolt-Loosening Detection using Vision-Based Deep Learning Algorithm and Image Processing Method (영상기반 딥러닝 및 이미지 프로세싱 기법을 이용한 볼트풀림 손상 검출)

  • Lee, So-Young;Huynh, Thanh-Canh;Park, Jae-Hyung;Kim, Jeong-Tae
    • Journal of the Computational Structural Engineering Institute of Korea / v.32 no.4 / pp.265-272 / 2019
  • In this paper, a vision-based deep learning algorithm and an image processing method are proposed to detect bolt-loosening in steel connections. To achieve this objective, the following approaches are implemented. First, a bolt-loosening detection method that combines a regional convolutional neural network (RCNN)-based deep learning algorithm and a Hough line transform (HLT)-based image processing algorithm is designed. The RCNN-based deep learning algorithm is developed to identify and crop bolts in a connection image. The HLT-based image processing algorithm is designed to estimate the bolt angles from the cropped bolt images. Then, the proposed vision-based method is evaluated for bolt-loosening detection in a lab-scale girder connection. The accuracy of the RCNN-based bolt detector and the HLT-based bolt angle estimator is examined under various perspective distortions.
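The angle-estimation step above reduces to measuring the orientation of a dominant line on the cropped bolt head and comparing it across inspections. A minimal NumPy sketch of that idea (the function names are illustrative, not the authors' code; the actual paper uses the Hough line transform to obtain the line endpoints):

```python
import numpy as np

def line_angle_deg(x1, y1, x2, y2):
    """Angle of a detected line segment in degrees, folded into [0, 180)."""
    theta = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    return theta % 180.0

def bolt_rotation(angle_before, angle_after):
    """Smallest rotation (degrees) between two line angles, accounting
    for the 180-degree ambiguity of an undirected line."""
    d = abs(angle_after - angle_before) % 180.0
    return min(d, 180.0 - d)
```

A significant `bolt_rotation` between two inspection images would flag a loosened bolt.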

Lane Model Extraction Based on Combination of Color and Edge Information from Car Black-box Images (차량용 블랙박스 영상으로부터 색상과 에지정보의 조합에 기반한 차선모델 추출)

  • Liang, Han;Seo, Suyoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.1 / pp.1-11 / 2021
  • This paper presents a procedure to extract lane line models using a set of proposed methods. Firstly, an image warping method based on homography is proposed to transform a target image into one in which lane pixels can be found efficiently within a certain region. Secondly, a method combining the results of edge detection and an HSL (Hue, Saturation, and Lightness) transform is proposed to detect lane candidate pixels reliably. Thirdly, erroneous candidate lane pixels are eliminated using a selection-area method. Fourthly, a method to fit lane pixels to quadratic polynomials is proposed. To test the validity of the proposed procedure, a set of black-box images captured under varying illumination and noise conditions was used. The experimental results show that the proposed procedure overcomes the problems of color-only and edge-only methods and extracts lane pixels and models the lane line geometry effectively in less than 0.6 seconds per frame on a low-cost computing environment.
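The fourth step, fitting lane pixels to a quadratic polynomial, can be sketched in a few lines of NumPy (a minimal illustration, not the authors' implementation; lanes are near-vertical in a warped road image, so x is modeled as a function of y):

```python
import numpy as np

def fit_lane(ys, xs):
    """Fit x = a*y^2 + b*y + c to candidate lane pixel coordinates.
    Returns the coefficients [a, b, c]."""
    return np.polyfit(ys, xs, deg=2)
```

Evaluating the returned polynomial over the image rows then gives the lane line model to draw back onto the frame.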

3-D Pose Estimation of an Elliptic Object Using Two Coplanar Points (두 개의 공면점을 활용한 타원물체의 3차원 위치 및 자세 추정)

  • Kim, Heon-Hui;Park, Kwang-Hyun;Ha, Yun-Su
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.4 / pp.23-35 / 2012
  • This paper presents a 3-D pose (position and orientation) estimation method for an elliptic object in 3-D space. It is difficult to resolve the problem of determining 3-D pose parameters with respect to an elliptic feature in 3-D space by interpretation of its projected feature onto an image plane. As an alternative, we propose a two points-based pose estimation algorithm to seek the 3-D information of an elliptic feature. The proposed algorithm determines a homogeneous transformation uniquely for a given correspondence set of an ellipse and two coplanar points that are defined on model and image plane, respectively. For each plane, two triangular features are extracted from an ellipse and two points based on the polarity in 2-D projection space. A planar homography is first estimated by the triangular feature correspondences, then decomposed into 3-D pose parameters. The proposed method is evaluated through a series of experiments for analyzing the errors of 3-D pose estimation and the sensitivity with respect to point locations.
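The final step described above, decomposing a planar homography into 3-D pose parameters, has a standard closed form when the camera intrinsics K are known and the world plane is Z = 0, so that H = s·K[r1 r2 t]. A minimal NumPy sketch of that textbook decomposition (the paper's own triangular-feature-based procedure may differ in detail):

```python
import numpy as np

def decompose_homography(H, K):
    """Recover rotation R and translation t from a plane-induced
    homography H = s * K [r1 r2 t] (world plane Z = 0)."""
    A = np.linalg.inv(K) @ H
    s = np.linalg.norm(A[:, 0])          # scale factor, since ||r1|| = 1
    r1, r2, t = A[:, 0] / s, A[:, 1] / s, A[:, 2] / s
    r3 = np.cross(r1, r2)                # complete the rotation basis
    R = np.column_stack([r1, r2, r3])
    # re-orthonormalize via SVD to get the closest true rotation
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t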
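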

View invariant image matching using SURF (SURF(speed up robust feature)를 이용한 시점변화에 강인한 영상 매칭)

  • Son, Jong-In;Kang, Minsung;Sohn, Kwanghoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.07a / pp.222-225 / 2011
  • Image matching is one of the important fundamental techniques in computer vision. However, finding correspondences robust to scale, rotation, illumination, and viewpoint change is not an easy task. To address this problem, the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) algorithms were proposed, but they remain weak against viewpoint change. In this paper, we propose an algorithm that is robust to viewpoint change. For view-invariant image matching, a homography transformation computed from highly similar feature points between the original image and the query image is used to rectify the query image to resemble the original image before matching, yielding an algorithm robust to viewpoint change. By comparing performance and execution time against the existing SIFT and SURF over several images with changed viewpoints, the superiority of the proposed algorithm is demonstrated.
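The rectification step above hinges on applying the estimated homography to image coordinates. A minimal NumPy sketch of that projective mapping (illustrative; a full implementation would warp the whole query image, e.g. with OpenCV's warpPerspective):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]              # back to inhomogeneous
```

After warping, descriptor matching between the rectified query and the original image faces a much smaller perspective difference.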

Gaze Tracking Using a Modified Starburst Algorithm and Homography Normalization (수정 Starburst 알고리즘과 Homography Normalization을 이용한 시선추적)

  • Cho, Tai-Hoon;Kang, Hyun-Min
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.5 / pp.1162-1170 / 2014
  • In this paper, an accurate remote gaze tracking method using two cameras is presented, based on a modified Starburst algorithm and homography normalization. The Starburst algorithm, originally developed for head-mounted systems, often fails to detect accurate pupil centers in remote tracking systems with a larger field of view because of heavy noise. A region of interest for the pupil is found using template matching, and the Starburst algorithm is applied only within this area to yield pupil boundary candidate points. These are used in improved RANSAC ellipse fitting to produce the pupil center. For gaze estimation robust to head movement, an improved homography normalization using four LEDs and a calibration based on high-order polynomials is proposed. Finally, it is shown that the accuracy and robustness of the system are improved by using two cameras rather than one.
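Homography normalization with four LEDs amounts to estimating the homography between the four observed glint positions and the four known screen corners, then mapping the pupil position through it. The four-point estimate is exactly the direct linear transform (DLT); a minimal NumPy sketch (illustrative, not the paper's code):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping 4+ point pairs src -> dst
    via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)         # null vector = stacked homography
    return H / H[2, 2]
```

With the four LED glints as `src` and the screen corners as `dst`, mapping the pupil center through the resulting H normalizes the gaze point against head movement.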

Feature Matching Algorithm Robust To Viewpoint Change (시점 변화에 강인한 특징점 정합 기법)

  • Jung, Hyun-jo;Yoo, Ji-sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.12 / pp.2363-2371 / 2015
  • In this paper, we propose a new feature matching algorithm that is robust to viewpoint change, using the FAST (Features from Accelerated Segment Test) feature detector and the SIFT (Scale Invariant Feature Transform) feature descriptor. The original FAST algorithm unnecessarily produces many feature points along edges in the image; to solve this problem, we refine them using the principal curvatures. We describe the extracted feature points with the SIFT descriptor and calculate the homography matrix through RANSAC (RANdom SAmple Consensus) with the matching pairs obtained from the two different viewpoint images. To make feature matching robust to viewpoint change, we classify the matching pairs by calculating the Euclidean distance between the reference-image feature points transformed by the homography and the corresponding feature points in the other viewpoint image. The experimental results show that the proposed algorithm outperforms conventional feature matching algorithms while requiring a much lower computational load.
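The classification step described above, transforming reference points by the homography and thresholding the Euclidean distance to their matches, can be sketched as follows (a minimal NumPy illustration with a hypothetical 3-pixel threshold; the paper's actual threshold is not stated here):

```python
import numpy as np

def classify_matches(H, ref_pts, qry_pts, thresh=3.0):
    """Mark a match as inlier when the homography-transformed reference
    point falls within `thresh` pixels of its query-image counterpart."""
    ones = np.ones((len(ref_pts), 1))
    proj = (H @ np.hstack([ref_pts, ones]).T).T
    proj = proj[:, :2] / proj[:, 2:3]            # dehomogenize
    dist = np.linalg.norm(proj - qry_pts, axis=1)
    return dist < thresh
```

Matches rejected here are the pairs inconsistent with the dominant planar viewpoint change.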

Vision based 3D Hand Interface Using Virtual Two-View Method (가상 양시점화 방법을 이용한 비전기반 3차원 손 인터페이스)

  • Bae, Dong-Hee;Kim, Jin-Mo
    • Journal of Korea Game Society / v.13 no.5 / pp.43-54 / 2013
  • With the steady development of 3D application techniques, visuals of more realistic quality have become available and are utilized in many applications such as games. In particular, for interacting with 3D objects in virtual environments, 3D graphics have driven substantial development in augmented reality. This study proposes a 3D user interface to control objects in 3D space through a virtual two-view method using only one camera. To do so, the homography matrix containing the transformation between two arbitrary camera positions is calculated, and 3D coordinates are reconstructed from the 2D hand coordinates obtained from the single camera, the homography matrix, and the projection matrix of the camera. This method yields more accurate 3D information quickly. The approach is advantageous in that it requires less calculation by using one camera rather than two, is effective for real-time processing, and is economically efficient.
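Once two (real or virtual) views with known projection matrices are available, the 3D reconstruction step is commonly done by linear (DLT) triangulation. A minimal NumPy sketch of that standard building block (the paper's virtual two-view reconstruction may differ in detail; `P1`, `P2`, and the pixel coordinates are illustrative):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation: recover the 3-D point seen at pixel x1 in view
    P1 and at pixel x2 in view P2 (each P is a 3x4 projection matrix)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector = homogeneous 3-D point
    X = Vt[-1]
    return X[:3] / X[3]
```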

Enhancement on 3 DoF Image Stitching Using Inertia Sensor Data (관성 센서 데이터를 활용한 3 DoF 이미지 스티칭 향상)

  • Kim, Minwoo;Kim, Sang-Kyun
    • Journal of Broadcast Engineering / v.22 no.1 / pp.51-61 / 2017
  • This paper proposes a method to generate panoramic images by combining conventional feature extraction algorithms (e.g., SIFT, SURF, MPEG-7 CDVS) with data from an inertia sensor to enhance the stitching results. The challenge of image stitching increases when the images are taken by two different mobile phones with no posture calibration. Using the inertia sensor data obtained by the mobile phones, images with different yaw, pitch, and roll angles are preprocessed and adjusted before the stitching process. The stitching performance (e.g., feature extraction time, number of inlier points, stitching accuracy) of the conventional feature extraction algorithms is reported, along with the stitching performance with and without the inertia sensor data.
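Adjusting for a pure camera rotation before stitching is itself a homography: for intrinsics K and rotation R built from the sensed yaw/pitch/roll, the compensating warp is H = K·R·K⁻¹. A minimal NumPy sketch under an assumed Ry·Rx·Rz angle convention (the paper's exact convention and preprocessing may differ):

```python
import numpy as np

def rotation_homography(K, yaw, pitch, roll):
    """Homography induced by a pure camera rotation (angles in radians),
    usable to pre-align two phone images before stitching: H = K R K^-1."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw
    R = Ry @ Rx @ Rz
    return K @ R @ np.linalg.inv(K)
```

Warping one image by this H removes the rotational difference, so the feature matcher only has to handle the residual misalignment.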

Object Width Measurement System Using Light Sectioning Method (광절단법을 이용한 물체 크기 측정 시스템)

  • Lee, Byeong-Ju;Kang, Hyun-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.3 / pp.697-705 / 2014
  • This paper presents a vision-based object width measurement method employing the light sectioning method, and its application. The measurement target is a tread, the outermost component of an automobile tire. The entire measurement system consists of two processes: a calibration process and a detection process. The calibration process identifies the relationship between the camera plane and the laser plane and estimates the camera lens distortion parameters; it requires an elaborately manufactured test pattern, namely a jig. In the detection process, the region illuminated by the laser light is first extracted by applying an adaptive thresholding technique, in which the distribution of pixel brightness is considered to decide the optimal threshold. A thinning algorithm is then applied to the region so that the ends and the shoulders of the tread are detected. Finally, the tread width and the shoulder width are computed using the homography and the distortion coefficients obtained in the calibration process.
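The final width computation reduces to mapping the two detected end pixels through the calibration homography (image plane to laser plane, in metric units) and measuring their distance. A minimal NumPy sketch of that step (illustrative; the paper additionally undoes lens distortion first, which is omitted here):

```python
import numpy as np

def metric_width(H, p_left, p_right):
    """Map two detected end pixels through the calibration homography
    (image plane -> laser plane, metric units) and return the
    Euclidean distance between the mapped points."""
    pts = np.array([[*p_left, 1.0], [*p_right, 1.0]]).T
    w = H @ pts
    w = w[:2] / w[2]                       # dehomogenize both columns
    return float(np.linalg.norm(w[:, 0] - w[:, 1]))
```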

Estimating Geometric Transformation of Planar Pattern in Spherical Panoramic Image (구면 파노라마 영상에서의 평면 패턴의 기하 변환 추정)

  • Kim, Bosung;Park, Jong-Seung
    • Journal of KIISE / v.42 no.10 / pp.1185-1194 / 2015
  • A spherical panoramic image does not conform to the pin-hole camera model, and hence previous techniques based on plane-to-plane transformation cannot be applied. In this paper, we propose a new method to estimate the planar geometric transformation between a planar image and a spherical panoramic image. The proposed method estimates the transformation parameters for latitude, longitude, rotation, and scaling factors when matching pairs between a spherical panoramic image and a planar image are given. A planar image is projected into a spherical panoramic image through two steps of nonlinear coordinate transformation, which makes it difficult to compute the geometric transformation. The advantage of our method is that it uncovers each of the implicit factors as well as the overall transformation. The experimental results show that the proposed method achieves estimation errors of around 1% and is not affected by deformation factors such as latitude and rotation.
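The first of the two nonlinear steps, taking a planar pixel to the (longitude, latitude) it hits on the panorama's unit sphere, can be sketched as follows (a minimal NumPy illustration of the general spherical projection, with an assumed axis convention; the paper's exact parameterization may differ):

```python
import numpy as np

def ray_to_latlon(K, u, v):
    """Back-project pixel (u, v) to a viewing ray and express it as the
    (longitude, latitude) it meets on the unit sphere of a panorama."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    d /= np.linalg.norm(d)                 # unit direction vector
    lon = np.arctan2(d[0], d[2])           # azimuth around the vertical axis
    lat = np.arcsin(d[1])                  # elevation from the equator
    return lon, lat
```

Scaling (lon, lat) by the panorama's pixels-per-radian then yields equirectangular image coordinates, which is the second step of the projection.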