• Title/Summary/Keyword: homography transformation

29 search results

A New Shape-Based Object Category Recognition Technique using Affine Category Shape Model (Affine Category Shape Model을 이용한 형태 기반 범주 물체 인식 기법)

  • Kim, Dong-Hwan;Choi, Yu-Kyung;Park, Sung-Kee
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.3
    • /
    • pp.185-191
    • /
    • 2009
  • This paper presents a new shape-based algorithm using an affine category shape model for object category recognition and model learning. The affine category shape model is a graph of interconnected nodes whose geometric interactions are modeled with pairwise potentials. In the learning phase, it efficiently handles large pose variations of objects in the training images by estimating the 2-D homography transformation between the model and each training image. Since the pairwise potentials are defined only on the relative geometric relationship between features, the proposed matching algorithm is translation and in-plane rotation invariant and robust to affine transformation. A spectral matching algorithm finds feature correspondences, which serve as initial correspondences for RANSAC. The 2-D homography transformation and the inlier correspondences consistent with it are estimated efficiently through RANSAC, and new correspondences can then be detected using the estimated transformation. Experimental results on an object category database show that the proposed algorithm is robust to pose variation and provides good recognition performance.

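The 2-D homography estimation at the heart of this abstract can be sketched with the direct linear transform (DLT). A minimal numpy illustration with made-up correspondences (the paper itself combines spectral matching with RANSAC; only the plain linear solve on clean matches is shown here):

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 H mapping src -> dst.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # From u = (h1 x + h2 y + h3) / (h7 x + h8 y + h9), cross-multiplied.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the null vector of A (singular vector of the smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Apply H to (N, 2) points and dehomogenize."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]

# Illustrative data: a known projective warp of the unit square.
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.05, 0.9, -3.0],
                   [1e-3, 2e-3, 1.0]])
src = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0.5, 0.5]])
dst = apply_homography(H_true, src)
H = estimate_homography(src, dst)
print(np.allclose(H, H_true, atol=1e-6))
```

In the paper's pipeline, RANSAC would repeat a solve like this on random minimal subsets and keep the estimate with the most inliers.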

Ground Plane Detection Using Homography Matrix (호모그래피행렬을 이용한 노면검출)

  • Lee, Ki-Yong;Lee, Joon-Woong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.10
    • /
    • pp.983-988
    • /
    • 2011
  • This paper presents a robust method for ground plane detection in vision-based applications, using a monocular image sequence from a non-stationary camera. The method is based on reliable estimation of the homography between two frames of the sequence and aims at a practical system for detecting the road surface in traffic scenes. The homography is computed with a feature matching approach, which often yields inaccurate matches or matches that lie off the ground plane. The proposed estimation therefore minimizes the effect of erroneous matches by evaluating the difference between the predicted and the observed match positions. The method is successfully demonstrated on road surface detection experiments, where it fills the information voids that arise when a geometric transformation is applied to images captured by an in-vehicle camera system.
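The consistency test the abstract describes, comparing predicted and observed match positions under the ground-plane homography, can be sketched as follows; the homography and matches below are illustrative values, not from the paper:

```python
import numpy as np

def transfer(H, pts):
    """Map (N, 2) points through homography H and dehomogenize."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]

def ground_plane_inliers(H, pts_prev, pts_curr, thresh=2.0):
    """Matches consistent with the ground-plane homography H.

    A match (p, p') is on the ground plane when the predicted
    position H p is close to the observed position p'.
    """
    err = np.linalg.norm(transfer(H, pts_prev) - pts_curr, axis=1)
    return err < thresh

# Illustrative ground-plane homography between two frames.
H = np.array([[1.0, 0.02, 3.0],
              [0.0, 1.05, -8.0],
              [0.0, 1e-4, 1.0]])
prev_pts = np.array([[100., 400.], [300., 420.], [500., 380.], [250., 100.]])
curr_pts = transfer(H, prev_pts)
curr_pts[3] += 25.0   # last match moves off the plane (e.g., an obstacle)
mask = ground_plane_inliers(H, prev_pts, curr_pts)
print(mask)
```

Matches failing this test are either mismatches or off-plane structure, which is exactly what the paper wants to exclude from road-surface estimation.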

Geometrical Reorientation of Distorted Road Sign using Projection Transformation for Road Sign Recognition (도로표지판 인식을 위한 사영 변환을 이용한 왜곡된 표지판의 기하교정)

  • Lim, Hee-Chul;Deb, Kaushik;Jo, Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.11
    • /
    • pp.1088-1095
    • /
    • 2009
  • In this paper, we describe a method for reorienting distorted road signs using projection transformation to improve the recognition rate. RSR (Road Sign Recognition) is one of the most important topics for driver assistance in intelligent transportation systems using pattern recognition and vision technology. A RS (Road Sign) carries road information such as road directions, place names, and intersections. We acquire input images from a camera mounted on the vehicle; however, road signs often appear rotated, skewed, and distorted by camera perspective. To recover the correct road sign despite these problems, projection transformation maps 4 points in image coordinates to 4 points in world coordinates. The 4 vertex points are obtained from the trajectory of the distance from the mass center to the boundary of the object. The candidate road sign areas are then rectified from the distorted image using the homography transformation matrix. The internal information of the reoriented sign is segmented into arrows and the corresponding place names: the arrow area is the largest labeled region, and the number of place-name groups equals the number of arrowheads. Characters of the road sign are segmented using vertical and horizontal histograms, and each character is recognized using SAD (Sum of Absolute Difference). Experiments show higher recognition results with the proposed reorientation than without it.
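The 4-point mapping from image to world coordinates described above admits an exact solve, since four correspondences determine the 8 degrees of freedom of a homography. A minimal sketch with hypothetical corner coordinates (fixing the last homography entry to 1):

```python
import numpy as np

def four_point_homography(src, dst):
    """Exact 8-DoF homography from 4 correspondences (h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h1 x + h2 y + h3) / (h7 x + h8 y + 1), cross-multiplied.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    u, v, w = H @ [x, y, 1.0]
    return u / w, v / w

# Illustrative: 4 distorted sign corners -> canonical 200 x 100 rectangle.
corners = np.array([[312., 180.], [505., 168.], [518., 289.], [300., 270.]])
target  = np.array([[0., 0.], [200., 0.], [200., 100.], [0., 100.]])
H = four_point_homography(corners, target)
print(np.allclose([warp_point(H, *c) for c in corners], target))
```

Warping every pixel of the candidate region through this H produces the fronto-parallel sign image that the arrow and character segmentation steps operate on.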

Fire Detection Algorithm for a Quad-rotor using Ego-motion Compensation (Ego-Motion 보정기법을 적용한 쿼드로터의 화재 감지 알고리즘)

  • Lee, Young-Wan;Kim, Jin-Hwang;Oh, Jeong-Ju;Kim, Hakil
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.1
    • /
    • pp.21-27
    • /
    • 2015
  • Conventional fire detection has been developed for images captured from a fixed camera, so current algorithms are difficult to apply to fire detection from a flying quad-rotor. To solve this problem, we propose modifying the fire detection algorithm for a quad-rotor using ego-motion compensation. The proposed algorithm consists of color detection, motion detection, and fire determination using a randomness test. Color detection and the randomness test are adapted from an existing algorithm, while ego-motion compensation is added to the motion detection step to compensate for the quad-rotor's motion, using planar projective transformation based on optical flow, the RANSAC algorithm, and homography. With ego-motion compensation in the motion detection step, the proposed algorithm detected fires 83% of the time in hovering mode.
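Ego-motion compensation at the motion-detection step amounts to warping the previous frame by the estimated homography before frame differencing. A toy numpy sketch, with a pure-translation homography standing in for the ego-motion estimated from optical flow and RANSAC:

```python
import numpy as np

def warp_image(img, H):
    """Backward-warp img by homography H (nearest-neighbour sampling).

    Output pixel (x, y) takes its value from img at H^-1 (x, y, 1)^T.
    """
    h, w = img.shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    p = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    q = Hinv @ p
    u = np.rint(q[0] / q[2]).astype(int)
    v = np.rint(q[1] / q[2]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.zeros_like(img)
    out.reshape(-1)[ok] = img[v[ok], u[ok]]
    return out

# Previous frame: a static bright object on a dark background.
prev = np.zeros((8, 8))
prev[2:4, 2:4] = 1.0
# Current frame: the same static scene after the quad-rotor moved,
# i.e. everything shifted 2 px right and 1 px down.
curr = np.zeros((8, 8))
curr[3:5, 4:6] = 1.0
# Ego-motion estimated between the frames (pure-translation homography).
H = np.array([[1., 0., 2.], [0., 1., 1.], [0., 0., 1.]])
# Compensating the previous frame removes the camera motion, so the
# static object is not flagged as moving by frame differencing.
residual = np.abs(curr - warp_image(prev, H))
print(residual.max())
```

Without the warp, the shifted object would produce a spurious motion response everywhere it moved, which is the failure mode the paper is correcting.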

Visible Light and Infrared Thermal Image Registration Method Using Homography Transformation (호모그래피 변환을 이용한 가시광 및 적외선 열화상 영상 정합)

  • Lee, Sang-Hyeop;Park, Jang-Sik
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.24 no.6_2
    • /
    • pp.707-713
    • /
    • 2021
  • Symptoms of foot-and-mouth disease include fever, heavy drooling, blisters in the mouth, poor appetite, and blisters around the hooves. Research is underway on smart barns that remotely monitor these symptoms through cameras. Visible light cameras can observe the condition of livestock, such as blisters, but cannot measure body temperature; infrared thermal imaging cameras can measure body temperature, but it is difficult to assess the condition of livestock with them. In this paper, we propose an object detection system for preemptive response that uses deep-learning-based livestock detection with a combined visible and infrared thermal imaging camera module.
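Once a visible-to-thermal homography has been calibrated for the composite camera module, a detection box from the visible image can be transferred into thermal coordinates to read out temperature. A minimal sketch; the calibration matrix and box below are made-up values, not from the paper:

```python
import numpy as np

def map_box(H, box):
    """Map an axis-aligned box (x0, y0, x1, y1) from the visible image into
    the thermal image through homography H, returning the axis-aligned
    bounding box of the four warped corners."""
    x0, y0, x1, y1 = box
    corners = np.array([[x0, y0, 1], [x1, y0, 1],
                        [x1, y1, 1], [x0, y1, 1]], float)
    q = corners @ H.T
    xy = q[:, :2] / q[:, 2:3]
    return xy.min(axis=0), xy.max(axis=0)

# Illustrative calibration: the thermal image is a scaled, shifted view of
# the visible one (a real H would be estimated from matched targets).
H = np.array([[0.5, 0.0, 10.0],
              [0.0, 0.5, 5.0],
              [0.0, 0.0, 1.0]])
visible_box = (100., 80., 180., 160.)   # detected animal in the visible image
(tx0, ty0), (tx1, ty1) = map_box(H, visible_box)
print(tx0, ty0, tx1, ty1)
```

The thermal pixels inside the mapped box can then be reduced (e.g. max) to a body-temperature reading for the detected animal.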

Feature Matching Algorithm Robust To Viewpoint Change (시점 변화에 강인한 특징점 정합 기법)

  • Jung, Hyun-jo;Yoo, Ji-sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.12
    • /
    • pp.2363-2371
    • /
    • 2015
  • In this paper, we propose a new feature matching algorithm that is robust to viewpoint change, using the FAST (Features from Accelerated Segment Test) feature detector and the SIFT (Scale Invariant Feature Transform) feature descriptor. The original FAST algorithm produces many unnecessary feature points along edges in the image; to solve this problem, we refine the candidates using the principal curvatures. The extracted feature points are described with the SIFT descriptor, and the homography matrix is calculated through RANSAC (RANdom SAmple Consensus) from the matching pairs obtained from the two viewpoint images. To make feature matching robust to viewpoint change, we classify the matching pairs by the Euclidean distance between the coordinates obtained by applying the homography transformation to feature points in the reference image and the coordinates of the corresponding feature points in the other viewpoint image. Experimental results show that the proposed algorithm outperforms conventional feature matching algorithms while requiring much less computation.
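The principal-curvature refinement of FAST corners mentioned above is commonly implemented like SIFT's edge-response test, thresholding the ratio of the eigenvalues of the local 2x2 Hessian; assuming that is the test intended here, a small sketch on a synthetic patch:

```python
import numpy as np

def edge_response_filter(img, pts, r=10.0):
    """Discard candidate corners lying on edges via principal curvatures.

    Keeps a point only if Tr(H)^2 / Det(H) < (r+1)^2 / r, where H is the
    2x2 Hessian of the intensity surface (as in SIFT's edge-response test).
    """
    keep = []
    for x, y in pts:
        # Hessian entries from central finite differences.
        dxx = img[y, x + 1] - 2 * img[y, x] + img[y, x - 1]
        dyy = img[y + 1, x] - 2 * img[y, x] + img[y - 1, x]
        dxy = (img[y + 1, x + 1] - img[y + 1, x - 1]
               - img[y - 1, x + 1] + img[y - 1, x - 1]) / 4.0
        tr, det = dxx + dyy, dxx * dyy - dxy * dxy
        if det > 0 and tr * tr / det < (r + 1) ** 2 / r:
            keep.append((x, y))
    return keep

# Illustrative 9x9 patch: one blob-like corner and one straight edge.
img = np.zeros((9, 9))
img[4, 4] = 1.0      # isolated peak -> similar curvature in x and y: kept
img[:, 7] = 1.0      # vertical edge -> curvature in x only: rejected
print(edge_response_filter(img, [(4, 4), (7, 4)]))
```

An edge point has one large and one near-zero principal curvature, so its trace-squared-over-determinant ratio blows up (or the determinant is non-positive) and it is discarded.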

Vision based 3D Hand Interface Using Virtual Two-View Method (가상 양시점화 방법을 이용한 비전기반 3차원 손 인터페이스)

  • Bae, Dong-Hee;Kim, Jin-Mo
    • Journal of Korea Game Society
    • /
    • v.13 no.5
    • /
    • pp.43-54
    • /
    • 2013
  • With the steady development of 3D application techniques, visuals of more realistic quality are available and are utilized in many applications, such as games. In particular, for interacting with 3D objects in virtual environments, 3D graphics have driven substantial development in augmented reality. This study proposes a 3D user interface for controlling objects in 3D space through a virtual two-view method using only one camera. A homography matrix encoding the transformation between two arbitrary camera positions is calculated, and 3D coordinates are reconstructed from the 2D hand coordinates obtained from the single camera, together with the homography matrix and the camera's projection matrix. This yields more accurate and faster 3D information. The approach reduces the amount of computation compared with using two cameras and is effective for real-time processing while remaining economical.
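Reconstructing 3D coordinates from the observed hand position plus a virtual second view reduces to two-view triangulation. A minimal linear (DLT) triangulation sketch with illustrative projection matrices; the second camera here is a simple translated stand-in for the virtual view derived from the homography:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points.
    """
    # Each view contributes two rows of the homogeneous system A X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Illustrative cameras: identity view and a view shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
x1 = X_true[:2] / X_true[2]                     # projection in view 1
x2 = (X_true[:2] + [-1.0, 0.0]) / X_true[2]     # projection in view 2
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))
```

With the virtual two-view method, x2 is not observed by a second physical camera but synthesized by transferring x1 through the precomputed homography, which is where the saving in hardware and computation comes from.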

Lane Detection Based on Inverse Perspective Transformation and Machine Learning in Lightweight Embedded System (경량화된 임베디드 시스템에서 역 원근 변환 및 머신 러닝 기반 차선 검출)

  • Hong, Sunghoon;Park, Daejin
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.17 no.1
    • /
    • pp.41-49
    • /
    • 2022
  • This paper proposes a novel lane detection algorithm based on inverse perspective transformation and machine learning in a lightweight embedded system. The inverse perspective transformation obtains a bird's-eye view of the scene from a perspective image, removing perspective effects; it requires only the internal and external parameters of the camera, without an 8-degree-of-freedom (DoF) homography matrix mapping points in one image to corresponding points in the other. To improve the accuracy and speed of lane detection in complex road environments, a simple first classifier is applied in the bird's-eye-view image to determine candidate lane regions before the machine learning stage; regions that pass the first classifier are then detected more accurately by the machine learning algorithm. The system has been tested on vehicle driving video in an embedded system. The experimental results show that the proposed method works well in various road environments and meets real-time requirements: lane detection is about 3.85 times faster than edge-based lane detection, with better detection accuracy.
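The claim that only the camera's internal and external parameters are needed follows because, for ground-plane points (Z = 0), the projection collapses to a 3x3 homography built directly from K, R and t, whose inverse is the bird's-eye mapping. A numpy sketch with an illustrative calibration (all numbers are made up):

```python
import numpy as np

def ground_to_image_homography(K, R, t):
    """Homography mapping ground-plane points (X, Y, 0) to pixel coords.

    Projecting (X, Y, 0): x ~ K (R (X, Y, 0)^T + t) = K [r1 r2 t] (X, Y, 1)^T,
    so no separate 8-DoF homography estimation is needed -- only K, R, t.
    """
    return K @ np.column_stack([R[:, 0], R[:, 1], t])

def dehom(p):
    return p[:2] / p[2]

# Illustrative calibration: camera 1.5 m above the road, pitched down 20 deg.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
a = np.deg2rad(110.0)                    # 90 deg (horizontal) + 20 deg pitch
R = np.array([[1., 0., 0.],
              [0., np.cos(a), -np.sin(a)],
              [0., np.sin(a), np.cos(a)]])
C = np.array([0., 0., 1.5])              # camera centre above the ground Z = 0
t = -R @ C
H = ground_to_image_homography(K, R, t)

ground_pt = np.array([0.5, 8.0])         # point on the road, metres
pixel = dehom(H @ np.append(ground_pt, 1.0))
# The inverse homography is the bird's-eye view: road coords from pixels.
back = dehom(np.linalg.inv(H) @ np.append(pixel, 1.0))
print(np.allclose(back, ground_pt))
```

Remapping every pixel through the inverse of H yields the bird's-eye image in which lanes appear as near-parallel vertical stripes, simplifying the first classifier.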

Computer vision-based remote displacement monitoring system for in-situ bridge bearings robust to large displacement induced by temperature change

  • Kim, Byunghyun;Lee, Junhwa;Sim, Sung-Han;Cho, Soojin;Park, Byung Ho
    • Smart Structures and Systems
    • /
    • v.30 no.5
    • /
    • pp.521-535
    • /
    • 2022
  • Efficient management of deteriorating civil infrastructure is one of the most important research topics in many developed countries. In particular, the remote displacement measurement of bridges using linear variable differential transformers, global positioning systems, laser Doppler vibrometers, and computer vision technologies has been attempted extensively. This paper proposes a remote displacement measurement system using closed-circuit televisions (CCTVs) and a computer-vision-based method for in-situ bridge bearings having relatively large displacement due to temperature change in long term. The hardware of the system is composed of a reference target for displacement measurement, a CCTV to capture target images, a gateway to transmit images via a mobile network, and a central server to store and process transmitted images. The usage of CCTV capable of night vision capture and wireless data communication enables long-term 24-hour monitoring on a wide range of bridge area. The computer vision algorithm to estimate displacement from the images involves image preprocessing for enhancing the circular features of the target, circular Hough transformation for detecting circles on the target in the whole field-of-view (FOV), and homography transformation for converting the movement of the target in the images into an actual expansion displacement. The simple target design and robust circle detection algorithm help to measure displacement using target images where the targets are far apart from each other. The proposed system is installed at the Tancheon Overpass located in Seoul, and field experiments are performed to evaluate the accuracy of circle detection and displacement measurements. The circle detection accuracy is evaluated using 28,542 images captured from 71 CCTVs installed at the testbed, and only 48 images (0.168%) fail to detect the circles on the target because of subpar imaging conditions. The accuracy of displacement measurement is evaluated using images captured for 17 days from three CCTVs; the average and root-mean-square errors are 0.10 and 0.131 mm, respectively, compared with a similar displacement measurement. The long-term operation of the system, as evaluated using 8-month data, shows high accuracy and stability of the proposed system.
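The final conversion from target motion in the images to physical displacement can be sketched as mapping detected circle centres through a calibrated pixel-to-millimetre homography and differencing; the matrix and centre coordinates below are made-up illustrative values:

```python
import numpy as np

def pixels_to_plane(H, pts):
    """Map (N, 2) pixel coordinates into the target's metric plane (mm)."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]

# Illustrative pixel -> mm homography, as would be calibrated once from
# the known spacing of the circles on the reference target.
H = np.array([[0.21, 0.004, -30.0],
              [-0.003, 0.20, -18.0],
              [0.0, 0.0, 1.0]])

# Target centre detected by the circular Hough transform in two images
# captured at different times.
centres_px = np.array([[412.3, 260.1],
                       [418.1, 260.4]])
mm = pixels_to_plane(H, centres_px)
displacement = np.linalg.norm(mm[1] - mm[0])   # expansion displacement, mm
print(round(displacement, 3))
```

Because the homography is calibrated per camera, the same pipeline works for all 71 CCTVs regardless of each camera's distance and angle to its target.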

Estimating Geometric Transformation of Planar Pattern in Spherical Panoramic Image (구면 파노라마 영상에서의 평면 패턴의 기하 변환 추정)

  • Kim, Bosung;Park, Jong-Seung
    • Journal of KIISE
    • /
    • v.42 no.10
    • /
    • pp.1185-1194
    • /
    • 2015
  • A spherical panoramic image does not conform to the pin-hole camera model, so previous techniques based on plane-to-plane transformation cannot be applied. In this paper, we propose a new method to estimate the planar geometric transformation between a planar image and a spherical panoramic image. Given matching pairs between the two images, the proposed method estimates the transformation parameters for latitude, longitude, rotation, and scaling. A planar image is projected into a spherical panoramic image through two steps of nonlinear coordinate transformation, which makes the geometric transformation difficult to compute; the advantage of our method is that it recovers each of the implicit factors as well as the overall transformation. Experimental results show that the proposed method achieves estimation errors of around 1% and is not affected by deformation factors such as latitude and rotation.
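One of the nonlinear coordinate transformations involved, mapping a 3-D viewing direction to (longitude, latitude) and on to equirectangular pixel coordinates, can be sketched as follows (the panorama size, axis convention, and point are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def ray_to_equirect(d, width, height):
    """Map a 3-D viewing direction to equirectangular pixel coordinates.

    Longitude (azimuth) spans [-pi, pi] across the image width and
    latitude (elevation) spans [-pi/2, pi/2] across the height.
    """
    d = d / np.linalg.norm(d)
    lon = np.arctan2(d[0], d[2])     # azimuth about the vertical axis
    lat = np.arcsin(d[1])            # elevation above the horizontal plane
    u = (lon / np.pi + 1.0) * 0.5 * width
    v = (lat / (np.pi / 2) + 1.0) * 0.5 * height
    return u, v

# Illustrative: a point of a planar pattern in front of the camera,
# treated as a ray and projected into a 2000 x 1000 spherical panorama.
plane_pt = np.array([0.4, -0.1, 2.0])    # 3-D point on the planar pattern
u, v = ray_to_equirect(plane_pt, 2000, 1000)
print(round(u, 1), round(v, 1))
```

Because pixel position depends on the arctangent and arcsine of the ray, straight lines of the planar pattern become curves in the panorama, which is why a plane-to-plane homography cannot model the mapping directly.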