• Title/Summary/Keyword: Homography


Feature Based Multi-Resolution Registration of Blurred Images for Image Mosaic

  • Fang, Xianyong;Luo, Bin;He, Biao;Wu, Hao
    • International Journal of CAD/CAM
    • /
    • v.9 no.1
    • /
    • pp.37-46
    • /
    • 2010
  • Existing methods for registering blurred images work well for artificially blurred images or purely planar registration, but are not suited to the naturally blurred images that arise in real image mosaicking. In this paper, we address this problem and propose a method for distortion-free stitching of naturally blurred images for image mosaic. It combines a multi-resolution scheme with robust feature-based inter-layer registration. In each layer, the Harris corner detector is used to detect features effectively, and RANSAC finds reliable matches for further calibration as well as an initial homography that serves as the initial motion for the next layer. Simplex and subspace trust-region methods are then applied in turn to estimate a stable focal length and rotation matrix from the transformation properties of the feature matches. To stitch multiple images together, an iterative registration strategy is also adopted to estimate the focal length of each image. Experimental results demonstrate the performance of the proposed method.
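
The RANSAC step described in this abstract is a standard technique; the following is a minimal pure-NumPy sketch of it (direct linear transform on random 4-point samples, then inlier counting), not the authors' implementation. The iteration count and inlier threshold are arbitrary assumptions.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform (DLT): solve for a 3x3 H from point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null-space direction (last right-singular vector) is H up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_homography(H, pts):
    """Map 2-D points through H in homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    proj = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return proj[:, :2] / proj[:, 2:3]

def ransac_homography(src, dst, iters=500, thresh=2.0, seed=0):
    """Draw random 4-point samples; keep the H with the most inliers."""
    rng = np.random.default_rng(seed)
    best_H, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography([src[i] for i in idx], [dst[i] for i in idx])
        err = np.linalg.norm(apply_homography(H, src) - np.asarray(dst), axis=1)
        n = int((err < thresh).sum())
        if n > best_inliers:
            best_H, best_inliers = H, n
    return best_H, best_inliers
```

In a mosaicking pipeline the surviving inlier matches would then feed the focal-length and rotation refinement the abstract mentions.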

Reconstruction of Transmitted Images from Images Displayed on Video Terminals (영상 단말에 전송된 이미지를 이용한 전송 영상 복원)

  • Park, Su-Kyung;Lee, Seon-Oh;Sim, Dong-Gyu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.1
    • /
    • pp.49-57
    • /
    • 2012
  • An image reconstruction algorithm is proposed to estimate the transmitted original images from images displayed on a video terminal. The proposed algorithm acquires the images shown on the terminal screen with a camera and then estimates the transmitted images from these acquisitions. However, camera-acquired images exhibit geometric and color distortions caused by the characteristics of the camera and display devices. We therefore correct the geometric distortion with a homography-based algorithm and the color distortion with a weighted-linear model. The experimental results show that the proposed algorithm yields promising estimation performance in terms of the peak signal-to-noise ratio (PSNR): PSNR values of the estimated images with respect to the corresponding original images range from 28 to 29 dB.
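
The weighted-linear color model in this abstract is not specified in detail; one plausible reading, shown here purely as an illustrative sketch, is a per-channel linear map fitted by (optionally weighted) least squares between corresponding pixels. The function name and weighting scheme are assumptions.

```python
import numpy as np

def fit_linear_color_model(observed, reference, weights=None):
    """Fit c_ref ~ a * c_obs + b per channel by (weighted) least squares.

    observed/reference: (N, 3) RGB samples at corresponding pixels.
    Returns per-channel gains a and offsets b, each of shape (3,).
    """
    observed = np.asarray(observed, float)
    reference = np.asarray(reference, float)
    w = np.ones(len(observed)) if weights is None else np.asarray(weights, float)
    gains, offsets = np.empty(3), np.empty(3)
    for c in range(3):
        # Design matrix [c_obs, 1]; simple row weighting of the system.
        A = np.stack([observed[:, c], np.ones(len(observed))], axis=1)
        sol, *_ = np.linalg.lstsq(A * w[:, None], reference[:, c] * w, rcond=None)
        gains[c], offsets[c] = sol
    return gains, offsets
```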

Semi-auto Calibration Method Using Circular Sample Pixel and Homography Estimation (원형 샘플 화소와 호모그래피 예측을 이용한 반자동 카메라 캘리브레이션 방법)

  • Shin, Dong-Won;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2015.11a
    • /
    • pp.67-70
    • /
    • 2015
  • Recently, 3D content produced with depth-image-based rendering has been delighting our eyes. In depth-image-based rendering, a viewpoint difference between the color camera and the depth camera inevitably arises, so camera parameters play an important role in the preprocessing step that aligns the two viewpoints. Camera calibration is performed to obtain these parameters. The widely used conventional calibration method requires photographing a planar chessboard pattern in several poses and then selecting the pattern feature points by hand, which is inconvenient. To solve this problem, this paper proposes a semi-automatic camera calibration method using a circular-sample-pixel test and homography estimation. The proposed method first extracts candidate pattern feature points from the image with the FAST corner detection algorithm. Next, the circular-sample-pixel test reduces the size of the candidate set. Homography estimation then recovers missing pattern feature points to obtain a complete set, and finally a pixel-accuracy refinement step locates the pattern feature points with sub-pixel (real-valued) accuracy. Experiments confirm that, compared with the conventional method, the proposed method maintains the accuracy of the camera parameters while removing the inconvenience of manual work.
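
The "homography estimation recovers missing pattern feature points" step can be illustrated as follows: fit a homography from ideal chessboard grid coordinates to the detected image corners, then project every ideal grid point through it to predict where the missing corners lie. This is a sketch under that interpretation, not the authors' code; function names are assumptions.

```python
import numpy as np

def homography_from_pairs(src, dst):
    """Least-squares homography with h33 fixed to 1 (needs >= 4 pairs)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def predict_corners(grid_pts, found_grid, found_img):
    """Map every ideal grid coordinate through H fitted on the found subset."""
    H = homography_from_pairs(found_grid, found_img)
    pts = np.hstack([np.asarray(grid_pts, float), np.ones((len(grid_pts), 1))]) @ H.T
    return pts[:, :2] / pts[:, 2:3]
```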


Homography Estimation for View-invariant Gait Recognition (시점 불변 게이트 인식을 위한 호모그래피의 추정)

  • Na, Jin-Young;Kang, Sung-Suk;Jeong, Seung-Do;Choi, Byung-Uk
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2003.05a
    • /
    • pp.691-694
    • /
    • 2003
  • Gait refers to a person's manner of walking or its characteristics, and research on extracting gait features to identify individuals with computer vision techniques has recently been active. However, gait information extracted from images has the drawback of being dependent on the camera viewpoint. Efforts to overcome this drawback by acquiring 3D information are under way, but they require additional information such as the camera-to-person distance and camera parameters. This paper proposes a method that resolves the viewpoint dependence of gait recognition using only the information within the image. We first find the walking direction from the silhouette images and estimate a planar homography with simple computations. Reconstructing a side-view image with the estimated homography yields gait information that is insensitive to viewpoint changes. To evaluate the proposed method, we compared the variation in silhouette width and height. The experiments confirm that the proposed method exhibits less feature variation than the unprocessed case, and in particular that gait features such as stride length maintain nearly constant values.
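
Reconstructing a canonical side view from a planar homography amounts to mapping a quadrilateral on the walking plane to an axis-aligned rectangle. The sketch below solves the exact 4-point homography for that rectification; it is an illustration of the general technique, with the quadrilateral ordering and rectangle size as assumed conventions.

```python
import numpy as np

def rectify_homography(quad, width, height):
    """Exact homography sending an image quadrilateral (4 corners, clockwise
    from top-left) to an axis-aligned width x height side-view rectangle."""
    dst = [(0.0, 0.0), (width, 0.0), (width, height), (0.0, height)]
    A, b = [], []
    for (x, y), (u, v) in zip(quad, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_points(H, pts):
    """Map 2-D points (e.g. silhouette pixels) through H."""
    p = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

Warping the silhouette points this way yields width/height measurements that no longer depend on the original camera viewpoint.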


Design and Implementation of Video Clip Service System in Augmented Reality Using the SURF Algorithm (SURF 알고리즘을 이용한 증강현실 동영상 서비스 시스템의 설계 및 구현)

  • Jeon, Young-Joon;Shin, Hong-Seob;Kim, Jin-Il
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.16 no.1
    • /
    • pp.22-28
    • /
    • 2015
  • In this paper, a service system is designed and implemented that plays video clips linked to static images extracted from newspapers, magazines, photo albums, and similar media, in augmented reality. First, the system uses the SURF algorithm to extract features from the original photos printed in the media and stores them together with the linked video clips. Next, when a photo is taken with the camera of a mobile device such as a smartphone, the system extracts features in real time, searches for the video clip linked to the matching original image, and shows it on the smartphone in augmented reality. The proposed system was implemented on Android smartphones, and the test results verify that it operates not only on intact photos but also on partially damaged ones.
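
Matching a query photo's descriptors against the stored originals is typically done with nearest-neighbour search plus Lowe's ratio test; the sketch below illustrates that standard technique on generic descriptor vectors (it is not tied to SURF or to the authors' system, and the ratio value is a common default, not theirs).

```python
import numpy as np

def match_descriptors(query, train, ratio=0.7):
    """Nearest-neighbour matching with Lowe's ratio test: keep a match only
    when the best distance is clearly smaller than the second best."""
    query = np.asarray(query, float)
    train = np.asarray(train, float)
    matches = []
    for i, q in enumerate(query):
        d = np.linalg.norm(train - q, axis=1)
        j, k = np.argsort(d)[:2]  # best and second-best candidates
        if d[j] < ratio * d[k]:
            matches.append((i, int(j)))
    return matches
```

A query image is then declared to match the stored original that collects the most surviving matches.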

Overlap Estimation for Panoramic Image Generation (중첩 영역 추정을 통한 파노라마 영상 생성)

  • Yang, Jihee;Jeon, Jihye;Park, Gooman
    • Journal of Satellite, Information and Communications
    • /
    • v.9 no.4
    • /
    • pp.32-37
    • /
    • 2014
  • Panoramic imaging is a good way to overcome a narrow field of view and is studied in robot vision, stereo cameras, and panoramic image registration and modeling. A panorama can realize views wider than the human visual field and provide a realistic space that gives the feeling of being on the scene. If all correspondences are used, it becomes difficult to find strong features and correspondences and to estimate an accurate homography matrix under geometric changes between images, as the computational load increases. Accordingly, we compare and analyze the input images' histograms to estimate overlapping areas with high similarity, and use the SURF algorithm to detect features there. We also solve the input-order problem, so a panorama can be built from input images given in any order.
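
One simple way to estimate an overlap region from histograms, in the spirit of this abstract, is to compare the histogram of the right strip of one image with the left strip of the next for each candidate overlap width. This is an assumed concrete reading of the approach, shown as a sketch; the bin count and similarity measure (histogram intersection) are choices made here, not the authors'.

```python
import numpy as np

def hist_similarity(a, b, bins=32):
    """Normalised-histogram intersection of two grayscale regions (0..255)."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(b, bins=bins, range=(0, 256))
    ha = ha / max(ha.sum(), 1)
    hb = hb / max(hb.sum(), 1)
    return float(np.minimum(ha, hb).sum())

def estimate_overlap(left, right, min_w=8):
    """Try each candidate overlap width; return the width whose right strip of
    `left` best matches the left strip of `right`, plus its similarity."""
    best_w, best_s = min_w, -1.0
    for w in range(min_w, min(left.shape[1], right.shape[1]) + 1):
        s = hist_similarity(left[:, -w:], right[:, :w])
        if s > best_s:
            best_w, best_s = w, s
    return best_w, best_s
```

Feature detection (e.g. SURF) can then be restricted to the estimated strips, cutting the matching cost.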

Self-Supervised Rigid Registration for Small Images

  • Ma, Ruoxin;Zhao, Shengjie;Cheng, Samuel
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.1
    • /
    • pp.180-194
    • /
    • 2021
  • For small-image registration, feature-based approaches are likely to fail, as feature detectors cannot detect enough feature points in low-resolution images. The classic FFT approach's prediction accuracy is high, but the registration time can be relatively long, on the order of several seconds per image pair. To achieve real-time, high-precision rigid registration for small images, we apply deep neural networks for supervised rigid transformation prediction, which directly predicts the transformation parameters. We train deep registration models with rigidly transformed CIFAR-10 and STL-10 images, and evaluate their generalization ability on transformed CIFAR-10 images, STL-10 images, and randomly generated images. Experimental results show that the proposed deep registration models can achieve accuracy comparable to the classic FFT approach for small CIFAR-10 images (32×32), and our LSTM registration model takes less than 1 ms to register one pair of images. For moderately sized STL-10 images (96×96), FFT significantly outperforms the deep registration models in terms of accuracy but is also considerably slower. Our results suggest that deep registration models have competitive advantages over conventional approaches, at least for small images.
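
The rigid transformation such models predict has only three parameters in 2-D: a rotation angle and a translation. The sketch below shows how those predicted parameters act on coordinates and how a parameter error can be scored as a geometric error; it is a generic illustration, not the paper's network or loss.

```python
import numpy as np

def rigid_transform(points, theta, tx, ty):
    """Apply a 2-D rigid motion: rotate by theta (radians) about the origin,
    then translate by (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return np.asarray(points, float) @ R.T + np.array([tx, ty])

def registration_error(pred_params, true_params, probe_pts):
    """Mean distance between probe points under predicted vs. true motion."""
    a = rigid_transform(probe_pts, *pred_params)
    b = rigid_transform(probe_pts, *true_params)
    return float(np.linalg.norm(a - b, axis=1).mean())
```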

Sidewalk Gaseous Pollutants Estimation Through UAV Video-based Model

  • Omar, Wael;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.1
    • /
    • pp.1-20
    • /
    • 2022
  • As unmanned aerial vehicle (UAV) technology has grown in popularity over the years, it has been introduced for air quality monitoring. It can readily be used to estimate sidewalk emission concentrations by calculating road-traffic emission factors for different vehicle types. These calculations require simulating the spread of pollutants from one or more given sources. For this purpose, a Gaussian plume dispersion model was developed based on the US EPA Motor Vehicle Emissions Simulator (MOVES), which provides an accurate estimate of fuel consumption and pollutant emissions from vehicles under a wide range of user-defined conditions. This paper describes a methodology for estimating the emission concentration on the sidewalk produced by different types of vehicles, treated as a line source that accounts for vehicle parameters, wind speed and direction, and pollutant concentration, observed with a UAV equipped with a monocular camera; all quantities were sampled over hourly intervals. In this article, a YOLOv5 deep learning model detects vehicles, Deep SORT (Simple Online and Realtime Tracking) tracks them, a homography transformation matrix localizes each vehicle and yields its speed and acceleration, and ultimately the Gaussian plume dispersion model estimates the CO and NOx concentrations at a sidewalk point. The results demonstrate that the estimated pollutant values give a fast and reasonable indication for any near-road receptor point using an inexpensive UAV, without installing air monitoring stations along the road.
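
The homography-based localization step here, extracting vehicle speed from pixel tracks, can be sketched as follows: project each tracked pixel position onto the ground plane through H (pixels to metres), then differentiate successive positions. The particular H and frame rate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def to_ground(H, pixel_pts):
    """Project pixel coordinates onto ground-plane coordinates (metres) via H."""
    p = np.hstack([np.asarray(pixel_pts, float), np.ones((len(pixel_pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def track_speed(H, track_px, fps):
    """Mean speed (m/s) of one tracked vehicle from successive ground positions."""
    g = to_ground(H, track_px)
    step = np.linalg.norm(np.diff(g, axis=0), axis=1)  # metres per frame
    return float(step.mean() * fps)
```

Acceleration follows the same way, by differencing the per-frame speeds.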

Lane Detection Based on Inverse Perspective Transformation and Machine Learning in Lightweight Embedded System (경량화된 임베디드 시스템에서 역 원근 변환 및 머신 러닝 기반 차선 검출)

  • Hong, Sunghoon;Park, Daejin
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.17 no.1
    • /
    • pp.41-49
    • /
    • 2022
  • This paper proposes a novel lane detection algorithm based on inverse perspective transformation and machine learning for lightweight embedded systems. The inverse perspective transformation obtains a bird's-eye view of the scene from a perspective image to remove perspective effects; it requires only the internal and external parameters of the camera, without an 8-degree-of-freedom (DoF) homography matrix mapping points in one image to the corresponding points in another. To improve the accuracy and speed of lane detection in complex road environments, a machine learning stage is applied only to regions that pass a first classifier. The first classifier runs on the bird's-eye-view image to determine candidate lane regions and improve detection speed; lane regions that pass it are then detected more accurately by the machine learning stage. The system was tested on driving video of a vehicle in an embedded system. The experimental results show that the proposed method works well in various road environments and meets real-time requirements: its lane detection is about 3.85 times faster than edge-based lane detection, and its detection accuracy is better.
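
The ground-plane mapping from intrinsic and extrinsic parameters follows the standard construction H = K [r1 r2 t] (the first two rotation columns plus the translation), which is exactly the camera projection restricted to the Z = 0 world plane; inverting H gives the bird's-eye (inverse perspective) mapping. The sketch below shows this construction; the specific K, R, t values in the test are illustrative, not from the paper.

```python
import numpy as np

def ground_homography(K, R, t):
    """Homography from ground plane (Z = 0 in world coordinates) to image:
    H = K [r1 r2 t]. K: 3x3 intrinsics; R: 3x3 world-to-camera rotation;
    t: length-3 translation. Normalised so H[2,2] = 1 (assumes it is nonzero).
    """
    H = np.asarray(K, float) @ np.column_stack([R[:, 0], R[:, 1], t])
    return H / H[2, 2]

def image_to_ground(K, R, t, px):
    """Inverse perspective mapping: pixel (u, v) -> ground (X, Y)."""
    Hinv = np.linalg.inv(ground_homography(K, R, t))
    p = Hinv @ np.array([px[0], px[1], 1.0])
    return p[:2] / p[2]
```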

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.4
    • /
    • pp.7-13
    • /
    • 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm that fuses vision and LiDAR sensors for urban autonomous driving. Classifying and tracking vulnerable road users such as pedestrians, bicyclists, and motorcyclists is essential for autonomous driving in complex urban environments. In this paper, a real-time image object detection algorithm, YOLO, and an object tracking algorithm operating on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates, obtained from YOLO, are transformed into the local coordinates of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered by Euclidean distance and the clusters are associated using GNN. Third, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle information of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in the urban environment.
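
Euclidean-distance clustering of a point cloud, the second step above, can be sketched as a breadth-first grouping of points that lie within a radius of any existing cluster member. This is a generic illustration of the technique; the radius and minimum cluster size are assumed parameters, not the authors' tuning.

```python
import numpy as np

def euclidean_cluster(points, eps=0.5, min_pts=2):
    """Group points into clusters: a point closer than eps (metres) to any
    member of a cluster joins it (BFS over the distance graph). Returns a
    label per point; -1 marks noise (clusters smaller than min_pts)."""
    pts = np.asarray(points, float)
    labels = -np.ones(len(pts), dtype=int)
    cluster = 0
    for i in range(len(pts)):
        if labels[i] != -1:
            continue
        queue, members = [i], []
        labels[i] = cluster
        while queue:
            j = queue.pop()
            members.append(j)
            near = np.where(np.linalg.norm(pts - pts[j], axis=1) < eps)[0]
            for k in near:
                if labels[k] == -1:
                    labels[k] = cluster
                    queue.append(int(k))
        if len(members) < min_pts:
            for j in members:
                labels[j] = -1  # too small: mark as noise
        else:
            cluster += 1
    return labels
```

Each surviving cluster would then be handed to the association and state-estimation stages.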