• Title/Summary/Keyword: Homography


A Study on Improvement Technology of Image Resolution using Mobile Camera (이동 카메라를 이용한 사진 해상도 향상 기술 연구)

  • Buri Kim;Jongtaek Oh
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.4 / pp.93-98 / 2023
  • Recently, as display devices have grown in size and taking pictures with smartphones has become commonplace, the need for high-resolution smartphone photography is increasing. However, when the lens size of a camera is limited, as in a smartphone, there is a physical limit to increasing the resolution of a photo. This paper presents a technique for increasing photo resolution even with a small lens such as a smartphone camera: multiple pictures are taken while moving the smartphone, and the resolution is increased by combining them into a single picture. First, two pictures of a 2D scene were taken while moving the smartphone horizontally. Camera matrix estimation and homography inverse transformation were performed using OpenCV, and the two shots were synthesized into one picture with improved resolution. On several test pictures, the improvement was confirmed in details such as oblique lines and arcs.
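As a rough illustration of the pipeline this abstract describes (feature matching, homography estimation, inverse warping, and synthesis with OpenCV), the following Python sketch aligns a second hand-held shot to the first and blends the two. The ORB detector, thresholds, and simple averaging are assumptions for illustration, not the paper's exact method.

```python
import cv2
import numpy as np

# Minimal sketch: align a second hand-held shot to the first via a
# homography and blend them. Feature choice (ORB) and the averaging step
# are illustrative assumptions, not the paper's exact pipeline.
img1 = cv2.imread("shot1.jpg")   # reference shot
img2 = cv2.imread("shot2.jpg")   # shot taken after moving the phone

orb = cv2.ORB_create(5000)
k1, d1 = orb.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
k2, d2 = orb.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]

pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Homography mapping shot2 onto shot1, estimated robustly with RANSAC.
H, _ = cv2.findHomography(pts2, pts1, cv2.RANSAC, 3.0)

# Warp shot2 into shot1's frame (the inverse-transformation step) and
# average the two exposures as a simple stand-in for the paper's synthesis.
warped = cv2.warpPerspective(img2, H, (img1.shape[1], img1.shape[0]))
fused = cv2.addWeighted(img1, 0.5, warped, 0.5, 0)
cv2.imwrite("fused.jpg", fused)
```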

Matching Points Filtering Applied Panorama Image Processing Using SURF and RANSAC Algorithm (SURF와 RANSAC 알고리즘을 이용한 대응점 필터링 적용 파노라마 이미지 처리)

  • Kim, Jeongho;Kim, Daewon
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.4 / pp.144-159 / 2014
  • Techniques for building a single panoramic image from multiple pictures are widely studied in areas such as computer vision and computer graphics. Panoramic images are applied in fields like virtual reality and robot vision that require wide-angle shots, as a useful way to overcome the limitations in viewing angle, resolution, and internal information of an image taken from a single camera. A panoramic image is also meaningful in that it usually provides a better sense of immersion than a plain image. Although there are many ways to build a panoramic image, most of them extract feature points and matching points from each image, and then apply the RANSAC (RANdom SAmple Consensus) algorithm to the matching points together with a homography matrix to transform the images. The SURF (Speeded Up Robust Features) algorithm, used in this paper to extract feature points, relies on an image's grayscale and local spatial information. SURF is widely used because it is robust to changes in image scale and viewpoint and is faster than the SIFT (Scale Invariant Feature Transform) algorithm. However, SURF can produce erroneous matches, which slow down the RANSAC stage and increase CPU usage; such matching errors are also a critical cause of degraded accuracy and clarity in the resulting panorama. In this paper, to minimize matching-point errors, the RGB pixel values of the 3×3 region around each matching point's coordinates are used in an intermediate filtering step that removes wrong matches. We also present analysis and evaluation results on the improved processing speed for producing a panorama, CPU usage, the reduction rate of extracted matching points, and accuracy.
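Below is a minimal Python/OpenCV sketch of the kind of pipeline described above: SURF feature matching, a color-based pre-filter that compares the 3×3 RGB neighborhoods of each matched pair (one plausible reading of the paper's intermediate filtering step), RANSAC homography estimation, and warping into a panorama. SURF lives in the opencv-contrib (xfeatures2d) module and may require a nonfree build; the 0.7 ratio test and the color threshold of 40 are illustrative assumptions.

```python
import cv2
import numpy as np

def patch_mean(img, pt, r=1):
    # Mean BGR value of the (2r+1)x(2r+1) neighborhood around a point.
    x, y = int(round(pt[0])), int(round(pt[1]))
    patch = img[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
    return patch.reshape(-1, 3).astype(np.float64).mean(axis=0)

imgL = cv2.imread("left.jpg")
imgR = cv2.imread("right.jpg")

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kL, dL = surf.detectAndCompute(cv2.cvtColor(imgL, cv2.COLOR_BGR2GRAY), None)
kR, dR = surf.detectAndCompute(cv2.cvtColor(imgR, cv2.COLOR_BGR2GRAY), None)

matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(dL, dR, k=2)
good = [m[0] for m in matches if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]

# Color-based pre-filtering: drop matches whose 3x3 RGB means differ a lot.
filtered = [m for m in good
            if np.abs(patch_mean(imgL, kL[m.queryIdx].pt) -
                      patch_mean(imgR, kR[m.trainIdx].pt)).max() < 40]

src = np.float32([kR[m.trainIdx].pt for m in filtered]).reshape(-1, 1, 2)
dst = np.float32([kL[m.queryIdx].pt for m in filtered]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 4.0)

# Warp the right image into the left image's frame to build the panorama.
pano = cv2.warpPerspective(imgR, H, (imgL.shape[1] + imgR.shape[1], imgL.shape[0]))
pano[:imgL.shape[0], :imgL.shape[1]] = imgL
cv2.imwrite("panorama.jpg", pano)
```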

A Study on the Construction of Near-Real Time Drone Image Preprocessing System to use Drone Data in Disaster Monitoring (재난재해 분야 드론 자료 활용을 위한 준 실시간 드론 영상 전처리 시스템 구축에 관한 연구)

  • Joo, Young-Do
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.3 / pp.143-149 / 2018
  • Recently, due to the large-scale damage from natural disasters caused by global climate change, monitoring systems applying remote sensing technology are being built for disaster areas. Among remote sensing platforms, drones have been actively adopted in the private sector thanks to recent technological developments, and have been applied to disaster response owing to advantages such as timeliness and economic efficiency. This paper deals with the development of a preprocessing system that can map drone image data in a near-real-time manner as a basis for a drone-based disaster monitoring system. The system is based on the SURF algorithm, one of the core computer vision techniques, and performs the desired geometric correction through feature-point matching between reference images and newly captured images. The study areas are the lower Gahwa River and the Daecheong Dam basin: the former has many distinctive points for matching while the latter has relatively few, making it possible to test whether the system can be applied in varied environments. The results show geometric correction accuracies of 0.6 m and 1.7 m in the two areas, respectively, with a processing time of about 30 seconds per scene, indicating high applicability to disaster situations that require timeliness. However, when no reference image is available or its accuracy is low, the correction quality decreases accordingly.
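A minimal sketch of the reference-image-based geometric correction step described above: match the drone frame against a pre-georeferenced reference image, fit a homography with RANSAC, and resample the frame into the reference geometry. The paper uses SURF; ORB is substituted here only to avoid the contrib dependency, and the file names are placeholders.

```python
import cv2
import numpy as np

# Sketch: geometric correction of a drone frame against a georeferenced
# reference image via feature matching and a RANSAC homography.
ref  = cv2.imread("reference_ortho.png")   # georeferenced reference image
shot = cv2.imread("drone_shot.jpg")        # raw drone frame

orb = cv2.ORB_create(8000)
kr, dr = orb.detectAndCompute(cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY), None)
ks, ds = orb.detectAndCompute(cv2.cvtColor(shot, cv2.COLOR_BGR2GRAY), None)

matches = sorted(cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(ds, dr),
                 key=lambda m: m.distance)[:1000]
src = np.float32([ks[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kr[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# The warped frame now shares the reference image's pixel geometry, so its
# ground coordinates can be read off the reference's geotransform.
corrected = cv2.warpPerspective(shot, H, (ref.shape[1], ref.shape[0]))
cv2.imwrite("drone_shot_corrected.png", corrected)
```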

Method of Measuring Color Difference Between Images using Corresponding Points and Histograms (대응점 및 히스토그램을 이용한 영상 간의 컬러 차이 측정 기법)

  • Hwang, Young-Bae;Kim, Je-Woo;Choi, Byeong-Ho
    • Journal of Broadcast Engineering / v.17 no.2 / pp.305-315 / 2012
  • Color correction between two or more images is crucial for subsequent algorithms and for stereoscopic 3D camera systems. Although various color correction methods have been proposed recently, there are few methods for measuring their performance, and when two images differ in viewpoint because of camera positions, previous measurement methods may not be appropriate. In this paper, we propose a method of measuring the color difference between corresponding images for color correction. The method finds matching points that should share the same colors in the two scenes, using correspondence search to account for the view variation, and then computes statistics from the neighborhoods of these matching points to measure the color difference. This approach tolerates misalignment of corresponding points, unlike the conventional approach of a single global homography transformation. To handle the case where matching points do not cover the whole image, we also compute color-difference statistics over the entire image region. The final color difference is a weighted sum of the correspondence-based and whole-region-based measures, where the weight is determined by the fraction of the image covered by the correspondence-based comparison.
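The following sketch illustrates the structure of the proposed measure: neighborhood statistics around matched points, whole-image statistics, and a coverage-weighted combination. The matcher, the 7×7 neighborhood, and the coverage formula are assumptions for illustration, not the paper's exact definitions.

```python
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg")
img2 = cv2.imread("view2.jpg")

# Correspondences between the two views (ORB is an illustrative choice).
orb = cv2.ORB_create(3000)
k1, d1 = orb.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
k2, d2 = orb.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

def patch_mean(img, pt, r=3):
    # Mean BGR value of the (2r+1)x(2r+1) neighborhood around a point.
    x, y = int(round(pt[0])), int(round(pt[1]))
    patch = img[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
    return patch.reshape(-1, 3).astype(np.float64).mean(axis=0)

# Correspondence-based difference: mean absolute BGR gap over neighborhoods.
local_diffs = [np.abs(patch_mean(img1, k1[m.queryIdx].pt) -
                      patch_mean(img2, k2[m.trainIdx].pt)) for m in matches]
local_d = np.mean(local_diffs, axis=0)

# Whole-image difference: gap between the global per-channel means.
global_d = np.abs(img1.reshape(-1, 3).mean(axis=0) - img2.reshape(-1, 3).mean(axis=0))

# Weight by the fraction of the image area covered by the matched patches.
coverage = min(1.0, len(matches) * 7 ** 2 / float(img1.shape[0] * img1.shape[1]))
color_difference = coverage * local_d + (1.0 - coverage) * global_d
print("per-channel color difference (B, G, R):", color_difference)
```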

Removing Lighting Reflection under Dark and Rainy Environments based on Stereoscopic Vision (스테레오 영상 기반 야간 및 우천시 조명 반사 제거 기술)

  • Lee, Sang-Woong
    • Journal of KIISE:Software and Applications / v.37 no.2 / pp.104-109 / 2010
  • Lighting reflection is a common problem in image analysis and causes many difficulties in extracting distinct features; the problem grows worse on rainy nights. In this paper, we aim to remove lighting reflection effects and reconstruct the road surface without reflections so that distinct features can be extracted. The proposed method uses 3D analysis based on multi-view geometry over the captured images, which allows the reflected areas to be combined across views, the reflection effects to be removed, and the surface to be reconstructed. First, the regions of light sources and reflecting surfaces are extracted as local maxima of vertically projected intensity histograms. Then, the fundamental matrix and the homography matrix between the images are computed from corresponding points in each image. Finally, the surfaces are combined by selecting the minimum value among the multiple images and substituting it into the target image. The proposed method reduces lighting reflection effects without losing the surface texture. Experimental results on collected data show plausible performance at reasonable speed, but areas where reflections overlap in every view cannot be reconstructed and remain in the result; solving this will require a new reflection model.
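A compact sketch of the surface-combination step described above, assuming the homography relating the two road-surface views has already been estimated from corresponding points: the per-pixel minimum suppresses bright specular highlights that appear in only one view. The file names and the precomputed homography are placeholders.

```python
import cv2
import numpy as np

# Sketch: once the road-surface regions of two views are related by a
# homography, take the per-pixel minimum so that bright specular highlights
# present in only one view are replaced by the darker, unreflected pixel
# from the other view.
target = cv2.imread("cam_target.png")
other  = cv2.imread("cam_other.png")
H = np.load("surface_homography.npy")      # homography mapping other -> target

warped = cv2.warpPerspective(other, H, (target.shape[1], target.shape[0]))

# Only replace pixels where the warped view actually has content.
valid = warped.sum(axis=2) > 0
fused = target.copy()
fused[valid] = np.minimum(target[valid], warped[valid])
cv2.imwrite("surface_without_reflection.png", fused)
```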

Moving Object Detection and Tracking Techniques for Error Reduction (오인식률 감소를 위한 이동 물체 검출 및 추적 기법)

  • Hwang, Seung-Jun;Ko, Ha-Yoon;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.22 no.1 / pp.20-26 / 2018
  • In this paper, we propose a moving object detection and tracking algorithm based on multi-frame feature-point tracking information to reduce false positives, addressing the detection errors and slow tracking of existing studies. First, corner feature points and optical flow are computed over multiple frames for camera-motion compensation and object tracking. Next, optical-flow tracking error is reduced by multi-frame forward-backward tracking, and the tracked feature points are divided into background and moving-object candidates using homography and the RANSAC algorithm for camera-motion compensation. Among the transformed corner feature points, the outliers removed by RANSAC are clustered, and outlier clusters above a certain size are classified as moving-object candidates. Objects classified as candidates are then tracked by label-tracking-based data association analysis. Using quadrotor image-based detection and tracking experiments, we show that the proposed algorithm improves both precision and recall compared with existing algorithms.
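A minimal sketch of the camera-motion compensation stage described above: corners tracked with pyramidal Lucas-Kanade optical flow, a forward-backward consistency check, and a RANSAC homography whose outliers are kept as moving-object candidate points. The thresholds are assumptions, and the clustering and label-tracking stages are omitted.

```python
import cv2
import numpy as np

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Corner features in the previous frame, tracked forward and then backward.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=1000, qualityLevel=0.01, minDistance=7)
p1, st1, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
p0r, st2, _ = cv2.calcOpticalFlowPyrLK(curr, prev, p1, None)

# Forward-backward check: keep tracks that return close to where they started.
fb_err = np.linalg.norm(p0 - p0r, axis=2).ravel()
good = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_err < 1.0)
src, dst = p0[good], p1[good]

# Background motion model: inliers follow the camera motion,
# RANSAC outliers are potential moving-object points.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
outliers = dst[inlier_mask.ravel() == 0].reshape(-1, 2)
print("moving-object candidate points:", len(outliers))
```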