• Title/Summary/Keyword: Homography

Development of the Advanced SURF Algorithm for Efficient Matching of Stereo Image (스테레오 영상의 효율적 매칭을 위한 개선된 SURF 알고리즘 개발)

  • Youm, Min Kyo;Yoon, Hong Sik;Whang, Jin Sang;Lee, Dong Ha
    • Journal of Korean Society for Geospatial Information Science / v.21 no.2 / pp.11-17 / 2013
  • Nowadays, 3D models are used in diverse sectors. 3D maps provide a better sense of reality than existing planar maps, as well as a variety of information that limited planar maps cannot offer. The process proposed in this paper enables easy and quick production by replacing expensive laser scanners with an improved stereo matching algorithm applied to digital camera images. The algorithm used in this study is the SURF algorithm contained in the OpenCV library. Mismatched points produced by the algorithm were eliminated using a homography transformation and epipolar lines. In addition, the improved algorithm was compared with a commercial program and showed better performance. The proposed method is expected to contribute to digital maps and 3D virtual reality because it enables easy and quick 3D modeling, provided that the stereo matching conditions are met.
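The outlier-removal step this abstract describes can be illustrated with a short OpenCV sketch: SURF matches are kept only if they are consistent with both a RANSAC homography and the epipolar geometry. This is only an illustration of the idea, not the paper's code; SURF requires an opencv-contrib build, and the file names and thresholds are placeholders.

```python
import cv2
import numpy as np

# Load a stereo pair (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# SURF lives in the contrib module; hessianThreshold is a tuning choice.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(left, None)
kp2, des2 = surf.detectAndCompute(right, None)

# Nearest-neighbour matching with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        matches.append(pair[0])

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Reject mismatches that do not fit a RANSAC homography ...
H, h_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)
# ... and that violate the epipolar constraint of the fundamental matrix.
F, f_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)

keep = (h_mask.ravel() == 1) & (f_mask.ravel() == 1)
print(f"{keep.sum()} of {len(matches)} matches survive the consistency checks")
```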

An Implementation of QR Code based On-line Mobile Augmented Reality System (QR코드 기반의 온라인 모바일 증강현실 시스템의 구현)

  • Park, Min-Woo;Park, Jung-Pil;Jung, Soon-Ki
    • Journal of Korea Multimedia Society / v.15 no.8 / pp.1004-1016 / 2012
  • This paper proposes a mobile augmented reality system that provides detailed information on products through the QR codes printed on them. In the proposed system, the camera pose is estimated with both marker-based and markerless methods. While the camera can see the QR code, the pose is estimated from the set of rectangles in the QR code. When the QR code moves out of sight, the pose is estimated from the homography between consecutive frames. Moreover, the augmented reality content in the proposed system is described with meta-data, so the user can build content for various scenarios by editing only the meta-data file, without modifying the system itself. In particular, the system keeps the content up to date through an on-line server, which reduces unnecessary updates of the program.
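The markerless fallback mentioned in the abstract can be sketched roughly as follows: while the QR code is out of view, the camera motion is propagated with a frame-to-frame homography estimated from tracked corners. The function below is an assumption-laden illustration (OpenCV, hypothetical frame variables), not the paper's implementation.

```python
import cv2
import numpy as np

def frame_to_frame_homography(prev_gray, cur_gray):
    """Estimate the homography mapping the previous frame into the current one."""
    # Track sparse corners from the previous frame with pyramidal Lucas-Kanade.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
    good0 = p0[status.ravel() == 1].reshape(-1, 2)
    good1 = p1[status.ravel() == 1].reshape(-1, 2)
    H, _ = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)
    return H

# Hypothetical usage: chain the per-frame homographies while the QR code is
# not visible, so H_total = H_k @ ... @ H_1 carries the last marker-based pose
# forward and the augmentation keeps following the camera.
```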

High Accurate Cup Positioning System for a Coffee Printer (커피 프린터를 위한 커피 잔 정밀 측위 시스템)

  • Kim, Heeseung;Lee, Jaesung
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.10 / pp.1950-1956 / 2017
  • In the food-printing field, precise positioning of the printing object is very important. In this paper, we propose a cup positioning method for a latte-art printer based on image processing. A camera sensor is installed above the printer, and the image obtained from it is projected into a top-view image. The edge lines of the image are detected first, and then the center coordinates and the radius of the cup are found through a circular Hough transform. The performance evaluation shows an image processing time of 0.1-0.125 sec and a cup detection rate of 92.26%, which means the cup is detected almost perfectly without affecting the overall latte-art printing time. The center coordinates and radius values of the detected cups show very small errors, less than 1.5 mm on average. Therefore, the printing position error problem appears to be solved.
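A compact sketch of the pipeline described (top-view projection followed by circle detection) is given below. The four source corners, canvas size, and Hough parameters are placeholders that would come from a one-time calibration of the printer bed.

```python
import cv2
import numpy as np

img = cv2.imread("printer_camera.png")          # placeholder input image
w, h = 600, 600                                  # size of the top-view canvas

# Project the camera image onto a top view of the printer bed.
# The source corners below are dummy calibration values.
src = np.float32([[120, 80], [520, 90], [560, 420], [90, 430]])
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
top_view = cv2.warpPerspective(img, cv2.getPerspectiveTransform(src, dst), (w, h))

# Detect the cup rim as a circle: blur, then a circular Hough transform.
gray = cv2.medianBlur(cv2.cvtColor(top_view, cv2.COLOR_BGR2GRAY), 5)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                           param1=100, param2=40, minRadius=80, maxRadius=250)
if circles is not None:
    cx, cy, r = circles[0, 0]
    print(f"cup center ({cx:.1f}, {cy:.1f}) px, radius {r:.1f} px")
```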

Non-uniform Deblur Algorithm using Gyro Sensor and Different Exposure Image Pair (자이로 센서와 노출시간이 다른 두 장의 영상을 이용한 비균일 디블러 기법)

  • Ryu, Ho-hyeong;Song, Byung Cheol
    • Journal of Broadcast Engineering / v.21 no.2 / pp.200-209 / 2016
  • This paper proposes a non-uniform de-blur algorithm that uses an IMU sensor and a long/short exposure-time image pair to remove blur efficiently. Conventional blur kernel estimation algorithms that rely on sensor information do not provide acceptable performance because of the limitations of the sensors. To overcome this limitation, we present a kernel refinement step based on the two images with different exposure times, which improves the accuracy of the estimated kernel. Also, to address the severe degradation in visual quality that conventional non-uniform de-blur algorithms suffer with large blur kernels, this paper proposes a homography-based residual de-convolution that minimizes quality degradation such as ringing artifacts during de-convolution. Experimental results show that the proposed algorithm is superior to state-of-the-art methods in terms of both subjective and objective visual quality.
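The sensor-driven part of such methods rests on the fact that a pure camera rotation induces a homography H = K R K^-1. The sketch below only illustrates how a gyro-derived rotation can be turned into an image warp; the intrinsics and angles are placeholders, and the kernel refinement and residual de-convolution steps of the paper are not reproduced.

```python
import cv2
import numpy as np

def rotation_homography(K, rx, ry, rz):
    """Homography induced by a pure camera rotation: H = K R K^{-1}."""
    rvec = np.float64([rx, ry, rz]).reshape(3, 1)  # axis-angle from gyro integration
    R, _ = cv2.Rodrigues(rvec)
    return K @ R @ np.linalg.inv(K)

# Placeholder intrinsics and an integrated gyro rotation during the exposure.
K = np.float64([[800, 0, 320],
                [0, 800, 240],
                [0,   0,   1]])
H = rotation_homography(K, rx=0.01, ry=-0.02, rz=0.005)

# Warping the sharp short-exposure image by H gives one sample of the camera's
# motion path; stacking such warps approximates a non-uniform blur kernel.
short_exposure = cv2.imread("short_exposure.png", cv2.IMREAD_GRAYSCALE)
warped = cv2.warpPerspective(short_exposure, H, short_exposure.shape[::-1])
```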

Enhancement on 3 DoF Image Stitching Using Inertia Sensor Data (관성 센서 데이터를 활용한 3 DoF 이미지 스티칭 향상)

  • Kim, Minwoo;Kim, Sang-Kyun
    • Journal of Broadcast Engineering / v.22 no.1 / pp.51-61 / 2017
  • This paper proposes a method for generating panoramic images that combines conventional feature extraction algorithms (e.g., SIFT, SURF, MPEG-7 CDVS) with data sensed by an inertial sensor to enhance the stitching results. Image stitching becomes more challenging when the images are taken with two different mobile phones with no posture calibration. Using the inertial sensor data obtained by each phone, images with different yaw, pitch, and roll angles are preprocessed and adjusted before the stitching process. Stitching performance (e.g., feature extraction time, number of inlier points, stitching accuracy) is reported for the conventional feature extraction algorithms, with and without the use of the inertial sensor data.
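A hedged sketch of the pre-processing idea: the relative yaw/pitch/roll between the two phones can be turned into a rotation-induced homography and used to pre-align one image before feature matching, so the feature-based stitch only has to recover a small residual transform. The Euler convention, intrinsics, and angles below are assumptions.

```python
import cv2
import numpy as np

def euler_to_rotation(yaw, pitch, roll):
    """Z-Y-X Euler angles (radians) to a rotation matrix; the convention is an assumption."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

K = np.float64([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]])  # placeholder intrinsics

# Relative orientation between the two phones, taken from their inertial sensors.
R_rel = euler_to_rotation(yaw=0.15, pitch=-0.03, roll=0.02)
H_pre = K @ R_rel @ np.linalg.inv(K)

img2 = cv2.imread("phone_b.jpg")                 # placeholder second-phone image
# Pre-align image B; the remaining feature-based alignment then starts from a
# much smaller misregistration, which tends to raise the inlier ratio.
img2_aligned = cv2.warpPerspective(img2, H_pre, (img2.shape[1], img2.shape[0]))
```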

Robust Parameter Estimation using Fuzzy RANSAC (퍼지 RANSAC을 이용한 강건한 인수 예측)

  • Lee Joong-Jae;Jang Hyo-Jong;Kim Gye-Young;Choi Hyung-il
    • Journal of KIISE:Software and Applications / v.33 no.2 / pp.252-266 / 2006
  • Many problems in computer vision are based on mathematical models, and their optimal solutions can be found by estimating the parameters of each model. However, if the input data set contains outliers that are considerably larger than normal noise, the estimates become incorrect. RANSAC is a representative robust algorithm used to resolve this problem. One major drawback of RANSAC is that it needs a priori knowledge of the distribution of the data (i.e., the percentage of outliers). To solve this problem, we propose a fuzzy RANSAC (FRANSAC) algorithm that improves both the rejection rate of outliers and the accuracy of the solutions. At each iteration, all data are categorized into a good sample set, a bad sample set, and a vague sample set using a fuzzy classification, and sampling is performed only from the good sample set. In the experimental results, we show the performance of the proposed algorithm when applied to linear regression and to the computation of a homography.
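For reference, the plain RANSAC baseline that the fuzzy variant improves on is easy to reproduce for the homography case: a fixed inlier threshold (and, implicitly, an assumed outlier ratio via the iteration budget) decides which correspondences survive. The fuzzy classification itself is not reproduced here; the data below are synthetic.

```python
import cv2
import numpy as np

# Synthetic correspondences: points related by a known homography, plus gross outliers.
rng = np.random.default_rng(0)
H_true = np.array([[1.0, 0.02, 5.0], [-0.01, 1.0, -3.0], [1e-4, 0.0, 1.0]])
src = rng.uniform(0, 640, (200, 2))
src_h = np.hstack([src, np.ones((200, 1))])
dst = (H_true @ src_h.T).T
dst = dst[:, :2] / dst[:, 2:]
dst[:50] += rng.uniform(-80, 80, (50, 2))        # 25% gross outliers

# Plain RANSAC: the 3-pixel threshold is fixed a priori, which is exactly the
# kind of prior knowledge about the noise/outlier structure FRANSAC tries to avoid.
H_est, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print("inliers kept:", int(mask.sum()), "of", len(src))
```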

Classification of Feature Points Required for Multi-Frame Based Building Recognition (멀티 프레임 기반 건물 인식에 필요한 특징점 분류)

  • Park, Si-young;An, Ha-eun;Lee, Gyu-cheol;Yoo, Ji-sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.3 / pp.317-327 / 2016
  • The extraction of significant feature points from a video is directly related to the performance of the proposed method. In particular, feature points extracted from occluding regions such as trees or people, or from the background (e.g., the sky or mountains) rather than from the target object, are insignificant and can degrade matching and recognition performance. This paper classifies the feature points required for building recognition using multiple frames in order to improve recognition performance. First, the primary feature points are extracted with SIFT (scale-invariant feature transform) and mismatched feature points are removed. To categorize the feature points in occluded regions, RANSAC (random sample consensus) is applied. Since the classified feature points are acquired through matching, a single feature point accumulates multiple descriptors, so a process that consolidates them is also proposed. Experiments verify the effectiveness of the suggested method.
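A simplified sketch of the per-frame-pair classification: SIFT points that agree with a single RANSAC homography between consecutive frames are treated as lying on the rigid building, and only those points (with their descriptors) are accumulated per track. Function and variable names are illustrative, and the paper's multi-frame descriptor consolidation is not shown.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def building_inliers(prev_gray, cur_gray):
    """Keep SIFT matches consistent with one homography (the building facade)."""
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(cur_gray, None)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    _, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 4.0)
    keep = mask.ravel() == 1
    # Points rejected here typically lie on trees, people, or the sky; only the
    # inliers and their descriptors are handed on to the per-track accumulation.
    return [(kp2[m.trainIdx].pt, des2[m.trainIdx]) for m, k in zip(good, keep) if k]
```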

An User-Friendly Method of Image Warping for Traffic Monitoring System (실시간 교통상황 모니터링 시스템을 위한 유저 친화적인 영상 변형 방법)

  • Yi, Chuho;Cho, Jungwon
    • Journal of Digital Convergence / v.14 no.12 / pp.231-236 / 2016
  • Currently, traffic monitoring services based on surveillance cameras are provided through the internet. In general, when the user points to a location on a map, the service shows the real-time image of the camera mounted there. In this paper, we propose an intuitive surveillance monitoring system that displays the real-time camera image on the map, warped to a bird's-eye view with the top of the image oriented to the north. To robustly estimate the road plane from the camera image, we use motion vectors detected from changes in brightness. We apply a re-adjustment step so that the warped image has the same orientation as the map, and we present a user-friendly interface that displays it on the map. In the experiments, the proposed method produced warped images that the user can read as easily as a map.
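A minimal sketch of the display step (bird's-eye warp followed by a rotation so the top of the image points north). The road-plane homography and the heading angle would come from the motion-vector estimation and camera metadata described in the abstract; here they are placeholders.

```python
import cv2
import numpy as np

frame = cv2.imread("traffic_cam.jpg")            # placeholder camera frame
size = (400, 400)                                 # output canvas (width, height)

# Homography from the camera image to a top view of the road plane.
# In the paper this comes from brightness-change motion vectors; the matrix
# below is a dummy standing in for that estimate.
H_plane = np.float64([[0.8, -0.4, 120], [0.05, 1.6, -200], [0, 1.2e-3, 1]])
bird = cv2.warpPerspective(frame, H_plane, size)

# Re-adjust so the top of the warped image points north, matching the map.
heading_deg = 35.0                                # camera heading, assumed known
R = cv2.getRotationMatrix2D((size[0] / 2, size[1] / 2), heading_deg, 1.0)
bird_north_up = cv2.warpAffine(bird, R, size)
```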

A Moving Camera Localization using Perspective Transform and Klt Tracking in Sequence Images (순차영상에서 투영변환과 KLT추적을 이용한 이동 카메라의 위치 및 방향 산출)

  • Jang, Hyo-Jong;Cha, Jeong-Hee;Kim, Gye-Young
    • The KIPS Transactions:PartB / v.14B no.3 s.113 / pp.163-170 / 2007
  • In the autonomous navigation of a mobile vehicle or mobile robot, localization computed from recognizing the environment is the most important factor. In general, the position and pose of a camera-equipped mobile vehicle or robot can be determined using INS and GPS, but in that case enough known ground landmarks are needed for accurate localization. In contrast to the homography method, which computes the camera position and pose using only the relation between two-dimensional feature points in two frames, this paper proposes a method that computes the camera position and pose from the relation between the locations predicted by perspective transformation of 3D feature points, obtained by overlaying a 3D model onto the previous frame using GPS and INS input, and the locations of the corresponding feature points found in the current frame by KLT tracking. For the performance evaluation, we use a wireless-controlled vehicle equipped with a CCD camera, GPS, and INS, and compute the location and rotation angle of the camera from a video stream captured at a 15 Hz frame rate.
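A bare-bones sketch of the two ingredients in the abstract: KLT tracking of the projected model points into the current frame, followed by pose recovery from the resulting 3D-2D correspondences. OpenCV's solvePnP is used here as a generic stand-in for the paper's own computation, and all inputs are placeholders.

```python
import cv2
import numpy as np

def update_pose(prev_gray, cur_gray, model_pts_3d, proj_pts_2d, K):
    """Track projected 3-D model points with KLT and recover the camera pose."""
    p0 = proj_pts_2d.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None,
                                             winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    obj = model_pts_3d[ok].astype(np.float64)       # 3-D points from the overlaid model
    img = p1.reshape(-1, 2)[ok].astype(np.float64)  # their tracked 2-D locations
    # Camera position and orientation from the 3D-2D correspondences.
    _, rvec, tvec = cv2.solvePnP(obj, img, K, None)
    return rvec, tvec
```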

3-D Pose Estimation of an Elliptic Object Using Two Coplanar Points (두 개의 공면점을 활용한 타원물체의 3차원 위치 및 자세 추정)

  • Kim, Heon-Hui;Park, Kwang-Hyun;Ha, Yun-Su
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.4 / pp.23-35 / 2012
  • This paper presents a 3-D pose (position and orientation) estimation method for an elliptic object in 3-D space. It is difficult to determine the 3-D pose parameters of an elliptic feature in 3-D space solely by interpreting its projection onto an image plane. As an alternative, we propose a two-point-based pose estimation algorithm that recovers the 3-D information of an elliptic feature. The proposed algorithm uniquely determines a homogeneous transformation for a given correspondence set of an ellipse and two coplanar points defined on the model and image planes, respectively. For each plane, two triangular features are extracted from the ellipse and the two points based on their polarity in 2-D projection space. A planar homography is first estimated from the triangular feature correspondences and then decomposed into the 3-D pose parameters. The proposed method is evaluated through a series of experiments analyzing the 3-D pose estimation errors and the sensitivity to point locations.
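The final step of such a pipeline, decomposing an estimated planar homography into rotation and translation, can be sketched with OpenCV's built-in decomposition. The homography and intrinsics below are placeholders, and the paper's own disambiguation based on the two coplanar points is not reproduced.

```python
import cv2
import numpy as np

K = np.float64([[900, 0, 320], [0, 900, 240], [0, 0, 1]])   # assumed intrinsics

# Planar homography estimated from the triangular feature correspondences
# (dummy values standing in for that estimate).
H = np.float64([[0.95, -0.05, 12.0], [0.04, 1.02, -7.0], [1e-4, 2e-4, 1.0]])

# decomposeHomographyMat returns up to four (R, t, n) candidates; the physically
# valid one must be selected with extra constraints (e.g., points in front of
# the camera), which the paper resolves using its two coplanar points.
n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
for R, t, n in zip(rotations, translations, normals):
    print("candidate R:\n", R, "\nt:", t.ravel(), "plane normal:", n.ravel())
```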