• Title/Summary/Keyword: Feature Point Matching


Hierarchical Active Shape Model-based Motion Estimation for Real-time Tracking of Non-rigid Object (계층적 능동형태 모델을 이용한 비정형 객체의 움직임 예측형 실시간 추적)

  • 강진영;이성원;신정호;백준기
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.5 / pp.1-11 / 2004
  • In this paper we propose a hierarchical ASM for real-time tracking of non-rigid objects. ASM is used to estimate the object contour, possibly under occlusion, and a hierarchical approach reduces the processing time to enable real-time tracking. The initial feature points in the next frame are estimated using a Kalman filter, and a block matching algorithm is added to increase the estimation accuracy. The proposed hierarchical, prediction-based approach was shown to outperform existing non-hierarchical, non-predictive methods.
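The prediction step described in this abstract can be illustrated with a minimal 1-D constant-velocity Kalman filter applied per coordinate of a tracked feature point. The noise parameters below are illustrative assumptions, not values from the paper.

```python
# Minimal 1-D constant-velocity Kalman filter, applied per coordinate to
# predict a feature point's position in the next frame. The process and
# measurement noise values (q, r) are made-up illustrative parameters.

def kalman_predict_next(measurements, dt=1.0, q=1e-3, r=1.0):
    """Track position/velocity from measurements; return the predicted next position."""
    x, v = measurements[0], 0.0           # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    for z in measurements[1:]:
        # Predict: x' = F x, P' = F P F^T + Q, with F = [[1, dt], [0, 1]]
        x, v = x + dt * v, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with measurement z (H = [1, 0])
        S = P[0][0] + r
        K0, K1 = P[0][0] / S, P[1][0] / S
        y = z - x
        x, v = x + K0 * y, v + K1 * y
        P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
             [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
    return x + dt * v                     # extrapolated position for the next frame

# A feature point moving roughly 2 px/frame: the filter extrapolates the motion.
track = [0.0, 2.1, 3.9, 6.0, 8.1]
pred = kalman_predict_next(track)
```

In a tracker like the one described, this prediction would seed the ASM/block-matching search window in the next frame rather than replace it.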

A Feature Point Extraction and Identification Technique for Immersive Contents Using Deep Learning (딥 러닝을 이용한 실감형 콘텐츠 특징점 추출 및 식별 방법)

  • Park, Byeongchan;Jang, Seyoung;Yoo, Injae;Lee, Jaechung;Kim, Seok-Yoon;Kim, Youngmo
    • Journal of IKEEE / v.24 no.2 / pp.529-535 / 2020
  • As a main technology of the 4th industrial revolution, immersive 360-degree video contents are drawing attention. The worldwide market size of immersive 360-degree video contents is projected to increase from $6.7 billion in 2018 to approximately $70 billion in 2020. However, most immersive 360-degree video contents are distributed through illegal distribution networks such as Webhard and Torrent, and the damage caused by illegal reproduction is increasing. The existing 2D video industry uses copyright filtering technology to prevent such illegal distribution. The technical difficulties in dealing with immersive 360-degree videos arise because they require ultra-high-quality pictures and contain images captured by two or more cameras merged into one image, which creates distortion regions. There are also technical limitations such as the increase in the amount of feature point data due to the ultra-high definition and the resulting processing speed requirements. These considerations make it difficult to apply the same 2D filtering technology to 360-degree videos. To solve this problem, this paper proposes a feature point extraction and identification technique that selects object identification areas excluding regions with severe distortion, recognizes objects in those areas using deep learning, and extracts feature points using the identified object information. Compared with the previously proposed method of extracting feature points using the stitching area of immersive contents, the proposed technique shows an excellent performance gain.
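The idea of restricting feature extraction to low-distortion identification areas can be sketched as a simple candidate-point filter on an equirectangular 360-degree frame. This is not the paper's implementation: the polar-band ratio and the stitching-seam columns below are invented parameters for illustration only.

```python
# Illustrative sketch: before feature extraction on an equirectangular
# 360-degree frame, drop candidate points that fall in the heavily
# distorted polar bands or near assumed stitching seams. The band ratio
# and seam positions are hypothetical, not taken from the paper.

def identification_region(points, width, height, polar_ratio=0.2, seams=()):
    """Keep only candidate points outside the polar distortion bands and seams."""
    top, bottom = height * polar_ratio, height * (1 - polar_ratio)
    kept = []
    for (x, y) in points:
        if not (top <= y <= bottom):
            continue                      # inside a polar distortion band
        if any(abs(x - s) < width * 0.02 for s in seams):
            continue                      # too close to a stitching seam
        kept.append((x, y))
    return kept

pts = [(100, 50), (100, 500), (1920, 500), (3000, 980)]
kept = identification_region(pts, width=3840, height=1000, seams=(1920,))
```

In the paper's pipeline, a deep-learning object detector would then run only inside the surviving areas before feature points are extracted.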

Correspondence Matching of Stereo Images by Sampling of Planar Region in the Scene Based on RANSAC (RANSAC에 기초한 화면내 평면 영역 샘플링에 의한 스테레오 화상의 대응 매칭)

  • Jung, Nam-Chae
    • Journal of the Institute of Convergence Signal Processing / v.12 no.4 / pp.242-249 / 2011
  • In this paper, a correspondence matching method for stereo images is proposed that samples the projective transformation matrix of planar regions in the scene. Although the method is based on RANSAC, it does not use the uniform distribution of standard random sampling; instead, it uses multiple non-uniform distributions computed from differences in feature point positions or from template matching. Existing methods sample correspondences presumed correct using conditions that correct correspondences almost always satisfy, and apply RANSAC after reducing the matches to one-to-one; by sampling in stages from multiple probability distributions computed for the image, the proposed method can effectively draw high-probability correct correspondences from among multiple correspondence candidates. As a result, many correct correspondences were obtained, and the effectiveness of the proposed method was verified in simulations and in experiments on real images.
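The non-uniform sampling idea can be sketched with a toy RANSAC in which candidates are drawn with probability proportional to a matching score. Fitting a 2-D line stands in for estimating a planar projective transformation, purely to keep the example short; the scores are simulated, not the paper's template-matching values.

```python
import random

# Sketch of RANSAC with non-uniform sampling: each correspondence
# candidate is drawn with probability proportional to a matching score,
# instead of uniformly. A 2-D line model stands in for the planar
# projective transformation of the paper.

def ransac_line(points, scores, iters=200, tol=0.1, rng=random.Random(7)):
    best, best_inliers = None, -1
    for _ in range(iters):
        # Non-uniform (score-weighted) sampling of a minimal set
        (x1, y1), (x2, y2) = rng.choices(points, weights=scores, k=2)
        if x1 == x2:
            continue                      # degenerate sample
        a = (y2 - y1) / (x2 - x1)         # slope
        b = y1 - a * x1                   # intercept
        inliers = sum(abs(y - (a * x + b)) <= tol for x, y in points)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# Eight points on y = 2x + 1 (high scores) plus two outliers (low scores).
pts = [(x, 2 * x + 1) for x in range(8)] + [(1, 9), (5, 0)]
scores = [1.0] * 8 + [0.1, 0.1]
(a, b), n = ransac_line(pts, scores)
```

Weighting the draw toward high-score candidates is what lets correct correspondences be sampled early even when many candidates per point exist.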

Relative Localization for Mobile Robot using 3D Reconstruction of Scale-Invariant Features (스케일불변 특징의 삼차원 재구성을 통한 이동 로봇의 상대위치추정)

  • Kil, Se-Kee;Lee, Jong-Shill;Ryu, Je-Goon;Lee, Eung-Hyuk;Hong, Seung-Hong;Shen, Dong-Fan
    • The Transactions of the Korean Institute of Electrical Engineers D / v.55 no.4 / pp.173-180 / 2006
  • A key component of the autonomous navigation of an intelligent home robot is localization and map building with features recognized from the environment. To achieve this, accurate measurement of the relative location between the robot and the features is essential. In this paper, we propose a relative localization algorithm based on the 3D reconstruction of scale-invariant features in two images captured by two parallel cameras. We capture two images from parallel cameras mounted at the front of the robot and detect scale-invariant features in each image using SIFT (scale-invariant feature transform). We then match the feature points of the two images and obtain the relative location through 3D reconstruction of the matched points. A stereo camera requires high-precision extrinsic calibration of the two cameras and precise pixel matching between the two camera images; because we use two separate cameras together with scale-invariant feature points rather than a stereo camera, the extrinsic parameters are easy to set up. Furthermore, the 3D reconstruction needs no additional sensor, and its results can be used simultaneously for obstacle avoidance, map building, and localization. We set the distance between the two cameras to 20cm and captured 3 frames per second. The experimental results show a maximum error of ±6cm in the range below 2m and a maximum error of ±15cm in the range between 2m and 4m.
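For parallel cameras, the 3-D reconstruction step reduces to triangulation from disparity: with focal length f (in pixels) and baseline B, a matched pair with disparity d = xL - xR lies at depth Z = f·B/d. The sketch below uses illustrative camera parameters, not the paper's calibration.

```python
# Minimal triangulation for matched feature points from two parallel
# cameras: depth Z = f * B / d with disparity d = xL - xR, then
# back-projection X = x * Z / f, Y = y * Z / f. Parameters are
# illustrative, not the paper's calibration values.

def reconstruct(matches, f, baseline):
    """matches: list of ((xL, yL), (xR, yR)) pixel pairs -> list of (X, Y, Z)."""
    points = []
    for (xl, yl), (xr, yr) in matches:
        d = xl - xr                       # disparity in pixels
        if d <= 0:
            continue                      # invalid or infinitely distant match
        z = f * baseline / d              # depth along the optical axis
        points.append((xl * z / f, yl * z / f, z))
    return points

# f = 500 px, baseline 0.2 m (the paper's 20 cm), disparity 50 px -> Z = 2 m
pts3d = reconstruct([((100.0, 40.0), (50.0, 40.0))], f=500.0, baseline=0.2)
```

Because SIFT matches are sparse but reliable, a handful of such reconstructed points already constrains the robot's relative pose.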

A study on image registration and fusion of MRI and SPECT/PET (뇌의 단일 광자 방출 전산화 단층촬영 영상, 양전자 방출 단층 촬영 영상 그리고 핵자기공명 영상의 융합과 등록에 관한 연구)

  • Joo, Ra-Hyung;Choi, Yong;Kwon, Soo-Il;Heo, Soo-Jin
    • Progress in Medical Physics / v.9 no.1 / pp.47-53 / 1998
  • Nuclear medicine images have comparatively poor spatial resolution, making it difficult to relate the functional information they contain to precise anatomical structures. Anatomical structures useful in the interpretation of SPECT/PET images were radiolabelled. SPECT/PET images provide functional information, whereas MRI mainly demonstrates morphology and anatomy. Fusion, or image registration, improves the information obtained by correlating images from different modalities. Brain scans were acquired on one or more occasions using MRI and SPECT. The data were aligned using a point-pair method and surface matching. Registration of SPECT and MR images was tested using a three-dimensional water-fillable Hoffman brain phantom with small markers, and registration of PET and MR images was tested using patient data. Registration of SPECT and MR images is feasible and allows more accurate anatomic assessment of sites of abnormal uptake in radiolabeled studies. Point-based registration was accurate and easily implemented for three-dimensional registration of multimodality data sets, enabling the fusion of clinical anatomic and functional imaging modalities. The accuracy of a surface matching algorithm and of homologous feature-pair matching for three-dimensional image registration of Single Photon Emission Computed Tomography (SPECT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) was tested using a three-dimensional water-fillable brain phantom and patient data. Transformation parameters for translation and scaling were determined from homologous feature point pairs to match each SPECT and PET scan with the MR images.
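Determining translation and scaling from homologous point pairs, as this abstract describes, has a standard closed-form least-squares solution per axis. The sketch below is one common way to do it, not necessarily the authors' exact procedure, and the marker coordinates are invented.

```python
# Hedged sketch of point-pair registration: given homologous marker
# points in two modalities, fit dst ~ s * src + t per axis by least
# squares (closed form from means, variance, and covariance).

def fit_scale_translation(src, dst):
    """Return [(scale, translation), ...], one pair per coordinate axis."""
    params = []
    for axis in range(len(src[0])):
        p = [pt[axis] for pt in src]
        q = [pt[axis] for pt in dst]
        mp, mq = sum(p) / len(p), sum(q) / len(q)
        var = sum((a - mp) ** 2 for a in p)
        cov = sum((a - mp) * (b - mq) for a, b in zip(p, q))
        s = cov / var                     # least-squares scale
        params.append((s, mq - s * mp))   # translation from the means
    return params

# Hypothetical SPECT markers mapped into MR space by (2x + 5, 0.5y - 1):
spect = [(0, 0), (10, 20), (20, 40), (30, 10)]
mr = [(x * 2 + 5, y * 0.5 - 1) for x, y in spect]
params = fit_scale_translation(spect, mr)
```

With noisy clinical markers the same formulas give the best-fit parameters rather than an exact recovery.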


A study on localization and compensation of mobile robot using fusion of vision and ultrasound (영상 및 거리정보 융합을 이용한 이동로봇의 위치 인식 및 오차 보정에 관한 연구)

  • Jang, Cheol-Woong;Jung, Ki-Ho;Jung, Dae-Sub;Ryu, Je-Goon;Shim, Jae-Hong;Lee, Eung-Hyuk
    • Proceedings of the KIEE Conference / 2006.10c / pp.554-556 / 2006
  • A key capability of an autonomous mobile robot is to localize itself. In this paper we suggest vision-based localization with compensation of the robot's location using ultrasound. The mobile robot travels along walls, searches for features in the indoor environment, transforms these points into the absolute coordinates of the actual environment, and builds a map; by traveling along the walls it also gathers information about the environment. Localization finds candidate positions of the robot by ultrasound and decides the final position among the candidates by feature matching.


A Simple Fingerprint Fuzzy Vault for FIDO

  • Cho, Dongil
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.11 / pp.5674-5691 / 2016
  • Fast IDentity Online (FIDO) supports biometric authentication in an online environment without transmitting biometric templates over the network. For a given FIDO client, the "Fuzzy Vault" securely stores biometric templates, houses additional biometric templates, and unlocks private keys via biometrics. The Fuzzy Vault has been extensively researched and some vulnerabilities have been discovered, such as brute force, correlation, and key inversion attacks. In this paper, we propose a simple fingerprint Fuzzy Vault for FIDO clients. By using the FIDO feature, a simple minutiae alignment, and point-to-point matching, our Fuzzy Vault provides a secure algorithm against a variety of attacks, such as brute force, correlation, and key inversion. In a case study, we verified our Fuzzy Vault using a publicly available fingerprint database. The results of our experiments show that the Genuine Acceptance Rate and the False Acceptance Rate range from 48.89% to 80% and from 0.02% to 0%, respectively. In addition, our Fuzzy Vault needed fewer attempts than existing similar technologies.
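The point-to-point matching step mentioned above can be sketched as a greedy nearest-point count between two minutiae sets. This toy omits the vault's polynomial machinery and the alignment step entirely; the distance threshold and coordinates are invented for illustration.

```python
import math

# Toy sketch of point-to-point minutiae matching: count template
# minutiae that fall within a distance threshold of an unused query
# minutia, then report the match ratio. The threshold is hypothetical.

def match_ratio(template, query, dist_thresh=5.0):
    """Greedy one-to-one matching; returns fraction of template minutiae matched."""
    matched, used = 0, set()
    for tx, ty in template:
        for i, (qx, qy) in enumerate(query):
            if i not in used and math.hypot(tx - qx, ty - qy) <= dist_thresh:
                matched += 1
                used.add(i)               # each query minutia matches at most once
                break
    return matched / len(template)

genuine = [(10, 10), (40, 22), (70, 55)]
probe = [(11, 9), (41, 23), (90, 90)]     # two close matches, one miss
ratio = match_ratio(genuine, probe)
```

In a real vault, matched genuine points (as opposed to chaff points) are what allow the secret polynomial, and hence the private key, to be reconstructed.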

Image Registration Method for KOMPSAT-2 clouds imagery (구름이 존재하는 아리랑 2호 영상의 영상정합 방법)

  • Kim, Tae-Young;Choi, Myung-Jin
    • Proceedings of the KSRS Conference / 2009.03a / pp.250-253 / 2009
  • The multispectral sensors of a satellite designed for high-resolution color imaging capture the same area at slightly different times, depending on the position of each sensor on the payload. If a moving cloud is imaged, the relative position between the cloud and the ground therefore differs between the images captured by the different sensors. To generate a high-resolution color satellite image, an image registration technique is used, and general image registration algorithms assume that the feature points in the captured images do not move. As a result, when matching points are extracted on the boundary of a moving cloud, the registration quality in the ground region is poor. This study therefore proposes an algorithm that does not extract matching points at cloud boundaries. KOMPSAT-2 images containing clouds were used as test images, and the proposed registration algorithm was shown to improve the registration quality in the ground region.


SOSiM: Shape-based Object Similarity Matching using Shape Feature Descriptors (SOSiM: 형태 특징 기술자를 사용한 형태 기반 객체 유사성 매칭)

  • Noh, Chung-Ho;Lee, Seok-Lyong;Chung, Chin-Wan;Kim, Sang-Hee;Kim, Deok-Hwan
    • Journal of KIISE:Databases / v.36 no.2 / pp.73-83 / 2009
  • In this paper we propose an object similarity matching method based on the shape characteristics of an object in an image. The proposed method extracts edge points from the edges of objects and generates a log-polar histogram for each edge point to represent the relative placement of the extracted points. It performs matching by comparing the log-polar histograms of two edge points sequentially along the edges of the objects, and uses the well-known k-NN (nearest neighbor) approach to retrieve similar objects from a database. To verify the proposed method, we compared it to the existing Shape-Context method. Experimental results show that our method is more accurate in object matching than the existing method: when k=5, the precision of our method is 0.75-0.90 while that of the existing one is 0.37, and when k=10, the precision of our method is 0.61-0.80 while that of the existing one is 0.31. In the rotational transformation experiment, our method is also more robust, with a precision of 0.69 compared to 0.30 for the existing method.
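A log-polar histogram of the kind described can be sketched directly: for a reference edge point, every other edge point is binned by log-distance and angle. The bin counts below (5 radial × 12 angular) are typical shape-context choices, not necessarily the paper's.

```python
import math

# Sketch of a log-polar histogram for one reference edge point: bin the
# other edge points by log-spaced radius and uniform angle. Bin counts
# and the radius cutoff are illustrative defaults.

def log_polar_histogram(ref, points, r_bins=5, a_bins=12, r_max=100.0):
    hist = [[0] * a_bins for _ in range(r_bins)]
    rx, ry = ref
    for x, y in points:
        if (x, y) == ref:
            continue
        r = math.hypot(x - rx, y - ry)
        if r == 0 or r > r_max:
            continue                      # outside the histogram's support
        # log-spaced radial bin, uniform angular bin
        ri = min(r_bins - 1, int(r_bins * math.log(r + 1) / math.log(r_max + 1)))
        ai = int(((math.atan2(y - ry, x - rx) + math.pi) / (2 * math.pi)) * a_bins) % a_bins
        hist[ri][ai] += 1
    return hist

edge = [(0, 0), (3, 0), (0, 4), (-5, 0), (30, 40)]
h = log_polar_histogram((0, 0), edge)
total = sum(sum(row) for row in h)
```

Matching then compares such histograms (e.g., by a chi-squared distance) for edge points taken sequentially along the two contours.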

Image Mosaicking Using Feature Points Based on Color-invariant (칼라 불변 기반의 특징점을 이용한 영상 모자이킹)

  • Kwon, Oh-Seol;Lee, Dong-Chang;Lee, Cheol-Hee;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.2 / pp.89-98 / 2009
  • In the field of computer vision, image mosaicking is a common method for effectively extending the restricted field of view of a camera by combining a set of separate images into a single seamless image. Image mosaicking based on feature points has recently become a focus of research because it allows simple estimation of the geometric transformation regardless of the distortions and intensity differences generated by camera motion across consecutive images. Yet, since most feature-point matching algorithms extract feature points from gray values, identifying corresponding points becomes difficult under changing illumination and in images with similar intensities. Accordingly, to solve these problems, this paper proposes an image mosaicking method based on feature points that uses the color information of the images. Essentially, the digital values acquired from a digital color camera are converted to the values of a virtual camera with distinct narrow bands. Values based on the surface reflectance and invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values. The validity of these color-invariant values is verified in a test using a Macbeth Color-Checker under simulated illuminations. The test also compares the proposed method using the color-invariant values with the conventional SIFT algorithm. The accuracy of matching between feature points extracted using the proposed method is increased, and image mosaicking using color information is achieved.
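The general idea of an illumination-invariant color value can be illustrated with the textbook normalized-chromaticity invariant, which is unchanged under a uniform scaling of the illuminant. Note this is a stand-in for illustration: the paper derives its invariants from a virtual narrow-band camera model, not from this formula.

```python
# Textbook illumination invariant: normalized chromaticity
# r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B) is unchanged when the
# illuminant intensity is scaled uniformly. Used here only to
# illustrate the concept of a color-invariant value.

def chromaticity(rgb):
    """Return (r, g, b) chromaticity; zero pixel maps to (0, 0, 0)."""
    s = sum(rgb)
    return tuple(c / s for c in rgb) if s else (0.0, 0.0, 0.0)

pixel = (120, 60, 20)
dimmed = tuple(c * 0.5 for c in pixel)    # same surface under a darker illuminant
inv1, inv2 = chromaticity(pixel), chromaticity(dimmed)
```

Feature points described by such invariant values can be matched across frames even when the illumination changes between them, which is the property the paper exploits for mosaicking.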