• Title/Summary/Keyword: feature point


Map Alignment Method in Monocular SLAM based on Point-Line Feature (특징점과 특징선을 활용한 단안 카메라 SLAM에서의 지도 병합 방법)

  • Back, Mu Hyun;Lee, Jin Kyu;Moon, Ji Won;Hwang, Sung Soo
    • Journal of Korea Multimedia Society / v.23 no.2 / pp.127-134 / 2020
  • In this paper, we propose a map alignment method for maps generated by point-line monocular SLAM. In the proposed method, the information of feature lines as well as feature points extracted from multiple maps is fused into a single map. To this end, the proposed method first searches for similar areas between maps via Bag-of-Words-based image matching. It then calculates the similarity transformation between the maps in the corresponding areas to align them. Finally, we merge the overlapping information of the multiple maps into a single map by removing duplicate information from the similar areas. Experimental results show that maps created by different users are combined into a single map, and that the accuracy of the fused map is similar to that of a map generated by a single user. We expect that the proposed method can be utilized for fast imagery map generation.
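
The alignment step sketched in the abstract above, estimating a similarity transformation between corresponding areas of two maps, can be illustrated with a minimal example. The sketch below is not the authors' implementation: it assumes two small synthetic sets of already-matched 2D keypoints (pts_map_a and pts_map_b, both hypothetical) and uses OpenCV's estimateAffinePartial2D as a planar stand-in for the full Sim(3) alignment a monocular SLAM map would require.

```python
# Hedged sketch: estimate a 2D similarity transform between already-matched
# keypoints from two maps. The arrays and parameters are synthetic; the
# paper's full Sim(3) alignment of monocular SLAM maps is not reproduced.
import numpy as np
import cv2

pts_map_a = np.array([[10, 12], [40, 80], [95, 33], [60, 150], [130, 90]],
                     dtype=np.float32)
theta, s, tx, ty = np.deg2rad(5.0), 1.1, 12.0, -4.0   # assumed ground truth
S = s * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
pts_map_b = (pts_map_a @ S.T + [tx, ty]).astype(np.float32)

# RANSAC estimation of rotation + uniform scale + translation (4 DoF).
M, inliers = cv2.estimateAffinePartial2D(pts_map_a, pts_map_b, method=cv2.RANSAC)
print("estimated 2x3 similarity matrix:\n", M)
```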

Robust Features and Accurate Inliers Detection Framework: Application to Stereo Ego-motion Estimation

  • MIN, Haigen;ZHAO, Xiangmo;XU, Zhigang;ZHANG, Licheng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.1 / pp.302-320 / 2017
  • In this paper, an innovative robust feature detection and matching strategy for visual odometry based on stereo image sequences is proposed. First, AKAZE, a sparse multiscale 2D local invariant feature detection and description algorithm, is adopted to extract the interest points, and a robust feature matching strategy is introduced to match the AKAZE descriptors. In order to remove outliers, which are mismatched features or features on dynamic objects, an improved random sample consensus (RANSAC) outlier rejection scheme is presented, so the proposed method can be applied to dynamic environments. Then, geometric constraints are incorporated into the motion estimation without time-consuming 3-dimensional scene reconstruction. Last, an iterated sigma-point Kalman filter is adopted to refine the motion results. The presented ego-motion scheme is applied to benchmark datasets and compared with state-of-the-art approaches on data captured on campus in a considerably cluttered environment, where its superiority is demonstrated.
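
A rough illustration of the front end described above (AKAZE detection, descriptor matching, and RANSAC-style outlier rejection) is given below using stock OpenCV calls; the file names left.png and right.png are placeholders, and the paper's improved outlier rejection scheme and iterated sigma-point Kalman filter are not reproduced.

```python
# Hedged sketch of the AKAZE + matching + RANSAC front end only; the paper's
# improved outlier rejection and Kalman refinement are not reproduced.
import cv2
import numpy as np

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(img1, None)
kp2, des2 = akaze.detectAndCompute(img2, None)

# Ratio-test matching of the binary AKAZE descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
           if m.distance < 0.8 * n.distance]

# Plain RANSAC on the fundamental matrix as a stand-in for the improved scheme.
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
print("kept", int(mask.sum()), "of", len(matches), "matches as inliers")
```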

An Efficient Feature Point Extraction and Comparison Method through Distorted Region Correction in 360-degree Realistic Contents

  • Park, Byeong-Chan;Kim, Jin-Sung;Won, Yu-Hyeon;Kim, Young-Mo;Kim, Seok-Yoon
    • Journal of the Korea Society of Computer and Information / v.24 no.1 / pp.93-100 / 2019
  • One of the critical issues in dealing with 360-degree realistic contents is the performance degradation in the search and recognition process, since such contents support up to 4K UHD quality and cover all viewing angles, including the front, back, left, right, top, and bottom parts of a screen. To solve this problem, in this paper, we propose an efficient search and comparison method for 360-degree realistic contents. The proposed method first corrects the distortion in the less distorted regions, such as the front, left, and right parts of the image, excluding severely distorted regions such as the upper and lower parts; it then extracts feature points from the corrected regions and selects representative images through sequence classification. When a query image is input, search results are provided through feature point comparison. The experimental results show that the proposed method can resolve the performance deterioration that occurs when 360-degree realistic contents are recognized in the same way as traditional 2D contents.
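
A minimal sketch of the region-restricted feature extraction described above follows; it simply keeps the less-distorted equatorial band of an equirectangular frame and matches ORB feature points against a query image. The file names are placeholders, and the paper's actual distortion correction and sequence classification are not reproduced.

```python
# Hedged sketch: keep only the less-distorted equatorial band of an
# equirectangular 360-degree frame, then extract and compare feature points.
# "frame.png" and "query.png" are placeholder inputs.
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
query = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)

h = frame.shape[0]
band = frame[h // 4 : 3 * h // 4, :]   # drop severely distorted top/bottom quarters

orb = cv2.ORB_create(nfeatures=2000)
kp_b, des_b = orb.detectAndCompute(band, None)
kp_q, des_q = orb.detectAndCompute(query, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_q, des_b)
print("matched", len(matches), "feature points against the corrected band")
```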

Determination of Camera System Orientation and Translation in Cartesian Coordinate (직교 좌표에서 카메라 시스템의 방향과 위치 결정)

  • 이용중
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2000.04a / pp.109-114 / 2000
  • A new method for determining camera system rotation and translation in 3-D space using a recursive least squares method is presented in this paper. With this method, the solution is found by a linear algorithm, where the equations are either given or obtained from five or more point correspondences. Good results can be obtained when more than eight points are available. A main advantage of this new method is that it decouples rotation and translation and thereby reduces computation. With respect to error in the solution as a function of the number of points in the input image data, adding one more feature correspondence to the required minimum improves the solution accuracy drastically; however, further increases in the number of feature correspondences improve the accuracy only slowly. The algorithm proposed in this paper makes the camera system rotation and translation easy to recognize even when the camera system is attached to the end effector of a six-degree-of-freedom industrial robot manipulator used in the field.
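
The claim above, that rotation and translation can be recovered linearly from eight or more point correspondences, can be illustrated with the standard essential-matrix route in OpenCV; this is a sketch under an assumed intrinsic matrix K and a synthetic scene, not the paper's recursive least-squares formulation.

```python
# Hedged sketch: recover camera rotation R and translation t (up to scale)
# from >= 8 point correspondences via the essential matrix. This is the
# standard OpenCV route, not the paper's recursive least-squares method;
# the intrinsics K and the synthetic scene below are assumptions.
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

rng = np.random.default_rng(0)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(30, 3))  # 3D points
R_true, _ = cv2.Rodrigues(np.array([[0.0], [0.1], [0.0]]))         # small yaw
t_true = np.array([0.2, 0.0, 0.0])                                 # baseline

def project(P, R, t):
    Pc = P @ R.T + t                      # camera-frame coordinates
    uv = (K @ Pc.T).T
    return (uv[:, :2] / uv[:, 2:]).astype(np.float32)

pts1 = project(X, np.eye(3), np.zeros(3))
pts2 = project(X, R_true, t_true)

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("recovered R:\n", R, "\nrecovered unit-scale t:", t.ravel())
```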

Cylinder-based Angular Interpolation for Efficient Feature Point Matching in AR Environments (AR환경에서 특징 포인트를 효율적으로 매칭하기 위한 실린더 기반의 각도 보간)

  • Moon, YeRin;Kim, Jong-Hyun
    • Proceedings of the Korean Society of Computer Information Conference / 2022.01a / pp.365-368 / 2022
  • In this paper, we propose a cylinder-based angular interpolation technique for efficiently matching feature points in situations where virtual objects must be augmented precisely, without error relative to the real world. A typical approach to augmenting objects in augmented reality tracks feature points, constructs a single plane such as a floor or wall from the point set using the RANSAC algorithm, and augments the object on that plane. Because this method relies on a plane, its computational cost is low, but errors in the augmentation position remain, so objects are sometimes placed in the wrong location. In particular, when augmented reality is used in medical facilities or road construction, even a small real-world deviation in the position or size of an augmented virtual object can lead to serious accidents. This paper shows that objects can be augmented accurately using cylinder-based angular interpolation, which matches feature points efficiently without constructing a plane.
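
For context, the conventional pipeline the abstract contrasts against (fitting one plane such as a floor to the tracked point set with RANSAC and anchoring the object on it) can be sketched as follows; the proposed cylinder-based angular interpolation itself is not reproduced, and the point cloud below is synthetic.

```python
# Hedged sketch of the baseline the paper improves on: RANSAC plane fitting
# over tracked 3D feature points (e.g., a floor). The proposed cylinder-based
# angular interpolation is not reproduced; the point cloud here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200),
                         rng.normal(0.0, 0.01, 200)])          # noisy z ~ 0 plane
clutter = rng.uniform(-1, 1, (40, 3))                          # off-plane points
pts = np.vstack([floor, clutter])

best_inliers, best_model = 0, None
for _ in range(200):
    sample = pts[rng.choice(len(pts), 3, replace=False)]
    normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
    norm = np.linalg.norm(normal)
    if norm < 1e-9:
        continue                                   # degenerate sample
    normal /= norm
    d = -normal @ sample[0]
    dist = np.abs(pts @ normal + d)                # point-to-plane distances
    inliers = int((dist < 0.02).sum())
    if inliers > best_inliers:
        best_inliers, best_model = inliers, (normal, d)

print("plane normal:", best_model[0], "inliers:", best_inliers)
```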

A Cost Evaluation Model for Developing FHIR-based Health Information Services to Support Massive Clients

  • Seokjin Im
    • International journal of advanced smart convergence / v.13 no.3 / pp.312-320 / 2024
  • Healthcare services converged with ICT technology are improving quality of life and satisfaction through various customized services. In ICT-based medical services, data interchange between medical services is important, and HL7 FHIR, a medical data standard, enables efficient medical data interchange. FHIR-based medical information services using wireless data broadcasting can efficiently support massive numbers of clients. This paper proposes a function point model to evaluate the implementation cost of FHIR-based health information services using wireless data broadcasting. The proposed cost evaluation model can effectively evaluate development cost by applying the complexity of converting medical data into the FHIR format and the complexity of organizing indexes to efficiently support massive numbers of clients. A comparison of the proposed function point evaluation model with simple function points shows the efficiency and suitability of the proposed cost evaluation model.
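
As a rough illustration only, a function-point-style estimate is a weighted sum of counted components adjusted by complexity factors; the counts, weights, and the two multipliers below (FHIR conversion complexity, index organization complexity) are illustrative stand-ins for the model described in the abstract, not the paper's coefficients.

```python
# Hedged sketch of a function-point-style cost estimate. The component counts,
# weights, and the two complexity multipliers (FHIR conversion, index
# organization for massive clients) are illustrative, not the paper's values.
components = {            # counted components of a hypothetical service
    "external_inputs": 6,
    "external_outputs": 9,
    "internal_files": 4,
    "external_interfaces": 3,
}
weights = {"external_inputs": 4, "external_outputs": 5,
           "internal_files": 10, "external_interfaces": 7}

unadjusted_fp = sum(components[k] * weights[k] for k in components)

fhir_conversion_complexity = 1.15     # assumed uplift for FHIR mapping work
index_organization_complexity = 1.10  # assumed uplift for broadcast indexing

adjusted_fp = (unadjusted_fp * fhir_conversion_complexity
               * index_organization_complexity)
print(f"unadjusted FP = {unadjusted_fp}, adjusted FP = {adjusted_fp:.1f}")
```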

A Feature Point Extraction and Identification Technique for Immersive Contents Using Deep Learning (딥 러닝을 이용한 실감형 콘텐츠 특징점 추출 및 식별 방법)

  • Park, Byeongchan;Jang, Seyoung;Yoo, Injae;Lee, Jaechung;Kim, Seok-Yoon;Kim, Youngmo
    • Journal of IKEEE / v.24 no.2 / pp.529-535 / 2020
  • As a key technology of the 4th industrial revolution, immersive 360-degree video contents are drawing attention. The worldwide market size of immersive 360-degree video contents is projected to increase from $6.7 billion in 2018 to approximately $70 billion in 2020. However, most immersive 360-degree video contents are distributed through illegal distribution networks such as Webhard and Torrent, and the damage caused by illegal reproduction is increasing. The existing 2D video industry uses copyright filtering technology to prevent such illegal distribution. The technical difficulties in dealing with immersive 360-degree videos arise because they require ultra-high-quality pictures and because images captured by two or more cameras are merged into one image, which creates distorted regions. There are also technical limitations such as an increase in the amount of feature point data due to the ultra-high definition and the resulting processing speed requirements. These considerations make it difficult to apply the same 2D filtering technology to 360-degree videos. To solve this problem, this paper proposes a feature point extraction and identification technique that selects object identification areas excluding regions with severe distortion, recognizes objects in these areas using deep learning, and extracts feature points using the identified object information. Compared with the previously proposed method of extracting feature points using the stitching area of immersive contents, the proposed technique shows an excellent performance gain.
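
The extraction step described above can be sketched roughly: given bounding boxes from some object detector (hard-coded and hypothetical here), keep only the boxes that avoid the heavily distorted top and bottom bands of the frame and extract feature points inside them. The deep-learning detector and the paper's identification logic are not reproduced.

```python
# Hedged sketch: extract feature points only inside detected-object boxes that
# avoid the distorted top/bottom bands of a 360-degree frame. The detector is
# hypothetical (boxes are hard-coded); "frame.png" is a placeholder input.
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
h, w = frame.shape

# (x, y, box_w, box_h) boxes as they might come from a deep-learning detector.
boxes = [(100, 400, 200, 150), (600, 50, 180, 120)]

band_top, band_bottom = int(0.2 * h), int(0.8 * h)   # usable vertical band
orb = cv2.ORB_create()

descriptors = []
for x, y, bw, bh in boxes:
    if y < band_top or y + bh > band_bottom:
        continue                      # skip boxes touching distorted regions
    roi = frame[y:y + bh, x:x + bw]
    kp, des = orb.detectAndCompute(roi, None)
    if des is not None:
        descriptors.append(des)

print("kept", len(descriptors), "object regions for fingerprinting")
```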

Markerless Image-to-Patient Registration Using Stereo Vision : Comparison of Registration Accuracy by Feature Selection Method and Location of Stereo Vision System (스테레오 비전을 이용한 마커리스 정합 : 특징점 추출 방법과 스테레오 비전의 위치에 따른 정합 정확도 평가)

  • Joo, Subin;Mun, Joung-Hwan;Shin, Ki-Young
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.1 / pp.118-125 / 2016
  • This study evaluates the performance of an image-to-patient registration algorithm using stereo vision and CT images for surgical navigation of the facial region. For image-to-patient registration, feature extraction and 3D coordinate calculation are conducted, and then the 3D CT image is registered to the 3D coordinates. Of the five combinations that can be generated using three facial feature extraction methods and three registration methods on stereo vision images, this study identifies the one with the highest registration accuracy. In addition, image-to-patient registration accuracy was compared while changing the facial rotation angle. The experiments showed that when the facial rotation angle is within 20 degrees, registration using the Active Appearance Model and Pseudo Inverse Matching has the highest accuracy, and when the facial rotation angle exceeds 20 degrees, registration using Speeded Up Robust Features and Iterative Closest Point has the highest accuracy. These results indicate that the Active Appearance Model and Pseudo Inverse Matching methods should be used to reduce registration error when the facial rotation angle is within 20 degrees, and the Speeded Up Robust Features and Iterative Closest Point methods should be used when the facial rotation angle exceeds 20 degrees.
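
The core registration step, aligning 3D feature coordinates from stereo vision to CT coordinates, can be illustrated with a least-squares rigid fit; the sketch below is the standard SVD (Kabsch) solution on synthetic points and stands in for, but is not, the paper's Pseudo Inverse Matching or Iterative Closest Point implementations.

```python
# Hedged sketch: least-squares rigid alignment (Kabsch/SVD) of stereo-vision
# 3D feature points to CT-space points. This stands in for, but is not, the
# paper's Pseudo Inverse Matching or ICP steps; the points are synthetic.
import numpy as np

rng = np.random.default_rng(2)
src = rng.uniform(-50, 50, (10, 3))                    # stereo-vision points (mm)

angle = np.deg2rad(15)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -3.0, 12.0])
dst = src @ R_true.T + t_true                          # corresponding CT points

# Kabsch: center both sets, take the SVD of the covariance, recover R and t.
src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
R = Vt.T @ D @ U.T
t = dst.mean(0) - R @ src.mean(0)

print("rotation error:", np.linalg.norm(R - R_true), "translation:", t)
```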

Feature Point Filtering Method Based on CS-RANSAC for Efficient Planar Homography Estimating (효과적인 평면 호모그래피 추정을 위한 CS-RANSAC 기반의 특징점 필터링 방법)

  • Kim, Dae-Woo;Yoon, Ui-Nyoung;Jo, Geun-Sik
    • KIPS Transactions on Software and Data Engineering / v.5 no.6 / pp.307-312 / 2016
  • Markerless tracking for augmented reality using homography can augment virtual objects correctly and naturally on a live view of the real-world environment by using the correct pose and direction of the camera. The RANSAC algorithm is widely used for estimating homography, and the CS-RANSAC algorithm is a novel variant that incorporates a constraint satisfaction problem (CSP) into RANSAC to increase accuracy and decrease processing time. However, the performance of CS-RANSAC in calculating homography can be degraded when feature points that yield a low-accuracy homography are selected in the sampling step. In this paper, we propose a feature point filtering method based on CS-RANSAC for efficient planar homography estimation. The proposed algorithm evaluates which feature points yield a high-accuracy homography and removes unnecessary feature points from the next sampling step using the Symmetric Transfer Error, in order to increase accuracy and decrease processing time. To evaluate the proposed method, we compared our algorithm with the basic CS-RANSAC algorithm and the basic RANSAC algorithm in terms of processing time, error rate (Symmetric Transfer Error), and inlier rate. The experiments show that the proposed method produces a 5% decrease in processing time, a 14% decrease in Symmetric Transfer Error, and a more accurate homography compared with the basic CS-RANSAC algorithm.
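
The filtering criterion mentioned above, the Symmetric Transfer Error, can be written out directly: given a homography H and a set of correspondences, score each pair by the forward plus backward reprojection error and drop the worst ones before the next sampling step. The points and threshold below are synthetic, not the paper's settings.

```python
# Hedged sketch of Symmetric Transfer Error (STE) filtering: score each
# correspondence under a homography H in both directions and drop high-error
# points before the next sampling round. Points and threshold are synthetic.
import numpy as np
import cv2

rng = np.random.default_rng(3)
pts1 = rng.uniform(0, 640, (40, 2))
H_true = np.array([[1.0, 0.02, 5.0], [0.01, 1.0, -3.0], [0.0, 0.0, 1.0]])
pts2 = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), H_true).reshape(-1, 2)
pts2[:5] += 30.0                                        # inject a few outliers

H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)

fwd = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), H).reshape(-1, 2)
bwd = cv2.perspectiveTransform(pts2.reshape(-1, 1, 2),
                               np.linalg.inv(H)).reshape(-1, 2)
ste = np.sum((pts2 - fwd) ** 2, axis=1) + np.sum((pts1 - bwd) ** 2, axis=1)

keep = ste < 9.0                                        # illustrative threshold
print("kept", int(keep.sum()), "of", len(pts1), "correspondences")
```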

Feature-based Matching Algorithms for Registration between LiDAR Point Cloud Intensity Data Acquired from MMS and Image Data from UAV (MMS로부터 취득된 LiDAR 점군데이터의 반사강도 영상과 UAV 영상의 정합을 위한 특징점 기반 매칭 기법 연구)

  • Choi, Yoonjo;Farkoushi, Mohammad Gholami;Hong, Seunghwan;Sohn, Hong-Gyoo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.6 / pp.453-464 / 2019
  • Recently, as the demand for 3D geospatial information increases, the importance of rapid and accurate data construction has also increased. Although many studies have been conducted on registering UAV (Unmanned Aerial Vehicle) imagery based on LiDAR (Light Detection and Ranging) data, which is capable of precise 3D data construction, studies using LiDAR data embedded in an MMS (Mobile Mapping System) are insufficient. Therefore, this study compared and analyzed nine feature point-based matching algorithms for registering reflectance images converted from LiDAR point cloud intensity data acquired from an MMS with image data from a UAV. Our results indicated that when the SIFT (Scale Invariant Feature Transform) algorithm was applied, a high matching accuracy could be stably secured, and sufficient conjugate points were extracted even in various road environments. In the registration accuracy analysis, the SIFT algorithm was able to secure an accuracy of about 10 pixels, except when the overlapping area was small and the same pattern was repeated. This is a reasonable result considering that the distortion caused by the UAV altitude is included at the time of UAV image capture. Therefore, the results of this study are expected to serve as basic research for the 3D registration of LiDAR point cloud intensity data and UAV imagery.
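
The SIFT-based matching that performed best in the study can be sketched with stock OpenCV calls; the image file names below are placeholders, and the study's nine-algorithm comparison and accuracy analysis are not reproduced.

```python
# Hedged sketch: SIFT matching between a LiDAR-intensity reflectance image and
# a UAV image, followed by a RANSAC homography over the matches. File names
# are placeholders; the study's nine-algorithm comparison is not reproduced.
import cv2
import numpy as np

intensity_img = cv2.imread("mms_intensity.png", cv2.IMREAD_GRAYSCALE)
uav_img = cv2.imread("uav_ortho.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(intensity_img, None)
kp2, des2 = sift.detectAndCompute(uav_img, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(len(good), "tentative matches,", int(mask.sum()), "inliers after RANSAC")
```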