• Title/Summary/Keyword: Feature Point Matching

A Study on Atmospheric Turbulence-Induced Errors in Vision Sensor based Structural Displacement Measurement (대기외란시 비전센서를 활용한 구조물 동적 변위 측정 성능에 관한 연구)

  • Junho Gong
    • Journal of the Korea Institute for Structural Maintenance and Inspection
    • /
    • v.28 no.3
    • /
    • pp.1-9
    • /
    • 2024
  • This study proposes a multi-scale template matching technique with image pyramids (TMI) to measure structural dynamic displacement with a vision sensor under atmospheric turbulence and evaluates its displacement measurement performance. To evaluate measurement performance according to distance, a three-story shear structure was designed, and an FHD camera was used to measure the structural response. The initial measurement distance was set to 10 m and increased in 10 m increments up to 40 m. Atmospheric disturbance was generated using a heating plate under indoor illumination, and the resulting optical turbulence distorted the images. Through preliminary experiments, the displacement measurement feasibility of the feature point-based method and the proposed method under atmospheric disturbance was compared and verified, and the results showed a low measurement error rate for the proposed method. When evaluating displacement measurement performance in an atmospheric disturbance environment, there was no significant difference in performance for TMI using an artificial target regardless of the presence of atmospheric disturbance. However, when natural targets were used, the RMSE increased significantly at shooting distances of 20 m or more, revealing the operating limits of the proposed technique. This indicates that the resolution of the natural target decreases as the shooting distance increases, and image distortion due to atmospheric disturbance causes errors in template image estimation, resulting in high displacement measurement error.
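
The multi-scale template matching idea can be illustrated with a coarse-to-fine search over a Gaussian image pyramid. The sketch below is not the paper's TMI implementation; the OpenCV matcher, correlation measure, pyramid depth, and search-window size are illustrative assumptions.

```python
# Minimal sketch of multi-scale template matching over an image pyramid.
import cv2
import numpy as np

def pyramid_template_match(frame_gray, template_gray, levels=3):
    """Coarse-to-fine template matching over a Gaussian image pyramid."""
    # Build pyramids (level 0 = full resolution).
    frames, templates = [frame_gray], [template_gray]
    for _ in range(1, levels):
        frames.append(cv2.pyrDown(frames[-1]))
        templates.append(cv2.pyrDown(templates[-1]))

    # Match at the coarsest level to get an initial location.
    res = cv2.matchTemplate(frames[-1], templates[-1], cv2.TM_CCOEFF_NORMED)
    _, _, _, loc = cv2.minMaxLoc(res)
    x, y = loc

    # Refine the location at each finer level within a small search window.
    for lvl in range(levels - 2, -1, -1):
        x, y = 2 * x, 2 * y                      # scale location up one level
        th, tw = templates[lvl].shape
        margin = 8                               # search window half-size (px)
        y0, x0 = max(y - margin, 0), max(x - margin, 0)
        roi = frames[lvl][y0:y0 + th + 2 * margin, x0:x0 + tw + 2 * margin]
        if roi.shape[0] < th or roi.shape[1] < tw:
            continue                             # window clipped at the border
        res = cv2.matchTemplate(roi, templates[lvl], cv2.TM_CCOEFF_NORMED)
        _, _, _, loc = cv2.minMaxLoc(res)
        x, y = x0 + loc[0], y0 + loc[1]
    return x, y  # top-left corner of the best match at full resolution
```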

Registration of Three-Dimensional Point Clouds Based on Quaternions Using Linear Features (선형을 이용한 쿼터니언 기반의 3차원 점군 데이터 등록)

  • Kim, Eui Myoung;Seo, Hong Deok
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.3
    • /
    • pp.175-185
    • /
    • 2020
  • Three-dimensional registration is the process of matching data with or without a coordinate system to a reference coordinate system, and it is used in various fields such as the absolute orientation of photogrammetry and data combination for producing precise road maps. Three-dimensional registration is divided into methods using points and methods using linear features. When points are used, it is difficult to find identical conjugate points if the datasets have different spatial resolutions. The use of linear features, on the other hand, has the advantage that three-dimensional registration is possible even when the spatial resolutions differ and when conjugate linear features in point cloud data do not share the same starting and ending points. In this study, we proposed a method that determines the three-dimensional rotation angle between two datasets using quaternions and then determines the scale and the three-dimensional translation, in order to perform three-dimensional registration using linear features. To verify the proposed method, three-dimensional registration was performed using linear features constructed indoors and linear features acquired through a terrestrial mobile mapping system in an outdoor environment. With the indoor data, the root mean square error was 0.001054 m and 0.000936 m when the scale was fixed and not fixed, respectively. The three-dimensional transformation over a 500 m section of the outdoor data yielded a root mean square error of 0.09412 m when six linear features were used, satisfying the accuracy requirement for producing precise maps. In addition, an experiment varying the number of linear features showed that nine linear features were sufficient for a high-precision 3D transformation, as the root mean square error hardly changed when nine or more linear features were used.
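
As background for the quaternion-based approach, the sketch below applies a seven-parameter similarity transform (rotation from a unit quaternion, then scale and translation) to a point cloud. The function names and the (w, x, y, z) quaternion convention are illustrative assumptions; the paper's contribution is the estimation of these parameters from linear features, which is not reproduced here.

```python
# Minimal sketch of applying a quaternion-based 3D similarity transform.
import numpy as np

def quat_to_rotation_matrix(q):
    """Unit quaternion q = (w, x, y, z) -> 3x3 rotation matrix."""
    q = np.asarray(q, dtype=float)
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def similarity_transform(points, q, scale, translation):
    """Apply X' = s * R(q) * X + t to an (N, 3) array of points."""
    R = quat_to_rotation_matrix(q)
    return scale * points @ R.T + np.asarray(translation, dtype=float)
```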

Measurement Technique of Indoor location Based on Markerless applicable to AR (AR에 적용 가능한 마커리스 기반의 실내 위치 측정 기법)

  • Kim, Jae-Hyeong;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.25 no.2
    • /
    • pp.243-251
    • /
    • 2021
  • In this paper, we propose a markerless indoor positioning technique applicable to AR. The proposed technique has the following original features. First, feature points are extracted and used to generate local patches, so that only the patches that are more informative than their surroundings are learned and used instead of the entire image, enabling faster computation. Second, learning is performed through deep learning with a convolutional neural network structure to improve accuracy by reducing the error rate. Third, unlike existing feature point matching techniques, it enables indoor positioning that includes left and right movement. Fourth, since the indoor location is measured anew in every frame, errors that occur in earlier frames during movement do not accumulate; therefore, the error between the final arrival point and the predicted indoor location does not grow even as the moving distance increases. In the experiment conducted to evaluate the processing time and accuracy of the proposed technique, the difference between the actual and measured indoor locations was 12.8 cm on average and 21.2 cm at maximum, an indoor positioning accuracy better than that of the existing IEEE paper. In addition, displaying the measured result at 20 frames per second showed that the user's indoor location can be measured in real time.
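
The local-patch idea can be illustrated by cropping fixed-size patches around detected feature points instead of processing the whole image; such patches would then be fed to a CNN classifier. The detector (ORB), patch size, and function names below are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch of extracting local patches around detected feature points.
import cv2
import numpy as np

def extract_keypoint_patches(image_gray, patch_size=32, max_points=50):
    """Return (patch_size x patch_size) crops centered on strong keypoints."""
    orb = cv2.ORB_create(nfeatures=max_points)
    keypoints = orb.detect(image_gray, None)
    half = patch_size // 2
    h, w = image_gray.shape
    patches = []
    for kp in keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if half <= x < w - half and half <= y < h - half:
            patches.append(image_gray[y - half:y + half, x - half:x + half])
    return np.array(patches)
```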

Automation of Online to Offline Stores: Extremely Small Depth-Yolov8 and Feature-Based Product Recognition (Online to Offline 상점의 자동화 : 초소형 깊이의 Yolov8과 특징점 기반의 상품 인식)

  • Jongwook Si;Daemin Kim;Sungyoung Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.3
    • /
    • pp.121-129
    • /
    • 2024
  • The rapid advancement of digital technology and the COVID-19 pandemic have significantly accelerated the growth of online commerce, highlighting the need for support mechanisms that enable small business owners to respond effectively to these market changes. In response, this paper presents a foundational technology leveraging the Online to Offline (O2O) strategy to automatically capture products displayed on retail shelves and use these images to create virtual stores. The essence of this research lies in precisely identifying and recognizing the locations and names of displayed products, for which a single-class-targeted, lightweight model based on YOLOv8, named ESD-YOLOv8, is proposed. The detected products are identified by name through feature-point-based matching, which allows the system to be updated swiftly by simply adding photos of new products. In experiments, product name recognition reached an accuracy of 74.0%, and position detection achieved an F2-score of 92.8% using only 0.3M parameters. These results confirm that the proposed method combines high performance with optimized efficiency.
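
The feature-point-based identification step can be sketched as matching a detected product crop against a gallery of reference photos and returning the name with the most good matches; adding a new product then only requires adding its photo to the gallery. The SIFT detector, brute-force matcher, and ratio threshold below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of feature-point-based product name recognition.
import cv2

def recognize_product(crop_gray, gallery):
    """gallery: dict mapping product name -> reference grayscale image."""
    sift = cv2.SIFT_create()
    _, des_q = sift.detectAndCompute(crop_gray, None)
    matcher = cv2.BFMatcher()
    best_name, best_score = None, 0
    for name, ref in gallery.items():
        _, des_r = sift.detectAndCompute(ref, None)
        if des_q is None or des_r is None:
            continue
        matches = matcher.knnMatch(des_q, des_r, k=2)
        # Lowe's ratio test to keep only distinctive matches.
        good = [p[0] for p in matches
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) > best_score:
            best_name, best_score = name, len(good)
    return best_name
```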

Mobile Camera-Based Positioning Method by Applying Landmark Corner Extraction (랜드마크 코너 추출을 적용한 모바일 카메라 기반 위치결정 기법)

  • Yoo Jin Lee;Wansang Yoon;Sooahm Rhee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1309-1320
    • /
    • 2023
  • The development and popularization of mobile devices allow users to check their location and use the Internet anywhere. Indoors, however, while the Internet can be used smoothly, the global positioning system (GPS) is difficult to use. There is an increasing need to provide real-time location information in GPS-shaded indoor public places such as department stores, museums, conference halls, schools, and tunnels. Accordingly, research on indoor positioning that builds a landmark database with light detection and ranging (LiDAR) equipment has recently been increasing. Focusing on making landmark database construction more accessible, this study develops a technique for estimating a user's location from a single mobile image of a landmark together with a landmark database constructed in advance. First, a landmark database was constructed; to estimate the user's location from a mobile image of a landmark alone, the landmark must be detected in the mobile image and the ground coordinates of points with fixed characteristics must be available from the detected landmark. In the second step, bag of words (BoW) image search was applied to retrieve from the landmark database the top four candidates most similar to the landmark photographed in the mobile image. In the third step, one of the four candidate landmarks was selected using scale-invariant feature transform (SIFT) feature point extraction and homography estimation with random sample consensus (RANSAC); at this stage, the candidates were filtered once more by thresholding the number of matching points. In the fourth step, the landmark image was projected onto the mobile image through the homography matrix between the matched landmark and the mobile image to detect the landmark region and its corners. Finally, the user's location was estimated through a location estimation technique. In the performance analysis, the landmark search performance was measured at about 86%. Comparing the location estimation results with the user's actual ground coordinates confirmed a horizontal positioning accuracy of about 0.56 m, showing that a user's location can be estimated from a mobile image by constructing a landmark database without separate expensive equipment.
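
The third and fourth steps can be sketched with SIFT matching, RANSAC homography estimation, match-count filtering, and corner projection. The thresholds and function names below are illustrative assumptions rather than the paper's exact settings.

```python
# Minimal sketch of SIFT + homography/RANSAC landmark verification and corner projection.
import cv2
import numpy as np

def locate_landmark(landmark_gray, mobile_gray, min_matches=20):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(landmark_gray, None)
    kp2, des2 = sift.detectAndCompute(mobile_gray, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    if len(good) < min_matches:          # threshold on the number of matching points
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    h, w = landmark_gray.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    # Project the landmark corners into the mobile image.
    return cv2.perspectiveTransform(corners, H)
```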

3D surface Reconstruction of Moving Object Using Multi-Laser Stripes Irradiation (멀티 레이저 라인 조사를 이용한 비등속 이동물체의 3차원 형상 복원)

  • Yi, Young-Youl;Ye, Soo-Young;Nam, Ki-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.2 s.314
    • /
    • pp.144-152
    • /
    • 2007
  • We propose a 3D modeling method for surface inspection of a non-uniformly moving object. The projected laser lines reflect the surface curvature, so 3D surface information can be acquired by analyzing the laser lines projected onto the object. In this paper, we use a multi-line laser to combine the robustness of the single-stripe method with the high speed of the single-frame method. Binarization and channel edge extraction were used for robust laser line extraction, and a new labeling method was used to label the laser lines. We acquired alignment information between the reconstructed 3D frames by feature point matching and registered the frames into one whole image. We verified the superiority of the proposed method by applying it to a container damage inspection system.
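
The frame registration step can be sketched as estimating the shift between successive frames from matched feature points. The ORB detector and the median statistic below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of estimating a frame-to-frame shift by feature point matching.
import cv2
import numpy as np

def estimate_frame_shift(prev_gray, curr_gray):
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.zeros(2)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    shifts = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                       for m in matches])
    return np.median(shifts, axis=0)  # robust (dx, dy) between the two frames
```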

Multi-Object Detection Using Image Segmentation and Salient Points (영상 분할 및 주요 특징 점을 이용한 다중 객체 검출)

  • Lee, Jeong-Ho;Kim, Ji-Hun;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.2
    • /
    • pp.48-55
    • /
    • 2008
  • In this paper we propose a novel multi-object detection method for an image retrieval system using image segmentation and salient points. The proposed method consists of four steps. In the first step, images are segmented into several regions by the JSEG algorithm. In the second step, dominant colors and the corresponding color histograms are constructed for the segmented regions; using these, we identify candidate regions where objects may exist. In the third step, real object regions are detected from the candidate regions by SIFT matching. In the final step, we measure the similarity between the query image and the DB image using the color correlogram technique, computed over the query image and the object region of the DB image. Experimental results show that the proposed method detects multiple objects well and provides better retrieval performance than existing object-based retrieval systems.
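
The similarity measure in the final step can be sketched with a simplified color auto-correlogram: for each quantized color, the probability that a pixel at distance d away has the same color. The quantization, distance set, and distance formula below are illustrative assumptions, not the paper's exact correlogram.

```python
# Minimal sketch of a color auto-correlogram feature and a distance between two features.
import numpy as np

def auto_correlogram(image_rgb, n_bins=4, distances=(1, 3, 5)):
    """image_rgb: (H, W, 3) uint8 array. Returns a flattened correlogram feature."""
    # Quantize each channel into n_bins and form one color index per pixel.
    q = image_rgb.astype(int) // (256 // n_bins)
    colors = q[..., 0] * n_bins * n_bins + q[..., 1] * n_bins + q[..., 2]
    n_colors = n_bins ** 3
    feat = np.zeros((len(distances), n_colors))
    for di, d in enumerate(distances):
        same, total = np.zeros(n_colors), np.zeros(n_colors)
        # Compare each pixel with its horizontal and vertical neighbor at distance d.
        pairs = [(colors[:, :-d], colors[:, d:]),
                 (colors[:-d, :], colors[d:, :])]
        for ref, nbr in pairs:
            np.add.at(total, ref, 1)
            np.add.at(same, ref[ref == nbr], 1)
        feat[di] = same / np.maximum(total, 1)
    return feat.ravel()

def correlogram_distance(f1, f2):
    """L1 distance between two correlogram features (smaller = more similar)."""
    return float(np.abs(f1 - f2).sum())
```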

Direct Divergence Approximation between Probability Distributions and Its Applications in Machine Learning

  • Sugiyama, Masashi;Liu, Song;du Plessis, Marthinus Christoffel;Yamanaka, Masao;Yamada, Makoto;Suzuki, Taiji;Kanamori, Takafumi
    • Journal of Computing Science and Engineering
    • /
    • v.7 no.2
    • /
    • pp.99-111
    • /
    • 2013
  • Approximating a divergence between two probability distributions from their samples is a fundamental challenge in statistics, information theory, and machine learning. A divergence approximator can be used for various purposes, such as two-sample homogeneity testing, change-point detection, and class-balance estimation. Furthermore, an approximator of a divergence between the joint distribution and the product of marginals can be used for independence testing, which has a wide range of applications, including feature selection and extraction, clustering, object matching, independent component analysis, and causal direction estimation. In this paper, we review recent advances in divergence approximation. Our emphasis is that directly approximating the divergence without estimating probability distributions is more sensible than a naive two-step approach of first estimating probability distributions and then approximating the divergence. Furthermore, despite the overwhelming popularity of the Kullback-Leibler divergence as a divergence measure, we argue that alternatives such as the Pearson divergence, the relative Pearson divergence, and the $L^2$-distance are more useful in practice because of their computationally efficient approximability, high numerical stability, and superior robustness against outliers.
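
The direct-approximation idea can be sketched for the Pearson divergence PE(p||q) = 1/2 E_q[(p(x)/q(x) - 1)^2]: fit the density ratio p/q with a kernel model by regularized least squares (in the spirit of uLSIF) and plug it into the identity PE = 1/2 E_p[p/q] - 1/2, without estimating p or q themselves. The kernel width, regularization strength, and number of centers below are illustrative assumptions.

```python
# Minimal sketch of direct Pearson divergence approximation via density-ratio estimation.
import numpy as np

def pearson_divergence(x_p, x_q, sigma=1.0, lam=1e-3, n_centers=100):
    """x_p, x_q: (n, d) samples from p and q. Returns an estimate of PE(p||q)."""
    rng = np.random.default_rng(0)
    centers = x_p[rng.choice(len(x_p), size=min(n_centers, len(x_p)), replace=False)]

    def kernel(x):
        # Gaussian kernel matrix between samples and centers.
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    Kp, Kq = kernel(x_p), kernel(x_q)
    H = Kq.T @ Kq / len(x_q)                                  # approximates E_q[k k^T]
    h = Kp.mean(axis=0)                                       # approximates E_p[k]
    theta = np.linalg.solve(H + lam * np.eye(len(h)), h)      # ratio model weights
    # PE = 1/2 * E_p[r] - 1/2, with the fitted ratio r(x) ~= theta^T k(x).
    return 0.5 * h @ theta - 0.5
```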

Design and Implementation for Korean Character and Pen-gesture Recognition System using Stroke Information (획 정보를 이용한 한글문자와 펜 제스처 인식 시스템의 설계 및 구현)

  • Oh, Jun-Taek;Kim, Wook-Hyun
    • The KIPS Transactions:PartB
    • /
    • v.9B no.6
    • /
    • pp.765-774
    • /
    • 2002
  • This paper presents the design and implementation of a Korean character and pen-gesture recognition system for multimedia terminals, PDAs, and similar devices, which demand both fast processing and a high recognition rate. To recognize the writing styles of various users, the Korean character recognition system uses a database based on the characteristic information of Korean and the stroke information composing each phoneme, and it achieves high speed through phoneme segmentation using successive or backtracking processes. The pen-gesture recognition system performs matching between the classification features extracted from an input pen-gesture and the classification features of the 15 pen-gesture types defined in the gesture model. The classification features use writer-insensitive stroke information, i.e., the positional relation between two strokes, the crossing number, the direction transition, the direction vector, the number of direction codes, and the distance ratio between the starting and ending points of each stroke. In the experiments, we obtained a high recognition rate and fast speed.
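
Two of the listed stroke features can be sketched directly from sampled pen coordinates: an 8-direction chain code with its number of direction transitions, and the ratio of the start-to-end distance to the stroke's path length. The 8-direction quantization and sampling assumptions below are illustrative, not the paper's feature definitions.

```python
# Minimal sketch of simple stroke features from sampled pen positions.
import numpy as np

def stroke_features(points):
    """points: (n, 2) array of sampled (x, y) pen positions along one stroke."""
    diffs = np.diff(points, axis=0)
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])
    codes = ((angles + np.pi) / (np.pi / 4)).astype(int) % 8   # 8-direction codes
    transitions = int(np.count_nonzero(np.diff(codes)))        # direction changes
    path_len = np.linalg.norm(diffs, axis=1).sum()
    start_end = np.linalg.norm(points[-1] - points[0])
    distance_ratio = start_end / path_len if path_len > 0 else 0.0
    return codes, transitions, distance_ratio
```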

Shape Based Framework for Recognition and Tracking of Texture-free Objects for Submerged Robots in Structured Underwater Environment (수중로봇을 위한 형태를 기반으로 하는 인공표식의 인식 및 추종 알고리즘)

  • Han, Kyung-Min;Choi, Hyun-Taek
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.48 no.6
    • /
    • pp.91-98
    • /
    • 2011
  • This paper proposes an efficient and accurate vision-based recognition and tracking framework for texture-free objects. We approach the problem with a two-phase algorithm: a detection phase and a tracking phase. In the detection phase, the algorithm extracts shape context descriptors that are used to classify objects into predetermined targets of interest; the matching result is then further refined by a minimization technique. In the tracking phase, we use the mean-shift tracking algorithm based on the Bhattacharyya coefficient. In summary, the contributions of our method to underwater robot vision are fourfold: 1) it can handle camera motion and object scale changes in the underwater environment; 2) it is an inexpensive vision-based recognition algorithm; 3) it demonstrates the advantage of a shape-based method over a distinctive feature point-based method (SIFT) in underwater environments with possible turbidity variation; and 4) we provide a quantitative comparison of our method with several other well-known methods. The results are quite promising for the map-based underwater SLAM task that is the goal of our research.
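
The tracking phase relies on comparing the target and candidate color histograms with the Bhattacharyya coefficient; a minimal sketch of that measurement is given below, with the histogram normalization as an illustrative assumption.

```python
# Minimal sketch of the Bhattacharyya coefficient between two histograms.
import numpy as np

def bhattacharyya_coefficient(hist_p, hist_q):
    """Normalize both histograms to sum to 1 and return a similarity in [0, 1]."""
    p = hist_p / (hist_p.sum() + 1e-12)
    q = hist_q / (hist_q.sum() + 1e-12)
    return float(np.sum(np.sqrt(p * q)))
```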