• Title/Summary/Keyword: Robust Feature

Features for Figure Speech Recognition in Noise Environment (잡음환경에서의 숫자음 인식을 위한 특징파라메타)

  • Lee, Jae-Ki;Koh, Si-Young;Lee, Kwang-Suk;Hur, Kang-In
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.2 / pp.473-476 / 2005
  • This paper proposes robust feature parameters for speech recognition in noisy environments. The MFCC (Mel Frequency Cepstral Coefficient) features used in conventional speech recognition perform well in clean conditions. For greater robustness to noise, the MFCC feature space is transformed with PCA (Principal Component Analysis) and ICA (Independent Component Analysis), and the transformed features are compared with conventional MFCCs. The results show that the ICA-transformed features outperform both the PCA-transformed features and the original MFCCs.
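
As a rough illustration of the feature pipeline described in this abstract, the sketch below computes MFCCs and projects them with PCA and ICA. The use of librosa and scikit-learn, and every parameter value, are my own assumptions for illustration, not the paper's setup.

```python
# Minimal sketch: MFCC features transformed with PCA and ICA, in the spirit of
# the abstract above. Library choices and parameters are illustrative only.
import numpy as np
import librosa
from sklearn.decomposition import PCA, FastICA

# Synthetic two-second "utterance" so the sketch is self-contained.
sr = 16000
y = np.random.randn(2 * sr).astype(np.float32)

# Conventional MFCC features, shaped (n_frames, 13).
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

# PCA-transformed feature space.
mfcc_pca = PCA(n_components=13, whiten=True).fit_transform(mfcc)

# ICA-transformed feature space (the variant the abstract reports as best in noise).
mfcc_ica = FastICA(n_components=13, max_iter=1000, random_state=0).fit_transform(mfcc)

print(mfcc.shape, mfcc_pca.shape, mfcc_ica.shape)
```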


Robust 2D Feature Tracking in Long Video Sequences (긴 비디오 프레임들에서의 강건한 2차원 특징점 추적)

  • Yoon, Jong-Hyun;Park, Jong-Seung
    • The KIPS Transactions: Part B / v.14B no.7 / pp.473-480 / 2007
  • Feature tracking in video frame sequences suffers from instability and frequent failures of feature matching between successive frames. In this paper, we propose a robust 2D feature tracking method that remains stable over long video sequences. To improve the stability of tracking, we predict the spatial movement in the current image frame using the state variables. The predicted movement is used to initialize the search window. By computing feature similarities within the search window, we refine the current feature positions, and the feature states are then updated. This tracking process is repeated for each input frame. To reduce false matches, an outlier rejection stage is also introduced. Experimental results on real video sequences show that the proposed method performs stable feature tracking over long frame sequences.
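
The predict-search-refine loop described in this abstract can be sketched roughly as below: each feature carries a position and velocity state, the predicted position seeds a search window in the current frame, and template matching refines it. OpenCV is assumed; the window sizes, the matching score, and the 0.7 rejection threshold are illustrative choices, not the authors' settings.

```python
# One tracking step for a single feature: predict, search, refine, update.
import cv2
import numpy as np

PATCH, SEARCH = 15, 31   # template and search-window sizes (pixels), illustrative

def track_feature(prev_img, cur_img, pos, vel):
    """prev_img/cur_img: grayscale frames; pos/vel: 2-D state of one feature."""
    h, w = cur_img.shape
    pred = pos + vel                                    # predict movement from the state
    x, y = int(round(pred[0])), int(round(pred[1]))
    px, py = int(round(pos[0])), int(round(pos[1]))
    r, s = PATCH // 2, SEARCH // 2
    if not (s <= x < w - s and s <= y < h - s and r <= px < w - r and r <= py < h - r):
        return None                                     # feature left the image
    template = prev_img[py - r:py + r + 1, px - r:px + r + 1]
    window = cur_img[y - s:y + s + 1, x - s:x + s + 1]  # search window around the prediction
    scores = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, loc = cv2.minMaxLoc(scores)
    if best < 0.7:                                      # outlier rejection: weak similarity
        return None
    new_pos = np.array([x - s + loc[0] + r, y - s + loc[1] + r], dtype=float)
    return new_pos, new_pos - pos                       # refined position, updated velocity
```

In a full tracker this step would run for every feature in every frame, with features re-detected when too many are rejected.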

Comparison of Feature Point Extraction Algorithms Using Unmanned Aerial Vehicle RGB Reference Orthophoto (무인항공기 RGB 기준 정사영상을 이용한 특징점 추출 알고리즘 비교)

  • Lee, Kirim;Seong, Jihoon;Jung, Sejung;Shin, Hyeongil;Kim, Dohoon;Lee, Wonhee
    • KSCE Journal of Civil and Environmental Engineering Research / v.44 no.2 / pp.263-270 / 2024
  • As unmanned aerial vehicles (UAVs) and sensors have developed in a variety of ways, it has become possible to update ground information faster than with existing aerial photography or remote sensing. However, acquisition and input of ground control points (GCPs) in UAV photogrammetry takes a lot of time, and geometric distortion occurs if GCPs are measured or entered incorrectly. In this study, RGB-based reference orthophotos were generated to reduce GCP measurement and input time, and feature point algorithms were applied to target orthophotos from various sensors for comparison and evaluation. Four feature point extraction algorithms were applied to the two study sites; as a result, speeded up robust features (SURF) was the best in terms of the ratio of matching pairs to feature points. Overall, the accelerated-KAZE (AKAZE) method extracted the most feature points and matching pairs, and the binary robust invariant scalable keypoints (BRISK) method extracted the fewest. These results confirm that the AKAZE method is superior when performing geometric correction of the target orthophoto for each sensor.
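
A comparison along these lines can be reproduced with OpenCV, as in the hedged sketch below: run several detectors on a reference and a target orthophoto and report keypoint counts, match counts, and the match-to-keypoint ratio. SURF lives in the opencv-contrib `xfeatures2d` module and may be unavailable; the file names, detector parameters, and the 0.75 ratio-test threshold are placeholders, not the study's settings.

```python
import cv2

ref = cv2.imread("reference_ortho.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
tgt = cv2.imread("target_ortho.png", cv2.IMREAD_GRAYSCALE)

detectors = {"AKAZE": cv2.AKAZE_create(), "BRISK": cv2.BRISK_create(), "ORB": cv2.ORB_create(5000)}
try:
    detectors["SURF"] = cv2.xfeatures2d.SURF_create(400)        # requires opencv-contrib
except AttributeError:
    pass

for name, det in detectors.items():
    kp1, des1 = det.detectAndCompute(ref, None)
    kp2, des2 = det.detectAndCompute(tgt, None)
    norm = cv2.NORM_L2 if name == "SURF" else cv2.NORM_HAMMING  # float vs. binary descriptors
    pairs = cv2.BFMatcher(norm).knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2) if m.distance < 0.75 * n.distance]
    print(f"{name}: {len(kp1)}/{len(kp2)} keypoints, {len(good)} matches, "
          f"ratio {len(good) / max(len(kp1), 1):.3f}")
```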

On-Line Blind Channel Normalization for Noise-Robust Speech Recognition

  • Jung, Ho-Young
    • IEIE Transactions on Smart Processing and Computing / v.1 no.3 / pp.143-151 / 2012
  • A new data-driven method for designing a blind modulation frequency filter that suppresses slow-varying noise components is proposed. The proposed method is based on temporal local decorrelation of the feature vector sequence and operates on an utterance-by-utterance basis. Whereas conventional modulation frequency filtering uses the same filter form regardless of task and environment conditions, the proposed method provides an adaptive modulation frequency filter that outperforms conventional methods for each utterance. The method ultimately performs channel normalization in the feature domain, with applications to log-spectral parameters. Performance was evaluated with speaker-independent isolated-word recognition experiments under additive noise. The proposed method achieved outstanding improvement for speech recognition in environments with significant noise and was also effective across a range of feature representations.
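
The paper designs its modulation frequency filter per utterance from the data itself; as a point of reference only, the sketch below applies the kind of fixed, utterance-level normalization the abstract contrasts against: per-utterance mean removal (blind channel normalization) plus a fixed high-pass modulation filter on log-spectral features. SciPy, the 100 Hz frame rate, and the 1 Hz cutoff are my assumptions; this is not the proposed adaptive method.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def normalize_log_spectra(feats, frame_rate=100.0, cutoff_hz=1.0):
    """feats: (n_frames, n_bands) log-spectral features of one utterance."""
    feats = feats - feats.mean(axis=0, keepdims=True)        # remove the channel (mean) component
    b, a = butter(2, cutoff_hz / (frame_rate / 2), btype="highpass")
    return filtfilt(b, a, feats, axis=0)                     # suppress slow-varying components

# Example on dummy features: 300 frames x 20 log-spectral bands.
print(normalize_log_spectra(np.random.randn(300, 20)).shape)
```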


Line feature extraction in a noisy image

  • Lee, Joon-Woong;Oh, Hak-Seo;Kweon, In-So
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 1996.10a / pp.137-140 / 1996
  • Finding line segments in an intensity image has been one of the most fundamental issues in computer vision. In complex scenes, it is hard to detect the locations of point features; line features are more robust and provide greater positional accuracy. In this paper we present a robust line feature extraction algorithm that extracts line features in a single pass without using any assumptions or constraints. The algorithm consists of five steps: (1) edge scanning, (2) edge normalization, (3) line-blob extraction, (4) line-feature computation, and (5) line linking. Edge scanning drastically reduces the computational complexity caused by too many edge pixels. Edge normalization improves the local quantization error induced by the gradient space partitioning and minimizes perturbations of edge orientation. We also analyze the effects of edge processing, and of the least-squares-based and principal-axis-based methods, on the computation of line orientation. We show the algorithm's efficiency with some real images.
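
The abstract compares least-squares and principal-axis estimates of line orientation; the short numpy sketch below shows the principal-axis version (the dominant eigenvector of the edge-point covariance), which avoids the vertical-line problem of a plain y-on-x least-squares fit. It illustrates that one step only, not the authors' five-step algorithm.

```python
import numpy as np

def line_from_edge_points(points):
    """points: (N, 2) array of (x, y) edge pixels belonging to one line blob.
    Returns the centroid, the unit direction, and the orientation in degrees."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)                # 2x2 covariance of the edge points
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]         # principal axis = line direction
    angle = np.degrees(np.arctan2(direction[1], direction[0]))
    return centroid, direction, angle

# Noisy points along y = 0.5x + 3; the recovered angle is close to atan(0.5) = 26.6 degrees.
x = np.linspace(0, 50, 200)
pts = np.column_stack([x, 0.5 * x + 3 + 0.3 * np.random.randn(x.size)])
print(round(line_from_edge_points(pts)[2], 2))
```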


Pre-processing Algorithm for Detection of Slab Information on Steel Process using Robust Feature Points extraction (강건한 특징점 추출을 이용한 철강제품 정보 검출을 위한 전처리 알고리즘)

  • Choi, Jong-Hyun;Yun, Jong-Pil;Choi, Sung-Hoo;Koo, Keun-Hwi;Kim, Sang-Woo
    • Proceedings of the KIEE Conference / 2008.07a / pp.1819-1820 / 2008
  • Steel slabs are marked with slab management numbers (SMNs). To increase efficiency, automated identification of SMNs from digital images is desirable, and automatic extraction of SMNs is a prerequisite for automatic character segmentation and recognition. The images include complex backgrounds, and the position of the text region on the slab is variable. This paper describes a pre-processing algorithm for detecting slab information using robust feature point extraction. Using the SIFT (Scale Invariant Feature Transform) algorithm, we can reduce the search region for extracting SMNs from the slab image.
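
A loose sketch of this idea is shown below: detect robust keypoints with SIFT and shrink the search region for the marked text to the horizontal band where keypoints cluster. The row-histogram heuristic is my own simplification rather than the paper's pre-processing, and the image path is a placeholder.

```python
import cv2
import numpy as np

img = cv2.imread("slab.png", cv2.IMREAD_GRAYSCALE)           # hypothetical slab image
keypoints = cv2.SIFT_create(nfeatures=2000).detect(img, None)
ys = np.array([kp.pt[1] for kp in keypoints])

# Find the horizontal band with the highest keypoint density.
hist, edges = np.histogram(ys, bins=32, range=(0, img.shape[0]))
band = int(np.argmax(hist))
y0, y1 = int(edges[band]), int(edges[band + 1])
roi = img[max(0, y0 - 20):min(img.shape[0], y1 + 20), :]     # reduced search region for the SMN
print("candidate text band:", y0, y1, "ROI shape:", roi.shape)
```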


A Multiple Features Video Copy Detection Algorithm Based on a SURF Descriptor

  • Hou, Yanyan;Wang, Xiuzhen;Liu, Sanrong
    • Journal of Information Processing Systems / v.12 no.3 / pp.502-510 / 2016
  • Considering the diversity of video copy transforms, a multi-feature video copy detection algorithm based on the Speeded-Up Robust Features (SURF) local descriptor is proposed in this paper. After the video is preprocessed, coarse copy detection is done with an ordinal measure (OM) algorithm. If the matching result is greater than a specified threshold, fine copy detection is done based on the SURF descriptor, with box filters applied over the integral image. To improve detection speed, the Hessian matrix trace of the SURF descriptor is used for pre-matching, and the dimensionality of the traditional SURF feature vector is reduced for video matching. Our experimental results indicate that video copy detection precision and recall are greatly improved compared with traditional algorithms, that the proposed multiple-features algorithm has good robustness and discrimination accuracy, and that detection speed is also improved.
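
The coarse ordinal-measure (OM) stage can be sketched as follows: each frame is reduced to the rank order of its block-average intensities, and two frames are compared by the distance between their rank vectors, which is insensitive to global brightness and contrast changes. The 4x4 block grid is an illustrative choice; the SURF fine stage and the dimension reduction are omitted.

```python
import numpy as np

def ordinal_signature(frame, grid=4):
    """frame: 2-D grayscale array. Returns the rank vector of its block means."""
    h, w = frame.shape
    means = [frame[i * h // grid:(i + 1) * h // grid,
                   j * w // grid:(j + 1) * w // grid].mean()
             for i in range(grid) for j in range(grid)]
    return np.argsort(np.argsort(means))          # ranks of the block averages

def om_distance(f1, f2, grid=4):
    s1, s2 = ordinal_signature(f1, grid), ordinal_signature(f2, grid)
    return np.abs(s1 - s2).mean()                 # 0 when the rank patterns are identical

a = np.random.rand(240, 320)
b = 0.6 * a + 0.1                                 # a "copy" with brightness/contrast change
print(om_distance(a, b))                          # ~0: the ordinal ranks survive the transform
```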

Endpoint Detection of Speech Signal Using Wavelet Transform (웨이브렛 변환을 이용한 음성신호의 끝점검출)

  • 석종원;배건성
    • The Journal of the Acoustical Society of Korea / v.18 no.6 / pp.57-64 / 1999
  • In this paper, we investigate a robust endpoint detection algorithm for noisy environments. A new feature parameter based on the discrete wavelet transform is proposed for word boundary detection of isolated utterances. The sum of the standard deviation of the wavelet coefficients in the third coarse scale and the weighted standard deviation in the first detail scale is defined as the new feature parameter for endpoint detection. We then develop a robust endpoint detection algorithm using this wavelet-domain feature. For performance evaluation, we measure the detection accuracy and the average recognition error rate due to endpoint detection in an HMM-based recognition system across several signal-to-noise ratios and noise conditions.
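
A hedged sketch of the wavelet feature is given below: per analysis frame, take a three-level discrete wavelet decomposition and combine the standard deviation of the coarse (level-3 approximation) coefficients with a weighted standard deviation of the first detail coefficients. PyWavelets, the db4 wavelet, the 256-sample frame, and the 0.5 weight are my assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def endpoint_feature(frame, wavelet="db4", weight=0.5):
    """frame: 1-D array of speech samples for one analysis frame."""
    cA3, cD3, cD2, cD1 = pywt.wavedec(frame, wavelet, level=3)
    return np.std(cA3) + weight * np.std(cD1)

# Dummy signal: near-silence followed by a louder burst, framed at 256 samples.
sig = np.concatenate([0.01 * np.random.randn(4096), np.random.randn(4096)])
feats = [endpoint_feature(sig[i:i + 256]) for i in range(0, len(sig) - 255, 256)]
print(["%.2f" % f for f in feats[::4]])   # the values jump where the burst begins
```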


SURF algorithm to improve Correspondence Point using Geometric Features (기하학적 특징을 이용한 SURF 알고리즘의 대응점 개선)

  • Kim, Ji-Hyun;Koo, Kyung-Mo;Kim, Cheol-Ki;Cha, Eui-Young
    • Proceedings of the Korean Society of Computer Information Conference / 2012.07a / pp.43-46 / 2012
  • In various applications of computer vision, many approaches rely on feature points. Global features are rarely used because of the risk and inaccuracy of their representation, so research has focused mainly on local features. Among these, the SURF (Speeded-Up Robust Features) algorithm is a widely known feature matching algorithm that finds and matches feature points at the same physical location across multiple images. However, when matching pairs are obtained by matching feature points with the SURF algorithm, the accuracy of the matched feature points can be low. In this paper, we propose a method that improves the accuracy of the corresponding points matched by the SURF algorithm by using the geometric properties of Delaunay triangles to select high-accuracy feature points.
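
A rough sketch of this idea: match keypoints with SURF, triangulate the matched points of the first image with Delaunay, and keep only matches whose triangles preserve their orientation in the second image. SURF requires opencv-contrib, the orientation-sign test is a simplified stand-in for the paper's geometric criterion, and the file names are placeholders.

```python
import cv2
import numpy as np
from scipy.spatial import Delaunay

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)           # hypothetical image pair
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(400)                       # requires opencv-contrib
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)
pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in (p for p in pairs if len(p) == 2) if m.distance < 0.75 * n.distance]

p1 = np.array([kp1[m.queryIdx].pt for m in good])
p2 = np.array([kp2[m.trainIdx].pt for m in good])

def signed_area(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

keep = set()
for i, j, k in Delaunay(p1).simplices:            # triangles over the matched points in image 1
    if signed_area(p1[i], p1[j], p1[k]) * signed_area(p2[i], p2[j], p2[k]) > 0:
        keep.update((i, j, k))                    # orientation preserved -> plausible matches
refined = [good[i] for i in sorted(keep)]
print(len(good), "->", len(refined), "matches after the geometric filter")
```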


Recent Advances in Feature Detectors and Descriptors: A Survey

  • Lee, Haeseong;Jeon, Semi;Yoon, Inhye;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing / v.5 no.3 / pp.153-163 / 2016
  • Local feature extraction methods for images and videos are widely applied in the fields of image understanding and computer vision. However, robust features are detected differently when using the latest feature detectors and descriptors because of diverse image environments. This paper analyzes various feature extraction methods by summarizing algorithms, specifying properties, and comparing performance. We analyze eight feature extraction methods. The performance of feature extraction in various image environments is compared and evaluated. As a result, the feature detectors and descriptors can be used adaptively for image sequences captured under various image environments. Also, the evaluation of feature detectors and descriptors can be applied to driving assistance systems, closed circuit televisions (CCTVs), robot vision, etc.