• Title/Abstract/Keywords: feature recognition

Search results: 2,567 items (processing time: 0.04 seconds)

Wavelet 특징 파라미터를 이용한 한국어 고립 단어 음성 검출 및 인식에 관한 연구 (A Study on Korean Isolated Word Speech Detection and Recognition using Wavelet Feature Parameter)

  • 이준환;이상범
    • 한국정보처리학회논문지 / Vol. 7, No. 7 / pp.2238-2245 / 2000
  • In this paper, feature parameters extracted with the wavelet transform from Korean isolated-word speech are used as features for speech detection and recognition. For speech detection, the wavelet features produce more accurate speech-boundary detection than the method based on energy and zero-crossing rate. For recognition, applying the wavelet feature parameters in the same way as MFCC feature parameters yields results equal to those of FFT-based MFCC feature parameters. These results verify the usefulness of wavelet-transform feature parameters for speech analysis and recognition.

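The entry above uses wavelet-transform feature parameters both to locate speech boundaries and as recognition features. The paper's exact wavelet family, frame size, and thresholds are not given; the sketch below is only a minimal illustration of turning per-frame wavelet subband energies into a crude speech/non-speech decision (all parameter values are assumptions).

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_features(frame, wavelet="db4", level=4):
    """Per-frame feature vector: log-energy of each wavelet subband."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-10) for c in coeffs])

def detect_speech(frames, thresh_ratio=0.1):
    """Flag a frame as speech when its total subband energy rises well above
    the quietest frame -- a crude stand-in for an endpoint detector."""
    energies = np.array([np.exp(wavelet_features(f)).sum() for f in frames])
    floor, peak = energies.min(), energies.max()
    return energies > floor + thresh_ratio * (peak - floor)

# Illustrative use: split an already-loaded 16 kHz signal into 25 ms frames.
# frames = signal[: len(signal) // 400 * 400].reshape(-1, 400)
# speech_mask = detect_speech(frames)
```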

입자 유형별 형상추출에 의한 마모입자 자동인식에 관한 연구 (A study on automatic wear debris recognition by using particle feature extraction)

  • 장래혁;윤의성;공호성
    • 한국윤활학회:학술대회논문집 / 한국윤활학회 1998년도 제27회 춘계학술대회 / pp.314-320 / 1998
  • Wear debris morphology is closely related to the wear mode and mechanism that produced it. Image recognition of wear debris is therefore a powerful tool for wear monitoring, but it has usually required an expert's experience and the results could be too subjective. Automatic tools for wear debris recognition are needed to solve this problem. In this work, an algorithm for automatic wear debris recognition was proposed and implemented as PC-based software. The presented method defined a characteristic three-dimensional feature space in which typical types of wear debris were located separately by a knowledge-based system, and the similarity of the wear debris of interest was compared against them in this space. The three-dimensional feature space was obtained from multiple feature vectors by using a multi-dimensional scaling technique. The results showed that the presented automatic wear debris recognition was satisfactory in many application cases.

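The wear-debris entry above places typical debris types in a three-dimensional feature space built with multi-dimensional scaling and compares an unknown particle against them. A rough sketch of that idea with scikit-learn's MDS is shown below; the shape descriptors, class names, and counts are placeholders, not the paper's actual features.

```python
import numpy as np
from sklearn.manifold import MDS

# Placeholder shape descriptors (e.g. elongation, roundness, edge detail, ...)
# for reference debris of known type plus one unknown particle.
rng = np.random.default_rng(0)
reference_vectors = rng.random((40, 6))          # 40 known particles, 6 descriptors
reference_types = np.repeat(["cutting", "sliding", "fatigue", "spherical"], 10)
unknown_vector = rng.random((1, 6))

# Embed everything together into a 3-D feature space with metric MDS.
embedding = MDS(n_components=3, random_state=0).fit_transform(
    np.vstack([reference_vectors, unknown_vector]))
ref_points, unknown_point = embedding[:-1], embedding[-1]

# Classify the unknown particle by its nearest typical-debris neighbour.
distances = np.linalg.norm(ref_points - unknown_point, axis=1)
print("closest debris type:", reference_types[np.argmin(distances)])
```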

LSG: 모델 기반 3차원 물체 인식을 위한 정형화된 국부적인 특징 구조 (LSG (Local Surface Group): A Generalized Local Feature Structure for Model-Based 3D Object Recognition)

  • 이준호
    • 정보처리학회논문지B / Vol. 8B, No. 5 / pp.573-578 / 2001
  • This research proposes a generalized local feature structure named "LSG" (Local Surface Group) for model-based 3D object recognition. An LSG consists of a surface and its immediately adjacent surfaces that are simultaneously visible from a given viewpoint. That is, an LSG is not a simple feature but a viewpoint-dependent feature structure that contains several attributes such as surface type, color, area, radius, and the simultaneously adjacent surfaces. In addition, we have developed a new method based on Bayesian theory that computes a measure of how distinct an LSG is compared with other LSGs for the purpose of object recognition. We have tested the proposed methods on an object database composed of twenty 3D objects. The experimental results show that the LSG and the Bayesian computing method can be successfully employed to achieve rapid 3D object recognition.

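The LSG entry above scores how distinct a local surface group is relative to the other LSGs in the model database using a Bayesian measure. The paper's actual formulation is not reproduced here; the toy sketch below merely illustrates one Bayesian reading of "distinctiveness" as a peaked posterior over models, assuming equal priors and an isotropic Gaussian likelihood on the attribute vector.

```python
import numpy as np

def lsg_posterior(observed, model_lsgs, sigma=0.1):
    """P(model | observed LSG attributes) with equal priors; a distinctive LSG
    gives a sharply peaked posterior, an ambiguous one a flat posterior."""
    observed = np.asarray(observed, dtype=float)
    likelihoods = np.array([
        np.exp(-np.sum((observed - np.asarray(m, dtype=float)) ** 2) / (2 * sigma ** 2))
        for m in model_lsgs
    ])
    return likelihoods / likelihoods.sum()

# Toy attribute vectors [surface-type code, area, radius] for three model LSGs.
models = [[1, 0.80, 0.30], [2, 0.50, 0.90], [1, 0.40, 0.20]]
posterior = lsg_posterior([1, 0.78, 0.31], models)
print("posterior over models:", posterior.round(3), "peak:", posterior.max().round(3))
```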

입자 유형별 형상추출에 의한 마모입자 자동인식에 관한 연구 (A Study on Automatic Wear Debris Recognition by using Particle Feature Extraction)

  • 장래혁;윤의성;공호성
    • Tribology and Lubricants / Vol. 15, No. 2 / pp.206-211 / 1999
  • Wear debris morphology is closely related to the wear mode and mechanism that produced it. Image recognition of wear debris is therefore a powerful tool for wear monitoring, but it has usually required an expert's experience and the results could be too subjective. Automatic tools for wear debris recognition are needed to solve this problem. In this work, an algorithm for automatic wear debris recognition was proposed and implemented as PC-based software. The presented method defined a characteristic three-dimensional feature space in which typical types of wear debris were located separately by a knowledge-based system, and the similarity of the wear debris of interest was compared against them in this space. The three-dimensional feature space was obtained from multiple feature vectors by using a multi-dimensional scaling technique. The results showed that the presented automatic wear debris recognition was satisfactory in many application cases.

위치이동에 무관한 웨이블릿 변환을 이용한 패턴인식 (Pattern Recognition Using Translation-Invariant Wavelet Transform)

  • 김국진;조성원;김재민;임철수
    • 한국지능시스템학회논문지 / Vol. 13, No. 3 / pp.281-286 / 2003
  • The wavelet transform can efficiently capture the local characteristics of a signal in the space-frequency domain. However, when the wavelet transform is applied to feature extraction for pattern recognition, the extracted feature values change as the input signal is shifted in position, which lowers the recognition rate. This paper proposes a noise-robust iris recognition algorithm that compensates for this shift-dependence that arises when the wavelet transform is applied to pattern recognition. Experiments demonstrate the superiority of the proposed algorithm.
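
The entry above addresses the fact that ordinary wavelet features change when the input pattern shifts. One common remedy, shown below as a hedged sketch (the paper's actual construction may differ), is to use the undecimated, or stationary, wavelet transform and keep only subband energies, which are largely unaffected by a circular shift of the input.

```python
import numpy as np
import pywt  # PyWavelets

def shift_invariant_features(signal, wavelet="db2", level=3):
    """Subband energies of the stationary (undecimated) wavelet transform.
    Without the downsampling step, these energies barely change when the
    input is shifted, unlike ordinary DWT coefficients."""
    n = len(signal) - len(signal) % (2 ** level)   # SWT needs a compatible length
    coeffs = pywt.swt(np.asarray(signal[:n], dtype=float), wavelet, level=level)
    return np.array([np.sum(cD ** 2) for _, cD in coeffs])

x = np.sin(np.linspace(0, 20 * np.pi, 512)) + 0.05 * np.random.randn(512)
print(shift_invariant_features(x))
print(shift_invariant_features(np.roll(x, 7)))     # nearly identical energies
```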

Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems / Vol. 17, No. 2 / pp.385-398 / 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, thereby limiting the application of gesture recognition algorithms in actual scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. Subsequently, a shallow residual structure is built to share weights, thereby avoiding gradient disappearance or gradient explosion as the network layer deepens; consequently, the difficulty of model optimisation is simplified. Additional convolutional neural networks are used to accelerate the refinement of deep abstract features based on the spatial importance of the gesture feature distribution. Finally, a fully connected cascade softmax classifier is used to complete the gesture recognition. Compared with the dense connection multiplexing feature information network, the proposed algorithm is optimised in feature multiplexing to avoid performance fluctuations caused by feature redundancy. Experimental results from the ISOGD gesture dataset and Gesture dataset prove that the proposed algorithm affords a fast convergence speed and high accuracy.
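
The gesture-recognition entry above builds on shallow residual blocks with small convolution kernels. Its exact architecture is not specified in the abstract; the PyTorch sketch below only illustrates the residual-shortcut idea it relies on (channel counts, input size, and the number of gesture classes are assumptions).

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Shallow residual block with small 3x3 kernels; the identity shortcut lets
    gradients bypass the convolutions, easing optimisation as the network deepens."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)          # residual connection

# Toy gesture classifier: conv stem -> residual blocks -> softmax over classes.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    ResidualBlock(32), ResidualBlock(32),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),                             # assume 10 gesture classes
)
probs = torch.softmax(model(torch.randn(1, 3, 112, 112)), dim=1)
```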

DMS 모델과 이중 스펙트럼 특징을 이용한 HMM에 의한 음성 인식 (HMM-based Speech Recognition using DMS Model and Double Spectral Feature)

  • 안태옥
    • 한국산학기술학회논문지 / Vol. 7, No. 4 / pp.649-655 / 2006
  • This paper studies speaker-independent speech recognition and proposes an HMM (Hidden Markov Model) recognition method that uses a DMSVQ (Dynamic Multi-Section Vector Quantization) codebook based on the DMS model together with double spectral features. LPC cepstrum coefficients are used as the static spectral feature, and regression coefficients of the LPC cepstrum are used as the dynamic spectral feature. These two spectral features are each quantized with a VQ codebook, and the HMM based on the DMS model is trained by taking the static and dynamic spectral features as input. Recognition experiments with the proposed method were performed on the same data and under the same conditions as experiments with various existing recognition methods for comparison. The results show that the proposed method outperforms the existing methods.

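The entry above pairs a static spectral feature (LPC cepstrum) with a dynamic one (its regression, i.e. delta, coefficients) and quantizes each stream with its own VQ codebook before HMM training. The sketch below covers only the delta computation and the codebook step; the static features are placeholders standing in for LPC cepstrum vectors, and the DMS/HMM stage is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

def regression_coefficients(features, width=2):
    """Dynamic (delta) features: least-squares slope of each static coefficient
    over a +/- `width` frame window -- the usual regression-coefficient formula."""
    T = features.shape[0]
    deltas = np.zeros_like(features)
    denom = 2 * sum(k * k for k in range(1, width + 1))
    for t in range(T):
        for k in range(1, width + 1):
            deltas[t] += k * (features[min(t + k, T - 1)] - features[max(t - k, 0)])
    return deltas / denom

static = np.random.randn(200, 12)                  # placeholder per-frame vectors
dynamic = regression_coefficients(static)

# One VQ codebook per feature stream, as in the double-feature setup.
static_codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(static)
dynamic_codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(dynamic)
static_symbols = static_codebook.predict(static)   # discrete observations for an HMM
```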

일반 카메라 영상에서의 얼굴 인식률 향상을 위한 얼굴 특징 영역 추출 방법 (A Facial Feature Area Extraction Method for Improving Face Recognition Rate in Camera Image)

  • 김성훈;한기태
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 5, No. 5 / pp.251-260 / 2016
  • Face recognition extracts features from a face image, learns them with various algorithms, and identifies a person by comparing the learned data with features from a new face image; a variety of methods are required to improve the recognition rate. In the training stage, feature components must be extracted from face images, and an existing method for this is Linear Discriminant Analysis (LDA). This method represents face images as points in a high-dimensional space and analyzes class information and the distribution of the points to extract discriminative features. Since the position of a point is determined by the pixel values of the face image, incorrect facial features may be extracted when the image includes unnecessary regions or regions that change frequently; in particular, when face recognition is performed with an ordinary camera, the face size varies with the distance between the face and the camera, which ultimately lowers the recognition rate. To solve these problems, this paper detects the face region with an ordinary camera, removes unnecessary regions using the facial contour computed with a Gabor filter within the detected region, and then normalizes the face region to a fixed size. Facial feature components are extracted from the normalized face image by linear discriminant analysis and learned with an artificial neural network; the resulting face recognition improves the recognition rate by about 13% compared with the existing method that includes unnecessary regions.
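
The face-recognition entry above detects the face, trims unnecessary regions, normalizes the crop to a fixed size, and then applies LDA followed by a neural network. The condensed sketch below follows that flow with OpenCV and scikit-learn, but it is only an approximation: the Gabor-filter contour step is replaced by a plain Haar-cascade crop, and the image size, layer sizes, and training arrays are placeholders.

```python
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def normalized_face(gray, size=(64, 64)):
    """Detect the largest face, crop it, and rescale it to a fixed size so the
    camera-to-face distance no longer changes the feature layout."""
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return cv2.resize(gray[y:y + h, x:x + w], size).flatten()

# X: normalized face vectors, y: person labels (built elsewhere from real images;
# random placeholders are used here so the snippet runs on its own).
X, y = np.random.rand(100, 64 * 64), np.repeat(np.arange(10), 10)
features = LinearDiscriminantAnalysis(n_components=9).fit_transform(X, y)
classifier = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(features, y)
```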

다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법 (Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm)

  • 주종태;장인훈;양현창;심귀보
    • 제어로봇시스템학회논문지 / Vol. 13, No. 8 / pp.754-759 / 2007
  • In this paper, we propose a bi-modal sensor fusion algorithm, an emotion recognition method that can classify four emotions (happy, sad, angry, surprise) by using a facial image and a speech signal together. We extract feature vectors from the speech signal using acoustic features without language features and classify the emotional pattern with a neural network. We also select the mouth, eye, and eyebrow regions from the facial image, and the extracted feature vectors are reduced to a low-dimensional feature vector by Principal Component Analysis (PCA). Finally, we propose a method that fuses the facial-image and speech results into a single emotion recognition result.
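
The entry above classifies four emotions by combining a facial-image branch (PCA-reduced features into a neural network) with a speech branch (acoustic features into a neural network) and fusing the two results. The sketch below shows one common way to do such late fusion; the feature dimensions, the averaging rule, and the random training arrays are assumptions rather than the paper's actual design.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["happy", "sad", "angry", "surprise"]

# Placeholder training data: facial vectors (mouth/eye/eyebrow measurements)
# and acoustic vectors, with shared emotion labels.
face_X, speech_X = np.random.rand(200, 30), np.random.rand(200, 12)
y = np.random.randint(4, size=200)

# Facial branch: PCA to a low-dimensional vector, then a neural-network classifier.
pca = PCA(n_components=8).fit(face_X)
face_clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(
    pca.transform(face_X), y)

# Speech branch: acoustic features straight into a neural-network classifier.
speech_clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(speech_X, y)

def fused_emotion(face_vec, speech_vec):
    """Late fusion: average the two class-probability vectors and take the best."""
    p_face = face_clf.predict_proba(pca.transform([face_vec]))[0]
    p_speech = speech_clf.predict_proba([speech_vec])[0]
    return EMOTIONS[int(np.argmax((p_face + p_speech) / 2))]

print(fused_emotion(face_X[0], speech_X[0]))
```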

Comparative Study of Corner and Feature Extractors for Real-Time Object Recognition in Image Processing

  • Mohapatra, Arpita;Sarangi, Sunita;Patnaik, Srikanta;Sabut, Sukant
    • Journal of information and communication convergence engineering / Vol. 12, No. 4 / pp.263-270 / 2014
  • Corner detection and feature extraction are essential aspects of computer vision problems such as object recognition and tracking. Feature detectors such as the Scale Invariant Feature Transform (SIFT) yield high-quality features but are computationally intensive for use in real-time applications. The Features from Accelerated Segment Test (FAST) detector provides faster feature computation by extracting only corner information when recognising an object. In this paper we analyze efficient object detection algorithms with respect to efficiency, quality, and robustness by comparing the characteristics of corner detectors and feature extractors. The simulation results show that, compared with the conventional SIFT algorithm, an object recognition system based on the FAST corner detector yields increased speed with low performance degradation. The average time to find keypoints with the SIFT method is about 0.116 seconds for extracting 2169 keypoints; the average time to find corner points with the FAST method at threshold 30 was 0.651 seconds for detecting 1714 keypoints. Thus the FAST method detects corner points faster with better-quality images for object recognition.
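
The comparison above times SIFT keypoint extraction against FAST corner detection at threshold 30. A small OpenCV timing harness in the same spirit is sketched below; the image path is a placeholder, and absolute times will depend on the machine and image rather than match the figures quoted in the abstract.

```python
import time
import cv2

image = cv2.imread("test_object.png", cv2.IMREAD_GRAYSCALE)   # placeholder image path

sift = cv2.SIFT_create()
fast = cv2.FastFeatureDetector_create(threshold=30)           # threshold as in the study

t0 = time.perf_counter()
sift_keypoints, _ = sift.detectAndCompute(image, None)
t1 = time.perf_counter()
fast_keypoints = fast.detect(image, None)
t2 = time.perf_counter()

print(f"SIFT: {len(sift_keypoints)} keypoints in {t1 - t0:.3f} s")
print(f"FAST: {len(fast_keypoints)} corners   in {t2 - t1:.3f} s")
```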