• Title/Summary/Keyword: Audio feature extraction


Robust Feature Extraction Based on Image-based Approach for Visual Speech Recognition (시각 음성인식을 위한 영상 기반 접근방법에 기반한 강인한 시각 특징 파라미터의 추출 방법)

  • Gyu, Song-Min;Pham, Thanh Trung;Min, So-Hee;Kim, Jing-Young;Na, Seung-You;Hwang, Sung-Taek
    • Journal of the Korean Institute of Intelligent Systems, v.20 no.3, pp.348-355, 2010
  • In spite of developments in speech recognition technology, speech recognition in noisy environments is still a difficult task. To solve this problem, researchers have proposed methods that use visual information in addition to audio information for visual speech recognition. However, visual information is subject to visual noise just as audio information is subject to acoustic noise, and this visual noise degrades visual speech recognition. How to extract visual feature parameters that enhance visual speech recognition performance is therefore a field of active interest. In this paper, we propose an image-based method of visual feature parameter extraction for enhancing the recognition performance of an HMM-based visual speech recognizer. For the experiments, we constructed an audio-visual database consisting of 105 speakers, each uttering 62 words. We applied histogram matching, lip folding, RASTA filtering, a linear mask, DCT, and PCA. The experimental results show that the recognition performance of our proposed method is about 21% better than that of the baseline method.
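
The back end of such an image-based pipeline can be sketched as below: a 2-D DCT of each lip ROI followed by PCA over the utterance. This is a minimal illustration only; the ROI size, the number of retained DCT coefficients, and the PCA dimensionality are assumptions, not the paper's settings.

```python
# Minimal sketch of the DCT + PCA stage of an image-based visual
# feature pipeline; sizes are illustrative, not the paper's values.
import numpy as np
from scipy.fftpack import dct
from sklearn.decomposition import PCA

def dct_features(lip_roi, num_coeffs=64):
    """2-D DCT of a grayscale lip ROI; keep the low-frequency block."""
    c = dct(dct(lip_roi.T, norm='ortho').T, norm='ortho')
    k = int(np.sqrt(num_coeffs))
    return c[:k, :k].flatten()  # top-left = low-frequency coefficients

# rois: per-frame grayscale lip images across an utterance (stand-in data)
rois = [np.random.rand(32, 32) for _ in range(100)]
X = np.stack([dct_features(r) for r in rois])

pca = PCA(n_components=20)               # dimensionality is an assumption
visual_features = pca.fit_transform(X)   # frame vectors for an HMM recognizer
print(visual_features.shape)             # (100, 20)
```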

The Technology of the Audio Feature Extraction for Classifying Contents (콘텐츠 분류를 위한 오디오 신호 특징 추출 기술)

  • Lim, J.D.;Han, S.W.;Choi, B.C.;Chung, B.H.
    • Electronics and Telecommunications Trends, v.24 no.6, pp.121-132, 2009
  • Audio signals, which include speech as well as music and other sounds, are a critically important media type in multimedia content. The rapid growth in data volume brought about by advances in recording media and networks makes manual management difficult, so technology that automatically classifies audio signals has come to be recognized as very important. Techniques for extracting audio signal features for classifying diverse audio signals have advanced through many studies. This paper analyzes audio feature extraction methods that achieve high performance in automatic audio content classification, and examines an audio signal classification method using the SVM, a classifier with stable performance.
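
As a minimal illustration of the SVM-based classification the survey examines, the sketch below pools clip-level MFCC statistics (a common feature choice, assumed here) and trains an SVM; the feature set, stand-in data, and SVM settings are all illustrative.

```python
# Minimal sketch: clip-level MFCC statistics + SVM classifier.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def clip_features(path):
    """Mean and std of MFCCs over a clip -> fixed-length vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Stand-in data; in practice X would come from clip_features() on real audio.
X = np.random.randn(40, 26)
y = np.random.randint(0, 3, size=40)  # e.g., 0=speech, 1=music, 2=other
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))
clf.fit(X, y)
```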

An Implementation of Real-Time Speaker Verification System on Telephone Voices Using DSP Board (DSP보드를 이용한 전화음성용 실시간 화자인증 시스템의 구현에 관한 연구)

  • Lee Hyeon Seung;Choi Hong Sub
    • MALSORI, no.49, pp.145-158, 2004
  • This paper aims at the implementation of a real-time speaker verification system using a DSP board. The Dialog/4 board, which is based on a microprocessor and a DSP processor, was selected for easy control of telephone signals and processing of audio/voice signals. The speaker verification system performs signal processing and feature extraction after receiving the voice and its claimed ID. Then, by computing the likelihood ratio of the claimed speaker model to the background model, it makes a real-time decision on acceptance or rejection. For the verification experiments, a total of 15 speaker models and 6 background models were adopted. The experimental results show a verification accuracy of 99.5% when using telephone speech-based speaker models.
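
The accept/reject rule described above can be sketched as a log-likelihood ratio test between the claimed speaker's model and a background model. The GMMs, stand-in data, and threshold below are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a likelihood-ratio speaker verification decision.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)  # stand-in "cepstral" training frames
speaker_gmm = GaussianMixture(n_components=4).fit(rng.normal(0.5, 1, (200, 13)))
background_gmm = GaussianMixture(n_components=8).fit(rng.normal(0, 1, (500, 13)))

def verify(features, threshold=0.0):
    """features: (frames, dims) array; returns True to accept the claim."""
    llr = speaker_gmm.score(features) - background_gmm.score(features)
    return llr > threshold

test_utterance = rng.normal(0.5, 1, (50, 13))  # stand-in test frames
print(verify(test_utterance))
```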


Intensified Sentiment Analysis of Customer Product Reviews Using Acoustic and Textual Features

  • Govindaraj, Sureshkumar;Gopalakrishnan, Kumaravelan
    • ETRI Journal, v.38 no.3, pp.494-501, 2016
  • Sentiment analysis incorporates natural language processing and artificial intelligence and has evolved into an important research area. Sentiment analysis on product reviews has been used in widespread applications to improve customer retention and business processes. In this paper, we propose a method for performing an intensified sentiment analysis on customer product reviews. The method involves the extraction of two feature sets from each given customer product review: a set of acoustic features (representing emotions) and a set of lexical features (representing sentiments). These sets are then combined and used in a supervised classifier to predict the sentiments of customers. For our experimental evaluations, we use an audio speech dataset prepared from Amazon product reviews and downloaded from the YouTube portal.
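
A minimal sketch of the fusion step follows, assuming simple concatenation of the acoustic and lexical vectors before a generic supervised classifier; all dimensions, the stand-in data, and the classifier choice are illustrative, not the paper's.

```python
# Minimal sketch: early fusion of acoustic and lexical feature sets.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
acoustic = rng.normal(size=(100, 20))   # e.g., pitch/energy statistics
lexical = rng.normal(size=(100, 300))   # e.g., bag-of-words or embeddings
labels = rng.integers(0, 2, size=100)   # 0 = negative, 1 = positive

fused = np.hstack([acoustic, lexical])  # concatenate before classification
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
```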

Performance Comparison of Guitar Chords Classification Systems Based on Artificial Neural Network (인공신경망 기반의 기타 코드 분류 시스템 성능 비교)

  • Park, Sun Bae;Yoo, Do-Sik
    • Journal of Korea Multimedia Society, v.21 no.3, pp.391-399, 2018
  • In this paper, we construct and compare various guitar chord classification systems using perceptron neural networks and convolutional neural networks, with no pre-processing other than the Fourier transform, to identify the optimal chord classification system. Conventional guitar chord classification schemes use, for better feature extraction, computationally demanding pre-processing techniques such as stochastic analysis employing a hidden Markov model or acoustic data filtering, and hence are burdensome for real-time chord classification. For this reason, we construct various perceptron neural networks and convolutional neural networks that use only the Fourier transform for data pre-processing and compare them on a dataset obtained by playing an electric guitar. According to our comparison, convolutional neural networks provide optimal performance considering both chord classification accuracy and processing time. In particular, convolutional neural networks exhibit robust performance even when only a small fraction of the low-frequency components of the data is used.
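
A minimal sketch of the kind of network compared here: a small 1-D CNN whose only pre-processing is the Fourier transform, keeping just the low-frequency bins, which echoes the robustness finding above. Layer sizes, the bin count, and the number of chord classes are assumptions.

```python
# Minimal sketch: 1-D CNN over FFT magnitudes for chord classification.
import torch
import torch.nn as nn

class ChordCNN(nn.Module):
    def __init__(self, n_bins=1024, n_chords=10):
        super().__init__()
        self.n_bins = n_bins  # keep only the lowest n_bins FFT bins
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_chords),
        )

    def forward(self, wave):
        # the Fourier transform is the only pre-processing step
        mag = torch.fft.rfft(wave).abs()[:, : self.n_bins]
        return self.net(mag.unsqueeze(1))

model = ChordCNN()
wave = torch.randn(8, 4096)  # stand-in electric-guitar excerpts
logits = model(wave)         # (8, n_chords) chord scores
```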

Method for Classification of Age and Gender Using Gait Recognition (걸음걸이 인식을 통한 연령 및 성별 분류 방법)

  • Yoo, Hyun Woo;Kwon, Ki Youn
    • Transactions of the Korean Society of Mechanical Engineers A, v.41 no.11, pp.1035-1045, 2017
  • Classification of age and gender has been carried out through different approaches, such as facial-based and audio-based classification. One limitation of facial-based methods is the reduced recognition rate over large distances; another is the prerequisite that faces be located in front of the camera. Similarly, in audio-based methods, the recognition rate is reduced in noisy environments. In contrast, gait-based methods only require that the target person be within the camera's view. In previous works, the camera viewpoint was limited to a side view, and gait datasets consisted of a standardized gait, which differs from ordinary gait in a real environment. We propose a feature extraction method using skeleton models from an RGB-D sensor that considers the age and gender characteristics of ordinary gait. Experimental results show that the proposed method can efficiently classify age and gender within a target group of individuals in real-life environments.
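
A minimal sketch of skeleton-based gait feature extraction of this kind, assuming joint positions from an RGB-D sensor; the joint indices and the two example features (stride span, torso lean) are hypothetical, not the paper's feature set.

```python
# Minimal sketch: simple gait features from an RGB-D skeleton sequence.
import numpy as np

# Sequence shape: (frames, joints, 3) in meters; indices are hypothetical.
LEFT_ANKLE, RIGHT_ANKLE, HEAD, PELVIS = 14, 18, 3, 0

def gait_features(seq):
    # distance between ankles per frame approximates stride span
    stride = np.linalg.norm(seq[:, LEFT_ANKLE] - seq[:, RIGHT_ANKLE], axis=1)
    torso = seq[:, HEAD] - seq[:, PELVIS]
    # angle of the torso vector from vertical (y assumed up)
    lean = np.arctan2(np.linalg.norm(torso[:, [0, 2]], axis=1), torso[:, 1])
    return np.array([stride.max(), stride.mean(), lean.mean()])

seq = np.random.rand(120, 25, 3)  # stand-in capture (~4 s at 30 fps)
print(gait_features(seq))         # input to an age/gender classifier
```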

Lip Reading Method Using CNN for Utterance Period Detection (발화구간 검출을 위해 학습된 CNN 기반 입 모양 인식 방법)

  • Kim, Yong-Ki;Lim, Jong Gwan;Kim, Mi-Hye
    • Journal of Digital Convergence, v.14 no.8, pp.233-243, 2016
  • Due to speech recognition problems in noisy environments, Audio-Visual Speech Recognition (AVSR) systems, which combine speech information and visual information, have been proposed since the mid-1990s, and lip reading has played a significant role in AVSR systems. This study aims to enhance the recognition rate of uttered words using only lip shape detection, for an efficient AVSR system. After preprocessing for lip region detection, Convolutional Neural Network (CNN) techniques are applied for utterance period detection and lip shape feature vector extraction, and Hidden Markov Models (HMMs) are then used for recognition. As a result, utterance period detection achieves a success rate of 91%, a higher performance than general threshold methods. In lip reading recognition, the user-dependent experiment records a recognition rate of 88.5%, while the user-independent experiment shows 80.2%, improved results compared to previous studies.
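
A minimal sketch of the per-frame CNN stage, framed here as a binary speaking/silent classifier over mouth ROIs; the HMM recognition stage is omitted and the architecture is an assumption.

```python
# Minimal sketch: per-frame CNN for utterance period detection.
import torch
import torch.nn as nn

utterance_cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 2),  # 2 classes: speaking / silent
)

frames = torch.randn(30, 1, 32, 32)         # stand-in 32x32 mouth ROIs
speaking = utterance_cnn(frames).argmax(1)  # per-frame utterance decision
```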

Deep Learning based Raw Audio Signal Bandwidth Extension System (딥러닝 기반 음향 신호 대역 확장 시스템)

  • Kim, Yun-Su;Seok, Jong-Won
    • Journal of IKEEE, v.24 no.4, pp.1122-1128, 2020
  • Bandwidth extension refers to restoring a narrowband signal (NB) that has been degraded in the encoding and decoding process, due to limited channel capacity or the characteristics of the codec installed in a mobile communication device, by converting it into a wideband signal (WB). Bandwidth extension research has mainly focused on voice signals: methods such as SBR (Spectral Band Replication) and IGF (Intelligent Gap Filling) work in the frequency domain and restore missing or damaged high bands based on complex feature extraction processes. In this paper, we propose a model based on an autoencoder, one of the deep learning models, that outputs a bandwidth-extended signal; using residual connections in one-dimensional convolutional neural networks (CNN), the bandwidth is extended from an input time-domain signal of a fixed length without complicated pre-processing. In addition, we confirmed that the damaged high band can be restored even by training on a dataset containing various types of sound sources, including music, not limited to speech.
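
A minimal sketch of the model family described: a 1-D convolutional autoencoder over raw waveform frames with a residual connection from input to output, so the network only has to learn the missing high band. All layer sizes and the frame length are illustrative.

```python
# Minimal sketch: 1-D CNN autoencoder with a residual connection
# for raw-waveform bandwidth extension.
import torch
import torch.nn as nn

class BWEAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(32, 64, 9, stride=2, padding=4), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(64, 32, 9, stride=2, padding=4,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(32, 1, 9, stride=2, padding=4,
                               output_padding=1),
        )

    def forward(self, nb):
        # residual connection: the network learns only the missing band
        return nb + self.decoder(self.encoder(nb))

model = BWEAutoencoder()
nb = torch.randn(4, 1, 2048)  # stand-in narrowband frames
wb = model(nb)                # bandwidth-extended output, same length
```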

Speech/Mixed Content Signal Classification Based on GMM Using MFCC (MFCC를 이용한 GMM 기반의 음성/혼합 신호 분류)

  • Kim, Ji-Eun;Lee, In-Sung
    • Journal of the Institute of Electronics and Information Engineers, v.50 no.2, pp.185-192, 2013
  • In this paper, we propose a method to improve the performance of speech and mixed-content signal classification using MFCCs, based on the GMM probability model used in the MPEG USAC (Unified Speech and Audio Coding) standard. For effective pattern recognition, the Gaussian mixture model (GMM) is used. For optimal GMM parameter extraction, we use the expectation-maximization (EM) algorithm. The proposed classification algorithm is divided into two significant parts: the first extracts the optimal parameters for the GMM, and the second distinguishes between speech and mixed-content signals using MFCC feature parameters. The proposed classification algorithm shows better performance than the conventionally implemented USAC scheme.
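
A minimal sketch of the two-part scheme, assuming scikit-learn's GaussianMixture (which fits by EM) in place of the paper's implementation: one GMM per class over MFCC frames, with the class decided by the higher average log-likelihood. All parameters and data are stand-ins.

```python
# Minimal sketch: per-class GMMs (fit by EM) over MFCC frames.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
speech_mfcc = rng.normal(0.0, 1.0, (500, 13))  # stand-in training frames
mixed_mfcc = rng.normal(0.7, 1.2, (500, 13))

speech_gmm = GaussianMixture(n_components=16).fit(speech_mfcc)  # EM inside
mixed_gmm = GaussianMixture(n_components=16).fit(mixed_mfcc)

def classify(mfcc_frames):
    """Pick the class whose GMM scores the frames higher."""
    if speech_gmm.score(mfcc_frames) > mixed_gmm.score(mfcc_frames):
        return 'speech'
    return 'mixed'

print(classify(rng.normal(0.0, 1.0, (50, 13))))
```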

Classification of General Sound with Non-negativity Constraints (비음수 제약을 통한 일반 소리 분류)

  • 조용춘;최승진;방승양
    • Journal of KIISE: Software and Applications, v.31 no.10, pp.1412-1417, 2004
  • Sparse coding and independent component analysis (ICA), which are holistic representations, were successfully applied to elucidate early auditory processing and to the task of sound classification. In contrast, parts-based representation is an alternative way of understanding object recognition in the brain. In this paper we employ non-negative matrix factorization (NMF), which learns a parts-based representation, for the task of sound classification. Methods of feature extraction from spectro-temporal sounds using NMF, in the absence or presence of noise, are explained. Experimental results show that NMF-based features improve the performance of sound classification over ICA-based features.
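
A minimal sketch of NMF-based feature extraction on a magnitude spectrogram, assuming scikit-learn's NMF: W holds the parts-based spectral bases and H the non-negative activations, from which a clip-level feature vector can be pooled. Shapes and the component count are illustrative.

```python
# Minimal sketch: parts-based features via non-negative matrix factorization.
import numpy as np
from sklearn.decomposition import NMF

# Stand-in magnitude spectrogram, frequency bins x time frames.
spectrogram = np.abs(np.random.randn(257, 200))

nmf = NMF(n_components=20, init='nndsvd', max_iter=500)
W = nmf.fit_transform(spectrogram)  # (257, 20): parts-based spectral bases
H = nmf.components_                 # (20, 200): activations over time

features = H.mean(axis=1)           # pooled clip-level feature vector
print(features.shape)               # (20,) -> input to a sound classifier
```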