• Title/Abstract/Keywords: Speech signal processing

331 search results (processing time: 0.024 s)

입술 움직임 영상 신호를 이용한 음성 구간 검출 (Speech Activity Detection using Lip Movement Image Signals)

  • 김응규
    • 융합신호처리학회논문지
    • /
    • Vol. 11, No. 4
    • /
    • pp.289-297
    • /
    • 2010
  • This paper presents a method for preventing external acoustic noise from being misrecognized as speech to be recognized: in addition to the dynamic acoustic energy that may enter the speech-activity detection stage of speech recognition, the speaker's lip-movement image signal is also checked. First, consecutive images are acquired through a PC camera and the presence of lip movement is identified. Next, the lip-movement image signal data are stored in shared memory and shared with the speech recognition process. Then, in the speech-activity detection stage that precedes recognition, the data stored in shared memory are checked to verify whether the acoustic energy originates from the speaker's utterance. Finally, experiments with the speech recognizer and the image processor coupled together confirmed that when the speaker talks while facing the camera, the coupled processing proceeds normally up to the output of the recognition result, whereas when the speaker talks without facing the camera, the coupled system produces no recognition output. In addition, the discriminability of lip-movement tracking was improved by replacing the initial lip-movement feature values and initial template images obtained offline with those extracted online. An image-processing testbed was built to visually verify the lip-movement tracking process and to analyze the related parameters in real time. The coupled speech and image processing system achieved an interworking rate of about 99.3% even under various lighting conditions.
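A minimal sketch of the gating idea described in this entry, written in Python with NumPy: an energy-based speech-activity decision is accepted only when a lip-movement flag is also set. The frame length, energy threshold, and the `lip_moving` array standing in for the shared-memory flag are illustrative assumptions, not the authors' implementation.

```python
# Sketch: energy-based VAD gated by a lip-movement flag (assumed values throughout).
import numpy as np

def energy_vad(speech, frame_len=1024, threshold=0.01):
    """Return one boolean per frame: True if the frame energy exceeds the threshold."""
    n_frames = len(speech) // frame_len
    frames = speech[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.mean(frames ** 2, axis=1)
    return energy > threshold

def gated_vad(speech, lip_moving, frame_len=1024, threshold=0.01):
    """Accept a frame as speech only when acoustic energy AND lip movement agree."""
    acoustic = energy_vad(speech, frame_len, threshold)
    n = min(len(acoustic), len(lip_moving))
    return acoustic[:n] & lip_moving[:n]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speech = 0.05 * rng.standard_normal(16000)                  # background noise only
    speech[4096:8192] += np.sin(2 * np.pi * 200 * np.arange(4096) / 16000)  # "utterance"
    lip_flag = np.zeros(len(speech) // 1024, dtype=bool)
    lip_flag[4:8] = True                                        # camera saw lip movement here
    print(gated_vad(speech, lip_flag))
```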

모음 열을 이용한 발화 검증 (An Utterance Verification using Vowel String)

  • 유일수;노용완;홍광석
    • 융합신호처리학회 학술대회논문집
    • /
    • 한국신호처리시스템학회 2003 Summer Conference Proceedings
    • /
    • pp.46-49
    • /
    • 2003
  • The use of confidence measures for word/utterance verification has become an essential component of any speech input application. Confidence measures have applications to a number of problems, such as rejection of incorrect hypotheses, speaker adaptation, or adaptive modification of the hypothesis score during search in continuous speech recognition. In this paper, we present a new utterance verification method using vowel strings. Using subword HMMs of VCCV units, we create anti-models that include the vowel strings of the hypothesized words. The experimental results show that the utterance verification rate of the proposed method is about 79.5%.
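A minimal sketch of likelihood-ratio utterance verification in the spirit of this entry: the hypothesis score is normalized by an anti-model score and compared with a threshold. The diagonal-Gaussian scorers below merely stand in for the VCCV-unit HMMs and vowel-string anti-models; feature dimensions, model parameters, and the threshold are illustrative assumptions.

```python
# Sketch: accept an utterance hypothesis when the log-likelihood ratio against an
# anti-model exceeds a threshold (Gaussian scorers stand in for HMM likelihoods).
import numpy as np

def gaussian_loglik(frames, mean, var):
    """Average per-frame log-likelihood under a diagonal Gaussian."""
    diff = frames - mean
    ll = -0.5 * (np.log(2 * np.pi * var) + diff ** 2 / var).sum(axis=1)
    return ll.mean()

def verify_utterance(frames, target_model, anti_model, threshold=0.0):
    """Accept the hypothesized word if the log-likelihood ratio exceeds the threshold."""
    llr = gaussian_loglik(frames, *target_model) - gaussian_loglik(frames, *anti_model)
    return llr, llr > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frames = rng.normal(loc=1.0, scale=1.0, size=(50, 12))   # e.g. 12-dim cepstral frames
    target = (np.ones(12), np.ones(12))                       # (mean, var) of the hypothesis model
    anti = (np.zeros(12), np.ones(12))                        # (mean, var) of the anti-model
    print(verify_utterance(frames, target, anti))
```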


Adaptive Post Processing of Nonlinear Amplified Sound Signal

  • Lee, Jae-Kyu;Choi, Jong-Suk;Seok, Cheong-Gyu;Kim, Mun-Sang
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 ICCAS 2005
    • /
    • pp.872-876
    • /
    • 2005
  • We propose real-time post-processing of a nonlinearly amplified signal to improve voice recognition in remote talk. In previous research, we found that nonlinear amplification has a unique advantage for both voice activity detection and sound localization in remote talk. However, the original signal becomes distorted by the nonlinear amplification and, as a result, the rest of the processing sequence, such as speech recognition, performs less satisfactorily. To remedy this problem, we implement a linearization algorithm that recovers the linear characteristics of the voice signal after localization has been done.
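A minimal sketch of the linearization idea: if the nonlinear amplification is known, applying its inverse restores the linear characteristics of the waveform before recognition. The mu-law-style compression below is only an assumed stand-in for the amplification used in the paper.

```python
# Sketch: assumed mu-law-style compressive amplification and its inverse (expansion).
import numpy as np

MU = 255.0

def nonlinear_amplify(x):
    """Assumed compressive amplification applied to a normalized signal in [-1, 1]."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def linearize(y):
    """Inverse mapping: recover the original (linear) signal from the amplified one."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

if __name__ == "__main__":
    x = np.linspace(-1, 1, 9)
    y = nonlinear_amplify(x)
    print(np.allclose(linearize(y), x))   # True: expansion undoes the compression
```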


QoS-, Energy- and Cost-efficient Resource Allocation for Cloud-based Interactive TV Applications

  • Kulupana, Gosala;Talagala, Dumidu S.;Arachchi, Hemantha Kodikara;Fernando, Anil
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 6, No. 3
    • /
    • pp.158-167
    • /
    • 2017
  • Internet-based social and interactive video applications have become major constituents of the envisaged applications for next-generation multimedia networks. However, inherently dynamic network conditions, together with varying user expectations, pose many challenges for resource allocation mechanisms for such applications. Yet, in addition to addressing these challenges, service providers must also consider how to mitigate their operational costs (e.g., energy costs, equipment costs) while satisfying the end-user quality of service (QoS) expectations. This paper proposes a heuristic solution to the problem, where the energy incurred by the applications, and the monetary costs associated with the service infrastructure, are minimized while simultaneously maximizing the average end-user QoS. We evaluate the performance of the proposed solution in terms of serving probability, i.e., the likelihood of being able to allocate resources to groups of users, the computation time of the resource allocation process, and the adaptability and sensitivity to dynamic network conditions. The proposed method demonstrates improvements in serving probability of up to 27%, in comparison with greedy resource allocation schemes, and a several-orders-of-magnitude reduction in computation time, compared to the linear programming approach, which significantly reduces the service-interrupted user percentage when operating under variable network conditions.
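A minimal sketch of the kind of joint objective this entry describes, in which each group of users is assigned to the candidate resource that best trades off end-user QoS against energy and monetary cost under a capacity limit. The greedy scoring rule, weights, and capacity model are illustrative assumptions and not the paper's heuristic.

```python
# Sketch: assign user groups to resources by a weighted QoS-minus-cost score.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    qos: float       # expected end-user QoS when served here (higher is better)
    energy: float    # energy cost of serving one group here
    cost: float      # monetary cost of serving one group here
    capacity: int    # remaining number of groups this resource can serve

def allocate(groups, candidates, w_qos=1.0, w_energy=0.3, w_cost=0.3):
    """Each group takes the best-scoring candidate that still has spare capacity."""
    assignment = {}
    for g in groups:
        feasible = [c for c in candidates if c.capacity > 0]
        if not feasible:
            assignment[g] = None          # group cannot be served
            continue
        best = max(feasible,
                   key=lambda c: w_qos * c.qos - w_energy * c.energy - w_cost * c.cost)
        best.capacity -= 1
        assignment[g] = best.name
    return assignment

if __name__ == "__main__":
    servers = [Candidate("edge", qos=0.9, energy=0.5, cost=0.6, capacity=2),
               Candidate("core", qos=0.7, energy=0.3, cost=0.2, capacity=3)]
    print(allocate(["g1", "g2", "g3", "g4"], servers))
```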

Speaker Identification Based on Incremental Learning Neural Network

  • Heo, Kwang-Seung;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 5, No. 1
    • /
    • pp.76-82
    • /
    • 2005
  • A speech signal carries various speaker-dependent features, which are extracted through speech signal processing and used by a speaker identification system to identify the speaker. In this paper, we propose a speaker identification system based on an incrementally learning neural network. Speech recorded through a microphone is segmented into frames of 1024 samples, and the signal is separated into voiced and unvoiced portions based on its energy. Twelve LPC cepstrum coefficients are then extracted and used as the input to the neural network, which identifies the speakers. The network is an MLP with 12 input nodes, 8 hidden nodes, and 4 output nodes, where each output node corresponds to one enrolled speaker; for example, the first output node is excited by the first speaker. Incremental learning begins when a new speaker is enrolled: the weights that have already been learned are retained, and only the new weights created by adding the new speaker are trained. This scheme overcomes a weakness of conventional neural networks, which must repeat training whenever a new speaker is added. The architecture of the network is extended with the number of speakers, so the system can learn without restricting the number of speakers.
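A minimal sketch of the incremental-learning idea with the architecture stated in the abstract (12 cepstral inputs, 8 hidden units, one output node per speaker): when a new speaker is enrolled, the existing weights are kept and only the newly added output weights are trained. The training data and update rule are illustrative, not the authors' setup.

```python
# Sketch: MLP speaker identifier that grows one output node per enrolled speaker and
# trains only the new node's weights (previously learned weights stay untouched).
import numpy as np

rng = np.random.default_rng(0)

class IncrementalSpeakerMLP:
    def __init__(self, n_in=12, n_hidden=8):
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))   # input -> hidden (fixed here)
        self.W2 = np.zeros((0, n_hidden))                         # hidden -> speakers (grows)

    def hidden(self, x):
        return np.tanh(self.W1 @ x)

    def identify(self, x):
        return int(np.argmax(self.W2 @ self.hidden(x)))

    def enroll(self, examples, epochs=200, lr=0.1):
        """Add one output node for a new speaker and train only its weights."""
        w_new = rng.normal(scale=0.1, size=self.W1.shape[0])
        for _ in range(epochs):
            for x in examples:
                h = self.hidden(x)
                y = np.tanh(w_new @ h)
                w_new += lr * (1.0 - y) * (1.0 - y ** 2) * h   # push own-speaker output toward 1
        self.W2 = np.vstack([self.W2, w_new])                  # existing rows remain unchanged

if __name__ == "__main__":
    net = IncrementalSpeakerMLP()
    spk0 = [rng.normal(loc=2.0, size=12) for _ in range(20)]   # toy features for speaker 0
    spk1 = [rng.normal(loc=-2.0, size=12) for _ in range(20)]  # toy features for speaker 1
    net.enroll(spk0)
    net.enroll(spk1)
    print(net.identify(spk0[0]), net.identify(spk1[0]))        # speaker indices
```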

TMS320C30을 이용한 단일채널 적응잡음제거기 구현 (Implementation of the single channel adaptive noise canceller using TMS320C30)

  • 정성윤;우세정;손창희;배건성
    • 음성과학
    • /
    • Vol. 8, No. 2
    • /
    • pp.73-81
    • /
    • 2001
  • In this paper, we focus on the real-time implementation of a single-channel adaptive noise canceller (ANC) using a TMS320C30 EVM board. The implemented canceller is based on a reference paper [1], in which the recursive average magnitude difference function (AMDF) is used to obtain a properly delayed version of the input speech, on a sample basis, as the reference signal, and the filter is adapted with the normalized least mean square (NLMS) algorithm. To verify the real-time implementation, we measured the processing time of the ANC and the enhancement ratio for various signal-to-noise ratios (SNRs). Experimental results show that processing a 32 ms speech segment, with the delay estimated every 10 samples, takes about 26.3 ms, and that the implemented system achieves almost the same performance as reported in [1].
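A minimal sketch of the single-channel ANC structure described here: a delayed copy of the input, with the delay chosen by an average magnitude difference function (AMDF), serves as the reference for an NLMS filter so that the filter output tracks the periodic (speech) component. The frame size, filter order, step size, and the non-recursive AMDF are simplifications of the referenced scheme, not its exact parameters.

```python
# Sketch: AMDF-based delay estimation feeding an NLMS adaptive filter (line-enhancer form).
import numpy as np

def amdf_delay(frame, min_lag=40, max_lag=320):
    """Pick the lag that minimizes the average magnitude difference over the frame."""
    amdf = [np.mean(np.abs(frame[lag:] - frame[:-lag])) for lag in range(min_lag, max_lag)]
    return min_lag + int(np.argmin(amdf))

def nlms_anc(x, order=32, mu=0.5, eps=1e-6, frame_len=512):
    """Single-channel ANC: reference = input delayed by the AMDF lag, adapted with NLMS."""
    w = np.zeros(order)
    out = np.zeros_like(x)
    delay = order                         # initial guess before the first frame is analysed
    for n in range(order, len(x)):
        if n % frame_len == 0 and n >= frame_len:
            delay = amdf_delay(x[n - frame_len:n])
        d_start = n - delay
        if d_start - order < 0:
            out[n] = x[n]                 # not enough history yet: pass through
            continue
        ref = x[d_start:d_start - order:-1]   # delayed reference vector
        y = w @ ref                            # periodic (speech) estimate
        e = x[n] - y                           # residual (noise estimate)
        w += mu * e * ref / (ref @ ref + eps)  # NLMS update
        out[n] = y
    return out

if __name__ == "__main__":
    fs = 8000
    t = np.arange(2 * fs) / fs
    clean = np.sin(2 * np.pi * 200 * t)                               # periodic "speech"
    noisy = clean + 0.3 * np.random.default_rng(2).standard_normal(len(t))
    enhanced = nlms_anc(noisy)
    # MSE against the clean tone before and after enhancement
    print(np.mean((noisy - clean) ** 2), np.mean((enhanced - clean) ** 2))
```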


Emotion Recognition based on Multiple Modalities

  • Kim, Dong-Ju;Lee, Hyeon-Gu;Hong, Kwang-Seok
    • 융합신호처리학회논문지
    • /
    • Vol. 12, No. 4
    • /
    • pp.228-236
    • /
    • 2011
  • Emotion recognition plays an important role in research on human-computer interaction, as it allows more natural and human-like communication between humans and computers. Most previous work on emotion recognition focused on extracting emotions from face, speech, or EEG information separately. Therefore, this paper presents a novel approach that combines face, speech, and EEG to recognize human emotion. The individual matching scores obtained from face, speech, and EEG are combined using a weighted summation, and the fused score is used to classify the emotion. In the experiments, the proposed approach gives an improvement of more than 18.64% over the most successful unimodal approach and also outperforms approaches that integrate only two modalities. These results confirm that the proposed approach achieves a significant performance improvement and is very effective.
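A minimal sketch of the weighted-summation fusion described here: per-class matching scores from the three modalities are combined with modality weights and the fused score decides the emotion. The class labels, scores, and weights below are illustrative values, not those of the paper.

```python
# Sketch: weighted-sum fusion of per-class matching scores from face, speech and EEG.
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry"]

def fuse_scores(face, speech, eeg, weights=(0.4, 0.4, 0.2)):
    """Weighted sum of per-class scores; the argmax of the fused score is the decision."""
    fused = (weights[0] * np.asarray(face)
             + weights[1] * np.asarray(speech)
             + weights[2] * np.asarray(eeg))
    return EMOTIONS[int(np.argmax(fused))], fused

if __name__ == "__main__":
    face_scores   = [0.1, 0.6, 0.2, 0.1]
    speech_scores = [0.2, 0.3, 0.4, 0.1]
    eeg_scores    = [0.3, 0.3, 0.2, 0.2]
    print(fuse_scores(face_scores, speech_scores, eeg_scores))
```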

Interactive Rehabilitation Support System for Dementia Patients

  • Kim, Sung-Ill
    • 융합신호처리학회논문지
    • /
    • Vol. 11, No. 3
    • /
    • pp.221-225
    • /
    • 2010
  • This paper presents a preliminary study of an interactive rehabilitation support system for both dementia patients and their caregivers, the goal of which is to improve the quality of life (QOL) of patients suffering from dementia through virtual interaction. To achieve the virtual interaction, three kinds of recognition modules are studied: speech, facial image, and pen-mouse gesture. The results of practical tests and questionnaire surveys show that the proposed system needs further improvement, especially in speech recognition and in the user interface, before real-world application. The surveys also revealed that pen-mouse gesture recognition, as one possible interactive aid, shows promise for compensating for the weaknesses of speech recognition.

음성신호의 진폭분포를 이용한 유/무성음 검출에 대한 연구 (The Magnitude Distribution Method of U/V Decision)

  • 배성근
    • 한국음향학회:학술대회논문집
    • /
    • 한국음향학회 1993 Conference Proceedings, Vol. 12, No. 1
    • /
    • pp.249-252
    • /
    • 1993
  • In speech signal processing, accurate voiced/unvoiced detection is important for robust word recognition and analysis. The proposed algorithm is based on the magnitude distribution (MD) within each frame of the speech signal and does not require statistical information about either the signal or the background noise to make the voiced/unvoiced decision. This paper presents a method for estimating the characteristics of the magnitude distribution from noisy speech, as well as for estimating the optimal threshold for the voiced/unvoiced decision based on the MD. The performance of the detector is evaluated and compared with that of classifiers reported in other work.
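A minimal sketch of a magnitude-distribution style voiced/unvoiced decision: the shape of each frame's magnitude distribution is summarized by one statistic and compared against a threshold. The specific statistic (fraction of samples above 30% of the frame peak) and the threshold are illustrative assumptions, not the paper's MD measure.

```python
# Sketch: per-frame magnitude-distribution statistic thresholded for a V/UV decision.
import numpy as np

def md_statistic(frame):
    """Fraction of samples whose magnitude exceeds 30% of the frame peak magnitude."""
    mags = np.abs(frame)
    peak = mags.max() + 1e-12
    return np.mean(mags > 0.3 * peak)

def voiced_unvoiced(frames, threshold=0.5):
    """Return True (voiced) when the MD statistic exceeds the threshold."""
    return np.array([md_statistic(f) > threshold for f in frames])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 512
    voiced   = np.sin(2 * np.pi * 120 * np.arange(n) / 8000)   # periodic, heavy high-magnitude tail
    unvoiced = 0.1 * rng.standard_normal(n)                     # noise-like, concentrated near zero
    print(voiced_unvoiced([voiced, unvoiced]))                  # expected: [ True False]
```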


DSP Processor(TMS320C32)를 이용한 화자인증 보안시스템의 구현 (Implementation of Speaker Verification Security System Using DSP Processor(TMS320C32))

  • 함영준;권혁재;최수영;정익주
    • 산업기술연구
    • /
    • Vol. 21, No. B
    • /
    • pp.107-116
    • /
    • 2001
  • When a person communicates with others, the speech signal carries various kinds of information: linguistic content, speaker identity, emotional state, health condition, utterance environment, and so on. The technologies that process such speech for use in daily life are collectively called speech technology, and the part of it that exploits the speaker information contained in speech is known as speaker recognition. DTW (Dynamic Time Warping) is a speaker recognition technique that uses dynamic programming to measure the degree of similarity between the pattern of a reference speech signal and an input speech signal. In this study, we implement DTW on a TMS320C32 DSP processor and construct a speaker verification security system.
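A minimal sketch of DTW-based matching as described here: dynamic programming aligns an enrollment feature sequence with an input sequence, and the length-normalized accumulated distance is compared against a decision threshold. The feature vectors, local cost, and threshold are illustrative assumptions.

```python
# Sketch: DTW distance between feature sequences and a threshold-based verification decision.
import numpy as np

def dtw_distance(ref, test):
    """Accumulated DTW distance between two feature sequences (Euclidean local cost)."""
    n, m = len(ref), len(test)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(ref[i - 1] - test[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)          # length-normalized distance

def verify_speaker(ref, test, threshold=1.0):
    """Accept the claimed identity when the normalized DTW distance is below the threshold."""
    d = dtw_distance(ref, test)
    return d, d < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    enrolled = rng.normal(size=(40, 12))                     # e.g. 12-dim cepstral frames
    same     = enrolled + 0.1 * rng.normal(size=(40, 12))    # same speaker, slight variation
    impostor = rng.normal(size=(40, 12))                     # different speaker
    print(verify_speaker(enrolled, same))                    # small distance -> accept
    print(verify_speaker(enrolled, impostor))                # large distance -> reject
```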
