• Title/Abstract/Keywords: digital speech communication

91 search results (processing time: 0.019 s)

DSP16210을 이용한 8kbps CS-ACELP 의 실시간 구현 (Real-Time Implementation of the 8 kbps CS-ACELP)

  • 박지현;박성일;정원국;임병근
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 1998년도 추계종합학술대회 논문집
    • /
    • pp.1211-1214
    • /
    • 1998
  • A real-time implementation of the Conjugate-Structure Algebraic CELP (CS-ACELP) coder is presented. ITU-T Study Group (SG) 15 has standardized the CS-ACELP speech coding algorithm as G.729. The real-time implementation is achieved on the 16-bit fixed-point DSP16210 digital signal processor (DSP) from Lucent Technologies. The speech coder is implemented in a bit-exact manner using the fixed-point CS-ACELP C source that is part of the G.729 standard. To provide a multi-channel vocoder solution for digital communication systems, we minimize the complexity (e.g., MIPS, ROM, RAM) of CS-ACELP. The resulting coder requires 15.5 MIPS, which allows four CS-ACELP channels to be processed on a single DSP16210.

  • PDF
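The complexity figure in the abstract above implies a straightforward channel-capacity calculation; a minimal sketch in Python (the 62 MIPS DSP cycle budget is an assumption for illustration, not a figure from the abstract):

```python
def max_channels(dsp_mips: float, coder_mips: float) -> int:
    """How many coder channels fit in a DSP's cycle budget."""
    return int(dsp_mips // coder_mips)

# 15.5 MIPS per CS-ACELP channel; a hypothetical 62 MIPS budget
# yields the four channels reported in the abstract.
print(max_channels(62.0, 15.5))  # -> 4
```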

얼굴영상과 음성을 이용한 멀티모달 감정인식 (Multimodal Emotion Recognition using Face Image and Speech)

  • 이현구;김동주
    • 디지털산업정보학회논문지
    • /
    • Vol. 8, No. 1
    • /
    • pp.29-40
    • /
    • 2012
  • Endowing a machine with emotional intelligence has become a research issue of growing importance in human-computer interaction. Emotion recognition technology therefore plays an important role in this area, enabling more natural, human-like communication between humans and computers. In this paper, we propose a multimodal emotion recognition system that uses face and speech to improve recognition performance. For face-based emotion recognition, a distance measure is computed by applying 2D-PCA to MCS-LBP images with a nearest-neighbor classifier; for speech-based emotion recognition, a likelihood measure is obtained from a Gaussian mixture model built on pitch and mel-frequency cepstral coefficient features. The individual matching scores from face and speech are combined by weighted summation, and the fused score is used to classify the emotion. Experimental results show that the proposed method improves recognition accuracy by about 11.25% to 19.75% compared with the uni-modal approaches, confirming that the fusion achieves a significant and effective performance improvement.
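The weighted-summation fusion described above can be sketched in a few lines; the weight value and scores below are hypothetical, and both modality scores are assumed normalized to [0, 1]:

```python
def fuse_scores(face_score: float, speech_score: float, w_face: float = 0.6) -> float:
    """Weighted-summation fusion of two normalized matching scores.

    The weight w_face is a hypothetical value, not taken from the paper.
    """
    return w_face * face_score + (1.0 - w_face) * speech_score

def classify(emotion_scores: dict) -> str:
    """Pick the emotion with the highest fused score."""
    return max(emotion_scores, key=emotion_scores.get)

# hypothetical (face, speech) scores per candidate emotion
fused = {e: fuse_scores(f, s) for e, (f, s) in
         {"happy": (0.8, 0.6), "sad": (0.3, 0.4)}.items()}
print(classify(fused))  # -> happy
```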

Effect of Digital Noise Reduction of Hearing Aids on Music and Speech Perception

  • Kim, Hyo Jeong;Lee, Jae Hee;Shim, Hyun Joon
    • Journal of Audiology & Otology
    • /
    • Vol. 24, No. 4
    • /
    • pp.180-190
    • /
    • 2020
  • Background and Objectives: Although many studies have evaluated the effect of the digital noise reduction (DNR) algorithm of hearing aids (HAs) on speech recognition, there are few studies on the effect of DNR on music perception. Therefore, we aimed to evaluate the effect of DNR on music, in addition to speech perception, using objective and subjective measurements. Subjects and Methods: Sixteen HA users participated in this study (58.00±10.44 years; 3 males and 13 females). The objective assessment of speech and music perception was based on the Korean version of the Clinical Assessment of Music Perception test and on word and sentence recognition scores; for the subjective assessment, quality ratings of speech and music as well as self-reported HA benefits were evaluated. Results: DNR of HAs conferred no improvement on the objective assessments of speech and music perception. Pitch discrimination at 262 Hz in the DNR-off condition was better than in the unaided condition (p=0.024); however, the unaided and DNR-on conditions did not differ. In the Korean music background questionnaire, responses regarding ease of communication were better in the DNR-on condition than in the DNR-off condition (p=0.029). Conclusions: Speech and music perception and sound quality did not improve with DNR activated. However, DNR positively influenced listeners' subjective listening comfort, and the DNR-off condition in HAs may be beneficial for pitch discrimination at some frequencies.

디지털 음성방식의 성능 비교에 대한 연구 (A Study on the Comparison of Digital Speech Coding Performance)

  • 배철수
    • 한국통신학회논문지
    • /
    • Vol. 17, No. 8
    • /
    • pp.881-890
    • /
    • 1992
  • As basic research toward building a speech-quality evaluation model for speech systems and communication networks, this paper compares several objective evaluation measures against subjective ones in order to obtain stable objective scores that avoid the problems arising in the subjective evaluation of speech coding, and examines which objective measures agree best with the subjective scores.

  • PDF
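Matching objective measures to subjective scores, as the study above does, typically comes down to a correlation between the two; a minimal sketch with hypothetical per-coder data (the measure name and all values below are illustrative, not from the paper):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between an objective measure and subjective MOS."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical per-coder objective scores (e.g. segmental SNR) vs MOS
seg_snr = [12.0, 18.5, 22.0, 25.5]
mos = [2.8, 3.4, 3.9, 4.2]
print(round(pearson(seg_snr, mos), 3))
```

A measure whose correlation with MOS is close to 1 is a good candidate for stable objective evaluation.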

Adaptive Encoding of Fixed Codebook in CELP Coders

  • Kim, Hong-Kook
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 16, No. 3E
    • /
    • pp.44-49
    • /
    • 1997
  • In this paper, we propose an adaptive encoding method for the fixed codebook in CELP coders and implement an adaptive fixed-code-excited linear prediction (AF-CELP) speech coder. AF-CELP exploits the fact that the fixed-codebook contribution to the speech signal is periodic, like the adaptive-codebook (or pitch-filter) contribution. By modeling the fixed codebook with the pitch lag and gain from the adaptive codebook, AF-CELP can be implemented at low bit rates with low complexity. Listening tests show that a 6.4 kbit/s AF-CELP achieves quality comparable to the 8 kbit/s CS-ACELP under background noise conditions.

  • PDF
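The long-term (pitch) prediction that AF-CELP reuses to model the fixed codebook can be sketched as a scaled copy of the excitation one pitch period back; the lag and gain values here are illustrative only:

```python
def pitch_prediction(excitation, lag, gain, length):
    """Extend an excitation buffer by repeating it at the pitch lag.

    This mirrors the adaptive-codebook (long-term predictor) contribution:
    each new sample is a scaled copy of the sample one pitch period earlier.
    """
    out = list(excitation)
    for _ in range(length):
        out.append(gain * out[-lag])
    return out[len(excitation):]

past = [0.0, 1.0, 0.0, -1.0]   # toy periodic excitation, period 4
print(pitch_prediction(past, lag=4, gain=0.9, length=4))
# -> [0.0, 0.9, 0.0, -0.9]
```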

히어 캠 임베디드 플랫폼 설계 (HearCAM Embedded Platform Design)

  • 홍선학;조경순
    • 디지털산업정보학회논문지
    • /
    • Vol. 10, No. 4
    • /
    • pp.79-87
    • /
    • 2014
  • In this paper, we implemented the HearCAM platform on the Raspberry Pi Model B+, an open-source platform. The Raspberry Pi B+ consists of a dual step-down (buck) power supply with polarity-protection and hot-swap-protection circuitry, a Broadcom BCM2835 SoC running at 700 MHz with 512 MB of RAM soldered on top of the Broadcom chip, and a Pi camera serial connector. We used the Google speech recognition engine to recognize voice characteristics, implemented pattern matching with OpenCV, and extended the platform with speech output using SVOX TTS (text-to-speech), so that matching results are spoken back to the user. The HearCAM thus identifies the voice and pattern characteristics of target images scanned with the Pi camera while gathering temperature sensor data in an IoT environment. We implemented speech recognition, pattern matching, and temperature-sensor data logging over Wi-Fi wireless communication, and we designed and fabricated the HearCAM enclosure with 3D printing technology.

심층 신경망을 이용한 음성 신호의 부호화 이력 검출 (Coding History Detection of Speech Signal using Deep Neural Network)

  • 조효진;장원;신성현;박호종
    • 방송공학회논문지
    • /
    • Vol. 23, No. 1
    • /
    • pp.86-92
    • /
    • 2018
  • This paper proposes a method for detecting the coding history of a digital speech signal. Speech is encoded to reduce the data volume when it is transmitted or stored digitally. Given a speech waveform, it is therefore necessary to determine whether the signal is original or has been encoded and, if encoded, how many times; this is coding-history detection. We propose a coding-history detection method that distinguishes original, singly encoded, and doubly encoded signals for the AMR coder at the 12.2 kbps bit rate. The proposed method extracts speech-specific feature vectors from the input signal and models them with a deep neural network. We confirmed that the proposed feature vector provides better coding-history detection performance than feature vectors extracted from a conventional spectrogram.

이동통신망에서 삼자회의를 위한 음성 부호화기의 성능에 관한 연구 (A Comparative Performance Study of Speech Coders for Three-Way Conferencing in Digital Mobile Communication Networks)

  • 이미숙;이윤근;김기철;이황수;조위덕
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 14, No. 1E
    • /
    • pp.30-38
    • /
    • 1995
  • This paper evaluates the performance of speech coders in signal-addition-based three-way conferencing over digital mobile communication networks. Signal addition, in which the mixed voices of two parties are delivered to the remaining conference participant, is the most natural approach to three-way conferencing, but there is as yet no effective way to encode the mixed voices of two speakers. To apply signal addition to three-way conferencing, we implemented and evaluated QCELP, VSELP, and RPE-LTP vocoders. Since conventional speech-quality assessment methods cannot be applied directly to speech in which two speakers' voices are mixed, we also propose two subjective evaluation methods: sentence discrimination (SD) and a modified DMOS (MDMOS). Experimental results show that the output quality of the VSELP vocoder is better than that of the other two vocoders.

  • PDF
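The signal-addition mixing evaluated above — each participant hears the sum of the other two decoded streams — can be sketched for 16-bit PCM as follows (hard clipping at the sample range is one common choice; the paper does not specify the overflow handling):

```python
def mix_for_conference(a, b, peak=32767):
    """Signal-addition three-way mixing: sum two decoded 16-bit PCM
    streams sample by sample, hard-clipped to the 16-bit range."""
    return [max(-peak - 1, min(peak, x + y)) for x, y in zip(a, b)]

print(mix_for_conference([1000, 30000, -20000], [500, 5000, -20000]))
# -> [1500, 32767, -32768]
```

The mixed signal is then re-encoded for the third party, which is exactly the step whose quality the study measures.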

음성 패킷을 이용한 채널의 에러 정보 전달 (Transmission of Channel Error Information over Voice Packet)

  • 박호종;차성호
    • 한국음향학회지
    • /
    • Vol. 21, No. 4
    • /
    • pp.394-400
    • /
    • 2002
  • In digital speech communication, if the transmission error rate of the outgoing voice packets is known, overall communication quality can be improved by adapting the compression to the channel conditions. However, current mobile and Internet communication protocols do not report the transmission error information of voice packets. To address this, this paper proposes a method that embeds channel transmission-error information into voice packets and delivers it in real time. The proposed embedding method exploits the correlation among the pulse positions of the ACELP (algebraic code-excited linear prediction) codevector, which prevents quality degradation from the embedded side information and reduces the false-detection rate. The performance of the proposed method was measured on a variety of speech data, confirming that it causes almost no degradation in speech quality while achieving satisfactory detection capability and false-detection rates.
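As an illustration of hiding side information in ACELP pulse positions, a toy parity-based scheme is sketched below. This is a simplification for illustration only: the paper's method exploits correlations among pulse positions rather than a parity rule, and the track positions used here are hypothetical.

```python
def embed_bit(pulse_positions, bit):
    """Hide one bit in the parity of the sum of pulse positions by
    nudging a single pulse one step when the parity is wrong."""
    positions = list(pulse_positions)
    if sum(positions) % 2 != bit:
        positions[-1] += 1   # minimal perturbation of one pulse
    return positions

def extract_bit(pulse_positions):
    """Recover the embedded bit from the pulse-position parity."""
    return sum(pulse_positions) % 2

coded = embed_bit([3, 10, 24, 37], 1)
print(extract_bit(coded))  # -> 1
```

The design goal mirrors the paper's: the perturbation is kept small so the decoded speech is almost unchanged, while the receiver can still detect the side information reliably.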