• Title/Abstract/Keyword: tone recognition

Search results: 73 (processing time: 0.029 s)

A Relationship of Tone, Consonant, and Speech Perception in Audiological Diagnosis

  • Han, Woo-Jae;Allen, Jont B.
    • 한국음향학회지
    • /
    • Vol. 31, No. 5
    • /
    • pp.298-308
    • /
    • 2012
  • This study was designed to examine the phoneme recognition errors of hearing-impaired (HI) listeners on a consonant-by-consonant basis, to show (1) how each HI ear perceives individual consonants differently and (2) how standard clinical measurements (i.e., using tones and words) fail to predict these differences. Sixteen English consonant-vowel (CV) syllables at six signal-to-noise ratios in speech-weighted noise were presented at the most comfortable level to ears with mild-to-moderate sensorineural hearing loss. The findings were as follows: (1) Individual HI listeners with symmetrical pure-tone thresholds showed different consonant-loss profiles (CLPs) (i.e., over the set of 16 English consonants, the likelihood of misperceiving each consonant) in the right and left ears. (2) A similar result was found across subjects: paired ears of different HI individuals with identical pure-tone thresholds presented different CLPs. (3) Paired HI ears having the same averaged consonant score demonstrated completely different CLPs. We conclude that the standard clinical measurements are limited in their ability to predict the extent to which speech perception is degraded in HI ears; they are thus a necessary, but not a sufficient, measurement of HI speech perception. This suggests that the CV measurement would be a useful clinical tool.
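The consonant-loss profile described above (the likelihood of misperceiving each presented consonant) can be sketched as a simple computation over a confusion-count table. The function and the counts below are illustrative assumptions, not data or code from the study:

```python
def consonant_loss_profile(confusions):
    """Compute a per-consonant error probability from confusion counts.

    confusions: dict mapping each presented consonant to a dict of
    {responded_consonant: count}. The loss for a consonant is the
    fraction of trials in which it was not identified correctly.
    """
    profile = {}
    for presented, responses in confusions.items():
        total = sum(responses.values())
        correct = responses.get(presented, 0)
        profile[presented] = 1.0 - correct / total if total else 0.0
    return profile

# Illustrative counts for two consonants (invented, not the paper's data):
counts = {
    "p": {"p": 18, "t": 2},         # /p/ heard as /t/ twice in 20 trials
    "t": {"t": 10, "p": 5, "k": 5}, # /t/ frequently confused
}
print(consonant_loss_profile(counts))  # p ≈ 0.10, t ≈ 0.50
```

Comparing such profiles between two ears with identical pure-tone thresholds is the paper's key diagnostic idea: equal average scores can hide completely different per-consonant error patterns.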

Traffic flow measurement system using image processing

  • Hara, Takaaki;Akizuki, Kageo;Kawamura, Mamoru
    • 제어로봇시스템학회: Conference Proceedings
    • /
    • 제어로봇시스템학회 1996: Proceedings of the 11th Korea Automatic Control Conference (KACC); Pohang, Korea; 24-26 Oct. 1996
    • /
    • pp.426-439
    • /
    • 1996
  • In this paper, we propose a simple algorithm that counts passing cars using an image-processing sensor on digital black-and-white images with 256 tone levels. Shadows are among the most troublesome factors in image processing: by differencing tone levels alone, we cannot discriminate between the body of a car and its shadow. In our proposed algorithm, the shadow area is excluded by recognizing the position of each traffic lane. For real-time operation and simple calculation, two scan lines of tone levels are extracted, and the presence of cars is recognized from them. In an experimental application on a highway, the recognition rate of the real-time operation was more than 94%.
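The scan-line idea can be sketched as follows: compare the current tone levels along a fixed line against a background reference, and only count difference pixels inside known lane bounds so that shadow spill outside a lane is ignored. All thresholds, lane bounds, and pixel values here are invented for illustration; the paper's actual parameters are not given in the abstract:

```python
def detect_cars_on_line(line, background, lane_bounds, threshold=40):
    """Flag lanes occupied by a car on one horizontal scan line.

    line, background: sequences of 8-bit tone levels (0-255) along the line.
    lane_bounds: [(start, end), ...] pixel ranges of each lane; pixels
    outside these ranges (e.g. shadow spill) are ignored.
    A lane counts as occupied when more than half of its pixels differ
    strongly from the background reference.
    """
    occupied = []
    for start, end in lane_bounds:
        diffs = [abs(line[i] - background[i]) for i in range(start, end)]
        changed = sum(d > threshold for d in diffs)
        occupied.append(changed > (end - start) // 2)
    return occupied

# Illustrative: a dark car body over a bright road surface in lane 0
bg = [200] * 20
cur = [60] * 10 + [200] * 10
print(detect_cars_on_line(cur, bg, [(0, 10), (10, 20)]))  # [True, False]
```

Counting cars then reduces to tracking occupied/free transitions per lane across successive frames.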


말소리 단어 재인 시 높낮이와 장단의 역할: 서울 방언과 대구 방언의 비교 (The Role of Pitch and Length in Spoken Word Recognition: Differences between Seoul and Daegu Dialects)

  • 이윤형;박현수
    • 말소리와 음성과학
    • /
    • Vol. 1, No. 2
    • /
    • pp.85-94
    • /
    • 2009
  • The purpose of this study was to see the effects of pitch and length patterns on spoken word recognition. In Experiment 1, a syllable monitoring task was used to see the effects of pitch and length on the pre-lexical level of spoken word recognition. For both Seoul dialect speakers and Daegu dialect speakers, pitch and length did not affect the syllable detection processes. This result implies that there is little effect of pitch and length in pre-lexical processing. In Experiment 2, a lexical decision task was used to see the effect of pitch and length on the lexical access level of spoken word recognition. In this experiment, word frequency (low and high) as well as pitch and length was manipulated. The results showed that pitch and length information did not play an important role for Seoul dialect speakers, but that it did affect lexical decision processing for Daegu dialect speakers. Pitch and length seem to affect lexical access during the word recognition process of Daegu dialect speakers.


음성의 감성요소 추출을 통한 감성 인식 시스템 (The Emotion Recognition System through The Extraction of Emotional Components from Speech)

  • 박창현;심귀보
    • 제어로봇시스템학회논문지
    • /
    • Vol. 10, No. 9
    • /
    • pp.763-770
    • /
    • 2004
  • The key issues in emotion recognition from speech are feature extraction and pattern classification. Features should carry the information essential for classifying emotions, and feature selection is needed to decompose the components of speech and analyze the relation between features and emotions. In particular, the pitch of speech carries much emotional information. Accordingly, this paper investigates the relation of emotions to features such as loudness and pitch, and classifies emotions using statistics of the collected data. This paper deals with a method of recognizing emotion from sound, whose most important emotional component is tone; the inference ability of the brain also takes part in emotion recognition. We empirically identify the emotional components of speech, experiment on emotion recognition, and propose a recognition method using these emotional components and transition probabilities.
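The statistics-based classification step can be sketched as: summarize each utterance by a few emotional cues (here mean pitch, pitch spread, and mean loudness), then assign the emotion class whose stored mean feature vector is nearest. The features, class statistics, and numbers below are assumptions for illustration, not the paper's model:

```python
import math

def feature_stats(pitch_track, energy_track):
    """Summarize a speech sample by mean/spread of pitch (Hz) and
    mean loudness; a rough sketch of 'statistics of collected data'."""
    n = len(pitch_track)
    mean_f0 = sum(pitch_track) / n
    std_f0 = math.sqrt(sum((p - mean_f0) ** 2 for p in pitch_track) / n)
    mean_en = sum(energy_track) / len(energy_track)
    return (mean_f0, std_f0, mean_en)

def classify(sample, class_means):
    """Assign the emotion whose mean feature vector is nearest
    (Euclidean distance) to the sample's features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(class_means, key=lambda c: dist(sample, class_means[c]))

# Illustrative class statistics (invented numbers):
means = {"angry": (250.0, 60.0, 0.8), "neutral": (140.0, 15.0, 0.4)}
f = feature_stats([240, 260, 255, 245], [0.7, 0.9, 0.8, 0.75])
print(classify(f, means))  # angry
```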

추론 능력에 기반한 음성으로부터의 감성 인식 (Inference Ability Based Emotion Recognition From Speech)

  • 박창현;심귀보
    • 대한전기학회: Conference Proceedings
    • /
    • 대한전기학회 2004 Symposium Proceedings, Information and Control Division
    • /
    • pp.123-125
    • /
    • 2004
  • Recently, interest in user-friendly machines has been growing, and emotion is one of the most important conditions for being familiar to people. A machine uses sound or images to express or recognize emotion. This paper deals with a method of recognizing emotion from sound, whose most important emotional component is tone; the inference ability of the brain also takes part in emotion recognition. We empirically identify the emotional components of speech, experiment on emotion recognition, and propose a recognition method using these emotional components and transition probabilities.


An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim;Won Kuk Park;Il Young Choi
    • Asia pacific journal of information systems
    • /
    • Vol. 27, No. 1
    • /
    • pp.38-53
    • /
    • 2017
  • As sensor technologies and image-processing technologies make it easy to collect information on users' behavior, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among others. In particular, many multimodal studies of facial and body expressions have used normal cameras, and thus worked with a limited amount of information, because normal cameras generally produce only two-dimensional images. In the present research, we propose an artificial-neural-network-based model that uses a high-definition webcam and a Kinect to recognize users' emotions from facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory environment. The results of this research will be helpful for the wide use of emotion recognition models in advertisements, exhibitions, and interactive shows.

음성인식을 이용한 자동 호 분류 철도 예약 시스템 (A Train Ticket Reservation Aid System Using Automated Call Routing Technology Based on Speech Recognition)

  • 심유진;김재인;구명완
    • 대한음성학회지:말소리
    • /
    • No. 52
    • /
    • pp.161-169
    • /
    • 2004
  • This paper describes automated call routing for a train ticket reservation aid system based on speech recognition. We focus on the task of automatically routing telephone calls based on the user's fluently spoken responses instead of the touch-tone menus of an interactive voice response system. A vector-based call-routing algorithm is investigated, and a mapping table for key terms is suggested. The Korail database collected by KT is used for the call-routing experiments. We evaluate call-classification experiments on transcribed text from the Korail database. With small training data, an average call-routing error reduction of 14% is observed when the mapping table is used.
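The vector-based routing idea can be sketched as: represent each destination by a vector of key-term counts, represent the caller's utterance the same way, and route to the destination with the highest cosine similarity. The routes and terms below are illustrative stand-ins, not the paper's Korail mapping table:

```python
import math

def route_call(utterance, routes):
    """Route an utterance to the destination whose key-term vector
    it overlaps most, by cosine similarity over term counts."""
    def cosine(a, b):
        dot = sum(a.get(t, 0) * b.get(t, 0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    query = {}
    for w in utterance.lower().split():
        query[w] = query.get(w, 0) + 1
    return max(routes, key=lambda r: cosine(query, routes[r]))

# Illustrative destination vectors (invented key terms and weights):
routes = {
    "reservation": {"book": 2, "ticket": 2, "seat": 1},
    "schedule":    {"time": 2, "departure": 2, "arrive": 1},
}
print(route_call("I want to book a ticket", routes))  # reservation
```

The mapping table in the paper plays a similar role to these key-term vectors: it maps recognized terms onto routing destinations, which is where the reported 14% error reduction comes from.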


음성으로부터 감성인식 요소 분석 (Analyzing the element of emotion recognition from speech)

  • 박창현;심재윤;이동욱;심귀보
    • 한국지능시스템학회: Conference Proceedings
    • /
    • 한국퍼지및지능시스템학회 2001 Fall Conference Proceedings
    • /
    • pp.199-202
    • /
    • 2001
  • In general, the elements from which human emotion can be recognized in a speech signal are (1) the words used in the utterance, (2) tone, (3) the pitch of the speech signal, (4) formant frequencies, (5) speech speed, and (6) voice quality. Since people naturally perceive emotion through tone, words, speed, and voice quality rather than through analytic elements such as frequency, the former can obviously serve as important factors for classifying emotion, and previous work has mainly used them; for a machine implementation, however, it helps to also be able to use the more engineering-oriented formant frequencies. This research therefore aims to implement an emotion recognition system using pitch, formants, and speech speed extracted from the speech signal. As a first step, this paper identifies the distinctive characteristics of two extreme emotions, based on words blurted out in anger and simple expressions used when happy.


한국인을 위한 중국어 성조 평가 시스템 (Chinese Tone Evaluation System for Korean Learners)

  • 김무중;김효숙;김선주;강효원;권철홍
    • 대한음성학회: Conference Proceedings
    • /
    • 대한음성학회 2005 Spring Conference Proceedings
    • /
    • pp.41-44
    • /
    • 2005
  • This study concerns a Chinese tone evaluation system for Korean learners using speech technology. The Chinese pronunciation system consists of initials, finals, and tones. Initials and finals are at the segmental level, while tones are suprasegmental, so different methods can be used for assessing Korean learners' Chinese. Unlike segmental-level recognition methods, we chose a pattern-matching method for evaluating Chinese tones. First, we estimated each speaker's own pitch range and produced standard tonal patterns scaled to that range; we then compared learners' input patterns against the reference patterns.
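The range-normalization-plus-pattern-matching step can be sketched as: map the learner's F0 contour into their own pitch range, then measure the distance to a reference tonal pattern. The reference contour, the learner's pitch values, and the distance measure below are invented for illustration; the paper's actual matcher is not specified in the abstract:

```python
def normalize(contour, lo, hi):
    """Map raw F0 values (Hz) into 0-1 using the speaker's own pitch range."""
    return [(f - lo) / (hi - lo) for f in contour]

def tone_distance(learner, reference):
    """Mean absolute difference between two normalized contours of
    equal length; a crude stand-in for the paper's pattern matcher."""
    return sum(abs(a - b) for a, b in zip(learner, reference)) / len(reference)

# Mandarin tone 1 (high level) as a flat high reference, vs. a learner
# who produces a falling contour instead:
ref_tone1 = [0.9, 0.9, 0.9, 0.9]
learner = normalize([300, 260, 220, 180], lo=100, hi=320)  # falling pitch
print(tone_distance(learner, ref_tone1))  # large distance: poor tone 1
```

Normalizing by the speaker's own range is what lets the same reference pattern be used for learners with very different absolute pitch.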


ToBI Based Prosodic Representation of the Kyungnam Dialect of Korean

  • Cho, Yong-Hyung
    • 음성과학
    • /
    • Vol. 2
    • /
    • pp.159-172
    • /
    • 1997
  • This paper proposes a prosodic representation system for the Kyungnam dialect of Korean, based on the ToBI system. In this system, diverse intonation patterns are transcribed on four parallel tiers: a tone tier, a break-index tier, an orthographic tier, and a miscellaneous tier. The tone tier employs pitch accents, phrase accents, and boundary tones marked with diacritics in order to represent various pitch events. The break-index tier uses five break indices, numbered from 0 to 4, to represent degrees of juncture in speech by associating each inter-word position with a break index; each break index marks a boundary of some kind of constituent. This system can contribute not only to a more detailed theory connecting prosody, syntax, and intonation, but also to current text-to-speech synthesis approaches, speech recognition, and other quantitative computational modeling.
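The four-tier layout can be sketched as a simple aligned data structure: one label per word on each tier, with break indices 0-4 marking juncture strength at each inter-word position. The example labels below are invented for illustration, not transcriptions from the paper:

```python
# A minimal data sketch of a four-tier ToBI-style transcription,
# with tiers aligned by word position (labels are illustrative).
transcription = {
    "orthographic": ["나는", "학교에", "간다"],
    "tone":         ["H*",   "L+H*",  "L-L%"],  # pitch accents / boundary tone
    "break":        [1,      2,       4],       # 0-4 = weakest to strongest juncture
    "misc":         ["",     "",      "final"],
}

def words_at_boundary(trans, min_break=3):
    """Return words followed by a major prosodic boundary
    (break index >= min_break, i.e. phrase- or utterance-final)."""
    return [w for w, b in zip(trans["orthographic"], trans["break"])
            if b >= min_break]

print(words_at_boundary(transcription))  # ['간다']
```

Keeping the tiers parallel is what makes the representation usable downstream: a text-to-speech or recognition system can query tone and break information by word position without re-parsing the transcription.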
