• Title/Summary/Keyword: speech situation


A study on Activity in Speaking Class: Partner's Speech Reconstitution(PSR) (교실 말하기 수업에서의 상대 발화 재구성 활동 연구)

  • Kim, Sang kyung
    • Cross-Cultural Studies / v.37 / pp.287-307 / 2014
  • The purpose of this paper is to introduce a new and effective classroom speaking activity that helps students communicate in real situations. It is a useful teaching technique because it can be combined with various other types of speaking activities. The activity was designed by the researcher and is named Partner's Speech Reconstitution (PSR) in this paper. Chapter 2 describes the Noticing and Output Hypotheses, which form the theoretical basis of the PSR, and Chapter 3 explains the activity's methods and examples and then describes its merits and demerits. The researcher applied the PSR in a speaking class for international students at K University for three semesters. This paper systematically introduces the organized activity. It elicited speaking performance from students who had avoided talking in the speaking class, made students concentrate on the speaking activity, and helped learners talk sufficiently by inducing each student to reconstitute a partner's speech production.

Isolated Digit and Command Recognition in Car Environment (자동차 환경에서의 단독 숫자음 및 명령어 인식)

  • 양태영;신원호;김지성;안동순;이충용;윤대희;차일환
    • The Journal of the Acoustical Society of Korea / v.18 no.2 / pp.11-17 / 1999
  • This paper proposes an observation probability smoothing technique for the robustness of a discrete hidden Markov model (DHMM) based speech recognizer, and suggests appropriate noise-robust processing for the car environment based on experimental results. Noisy speech is often mislabeled during the vector quantization process. To reduce the effect of such mislabelings, the proposed technique increases the observation probabilities of similar codewords. For noise-robust processing in the car environment, liftering on the distance measure of feature vectors, high-pass filtering, and spectral subtraction are examined. Recognition experiments were performed on 14 isolated words consisting of Korean digits and command words. The database was recorded in a stationary car and in a running car. The recognition rates of the baseline recognizer were 97.4% in the stationary situation and 59.1% in the running situation. Using the proposed observation probability smoothing together with liftering, high-pass filtering, and spectral subtraction, the recognition rates improved to 98.3% in the stationary situation and 88.6% in the running situation.
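
The smoothing step can be pictured with a short sketch. The Python snippet below is an illustration under assumed data shapes, not the authors' implementation: part of each codeword's observation probability is reinforced by its k nearest codewords in the VQ codebook, so a mislabeled noisy frame still receives reasonable probability.

```python
import numpy as np

def smooth_observation_probs(B, codebook, k=3, alpha=0.2):
    """Smooth DHMM observation probabilities across similar codewords.

    B        : (num_states, num_codewords) observation probability matrix
    codebook : (num_codewords, dim) VQ codebook vectors
    k        : number of nearest codewords whose probability is shared
    alpha    : portion of probability mass redistributed from neighbours
    """
    # pairwise Euclidean distances between codewords
    dists = np.linalg.norm(codebook[:, None, :] - codebook[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                 # a codeword is not its own neighbour
    neighbours = np.argsort(dists, axis=1)[:, :k]   # k most similar codewords

    B_smooth = (1.0 - alpha) * B
    for c in range(B.shape[1]):
        # raise the probability of codeword c using its similar codewords
        B_smooth[:, c] = B_smooth[:, c] + alpha * B[:, neighbours[c]].mean(axis=1)

    # renormalise each state's observation distribution
    return B_smooth / B_smooth.sum(axis=1, keepdims=True)
```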


Frame Reliability Weighting for Robust Speech Recognition (프레임 신뢰도 가중에 의한 강인한 음성인식)

  • 조훈영;김락용;오영환
    • The Journal of the Acoustical Society of Korea / v.21 no.3 / pp.323-329 / 2002
  • This paper proposes a frame reliability weighting method to compensate for time-selective noise that occurs at random positions and contaminates certain parts of the speech signal. Speech frames have different degrees of reliability, and the reliability is proportional to the frame SNR (signal-to-noise ratio). While it is feasible to estimate the frame SNR using noise information from non-speech intervals under stationary noise, it is difficult to obtain the noise spectrum for time-selective noise. Therefore, we use statistical models of clean speech to estimate frame reliability. The proposed MFR (model-based frame reliability) approximates frame SNR values using filterbank energy vectors obtained by the inverse transformation of the input MFCC (mel-frequency cepstral coefficient) vectors and the mean vectors of a reference model. Experiments on various burst noises showed that the proposed method represents frame reliability effectively. Recognition performance was improved by using the MFR values as weighting factors in the likelihood calculation step.
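
A minimal sketch of how such a model-based reliability weight might be computed and used follows; the inverse-DCT mapping back to filterbank energies and the simple distance-based SNR proxy are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.fftpack import idct

def model_based_frame_reliability(mfcc, model_mean_mfcc, floor=1e-3):
    """Approximate per-frame reliability from MFCCs and a clean reference model.

    mfcc            : (T, n_ceps) observed MFCC vectors
    model_mean_mfcc : (n_ceps,) mean MFCC vector of a clean speech model
    """
    # inverse DCT maps cepstra back to (log) filterbank energy vectors
    obs_fbank = idct(mfcc, norm='ortho', axis=-1)
    ref_fbank = idct(model_mean_mfcc, norm='ortho')

    # frames close to the clean model are treated as high-SNR, hence reliable
    dist = np.linalg.norm(obs_fbank - ref_fbank, axis=-1)
    reliability = 1.0 / np.maximum(dist, floor)
    return reliability / reliability.max()          # scale to (0, 1]

def weighted_log_likelihood(frame_loglikes, reliability):
    """Use the reliability values as weights when accumulating frame scores."""
    return float(np.sum(reliability * frame_loglikes))
```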

Robust Speech Recognition Using Missing Data Theory (손실 데이터 이론을 이용한 강인한 음성 인식)

  • 김락용;조훈영;오영환
    • The Journal of the Acoustical Society of Korea / v.20 no.3 / pp.56-62 / 2001
  • In this paper, we apply missing data theory to speech recognition. It can be used to maintain high recognizer performance when missing data occur. In general, the hidden Markov model (HMM) is used as a stochastic classifier for speech recognition, and acoustic events are represented by continuous probability density functions in the continuous density HMM (CDHMM). Missing data theory has the advantage of being easily applicable to the CDHMM. A marginalization method is used to process the missing data because it has low complexity and is easy to apply to automatic speech recognition (ASR). Spectral subtraction is used to detect missing data: if the difference between the energy of speech and that of the background noise falls below a given threshold, the data are determined to be missing. We propose a new method that examines the reliability of the detected missing data using voicing probability. The voicing probability is used to find voiced frames and to process missing data in voiced regions, which carry more redundant information than consonants. The experimental results show that our method outperforms a baseline system that uses spectral subtraction only. In a 452-word isolated word recognition experiment, the proposed method using the voicing probability reduced the average word error rate by 12% in a typical noise situation.
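
The detection-plus-marginalization scheme can be sketched briefly. The threshold value and function names below are illustrative assumptions; the marginalization itself simply drops missing dimensions from a diagonal-Gaussian log-likelihood, as the abstract describes.

```python
import numpy as np

def missing_mask(speech_power, noise_power, threshold_db=3.0):
    """Flag spectral components as missing when the spectral-subtraction
    estimate of local SNR falls below a threshold."""
    snr_db = 10.0 * np.log10(np.maximum(speech_power, 1e-12) /
                             np.maximum(noise_power, 1e-12))
    return snr_db < threshold_db                    # True = missing component

def marginal_log_likelihood(x, mean, var, missing):
    """Diagonal-Gaussian log-likelihood with missing dimensions integrated out
    (i.e. simply dropped from the sum)."""
    keep = ~missing
    d = x[keep] - mean[keep]
    return -0.5 * float(np.sum(np.log(2.0 * np.pi * var[keep]) + d * d / var[keep]))
```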


Focus Realization of English Noun Phrases in the Classroom Situation (교실 상황에서 영어 명사구의 초점 실현 양상)

  • Jun, Ji-Hyun;Song, Jae-Yung;Lee, Dong-Hwa;Kim, Kee-Ho
    • Speech Sciences / v.9 no.2 / pp.109-132 / 2002
  • The purpose of this study is to examine the focus realization of [Adjective+Noun] phrases used in English classroom situations. To examine this, two production experiments and one perception experiment were designed. The noun phrases in the two production experiments are divided into three patterns according to the location of focus. The difference between the two production experiments is that in the first the focused words are given contextually in the classroom situation, whereas in the second they are presented in written form. We compare the native English teachers' focus realization of noun phrases with that of Korean teachers from the point of view of intonational phonology. In the perception test, we examine how the uttered sentences are perceived by English native speakers and Korean native speakers. The results from the three experiments show that native English teachers' focus realization is quite consistent with information structure. There is also a significant difference in the pitch range of adjectives and nouns when the native speakers place pitch accents on the two content words, and the uttered sentences are mostly perceived in accordance with the speakers' intentions. Korean speakers, however, usually focus only on the adjective, or on both the adjective and the noun, regardless of the relative informativeness of these words. From these findings, we conclude that the focus realization of Korean teachers is rather inconsistent with respect to information structure compared to that of native English teachers.


Deep Level Situation Understanding for Casual Communication in Humans-Robots Interaction

  • Tang, Yongkang;Dong, Fangyan;Yoichi, Yamazaki;Shibata, Takanori;Hirota, Kaoru
    • International Journal of Fuzzy Logic and Intelligent Systems / v.15 no.1 / pp.1-11 / 2015
  • A concept of Deep Level Situation Understanding is proposed to realize human-like natural communication (called casual communication) among multiple agents (e.g., humans and robots/machines). Deep level situation understanding consists of surface level understanding (such as gesture/posture understanding, facial expression understanding, and speech/voice understanding), emotion understanding, intention understanding, and atmosphere understanding, achieved by applying customized knowledge of each agent and by taking thoughtfulness into consideration. The proposal aims to reduce the burden on humans in human-robot interaction, to realize harmonious communication by excluding unnecessary troubles and misunderstandings among agents, and ultimately to help create a peaceful, happy, and prosperous human-robot society. A simulated experiment is carried out to validate the deep level situation understanding system in a scenario in which a meeting-room reservation is made between a human employee and a secretary robot. The proposed system is intended to be applied in service robot systems to smooth communication and avoid misunderstanding among agents.
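
To make the layered structure concrete, a toy sketch of the four understanding levels is shown below; the field names and the trivial inference rules are purely hypothetical illustrations of the surface-to-atmosphere layering, not the authors' system.

```python
from dataclasses import dataclass

@dataclass
class SurfaceLevel:
    gesture: str            # e.g. "nodding"
    facial_expression: str  # e.g. "smiling"
    utterance: str          # recognised speech text

@dataclass
class DeepLevelSituation:
    surface: SurfaceLevel
    emotion: str            # inferred from surface cues and agent knowledge
    intention: str          # e.g. "reserve a meeting room"
    atmosphere: str         # overall mood of the interaction

def understand(surface: SurfaceLevel, agent_knowledge: dict) -> DeepLevelSituation:
    """Toy pipeline: each deeper level is inferred from the levels above it."""
    emotion = agent_knowledge.get(surface.facial_expression, "neutral")
    intention = "reserve a meeting room" if "reserve" in surface.utterance else "unknown"
    atmosphere = "harmonious" if emotion in ("happy", "neutral") else "strained"
    return DeepLevelSituation(surface, emotion, intention, atmosphere)
```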

Recognition Performance Improvement of Unsupervised Limabeam Algorithm using Post Filtering Technique

  • Nguyen, Dinh Cuong;Choi, Suk-Nam;Chung, Hyun-Yeol
    • IEMEK Journal of Embedded Systems and Applications / v.8 no.4 / pp.185-194 / 2013
  • In distant-talking environments, speech recognition performance degrades significantly due to noise and reverberation. Recent work by Michael L. Seltzer shows that in microphone array speech recognition, the word error rate can be significantly reduced by adapting the beamformer weights to generate a sequence of features that maximizes the likelihood of the correct hypothesis. In this approach, called the Likelihood Maximizing Beamforming (Limabeam) algorithm, one implementation is Unsupervised Limabeam (USL), which can improve recognition performance in any environmental situation. Our investigation of USL showed that, because the optimization depends strongly on the transcription output of the first recognition step, the output can become unstable, which may lower performance. To improve the recognition performance of USL, post-filtering techniques can be employed to obtain a more correct transcription from the first step. In this work, as a post-filtering technique for the first recognition step of USL, we propose adding a Wiener filter combined with a Feature-Weighted Mahalanobis Distance. We also suggest an alternative way to implement the Limabeam algorithm for a Hidden Markov Network (HM-Net) speech recognizer for efficient implementation. Speech recognition experiments performed in a real distant-talking environment confirm the efficacy of the Limabeam algorithm in the HM-Net speech recognition system and the improved performance of the proposed method.
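
The unsupervised loop with a post-filtered first pass can be summarised in sketch form; the recognizer interface, the post-filter callable, and the numerical gradient step below are stand-ins chosen for illustration, not the authors' HM-Net implementation.

```python
import numpy as np

def numerical_gradient(f, w, eps=1e-4):
    """Central-difference gradient of a scalar function f at w."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2.0 * eps)
    return g

def unsupervised_limabeam(mic_signals, extract_features, recognizer,
                          post_filter, n_iter=3, lr=1e-3):
    """Unsupervised Limabeam sketch with a post-filtered first recognition pass.

    mic_signals      : (n_mics, n_samples) array of microphone waveforms
    extract_features : waveform -> feature matrix
    recognizer       : object with .decode(feats) and .log_likelihood(feats, hyp)
    post_filter      : single-channel enhancement (e.g. a Wiener filter) applied
                       before the first pass to stabilise the transcription
    """
    n_mics = mic_signals.shape[0]
    w = np.ones(n_mics) / n_mics                        # delay-and-sum start

    # first pass: beamform, post-filter, decode a hypothesis transcription
    beamformed = w @ mic_signals
    hypothesis = recognizer.decode(extract_features(post_filter(beamformed)))

    # adapt the array weights to maximise the likelihood of that hypothesis
    def objective(wv):
        return recognizer.log_likelihood(extract_features(wv @ mic_signals), hypothesis)

    for _ in range(n_iter):
        w = w + lr * numerical_gradient(objective, w)   # gradient ascent
    return w, hypothesis
```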

Age classification of emergency callers based on behavioral speech utterance characteristics (발화행태 특징을 활용한 응급상황 신고자 연령분류)

  • Son, Guiyoung;Kwon, Soonil;Baik, Sungwook
    • The Journal of Korean Institute of Next Generation Computing / v.13 no.6 / pp.96-105 / 2017
  • In this paper, we investigate age classification of speakers by analyzing voice calls to an emergency call center. We classify adult and elderly speakers from the call center recordings using behavioral speech utterance features and an SVM (Support Vector Machine), a machine learning classifier. Two behavioral speech utterance features were selected through analysis of the emergency call center data: silent pause and turn-taking latency. The criteria for age classification were chosen through analysis based on these behavioral features and were found to be statistically significant (p < 0.05). We analyzed 200 datasets (adult: 100, elderly: 100) with 5-fold cross-validation using the SVM classifier. As a result, we achieved 70% accuracy using the two behavioral speech utterance features, which is higher than the accuracy obtained with a single feature. These results suggest behavioral speech utterances as a new method for age classification; in future work, classification will combine acoustic information (MFCC) with new behavioral speech utterance features extracted from real voice data. Furthermore, this work will contribute to the development of emergency situation judgment systems based on age classification.
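
A minimal sketch of the described two-feature SVM setup with 5-fold cross-validation is given below; the feature values are synthetic placeholders, since the real emergency-call data are not available here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# synthetic stand-in data: one row per call, columns are
# [mean silent-pause duration (s), mean turn-taking latency (s)]
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.4, 0.6], 0.1, size=(100, 2)),   # adult   (label 0)
               rng.normal([0.8, 1.1], 0.2, size=(100, 2))])  # elderly (label 1)
y = np.array([0] * 100 + [1] * 100)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)       # 5-fold cross-validation
print("mean accuracy: %.2f" % scores.mean())
```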

A Study of Apology Strategies between Genders in EFL College Students

  • Shim, Jae-Hwang
    • English Language & Literature Teaching / v.15 no.2 / pp.225-243 / 2009
  • This study investigates the use of different apology speech act strategies by male and female EFL college students, comparing the components of intensity, stylistic competence, and semantic formulas. The data were collected from 37 participants enrolled in a freshman English reading course at the Department of English Education of C University in Seoul. Most students were English majors taking a pre-service teacher course on teaching English to secondary school students. The participants were divided into two gender groups, male and female. A discourse completion test (DCT), revised from the apology speech act instrument of Olshtain and Cohen (1990), was given to the participants after the researcher explained the speech act of apology in ten situations. The speech act of apology depends on situational variables: social solidarity, severity of offense, and social status. The results show that, in the preference of intensity, males and females have almost the same ratios for high intensity (female: 24.7%, male: 24%) and low intensity (female: 75.3%, male: 76%). In the use of stylistic competence, the male group (21%) expresses formal features more diversely than the female group (12%), while females (87%) use more informal features than males (66%). Most participants show limitations in the spoken use of the four types of semantic formulas: expression of apology (APOL), acknowledgment of responsibility (RESP), offer of repair (REPR), and promise of forbearance (FORB). As nonnative speakers, the participants cannot produce the semantic formulas in some situations regardless of the tasks provided. The results suggest that English teachers should recognize the pragmatic variations with which students have difficulty when choosing appropriate apology strategies in speaking. This study also contributes to teaching learners the strategies and speaking patterns used across various apology situations.


An Experimental Research on the Room Acoustical Environment of the Elementary School Classrooms (초등학교 교실의 음환경 평가에 관한 실험적 연구)

  • Haan, Chan-Hoon;Moon, Kyu-Chun
    • Journal of the Korean Institute of Educational Facilities / v.11 no.1 / pp.5-14 / 2004
  • Since the 1990s in Korea, elementary school classrooms have been designed toward an open education system in pursuit of a variety of educational purposes, and the architectural designs of schools have been carried out for individual schools rather than based on a standard design code. The present paper aims to investigate the acoustic environment of existing classrooms and to compare the sound insulation capacity of ordinary classrooms with that of newly built classrooms for open education. The current acoustical situation of elementary classrooms was analyzed using field measurements and a questionnaire survey. To this end, three elementary schools were selected, built in 1978, 1996, and 2000 respectively. Room acoustical parameters including reverberation time (RT), definition (D50), speech intelligibility (RASTI), transmission loss (TL), and STC were measured in a classroom of each school. Each measurement was undertaken with the windows and doors either open or closed. As a result, it was found that the transmission loss between rooms in open classrooms is, on average, 5~6 dB lower than in ordinary classrooms. A RASTI of 0.70 was measured in the newly built classrooms, which is better than in the old classrooms (0.70) and the open classrooms (0.73). The same tendency was shown in the speech definition measurements. This results from the sealing and airtightness conditions of the classrooms and the floor materials. The results indicate that open classrooms have poor acoustic conditions in terms of sound insulation and speech intelligibility.
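
For readers who want to see how one of the measured parameters could be derived from a recorded impulse response, here is a standard Schroeder backward-integration estimate of reverberation time; this is a generic textbook approach offered as an assumption, not the measurement procedure used in the paper.

```python
import numpy as np

def reverberation_time(impulse_response, fs, fit_range_db=(-5.0, -25.0)):
    """Estimate RT (T20 extrapolated to 60 dB) via Schroeder backward integration."""
    edc = np.cumsum(impulse_response[::-1] ** 2)[::-1]   # energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])

    t = np.arange(len(impulse_response)) / fs
    hi, lo = fit_range_db
    sel = (edc_db <= hi) & (edc_db >= lo)                # fit the -5 to -25 dB span
    slope, _ = np.polyfit(t[sel], edc_db[sel], 1)        # decay rate in dB per second
    return -60.0 / slope                                 # time to decay by 60 dB
```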