• Title/Summary/Keyword: signal words


Evaluation of the Signal Word Cognition using Quantification Methods (수량화 분석을 이용한 신호단어의 인식도 평가)

  • 고병인;김동하;임현교
    • Journal of the Korean Society of Safety
    • /
    • v.15 no.4
    • /
    • pp.134-138
    • /
    • 2000
  • Signal words such as DANGER, WARNING, and CAUTION have been used to convey potential hazards easily and quickly, but they have been applied at many sites without consistency. This study therefore used Quantification Methods and Cluster Analysis to judge which signal words correspond to the urgency of a situation and to analyze whether signal words are used properly. According to Quantification Method II, signal words were most affected by Understanding, Severity, and Likelihood in both the student group and the industrial worker group. In Quantification Method III, CAUTION corresponded to Immediacy and Understanding, NOTICE to Receptivity, and WARNING, DEADLY, and DANGER to Likelihood, Dangerousness, and Severity. Finally, Cluster Analysis showed that CAUTION and NOTICE were recognized as similar words.

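The cluster-analysis step this abstract describes can be sketched as follows. The ratings below are invented for illustration (they are not the study's data); the three axes stand in for factors such as Severity, Likelihood, and Immediacy:

```python
import math

# Invented ratings on three factors (e.g., Severity, Likelihood, Immediacy);
# NOT the study's data, just an illustration of the clustering step.
ratings = {
    "DANGER":  (4.8, 4.5, 4.6),
    "DEADLY":  (4.5, 4.9, 4.1),
    "WARNING": (3.9, 3.8, 3.7),
    "CAUTION": (2.6, 2.8, 2.5),
    "NOTICE":  (2.5, 2.7, 2.4),
}

def dist(a, b):
    """Euclidean distance between two rating vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# The closest pair is the first merge an agglomerative clustering would make.
words = list(ratings)
pairs = [(dist(ratings[u], ratings[v]), u, v)
         for i, u in enumerate(words) for v in words[i + 1:]]
closest = min(pairs)
print(closest[1], closest[2])  # → CAUTION NOTICE
```

With these made-up ratings the first merge pairs CAUTION with NOTICE, mirroring the paper's finding that the two are perceived as similar.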

Perception of Tamil Mono-Syllabic and Bi-Syllabic Words in Multi-Talker Speech Babble by Young Adults with Normal Hearing

  • Gnanasekar, Sasirekha;Vaidyanath, Ramya
    • Journal of Audiology & Otology
    • /
    • v.23 no.4
    • /
    • pp.181-186
    • /
    • 2019
  • Background and Objectives: This study compared the perception of mono-syllabic and bi-syllabic words in Tamil by young normal-hearing adults in the presence of multi-talker speech babble at two signal-to-noise ratios (SNRs). For this comparison, a speech-perception-in-noise test was constructed using existing mono-syllabic and bi-syllabic word lists in Tamil. Subjects and Methods: A total of 30 participants with normal hearing, aged 18 to 25 years, took part in the study. A speech-in-noise test in Tamil (SPIN-T), constructed using mono-syllabic and bi-syllabic words in Tamil, served as the stimuli. The stimuli were presented against multi-talker speech babble at two SNRs (0 dB and +10 dB). Results: The effect of noise on SPIN-T varied with SNR. All participants performed better at +10 dB SNR, the higher of the two SNRs. At +10 dB SNR, performance did not differ significantly between mono-syllabic and bi-syllabic words; at 0 dB SNR, however, a significant difference existed. Conclusions: The current study indicated that a higher SNR leads to better performance. In addition, bi-syllabic words were identified with fewer errors than mono-syllabic words. Spectral cues were the most affected in the presence of noise, leading to more place-of-articulation errors for both mono-syllabic and bi-syllabic words.
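The two-SNR setup above amounts to scaling the babble relative to the speech before mixing. A minimal sketch, with synthetic tones standing in for the speech and babble signals:

```python
import math

def rms(x):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise level ratio is snr_db, then mix."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + gain * n for s, n in zip(speech, noise)]

fs = 8000
speech = [math.sin(2 * math.pi * 220 * t / fs) for t in range(fs)]
babble = [math.sin(2 * math.pi * 90 * t / fs + 1.3) for t in range(fs)]

mixed = mix_at_snr(speech, babble, 10.0)  # the +10 dB SNR condition
residual = [m - s for m, s in zip(mixed, speech)]
print(round(20 * math.log10(rms(speech) / rms(residual)), 1))  # → 10.0
```

The achieved SNR of the mixture matches the requested 10 dB by construction; the 0 dB condition is obtained by passing `0.0` instead.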

Pattern classification of EMG signals by the syntactic analysis (구문론적 해석에 의한 근전도 신호의 패턴 분류)

  • 왕문성;박상희;정태윤;변윤식
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1987.10b
    • /
    • pp.699-701
    • /
    • 1987
  • This paper deals with EMG signal processing for applying EMG signals to a prosthetic arm. The EMG signals are generated by voluntary contractions of the subject's musculature and are coded into binary words by pulse-width modulation. Command strings, or sentences, are constructed by concatenating several words; they are described syntactically by a context-free grammar in Chomsky normal form, and the movement patterns are classified by the CYK algorithm.

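The classification step the abstract names, recognizing strings of binary words with a Chomsky-normal-form grammar via the CYK algorithm, can be sketched as follows. The toy grammar and tokens are invented for illustration, not taken from the paper:

```python
# Toy CNF grammar (invented): S -> B C, B -> C B, B -> '0', C -> '1'
binary_rules = {("B", "C"): {"S"}, ("C", "B"): {"B"}}
terminal_rules = {"0": {"B"}, "1": {"C"}}

def cyk_accepts(tokens, start="S"):
    """CYK recognition for a grammar in Chomsky normal form."""
    n = len(tokens)
    # table[i][j] holds the nonterminals deriving tokens[i : i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        table[i][0] = set(terminal_rules.get(tok, set()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for split in range(1, span):
                for b in table[i][split - 1]:
                    for c in table[i + split][span - split - 1]:
                        table[i][span - 1] |= binary_rules.get((b, c), set())
    return start in table[0][n - 1]

print(cyk_accepts(list("01")), cyk_accepts(list("10")))  # → True False
```

A command string is accepted exactly when the start symbol derives the whole token sequence, which is how such a syntactic classifier separates valid movement commands from invalid ones.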

MPEG-2 AAC Encoder Implementation Using a Floating-Point DSP (부동 소수점 DSP를 이용한 MPEG-2 AAC 부호화기 구현)

  • Kim Seung-Woo
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.7
    • /
    • pp.882-888
    • /
    • 2005
  • MPEG-2 Advanced Audio Coding (AAC) has already been standardized as a sophisticated next-generation technology. AAC provides an audio signal with CD quality at 96-128 kbps/stereo. This paper describes a high-quality, efficient software implementation of an MPEG-2 AAC LC Profile encoder. Common scalefactor and noiseless coding are accelerated by 45% and 27%, respectively, through the use of TMS320C30 instructions. The implemented encoder uses 7.5 kWords of program memory, 18 kWords of data ROM, and 92 kBytes of data RAM. The results of a subjective quality test showed that the sound quality achieved at 96 kbps/stereo was equivalent to that of MP3 at 128 kbps/stereo.


Effects of content and formal schema on reading comprehension (내용과 형식 스키마가 독해에 미치는 영향)

  • Yeon, Jun-Hum
    • English Language & Literature Teaching
    • /
    • no.3
    • /
    • pp.95-122
    • /
    • 1997
  • The purpose of this research was to investigate the effects of content and formal schema on reading comprehension. Five hundred fifty-nine high school subjects were assigned to one of the following levels and treatment conditions: (1) higher level & schema activation, (2) higher level & non-schema activation, (3) lower level & schema activation, and (4) lower level & non-schema activation. To evaluate the effects of schema activation, two experiments were conducted: one related to content schema and the other to formal schema. To evaluate the effects of content schema, three types of tests were conducted: (1) a cloze test, (2) guessing the meanings of nonsense words, and (3) an immediate recall test. To evaluate the effects of formal schema instruction, four kinds of tests were conducted: (1) sorting sentences according to importance, (2) identifying signal words, (3) an immediate recall test, and (4) identifying specific information. For the content schema condition, results indicated that the subjects given titles or pictures before reading in the "Content Schema Activation" treatment scored better than those in the other treatment on all types of tests, regardless of their levels. Schema activation helped the subjects increase the cognitive predictability of missing words and participate in the tasks more actively, with risk-taking. It was also shown that good readers tend to process words meaningfully, while poor readers tend to process them phonetically or morphologically. Formal schema activation through teaching text organization also had a significant influence on three types of tests (sorting sentences according to importance, identifying signal words, and immediate recall), but not on identifying specific information.
The implications of this study can be briefly noted as follows: (1) In teaching reading, the student's background knowledge should be activated as a pre-reading activity. (2) In reading, it is more important to emphasize the student's schema than the features of the text. (3) Various educational interventions should be introduced, especially for lower-level students. (4) Teaching text structures can be a powerful method for the top-down processing strategy.


A Study on the Segmentation of Speech Signal into Phonemic Units (음성 신호의 음소 단위 구분화에 관한 연구)

  • Lee, Yeui-Cheon;Lee, Gang-Sung;Kim, Soon-Hyon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.10 no.4
    • /
    • pp.5-11
    • /
    • 1991
  • This paper suggests a method for segmenting a speech signal into phonemic units. The suggested segmentation system is speaker-independent and operates without any prior information about the speech signal. In the segmentation process, we first divide the input speech signal into pure-voiced and non-pure-voiced regions. We then apply a second algorithm that segments each region into detailed phonemic units using the voicing-detection parameters, i.e., the time variation of the 0th LPC cepstrum coefficient and the zero-crossing rate (ZCR). The speech used to verify the segmentation algorithm suggested in this paper is a vocabulary composed of isolated words and continuous words. According to the experiments, the successful segmentation rate for the 507 phonemic units in the total vocabulary is 91.7%.

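The ZCR voicing cue the abstract mentions is straightforward to compute. A sketch with synthetic frames (a low-frequency voiced-like tone versus a high-frequency fricative-like one; the frames and rates are illustrative, not the paper's data):

```python
import math

def zcr(frame):
    """Zero-crossing rate: fraction of adjacent samples that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

fs = 8000
voiced_like = [math.sin(2 * math.pi * 100 * t / fs) for t in range(400)]
fricative_like = [math.sin(2 * math.pi * 2000 * t / fs) for t in range(400)]

# Unvoiced/fricative-like frames cross zero far more often than voiced ones,
# which is what makes ZCR useful for voiced/unvoiced region splitting.
print(zcr(voiced_like) < zcr(fricative_like))  # → True
```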

Lexical Status and the Degree of /l/-darkening

  • Ahn, Miyeon
    • Phonetics and Speech Sciences
    • /
    • v.7 no.3
    • /
    • pp.73-78
    • /
    • 2015
  • This study explores the degree of velarization of English word-final /l/ (i.e., /l/-darkness) according to lexical status. Lexical status is defined as whether a speech stimulus is considered a word or a non-word. We examined the temporal and spectral properties of word-final /l/ in terms of duration and the F2-F1 frequency difference, varying the immediately pre-liquid vowels. The result showed that both temporal and spectral properties were contrastive across all vowel contexts, with real words having shorter [l] duration and lower F2-F1 values than non-words. That is, /l/ is more heavily velarized in words than in non-words, which suggests that lexical status, i.e., whether language users encode the speech signal as a word or not, is deeply involved in their speech production.

Analyzing the element of emotion recognition from speech (음성으로부터 감성인식 요소분석)

  • 심귀보;박창현
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.6
    • /
    • pp.510-515
    • /
    • 2001
  • Generally, the elements for emotion recognition from a speech signal include (1) the words of the conversation, (2) tone, (3) pitch, (4) formant frequencies, and (5) speech speed. For human beings, tone, voice quality, and the speed of words are naturally easier cues than frequency for perceiving another's feelings, so the former are important elements for classifying feelings, and previous methods have mainly used them. Using formants, however, is better suited to machine implementation. Thus, the final goal of this research is to implement an emotion recognition system based on pitch, formants, speech speed, etc., from the speech signal. In this paper, as the first stage, we found specific features of anger in a speaker's words when he got angry.

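Of the features listed above, pitch is the simplest to sketch. Below is a minimal autocorrelation F0 estimator run on a synthetic 200 Hz tone; it is illustrative only and not the paper's implementation:

```python
import math

def estimate_pitch(frame, fs, fmin=60, fmax=400):
    """Return the F0 (Hz) whose lag maximizes the frame's autocorrelation."""
    best_lag, best_corr = None, -float("inf")
    for lag in range(int(fs / fmax), int(fs / fmin) + 1):
        corr = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return fs / best_lag

fs = 8000
tone = [math.sin(2 * math.pi * 200 * t / fs) for t in range(800)]
print(round(estimate_pitch(tone, fs)))  # → 200
```

An emotion classifier of the kind described would track statistics of this F0 contour (mean, range, slope) alongside formant and speech-rate features.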

Fixed Point Implementation of the QCELP Speech Coder

  • Yoon, Byung-Sik;Kim, Jae-Won;Lee, Won-Myoung;Jang, Seok-Jin;Choi, Song-In;Lim, Myoung-Seon
    • ETRI Journal
    • /
    • v.19 no.3
    • /
    • pp.242-258
    • /
    • 1997
  • The Qualcomm code excited linear prediction (QCELP) speech coder was adopted to increase the capacity of the CDMA Mobile System (CMS). In this paper, we implemented the QCELP speech coding algorithm using the TMS320C50 fixed-point DSP chip. The fixed-point simulation was also done in the C language. The code size of QCELP on the TMS320C50 was 10k words and the data memory was 4k words. In a normal call test on the CMS, where a mobile-to-mobile call test was done in bypass mode without double vocoding, the mean opinion score for the speech quality was 3.11.

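A fixed-point port like the one described relies on integer arithmetic throughout, typically in Q15 format on 16-bit DSPs such as the TMS320C50. A minimal illustrative sketch of a Q15 round trip (not the paper's code):

```python
Q = 15  # Q15 format: 1 sign bit, 15 fractional bits (range [-1, 1))

def to_q15(x):
    """Quantize a float in [-1, 1) to a 16-bit Q15 integer, with saturation."""
    return max(-32768, min(32767, int(round(x * (1 << Q)))))

def q15_mul(a, b):
    """Multiply two Q15 values, shifting the 30-bit product back to Q15."""
    return (a * b) >> Q

a, b = to_q15(0.5), to_q15(0.25)
print(q15_mul(a, b) / (1 << Q))  # → 0.125
```

Every filter tap and codebook gain in such a coder passes through operations of this shape, which is why the fixed-point C simulation mentioned in the abstract is needed to verify bit-exact behavior before the DSP port.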