• Title/Abstract/Keyword: signal words

Search results: 172 items (processing time 0.018 s)

수량화 분석을 이용한 신호단어의 인식도 평가 (Evaluation of the Signal Word Cognition using Quantification Methods)

  • 고병인;김동하;임현교
    • 한국안전학회지
    • /
    • Vol.15 No.4
    • /
    • pp.134-138
    • /
    • 2000
  • Signal words such as DANGER, WARNING, and CAUTION have been used to convey a potential hazard easily and quickly, but they have been applied at many sites without consistency. Thus, this study used Quantification Methods and Cluster Analysis to judge which signal words correspond to the urgency of a situation and to analyze whether signal words are used properly. According to the results of Quantification Method II, signal word perception was most affected by Understanding, Severity, and Likelihood in both the student group and the industrial worker group. In Quantification Method III, CAUTION corresponded to Immediacy and Understanding, NOTICE to Receptivity, and WARNING, DEADLY, and DANGER to Likelihood, Dangerousness, and Severity. Finally, Cluster Analysis showed that CAUTION and NOTICE were recognized as similar words.

  • PDF
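The clustering idea behind the study's last finding can be sketched briefly: signal words rated on attribute scales are grouped by similarity of their rating vectors. The attribute names and numeric ratings below are illustrative placeholders, not the paper's data.

```python
import math

# word -> (severity, likelihood, understanding) on a 1-5 scale (hypothetical values)
ratings = {
    "DANGER":  (4.8, 4.5, 4.2),
    "WARNING": (4.0, 3.8, 4.0),
    "CAUTION": (2.9, 3.0, 3.6),
    "NOTICE":  (2.7, 2.8, 3.4),
}

def dist(a, b):
    # Euclidean distance between two rating vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# The first merge of an agglomerative clustering: the most similar pair
words = list(ratings)
pairs = [(dist(ratings[w1], ratings[w2]), w1, w2)
         for i, w1 in enumerate(words) for w2 in words[i + 1:]]
closest = min(pairs)
print(closest[1], closest[2])  # the pair perceived as most alike
```

With these placeholder ratings, the first pair merged is CAUTION and NOTICE, mirroring the paper's conclusion that the two are perceived as similar.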

Perception of Tamil Mono-Syllabic and Bi-Syllabic Words in Multi-Talker Speech Babble by Young Adults with Normal Hearing

  • Gnanasekar, Sasirekha;Vaidyanath, Ramya
    • Journal of Audiology & Otology
    • /
    • Vol.23 No.4
    • /
    • pp.181-186
    • /
    • 2019
  • Background and Objectives: This study compared the perception of mono-syllabic and bi-syllabic words in Tamil by young normal hearing adults in the presence of multi-talker speech babble at two signal-to-noise ratios (SNRs). For this comparison, a speech perception in noise test was constructed using existing mono-syllabic and bi-syllabic word lists in Tamil. Subjects and Methods: A total of 30 participants with normal hearing in the age range of 18 to 25 years participated in the study. The speech-in-noise test in Tamil (SPIN-T), constructed from mono-syllabic and bi-syllabic Tamil words, was used as stimuli. The stimuli were presented against a background of multi-talker speech babble at two SNRs (0 dB and +10 dB SNR). Results: The effect of noise on SPIN-T varied with SNR. All the participants performed better at +10 dB SNR, the higher of the two SNRs considered. Additionally, at +10 dB SNR performance did not vary significantly for either mono-syllabic or bi-syllabic words; however, a significant difference existed at 0 dB SNR. Conclusions: The current study indicated that a higher SNR leads to better performance. In addition, bi-syllabic words were identified with fewer errors than mono-syllabic words. Spectral cues were the most affected in the presence of noise, leading to more place-of-articulation errors for both mono-syllabic and bi-syllabic words.
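Presenting stimuli "at 0 dB and +10 dB SNR" means scaling the babble so that the speech-to-masker power ratio hits the target. A minimal sketch, using synthetic tones in place of real speech and babble:

```python
import math

def rms(x):
    # root-mean-square level of a signal
    return math.sqrt(sum(s * s for s in x) / len(x))

def scale_to_snr(speech, babble, snr_db):
    """Scale babble so that 20*log10(rms(speech)/rms(scaled)) equals snr_db."""
    gain = rms(speech) / (rms(babble) * 10 ** (snr_db / 20))
    return [gain * s for s in babble]

fs = 16000
speech = [math.sin(2 * math.pi * 440 * n / fs) for n in range(1600)]
babble = [math.sin(2 * math.pi * 300 * n / fs + 0.7) for n in range(1600)]

for snr in (0, 10):
    b = scale_to_snr(speech, babble, snr)
    achieved = 20 * math.log10(rms(speech) / rms(b))
    print(round(achieved, 1))  # -> 0.0, then 10.0
```

At +10 dB the speech carries ten times the masker's power, which is why performance improves and spectral (place) cues survive better than at 0 dB.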


구문론적 해석에 의한 근전도 신호의 패턴 분류 (Pattern classification of EMG signals by the syntactic analysis)

  • 왕문성;박상희;정태윤;변윤식
    • 제어로봇시스템학회:학술대회논문집
    • /
    • Proceedings of the 1987 Korea Automatic Control Conference; Korea Institute of Technology, Chungnam; 16-17 Oct. 1987
    • /
    • pp.699-701
    • /
    • 1987
  • This paper deals with EMG signal processing for applying EMG signals to a prosthetic arm. The EMG signals generated by voluntary contractions of the subject's musculature are coded into binary words by pulse width modulation. Command strings, or sentences, are constructed by concatenating several words and are syntactically described by a context-free grammar in Chomsky normal form; movement patterns are then classified with the CYK algorithm.

  • PDF
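The CYK membership test named in the abstract can be sketched in a few lines. The CNF grammar and the binary "words" below are invented placeholders for illustration, not the paper's EMG coding scheme.

```python
# CNF rules: A -> B C stored as {(B, C): {A}}, A -> a stored as {a: {A}}
binary = {("B", "C"): {"S"}, ("C", "B"): {"B"}}
unary = {"0": {"B"}, "1": {"C"}}

def cyk(word, start="S"):
    """Return True if `word` is derivable from `start` under the CNF grammar."""
    n = len(word)
    # table[i][j]: nonterminals deriving the substring word[i : i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = set(unary.get(ch, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for k in range(1, span):  # split point inside the span
                for b in table[i][k - 1]:
                    for c in table[i + k][span - k - 1]:
                        table[i][span - 1] |= binary.get((b, c), set())
    return start in table[0][n - 1]

print(cyk("01"))  # derivable: B C -> S, prints True
print(cyk("00"))  # no rule combines B B, prints False
```

The same table-filling procedure would accept or reject a concatenated command string, classifying it against the grammar describing a movement pattern.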

부동 소수점 DSP를 이용한 MPEG-2 AAC 부호화기 구현 (MPEG-2 AAC Encoder Implementation Using a Floating-Point DSP)

  • 김승우
    • 한국멀티미디어학회논문지
    • /
    • Vol.8 No.7
    • /
    • pp.882-888
    • /
    • 2005
MPEG-2 AAC has already been standardized as a more advanced next-generation technology. AAC delivers CD-quality audio at 96-128 kbps/stereo. This paper discusses the implementation of a high-quality MPEG-2 AAC LC Profile encoder. The common scalefactor and lossless coding yielded TMS320C30 instruction-count savings of 45% and 27%, respectively. The implemented encoder uses 7.5 kWords of program memory, 18 kWords of data ROM, and 92 kBytes of data RAM. Subjective listening tests showed that the audio quality of the AAC encoder at 96 kbps stereo is equivalent to that of MP3 at 128 kbps stereo.

  • PDF

내용과 형식 스키마가 독해에 미치는 영향 (Effects of content and formal schema on reading comprehension)

  • 연준흠
    • 영어어문교육
    • /
    • No.3
    • /
    • pp.95-122
    • /
    • 1997
  • The purpose of this research was to investigate the effects of content and formal schema on reading comprehension. Five hundred fifty-nine high school subjects were assigned to one of the following levels and treatment conditions: (1) Higher level & Schema Activation, (2) Higher level & Non-schema Activation, (3) Lower level & Schema Activation, and (4) Lower level & Non-schema Activation. To evaluate the effects of schema activation, two experiments were conducted: one related to content schema and the other to formal schema. To evaluate the effects of content schema, three types of tests were conducted: (1) a cloze test, (2) guessing the meanings of nonsense words, and (3) an immediate recall test. To evaluate the effects of formal schema instruction, four tests were conducted: (1) sorting sentences according to importance, (2) identifying signal words, (3) an immediate recall test, and (4) identifying specific information. For the content schema condition, results indicated that the subjects given titles or pictures before reading in the "Content Schema Activation" treatment earned better grades than those in the other treatment on all types of tests, regardless of their levels. Schema activation helped the subjects increase the cognitive predictability of missing words and participate in the tasks more actively with risk-taking. It was also shown that good readers tend to process words meaningfully, while poor readers tend to process them phonetically or morphologically. Formal schema activation through teaching text organization also had a significant influence on three types of tests (sorting sentences according to importance, identifying signal words, and the immediate recall test), but not on identifying specific information.
The implications of this study can be briefly noted as follows: (1) In teaching reading, the student's background knowledge should be activated as a pre-reading activity. (2) In reading, it is more important to emphasize the student's schema than the features of the text. (3) Various educational interventions should be introduced, especially for lower level students. (4) Teaching text structures can be a powerful method for the top-down processing strategy.

  • PDF

음성 신호의 음소 단위 구분화에 관한 연구 (A Study on the Segmentation of Speech Signal into Phonemic Units)

  • 이의천;이강성;김순협
    • 한국음향학회지
    • /
    • Vol.10 No.4
    • /
    • pp.5-11
    • /
    • 1991
This study proposes a method for segmenting speech signals into phonemic units. The proposed segmentation system is speaker-independent and can segment speech into phonemes without any prior information about the signal. Segmentation applies a two-stage algorithm: the input speech is first divided into purely voiced and non-purely-voiced intervals, and each interval is then subdivided into phonemic units. The parameters used are a voicing detection parameter, the temporal variation of the zeroth-order LPC cepstral coefficient, and the zero-crossing rate (ZCR). To demonstrate the validity of the proposed segmentation algorithm, a vocabulary of isolated words and continuous speech was used; the segmentation rate for the 507 phonemes contained in the vocabulary was 91.7%.

  • PDF
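Of the parameters listed, the zero-crossing rate is the simplest to illustrate: unvoiced or fricative regions show a markedly higher ZCR than voiced ones, which is what makes it useful for the first-stage split. The frame length and synthetic signals below are illustrative, not the paper's setup.

```python
import math

def zcr(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

fs = 8000
# stand-ins: a low-pitched voiced-like tone vs. a high-frequency unvoiced-like one
voiced = [math.sin(2 * math.pi * 120 * n / fs) for n in range(400)]
noisy  = [math.sin(2 * math.pi * 3500 * n / fs) for n in range(400)]
print(zcr(voiced) < zcr(noisy))  # -> True: voiced frames cross zero far less often
```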

Lexical Status and the Degree of /l/-darkening

  • 안미연
    • 말소리와 음성과학
    • /
    • Vol.7 No.3
    • /
    • pp.73-78
    • /
    • 2015
  • This study explores the degree of velarization of English word-final /l/ (i.e., /l/-darkness) according to lexical status. Lexical status is defined as whether a speech stimulus is considered a word or a non-word. We examined the temporal and spectral properties of word-final /l/ in terms of the duration and the F2-F1 frequency difference, varying the immediately pre-liquid vowels. The results showed that both temporal and spectral properties were contrastive across all vowel contexts, with real words having shorter [l] durations and lower F2-F1 values than non-words. That is, /l/ is more heavily velarized in words than in non-words, which suggests that lexical status (whether language users encode the speech signal as a word or not) is deeply involved in speech production.
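The study's spectral measure of darkness is the F2-F1 difference: a heavily velarized (dark) [l] has a lowered F2 and a somewhat raised F1, so the difference shrinks. The formant values below are illustrative placeholders, not measured data from the paper.

```python
def darkness_spread(f1_hz, f2_hz):
    """F2-F1 in Hz; a smaller spread indicates a darker (more velarized) /l/."""
    return f2_hz - f1_hz

clear_l = darkness_spread(360, 1600)  # hypothetical clear [l] formants
dark_l = darkness_spread(450, 900)    # hypothetical dark (velarized) [l] formants
print(dark_l < clear_l)  # -> True: the dark token has the smaller F2-F1 spread
```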

음성으로부터 감성인식 요소분석 (Analyzing the element of emotion recognition from speech)

  • 심귀보;박창현
    • 한국지능시스템학회논문지
    • /
    • Vol.11 No.6
    • /
    • pp.510-515
    • /
    • 2001
In general, the cues from which human emotion can be recognized in a speech signal are (1) the words used in the utterance, (2) tone, (3) the pitch of the speech signal, (4) formant frequencies, (5) speech rate, and (6) voice quality. Since humans naturally perceive emotion through tone, wording, speech rate, and voice quality rather than through analytic features such as frequency, the latter cues are naturally important factors for classifying emotion, and earlier work relied mainly on them. For a machine implementation, however, being able to use formant frequencies is helpful. This research therefore aims to build an emotion recognition system using pitch, formants, and speech rate extracted from the speech signal; as a first step, this paper identifies the distinctive characteristics of angry emotion based on utterances spoken in anger.

  • PDF
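Among the machine-friendly features listed, pitch is commonly estimated by autocorrelation: the lag at which a frame best matches a shifted copy of itself gives the period. This is a minimal sketch with a synthetic tone; the 50-400 Hz search range is an assumption, not taken from the paper.

```python
import math

def estimate_pitch(frame, fs, fmin=50, fmax=400):
    """Autocorrelation pitch estimate: pick the lag with the strongest self-similarity."""
    best_lag, best_corr = 0, 0.0
    for lag in range(int(fs / fmax), int(fs / fmin) + 1):
        corr = sum(frame[n] * frame[n - lag] for n in range(lag, len(frame)))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return fs / best_lag if best_lag else 0.0

fs = 8000
tone = [math.sin(2 * math.pi * 200 * n / fs) for n in range(800)]  # 200 Hz test tone
print(round(estimate_pitch(tone, fs)))  # -> 200
```

Raised pitch and pitch variability are among the traits typically reported for angry speech, which is why such an estimator is a natural first feature for the system described.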

Fixed Point Implementation of the QCELP Speech Coder

  • Yoon, Byung-Sik;Kim, Jae-Won;Lee, Won-Myoung;Jang, Seok-Jin;Choi, Song-In;Lim, Myoung-Seon
    • ETRI Journal
    • /
    • Vol.19 No.3
    • /
    • pp.242-258
    • /
    • 1997
  • The Qualcomm code excited linear prediction (QCELP) speech coder was adopted to increase the capacity of the CDMA Mobile System (CMS). In this paper, we implemented the QCELP speech coding algorithm on a TMS320C50 fixed-point DSP chip, and a fixed-point simulation was also done in C. The QCELP code on the TMS320C50 occupied 10k words, and data memory usage was 4k words. In the normal call test on the CMS, where the mobile-to-mobile call was made in bypass mode without double vocoding, the mean opinion score for speech quality was 3.11.

  • PDF
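Porting a codec like QCELP to a fixed-point DSP means representing signal values in an integer Q format. This sketch shows Q15 (16-bit fractional) quantization and multiplication with the rounding shift typical of such ports; it illustrates the general technique, not the paper's specific implementation.

```python
Q = 15
SCALE = 1 << Q  # 32768: one unit of Q15 represents 1/32768

def to_q15(x):
    """Clamp to [-1, 1) and quantize to a 16-bit Q15 integer."""
    return max(-SCALE, min(SCALE - 1, int(round(x * SCALE))))

def q15_mul(a, b):
    """Q15 * Q15 -> Q15 with rounding, as on a 16x16-bit DSP multiplier."""
    return (a * b + (1 << (Q - 1))) >> Q

a, b = to_q15(0.5), to_q15(-0.25)
print(q15_mul(a, b) / SCALE)  # close to the exact product -0.125
```

Keeping every intermediate inside the 16-bit Q15 range is what drives the word counts quoted above: coefficient tables and state buffers all live in fixed-point data memory.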