• Title/Abstract/Keyword: Sentence-level

Search results: 204

억양의 근접복사 유형화를 이용한 감정음성의 음향분석 (An acoustical analysis of emotional speech using close-copy stylization of intonation curve)

  • 이서배
    • 말소리와 음성과학 / Vol. 6, No. 3 / pp.131-138 / 2014
  • A close-copy stylization of intonation curve was used for an acoustical analysis of emotional speech. For the analysis, 408 utterances of five emotions (happiness, anger, fear, neutral and sadness) were processed to extract acoustical feature values. The results show that certain pitch point features (pitch point movement time and pitch point distance within a sentence) and sentence level features (pitch range of a final pitch point, pitch range of a sentence and pitch slope of a sentence) are affected by emotions. Pitch point movement time, pitch point distance within a sentence and pitch slope of a sentence show no significant difference between male and female participants. The emotions with high arousal (happiness and anger) are consistently distinguished from the emotion with low arousal (sadness) in terms of these acoustical features. Emotions with higher arousal show steeper pitch slope of a sentence. They have steeper pitch slope at the end of a sentence. They also show wider pitch range of a sentence. The acoustical analysis in this study implies the possibility that the measurement of these acoustical features can be used to cluster and identify emotions of speech.
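
For readers unfamiliar with this kind of stylization-based measurement, the short Python sketch below shows how sentence-level features such as pitch range, sentence slope, and mean pitch point distance could be computed from a list of stylized pitch points. It is a minimal illustration under assumed input conventions (time in seconds, pitch in Hz), not the feature extraction procedure used in the paper.

```python
# Illustrative sketch (not the paper's code): a few of the sentence-level
# features described above, computed from a close-copy stylized contour
# represented as (time_sec, pitch_hz) turning points.

def sentence_pitch_features(pitch_points):
    """pitch_points: list of (time_sec, pitch_hz) tuples in temporal order."""
    times = [t for t, _ in pitch_points]
    pitches = [p for _, p in pitch_points]

    pitch_range = max(pitches) - min(pitches)        # sentence pitch range (Hz)
    duration = times[-1] - times[0]                  # sentence duration (s)
    slope = (pitches[-1] - pitches[0]) / duration    # overall sentence pitch slope (Hz/s)

    # mean pitch point distance: average time between consecutive turning points
    gaps = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    mean_point_distance = sum(gaps) / len(gaps)

    return {
        "pitch_range_hz": pitch_range,
        "sentence_slope_hz_per_s": slope,
        "mean_pitch_point_distance_s": mean_point_distance,
    }

# Example: a rise-fall contour stylized with four pitch points
print(sentence_pitch_features([(0.0, 220.0), (0.4, 260.0), (0.9, 200.0), (1.3, 150.0)]))
```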

지연된 자극 제시가 실어증 환자의 문장 이해에 미치는 영향: 반응정확도와 반응시간을 중심으로 (The Effects of Increased Processing Demands on the Sentence Comprehension of Korean-speaking Adults with Aphasia)

  • 최소영
    • 말소리와 음성과학 / Vol. 4, No. 2 / pp.127-134 / 2012
  • The purpose of this study is to present evidence for a particular processing approach based on the language-specific characteristics of Korean. To compare individuals' sentence-comprehension abilities, this study measured the accuracy and reaction times (RT) of 12 aphasic patients (AP) and 12 normal controls (NC) during a sentence-picture matching task. Four versions of each sentence were constructed by crossing two types of voice (active/passive) with two types of word order (agent-first/patient-first). To examine the effects of increased processing demands, the picture stimuli were manipulated so that they appeared either immediately after the sentence was presented or after a delay. As expected, the AP group showed higher error rates and longer RT in all conditions than the NC group. Furthermore, Korean speakers with aphasia performed above chance level in sentence comprehension, even with passive sentences. Aphasic participants understood sentences more quickly and accurately when they were given in the active voice and with agent-first order, and the patterns of the NC group were similar. These results confirm that Korean adults with aphasia do not completely lose their knowledge of sentence comprehension. When processing demands were increased by delaying the picture stimulus onset, the effect on RT was more pronounced in the AP group than in the NC group. These findings fit well with the idea that the computational system for interpreting sentences is intact in aphasics, but its performance is compromised when processing demands increase.

한국어 구어 음성 언어 이해 모델에 관한 연구 (A Study on Korean Spoken Language Understanding Model)

  • 노용완;홍광석
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 하계종합학술대회 논문집 Ⅳ / pp.2435-2438 / 2003
  • In this paper, we propose a Korean speech understanding model that uses a dictionary and a thesaurus. The proposed model searches the dictionary for words matching those in the input text. If a word is not in the dictionary, the model searches for its higher-level (hypernym) words in a high-level word dictionary built from the thesaurus. The probability produced by the sentence understanding model is then compared with a threshold probability to obtain the speech understanding rate. We evaluated the performance of the sentence speech understanding system by applying it to a twenty-questions game. In the experiments, we obtained a sentence speech understanding accuracy of 79.8%, with a high-level word probability of 0.9 and a threshold probability of 0.38.
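
This abstract (and its journal version further down this list) describes a pipeline of dictionary lookup, a thesaurus-based fallback to higher-level (hypernym) words, and a threshold decision. The sketch below is a rough, hypothetical rendering of that idea; the data structures, per-word scoring, and normalization are assumptions, with only the 0.9 hypernym weight and the 0.38 threshold taken from the reported settings.

```python
# Hypothetical sketch (not the authors' implementation): match input words
# against a category dictionary, fall back to hypernyms from a thesaurus,
# and accept the sentence if the combined score passes a threshold.

HIGH_LEVEL_WEIGHT = 0.9   # contribution of a hypernym match (0.9 in the paper)
THRESHOLD = 0.38          # acceptance threshold (0.38 in the paper)

def understanding_score(words, category_dict, thesaurus):
    """words: tokens of the recognized sentence.
    category_dict: set of words for the currently selected category.
    thesaurus: dict mapping a word to its higher-level (hypernym) word."""
    score = 0.0
    for w in words:
        if w in category_dict:
            score += 1.0                    # direct dictionary match
        elif thesaurus.get(w) in category_dict:
            score += HIGH_LEVEL_WEIGHT      # match via hypernym lookup
    return score / max(len(words), 1)       # normalization is an assumption

def understood(words, category_dict, thesaurus):
    return understanding_score(words, category_dict, thesaurus) >= THRESHOLD

# Toy example for an "animal" category in a twenty-questions setting
animal_dict = {"animal", "legs", "fur", "tail"}
thesaurus = {"dog": "animal", "paws": "legs"}
print(understood(["does", "it", "have", "fur", "paws"], animal_dict, thesaurus))  # True
```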

계층적 포인터 네트워크를 이용한 상호참조해결 (Coreference Resolution using Hierarchical Pointer Networks)

  • 박천음;이창기
    • 정보과학회 컴퓨팅의 실제 논문지 / Vol. 23, No. 9 / pp.542-549 / 2017
  • Sequence-to-sequence models and the closely related pointer networks suffer degraded performance when the input consists of multiple sentences or when the input sentences become long. To address this problem, this paper proposes a hierarchical pointer network model that encodes a multi-sentence input sequence at both the word level and the sentence level and uses both word-level and sentence-level information during decoding, and, building on this model, proposes hierarchical-pointer-network-based coreference resolution that resolves coreference for all mentions. In experiments, the proposed model achieved a precision of 87.07%, a recall of 65.39%, and a CoNLL F1 of 74.61%, a 24.01% improvement over an existing rule-based model.
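
As a rough illustration of the hierarchical encoding idea (a word-level encoder per sentence, a sentence-level encoder over the sentence vectors, and a decoder that points at words using information from both levels), the PyTorch sketch below may be helpful. It is not the authors' implementation: the GRU encoders, mean pooling, additive attention, and all dimensions are assumptions made for brevity.

```python
# Minimal sketch of a hierarchical pointer network (assumptions throughout):
# encode words within each sentence, encode the sequence of sentence vectors,
# and let a pointer-style decoder score every word using both levels.
import torch
import torch.nn as nn

class HierarchicalPointerSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_enc = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.sent_enc = nn.GRU(2 * hid_dim, hid_dim, batch_first=True, bidirectional=True)
        self.decoder = nn.GRUCell(2 * hid_dim, 2 * hid_dim)
        # additive attention over words, conditioned on the decoder state,
        # the word encoding, and the encoding of the word's sentence
        self.att = nn.Linear(6 * hid_dim, 1)

    def forward(self, sentences, n_steps=1):
        # sentences: LongTensor [n_sents, max_words] of word ids for one document
        word_out, _ = self.word_enc(self.embed(sentences))              # [S, W, 2H]
        sent_vec = word_out.mean(dim=1, keepdim=True).transpose(0, 1)   # [1, S, 2H]
        sent_out, _ = self.sent_enc(sent_vec)                           # [1, S, 2H]

        n_sents, max_words, h2 = word_out.shape
        words = word_out.reshape(1, n_sents * max_words, h2)            # flatten all words
        sent_of_word = sent_out.repeat_interleave(max_words, dim=1)     # sentence context per word

        state = sent_out.mean(dim=1)                                    # initial decoder state [1, 2H]
        pointers = []
        for _ in range(n_steps):
            expanded = state.unsqueeze(1).expand_as(words)
            scores = self.att(torch.cat([expanded, words, sent_of_word], dim=-1)).squeeze(-1)
            attn = torch.softmax(scores, dim=-1)                        # pointer distribution over words
            pointers.append(attn)
            context = torch.bmm(attn.unsqueeze(1), words).squeeze(1)
            state = self.decoder(context, state)
        return torch.stack(pointers, dim=1)                             # [1, n_steps, S*W]

# Toy run: a 3-sentence "document" with 5 word ids per sentence
model = HierarchicalPointerSketch(vocab_size=100)
doc = torch.randint(0, 100, (3, 5))
print(model(doc, n_steps=2).shape)   # torch.Size([1, 2, 15])
```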

음의 크기가 정상성인의 비음도에 미치는 영향 (The Effects of Vocal Loudness on Nasalance Measures of Normal Adults)

  • 이수정;고도흥
    • 음성과학 / Vol. 10, No. 2 / pp.191-203 / 2003
  • This study examined the effect of vocal loudness on nasalance measures under three sentence conditions (Oral sentences, Mixed sentences, Nasal sentences). Vocal loudness was classified into soft voice (55 dB), medium voice (65 dB), and loud voice (75 dB). The participants were 30 normal adults (male:female = 1:1). Kay's Nasometer 6200 was used to measure nasalance, and a sound level meter was used to adjust the loudness level. The results are as follows. Firstly, regarding the effect of changes in vocal loudness: in the Oral sentence stimuli, the loud voice showed the highest nasalance for both male and female participants, and the medium voice the lowest. In the Mixed and Nasal sentence stimuli, however, male participants showed the highest nasalance in the soft voice and the lowest in the loud voice, while female participants showed the highest nasalance in the soft voice and the lowest in the medium voice. Secondly, when each subject's nasalance scores were rank-ordered, a noticeable tendency emerged: during the reading of the Nasal sentences, the lowest nasalance score occurred in the loud voice and the highest in the soft voice. However, no such pattern was found for the Oral sentences. Velopharyngeal function may be related to these findings. Furthermore, the findings associated with vocal loudness may have diagnostic as well as clinical implications.
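
For context, the nasalance score reported by a Nasometer is the nasal acoustic energy expressed as a percentage of the total (nasal plus oral) energy. The tiny sketch below illustrates that definition on two waveform channels; the RMS framing is illustrative, not Kay's internal algorithm.

```python
# Illustrative definition of nasalance; not the Nasometer's internal processing.
import numpy as np

def nasalance_percent(nasal_signal, oral_signal):
    """Nasalance (%) = nasal energy / (nasal + oral energy) * 100."""
    nasal = np.sqrt(np.mean(np.square(nasal_signal)))   # RMS of nasal-microphone channel
    oral = np.sqrt(np.mean(np.square(oral_signal)))     # RMS of oral-microphone channel
    return 100.0 * nasal / (nasal + oral)

# Toy example: equal nasal and oral energy gives a nasalance of 50%
t = np.linspace(0, 1, 8000)
tone = np.sin(2 * np.pi * 120 * t)
print(round(nasalance_percent(tone, tone), 1))
```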

영어 동시발화의 자동 억양궤적 추출을 통한 음향 분석 (An acoustical analysis of synchronous English speech using automatic intonation contour extraction)

  • 이서배
    • 말소리와 음성과학 / Vol. 7, No. 1 / pp.97-105 / 2015
  • This research focuses mainly on the intonational characteristics of synchronous English speech. Intonation contours were extracted from 1,848 utterances produced in two different speaking modes (solo vs. synchronous) by 28 (12 women and 16 men) native speakers of English. Synchronous speech is found to be slower than solo speech, and women are found to speak more slowly than men. The effect size of speech rate caused by speaking mode is greater than that of gender, but there is no interaction between the two factors (speaking mode vs. gender) in terms of speech rate. Analysis of pitch point features shows that synchronous speech has smaller Pt (pitch point movement time), Pr (pitch point pitch range), Ps (pitch point slope), and Pd (pitch point distance) than solo speech, with no interaction between the two factors for these features. Analysis of sentence-level features reveals that synchronous speech has smaller Sr (sentence-level pitch range), Ss (sentence slope), MaxNr (normalized maximum pitch), and MinNr (normalized minimum pitch) but greater Min (minimum pitch) and Sd (sentence duration) than solo speech. It is also shown that the higher the Mid (median pitch), MaxNr, and MinNr are in the solo speaking mode, the more they are reduced in the synchronous speaking mode. Max, Min, and Mid show greater speaker discriminability than the other features.
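
The abstract compares the size of the speaking-mode effect on speech rate with the size of the gender effect. The fragment below shows one common way such a comparison could be made, using Cohen's d on invented numbers; the paper's actual effect-size measure and data are not given here.

```python
# Illustrative only: comparing a speaking-mode effect with a gender effect on
# speech rate via Cohen's d. The data are synthetic, not the study's.
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference between two groups."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

rng = np.random.default_rng(0)
solo = rng.normal(5.2, 0.4, 100)          # hypothetical syllables/second, solo mode
synchronous = rng.normal(4.6, 0.4, 100)   # hypothetical syllables/second, synchronous mode
men = rng.normal(5.0, 0.4, 100)
women = rng.normal(4.9, 0.4, 100)

print("mode effect:", round(cohens_d(solo, synchronous), 2))
print("gender effect:", round(cohens_d(men, women), 2))
```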

영어의 억양 유형화를 이용한 발화 속도와 남녀 화자에 따른 음향 분석 (An acoustical analysis of speech of different speaking rates and genders using intonation curve stylization of English)

  • 이서배
    • 말소리와 음성과학 / Vol. 6, No. 4 / pp.79-90 / 2014
  • An intonation curve stylization was used for an acoustical analysis of English speech. For the analysis, acoustical feature values were extracted from 1,848 utterances produced at normal and fast speech rates by 28 (12 women and 16 men) native speakers of English. Men are found to speak faster than women at the normal speech rate, but no difference is found between genders at the fast speech rate. Analysis of pitch point features shows that fast speech has greater Pt (pitch point movement time), Pr (pitch point pitch range), and Pd (pitch point distance) but smaller Ps (pitch point slope) than normal speech, and that men show greater Pt, Pr, and Pd than women. Analysis of sentence-level features reveals that fast speech has smaller Sr (sentence-level pitch range), Sd (sentence duration), and Max (maximum pitch) but greater Ss (sentence slope) than normal speech. Women show greater Sr, Ss, Sp (pitch difference between the first and last pitch points), Sd, MaxNr (normalized Max), and MinNr (normalized Min) than men. As speech rate increases, women speak with greater Ss and Sr than men.

뉴스 웹 페이지에서 기사 본문 추출에 관한 연구 (A Study on Extracting News Contents from News Web Pages)

  • 이용구
    • 정보관리학회지 / Vol. 26, No. 1 / pp.305-320 / 2009
  • News pages delivered on the web contain not only the needed information but also a great deal of unnecessary content, which degrades the performance and efficiency of systems that process news. This study presents methods for extracting news article content from web pages based on sentences and on blocks, and explores how to combine them for optimal performance. Experimental results show that applying the sentence-based extraction method after removing hyperlink text from the web pages was effective, and that combining it with the block-based extraction method produced even better results. The sentence-based extraction method was found to improve extraction recall.
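
The sketch below gives a rough, hypothetical rendering of the sentence-based idea described above: drop text that sits inside hyperlinks, then keep only blocks whose text looks sentence-like. The HTML handling uses Python's standard library, and the sentence heuristic is an assumption, not the rules used in the study.

```python
# Rough sketch of sentence-based news content extraction (not the author's system).
from html.parser import HTMLParser
import re

class NewsTextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = 0
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link += 1        # skip anchor text (menus, related-article links)

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.in_link -= 1

    def handle_data(self, data):
        if not self.in_link and data.strip():
            self.blocks.append(data.strip())

def looks_like_sentence(text):
    # crude heuristic: long enough and ends in sentence-final punctuation (or Korean -다)
    return len(text) > 20 and re.search(r"[.!?다]$", text) is not None

def extract_article(html):
    parser = NewsTextExtractor()
    parser.feed(html)
    return " ".join(b for b in parser.blocks if looks_like_sentence(b))

html = '<div><a href="/menu">Home</a><p>The city council approved the new budget on Tuesday.</p></div>'
print(extract_article(html))
```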

문장음성 이해를 위한 확률모델에 관한 연구 (A study on the Stochastic Model for Sentence Speech Understanding)

  • 노용완;홍광석
    • 정보처리학회논문지B / Vol. 10B, No. 7 / pp.829-836 / 2003
  • In this paper, we propose a stochastic model for sentence speech understanding that uses a dictionary and a thesaurus. The proposed model extracts words from the input speech and text sentences. The words extracted from the input sentence are compared with the dictionary DB of the category selected by the computer, and a probability value is obtained from the stochastic model. The model then obtains hypernym information, searches the hypernym dictionary to extract words, and compares these with the input words under the stochastic model to obtain a second value. The two probability values obtained from the dictionary and the hypernym dictionary are added, and the sum is compared with a predetermined threshold to measure how well the sentence is understood. This understanding system was applied to a twenty-questions game to evaluate its performance. With a hypernym probability value (α) of 0.9 and a threshold (β) of 0.38, the accuracy of sentence speech understanding was 79.8%.

Anchoring Effect of the Prosecutor's Demand on Sentence: Evidence from Korean Sexual Crime Cases

  • KIM, JUNGWOOK;CHAE, SUBOK
    • KDI Journal of Economic Policy / Vol. 39, No. 3 / pp.1-18 / 2017
  • The anchoring effect arises when a decision shows cognitive bias toward the initial information given. Several studies have argued that such an effect is present even for judges in the courtroom. This paper seeks to find a relationship between judges' sentencing decisions and the sentences recommended by prosecutors. In this study, 2,773 actual court cases are analyzed, and quantile regression is used to show that judges' sentencing decisions are anchored by prosecutors' recommendations. However, this reliance on recommendations differs according to the seriousness of the crime committed. Specifically, at the lowest penalty levels, a one-month increase in the prosecutor's sentencing recommendation results in a 0.25-month increase in the judge's sentence, while at the highest sentence levels, the judge's sentence increases by 0.78 months under the same condition. The results of this research indicate the need to create more objective and clear sentencing guidelines in the future, in an effort to mitigate the psychological pressure experienced by judges with regard to serious or heinous crimes.
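
For readers unfamiliar with the method, the snippet below shows the general shape of such a quantile-regression analysis using statsmodels on synthetic data. The data-generating process is invented purely so that the low- and high-quantile slopes differ; it does not reproduce the paper's case data or estimates.

```python
# Hedged illustration of quantile regression of sentence length on the
# prosecutor's demand, on synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
demand = rng.uniform(6, 120, 500)            # prosecutor's demand, in months
noise = rng.gamma(2.0, 1.0, 500)
sentence = (0.3 + 0.1 * noise) * demand      # toy process: the demand matters more in the upper tail

df = pd.DataFrame({"demand": demand, "sentence": sentence})
for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("sentence ~ demand", df).fit(q=q)
    print(f"q={q}: slope = {fit.params['demand']:.2f} months per month demanded")
```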