• Title/Abstract/Keyword: 코퍼스 (corpus)

Search results: 487

Research on Construction of the Korean Speech Corpus in Patient with Velopharyngeal Insufficiency (구개인두부전증 환자의 한국어 음성 코퍼스 구축 방안 연구)

  • Lee, Ji-Eun;Kim, Wook-Eun;Kim, Kwang Hyun;Sung, Myung-Whun;Kwon, Tack-Kyun
    • Korean Journal of Otorhinolaryngology-Head and Neck Surgery / v.55 no.8 / pp.498-507 / 2012
  • Background and Objectives: We aimed to develop a Korean version of the velopharyngeal insufficiency (VPI) speech corpus system. Subjects and Method: After developing a 3-channel simultaneous speech recording device capable of recording nasal, oral, and compound speech separately, voice data were collected from VPI patients aged 10 years or older, with or without a history of surgery or prior speech therapy. These were compared with a control group in which VPI was simulated by inserting a 3-French Nelaton tube through both nostrils into the nasopharynx and pulling the soft palate anteriorly to varying degrees. Transcription involved three transcribers: a speech therapist transcribed the voice files into text, a second transcriber graded speech intelligibility and severity, and a third tagged the types and onset times of misarticulations. The database was composed of three main tables covering (1) speaker demographics, (2) the condition of the recording system, and (3) transcripts. All of these were interfaced with the Praat voice analysis program, which enables the user to extract the exact transcribed phrases for analysis. Results: In the simulated VPI group, the higher the severity of VPI, the higher the nasalance score. In addition, we could verify the vocal energy that characterizes hypernasality and compensation in the nasal, oral, and compound sounds spoken by VPI patients, as opposed to that of the normal control group. Conclusion: With the Korean version of the VPI speech corpus system, patients' common difficulties and articulation tendencies can be evaluated objectively. By comparing these data with those of normal voices, the mispronunciations and dysarticulations of patients with VPI can be corrected.
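
As a rough illustration of the three-table database described above, here is a minimal sketch (not from the paper) of such a layout in SQLite; all table and column names are assumptions made for illustration only.

```python
import sqlite3

# Hypothetical schema mirroring the three main tables named in the abstract:
# speaker demographics, recording-system conditions, and transcripts.
conn = sqlite3.connect("vpi_corpus.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS speaker (
    speaker_id     INTEGER PRIMARY KEY,
    age            INTEGER,
    sex            TEXT,
    vpi_status     TEXT,     -- patient / simulated control
    prior_surgery  INTEGER,  -- 0/1
    prior_therapy  INTEGER   -- 0/1
);
CREATE TABLE IF NOT EXISTS recording (
    recording_id INTEGER PRIMARY KEY,
    speaker_id   INTEGER REFERENCES speaker(speaker_id),
    channel      TEXT,       -- nasal / oral / compound
    wav_path     TEXT
);
CREATE TABLE IF NOT EXISTS transcript (
    transcript_id        INTEGER PRIMARY KEY,
    recording_id         INTEGER REFERENCES recording(recording_id),
    text                 TEXT,
    intelligibility      REAL,
    severity             REAL,
    misarticulation_type TEXT,
    onset_time_s         REAL  -- could be aligned with Praat TextGrid intervals
);
""")
conn.commit()
conn.close()
```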

Current status of automatic translation service by artificial intelligence specialized in Korean astronomical classics (천문 고문헌 특화 인공지능 자동번역 서비스의 현황)

  • Seo, Yoon Kyung;Kim, Sang Hyuk;Ahn, Young Sook;Choi, Go-Eun;Choi, Young Sil;Baik, Hangi;Sun, Bo Min;Kim, Hyun Jin;Choi, Byung Sook;Lee, Sahng Woon;Park, Raejin
    • The Bulletin of The Korean Astronomical Society / v.46 no.2 / pp.64.3-65 / 2021
  • The automatic translator of classical Chinese texts based on artificial intelligence and machine learning serves not only the Seungjeongwon Ilgi (Daily Records of the Royal Secretariat) but is also specialized for astronomical records among Korean historical documents, translating astronomical classics written in Chinese characters into Korean. The Korea Astronomy and Space Science Institute completed the development of this automatic translator by participating, jointly with the Institute for the Translation of Korean Classics, in the 2019 Information and Communication Technology-based public service promotion project organized by the National Information Society Agency. The purpose of the translator is to automatically translate sentence-form classical Chinese into Korean, even if only at the level of a rough first draft, and this study analyzes the current operational status of the translator by service. There are three main services related to automatic translation. First, a public automatic translation service for classical Chinese texts that anyone can use through the web; after a year of internal testing, a trial version was opened on January 12, 2021 and is currently in operation. Second, a dissemination platform service for automatic translation of classical Chinese texts, which manages the corpora built by each institution and the domain-specific translation models; it is provided on a cloud basis together with the public service, and the Institute for the Translation of Korean Classics is responsible for its management. Third, an automatic translation service for astronomical classics that can be used internally at the Korea Astronomy and Space Science Institute through the automatic translation Application Programming Interface (API). The analysis of service status uses the statistics function of the dashboard aggregated and provided by the dissemination platform, which corresponds to the institution-level management service. For each service, it is possible to analyze not only the number of sentence and file translations, translation speed, and average character count, but also usage rates by translation model profile. One of the main findings is that the total number of translations this year is predicted to approach about 380,000, corresponding to the performance target of 87% of the average annual number of visitors to each institution. Along with shortening the time needed to decipher original texts, this automatic translator will increase the usability of untranslated astronomical documents and contribute to a variety of studies.
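
Purely as an illustration of how a client might call such a translation service programmatically, here is a hedged sketch using the requests library; the endpoint URL, authentication scheme, and payload fields are hypothetical placeholders, not the service's documented interface.

```python
import requests

# Hypothetical endpoint and credential; the real API of the service is not documented here.
API_URL = "https://example.org/classics-translate/api/v1/translate"  # placeholder URL
API_KEY = "YOUR_API_KEY"  # placeholder credential

def translate_classical_chinese(text: str, model_profile: str = "astronomy") -> str:
    """Send a classical Chinese sentence and return a draft Korean translation."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"source_text": text, "model": model_profile},  # assumed field names
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("translation", "")

if __name__ == "__main__":
    # Example sentence: "there was a solar eclipse"
    print(translate_classical_chinese("日有食之"))
```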

Personalized Chit-chat Based on Language Models (언어 모델 기반 페르소나 대화 모델)

  • Jang, Yoonna;Oh, Dongsuk;Lim, Jungwoo;Lim, Heuiseok
    • Annual Conference on Human and Language Technology / 2020.10a / pp.491-494 / 2020
  • As language model technology has recently advanced, many studies in natural language processing have achieved strong performance. In the field of open-domain dialogue systems, which hold casual conversations with humans without a fixed topic, it has likewise become possible to generate more natural utterances than before. Advances in language models have also contributed to response selection, helping models choose responses that fit the context. However, problems have emerged in which dialogue models produce inconsistent responses or give only generic answers that lack specificity. To address this, persona dialogue data and tasks, in which conversations are grounded in a speaker's personalized information, have been studied. In the persona dialogue task, each speaker is given a persona, and when conversing the model must select or generate responses that are consistent with the given persona. Accordingly, we discuss a persona dialogue system that selects more appropriate responses by leveraging language models pre-trained on large corpora. The experiments used GPT-2 and DialoGPT, which are auto-regressive language models, BERT, which is based on an auto-encoder, and BART, whose architecture combines the two. In this paper we thus ran comparative experiments with several kinds of language models on the persona dialogue task, and the results showed that BERT achieved the best performance in terms of the Hits@1 score.
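
As a brief illustration of the Hits@1 metric used for response selection above, here is a minimal sketch that assumes each test item provides model scores for its candidate responses and the index of the gold response; the scoring model itself is left abstract.

```python
from typing import List

def hits_at_1(candidate_scores: List[List[float]], gold_indices: List[int]) -> float:
    """Fraction of examples where the top-scored candidate is the gold response."""
    assert len(candidate_scores) == len(gold_indices)
    hits = 0
    for scores, gold in zip(candidate_scores, gold_indices):
        predicted = max(range(len(scores)), key=lambda i: scores[i])
        if predicted == gold:
            hits += 1
    return hits / len(gold_indices)

# Toy example: two test items, each with three candidate responses.
scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
gold = [1, 0]
print(hits_at_1(scores, gold))  # 1.0
```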

The Relationship between Syntactic Complexity Indices and Scores on Language Use in the Analytic Rating Scale (통사적 복잡성과 분석적 척도의 언어 사용 점수간의 관계 탐색)

  • Young-Ju Lee
    • The Journal of the Convergence on Culture Technology / v.9 no.5 / pp.229-235 / 2023
  • This study investigates the relationship between syntactic complexity indices and scores on language use in Jacobs et al.'s (1981) analytic rating scale. Syntactic complexity indices obtained from the TAASSC program were analyzed for 440 essays written by EFL students from the ICNALE corpus. Specifically, this study explores the relationship between language use scores and Lu's (2011) traditional syntactic complexity indices, phrasal complexity indices, and clausal complexity indices, respectively. Results of the stepwise regression analysis showed that the phrasal complexity indices were the best predictor of language use scores, although the variance in language use scores they explained was relatively small compared with the previous study. Implications of the findings of the current study for writing instruction (i.e., syntactic structures at the phrase level) are also discussed.
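
As a generic sketch (not the authors' analysis script) of regressing language use scores on phrasal complexity indices, here is a minimal example with statsmodels; the column names and toy values are assumptions, and the stepwise selection step is omitted.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame: one row per essay, with TAASSC-style phrasal
# complexity indices and an analytic "language use" score (column names assumed).
df = pd.DataFrame({
    "language_use":        [17, 20, 22, 18, 25, 21, 19, 23],
    "noun_phrase_density": [1.2, 1.5, 1.8, 1.3, 2.1, 1.7, 1.4, 1.9],
    "deps_per_nominal":    [0.8, 0.9, 1.1, 0.7, 1.3, 1.0, 0.9, 1.2],
})

X = sm.add_constant(df[["noun_phrase_density", "deps_per_nominal"]])
model = sm.OLS(df["language_use"], X).fit()
print(model.summary())  # coefficients indicate how the phrasal indices predict scores
```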

Comparing the effects of letter-based and syllable-based speaking rates on the pronunciation assessment of Korean speakers of English (철자 기반과 음절 기반 속도가 한국인 영어 학습자의 발음 평가에 미치는 영향 비교)

  • Hyunsong Chung
    • Phonetics and Speech Sciences / v.15 no.4 / pp.1-10 / 2023
  • This study investigated the relative effectiveness of letter-based versus syllable-based measures of speech rate and articulation rate in predicting the articulation score, prosody fluency, and rating sum using "English speech data of Koreans for education" from AI Hub. We extracted and analyzed 900 utterances from the training data, including three balanced age groups (13, 19, and 26 years old). The study built three models that best predicted the pronunciation assessment scores using linear mixed-effects regression and compared the predicted scores with the actual scores from the validation data (n=180). The correlation coefficients between them were also calculated. The findings revealed that syllable-based measures of speech and articulation rates were more effective than letter-based measures in all three pronunciation assessment categories. The correlation coefficients between the predicted and actual scores ranged from .65 to .68, indicating the models' good predictive power. However, it remains inconclusive whether speech rate or articulation rate is more effective.
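
As a simple illustration of letter-based versus syllable-based rate measures, here is a minimal sketch; the counting conventions and example values are assumptions rather than the paper's exact definitions.

```python
def letter_based_rate(text: str, duration_s: float) -> float:
    """Letters per second, ignoring spaces and punctuation."""
    letters = [ch for ch in text if ch.isalpha()]
    return len(letters) / duration_s

def syllable_based_rate(n_syllables: int, duration_s: float) -> float:
    """Syllables per second; syllable counts would come from a lexicon or G2P tool."""
    return n_syllables / duration_s

utterance = "I really enjoy learning English"
print(letter_based_rate(utterance, duration_s=2.4))        # rate in letters per second
print(syllable_based_rate(n_syllables=9, duration_s=2.4))  # rate in syllables per second
```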

Multi-Label Classification for Corporate Review Text: A Local Grammar Approach (머신러닝 기반의 기업 리뷰 다중 분류: 부분 문법 적용을 중심으로)

  • HyeYeon Baek;Young Kyun Chang
    • Information Systems Review / v.25 no.3 / pp.27-41 / 2023
  • Unlike previous works that focus on state-of-the-art methodologies to improve the performance of machine learning models, this study improves the 'quality' of the training data used in machine learning. We propose a method to enhance training data quality through the processing of 'local grammar,' which is frequently used in corpus analysis. We collected a large amount of unstructured corporate review text posted by employees of the top 100 companies in Korea. After improving the data quality with the local grammar process, we confirmed that the classification model with local grammar outperformed the model without it in terms of classification performance. We defined five factors of work engagement as the classification categories and analyzed how the pattern of reviews changed before and after the COVID-19 pandemic. Through this study, we provide evidence for the value of local grammar-based automatic identification and classification of employee experiences and offer clues to significant organizational-culture phenomena.
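
As a generic sketch of multi-label classification over review text (not the authors' local grammar pipeline), here is a minimal example using scikit-learn; the toy reviews and the label names standing in for the five work engagement factors are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy corpus of (review text, labels) pairs; labels stand in for work engagement factors.
reviews = [
    "Flexible hours and supportive team culture",
    "Heavy workload but meaningful projects",
    "Good pay, little recognition from management",
    "Remote work made collaboration harder",
]
labels = [["work_life_balance", "culture"], ["workload"], ["compensation"], ["collaboration"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # local-grammar-style patterns could refine these features
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(reviews, Y)
pred = clf.predict(["supportive culture and flexible schedule"])
print(mlb.inverse_transform(pred))
```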

Characteristics of Greenhouse Gas Emissions from Charcoal Kiln (숯가마에서 발생하는 온실가스 배출 특성)

  • Lee, Seul-Ki;Jeon, Eui-Chan;Park, Seong-Kyu;Choi, Sang-Jin
    • Journal of Climate Change Research / v.4 no.2 / pp.115-126 / 2013
  • Recently, Korea has been considering biomass burning as an emission source that reflects national circumstances and has included it in the emission source inventory, but little preceding research has been carried out in Korea. A study on the characteristics of greenhouse gas emissions from biomass burning is therefore necessary, and it will also make source management more effective when a climate-atmospheric management system takes effect. In this study, a manufactured charcoal kiln was used, and the experiment was repeated three times to obtain reliable results. The sampling times were determined by the changes of state in the charcoal kiln and the stages of the charcoal manufacturing process. The greenhouse gas emission factors calculated for the charcoal kiln were 668 g CO₂/kg, 20 g CH₄/kg, and 0.01 g N₂O/kg. Using the emission factors developed in this study, the emissions from charcoal kilns in Korea were estimated at 46,040 ton CO₂/yr, 1,378 ton CH₄/yr, and 0.69 ton N₂O/yr; applying GWP values, CH₄ emissions were 28,947 ton CO₂eq./yr and N₂O emissions were 214 ton CO₂eq./yr. As a result, the gross emissions of charcoal kilns in Korea were 75,201 ton CO₂eq./yr, but because the oak used in this study is biomass, its CO₂ emissions are excluded; therefore, the net emissions of charcoal kilns in Korea were 29,161 ton CO₂eq./yr.
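
As a check on the aggregation arithmetic quoted above, here is a short sketch; the GWP values of 21 for CH₄ and 310 for N₂O are assumptions consistent with the reported figures, and the small differences from the reported totals come from rounding of the intermediate values.

```python
# National emissions estimated with the study's emission factors (ton/yr).
co2, ch4, n2o = 46_040, 1_378, 0.69

# Assumed 100-year global warming potentials consistent with the quoted numbers.
GWP_CH4, GWP_N2O = 21, 310

ch4_eq = ch4 * GWP_CH4          # ≈ 28,938 ton CO2eq./yr (reported: 28,947)
n2o_eq = n2o * GWP_N2O          # ≈ 214 ton CO2eq./yr (reported: 214)
gross = co2 + ch4_eq + n2o_eq   # ≈ 75,192 ton CO2eq./yr (reported: 75,201)
net = ch4_eq + n2o_eq           # CO2 excluded as biogenic; ≈ 29,152 (reported: 29,161)

print(round(ch4_eq), round(n2o_eq), round(gross), round(net))
```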

Alleviating Semantic Term Mismatches in Korean Information Retrieval (한국어 정보 검색에서 의미적 용어 불일치 완화 방안)

  • Yun, Bo-Hyun;Park, Sung-Jin;Kang, Hyun-Kyu
    • The Transactions of the Korea Information Processing Society / v.7 no.12 / pp.3874-3884 / 2000
  • An information retrieval system has to retrieve all and only the documents that are relevant to a user query, even if the index terms and query terms are not matched exactly. However, term mismatches between index terms and query terms have been a serious obstacle to improving retrieval performance. In this paper, we discuss automatic term normalization between words in text corpora and its application to a Korean information retrieval system. We perform two types of term normalization to alleviate semantic term mismatches: equivalence classes and co-occurrence clusters. First, transliterations, spelling errors, and synonyms are normalized into equivalence classes using contextual similarity. Second, context-based terms are normalized using a combination of mutual information and word context to establish word similarities; unsupervised clustering is then performed with the K-means algorithm to identify co-occurrence clusters. These normalized terms are used in query expansion to alleviate semantic term mismatches; in other words, we utilize the two kinds of term normalization, equivalence classes and co-occurrence clusters, to expand users' queries with new terms, making the queries either more comprehensive (by adding transliterations) or more specific (by adding specializations). For query expansion, we employ two complementary methods: term suggestion and term relevance feedback. The experimental results show that the proposed system can alleviate semantic term mismatches and provide appropriate similarity measurements, and consequently that it can improve the retrieval efficiency of the information retrieval system.
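
As a loose sketch of the co-occurrence clustering idea (not the paper's exact normalization procedure), here is a minimal example that builds word co-occurrence vectors from a toy corpus and clusters them with K-means via scikit-learn; the corpus and window definition are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy corpus; in practice these would be sentences from a large Korean text corpus.
sentences = [
    "stock price rises on earnings report",
    "stock market falls after earnings warning",
    "patient shows symptoms after treatment",
    "treatment improves patient symptoms quickly",
]
tokens = [s.split() for s in sentences]
vocab = sorted({w for sent in tokens for w in sent})
index = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a sentence-sized window.
cooc = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for v in sent[i + 1:]:
            cooc[index[w], index[v]] += 1
            cooc[index[v], index[w]] += 1

# Cluster words by their co-occurrence profiles (a stand-in for the paper's similarity measure).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(cooc)
for w in vocab:
    print(w, km.labels_[index[w]])
```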

A Study of Keyword Spotting System Based on the Weight of Non-Keyword Model (비핵심어 모델의 가중치 기반 핵심어 검출 성능 향상에 관한 연구)

  • Kim, Hack-Jin;Kim, Soon-Hyub
    • The KIPS Transactions:PartB / v.10B no.4 / pp.381-388 / 2003
  • This paper presents a method of assigning weights to garbage-class clustering and the filler model to improve the performance of a keyword spotting system, together with a time-saving method for a dialogue speech processing system that calculates keyword transition probabilities from an analysis of the speech of task-domain users. The key idea is to group phonemes by phonetic similarity, which is more effective for detecting similar phoneme groups than handling individual phonemes, and the paper proposes five phoneme groups obtained from an analysis of spoken sentences in Korean morphology and in a stock-trading speech processing system. In addition, task-specific filler model weights are assigned to the phoneme groups, and the keyword transition probabilities contained in continuous speech are calculated and applied to the system to reduce processing time. To evaluate the proposed system, a corpus of 4,970 sentences was built for the task domains, and tests were conducted with five subjects in their twenties and thirties. As a result, the FOM with weights on the proposed five phoneme groups was 85%, showing better performance than the seven phoneme groups of Yapanel [1] at 88.5% and slightly poorer performance than LVCSR at 89.8%. In terms of computation time as well, the proposed method required 0.70 seconds compared with 0.72 seconds for the seven phoneme groups. Finally, a timing test confirmed that applying the keyword transition probabilities saves 0.04 to 0.07 seconds.
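
As a loose illustration of estimating keyword transition probabilities from task-domain utterances, here is a minimal bigram sketch; the toy keyword sequences and the maximum-likelihood estimate without smoothing are assumptions, not the paper's exact formulation.

```python
from collections import Counter

# Toy keyword sequences extracted from task-domain (e.g., stock-trading) utterances.
keyword_sequences = [
    ["buy", "samsung", "shares"],
    ["sell", "samsung", "shares"],
    ["buy", "hyundai", "shares"],
]

bigrams = Counter()
unigrams = Counter()
for seq in keyword_sequences:
    for prev, cur in zip(seq, seq[1:]):
        bigrams[(prev, cur)] += 1
        unigrams[prev] += 1

def transition_prob(prev: str, cur: str) -> float:
    """Maximum-likelihood estimate of P(cur | prev) over observed keyword bigrams."""
    if unigrams[prev] == 0:
        return 0.0
    return bigrams[(prev, cur)] / unigrams[prev]

print(transition_prob("buy", "samsung"))     # 0.5
print(transition_prob("samsung", "shares"))  # 1.0
```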

The Ability of L2 LSTM Language Models to Learn the Filler-Gap Dependency

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information / v.25 no.11 / pp.27-40 / 2020
  • In this paper, we investigate the correlation between the amount of English sentences that Korean English learners (L2ers) are exposed to and their sentence processing patterns by examining what Long Short-Term Memory (LSTM) language models (LMs) can learn about implicit syntactic relationship: that is, the filler-gap dependency. The filler-gap dependency refers to a relationship between a (wh-)filler, which is a wh-phrase like 'what' or 'who' overtly in clause-peripheral position, and its gap in clause-internal position, which is an invisible, empty syntactic position to be filled by the (wh-)filler for proper interpretation. Here to implement L2ers' English learning, we build LSTM LMs that in turn learn a subset of the known restrictions on the filler-gap dependency from English sentences in the L2 corpus that L2ers can potentially encounter in their English learning. Examining LSTM LMs' behaviors on controlled sentences designed with the filler-gap dependency, we show the characteristics of L2ers' sentence processing using the information-theoretic metric of surprisal that quantifies violations of the filler-gap dependency or wh-licensing interaction effects. Furthermore, comparing L2ers' LMs with native speakers' LM in light of processing the filler-gap dependency, we not only note that in their sentence processing both L2ers' LM and native speakers' LM can track abstract syntactic structures involved in the filler-gap dependency, but also show using linear mixed-effects regression models that there exist significant differences between them in processing such a dependency.
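
As a schematic sketch of the surprisal metric mentioned above, i.e. surprisal(w_t) = -log2 P(w_t | w_1...w_{t-1}), here is a minimal example that computes it from an untrained toy LSTM language model in PyTorch; the vocabulary, model size, and sentence are assumptions and serve only to show where the quantity comes from.

```python
import math
import torch
import torch.nn as nn

# Tiny toy vocabulary and an untrained LSTM LM; a real study would train on an L2 or L1 corpus,
# so the numbers printed here are meaningless apart from illustrating the computation.
vocab = ["<bos>", "what", "did", "the", "student", "read", "?"]
stoi = {w: i for i, w in enumerate(vocab)}

class LSTMLM(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, ids):
        h, _ = self.lstm(self.emb(ids))
        return self.out(h)  # logits over the next token at each position

model = LSTMLM(len(vocab))
sentence = ["<bos>", "what", "did", "the", "student", "read", "?"]
ids = torch.tensor([[stoi[w] for w in sentence]])

with torch.no_grad():
    log_probs = torch.log_softmax(model(ids), dim=-1)

# Surprisal of each word given its left context: -log2 P(w_t | w_<t).
for t in range(1, len(sentence)):
    lp = log_probs[0, t - 1, stoi[sentence[t]]].item()
    print(f"{sentence[t]:>8s}  surprisal = {-lp / math.log(2):.2f} bits")
```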