• Title/Summary/Keyword: Lexical model


한국어 어휘 처리 과정에서 글자 정보와 발음 정보의 연결성 (Orthographic and phonological links in Korean lexical processing)

  • 김지순
    • 한국정보과학회 언어공학연구회:학술대회논문집(한글 및 한국어 정보처리)
    • /
    • 한국정보과학회언어공학연구회 1995년도 제7회 한글 및 한국어 정보처리 학술대회
    • /
    • pp.211-214
    • /
    • 1995
  • At what level of orthographic representation is phonology linked in the lexicon? Is it at the whole-word level, the syllable level, the letter level, etc.? This question can be addressed by comparing the two scripts used in Korean, logographic Hanmoon and alphabetic/syllabic Hangul, on a task where judgements must be made about the phonology of a visually presented word. Four experiments are reported using a "homophone decision task", manipulating the sub-lexical relationship between orthography and phonology in Hanmoon and Hangul, and the lexical status of the stimuli. Hangul words showed a much higher error rate in judging whether there was another word pronounced identically than both Hangul nonwords and Hanmoon words. It is concluded that the relationship between orthography and phonology in the lexicon differs according to the type of script owing to the availability of sub-lexical information: the process of making a homophone decision is based on a spread of activation exclusively among lexical entries, from orthography to phonology and vice versa (called "Orthography-Phonology-Orthography Rebound" or "OPO Rebound"). The results are explained within a multilevel interactive activation model with orthographic units linked to phonological units at each level.


양방향 LSTM을 적용한 단어의미 중의성 해소 감정분석 (Emotion Analysis Using a Bidirectional LSTM for Word Sense Disambiguation)

  • 기호연;신경식
    • 한국빅데이터학회지
    • /
    • Vol.5 No.1
    • /
    • pp.197-208
    • /
    • 2020
  • Lexical ambiguity refers to cases where a word can be interpreted in two or more senses, as with homonyms and polysemous words, and many emotion-bearing words are lexically ambiguous as well. Such words are concrete in that they reflect human psychology, and they convey rich context. This study proposes an emotion classification model that resolves lexical ambiguity by applying a bidirectional LSTM. It rests on the assumption that, if the surrounding context is sufficiently taken into account, the lexical ambiguity problem can be resolved and the emotion a sentence expresses can be condensed into a single label. The bidirectional LSTM is frequently used in natural language processing research that requires contextual information, and it is used here to learn context. GloVe embeddings serve as the embedding layer of the proposed model, and its performance was verified by comparison with models using LSTM and RNN algorithms. This framework can contribute to various fields, such as marketing that links the emotions of SNS users to consumption desires.
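The paper's bidirectional LSTM cannot be reproduced in a few lines, but the core idea it exploits, that an ambiguous word is disambiguated by words on both its left and its right, can be sketched in plain Python. The sense inventory and cue words below are invented for illustration only.

```python
# Toy sketch of the core idea behind bidirectional-context WSD:
# an ambiguous word's sense is scored using words on BOTH sides,
# which is the information a bidirectional LSTM provides over a
# forward-only model. Senses and cue words are invented examples.

SENSE_CUES = {
    "luminous": {"light", "sun", "lamp"},      # "bright" as shining
    "smart": {"student", "idea", "answer"},    # "bright" as intelligent
}

def disambiguate(tokens, target_index):
    """Pick the sense whose cue words overlap most with the
    full (left + right) context of the target word."""
    context = set(tokens[:target_index] + tokens[target_index + 1:])
    scores = {sense: len(cues & context) for sense, cues in SENSE_CUES.items()}
    return max(scores, key=scores.get)

tokens = ["the", "bright", "student", "gave", "the", "right", "answer"]
print(disambiguate(tokens, 1))  # -> "smart"
```

A real model would replace the hand-written cue sets with learned hidden states, but the left-plus-right context set is the same signal.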

의미적 유사성과 그래프 컨볼루션 네트워크 기법을 활용한 엔티티 매칭 방법 (Entity Matching Method Using Semantic Similarity and Graph Convolutional Network Techniques)

  • 단홍조우;이용주
    • 한국전자통신학회논문지
    • /
    • Vol.17 No.5
    • /
    • pp.801-808
    • /
    • 2022
  • Research on how to embed knowledge into large-scale linked data, and how to apply neural network models to entity matching, is still relatively scarce. The most fundamental problem is that differing labels cause lexical heterogeneity. To address this lexical heterogeneity problem, this paper proposes an extended GCN (Graph Convolutional Network) model that incorporates a re-ranking structure. The proposed model improved performance by 53% and 40% over the existing embedding-based MTransE and BootEA models, respectively, and by 5.1% over the GCN-based RDGCN model.
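A single GCN propagation step, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W), is the building block such entity-matching models stack. A minimal plain-Python sketch on a toy three-node graph (graph, features, and weights are invented; a real model learns W):

```python
import math

# One GCN propagation step on a toy path graph 0-1-2.
def gcn_layer(adj, feats, weight):
    n = len(adj)
    # Add self-loops: A_hat = A + I
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a_hat]
    # Symmetric normalization: D^-1/2 A_hat D^-1/2
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
    # Aggregate neighbour features: norm @ feats
    agg = [[sum(norm[i][k] * feats[k][f] for k in range(n))
            for f in range(len(feats[0]))] for i in range(n)]
    # Linear transform + ReLU: relu(agg @ weight)
    return [[max(0.0, sum(agg[i][f] * weight[f][o] for f in range(len(weight))))
             for o in range(len(weight[0]))] for i in range(n)]

adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # path graph 0-1-2
feats = [[1.0], [2.0], [3.0]]            # one feature per node
weight = [[1.0]]                         # trivial 1x1 weight
print(gcn_layer(adj, feats, weight))
```

Each node's output mixes its own feature with its neighbours', which is what lets label embeddings of matching entities converge across graphs.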

Exploring the feasibility of fine-tuning large-scale speech recognition models for domain-specific applications: A case study on Whisper model and KsponSpeech dataset

  • Jungwon Chang;Hosung Nam
    • 말소리와 음성과학
    • /
    • Vol.15 No.3
    • /
    • pp.83-88
    • /
    • 2023
  • This study investigates the fine-tuning of large-scale Automatic Speech Recognition (ASR) models, specifically OpenAI's Whisper model, for domain-specific applications using the KsponSpeech dataset. The primary research questions address the effectiveness of targeted lexical item emphasis during fine-tuning, its impact on domain-specific performance, and whether the fine-tuned model can maintain generalization capabilities across different languages and environments. Experiments were conducted using two fine-tuning datasets: Set A, a small subset emphasizing specific lexical items, and Set B, consisting of the entire KsponSpeech dataset. Results showed that fine-tuning with targeted lexical items increased recognition accuracy and improved domain-specific performance, with generalization capabilities maintained when fine-tuned with a smaller dataset. For noisier environments, a trade-off between specificity and generalization capabilities was observed. This study highlights the potential of fine-tuning using minimal domain-specific data to achieve satisfactory results, emphasizing the importance of balancing specialization and generalization for ASR models. Future research could explore different fine-tuning strategies and novel technologies such as prompting to further enhance large-scale ASR models' domain-specific performance.
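Fine-tuning Whisper itself requires GPUs and the KsponSpeech data, but the metric behind "recognition accuracy" claims like these, word error rate (WER), is compact: the word-level edit distance between hypothesis and reference, divided by the reference length. A self-contained sketch:

```python
# Word error rate: edit distance over words, divided by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("fine tuning improves domain accuracy",
          "fine tuning improve accuracy"))  # 1 substitution + 1 deletion over 5 words = 0.4
```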

A Hidden Markov Model Imbedding Multiword Units for Part-of-Speech Tagging

  • Kim, Jae-Hoon;Jungyun Seo
    • Journal of Electrical Engineering and Information Science
    • /
    • Vol.2 No.6
    • /
    • pp.7-13
    • /
    • 1997
  • Morphological analysis of Korean is known to be a very complicated problem. In particular, the degree of part-of-speech (POS) ambiguity is much higher than in English. Many researchers have tried to use a hidden Markov model (HMM) to solve the POS tagging problem and have shown around a 95% correctness ratio. However, the lack of lexical information causes many difficulties in improving the performance of a hidden Markov model for POS tagging. To alleviate this burden, this paper proposes a method for combining multiword units, which are a type of lexical information, into a hidden Markov model for POS tagging. It also proposes a method for extracting multiword units from a POS-tagged corpus. In this paper, a multiword unit is defined as a unit consisting of more than one word. We found that these multiword units are a major source of POS tagging errors. Our experiment shows that the error reduction rate of the proposed method is about 13%.
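The decoding that underlies such an HMM tagger is Viterbi search over tag sequences. A minimal sketch with a two-tag toy model (the tag set and all probabilities are invented; the paper's contribution is to add multiword units as extra lexical entries in a model of this shape):

```python
# Viterbi decoding for an HMM POS tagger on a toy two-tag model.
def viterbi(words, tags, start_p, trans_p, emit_p):
    # v[i][t] = probability of the best tag path ending in tag t at word i
    v = [{t: start_p[t] * emit_p[t].get(words[0], 1e-9) for t in tags}]
    back = [{}]
    for i in range(1, len(words)):
        v.append({})
        back.append({})
        for t in tags:
            prev = max(tags, key=lambda p: v[i - 1][p] * trans_p[p][t])
            v[i][t] = v[i - 1][prev] * trans_p[prev][t] * emit_p[t].get(words[i], 1e-9)
            back[i][t] = prev
    last = max(tags, key=lambda t: v[-1][t])
    path = [last]
    for i in range(len(words) - 1, 0, -1):
        path.append(back[i][path[-1]])
    return list(reversed(path))

tags = ["N", "V"]
start_p = {"N": 0.7, "V": 0.3}
trans_p = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.8, "V": 0.2}}
emit_p = {"N": {"dogs": 0.6, "bark": 0.1}, "V": {"dogs": 0.05, "bark": 0.7}}
print(viterbi(["dogs", "bark"], tags, start_p, trans_p, emit_p))  # -> ['N', 'V']
```

Treating a frequent multiword unit as one emission symbol simply adds an entry to `emit_p`, which is how such units reduce ambiguity at decoding time.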


형태소 기반의 한국어 방송뉴스 인식 (Morpheme-based Korean broadcast news transcription)

  • 박영희;안동훈;정민화
    • 대한음성학회:학술대회논문집
    • /
    • 대한음성학회 2002년도 11월 학술대회지
    • /
    • pp.123-126
    • /
    • 2002
  • In this paper, we describe our LVCSR system for Korean broadcast news transcription. The main focus is to find the most appropriate morpheme-based lexical model for Korean broadcast news recognition, to deal with the inflectional flexibility of Korean. There are trade-offs between lexicon size and lexical coverage, and between the length of the lexical unit and the WER. In our system, we analyzed the training corpus to obtain a small 24k-morpheme lexicon with 98.8% coverage. The lexicon is then optimized by combining morphemes using statistics of the training corpus under a monosyllable constraint or a maximum-length constraint. In experiments, our system reduced the proportion of monosyllabic morphemes from 52% to 29% of the lexicon and obtained a WER of 13.24% for anchors and 24.97% for reporters.
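The two operations the abstract describes, measuring how much of a corpus a lexicon covers and merging frequent adjacent short units into longer ones, can be sketched generically. The corpus and the merge criterion below are toy stand-ins for the Korean morpheme statistics in the paper:

```python
from collections import Counter

def coverage(corpus_tokens, lexicon):
    """Fraction of corpus tokens found in the lexicon."""
    hits = sum(1 for t in corpus_tokens if t in lexicon)
    return hits / len(corpus_tokens)

def merge_frequent_pairs(corpus_tokens, min_count=2):
    """Merge adjacent token pairs seen at least `min_count` times,
    the way frequent monosyllabic morphemes were combined into
    longer lexical units."""
    pairs = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    frequent = {p for p, c in pairs.items() if c >= min_count}
    merged, i = [], 0
    while i < len(corpus_tokens):
        if i + 1 < len(corpus_tokens) and (corpus_tokens[i], corpus_tokens[i + 1]) in frequent:
            merged.append(corpus_tokens[i] + corpus_tokens[i + 1])
            i += 2
        else:
            merged.append(corpus_tokens[i])
            i += 1
    return merged

corpus = ["a", "b", "a", "b", "c", "a", "b"]
print(coverage(corpus, {"a", "b"}))   # 6/7
print(merge_frequent_pairs(corpus))   # ['ab', 'ab', 'c', 'ab']
```

Longer units trade lexicon size for fewer monosyllables, the same trade-off the paper tunes under its constraints.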


한국어 어휘자동획득 시스템 (An Automatic Korean Lexical Acquisition System)

  • 임희석
    • 한국산학기술학회논문지
    • /
    • Vol.8 No.5
    • /
    • pp.1087-1091
    • /
    • 2007
  • This paper proposes a computational automatic Korean lexical acquisition system that reflects the principles of human language acquisition. The proposed system takes as input a Korean corpus modeling human language use and automatically acquires eojeols (word forms) and morphemes for the eojeol and morpheme dictionaries used in language recognition. In an experiment with a 10-million-eojeol Korean corpus, the system acquired 2,097 eojeols and 3,488 morphemes. The summed occurrence frequency of the 2,097 acquired eojeols amounted to 38.63% of the 10 million eojeols, and morpheme extraction showed 99.87% accuracy.
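The coverage figure in the abstract (acquired entries accounting for 38.63% of corpus tokens) comes from frequency-driven acquisition: keep forms whose corpus frequency clears a threshold, then sum their occurrences. A toy sketch (the corpus and threshold are invented):

```python
from collections import Counter

def acquire(tokens, min_freq=2):
    """Acquire word forms seen at least `min_freq` times and report
    the fraction of all corpus tokens those entries account for."""
    counts = Counter(tokens)
    lexicon = {w for w, c in counts.items() if c >= min_freq}
    covered = sum(c for w, c in counts.items() if w in lexicon)
    return lexicon, covered / len(tokens)

tokens = "the cat sat on the mat the cat ran".split()
lexicon, cov = acquire(tokens)
print(sorted(lexicon), round(cov, 2))  # ['cat', 'the'] 0.56
```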


Perceptual weighting on English lexical stress by Korean learners of English

  • Goun Lee
    • 말소리와 음성과학
    • /
    • Vol.14 No.4
    • /
    • pp.19-24
    • /
    • 2022
  • This study examined which acoustic cue(s) Korean learners of English give weight to in perceiving English lexical stress. We manipulated segmental and suprasegmental cues in 5 steps in the first and second syllables of the English stress minimal pair "object". A total of 27 subjects (14 native speakers of English and 13 Korean L2 learners) participated in the English stress judgment task. The results revealed that native Korean listeners used the F0 and intensity cues in identifying English stress and, like native English listeners, weighted vowel quality most strongly. These results indicate that Korean learners' experience with these cues in L1 prosody can help them attend to the same cues in L2 perception. However, L2 learners' perceptual attention is not entirely predicted by their linguistic experience with specific acoustic cues in their native language.

한국어 음성인식 플랫폼(ECHOS)의 개선 및 평가 (Improvement and Evaluation of the Korean Large Vocabulary Continuous Speech Recognition Platform (ECHOS))

  • 권석봉;윤성락;장규철;김용래;김봉완;김회린;유창동;이용주;권오욱
    • 대한음성학회지:말소리
    • /
    • No.59
    • /
    • pp.53-68
    • /
    • 2006
  • We report the evaluation results of the Korean speech recognition platform called ECHOS. The platform has an object-oriented and reusable architecture so that researchers can easily evaluate their own algorithms. It has all the intrinsic modules needed to build a large-vocabulary speech recognizer: noise reduction, end-point detection, feature extraction, hidden Markov model (HMM)-based acoustic modeling, cross-word modeling, n-gram language modeling, n-best search, word-graph generation, and Korean-specific language processing. The platform supports both lexical search trees and finite-state networks. It performs word-dependent n-best search with a bigram in the forward search stage and rescores the lattice with a trigram in the backward stage. In an 8000-word continuous speech recognition task, the platform with a lexical tree increases word errors by 40% but decreases recognition time by 50% compared to the HTK platform with a flat lexicon. ECHOS reduces recognition errors by 40% through the incorporation of cross-word modeling. With the number of Gaussian mixtures increased to 16, it yields word accuracy comparable to the previous lexical-tree-based platform, Julius.
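A lexical search tree saves time because words sharing a pronunciation prefix share the initial part of the search network, unlike a flat lexicon that stores every word separately. The data structure is a trie; a minimal sketch over letter sequences (real recognizers build it over phone sequences):

```python
# Minimal trie: words sharing a prefix share nodes, which is why a
# lexical search tree explores fewer states than a flat lexicon.
def build_trie(words):
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = word  # mark a complete word at this node
    return root

def count_nodes(node):
    """Number of trie nodes, including the root, excluding end markers."""
    return 1 + sum(count_nodes(child) for k, child in node.items() if k != "$")

words = ["car", "card", "care", "cat"]
trie = build_trie(words)
# A flat lexicon stores 3+4+4+3 = 14 symbols; the trie shares "ca".
print(count_nodes(trie))  # 7
```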


의미기반 인덱스 추출과 퍼지검색 모델에 관한 연구 (A Study on Semantic Based Indexing and Fuzzy Relevance Model)

  • Kang, Bo-Yeong;Kim, Dae-Won;Gu, Sang-Ok;Lee, Sang-Jo
    • 한국정보과학회:학술대회논문집
    • /
    • 한국정보과학회 2002년도 봄 학술발표논문집 Vol.29 No.1 (B)
    • /
    • pp.238-240
    • /
    • 2002
  • If an information retrieval system comprehends the semantic content of documents and knows the preferences of users, it can search information on the Internet better and improve IR performance. We therefore propose an IR model that combines semantic-based indexing with a fuzzy relevance model. In addition to the statistical approach, we chose a semantic approach to indexing, lexical chains, because we assume it improves the performance of index-term extraction. Furthermore, we combined the semantic-based indexing with a fuzzy model, which finds the exact relevance between user preferences and index terms. The proposed system works as follows. First, it indexes documents by an efficient index-term extraction method using lexical chains. Then, when a user retrieves information from the indexed document collection, the extended IR model calculates and ranks the relevance of the user query, user preferences, and index terms by several metrics. When we experimented with each module, semantic-based indexing and the extended fuzzy model, each gave noticeable results. The combination of these modules is expected to improve information retrieval performance.
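The fuzzy part of such a model treats index-term weights and user-preference degrees as values in [0, 1] and combines them with fuzzy operators, commonly min as conjunction and max over query terms. A sketch with invented term weights (the paper's exact metrics are not specified in the abstract):

```python
# Fuzzy relevance sketch: index-term weights and user preferences are
# membership degrees in [0, 1]; min acts as fuzzy AND, max aggregates
# over the query terms. All weights below are invented for illustration.
def fuzzy_relevance(query_terms, doc_weights, preferences):
    """max over query terms of min(term weight in document, user preference)."""
    return max(
        min(doc_weights.get(t, 0.0), preferences.get(t, 1.0))
        for t in query_terms
    )

doc_weights = {"retrieval": 0.9, "fuzzy": 0.6, "index": 0.4}
preferences = {"retrieval": 0.8, "fuzzy": 1.0}
print(fuzzy_relevance(["retrieval", "fuzzy"], doc_weights, preferences))  # 0.8
```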
