• Title/Summary/Keyword: Sentence Feature

DAKS: A Korean Sentence Classification Framework with Efficient Parameter Learning based on Domain Adaptation (DAKS: 도메인 적응 기반 효율적인 매개변수 학습이 가능한 한국어 문장 분류 프레임워크)

  • Jaemin Kim;Dong-Kyu Chae
• Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.678-680 / 2023
  • This paper discusses an accurate yet efficient Korean sentence classification technique. In natural language processing, pre-trained language models (PLMs) have recently shown strong results on downstream sentence classification tasks through fine-tuning. However, fine-tuning has the drawback that all of the pre-trained model's parameters must be retrained whenever the downstream task changes. To address this problem, this paper proposes DAKS (Domain Adaptation-based Korean Sentence classification framework), a Korean sentence classification framework built on domain adapters. The framework performs efficiently by greatly reducing the number of trained parameters. In addition, we compare the results of using the hidden states of different hidden layers of a Korean pre-trained model (KLUE-RoBERTa) as features for sentence classification, and identify the most suitable hidden layer.
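
The adapter idea in DAKS can be illustrated with a minimal sketch: freeze the pre-trained encoder and train only a small bottleneck adapter and classification head on a chosen hidden layer's states. This is an illustrative reconstruction, not the authors' code; the bottleneck size and the use of the [CLS] state are assumptions.

```python
# Minimal sketch of adapter-based tuning: the pre-trained encoder is frozen
# and only a small bottleneck adapter plus classifier head are trained.
# Assumes PyTorch and Hugging Face transformers; hyperparameters are illustrative.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BottleneckAdapter(nn.Module):
    """Down-project, non-linearity, up-project, residual connection."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, h):
        return h + self.up(self.act(self.down(h)))

class AdapterClassifier(nn.Module):
    def __init__(self, encoder_name: str = "klue/roberta-base",
                 num_labels: int = 2, layer: int = -1):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        for p in self.encoder.parameters():   # freeze all PLM parameters
            p.requires_grad = False
        self.layer = layer                    # which hidden layer supplies the feature
        hidden = self.encoder.config.hidden_size
        self.adapter = BottleneckAdapter(hidden)
        self.head = nn.Linear(hidden, num_labels)

    def forward(self, **inputs):
        out = self.encoder(**inputs, output_hidden_states=True)
        h = out.hidden_states[self.layer][:, 0]   # [CLS] state of the chosen layer
        return self.head(self.adapter(h))

tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base")
model = AdapterClassifier()
batch = tokenizer(["이 문장은 긍정적이다."], return_tensors="pt")
logits = model(**batch)   # only adapter + head parameters receive gradients
```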

The Recognition of Korean Syllables using Parameter Based on Principal Component Analysis (PCA 기반 파라메타를 이용한 숫자음 인식)

  • 박경훈;표창수;김창근;허강인
• Proceedings of the Korea Institute of Convergence Signal Processing / 2000.12a / pp.181-184 / 2000
  • A new feature extraction method is proposed that, unlike conventional voice feature extraction methods, considers the statistical characteristics of the human voice. PCA (Principal Component Analysis) is applied in this method: PCA removes redundancy in the data after finding the axis directions with the greatest variance in the input dimensions. The method is then applied to real speech recognition to assess its performance. Comparing the digit recognition results in this paper with those of the conventional Mel-Cepstrum voice feature parameter, the difference in recognition rate is 0.5%. Because the method requires less convergence time than the conventional method when extracting voice features, better recognition rates can be expected for word or sentence recognition. A better recognition rate is also expected when the optimal vector is chosen according to the statistical characteristics of the data.
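
As a rough illustration of the PCA step described above, the following sketch centers a feature matrix, finds the axes of greatest variance via the covariance eigendecomposition, and projects onto the top components; the feature dimensions are assumed, not taken from the paper.

```python
# Minimal sketch of PCA-based feature reduction: find the directions of
# greatest variance and project onto them, discarding redundant dimensions.
# The matrix X is assumed to hold one voice feature vector per row.
import numpy as np

def pca_reduce(X: np.ndarray, n_components: int) -> np.ndarray:
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)     # covariance of input dimensions
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]          # sort axes by variance, descending
    W = eigvecs[:, order[:n_components]]       # top principal axes
    return X_centered @ W                      # projected features

X = np.random.randn(200, 26)     # e.g. 200 frames of 26-dim spectral features
X_reduced = pca_reduce(X, 12)    # keep the 12 highest-variance directions
```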

Improved Sentence Boundary Detection Method for Web Documents (웹 문서를 위한 개선된 문장경계인식 방법)

  • Lee, Chung-Hee;Jang, Myung-Gil;Seo, Young-Hoon
• Journal of KIISE:Software and Applications / v.37 no.6 / pp.455-463 / 2010
  • In this paper, we present an approach to sentence boundary detection for web documents that builds on statistics-based methods and uses rule-based correction. The proposed system uses a classification model learned offline from a training set of human-labeled web documents. Web documents contain many word-spacing errors and frequently lack the punctuation marks that indicate sentence boundaries. As sentence boundary candidates, the proposed method therefore considers every ending eomi (sentence-final verb ending) as well as every punctuation mark. We optimize engine performance by selecting the best features, the best training data, and the best classification algorithm. For evaluation, we built two test sets: Set1, consisting of news articles and blog documents, and Set2, consisting of web community documents, and we compare results using F-measure. Detecting only periods as sentence boundaries, our baseline engine scored 96.5% on Set1 and 56.7% on Set2. We then improved the baseline engine by adapting the features and the boundary search algorithm. In the final evaluation on Set2, the adapted engine improved on the baseline engine by 39.6%, demonstrating the effectiveness of the proposed method for sentence boundary detection.
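
A toy sketch of the candidate-and-classify approach is given below; the eomi list, feature set, and classifier are simplified placeholders rather than the paper's engine.

```python
# Illustrative sketch: every punctuation mark or sentence-final eomi is a
# boundary candidate, and a statistical classifier decides using features
# of the surrounding context. Eomi list and features are toy placeholders.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

ENDING_EOMIS = ("다", "요", "죠", "까")      # toy sentence-final endings

def candidates(text):
    for i, ch in enumerate(text):
        if ch in ".!?" or ch in ENDING_EOMIS:
            yield i

def features(text, i):
    return {
        "char": text[i],
        "next_is_space": text[i + 1].isspace() if i + 1 < len(text) else True,
        "prev_char": text[i - 1] if i > 0 else "<s>",
    }

def train(labeled):
    # labeled: [(text, position, is_boundary), ...] from human-labeled documents
    vec = DictVectorizer()
    X = vec.fit_transform(features(t, i) for t, i, _ in labeled)
    y = [b for _, _, b in labeled]
    clf = LogisticRegression().fit(X, y)
    return vec, clf
```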

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
• Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the ways to handle big data in text mining. When reducing dimensionality, we should consider the density of the data, which strongly influences the performance of sentence classification. Higher-dimensional data demands heavy computation and can cause high computational cost and overfitting in the model, so a dimension reduction step is necessary to improve model performance. Diverse methods have been proposed, from merely lessening noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. Moreover, how text features are represented and selected affects the performance of classifiers for sentence classification, one of the core tasks of natural language processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods use various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition, word embeddings, low-dimensional vector space representations of words that can capture semantic and syntactic information, are also used. To improve performance, recent studies have suggested modifying the word dictionary according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm marks certain words as unimportant, we expect the words similar to them to have little impact on sentence classification either. This study proposes two ways to achieve more accurate classification, which eliminate words selectively under specific rules and construct word embeddings based on Word2Vec. To select unimportant words from the text, we use information gain to measure importance and cosine similarity to find similar words. First, we eliminate words with comparatively low information gain values from the raw text and build word embeddings. Second, we additionally eliminate words that are similar to the low-information-gain words and build word embeddings. Finally, the filtered text and word embeddings are fed to deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset with the deep learning models. Reviews that received more than five helpful votes, with a ratio of helpful votes over 70%, were classified as helpful reviews; since Yelp only shows the number of helpful votes, we randomly sampled 100,000 reviews with more than five helpful votes from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared them against Word2Vec and GloVe embeddings that use all the words, and showed that one of the proposed methods outperforms the embeddings using all words: removing unimportant words improves performance, although removing too many words lowers it.
For future research, diverse preprocessing schemes and an in-depth analysis of word co-occurrence should be considered when measuring similarity between words. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods, making it possible to explore the combinations of word embedding methods and elimination methods.
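
A hedged sketch of the two elimination variants might look like the following, with sklearn's mutual_info_classif standing in for information gain and gensim's Word2Vec providing cosine-similar neighbors; the threshold and topn values are illustrative assumptions.

```python
# Sketch of the proposed elimination: drop words with low information gain,
# plus (second variant) words whose embeddings are close to them.
# mutual_info_classif stands in for information gain here.
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

def words_to_remove(docs, labels, ig_threshold=1e-4, topn=5):
    vec = CountVectorizer()
    X = vec.fit_transform(docs)
    ig = mutual_info_classif(X, labels, discrete_features=True)
    vocab = vec.get_feature_names_out()
    low_ig = {w for w, g in zip(vocab, ig) if g < ig_threshold}

    w2v = Word2Vec([d.split() for d in docs], vector_size=100, min_count=2)
    similar = set()
    for w in low_ig:
        if w in w2v.wv:                                  # skip rare words
            similar.update(s for s, _ in w2v.wv.most_similar(w, topn=topn))
    return low_ig | similar      # second variant: remove similar words too

def filter_docs(docs, removed):
    return [" ".join(t for t in d.split() if t not in removed) for d in docs]
```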

Korean Speech Act Tagging using Previous Sentence Features and Following Candidate Speech Acts (이전 문장 자질과 다음 발화의 후보 화행을 이용한 한국어 화행 분석)

  • Kim, Se-Jong;Lee, Yong-Hun;Lee, Jong-Hyeok
• Journal of KIISE:Software and Applications / v.35 no.6 / pp.374-385 / 2008
  • Speech act tagging is an important step in various dialogue applications; it recognizes the speaker's intentions expressed in natural-language utterances. Previous approaches, such as rule-based and statistics-based methods, utilize the speech acts of previous utterances and sentence features of the current utterance. This paper proposes a method that determines the speech act of the current utterance using the speech acts of the following utterances as well as the previous ones. Using the features of following utterances yields an accuracy of 95.27%, improving on previous methods by 3.65%. Moreover, sentence features of the previous utterances are employed to make maximal use of the information available to the current utterance. By applying the proper probability model for each speech act, a final accuracy of 97.97% is achieved.
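
The scoring idea can be sketched as follows; the probability tables and smoothing constant are hypothetical placeholders, not the paper's models.

```python
# Toy sketch: the current utterance's speech act is chosen to maximize a
# product of (a) its fit with the previous speech act, (b) its fit with the
# current sentence's features, and (c) its compatibility with the candidate
# speech acts of the following utterance. All tables are placeholders.
def tag_speech_act(prev_act, sent_features, next_candidates,
                   p_act_given_prev, p_feat_given_act, p_next_given_act):
    best_act, best_score = None, 0.0
    for act in p_act_given_prev[prev_act]:
        score = p_act_given_prev[prev_act][act]
        for f in sent_features:                      # current-sentence features
            score *= p_feat_given_act[act].get(f, 1e-6)
        score *= sum(p_next_given_act[act].get(n, 1e-6)
                     for n in next_candidates)       # following-utterance evidence
        if score > best_score:
            best_act, best_score = act, score
    return best_act
```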

Recognition of Restricted Continuous Korean Speech Using Perceptual Model (인지 모델을 이용한 제한된 한국어 연속음 인식)

  • Kim, Seon-Il;Hong, Ki-Won;Lee, Haing-Sei
• The Journal of the Acoustical Society of Korea / v.14 no.3 / pp.61-70 / 1995
  • In this paper, the PLP cepstrum, which is close to human perceptual characteristics, was extracted over a spread time area to capture temporal features. Phonemes were recognized by an artificial neural network, whose learning resembles that of humans, and the phoneme strings were matched by Markov models, which are well suited to sequences. Phoneme recognition for continuous Korean speech was performed using speech blocks in which unequal numbers of speech frames were grouped. We parameterized the blocks using 7th-order PLPs, PTP, zero-crossing rate, and energy, which the neural network used as inputs. The recognition used 100 utterances of 10 Korean sentences, each sentence pronounced five times by two men. As a result, a maximum phoneme recognition rate of 94.4% was obtained. Sentences were then recognized using Markov models built from the phoneme strings recognized in the earlier step; this recognition was carried out on 200 utterances, each sentence spoken 10 times by two men. A sentence recognition rate of 92.5% was obtained.
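
The Markov-model matching step might be sketched as follows, scoring a recognized phoneme string against per-sentence phoneme bigram models; the probability tables are assumed to be estimated from the training utterances.

```python
# Sketch of the sequence-matching step: each candidate sentence has a Markov
# model over phonemes, the recognized phoneme string is scored against each
# model, and the best-scoring sentence wins. Tables are hypothetical.
import math

def log_likelihood(phonemes, initial, transitions, floor=1e-6):
    score = math.log(initial.get(phonemes[0], floor))
    for a, b in zip(phonemes, phonemes[1:]):
        score += math.log(transitions.get((a, b), floor))
    return score

def recognize(phonemes, sentence_models):
    # sentence_models: {sentence: (initial_probs, transition_probs)}
    return max(sentence_models,
               key=lambda s: log_likelihood(phonemes, *sentence_models[s]))
```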

Processing Scrambled Wh-Constructions in Head-Final Languages: Dependency Resolution and Feature Checking

  • Hahn, Hye-ryeong;Hong, Seungjin
• Language and Information / v.18 no.2 / pp.59-79 / 2014
  • This paper explores the processing mechanism of filler-gap dependency resolution and feature checking in Korean wh-constructions. Based on their findings on Japanese sentence processing, Aoshima et al. (2004) argued that in head-final languages the parser posits a gap in the embedded clause, unlike in head-initial languages, where the parser posits a gap in the matrix clause. To verify their findings in the Korean context, and to further explore the mechanisms involved in processing Korean wh-constructions, the present study replicated the study by Aoshima et al., with some modifications of problematic areas in their original design. Sixty-four Korean native speakers were presented with Korean sentences containing a wh-phrase in four conditions, with word order and complementizer type as the two main factors. The participants read sentences segment by segment, and the reading time at each segment was measured. The reading time analysis showed no slowdown at the embedded verb in the scrambled conditions of the kind observed by Aoshima et al. Instead, there was a clear indication of the wh-feature checking process, in the form of a major slowdown at the relevant region.
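
For illustration only, a segment-wise reading-time contrast of the kind reported could be computed as below; the data layout and condition labels are hypothetical, not the study's actual coding.

```python
# Sketch of a segment-by-segment reading-time comparison: mean RTs at one
# region are contrasted between scrambled and canonical word orders.
import pandas as pd
from scipy import stats

def compare_region(df: pd.DataFrame, region: int):
    scrambled = df[(df.region == region) & (df.condition == "scrambled")].rt
    canonical = df[(df.region == region) & (df.condition == "canonical")].rt
    t, p = stats.ttest_ind(scrambled, canonical)
    return scrambled.mean(), canonical.mean(), t, p
```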

Analysis of Korean Language Parsing System and Speed Improvement of Machine Learning using Feature Module (한국어 의존 관계 분석과 자질 집합 분할을 이용한 기계학습의 성능 개선)

  • Kim, Seong-Jin;Ock, Cheol-Young
• Journal of the Institute of Electronics and Information Engineers / v.51 no.8 / pp.66-74 / 2014
  • A variety of studies of Korean parsing systems have recently been carried out by software engineers and linguists. Parsing systems mainly follow either the machine learning or the symbol processing paradigm. However, parsing systems based on machine learning have long training times because Korean sentence data is very large, and they show limited recognition rates because the data itself contains errors. In this paper we design a system using feature modules that can reduce training time, and we analyze the recognition rate for different numbers of training sentences and repetition counts. The designed system uses separated modules and sorted tables for binary search. We use 36,090 refined sentences extracted from the Sejong Corpus. Training time decreases by about three hours, and the recognition rate is highest, at 84.54%, when 10,000 sentences are trained 50 times. When all 32,481 training sentences are trained 10 times, the recognition rate is 82.99%. As a result, it is more efficient for the system to use the refined data and to repeat training until it reaches a steady state.
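
The feature-module idea might be sketched as follows: split the feature set into modules, each holding a sorted table searched with binary search; the module names and features are illustrative.

```python
# Sketch of the feature-module idea: the feature set is partitioned into
# modules, and each module keeps its features in a sorted table so lookups
# during training take O(log n) via binary search.
import bisect

class FeatureModule:
    def __init__(self, features):
        self.table = sorted(features)                # sorted table per module

    def index(self, feature):
        i = bisect.bisect_left(self.table, feature)  # binary search
        return i if i < len(self.table) and self.table[i] == feature else -1

modules = {
    "lexical": FeatureModule(["먹다", "사람", "학교"]),
    "pos": FeatureModule(["JKS", "NNG", "VV"]),
}
idx = modules["pos"].index("NNG")    # lookup confined to one small module
```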

A Sentence Sentiment Classification reflecting Formal and Informal Vocabulary Information (형식적 및 비형식적 어휘 정보를 반영한 문장 감정 분류)

  • Cho, Sang-Hyun;Kang, Hang-Bong
• The KIPS Transactions:PartB / v.18B no.5 / pp.325-332 / 2011
  • Social network services (SNS) such as Twitter, Facebook, and MySpace have gained popularity worldwide. Sentiment analysis of SNS users' sentences is especially important because it is very useful for opinion mining. In this paper, we propose a new sentiment classification method for sentences that contain informal vocabulary, such as emoticons and newly coined words, alongside formal vocabulary. Previous methods used only formal vocabulary to classify the sentiment of sentences; however, such methods are not very effective because internet users write sentences containing informal vocabulary. In addition, we suggest constructing a domain-specific sentiment vocabulary, because the same word may express different sentiments in different domains. Feature vectors are extracted from the sentiment vocabulary information and classified by a Support Vector Machine (SVM). The proposed method shows good classification accuracy.
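
A minimal sketch of this setup, assuming a tokenizer that preserves emoticons and coined forms as features and a linear SVM, could look like the following; the token pattern and tiny dataset are illustrative only.

```python
# Sketch of the classification setup: token features keep informal items
# such as emoticons alongside ordinary words, and a linear SVM classifies
# the sentences. Token pattern and training data are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# tokenizer pattern that keeps emoticons like ㅋㅋ, ㅠㅠ, ^^ as features
token_pattern = r"[가-힣a-zA-Z]+|[ㅋㅎㅠㅜ]+|\^\^|[:;][)(DP]"

clf = make_pipeline(
    CountVectorizer(token_pattern=token_pattern),
    LinearSVC(),
)
train_x = ["정말 좋아요 ^^", "최악이다 ㅠㅠ"]
train_y = ["pos", "neg"]
clf.fit(train_x, train_y)
print(clf.predict(["정말 좋아요 ㅋㅋ"]))   # informal tokens count as features
```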

Correlation analysis of antipsychotic dose and speech characteristics according to extrapyramidal symptoms (추체외로 증상에 따른 항정신병 약물 복용량과 음성 특성의 상관관계 분석)

  • Lee, Subin;Kim, Seoyoung;Kim, Hye Yoon;Kim, Euitae;Yu, Kyung-Sang;Lee, Ho-Young;Lee, Kyogu
• The Journal of the Acoustical Society of Korea / v.41 no.3 / pp.367-374 / 2022
  • In this paper, a correlation analysis between speech characteristics and the dose of antipsychotic drugs was performed. To investigate the speech patterns associated with extrapyramidal symptoms (EPS), a common side effect of antipsychotic drugs that involves voice changes, a Korean extrapyramidal symptom speech corpus was constructed through the development of dedicated sentences. Using the corpus, the speech patterns of EPS and non-EPS groups were investigated, and a strong correlation among speech features was found in the EPS group in particular. In addition, it was confirmed that the type of spoken sentence affects the speech feature pattern. These results suggest the possibility of early detection of antipsychotic-induced EPS based on speech features.
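
The correlation step could be sketched as below, assuming per-speaker feature and dose columns; the variable names and values are hypothetical placeholders.

```python
# Sketch of the correlation step: a per-speaker speech feature (e.g. speech
# rate) is correlated with antipsychotic dose using Spearman's rho.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "dose_mg": [100, 200, 150, 300, 250],          # hypothetical doses
    "speech_rate": [4.1, 3.6, 3.9, 3.1, 3.3],      # syllables per second
})
rho, p = stats.spearmanr(df["dose_mg"], df["speech_rate"])
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```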