Neural Network Language Model


Design and Implementation of Computational Model Simulating Language Phenomena in Lexical Decision Task (어휘판단 과제 시 보이는 언어현상의 계산주의적 모델 설계 및 구현)

  • Park, Kinam; Lim, Heuiseok; Nam, Kichun / The Journal of Korean Association of Computer Education / v.9 no.2 / pp.89-99 / 2006
  • This paper proposes a computational model that can simulate the peculiar language phenomena observed in the human lexical decision task (LDT). The model is designed to mimic major language phenomena such as the frequency effect, the lexical status effect, word similarity, and the semantic priming effect. The experimental results show that the proposed model replicated these major language phenomena and performed comparably to humans in the LDT.
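To make the frequency effect concrete, here is a minimal, hypothetical sketch (not the authors' model): a small classifier sees words during training in proportion to their corpus frequency, and its word/nonword evidence at test time serves as a proxy for decision strength, so higher-frequency words should yield stronger, faster decisions. The words, frequencies, and encoding are all stand-ins.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def encode(word, size=26, length=4):
    """Crude position-wise one-hot encoding of a 4-letter lowercase string."""
    vec = np.zeros(size * length)
    for i, ch in enumerate(word[:length]):
        vec[i * size + (ord(ch) - ord("a"))] = 1.0
    return vec

words = {"time": 500, "tide": 50}            # word -> toy corpus frequency
nonwords = ["tibe", "tode"]                  # pronounceable non-words

X, y = [], []
for w, freq in words.items():
    reps = max(1, freq // 50)                # presentation count tracks frequency
    X += [encode(w)] * reps
    y += [1] * reps
for nw in nonwords:
    X += [encode(nw)] * 5
    y += [0] * 5

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(np.array(X), np.array(y))

for w in ["time", "tide"]:                   # high- vs. low-frequency word
    p = clf.predict_proba([encode(w)])[0, 1]
    print(f"{w}: word evidence {p:.3f}")     # stronger evidence ~ faster decision
```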


Fake News Detection Using Deep Learning

  • Lee, Dong-Ho; Kim, Yu-Ri; Kim, Hyeong-Jun; Park, Seung-Myun; Yang, Yu-Jun / Journal of Information Processing Systems / v.15 no.5 / pp.1119-1130 / 2019
  • With the wide spread of social network services (SNS), fake news, which disguises false information as legitimate media, has become a serious social issue. This paper proposes a deep learning architecture for detecting fake news written in Korean. Previous works proposed suitable fake news detection models for English, but Korean poses two issues that prevent applying those models directly. First, Korean expresses the same meaning in shorter sentences than English, so the resulting feature scarcity makes it difficult to train a deep neural network. Second, morpheme ambiguity makes semantic analysis difficult. We address these issues with a system built from several convolutional neural network-based deep learning architectures and fastText, a word-embedding model trained at the syllable level. After training and testing the implementation, we achieved meaningful accuracy for classifying discrepancies between body and context, but accuracy was low for classifying discrepancies between headline and body.
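As a rough illustration of this recipe (the corpus, dimensions, and layer choices below are my assumptions, not the paper's), the sketch tokenizes Korean text into syllables, trains a small fastText embedding with gensim, and feeds the embedded sequences to a 1-D CNN classifier in Keras.

```python
from gensim.models import FastText
import numpy as np
from tensorflow import keras

docs = ["이것은 가짜 뉴스 입니다", "이것은 진짜 기사 입니다"]   # toy corpus
labels = np.array([1, 0])                                       # 1 = fake

# Syllable-level tokens: each Hangul character is one unit.
syllables = [[ch for ch in doc if ch != " "] for doc in docs]
ft = FastText(sentences=syllables, vector_size=64, window=3, min_count=1)

max_len = 16
def embed(tokens):
    """Pad/truncate a token list into a (max_len, 64) embedding matrix."""
    mat = np.zeros((max_len, 64), dtype="float32")
    for i, t in enumerate(tokens[:max_len]):
        mat[i] = ft.wv[t]
    return mat

X = np.stack([embed(t) for t in syllables])

model = keras.Sequential([
    keras.layers.Conv1D(32, kernel_size=3, activation="relu",
                        input_shape=(max_len, 64)),
    keras.layers.GlobalMaxPooling1D(),
    keras.layers.Dense(1, activation="sigmoid"),   # fake vs. real
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, labels, epochs=3, verbose=0)
```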

Korean Sentiment Model Interpretation using LIME Algorithm (LIME 알고리즘을 이용한 한국어 감성 분류 모델 해석)

  • Nam, Chung-Hyeon; Jang, Kyung-Sik / Journal of the Korea Institute of Information and Communication Engineering / v.25 no.12 / pp.1784-1789 / 2021
  • Korean sentiment classification is used in real-world services such as chatbots and the analysis of user purchase reviews, and with the development of deep learning technology, high-performance neural network models are being applied to it. However, it is not easy to interpret which words in an input sentence drive a neural network model's prediction, so model interpretation methods for such networks have recently been proposed. In this paper, we use the LIME algorithm, one of these model interpretation methods, to interpret which input words matter for models trained on a Korean sentiment classification dataset. As a result, the interpretation of the Bi-LSTM model, which achieved 85.24% accuracy, covered 25,283 words, while the interpretation of the Transformer model, which achieved a relatively lower 84.20%, covered 26,447 words; this suggests that the Transformer model is more reliable for interpretation than the Bi-LSTM model.
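The LIME step of such a workflow can be reproduced with the `lime` package. Below is a minimal sketch on a toy TF-IDF + logistic regression sentiment classifier (a stand-in for the paper's Bi-LSTM and Transformer models; the training sentences are invented): `explain_instance` perturbs the input sentence and reports per-word contribution weights.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

train_texts = ["배송 빠르고 좋아요", "품질 별로예요", "정말 만족합니다", "최악이에요"]
train_labels = [1, 0, 1, 0]                     # 1 = positive sentiment

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance("배송 빠르고 정말 좋아요",
                                 clf.predict_proba, num_features=4)
print(exp.as_list())                            # word -> contribution weight
```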

Predicate Recognition Method using BiLSTM Model and Morpheme Features (BiLSTM 모델과 형태소 자질을 이용한 서술어 인식 방법)

  • Nam, Chung-Hyeon; Jang, Kyung-Sik / Journal of the Korea Institute of Information and Communication Engineering / v.26 no.1 / pp.24-29 / 2022
  • Semantic role labeling, which is used in various natural language processing fields such as information extraction and question answering systems, is the task of identifying the arguments for a given sentence and predicate. Predicates used as semantic role labeling input are extracted from lexical analysis results such as POS tagging, but this cannot capture every linguistic pattern, because Korean predicates take various forms depending on the meaning of the sentence. In this paper, we propose a Korean predicate recognition method using a neural network model with pre-trained embedding models and lexical features. The experiments compare performance across model hyperparameters and with or without the embedding models and lexical features. As a result, we confirm that the proposed neural network model achieved 92.63% performance.
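A hypothetical PyTorch sketch of the general architecture described above (sizes and the toy batch are my choices): word embeddings concatenated with a morpheme/POS feature embedding feed a BiLSTM, and a per-token head classifies each token as predicate or not.

```python
import torch
import torch.nn as nn

class PredicateTagger(nn.Module):
    def __init__(self, vocab_size, pos_size, word_dim=100, pos_dim=16,
                 hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.pos_emb = nn.Embedding(pos_size, pos_dim)      # morpheme feature
        self.bilstm = nn.LSTM(word_dim + pos_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.out = nn.Linear(hidden * 2, 2)                 # predicate or not

    def forward(self, words, pos_tags):
        x = torch.cat([self.word_emb(words), self.pos_emb(pos_tags)], dim=-1)
        h, _ = self.bilstm(x)
        return self.out(h)                                  # (batch, seq, 2)

model = PredicateTagger(vocab_size=5000, pos_size=45)
words = torch.randint(0, 5000, (2, 7))      # toy batch: 2 sentences, 7 tokens
pos = torch.randint(0, 45, (2, 7))
logits = model(words, pos)                  # per-token predicate scores
print(logits.shape)                         # torch.Size([2, 7, 2])
```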

CR-M-SpanBERT: Multiple embedding-based DNN coreference resolution using self-attention SpanBERT

  • Joon-young Jung / ETRI Journal / v.46 no.1 / pp.35-47 / 2024
  • This study introduces CR-M-SpanBERT, a coreference resolution (CR) model that utilizes multiple embedding-based span bidirectional encoder representations from transformers for antecedent recognition in natural language (NL) text. Information extraction studies aim to extract knowledge from NL text autonomously and cost-effectively. However, the extracted information may not represent knowledge accurately owing to the presence of ambiguous entities. Therefore, we propose a CR model that identifies mentions referring to the same entity in NL text. CR requires understanding both the syntax and semantics of the NL text simultaneously; therefore, multiple embeddings are generated for CR, which can include syntactic and semantic information for each word. We evaluate the effectiveness of CR-M-SpanBERT by comparing it to a model that uses SpanBERT as the language model in CR studies. The results demonstrate that our proposed deep neural network model achieves high recognition accuracy for extracting antecedents from NL text. Additionally, it requires fewer epochs to reach an average F1 score above 75% than the conventional SpanBERT approach.
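A rough sketch of the "multiple embedding" idea as I read it (not the authors' code): per-token vectors from several sources, such as SpanBERT hidden states, POS-tag embeddings, and character-level features, are concatenated and projected before span scoring, so syntactic and semantic signals coexist in one representation.

```python
import torch
import torch.nn as nn

class MultiEmbedding(nn.Module):
    """Fuse contextual, syntactic, and character-level signals per token."""
    def __init__(self, bert_dim=768, pos_size=45, pos_dim=32, char_dim=50):
        super().__init__()
        self.pos_emb = nn.Embedding(pos_size, pos_dim)      # syntactic signal
        self.mix = nn.Linear(bert_dim + pos_dim + char_dim, bert_dim)

    def forward(self, bert_hidden, pos_ids, char_feats):
        x = torch.cat([bert_hidden, self.pos_emb(pos_ids), char_feats], dim=-1)
        return self.mix(x)       # fused token representation for span scoring

fuse = MultiEmbedding()
tokens = 12
h = fuse(torch.randn(1, tokens, 768),           # SpanBERT hidden states
         torch.randint(0, 45, (1, tokens)),     # POS-tag ids
         torch.randn(1, tokens, 50))            # e.g., a char-CNN's output
print(h.shape)                                  # torch.Size([1, 12, 768])
```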

Evaluation of maxillary sinusitis from panoramic radiographs and cone-beam computed tomographic images using a convolutional neural network

  • Serindere, Gozde; Bilgili, Ersen; Yesil, Cagri; Ozveren, Neslihan / Imaging Science in Dentistry / v.52 no.2 / pp.187-195 / 2022
  • Purpose: This study developed a convolutional neural network (CNN) model to diagnose maxillary sinusitis on panoramic radiographs (PRs) and cone-beam computed tomographic (CBCT) images and evaluated its performance. Materials and Methods: A CNN model, an artificial intelligence method, was trained and tested by applying 5-fold cross-validation to a dataset of 148 healthy and 148 inflamed sinus images. The CNN model was implemented using the PyTorch library of the Python programming language. A receiver operating characteristic curve was plotted, and the area under the curve, accuracy, sensitivity, specificity, and positive and negative predictive values for both imaging techniques were calculated to evaluate the model. Results: The average accuracy, sensitivity, and specificity of the model in diagnosing sinusitis from PRs were 75.7%, 75.7%, and 75.7%, respectively. The accuracy, sensitivity, and specificity of the deep-learning system in diagnosing sinusitis from CBCT images were 99.7%, 100%, and 99.3%, respectively. Conclusion: The diagnostic performance of the CNN for maxillary sinusitis from PRs was moderately high, whereas it was clearly higher with CBCT images. Three-dimensional images are accepted as the gold standard for diagnosis, so this was not an unexpected result. Based on these results, deep-learning systems could serve as an effective guide in assisting with diagnoses, especially for less experienced practitioners.
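The evaluation protocol (5-fold cross-validation in PyTorch) can be sketched as below; the tiny CNN, image size, and random data are stand-ins, since the paper's exact architecture is not given, but the fold loop mirrors the described setup of 148 healthy and 148 inflamed images.

```python
import torch
import torch.nn as nn
from sklearn.model_selection import KFold

X = torch.randn(296, 1, 64, 64)              # 148 healthy + 148 inflamed (toy)
y = torch.cat([torch.zeros(148), torch.ones(148)]).long()

def make_cnn():
    return nn.Sequential(
        nn.Conv2d(1, 8, 3), nn.ReLU(), nn.MaxPool2d(2),   # 64x64 -> 31x31
        nn.Flatten(), nn.Linear(8 * 31 * 31, 2))

for fold, (tr, te) in enumerate(KFold(5, shuffle=True, random_state=0).split(X)):
    model = make_cnn()
    opt = torch.optim.Adam(model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(3):                        # a few toy epochs per fold
        opt.zero_grad()
        loss = loss_fn(model(X[tr]), y[tr])
        loss.backward()
        opt.step()
    acc = (model(X[te]).argmax(1) == y[te]).float().mean()
    print(f"fold {fold}: accuracy {acc:.2f}")
```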

A New Application of Human Visual Simulated Images in Optometry Services

  • Chang, Lin-Song; Wu, Bo-Wen / Journal of the Optical Society of Korea / v.17 no.4 / pp.328-335 / 2013
  • Due to the rapid advancement of auto-refractor technology, most optometry shops provide refraction services. Despite their speed and convenience, the measurements provided by auto-refractors include a significant degree of error due to psychological and physical factors, so repetitive testing is needed to obtain a smaller mean error, and even repetitive testing might not ensure accurate measurements. Therefore, a measurement method is needed that can complement auto-refractor measurements and confirm refraction results. The customized optometry model described herein satisfies these requirements. With existing technologies, using human eye measurement devices to obtain the relevant optical feature parameters of an individual is no longer difficult, and these parameters allow us to construct an optometry model for individual eyeballs. They also allow us to compute the visual images produced by the optometry model using the CODE V macro programming language and then recognize the diffraction effects in those visual images with a neural network algorithm to obtain an accurate refractive diopter. This study combines the optometry model with a back-propagation neural network to achieve a double-check recognition effect that complements the auto-refractor. Results show that the accuracy achieved was above 98% and that this application could significantly enhance the service quality of refraction.
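A loose sketch of the final pairing (all data below is synthetic and the feature set is my invention): features derived from simulated retinal images are regressed onto the refractive diopter with a back-propagation network, here scikit-learn's MLP.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
features = rng.normal(size=(200, 12))            # e.g., wavefront/image metrics
diopter = features @ rng.normal(size=12) * 0.1   # synthetic ground truth

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=1)
net.fit(features[:160], diopter[:160])           # back-propagation training
print("R^2 on held-out eyes:", net.score(features[160:], diopter[160:]))
```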

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn; Chung, Yeojin; Lee, Jaejoon; Yang, Jiheon / Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependency between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors during decomposition (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was done with Old Testament texts using the deep learning package Keras on top of Theano. After pre-processing the texts, the dataset included 74 unique characters, covering vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, each paired with the following 21st character as output. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the training time for each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm took the longest to train for both the 3- and 4-LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet the validation loss and perplexity did not improve significantly and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM model.
Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
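The described setup maps directly onto a few lines of Keras. The sketch below follows the abstract's shapes (74 unique symbols, 20-character one-hot input windows, the 21st character as the softmax target, stacked LSTM layers, and a swappable optimizer); the layer width and the random toy batch are my assumptions.

```python
import numpy as np
from tensorflow import keras

vocab, seq_len = 74, 20                      # 74 unique symbols, 20-step input
model = keras.Sequential([
    keras.layers.LSTM(256, return_sequences=True, input_shape=(seq_len, vocab)),
    keras.layers.LSTM(256, return_sequences=True),
    keras.layers.LSTM(256),                  # 3-LSTM variant; add one layer for 4
    keras.layers.Dense(vocab, activation="softmax"),
])
model.compile(optimizer="adam",              # or "rmsprop", "adagrad", "nadam", ...
              loss="categorical_crossentropy")

X = np.eye(vocab)[np.random.randint(0, vocab, (32, seq_len))]  # toy batch
y = np.eye(vocab)[np.random.randint(0, vocab, 32)]             # 21st character
model.fit(X, y, epochs=1, verbose=0)
```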

A Study on Image Generation from Sentence Embedding Applying Self-Attention (Self-Attention을 적용한 문장 임베딩으로부터 이미지 생성 연구)

  • Yu, Kyungho; No, Juhyeon; Hong, Taekeun; Kim, Hyeong-Ju; Kim, Pankoo / Smart Media Journal / v.10 no.1 / pp.63-69 / 2021
  • When a person sees and understands a sentence, they do so by recalling the main words of the sentence as images; text-to-image generation allows computers to perform this associative process. Previous deep learning-based text-to-image models extract text features using a Convolutional Neural Network (CNN)-Long Short-Term Memory (LSTM) network or a bi-directional LSTM and generate an image by feeding those features to a GAN. These models use basic embeddings for text feature extraction and take a long time to train because images are generated through several modules. Therefore, in this research, we propose extracting sentence-embedding features with the attention mechanism, which has improved performance in the natural language processing field, and generating an image by feeding the extracted features into the GAN. In our experiments, the inception score was higher than that of the model used in the previous study, and, judged with the naked eye, the generated images expressed the features of the input sentence well, even when a long sentence was input.
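One way to realize a self-attentive sentence embedding for GAN conditioning is sketched below; this is my reading of the general idea, not the authors' exact network, and the vocabulary size, dimensions, and mean pooling are assumptions.

```python
import torch
import torch.nn as nn

class SelfAttnSentenceEncoder(nn.Module):
    def __init__(self, vocab=8000, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, token_ids):
        x = self.emb(token_ids)
        h, _ = self.attn(x, x, x)            # tokens attend to each other
        return h.mean(dim=1)                 # pooled sentence embedding

enc = SelfAttnSentenceEncoder()
sent = torch.randint(0, 8000, (1, 12))       # one 12-token sentence
z = torch.cat([enc(sent), torch.randn(1, 100)], dim=-1)  # condition + noise
print(z.shape)                               # input vector for a GAN generator
```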

A Text Content Classification Using LSTM For Objective Category Classification

  • Noh, Young-Dan; Cho, Kyu-Cheol / Journal of the Korea Society of Computer and Information / v.26 no.5 / pp.39-46 / 2021
  • AI is deeply embedded in the technologies that assist us, from everyday tools such as translators and Face ID to innumerable industrial applications. In this research, we use AI-based categorization to provide convenience: objective classification extracts only the data users need, rather than requiring them to sift through the immense number of contents on the internet. We propose a model using LSTM (Long Short-Term Memory), which stands out in text classification, and compare its performance with RNN (Recurrent Neural Network) and BiLSTM (Bidirectional LSTM) models, which are also suitable structures for natural language processing. The three models are compared using accuracy, precision, and recall. As a result, the LSTM model shows the best performance; therefore, this research recommends text classification using LSTM.
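The three-way comparison can be set up by swapping a single recurrent core inside an otherwise identical pipeline, as in the hedged Keras sketch below (the toy data, vocabulary size, layer widths, and four target categories are my choices, not the paper's).

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build(core):
    """Identical embedding and output head around a swappable recurrent core."""
    return keras.Sequential([
        layers.Embedding(10000, 64),
        core,
        layers.Dense(4, activation="softmax"),   # 4 target categories
    ])

variants = {
    "RNN": layers.SimpleRNN(64),
    "LSTM": layers.LSTM(64),
    "BiLSTM": layers.Bidirectional(layers.LSTM(64)),
}
X = np.random.randint(0, 10000, (128, 50))       # toy token-id matrix
y = np.random.randint(0, 4, 128)
for name, core in variants.items():
    m = build(core)
    m.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
    m.fit(X, y, epochs=1, verbose=0)
    print(name, "params:", m.count_params())
```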