• Title/Summary/Keyword: word learning


A Computational Model of Language Learning Driven by Training Inputs

  • Lee, Eun-Seok;Lee, Ji-Hoon;Zhang, Byoung-Tak
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2010.05a
    • /
    • pp.60-65
    • /
    • 2010
  • Language learning is shaped by the linguistic environment around the learner, so variation in the training input to which the learner is exposed has been linked to language learning outcomes. We explore how differences in linguistic experience lead to differences in learning linguistic structural features, investigated with a probabilistic graphical model. We gradually vary the amount of training input, composed of natural linguistic data from animation videos for children, from holistic (one-word expressions) to compositional (two- to six-word expressions). The recognition and generation of sentences are treated as a probabilistic constraint satisfaction process based on massively parallel DNA chemistry. Random sentence generation succeeds when networks begin with limited sentence lengths and vocabulary sizes and gradually expand to larger ones, mirroring children's cognitive development during learning. This model supports the suggestion that varying early linguistic environments in developmental steps may facilitate language acquisition.


Error Correction in Korean Morpheme Recovery using Deep Learning (딥 러닝을 이용한 한국어 형태소의 원형 복원 오류 수정)

  • Hwang, Hyunsun;Lee, Changki
    • Journal of KIISE
    • /
    • v.42 no.11
    • /
    • pp.1452-1458
    • /
    • 2015
  • Korean morphological analysis is difficult. Because Korean is an agglutinative language, one of its most important steps is morpheme recovery. Methods using heuristic rules and pre-analyzed partial words have been examined for this task, but their performance is limited because they do not use contextual information. In this study, we built a Korean morpheme recovery system using deep learning, employing word embeddings to exploit contextual information. In recovering the morphemes '들/VV' and '듣/VV', the system achieved 97.97% accuracy, outperforming an SVM (Support Vector Machine) baseline, which achieved 96.22%.

Multicriteria-Based Computer-Aided Pronunciation Quality Evaluation of Sentences

  • Yoma, Nestor Becerra;Berrios, Leopoldo Benavides;Sepulveda, Jorge Wuth;Torres, Hiram Vivanco
    • ETRI Journal
    • /
    • v.35 no.1
    • /
    • pp.89-99
    • /
    • 2013
  • The sentence-based pronunciation evaluation task is defined in the context of subjective criteria. Three subjective criteria (the minimum subjective word score, the mean subjective word score, and the first impression) are proposed and modeled as combinations of word-based assessments. The subjective criteria are then approximated with objective sentence pronunciation scores obtained by combining word-based metrics. No a priori study of common mistakes is required, and class-based language models are used to incorporate incorrect and correct pronunciations. Incorrect pronunciations are incorporated automatically by making use of a competitive lexicon and the phonetic rules of the students' mother tongue and the target language. This procedure is applicable to any second language learning context, and subjective-objective sentence score correlations greater than or equal to 0.5 can be achieved when the proposed sentence-based pronunciation criteria are approximated with combinations of word-based scores. Finally, the subjective-objective sentence score correlations reported here are comparable to those published elsewhere for methods that require a priori studies of pronunciation errors.
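As a hedged illustration (not the authors' actual scoring pipeline), two of the sentence-level criteria above, the minimum word score and the mean word score, can be computed directly from per-word pronunciation scores:

```python
def sentence_scores(word_scores):
    """Aggregate per-word pronunciation scores (assumed here to lie in
    [0, 1]) into sentence-level scores mirroring two of the subjective
    criteria: the minimum word score and the mean word score."""
    return {
        "min_word_score": min(word_scores),
        "mean_word_score": sum(word_scores) / len(word_scores),
    }
```

The minimum criterion is sensitive to a single badly pronounced word, while the mean smooths over the whole sentence; combining such word-based aggregates is the kind of approximation the paper evaluates against subjective ratings.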

Impact of Word Embedding Methods on Performance of Sentiment Analysis with Machine Learning Techniques

  • Park, Hoyeon;Kim, Kyoung-jae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.8
    • /
    • pp.181-188
    • /
    • 2020
  • In this study, we propose a comparative study to confirm the impact of various word embedding techniques on the performance of sentiment analysis. Sentiment analysis is an opinion mining technique that identifies and extracts subjective information from text using natural language processing; it can be used to classify the sentiment of product reviews or comments. Since sentiment can be classified as either positive or negative, it can be considered a general classification problem. For sentiment analysis, text must first be converted into a form a computer can process, so words and documents are transformed into vectors through a process called word embedding. Techniques such as Bag of Words, TF-IDF, and Word2Vec are used for word embedding. Until now, there have been few studies on which word embedding techniques are suitable for sentiment analysis. In this study, Bag of Words, TF-IDF, and Word2Vec are used to compare and analyze the performance of movie review sentiment analysis. The research data set is the IMDB data set, which is widely used in text mining. As a result, TF-IDF and Bag of Words outperformed Word2Vec, and TF-IDF performed better than Bag of Words, although the difference was not very significant.
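A minimal sketch of the Bag of Words and TF-IDF representations compared above, using a tiny made-up review corpus rather than the IMDB data (Word2Vec, which learns dense vectors from context, is omitted as it needs trained weights):

```python
import math
from collections import Counter

corpus = [
    "great movie great acting",  # hypothetical positive review
    "boring movie",              # hypothetical negative review
    "great fun",
]

def bag_of_words(doc):
    # Bag of Words: each document becomes raw term counts.
    return Counter(doc.split())

def tf_idf(doc, docs):
    # TF-IDF: term frequency weighted by inverse document frequency,
    # so terms that occur in many documents (like "movie") are damped.
    tf = bag_of_words(doc)
    n = len(docs)
    return {
        term: count * math.log(n / sum(1 for d in docs if term in d.split()))
        for term, count in tf.items()
    }
```

In this toy corpus, "acting" (rare) gets a higher TF-IDF weight than "movie" (common), which is the reweighting that distinguishes TF-IDF from plain Bag of Words in the study's comparison.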

Vocabulary Learning Strategy Use and Vocabulary Proficiency

  • Huh, Jin-Hee
    • English Language & Literature Teaching
    • /
    • v.15 no.4
    • /
    • pp.37-54
    • /
    • 2009
  • This study investigated vocabulary learning strategies used by EFL middle school learners in Korea and examined the relationship between the learners' vocabulary learning strategy (VLS) use and their vocabulary proficiency level. One hundred forty-one students in a public middle school participated in the study, and the data were collected from a vocabulary learning strategy questionnaire and a vocabulary proficiency test. Based on the results of the vocabulary proficiency test, the participants were divided into high-, mid-, and low-proficiency groups. The overall findings revealed that the participants used cognitive strategies most frequently and social strategies least frequently. The most frequently used individual strategies were 'using a bilingual dictionary,' 'studying the sound of a word,' and 'practicing words through verbal repetition.' The least frequently used were 'interacting with native speakers' and 'studying or practicing the meaning of a word in a group.' The results also showed that vocabulary proficiency level has a significant influence on vocabulary strategy use: the more proficient learners used vocabulary learning strategies more actively. More specifically, the high-proficiency group used metacognitive strategies the most, while the middle- and low-proficiency groups used cognitive strategies the most. It is suggested that language teachers should facilitate the vocabulary learning process by helping learners develop appropriate strategies.


Feature Generation of Dictionary for Named-Entity Recognition based on Machine Learning (기계학습 기반 개체명 인식을 위한 사전 자질 생성)

  • Kim, Jae-Hoon;Kim, Hyung-Chul;Choi, Yun-Soo
    • Journal of Information Management
    • /
    • v.41 no.2
    • /
    • pp.31-46
    • /
    • 2010
  • Named-entity recognition (NER), a part of information extraction, is now used in information retrieval as well as question-answering systems. Unlike ordinary words, named entities (NEs) are steadily generated and changed in documents on the Web, in newspapers, and so on. This NE generation causes an unknown-word problem that hampers many application systems relying on NER. To alleviate this problem, this paper proposes a new feature generation method for machine learning-based NER. In general, features in machine learning-based NER are word-level, but entries in named-entity dictionaries are phrases, so the entries cannot be used directly as features of NER systems. This paper proposes an encoding scheme as a feature generation method that converts phrase entities into word-unit features. Furthermore, thanks to this scheme, entities with semantic information in WordNet can also be converted into features of NER systems. Our experiments show that the method increases the F1 score by about 6% and reduces errors by about 38%.
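The core idea of turning a phrase-level dictionary entry into word-unit features can be sketched as follows; this uses an illustrative BIO-style encoding and hypothetical dictionary entries, not the paper's exact scheme:

```python
def phrase_to_word_features(phrase, label):
    # Encode a multi-word dictionary entity as word-unit features:
    # the first word gets a B- (begin) tag and the rest get I- (inside)
    # tags, so a phrase entry can feed a word-based NER model.
    words = phrase.split()
    return [(w, ("B-" if i == 0 else "I-") + label)
            for i, w in enumerate(words)]
```

For example, `phrase_to_word_features("World Health Organization", "ORG")` yields one (word, tag) feature per word, which is the word-unit form a token-based learner can consume.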

A Study on Word Learning and Error Type for Character Correction in Hangul Character Recognition (한글 문자 인식에서의 오인식 문자 교정을 위한 단어 학습과 오류 형태에 관한 연구)

  • Lee, Byeong-Hui;Kim, Tae-Gyun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.5
    • /
    • pp.1273-1280
    • /
    • 1996
  • In order to perform high-accuracy text recognition, the output of a text recognition system must be processed through a post-processing stage using contextual information. We present a system that combines multiple knowledge sources to post-process the output of an optical character recognition (OCR) system. The multiple knowledge sources include word characteristics, the types of wrongly recognized Hangul characters, and Hangul word learning. In this paper, the wrongly recognized characters produced by OCR systems are collected and analyzed. We used a Korean dictionary of approximately 150,000 words, together with Korean language texts from Korean elementary, middle, and high schools. We found that only 10.7% of the words in these school texts appeared in the Korean dictionary. We also classified the error types of Korean character recognition with OCR systems. For Hangul word learning, we utilized text indexes. With these multiple knowledge sources, we could predict the proper word among a large set of candidate words.
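One simple building block for this kind of OCR post-correction, selecting the dictionary word closest to a misrecognized word, can be sketched with edit distance; this is an illustrative sketch with English toy words, not the paper's multi-knowledge-source method:

```python
def edit_distance(a, b):
    # Levenshtein distance via a single-row dynamic program.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete ca
                                     dp[j - 1] + 1,      # insert cb
                                     prev + (ca != cb))  # substitute
    return dp[len(b)]

def correct(word, dictionary):
    # Pick the dictionary candidate closest to the OCR output.
    return min(dictionary, key=lambda w: edit_distance(word, w))
```

In practice the paper narrows the candidate set using recognized error types and learned word statistics rather than searching the whole dictionary, but the candidate-ranking step works on the same principle.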


Word Sense Disambiguation of Predicate using Semi-supervised Learning and Sejong Electronic Dictionary (세종 전자사전과 준지도식 학습 방법을 이용한 용언의 어의 중의성 해소)

  • Kang, Sangwook;Kim, Minho;Kwon, Hyuk-chul;Oh, Jyhyun
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.2
    • /
    • pp.107-112
    • /
    • 2016
  • The Sejong Electronic (machine-readable) Dictionary, developed by the 21st Century Sejong Plan, contains systematically organized information on Korean words. It helps to solve problems encountered in the electronic formatting of the still-commonly-used hard-copy dictionary. The Sejong Electronic Dictionary, however, has limitations related to sentence structure and selection-restricted nouns. This paper discusses the limitations of word-sense disambiguation (WSD) that uses the subcategorization information provided by the Sejong Electronic Dictionary and selection-restricted nouns generalized from the Korean lexico-semantic network. An alternative method that makes WSD decisions using semi-supervised learning, the chi-square test, and other means is presented herein.
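As an illustrative sketch with made-up counts (not the paper's data), the chi-square test mentioned above can score how strongly a context word is associated with one sense of an ambiguous predicate, from a 2x2 contingency table of sense-labeled occurrences:

```python
def chi_square_2x2(o11, o12, o21, o22):
    """Chi-square statistic for a 2x2 contingency table:
      o11: context word present, sense A    o12: present, sense B
      o21: context word absent,  sense A    o22: absent,  sense B
    A large value suggests the context word is informative
    for choosing between the two senses."""
    n = o11 + o12 + o21 + o22
    num = n * (o11 * o22 - o12 * o21) ** 2
    den = (o11 + o12) * (o21 + o22) * (o11 + o21) * (o12 + o22)
    return num / den
```

Context words with high scores on sense-labeled seed data can then be used to label further unlabeled examples, which is the general shape of a semi-supervised WSD loop.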

Development of a Fake News Detection Model Using Text Mining and Deep Learning Algorithms (텍스트 마이닝과 딥러닝 알고리즘을 이용한 가짜 뉴스 탐지 모델 개발)

  • Dong-Hoon Lim;Gunwoo Kim;Keunho Choi
    • Information Systems Review
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2021
  • Fake news spreads and is reproduced rapidly, regardless of its authenticity, owing to the characteristics of the modern information age. Assuming that 1% of all news is fake, the economic cost is reported to be about 30 trillion Korean won. This shows that fake news is a very important social and economic issue. This study therefore aims to develop an automated detection model to verify the authenticity of news quickly and accurately. To this end, we crawled news data whose authenticity had been verified and developed fake news prediction models using word embeddings (Word2Vec, FastText) and deep learning algorithms (LSTM, BiLSTM). Experimental results show that the prediction model using BiLSTM with Word2Vec achieved the best accuracy of 84%.

Bridge Damage Factor Recognition from Inspection Reports Using Deep Learning (딥러닝 기반 교량 점검보고서의 손상 인자 인식)

  • Chung, Sehwan;Moon, Seonghyeon;Chi, Seokho
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.38 no.4
    • /
    • pp.621-625
    • /
    • 2018
  • This paper proposes a method for recognizing bridge damage factors in inspection reports using deep learning. Bridge inspection reports contain inspection results, including identified damages and causal analysis results, but manually collecting such information is impractical because of the considerable volume of reports. This paper therefore proposes a model that recognizes bridge damage factors in inspection reports by applying named entity recognition (NER) with deep learning. Word embedding and a recurrent neural network, one of the deep learning methods, were applied to construct the proposed model. Experimental results showed that the proposed model can 1) recognize damages and damage factors included in the training data, 2) distinguish whether a specific word denotes a damage or a damage factor depending on its context, and 3) recognize new damage words not included in the training data.