• Title/Summary/Keyword: Word learning system

Korean Word Learning System Using Automatic Question Generation Technique (자동 문제 생성 기술을 이용한 한국어 어휘학습시스템)

  • Choe, Su-Il;Im, Ji-Hui;Choe, Ho-Seop;Ock, Cheol-Young
    • Korean Journal of Cognitive Science / v.17 no.4 / pp.271-286 / 2006
  • In this paper, we introduce an automatic question generation technique that uses language resources such as the User-Word Intelligent Network (U-WIN) and a Korean dictionary containing a large amount of information, and we present a Korean word learning system built on this technique. The item pool method that most existing learning systems use causes several problems. As a solution, we classified questions into eight question types and implemented a Korean word learning system that generates Korean questions automatically, using morphological and semantic information according to the automatic question generation pattern of each type.

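A minimal sketch of the underlying idea, for illustration only: one common automatic question type is "choose the word that matches this definition", with distractors drawn from words in the same semantic class (standing in here for U-WIN sibling concepts). The toy lexicon and the single question type below are assumptions; the actual system uses eight question types and real dictionary/U-WIN data.

```python
# Hedged sketch (illustrative only, not the authors' system): generating one
# multiple-choice vocabulary question from dictionary-style entries, with
# distractors drawn from the same semantic class.
import random

# Toy dictionary: word -> (definition, semantic class). Real entries would come
# from a Korean dictionary and the U-WIN lexical network.
lexicon = {
    "사과": ("a round fruit with red or green skin", "fruit"),
    "포도": ("a small round fruit that grows in bunches", "fruit"),
    "수박": ("a large fruit with a green rind and red flesh", "fruit"),
    "의자": ("a piece of furniture for sitting", "furniture"),
}

def make_definition_question(target, num_choices=3):
    definition, sem_class = lexicon[target]
    # Distractors: other words sharing the target's semantic class.
    siblings = [w for w, (_, c) in lexicon.items() if c == sem_class and w != target]
    choices = random.sample(siblings, min(num_choices - 1, len(siblings))) + [target]
    random.shuffle(choices)
    return {"stem": f"Which word means: '{definition}'?",
            "choices": choices,
            "answer": target}

print(make_definition_question("사과"))
```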

Deep learning-based custom problem recommendation algorithm to improve learning rate (학습률 향상을 위한 딥러닝 기반 맞춤형 문제 추천 알고리즘)

  • Lim, Min-Ah;Hwang, Seung-Yeon;Kim, Jeong-Jun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.5 / pp.171-176 / 2022
  • With the recent development of deep learning technology, recommendation systems have also diversified. This paper studies an algorithm to improve the learning rate and examines the significance of the results obtained for individual words by comparison with the performance characteristics of the Word2Vec model. The problem recommendation algorithm was implemented with the values produced by the Word2Vec model's characteristic ability to reflect meaning and to test similarity between texts. Using Word2Vec's training results, problems were recommended by text similarity value, so that problems with high similarity can be recommended. In the experiments, the accuracy changed with the amount of data, and it was confirmed that the larger the data set, the higher the accuracy.
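
A hedged sketch (not the authors' code) of the core mechanism the abstract describes: train Word2Vec on tokenized problem texts, average word vectors into a text vector, and recommend the problems whose vectors are most similar to the one just solved. The toy problem corpus and gensim 4.x API usage are assumptions.

```python
# Hedged sketch: problem recommendation via Word2Vec text similarity,
# assuming gensim 4.x and a toy corpus of tokenized problem texts.
from gensim.models import Word2Vec
import numpy as np

problems = [
    "solve the quadratic equation by factoring".split(),
    "factor the quadratic expression completely".split(),
    "compute the derivative of a polynomial".split(),
]

# Train a small Word2Vec model on the problem texts (illustrative parameters).
model = Word2Vec(sentences=problems, vector_size=50, window=3, min_count=1, epochs=50)

def text_vector(tokens):
    """Average the word vectors of the tokens present in the vocabulary."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recommend(solved_tokens, top_k=2):
    """Rank candidate problems by similarity to a problem the learner just solved."""
    query = text_vector(solved_tokens)
    scored = [(cosine(query, text_vector(p)), " ".join(p)) for p in problems]
    return sorted(scored, reverse=True)[:top_k]

print(recommend("factor a quadratic equation".split()))
```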

Character Level and Word Level English License Plate Recognition Using Deep-learning Neural Networks (딥러닝 신경망을 이용한 문자 및 단어 단위의 영문 차량 번호판 인식)

  • Kim, Jinho
    • Journal of Korea Society of Digital Industry and Information Management / v.16 no.4 / pp.19-28 / 2020
  • Vehicle license plate recognition systems are not widespread in Malaysia because of the loose character layout rules, the varying number of characters, and the mixture of capital English characters and italic English words. Because italic English words are hard to segment, a separate method is required to recognize them on Malaysian license plates. In this paper, we propose a mixed character-level and word-level English license plate recognition algorithm using deep learning neural networks. A difference-of-Gaussians method is used to segment characters and words by generating a black-and-white image with emphasized character strokes and separated touching characters. The proposed deep learning neural networks were implemented in the LPR system at the gate of a building in Kuala Lumpur to collect the database and evaluate algorithm performance. The evaluation results show that the proposed Malaysian English LPR can be used in the commercial market with 98.01% accuracy.
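
A minimal sketch of the difference-of-Gaussians preprocessing step described above, assuming OpenCV and a hypothetical input file plate.jpg; the deep learning recognition stage is omitted.

```python
# Hedged sketch (not the paper's implementation): difference-of-Gaussians
# binarization to emphasize character strokes before segmentation.
import cv2

# 'plate.jpg' is a hypothetical input image of a license plate.
gray = cv2.imread("plate.jpg", cv2.IMREAD_GRAYSCALE)

# Subtract a coarse blur from a fine blur so stroke-scale structures are
# emphasized and slow illumination changes cancel out.
fine = cv2.GaussianBlur(gray, (3, 3), 0)
coarse = cv2.GaussianBlur(gray, (15, 15), 0)
dog = cv2.subtract(fine, coarse)

# Threshold to black and white; connected components give candidate
# character/word regions for the downstream recognizer.
_, bw = cv2.threshold(dog, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(bw)
boxes = [stats[i, :4] for i in range(1, num_labels)
         if stats[i, cv2.CC_STAT_AREA] > 30]
print(f"{len(boxes)} candidate character regions")
```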

A Deep Learning-based Article- and Paragraph-level Classification

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information / v.23 no.11 / pp.31-41 / 2018
  • Text classification has been studied for a long time in the Natural Language Processing field. In this paper, we propose an article- and paragraph-level genre classification system using Word2Vec-based LSTM, GRU, and CNN models for large-scale English corpora. In both article- and paragraph-level classification, LSTM performed best in accuracy, followed by GRU and CNN. This confirms that, when pre-trained Word2Vec-based word embeddings are used in both deep learning-based classification tasks, the word sequence information of articles is more useful than the word feature extraction of paragraphs.
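
A hedged outline of the kind of model the abstract describes: a frozen embedding layer initialized from Word2Vec vectors feeding an LSTM classifier (swap the LSTM for a GRU or a Conv1D stack to reproduce the comparison). The vocabulary size, sequence length, genre count, and the random placeholder embedding matrix are assumptions.

```python
# Hedged sketch (assumed shapes, not the paper's exact model): an LSTM genre
# classifier over a frozen, Word2Vec-initialized embedding layer in Keras.
import numpy as np
import tensorflow as tf

VOCAB_SIZE, EMBED_DIM, MAX_LEN, NUM_GENRES = 20000, 300, 400, 4

# Placeholder for a Word2Vec-derived embedding matrix (rows indexed by word id).
embedding_matrix = np.random.normal(size=(VOCAB_SIZE, EMBED_DIM)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM, trainable=False,
                              name="w2v_embedding"),
    tf.keras.layers.LSTM(128),   # replace with GRU(128) or Conv1D + pooling to compare
    tf.keras.layers.Dense(NUM_GENRES, activation="softmax"),
])
# Load the pretrained vectors into the frozen embedding layer.
model.get_layer("w2v_embedding").set_weights([embedding_matrix])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```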

A study on the vowel extraction from the word using the neural network (신경망을 이용한 단어에서 모음추출에 관한 연구)

  • 이택준;김윤중
    • Proceedings of the Korea Society for Industrial Systems Conference / 2003.11a / pp.721-727 / 2003
  • This study designed and implemented a system that extracts vowels from a word. The system comprises a voice feature extraction module and a neural network module. The voice feature extraction module uses an LPC (Linear Prediction Coefficient) model to extract voice features from a word. The neural network module comprises a learning module and a voice recognition module. The learning module sets up learning patterns and builds and trains a neural network. Using the information of the trained neural network, the voice recognition module extracts vowels from a word. A neural network was trained on the selected vowels (a, eo, o, e, i) to test the performance of the implemented vowel extraction recognizer. Through this experiment, we confirmed that the speech recognition module extracts vowels from four words.

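A simplified sketch of the pipeline the abstract describes, with assumptions: librosa computes per-frame LPC coefficients and a small scikit-learn MLP stands in for the paper's neural network; the random frames below are placeholders for real recordings of the five vowels.

```python
# Hedged sketch (illustrative, not the paper's system): LPC features per frame
# plus an MLP classifier over the five target vowels (a, eo, o, e, i).
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

VOWELS = ["a", "eo", "o", "e", "i"]
SR, FRAME = 16000, 400  # 25 ms frames at 16 kHz

def lpc_features(frame, order=12):
    """LPC coefficients of one frame (dropping the leading 1.0)."""
    return librosa.lpc(frame.astype(float), order=order)[1:]

# Placeholder training data: random frames labeled with random vowels.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, FRAME))
labels = rng.integers(0, len(VOWELS), size=200)
X = np.array([lpc_features(f) for f in frames])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, labels)

# For a real word, the same pipeline would slide over its frames and report
# the vowel predicted for each voiced frame.
print(VOWELS[clf.predict(lpc_features(frames[0]).reshape(1, -1))[0]])
```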

A design and analysis of Web-Based courseware for word processor (Web 기반 워드프로세서 코스웨어의 설계 및 분석)

  • Kang, Yun-Hee;Lee, Ju-Hong;Han, Sun-Gwan
    • Journal of The Korean Association of Information Education / v.7 no.2 / pp.189-197 / 2003
  • WBI (Web-Based Instruction) has been confined to a few courses because of the burden of developing instruction materials. In this paper, we implemented a personalized instruction and learning system for word processor education on the Internet using WBI. Compared to the traditional instruction and learning method for word processor education, the proposed method induces students to take an interest in learning and makes student-oriented instruction and learning possible, because specific contents are selected according to each student's ability and learning step. The system can also evaluate the learning rate on the spot by using personalized homework and maximize the learning effect by using feedback.


Error Correction in Korean Morpheme Recovery using Deep Learning (딥 러닝을 이용한 한국어 형태소의 원형 복원 오류 수정)

  • Hwang, Hyunsun;Lee, Changki
    • Journal of KIISE / v.42 no.11 / pp.1452-1458 / 2015
  • Korean morphological analysis is a difficult process. Because Korean is an agglutinative language, morpheme recovery is one of the most important steps in morphological analysis. Existing methods for this step use heuristic rules and pre-analyzed partial words, and their performance is limited because they do not use contextual information. In this study, we built a Korean morpheme recovery system using deep learning, which uses word embeddings to exploit contextual information. In recovering the '들/VV' and '듣/VV' morphemes, the system showed 97.97% accuracy, a better performance than the 96.22% accuracy of an SVM (Support Vector Machine).
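
A much-simplified stand-in (not the paper's network) to show how word embeddings supply the contextual signal: the ambiguous surface form is restored to '들/VV' or '듣/VV' by classifying the averaged embeddings of its context words. The embedding table and toy training pairs are placeholders.

```python
# Hedged sketch: restoring an ambiguous form to '들/VV' or '듣/VV' from the
# embeddings of its surrounding context words.
import numpy as np
from sklearn.neural_network import MLPClassifier

EMBED_DIM = 50
rng = np.random.default_rng(1)
# Placeholder embedding table; in practice these vectors come from word
# embeddings pretrained on a large Korean corpus.
embedding = {w: rng.normal(size=EMBED_DIM) for w in
             ["음악", "을", "라디오", "소식", "예", "많이", "꽃", "이야기"]}

def context_vector(context_words):
    """Average embedding of the context words around the ambiguous form."""
    vecs = [embedding[w] for w in context_words if w in embedding]
    return np.mean(vecs, axis=0) if vecs else np.zeros(EMBED_DIM)

# Toy training pairs: (context words, label) with 0 = '들/VV', 1 = '듣/VV'.
train = [(["꽃", "을", "많이"], 0), (["음악", "을"], 1),
         (["라디오", "소식"], 1), (["예", "많이"], 0)]
X = np.array([context_vector(c) for c, _ in train])
y = np.array([lbl for _, lbl in train])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
pred = clf.predict(context_vector(["음악", "을"]).reshape(1, -1))[0]
print("들/VV" if pred == 0 else "듣/VV")
```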

Comparing Machine Learning Classifiers for Movie WOM Opinion Mining

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.8 / pp.3169-3181 / 2015
  • Nowadays, online word-of-mouth has become a powerful influence on marketing and sales. Opinion mining and sentiment analysis are frequently adopted in market research and business analytics for analyzing word-of-mouth content. However, several challenging areas remain: 1) sentiment analysis of Korean word-of-mouth content in the film market, 2) the viability of machine learning models that use only linguistic features, and 3) the effect of the size of the feature set. This study took a sample of 10,000 movie reviews posted with extremely negative or positive ratings on a movie portal site and conducted sentiment analysis with four machine learning algorithms: naïve Bayes, decision tree, neural network, and support vector machine. We found that the neural network and support vector machine produced better accuracy than naïve Bayes and the decision tree for every size of the feature set. Moreover, their performance improved as the feature set size increased.
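
A hedged scikit-learn sketch of the experimental setup the abstract describes: the four classifiers compared on bag-of-words features while the feature set size grows. The toy reviews are placeholders for the 10,000 movie reviews.

```python
# Hedged sketch (illustrative setup, not the study's data or code): comparing
# four classifiers on bag-of-words features of increasing size.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

reviews = ["great film loved it", "boring and too long", "wonderful acting",
           "terrible plot", "amazing movie", "awful waste of time"] * 10
labels = [1, 0, 1, 0, 1, 0] * 10

classifiers = {
    "naive Bayes": MultinomialNB(),
    "decision tree": DecisionTreeClassifier(),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
    "SVM": LinearSVC(),
}

for max_features in (10, 50, 200):          # growing feature set sizes
    X = CountVectorizer(max_features=max_features).fit_transform(reviews)
    for name, clf in classifiers.items():
        acc = cross_val_score(clf, X, labels, cv=5).mean()
        print(f"{max_features:>4} features  {name:<14} accuracy={acc:.2f}")
```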

The Sentence Similarity Measure Using Deep-Learning and Char2Vec (딥러닝과 Char2Vec을 이용한 문장 유사도 판별)

  • Lim, Geun-Young;Cho, Young-Bok
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.10 / pp.1300-1306 / 2018
  • The purpose of this study is to examine whether Char2Vec can serve as an alternative to Word2Vec, the most widely used word embedding model, in the deep learning-based sentence similarity measurement problem. In the experiment, we used the Siamese Ma-LSTM recurrent neural network architecture to measure the similarity of two random sentences. The Siamese Ma-LSTM model was implemented with TensorFlow. We trained each model for 200 epochs in a GPU environment, which took about 20 hours, and then compared the training results of the Word2Vec-based model with those of the Char2Vec-based model. As a result, the Char2Vec-based model initialized with random weights recorded 75.1% validation accuracy, while the Word2Vec-based model pre-trained on 3 million words and phrases recorded 71.6% validation accuracy. Char2Vec is therefore a suitable alternative to Word2Vec for mitigating the problem of high system memory requirements.
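
A hedged outline of the Siamese Ma-LSTM architecture named above: a shared LSTM encodes both character sequences, and similarity is exp(-L1 distance) between the two final states. The character vocabulary size, sequence length, and hidden size are assumptions; the real model would load Char2Vec or Word2Vec embeddings instead of learning them from scratch.

```python
# Hedged sketch (architecture outline, not the paper's code): a Siamese
# Manhattan-distance LSTM over character-level embeddings.
import tensorflow as tf

CHAR_VOCAB, EMBED_DIM, MAX_LEN = 2000, 64, 100   # assumed sizes

def make_encoder():
    """Shared character-sequence encoder: embedding followed by an LSTM."""
    inp = tf.keras.Input(shape=(MAX_LEN,))
    # The embedding is learned from scratch here; the paper initializes it
    # from Char2Vec (or Word2Vec in the baseline).
    x = tf.keras.layers.Embedding(CHAR_VOCAB, EMBED_DIM)(inp)
    out = tf.keras.layers.LSTM(50)(x)
    return tf.keras.Model(inp, out)

encoder = make_encoder()                     # both branches share these weights
left = tf.keras.Input(shape=(MAX_LEN,))
right = tf.keras.Input(shape=(MAX_LEN,))
h_left, h_right = encoder(left), encoder(right)

# MaLSTM similarity: exp(-L1 distance) between the two final states, in (0, 1].
similarity = tf.keras.layers.Lambda(
    lambda t: tf.exp(-tf.reduce_sum(tf.abs(t[0] - t[1]), axis=1, keepdims=True))
)([h_left, h_right])

model = tf.keras.Model([left, right], similarity)
model.compile(optimizer="adam", loss="mean_squared_error")
model.summary()
```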

Donguibogam-Based Pattern Diagnosis Using Natural Language Processing and Machine Learning (자연어 처리 및 기계학습을 통한 동의보감 기반 한의변증진단 기술 개발)

  • Lee, Seung Hyeon;Jang, Dong Pyo;Sung, Kang Kyung
    • The Journal of Korean Medicine / v.41 no.3 / pp.1-8 / 2020
  • Objectives: This paper aims to investigate Donguibogam-based pattern diagnosis by applying natural language processing and machine learning. Methods: A database was constructed by gathering symptoms and pattern diagnoses from Donguibogam. The symptom sentences were tokenized into nouns, verbs, and adjectives with a natural language processing tool. To feed the symptom sentences into machine learning, a Word2Vec model was built to convert words into numeric vectors. Using the pairs of symptom vectors and pattern diagnoses, a pattern prediction model was trained with logistic regression. Results: The Word2Vec model's best performance was obtained by optimizing its primary parameters: the number of iterations, the vector dimension, and the window size. The resulting pattern diagnosis regression model showed 75% accuracy (chance level 16.7%) in predicting the Six-Qi pattern diagnosis. Conclusions: In this study, we developed a pattern diagnosis prediction model based on the symptoms and pattern diagnoses from Donguibogam. The prediction accuracy could be increased by collecting more data through future expansion to other oriental medicine classics.
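
A hedged sketch of the Methods pipeline: tokenized symptom sentences train Word2Vec, sentence vectors are averaged word vectors, and logistic regression maps them to pattern labels. The toy English symptom data below is a placeholder for the Donguibogam corpus.

```python
# Hedged sketch (toy data, not the Donguibogam corpus): averaged Word2Vec
# symptom vectors feeding a logistic-regression pattern classifier.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Placeholder tokenized symptom sentences with their pattern labels.
symptoms = [(["fever", "thirst", "sweating"], "heat"),
            (["chills", "pale", "cold", "limbs"], "cold"),
            (["fever", "dry", "mouth"], "heat"),
            (["cold", "abdomen", "pale"], "cold")] * 5

sentences = [toks for toks, _ in symptoms]
w2v = Word2Vec(sentences=sentences, vector_size=30, window=3, min_count=1, epochs=100)

def sentence_vector(tokens):
    """Average the Word2Vec vectors of the tokens in one symptom sentence."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.array([sentence_vector(toks) for toks, _ in symptoms])
y = [label for _, label in symptoms]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(sentence_vector(["fever", "thirst"]).reshape(1, -1)))
```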