• Title/Summary/Keyword: LSTM Encoding

Encoding and language detection of text document using Deep learning algorithm (딥러닝 알고리즘을 이용한 문서의 인코딩 및 언어 판별)

  • Kim, Seonbeom;Bae, Junwoo;Park, Heejin
    • The Journal of Korean Institute of Next Generation Computing / v.13 no.5 / pp.124-130 / 2017
  • Character encoding is the method used to represent characters or symbols on a computer, and many encoding detection software tools exist. For the widely used encoding detection software "uchardet", the accuracy of encoding detection on unmodified normal text documents is 91.39%, but the accuracy of language detection is only 32.09%. Moreover, if a text document is encrypted by substitution, the accuracy of encoding detection drops to 3.55% and that of language detection to 0.06%. Therefore, in this paper, we propose encoding and language detection of text documents using the deep learning algorithm LSTM (Long Short-Term Memory). The LSTM results are better than those of the encoding detection software "uchardet": the accuracy of encoding detection on normal text documents using the LSTM is 99.89% and the accuracy of language detection is 99.92%. Likewise, if a text document is encrypted by substitution, the accuracy of encoding detection is 99.26% and the accuracy of language detection is 99.77%.
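A minimal sketch of the kind of byte-level LSTM classifier the abstract describes, with two output heads for encoding and language. The 256-value byte vocabulary is inherent to the task, but the embedding size, hidden size, class counts, and single-layer design are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical byte-level LSTM for joint encoding/language detection.
import torch
import torch.nn as nn

class EncodingLangLSTM(nn.Module):
    def __init__(self, n_encodings=8, n_languages=10, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(256, 64)               # one embedding per raw byte value
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.enc_head = nn.Linear(hidden, n_encodings)   # predicts character encoding
        self.lang_head = nn.Linear(hidden, n_languages)  # predicts language

    def forward(self, byte_ids):                         # byte_ids: (batch, seq_len)
        _, (h, _) = self.lstm(self.embed(byte_ids))
        h = h[-1]                                        # final hidden state summarizes the document
        return self.enc_head(h), self.lang_head(h)

doc = torch.randint(0, 256, (1, 512))                    # 512 raw bytes of a document
enc_logits, lang_logits = EncodingLangLSTM()(doc)
```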

Deep Learning Based Short-Term Electric Load Forecasting Models using One-Hot Encoding (원-핫 인코딩을 이용한 딥러닝 단기 전력수요 예측모델)

  • Kim, Kwang Ho;Chang, Byunghoon;Choi, Hwang Kyu
    • Journal of IKEEE / v.23 no.3 / pp.852-857 / 2019
  • In order to manage the demand resources of project participants and to provide appropriate strategies in the virtual power plant's power trading platform for consumers or operators who want to participate in the distributed resource collective trading market, it is very important to forecast both the next day's demand of individual participants and the overall system's electricity demand. This paper develops a next-day power demand forecasting model. Considering the time-series characteristics of power demand data, we used the LSTM algorithm of deep learning, and applied a new scheme that one-hot encodes input/output values such as power demand. In a performance evaluation comparing a general DNN with our LSTM forecasting model, the two models showed root mean square errors of 4.50 and 1.89, respectively, demonstrating the higher prediction accuracy of our LSTM model.
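A minimal sketch of the one-hot input scheme the abstract describes: demand values are quantized into bins and each bin index becomes a one-hot vector that the LSTM consumes. The value range and 100-bin resolution are assumptions for illustration.

```python
# Hypothetical one-hot encoding of quantized electric-load values.
import numpy as np

def load_to_one_hot(load_kw, lo=0.0, hi=5000.0, n_bins=100):
    """Quantize a demand value into n_bins and return its one-hot vector."""
    idx = int(np.clip((load_kw - lo) / (hi - lo) * n_bins, 0, n_bins - 1))
    vec = np.zeros(n_bins, dtype=np.float32)
    vec[idx] = 1.0
    return vec

# A day of hourly demand becomes a (24, 100) one-hot sequence for the LSTM.
day = np.random.uniform(800, 3200, size=24)
sequence = np.stack([load_to_one_hot(v) for v in day])
print(sequence.shape)  # (24, 100)
```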

Variation for Mental Health of Children of Marginalized Classes through Exercise Therapy using Deep Learning (딥러닝을 이용한 소외계층 아동의 스포츠 재활치료를 통한 정신 건강에 대한 변화)

  • Kim, Myung-Mi
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.4 / pp.725-732 / 2020
  • This paper uses the following variables, observed during physical activity in an exercise learning program for children of marginalized classes: following instructions well (0-9), taking a long time to make a decision (0-9), and lethargy (0-9). The data are grouped by gender, physical education classroom, and age (upper, middle, and lower), and changes in ego-resiliency and self-control through sports rehabilitation therapy are observed to identify changes in mental health. To achieve this, the acquired data were merged, and scale differences among features were removed using a label encoder and one-hot encoding. Then, to evaluate the performance of the MLP, SVM, decision tree, RNN, and LSTM algorithms, the data were divided into 75% training and 25% test sets; each algorithm was trained on the training data and its accuracy was measured on the test data. As a result, LSTM was the most effective for gender, MLP and LSTM for physical education classroom, and SVM for age.
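A minimal sketch of the preprocessing pipeline the abstract describes: label encoding, one-hot encoding, and a 75%/25% train/test split. The column names and toy values are hypothetical placeholders for the study's survey variables.

```python
# Hypothetical label/one-hot encoding and a 75/25 split with scikit-learn.
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "gender": ["M", "F", "F", "M"],
    "pe_classroom": ["A", "B", "A", "B"],
    "age_group": ["upper", "middle", "lower", "middle"],
    "improved": [1, 0, 1, 0],                  # placeholder mental-health outcome
})
df["gender_id"] = LabelEncoder().fit_transform(df["gender"])    # label encoding
X = pd.get_dummies(df[["gender", "pe_classroom", "age_group"]])  # one-hot encoding
X_train, X_test, y_train, y_test = train_test_split(
    X, df["improved"], train_size=0.75, random_state=0)
# X_train/y_train would feed the MLP, SVM, decision tree, RNN, and LSTM models.
```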

Transition-Based Korean Dependency Parsing using Bidirectional LSTM (Bidirectional LSTM을 이용한 전이기반 한국어 의존 구문분석)

  • Ha, Tae-Bin;Lee, Tae-Hyeon;Seo, Young-Hoon
    • Annual Conference on Human and Language Technology / 2018.10a / pp.527-529 / 2018
  • Compared to early studies that applied FNNs (Feedforward Neural Networks) to natural language processing, LSTM (Long Short-Term Memory) carries information not only from the current time step but also from previous time steps, and thus performs well on sequential data such as the words (eojeol) that make up a sentence and the morphemes that make up a word. In this paper, the words on the stack and buffer are represented using bidirectional LSTM encoding and applied to transition-based dependency parsing, currently achieving a UAS of 89.4%; we expect the performance to improve through additional features and refinement.
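A minimal sketch of the core idea: each word receives a contextual representation from a bidirectional LSTM, and the vectors for the stack top and buffer front feed a transition classifier. Vocabulary size, dimensions, and the three-action inventory are illustrative assumptions.

```python
# Hypothetical BiLSTM word representations for a transition-based parser.
import torch
import torch.nn as nn

vocab, emb, hidden = 5000, 100, 128
embed = nn.Embedding(vocab, emb)
bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)

sent = torch.randint(0, vocab, (1, 7))       # one 7-word (eojeol) sentence
ctx, _ = bilstm(embed(sent))                 # ctx: (1, 7, 2*hidden) contextual vectors

# Score SHIFT / LEFT-ARC / RIGHT-ARC from the stack-top and buffer-front vectors.
stack_top, buffer_front = ctx[0, 2], ctx[0, 3]
scores = nn.Linear(4 * hidden, 3)(torch.cat([stack_top, buffer_front]))
```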

LSTM based sequence-to-sequence Model for Korean Automatic Word-spacing (LSTM 기반의 sequence-to-sequence 모델을 이용한 한글 자동 띄어쓰기)

  • Lee, Tae Seok;Kang, Seung Shik
    • Smart Media Journal / v.7 no.4 / pp.17-23 / 2018
  • We propose an LSTM-based RNN model that effectively performs automatic word spacing. For long or noisy sentences, which are known to be difficult to handle in neural network learning, we defined proper input and decoding data formats, and added dropout, a bidirectional multi-layer LSTM, layer normalization, and an attention mechanism to improve performance. Although the Sejong corpus contains some spacing errors, the noise-robust learning model developed in this study, which avoids overfitting through dropout, trained well and returned meaningful results for Korean word spacing and its patterns. The experimental results showed that the LSTM sequence-to-sequence model achieves an F1-measure of 0.94, better than the rule-based deep learning method using GRU-CRF.
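A minimal sketch of the data format such a model learns, shown here as character-level spacing tags rather than the paper's full attention-based sequence-to-sequence decoder. The example sentence is illustrative.

```python
# Derive per-character spacing tags from a correctly spaced sentence.
chars = "아버지가방에들어가신다"        # unspaced model input
spaced = "아버지가 방에 들어가신다"     # desired spaced output

tags = []                                # tag = 1 if a space should follow the character
for ch in spaced:
    if ch == " ":
        tags[-1] = 1                     # mark the previous character
    else:
        tags.append(0)

assert len(tags) == len(chars)
print(list(zip(chars, tags)))
# A bidirectional multi-layer LSTM with dropout, layer normalization, and
# attention (as in the paper) would map the character sequence to these tags.
```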

A Graph Embedding Technique for Weighted Graphs Based on LSTM Autoencoders

  • Seo, Minji;Lee, Ki Yong
    • Journal of Information Processing Systems / v.16 no.6 / pp.1407-1423 / 2020
  • A graph is a data structure consisting of nodes and edges between these nodes. Graph embedding generates a low-dimensional vector for a given graph that best represents the characteristics of the graph. Recently, there have been studies on graph embedding, especially using deep learning techniques. However, until now, most deep learning-based graph embedding techniques have focused on unweighted graphs. Therefore, in this paper, we propose a graph embedding technique for weighted graphs based on long short-term memory (LSTM) autoencoders. Given weighted graphs, we traverse each graph to extract node-weight sequences from it. Each node-weight sequence represents a path in the graph consisting of nodes and the weights between these nodes. We then train an LSTM autoencoder on the extracted node-weight sequences and encode each node-weight sequence into a fixed-length vector using the trained LSTM autoencoder. Finally, for each graph, we collect the encoding vectors obtained from the graph and combine them to generate the final embedding vector for the graph. These embedding vectors can be used to classify weighted graphs or to search for similar weighted graphs. Experiments on synthetic and real datasets show that the proposed method is effective in measuring the similarity between weighted graphs.
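A minimal sketch of the paper's central component: an LSTM autoencoder that compresses a node-weight sequence into a fixed-length vector via reconstruction training. The input format and all sizes are assumptions for illustration.

```python
# Hypothetical LSTM autoencoder over node-weight sequences.
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    def __init__(self, dim=2, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, dim)

    def encode(self, seq):                              # seq: (batch, len, dim)
        _, (h, _) = self.encoder(seq)
        return h[-1]                                    # fixed-length encoding vector

    def forward(self, seq):
        z = self.encode(seq)
        rep = z.unsqueeze(1).repeat(1, seq.size(1), 1)  # feed z at every decode step
        dec, _ = self.decoder(rep)
        return self.out(dec)                            # reconstruct the input sequence

walk = torch.tensor([[[1.0, 0.5], [3.0, 0.2], [2.0, 0.9]]])  # (node id, weight) path
model = SeqAutoencoder()
loss = nn.MSELoss()(model(walk), walk)      # reconstruction objective
embedding = model.encode(walk)              # per-sequence vector; the paper combines
                                            # these per graph into one embedding
```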

Encoding Dictionary Feature for Deep Learning-based Named Entity Recognition

  • Ronran, Chirawan;Unankard, Sayan;Lee, Seungwoo
    • International Journal of Contents / v.17 no.4 / pp.1-15 / 2021
  • Named entity recognition (NER) is a crucial NLP task that aims to extract information from text. To build NER systems, deep learning (DL) models are trained with dictionary features by mapping each word in the dataset to dictionary features and generating a unique index. However, this technique can generate noisy labels, which pose significant challenges for the NER task. In this paper, we propose DL-dictionary features and evaluate them on two datasets: the OntoNotes 5.0 dataset and our new infectious disease outbreak dataset, named GFID. We used (1) Bidirectional Long Short-Term Memory (BiLSTM) character embeddings and (2) pre-trained word embeddings, concatenated with (3) our proposed dictionary features, built with a Convolutional Neural Network (CNN), BiLSTM, and self-attention, respectively. The combined features (1-3) were fed through a BiLSTM - Conditional Random Field (CRF) layer to predict named entity classes as outputs. We compared these outputs with predictions from models using the BiLSTM character embeddings, pre-trained embeddings, and dictionary features of previous research, which used exact-matching and partial-matching dictionary techniques. The findings show that the model employing our dictionary features outperformed models using existing dictionary features. We also computed the F1 score on the GFID dataset to apply this technique to extracting medical and healthcare information.
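A minimal sketch of the feature concatenation step the abstract outlines: per-token character-BiLSTM output, a pre-trained embedding, and a learned dictionary feature are concatenated before the BiLSTM-CRF layer. All dimensions are assumptions.

```python
# Hypothetical per-token feature concatenation ahead of a BiLSTM-CRF tagger.
import torch

char_feat = torch.randn(1, 10, 50)    # (batch, tokens, dim) from a character BiLSTM
word_emb  = torch.randn(1, 10, 100)   # pre-trained word embeddings
dict_feat = torch.randn(1, 10, 20)    # CNN/BiLSTM/self-attention dictionary feature
token_repr = torch.cat([char_feat, word_emb, dict_feat], dim=-1)  # (1, 10, 170)
# token_repr then feeds a BiLSTM-CRF that predicts an entity tag per token.
```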

Feature Selection with Ensemble Learning for Prostate Cancer Prediction from Gene Expression

  • Abass, Yusuf Aleshinloye;Adeshina, Steve A.
    • International Journal of Computer Science & Network Security / v.21 no.12spc / pp.526-538 / 2021
  • Machine and deep learning-based models are emerging techniques used to address prediction problems in biomedical data analysis. DNA sequence prediction is a critical problem that has attracted a great deal of attention in the biomedical domain. Machine and deep learning-based models have been shown to provide more accurate results than conventional regression-based models. Predicting the gene sequences that lead to cancerous diseases such as prostate cancer is crucial. Identifying the most important features in a gene sequence is a challenging task, and extracting the components of the gene sequence that can provide insight into the types of mutation in the gene is of great importance, as it will lead to effective drug design and promote the new concept of personalised medicine. In this work, we extracted the exons from the prostate gene sequences used in the experiment. We built a Deep Neural Network (DNN) and a Bi-directional Long Short-Term Memory (Bi-LSTM) model using k-mer encoding for the DNA sequences and one-hot encoding for the class labels. The models were evaluated using different classification metrics. Our experimental results show that the DNN model offers a training accuracy of 99 percent and a validation accuracy of 96 percent, while the Bi-LSTM model achieves a training accuracy of 95 percent and a validation accuracy of 91 percent.
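A minimal sketch of k-mer encoding, the input representation the abstract names for its DNN and Bi-LSTM models. The choice of k=3 and the toy sequence are assumptions.

```python
# Hypothetical k-mer encoding of a DNA sequence, plus a one-hot class label.
from itertools import product

def kmer_encode(seq, k=3):
    """Map a DNA string to integer indices over the 4**k possible k-mers."""
    table = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    return [table[seq[i:i + k]] for i in range(len(seq) - k + 1)]

print(kmer_encode("ACGTAC"))   # [6, 27, 44, 49] -> input tokens for DNN / Bi-LSTM
label = [1, 0]                 # one-hot encoding of a two-class target
```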

Deep Neural Architecture for Recovering Dropped Pronouns in Korean

  • Jung, Sangkeun;Lee, Changki
    • ETRI Journal / v.40 no.2 / pp.257-265 / 2018
  • Pronouns are frequently dropped in Korean sentences, especially in text messages in the mobile phone environment. Restoring dropped pronouns can be a beneficial preprocessing task for machine translation, information extraction, spoken dialog systems, and many other applications. In this work, we address the problem of dropped pronoun recovery by resolving two simultaneous subtasks: detecting zero-pronoun sentences and determining the type of dropped pronouns. The problems are statistically modeled by encoding the sentence and classifying types of dropped pronouns using a recurrent neural network (RNN) architecture. Various RNN-based encoding architectures were investigated, and the stacked RNN was shown to be the best model for Korean zero-pronoun recovery. The proposed method does not require any manual features to be implemented; nevertheless, it shows good performance.
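A minimal sketch of the joint formulation: a stacked recurrent encoder reads the sentence, and two heads detect whether a pronoun was dropped and classify its type. The GRU cell, sizes, and the seven-type inventory are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical stacked-RNN encoder with detection and type-classification heads.
import torch
import torch.nn as nn

class PronounRecovery(nn.Module):
    def __init__(self, vocab=10000, emb=64, hidden=128, n_types=7):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb, hidden, num_layers=2, batch_first=True)  # stacked RNN
        self.detect = nn.Linear(hidden, 2)       # is this a zero-pronoun sentence?
        self.type = nn.Linear(hidden, n_types)   # which pronoun type was dropped?

    def forward(self, tokens):
        _, h = self.rnn(self.embed(tokens))
        return self.detect(h[-1]), self.type(h[-1])

detect_logits, type_logits = PronounRecovery()(torch.randint(0, 10000, (1, 12)))
```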

Guided Sequence Generation using Trie-based Dictionary for ASR Error Correction (음성 인식 오류 수정을 위한 Trie 기반 사전을 이용한 Guided Sequence Generation)

  • Choi, Junhwi;Ryu, Seonghan;Yu, Hwanjo;Lee, Gary Geunbae
    • Korean Language Information Society: Annual Conference Proceedings (한국어정보학회 학술대회논문집) / 2016.10a / pp.211-216 / 2016
  • Even though many current speech recognizers achieve generally high accuracy, speech recognition errors still occur frequently. Because such errors cause many malfunctions in downstream applications, they must be corrected. In this paper, we propose guided sequence generation using a trie-based dictionary. The proposed model encodes a target word and its context, then generates the word by decoding it character by character. To generate correct words, the generation is guided by the trie-based dictionary. For the experiments, the model was trained on a corpus built by simulating speech recognition errors on an English TV-guide domain corpus, and its performance was evaluated on a parallel corpus of recognized sentences and their corrections in the same domain. Guided generation reduced errors by about 14.9% compared to unguided generation.
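A minimal sketch of the guiding idea: a character trie over a dictionary restricts each decoding step to characters that can still complete a real word. The toy dictionary and the greedy stand-in for the decoder's scores are assumptions.

```python
# Hypothetical trie-constrained character decoding.
class Trie:
    def __init__(self):
        self.children, self.is_word = {}, False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

trie = Trie()
for w in ["chance", "change", "channel"]:
    trie.insert(w)

# At each step the decoder may only emit characters kept in the trie, so every
# generated prefix extends to a dictionary word.
node, prefix = trie, ""
while not node.is_word:
    allowed = node.children                 # mask: characters still legal here
    ch = sorted(allowed)[0]                 # stand-in for argmax over model scores
    prefix, node = prefix + ch, allowed[ch]
print(prefix)                               # 'chance'
```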
