• Title/Summary/Keyword: Neural Network Language Model


Class Language Model based on Word Embedding and POS Tagging (워드 임베딩과 품사 태깅을 이용한 클래스 언어모델 연구)

  • Chung, Euisok; Park, Jeon-Gue
    • KIISE Transactions on Computing Practices / v.22 no.7 / pp.315-319 / 2016
  • Recurrent neural network based language models (RNN LMs) have shown improved results in language modeling research. However, RNN LMs are usually limited to post-processing stages, such as the N-best rescoring step of wFST-based speech recognition, and they pose considerable vocabulary problems that require large computing power for LM training. In this paper, we investigate a first-pass N-gram model using word embeddings obtained from a simplified deep neural network. A class-based language model (LM) is one way to approach this issue. We build a class-based vocabulary through word embedding and combine the class LM with a word N-gram LM to evaluate LM performance. In addition, we show that a part-of-speech (POS) tagging based LM improves perplexity in all types of LM tests.
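
A minimal sketch of the general class-LM idea described above, not the authors' implementation: word classes are built by clustering an embedding space, and a class bigram is combined with a word-given-class term. The toy vocabulary, random embeddings, and smoothing constant are all assumptions.

```python
# Hedged sketch: class-based N-gram LM from clustered word embeddings (illustrative only).
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

vocab = ["the", "a", "cat", "dog", "runs", "sleeps"]
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 50))        # stand-in for trained word vectors

# 1) Build word classes by clustering the embedding space.
n_classes = 3
word2class = dict(zip(vocab, KMeans(n_classes, n_init=10, random_state=0)
                      .fit_predict(embeddings)))

# 2) Estimate class-bigram and word-emission counts from a toy corpus.
corpus = ["the cat runs", "a dog sleeps", "the dog runs"]
class_bigrams, emissions, class_unigrams = Counter(), Counter(), Counter()
for sent in corpus:
    words = sent.split()
    classes = [word2class[w] for w in words]
    emissions.update(zip(classes, words))
    class_unigrams.update(classes)
    class_bigrams.update(zip(classes[1:], classes[:-1]))  # keys: (c_i, c_{i-1})

# 3) Class LM factorization: P(w_i | w_{i-1}) ~= P(c_i | c_{i-1}) * P(w_i | c_i).
def class_lm_prob(prev_word, word, alpha=0.5):
    cp, c = word2class[prev_word], word2class[word]
    p_cc = (class_bigrams[(c, cp)] + alpha) / (class_unigrams[cp] + alpha * n_classes)
    p_wc = (emissions[(c, word)] + alpha) / (class_unigrams[c] + alpha * len(vocab))
    return p_cc * p_wc

print(class_lm_prob("the", "cat"))
```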

Development for Estimation Model of Runway Visual Range using Deep Neural Network (심층신경망을 활용한 활주로 가시거리 예측 모델 개발)

  • Ku, SungKwan; Hong, SeokMin
    • Journal of Advanced Navigation Technology / v.21 no.5 / pp.435-442 / 2017
  • The runway visual range, which is affected by fog and other weather conditions, is one of the important indicators for determining whether an aircraft can take off from or land at an airport. At airports served by transport aircraft, major weather forecasts, including the runway visual range for the local area, are issued and provided to aviation personnel. This paper proposes a runway visual range estimation model based on a deep neural network, a technique recently applied to various fields such as image processing, speech recognition, and natural language processing. The model is developed and implemented to estimate the runway visual range of a local airport, and past actual weather observation data from the applied airfield are used to train the neural network. The model shows comparatively accurate estimation results when compared with the existing observation data, and it can be used to generate weather information for airfields that have no other forecasting capability.
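
A minimal sketch of a feed-forward regression network for runway visual range (RVR) estimation in the spirit of the abstract; the input features, layer sizes, and training loop are assumptions, not the authors' configuration.

```python
# Hedged sketch: MLP regression of RVR from weather observations (synthetic data).
import torch
import torch.nn as nn

class RVREstimator(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),              # scalar RVR estimate
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

# Synthetic stand-in for past weather observations (temperature, humidity, wind, ...).
x = torch.randn(256, 8)
y = torch.randn(256)

model = RVREstimator()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(10):                            # toy training loop
    optim.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optim.step()
print(float(loss))
```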

Runway visual range prediction using Convolutional Neural Network with Weather information

  • Ku, SungKwan; Kim, Seungsu; Hong, Seokmin
    • International Journal of Advanced Culture Technology / v.6 no.4 / pp.190-194 / 2018
  • The runway visual range is one of the important factors that determine whether an airplane can take off or land at a local airport. It is affected by weather conditions such as fog and wind, so pilots and aviation workers check local weather forecasts, including the runway visual range, for safe flight. However, several local airfields provide no forecasting function because of practical problems such as deterioration, breakdown, or the high purchase cost of measurement equipment. To this end, this study proposes a prediction model of the runway visual range for a local airport by applying a convolutional neural network, which is most commonly used for image and video recognition, image classification, and natural language processing, to runway visual range prediction. To build the prediction model, we use past time series data of wind speed, humidity, temperature, and runway visibility. This paper shows the usefulness of the proposed prediction model by comparing its output with the measured data.
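
A minimal sketch of a 1D convolutional model over a multivariate weather time series (wind speed, humidity, temperature, visibility), as a plausible reading of the abstract; kernel sizes, channel counts, and window length are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: 1D CNN over a 4-variable weather time series for RVR prediction.
import torch
import torch.nn as nn

class RVRConvNet(nn.Module):
    def __init__(self, n_channels=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # pool over the time axis
        )
        self.head = nn.Linear(64, 1)           # predicted runway visual range

    def forward(self, x):                      # x: (batch, channels, time)
        return self.head(self.conv(x).squeeze(-1)).squeeze(-1)

# 24 past time steps of 4 observed variables, batch of 16 synthetic samples.
x = torch.randn(16, 4, 24)
print(RVRConvNet()(x).shape)                   # torch.Size([16])
```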

Modular Fuzzy Neural Controller Driven by Voice Commands

  • Izumi, Kiyotaka; Lim, Young-Cheol
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2001.10a / pp.32.3-32 / 2001
  • This paper proposes a layered protocol for interpreting voice commands in the user's own language so that a machine can be controlled in real time. The layers consist of a speech signal capturing layer, a lexical analysis layer, an interpretation layer, and finally an activation layer, where each layer tries to mimic its human counterpart in command following. The contents of a continuous voice command are captured by a Hidden Markov Model based speech recognizer. Concepts from artificial neural networks are then used to classify the contents of the recognized voice command ...
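
A minimal sketch of the interpretation step only: a small neural classifier that maps a recognized command string to a command class. The HMM speech recognizer and the fuzzy neural controller are out of scope here; the vocabulary and command classes are invented for illustration.

```python
# Hedged sketch: bag-of-words + small MLP to classify a recognized voice command.
import torch
import torch.nn as nn

vocab = ["move", "turn", "stop", "left", "right", "forward", "back", "slowly"]
classes = ["MOVE", "TURN", "STOP"]

def bag_of_words(command):
    tokens = command.lower().split()
    return torch.tensor([float(w in tokens) for w in vocab])

classifier = nn.Sequential(nn.Linear(len(vocab), 16), nn.ReLU(),
                           nn.Linear(16, len(classes)))

logits = classifier(bag_of_words("turn left slowly"))
print(classes[int(logits.argmax())])           # untrained, so the label is arbitrary
```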


CRNN-Based Korean Phoneme Recognition Model with CTC Algorithm (CTC를 적용한 CRNN 기반 한국어 음소인식 모델 연구)

  • Hong, Yoonseok; Ki, Kyungseo; Gweon, Gahgene
    • KIPS Transactions on Software and Data Engineering / v.8 no.3 / pp.115-122 / 2019
  • For Korean phoneme recognition, Hidden Markov Model-Gaussian Mixture Model (HMM-GMM) systems or hybrid models that combine an artificial neural network with an HMM have mainly been used. However, these approaches are limited in that they require force-aligned corpus training data manually annotated by experts. Recently, researchers have used neural network based phoneme recognition models that combine a recurrent neural network (RNN) based structure with the connectionist temporal classification (CTC) algorithm to overcome the problem of obtaining manually annotated training data. Yet, in terms of implementation, these RNN-based models have another difficulty: the amount of required data grows as the structure becomes more sophisticated. This is particularly problematic for Korean, which lacks refined corpora. In this study, we apply the CTC algorithm, which does not require force alignment, to create a Korean phoneme recognition model. Specifically, the phoneme recognition model is based on a convolutional neural network (CNN), which requires a relatively small amount of data and can be trained faster than RNN-based models. We present the results of two experiments and the best performing phoneme recognition model, which distinguishes 49 Korean phonemes. The best performing model combines a CNN with a 3-hop bidirectional LSTM and achieves a final Phoneme Error Rate (PER) of 3.26. This PER is a considerable improvement over existing Korean phoneme recognition models, which report PERs ranging from 10 to 12.
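
A minimal sketch of a CNN + bidirectional LSTM acoustic model trained with CTC, in the spirit of the abstract; the feature dimensions, layer sizes, and the reading of "3-hop" as a 3-layer BiLSTM are assumptions, not the authors' exact model.

```python
# Hedged sketch: CRNN (CNN + BiLSTM) with CTC loss for phoneme recognition (synthetic data).
import torch
import torch.nn as nn

N_PHONEMES = 49                                 # 49 Korean phonemes; index 0 is the CTC blank

class CRNN(nn.Module):
    def __init__(self, n_mels=40, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.rnn = nn.LSTM(128, hidden, num_layers=3,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, N_PHONEMES + 1)

    def forward(self, feats):                   # feats: (batch, n_mels, time)
        h = self.cnn(feats).transpose(1, 2)     # -> (batch, time, 128)
        h, _ = self.rnn(h)
        return self.out(h).log_softmax(-1)      # (batch, time, classes)

model = CRNN()
feats = torch.randn(4, 40, 100)                 # synthetic log-mel features
log_probs = model(feats).transpose(0, 1)        # CTC expects (time, batch, classes)

targets = torch.randint(1, N_PHONEMES + 1, (4, 20))
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 100, dtype=torch.long),
           target_lengths=torch.full((4,), 20, dtype=torch.long))
loss.backward()
print(float(loss))
```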

ADD-Net: Attention Based 3D Dense Network for Action Recognition

  • Man, Qiaoyue; Cho, Young Im
    • Journal of the Korea Society of Computer and Information / v.24 no.6 / pp.21-28 / 2019
  • In recent years, with the development of artificial intelligence and the success of deep models, deep learning has been deployed across all fields of computer vision. Action recognition, an important branch of human perception and computer vision research, has attracted more and more attention. It is a challenging task because of the particular complexity of human movement: the same movement may be performed by multiple individuals. A human action exists as a sequence of image frames in a video, so action recognition requires more computational power than processing static images, and simply applying a CNN cannot achieve the desired results. Recently, attention models have achieved good results in computer vision and natural language processing. In particular, for video action classification, adding an attention model makes it more effective to focus on motion features and improves performance. It also intuitively explains which part the model attends to when making a particular decision, which is very helpful in real applications. In this paper, we propose an attention-based 3D dense convolutional network (ADD-Net) for recognizing human motion behavior in video.
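
A minimal sketch of a 3D convolutional block with a simple spatio-temporal attention gate, illustrating the general combination the abstract describes; it is not the ADD-Net architecture itself, and the channel counts, clip size, and class count are illustrative.

```python
# Hedged sketch: 3D convolution + attention-weighted pooling over a video clip.
import torch
import torch.nn as nn

class Attn3DBlock(nn.Module):
    def __init__(self, in_ch=3, out_ch=32, n_classes=10):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        self.attn = nn.Conv3d(out_ch, 1, kernel_size=1)     # per-position attention score
        self.head = nn.Linear(out_ch, n_classes)

    def forward(self, clip):                    # clip: (batch, channels, frames, H, W)
        feat = torch.relu(self.conv(clip))
        weights = torch.sigmoid(self.attn(feat))            # (batch, 1, frames, H, W)
        pooled = (feat * weights).mean(dim=(2, 3, 4))        # attention-weighted pooling
        return self.head(pooled)

clip = torch.randn(2, 3, 16, 56, 56)            # 2 clips of 16 RGB frames, 56x56
print(Attn3DBlock()(clip).shape)                # torch.Size([2, 10])
```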

LSTM Language Model Based Korean Sentence Generation (LSTM 언어모델 기반 한국어 문장 생성)

  • Kim, Yang-hoon; Hwang, Yong-keun; Kang, Tae-gwan; Jung, Kyo-min
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.5 / pp.592-601 / 2016
  • The recurrent neural network (RNN) is a deep learning model suited to sequential or variable-length data. Long Short-Term Memory (LSTM) mitigates the vanishing gradient problem of RNNs, so an LSTM can maintain long-term dependencies among the constituents of a given input sequence. In this paper, we propose an LSTM-based language model that predicts the following words of a given incomplete sentence to generate a complete sentence. To evaluate our method, we trained the model on multiple Korean corpora and then generated the missing parts of incomplete Korean sentences. The results show that our language model was able to generate fluent Korean sentences. We also show that the word-based model generated better sentences than the other settings.
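
A minimal sketch of a word-level LSTM language model completing a sentence by greedy next-word prediction; the vocabulary size, dimensions, and decoding strategy are assumptions, and the model here is untrained.

```python
# Hedged sketch: LSTM language model + greedy completion of an incomplete sentence.
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, emb=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, ids, state=None):
        h, state = self.lstm(self.emb(ids), state)
        return self.out(h), state

def complete(model, prefix_ids, max_new=5):
    """Greedily append word ids to an incomplete sentence."""
    ids = list(prefix_ids)
    state = None
    inp = torch.tensor([ids])
    for _ in range(max_new):
        logits, state = model(inp, state)
        next_id = int(logits[0, -1].argmax())
        ids.append(next_id)
        inp = torch.tensor([[next_id]])          # feed only the newest token back in
    return ids

model = LSTMLanguageModel(vocab_size=1000)
print(complete(model, prefix_ids=[5, 17, 42]))   # token ids of the given prefix
```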

KG_VCR: A Visual Commonsense Reasoning Model Using Knowledge Graph (KG_VCR: 지식 그래프를 이용하는 영상 기반 상식 추론 모델)

  • Lee, JaeYun; Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.9 no.3 / pp.91-100 / 2020
  • Unlike existing Visual Question Answering (VQA) problems, the newer Visual Commonsense Reasoning (VCR) problems require deep commonsense reasoning to answer questions: recognizing a specific relationship between two objects in the image and presenting the rationale for the answer. In this paper, we propose a novel deep neural network model, KG_VCR, for VCR problems. In addition to making use of the visual relations and contextual information between objects extracted from the input data (images, natural language questions, and response lists), KG_VCR also utilizes commonsense knowledge embeddings extracted from an external knowledge base called ConceptNet. Specifically, the proposed model employs a Graph Convolutional Neural Network (GCN) module to obtain commonsense knowledge embeddings from the retrieved ConceptNet knowledge graph. Through a series of experiments on the VCR benchmark dataset, we show that the proposed KG_VCR model outperforms both the state-of-the-art (SOTA) VQA model and the R2C VCR model.
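
A minimal sketch of the graph-convolution step only: one GCN layer applied to a small retrieved knowledge subgraph. The adjacency matrix and node features below are random stand-ins for a ConceptNet subgraph and its initial concept embeddings, not data from the paper.

```python
# Hedged sketch: a single GCN layer producing a knowledge embedding from a toy subgraph.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Symmetrically normalized propagation: D^{-1/2} (A + I) D^{-1/2} X W
        a_hat = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(self.lin(norm @ x))

n_nodes, in_dim = 12, 300                        # e.g., 12 retrieved concepts
adj = (torch.rand(n_nodes, n_nodes) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()              # make the toy graph undirected
x = torch.randn(n_nodes, in_dim)                 # initial concept embeddings

knowledge_embedding = GCNLayer(in_dim, 128)(x, adj).mean(dim=0)
print(knowledge_embedding.shape)                 # torch.Size([128]) graph-level summary
```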

Graph Reasoning and Context Fusion for Multi-Task, Multi-Hop Question Answering (다중 작업, 다중 홉 질문 응답을 위한 그래프 추론 및 맥락 융합)

  • Lee, Sangui; Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.10 no.8 / pp.319-330 / 2021
  • Recently, in the field of open-domain natural language question answering, multi-task, multi-hop question answering has been studied extensively. In this paper, we propose a novel deep neural network model that uses hierarchical graphs to answer such multi-task, multi-hop questions effectively. The proposed model extracts different levels of contextual information from multiple paragraphs using hierarchical graphs and graph neural networks, and then uses that information to predict the answer type, supporting sentences, and answer span simultaneously. Through experiments on the HotpotQA benchmark dataset, we demonstrate the high performance and positive effects of the proposed model.
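
A minimal sketch of the multi-task output side only: given contextual vectors for sentences and tokens (random stand-ins here for the hierarchical-graph encoder's output), separate heads predict the answer type, supporting sentences, and an answer span. The head shapes and thresholds are assumptions.

```python
# Hedged sketch: multi-task heads for answer type, supporting facts, and span boundaries.
import torch
import torch.nn as nn

hidden = 256
n_sentences, n_tokens = 30, 400

answer_type_head = nn.Linear(hidden, 3)          # e.g., span / yes / no
support_head = nn.Linear(hidden, 1)              # per-sentence supporting-fact score
span_start_head = nn.Linear(hidden, 1)           # per-token span boundaries
span_end_head = nn.Linear(hidden, 1)

question_vec = torch.randn(hidden)               # pooled question/context representation
sentence_vecs = torch.randn(n_sentences, hidden)
token_vecs = torch.randn(n_tokens, hidden)

answer_type = answer_type_head(question_vec).argmax()
supporting = torch.sigmoid(support_head(sentence_vecs)).squeeze(-1) > 0.5
start = span_start_head(token_vecs).squeeze(-1).argmax()
end = span_end_head(token_vecs).squeeze(-1).argmax()
print(int(answer_type), int(supporting.sum()), int(start), int(end))
```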

PowerShell-based Malware Detection Method Using Command Execution Monitoring and Deep Learning (명령 실행 모니터링과 딥 러닝을 이용한 파워셸 기반 악성코드 탐지 방법)

  • Lee, Seung-Hyeon; Moon, Jong-Sub
    • Journal of the Korea Institute of Information Security & Cryptology / v.28 no.5 / pp.1197-1207 / 2018
  • PowerShell is a command-line shell and scripting language built on the .NET framework, and it has several advantages as an attack tool, including built-in support on Windows, easy code concealment and persistence, and a variety of pen-test frameworks. Accordingly, malware using PowerShell is increasing rapidly, and conventional malware detection techniques are limited in coping with it. In this paper, we propose an improved monitoring method for observing commands executed in PowerShell, together with a deep learning based malware classification model that extracts features from commands using a Convolutional Neural Network (CNN) and feeds them to a Recurrent Neural Network (RNN) according to the order of execution. Testing the proposed model with 5-fold cross validation on 1,916 PowerShell-based malware samples collected from a malware sharing site and 38,148 benign scripts disclosed by an obfuscation detection study, we show that the model effectively detects malware with about a 97% True Positive Rate (TPR) and a 1% False Positive Rate (FPR).
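
A minimal sketch of the two-stage idea described above: a character-level CNN embeds each executed command, and a recurrent network consumes the command embeddings in execution order to classify the session. The vocabulary size, dimensions, and the choice of a GRU are assumptions, not the authors' exact model.

```python
# Hedged sketch: per-command character CNN + GRU over the command sequence (synthetic data).
import torch
import torch.nn as nn

class CommandSequenceClassifier(nn.Module):
    def __init__(self, n_chars=128, emb=32, conv_ch=64, hidden=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, emb)
        self.conv = nn.Conv1d(emb, conv_ch, kernel_size=5, padding=2)
        self.rnn = nn.GRU(conv_ch, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)          # benign vs. malicious

    def forward(self, commands):                 # commands: (n_commands, max_chars) char ids
        e = self.char_emb(commands).transpose(1, 2)             # (n_cmds, emb, chars)
        cmd_vecs = torch.relu(self.conv(e)).max(dim=2).values   # one vector per command
        _, h = self.rnn(cmd_vecs.unsqueeze(0))   # run over commands in execution order
        return self.out(h[-1]).squeeze(0)        # logits for the whole session

# Synthetic session: 6 commands, each padded/truncated to 80 character ids.
session = torch.randint(0, 128, (6, 80))
print(CommandSequenceClassifier()(session))
```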