• Title/Summary/Keyword: Bidirectional RNN

Named-entity Recognition Using Bidirectional LSTM CRFs (Bidirectional LSTM CRFs를 이용한 한국어 개체명 인식)

  • Song, Chi-Yun;Yang, Sung-Min;Kang, Sangwoo
    • Annual Conference on Human and Language Technology / 2017.10a / pp.321-323 / 2017
  • Named-entity recognition means extracting expressions that carry a distinct meaning within a document, such as person names, organization names, place names, times, and dates, and determining their types. The Bidirectional LSTM-CRF model is an RNN-based deep learning model well suited to sequential data and shows the best performance in named-entity recognition research. In this paper, we use a Bidirectional LSTM-CRF model for Korean named-entity recognition, constructing the input features not only from words but also from a part-of-speech embedding model and a named-entity dictionary. We also show that optimizing the vector size of the input features improves performance over the baseline model.
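
Below is a minimal tf.keras sketch of the model family the abstract describes: a bidirectional LSTM tagger whose input concatenates word, part-of-speech, and named-entity-dictionary embeddings. All vocabulary and layer sizes are illustrative assumptions, and the CRF output layer is omitted for brevity (an add-on package such as tensorflow-addons can supply one), so this is a plain BiLSTM softmax tagger, not the paper's exact model.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN, N_TAGS = 50, 10          # illustrative sizes, not from the paper
words = layers.Input(shape=(MAX_LEN,), name="word_ids")
pos   = layers.Input(shape=(MAX_LEN,), name="pos_ids")
dic   = layers.Input(shape=(MAX_LEN,), name="ne_dict_ids")

w = layers.Embedding(30000, 100)(words)   # word embeddings
p = layers.Embedding(50, 20)(pos)         # part-of-speech embeddings
d = layers.Embedding(10, 10)(dic)         # named-entity dictionary features

x = layers.Concatenate()([w, p, d])
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
tag_scores = layers.Dense(N_TAGS, activation="softmax")(x)  # per-token tag scores

model = Model([words, pos, dic], tag_scores)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```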

Video Compression Standard Prediction using Attention-based Bidirectional LSTM (어텐션 알고리듬 기반 양방향성 LSTM을 이용한 동영상의 압축 표준 예측)

  • Kim, Sangmin;Park, Bumjun;Jeong, Jechang
    • Journal of Broadcast Engineering / v.24 no.5 / pp.870-878 / 2019
  • In this paper, we propose an attention-based BLSTM for predicting the compression standard of a video. Recently in NLP, many studies have used RNN structures to predict the next word of a sentence and to classify and translate sentences by their semantics, and these have been commercialized as chatbots, AI speakers, translator applications, and so on. LSTM was designed to solve the vanishing gradient problem of RNNs and is widely used in NLP. The proposed algorithm makes video compression standard prediction possible by applying a BLSTM and an attention mechanism, which focuses on the most important word in a sentence, to the bitstream of a video rather than to a natural-language sentence.
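
As a hedged illustration of this approach, the sketch below classifies a fixed-length sequence of bitstream bytes with a BLSTM plus a simple additive attention layer; the sequence length, layer sizes, number of codec classes, and the exact attention formulation are assumptions, not details from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

SEQ_LEN, N_CODECS = 512, 4   # illustrative: 512 bitstream bytes, 4 standards

inp = layers.Input(shape=(SEQ_LEN,))
x = layers.Embedding(256, 64)(inp)     # one embedding per possible byte value
h = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)

# additive attention: score each time step, softmax over time, then take
# the attention-weighted sum of the BLSTM states as a sequence summary
scores = layers.Dense(1)(h)                          # (batch, time, 1)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])

out = layers.Dense(N_CODECS, activation="softmax")(context)
model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```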

Emotion Analysis Using a Bidirectional LSTM for Word Sense Disambiguation (양방향 LSTM을 적용한 단어의미 중의성 해소 감정분석)

  • Ki, Ho-Yeon;Shin, Kyung-shik
    • The Journal of Bigdata / v.5 no.1 / pp.197-208 / 2020
  • Lexical ambiguity means that a word can be interpreted as having two or more meanings, as with homonyms and polysemes, and words expressing emotion are frequently ambiguous in this way. Because such words project human psychology, they convey specific and rich contexts, which gives rise to lexical ambiguity. In this study, we propose an emotion classification model that disambiguates word sense using a bidirectional LSTM. It is based on the assumption that if the information of the surrounding context is fully reflected, the problem of lexical ambiguity can be solved and the emotion a sentence expresses can be resolved to a single one. The bidirectional LSTM is an algorithm frequently used in natural language processing research that requires contextual information, and it is used in this study to learn context. GloVe embeddings are used as the embedding layer of the model, and its performance was verified against models using plain LSTM and RNN algorithms. Such a framework could contribute to various fields, including marketing, by connecting the emotions of SNS users to their consumption desires.
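
A minimal tf.keras sketch of this model family follows: a bidirectional LSTM sentence classifier over a frozen GloVe embedding layer. All sizes are illustrative, and the GloVe matrix is a random placeholder here so the snippet runs stand-alone; in practice each row would hold the pre-trained vector of the corresponding vocabulary word.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, DIM, MAX_LEN, N_CLASSES = 20000, 100, 60, 2   # illustrative sizes
# placeholder for pre-trained GloVe vectors (random, for a runnable sketch)
glove_matrix = np.random.normal(size=(VOCAB, DIM)).astype("float32")

inp = layers.Input(shape=(MAX_LEN,))
x = layers.Embedding(
    VOCAB, DIM,
    embeddings_initializer=tf.keras.initializers.Constant(glove_matrix),
    trainable=False)(inp)
x = layers.Bidirectional(layers.LSTM(128))(x)   # reads left and right context
out = layers.Dense(N_CLASSES, activation="softmax")(x)

model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```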

Understanding recurrent neural network for texts using English-Korean corpora

  • Lee, Hagyeong;Song, Jongwoo
    • Communications for Statistical Applications and Methods / v.27 no.3 / pp.313-326 / 2020
  • Deep learning is the most important key to the development of artificial intelligence (AI). There are several distinguishable neural network architectures, such as MLP, CNN, and RNN. Among them, we try to understand the Recurrent Neural Network (RNN), which differs from the other networks in handling sequential data, including time series and texts. As one of the main recent tasks in Natural Language Processing (NLP), we consider Neural Machine Translation (NMT) using RNNs. We also summarize the fundamental structures of recurrent networks and some topics in representing natural words as reasonable numeric vectors. We organize topics to understand the estimation procedure, from representing input source sequences to predicting target translated sequences. In addition, we apply multiple translation models with Gated Recurrent Units (GRUs) in Keras to English-Korean sentences comprising about 26,000 pairwise sequences in total from two different corpora, colloquialism and news. We verified some crucial factors that influence the quality of training. We found that the loss decreases with more recurrent dimensions and with a bidirectional RNN in the encoder when dealing with short sequences. We also computed BLEU scores, the main measure of translation performance, and compared them with the scores from Google Translate on the same test sentences. We sum up some difficulties in training a proper translation model as well as in dealing with the Korean language. The use of Keras in Python for the overall task, from processing raw texts to evaluating the translation model, also allows us to include some useful functions and vocabulary libraries.
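
The paper's setup translates naturally into a Keras sketch: a bidirectional GRU encoder whose final states initialize a GRU decoder trained with teacher forcing. The vocabulary and layer sizes below are illustrative assumptions, not the paper's settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

SRC_VOCAB, TGT_VOCAB, UNITS = 8000, 8000, 256   # illustrative sizes

# encoder: bidirectional GRU; final forward/backward states are concatenated
enc_in = layers.Input(shape=(None,))
enc_emb = layers.Embedding(SRC_VOCAB, 128)(enc_in)
_, fwd, bwd = layers.Bidirectional(layers.GRU(UNITS, return_state=True))(enc_emb)
state = layers.Concatenate()([fwd, bwd])

# decoder: unidirectional GRU initialized with the encoder state,
# trained with teacher forcing on the shifted target sequence
dec_in = layers.Input(shape=(None,))
dec_emb = layers.Embedding(TGT_VOCAB, 128)(dec_in)
dec_out = layers.GRU(2 * UNITS, return_sequences=True)(dec_emb, initial_state=state)
probs = layers.Dense(TGT_VOCAB, activation="softmax")(dec_out)

model = Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```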

CRNN-Based Korean Phoneme Recognition Model with CTC Algorithm (CTC를 적용한 CRNN 기반 한국어 음소인식 모델 연구)

  • Hong, Yoonseok;Ki, Kyungseo;Gweon, Gahgene
    • KIPS Transactions on Software and Data Engineering / v.8 no.3 / pp.115-122 / 2019
  • For Korean phoneme recognition, Hidden Markov-Gaussian Mixture models (HMM-GMM) or hybrid models that combine artificial neural networks with HMMs have mainly been used. However, such approaches have the limitation that they require force-aligned corpus training data manually annotated by experts. Recently, researchers have used neural phoneme recognition models that combine recurrent neural network (RNN)-based structures with the connectionist temporal classification (CTC) algorithm to overcome the need for manually annotated training data. Yet, in terms of implementation, these RNN-based models have another difficulty: the amount of required data grows as the structure becomes more sophisticated. This data-size problem is particularly acute for Korean, which lacks refined corpora. In this study, we use the CTC algorithm, which does not require force alignment, to build a Korean phoneme recognition model. Specifically, the model is based on a convolutional neural network (CNN), which requires relatively little data and can be trained faster than RNN-based models. We present results from two different experiments and the resulting best-performing model, which distinguishes 49 Korean phonemes. The best-performing model combines a CNN with a 3-hop Bidirectional LSTM, achieving a final Phoneme Error Rate (PER) of 3.26. This PER is a considerable improvement over existing Korean phoneme recognition models, which report PERs ranging from 10 to 12.
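
A hedged sketch of a CRNN trained with CTC follows: a small CNN front-end over the spectrogram, a bidirectional LSTM, and per-frame posteriors over 50 symbols (49 phonemes plus the CTC blank). The input geometry and layer sizes are assumptions; the paper's exact architecture (e.g. the 3-hop BLSTM) is not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_FRAMES, N_MELS = 200, 80        # illustrative input geometry
N_PHONES = 50                       # 49 Korean phonemes + 1 CTC blank

# CNN front-end over the spectrogram, then a bidirectional LSTM,
# then per-frame phoneme posteriors for CTC training and decoding
inp = layers.Input(shape=(MAX_FRAMES, N_MELS, 1))
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
x = layers.MaxPooling2D(pool_size=(1, 2))(x)       # pool along the mel axis only
x = layers.Reshape((MAX_FRAMES, (N_MELS // 2) * 32))(x)
x = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(x)
posteriors = layers.Dense(N_PHONES, activation="softmax")(x)
model = Model(inp, posteriors)

# CTC aligns unsegmented phoneme labels to frames during training,
# which is what removes the need for a force-aligned corpus
def ctc_loss(labels, y_pred, input_len, label_len):
    return tf.keras.backend.ctc_batch_cost(labels, y_pred, input_len, label_len)
```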

A study on training DenseNet-Recurrent Neural Network for sound event detection (음향 이벤트 검출을 위한 DenseNet-Recurrent Neural Network 학습 방법에 관한 연구)

  • Hyeonjin Cha;Sangwook Park
    • The Journal of the Acoustical Society of Korea / v.42 no.5 / pp.395-401 / 2023
  • Sound Event Detection (SED) aims to identify not only the sound category but also the time interval of target sounds in an audio waveform. It is a critical technique in the fields of acoustic surveillance and monitoring systems. Recently, various models have been introduced through Detection and Classification of Acoustic Scenes and Events (DCASE) Task 4. This paper explores how to design optimal parameters for a DenseNet-based model, an architecture that has achieved outstanding performance in other recognition systems. In the experiments, DenseRNN, the SED model, consists of DenseNet-BC and bidirectional Gated Recurrent Units (GRUs), and is trained with the Mean Teacher method. Using an event-based f-score, performance is evaluated for parameters related to both the model architecture and its training, under the assessment protocol of DCASE Task 4. The experimental results show that performance improves and then saturates near the best value, and that DenseRNN can be trained more effectively without the dropout technique.
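
As a rough sketch of a DenseRNN-style model, the snippet below stacks a DenseNet-style convolutional block and a bidirectional GRU that emits frame-wise event activities. The block sizes and connectivity depth are assumptions, and the Mean Teacher training procedure is omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_FRAMES, N_MELS, N_EVENTS = 256, 64, 10   # illustrative sizes

def dense_block(x, growth=32, n_layers=2):
    # DenseNet-style connectivity: each conv sees all previous feature maps
    for _ in range(n_layers):
        y = layers.BatchNormalization()(x)
        y = layers.Activation("relu")(y)
        y = layers.Conv2D(growth, 3, padding="same")(y)
        x = layers.Concatenate()([x, y])
    return x

inp = layers.Input(shape=(MAX_FRAMES, N_MELS, 1))
x = layers.Conv2D(32, 3, padding="same")(inp)
x = dense_block(x)                                   # 32 + 2*32 = 96 channels
x = layers.MaxPooling2D(pool_size=(1, 4))(x)         # keep the time resolution
x = layers.Reshape((MAX_FRAMES, (N_MELS // 4) * 96))(x)
x = layers.Bidirectional(layers.GRU(128, return_sequences=True))(x)
out = layers.Dense(N_EVENTS, activation="sigmoid")(x)  # frame-wise event activity
model = Model(inp, out)
```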

Adaptive Antenna Muting using RNN-based Traffic Load Prediction (재귀 신경망에 기반을 둔 트래픽 부하 예측을 이용한 적응적 안테나 뮤팅)

  • Ahmadzai, Fazel Haq;Lee, Woongsup
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.4 / pp.633-636 / 2022
  • The reduction of energy consumption at the base station (BS) has recently become more important. In this paper, we consider adaptive muting of antennas based on the predicted future traffic load to reduce energy consumption, where the number of active antennas is adaptively adjusted according to the prediction. Given that traffic load is sequential data, three different RNN structures, namely long short-term memory (LSTM), gated recurrent unit (GRU), and bidirectional LSTM (Bi-LSTM), are considered for future traffic load prediction. Through a performance evaluation based on actual traffic loads collected from the Afghanistan telecom company, we confirm that the traffic load can be estimated accurately and that overall power consumption can be reduced significantly using antenna muting.
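
A minimal sketch of the compared predictors follows: a factory that builds an LSTM, GRU, or Bi-LSTM forecaster over a sliding window of past load samples. The window length and layer sizes are assumptions, not values from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Sequential

WINDOW = 24   # illustrative: forecast from the last 24 load samples

def build_predictor(kind):
    # the three recurrent structures compared in the paper
    rnn = {"lstm":   layers.LSTM(64),
           "gru":    layers.GRU(64),
           "bilstm": layers.Bidirectional(layers.LSTM(64))}[kind]
    model = Sequential([layers.Input(shape=(WINDOW, 1)),
                        rnn,
                        layers.Dense(1)])   # predicted future traffic load
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_predictor("bilstm")
# the forecast would then drive antenna muting, e.g. deactivating
# antennas whenever the predicted load falls below a threshold
```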

Chord-based stepwise Korean Trot music generation technique using RNN-GAN (RNN-GAN을 이용한 코드 기반의 단계적 트로트 음악 생성 기법)

  • Hwang, Seo-Rim;Park, Young-Cheol
    • The Journal of the Acoustical Society of Korea / v.39 no.6 / pp.622-628 / 2020
  • This paper proposes a technique that automatically generates trot music using a Generative Adversarial Network (GAN) model composed of Recurrent Neural Networks (RNNs). The proposed method first creates a chord progression as the skeleton of the piece, then generates the melody and bass in stages on top of that progression, attaching each to the corresponding chords to complete a structured piece. In addition, exploiting the characteristic of trot songs that repeat a structure divided into individual sections such as intro, verse, and chorus, a new chorus chord progression is created from the verse chord progression, extending the length of the generated piece. The quality of the generated music was assessed using subjective and objective evaluation methods, confirming that it has characteristics similar to existing trot music.
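
The staged chord-melody-bass pipeline is beyond a short snippet, but one GAN stage can be sketched as below: an LSTM generator mapping latent noise to per-step note probabilities, and an LSTM discriminator scoring sequences as real or generated. All sizes and the note representation are illustrative assumptions; the paper's staged method would chain such pairs, conditioning each stage on the generated chords.

```python
import tensorflow as tf
from tensorflow.keras import layers, Sequential

SEQ_LEN, N_NOTES, LATENT = 64, 128, 100   # illustrative sizes

# generator: latent noise sequence -> note probabilities per time step
generator = Sequential([
    layers.Input(shape=(SEQ_LEN, LATENT)),
    layers.LSTM(256, return_sequences=True),
    layers.Dense(N_NOTES, activation="softmax"),
])

# discriminator: note sequence -> probability that the sequence is real
discriminator = Sequential([
    layers.Input(shape=(SEQ_LEN, N_NOTES)),
    layers.LSTM(256),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")
```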

Symbolizing Numbers to Improve Neural Machine Translation (숫자 기호화를 통한 신경기계번역 성능 향상)

  • Kang, Cheongwoong;Ro, Youngheon;Kim, Jisu;Choi, Heeyoul
    • Journal of Digital Contents Society / v.19 no.6 / pp.1161-1167 / 2018
  • The development of machine learning has enabled machines to perform delicate tasks that previously only humans could do, and many companies have accordingly introduced machine learning based translators. Existing translators perform well but have problems with number translation: they often mistranslate numbers when the input sentence contains a large number, and the output sentence structure can change completely even if only one number in the input changes. In this paper, we first optimize a neural machine translation model architecture that uses a bidirectional RNN, LSTM, and the attention mechanism, through data cleansing and changes to the dictionary size. We then implement a number-processing algorithm specialized for number translation and apply it to the model to solve the problems above. The paper presents the data cleansing method, an optimal dictionary size, and the number-processing algorithm, together with experimental results on translation performance measured by the BLEU score.
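
The number-symbolization idea can be sketched independently of any particular NMT model: numbers are replaced by placeholder tokens before translation and restored afterwards. The `<NUMi>` token format and the regex below are assumptions for illustration, not the paper's exact scheme.

```python
import re

NUM_RE = re.compile(r"\d[\d,]*(?:\.\d+)?")   # integers with commas, decimals

def symbolize_numbers(sentence):
    """Replace each number with a placeholder so the NMT model sees a
    small, stable set of number symbols instead of raw digit strings."""
    numbers = NUM_RE.findall(sentence)
    for i, num in enumerate(numbers):
        sentence = sentence.replace(num, f"<NUM{i}>", 1)
    return sentence, numbers

def restore_numbers(sentence, numbers):
    # reinsert the original numbers into the translated output
    for i, num in enumerate(numbers):
        sentence = sentence.replace(f"<NUM{i}>", num)
    return sentence

src, nums = symbolize_numbers("The company sold 1,234,567 units in 2018.")
# src == "The company sold <NUM0> units in <NUM1>."
# after translating src, restore_numbers(translation, nums) puts them back
```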