• Title/Summary/Keyword: sequence-to-sequence model


A Dialogue System using CNN Sequence-to-Sequence (CNN Sequence-to-Sequence를 이용한 대화 시스템 생성)

  • Seong, Su-Jin;Sin, Chang-Uk;Park, Seong-Jae;Cha, Jeong-Won
    • Annual Conference on Human and Language Technology / 2018.10a / pp.151-154 / 2018
  • In this paper, we develop a Korean dialogue system using a CNN Seq2Seq architecture. Conventional Seq2Seq feeds the data into an RNN or one of its variants and generates the output sequence from the hidden-layer embedding obtained after the input has been fully consumed. We trained a dialogue model that generates a response utterance for each input utterance with CNN Seq2Seq and measured its performance. The CNN model was trained on about 120,000 utterance pairs and tested on 10,000 utterance pairs. In the evaluation, the proposed model outperformed the existing RNN-based model.

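The convolutional encoder-decoder idea in this abstract, processing the whole input utterance with convolutions instead of a recurrent state and then attending over encoder positions while decoding, can be sketched roughly as follows. This is a minimal PyTorch sketch under assumed toy dimensions and vocabulary size, not the authors' model or training setup.

```python
# Toy convolutional seq2seq: GLU-gated CNN encoder + causal-CNN decoder with
# dot-product attention. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvSeq2Seq(nn.Module):
    def __init__(self, vocab=8000, dim=256, layers=4, k=3):
        super().__init__()
        self.src_emb = nn.Embedding(vocab, dim)
        self.tgt_emb = nn.Embedding(vocab, dim)
        self.enc = nn.ModuleList([nn.Conv1d(dim, 2 * dim, k, padding=k // 2) for _ in range(layers)])
        self.dec = nn.ModuleList([nn.Conv1d(dim, 2 * dim, k) for _ in range(layers)])
        self.k = k
        self.proj = nn.Linear(2 * dim, vocab)

    def forward(self, src, tgt_in):
        # Encoder: gated convolutions over the whole source utterance at once.
        e = self.src_emb(src).transpose(1, 2)                 # (B, dim, S)
        for conv in self.enc:
            e = F.glu(conv(e), dim=1)
        enc = e.transpose(1, 2)                               # (B, S, dim)

        # Decoder: left-pad so each position only sees earlier target tokens (causal).
        d = self.tgt_emb(tgt_in).transpose(1, 2)              # (B, dim, T)
        for conv in self.dec:
            d = F.glu(conv(F.pad(d, (self.k - 1, 0))), dim=1)
        dec = d.transpose(1, 2)                               # (B, T, dim)

        # Dot-product attention from each decoder position to all encoder positions.
        attn = torch.softmax(dec @ enc.transpose(1, 2), dim=-1)   # (B, T, S)
        ctx = attn @ enc                                           # (B, T, dim)
        return self.proj(torch.cat([dec, ctx], dim=-1))            # (B, T, vocab)

model = ConvSeq2Seq()
src = torch.randint(0, 8000, (2, 12))      # two source utterances of length 12
tgt = torch.randint(0, 8000, (2, 9))       # shifted target (response) tokens
print(model(src, tgt).shape)               # torch.Size([2, 9, 8000])
```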

M2M Transformation Rules for Automatic Test Case Generation from Sequence Diagram (시퀀스 다이어그램으로부터 테스트 케이스 자동 생성을 위한 M2M(Model-to-Model) 변환 규칙)

  • Kim, Jin-a;Kim, Su Ji;Seo, Yongjin;Cheon, Eunyoung;Kim, Hyeon Soo
    • KIISE Transactions on Computing Practices / v.22 no.1 / pp.32-37 / 2016
  • In model-based testing using sequence diagrams, test cases are derived automatically from the sequence diagrams. To generate test cases, scenarios must be identified and represented as a sequence diagram, and test paths satisfying the test coverage must be extracted from it. However, it is hard to extract test paths automatically from a sequence diagram because loop, opt, and alt information is represented with CombinedFragments. To resolve this problem, we propose a transformation process that converts a sequence diagram into an activity diagram, which represents the scenarios as control flows. We then generate test cases from the activity diagram by applying a test coverage criterion. Finally, we present a case study of test case generation from a sequence diagram.
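
The abstract's path-extraction step can be illustrated with a small sketch: once the loop/opt/alt CombinedFragments have been flattened into a control-flow graph (the activity-diagram view), test paths can be enumerated under a bounded-loop coverage criterion. The graph, node names, and coverage bound below are illustrative assumptions, not the paper's transformation rules.

```python
# Illustrative sketch: enumerate test paths over a control-flow graph obtained
# from an activity diagram. The example graph has an alt branch (A1 vs A2) and
# a loop edge back from L to D, as CombinedFragments would induce.

def test_paths(graph, start, end, max_visits=2):
    """Enumerate paths from start to end, visiting each node at most max_visits
    times so that loops contribute a bounded number of iterations."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == end:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if path.count(nxt) < max_visits:
                stack.append((nxt, path + [nxt]))
    return paths

cfg = {
    "start": ["D"],
    "D": ["A1", "A2"],      # alt: two alternative interactions
    "A1": ["L"],
    "A2": ["L"],
    "L": ["D", "end"],      # loop: iterate again or exit
}

for p in test_paths(cfg, "start", "end"):
    print(" -> ".join(p))   # each printed path is one abstract test case
```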

An Efficient PN Sequence Embedding and Detection Method for High Quality Digital Audio Watermarking (고음질 디지털 오디오 워터마킹을 위한 효율적인 PN 시퀸스 삽입 및 검출 방법)

  • 김현욱;오현오;김연정;윤대희
    • Journal of Broadcast Engineering / v.6 no.1 / pp.21-31 / 2001
  • In a PN-sequence based audio watermarking system, the PN sequence is shaped by a filter derived from the psychoacoustic model to increase robustness and inaudibility. The psychoacoustic model, however, must be computed for each audio segment and imposes a heavy computational load. In this paper, we propose an efficient watermarking system that adopts a fixed-shape perceptual filter as a substitute for the psychoacoustic-model-derived filter. The proposed filter shapes the PN sequence to be inaudible and enables robust watermark embedding in a simple manner. Moreover, we propose an architecture with a PN-sequence compensation filter in the watermark detector to increase the correlation between the watermark and the PN sequence. With the proposed architecture, blind watermark detection performance is enhanced.

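The embedding-and-correlation pipeline described above can be sketched in a few lines of numpy: a ±1 pseudo-noise sequence is spectrally shaped by a fixed filter, added to the host audio at low amplitude, and detected blind by correlating against the reference sequence. The FIR taps, segment length, and embedding strength are assumptions for illustration; the paper's fixed-shape perceptual filter and detector-side compensation filter are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16384                                     # one audio segment (samples)
host = rng.standard_normal(N)                 # stand-in for the host audio segment

pn = rng.choice([-1.0, 1.0], size=N)          # pseudo-noise watermark sequence
fir = np.ones(8) / 8.0                        # stand-in fixed shaping filter (simple low-pass)
shaped = np.convolve(pn, fir, mode="same")    # spectrally shaped PN sequence
shaped /= np.std(shaped)                      # normalize so the gain below sets the level

alpha = 0.05                                  # embedding strength: inaudibility vs. robustness
watermarked = host + alpha * shaped

def correlation(signal, ref):
    """Blind detection statistic: normalized correlation with the reference sequence."""
    return float(np.dot(signal, ref) / len(ref))

print("watermarked segment:", correlation(watermarked, shaped))   # ~alpha, clearly above noise
print("unmarked segment:   ", correlation(host, shaped))          # ~0
```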

Error Correction for Korean Speech Recognition using an LSTM-based Sequence-to-Sequence Model

  • Jin, Hye-won;Lee, A-Hyeon;Chae, Ye-Jin;Park, Su-Hyun;Kang, Yu-Jin;Lee, Soowon
    • Journal of the Korea Society of Computer and Information / v.26 no.10 / pp.1-7 / 2021
  • Most research on correcting speech recognition errors has focused on English, so research on Korean speech recognition remains insufficient. Compared to English, however, Korean speech recognition produces many errors due to linguistic characteristics of the Korean language such as fortis and liaison, so research targeted at Korean is needed. Furthermore, earlier works relied primarily on edit-distance algorithms and syllable restoration rules, which makes it difficult to correct the error types caused by fortis and liaison. In this paper, we propose a context-sensitive post-processing model for speech recognition that uses an LSTM-based sequence-to-sequence model with the Bahdanau attention mechanism to correct pronunciation-induced Korean speech recognition errors. Experiments showed that the model improved recognition performance from 64% to 77% for fortis errors, from 74% to 90% for liaison errors, and from 69% to 84% on average. Based on these results, the proposed model appears applicable to real-world speech recognition applications.
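
The general architecture named in this abstract, an LSTM encoder-decoder with Bahdanau (additive) attention applied to recognized text as a post-processing corrector, looks roughly like the sketch below. Vocabulary size, dimensions, and character-level tokenization are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class BahdanauAttention(nn.Module):
    def __init__(self, hid=256):
        super().__init__()
        self.w_enc = nn.Linear(hid, hid)
        self.w_dec = nn.Linear(hid, hid)
        self.v = nn.Linear(hid, 1)

    def forward(self, dec_h, enc_out):               # dec_h: (B, hid), enc_out: (B, S, hid)
        score = self.v(torch.tanh(self.w_enc(enc_out) + self.w_dec(dec_h).unsqueeze(1)))
        attn = torch.softmax(score, dim=1)            # weights over source positions
        return (attn * enc_out).sum(dim=1)            # context vector (B, hid)

class Corrector(nn.Module):
    def __init__(self, vocab=2000, emb=128, hid=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.decoder = nn.LSTMCell(emb + hid, hid)
        self.attn = BahdanauAttention(hid)
        self.out = nn.Linear(hid, vocab)

    def forward(self, noisy, target_in):
        enc_out, (h, c) = self.encoder(self.emb(noisy))        # noisy ASR hypothesis
        h, c = h.squeeze(0), c.squeeze(0)
        logits = []
        for t in range(target_in.size(1)):                     # teacher forcing on reference text
            ctx = self.attn(h, enc_out)
            h, c = self.decoder(torch.cat([self.emb(target_in[:, t]), ctx], dim=-1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                       # (B, T, vocab)

model = Corrector()
noisy = torch.randint(0, 2000, (4, 20))       # recognized character ids
ref_in = torch.randint(0, 2000, (4, 18))      # shifted reference character ids
print(model(noisy, ref_in).shape)             # torch.Size([4, 18, 2000])
```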

Prediction of dam inflow based on LSTM-s2s model using Luong attention (Attention 기법을 적용한 LSTM-s2s 모델 기반 댐유입량 예측 연구)

  • Lee, Jonghyeok;Choi, Suyeon;Kim, Yeonjoo
    • Journal of Korea Water Resources Association / v.55 no.7 / pp.495-504 / 2022
  • With the recent development of artificial intelligence, the Long Short-Term Memory (LSTM) model, which is efficient for time-series analysis, is being used to increase the accuracy of dam inflow prediction. In this study, we predict the inflow of the Soyang River dam using an LSTM model in a sequence-to-sequence configuration (LSTM-s2s) and the same model with an attention mechanism (LSTM-s2s with attention), which can further improve LSTM performance. Hourly inflow, temperature, and precipitation data from 2013 to 2020 were used to train, validate, and test the models. As a result, LSTM-s2s with attention showed better performance than LSTM-s2s in general as well as in predicting peak values. Both models captured the inflow pattern during peaks, but detailed hourly variability was only partially reproduced. We conclude that the proposed LSTM-s2s with attention can improve inflow forecasting despite its limits in hourly prediction.
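
A rough sketch of the LSTM-s2s-with-attention setup described above: an LSTM encoder reads a window of past hourly inflow, temperature, and precipitation values, and an LSTM decoder rolls out future hourly inflow while attending to encoder states with Luong-style (multiplicative) attention. The window length, forecast horizon, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class InflowS2S(nn.Module):
    def __init__(self, n_feat=3, hid=64, horizon=24):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_feat, hid, batch_first=True)
        self.decoder = nn.LSTMCell(1, hid)           # previous inflow value as decoder input
        self.w_a = nn.Linear(hid, hid, bias=False)   # Luong "general" attention score
        self.out = nn.Linear(2 * hid, 1)

    def forward(self, past, last_inflow):
        enc_out, (h, c) = self.encoder(past)         # past: (B, T_in, n_feat)
        h, c = h.squeeze(0), c.squeeze(0)
        y, preds = last_inflow, []                   # (B, 1): most recent observed inflow
        for _ in range(self.horizon):
            h, c = self.decoder(y, (h, c))
            score = torch.bmm(self.w_a(enc_out), h.unsqueeze(2))   # (B, T_in, 1)
            attn = torch.softmax(score, dim=1)
            ctx = (attn * enc_out).sum(dim=1)                      # context over past hours
            y = self.out(torch.cat([h, ctx], dim=-1))              # next-hour inflow
            preds.append(y)
        return torch.cat(preds, dim=1)               # (B, horizon)

model = InflowS2S()
past = torch.randn(8, 72, 3)          # 72 past hours of inflow/temperature/precipitation
last = torch.randn(8, 1)
print(model(past, last).shape)        # torch.Size([8, 24])
```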

End-to-end Document Summarization using Copy Mechanism and Input Feeding (Copy Mechanism과 Input Feeding을 이용한 End-to-End 한국어 문서요약)

  • Choi, Kyoungho;Lee, Changki
    • 한국어정보학회:학술대회논문집 / 2016.10a / pp.56-61 / 2016
  • In this paper, we apply a sequence-to-sequence model to Korean document summarization as an abstractive summarization method, and improve system performance with an RNN search model that uses a copy mechanism and input feeding. Experiments on a Korean document summarization dataset built from collected online news articles (train set 30,291 documents, development set 3,786 documents, test set 3,705 documents) show that the model with input feeding and the copy mechanism achieves the best performance, with morpheme-level ROUGE-1 35.92, ROUGE-2 15.37, and ROUGE-L 29.45.

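The two mechanisms named in this abstract can be shown compactly on a single decoding step: input feeding concatenates the previous attentional hidden state to the current decoder input, and the copy mechanism mixes the vocabulary distribution with the attention distribution scattered onto source token ids. Dimensions and the surrounding encoder are assumptions; this is not the paper's exact RNN search model.

```python
import torch
import torch.nn as nn

class CopyStep(nn.Module):
    def __init__(self, vocab=30000, emb=128, hid=256):
        super().__init__()
        self.cell = nn.LSTMCell(emb + hid, hid)       # input feeding: embedding + previous attentional state
        self.attn = nn.Linear(hid, hid, bias=False)
        self.attn_out = nn.Linear(2 * hid, hid)
        self.gen = nn.Linear(2 * hid, 1)              # p_gen gate: generate vs. copy
        self.vocab_out = nn.Linear(hid, vocab)

    def forward(self, y_emb, prev_attn_h, state, enc_out, src_ids):
        h, c = self.cell(torch.cat([y_emb, prev_attn_h], dim=-1), state)
        score = torch.bmm(self.attn(enc_out), h.unsqueeze(2)).squeeze(2)      # (B, S)
        a = torch.softmax(score, dim=1)                                       # attention over source
        ctx = torch.bmm(a.unsqueeze(1), enc_out).squeeze(1)                   # (B, hid)
        attn_h = torch.tanh(self.attn_out(torch.cat([h, ctx], dim=-1)))       # fed to the next step
        p_gen = torch.sigmoid(self.gen(torch.cat([h, ctx], dim=-1)))          # (B, 1)
        vocab_dist = torch.softmax(self.vocab_out(attn_h), dim=-1)
        copy_dist = torch.zeros_like(vocab_dist).scatter_add(1, src_ids, a)   # attention mass on source ids
        final_dist = p_gen * vocab_dist + (1 - p_gen) * copy_dist
        return final_dist, attn_h, (h, c)

step = CopyStep()
B, S = 2, 15
enc_out = torch.randn(B, S, 256)
src_ids = torch.randint(0, 30000, (B, S))
state = (torch.zeros(B, 256), torch.zeros(B, 256))
dist, attn_h, state = step(torch.randn(B, 128), torch.zeros(B, 256), state, enc_out, src_ids)
print(dist.shape, dist.sum(dim=1))   # (2, 30000); each row sums to ~1
```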

A Fuzzing Seed Generation Technique Using Natural Language Processing Model (자연어 처리 모델을 활용한 퍼징 시드 생성 기법)

  • Kim, DongYonug;Jeon, SangHoon;Ryu, MinSoo;Kim, Huy Kang
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.2 / pp.417-437 / 2022
  • The quality of the fuzzing seed file is one of the important factors in discovering vulnerabilities faster. Although prior seed generation paradigms that use dynamic taint analysis and symbolic execution enhanced fuzzing efficiency, they are not widely applied owing to their high complexity and the expertise they require. This study proposes the DDRFuzz system, which creates seed files using sequence-to-sequence models. We evaluated DDRFuzz on five open-source applications that take multimedia input files. According to the experimental results, DDRFuzz showed the best fuzzing efficiency compared with state-of-the-art studies.
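
The data path implied by the abstract can be sketched as follows: seed files are serialized into byte-token sequences for a sequence-to-sequence model, and a generated sequence is written back out as a new fuzzing seed. The `model.generate` call is a hypothetical placeholder for a trained seq2seq model; DDRFuzz's actual interface and training procedure are not shown in the abstract.

```python
from pathlib import Path

MAX_LEN = 2048                       # assumed cap on modeled bytes per seed

def file_to_tokens(path):
    """Read a seed file and map each byte to an integer token (0-255)."""
    return list(Path(path).read_bytes()[:MAX_LEN])

def tokens_to_file(tokens, path):
    """Clamp generated tokens to valid bytes and write a new seed file."""
    Path(path).write_bytes(bytes(max(0, min(255, t)) for t in tokens))

def generate_seeds(model, corpus_dir, out_dir):
    """Run a (hypothetical) trained seq2seq model over an existing seed corpus
    and write its outputs as additional seed files for the fuzzer."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, seed in enumerate(sorted(Path(corpus_dir).glob("*"))):
        src = model.generate(file_to_tokens(seed))   # hypothetical seq2seq interface
        tokens_to_file(src, out / f"gen_{i:04d}.bin")
```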

Developing a Reactive System Model from a Scenario-Based Specification Model (시나리오 기반 명세 모델로부터 반응형 시스템 모델 개발)

  • Kwon, Ryoung-Kwo;Kwon, Gi-Hwon
    • Journal of Internet Computing and Services / v.13 no.1 / pp.99-106 / 2012
  • Analyzing external inputs and the interactions between objects is an important and difficult task in designing and modeling a reactive system that consists of multiple objects. Considerable effort is also required to confirm that the reactive system satisfies its requirements under all possible circumstances. In this paper, we build a scenario-based specification model from the requirements using LSC (Live Sequence Chart), which extends MSC (Message Sequence Chart) with richer syntax and semantics. A reactive system model that satisfies all requirements for each object in the system can then be created automatically through LTL synthesis. Finally, we propose a reactive system development method based on an iterative process that transforms the reactive system model into code.

Sequence-to-Sequence based Mobile Trajectory Prediction Model in Wireless Network (무선 네트워크에서 시퀀스-투-시퀀스 기반 모바일 궤적 예측 모델)

  • Bang, Sammy Yap Xiang;Yang, Huigyu;Raza, Syed M.;Choo, Hyunseung
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.517-519 / 2022
  • In the 5G network environment, proactive mobility management is essential because 5G mobile networks provide new ultra-low-latency services through dense deployment of small cells. Systems that actively control device handover are therefore becoming important, and predicting the mobile trajectory during handover is essential. The sequence-to-sequence model is a deep learning model that converts sequences in one domain into sequences in another domain and is mainly used in natural language processing. In this paper, we develop a system for predicting mobile trajectories in a wireless network environment using a sequence-to-sequence model. Handover speed can be increased by applying our sequence-to-sequence model in an actual mobile network environment.
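
The trajectory flavor of sequence-to-sequence described above can be sketched as a small regression model: an encoder LSTM reads a window of past (x, y) positions and a decoder LSTM rolls out the next positions, which a handover controller could then consume. Window length, forecast horizon, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryS2S(nn.Module):
    def __init__(self, hid=64, horizon=5):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(2, hid, batch_first=True)
        self.decoder = nn.LSTMCell(2, hid)
        self.out = nn.Linear(hid, 2)

    def forward(self, past):                       # past: (B, T, 2) observed positions
        _, (h, c) = self.encoder(past)
        h, c = h.squeeze(0), c.squeeze(0)
        pos, preds = past[:, -1, :], []            # start from the last observed position
        for _ in range(self.horizon):
            h, c = self.decoder(pos, (h, c))
            pos = self.out(h)                      # next predicted (x, y)
            preds.append(pos)
        return torch.stack(preds, dim=1)           # (B, horizon, 2)

model = TrajectoryS2S()
past = torch.randn(16, 30, 2)                      # 30 past position samples per device
print(model(past).shape)                           # torch.Size([16, 5, 2])
```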

Improving transformer-based acoustic model performance using sequence discriminative training (Sequence discriminative training 기법을 사용한 트랜스포머 기반 음향 모델 성능 향상)

  • Lee, Chae-Won;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.41 no.3 / pp.335-341 / 2022
  • In this paper, we adopt the transformer, which shows remarkable performance in natural language processing, as the acoustic model of a hybrid speech recognition system. The transformer acoustic model uses attention structures to process sequential data and achieves high performance at low computational cost. This paper proposes improving the performance of the transformer acoustic model by applying each of four sequence discriminative training algorithms, a weighted finite-state transducer (wFST)-based learning approach used in existing DNN-HMM models. Compared to Cross Entropy (CE) training, the sequence discriminative methods show about a 5 % relative improvement in Word Error Rate (WER).
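
A hedged sketch of the transformer acoustic model described above: self-attention layers map acoustic feature frames to per-frame senone posteriors for a hybrid DNN-HMM system. Feature and senone dimensions are assumptions, and the four wFST-based sequence discriminative criteria compared in the paper sit on top of these outputs and are not reproduced here.

```python
import torch
import torch.nn as nn

class TransformerAM(nn.Module):
    def __init__(self, n_feat=80, d_model=256, n_heads=4, n_layers=6, n_senones=4000):
        super().__init__()
        self.in_proj = nn.Linear(n_feat, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=1024,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, n_senones)

    def forward(self, feats):                      # feats: (B, T, n_feat), e.g. log-mel frames
        x = self.encoder(self.in_proj(feats))
        return self.out(x)                         # per-frame senone logits (B, T, n_senones)

model = TransformerAM()
feats = torch.randn(2, 300, 80)                    # about 3 seconds of 10 ms frames
logits = model(feats)
# Frame-level cross-entropy against alignments is the baseline criterion; the
# sequence discriminative objectives compared in the paper would replace that loss.
print(logits.shape)                                # torch.Size([2, 300, 4000])
```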