• Title/Summary/Keyword: Sequence-to-Sequence model


Could Decimal-binary Vector be a Representative of DNA Sequence for Classification?

  • Sanjaya, Prima;Kang, Dae-Ki
    • International journal of advanced smart convergence
    • /
    • v.5 no.3
    • /
    • pp.8-15
    • /
    • 2016
  • In recent years, the Deep Belief Network (DBN), a deep learning model formed by stacking restricted Boltzmann machines in a greedy fashion, has been widely used for classification and recognition. With its ability to extract features at a high level of abstraction and to handle higher-dimensional data structures, this model has achieved outstanding results in image and speech recognition. In this research, we assess the applicability of deep learning to DNA classification. Since the training phase of a DBN is computationally expensive, especially when dealing with DNA sequences with thousands of variables, we introduce a new encoding method that uses a decimal-binary vector to represent the sequence as input to the model, and then compare it with one-hot-vector encoding on two datasets. We evaluated our proposed model with different contrastive algorithms and achieved a significant improvement in training speed with comparable classification results. These results show the potential of using decimal-binary vectors with DBNs on DNA sequences to solve other sequence problems in bioinformatics.
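The encoding contrast described in this abstract can be sketched briefly. The paper's exact decimal-binary scheme is not given here, so the mapping below is an illustrative assumption: each base gets a decimal index, which is then expanded into its 2-bit binary form, halving the input width relative to one-hot encoding.

```python
# Illustrative sketch only: one plausible reading of "decimal-binary"
# encoding versus one-hot encoding for a DNA sequence.
BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    """One-hot encoding: 4 dimensions per base."""
    vec = []
    for base in seq:
        bits = [0, 0, 0, 0]
        bits[BASES[base]] = 1
        vec.extend(bits)
    return vec

def decimal_binary(seq):
    """Decimal-binary encoding: 2 bits per base (half the width)."""
    vec = []
    for base in seq:
        idx = BASES[base]
        vec.extend([(idx >> 1) & 1, idx & 1])
    return vec

print(one_hot("ACGT"))         # 16 dimensions
print(decimal_binary("ACGT"))  # 8 dimensions
```

The shorter input vector is what would reduce the DBN's training cost, at the price of losing the orthogonality of one-hot codes.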

M2M Transformation Rules for Automatic Test Case Generation from Sequence Diagram (시퀀스 다이어그램으로부터 테스트 케이스 자동 생성을 위한 M2M(Model-to-Model) 변환 규칙)

  • Kim, Jin-a;Kim, Su Ji;Seo, Yongjin;Cheon, Eunyoung;Kim, Hyeon Soo
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.1
    • /
    • pp.32-37
    • /
    • 2016
  • In model-based testing using sequence diagrams, test cases are derived automatically from the sequence diagrams. To generate test cases, scenarios represented in a sequence diagram must be identified, and test paths satisfying the test coverage must be extracted. However, it is hard to extract test paths automatically from a sequence diagram because it represents loop, opt, and alt information using CombinedFragments. To resolve this problem, we propose a transformation process that converts a sequence diagram into an activity diagram, which represents scenarios as control flows. In addition, we generate test cases from the activity diagram by applying a test coverage criterion. Finally, we present a case study of test case generation from a sequence diagram.
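Why CombinedFragments complicate path extraction can be shown in miniature. The sketch below is not the paper's M2M transformation rules; it is an illustrative assumption of how an `alt` fragment fans out into one test path per operand and a `loop` fragment is covered with zero and one iterations.

```python
# Illustrative sketch: expanding sequence-diagram messages and
# CombinedFragments ("alt", "loop") into concrete test paths.
def expand(steps):
    """Expand a list of messages / fragments into test paths."""
    paths = [[]]
    for step in steps:
        if isinstance(step, str):               # plain message
            paths = [p + [step] for p in paths]
        elif step[0] == "alt":                  # ("alt", operand1, operand2, ...)
            paths = [p + q
                     for operand in step[1:]
                     for p in paths
                     for q in expand(operand)]
        elif step[0] == "loop":                 # ("loop", body): cover 0 and 1 passes
            options = [[]] + expand(step[1])
            paths = [p + q for p in paths for q in options]
    return paths

print(expand(["login", ("alt", ["ok"], ["fail"])]))
```

Each returned path corresponds to one control-flow walk through the equivalent activity diagram, which is the intermediate form the paper targets.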

Error Correction for Korean Speech Recognition using a LSTM-based Sequence-to-Sequence Model

  • Jin, Hye-won;Lee, A-Hyeon;Chae, Ye-Jin;Park, Su-Hyun;Kang, Yu-Jin;Lee, Soowon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.10
    • /
    • pp.1-7
    • /
    • 2021
  • Most research on correcting speech recognition errors has been based on English, so research on Korean speech recognition is lacking. Compared with English, however, Korean speech recognition produces many errors due to linguistic characteristics of the Korean language, such as fortis (tensification) and liaison, so research on Korean speech recognition is needed. Furthermore, earlier works focused primarily on edit-distance algorithms and syllable-restoration rules, making it difficult to correct the error types caused by fortis and liaison. In this paper, we propose a context-sensitive post-processing model for speech recognition that uses an LSTM-based sequence-to-sequence model with the Bahdanau attention mechanism to correct Korean speech recognition errors caused by pronunciation. Experiments showed that the model improved speech recognition performance from 64% to 77% for fortis, from 74% to 90% for liaison, and from 69% to 84% on average. Based on these results, it seems possible to apply the proposed model to real-world applications based on speech recognition.
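The Bahdanau (additive) attention used in this model scores each encoder state against the current decoder state with a small feed-forward network, then forms a context vector from the softmax-weighted encoder states. The dimensions and random weights below are illustrative stand-ins, not the paper's trained parameters.

```python
import numpy as np

# Illustrative sketch of Bahdanau attention:
#   score_t = v^T tanh(W_enc h_t + W_dec s)
rng = np.random.default_rng(0)
H, T = 8, 5                        # hidden size, encoder sequence length
W_enc = rng.normal(size=(H, H))    # projects encoder states h_t
W_dec = rng.normal(size=(H, H))    # projects decoder state s
v = rng.normal(size=(H,))          # scoring vector

enc_states = rng.normal(size=(T, H))  # one row per source token
dec_state = rng.normal(size=(H,))     # current decoder hidden state

scores = np.tanh(enc_states @ W_enc.T + dec_state @ W_dec.T) @ v
weights = np.exp(scores - scores.max())
weights /= weights.sum()              # softmax over source positions
context = weights @ enc_states        # weighted sum fed to the decoder

print(round(float(weights.sum()), 6))  # 1.0 — a valid distribution
```

At each decoding step the context vector lets the corrector attend to the misrecognized source syllables most relevant to the output syllable being produced.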

Sequence-to-sequence based Morphological Analysis and Part-Of-Speech Tagging for Korean Language with Convolutional Features (Sequence-to-sequence 기반 한국어 형태소 분석 및 품사 태깅)

  • Li, Jianri;Lee, EuiHyeon;Lee, Jong-Hyeok
    • Journal of KIISE
    • /
    • v.44 no.1
    • /
    • pp.57-62
    • /
    • 2017
  • Traditional Korean morphological analysis and POS tagging methods usually consist of two steps: 1) generate hypotheses of all possible combinations of morphemes for the given input; 2) perform POS tagging to search for the optimal result. The first step requires additional resource dictionaries, and errors in it can propagate to the second step. In this paper, we try to solve this problem in an end-to-end fashion using a sequence-to-sequence model with convolutional features. Experimental results on the Sejong corpus show that our approach achieved a 97.15% F1-score at the morpheme level, with 95.33% and 60.62% precision at the word and sentence levels, respectively, and a 96.91% F1-score at the morpheme level, with 95.40% and 60.62% precision at the word and sentence levels, respectively.

Model Predictive Control of Circulating Current Suppression in Parallel-Connected Inverter-fed Motor Drive Systems

  • Kang, Shin-Won;Soh, Jae-Hwan;Kim, Rae-Young
    • Journal of Electrical Engineering and Technology
    • /
    • v.13 no.3
    • /
    • pp.1241-1250
    • /
    • 2018
  • Parallel three-phase voltage source inverters in a direct connection configuration are widely used to increase system power ratings. A zero-sequence circulating current can be generated depending on the switching method; this current not only distorts the phase currents but also reduces system reliability and efficiency. In this paper, a model predictive control scheme is proposed for parallel inverters driving an interior permanent magnet synchronous motor with zero-sequence circulating current suppression. The voltage vector of the parallel inverters is derived to predict and control the torque and stator flux components. In addition, the zero-sequence circulating current is suppressed through the design of the cost function, without an additional current sensor or high-impedance inductor. Simulation and experimental results are presented to verify the proposed control scheme.
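Suppressing the circulating current "by designing the cost function" means adding a penalty term, so that candidate voltage vectors producing a large zero-sequence current become expensive. The cost function and weights below are an illustrative sketch of that idea, not the paper's actual formulation.

```python
# Illustrative sketch of a model-predictive cost function that tracks
# torque and stator-flux references while penalizing the zero-sequence
# circulating current i0. The weight lam is an assumed tuning value.
def cost(torque_pred, torque_ref, flux_pred, flux_ref, i_zero, lam=10.0):
    return (abs(torque_ref - torque_pred)
            + abs(flux_ref - flux_pred)
            + lam * abs(i_zero))

# Evaluate every candidate voltage vector, pick the cheapest one.
candidates = [
    {"v": 0, "torque": 9.5,  "flux": 0.98, "i0": 0.4},
    {"v": 1, "torque": 9.8,  "flux": 1.01, "i0": 0.1},
    {"v": 2, "torque": 10.2, "flux": 1.00, "i0": 0.6},
]
best = min(candidates,
           key=lambda c: cost(c["torque"], 10.0, c["flux"], 1.0, c["i0"]))
print(best["v"])  # vector balancing tracking error against i0 suppression
```

Because the penalty lives inside the prediction-based cost, no extra current sensor or high-impedance inductor is needed to steer the controller away from circulating-current-heavy switching states.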

Online Selective-Sample Learning of Hidden Markov Models for Sequence Classification

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.15 no.3
    • /
    • pp.145-152
    • /
    • 2015
  • We consider an online selective-sample learning problem for sequence classification, where the goal is to learn a predictive model from a stream of data samples whose class labels can be selectively queried by the algorithm. Given a limit on the total number of queries permitted, the key issue is choosing the most informative and salient samples whose class labels should be queried. Recently, several aggressive selective-sampling algorithms have been proposed under a linear model for static (non-sequential) binary classification. We extend the idea to hidden Markov models for multi-class sequence classification by introducing reasonable measures of the novelty and prediction confidence of the incoming sample with respect to the current model, on which the query decision is based. We demonstrate the effectiveness of the proposed approach on several sequence classification datasets/tasks in online learning setups.
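The core of any such scheme is the query rule: spend a label query only when the current model is not confident. The margin-based rule and threshold below are an illustrative assumption, not the paper's specific novelty/confidence measures.

```python
# Illustrative sketch of a selective-sampling query decision: query the
# label only when the gap between the top two class scores is small.
def should_query(class_likelihoods, margin_threshold=0.2):
    """class_likelihoods: per-class scores (e.g., from per-class HMMs)."""
    ranked = sorted(class_likelihoods, reverse=True)
    margin = ranked[0] - ranked[1]   # confidence = gap between top two
    return margin < margin_threshold

print(should_query([0.90, 0.05, 0.05]))  # confident -> skip the query
print(should_query([0.45, 0.40, 0.15]))  # ambiguous -> query the label
```

Under a fixed query budget, spending labels only on such ambiguous (or novel) sequences is what lets the online learner approach the accuracy of a fully supervised one.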

Automatic Generation of Emotional Comments on News-Articles using Sequence-to-Sequence Model (Sequence-to-Sequence 모델을 이용한 신문기사의 감성 댓글 자동 생성)

  • Park, Chun-Young;Park, Yo-Han;Jeong, Hye-Ji;Kim, Ji-Won;Choi, Yong-Seok;Lee, Kong-Joo
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.233-237
    • /
    • 2017
  • This paper presents a system for generating sentiment-bearing comments on news articles. To generate comments that reflect sentiment, we use an existing Sequence-to-Sequence model to build four sentiment models: positive, negative, profanity-included, and profanity-free. Although a single news article carries many comments, we use a sentiment dictionary and a profanity dictionary to select only one comment per article. We train the four models on the classified comments and generate comments matching each sentiment type.
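The dictionary-based selection step can be sketched in a few lines. The dictionaries, labels, and first-match selection rule below are illustrative assumptions (and in English rather than Korean), not the paper's actual resources.

```python
# Illustrative sketch: label comments with a sentiment dictionary and a
# profanity dictionary, then keep one labeled comment per article.
POSITIVE = {"good", "great"}
NEGATIVE = {"bad", "awful"}
PROFANITY = {"darn"}

def label(comment):
    words = set(comment.split())
    polarity = ("pos" if words & POSITIVE
                else "neg" if words & NEGATIVE
                else None)
    return polarity, bool(words & PROFANITY)

def select_one(comments):
    """Keep the first comment that receives a polarity label."""
    for c in comments:
        polarity, profane = label(c)
        if polarity:
            return c, (polarity, profane)
    return None

print(select_one(["meh", "great article", "awful darn take"]))
```

Each (polarity, profanity) pair routes the selected comment into one of the four training sets, one per Sequence-to-Sequence sentiment model.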


Korean Question Generation using BERT-based Sequence-to-Sequence Model (BERT 기반 Sequence-to-Sequence 모델을 이용한 한국어 질문 생성)

  • Lee, Dong-Heon;Hwang, Hyeon-Seon;Lee, Chang-Gi
    • Annual Conference on Human and Language Technology
    • /
    • 2020.10a
    • /
    • pp.60-63
    • /
    • 2020
  • Machine reading comprehension is a natural language processing task that predicts the correct answer by understanding the relationship between an input question and a passage, and it requires a large amount of high-quality data. Building machine reading comprehension training data is laborious: the answers appearing in a document, and the questions from which those answers can be derived, must be created by hand. To address this problem, this paper proposes a Korean question generation model that uses a BERT-based Sequence-to-sequence model to automatically generate questions from the document containing the answer. In addition, based on the observations that a question shares the language of the document containing its answer and that words near the answer sentence are likely to appear in the question, we add a copy mechanism to the BERT-based Sequence-to-sequence model. Experimental results show that a BERT + Transformer decoder model outperformed both the baseline model and a BERT + GRU decoder model.
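A copy mechanism mixes the decoder's generation distribution over the vocabulary with an attention-derived distribution over source tokens, so words near the answer can be copied verbatim into the question. The gate value, vocabulary, and attention weights below are illustrative stand-ins, not the paper's trained model.

```python
import numpy as np

# Illustrative sketch of a copy mechanism:
#   P(w) = p_gen * P_vocab(w) + (1 - p_gen) * P_copy(w)
vocab = ["[UNK]", "서울", "수도", "어디"]
source_tokens = ["서울", "수도"]             # passage tokens near the answer

p_gen = 0.7                                  # learned gate in [0, 1]
gen_dist = np.array([0.1, 0.2, 0.3, 0.4])    # decoder softmax over vocab
attn = np.array([0.6, 0.4])                  # attention over source tokens

copy_dist = np.zeros(len(vocab))
for tok, a in zip(source_tokens, attn):
    copy_dist[vocab.index(tok)] += a         # scatter attention into vocab

final = p_gen * gen_dist + (1 - p_gen) * copy_dist
print(round(float(final.sum()), 6))          # 1.0 — still a distribution
```

Source words receiving attention get a probability boost over their pure generation score, which is exactly the behavior the abstract's two observations motivate.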

A Pipeline Model for Korean Morphological Analysis and Part-of-Speech Tagging Using Sequence-to-Sequence and BERT-LSTM (Sequence-to-Sequence 와 BERT-LSTM을 활용한 한국어 형태소 분석 및 품사 태깅 파이프라인 모델)

  • Youn, Jun Young;Lee, Jae Sung
    • Annual Conference on Human and Language Technology
    • /
    • 2020.10a
    • /
    • pp.414-417
    • /
    • 2020
  • Recent research on Korean morphological analysis and POS tagging has typically first performed morpheme segmentation and POS tagging on the surface forms, and then restored the canonical morpheme forms and tags in a post-processing step using additional language resources. In this study, we divide morphological analysis and POS tagging into two stages: we first restore the canonical morpheme forms using a Sequence-to-Sequence model, and then perform morpheme segmentation and POS tagging using BERT, which has shown excellent performance in various areas of natural language processing. We connect the two stages as a pipeline; the proposed morphological analysis and POS tagging pipeline model achieved 98.39% syllable accuracy, 98.27% morpheme accuracy, and 96.31% eojeol (word phrase) accuracy.
