• Title/Summary/Keyword: Bi-GRU


Title Generation Model for which Sequence-to-Sequence RNNs with Attention and Copying Mechanisms are used (주의집중 및 복사 작용을 가진 Sequence-to-Sequence 순환신경망을 이용한 제목 생성 모델)

  • Lee, Hyeon-gu; Kim, Harksoo
    • Journal of KIISE / v.44 no.7 / pp.674-679 / 2017
  • In big-data environments, where large numbers of text documents are produced daily, titles are important clues for quickly grasping the key ideas of a document; however, titles are absent from many document types, such as blog articles and social-media messages. In this paper, a title-generation model that employs sequence-to-sequence RNNs with attention and copying mechanisms is proposed. In the proposed model, input sentences are encoded with bidirectional GRU (gated recurrent unit) networks, and title words are generated by decoding the encoded sentences together with keywords automatically selected from the input sentences. In experiments with 93,631 training documents and 500 test documents, the attention mechanism performed better (ROUGE-1: 0.1935, ROUGE-2: 0.0364, ROUGE-L: 0.1555) than the copying mechanism; its relative performance in the qualitative evaluation was also higher.
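The bidirectional GRU encoder described in the abstract above can be sketched in plain Python. This is a toy illustration with random weights and no biases, not the paper's implementation; `GRUCell`, `bi_gru_encode`, and all dimensions are hypothetical names chosen for the example:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    # Multiply matrix W (list of rows) by vector v.
    return [sum(w * x for w, x in zip(row, v)) for row in W]

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = random.Random(seed)
        def mat(r, c):
            return [[rng.uniform(-0.1, 0.1) for _ in range(c)] for _ in range(r)]
        self.h = hidden_size
        self.Wz, self.Uz = mat(hidden_size, input_size), mat(hidden_size, hidden_size)
        self.Wr, self.Ur = mat(hidden_size, input_size), mat(hidden_size, hidden_size)
        self.Wh, self.Uh = mat(hidden_size, input_size), mat(hidden_size, hidden_size)

    def step(self, x, h):
        z = [sigmoid(a + b) for a, b in zip(matvec(self.Wz, x), matvec(self.Uz, h))]
        r = [sigmoid(a + b) for a, b in zip(matvec(self.Wr, x), matvec(self.Ur, h))]
        rh = [ri * hi for ri, hi in zip(r, h)]
        h_tilde = [math.tanh(a + b)
                   for a, b in zip(matvec(self.Wh, x), matvec(self.Uh, rh))]
        # Interpolate between the old state and the candidate state.
        return [(1 - zi) * hi + zi * hti for zi, hi, hti in zip(z, h, h_tilde)]

def bi_gru_encode(seq, fwd, bwd):
    """Run forward and backward GRUs over seq; concatenate states per step."""
    h = [0.0] * fwd.h
    fwd_states = []
    for x in seq:
        h = fwd.step(x, h)
        fwd_states.append(h)
    h = [0.0] * bwd.h
    bwd_states = []
    for x in reversed(seq):
        h = bwd.step(x, h)
        bwd_states.append(h)
    bwd_states.reverse()
    return [f + b for f, b in zip(fwd_states, bwd_states)]
```

Each encoded position thus carries context from both directions (its vector is the forward and backward hidden states concatenated), which is what the attention and copying mechanisms in the paper attend over.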

Sign2Gloss2Text-based Sign Language Translation with Enhanced Spatial-temporal Information Centered on Sign Language Movement Keypoints (수어 동작 키포인트 중심의 시공간적 정보를 강화한 Sign2Gloss2Text 기반의 수어 번역)

  • Kim, Minchae; Kim, Jungeun; Kim, Ha Young
    • Journal of Korea Multimedia Society / v.25 no.10 / pp.1535-1545 / 2022
  • Sign language can carry a completely different meaning for the same gesture depending on the direction of the hand or a change in facial expression, so it is crucial to capture the spatial-temporal structure of each movement. However, sign language translation studies based on Sign2Gloss2Text convey only comprehensive spatial-temporal information about the entire sign language movement; consequently, the detailed information (facial expressions, gestures, etc.) of each movement that is important for sign language translation is not emphasized. Accordingly, in this paper we propose Spatial-temporal Keypoints Centered Sign2Gloss2Text Translation, named STKC-Sign2Gloss2Text, to supplement the sequential and semantic information of keypoints, which are the core of recognizing and translating sign language. STKC-Sign2Gloss2Text consists of two steps: Spatial Keypoints Embedding, which extracts 121 major keypoints from each image, and Temporal Keypoints Embedding, which emphasizes sequential information by applying a Bi-GRU to the extracted sign language keypoints. The proposed model outperformed the Sign2Gloss2Text baseline on all Bilingual Evaluation Understudy (BLEU) scores on the Development (DEV) and Testing (TEST) sets; in particular, it achieved a TEST BLEU-4 of 23.19, an improvement of 1.87, demonstrating the effectiveness of the proposed methodology.
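The BLEU metric reported above can be sketched as a single-reference, sentence-level computation: modified n-gram precisions up to 4-grams, a geometric mean, and a brevity penalty. This is a simplified illustration (with +1 smoothing on higher-order n-grams so short sentences do not zero out), not the exact scorer used in the paper:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Count all n-grams in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU-4 with one reference and a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        if n == 1:
            precisions.append(overlap / total)
        else:
            # +1 smoothing for higher-order n-grams.
            precisions.append((overlap + 1) / (total + 1))
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(log_avg)
```

A perfect match scores 1.0, no unigram overlap scores 0.0, and partial matches fall in between; the paper's "improvement of 1.87" refers to BLEU-4 scaled to a 0-100 range.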

Implementation of a KoBERT-based Emotion Analysis Communication Platform for Parents of Children with Disabilities (장애아 부모를 위한 KoBERT 기반 감정분석 소통 플랫폼 구현)

  • Jae-Hyung Ha; Ji-Hye Huh; Won-Jib Kim; Jung-Hun Lee; Woo-Jung Park
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.1014-1015 / 2023
  • Many parents of children with disabilities feel considerable psychological pressure from parenting stress and worries about the future. Yet, relative to the number of people with disabilities, which increases every year, programs that address the psychological and mental-health problems of these parents and their families are lacking [1]. To address this, this paper proposes an emotion-analysis communication platform. The proposed platform fine-tunes a KoBERT model to analyze the emotions in users' diary entries, helping parents and family members of children with disabilities communicate with one another. For the performance evaluation, the KoBERT-based emotion analysis, the platform's core function, is compared against the performance metrics of LSTM, Bi-LSTM, and GRU models, which are widely used for text classification. In the evaluation, KoBERT's accuracy was on average 31.4% higher than that of the other classifiers, and it also recorded comparatively high performance on the remaining metrics.
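The accuracy comparison described above can be sketched as follows. The predictions and labels here are invented toy data purely to show the computation (per-model accuracy and the average gap to the baselines), not the paper's dataset or results:

```python
def accuracy(preds, labels):
    # Fraction of predictions that match the gold labels.
    assert len(preds) == len(labels)
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Hypothetical predictions on a tiny emotion-labelled test set.
labels = ["joy", "sadness", "anger", "joy", "fear", "sadness"]
models = {
    "KoBERT":  ["joy", "sadness", "anger", "joy", "fear", "joy"],
    "LSTM":    ["joy", "anger",   "anger", "fear", "fear", "joy"],
    "Bi-LSTM": ["joy", "sadness", "joy",   "fear", "fear", "joy"],
    "GRU":     ["sadness", "anger", "anger", "joy", "fear", "joy"],
}
scores = {name: accuracy(preds, labels) for name, preds in models.items()}

# Average accuracy gap between KoBERT and the recurrent baselines.
baseline_avg = sum(v for k, v in scores.items() if k != "KoBERT") / 3
gap = scores["KoBERT"] - baseline_avg
```

On a real evaluation the same two numbers per model (accuracy, plus precision/recall/F1 as "the remaining metrics") would be computed over the full test split rather than six examples.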