• Title/Summary/Keyword: Embeddings

A Study on the Minimization of Layout Area for FPGA

  • Yi, Cheon-Hee
    • Journal of the Semiconductor & Display Technology
    • /
    • v.9 no.2
    • /
    • pp.15-20
    • /
    • 2010
  • This paper deals with minimizing the layout area of FPGA designs. FPGAs are becoming increasingly important in the design of ASICs, since they provide both large-scale integration and user-programmability. This paper describes a method to obtain a tight bound on the worst-case increase in area when drivers are introduced along many long wires in a layout. The area occupied by a minimum-area embedding for a circuit can depend on the aspect ratio of the bounding rectangle of the layout. This paper presents separator-based area-optimal embeddings for FPGA graphs in rectangles of several aspect ratios, obtained by solving the longest-path problem in the constraint graph.
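
The abstract does not spell out the constraint-graph computation. As a rough, self-contained illustration of the longest-path step it mentions, a longest path in a directed acyclic constraint graph can be computed with a topological sweep; the node count, arcs, and weights below are made up.

```python
from collections import defaultdict, deque

def longest_path(n, edges):
    """Longest weighted path from any source in a DAG with n nodes.

    edges: list of (u, v, w) constraint arcs; nodes are 0..n-1.
    Returns the maximum accumulated weight, i.e. the critical dimension
    implied by the constraint graph.
    """
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1

    dist = [0] * n                      # best distance reaching each node
    queue = deque(i for i in range(n) if indeg[i] == 0)
    while queue:
        u = queue.popleft()
        for v, w in adj[u]:
            dist[v] = max(dist[v], dist[u] + w)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return max(dist)

# Toy constraint graph: 0 -> 1 -> 3 and 0 -> 2 -> 3
print(longest_path(4, [(0, 1, 2), (1, 3, 4), (0, 2, 1), (2, 3, 7)]))  # 8
```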

A Deep Learning-based Article- and Paragraph-level Classification

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.11
    • /
    • pp.31-41
    • /
    • 2018
  • Text classification has long been studied in the Natural Language Processing field. In this paper, we propose an article- and paragraph-level genre classification system using Word2Vec-based LSTM, GRU, and CNN models for large-scale English corpora. Both article- and paragraph-level classification achieved the best accuracy with LSTM, followed by GRU and CNN. These results confirm that, when pre-trained Word2Vec-based word embeddings are used in both deep learning-based classification tasks, the sequential word information of articles is more informative than the word features extracted from paragraphs.
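
None of the three architectures is detailed in this listing. The following is a minimal sketch of the general setup the abstract describes, pretrained word vectors feeding an LSTM classifier, with all dimensions chosen arbitrarily for illustration.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Genre classifier over pretrained word embeddings (hypothetical sizes)."""
    def __init__(self, pretrained, hidden_dim=128, num_classes=4):
        super().__init__()
        # freeze=False lets the Word2Vec vectors be fine-tuned during training
        self.embed = nn.Embedding.from_pretrained(pretrained, freeze=False)
        self.lstm = nn.LSTM(pretrained.size(1), hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(emb)         # final hidden state summarizes the sequence
        return self.fc(h_n[-1])              # (batch, num_classes) logits

# Toy run: vocabulary of 1000 words, 100-dim vectors, batch of 2 articles
vectors = torch.randn(1000, 100)
model = LSTMClassifier(vectors)
logits = model(torch.randint(0, 1000, (2, 50)))
print(logits.shape)  # torch.Size([2, 4])
```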

Malaysian Name-based Ethnicity Classification using LSTM

  • Hur, Youngbum
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.12
    • /
    • pp.3855-3867
    • /
    • 2022
  • Name separation (splitting full names into surnames and given names) is not a trivial task in a multiethnic country, because the procedure for splitting surnames and given names is ethnicity-specific. Malaysia has multiple main ethnic groups; therefore, separating Malaysian full names into surnames and given names proves a challenge. In this study, we develop a two-phase framework for Malaysian name separation using deep learning. In the initial phase, we predict the ethnicity of full names, using a recurrent neural network model with long short-term memory and character embeddings. Based on the predicted ethnicity, we then apply a rule-based algorithm for splitting full names into surnames and given names in the second phase. We evaluate the proposed model against various machine learning models and demonstrate that it outperforms them by an average of 9%. Moreover, transfer learning and fine-tuning of the proposed model with an additional dataset yield an improvement of up to 7% on average.
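
The paper's phase-2 rule set is not reproduced in the abstract. The sketch below only illustrates the shape of an ethnicity-dispatched splitter; the rules are simplified placeholders, not the authors' actual ones.

```python
def split_name(full_name: str, ethnicity: str):
    """Phase-2 rule-based split into (surname, given name), dispatched on
    the ethnicity predicted in phase 1. Placeholder rules for illustration."""
    tokens = full_name.split()
    if ethnicity == "chinese":
        # Chinese Malaysian names usually lead with the surname
        return tokens[0], " ".join(tokens[1:])
    if ethnicity == "malay":
        # Malay names are often patronymic: given name + bin/binti + father's name
        for marker in ("bin", "binti"):
            if marker in tokens:
                i = tokens.index(marker)
                return " ".join(tokens[i:]), " ".join(tokens[:i])
        return "", full_name                  # no heritable surname found
    # Other groups: fall back to treating the last token as the surname
    return tokens[-1], " ".join(tokens[:-1])

print(split_name("tan wei ming", "chinese"))            # ('tan', 'wei ming')
print(split_name("siti aminah binti hassan", "malay"))  # ('binti hassan', 'siti aminah')
```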

Sentence model based subword embeddings for a dialog system

  • Chung, Euisok;Kim, Hyun Woo;Song, Hwa Jeon
    • ETRI Journal
    • /
    • v.44 no.4
    • /
    • pp.599-612
    • /
    • 2022
  • This study focuses on improving a word embedding model to enhance the performance of downstream tasks, such as those of dialog systems. To improve traditional word embedding models, such as skip-gram, it is critical to refine the word features and expand the context model. In this paper, we approach the word model from the perspective of subword embedding and attempt to extend the context model by integrating various sentence models. Our proposed sentence model is a subword-based skip-thought model that integrates self-attention and relative position encoding techniques. We also propose a clustering-based dialog model for downstream task verification and evaluate its relationship with the sentence-model-based subword embedding technique. The proposed subword embedding method produces better results than previous methods in evaluating word and sentence similarity. In addition, the downstream task verification, a clustering-based dialog system, demonstrates an improvement of up to 4.86% over the results of FastText in previous research.
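
The authors' skip-thought sentence model is more involved than anything shown here. As a minimal illustration of the subword-embedding idea the abstract builds on, the sketch below uses FastText-style character n-gram composition purely as a stand-in.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    """Character n-grams with boundary markers, FastText-style."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

dim, buckets = 50, 2**15
rng = np.random.default_rng(0)
subword_table = rng.normal(scale=0.1, size=(buckets, dim))  # hashed n-gram vectors

def word_vector(word):
    """Compose a word vector as the mean of its hashed subword vectors."""
    idx = [hash(g) % buckets for g in char_ngrams(word)]
    return subword_table[idx].mean(axis=0)

# Morphologically related words share n-grams, hence similar vectors
v1, v2 = word_vector("embedding"), word_vector("embeddings")
print(float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))))
```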

Dynamic Tracking Aggregation with Transformers for RGB-T Tracking

  • Xiaohu, Liu;Zhiyong, Lei
    • Journal of Information Processing Systems
    • /
    • v.19 no.1
    • /
    • pp.80-88
    • /
    • 2023
  • RGB-thermal (RGB-T) tracking using unmanned aerial vehicles (UAVs) involves challenges with regard to the similarity of objects, occlusion, fast motion, and motion blur, among other issues. In this study, we propose dynamic tracking aggregation (DTA) as a unified framework to perform object detection and data association. The proposed approach obtains fused features based on a transformer model and an L1-norm strategy. To link the current frame with recent information, a dynamically updated embedding called dynamic tracking identification (DTID) is used to model the iterative tracking process. For object association, we designed a long short-term tracking aggregation module for dynamic feature propagation to match spatial and temporal embeddings. DTA achieved a highly competitive performance in an experimental evaluation on public benchmark datasets.
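
The abstract gives no details of the fusion. The toy sketch below only shows the general pattern of letting RGB and thermal feature tokens attend to each other through a transformer layer; it does not reproduce DTA's L1-norm strategy or DTID updates.

```python
import torch
import torch.nn as nn

class RGBTFusion(nn.Module):
    """Toy cross-modal fusion: RGB and thermal feature tokens attend to each
    other through a shared transformer encoder layer (illustrative only)."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)

    def forward(self, rgb_tokens, thermal_tokens):
        # Concatenate both modalities so self-attention mixes them
        fused = self.encoder(torch.cat([rgb_tokens, thermal_tokens], dim=1))
        return fused  # (batch, n_rgb + n_thermal, dim) fused embeddings

rgb = torch.randn(1, 49, 256)       # e.g. a 7x7 RGB feature map as tokens
thermal = torch.randn(1, 49, 256)   # matching thermal tokens
print(RGBTFusion()(rgb, thermal).shape)  # torch.Size([1, 98, 256])
```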

SG-Drop: Faster Skip-Gram by Dropping Context Words

  • Kim, DongJae;Synn, DoangJoo;Kim, Jong-Kook
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2020.11a
    • /
    • pp.1014-1017
    • /
    • 2020
  • Many natural language processing (NLP) models utilize pre-trained word embeddings to leverage latent information. One of the most successful word embedding models is the skip-gram (SG) model. In this paper, we propose the skip-gram drop (SG-Drop) model, a variation of the SG model designed to reduce training time efficiently. Furthermore, SG-Drop allows training time to be controlled through a hyperparameter. It can train word embeddings faster than simply reducing the number of training epochs, while better preserving embedding quality.
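
The abstract describes dropping context words during skip-gram training. A minimal sketch of pair generation with such a drop hyperparameter might look as follows; the drop_rate name and the uniform sampling scheme are assumptions, not the paper's definition.

```python
import random

def skipgram_pairs(tokens, window=2, drop_rate=0.0, rng=random):
    """Yield (center, context) skip-gram training pairs, randomly skipping
    context words with probability drop_rate. Fewer pairs per epoch means
    proportionally less training time."""
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j == i or rng.random() < drop_rate:
                continue
            yield center, tokens[j]

sent = "the quick brown fox jumps over the lazy dog".split()
full = list(skipgram_pairs(sent))                      # drop_rate=0 -> plain SG
dropped = list(skipgram_pairs(sent, drop_rate=0.5))    # roughly half the pairs
print(len(full), len(dropped))
```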

Gated Multi-channel Network Embedding for Large-scale Mobile App Clustering

  • Yeo-Chan Yoon;Soo Kyun Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.6
    • /
    • pp.1620-1634
    • /
    • 2023
  • This paper studies the task of embedding nodes with multiple graphs representing multiple information channels, which is useful in a large volume of network clustering tasks. By learning a node from multiple graphs, various characteristics of the node can be represented and embedded stably. Existing studies using multi-channel networks have integrated heterogeneous graphs or constrained common nodes appearing in multiple graphs to have similar embeddings. Although these methods represent nodes effectively, they are limited by the assumption that all networks provide the same amount of information. This paper proposes a method to overcome this limitation: it assigns different weights according to the source graph when embedding nodes, so that the characteristics of graphs carrying more important information are reflected more strongly in the node. To this end, a novel method incorporating a multi-channel gate layer is proposed to weight more important channels and ignore unnecessary data when embedding a node with multiple graphs. Empirical experiments demonstrate the effectiveness of the proposed multi-channel-based embedding method.
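
As a minimal sketch of the gating idea, assuming a simple softmax gate over per-channel node embeddings; the paper's actual gate layer may differ.

```python
import torch
import torch.nn as nn

class MultiChannelGate(nn.Module):
    """Softmax-gated combination of per-channel node embeddings."""
    def __init__(self, dim=64):
        super().__init__()
        self.gate = nn.Linear(dim, 1)   # one relevance score per channel

    def forward(self, channel_embs):            # (batch, n_channels, dim)
        weights = torch.softmax(self.gate(channel_embs), dim=1)
        return (weights * channel_embs).sum(dim=1)   # (batch, dim)

# Three hypothetical information channels for each app node
embs = torch.randn(16, 3, 64)
print(MultiChannelGate()(embs).shape)  # torch.Size([16, 64])
```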

Integrated Char-Word Embedding on Chinese NER using Transformer

  • Jin, ChunGuang;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.05a
    • /
    • pp.415-417
    • /
    • 2021
  • Because words in Chinese sentences are written without delimiters and the vocabulary is huge, Chinese NER (Named Entity Recognition) has usually been based on character representations. In recent years, much Chinese NER research has reconsidered how to integrate word information into the model. However, traditional sequence models have complex structures and slow inference, and they require additional dictionary information, which makes them difficult to deploy in industry. The approach in this paper is parallelizable and achieves state-of-the-art results by integrating char-word embeddings so that the model learns word information. The proposed model is easy to implement and outperforms traditional models in terms of speed and efficiency, improving the F1-score on two datasets.
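
The paper's exact fusion is not given in this listing. Below is a minimal sketch of one way to integrate character and word embeddings before a transformer encoder; the concatenate-then-project fusion and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class CharWordEmbedding(nn.Module):
    """Pair each character with the word that contains it and fuse the two
    embeddings before a transformer encoder (illustrative design)."""
    def __init__(self, n_chars=6000, n_words=50000, dim=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, dim)
        self.word_emb = nn.Embedding(n_words, dim)
        self.proj = nn.Linear(2 * dim, dim)
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=8, batch_first=True)

    def forward(self, char_ids, word_ids):      # both (batch, seq_len)
        fused = torch.cat([self.char_emb(char_ids),
                           self.word_emb(word_ids)], dim=-1)
        return self.encoder(self.proj(fused))   # contextual char representations

chars = torch.randint(0, 6000, (2, 20))
words = torch.randint(0, 50000, (2, 20))   # word id repeated per character
print(CharWordEmbedding()(chars, words).shape)  # torch.Size([2, 20, 128])
```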

Methods of Expanding Knowledge and Embeddings for Response Generation

  • Kim, Bo-Eun;Jang, Young-Jin;Huang, Jin-Xia;Kwon, Oh-Woog;Kim, Hark-Soo
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.371-375
    • /
    • 2021
  • A document-grounded dialog system generates an appropriate response that continues the dialog, based on a given background knowledge document and the preceding dialog. Such a system is divided into a knowledge extraction task and a response generation task, and the two subtasks are closely related: accurate knowledge extraction is essential for generating a correct response grounded in the given background document, and when the knowledge needed for generation is not extracted accurately, the background knowledge is unlikely to be reflected in the generated response. Accordingly, this paper raises the recall of the knowledge needed for generation by expanding the extracted knowledge, and proposes an embedding expansion method that can exploit the expanded knowledge, showing a performance improvement of 3.51 in SacreBLEU.
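
The abstract does not detail the embedding expansion. One common way to expand a generator's embedding table for extra knowledge markers, using the Hugging Face transformers API, is shown below; the marker tokens and the BART checkpoint are assumptions, not the paper's setup.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Hypothetical delimiters marking the expanded knowledge span in the input
markers = ["<knowledge>", "</knowledge>"]
tokenizer.add_special_tokens({"additional_special_tokens": markers})
model.resize_token_embeddings(len(tokenizer))  # new rows are freshly initialized

text = "<knowledge> background passage </knowledge> user: previous turn"
ids = tokenizer(text, return_tensors="pt").input_ids
print(ids.shape)  # each marker now maps to a single token id
```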

Performance Comparison of Word Embeddings for Sentiment Classification

  • Yoon, Hye-Jin;Koo, Jahwan;Kim, Ung-Mo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.11a
    • /
    • pp.760-763
    • /
    • 2021
  • Word embedding, which represents words as vectors so that text can be fed to natural language processing models while reflecting linguistic properties, has become an essential component of language models that let computers understand and analyze human language. Various word embedding techniques such as Word2vec have been proposed, and although sentiment classification is an important element of natural language processing, comparative studies of sentiment classification performance across embedding techniques are still lacking. In this paper, using the Emotion-stimulus dataset, we trained sentiment classifiers for seven emotions and for two emotions, with five embedding techniques and three types of classification models. For classification we used widely adopted machine learning models such as Logistic Regression, Decision Tree, and Random Forest, and compared the results by training and test accuracy. The experiments show that pre-trained Word2vec generally achieved the best accuracy in both the seven-class and two-class sentiment classification.
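
As a minimal sketch of the comparison pipeline the abstract describes, averaged word vectors feeding scikit-learn classifiers; the tiny embedding table and texts below are toy stand-ins for a pretrained model and the Emotion-stimulus data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def doc_vector(text, vectors, dim):
    """Average the word vectors of a document (zero vector if no hits)."""
    hits = [vectors[w] for w in text.lower().split() if w in vectors]
    return np.mean(hits, axis=0) if hits else np.zeros(dim)

dim = 100
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=dim) for w in ["happy", "sad", "angry", "calm"]}
texts = ["happy happy calm", "sad sad", "angry sad", "calm happy"] * 25
labels = [1, 0, 0, 1] * 25

X = np.stack([doc_vector(t, vectors, dim) for t in texts])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

# Train each classifier and compare train/test accuracy, as in the paper
for clf in (LogisticRegression(max_iter=1000),
            DecisionTreeClassifier(),
            RandomForestClassifier()):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__,
          f"train={clf.score(X_tr, y_tr):.2f}",
          f"test={clf.score(X_te, y_te):.2f}")
```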