• Title/Summary/Keyword: Language Network Method

Search results: 292 (processing time: 0.026 seconds)

연결망 분석도구를 이용한 크리스토퍼 알렉산더 패턴언어 활용 가능성에 관한 연구 (A Study on the Possibility to Use Christopher Alexander's Pattern Language by Using Network Analysis Tool)

  • 정성욱;김문덕
    • Journal of the Korean Institute of Interior Design
    • /
    • Vol. 25, No. 3
    • /
    • pp.31-39
    • /
    • 2016
  • This study aims to increase the usability of Christopher Alexander's pattern language. The methodology is (i) to analyze the pattern language with a network analysis tool in order to understand its complicated network structure, and (ii) to apply Alexander's method of using the pattern language through the network analysis tool (Gephi) and to examine the tool's feasibility as an aid for working with the pattern language. First, the analysis of the pattern language shows that (i) the patterns, classified by pattern number, fall into the groups of towns, buildings, and construction, among which the buildings patterns play a key role in the network; (ii) the buildings patterns function as a medium connecting the towns and construction patterns; and (iii) the pattern language divides into six sub-modules, through which a user can select a pattern. Second, using the network analysis tool to work with the pattern language (i) suggests a new method of applying the pattern language via Gephi; (ii) makes it easy to grasp the characteristics of the links between patterns; and (iii) increases the completeness of the result by making the relevant sub-patterns easy to find when selecting a pattern.
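The mediating role attributed to the buildings patterns can be illustrated with a toy degree computation in plain Python (the study itself used Gephi; the links below are invented for illustration, not Alexander's actual pattern data):

```python
from collections import defaultdict

# hypothetical links between the three pattern groups (not Alexander's data)
links = [
    ("towns", "buildings"), ("towns", "buildings"),
    ("buildings", "construction"), ("buildings", "construction"),
    ("towns", "construction"),
]

degree = defaultdict(int)
for src, dst in links:
    degree[src] += 1
    degree[dst] += 1

# the group with the highest degree acts as the mediating hub
hub = max(degree, key=degree.get)
print(hub)  # buildings
```

In Gephi this corresponds to ranking nodes by degree or betweenness centrality; the hub group is the one bridging the other two.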

언어 네트워크 분석 방법을 활용한 학술논문의 내용분석 (A Content Analysis of Journal Articles Using the Language Network Analysis Methods)

  • 이수상
    • Journal of the Korean Society for Information Management
    • /
    • Vol. 31, No. 4
    • /
    • pp.49-68
    • /
    • 2014
  • The purpose of this study is to identify the basic framework of the language network analysis method through a content analysis of 53 Korean journal articles on language network analysis retrieved from domestic academic databases. The content-analysis categories are the type of language text analyzed, the keyword selection method, the method of identifying co-occurrence relations, the network construction method, and the types of network analysis tools and metrics. The main findings are as follows. First, journal articles and interview transcripts are the text types most frequently analyzed. Second, keywords are mostly selected by the frequency of words extracted from the body of the text. Third, relations between keywords are almost always identified by co-occurrence frequency. Fourth, language networks are usually constructed as multiple networks rather than a single network. Fifth, NetMiner, UCINET/NetDraw, NodeXL, and Pajek are used for network analysis. Sixth, a variety of metrics are used, including density, centrality, and sub-networks. These characteristics can serve as building blocks for the basic framework of the language network analysis method.
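The frequency-based keyword selection and co-occurrence counting summarized above can be sketched in a few lines of plain Python (the surveyed studies typically fed such counts into NetMiner or UCINET; the toy documents here are invented):

```python
from collections import Counter
from itertools import combinations

docs = [
    ["network", "analysis", "keyword"],
    ["network", "keyword", "centrality"],
    ["network", "analysis", "centrality"],
]

# keyword selection by raw term frequency (top 3 here)
freq = Counter(w for d in docs for w in d)
keywords = {w for w, _ in freq.most_common(3)}

# co-occurrence frequency: keyword pairs appearing in the same document
cooc = Counter()
for d in docs:
    for a, b in combinations(sorted(keywords & set(d)), 2):
        cooc[(a, b)] += 1

print(cooc.most_common())
```

The resulting pair counts become edge weights of the language network, on which density, centrality, and sub-network metrics are then computed.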

Burmese Sentiment Analysis Based on Transfer Learning

  • Mao, Cunli;Man, Zhibo;Yu, Zhengtao;Wu, Xia;Liang, Haoyuan
    • Journal of Information Processing Systems
    • /
    • Vol. 18, No. 4
    • /
    • pp.535-548
    • /
    • 2022
  • Using a resource-rich language to classify sentiments in a resource-poor language is a popular subject of research in natural language processing. Burmese is a low-resource language. In light of the scarcity of labeled training data for sentiment classification in Burmese, this study proposes a transfer-learning method for sentiment analysis that transfers sentiment features from English. The method generates a cross-language word-embedding representation of the Burmese vocabulary to map Burmese text into the semantic space of English text. A model to classify sentiments in English is then pre-trained using a convolutional neural network and an attention mechanism. The parameters of its network layers, which capture cross-language sentiment features, are transferred to the model that classifies sentiments in Burmese. Finally, the model is fine-tuned using the labeled Burmese data. The experimental results show that the proposed method significantly improves sentiment classification in Burmese compared with a model trained using only a Burmese corpus.
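The cross-language embedding step can be sketched with a simple linear alignment: learn a map from source-language word vectors into the English embedding space from seed pairs, so that an English-trained classifier can score mapped vectors. The random data below stands in for real Burmese/English embeddings; this is a generic alignment sketch, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
W_true = rng.normal(size=(dim, dim))   # unknown "true" alignment (toy)
X_src = rng.normal(size=(20, dim))     # toy source-language word vectors
X_en = X_src @ W_true                  # their aligned English counterparts

# least-squares estimate of the mapping from the seed dictionary
W, *_ = np.linalg.lstsq(X_src, X_en, rcond=None)

# map a new source-language vector into the English space, where an
# English-trained sentiment classifier could score it
x_new = rng.normal(size=dim)
mapped = x_new @ W
```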

Deep Neural Network 언어모델을 위한 Continuous Word Vector 기반의 입력 차원 감소 (Input Dimension Reduction based on Continuous Word Vector for Deep Neural Network Language Model)

  • 김광호;이동현;임민규;김지환
    • Phonetics and Speech Sciences
    • /
    • Vol. 7, No. 4
    • /
    • pp.3-8
    • /
    • 2015
  • In this paper, we investigate an input-dimension reduction method using continuous word vectors in a deep neural network language model. In the proposed method, continuous word vectors are generated with Google's Word2Vec from a large training corpus, so that they satisfy the distributional hypothesis, and the discrete 1-of-|V| word vectors are replaced with their corresponding continuous word vectors. In our implementation, the input dimension was successfully reduced from 20,000 to 600 when a tri-gram language model was used with a vocabulary of 20,000 words. The total training time on the Wall Street Journal corpus (37M words) was reduced from 30 days to 14 days.
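The encoding swap is easy to see on a toy scale: a 5-word vocabulary and 3-dimensional vectors stand in for the paper's 20,000-word vocabulary and Word2Vec vectors; the words and embedding values are invented:

```python
import numpy as np

vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
emb = np.arange(15, dtype=float).reshape(5, 3)  # toy embedding table

def onehot_input(context):
    # baseline: len(context) * |V| input dimensions
    x = np.zeros(len(context) * len(vocab))
    for i, w in enumerate(context):
        x[i * len(vocab) + vocab[w]] = 1.0
    return x

def dense_input(context):
    # proposed: len(context) * embedding-dim input dimensions
    return np.concatenate([emb[vocab[w]] for w in context])

print(onehot_input(["the", "cat"]).size)  # 10
print(dense_input(["the", "cat"]).size)   # 6
```

With the paper's numbers the same swap shrinks the input layer from 20,000 to 600 dimensions, which is where the training-time savings come from.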

Human-like sign-language learning method using deep learning

  • Ji, Yangho;Kim, Sunmok;Kim, Young-Joo;Lee, Ki-Baek
    • ETRI Journal
    • /
    • Vol. 40, No. 4
    • /
    • pp.435-445
    • /
    • 2018
  • This paper proposes a human-like sign-language learning method that uses a deep-learning technique. Inspired by the fact that humans can learn sign language from just a set of pictures in a book, in the proposed method, the input data are pre-processed into an image. In addition, the network is partially pre-trained to imitate the preliminarily obtained knowledge of humans. The learning process is implemented with a well-known network, that is, a convolutional neural network. Twelve sign actions are learned in 10 situations, and can be recognized with an accuracy of 99% in scenarios with low-cost equipment and limited data. The results show that the system is highly practical, as well as accurate and robust.
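The "learn from pictures" pre-processing can be sketched as rasterizing a sign action, recorded as a sequence of joint values, into a fixed-size image that an ordinary CNN could consume. The shapes and data below are invented, not the paper's actual input format:

```python
import numpy as np

def sequence_to_image(frames, height=8):
    frames = np.asarray(frames, dtype=float)   # (T, J): T steps, J joints
    lo, hi = frames.min(), frames.max()
    norm = (frames - lo) / (hi - lo) if hi > lo else np.zeros_like(frames)
    # resample T time steps to a fixed image height
    rows = np.linspace(0, len(frames) - 1, height).round().astype(int)
    return norm[rows]                          # (height, J) grayscale "image"

img = sequence_to_image(np.random.default_rng(1).normal(size=(30, 6)))
print(img.shape)  # (8, 6)
```

Once every action is a fixed-size image, standard image-classification CNNs (and partial pre-training on related image data) apply directly.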

Sign Language Image Recognition System Using Artificial Neural Network

  • Kim, Hyung-Hoon;Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 24, No. 2
    • /
    • pp.193-200
    • /
    • 2019
  • Hearing-impaired people live in a voice-oriented culture, but the difficulty of communicating with hearing people through sign language causes many of them inconvenience and various disadvantages in daily and social life. Therefore, in this paper, we study a sign language translation system for communication between hearing and hearing-impaired people, and implement a prototype system. Previous studies on such translation systems fall into two types: those using video image systems and those using shape input devices. However, existing systems have limitations: they fail to recognize the varied sign language expressions of individual users, and they require special devices. In this paper, we use an artificial neural network to recognize the varied sign language expressions of sign language users. By using widely available smartphones and various video equipment for sign language image recognition, we aim to improve the usability of the sign language translation system.

A Study on Word Sense Disambiguation Using Bidirectional Recurrent Neural Network for Korean Language

  • Min, Jihong;Jeon, Joon-Woo;Song, Kwang-Ho;Kim, Yoo-Sung
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 22, No. 4
    • /
    • pp.41-49
    • /
    • 2017
  • Word sense disambiguation (WSD), which determines the exact meaning of a homonym, a word form that can carry different meanings, is very important for understanding the semantics of a text document. Many recent studies on WSD have used a neural network language model (NNLM), in which a neural network represents a document as vectors and analyzes its semantics. Among previous WSD studies using NNLMs, RNN (recurrent neural network) models perform better than others because an RNN can reflect the order of word occurrences in addition to the word appearance information in a document. However, since an RNN model uses only the forward order of word occurrences, it cannot reflect the characteristic of natural language that later words can affect the interpretation of preceding words. In this paper, we propose a WSD scheme using a bidirectional RNN that reflects both the forward and the backward order of word occurrences in a document. In the experiments, the accuracy of the proposed model is higher than that of the previous method using a unidirectional RNN. Hence, bidirectional word-order information is confirmed to be useful for WSD in Korean.
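The bidirectional idea can be sketched with a minimal numpy RNN: run it forward and backward over the word embeddings and concatenate the two hidden states at the ambiguous word's position. Weights and inputs are random toys; a real model would learn them:

```python
import numpy as np

def rnn(xs, Wx, Wh):
    # simple tanh RNN returning the hidden state at every position
    h = np.zeros(Wh.shape[0])
    states = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)
        states.append(h)
    return states

rng = np.random.default_rng(0)
sent = rng.normal(size=(5, 3))              # 5 words, 3-dim embeddings (toy)
Wx, Wh = rng.normal(size=(4, 3)), rng.normal(size=(4, 4))

fwd = rnn(sent, Wx, Wh)                     # summarizes the left context
bwd = rnn(sent[::-1], Wx, Wh)[::-1]         # summarizes the right context

t = 2                                       # position of the ambiguous word
feature = np.concatenate([fwd[t], bwd[t]])  # input to a sense classifier
print(feature.shape)  # (8,)
```

The concatenated feature carries both preceding and following context, which is exactly what the unidirectional RNN baseline lacks.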

Simple and effective neural coreference resolution for Korean language

  • Park, Cheoneum;Lim, Joonho;Ryu, Jihee;Kim, Hyunki;Lee, Changki
    • ETRI Journal
    • /
    • Vol. 43, No. 6
    • /
    • pp.1038-1048
    • /
    • 2021
  • We propose an end-to-end neural coreference resolution for the Korean language that uses an attention mechanism to point to the same entity. Because Korean is a head-final language, we focused on a method that uses a pointer network based on the head. The key idea is to consider all nouns in the document as candidates based on the head-final characteristics of the Korean language and learn distributions over the referenced entity positions for each noun. Given the recent success of applications using bidirectional encoder representation from transformer (BERT) in natural language-processing tasks, we employed BERT in the proposed model to create word representations based on contextual information. The experimental results indicated that the proposed model achieved state-of-the-art performance in Korean language coreference resolution.
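The pointing step can be sketched as dot-product attention: a mention's query vector scores all candidate noun vectors (contextual BERT vectors in the paper; toy vectors below) and "points" to the most probable antecedent position:

```python
import numpy as np

def point(query, candidates):
    scores = candidates @ query            # dot-product attention scores
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                   # distribution over candidate positions
    return int(probs.argmax()), probs

cands = np.eye(3)                          # 3 candidate noun vectors (toy)
q = np.array([0.1, 0.9, 0.0])              # query for the current mention (toy)
idx, probs = point(q, cands)
print(idx)  # 1
```

Training then amounts to pushing probability mass toward the correct antecedent position for each mention.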

언어 모델 네트워크에 기반한 대어휘 연속 음성 인식 (Large Vocabulary Continuous Speech Recognition Based on Language Model Network)

  • 안동훈;정민화
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 21, No. 6
    • /
    • pp.543-551
    • /
    • 2002
  • This paper proposes a search method capable of real-time continuous speech recognition over a large vocabulary of about 20,000 words. The basic search is a single pass using a token-passing Viterbi decoding algorithm. A language model network is introduced so that various language models form a consistent search space, and the search space is dynamically reconstructed from the tokens that survive the pruning stage. The Viterbi algorithm is modified to output a word graph and the N best sentences for easy post-processing. The resulting decoder was tested on a 20,000-word database and evaluated in terms of recognition rate and real-time factor (RTF).
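The token-passing search can be sketched over a tiny two-arc-deep word network; the arcs and probabilities below are invented for illustration:

```python
import math

# arcs: (from_state, to_state, word, log probability)
arcs = [
    (0, 1, "a", math.log(0.6)), (0, 1, "b", math.log(0.4)),
    (1, 2, "c", math.log(0.7)), (1, 2, "d", math.log(0.3)),
]

tokens = {0: (0.0, [])}        # state -> (best log score, word history)
for _ in range(2):             # propagate tokens through the network
    new = {}
    for s, t, w, lp in arcs:
        if s in tokens:
            score = tokens[s][0] + lp
            if t not in new or score > new[t][0]:
                new[t] = (score, tokens[s][1] + [w])
    tokens = new               # beam pruning would drop weak tokens here

best_score, best_words = max(tokens.values())
print(best_words)  # ['a', 'c']
```

Keeping more than one token per state (rather than only the best) is what allows the modified Viterbi search to emit a word graph and N-best sentences.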

대어휘 연속음성인식을 위한 서브네트워크 기반의 1-패스 세미다이나믹 네트워크 디코딩 (1-Pass Semi-Dynamic Network Decoding Using a Subnetwork-Based Representation for Large Vocabulary Continuous Speech Recognition)

  • 정민화;안동훈
    • MALSORI (Journal of the Phonetic Society of Korea)
    • /
    • No. 50
    • /
    • pp.51-69
    • /
    • 2004
  • In this paper, we present a one-pass semi-dynamic network decoding framework that inherits both the fast decoding speed of static network decoders and the memory efficiency of dynamic network decoders. Our method is based on a novel language model network representation that is essentially a finite state machine (FSM). The static network derived from the language model network [1][2] is partitioned into smaller subnetworks which are static by nature or self-structured. The whole network is managed dynamically so that the subnetworks required for decoding are cached in memory. The network is near-minimized by applying the tail-sharing algorithm. Our decoder is evaluated on a 25k-word Korean broadcast news transcription task. The search network itself is reduced by 73.4% by the tail-sharing algorithm. Compared with the equivalent static network decoder, the semi-dynamic network decoder increases decoding time by at most 6%, while it can be flexibly adapted to various memory configurations, with a minimal usage of 37.6% of the complete network size.
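The intuition behind tail sharing is that words ending the same way can share the tail of their state sequences. The toy count below contrasts the states of a plain prefix tree with the number of distinct suffix states when common tails are shared; it is a stand-in for the intuition, not a reimplementation of the paper's algorithm:

```python
words = ["network", "work", "worked", "walked"]

def prefix_states(ws):
    nodes = {()}                       # root state
    for w in ws:
        for i in range(1, len(w) + 1):
            nodes.add(tuple(w[:i]))    # one state per distinct prefix
    return len(nodes)

def shared_tail_states(ws):
    tails = set()
    for w in ws:
        for i in range(len(w)):
            tails.add(w[i:])           # each distinct suffix is one shared state
    return len(tails)

print(prefix_states(words), shared_tail_states(words))  # 19 16
```

The savings grow quickly with vocabulary size, which is consistent with the 73.4% network reduction reported above.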
