• Title/Summary/Keyword: Word Input

Search results: 225 items (processing time: 0.019 seconds)

한국어의 종성중화 작용이 영어 단어 인지에 미치는 영향 (The Effects of Korean Coda-neutralization Process on Word Recognition in English)

  • 김선미;남기춘
    • 말소리와 음성과학 / Vol. 2 No. 1 / pp.59-68 / 2010
  • This study addresses the issue of whether Korean (L1)-English (L2) non-proficient bilinguals are affected by the native coda-neutralization process when recognizing words in English continuous speech. Korean phonological rules require that if liaison occurs between 'words', the coda-neutralization process must apply before the liaison process, so that liaison consonants are coda-neutralized ones such as /b/, /d/, or /g/ rather than non-neutralized ones like /p/, /t/, /k/, /tʃ/, /dʒ/, or /s/. Consequently, if Korean listeners apply their native coda-neutralization rules to English speech input, word detection will be easier when coda-neutralized consonants precede target words than when non-neutralized ones do. Word-spotting and word-monitoring tasks were used in Experiments 1 and 2, respectively. In both experiments, listeners detected vowel-initial target words faster and more accurately when they were preceded by coda-neutralized consonants than when preceded by non-neutralized ones. The results show that Korean listeners exploit their native phonological process when processing English, irrespective of whether that process is appropriate or not.

  • PDF
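To make the rule interaction described in the entry above concrete, here is a minimal Python sketch of coda neutralization feeding liaison, using a toy romanized transcription; the symbol inventory and the `liaison` helper are illustrative simplifications, not materials from the study.

```python
# Toy illustration: Korean coda neutralization applies before liaison,
# so a coda resyllabified onto a following vowel-initial word surfaces
# as one of the neutralized consonants (rendered /b, d, g/ in the abstract).
CODA_NEUTRALIZATION = {"p": "b", "t": "d", "s": "d", "k": "g"}
VOWELS = set("aeiou")

def liaison(word1, word2):
    """Neutralize the final coda of word1, then move it onto a
    vowel-initial word2 (the rule ordering the experiments test)."""
    coda = word1[-1]
    if word2[0] in VOWELS and coda in CODA_NEUTRALIZATION:
        return word1[:-1], CODA_NEUTRALIZATION[coda] + word2
    return word1, word2

# An /s/-final Korean syllable before a vowel-initial English target word:
print(liaison("mas", "apple"))   # ('ma', 'dapple'): the listener hears a [d]-like onset
```

Under this ordering, vowel-initial words are more often preceded by [b, d, g] in the listener's native system, which is the asymmetry the word-spotting and word-monitoring results reflect.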

Word-Level Embedding to Improve Performance of Representative Spatio-temporal Document Classification

  • Byoungwook Kim;Hong-Jun Jang
    • Journal of Information Processing Systems / Vol. 19 No. 6 / pp.830-841 / 2023
  • Tokenization is the process of segmenting input text into smaller units, and it is a preprocessing task mainly performed to improve the efficiency of the machine learning process. Various tokenization methods have been proposed for natural language processing, but studies have primarily focused on segmenting text efficiently. Few studies have explored which tokenization methods are suitable for document classification tasks in Korean. In this paper, an exploratory study was performed to find the tokenization method best suited to improving the performance of a representative spatio-temporal document classifier for Korean. A convolutional neural network model was used for the experiment, and for the final performance comparison we selected a document classification task whose performance depends strongly on the tokenization method. The commonly used jamo, character, and word units were adopted as tokenization methods for the comparative experiments. The experiment confirmed that word-unit tokenization performs best for representative spatio-temporal document classification, where the semantic embedding ability of the token itself is important.
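As a concrete illustration of the three token units compared in the entry above, the sketch below uses Python's standard Unicode normalization to produce word, character, and jamo tokens from a Korean string; it illustrates the units themselves, not the authors' preprocessing code.

```python
import unicodedata

def word_tokens(text):
    """Word-unit tokens: split on whitespace (eojeol units in Korean)."""
    return text.split()

def char_tokens(text):
    """Character-unit tokens: one token per Hangul syllable / character."""
    return [ch for ch in text if not ch.isspace()]

def jamo_tokens(text):
    """Jamo-unit tokens: NFD normalization decomposes each precomposed
    Hangul syllable into its conjoining jamo (initial/medial/final)."""
    decomposed = unicodedata.normalize("NFD", text)
    return [ch for ch in decomposed if not ch.isspace()]

sentence = "서울 날씨"
print(word_tokens(sentence))   # ['서울', '날씨']
print(char_tokens(sentence))   # ['서', '울', '날', '씨']
print(jamo_tokens(sentence))   # 5 jamo for '서울' (initial/medial of '서', initial/medial/final of '울'), etc.
```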

CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로 (Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding)

  • 박현정;송민채;신경식
    • 지능정보연구 / Vol. 24 No. 2 / pp.59-83 / 2018
  • As sentiment analysis for identifying the needs of customers and the public grows in importance, various deep learning models have recently been introduced for English text. Focusing on the linguistic differences between English and Korean, this study empirically examines the basic issues encountered when applying deep learning models to sentiment analysis of Korean product review text. Specifically, the word vectors used as input to the deep learning model are derived at the morpheme level, and we verify with a non-static CNN (Convolutional Neural Network) model how the accuracy of sentiment analysis changes across alternative ways of deriving morpheme vectors. The morpheme-vector alternatives all apply CBOW (Continuous Bag-Of-Words) and vary along criteria such as the type of input data, sentence segmentation, spelling and spacing correction, part-of-speech selection, POS-tag attachment, and the minimum frequency of the morphemes considered. Classification accuracy improved when the morpheme vectors were derived from text in the same domain as the sentiment analysis target, even if its grammatical quality was low; when spelling and spacing were corrected in addition to sentence segmentation; and when all parts of speech, including the unanalyzable category, were considered. The POS-tag attachment scheme, considered because of the high proportion of homonyms in Korean, and the minimum-frequency threshold for included morphemes showed no clear effect.
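A condensed sketch of the pipeline examined above, assuming gensim 4.x and PyTorch: CBOW vectors are trained over morpheme-tokenized reviews and loaded into an embedding layer that remains trainable (the non-static setting), feeding a small convolutional classifier. Morpheme analysis itself is omitted, and `morpheme_sents`, the layer sizes, and kernel widths are placeholder assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn
from gensim.models import Word2Vec

# morpheme_sents: reviews as lists of morpheme strings (placeholder data).
morpheme_sents = [["배송", "빠르", "다", "좋", "다"], ["품질", "별로", "이", "다"]]

# CBOW (sg=0) morpheme vectors; min_count, window, etc. are the knobs the study varies.
w2v = Word2Vec(morpheme_sents, vector_size=100, sg=0, min_count=1, window=5)

class NonStaticCNN(nn.Module):
    """Sentence CNN whose embedding weights are initialized from the CBOW
    vectors but remain trainable ("non-static")."""
    def __init__(self, w2v_model, n_classes=2):
        super().__init__()
        weights = torch.tensor(w2v_model.wv.vectors)
        self.embed = nn.Embedding.from_pretrained(weights, freeze=False)  # non-static
        dim = weights.shape[1]
        self.convs = nn.ModuleList(
            [nn.Conv1d(dim, 64, kernel_size=k) for k in (3, 4, 5)]
        )
        self.fc = nn.Linear(64 * 3, n_classes)

    def forward(self, token_ids):                     # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)     # (batch, dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

model = NonStaticCNN(w2v)
# Token ids are looked up with w2v.wv.key_to_index before being fed to the model.
```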

Bracketing Input for Accurate Parsing

  • No, Yong-Kyoon
    • 한국언어정보학회:학술대회논문집 / 한국언어정보학회 2007년도 정기학술대회 / pp.358-364 / 2007
  • Syntax parsers can benefit from speakers' intuition about constituent structures indicated in the input string in the form of parentheses. Focusing on languages like Korean, whose orthographic convention requires more than one word to be written without spaces, we describe an algorithm for passing the bracketing information across the tagger to the probabilistic CFG parser, together with one for heightening (or penalizing, as the case may be) probabilities of putative constituents as they are suggested by the parser. It is shown that two or three constituents marked in the input suffice to guide the parser to the correct parse as the most likely one, even with sentences that are considered long.

  • PDF
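A minimal sketch of the scoring idea described in the entry above: constituents proposed by a probabilistic parser are rewarded when their span matches a bracket the speaker marked in the input, and penalized when they cross a marked bracket. The bracket extraction and the boost/penalty constants are illustrative assumptions, not the paper's actual algorithm or values.

```python
def extract_brackets(tokens):
    """Return (clean_tokens, bracket_spans) from a token list in which the
    speaker marked constituents with '(' and ')'."""
    clean, spans, stack = [], [], []
    for tok in tokens:
        if tok == "(":
            stack.append(len(clean))
        elif tok == ")":
            spans.append((stack.pop(), len(clean)))   # [start, end) over clean tokens
        else:
            clean.append(tok)
    return clean, spans

def crosses(span, bracket):
    """True if span and bracket partially overlap (neither nests in the other)."""
    (s, e), (bs, be) = span, bracket
    return s < bs < e < be or bs < s < be < e

def adjusted_prob(prob, span, brackets, boost=10.0, penalty=0.1):
    """Heighten the probability of a putative constituent matching a marked
    bracket, penalize one that crosses a bracket, leave others unchanged."""
    if span in brackets:
        return prob * boost
    if any(crosses(span, b) for b in brackets):
        return prob * penalty
    return prob

tokens = "나는 ( 어제 산 ) 책을 읽었다".split()
clean, brackets = extract_brackets(tokens)
print(clean, brackets)                         # ['나는', '어제', '산', '책을', '읽었다'] [(1, 3)]
print(adjusted_prob(0.02, (1, 3), brackets))   # 0.2: the marked constituent is boosted
```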

문서입력 작업 시 컴퓨터 키보드 유형이 손목관절의 운동학적 특성에 미치는 영향 (The Effect of Standard Keyboard and Fixed-Split Keyboard on Wrist Posture During Word Processing)

  • 권혁철;정동훈;공진용
    • 한국전문물리치료학회지 / Vol. 11 No. 1 / pp.35-43 / 2004
  • There were two purposes to this study. The first was to investigate the effects of standard and fixed-split keyboards on wrist posture and movement during word processing. The second was to identify optimal computer input devices for preventing cumulative trauma disorders in the wrist region. The subjects were thirteen healthy men and women who all agreed to participate in this study. Kinematic data were measured for wrist flexion/extension and radial/ulnar deviation during a 20-minute period of word processing. The measuring tool was an electrical goniometer produced by Biometrics Cooperation. The results were as follows: 1. Wrist flexion and extension at the resting starting position were not significantly different (p>.05); however, the angles of radial and ulnar deviation differed significantly between standard and split keyboard use during word processing (p<.05). 2. In the initial 10 minutes, the dynamic angles of wrist flexion and extension were not significantly different (p>.05); however, the dynamic angle of radial and ulnar deviation differed significantly between standard and split keyboard use (p<.05). These results suggest that the split keyboard is preferable to the standard keyboard because it prevented excessive ulnar deviation during word processing.

  • PDF

단어의 의미와 문맥을 고려한 순환신경망 기반의 문서 분류 (Document Classification using Recurrent Neural Network with Word Sense and Contexts)

  • 주종민;김남훈;양형정;박혁로
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 7 No. 7 / pp.259-266 / 2018
  • This paper proposes a method for classifying documents with a recurrent neural network (RNN) using features that take word order and context into account. Words in a document are represented as vectors with word2vec, which reflects word meaning, and document features are extracted with doc2vec to take context into account. For document classification we use an RNN classifier, which feeds the output of the previous node into the input of the next node. Among neural network classifiers, RNNs are well suited to sequence data and therefore perform well on document classification. We use the GRU (Gated Recurrent Unit) model, which mitigates the vanishing gradient problem of RNNs and is fast to compute. One Korean document collection and two English document collections were used as experimental data, and the GRU-based document classifier showed a performance improvement of about 3.5% over a CNN-based document classifier.
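A minimal PyTorch sketch of the kind of GRU document classifier described above: word ids are embedded, run through a GRU, and the final hidden state is mapped to class scores. In the paper the inputs come from word2vec/doc2vec features; here the embedding, vocabulary size, and dimensions are placeholder assumptions.

```python
import torch
import torch.nn as nn

class GRUDocClassifier(nn.Module):
    """Documents as sequences of word ids -> embeddings -> GRU -> class scores."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, n_classes=4):
        super().__init__()
        # In the paper's setting these embeddings would come from word2vec;
        # here they are randomly initialized placeholders.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, n_classes)

    def forward(self, token_ids):            # (batch, seq_len)
        x = self.embed(token_ids)            # (batch, seq_len, embed_dim)
        _, h_n = self.gru(x)                 # h_n: (1, batch, hidden_dim)
        return self.fc(h_n.squeeze(0))       # (batch, n_classes)

model = GRUDocClassifier(vocab_size=5000)
dummy_batch = torch.randint(1, 5000, (8, 40))   # 8 documents, 40 tokens each
print(model(dummy_batch).shape)                 # torch.Size([8, 4])
```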

Color Recommendation for Text Based on Colors Associated with Words

  • Liba, Saki;Nakamura, Tetsuaki;Sakamoto, Maki
    • 한국산업정보학회논문지 / Vol. 17 No. 1 / pp.21-29 / 2012
  • In this paper, we propose a new method for selecting colors that represent the meaning of text content, based on the cognitive relation between words and colors. Our method builds on a previous study revealing the existence of crucial words for estimating the colors associated with the meaning of text content. Using the associative probability of each color with a given word and the strength of the word's color association, we estimate the probability of colors associated with a given text. The goal of this study is to propose a system that recommends cognitively plausible colors for the meaning of the input text. To build a versatile and efficient database for our system, two psychological experiments were conducted using news site articles. In Experiment 1, we collected 498 words that participants chose as having strong color associations. In Experiment 2, we then investigated which color was associated with each word. In addition to those data, we employed estimated values of the strength of color association and the colors associated with the words in a very large newspaper corpus (approximately 130,000 words), based on the similarity between words obtained by Latent Semantic Analysis (LSA). Our method therefore allows us to select colors for a large variety of words or sentences. Finally, we verified that our system succeeds in proposing colors associated with the meaning of the input text by comparing the correct colors given by participants with the colors estimated by our method. Our system is expected to be useful in various situations such as data visualization, information retrieval, and art or web page design.
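A minimal sketch of the scoring step described above: per-word color association probabilities, weighted by each word's strength of color association, are accumulated over the words of the input text and normalized. The association tables below are made-up illustrations, not the experimental data.

```python
from collections import defaultdict

# P(color | word): illustrative made-up values, not the experimental data.
COLOR_GIVEN_WORD = {
    "ocean":  {"blue": 0.7, "green": 0.2, "white": 0.1},
    "sunset": {"red": 0.5, "orange": 0.4, "purple": 0.1},
    "beach":  {"yellow": 0.5, "blue": 0.3, "white": 0.2},
}
# Strength of color association per word (0..1), also illustrative.
ASSOCIATION_STRENGTH = {"ocean": 0.9, "sunset": 0.8, "beach": 0.6}

def recommend_colors(text, top_k=3):
    """Accumulate strength-weighted color probabilities over the words of the text."""
    scores = defaultdict(float)
    for word in text.lower().split():
        weight = ASSOCIATION_STRENGTH.get(word, 0.0)
        for color, prob in COLOR_GIVEN_WORD.get(word, {}).items():
            scores[color] += weight * prob
    total = sum(scores.values()) or 1.0
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(color, score / total) for color, score in ranked[:top_k]]

print(recommend_colors("sunset over the ocean"))
# e.g. [('blue', ...), ('red', ...), ('orange', ...)]
```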

잘못 형성된 입력문장에 대한 CHART PARSER (CHART PARSER FOR ILL-FORMED INPUT SENTENCES)

  • 민경호
    • 인지과학 / Vol. 4 No. 1 / pp.177-212 / 1993
  • This study is based on Mellish's work (1989) on ill-formed input. It focuses on recovering syntactically ill-formed input sentences using a chart-based parser. Mellish's system consists of two analyzers, one for well-formed input and one for ill-formed input, and the present work follows that design. The paper mainly discusses the concept of chart parsing and analysis strategies for ill-formed input. It also covers the design and implementation of the proposed system and a comparison of the proposed system with Mellish's.
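The chart-parsing concept the abstract refers to can be illustrated with a minimal bottom-up chart parser over a toy English grammar. This is a generic textbook-style sketch, not Mellish's system or the author's implementation, and it omits the recovery strategies for ill-formed input.

```python
from collections import deque

# Toy grammar and lexicon (illustrative).
GRAMMAR = [("S", ("NP", "VP")), ("NP", ("Det", "N")), ("VP", ("V", "NP"))]
LEXICON = {"the": ["Det"], "dog": ["N"], "cat": ["N"], "chased": ["V"]}

def chart_parse(words):
    """Bottom-up chart parsing with the fundamental rule.
    An edge (start, end, lhs, needed) is passive when needed == ()."""
    chart, agenda = set(), deque()

    def add(edge):
        if edge not in chart:
            chart.add(edge)
            agenda.append(edge)

    for i, w in enumerate(words):                      # lexical passive edges
        for cat in LEXICON.get(w, []):
            add((i, i + 1, cat, ()))

    while agenda:
        start, end, lhs, needed = agenda.popleft()
        if not needed:                                 # passive edge
            # bottom-up rule: start any rule whose RHS begins with this category
            for head, rhs in GRAMMAR:
                if rhs[0] == lhs:
                    add((start, end, head, tuple(rhs[1:])))
            # fundamental rule: extend active edges ending where this one starts
            for (s, e, h, need) in list(chart):
                if need and e == start and need[0] == lhs:
                    add((s, end, h, need[1:]))
        else:                                          # active edge
            for (s, e, h, need) in list(chart):
                if not need and s == end and h == needed[0]:
                    add((start, e, lhs, needed[1:]))
    return chart

edges = chart_parse("the dog chased the cat".split())
print((0, 5, "S", ()) in edges)                        # True: a full S spans the input
```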

Language-Independent Word Acquisition Method Using a State-Transition Model

  • Xu, Bin;Yamagishi, Naohide;Suzuki, Makoto;Goto, Masayuki
    • Industrial Engineering and Management Systems / Vol. 15 No. 3 / pp.224-230 / 2016
  • The use of new words, numerous spoken languages, and abbreviations on the Internet is extensive. As such, automatically acquiring words for the purpose of analyzing Internet content is very difficult. In a previous study, we proposed a method for Japanese word segmentation using character N-grams. The previously proposed method is based on a simple state-transition model established under the assumption that the input document is described by four states (denoted A, B, C, and D) specified beforehand: state A represents words (nouns, verbs, etc.); state B represents statement separators (punctuation marks, conjunctions, etc.); state C represents postpositions (words that follow nouns); and state D represents prepositions (words that precede nouns). According to this state-transition model, based on the states assigned to each pseudo-word, we search the document from beginning to end for an accessible pattern; in other words, the transitions detect words during the search. In the present paper, we perform experiments with the proposed word acquisition algorithm on Japanese and Chinese newspaper articles, obtained from Japan's Kyoto University and the Chinese People's Daily. The proposed method does not depend on the language structure: if text documents are expressed in Unicode, the same algorithm can acquire words in Japanese and Chinese, which do not place spaces between words. Hence, we demonstrate that the proposed method is language independent.
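The four-state scan described above can be sketched as follows: each pseudo-word is assigned one of the states A (word), B (separator), C (postposition), or D (preposition), and the document is walked from beginning to end, collecting the segments labeled A as acquired words. Both the state-assignment lookup (which the paper derives from character N-gram statistics) and the transition table are placeholder assumptions.

```python
# States from the abstract: A = word, B = statement separator,
# C = postposition (follows a noun), D = preposition (precedes a noun).
# The allowed transitions below are illustrative, not the paper's model.
ALLOWED_TRANSITIONS = {
    "START": {"A", "D"},
    "A": {"A", "B", "C", "D"},
    "B": {"A", "D"},
    "C": {"A", "B", "D"},
    "D": {"A"},
}

# Placeholder state assignment; the paper derives this from character N-grams.
STATE_OF = {"今日": "A", "は": "C", "雨": "A", "が": "C", "降る": "A", "。": "B"}

def acquire_words(pseudo_words):
    """Walk the pseudo-word sequence, checking state transitions and
    collecting the segments in state A as acquired words."""
    words, prev = [], "START"
    for pw in pseudo_words:
        state = STATE_OF.get(pw, "A")          # unknown units default to 'word'
        if state not in ALLOWED_TRANSITIONS[prev]:
            prev = "START"                     # restart the scan on an impossible pattern
            continue
        if state == "A":
            words.append(pw)
        prev = state
    return words

print(acquire_words(["今日", "は", "雨", "が", "降る", "。"]))  # ['今日', '雨', '降る']
```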

TagPlus: 폭소노미에서 동의어 태그를 이용한 검색 시스템 (TagPlus: A Retrieval System using Synonym Tag in Folksonomy)

  • 이선숙;용환승
    • 디지털콘텐츠학회 논문지 / Vol. 8 No. 3 / pp.255-262 / 2007
  • Tagging is the process by which users add metadata in the form of keywords to shared content. Recently, tagging has been adopted by more and more users on the web; tagging sites let users attach tags to content such as bookmarks, photos, and videos. This paper analyzes the structure and background of tagging systems based on user participation, along with the various meanings and limitations such systems have. It also proposes TagPlus, a system that applies the synonym sets of the WordNet database to tag retrieval, and implements synonym tag search over the Flickr image sharing system.

  • PDF
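A minimal sketch of the synonym expansion behind the system described above, assuming NLTK's WordNet interface: a query tag is expanded with the lemma names of its WordNet synsets, and each expanded tag is then passed to the photo search, represented here by a placeholder `search_fn` rather than the actual Flickr API call.

```python
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

def expand_tag(tag):
    """Expand a query tag with the lemma names of its WordNet synsets."""
    synonyms = {tag}
    for synset in wn.synsets(tag):
        for lemma in synset.lemma_names():
            synonyms.add(lemma.replace("_", " ").lower())
    return sorted(synonyms)

def search_with_synonyms(tag, search_fn):
    """Run the photo search once per synonym tag and merge the results.
    `search_fn` stands in for the Flickr tag-search call."""
    results = []
    for synonym in expand_tag(tag):
        results.extend(search_fn(synonym))
    return results

print(expand_tag("car"))   # e.g. ['auto', 'automobile', 'cable car', 'car', ...]
```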