• Title/Summary/Keyword: Word Input

Search Results: 225

The Effects of Korean Coda-neutralization Process on Word Recognition in English (한국어의 종성중화 작용이 영어 단어 인지에 미치는 영향)

  • Kim, Sun-Mi; Nam, Ki-Chun
    • Phonetics and Speech Sciences / v.2 no.1 / pp.59-68 / 2010
  • This study addresses the issue of whether non-proficient Korean (L1)-English (L2) bilinguals are affected by the native coda-neutralization process when recognizing words in English continuous speech. Korean phonological rules require that if liaison occurs between 'words', the coda-neutralization process must apply before the liaison process, so the liaison consonants are coda-neutralized ones such as /b/, /d/, or /g/ rather than non-neutralized ones like /p/, /t/, /k/, /tʃ/, /dʒ/, or /s/. Consequently, if Korean listeners apply their native coda-neutralization rules to English speech input, word detection will be easier when coda-neutralized consonants precede target words than when non-neutralized ones do. Word-spotting and word-monitoring tasks were used in Experiments 1 and 2, respectively. In both experiments, listeners detected words faster and more accurately when vowel-initial target words were preceded by coda-neutralized consonants than when they were preceded by non-neutralized ones. The results show that Korean listeners exploit their native phonological process when processing English, irrespective of whether that native process is appropriate or not.


Word-Level Embedding to Improve Performance of Representative Spatio-temporal Document Classification

  • Byoungwook Kim; Hong-Jun Jang
    • Journal of Information Processing Systems / v.19 no.6 / pp.830-841 / 2023
  • Tokenization is the process of segmenting input text into smaller units, and it is a preprocessing task mainly performed to improve the efficiency of the machine learning process. Various tokenization methods have been proposed for natural language processing, but studies have primarily focused on segmenting text efficiently. Few studies have explored which tokenization methods are suitable for the document classification task in Korean. In this paper, an exploratory study was performed to find the most suitable tokenization method for improving the performance of a representative spatio-temporal document classifier in Korean. For the experiment, a convolutional neural network model was used, and for the final performance comparison, document classification tasks were selected whose performance largely depends on the tokenization method. As tokenization methods for the comparative experiments, the commonly used Jamo, Character, and Word units were adopted. The experiment confirmed that word-unit tokenization showed excellent performance for the representative spatio-temporal document classification task, where the semantic embedding ability of the token itself is important.
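The three tokenization granularities compared above can be illustrated with a short sketch. The jamo decomposition below uses the standard Hangul syllable Unicode arithmetic; it is only an illustrative assumption about how such a tokenizer might be written, not the preprocessing code used in the paper.

```python
# Sketch of the three tokenization granularities compared in the paper
# (word / character / jamo). The jamo decomposition relies on standard
# Hangul syllable Unicode arithmetic; it is illustrative only.

CHO = [chr(c) for c in range(0x1100, 0x1113)]   # 19 initial consonants
JUNG = [chr(c) for c in range(0x1161, 0x1176)]  # 21 medial vowels
JONG = [""] + [chr(c) for c in range(0x11A8, 0x11C3)]  # 27 final consonants + "none"

def to_jamo(text: str) -> list[str]:
    out = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:            # precomposed Hangul syllable
            idx = code - 0xAC00
            out.append(CHO[idx // (21 * 28)])
            out.append(JUNG[(idx % (21 * 28)) // 28])
            if idx % 28:
                out.append(JONG[idx % 28])
        else:
            out.append(ch)
    return out

sentence = "서울에서 회의가 열렸다"
word_tokens = sentence.split()                       # word units
char_tokens = list(sentence.replace(" ", ""))        # character units
jamo_tokens = to_jamo(sentence.replace(" ", ""))     # jamo units
print(word_tokens, char_tokens, jamo_tokens, sep="\n")
```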

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung; Song, Min-chae; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning based sentiment analysis of English texts, natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being entered into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These vectors have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, cameras, etc. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with highly developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. At this point, several questions arise. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which primarily relies on the form of words, to Korean, with its high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize these issues as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we reach a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To address these research questions, we generate various types of morpheme vectors reflecting them and then compare the classification accuracy obtained with a non-static CNN (Convolutional Neural Network) model taking the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, the latter roughly corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of preprocessing, namely sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: the morphemes themselves, or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the range of POS tags considered, the minimum frequency for a morpheme to be included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model, using a context window of 5 and a vector dimension of 300. The results suggest that using same-domain text even with a lower degree of grammatical correctness, performing spelling and spacing corrections in addition to sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for a morpheme to be included do not appear to have any definite influence on classification accuracy.
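As a rough illustration of the morpheme vector derivation described above, the sketch below trains a CBOW Word2Vec model (window 5, dimension 300) on sentences already split into morphemes, optionally with POS tags attached. The tiny corpus, the tag format, and the use of gensim are assumptions made for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of deriving "morpheme vectors": sentences are first split
# into morphemes (e.g., with a Korean morphological analyzer), optionally
# with POS tags attached, and then fed to a CBOW Word2Vec model with
# window 5 and dimension 300, as stated in the abstract. The toy corpus
# and tag format below are assumptions for illustration.

from gensim.models import Word2Vec

# Each review is a list of morphemes, here suffixed with an assumed POS tag;
# real input would come from an analyzer run over the Naver Shopping reviews.
corpus = [
    ["배송/Noun", "이/Josa", "빠르/Adjective", "고/Eomi", "예쁘/Adjective", "다/Eomi"],
    ["향/Noun", "이/Josa", "좋/Adjective", "다/Eomi"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=300,   # vector dimension used in the paper
    window=5,          # context window used in the paper
    sg=0,              # CBOW, as stated above
    min_count=1,       # the paper also varies this minimum-frequency cutoff
)

vec = model.wv["예쁘/Adjective"]   # a 300-dimensional morpheme vector
```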

Bracketing Input for Accurate Parsing

  • No, Yong-Kyoon
    • Proceedings of the Korean Society for Language and Information Conference / 2007.11a / pp.358-364 / 2007
  • Syntax parsers can benefit from speakers' intuitions about constituent structure, indicated in the input string in the form of parentheses. Focusing on languages like Korean, whose orthographic convention requires more than one word to be written without spaces, we describe an algorithm for passing the bracketing information across the tagger to the probabilistic CFG parser, together with one for boosting (or penalizing, as the case may be) the probabilities of putative constituents as they are suggested by the parser. It is shown that two or three constituents marked in the input suffice to guide the parser to the correct parse as the most likely one, even for sentences that are considered long.
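A minimal sketch of the boosting/penalizing idea mentioned above: a candidate constituent proposed by the parser has its score raised if it coincides with a user-supplied bracket and lowered if it crosses one. The span representation, weights, and function names are illustrative assumptions, not the algorithm from the paper.

```python
# Sketch: reward parser-proposed spans that match a user bracket, penalize
# spans that cross one. Weights and representation are assumed for illustration.

def crosses(span, bracket):
    """True if the two (start, end) spans partially overlap (cross brackets)."""
    s, e = span
    bs, be = bracket
    return (s < bs < e < be) or (bs < s < be < e)

def adjusted_log_prob(log_prob, span, brackets, boost=2.0, penalty=-5.0):
    """Adjust a constituent's log probability using the user-supplied brackets."""
    if span in brackets:
        return log_prob + boost
    if any(crosses(span, b) for b in brackets):
        return log_prob + penalty
    return log_prob

# e.g. the speaker marked tokens 2..5 and 6..9 as constituents:
user_brackets = {(2, 5), (6, 9)}
print(adjusted_log_prob(-12.3, (2, 5), user_brackets))  # boosted
print(adjusted_log_prob(-12.3, (3, 7), user_brackets))  # crosses a bracket, penalized
```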


The Effect of Standard Keyboard and Fixed-Split Keyboard on Wrist Posture During Word Processing (문서입력 작업 시 컴퓨터 키보드 유형이 손목관절의 운동학적 특성에 미치는 영향)

  • Kwon, Hyuk-Cheol; Jeong, Dong-Hoon; Kong, Jin-Yong
    • Physical Therapy Korea / v.11 no.1 / pp.35-43 / 2004
  • This study had two purposes. The first was to investigate the effects of standard and fixed-split keyboards on wrist posture and movement during word processing. The second was to help select optimal computer input devices to prevent cumulative trauma disorders in the wrist region. The subjects were thirteen healthy men and women who all agreed to participate in this study. Kinematic data were measured for wrist flexion/extension and wrist radial/ulnar deviation during a 20-minute period of word processing work, using an electrical goniometer produced by Biometrics Corporation. The results were as follows: 1. Wrist flexion and extension at the resting starting position were not significantly different (p>.05); however, the angle of radial and ulnar deviation differed significantly between standard and split keyboard use during word processing (p<.05). 2. In the initial 10 minutes, the dynamic angle of wrist flexion and extension was not significantly different (p>.05); however, the dynamic angle of radial and ulnar deviation differed significantly between standard and split keyboard use (p<.05). These results suggest that the split keyboard is preferable to the standard keyboard, because it prevented excessive ulnar deviation during word processing.


Document Classification using Recurrent Neural Network with Word Sense and Contexts (단어의 의미와 문맥을 고려한 순환신경망 기반의 문서 분류)

  • Joo, Jong-Min; Kim, Nam-Hun; Yang, Hyung-Jeong; Park, Hyuck-Ro
    • KIPS Transactions on Software and Data Engineering / v.7 no.7 / pp.259-266 / 2018
  • In this paper, we propose a method for classifying documents with a Recurrent Neural Network by extracting features that consider word sense and context. The Word2vec method is adopted to represent each word in a document as a vector that reflects the order and meaning of words. Doc2vec is applied to extract document-level features that take context into account. An RNN classifier, which feeds the output of the previous node into the next node as input, is used for document classification. Among neural network classifiers, the RNN performs well on document classification because it is well suited to sequence data. We applied a GRU (Gated Recurrent Unit) model, which alleviates the vanishing gradient problem of RNNs and also reduces computation time. We used one Hangul document set and two English document sets for the experiments, and the GRU-based document classifier improves performance by about 3.5% compared to a CNN-based document classifier.
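A minimal sketch of a GRU-based document classifier of the kind described above: word indices pass through an embedding layer (which could be initialized with Word2vec vectors), a GRU layer summarizes the sequence, and a softmax layer predicts the class. The vocabulary size, embedding dimension, and other hyperparameters are assumptions, not the settings used in the paper.

```python
# Sketch of a GRU document classifier: embedding -> GRU -> softmax.
# All sizes are illustrative assumptions.

import tensorflow as tf

vocab_size, embed_dim, num_classes = 20000, 300, 4

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim),   # could be seeded with Word2vec
    tf.keras.layers.GRU(128),                            # gated recurrent unit layer
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5, validation_data=(x_val, y_val))
```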

Color Recommendation for Text Based on Colors Associated with Words

  • Liba, Saki; Nakamura, Tetsuaki; Sakamoto, Maki
    • Journal of Korea Society of Industrial Information Systems / v.17 no.1 / pp.21-29 / 2012
  • In this paper, we propose a new method for selecting colors that represent the meaning of text content, based on the cognitive relation between words and colors. Our method builds on a previous study revealing the existence of crucial words for estimating the colors associated with the meaning of text content. Using the associative probability of each color with a given word and the strength of that word's color association, we estimate the probability of colors associated with a given text. The goal of this study is to propose a system that recommends cognitively plausible colors for the meaning of the input text. To build a versatile and efficient database for our system, two psychological experiments were conducted using news site articles. In Experiment 1, we collected 498 words that the participants chose as having a strong association with color. In Experiment 2, we then investigated which color was associated with each word. In addition to these data, we employed estimates of the strength of color association and of the colors associated with the words in a very large newspaper corpus (approximately 130,000 words), based on word similarities obtained by Latent Semantic Analysis (LSA). Our method therefore allows us to select colors for a large variety of words and sentences. Finally, we verified that our system succeeded in proposing colors associated with the meaning of the input text, by comparing the colors estimated by our method with the correct colors given by participants. Our system is expected to be useful in various situations such as data visualization, information retrieval, art and web page design, and so on.
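A toy sketch of the estimation step described above: each color-associated word in the text contributes its color probabilities, weighted by how strongly the word evokes color, and the contributions are summed and normalized over the text. The word list, probabilities, and strengths below are invented for illustration, not values from the paper's database.

```python
# Toy estimation of colors for a text from per-word color association
# probabilities and association strengths (all values invented).

from collections import defaultdict

color_prob = {                                   # P(color | word), assumed
    "sea":    {"blue": 0.7, "green": 0.2, "white": 0.1},
    "sunset": {"red": 0.5, "orange": 0.4, "purple": 0.1},
}
strength = {"sea": 0.9, "sunset": 0.8}           # strength of color association, assumed

def colors_for_text(words):
    scores = defaultdict(float)
    for w in words:
        if w in color_prob:
            for color, p in color_prob[w].items():
                scores[color] += strength[w] * p   # weight by association strength
    total = sum(scores.values()) or 1.0
    return {c: s / total for c, s in sorted(scores.items(), key=lambda x: -x[1])}

print(colors_for_text("the sea at sunset".split()))
```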

CHART PARSER FOR ILL-FORMED INPUT SENTENCES (잘못 형성된 입력문장에 대한 CHART PARSER)

  • Kyongho Min
    • Korean Journal of Cognitive Science / v.4 no.1 / pp.177-212 / 1993
  • My research is based on the parser for ill-formed input by Mellish, in a paper in the Proceedings of the 27th ACL meeting, 1989. My system is composed of two parsers: WFCP and IFCP. When WFCP fails to give a parse tree for the input sentence, the sentence is identified as ill-formed and is parsed by IFCP for error detection and recovery at the syntactic level. My system is independent of the grammatical rules. It does not take semantic ill-formedness into account. My system uses a grammar composed of 25 context-free rules and combines two major parsing strategies: top-down expectation and bottom-up satisfaction. With top-down expectation, rules are retrieved under the inference condition and expanded by inactive arcs. When doing bottom-up parsing, my parser uses two modes: Left-to-Right parsing and Right-to-Left parsing. My system repairs errors successfully when the input contains an omitted word or an unknown word substituted for a valid word. Left-corner and right-corner errors are more easily detected and repaired than ill-formed sentences where the error is in the middle. The deviance note, with repair details, is kept in new inactive arcs generated by the error correction procedure. The implementation of my system is quite different from Mellish's. When rules are invoked, my system invokes all rules with minimal inference. My bottom-up parsing strategy uses Left-to-Right mode and Right-to-Left mode. My system is bottom-up-parsing-oriented, like the chart parser. Errors are repaired in two ways: using top-down hypotheses, and using the Need-Chart, which keeps the information about expectation and completion of goals expanded by rules. To reduce the number of top-down cycles, all rules are invoked simultaneously and this invocation information is kept in the Need-Chart. This idea will be extended to the implementation of a multiple-error recovery system.
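A highly simplified sketch of the two-stage idea described above: a bottom-up chart-style parse is attempted first, and if no constituent spans the whole sentence, the partial edges in the chart point at the likely location of the error. The toy grammar, lexicon, and diagnosis heuristic are illustrative assumptions, not the paper's 25-rule grammar or its repair procedure.

```python
# Toy two-stage parse: try a bottom-up chart parse; if no full parse exists,
# report the largest parsed span as a hint to where the ill-formedness lies.

GRAMMAR = {                     # toy binary CFG rules (assumed)
    ("Det", "N"): "NP",
    ("NP", "VP"): "S",
    ("V", "NP"): "VP",
}
LEXICON = {"the": "Det", "dog": "N", "cat": "N", "saw": "V"}

def chart_parse(words):
    n = len(words)
    chart = {}                                    # (i, j) -> set of categories
    for i, w in enumerate(words):
        chart[(i, i + 1)] = {LEXICON.get(w, "UNK")}
    for span in range(2, n + 1):                  # bottom-up satisfaction
        for i in range(0, n - span + 1):
            j = i + span
            cats = set()
            for k in range(i + 1, j):
                for a in chart.get((i, k), ()):
                    for b in chart.get((k, j), ()):
                        if (a, b) in GRAMMAR:
                            cats.add(GRAMMAR[(a, b)])
            if cats:
                chart[(i, j)] = cats
    return chart

def diagnose(words):
    chart = chart_parse(words)
    if "S" in chart.get((0, len(words)), ()):
        return "well-formed"
    covered = max((j - i, (i, j)) for (i, j) in chart)[1]
    return f"ill-formed; largest parsed span covers words {covered}"

print(diagnose("the dog saw the cat".split()))   # well-formed
print(diagnose("the dog saw the".split()))       # ill-formed (omitted word)
```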

Language-Independent Word Acquisition Method Using a State-Transition Model

  • Xu, Bin; Yamagishi, Naohide; Suzuki, Makoto; Goto, Masayuki
    • Industrial Engineering and Management Systems / v.15 no.3 / pp.224-230 / 2016
  • New words, colloquial spoken-language expressions, and abbreviations are used extensively on the Internet. As such, automatically acquiring words for the purpose of analyzing Internet content is very difficult. In a previous study, we proposed a method for Japanese word segmentation using character N-grams. That method is based on a simple state-transition model established under the assumption that the input document is described by four states (denoted A, B, C, and D) specified beforehand: state A represents words (nouns, verbs, etc.); state B represents statement separators (punctuation marks, conjunctions, etc.); state C represents postpositions (words that follow nouns); and state D represents prepositions (words that precede nouns). According to this state-transition model, based on the states applied to each pseudo-word, we search the document from beginning to end for an accessible pattern, and this transition process detects words during the search. In the present paper, we perform experiments with the proposed word acquisition algorithm using Japanese and Chinese newspaper articles, obtained from Japan's Kyoto University and the Chinese People's Daily. The proposed method does not depend on the language structure. If text documents are expressed in Unicode, the proposed method can, using the same algorithm, obtain words in Japanese and Chinese, neither of which puts spaces between words. Hence, we demonstrate that the proposed method is language independent.
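The four-state idea above can be illustrated with a heavily simplified sketch: given a candidate segmentation whose pieces have already been labeled with states A (word), B (separator), C (postposition), and D (preposition), the segmentation is accepted only if every adjacent pair of states follows an allowed transition, and the state-A pieces are collected as acquired words. The transition table and the example labels are assumptions for illustration, not the paper's actual model.

```python
# Toy illustration of the four-state transition idea (transition table assumed).

ALLOWED = {            # state -> states that may follow it (assumed)
    "A": {"B", "C"},   # a word may be followed by a separator or a postposition
    "B": {"A", "D"},   # after a separator, a word or a preposition may start
    "C": {"A", "B"},
    "D": {"A"},        # a preposition must be followed by a word
}

def acquire_words(labeled_segments):
    """labeled_segments: list of (piece, state) pairs for one candidate split."""
    states = [s for _, s in labeled_segments]
    ok = all(b in ALLOWED[a] for a, b in zip(states, states[1:]))
    return [piece for piece, s in labeled_segments if s == "A"] if ok else []

candidate = [("東京", "A"), ("で", "C"), ("会議", "A"), ("が", "C"),
             ("開かれた", "A"), ("。", "B")]
print(acquire_words(candidate))   # -> ['東京', '会議', '開かれた']
```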

TagPlus: A Retrieval System using Synonym Tag in Folksonomy (TagPlus: 폭소노미에서 동의어 태그를 이용한 검색 시스템)

  • Lee, Sun-Sook; Yong, Hwan-Seung
    • Journal of Digital Contents Society / v.8 no.3 / pp.255-262 / 2007
  • Collaborative tagging describes the process by which many users add metadata in the form of keywords to shared content. Recently, collaborative tagging has grown in popularity on the web, on sites that allow users to tag bookmarks, photographs, videos, and other content. In this paper, we analyze the structure and basic concepts of collaborative tagging systems as well as their dynamic aspects. We also present a retrieval system, TagPlus, that uses synonym tags derived from the WordNet database. Specifically, TagPlus, a synonym-tag-based system, lets users retrieve images from the Flickr system. The proposed system shows images tagged not only with the tag that the user entered but also with its synonyms.
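A minimal sketch of the synonym-expansion step described above: a user tag is expanded with its WordNet synonyms before the image search, so photos tagged with any of the synonyms can also be retrieved. The NLTK WordNet interface is used for illustration; the Flickr query itself is only stubbed out as a hypothetical call.

```python
# Expand a user tag with WordNet synonyms before querying the image service.

from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def expand_tag(tag: str) -> set[str]:
    synonyms = {tag}
    for synset in wn.synsets(tag):
        for lemma in synset.lemma_names():
            synonyms.add(lemma.replace("_", " ").lower())
    return synonyms

query_tags = expand_tag("car")               # e.g. {'car', 'auto', 'automobile', ...}
# images = flickr_search(tags=query_tags)    # hypothetical call to the Flickr API
print(sorted(query_tags))
```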
