• Title/Summary/Keyword: word input (단어 입력)

Search Results: 431

A Synonym Handling Technique for an IT Company Information Retrieval System (IT 업체정보검색시스템에서 동의어 처리 기법)

  • 강옥선;이현철;조완섭
    • Proceedings of the Korea Society of Information Technology Applications Conference
    • /
    • 2001.05a
    • /
    • pp.105-106
    • /
    • 2001
  • Information retrieval is generally performed through index terms, so users must enter exactly the index terms attached to the information stored in the database. However, it is difficult for ordinary users to enter index terms precisely, especially when the field of interest uses specialized terminology. In such cases, a knowledge structure such as a thesaurus can be used to explore index terms and improve retrieval effectiveness. With the recent growth of research in information technology, the production of information resources has increased rapidly, and these resources are increasingly used as research information in related subject areas. A system that can manage information in the IT field is therefore urgently needed, as is research on managing the terminology used by retrieval systems in specialized fields such as IT. This paper proposes the architecture of a terminology management system for an IT company information retrieval system that resolves mismatches between terms arising during retrieval and represents hierarchical relations among terms to support query expansion, together with a retrieval algorithm for it. The proposed architecture improves retrieval effectiveness by identifying synonym relations and hierarchical relations such as broader and narrower terms for a user's query term and adding them to the search scope. The system is also implemented so that it can be extended dynamically when operations such as creating or deleting terms occur. To search the hierarchical structure among words efficiently, the system uses an object-relational database, and a memory-resident DBMS is employed to maintain fast retrieval performance even when many users access the system concurrently. The proposed method can be extended beyond information technology to terminology research in other specialized fields.
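The query-expansion idea described in the abstract can be illustrated with a minimal sketch. The in-memory dictionary and the expand_query helper below are hypothetical stand-ins for the paper's object-relational, memory-resident term database; only the relation types (synonym, broader, narrower) come from the abstract.

```python
# Minimal sketch of synonym/hierarchy-based query expansion.
# The toy thesaurus approximates the paper's object-relational term store.

THESAURUS = {
    "DBMS": {
        "synonyms": ["database management system"],
        "narrower": ["relational DBMS", "object-relational DBMS"],
        "broader": ["information system"],
    },
    "object-relational DBMS": {
        "synonyms": ["ORDBMS"],
        "narrower": [],
        "broader": ["DBMS"],
    },
}

def expand_query(term, include_broader=False):
    """Add synonyms and narrower (optionally broader) terms of the
    user's query term to the search scope."""
    entry = THESAURUS.get(term)
    if entry is None:
        return {term}
    expanded = {term, *entry["synonyms"], *entry["narrower"]}
    if include_broader:
        expanded.update(entry["broader"])
    return expanded

print(expand_query("DBMS"))
# {'DBMS', 'database management system', 'relational DBMS', 'object-relational DBMS'}
```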


Location Inference of Twitter Users using Timeline Data (타임라인데이터를 이용한 트위터 사용자의 거주 지역 유추방법)

  • Kang, Ae Tti;Kang, Young Ok
    • Spatial Information Research
    • /
    • v.23 no.2
    • /
    • pp.69-81
    • /
    • 2015
  • If the residential area of SNS users can be inferred by analyzing SNS big data, it can serve as an alternative for spatial big data research, which suffers from location sparsity and ecological error. In this study, we developed a way of using the daily-life activity patterns found in the timeline data of Twitter users to infer their residential areas. We recognized the daily-life activity pattern of Twitter users from their movement patterns and from the regional cognition words that appear in their tweets. The models based on user movement and on text are named the daily movement pattern model and the daily activity field model, respectively. We then selected the variables to be used in each model. We defined the dependent variable as 0 if the area from which a user mainly tweets is their home location (HL), and as 1 otherwise. According to our results, obtained by discriminant analysis, the hit ratios of the two models were 67.5% and 57.5%, respectively. We tested both models using the timeline data of stress-related tweets. As a result, we inferred the residential areas of 5,301 users out of 48,235 users and obtained 9,606 stress-related tweets with a residential area, about a 44-fold increase over the count of geo-tagged tweets. We believe the methodology used in this study can be used not only to secure more location data in SNS big data research, but also to link SNS big data with regional statistics in order to analyze regional phenomena.
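A small sketch of the two-class discriminant analysis described above, using scikit-learn. The feature names and toy values are hypothetical placeholders for the movement- and text-based variables the paper selects; only the 0/1 coding of the dependent variable and the hit-ratio evaluation follow the abstract.

```python
# Sketch: decide whether a user's most-tweeted area is their home location (0) or not (1).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: one row per user, e.g. [share of tweets in modal area,
#    night-time tweet ratio in that area, regional cognition word count]
X = np.array([
    [0.82, 0.71, 5.0],
    [0.35, 0.10, 0.0],
    [0.90, 0.65, 3.0],
    [0.40, 0.20, 1.0],
])
y = np.array([0, 1, 0, 1])   # 0: modal tweet area == home location, 1: otherwise

model = LinearDiscriminantAnalysis()
model.fit(X, y)

hit_ratio = model.score(X, y)   # analogous to the 67.5% / 57.5% hit ratios reported
print(f"hit ratio: {hit_ratio:.3f}")
```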

Korean Dependency Parsing Using Stack-Pointer Networks and Subtree Information (스택-포인터 네트워크와 부분 트리 정보를 이용한 한국어 의존 구문 분석)

  • Choi, Yong-Seok;Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.6
    • /
    • pp.235-242
    • /
    • 2021
  • In this work, we develop a Korean dependency parser based on a stack-pointer network that consists of a pointer network and an internal stack. The parser has an encoder and decoder and builds a dependency tree for an input sentence in a depth-first manner. The encoder of the parser encodes an input sentence, and the decoder selects a child for the word at the top of the stack at each step. Since the parser has the internal stack where a search path is stored, the parser can utilize information of previously derived subtrees when selecting a child node. Previous studies used only a grandparent and the most recently visited sibling without considering a subtree structure. In this paper, we introduce graph attention networks that can represent a previously derived subtree. Then we modify our parser based on the stack-pointer network to utilize subtree information produced by the graph attention networks. After training the dependency parser using Sejong and Everyone's corpus, we evaluate the parser's performance. Experimental results show that the proposed parser achieves better performance than the previous approaches at sentence-level accuracies when adopting 2-depth graph attention networks.
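The subtree representation described above can be sketched with a single-head graph-attention layer. This is only an illustration of attending over a partially derived subtree: the layer below is simplified (one head, tanh scoring), and its integration with the stack-pointer decoder and the 2-depth configuration used in the paper are not shown; all sizes are toy assumptions.

```python
# Minimal single-head graph-attention sketch for summarizing a derived subtree.
import torch
import torch.nn as nn

class TinyGraphAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)
        self.a = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h, adj):
        # h:   (n, dim) node states of words in the partially built subtree
        # adj: (n, n)   1 where an edge (head-dependent, plus self loops) exists
        z = self.W(h)
        n = z.size(0)
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = torch.tanh(self.a(pairs)).squeeze(-1)       # attention logits
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=-1)                # normalize over neighbors
        return alpha @ z                                # subtree-aware node states

# toy subtree with 3 nodes: node 0 heads nodes 1 and 2
h = torch.randn(3, 8)
adj = torch.tensor([[1, 1, 1],
                    [1, 1, 0],
                    [1, 0, 1]], dtype=torch.float)
subtree_states = TinyGraphAttention(8)(h, adj)   # would augment the stack-pointer decoder input
print(subtree_states.shape)                      # torch.Size([3, 8])
```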

Korean Morphological Analysis Method Based on BERT-Fused Transformer Model (BERT-Fused Transformer 모델에 기반한 한국어 형태소 분석 기법)

  • Lee, Changjae;Ra, Dongyul
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.4
    • /
    • pp.169-178
    • /
    • 2022
  • Morphemes are the most primitive units in a language, losing their original meaning when segmented into smaller parts. In Korean, a sentence is a sequence of eojeols (words) separated by spaces, and each eojeol comprises one or more morphemes. Korean morphological analysis (KMA) is the task of dividing the eojeols of a given Korean sentence into morpheme units and assigning appropriate part-of-speech (POS) tags to the resulting morphemes. KMA is one of the most important tasks in Korean natural language processing (NLP), and improving its performance is closely tied to improving the performance of other Korean NLP tasks. Recent research on KMA has begun to adopt the approach of machine translation (MT) models. MT converts a sequence (sentence) of units of one domain into a sequence (sentence) of units of another domain, and neural machine translation (NMT) refers to MT approaches that exploit neural network models. From the perspective of MT, KMA transforms an input sequence of units in the eojeol domain into a sequence of units in the morpheme domain. In this paper, we propose a deep learning model for KMA. The backbone of our model is the BERT-fused model, which was shown to achieve high performance on NMT. The BERT-fused model combines Transformer, a representative model employed in NMT, with BERT, a language representation model that has enabled significant advances in NLP. Experimental results show that our model achieves an F1-score of 98.24.
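The MT framing of KMA can be sketched as sequence transduction from eojeol-domain tokens to morpheme/POS-domain tokens. The sketch below uses a plain nn.Transformer as a stand-in; the actual BERT-fused model additionally feeds BERT representations into every encoder and decoder layer through extra attention, which is omitted here. Vocabularies, sizes, and the example sentence are toy assumptions.

```python
# Sketch: KMA as eojeol-to-morpheme/POS sequence transduction.
import torch
import torch.nn as nn

SRC_VOCAB = {"<pad>": 0, "나는": 1, "밥을": 2, "먹었다": 3}
TGT_VOCAB = {"<pad>": 0, "<s>": 1, "나/NP": 2, "는/JX": 3,
             "밥/NNG": 4, "을/JKO": 5, "먹/VV": 6, "었/EP": 7, "다/EF": 8}

class Eojeol2Morpheme(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        self.src_emb = nn.Embedding(len(SRC_VOCAB), d_model)
        self.tgt_emb = nn.Embedding(len(TGT_VOCAB), d_model)
        self.transformer = nn.Transformer(d_model=d_model, nhead=4,
                                           num_encoder_layers=2,
                                           num_decoder_layers=2,
                                           batch_first=True)
        self.out = nn.Linear(d_model, len(TGT_VOCAB))

    def forward(self, src_ids, tgt_ids):
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        hidden = self.transformer(self.src_emb(src_ids), self.tgt_emb(tgt_ids),
                                  tgt_mask=tgt_mask)
        return self.out(hidden)   # (batch, tgt_len, |TGT_VOCAB|) morpheme/POS logits

src = torch.tensor([[1, 2, 3]])                 # "나는 밥을 먹었다" as eojeols
tgt = torch.tensor([[1, 2, 3, 4, 5, 6, 7]])     # shifted morpheme/POS sequence
print(Eojeol2Morpheme()(src, tgt).shape)        # torch.Size([1, 7, 9])
```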

Exploiting Chunking for Dependency Parsing in Korean (한국어에서 의존 구문분석을 위한 구묶음의 활용)

  • Namgoong, Young;Kim, Jae-Hoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.7
    • /
    • pp.291-298
    • /
    • 2022
  • In this paper, we present a method for dependency parsing with chunking in Korean. Dependency parsing is the task of determining a governor for every word in a sentence. In Korean, the governor is usually determined syntactically, and the syntactic structure must then be transformed into a semantic structure for further processing such as semantic analysis. Deciding between the syntactic and the semantic governor is a notorious problem. For example, the syntactic governor of the word "먹고 (eat)" in the sentence "밥을 먹고 싶다 (would like to eat)" is "싶다 (would like to)", which is an auxiliary verb and therefore cannot be a semantic governor. To mitigate this problem somewhat, we propose Korean dependency parsing after chunking, the process of segmenting a sentence into constituents. A constituent is a word or a group of words that functions as a single unit within a dependency structure and is called a chunk in this paper. Compared to traditional dependency parsing, the proposed method has two advantages: (1) the number of input units in parsing is reduced, so parsing can be faster, and (2) parsing effectiveness can be improved by considering the relation between the head words of two chunks. Through experiments on the Sejong dependency corpus, we show that the UAS and LAS of the proposed method are 86.48% and 84.56%, respectively, and that the number of input units is reduced by about 22%p.
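For reference, the UAS/LAS figures quoted above are attachment scores over per-unit (head, label) predictions; after chunking, a "unit" is a chunk rather than a word. The helper and toy labels below are illustrative, not taken from the paper.

```python
# Minimal sketch of UAS/LAS computation over (head_index, dependency_label) pairs.
def attachment_scores(gold, pred):
    """gold, pred: lists of (head_index, dependency_label), one per input unit."""
    assert len(gold) == len(pred)
    uas_hits = sum(g[0] == p[0] for g, p in zip(gold, pred))   # head correct
    las_hits = sum(g == p for g, p in zip(gold, pred))         # head and label correct
    n = len(gold)
    return uas_hits / n, las_hits / n

# toy example: 4 units after chunking, one head error and one label error
gold = [(2, "NP_SBJ"), (3, "NP_OBJ"), (0, "VP"), (3, "VP_AUX")]
pred = [(2, "NP_SBJ"), (3, "NP_MOD"), (0, "VP"), (2, "VP_AUX")]
uas, las = attachment_scores(gold, pred)
print(f"UAS={uas:.2%} LAS={las:.2%}")   # UAS=75.00% LAS=50.00%
```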

A Study on Fine-Tuning and Transfer Learning to Construct Binary Sentiment Classification Model in Korean Text (한글 텍스트 감정 이진 분류 모델 생성을 위한 미세 조정과 전이학습에 관한 연구)

  • JongSoo Kim
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.28 no.5
    • /
    • pp.15-30
    • /
    • 2023
  • Recently, generative models based on the Transformer architecture, such as ChatGPT, have been gaining significant attention. The Transformer architecture has been applied to various neural network models, including Google's BERT (Bidirectional Encoder Representations from Transformers). In this paper, a method is proposed to create a binary text classification model that determines whether a comment on a Korean movie review is positive or negative. To accomplish this, a pre-trained multilingual BERT model is fine-tuned and transfer-learned using a new Korean training dataset. The pre-trained multilingual BERT-Base model covers 104 languages and has 12 layers, a hidden size of 768, 12 attention heads, and 110M parameters. To turn the pre-trained BERT-Base model into a text classification model, the input and output layers were fine-tuned, resulting in a new model with 178 million parameters. Using the fine-tuned model with a maximum input length of 128, a batch size of 16, and 5 epochs, transfer learning is conducted with 10,000 training samples and 5,000 test samples. A binary text sentiment classification model for Korean movie reviews with an accuracy of 0.9582, a loss of 0.1177, and an F1 score of 0.81 has been created. When transfer learning is performed with a dataset five times larger, a model with an accuracy of 0.9562, a loss of 0.1202, and an F1 score of 0.86 is obtained.
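A minimal sketch of the fine-tuning setup described above (multilingual BERT-Base, binary head, max length 128, batch size 16, 5 epochs). The abstract does not name a toolkit, so Hugging Face Transformers and the two toy review sentences are assumptions here; the real training uses thousands of labeled reviews.

```python
# Sketch: turn multilingual BERT-Base into a binary Korean sentiment classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)   # 0: negative, 1: positive

texts = ["정말 재미있는 영화였어요", "지루하고 돈이 아까웠다"]   # toy stand-ins for review data
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding="max_length", truncation=True,
                  max_length=128, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(5):                       # 5 epochs over this toy batch
    out = model(**batch, labels=labels)  # cross-entropy loss on the new classification head
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
print(float(out.loss))
```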

Robust Speech Recognition Algorithm of Voice Activated Powered Wheelchair for Severely Disabled Person (중증 장애우용 음성구동 휠체어를 위한 강인한 음성인식 알고리즘)

  • Suk, Soo-Young;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.6
    • /
    • pp.250-258
    • /
    • 2007
  • Current speech recognition technology has achieved high performance with the development of hardware devices, but it is still insufficient for applications where high reliability is required, such as voice control of powered wheelchairs for disabled persons. For a system that aims to operate a powered wheelchair safely by voice in real environments, non-voice input such as the user's coughing, breathing, and spark-like mechanical noise should be rejected, and the system must recognize speech commands affected by disability, which may involve atypical pronunciation speed and frequency. In this paper, we propose a non-voice rejection method that performs voice/non-voice classification in preprocessing using YIN-based fundamental frequency (F0) extraction together with a reliability measure. We adopt a multi-template dictionary and acoustic-model-based speaker adaptation to cope with the pronunciation variation of inarticulately uttered speech. In recognition tests conducted with data collected in a real environment, the proposed YIN-based fundamental frequency extraction showed a recall-precision rate of 95.1%, better than the 62% of the cepstrum-based method. A recognition test of the new system with the multi-template dictionary and MAP adaptation also showed much higher accuracy, 99.5%, than the 78.6% of the baseline system.
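The YIN-based voice/non-voice decision can be sketched as follows: compute the cumulative mean normalized difference function of a frame and treat it as voiced only if the minimum falls below a threshold, i.e., a reliable F0 exists. The threshold, frame settings, and the yin_voiced helper are illustrative assumptions, not the paper's exact preprocessing.

```python
# Sketch of YIN-style F0 extraction used for voice/non-voice classification.
import numpy as np

def yin_voiced(frame, sr, fmin=70, fmax=400, threshold=0.15):
    tau_min, tau_max = int(sr / fmax), int(sr / fmin)
    # difference function d(tau)
    d = np.array([np.sum((frame[:-tau] - frame[tau:]) ** 2)
                  for tau in range(1, tau_max + 1)])
    # cumulative mean normalized difference d'(tau)
    cmnd = d * np.arange(1, tau_max + 1) / np.maximum(np.cumsum(d), 1e-12)
    best_tau = tau_min + int(np.argmin(cmnd[tau_min - 1:]))
    is_voiced = cmnd[best_tau - 1] < threshold       # a clear periodicity dip => voice
    f0 = sr / best_tau if is_voiced else 0.0
    return bool(is_voiced), f0

sr = 16000
t = np.arange(int(0.04 * sr)) / sr
voiced_frame = np.sin(2 * np.pi * 150 * t)                    # 150 Hz tone: accepted as voice
noise_frame = np.random.default_rng(0).normal(size=t.size)    # noise: rejected as non-voice
print(yin_voiced(voiced_frame, sr))
print(yin_voiced(noise_frame, sr))
```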

A study on the search and selection processes of targets presented on the CRT display (컴퓨터 모니터에 제시된 표적의 탐색과 선택과정에 관한 연구)

  • 이재식;신현정;도경수
    • Korean Journal of Cognitive Science
    • /
    • v.11 no.2
    • /
    • pp.37-51
    • /
    • 2000
  • The present study compared computer users' target-selection response patterns when the targets varied in their relative location and distance from the current position of the cursor. In Experiment 1, where a mouse was used as the input device, the effects of different directions and distances of a simple target (a small rectangle) on target-selection responses were investigated. The results of Experiment 1 can be summarized as follows: (1) overshooting was more frequent than either undershooting or correct movement, and (2) this tendency was more prominent when targets were presented in an oblique direction or farther from the current cursor position. (3) Although overshooting and undershooting were more frequent in the oblique directions, the degree of deviation was larger in the horizontal and vertical directions. (4) Time spent moving the mouse, rather than time spent planning, calibrating, or clicking, was the most critical factor in determining total response time. In Experiment 2, the effects of the font size and line height of the target on target-selection responses were compared across two input devices (keyboard vs. mouse). The results are as follows: (1) the mouse generally yielded shorter target-selection times than the keyboard, but this tendency was reversed when targets were presented in the horizontal and vertical directions. (2) In general, target-selection time was longest with a font size of 10 and a line height of 100%, and shortest with a font size of 12 and a line height of 150%. (3) When the keyboard was used as the input device, target-selection time was shortest in the 150% line-height condition, whereas with the mouse, target-selection time tended to increase as line height increased, resulting in a significant interaction between input device and line height. Finally, several issues relating to human-computer interaction are discussed based on these results.


Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.59-83
    • /
    • 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep-learning-based sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being entered into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These vectors have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions arise here. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean, which has a high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize them as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with regard to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we reach a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To approach these research questions, we generate various types of morpheme vectors reflecting the research questions and then compare classification accuracy using a non-static CNN (convolutional neural network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive the morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million cosmetics product reviews from Naver Shopping and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ along three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of data preprocessing, namely only splitting sentences versus additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: either the morphemes themselves or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived with the CBOW (continuous bag-of-words) model with a context window of 5 and a vector dimension of 300. It appears that using same-domain text even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags including the incomprehensible category lead to better classification accuracy. The POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for a morpheme to be included do not seem to have any definite influence on classification accuracy.
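A minimal sketch of deriving morpheme vectors with CBOW (window 5, dimension 300) from morpheme-segmented review sentences, with POS tags attached, as described above. gensim's Word2Vec is assumed as the word vector toolkit, and the three toy sentences stand in for the roughly 2 million reviews and 520,000 news articles used in the paper.

```python
# Sketch: morpheme vectors via CBOW for use as CNN input.
from gensim.models import Word2Vec

# each sentence is a list of morphemes (here with POS tags attached)
morpheme_sentences = [
    ["배송/NNG", "이/JKS", "빠르/VA", "고/EC", "좋/VA", "아요/EF"],
    ["색상/NNG", "이/JKS", "예쁘/VA", "고/EC", "촉촉하/VA", "네요/EF"],
    ["향/NNG", "이/JKS", "별로/MAG", "이/VCP", "에요/EF"],
]

model = Word2Vec(
    sentences=morpheme_sentences,
    vector_size=300,   # vector dimension used in the paper
    window=5,          # context window used in the paper
    sg=0,              # 0 = CBOW
    min_count=1,       # minimum frequency; the paper varies this criterion
)

vec = model.wv["예쁘/VA"]   # 300-dim morpheme vector, ready to feed a non-static CNN
print(vec.shape)            # (300,)
```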

Web-based Text-To-Sign Language Translating System (웹기반 청각장애인용 수화 웹페이지 제작 시스템)

  • Park, Sung-Wook;Wang, Bo-Hyeun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.3
    • /
    • pp.265-270
    • /
    • 2014
  • Hearing-impaired people have difficulty hearing, so it is also hard for them to learn letters that represent sound and text that conveys complex and abstract concepts. It has therefore been a natural choice for hearing-impaired people to communicate in sign language, which employs facial expressions and hand and body motion. However, the major communication methods in daily life are text and speech, which are big obstacles for hearing-impaired people in accessing information, learning, carrying out intellectual activities, and getting jobs. As delivering information via the internet becomes common, hearing-impaired people experience even more difficulty in accessing information, since the internet represents information mostly in text form. This intensifies the imbalance in information accessibility. This paper reports a web-based text-to-sign-language translating system that helps web designers use sign language in web page design. Since the system is web-based, web designers can use it with nothing more than a common computing environment for internet browsing. The system uses a bulletin board as its user interface. When web designers write paragraphs and post them through the bulletin board to the translating server, the server translates the incoming text into sign language, animates it with a 3D avatar, and records the animation in an MP4 file. The file addresses are fetched by the bulletin board, enabling web designers to embed the translated sign-language files into their web pages using HTML5 or JavaScript. We also analyzed the text used on public service web pages, identified new words for the translating system, and added them to improve translation. This addition is expected to encourage wide and easy acceptance of public service web pages by hearing-impaired people.
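The bulletin-board-to-video flow described above can be sketched as a small web endpoint: posted text goes to the server, a translation-and-rendering step produces an MP4 clip, and the board receives back a URL it can embed with an HTML5 video tag. Flask and the translate_and_render helper are assumptions for illustration, not the paper's actual implementation.

```python
# Sketch of a text-to-sign translation endpoint behind the bulletin board UI.
from flask import Flask, request, jsonify

app = Flask(__name__)

def translate_and_render(text: str) -> str:
    """Hypothetical stand-in for the sign-language translation and 3D avatar
    rendering pipeline; returns the URL of the recorded MP4 clip."""
    video_id = abs(hash(text)) % 10_000
    # ... translate text to sign-language gloss, animate the avatar, encode MP4 ...
    return f"/static/sign/{video_id}.mp4"

@app.post("/translate")
def translate():
    text = request.get_json()["text"]   # paragraph posted from the bulletin board
    return jsonify({"video_url": translate_and_render(text)})

# A web page can then embed the result, e.g.:
# <video src="/static/sign/1234.mp4" controls></video>
if __name__ == "__main__":
    app.run()
```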