• Title/Summary/Keyword: word dictionary

Search Results: 277

An Improved Homonym Disambiguation Model based on Bayes Theory (Bayes 정리에 기반한 개선된 동형이의어 분별 모델)

  • 김창환;이왕우
    • Journal of the Korea Computer Industry Society
    • /
    • v.2 no.12
    • /
    • pp.1581-1590
    • /
    • 2001
  • This paper presents a more advanced WSD (word sense disambiguation) model than J. Hur (2000)'s WSD model, proposing an improved statistical homonym disambiguation model based on Bayes' theorem. The model uses semantic information (co-occurrence data) obtained from the definitions of the POS-tagged UMRD-S (Ulsan University Machine Readable Dictionary, Semantic Tagged). We extracted nouns, predicates, and adverbs from the definitions in the Korean dictionary as semantic features of the context. In this research, we evaluated the accuracy of the WSD system on nine major homonym nouns and, additionally, seven new homonym predicates. The internal experiment showed an average accuracy of 98.32% for the nine homonym nouns and 99.53% for the seven homonym predicates. In addition, we ran tests on the Korean Information Base and ETRI's POS-tagged corpus. This external experiment showed an average accuracy of 84.42% for the nine nouns over unsupervised learning sentences from the Korean Information Base and the ETRI corpus, and a 70.81% accuracy rate for the seven predicates on the Sejong Project phrase-tagged corpus (3.5 million phrases).

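The Bayes-based disambiguation described in the abstract can be sketched as a small naive-Bayes sense classifier over co-occurrence features. The sense labels, context words, and counts below are invented for illustration and are not UMRD-S data; this is a minimal sketch of the statistical idea, not the paper's exact model.

```python
import math
from collections import Counter

# Hypothetical sense-tagged co-occurrence data: sense -> context words
# (stands in for features extracted from dictionary definitions).
TRAINING = {
    "bae_ship": ["harbor", "sail", "cargo", "sea", "sail"],
    "bae_pear": ["fruit", "sweet", "tree", "juice", "fruit"],
}

def train(data):
    priors, likelihoods, vocab = {}, {}, set()
    total = sum(len(words) for words in data.values())
    for sense, words in data.items():
        priors[sense] = len(words) / total
        likelihoods[sense] = Counter(words)
        vocab |= set(words)
    return priors, likelihoods, vocab

def disambiguate(context, priors, likelihoods, vocab):
    best, best_score = None, float("-inf")
    for sense in priors:
        counts = likelihoods[sense]
        n = sum(counts.values())
        # log P(sense) + sum of log P(word | sense), Laplace-smoothed
        score = math.log(priors[sense])
        for w in context:
            score += math.log((counts[w] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = sense, score
    return best

priors, likes, vocab = train(TRAINING)
print(disambiguate(["cargo", "sea"], priors, likes, vocab))  # bae_ship
```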

Design and Implementation of a Tool Server and License Server for REL/RDD Processing based on the MPEG-21 Framework (MPEG-21 프레임워크 기반의 REL/RDD 처리를 위한 라이센스 서버와 툴 서버의 설계 및 구현)

  • Hong, Hyun-Woo;Ryu, Kwang-Hee;Kim, Kwang-Yong;Kim, Jae-Gon;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.2
    • /
    • pp.623-626
    • /
    • 2005
  • Digital content development techniques have still not been standardized, which causes problems in the creation, distribution, and consumption of digital content. To solve this problem, MPEG proposed the MPEG-21 framework. In the standard, IPMP takes charge of the protection and management of digital content, alongside the rights expression language REL and RDD, the dictionary defining the terms of REL. However, the study of IPMP lags behind that of REL, RDD, and the other parts of the MPEG-21 standard, so there are few systems based on REL and RDD. In this paper, in order to manage and protect content rights in accordance with the latest standard, we designed and implemented a Tool Server and a License Server based on REL/RDD.


A Technique for Product Effect Analysis Using Online Customer Reviews (온라인 고객 리뷰를 활용한 제품 효과 분석 기법)

  • Lim, Young Seo;Lee, So Yeong;Lee, Ji Na;Ryu, Bo Kyung;Kim, Hyon Hee
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.9
    • /
    • pp.259-266
    • /
    • 2020
  • In this paper, we propose a novel scheme for product effect analysis, termed PEM, to find out the effectiveness of products used for improving one's current condition, such as health supplements and cosmetics, by utilizing online customer reviews. The proposed technique preprocesses online customer reviews to remove advertisements automatically, constructs a word dictionary composed of symptoms, effects, increases, and decreases, and measures products' effects from the reviews. Using Naver Shopping review datasets collected through crawling, we evaluated the performance of PEM against two methods using a traditional sentiment dictionary and an RNN model, respectively. Our experimental results show that the proposed technique outperforms the other two methods. In addition, by applying the proposed technique to online customer reviews concerning atopic dermatitis and acne, effective treatments for them were found in online social media. The proposed product effect analysis technique can be applied to various products and social media, because it can score the effect of products from reviews in various media, including blogs.
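The dictionary-based scoring idea above can be sketched as follows: a review mentioning a symptom near a decrease word (or a desired effect near an increase word) counts as positive evidence for the product. All dictionary entries, the window size, and the scoring rule are illustrative assumptions, not the paper's actual PEM.

```python
# Hypothetical four-part word dictionary (symptoms, effects, increases, decreases).
WORD_DICT = {
    "symptom": {"itching", "acne", "redness"},
    "effect": {"moisture", "soothing"},
    "increase": {"improved", "increased"},
    "decrease": {"reduced", "disappeared"},
}

def effect_score(review_tokens):
    score = 0
    for i, tok in enumerate(review_tokens):
        window = review_tokens[max(0, i - 3):i + 4]  # small context window
        # A symptom going down is evidence the product helped.
        if tok in WORD_DICT["symptom"] and any(w in WORD_DICT["decrease"] for w in window):
            score += 1
        # A desirable effect going up is also positive evidence.
        if tok in WORD_DICT["effect"] and any(w in WORD_DICT["increase"] for w in window):
            score += 1
    return score

print(effect_score("my acne reduced a lot and moisture improved".split()))  # 2
```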

Disambiguation of Homograph Suffixes using Lexical Semantic Network(U-WIN) (어휘의미망(U-WIN)을 이용한 동형이의어 접미사의 의미 중의성 해소)

  • Bae, Young-Jun;Ock, Cheol-Young
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.1
    • /
    • pp.31-42
    • /
    • 2012
  • In order to process suffix-derived nouns in Korean, most Korean processing systems register the suffix-derived nouns in a dictionary. However, this approach is limited because suffixes are highly productive, so it is necessary to semantically analyze unregistered suffix-derived nouns. In this paper, we propose a method to disambiguate homograph suffixes using the Korean lexical semantic network (U-WIN), for the purpose of semantic analysis of suffix-derived nouns. 33,104 suffix-derived nouns, including homograph suffixes, in the morphologically and semantically tagged Sejong Corpus were used for the experiments. First, we semantically tagged the homograph suffixes, extracted the roots of the suffix-derived nouns, and mapped the roots to nodes in U-WIN. Then we assigned a distance weight to the U-WIN nodes that each homograph suffix can combine with, and used this weight to disambiguate the homograph suffixes. Experiments on the 35 homograph suffixes occurring in the Sejong Corpus, out of the 49 homograph suffixes in a Korean dictionary, resulted in 91.01% accuracy.
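The distance-weighting idea can be sketched on a toy hierarchy: each sense of a homograph suffix has anchor nodes it can combine with, and the sense whose anchor is closest to the root noun's node (so with the largest distance weight) wins. The graph, anchors, and weight 1/(1+d) below are illustrative assumptions, not actual U-WIN data.

```python
from collections import deque

# Toy lexical-network fragment; edges are hypernym links between nodes.
EDGES = {
    "entity": ["person", "artifact"],
    "person": ["teacher", "doctor"],
    "artifact": ["tool", "building"],
}

def distance(a, b):
    # BFS over the undirected hypernym graph.
    adj = {}
    for parent, children in EDGES.items():
        for c in children:
            adj.setdefault(parent, set()).add(c)
            adj.setdefault(c, set()).add(parent)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

# Hypothetical anchors: the node each sense of a suffix combines with.
SENSE_ANCHORS = {"sa_person": "person", "sa_thing": "artifact"}

def pick_sense(root_node):
    # Closer anchor -> larger weight 1/(1+d), standing in for the
    # paper's distance weighting.
    return max(SENSE_ANCHORS,
               key=lambda s: 1 / (1 + distance(root_node, SENSE_ANCHORS[s])))

print(pick_sense("teacher"))  # sa_person
```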

Determination of Intrusion Log Ranking using Inductive Inference (귀납 추리를 이용한 침입 흔적 로그 순위 결정)

  • Ko, Sujeong
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.1
    • /
    • pp.1-8
    • /
    • 2019
  • Among the methods for extracting the most appropriate information from a large amount of log data, there is a method using inductive inference. In this paper, we use an SVM (Support Vector Machine), an excellent classification method for inductive inference, to determine the ranking of intrusion logs in digital forensic analysis. For this purpose, the logs of the training set are classified into intrusion logs and normal logs. Associated words are extracted from each classified set to generate a related-word dictionary, and each log is expressed as a vector based on the generated dictionary. Next, the logs are learned using the SVM. We classify test logs into normal logs and intrusion logs using the model obtained through learning. Finally, the recommendation order of the intrusion logs is determined, so that intrusion logs can be recommended to the forensic analyst.
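The vectorization step described above can be sketched as follows: build a word dictionary from the labeled logs, then represent each log as a binary vector over that dictionary (in the paper, these vectors are then fed to an SVM, omitted here). The sample log lines are invented for illustration.

```python
def build_dictionary(logs):
    # Related-word dictionary: every word seen in the training logs,
    # mapped to a fixed vector index.
    vocab = sorted({w for log in logs for w in log.split()})
    return {w: i for i, w in enumerate(vocab)}

def vectorize(log, word_index):
    # Binary bag-of-words vector over the dictionary.
    vec = [0] * len(word_index)
    for w in log.split():
        if w in word_index:
            vec[word_index[w]] = 1
    return vec

train_logs = [
    "failed login root attempt",   # hypothetical intrusion log
    "session opened for user",     # hypothetical normal log
]
idx = build_dictionary(train_logs)
print(vectorize("failed root login", idx))
```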

BEHIND CHICKEN RATINGS: An Exploratory Analysis of Yogiyo Reviews Through Text Mining (치킨 리뷰의 이면: 텍스트 마이닝을 통한 리뷰의 탐색적 분석을 중심으로)

  • Kim, Jungyeom;Choi, Eunsol;Yoon, Soohyun;Lee, Youbeen;Kim, Dongwhan
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.11
    • /
    • pp.30-40
    • /
    • 2021
  • Ratings and reviews, despite their growing influence on restaurants' sales and reputation, entail a few limitations due to the burgeoning volume of reviews and inaccuracies in rating systems. This study explores the texts in reviews and ratings of a delivery application and discovers ways to elevate review credibility and usefulness. Through a text mining method, we concluded that the delivery application 'Yogiyo' has (1) a five-star-oriented rating distribution, (2) a strong positive correlation among rating factors (taste, quantity, and delivery), and (3) distinct part-of-speech and morpheme proportions depending on review polarity. We created a chicken-specialized negative word dictionary with four main topics and 20 sub-topic classifications after extracting a total of 367 negative words. We provide insights on how research on delivery app reviews should progress, centered on fried chicken reviews.

Exploring the Effects of Corporate Organizational Culture on Financial Performance: Using Text Analysis and Panel Data Approach (기업의 조직문화가 재무성과에 미치는 영향에 대한 연구: 텍스트 분석과 패널 데이터 방법을 이용하여)

  • Hansol Kim;Hyemin Kim;Seung Ik Baek
    • Information Systems Review
    • /
    • v.26 no.1
    • /
    • pp.269-288
    • /
    • 2024
  • The main objective of this study is to empirically explore how organizational culture influences the financial performance of companies. To achieve this, 58 companies included in the KOSPI 200 were selected from an online job platform in South Korea, JobPlanet. To understand the organizational culture of these companies, data was collected and analyzed from 81,067 reviews written by current and former members of these companies on JobPlanet over a period of 9 years, from 2014 to 2022. To define the organizational culture of each company based on the review data, this study utilized well-known text analysis techniques, namely the Word2Vec and FastText analysis methods. By modifying, supplementing, and extending the keywords associated with the five organizational culture values (Innovation, Integrity, Quality, Respect, and Teamwork) defined by Guiso et al. (2015), this study created a new culture dictionary. Using this dictionary, the study explored which cultural-value keywords appear most often in the review data of each company, revealing the relative strength of specific cultural values within companies. Going a step further, the study also investigated which cultural values have a statistically significant impact on financial performance. The results indicated that organizational cultures focusing on innovation and creativity (Innovation) and on customers and the market (Quality) positively influenced Tobin's Q, an indicator of a company's future value and growth. For the profitability indicator, ROA, only the organizational culture emphasizing customers and the market (Quality) showed a statistically significant impact. This study distinguishes itself from traditional survey- and case-analysis-based research on organizational culture by analyzing large-scale text data to explore organizational culture.
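The dictionary-expansion step can be sketched as nearest-neighbour lookup in an embedding space, as the study does with Word2Vec/FastText: seed keywords for a culture value are extended with their most similar words. The toy 2-d vectors below are made up for illustration; a real run would train embeddings on the review corpus.

```python
import math

# Hypothetical word embeddings (a real model would learn these).
VECTORS = {
    "innovation": (0.9, 0.1),
    "creative":   (0.85, 0.2),
    "novel":      (0.8, 0.15),
    "teamwork":   (0.1, 0.9),
    "salary":     (0.5, 0.5),
}

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def expand(seed, k=2):
    # k nearest neighbours of the seed keyword by cosine similarity.
    sims = [(w, cos(VECTORS[seed], vec)) for w, vec in VECTORS.items() if w != seed]
    sims.sort(key=lambda p: p[1], reverse=True)
    return [w for w, _ in sims[:k]]

print(expand("innovation"))  # ['novel', 'creative']
```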

Artificial Intelligence Algorithms, Model-Based Social Data Collection and Content Exploration (소셜데이터 분석 및 인공지능 알고리즘 기반 범죄 수사 기법 연구)

  • An, Dong-Uk;Leem, Choon Seong
    • The Journal of Bigdata
    • /
    • v.4 no.2
    • /
    • pp.23-34
    • /
    • 2019
  • Recently, crime that utilizes digital platforms has been continuously increasing: about 140,000 cases occurred in 2015 and about 150,000 cases in 2016. It is therefore considered that there is a limit to handling these online crimes with old-fashioned investigation techniques. Investigators' manual online searches and the cognitive investigation methods that are broadly used today are not enough to proactively cope with rapidly changing civil crimes. In addition, the characteristics of content posted to unspecified users of social media make investigations more difficult. This study suggests site-based collection and Open API collection among web content collection methods, considering the characteristics of the online media where infringement crimes occur. Since illegal content is published and deleted quickly, and new words and altered spellings are generated quickly and in great variety, it is difficult to recognize them promptly with manually registered, dictionary-based morphological analysis. To solve this problem, we propose adding a WPM (Word Piece Model) tokenization step, a data preprocessing method, to the existing dictionary-based morphological analysis for quickly recognizing and responding to illegal content posted in online infringement crimes. In the analysis of the data, the optimal precision is verified through a vote-based ensemble of supervised classification models for the investigation of illegal content. This study utilizes a classification model centered on illegal multi-level marketing cases to proactively recognize crimes invading the public economy, and presents an empirical study to effectively deal with social data collection and content investigation.

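The WPM step described above can be sketched as greedy longest-match WordPiece-style tokenization: an unseen compound or altered spelling still decomposes into known subword units instead of failing dictionary lookup. The tiny vocabulary below is a toy assumption; a real WPM learns its vocabulary from the corpus.

```python
# Toy subword vocabulary; "##" marks a non-initial piece.
VOCAB = {"다단", "##계", "판매", "##왕", "불법"}

def wordpiece(word):
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        # Try the longest substring first, shrinking until a piece matches.
        while end > start:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece
            if piece in VOCAB:
                pieces.append(piece)
                break
            end -= 1
        else:
            return ["[UNK]"]  # no known piece covers this position
        start = end
    return pieces

print(wordpiece("다단계"))  # ['다단', '##계']
```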

A Study of the Automatic Extraction of Hypernyms and Hyponyms from the Corpus (코퍼스를 이용한 상하위어 추출 연구)

  • Pang, Chan-Seong;Lee, Hae-Yun
    • Korean Journal of Cognitive Science
    • /
    • v.19 no.2
    • /
    • pp.143-161
    • /
    • 2008
  • The goal of this paper is to extract hyponymy relations between words in a corpus. Adopting the basic algorithm of Hearst (1992), I propose a method of pattern-based extraction of semantic relations from the corpus. To this end, I set up a list of hypernym-hyponym pairs from the Sejong Electronic Dictionary, supplemented with the superordinate-subordinate terms of CoreNet. Then I extracted from the corpus all the sentences that include hypernym-hyponym pairs from the list, and from these collected all the sentences containing meaningful constructions that occur systematically in the corpus. As a result, we could obtain 21 generalized patterns. Using a Perl program, we collected sentences for each of the 21 patterns; 57% of these sentences turned out to have a hyponymy relation. The method proposed in this paper is simpler and more advanced than that of Cederberg and Widdows (2003), in that using a wordnet or an electronic dictionary is generally considered efficient for information retrieval. The patterns extracted by this method are helpful when looking for appropriate documents during information retrieval, and they can be used to expand concept networks such as ontologies and thesauruses. However, the word order of Korean is relatively free, and it is difficult to capture its various expressions with a fixed pattern. In the future, we should investigate more semantic relations beyond hyponymy, so that we can extract more varied patterns from the corpus.

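The pattern-based extraction above can be sketched with a single Hearst-style pattern ("X such as Y"), shown here in English as a minimal stand-in for the paper's 21 Korean patterns; the sentences and the regex are illustrative, not the paper's actual pattern set.

```python
import re

# One Hearst pattern: plural hypernym followed by "such as" and a hyponym.
PATTERN = re.compile(r"(\w+)s such as (\w+)")

def extract_pairs(text):
    # Returns (hypernym, hyponym) pairs matched by the pattern.
    return [(m.group(1), m.group(2)) for m in PATTERN.finditer(text)]

print(extract_pairs("He studies fruits such as apples and grains such as rice."))
```

A production version would run many such patterns over every corpus sentence and keep the pairs that recur systematically, as the paper does with its 21 generalized patterns.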

Unsupervised Noun Sense Disambiguation using Local Context and Co-occurrence (국소 문맥과 공기 정보를 이용한 비교사 학습 방식의 명사 의미 중의성 해소)

  • Lee, Seung-Woo;Lee, Geun-Bae
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.7
    • /
    • pp.769-783
    • /
    • 2000
  • In this paper, in order to disambiguate Korean noun word senses, we define a local context and explain how to extract it from a raw corpus. Following the intuition that two different nouns are likely to have similar meanings if they occur in the same local context, we use as a clue any word that occurs in the same local context as the target noun. This method increases the usability of the extracted knowledge and makes it possible to disambiguate the senses of infrequent words, and we can overcome the data sparseness problem by extending the verbs in a local context. The sense of a target noun is decided by its maximum similarity to the previously learned clues. The similarity between two words is computed from their concept distance in a sense hierarchy borrowed from WordNet. By gradually reducing the multiplicity of clues in the process of computing the maximum similarity, we can speed up subsequent calculations. When a target noun has more than two local contexts, we assign a weight according to the type of each local context, to reflect the differences in the strength of the semantic restriction of local contexts. As another knowledge source, we obtain co-occurrence information from dictionary definitions and example sentences for the target noun; this supports the local contexts and helps to select the most appropriate sense of the target noun. Through experiments using the proposed method, we discovered that the applicability of local contexts is very high and that the co-occurrence information can supplement the local contexts to improve precision. In spite of the high multiplicity of the target nouns used in our experiments, we achieve higher performance (89.8%) than supervised methods that use a sense-tagged corpus.

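The concept-distance similarity used above can be sketched on a toy WordNet-like hierarchy: the distance between two words is the number of edges to their lowest common ancestor, and similarity falls off with distance. The hierarchy and the 1/(1+d) similarity are illustrative assumptions, not the paper's exact measure.

```python
# Toy sense hierarchy as child -> parent links.
PARENT = {
    "animal": "entity", "plant": "entity",
    "dog": "animal", "cat": "animal", "oak": "plant",
}

def ancestors(node):
    # Chain from the node up to the root, including the node itself.
    chain = [node]
    while node in PARENT:
        node = PARENT[node]
        chain.append(node)
    return chain

def concept_distance(a, b):
    pa, pb = ancestors(a), ancestors(b)
    common = next(n for n in pa if n in pb)     # lowest common ancestor
    return pa.index(common) + pb.index(common)  # edges up from each side

def similarity(a, b):
    return 1 / (1 + concept_distance(a, b))

print(similarity("dog", "cat"), similarity("dog", "oak"))
```

A clue sharing a nearby ancestor with the target noun ("dog"/"cat" under "animal") thus scores higher than one whose common ancestor is the root ("dog"/"oak" under "entity").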