• Title/Summary/Keyword: 유사어 처리 (similar-word processing)

Search Results: 191

A Leveling and Similarity Measure using Extended AHP of Fuzzy Term in Information System (정보시스템에서 퍼지용어의 확장된 AHP를 사용한 레벨화와 유사성 측정)

  • Ryu, Kyung-Hyun; Chung, Hwan-Mook
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.2 / pp.212-217 / 2009
  • There are rule-based learning methods, statistics-based learning methods, and others for learning hierarchical relations between domain terms. In this paper, we propose leveling and similarity measurement of fuzzy terms in an information system using an extended AHP. In the proposed method, we extract fuzzy terms from documents, organize an ontology structure for them, and rank the priority of the fuzzy terms with the extended AHP according to each term's specificity. The extended AHP integrates multiple decision-makers' weights and the relative importance of the fuzzy terms. We then compute the semantic similarity of fuzzy terms using the min operation on fuzzy sets, Dice's coefficient, and a combined Min+Dice method, and determine the final alternative fuzzy term. Comparing the three similarity measures, we find that the proposed method gives more definite classification performance than conventional methods and can be applied in the natural language processing field.
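
The abstract above names three similarity measures (the min operation on fuzzy sets, Dice's coefficient, and a Min+Dice combination) without giving formulas. The sketch below is only an illustration of that kind of computation, with hypothetical membership values and a simple average as the assumed Min+Dice combination; it is not the paper's exact method.

```python
# Hedged sketch: fuzzy-set similarity between two terms, each represented by
# membership degrees over shared features. The membership values and the way
# Min and Dice are combined are assumptions, not the paper's formulation.

def min_similarity(a: dict, b: dict) -> float:
    """Fuzzy min-intersection similarity: |A ∩ B| / min(|A|, |B|)."""
    features = set(a) | set(b)
    inter = sum(min(a.get(f, 0.0), b.get(f, 0.0)) for f in features)
    return inter / min(sum(a.values()), sum(b.values()))

def dice_similarity(a: dict, b: dict) -> float:
    """Dice's coefficient: 2|A ∩ B| / (|A| + |B|), with fuzzy cardinalities."""
    features = set(a) | set(b)
    inter = sum(min(a.get(f, 0.0), b.get(f, 0.0)) for f in features)
    return 2.0 * inter / (sum(a.values()) + sum(b.values()))

def min_plus_dice(a: dict, b: dict) -> float:
    """One plausible Min+Dice combination: the average of the two scores."""
    return 0.5 * (min_similarity(a, b) + dice_similarity(a, b))

# Hypothetical membership degrees of two fuzzy terms over shared features.
term_tall = {"height": 0.9, "size": 0.6, "age": 0.1}
term_big  = {"height": 0.5, "size": 0.9, "weight": 0.4}

print(min_similarity(term_tall, term_big))   # fuzzy min-intersection score
print(dice_similarity(term_tall, term_big))  # Dice's coefficient
print(min_plus_dice(term_tall, term_big))    # combined score
```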

Construction of Ontology for River GeoSpatial Information (하천공간정보의 온톨로지 구축방안 연구)

  • Shin, Hyung Jin; Shin, Seung Hee; Hwang, Eui Ho; Chae, Hyo Sok
    • Proceedings of the Korea Water Resources Association Conference / 2015.05a / pp.627-627 / 2015
  • Existing water-related systems each have their own DB structures, and their search services access the system's own DB directly and present results to the user. The drawback of such services is that users cannot easily use them without knowledge of each individual system's service. To move beyond the concept of individual services on individual systems, we intend to register the river geospatial data search information held in water-related systems on a catalog server and let users search the information registered there. When registering information about data on the catalog server, the question arises of how that information should be described. If each server registers terms independently, the same concept may be registered under different terms because of differences in terminology and practice; for example, rainfall data might be registered as "강우", "Precipitation", "Rainfall", or "비". In that case, even though the data actually exist, they become hard to find depending on how they were registered. To control this situation, a controlled vocabulary is introduced: the portal operator defines the concepts and the classification scheme of terms in advance and sets the search terms for registered data, so that data owners consult the controlled vocabulary when registering data or, for data whose terms are not yet registered, register the new terms with the portal. To avoid a proliferation of search terms, new registrations by users need to be moderated to some degree by the portal operator. Establishing a controlled vocabulary and a river-related classification scheme is essential for a river geospatial information search portal. The main purpose of the controlled vocabulary is to overcome heterogeneity. The types of heterogeneity are syntactic heterogeneity, heterogeneity of data format and structure, and semantic (contextual) heterogeneity, of which semantic heterogeneity is the broadest and most difficult problem: units are named differently in different fields and differ by the standards adopted, and even technical synonyms differ by field. A policy is also needed for how to handle Korean and English when encoding services in Korea. The CUAHSI/HIS ontology for hydrological time-series data classifies concepts at the top level into physical, chemical, and biological domains. To build an ontology of river geospatial information, we plan to proceed through data analysis and classification, definition of ontology elements, creation of ontology data tables, class creation and hierarchization, attribute definition according to the class hierarchy, insertion of individuals appropriate to each class, and verification and correction of logical relations.

  • PDF
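
The registration problem described in the abstract above (the same rainfall data registered as "강우", "Precipitation", "Rainfall", or "비") is essentially a term-normalization problem. The sketch below is not from the paper; it only illustrates, with hypothetical vocabulary entries and concept IDs, how a catalog server might map a registered term onto a controlled vocabulary and flag unknown terms for operator review.

```python
# Hedged sketch of a controlled-vocabulary lookup at catalog registration time.
# The vocabulary entries and concept IDs are hypothetical examples, not the
# actual vocabulary proposed for the river geospatial information portal.
from typing import Optional

CONTROLLED_VOCABULARY = {
    "precipitation": {"강우", "precipitation", "rainfall", "비"},
    "water_level":   {"수위", "water level", "stage"},
}

def normalize_term(term: str) -> Optional[str]:
    """Return the canonical concept for a registered term, or None if unknown."""
    t = term.strip().lower()
    for concept, synonyms in CONTROLLED_VOCABULARY.items():
        if t in synonyms:
            return concept
    return None  # unknown term: needs operator review before registration

for raw in ["강우", "Rainfall", "하천유량"]:
    concept = normalize_term(raw)
    if concept is None:
        print(f"'{raw}' is not in the controlled vocabulary; queue for review")
    else:
        print(f"'{raw}' -> registered under concept '{concept}'")
```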

A Question Example Generation System for Multiple Choice Tests by utilizing Concept Similarity in Korean WordNet (한국어 워드넷에서의 개념 유사도를 활용한 선택형 문항 생성 시스템)

  • Kim, Young-Bum; Kim, Yu-Seop
    • The KIPS Transactions:PartA / v.15A no.2 / pp.125-134 / 2008
  • We implemented a system capable of suggesting example sentences for multiple choice tests while considering the level of the students. To build the system, we designed an automatic sentence generation method that makes it possible to control the difficulty of the questions. Proper evaluation with multiple choice tests requires a question pool of adequate size; to satisfy this requirement, a system is needed that can generate many varied questions and their example sentences quickly. In this paper, we designed an automatic generation method using a linguistic resource, WordNet. For the automatic generation, we first extract keywords from existing sentences by morphological analysis, and candidate terms with meanings similar to the keywords are then suggested from the Korean WordNet space. When suggesting candidate terms, we transformed the existing Korean WordNet scheme into a new scheme in order to construct the concept similarity matrix. The similarity degree between concepts ranges from 0, representing a synonym relationship, to 9, representing unconnected concepts; by using this degree, we can control the difficulty of newly generated questions. We used two methods for evaluating the semantic similarity between two concepts: the first considers only the distance between the two concepts, and the second additionally considers their positions in the Korean WordNet space. With these methods, we can build a system that helps instructors more easily generate, from existing sentences, new questions and example sentences with varied content and difficulty.
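
The paper's similarity matrix over the restructured Korean WordNet is not reproduced here; the sketch below only illustrates the general idea of grading distractor difficulty with a concept-distance score from 0 (synonym) to 9 (unconnected), using a hypothetical mini-taxonomy and a shortest-path distance as a stand-in.

```python
# Hedged sketch: distance-based concept similarity on a tiny hypothetical
# taxonomy, capped at 9 for unconnected concepts, in the spirit of the
# 0 (synonym) .. 9 (no connection) scale described in the abstract.
from collections import deque

# Hypothetical is-a edges (child -> parent); not the Korean WordNet itself.
PARENT = {
    "dog": "canine", "wolf": "canine", "canine": "mammal",
    "cat": "feline", "feline": "mammal", "mammal": "animal",
}

def distance(a: str, b: str, cap: int = 9) -> int:
    """Shortest undirected path length between two concepts, capped at `cap`."""
    if a == b:
        return 0
    neighbors = {}
    for child, parent in PARENT.items():
        neighbors.setdefault(child, set()).add(parent)
        neighbors.setdefault(parent, set()).add(child)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        for nxt in neighbors.get(node, ()):
            if nxt == b:
                return min(d + 1, cap)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return cap  # unconnected concepts get the maximum score

# Closer concepts make harder distractors for a keyword such as "dog".
for candidate in ["wolf", "cat", "animal"]:
    print(candidate, distance("dog", candidate))
```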

Feasibility Study of Applying EMMC Process to Recirculation Water Treatment System in High Density Seawater Aquaculture Farm through Laboratory Scale Reactor Operation (실험실규모 반응조 운전을 통한 고밀도 해산어 양식장 순환수 처리공정으로서 EMMC공정의 적용 가능성 연구)

  • Jeong Byung Gon; Kim Byung Hyo
    • Journal of the Korean Society for Marine Environment & Energy / v.7 no.3 / pp.116-121 / 2004
  • Treatability tests were conducted to study the feasibility of the EMMC process as a recirculation water treatment system for a high density seawater aquaculture farm. To study the effect of the organic and ammonia nitrogen loading rate on system performance, the hydraulic retention time (HRT) was reduced gradually from 12 hr to 10 min. The conclusions can be summarized as follows. When the system HRT was reduced gradually from 12 hr to 2 hr, there was little noticeable change (reduction) in ammonia nitrogen removal efficiencies; however, removal efficiencies decreased dramatically when the system was operated at an HRT of less than 2 hr. In the case of organics (COD), there was no dramatic change in removal efficiencies with HRT reduction, and COD removal efficiencies were successfully maintained at higher than 9% when the system was operated at the HRT of 10 min. System performance depending on the media packing ratio in the reactors was also evaluated. There was little difference in reactor performance across packing ratios when the reactors were operated at an HRT longer than 1 hr; however, the differences became considerable at an HRT shorter than 1 hr. Comparing the 25%, 50%, and 75% packed reactors, a media packing ratio of more than 50% plays no significant role in increasing reactor performance, so packing the media at less than 50% is the more reasonable choice from an economic point of view. The tendency shown in the COD removal efficiencies agreed well with the variation of ammonia-nitrogen removal efficiencies according to the media packing ratio at each HRT. The difference in effluent ammonia-nitrogen concentration between the 50% and 75% media packing reactors was negligible, and compared with the 25% packing reactor the difference was also not large.

  • PDF

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop; Chang Jeong-Ho
    • The KIPS Transactions:PartB / v.11B no.6 / pp.749-758 / 2004
  • In this paper, we propose a new method that uses only raw corpora, without additional human effort, for disambiguating target word selection in English-Korean machine translation. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). These techniques can represent the complex semantic structure of a given context, such as a text passage. We construct linguistic semantic knowledge using the two techniques and apply it to target word selection in English-Korean machine translation, utilizing grammatical relationships stored in a dictionary. We use the k-nearest neighbor learning algorithm to resolve the data sparseness problem in target word selection and estimate the distance between instances based on these models. In the experiments, we use TREC AP news data to construct the latent semantic space and the Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by more than 10%, and PLSA showed better accuracy than LSA. Finally, we show, using correlation analysis, the relationship between the accuracy and two important factors: the dimensionality of the latent space and the k value in k-NN learning.
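
As a rough illustration of the LSA side of this approach (PLSA is omitted), the sketch below builds a latent semantic space with truncated SVD over a tiny hypothetical corpus and uses cosine-based nearest neighbors to choose between two hypothetical Korean translations of an ambiguous English word. The sentences, candidate translations, dimensionality, and k are toy assumptions, not the paper's TREC/WSJ setup.

```python
# Hedged sketch: LSA-based target word selection with k-nearest neighbors in
# the latent space. Corpus, senses, and k are toy assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Tiny labelled corpus: contexts where English "bank" should map to either
# the hypothetical Korean target "은행" (financial) or "둑" (river bank).
contexts = [
    "the bank approved the loan and raised interest rates",
    "she deposited money at the bank near the office",
    "the bank of the river flooded after heavy rain",
    "they walked along the grassy bank of the stream",
]
labels = ["은행", "은행", "둑", "둑"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(contexts)              # term-document matrix
lsa = TruncatedSVD(n_components=2, random_state=0)  # latent semantic space
X_lsa = lsa.fit_transform(X)

def select_target(sentence: str, k: int = 3) -> str:
    """Pick the translation whose k nearest training contexts dominate."""
    v = lsa.transform(vectorizer.transform([sentence]))
    sims = cosine_similarity(v, X_lsa)[0]
    top = np.argsort(sims)[::-1][:k]
    votes = [labels[i] for i in top]
    return max(set(votes), key=votes.count)

print(select_target("fishermen sat on the bank of the stream"))  # likely 둑
print(select_target("the bank raised the interest on my loan"))  # likely 은행
```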

WebDBs : A User oriented Web Search Engine (WebDBs: 사용자 중심의 웹 검색 엔진)

  • 김홍일; 임해철
    • The Journal of Korean Institute of Communications and Information Sciences / v.24 no.7B / pp.1331-1341 / 1999
  • This paper proposes WebDBs (Web Database system), which retrieves information published on the web using a query language similar to SQL. The proposed system automatically extracts the information needed for retrieval from HTML documents dispersed across the web and can process SQL-based queries over the extracted information. A web database system spends most of its query processing time fetching documents over the network, so previously retrieved information is stored in a cache and reused in similar applications, exploiting the observation that most web retrieval exhibits locality. We propose a cache mechanism adapted to user applications that stores cached information associated with the retrieved query, and we implement a web search engine based on these concepts.

  • PDF
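
The abstract above gives no implementation details of the cache; the sketch below is only a hedged illustration of the locality-based idea, caching fetched pages by URL so that repeated or similar queries avoid re-downloading documents. The function names and the keyword-style stand-in for the SQL-like query are invented for illustration, not WebDBs internals.

```python
# Hedged sketch of a page cache exploiting web locality: repeated or similar
# queries reuse previously fetched documents instead of re-downloading them.
import time
import urllib.request
from typing import Dict, List, Tuple

PAGE_CACHE: Dict[str, Tuple[float, str]] = {}   # url -> (fetched_at, html)
CACHE_TTL = 300.0                               # seconds before a page is stale

def fetch_page(url: str) -> str:
    """Return page HTML, served from the cache when a fresh copy exists."""
    now = time.time()
    cached = PAGE_CACHE.get(url)
    if cached and now - cached[0] < CACHE_TTL:
        return cached[1]                        # cache hit: no network access
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    PAGE_CACHE[url] = (now, html)
    return html

def run_query(urls: List[str], keyword: str) -> List[str]:
    """Toy stand-in for an SQL-like query: URLs whose HTML mentions the keyword."""
    return [u for u in urls if keyword.lower() in fetch_page(u).lower()]
```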

A Measurement of Lexical Relationship for Concept Network Based on Semantic Features (의미속성 기반의 개념망을 위한 어휘 연관도 측정)

  • Ock, Eun-Joo; Lee, Wang-Woo; Lee, Soo-Dong; Ock, Cheol-Young
    • Annual Conference on Human and Language Technology / 2001.10d / pp.146-154 / 2001
  • In this paper, to construct a concept network, we measure lexical relatedness based on the distribution of semantic features that can be extracted from dictionary definition sentences. First, semantic features are extracted from a corpus of about 112,000 dictionary definitions annotated with part-of-speech and sense tags. The extractable semantic features include nominals, adverbials, and predicates; in this paper we primarily target predicates in a modifying relation with nominals, i.e. those bearing the adnominal ending ('ㄴ/은/는'). After a cleaning step on the roughly 45,000 extracted co-occurrence pairs, the relatedness between nominals and predicates is measured using mutual information (MI) from information theory. To alleviate data sparseness, the nominals and predicates in modifying relations were grouped into similar-word sets centered on basic vocabulary. The lexical relatedness measured from this distribution of semantic features can be used to build a hierarchical structure among concepts by computing the degree to which semantic features are shared.

  • PDF
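
The abstract above refers to mutual information (MI) over noun-predicate co-occurrence pairs without giving the formula; the sketch below uses the common pointwise form, PMI(n, v) = log( p(n, v) / (p(n) p(v)) ), estimated from hypothetical co-occurrence counts rather than the paper's 45,000 extracted pairs.

```python
# Hedged sketch: pointwise mutual information between a noun and a modifying
# predicate, estimated from co-occurrence counts. The pairs are hypothetical.
import math
from collections import Counter

# Hypothetical (noun, predicate) co-occurrence pairs.
pairs = [("강", "넓은"), ("강", "깊은"), ("바다", "깊은"), ("바다", "푸른"), ("강", "넓은")]

pair_counts = Counter(pairs)
noun_counts = Counter(n for n, _ in pairs)
pred_counts = Counter(v for _, v in pairs)
total = len(pairs)

def pmi(noun: str, pred: str) -> float:
    """log2( p(noun, pred) / (p(noun) * p(pred)) )."""
    p_nv = pair_counts[(noun, pred)] / total
    p_n = noun_counts[noun] / total
    p_v = pred_counts[pred] / total
    return math.log2(p_nv / (p_n * p_v))

print(pmi("강", "넓은"))   # positive: the pair co-occurs more often than chance
print(pmi("바다", "푸른"))
```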

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook; Kim, Youngtae; Ra, Dongyul; Lim, Soojong; Kim, Hyunki
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.131-150 / 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in sentences of Korean and Japanese, which is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems; however, omission of noun phrases makes the quality of information extraction poor. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system deals with is very similar to zero anaphora resolution, which is one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in the case of zero anaphora resolution, the title is also a candidate in our problem. In our system, the first stage is in charge of detecting the zero anaphor. In the second stage, antecedent search is carried out by considering the candidates. If antecedent search fails, an attempt is made, in the third stage, to use the title as the antecedent. The main characteristic of our system is the use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research works is to perform binary classification over all the noun phrases in the search space and select the noun phrase classified as an antecedent with the highest confidence. However, we propose in this paper that antecedent search be viewed as the problem of assigning antecedent indicator labels to a sequence of noun phrases; in other words, sequence labeling is employed in antecedent search in the text. We are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns the sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent, and the other indicating that it is not. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization problems. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus in which gold-standard answers, such as zero anaphors and their possible antecedents, are provided. Training examples were prepared using the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer, and omitted subject or object cases are identified; thus the performance of our system depends on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor, based on binary classification using a regular SVM. The experiment showed that our system's performance is F1 = 68.58%, which means that a state-of-the-art system can be developed with our technique. It is expected that future work that enables the system to utilize semantic information can lead to a significant performance improvement.
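
The abstract above does not specify the paper's features or the exact modified Pegasos update, so the sketch below is only a simplified stand-in: a linear model with hand-made features picks the best candidate noun phrase (argmax over the sequence), and a Pegasos-style subgradient step shows the general flavor of the training update. The feature names, candidates, and loss are illustrative assumptions, not the paper's structural SVM formulation.

```python
# Hedged sketch: antecedent selection as an argmax over candidate noun phrases
# with a linear scoring model, plus a Pegasos-style subgradient update.
import numpy as np

def features(candidate: dict, zero_anaphor: dict) -> np.ndarray:
    """Toy feature vector for a (candidate NP, zero anaphor) pair."""
    return np.array([
        1.0 if candidate["case"] == zero_anaphor["case"] else 0.0,        # case match
        1.0 / (1.0 + zero_anaphor["position"] - candidate["position"]),   # proximity
        1.0 if candidate["is_title"] else 0.0,                            # title NP
    ])

def predict(w: np.ndarray, candidates: list, zero_anaphor: dict) -> int:
    """Return the index of the highest-scoring candidate antecedent."""
    scores = [w @ features(c, zero_anaphor) for c in candidates]
    return int(np.argmax(scores))

def pegasos_step(w, candidates, zero_anaphor, gold: int, lam=0.01, t=1):
    """One Pegasos-style subgradient step on a structured hinge loss."""
    pred = predict(w, candidates, zero_anaphor)
    eta = 1.0 / (lam * t)
    w = (1 - eta * lam) * w           # shrink toward zero (regularization)
    if pred != gold:                  # subgradient of the hinge term
        w += eta * (features(candidates[gold], zero_anaphor)
                    - features(candidates[pred], zero_anaphor))
    return w

# Hypothetical candidates preceding a zero anaphor in subject position.
candidates = [
    {"case": "subject", "position": 2, "is_title": False},
    {"case": "object",  "position": 5, "is_title": False},
    {"case": "subject", "position": 0, "is_title": True},   # the article title
]
zero = {"case": "subject", "position": 8}
w = np.zeros(3)
w = pegasos_step(w, candidates, zero, gold=2, t=1)
print(predict(w, candidates, zero))   # index of the selected antecedent
```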

A Study on the Effects of Dietary Fat Sources on the Plasma and Liver Cholesterol Levels in Young Chicks (지방공급원이 병아리의 혈액 및 간 Cholesterol 함량에 미치는 영향)

  • 최인숙; 지규만; 오미향
    • Korean Journal of Poultry Science / v.13 no.2 / pp.209-219 / 1986
  • This study was designed to investigate the effects of various sources of dietary fat on the blood and liver cholesterol (CHOL) levels of young Single Comb White Leghorn male chicks. In experiment 1, corn oil, palm oil, tallow, and fish oil were added individually at a level of 4% to semipurified-type diets composed of isolated soy protein and glucose as major components. The diets were fed ad libitum for a period of 15 days. In experiment 2, various fats such as corn oil, soybean oil, rapeseed oil, palm oil, tallow, fish oil, and hydrogenated fish oil (HFO) were added individually at a level of 11.4% to practical-type diets based primarily on corn and soybean meal. The control diet contained 3% corn oil. All of these diets were formulated to contain equivalent amounts of nutrients such as protein, vitamins, and minerals per unit kcal of metabolizable energy. The third experiment compared the effects of different dietary calorie/protein (C/P) ratios on the performance and various biological parameters of the chicks. The control diet was the same as in experiment 2; another diet was added with 11.14% corn oil (C/P ratio = 146), and the other with 10% corn oil (C/P ratio = 164). The diets in experiments 2 and 3 were fed ad libitum for 26 days. In the first experiment, the chicks fed the diets containing vegetable oils tended to grow faster and show better, though not statistically significant, feed efficiency than those fed diets with added animal fats; however, this tendency was not observed in experiment 2. Birds that consumed the diets with added fish oil appeared to have heavier livers and higher liver CHOL than the others (p<0.05). No significant differences in the levels of blood CHOL and triacylglycerol (TG) were observed among the chicks of the various dietary groups (Exp. 1). Weights of liver or heart were significantly heavier in the chicks that consumed the diets with added HFO or fish oil, respectively (Exp. 2); however, chicks that ingested the diet containing fish oil appeared to have significantly lower plasma CHOL. No significant differences were observed in the levels of liver CHOL and plasma TG among the dietary groups. Birds that consumed the diet with a wider C/P ratio showed higher liver TG levels in experiment 3 (p<0.05). Although no statistical differences were observed among the various dietary groups, chicks fed the diet with a wider C/P ratio tended to show higher levels of plasma CHOL, TG, liver CHOL, and total liver lipids than the control group.

  • PDF

Improvement of Keyword Spotting Performance Using Normalized Confidence Measure (정규화 신뢰도를 이용한 핵심어 검출 성능향상)

  • Kim, Cheol; Lee, Kyoung-Rok; Kim, Jin-Young; Choi, Seung-Ho
    • The Journal of the Acoustical Society of Korea / v.21 no.4 / pp.380-386 / 2002
  • Conventional post-processing, such as the confidence measure (CM) proposed by Rahim, calculates each phone's CM from the likelihood between the phoneme model and an anti-model, and then obtains the word's CM by averaging the phone-level CMs [1]. In the conventional method, the CMs of some specific keywords are very low, so they are usually rejected. The reason is that the statistics of the phone-level CMs are not consistent; in other words, phone-level CMs have a different probability density function (pdf) for each phone, especially each tri-phone. To overcome this problem, we propose a normalized confidence measure (NCM). Our approach is to transform the CM pdf of each tri-phone into the same pdf, under the assumption that the CM pdfs are Gaussian. For evaluating our method we use a common keyword spotting system, in which context-dependent HMM models are used for modeling keyword utterances and context-independent HMM models are applied to non-keyword utterances. The experimental results show that the proposed NCM reduces the FAR (false alarm rate) from 0.44 to 0.33 FA/KW/HR (false alarms per keyword per hour) when the MDR is about 8%, achieving a 25% improvement in FAR.
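
The abstract above states the key idea (transform each tri-phone's CM distribution into a common one under a Gaussian assumption) without giving the exact transform. The sketch below uses a plain z-score normalization with per-tri-phone means and standard deviations, which is one natural reading of that idea, and averages the normalized phone-level scores into a word-level score; the statistics and threshold are hypothetical.

```python
# Hedged sketch: per-tri-phone z-score normalization of phone-level confidence
# measures, then averaging to a word-level score. The per-tri-phone statistics
# would come from development data; the values below are hypothetical.
from statistics import mean

# Hypothetical (mean, std) of the raw CM for each tri-phone.
CM_STATS = {
    "k-a+m": (-1.2, 0.5),
    "a-m+i": (-0.8, 0.3),
    "m-i+n": (-1.0, 0.4),
}

def normalized_cm(phone_scores: dict) -> float:
    """Word-level NCM: average of z-normalized phone-level CM scores."""
    z_scores = []
    for tri_phone, raw_cm in phone_scores.items():
        mu, sigma = CM_STATS[tri_phone]
        z_scores.append((raw_cm - mu) / sigma)  # map every tri-phone to a common pdf
    return mean(z_scores)

# A keyword hypothesis is accepted when its normalized CM clears a threshold.
word_cm = normalized_cm({"k-a+m": -1.0, "a-m+i": -0.7, "m-i+n": -1.3})
print("accept" if word_cm > -0.5 else "reject", word_cm)
```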