• Title/Summary/Keyword: expanded search terms (확장검색어)

Virtual Machine for Program Testing on the Virtual Network Processor Environment (가상의 네트워크 프로세서 환경에서 프로그램 테스트를 위한 가상머신)

  • Hong, Soonho;Kwak, Donggyu;Ko, BangWon;Yoo, Chae-Woo
    • Proceedings of the Korea Information Processing Society Conference / 2012.04a / pp.514-517 / 2012
  • Recently, the number of Internet users has been growing and a wide variety of network-based applications are being developed. With the popularization of smartphones and tablet PCs, anyone can easily use information retrieval services over the Internet. Network processors were therefore developed to rapidly perform operations such as controlling, forwarding, and dropping the ever-increasing volume of packets; a network processor is optimized for packet control, forwarding, and deletion. However, because every network processor vendor ships a different cross-development toolchain and development language, reusing and extending source code is difficult. In addition, testing a program that runs on a network processor requires hardware equipment, and learning a development environment and language tied to a particular network processor places a heavy burden on programmers. This paper uses eFlowC, a language that defines network-processor-optimized functionality at the language level, and proposes a virtual machine that can test and execute such programs on a general-purpose computer. By means of the virtual machine's intermediate language, source code can be reused and extended on any general-purpose computer on which the virtual machine is installed. Programs can thus be tested on a general-purpose computer, enabling highly reliable programs to be written.
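
The abstract above describes compiling eFlowC to a virtual-machine intermediate language and interpreting it on a general-purpose computer instead of network-processor hardware. Purely as an illustration of that execution model, here is a minimal stack-based bytecode interpreter; the opcodes are invented for the sketch and are not the paper's eFlowC instruction set.

```python
# A minimal sketch of a stack-based VM loop; the instruction set is assumed
# for illustration, not taken from the eFlowC paper.

def run(program):
    """Interpret a list of (opcode, operand) pairs on a simple operand stack."""
    stack, pc = [], 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "PUSH":            # push a constant
            stack.append(arg)
        elif op == "ADD":           # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":         # emit the value on top of the stack
            print(stack[-1])
        elif op == "HALT":
            break
        pc += 1
    return stack

# Example: compute 2 + 3 on the host machine instead of NP hardware.
run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None), ("HALT", None)])
```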

Development of a Thesaurus Management System based on the Object-Oriented Technique (객체지향 기법을 이용한 시소러스 관리 시스템의 개발에 관한 연구)

  • 박계숙
    • Journal of the Korean Society for Information Management / v.13 no.2 / pp.5-18 / 1996
  • To construct a thesaurus, a thesaurus management system is needed that can process dynamic variations, such as the input, correction, and deletion of words and the definition of new relationships between words, quickly and exactly. In this paper, I developed a thesaurus management system based on the object-oriented technique with a GUI (graphical user interface) screen. To enhance the effectiveness of information retrieval, I put emphasis on the expansion of synonyms: English and Korean words expressing the same concept.
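
As a rough sketch of the operations this abstract enumerates (input, correction, and deletion of words, definition of relationships between words, and synonym expansion), the class below is illustrative only; the paper's actual object model is not given in the abstract.

```python
# A minimal sketch of thesaurus management operations; class and method names
# are assumptions, not the paper's design.

class Thesaurus:
    def __init__(self):
        self.relations = {}  # term -> {relation -> set of related terms}

    def add_relation(self, term, relation, other):
        """Define a relation (e.g. 'synonym', 'broader') between two terms."""
        self.relations.setdefault(term, {}).setdefault(relation, set()).add(other)
        if relation == "synonym":  # synonymy is symmetric
            self.relations.setdefault(other, {}).setdefault(relation, set()).add(term)

    def delete_term(self, term):
        """Remove a term and every relation that points to it."""
        self.relations.pop(term, None)
        for rels in self.relations.values():
            for targets in rels.values():
                targets.discard(term)

    def expand(self, term):
        """Expand a query term with its synonyms, e.g. Korean/English pairs."""
        return {term} | self.relations.get(term, {}).get("synonym", set())

t = Thesaurus()
t.add_relation("정보검색", "synonym", "information retrieval")
print(t.expand("정보검색"))  # {'정보검색', 'information retrieval'} (set order varies)
```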

Design and Implementation of Distributed XQuery Query Processor using Distributed ORDBMSs (분산 객체 관계 데이터베이스 시스템을 이용한 분산 XQuery 질의 처리기 설계 및 구현)

  • Lee, Jae-Min;Jang, Gun-Up;Hong, Eui-Kyeong
    • Proceedings of the Korean Information Science Society Conference / 2007.10c / pp.55-59 / 2007
  • Computing environments have recently been shifting toward distributed, web-based environments on the Internet. Accordingly, the use and volume of XML documents have grown rapidly, and the XML documents one needs should be accessible easily at any time. It also becomes necessary to extract and transform desired data from XML documents stored in various distributed forms and to integrate fragmented XML data. Research is therefore needed on systems that store XML documents efficiently in distributed object-relational database systems and that support the XQuery query language so users can retrieve the information they need from distributed XML documents. In this paper, to provide access to XML data stored in distributed object-relational database systems, we design and implement a distributed XQuery query processor that translates XQuery into distributed SQL, by extending a distributed XPath query processor that translates XPath into distributed SQL.
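
The central step described above is translating queries into SQL over relationally stored XML. The sketch below shows the general flavor for a child-axis-only XPath over an assumed edge-table shredding node(id, parent_id, tag); the paper's actual XQuery-to-SQL rules and distribution logic are more involved.

```python
# A rough sketch of XPath-to-SQL translation over an assumed edge table
# node(id, parent_id, tag); not the paper's actual translation scheme.

def xpath_to_sql(xpath):
    """Translate a child-axis-only path like /book/title into SQL self-joins."""
    steps = [s for s in xpath.split("/") if s]
    joins, conds = [], []
    for i, tag in enumerate(steps):
        alias = f"n{i}"
        joins.append(f"node {alias}")
        cond = f"{alias}.tag = '{tag}'"
        if i == 0:
            cond += f" AND {alias}.parent_id IS NULL"      # document root element
        else:
            cond += f" AND {alias}.parent_id = n{i-1}.id"  # child axis step
        conds.append(cond)
    return (f"SELECT n{len(steps) - 1}.id FROM " + ", ".join(joins)
            + " WHERE " + " AND ".join(conds))

print(xpath_to_sql("/book/title"))
# SELECT n1.id FROM node n0, node n1 WHERE n0.tag = 'book'
#   AND n0.parent_id IS NULL AND n1.tag = 'title' AND n1.parent_id = n0.id
```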

A Dictionary Composition for Syntactic Analyzer from Corpus (코퍼스로부터 구문 분석을 위한 사전 구성)

  • 정민수;정규철;박기홍
    • Proceedings of the Korean Information Science Society Conference / 1998.10c / pp.159-161 / 1998
  • Because Korean has head-final word order, free constituent order, and frequent omission of the case-marking particles, grammar formalisms developed for English, such as transformational generative grammar, lexical-functional grammar, and phrase-structure grammars, are difficult to apply; and because idiomatic expressions are common, syntactic rules alone are often insufficient and analysis must rely on a dictionary. We therefore aim to construct a dictionary suited to this task. Dictionaries consisting only of tags and keywords, however, leave many difficulties, so grammar rules are applied alongside them; since those rules are usually built into the dictionary by hand or by ad hoc algorithms, accuracy also suffers. We instead construct this process from a corpus, reducing construction time and making the combination information more robust by ranking combinations according to statistical information, namely the frequency with which each combination occurs in the corpus. Extending this further, we construct a syntactic-combination dictionary that records combination information between analyzed words together with the frequency of each combination, so that it can also be used during parsing. When parsing with dependency grammar or syntactic relations, this dictionary can be consulted to find the correct combination relations for a parse tree.
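
A hedged sketch of the statistical core described above: count how often word combinations occur in a corpus and rank candidate attachments by that frequency. The whitespace tokenization and toy corpus are assumptions; a real Korean system would operate on morphologically analyzed units.

```python
# A minimal sketch of building a combination-frequency dictionary from a
# corpus and ranking attachments by observed frequency; the tokenization
# and corpus are illustrative assumptions.

from collections import Counter

corpus = [
    "나는 학교 에 간다",
    "나는 집 에 간다",
    "그는 학교 에 간다",
]

pair_freq = Counter()
for sentence in corpus:
    tokens = sentence.split()
    pair_freq.update(zip(tokens, tokens[1:]))  # adjacent-word combinations

def rank_attachments(word):
    """Rank the words that combine with `word`, most frequent first."""
    return sorted(((w2, c) for (w1, w2), c in pair_freq.items() if w1 == word),
                  key=lambda x: -x[1])

print(rank_attachments("에"))  # [('간다', 3)]
```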

A Weight Boosting Method of Sentiment Features for Korean Document Sentiment Classification (한국어 문서 감정분류를 위한 감정 자질 가중치 강화 기법)

  • Hwang, Jaewon;Ko, Youngjoong
    • Annual Conference on Human and Language Technology / 2008.10a / pp.201-206 / 2008
  • This paper proposes a technique that improves the performance of Korean document sentiment classification by boosting the weights of the sentiment features on which the classification is based. First, we acquire sentiment features as a lexical resource and evaluate how much the expanded sentiment features contribute to sentiment classification. We then compute a sentiment strength for each sentence using the chi-square statistic ($\chi^2$) of the sentiment features obtained from the training data. This sentence-level sentiment strength is combined with the TF-IDF weighting scheme to boost the weights of the sentiment features. Finally, during training, only positive sentiment features are boosted in positive documents and only negative sentiment features in negative documents. We evaluate the proposed method using a Support Vector Machine, which performs well for document classification. The evaluation shows an improvement of about 2.0% over features based on content words, as commonly used in information retrieval.
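
To make the method above concrete, the sketch below computes a 2x2 chi-square statistic for a feature and folds it into a TF-IDF weight. The exact boosting formula and the sentiment lexicon are not given in the abstract, so the combination form (and the alpha parameter) are assumptions.

```python
# A simplified sketch of chi-square-boosted TF-IDF weighting in the spirit
# of the abstract; the boosting form below is assumed, not the paper's.

import math

def chi_square(n_pos_with, n_pos, n_neg_with, n_neg):
    """2x2 chi-square statistic of a feature against positive/negative classes."""
    a, b = n_pos_with, n_pos - n_pos_with  # positive docs with / without feature
    c, d = n_neg_with, n_neg - n_neg_with  # negative docs with / without feature
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

def boosted_weight(tf, df, n_docs, chi2, is_sentiment_feature, alpha=0.5):
    """TF-IDF weight, multiplied by a chi-square-based boost for sentiment features."""
    w = tf * math.log(n_docs / (1 + df))
    if is_sentiment_feature:
        w *= 1 + alpha * chi2  # assumed boosting form
    return w

chi2 = chi_square(n_pos_with=40, n_pos=50, n_neg_with=5, n_neg=50)
print(boosted_weight(tf=3, df=45, n_docs=100, chi2=chi2, is_sentiment_feature=True))
```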

Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.171-193 / 2018
  • The LOD (Linked Open Data) cloud is a practical implementation of the semantic web. We suggest a new method that provides identity links conveniently in the LOD cloud and allows changes in LODs to be reflected in search results without omission. An LOD publishes detailed descriptions of entities in RDF triple form; an RDF triple is composed of a subject, a predicate, and an object and presents a detailed description of an entity. Links in the LOD cloud, named identity links, are realized by asserting that entities of different RDF triples are identical. Currently, an identity link is provided by explicitly creating a link triple that associates its subject and object with the source and target entities; link triples are appended to the LOD. With identity links, knowledge acquired from one LOD can be expanded with knowledge from other LODs. The goal of the LOD cloud is to provide users this opportunity for knowledge expansion. Appending link triples to an LOD, however, requires discovering identity links between entities one by one, which is seriously difficult given the enormous scale of LODs. Newly added entities cannot be reflected in search results until identity links heading for them are serialized and published to the LOD cloud. Instead of creating enormous numbers of identity links, we propose that each LOD prepare its own link policy. The link policy specifies a set of target LODs to link to and the constraints necessary to discover identity links to entities on the target LODs. On searching, it becomes possible to access newly added entities and reflect them in search results without omission by referencing the link policies. A link policy specifies a set of predicate pairs for discovering identity between associated entities in the source and target LODs. For the link policy specification, we suggest a set of vocabularies that conform to RDFS and OWL. Identity between entities is evaluated according to the similarity of the source and target entities' objects associated with the predicate pairs in the link policy. We implemented the Change Acceptable In-Depth Searching System (CAIDS). With CAIDS, a user's search request starts from the depth_0 LOD, i.e., surface searching. Referencing the link policies of LODs, CAIDS proceeds with in-depth searching into the LODs of the next depths. To supplement identity links derived from the link policies, CAIDS uses explicit link triples as well, and its in-depth searching progresses by following the identity links. The content of an entity obtained from the depth_0 LOD expands with the contents of entities of other LODs that have been discovered to be identical to the depth_0 LOD entity. Expanding the content of a depth_0 LOD entity without the user being aware of those other LODs is the implementation of knowledge expansion, the goal of the LOD cloud: the more identity links in the LOD cloud, the wider the content expansions. We suggest a new way to create identity links abundantly and supply them to the LOD cloud. Experiments on CAIDS were performed against the DBpedia LODs of Korea, France, Italy, Spain, and Portugal. They show that CAIDS provides appropriate expansion and inclusion ratios as long as the degree of similarity between source and target objects is 0.8 to 0.9. For each depth, the expansion ratio is the ratio of the entities discovered at that depth to the entities of the depth_0 LOD, and the inclusion ratio is the ratio of the entities discovered only with explicit links to the entities discovered only with link policies. With similarity degrees under 0.8, expansion becomes excessive and contents become distorted; a similarity degree of 0.8 to 0.9 also yields an appropriate amount of searched RDF triples. Further experiments evaluated the confidence degree of contents expanded through in-depth searching. The confidence degree of content is directly coupled with the identity ratio of an entity, meaning its degree of identity to the entity of the depth_0 LOD. The identity ratio of an entity is obtained by multiplying the source LOD's confidence by the source entity's identity ratio. By tracing the identity links in advance, an LOD's confidence is evaluated according to the number of identity links incoming to its entities. In evaluating the identity ratio, we considered the concept of identity agreement, meaning that multiple identity links head to a common entity. With the identity agreement concept, experimental results show that the identity ratio decreases as the depth deepens but rebounds as the depth deepens further. For each entity, as the number of identity links increases, the identity ratio rebounds earlier and finally reaches 1. We found that 8 or more identity links per entity would lead users to trust the expanded contents. The link-policy-based in-depth searching method we propose is expected to contribute abundant identity links to the LOD cloud.
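
A small numeric sketch of the identity-ratio arithmetic described above: a target entity's ratio is the product of the source entity's ratio and the source LOD's confidence, and identity agreement (multiple links heading to one entity) pushes the ratio back toward 1. The combination rule used for agreement is an assumption; the abstract does not spell out its formula.

```python
# Identity-ratio propagation per the abstract, with an assumed agreement rule.

def propagate(source_identity_ratio, source_lod_confidence):
    """Identity ratio of a target entity reached through one identity link."""
    return source_identity_ratio * source_lod_confidence

def agree(ratios):
    """Combine ratios of multiple identity links heading to the same entity,
    treating each link as independent evidence of identity (assumed rule)."""
    disbelief = 1.0
    for r in ratios:
        disbelief *= 1.0 - r
    return 1.0 - disbelief

# The depth_0 entity has ratio 1.0; two source LODs with confidence 0.7 and
# 0.8 both link to the same deeper entity, so its ratio rebounds toward 1.
links = [propagate(1.0, 0.7), propagate(1.0, 0.8)]
print(agree(links))  # 0.94
```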

Implementation of Policy based In-depth Searching for Identical Entities and Cleansing System in LOD Cloud (LOD 클라우드에서의 연결정책 기반 동일개체 심층검색 및 정제 시스템 구현)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Internet Computing and Services / v.19 no.3 / pp.67-77 / 2018
  • This paper suggests that each LOD establish its own link policy and publish it to the LOD cloud to provide identity among entities in different LODs. For specifying the link policy, we also propose a vocabulary set founded on the RDF model. We implemented a Policy-based In-depth Searching and Cleansing (PISC for short) system that proceeds with in-depth searching across LODs by referencing the link policies. PISC has been published on Github. Because LODs participate in the LOD cloud voluntarily, the degree of entity identity needs to be evaluated. PISC therefore evaluates identities and cleanses the searched entities, confining them to those that exceed the user's criterion for the entity identity level. As search results, PISC provides an entity's detailed contents collected from diverse LODs, together with an ontology customized to the content. A simulation of PISC was performed on five of DBpedia's LODs. We found that a similarity of 0.9 between the objects of source and target RDF triples provided an appropriate expansion ratio and inclusion ratio of search results. For sufficient identity of searched entities, three or more target LODs need to be specified in the link policy.
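
As an illustration of the cleansing step, the sketch below keeps only candidate entities whose object similarity to the source exceeds the user's criterion. The token-overlap similarity used here is an assumption; the abstract fixes only the 0.9 finding, not the metric.

```python
# A minimal sketch of similarity-threshold cleansing; the Jaccard metric
# is an assumption standing in for the paper's object-similarity measure.

def object_similarity(obj_a: str, obj_b: str) -> float:
    """Token-level Jaccard similarity between two RDF object literals."""
    a, b = set(obj_a.lower().split()), set(obj_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def cleanse(candidates, source_object, threshold=0.9):
    """Discard candidate entities whose object similarity falls below threshold."""
    return [(ent, obj) for ent, obj in candidates
            if object_similarity(source_object, obj) >= threshold]

candidates = [("dbpedia-ko:서울", "Seoul capital of South Korea"),
              ("dbpedia-fr:Séoul", "Seoul capital city of South Korea"),
              ("dbpedia-es:Seúl", "largest city in Korea")]
print(cleanse(candidates, "Seoul capital of South Korea", threshold=0.8))
# keeps the first two candidates; the third falls below the criterion
```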

Similarity checking between XML tags through expanding synonym vector (유사어 벡터 확장을 통한 XML태그의 유사성 검사)

  • Lee, Jung-Won;Lee, Hye-Soo;Lee, Ki-Ho
    • Journal of KIISE: Software and Applications / v.29 no.9 / pp.676-683 / 2002
  • The success of XML (eXtensible Markup Language) is primarily based on its flexibility: everybody can define the structure of XML documents that represent information in the form he or she desires. XML is so flexible that XML documents cannot automatically be provided with an underlying semantics. Different tag sets, different names for elements or attributes, and different document structures in general mislead the task of classifying and clustering XML documents precisely. In this paper, we design and implement a system for checking the semantics-based similarity between XML tags. First, the system extracts the underlying semantics of tags and then expands the synonym set of each tag using the WordNet thesaurus and a user-defined word library that supports abbreviated forms and compound words for XML tags. Second, considering the relative importance of XML tags within XML documents, we extend the conventional vector space model, the model most generally used for documents in the information retrieval field. Using this method, we are able to check the similarity between XML tags that are represented by different tag names.
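
A minimal sketch of the tag-matching idea: expand each tag to a synonym set (standing in for WordNet plus the user-defined library of abbreviations and compound words) and compare the expanded sets. The synonym table and the plain cosine-style overlap are assumptions; the paper extends a full vector space model with tag-importance weights.

```python
# Synonym-vector expansion for XML tag similarity; the table and metric
# below are illustrative assumptions, not the paper's implementation.

SYNONYMS = {
    "author": {"author", "writer", "creator"},
    "writer": {"author", "writer", "creator"},
    "qty":    {"qty", "quantity", "amount"},   # abbreviated form handled here
}

def expand(tag: str) -> set[str]:
    """Expand a tag into its synonym set; unknown tags map to themselves."""
    tag = tag.lower()
    return SYNONYMS.get(tag, {tag})

def tag_similarity(tag_a: str, tag_b: str) -> float:
    """Cosine-style overlap of the expanded synonym vectors of two tags."""
    va, vb = expand(tag_a), expand(tag_b)
    return len(va & vb) / (len(va) ** 0.5 * len(vb) ** 0.5)

print(tag_similarity("author", "writer"))  # 1.0: same underlying semantics
print(tag_similarity("qty", "price"))      # 0.0: unrelated tags
```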

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.63-77 / 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck. In fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output order. Current search tools also cannot retrieve documents related to a retrieved document from the gigantic number of documents available. The most important problem for many current search systems is to increase the quality of search: to provide related documents, and to keep the number of unrelated documents in the search results as low as possible. For this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of the articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this scheme, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles; a citation index indexes the citations that an article makes, linking the articles with the cited works. Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independent of language and of the words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). But CiteSeer cannot index links between articles that researchers do not make, because it indexes only the links researchers make when they cite other articles; for the same reason, CiteSeer does not scale easily. All these problems orient us toward designing a more effective search system. This paper presents a method that extracts a subject and predicate from each sentence in a document. The document is converted into a tabular form in which each extracted predicate is checked against the values of possible subjects and objects. We build a hierarchical graph of each document using this table and then integrate the graphs of the documents. Against the graph of the entire document collection, we calculate the area of each document relative to the integrated documents and mark relations among documents by comparing their areas. We also propose a method for structural integration of documents that retrieves documents from the graph, making it easier for users to find information. We compared the performance of the proposed approaches with the Lucene search engine using ranking formulas. As a result, the F-measure is about 60%, which is better by about 15%.
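
As a toy illustration of the tabular step described above, the sketch extracts a (subject, predicate, object) row per sentence and relates two documents by the overlap of their row sets. The three-token extractor is a deliberate simplification; the paper does not specify its parser.

```python
# A minimal sketch of per-sentence subject/predicate extraction and
# overlap-based document relation; the extractor is a toy assumption.

def extract_rows(doc: str):
    """Naive S-V-O extraction: treat 3-token sentences as 'subject verb object'."""
    rows = []
    for sentence in doc.split("."):
        tokens = sentence.split()
        if len(tokens) == 3:
            rows.append(tuple(tokens))  # (subject, predicate, object)
    return rows

def shared_area(doc_a: str, doc_b: str) -> float:
    """Overlap ratio of the two documents' row sets, used to relate them."""
    ra, rb = set(extract_rows(doc_a)), set(extract_rows(doc_b))
    return len(ra & rb) / len(ra | rb) if ra | rb else 0.0

d1 = "cats eat fish. dogs chase cats."
d2 = "dogs chase cats. birds eat seeds."
print(shared_area(d1, d2))  # 1 shared row out of 3 distinct -> 0.333...
```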

Semi-automatic Ontology Modeling for VOD Annotation for IPTV (IPTV의 VOD 어노테이션을 위한 반자동 온톨로지 모델링)

  • Choi, Jung-Hwa;Heo, Gil;Park, Young-Tack
    • Journal of KIISE: Software and Applications / v.37 no.7 / pp.548-557 / 2010
  • In this paper, we propose a semi-automatic approach to modeling an ontology for annotating VOD content, to realize IPTV's intelligent search. The ontology is made by combining partial trees that extract the hypernyms, hyponyms, and synonyms of keywords related to a service domain from WordNet. Further, we add to the partial tree new keywords that are undefined in WordNet, such as foreign loanwords and words written in Chinese characters. The ontology consists of two parts: a generic hierarchy and a specific hierarchy. The former is the semantic model of vocabularies such as keywords and the contents of keywords; these are defined as classes with property restrictions in the ontology. The latter is generated by inferring the contents of keywords from the generic hierarchy using a reasoning technique. Annotation generates metadata (i.e., contents and genre) for VOD based on the specific hierarchy. The generic hierarchy can be applied to other domains, and the specific hierarchy helps model the ontology to fit the service domain. This approach proves effective at generating metadata independent of any specific domain. As a result, the proposed method produced around 82% precision on 2,400 VOD annotation test data.
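
A minimal sketch of extracting the "partial tree" of a domain keyword from WordNet via NLTK; merging the trees into the generic/specific hierarchies and adding Korean or foreign terms absent from WordNet, as the paper does, is left out.

```python
# A minimal sketch of WordNet partial-tree extraction using NLTK;
# requires: import nltk; nltk.download('wordnet')

from nltk.corpus import wordnet as wn

def partial_tree(keyword: str):
    """Collect synonyms, hypernyms, and hyponyms of a keyword's first sense."""
    synset = wn.synsets(keyword)[0]  # first sense only, for illustration
    return {
        "synonyms":  [lemma.name() for lemma in synset.lemmas()],
        "hypernyms": [h.name() for h in synset.hypernyms()],
        "hyponyms":  [h.name() for h in synset.hyponyms()],
    }

print(partial_tree("movie"))
# e.g. synonyms include 'film'; hypernyms include a 'show' synset
```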