• Title/Summary/Keyword: RDF/OWL

Search results: 89

Hierarchical Distributed Ontology Model Using Web Service (웹 서비스를 이용한 계층적 분산 온톨로지 모델)

  • Nam, Ho-Young; Yang, Jung-Jin
    • Proceedings of the Korean Information Science Society Conference / 2008.06c / pp.315-319 / 2008
  • The Internet environment is evolving at a remarkable pace. As the number of users grows and the volume of data explodes with it, techniques for finding accurate information and filtering out unnecessary information have become necessary. A representative technology is the Semantic Web [1], which defines additional metadata for information on the web so that not only people but also computers can grasp the meaning of that information. For the Semantic Web, an ontology must first be built so that machines can understand meaning. An ontology is a kind of dictionary that defines the relationships between resources and concepts, and languages such as RDF and OWL are used to describe it. As ontology data grows, the ontology repository grows with it, and for performance the repository must be distributed across locations. Building on research into integrated querying in distributed environments, this paper therefore proposes a distributed ontology model with an extensible and flexible structure. (A minimal RDF/OWL sketch follows this entry.)

  • PDF
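The abstract above characterizes an ontology as a dictionary of relations between resources and concepts, described in RDF or OWL. As a minimal, hypothetical sketch (the class names and namespace are invented for illustration and are not taken from the paper), the following Python snippet uses the rdflib library to state such a relation and serialize it as Turtle:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

# Hypothetical namespace; the paper does not publish its ontology IRIs.
EX = Namespace("http://example.org/campus#")

g = Graph()
g.bind("ex", EX)

# Declare two classes and relate them: a Professor is a kind of Person.
g.add((EX.Person, RDF.type, OWL.Class))
g.add((EX.Professor, RDF.type, OWL.Class))
g.add((EX.Professor, RDFS.subClassOf, EX.Person))

print(g.serialize(format="turtle"))
```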

Efficient RDQL Query Processing based on RDQL2SQL (RDQL2SQL 기반의 효율적인 RDQL 질의 처리)

  • Kim, Hak-Soo; Son, Jin-Hyun
    • Proceedings of the Korean Information Science Society Conference / 2005.11b / pp.43-45 / 2005
  • As interest in the Semantic Web has grown recently, research on technologies based on the W3C-standard Semantic Web ontology languages (RDF, RDFS, OWL, etc.) is being actively pursued. Among these, techniques for storing, managing, and querying documents written in Semantic Web ontology languages have drawn particular attention. In this paper we develop a high-performance query processing engine for RDQL, a standard query language for ontology data, that processes RDQL queries efficiently. The proposed engine translates an RDQL query into a corresponding SQL query, so an existing relational database query engine (SQL engine) can be used as-is. In the process, we develop a high-performance RDQL query engine that minimizes memory usage and database accesses. Ultimately, this kind of RDQL query processing can be used widely not only in robot environments that require real-time processing but also in Semantic Web applications. (A sketch of the RDQL-to-SQL translation idea follows this entry.)

  • PDF
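The core idea in the abstract, translating an RDQL query into SQL so that a relational engine can execute it, can be sketched as follows. This is not the paper's engine: it assumes a hypothetical triples(subject, predicate, object) table and handles only a single triple pattern, purely to illustrate the translation step.

```python
def triple_pattern_to_sql(subject, predicate, obj):
    """Translate one RDQL-style triple pattern (?var or constant per slot)
    into SQL over a hypothetical triples(subject, predicate, object) table."""
    columns, conditions, params = [], [], []
    for column, term in (("subject", subject), ("predicate", predicate), ("object", obj)):
        if term.startswith("?"):          # variable: project the column
            columns.append(f"{column} AS {term[1:]}")
        else:                             # constant: filter on it
            conditions.append(f"{column} = ?")
            params.append(term)
    sql = f"SELECT {', '.join(columns) or '*'} FROM triples"
    if conditions:
        sql += " WHERE " + " AND ".join(conditions)
    return sql, params

# RDQL-like pattern: (?paper, dc:title, "Semantic Web")
sql, params = triple_pattern_to_sql("?paper", "dc:title", "Semantic Web")
print(sql)     # SELECT subject AS paper FROM triples WHERE predicate = ? AND object = ?
print(params)  # ['dc:title', 'Semantic Web']
```

A full engine would join one such subquery per triple pattern on shared variables; this sketch only shows the pattern-to-SQL mapping the abstract relies on.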

Automatic Web Service Composition based on STRIPS (STRIPS 기반의 자동 웹 서비스 Composition)

  • 강민구; 김제민; 박영택; 박찬규; 문진영
    • Proceedings of the Korean Information Science Society Conference / 2003.10a / pp.127-129 / 2003
  • The ultimate goal of Semantic Web services is for networked programs and devices to interact closely with one another without human commands. The Semantic Web has studied frameworks for capturing the meaning of information, building on AI-based languages such as RDF, RDFS, DAML+OIL, and OWL. To give precise meaning not only to information on the web but also to services, the Semantic Web community proposed DAML-S, an ontology technology based on the DAML+OIL ontology. DAML-S consists of four upper ontologies, Service, Service Profile, Service Model, and Service Grounding; using the Service Profile and Service Model ontologies and their instances in particular, services matching a user's request can be discovered and composed. When the user's request cannot be satisfied by a single atomic service and several atomic services must be used together, a web service composer is needed in addition to a Semantic Web service discovery system. In this paper we propose a STRIPS-style automatic web service composer that builds the required chain of web services from the user's request without human intervention. (A minimal STRIPS-style chaining sketch follows this entry.)

  • PDF
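To make the STRIPS-style chaining idea concrete, here is a minimal, hypothetical sketch; the service names, inputs, and outputs are invented, and a real composer would work over DAML-S Service Profiles rather than plain sets. Each service is modeled with preconditions (required inputs) and effects (produced outputs), and a forward search chains services until the goal outputs are covered.

```python
# Minimal forward-chaining composer in the spirit of STRIPS planning.
# Services, inputs, and outputs below are invented for illustration only.
SERVICES = {
    "GeocodeAddress":   {"pre": {"address"},             "eff": {"coordinates"}},
    "FindNearbyHotels": {"pre": {"coordinates"},          "eff": {"hotel_list"}},
    "BookHotel":        {"pre": {"hotel_list", "dates"},  "eff": {"reservation"}},
}

def compose(available, goal):
    """Greedily chain services whose preconditions are satisfied until the goal is reached."""
    state, plan = set(available), []
    progress = True
    while not goal <= state and progress:
        progress = False
        for name, svc in SERVICES.items():
            if name not in plan and svc["pre"] <= state and not svc["eff"] <= state:
                plan.append(name)
                state |= svc["eff"]
                progress = True
    return plan if goal <= state else None

print(compose({"address", "dates"}, {"reservation"}))
# ['GeocodeAddress', 'FindNearbyHotels', 'BookHotel']
```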

A Study on Method for Extraction of Emotion in Newspaper (신문기사의 감정추출 방법에 관한 연구)

  • Baek, Sun-Kyoung; Kim, Pan-Koo
    • Proceedings of the Korean Information Science Society Conference / 2005.07b / pp.562-564 / 2005
  • In information retrieval, users' queries are broadening from objective keywords to vocabulary that carries the emotions people subjectively think and feel. For emotion-based retrieval of newspaper articles, this paper parses and part-of-speech-tags each article, extracts its verbs, and then extracts the article's emotion using the relations among the verbs that carry emotion. To model the relations among emotion verbs, we built an ontology of emotion verbs using OWL/RDF(S) and proposed an edge-based similarity measure. Because the proposed method can extract multiple emotions and measure their intensity, it should be useful for future emotion-based retrieval of newspaper articles. (A minimal edge-based similarity sketch follows this entry.)

  • PDF
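Edge-based similarity, as mentioned in the abstract, typically scores two concepts by the length of the shortest path between them in the ontology graph. A minimal sketch under that assumption follows; the emotion-verb hierarchy and the 1/(1+distance) scoring are illustrative stand-ins, not the paper's ontology or exact formula.

```python
from collections import deque

# Invented fragment of an emotion-verb hierarchy: child -> parent edges.
PARENT = {
    "rejoice": "joy", "delight": "joy",
    "grieve": "sadness", "mourn": "sadness",
    "joy": "emotion", "sadness": "emotion",
}

def neighbors(node):
    """Neighbors of a node, treating the hierarchy as an undirected graph."""
    nbrs = set()
    if node in PARENT:
        nbrs.add(PARENT[node])
    nbrs |= {c for c, p in PARENT.items() if p == node}
    return nbrs

def edge_similarity(a, b):
    """Shortest-path (edge-counting) similarity: 1 / (1 + path length)."""
    seen, frontier, dist = {a}, deque([(a, 0)]), None
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            dist = d
            break
        for n in neighbors(node) - seen:
            seen.add(n)
            frontier.append((n, d + 1))
    return 0.0 if dist is None else 1.0 / (1.0 + dist)

print(edge_similarity("rejoice", "delight"))  # siblings, path length 2 -> 1/3
print(edge_similarity("rejoice", "grieve"))   # path length 4 -> 1/5
```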

Analysis of Semantic Web Based Rules Using Automatic Reasoning (자동 추론을 이용한 시맨틱 웹기반의 Rules 분석)

  • 양종원; 이상용
    • Proceedings of the Korean Information Science Society Conference / 2004.04b / pp.643-645 / 2004
  • As the importance of the Semantic Web has recently come to the fore, related research is being actively pursued in many fields. Semantic Web technology consists broadly of RDF with its unified data model; languages defined on top of it that can express meaning, such as DAML+OIL (OWL); ontologies that standardize the terms used to describe web resources; and tools that support creating and processing these semantic artifacts. Ontologies for the Semantic Web have been studied extensively with many case studies, but research on rules for the Semantic Web remains weak. This paper analyzes RuleML, which is being developed on the basis of existing automatic reasoning methods for operating rules, and examines how to share knowledge between systems by improving the interoperability of rules across heterogeneous systems.

  • PDF

Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)

  • Kim, Kwangmin; Sohn, Yonglak
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.171-193 / 2018
  • The LOD (Linked Open Data) cloud is a practical implementation of the Semantic Web. We suggest a new method that provides identity links conveniently in the LOD cloud and allows changes in an LOD to be reflected in search results without omission. An LOD publishes detailed descriptions of entities in the form of RDF triples. An RDF triple is composed of a subject, a predicate, and an object and presents a detailed description of an entity. Links in the LOD cloud, called identity links, are realized by asserting that entities of different RDF triples are identical. Currently, an identity link is provided by explicitly creating a link triple whose subject and object are associated with the source and target entities; link triples are then appended to the LOD. With identity links, knowledge obtained from one LOD can be expanded with knowledge from other LODs. The goal of the LOD cloud is to provide users with opportunities for knowledge expansion. Appending link triples to an LOD, however, requires discovering identity links between entities one by one, which is seriously difficult given the enormous scale of the LOD cloud, and newly added entities cannot be reflected in search results until identity links pointing to them are serialized and published. Instead of creating enormous numbers of identity links, we propose that each LOD prepare its own link policy. The link policy specifies a set of target LODs to link to and the constraints necessary to discover identity links to entities in those target LODs. During a search, newly added entities can then be accessed and reflected in the results without omission by referencing the link policies. A link policy specifies a set of predicate pairs for discovering identity between associated entities in the source and target LODs. For the link policy specification, we propose a set of vocabularies that conform to RDFS and OWL. Identity between entities is evaluated according to the similarity of the objects of the source and target entities that are associated with the predicate pairs in the link policy. We implemented a system called the Change Acceptable In-Depth Searching System (CAIDS). With CAIDS, a user's search request starts from the depth_0 LOD, i.e., surface searching. Referencing the link policies of the LODs, CAIDS proceeds to in-depth searching over the LODs at the next depths, and to supplement the identity links derived from the link policies, it also uses explicit link triples. CAIDS's in-depth searching progresses by following the identity links. The content of an entity obtained from the depth_0 LOD is expanded with the content of entities in other LODs that have been found to be identical to the depth_0 entity. Expanding the content of a depth_0 entity without the user being aware of those other LODs realizes knowledge expansion, the goal of the LOD cloud; the more identity links in the LOD cloud, the wider the content expansion. We suggest a new way to create identity links abundantly and supply them to the LOD cloud. Experiments with CAIDS were performed against the DBpedia LODs of Korea, France, Italy, Spain, and Portugal. They show that CAIDS provides appropriate expansion and inclusion ratios as long as the degree of similarity between source and target objects is 0.8 to 0.9. The expansion ratio at each depth is the ratio of the entities discovered at that depth to the entities of the depth_0 LOD; the inclusion ratio at each depth is the ratio of the entities discovered only with explicit links to the entities discovered only with link policies. With similarity degrees below 0.8, expansion becomes excessive and the contents become distorted, while a similarity degree of 0.8 to 0.9 also yields an appropriate number of RDF triples in the search results. The experiments also evaluated the confidence degree of the contents expanded by in-depth searching. The confidence degree of content is directly coupled with the identity ratio of an entity, which is its degree of identity to the depth_0 entity. The identity ratio of an entity is obtained by multiplying the source LOD's confidence by the source entity's identity ratio. By tracing the identity links in advance, an LOD's confidence is evaluated according to the number of identity links incoming to its entities. In evaluating the identity ratio, the concept of identity agreement, meaning that multiple identity links point to a common entity, is considered. With identity agreement, the experimental results show that the identity ratio decreases as the depth deepens but rebounds as the depth deepens further; for each entity, as the number of identity links increases, the identity ratio rebounds earlier and eventually reaches 1. We found that more than eight identity links per entity would lead users to trust the expanded contents. The proposed link-policy-based in-depth searching method is expected to contribute abundant identity links to the LOD cloud.
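As a rough sketch of the link-policy identity check described above (the predicate pairs, similarity function, and threshold below are simplified assumptions, not the paper's RDFS/OWL vocabulary): for each predicate pair in the policy, the objects of the source and target entities are compared, and identity is asserted when the average similarity clears the threshold.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Placeholder string similarity; the paper's similarity measure may differ."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def identical(source_entity, target_entity, link_policy):
    """Decide identity by comparing objects paired through the policy's predicate pairs."""
    scores = []
    for src_pred, tgt_pred in link_policy["predicate_pairs"]:
        if src_pred in source_entity and tgt_pred in target_entity:
            scores.append(similarity(source_entity[src_pred], target_entity[tgt_pred]))
    return bool(scores) and sum(scores) / len(scores) >= link_policy["threshold"]

# Invented example entities from two DBpedia LODs, keyed by predicate.
policy = {"predicate_pairs": [("rdfs:label", "rdfs:label"),
                              ("dbo:birthDate", "dbo:birthDate")],
          "threshold": 0.85}
src = {"rdfs:label": "Marie Curie", "dbo:birthDate": "1867-11-07"}
tgt = {"rdfs:label": "Marie Curie", "dbo:birthDate": "1867-11-07"}
print(identical(src, tgt, policy))  # True
```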

Design and Development of a System for Mapping of Medical Standard Terminologies (표준 의학 용어체계의 매핑을 위한 시스템의 설계 및 개발)

  • Lee, In-Keun; Kim, Hwa-Sun; Cho, Hune
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.2 / pp.237-243 / 2011
  • Various standard terminologies in the medical field are constructed individually with different structures, so cross-walk information between the terminologies is needed to combine and use them. Many mapping tools have been developed to create this information; however, because those tools handle only specific terminologies, the information they can create is restricted. To overcome this problem, tools have been developed that perform mapping tasks by combining various terminologies, but they have difficulty combining the terminologies automatically because of the terminologies' differing structures. In this paper, we therefore propose a method for combining and using the terminologies in the developed mapping system while keeping the terminologies' original structures. In the proposed method, additional terminologies can be added to the mapping system by creating metadata containing information on the location and structure of each terminology, and the system can cope flexibly with changes to the structure or content of the terminologies. Moreover, various types of mapping information can be defined and created in the system because the mapping data are constructed as ontology triples; the mapping data can therefore be transformed and distributed in different formats such as OWL, RDF, and Excel. Finally, we confirmed the usefulness of the mapping system based on the proposed method through experiments on creating mapping data.
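The abstract's point that mapping data are stored as ontology triples, and can therefore be exported to formats such as RDF or OWL, can be sketched with rdflib. The namespaces, the SKOS mapping predicate, and the two codes below are hypothetical stand-ins chosen only to illustrate a triple-shaped mapping record; the paper's actual vocabulary is not specified here.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import SKOS

# Hypothetical namespaces for two medical terminologies; real code systems differ.
ICD = Namespace("http://example.org/icd10#")
SNOMED = Namespace("http://example.org/snomedct#")

g = Graph()
g.bind("skos", SKOS)

# One mapping record as a triple: an ICD-10 code maps exactly to a SNOMED CT concept.
g.add((ICD["J45"], SKOS.exactMatch, SNOMED["195967001"]))

# Because the mapping lives in a graph, it can be serialized to RDF/XML, Turtle, etc.
print(g.serialize(format="xml"))
```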

Linkage Expansion in Linked Open Data Cloud using Link Policy (연결정책을 이용한 개방형 연결 데이터 클라우드에서의 연결성 확충)

  • Kim, Kwangmin; Sohn, Yonglak
    • Journal of KIISE / v.44 no.10 / pp.1045-1061 / 2017
  • This paper suggests a method to expand linkages in the Linked Open Data (LOD) cloud, a practical consequence of the Semantic Web. Contrary to initial expectations, the LOD cloud has not been used actively because of its lack of linkages. The current method of establishing links, creating explicit links and attaching them to LODs, makes it hard to reflect changes in target LODs in a timely manner and to maintain the links periodically. Instead of attaching links, this paper suggests that each LOD prepare a link policy and publish it together with the LOD. The link policy specifies the target LODs, predicate pairs, and similarity degrees used to decide whether to establish a link. We implemented a system that performs in-depth searching through LODs using their link policies and published the system's APIs on GitHub. Experiments on the in-depth searching system with similarity degrees of 1.0 to 0.8 and a depth level of 4 produced search results that include 91% to 98% of the trustworthy links and expand the triples by about 170%.
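The abstract describes publishing a link policy with each LOD and following those policies depth by depth during in-depth searching. A simplified, hypothetical skeleton of that traversal follows; the policy format, the find_identical matcher, and the depth limit of 4 are illustrative assumptions rather than the published API.

```python
def in_depth_search(entity, lod, policies, find_identical, max_depth=4):
    """Expand an entity's content by following link policies up to max_depth LODs deep."""
    content = dict(entity)                       # depth_0 content (surface searching)
    frontier = [(entity, lod, 0)]
    visited = {(lod, entity["uri"])}
    while frontier:
        current, current_lod, depth = frontier.pop()
        if depth >= max_depth:
            continue
        for target_lod in policies.get(current_lod, {}).get("targets", []):
            # find_identical applies the policy's predicate pairs and similarity threshold.
            for match in find_identical(current, current_lod, target_lod):
                key = (target_lod, match["uri"])
                if key not in visited:
                    visited.add(key)
                    content.update(match)        # knowledge expansion
                    frontier.append((match, target_lod, depth + 1))
    return content

# Trivial usage with a stub matcher (returns no matches), just to show the call shape.
stub = lambda entity, src_lod, tgt_lod: []
print(in_depth_search({"uri": "ex:Seoul", "rdfs:label": "Seoul"}, "dbpedia-ko",
                      {"dbpedia-ko": {"targets": ["dbpedia-fr"]}}, stub))
```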

A Semantic Similarity Decision Using Ontology Model Base On New N-ary Relation Design (새로운 N-ary 관계 디자인 기반의 온톨로지 모델을 이용한 문장의미결정)

  • Kim, Su-Kyoung; Ahn, Kee-Hong; Choi, Ho-Jin
    • Journal of the Korean Society for Information Management / v.25 no.4 / pp.43-66 / 2008
  • Much research is currently being conducted on describing users' information needs for the interfaces of information retrieval systems and web search engines, but describing information needs in natural-language form remains difficult. The reason is that existing information retrieval models cannot provide semantic similarity that fully accommodates the variety of information-need expressions and their semantic relevance. This study therefore proposes a decision method that supports the description of users' information needs by combining description logic, the knowledge representation basis of OWL, with vector-model-based weights between concepts, so that both the variety of information-need expressions and their semantic relevance can be satisfied. Experimental results show that the proposed method performs well in deciding the semantic similarity of polysemous words and synonyms.
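The abstract combines description-logic reasoning with vector-model weights between concepts. A minimal sketch of the vector-model side only (the feature weights below are invented, and the paper's weighting scheme may differ) uses cosine similarity between weighted concept vectors to separate two senses of a polysemous word:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse weight vectors given as dicts."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Invented feature weights for two senses of the polysemous word "bank".
bank_finance = {"money": 0.9, "loan": 0.7, "account": 0.6}
bank_river   = {"water": 0.8, "shore": 0.7, "loan": 0.1}
query_terms  = {"money": 1.0, "account": 0.5}

print(cosine(query_terms, bank_finance))  # high: the financial sense fits the query
print(cosine(query_terms, bank_river))    # zero: no features shared with the query
```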

Development of an Editor for Reference Data Library Based on ISO 15926 (ISO 15926 기반의 참조 데이터 라이브러리 편집기의 개발)

  • Jeon, Youngjun; Byon, Su-Jin; Mun, Duhwan
    • Korean Journal of Computational Design and Engineering / v.19 no.4 / pp.390-401 / 2014
  • ISO 15926 is an international standard for the integration of lifecycle data for process plants, including oil and gas facilities. From the viewpoint of information modeling, ISO 15926 Part 2 provides the general data model designed to be used in conjunction with reference data. Reference data are standard instances that represent classes, objects, properties, and templates common to a number of users, process plants, or both. ISO 15926 Parts 4 and 7 provide the initial set of classes, objects, and properties and the initial set of templates, respectively. User-defined reference data specific to companies or organizations are defined by inheriting from the initial reference data and the initial set of templates. To support the extension of reference data and templates, an editor is needed that provides creation, deletion, and modification of user-defined reference data. In this study, such an editor for reference data based on ISO 15926 was developed. Sample reference data were encoded in OWL (Web Ontology Language) according to the specification of ISO 15926 Part 8. iRINGTools and dot15926Editor were benchmarked for the design of the GUI (graphical user interface). Reference data search, creation, modification, and deletion were implemented with the XML (Extensible Markup Language) DOM (Document Object Model) and SPARQL (SPARQL Protocol and RDF Query Language).
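Since the editor's search function is implemented with SPARQL over OWL-encoded reference data, a minimal sketch of that kind of lookup with rdflib follows; the tiny in-memory class hierarchy and the rdl namespace are invented stand-ins for ISO 15926 reference data, not the paper's actual library.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

RDL = Namespace("http://example.org/rdl#")   # hypothetical reference-data namespace

g = Graph()
g.bind("rdl", RDL)
# A tiny stand-in for reference data: a user-defined class inherits from an initial class.
g.add((RDL.Pump, RDF.type, OWL.Class))
g.add((RDL.CentrifugalPump, RDF.type, OWL.Class))
g.add((RDL.CentrifugalPump, RDFS.subClassOf, RDL.Pump))

# SPARQL search: find every class defined under rdl:Pump.
results = g.query(
    "SELECT ?cls WHERE { ?cls rdfs:subClassOf rdl:Pump . }",
    initNs={"rdfs": RDFS, "rdl": RDL},
)
for row in results:
    print(row.cls)   # http://example.org/rdl#CentrifugalPump
```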