• Title/Summary/Keyword: Semantic Relational Information


Providing Approximate Answers Using a Knowledge Abstraction Hierarchy (지식 추상화 계층을 이용한 근사해 생성)

  • Huh, Soon-Young;Moon, Kae-Hyun
    • Asia Pacific Journal of Information Systems / v.8 no.1 / pp.43-64 / 1998
  • Cooperative query answering is a research effort to develop a fault-tolerant and intelligent database system using a semantic knowledge base constructed from the underlying database. Such a knowledge base has two aspects of usage. One is supporting the cooperative query answering process, providing both an exact answer and neighborhood information relevant to a query. The other is supporting ongoing maintenance of the knowledge base to accommodate changes in the knowledge content and the purpose of database usage. Existing studies have mostly focused on the cooperative query answering process and paid little attention to dynamic knowledge base maintenance. This paper proposes a multi-level knowledge representation framework called the Knowledge Abstraction Hierarchy (KAH) that can not only support cooperative query answering but also permit dynamic knowledge maintenance. On the basis of the KAH, a knowledge abstraction database is constructed on the relational data model; it accommodates diverse knowledge maintenance needs and flexibly facilitates cooperative query answering. In terms of knowledge maintenance, database operations are discussed for the cases where either the internal contents of a given KAH change or the structure of the KAH itself changes. In terms of cooperative query answering, four types of vague queries are discussed: approximate selection, approximate join, conceptual selection, and conceptual join. A prototype system has been implemented at KAIST and is being tested with a personnel database system to demonstrate the usefulness and practicality of the knowledge abstraction database in ordinary database application systems. (An illustrative code sketch of approximate selection follows this entry.)

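The following is a minimal, hypothetical sketch of the approximate-selection idea referenced above: a value abstraction hierarchy relaxes an exact predicate into its neighborhood at the next abstraction level. The hierarchy, table, and column names are invented for illustration and are not taken from the paper.

```python
# A toy value-abstraction hierarchy used to relax an exact predicate
# into its "neighborhood" (values sharing the same abstract concept).
# All data below is hypothetical.

ABSTRACTION = {                      # specific value -> abstract concept
    "C":      "Programming Language",
    "Python": "Programming Language",
    "Java":   "Programming Language",
    "Oracle": "DBMS",
    "MySQL":  "DBMS",
}

EMPLOYEES = [
    {"name": "Kim",  "skill": "C"},
    {"name": "Lee",  "skill": "Java"},
    {"name": "Park", "skill": "Oracle"},
]

def approximate_select(rows, column, value):
    """Return exact matches plus neighborhood rows whose value shares
    the same abstract concept in the hierarchy."""
    concept = ABSTRACTION.get(value)
    exact = [r for r in rows if r[column] == value]
    neighbors = [r for r in rows
                 if r[column] != value and ABSTRACTION.get(r[column]) == concept]
    return exact, neighbors

exact, neighbors = approximate_select(EMPLOYEES, "skill", "C")
print("exact:", exact)             # employees whose skill is exactly C
print("neighborhood:", neighbors)  # employees with other programming languages
```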

Development of OOKS : a Knowledge Base Model Using an Object-Oriented Database (객체지향 데이터베이스를 이용한 지식베이스 모형(OOKS) 개발)

  • 허순영;김형민;양근우;최지윤
    • Journal of Intelligence and Information Systems / v.5 no.1 / pp.13-34 / 1999
  • Building a knowledge base effectively has been an important research area in the expert systems field. A variety of approaches have been studied to represent knowledge bases for expert systems, including rules, semantic networks, and frames. As knowledge bases grow larger and more complex, the integration of knowledge base and database technology becomes more important for processing large amounts of data. However, relational database management systems show many limitations in handling complicated human knowledge because of their simple two-dimensional table structure. In this paper, we propose the Object-Oriented Knowledge Store (OOKS), a knowledge base model based on a frame structure using an object-oriented database. In the proposed model, rules for inference and facts about objects are managed in one uniform structure, so knowledge and data can be tightly coupled and reasoning performance can be improved. To build a knowledge base, a knowledge script file representing rules and facts is used, and the script file is translated into a frame structure in the database system. In particular, because the frame structure is carried over into the database model as it is, the management and utilization of knowledge in expert systems is facilitated. To test the appropriateness of the proposed knowledge base model, a prototype system has been developed using a commercial ODBMS called ObjectStore and the C++ programming language. (A small illustrative sketch of the frame idea follows this entry.)

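Below is a rough, hypothetical sketch of the frame idea behind OOKS: facts (slot values) and the rules that reason over them live in one structure. The paper implements this on the ObjectStore ODBMS in C++; the Python classes, slot names, and rule used here are purely illustrative.

```python
# A frame keeps its facts (slots) and the rules attached to it together,
# so knowledge and data stay tightly coupled. Names are illustrative only.

class Frame:
    def __init__(self, name, **slots):
        self.name = name
        self.slots = dict(slots)   # facts about the object
        self.rules = []            # (condition, action) pairs attached to the frame

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def infer(self):
        """Fire every rule whose condition holds for this frame's slots."""
        for condition, action in self.rules:
            if condition(self.slots):
                action(self.slots)

customer = Frame("customer-001", age=35, income=52000, grade=None)
customer.add_rule(
    condition=lambda s: s["income"] > 50000,
    action=lambda s: s.update(grade="premium"),
)
customer.infer()
print(customer.slots)   # {'age': 35, 'income': 52000, 'grade': 'premium'}
```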

Implementation of Query Processing System in Temporal Databases (시간지원 데이터베이스의 질의처리 시스템 구현)

  • Lee, Eon-Bae;Kim, Dong-Ho;Ryu, Keun-Ho
    • The Transactions of the Korea Information Processing Society / v.5 no.6 / pp.1418-1430 / 1998
  • Temporal databases support efficient management of history by means of valid time and transaction time. Valid time is the time when a fact holds in the real world, and transaction time is the time when the data is stored in the database. A Temporal Query Processing System (TQPS) should therefore be extended to process the temporal operations on historical information in user queries as well as the conventional relational operations. In this paper, an extended temporal query processing system is described that is based on a previous temporal query processing system for TQuel (Temporal Query Language) and consists of a temporal syntax analyzer, a temporal semantic analyzer, a temporal code generator, and a temporal interpreter. Algorithms for additional functions such as transaction time management, temporal aggregates, temporal views, and temporal joins, as well as heuristic optimization functions, are presented together with examples of how they are processed. (A simplified bitemporal query sketch follows this entry.)

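The sketch below illustrates, under simplifying assumptions, the bitemporal idea the abstract relies on: each row carries a valid-time and a transaction-time interval, and a temporal selection restricts both. TQuel expresses this with its own query clauses; the plain-SQL table, column names, and data here are hypothetical.

```python
# Every row carries a valid-time interval (when the fact holds in the real
# world) and a transaction-time interval (when the row was current in the DB).

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE salary (
        emp        TEXT,
        amount     INTEGER,
        valid_from TEXT, valid_to TEXT,   -- when the salary was in effect
        tx_from    TEXT, tx_to   TEXT     -- when the row was current in the DB
    )""")
con.executemany(
    "INSERT INTO salary VALUES (?,?,?,?,?,?)",
    [("Kim", 3000, "1996-01-01", "1997-01-01", "1996-01-01", "9999-12-31"),
     ("Kim", 3500, "1997-01-01", "9999-12-31", "1997-01-01", "9999-12-31")])

def salary_as_of(valid_date, tx_date="9999-12-30"):
    """Temporal selection: which salary was valid at valid_date,
    according to what the database believed at tx_date."""
    return con.execute(
        """SELECT emp, amount FROM salary
           WHERE valid_from <= ? AND ? < valid_to
             AND tx_from    <= ? AND ? < tx_to""",
        (valid_date, valid_date, tx_date, tx_date)).fetchall()

print(salary_as_of("1997-06-01"))   # [('Kim', 3500)]
```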

New Inlining Method for Effective Creation of Relations and Preservation of Constraints (효율적인 릴레이션 생성과 제약조건 보존을 위한 새로운 Inlining 기법)

  • An, Sung-Chul;Kim, Yeong-Ung
    • Journal of Korea Multimedia Society / v.9 no.7 / pp.773-781 / 2006
  • XML is a standard language for expressing and exchanging data over the Web. Recently, research has progressed on techniques for storing XML documents in an RDBMS and managing them there. These studies take a DTD document as input and generate a relational schema from it. Existing approaches, however, do not consider semantic preservation because they simplify the DTD. Further, because they focus only on preserving information such as content and structure, stored procedures or triggers must be used to maintain data integrity while XML documents are being stored, which is cumbersome. This paper proposes an improved Inlining technique that creates relations efficiently and preserves the semantics that can be inferred from a DTD. (A toy sketch of the basic Inlining mapping follows this entry.)

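As a rough illustration of the general Inlining idea (not the paper's specific improved algorithm), the sketch below folds single-occurrence children into the parent relation and turns repeating children into separate relations with a foreign key back to the parent. The toy DTD and naming scheme are assumptions.

```python
# Single-occurrence ("1" / "?") children are inlined into the parent relation;
# repeating ("*" / "+") children become separate relations with a foreign key.

DTD = {                      # element -> list of (child, cardinality)
    "book":   [("title", "1"), ("author", "*")],
    "author": [("name", "1"), ("email", "?")],
}

def inline(dtd, root):
    tables = {root: ["id"]}
    def walk(element, table):
        for child, card in dtd.get(element, []):
            if card in ("1", "?") and child not in dtd:
                tables[table].append(child)            # inline leaf attribute
            elif card in ("1", "?"):
                walk(child, table)                     # inline whole subtree
            else:                                      # '*' or '+': new relation
                tables[child] = ["id", f"{table}_id"]  # FK preserves parent-child link
                walk(child, child)
    walk(root, root)
    return tables

print(inline(DTD, "book"))
# {'book': ['id', 'title'], 'author': ['id', 'book_id', 'name', 'email']}
```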

City Information Model-based Information Management of Flood Damages (도시정보모델의 침수피해정보관리에서의 활용)

  • Park, Sang Il;Kim, Min-Su;Kim, Jong Myung;Lee, Sang-Ho
    • Journal of the Computational Structural Engineering Institute of Korea / v.28 no.4 / pp.385-392 / 2015
  • An open city information model can increase understanding of a situation, enable effective reuse of information by giving access to the semantic and relational conditions of objects, and support reliable decision making through links to external references. A city information model focused on terrain and buildings was implemented from actual data. In addition, a process for flooding simulation was proposed that uses hydraulic analysis data together with the city information model. Deaths and damages were estimated by the flooding simulation, and the model's usefulness was examined through detailed queries and responses over the city information model data, the hydraulic analysis data, and the estimated damages. (A hypothetical query sketch follows this entry.)

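Purely as an illustration of the kind of query the abstract mentions (not the authors' implementation), the sketch below joins hypothetical building attributes from a city model against simulated flood depths to flag affected buildings.

```python
# City objects carry semantic attributes; flood depths from a hydraulic
# analysis are joined against them to estimate exposure. All data is made up.

buildings = [
    {"id": "B1", "ground_level": 12.0, "use": "residential", "occupants": 40},
    {"id": "B2", "ground_level": 10.5, "use": "commercial",  "occupants": 15},
    {"id": "B3", "ground_level": 14.2, "use": "residential", "occupants": 22},
]
water_level = {"B1": 11.3, "B2": 11.3, "B3": 11.3}   # simulated water level (m)

def flooded(buildings, water):
    """Return buildings whose ground level lies below the simulated water level."""
    return [b for b in buildings if water.get(b["id"], 0.0) > b["ground_level"]]

for b in flooded(buildings, water_level):
    print(f"{b['id']} ({b['use']}): {b['occupants']} occupants potentially affected")
```
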
A Study of Dynamic Web Ontology for Comparison-shopping Agent based on Semantic Web (시멘틱 웹 기반의 비교구매 에이전트를 위한 동적 웹 온톨로지에 대한 연구)

  • Kim, Su-Kyoung;Ahn, Ki-Hong
    • Journal of Intelligence and Information Systems / v.11 no.2 / pp.31-45 / 2005
  • In this paper, wrapper technology is used to acquire commodity information for a digital camcorder from the HTML pages that each electronic commerce store represents differently; the information is converted into RDF triples and RDF documents through RDF document converters, and a metadata schema for the digital camcorder is designed. Based on the designed metadata schema, the data are converted into an OWL Web ontology and saved in a digital camcorder domain ontology store, the DCC knowledge base ontology (DCCKBO), implemented on a relational database. Through comparison, mapping, and inference between the RDF data and the DCCKBO, buyers are provided with the DCC information of the store offering the best purchasing conditions. A dynamic Web ontology is proposed that infers the contents of the best commodity purchasing information and defines the domain ontology stored in the DCCKBO. (An illustrative RDF sketch follows this entry.)

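The sketch below shows, using rdflib, how extracted product facts might be expressed as RDF triples and queried for the best offer. The namespace, property names, and prices are invented; the paper's DCCKBO is an OWL ontology stored in a relational database, which this sketch does not reproduce.

```python
# Product facts extracted from a store page expressed as RDF triples,
# then queried with SPARQL for the cheapest digital camcorder offer.

from rdflib import Graph, Literal, Namespace, URIRef

DCC = Namespace("http://example.org/dcc#")      # hypothetical vocabulary
g = Graph()

camcorder = URIRef("http://example.org/product/hdr-100")
g.add((camcorder, DCC.category, Literal("digital camcorder")))
g.add((camcorder, DCC.store, Literal("StoreA")))
g.add((camcorder, DCC.price, Literal(850000)))

rows = g.query("""
    PREFIX dcc: <http://example.org/dcc#>
    SELECT ?p ?price WHERE {
        ?p dcc:category "digital camcorder" ;
           dcc:price ?price .
    } ORDER BY ?price LIMIT 1
""")
for p, price in rows:
    print(p, price)
```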

Study on a Methodology for Developing Shanghanlun Ontology (상한론(傷寒論)온톨로지 구축 방법론 연구)

  • Jung, Tae-Young;Kim, Hee-Yeol;Park, Jong-Hyun
    • Journal of Physiology & Pathology in Korean Medicine / v.25 no.5 / pp.765-772 / 2011
  • Knowledge represented by formal logic is widely used in many domains such as artificial intelligence, information retrieval, and e-commerce. In the medical field, retrieval of medical documentary records, hospital information systems, medical data sharing, remote treatment, and expert systems all need knowledge representation technology. To retrieve information intelligently and provide advanced information services, a systematically controlled mechanism is needed to represent and share knowledge. Importantly, medical experts' knowledge should be represented in a form that is understandable both to computers and to humans so that it can be applied to medical information systems that support decision making. It should also have a structure that is suitable and efficient for its purposes, including reasoning, extensibility of knowledge, management of data, accuracy of expression, and diversity. We call such a machine-processable representation an ontology. An ontology can be used to represent traditional medicine knowledge in a structured and systematic way with visualization, and it can also serve as educational material. The authors therefore developed a Shanghanlun ontology as a worked example, suggesting both a methodology for ontology development and a model for structuring traditional medical knowledge; the result can help students learn Shanghanlun through a graphical representation of its knowledge. We analyzed the text of Shanghanlun to construct a relational database containing its original text, symptoms, and herb formulas. We then classified the terms according to certain criteria and confirmed the structure of the ontology describing the semantic relations between the terms, developing the ontology with visual representation in mind. The ontology developed in this study provides a database covering formulas, herbs, symptoms, disease names, and the text written in Shanghanlun. Contents can easily be retrieved through their semantic relations, which makes it convenient to search and learn the knowledge of Shanghanlun; related concepts can be displayed by searching terms, and expanded information is provided with a simple click. The work has some limitations, such as standardization problems, short coverage of patterns (證), and errors in Chinese character input, but we believe this research can serve as a foundation for making traditional medicine more structured and systematic, for developing application software, and for applying the ontology in Shanghanlun education. (A structural sketch of such term relations follows this entry.)

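As a purely structural illustration of the semantic relations described above, the sketch below stores labelled term-to-term relations and expands everything directly linked to a term. The node names are placeholders, not actual Shanghanlun content.

```python
# Terms (diseases, symptoms, formulas, herbs) are nodes; semantic relations
# between them are labelled edges, so related concepts can be expanded from
# any term. Node names below are placeholders.

RELATIONS = [
    ("Disease-Taiyang", "has_symptom",   "Symptom-Fever"),
    ("Disease-Taiyang", "has_symptom",   "Symptom-Chills"),
    ("Disease-Taiyang", "treated_by",    "Formula-A"),
    ("Formula-A",       "contains_herb", "Herb-1"),
    ("Formula-A",       "contains_herb", "Herb-2"),
]

def related(term):
    """Return every (relation, concept) pair directly linked to a term."""
    out = [(rel, obj) for subj, rel, obj in RELATIONS if subj == term]
    out += [(f"inverse:{rel}", subj) for subj, rel, obj in RELATIONS if obj == term]
    return out

for rel, concept in related("Formula-A"):
    print(rel, "->", concept)
# contains_herb -> Herb-1
# contains_herb -> Herb-2
# inverse:treated_by -> Disease-Taiyang
```
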
An Efficient Method for Logical Structure Analysis of HTML Tables (HTML 테이블의 논리적 구조분석을 위한 효율적인 방법)

  • Kim Yeon-Seok;Lee Kyong-Ho
    • Journal of Korea Multimedia Society / v.9 no.9 / pp.1231-1246 / 2006
  • HTML is a format for rendering Web documents visually and uses tables to present relational information. Since HTML has limits in terms of information processing and management by computer, it is important to transform HTML tables into XML documents, which can represent logical structure information. As a prerequisite for extracting information from the Web, this paper presents an efficient method for extracting logical structures from HTML tables and transforming them into XML documents. The proposed method consists of two phases: area segmentation and structure analysis. The area segmentation step removes noisy areas and extracts attribute and value areas through visual and semantic coherency checks. The hierarchical structure between attribute and value areas is then analyzed and transformed into an XML representation using a proposed table model. Experimental results with 1,180 HTML tables show that the proposed method performs better than the conventional method, resulting in an average precision of 86.7%. (A simplified table-to-XML sketch follows this entry.)

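The sketch below illustrates only the final transformation step under strong simplifying assumptions: given an already-segmented attribute row and value rows, the table is re-expressed as XML that exposes its logical structure. The real method's visual and semantic coherency analysis is not reproduced, and the cell matrix is invented.

```python
# Re-express a segmented table (attribute area = first row) as XML records.

import xml.etree.ElementTree as ET

cells = [
    ["Model", "Price", "Weight"],      # attribute area
    ["X100",  "1200",  "1.3kg"],       # value area
    ["X200",  "1500",  "1.1kg"],
]

def table_to_xml(cells):
    attributes, values = cells[0], cells[1:]
    root = ET.Element("table")
    for row in values:
        record = ET.SubElement(root, "record")
        for attr, val in zip(attributes, row):
            field = ET.SubElement(record, attr.lower())
            field.text = val
    return root

print(ET.tostring(table_to_xml(cells), encoding="unicode"))
# <table><record><model>X100</model><price>1200</price>...</record>...</table>
```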

Snippet Extraction Method using Fuzzy Implication Operator and Relevance Feedback (연관 피드백과 퍼지 함의 연산자를 이용한 스니핏 추출 방법)

  • Park, Sun;Shim, Chun-Sik;Lee, Seong-Ro
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.3 / pp.424-431 / 2012
  • In information retrieval, a search engine provides users with a ranking of web pages and a summary of each page's information. A snippet is the summary information representing a web page, and whether a user visits a page is influenced by its snippet. Users sometimes visit pages that do not match their intention when relying on snippets, because existing snippet extraction methods have difficulty accurately capturing user intention. To solve this problem, this paper proposes a new snippet extraction method using a fuzzy implication operator and relevance feedback. The proposed method uses relevance feedback to expand the user's query, and then applies the fuzzy implication operator between the expanded query and the web pages to extract snippets that better reflect the semantics of the user's intention. The experimental results demonstrate that the proposed method achieves better snippet extraction performance than the other methods. (An illustrative scoring sketch follows this entry.)

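The sketch below is one possible reading of the scoring idea: term weights from the feedback-expanded query are combined with sentence term weights through a fuzzy implication operator (the Łukasiewicz form is used here for illustration; the paper does not necessarily use this particular operator), and the best-scoring sentence becomes the snippet. All weights are made up.

```python
# Rank candidate sentences by the aggregated fuzzy implication degree
# from the expanded query terms to each sentence's term weights.

expanded_query = {"semantic": 0.9, "search": 0.7, "ontology": 0.4}  # after relevance feedback

sentences = {
    "s1": {"semantic": 0.8, "search": 0.6},
    "s2": {"search": 0.9, "ranking": 0.5},
    "s3": {"ontology": 0.7, "semantic": 0.3},
}

def implication(a, b):
    """Łukasiewicz fuzzy implication: I(a, b) = min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

def score(sentence_weights):
    """Average implication degree from query terms to sentence terms."""
    degrees = [implication(qw, sentence_weights.get(term, 0.0))
               for term, qw in expanded_query.items()]
    return sum(degrees) / len(degrees)

snippet = max(sentences, key=lambda s: score(sentences[s]))
print({s: round(score(w), 3) for s, w in sentences.items()}, "->", snippet)
```
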
An Efficient Algorithm for Detecting Tables in HTML Documents (HTML 문서의 테이블 식별을 위한 효율적인 알고리즘)

  • Kim Yeon-Seok;Lee Kyong-Ho
    • Journal of Korea Multimedia Society / v.7 no.10 / pp.1339-1353 / 2004
  • <TABLE> tags in HTML documents are widely used for formatting the layout of Web documents as well as for describing genuine tables with relational information. As a prerequisite for information extraction from the Web, this paper presents an efficient method for sophisticated table detection. The proposed method consists of two phases: preprocessing and attribute-value relation extraction. In the preprocessing phase, clearly genuine or non-genuine tables are filtered out by rules devised from a careful examination of the general characteristics of <TABLE> tags. The remaining tables are classified in the attribute-value relation extraction phase. Specifically, a value area is extracted and checked for syntactic coherency. Furthermore, the method looks for semantic coherency between the attribute area and the value area of a table for which the syntactic coherency check is not appropriate. Experimental results with 11,477 <TABLE> tags from 1,393 HTML documents show that the method performs better than previous works, resulting in an average precision of 97.54% and recall of 99.22%. (A simplified rule sketch follows this entry.)

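As a simplified illustration of the rule-based preprocessing phase, the sketch below classifies parsed <TABLE> features as genuine, non-genuine, or ambiguous, leaving the ambiguous tables for the attribute-value coherency analysis. The feature names and thresholds are assumptions, not the paper's actual rules.

```python
# Rule-based prefilter over <TABLE> characteristics; only ambiguous tables
# would go on to the attribute-value relation analysis.

def prefilter(table):
    """Return 'genuine', 'non-genuine', or 'ambiguous' for a parsed table."""
    if table["rows"] == 1 or table["cols"] == 1:
        return "non-genuine"            # single row/column: almost surely layout
    if table["has_nested_table"]:
        return "non-genuine"            # nested tables usually mean page layout
    if table["has_th"] and table["cols"] >= 2:
        return "genuine"                # explicit header cells with several columns
    return "ambiguous"                  # left to attribute-value relation analysis

tables = [
    {"rows": 5, "cols": 3, "has_th": True,  "has_nested_table": False},
    {"rows": 1, "cols": 4, "has_th": False, "has_nested_table": False},
    {"rows": 4, "cols": 2, "has_th": False, "has_nested_table": False},
]
print([prefilter(t) for t in tables])   # ['genuine', 'non-genuine', 'ambiguous']
```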