• Title/Summary/Keyword: Intelligent and Semantic Processing of Knowledge and Information

An Analysis of Existing Studies on Parallel and Distributed Processing of the Rete Algorithm (Rete 알고리즘의 병렬 및 분산 처리에 관한 기존 연구 분석)

  • Kim, Jaehoon
    • The Journal of Korean Institute of Information Technology / v.17 no.7 / pp.31-45 / 2019
  • The core technologies for intelligent services today are deep learning, that is, neural networks, and parallel and distributed processing technologies such as GPU parallel computing and big data. However, for future intelligent services and knowledge-sharing services through globally shared ontologies, there is a technology better suited than neural networks for representing and reasoning over knowledge. It is the IF-THEN knowledge representation of RIF or SWRL, the standard rule languages of the Semantic Web, over which inference can be performed efficiently using the Rete algorithm. However, when the Rete algorithm running on a single computer processes on the order of 100,000 rules, inference can take several tens of minutes, which is an obvious limitation. Therefore, in this paper, we analyze past and current studies on parallel and distributed processing of the Rete algorithm and examine what aspects should be considered to implement it efficiently.
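
The abstract above refers to IF-THEN rule inference with the Rete algorithm. As a rough illustration of the core idea only (not the parallel or distributed variants the paper surveys), the following sketch caches facts in predicate-indexed alpha memories and joins the two conditions of a hypothetical grandparent rule; all names and facts are invented.

```python
from collections import defaultdict

# Facts are (subject, predicate, object) triples.
facts = {("kim", "parent", "lee"), ("lee", "parent", "park")}

# Alpha memories: index facts by predicate so each rule condition scans only
# the facts that can possibly match it (the alpha-network idea behind Rete).
alpha = defaultdict(set)
for s, p, o in facts:
    alpha[p].add((s, o))

def fire_grandparent(alpha):
    """Beta-style join of parent(x, y) and parent(y, z) on the shared variable y."""
    by_subject = defaultdict(set)
    for y, z in alpha["parent"]:
        by_subject[y].add(z)
    derived = set()
    for x, y in alpha["parent"]:
        for z in by_subject.get(y, ()):
            derived.add((x, "grandparent", z))
    return derived

print(fire_grandparent(alpha))  # {('kim', 'grandparent', 'park')}
```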

Semantic Aspects of Negation as Schema (부정 스키마의 의미론적 양상)

  • Tae, Kang-Soo
    • The KIPS Transactions:PartB / v.9B no.1 / pp.23-28 / 2002
  • A fundamental problem in building an intelligent agent is that the agent does not understand the meaning of its perception or its action. One reason an agent cannot understand the world is the syntactic approach that converts a semantic feature into a simple string. To solve this problem, Cohen introduces a semantic approach in which an agent autonomously learns a meaningful representation of physical schemas, on which some advanced conceptual structures are built, by physically interacting with the environment using its own sensors and effectors. However, Cohen does not deal with the meta-level conceptual primitive that makes recognizing a schema possible. We propose that negation is a meta schema that enables an agent to recognize a physical schema, and we prove some semantic aspects of negation.

Schemes for Managing Semantic Web Data in Ubiquitous Environment (유비쿼터스 환경을 고려한 시맨틱 웹 데이터 관리 기법 연구)

  • Kim, Youn-Hee;Kim, Jee-Hyun
    • Journal of Digital Contents Society / v.10 no.1 / pp.1-10 / 2009
  • One important issue in generalizing the ubiquitous paradigm is the development of user-centered and intelligent ubiquitous computing systems. Sharing knowledge and correct communication between users and devices are needed to stay aware of continuously changing context information and to infer the services that suit users. The goal of this paper is to describe and effectively manage the meaning of the services or data that each device offers, for interaction between users and devices based on semantic relationships and reasoning. In this paper, we represent semantic data using OWL and design a ubiquitous-based intelligent system. We propose index structures and strategies to process queries classified by each subsystem and adopt labeling schemes to identify classes and resources in the semantic data. Using the proposed strategies, we can find exactly and quickly the devices that satisfy various user requests.
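
The abstract above mentions labeling schemes for identifying classes and resources. The paper's own scheme is not reproduced here; the sketch below shows one common alternative, interval labeling of a class hierarchy, in which subclass checks reduce to interval containment. The class names are invented.

```python
def label(hierarchy, root):
    """hierarchy: dict class -> list of direct subclasses; returns (start, end) per class."""
    labels, counter = {}, [0]
    def dfs(node):
        start = counter[0]
        counter[0] += 1
        for child in hierarchy.get(node, []):
            dfs(child)
        labels[node] = (start, counter[0])
        counter[0] += 1
    dfs(root)
    return labels

def is_subclass(labels, c, d):
    """True iff c's interval is contained in d's interval."""
    cs, ce = labels[c]
    ds, de = labels[d]
    return ds <= cs and ce <= de

classes = {"Device": ["Sensor", "Display"], "Sensor": ["Thermometer"]}
labels = label(classes, "Device")
print(is_subclass(labels, "Thermometer", "Device"))  # True
print(is_subclass(labels, "Display", "Sensor"))      # False
```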

Ontology-based Semantic Assembly Modeling for Collaborative Product Design (협업적 제품 설계를 위한 온톨로지 기반 시맨틱 조립체 모델링)

  • Yang Hyung-Jeong;Kim Kyung-Yun;Kim Soo-Hyung
    • The KIPS Transactions:PartB / v.13B no.2 s.105 / pp.139-148 / 2006
  • In a collaborative product design environment, communication between designers is important for capturing design intents and sharing a common view among different but semantically similar terms. The Semantic Web supports integrated and uniform access to information sources and services, as well as intelligent applications, through the explicit representation of semantics in ontologies. Ontologies provide a source of shared and precisely defined terms that can be used to describe web resources and improve their accessibility to automated processes. Therefore, employing ontologies in assembly modeling makes assembly knowledge accurate and machine interpretable. In this paper, we propose a framework for semantic assembly modeling that uses ontologies to share design information. An assembly modeling ontology serves as a formal, explicit specification of a shared conceptualization of assembly design modeling. In this paper, implicit assembly constraints are explicitly represented using OWL (Web Ontology Language) and SWRL (Semantic Web Rule Language). The assembly ontology also captures design rationale, including joint intent and spatial relationships.
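
The abstract above describes representing assembly constraints in OWL and SWRL. The sketch below only illustrates the general shape of such OWL assertions with rdflib; the asm: vocabulary and the bolted-joint example are invented placeholders, not the ontology defined in the paper, and SWRL rules are omitted.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL

ASM = Namespace("http://example.org/assembly#")  # placeholder namespace
g = Graph()
g.bind("asm", ASM)

# Illustrative class and property declarations.
g.add((ASM.BoltedJoint, RDF.type, OWL.Class))
g.add((ASM.BoltedJoint, RDFS.subClassOf, ASM.Joint))
g.add((ASM.hasSpatialRelation, RDF.type, OWL.ObjectProperty))

# An assembly instance: a bracket and a plate connected by a bolted joint.
g.add((ASM.joint1, RDF.type, ASM.BoltedJoint))
g.add((ASM.joint1, ASM.connects, ASM.bracket))
g.add((ASM.joint1, ASM.connects, ASM.plate))
g.add((ASM.bracket, ASM.hasSpatialRelation, ASM.plate))

print(g.serialize(format="turtle"))
```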

A Study on the Relation between Taxonomy of Nominal Expressions and OWL Ontologies (체언표현 개념분류체계와 OWL 온톨로지의 상관관계 연구)

  • Song Do-Gyu
    • Journal of the Korea Society of Computer and Information / v.11 no.2 s.40 / pp.93-99 / 2006
  • Ontology is an indispensable component in the intelligent and semantic processing of knowledge and information, such as in the Semantic Web. An ontology is generally considered to be constructed on the basis of a taxonomy of human concepts about the world. However, as human concepts are unstructured and obscure, ontology construction based on such a taxonomy cannot be realized systematically, let alone automatically. So, we try to do this from the relations among linguistic symbols regarded as representing human concepts, in short, words. We show the similarity between the taxonomy of human concepts and the relations among words, and we propose a methodology to construct and automatically generate ontologies from these relations among words, together with a series of algorithms to convert these relations into ontologies. This paper presents the process and a concrete application of this methodology.
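
The abstract above proposes generating ontologies from relations among words. As a hedged illustration of that general idea only, the sketch below maps hypernym pairs to rdfs:subClassOf axioms with rdflib; the word pairs and namespace are invented, and the paper's actual algorithms are not reproduced.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/concepts#")  # placeholder namespace
g = Graph()
g.bind("ex", EX)

# Invented (hyponym, hypernym) word pairs standing in for a lexical taxonomy.
hypernym_pairs = [("dog", "animal"), ("cat", "animal"), ("animal", "living_thing")]

for hypo, hyper in hypernym_pairs:
    g.add((EX[hypo], RDF.type, OWL.Class))
    g.add((EX[hyper], RDF.type, OWL.Class))
    g.add((EX[hypo], RDFS.subClassOf, EX[hyper]))  # word relation -> class hierarchy

print(g.serialize(format="turtle"))
```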

Index Ontology Repository for Video Contents (비디오 콘텐츠를 위한 색인 온톨로지 저장소)

  • Hwang, Woo-Yeon;Yang, Jung-Jin
    • Journal of Korea Multimedia Society / v.12 no.10 / pp.1499-1507 / 2009
  • With the abundance of digital contents, precise indexing technology is consistently required. To meet this requirement, an intelligent software entity needs to be the subject of information retrieval, and interoperability among intelligent entities, including humans, must be supported. In this paper, we analyze the unifying framework for multimodal indexing that Snoek and Worring proposed. Our work investigates a method for improving the authenticity of indexing information in content-based automated indexing techniques. It supports the creation and control of abstracted high-level indexing information through the ontological concepts of Semantic Web technologies. Moreover, it attempts to present a fundamental model that allows interoperability between human and machine and between machine and machine. A memory-resident model of ontology processing is inappropriate for taking in an enormous amount of indexing information; an ontology repository and an inference engine are required for consistent retrieval and reasoning over logically expressed knowledge. Our work presents an experiment on storing and retrieving the designed knowledge using the Minerva ontology repository, which demonstrates that the required techniques and efficiency are satisfied. Finally, the possibility of efficient indexing in connection with related research is also considered.
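
The abstract above stores indexing knowledge in the Minerva ontology repository and retrieves it by querying and reasoning. In the sketch below an in-memory rdflib graph stands in for the repository, simply to show the shape of shot-level index triples and a SPARQL retrieval; the vid: vocabulary and data are invented.

```python
from rdflib import Graph, Namespace, RDF, Literal

VID = Namespace("http://example.org/video#")  # placeholder indexing vocabulary
g = Graph()
g.bind("vid", VID)

# One invented shot-level index entry.
g.add((VID.shot_17, RDF.type, VID.Shot))
g.add((VID.shot_17, VID.depicts, VID.Goal))
g.add((VID.shot_17, VID.startTime, Literal(735.2)))

# Retrieve all shots indexed as depicting a goal, with their start times.
results = g.query("""
    PREFIX vid: <http://example.org/video#>
    SELECT ?shot ?t WHERE {
        ?shot a vid:Shot ;
              vid:depicts vid:Goal ;
              vid:startTime ?t .
    }""")
for shot, t in results:
    print(shot, t)
```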

eXtensible Rule Markup Language (XRML): Design Principles and Application (확장형 규칙 표식 언어(eXtensible Rule Markup Language) : 설계 원리 및 응용)

  • Lee, Jae-Kyu;Sohn, Mi-Ae;Kang, Ju-Young
    • Journal of Intelligence and Information Systems / v.8 no.1 / pp.141-157 / 2002
  • eXtensible Markup Language (XML) is a new markup language for data exchange on the Internet. In this paper, we propose a language, eXtensible Rule Markup Language (XRML), which is an extension of XML. The implicit rules embedded in Web pages should be identifiable, interchangeable in a structured rule format, and finally accessible by various applications; this becomes possible by using XRML. In this light, Web-based Knowledge Management Systems (KMS) can be integrated with rule-based expert systems. To this end, we propose six design criteria: Expressional Completeness, Relevance Linkability, Polymorphous Consistency, Applicative Universality, Knowledge Integrability, and Interoperability. Furthermore, we propose three components, RIML (Rule Identification Markup Language), RSML (Rule Structure Markup Language), and RTML (Rule Triggering Markup Language), and the Document Type Definition (DTD). We have designed XRML version 0.5 as illustrated above and developed its prototype, named Form/XRML, an automated form-processing application for the disbursement of research funds at the Korea Advanced Institute of Science and Technology (KAIST). Since XRML allows both human and software agents to use the rules, there is huge application potential. We expect that XRML can contribute to the progress of Semantic Web platforms, making knowledge management and e-commerce more intelligent. Since there are many emerging research groups and vendors investigating this issue, it will not take long to see commercial XRML products. Mature XRML applications may change the way information and knowledge systems are designed in the near future.
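
The abstract above introduces RIML, RSML, and RTML as XML-based rule markup, but their DTDs are not reproduced in this listing. The element names in the sketch below are therefore hypothetical placeholders; it only illustrates how structured rule markup embedded in a page could be parsed into an IF-THEN structure with a standard XML parser.

```python
import xml.etree.ElementTree as ET

# Invented rule-markup fragment; the real RSML elements are defined by the paper's DTD.
rsml_fragment = """
<rule id="discount">
  <if>
    <condition>order.total &gt;= 1000</condition>
  </if>
  <then>
    <action>apply_discount(0.05)</action>
  </then>
</rule>
"""

root = ET.fromstring(rsml_fragment)
rule = {
    "id": root.get("id"),
    "if": [c.text for c in root.find("if")],
    "then": [a.text for a in root.find("then")],
}
print(rule)  # {'id': 'discount', 'if': ['order.total >= 1000'], 'then': ['apply_discount(0.05)']}
```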

Research of Semantic Considered Tree Mining Method for an Intelligent Knowledge-Services Platform

  • Paik, Juryon
    • Journal of the Korea Society of Computer and Information / v.25 no.5 / pp.27-36 / 2020
  • In this paper, we propose a method to derive valuable but hidden information from data, the core foundation of the 4th Industrial Revolution, in order to pursue knowledge-based service fusion. Hyper-connected societies characterized by the IoT inevitably produce big data, and to derive optimal services for trouble situations, the data must first be processed to discover valuable information. A data-centric IoT platform is a platform to collect, store, manage, and integrate the data from various devices, and is actually a type of middleware platform. Its purpose is to provide suitable solutions for challenging problems after processing and analyzing the data, which depends on efficient and accurate algorithms performing the data analysis. To this end, we propose specially designed structures to store IoT data without losing their semantics and provide algorithms to discover the useful information, with several definitions and proofs to show their soundness.
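
The abstract above proposes structures that store tree-shaped IoT data without losing semantics and algorithms that mine useful information from them. The sketch below is not the paper's method; under invented data, it only illustrates one simple way to preserve parent-child semantics by flattening messages into rooted label paths and counting the frequent ones.

```python
from collections import Counter

def label_paths(tree, prefix=()):
    """tree: nested dict of labels; yields every rooted label path, preserving structure."""
    for label, subtree in tree.items():
        path = prefix + (label,)
        yield path
        if isinstance(subtree, dict):
            yield from label_paths(subtree, path)

# Two invented tree-shaped IoT messages.
messages = [
    {"home": {"livingroom": {"temp": {}, "humidity": {}}}},
    {"home": {"livingroom": {"temp": {}}, "kitchen": {"co2": {}}}},
]

counts = Counter(p for m in messages for p in label_paths(m))
min_support = 2
frequent = [p for p, c in counts.items() if c >= min_support]
print(frequent)  # [('home',), ('home', 'livingroom'), ('home', 'livingroom', 'temp')]
```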

Active Documents: Programs by Form Designers (능동문서: 서식설계자의 프로그램)

  • Nam, Chul-Ki;Bae, Jae-Hak;Yoo, Hae-Young
    • The KIPS Transactions:PartB / v.10B no.6 / pp.599-610 / 2003
  • The Web plays an important role as an information source, and most Web applications are document-centric. A document implies an intention of its designer, which can be actively utilized in the automation of business processes. Through an understanding of the intrinsic nature of a document's function, we can, in a special case, see a document as an executable computer program. For this approach, we propose an active document model composed of a form, a knowledge base, rules, and queries. For reusability and interoperability of a document, each component of the proposed model is uniformly represented in XML. The proposed active document not only plays a passive role in providing user interfaces, but is also a document over which a machine can reason and which it can process by reading the document-processing procedure and business rules intended by the document designers. Through this approach, documents can interact with machines and cooperate with other applications. To show the applicability of our active documents, we present a case study on the processing of purchase orders in a B2B e-Commerce system. This paper is expected to provide a framework for accelerating the development of intelligent applications through our approach, which regards a form document as a computer program. In short, the proposed active document contains a knowledge representation and a processing method; consequently, such documents will play an important role in realizing the concept of document pursued in the Semantic Web.
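
The abstract above models a document as a form plus knowledge base, rules, and queries, all represented in XML. The sketch below is a toy stand-in for that idea only: the element names, the purchase-order rule, and the eval-based rule check are invented placeholders, not the paper's XML schema or inference mechanism.

```python
import xml.etree.ElementTree as ET

# Invented "active document": the form data and its processing rules travel together.
document = """
<activeDocument>
  <form><field name="amount">15000</field><field name="buyerGrade">gold</field></form>
  <rules>
    <rule if="amount &gt; 10000 and buyerGrade == 'gold'" then="approve"/>
    <rule if="amount &gt; 10000" then="escalate"/>
  </rules>
</activeDocument>
"""

root = ET.fromstring(document)
fields = {f.get("name"): f.text for f in root.find("form")}
context = {"amount": float(fields["amount"]), "buyerGrade": fields["buyerGrade"]}

# A generic engine applies the document's own rules to its own form data.
for rule in root.find("rules"):
    if eval(rule.get("if"), {}, context):  # toy evaluator; a real engine would not use eval
        print("action:", rule.get("then"))
        break
# action: approve
```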

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.79-92 / 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method of integrating the national R&D data and assists users in navigating the integrated data via a knowledge map service. The knowledge map service is built using a lightweight ontology and a topic modeling method. The national R&D data is integrated with the research project at its center, i.e., the other R&D data such as research papers, patents, and reports are connected with the research project as its outputs. The lightweight ontology is used to represent the simple relationships between the integrated data, such as project-output relationships, document-author relationships, and document-topic relationships. The knowledge map enables us to infer further relationships, such as co-author and co-topic relationships. To extract the relationships between the integrated data, a Relational Data-to-Triples transformer is implemented. Also, a topic modeling approach is introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: one is a knowledge map used in the area of knowledge management to store, manage, and process an organization's data as knowledge; the other is a knowledge map for analyzing and representing knowledge extracted from science & technology documents. This research focuses on the latter. In this research, a knowledge map service is introduced for integrating the national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), which are the two major repositories and services of national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map. Using the lightweight ontology enables us to represent and process knowledge as a simple network, and it fits in with the knowledge navigation and visualization characteristics of the knowledge map. The lightweight ontology is used to represent the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process the ontology. In the ontologies, researchers are implicitly connected by the national R&D data through author relationships and performer relationships. A knowledge map for displaying the researchers' network is created; the network is built from the co-authoring relationships of the national R&D documents and the co-participation relationships of the national R&D projects. To sum up, a knowledge map-service system based on topic modeling and ontology is introduced for processing knowledge about the national R&D data such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system has three goals: 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide a semantic and topic-based information search on the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. The S&T information such as research papers, research reports, patents, and GTB is updated daily from NDSL, and the R&D project information, including participants and outputs, is updated from NTIS. The S&T information and the national R&D information are obtained and integrated into the integrated database. The knowledge base is constructed by transforming the relational data into triples referencing the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords representing them. The topic modeling approach enables us to extract the relationships and topic keywords based on semantics, not on simple keywords. Lastly, we show an experiment on the construction of the integrated knowledge base using the lightweight ontology and topic modeling, and the knowledge map services created based on the knowledge base are also introduced.
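
The abstract above mentions a Relational Data-to-Triples transformer that builds the knowledge base against a lightweight R&D ontology. The sketch below illustrates only that general transformation with rdflib; the table layout, URIs, and property names are invented, not the actual NDSL/NTIS schema or the paper's ontology.

```python
from rdflib import Graph, Namespace, RDF, Literal

RND = Namespace("http://example.org/rnd#")  # placeholder lightweight R&D ontology namespace
g = Graph()
g.bind("rnd", RND)

# Invented relational rows: (project_id, output_type, output_id, author)
rows = [
    ("P001", "Paper", "DOC-42", "Kim"),
    ("P001", "Patent", "PAT-7", "Lee"),
]

# Each row becomes triples linking a project to its outputs and authors.
for project_id, output_type, output_id, author in rows:
    project, output = RND[project_id], RND[output_id]
    g.add((project, RDF.type, RND.Project))
    g.add((output, RDF.type, RND[output_type]))
    g.add((project, RND.hasOutput, output))
    g.add((output, RND.hasAuthor, Literal(author)))

print(g.serialize(format="turtle"))
```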