• Title/Summary/Keyword: RDF model

Multilingual Product Retrieval Agent through Semantic Web and Semantic Networks (Semantic Web과 Semantic Network을 활용한 다국어 상품검색 에이전트)

  • Moon Yoo-Jin
    • Journal of Intelligence and Information Systems / v.10 no.2 / pp.1-13 / 2004
  • This paper presents a method for a multilingual product retrieval agent using XML and semantic networks in e-commerce. Product retrieval is an important process, since it forms the interface of customer contact with the e-commerce site. Keyword-based retrieval is efficient as long as the product information is structured and organized. But when product information is spread across many online shopping malls, especially when it is expressed in different languages with different cultural backgrounds, buyers' product retrieval requires language translation with ambiguities resolved in a specific context. This paper presents an RDF modeling case that resolves semantic problems in the representation of product information across the boundaries of language domains. Adopting the UNSPSC code system, this paper designs and implements an architecture for multilingual product retrieval agents. The architecture is based on a central repository model of product catalog management with distributed updating processes, and it includes the perspectives of both buyers and suppliers. The consistency and version management of product information are controlled by the UNSPSC code system. Multilingual product names are resolved by semantic networks, a thesaurus, and an ontology dictionary for product names.

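The abstract above describes resolving multilingual product names through a shared UNSPSC code. A minimal sketch of that idea in Python, where triples are plain tuples and the code value, predicates, and labels are illustrative assumptions rather than details from the paper:

```python
# Hypothetical sketch, not the paper's implementation: resolving multilingual
# product names through a shared UNSPSC class code. Triples are plain
# (subject, predicate, object) tuples; the code value and labels are invented.
UNSPSC = "43211503"  # illustrative UNSPSC code (e.g., notebook computers)

triples = [
    (UNSPSC, "rdf:type", "ex:Product"),
    (UNSPSC, "rdfs:label@en", "notebook computer"),
    (UNSPSC, "rdfs:label@ko", "노트북 컴퓨터"),
]

def label(code, lang):
    """Return the product name attached to `code` in language `lang`."""
    for s, p, o in triples:
        if s == code and p == f"rdfs:label@{lang}":
            return o
    return None

def translate(name, src, dst):
    """Map a product name from `src` to `dst` via the shared UNSPSC code."""
    for s, p, o in triples:
        if p == f"rdfs:label@{src}" and o == name:
            return label(s, dst)
    return None

print(translate("노트북 컴퓨터", "ko", "en"))
```

Because every shopping mall's product name maps to the same language-neutral code, translation reduces to two label lookups rather than direct name-to-name matching.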

An Exploratory Study on Applications of Semantic Web through the Technical Limitation Factors of Knowledge Management Systems (지식경영시스템의 기술적 한계요인분석을 통한 시맨틱 웹의 적용에 관한 탐색적 연구)

  • Joo Jae-Hun;Jang Gil-Sang
    • The Journal of Society for e-Business Studies / v.10 no.3 / pp.111-134 / 2005
  • Knowledge management is a core factor in achieving competitive advantage and improving business performance. New information technology is also a core enabler of innovation in knowledge management. The Semantic Web, whose goal is to realize a machine-processable Web, inevitably affects knowledge management. Therefore, we empirically analyze the relationship between user dissatisfaction and the barriers or limitations of knowledge management, and present methods by which the Semantic Web can overcome these limitations and support knowledge management processes. Based on a questionnaire survey of 222 respondents, we found that limitations of system quality, such as the user inconvenience of knowledge management systems and search and integration limitations, and limitations of knowledge quality, such as inappropriateness and untrustworthiness, significantly affected user dissatisfaction with knowledge management systems. Finally, we suggest a conceptual model of knowledge management systems whose components are resources, metadata, ontologies, and user and query layers.


Scalable RDFS Reasoning using Logic Programming Approach in a Single Machine (단일머신 환경에서의 논리적 프로그래밍 방식 기반 대용량 RDFS 추론 기법)

  • Jagvaral, Batselem;Kim, Jemin;Lee, Wan-Gon;Park, Young-Tack
    • Journal of KIISE / v.41 no.10 / pp.762-773 / 2014
  • As the Web of Data increasingly produces large RDFS datasets, building scalable reasoning engines over large triple sets becomes essential. Many studies have used expensive distributed frameworks, such as Hadoop, to reason over large RDFS triple sets. In many cases, however, only millions of triples need to be handled. In such cases it is not necessary to deploy expensive distributed systems, because a logic-programming-based reasoner on a single machine can produce reasoning performance similar to that of a distributed reasoner using Hadoop. In this paper, we propose a scalable RDFS reasoner that uses logic programming methods on a single machine and compare our empirical results with those of distributed systems. We show that our logic-programming-based reasoner on a single machine performs as well as an expensive distributed reasoner for up to 200 million RDFS triples. In addition, we designed a metadata structure that decomposes the ontology triples into separate sectors. Instead of loading all the triples into a single model, we selected an appropriate subset of the triples for each ontology reasoning rule. Unification makes it easy to handle conjunctive queries for RDFS schema reasoning; therefore, we designed and implemented the RDFS axioms using logic programming unification and efficient conjunctive query handling mechanisms. The throughput of our approach reached 166K triples/sec on LUBM1500 with 200 million triples. This is comparable to that of WebPIE, a distributed reasoner using Hadoop and MapReduce, which performs at 185K triples/sec. We show that it is unnecessary to use a distributed system for up to 200 million triples, and that the performance of a logic-programming-based reasoner on a single machine is comparable to that of an expensive distributed reasoner employing the Hadoop framework.
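The reasoning described above applies RDFS entailment rules over a triple set. A toy Python sketch, not the authors' logic-programming implementation, of forward-chaining two core RDFS rules to a fixpoint; the class and instance names are invented:

```python
# Minimal fixpoint sketch (assumed example, not the paper's reasoner) of two
# standard RDFS entailment rules over an in-memory triple set:
#   rdfs11: (A subClassOf B), (B subClassOf C) => (A subClassOf C)
#   rdfs9 : (x type A),       (A subClassOf B) => (x type B)
SUB, TYPE = "rdfs:subClassOf", "rdf:type"

triples = {
    ("Student", SUB, "Person"),
    ("Person", SUB, "Agent"),
    ("alice", TYPE, "Student"),
}

def rdfs_closure(kb):
    """Apply rdfs9 and rdfs11 repeatedly until no new triples are derived."""
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        new = set()
        for a, p1, b in kb:
            for c, p2, d in kb:
                if b == c and p2 == SUB:
                    if p1 == SUB:
                        new.add((a, SUB, d))   # rdfs11
                    elif p1 == TYPE:
                        new.add((a, TYPE, d))  # rdfs9
        if not new <= kb:
            kb |= new
            changed = True
    return kb

closure = rdfs_closure(triples)
print(("alice", TYPE, "Agent") in closure)  # → True
```

A real reasoner at the paper's scale would index triples by predicate and apply rules as logic-program clauses rather than scanning all pairs, but the fixpoint structure is the same.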

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated rapidly with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Owing to recent interest in the technology and research on various algorithms, the field of artificial intelligence has achieved greater technological advances than ever. The knowledge-based system is a sub-domain of artificial intelligence; it aims to enable artificial intelligence agents to make decisions using machine-readable and machine-processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. Recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the Web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires a great deal of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article.
This knowledge is created by mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of the accuracy of knowledge, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into RDF triple structures. The structures of Wikipedia infoboxes are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert the knowledge into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process.
Through the proposed process, structured knowledge can be utilized by extracting knowledge from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required from experts to construct instances according to the ontology schema.
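The final step described above, selecting BIO-tagged values and transforming them into RDF triple structures, can be sketched as follows. The tag names, sentence, subject, and property are hypothetical illustrations in a DBpedia-like style, not details taken from the paper:

```python
# Illustrative sketch (assumed example, not the paper's pipeline): turning a
# BIO-tagged sentence into an RDF-style triple for a known subject and an
# ontology property. Tag names B-VAL/I-VAL and all identifiers are invented.
def bio_to_value(tokens, tags):
    """Collect the contiguous token span tagged B-VAL/I-VAL as the value."""
    span = [tok for tok, tag in zip(tokens, tags) if tag in ("B-VAL", "I-VAL")]
    return " ".join(span) if span else None

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea"]
tags   = ["O",     "O",  "O",   "O",       "O",  "B-VAL", "I-VAL"]

subject, prop = "dbr:Seoul", "dbo:country"  # assumed DBpedia-style names
value = bio_to_value(tokens, tags)
triple = (subject, prop, value)
print(triple)  # → ('dbr:Seoul', 'dbo:country', 'South Korea')
```

In the paper's setting, the subject comes from the article, the property from the document/sentence classification steps, and a sequence model (CRF or Bi-LSTM-CRF) predicts the BIO tags that this function consumes.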