• Title/Summary/Keyword: RDF model

Web Interface for Distributed STEP Data using Metadata (메타데이터를 이용한 분산 STEP 데이터의 웹 인터페이스)

  • 진연권;유상봉
    • Korean Journal of Computational Design and Engineering / v.5 no.3 / pp.232-241 / 2000
  • Even though the recent proliferation of networks offers greater opportunities for successful collaborative design, current practices do not fully take advantage of the information infrastructure. There is a great deal of data on the networks, but not enough knowledge about that data is available to users. The main objectives of the product data interface system proposed in this paper are to capture more knowledge for managing product data and to provide users with effective search capability. We define a metadata model for product data defined in STEP AP 203 and manage the metadata extracted from product data in a repository system. Because we use standard formats, STEP for product data and RDF for metadata, the proposed approach can be applied to various industries independently of commercial products. (An illustrative sketch of such RDF metadata follows below.)

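The sketch below builds RDF metadata for a STEP AP 203 product file with rdflib and answers a simple SPARQL query over it. The vocabulary (ex:Ap203Product, ex:stepFile, and so on) and the sample values are assumptions made for this sketch, not the metadata model defined in the paper.

```python
# Illustrative sketch: RDF metadata describing a STEP AP 203 product file.
# The vocabulary (ex:Ap203Product, ex:stepFile, ...) is hypothetical, not the
# metadata model defined in the paper.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import DC, RDF

EX = Namespace("http://example.org/step-meta#")

g = Graph()
g.bind("ex", EX)
g.bind("dc", DC)

part = URIRef("http://example.org/products/bracket-001")
g.add((part, RDF.type, EX.Ap203Product))
g.add((part, DC.title, Literal("Mounting bracket")))
g.add((part, DC.creator, Literal("Design Team A")))
g.add((part, EX.stepFile, Literal("bracket-001.stp")))   # location of the STEP data
g.add((part, EX.schema, Literal("AP203")))               # STEP application protocol
g.add((part, EX.lastModified, Literal("2000-05-12")))

# A repository could answer keyword queries over this metadata with SPARQL.
results = g.query(
    'SELECT ?p WHERE { ?p a ex:Ap203Product ; dc:creator "Design Team A" . }',
    initNs={"ex": EX, "dc": DC},
)
for row in results:
    print(row.p)
```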

Construction of Preservation Description Framework for Digital Archiving (디지털 아카이빙을 위한 보존 기술항목 프레임워크 구축)

  • Lee, Seungmin
    • Journal of Korean Library and Information Science Society / v.48 no.4 / pp.129-151 / 2017
  • Information modeling, which is broadly applied in the digital archiving process, provides a conceptual process that can guide the creation of descriptions for the objects of preservation. However, it faces limitations when applied in practice to the creation of preservation metadata records. This research proposes the concept of a Resource Cluster to address these problems and to describe the objects of preservation efficiently during the digital archiving process. It also constructs a Preservation Description Framework (PDF) based on RDF to express preservation descriptions concretely. The framework combines the structure of the OAIS Reference Model with the Functional Requirements for Bibliographic Records (FRBR) and offers an alternative, more efficient and effective approach to creating preservation metadata. (A hedged RDF sketch of such a description follows below.)
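
As a loose illustration only: the sketch below groups two representations of one preserved object into a single RDF "cluster", in the spirit of combining OAIS-style packaging with FRBR-style levels of description. Every URI and property name here (pres:ResourceCluster, pres:hasMember, and so on) is invented for the sketch and is not the vocabulary constructed in the paper.

```python
# Hedged sketch of an RDF-based preservation description grouping related
# representations of one preserved object into a single cluster, loosely in
# the spirit of OAIS packages and FRBR description levels. All URIs and
# property names are invented for illustration, not the paper's vocabulary.
from rdflib import Graph, Namespace, Literal, RDF

PRES = Namespace("http://example.org/preservation#")

g = Graph()
g.bind("pres", PRES)

cluster = PRES["cluster/report-2017"]
g.add((cluster, RDF.type, PRES.ResourceCluster))
g.add((cluster, PRES.describesWork, Literal("Annual report 2017")))

# Two manifestations preserved together: the original file and a migrated copy.
for name, fmt, role in [("report.hwp", "HWP 5.0", "original"),
                        ("report.pdf", "PDF/A-1b", "migrated")]:
    m = PRES[f"object/{name}"]
    g.add((cluster, PRES.hasMember, m))
    g.add((m, PRES.fileFormat, Literal(fmt)))
    g.add((m, PRES.preservationRole, Literal(role)))

print(g.serialize(format="turtle"))
```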

Estimation of Inundation Area by Linking of Rainfall-Duration-Flooding Quantity Relationship Curve with Self-Organizing Map (강우량-지속시간-침수량 관계곡선과 자기조직화 지도의 연계를 통한 범람범위 추정)

  • Kim, Hyun Il;Keum, Ho Jun;Han, Kun Yeun
    • KSCE Journal of Civil and Environmental Engineering Research / v.38 no.6 / pp.839-850 / 2018
  • Flood damage in urban areas due to torrential rain is increasing with urbanization, so accurate and rapid flood forecasting and expected inundation maps are needed. Predicting the extent of flooding for a given rainfall is an important issue in preparing for floods in advance. Government agencies have recently been trying to provide expected inundation maps to the public, but methods for quantifying the extent of inundation caused by a particular rainfall scenario and for predicting the flood extent in real time within a short period are still lacking. Real-time prediction of flood extent is therefore needed, based on rainfall-runoff-inundation analysis. One- and two-dimensional models are used to analyze the drainage network, manhole overflow, and inundation propagation under given rainfall conditions. By applying various rainfall scenarios that consider rainfall duration, distribution, and return period, the inundation volume and depth can be estimated and stored in a database. The Rainfall-Duration-Flooding Quantity (RDF) relationship curve, derived from the hydraulic analysis results, and a Self-Organizing Map (SOM) trained by unsupervised learning are then applied to predict the flooded area for a particular rainfall condition. The validity of the proposed methodology was examined by comparing the expected flood maps with the results of a two-dimensional hydraulic model. Based on the results of the study, this methodology should be useful for providing flood maps for unobserved medium-sized rainfall or frequency scenarios. Furthermore, it can serve as fundamental data for flood forecasting by establishing the RDF curve, which reflects the rainfall-runoff-flood relationship, together with the database of expected inundation maps. (A minimal SOM sketch follows below.)
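
A minimal sketch of the SOM step, under invented data: rainfall scenarios are clustered on a small self-organizing map, each map cell keeps the flood volumes of the scenarios it wins, and a new event is answered with the stored volume of its best-matching cell. The feature choices, grid size, and all numbers are placeholders, not results from the paper.

```python
# Minimal self-organizing-map sketch: cluster rainfall scenarios (duration,
# total depth, return period) and attach a precomputed flood volume to each
# map cell, so a new rainfall event can be matched to a stored inundation
# result. All data values are invented placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training scenarios: [duration_hr, rainfall_mm, return_period_yr]
scenarios = np.array([[1, 50, 10], [2, 80, 20], [3, 120, 50],
                      [6, 200, 100], [1, 70, 20], [3, 150, 80]], float)
flood_volumes = np.array([1.2e4, 3.5e4, 8.1e4, 2.0e5, 2.8e4, 1.4e5])  # m^3, invented

# Normalize features and train a tiny 3x3 SOM.
x = (scenarios - scenarios.mean(0)) / scenarios.std(0)
grid = np.array([(i, j) for i in range(3) for j in range(3)], float)
w = rng.normal(size=(9, x.shape[1]))

for t in range(500):
    lr, sigma = 0.5 * np.exp(-t / 500), 1.5 * np.exp(-t / 500)
    s = x[rng.integers(len(x))]
    bmu = np.argmin(((w - s) ** 2).sum(1))                  # best-matching unit
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
    w += lr * h[:, None] * (s - w)                          # pull neighbours toward sample

# Assign each training scenario's flood volume to its winning cell.
cell_volume = {}
for xi, v in zip(x, flood_volumes):
    b = int(np.argmin(((w - xi) ** 2).sum(1)))
    cell_volume.setdefault(b, []).append(v)

# Look up a new, unseen rainfall event.
new_event = (np.array([2, 100, 30], float) - scenarios.mean(0)) / scenarios.std(0)
b = int(np.argmin(((w - new_event) ** 2).sum(1)))
print("predicted flood volume ~", np.mean(cell_volume.get(b, flood_volumes)))
```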

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.17 no.4 / pp.31-59 / 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. To address this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available, ranking search results effectively and efficiently will become even more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page was estimated from the number of keywords found in the page, which is subject to manipulation. In contrast, link-analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm uses two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking has played an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. On the other hand, since the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, which makes ranking algorithms for RDF knowledge bases highly important. The RDF graph consists of nodes and directional links, similar to the Web graph, so link-structure-based ranking seems highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, with only a single recursive property, a 'refers to' property corresponding to the hyperlinks. The Semantic Web, in contrast, encompasses various kinds of classes and properties, so ranking methods used in the WWW must be modified to reflect the complexity of its information space. Previous research addressed the problem of ranking query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's algorithm, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristics of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several Semantic Web systems to validate their technique and reported experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper.
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes must be high, or the resources must be described in sufficient detail, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon in which pages that are less important but densely connected receive higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may receive a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm that can solve the problems of the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach adopted in previous research, under our approach a user determines the weight of a property by comparing its relative significance to that of the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are intended to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on the other limitations reported in previous research. In addition, we propose two ways to incorporate datatype properties, which have not previously been employed even when they have some significance for resource importance. We designed an experiment to show the effectiveness of the proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which had been overlooked in previous research and which enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues. (A compact weighted-ranking sketch follows below.)
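
A compact sketch of link-based ranking over an RDF-style graph with user-assigned property weights, in the spirit of the class-oriented idea above. It uses a generic weighted PageRank with damping rather than the authors' exact algorithm, and the toy triples and weights are invented for illustration.

```python
# Sketch of weighted, PageRank-style ranking over a small RDF-like graph.
# Each triple (subject, property, object) passes importance from subject to
# object, scaled by a user-chosen weight per property. This is a simplification
# of the class-oriented weighting discussed above, not the paper's algorithm.
import numpy as np

triples = [  # (subject, property, object), toy data
    ("paperA", "cites", "paperB"),
    ("paperC", "cites", "paperB"),
    ("paperB", "cites", "paperD"),
    ("alice",  "authorOf", "paperA"),
    ("alice",  "authorOf", "paperC"),
]
property_weight = {"cites": 1.0, "authorOf": 0.3}   # user-assigned, per query class

nodes = sorted({t[0] for t in triples} | {t[2] for t in triples})
idx = {n: i for i, n in enumerate(nodes)}

# Column-stochastic transition matrix built from weighted outgoing links.
M = np.zeros((len(nodes), len(nodes)))
for s, p, o in triples:
    M[idx[o], idx[s]] += property_weight[p]
col = M.sum(0)
M[:, col > 0] /= col[col > 0]

# Power iteration with damping, as in PageRank; dangling mass is spread evenly.
d, r = 0.85, np.full(len(nodes), 1 / len(nodes))
for _ in range(50):
    r = (1 - d) / len(nodes) + d * (M @ r + r[col == 0].sum() / len(nodes))

for n, score in sorted(zip(nodes, r), key=lambda x: -x[1]):
    print(f"{n:8s} {score:.3f}")
```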

Implementation of Policy based In-depth Searching for Identical Entities and Cleansing System in LOD Cloud (LOD 클라우드에서의 연결정책 기반 동일개체 심층검색 및 정제 시스템 구현)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Internet Computing and Services / v.19 no.3 / pp.67-77 / 2018
  • This paper suggests that each LOD establish its own link policy and publish it to the LOD cloud in order to provide identity among entities in different LODs. For specifying the link policy, we also propose a vocabulary set founded on the RDF model. We implemented a Policy-based In-depth Searching and Cleansing (PISC) system that performs in-depth searching across LODs by referencing the link policies; PISC has been published on GitHub. Because LODs participate in the LOD cloud voluntarily, the degree of entity identity needs to be evaluated. PISC therefore evaluates the identities and cleanses the searched entities, confining them to those that exceed the user's criterion for the entity identity level. As search results, PISC provides an entity's detailed contents collected from diverse LODs, together with an ontology customized to the content. A simulation of PISC was performed on five DBpedia LODs. We found that a similarity of 0.9 between the objects of source and target RDF triples provided an appropriate expansion ratio and inclusion ratio for the search results, and that three or more target LODs need to be specified in the link policy for sufficient identity of the searched entities. (A hedged sketch of this similarity-threshold cleansing follows below.)
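
A hedged sketch of the cleansing step only: the objects of source and target RDF triples are compared with a simple string similarity, and candidate identity links are kept when the score reaches the user's threshold (0.9, as in the experiment above). This is an illustration, not the published PISC code, and the sample triples are invented.

```python
# Compare the objects of source and target RDF triples with a simple string
# similarity and keep only candidate "identical entity" links above a
# user-chosen threshold. Illustration only, not the PISC implementation.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two literal values."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

source_triples = [("dbpedia:Seoul", "rdfs:label", "Seoul"),
                  ("dbpedia:Busan", "rdfs:label", "Busan")]
target_triples = [("ko-dbpedia:서울", "rdfs:label", "Seoul, South Korea"),
                  ("ko-dbpedia:부산", "rdfs:label", "Busan")]

THRESHOLD = 0.9
links = []
for s_subj, _, s_obj in source_triples:
    for t_subj, _, t_obj in target_triples:
        score = similarity(s_obj, t_obj)
        if score >= THRESHOLD:
            links.append((s_subj, t_subj, round(score, 2)))

print(links)   # e.g. [('dbpedia:Busan', 'ko-dbpedia:부산', 1.0)]
```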

Designs of the Unified Information Model-IEC61850/IEC61970 and Topology Model for Smart Grid (스마트 그리드 망을 위한 IEC61970/IEC61850 통합 정보 모델과 토폴로지 모델 설계)

  • Yun, Seok-Yeul;Yim, Hwa-Young
    • The Transactions of The Korean Institute of Electrical Engineers / v.61 no.1 / pp.28-33 / 2012
  • The smart grid, which integrates the power system with a digital network, requires the integration of the CIM (Common Information Model) standard for information modeling at power control centers with the IEC 61850 standard for automation at the substation level, so that information can be exchanged efficiently between system elements. This paper describes a method of data transfer from one standard information model to a unified information model by mapping between the objects of the IEC 61850 and IEC 61970 CIM standards in both the static and dynamic models, and designs a method of data transfer and information exchange for the topology processing application using unified topology class packages. (An illustrative mapping sketch follows below.)
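
The sketch below shows one way such a mapping might be organized in code: a static table from IEC 61850 logical-node classes to CIM (IEC 61970) classes and a small transfer function. The specific class pairs are commonly cited examples and should be read as assumptions for the sketch, not the mapping designed in the paper.

```python
# Illustrative sketch of a static mapping between IEC 61850 logical nodes and
# IEC 61970 CIM classes, plus a tiny transfer step that rewrites a substation
# data point into CIM-flavoured terms. The pairs below are illustrative
# assumptions, not the mapping designed in the paper.
LN_TO_CIM = {
    "XCBR": "Breaker",            # circuit breaker switchgear
    "XSWI": "Switch",             # disconnector / generic switch
    "PTRC": "ProtectionEquipment",
    "MMXU": "Measurement",        # measurands (V, A, W, ...)
}

def to_cim(ied_name: str, logical_node: str, attribute: str, value: float) -> dict:
    """Map one IEC 61850 data point to a CIM-flavoured record (sketch only)."""
    cim_class = LN_TO_CIM.get(logical_node, "UnknownPSRType")
    return {
        "cim:class": cim_class,
        "cim:PowerSystemResource.name": f"{ied_name}/{logical_node}",
        "cim:Measurement.measurementType": attribute,
        "value": value,
    }

print(to_cim("SUB1_IED3", "MMXU", "TotW", 12.4))
```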

MDMA:A Modular Distributed Middleware Architecture via URI

  • Murtaza Syed Shariyar;Hong Choong Seon
    • Proceedings of the Korean Information Science Society Conference / 2005.07a / pp.295-297 / 2005
  • This paper presents our proposed model for connecting ubiquitous physical objects over the Web using URIs, while utilizing already developed frameworks for ubiquitous service discovery, such as JINI and UPnP, and RDF/OWL for the Semantic Web. Using this scheme, we present the architecture of a service-oriented, modular, distributed ubiquitous middleware, MDMA.

Representation of Process Plant Equipment Using Ontology and ISO 15926 (온톨로지와 ISO 15926을 이용한 공정 플랜트 기자재의 표현)

  • Mun, Du-Hwan;Kim, Byung-Chul;Han, Soon-Hung
    • Korean Journal of Computational Design and Engineering / v.14 no.1 / pp.1-9 / 2009
  • ISO 15926 is an international standard for the representation of process plant lifecycle data. However, it is not easy to implement Part 2 (the data model) and Part 4 (the initial reference data) because of their structural complexity and the shortage of related development toolkits. To overcome this problem, ISO 15926-7 (Part 7) is under development; it specifies implementation methods for sharing and exchanging process plant lifecycle data, based on Semantic Web technologies such as OWL, Web Services, and SPARQL. For the application of ISO 15926-7, this paper discusses how to represent the technical specifications of process plant equipment by defining user-defined reference data and an object information model, using the example of the reactor coolant pumps in the reactor coolant system of an APR 1400 nuclear power plant. (A hedged SPARQL sketch follows below.)
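
A hedged sketch of the kind of OWL/RDF-plus-SPARQL access ISO 15926-7 relies on: a pump instance is described against a stand-in reference-data namespace and then retrieved with a SPARQL query. The class and property URIs (rdl:ReactorCoolantPump, rdl:ratedFlow_m3h) and the attribute values are placeholders, not actual ISO 15926-4 reference data.

```python
# Sketch of querying an OWL/RDF description of plant equipment with SPARQL,
# in the spirit of ISO 15926-7's semantic-web stack. The class and property
# URIs are placeholders, not actual ISO 15926 reference data items.
from rdflib import Graph, Namespace, Literal, RDF

RDL = Namespace("http://example.org/rdl#")      # stand-in for reference data
PLANT = Namespace("http://example.org/plant#")

g = Graph()
pump = PLANT["RCP-01A"]
g.add((pump, RDF.type, RDL.ReactorCoolantPump))       # user-defined reference class
g.add((pump, RDL.designPressure_MPa, Literal(17.2)))  # illustrative attribute
g.add((pump, RDL.ratedFlow_m3h, Literal(27000)))

rows = g.query(
    """SELECT ?pump ?flow WHERE {
           ?pump a rdl:ReactorCoolantPump ; rdl:ratedFlow_m3h ?flow .
       }""",
    initNs={"rdl": RDL},
)
for pump_uri, flow in rows:
    print(pump_uri, flow)
```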

Automatic Web Service Composition based on STRIPS (STRIPS 기반의 자동 웹 서비스 Composition)

  • 강민구;김제민;박영택;박찬규;문진영
    • Proceedings of the Korean Information Science Society Conference / 2003.10a / pp.127-129 / 2003
  • The ultimate goal of Semantic Web services is for networked programs and devices to interact closely without human commands. The Semantic Web has studied frameworks for making the meaning of information machine-understandable, based on languages rooted in artificial intelligence such as RDF, RDFS, DAML+OIL, and OWL. To give precise meaning not only to information on the Web but also to services, the Semantic Web community proposed DAML-S, a new technology based on the DAML+OIL ontology. DAML-S consists of four upper ontologies: Service, Service Profile, Service Model, and Service Grounding; in particular, the Service Profile and Service Model ontologies and their instances make it possible to discover and compose services that match a user's request. When the user's request cannot be satisfied by a single atomic service but requires several atomic services used together, an additional Web service composer is needed beyond the Semantic Web service search system. This paper proposes a STRIPS-style automatic Web service composer that builds the required chain of Web services from the user's request without human intervention. (A minimal composer sketch follows below.)

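A minimal sketch of the STRIPS-style idea: each service is treated as an operator with preconditions (required inputs) and effects (produced outputs), and a simple forward-chaining loop composes a service chain until the goal facts hold. The services and facts are invented; the paper's composer is not reproduced here.

```python
# Minimal STRIPS-flavoured composer sketch: each service is an operator with
# preconditions (required inputs) and effects (produced outputs); a greedy
# forward search chains services until the goal facts hold. Services and
# facts are invented for illustration.
from collections import namedtuple

Service = namedtuple("Service", "name pre eff")

services = [
    Service("GeocodeAddress", {"address"}, {"coordinates"}),
    Service("FindNearbyHotels", {"coordinates", "date"}, {"hotel_list"}),
    Service("BookHotel", {"hotel_list", "credit_card"}, {"reservation"}),
]

def compose(initial: set, goal: set) -> list:
    """Greedy forward chaining: apply any service whose preconditions hold."""
    state, plan = set(initial), []
    changed = True
    while not goal <= state and changed:
        changed = False
        for s in services:
            if s.pre <= state and not s.eff <= state:
                state |= s.eff
                plan.append(s.name)
                changed = True
    return plan if goal <= state else []

print(compose({"address", "date", "credit_card"}, {"reservation"}))
# -> ['GeocodeAddress', 'FindNearbyHotels', 'BookHotel']
```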

Development of Search Method using Semantic technologies about RESTful Web Services (시맨틱 기술을 활용한 RESTful 웹서비스의 검색 기법 개발)

  • Cha, Seung-Jun;Choi, Yun-Jeong;Lee, Kyu-Chul
    • Journal of Korea Spatial Information System Society / v.12 no.1 / pp.100-104 / 2010
  • Recently, with the advent of Web 2.0, RESTful Web Services have become an increasingly popular way to emphasize the Web as a platform. There are already many such services, and their number is growing very quickly, so it is difficult to find the service we want through keyword-based search alone. To solve this problem, we developed a search method for RESTful Web Services that uses Semantic Web technologies. First, we define the system structure and model the description format based on an integrated search system for OpenAPIs; we then add semantic markup (tagging and semantic annotation) to the HTML description pages. Next, we extract RDF documents from the pages and store them in a service repository. Based on keywords expanded by means of an ontology, the developed system provides more refined and broader results than a similarity-based keyword search system. (A hedged keyword-expansion sketch follows below.)
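
A hedged sketch of the ontology-based keyword expansion step: a query term is widened with the labels of related ontology classes and then matched against stored service descriptions. The tiny ontology and the service entries are invented for this sketch, not taken from the paper's system.

```python
# Hedged sketch of ontology-based keyword expansion before matching stored
# RESTful service descriptions: a query term is widened with the labels of
# related ontology classes, then matched against service metadata. The tiny
# ontology and service descriptions are invented for illustration.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/onto#")
g = Graph()
g.add((EX.Photo, RDF.type, RDFS.Class))
g.add((EX.Photo, RDFS.label, Literal("photo")))
g.add((EX.Image, RDFS.label, Literal("image")))
g.add((EX.Photo, RDFS.subClassOf, EX.Image))   # a photo is a kind of image

def expand(keyword: str) -> set:
    """Return the keyword plus labels of directly related ontology classes."""
    terms = {keyword}
    for cls in g.subjects(RDFS.label, Literal(keyword)):
        related = list(g.objects(cls, RDFS.subClassOf)) + list(g.subjects(RDFS.subClassOf, cls))
        for r in related:
            terms |= {str(lab) for lab in g.objects(r, RDFS.label)}
    return terms

services = {
    "FlickrSearchAPI": "search public image collections by tag",
    "WeatherAPI": "current weather by city",
}

query_terms = expand("photo")                       # {'photo', 'image'}
hits = [name for name, desc in services.items()
        if any(t in desc for t in query_terms)]
print(hits)                                         # ['FlickrSearchAPI']
```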