• Title/Summary/Keyword: Relational Schema Model (관계형 스키마 모델)

Search results: 63

The Implementation Method of CIMS for Ship Manufacturing using STEP (STEP에 의한 조선 통합 생산 시스템(CIMS) 구현 방법)

  • S.B. Yoo;J.W. Lee;Y.M. Jeong;D.Y. Yoon;H.J. Kim
    • Journal of the Society of Naval Architects of Korea / v.31 no.3 / pp.38-46 / 1994
  • The role of CIMS (Computer Integrated Manufacturing Systems) is to integrate the various applications used throughout a product's life cycle. STEP is an international effort to standardize information models and interfaces so that independently developed applications can be integrated easily. A prototype for the Ship CIMS was built using STEP; in this prototype, the information model defined in EXPRESS is translated into database schemas. In this paper, we explain the operation of the prototype using examples from two application programs, the Block Division System and the Erection System, which are used for the process planning of ship manufacturing. As an example, real data stored in a relational database system (Oracle) is presented in this paper. (A schematic sketch of the EXPRESS-to-schema translation follows this entry.)

  • PDF
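
The core step in this prototype is translating EXPRESS entity definitions into relational schemas. A minimal sketch of that mapping, assuming a toy shipbuilding entity and an illustrative EXPRESS-to-SQL type table (not the paper's actual translator or schema):

```python
# Hypothetical sketch: translating a highly simplified EXPRESS entity into a
# relational table, in the spirit of the EXPRESS-to-schema step above.
import sqlite3

# A toy EXPRESS entity, e.g.:
#   ENTITY block;
#     block_id : STRING;  weight : REAL;  stage : INTEGER;
#   END_ENTITY;
EXPRESS_ENTITY = {
    "name": "block",
    "attributes": [("block_id", "STRING"), ("weight", "REAL"), ("stage", "INTEGER")],
}

TYPE_MAP = {"STRING": "TEXT", "REAL": "REAL", "INTEGER": "INTEGER"}

def entity_to_ddl(entity):
    """Map each EXPRESS attribute to a column of the corresponding SQL type."""
    cols = ", ".join(f"{name} {TYPE_MAP[typ]}" for name, typ in entity["attributes"])
    return f"CREATE TABLE {entity['name']} ({cols})"

conn = sqlite3.connect(":memory:")
conn.execute(entity_to_ddl(EXPRESS_ENTITY))
conn.execute("INSERT INTO block VALUES (?, ?, ?)", ("B-101", 42.5, 2))
print(conn.execute("SELECT * FROM block").fetchall())
```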

Implementation of a Multimedia based ExamBank System in Web Environments (Web환경에서 멀티미디어 기반 문제은행 시스템의 구현)

  • 남인길;정소연
    • Journal of Korea Society of Industrial Information Systems / v.6 no.2 / pp.54-62 / 2001
  • In this paper, we propose a multimedia-based ExamBank system for Web environments. In the proposed system, the database is designed on the object-relational model, and the application is implemented in Java so that it can run independently and serve multiple clients in Web environments without failure. We define exam entities as objects and implement their inter-relationships as user-defined types. In addition, by mapping the schema objects of the DBMS to JAVA classes, objects can be transferred systematically between the DBMS and the JAVA application server. (A minimal row-to-object mapping sketch follows this entry.)

  • PDF
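
The schema-object-to-class mapping the abstract describes can be pictured as materializing table rows into application-side objects. A minimal sketch under assumed table and class names (the paper's actual mapping is not shown in the abstract):

```python
# Hypothetical sketch: rows of an exam-question table materialized as objects
# on the application side, mirroring the "exam entity as object" idea.
import sqlite3
from dataclasses import dataclass

@dataclass
class ExamQuestion:
    question_id: int
    body: str
    media_uri: str          # multimedia attachment (image/audio path)

def fetch_questions(conn):
    rows = conn.execute("SELECT question_id, body, media_uri FROM question")
    return [ExamQuestion(*row) for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE question (question_id INTEGER, body TEXT, media_uri TEXT)")
conn.execute("INSERT INTO question VALUES (1, 'Define a schema.', 'img/q1.png')")
print(fetch_questions(conn))
```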

A Path Storing and Number Matching Technique of XML Documents Based on RDBMS (RDBMS에 기반한 XML 문서의 경로 저장과 숫자 매칭 기법)

  • Vong, Ha-Ik;Hwang, Byung-Yeon
    • Proceedings of the Korea Information Processing Society Conference / 2006.11a / pp.377-380 / 2006
  • With the recent growth of XML (eXtensible Markup Language), large numbers of sizeable XML documents are in use, and how to design data models and storage schemas for XML documents for efficient document management has become an active research topic. In this paper, we propose an efficient technique for storing, retrieving, and managing XML documents on top of a relational database: only the paths on which a node's text value or attribute value exists are stored, a unique Node Expression Identifier is assigned to each node expression, and queries are evaluated by number matching over the assigned node identifiers. To validate the approach, we compare the processing performance of XPath queries against an existing method and show the efficiency of the proposed technique. (A minimal path-indexing sketch follows this entry.)

  • PDF
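
The proposed technique stores only value-bearing paths and matches numeric path identifiers. A minimal sketch of that indexing idea, with an assumed identifier scheme rather than the paper's exact Node Expression Identifier rules:

```python
# Hypothetical sketch: walk an XML document, keep only paths that carry a
# text or attribute value, and assign each distinct path a numeric id that
# queries can later match on.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<book><title>DB Systems</title><author role='editor'>Kim</author></book>"
)

path_ids, stored = {}, []          # path -> numeric id; (id, path, value) rows

def walk(elem, prefix=""):
    path = f"{prefix}/{elem.tag}"
    if elem.text and elem.text.strip():                  # value-bearing path only
        pid = path_ids.setdefault(path, len(path_ids) + 1)
        stored.append((pid, path, elem.text.strip()))
    for name, value in elem.attrib.items():              # attribute paths
        apath = f"{path}/@{name}"
        pid = path_ids.setdefault(apath, len(path_ids) + 1)
        stored.append((pid, apath, value))
    for child in elem:
        walk(child, path)

walk(doc)
print(stored)  # e.g. [(1, '/book/title', 'DB Systems'), ...]
```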

Constructing a Metadata DB to Facilitate Retrieval of Faculty Syllabi on the Internet (인터넷 대학강의안의 검색을 위한 Metadata DB 구축)

  • 오삼균
    • Journal of the Korean Society for Information Management / v.16 no.2 / pp.149-164 / 1999
  • The purpose of this paper is to introduce and discuss a newly constructed metadata database system that facilitates the retrieval of faculty syllabi available on the Internet. This gateway system aims to provide users with one-stop access to syllabi posted by faculty of post-secondary institutions from around the world. Several elements of the Dublin Core (DC), together with supplementary elements, were used for cataloging the syllabi. The conceptual schema of the selected syllabus elements was developed following the entity-relationship model, and the metadata were then stored in a relational database system. Various searching and browsing interfaces were implemented to facilitate effective retrieval. The prototype, named Gateway to Faculty Syllabi (GFS), is available at http:/Ais.skku.ac.kr/gfs/. (A minimal metadata-table sketch follows this entry.)

  • PDF
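
Storing Dublin Core elements for syllabi in a relational schema can be sketched as follows; the table and field names are illustrative assumptions, not the GFS system's actual schema:

```python
# Hypothetical sketch: a relational table holding a few Dublin Core elements
# for syllabi, plus a simple keyword search over the subject field.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE syllabus (
        identifier TEXT PRIMARY KEY,   -- DC:Identifier (syllabus URL)
        title      TEXT,               -- DC:Title
        creator    TEXT,               -- DC:Creator (faculty name)
        subject    TEXT                -- DC:Subject (topic keywords)
    )""")
conn.execute("INSERT INTO syllabus VALUES (?, ?, ?, ?)",
             ("http://example.edu/cs101", "Intro to Databases", "Oh", "schema; SQL"))

def search_by_subject(keyword):
    return conn.execute(
        "SELECT title, creator, identifier FROM syllabus WHERE subject LIKE ?",
        (f"%{keyword}%",)).fetchall()

print(search_by_subject("schema"))
```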

A Study on Flexible Attribute Tree and Partial Result Matrix for Content-based Retrieval and Browsing of Video Data (비디오 데이터의 내용 기반 검색과 브라우징을 위한 유동 속성 트리 및 부분 결과 행렬의 이용 방법 연구)

  • 성인용;이원석
    • Journal of Korea Multimedia Society / v.3 no.1 / pp.1-13 / 2000
  • While various types of information can be mixed in a continuous video stream without any clear boundary, the meaning of a video scene can be interpreted at multiple levels of abstraction, and its description can vary among users. Therefore, for content-based retrieval in video data, it is important that a user can describe a scene flexibly while descriptions given by different users are kept consistent. This paper proposes an effective way to represent the different types of video information in conventional database models such as the relational and object-oriented models: flexibly defined attributes and their values are organized as tree-structured dictionaries, while the description of video data is stored in a fixed database schema. We also introduce several browsing methods to assist the user. The dictionary browser simplifies both the annotation process and the querying process, while the result browser helps a user analyze the results of a query in terms of various combinations of query conditions. (A minimal attribute-tree sketch follows this entry.)

  • PDF
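
The paper's flexible attributes live in a tree-structured dictionary while scene descriptions sit in a fixed schema. A minimal sketch of that split, with an assumed attribute tree and annotation layout:

```python
# Hypothetical sketch: flexibly defined attributes live in a tree, while
# scene descriptions are stored as fixed (scene, attribute-path, value) rows.
attribute_tree = {                       # dictionary of allowed attributes
    "object": {"person": {}, "vehicle": {"car": {}, "truck": {}}},
    "action": {"enter": {}, "exit": {}},
}

def valid_path(tree, path):
    """Check that an attribute path (e.g. 'object/vehicle/car') is in the tree."""
    node = tree
    for part in path.split("/"):
        if part not in node:
            return False
        node = node[part]
    return True

annotations = []                         # fixed schema: (scene_id, path, value)

def annotate(scene_id, path, value):
    if not valid_path(attribute_tree, path):
        raise ValueError(f"unknown attribute path: {path}")
    annotations.append((scene_id, path, value))

annotate(scene_id=7, path="object/vehicle/car", value="red sedan")
print(annotations)
```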

A Study on Effective Real Estate Big Data Management Method Using Graph Database Model (그래프 데이터베이스 모델을 이용한 효율적인 부동산 빅데이터 관리 방안에 관한 연구)

  • Ju-Young, KIM;Hyun-Jung, KIM;Ki-Yun, YU
    • Journal of the Korean Association of Geographic Information Studies / v.25 no.4 / pp.163-180 / 2022
  • Real estate data can be considered big data: the amount of real estate data is growing rapidly, the data interact with various fields such as the economy, law, and crowd psychology, and they are structured in complex layers. Existing relational databases tend to have difficulty handling the many relationships in real estate big data because they have a fixed schema and extend only vertically. To improve on these limitations, this study builds the real estate data in a graph database and verifies its usefulness. As the research method, we modeled various real estate data on MySQL, one of the most widely used relational databases, and on Neo4j, one of the most widely used graph databases. We then collected real estate questions used in real life and selected nine different questions to compare query times on each database. As a result, Neo4j showed constant performance even on questions that require multiple JOINs and inference over various relationships in the relational form, whereas MySQL's query time increased rapidly. From this result, we find that a graph database such as Neo4j is more efficient for real estate big data with its many relationships. We expect the real estate graph database to be used in predicting real estate price factors and in answering real estate inquiries through AI speakers.
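
The performance contrast reported above comes from multi-hop relationship questions. A minimal sketch of the same question in both models, with assumed labels, tables, and connection details (not the paper's actual dataset or queries):

```python
# Hypothetical sketch: a multi-hop relationship question as SQL JOINs versus
# a Cypher path pattern, run through the official Neo4j Python driver.
from neo4j import GraphDatabase

# Relational form: every hop costs another JOIN.
SQL_QUERY = """
SELECT p.name FROM person p
JOIN owns o      ON o.person_id = p.id
JOIN apartment a ON a.id = o.apartment_id
JOIN district d  ON d.id = a.district_id
WHERE d.name = 'Gangnam'
"""

# Graph form: the same question as a path pattern.
CYPHER_QUERY = """
MATCH (p:Person)-[:OWNS]->(:Apartment)-[:LOCATED_IN]->(d:District {name: 'Gangnam'})
RETURN p.name
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for record in session.run(CYPHER_QUERY):
        print(record["p.name"])
driver.close()
```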

Digital Forensic Investigation of HBase (HBase에 대한 디지털 포렌식 조사 기법 연구)

  • Park, Aran;Jeong, Doowon;Lee, Sang Jin
    • KIPS Transactions on Computer and Communication Systems / v.6 no.2 / pp.95-104 / 2017
  • As smart device technology grows and Social Network Services (SNS) become more common, data that are difficult to process with existing RDBMSs are increasing. As a result, NoSQL databases are gaining popularity as an alternative for processing massive, unstructured data generated in real time. The demand for digital investigation techniques for NoSQL databases is therefore increasing as more businesses introduce them into their systems, although database forensics research has so far centered on RDBMSs. New digital forensic techniques are needed because NoSQL databases have no normalized schema and their storage methods differ by database type and operating environment. Prior research covers document-oriented NoSQL databases but does not apply as-is to other NoSQL types. This paper therefore presents, for HBase, a column-oriented NoSQL database, its mode of operation and data model, an analysis of its operating environment, the collection and analysis of artifacts, and a technique for recovering deleted data. The proposed digital forensic investigation technique for HBase is verified through an experimental scenario.
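
One artifact-collection step for a column-oriented store like HBase is dumping rows with their column-family-qualified cells and timestamps. A minimal sketch using the Thrift-based happybase client, with an assumed table name and host; this illustrates reading HBase's data model, not the paper's forensic procedure:

```python
# Hypothetical sketch: scan an HBase table via Thrift (happybase) and dump
# rows with their family:qualifier cells and timestamps for offline analysis.
import happybase

connection = happybase.Connection("localhost")     # HBase Thrift server
table = connection.table("sns_messages")

# Each row is key -> {b'family:qualifier': (value, timestamp)}; the cell
# timestamps can matter forensically, so they are requested explicitly.
for row_key, cells in table.scan(include_timestamp=True):
    for column, (value, timestamp) in cells.items():
        print(row_key, column, value, timestamp)
connection.close()
```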

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive log data of banks. Most of the log data generated during banking operations come from handling clients' business; to gather, store, categorize, and analyze these log data, a separate log data processing system needs to be established. However, existing computing environments make it difficult to realize flexible storage expansion for massive amounts of unstructured log data and to run the many functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are hard to handle with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive log data. Because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the system also offers automatic restore functions so that it continues operating after recovering from a malfunction. Finally, by establishing a distributed database on the NoSQL-based MongoDB, the system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have strict schemas that are inappropriate for unstructured log data and cannot easily be scaled out across nodes when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion as data grows; it is a non-relational database with a structure appropriate for unstructured data. NoSQL data models are usually classified into key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, a representative document-oriented store with a free schema structure: it processes unstructured log data easily through its flexible schema, facilitates node expansion when data grow rapidly, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated across each bank's client business processes are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module renders the log analysis results of the MongoDB, Hadoop-based analysis, and MySQL modules per analysis time and type of the aggregated log data, and presents them to the user through a web interface. Log data requiring real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module, plotted according to the user's analysis conditions, and parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation against a log processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's advantage, and an optimal chunk size is identified through MongoDB insert performance tests over various chunk sizes.
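
The MongoDB module's role, storing schema-free log documents and aggregating them per unit of analysis, can be sketched as follows; the database, collection, and field names are illustrative assumptions:

```python
# Hypothetical sketch: insert schema-free log documents into MongoDB and
# aggregate counts per log type (the kind of input the graph module plots).
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client.bank_logs.raw                     # free-schema collection

# Unstructured entries: documents need not share fields.
logs.insert_many([
    {"ts": datetime.now(timezone.utc), "type": "transfer", "amount": 500},
    {"ts": datetime.now(timezone.utc), "type": "login", "client_ip": "10.0.0.7"},
])

# Aggregate log counts per type.
for row in logs.aggregate([{"$group": {"_id": "$type", "count": {"$sum": 1}}}]):
    print(row)
```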

A Study on the Model of Internet Public Library in Korea (IPL-Korea) (인터넷 공공도서관 구축 모형 연구)

  • 고영만;오삼균
    • Journal of the Korean Society for Information Management / v.16 no.4 / pp.109-123 / 1999
  • We are faced with a paradox in the age of information: finding quality information on the Internet becomes an ever more challenging task because of information overload. This paper describes the prototype for the “IPL-Korea” (Internet Public Library in Korea) project, an attempt to provide the public with quality information in the form of a metadata system. The system involves cataloging resources, i.e., websites, that are filtered by library and information science majors as well as information professionals. The system focuses on children, youth, women, and seniors as user groups; classification schemes and resource descriptions relevant to each group are incorporated to allow efficient browsing of the resources. A thesaurus for “IPL-Korea”, based on the ERIC thesaurus, is being constructed for easy manipulation of the breadth of searching. The “IPL-Korea” metadata system employs the entity-relationship model in the design of its conceptual schema. Metadata are stored in the Oracle database system, and Web interfaces to this database are provided through ASP, ColdFusion, and JAVA technology. (A minimal thesaurus-expansion sketch follows this entry.)

  • PDF
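
Broadening or narrowing a search via a thesaurus, as the ERIC-based IPL-Korea thesaurus is meant to support, can be sketched as follows; the term relations are illustrative assumptions, not actual thesaurus entries:

```python
# Hypothetical sketch: expand a search term through broader/narrower/related
# thesaurus relations to adjust the breadth of a search.
THESAURUS = {
    "libraries": {"broader": ["information services"],
                  "narrower": ["public libraries", "school libraries"],
                  "related": ["archives"]},
}

def expand_query(term, direction="narrower"):
    """Return the term plus its thesaurus expansions in one direction."""
    entry = THESAURUS.get(term, {})
    return [term] + entry.get(direction, [])

print(expand_query("libraries"))                 # add narrower terms
print(expand_query("libraries", "broader"))      # climb to broader terms
```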

Object-Oriented Database Schemata and Query Processing for XML Data (XML 데이타를 위한 객체지향 데이터베이스 스키마 및 질의 처리)

  • Jeong, Tae-Seon;Park, Sang-Won;Han, Sang-Yeong;Kim, Hyeong-Ju
    • Journal of KIISE: Databases / v.29 no.2 / pp.89-98 / 2002
  • As XML has become an emerging standard for information exchange on the World Wide Web, it has gained attention in the database community as a source from which information can be extracted through a database model. Recently, many researchers have addressed the problem of storing XML data and processing XML queries using traditional database engines, and most have used relational database systems. In this paper, we show that OODBSs can be another solution. Our technique generates an OODB schema from DTDs and processes XML queries; in particular, we show that the semi-structured part of XML data can be represented by 'inheritance' and that this can be used to improve query processing.
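
The abstract's key idea, representing shared XML structure through inheritance, can be sketched by factoring a common element structure into a superclass; the DTD fragment and class layout below are illustrative assumptions, not the paper's schema-generation algorithm:

```python
# Hypothetical sketch: element types from a toy DTD become classes, with the
# shared structure lifted into a superclass so queries can target it once.
#
# Toy DTD fragment:
#   <!ELEMENT publication (title, year)>
#   <!ELEMENT article (title, year, journal)>   <!-- shares publication's core -->
#   <!ELEMENT book    (title, year, publisher)>
class Publication:                 # common structure lifted to a superclass
    def __init__(self, title, year):
        self.title, self.year = title, year

class Article(Publication):        # 'inheritance' captures the shared part
    def __init__(self, title, year, journal):
        super().__init__(title, year)
        self.journal = journal

class Book(Publication):
    def __init__(self, title, year, publisher):
        super().__init__(title, year)
        self.publisher = publisher

# A query over the superclass reaches both subclasses at once.
store = [Article("XML Storage", 2002, "JKIISE"), Book("DB Design", 1999, "Acme")]
print([p.title for p in store if isinstance(p, Publication) and p.year > 2000])
```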