• Title/Summary/Keyword: database schema (데이터베이스 스키마)

390 search results

Implementations of Record-Level Synchronized Safe Personal Cloud (레코드 단위의 동기화를 지원하는 개별 클라우드 구현 기법)

  • Hong, Dong-Kweon
    • Journal of the Korean Institute of Intelligent Systems, v.24 no.3, pp.239-244, 2014
  • As the usefulness of mobile devices keeps growing, the privacy of cloud computing is receiving more attention. Even though much research and many solutions for privacy matters have been suggested, security problems remain a concern. In addition, most cloud computing systems use file-level synchronization, which makes it difficult to modify only part of a file. If we use a data-centric app that stores data in an embedded DBMS such as SQLite, simple file-level synchronization may incur some loss of information. In this paper we propose a solution for building a personal cloud that supports record-level synchronization, and we show a prototype system that uses RESTful web services and the same schema on mobile devices and the cloud storage. Synchronization is achieved by using a kind of optimistic concurrency control.
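
Below is a minimal sketch, in Python over SQLite, of record-level synchronization via optimistic concurrency control as the abstract describes; the `notes` table, its fields, and the version-counter check are illustrative assumptions, not the authors' actual schema:

```python
import sqlite3

# Hypothetical record schema shared by the device and the personal cloud;
# each row carries a version counter used for optimistic concurrency control.
SCHEMA = """
CREATE TABLE IF NOT EXISTS notes (
    id      INTEGER PRIMARY KEY,
    body    TEXT NOT NULL,
    version INTEGER NOT NULL DEFAULT 0
)
"""

def push_record(conn, rec_id, body, base_version):
    """Sync one record: succeed only if no one changed it since we read it."""
    cur = conn.execute(
        "UPDATE notes SET body = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",   # the optimistic check
        (body, rec_id, base_version),
    )
    conn.commit()
    return cur.rowcount == 1              # False => conflict; re-fetch and retry

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.execute("INSERT INTO notes VALUES (1, 'draft', 0)")
print(push_record(conn, 1, "edited on phone", 0))  # True: version 0 -> 1
print(push_record(conn, 1, "stale edit", 0))       # False: conflict detected
```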

Construction of Integration Management System of Various Speech Corpora (다양한 음성코퍼스의 통합 관리시스템 구축)

  • Rhyu, Kyeong-Taek;Jeong, Chang-Won;Kim, Do-Goan;Lee, Young-Ju
    • Journal of the Korea Society of Computer and Information, v.11 no.1 s.39, pp.259-271, 2006
  • In this paper, we describe the design and implementation of an integrated management system for various speech corpora. The purpose is to manage, in one system, the various kinds of speech corpora needed for speech research even though they were constructed in different data formats. We also consider ways to let users search effectively for speech corpora that meet the various conditions they want, and to let them easily add newly constructed corpora. To achieve this, we design a global schema that integrates the management of newly added information without changing the existing speech corpora, and we construct a web-based integrated management system on that schema which can be accessed without temporal or spatial restrictions. Finally, we describe the web-based interface that presents the results of the service, and we show the efficiency of using an index view in the implementation of the integrated management system.
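
As a rough illustration of a global schema plus an index-backed view, the following sketch (assuming SQLite; all table, view, and column names are invented for illustration) shows how newly constructed corpora could be registered without altering existing tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A guessed shape for the global schema: per-corpus data formats are described
# in a shared metadata table, so a new corpus is added by inserting rows
# rather than altering existing tables. The "corpus_search" view stands in
# for the paper's index view.
conn.executescript("""
CREATE TABLE corpus    (corpus_id INTEGER PRIMARY KEY, name TEXT, data_format TEXT);
CREATE TABLE utterance (utt_id INTEGER PRIMARY KEY,
                        corpus_id INTEGER REFERENCES corpus,
                        speaker TEXT, transcript TEXT);
CREATE INDEX idx_utt_corpus ON utterance(corpus_id);
CREATE VIEW corpus_search AS
    SELECT c.name, u.speaker, u.transcript
    FROM corpus c JOIN utterance u ON u.corpus_id = c.corpus_id;
""")
conn.execute("INSERT INTO corpus VALUES (1, 'read-speech', 'WAV+TextGrid')")
conn.execute("INSERT INTO utterance VALUES (1, 1, 'spk01', 'hello world')")
print(conn.execute("SELECT * FROM corpus_search").fetchall())
```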


Semantic schema data processing using cache mechanism (캐쉬메카니즘을 이용한 시맨틱 스키마 데이터 처리)

  • Kim, Byung-Gon;Oh, Sung-Kyun
    • Journal of the Korea Society of Computer and Information, v.16 no.3, pp.89-97, 2011
  • In semantic web information systems such as ontologies that access distributed information over a network, efficient query processing requires an advanced caching mechanism to reduce query response time. P2P network systems have become an important infrastructure in the web environment. In a P2P network system, reducing the amount of data that must be requested from the source peer when a query is initiated is an important aspect of efficient query processing. Caching queries and query results offers a particular advantage when a query term is added or removed: many of the answers may already be cached and can be delivered to the user right away. In the web environment, semantic caching methods have been proposed that manage the cache as a collection of semantic regions. In this paper, we propose a semantic caching technique for a clustered peer environment. In particular, by using a schema data filtering technique and a schema-similarity cache replacement method, we enhance query processing efficiency.
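
A toy sketch of semantic-region caching with a schema-similarity replacement policy follows; the Jaccard measure and the region subsumption test are assumed stand-ins for the paper's actual techniques:

```python
def schema_similarity(a, b):
    """Jaccard similarity over the schema terms two queries touch
    (an assumed stand-in for the paper's similarity measure)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

class SemanticCache:
    """Cache managed as semantic regions keyed by the schema terms they cover."""
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.regions = {}                 # frozenset(schema terms) -> rows

    def lookup(self, terms):
        for region, rows in self.regions.items():
            if set(terms) <= region:      # region subsumes the query: local hit
                return rows
        return None                       # miss: must ask the source peer

    def insert(self, terms, rows):
        if len(self.regions) >= self.capacity:
            # Replacement policy: evict the region least similar to the new query.
            victim = min(self.regions, key=lambda r: schema_similarity(r, terms))
            del self.regions[victim]
        self.regions[frozenset(terms)] = rows

cache = SemanticCache()
cache.insert({"rdf:type", "ex:price"}, [("item1", 100)])
print(cache.lookup({"rdf:type"}))         # served from cache, no peer traffic
```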

A Study of Dynamic Web Ontology for Comparison-shopping Agent based on Semantic Web (시멘틱 웹 기반의 비교구매 에이전트를 위한 동적 웹 온톨로지에 대한 연구)

  • Kim, Su-Kyoung;Ahn, Ki-Hong
    • Journal of Intelligence and Information Systems, v.11 no.2, pp.31-45, 2005
  • In this paper, we use wrapper technology to acquire commodity information for a digital camcorder from HTML pages that each electronic commerce store expresses differently, convert it into RDF triples and RDF documents through an RDF document converter, and design a metadata schema for the digital camcorder. Based on the designed metadata schema, the data are converted into an OWL web ontology and saved in a digital camcorder domain ontology store, the DCC knowledge base ontology (DCCKBO), implemented on a relational database. Through comparison, mapping, and inference between the RDF data and the DCCKBO, the system provides buyers with the DCC information of the store offering the best purchasing terms. We propose a dynamic web ontology that infers the contents of the best commodity purchasing information and defines the domain ontology saved in the DCCKBO.
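
For flavor, here is a small sketch of converting scraped commodity values into RDF triples, assuming the rdflib package; the namespace and property names are hypothetical and do not reproduce the paper's metadata schema:

```python
from rdflib import Graph, Literal, Namespace, RDF  # assumes the rdflib package

# Hypothetical namespace for the camcorder domain; the paper's actual metadata
# schema and the DCCKBO store are not reproduced here.
DCC = Namespace("http://example.org/dcc#")

g = Graph()
product = DCC["camcorder-1234"]
# Values as a wrapper might have scraped them from one store's HTML page:
g.add((product, RDF.type, DCC.DigitalCamcorder))
g.add((product, DCC.store, Literal("StoreA")))
g.add((product, DCC.price, Literal(499000)))

print(g.serialize(format="turtle"))  # triples ready to load into the store
```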


Designing Requisite Techniques of Storage Structure Supporting Efficient Retrieval in Semantic Web (시멘틱 웹의 효율적 검색을 지원하는 저장 구조의 요소 기술 설계)

  • Shin Pan-Seop
    • Journal of the Korea Computer Industry Society, v.7 no.3, pp.227-236, 2006
  • The Semantic Web is emerging as the next web environment, and research on ontology languages for representing the semantic relationships of resources in the Semantic Web is active as well. Ontology languages such as RDF and DAML+OIL appeared at the starting point of this research, but they are limited in describing the characteristics of resources and in clearly defining the relationships between resources. The W3C therefore proposed OWL as the next standard language for describing resources; OWL supplements the representational shortcomings of RDF and RDF Schema. In this paper, we build an ontology with OWL to implement an online retrieval system and propose a structure for storing ontology documents in an RDB. The structure supports OWL constructs such as equivalence, difference, inverse, union, and one-of relationships between classes or properties. We classify the elements by which OWL extends RDF Schema and propose a method of storing OWL in an RDB for interoperability with the many applications based on RDBs. Finally, we implement an OWL-based storage and retrieval system that provides advanced search functions.
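
A guessed sketch of how such OWL class and property relationships might be laid out in a relational table follows (SQLite; the paper's actual table design is not reproduced here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A guessed relational layout for the OWL constructs the paper lists
# (equivalence, difference, inverse, union, one-of); the actual table
# design in the paper may differ.
conn.executescript("""
CREATE TABLE class_relation (
    subject  TEXT NOT NULL,
    relation TEXT NOT NULL CHECK (relation IN
        ('equivalentClass','disjointWith','inverseOf','unionOf','oneOf')),
    object   TEXT NOT NULL
);
""")
conn.executemany(
    "INSERT INTO class_relation VALUES (?,?,?)",
    [("ex:Camcorder", "equivalentClass", "ex:VideoCamera"),
     ("ex:hasPart",   "inverseOf",       "ex:partOf")],
)
# Retrieval can expand a search term through stored equivalences:
rows = conn.execute(
    "SELECT object FROM class_relation "
    "WHERE subject = ? AND relation = 'equivalentClass'",
    ("ex:Camcorder",),
).fetchall()
print(rows)  # [('ex:VideoCamera',)]
```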


A Nonunique Composite Foreign Key-Based Approach to Fact Table Modeling and MDX Query Composing (비유일 외래키 조합 복합키 기반의 사실테이블 모델링과 MDX 쿼리문 작성법)

  • Yu, Han-Ju;Lee, Duck-Sung;Choi, In-Soo
    • KSCI Review, v.14 no.2, pp.185-197, 2006
  • A star schema consists of a central fact table surrounded by one or more dimension tables. Each row in the fact table contains a multi-part primary key (or composite foreign key) along with one or more columns containing various facts about the data stored in the row. Each of the composite foreign key components is related to a dimension table. The combination of keys in the fact table creates a composite foreign key that is unique to the fact table record. In real-world applications, however, particularly financial ones, the composite foreign key is rarely unique to the fact table record. In order to make the composite foreign key a determinant in such applications, some precalculation might be performed in the SQL relational database and cached in the OLAP database. However, this approach has many drawbacks and in some cases can give users wrong results. In this paper, we propose an approach to fact table modeling and related MDX query composing that can be used in real-world applications without any precalculation and that gives users correct results.
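
The following sketch (SQLite; tables invented for illustration) shows a star schema whose composite foreign key is deliberately nonunique, the financial situation the abstract describes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Illustrative star schema, not the paper's model: note that the fact table's
# composite foreign key (date_id, account_id) carries NO uniqueness
# constraint, matching the financial case the paper describes.
conn.executescript("""
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT);
CREATE TABLE dim_account (account_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_ledger (
    date_id    INTEGER REFERENCES dim_date,
    account_id INTEGER REFERENCES dim_account,
    amount     REAL
);
""")
conn.executemany("INSERT INTO fact_ledger VALUES (?,?,?)",
                 [(1, 10, 250.0), (1, 10, -40.0)])  # same key pair twice
# Queries over such a table must rely on aggregation, not row identity:
total = conn.execute(
    "SELECT SUM(amount) FROM fact_ledger WHERE date_id = 1 AND account_id = 10"
).fetchone()[0]
print(total)  # 210.0
```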


Development of an Integrated Retrieval System on Distributed KRISTAL-2002 Systems with Metadata Information (메타데이터 정보를 이용한 분산 KRISTAL-2002 시스템의 통합 검색 시스템 개발)

  • Choe Gui-ja;Kim Jae-Gon;Seo Jung-Hyun;Cho Han-Hyung;Lee Min-Ho;Jung Chang-Hu;Park Dong-In;Nam Young-Kwang
    • The KIPS Transactions:PartD, v.12D no.1 s.97, pp.135-150, 2005
  • In this paper, we propose an integrated information retrieval system for distributed multiple KRISTAL-2002 systems that uses metadata information. The system integrates existing systems for different areas, or systems for the same area with different schemas, so that users can get answers from all of the systems with a single query. The proposed system consists of the Source Server Manager (SSM), which supports the mapping between the integrated metadata database and the source servers; the Integrated Metadata Manager (ISM), for registering and managing the metadata and schema mappings; the Distributed Query Processor (DQP), for translating a user query into source server queries; the Distributed Data Set Integrated Manager (DDSIM), for merging the overall retrieval results into HTML format; and the integrated retrieval engine, for managing the query results. The integrated metadata is assumed to follow the ISO/IEC 11179 metadata registration procedure, with the metadata registry system included as a subsystem of the proposed system. Two kinds of queries are available to users, basic and detailed, and users may select the databases or organizations to be searched before issuing queries. The proposed system has been developed on KRISTAL-2002 systems with Visual C++ and C++ CGI and has been tested and verified with six database systems.
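
A toy sketch of the metadata-driven query translation and result merging described above; the module roles follow the paper's names (DQP, DDSIM), but every data structure and function below is an illustrative assumption:

```python
# Integrated field name -> per-source field name; a stand-in for the
# schema mappings held by the integrated metadata database.
SCHEMA_MAP = {
    "serverA": {"title": "ti",    "author": "au"},
    "serverB": {"title": "TITLE", "author": "WRITER"},
}

def translate(query, source):
    """DQP role: rewrite an integrated-schema query for one source server."""
    mapping = SCHEMA_MAP[source]
    return {mapping[field]: value for field, value in query.items()}

def source_search(source, query):
    """Stand-in for a KRISTAL-2002 source server answering a query."""
    return [{"source": source, "query": query}]

def search_all(query, sources):
    """DDSIM role: fan the translated query out and merge all results."""
    merged = []
    for src in sources:
        merged.extend(source_search(src, translate(query, src)))
    return merged

print(search_all({"title": "schema"}, ["serverA", "serverB"]))
```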

A Nonunique Composite Foreign Key-Based Approach to Fact Table Modeling and MDX Query Composing (비유일 외래키 조합 복합키 기반의 사실테이블 모델링과 MDX 쿼리문 작성법)

  • Yu, Han-Ju;Lee, Duck-Sung;Choi, In-Soo
    • Journal of the Korea Society of Computer and Information, v.12 no.1 s.45, pp.177-188, 2007
  • A star schema consists of a central fact table surrounded by one or more dimension tables. Each row in the fact table contains a multi-part primary key (or composite foreign key) along with one or more columns containing various facts about the data stored in the row. Each of the composite foreign key components is related to a dimension table. The combination of keys in the fact table creates a composite foreign key that is unique to the fact table record. In real-world applications, however, particularly financial ones, the composite foreign key is rarely unique to the fact table record. In order to make the composite foreign key a determinant in such applications, some precalculation might be performed in the SQL relational database and cached in the OLAP database. However, this approach has many drawbacks and in some cases can give users wrong results. In this paper, we propose an approach to fact table modeling and related MDX query composing that can be used in real-world applications without any precalculation and that gives users correct results.


Design and Implementation of XQL Query Processing System Using XQL-SQL Query Translation (XQL-SQL 질의 변환을 통한 XQL 질의 처리 시스템의 설계 및 구현)

  • Kim, Chun-Sig;Kim, Kyung-Won;Lee, Ji-Hun;Jang, Bo-Sun;Sohn, Ki-Rack
    • The KIPS Transactions:PartD, v.9D no.5, pp.789-800, 2002
  • XML is a standard format for web data and is currently the prevailing language for data exchange, while most commercial data are stored in relational databases. It is therefore important to be able to convert conventionally stored data into an exchange format, and to answer XQL queries effectively over XML data stored in a relational database. A proper query processing mechanism for XML data, and a way to maintain large amounts of XML data, are thus required. Many studies on the storage and retrieval of XML data have been carried out and are still under way, but an effective storage and retrieval system for path queries like XQL has yet to be devised. In this paper, we design a schema for storing XML data that uses a DFS-numbering method to store data effectively, and we design and implement an effective path query processing method on top of a traditional relational database engine. When a user issues an XQL query, an XQL processor converts it into SQL, the database system executes the SQL, and an XML generator builds an XML document from the resulting records.
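
A compact sketch of DFS numbering, the idea named in the abstract; the interval scheme below is one common variant and may differ from the paper's exact numbering:

```python
import xml.etree.ElementTree as ET

def dfs_number(root):
    """Assign (start, end) DFS numbers to every element. An ancestor's
    interval contains its descendants' intervals, so a path step such as
    XQL's '//' becomes a simple range predicate in the translated SQL.
    (Illustrative; the paper's exact numbering scheme may differ.)"""
    counter = 0
    spans = {}
    def visit(elem):
        nonlocal counter
        start = counter
        counter += 1
        for child in elem:
            visit(child)
        spans[elem] = (start, counter)
        counter += 1
    visit(root)
    return spans

root = ET.fromstring("<book><chapter><title/></chapter><chapter/></book>")
for elem, (start, end) in dfs_number(root).items():
    print(elem.tag, start, end)
# "X is a descendant of Y" translates to: Y.start < X.start AND X.end < Y.end
```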

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services, v.14 no.6, pp.71-84, 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspections and process optimization to providing customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data produced by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize the flexible storage expansion needed to process massive amounts of unstructured log data and to execute the considerable number of functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to handle with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating the block units of the aggregated log data, the proposed system offers automatic restore functions that let the system continue operating after it recovers from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have strict schemas that are inappropriate for processing unstructured log data, and such schemas make it hard to distribute the stored data across additional nodes when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified into key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, the representative document-oriented data model with a free schema structure. MongoDB is introduced because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is rapidly increasing, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects the data, classifies them according to log type, and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log insertion and query performance against a log data processing system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log insertion performance evaluation over various chunk sizes.
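
As a flavor of the MongoDB side, here is a minimal sketch assuming the pymongo package and a local mongod; the database, collection, and field names are illustrative assumptions, not the paper's design:

```python
from datetime import datetime, timezone
from pymongo import MongoClient  # assumes the pymongo package and a local mongod

# Sketch of the log collector's role: MongoDB's free schema lets log records
# of different shapes share one collection. The database, collection, and
# field names below are illustrative assumptions.
client = MongoClient("mongodb://localhost:27017")
logs = client.logdb.bank_logs

logs.insert_one({                 # a transaction-handling log record
    "type": "transaction",
    "branch": "HQ-01",
    "elapsed_ms": 42,
    "ts": datetime.now(timezone.utc),
})
logs.insert_one({                 # a differently shaped error record
    "type": "error",
    "code": "E1102",
    "ts": datetime.now(timezone.utc),
})

# Per-type counts, the kind of summary the log graph generator module plots:
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row)
```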