• Title/Summary/Keyword: Physical Database Design (물리적 데이타베이스 설계)

8 search results

Design of The Geographic Information Database Structure for Census Mapping (센서스 지도제작을 위한 지리정보데이타베이스 구조연구)

  • 김설희
    • Spatial Information Research
    • /
    • v.1 no.1
    • /
    • pp.17-28
    • /
    • 1993
  • In order to minimize vectorizing tasks, which require huge resources and time, and to support census mapping effectively, a geographic information database structure has been studied. The steps of the new approach are as follows: Step 1, scan the maps of the whole country and store the image data in raster format. Step 2, vectorize the data of specific items needed for Census operations, such as Enumeration Districts, and then link them to attribute data in text format. Step 3, design the database with a Tile and Multi-layer structure to make a logically continuous map. Step 4, implement the Census Mapping System (CMS) for efficient mapping and retrieval. As a consequence of this study, the cost, manpower, and time effectiveness was proved, and the system was confirmed to produce useful, high-quality maps for the Census. In the future, this system will be able to provide many organizations and individuals with various data based on geographical statistical information.

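The Tile and Multi-layer structure of Step 3 can be sketched roughly as follows. The tile size, layer name, and feature values below are illustrative assumptions, not details from the paper; the point is that fixed-size tiles keyed by (row, col), each holding named layers, let a query window assemble a logically continuous map from adjacent tiles.

```python
TILE_SIZE = 1000  # map units per tile edge (illustrative)

def tile_id(x, y):
    """Map a coordinate to the (row, col) id of the tile containing it."""
    return (int(y // TILE_SIZE), int(x // TILE_SIZE))

class TiledMap:
    def __init__(self):
        self.tiles = {}  # (row, col) -> {layer_name: [features]}

    def add_feature(self, x, y, layer, feature):
        layers = self.tiles.setdefault(tile_id(x, y), {})
        layers.setdefault(layer, []).append(feature)

    def query(self, x0, y0, x1, y1, layer):
        """Collect one layer's features from every tile covering the window,
        giving the illusion of a single continuous map."""
        r0, c0 = tile_id(x0, y0)
        r1, c1 = tile_id(x1, y1)
        out = []
        for r in range(r0, r1 + 1):
            for c in range(c0, c1 + 1):
                out.extend(self.tiles.get((r, c), {}).get(layer, []))
        return out
```

A query window spanning a tile boundary simply gathers features from both tiles, which is what makes the tiled store behave like one continuous map.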

Optimal Construction of Multiple Indexes for Time-Series Subsequence Matching (시계열 서브시퀀스 매칭을 위한 최적의 다중 인덱스 구성 방안)

  • Lim, Seung-Hwan;Kim, Sang-Wook;Park, Hee-Jin
    • Journal of KIISE:Databases
    • /
    • v.33 no.2
    • /
    • pp.201-213
    • /
    • 2006
  • A time-series database is a set of data sequences, each of which records the changing values of an object over a given period of time. Subsequence matching is an operation that searches a time-series database for data subsequences whose changing patterns are similar to a query sequence. This paper addresses a performance issue in time-series subsequence matching. First, we quantitatively examine the performance degradation caused by the window size effect, and show that the performance of subsequence matching with a single index is not satisfactory in real applications. We argue that index interpolation is useful for resolving this problem. Index interpolation performs subsequence matching by selecting the most appropriate index from multiple indexes built on windows of different sizes. For index interpolation, we must first decide the sizes of the windows on which the multiple indexes are built. In this paper, we solve the problem of selecting optimal window sizes from the perspective of physical database design. Given a set of query sequences to be performed on a target time-series database and a set of window sizes for building multiple indexes, we devise a formula that estimates the cost of all the subsequence matchings. Based on this formula, we propose an algorithm that determines the optimal window sizes for maximizing the performance of the entire set of subsequence matchings. We formally prove the optimality as well as the effectiveness of the algorithm. Finally, we perform a series of extensive experiments with a real-life stock data set and a large synthetic data set. The results reveal that the proposed approach improves on the previous one by 1.5 to 7.8 times.
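The window-selection idea can be illustrated with a toy cost model. The formula below is only a stand-in for the paper's cost estimate (which is derived from the index and data characteristics); each query uses the cheapest applicable index, and the subset of window sizes minimizing total estimated cost is chosen.

```python
from itertools import combinations

def matching_cost(query_len, window):
    """Toy cost model (not the paper's formula): an index on window size w
    can serve a query of length q >= w; the number of windows to probe grows
    with q - w + 1, and smaller windows admit more false candidates."""
    if window > query_len:
        return float("inf")  # index unusable for this query
    return (query_len - window + 1) * (query_len / window)

def best_window_sizes(query_lens, candidates, k):
    """Choose k window sizes minimizing total cost when each query is served
    by the cheapest applicable index (the 'index interpolation' idea)."""
    best, best_cost = None, float("inf")
    for subset in combinations(sorted(candidates), k):
        cost = sum(min(matching_cost(q, w) for w in subset) for q in query_lens)
        if cost < best_cost:
            best, best_cost = subset, cost
    return best, best_cost
```

For a workload of queries of lengths 64 and 128, windows matching the query lengths dominate, which mirrors the intuition that the window size effect penalizes a single compromise index.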

SDTS Conversion System (SDTS 변환 시스템)

  • Lee, Kang-Jun;Kim, Jun-Jong;Sul, Young-Min;Han, Ki-Joon
    • 한국공간정보시스템학회:학술대회논문집
    • /
    • 1998.07a
    • /
    • pp.181-195
    • /
    • 1998
  • Geographic Information Systems (GIS), by their nature, handle very large volumes of GIS data and are implemented on diverse operating systems and hardware platforms. Because spatial data on such heterogeneous platforms generally use different GIS data formats, data sharing becomes very difficult without an efficient means of exchange, and the redundant storage and management of common data incurs enormous economic losses. To address this problem, work on a common exchange standard format for GIS data began abroad more than a decade ago; domestically, national GIS standards are being established and a basic spatial database is being built at the national level. Accordingly, SDTS (Spatial Data Transfer Standard) has been adopted as the standard for the national base format and the common data exchange format. This paper presents the overall SDTS conversion process required to implement a general-purpose SDTS conversion system, including spatial data analysis, logical design, and physical design, along with the test verification items and the rules that must be observed. Finally, it discusses the design and implementation of a conversion system between GOTHIC, an actual spatial data format, and SDTS.


Efficient Storage Techniques for Multidimensional Index Structures in Multi-Zoned Disk Environments (다중 존 디스크 환경에서 다차원 인덱스 구조의 효율적 저장 기법)

  • Yu, Byung-Gu;Kim, Seon-Ho;Chang, Jae-Young
    • Journal of KIISE:Databases
    • /
    • v.34 no.4
    • /
    • pp.315-327
    • /
    • 2007
  • The performance of database applications with large sets of multidimensional data depends on the performance of their access methods and the underlying disk system. Even though modern disks are manufactured with multiple physical zones, conventional access methods have been developed on a traditional disk model with many simplifying assumptions. Thus, there is a marked lack of investigation into how to enhance the performance of access methods under a zoned disk model. This paper proposes novel zoning techniques that can be applied to any multidimensional access method, both static and dynamic, enhancing the effective data transfer rate of the underlying disk system by fully utilizing its zone characteristics. Our zoning techniques include data placement algorithms for multidimensional index structures and accompanying localized query processing algorithms for range queries. The experimental results show that our zoning techniques significantly improve query performance.
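The core placement idea can be sketched greedily: frequently accessed index nodes go to the zones with the highest transfer rates. This is only an illustration under assumed zone parameters; the paper's placement and localized range-query algorithms are more involved.

```python
def place_in_zones(nodes, zones):
    """Greedy sketch of zone-aware placement. nodes is a list of
    (node_id, access_frequency); zones is a list of (capacity_in_nodes,
    transfer_rate). Hot nodes are assigned to the fastest zones first,
    raising the effective transfer rate seen by queries."""
    order = sorted(nodes, key=lambda n: -n[1])                    # hottest first
    zone_order = sorted(range(len(zones)), key=lambda z: -zones[z][1])
    placement = {}
    free = [zones[z][0] for z in range(len(zones))]
    for node_id, _freq in order:
        for z in zone_order:                  # fastest zone with room
            if free[z] > 0:
                placement[node_id] = z
                free[z] -= 1
                break
    return placement
```

For an index, the root and upper levels are touched by every traversal, so a frequency-based ordering naturally pushes them into the fast outer zones.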

Efficient Storage Techniques for Materialized Views Using Multi-Zoned Disks in OLAP Environment (OLAP 환경에서 다중 존 디스크를 활용한 실체뷰의 효율적 저장 기법)

  • Chang, Jae-Young
    • The Journal of Society for e-Business Studies
    • /
    • v.14 no.1
    • /
    • pp.143-160
    • /
    • 2009
  • In determining the performance of OLAP database applications, the structure of the underlying disk system and effective access methods to it are significant factors. In recent years, hard disks have been designed with multiple physical zones, in which seek times and data transfer rates vary across the zones. Previous work, however, gives little consideration to multi-zoned disks; instead, it assumes a traditional disk model with many simplifying assumptions, such as an average seek time and a single data transfer rate. In this paper, we propose a technique for storing a set of materialized views on multi-zoned disks in an OLAP environment dealing with large sets of data. We first present a disk zoning algorithm that places materialized views according to the access probability of each view. We also address the problem of storing views in a dynamic environment where data are updated continuously. Finally, through experiments, we demonstrate the performance improvement of the proposed algorithm over conventional methods.

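A minimal sketch of probability-driven view zoning, assuming zones are listed fastest first and view sizes, probabilities, and zone parameters are illustrative (the paper's algorithm, including the dynamic update case, is more elaborate): views with the highest access probability per unit size are stored in the fastest zones, and the expected read cost is the probability-weighted transfer time.

```python
def zone_views(views, zones):
    """views: [(name, size, access_prob)]; zones: [(capacity, transfer_rate)]
    ordered fast to slow. Returns (placement, expected read cost), where the
    cost of a view in zone z is prob * size / transfer_rate."""
    order = sorted(views, key=lambda v: -(v[2] / v[1]))   # prob density first
    placement, expected = {}, 0.0
    free = [cap for cap, _rate in zones]
    for name, size, prob in order:
        for z, (_cap, rate) in enumerate(zones):          # fastest zone w/ room
            if free[z] >= size:
                placement[name] = z
                free[z] -= size
                expected += prob * size / rate
                break
    return placement, expected
```

Ranking by access probability per unit size (rather than raw probability) keeps a rarely used but huge view from crowding small hot views out of the fast zone.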

Branch-and-bound method for solving n-ary vertical partitioning problems in physical design of database (데이타베이스의 물리적 설계에서 분지한계법을 이용한 n-ary 수직분할문제)

  • Yoon, Byung-Ik;Kim, Jae-Yern
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.22 no.4
    • /
    • pp.567-578
    • /
    • 1996
  • In relational databases, the number of disk accesses depends on the amount of data transferred from disk to main memory when processing transactions. N-ary vertical partitioning of a relation can often decrease the number of disk accesses, since not all attributes in a tuple are required by every transaction. In this paper, a 0-1 integer programming model for the n-ary vertical partitioning problem, minimizing the number of disk accesses, is formulated, and a branch-and-bound method is used to solve it. A preprocessing procedure that reduces the number of variables is presented. The algorithm is illustrated with numerical examples and shown to be computationally efficient. Numerical experiments reveal that the proposed method is more effective in reducing access costs than existing algorithms.

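The objective being optimized can be illustrated with a brute-force version of the problem: enumerate every way of splitting the attribute set into fragments and charge each transaction for every fragment containing an attribute it touches. This enumeration is exponential, which is precisely why the paper formulates a 0-1 integer program and prunes the space with branch-and-bound; the toy widths and transaction frequencies below are assumptions for illustration.

```python
def partitions(items):
    """Enumerate all set partitions of items (feasible only for small n;
    the paper prunes this space with branch-and-bound instead)."""
    if not items:
        yield []
        return
    head, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[head] + part[i]] + part[i + 1:]
        yield [[head]] + part

def access_cost(part, transactions, width):
    """For each (attrs, frequency) transaction, charge the total width of
    every fragment containing at least one attribute it needs."""
    cost = 0
    for attrs, freq in transactions:
        for frag in part:
            if set(frag) & set(attrs):
                cost += freq * sum(width[a] for a in frag)
    return cost

def best_partition(attributes, transactions, width):
    return min(partitions(attributes),
               key=lambda p: access_cost(p, transactions, width))
```

With one transaction reading only `a` and another reading `b` and `c`, separating `a` from `{b, c}` avoids dragging unused attributes through memory, which is the effect the 0-1 model captures.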

An Allocation Methodology on Distributed Databases Using the Genetic Algorithms (유전자 알고리즘을 이용한 분산 데이터베이스 할당 방법론)

  • 박성진;박화규;손주찬;박상봉;백두권
    • The Journal of Information Technology and Database
    • /
    • v.5 no.1
    • /
    • pp.1-12
    • /
    • 1998
  • In a distributed environment, data allocation is an important design issue. Data allocation must be optimized to maximize the benefits of distributed data, such as reduced cost and improved performance and availability. Most previous studies present allocation results optimized only toward minimizing transaction execution cost; no prior work considers cost, performance, and availability together, because no suitable optimization technique existed for such a complex model. This study proposes DAMMA (Data Allocation Methodology considering Multiple Aspects), a methodology that provides Pareto-optimal solutions for data allocation by simultaneously considering multiple aspects of distributed data: cost, performance, and availability. DAMMA is a design methodology that can generate Pareto-optimal solutions which redundantly allocate the optimal fragments produced by the data partitioning process to each physical site, taking into account the operating cost, execution performance, and availability of the distributed system.

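A genetic algorithm for fragment allocation can be sketched with a chromosome that assigns each fragment to a site. Note the simplification: the sketch scalarizes the objectives into one weighted sum, whereas DAMMA keeps cost, performance, and availability separate to produce a Pareto-optimal set; the cost matrix, weights, and GA parameters are all illustrative assumptions.

```python
import random

def fitness(assign, comm_cost, load_balance_w):
    """Hypothetical scalarized objective (lower is better): communication
    cost of each fragment at its assigned site, plus a crude load-balance
    penalty that stands in for the performance/availability aspects."""
    cost = sum(comm_cost[f][site] for f, site in enumerate(assign))
    counts = [assign.count(s) for s in set(assign)]
    return cost + load_balance_w * (max(counts) - min(counts))

def evolve(n_frags, n_sites, comm_cost, generations=200, pop_size=30, seed=0):
    """Elitist GA: keep the better half, refill with one-point crossover
    children, and mutate a child's site assignment with low probability."""
    rng = random.Random(seed)
    pop = [[rng.randrange(n_sites) for _ in range(n_frags)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: fitness(a, comm_cost, 1.0))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_frags)
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:                      # mutation
                child[rng.randrange(n_frags)] = rng.randrange(n_sites)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: fitness(a, comm_cost, 1.0))
```

A Pareto-based variant would replace the single `fitness` value with a vector of objectives and select by non-dominated sorting instead of a sort on one number.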

The Analysis Method based on the Business Model for Developing Web Application Systems (웹 응용 시스템 개발을 위한 업무모델 기반의 분석방법)

  • 조용선;정기원
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.12
    • /
    • pp.1193-1207
    • /
    • 2003
  • Various web applications are being developed as the Internet becomes popular in many fields. In most web application development, however, systematic analysis is omitted and developers jump straight into implementation, so they have difficulty applying development methods to large-scale projects. This paper proposes an approach for creating the analysis models of a web application from a business model, for rapid and efficient development, together with the analysis process, tasks, and techniques this approach requires. The use case diagram and the web page list are created from a business model depicted using the notation of the UML activity diagram. The page diagram and the logical/physical database models are then created using the use case diagram and the web page list. These analysis models are refined during the detailed design phase. The efficiency of the proposed method is shown through a practical case study reflecting a project that developed a web application supporting an association of auto repair shops.