• Title/Summary/Keyword: 뷰 갱신 (view refresh)


Self Maintainable Data Warehouse Views for Multiple Data Sources (다중 데이터 원천을 가지는 데이터웨어하우스 뷰의 자율갱신)

  • Lee, Woo-Key
    • Asia pacific journal of information systems / v.14 no.3 / pp.169-187 / 2004
  • Self-maintainability of data warehouse (DW) views is the ability to maintain the views without access to (i) any underlying database or (ii) any information beyond the views themselves and the delta of the databases. With our proposed method, DW views can be updated using only the old views and differential files such as referential integrity differential files, linked differential files, and backward-linked differential files, which keep only the truly relevant tuples in the delta. The method avoids accessing the underlying databases altogether, achieving self-maintainability even while preparing this auxiliary information. We show that our method is applicable to DW views that contain joins over relations in a star schema, a snowflake schema, or a galaxy schema.
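A minimal sketch of the delta-only refresh idea follows, assuming an insert-only delta on a two-way join view; the function and file names are illustrative stand-ins, not the paper's notation:

```python
# Hypothetical sketch: refreshing a materialized join view V = R JOIN S
# using only the old view and delta files, never touching the base tables.
# Assumes insert-only deltas; the paper's referential-integrity and
# backward-linked differential files generalize this to deletes/updates.

def refresh_view(old_view, delta_r, linked_delta_s):
    """old_view: joined tuples (r_key, r_val, s_val)
    delta_r: new R tuples as (r_key, r_val, s_key)
    linked_delta_s: S tuples shipped alongside the delta, keyed by
                    s_key (standing in for a 'linked differential file')."""
    new_rows = []
    for r_key, r_val, s_key in delta_r:
        s_val = linked_delta_s.get(s_key)      # no access to base table S
        if s_val is not None:                  # joining tuple is in the delta file
            new_rows.append((r_key, r_val, s_val))
    return old_view + new_rows

old_view = [(1, "a", "x")]
delta_r = [(2, "b", 10), (3, "c", 99)]
linked_delta_s = {10: "y"}                     # tuple for s_key=99 absent
print(refresh_view(old_view, delta_r, linked_delta_s))
# [(1, 'a', 'x'), (2, 'b', 'y')]
```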

A Schema Version Model for Composite Objects in Object-Oriented Databases (객체지향 데이터베이스의 복합 객체를 위한 스키마 버전 모델)

  • Lee, Sang-Won;Kim, Hyeong-Ju
    • Journal of KIISE:Software and Applications / v.26 no.4 / pp.473-486 / 1999
  • This paper proposes a schema version model for object-oriented databases that supports restructuring of composite object hierarchies. The model extends RiBS, a schema version model based on the Rich Base Schema concept. In the RiBS model, each schema version is an updatable class-hierarchy view over a single base schema, and the base schema holds the schema information required by all schema versions. We introduce schema evolution operations for restructuring the composite object hierarchy of a schema version and explain their semantics, and we address how queries over hierarchies restructured by these operations are processed. We also describe the conflicts that composite object restructuring operations cause when two or more schema versions are integrated, and present solutions. The contributions of this paper are that (1) it is the first to introduce operations for restructuring composite object hierarchies, and (2) the extended RiBS model provides data independence for object-oriented databases.
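A toy sketch of the idea that a schema version is a restructurable view over a base schema; the classes and the inline_component operation are invented for illustration and are not RiBS's actual operations:

```python
# Hypothetical sketch: a schema version as a view over one base schema,
# with a composite-hierarchy restructuring operation that inlines a
# nested component class into its owner.

base_schema = {
    "Car":    {"maker": "str", "engine": "Engine"},   # composite attribute
    "Engine": {"power": "int", "cylinders": "int"},
}

def inline_component(schema, owner, attr):
    """Restructure the composite hierarchy: replace the composite
    attribute `attr` of class `owner` with the component's attributes."""
    version = {cls: dict(attrs) for cls, attrs in schema.items()}
    component = version[owner].pop(attr)
    for name, typ in version[component].items():
        version[owner][f"{attr}_{name}"] = typ        # e.g. engine_power
    return version

v1 = inline_component(base_schema, "Car", "engine")
print(v1["Car"])
# {'maker': 'str', 'engine_power': 'int', 'engine_cylinders': 'int'}
```

The base schema itself is never modified, which is what lets several schema versions coexist over it.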

Developing a Mapping Viewer of Moving Objects on SVG Wireless-Map (SVG 무선지도 상에서 이동객체 매핑 뷰어의 개발)

  • Ahn, Hae-Soon;Bu, Ki-Dong;Nam, In-Gil
    • Proceedings of the Korea Information Processing Society Conference / 2008.05a / pp.479-480 / 2008
  • This study designs and implements a J2ME-based mapping viewer that receives the coordinates of moving objects from a server and browses them as symbols overlaid on an SVG wireless map on a mobile phone. Unlike typical SVG-based map services, the proposed method marks the position of each moving object as a symbol on the map, periodically refreshes the changing positions, and overlays them on the base map, so the viewer can run even on thin clients such as personal mobile phones.
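A rough sketch of the viewer's refresh cycle, in Python rather than J2ME for brevity; fetch_positions stands in for the network call to the server, and the overlay is a plain dictionary rather than an SVG layer:

```python
# Hypothetical sketch: the base map is rendered once, and only a symbol
# overlay is redrawn as new moving-object coordinates arrive.
import time

def fetch_positions():
    # Placeholder for the server request returning moving-object
    # coordinates; here it just returns fixed sample data.
    return {"bus_07": (127.03, 37.50), "taxi_12": (127.01, 37.49)}

def refresh_overlay(overlay, positions):
    overlay.clear()                      # drop stale symbols only;
    for obj_id, (lon, lat) in positions.items():
        overlay[obj_id] = (lon, lat)     # the base map is untouched

overlay = {}
for _ in range(3):                       # three refresh cycles
    refresh_overlay(overlay, fetch_positions())
    print("overlay:", overlay)
    time.sleep(0.1)                      # refresh period (shortened)
```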

An Efficient Incremental Maintenance Method for Data Cubes in Data Warehouses (데이타 웨어하우스에서 데이타 큐브를 위한 효율적인 점진적 관리 기법)

  • Lee, Ki-Yong;Park, Chang-Sup;Kim, Myoung-Ho
    • Journal of KIISE:Databases / v.33 no.2 / pp.175-187 / 2006
  • The data cube is an aggregation operator that computes group-bys for all possible combinations of dimension attributes. When the number of dimension attributes is $n$, a data cube computes $2^n$ group-bys. Each group-by in a data cube is called a cuboid. Data cubes are often precomputed and stored as materialized views in data warehouses. These data cubes need to be updated when the source relations change. The incremental maintenance of a data cube computes and propagates only its changes. To compute the change of a data cube of $2^n$ cuboids, previous works compute a delta cube that has the same number of cuboids as the original data cube; each cuboid in a delta cube is called a delta cuboid. Thus, as the number of dimension attributes increases, the cost of computing a delta cube increases significantly. In this paper, we propose an incremental cube maintenance method that maintains a data cube using only $_nC_{\lceil n/2 \rceil}$ delta cuboids. As a result, the cost of computing a delta cube is substantially reduced. Through various experiments, we show the performance advantages of our method over previous methods.
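A toy illustration of the lattice identity such methods rely on: the delta of a coarse cuboid can be derived by re-aggregating the delta of any finer cuboid, so only a middle band of $_nC_{\lceil n/2 \rceil}$ delta cuboids needs to be computed directly from the source delta. This is not the paper's algorithm, just the underlying aggregation identity and the cuboid count:

```python
from itertools import combinations
from math import comb, ceil
from collections import defaultdict

delta = [  # source-relation changes: (A, B, C, measure)
    ("a1", "b1", "c1", 5), ("a1", "b2", "c1", 3), ("a2", "b1", "c2", 7),
]
dims = ("A", "B", "C")

def delta_cuboid(rows, keep):      # group-by over the dimensions in `keep`
    agg = defaultdict(int)
    for row in rows:
        key = tuple(row[dims.index(d)] for d in keep)
        agg[key] += row[-1]
    return dict(agg)

n = len(dims)
mid = ceil(n / 2)
print("delta cuboids computed directly:", comb(n, mid), "instead of", 2 ** n)
for keep in combinations(dims, mid):   # the middle band of the lattice
    print(keep, delta_cuboid(delta, keep))
# Coarser delta cuboids, e.g. on (A,), follow by re-aggregating (A, B).
```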

Development of geoData Acquisition System for Panoramic Image Contents Service based on Location (위치기반 파노라마 영상 콘텐츠 서비스를 위한 geoData 취득 및 처리시스템 개발)

  • Cho, Hyeon-Koo;Lee, Hyung
    • The Journal of the Korea Contents Association / v.11 no.1 / pp.438-447 / 2011
  • geoContents have become closely tied to personal life since Google introduced Google Earth and Street View and Daum introduced Road View. Demand for location-based content, referred to as geoContents, which combines geometric spatial information with location-based image information, has therefore risen sharply. Mobile mapping systems used for map updating and road facility management have had difficulty meeting this demand because of the cost and time required to obtain such content. This paper addresses a geoData acquisition and processing system for producing panoramic images. The system consists of three parts: three GPS receivers that acquire location information including position, attitude, orientation, and time; six cameras that capture image information; and a unit that synchronizes the two kinds of data. The geoData acquired by the proposed system, together with the method for authoring geoContents (panoramic images annotated with position, attitude, and orientation), can serve as an effective means of building various kinds of location-based content and bringing them into service.
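A simplified sketch of the synchronization step, assuming timestamped GPS fixes and camera frames; the data and the nearest-fix matching rule are invented for illustration:

```python
# Hypothetical sketch: tag each camera frame with the GPS fix nearest in
# time, so the six images composing one panorama share one pose record.
from bisect import bisect_left

gps_fixes = [  # (time_sec, (lat, lon, heading_deg)) from the receivers
    (0.0, (37.500, 127.030, 10.0)),
    (1.0, (37.501, 127.031, 12.0)),
    (2.0, (37.502, 127.032, 15.0)),
]

def nearest_fix(t):
    times = [ts for ts, _ in gps_fixes]
    i = bisect_left(times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(gps_fixes)]
    return min(candidates, key=lambda j: abs(gps_fixes[j][0] - t))

frames = [(0.9, f"cam{k}.jpg") for k in range(6)]   # six cameras, one shot
for t, name in frames:
    ts, pose = gps_fixes[nearest_fix(t)]
    print(name, "->", pose)          # all six share the 1.0 s fix
```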

Four Consistency Levels in Trigger Processing (트리거 처리 4 단계 일관성 레벨)

  • ;Eric Hanson
    • Journal of KIISE:Databases / v.29 no.6 / pp.492-501 / 2002
  • An asynchronous trigger processor (ATP) is a software system that processes triggers after update transactions to databases are complete. In an ATP, discrimination networks are used to check trigger conditions efficiently; discrimination networks store their internal states in memory nodes. TriggerMan is an ATP that uses the Gator network as its discrimination network. Changes in the databases are delivered to TriggerMan in the form of tokens. Processing tokens against a Gator network updates the memory nodes of the network and checks the condition of the trigger for which the network is built. Parallel token processing is one way to improve system performance, but uncontrolled parallel processing breaks the semantic consistency of trigger processing. In this paper, we propose four trigger processing consistency levels that allow parallel token processing with minimal anomalies. For each consistency level, a parallel token processing technique is developed. The techniques are proven to be valid and are also applicable to materialized view maintenance.
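A minimal sketch of one way to keep token processing consistent under parallelism: a lock per memory node serializes conflicting updates. This illustrates the problem setting only; it is not TriggerMan's Gator-network implementation or the paper's four levels:

```python
# Hypothetical sketch: tokens (database deltas) update the memory nodes
# of a discrimination network; a per-node lock prevents lost updates
# when tokens are processed by parallel threads.
import threading

class MemoryNode:
    def __init__(self):
        self.tuples = []
        self.lock = threading.Lock()

    def apply(self, token):
        with self.lock:                 # serialize conflicting updates
            if token["op"] == "+":
                self.tuples.append(token["tuple"])
            else:
                self.tuples.remove(token["tuple"])

node = MemoryNode()
tokens = [{"op": "+", "tuple": ("emp", i)} for i in range(100)]
threads = [threading.Thread(target=node.apply, args=(t,)) for t in tokens]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(len(node.tuples))                 # 100: no lost updates
```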

A Shadow Mapping Technique Separating Static and Dynamic Objects in Games using Multiple Render Targets (다중 렌더 타겟을 사용하여 정적 및 동적 오브젝트를 분리한 게임용 그림자 매핑 기법)

  • Lee, Dongryul;Kim, Youngsik
    • Journal of Korea Game Society / v.15 no.5 / pp.99-108 / 2015
  • To convey the location of objects and improve realism in 3D games, shadow mapping is widely used; it computes the depth values of vertices as seen from the light position. Since the depth values in the shadow map are calculated in world coordinates, the depth values of static objects do not need to be updated. In this paper, (1) to improve rendering speed, multiple render targets are used so that the depth values of static objects, which are stored only once, are separated from those of dynamic objects, which are stored every frame; and (2) to improve shadow quality in quarter-view 3D games, the light is positioned close to the dynamic objects and travels along with the camera each frame. The effectiveness of the proposed method is verified by experiments with different static and dynamic object configurations in a 3D game.
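A pure-Python stand-in for the two render targets, assuming depth maps as numpy arrays: the static map is rendered once, the dynamic map every frame, and the shadow test uses their element-wise minimum:

```python
# Hypothetical sketch of the separation idea behind multiple render
# targets: static geometry's depth is never re-rendered; only the
# dynamic depth map changes per frame, and the two are combined.
import numpy as np

FAR = 1.0
static_depth = np.full((4, 4), FAR)
static_depth[1:3, 1:3] = 0.4            # a static wall, rendered once

def render_dynamic(frame):
    depth = np.full((4, 4), FAR)
    depth[frame % 4, 0] = 0.2           # a moving character, per frame
    return depth

for frame in range(2):
    shadow_map = np.minimum(static_depth, render_dynamic(frame))
    print(f"frame {frame}:\n{shadow_map}")
```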

Metadata Management Method for Consistency and Recency in Digital Library (디지탈 도서관 환경에서 일관성과 최근성을 고려한 메타데이타 관리 방법)

  • Lee, Hai-Min;Park, Seog
    • Journal of KIISE:Databases / v.27 no.1 / pp.22-32 / 2000
  • The Digital Library is an integrated system combining an Information Retrieval System (IRS) and a Database Management System (DBMS). In the Digital Library environment, where dynamic query and update processing is required, the existing transaction management methods cause the following problems. First, since the traditional consistency criteria are too restrictive, they increase query processing time and cannot guarantee that recency is reflected. Second, query results can be unreliable because no consistency criteria between source data and metadata are defined. This paper models accesses to metadata based on Dublin Core as query transactions and update transactions, and gives an efficient method to manage them. In particular, unlike the consistency criteria of traditional transaction management, this paper describes consistency criteria for metadata that take into consideration the consistency between the result of a query transaction and the state of the source data in the Digital Library. It also analyzes how query transactions can reflect recency and proposes metadata management that guarantees recency within metadata consistency.
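A small sketch of a recency check of the kind the paper formalizes, assuming each metadata record carries the timestamp of the source state it reflects; the record layout and staleness bound are invented:

```python
# Hypothetical sketch: a query transaction accepts a Dublin Core record
# only if it is no staler than a recency bound relative to the source.
import time

metadata = {"doc42": {"dc:title": "On Views", "source_ts": time.time() - 5}}
source_ts = {"doc42": time.time()}       # source updated after indexing

def read_metadata(doc_id, recency_bound_sec):
    record = metadata[doc_id]
    staleness = source_ts[doc_id] - record["source_ts"]
    if staleness > recency_bound_sec:
        raise RuntimeError(f"record {staleness:.1f}s stale; re-index first")
    return record

try:
    read_metadata("doc42", recency_bound_sec=1.0)
except RuntimeError as e:
    print(e)
```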


Efficient Management of Statistical Information of Keywords on E-Catalogs (전자 카탈로그에 대한 효율적인 색인어 통계 정보 관리 방법)

  • Lee, Dong-Joo;Hwang, In-Beom;Lee, Sang-Goo
    • The Journal of Society for e-Business Studies / v.14 no.4 / pp.1-17 / 2009
  • E-Catalogs, which describe products or services, are among the most important data in electronic commerce. E-Catalogs are created, updated, and removed to keep the information in the e-Catalog database up to date. As the number of catalogs grows, however, information integrity is violated for several reasons, such as catalog duplication and misclassification. Catalog search, duplication checking, and automatic classification are important functions for utilizing e-Catalogs and keeping the integrity of the e-Catalog database. Probabilistic models that use the statistics of index words extracted from e-Catalogs have been suggested for implementing these functions, and their feasibility has been shown in several papers. However, even though these functions are used together in an e-Catalog management system, little consideration has been given to how to share the common data used by each function and how to manage index word statistics effectively. In this paper, we suggest a method to implement all three functions using simple SQL supported by relational database management systems. In addition, we use materialized views to reduce the effort of implementing an application that manages index word statistics, letting the database management system optimize the updating of the statistics. We show through an empirical evaluation that our method is feasible for implementing the three functions and effective for managing index word statistics.
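A sketch of the summary-table idea using sqlite3, which lacks materialized views, so a trigger-maintained statistics table stands in for one; the schema and data are invented:

```python
# Hypothetical sketch: per-keyword frequencies kept current by the DBMS
# itself via a trigger, so the application never recomputes statistics.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE catalog_keywords (catalog_id INTEGER, keyword TEXT);
CREATE TABLE keyword_stats (keyword TEXT PRIMARY KEY, doc_freq INTEGER);
CREATE TRIGGER kw_ins AFTER INSERT ON catalog_keywords BEGIN
  INSERT OR IGNORE INTO keyword_stats VALUES (NEW.keyword, 0);
  UPDATE keyword_stats SET doc_freq = doc_freq + 1
   WHERE keyword = NEW.keyword;
END;
""")
db.executemany("INSERT INTO catalog_keywords VALUES (?, ?)",
               [(1, "laptop"), (1, "ssd"), (2, "laptop")])
print(db.execute("SELECT * FROM keyword_stats ORDER BY keyword").fetchall())
# [('laptop', 2), ('ssd', 1)]
```

An engine with true materialized views would declare the statistics as a view over catalog_keywords and let the engine refresh it, which is the load reduction the paper aims at.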


Techniques of XML Query Caching on the Web (웹에서의 XML 질의 캐쉬 기법)

  • Park, Dae-Sung;Kang, Hyun-Chul
    • The Journal of Society for e-Business Studies / v.11 no.1 / pp.1-23 / 2006
  • As data on the Web is increasingly represented in XML due to the proliferation of Web applications such as e-Commerce, it is strongly required to process XML queries rapidly. One such technique is XML query caching: for frequently submitted queries, the results can be cached to guarantee fast responses to the same queries. In this paper, we propose techniques for improving XML query performance whereby the set of node identifiers (NIS) for an XML query is cached. An NIS is the most common format for an XML query result, consisting of the identifiers of the XML elements that make up the result. An NIS suits the data retrieval requirements of Web applications because reconstruction and/or modification of query results and integration of multiple query results can be done efficiently, and an NIS can be incrementally refreshed against updates to its source. When the query result is requested in XML, however, the NIS must be materialized by retrieving the source XML elements through their identifiers. In this paper, we consider three different types of NISs, proposing algorithms for their creation, materialization, and incremental refresh. All of them were implemented using an RDBMS. Through a detailed set of performance experiments, we show the efficiency of the proposed XML query caching techniques.
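A minimal sketch of NIS caching, with a dictionary standing in for the RDBMS storage the paper uses; the query strings, identifiers, and refresh hook are invented for illustration:

```python
# Hypothetical sketch: a query's result is cached as element identifiers
# (the NIS); materialization resolves the IDs back to XML, and a source
# insert refreshes the set incrementally instead of re-running the query.

elements = {  # id -> serialized XML element (the "source")
    "e1": "<book>XML Basics</book>",
    "e2": "<book>Query Caching</book>",
}
nis_cache = {"//book": {"e1", "e2"}}    # cached query result as IDs

def materialize(query):
    return [elements[i] for i in sorted(nis_cache[query])]

def on_insert(elem_id, xml, matching_queries):
    elements[elem_id] = xml
    for q in matching_queries:          # incremental refresh of the NIS
        nis_cache[q].add(elem_id)

on_insert("e3", "<book>Web Data</book>", ["//book"])
print(materialize("//book"))            # all three books, no re-query
```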
