• Title/Summary/Keyword: real-time databases

Search results: 184

Real-Time Optimistic Concurrency Control using Thomas’ Write Rule (Thomas 기록 규칙을 이용한 실시간 낙관적 동시성 제어)

  • Kim, Mal-Hee; Park, Seog
    • Journal of KIISE: Databases / v.27 no.4 / pp.596-603 / 2000
  • Optimistic techniques are well suited as concurrency control for real-time database systems. In particular, in firm real-time database systems, where transactions that miss their deadlines are removed from the system, optimistic techniques outperform locking techniques. However, optimistic techniques suffer from wasted execution and excessive restarts. Restarting a transaction that is close to completion not only wastes system resources but also increases the chance of missing the deadline. To reduce the number of restarts, methods that dynamically adjust the serialization order among conflicting transactions have been used, but even with such dynamic adjustment, unnecessary restarts still occur. This paper proposes an improved real-time optimistic concurrency control scheme that eliminates these unnecessary restarts by applying Thomas' Write Rule, previously used in timestamp-based concurrency control. The proposed method improves performance by reducing the number of restarts while still guaranteeing the required database consistency.

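For readers unfamiliar with the rule this first entry builds on: Thomas' Write Rule lets a timestamp-ordered scheduler silently skip an obsolete write instead of restarting the transaction. The following is a minimal Python sketch of the classical rule only, with hypothetical names; it is not the real-time optimistic protocol proposed in the paper.

    class DataItem:
        def __init__(self):
            self.value = None
            self.read_ts = 0    # largest timestamp of any transaction that read the item
            self.write_ts = 0   # largest timestamp of any transaction that wrote the item

    class TransactionAbort(Exception):
        pass

    def timestamp_write(item, value, ts):
        """Apply a write with transaction timestamp `ts` under Thomas' Write Rule."""
        if ts < item.read_ts:
            # A younger transaction already read the older value: the write is unsafe.
            raise TransactionAbort("restart: write arrived too late")
        if ts < item.write_ts:
            # Thomas' Write Rule: a younger write has already overwritten the item,
            # so this obsolete write is simply ignored instead of forcing a restart.
            return
        item.value = value
        item.write_ts = ts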

Lock-based Secure Protocol in Real-Time Databases (실시간 데이터베이스에서 로킹기반 보안 프로토콜)

  • 박수연; 이승룡
    • Proceedings of the Korean Information Science Society Conference / 1998.10b / pp.211-213 / 1998
  • Database systems for real-time applications must satisfy timing constraints and maintain data consistency. In addition, it is important that a multilevel security protocol prevents the creation of covert channels. Son and Mukkamala developed SRT-2PL, which uses primary and secondary copies. This protocol supports non-interference between security levels, prevents covert channels, and, because it causes few delays and few aborts, can be used to maintain security in real-time database systems. However, since a secondary copy must be maintained for every data object at all times, it wastes working space, and the overhead of managing an update queue for propagating changes, together with the resulting loss of predictability, remains a problem. This paper therefore proposes an improved SRT-2PL security protocol for real-time databases that supports non-interference to prevent covert channels, strengthens real-time support by shortening the lifetime of the copies, and further improves predictability. The dynamic copy algorithm proposed here creates copies dynamically according to transaction activity, providing non-interference between levels while shortening the lifetime of the copies, reducing wasted working space, and improving predictability.
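The abstract above describes the dynamic-copy idea only informally, so the following is a loose, heavily simplified Python sketch of that idea, under the assumption that a copy exists only while a higher-level reader needs it; it is not the actual SRT-2PL specification.

    class SecureItem:
        def __init__(self, value, level):
            self.primary = value      # current value, used by same-level transactions
            self.secondary = None     # copy created on demand for read-down requests
            self.level = level        # security level of the data item
            self.readers_down = 0     # higher-level readers currently using the copy

    def read(item, txn_level):
        if txn_level > item.level:
            # Read-down: serve the reader from a lazily created copy, so a concurrent
            # lower-level writer on the primary is never delayed by this reader
            # (such a delay would itself act as a covert timing channel).
            if item.secondary is None:
                item.secondary = item.primary
            item.readers_down += 1
            return item.secondary
        return item.primary           # same-level access uses ordinary locking

    def finish_read_down(item):
        item.readers_down -= 1
        if item.readers_down == 0:
            item.secondary = None     # the copy lives only as long as it is needed

    def write(item, new_value, txn_level):
        assert txn_level == item.level, "writes only at the item's own level"
        item.primary = new_value      # never blocked by higher-level readers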

Efficient Labeling Scheme for Query Processing over XML Fragment Stream in Wireless Computing (무선 환경에서 XML 조각 스트림 질의 처리를 위한 효율적인 레이블링 기법)

  • Ko, Hye-Kyeong
    • The KIPS Transactions: Part D / v.17D no.5 / pp.353-358 / 2010
  • Unlike traditional databases, queries over XML streams are restricted to real-time processing and limited memory usage. In this paper, a robust labeling scheme is proposed that quickly identifies the structural relationships between XML fragments. The proposed labeling scheme enables effective query processing by removing many redundant operations and minimizing the number of fragments being processed. Experimental results show that the proposed scheme processes queries efficiently and optimizes memory usage.
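The abstract does not spell out the labels themselves, so the sketch below uses generic Dewey-style labels purely to illustrate how a structural label lets a stream processor test ancestor/descendant relationships between fragments without materializing the whole document; it is not the specific scheme proposed in the paper.

    def is_ancestor(label_a, label_b):
        """True if the node labeled `label_a` is an ancestor of the node labeled `label_b`.

        Labels are tuples of sibling positions along the root-to-node path:
        (1,) is the root, (1, 2) its second child, (1, 2, 1) a grandchild, and so on.
        """
        return len(label_a) < len(label_b) and label_b[:len(label_a)] == label_a

    # A fragment labeled (1, 2) structurally contains any fragment whose label
    # extends it, so the relationship is decided by comparing labels alone.
    assert is_ancestor((1, 2), (1, 2, 3, 4))
    assert not is_ancestor((1, 3), (1, 2, 3))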

Extracting meeting location from seminar and conference announcement in English

  • Kim, Anatoliy; Choi, Dong-Hyun; Choi, Key-Sun
    • Proceedings of the Korean Information Science Society Conference / 2011.06c / pp.258-261 / 2011
  • Living in the age of information, people face problems related to information overload. Information is easy to produce, store, and distribute through various communication channels, one of which is email. With the appearance of mobile devices such as smartphones and tablets, people can access their email inbox at any time, from anywhere. In this paper we present an information extraction system with the specific goal of extracting the meeting location from seminar and conference announcements. We apply a machine learning method (conditional random fields, CRF), train the system on an annotated corpus of seminar and conference announcements, and validate the results by applying various extracted correction rules and patterns. Furthermore, we normalize the extracted locations and resolve them against geo-coding databases, OpenStreetMap, and Wikipedia resources to determine their real geographical coordinates.
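As a rough illustration of the token-labeling setup this abstract describes, the sketch below assumes the third-party sklearn-crfsuite package and invents minimal features and a toy annotated sentence; none of this reflects the authors' actual feature set or corpus.

    import sklearn_crfsuite

    def token_features(tokens, i):
        word = tokens[i]
        return {
            "lower": word.lower(),
            "is_title": word.istitle(),    # "Room", "Hall", city names
            "is_digit": word.isdigit(),    # room numbers
            "prev": tokens[i - 1].lower() if i > 0 else "<s>",
            "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
        }

    # Toy annotated announcement: BIO labels mark the meeting-location span.
    train_sents = [["The", "seminar", "takes", "place", "in", "Room", "231", "."]]
    train_labels = [["O", "O", "O", "O", "O", "B-LOC", "I-LOC", "O"]]

    X = [[token_features(s, i) for i in range(len(s))] for s in train_sents]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
    crf.fit(X, train_labels)
    print(crf.predict(X))   # predicted tags; the extracted tokens can then be geocoded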

Virtual Manufacturing for an Automotive Company (II) - Construction and Operation of a Virtual Body Shop (자동차 가상생산 기술 적용 (II) - 차체공장 가상플랜트 구축 및 운영)

  • Noh, Sang-Do; Hong, Sung-Won; Kim, Duk-Young; Sohn, Chang-Young; Hahn, Hyung-Sang
    • IE interfaces / v.14 no.2 / pp.127-133 / 2001
  • Virtual manufacturing is a technology that facilitates effective development and agile production of products via computer models representing the physical and logical schema and the behavior of real manufacturing systems. For the successful application of this technology, a virtual plant, as a well-designed and integrated environment, is essential. In this paper we propose a series of systematic approaches and effective methods for the construction and operation of a virtual plant, covering 3-D CAD modeling, cell and line simulations, and databases. We developed key technologies for measuring and 3-D CAD modeling of the many pieces of equipment, facilities, and building structures. To study the benefits of virtual manufacturing, we constructed a sophisticated virtual plant model of a Korean automotive company's body shop and conducted precise simulations of unit cells, lines, and the whole plant. We obtained savings in time and cost in many manufacturing preparation activities of the new car development process.


The MAPN Modeling for the distributed Data Allocation based on Multiple Aspects (다중 측면 기반의 분산 데이터 할당을 위한 MAPN 모델링)

  • Park, Seong-Jin
    • The Transactions of the Korea Information Processing Society / v.7 no.3 / pp.745-755 / 2000
  • In distributed database design, the data allocation problem (DAP) is one of the key design issues. However, because most previous research on the DAP has considered only the cost aspect, the resulting allocations cannot improve performance and availability and are not suitable for systems requiring high availability or real-time processing. A more formal data allocation model that considers multiple aspects is therefore needed. In this paper, we propose the MAPN (Multiple Aspects Petri Net) modeling method for distributed transaction modeling. The MAPN model, an extension of the classical Petri net, is proposed for formal modeling that considers multiple aspects (cost, performance, and availability) concurrently. We demonstrate that a valid DAP evaluation model considering not only cost but also performance and availability can be composed by using the MAPN structure and the MAPN graph.


A single-phase algorithm for mining high utility itemsets using compressed tree structures

  • Bhat B, Anup; SV, Harish; M, Geetha
    • ETRI Journal / v.43 no.6 / pp.1024-1037 / 2021
  • Mining high utility itemsets (HUIs) from transaction databases considers such factors as the unit profit and quantity of purchased items. Two-phase tree-based algorithms transform a database into compressed tree structures and generate candidate patterns through a recursive pattern-growth procedure. This procedure requires a lot of memory and time to construct conditional pattern trees. To address this issue, this study employs two compressed tree structures, namely, the Utility Count Tree and the String Utility Tree, to enumerate valid patterns and thus promote fast utility computation. Furthermore, the study presents an algorithm called single-phase utility computation (SPUC) that leverages these two tree structures to mine HUIs in a single phase by incorporating novel pruning strategies. Experiments conducted on both real and synthetic datasets demonstrate the superior performance of SPUC compared with the IHUP, UP-Growth, and UP-Growth+ algorithms.
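For readers new to the problem, the utility being mined is quantity times unit profit, summed over the transactions that contain the whole itemset. The brute-force baseline below is only meant to make that definition concrete; the SPUC algorithm in the paper avoids this exhaustive enumeration with its tree structures.

    from itertools import combinations

    unit_profit = {"milk": 1, "bread": 2, "butter": 5}
    # Each transaction maps an item to the purchased quantity.
    transactions = [
        {"milk": 4, "bread": 2},
        {"milk": 1, "bread": 1, "butter": 3},
        {"butter": 2},
    ]

    def utility(itemset, txn):
        """Utility of `itemset` in one transaction: quantity x unit profit,
        or 0 if the transaction does not contain every item of the itemset."""
        if not all(item in txn for item in itemset):
            return 0
        return sum(txn[item] * unit_profit[item] for item in itemset)

    def high_utility_itemsets(min_util):
        items = list(unit_profit)
        result = {}
        for size in range(1, len(items) + 1):
            for itemset in combinations(items, size):
                total = sum(utility(itemset, t) for t in transactions)
                if total >= min_util:
                    result[itemset] = total
        return result

    # ('butter',) alone reaches 3*5 + 2*5 = 25, so it is a high utility itemset here.
    print(high_utility_itemsets(min_util=10))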

Management System of On-line Mode Client-cluster (온라인 모드 클라이언트-클러스터 운영 시스템)

  • 박제호; 박용범
    • Journal of the Korea Academia-Industrial cooperation Society / v.4 no.2 / pp.108-113 / 2003
  • Research results have demonstrated that conventional client-server databases have a scalability problem in the presence of many concurrent clients. The multi-tier architecture that exploits similarities in clients' object access behavior partitions clients into logical clusters according to their object request patterns. As a result, object requests are served inside the clusters, and server load and request response time can be optimized. Managing the clustering based on clients' access patterns is an important component in achieving this goal. Off-line methods optimize the quality of the global clustering, but their cost and clustering schedule need to be considered and planned carefully to keep the system's performance stable. In this paper, we propose methods that detect changes in access behavior and optimize the system configuration in real time. Finally, this paper demonstrates the effectiveness of on-line change detection and presents the results of an experimental investigation concerning reconfiguration.

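As a loose illustration of the clustering idea in the entry above, the sketch below groups clients by the similarity of the object sets they request and flags clients whose recent accesses have drifted from the pattern they were clustered on; the greedy grouping and the thresholds are hypothetical, not the on-line algorithm proposed in the paper.

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 1.0

    def cluster_clients(access_sets, threshold=0.5):
        """Greedy grouping: a client joins the first cluster whose representative
        access set is similar enough, otherwise it starts a new cluster."""
        clusters = []   # list of (representative_object_set, [client_ids])
        for client, objects in access_sets.items():
            for rep, members in clusters:
                if jaccard(objects, rep) >= threshold:
                    members.append(client)
                    rep |= objects            # grow the representative pattern
                    break
            else:
                clusters.append((set(objects), [client]))
        return clusters

    def pattern_changed(old, recent, drift=0.4):
        """On-line change detection: flag a client whose recent accesses no longer
        resemble the pattern it was clustered on, triggering reconfiguration."""
        return jaccard(old, recent) < drift

    access_sets = {"c1": {1, 2, 3}, "c2": {2, 3, 4}, "c3": {9, 10}}
    print(cluster_clients(access_sets))      # c1 and c2 share a cluster, c3 does not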

The Performance Bottleneck of Subsequence Matching in Time-Series Databases: Observation, Solution, and Performance Evaluation (시계열 데이타베이스에서 서브시퀀스 매칭의 성능 병목 : 관찰, 해결 방안, 성능 평가)

  • 김상욱
    • Journal of KIISE: Databases / v.30 no.4 / pp.381-396 / 2003
  • Subsequence matching is an operation that finds, from time-series databases, subsequences whose changing patterns are similar to a given query sequence. This paper points out the performance bottleneck in subsequence matching and then proposes an effective method that significantly improves the performance of entire subsequence matching by resolving that bottleneck. First, we analyze the disk access and CPU processing times required during the index searching and post processing steps through preliminary experiments. Based on their results, we show that the post processing step is the main performance bottleneck in subsequence matching and then claim that its optimization is a crucial issue overlooked in previous approaches. In order to resolve the performance bottleneck, we propose a simple but quite effective method that performs the post processing step in an optimal way. By rearranging the order of candidate subsequences to be compared with a query sequence, our method completely eliminates the redundancy of disk accesses and CPU processing incurred in the post processing step. We formally prove that our method is optimal and does not incur any false dismissal. We show the effectiveness of our method by extensive experiments. The results show that our method achieves significant speed-up in the post processing step: 3.91 to 9.42 times when using a data set of real-world stock sequences and 4.97 to 5.61 times when using data sets of a large volume of synthetic sequences. The results also show that our method reduces the weight of the post processing step in entire subsequence matching from about 90% to less than 70%, which implies that it successfully resolves the performance bottleneck in subsequence matching. As a result, our method provides excellent performance in entire subsequence matching: it is 3.05 to 5.60 times faster when using a data set of real-world stock sequences and 3.68 to 4.21 times faster when using data sets of a large volume of synthetic sequences, compared with the previous method.
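The key idea in the abstract, rearranging the candidate subsequences so that the post processing step never re-reads the same data, can be sketched as follows; the page layout and I/O interface are assumptions for illustration, not the authors' implementation.

    from collections import defaultdict

    PAGE_SIZE = 256   # entries per disk page (assumed for illustration)

    def refine(candidates, read_page, query, epsilon, distance):
        """`candidates` holds (sequence_id, offset, length) tuples from the index step;
        `read_page(seq_id, page_no)` fetches one disk page of that data sequence.
        Candidates are grouped by page so that each page is read exactly once;
        a candidate is assumed to fit in a single page to keep the sketch short."""
        by_page = defaultdict(list)
        for seq_id, offset, length in candidates:
            by_page[(seq_id, offset // PAGE_SIZE)].append((offset, length))

        results = []
        for (seq_id, page_no), cands in sorted(by_page.items()):
            page = read_page(seq_id, page_no)            # single read per page
            for offset, length in cands:
                start = offset - page_no * PAGE_SIZE
                subseq = page[start:start + length]
                if distance(query, subseq) <= epsilon:   # exact distance check
                    results.append((seq_id, offset))
        return results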

An Effective Similarity Search Technique supporting Time Warping in Sequence Databases (시퀀스 데이타베이스에서 타임 워핑을 지원하는 효과적인 유사 검색 기법)

  • Kim, Sang-Wook; Park, Sang-Hyun
    • Journal of KIISE: Databases / v.28 no.4 / pp.643-654 / 2001
  • This paper discusses effective processing of similarity search that supports time warping in large sequence databases. Time warping enables finding sequences with similar patterns even when they are of different lengths. Previous methods fail to employ multi-dimensional indexes without false dismissal, since the time warping distance does not satisfy the triangular inequality; they have to scan the whole database and thus suffer from serious performance degradation in large databases. Another method, which uses the suffix tree, also shows poor performance due to the large tree size. In this paper we propose a novel method for similarity search that supports time warping. Our primary goal is to improve search performance in large databases without false dismissal. To attain this goal, we devise a new distance function $D_{tw-lb}$ that consistently underestimates the time warping distance and also satisfies the triangular inequality. $D_{tw-lb}$ uses a 4-tuple feature vector extracted from each sequence and is invariant to time warping. For efficient processing, we employ a multi-dimensional index that uses the 4-tuple feature vector together with $D_{tw-lb}$. We prove that our method does not incur false dismissal. To verify the superiority of our method, we perform extensive experiments. The results reveal that our method achieves significant speedup: up to 43 times with real-world S&P 500 stock data and up to 720 times with very large synthetic data.

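The sketch below illustrates the filter-and-refine use of a lower-bounding feature distance as described in the entry above. The concrete 4-tuple (first, last, greatest, smallest values) and the max-of-differences bound follow the well-known lower bound for the time-warping distance; whether this is exactly the paper's $D_{tw-lb}$ is an assumption made here for illustration.

    def features(seq):
        return (seq[0], seq[-1], max(seq), min(seq))

    def d_tw_lb(q, s):
        """Feature-space distance that never exceeds the time-warping distance."""
        return max(abs(a - b) for a, b in zip(features(q), features(s)))

    def dtw(q, s):
        """Exact time-warping distance by dynamic programming."""
        INF = float("inf")
        n, m = len(q), len(s)
        d = [[INF] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(q[i - 1] - s[j - 1])
                d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
        return d[n][m]

    def search(query, database, epsilon):
        answers = []
        for seq in database:
            if d_tw_lb(query, seq) > epsilon:   # cheap filter, no false dismissal
                continue
            if dtw(query, seq) <= epsilon:      # expensive refinement
                answers.append(seq)
        return answers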