• Title/Abstract/Keyword: Query Tree

Search results: 329 items (processing time: 0.028 seconds)

A Bitmap Index for Chunk-Based MOLAP Cubes (청크 기반 MOLAP 큐브를 위한 비트맵 인덱스)

  • Lim, Yoon-Sun;Kim, Myung
    • Journal of KIISE:Databases
    • /
    • Vol. 30, No. 3
    • /
    • pp.225-236
    • /
    • 2003
  • MOLAP systems store data in a multidimensional array called a 'cube' and access it using array indexes. When a cube is placed on disk, it can be partitioned into a set of chunks of the same side length. Such a cube storage scheme is called the chunk-based MOLAP cube storage scheme. It gives a data clustering effect, so that all dimensions are guaranteed a fair chance in terms of query processing speed. In order to achieve high space utilization, sparse chunks are further compressed. Due to this compression, the relative position of chunks cannot be obtained in constant time without an index. In this paper, we propose a bitmap index for chunk-based MOLAP cubes. The index can be constructed along with the corresponding cube generation. The relative position of chunks is retained in the index so that chunk retrieval can be done in constant time. We place as many chunks as possible in an index block so that the number of index searches is minimized for OLAP operations such as range queries. We show that the proposed index is efficient by comparing it with multidimensional indexes such as the UB-tree and the grid file in terms of time and space.
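The constant-time chunk lookup described in this entry can be illustrated with a small sketch. This is not the paper's implementation; it assumes a compressed cube in which only non-empty chunks are stored, a bitmap marking non-empty chunk slots, and a prefix count so the position of a stored chunk can be recovered with a rank operation.

```python
# Minimal sketch (not the paper's code): locating a compressed chunk via a bitmap.
# Assumption: chunks are laid out in row-major order of their chunk coordinates,
# empty chunks are dropped, and a bitmap records which chunk slots are non-empty.

class ChunkBitmapIndex:
    def __init__(self, dims_in_chunks, bitmap):
        self.dims = dims_in_chunks          # e.g. (2, 2, 2) -> 8 chunk slots
        self.bitmap = bitmap                # list of 0/1 flags, one per chunk slot
        # Prefix counts turn "slot number" into "position among stored chunks".
        self.prefix = [0]
        for bit in bitmap:
            self.prefix.append(self.prefix[-1] + bit)

    def slot_number(self, chunk_coord):
        """Row-major slot number of a chunk coordinate such as (1, 2, 0)."""
        slot = 0
        for size, c in zip(self.dims, chunk_coord):
            slot = slot * size + c
        return slot

    def stored_position(self, chunk_coord):
        """Relative position of the chunk in the compressed cube, or None if empty."""
        slot = self.slot_number(chunk_coord)
        if not self.bitmap[slot]:
            return None                     # sparse chunk was compressed away
        return self.prefix[slot]            # rank: number of stored chunks before it


# Usage: a 2x2x2 cube of chunks where only half of the slots are non-empty.
index = ChunkBitmapIndex((2, 2, 2), bitmap=[1, 0, 1, 1, 0, 0, 1, 0])
print(index.stored_position((0, 1, 1)))    # slot 3 -> third stored chunk -> 2
print(index.stored_position((1, 0, 0)))    # slot 4 is empty -> None
```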

An Efficient Privacy Preserving Method based on Semantic Security Policy Enforcement (의미적 보안정책 집행에 의한 효율적 개인정보보호 방식)

  • Kang, Woo-Jun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • Vol. 13, No. 6
    • /
    • pp.173-186
    • /
    • 2013
  • New information technologies make it easy to access and acquire information in various ways. However, they also enable powerful and varied threats to system security. To counter these threats, various extended access control methods are being studied. We suggest a new extended access control method that makes it possible to enforce security policies even when there is a discrepancy between policy-based constraint rules and query-based constraint rules, by exploiting their semantic relationship. Our new approach derives semantic implications using a tree hierarchy structure and coordinates excess privileges using a semantic gap factor that measures the degree of the discrepancy. In addition, we illustrate a prototype system architecture and compare its performance with existing access control methods.
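As a rough illustration of deriving semantic implications from a tree hierarchy and quantifying the discrepancy between policy and query constraints, the sketch below assumes a small concept tree and defines a hypothetical "semantic gap" as the tree distance between the concept a policy protects and the concept a query requests. Both the tree and the gap formula are illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch only: a concept tree used to (a) derive implied restrictions
# and (b) compute a hypothetical "semantic gap" between policy and query concepts.
# The tree, the gap formula, and the threshold are assumptions for illustration.

CONCEPT_TREE = {                      # child -> parent
    "personal_info": None,
    "contact": "personal_info",
    "phone": "contact",
    "email": "contact",
    "medical": "personal_info",
}

def ancestors(concept):
    """Concepts on the path to the root; a restriction on any of them is implied."""
    path = []
    while concept is not None:
        path.append(concept)
        concept = CONCEPT_TREE[concept]
    return path

def semantic_gap(policy_concept, query_concept):
    """Tree distance between the protected concept and the requested concept."""
    a, b = ancestors(policy_concept), ancestors(query_concept)
    common = set(a) & set(b)
    if not common:
        return float("inf")
    # distance to the deepest common ancestor from each side
    lca = min(common, key=lambda c: a.index(c) + b.index(c))
    return a.index(lca) + b.index(lca)

def allowed(policy_concept, query_concept, max_gap=1):
    # A query on a concept implied by the policy (same subtree) is restricted;
    # otherwise it is allowed only if the semantic gap stays within the threshold.
    if policy_concept in ancestors(query_concept):
        return False
    return semantic_gap(policy_concept, query_concept) <= max_gap

print(allowed("contact", "phone"))    # False: "phone" is implied by the policy on "contact"
print(allowed("phone", "email"))      # False: the semantic gap (2) exceeds the threshold (1)
```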

Parallel Spatial Join Method Using Efficient Spatial Relation Partition In Distributed Spatial Database Systems (분산 공간 DBMS에서의 효율적인 공간 릴레이션 분할 기법을 이용한 병렬 공간 죠인 기법)

  • Ko, Ju-Il;Lee, Hwan-Jae;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society
    • /
    • Vol. 4, No. 1
    • /
    • pp.39-46
    • /
    • 2002
  • In distributed spatial database systems, users may issue a query that joins two relations stored at different sites. The sheer volume and complexity of spatial data incur expensive CPU and I/O costs during spatial join processing. This paper presents a new spatial join method that joins two spatial relations in parallel. First, the initial join operation is divided into two distinct operations by partitioning one of the two participating relations by region. These two join operations are assigned to the respective sites and executed simultaneously. Finally, the intermediate result sets from the two join operations are merged into a final result set. This method reduces the number of spatial objects participating in the spatial operations, and it also reduces the scope and number of spatial index scans. In addition, it avoids materializing temporary results by implementing the join algebra operators as iterators. Performance tests show that this join method leads to efficient buffer and disk use by narrowing down the join region and decreasing the number of spatial objects.

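A minimal sketch of the parallel join strategy described in the entry above: one relation is partitioned by region, the two partial joins run concurrently, and their results are merged. The vertical region split, the rectangle intersection test, and the use of threads in place of separate sites are simplifying assumptions, not the paper's implementation.

```python
# Sketch (assumptions: rectangles as spatial objects, a vertical region split,
# and threads standing in for the two sites of the distributed system).
from concurrent.futures import ThreadPoolExecutor

def overlaps(a, b):
    """Axis-aligned rectangle intersection; rect = (xmin, ymin, xmax, ymax)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def partition_by_region(relation, x_split):
    """Split one relation into two region-based partitions (objects crossing the
    split line go to both partitions so no join pair is lost)."""
    left = [r for r in relation if r[0] <= x_split]
    right = [r for r in relation if r[2] > x_split]
    return left, right

def local_join(partition, other_relation):
    """Nested-loop intersection join executed at one 'site'."""
    return [(a, b) for a in partition for b in other_relation if overlaps(a, b)]

def parallel_spatial_join(rel_r, rel_s, x_split):
    left, right = partition_by_region(rel_r, x_split)
    with ThreadPoolExecutor(max_workers=2) as pool:
        f1 = pool.submit(local_join, left, rel_s)
        f2 = pool.submit(local_join, right, rel_s)
        merged = f1.result() + f2.result()
    # Deduplicate pairs produced by both partitions (objects crossing the split).
    return list({pair for pair in merged})

R = [(0, 0, 2, 2), (3, 3, 5, 5)]
S = [(1, 1, 4, 4)]
print(parallel_spatial_join(R, S, x_split=2.5))
```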

Design of Lazy Classifier based on Fuzzy k-Nearest Neighbors and Reconstruction Error (퍼지 k-Nearest Neighbors 와 Reconstruction Error 기반 Lazy Classifier 설계)

  • Roh, Seok-Beom;Ahn, Tae-Chon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • Vol. 20, No. 1
    • /
    • pp.101-108
    • /
    • 2010
  • In this paper, we propose a new lazy classifier that combines a fuzzy k-nearest neighbors approach with feature selection based on reconstruction error. Reconstruction error is the performance index for locally linear reconstruction. When a new query point is given, the fuzzy k-nearest neighbors approach defines the local area in which the local classifier is valid and assigns weighting values to the data patterns within that area. After defining the local area and assigning the weighting values, feature selection is carried out to reduce the dimension of the feature space. Once features are selected in terms of the reconstruction error, the local classifier, a polynomial model, is built using weighted least squares estimation. The experimental study includes a comparative analysis against several commonly used methods such as standard neural networks, support vector machines, linear discriminant analysis, and C4.5 trees.
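A compact sketch of two ingredients named in the abstract above: fuzzy k-nearest-neighbor weighting of the local neighborhood, followed by a weighted least squares fit of a local model. The membership formula (fuzzifier m = 2) and the choice of a linear rather than higher-order polynomial model are standard textbook choices used for illustration, not the authors' exact design.

```python
# Sketch with NumPy: fuzzy k-NN weights over a local neighborhood, then a
# weighted least squares local model. The fuzzifier m=2 and the linear model
# are illustrative assumptions.
import numpy as np

def fuzzy_knn_weights(X, query, k=5, m=2.0):
    """Return indices of the k nearest patterns and their fuzzy membership weights."""
    d = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(d)[:k]
    d_k = d[idx] + 1e-12                       # avoid division by zero
    inv = d_k ** (-2.0 / (m - 1.0))            # standard fuzzy k-NN membership
    return idx, inv / inv.sum()

def local_wls_predict(X, y, query, k=5):
    """Fit y ~ [1, x] by weighted least squares on the fuzzy neighborhood."""
    idx, w = fuzzy_knn_weights(X, query, k)
    A = np.hstack([np.ones((k, 1)), X[idx]])   # design matrix with bias term
    W = np.diag(w)
    theta = np.linalg.pinv(A.T @ W @ A) @ A.T @ W @ y[idx]
    return np.array([1.0, *query]) @ theta

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # toy two-class problem
print(local_wls_predict(X, y, query=np.array([0.4, 0.3])))  # close to 1 -> class 1
```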

Lazy Bulk Insertion Method of Moving Objects Using Index Structure Estimation (색인 구조 예측을 통한 이동체의 지연 다량 삽입 기법)

  • Kim, Jeong-Hyun;Park, Sun-Young;Jang, Hyong-Il;Kim, Ho-Suk;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society
    • /
    • Vol. 7, No. 3
    • /
    • pp.55-65
    • /
    • 2005
  • This paper presents a bulk insertion technique for efficiently inserting data items. Traditional moving object databases have focused on efficient query processing, which happens mainly after the index is built, and traditional index structures rarely considered the disk I/O overhead of index restructuring caused by inserting data items. To solve this problem, this paper describes a new bulk insertion technique that efficiently inserts the current positions of moving objects and greatly reduces the update cost. The technique uses buffering for bulk insertion into spatial index structures such as the R-tree. To analyze node splits and merges, we add a secondary index that manages information on the leaf nodes of the primary index, and operations are classified to eliminate unnecessary insertions and deletions. The technique also decides the processing order of moving objects so as to minimize the split and merge costs incurred by update operations. Experimental results show that this technique reduces insertion cost compared with existing insertion techniques.

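A schematic sketch of the buffering idea in the entry above: updates are held in a buffer, only the last operation per object and target leaf survives (so a delete followed by a re-insert into the same leaf never touches the index), and the remaining operations are applied leaf by leaf in one flush. The grid-cell grouping key, the flush policy, and the dict standing in for R-tree leaves are simplifying assumptions, not the paper's structure.

```python
# Sketch only: lazy buffering of moving-object updates before a bulk flush.
# Assumptions: a dict-of-sets stands in for R-tree leaves, objects are grouped
# by a coarse grid cell, and an update is modeled as delete(old) + insert(new).

from collections import defaultdict

class LazyInsertBuffer:
    def __init__(self, index, capacity=4, cell=10.0):
        self.index = index                 # {cell: set(object_id)} stand-in for leaves
        self.capacity = capacity
        self.cell = cell
        self.pending = []                  # list of ("ins"|"del", obj_id, (x, y))

    def _cell_of(self, pos):
        return (int(pos[0] // self.cell), int(pos[1] // self.cell))

    def update(self, obj_id, old_pos, new_pos):
        self.pending.append(("del", obj_id, old_pos))
        self.pending.append(("ins", obj_id, new_pos))
        if len(self.pending) >= self.capacity:
            self.flush()

    def flush(self):
        # Keep only the last operation per object and leaf: a delete followed by a
        # re-insert into the same cell collapses to the object's final state, so
        # each leaf is touched at most once per flush.
        by_leaf = defaultdict(list)
        for op, obj_id, pos in self.pending:
            by_leaf[self._cell_of(pos)].append((op, obj_id))
        for leaf, ops in by_leaf.items():
            ids = self.index.setdefault(leaf, set())
            survivors = {}
            for op, obj_id in ops:          # later operations override earlier ones
                survivors[obj_id] = op
            for obj_id, op in survivors.items():
                (ids.add if op == "ins" else ids.discard)(obj_id)
        self.pending.clear()


index = {}
buf = LazyInsertBuffer(index, capacity=4)
buf.update("car1", (1, 1), (2, 2))        # both positions fall in cell (0, 0)
buf.update("car1", (2, 2), (12, 2))       # moves to cell (1, 0): triggers a flush
print(index)
```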

Application-Oriented Context Pre-fetch Method for Enhancing Inference Performance in Ontology-based Context Management (온톨로지 기반의 상황정보관리에서 추론 성능 향상을 위한 어플리케이션 지향적 상황정보 선인출 기법)

  • Lee Jae-Ho;Park In-Suk;Lee Dong-Man;Hyun Soon-Joo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • Vol. 12, No. 4
    • /
    • pp.254-263
    • /
    • 2006
  • Ontology-based context models are widely used in ubiquitous computing environments because they have advantages in the acquisition of conceptual context through inferencing, context sharing, and context reuse. Among these benefits, inferencing enables context-aware applications to use conceptual contexts that cannot be acquired directly from sensors. However, inferencing causes processing delay and thus becomes a major obstacle to the implementation of context-aware applications, and the delay grows as the amount of context increases. In this paper, we propose a context pre-fetching method that reduces the amount of context to be processed in working memory in an attempt to speed up inferencing. For this, we extend the query-tree method to identify the contexts relevant to the queries of a context-aware application. By keeping the pre-fetched contexts optimal in working memory, the processing delay of inferencing is reduced without losing the benefits of the ontology-based context model. We apply the proposed scheme to our ubiquitous computing middleware, Active Surroundings, and demonstrate the performance enhancement through experiments.
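The sketch below only illustrates the general pre-fetch idea from the entry above: the full context store is filtered down to the facts an application's registered queries can actually touch before they are loaded into the inference engine's working memory. The triple representation and the predicate-based relevance test are assumptions for illustration, not the extended query-tree method itself.

```python
# Illustration only: pre-fetch the subset of context facts that the registered
# queries of an application can reach, so the working memory stays small.
# Facts are (subject, predicate, object) triples; relevance is judged by the
# predicates a query mentions -- both are simplifying assumptions.

CONTEXT_STORE = [
    ("alice", "locatedIn", "room402"),
    ("room402", "partOf", "cs_building"),
    ("alice", "ownsDevice", "phone17"),
    ("bob", "locatedIn", "lobby"),
    ("projector1", "installedIn", "room402"),
]

APPLICATION_QUERIES = [
    {"predicates": {"locatedIn", "partOf"}},        # "which building is X in?"
    {"predicates": {"installedIn", "locatedIn"}},   # "which devices are near X?"
]

def prefetch(store, queries):
    """Load into working memory only the triples whose predicate some query uses."""
    needed = set().union(*(q["predicates"] for q in queries))
    return [t for t in store if t[1] in needed]

working_memory = prefetch(CONTEXT_STORE, APPLICATION_QUERIES)
print(len(CONTEXT_STORE), "->", len(working_memory), "facts kept")
for fact in working_memory:
    print(fact)
```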

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia pacific journal of information systems
    • /
    • Vol. 18, No. 1
    • /
    • pp.79-96
    • /
    • 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, the retrieval of similar semantic business processes is necessary in order to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand results from an exact matching engine when querying the OWL (Web Ontology Language) version of the MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. In order to use the MIT Process Handbook for process retrieval experiments, we had to export it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of the meta-model. Next, we need a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants for each target process that are syntactically different but semantically equivalent using mutation operators; these variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structure of business processes. We use simple text-retrieval similarity algorithms such as TF-IDF and Levenshtein edit distance to devise our approaches, and utilize a tree edit distance measure because semantic processes appear to have a graph structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify relationships between a semantic process and its subcomponents, this information can be utilized to calculate similarities between processes; Dice's coefficient and the Jaccard similarity measure are used to calculate the portion of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and F measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining the TF-IDF measure with Levenshtein edit distance show better performance than the other devised methods; these two measures focus on the similarity between the names and descriptions of processes. In addition, we calculate the rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values among the mutation sets. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and derivatives of these measures, show greater coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably better performance in both experiments. For retrieving semantic processes, it is better to consider diverse aspects of process similarity, such as process structure and the values of process attributes. We generate semantic process data and a dataset for the retrieval experiment from the MIT Process Handbook repository, suggest imprecise query algorithms that expand retrieval results from an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As for limitations and future work, we need to perform experiments with other datasets from other domains, and, since diverse measures produce many similarity values, we may find better ways to identify relevant processes by applying these values simultaneously.
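The simplest of the similarity measures named in the abstract above can be stated concretely. The sketch below shows Levenshtein edit distance on process names and Dice/Jaccard overlap on sets of subcomponents; these are the standard definitions rather than the paper's exact scoring functions, and the example processes are hypothetical (the TF-IDF and tree edit distance variants are omitted).

```python
# Standard definitions used for illustration: Levenshtein distance between
# process names, and Dice / Jaccard overlap between subcomponent sets.

def levenshtein(a, b):
    """Edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def jaccard(s, t):
    return len(s & t) / len(s | t) if s | t else 1.0

def dice(s, t):
    return 2 * len(s & t) / (len(s) + len(t)) if s or t else 1.0

# A target process and a mutated variant, each described by a name and a set
# of subcomponents (parts, goals, exceptions) -- hypothetical example data.
p1 = {"name": "Approve purchase order",
      "parts": {"check budget", "notify buyer", "sign order"}}
p2 = {"name": "Approve purchase request",
      "parts": {"check budget", "sign order", "archive order"}}

name_sim = 1 - levenshtein(p1["name"], p2["name"]) / max(len(p1["name"]), len(p2["name"]))
print(round(name_sim, 3),
      round(jaccard(p1["parts"], p2["parts"]), 3),
      round(dice(p1["parts"], p2["parts"]), 3))
```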

Design of the Flexible Buffer Node Technique to Adjust the Insertion/Search Cost in Historical Index (과거 위치 색인에서 입력/검색 비용 조정을 위한 가변 버퍼 노드 기법 설계)

  • Jung, Young-Jin;Ahn, Bu-Young;Lee, Yang-Koo;Lee, Dong-Gyu;Ryu, Keun-Ho
    • The KIPS Transactions:PartD
    • /
    • Vol. 18D, No. 4
    • /
    • pp.225-236
    • /
    • 2011
  • Various LBS (Location Based Services) applications are being developed to provide customized services depending on the user's location, driven by progress in wireless communication technology and the miniaturization of personal devices. To effectively process the large volume of vehicle location data, LBS requires techniques such as vehicle observation, data communication, data insertion and search, and user query processing. In this paper, we propose a historical location index, GIP-FB (Group Insertion tree with Flexible Buffer Node), and a flexible buffer node technique to adjust the cost of data insertion and search. The designed GIP+-based index employs a buffer node and projection storage to cut the cost of insertion and search. In addition, it adjusts the cost of insertion and search by changing the number of line segments held in the buffer node according to a user-defined time interval. In the experiments, the buffer node size influences the performance of GIP-FB by changing the number of non-leaf nodes of the index, and the proposed flexible buffer node is used to tune the performance of the historical location index to the requirements of the LBS application.
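To make the role of the flexible buffer node in the entry above more concrete, the sketch below shows a buffer that accumulates trajectory line segments and flushes them into a stand-in index once a configurable capacity is reached: a smaller capacity means more frequent (costlier) insertions but cheaper searches, a larger one means the reverse. The capacity knob and the list standing in for the index are assumptions, not the GIP-FB structure itself.

```python
# Sketch only: a buffer node whose capacity (number of buffered line segments)
# trades insertion cost against search cost. The "index" is a plain list
# standing in for the GIP+-style tree.

class FlexibleBufferNode:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []        # recent line segments, not yet in the index
        self.index = []         # stand-in for the historical index
        self.flushes = 0

    def insert(self, segment):
        self.buffer.append(segment)
        if len(self.buffer) >= self.capacity:
            self.index.extend(self.buffer)   # one bulk insertion into the index
            self.buffer.clear()
            self.flushes += 1

    def search(self, predicate):
        # A search must look at both the index and the still-buffered segments,
        # so a larger buffer makes searches more expensive.
        return ([s for s in self.index if predicate(s)] +
                [s for s in self.buffer if predicate(s)])


segments = [((i, i), (i + 1, i + 1)) for i in range(7)]
for capacity in (2, 5):
    node = FlexibleBufferNode(capacity)
    for seg in segments:
        node.insert(seg)
    print(f"capacity={capacity}: {node.flushes} flushes, "
          f"{len(node.buffer)} segments still buffered")
```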

Hippocratic XML Databases: A Model and Access Control Mechanism (히포크라테스 XML 데이터베이스: 모델 및 액세스 통제 방법)

  • Lee Jae-Gil;Han Wook-Shin;Whang Kyu-Young
    • Journal of KIISE:Databases
    • /
    • Vol. 31, No. 6
    • /
    • pp.684-698
    • /
    • 2004
  • The Hippocratic database model, recently proposed by Agrawal et al., incorporates privacy protection capabilities into relational databases. Since the Hippocratic database is based on the relational model, it needs extensions to be adapted for XML databases. In this paper, we propose the Hippocratic XML database model, an extension of the Hippocratic database model for XML databases, and present an efficient access control mechanism under this model. In contrast to relational data, XML data have tree-like hierarchies. Thus, in order to manage these hierarchies of XML data, we extend and formally define the concepts presented in the Hippocratic database model, such as privacy preferences, privacy policies, privacy authorizations, and usage purposes of data records. Next, we present a new mechanism, which we call the authorization index, that is used in the access control mechanism. This authorization index, implemented using a multi-dimensional index, allows us to efficiently search for authorizations implied by the authorization granted on the nearest ancestor, using the nearest neighbor search technique. Using synthetic and real data, we have performed extensive experiments comparing query processing time with that of existing access control mechanisms. The results show that the proposed access control mechanism improves the wall clock time by up to 13.6 times over the top-down access control strategy and by up to 20.3 times over the bottom-up access control strategy. The major contributions of our paper are (1) extending the Hippocratic database model into the Hippocratic XML database model and (2) proposing an efficient access control mechanism that uses the authorization index and the nearest neighbor search technique under this model.
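The semantics of resolving an XML node's authorization from its nearest ancestor, as described in the entry above, can be sketched without the multi-dimensional index: the version below simply walks element paths upward until an explicit authorization is found. This illustrates how authorizations propagate down the tree, but not the nearest-neighbor-search machinery the paper builds on top of it; the path encoding and the policy entries are illustrative assumptions.

```python
# Illustration only: authorizations attached to XML element paths propagate to
# descendants; a lookup walks up from the queried node to its nearest ancestor
# that carries an explicit authorization. The example paths and purposes are
# hypothetical; the paper's authorization index replaces this linear walk with
# a nearest neighbor search over a multi-dimensional index.

AUTHORIZATIONS = {
    "/hospital/patient": {"purpose": "treatment", "allow": True},
    "/hospital/patient/billing": {"purpose": "treatment", "allow": False},
}

def nearest_ancestor_auth(path):
    """Return the authorization granted on the nearest ancestor-or-self path."""
    parts = path.strip("/").split("/")
    for depth in range(len(parts), 0, -1):
        candidate = "/" + "/".join(parts[:depth])
        if candidate in AUTHORIZATIONS:
            return candidate, AUTHORIZATIONS[candidate]
    return None, None

def is_allowed(path, purpose):
    _, auth = nearest_ancestor_auth(path)
    return bool(auth) and auth["allow"] and auth["purpose"] == purpose

print(is_allowed("/hospital/patient/name", "treatment"))            # True
print(is_allowed("/hospital/patient/billing/card", "treatment"))    # False
```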