• Title/Summary/Keyword: query execution

Search results: 94

Efficient Multiple Joins using the Synchronization of Page Execution Time in Limited Processors Environments (한정된 프로세서 환경에서 페이지 실행시간 동기화를 이용한 효율적인 다중 결합)

  • Lee, Kyu-Ock;Weon, Young-Sun;Hong, Man-Pyo
    • Journal of KIISE:Databases
    • /
    • v.28 no.4
    • /
    • pp.732-741
    • /
    • 2001
  • In relational database systems, the join operation is one of the most time-consuming query operations. Many parallel join algorithms have been developed to reduce its execution time, and the multiple hash join algorithm using an allocation tree is one of the most efficient. However, it may suffer a delay at each node of the allocation tree during the tuple-probing phase, caused by the difference between the time to read one page of the outer relation and the time to process the page already read. We previously solved this delay problem using the concept of synchronizing page execution times. In this paper, the performance improvement obtained at each node of the allocation tree is extended to the whole allocation tree, and its performance is evaluated. In addition, we propose an efficient algorithm for multiple hash joins in environments with a limited number of processors, based on the relationship between the number of input relations in the allocation tree and the number of processors allocated to the tree. Finally, we analyze the performance by building an analytical cost model and verify its validity through various performance comparisons with the previous method.

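The delay described in this abstract arises when the time to probe an already-read page differs from the time to read the next page of the outer relation. The following is only a minimal Python model of that overlap, not the authors' algorithm; the page counts and timings are hypothetical.

```python
# Minimal model of the tuple-probing phase sketched above: while the next page
# of the outer relation is being read, the previously read page is probed
# against the in-memory hash table. All timings below are hypothetical.

def probing_phase_time(num_pages, page_read_time, page_probe_time):
    """Total time when page reads and page probes are overlapped.

    Each middle iteration costs max(read, probe); the gap between the two
    values is the per-page idle time that page-execution-time synchronization
    tries to remove (e.g. by balancing page size against probe cost).
    """
    if num_pages == 0:
        return 0.0
    overlapped = (num_pages - 1) * max(page_read_time, page_probe_time)
    return page_read_time + overlapped + page_probe_time

if __name__ == "__main__":
    print(probing_phase_time(100, page_read_time=1.0, page_probe_time=1.5))  # unbalanced: 151.0
    print(probing_phase_time(100, page_read_time=1.0, page_probe_time=1.0))  # balanced:   101.0
```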

A Study on Effective Real Estate Big Data Management Method Using Graph Database Model (그래프 데이터베이스 모델을 이용한 효율적인 부동산 빅데이터 관리 방안에 관한 연구)

  • Ju-Young, KIM;Hyun-Jung, KIM;Ki-Yun, YU
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.25 no.4
    • /
    • pp.163-180
    • /
    • 2022
  • Real estate data can be considered big data: its volume is growing rapidly, it interacts with various fields such as the economy, law, and crowd psychology, and it is structured in complex data layers. Existing relational databases have difficulty handling the various relationships involved in managing real estate big data, because they have a fixed schema and are only vertically scalable. To overcome these limitations, this study constructs real estate data in a graph database and verifies its usefulness. As the research method, we modeled various real estate data on MySQL, one of the most widely used relational databases, and on Neo4j, one of the most widely used graph databases. We then collected real estate questions used in real life and selected nine of them to compare query times on each database. As a result, Neo4j showed nearly constant performance even for queries with multiple JOIN statements and inferences over various relationships, whereas MySQL showed a rapid increase in query time. From this result, we conclude that a graph database such as Neo4j is more efficient for real estate big data with various relationships. We expect the real estate graph database to be used for predicting real estate price factors and for answering real estate inquiries through AI speakers.
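
To make the JOIN-versus-traversal contrast concrete, the toy sketch below answers the same one-hop ownership question in a relational style (scanning a link table, as a JOIN would) and in a graph style (following adjacency). The entity and property names are invented for illustration and are not taken from the study's data model.

```python
# Hypothetical miniature of the relational-vs-graph contrast described above.

# Relational style: entities live in separate tables and every hop between
# them is resolved through a link table, i.e. a JOIN.
parcels = {1: {"address": "A-101"}, 2: {"address": "B-202"}}
owners = {10: {"name": "Kim"}, 11: {"name": "Lee"}}
owns = [{"owner_id": 10, "parcel_id": 1}, {"owner_id": 11, "parcel_id": 2}]

def owners_of_parcel_relational(parcel_id):
    # Roughly: SELECT o.name FROM owners o JOIN owns w ON ... WHERE w.parcel_id = ?
    return [owners[row["owner_id"]]["name"] for row in owns if row["parcel_id"] == parcel_id]

# Graph style: each node keeps its relationships directly, so a multi-hop
# question becomes a traversal instead of a chain of JOINs.
graph = {
    ("parcel", 1): {"OWNED_BY": [("owner", 10)]},
    ("parcel", 2): {"OWNED_BY": [("owner", 11)]},
    ("owner", 10): {"name": "Kim"},
    ("owner", 11): {"name": "Lee"},
}

def owners_of_parcel_graph(parcel_id):
    return [graph[node]["name"] for node in graph[("parcel", parcel_id)]["OWNED_BY"]]

print(owners_of_parcel_relational(1), owners_of_parcel_graph(1))  # ['Kim'] ['Kim']
```

As more relationship types are chained, the relational version needs one additional JOIN per hop, which is consistent with the query-time growth the study reports for MySQL.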

A Load Balancing Method using Partition Tuning for Pipelined Multi-way Hash Join (다중 해시 조인의 파이프라인 처리에서 분할 조율을 통한 부하 균형 유지 방법)

  • Mun, Jin-Gyu;Jin, Seong-Il;Jo, Seong-Hyeon
    • Journal of KIISE:Databases
    • /
    • v.29 no.3
    • /
    • pp.180-192
    • /
    • 2002
  • We investigate the effect of data skew in join attributes on the performance of a pipelined multi-way hash join method, and propose two new hash join methods for the shared-nothing multiprocessor environment. The first proposed method allocates buckets statically in a round-robin fashion, and the second allocates buckets dynamically based on a frequency distribution. Using hash-based joins, multiple joins can be pipelined so that the early results of a join are sent to the next join for processing, before the whole join is completed, without being staged on disk. The shared-nothing multiprocessor architecture is known to be more scalable for supporting very large databases, but this hardware structure is very sensitive to data skew. Unless the pipelined execution of multiple hash joins includes some dynamic load balancing mechanism, the skew effect can severely deteriorate system performance. In this paper, we derive an execution model of the pipeline segment and a cost model, and develop a simulator for the study. As shown by our simulation over a wide range of parameters, join selectivities, and relation sizes, system performance deteriorates as the degree of data skew grows. However, the proposed method, which uses a large number of buckets and a tuning technique, offers substantial robustness over a wide range of skew conditions.
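
The two bucket-allocation policies named in this abstract can be contrasted with a small sketch. The dynamic variant below is a simple greedy "largest bucket to least-loaded node" heuristic chosen for illustration; it follows the spirit of frequency-based allocation but is not necessarily the exact policy of the paper, and the bucket sizes are invented.

```python
import heapq

def allocate_round_robin(bucket_sizes, num_nodes):
    """Static allocation: bucket i always goes to node i % num_nodes."""
    loads = [0] * num_nodes
    for i, size in enumerate(bucket_sizes):
        loads[i % num_nodes] += size
    return loads

def allocate_by_frequency(bucket_sizes, num_nodes):
    """Dynamic allocation: place the largest buckets first, each on the
    currently least-loaded node, using the observed frequency distribution."""
    loads = [0] * num_nodes
    heap = [(0, n) for n in range(num_nodes)]
    heapq.heapify(heap)
    for size in sorted(bucket_sizes, reverse=True):
        load, node = heapq.heappop(heap)
        loads[node] = load + size
        heapq.heappush(heap, (loads[node], node))
    return loads

# Skewed bucket sizes caused by a popular join-attribute value.
sizes = [40, 30, 20, 10, 10, 10, 5, 5]
print(allocate_round_robin(sizes, 2))   # [75, 55] -> one node lags behind
print(allocate_by_frequency(sizes, 2))  # [65, 65] -> loads stay balanced
```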

Metadata Ontology Design for B2B Business Process Registries (기업간 비즈니스 프로세스 등록저장소를 위한 메타데이터 온톨로지 설계)

  • Kim, Jong-Woo;Kim, Hyoung-Do;Yun, Jung-Hee;Jung, Hyun-Chul
    • The KIPS Transactions:PartD
    • /
    • v.14D no.4 s.114
    • /
    • pp.435-446
    • /
    • 2007
  • B2B registries are information systems that register B2B-related business information such as company profiles, business documents, business processes, and services, and that provide query facilities for finding information about potential business partners. Focusing on the design of a registry for B2B business processes, this paper designs a metadata ontology for registering B2B business processes. In practice, there are several competing business process definition languages, such as ebXML BPSS (Business Process Specification Schema), WSBPEL (Web Services Business Process Execution Language), and BPMN (Business Process Modeling Notation). In order to register heterogeneous business processes based on different representation frameworks, the proposed metadata ontology consists of three layers: common metadata, language-specific metadata, and interrelationship metadata. To show its usefulness, two examples, represented in ebXML BPSS and WSBPEL respectively, are described to illustrate how the proposed metadata ontology is used to register B2B business processes. To implement the proposed metadata ontology in an ebXML registry, a metadata mapping scheme to ebRIM (ebXML Registry Information Model) is also suggested.
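
The three-layer organization described above can be pictured as a small data model. The field names below are illustrative guesses only, not the actual elements of the proposed metadata ontology.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CommonMetadata:
    # Layer 1: properties shared by every registered process, whatever its language.
    name: str
    version: str
    description: str

@dataclass
class LanguageSpecificMetadata:
    # Layer 2: properties tied to one definition language (ebXML BPSS, WSBPEL, BPMN, ...).
    language: str
    document_uri: str
    elements: List[str] = field(default_factory=list)

@dataclass
class InterrelationshipMetadata:
    # Layer 3: links between registered processes (for example "equivalent-to" or "part-of").
    relation: str
    target_process: str

@dataclass
class RegisteredProcess:
    common: CommonMetadata
    language_specific: LanguageSpecificMetadata
    relationships: List[InterrelationshipMetadata] = field(default_factory=list)

entry = RegisteredProcess(
    CommonMetadata("OrderFulfillment", "1.0", "Buyer-seller ordering process"),
    LanguageSpecificMetadata("WSBPEL", "http://example.org/order.bpel",
                             ["receiveOrder", "invokeShipping"]),
    [InterrelationshipMetadata("equivalent-to", "OrderFulfillment-BPSS")],
)
print(entry.common.name, entry.language_specific.language)
```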

An Implementation of distributed Real-time Location Data Server based on the GALIS Architecture (GALIS 구조 기반 실시간 분산 위치 데이타 서버 구현)

  • Lee, Joon-Woo;Lee, Woon-Ju;Lee, Ho;Nah, Yun-Mook
    • Journal of Korea Spatial Information System Society
    • /
    • v.7 no.1 s.13
    • /
    • pp.53-62
    • /
    • 2005
  • A challenging task in LBS system engineering is to implement a highly scalable system architecture that can manage both moderate-size configurations handling thousands of moving items and upper-end configurations handling far larger numbers of them. GALIS is a distributed computing system architecture that consists of multiple data processors, each dedicated to keeping records relevant to a different geographical zone and a different time zone. In this paper, we describe a prototype location data server that structures the major components of GALIS by employing the TMO programming scheme, including the execution engine middleware developed to support real-time distributed object programming and real-time distributed computing system design. We present how to generate realistic location sensing reports and how to process such location reports and location-related queries. Some experimental results showing performance factors of distributed query processing are also presented.

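As a rough picture of the partitioning idea above, the sketch below routes location reports to a per-zone processor chosen by a regular spatial grid. The zone size, class names, and report fields are invented; the actual GALIS prototype is built from TMO objects and also partitions along the time dimension.

```python
# Hypothetical sketch: route moving-object location reports to the data
# processor responsible for the geographical zone the report falls in.

class ZoneProcessor:
    """One logical data processor: keeps the latest location per object in its zone."""
    def __init__(self, zone):
        self.zone = zone
        self.latest = {}

    def handle(self, obj_id, x, y, ts):
        self.latest[obj_id] = (x, y, ts)

def zone_of(x, y, cell_size=10.0):
    # Regular partitioning of space into square zones; a real deployment would
    # also take the time dimension into account, as the abstract notes.
    return (int(x // cell_size), int(y // cell_size))

processors = {}

def route_report(obj_id, x, y, ts):
    zone = zone_of(x, y)
    proc = processors.setdefault(zone, ZoneProcessor(zone))
    proc.handle(obj_id, x, y, ts)

def query_zone(x, y):
    """Location query answered only by the processor owning that zone."""
    proc = processors.get(zone_of(x, y))
    return dict(proc.latest) if proc else {}

route_report("car-1", 3.0, 7.0, ts=1)
route_report("car-2", 42.0, 8.0, ts=2)
print(query_zone(3.0, 7.0))  # {'car-1': (3.0, 7.0, 1)}
```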

A Binary Decision Diagram-based Modeling Rule for Object-Relational Transformation Methodology (객체-관계 변환 방법론을 위한 이진 결정 다이어그램 기반의 모델링 규칙)

  • Cha, Sooyoung;Lee, Sukhoon;Baik, Doo-Kwon
    • Journal of KIISE
    • /
    • v.42 no.11
    • /
    • pp.1410-1422
    • /
    • 2015
  • In order to design a system, software developers use an object model such as the UML class diagram. Object-Relational Transformation Methodology (ORTM) transforms the relationships expressed in the object model into relational database tables and is applied when implementing the designed system. Previous ORTM studies have suggested a number of transformation methods for representing a single relationship, but they are difficult to apply in practice because no usage criteria exist for choosing among the transformation methods. Therefore, this paper proposes a binary decision diagram-based modeling rule for each relationship, defining the conditions that distinguish the transformation methods. By measuring query execution times, we also evaluate the modeling rules that require verification. After the evaluation, we redefine the final modeling rules, expressed in propositional logic, and show through a case study that the proposed modeling rules are useful for implementing the designed system.
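
A binary decision diagram over modeling conditions amounts to a nesting of yes/no tests that ends in a transformation method. The two conditions and the method names below are made up purely to show that shape; they are not the rules derived in the paper.

```python
# Illustrative only: choosing an object-to-relational mapping for an association
# with a tiny decision procedure over boolean conditions.

def choose_mapping(many_to_many: bool, has_association_attributes: bool) -> str:
    if many_to_many:
        # Once both sides are "many", a separate link table is required.
        return "join-table"
    if has_association_attributes:
        return "join-table"          # attributes on the link need their own table
    return "foreign-key-column"      # a simple 1:N link can live in the child table

for m2m in (False, True):
    for attrs in (False, True):
        print(m2m, attrs, "->", choose_mapping(m2m, attrs))
```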

PSR: Pre-Computing Solutions in RDBMS for Efficient Web Services Composition Search (PSR : 효율적인 웹 서비스 컴포지션 검색을 위한 RDBMS 기반의 선 계산 기법)

  • Kwon, Joon-Ho;Park, Kyu-Ho;Lee, Dae-Wook;Lee, Suk-Ho
    • Journal of KIISE:Databases
    • /
    • v.35 no.4
    • /
    • pp.333-344
    • /
    • 2008
  • In recent years, web services composition has received much attention. By web services composition, we mean providing a new service that does not exist in the repository. In this paper, we propose a new system called PSR for web services composition search using a relational database, together with algorithms for pre-computing web services compositions using joins and indices. We store ontologies from web services in an RDBMS, so that the PSR system returns web services compositions ordered by their similarity to the user query, measured by the degree of ontology matching. We demonstrate that our approach of pre-computing web services compositions in an RDBMS yields lower execution times and good scalability when handling large numbers of web services and user queries.
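
Stripped to its core, pre-computing compositions means joining services whose output concept matches another service's input concept before any query arrives, and storing the resulting chains for lookup. The sketch below shows only that idea with invented service descriptions; it omits the ontology-based similarity ranking and the SQL and index machinery of the actual PSR system.

```python
# Hypothetical sketch of pre-computing two-step service compositions: chain
# service A with service B whenever A's output concept equals B's input concept.

services = [
    {"name": "GeocodeAddress", "input": "Address",     "output": "Coordinates"},
    {"name": "NearbyHotels",   "input": "Coordinates", "output": "HotelList"},
    {"name": "HotelPrices",    "input": "HotelList",   "output": "PriceList"},
]

def precompute_compositions(services):
    by_input = {}
    for s in services:
        by_input.setdefault(s["input"], []).append(s)
    chains = []
    for a in services:
        for b in by_input.get(a["output"], []):
            chains.append((a["name"], b["name"], a["input"], b["output"]))
    return chains

# Stored ahead of time, a user request such as "Address -> HotelList"
# becomes a simple lookup instead of an on-the-fly search.
compositions = precompute_compositions(services)
print([c for c in compositions if c[2] == "Address" and c[3] == "HotelList"])
# [('GeocodeAddress', 'NearbyHotels', 'Address', 'HotelList')]
```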

Study on Methods for Sasang Constitution Diagnosis (사상체질진단 방법론 연구)

  • Kim Jon-Won;Lee Eui-Ju;Kim Kyn-Kon;Kim Jong-Yeol;Lee Yong-Tae
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.19 no.6
    • /
    • pp.1471-1474
    • /
    • 2005
  • Sasang constitutional medicine treats patients differently according to their Sasang constitution, so constitution diagnosis is very important in this field. The process of Sasang constitution diagnosis is difficult because it consumes much time and effort, and it tends to be subjective, so an objective method is needed. The QSCC II (Questionnaire of Sasang Constitution Classification II) has several problems: it cannot diagnose Taeyangin, its diagnostic accuracy is not high (about 60%), and so on. We therefore need new methods for Sasang constitution diagnosis and set out to correct the problems of QSCC II. The first group of problems concerns the study execution process: it was not a multicenter study, the data were limited, and Taeyangin cases were absent, so a multicenter study is required. The second group concerns the questions and the statistical analysis method, namely the problems of the self-report questionnaire: the lack of objective assessment (body type, personal appearance, etc.) and the absence of assessment of typical versus atypical constitution types. Addressing the problems of QSCC II, we made a new self-report questionnaire for patients, and addressing the problems of the self-report questionnaire, we made a new constitution diagnosis questionnaire for doctors. We thus developed two questionnaires for Sasang constitution diagnosis, one self-reported by patients and one completed by doctors, which must ultimately be integrated into a single diagnostic instrument.

A Vertical Partitioning Algorithm based on Fuzzy Graph (퍼지 그래프 기반의 수직 분할 알고리즘)

  • Son, Jin-Hyun;Choi, Kyung-Hoon;Kim, Myoung-Ho
    • Journal of KIISE:Databases
    • /
    • v.28 no.3
    • /
    • pp.315-323
    • /
    • 2001
  • The concept of vertical partitioning has been discussed with the objective of improving query execution performance and system throughput. It can be applied wherever the match between data and queries affects performance, including partitioning of individual files in centralized environments, data distribution in distributed databases, dividing data among different levels of the memory hierarchy, and so on. In general, a vertical partitioning algorithm should support n-ary partitioning as well as a globally optimal solution for generating all meaningful fragments; most previous methods, however, have limitations in supporting both efficiently. Because the vertical partitioning problem inherently involves fuzziness, that fuzziness must be managed properly. In this paper we propose an efficient vertical $\alpha$-partitioning algorithm based on fuzzy theory. The method can not only generate all meaningful fragments but also support n-ary partitioning without any complex mathematical computation.

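As a very loose illustration of $\alpha$-partitioning, the sketch below treats attribute affinity as a fuzzy edge weight in [0, 1], keeps only edges whose weight is at least $\alpha$, and takes the connected components as fragments. The affinity values are invented and the procedure is far simpler than the algorithm proposed in the paper.

```python
# Hypothetical alpha-cut sketch: attributes are nodes, fuzzy affinity (0..1) is
# the edge weight, and fragments are the connected components of the graph that
# keeps only edges with affinity >= alpha.

def alpha_partition(attributes, affinity, alpha):
    adj = {a: set() for a in attributes}
    for (a, b), w in affinity.items():
        if w >= alpha:
            adj[a].add(b)
            adj[b].add(a)
    fragments, seen = [], set()
    for a in attributes:
        if a in seen:
            continue
        stack, component = [a], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adj[node] - component)
        seen |= component
        fragments.append(sorted(component))
    return fragments

attrs = ["id", "name", "salary", "dept", "photo"]
aff = {("id", "name"): 0.9, ("name", "salary"): 0.8,
       ("dept", "salary"): 0.4, ("photo", "id"): 0.2}
print(alpha_partition(attrs, aff, alpha=0.7))
# [['id', 'name', 'salary'], ['dept'], ['photo']]
```

Lowering $\alpha$ keeps weaker affinities and yields fewer, larger fragments, which is how the cut level controls the n-ary partitioning in this toy version.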

Cost Model for Parallel Spatial Joins using Fixed Grids (고정 그리드를 이용한 병렬 공간 조인을 위한 비용 모델)

  • Kim, Jin-Deog;Hong, Bong-Hee
    • Journal of KIISE:Databases
    • /
    • v.28 no.4
    • /
    • pp.665-676
    • /
    • 2001
  • The most expensive spatial operation in a spatial database is the spatial join, which computes a combined table whose tuples consist of pairs of tuples from the two input tables satisfying a spatial predicate. Although the execution time of sequential spatial join processing has been considerably improved, the response time is still not tolerable because it does not meet the requirements of interactive users. Parallel processing is usually appropriate for improving the performance of spatial join processing. In spatial databases, fixed grids consisting of regularly partitioned cells can be employed, but previous work on spatial joins has not studied the parallel processing of spatial joins using fixed grids. This paper presents an analytical cost model that estimates the performance of a parallel spatial join algorithm based on fixed grids in terms of the number of MBR comparisons, disk accesses, and messages passed. Several experiments on synthetic and real datasets show that the proposed analytical model is very accurate. This cost model is also expected to be used in implementing a very important DBMS component, the query processing optimizer.

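The cost model above is said to count MBR comparisons, disk accesses, and message passing. The function below shows only the general shape of such an additive model; the per-component formulas and coefficients are placeholders, not the expressions derived in the paper.

```python
# Placeholder additive cost model for a parallel spatial join over a fixed grid.
# The coefficients and per-component formulas are illustrative only.

def parallel_spatial_join_cost(num_cells, objs_per_cell, num_nodes,
                               t_compare=1e-6, t_io=5e-3, t_msg=1e-4):
    # MBR comparisons: candidate pairs are compared within each grid cell.
    comparisons = num_cells * objs_per_cell * objs_per_cell
    # Disk accesses: each cell of both relations is read once.
    disk_accesses = 2 * num_cells
    # Message passing: cells whose results must be shipped to another node.
    messages = num_cells * (num_nodes - 1) / num_nodes
    cpu = comparisons * t_compare
    io = disk_accesses * t_io
    comm = messages * t_msg
    # CPU and I/O work divides across the nodes; communication is counted once.
    return (cpu + io) / num_nodes + comm

print(parallel_spatial_join_cost(num_cells=1000, objs_per_cell=50, num_nodes=4))
```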