• Title/Summary/Keyword: space partitioning

Parallelization of Multi-Block Flow Solver with Multi-Block/Multi-Partitioning Method (다중블록/다중영역분할 기법을 이용한 유동해석 코드 병렬화)

  • Ju, Wan-Don;Lee, Bo-Sung;Lee, Dong-Ho;Hong, Seung-Gyu
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.31 no.7 / pp.9-14 / 2003
  • In this work, a multi-block/multi-partitioning method is proposed for parallelizing a multi-block flow solver. By subdividing every block across all processors, it achieves a uniform load balance. To compare the parallel efficiency of different domain decomposition methods, both a multi-block/single-partitioning and a multi-block/multi-partitioning method are applied to the flow analysis solver. The multi-block/multi-partitioning method yields better parallel efficiency because of its optimized load balancing. Finally, it is applied to the CFDS code, where the computation with sixteen processors runs more than twelve times faster than the sequential solver.
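
As an editorial illustration of the load-balancing idea described in this abstract (not the paper's CFDS solver), the sketch below contrasts assigning whole blocks to processors with subdividing every block across all processors; the block sizes and processor count are made-up values.

```python
# Illustrative sketch only (not the paper's CFDS solver): compare whole-block
# assignment with multi-block/multi-partitioning, where every block is
# subdivided across all processors.

def single_partitioning(block_sizes, n_procs):
    """Assign each whole block to the currently least-loaded processor."""
    loads = [0] * n_procs
    for size in sorted(block_sizes, reverse=True):
        loads[loads.index(min(loads))] += size
    return loads

def multi_partitioning(block_sizes, n_procs):
    """Subdivide every block evenly over all processors."""
    loads = [0] * n_procs
    for size in block_sizes:
        share, rem = divmod(size, n_procs)
        for p in range(n_procs):
            loads[p] += share + (1 if p < rem else 0)
    return loads

if __name__ == "__main__":
    blocks = [120_000, 45_000, 80_000, 20_000]   # cells per block (made-up values)
    for name, fn in (("single", single_partitioning), ("multi", multi_partitioning)):
        loads = fn(blocks, 16)
        print(f"{name:>6}-partitioning: max load {max(loads)}, min load {min(loads)}")
```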

MLPPI Wizard: An Automated Multi-level Partitioning Tool on Analytical Workloads

  • Suh, Young-Kyoon;Crolotte, Alain;Kostamaa, Pekka
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.4 / pp.1693-1713 / 2018
  • An important technique used by database administrators (DBAs) to improve performance of decision-support workloads associated with a star schema is multi-level partitioning. Queries then benefit from partition elimination, owing to constraints expressed on the dimension tables. As the task of multi-level partitioning can be overwhelming for a DBA, we propose a wizard that facilitates it by calculating a partitioning scheme for a particular workload. The system resides completely on a client and interacts with the cost-estimation subsystem of the query optimizer via an API over the network, thereby eliminating any need to change the optimizer. In addition, since only cost estimates are needed, the wizard's overhead is very low. By using a greedy algorithm to enumerate the search space over the query predicates in the workload, the wizard is efficient, with worst-case polynomial complexity. The proposed technology can be applied to any clustering or partitioning scheme in any database management system that provides an interface to the query optimizer. Applied to the Teradata database, it produces recommendations that outperform a human expert's solution as measured by the total execution time of the workload. We also demonstrate the scalability of our approach as the fact table (and workload) size increases.
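
The following is a minimal sketch of a greedy, cost-driven selection of partitioning predicates in the spirit of the wizard described above; it is not the MLPPI implementation, and `estimate_cost`, the predicate names, and their savings are made-up stand-ins for the optimizer's cost-estimation API.

```python
# Minimal sketch of a greedy, cost-driven choice of partitioning predicates.
# `estimate_cost` stands in for the optimizer's cost-estimation API that the
# wizard calls over the network; the predicates and savings below are made up.

def greedy_partitioning(candidates, estimate_cost):
    """Add one predicate at a time while the estimated workload cost drops."""
    chosen, remaining = [], list(candidates)
    best_cost = estimate_cost(chosen)
    while remaining:
        pred, cost = min(((p, estimate_cost(chosen + [p])) for p in remaining),
                         key=lambda pc: pc[1])
        if cost >= best_cost:              # no further improvement: stop
            break
        chosen.append(pred)
        remaining.remove(pred)
        best_cost = cost
    return chosen, best_cost

if __name__ == "__main__":
    savings = {"date_range": 0.40, "region": 0.20, "product_group": 0.05}
    def estimate_cost(preds):              # toy stub, not a real optimizer call
        cost = 100.0
        for p in preds:
            cost *= 1.0 - savings[p]
        return cost
    print(greedy_partitioning(savings, estimate_cost))
```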

Declustering of High-dimensional Data by Cyclic Sliced Partitioning (주기적 편중 분할에 의한 다차원 데이터 디클러스터링)

  • Kim Hak-Cheol;Kim Tae-Wan;Li Ki-Joune
    • Journal of KIISE: Databases / v.31 no.6 / pp.596-608 / 2004
  • A lot of work has been done to reduce disk access time in I/O-intensive systems that store and handle massive amounts of data, by distributing the data across multiple disks and accessing them in parallel. Most of the previous work has focused on an efficient mapping from a grid cell to a disk number, on the assumption that the data space is partitioned into a regular grid. Although grid-like partitioning achieves good performance for low-dimensional data, its performance degrades as the dimensionality of the data grows, even with a good disk allocation scheme. This is because the entire data space is partitioned equally regardless of the distribution of the data objects: most of the data in a high-dimensional space lie near the surface of the space. For that reason, we propose a new declustering algorithm based on a partitioning scheme that partitions the data space from the surface inward. Several experimental results show that this unbalanced partitioning scheme remarkably reduces the number of data blocks touched by a query as the dimensionality of the data and the query size grow. In this paper, we also propose disk allocation schemes based on the layout of the data blocks that result from the partitioning. To show the performance of the proposed algorithm, we have performed several experiments with data of different dimensionalities and a wide range of disk counts. Our proposed disk allocation method performs within 10 additional disk accesses of a strictly optimal allocation scheme. We compared our algorithm with the Kronecker-sequence-based declustering algorithm, which is reported to be the best among the declustering algorithms based on grid partitioning and mapping functions, and improve declustering performance by up to 14 times as the dimensionality of the data grows.
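
To illustrate only the underlying intuition, the sketch below buckets points of a high-dimensional unit cube by their distance to the nearest face (so the shells near the surface come first) and spreads the buckets over disks round-robin; the shell and disk counts are arbitrary, and this is not the paper's cyclic sliced partitioning or its allocation scheme.

```python
# Illustration of the intuition that high-dimensional data concentrates near
# the surface of the space; not the paper's partitioning or allocation method.

import random

def shell_index(point, n_shells):
    """0 = outermost shell (closest to the surface), n_shells - 1 = innermost."""
    d = min(min(x, 1.0 - x) for x in point)      # distance to nearest face, in [0, 0.5]
    return min(int(d * 2 * n_shells), n_shells - 1)

def decluster(points, n_shells, n_disks):
    disks = [[] for _ in range(n_disks)]
    for p in points:
        disks[shell_index(p, n_shells) % n_disks].append(p)   # round-robin over shells
    return disks

if __name__ == "__main__":
    dim, n = 16, 10_000
    pts = [[random.random() for _ in range(dim)] for _ in range(n)]
    disks = decluster(pts, n_shells=8, n_disks=4)
    print([len(d) for d in disks])   # almost everything lands in the outer shells
```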

Performance Improvement of Declustering Algorithm by Efficient Grid-Partitioning Multi-Dimensional Space (다차원 공간의 효율적인 그리드 분할을 통한 디클러스터링 알고리즘 성능향상 기법)

  • Kim, Hak-Cheol
    • Journal of Korea Spatial Information System Society / v.12 no.1 / pp.37-48 / 2010
  • In this paper, we analyze the shortcomings of previous declustering methods, which are based on grid-like partitioning and a mapping function from a cell to a disk number, in high-dimensional space, and propose a solution. The problems arise from the fact that the number of splits per dimension is small (for the most part, binary partitioning suffices) and that the side length of a range query with small selectivity is nevertheless quite large. To solve this problem, we propose a mathematical model to estimate the performance of a grid-like partitioning method. With the proposed estimation model, we can choose a good grid-like partitioning among the possible schemes, which results in an overall improvement in declustering performance. Several experimental results show that we can improve the performance of a previous declustering method by up to 2.7 times.
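
The sketch below illustrates an estimation-driven choice of grid splits in the spirit of this abstract; the approximation that a query of relative side length q intersects about q·m + 1 of a dimension's m slices, and the exhaustive enumeration of split vectors, are assumptions of this illustration rather than the paper's model.

```python
# Sketch of an estimation-driven choice of grid splits (illustrative model,
# not the paper's).

from math import prod

def expected_touched(splits, q):
    """Estimated number of grid cells intersected by a hyper-cubic range query."""
    return prod(q * m + 1 for m in splits)

def best_grid(dim, q, n_cells):
    """Among split vectors whose product equals n_cells, pick the one with the
    smallest estimated number of touched cells."""
    best = None

    def rec(dims_left, cells_left, chosen):
        nonlocal best
        if dims_left == 0:
            if cells_left == 1:
                est = expected_touched(chosen, q)
                if best is None or est < best[1]:
                    best = (tuple(chosen), est)
            return
        for m in range(1, cells_left + 1):
            if cells_left % m == 0:
                rec(dims_left - 1, cells_left // m, chosen + [m])

    rec(dim, n_cells, [])
    return best

if __name__ == "__main__":
    # 4-dimensional space, query covering 30% of each axis, 64 cells in total
    print(best_grid(dim=4, q=0.3, n_cells=64))
```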

S-Octree: An Extension to Spherical Coordinates

  • Park, Tae-Jung;Lee, Sung-Ho;Kim, Chang-Hun
    • Journal of Korea Multimedia Society / v.13 no.12 / pp.1748-1759 / 2010
  • We extend the octree subdivision process from Cartesian to spherical coordinates to develop a more efficient space-partitioning structure for surface models. As an application of the proposed structure, we apply octree subdivision in spherical coordinates (the "S-Octree") to geometry compression in progressive mesh coding. Most previous research on geometry-driven progressive mesh compression is devoted to improving the predictability of geometry information. In contrast, we focus on efficient storage of the space-partitioning structure itself. By eliminating void space at the initial stage and aligning the R axis with the important components of the geometry information, the S-Octree improves the efficiency of geometry information coding. Several meshes are tested in progressive mesh coding based on the S-Octree, and results for the performance parameters are presented.
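
As a rough illustration of octree-style subdivision carried out in spherical rather than Cartesian coordinates, the sketch below converts points to (r, θ, φ) and classifies each into one of eight children of a cell; it is not the paper's S-Octree coder, and the sample points are arbitrary.

```python
# Rough sketch: classify points into octree children in spherical coordinates.
# Illustrative only; not the paper's S-Octree implementation.

import math

def to_spherical(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0   # polar angle in [0, pi]
    phi = math.atan2(y, x)                       # azimuth in (-pi, pi]
    return r, theta, phi

def child_index(p_sph, cell_lo, cell_hi):
    """3-bit child index: one bit per spherical coordinate (r, theta, phi)."""
    idx = 0
    for bit, (p, lo, hi) in enumerate(zip(p_sph, cell_lo, cell_hi)):
        if p >= (lo + hi) / 2.0:
            idx |= 1 << bit
    return idx

if __name__ == "__main__":
    pts = [(0.1, 0.2, 0.9), (0.8, -0.3, 0.1), (-0.5, 0.5, -0.5)]
    sph = [to_spherical(*p) for p in pts]
    lo = tuple(min(s[i] for s in sph) for i in range(3))   # root cell bounds
    hi = tuple(max(s[i] for s in sph) for i in range(3))
    for p, s in zip(pts, sph):
        print(p, "-> child", child_index(s, lo, hi))
```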

Fuzzy modeling using transformed input space partitioning

  • You, Je-Young;Lee, Sang-Chul;Won, Sang-Chul
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1996.10b / pp.494-498 / 1996
  • Three fuzzy input space partitioning methods have mainly been used to date: the grid, tree, and scatter methods. These partitioning methods perform well when modeling linear systems and nonlinear systems with independent modeling variables. However, for a nonlinear system with coupled modeling variables, many fuzzy rules are required to obtain an accurate fuzzy model. In this paper, we show that the fuzzy model can be obtained using a modeling vector transformed by a linear transformation.
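
The sketch below illustrates the general idea with a PCA-style rotation (an assumption of this illustration, not necessarily the paper's transformation): decorrelating the coupled inputs moves almost all of the variation onto one transformed axis, so a grid partition of the transformed space needs few fuzzy sets, and hence few rules, along the other axis.

```python
# Sketch: decorrelate coupled modeling variables with a linear (PCA-style)
# transform before grid partitioning. Illustrative assumption, not the
# paper's transformation.

import numpy as np

def decorrelate(X):
    """Rotate the inputs onto the principal axes of their covariance."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt.T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = rng.uniform(-1, 1, 500)
    X = np.column_stack([t, 0.9 * t + 0.05 * rng.normal(size=500)])  # coupled inputs
    Z = decorrelate(X)
    print("correlation before:", round(float(np.corrcoef(X.T)[0, 1]), 3))
    print("correlation after :", round(float(np.corrcoef(Z.T)[0, 1]), 3))
    print("variance per transformed axis:", Z.var(axis=0))
```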

Compression of 3D Mesh Geometry and Vertex Attributes for Mobile Graphics

  • Lee, Jong-Seok;Choe, Sung-Yul;Lee, Seung-Yong
    • Journal of Computing Science and Engineering / v.4 no.3 / pp.207-224 / 2010
  • This paper presents a compression scheme for mesh geometry which is suitable for mobile graphics. The main focus is to enable real-time decoding of compressed vertex positions while providing reasonable compression ratios. Our scheme is based on local quantization of vertex positions with mesh partitioning. To prevent visual seams along the partitioning boundaries, we constrain the locally quantized cells of all mesh partitions to have the same size and aligned local axes. We propose a mesh partitioning algorithm that minimizes the size of the locally quantized cells, which relates to the distortion of the restored mesh. Vertex coordinates are stored in main memory and transmitted to graphics hardware for rendering in the quantized form, saving memory space and system bus bandwidth. The decoding operation is combined with the model geometry transformation, and the only overhead to restore vertex positions is one matrix multiplication per mesh partition. In our experiments, a 32-bit floating-point vertex coordinate is quantized into an 8-bit integer, which is the smallest data size supported in a mobile graphics library. With this setting, the distortions of the restored meshes are comparable to 11-bit global quantization of vertex coordinates. We also apply the proposed approach to the compression of vertex attributes, such as vertex normals and texture coordinates, and show that gains similar to those for vertex geometry can be obtained through local quantization with mesh partitioning.
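
A minimal sketch of per-partition 8-bit quantization with one shared cell size, in the spirit of the scheme described above but not the paper's codec; the partition data are synthetic, and the quantizer/restore pair below is an illustrative stand-in.

```python
# Sketch of per-partition 8-bit quantization with a shared cell size. Each
# partition quantizes vertices against its own origin, so positions are
# restored with a single scale-and-translate transform per partition.
# Illustrative only; not the paper's codec.

import numpy as np

def quantize_partitions(partitions, bits=8):
    levels = (1 << bits) - 1
    # shared cell size: largest partition extent divided by the number of levels
    extent = max(float((p.max(axis=0) - p.min(axis=0)).max()) for p in partitions)
    cell = extent / levels
    encoded = []
    for p in partitions:
        origin = p.min(axis=0)
        q = np.round((p - origin) / cell).astype(np.uint8)
        encoded.append((q, origin, cell))   # origin and cell define the restore transform
    return encoded

def restore(q, origin, cell):
    return q.astype(np.float32) * cell + origin

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    parts = [rng.uniform(0.0, 1.0, (100, 3)), rng.uniform(1.0, 2.0, (100, 3))]
    enc = quantize_partitions(parts)
    err = max(float(np.abs(restore(*e) - p).max()) for e, p in zip(enc, parts))
    print("max restore error:", err)        # bounded by half a cell
```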

Methods to Recognize and Manage Spatial Shapes for Space Syntax Analysis (공간구문분석을 위한 공간형상 인식 및 관리 방법)

  • Jeong, Sang-Kyu;Ban, Yong-Un
    • KIEAE Journal / v.11 no.6 / pp.95-100 / 2011
  • Although Space Syntax is a well-known technique for spatial analysis, debates have taken place among researchers because Space Syntax discards geometric information such as the shapes and sizes of spaces, and hence may cause inconsistencies. Therefore, this study aims at developing methods to recognize and manage spatial shapes for more precise Space Syntax analysis. To reach this goal, this study employed both graph theory and a binary space partitioning (BSP) tree to recognize and manage spatial information. As a result, spatial shapes and sizes could be recognized by checking loops in a graph converted from the spatial shapes of the built environment, and each spatial shape could be managed sequentially by a hierarchical BSP tree. Through such recognition and management processes, convex maps composed of the fattest and fewest convex spaces could be drawn. In conclusion, we hope that the methods developed here will be useful in urban planning for finding appropriate uses of spaces that support the sustainability of the built environment on the basis of the spatial and social relationships in urban spaces.
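
As an editorial sketch of the hierarchical management named in this abstract, the code below builds a tiny axis-aligned BSP tree (kd-tree-style splits) over 2D points standing in for spatial elements; it does not reproduce the paper's graph-loop recognition or convex-map construction.

```python
# Tiny axis-aligned BSP tree over 2D points, as an illustration of managing
# spatial elements hierarchically. Not the paper's algorithm.

class BSPNode:
    def __init__(self, items, depth=0, leaf_size=2):
        self.items = items
        self.left = self.right = None
        if len(items) > leaf_size:
            axis = depth % 2                      # alternate splits between x and y
            items.sort(key=lambda p: p[axis])
            mid = len(items) // 2
            self.split = (axis, items[mid][axis])
            self.left = BSPNode(items[:mid], depth + 1, leaf_size)
            self.right = BSPNode(items[mid:], depth + 1, leaf_size)
            self.items = []                       # interior nodes hold no items

    def leaves(self):
        if self.left is None:
            yield self.items
        else:
            yield from self.left.leaves()
            yield from self.right.leaves()

if __name__ == "__main__":
    pts = [(1, 5), (2, 1), (4, 4), (6, 2), (7, 7), (3, 6)]
    print(list(BSPNode(pts).leaves()))
```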

The Cooperate Navigation for Swarm Robot Using Centroidal Voronoi Tessellation (무게중심 보로노이 테셀레이션을 이용한 군집로봇의 협조탐색)

  • Bang, Mun-Seop;Joo, Young-Hoon
    • The Transactions of The Korean Institute of Electrical Engineers / v.61 no.1 / pp.130-134 / 2012
  • In this paper, we propose a space-partitioning technique for swarm robots using the centroidal Voronoi tessellation. The proposed method consists of two parts: space partitioning and collision avoidance. The partitioning used to search a given space is carried out with a density function generated from accident events, and collision avoidance is implemented with the potential field method. Finally, numerical experiments show the effectiveness and feasibility of the proposed method.
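
A minimal sketch of a density-weighted centroidal Voronoi tessellation computed by Lloyd iteration, a standard way to obtain a CVT; the Gaussian "incident" density and the robots' initial positions are illustrative, and the paper's potential-field collision avoidance is not reproduced.

```python
# Density-weighted CVT by Lloyd iteration over a discretised search area.
# Illustrative density and robot positions; collision avoidance omitted.

import numpy as np

def lloyd_cvt(generators, samples, weights, n_iter=20):
    g = generators.copy()
    for _ in range(n_iter):
        # assign every sample point to its nearest generator (its Voronoi region)
        dist = np.linalg.norm(samples[:, None, :] - g[None, :, :], axis=2)
        owner = dist.argmin(axis=1)
        # move each generator to the density-weighted centroid of its region
        for k in range(len(g)):
            mask = owner == k
            if mask.any():
                w = weights[mask]
                g[k] = (samples[mask] * w[:, None]).sum(axis=0) / w.sum()
    return g

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    robots = rng.uniform(0, 1, (5, 2))                  # initial robot positions
    pts = rng.uniform(0, 1, (4000, 2))                  # discretised search area
    density = np.exp(-20.0 * ((pts - [0.7, 0.3]) ** 2).sum(axis=1))  # incident hot spot
    print(lloyd_cvt(robots, pts, density))              # robots cluster near the hot spot
```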