• Title/Summary/Keyword: Partitioning methods

Recursive SPIHT(Set Partitioning in Hierarchy Trees) Algorithm for Embedded Image Coding (내장형 영상코딩을 위한 재귀적 SPIHT 알고리즘)

  • 박영석
    • Journal of the Institute of Convergence Signal Processing / v.4 no.4 / pp.7-14 / 2003
  • A number of embedded wavelet image coding methods have been proposed since the introduction of the EZW (Embedded Zerotree Wavelet) algorithm, and a common characteristic of these methods is that they build on the fundamental ideas of EZW. One of the best known is the SPIHT (Set Partitioning in Hierarchy Trees) algorithm, which became very popular because it achieves equal or better performance than EZW without requiring an arithmetic encoder. In this paper we propose a recursive set partitioning in hierarchy trees (RSPIHT) algorithm for embedded image coding and evaluate its effectiveness experimentally. The proposed RSPIHT algorithm has a simple, regular form and a worst-case time complexity of O(n). In terms of processing time, RSPIHT is on average about 16.4% faster than SPIHT at T-layers above 4 on the experimental images; in terms of coding rate, it gives similar results at T-layers below 7 and improved results at the other T-layers. (An illustrative sketch of the significance-test partitioning behind this family of coders appears below.)

  • PDF
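
As context for the set-partitioning idea shared by EZW, SPIHT, and RSPIHT, the sketch below shows the basic significance test that drives such coders: a set of wavelet coefficients is coded with a single bit if it is insignificant at the current threshold, and is otherwise split into child sets that are tested recursively. This is a minimal illustration only, not the authors' RSPIHT implementation; the quadrant split and the toy coefficient array are assumptions made for the example.

```python
import numpy as np

def code_set(coeffs, r0, c0, size, threshold, bits):
    """Emit significance bits for the square set coeffs[r0:r0+size, c0:c0+size]."""
    block = coeffs[r0:r0 + size, c0:c0 + size]
    significant = np.max(np.abs(block)) >= threshold
    bits.append(1 if significant else 0)
    if not significant or size == 1:
        return                        # insignificant set (or single coefficient): one bit suffices
    half = size // 2                  # significant set: partition into four child sets and recurse
    for dr in (0, half):
        for dc in (0, half):
            code_set(coeffs, r0 + dr, c0 + dc, half, threshold, bits)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coeffs = rng.normal(scale=4.0, size=(8, 8))   # stand-in for wavelet coefficients
    bits = []
    code_set(coeffs, 0, 0, size=8, threshold=8.0, bits=bits)
    print(len(bits), "significance bits emitted at threshold 8.0")
```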

A Vertical Partitioning Algorithm based on Fuzzy Graph (퍼지 그래프 기반의 수직 분할 알고리즘)

  • Son, Jin-Hyun;Choi, Kyung-Hoon;Kim, Myoung-Ho
    • Journal of KIISE:Databases / v.28 no.3 / pp.315-323 / 2001
  • The concept of vertical partitioning has been studied with the objective of improving query execution performance and system throughput. It can be applied wherever the match between data and queries affects performance, including the partitioning of individual files in centralized environments, data distribution in distributed databases, and the division of data among different levels of a memory hierarchy. In general, a vertical partitioning algorithm should support n-ary partitioning as well as a globally optimal solution for the generation of all meaningful fragments; most previous methods, however, have limitations in supporting both efficiently. Because the vertical partitioning problem is inherently fuzzy, this fuzziness must be managed properly. In this paper we propose an efficient vertical α-partitioning algorithm based on fuzzy theory. The method not only generates all meaningful fragments but also supports n-ary partitioning without complex mathematical computation. (A small illustrative sketch of α-cut attribute grouping appears below.)

  • PDF
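
The sketch below illustrates one simple form of α-cut partitioning on an attribute affinity graph: attributes that frequent queries access together receive high affinity, edges whose normalized affinity reaches α are kept, and the connected components of the remaining graph become vertical fragments. It is only a rough illustration of the idea named in the abstract, not the paper's fuzzy-graph algorithm; the queries, frequencies, and the α value are invented for the example.

```python
from itertools import combinations

queries = {                      # query -> (attributes it accesses, access frequency)
    "q1": ({"id", "name"}, 50),
    "q2": ({"name", "email"}, 30),
    "q3": ({"salary", "dept"}, 40),
    "q4": ({"dept", "manager"}, 20),
}

attrs = sorted(set().union(*(used for used, _ in queries.values())))
affinity = {pair: 0 for pair in combinations(attrs, 2)}
for used, freq in queries.values():
    for pair in combinations(sorted(used), 2):
        affinity[pair] += freq       # co-access frequency as the affinity weight

alpha = 0.5                          # alpha-cut threshold on normalized affinity
max_aff = max(affinity.values())
edges = [pair for pair, w in affinity.items() if w / max_aff >= alpha]

parent = {a: a for a in attrs}       # union-find over the alpha-cut edges
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x
for a, b in edges:
    parent[find(a)] = find(b)

fragments = {}
for a in attrs:
    fragments.setdefault(find(a), []).append(a)
print(list(fragments.values()))      # lowering alpha merges more attributes into fewer fragments
```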

Protein Motif Extraction via Feature Interval Selection

  • Sohn, In-Suk;Hwang, Chang-Ha;Ko, Jun-Su;Chiu, David;Hong, Dug-Hun
    • Journal of the Korean Data and Information Science Society / v.17 no.4 / pp.1279-1287 / 2006
  • The purpose of this paper is to present a new algorithm for extracting the consensus pattern, or motif, from sequences belonging to the same family. Two methods are considered for feature interval partitioning: equal-probability and equal-width interval partitioning. C2H2 zinc finger protein and epidermal growth factor protein sequences are used to demonstrate the effectiveness of the proposed algorithm for motif extraction. For these two protein families, the equal-width interval partitioning method performs better than the equal-probability interval partitioning method. (The two binning schemes are illustrated in the sketch below.)

  • PDF
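
The two partitioning schemes compared in the abstract can be summarized as follows: equal-width partitioning splits a feature's range into intervals of identical length, while equal-probability (equal-frequency) partitioning chooses interval boundaries at quantiles so that each interval holds roughly the same number of observations. The sketch below is a generic illustration with synthetic data, not the paper's feature-interval selection procedure.

```python
import numpy as np

def equal_width_edges(x, k):
    """Boundaries of k intervals of identical length over the range of x."""
    return np.linspace(x.min(), x.max(), k + 1)

def equal_probability_edges(x, k):
    """Boundaries of k intervals each holding roughly the same number of observations."""
    return np.quantile(x, np.linspace(0.0, 1.0, k + 1))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    feature = rng.exponential(scale=2.0, size=1000)   # skewed synthetic feature values
    print("equal width:      ", np.round(equal_width_edges(feature, 4), 2))
    print("equal probability:", np.round(equal_probability_edges(feature, 4), 2))
```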

Frequency and Subcarrier Reuse Partitioning for FH-OFDMA Cellular Systems

  • Lee, Yeonwoo;Kim, Kyung-Ho
    • Journal of Korea Multimedia Society / v.16 no.5 / pp.601-609 / 2013
  • One of the most serious factors constraining next-generation cellular mobile communication systems will be the severe co-channel interference experienced at the cell edge. This capacity-degrading impairment, combined with the limited available spectrum, makes it essential to develop more spectrally efficient solutions that enhance system performance and enrich the mobile user's application services. This paper proposes a hybrid method of frequency hopping (FH) and subcarrier reuse partitioning that maximizes system capacity by efficiently utilizing the available spectrum while at the same time reducing co-channel interference. The main feature of the proposed method is that it applies an optimal combination of different frequency reuse factors (FRF) and FH subcarrier allocation patterns to the partitioned cell regions. Simulation results show that the proposed method can achieve the optimum number of subcarrier subsets according to the frequency-reuse distance and performs better than fixed-FRF methods for a given partitioning arrangement. Results are presented in terms of both blocking probability and BER performance. It is also shown that the proposed scheme is well suited to FH-OFDMA-based cellular systems aiming at low co-channel interference and an optimized number of subcarriers.
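
A minimal sketch of the general reuse-partitioning idea (not the paper's FH-OFDMA scheme or its optimized parameters): users in the inner cell region may hop over the full subcarrier set (frequency reuse factor 1), while cell-edge users are confined to one of three orthogonal subsets chosen per cell (frequency reuse factor 3) so that neighbouring cell edges do not collide. The subcarrier count, inner-region radius, and subset assignment rule are assumptions made for the example.

```python
import numpy as np

NUM_SUBCARRIERS = 48
edge_subsets = np.array_split(np.arange(NUM_SUBCARRIERS), 3)   # three orthogonal edge subsets

def allowed_subcarriers(cell_id, user_distance, cell_radius=1.0, inner_fraction=0.6):
    """Subcarriers a user may hop over, given its distance from the base station."""
    if user_distance <= inner_fraction * cell_radius:
        return np.arange(NUM_SUBCARRIERS)          # inner region: frequency reuse factor 1
    return edge_subsets[cell_id % 3]               # edge region: frequency reuse factor 3

if __name__ == "__main__":
    print("inner user:", allowed_subcarriers(cell_id=0, user_distance=0.3).size, "subcarriers")
    print("edge user: ", allowed_subcarriers(cell_id=0, user_distance=0.9).size, "subcarriers")
```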

Performance Improvement of Force-directed Partitioning Algorithm for HW/SW Codesign (하드웨어/소프트웨어 통합설계를 위한 FDS 분할 알고리즘의 성능개선)

  • Oh, Ju-Young;Lee, Myoun-Jae;Lee, Jun-Yong;Park, Do-Soon
    • The KIPS Transactions:PartA / v.9A no.4 / pp.491-496 / 2002
  • Most partitioning algorithms for hardware/software codesign do not consider scheduling, so partitioning must be performed again if time constraints are not satisfied when the partitioned result is scheduled. Existing FDS-based methods that consider scheduling during partitioning decide the control step at which a node is scheduled while selecting nodes for partitioning. In selecting nodes for partitioning, several aspects must be considered together, such as the cost or time added by partitioning a node and the degree of interference its scheduling causes. The induced force, which measures the degree of interference with scheduling other nodes, is computed over all control steps of the node and its dependent nodes. In this paper, a new FDS-based partitioning algorithm is proposed in which partitioning is performed using the defined scheduling urgency and relative scheduling urgency of the nodes. Since nodes are partitioned by computing relative scheduling urgencies only at the earliest and latest of their assignable control steps, the time complexity of the induced-force computation is reduced. Experimental results on benchmarks show that the proposed algorithm improves execution time compared to existing FDS-based methods.
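
For readers unfamiliar with force-directed scheduling, the sketch below computes the ASAP and ALAP control steps of each node in a toy data-flow graph; the interval between them is the window over which induced forces, and urgency-style measures derived from it, would be evaluated. The scheduling urgency and relative scheduling urgency used in the paper are the authors' own definitions and are not reproduced here; the graph is a made-up example.

```python
from collections import defaultdict

edges = [("a", "c"), ("b", "c"), ("c", "d"), ("b", "e")]   # toy data-flow graph
nodes = {n for edge in edges for n in edge}
succ, pred = defaultdict(list), defaultdict(list)
for u, v in edges:
    succ[u].append(v)
    pred[v].append(u)

def asap(n):
    """Earliest control step: one step after the latest predecessor."""
    return 1 if not pred[n] else 1 + max(asap(p) for p in pred[n])

total_steps = max(asap(n) for n in nodes)

def alap(n):
    """Latest control step: one step before the earliest successor."""
    return total_steps if not succ[n] else min(alap(s) for s in succ[n]) - 1

for n in sorted(nodes):
    lo, hi = asap(n), alap(n)
    print(f"node {n}: assignable control steps [{lo}, {hi}], mobility {hi - lo}")
```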

A Cyclic Sliced Partitioning Method for Packing High-dimensional Data (고차원 데이타 패킹을 위한 주기적 편중 분할 방법)

  • 김태완;이기준
    • Journal of KIISE:Databases / v.31 no.2 / pp.122-131 / 2004
  • Traditional indexing techniques have been proposed for low-dimensional data in dynamic environments, but recent database applications require efficient processing of huge volumes of high-dimensional data in static environments. Many proposed indexing strategies, particularly partitioning strategies, are therefore not well adapted to these new environments. In this study we point out these facts and propose a new partitioning strategy that meets the new applications' requirements and is derived from analysis. As a preliminary step, we apply a packing technique on the one hand and exploit observations on the Minkowski-sum cost model on the other, under a uniform data distribution. These observations predict that an unbalanced partitioning strategy may be more query-efficient than a balanced one for high-dimensional data. We therefore propose our method, called CSP (Cyclic Sliced Partitioning). Analysis of this method explicitly suggests metrics for how to partition high-dimensional data. Using the cost model, simulations, and experiments, we show that our method clearly outperforms the balanced strategy, and experimental studies on other indices and packing methods also show its superiority.
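
The observation that unbalanced (sliced) partitioning can beat balanced partitioning for high-dimensional data can be reproduced with a very simplified Minkowski-sum estimate: under uniform data in the unit d-cube, a page with side lengths s_i is intersected by a cubic range query of side q with probability prod(min(s_i + q, 1)). The sketch below compares a balanced layout that halves log2(P) dimensions against a layout that slices a single dimension into P pieces; the dimensionality, page budget, and query sides are toy values, and this is not the paper's CSP analysis.

```python
import math

def expected_page_accesses(side_lengths, pages, q):
    """Minkowski-sum estimate: pages touched by a cubic range query of side q."""
    hit_probability = math.prod(min(s + q, 1.0) for s in side_lengths)
    return pages * hit_probability

d, pages = 16, 64
split_dims = int(math.log2(pages))                       # balanced: 6 dimensions halved
balanced = [0.5] * split_dims + [1.0] * (d - split_dims)
sliced = [1.0 / pages] + [1.0] * (d - 1)                 # unbalanced: one dimension cut into 64 slices

for q in (0.1, 0.5):
    print(f"query side {q}: balanced touches {expected_page_accesses(balanced, pages, q):5.1f} pages, "
          f"sliced touches {expected_page_accesses(sliced, pages, q):5.1f} pages")
```

With these toy numbers the balanced layout wins for small query sides but touches every page for large ones, while the sliced layout does not; this is the flavour of the trade-off the abstract alludes to.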

Parallelization of Multi-Block Flow Solver with Multi-Block/Multi-Partitioning Method (다중블록/다중영역분할 기법을 이용한 유동해석 코드 병렬화)

  • Ju, Wan-Don;Lee, Bo-Sung;Lee, Dong-Ho;Hong, Seung-Gyu
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.31 no.7 / pp.9-14 / 2003
  • In this work, a multi-block/multi-partitioning method is proposed for parallelizing a multi-block flow solver. Its advantage is a uniform load balance, achieved by subdividing each block across the processors. To compare parallel efficiency across domain decomposition methods, the multi-block/single-partitioning and multi-block/multi-partitioning methods are both applied to the flow analysis solver; the multi-block/multi-partitioning method shows better parallel efficiency because of its optimized load balancing. Finally, the method is applied to the CFDS code, where the computation with sixteen processors is over twelve times faster than the sequential solver.
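
The load-balancing difference between the two decompositions can be illustrated with a toy example (block sizes and processor count invented here): assigning whole blocks to processors leaves the load uneven whenever block sizes differ, whereas subdividing every block across every processor equalizes the per-processor cell count. This is only an illustration of the idea, not the paper's solver or its CFDS results.

```python
block_cells = [120_000, 80_000, 50_000, 30_000]   # cells per grid block (made-up sizes)
num_procs = 4

# Multi-block/single-partitioning: whole blocks assigned greedily to the least-loaded processor.
single = [0] * num_procs
for cells in sorted(block_cells, reverse=True):
    single[single.index(min(single))] += cells

# Multi-block/multi-partitioning: every block subdivided evenly over all processors.
multi = [sum(cells // num_procs for cells in block_cells)] * num_procs

ideal = sum(block_cells) / num_procs
print("single-partitioning load imbalance:", round(max(single) / ideal, 2))
print("multi-partitioning  load imbalance:", round(max(multi) / ideal, 2))
```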

Vertical class fragmentation in distributed object-oriented databases (분산 객체 지향 데이타베이스에서 클래스의 기법)

  • 이순미;임해철
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.2 / pp.215-224 / 1997
  • This paper addresses vertical class fragmentation in distributed object-oriented databases. In the proposed vertical fragmentation, attribute fragments are first produced by partitioning the attributes, and method fragments are then produced by gathering the methods that refer to the attributes in each fragment. To partition the attributes, we define a query access matrix (QAM) and a method access matrix (MAM) that express which attributes each method refers to, and we extend the QAM, MAM, and attribute usage matrix (AUM) to a universal class environment so that relationships with other classes can be represented through the class hierarchy and the class composition hierarchy. (A small illustrative sketch of the method-assignment step appears below.)

  • PDF
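
The sketch below illustrates the second step described in the abstract in its simplest possible form: once attributes have been grouped into fragments, each method is placed with the fragment whose attributes it references most. The fragments and the method-access information are invented for the example; the paper's QAM/MAM/AUM construction and its extension to the universal class environment are not reproduced here.

```python
attribute_fragments = {                 # result of a prior attribute partitioning (made up)
    "F1": {"name", "address"},
    "F2": {"salary", "bonus"},
}
method_access = {                       # method -> attributes it reads or writes (made up)
    "format_label": {"name", "address"},
    "compute_pay": {"salary", "bonus", "name"},
}

method_fragments = {f: [] for f in attribute_fragments}
for method, used in method_access.items():
    best = max(attribute_fragments,
               key=lambda f: len(used & attribute_fragments[f]))   # fragment sharing most attributes
    method_fragments[best].append(method)

print(method_fragments)                 # {'F1': ['format_label'], 'F2': ['compute_pay']}
```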

A space partitioning method embedded in a simulated annealing algorithm for facility layout problems with shape constraints

  • Kim, Jae-Gon;Kim, Yeong-Dae
    • Proceedings of the Korean Operations and Management Science Society Conference / 1996.04a / pp.465-468 / 1996
  • We deal with facility layout problems with shape constraints and develop a simulated annealing algorithm for them. In the algorithm, a solution is encoded as a matrix that holds information about the relative locations of the facilities on the floor. A block layout is constructed by partitioning the floor into a set of rectangular blocks according to this information while satisfying the facilities' area requirements. Three methods are suggested for this partitioning procedure and employed in the simulated annealing algorithm. Results of computational experiments show that the proposed algorithm performs better than existing algorithms, especially for problems with tight shape constraints. (A generic example of such a floor-partitioning construction appears below.)

  • PDF
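
One generic way a floor can be cut into rectangular blocks from a relative-location matrix is sketched below: rows of the matrix become horizontal strips whose heights are proportional to the total facility area they contain, and each strip is then cut vertically in proportion to the individual facility areas, so every block gets exactly its required area. This is a plain slicing construction for illustration; it is not claimed to be one of the three partitioning methods proposed in the paper, and the floor size, areas, and matrix are invented.

```python
FLOOR_W, FLOOR_H = 20.0, 10.0
areas = {"A": 60, "B": 40, "C": 50, "D": 50}     # required facility areas (sum = floor area)
location_matrix = [["A", "B"],                    # relative positions of facilities on the floor
                   ["C", "D"]]

layout, y = {}, 0.0
total_area = sum(areas.values())
for row in location_matrix:
    row_area = sum(areas[f] for f in row)
    strip_height = FLOOR_H * row_area / total_area    # strip height proportional to row area
    x = 0.0
    for f in row:
        width = FLOOR_W * areas[f] / row_area         # vertical cut keeps each block's area exact
        layout[f] = (x, y, width, strip_height)       # block stored as (x, y, width, height)
        x += width
    y += strip_height

for f, (bx, by, bw, bh) in layout.items():
    print(f"{f}: origin=({bx:.1f}, {by:.1f}), size={bw:.1f} x {bh:.1f}, area={bw * bh:.0f}")
```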

CPU-GPU2 Trigeneous Computing for Iterative Reconstruction in Computed Tomography

  • Oh, Chanyoung;Yi, Youngmin
    • IEIE Transactions on Smart Processing and Computing / v.5 no.4 / pp.294-301 / 2016
  • In this paper, we present methods to efficiently parallelize iterative 3D image reconstruction by exploiting trigeneous devices (three different types of device) at the same time: a CPU, an integrated GPU, and a discrete GPU. We first present a technique that exploits the single instruction, multiple data (SIMD) architectures in GPUs. We then propose a performance estimation model with which the optimal data partitioning across the trigeneous devices can easily be found. We find that performance varies by up to 6.23 times depending on how the SIMD units in the GPUs are accessed. Using the trigeneous devices and the proposed estimation model, we achieve optimal partitioning and throughput, corresponding to a further 9.4% improvement over discrete-GPU-only execution.
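
The general idea behind estimation-model-driven partitioning can be sketched as splitting the per-iteration work across the three devices in proportion to their measured (or modelled) throughput, so that all devices finish at roughly the same time. The throughput numbers and work size below are placeholders, and the paper's actual estimation model is not reproduced.

```python
throughput = {                  # measured work items per second (placeholder values)
    "cpu": 1.2e6,
    "integrated_gpu": 2.8e6,
    "discrete_gpu": 9.0e6,
}
total_items = 1_000_000         # e.g. voxels to update in one reconstruction iteration

total_rate = sum(throughput.values())
shares = {dev: int(total_items * rate / total_rate) for dev, rate in throughput.items()}
shares["discrete_gpu"] += total_items - sum(shares.values())   # hand the rounding remainder to one device

for dev, n in shares.items():
    print(f"{dev}: {n} items, estimated time {n / throughput[dev] * 1e3:.2f} ms")
```

With a throughput-proportional split like this, the estimated per-device times come out nearly equal, which is the load-balancing goal the partitioning serves.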