• Title/Summary/Keyword: Set Partitioning

Low Memory Zerotree Coding (저 메모리를 갖는 제로트리 부호화)

  • Shin, Cheol;Kim, Ho-Sik;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.8A
    • /
    • pp.814-821
    • /
    • 2002
  • SPIHT (set partitioning in hierarchical trees) is an efficient and well-known zerotree coding algorithm, but its high memory requirement is a major obstacle to hardware implementation. In this paper we propose a low-memory, fast zerotree algorithm. We present the following three methods for reducing memory and speeding up coding. First, a wavelet transform computed by lifting requires less memory and has lower complexity than the traditional filter-bank implementation. Second, the wavelet coefficients are divided into blocks. Finally, we use the NLS algorithm proposed by Wheeler and Pearlman in our codec. The performance of NLS is nearly the same as that of SPIHT, while it requires low, fixed memory and achieves fast coding speed.
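
The low-memory advantage of lifting mentioned above can be illustrated with a tiny in-place example. The sketch below is a generic Haar-style lifting step in Python, not the authors' codec; the function name and sample signal are mine.

```python
# Minimal sketch (not the authors' codec): one level of a 1-D Haar-style
# lifting transform. The work happens in a single buffer (the copy below
# only protects the caller's array), which is why lifting needs little
# extra memory compared with a filter-bank implementation.
import numpy as np

def haar_lifting_forward(x: np.ndarray) -> np.ndarray:
    """One level of Haar-like lifting: predict, then update."""
    x = x.astype(float).copy()
    even, odd = x[0::2], x[1::2]
    odd -= even            # predict: detail = odd - even
    even += odd / 2.0      # update: approximation keeps the local mean
    return np.concatenate([even, odd])

if __name__ == "__main__":
    signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
    print(haar_lifting_forward(signal))
```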

Reproducibility Assessment of K-Means Clustering and Applications (K-평균 군집화의 재현성 평가 및 응용)

  • 허명회;이용구
    • The Korean Journal of Applied Statistics
    • /
    • v.17 no.1
    • /
    • pp.135-144
    • /
    • 2004
  • We propose a reproducibility (validity) assessment procedure for K-means cluster analysis that randomly partitions the data set into three parts, two of which are used for developing clustering rules and one for testing the consistency of those rules. As an alternative to the Rand index and the corrected Rand index, we also propose an entropy-based consistency measure between two clustering rules and apply it to the determination of the number of clusters in K-means clustering.
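
A minimal sketch of how such a three-way split assessment could look, assuming scikit-learn's KMeans; the entropy-based measure below is a simple conditional-entropy stand-in, not necessarily the authors' exact definition.

```python
# Sketch of the reproducibility idea (my reading of the procedure, not the
# authors' code): split the data into three parts, fit K-means on two of
# them, label the held-out part with both rules, and measure how
# consistently the two rules agree via a conditional-entropy score.
import numpy as np
from sklearn.cluster import KMeans

def conditional_entropy(a: np.ndarray, b: np.ndarray) -> float:
    """H(b | a) between two labelings; 0 means b is fully determined by a."""
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for i, j in zip(a, b):
        joint[i, j] += 1
    p = joint / joint.sum()
    pa = p.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p / pa), 0.0)
    return float(-terms.sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
parts = np.array_split(rng.permutation(len(X)), 3)     # three random parts

k = 3
rule1 = KMeans(n_clusters=k, n_init=10, random_state=1).fit(X[parts[0]])
rule2 = KMeans(n_clusters=k, n_init=10, random_state=2).fit(X[parts[1]])
labels1, labels2 = rule1.predict(X[parts[2]]), rule2.predict(X[parts[2]])
print("H(rule2 | rule1) on the test part:", conditional_entropy(labels1, labels2))
```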

Gas/particle Partitioning of PAHs Segregated with Particle Size in Equilibrium States (대기 중 PAHs의 입경별 가스/입자 분배평형에 관한 연구)

  • Park, Jin-Soo;Lee, Dong-Soo;Kim, Jong-Guk
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.27 no.12
    • /
    • pp.1270-1276
    • /
    • 2005
  • When gas/particle partitioning of PAHs in the atmosphere approaches an equilibrium state, the slope of the linear regression between the gas/particle partitioning coefficient ($logK_p$) and the subcooled liquid vapour pressure ($logP_L^O$) is -1. However, it has been argued that the equilibrium slope may deviate from -1 in the real atmosphere because of the heterogeneous characteristics of particulate matter. This study examines whether gas/particle partitioning of PAHs segregated by particle size at equilibrium supports that hypothesis. We calculated the slopes of $logK_p$ vs. $logP_L^O$ after collecting 10 sample sets consisting of particulate and vapour phases. At equilibrium the overall slope was close to -1. Despite the equilibrium state, however, not all size-segregated slopes were close to -1, and the slopes became gentler as particle size increased. This difference of slopes at equilibrium largely contradicts the assumption of gas/particle partitioning theory. If the gas/particle partitioning is governed by adsorption, the desorption enthalpy differs among particle sizes; if it is governed by absorption, the activity coefficient differs. These differences in desorption enthalpy and activity coefficient across particle sizes indicate the heterogeneous characteristics of the bulk particles, which may explain why the slope varies with particle size even at equilibrium.
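
For reference, the regression quoted in the abstract has the standard gas/particle partitioning form shown below (textbook notation; $F$, $A$, and TSP denote particulate-phase concentration, gas-phase concentration, and total suspended particulate, and $b_r$ is a fitted intercept).

```latex
\begin{align}
  K_p &= \frac{F/\mathrm{TSP}}{A}, &
  \log K_p &= m_r \,\log P_L^{\,o} + b_r .
\end{align}
% Both the adsorption and absorption equilibrium models predict m_r = -1;
% the shallower, size-dependent slopes reported above are attributed to
% heterogeneity of the particles.
```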

Circuit Partitioning Algorithm Using Wire Redundancy Removal Method

  • Kim Jin-kuk;Kwon Ki-duk;Sihn Bong-sik;Chong Jung-wha
    • Proceedings of the IEEK Conference
    • /
    • 2004.06b
    • /
    • pp.541-544
    • /
    • 2004
  • This paper presents a new circuit partitioning algorithm using wire redundancy removal. The algorithm consists of two steps. In the first step, we propose a new IIP (Iterative Improvement Partitioning) technique that selects the cell-selection method according to the improvement status, using two kinds of bucket structures: one ordered by total gain and the other by updated gain. In the second step, we select a target wire in the cut-set and add an alternative wire to the circuit so that the target wire can be removed; for this we use a wire redundancy removal and addition method. Experimental results on the MCNC benchmark circuits show improvements of up to $41-50\%$ in cut-size over previous algorithms.
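
The iterative-improvement step relies on the classic gain-bucket data structure; the sketch below shows a generic single bucket in Python. The paper's specific contribution (keeping one bucket ordered by total gain and another by updated gain and switching between them based on improvement status) is only hinted at in the comments.

```python
# Generic Fiduccia-Mattheyses-style gain bucket (illustration only):
# cells are grouped by their current move gain so that the best unlocked
# cell can be picked and re-bucketed quickly.
from collections import defaultdict

class GainBucket:
    def __init__(self):
        self.buckets = defaultdict(list)   # gain -> cells with that gain
        self.gain_of = {}                  # cell -> current gain

    def insert(self, cell, gain):
        self.buckets[gain].append(cell)
        self.gain_of[cell] = gain

    def update(self, cell, new_gain):
        old = self.gain_of[cell]
        self.buckets[old].remove(cell)
        if not self.buckets[old]:
            del self.buckets[old]
        self.insert(cell, new_gain)

    def pop_best(self):
        if not self.buckets:
            return None
        best = max(self.buckets)           # highest-gain bucket
        cell = self.buckets[best].pop()
        if not self.buckets[best]:
            del self.buckets[best]
        del self.gain_of[cell]
        return cell, best

# Usage: an IIP loop would keep two such buckets (total gain / updated
# gain) and choose which one to pop from according to improvement status.
bucket = GainBucket()
for cell, gain in [("c1", 2), ("c2", -1), ("c3", 2)]:
    bucket.insert(cell, gain)
print(bucket.pop_best())                   # one of the gain-2 cells
```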

Dynamic Cache Partitioning Strategy for Efficient Buffer Cache Management (효율적인 버퍼 캐시 관리를 위한 동적 캐시 분할 블록교체 기법)

  • 진재선;허의남;추현승
    • Journal of the Korea Society for Simulation
    • /
    • v.12 no.2
    • /
    • pp.35-44
    • /
    • 2003
  • The effectiveness of buffer cache replacement algorithms is critical to the performance of I/O systems. In this paper, we propose a degree of inter-reference gap (DIG) based block replacement scheme that retains the merits of least recently used (LRU) replacement, such as simple implementation and a good cache hit ratio (CHR) for general reference patterns, and improves the CHR further. In the proposed scheme, cache blocks with low DIGs are distinguished from blocks with high DIGs, and the replacement victim is selected among the high-DIG blocks, as is done in the low inter-reference recency set (LIRS) scheme. By effectively partitioning the cache memory dynamically based on DIGs, the CHR is improved. Trace-driven simulation is employed to verify the superiority of the DIG-based scheme and shows that the performance improves by up to about 175% compared to the LRU scheme and 3% compared to the LIRS scheme on the same traces.
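
As an illustration of the replacement policy (my own reading of "degree of inter-reference gap", not the authors' implementation), the sketch below tracks the gap between each block's last two references and, on a miss, evicts the least recently used block among the high-gap ones.

```python
# DIG-like replacement sketch: blocks whose last inter-reference gap is
# above the median are treated as "high-DIG" and are preferred victims,
# falling back to plain LRU if needed.
from collections import OrderedDict

class DigLikeCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()    # block -> (last_time, gap); order = recency
        self.clock = 0

    def access(self, block) -> bool:
        self.clock += 1
        hit = block in self.blocks
        if hit:
            last, _ = self.blocks.pop(block)
            self.blocks[block] = (self.clock, self.clock - last)
        else:
            if len(self.blocks) >= self.capacity:
                self._evict()
            self.blocks[block] = (self.clock, float("inf"))   # no gap seen yet
        return hit

    def _evict(self):
        gaps = sorted(gap for _, gap in self.blocks.values())
        median = gaps[len(gaps) // 2]
        for victim, (_, gap) in self.blocks.items():
            if gap >= median:              # oldest high-DIG block
                del self.blocks[victim]
                return
        self.blocks.popitem(last=False)    # fallback: plain LRU

cache = DigLikeCache(3)
trace = [1, 2, 3, 1, 2, 4, 1, 2, 5, 1]
print("hit ratio:", sum(cache.access(b) for b in trace) / len(trace))
```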

Clustering Data with Categorical Attributes Using Inter-dimensional Association Rules and Hypergraph Partitioning (차원간 연관관계와 하이퍼그래프 분할법을 이용한 범주형 속성을 가진 데이터의 클러스터링)

  • 이성기;윤덕균
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.24 no.65
    • /
    • pp.41-50
    • /
    • 2001
  • Clustering in data mining is a discovery process that groups a set of data such that intracluster similarity is maximized and intercluster similarity is minimized. The discovered clusters are used to explain the characteristics of the data distribution. In this paper we propose a new methodology for clustering related transactions with categorical attributes. Our approach starts with transforming a general relational database into a transactional database. We make use of inter-dimensional association rules to compose hypergraph edges, and a hypergraph partitioning algorithm to cluster the attribute values. The clusters of attribute values are then used to find clusters of transactions. The suggested procedure enhances the interpretation of the resulting clusters through the allocated attribute values.
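
A compressed sketch of the pipeline follows; pairwise co-occurrence edges stand in for the association-rule hyperedges, and a crude balanced greedy pass stands in for a real hypergraph partitioner, so this only illustrates the flow, not the authors' method.

```python
# Toy pipeline: transactions of categorical attribute values ->
# weighted co-occurrence (hyper)edges -> two groups of attribute values
# obtained by greedily reducing the cut while keeping the groups balanced.
from collections import Counter
from itertools import combinations

transactions = [
    {"color=red", "size=S", "brand=A"},
    {"color=red", "size=S", "brand=B"},
    {"color=blue", "size=L", "brand=C"},
    {"color=blue", "size=L", "brand=A"},
]

edges = Counter()                          # (value, value) -> co-occurrence weight
for t in transactions:
    for pair in combinations(sorted(t), 2):
        edges[pair] += 1

values = sorted({v for t in transactions for v in t})
part = {v: i % 2 for i, v in enumerate(values)}    # balanced starting split

def cut_weight():
    return sum(w for (a, b), w in edges.items() if part[a] != part[b])

def still_balanced_after_moving(v):
    sizes = Counter(part.values())
    sizes[part[v]] -= 1
    sizes[part[v] ^ 1] += 1
    return abs(sizes[0] - sizes[1]) <= 1

improved = True
while improved:                            # greedy improvement passes
    improved = False
    for v in values:
        if not still_balanced_after_moving(v):
            continue
        before = cut_weight()
        part[v] ^= 1                       # try moving v to the other group
        if cut_weight() < before:
            improved = True
        else:
            part[v] ^= 1                   # undo: no improvement

print({g: [v for v in values if part[v] == g] for g in (0, 1)})
```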

Typed Separation Set Partitioning for Thread Partitioning of Non-strict functional Programs (비평가인자 함수 프로그램의 스레드 분할 향상을 위한 자료형 분리 집합 분할알고리즘)

  • Yang, Chang-Mo;Joo, Hyung-Seok;Yoo, Weon-Hee
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.8
    • /
    • pp.2127-2136
    • /
    • 1998
  • Because of their non-strict semantics, non-strict functional languages expose fine-grained dynamic scheduling units that make efficient execution on conventional von Neumann-style parallel machines difficult, so the process of merging these units into coarser ones is important. This process is called thread partitioning. In this paper we propose a thread partitioning algorithm, called typed separation set partitioning, that partitions non-strict functional programs into threads. The typed separation set partitioning algorithm performs thread partitioning by exploiting the fact that no potential dependence can exist between an input name and an output name whose types cannot be compared. Using this method, threads that existing thread partitioning methods fail to merge can be merged, and larger threads than those produced by existing partitioning algorithms can be generated.
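
Read very loosely, the type-based merging test could look like the toy check below; the type hierarchy and the notion of "comparable" are my assumptions, not the paper's formal definitions.

```python
# Toy illustration of the abstract's key observation: a potential dynamic
# dependence can only run from an output to an input whose types are
# comparable, so nodes with incomparable outlet/inlet types may safely be
# merged into the same thread.
def comparable(t1: str, t2: str, subtypes: dict) -> bool:
    """True if one type is a (reflexive, transitive) subtype of the other."""
    def leq(a, b):
        return a == b or any(leq(s, b) for s in subtypes.get(a, []))
    return leq(t1, t2) or leq(t2, t1)

subtypes = {"int": ["num"], "float": ["num"]}   # assumed toy hierarchy

print(comparable("int", "bool", subtypes))   # False -> no dependence, mergeable
print(comparable("int", "num", subtypes))    # True  -> potential dependence
```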

An Efficient Multibody Dynamic Algorithm Using Independent Coordinates Set and Modified Velocity Transformation Method (수정된 속도변환기법과 독립좌표를 사용한 효율적인 다물체 동역학 알고리즘)

  • Kang, Sheen-Gil;Yoon, Yong-San
    • Proceedings of the KSME Conference
    • /
    • 2001.06b
    • /
    • pp.488-494
    • /
    • 2001
  • Many studies so far have concentrated on approaches employing a dependent coordinate set, which incurs the computational burden of constraint forces that are needless in many cases. Some researchers have developed methods to remove the constraint forces or to compute them efficiently. However, systematic generation of the equations of motion in an independent coordinate set via Kane's equation is possible for any closed-loop system. The independent velocity transformation method builds the smallest equations of motion, but in practice it requires a more complicated code implementation. In this study, the dependent velocity matrix is systematically transformed into an independent one using the dependent-independent transformation matrix of each body group, and the equations of motion free of constraint forces are then constructed. This method is compared with the other approach by counting the number of multiplications for a car model with 15 degrees of freedom.
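
The constraint-force elimination described above follows the standard velocity-transformation pattern shown below (textbook notation, not necessarily the paper's symbols): the dependent velocities $\dot{q}$ are written in terms of independent velocities $v$, and premultiplying by $B^{\mathsf T}$ removes the Lagrange multipliers.

```latex
\begin{align}
  \dot{q} &= B\,v, \qquad \ddot{q} = B\,\dot{v} + \dot{B}\,v, \\
  M\ddot{q} &= Q + \Phi_q^{\mathsf T}\lambda, \qquad \Phi_q B = 0
  \;\;\Longrightarrow\;\;
  B^{\mathsf T} M B\,\dot{v} \;=\; B^{\mathsf T}\!\bigl(Q - M\dot{B}\,v\bigr).
\end{align}
```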

TCAM Partitioning for High-Performance Packet Classification (고성능 패킷 분류를 위한 TCAM 분할)

  • Kim Kyu-Ho;Kang Seok-Min;Song Il-Seop;Kwon Teack-Geun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.2B
    • /
    • pp.91-97
    • /
    • 2006
  • As network bandwidth increases and various new services emerge, threats to the network also increase. For high-performance network security, high-speed packet classification methods that employ hardware such as TCAM are generally used. Because these devices are expensive and their capacity is limited, an efficient way of using them is needed. In this paper, we propose an efficient packet classification scheme using a Ternary CAM (TCAM), a device widely used for high-speed packet classification, to which we have applied the Snort rule set of the well-known intrusion detection system. In order to save the capacity of the expensive TCAM, we eliminate duplicated IP addresses and port numbers in the rules according to the partitioning of the TCAM table, and we represent negation and range rules with a reduced TCAM size. We also retain the advantage of low TCAM capacity consumption and reduce the number of TCAM lookups by decreasing the TCAM partitioning through combining port numbers. Simulation results on our TCAM partitioning show that the TCAM size can be reduced by up to 98% while the performance does not degrade significantly for high-speed packet classification with a large rule set.
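
The duplicate-elimination idea can be pictured with the toy table split below (my illustration, not the paper's data structures): distinct IP prefixes and port specifications are stored once, and rules refer to them by index, which is what lets the per-partition TCAM entries shrink.

```python
# Toy rule-table split: intern the IP and port fields into separate
# tables so that values duplicated across rules are stored only once.
rules = [
    ("10.0.0.0/8", "any",            80,  "alert"),
    ("10.0.0.0/8", "192.168.1.0/24", 80,  "drop"),
    ("10.0.0.0/8", "any",            443, "alert"),
]

ip_table, port_table, compact_rules = [], [], []

def intern(table, value):
    """Store value once and return its index (deduplication)."""
    if value not in table:
        table.append(value)
    return table.index(value)

for src, dst, port, action in rules:
    compact_rules.append((intern(ip_table, src),
                          intern(ip_table, dst),
                          intern(port_table, port),
                          action))

print("distinct IP entries:  ", ip_table)
print("distinct port entries:", port_table)
print("compact rules:        ", compact_rules)
```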

Adaptive Load Balancing Algorithm of Ethereum Shard Using Bargaining Solution (협상 해법을 이용한 이더리움 샤드 부하 균형 알고리즘)

  • Baek, Dong Hwan;Kim, Sung Wook
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.4
    • /
    • pp.93-100
    • /
    • 2021
  • The Ethereum shard system, introduced to solve the scalability problem of the blockchain, has a load balancing issue that can be modeled as a graph partitioning problem. In this paper, we propose an adaptive online weighted graph partitioning algorithm that negotiates between two utilities of the shard system using the bargaining solution of game theory. The bargaining solution is an axiomatic solution that can fairly resolve conflicts between utilities. The proposed algorithm extends an existing online graph partitioning algorithm to weighted graphs, and performs load balancing efficiently through a design that reflects the situation of the sharding system, using an extension of the Nash bargaining solution that applies to non-convex feasible sets of the bargaining problem. Experimental results show up to 37% better performance than a typical load balancing algorithm for shard systems.
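
For reference, the classical Nash bargaining solution that the paper builds on picks the feasible utility pair maximizing the product of gains over the disagreement point $d=(d_1,d_2)$; the paper itself works with an extension of this solution to non-convex feasible sets, which the formula below does not capture.

```latex
\begin{equation}
  (u_1^{*}, u_2^{*}) \;=\;
  \arg\max_{(u_1,\,u_2)\in S,\; u_i \ge d_i}\;(u_1 - d_1)\,(u_2 - d_2)
\end{equation}
```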