• Title/Summary/Keyword: set partitioning


A Region-based Comparison Algorithm of k sets of Trapezoids (k 사다리꼴 셋의 영역 중심 비교 알고리즘)

  • Jung, Hae-Jae
    • The KIPS Transactions:PartA
    • /
    • v.10A no.6
    • /
    • pp.665-670
    • /
    • 2003
  • In applications such as automatic mask generation for semiconductor production, a drawing consists of many polygons that are partitioned into trapezoids. Adding a polygon to the drawing, or deleting one from it, is performed through geometric operations such as insertion, deletion, and search of trapezoids. Depending on the partitioning algorithm used, a polygon can be partitioned differently in terms of shape, size, and so on. It is therefore necessary to devise an algorithm that compares sets of trapezoids, where each set represents the parts of a drawing of interest. Such a comparison algorithm may be used, for example, to verify a software program that handles geometric objects consisting of trapezoids. In this paper, given k sets of trapezoids in which each set forms the regions of interest of a drawing, we present how to compare the k sets to determine whether they all represent the same geometric scene. When each input set contains the same number n of trapezoids, the proposed algorithm has $O(2^{k-2} n^2 (\log n + k))$ time complexity. It is also shown that the proposed algorithm has the same time complexity, $O(n^2 \log n)$, as the sweeping-based algorithm when the number k (<< n) of input sets is small. Furthermore, the proposed algorithm can be kn times faster than the sweeping-based algorithm when all the trapezoids in the k input sets are almost the same.
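A minimal illustration of the comparison problem the abstract describes, not the paper's algorithm: it approximates "do k sets of trapezoids cover the same region?" by sampling a grid of points over a hypothetical bounding box. Trapezoid coordinates and the grid extent are assumptions for the example.

```python
# Sketch only: grid-sampling test of whether k sets of trapezoids cover the same region.
# This is NOT the paper's O(2^(k-2) n^2 (log n + k)) algorithm.

def point_in_trapezoid(px, py, trap):
    """trap = (y_bot, y_top, xl_bot, xr_bot, xl_top, xr_top): horizontal top/bottom edges."""
    y_bot, y_top, xl_bot, xr_bot, xl_top, xr_top = trap
    if not (y_bot <= py <= y_top):
        return False
    t = 0.0 if y_top == y_bot else (py - y_bot) / (y_top - y_bot)
    xl = xl_bot + t * (xl_top - xl_bot)   # left edge interpolated at height py
    xr = xr_bot + t * (xr_top - xr_bot)   # right edge interpolated at height py
    return xl <= px <= xr

def coverage_mask(traps, xs, ys):
    """Set of sample points that fall inside the union of the trapezoids."""
    return frozenset((i, j)
                     for i, px in enumerate(xs)
                     for j, py in enumerate(ys)
                     if any(point_in_trapezoid(px, py, t) for t in traps))

def same_scene(trap_sets, samples=200):
    """True if all k sets appear to cover the same region (up to grid resolution)."""
    xs = [i * 10.0 / samples for i in range(samples + 1)]   # fixed 10x10 bounding box (assumed)
    ys = [j * 10.0 / samples for j in range(samples + 1)]
    masks = [coverage_mask(s, xs, ys) for s in trap_sets]
    return all(m == masks[0] for m in masks[1:])

# Example: the same square partitioned two different ways still compares equal.
set_a = [(0, 2, 0, 4, 0, 4), (2, 4, 0, 4, 0, 4)]   # split horizontally
set_b = [(0, 4, 0, 2, 0, 2), (0, 4, 2, 4, 2, 4)]   # split vertically
print(same_scene([set_a, set_b]))                   # True
```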

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess return from the financial market. In general, market timing means determining when to buy and sell in order to obtain excess return from trading. In many market timing systems, trading rules have been used as the engine that generates trade signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, through its control function, it does not generate a trade signal when the pattern of the market is uncertain. Numeric data must be discretized for rough set analysis, because rough sets accept only categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization finds categorical values by naïve scaling of the data and then finds optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect trading performance when rough set analysis is used. In this study, we compare stock market timing models that use rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
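A small sketch of equal frequency scaling, one of the four discretization methods the abstract lists: cuts are placed so that roughly the same number of observations falls into each interval. The indicator values below are hypothetical.

```python
# Minimal sketch of equal-frequency discretization for rough set analysis.

def equal_frequency_cuts(values, n_intervals):
    """Return cut points that split the sorted values into n_intervals bins
    with approximately equal counts."""
    ordered = sorted(values)
    n = len(ordered)
    cuts = []
    for k in range(1, n_intervals):
        idx = round(k * n / n_intervals)
        cuts.append(ordered[min(idx, n - 1)])
    return cuts

def discretize(value, cuts):
    """Map a numeric value to the index of its interval (a categorical label)."""
    for i, c in enumerate(cuts):
        if value < c:
            return i
    return len(cuts)

# Example: a technical-indicator series discretized into 3 categories.
indicator = [0.8, 1.2, 0.3, 2.5, 1.9, 0.7, 3.1, 1.1, 0.5, 2.2]
cuts = equal_frequency_cuts(indicator, 3)
labels = [discretize(v, cuts) for v in indicator]
print(cuts, labels)
```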

Fine-scalable SPIHT Hardware Design for Frame Memory Compression in Video Codec

  • Kim, Sunwoong;Jang, Ji Hun;Lee, Hyuk-Jae;Rhee, Chae Eun
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.17 no.3
    • /
    • pp.446-457
    • /
    • 2017
  • In order to reduce the size of frame memory or bus bandwidth, frame memory compression (FMC) recompresses reconstructed or reference frames of video codecs. This paper proposes a novel FMC design based on discrete wavelet transform (DWT) - set partitioning in hierarchical trees (SPIHT), which supports fine-scalable throughput and is area-efficient. In the proposed design, multiple cores with small block sizes are used in parallel instead of a single core with a large block size, and an appropriate pipelining schedule is proposed. Compared to the previous design, the proposed design achieves a processing speed closer to the target system speed and is therefore more efficient in hardware utilization. In addition, a scheme is proposed in which two passes of SPIHT are merged into one pass, called the merged refinement pass (MRP). As the number of shifters decreases and the bit-width of the remaining shifters is reduced, the size of the SPIHT hardware decreases significantly. The proposed FMC encoder and decoder designs achieve throughputs of 4,448 and 4,000 Mpixels/s, respectively, and their gate counts are 76.5K and 107.8K. When the proposed design is applied to the high efficiency video codec (HEVC), it achieves 1.96% lower average BDBR and 0.05 dB higher average BDPSNR than the previous FMC design.
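The paper describes a hardware pipeline; purely as background, here is a tiny software sketch of the DWT front end that typically feeds a SPIHT coder, using a one-level Haar transform on a hypothetical 4x4 block. It is not the paper's design, its MRP scheme, or its block size.

```python
# Illustrative one-level 2D Haar DWT: the LL band (top-left quadrant of the output)
# holds the coefficients a SPIHT coder would encode first.

def haar_1d(row):
    """One Haar level: pairwise averages (low band) followed by differences (high band)."""
    low = [(row[2 * i] + row[2 * i + 1]) / 2 for i in range(len(row) // 2)]
    high = [(row[2 * i] - row[2 * i + 1]) / 2 for i in range(len(row) // 2)]
    return low + high

def haar_2d(block):
    """One 2D level: transform every row, then every column."""
    rows = [haar_1d(r) for r in block]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

block = [[52, 55, 61, 66],
         [70, 61, 64, 73],
         [63, 59, 55, 90],
         [67, 61, 68, 104]]          # hypothetical pixel values
for r in haar_2d(block):
    print(r)
```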

Effects of Single Nucleotide Polymorphism Marker Density on Haplotype Block Partition

  • Kim, Sun Ah;Yoo, Yun Joo
    • Genomics & Informatics
    • /
    • v.14 no.4
    • /
    • pp.196-204
    • /
    • 2016
  • Many researchers have found that one of the most important characteristics of the structure of linkage disequilibrium is that the human genome can be divided into non-overlapping block partitions in which only a small number of haplotypes are observed. The location and distribution of haplotype blocks can be seen as a population property influenced by population genetic events such as selection, mutation, recombination, and population structure. In this study, we investigate the effects of marker density, relative to the full set of all polymorphisms in a region, on the results of haplotype partitioning for five popular haplotype block partition methods: three methods in Haploview (confidence interval, four gamete test, and solid spine), MIG++ as implemented in PLINK 1.9, and S-MIG++. We used several experimental datasets obtained by sampling subsets of single nucleotide polymorphism (SNP) markers of the chromosome 22 region in the 1000 Genomes Project data and the HapMap phase 3 data to compare the haplotype block partitions produced by the five methods. As the sampling ratio decreases to 20% of the original SNP markers, the total number of haplotype blocks decreases and the length of haplotype blocks increases for all algorithms. When we examined the marker-independence of the haplotype block locations constructed from datasets of different densities, the results using fewer than 50% of the SNP markers were very different from the results using all of them. We conclude that haplotype block construction results should be used and interpreted carefully, depending on the selection of markers and the purpose of the study.
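To make the marker-density experiment concrete, here is a toy sketch: markers are randomly subsampled at different ratios and a deliberately simplified, adjacent-pair r² "block caller" is re-run on each subset. This toy caller is an assumption for illustration; it is none of the five methods compared in the paper, and the simulated haplotypes carry no real LD structure.

```python
import random

def r2(a, b):
    """Squared correlation between two 0/1-coded SNP haplotype vectors."""
    n = len(a)
    pa, pb = sum(a) / n, sum(b) / n
    pab = sum(x * y for x, y in zip(a, b)) / n
    d = pab - pa * pb
    denom = pa * (1 - pa) * pb * (1 - pb)
    return 0.0 if denom == 0 else d * d / denom

def greedy_blocks(snps, threshold=0.8):
    """Toy block caller: group consecutive markers whose adjacent-pair r^2 exceeds the threshold."""
    blocks, start = [], 0
    for i in range(1, len(snps)):
        if r2(snps[i - 1], snps[i]) < threshold:
            blocks.append((start, i - 1))
            start = i
    blocks.append((start, len(snps) - 1))
    return blocks

def subsample(snps, ratio, seed=0):
    """Keep a sorted random subset of markers (e.g. ratio=0.2 keeps 20%)."""
    random.seed(seed)
    keep = sorted(random.sample(range(len(snps)), max(2, int(ratio * len(snps)))))
    return [snps[i] for i in keep]

random.seed(1)
snps = [[random.randint(0, 1) for _ in range(60)] for _ in range(50)]   # 50 markers x 60 haplotypes
for ratio in (1.0, 0.5, 0.2):
    sub = subsample(snps, ratio)
    print(ratio, len(sub), len(greedy_blocks(sub)))
```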

A Comparative Study of Cluster Analysis Programs Using Artificial Data (인위적 데이터를 이용한 군집분석 프로그램간의 비교에 대한 연구)

  • 김성호;백승익
    • Journal of Intelligence and Information Systems
    • /
    • v.7 no.2
    • /
    • pp.35-49
    • /
    • 2001
  • Over the years, cluster analysis has become a popular tool for marketing and segmentation researchers. There are various methods for cluster analysis; among them, K-means partitioning is the most popular segmentation method. However, because cluster analysis is very sensitive to the initial configuration of the data set at hand, selecting an appropriate starting configuration that is comparable with the clustering of the whole data becomes an important issue for improving the reliability of the clustering results. Programs for K-means cluster analysis employ various methods to choose the initial seeds and compute the centroids of clusters. In this paper, we suggest a methodology for evaluating clustering programs. Furthermore, to explore the usability of the methodology, we evaluate four clustering programs with it.
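A minimal sketch of the sensitivity the abstract refers to: the same synthetic data clustered by plain K-means from different random starting configurations can end in different partitions with different within-cluster sums of squares. The data and seeding scheme are assumptions for illustration, not the paper's evaluation methodology.

```python
import random

def kmeans_sse(points, k, seed):
    """Run plain K-means from k random seeds and return the within-cluster SSE."""
    random.seed(seed)
    centers = random.sample(points, k)                      # initial configuration
    for _ in range(50):                                     # fixed number of iterations
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[nearest].append(p)
        centers = [(sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
                   if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return sum(min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers) for p in points)

# Three artificial, well-separated point clouds (known true structure).
random.seed(0)
points = [(random.gauss(cx, 0.5), random.gauss(cy, 0.5))
          for cx, cy in [(0, 0), (5, 0), (0, 5)] for _ in range(50)]
print([round(kmeans_sse(points, 3, s), 1) for s in range(5)])   # SSE may differ across seeds
```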


Embedded Compression Codec Algorithm for Motion Compensated Wavelet Video Coding System (움직임 보상된 웨이블릿 기반의 비디오 코딩 시스템에 적용 가능한 임베디드 압축 코덱 알고리즘)

  • Kim, Song-Ju
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.3
    • /
    • pp.77-83
    • /
    • 2012
  • In this paper, a low-complexity embedded compression (EC) codec algorithm for the wavelet video coder is applied to reduce excessive external memory requirements. The EC algorithm achieves a fixed compression ratio of 50% under a near-lossless compression constraint. Compared with a direct implementation of the wavelet video encoder of this paper, the EC technique reduces by 50% the memory required for intermediate low-frequency coefficients during the multiple discrete wavelet transform stages. Furthermore, the EC scheme, based on forward adaptive quantization and fixed-length coding, can reduce the bandwidth and buffer size between the DWT and SPIHT stages by 50%. Simulation results show that our EC algorithm introduces average PSNR degradation of only 0.179 and 0.162 dB when the target bit rates of the video coder are 1 and 0.5 bpp, respectively.
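A minimal sketch in the spirit of forward adaptive quantization with fixed-length coding: a per-block quantization step is derived from the block's peak magnitude so that every coefficient fits in a fixed, smaller number of bits, with reconstruction error bounded by the step. The coefficient values and 8-bit code width are assumptions; this is not the paper's codec.

```python
def encode_block(coeffs, code_bits=8):
    """Forward adaptive quantization + fixed-length coding: choose a per-block step
    from the block's peak magnitude so every code fits in code_bits signed bits."""
    peak = max(abs(c) for c in coeffs)
    step = -(-(2 * peak + 1) // (2 ** code_bits))   # ceiling division; step >= 1
    codes = [c // step for c in coeffs]             # fixed-length codes
    return step, codes

def decode_block(step, codes):
    """Reconstruct each coefficient at the centre of its quantization bin."""
    return [q * step + step // 2 for q in codes]

coeffs = [1023, -512, 64, -3, 700, 12, -256, 5]     # hypothetical wavelet coefficients
step, codes = encode_block(coeffs)
print(step, codes)
print(decode_block(step, codes))                    # near-lossless: error bounded by the step
```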

Integrity Assessment Models for Bridge Structures Using Fuzzy Decision-Making (퍼지의사결정을 이용한 교량 구조물의 건전성평가 모델)

  • 안영기;김성칠
    • Journal of the Korea Concrete Institute
    • /
    • v.14 no.6
    • /
    • pp.1022-1031
    • /
    • 2002
  • This paper presents efficient models for bridge structures using CART-ANFIS (classification and regression tree - adaptive neuro-fuzzy inference system). A fuzzy decision tree partitions the input space of a data set into mutually exclusive regions; each region is assigned a label, a value, or an action to characterize its data points. Fuzzy decision trees used for classification problems are often called fuzzy classification trees, and each terminal node contains a label that indicates the predicted class of a given feature vector. In the same vein, decision trees used for regression problems are often called fuzzy regression trees, and the terminal node labels may be constants or equations that specify the predicted output value of a given input vector. Note that CART can select relevant inputs and partition the input space, while ANFIS refines the regression and makes it continuous and smooth everywhere. Thus CART and ANFIS are complementary, and their combination constitutes a solid approach to fuzzy modeling.
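As a minimal sketch of the CART half of this idea only: a tiny regression tree recursively splits a one-dimensional input into mutually exclusive regions and labels each region with a constant predicted value. The data are synthetic, and the ANFIS refinement that smooths the piecewise output is not shown.

```python
def best_split(xs, ys):
    """Return the cut that minimizes the summed squared error of the two sides."""
    best = None
    for cut in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < cut]
        right = [y for x, y in zip(xs, ys) if x >= cut]
        sse = sum((y - sum(left) / len(left)) ** 2 for y in left) + \
              sum((y - sum(right) / len(right)) ** 2 for y in right)
        if best is None or sse < best[1]:
            best = (cut, sse)
    return best[0]

def regression_tree(xs, ys, depth=2):
    """Tiny recursive partitioner: returns nested (cut, left, right) tuples or a leaf mean."""
    if depth == 0 or len(set(xs)) < 2:
        return sum(ys) / len(ys)
    cut = best_split(xs, ys)
    l = [(x, y) for x, y in zip(xs, ys) if x < cut]
    r = [(x, y) for x, y in zip(xs, ys) if x >= cut]
    return (cut,
            regression_tree([p[0] for p in l], [p[1] for p in l], depth - 1),
            regression_tree([p[0] for p in r], [p[1] for p in r], depth - 1))

xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [1, 1, 2, 2, 8, 8, 9, 9]          # a step-like response
print(regression_tree(xs, ys))         # e.g. (4, (2, 1.0, 2.0), (6, 8.0, 9.0))
```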

A dominant hyperrectangle generation technique of classification using IG partitioning (정보이득 분할을 이용한 분류기법의 지배적 초월평면 생성기법)

  • Lee, Hyeong-Il
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.1
    • /
    • pp.149-156
    • /
    • 2014
  • The nested generalized exemplar (NGE) method can improve performance on noisy data while reducing the size of the model; it is a distance-based classification method that uses a matching rule. However, hyperrectangles that cross or overlap one another, which NGE can generate during learning, have been noted as factors that degrade performance. In this paper, we propose the DHGen (Dominant Hyperrectangle Generation) algorithm, which avoids overlapping and crossing between hyperrectangles and splits mixed hyperrectangles using interval weights based on mutual information. DHGen improves classification performance and reduces the number of hyperrectangles by processing the training set incrementally. On benchmark data sets from the UCI Machine Learning Repository, the proposed DHGen exhibits classification performance comparable to k-NN and better results than the EACH system, which implements the NGE theory.
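A minimal sketch of the hyperrectangle bookkeeping behind NGE-style classifiers: each exemplar is generalized to an axis-aligned hyperrectangle, a query is matched to the nearest one, and an overlap test shows the condition the DHGen algorithm is designed to avoid. The classes and coordinates are hypothetical; this is not the DHGen splitting procedure itself.

```python
class Hyperrectangle:
    def __init__(self, lower, upper, label):
        self.lower, self.upper, self.label = lower, upper, label

    def distance(self, point):
        """Zero inside the box; otherwise per-axis distance to the nearest face."""
        return sum(max(lo - p, 0, p - hi) ** 2
                   for p, lo, hi in zip(point, self.lower, self.upper)) ** 0.5

    def overlaps(self, other):
        """True if the two boxes intersect on every axis."""
        return all(lo1 <= hi2 and lo2 <= hi1
                   for lo1, hi1, lo2, hi2 in
                   zip(self.lower, self.upper, other.lower, other.upper))

def classify(point, boxes):
    """Distance-based matching rule: take the label of the closest hyperrectangle."""
    return min(boxes, key=lambda b: b.distance(point)).label

boxes = [Hyperrectangle((0, 0), (2, 2), "A"),
         Hyperrectangle((3, 3), (5, 5), "B")]
print(classify((1, 1), boxes), classify((4, 4.5), boxes))   # A B
print(boxes[0].overlaps(boxes[1]))                           # False
```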

Parallel Spatial Join Method Using Efficient Spatial Relation Partition In Distributed Spatial Database Systems (분산 공간 DBMS에서의 효율적인 공간 릴레이션 분할 기법을 이용한 병렬 공간 죠인 기법)

  • Ko, Ju-Il;Lee, Hwan-Jae;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society
    • /
    • v.4 no.1 s.7
    • /
    • pp.39-46
    • /
    • 2002
  • In distributed spatial database systems, users may issue a query that joins two relations stored at different sites. The sheer volume and complexity of spatial data incur expensive CPU and I/O costs during spatial join processing. This paper presents a new spatial join method that joins two spatial relations in parallel. First, the initial join operation is divided into two distinct operations by partitioning one of the two participating relations by region. These two join operations are assigned to the respective sites and executed simultaneously. Finally, the intermediate result sets from the two join operations are merged into a final result set. This method reduces the number of spatial objects participating in the spatial operations and reduces the scope and number of spatial index scans. It also avoids materializing temporary results by implementing the join algebra operators as iterators. Performance tests show that this join method leads to efficient use of buffer and disk by narrowing down the join region and decreasing the number of spatial objects.
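A minimal sketch of the overall flow the abstract describes: one relation is partitioned by region, the two sub-joins run concurrently, and the results are merged. The "join" here is a plain bounding-box intersection on hypothetical rectangles, not the paper's distributed DBMS operators, and boundary handling across the split is deliberately simplified.

```python
from concurrent.futures import ThreadPoolExecutor

def intersects(a, b):
    """Axis-aligned bounding boxes (xmin, ymin, xmax, ymax)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def spatial_join(r_part, s):
    """Join one partition of R against the whole of S."""
    return [(r, t) for r in r_part for t in s if intersects(r, t)]

def parallel_join(r, s, split_x):
    """Partition R by region at x = split_x and run the two sub-joins in parallel."""
    left = [box for box in r if box[0] < split_x]
    right = [box for box in r if box[0] >= split_x]
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(spatial_join, part, s) for part in (left, right)]
        return [pair for f in futures for pair in f.result()]   # merge the result sets

R = [(0, 0, 2, 2), (5, 5, 7, 7), (8, 1, 9, 3)]
S = [(1, 1, 3, 3), (6, 6, 8, 8)]
print(parallel_join(R, S, split_x=4))
```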


The optimal balance between sexual and asexual reproduction in variable environments: a systematic review

  • Yang, Yun Young;Kim, Jae Geun
    • Journal of Ecology and Environment
    • /
    • v.40 no.2
    • /
    • pp.89-106
    • /
    • 2016
  • Many plant species have two modes of reproduction: sexual and asexual. Both modes have often been viewed as adaptations to temporally or spatially variable environments. A plant should adjust partitioning between the two modes to match changes in their estimated success. In perennial plants, habitats favorable in soil nutrients or water content tend to promote clonal growth over sexual reproduction. In contrast, under high light-quantity conditions, clonal plants tend to allocate more biomass to sexual reproduction and less to clonal propagation. On the other hand, plants with both chasmogamous and cleistogamous flowers have a greater opportunity to ensure some seed set under stressful environmental conditions such as low light, low soil nutrients, or low soil moisture. Vegetative reproduction is considered to confer high competitive ability and to be the major means of expanding an established population of perennial plants, whereas cleistogamous reproduction serves as insurance for persisting in stressful sites because of its robustness. Chasmogamous reproduction mainly reinforces established populations and founds new ones. Therefore, the functions of sexual and asexual propagules of perennial or annual plants differ from each other, and these traits determine a propagule's success at a particular position along any environmental gradient. Eventually, if environmental resources or stress levels change in either space or time, species composition will probably also change. The reason why plants differ in their favored reproduction modes under each environmental condition may lie in their specific realized niches.