• Title/Summary/Keyword: Repeated clustering

Repeated Clustering to Improve the Discrimination of Typical Daily Load Profile

  • Kim, Young-Il; Ko, Jong-Min; Song, Jae-Ju; Choi, Hoon
    • Journal of Electrical Engineering and Technology, v.7 no.3, pp.281-287, 2012
  • The customer load profile clustering method is used to build the TDLP (Typical Daily Load Profile) that estimates the quarter-hourly load profile of non-AMR (Automatic Meter Reading) customers. This study examines how the repeated clustering method improves the ability to discriminate among the TDLPs of the clusters. The k-means algorithm is a well-known clustering technique in data mining. Repeated clustering splits a cluster into sub-clusters with the k-means algorithm, selects the sub-cluster with the maximum average error, and repeats the split until the target number of clusters is reached.
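
A minimal sketch of the repeated-splitting idea described in this abstract, assuming a 2-way k-means split and the mean Euclidean distance to the centroid as the "average error"; the function name and parameters are illustrative, not taken from the paper.

    import numpy as np
    from sklearn.cluster import KMeans

    def repeated_clustering(profiles, target_clusters, split_k=2, seed=0):
        """profiles: (n_customers, 96) array of quarter-hourly load profiles."""
        clusters = [np.arange(len(profiles))]              # start with one cluster holding every customer
        while len(clusters) < target_clusters:
            # average Euclidean distance to the cluster centroid, used as the "average error"
            errors = [np.linalg.norm(profiles[i] - profiles[i].mean(axis=0), axis=1).mean()
                      if len(i) >= split_k else -np.inf for i in clusters]
            worst = int(np.argmax(errors))
            if errors[worst] == -np.inf:                   # nothing left that is big enough to split
                break
            idx = clusters.pop(worst)
            labels = KMeans(n_clusters=split_k, n_init=10, random_state=seed).fit_predict(profiles[idx])
            clusters.extend(idx[labels == k] for k in range(split_k))
        return clusters                                    # list of customer-index arrays, one per cluster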

Customer Clustering Method Using Repeated Small-sized Clustering to Improve the Classifying Ability of Typical Daily Load Profile (일일 대표 부하패턴의 분별력을 높이기 위한 반복적인 소규모 군집화를 이용한 고객 군집화 방법)

  • Kim, Young-Il; Song, Jae-Ju; Oh, Do-Eun; Jung, Nam-Joon; Yang, Il-Kwon
    • The Transactions of The Korean Institute of Electrical Engineers, v.58 no.11, pp.2269-2274, 2009
  • The customer clustering method is used to build a TDLP (typical daily load profile) for estimating the quarter-hourly load profile of non-AMR (Automatic Meter Reading) customers. In this paper, a repeated small-sized clustering method is proposed to improve the classifying ability of the TDLP. The k-means algorithm is a well-known clustering technique in data mining. To reduce the local-optimum problem of the k-means algorithm, the proposed method clusters the average load profiles into small-sized clusters, selects the cluster with the highest error rate, and repeatedly re-clusters it into small-sized clusters to minimize the effect of local optima.

Repeated K-means Clustering Algorithm For Radar Sorting (레이더 군집화를 위한 반복 K-means 클러스터링 알고리즘)

  • Dong Hyun Park; Dong-ho Seo; Jee-hyeon Baek; Won-jin Lee; Dong Eui Chang
    • Journal of the Korea Institute of Military Science and Technology, v.26 no.5, pp.384-391, 2023
  • In modern electronic warfare, a large number of radar emitters are in operation, so radar receivers pick up high-density streams of signal pulses that arrive simultaneously. To analyze radar signals accurately and identify enemies, sorting these high-density radar signals before analysis is very important. Recently, machine learning algorithms, in particular K-means clustering, have been studied as a way to improve the accuracy of radar signal sorting. One challenge faced by these studies is that the clustering results can vary depending on how the initial points are selected and how the number of clusters is set. This paper introduces a repeated K-means clustering algorithm that aims to cluster all data accurately by identifying and addressing false clusters in the radar sorting problem. To verify the performance of the proposed algorithm, experiments are conducted on simulated signals produced by a signal generator.
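
The abstract does not spell out how false clusters are detected, so the sketch below assumes a simple intra-cluster spread threshold: clusters whose spread is too large are treated as false and split again with K-means. All names and thresholds are illustrative, not the paper's criteria.

    import numpy as np
    from sklearn.cluster import KMeans

    def repeated_kmeans(pulses, k, spread_threshold, max_rounds=5, seed=0):
        """pulses: (n_pulses, n_features) array, e.g. RF, pulse width, DOA per pulse."""
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(pulses)
        next_label = k
        for _ in range(max_rounds):
            refined = False
            for c in np.unique(labels):
                members = np.where(labels == c)[0]
                if len(members) < 2:
                    continue
                centroid = pulses[members].mean(axis=0)
                spread = np.linalg.norm(pulses[members] - centroid, axis=1).mean()
                if spread > spread_threshold:              # assumed criterion for a "false" cluster
                    sub = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(pulses[members])
                    labels[members[sub == 1]] = next_label # split the false cluster in two
                    next_label += 1
                    refined = True
            if not refined:                                # stop once every cluster looks tight enough
                break
        return labels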

Imputation method for missing data based on clustering and measure of property (군집화 및 특성도를 이용한 결측치 대체 방법)

  • Kim, Sunghyun; Kim, Dongjae
    • The Korean Journal of Applied Statistics, v.31 no.1, pp.29-40, 2018
  • There are various reasons for missing values when collecting data. Missing values influence the analysis and its results, so various methods of handling them have been studied. In repeated measurement data, values at later time points may be affected by the value at the initial time point; however, existing methods do not impute missing values using this idea. In this study we therefore propose a new missing value imputation method that applies clustering to the initial time point of the repeated measurement data and uses the measure of property proposed by Kim and Kim (The Korean Communications in Statistics, 30, 463-473, 2017). Monte Carlo simulations are used to compare the performance of the established methods and the proposed methods on repeated measurement data.
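
As a rough illustration of the idea only, the sketch below clusters subjects on their initial time-point values and fills missing later values with the within-cluster mean at each time point; it omits the measure of property of Kim and Kim (2017), so it is not the published method, and all names and defaults are assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    def impute_by_initial_clustering(data, n_clusters=3, seed=0):
        """data: (n_subjects, n_timepoints) float array with np.nan for missing values;
        the initial time point (first column) is assumed fully observed."""
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(data[:, [0]])
        imputed = data.copy()
        for c in range(n_clusters):
            rows = labels == c
            cluster_means = np.nanmean(data[rows], axis=0)   # per-time-point mean within the cluster
            missing = np.isnan(imputed) & rows[:, None]
            imputed[missing] = cluster_means[np.where(missing)[1]]
        return imputed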

A Clustering Technique using Common Structures of XML Documents (XML 문서의 공통 구조를 이용한 클러스터링 기법)

  • Hwang, Jeong-Hee; Ryu, Keun-Ho
    • Journal of KIISE: Databases, v.32 no.6, pp.650-661, 2005
  • As the Internet grows, the use of XML, a standard for semi-structured documents, is increasing, and there is ongoing work on the integration and retrieval of XML documents. The basis of efficient integration and retrieval is to cluster XML documents with similar structure. Conventional XML clustering approaches use hierarchical clustering algorithms that produce the demanded number of clusters through repeated merging, but they have problems: it is difficult to compute the similarity between XML documents, and comparing similarities repeatedly is time-consuming. To address this, we use a clustering algorithm for transactional data that scales to large data sets. In this paper we use common structures from XML documents that do not have a DTD or schema. To use the common structures of an XML document, we extract representative structures by decomposing the tree model that expresses the document, and we perform clustering with the extracted structures. Finally, we show the efficiency of the proposed method by comparing and analyzing it against the previous method.
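
One way to picture "clustering by common structure" is to treat each document's set of element paths as a transaction and group documents whose path sets overlap strongly. The Jaccard threshold and the greedy assignment below are illustrative assumptions, not the algorithm from the paper.

    import xml.etree.ElementTree as ET

    def element_paths(xml_text):
        """Return the set of root-to-node element paths, e.g. {'book', 'book/title', ...}."""
        root = ET.fromstring(xml_text)
        paths = set()
        def walk(node, prefix):
            path = f"{prefix}/{node.tag}" if prefix else node.tag
            paths.add(path)
            for child in node:
                walk(child, path)
        walk(root, "")
        return paths

    def cluster_by_structure(documents, threshold=0.6):
        clusters = []                                       # each entry: [common_paths, [doc_ids]]
        for doc_id, text in enumerate(documents):
            paths = element_paths(text)
            for common, members in clusters:
                jaccard = len(paths & common) / len(paths | common)
                if jaccard >= threshold:
                    members.append(doc_id)
                    common &= paths                         # keep only the structure shared by all members
                    break
            else:
                clusters.append([set(paths), [doc_id]])
        return clusters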

An Energy-Efficient Clustering Using Division of Cluster in Wireless Sensor Network (무선 센서 네트워크에서 클러스터의 분할을 이용한 에너지 효율적 클러스터링)

  • Kim, Jong-Ki; Kim, Yoeng-Won
    • Journal of Internet Computing and Services, v.9 no.4, pp.43-50, 2008
  • Various studies are being conducted to achieve efficient routing and reduce energy consumption in wireless sensor networks, where replacing the energy supply is difficult. Among routing mechanisms, the clustering technique is known to be the most efficient. The clustering technique consists of cluster construction and data transmission. Cluster construction is repeated at regular intervals in order to equalize energy consumption among the sensor nodes in a cluster. The algorithms for selecting a cluster head node and arranging the cluster member nodes optimized for that head node are complex and require high energy consumption. Furthermore, the energy consumed for data transmission is proportional to $d^2$ below the crossover distance and to $d^4$ above it. This paper proposes a means of reducing energy consumption by increasing the efficiency of the regularly repeated cluster construction step. The proposed approach keeps the number of sensor nodes in a cluster roughly constant by equally partitioning, with node density in mind, the region in which nodes are allocated during cluster construction, and it reduces energy consumption by selecting head nodes near the center of each cluster. Simulation experiments confirm that the proposed approach consumes less energy than the LEACH algorithm.
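
The $d^2$/$d^4$ behaviour comes from the first-order radio energy model commonly used in LEACH-style analyses; a small sketch with typical textbook constants (not values taken from this paper) is shown below.

    import math

    E_ELEC = 50e-9            # J/bit spent in the transmit/receive electronics
    EPS_FS = 10e-12           # J/bit/m^2, free-space amplifier coefficient
    EPS_MP = 0.0013e-12       # J/bit/m^4, multipath amplifier coefficient
    D_CROSSOVER = math.sqrt(EPS_FS / EPS_MP)   # roughly 87 m with these constants

    def tx_energy(bits, d):
        """Energy to transmit `bits` over distance d (metres)."""
        if d < D_CROSSOVER:
            return bits * (E_ELEC + EPS_FS * d ** 2)       # d^2 regime below the crossover distance
        return bits * (E_ELEC + EPS_MP * d ** 4)           # d^4 regime above it

    def rx_energy(bits):
        return bits * E_ELEC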

Hierarchical Overlapping Clustering to Detect Complex Concepts (중복을 허용한 계층적 클러스터링에 의한 복합 개념 탐지 방법)

  • Hong, Su-Jeong; Choi, Joong-Min
    • Journal of Intelligence and Information Systems, v.17 no.1, pp.111-125, 2011
  • Clustering is a process of grouping similar or relevant documents into a cluster and assigning a meaningful concept to the cluster. By narrowing the search down to the documents belonging to related clusters, clustering enables fast and accurate retrieval of relevant documents. Effective clustering requires techniques for identifying similar documents and grouping them into a cluster, and for discovering the concept most relevant to the cluster. One problem that often appears in this context is the detection of a complex concept that overlaps several simple concepts at the same hierarchical level. Previous clustering methods could neither identify and represent a complex concept that belongs to several different clusters at the same level of the concept hierarchy, nor validate the semantic hierarchical relationship between a complex concept and each of the simple concepts. To solve these problems, this paper proposes a new clustering method that identifies and represents complex concepts efficiently. We developed the Hierarchical Overlapping Clustering (HOC) algorithm, which modifies the traditional agglomerative hierarchical clustering algorithm to allow overlapping clusters at the same level of the concept hierarchy. The HOC algorithm represents the clustering result not as a tree but as a lattice in order to detect complex concepts. We developed a system that employs the HOC algorithm to carry out complex concept detection. The system operates in three phases: 1) preprocessing of documents, 2) clustering with the HOC algorithm, and 3) validation of the semantic hierarchical relationships among the concepts in the lattice obtained from clustering. The preprocessing phase represents each document as x-y coordinates in a 2-dimensional space based on the weights of the terms appearing in the documents. First, stop-word removal and stemming are applied to extract index terms. Each index term is then assigned a TF-IDF weight, and the x-y coordinates of each document are determined by combining the TF-IDF values of its terms. The clustering phase uses the HOC algorithm, in which the similarity between documents is computed as the Euclidean distance. Initially, a cluster is generated for each document by grouping the documents closest to it. Then the distance between every pair of clusters is measured, and the closest clusters are merged into a new cluster; this process is repeated until the root cluster is generated. In the validation phase, feature selection is applied to check whether the cluster concepts built by the HOC algorithm have meaningful hierarchical relationships. Feature selection extracts key features from a document by identifying and weighting its important, representative terms. To select key features correctly, a method is needed to determine how much each term contributes to the class of a document. Among several such methods, this paper adopts the $\chi^2$ statistic, which measures the degree of dependency of a term t on a class c and represents the relationship between t and c as a numerical value. To demonstrate the effectiveness of the HOC algorithm, a series of performance evaluations is carried out using the well-known Reuters-21578 news collection. The results show that the HOC algorithm contributes greatly to detecting and producing complex concepts by generating the concept hierarchy as a lattice structure.
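
For reference, the $\chi^2$ term-class dependency used in the validation phase is the standard 2x2 contingency-table statistic from text categorization; a small sketch with illustrative variable names follows.

    def chi_square(a, b, c, d):
        """a: docs of the class containing term t, b: docs of other classes containing t,
        c: docs of the class without t,          d: docs of other classes without t."""
        n = a + b + c + d
        denominator = (a + c) * (b + d) * (a + b) * (c + d)
        return n * (a * d - b * c) ** 2 / denominator if denominator else 0.0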

Document Clustering based on Level-wise Stop-word Removing for an Efficient Document Searching (효율적인 문서검색을 위한 레벨별 불용어 제거에 기반한 문서 클러스터링)

  • Joo, Kil Hong; Lee, Won Suk
    • The Journal of Korean Association of Computer Education, v.11 no.3, pp.67-80, 2008
  • Various document categorization methods have been studied to provide users with an effective way of browsing a large collection of documents. They automatically organize a set of documents into groups of semantically similar documents. However, fully automatic categorization suffers from low accuracy. This paper proposes a semi-automatic document categorization method based on the domains of the documents. Each document initially belongs to a domain, and all the documents in each domain are recursively clustered in a level-wise manner so that a category tree of the documents can be built. To find the clusters of documents, the stop-words of each document are removed based on the document frequency of each word in the domain. For each cluster, its cluster keywords are extracted from the keywords common to its documents and are used as the category of the domain. Each cluster is then regarded as a more specific domain and the same procedure is repeated recursively until it is terminated by the user. At each level of clustering, the user can adjust any incorrectly clustered documents to improve the accuracy of the categorization.
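
A minimal sketch of the level-wise stop-word removal step, assuming that a word is treated as a domain stop-word when its document frequency within the domain exceeds a cutoff ratio; the cutoff value is an illustrative assumption, not the paper's setting.

    from collections import Counter

    def remove_domain_stopwords(domain_docs, df_cutoff=0.8):
        """domain_docs: list of token lists belonging to a single domain."""
        df = Counter()
        for tokens in domain_docs:
            df.update(set(tokens))                          # count each word once per document
        stopwords = {w for w, n in df.items() if n / len(domain_docs) >= df_cutoff}
        cleaned = [[t for t in tokens if t not in stopwords] for tokens in domain_docs]
        return cleaned, stopwords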

Adaptive Clustering based Sparse Representation for Image Denoising (적응 군집화 기반 희소 부호화에 의한 영상 잡음 제거)

  • Kim, Seehyun
    • Journal of IKEEE, v.23 no.3, pp.910-916, 2019
  • The non-local similarity of natural images is one of the most widely exploited features in image applications: distinctive edges, textures, and patterns are repeated over the entire image. Once similar image blocks are classified into a cluster, representative features of the blocks can be extracted from that cluster, and the larger the cluster, the better the additive white noise can be separated. Denoising, which suppresses additive noise, is one of the major research topics in image processing. In this paper, a denoising algorithm is proposed that first clusters the noisy image blocks based on similarity, then extracts the features of each cluster, and finally recovers the original image. Experiments on several images under various noise strengths show that the proposed algorithm recovers details such as edges, textures, and patterns while outperforming previous methods in terms of PSNR when removing additive Gaussian noise.
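
As a generic stand-in for the clustering-plus-sparse-coding pipeline (not the paper's exact algorithm), the sketch below groups image patches with k-means and reconstructs each cluster from a few principal components; the patch representation, cluster count, and number of kept components are all assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    def denoise_patches(patches, n_clusters=64, keep=8, seed=0):
        """patches: (n_patches, patch_dim) float array of vectorised noisy image blocks."""
        labels = KMeans(n_clusters=n_clusters, n_init=4, random_state=seed).fit_predict(patches)
        out = np.empty_like(patches)
        for c in range(n_clusters):
            rows = labels == c
            if not rows.any():
                continue
            block = patches[rows]
            mean = block.mean(axis=0)
            _, _, vt = np.linalg.svd(block - mean, full_matrices=False)
            coeffs = (block - mean) @ vt[:keep].T          # project onto the leading components
            out[rows] = coeffs @ vt[:keep] + mean          # reconstruct from the truncated code
        return out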

News Topic Extraction based on Word Similarity (단어 유사도를 이용한 뉴스 토픽 추출)

  • Jin, Dongxu; Lee, Soowon
    • Journal of KIISE, v.44 no.11, pp.1138-1148, 2017
  • Topic extraction is a technology that automatically extracts a set of topics from a set of documents, and it has been a major research topic in natural language processing. Representative topic extraction methods include Latent Dirichlet Allocation (LDA) and word-clustering-based methods. However, these methods suffer from repeated topics and mixed topics. The repeated-topic problem occurs when a single topic is extracted as several topics, while the mixed-topic problem occurs when several topics are mixed within a single extracted topic. To solve these problems, this study proposes a method that extracts topics using LDA in a way that is robust against the repeated-topic problem, and then corrects the extracted topics by separating and merging them based on the similarity between words. Experiments show that the proposed method performs better than the conventional LDA method.
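
A hedged sketch of the correction step: fit LDA, then merge topics whose word distributions are nearly identical (the repeated-topic case). The cosine threshold and the simple averaging merge are illustrative assumptions; the paper's actual separation and merging criteria are not reproduced here.

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    def extract_and_merge_topics(texts, n_topics=20, merge_threshold=0.9, seed=0):
        counts = CountVectorizer(max_features=5000).fit_transform(texts)
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed).fit(counts)
        topics = lda.components_ / lda.components_.sum(axis=1, keepdims=True)  # topic-word distributions
        merged = []
        for t in topics:
            for i, m in enumerate(merged):
                cos = float(t @ m) / (np.linalg.norm(t) * np.linalg.norm(m))
                if cos >= merge_threshold:                  # nearly identical distributions: a repeated topic
                    merged[i] = (m + t) / 2
                    break
            else:
                merged.append(t)
        return np.array(merged)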