• Title/Abstract/Keyword: High-dimensional datasets

Search results: 46 items (processing time: 0.022 seconds)

Comprehensive review on Clustering Techniques and its application on High Dimensional Data

  • Alam, Afroj;Muqeem, Mohd;Ahmad, Sultan
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 21, No. 6
    • /
    • pp.237-244
    • /
    • 2021
  • Clustering is one of the most powerful unsupervised machine learning techniques for dividing instances into homogeneous groups, called clusters. It is mainly used to generate good-quality clusters through which hidden patterns and knowledge can be discovered in large datasets. It has wide application in fields such as medicine, healthcare, gene expression analysis, image processing, agriculture, fraud detection, and profitability analysis. The goal of this paper is to explore both hierarchical and partitioning clustering and to examine their problems along with various approaches to solving them. Among the different clustering algorithms, K-means is often preferred because of its linear time complexity. The paper also discusses data mining on high-dimensional datasets, the problems it raises, and existing approaches to addressing them.
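
The review above singles out K-means for its linear per-iteration cost. As a point of reference, here is a minimal sketch (not taken from the paper) of partitioning a synthetic high-dimensional dataset with scikit-learn's K-means; the dataset shape and parameters are illustrative assumptions.

```python
# Minimal K-means sketch on a synthetic high-dimensional dataset.
# Illustrative only; dataset shape and parameters are assumptions, not from the paper.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# 1,000 instances with 100 features grouped around 5 centers.
X, _ = make_blobs(n_samples=1000, n_features=100, centers=5, random_state=0)

# K-means partitions the instances into k homogeneous clusters;
# each iteration costs O(n * k * d), i.e. linear in the number of instances.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# Silhouette score gives a rough measure of cluster quality.
print("silhouette:", silhouette_score(X, labels))
```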

A Density Peak Clustering Algorithm Based on Information Bottleneck

  • Yongli Liu;Congcong Zhao;Hao Chao
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 6
    • /
    • pp.778-790
    • /
    • 2023
  • Although density peak clustering can often easily yield excellent results, there is still room for improvement when dealing with complex, high-dimensional datasets. One of the main limitations of this algorithm is its reliance on geometric distance as the sole similarity measurement. To address this limitation, we draw inspiration from the information bottleneck theory and propose a novel density peak clustering algorithm that incorporates this theory as a similarity measure. Specifically, our algorithm utilizes the joint probability distribution between data objects and feature information, and employs the loss of mutual information as the measurement standard. This approach not only eliminates the potential for subjective error in selecting a similarity method, but also enhances performance on datasets with multiple centers and high dimensionality. To evaluate the effectiveness of our algorithm, we conducted experiments using ten carefully selected datasets and compared the results with three other algorithms. The experimental results demonstrate that our information bottleneck-based density peaks clustering (IBDPC) algorithm consistently achieves high levels of accuracy, highlighting its potential as a valuable tool for data clustering tasks.
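
For context, the sketch below implements the classic density peak clustering scaffold (local density rho, distance-to-higher-density delta, peak selection, label propagation) that IBDPC builds on. It uses plain Euclidean distance; the paper's contribution, replacing that similarity with an information-bottleneck loss of mutual information, is not reproduced here, and the cutoff and cluster count are illustrative.

```python
import numpy as np

def density_peak_clustering(X, dc, n_clusters):
    """Classic DPC with Euclidean distance (IBDPC swaps in an IB-based similarity)."""
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

    # Local density: number of neighbours within the cutoff distance dc.
    rho = (dist < dc).sum(axis=1) - 1

    # Delta: distance to the nearest point with higher density.
    order = np.argsort(-rho)                     # indices sorted by decreasing density
    delta = np.full(n, dist.max())
    nearest_higher = np.full(n, -1)
    for rank in range(1, n):
        i, higher = order[rank], order[:rank]
        j = higher[np.argmin(dist[i, higher])]
        delta[i], nearest_higher[i] = dist[i, j], j

    # Cluster centres: points with both high density and high delta.
    centers = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)

    # Remaining points inherit the label of their nearest higher-density neighbour.
    for i in order:
        if labels[i] == -1:
            labels[i] = labels[nearest_higher[i]] if nearest_higher[i] >= 0 else 0
    return labels

# Example: labels = density_peak_clustering(X, dc=0.5, n_clusters=3)
```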

Density-based Outlier Detection in Multi-dimensional Datasets

  • Wang, Xite;Cao, Zhixin;Zhan, Rongjuan;Bai, Mei;Ma, Qian;Li, Guanyu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 16, No. 12
    • /
    • pp.3815-3835
    • /
    • 2022
  • Density-based outlier detection is one of the hot issues in data mining: a point is identified as an outlier based on the density of the points near it. Existing density-based detection algorithms have high time complexity. To reduce it, a new outlier detection algorithm, DODMD (Density-based Outlier Detection in Multidimensional Datasets), is proposed. Firstly, on the basis of the ZH-tree, the concept of a micro-cluster is introduced: each leaf node is regarded as a micro-cluster, and micro-clusters are processed as units to enable batch filtering. To obtain a set of n approximate outliers quickly, a greedy method is used to compute a bound on the LOF, and the minimum value is marked as LOFmin. Secondly, candidate points are filtered by LOFmin, the real outliers are computed, and the result set is updated to tighten the bound. Finally, the accuracy and efficiency of the DODMD algorithm are verified on real and synthetic datasets.
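
The abstract is built around top-n LOF outlier queries. The sketch below shows only the baseline LOF ranking using scikit-learn; the ZH-tree micro-clusters and LOFmin pruning that make DODMD efficient are not reproduced, and the data and parameters are illustrative.

```python
# Baseline top-n LOF outlier query of the kind DODMD accelerates.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def top_n_lof_outliers(X, n_outliers=10, n_neighbors=20):
    lof = LocalOutlierFactor(n_neighbors=n_neighbors)
    lof.fit(X)
    # negative_outlier_factor_ is -LOF; a larger LOF means a more outlying point.
    scores = -lof.negative_outlier_factor_
    return np.argsort(-scores)[:n_outliers], scores

# Example: indices of the 10 strongest outliers in a random multi-dimensional dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
idx, scores = top_n_lof_outliers(X)
print(idx)
```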

Three-dimensional human activity recognition by forming a movement polygon using posture skeletal data from depth sensor

  • Vishwakarma, Dinesh Kumar;Jain, Konark
    • ETRI Journal
    • /
    • Vol. 44, No. 2
    • /
    • pp.286-299
    • /
    • 2022
  • Human activity recognition in real time is a challenging task. Recently, a plethora of studies using deep learning architectures has been proposed. Implementing these architectures requires high computing power and massive databases. In contrast, machine learning models based on handcrafted features need less computing power and are very accurate when the features are extracted effectively. In this study, we propose a handcrafted model based on three-dimensional sequential skeleton data. The movement of the human body skeleton over a frame is computed through the joint positions in that frame. The joints of these skeletal frames are projected into two-dimensional space, forming a "movement polygon." These polygons are further transformed into a one-dimensional space by computing amplitudes at different angles from the centroid of the polygons. The feature vector is formed by sampling these amplitudes at different angles. The performance of the algorithm is evaluated using a support vector machine on four public datasets: MSR Action3D, Berkeley MHAD, TST Fall Detection, and NTU-RGB+D; the highest accuracies achieved on these datasets are 94.13%, 93.34%, 95.7%, and 86.8%, respectively. These accuracies are compared with those of similar state-of-the-art methods and show superior performance.
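
The following is a hedged sketch of one plausible reading of the feature construction described above: project the joints of a frame to 2-D, take the centroid of the resulting movement polygon, and sample the amplitude (distance from the centroid) at evenly spaced angles. The function name and the nearest-vertex sampling scheme are illustrative assumptions, not the authors' code.

```python
import numpy as np

def polygon_amplitude_features(joints_2d, n_angles=36):
    """joints_2d: (num_joints, 2) array of projected joint positions for one frame."""
    centroid = joints_2d.mean(axis=0)
    rel = joints_2d - centroid
    radii = np.linalg.norm(rel, axis=1)                 # amplitude of each polygon vertex
    angles = np.mod(np.arctan2(rel[:, 1], rel[:, 0]), 2 * np.pi)

    # Sample the amplitude at n_angles evenly spaced directions by taking the
    # vertex whose angle is closest to each sampled direction.
    sample_angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    diff = np.abs(angles[None, :] - sample_angles[:, None])
    diff = np.minimum(diff, 2 * np.pi - diff)           # wrap-around angular distance
    return radii[np.argmin(diff, axis=1)]

# Per-frame vectors like this could be concatenated per video and classified with
# a support vector machine (e.g., sklearn.svm.SVC), as in the paper's evaluation.
```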

대용량 데이터의 내용 기반 검색을 위한 분산 고차원 색인 구조 (A Distributed High Dimensional Indexing Structure for Content-based Retrieval of Large Scale Data)

  • 최현화;이미영;김영창;장재우;이규철
    • 한국정보과학회논문지:데이타베이스
    • /
    • Vol. 37, No. 5
    • /
    • pp.228-237
    • /
    • 2010
  • Although various index structures for high-dimensional data have been proposed, supporting content-based retrieval of images and videos as an Internet service urgently requires a new high-dimensional index structure that offers high scalability and improved k-nearest neighbor (k-NN) search performance. We therefore propose the distributed vector approximation tree (distributed VA-tree), a distributed index structure built over multiple computing nodes. The distributed VA-tree is a two-level distributed index: a hybrid spill-tree is built from sample data extracted from the large-scale high-dimensional dataset, and a distributed computing node is mapped to each leaf node of the hybrid spill-tree, where a VA-file is built. Search performance is improved by performing parallel k-NN search over the distributed VA-tree constructed across the computing nodes. Experimental results on datasets with different distributions show that the distributed VA-tree performs k-NN search faster than existing highly scalable index structures, with no loss of search accuracy.
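
For illustration, the sketch below shows the scatter-and-merge pattern behind parallel k-NN search over a distributed index: each node answers k-NN over its local partition and the partial results are merged into a global top-k. The real distributed VA-tree routes queries through a hybrid spill-tree and answers local queries with VA-files; here each node is simulated by an exact in-memory search, and the shapes are illustrative.

```python
import heapq
import numpy as np

def local_knn(partition, query, k):
    """Exact k-NN inside one partition (stand-in for a node's VA-file search)."""
    d = np.linalg.norm(partition - query, axis=1)
    idx = np.argsort(d)[:k]
    return list(zip(d[idx], partition[idx]))

def distributed_knn(partitions, query, k):
    # In the real system the per-node searches run in parallel; here they run in a loop.
    candidates = []
    for part in partitions:
        candidates.extend(local_knn(part, query, k))
    # Merge the per-node top-k lists into the global top-k.
    return heapq.nsmallest(k, candidates, key=lambda c: c[0])

rng = np.random.default_rng(0)
partitions = [rng.normal(size=(500, 64)) for _ in range(4)]   # 4 "nodes", 64-dim data
neighbors = distributed_knn(partitions, rng.normal(size=64), k=5)
```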

고차원자료에서의 다중검정의 활용 (Multiple testing and its applications in high-dimension)

  • 장원철
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 24, No. 5
    • /
    • pp.1063-1076
    • /
    • 2013
  • Advances in modern science and technology have ushered in the era of big data. Such big data provide answers to many scientific questions, but they also pose new challenges. High-dimensional data such as microarray data are one of the most common types of big data. This paper introduces global tests and simultaneous (multiple) tests, which are widely used in high-dimensional data analysis, and their applications.
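
As one concrete example of the simultaneous testing the review covers, the sketch below implements the Benjamini-Hochberg false discovery rate procedure in NumPy; the test counts in the example are illustrative assumptions.

```python
import numpy as np

def benjamini_hochberg(pvalues, alpha=0.05):
    """Boolean mask of hypotheses rejected while controlling the FDR at level alpha."""
    p = np.asarray(pvalues)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m          # BH step-up thresholds

    passed = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if passed.any():
        cutoff = np.nonzero(passed)[0].max()              # largest rank meeting its threshold
        rejected[order[: cutoff + 1]] = True              # reject it and all smaller p-values
    return rejected

# Example: 10,000 tests, 100 of which carry a strong signal.
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(size=9900), rng.uniform(0, 1e-4, size=100)])
print(int(benjamini_hochberg(p).sum()), "discoveries at FDR 0.05")
```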

Enhanced Locality Sensitive Clustering in High Dimensional Space

  • Chen, Gang;Gao, Hao-Lin;Li, Bi-Cheng;Hu, Guo-En
    • Transactions on Electrical and Electronic Materials
    • /
    • Vol. 15, No. 3
    • /
    • pp.125-129
    • /
    • 2014
  • A dataset can be clustered by merging the bucket indices that come from the random projection of locality sensitive hashing functions; for this to work, the merging interval must be calculated first. To improve the feasibility of large-scale data clustering in high-dimensional space, we propose an enhanced Locality Sensitive Hashing Clustering Method. Firstly, multiple hashing functions are generated. Secondly, data points are projected to bucket indices. Thirdly, the bucket indices are clustered to obtain class labels. Experimental results on synthetic datasets show that this method achieves high accuracy at much improved clustering speeds, which makes it well suited to clustering data in high-dimensional space.
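
The sketch below illustrates the first two steps described above, generating random-projection hash functions and projecting points to bucket indices, and then simply groups points that share a bucket key. The paper's specific contribution, computing a merging interval and clustering the bucket indices themselves, is not reproduced; widths and seeds are illustrative.

```python
from collections import defaultdict
import numpy as np

def lsh_buckets(X, n_hashes=4, bucket_width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(size=(n_hashes, d))            # random projection directions
    b = rng.uniform(0, bucket_width, size=n_hashes)

    # Each point gets a tuple of bucket indices, one per hash function.
    indices = np.floor((X @ W.T + b) / bucket_width).astype(int)

    buckets = defaultdict(list)
    for i, key in enumerate(map(tuple, indices)):
        buckets[key].append(i)                    # points sharing a key are candidates
    return buckets                                #   for the same cluster

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, size=(50, 20)), rng.normal(5, 0.1, size=(50, 20))])
print(len(lsh_buckets(X)), "buckets for 2 well-separated groups")
```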

ETLi: Efficiently annotated traffic LiDAR dataset using incremental and suggestive annotation

  • Kang, Jungyu;Han, Seung-Jun;Kim, Nahyeon;Min, Kyoung-Wook
    • ETRI Journal
    • /
    • Vol. 43, No. 4
    • /
    • pp.630-639
    • /
    • 2021
  • Autonomous driving requires a computerized perception of the environment for safety and machine-learning evaluation. Recognizing semantic information is difficult, as the objective is to instantly recognize and distinguish items in the environment. Training a model with real-time semantic capability and high reliability requires extensive and specialized datasets. However, generalized datasets are unavailable and are typically difficult to construct for specific tasks. Hence, a light detection and ranging semantic dataset suitable for semantic simultaneous localization and mapping and specialized for autonomous driving is proposed. This dataset is provided in a form that can be easily used by users familiar with existing two-dimensional image datasets, and it contains various weather and light conditions collected from a complex and diverse practical setting. An incremental and suggestive annotation routine is proposed to improve annotation efficiency. A model is trained to simultaneously predict segmentation labels and suggest class-representative frames. Experimental results demonstrate that the proposed algorithm yields a more efficient dataset than uniformly sampled datasets.
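
As a hedged illustration of suggestive annotation, the sketch below ranks unlabeled frames by the entropy of a model's per-point predictions and suggests the most uncertain frames for labeling. The actual ETLi routine selects class-representative frames and may use a different criterion; everything here is an assumption for illustration.

```python
import numpy as np

def suggest_frames(frame_probs, n_suggest=5):
    """frame_probs: list of (num_points, num_classes) softmax outputs, one per frame."""
    scores = []
    for probs in frame_probs:
        # Mean per-point entropy as a frame-level uncertainty score.
        entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
        scores.append(entropy.mean())
    return np.argsort(scores)[::-1][:n_suggest]   # most uncertain frames first

# Suggested frames are annotated, added to the training set, and the segmentation
# model is retrained, following the incremental loop described in the abstract.
```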

Enhancing Gene Expression Classification of Support Vector Machines with Generative Adversarial Networks

  • Huynh, Phuoc-Hai;Nguyen, Van Hoa;Do, Thanh-Nghi
    • Journal of information and communication convergence engineering
    • /
    • Vol. 17, No. 1
    • /
    • pp.14-20
    • /
    • 2019
  • Currently, microarray gene expression data enable effective classification of cancers, which helps address problems relating to cancer causes and treatment regimens. However, the sample size of gene expression data is often small, because microarray studies in humans are expensive. We propose enhancing the gene expression classification of support vector machines with generative adversarial networks (GAN-SVMs). A GAN that generates new data from the original training datasets was implemented and used in conjunction with nonlinear SVMs, which classify gene expression data efficiently. Numerical test results on 20 low-sample-size, very high-dimensional microarray gene expression datasets from the Kent Ridge Biomedical and Array Expression repositories indicate that the model is more accurate than state-of-the-art classification models.
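
The sketch below shows the overall GAN-SVM pipeline in a hedged form: a small per-class generator is trained on the original samples, its synthetic samples are appended to the training set, and a nonlinear (RBF) SVM is fitted on the augmented data. Network sizes, the training schedule, and the per-class augmentation strategy are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

def train_gan(X_real, latent_dim=32, epochs=500, lr=1e-3):
    """Train a tiny GAN on one class's expression profiles (illustrative sizes)."""
    d = X_real.shape[1]
    G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, d))
    D = nn.Sequential(nn.Linear(d, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=lr)
    opt_d = torch.optim.Adam(D.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    real = torch.tensor(X_real, dtype=torch.float32)

    for _ in range(epochs):
        fake = G(torch.randn(len(real), latent_dim))
        # Discriminator step: separate real from generated samples.
        loss_d = bce(D(real), torch.ones(len(real), 1)) + \
                 bce(D(fake.detach()), torch.zeros(len(real), 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Generator step: fool the discriminator.
        loss_g = bce(D(G(torch.randn(len(real), latent_dim))), torch.ones(len(real), 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return G

def augment_and_fit_svm(X_train, y_train, n_synthetic=100, latent_dim=32):
    X_aug, y_aug = [X_train], [y_train]
    for label in np.unique(y_train):
        G = train_gan(X_train[y_train == label], latent_dim)
        with torch.no_grad():
            synth = G(torch.randn(n_synthetic, latent_dim)).numpy()
        X_aug.append(synth)
        y_aug.append(np.full(n_synthetic, label))
    # Nonlinear SVM on the original plus synthetic samples.
    svm = SVC(kernel="rbf", C=1.0, gamma="scale")
    return svm.fit(np.vstack(X_aug), np.concatenate(y_aug))
```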

Cooperative Coevolution Differential Evolution Based on Spark for Large-Scale Optimization Problems

  • Tan, Xujie;Lee, Hyun-Ae;Shin, Seong-Yoon
    • Journal of information and communication convergence engineering
    • /
    • Vol. 19, No. 3
    • /
    • pp.155-160
    • /
    • 2021
  • Differential evolution is an efficient algorithm for solving continuous optimization problems. However, its performance deteriorates rapidly, and the runtime increases exponentially, when differential evolution is applied to large-scale optimization problems. Hence, a novel cooperative coevolution differential evolution algorithm based on Spark (known as SparkDECC) is proposed. SparkDECC uses a divide-and-conquer strategy. First, the large-scale problem is decomposed into several low-dimensional subproblems using the random grouping strategy. Subsequently, each subproblem can be addressed in parallel by exploiting the parallel computation capability of the resilient distributed datasets model in Spark. Finally, the optimal solution of the entire problem is obtained using the cooperation mechanism. The experimental results on 13 high-dimensional benchmark functions show that the new algorithm performs well in terms of speedup and scalability. The effectiveness and applicability of the proposed algorithm are verified.
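
The sketch below illustrates the cooperative coevolution scheme in a serial, hedged form: each cycle randomly groups the dimensions, optimizes each group with a small differential evolution against a shared context vector, and writes the group's best sub-solution back. SparkDECC evaluates the groups in parallel through Spark's resilient distributed datasets, which is not reproduced here; all parameters are illustrative.

```python
import numpy as np

def de_subproblem(f, context, dims, pop, F=0.5, CR=0.9, gens=20, rng=None):
    """Optimize only the coordinates in `dims`; all other coordinates stay fixed."""
    if rng is None:
        rng = np.random.default_rng()

    def fitness(sub):
        x = context.copy()
        x[dims] = sub
        return f(x)

    scores = np.array([fitness(ind) for ind in pop])
    for _ in range(gens):
        for i in range(len(pop)):
            idx = rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = a + F * (b - c)                       # DE/rand/1 mutation
            cross = rng.random(len(dims)) < CR             # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            s = fitness(trial)
            if s < scores[i]:
                pop[i], scores[i] = trial, s
    return pop[np.argmin(scores)]

def sparkdecc_like(f, dim, n_groups=4, pop_size=20, cycles=10, seed=0):
    rng = np.random.default_rng(seed)
    context = rng.uniform(-5, 5, size=dim)                 # shared best-so-far solution
    for _ in range(cycles):
        groups = np.array_split(rng.permutation(dim), n_groups)  # random grouping
        for dims in groups:                                # run in parallel on Spark in the paper
            pop = rng.uniform(-5, 5, size=(pop_size, len(dims)))
            context[dims] = de_subproblem(f, context, dims, pop, rng=rng)
    return context, f(context)

# Example on a 100-dimensional sphere function.
best, value = sparkdecc_like(lambda x: float(np.sum(x ** 2)), dim=100)
```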