• Title/Summary/Keyword: High Dimensionality Data


Integrating Discrete Wavelet Transform and Neural Networks for Prostate Cancer Detection Using Proteomic Data

  • Hwang, Grace J.;Huang, Chuan-Ching;Chen, Ta Jen;Yue, Jack C.;Ivan Chang, Yuan-Chin;Adam, Bao-Ling
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2005.09a
    • /
    • pp.319-324
    • /
    • 2005
  • An integrated approach for prostate cancer detection using proteomic data is presented. Because of the high dimensionality of proteomic data, the discrete wavelet transform (DWT) is used in the first stage for data reduction as well as noise removal. After the DWT, the dimensionality is reduced from 43,556 to 1,599, so each proteomic sample can be represented by 1,599 wavelet coefficients. In the second stage, a voting method selects a common set of wavelet coefficients across all samples, producing a 987-dimensional subspace of wavelet coefficients. In the third stage, the Autoassociator algorithm reduces the dimensionality from 987 to 400. Finally, an artificial neural network (ANN) is applied to the 400-dimensional space for prostate cancer detection. The integrated approach is examined on 9 categories of 2-class experiments, as well as on 3- and 4-class experiments. All experiments were run with 10 repetitions of ten-fold cross-validation (i.e., 10 partitions with 100 runs). For the 9 categories of 2-class experiments, the average testing accuracies are between 81% and 96%; the average testing accuracies of the 3- and 4-way classifications are 85% and 84%, respectively. The integrated approach achieves promising results for the early detection and diagnosis of prostate cancer.

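A minimal sketch of the four-stage pipeline summarized above (DWT, coefficient voting, further reduction, ANN classifier). The data, the wavelet family, the vote counts, and the use of PCA in place of the paper's Autoassociator are all assumptions for illustration, not the authors' exact settings.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(100, 4096))      # stand-in for the 43,556-point spectra
y = rng.integers(0, 2, size=100)          # stand-in class labels

# Stage 1: DWT per sample for compression/denoising (keep approximation coefficients).
def dwt_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return coeffs[0]                       # coarse approximation only

X_dwt = np.array([dwt_features(s) for s in X_raw])

# Stage 2: "voting" -- each sample votes for its largest-magnitude coefficients,
# and the coefficients with the most votes form the common subset.
votes_per_sample = 100                     # assumed
order = np.argsort(-np.abs(X_dwt), axis=1)[:, :votes_per_sample]
votes = np.zeros(X_dwt.shape[1], dtype=int)
for row in order:
    votes[row] += 1
common = np.argsort(-votes)[:80]           # assumed size of the common subset
X_common = X_dwt[:, common]

# Stage 3: further reduction (PCA used here as a stand-in for the Autoassociator).
X_red = PCA(n_components=40).fit_transform(X_common)

# Stage 4: ANN classifier, evaluated with ten-fold cross-validation.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
print(cross_val_score(clf, X_red, y, cv=10).mean())
```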

Current trends in high dimensional massive data analysis (고차원 대용량 자료분석의 현재 동향)

  • Jang, Woncheol;Kim, Gwangsu;Kim, Joungyoun
    • The Korean Journal of Applied Statistics
    • /
    • v.29 no.6
    • /
    • pp.999-1005
    • /
    • 2016
  • The advent of big data brings the opportunity to answer many open scientific questions, but also presents interesting challenges. The main features of contemporary datasets are high dimensionality and massive sample size. In this paper, we give an overview of the major challenges caused by these two features: (i) noise accumulation and spurious correlations in high-dimensional data; (ii) computational scalability for massive data. We also present applications of big data in various fields, including disaster forecasting, digital humanities, and sabermetrics.
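A small illustration (with an assumed simulation setup) of the spurious-correlation issue mentioned above: for a fixed sample size, the largest absolute sample correlation between a response and completely independent noise predictors grows as the number of predictors increases.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50                                      # fixed, modest sample size
y = rng.normal(size=n)

for p in (10, 100, 1000, 10000):
    X = rng.normal(size=(n, p))             # predictors independent of y
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
    print(f"p = {p:6d}  max |correlation| with pure noise = {corr.max():.2f}")
```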

Design of a Hierarchically Structured Gas Identification System Using Fuzzy Sets and Rough Sets (퍼지집합과 러프집합을 이용한 계층 구조 가스 식별 시스템의 설계)

  • Bang, Young-Keun;Lee, Chul-Heui
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.3
    • /
    • pp.419-426
    • /
    • 2018
  • A useful and effective design method for a gas identification system is presented in this paper. The proposed gas identification system adopts a hierarchical structure with a two-level rule base that combines fuzzy sets with rough sets. First, a hybrid genetic algorithm groups the array sensors whose measured patterns are similar, in order to reduce the dimensionality of the patterns to be analyzed and to make rule construction easy and simple. Next, for low-level identification, fuzzy inference systems for each group are designed using TSK fuzzy rules, which handle the drift and the uncertainty of sensor data effectively. Finally, rough set theory is applied to derive the high-level identification rules that reflect the identification characteristics of each group. Thus, the proposed method accomplishes dimensionality reduction as well as accurate gas identification. In simulation, we demonstrate the effectiveness of the proposed methods by identifying five types of gases.
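A minimal first-order TSK (Takagi-Sugeno-Kang) inference sketch for one sensor group, with assumed membership parameters and rule consequents; the paper's genetic-algorithm grouping and rough-set rule derivation are not reproduced here.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership degree of x for fuzzy sets centred at c with widths s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tsk_infer(x, rules):
    """x: 1-D sensor pattern; rules: list of (centres, widths, consequent_coeffs)."""
    weights, outputs = [], []
    for centres, widths, coeffs in rules:
        w = np.prod(gauss(x, centres, widths))          # rule firing strength
        weights.append(w)
        outputs.append(coeffs[0] + coeffs[1:] @ x)      # linear (first-order) consequent
    weights = np.array(weights)
    return float(weights @ np.array(outputs) / weights.sum())

# Two illustrative rules over a 3-sensor pattern (all numbers assumed).
rules = [
    (np.array([0.2, 0.3, 0.1]), np.array([0.2, 0.2, 0.2]), np.array([0.0, 1.0, 0.5, 0.2])),
    (np.array([0.8, 0.7, 0.9]), np.array([0.3, 0.3, 0.3]), np.array([1.0, -0.5, 0.3, 0.1])),
]
print(tsk_infer(np.array([0.25, 0.35, 0.15]), rules))
```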

Poisson linear mixed models with ARMA random effects covariance matrix

  • Choi, Jiin;Lee, Keunbaik
    • Journal of the Korean Data and Information Science Society
    • /
    • v.28 no.4
    • /
    • pp.927-936
    • /
    • 2017
  • To analyze longitudinal count data, Poisson linear mixed models are commonly used. In these models, the random effects covariance matrix explains both within-subject variation and serial correlation of repeated count outcomes. When the random effects covariance matrix is misspecified, the estimates of covariate effects can be biased. Therefore, we propose reasonable and flexible structures for the covariance matrix using the autoregressive and moving average Cholesky decomposition (ARMACD). The ARMACD factors the covariance matrix into generalized autoregressive parameters (GARPs), generalized moving average parameters (GMAPs), and innovation variances (IVs). Positive IVs guarantee the positive-definiteness of the covariance matrix. In this paper, we use the ARMACD to model the random effects covariance matrix in Poisson loglinear mixed models. We analyze epileptic seizure data using the proposed model.
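A numerical sketch of the autoregressive part of the decomposition described above (GARPs and innovation variances only; the moving-average GMAP factor of the full ARMACD is omitted). The parameter values are assumed, and the point is simply that positive innovation variances yield a positive-definite covariance matrix.

```python
import numpy as np

def covariance_from_garp_iv(garp, iv):
    """Build Sigma from T * Sigma * T' = D, where T is unit lower-triangular with
    -garp[t, j] below the diagonal and D = diag(iv)."""
    T = np.eye(len(iv)) - np.tril(garp, k=-1)
    Tinv = np.linalg.inv(T)
    return Tinv @ np.diag(iv) @ Tinv.T

dim = 4
garp = np.full((dim, dim), 0.5)             # assumed generalized AR parameters
iv = np.array([1.0, 0.8, 0.8, 0.9])         # positive innovation variances
sigma = covariance_from_garp_iv(garp, iv)
print(np.linalg.eigvalsh(sigma))            # all eigenvalues positive
```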

A Clustering Approach for Feature Selection in Microarray Data Classification Using Random Forest

  • Aydadenta, Husna;Adiwijaya, Adiwijaya
    • Journal of Information Processing Systems
    • /
    • v.14 no.5
    • /
    • pp.1167-1175
    • /
    • 2018
  • Microarray data play an essential role in diagnosing and detecting cancer. Microarray analysis allows the expression levels of thousands of genes to be examined simultaneously in specific cell samples. However, microarray datasets have very few samples and high dimensionality. Therefore, to classify microarray data, a dimensionality reduction process is required. Dimensionality reduction can eliminate data redundancy, so that the features used in classification are only those highly correlated with their class. There are two types of dimensionality reduction, namely feature selection and feature extraction. In this paper, we use the k-means algorithm as the clustering approach for feature selection. The proposed approach groups features with the same characteristics into one cluster, so that redundancy in the microarray data is removed. The clustering result is ranked using the Relief algorithm so that the best-scoring element of each cluster is obtained. The best elements of all clusters are selected and used as features in the classification process, where the Random Forest algorithm is then applied. Based on the simulation, the accuracy of the proposed approach on the Colon, Lung Cancer, and Prostate Tumor datasets is 85.87%, 98.9%, and 89%, respectively. The accuracy of the proposed approach is therefore higher than that of Random Forest without clustering.
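A sketch of the clustering-based feature selection described above, on assumed synthetic data, with sklearn's univariate F-score standing in for the Relief ranking used in the paper: features are clustered with k-means, the top-scoring feature of each cluster is kept, and a Random Forest is trained on the survivors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=60, n_features=500, n_informative=20,
                           random_state=0)           # stand-in for microarray data

k = 30                                                # assumed number of clusters
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X.T)
scores, _ = f_classif(X, y)                           # stand-in for Relief scores

selected = [np.where(labels == c)[0][np.argmax(scores[labels == c])]
            for c in range(k)]                        # best-scoring feature per cluster

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("with clustering   :", cross_val_score(rf, X[:, selected], y, cv=5).mean())
print("without clustering:", cross_val_score(rf, X, y, cv=5).mean())
```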

Machine Learning Based Structural Health Monitoring System using Classification and NCA (분류 알고리즘과 NCA를 활용한 기계학습 기반 구조건전성 모니터링 시스템)

  • Shin, Changkyo;Kwon, Hyunseok;Park, Yurim;Kim, Chun-Gon
    • Journal of Advanced Navigation Technology
    • /
    • v.23 no.1
    • /
    • pp.84-89
    • /
    • 2019
  • This is a pilot study of a machine learning based structural health monitoring system using flight data of composite aircraft. In this study, the most suitable machine learning algorithm for structural health monitoring was selected and a dimensionality reduction method applicable to actual flight data was examined. For these tasks, an impact test on a cantilever beam with added mass, simulating damage in an aircraft wing structure, was conducted, and a classification model for damage states (damage location and level) was trained. Through vibration tests of the cantilever beam with a fiber Bragg grating (FBG) sensor, data for the normal state and 12 damaged states were acquired, and the most suitable algorithm was selected by comparing algorithms such as decision trees, discriminant analysis, support vector machines (SVM), kNN, and ensembles. In addition, neighborhood component analysis (NCA) feature selection provided the dimensionality reduction needed to handle high-dimensional flight data. As a result, the quadratic SVM performed best, with 98.7% accuracy without NCA and 95.9% with NCA. Applying NCA also improved prediction speed and reduced training time and model memory.
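A sketch of the NCA-plus-quadratic-SVM step described above, on assumed synthetic data. sklearn's NeighborhoodComponentsAnalysis learns a reduced linear projection and is used here as an analogue of the NCA feature selection in the paper; the quadratic SVM is an SVC with a degree-2 polynomial kernel.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=60, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

with_nca = make_pipeline(StandardScaler(),
                         NeighborhoodComponentsAnalysis(n_components=10, random_state=0),
                         SVC(kernel="poly", degree=2))
without_nca = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2))

print("quadratic SVM + NCA :", cross_val_score(with_nca, X, y, cv=5).mean())
print("quadratic SVM alone :", cross_val_score(without_nca, X, y, cv=5).mean())
```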

Design and Performance Analysis of a Parallel Cell-Based Filtering Scheme using Horizontally-Partitioned Technique (수평 분할 방식을 이용한 병렬 셀-기반 필터링 기법의 설계 및 성능 평가)

  • Chang, Jae-Woo;Kim, Young-Chang
    • The KIPS Transactions:PartD
    • /
    • v.10D no.3
    • /
    • pp.459-470
    • /
    • 2003
  • Research on high-dimensional index structures is required for efficient retrieval of high-dimensional data, because attribute vectors in data warehousing and feature vectors in multimedia databases are high-dimensional. Many high-dimensional index structures have been proposed, but they suffer from the so-called 'curse of dimensionality': retrieval performance degrades sharply as the dimensionality increases. To solve this problem, the cell-based filtering (CBF) scheme was proposed, but the CBF scheme still shows a linear decrease in performance as the dimensionality grows. To cope with this, parallel processing techniques are necessary. In this paper, we propose a parallel CBF scheme that uses a horizontal-partitioning technique for declustering. To maximize the retrieval performance of the proposed parallel CBF scheme, we build it on a shared-nothing (SN) cluster architecture. In addition, we present a data insertion algorithm, a range query processing algorithm, and a k-NN query processing algorithm suitable for the SN cluster architecture. Finally, we show that our parallel CBF scheme achieves retrieval performance that improves in proportion to the number of servers in the SN cluster architecture, compared with the conventional CBF scheme.
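A minimal sketch (assumed data, no real cluster) of the horizontally partitioned k-NN idea described above: the vectors are split across "servers", each partition answers the query locally, and a coordinator merges the partial results into the global top k.

```python
import heapq
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(10000, 32))          # high-dimensional feature vectors
partitions = np.array_split(data, 4)         # horizontal split across 4 servers
query, k = rng.normal(size=32), 5

def local_knn(part, query, k):
    """Each server returns its k best (distance, vector-id-within-partition) pairs."""
    dists = np.linalg.norm(part - query, axis=1)
    idx = np.argsort(dists)[:k]
    return [(dists[i], i) for i in idx]

# Coordinator merges the per-server candidate lists and keeps the global top k.
candidates = [(d, server, i) for server, part in enumerate(partitions)
              for d, i in local_knn(part, query, k)]
print(heapq.nsmallest(k, candidates))
```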

Early Software Quality Prediction Using Support Vector Machine (Support Vector Machine을 이용한 초기 소프트웨어 품질 예측)

  • Hong, Euy-Seok
    • Journal of Information Technology Services
    • /
    • v.10 no.2
    • /
    • pp.235-245
    • /
    • 2011
  • Early criticality prediction models that determine whether a design entity is fault-prone or not are becoming more and more important as software development projects grow larger. Effective predictions can reduce system development cost and improve software quality by identifying trouble spots at early phases and guiding the proper allocation of effort and resources. Many prediction models using statistical and machine learning methods have been proposed. This paper builds a prediction model using the Support Vector Machine (SVM), one of the most popular modern classification methods, and compares its prediction performance with that of a well-known prediction model, the back-propagation neural network model (BPM). SVM is known to generalize well even in high-dimensional spaces under small training data conditions. In the prediction performance evaluation experiments, dimensionality reduction techniques are not applied because the dimension of the input data is already small. Experimental results show that the prediction performance of the SVM model is slightly better than that of the BPM, and that the polynomial kernel achieves better performance than the other SVM kernel functions.
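A sketch of the comparison described above on assumed synthetic design metrics: a polynomial-kernel SVM versus a back-propagation neural network (sklearn's MLPClassifier standing in for the BPM), each scored with cross-validation.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for low-dimensional design metrics labelled fault-prone / not fault-prone.
X, y = make_classification(n_samples=200, n_features=8, n_informative=5,
                           weights=[0.7, 0.3], random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2))
bpm = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0))

print("SVM (poly kernel):", cross_val_score(svm, X, y, cv=10).mean())
print("BPM (MLP)        :", cross_val_score(bpm, X, y, cv=10).mean())
```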

Smoothed Local PCA by BYY data smoothing learning

  • Liu, Zhiyong;Xu, Lei
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.109.3-109
    • /
    • 2001
  • The so-called curse of dimensionality arises when a Gaussian mixture is used on high-dimensional, small-sample-size data, since the number of free elements that must be specified in each covariance matrix of the Gaussian mixture grows quadratically with the dimension d. In this paper, by constraining each covariance matrix to its decomposed orthonormal form, we obtain a local PCA model and thereby reduce the number of free elements that need to be specified. Moreover, to cope with the small sample size problem, we adopt BYY data smoothing learning, a regularization of maximum likelihood learning obtained from BYY harmony learning, to implement this local PCA model.

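A small sketch of why the constrained (decomposed) covariance described above helps with high-dimensional, small-sample data: a full covariance needs d(d+1)/2 free elements per mixture component, while a local-PCA-style low-rank-plus-noise form (an assumed stand-in for the paper's orthonormal decomposition) needs far fewer and is still a valid covariance matrix.

```python
import numpy as np

def free_elements(d, q=None):
    """Full covariance vs. a rank-q 'local PCA' covariance W W^T + s^2 I."""
    return d * (d + 1) // 2 if q is None else d * q + 1

d, q = 100, 5
print("full covariance :", free_elements(d))        # 5050 free elements
print("local PCA (q=5) :", free_elements(d, q))     # 501 free elements

# The low-rank-plus-noise form remains positive definite:
rng = np.random.default_rng(0)
W = rng.normal(size=(d, q))
sigma = W @ W.T + 0.1 * np.eye(d)
print("positive definite:", np.all(np.linalg.eigvalsh(sigma) > 0))
```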

An extension of multifactor dimensionality reduction method for detecting gene-gene interactions with the survival time (생존시간과 연관된 유전자 간의 교호작용에 관한 다중차원축소방법의 확장)

  • Oh, Jin Seok;Lee, Seung Yeoun
    • Journal of the Korean Data and Information Science Society
    • /
    • v.25 no.5
    • /
    • pp.1057-1067
    • /
    • 2014
  • Many genetic variants have been identified as associated with complex diseases such as hypertension, diabetes, and cancers through genome-wide association studies (GWAS). However, a serious missing heritability problem remains, since the proportion of heritability explained by the genetic variants from GWAS is weak, less than 10-15%. Gene-gene interaction studies may help explain the missing heritability, because most complex disease mechanisms involve more than a single SNP and include multiple SNPs or gene-gene interactions. This paper focuses on gene-gene interactions with a survival phenotype by extending the multifactor dimensionality reduction (MDR) method to the accelerated failure time (AFT) model. The standardized residual from the AFT model is used as a residual score for classifying multiple genotypes into high- and low-risk groups, and the MDR algorithm is implemented accordingly. We call this method AFT-MDR and compare its power with those of Surv-MDR and Cox-MDR in simulation studies. A real dataset of Korean leukemia patients is also analyzed. The power of AFT-MDR was found to be greater than that of Surv-MDR and comparable with that of Cox-MDR, but very sensitive to the censoring fraction.
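A numpy-only sketch of the MDR-style classification step described above, on assumed data: a residual score (standing in for the standardized AFT residual) is averaged within each two-SNP genotype cell, cells above the overall mean are labelled high risk, and subjects are classified by the cell they fall into.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
snp1 = rng.integers(0, 3, size=n)            # genotypes coded 0/1/2
snp2 = rng.integers(0, 3, size=n)
score = rng.normal(size=n) + 0.8 * (snp1 == 2) * (snp2 == 0)   # assumed interaction

# Average residual score per genotype cell; above-average cells are "high risk".
cell_mean = np.zeros((3, 3))
for a in range(3):
    for b in range(3):
        mask = (snp1 == a) & (snp2 == b)
        cell_mean[a, b] = score[mask].mean() if mask.any() else 0.0
high_risk_cell = cell_mean > score.mean()

high_risk = high_risk_cell[snp1, snp2]        # classify each subject by its cell
print("high-risk cells:\n", high_risk_cell.astype(int))
print("mean score (high vs low risk):",
      score[high_risk].mean(), score[~high_risk].mean())
```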