• Title/Abstract/Keyword: Missing Data Handling


Variational Mode Decomposition with Missing Data

  • 최규빈;오희석;이영조;김동호;유경상
    • 응용통계연구 / Vol. 28, No. 2 / pp.159-174 / 2015
  • Recently, Dragomiretskiy and Zosso (2014) devised Variational Mode Decomposition (VMD), a new signal decomposition method that remedies shortcomings of Empirical Mode Decomposition (EMD). VMD is markedly better than EMD at tone detection and tone separation, and, because its algorithm is based on the fast Fourier transform, it is also more robust to noise. However, VMD does not work properly when the signal is not observed at equal time or spatial intervals, for example because of missing values. To remedy this, this paper proposes a new method that combines VMD with a multilevel likelihood function; the multilevel likelihood provides an efficient way to impute the missing values before VMD decomposes the signal into appropriate intrinsic mode functions. Through simulations and a real-data case study, we show that the proposed method decomposes signals more efficiently than existing methods.
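
As an illustration of the impute-then-decompose idea, the sketch below uses plain NumPy: linear interpolation stands in for the multilevel-likelihood imputation, and a crude FFT band split stands in for true VMD. All names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def impute_then_decompose(t, y, n_modes=3):
    """Fill the gaps first, then decompose on the regular grid.
    Linear interpolation is a stand-in for the likelihood-based step."""
    mask = ~np.isnan(y)
    y_filled = np.interp(t, t[mask], y[mask])   # step 1: impute missing samples

    # Step 2: a crude FFT band split as a placeholder for true VMD.
    Y = np.fft.rfft(y_filled)
    edges = np.linspace(0, len(Y), n_modes + 1, dtype=int)
    modes = []
    for k in range(n_modes):
        Yk = np.zeros_like(Y)
        Yk[edges[k]:edges[k + 1]] = Y[edges[k]:edges[k + 1]]
        modes.append(np.fft.irfft(Yk, n=len(y_filled)))
    return modes

# Toy signal on a regular grid with roughly 10% of values missing
t = np.linspace(0, 1, 512)
y = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
y[np.random.default_rng(0).choice(512, 51, replace=False)] = np.nan
modes = impute_then_decompose(t, y)
```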

A Clustering Algorithm for Handling Missing Data

  • 이종찬
    • 한국융합학회논문지 / Vol. 8, No. 11 / pp.103-108 / 2017
  • In ubiquitous environments, the problem of transmitting data from diverse sensors over long distances has long been raised. In particular, integrating data arriving from different locations poses the difficult problem of handling records whose attribute values differ or that are partially lost. This paper presents a clustering method for such data. The core of the method is to define an objective function suited to the problem and to develop an algorithm that can optimize it. The objective function is a modified version of the OCS objective function. Mean Field Annealing (MFA), which could previously handle only binary-valued data, is extended to domains with continuous values; we name this extension CMFA and use it as the optimization algorithm.
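
The abstract gives only the outline of CMFA, so the sketch below shows the general shape of an annealed mean-field soft clustering that ignores missing attributes when computing distances; the OCS-derived objective itself is not specified in the abstract and is not reproduced here.

```python
import numpy as np

def annealed_soft_clustering(X, k=2, T0=1.0, cooling=0.9, iters=50, seed=0):
    """Mean-field-annealing-flavored soft clustering for data with NaNs:
    distances use only the observed attributes of each record."""
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(X)
    Xf = np.where(obs, X, 0.0)
    C = Xf[rng.choice(len(X), k, replace=False)]        # initial centroids
    T = T0
    for _ in range(iters):
        # squared distances over observed attributes only, shape (n, k)
        d = np.array([((Xf - c) ** 2 * obs).sum(1) for c in C]).T
        P = np.exp(-(d - d.min(1, keepdims=True)) / T)  # mean-field weights
        P /= P.sum(1, keepdims=True)
        C = (P.T @ Xf) / np.maximum(P.T @ obs.astype(float), 1e-12)
        T *= cooling                                    # anneal the temperature
    return P, C

X = np.array([[1.0, 2.0], [1.1, np.nan], [8.0, 9.0], [np.nan, 9.2]])
memberships, centroids = annealed_soft_clustering(X, k=2)
```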

Denoising Self-Attention Network for Mixed-type Data Imputation

  • 이도훈;김한준;전종훈
    • 한국콘텐츠학회논문지 / Vol. 21, No. 11 / pp.135-144 / 2021
  • Data-driven decision making has become a core technology of the data industry, and the machine learning techniques behind it require high-quality training data. Real-world data, however, contain missing values for a variety of reasons, which degrades the performance of models trained on them. To build high-performance models from real-world data, techniques that automatically impute the missing values in training data are therefore being actively studied. Existing machine-learning-based imputation methods either apply only to numerical variables or build a separate predictive model for each variable, which is very cumbersome. This paper proposes the Denoising Self-Attention Network (DSAN), an imputation model applicable to data in which numerical and categorical variables are mixed. DSAN combines self-attention with denoising to learn robust feature representation vectors, and through multi-task learning it builds imputation models for multiple missing-value variables in parallel. To validate the proposed model, we randomly mask values in several mixed-type training datasets and run imputation experiments, demonstrating effectiveness by comparing the error between the original and imputed values and the performance of binary classification models trained on the imputed data.
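
A compact PyTorch sketch of the mechanism described, with every architectural detail (embedding size, noise model, head design) assumed rather than taken from the paper: each column becomes a token, the encoder sees a noised view, and per-column heads reconstruct values in parallel (the multi-task aspect).

```python
import torch
import torch.nn as nn

class TinyDSAN(nn.Module):
    """Columns-as-tokens encoder: numerical tokens get a denoising (Gaussian
    noise) view, self-attention mixes all columns, and one head per column
    reconstructs its value, so all imputation tasks train in parallel."""
    def __init__(self, n_num, cat_sizes, d=32):
        super().__init__()
        self.num_proj = nn.ModuleList([nn.Linear(1, d) for _ in range(n_num)])
        self.cat_emb = nn.ModuleList([nn.Embedding(s, d) for s in cat_sizes])
        layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.num_heads = nn.ModuleList([nn.Linear(d, 1) for _ in range(n_num)])
        self.cat_heads = nn.ModuleList([nn.Linear(d, s) for s in cat_sizes])

    def forward(self, x_num, x_cat, noise=0.1):
        toks = [p(x_num[:, i:i + 1] + noise * torch.randn_like(x_num[:, i:i + 1]))
                for i, p in enumerate(self.num_proj)]
        toks += [e(x_cat[:, j]) for j, e in enumerate(self.cat_emb)]
        h = self.encoder(torch.stack(toks, dim=1))       # (batch, columns, d)
        n = len(self.num_heads)
        nums = [head(h[:, i]) for i, head in enumerate(self.num_heads)]
        cats = [head(h[:, n + j]) for j, head in enumerate(self.cat_heads)]
        return nums, cats

model = TinyDSAN(n_num=3, cat_sizes=[4, 7])
nums, cats = model(torch.randn(8, 3), torch.randint(0, 4, (8, 2)))
```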

Different penalty methods for assessing interval from first to successful insemination in Japanese Black heifers

  • Setiaji, Asep;Oikawa, Takuro
    • Asian-Australasian Journal of Animal Sciences / Vol. 32, No. 9 / pp.1349-1354 / 2019
  • Objective: The objective of this study was to determine the best approach for handling missing records of the interval from first to successful insemination (FS) in Japanese Black heifers. Methods: Of the 2,367 records of heifers born between 2003 and 2015, 206 (8.7%), from open heifers, were missing. Four penalty methods based on the number of inseminations were set as follows: C1, the FS average according to the number of inseminations; C2, a constant number of days, 359; C3, the maximum number of FS days at each insemination; and C4, the average of the FS at the last insemination and the FS of C2. C5 was generated by adding a constant number (21 d) to the highest number of FS days in each contemporary group. The bootstrap method was used to compare the five methods in terms of bias, mean squared error (MSE), and the coefficient of correlation between estimated breeding values (EBV) from non-censored and censored data. Three percentages of missing records (5%, 10%, and 15%) were investigated using a random censoring scheme. A univariate animal model was used for the genetic analysis. Results: Heritability of FS in the non-censored data was 0.012 ± 0.016, slightly lower than the average estimate from the five penalty methods. C1, C2, and C3 showed lower standard errors of estimated heritability but gave inconsistent results across the different percentages of missing records. C4 showed moderate standard errors that were stable for all percentages of missing records, whereas C5 showed the highest standard errors compared with the non-censored data. The MSE of the C4 heritability was 0.633 × 10⁻⁴, 0.879 × 10⁻⁴, 0.876 × 10⁻⁴, and 0.866 × 10⁻⁴ for 5%, 8.7%, 10%, and 15% missing records, respectively. Thus, C4 showed the lowest and most stable MSE of heritability; its coefficients of correlation for EBV were 0.88, 0.93, and 0.90 for heifer, sire, and dam, respectively. Conclusion: C4 demonstrated the highest positive correlation with the non-censored data set and was consistent across different percentages of missing records. We conclude that C4 is the best penalty method for missing records, owing to the stable values of its estimated parameters and its highest coefficient of correlation.
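
Under the definitions above, the five penalty values for a single missing record can be computed roughly as follows (a sketch with illustrative column names; reading C4 as the mean FS at the highest insemination count averaged with the C2 constant is an assumption):

```python
import pandas as pd

def penalty_values(obs, n_insem_missing, group_max_fs):
    """obs: non-missing records with columns 'fs' (days from first to
    successful insemination) and 'n_insem' (number of inseminations)."""
    by_n = obs.groupby("n_insem")["fs"]
    return {
        "C1": by_n.mean().get(n_insem_missing),    # mean FS at that count
        "C2": 359.0,                               # fixed constant, 359 d
        "C3": by_n.max().get(n_insem_missing),     # max FS at that count
        "C4": (by_n.mean().iloc[-1] + 359.0) / 2,  # mean FS at last count vs. C2
        "C5": group_max_fs + 21.0,                 # contemporary-group max + 21 d
    }

toy = pd.DataFrame({"n_insem": [1, 1, 2, 2, 3], "fs": [0, 21, 40, 52, 75]})
print(penalty_values(toy, n_insem_missing=2, group_max_fs=75))
```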

Probabilistic penalized principal component analysis

  • Park, Chongsun;Wang, Morgan C.;Mo, Eun Bi
    • Communications for Statistical Applications and Methods / Vol. 24, No. 2 / pp.143-154 / 2017
  • A variable selection method based on probabilistic principal component analysis (PCA) using a penalized likelihood method is proposed. The proposed method is a two-step variable reduction method. The first step uses the probabilistic principal component idea to identify principal components, with a penalty function used to identify the important variables in each component. We then build a model on the original data space rather than on the rotated space of latent variables (principal components), because the method achieves dimension reduction by identifying important observed variables; consequently, it is of more practical use. The proposed estimators perform like the oracle procedure and are root-n consistent with a proper choice of regularization parameters. The method can be successfully applied to high-dimensional PCA problems in which a relatively large portion of irrelevant variables is included in the data set. It is straightforward to extend the likelihood method to handle problems with missing observations using EM algorithms, and it can be applied effectively when some data vectors exhibit one or more values missing at random.
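
A rough sketch of the two-step idea on complete data: EM for probabilistic PCA (Tipping and Bishop) with a soft-threshold on the loadings standing in for the paper's penalty function. The thresholding rule and all parameter choices are assumptions, not the authors' estimator.

```python
import numpy as np

def penalized_ppca(X, q=2, lam=0.02, iters=100, seed=0):
    """EM for probabilistic PCA with an ad-hoc soft-threshold on the loadings
    (a stand-in for the paper's penalty). Complete data assumed; the same EM
    extends to missing entries, as the abstract notes."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xc = X - X.mean(0)
    W, s2 = rng.normal(size=(d, q)), 1.0
    for _ in range(iters):
        M = W.T @ W + s2 * np.eye(q)                  # E-step
        Minv = np.linalg.inv(M)
        Ez = Xc @ W @ Minv                            # posterior means of latents
        Ezz = n * s2 * Minv + Ez.T @ Ez               # summed second moments
        W = Xc.T @ Ez @ np.linalg.inv(Ezz)            # M-step: loadings
        s2 = (np.sum(Xc ** 2) - 2 * np.sum(W * (Xc.T @ Ez))
              + np.trace(Ezz @ W.T @ W)) / (n * d)    # M-step: noise variance
        W = np.sign(W) * np.maximum(np.abs(W) - lam, 0.0)  # penalty: sparsify
    return W, s2, np.flatnonzero(np.abs(W).sum(1) > 0)     # selected variables

X = np.random.default_rng(1).normal(size=(300, 8))
X[:, :2] += X[:, 2:4] @ np.ones((2, 2))               # correlate a few variables
W, s2, selected = penalized_ppca(X)
```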

An Intelligent Framework for Feature Detection and Health Recommendation System of Diseases

  • Mavaluru, Dinesh
    • International Journal of Computer Science & Network Security / Vol. 21, No. 3 / pp.177-184 / 2021
  • All over the world, people are affected by many chronic diseases, and medical practitioners work hard to identify their symptoms and remedies. Many researchers focus on detecting disease features and on building better health recommendation systems, since features must be detected automatically to provide the most relevant solution for a disease. This research presents a Health Recommendation System (HRS) framework for identifying relevant, non-redundant features in a dataset for disease prediction and recommendation. The system consists of three phases: pre-processing, feature selection, and performance evaluation. It handles missing and noisy data using the proposed Imputation of Missing Data and Noise Detection based Pre-processing algorithm (IMDNDP), and selects features from the pre-processed dataset with the proposed ensemble-based feature selection using expert knowledge (EFS-EK). Detecting and monitoring diseases manually is difficult, requires expertise in the field, and is time-consuming. Finally, prediction and recommendation are performed using a Support Vector Machine (SVM) and rule-based approaches.
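
Since IMDNDP and EFS-EK are the paper's own algorithms, the sketch below only mirrors the three-phase pipeline shape with common scikit-learn stand-ins (median imputation, univariate selection, SVM):

```python
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

# Stand-ins for the paper's phases: median imputation in place of IMDNDP,
# univariate selection in place of EFS-EK, then the SVM stage it names.
hrs_sketch = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # missing-value handling
    ("select", SelectKBest(f_classif, k=10)),       # relevant feature subset
    ("clf", SVC(kernel="rbf", C=1.0)),              # disease prediction
])
# Usage: hrs_sketch.fit(X_train, y_train); hrs_sketch.predict(X_test)
```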

Breast Cancer and Modifiable Lifestyle Factors in Argentinean Women: Addressing Missing Data in a Case-Control Study

  • Coquet, Julia Becaria;Tumas, Natalia;Osella, Alberto Ruben;Tanzi, Matteo;Franco, Isabella;Diaz, Maria Del Pilar
    • Asian Pacific Journal of Cancer Prevention / Vol. 17, No. 10 / pp.4567-4575 / 2016
  • A number of studies have evidenced the effect of modifiable lifestyle factors such as diet, breastfeeding, and nutritional status on breast cancer risk. However, none have addressed the missing data problem in nutritional epidemiologic research in South America. Missing data are a frequent problem in breast cancer studies and in epidemiological settings in general, and estimates of effect obtained from these studies may be biased if no appropriate method for handling missing data is applied. We performed multiple imputation for missing values on covariates in a breast cancer case-control study in Córdoba (Argentina) to optimize risk estimates. Data were obtained from a breast cancer case-control study conducted from 2008 to 2015 (318 cases, 526 controls). Complete-case analysis and multiple imputation using chained equations were applied to estimate the effects of a Traditional dietary pattern and other recognized factors associated with breast cancer; physical activity and socioeconomic status were imputed, and logistic regression models were fitted. When complete-case analysis was performed, only 31% of the women were considered. Although a positive association between the Traditional dietary pattern and breast cancer was observed with both approaches (complete-case analysis OR=1.3, 95%CI=1.0-1.7; multiple imputation OR=1.4, 95%CI=1.2-1.7), effects of other covariates, like BMI and breastfeeding, were identified only when multiple imputation was used. A Traditional dietary pattern, BMI, and breastfeeding are associated with the occurrence of breast cancer in this Argentinean population when multiple imputation is appropriately performed. Multiple imputation is suggested for future epidemiologic studies in Latin America to optimize effect estimates.
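
Chained-equations imputation of covariates before logistic regression can be sketched with scikit-learn as below. Note this is a single stochastic imputation; full multiple imputation would repeat the fit over several imputed datasets and pool the estimates (Rubin's rules). Data and variable names are synthetic placeholders.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))              # e.g. diet score, BMI, activity, SES
X[rng.random(X.shape) < 0.2] = np.nan      # inject 20% missingness
y = rng.integers(0, 2, 200)                # case/control labels

# One chained-equations pass; repeat with different random_state values
# and pool coefficients across fits for a full multiple-imputation analysis.
X_imp = IterativeImputer(max_iter=10, random_state=0).fit_transform(X)
model = LogisticRegression().fit(X_imp, y)
print(np.exp(model.coef_))                 # odds ratios per covariate
```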

Fuzzy Classification Method for Processing Incomplete Dataset

  • Woo, Young-Woon;Lee, Kwang-Eui;Han, Soo-Whan
    • Journal of information and communication convergence engineering / Vol. 8, No. 4 / pp.383-386 / 2010
  • Pattern classification is one of the most important topics in machine learning research. However, incomplete data appear frequently in real-world problems and lead to low learning rates in classification models. There has been much research on handling such incomplete data, but most of it focuses on the training stage. In this paper, we propose two classification methods for incomplete data using triangular fuzzy membership functions. In the proposed methods, missing values in incomplete feature vectors are inferred, learned, and applied to the proposed classifier using triangular fuzzy membership functions. In experiments, we verified that the proposed methods show a higher classification rate than a conventional method.
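
The building block named in the abstract is the triangular membership function; a sketch follows. How the paper infers missing values from the memberships is not detailed in the abstract, so the imputation comment below is an assumption.

```python
import numpy as np

def triangular_membership(x, a, b, c):
    """Degree of membership: 0 at a, rising to 1 at b, falling to 0 at c."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a) if b > a else np.ones_like(x)
    right = (c - x) / (c - b) if c > b else np.ones_like(x)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

# One plausible use (an assumption): impute a missing feature with the peak b
# of the class-conditional triangle fitted to that feature's observed values.
values = np.array([2.0, 4.5, 5.0, 5.5, 8.0])
a, b, c = values.min(), np.median(values), values.max()
print(triangular_membership(np.linspace(0, 10, 5), a, b, c))
```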

Classification for Imbalanced Breast Cancer Dataset Using Resampling Methods

  • Hana Babiker, Nassar
    • International Journal of Computer Science & Network Security / Vol. 23, No. 1 / pp.89-95 / 2023
  • Analyzing breast cancer patient files is becoming an exciting area of medical information analysis, especially with the increasing number of patient files. In this paper, breast cancer data collected from a Khartoum state hospital is classified into recurrence and no recurrence. The data is imbalanced, meaning that one of the two classes has more samples than the other. Several pre-processing techniques are applied to classify this imbalanced data (resampling, attribute selection, and handling of missing values), and then different classifier models are built. In the first experiment, individual classifiers (ANN, REP Tree, SVM, and J48) are used; in the second, meta-learning algorithms (Bagging, Boosting, and Random Subspace) are applied; finally, an ensemble model is used. The best result was obtained from the ensemble model (Boosting with J48), with the highest accuracy of 95.2797% among all the algorithms, followed by Bagging with J48 (90.559%) and Random Subspace with J48 (84.2657%).
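
A scikit-learn/imbalanced-learn analogue of the Weka setup (J48 is Weka's C4.5 implementation, approximated here by a depth-limited CART tree; SMOTE stands in for the unspecified resampling method):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for the imbalanced recurrence dataset (90/10 split).
X, y = make_classification(n_samples=500, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Resample the minority class, then boost a C4.5-style decision tree.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                         n_estimators=50, random_state=0)
clf.fit(X_res, y_res)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```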

Screening Vital Few Variables and Development of Logistic Regression Model on a Large Data Set

  • 임용빈;조재연;엄경아;이선아
    • 품질경영학회지 / Vol. 34, No. 2 / pp.129-135 / 2006
  • With advances in computer technology, it is possible to keep all the information needed to monitor equipment in control, along with huge amounts of real-time manufacturing data, in a database. Thus, statistical analysis of large data sets with hundreds of thousands of observations and hundreds of independent variables, some of whose values are missing in many observations, is needed, even though it is a formidable computational task. A tree-structured approach to classification is capable of screening important independent variables and their interactions. In a Six Sigma project handling large amounts of manufacturing data, one of the goals is to screen the vital few variables from among the trivial many. In this paper we review and summarize the CART, C4.5, and CHAID algorithms and propose a simple method of screening the vital few variables by selecting the variables screened in common by all three algorithms. We also discuss how to develop a logistic regression model on a large data set and illustrate it with a large finance data set collected by a credit bureau for the purpose of predicting company bankruptcy.
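
A sketch of the consensus-screening idea: CHAID and C4.5 have no scikit-learn implementations, so three differently configured CART trees stand in for the three algorithms; variables ranked highly by all screeners feed the logistic regression.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic wide data as a stand-in for the large credit-bureau data set.
X, y = make_classification(n_samples=2000, n_features=50, n_informative=5,
                           random_state=0)

# Three tree screeners (illustrative configurations, not CART/C4.5/CHAID).
configs = [dict(criterion="gini"), dict(criterion="entropy"),
           dict(criterion="gini", min_samples_leaf=50)]
top = []
for cfg in configs:
    imp = DecisionTreeClassifier(random_state=0, **cfg).fit(X, y).feature_importances_
    top.append(set(np.argsort(imp)[-10:]))         # top-10 variables per screener

vital = sorted(set.intersection(*top))             # variables all screeners agree on
model = LogisticRegression(max_iter=1000).fit(X[:, vital], y)
```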