• Title/Abstract/Keywords: Missing data theory

Search results: 39 items

손실 데이터 이론을 이용한 강인한 음성 인식 (Robust Speech Recognition Using Missing Data Theory)

  • 김락용;조훈영;오영환
    • 한국음향학회지 / Vol. 20, No. 3 / pp.56-62 / 2001
  • In this paper, missing data theory is applied to a speech recognizer in order to maintain a high recognition rate when parts of the data are lost. Missing data theory has the advantage of being easy to apply when the output probabilities of speech features are modeled with continuous Gaussian probability density functions in the hidden Markov model (HMM), the statistical matching framework in general use. Among missing data techniques, we adopted marginalization, which is computationally cheap and easy to integrate into a recognizer. Missing parts of particular feature dimensions or time frames are detected with a spectral subtraction approach that finds regions where the difference between the speech energy and the surrounding background noise energy falls below a threshold. The reliability evaluation of missing regions proposed in this paper computes the probability that an analysis frame is a vowel, so that only losses in vowel-like regions, which carry relatively high redundancy, are processed. In speaker-independent recognition experiments on 452 words over several noise environments, the proposed method reduced the error rate by about 12% on average compared with using the conventional missing data processing alone.
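
A minimal sketch of the marginalization step described above, assuming a diagonal-covariance Gaussian observation model: feature dimensions flagged as unreliable by the energy-based detector are simply dropped from the likelihood. The function name and the 13-dimensional feature are illustrative, not from the paper.

```python
import numpy as np

def marginalized_log_likelihood(x, mask, mean, var):
    """Log-likelihood of feature vector x under a diagonal Gaussian,
    marginalizing out the dimensions where mask is False (missing):
    with a diagonal covariance, integrating the density over a missing
    dimension simply removes that dimension's factor from the product."""
    r = mask                         # reliable (observed) dimensions
    d = x[r] - mean[r]
    return -0.5 * np.sum(np.log(2.0 * np.pi * var[r]) + d * d / var[r])

# Example: a 13-dimensional cepstral feature with two dimensions
# flagged as masked by noise (hypothetical detector output).
rng = np.random.default_rng(0)
x = rng.normal(size=13)
mask = np.ones(13, dtype=bool)
mask[[3, 7]] = False
print(marginalized_log_likelihood(x, mask, np.zeros(13), np.ones(13)))
```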

Support Vector Regression을 이용한 희소 데이터의 전처리 (A Sparse Data Preprocessing Using Support Vector Regression)

  • 전성해;박정은;오경환
    • 한국지능시스템학회논문지 / Vol. 14, No. 6 / pp.789-792 / 2004
  • In many fields such as web mining, bioinformatics, and statistical data analysis, missing values of widely varying forms arise and make the training data sparse. Missing values are usually replaced during preprocessing by estimates obtained from imputation techniques, ranging from the most basic mean and mode to conditional means, tree models, and Markov chain Monte Carlo methods. However, as the proportion of missing values in a dataset grows, the prediction accuracy of these conventional imputation methods deteriorates, and the number of applicable imputation methods shrinks. To address this problem, this paper adapts Vapnik's Support Vector Regression, from statistical learning theory, to the data preprocessing step, so that even sparse data with a high missing-value ratio can be preprocessed. The performance of the proposed method is verified on datasets from the UCI machine learning repository.
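
A rough sketch of SVR-based imputation in the spirit of this abstract, with scikit-learn's off-the-shelf SVR standing in for the authors' modified version; the column-by-column scheme and parameters are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVR

def svr_impute(X):
    """Impute NaNs column by column: for each incomplete column, fit an
    SVR on the rows that are fully observed in the other columns and
    predict the missing entries from those predictor columns."""
    X = X.copy()
    for j in range(X.shape[1]):
        miss = np.isnan(X[:, j])
        if not miss.any():
            continue
        others = np.delete(X, j, axis=1)
        ok = ~np.isnan(others).any(axis=1)    # rows usable as predictors
        train, test = ok & ~miss, ok & miss
        model = SVR(kernel="rbf", C=10.0).fit(others[train], X[train, j])
        X[test, j] = model.predict(others[test])
    return X
```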

A Modified Grey-Based k-NN Approach for Treatment of Missing Value

  • Chun, Young-M.;Lee, Joon-W.;Chung, Sung-S.
    • Journal of the Korean Data and Information Science Society / Vol. 17, No. 2 / pp.421-436 / 2006
  • In 2004, Huang proposed a grey-based nearest neighbor approach for accurately predicting missing attribute values. Our study proposes a way to decide the number of nearest neighbors using not only Deng's grey relational grade (GRG) but also Wen's grey relational grade. In addition, our study uses a weighted mean rather than an arithmetic (unweighted) mean, with the GRG serving as the weight when imputing missing values. This yields four different methods - DU, DW, WU, and WW. The WW method (Wen's GRG with a weighted mean) performs best among the four. Huang had already shown that his method was much better than mean imputation and multiple imputation; the performance of our method is, in turn, far superior to that of Huang's.
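
A sketch of what the Deng-GRG, weighted-mean variant might look like: rank complete instances by Deng's grey relational grade against the incomplete record, then impute with a GRG-weighted mean over the k most related ones. The distinguishing coefficient zeta = 0.5 and all helper names are assumptions; Wen's GRG is not reproduced here.

```python
import numpy as np

def deng_grg(x0, X, zeta=0.5):
    """Deng's grey relational grade of each (complete) row of X w.r.t.
    the reference x0, computed over x0's observed (non-NaN) attributes."""
    obs = ~np.isnan(x0)
    delta = np.abs(X[:, obs] - x0[obs])      # absolute differences
    dmin, dmax = delta.min(), delta.max()    # global extrema, Deng's form
    coeff = (dmin + zeta * dmax) / (delta + zeta * dmax)
    return coeff.mean(axis=1)                # grade = mean coefficient

def grey_knn_impute(x0, X, k=5):
    """Fill the NaNs of x0 with a GRG-weighted mean over the k most
    related complete instances in X."""
    grg = deng_grg(x0, X)
    nn = np.argsort(grg)[-k:]                # k largest grades
    w = grg[nn] / grg[nn].sum()
    filled, miss = x0.copy(), np.isnan(x0)
    filled[miss] = (w[:, None] * X[nn][:, miss]).sum(axis=0)
    return filled
```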

Compressive sensing-based two-dimensional scattering-center extraction for incomplete RCS data

  • Bae, Ji-Hoon;Kim, Kyung-Tae
    • ETRI Journal / Vol. 42, No. 6 / pp.815-826 / 2020
  • We propose a two-dimensional (2D) scattering-center-extraction (SCE) method using sparse recovery based on compressive-sensing theory, even with data missing from the received radar cross-section (RCS) dataset. First, using the proposed method, we generate a 2D grid via adaptive discretization that has a considerably smaller size than a fully sampled fine grid. Subsequently, the coarse estimation of 2D scattering centers is performed using both the method of iteratively reweighted least squares and a general peak-finding algorithm. Finally, the fine estimation of 2D scattering centers is performed using the orthogonal matching pursuit (OMP) procedure over an adaptively sampled Fourier dictionary. The measured RCS data, as well as simulation data using the point-scatterer model, are used to evaluate the 2D SCE accuracy of the proposed method. The results indicate that the proposed method can achieve higher SCE accuracy for an incomplete RCS dataset with missing data than that achieved by the conventional OMP, basis pursuit, smoothed L0, and existing discrete spectral estimation techniques.
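
A one-dimensional sketch of OMP over a partial Fourier dictionary built from an incomplete set of frequency samples; this is the standard OMP baseline the paper compares against, not the proposed adaptive two-stage method, and all sizes and indices are illustrative.

```python
import numpy as np

def omp(A, y, n_atoms, tol=1e-6):
    """Orthogonal matching pursuit: greedily pick the dictionary column
    most correlated with the residual, then re-fit by least squares."""
    residual, support = y.astype(complex), []
    for _ in range(n_atoms):
        corr = np.abs(A.conj().T @ residual)
        support.append(int(np.argmax(corr)))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
        if np.linalg.norm(residual) < tol:
            break
    return support, coef

# Incomplete data: keep only a random subset of the frequency samples.
n, keep = 128, 60
rng = np.random.default_rng(1)
f = rng.choice(n, size=keep, replace=False)       # observed sample indices
ranges = np.arange(n) / n                         # candidate scatterer positions
A = np.exp(-2j * np.pi * np.outer(f, ranges))     # partial Fourier dictionary
y = A[:, 20] * 1.0 + A[:, 77] * 0.5               # two point scatterers
support, coef = omp(A, y, n_atoms=2)
print(sorted(support))                            # -> [20, 77] in this setup
```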

A Study on the Treatment of Missing Value using Grey Relational Grade and k-NN Approach

  • 천영민;정성석
    • 한국데이터정보과학회 학술대회논문집: 2006년도 Proceedings of the Joint Conference of KDISS and KDAS / pp.55-62 / 2006
  • In 2004, Huang proposed a grey-based nearest neighbor approach for accurately predicting missing attribute values. This study proposes a way to decide the number of nearest neighbors using not only Deng's grey relational grade (GRG) but also Wen's grey relational grade. In addition, it uses a weighted mean rather than an arithmetic (unweighted) mean, with the GRG serving as the weight when imputing missing values. This yields four methods - DU, DW, WU, and WW. The WW method (Wen's GRG with a weighted mean) performs best among the four. Huang had shown that his method was much better than mean imputation and multiple imputation; the performance of our method is, in turn, far superior to Huang's.

손실 데이터를 처리하기 위한 집락분석 알고리즘 (A Clustering Algorithm for Handling Missing Data)

  • 이종찬
    • 한국융합학회논문지 / Vol. 8, No. 11 / pp.103-108 / 2017
  • In ubiquitous environments, the problem of transmitting data from various sensors over long distances has long been raised. In particular, the process of integrating data arriving from different locations faces the difficult problem of handling records whose attribute values differ or which are partially missing. This paper presents a method for cluster analysis of such data. The core of the method is to define an objective function suited to the problem and to develop an algorithm that can optimize it. The objective function is a modification of the OCS objective function. Mean Field Annealing (MFA), which could previously handle only binary-valued data, is extended so that it can also be applied to continuous-valued domains; this extension, named CMFA, is used as the optimization algorithm.
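
The paper's CMFA and its modified OCS objective are not reproduced here; the sketch below is only a deterministic-annealing flavored soft clustering loop, under the assumption that a continuous MFA follows the usual temperature-annealed soft-assignment scheme, with missing attributes masked out of both distances and centroid updates.

```python
import numpy as np

def cmfa_like_cluster(X, n_clusters=3, t0=5.0, t_min=0.05, cool=0.9):
    """Annealed soft clustering of data with NaNs: distances and centroid
    updates use only each record's observed attributes."""
    rng = np.random.default_rng(0)
    obs = ~np.isnan(X)
    Xz = np.where(obs, X, 0.0)                   # zero-fill for arithmetic
    centers = Xz[rng.choice(len(X), n_clusters, replace=False)]
    T = t0
    while T > t_min:
        # squared distance per record/cluster over observed entries only
        d = np.stack([((Xz - c) ** 2 * obs).sum(axis=1) for c in centers])
        p = np.exp(-(d - d.min(axis=0)) / T)     # mean-field soft assignment
        p /= p.sum(axis=0)
        for c in range(n_clusters):              # weighted means of observed
            w = p[c][:, None] * obs
            centers[c] = (p[c][:, None] * Xz).sum(axis=0) \
                         / np.maximum(w.sum(axis=0), 1e-12)
        T *= cool                                # anneal the temperature
    return p.argmax(axis=0), centers
```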

A Real Time Traffic Flow Model Based on Deep Learning

  • Zhang, Shuai;Pei, Cai Y.;Liu, Wen Y.
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 8 / pp.2473-2489 / 2022
  • Urban development has brought urban traffic demand close to saturation, and congestion has become the primary problem in transportation: roads are frequently queued or congested, which seriously reduces people's willingness to travel and their travel efficiency. This paper studies a discrete-domain path planning method based on flow data. Taking traffic flow data over a highway network structure as the research object, it uses deep learning to determine path weights, optimizes the path planning algorithm, realizes a vehicle path planning application for expressways, and deploys it at a highway operating company. A path topology is constructed to transform actual road information into an abstract space the machine can understand; an appropriate data structure is used for storage, and the topology is built against the expressway modeling background so that the two map onto each other. Experiments show that the proposed method further reduces interpolation error, and that the error under random missing is smaller than under the other two missing modes. To improve the real-time performance of vehicle path planning, associated features are selected, path weights are computed comprehensively, and the structure of the traditional path planning algorithm is optimized. This is of great significance for the sustainable development of cities.
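
The abstract does not give the planning algorithm itself, so the sketch below only illustrates the final step under the assumption that a learned model has already produced edge weights: a plain Dijkstra search over a toy expressway topology. Graph contents and names are invented for illustration.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over a dict-of-dicts graph whose edge weights are assumed
    to come from a flow-prediction model (higher predicted flow means a
    larger traversal cost)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

# Toy road topology; weights stand in for model-predicted travel costs.
graph = {"A": {"B": 2.0, "C": 5.0}, "B": {"C": 1.5, "D": 4.0},
         "C": {"D": 1.0}, "D": {}}
print(shortest_path(graph, "A", "D"))     # (['A', 'B', 'C', 'D'], 4.5)
```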

Sample size calculation for comparing time-averaged responses in K-group repeated binary outcomes

  • Wang, Jijia;Zhang, Song;Ahn, Chul
    • Communications for Statistical Applications and Methods / Vol. 25, No. 3 / pp.321-328 / 2018
  • In clinical trials with repeated measurements, the time-averaged difference (TAD) may provide a more powerful evaluation of treatment efficacy than the rate of change over time when the treatment effect has rapid onset and repeated measurements continue across an extended period after a maximum effect is achieved (Overall and Doyle, Controlled Clinical Trials, 15, 100-123, 1994). The sample size formula has been investigated by many researchers for the evaluation of TAD in two treatment groups. For the evaluation of TAD in multi-arm trials, Zhang and Ahn (Computational Statistics & Data Analysis, 58, 283-291, 2013) and Lou et al. (Communications in Statistics-Theory and Methods, 46, 11204-11213, 2017b) developed the sample size formulas for continuous outcomes and count outcomes, respectively. In this paper, we derive a sample size formula to evaluate the TAD of repeated binary outcomes in multi-arm trials using the generalized estimating equation approach. The proposed sample size formula accounts for various correlation structures and missing patterns (including a mixture of independent missing and monotone missing patterns) that are frequently encountered by practitioners in clinical trials. We conduct simulation studies to assess the performance of the proposed sample size formula under a wide range of design parameters. The results show that the empirical powers and the empirical Type I errors are close to nominal levels. We illustrate our proposed method using a clinical trial example.
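
The paper's K-group formula with missing patterns is not reproduced here; the sketch below is only the textbook two-group special case under complete data and a compound-symmetry correlation, to show the shape of a TAD sample size calculation for repeated binary outcomes.

```python
from scipy.stats import norm

def n_per_group_tad(p1, p2, m, rho, alpha=0.05, power=0.8):
    """Two-group special case of a GEE time-averaged-difference sample
    size for repeated binary outcomes: m complete visits per subject and
    compound-symmetry correlation rho (design effect 1 + (m - 1) * rho)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)       # sum of binomial variances
    deff = 1 + (m - 1) * rho                  # variance inflation factor
    return z ** 2 * var * deff / (m * (p1 - p2) ** 2)

# e.g., detecting 0.50 vs 0.35 over 4 visits with rho = 0.3:
print(n_per_group_tad(0.5, 0.35, m=4, rho=0.3))   # ~79.1, so 80 per group
```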

데이터베이스 정규화 이론을 이용한 국민건강영양조사 중 다년도 식이조사 자료 정제 및 통합 (Data Cleaning and Integration of Multi-year Dietary Survey in the Korea National Health and Nutrition Examination Survey (KNHANES) using Database Normalization Theory)

  • 권남지;서지혜;이헌주
    • 한국환경보건학회지 / Vol. 43, No. 4 / pp.298-306 / 2017
  • Objectives: Since 1998, the Korea National Health and Nutrition Examination Survey (KNHANES) has been conducted in order to investigate the health and nutritional status of Koreans. The food intake data of individuals in the KNHANES has also been utilized as a source dataset for risk assessment of chemicals via food. To improve the reliability of intake estimation and prevent missing data for less-responded foods, the structure of the integrated long-standing dataset is significant. However, it is difficult to merge multi-year survey datasets due to ineffective cleaning processes for handling the extensive numbers of codes for each food item, along with changes in dietary habits over time. Therefore, this study aims at 1) cleaning abnormal data, 2) generating integrated long-standing raw data, and 3) contributing to the production of consistent dietary exposure factors. Methods: Codebooks, the guideline book, and raw intake data from KNHANES V and VI were used for analysis. The codebook was tested for violations of the primary key constraint, and the structure of the raw data was tested against the first through third normal forms (1NF-3NF) of relational database theory. Afterwards, the cleaning process was executed on the raw data using the integrated codes. Results: Duplicated key records and abnormalities in table structures were observed. After adjusting according to the suggested method above, the codes were corrected and integrated codes were newly created. Finally, we were able to clean the raw data provided by respondents to the KNHANES survey. Conclusion: The results of this study will contribute to the integration of multi-year datasets and help improve the data production system by clarifying, testing, and verifying the primary key, the integrity of the codes, and the primitive data structure according to database normalization theory in national health data.
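
A small pandas sketch of the primary-key checks described in Methods: duplicated key records and keys with missing values both violate the constraint. The column names are hypothetical, not the actual KNHANES code fields.

```python
import pandas as pd

def check_primary_key(df, key_cols):
    """Report violations of a candidate primary key: duplicated key
    records (uniqueness) and keys containing NULLs (entity integrity)."""
    dup = df[df.duplicated(subset=key_cols, keep=False)]
    null = df[df[key_cols].isna().any(axis=1)]
    return dup, null

# Illustrative codebook fragment with both kinds of violation.
codebook = pd.DataFrame({
    "food_code": ["A001", "A001", "A002", None],
    "food_name": ["rice", "rice (duplicate)", "kimchi", "unknown"],
})
dup, null = check_primary_key(codebook, ["food_code"])
print(dup)    # the two A001 rows violate uniqueness
print(null)   # the row with a missing key violates entity integrity
```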

A Study on the Incomplete Information Processing System(INiPS) Using Rough Set

  • Jeong, Gu-Beom;Chung, Hwan-Mook;Kim, Guk-Boh;Park, Kyung-Ok
    • 한국지능시스템학회 학술대회논문집: 한국퍼지및지능시스템학회 2000년도 추계학술대회 학술발표 논문집 / pp.243-251 / 2000
  • In general, Rough Set theory is used for classification, inference, and decision analysis of incomplete data by using approximation space concepts in information systems. An information system can include quantitative attribute values with interval characteristics, or incomplete data such as multiple or unknown (missing) values. Such incomplete data cause inconsistency in the information system and decrease the classification ability of systems using Rough Sets. In this paper, we present various types of incomplete data that may occur in information systems and propose the INcomplete information Processing System (INiPS), which converts an incomplete information system into a complete one using Rough Sets.
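
A minimal sketch of the approximation-space idea for incomplete data, under the common missing-as-wildcard (tolerance relation) treatment; INiPS itself performs a richer conversion, and the object/attribute names below are invented for illustration.

```python
def tolerant(x, y):
    """Tolerance relation for incomplete data: two objects agree on an
    attribute if the values match or either one is missing (None)."""
    return all(a is None or b is None or a == b for a, b in zip(x, y))

def approximations(universe, target):
    """Lower/upper approximations of `target` (a set of object indices)
    under the tolerance classes induced by missing-as-wildcard matching."""
    lower, upper = set(), set()
    for i, x in enumerate(universe):
        cls = {j for j, y in enumerate(universe) if tolerant(x, y)}
        if cls <= target:
            lower.add(i)     # tolerance class entirely inside the target
        if cls & target:
            upper.add(i)     # tolerance class touches the target
    return lower, upper

# Objects with attributes (headache, temperature); None marks missing data.
objs = [("yes", "high"), ("yes", None), ("no", "normal"), (None, "high")]
flu = {0, 1}                        # decision class to approximate
print(approximations(objs, flu))    # -> (set(), {0, 1, 3})
```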