• Title/Abstract/Keyword: prediction algorithm

Search results: 2,748 items (processing time: 0.028 seconds)

Blind MMSE Equalization of FIR/IIR Channels Using Oversampling and Multichannel Linear Prediction

  • Chen, Fangjiong;Kwong, Sam;Kok, Chi-Wah
    • ETRI Journal / Vol. 31, No. 2 / pp. 162-172 / 2009
  • A linear-prediction-based blind equalization algorithm for single-input single-output (SISO) finite impulse response/infinite impulse response (FIR/IIR) channels is proposed. The new algorithm is based on second-order statistics, and it does not require channel order estimation. By oversampling the channel output, the SISO channel model is converted to a special single-input multiple-output (SIMO) model. Two forward linear predictors with consecutive prediction delays are applied to the subchannel outputs of the SIMO model. It is demonstrated that the partial parameters of the SIMO model can be estimated from the difference between the prediction errors when the length of the predictors is sufficiently large. The sufficient filter length for achieving the optimal prediction is also derived. Based on the estimated parameters, both batch and adaptive minimum-mean-square-error equalizers are developed. The performance of the proposed equalizers is evaluated by computer simulations and compared with existing algorithms.
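
To make the oversampling and multichannel linear prediction steps concrete, the following is a minimal numpy sketch under toy assumptions (a hypothetical T/2-spaced channel, arbitrary noise level and predictor length); it illustrates the general technique of converting a SISO channel into a two-subchannel SIMO model and fitting a forward linear predictor by least squares, not the authors' actual algorithm.

```python
# Toy illustration (not the paper's algorithm): oversample a SISO channel output
# by a factor of 2 to obtain a 2-subchannel SIMO model, then fit a forward
# linear predictor to the stacked subchannel outputs by least squares.
import numpy as np

rng = np.random.default_rng(0)
N = 5000                                   # number of symbols
s = rng.choice([-1.0, 1.0], size=N)        # i.i.d. BPSK source

# Hypothetical T/2-spaced channel: two interleaved subchannel impulse responses.
h_even = np.array([1.0, 0.4, -0.2])        # subchannel 1 (even output samples)
h_odd = np.array([0.6, -0.3, 0.1])         # subchannel 2 (odd output samples)
x1 = np.convolve(s, h_even)[:N] + 0.01 * rng.standard_normal(N)
x2 = np.convolve(s, h_odd)[:N] + 0.01 * rng.standard_normal(N)
x = np.vstack([x1, x2])                    # SIMO observation, shape (2, N)

L = 10                                     # predictor length (assumed long enough)
# Forward linear prediction: predict x[:, n] from x[:, n-1], ..., x[:, n-L].
Y = np.vstack([x[:, L - k:N - k] for k in range(1, L + 1)])   # regressors, (2L, N-L)
T = x[:, L:N]                                                 # targets,    (2, N-L)
W = T @ np.linalg.pinv(Y)                  # least-squares predictor coefficients
err = T - W @ Y                            # forward prediction error
print("prediction-error variance per subchannel:", err.var(axis=1))
```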


K-ToBI 기호에 준한 F0 곡선 생성 알고리듬 (A computational algorithm for F0 contour generation in Korean developed with prosodically labeled databases using K-ToBI system)

  • 이용주;이숙향;김종진;고현주;김영일;김상훈;이정철
    • 대한음성학회지:말소리 / No. 35-36 / pp. 131-143 / 1998
  • This study describes an F0 contour generation algorithm for Korean sentences and its evaluation results. A total of 400 K-ToBI-labeled utterances, read by one male and one female announcer, were used. The F0 contour generation system uses two classification trees to predict K-ToBI labels for the input text and 11 regression trees to predict F0 values for those labels. Evaluation showed 77.2% prediction accuracy for IP boundaries and 72.0% for AP boundaries. Voicing and segment-duration information was taken from the original data and left unchanged for F0 contour generation and its evaluation. In the F0 generation experiment using labeling information from the original speech data, the system showed an RMS error of 23.5 Hz and a correlation coefficient of 0.55.
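
As a rough illustration of the two-stage tree design described above, here is a hypothetical scikit-learn sketch in which one decision tree predicts a prosodic-boundary label from text-derived features and a regression tree then predicts an F0 value; all features, labels, and data are synthetic placeholders, not the K-ToBI corpus.

```python
# Hypothetical sketch of the tree-based pipeline: a classification tree predicts
# a prosodic-boundary label (none/AP/IP) from text-derived features, and a
# regression tree then predicts an F0 value for the labeled syllable.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 1000
# Toy per-syllable "text" features: position in phrase, word length, POS id, ...
X_text = rng.random((n, 4))
boundary = rng.integers(0, 3, size=n)                      # 0=none, 1=AP, 2=IP (synthetic)
f0 = 120 + 40 * boundary + 10 * rng.standard_normal(n)     # synthetic F0 targets (Hz)

label_tree = DecisionTreeClassifier(max_depth=5).fit(X_text, boundary)
pred_label = label_tree.predict(X_text)

# The regression tree uses the text features plus the predicted label.
X_f0 = np.column_stack([X_text, pred_label])
f0_tree = DecisionTreeRegressor(max_depth=5).fit(X_f0, f0)
rmse = np.sqrt(np.mean((f0_tree.predict(X_f0) - f0) ** 2))
print(f"toy F0 RMSE: {rmse:.1f} Hz")
```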


재무부실화 예측을 위한 랜덤 서브스페이스 앙상블 모형의 최적화 (Optimization of Random Subspace Ensemble for Bankruptcy Prediction)

  • 민성환
    • 한국IT서비스학회지 / Vol. 14, No. 4 / pp. 121-135 / 2015
  • Ensemble classification uses multiple classifiers instead of a single one, and ensemble classifiers have recently attracted much attention in the data mining community because ensemble learning techniques have proved very useful for improving prediction accuracy. Bagging, boosting, and random subspace are the most popular ensemble methods. In the random subspace method, each base classifier is trained on a randomly chosen feature subspace of the original feature space, and the outputs of the base classifiers are usually aggregated by a simple majority vote. In this study, we applied the random subspace method to the bankruptcy prediction problem and proposed a method for optimizing the random subspace ensemble: a genetic algorithm was used to optimize the classifier subset of the ensemble. The proposed genetic-algorithm-based random subspace ensemble was applied to bankruptcy prediction using a real data set and compared with other models. Experimental results showed that the proposed model outperformed the other models.
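
A minimal sketch of the idea, assuming synthetic data and arbitrary GA settings: a random subspace ensemble of decision trees is built, and a small genetic algorithm searches over 0/1 membership masks to pick the classifier subset whose majority vote maximizes validation accuracy. It illustrates the technique, not the paper's exact configuration.

```python
# Random subspace ensemble + tiny genetic algorithm over classifier subsets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=600, n_features=30, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

# Random subspace ensemble: each tree sees a random half of the features.
n_base = 20
subspaces = [rng.choice(X.shape[1], size=15, replace=False) for _ in range(n_base)]
trees = [DecisionTreeClassifier(random_state=i).fit(X_tr[:, f], y_tr)
         for i, f in enumerate(subspaces)]
votes = np.array([t.predict(X_va[:, f]) for t, f in zip(trees, subspaces)])

def fitness(mask):
    """Validation accuracy of the majority vote over the selected members."""
    if mask.sum() == 0:
        return 0.0
    maj = (votes[mask.astype(bool)].mean(axis=0) >= 0.5).astype(int)
    return (maj == y_va).mean()

# Tiny GA over 0/1 membership masks (truncation selection, one-point crossover).
pop = rng.integers(0, 2, size=(30, n_base))
for _ in range(40):
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[-10:]]
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 10, size=2)]
        cut = rng.integers(1, n_base)
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_base) < 0.05            # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = max(pop, key=fitness)
print("selected members:", int(best.sum()), "val accuracy:", round(fitness(best), 3))
```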

퍼지이론과 SVM 결합을 통한 기업부도예측 최적화 (Optimized Bankruptcy Prediction through Combining SVM with Fuzzy Theory)

  • 최소윤;안현철
    • 디지털융복합연구 / Vol. 13, No. 3 / pp. 155-165 / 2015
  • Bankruptcy prediction has long been one of the important research topics in finance and has been studied steadily since the 1960s; in Korea, its importance has been emphasized since the IMF crisis. For more accurate bankruptcy prediction, this study proposes a new hybrid model based on the support vector machine (SVM), which is known to provide high predictive power while mitigating overfitting: fuzzy theory is used to expand the input variables, and a genetic algorithm (GA) is used to search for optimal or near-optimal input variable subsets and parameters. To validate the usefulness of the proposed model, experiments were conducted on data of non-externally-audited heavy-industry firms from H Bank, with logistic regression, discriminant analysis, decision trees, case-based reasoning, artificial neural networks, and SVM selected as comparison models. The experimental results showed that the proposed model outperformed all comparison models in predictive power. By newly proposing a multi-technique hybrid model with superior predictive performance, this study is expected to make both academic and practical contributions to the field of bankruptcy prediction.
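
The following is a hypothetical sketch of the fuzzy-expansion idea: each (scaled) input ratio is mapped to low/medium/high triangular membership degrees and the expanded features feed an RBF SVM. A plain grid search stands in here for the paper's genetic search over input subsets and parameters, and the data are synthetic.

```python
# Fuzzy expansion of inputs + SVM; grid search used as a simple stand-in for GA.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def fuzzify(X):
    """Triangular memberships on [0, 1]: low, medium, high per column."""
    low = np.clip(1.0 - 2.0 * X, 0.0, 1.0)
    med = np.clip(1.0 - 2.0 * np.abs(X - 0.5), 0.0, 1.0)
    high = np.clip(2.0 * X - 1.0, 0.0, 1.0)
    return np.hstack([low, med, high])

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X01 = MinMaxScaler().fit_transform(X)          # memberships assume [0, 1] inputs
X_fuzzy = fuzzify(X01)

search = GridSearchCV(SVC(kernel="rbf"),
                      {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}, cv=5)
search.fit(X_fuzzy, y)
print("best params:", search.best_params_, "cv accuracy:", round(search.best_score_, 3))
```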

Vest-type System on Machine Learning-based Algorithm to Detect and Predict Falls

  • Ho-Chul Kim;Ho-Seong Hwang;Kwon-Hee Lee;Min-Hee Kim
    • PNF and Movement / Vol. 22, No. 1 / pp. 43-54 / 2024
  • Purpose: Falls among persons older than 65 years are a significant concern due to their frequency and severity. This study aimed to develop a vest-type embedded artificial intelligence (AI) system capable of detecting and predicting falls in various scenarios. Methods: We developed a vest-type embedded AI system to detect and predict falls in various directions and situations. To train the AI, we collected acceleration and gyroscope values from a six-axis sensor attached to the seventh cervical vertebra and the second sacral vertebra of the user, allowing accurate motion analysis of the human body. The model was constructed using a neural-network-based AI prediction algorithm to anticipate the direction of falls from the collected pedestrian data. Results: We focused on developing a lightweight and efficient fall prediction model for integration into an embedded AI system, ensuring real-time network optimization. The accuracy of fall occurrence and direction prediction using the trained fall prediction model was 89.0% and 78.8%, respectively; for the model quantized for embedded porting, the accuracy was 87.0% and 75.5%, respectively. Conclusion: The developed fall detection and prediction system, designed as a vest with an embedded AI algorithm, offers the potential to provide real-time feedback to pedestrians in clinical settings and to proactively prepare for accidents.
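
As a loose, hypothetical stand-in for the fall-direction classifier, the sketch below trains a small MLP on per-window statistics of synthetic 6-axis IMU data; the window size, features, labels, and model are assumptions, and the paper's embedded, quantized network is not reproduced.

```python
# Toy fall-direction classifier on windowed 6-axis IMU statistics.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
n_windows, win = 800, 50
imu = rng.standard_normal((n_windows, win, 6))      # ax, ay, az, gx, gy, gz (synthetic)
direction = rng.integers(0, 4, size=n_windows)      # 0=fwd, 1=back, 2=left, 3=right

# Simple per-window features: mean and standard deviation of each channel.
feats = np.concatenate([imu.mean(axis=1), imu.std(axis=1)], axis=1)   # shape (n, 12)

X_tr, X_te, y_tr, y_te = train_test_split(feats, direction, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("toy direction accuracy:", round(clf.score(X_te, y_te), 3))
```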

동적 데이터베이스 기반 태풍 진로 예측 (Dynamic data-base Typhoon Track Prediction (DYTRAP))

  • 이윤제;권혁조;주동찬
    • 대기 / Vol. 21, No. 2 / pp. 209-220 / 2011
  • A new consensus algorithm for tropical cyclone track prediction has been developed. A conventional consensus is a simple average of a few fixed models that have shown good track-prediction performance over the past few years, whereas the consensus in this study is a weighted average of a few models that may change for every individual forecast. The models are selected as follows: the first step is to find past tropical cyclone tracks analogous to the current track; the next step is to evaluate the model performances for those past tracks; finally, we take the weighted average of the selected models, giving more weight to the better-performing models. The new algorithm has been named DYTRAP (DYnamic data-base Typhoon tRAck Prediction) in the sense that a database is used to find the analogous past tracks and the effective models for every individual track prediction case. DYTRAP was applied to all 2009 tropical cyclone track predictions, and its results outperform those of all individual models as well as the official forecasts of the typhoon centers. To prove the real usefulness of DYTRAP, the system needs to be applied in real-time prediction, because the forecasts at typhoon centers usually rely on 6- or 12-hour-old model guidance.
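
A minimal sketch of the consensus step, with placeholder numbers: candidate models are screened and weighted by their (inverse) position error on analogous past tracks, and the forecast is the weighted average of the selected models. The analogue-retrieval step and all values below are illustrative assumptions.

```python
# Weighted model consensus for a single forecast lead time (toy values).
import numpy as np

# Mean position error (km) of each candidate model on the retrieved analogous
# past tracks (hypothetical values).
past_error_km = {"modelA": 180.0, "modelB": 140.0, "modelC": 260.0}

# Current forecasts: (latitude, longitude) at one lead time (hypothetical).
forecast = {"modelA": (24.1, 128.6), "modelB": (24.4, 128.2), "modelC": (23.8, 129.1)}

selected = [m for m, e in past_error_km.items() if e < 250.0]   # drop poor performers
weights = np.array([1.0 / past_error_km[m] for m in selected])
weights /= weights.sum()                    # higher weight for lower past error

positions = np.array([forecast[m] for m in selected])
consensus = weights @ positions
print("selected:", selected, "consensus lat/lon:", np.round(consensus, 2))
```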

기계학습 알고리즘을 이용한 반도체 테스트공정의 불량 예측 (Defect Prediction Using Machine Learning Algorithm in Semiconductor Test Process)

  • 장수열;조만식;조슬기;문병무
    • 한국전기전자재료학회논문지 / Vol. 31, No. 7 / pp. 450-454 / 2018
  • Because of the rapidly changing environment and high uncertainty, the semiconductor industry needs appropriate forecasting technology. In particular, both the cost and the time of the test process are increasing because the process is becoming more complicated and there are more factors to consider. In this paper, we propose a model that predicts a final "good" or "bad" result on the basis of preconditioning test data generated in the semiconductor test process. The proposed model addresses the classification and regression problems that are often dealt with in the semiconductor process and provides reliable predictions. We implemented the prediction model with various machine learning algorithms and compared the performance of the models built with each algorithm. Actual data from the semiconductor test process were used for model construction and test verification.
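
In the spirit of the comparison described above, the following hedged sketch cross-validates a few off-the-shelf classifiers on synthetic, imbalanced "good/bad" data; the features and models are assumptions, not the paper's actual test data or algorithm set.

```python
# Cross-validated comparison of candidate classifiers on imbalanced toy data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)     # imbalanced: few "bad" dies
models = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "svm_rbf": SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name:14s} mean F1 = {scores.mean():.3f}")
```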

Evolutionary Computing Driven Extreme Learning Machine for Object-Oriented Software Aging Prediction

  • Ahamad, Shahanawaj
    • International Journal of Computer Science & Network Security / Vol. 22, No. 2 / pp. 232-240 / 2022
  • To fulfill user expectations, the rapid evolution of software techniques and approaches has necessitated reliable and flawless software operation. Predicting aging in operational software is becoming a basic and unavoidable requirement for ensuring a system's availability, reliability, and operation. In this paper, an improved evolutionary-computing-driven extreme learning scheme (ECD-ELM) is suggested for object-oriented software aging prediction. To perform aging prediction, we employed a variety of metrics, including program size, McCabe complexity metrics, Halstead metrics, runtime failure event metrics, and some unique aging-related metrics (ARM). In the suggested paradigm, OOP software metrics are extracted after pre-processing, which includes outlier detection and normalization; this improved the proposed system's ability to deal with instances with unbalanced biases and metrics. Further, different dimensionality reduction and feature selection algorithms, such as principal component analysis (PCA), linear discriminant analysis (LDA), and t-test analysis, were applied. We suggest a single-hidden-layer multi-feed-forward neural network (SL-MFNN) based ELM, where an adaptive genetic algorithm (AGA) is applied to estimate the weight and bias parameters for ELM learning. Unlike traditional neural network models, the implementation of the GA-based ELM with LDA feature selection outperformed other aging prediction approaches in terms of prediction accuracy, precision, recall, and F-measure. The results affirm that outlier detection, normalization of imbalanced metrics, LDA-based feature selection, and GA-based ELM can be a reliable solution for object-oriented software aging prediction.
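
To clarify the ELM component, here is a minimal sketch of a standard extreme learning machine: one hidden layer with fixed random weights and an output layer solved by least squares. The paper instead tunes the hidden weights and biases with an adaptive genetic algorithm; plain random weights are used here for brevity, and the data are synthetic.

```python
# Minimal extreme learning machine (ELM) for a binary aging/not-aging label.
import numpy as np
from sklearn.datasets import make_classification

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=400, n_features=15, random_state=0)

n_hidden = 64
W = rng.standard_normal((X.shape[1], n_hidden))     # hidden-layer weights (random here)
b = rng.standard_normal(n_hidden)                   # hidden-layer biases
H = np.tanh(X @ W + b)                              # hidden-layer activations
beta = np.linalg.pinv(H) @ y                        # output weights via pseudo-inverse

pred = (H @ beta > 0.5).astype(int)                 # threshold the regression output
print("training accuracy of the toy ELM:", round((pred == y).mean(), 3))
```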

신경회로망 예측기법을 결합한 Dynamic Rate Leaky Bucket 알고리즘의 구현 (An implementation of the dynamic rate leaky bucket algorithm combined with a neural network based prediction)

  • 이두헌;신요안;김영한
    • 한국통신학회논문지 / Vol. 22, No. 2 / pp. 259-267 / 1997
  • The advent of B-ISDN using ATM (asynchronous transfer mode) has made possible a variety of new multimedia services; however, it has also created a congestion control problem due to the bursty nature of various traffic sources. To tackle this problem, UPC/NPC (user parameter control/network parameter control) has been actively studied, and the DRLB (dynamic rate leaky bucket) algorithm, in which the token generation rate is changed according to the state of the data source and the buffer occupancy, is a good example of UPC/NPC. However, the DRLB algorithm suffers from low efficiency and difficult real-time implementation for bursty traffic sources, because the token generation rate is determined only from the present state of the network. In this paper, we propose a more flexible and effective congestion control algorithm that combines the DRLB algorithm with a neural-network-based prediction to remedy these drawbacks, and we verify the efficacy of the proposed method by computer simulations.
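
A hedged sketch of the dynamic-rate idea: a small regressor (standing in for the paper's neural predictor) forecasts the next interval's cell count from the previous few intervals, and the leaky bucket generates that many tokens. The traffic model and bucket parameters are toy assumptions.

```python
# Dynamic-rate leaky bucket with a predicted token generation rate (toy setup).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
T = 400
arrivals = rng.poisson(lam=8 + 6 * np.sin(np.arange(T) / 15.0))   # bursty toy traffic

k = 4                                        # predict from the last k intervals
X = np.array([arrivals[i:i + k] for i in range(T - k)])
y = arrivals[k:]
predictor = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X[:200], y[:200])

bucket, capacity, dropped = 0.0, 50.0, 0
for i in range(200, T - k):
    rate = float(np.clip(predictor.predict(X[i:i + 1])[0], 0, 20))  # dynamic token rate
    bucket = min(capacity, bucket + rate)    # add the predicted number of tokens
    cells = arrivals[i + k]
    sent = min(cells, bucket)
    bucket -= sent
    dropped += cells - sent                  # non-conforming cells
print("cells dropped in the toy simulation:", int(dropped))
```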


유색잡음에 대한 적응잡음제거기의 성능향상 (Performance improvement of an adaptive noise canceller with colored noise)

  • 박장식;조성환;손경식
    • 한국통신학회논문지 / Vol. 22, No. 10 / pp. 2339-2347 / 1997
  • The performance of an adaptive noise canceller (ANC) using the LMS algorithm is degraded by gradient noise due to target speech signals. An adaptive noise canceller with a speech detector was previously proposed to reduce this performance degradation; the speech detector used an adaptive prediction-error filter adapted by the NLMS algorithm. This paper discusses enhancing the performance of the adaptive noise canceller for colored noise. The affine projection algorithm, which is known to converge faster than the NLMS algorithm for correlated signals, is used to adapt both the adaptive filter and the adaptive prediction-error filter. When speech is detected by the speech detector, the coefficients of the adaptive filter are adapted by a sign-error affine projection algorithm, which is modified to reduce the misalignment of the adaptive filter coefficients; otherwise, they are adapted by the affine projection algorithm. The proper step size of the sign-error affine projection algorithm is also discussed to obtain better performance. Computer simulation results show that the performance of the proposed ANC is better than that of the conventional one.
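
To illustrate the affine projection update itself, the sketch below runs a basic affine projection algorithm (APA) in a toy system-identification setting with a colored AR(1) input, where APA is known to adapt faster than NLMS; the filter length, projection order, and step size are arbitrary choices, not the paper's ANC configuration.

```python
# Basic affine projection algorithm (APA) on a colored-input identification task.
import numpy as np

rng = np.random.default_rng(9)
N, L, P, mu, delta = 4000, 8, 4, 0.5, 1e-3   # samples, taps, projection order, step, reg.

# Colored input: first-order autoregressive process.
u = np.zeros(N)
for n in range(1, N):
    u[n] = 0.9 * u[n - 1] + rng.standard_normal()

h_true = rng.standard_normal(L)               # unknown system to identify
d = np.convolve(u, h_true)[:N] + 0.01 * rng.standard_normal(N)

w = np.zeros(L)
for n in range(L + P, N):
    # X holds the last P input vectors (one per column), e the matching errors.
    X = np.column_stack([u[n - p - L + 1:n - p + 1][::-1] for p in range(P)])
    e = d[n - np.arange(P)] - X.T @ w
    w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)   # APA update

print("max tap error after adaptation:", float(np.max(np.abs(w - h_true))))
```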
