• Title/Summary/Keyword: Probabilistic Neural Network (확률신경망)


A Study on Speaker-Independent Speech Recognition Using a Hybrid System of Semi-Continuous HMM and RBF (반연속 HMM과 RBF 혼합 시스템을 이용한 화자독립 음성인식에 관한 연구)

  • Moon Yun Joo;June Sun Do;Kang Chul Ho
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.36-39
    • /
    • 1999
  • In this paper, a hybrid of the conventional semi-continuous HMM and the RBF (Radial Basis Function) neural network algorithm is applied to speech recognition. During training, the conventional semi-continuous HMM obtains a mixture probability that models the features of the input speech probabilistically, using L Gaussian probability densities shared across all models and states together with mixture density coefficients that determine the weight of each Gaussian density, and it learns the initial probabilities, transition probabilities, observation probabilities, mean vectors $\mu$, and covariance matrices $\Sigma$ using maximum likelihood and the Baum-Welch algorithm. The proposed RBF/semi-continuous HMM hybrid, however, adds a modified form of the RBF so that the observation parameters of the semi-continuous HMM are determined by the RBF, yielding a more discriminative speaker-independent recognition system. As a result, the recognition experiments show an improved recognition rate over the conventional semi-continuous HMM.
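
As a rough illustration of the shared-mixture observation probability described in this abstract (not the authors' implementation), the sketch below computes $b_j(o) = \sum_{l=1}^{L} c_{jl}\,\mathcal{N}(o;\mu_l,\Sigma_l)$ from L Gaussian densities shared across all states; the function name, array shapes, and variable names are assumptions for the example.

```python
import numpy as np
from scipy.stats import multivariate_normal

def semicontinuous_obs_prob(o, means, covs, mix_coef):
    """Observation probability b_j(o) of one semi-continuous HMM state.

    o        : (D,) feature vector of one input speech frame
    means    : (L, D) mean vectors of the L shared Gaussian densities
    covs     : (L, D, D) covariance matrices of the shared densities
    mix_coef : (L,) state-specific mixture density coefficients (sum to 1)
    """
    densities = np.array([
        multivariate_normal.pdf(o, mean=means[l], cov=covs[l])
        for l in range(len(means))
    ])
    return float(mix_coef @ densities)  # weighted sum over the shared densities
```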


Training Network Design Based on Convolution Neural Network for Object Classification in few class problem (소 부류 객체 분류를 위한 CNN기반 학습망 설계)

  • Lim, Su-chang;Kim, Seung-Hyun;Kim, Yeon-Ho;Kim, Do-yeon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.1
    • /
    • pp.144-150
    • /
    • 2017
  • Recently, deep learning has been used for intelligent data processing and accuracy improvement. It forms a computational model composed of multiple data-processing layers that learn data representations through abstraction at various levels. Convolutional neural networks, a category of deep learning, are utilized in various research fields such as human pose estimation, face recognition, image classification, and speech recognition. A CNN with deep layers and many classes shows good performance on image classification and achieves a high classification rate, but it suffers from overfitting when only a small amount of data is available. Therefore, we design a training network based on a convolutional neural network and train it on our image data set for object classification in a few-class problem. The experiments show a classification rate 7.06% higher on average than that of previous networks designed to classify objects in a 1000-class problem.
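
As a generic illustration of a small CNN classifier for a few-class problem (the paper's actual architecture is not reproduced here), a minimal PyTorch sketch follows; the layer sizes, input resolution, and class count are assumptions.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal CNN for a few-class image classification problem."""
    def __init__(self, num_classes: int = 5):  # assumed class count
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                        # helps against overfitting on small data
            nn.Linear(32 * 16 * 16, num_classes),   # assumes 64x64 input images
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# usage: logits = SmallCNN()(torch.randn(8, 3, 64, 64))
```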

Parameter Estimation in Debris Flow Deposition Model Using Pseudo Sample Neural Network (의사 샘플 신경망을 이용한 토석류 퇴적 모델의 파라미터 추정)

  • Heo, Gyeongyong;Lee, Chang-Woo;Park, Choong-Shik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.11
    • /
    • pp.11-18
    • /
    • 2012
  • The debris flow deposition model predicts the areas affected by a debris flow, and a random walk model (RWM) was used to build it. Although the model proved effective in predicting affected areas, it has several free parameters that are decided experimentally. There are several well-known methods to estimate parameters; however, they cannot be applied directly to the debris flow problem due to the small size of the training data. In this paper, a modified neural network, called the pseudo sample neural network (PSNN), is proposed to overcome the sample size problem. In the training phase, PSNN uses pseudo samples, which are generated from the existing samples. The pseudo samples smooth the solution space and reduce the probability of falling into a local optimum. As a result, PSNN can estimate parameters more robustly than traditional neural networks do. All of this is demonstrated through experiments using artificial and real data sets.
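
The abstract does not give the paper's exact pseudo-sample generation rule; the sketch below assumes a simple Gaussian jitter around the existing samples to convey the idea of augmenting a small training set and smoothing the solution space. All names and constants are illustrative.

```python
import numpy as np

def make_pseudo_samples(X, y, n_per_sample=5, noise_scale=0.05, seed=0):
    """Generate pseudo samples by perturbing the few existing samples.

    X : (N, D) original inputs, y : (N,) original targets.
    Returns augmented arrays containing the originals plus jittered copies.
    """
    rng = np.random.default_rng(seed)
    scale = noise_scale * X.std(axis=0)            # per-feature noise magnitude
    X_pseudo = np.concatenate([
        X + rng.normal(0.0, scale, size=X.shape) for _ in range(n_per_sample)
    ])
    y_pseudo = np.tile(y, n_per_sample)            # pseudo targets reuse the originals
    return np.concatenate([X, X_pseudo]), np.concatenate([y, y_pseudo])
```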

A Case Study: Improvement of Wind Risk Prediction by Reclassifying the Detection Results (풍해 예측 결과 재분류를 통한 위험 감지확률의 개선 연구)

  • Kim, Soo-ock;Hwang, Kyu-Hong
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.3
    • /
    • pp.149-155
    • /
    • 2021
  • Early warning systems for weather risk management in the agricultural sector have been developed to predict potential wind damage to crops. These systems take into account the daily maximum wind speed to determine the critical wind speed that causes fruit drops and provide the weather risk information to farmers. In an effort to increase the accuracy of wind risk predictions, an artificial neural network for binary classification was implemented. In the present study, the daily wind speed and other weather data, measured in 2019 at weather stations at sites of interest in Jeollabuk-do and Jeollanam-do as well as Gyeongsangbuk-do and part of Gyeongsangnam-do provinces, were used for training the neural network. These weather stations include 210 synoptic and automated weather stations operated by the Korea Meteorological Administration (KMA). The wind speed data collected at the same locations between January 1 and December 12, 2020 were used to validate the neural network model. The data collected from December 13, 2020 to February 18, 2021 were used to evaluate the wind risk prediction performance before and after the use of the artificial neural network. The critical wind speed of damage risk was determined to be 11 m/s, which is the wind speed reported to cause fruit drops and damage. Furthermore, the maximum wind speeds were expressed using the Weibull distribution probability density function for warning of wind damage. It was found that the accuracy of wind damage risk prediction improved from 65.36% to 93.62% after re-classification using the artificial neural network, although the error rate also increased from 13.46% to 37.64%. The machine learning approach used in the present study is likely to benefit cases where a failure of the risk warning system to predict damage is a relatively serious issue.
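
As a worked illustration of expressing the daily maximum wind speed with a Weibull distribution and deriving a damage-risk probability for the 11 m/s threshold mentioned above, the snippet below uses assumed shape and scale values, not the study's fitted parameters.

```python
from scipy.stats import weibull_min

# Assumed Weibull parameters for the daily maximum wind speed (illustrative only)
shape, scale = 2.0, 6.0          # k (shape), lambda (scale, m/s)
critical_speed = 11.0            # wind speed reported to cause fruit drops

# P(max wind speed > 11 m/s) = exp(-(11/scale)**shape), via the survival function
risk = weibull_min.sf(critical_speed, c=shape, scale=scale)
print(f"Probability of exceeding {critical_speed} m/s: {risk:.3f}")
```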

Interacting Multiple Model Vehicle-Tracking System Based on Neural Network (신경회로망을 이용한 다중모델 차량추적 시스템)

  • Hwang, Jae-Pil;Park, Seong-Keun;Kim, Eun-Tai
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.5
    • /
    • pp.641-647
    • /
    • 2009
  • In this paper, a new filtering scheme for adaptive cruise control (ACC) systems is presented. In the proposed scheme, identifying the mode of the preceding vehicle is treated as a classification problem and is carried out by a neural network classifier. The classifier outputs a posterior probability of the mode of the preceding vehicle, and this probability is used directly in the IMM framework. Finally, ten scenarios are constructed and the proposed NIMM is tested on them to show its validity.
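
To illustrate the idea of feeding a classifier's posterior mode probabilities into an IMM-style combination of per-model estimates (a standard IMM mixing step, not the paper's full filter), a minimal sketch follows; the variable names and two-model setup are assumptions.

```python
import numpy as np

def imm_combine(estimates, covariances, mode_posterior):
    """Combine per-model state estimates using mode probabilities.

    estimates      : (M, n) state estimate of each motion model
    covariances    : (M, n, n) covariance of each model's estimate
    mode_posterior : (M,) posterior mode probabilities, e.g. the softmax output
                     of the neural network classifier (sums to 1)
    """
    x = np.einsum("m,mn->n", mode_posterior, estimates)        # mixed estimate
    P = np.zeros_like(covariances[0])
    for mu, xi, Pi in zip(mode_posterior, estimates, covariances):
        d = (xi - x)[:, None]
        P += mu * (Pi + d @ d.T)                               # spread-of-means term
    return x, P
```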

A Study on the Hopfield Network for automatic weapon assignment (자동무장할당을 위한 홉필드망 설계연구)

  • 이양원;강민구;이봉기
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.1 no.2
    • /
    • pp.183-191
    • /
    • 1997
  • A neural network-based algorithm for the static weapon-target assignment (WTA) problem is presented in this paper. An optimal WTA is one that allocates targets to weapon systems such that the total expected leakage value of targets surviving the defense is minimized. The proposed algorithm is based on Hopfield and Tank's neural network model and uses K x M processing elements called binary neurons, where M is the number of weapon platforms and K is the number of targets. Software simulation results for example battle scenarios show that the proposed method converges faster than other methods when optimal initial values are used.
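
A rough sketch of an asynchronous Hopfield-style update over K x M binary neurons is given below. The energy used here (one-hot row/column constraints plus the leakage cost of selected pairs) is a simplified stand-in for the paper's formulation, and the penalty weight and iteration count are assumptions.

```python
import numpy as np

def hopfield_wta(cost, n_iter=2000, A=10.0, seed=0):
    """Asynchronous Hopfield-style assignment with K x M binary neurons.

    cost : (K, M) expected leakage value if weapon m engages target k (lower is better).
    Energy = A/2 * (row and column one-hot constraint penalties) + sum of selected costs.
    Returns a (K, M) 0/1 assignment matrix (approximate minimiser).
    """
    rng = np.random.default_rng(seed)
    K, M = cost.shape
    v = rng.integers(0, 2, size=(K, M)).astype(float)   # binary neuron states
    for _ in range(n_iter):
        k, m = rng.integers(K), rng.integers(M)
        row_others = v[k].sum() - v[k, m]        # other neurons for the same target
        col_others = v[:, m].sum() - v[k, m]     # other neurons using the same weapon
        # energy change if this neuron is switched on instead of off
        delta_E = A * (row_others + col_others - 1.0) + cost[k, m]
        v[k, m] = 1.0 if delta_E < 0 else 0.0    # keep whichever state lowers the energy
    return v
```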


Neural Equalization Techniques in Partial Erasure Model of Nonlinear Magnetic Recording Channel (부분 삭제 모델로 나타난 비선형 자기기록 채널에서의 신경망 등화기법)

  • Choi, Soo-Yong;Ong, Sung-Hwan;You, Cheol-Woo;Hong, Dae-Sik
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.12
    • /
    • pp.103-108
    • /
    • 1998
  • The increase in the capacity of digital magnetic recording systems inevitably causes severe intersymbol interference (ISI) and nonlinear distortions in the digital magnetic recording channel. In this paper, to cope with severe ISI and nonlinear distortions, a neural decision feedback equalizer (NDFE) is applied to the digital magnetic recording channel, modeled as a partial erasure channel. The bit error probability (bit error rate, BER) of the NDFE and the conventional decision feedback equalizer (DFE) is compared via computer simulations. It is found that as nonlinear distortions increase, the NDFE gains a larger SNR (signal-to-noise ratio) advantage over the conventional DFE. In addition, at the same recording density, as nonlinear distortions increase, the NDFE shows better BER performance and greater stability than the conventional DFE.
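
To illustrate the decision-feedback structure named in this abstract with a neural network as the decision device (the network architecture, tap counts, and training are omitted and are assumptions, not the paper's design), a minimal detection-loop sketch follows.

```python
import numpy as np

def neural_dfe(received, net, n_ff=5, n_fb=3):
    """Decision feedback equalization with a neural network decision device.

    received : (N,) channel output samples from the nonlinear recording channel
    net      : callable mapping a (n_ff + n_fb,) input vector to a soft symbol value,
               e.g. a small trained MLP (training is not shown in this sketch)
    Returns the sequence of detected +-1 symbols.
    """
    decisions = []
    fb = np.zeros(n_fb)                                  # past decisions fed back
    padded = np.concatenate([np.zeros(n_ff - 1), received])
    for n in range(len(received)):
        ff = padded[n:n + n_ff][::-1]                    # current feedforward taps
        soft = net(np.concatenate([ff, fb]))             # neural network output
        d = 1.0 if soft >= 0 else -1.0                   # hard decision
        decisions.append(d)
        fb = np.concatenate([[d], fb[:-1]])              # shift decision into feedback
    return np.array(decisions)
```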


Improving Generalization Performance of Neural Networks using Natural Pruning and Bayesian Selection (자연 프루닝과 베이시안 선택에 의한 신경회로망 일반화 성능 향상)

  • 이현진;박혜영;이일병
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.3_4
    • /
    • pp.326-338
    • /
    • 2003
  • The objective of neural network design and model selection is to construct an optimal network with good generalization performance. However, training data include noise and the number of training data is not sufficient, which results in a difference between the true probability distribution and the empirical one. This difference makes the learning parameters over-fit the training data and deviate from the true distribution of the data, which is called the overfitting phenomenon. An overfitted neural network gives good approximations for the training data but bad predictions for untrained new data. As the complexity of the neural network increases, this overfitting phenomenon becomes more severe. In this paper, taking a statistical viewpoint, we propose an integrated process for neural network design and model selection in order to improve generalization performance. First, using natural gradient learning with adaptive regularization, we try to obtain optimal parameters that are not overfitted to the training data, with fast convergence. By applying natural pruning to the obtained optimal parameters, we generate several candidate network models of different sizes. Finally, we select an optimal model among the candidate models based on the Bayesian Information Criterion. Through computer simulations on benchmark problems, we confirm the generalization and structure optimization performance of the proposed integrated process of learning and model selection.
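
A small sketch of the final model-selection step: choosing among candidate networks of different sizes with the Bayesian Information Criterion, $\mathrm{BIC} = -2\ln\hat{L} + k\ln n$. The candidate list, likelihood values, and sample count below are placeholders, not results from the paper.

```python
import numpy as np

def bic(log_likelihood, n_params, n_samples):
    """Bayesian Information Criterion: -2 ln L + k ln n (lower is better)."""
    return -2.0 * log_likelihood + n_params * np.log(n_samples)

# candidates produced by pruning: (description, log-likelihood, number of parameters)
candidates = [("full net", -120.4, 350), ("pruned A", -124.1, 180), ("pruned B", -131.0, 90)]
n_samples = 500   # size of the training set (placeholder)

best = min(candidates, key=lambda c: bic(c[1], c[2], n_samples))
print("selected model:", best[0])
```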

Application of multiple linear regression and artificial neural network models to forecast long-term precipitation in the Geum River basin (다중회귀모형과 인공신경망모형을 이용한 금강권역 강수량 장기예측)

  • Kim, Chul-Gyum;Lee, Jeongwoo;Lee, Jeong Eun;Kim, Hyeonjun
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.10
    • /
    • pp.723-736
    • /
    • 2022
  • In this study, monthly precipitation forecasting models that can predict up to 12 months in advance were constructed for the Geum River basin, and two statistical techniques, multiple linear regression (MLR) and artificial neural network (ANN), were applied to the model construction. As predictor candidates, a total of 47 climate indices were used, including 39 global climate patterns provided by the National Oceanic and Atmospheric Administration (NOAA) and 8 meteorological factors for the basin. Forecast models were constructed by using the climate indices with high correlation, obtained by analyzing the teleconnection between the monthly precipitation and each climate index for the past 40 years based on the forecast month. In the goodness-of-fit test results for the average value of forecasts of each month for 1991 to 2021, the MLR models showed -3.3 to -0.1% for the percent bias (PBIAS), 0.45 to 0.50 for the Nash-Sutcliffe efficiency (NSE), and 0.69 to 0.70 for the Pearson correlation coefficient (r), whereas the ANN models showed PBIAS of -5.0 to +0.5%, NSE of 0.35 to 0.47, and r of 0.64 to 0.70. The mean values predicted by the MLR models were found to be closer to the observations than those of the ANN models. The probability of including observations within the forecast range for each month was 57.5 to 83.6% (average 72.9%) for the MLR models and 71.5 to 88.7% (average 81.1%) for the ANN models, indicating that the ANN models showed better results. The tercile probability by month was 25.9 to 41.9% (average 34.6%) for the MLR models and 30.3 to 39.1% (average 34.7%) for the ANN models. Both models showed long-term predictability of monthly precipitation, with an average tercile probability of 33.3% or more. In conclusion, the difference in predictability between the two models was found to be relatively small. However, judging from the hit rate for the prediction range or the tercile probability, the monthly deviation in predictability was relatively small for the ANN models.
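
An illustrative sketch of the predictor-screening and MLR step described above: selecting climate indices by lagged correlation with monthly precipitation, then fitting a multiple linear regression. The lead time, correlation threshold, and variable names are assumptions, not the study's settings.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

def fit_mlr_forecast(indices: pd.DataFrame, precip: pd.Series, lag=3, r_min=0.3):
    """Fit an MLR precipitation forecast from lagged climate indices.

    indices : monthly climate index values (columns = index names)
    precip  : monthly basin precipitation aligned to the same dates
    lag     : lead time in months; r_min : minimum |correlation| to keep a predictor
    """
    lagged = indices.shift(lag).iloc[lag:]           # index values `lag` months earlier
    target = precip.iloc[lag:]
    corr = lagged.corrwith(target).abs()
    selected = corr[corr >= r_min].index.tolist()    # teleconnection screening
    model = LinearRegression().fit(lagged[selected], target)
    return model, selected
```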

Centroid Neural Network with Bhattacharyya Kernel (Bhattacharyya 커널을 적용한 Centroid Neural Network)

  • Lee, Song-Jae;Park, Dong-Chul
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.9C
    • /
    • pp.861-866
    • /
    • 2007
  • A clustering algorithm for Gaussian Probability Distribution Function (GPDF) data, called the Centroid Neural Network with a Bhattacharyya Kernel (BK-CNN), is proposed in this paper. The proposed BK-CNN is based on the unsupervised competitive Centroid Neural Network (CNN) and employs a kernel method for data projection. The kernel method adopted in the proposed BK-CNN projects data from the low-dimensional input feature space into a higher-dimensional feature space so that nonlinear problems in the input space can be solved linearly in the feature space. In order to cluster the GPDF data, the Bhattacharyya kernel is used to measure the distance between two probability distributions for data projection. With the incorporation of the kernel method, the proposed BK-CNN is capable of dealing with nonlinear separation boundaries and can successfully allocate more code vectors in the regions where GPDF data are densely distributed. When applied to GPDF data in an image classification problem, the experimental results show that the proposed BK-CNN algorithm gives 1.7%-4.3% improvement in average classification accuracy over other conventional algorithms such as the k-means, Self-Organizing Map (SOM), and CNN algorithms with a Bhattacharyya distance, referred to as Bk-Means, B-SOM, and B-CNN algorithms.
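
For reference, the Bhattacharyya distance between two Gaussian distributions is $D_B=\frac{1}{8}(\mu_1-\mu_2)^{\top}\Sigma^{-1}(\mu_1-\mu_2)+\frac{1}{2}\ln\frac{|\Sigma|}{\sqrt{|\Sigma_1||\Sigma_2|}}$ with $\Sigma=(\Sigma_1+\Sigma_2)/2$. The sketch below computes it and an exponential kernel from it; the kernel width parameter is an assumption, not necessarily the form used in the paper.

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian distributions."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def bhattacharyya_kernel(mu1, cov1, mu2, cov2, gamma=1.0):
    """Kernel value between two GPDF data points (gamma = 1 gives the
    Bhattacharyya coefficient; other widths are an assumed generalization)."""
    return np.exp(-gamma * bhattacharyya_distance(mu1, cov1, mu2, cov2))
```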