• Title/Summary/Keyword: neural network training

Training Method and Speaker Verification Measures for Recurrent Neural Network based Speaker Verification System

  • Kim, Tae-Hyung
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.3C / pp.257-267 / 2009
  • This paper presents a training method for neural networks and the use of the MSE (mean square error) value as the basis for deciding on the identity claim of a speaker in a recurrent neural network based speaker verification system. Recurrent neural networks (RNNs) are employed to capture the temporally dynamic characteristics of the speech signal. In the supervised learning process for RNNs, target outputs are generated automatically so that they represent the temporal variation of the input speech sounds. To increase the capability of discriminating between the true speaker and an impostor, a discriminative training method for RNNs is presented. The paper shows the use and effectiveness of the MSE value, obtained from the Euclidean distance between the target outputs and the network outputs for a speaker's test speech, as the basis of speaker verification. In terms of equal error rate, experiments performed on a Korean speech database show that the proposed speaker verification system outperforms a conventional hidden Markov model based speaker verification system.
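
The decision measure described above — an MSE value computed from the Euclidean distance between the target outputs and the network outputs, compared against a threshold — can be sketched as follows. This is a minimal illustration in Python; the array shapes, the fixed-threshold decision, and the function names are assumptions, not details taken from the paper.

```python
import numpy as np

def verification_score(target_outputs, network_outputs):
    """MSE between target outputs and RNN outputs over all frames.

    Both arrays are assumed to have shape (num_frames, output_dim)."""
    diff = target_outputs - network_outputs
    return np.mean(np.sum(diff ** 2, axis=1))  # mean squared Euclidean distance per frame

def verify_claim(target_outputs, network_outputs, threshold):
    """Accept the identity claim when the MSE falls below the threshold."""
    return verification_score(target_outputs, network_outputs) < threshold
```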

Recognition of Tobacco Ripeness & Grading based on the Neural Network (신경회로망을 이용한 담배 숙도인식 및 등급판정)

  • LEE, S.S.;LEE, C.H.;LEE, D.W.;HWANG, H.
    • Journal of the Korean Society of Tobacco Science / v.17 no.1 / pp.5-14 / 1995
  • Efficient algorithms for the automatic classification of flue-cured tobacco ripeness and for grading have been developed. The ripeness of the tobacco was classified into 4 levels based on color. A lab-built simple RGB color measuring system was used to detect the light reflectance of the tobacco leaves, and the measured data were used for training an artificial neural network; the performance of the trained network was also tested on untrained samples. A spectrophotometer was used to detect the light reflectance and absorption of the graded tobacco leaves over the visible-light frequency range, and a statistical analysis of the measurements was performed to investigate the light characteristics of the graded samples. The measured data were obtained directly from samples of 5 different grades, without considering leaf positions. Those data were used for training the artificial neural network, and the performance of the trained network was again tested on untrained samples. The neural-network-based sensor information processing showed successful results for grading tobacco leaves.
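
The abstract describes training a small neural network to map measured RGB reflectance values to one of four ripeness levels. A minimal sketch of that kind of classifier is shown below; the placeholder data, feature layout, network size, and use of scikit-learn are assumptions, since the paper's actual data and architecture are not given in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical RGB reflectance measurements (one row per leaf) and ripeness labels 0-3.
rng = np.random.default_rng(0)
rgb_train = rng.random((200, 3))                 # placeholder R, G, B reflectance values
ripeness_train = rng.integers(0, 4, 200)         # placeholder ripeness levels

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(rgb_train, ripeness_train)

rgb_test = rng.random((20, 3))                   # "untrained" samples
predicted_levels = clf.predict(rgb_test)
```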

Damaged Traffic Sign Recognition using Hopfield Networks and Fuzzy Max-Min Neural Network (홉필드 네트워크와 퍼지 Max-Min 신경망을 이용한 손상된 교통 표지판 인식)

  • Kim, Kwang Baek
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.11 / pp.1630-1636 / 2022
  • The results of current traffic sign detection methods are hindered by environmental conditions and by the condition of the traffic sign itself. In this paper, we therefore propose a method for improving the detection performance for damaged traffic signs using a Hopfield network and a Fuzzy Max-Min neural network. In the proposed method, the characteristics of damaged traffic signs are analyzed and configured as training patterns, which the Fuzzy Max-Min neural network uses to initially classify the characteristics of the traffic signs. The images whose initial characteristics have been classified are restored using the Hopfield network, and the restored images are classified once again by the Fuzzy Max-Min neural network to finally classify and detect the damaged traffic signs. Eight traffic signs with varying degrees of damage were used to evaluate the proposed method, which showed an average improvement of 38.76% in classification performance over the Fuzzy Max-Min neural network alone.
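
The pipeline above relies on a Hopfield network as an associative memory that restores a damaged sign image before re-classification. Below is a minimal, generic Hopfield restoration sketch (Hebbian storage with synchronous sign updates); the bipolar encoding and update schedule are assumptions, and the Fuzzy Max-Min classification stages are omitted.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for stored bipolar (+1/-1) patterns, shape (num_patterns, n)."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)                      # no self-connections
    return w / patterns.shape[0]

def restore(w, damaged, steps=10):
    """Iteratively update a damaged bipolar pattern toward a stored attractor."""
    s = damaged.copy()
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1, -1)         # synchronous sign update
    return s
```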

A Study on Performance Improvement of Fuzzy Min-Max Neural Network Using Gating Network

  • Kwak, Byoung-Dong;Park, Kwang-Hyun;Z. Zenn Bien
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.492-495 / 2003
  • The Fuzzy Min-Max Neural Network (FMMNN) is a powerful classifier, but it has some problems: the learning result depends on the presentation order of the input data and on the training parameter that limits the size of a hyperbox, and the latter affects the result seriously. In this paper, a new approach that alleviates this problem without losing the on-line learning ability is proposed. A committee machine is used to build a multi-resolution FMMNN: each expert is an FMMNN with a fixed training parameter, so the advantages of small and large training parameters are exploited at the same time. The parameters are selected using performance and independence measures, and the decision of each expert is weighted by a gating network, giving a divide-and-conquer scheme that is both regional and parametric. Simulations show that the proposed method has better classification performance.
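
The committee-of-experts idea with a gating network can be sketched generically as below; the linear-softmax gate and the interface of the expert classifiers are assumptions, since the abstract does not specify the exact gating architecture.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def committee_decision(x, experts, gating_weights):
    """Combine class scores from several experts through a gating network.

    experts: callables returning a class-score vector for input x (e.g. FMMNN
             membership values, each trained with a different hyperbox-size parameter).
    gating_weights: matrix mapping x to one gate score per expert."""
    gates = softmax(gating_weights @ x)                   # one mixing weight per expert
    scores = np.stack([expert(x) for expert in experts])  # shape (num_experts, num_classes)
    combined = gates @ scores                             # gated sum of expert scores
    return int(np.argmax(combined))
```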

Postprocessing for Tonality and Repeatability, and Average Neural Networks for Training Multiple Songs in Automatic Composition (자동작곡에서 조성과 반복구성을 위한 후처리 방법 및 다수 곡 학습을 위한 평균 신경망 방법)

  • Kim, Kyunghwan;Jung, Sung Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.6 / pp.445-451 / 2016
  • This paper introduces a postprocessing method, an iteration method for melody, and an average neural network method for learning a large number of songs, in order to improve the musically insufficient parts of automatic composition based on existing artificial neural networks. The melody of songs composed by artificial neural networks follows the melodies of the trained songs, so it does not settle into a specific tonality and rarely has a repetitive structure. To solve these problems, we propose a postprocessing method that converts the melody composed by the artificial neural network into a melody with a specific tonality according to music theory, and an iteration method that introduces repetition by iteratively composing measure divisions with the artificial neural network. In addition, the existing method of training on many songs has some disadvantages; to address this, we adopt an average neural network, obtained by averaging the weights of artificial neural networks trained on each song. Experiments confirmed that the proposed methods solve the existing problems.
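
The "average neural network" described above is formed by averaging the weights of networks trained separately on each song. A minimal sketch of that averaging step follows; representing each trained network as a list of weight arrays is an assumption about how the networks are stored.

```python
import numpy as np

def average_network(per_song_weights):
    """Average the corresponding weight arrays of networks trained on individual songs.

    per_song_weights: one list of numpy weight arrays per song, all networks
    sharing the same architecture (same number and shapes of arrays)."""
    n_songs = len(per_song_weights)
    return [sum(layer_weights) / n_songs
            for layer_weights in zip(*per_song_weights)]

# Example with two "songs" and a two-layer network of made-up shapes.
net_a = [np.ones((4, 3)), np.ones((3, 1))]
net_b = [np.zeros((4, 3)), np.zeros((3, 1))]
averaged = average_network([net_a, net_b])      # every entry becomes 0.5
```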

Learning an Artificial Neural Network Using Dynamic Particle Swarm Optimization-Backpropagation: Empirical Evaluation and Comparison

  • Devi, Swagatika;Jagadev, Alok Kumar;Patnaik, Srikanta
    • Journal of information and communication convergence engineering / v.13 no.2 / pp.123-131 / 2015
  • Training neural networks is a complex task of great importance in the field of supervised learning. In the training process, a set of input-output patterns is presented repeatedly to an artificial neural network (ANN), and from those patterns the weights of all interconnections between neurons are adjusted until the specified input yields the desired output. In this paper, a new hybrid algorithm is proposed for global optimization of the connection weights of an ANN. Dynamic swarms are shown to converge rapidly during the initial stages of a global search, but around the global optimum the search process becomes very slow. In contrast, the gradient descent method achieves faster convergence around the global optimum while keeping the convergence accuracy relatively high. Therefore, the proposed hybrid algorithm, referred to as DPSO-BP, combines the dynamic particle swarm optimization (DPSO) algorithm with the backpropagation (BP) algorithm to train the weights of an ANN. We show the superiority, in both time performance and solution quality, of the proposed hybrid algorithm over more standard algorithms for neural network training. The algorithms are compared on two different datasets, and simulation results are reported.
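
The hybrid idea, a swarm-based global search handing the best weight vector over to gradient descent for local refinement, can be sketched as below. This uses a plain PSO update rather than the paper's dynamic variant, and the hyperparameters, interfaces, and hand-off point are assumptions.

```python
import numpy as np

def pso_then_bp(loss, grad, dim, n_particles=20, pso_iters=50, bp_iters=200, lr=0.05):
    """Hybrid weight search: PSO-style global stage, then gradient descent refinement.

    loss(w) -> scalar training error for weight vector w; grad(w) -> its gradient."""
    rng = np.random.default_rng(0)
    pos = rng.normal(size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(pso_iters):                           # global search stage
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    w = gbest
    for _ in range(bp_iters):                            # local refinement stage
        w = w - lr * grad(w)
    return w
```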

Trends in Deep-neural-network-based Dialogue Systems (심층 신경망 기반 대화처리 기술 동향)

  • Kwon, O.W.;Hong, T.G.;Huang, J.X.;Roh, Y.H.;Choi, S.K.;Kim, H.Y.;Kim, Y.K.;Lee, Y.K.
    • Electronics and Telecommunications Trends / v.34 no.4 / pp.55-64 / 2019
  • In this study, we introduce trends in neural-network-based deep learning research applied to dialogue systems. Recently, end-to-end trainable goal-oriented dialogue systems using long short-term memory and sequence-to-sequence models, among others, have been studied to overcome the difficulties of domain adaptation and of error recognition and recovery in traditional pipeline goal-oriented dialogue systems. In addition, some research has applied reinforcement learning to end-to-end trainable goal-oriented dialogue systems to learn dialogue strategies that do not appear in the training corpora. Recent neural network models for end-to-end trainable chit-chat systems have been improved by using dialogue context as well as personal and topic information to produce more natural human conversation. Unlike previous studies, which applied different approaches to goal-oriented dialogue systems and chit-chat systems, recent studies have attempted to apply end-to-end trainable approaches based on deep neural networks to both in a unified way. Acquiring dialogue corpora for training remains necessary, so future research will focus on acquiring dialogue corpora easily and cheaply, and on training with small annotated dialogue corpora and/or large collections of raw dialogues.

A fault diagnostic system for a chemical process using artificial neural network (인공 신경 회로망을 이용한 화학공정의 이상진단 시스템)

  • 최병민;윤여홍;윤인섭
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1990.10a / pp.131-134 / 1990
  • A back-propagation neural network based system for fault diagnosis of a chemical process is developed. Training data are acquired from an FCD (Fault-Consequence Digraph) model. To improve the resolution of the diagnosis, the system is decomposed into 6 subsystems, and the training data are composed of 0, 1, and intermediate values. The feasibility of this approach is tested through case studies on a real plant, a naphtha furnace, which had previously been used to develop a knowledge-based expert system, OASYS (Operation Aiding expert SYStem).
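
A minimal sketch of how one such subsystem network could be trained on symptom patterns containing 0, 1, and intermediate values is shown below; the feature layout, fault labels, and use of scikit-learn are illustrative assumptions (the paper derives its training patterns from the FCD model).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical symptom patterns for one of the six subsystems: each feature is
# 0 (normal), 1 (abnormal), or an intermediate value; each label is a fault cause.
symptoms = np.array([[1.0, 0.0, 0.5],
                     [0.0, 1.0, 0.0],
                     [0.5, 0.5, 1.0]])
faults = np.array([0, 1, 2])

net = MLPClassifier(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)
net.fit(symptoms, faults)
print(net.predict([[1.0, 0.0, 0.4]]))   # diagnose a new symptom pattern
```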

Prediction of Etch Profile Uniformity Using Wavelet and Neural Network

  • Park, Won-Sun;Lim, Myo-Taeg;Kim, Byungwhan
    • International Journal of Control, Automation, and Systems / v.2 no.2 / pp.256-262 / 2004
  • Conventionally, profile non-uniformity has been characterized by relying on an approximated profile described by angle or anisotropy. In this study, a new non-uniformity model for the etch profile is presented by applying a discrete wavelet transform to the image obtained from a scanning electron microscope (SEM). Prediction models for the wavelet-transformed data are then constructed using a back-propagation neural network. The proposed method was applied to data collected from the etching of tungsten, and 7 additional experiments were conducted to obtain test data. Model performance was evaluated in terms of the average prediction accuracy (APA) and the best prediction accuracy (BPA). To take into account the randomness of the initial weights, two hundred models were generated for each set of training factors. The behaviors of the APA and BPA were investigated as a function of the training factors, including training tolerance, number of hidden neurons, initial weight distribution, and the two slopes of the bipolar sigmoid and linear functions. For all variations of the training factors, the APA was not consistent with the BPA. The prediction accuracy was optimized using three approaches: the best-model-based approach, the average-model-based approach, and the combined-model-based approach. Although the first approach gave the largest APA, its BPA was the smallest of the three approaches.
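
The workflow above (wavelet-transform an SEM image, predict the transformed data with a back-propagation network, and summarize many randomly initialized models by their average and best accuracy) can be sketched as follows. Everything here is a placeholder: the random data, the image size, the network configuration, and the simple accuracy measure are assumptions and do not reproduce the paper's APA/BPA definitions.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def wavelet_features(image):
    """Single-level 2-D discrete wavelet transform of an image, flattened into a vector."""
    cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

rng = np.random.default_rng(0)
X_train, X_test = rng.random((10, 3)), rng.random((7, 3))                   # process settings
imgs_train, imgs_test = rng.random((10, 16, 16)), rng.random((7, 16, 16))   # stand-in SEM images
y_train = np.array([wavelet_features(im) for im in imgs_train])
y_test = np.array([wavelet_features(im) for im in imgs_test])

# Many models differing only in their random initial weights; track average and best accuracy.
accuracies = []
for seed in range(20):                                   # the paper generates 200 models
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=1000, random_state=seed)
    model.fit(X_train, y_train)
    err = np.mean(np.abs(model.predict(X_test) - y_test))
    accuracies.append(1.0 / (1.0 + err))                 # placeholder accuracy measure
apa, bpa = np.mean(accuracies), np.max(accuracies)
```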

Modeling High Power Semiconductor Device Using Backpropagation Neural Network (역전파 신경망을 이용한 고전력 반도체 소자 모델링)

  • Kim, Byung-Whan;Kim, Sung-Mo;Lee, Dae-Woo;Roh, Tae-Moon;Kim, Jong-Dae
    • The Transactions of the Korean Institute of Electrical Engineers D / v.52 no.5 / pp.290-294 / 2003
  • Using a backpropagation neural network (BPNN), a high-power semiconductor device was empirically modeled. The device modeled is an n-LDMOSFET, and its electrical characteristics were measured with an HP4156A and a Tektronix 370A curve tracer. The drain-source current $(I_{DS})$ was measured over drain-source voltages $(V_{DS})$ ranging from 1 V to 200 V at each gate-source voltage $(V_{GS})$. For each $V_{GS}$, the BPNN was trained with 100 training data points, and the trained model was tested with another 100 test data points not included in the training data. The prediction accuracy of each $V_{GS}$ model was optimized as a function of training factors, including training tolerance, number of hidden neurons, initial weight distribution, and the two gradients of the activation functions. Predictions from the optimized models were highly consistent with the actual measurements.
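
The per-$V_{GS}$ modeling task above is a one-dimensional regression of $I_{DS}$ against $V_{DS}$ with a small back-propagation network. A minimal sketch is shown below; the synthetic saturating curve, the input scaling, and the network size are placeholders, not the paper's measurements or configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical I-V data for one gate-source voltage: 100 (V_DS, I_DS) training pairs.
v_ds = np.linspace(1.0, 200.0, 100).reshape(-1, 1)
i_ds = 0.5 * np.tanh(v_ds / 50.0).ravel()               # stand-in saturating I-V curve

model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     max_iter=5000, random_state=0)
model.fit(v_ds / 200.0, i_ds)                           # scale the input to a small range

v_test = np.array([[25.0], [100.0], [175.0]]) / 200.0
predicted_i_ds = model.predict(v_test)                  # predicted I_DS at unseen V_DS values
```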