• Title/Summary/Keyword: neural network learning

Speaker Verification Using Hidden LMS Adaptive Filtering Algorithm and Competitive Learning Neural Network (Hidden LMS 적응 필터링 알고리즘을 이용한 경쟁학습 화자검증)

  • Cho, Seong-Won;Kim, Jae-Min
    • The Transactions of the Korean Institute of Electrical Engineers D / v.51 no.2 / pp.69-77 / 2002
  • Speaker verification can be classified into two categories: text-dependent and text-independent speaker verification. In this paper, we discuss text-dependent speaker verification. A text-dependent speaker verification system determines whether the voice characteristics of a speaker match those of a specific person. We acquire speaker data with a sound card under various noisy conditions, apply a new Hidden LMS (Least Mean Square) adaptive algorithm to the data, and extract LPC (Linear Predictive Coding) cepstrum coefficients as feature vectors. Finally, we use a competitive learning neural network for speaker verification. The proposed hidden LMS adaptive filter, realized with a neural network, reduces noise and enhances the features under various noisy conditions. We construct a separate neural network for each speaker, so the whole network does not need to be retrained when a new speaker is added, which makes system expansion easy. Experiments show that the proposed method improves speaker verification performance.
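
The abstract builds on the standard LMS adaptive filter before feeding LPC-cepstrum features to a competitive-learning network. A minimal sketch of the plain LMS update it starts from is shown below; the paper's Hidden LMS variant embeds this in a neural network, and all names and parameters here are illustrative.

```python
import numpy as np

def lms_filter(x, d, order=8, mu=0.01):
    """Standard LMS adaptive filter: predict the desired signal d from the
    input x and adapt the weights to minimize the instantaneous error."""
    n = len(x)
    w = np.zeros(order)           # adaptive filter weights
    y = np.zeros(n)               # filter output
    e = np.zeros(n)               # error signal (d - y)
    for i in range(order, n):
        u = x[i - order:i][::-1]  # most recent `order` input samples
        y[i] = w @ u
        e[i] = d[i] - y[i]
        w += mu * e[i] * u        # LMS weight update
    return y, e, w
```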

Adaptive Fuzzy Neural Control of Unknown Nonlinear Systems Based on Rapid Learning Algorithm

  • Kim, Hye-Ryeong;Kim, Jae-Hun;Kim, Euntai;Park, Mignon
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09b / pp.95-98 / 2003
  • In this paper, an adaptive fuzzy neural control scheme for unknown nonlinear systems, based on a rapid learning algorithm, is proposed for optimal parameterization. We combine the advantages of fuzzy control and neural network techniques to develop an adaptive fuzzy control system that updates the nonlinear parameters of the controller. The Fuzzy Neural Network (FNN), constructed as an equivalent four-layer connectionist network, learns to control a process by updating its membership functions. The free parameters of the AFN controller are adjusted on-line according to the control law and adaptation law so that the plant tracks a given trajectory; their initial values are set by off-line preprocessing. To improve the convergence of the learning process, we propose a rapid learning algorithm that combines the error back-propagation algorithm with Aitken's $\delta^2$ algorithm. The heart of this approach is to reduce the computational burden during FNN learning and to improve convergence speed. Simulation results for a nonlinear plant demonstrate the control effectiveness of the proposed system for optimal parameterization.
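
Aitken's $\delta^2$ process accelerates a convergent sequence by extrapolating from three successive iterates; applied to the weight sequence produced by back-propagation it can shorten training. A minimal sketch under that reading (elementwise extrapolation with illustrative names, not the paper's exact formulation):

```python
import numpy as np

def aitken_delta2(p0, p1, p2, eps=1e-12):
    """Aitken's delta-squared extrapolation applied elementwise to three
    successive parameter iterates p0, p1, p2 (e.g. weights after three
    back-propagation steps)."""
    num = (p1 - p0) ** 2
    den = p2 - 2.0 * p1 + p0
    safe_den = np.where(np.abs(den) > eps, den, 1.0)  # avoid division by zero
    accel = p0 - num / safe_den
    # where the denominator is (near) zero, just keep the latest iterate
    return np.where(np.abs(den) > eps, accel, p2)
```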

Inverse Kinematic Learning of Robot Coordinate Transformations Using Dynamic Neural Network (동적 신경망에 의한 로봇 좌표 변환의 역기구학적 학습)

  • Cho, Hyeon-Seob;Ryu, In-Ho;Jeon, Jeong-Chay;Kim, Hee-Sook;Jang, Seong-Whan
    • Proceedings of the KIEE Conference / 1998.07g / pp.2363-2366 / 1998
  • The intent of this paper is to describe a neural network structure called the dynamic neural processor (DNP) and to examine how it can be used to develop a learning scheme for computing robot inverse kinematic transformations. The architecture and learning algorithm of the proposed dynamic neural network structure, the DNP, are described. Computer simulations demonstrate the effectiveness of the proposed learning scheme using the DNP.
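
The underlying task is to learn the mapping from end-effector coordinates back to joint angles. A minimal sketch of that inverse-kinematics learning setup for a planar two-link arm, where a static MLP stands in for the DNP (which additionally carries internal dynamics); link lengths, network sizes, and data ranges are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Planar 2-link arm (link lengths are illustrative assumptions).
L1, L2 = 1.0, 0.8

def forward_kinematics(q):
    """Map joint angles (n, 2) to end-effector positions (n, 2)."""
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# Generate (position -> joint angle) pairs and fit a static MLP; this only
# illustrates the inverse-kinematics learning task, not the DNP itself.
q = np.random.uniform(0.1, np.pi / 2, size=(5000, 2))
p = forward_kinematics(q)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(p, q)

q_hat = net.predict(forward_kinematics(np.array([[0.7, 0.9]])))
```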

A Deep Learning Model for Predicting User Personality Using Social Media Profile Images

  • Kanchana, T.S.;Zoraida, B.S.E.
    • International Journal of Computer Science & Network Security / v.22 no.11 / pp.265-271 / 2022
  • Social media is a form of internet-based communication for sharing information through content and images. A user's choice of profile image, and the type of images they post, can be closely connected to their personality, so the images users post are treated as indicators of personality traits. The objective of this study is to predict the five-factor-model personality dimensions from profile images using deep learning and neural networks. A deep-learning-based neural network framework for personality prediction is developed, so that the personality types of the Big Five factor model can be quantified from user profile images. To measure effectiveness, two models using convolutional neural networks are proposed to classify each personality trait of the user, and a performance analysis of the two models for predicting personality traits from profile images is carried out. It was found that the VGG-69 CNN model performs best, producing a classification accuracy of 91% in predicting user personality traits.
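
A common way to build such a classifier is transfer learning on a pretrained CNN backbone with one output per Big Five trait. A minimal Keras sketch along those lines; VGG16 is used purely as a stand-in backbone, and the input size, head layers, and multi-label output are assumptions rather than details from the paper:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained VGG16 backbone as a stand-in for the paper's CNN; outputs are
# five scores, one per Big Five personality dimension (assumed setup).
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # fine-tune only the classification head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(5, activation="sigmoid"),  # one score per trait
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(profile_image_dataset, epochs=10)  # dataset not included here
```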

Learning Less Random to Learn Better in Deep Reinforcement Learning with Noisy Parameters

  • Kim, Chayoung
    • Journal of Advanced Information Technology and Convergence / v.9 no.1 / pp.127-134 / 2019
  • In deep Reinforcement Learning (RL), exploration can be performed stochastically over the actions of a state space, while exploitation relies on behaviors that already generalize well. Balancing exploration and exploitation is extremely important for good results. Randomly selecting actions with ε-greedy exploration has been regarded as the de facto method. An alternative is to add noise parameters to the neural network for richer exploration. However, it is not easy to predict or detect over-fitting under stochastic exploration in such a perturbed neural network, and well-trained RL agents do not necessarily prevent or detect over-fitting either. Therefore, we suggest a novel deep RL design that balances exploration with drop-out to reduce over-fitting in the perturbed neural networks.
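
A minimal sketch of the two ingredients the abstract contrasts: ε-greedy action selection, and a network whose stochastic perturbation (here, dropout left active at action-selection time) itself supplies exploration. All names and shapes are illustrative, not the paper's design:

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng=np.random.default_rng()):
    """Classic epsilon-greedy exploration: with probability epsilon pick a
    random action, otherwise pick the action with the highest Q-value."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def noisy_forward(weights, state, drop_prob, rng=np.random.default_rng()):
    """One hidden layer with dropout kept active at action-selection time,
    so the perturbation itself drives exploration (illustrative only)."""
    W1, W2 = weights
    h = np.maximum(0.0, state @ W1)         # ReLU hidden layer
    mask = rng.random(h.shape) > drop_prob  # dropout mask
    h = h * mask / (1.0 - drop_prob)        # inverted dropout scaling
    return h @ W2                           # Q-values per action
```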

Memristor Bridge Synapse-based Neural Network Circuit Design and Simulation of the Hardware-Implemented Artificial Neuron (멤리스터 브리지 시냅스 기반 신경망 회로 설계 및 하드웨어적으로 구현된 인공뉴런 시뮬레이션)

  • Yang, Chang-ju;Kim, Hyongsuk
    • Journal of Institute of Control, Robotics and Systems / v.21 no.5 / pp.477-481 / 2015
  • The implementation of memristor-based multilayer neural networks and their hardware-based learning architecture is investigated in this paper. The two major functions of neural networks that should be embedded in synapses are programmable memory and analog multiplication. The memristor, a newly developed device, provides both of these functions. In this paper, multilayer neural networks are implemented with memristors, and the Random Weight Change algorithm is adopted and implemented in circuits for learning. The hardware-based learning of the neural networks is two orders of magnitude faster than its software counterpart.
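
Random Weight Change is a gradient-free learning rule that suits analog hardware: every weight is nudged by a small fixed amount, and the perturbation directions are kept while the error keeps falling and re-randomized when it rises. A minimal sketch of one common formulation (not the paper's circuit-level implementation):

```python
import numpy as np

def random_weight_change(loss_fn, w, delta=0.01, steps=1000,
                         rng=np.random.default_rng()):
    """Random Weight Change (RWC) learning: perturb every weight by +/-delta
    each step; keep the same perturbation directions while the error keeps
    decreasing, otherwise draw new random directions."""
    direction = rng.choice([-delta, delta], size=w.shape)
    prev_loss = loss_fn(w)
    for _ in range(steps):
        w = w + direction               # apply the current perturbation
        loss = loss_fn(w)
        if loss >= prev_loss:           # error did not decrease:
            direction = rng.choice([-delta, delta], size=w.shape)
        prev_loss = loss
    return w
```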

Wavelet Neural Network Controller for AQM in a TCP Network: Adaptive Learning Rates Approach

  • Kim, Jae-Man;Park, Jin-Bae;Choi, Yoon-Ho
    • International Journal of Control, Automation, and Systems / v.6 no.4 / pp.526-533 / 2008
  • We propose a wavelet neural network (WNN) control method for active queue management (AQM) in an end-to-end TCP network, trained with adaptive learning rates (ALRs). In a TCP network, AQM is important for regulating the queue length by passing or dropping packets at the intermediate routers. RED, PI, and PID algorithms have been used for AQM, but they show weaknesses in detecting and controlling congestion under dynamically changing network conditions. In our method, the WNN controller using ALRs is designed to overcome these problems. It adaptively controls the packet-dropping probability and is trained with a gradient-descent algorithm. We apply the Lyapunov theorem to verify the stability of the WNN controller using ALRs. Simulations demonstrate the effectiveness of the proposed method.
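
The controller is trained online by gradient descent with the learning rate itself adapted; the paper derives its rates from a Lyapunov stability analysis. The sketch below only shows the generic shape of such an online update with a heuristic rate adaptation, and all names and constants are assumptions:

```python
def online_controller_step(w, grad, queue_error, eta, prev_error,
                           up=1.05, down=0.7, eta_max=0.5):
    """One online training step of a controller that outputs a packet-drop
    probability.  The learning rate eta is adapted heuristically: grow it
    while the queue-length error keeps shrinking, shrink it otherwise.
    (The paper instead derives eta bounds from a Lyapunov analysis.)"""
    if abs(queue_error) < abs(prev_error):
        eta = min(eta * up, eta_max)   # making progress: speed up
    else:
        eta = eta * down               # overshooting: slow down
    w = w - eta * grad                 # gradient-descent update
    return w, eta
```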

The Development of IDMLP Neural Network for the Chip Implementation and it's Application to Speech Recognition (Chip 구현을 위한 IDMLP 신경 회로망의 개발과 음성인식에 대한 응용)

  • 김신진;박정운;정호선
    • Journal of the Korean Institute of Telematics and Electronics B / v.28B no.5 / pp.394-403 / 1991
  • This paper describes the development of the input-driven multilayer perceptron (IDMLP) neural network and its application to Korean spoken digit recognition. The IDMLP network used here and its learning algorithm are newly proposed. In this model, the weight values are integers and the transfer function of each neuron is a hard-limit function. Depending on how difficult the input data are to classify, the trained network has one or more layers. We tested recognition of binarized data for the spoken digits 0 to 9 with the proposed network. The recognition rates are 100% for the training data and 96% for the test data.
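
The abstract's two defining ingredients, integer weights and hard-limit (step) activations, can be illustrated in a few lines. A minimal sketch of one such layer; the wiring, thresholds, and example are illustrative, not the IDMLP's actual construction algorithm:

```python
import numpy as np

def hard_limit(x):
    """Hard-limit (step) activation: 1 if the weighted sum is >= 0, else 0."""
    return (x >= 0).astype(int)

def integer_weight_layer(inputs, weights, thresholds):
    """One layer of a perceptron with integer weights and hard-limit neurons.
    inputs:     (n_samples, n_in) binary matrix
    weights:    (n_in, n_out) integer matrix
    thresholds: (n_out,) integer thresholds"""
    return hard_limit(inputs @ weights - thresholds)

# Tiny example: a layer computing AND and OR of two binary inputs.
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
W = np.array([[1, 1],
              [1, 1]])          # integer weights
theta = np.array([2, 1])        # AND needs both inputs active, OR needs one
print(integer_weight_layer(x, W, theta))
```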

A Study on Unsupervised Learning Method of RAM-based Neural Net (RAM 기반 신경망의 비지도 학습에 관한 연구)

  • Park, Sang-Moo;Kim, Seong-Jin;Lee, Dong-Hyung;Lee, Soo-Dong;Ock, Cheol-Young
    • Journal of the Korea Society of Computer and Information / v.16 no.1 / pp.31-38 / 2011
  • A RAM-based neural net is a weightless neural network built from binary neurons. The 3-D neural network used in this paper is a binary neural network that stores multiple information bits and training counts. The recognition method based on the MRD technique relies on supervised learning, so the network by itself cannot distinguish between categories; it performs well only when the training data are already separated into categories. In this paper, an unsupervised learning algorithm is proposed that trains the existing 3-D neural network without category labels, distinguishing categories from the input training patterns alone. The training data for the proposed unsupervised learning are the MNIST handwritten digits (0 to 9), presented as training patterns in random order. Experiments show that the network determines a number of discriminators, each of which can be interpreted as representing one of the handwritten digits.
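
The building block of such weightless networks is a discriminator made of small RAMs addressed by tuples of input bits. A minimal WiSARD-style sketch of that idea; the paper's 3-D, count-storing network and its MRD/unsupervised extensions differ in detail:

```python
import numpy as np

class RAMDiscriminator:
    """Basic weightless (RAM-based) discriminator: the binary input is split
    into fixed tuples, each tuple addresses a small RAM, and training stores
    counts at the addressed cells (illustrative only)."""

    def __init__(self, input_bits, tuple_size, rng=np.random.default_rng(0)):
        self.mapping = rng.permutation(input_bits)   # fixed random wiring
        self.tuple_size = tuple_size
        n_rams = input_bits // tuple_size
        self.rams = [dict() for _ in range(n_rams)]  # sparse RAM contents

    def _addresses(self, x):
        bits = x[self.mapping]
        for i in range(len(self.rams)):
            chunk = bits[i * self.tuple_size:(i + 1) * self.tuple_size]
            yield i, tuple(chunk)

    def train(self, x):
        for i, addr in self._addresses(x):
            self.rams[i][addr] = self.rams[i].get(addr, 0) + 1  # store counts

    def response(self, x):
        """Number of RAMs that have seen this pattern's address before."""
        return sum(1 for i, addr in self._addresses(x) if addr in self.rams[i])
```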

Development of Convolutional Neural Network Basic Practice Cases (합성곱 신경망 기초 실습 사례 개발)

  • Hur, Kyeong
    • Journal of Practical Engineering Education / v.14 no.2 / pp.279-285 / 2022
  • In this paper, we develop a basic practice case for convolutional neural networks for a liberal arts course for non-majors; such a case is essential when designing a basic convolutional neural network course curriculum. The developed practice case focuses on understanding the working principle of the convolutional neural network and uses a spreadsheet so that the entire process can be checked visually. The practice case consists of generating image training data for supervised learning, implementing the input, convolution, pooling, and output layers sequentially, and testing the performance of the convolutional neural network on new data. By extending the practice case developed in this paper, the number of images to be recognized can be increased, or further basic practice cases can be built that produce a convolutional neural network with a higher compression rate for high-quality images. The utility of this basic convolutional neural network practice case is therefore high.
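
The per-cell arithmetic that the spreadsheet walkthrough visualizes, a convolution window followed by pooling, is easy to reproduce in code. A minimal sketch of those two operations on a tiny example; the kernel and image are illustrative, not the paper's practice data:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2-D convolution (valid padding): each output cell is the sum of an
    image patch multiplied elementwise by the kernel, exactly the per-cell
    computation a spreadsheet walkthrough would show."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling over size x size blocks."""
    h, w = feature_map.shape
    fm = feature_map[:h - h % size, :w - w % size]
    return fm.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Tiny worked example on a 4x4 binary image with a 2x2 edge-like kernel.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [1, 1, 0, 0],
                [1, 1, 0, 0]], dtype=float)
k = np.array([[1, -1],
              [1, -1]], dtype=float)
print(max_pool(np.maximum(conv2d_valid(img, k), 0)))  # ReLU, then pooling
```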