• Title/Summary/Keyword: Neural computing


Soft computing with neural networks for engineering applications: Fundamental issues and adaptive approaches

  • Ghaboussi, Jamshid;Wu, Xiping
    • Structural Engineering and Mechanics
    • /
    • v.6 no.8
    • /
    • pp.955-969
    • /
    • 1998
  • Engineering problems are inherently imprecision tolerant. Biologically inspired soft computing methods are emerging as ideal tools for constructing intelligent engineering systems that employ approximate reasoning and exhibit imprecision tolerance. They also offer built-in mechanisms for dealing with uncertainty. The fundamental issues associated with engineering applications of the emerging soft computing methods are discussed, with emphasis on neural networks. A formalism for neural network representation is presented, and recent developments in adaptive modeling of neural networks, specifically nested adaptive neural networks for constitutive modeling, are discussed.

Optimization of Memristor Devices for Reservoir Computing (축적 컴퓨팅을 위한 멤리스터 소자의 최적화)

  • Kyeongwoo Park;HyeonJin Sim;HoBin Oh;Jonghwan Lee
    • Journal of the Semiconductor & Display Technology
    • /
    • v.23 no.1
    • /
    • pp.1-6
    • /
    • 2024
  • Recently, artificial neural networks have played a crucial role in advances across various fields. Artificial neural networks are typically categorized into feedforward neural networks and recurrent neural networks. Feedforward neural networks are primarily used for processing static spatial patterns, such as image recognition and object detection, and are not suitable for handling temporal signals. Recurrent neural networks, on the other hand, face the challenges of complex training procedures and significant computational requirements. In this paper, we propose memristors suitable for reservoir computing systems, an advanced form of recurrent neural network, utilizing a mask processor. Using the characteristic equations of Ti/TiOx/TaOy/Pt, Pt/TiOx/Pt, and Ag/ZnO-NW/Pt memristors, we generated current-voltage curves and verified their memristive behavior through the presence of hysteresis. Subsequently, we trained and evaluated reservoir computing systems built on these memristors using the NIST TI-46 database. Among these systems, the reservoir computing system based on Ti/TiOx/TaOy/Pt memristors reached 99% accuracy, confirming that the Ti/TiOx/TaOy/Pt memristor structure is suitable for speech recognition inference.
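
A minimal sketch of the mask-processor (time-multiplexed) reservoir idea described above, assuming a generic leaky nonlinear node in place of the actual memristor characteristic equations; the delay task, mask size, and node parameters are illustrative, not the paper's setup.

```python
# Minimal sketch of a mask-based (time-multiplexed) reservoir with a least-squares
# readout. A generic leaky tanh node stands in for the memristor characteristic
# equations; parameters and the delay task are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_virtual = 50            # virtual nodes created by the mask processor
leak, scale = 0.3, 1.5    # assumed node dynamics parameters

def reservoir_states(u):
    """Map a 1-D input sequence to a (len(u), n_virtual) state matrix."""
    mask = rng.choice([-1.0, 1.0], size=n_virtual)   # fixed random binary mask
    x = np.zeros(n_virtual)
    states = np.empty((len(u), n_virtual))
    for t, ut in enumerate(u):
        # masked input drives the nonlinear node; np.roll couples virtual nodes
        x = (1 - leak) * x + leak * np.tanh(scale * ut * mask + np.roll(x, 1))
        states[t] = x
    return states

# toy task: recover a 3-step-delayed copy of a random input signal
u = rng.uniform(-1, 1, 500)
target = np.roll(u, 3)
S = reservoir_states(u)
W = np.linalg.lstsq(S[50:], target[50:], rcond=None)[0]   # linear readout, washout = 50
pred = S[50:] @ W
print("readout NMSE:", np.mean((pred - target[50:]) ** 2) / np.var(target[50:]))
```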


A Computing Method of a Process Coefficient in Prediction Model of Plate Temperature using Neural Network (신경망을 이용한 판온예측모델내 공정상수 설정 방법)

  • Kim, Tae-Eun;Lee, Haiyoung
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.28 no.11
    • /
    • pp.51-57
    • /
    • 2014
  • This paper presents an algorithmic technique for computing the process coefficient in a temperature prediction model for a reheating furnace, and also suggests a design method for a neural network model that finds an adequate process coefficient for arbitrary operating conditions, including test conditions. The proposed neural network uses furnace temperature, line speed, and slab information as input variables, with the process coefficient as the output variable. Reasonable process coefficients can be obtained by the algorithmic procedure proposed in this paper using process data gathered under test conditions. The neural network model also outputs the same process coefficient under the same input conditions. This means that adequate process coefficients can be found by simply evaluating the neural network model, without additional tests, even if operating conditions vary.
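
The mapping described above (furnace temperature, line speed, slab information → process coefficient) can be sketched as a small regression network. The synthetic operating ranges, the stand-in coefficient relation, and the network size below are assumptions for illustration only.

```python
# Illustrative only: a small regression network mapping (furnace temperature,
# line speed, slab thickness) to a process coefficient. Data ranges and the
# stand-in target relation are hypothetical, not the paper's process data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 300
furnace_temp = rng.uniform(1100, 1300, n)   # degC, assumed range
line_speed   = rng.uniform(1.0, 3.0, n)     # m/min, assumed range
slab_thick   = rng.uniform(200, 250, n)     # mm, assumed stand-in for "slab information"
X = np.column_stack([furnace_temp, line_speed, slab_thick])

# made-up smooth relation used only to exercise the model
y = 0.8 + 1e-4 * furnace_temp - 0.05 * line_speed + 5e-4 * slab_thick + rng.normal(0, 0.01, n)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(X, y)
print("process coefficient at a new operating condition:",
      model.predict([[1250.0, 2.0, 220.0]])[0])
```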

Performance analysis of local exit for distributed deep neural networks over cloud and edge computing

  • Lee, Changsik;Hong, Seungwoo;Hong, Sungback;Kim, Taeyeon
    • ETRI Journal
    • /
    • v.42 no.5
    • /
    • pp.658-668
    • /
    • 2020
  • In edge computing, most procedures, including data collection, data processing, and service provision, are handled at edge nodes rather than in the central cloud. This decreases the processing burden on the central cloud, enabling fast responses to end-device service requests and reducing bandwidth consumption. However, edge nodes have restricted computing, storage, and energy resources to support computation-intensive tasks such as deep neural network (DNN) inference. In this study, we analyze the effect of models with single and multiple local exits on DNN inference in an edge-computing environment. Our test results show that a single-exit model performs better than a multi-exit model at all exit points with respect to the number of locally exited samples, inference accuracy, and inference latency. These results signify that higher accuracy can be achieved with less computation when a single-exit model is adopted. In edge computing infrastructure, it is therefore more efficient to adopt a DNN model with only one or a few exit points to provide a fast and reliable inference service.
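
A hedged sketch of the local-exit mechanism discussed above: an early exit at the edge returns a result when its softmax confidence passes a threshold, otherwise the sample continues to the cloud-side layers. The layer sizes, threshold, and input shape are illustrative assumptions, not the paper's configuration.

```python
# Sketch: an edge-side block with a local exit that answers when its softmax
# confidence passes a threshold, otherwise the feature map is sent on to the
# "cloud" block. Shapes, threshold, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleLocalExitNet(nn.Module):
    def __init__(self, num_classes=10, exit_threshold=0.8):
        super().__init__()
        self.edge_block = nn.Sequential(              # would run on the edge node
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.local_exit = nn.Linear(8 * 14 * 14, num_classes)
        self.cloud_block = nn.Sequential(             # would run in the cloud
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.final_exit = nn.Linear(16 * 7 * 7, num_classes)
        self.exit_threshold = exit_threshold

    def forward(self, x):                             # x: (batch, 1, 28, 28)
        h = self.edge_block(x)
        local_logits = self.local_exit(h.flatten(1))
        conf = F.softmax(local_logits, dim=1).max(dim=1).values
        if bool((conf >= self.exit_threshold).all()): # exit locally when confident
            return local_logits, "local"
        return self.final_exit(self.cloud_block(h).flatten(1)), "cloud"

net = SingleLocalExitNet()
logits, where = net(torch.randn(1, 1, 28, 28))
print("exited at:", where, "| predicted class:", int(logits.argmax(dim=1)))
```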

An Efficient Deep Learning Based Image Recognition Service System Using AWS Lambda Serverless Computing Technology (AWS Lambda Serverless Computing 기술을 활용한 효율적인 딥러닝 기반 이미지 인식 서비스 시스템)

  • Lee, Hyunchul;Lee, Sungmin;Kim, Kangseok
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.6
    • /
    • pp.177-186
    • /
    • 2020
  • Recent advances in deep learning have improved image recognition performance in the field of computer vision, and serverless computing is emerging as a next-generation cloud computing technology for event-based cloud application development and services. Attempts to apply deep learning and serverless computing to real-world image recognition services are increasing. This paper therefore describes how to develop an efficient deep learning based image recognition service system using serverless computing technology. The proposed system serves a large neural network model to users at low cost by using AWS Lambda, which is based on serverless computing. We also show that a serverless computing system using a large neural network model can be built effectively by addressing the shortcomings of AWS Lambda, namely cold start time and capacity limitations. Through experiments, we confirmed that the proposed system, using AWS Lambda serverless computing technology, is efficient for serving large neural network models, overcoming processing time and capacity limitations while reducing cost.
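
A hedged sketch of the serverless pattern the abstract outlines: cache a large model in the Lambda container (downloaded to /tmp once per cold start) so warm invocations avoid reloading it. The bucket and key names, environment variables, and the byte-level "model" are placeholders, not the authors' implementation.

```python
# Hedged sketch of the caching pattern for a large model in AWS Lambda: download
# to /tmp once per container and reuse across warm invocations. Bucket, key, and
# the byte-level "model" are placeholders; no real inference is performed here.
import json
import os
import boto3

s3 = boto3.client("s3")
MODEL_BUCKET = os.environ.get("MODEL_BUCKET", "example-model-bucket")  # hypothetical
MODEL_KEY = os.environ.get("MODEL_KEY", "models/classifier.bin")       # hypothetical
LOCAL_MODEL_PATH = "/tmp/classifier.bin"

_model = None  # cached across warm invocations of the same Lambda container

def _load_model():
    global _model
    if _model is None:
        if not os.path.exists(LOCAL_MODEL_PATH):
            s3.download_file(MODEL_BUCKET, MODEL_KEY, LOCAL_MODEL_PATH)
        with open(LOCAL_MODEL_PATH, "rb") as f:
            _model = f.read()          # stand-in for deserializing a real network
    return _model

def lambda_handler(event, context):
    model = _load_model()
    image_key = event.get("image_key", "unknown")
    # a real service would decode the image and run inference here
    return {"statusCode": 200,
            "body": json.dumps({"image": image_key, "model_bytes": len(model)})}
```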

A Novel Soft Computing Technique for the Shortcoming of the Polynomial Neural Network

  • Kim, Dongwon;Huh, Sung-Hoe;Seo, Sam-Jun;Park, Gwi-Tae
    • International Journal of Control, Automation, and Systems
    • /
    • v.2 no.2
    • /
    • pp.189-200
    • /
    • 2004
  • In this paper, we introduce a new soft computing technique that combines fuzzy rules from a fuzzy system with polynomial neural networks (PNN). The PNN is a flexible neural architecture whose structure is developed through the modeling process. Unfortunately, the PNN has a serious drawback in that it cannot be constructed for nonlinear systems with only a small number of input variables. To overcome this limitation of the conventional PNN, we employ one of the principal soft computing components, a fuzzy system. The fuzzy system partitions the space of input variables into several subspaces, and these subspaces are utilized as new input variables to the PNN architecture. The proposed soft computing technique merges the fuzzy system and the PNN into one unified framework. As a result, we obtain a workable synergistic environment in which the main characteristics of the two modeling techniques are harmonized. The proposed method thus alleviates the problems of the PNN while providing superb performance. Identification results for a three-input nonlinear static function and a nonlinear system with two inputs are presented to demonstrate the performance of the proposed approach.
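
A simplified stand-in for the combination described above: Gaussian fuzzy memberships partition the input space, and the resulting membership values feed a second-order polynomial node fitted by least squares. A single quadratic node replaces a full self-organizing PNN, and the three-input test function is a toy assumption.

```python
# Simplified stand-in: Gaussian fuzzy memberships partition the input space and
# become new inputs to a single second-order polynomial node fitted by least
# squares (a full self-organizing PNN is not reproduced). Toy target function.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (400, 3))                        # three-input toy system
y = X[:, 0] ** 2 + np.sin(np.pi * X[:, 1]) * X[:, 2]    # assumed nonlinear target

def fuzzy_partition(X, centers=(-0.5, 0.0, 0.5), sigma=0.4):
    """Gaussian membership of every input variable to each fuzzy subspace."""
    feats = [np.exp(-((X[:, j, None] - np.array(centers)) ** 2) / (2 * sigma ** 2))
             for j in range(X.shape[1])]
    return np.hstack(feats)                             # (n, 3 inputs x 3 fuzzy sets)

def quadratic_node(Z):
    """Second-order polynomial terms, the building block of a PNN node."""
    cols = [np.ones(len(Z))] + [Z[:, i] for i in range(Z.shape[1])]
    cols += [Z[:, i] * Z[:, j]
             for i, j in combinations_with_replacement(range(Z.shape[1]), 2)]
    return np.column_stack(cols)

Phi = quadratic_node(fuzzy_partition(X))
w = np.linalg.lstsq(Phi, y, rcond=None)[0]              # least-squares coefficients
print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```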

Traffic-based reinforcement learning with neural network algorithm in fog computing environment

  • Jung, Tae-Won;Lee, Jong-Yong;Jung, Kye-Dong
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.1
    • /
    • pp.144-150
    • /
    • 2020
  • Reinforcement learning is a technology that can provide successful and creative solutions in many areas. In this work, reinforcement learning is used to deploy containers from cloud servers to fog servers, learning to maximize the reward associated with reduced traffic. The goal is to predict network traffic and optimize a traffic-based fog computing network environment for the cloud, fog, and clients. The reinforcement learning system collects network traffic data from the fog server and IoT devices. The reinforcement learning neural network, which uses the collected traffic data as input, can be built from Long Short-Term Memory (LSTM) networks in environments that support fog computing, in order to learn time-series data and predict optimized traffic. We describe the input and output values of the traffic-based reinforcement learning LSTM neural network, the node composition, the activation and error functions of the hidden layer, the method used to prevent overfitting, and the optimization algorithm.
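
A small sketch of the LSTM traffic-prediction component mentioned above, trained on a synthetic periodic trace standing in for fog/IoT traffic measurements; the window length, network size, and training loop are assumptions.

```python
# Sketch of the traffic-prediction component: an LSTM that takes a window of
# recent traffic samples and predicts the next one. The synthetic trace, window
# length, network size, and training loop are assumptions for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

class TrafficLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # predict the next traffic sample

# synthetic periodic trace standing in for fog/IoT traffic measurements
t = torch.arange(0, 200, dtype=torch.float32)
traffic = torch.sin(0.1 * t) + 0.1 * torch.randn_like(t)
window = 20
X = torch.stack([traffic[i:i + window] for i in range(len(t) - window)]).unsqueeze(-1)
y = traffic[window:].unsqueeze(-1)

model = TrafficLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(200):                        # short full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```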

Training an Artificial Neural Network for Estimating the Power Flow State

  • Sedaghati, Alireza
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.275-280
    • /
    • 2005
  • The principal focus of this research is an artificial neural network algorithm that solves multivariable nonlinear equation systems to estimate the line power flow state. First, a dynamical neural network with feedback is used to find the minimum value of the objective function at each iteration of the state estimator algorithm. In the second step, a two-layer neural network structure is derived to implement all of the different matrix-vector products that arise in the neural network state estimator analysis. With respect to hardware requirements, as they relate to the total number of internal connections, the architecture developed here preserves in its structure the pronounced sparsity of the power networks for which the state estimator analysis is to be carried out. A principal feature of the architecture is that the computing time overhead of the solution is independent of the dimensions or structure of the equation system. It is here that the ultrahigh speed of massively parallel computing in neural networks can offer major practical benefit.
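
A toy illustration of the general idea of letting a feedback (gradient) dynamical system minimize a state-estimation objective, here a linearized least-squares form J(x) = ||z - Hx||^2; the measurement matrix, noise level, and step size are made-up values, not the paper's power-system model.

```python
# Toy illustration: a feedback (gradient) dynamical system settling at the
# minimizer of J(x) = ||z - Hx||^2 for a linearized measurement model. H, z,
# the noise level, and the step size are made-up values, not power-system data.
import numpy as np

rng = np.random.default_rng(3)
H = rng.normal(size=(6, 3))              # assumed measurement matrix
x_true = np.array([1.0, -0.5, 0.25])     # "true" state
z = H @ x_true + 0.01 * rng.normal(size=6)

x = np.zeros(3)
eta = 0.02                               # integration step of the feedback dynamics
for _ in range(2000):
    x += eta * H.T @ (z - H @ x)         # dx/dt = -grad J(x)
print("estimated state:", np.round(x, 3), "true state:", x_true)
```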


Wideband Speech Reconstruction Using Modular Neural Networks (모듈화한 신경 회로망을 이용한 광대역 음성 복원)

  • Woo Dong Hun;Ko Charm Han;Kang Hyun Min;Jeong Jin Hee;Kim Yoo Shin;Kim Hyung Soon
    • MALSORI
    • /
    • no.48
    • /
    • pp.93-105
    • /
    • 2003
  • Since the telephone channel has bandlimited frequency characteristics, speech transmitted over it exhibits degraded quality. In this paper, we propose an algorithm using neural networks to reconstruct wideband speech from its narrowband version. Although a single neural network is a good tool for direct mapping, it is difficult to train on vast and complicated data. To alleviate this problem, we modularize the neural networks based on an appropriate clustering of the acoustic space. We also introduce fuzzy computing to compensate for probable misclassification at the cluster boundaries. According to our simulations, the proposed algorithm showed improved performance over both a single neural network and the conventional codebook mapping method in objective and subjective evaluations.
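
A hedged sketch of the modular scheme described above: cluster a toy acoustic feature space, train one small network per cluster, and blend module outputs with soft (fuzzy-like) weights derived from distances to the cluster centroids. The features and the narrowband-to-wideband target are synthetic stand-ins, not real speech data.

```python
# Sketch: cluster a toy feature space, train one small network per cluster, and
# blend module outputs with soft (fuzzy-like) weights from centroid distances.
# Features and the narrowband-to-wideband target are synthetic stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(600, 8))                  # toy "narrowband" feature vectors
y = np.tanh(X @ rng.normal(size=8))            # toy "wideband" target parameter

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
modules = []
for c in range(3):                             # one module network per acoustic cluster
    idx = kmeans.labels_ == c
    m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
    modules.append(m.fit(X[idx], y[idx]))

def reconstruct(Xq):
    d = np.linalg.norm(Xq[:, None, :] - kmeans.cluster_centers_[None], axis=2)
    w = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)   # fuzzy membership weights
    preds = np.column_stack([m.predict(Xq) for m in modules])
    return (w * preds).sum(axis=1)             # soft combination near cluster boundaries

print("blended prediction MSE:", np.mean((reconstruct(X) - y) ** 2))
```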


Speech Recognition of Multi-Syllable Words Using Soft Computing Techniques (소프트컴퓨팅 기법을 이용한 다음절 단어의 음성인식)

  • Lee, Jong-Soo;Yoon, Ji-Won
    • Transactions of the Society of Information Storage Systems
    • /
    • v.6 no.1
    • /
    • pp.18-24
    • /
    • 2010
  • The performance of speech recognition mainly depends on uncertain factors such as the speaker's condition and environmental effects. The present study deals with the recognition of a number of multi-syllable isolated Korean words using soft computing techniques such as the back-propagation neural network, fuzzy inference system, and fuzzy neural network. Feature patterns for speech recognition are analyzed as thirty frames of normalized 12th-order coefficients obtained from linear predictive coding and cepstrums. Using four speech recognizer models, experiments are conducted for both single speakers and multiple speakers. This study shows that the recognizer combining fuzzy logic with a back-propagation neural network and the fuzzy neural network recognizer deliver the better recognition performance.
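
A sketch of the feature-extraction step mentioned above: 12th-order LPC coefficients computed per 30 ms frame by the autocorrelation method. The synthetic AR signal and frame settings are assumptions, and the cepstral and classifier stages are omitted.

```python
# Sketch of the feature-extraction step: 12th-order LPC coefficients per 30 ms
# frame via the autocorrelation method. The AR test signal and frame settings
# are assumptions; the cepstral and classifier stages are omitted.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_frame(frame, order=12):
    """Autocorrelation-method LPC coefficients for one windowed frame."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    return solve_toeplitz((r[:-1], r[:-1]), r[1:])    # solve R a = r

fs = 8000
rng = np.random.default_rng(5)
signal = lfilter([1.0], [1.0, -0.9, 0.5], rng.normal(size=fs))  # toy AR "speech-like" signal

frame_len, hop = 240, 160                             # 30 ms frames, 20 ms hop at 8 kHz
frames = [signal[i:i + frame_len] * np.hamming(frame_len)
          for i in range(0, len(signal) - frame_len, hop)]
features = np.array([lpc_frame(f) for f in frames[:30]])   # thirty frames, 12 coefficients each
print("feature matrix shape:", features.shape)        # (30, 12)
```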