• Title/Abstract/Keywords: Neural Networks model

Search results: 1,875 items (processing time: 0.031 s)

On Neural Fuzzy Systems

  • Su, Shun-Feng;Yeh, Jen-Wei
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 14, No. 4 / pp. 276-287 / 2014
  • A neural fuzzy system (NFS) is basically a fuzzy system equipped with a learning capability adapted from the learning ideas used in neural networks. Owing to their outstanding system-modeling capability, NFSs have been widely employed in various applications. In this article, we discuss several ideas regarding the learning of NFSs for modeling systems. The first issue discussed is structure learning techniques; various ideas used in the literature are introduced and discussed. The second issue is the use of recurrent networks in NFSs to model dynamic systems, and the performance of such systems is discussed. It can be found that such a delay feedback brings only one order to the system, not all possible orders as claimed in the literature. Finally, the mechanisms and relative learning performance of NFSs trained with the recursive least squares (RLS) algorithm are reported and discussed, with the analysis focusing on the effects of interactions among rules. Two kinds of systems are considered, strict rules and generalized rules, which have different variances for their membership functions. Based on these observations, several suggestions regarding the use of the RLS algorithm in NFSs are presented.
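The recursive least squares algorithm mentioned above, commonly used to estimate consequent parameters in neural fuzzy systems, can be sketched in its generic form. This is the textbook RLS update, not the authors' specific rule-interaction analysis:

```python
import numpy as np

def rls_step(theta, P, x, y, lam=1.0):
    """One recursive least squares update.
    theta: current parameter estimate (n,)
    P: inverse covariance estimate (n, n)
    x: regressor vector (n,), y: scalar target
    lam: forgetting factor (1.0 = no forgetting)."""
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # correct by prediction error
    P = (P - np.outer(k, Px)) / lam      # update inverse covariance
    return theta, P

# Example: recover the weights of y = 2*x1 + 3*x2 from streaming data
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = np.eye(2) * 1e3   # large initial P = low confidence in theta
for _ in range(200):
    x = rng.normal(size=2)
    y = 2.0 * x[0] + 3.0 * x[1]
    theta, P = rls_step(theta, P, x, y)
print(theta)  # ≈ [2. 3.]
```

In an NFS, `x` would be the rule firing strengths (times the inputs, for first-order consequents) rather than raw features, which is exactly where the rule-interaction effects the abstract analyzes come from.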

퍼지다항식 뉴론 기반의 유전론적 최적 자기구성 퍼지 다항식 뉴럴네트워크 (Genetically Optimized Self-Organizing Fuzzy Polynomial Neural Networks Based on Fuzzy Polynomial Neurons)

  • 박호성;이동윤;오성권
    • 대한전기학회논문지:시스템및제어부문D / Vol. 53, No. 8 / pp. 551-560 / 2004
  • In this paper, we propose a new architecture of Self-Organizing Fuzzy Polynomial Neural Networks (SOFPNN) based on a genetically optimized multilayer perceptron with fuzzy polynomial neurons (FPNs), and we discuss its comprehensive design methodology, which involves mechanisms of genetic optimization, especially genetic algorithms (GAs). The proposed SOFPNN gives rise to a structurally optimized network and offers a substantial level of flexibility compared to conventional SOFPNNs. The design procedure applied in the construction of each layer deals with structural optimization, involving the selection of preferred nodes (FPNs) with specific local characteristics (such as the number of input variables, the order of the polynomial in the consequent part of the fuzzy rules, and a specific subset of input variables), and also addresses parametric optimization. Through this consecutive structural and parametric optimization, an optimized and flexible fuzzy neural network is generated in a dynamic fashion. To evaluate the performance of the genetically optimized SOFPNN, the model is tested on two time series data sets (gas furnace and chaotic time series). A comparative analysis reveals that the proposed SOFPNN exhibits higher accuracy and superb predictive capability compared to models previously reported in the literature.
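The genetic structural optimization described above can be illustrated with a minimal GA. The chromosome encoding, the "preferred" target structure, and the fitness function below are hypothetical stand-ins for the paper's actual encoding of node characteristics (input subsets and polynomial order):

```python
import random

rng = random.Random(42)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical optimal input-selection mask

def fitness(chrom):
    # Reward bits that match the (hypothetical) preferred structure;
    # in a real SOFPNN this would be a model-accuracy criterion.
    return sum(c == t for c, t in zip(chrom, TARGET))

def evolve(pop_size=30, n_gen=60, p_mut=0.05):
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(n_gen):
        def pick():
            # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, len(TARGET))       # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

On this easy problem the GA reliably finds a near-perfect match; the paper applies the same selection/crossover/mutation machinery to a far larger search space over layer structures.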

GBGNN: Gradient Boosted Graph Neural Networks

  • Eunjo Jang;Ki Yong Lee
    • Journal of Information Processing Systems / Vol. 20, No. 4 / pp. 501-513 / 2024
  • In recent years, graph neural networks (GNNs) have been extensively used to analyze graph data across various domains because of their powerful capability to learn complex graph-structured data. However, recent research has focused on improving the performance of a single GNN with only two or three layers, because stacking layers deeply causes the over-smoothing problem, which significantly degrades GNN performance. On the other hand, ensemble methods combine individual weak models to obtain better generalization performance. Among them, gradient boosting is a powerful supervised learning algorithm that adds new weak models in the direction that reduces the errors of the previously created weak models; after repeating this process, it combines the weak models to produce a strong model with better performance. While most studies have focused on improving a single GNN, improving GNN performance by combining multiple GNNs has not yet been studied much. In this paper, we propose gradient boosted graph neural networks (GBGNN), which combine multiple shallow GNNs with gradient boosting. We use shallow GNNs as weak models and create new weak models using the proposed gradient boosting-based loss function. Our empirical evaluations on three real-world datasets demonstrate that GBGNN performs much better than a single GNN. Specifically, in experiments using a graph convolutional network (GCN) and a graph attention network (GAT) as weak models on the Cora dataset, GBGNN achieves improvements of 12.3%p and 6.1%p in node classification accuracy over a single GCN and a single GAT, respectively.
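The gradient boosting loop the abstract describes can be sketched generically. As a stand-in for shallow GNN weak models (which would require a graph learning stack), the sketch below uses depth-1 regression stumps on toy tabular data; the boosting mechanics, fitting each weak model to the current residuals and accumulating predictions with a learning rate, are the same idea:

```python
import numpy as np

def fit_stump(X, r):
    """Fit a depth-1 regression stump to residuals r: pick the
    (feature, threshold) split minimizing squared error."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            pred = np.where(left, r[left].mean(), r[~left].mean())
            err = ((r - pred) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, r[left].mean(), r[~left].mean())
    _, j, t, lv, rv = best
    return lambda Z: np.where(Z[:, j] <= t, lv, rv)

def gradient_boost(X, y, n_rounds=20, lr=0.3):
    """Gradient boosting for squared loss: each weak model is fit to
    the current residuals (the negative gradient), then added to the
    ensemble scaled by a learning rate."""
    models, pred = [], np.zeros_like(y, dtype=float)
    for _ in range(n_rounds):
        residual = y - pred      # negative gradient of 0.5*(y - f)^2
        m = fit_stump(X, residual)
        models.append(m)
        pred += lr * m(X)
    return lambda Z: sum(lr * m(Z) for m in models)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(100, 2))
y = np.sign(X[:, 0] + X[:, 1])   # a target a single stump cannot fit
f = gradient_boost(X, y)
acc = np.mean(np.sign(f(X)) == y)
print(acc)
```

GBGNN replaces the stumps with two- or three-layer GNNs, sidestepping over-smoothing by keeping each weak model shallow and letting the ensemble supply the capacity.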

pRAM회로망을 위한 역전파 학습 알고리즘 (A Backpropagation Learning Algorithm for pRAM Networks)

  • 완재희;채수익
    • 전자공학회논문지B / Vol. 31B, No. 1 / pp. 107-114 / 1994
  • Hardware implementation of artificial neural networks with on-chip learning is important for real-time processing. The pRAM model is based on the probabilistic firing of a biological neuron and can be implemented in a VLSI circuit with learning capability. We derive a backpropagation learning algorithm for pRAM networks and present its circuit implementation with stochastic computation. Simulation results confirm the good convergence of the learning algorithm for pRAM networks.
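The probabilistic-firing idea behind the pRAM model can be sketched as a lookup table of firing probabilities addressed by the binary input pattern. This is an illustrative toy, not the authors' VLSI or learning formulation:

```python
import random

class PRAMNeuron:
    """Toy probabilistic RAM (pRAM) neuron: each binary input pattern
    addresses a stored firing probability, and the neuron emits a
    stochastic binary spike with that probability. Learning would
    adjust the stored memory contents."""
    def __init__(self, n_inputs, rng=None):
        self.rng = rng or random.Random(0)
        # one memory cell (a firing probability) per input pattern
        self.memory = [0.5] * (2 ** n_inputs)

    def address(self, bits):
        a = 0
        for b in bits:
            a = (a << 1) | b
        return a

    def fire(self, bits):
        return 1 if self.rng.random() < self.memory[self.address(bits)] else 0

n = PRAMNeuron(2)
n.memory = [0.0, 1.0, 1.0, 0.0]  # deterministic XOR contents
out = [n.fire([a, b]) for a in (0, 1) for b in (0, 1)]
print(out)  # [0, 1, 1, 0]
```

With probabilities strictly between 0 and 1 the output becomes stochastic, which is what makes the stochastic-computation circuit implementation and the derived backpropagation rule nontrivial.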


A Hybrid Modeling Architecture: Self-organizing Neuro-fuzzy Networks

  • Park, Byoungjun;Oh, Sungkwun
    • 제어로봇시스템학회 학술대회논문집 / ICCAS 2002 / pp. 102.1-102 / 2002
  • In this paper, we propose self-organizing neuro-fuzzy networks (SONFN) and discuss their comprehensive design methodology. The proposed SONFN is generated from the mutually combined structure of neuro-fuzzy networks (NFN) and polynomial neural networks (PNN) for model identification of complex and nonlinear systems. The NFN contributes to the formation of the premise part of the SONFN, while the consequent part is designed using the PNN. The parameters of the membership functions, the learning rates, and the momentum coefficients are adjusted with the use of genetic optimization. We discuss two kinds of SONFN architectures and propose a comprehensive learning algorithm. It is shown that this network...


The development of food image detection and recognition model of Korean food for mobile dietary management

  • Park, Seon-Joo;Palvanov, Akmaljon;Lee, Chang-Ho;Jeong, Nanoom;Cho, Young-Im;Lee, Hae-Jeung
    • Nutrition Research and Practice / Vol. 13, No. 6 / pp. 521-528 / 2019
  • BACKGROUND/OBJECTIVES: The aim of this study was to develop a Korean food image detection and recognition model for use in mobile devices for accurate estimation of dietary intake. MATERIALS/METHODS: We collected food images by taking pictures or by searching web images and built an image dataset for training a complex recognition model for Korean food. Augmentation techniques were performed to increase the dataset size. The dataset contained more than 92,000 images categorized into 23 groups of Korean food. All images were down-sampled to a fixed resolution of 150 × 150 and then randomly divided into training and testing groups at a ratio of 3:1, resulting in 69,000 training images and 23,000 test images. We used a deep convolutional neural network (DCNN) for the complex recognition model and compared the results with those of other networks for large-scale image recognition: AlexNet, GoogLeNet, the Very Deep Convolutional Neural Network (VGG), and ResNet. RESULTS: Our complex food recognition model, K-foodNet, had higher test accuracy (91.3%) and faster recognition time (0.4 ms) than the other networks. CONCLUSION: The results showed that K-foodNet achieved better performance in detecting and recognizing Korean food than other state-of-the-art models.

Deep compression of convolutional neural networks with low-rank approximation

  • Astrid, Marcella;Lee, Seung-Ik
    • ETRI Journal / Vol. 40, No. 4 / pp. 421-434 / 2018
  • The application of deep neural networks (DNNs) to connect the world with cyber physical systems (CPSs) has attracted much attention. However, DNNs require a large amount of memory and computational cost, which hinders their use in the relatively low-end smart devices that are widely used in CPSs. In this paper, we aim to determine whether DNNs can be efficiently deployed and operated in low-end smart devices. To do this, we develop a method to reduce the memory requirement of DNNs and increase the inference speed while keeping the performance (for example, accuracy) close to the original level. The parameters of the DNNs are decomposed using a hybrid of canonical polyadic and singular value decompositions, approximated using a tensor power method, and fine-tuned by performing iterative one-shot hybrid fine-tuning to recover from the decreased accuracy. We evaluate our method on frequently used networks and present results from extensive experiments on the effects of several fine-tuning methods, the importance of iterative fine-tuning, and decomposition techniques. We demonstrate the effectiveness of the proposed method by deploying compressed networks on smartphones.
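The low-rank idea can be illustrated with plain truncated SVD on a dense weight matrix. The paper's actual scheme is a hybrid CP/SVD decomposition with a tensor power method and iterative fine-tuning, which this sketch does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)
# A 256x128 weight matrix that is close to rank 8, mimicking the
# approximate low-rank structure often observed in trained layers
W = (rng.normal(size=(256, 8)) @ rng.normal(size=(8, 128))
     + 0.01 * rng.normal(size=(256, 128)))

r = 8
U, s, Vt = np.linalg.svd(W, full_matrices=False)
# Factor W ≈ A @ B: this replaces one 256x128 layer with two smaller
# layers of shapes 256xr and rx128, cutting parameters and FLOPs
A = U[:, :r] * s[:r]
B = Vt[:r, :]

orig_params = W.size                 # 256*128 = 32768
compressed_params = A.size + B.size  # 256*8 + 8*128 = 3072
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(compressed_params / orig_params, rel_err)
```

The residual error introduced here is what the paper's fine-tuning stage exists to recover, since in a real network even a small per-layer error compounds across layers.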

An investigation on the mortars containing blended cement subjected to elevated temperatures using Artificial Neural Network (ANN) models

  • Ramezanianpour, A.A.;Kamel, M.E.;Kazemian, A.;Ghiasvand, E.;Shokrani, H.;Bakhshi, N.
    • Computers and Concrete / Vol. 10, No. 6 / pp. 649-662 / 2012
  • This paper presents the results of an investigation on the compressive strength and weight loss of mortars containing three types of fillers as cement replacements: limestone filler (LF), silica fume (SF), and trass (TR), subjected to elevated temperatures of 400°C, 600°C, 800°C, and 1000°C. Results indicate that the addition of TR to blended cements, compared to SF addition, leads to higher compressive strength and lower weight loss at elevated temperatures. To model the influence of the different parameters on the compressive strength and weight loss of the specimens, artificial neural networks (ANNs) were adopted. Different diagrams were plotted based on the predictions of the most accurate networks to study the effects of temperature, fillers, and cement content on the target properties. In addition to the impressive RMSE and R² values of the best networks, the data used as input for the prediction plots were chosen within the range of the data introduced to the networks in the training phase. Therefore, the prediction plots can be considered reliable for performing the parametric study.

가속도 센서 데이터 기반의 행동 인식 모델 성능 향상 기법 (Improving Performance of Human Action Recognition on Accelerometer Data)

  • 남정우;김진헌
    • 전기전자학회논문지 / Vol. 24, No. 2 / pp. 523-528 / 2020
  • The proliferation of smart mobile devices has made the analysis of everyday human activity more common and simpler. Activity analysis is already used in many fields such as user authentication, surveillance, and healthcare, and its usefulness has been proven. In this paper, we present deep neural network models that perform action recognition efficiently and accurately from smartphone accelerometer signals: a convolutional neural network (Model A) and a model that additionally applies a recurrent neural network (Model B). Model A shows that, with only simple techniques such as batch normalization, a smaller model can achieve higher performance than previous results. Model B shows that prediction accuracy can be further improved by adding an LSTM layer, which is commonly used for time-series modeling. On a benchmark dataset collected from 29 subjects, the models achieved overall prediction accuracies of 97.16% (Model A) and 99.50% (Model B).
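Accelerometer-based models like the two above typically consume fixed-length windows cut from the raw tri-axial stream. A minimal sketch of that preprocessing step follows; the window length and stride here are illustrative, not the paper's settings:

```python
import numpy as np

def make_windows(signal, win_len, stride):
    """Segment a (T, 3) accelerometer stream into overlapping
    fixed-length windows of shape (win_len, 3), the usual input
    format for CNN/LSTM action-recognition models."""
    windows = [signal[s:s + win_len]
               for s in range(0, len(signal) - win_len + 1, stride)]
    return np.stack(windows)

# 10 s of a 50 Hz tri-axial signal -> 2 s windows with 50% overlap
sig = np.zeros((500, 3))
X = make_windows(sig, win_len=100, stride=50)
print(X.shape)  # (9, 100, 3)
```

Each window then receives one activity label, so a convolutional front end sees a short multichannel "image" while an LSTM back end, as in Model B, consumes the window as a length-100 sequence.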

Formulation of the Neural Network for Implicit Constitutive Model (I) : Application to Implicit Viscoplastic Model

  • Lee, Joon-Seong;Lee, Ho-Jeong;Furukawa, Tomonari
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 9, No. 3 / pp. 191-197 / 2009
  • To date, a number of models have been proposed and discussed to describe a wide range of inelastic material behaviors. A fundamental problem with such models, however, is the existence of model errors, and this problem inevitably remains as long as a material model is written explicitly. In this paper, the authors define the implicit constitutive model and propose an implicit viscoplastic constitutive model using neural networks. In this modeling, inelastic material behaviors are generalized in a state-space representation, and the state-space form is constructed by a neural network from input-output data sets. A technique to extract the input-output data from experimental data is also described. The proposed model was first generated from pseudo-experimental data created by one of the widely used constitutive models and was found to reproduce that model well. Then, tested against actual experimental data, the proposed model produced a negligible amount of model error, indicating its superiority in accuracy to the existing explicit models.