• Title/Summary/Keyword: Logic Neurons

Logic-based Fuzzy Neural Networks based on Fuzzy Granulation

  • Kwak, Keun-Chang;Kim, Dong-Hwa
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.1510-1515
    • /
    • 2005
  • This paper is concerned with Logic-based Fuzzy Neural Networks (LFNN) designed with the aid of fuzzy granulation. As the underlying design tools guiding the development of the proposed LFNN, we concentrate on context-based fuzzy clustering, which builds information granules in the form of linguistic contexts, and the OR fuzzy neuron, a logic-driven processing unit realizing composition operations of t-norms and s-norms. The design process comprises several main phases: (a) defining context fuzzy sets in the output space, (b) completing context-based fuzzy clustering in each context, (c) aggregating OR fuzzy neurons into linguistic models, and (d) optimizing the connections linking information granules and fuzzy neurons in the input and output spaces. The experimental examples are tested on a two-dimensional nonlinear function. The obtained results reveal that the proposed model yields better performance in comparison with the conventional linguistic model and other approaches.
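
A minimal sketch of the OR fuzzy neuron mentioned in the abstract: each input is combined with its connection weight through a t-norm, and the results are aggregated with an s-norm. The product t-norm and probabilistic-sum s-norm used here are illustrative choices only; the abstract does not fix a particular pair.

```python
import numpy as np

def t_norm(a, b):
    """Product t-norm (min(a, b) is another common choice)."""
    return a * b

def s_norm(a, b):
    """Probabilistic-sum s-norm (max(a, b) is another common choice)."""
    return a + b - a * b

def or_fuzzy_neuron(x, w):
    """OR fuzzy neuron: s-norm aggregation of t-norm(x_i, w_i) terms.

    x and w hold membership degrees / connection weights in [0, 1].
    """
    y = 0.0
    for xi, wi in zip(x, w):
        y = s_norm(y, t_norm(xi, wi))
    return y

# Example: three granular inputs with their learned connections
x = np.array([0.7, 0.2, 0.9])
w = np.array([0.8, 0.1, 0.5])
print(or_fuzzy_neuron(x, w))   # a single activation in [0, 1]
```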

A Study on the Digital Implementation of Multi-layered Neural Networks for Pattern Recognition (패턴인식을 위한 다층 신경망의 디지털 구현에 관한 연구)

  • 박영석
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.2
    • /
    • pp.111-118
    • /
    • 2001
  • In this paper, in order to implement a multi-layered perceptron neural network with a pure digital logic circuit model, we propose a new logic neuron structure, a digital canonical multi-layered logic neural network structure, and a multi-stage multi-layered logic neural network structure for pattern recognition applications. We also show that the proposed approach provides an incremental additive learning algorithm, which is very simple and effective.

Test Generation for Combinational Logic Circuits Using Neural Networks (신경회로망을 이용한 조합 논리회로의 테스트 생성)

  • 김영우;임인칠
    • Journal of the Korean Institute of Telematics and Electronics A
    • /
    • v.30A no.9
    • /
    • pp.71-79
    • /
    • 1993
  • This paper proposes a new test pattern generation methodology for combinational logic circuits using neural networks based on a modular structure. The CUT (Circuit Under Test) is described in our gate-level hardware description language. By referring to a neural database, the CUT is compiled into an ATPG (Automatic Test Pattern Generation) neural network. Each logic gate in the CUT is represented as a discrete Hopfield network; such a network is called a gate module in this paper. All the gate modules for a CUT form an ATPG neural network by connecting the modules through message-passing paths, by which the states of modules are transferred to their adjacent modules. A fault is injected by setting the activation values of some neurons to given values and by invalidating connections between some gate modules. A test pattern for an injected fault is obtained when all gate modules in the ATPG neural network are stabilized through evolution and mutual interactions. Test generation is known to be NP-complete; the proposed methodology handles it efficiently through its massive parallelism. Some results on combinational logic circuits confirm the feasibility of the proposed methodology.
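
The gate modules above are discrete Hopfield networks whose stable states correspond to logically consistent gate assignments, with faults injected by clamping neuron activations. The sketch below shows only the generic mechanics assumed here (asynchronous binary updates plus clamping); the per-gate weights and thresholds, which the paper derives from each gate's behavior, are replaced by placeholder values and do not encode any particular gate.

```python
import numpy as np

class GateModule:
    """Generic discrete Hopfield module with optional clamped neurons."""

    def __init__(self, weights, thresholds):
        self.W = np.asarray(weights, dtype=float)   # symmetric, zero diagonal
        self.theta = np.asarray(thresholds, dtype=float)
        self.state = np.zeros(len(self.theta))      # binary activations {0, 1}
        self.clamped = {}                           # index -> fixed value (fault injection)

    def clamp(self, idx, value):
        self.clamped[idx] = value
        self.state[idx] = value

    def relax(self, sweeps=50, rng=np.random.default_rng(0)):
        """Asynchronous threshold updates until the module settles."""
        n = len(self.state)
        for _ in range(sweeps):
            for i in rng.permutation(n):
                if i in self.clamped:
                    continue
                self.state[i] = 1.0 if self.W[i] @ self.state > self.theta[i] else 0.0
        return self.state

# Placeholder 3-neuron module (two gate inputs and one output line);
# real weights would be chosen so that minima are the gate's consistent assignments.
W = np.array([[0., 1., -1.], [1., 0., -1.], [-1., -1., 0.]])
theta = np.array([0.5, 0.5, -0.5])
module = GateModule(W, theta)
module.clamp(2, 1.0)          # e.g. force a line to 1, as when activating a stuck-at-0 fault
print(module.relax())
```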

Genetically Optimized Hybrid Fuzzy Set-based Polynomial Neural Networks with Polynomial and Fuzzy Polynomial Neurons

  • Oh Sung-Kwun;Roh Seok-Beom;Park Keon-Jun
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.5 no.4
    • /
    • pp.327-332
    • /
    • 2005
  • We investigate a new class of fuzzy neural networks: Hybrid Fuzzy Set-based Polynomial Neural Networks (HFSPNN). These networks consist of genetically optimized multiple layers with two kinds of heterogeneous neurons, fuzzy set-based polynomial neurons (FSPNs) and polynomial neurons (PNs). We have developed a comprehensive design methodology to determine the optimal structure of the networks dynamically. The augmented genetically optimized HFSPNN (gHFSPNN) results in a structurally optimized network and comes with a higher level of flexibility than the conventional HFPNN. The GA-based design procedure applied at each layer of the gHFSPNN leads to the selection of preferred nodes (FSPNs or PNs) available within the HFSPNN. In the sequel, the structural optimization is realized via GAs, whereas the ensuing detailed parametric optimization is carried out through standard least-squares-based learning. The performance of the gHFSPNN is quantified through experiments on a number of modeling benchmarks, both synthetic and experimental data already used in fuzzy or neurofuzzy modeling.

Chip design and application of gas classification function using MLP classification method (MLP분류법을 적용한 가스분류기능의 칩 설계 및 응용)

  • 장으뜸;서용수;정완영
    • Proceedings of the IEEK Conference
    • /
    • 2001.06b
    • /
    • pp.309-312
    • /
    • 2001
  • A primitive gas classification system which can classify limited species of gas was designed and simulated. The 'electronic nose' consists of an array of 4 metal oxide gas sensors with different selectivity patterns, a signal collecting unit, and a signal pattern recognition and decision part in a PLD (programmable logic device) chip. The sensor array consists of four commercial tin-oxide-based semiconductor-type gas sensors. A BP (back-propagation) neural network with an MLP (multilayer perceptron) structure was designed and implemented in VHDL on a fifty-thousand-gate CPLD chip to process the input signals from the 4 gas sensors and to classify gases in air. The network contained four input units, one hidden layer with 4 neurons, and an output layer with 4 neurons. The 'electronic nose' system successfully classified 4 kinds of industrial gases in computer simulation.
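
The abstract specifies a 4-4-4 MLP trained with back-propagation; a minimal software sketch of such a network follows. The sigmoid activation, learning rate, and random stand-in data are assumptions for illustration, and a hardware (VHDL/CPLD) implementation would use fixed-point arithmetic rather than floating point.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# 4 sensor inputs -> 4 hidden neurons -> 4 gas classes (one-hot targets)
W1 = rng.normal(scale=0.5, size=(4, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 4)); b2 = np.zeros(4)

X = rng.random((200, 4))                       # stand-in sensor readings
y = rng.integers(0, 4, size=200)               # stand-in gas labels
T = np.eye(4)[y]                               # one-hot targets

lr = 0.5
for _ in range(2000):                          # plain batch back-propagation
    H = sigmoid(X @ W1 + b1)                   # hidden layer
    O = sigmoid(H @ W2 + b2)                   # output layer
    dO = (O - T) * O * (1 - O)                 # output delta (squared-error loss)
    dH = (dO @ W2.T) * H * (1 - H)             # hidden delta
    W2 -= lr * H.T @ dO / len(X); b2 -= lr * dO.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

pred = np.argmax(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), axis=1)
print("training accuracy:", (pred == y).mean())
```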

A Fast Automatic Test Pattern Generator Using Massive Parallelism (대량의 병렬성을 이용한 고속 자동 테스트 패턴 생성기)

  • 김영오;임인칠
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.5
    • /
    • pp.661-670
    • /
    • 1995
  • This paper presents a fast massively parallel automatic test pattern generator for digital combinational logic circuits using neural networks. The automatic test pattern generation neural network (ATPGNN) evolves its state to a stable local minimum by exchanging messages among neural network modules. In the preprocessing phase, we calculate the essential assignments for the stuck-at faults in the fault list by adopting the dominator concept. This fixes more neurons and speeds up the system; consequently, fast test pattern generation is achieved. Test patterns for stuck-open faults are generated by obtaining initialization patterns for the detected stuck-at faults in the corresponding ATPGNN.

Design of a Neuro-Fuzzy System Using Union-Based Rule Antecedent (합 기반의 전건부를 가지는 뉴로-퍼지 시스템 설계)

  • Chang-Wook Han;Don-Kyu Lee
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.2
    • /
    • pp.13-17
    • /
    • 2024
  • In this paper, a union-based rule antecedent neuro-fuzzy controller, which can guarantee a parsimonious knowledge base with a reduced number of rules, is proposed. The proposed neuro-fuzzy controller allows union operation of input fuzzy sets in the antecedents to cover a bigger input domain compared with the complete-structure rule, which consists of an AND combination of all input variables in its premise. To construct the proposed neuro-fuzzy controller, we consider the multiple-term unified logic processor (MULP), which consists of OR and AND fuzzy neurons. The fuzzy neurons exhibit learning abilities as they come with a collection of adjustable connection weights. In the development stage, the genetic algorithm (GA) constructs a Boolean skeleton of the proposed neuro-fuzzy controller, while stochastic reinforcement learning refines the binary connections of the GA-optimized controller for further improvement of the performance index. An inverted pendulum system is considered to verify the effectiveness of the proposed method by simulation and experiment.
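
The multiple-term unified logic processor (MULP) described above combines AND and OR fuzzy neurons with adjustable connection weights. The sketch below shows one common reading of such a two-layer logic processor, using min/max as the t-norm/s-norm pair; the actual norms, layer sizes, and weight semantics in the paper may differ.

```python
import numpy as np

def and_neuron(x, w):
    """AND fuzzy neuron: t-norm (min) over s-norm (max) of inputs and weights."""
    return np.min(np.maximum(x, w))

def or_neuron(z, v):
    """OR fuzzy neuron: s-norm (max) over t-norm (min) of inputs and weights."""
    return np.max(np.minimum(z, v))

def mulp(x, W_and, v_or):
    """Two-layer logic processor: a bank of AND neurons feeding one OR neuron."""
    z = np.array([and_neuron(x, w) for w in W_and])
    return or_neuron(z, v_or)

x = np.array([0.8, 0.3, 0.6])            # input membership degrees
W_and = np.array([[0.0, 1.0, 0.2],       # per-rule AND-neuron connections
                  [0.5, 0.0, 1.0]])
v_or = np.array([0.9, 0.7])              # OR-neuron connections
print(mulp(x, W_and, v_or))
```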

Neural Network Training Using a GMDH Type Algorithm

  • Pandya, Abhijit S.;Gilbar, Thomas;Kim, Kwang-Baek
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.5 no.1
    • /
    • pp.52-58
    • /
    • 2005
  • We have developed a Group Method of Data Handling (GMDH) type algorithm for designing multi-layered neural networks. The algorithm is general enough to accept any number of inputs and a training set of any size. Each neuron of the resulting network is a function of two of the inputs to its layer. The equation for each neuron is a quadratic polynomial. Several forms of the equation are tested for each neuron to make sure that only the best equation of two inputs is kept. All possible combinations of two inputs to each layer are also tested. By carefully testing each resulting neuron, we have developed an algorithm that keeps only the best neurons at each level. The algorithm's goal is to create as accurate a network as possible while minimizing the size of the network. Software was developed to train and simulate networks using our algorithm. Several applications were modeled using our software, and our algorithm succeeded in developing small, accurate, multi-layer networks.
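
Each GMDH neuron described above fits a quadratic polynomial in two of its layer's inputs. A minimal sketch of constructing one layer (fitting every input pair by least squares and keeping the best-scoring neurons) follows; the mean-squared-error fitness and the fixed retention count are assumptions, since the abstract does not specify the selection criterion.

```python
import itertools
import numpy as np

def quad_features(u, v):
    """Design matrix for y ~ a0 + a1*u + a2*v + a3*u*v + a4*u^2 + a5*v^2."""
    return np.column_stack([np.ones_like(u), u, v, u * v, u**2, v**2])

def build_layer(X, y, keep=4):
    """Fit a quadratic neuron for every pair of inputs and keep the best `keep`."""
    candidates = []
    for i, j in itertools.combinations(range(X.shape[1]), 2):
        A = quad_features(X[:, i], X[:, j])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        mse = np.mean((A @ coef - y) ** 2)       # simple fitness; GMDH often scores
        candidates.append((mse, (i, j), coef))   # candidates on a separate selection set
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]

# Toy data: outputs of the kept neurons would become the next layer's inputs
rng = np.random.default_rng(1)
X = rng.random((100, 5))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 3]
for mse, pair, _ in build_layer(X, y):
    print(f"inputs {pair}: mse={mse:.4f}")
```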

A neuron computer model embedded Lukasiewicz' implication

  • Kobata, Kenji;Zhu, Hanxi;Aoyama, Tomoo;Yoshihara, Ikuo
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.449-449
    • /
    • 2000
  • Many researchers have studied architectures for non-von Neumann computers in order to escape the von Neumann bottleneck. To avoid the bottleneck, a neuron-based computer has been developed. The computer has only neurons and their connections, which are constructed through learning. It still has information-processing facilities and, at the same time, it is like a simplified brain that makes inferences; it is called a "neuron-computer". Usually no instructions are considered in a neural network; however, to complete complex processing on restricted computing resources, the processing must be reduced to primitive actions. Therefore, we introduce instructions to the neuron-computer, among which the most important function is implication. There is an implication represented by binary operators, but general implications for multi-valued or fuzzy logics cannot be expressed that way. Therefore, we need to use Lukasiewicz' operator at least. We investigated a neuron-computer having instructions for general implications. With this computer, effective inferences based on multi-valued logic are executed rapidly in a small logical unit.
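
For reference, the Lukasiewicz implication the abstract refers to is usually defined on truth values in [0, 1] as I(a, b) = min(1, 1 - a + b); a minimal sketch follows. How the operator is embedded into the neuron-computer's instruction set is not shown here.

```python
def lukasiewicz_implication(a: float, b: float) -> float:
    """Lukasiewicz implication for truth values a, b in [0, 1]."""
    return min(1.0, 1.0 - a + b)

# On {0, 1} it reduces to classical implication; in between it degrades gracefully.
print(lukasiewicz_implication(1.0, 0.0))   # 0.0  (true -> false is false)
print(lukasiewicz_implication(0.3, 0.8))   # 1.0
print(lukasiewicz_implication(0.9, 0.4))   # 0.5
```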

Pseudoinverse Matrix Decomposition Based Incremental Extreme Learning Machine with Growth of Hidden Nodes

  • Kassani, Peyman Hosseinzadeh;Kim, Euntai
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.16 no.2
    • /
    • pp.125-130
    • /
    • 2016
  • This study proposes a fast version of the conventional extreme learning machine (ELM), called pseudoinverse matrix decomposition based incremental ELM (PDI-ELM). One of the main problems in ELM is determining the number of hidden nodes. In this study, the number of hidden nodes is determined automatically. The proposed model is an incremental version of ELM which adds neurons with the goal of minimizing the error of the ELM network. To speed up the model, the pseudoinverse information from the previous step is taken into account in the current iteration. To show the ability of the PDI-ELM, it is applied to a few benchmark classification datasets from the University of California Irvine (UCI) repository. Compared to the ELM learner and two other versions of incremental ELM, the proposed PDI-ELM is faster.
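
A minimal sketch of an incremental ELM of the kind described above: hidden nodes are added one at a time and the output weights are re-solved until the training error stops improving. For simplicity, the output weights here are recomputed from scratch with a full pseudoinverse at every step; the paper's contribution, reusing the previous step's pseudoinverse to avoid that recomputation, is not reproduced.

```python
import numpy as np

def elm_incremental(X, y, max_hidden=50, tol=1e-4, rng=np.random.default_rng(0)):
    """Grow an ELM one hidden node at a time until the error improvement stalls."""
    n, d = X.shape
    W, b = np.empty((0, d)), np.empty(0)    # hidden-node weights and biases
    beta, prev_err = None, np.inf
    for _ in range(max_hidden):
        W = np.vstack([W, rng.normal(size=(1, d))])   # add one random hidden node
        b = np.append(b, rng.normal())
        H = np.tanh(X @ W.T + b)                      # hidden-layer output matrix
        beta = np.linalg.pinv(H) @ y                  # full pseudoinverse (no reuse here)
        err = np.mean((H @ beta - y) ** 2)
        if prev_err - err < tol:
            break
        prev_err = err
    return W, b, beta

rng = np.random.default_rng(1)
X = rng.random((200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
W, b, beta = elm_incremental(X, y)
print("hidden nodes used:", len(b))
```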