• Title/Summary/Keyword: Input Nodes

Search Results: 379

Development of Information Propagation Neural Networks processing On-line Interpolation (실시간 보간 가능을 갖는 정보전파신경망의 개발)

  • Kim, Jong-Man;Sin, Dong-Yong;Kim, Hyong-Suk;Kim, Sung-Joong
    • Proceedings of the KIEE Conference / 1998.07b / pp.461-464 / 1998
  • Lateral Information Propagation Neural Networks (LIPN) are proposed for on-line interpolation. The proposed neural network technique is a real-time computation method based on inter-node diffusion. In the network, a node corresponds to a state in the quantized input space. Each node is composed of a processing unit and fixed weights from its neighbor nodes as well as from its input terminal. Information propagates laterally among neighbor nodes, and inter-node interpolation is achieved. In several simulation experiments, nonlinear image information is reconstructed in real time. A 1-D LIPN has been implemented in hardware with general-purpose analog ICs to test the interpolation capability of the proposed networks, and experiments with both static and dynamic signals have been performed on this hardware.

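The inter-node diffusion this abstract describes can be sketched as a simple relaxation on a 1-D grid of nodes. This is a toy illustration, not the paper's analog circuit: the equal neighbor weights (0.5 each), hard clamping of the known samples, and the iteration count are all assumptions made for the sketch.

```python
# Toy 1-D lateral-diffusion interpolation: known samples are clamped and
# every other node repeatedly averages its two neighbors until the values
# settle, which converges to the linear interpolant between samples.

def diffuse_interpolate(samples, n_nodes, n_iters=500):
    """samples: dict {node_index: known value}; returns all node values."""
    v = [0.0] * n_nodes
    for i, val in samples.items():
        v[i] = val
    for _ in range(n_iters):
        nxt = v[:]
        for i in range(n_nodes):
            if i in samples:                     # input terminals stay clamped
                continue
            left = v[i - 1] if i > 0 else v[i]
            right = v[i + 1] if i < n_nodes - 1 else v[i]
            nxt[i] = 0.5 * (left + right)        # lateral propagation
        v = nxt
    return v

vals = diffuse_interpolate({0: 0.0, 4: 4.0}, 5)
```

With samples 0.0 and 4.0 at the two ends of a 5-node grid, the interior nodes relax toward 1.0, 2.0, 3.0.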

Symmetric Adiabatic Logic Circuits against Differential Power Analysis

  • Choi, Byong-Deok;Kim, Kyung-Eun;Chung, Ki-Seok;Kim, Dong-Kyue
    • ETRI Journal / v.32 no.1 / pp.166-168 / 2010
  • We investigate the possibility of using adiabatic logic as a countermeasure against differential power analysis (DPA)-style attacks while making use of its energy efficiency. Like other dual-rail logics, adiabatic logic exhibits a current dependence on input data, which makes the system vulnerable to DPA. To resolve this issue, we propose a symmetric adiabatic logic in which the discharge paths are symmetric, giving data-independent parasitic capacitance, and in which charge is shared between the output nodes and between the internal nodes to prevent the circuit from depending on the previous input data.
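The data dependence this abstract targets can be illustrated with a toy power model. The Hamming-weight "power" below is a standard caricature of why unbalanced logic leaks, and the constant rail count stands in for dual-rail/symmetric balancing; neither models the proposed circuit itself.

```python
# Toy leakage models: a naive gate's power tracks the data's Hamming
# weight (exploitable by DPA), while a balanced dual-rail encoding
# toggles one of two rails per bit, so the toggle count is constant.

def hamming_weight(x):
    return bin(x).count("1")

def naive_power(byte):
    return hamming_weight(byte)       # power depends on the data: leaks

def dual_rail_power(byte):
    # each bit drives a true rail and a complement rail; exactly one
    # of the pair toggles per bit regardless of the bit's value
    return sum(1 for _ in range(8))   # constant: 8 toggles for any byte

naive_trace = {naive_power(b) for b in range(256)}
balanced_trace = {dual_rail_power(b) for b in range(256)}
```

`naive_trace` has nine distinct power levels (Hamming weights 0..8), while `balanced_trace` collapses to a single level, which is the property a DPA countermeasure needs.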

Instrumentation based Neural Networks for Real-time detecting of Energized Insulator (오손 애자자의 실시간 검출을 위한 계측기반 신경망)

  • Kim, Jong-Man;Kim, Young-Min
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2004.05c / pp.25-29 / 2004
  • For detecting energized (faulty) insulators, a new Lateral Information Propagation Network (LIPN) is proposed. A faulty insulator has a severely reduced insulation rating, with contamination and damage as the result, so faulty insulators must be detected and replaced. We therefore designed the LIPN to detect such insulators using a real-time computation method based on inter-node diffusion. In the network, a node corresponds to a state in the quantized input space. Each node is composed of a processing unit and fixed weights from its neighbor nodes as well as from its input terminal. Information propagates laterally among neighbor nodes, and inter-node interpolation is achieved. The simulation results confirm the ability to detect faulty insulators in real time.


A Local Weight Learning Neural Network Architecture for Fast and Accurate Mapping (빠르고 정확한 변환을 위한 국부 가중치 학습 신경회로)

  • 이인숙;오세영
    • Journal of the Korean Institute of Telematics and Electronics B / v.28B no.9 / pp.739-746 / 1991
  • This paper develops a modified multilayer perceptron architecture that speeds up learning as well as the net's mapping accuracy. In Phase I, a cluster-partitioning algorithm such as Kohonen's self-organizing feature map or the leader clustering algorithm is used as a front end that determines the cluster to which the input data belongs. In Phase II, this cluster selects a subset of the hidden-layer nodes that, combined with the input and output nodes, forms a subnet of the full-scale backpropagation network. The proposed net has been applied to two mapping problems, one rather smooth and the other highly nonlinear: the inverse kinematics of a 3-link robot manipulator and the 5-bit parity mapping. The results demonstrate the proposed net's superior accuracy and convergence properties over the original backpropagation network and its existing improvements.

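The two-phase idea (route the input to a cluster, then use only that cluster's subnet) can be sketched with stand-ins: fixed nearest-centroid clusters for Phase I and a tiny linear fit per cluster for Phase II, where the paper uses SOFM/leader clustering and backpropagation subnets. Everything below is an illustrative assumption.

```python
# Cluster-gated local models: each cluster owns its own small model, and
# prediction first routes the input to the nearest cluster center.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))          # 1-D inputs
y = np.sin(3 * X[:, 0])                   # target mapping

# Phase I stand-in: two fixed cluster centers
centers = np.array([[-0.5], [0.5]])
assign = np.argmin(np.abs(X - centers.T), axis=1)

# Phase II stand-in: one local linear "subnet" per cluster
models = []
for c in range(len(centers)):
    Xi, yi = X[assign == c], y[assign == c]
    Ai = np.hstack([Xi, np.ones((len(Xi), 1))])
    w, *_ = np.linalg.lstsq(Ai, yi, rcond=None)
    models.append(w)

def predict(x):
    c = int(np.argmin(np.abs(x - centers[:, 0])))   # route to a cluster
    return models[c][0] * x + models[c][1]          # local subnet output

# single global linear fit for comparison
Ag = np.hstack([X, np.ones((len(X), 1))])
wg, *_ = np.linalg.lstsq(Ag, y, rcond=None)
global_mse = np.mean((Ag @ wg - y) ** 2)
local_mse = np.mean([(predict(x) - t) ** 2 for x, t in zip(X[:, 0], y)])
```

Even with this crude stand-in, the cluster-local fits beat a single global fit on the nonlinear target, which is the architectural point.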

Nonlinear System Modeling Using a Neural Networks (비선형 시스템의 신경회로망을 이용한 모델링 기법)

  • Chong, Kil To;No, Tae-Soo;Hong, Dong-Pyo
    • Journal of the Korean Society for Precision Engineering / v.13 no.12 / pp.22-29 / 1996
  • In this paper, the nodes of the hidden layers of a multilayer network are modified for modeling nonlinear systems. The structure of the hidden-layer nodes is built with feedforward, cross-talk, and recurrent connections. The feedforward links map the nonlinear function, while the cross-talk and recurrent links memorize the dynamics of the system. The cross-talk links connect nodes within the same hidden layer, and the recurrent connection provides self-feedback; both of these connections receive one-step-delayed input signals. A simplified steam boiler and an analytic multi-input multi-output nonlinear system containing process noise have been modeled using this neural network.

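One step of a hidden layer with the three connection types the abstract names can be sketched as below: feedforward input, cross-talk from sibling nodes' one-step-delayed activations, and self-feedback. The sizes and weight values are toy assumptions.

```python
# One update of a hidden layer with feedforward, cross-talk (between
# nodes of the same layer), and self-recurrent connections; the cross
# and self terms use the previous step's activations (one-step delay).
import math

def step(x_t, h_prev, W_in, W_cross, w_self):
    h = []
    for i in range(len(h_prev)):
        a = W_in[i] * x_t + w_self[i] * h_prev[i]        # self feedback
        a += sum(W_cross[i][j] * h_prev[j]               # cross talk
                 for j in range(len(h_prev)) if j != i)
        h.append(math.tanh(a))
    return h

h = [0.0, 0.0]
for x in [1.0, 0.5, -0.2]:                               # toy input signal
    h = step(x, h, W_in=[0.8, -0.6],
             W_cross=[[0.0, 0.3], [0.2, 0.0]], w_self=[0.5, 0.5])
```

The recurrent and cross-talk terms give the layer memory of past inputs, which a purely feedforward layer lacks.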

A Production Function for the Organization with Hierarchical Network Queue Structure (계층적(階層的) 네트웍 대기구조(待機構造)를 갖는 조직(組織)의 생산함수(生産函數)에 대한 연구(硏究))

  • Gang, Seok-Hyeon;Kim, Seong-In
    • Journal of Korean Institute of Industrial Engineers / v.12 no.1 / pp.63-71 / 1986
  • For an organization with a hierarchical network queue structure, a production function is derived whose input factors are the numbers of servers at the nodes and whose output is the number of served customers. Its useful properties are investigated. Using this production function, the contributions of the servers to the number of served customers are studied. Also, given an expected waiting time in the system for each customer, the optimal numbers of servers at the nodes are obtained by minimizing a cost function.

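The shape of such a production function can be illustrated with a deliberately crude model: assume each node's throughput saturates at (servers × service rate) and a child node's output feeds its parent, so total output is bottleneck-limited. This is only a sketch of the input-factors-to-output mapping, not the paper's derivation.

```python
# Toy two-level hierarchical queue: the "production function" maps
# server counts at each node to the number of served customers per
# unit time, limited by the slowest stage.

def node_throughput(arrival_rate, servers, mu):
    # a stable queue cannot serve faster than it is fed, nor faster
    # than its total service capacity servers * mu
    return min(arrival_rate, servers * mu)

def production(servers_leaf, servers_root, demand=10.0, mu=1.0):
    leaf_out = node_throughput(demand, servers_leaf, mu)
    return node_throughput(leaf_out, servers_root, mu)
```

Adding servers at a non-bottleneck node does not raise output, which is the kind of marginal-contribution property the paper studies.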

Automatic Recognition of Pitch Accents Using Time-Delay Recurrent Neural Network (시간지연 회귀 신경회로망을 이용한 피치 악센트 인식)

  • Kim, Sung-Suk;Kim, Chul;Lee, Wan-Joo
    • The Journal of the Acoustical Society of Korea / v.23 no.4E / pp.112-119 / 2004
  • This paper presents a method for the automatic recognition of pitch accents with no prior knowledge of the phonetic content of the signal (no knowledge of word or phoneme boundaries or of phoneme labels). The recognition algorithm used in this paper is a time-delay recurrent neural network (TDRNN). A TDRNN is a neural network classifier with two different representations of dynamic context: delayed input nodes allow the representation of an explicit trajectory F0(t), while recurrent nodes provide long-term context information that can be used to normalize the input F0 trajectory. Performance of the TDRNN is compared to that of an MLP (multi-layer perceptron) and an HMM (hidden Markov model) on the same task. The TDRNN correctly recognizes 91.9% of pitch events and 91.0% of pitch non-events, for an average accuracy of 91.5% over both. The MLP with contextual input exhibits 85.8%, 85.5%, and 85.6% recognition accuracy respectively, while the HMM correctly recognizes 36.8% of pitch events and 87.3% of pitch non-events, for an average accuracy of 62.2%. These results suggest that the TDRNN architecture is useful for the automatic recognition of pitch accents.
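The two context mechanisms the abstract distinguishes can be sketched in a single forward pass: a sliding window of delayed F0 taps (explicit trajectory) plus a recurrent state (long-term level used for normalization). Network sizes, weights, and the F0 values below are toy assumptions, not the paper's model.

```python
# Sketch of a TDRNN-style step: delayed input taps see a local F0
# window, while a recurrent node accumulates long-term context that is
# subtracted from the trajectory response.
import math

DELAYS = 3   # delayed input nodes: f0(t), f0(t-1), f0(t-2)

def tdrnn_step(f0_window, state, w_in, w_rec, w_out):
    # recurrent node: running context, e.g. the speaker's F0 level
    state = math.tanh(w_rec * state + sum(f0_window) / DELAYS)
    # hidden node combines the explicit trajectory with that context
    hidden = math.tanh(sum(w * f for w, f in zip(w_in, f0_window)) - state)
    return state, hidden * w_out

f0 = [120.0, 121.0, 180.0, 182.0, 122.0]   # toy F0 contour in Hz
f0 = [f / 200.0 for f in f0]               # crude input scaling
state, outputs = 0.0, []
for t in range(DELAYS - 1, len(f0)):
    window = f0[t - DELAYS + 1: t + 1]
    state, out = tdrnn_step(window, state, [0.5, 1.0, 2.0], 0.3, 1.0)
    outputs.append(out)
```

One output per frame (after the initial delay fill) would then be thresholded into pitch-event versus non-event decisions.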

Optimal design of Self-Organizing Fuzzy Polynomial Neural Networks with evolutionarily optimized FPN (진화론적으로 최적화된 FPN에 의한 자기구성 퍼지 다항식 뉴럴 네트워크의 최적 설계)

  • Park, Ho-Sung;Oh, Sung-Kwun
    • Proceedings of the KIEE Conference / 2005.05a / pp.12-14 / 2005
  • In this paper, we propose a new architecture of Self-Organizing Fuzzy Polynomial Neural Networks (SOFPNN) built from genetically optimized fuzzy polynomial neurons (FPNs) and discuss its comprehensive design methodology, which involves mechanisms of genetic optimization, in particular genetic algorithms (GAs). Conventional SOFPNNs hinge on an extended Group Method of Data Handling (GMDH), exploit a fixed fuzzy inference type in each FPN, and consider a fixed number of input nodes in each layer. The design procedure applied in constructing each layer of a SOFPNN deals with structural optimization, involving the selection of preferred nodes (FPNs) with specific local characteristics (the number of input variables, the order of the polynomial in the consequent part of the fuzzy rules, the specific subset of input variables, and the number of membership functions), and addresses specific aspects of parametric optimization. The proposed SOFPNN therefore yields a structurally optimized network with a substantial level of flexibility compared to conventional SOFPNNs. To evaluate the performance of the genetically optimized SOFPNN, the model is tested on two time-series datasets (gas furnace and a chaotic time series).

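The GMDH-style "select preferred nodes per layer" idea can be sketched with stand-ins: each candidate node fits a small polynomial on a pair of inputs and only the best-scoring candidates survive to feed the next layer. The paper uses fuzzy polynomial neurons and a GA for this selection; the exhaustive pair search and plain polynomial fit below are simplifying assumptions.

```python
# GMDH-flavored layer construction: build one candidate polynomial
# node per input pair, score each by training error, keep the best few.
import itertools
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (100, 4))
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2]     # toy target

def fit_node(a, b, target):
    """A quadratic polynomial 'neuron' on the input pair (a, b)."""
    A = np.column_stack([np.ones_like(a), a, b, a * b, a ** 2, b ** 2])
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    pred = A @ w
    return pred, np.mean((pred - target) ** 2)

candidates = [fit_node(X[:, i], X[:, j], y)
              for i, j in itertools.combinations(range(4), 2)]
candidates.sort(key=lambda t: t[1])       # structural selection
survivors = candidates[:3]                # preferred nodes for next layer
```

The pair (x0, x1) wins easily here because its node's a·b term captures the interaction exactly; the GA in the paper searches this structural space instead of enumerating it.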

Design of Neurofuzzy Networks by Means of Linear Fuzzy Inference and Its Application to Software Engineering (선형 퍼지추론을 이용한 뉴로퍼지 네트워크의 설계와 소프트웨어 공학으로의 응용)

  • Park, Byoung-Jun;Park, Ho-Sung;Oh, Sung-Kwun
    • Proceedings of the KIEE Conference / 2002.07d / pp.2818-2820 / 2002
  • In this paper, we design a neurofuzzy network architecture by means of linear fuzzy inference. The proposed neurofuzzy networks are equivalent to linear fuzzy rules, and their structure is composed of two main substructures, a premise part and a consequence part. The premise part uses fuzzy space partitioning over all variables in order to consider the correlation between input variables. The consequence part is a network in first-order linear form. In a general structure (for instance, ANFIS networks), the consequence part consists of nodes whose function is a linear combination of the input variables; in the proposed networks, it consists not of such nodes but of networks built from connection weights, which functionally correspond to a linear combination of the input variables. The connection weights in the consequence part are learned by the backpropagation algorithm. To evaluate the proposed neurofuzzy networks, experiments were performed on a well-known NASA dataset for software cost estimation.


An Optimization of Representation of Boolean Functions Using OPKFDD (OPKFDD를 이용한 불리안 함수 표현의 최적화)

  • Jung, Mi-Gyoung;Lee, Hyuck;Lee, Guee-Sang
    • The Transactions of the Korea Information Processing Society / v.6 no.3 / pp.781-791 / 1999
  • Decision Diagrams (DDs) are an efficient operational data structure for the optimal representation of Boolean functions. In graph-based synthesis using DDs, the goal of optimization is to reduce the space needed to represent Boolean functions. This paper represents Boolean functions using OPKFDD (Ordered Pseudo-Kronecker Functional Decision Diagrams) for graph-based synthesis, with the number of nodes as the criterion of DD size. Because an OPKFDD can select one of several decomposition types for each node, its size varies with both the decomposition type chosen at each node and the input variable order. This paper proposes a method for generating an OPKFDD efficiently from an existing BDD (Binary Decision Diagram) data structure and an algorithm for minimizing it. For multiple-output functions, the relations among the functions affect the number of OPKFDD nodes, so this paper also proposes a method for deciding the input variable order that takes such cases into account. Experimental results are compared with current representation methods and with reordering methods for deciding the input variable order.

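Why the node count serves as the size criterion, and why variable order matters, can be shown with a tiny reduced-BDD sketch: Shannon expansion with a unique table merges identical subgraphs, so different orders leave different numbers of nodes. This sketch uses Shannon decomposition only; the paper's OPKFDD additionally lets each node choose among Shannon and Davio decompositions, which is not implemented here.

```python
# Tiny reduced BDD via exhaustive Shannon expansion: a unique table
# shares identical (var, low, high) nodes and redundant tests are
# skipped, so len(table) is the reduced node count for a given order.

def build(f, order):
    """f: boolean function over dict var->0/1; returns node count."""
    table = {}                             # unique table: (var, lo, hi) -> id
    def rec(assign, i):
        if i == len(order):
            return f(assign)               # terminal 0 or 1
        v = order[i]
        lo = rec({**assign, v: 0}, i + 1)
        hi = rec({**assign, v: 1}, i + 1)
        if lo == hi:                       # redundant test: no node needed
            return lo
        key = (v, lo, hi)
        if key not in table:               # structure sharing
            table[key] = ('node', len(table))
        return table[key]
    rec({}, 0)
    return len(table)

f = lambda a: (a['x1'] and a['y1']) or (a['x2'] and a['y2'])
good = build(f, ['x1', 'y1', 'x2', 'y2'])  # interleaved order: 4 nodes
bad = build(f, ['x1', 'x2', 'y1', 'y2'])   # grouped order: 6 nodes
```

For the classic function (x1∧y1)∨(x2∧y2), the interleaved order yields a smaller diagram than the grouped order, and the gap grows exponentially with more variable pairs, which is why variable-ordering heuristics matter.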