• Title/Summary/Keyword: Activation Function


Approximation of Polynomials and Step function for cosine modulated Gaussian Function in Neural Network Architecture (뉴로 네트워크에서 코사인 모듈화 된 가우스함수의 다항식과 계단함수의 근사)

  • Lee, Sang-Wha
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.49 no.2
    • /
    • pp.115-122
    • /
    • 2012
  • We present a new class of activation functions for neural networks, herein called the CosGauss function. This function is a cosine-modulated Gaussian function. In contrast to the sigmoid, hyperbolic tangent, and Gaussian activation functions, more ridges can be obtained with the CosGauss function. It is proven that this function can be used to approximate polynomials and step functions. The CosGauss function was tested with a multilayer Cascade-Correlation network on the Tic-Tac-Toe game and iris plants problems, and the results are compared with those obtained with other activation functions.
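
The abstract does not give the exact functional form, but a cosine-modulated Gaussian is commonly written as cos(wx)·exp(−x²/2σ²); the sketch below, with assumed parameter names w and sigma, illustrates how the cosine factor adds the extra ridges the authors mention.

```python
import numpy as np

def cos_gauss(x, w=3.0, sigma=1.0):
    """A plausible cosine-modulated Gaussian (CosGauss) activation;
    the paper's exact parameterization may differ."""
    return np.cos(w * x) * np.exp(-x**2 / (2.0 * sigma**2))

# The cosine factor turns the single Gaussian bump into several ridges.
x = np.linspace(-4.0, 4.0, 9)
print(cos_gauss(x))
```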

Performance Improvement Method of Deep Neural Network Using Parametric Activation Functions (파라메트릭 활성함수를 이용한 심층신경망의 성능향상 방법)

  • Kong, Nayoung;Ko, Sunwoo
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.3
    • /
    • pp.616-625
    • /
    • 2021
  • Deep neural networks approximate an arbitrary function with a linear model and then repeatedly refine that approximation through nonlinear activation functions; the quality of the approximation is evaluated with a loss function. Existing deep learning methods account for the loss function in the linear approximation stage, but the nonlinear stage applies activation functions whose transformation is unrelated to reducing the loss. This study proposes parametric activation functions that introduce a scale parameter, which changes the scale of the activation function, and a location parameter, which shifts its position. Introducing these parameters improves the performance of the nonlinear approximation: the scale and location parameters in each hidden layer are determined during backpropagation, using the first derivative of the loss function with respect to each parameter, so as to minimize the loss. On the MNIST classification problem and the XOR problem, the parametric activation functions showed superior performance over existing activation functions.
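
As a concrete illustration, here is a minimal PyTorch sketch of such a parametric activation, with one learnable scale and one learnable location per hidden unit; the module name and the choice of sigmoid as the base function are assumptions, since the abstract does not fix them.

```python
import torch
import torch.nn as nn

class ParametricSigmoid(nn.Module):
    """Hypothetical parametric activation: sigmoid(a * (x - b)) with a
    learnable scale a and location b per feature."""
    def __init__(self, num_features):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(num_features))   # a
        self.loc = nn.Parameter(torch.zeros(num_features))    # b

    def forward(self, x):
        # Both parameters receive gradients of the loss in backprop,
        # so the nonlinearity itself is tuned to minimize the loss.
        return torch.sigmoid(self.scale * (x - self.loc))
```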

Comparison of Activation Functions using Deep Reinforcement Learning for Autonomous Driving on Intersection (교차로에서 자율주행을 위한 심층 강화 학습 활성화 함수 비교 분석)

  • Lee, Dongcheul
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.6
    • /
    • pp.117-122
    • /
    • 2021
  • Autonomous driving allows cars to drive without human intervention and is being studied very actively thanks to recent advances in artificial intelligence, among which deep reinforcement learning is used most effectively. Deep reinforcement learning requires building a neural network with an appropriate activation function. Many activation functions have been suggested so far, but their performance varies with the field of application. This paper compares and evaluates which activation functions are effective when deep reinforcement learning is used to learn autonomous driving on highways. To this end, performance metrics were defined and the metric values for each activation function were compared in graphs. As a result, Mish yielded a higher reward on average than the other activation functions, and the difference from the activation function with the lowest reward was 9.8%.
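
Mish, the best-performing function in this comparison, is defined as x·tanh(softplus(x)); a small reference implementation:

```python
import numpy as np

def mish(x):
    # Mish(x) = x * tanh(softplus(x)); softplus computed stably
    # as logaddexp(0, x) = ln(1 + e^x).
    return x * np.tanh(np.logaddexp(0.0, x))
```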

Impact of Activation Functions on Flood Forecasting Model Based on Artificial Neural Networks (홍수량 예측 인공신경망 모형의 활성화 함수에 따른 영향 분석)

  • Kim, Jihye;Jun, Sang-Min;Hwang, Soonho;Kim, Hak-Kwan;Heo, Jaemin;Kang, Moon-Seong
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.63 no.1
    • /
    • pp.11-25
    • /
    • 2021
  • The objective of this study was to analyze the impact of activation functions on a flood forecasting model based on artificial neural networks (ANNs). The traditional activation functions, sigmoid and tanh, were compared with functions recently recommended for deep neural networks: ReLU, leaky ReLU, and ELU. The flood forecasting model was designed to predict real-time runoff for 1- to 6-h lead times using the rainfall and runoff data of the past nine hours. Statistical measures such as R2, Nash-Sutcliffe Efficiency (NSE), Root Mean Squared Error (RMSE), the error of peak time (ETp), and the error of peak discharge (EQp) were used to evaluate model accuracy. The tanh and ELU functions were most accurate, with R2=0.97 and RMSE=30.1 ㎥/s for the 1-h lead time and R2=0.56 and RMSE=124.6~124.8 ㎥/s for the 6-h lead time. We also evaluated learning speed using the number of epochs needed to minimize the error. The sigmoid function learned slowest due to the vanishing gradient problem and the limited direction of weight updates, while the ELU function learned 1.2 times faster than tanh. As a result, the ELU function most effectively improved both the accuracy and the speed of the ANN model, so it was judged the best activation function for ANN-based flood forecasting.
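
For reference, the rectifier-family functions compared in the study have these standard definitions (tanh and sigmoid are the classical saturating choices; tanh is available as np.tanh):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.01):
    return np.where(x > 0, x, a * x)

def elu(x, a=1.0):
    # Smooth at 0; saturates to -a for large negative inputs, which
    # keeps gradients alive where the sigmoid's gradient vanishes.
    return np.where(x > 0, x, a * (np.exp(x) - 1.0))
```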

Neural adaptive equalization of M-ary QAM signals using a new activation function with a multi-saturated output region (새로운 다단계 복소 활성 함수를 이용한 신경회로망에 의한 M-ary QAM 신호의 적응 등화)

  • 유철우;홍대식
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.1
    • /
    • pp.42-54
    • /
    • 1998
  • Equalization techniques are necessary to reduce the intersymbol interference (ISI) caused by band-limited channels in digital communication. Among adaptive equalization techniques, neural networks have been used as an effective alternative for dealing with channel distortion because of their ease of implementation and nonlinear capabilities. In this paper, a complex-valued multilayer perceptron is proposed as a nonlinear adaptive equalizer. After discussing the properties a suitable complex-valued activation function must possess, a new complex-valued activation function is developed that lets the proposed scheme handle M-ary QAM signals of any constellation size. It is further proven that the nonlinear transformation of the proposed function decreases the correlation coefficient between the real and imaginary parts of the input data when they are jointly Gaussian random variables. Finally, the effectiveness of the proposed scheme is demonstrated by simulations: compared with a linear equalizer using the least mean squares (LMS) algorithm, it shows a marked improvement in bit error rate (BER) when the channel distortion is nonlinear.
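
The abstract does not give the proposed function explicitly; the sketch below is a hypothetical multi-saturated complex activation built from a staircase of tanh segments, applied to the real and imaginary parts separately so outputs settle near the per-axis levels of a square M-QAM constellation (e.g. ±1, ±3 for 16-QAM).

```python
import numpy as np

def multi_sat(z, levels=(-3.0, -1.0, 1.0, 3.0), gain=4.0):
    """Hypothetical multi-saturated activation for complex QAM symbols;
    the paper's actual function is not specified in the abstract."""
    def staircase(v):
        out = np.full_like(np.asarray(v, dtype=float), levels[0])
        for lo, hi in zip(levels[:-1], levels[1:]):
            mid = (lo + hi) / 2.0  # transition between adjacent levels
            out += (hi - lo) / 2.0 * (np.tanh(gain * (v - mid)) + 1.0)
        return out
    # Saturate the real and imaginary parts independently.
    z = np.asarray(z, dtype=complex)
    return staircase(z.real) + 1j * staircase(z.imag)

print(multi_sat(np.array([0.8 - 2.6j, -0.2 + 0.9j])))
```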


The Effect of regularization and identity mapping on the performance of activation functions (정규화 및 항등사상이 활성함수 성능에 미치는 영향)

  • Ryu, Seo-Hyeon;Yoon, Jae-Bok
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.10
    • /
    • pp.75-80
    • /
    • 2017
  • In this paper, we describe the effect of regularization methods and of networks with identity mapping on the performance of activation functions in deep convolutional neural networks. Activation functions act as nonlinear transformations. Early convolutional neural networks used the sigmoid function; to overcome problems of existing activation functions such as vanishing gradients, various activation functions were developed, including ReLU, leaky ReLU, parametric ReLU, and ELU. To address overfitting, regularization methods such as dropout and batch normalization were developed alongside these activation functions, and data augmentation is also commonly applied in deep learning. The activation functions above have different characteristics, but the newer regularization methods and the identity-mapping network were validated only with ReLU. We therefore experimentally show how regularization and identity mapping affect the performance of each activation function, and present the resulting performance tendencies. These results will reduce the number of training trials needed to find the best activation function.
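
Since "identity mapping" here refers to skip connections, a minimal PyTorch residual block with a swappable activation sketches the kind of network the paper evaluates; the exact architecture is not stated in the abstract, so this is an assumed pre-activation layout.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Pre-activation residual block with a pluggable nonlinearity."""
    def __init__(self, channels, activation=nn.ReLU()):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels), activation,
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), activation,
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # the identity mapping (skip connection)

# Swapping the activation under test is a one-argument change:
block = ResidualBlock(64, activation=nn.ELU())
```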

Double-stranded RNA Induces Inflammatory Gene Expression in Schwann Cells: Implication in the Wallerian Degeneration

  • Lee, Hyun-Kyoung;Park, Chan-Hee;Choi, Se-Young;Oh, Seog-Bae;Park, Kyung-Pyo;Kim, Joong-Soo;Lee, Sung-Joong
    • The Korean Journal of Physiology and Pharmacology
    • /
    • v.8 no.5
    • /
    • pp.253-257
    • /
    • 2004
  • Schwann cells play an important role in peripheral nerve regeneration. Upon neuronal injury, activated Schwann cells clean up the myelin debris by phagocytosis, and promote neuronal survival and axon outgrowth by secreting various neurotrophic factors. However, it is unclear how the nerve injury induces Schwann cell activation. Recently, it was reported that certain cytoplasmic molecules, which are secreted by cells undergoing necrotic cell death, induce immune cell activation via the toll-like receptors (TLRs). This suggests that the TLRs expressed on Schwann cells may recognize nerve damage by binding to the endogenous ligands secreted by the damaged nerve, thereby inducing Schwann cell activation. Accordingly, this study was undertaken to examine the expression and the function of the TLRs on primary Schwann cells and iSC, a rat Schwann cell line. The transcripts of TLR2, 3, 4, and 9 were detected on the primary Schwann cells as well as on iSC. The stimulation of iSC with poly(I:C), a synthetic ligand for TLR3, induced the expression of TNF-α and RANTES. In addition, poly(I:C) stimulation induced iNOS expression and nitric oxide secretion in iSC. These results suggest that the TLRs may be involved in the inflammatory activation of Schwann cells, which is observed during Wallerian degeneration after a peripheral nerve injury.

The Effects of Combined Complex Exercise with Abdominal Drawing-in Maneuver on Expiratory Abdominal Muscles Activation and Forced Pulmonary Function for Post Stroke Patients (복합운동과 복부 끌어당김 조정 훈련의 병행이 뇌졸중 환자의 호기 시 복부근육 활성도 및 노력성 폐기능에 미치는 영향)

  • Yun, Jeung-Hyun;Kim, Tae-Soo;Lee, Byung-Ki
    • Journal of the Korean Society of Physical Medicine
    • /
    • v.8 no.4
    • /
    • pp.513-523
    • /
    • 2013
  • PURPOSE: The purpose of this study was to investigate the effects of complex exercise combined with abdominal drawing-in maneuver training on forced pulmonary function and abdominal muscle activation during expiration in chronic stroke patients. METHODS: Fourteen post-stroke patients (10 males and 4 females) voluntarily participated in this study and were divided into two groups: a CEG (complex exercise group) and a CEAG (complex exercise and abdominal drawing-in maneuver group) (n=7 per group). Each group performed 30-minute exercise sessions twice a day for 6 weeks. The CEAG performed 15 minutes of complex exercise and 15 minutes of the abdominal drawing-in maneuver. For data analysis, means and standard deviations were estimated, and a non-parametric independent t-test was carried out. RESULTS: In the CEAG, FVC and the activation of the transversus abdominis/internal oblique showed statistically significant differences compared with the complex exercise group. CONCLUSION: These results indicate that complex exercise combined with the abdominal drawing-in maneuver was effective in enhancing abdominal muscle activation and pulmonary function in chronic stroke patients.

Design and Implementation of a Viewer for Class Components Retrieval (클래스 부품 검색을 위한 Viewer의 설계 및 구현)

  • 정미정;송영재
    • Proceedings of the IEEK Conference
    • /
    • 1999.11a
    • /
    • pp.426-429
    • /
    • 1999
  • Many similar class components are stored in an object storage, but the storage needs a retrieval function that finds the correct component for reuse. Accordingly, this paper designs and implements a class component retrieval viewer for the object storage using an improved spreading activation strategy. The object storage holds information on inheritance relations, superclasses, and subclasses, and we defined queries about each class's functions. We also specified connectionist relaxation between each class and the query, and finally obtained retrieval results that rank class component information containing the queried function in order of highest activation value.
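
A minimal sketch of spreading activation over a class-relation graph follows; the graph contents, decay factor, and iteration count are illustrative, and the paper's improved strategy and relaxation details are not given in the abstract.

```python
def spread_activation(graph, seeds, decay=0.5, rounds=3):
    """graph: {class: [related classes]}; seeds: classes matching the
    query. Activation flows along edges with damping, and the result
    is ranked by final activation value, highest first."""
    act = {node: 0.0 for node in graph}
    for s in seeds:
        act[s] = 1.0
    for _ in range(rounds):
        nxt = dict(act)
        for node, neighbors in graph.items():
            for nb in neighbors:
                nxt[nb] += decay * act[node]  # pass damped activation on
        act = nxt
    return sorted(act.items(), key=lambda kv: -kv[1])

# Illustrative storage: inheritance links between hypothetical classes.
g = {"Container": ["Stack", "Queue"], "Stack": ["Container"],
     "Queue": ["Container"], "Widget": []}
print(spread_activation(g, seeds=["Stack"]))
```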


Comparison of Reinforcement Learning Activation Functions to Maximize Rewards in Autonomous Highway Driving (고속도로 자율주행 시 보상을 최대화하기 위한 강화 학습 활성화 함수 비교)

  • Lee, Dongcheul
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.5
    • /
    • pp.63-68
    • /
    • 2022
  • Autonomous driving technology has recently made great progress with the introduction of deep reinforcement learning. To use deep reinforcement learning effectively, it is important to select an appropriate activation function. Many activation functions have been presented, but they show different performance depending on the environment in which they are applied. This paper compares and evaluates the performance of 12 activation functions to see which are effective when reinforcement learning is used to learn autonomous driving on highways. To this end, a performance evaluation method was presented and the average reward value of each activation function was compared. As a result, GELU obtained the highest average reward and SiLU showed the lowest performance; the average reward difference between the two activation functions was 20%.
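
The two endpoints of this comparison have simple closed forms: GELU(x) = x·Φ(x), with Φ the standard normal CDF, and SiLU (also called swish) is x·sigmoid(x):

```python
import numpy as np
from scipy.special import erf

def gelu(x):
    # GELU(x) = x * Phi(x); Phi written via the error function
    return 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))

def silu(x):
    # SiLU (a.k.a. swish): x * sigmoid(x)
    return x * (1.0 / (1.0 + np.exp(-x)))
```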