• Title/Summary/Keyword: Activation function

Search Results: 1,477

Beta and Alpha Regularizers of Mish Activation Functions for Machine Learning Applications in Deep Neural Networks

  • Mathayo, Peter Beatus;Kang, Dae-Ki
    • International Journal of Internet, Broadcasting and Communication / v.14 no.1 / pp.136-141 / 2022
  • Complex deep learning tasks such as image classification are solved with neural networks, and the choice of activation function matters. As the backpropagation algorithm advances backward from the output layer toward the input layer, the gradients often get smaller and smaller and approach zero, eventually leaving the weights of the initial (lower) layers nearly unchanged, so gradient descent never converges to the optimum. We propose a two-factor non-saturating activation function, Bea-Mish, for machine learning applications in deep neural networks. Our method uses two factors, beta (𝛽) and alpha (𝛼), to normalize the area below the boundary in the Mish activation function; we refer to these elements jointly as Bea. A clear understanding of the behaviors and conditions governing this regularization term can lead to a more principled approach for constructing better-performing activation functions. We evaluate Bea-Mish against the Mish and Swish activation functions on various models and data sets. Empirical results show that Bea-Mish outperforms native Mish by 2.51% average precision (AP50val) with a SqueezeNet backbone on CIFAR-10 and improves top-1 accuracy by 1.20% with ResNet-50 on ImageNet-1k.
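For reference, Mish is x·tanh(softplus(x)). The abstract does not give the exact Bea-Mish formula, so the two-parameter variant below is only a plausible sketch of how 𝛽 and 𝛼 might rescale the Mish gate; it reduces to plain Mish at 𝛽 = 𝛼 = 1.

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x)).
    return np.logaddexp(0.0, x)

def mish(x):
    # Mish: x * tanh(softplus(x)).
    return x * np.tanh(softplus(x))

def bea_mish(x, beta=1.0, alpha=1.0):
    # Hypothetical two-factor variant: beta and alpha rescale the
    # softplus argument and the tanh gate. The paper's actual
    # parameterization is not stated in the abstract.
    return x * np.tanh(beta * softplus(alpha * x))

x = np.linspace(-4.0, 4.0, 9)
print(mish(x))
print(bea_mish(x, beta=0.9, alpha=1.1))
```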

Comparison of Deep Learning Activation Functions for Performance Improvement of a 2D Shooting Game Learning Agent (2D 슈팅 게임 학습 에이전트의 성능 향상을 위한 딥러닝 활성화 함수 비교 분석)

  • Lee, Dongcheul;Park, Byungjoo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.2 / pp.135-141 / 2019
  • Recently, there has been active research on building artificial intelligence agents that learn how to play a game using reinforcement learning. Learning performance can vary widely depending on which deep learning activation function is used when training the agent. This paper compares activation functions when training an agent to learn how to play a 2D shooting game using reinforcement learning. We defined performance metrics to analyze the results and plotted them over training time. As a result, we found that ELU (Exponential Linear Unit) with parameter 1.0 achieved higher rewards than the other activation functions; there was a 23.6% gap between the best and the worst activation function.
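The winning activation here, ELU with its parameter set to 1.0, has the standard form below (identity for positive inputs, exponential saturation toward -alpha for negative ones):

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU: x for x > 0, alpha * (exp(x) - 1) for x <= 0.
    # alpha = 1.0 is the parameter value the paper found best.
    return np.where(x > 0, x, alpha * np.expm1(x))

print(elu(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))
```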

A Comparative Analysis of Reinforcement Learning Activation Functions for Parking of Autonomous Vehicles (자율주행 자동차의 주차를 위한 강화학습 활성화 함수 비교 분석)

  • Lee, Dongcheul
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.6 / pp.75-81 / 2022
  • Autonomous vehicles, which could dramatically alleviate the shortage of parking spaces, are making great progress through deep reinforcement learning. Various activation functions have been proposed for deep reinforcement learning, but their performance varies widely with the application environment, so finding the optimal activation function for a given environment is important for effective learning. This paper analyzes 12 activation functions commonly used in reinforcement learning to evaluate which is most effective when autonomous vehicles learn to park via deep reinforcement learning. To this end, a performance evaluation environment was established, and each activation function was compared on average reward, success rate, episode length, and vehicle speed. GELU yielded the highest reward and ELU the lowest; the reward difference between the two activation functions was 35.2%.
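The two extremes of this comparison, GELU (best) and ELU (worst), differ mainly in how they treat negative inputs: GELU decays back toward 0 while ELU saturates at -alpha. A minimal side-by-side, using the common tanh approximation of GELU:

```python
import math

def gelu(x):
    # GELU, tanh approximation:
    # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

def elu(x, alpha=1.0):
    # ELU for comparison: identity above 0, exponential saturation below.
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

# For negative inputs GELU tends back toward 0, ELU toward -alpha.
for x in (-3.0, -1.0, 0.0, 1.0):
    print(x, round(gelu(x), 4), round(elu(x), 4))
```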

A Speed Control Scheme with The Torque Compensator based on the Activation Function for PMSM (PMSM에 대한 활성화 함수를 가지는 토크 보상기의 속도제어)

  • Kim, Hong Min;Lim, Geun Min;Ahn, Jin Woo;Lee, Dong Hee
    • Proceedings of the KIPE Conference / 2011.11a / pp.315-316 / 2011
  • This paper presents a speed control scheme for the PMSM with a torque compensator that reduces speed error and ripple. The proposed speed controller is based on the conventional PI control scheme, but the additional torque compensator, unlike a conventional differential controller, produces a compensation torque to suppress speed ripple. To determine the proper compensation, an activation function with discrete values is used in the proposed control scheme; with it, the compensation torque acts to suppress growth of the speed error. The proposed speed control scheme is verified by computer simulation and experiments on a 400 W PMSM, in which it shows better control performance than the conventional PI and PID control schemes.
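The structure described, a PI loop plus a compensation torque gated by a discrete-valued activation function of the speed error, can be sketched as below. The three-level sign function, its threshold, and all gains are illustrative assumptions; the abstract does not give the paper's actual levels or values.

```python
def discrete_activation(err, threshold=0.5):
    # Discrete-valued activation: returns -1, 0, or +1 depending on the
    # speed-error band. Levels and threshold are assumed, not the paper's.
    if err > threshold:
        return 1
    if err < -threshold:
        return -1
    return 0

def pi_with_compensator(speed_ref, speed, integ, kp=0.8, ki=0.05, kc=0.2, dt=1e-3):
    # One discrete-time step of the PI speed loop with the added
    # compensation torque (gains kp, ki, kc are placeholders).
    err = speed_ref - speed
    integ += err * dt
    torque = kp * err + ki * integ            # conventional PI term
    torque += kc * discrete_activation(err)   # compensation torque
    return torque, integ
```

The compensator only fires once the speed error leaves the dead band, so it adds corrective torque without the noise sensitivity of a pure differential term.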

A New Fuzzy Supervised Learning Algorithm

  • Kim, Kwang-Baek;Yuk, Chang-Keun;Cha, Eui-Young
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.06a / pp.399-403 / 1998
  • In this paper, we propose a new fuzzy supervised learning algorithm. We construct and train a new type of fuzzy neural net to model a linear activation function. Properties of our fuzzy neural net include: (1) the proposed linear activation function; and (2) a modified delta rule for the learning algorithm. We applied the proposed learning algorithm to exclusive OR and 3-bit parity, standard benchmarks in neural networks, and to pattern recognition problems, a kind of image recognition.
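The baseline this work modifies is the classic delta rule with a linear activation y = w·x + b. The sketch below shows that baseline only; the paper's fuzzy modification is not specified in the abstract. Note a single linear unit cannot solve XOR, which is why the benchmark needs the multi-layer fuzzy net the paper describes.

```python
def train_delta(samples, lr=0.1, epochs=200):
    # Standard delta rule with a linear activation y = w.x + b.
    # (The paper's "modified" rule builds on this; its exact form
    # is not given in the abstract.)
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = t - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# A linearly separable toy target (AND); XOR itself is not learnable
# by this single linear unit.
data = [([0, 0], 0.0), ([0, 1], 0.0), ([1, 0], 0.0), ([1, 1], 1.0)]
w, b = train_delta(data)
```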

Antimicrobial Activities of Chopi(Zanthoxylum piperitum DC.) Extract (초피추출물의 항균특성)

  • 정순경;정재두;조성환
    • Journal of the Korean Society of Food Science and Nutrition / v.28 no.2 / pp.371-377 / 1999
  • In order to evaluate natural herb extracts as antimicrobial agents or packaging materials for the preservation of foods and greenhouse produce, a water extract of chopi (Zanthoxylum piperitum DC.) was prepared and its antimicrobial activity determined. In the paper disk test, antimicrobial activity increased in proportion to concentration, and the growth of microorganisms was completely inhibited above 500 ppm. The extract showed a wide spectrum of thermal (40 to 180°C) and pH (4 to 10) stability. Electron microscopic observation (TEM and SEM) of microbial morphological changes indicated decreased activity of physiological enzymes and loss of cell membrane function. Even in the galactosidase activation test, the extract appeared to weaken the osmotic function of cell membranes remarkably in comparison with chloroform, with activity corresponding to 40~50% of toluene. The Zanthoxylum piperitum DC. extract appears to be an excellent antimicrobial for inhibiting food-borne microorganisms and preserving greenhouse produce.

A Fundamental Study on the Effect of Activation Function in Predicting Carbonation Progress Using Deep Learning Algorithm (딥러닝 알고리즘 기반 탄산화 진행 예측에서 활성화 함수 적용에 관한 기초적 연구)

  • Jung, Do-Hyun;Lee, Han-Seung
    • Proceedings of the Korean Institute of Building Construction Conference / 2019.11a / pp.60-61 / 2019
  • Concrete carbonation is one of the factors that reduce the durability of concrete. Owing to industrialization, the carbon dioxide concentration in the atmosphere is increasing, and with it the impact of carbonation, so understanding carbonation resistance as a function of the concrete mix is important for securing the durability life of concrete. In this study, we predict the concrete carbonation velocity coefficient, an indicator of the carbonation resistance of concrete, with a deep learning algorithm, and seek the activation function best suited to predicting the carbonation rate coefficient by measuring learning accuracy. Within the scope of this study, the ReLU function showed better accuracy than the other activation functions.
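A minimal sketch of the kind of regressor involved: one ReLU hidden layer and a linear output for the carbonation rate coefficient. The layer sizes, random weights, and input features (e.g. W/C ratio, binder content, curing age) are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # ReLU: max(0, x) elementwise, the activation that performed best here.
    return np.maximum(0.0, x)

def mlp_forward(X, W1, b1, W2, b2):
    # One ReLU hidden layer, linear output for the (scalar)
    # carbonation rate coefficient.
    return relu(X @ W1 + b1) @ W2 + b2

X = rng.normal(size=(4, 3))          # hypothetical mix features per sample
W1 = rng.normal(size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
y = mlp_forward(X, W1, b1, W2, b2)   # one predicted coefficient per sample
```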

Analysis on the Accuracy of Building Construction Cost Estimation by Activation Function and Training Model Configuration (활성화함수와 학습노드 진행 변화에 따른 건축 공사비 예측성능 분석)

  • Lee, Ha-Neul;Yun, Seok-Heon
    • Journal of KIBIM / v.12 no.2 / pp.40-48 / 2022
  • It is very important to accurately predict construction costs in the early stages of a construction project, but this is difficult with the limited information available at that stage. In recent years, with the development of machine learning technology, it has become possible to predict construction costs more accurately than before using only schematic construction characteristics. Based on machine learning, this study analyzes how to predict construction costs more accurately using only the factors that influence them. To this end, the effect on the error rate of the activation function and of the node configuration of the hidden layer was analyzed.
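The experimental design, sweeping activation function against hidden-layer node count and recording the error rate of each configuration, amounts to a small grid search. The skeleton below uses dummy error numbers purely to make the loop runnable; the candidate lists and `train_and_eval` body are placeholders, not the paper's settings.

```python
from itertools import product

activations = ["relu", "sigmoid", "tanh"]   # candidate activations (assumed)
hidden_nodes = [16, 32, 64]                 # candidate hidden-layer sizes (assumed)

def train_and_eval(act, nodes):
    # Placeholder for one training run: should fit the cost-estimation
    # model with this configuration and return its error rate.
    # The numbers here are dummies for illustration only.
    return {"relu": 0.10, "sigmoid": 0.14, "tanh": 0.12}[act] + 1.0 / nodes

# Error rate for every (activation, node-count) cell of the grid.
results = {(a, n): train_and_eval(a, n) for a, n in product(activations, hidden_nodes)}
best = min(results, key=results.get)
```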

Activation Function Selection of CNN for Induction Motor Stator Fault Diagnosis (유도전동기의 고정자 고장 진단을 위한 CNN의 활성화 함수 선정)

  • Kim, Kyoung-Min;Kim, Yong-Hyeon;Park, Guen-Ho;Lee, Buhm;Lee, Sang-Ro;Goh, Yeong-Jin
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.2 / pp.287-292 / 2021
  • In this paper, we propose an efficient way to apply a CNN by analyzing the effect of the activation function on fault diagnosis of the induction motor stator. Generally, the main purpose of stator fault diagnosis is to prevent failure by rapidly diagnosing a minute inter-turn short. In our experiments with activation functions, the Sigmoid function was 23.23% more accurate in diagnosis than the ReLU function, although ReLU showed superiority on overall stator faults.
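A sigmoid output suits this task because the diagnosis is a binary decision: the final logit is squashed to a turn-short probability and thresholded. The decision threshold and function names below are illustrative, not from the paper.

```python
import math

def sigmoid(z):
    # Logistic sigmoid: maps any logit to (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def diagnose(logit, threshold=0.5):
    # Binary stator-fault decision from the CNN's final logit.
    # threshold = 0.5 is an assumed operating point.
    p = sigmoid(logit)
    return ("fault" if p >= threshold else "healthy"), p
```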

APPROXIMATION ORDER TO A FUNCTION IN Lp SPACE BY GENERALIZED TRANSLATION NETWORKS

  • HAHM, NAHMWOO;HONG, BUM IL
    • Honam Mathematical Journal / v.28 no.1 / pp.125-133 / 2006
  • We investigate the approximation order to a function in $L_p$[-1, 1] for $0{\leq}p<{\infty}$ by generalized translation networks. In most papers on neural network approximation, sigmoidal functions are adopted as the activation function; in our research, we instead choose an infinitely many times continuously differentiable function as the activation function. Using the integral modulus of continuity and the divided difference formula, we obtain the approximation order to a function in $L_p$[-1, 1].
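For orientation, the two objects named in this abstract are standard in the approximation literature (the notation below is assumed, not quoted from the paper): a generalized translation network is a finite sum of scaled and shifted copies of the activation $\phi$, and the integral modulus of continuity measures smoothness in the $L_p$ norm.

```latex
% Generalized translation network with activation \phi
N_n(x) \;=\; \sum_{k=1}^{n} c_k \, \phi(a_k x + b_k),
\qquad a_k, b_k, c_k \in \mathbb{R},

% Integral modulus of continuity in L_p[-1, 1]
\omega_p(f, t) \;=\; \sup_{0 < h \le t}
\bigl\| f(\cdot + h) - f(\cdot) \bigr\|_{L_p[-1,\,1-h]}.
```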