• Title/Summary/Keyword: ReLU

Search Results: 10

Optimization of Model based on Relu Activation Function in MLP Neural Network Model

  • Ye Rim Youn;Jinkeun Hong
    • International journal of advanced smart convergence / v.13 no.2 / pp.80-87 / 2024
  • This paper focuses on improving accuracy in constrained computing settings by employing the ReLU (Rectified Linear Unit) activation function. The research modifies parameters of the ReLU function and compares performance in terms of accuracy and computational time, specifically optimizing ReLU in the context of a Multilayer Perceptron (MLP) by determining ideal values for features such as the dimensions of the linear layers and the learning rate (lr). The experimental results show that using ReLU alone yielded the highest accuracy of 96.7% when the dimension sizes were 30 - 10 and the lr value was 1. When combining ReLU with the Adam optimizer, the optimal model configuration had dimension sizes of 60 - 40 - 10 and an lr value of 0.001, which resulted in the highest accuracy of 97.07%.
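A minimal sketch of the kind of MLP forward pass the abstract describes, using the paper's best ReLU-only layer sizes (30 - 10); the 784-dimensional input width and the weight initialization are illustrative assumptions, not details from the paper.

```python
import numpy as np

def relu(x):
    # ReLU: max(0, x), zeroing out negative pre-activations
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Hypothetical linear layers following the paper's best ReLU-only
# configuration (hidden dimensions 30 -> 10); input width 784 is an
# MNIST-like assumption.
W1 = rng.standard_normal((784, 30)) * 0.01
W2 = rng.standard_normal((30, 10)) * 0.01

def forward(x):
    h = relu(x @ W1)   # hidden layer with ReLU activation
    return h @ W2      # output logits (softmax and loss omitted)

x = rng.standard_normal((1, 784))
logits = forward(x)
```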

Image Segmentation of Fuzzy Deep Learning using Fuzzy Logic (퍼지 논리를 이용한 퍼지 딥러닝 영상 분할)

  • Jongjin Park
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.5 / pp.71-76 / 2023
  • In this paper, we propose the fuzzy U-Net, a fuzzy deep learning model that applies fuzzy logic to improve image segmentation performance. Fuzzy modules using fuzzy logic were combined with U-Net, a deep learning model with excellent image segmentation performance, and various types of fuzzy modules were simulated. The fuzzy module of the proposed model learns the intrinsic and complex rules between the feature maps of images and the corresponding segmentation results. The superiority of the proposed method was demonstrated by applying it to dental CBCT data: in the simulations, the ADD-RELU fuzzy module structure, which uses an addition skip connection in the proposed fuzzy U-Net, achieved the best test-set performance of 0.7928.
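The ADD-RELU structure the abstract names can be sketched as an addition skip connection followed by ReLU; the transform `f` below stands in for the paper's (unspecified) fuzzy module and is purely illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def add_relu_block(x, f):
    # Addition skip connection followed by ReLU, in the spirit of the
    # ADD-RELU fuzzy module structure; `f` is a placeholder for the
    # fuzzy module transform, which the abstract does not specify.
    return relu(x + f(x))

# Illustrative transform: a fixed elementwise scaling.
out = add_relu_block(np.array([-2.0, 1.0, 3.0]), lambda x: 0.5 * x)
```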

Performance Evaluation of YOLOv5 Model according to Various Hyper-parameters in Nuclear Medicine Phantom Images (핵의학 팬텀 영상에서 초매개변수 변화에 따른 YOLOv5 모델의 성능평가)

  • Min-Gwan Lee;Chanrok Park
    • Journal of the Korean Society of Radiology / v.18 no.1 / pp.21-26 / 2024
  • One of the well-known deep learning models for object detection is the you only look once version 5 (YOLOv5) framework, based on a one-stage architecture. The YOLOv5 model has shown high performance for accurate lesion detection using the bottleneck CSP layer and skip connections. The purpose of this study was to evaluate the performance of the YOLOv5 framework under various hyperparameters on positron emission tomography (PET) phantom images. The dataset was obtained from the QIN PET segmentation challenge, comprising 500 slices. We set the bounding boxes to generate the ground truth dataset using the labelImg software. The hyperparameters for network training were varied across the optimization function (SGD, Adam, and AdamW), the activation function (SiLU, LeakyReLU, Mish, and Hardswish), and the YOLOv5 model size (nano, small, large, and xlarge). The intersection over union (IoU) method was used for performance evaluation. As a result, the best-performing condition was AdamW, Hardswish, and nano size for the optimization function, activation function, and model version, respectively. In conclusion, we confirmed the usefulness of the YOLOv5 network for object detection in nuclear medicine images.
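The IoU metric used for evaluation above is the ratio of the overlap area to the union area of a predicted and a ground-truth bounding box; a minimal sketch for axis-aligned boxes in (x1, y1, x2, y2) form:

```python
def iou(box_a, box_b):
    # Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```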

A Study on the Accuracy Improvement of One-repetition Maximum based on Deep Neural Network for Physical Exercise

  • Lee, Byung-Hoon;Kim, Myeong-Jin;Kim, Kyung-Seok
    • International journal of advanced smart convergence / v.8 no.2 / pp.147-154 / 2019
  • In this paper, we conducted a study that utilizes deep learning to calculate appropriate physical exercise information from basic user attributes such as sex, age, height, and weight. A method was applied to calculate the body fat needed to derive the one-repetition maximum, utilizing the structure of a basic Deep Neural Network. By applying accuracy-improvement methods such as ReLU, weight initialization, and dropout to the existing deep learning structure, we improved accuracy to derive a lean body weight closer to actual measurements. In addition, results were derived by applying a formula for calculating the one-repetition maximum load for upper- and lower-body movements for use in actual physical exercise. If studies such as the one in this paper continue, they will be able to suggest effective physical exercise options for users under a variety of conditions.
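The abstract mentions a one-repetition-maximum formula without naming it; as one common example of this class of estimate, the Epley formula computes 1RM from a submaximal weight and repetition count. This is shown only for illustration and is not necessarily the formula the paper uses.

```python
def one_rep_max_epley(weight, reps):
    # Epley formula: 1RM = w * (1 + reps / 30).
    # Shown as a representative 1RM estimate; the paper does not
    # specify which formula it applies.
    return weight * (1.0 + reps / 30.0)
```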

YOLO Model FPS Enhancement Method for Determining Human Facial Expression based on NVIDIA Jetson TX1 (NVIDIA Jetson TX1 기반의 사람 표정 판별을 위한 YOLO 모델 FPS 향상 방법)

  • Bae, Seung-Ju;Choi, Hyeon-Jun;Jeong, Gu-Min
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.5 / pp.467-474 / 2019
  • In this paper, we propose a novel method to improve FPS while maintaining the accuracy of the YOLO v2 model on the NVIDIA Jetson TX1. In general, conversion to integer operations or reducing the depth of the network has been used to reduce the amount of computation, but recognition accuracy can deteriorate. Instead, we reduce computation and memory consumption by adjusting filter sizes and integrating computation within the network. The first method is to replace the 3×3 filter with a 1×1 filter, which reduces the number of parameters to one-ninth. The second method is to reduce computation through CBR (Convolution-Add Bias-ReLU), one of the inference-acceleration functions of TensorRT, and the last method is to reduce memory consumption by integrating repeated layers using TensorRT. In the simulation results, although accuracy decreased by 1% compared to the existing YOLO v2 model, FPS improved from 3.9 FPS to 11 FPS.
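The one-ninth parameter reduction claimed above follows directly from the weight count of a convolution, in_channels × out_channels × k × k; the channel sizes below are illustrative, not taken from the paper.

```python
def conv_params(in_ch, out_ch, k):
    # Weight count of a k x k convolution layer (bias omitted).
    return in_ch * out_ch * k * k

# For identical channel counts, a 1x1 filter needs one-ninth the
# parameters of a 3x3 filter; 64 channels chosen only for illustration.
p3 = conv_params(64, 64, 3)
p1 = conv_params(64, 64, 1)
```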

Prediction of KBO playoff Using the Deep Neural Network (DNN을 활용한 'KBO' 플레이오프진출 팀 예측)

  • Ju-Hyeok Park;Yang-Jae Lee;Hee-Chang Han;Yoo-Lim Jun;Yoo-Jin Moon
    • Proceedings of the Korean Society of Computer Information Conference / 2023.01a / pp.315-316 / 2023
  • In this paper, we propose a method of designing and implementing a Deep Neural Network (DNN) system that uses deep learning to predict the probability of advancing to the playoffs in the next KBO (Korea Baseball Organization) season. As the research method, KBO season data collected from 1999 onward were analyzed, confirming that per-season statistics such as average runs per game, batter OPS, and pitcher WHIP have a significant effect on season outcomes. In model design, higher accuracy was obtained using the relu, tanh, and sigmoid functions than using the linear and softmax functions. Predicting the actual 2022 season results yielded an accuracy of 88%. It was demonstrated that when heavily weighted variables such as the number of wild pitches and home runs allowed are favorable, the season outcome is good. The system designed in this paper is considered applicable to roster construction not only for KBO clubs but for any baseball team.

Prediction of the Number of Crimes according to Urban Environmental Factors in the Metropolitan Area (수도권 도시 환경 요인에 따른 범죄 발생 건수 예측)

  • Ye-Won Jang;Ye-Lim Kim;Si-Hyeon Park;Jae-Young Lee;Yoo-Jin Moon
    • Proceedings of the Korean Society of Computer Information Conference / 2023.01a / pp.321-322 / 2023
  • In this paper, we propose a model that predicts the number of crimes according to urban environmental factors in the Seoul metropolitan area, using the LinearRegression model of the Scikit-learn package and a Keras deep learning model. As the research method, datasets for each metropolitan district judged to have a meaningful relationship with crime were analyzed, confirming that the numbers of CCTVs, police substations, and streetlights have a significant effect on crime occurrence. Normalization was performed to reduce the scale differences among the independent variables, and a log transformation was applied to the dependent variable to secure normality. The model was constructed using the 'relu' function and evaluated with MSE (Mean Squared Error) as the performance metric. The program designed in this paper is expected to provide a basis for practical measures such as additional deployment of police personnel and expansion of safety facilities in districts with high crime rates.

Detection Method of Vehicle Fuel-cut Driving with Deep-learning Technique (딥러닝 기법을 이용한 차량 연료차단 주행의 감지법)

  • Ko, Kwang-Ho
    • Journal of the Korea Convergence Society / v.10 no.11 / pp.327-333 / 2019
  • Fuel-cut driving begins when the accelerator pedal is released with a transmission gear engaged, and active fuel-cut driving improves a vehicle's fuel economy. In this study, a deep learning technique is proposed to predict fuel-cut driving from vehicle speed, acceleration, and road gradient data. Networks with 3-10 hidden layers and 10-20 variables were applied to 9,600 data points obtained from a 12 km test drive. Accuracy was about 84.5% with 10 variables, 7 hidden layers, and ReLU as the activation function. The error is attributed to the input data changing at a higher rate than the fuel consumption data, so accuracy could be improved by normalizing the input data. The method requires no vehicle injector or OBD signal, applying deep learning to data that is easy to obtain, such as GPS. Since its computing time is small, it can contribute to eco-driving.
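The normalization the abstract suggests as a remedy could be as simple as min-max scaling each input column to [0, 1]; the sample values below are illustrative, not the paper's data.

```python
import numpy as np

def min_max_normalize(x):
    # Scale each feature column to [0, 1]; a simple way to damp the
    # large differences in input change rates the abstract cites as
    # an error source.
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / (hi - lo)

# Illustrative rows of (speed, road gradient) samples.
data = np.array([[10.0, 0.1], [20.0, 0.3], [30.0, 0.5]])
norm = min_max_normalize(data)
```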

Quantitative Analysis for Win/Loss Prediction of 'League of Legends' Utilizing the Deep Neural Network System through Big Data

  • No, Si-Jae;Moon, Yoo-Jin;Hwang, Young-Ho
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.213-221 / 2021
  • In this paper, we present a Deep Neural Network model system for predicting the results of 'League of Legends (LOL)' matches. The model utilized approximately 26,000 LOL matches and Keras with TensorFlow, and it achieved an accuracy of 93.75%, without overfitting, in predicting the '2020 League of Legends Worlds Championship' from real mid-game data. It employed the Sigmoid, ReLU, and Logcosh functions for better performance. The experiments found that four variables largely affected prediction accuracy: 'Dragon Gap', 'Level Gap', 'Blue Rift Heralds', and 'Tower Kills Gap', so ordinary users can also use the model to help develop game strategies by focusing on these four elements. Furthermore, the model can be applied to predicting matches in professional e-sports leagues around the world and can serve as a useful training indicator for professional teams, contributing to the vitalization of e-sports.
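Of the functions named above, log-cosh is a regression loss: roughly quadratic near zero and linear for large errors. A minimal sketch of its standard form (the paper's exact usage is not specified):

```python
import numpy as np

def logcosh_loss(y_pred, y_true):
    # Log-cosh loss: mean of log(cosh(error)); behaves like squared
    # error for small errors and like absolute error for large ones.
    return np.mean(np.log(np.cosh(y_pred - y_true)))
```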

A Performance Comparison of Super Resolution Model with Different Activation Functions (활성함수 변화에 따른 초해상화 모델 성능 비교)

  • Yoo, Youngjun;Kim, Daehee;Lee, Jaekoo
    • KIPS Transactions on Software and Data Engineering / v.9 no.10 / pp.303-308 / 2020
  • The ReLU (Rectified Linear Unit) function has been the dominant standard activation function in most deep artificial neural network models since it was proposed. Later, the Leaky ReLU, Swish, and Mish activation functions were presented to replace ReLU and showed improved performance over it in image classification tasks. We therefore examined whether performance improvements could be achieved by replacing ReLU with other activation functions in the super-resolution task. In this paper, performance was compared by changing the activation functions in the EDSR model, which has shown stable performance in super resolution. When the resolution was upscaled by a factor of two, the existing ReLU activation showed similar or higher performance than the other activation functions in the experiment. When the resolution was upscaled by a factor of four, the Leaky ReLU and Swish functions showed slightly improved performance over ReLU: in PSNR and SSIM, which quantitatively evaluate image quality, Leaky ReLU gave average improvements of 0.06% and 0.05%, and Swish gave average improvements of 0.06% and 0.03%. When the resolution was upscaled by a factor of eight, the Mish function showed a slight average improvement over ReLU, with PSNR and SSIM gains of 0.06% and 0.02% on average. In conclusion, Leaky ReLU and Swish outperformed ReLU for 4x super resolution, and Mish outperformed ReLU for 8x super resolution. In future work, comparative experiments replacing the activation function with Leaky ReLU, Swish, and Mish should be conducted to improve performance in other super-resolution models.
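The four activations compared above, in their standard textbook forms (a reference sketch, independent of the EDSR implementation the paper used):

```python
import numpy as np

def relu(x):
    # max(0, x)
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Identity for positive inputs, small slope alpha for negatives.
    return np.where(x > 0, x, alpha * x)

def swish(x):
    # x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def mish(x):
    # x * tanh(softplus(x)), with softplus(x) = log(1 + exp(x))
    return x * np.tanh(np.log1p(np.exp(x)))
```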