• Title/Summary/Keyword: Deep Learning MLP


Prediction Model of Software Fault using Deep Learning Methods (딥러닝 기법을 사용하는 소프트웨어 결함 예측 모델)

  • Hong, Euyseok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.4 / pp.111-117 / 2022
  • Software fault prediction models have been studied for decades, and models using machine learning techniques have shown the best performance. Deep learning has become the most popular branch of machine learning, but few studies have used it as the classifier of a fault prediction model; some have instead used deep learning only to extract semantic information from the input source code or syntactic data. In this paper, we build several models by varying the structure and hyperparameters of MLPs with three or more hidden layers. In the evaluation experiments, the MLP-based deep learning models showed Accuracy similar to the existing models but significantly better AUC, and they also outperformed another deep learning model, a CNN.
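
  The paper itself gives no code; the following is a minimal sketch of the kind of model it describes: an MLP with three hidden layers for binary fault classification, evaluated on Accuracy and AUC. The input width of 20 software metrics, the layer sizes, and the placeholder data are assumptions for illustration only.

      # Minimal sketch (not the authors' implementation): a 3-hidden-layer MLP
      # fault classifier evaluated on Accuracy and AUC. Input width and layer
      # sizes are illustrative assumptions.
      import numpy as np
      from tensorflow import keras
      from tensorflow.keras import layers

      n_metrics = 20                       # assumed number of module metrics
      model = keras.Sequential([
          keras.Input(shape=(n_metrics,)),
          layers.Dense(64, activation="relu"),
          layers.Dense(64, activation="relu"),
          layers.Dense(64, activation="relu"),
          layers.Dense(1, activation="sigmoid"),   # fault / no-fault
      ])
      model.compile(optimizer="adam",
                    loss="binary_crossentropy",
                    metrics=["accuracy", keras.metrics.AUC(name="auc")])

      # Dummy data stands in for a real software defect dataset.
      X = np.random.rand(500, n_metrics).astype("float32")
      y = np.random.randint(0, 2, size=(500, 1))
      model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2, verbose=0)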

Design of CNN with MLP Layer (MLP 층을 갖는 CNN의 설계)

  • Park, Jin-Hyun;Hwang, Kwang-Bok;Choi, Young-Kiu
    • Journal of the Korean Society of Mechanical Technology / v.20 no.6 / pp.776-782 / 2018
  • Since LeCun introduced the basic CNN structure in 1989, there has been no major structural change other than making the networks deeper. A deeper network has greater expressive power because its abstraction ability improves, and it can learn more complex problems through increased non-linearity. However, a deep network also suffers from vanishing gradients and longer training times. In this study, we propose a CNN structure with an MLP layer. The proposed CNNs outperform a conventional CNN in classification performance, and experiments confirm that the higher classification accuracy comes from the MLP layer, which increases the non-linearity of the network. In other words, performance can be improved by increasing the network's non-linearity without making the network deeper.
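
  The abstract does not specify the exact layer configuration, so the sketch below only illustrates the general idea: keep the convolutional part shallow and add an MLP block before the output to raise non-linearity. The 32x32 grayscale input, filter counts, and dense widths are assumptions.

      # Illustrative sketch only: a shallow CNN whose classification head is a
      # small MLP block rather than a single dense output layer.
      from tensorflow import keras
      from tensorflow.keras import layers

      def cnn_with_mlp_head(input_shape=(32, 32, 1), n_classes=10):
          inputs = keras.Input(shape=input_shape)
          x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
          x = layers.MaxPooling2D()(x)
          x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
          x = layers.MaxPooling2D()(x)
          x = layers.Flatten()(x)
          # MLP block added to increase non-linearity without a deeper conv stack
          x = layers.Dense(128, activation="relu")(x)
          x = layers.Dense(64, activation="relu")(x)
          outputs = layers.Dense(n_classes, activation="softmax")(x)
          return keras.Model(inputs, outputs)

      model = cnn_with_mlp_head()
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])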

Deep Learning-based Product Recommendation Model for Influencer Marketing (인플루언서를 위한 딥러닝 기반의 제품 추천모델 개발)

  • Song, Hee Seok;Kim, Jae Kyung
    • Journal of Information Technology Applications and Management / v.29 no.3 / pp.43-55 / 2022
  • In this study, with the goal of developing a deep learning-based product recommendation model for effectively matching influencers and products, we developed and tested three deep learning collaborative filtering models: generalized matrix factorization (GMF), a multi-layer perceptron (MLP) based collaborative filtering model, and neural matrix factorization (NeuMF), a hybrid model combining GMF and MLP. In particular, we used the one-class problem free boosting (OCF-B) method to address the one-class problem that arises when a deep learning-based collaborative filtering recommender is trained only on positive examples from implicit feedback. Regarding model selection, the MLP model showed the highest performance, with a weighted average precision, weighted average recall, and F1 score of 0.85 (n=3,000, term=15). This study is practically meaningful because it attempts to commercialize a deep learning-based recommendation system in a setting where influencers' promotion data are accumulating but a practical personalized recommendation service has not yet been commercially deployed.
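
  For orientation only, the sketch below shows the generic NeuMF idea the abstract refers to: a GMF branch (element-wise product of embeddings) and an MLP branch (concatenated embeddings through dense layers) fused at the output. It is not the authors' implementation; user/item counts, embedding size, and layer widths are assumptions.

      # Rough sketch of the NeuMF structure (GMF branch + MLP branch).
      from tensorflow import keras
      from tensorflow.keras import layers

      def neumf(n_users, n_items, dim=16):
          user_in = keras.Input(shape=(1,), dtype="int32")
          item_in = keras.Input(shape=(1,), dtype="int32")

          # GMF branch: element-wise product of user and item embeddings
          gmf_u = layers.Flatten()(layers.Embedding(n_users, dim)(user_in))
          gmf_i = layers.Flatten()(layers.Embedding(n_items, dim)(item_in))
          gmf = layers.Multiply()([gmf_u, gmf_i])

          # MLP branch: concatenated embeddings passed through dense layers
          mlp_u = layers.Flatten()(layers.Embedding(n_users, dim)(user_in))
          mlp_i = layers.Flatten()(layers.Embedding(n_items, dim)(item_in))
          mlp = layers.Concatenate()([mlp_u, mlp_i])
          mlp = layers.Dense(32, activation="relu")(mlp)
          mlp = layers.Dense(16, activation="relu")(mlp)

          out = layers.Dense(1, activation="sigmoid")(layers.Concatenate()([gmf, mlp]))
          return keras.Model([user_in, item_in], out)

      model = neumf(n_users=1000, n_items=500)
      model.compile(optimizer="adam", loss="binary_crossentropy")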

Prediction of Software Fault Severity using Deep Learning Methods (딥러닝을 이용한 소프트웨어 결함 심각도 예측)

  • Hong, Euyseok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.6 / pp.113-119 / 2022
  • In software fault prediction, a multi-class model that predicts the fault severity category of a module can be far more useful than a binary model that merely predicts the presence or absence of faults. A small number of severity-based fault prediction models have been proposed, but none has used a deep learning classifier. In this paper, we construct MLP models with 3 or 5 hidden layers, with either a fixed or a variable number of nodes per hidden layer. In the evaluation experiments, the MLP-based deep learning models showed significantly better Accuracy and AUC than the shallow MLPs that had performed best among non-deep-learning models. In particular, the structure with 3 hidden layers, a batch size of 32, and 64 nodes performed best.
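
  As a sketch only, the helper below builds a multi-class severity MLP with either a fixed or a tapering number of nodes per hidden layer, echoing the two structures compared above, and defaults to the best reported configuration (3 hidden layers, 64 nodes, batch size 32). The input width and number of severity classes are assumptions.

      # Sketch, not the paper's code: fixed vs. tapering hidden-layer widths.
      from tensorflow import keras
      from tensorflow.keras import layers

      def severity_mlp(n_metrics=20, n_classes=4, n_hidden=3,
                       nodes=64, variable=False):
          model = keras.Sequential([keras.Input(shape=(n_metrics,))])
          for i in range(n_hidden):
              width = nodes // (2 ** i) if variable else nodes   # e.g. 64-32-16
              model.add(layers.Dense(width, activation="relu"))
          model.add(layers.Dense(n_classes, activation="softmax"))
          model.compile(optimizer="adam",
                        loss="sparse_categorical_crossentropy",
                        metrics=["accuracy"])
          return model

      # Best reported configuration: 3 hidden layers, 64 fixed nodes; train
      # with batch_size=32 in model.fit(...).
      model = severity_mlp(n_hidden=3, nodes=64, variable=False)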

Estimation of Frost Occurrence using Multi-Input Deep Learning (다중 입력 딥러닝을 이용한 서리 발생 추정)

  • Yongseok Kim;Jina Hur;Eung-Sup Kim;Kyo-Moon Shim;Sera Jo;Min-Gu Kang
    • Korean Journal of Agricultural and Forest Meteorology / v.26 no.1 / pp.53-62 / 2024
  • In this study, we built models for estimating frost occurrence in South Korea using single-input and multi-input deep learning. The meteorological factors used as input were minimum temperature, wind speed, relative humidity, cloud cover, and precipitation. Statistical analysis of each factor showed significant differences between days with and without frost. When the single-input and multi-input models were evaluated, the model combining a GRU and an MLP achieved the highest average accuracy of 0.8774. These results indicate that a frost occurrence model based on multi-input deep learning performs better than models using an MLP, LSTM, or GRU alone.
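
  The sketch below only illustrates the multi-input idea: a GRU branch over a short weather time series combined with an MLP branch over daily summary features, merged before a frost/no-frost output. The sequence length, feature counts, and layer widths are assumptions, not the paper's configuration.

      # Conceptual sketch of a multi-input (GRU + MLP) frost-occurrence model.
      from tensorflow import keras
      from tensorflow.keras import layers

      seq_len, n_seq_feat, n_static = 24, 5, 5     # assumed: 24 steps x 5 variables
      seq_in = keras.Input(shape=(seq_len, n_seq_feat))   # e.g. hourly weather
      static_in = keras.Input(shape=(n_static,))          # e.g. daily summaries

      g = layers.GRU(32)(seq_in)                           # temporal branch
      m = layers.Dense(32, activation="relu")(static_in)   # MLP branch
      x = layers.Concatenate()([g, m])
      x = layers.Dense(16, activation="relu")(x)
      out = layers.Dense(1, activation="sigmoid")(x)       # frost / no frost

      model = keras.Model([seq_in, static_in], out)
      model.compile(optimizer="adam", loss="binary_crossentropy",
                    metrics=["accuracy"])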

Protein Disorder Prediction Using Multilayer Perceptrons

  • Oh, Sang-Hoon
    • International Journal of Contents / v.9 no.4 / pp.11-15 / 2013
  • "Protein Folding Problem" is considered to be one of the "Great Challenges of Computer Science" and prediction of disordered protein is an important part of the protein folding problem. Machine learning models can predict the disordered structure of protein based on its characteristic of "learning from examples". Among many machine learning models, we investigate the possibility of multilayer perceptron (MLP) as the predictor of protein disorder. The investigation includes a single hidden layer MLP, multi hidden layer MLP and the hierarchical structure of MLP. Also, the target node cost function which deals with imbalanced data is used as training criteria of MLPs. Based on the investigation results, we insist that MLP should have deep architectures for performance improvement of protein disorder prediction.

A Study on Rotating Object Classification using Deep Neural Networks (깊은신경망을 이용한 회전객체 분류 연구)

  • Lee, Yong-Kyu;Lee, Yill-Byung
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.5 / pp.425-430 / 2015
  • This paper studies how to improve the classification of rotating objects by using deep neural networks trained with a deep learning algorithm. The COIL-20 dataset is used for the rotating-object classification experiments, and three types of classifiers are compared and analyzed: a PCA classifier that reduces the dimensionality of the data with Principal Component Analysis to derive feature values and classifies by Euclidean distance; an MLP classifier that reduces the error energy with the error back-propagation algorithm; and a deep-learning DBN classifier that increases the likelihood of the training data through pre-training and reduces the error energy through fine-tuning. To identify the structure-specific error rate of the deep neural networks, the experiments vary the number of hidden layers and hidden neurons. The DBN classifier showed the lowest error rate, and its two-hidden-layer structure achieved a high recognition rate because pre-training moves the parameters to a region that is helpful for recognition.
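
  As a small illustration of the baseline described above, the sketch below implements a PCA + Euclidean-distance classifier with scikit-learn (the DBN with pre-training is not reproduced here). The 128x128 grayscale image size matches COIL-20, but the component count and placeholder data are assumptions.

      # Sketch of the PCA + Euclidean-distance baseline; not the paper's code.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neighbors import NearestCentroid
      from sklearn.pipeline import make_pipeline

      X_train = np.random.rand(200, 128 * 128)   # placeholder for COIL-20 images
      y_train = np.random.randint(0, 20, 200)    # 20 object classes
      X_test = np.random.rand(50, 128 * 128)

      # Reduce dimensionality with PCA, then classify by Euclidean distance
      # to each class centroid in the reduced space (NearestCentroid default).
      clf = make_pipeline(PCA(n_components=30), NearestCentroid())
      clf.fit(X_train, y_train)
      pred = clf.predict(X_test)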

Comparison and optimization of deep learning-based radiosensitivity prediction models using gene expression profiling in National Cancer Institute-60 cancer cell line

  • Kim, Euidam;Chung, Yoonsun
    • Nuclear Engineering and Technology / v.54 no.8 / pp.3027-3033 / 2022
  • Background: In this study, various types of deep learning models for predicting in vitro radiosensitivity from gene expression profiling were compared. Methods: Clonogenic surviving fractions at 2 Gy from previous publications and microarray gene expression data from the National Cancer Institute-60 cell lines were used as the radiosensitivity measure. Seven prediction models, comprising three distinct multi-layer perceptrons (MLP) and four different convolutional neural networks (CNN), were compared. Folded cross-validation was applied to train and evaluate model performance, and a prediction was counted as correct when its absolute error was < 0.02 or its relative error was < 10%. The models were compared in terms of prediction accuracy, training time per epoch, training fluctuations, and required computational resources. Results: The strength of the MLP-based models was their fast initial convergence and short training time per epoch, but their prediction accuracy varied significantly with the model configuration. The CNN-based models showed relatively high prediction accuracy, low training fluctuations, and a relatively small increase in memory requirements as the models deepen. Conclusion: Our findings suggest that a CNN-based model of moderate depth is appropriate when prediction accuracy matters most, while a shallow MLP-based model can be recommended when training resources or time are limited.
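
  The sketch below illustrates only the evaluation protocol stated in the abstract: cross-validation of a small MLP regressor, with a prediction counted as correct when its absolute error is < 0.02 or its relative error is < 10%. The fold count (5), feature width, layer sizes, and placeholder data are assumptions.

      # Sketch of the correctness criterion and fold-based evaluation.
      import numpy as np
      from sklearn.model_selection import KFold
      from tensorflow import keras
      from tensorflow.keras import layers

      def correct(y_true, y_pred):
          abs_err = np.abs(y_true - y_pred)
          rel_err = abs_err / np.maximum(np.abs(y_true), 1e-8)
          return np.mean((abs_err < 0.02) | (rel_err < 0.10))

      X = np.random.rand(60, 200).astype("float32")   # placeholder gene-expression features
      y = np.random.rand(60).astype("float32")        # placeholder SF2 values

      accs = []
      for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
          model = keras.Sequential([
              keras.Input(shape=(X.shape[1],)),
              layers.Dense(64, activation="relu"),
              layers.Dense(32, activation="relu"),
              layers.Dense(1),                     # predicted surviving fraction
          ])
          model.compile(optimizer="adam", loss="mse")
          model.fit(X[tr], y[tr], epochs=20, batch_size=8, verbose=0)
          accs.append(correct(y[te], model.predict(X[te], verbose=0).ravel()))
      print("mean prediction accuracy:", np.mean(accs))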

Verified Deep Learning-based Model Research for Improved Uniformity of Sputtered Metal Thin Films (스퍼터 금속 박막 균일도 예측을 위한 딥러닝 기반 모델 검증 연구)

  • Eun Ji Lee;Young Joon Yoo;Chang Woo Byun;Jin Pyung Kim
    • Journal of the Semiconductor & Display Technology / v.22 no.1 / pp.113-117 / 2023
  • As sputter equipment becomes more complex, it becomes increasingly difficult to understand which parameters affect the thickness uniformity of metal thin films deposited by sputtering. To address this issue, we validated deep learning models that can capture such complex relationships. Specifically, we trained models to predict the heights of 36 magnets from the measured film thickness, using Support Vector Machine (SVM), Multilayer Perceptron (MLP), 1D Convolutional Neural Network (1D-CNN), and 2D Convolutional Neural Network (2D-CNN) algorithms. After evaluating each model, we found that the MLP model performed best, especially when the dataset was constructed regardless of the thin-film material. In conclusion, our study suggests that the sputter source configuration can be predicted from film thickness data with a deep learning model, which makes it easier to understand the relationship between film thickness and the sputter equipment.
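
  For illustration only, the sketch below frames the MLP variant as multi-output regression: a thickness profile in, 36 magnet heights out. The number of thickness measurement points, layer widths, and placeholder data are assumptions rather than the paper's setup.

      # Sketch: MLP mapping a thickness profile to 36 magnet heights.
      import numpy as np
      from tensorflow import keras
      from tensorflow.keras import layers

      n_points, n_magnets = 49, 36          # assumed thickness grid size
      model = keras.Sequential([
          keras.Input(shape=(n_points,)),
          layers.Dense(128, activation="relu"),
          layers.Dense(64, activation="relu"),
          layers.Dense(n_magnets),           # one height per magnet
      ])
      model.compile(optimizer="adam", loss="mse", metrics=["mae"])

      X = np.random.rand(300, n_points).astype("float32")   # placeholder profiles
      y = np.random.rand(300, n_magnets).astype("float32")  # placeholder heights
      model.fit(X, y, epochs=10, batch_size=16, verbose=0)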


Deep Learning based BER Prediction Model in Underwater IoT Networks (딥러닝 기반의 수중 IoT 네트워크 BER 예측 모델)

  • Byun, JungHun;Park, Jin Hoon;Jo, Ohyun
    • Journal of Convergence for Information Technology / v.10 no.6 / pp.41-48 / 2020
  • The sensor nodes in underwater IoT networks have practical limitations in power supply, so reducing power consumption is one of the most important issues in underwater environments. AMC (Adaptive Modulation and Coding) techniques exploit the relation between SNR and BER for this purpose, but our hands-on experience shows that this relation is not tight in underwater environments. We therefore propose a deep learning-based MLP classification model that takes multiple underwater channel parameters into account at the same time. It predicts the BER with a high accuracy of 85.2%, and the proposed model can then choose the parameters that give the highest throughput. Simulation results show that the throughput can be improved by a factor of 4.4 compared with the conventionally measured results.
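
  The sketch below only conveys the classification framing: an MLP that maps several channel parameters at once to a discretized BER level, which an AMC scheme could then use to pick the highest-throughput setting. The particular parameters, the number of BER classes, and the placeholder data are assumptions.

      # Conceptual sketch: MLP classifying the expected BER level from
      # multiple underwater channel parameters (e.g. SNR, delay spread, Doppler).
      import numpy as np
      from tensorflow import keras
      from tensorflow.keras import layers

      n_params, n_ber_classes = 6, 5         # assumed
      model = keras.Sequential([
          keras.Input(shape=(n_params,)),
          layers.Dense(32, activation="relu"),
          layers.Dense(32, activation="relu"),
          layers.Dense(n_ber_classes, activation="softmax"),
      ])
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])

      X = np.random.rand(2000, n_params).astype("float32")   # placeholder channel logs
      y = np.random.randint(0, n_ber_classes, size=(2000,))
      model.fit(X, y, epochs=10, batch_size=64, verbose=0)

      # An AMC-style use would pick the modulation/coding setting whose
      # predicted BER class meets the target while maximizing throughput.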