• Title/Summary/Keyword: Improved deep learning

Transfer Learning based DNN-SVM Hybrid Model for Breast Cancer Classification

  • Gui Rae Jo;Beomsu Baek;Young Soon Kim;Dong Hoon Lim
    • Journal of the Korea Society of Computer and Information / v.28 no.11 / pp.1-11 / 2023
  • Breast cancer is the disease that affects women the most worldwide. Advances in computer technology have increased the efficiency of machine learning, which now plays an important role in cancer detection and diagnosis. Deep learning is a field of machine learning based on artificial neural networks; its performance has improved rapidly in recent years and its range of applications is expanding. In this paper, we propose a DNN-SVM hybrid model that combines a transfer learning-based deep neural network (DNN) with a support vector machine (SVM) for breast cancer classification. The proposed transfer learning-based model is effective with small training sets, trains quickly, and can improve performance by combining the advantages of the individual DNN and SVM models. To evaluate the proposed DNN-SVM hybrid model, we tested it on the WOBC and WDBC breast cancer datasets provided by the UCI Machine Learning Repository; the results show that the proposed model outperforms single models such as logistic regression, DNN, and SVM, as well as ensemble models such as random forest, across various performance measures.
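
A minimal sketch of the DNN-SVM hybrid idea described above, assuming PyTorch and scikit-learn and using the WDBC data bundled with scikit-learn; the layer sizes and training settings are illustrative, not the authors' configuration, and the WOBC pre-training/transfer step is omitted.

```python
# Hypothetical sketch: train a small DNN, then fit an SVM on its penultimate-layer features.
import torch
import torch.nn as nn
from sklearn.datasets import load_breast_cancer          # WDBC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

class DNN(nn.Module):
    def __init__(self, d_in):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                      nn.Linear(64, 32), nn.ReLU())
        self.head = nn.Linear(32, 2)                      # softmax head used only for pre-training
    def forward(self, x):
        return self.head(self.features(x))

model = DNN(X_tr.shape[1])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
xt = torch.tensor(X_tr, dtype=torch.float32)
yt = torch.tensor(y_tr, dtype=torch.long)
for _ in range(200):                                      # short pre-training loop
    opt.zero_grad()
    loss_fn(model(xt), yt).backward()
    opt.step()

# Replace the softmax head with an SVM trained on the frozen DNN features.
with torch.no_grad():
    f_tr = model.features(xt).numpy()
    f_te = model.features(torch.tensor(X_te, dtype=torch.float32)).numpy()
svm = SVC(kernel="rbf").fit(f_tr, y_tr)
print("hybrid accuracy:", accuracy_score(y_te, svm.predict(f_te)))
```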

A Study on Improved Comments Generation Using Transformer (트랜스포머를 이용한 향상된 댓글 생성에 관한 연구)

  • Seong, So-yun;Choi, Jae-yong;Kim, Kyoung-chul
    • Journal of Korea Game Society / v.19 no.5 / pp.103-114 / 2019
  • Since 2017, we have been studying a deep learning program that can communicate with other users in online communities. However, the characteristics of the Korean language made the dataset difficult to process, and the low GPU utilization of RNN models was another problem. In this study, as natural language processing models have improved, we aim to obtain better results using these improved models. To achieve this, we use a Transformer model that includes the self-attention mechanism. We also use MeCab, a Korean morphological analyzer, to address the difficulty of processing Korean words.
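
A minimal sketch of the pipeline described above, assuming PyTorch's built-in nn.Transformer and the mecab-python3 package; the vocabulary handling, model sizes, and toy inputs are placeholders rather than the authors' implementation, and positional encoding is omitted for brevity.

```python
# Hypothetical sketch: MeCab morpheme tokenization feeding a small seq2seq Transformer.
import torch
import torch.nn as nn
import MeCab                                   # from mecab-python3 (assumed installed)

tagger = MeCab.Tagger("-Owakati")              # whitespace-separated morphemes

def tokenize(sentence):
    return tagger.parse(sentence).split()

class CommentTransformer(nn.Module):
    def __init__(self, vocab_size, d_model=256, nhead=8, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(d_model=d_model, nhead=nhead,
                                          num_encoder_layers=num_layers,
                                          num_decoder_layers=num_layers,
                                          batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Causal mask so each target position only attends to earlier positions.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        h = self.transformer(self.embed(src_ids), self.embed(tgt_ids), tgt_mask=tgt_mask)
        return self.out(h)                     # logits over the morpheme vocabulary

# Toy usage with random ids standing in for post/comment morpheme indices.
model = CommentTransformer(vocab_size=8000)
src = torch.randint(0, 8000, (2, 20))          # two posts, 20 morphemes each
tgt = torch.randint(0, 8000, (2, 15))          # two partial comments
print(model(src, tgt).shape)                   # (2, 15, 8000)
```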

A Study on the Analysis and Estimation of the Construction Cost by Using Deep learning in the SMART Educational Facilities - Focused on Planning and Design Stage - (딥러닝을 이용한 스마트 교육시설 공사비 분석 및 예측 - 기획·설계단계를 중심으로 -)

  • Jung, Seung-Hyun;Gwon, Oh-Bin;Son, Jae-Ho
    • Journal of the Korean Institute of Educational Facilities / v.25 no.6 / pp.35-44 / 2018
  • The purpose of this study is to predict construction costs more accurately and to support efficient decision making in the planning and design stages of smart education facilities. The higher the error in the projected cost, the more risk a project manager takes; if the manager can predict the construction cost more accurately in the early stages of a project, he or she can secure time for decisions and make more rational choices. During the planning and design stages, only a limited number of variables can be selected for the estimating model, and because few smart schools have been completed, little data is available. In this study, various artificial intelligence models were used to predict the construction cost accurately in the planning and design phase despite the limited variables and lack of historical data. A theoretical study of artificial neural networks and deep learning was carried out. Because conventional artificial neural networks frequently suffer from overfitting, they are problematic in practical application. To overcome this problem, this study shows that improved Deep Neural Network and Deep Belief Network models make more accurate predictions. Deep Neural Network (DNN) and Deep Belief Network (DBN) models were constructed to predict construction cost, and the average error rate and root mean square error (RMSE) were calculated to compare the error and accuracy of the models. This study proposes a cost prediction model that can be used practically in the planning and design stages.
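
A minimal sketch of a DNN cost regressor with the two reported error measures, assuming scikit-learn; the placeholder data, layer sizes, and early-stopping guard are illustrative and do not reproduce the paper's DNN/DBN configurations.

```python
# Hypothetical sketch: small DNN regressor plus average error rate and RMSE.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

# Placeholder data: rows = past projects, columns = planning-stage variables
# (e.g., floor area, number of classrooms), target = construction cost.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.3, size=120) + 10.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

# Early stopping is one simple guard against the overfitting problem noted above.
dnn = MLPRegressor(hidden_layer_sizes=(64, 32), early_stopping=True,
                   max_iter=2000, random_state=0)
dnn.fit(scaler.transform(X_tr), y_tr)
pred = dnn.predict(scaler.transform(X_te))

rmse = np.sqrt(mean_squared_error(y_te, pred))
avg_error_rate = np.mean(np.abs(pred - y_te) / np.abs(y_te)) * 100  # percent
print(f"RMSE: {rmse:.3f}, average error rate: {avg_error_rate:.1f}%")
```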

Improvement of signal and noise performance using single image super-resolution based on deep learning in single photon-emission computed tomography imaging system

  • Kim, Kyuseok;Lee, Youngjin
    • Nuclear Engineering and Technology / v.53 no.7 / pp.2341-2347 / 2021
  • Because single-photon emission computed tomography (SPECT) is one of the most widely used nuclear medicine imaging systems, acquiring high-quality images for diagnosis is extremely important. In this study, we designed a super-resolution (SR) technique using a dense block-based deep convolutional neural network (CNN) and evaluated the algorithm on real SPECT phantom images. The phantom images were acquired with a real SPECT system using a 99mTc source and two physical phantoms. To assess image quality, noise properties and visual quality metrics were calculated. The results demonstrate that the proposed dense block-based deep CNN delivers a greater SR improvement than conventional reconstruction techniques. In particular, with the proposed method, quantitative performance improved by 1.2 to 5.0 times compared with conventional iterative reconstruction. We confirmed the effects on the quality of the resulting SR images, and the proposed technique was shown to be effective for nuclear medicine imaging.
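
A minimal PyTorch sketch of a dense block-based SR network of the kind described above; the number of layers, growth rate, and upscaling factor are assumptions, not the authors' architecture.

```python
# Hypothetical sketch: dense block + sub-pixel upsampling for single-image super-resolution.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer sees the concatenation of all previous feature maps."""
    def __init__(self, channels, growth, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class DenseSR(nn.Module):
    def __init__(self, scale=2, channels=32, growth=16):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)          # single-channel SPECT slice
        self.dense = DenseBlock(channels, growth)
        fused = channels + 4 * growth
        self.upsample = nn.Sequential(
            nn.Conv2d(fused, scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))                               # sub-pixel upscaling to the HR grid

    def forward(self, lr):
        return self.upsample(self.dense(self.head(lr)))

sr = DenseSR()
lr_slice = torch.rand(1, 1, 64, 64)                               # toy low-resolution input
print(sr(lr_slice).shape)                                         # torch.Size([1, 1, 128, 128])
```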

An Experimental Comparison of CNN-based Deep Learning Algorithms for Recognition of Beauty-related Skin Disease

  • Bae, Chang-Hui;Cho, Won-Young;Kim, Hyeong-Jun;Ha, Ok-Kyoon
    • Journal of the Korea Society of Computer and Information / v.25 no.12 / pp.25-34 / 2020
  • In this paper, we empirically compare the effectiveness of training models to recognize beauty-related skin diseases using supervised deep learning algorithms. Deep learning algorithms are being actively applied in various fields such as industry, education, and medicine. In the medical field, for instance, the ability to diagnose cutaneous cancer with deep learning-based artificial intelligence has improved to the level of experts. However, applications to beauty-related skin diseases remain insufficient. This study experimentally compares the effectiveness of identifying beauty-related skin diseases using deep learning algorithms, considering CNN, ResNet, and SE-ResNet. The experimental results show an average accuracy of 71.5% for CNN, 90.6% for ResNet, and 95.3% for SE-ResNet. In particular, the SE-ResNet-50 model, an SE-ResNet with 50 layers, was the most effective, identifying beauty-related skin diseases with an average accuracy of 96.2%. The purpose of this paper is to study effective training methods for deep learning algorithms aimed at identifying beauty-related skin diseases, which can contribute to the development of services used to treat and ease such diseases.
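
A minimal sketch of the kind of model comparison described above, assuming a recent torchvision; SE-ResNet is not bundled with torchvision, so a squeeze-and-excitation block is shown explicitly (applied once to the backbone output for illustration, whereas a true SE-ResNet inserts it into every residual block), and the class count and data loading are placeholders.

```python
# Hypothetical sketch: comparing a plain ResNet-50 head with a squeeze-and-excitation variant.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5                                   # placeholder number of skin-condition classes

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels by a learned global gate."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w

# Plain ResNet-50 with a new classification head for the skin-disease classes.
resnet = models.resnet50(weights=None)
resnet.fc = nn.Linear(resnet.fc.in_features, NUM_CLASSES)

class SEResNetLike(nn.Module):
    """Illustrative SE-gated variant built on the ResNet-50 backbone."""
    def __init__(self, num_classes):
        super().__init__()
        backbone = models.resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # up to the last conv stage
        self.se = SEBlock(2048)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(2048, num_classes)

    def forward(self, x):
        h = self.se(self.features(x))
        return self.fc(self.pool(h).flatten(1))

x = torch.rand(2, 3, 224, 224)
print(resnet(x).shape, SEResNetLike(NUM_CLASSES)(x).shape)   # both (2, NUM_CLASSES)
```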

Deep Learning-Based Computed Tomography Image Standardization to Improve Generalizability of Deep Learning-Based Hepatic Segmentation

  • Seul Bi Lee;Youngtaek Hong;Yeon Jin Cho;Dawun Jeong;Jina Lee;Soon Ho Yoon;Seunghyun Lee;Young Hun Choi;Jung-Eun Cheon
    • Korean Journal of Radiology / v.24 no.4 / pp.294-304 / 2023
  • Objective: We aimed to investigate whether image standardization using deep learning-based computed tomography (CT) image conversion would improve the performance of deep learning-based automated hepatic segmentation across various reconstruction methods. Materials and Methods: We collected contrast-enhanced dual-energy CT of the abdomen obtained with various reconstruction methods, including filtered back projection, iterative reconstruction, optimum contrast, and monoenergetic images at 40, 60, and 80 keV. A deep learning-based image conversion algorithm was developed to standardize the CT images using 142 CT examinations (128 for training and 14 for tuning). A separate set of 43 CT examinations from 42 patients (mean age, 10.1 years) was used as the test data. A commercial software program (MEDIP PRO v2.0.0.0, MEDICALIP Co. Ltd.) based on a 2D U-Net was used to create liver segmentation masks with liver volume. The original 80 keV images were used as the ground truth. We used the paired t-test to compare segmentation performance in terms of the Dice similarity coefficient (DSC) and the difference ratio of the liver volume relative to the ground-truth volume before and after image standardization. The concordance correlation coefficient (CCC) was used to assess the agreement between the segmented liver volume and the ground-truth volume. Results: The original CT images showed variable and poor segmentation performance. The standardized images achieved significantly higher DSCs for liver segmentation than the original images (DSC: original, 5.40% to 91.27% vs. standardized, 93.16% to 96.74%; all P < 0.001). The difference ratio of liver volume also decreased significantly after image conversion (original, 9.84% to 91.37% vs. standardized, 1.99% to 4.41%). In all protocols, CCCs improved after image conversion (original, -0.006 to 0.964 vs. standardized, 0.990 to 0.998). Conclusion: Deep learning-based CT image standardization can improve the performance of automated hepatic segmentation on CT images reconstructed with various methods, and may have the potential to improve the generalizability of the segmentation network.
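
A minimal sketch of the evaluation metrics named above (Dice similarity coefficient, volume difference ratio, concordance correlation coefficient, paired t-test), assuming NumPy and SciPy; the masks, volumes, and per-exam scores are toy placeholders, since the actual segmentations come from the commercial 2D U-Net software.

```python
# Hypothetical sketch of the reported evaluation metrics on binary liver masks and volumes.
import numpy as np
from scipy import stats

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def volume_diff_ratio(vol, vol_ref):
    """Absolute volume difference relative to the ground-truth volume, in percent."""
    return abs(vol - vol_ref) / vol_ref * 100.0

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between two volume series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Toy masks: Dice and volume difference ratio.
mask_pred = np.zeros((4, 4), bool); mask_pred[1:3, 1:3] = True
mask_ref = np.zeros((4, 4), bool); mask_ref[1:4, 1:3] = True
print("Dice:", round(dice(mask_pred, mask_ref), 3),
      "volume diff %:", round(volume_diff_ratio(mask_pred.sum(), mask_ref.sum()), 1))

# Toy per-exam DSCs before vs. after standardization, compared with a paired t-test.
rng = np.random.default_rng(0)
dsc_original = rng.uniform(0.05, 0.91, size=43)       # placeholder per-exam scores
dsc_standardized = rng.uniform(0.93, 0.97, size=43)
t_stat, p_value = stats.ttest_rel(dsc_original, dsc_standardized)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Toy segmented vs. reference volumes, agreement via CCC.
volumes_seg = rng.normal(1000, 150, size=43)          # placeholder volumes (mL)
volumes_ref = volumes_seg + rng.normal(0, 20, size=43)
print("CCC:", round(concordance_cc(volumes_seg, volumes_ref), 3))
```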

A Pre-processing Process Using TadGAN-based Time-series Anomaly Detection (TadGAN 기반 시계열 이상 탐지를 활용한 전처리 프로세스 연구)

  • Lee, Seung Hoon;Kim, Yong Soo
    • Journal of Korean Society for Quality Management / v.50 no.3 / pp.459-471 / 2022
  • Purpose: The purpose of this study was to increase prediction accuracy for anomaly intervals identified by an artificial intelligence-based time-series anomaly detection technique by establishing a pre-processing process. Methods: Significant variables were extracted by applying feature selection techniques, and anomalies were derived using the TadGAN time-series anomaly detection algorithm. After applying machine learning and deep learning methodologies to the normal-section data (excluding the anomaly sections), the explanatory power of the anomaly sections was demonstrated through performance comparison. Results: Among the machine learning methodologies, performance was best when SHAP and TadGAN were applied; among the deep learning methodologies, performance was best when the chi-square test and TadGAN were applied. Compared with a study that applied a conventional methodology to the same data, performance improved by 15% for MLR, 24% for random forest, 30% for XGBoost, 73% for lasso regression, 17% for LSTM, and 19% for GRU. Conclusion: The proposed process can be applied to unsupervised anomaly detection on unlabeled data in fields such as cyber security, finance, behavior-pattern analysis, and social media, and is expected to demonstrate the accuracy and explainability of the detected anomaly sections and improve model performance.
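
A minimal sketch of the pre-processing idea described above (feature selection, then excluding the anomaly intervals before fitting a model), assuming scikit-learn; the TadGAN step is represented by a placeholder boolean anomaly mask, since the actual detection would come from a TadGAN implementation, and the data and model choice are illustrative.

```python
# Hypothetical sketch: chi-square feature selection, drop anomaly intervals, then fit a model.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 10))                 # placeholder non-negative sensor features
y = X[:, 0] * 4 + X[:, 3] * 2 + rng.normal(scale=0.1, size=500)

# Step 1: feature selection (the paper also compares SHAP-based selection).
# chi2 needs a discrete target, so the continuous target is binarized for selection only.
selector = SelectKBest(chi2, k=5).fit(X, (y > np.median(y)).astype(int))
X_sel = selector.transform(X)

# Step 2: placeholder mask standing in for TadGAN's detected anomaly intervals.
anomaly_mask = np.zeros(len(y), dtype=bool)
anomaly_mask[200:230] = True                          # pretend TadGAN flagged this interval

# Step 3: train only on the normal sections, then evaluate.
X_norm, y_norm = X_sel[~anomaly_mask], y[~anomaly_mask]
X_tr, X_te, y_tr, y_te = train_test_split(X_norm, y_norm, test_size=0.2, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("MAPE:", mean_absolute_percentage_error(y_te, model.predict(X_te)))
```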

A Study on Residual U-Net for Semantic Segmentation based on Deep Learning (딥러닝 기반의 Semantic Segmentation을 위한 Residual U-Net에 관한 연구)

  • Shin, Seokyong;Lee, SangHun;Han, HyunHo
    • Journal of Digital Convergence / v.19 no.6 / pp.251-258 / 2021
  • In this paper, we propose an encoder-decoder model that utilizes residual learning to improve the accuracy of U-Net-based semantic segmentation. U-Net is a deep learning-based semantic segmentation method mainly used in applications such as autonomous vehicles and medical image analysis. The conventional U-Net loses features during the compression process because of its shallow encoder structure; this loss deprives the network of the context information needed to classify objects and reduces segmentation accuracy. To improve on this, the proposed method extracts context information efficiently through an encoder based on residual learning, which is effective at preventing the feature loss and gradient vanishing problems of the conventional U-Net. Furthermore, we reduced the number of down-sampling operations in the encoder to limit the loss of spatial information in the feature maps. In experiments on the Cityscapes dataset, the proposed method improved segmentation results by about 12% compared with the conventional U-Net.
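
A minimal PyTorch sketch of the two ideas described above: a residual encoder block and a U-Net-style decoder with a skip connection, using few down-sampling steps; the depth and channel counts are assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch: residual blocks inside a shallow U-Net-style encoder-decoder.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out))
        self.skip = nn.Conv2d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))    # shortcut eases gradient flow

class ResUNet(nn.Module):
    def __init__(self, num_classes=19):                   # e.g., Cityscapes classes
        super().__init__()
        self.enc1 = ResidualBlock(3, 32)
        self.down = nn.MaxPool2d(2)                        # a single down-sampling step keeps spatial detail
        self.enc2 = ResidualBlock(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = ResidualBlock(64, 32)                   # 64 = upsampled 32 + skip 32
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip connection from the encoder
        return self.head(d)

net = ResUNet()
print(net(torch.rand(1, 3, 256, 512)).shape)               # (1, 19, 256, 512)
```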

Thermal Image Processing and Synthesis Technique Using Faster-RCNN (Faster-RCNN을 이용한 열화상 이미지 처리 및 합성 기법)

  • Shin, Ki-Chul;Lee, Jun-Su;Kim, Ju-Sik;Kim, Ju-Hyung;Kwon, Jang-woo
    • Journal of Convergence for Information Technology / v.11 no.12 / pp.30-38 / 2021
  • In this paper, we propose a method for extracting thermal data from thermal images and using the data to improve the detection of heating equipment. The main idea is to read the thermal image file byte by byte to extract the thermal data and the visible image, and then to feed the composite image obtained by synthesizing the image and the data into a deep learning model to improve the detection accuracy of heating facilities. KHNP data were used for evaluation, and Faster-RCNN was used as the learning model to compare and evaluate deep learning detection performance for each data group. The proposed method improved average precision by 0.17 on average compared with the existing method. In this way, the study combined national thermal image data with deep learning-based detection to improve effective data utilization.
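
A minimal sketch of fine-tuning a Faster R-CNN detector on composite images, assuming a recent torchvision; reading the raw thermal bytes and building the composite is represented by a placeholder blending function, since the file layout is camera-specific, and the class count and box are toy values.

```python
# Hypothetical sketch: fine-tune torchvision's Faster R-CNN for heating-equipment detection.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2                                    # background + heating equipment

def make_composite(visible, thermal):
    """Placeholder for the paper's synthesis step: blend the visible image with the
    normalized thermal map so the detector sees both cues in one 3-channel input."""
    t = (thermal - thermal.min()) / (thermal.max() - thermal.min() + 1e-8)
    return 0.5 * visible + 0.5 * t.expand_as(visible)

model = fasterrcnn_resnet50_fpn(weights=None)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Toy training step on one composite image with one ground-truth box (x1, y1, x2, y2).
visible = torch.rand(3, 480, 640)
thermal = torch.rand(1, 480, 640)
image = make_composite(visible, thermal)
target = {"boxes": torch.tensor([[100.0, 120.0, 260.0, 300.0]]),
          "labels": torch.tensor([1])}
model.train()
losses = model([image], [target])                  # dict of classification/regression losses
print(sum(losses.values()))
```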

Object Detection and Post-processing of LNGC CCS Scaffolding System using 3D Point Cloud Based on Deep Learning (딥러닝 기반 LNGC 화물창 스캐닝 점군 데이터의 비계 시스템 객체 탐지 및 후처리)

  • Lee, Dong-Kun;Ji, Seung-Hwan;Park, Bon-Yeong
    • Journal of the Society of Naval Architects of Korea / v.58 no.5 / pp.303-313 / 2021
  • Recently, quality control of Liquefied Natural Gas Carrier (LNGC) cargo holds and block-erection interference areas using 3D scanners has been performed, led by large shipyards and the International Association of Classification Societies. In this study, as part of research on advancing LNGC cargo hold quality management, deep learning-based 3D point cloud object detection and post-processing of the scaffolding system were investigated using an LNGC cargo hold 3D point cloud. The scaffolding system point cloud object detection is based on the PointNet deep learning architecture, which detects objects directly from point clouds, and achieved 70% prediction accuracy. In addition, the possibility of improving detection accuracy through parameter adjustment was confirmed, and the target Intersection over Union (IoU), an index for determining whether detected objects are the same, was achieved. To avoid manual post-processing work, the object detection architecture allows tasks to be performed automatically and can achieve stable prediction accuracy through supplementation and improvement of the training data. In the future, an extended study will be conducted not only on the flat surfaces of the LNGC cargo hold but also on complex systems such as curved surfaces, and the results are expected to be applicable to monitoring process automation rates and ship quality control.
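
A minimal PyTorch sketch of a PointNet-style classifier of the kind used for the scaffolding detection described above; the input transform networks and detection-specific heads of the full PointNet pipeline are omitted, and the class count and point counts are placeholders.

```python
# Hypothetical sketch: PointNet-style shared MLP + max pooling over an unordered point set.
import torch
import torch.nn as nn

class PointNetClassifier(nn.Module):
    def __init__(self, num_classes=2):                # e.g., scaffolding vs. other structure
        super().__init__()
        # Shared per-point MLP implemented with 1x1 convolutions over (batch, 3, N) points.
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(inplace=True),
            nn.Conv1d(64, 128, 1), nn.ReLU(inplace=True),
            nn.Conv1d(128, 1024, 1), nn.ReLU(inplace=True))
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(inplace=True),
            nn.Linear(256, num_classes))

    def forward(self, points):                        # points: (batch, 3, num_points)
        feats = self.mlp(points)
        global_feat = torch.max(feats, dim=2).values  # symmetric max pool: order-invariant
        return self.head(global_feat)

# Toy usage: two clouds of 2048 scanned points each.
net = PointNetClassifier()
clouds = torch.rand(2, 3, 2048)
print(net(clouds).shape)                              # (2, 2)
```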