• Title/Summary/Keyword: Deep Learning Model Comparison

Synthetic Computed Tomography Generation while Preserving Metallic Markers for Three-Dimensional Intracavitary Radiotherapy: Preliminary Study

  • Jin, Hyeongmin; Kang, Seonghee; Kang, Hyun-Cheol; Choi, Chang Heon
    • Progress in Medical Physics, v.32 no.4, pp.172-178, 2021
  • Purpose: This study aimed to develop a deep learning architecture combining two task models to generate synthetic computed tomography (sCT) images from low-tesla magnetic resonance (MR) images while improving metallic marker visibility. Methods: Twenty-three patients with cervical cancer treated with intracavitary radiotherapy (ICR) were retrospectively enrolled, and images were acquired with both a computed tomography (CT) scanner and a low-tesla MR machine. The CT images were aligned to the corresponding MR images using deformable registration, and the metallic dummy source markers were delineated using threshold-based segmentation followed by manual modification. The deformed CT (dCT), MR, and segmentation mask pairs were used for training and testing. The sCT generation model has a cascaded three-dimensional (3D) U-Net-based architecture that converts MR images to CT images and segments the metallic markers. The performance of the model was evaluated with intensity-based comparison metrics. Results: The proposed model with segmentation loss outperformed the plain 3D U-Net in terms of errors between the sCT and dCT. The difference in structural similarity scores was not significant. Conclusions: Our study demonstrates a two-task deep learning model for generating sCT images from low-tesla MR images for 3D ICR. This approach will be useful for an MR-only workflow in high-dose-rate brachytherapy.
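
The core of the approach described above is a single network trained on two tasks at once: synthesizing CT intensities and segmenting the metallic markers. The sketch below, written in PyTorch, shows one plausible way to combine an L1 synthesis term with a Dice segmentation term; the loss weighting, tensor shapes, and the omission of the cascaded 3D U-Net itself are simplifying assumptions, not the authors' implementation.

    # Two-task training loss: intensity (L1) agreement between synthetic CT and
    # deformed CT, plus a soft Dice term for the metallic-marker segmentation head.
    import torch
    import torch.nn.functional as F

    def dice_loss(pred_logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
        """Soft Dice loss for a binary marker mask."""
        prob = torch.sigmoid(pred_logits)
        inter = (prob * target).sum()
        denom = prob.sum() + target.sum()
        return 1.0 - (2.0 * inter + eps) / (denom + eps)

    def two_task_loss(sct, dct, marker_logits, marker_mask, seg_weight=0.5):
        """Combine the image-synthesis error with the segmentation error."""
        synthesis_term = F.l1_loss(sct, dct)                 # intensity agreement with dCT
        segmentation_term = dice_loss(marker_logits, marker_mask)
        return synthesis_term + seg_weight * segmentation_term

    # Toy example on random volumes (batch, channel, depth, height, width).
    sct = torch.rand(1, 1, 32, 64, 64)
    dct = torch.rand(1, 1, 32, 64, 64)
    marker_logits = torch.randn(1, 1, 32, 64, 64)
    marker_mask = (torch.rand(1, 1, 32, 64, 64) > 0.95).float()
    print(two_task_loss(sct, dct, marker_logits, marker_mask).item())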

LSTM-based Deep Learning for Time Series Forecasting: The Case of Corporate Credit Score Prediction (시계열 예측을 위한 LSTM 기반 딥러닝: 기업 신용평점 예측 사례)

  • Lee, Hyun-Sang; Oh, Sehwan
    • The Journal of Information Systems, v.29 no.1, pp.241-265, 2020
  • Purpose: Various machine learning techniques have been used to predict corporate credit. However, previous research does not utilize time-series input features and is limited in its prediction timing. Furthermore, in the case of corporate bond credit rating forecasts, the corporate sample is limited because only large companies receive corporate bond credit ratings. To address the limitations of prior research, this study implements a predictive model with a larger sample of companies that can adjust the forecasting point by using credit score information and corporate information as time series. Design/methodology/approach: This study uses a sample of 2,191 companies with KIS credit scores over 18 years, from 2000 to 2017. To improve the performance of the predictive model, various financial and non-financial features are applied as time-series input variables through a sliding window technique. In addition, this research tests both the machine learning techniques traditionally used to increase the validity of analysis results and deep learning techniques that have recently been actively researched. Findings: The RNN-based stateful LSTM model shows good performance in credit rating prediction. By extending the forecasting time point, we find how the performance of the predictive model changes over time and evaluate the feature groups in the short and long term. In comparison with other studies, the five-class prediction results obtained through label reclassification show relatively good performance. In addition, approximately 90% accuracy is achieved in forecasting bad credit.
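
As a rough illustration of the setup described above, the PyTorch sketch below builds sliding-window sequences from a yearly feature panel and trains a small LSTM classifier whose hidden state is carried across consecutive windows, mimicking stateful behaviour. The window length, feature count, and five-class label scheme are illustrative assumptions rather than the study's actual KIS data pipeline.

    # Sliding-window construction plus an LSTM classifier with carried-over state.
    import numpy as np
    import torch
    import torch.nn as nn

    def sliding_windows(features: np.ndarray, labels: np.ndarray, window: int):
        """Turn a (years, n_features) panel into (samples, window, n_features) sequences."""
        X = [features[t - window:t] for t in range(window, len(features))]
        y = [labels[t] for t in range(window, len(labels))]
        return torch.tensor(np.stack(X), dtype=torch.float32), torch.tensor(y)

    class CreditLSTM(nn.Module):
        def __init__(self, n_features: int, n_classes: int = 5, hidden: int = 32):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x, state=None):
            out, state = self.lstm(x, state)     # state can be passed back in ("stateful")
            return self.head(out[:, -1]), state

    # Toy data: 18 yearly observations, 6 features, 5 credit-grade classes.
    rng = np.random.default_rng(0)
    X, y = sliding_windows(rng.normal(size=(18, 6)), rng.integers(0, 5, size=18), window=3)

    model = CreditLSTM(n_features=6)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    state = None
    for xb, yb in zip(X, y):                     # one company processed in time order
        logits, state = model(xb.unsqueeze(0), state)
        loss = nn.functional.cross_entropy(logits, yb.unsqueeze(0))
        optimizer.zero_grad()
        loss.backward()
        state = tuple(s.detach() for s in state) # keep the state, drop its graph
        optimizer.step()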

Electric Power Demand Prediction Using Deep Learning Model with Temperature Data (기온 데이터를 반영한 전력수요 예측 딥러닝 모델)

  • Yoon, Hyoup-Sang; Jeong, Seok-Bong
    • KIPS Transactions on Software and Data Engineering, v.11 no.7, pp.307-314, 2022
  • Recently, research using deep learning-based models has been actively conducted to replace statistics-based time series forecasting techniques for predicting electric power demand. An analysis of these studies shows that the performance of LSTM-based prediction models is acceptable but not sufficient for long-term, region-wide power demand prediction. In this paper, we propose a WaveNet deep learning model that predicts electric power demand 24 hours ahead using temperature data, in order to achieve prediction accuracy better than the 2% MAPE that statistics-based time series forecasting techniques can provide. First, we describe the dilated causal one-dimensional convolutional neural network architecture of WaveNet and the preprocessing of the electric power demand and temperature input data. Second, we present the training process and walk-forward validation with the modified WaveNet. The performance comparison shows that the prediction model with temperature data achieves a MAPE of 1.33%, which is better than the MAPE of the same model without temperature data (2.33%).
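
The WaveNet component rests on dilated causal one-dimensional convolutions, whose receptive field grows exponentially with depth while never looking at future time steps. The PyTorch sketch below illustrates that building block for 24-hour-ahead forecasting with demand and temperature as two input channels; the channel counts, dilation schedule, and 168-hour history window are assumptions, not the paper's exact configuration.

    # WaveNet-style stack of dilated causal 1-D convolutions for load forecasting.
    import torch
    import torch.nn as nn

    class CausalConv1d(nn.Module):
        """1-D convolution padded on the left only, so outputs never see the future."""
        def __init__(self, in_ch, out_ch, kernel_size=2, dilation=1):
            super().__init__()
            self.pad = (kernel_size - 1) * dilation
            self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

        def forward(self, x):
            return self.conv(nn.functional.pad(x, (self.pad, 0)))

    class TinyWaveNet(nn.Module):
        def __init__(self, in_ch=2, hidden=32, horizon=24, dilations=(1, 2, 4, 8, 16)):
            super().__init__()
            layers, ch = [], in_ch
            for d in dilations:                  # receptive field doubles with each layer
                layers += [CausalConv1d(ch, hidden, dilation=d), nn.ReLU()]
                ch = hidden
            self.stack = nn.Sequential(*layers)
            self.head = nn.Linear(hidden, horizon)   # predict the next 24 hours at once

        def forward(self, x):                    # x: (batch, channels, time)
            h = self.stack(x)
            return self.head(h[:, :, -1])        # use the last causal time step

    # Toy input: one week (168 h) of [demand, temperature] history for 4 samples.
    x = torch.randn(4, 2, 168)
    print(TinyWaveNet()(x).shape)                # torch.Size([4, 24])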

Comparison of Chlorophyll-a Prediction and Analysis of Influential Factors in Yeongsan River Using Machine Learning and Deep Learning (머신러닝과 딥러닝을 이용한 영산강의 Chlorophyll-a 예측 성능 비교 및 변화 요인 분석)

  • Sun-Hee, Shim; Yu-Heun, Kim; Hye Won, Lee; Min, Kim; Jung Hyun, Choi
    • Journal of Korean Society on Water Environment, v.38 no.6, pp.292-305, 2022
  • The Yeongsan River, one of the four largest rivers in South Korea, has faced difficulties in water quality management with respect to algal blooms. The algal bloom problem has grown, especially after the construction of two weirs in the mainstream of the Yeongsan River. Therefore, prediction and factor analysis of Chlorophyll-a (Chl-a) concentration are needed for effective water quality management. In this study, Chl-a prediction models were developed and their performance evaluated using machine learning and deep learning methods, namely Deep Neural Network (DNN), Random Forest (RF), and eXtreme Gradient Boosting (XGBoost). Moreover, the correlation analysis and feature importance results were compared to identify the major factors affecting Chl-a concentration. All models showed high prediction performance, with R2 values of 0.9 or higher. In particular, XGBoost showed the highest prediction accuracy of 0.95 on the test data. The feature importance results suggested that ammonia (NH3-N) and phosphate (PO4-P) were major factors common to all three models for managing Chl-a concentration. From these results, it was confirmed that the three methods, DNN, RF, and XGBoost, are powerful for predicting water quality parameters. In addition, comparing feature importance with correlation analysis provides a more accurate assessment of the major influencing factors.
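
A minimal sketch of the three-model comparison follows, using scikit-learn and xgboost: each regressor is fit on the same split, scored with R2, and the tree-based models expose feature importances for the factor analysis step. The feature names and synthetic data are placeholders standing in for the study's monitoring variables.

    # Fit DNN (MLP), RF, and XGBoost regressors on the same split and compare R2.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    features = ["NH3-N", "PO4-P", "Temp", "DO", "TN", "TP"]        # placeholder names
    X = rng.normal(size=(500, len(features)))
    chl_a = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.3, size=500)  # synthetic target

    X_tr, X_te, y_tr, y_te = train_test_split(X, chl_a, test_size=0.2, random_state=0)

    models = {
        "DNN": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
        "RF": RandomForestRegressor(n_estimators=300, random_state=0),
        "XGBoost": XGBRegressor(n_estimators=300, learning_rate=0.05, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, "test R2:", round(r2_score(y_te, model.predict(X_te)), 3))

    # Tree-based models expose feature_importances_ for the factor analysis step.
    for name in ("RF", "XGBoost"):
        ranked = sorted(zip(features, models[name].feature_importances_), key=lambda p: -p[1])
        print(name, "top factors:", ranked[:2])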

Speech detection from broadcast contents using multi-scale time-dilated convolutional neural networks (다중 스케일 시간 확장 합성곱 신경망을 이용한 방송 콘텐츠에서의 음성 검출)

  • Jang, Byeong-Yong; Kwon, Oh-Wook
    • Phonetics and Speech Sciences, v.11 no.4, pp.89-96, 2019
  • In this paper, we propose a deep learning architecture that can effectively detect speech segments in broadcast content. We also propose a multi-scale time-dilated layer for learning the temporal changes of feature vectors. We implement several comparison models to verify the performance of the proposed model and calculate the frame-by-frame F-score, precision, and recall. Both the proposed model and the comparison models are trained with the same training data: 32 hours of Korean broadcast data composed of various genres (drama, news, documentary, and so on). Our proposed model shows the best performance, with an F-score of 91.7%, on Korean broadcast data. On British and Spanish broadcast data, it also achieves the highest performance, with F-scores of 87.9% and 92.6%, respectively. As a result, our proposed model can contribute to improving speech detection performance by learning the temporal changes of feature vectors.
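
The multi-scale time-dilated layer can be pictured as several dilated 1-D convolutions over the frame axis, run in parallel and concatenated so that each frame sees context at multiple temporal scales. The PyTorch sketch below shows that idea wired into a frame-level speech/non-speech classifier; the 40-dimensional features, branch widths, and dilation rates are illustrative assumptions.

    # Parallel time-dilated convolutions concatenated into a per-frame classifier.
    import torch
    import torch.nn as nn

    class MultiScaleTimeDilatedBlock(nn.Module):
        def __init__(self, in_ch, branch_ch=32, dilations=(1, 2, 4, 8)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv1d(in_ch, branch_ch, kernel_size=3, dilation=d, padding=d)  # same length
                for d in dilations
            ])

        def forward(self, x):                    # x: (batch, features, frames)
            return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

    class SpeechDetector(nn.Module):
        """Frame-level speech / non-speech classifier over acoustic features."""
        def __init__(self, n_features=40):
            super().__init__()
            self.block = MultiScaleTimeDilatedBlock(n_features)
            self.head = nn.Conv1d(32 * 4, 1, kernel_size=1)   # per-frame logit

        def forward(self, x):
            return self.head(self.block(x)).squeeze(1)        # (batch, frames)

    feats = torch.randn(2, 40, 300)              # e.g. 40 filterbank features over 300 frames
    print(SpeechDetector()(feats).shape)         # torch.Size([2, 300])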

Deep Learning based Time Offset Estimation in GPS Time Transfer Measurement Data (GPS 시각전송 측정데이터에 대한 딥러닝 모델 기반 시각오프셋 예측)

  • Yu, Dong-Hui; Kim, Min-Ho
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.3, pp.456-462, 2022
  • In this paper, we introduce a method of predicting time offsets by applying LSTM, a deep learning model, to a precise time comparison technique based on measurement data extracted from code signals transmitted from GPS satellites to determine Coordinated Universal Time (UTC). First, we describe the process of extracting time information from code signals received from a GPS satellite on a daily basis and constructing the daily time offsets into a single time series. LSTM, a type of recurrent neural network, was then applied to the constructed time offset series to predict the time offset of a GPS satellite. Through this study, the feasibility of time offset prediction by applying deep learning in the field of GNSS precise time transfer was confirmed.
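
As a simplified illustration of the forecasting step, the PyTorch sketch below windows a synthetic daily time-offset series and trains a small LSTM to predict the next day's value. The 30-day window, layer size, and synthetic drift signal are assumptions; no real GPS time-transfer measurements are used.

    # One-step-ahead prediction of a daily time-offset series with an LSTM regressor.
    import numpy as np
    import torch
    import torch.nn as nn

    rng = np.random.default_rng(0)
    offsets = np.cumsum(rng.normal(scale=0.5, size=400)) + 10.0   # synthetic drift

    window = 30
    X = np.stack([offsets[i:i + window] for i in range(len(offsets) - window)])
    y = offsets[window:]
    X = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)        # (samples, 30, 1)
    y = torch.tensor(y, dtype=torch.float32)

    class OffsetLSTM(nn.Module):
        def __init__(self, hidden=16):
            super().__init__()
            self.lstm = nn.LSTM(1, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):
            out, _ = self.lstm(x)
            return self.head(out[:, -1]).squeeze(-1)

    model = OffsetLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(50):                                           # short training loop
        pred = model(X)
        loss = nn.functional.mse_loss(pred, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("final training MSE:", loss.item())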

Shrimp Quality Detection Method Based on YOLOv4

  • Tao, Xingyi; Feng, Yiran; Lee, Eung-Joo; Tao, Xueheng
    • Journal of Korea Multimedia Society, v.25 no.7, pp.903-911, 2022
  • A shrimp quality detection model using the YOLOv4 deep learning algorithm is designed, which is superior in terms of network architecture, data processing, and feature extraction. Shrimp images were captured and the dataset expanded in-house, the LableImage platform was used for data annotation, and the network model was trained under the Darknet framework. In comparison experiments, the final performance of the model was higher than that of other common object detection models, and its detection accuracy reached 93.7% with an average detection time of 47 ms, indicating that the method can effectively detect shrimp quality in the production process.
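
A Darknet-trained YOLOv4 model of this kind is commonly run through OpenCV's DNN module; the sketch below times one forward pass and applies non-maximum suppression to the raw detections. The config/weight/image paths and the two-class label set are hypothetical placeholders, not artifacts released by the authors.

    # Load a Darknet YOLOv4 model with OpenCV DNN, run it once, and time the pass.
    import time
    import cv2
    import numpy as np

    CLASSES = ["good_shrimp", "bad_shrimp"]                      # assumed label set

    net = cv2.dnn.readNetFromDarknet("yolov4-shrimp.cfg", "yolov4-shrimp.weights")
    image = cv2.imread("shrimp_sample.jpg")
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)

    start = time.perf_counter()
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())    # raw YOLO detections
    elapsed_ms = (time.perf_counter() - start) * 1000

    h, w = image.shape[:2]
    boxes, scores = [], []
    for out in outputs:
        for det in out:                                          # [cx, cy, bw, bh, obj, cls...]
            cls_scores = det[5:]
            conf = float(det[4] * cls_scores.max())
            if conf > 0.5:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                scores.append(conf)

    keep = cv2.dnn.NMSBoxes(boxes, scores, 0.5, 0.4)             # non-maximum suppression
    print(f"{len(keep)} detections in {elapsed_ms:.1f} ms")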

A Survey on Vision Transformers for Object Detection Task (객체 탐지 과업에서의 트랜스포머 기반 모델의 특장점 분석 연구)

  • Jungmin, Ha; Hyunjong, Lee; Jungmin, Eom; Jaekoo, Lee
    • IEMEK Journal of Embedded Systems and Applications, v.17 no.6, pp.319-327, 2022
  • Transformers are among the most prominent deep learning models; they have achieved great success in natural language processing and have also shown good performance in computer vision. In this survey, we categorize transformer-based models for computer vision, particularly for object detection tasks, and perform comprehensive comparative experiments to understand the characteristics of each model. We then evaluate the models, subdivided into standard transformers, transformers with key-point attention, and transformers that add attention with coordinates, by comparing object detection accuracy and real-time performance. For the performance comparison, we use two metrics: frames per second (FPS) and mean average precision (mAP). Finally, we confirm the trends and relationships related to detection accuracy and real-time performance across several transformer models through various experiments.
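
The two evaluation axes used in the survey can be reproduced with very little code: a timing loop for frames per second, and IoU matching between predicted and ground-truth boxes, which is the ingredient mAP is built on. In the sketch below the dummy detector and box coordinates are placeholders, not outputs of any surveyed model.

    # FPS measurement harness plus the IoU matrix that underlies mAP computation.
    import time
    import torch
    from torchvision.ops import box_iou

    def measure_fps(detector, image, n_runs: int = 50) -> float:
        """Average end-to-end inference throughput over repeated forward passes."""
        with torch.no_grad():
            detector(image)                      # warm-up run
            start = time.perf_counter()
            for _ in range(n_runs):
                detector(image)
        return n_runs / (time.perf_counter() - start)

    # Dummy detector standing in for a transformer-based model (e.g. a DETR variant).
    dummy_detector = lambda img: torch.tensor([[10.0, 10.0, 50.0, 50.0]])   # xyxy boxes
    image = torch.randn(1, 3, 640, 640)
    print(f"FPS: {measure_fps(dummy_detector, image):.1f}")

    # mAP matches predictions to ground truth at IoU thresholds; the pairwise IoU
    # matrix is the core ingredient of that matching step.
    pred_boxes = dummy_detector(image)
    gt_boxes = torch.tensor([[12.0, 12.0, 48.0, 52.0]])
    print("IoU matrix:", box_iou(pred_boxes, gt_boxes))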

Comparison and analysis of prediction performance of fine particulate matter (PM2.5) based on deep learning algorithm (딥러닝 알고리즘 기반의 초미세먼지(PM2.5) 예측 성능 비교 분석)

  • Kim, Younghee; Chang, Kwanjong
    • Journal of Convergence for Information Technology, v.11 no.3, pp.7-13, 2021
  • This study develops an artificial intelligence prediction system for fine particulate matter (PM2.5) based on the GAN deep learning model. The experimental data consist of time series of temperature, humidity, wind speed, and atmospheric pressure, together with the concentrations of air pollutants such as SO2, CO, O3, NO2, and PM10. Because the concentration at the current time is affected by the concentration at the previous time, a recursive supervised learning approach was applied. For comparative analysis against the existing models, CNN and LSTM, the differences between observed and predicted values were analyzed and visualized. The performance analysis confirmed that the proposed GAN improved on LSTM by 15.8%, 10.9%, and 5.5% in the evaluation metrics RMSE, MAPE, and IOA, respectively.
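
The three reported metrics are straightforward to compute; the sketch below implements RMSE, MAPE, and Willmott's index of agreement (IOA) with NumPy on placeholder PM2.5 values, purely to make the evaluation criteria concrete.

    # RMSE, MAPE, and index of agreement (IOA) on placeholder observations/predictions.
    import numpy as np

    def rmse(obs, pred):
        return float(np.sqrt(np.mean((obs - pred) ** 2)))

    def mape(obs, pred):
        return float(np.mean(np.abs((obs - pred) / obs)) * 100)

    def ioa(obs, pred):
        """Willmott's index of agreement: 1 is perfect, 0 is no agreement."""
        num = np.sum((obs - pred) ** 2)
        den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
        return float(1 - num / den)

    obs = np.array([35.0, 42.0, 55.0, 30.0, 48.0])               # placeholder PM2.5 (ug/m3)
    pred = np.array([33.0, 45.0, 52.0, 31.0, 50.0])
    print(f"RMSE={rmse(obs, pred):.2f}, MAPE={mape(obs, pred):.1f}%, IOA={ioa(obs, pred):.3f}")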

Comparative Study of Deep Learning Model for Semantic Segmentation of Water System in SAR Images of KOMPSAT-5 (아리랑 5호 위성 영상에서 수계의 의미론적 분할을 위한 딥러닝 모델의 비교 연구)

  • Kim, Min-Ji; Kim, Seung Kyu; Lee, DoHoon; Gahm, Jin Kyu
    • Journal of Korea Multimedia Society, v.25 no.2, pp.206-214, 2022
  • The extent of damage from floods and droughts can be measured by identifying changes in the extent of water systems, and satellite images are used to grasp this effectively at a glance. KOMPSAT-5 uses Synthetic Aperture Radar (SAR) to capture images regardless of weather conditions such as clouds and rain. In this paper, various deep learning models are applied to perform semantic segmentation of water systems in these SAR images, and their performance is compared. The models used are U-Net, V-Net, U2-Net, UNet 3+, PSPNet, Deeplab-V3, Deeplab-V3+, and PAN. In addition, performance was compared when the data were augmented by applying elastic deformation to the existing SAR image dataset. Without data augmentation, U-Net performed best, with an IoU of 97.25% and a pixel accuracy of 98.53%. With data augmentation, Deeplab-V3 showed an IoU of 95.15%, and V-Net showed the best pixel accuracy of 96.86%.
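
The two segmentation metrics used in the comparison reduce to simple mask arithmetic; the NumPy sketch below computes IoU and pixel accuracy on placeholder binary water masks, with a few randomly flipped pixels standing in for model errors.

    # IoU and pixel accuracy for binary (water vs. non-water) segmentation masks.
    import numpy as np

    def iou(pred: np.ndarray, gt: np.ndarray) -> float:
        """Intersection over union for binary masks."""
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return float(inter / union) if union else 1.0

    def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
        return float((pred == gt).mean())

    rng = np.random.default_rng(0)
    gt = rng.random((256, 256)) > 0.7                 # placeholder ground-truth water mask
    pred = gt.copy()
    flip = rng.random((256, 256)) > 0.97              # simulate a few misclassified pixels
    pred[flip] = ~pred[flip]
    print(f"IoU={iou(pred, gt):.4f}, pixel accuracy={pixel_accuracy(pred, gt):.4f}")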