• Title/Summary/Keyword: U-net architecture

Search Results: 43

Application of deep convolutional neural network for short-term precipitation forecasting using weather radar-based images

  • Le, Xuan-Hien;Jung, Sungho;Lee, Giha
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.136-136 / 2021
  • In this study, a deep convolutional neural network (DCNN) model is proposed for short-term precipitation forecasting using weather radar-based images. The DCNN model is a combination of convolutional neural networks, autoencoder neural networks, and the U-net architecture. The weather radar-based image data used here were retrieved from a rainfall-forecasting competition in Korea (AI Contest for Rainfall Prediction of Hydroelectric Dam Using Public Data), organized by Dacon under the sponsorship of the Korean Water Resources Association in October 2020. The data were collected from rainfall events during the rainy season (April-October) from 2010 to 2017. The images were preprocessed to convert the weather radar data into grayscale images before being released for the competition. Each grayscale image covers a spatial dimension of 120×120 pixels at a temporal resolution of 10 minutes, with each pixel corresponding to a 4 km × 4 km grid cell. The DCNN model is designed in this study to predict images 10 minutes in advance, and precipitation information can then be obtained from these forecast images through empirical conversion formulas. Model performance is assessed using the Score index, defined as the ratio of MAE (mean absolute error) to CSI (critical success index). In the competition, the DCNN model achieved a Score of 0.530, compared with the best value of 0.500, ranking 16th out of 463 participating teams. These findings demonstrate the potential of applying the DCNN model to short-term rainfall prediction using weather radar-based images, and the model can be applied to other areas with different spatiotemporal resolutions.

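The abstract above describes a U-Net-style encoder-decoder applied to 120×120 grayscale radar frames at 10-minute resolution, evaluated with a Score defined as the ratio of MAE to CSI. The following is a minimal sketch of such a setup in PyTorch; the layer widths, depth, and the rain/no-rain threshold in the Score helper are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in a standard U-Net stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    # Illustrative U-Net-style encoder-decoder for 120x120 grayscale radar frames.
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)          # 120x120
        self.enc2 = conv_block(32, 64)         # 60x60
        self.bottleneck = conv_block(64, 128)  # 30x30
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)        # predicted next radar frame

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))                   # grayscale output in [0, 1]

def score(pred_mm, obs_mm, thr=0.1):
    # MAE/CSI ratio as described in the abstract (lower is better); the
    # rain/no-rain threshold `thr` is an assumption, not the competition's rule.
    mae = (pred_mm - obs_mm).abs().mean()
    hits = ((pred_mm >= thr) & (obs_mm >= thr)).sum().float()
    misses = ((pred_mm < thr) & (obs_mm >= thr)).sum().float()
    false_alarms = ((pred_mm >= thr) & (obs_mm < thr)).sum().float()
    csi = hits / (hits + misses + false_alarms + 1e-6)
    return mae / csi

frames = torch.rand(8, 1, 120, 120)   # batch of current radar frames (toy data)
pred = SmallUNet()(frames)            # frames predicted 10 minutes ahead
print(pred.shape)                     # torch.Size([8, 1, 120, 120])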

An Analysis of the Experience of Visitors of Fishing Experience Recreation Village Using Big Data - A Focus on Baekmi Village in Hwaseong-si and Susan Village in Yangyang-gun - (빅데이터를 활용한 어촌체험휴양마을 방문객의 경험분석 - 화성시 백미리와 양양군 수산리 어촌체험휴양마을을 대상으로 -)

  • Song, So-Hyun;An, Byung-Chul
    • Journal of Korean Society of Rural Planning / v.27 no.4 / pp.13-24 / 2021
  • This study used big data to analyze visitors' experiences in Fishing Experience Recreation Villages. Using portal-site posting data from the past six years, the experiences of visitors to the Fishing Experience Villages in Baekmi and Susan were analyzed. The analysis employed text mining and social network analysis, which are big data analysis techniques. Data were collected using Textom, and experience keywords were extracted by analyzing the frequency and importance of the experience texts. Afterwards, the characteristics of visitors' experiences of the Fishing Experience Villages were identified by analyzing the interactions between experience keywords using 'UCINET 6.0' and 'NetDraw'. First, based on TF and TF-IDF values, keywords referring to port names and port facilities, such as "Gungpyeong Port", "Susan Port", and "Yacht Marina", ranked at the top. This is interpreted to mean that the name of the port has the greatest impact on recognition of the Fishing Experience Villages and that visitors showed strong interest in port facilities. Second, based on the values of degree, closeness, and betweenness centrality, keywords reflecting the unique elements of port facilities and fishing villages, such as "mud flat experience", "fishing village experience", "Gungpyeong port", "Susan port", "yacht marina", and "beach", were interpreted as interacting with various experiences. Third, the CONCOR analysis confirmed that visitors' experiences centered on dynamic activities, that the experience program had the greatest influence on visitors' experiences, and that static and dynamic activities were relatively balanced. In conclusion, visitors' experiences in the Fishing Experience Villages are most affected by the fishing village environment, such as the tidal flats and the coast, and by the experience programs conducted at the fishing port facilities. In particular, fishing port facilities such as ports and marinas were found to have a strong influence on awareness of the Fishing Experience Villages. Therefore, it is important to actively utilize the scenery and environment unique to fishing villages in order to revitalize the village experience and improve the quality of the visitor experience. This study is significant in that it examined visitors' experiences in fishing experience recreation villages using big data and identified the connection between fishing villages and fishing village infrastructure in fishing village experience tourism.
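
The abstract above relies on TF/TF-IDF keyword extraction and on degree, closeness, and betweenness centrality over a keyword network. The sketch below reproduces that pipeline with open-source stand-ins (scikit-learn and networkx) rather than the Textom, UCINET 6.0, and NetDraw tools named in the abstract; the toy posts are illustrative and the CONCOR step is not reproduced.

from itertools import combinations

import networkx as nx
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for collected portal-site postings (illustrative only).
posts = [
    "mud flat experience at Gungpyeong port",
    "fishing village experience near the yacht marina",
    "beach walk and fishing village experience at Susan port",
]

# Term frequency / TF-IDF to rank experience keywords.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(posts)
terms = vectorizer.get_feature_names_out()
weights = np.asarray(tfidf.sum(axis=0)).ravel()
print(sorted(zip(terms, weights), key=lambda t: -t[1])[:10])

# Keyword co-occurrence network: terms appearing in the same post are linked.
G = nx.Graph()
for row in tfidf.toarray():
    present = [terms[i] for i, w in enumerate(row) if w > 0]
    G.add_edges_from(combinations(present, 2))

# Centrality measures used in the abstract to gauge keyword interaction.
print(nx.degree_centrality(G))
print(nx.closeness_centrality(G))
print(nx.betweenness_centrality(G))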

Deep learning-based automatic segmentation of the mandibular canal on panoramic radiographs: A multi-device study

  • Moe Thu Zar Aung;Sang-Heon Lim;Jiyong Han;Su Yang;Ju-Hee Kang;Jo-Eun Kim;Kyung-Hoe Huh;Won-Jin Yi;Min-Suk Heo;Sam-Sun Lee
    • Imaging Science in Dentistry / v.54 no.1 / pp.81-91 / 2024
  • Purpose: The objective of this study was to propose a deep-learning model for the detection of the mandibular canal on dental panoramic radiographs. Materials and Methods: A total of 2,100 panoramic radiographs (PANs) were collected from 3 different machines: RAYSCAN Alpha (n=700, PAN A), OP-100 (n=700, PAN B), and CS8100 (n=700, PAN C). Initially, an oral and maxillofacial radiologist coarsely annotated the mandibular canals. For deep learning analysis, convolutional neural networks (CNNs) utilizing U-Net architecture were employed for automated canal segmentation. Seven independent networks were trained using training sets representing all possible combinations of the 3 groups. These networks were then assessed using a hold-out test dataset. Results: Among the 7 networks evaluated, the network trained with all 3 available groups achieved an average precision of 90.6%, a recall of 87.4%, and a Dice similarity coefficient (DSC) of 88.9%. The 3 networks trained using each of the 3 possible 2-group combinations also demonstrated reliable performance for mandibular canal segmentation, as follows: 1) PAN A and B exhibited a mean DSC of 87.9%, 2) PAN A and C displayed a mean DSC of 87.8%, and 3) PAN B and C demonstrated a mean DSC of 88.4%. Conclusion: This multi-device study indicated that the examined CNN-based deep learning approach can achieve excellent canal segmentation performance, with a DSC exceeding 88%. Furthermore, the study highlighted the importance of considering the characteristics of panoramic radiographs when developing a robust deep-learning network, rather than depending solely on the size of the dataset.
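
The abstract above reports precision, recall, and the Dice similarity coefficient (DSC) for mandibular canal segmentation. Below is a minimal sketch of how these metrics are computed for binary masks with NumPy; the arrays are illustrative placeholders, not the study's data or code.

import numpy as np

def segmentation_metrics(pred, truth):
    # pred, truth: boolean masks of the same shape (canal pixels = True).
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)  # Dice similarity coefficient (DSC)
    return precision, recall, dice

rng = np.random.default_rng(0)
truth = rng.random((512, 1024)) > 0.95   # toy "ground truth" canal mask
pred = truth.copy()
pred[:, :50] = False                     # simulate some missed canal pixels
print(segmentation_metrics(pred, truth))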