• Title/Summary/Keyword: Deep-Learning


Development of long-term daily high-resolution gridded meteorological data based on deep learning (딥러닝에 기반한 우리나라 장기간 일 단위 고해상도 격자형 기상자료 생산)

  • Yookyung Jeong;Kyuhyun Byu
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.198-198 / 2023
  • Efficient water resources planning within a watershed requires not only long-term hydrological modeling but also analysis of hydrological climate-change impacts under future climate scenarios. This in turn requires high-quality, high-resolution gridded meteorological data based on observations. In Korea, however, the dense observation network composed of the synoptic weather observation system (ASOS) and the automatic weather station network (AWS) has only been available since 2000, so long-term gridded meteorological data are lacking. To address this gap, this study aims to produce the long-term daily high-resolution gridded meteorological data that could have been generated if the present dense observation network had also existed before 2000. Specifically, gridded meteorological data for the recent and past periods, divided at the year 2000, are modeled with a deep-learning algorithm to reconstruct the spatial variability and characteristics of the meteorological variables (daily temperature and precipitation) for the past period. To generate the gridded data, K-PRISM, an interpolation method that quantifies the influence of meteorological factors based on elevation in Korea, is applied to produce two gridded datasets, one from the dense and one from the sparse observation network. The sparse-network data are used as inputs and the dense-network data as outputs, and a Long Short-Term Memory (LSTM) algorithm is developed for each grid point. Parallel processing on multiple graphics processing units (GPUs) keeps the computation cost-effective. Finally, the sparse-network gridded data from 1973 to 1999 are used as inputs to produce dense-network gridded meteorological data for that period. Most of the developed prediction models show NSE values above 0.9. Therefore, the model developed in this study produces high-quality long-term meteorological data efficiently and accurately, and the data can serve as an important resource for future analyses of long-term climate trends and variability.

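As a rough, minimal sketch of the per-grid-point LSTM described above (sparse-network K-PRISM grids as input, dense-network grids as target), the following PyTorch snippet fits one small LSTM for a single grid cell on daily temperature and precipitation windows. Every shape, hyperparameter, and variable name here is an illustrative assumption, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class GridPointLSTM(nn.Module):
    """Maps a daily window from the sparse-network grid to the dense-network value at one grid cell."""
    def __init__(self, n_features=2, hidden=64):   # 2 features: daily temperature and precipitation
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):                   # x: (batch, window_days, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])     # prediction for the last day of the window

device = "cuda" if torch.cuda.is_available() else "cpu"
model = GridPointLSTM().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-ins for K-PRISM grids: sparse-network windows in, dense-network targets out.
x = torch.randn(256, 30, 2, device=device)
y = torch.randn(256, 2, device=device)
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

In practice one such model is fitted independently for every grid point, which is why the abstract emphasizes multi-GPU parallel processing.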

Hybrid phishing site detection system with GRU-based shortened URL determination technique (GRU 기반 단축 URL 판별 기법을 적용한 하이브리드 피싱 사이트 탐지 시스템)

  • Hae-Soo Kim;Mi-Hui Kim
    • Journal of IKEEE / v.27 no.3 / pp.213-219 / 2023
  • According to statistics from the National Police Agency, smishing crimes using text messages or messaging apps have increased dramatically since COVID-19. In addition, most cases of impersonation of public institutions reported to the agency were related to vaccination and rewards, and various methods were used to trick people into clicking on fake URLs (Uniform Resource Locators). URL-based detection methods cannot detect such sites properly when the information in the URL is hidden, while content-based detection methods are slow and resource-intensive. In this paper, we propose a hybrid system that first determines whether a URL is shortened using GRUs (Gated Recurrent Units), then applies URL-based detection with a Transformer to regular URLs and content-based detection with XGBoost to shortened URLs. The F1-score of the proposed detection system was 94.86, and its average processing time was 5.4 seconds.
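
The routing step of the hybrid system above hinges on a GRU that decides whether a URL is shortened. A minimal sketch of such a character-level GRU classifier is shown below; the vocabulary, padding scheme, and threshold are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ShortenedURLGRU(nn.Module):
    """Character-level GRU that scores whether a URL is a shortened URL."""
    def __init__(self, vocab_size=128, embed_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, char_ids):              # char_ids: (batch, max_url_len)
        h, _ = self.gru(self.embed(char_ids))
        return torch.sigmoid(self.out(h[:, -1, :]))   # probability that the URL is shortened

def encode_url(url, max_len=80):
    """Map a URL to padded ASCII code points (0 is reserved for padding)."""
    ids = [min(ord(c), 127) for c in url[:max_len]]
    return ids + [0] * (max_len - len(ids))

model = ShortenedURLGRU()
batch = torch.tensor([encode_url("https://bit.ly/3xYz"), encode_url("https://www.example.com/login")])
p_shortened = model(batch)
```

URLs scored above the threshold would be sent to the XGBoost content-based detector, and the rest to the Transformer URL-based detector, mirroring the routing described in the abstract.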

Performance Evaluation of ResNet-based Pneumonia Detection Model with the Small Number of Layers Using Chest X-ray Images (흉부 X선 영상을 이용한 작은 층수 ResNet 기반 폐렴 진단 모델의 성능 평가)

  • Youngeun Choi;Seungwan Lee
    • Journal of radiological science and technology / v.46 no.4 / pp.277-285 / 2023
  • In this study, pneumonia identification networks with a small number of layers were constructed using chest X-ray images. The networks had similar numbers of trainable parameters, and the performance of the trained models was quantitatively evaluated as the network architectures were modified. A total of 6 networks were constructed: a convolutional neural network (CNN), VGGNet, GoogleNet, a residual network (ResNet) with identity blocks, a ResNet with bottleneck blocks, and a ResNet with both identity and bottleneck blocks. Trainable parameters for the 6 networks were set in a range of 273,921-294,817 by adjusting the output channels of the convolution layers. Network training was implemented with a binary cross entropy (BCE) loss function, a sigmoid activation function, the adaptive moment estimation (Adam) optimizer, and 100 epochs. The performance of the trained models was evaluated in terms of training time, accuracy, precision, recall, specificity, and F1-score. The results showed that the trained models with a small number of layers precisely detected pneumonia from chest X-ray images. In particular, the overall quantitative performance of the trained models based on the ResNets was above 0.9, and their performance was similar or superior to that of the models based on the CNN, VGGNet, and GoogleNet. The residual blocks also affected the performance of the ResNet-based models. Therefore, this study demonstrates that networks with a small number of layers are suitable for detecting pneumonia from chest X-ray images, and that the ResNet-based models can be optimized by applying appropriate residual blocks.
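
For readers unfamiliar with the identity blocks mentioned above, the sketch below shows a shallow ResNet-style classifier with the training components named in the abstract (BCE loss, sigmoid output, Adam). The channel width and block count are placeholders; the paper tuned these so that each network had 273,921-294,817 trainable parameters.

```python
import torch
import torch.nn as nn

class IdentityBlock(nn.Module):
    """3x3-3x3 residual block with an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)            # identity skip connection

class SmallResNet(nn.Module):
    """Shallow ResNet-style classifier producing a single 'pneumonia' logit."""
    def __init__(self, width=32, n_blocks=3):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, width, 7, stride=2, padding=3, bias=False),
                                  nn.BatchNorm2d(width), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(*[IdentityBlock(width) for _ in range(n_blocks)])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(width, 1)

    def forward(self, x):                     # x: (batch, 1, H, W) chest X-ray
        h = self.pool(self.blocks(self.stem(x))).flatten(1)
        return self.fc(h)                     # raw logit; pair with BCEWithLogitsLoss

model = SmallResNet()
criterion = nn.BCEWithLogitsLoss()            # sigmoid + binary cross entropy, as in the abstract
optimizer = torch.optim.Adam(model.parameters())
```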

Algorithm Implementation of DNN-based Blood Glucose Management Dietary (DNN 기반 혈당 관리 식이요법 알고리즘 구현)

  • Seung-Hwan Choi;Gi-Jo Park;Kyung-Seok Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.4 / pp.73-78 / 2023
  • Diabetes is a chronic disease whose prevalence is rapidly increasing around the world, and mortality from its complications continues to rise. This has made blood glucose management a critical challenge for modern society. The main methods used to manage blood glucose are diet, exercise, and medication. Among these, diet is one of the fundamental foundations of blood glucose management: it avoids foods that cause high blood glucose, minimizes blood glucose fluctuations, and is accessible to the general population as well as to people with diabetes. Currently, several platforms, both domestic and international, offer meal-planning services, but the plans are mainly composed by users themselves or by professional coaches. Accordingly, this paper implements an accurate kcal calculation model based on a DNN and, building on it, presents a set of dietary algorithms for blood glucose management.
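
A DNN-based kcal estimator of the kind the abstract refers to could, in its simplest form, look like the sketch below. The nutrient features and network sizes are hypothetical, since the paper's feature set is not described here.

```python
import torch
import torch.nn as nn

class KcalDNN(nn.Module):
    """Small fully connected regressor that estimates kcal from per-serving nutrient features."""
    def __init__(self, n_features=5):          # e.g. carbohydrate, protein, fat, fiber, sugar (grams)
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)                     # predicted kcal per serving

model = KcalDNN()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# The predicted kcal could then feed meal-composition rules that cap per-meal energy and
# avoid foods known to cause sharp blood glucose rises.
```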

Color-Image Guided Depth Map Super-Resolution Based on Iterative Depth Feature Enhancement

  • Lijun Zhao;Ke Wang;Jinjing Zhang;Jialong Zhang;Anhong Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.8 / pp.2068-2082 / 2023
  • With the rapid development of deep learning, Depth Map Super-Resolution (DMSR) methods have achieved increasingly advanced performance. However, when the upsampling rate is very large, it is difficult for these DMSR methods to capture the structural consistency between color features and depth features. Therefore, we propose a color-image guided DMSR method based on iterative depth feature enhancement. Considering the difference between high-quality color features and low-quality depth features, we propose to decompose the depth features into High-Frequency (HF) and Low-Frequency (LF) components. Because the depth HF components and the HF color features are structurally homogeneous, only the HF color features are used to enhance the depth HF features, while the LF color features are discarded. Before each HF/LF depth decomposition, the LF component from the previous decomposition and the updated HF component are combined. After decomposing and reorganizing the recursively updated features, all the depth LF features are combined with the final updated depth HF features to obtain the enhanced depth features. Next, the enhanced depth features are fed into a multistage depth map fusion reconstruction block, in which a cross-enhancement module is introduced to fully mine the spatial correlation of the depth map by interleaving features between different convolution groups. Experimental results show that the proposed method is superior to many recent DMSR methods in terms of both root mean square error and mean absolute deviation.
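
The decomposition into low- and high-frequency feature components can be illustrated with a simple blur-and-residual split, as in the sketch below. This split and the fusion operator are assumptions for illustration; the paper's exact decomposition and enhancement modules are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def split_low_high(feat, kernel=5):
    """Split a feature map into a low-frequency (blurred) part and a high-frequency residual."""
    low = F.avg_pool2d(feat, kernel, stride=1, padding=kernel // 2)
    return low, feat - low

class HFGuidedEnhancement(nn.Module):
    """Enhance the depth HF component with the color HF component only; LF color is discarded."""
    def __init__(self, channels=64):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, depth_feat, color_feat):
        depth_lf, depth_hf = split_low_high(depth_feat)
        _, color_hf = split_low_high(color_feat)
        depth_hf = self.fuse(torch.cat([depth_hf, color_hf], dim=1))
        return depth_lf + depth_hf             # recombined, enhanced depth features

# Example with placeholder feature maps (batch, channels, height, width).
enhance = HFGuidedEnhancement()
depth_feat = torch.randn(1, 64, 32, 32)
color_feat = torch.randn(1, 64, 32, 32)
enhanced = enhance(depth_feat, color_feat)
```

Applying this enhancement recursively while setting the LF components aside, then recombining them at the end, mirrors the iterative scheme the abstract outlines.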

Comparison of Stock Price Prediction Using Time Series and Non-Time Series Data

  • Min-Seob Song;Junghye Min
    • Journal of the Korea Society of Computer and Information / v.28 no.8 / pp.67-75 / 2023
  • Stock price prediction is an important topic extensively discussed in the financial market, but it is considered challenging because of the many factors that can influence it. In this research, performance was compared and analyzed by applying time series prediction models (LSTM, GRU) and non-time series prediction models (RF, SVR, KNN, LGBM), which do not take the temporal dependence of the data into account, to stock price prediction. In addition, various data such as stock price data, technical indicators, financial statement indicators, buy/sell indicators, short selling, and foreign indicators were combined to find optimal predictors and to analyze the major factors affecting stock price prediction by industry. Hyperparameter optimization was also carried out for each algorithm to improve prediction performance and to analyze the factors affecting it. As a result of feature selection and hyperparameter optimization, the forecast accuracy of the time series prediction algorithms GRU and LSTM+GRU was found to be the highest.
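
The core of the comparison above is that a recurrent model consumes a window of past prices as an ordered sequence, while a tree-based model treats the same window as a flat feature vector. The minimal sketch below contrasts a GRU with LightGBM on a synthetic series; the data, window length, and hyperparameters are placeholders, not the study's setup.

```python
import numpy as np
import torch
import torch.nn as nn
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_absolute_error

# Placeholder price series; a real experiment would load daily closes and extra indicators.
prices = np.cumsum(np.random.randn(500)) + 100.0
window = 20
X = np.stack([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X_tr, X_te, y_tr, y_te = X[:-100], X[-100:], y[:-100], y[-100:]

# Non-time-series model: the lag window is treated as an unordered feature vector.
lgbm = LGBMRegressor(n_estimators=200).fit(X_tr, y_tr)
print("LGBM MAE:", mean_absolute_error(y_te, lgbm.predict(X_te)))

# Time-series model: a GRU consumes the same window as an ordered sequence.
class GRURegressor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(1, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, window, 1)
        h, _ = self.gru(x)
        return self.fc(h[:, -1, :]).squeeze(-1)

gru = GRURegressor()
opt = torch.optim.Adam(gru.parameters(), lr=1e-3)
xb = torch.tensor(X_tr, dtype=torch.float32).unsqueeze(-1)
yb = torch.tensor(y_tr, dtype=torch.float32)
for _ in range(200):
    opt.zero_grad()
    nn.functional.mse_loss(gru(xb), yb).backward()
    opt.step()
xt = torch.tensor(X_te, dtype=torch.float32).unsqueeze(-1)
print("GRU MAE:", mean_absolute_error(y_te, gru(xt).detach().numpy()))
```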

Morphological Analysis of Hydraulically Stimulated Fractures by Deep-Learning Segmentation Method (딥러닝 기반 균열 추출 기법을 통한 수압 파쇄 균열 형상 분석)

  • Park, Jimin;Kim, Kwang Yeom;Yun, Tae Sup
    • Journal of the Korean Geotechnical Society / v.39 no.8 / pp.17-28 / 2023
  • Laboratory-scale hydraulic fracturing experiments were conducted on granite specimens at various viscosities and injection rates of the fracturing fluid. A series of cross-sectional computed tomography (CT) images of the fractured specimens was obtained via three-dimensional X-ray CT imaging. Pixel-level fracture segmentation of the CT images was conducted using a convolutional neural network (CNN)-based Nested U-Net model. Compared with traditional image processing methods, the CNN-based model showed better performance in extracting thin and complex fractures. The extracted fractures were reconstructed in three dimensions and morphologically analyzed in terms of fracture volume, aperture, tortuosity, and surface roughness. The fracture volume and aperture increased with increasing viscosity of the fracturing fluid, while the tortuosity and roughness of the fracture surface decreased. The findings also confirmed the anisotropy of the fracture surface tortuosity and roughness. In this study, a CNN-based model was used to perform accurate fracture segmentation, and quantitative analysis of hydraulically stimulated fractures was conducted successfully.
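
Of the morphological quantities listed above, fracture volume and tortuosity are the simplest to compute from a stacked binary segmentation. The toy sketch below shows one common way to do so (voxel counting and a centroid-path tortuosity); it is not the measurement procedure used in the paper, and aperture and roughness are omitted.

```python
import numpy as np

def fracture_metrics(mask, voxel_size=1.0):
    """Toy metrics from a (z, y, x) binary fracture mask: voxel volume and centroid-path tortuosity."""
    volume = mask.sum() * voxel_size ** 3

    # Trace the fracture centroid slice by slice, then compare the traced path length
    # with the straight-line distance between the first and last occupied slices.
    centroids = []
    for z in range(mask.shape[0]):
        ys, xs = np.nonzero(mask[z])
        if len(ys):
            centroids.append((z, ys.mean(), xs.mean()))
    centroids = np.asarray(centroids, dtype=float) * voxel_size
    path = np.linalg.norm(np.diff(centroids, axis=0), axis=1).sum()
    chord = np.linalg.norm(centroids[-1] - centroids[0])
    return volume, (path / chord if chord > 0 else np.nan)

# Example with a synthetic planar fracture; a real mask would come from the Nested U-Net output.
mask = np.zeros((50, 64, 64), dtype=np.uint8)
mask[:, 30:32, 10:50] = 1
volume, tortuosity = fracture_metrics(mask, voxel_size=0.05)   # e.g. 0.05 mm isotropic voxels
```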

Automated 3D scoring of fluorescence in situ hybridization (FISH) using a confocal whole slide imaging scanner

  • Ziv Frankenstein;Naohiro Uraoka;Umut Aypar;Ruth Aryeequaye;Mamta Rao;Meera Hameed;Yanming Zhang;Yukako Yagi
    • Applied Microscopy / v.51 / pp.4.1-4.12 / 2021
  • Fluorescence in situ hybridization (FISH) is a technique for visualizing specific DNA/RNA sequences within cell nuclei, revealing the presence, location, and structural integrity of genes on chromosomes. Confocal Whole Slide Imaging (WSI) scanner technology has superior depth resolution compared with wide-field fluorescence imaging, and confocal WSI can acquire serial optical sections of a specimen, which is critical for 3D tissue reconstruction and volumetric spatial analysis. Standard clinical manual FISH scoring is labor-intensive, time-consuming, and subjective, and applying multi-gene FISH analysis alongside 3D imaging significantly increases the complexity of an accurate 3D analysis. Therefore, the purpose of this study was to establish automated 3D FISH scoring for z-stack images from a confocal WSI scanner. The algorithm and application we developed, SHIMARIS PAFQ, successfully employs 3D calculations for clear segmentation of individual cell nuclei, detection of gene signals, and classification of break-apart probe signal patterns, including the standard break-apart pattern and variant patterns due to truncation, deletion, etc. The analysis was accurate and precise when compared with ground-truth clinical manual counting and scoring reported in ten lymphoma and solid tumor cases. SHIMARIS PAFQ is objective and more efficient than the conventional procedure: it enables automated counting of more nuclei, precisely detects additional abnormal signal variations in nuclei patterns, and analyzes gigabyte multi-layer z-stack imaging data of patient tissue samples. Currently, we are developing a deep learning algorithm for automated tumor area detection to be integrated with SHIMARIS PAFQ.
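
As a loose illustration of what automated 3D FISH scoring involves (and not the SHIMARIS PAFQ algorithm itself), the sketch below labels nuclei in a z-stack with 3D connected components and counts signal spots per nucleus. The thresholds, channel names, and scoring rule are assumptions.

```python
import numpy as np
from scipy import ndimage

def score_fish_3d(dapi_stack, signal_stack, dapi_thresh=0.5, signal_thresh=0.6):
    """Toy 3D FISH scoring on a (z, y, x) stack: label nuclei, then count signal spots per nucleus."""
    # 3D connected-component labeling of nuclei from the nuclear-stain channel.
    nuclei, n_nuclei = ndimage.label(dapi_stack > dapi_thresh)

    # Detect probe signals as 3D connected components and assign each to the nucleus it falls in.
    spots, n_spots = ndimage.label(signal_stack > signal_thresh)
    counts = {}
    for spot_id in range(1, n_spots + 1):
        zyx = np.argwhere(spots == spot_id).mean(axis=0).round().astype(int)
        nucleus = nuclei[tuple(zyx)]
        if nucleus:                          # 0 means the spot centroid lies outside every nucleus
            counts[nucleus] = counts.get(nucleus, 0) + 1
    return n_nuclei, counts
```

Classifying break-apart patterns would additionally require per-channel counts and spot-to-spot distances within each nucleus, which this sketch does not attempt.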

Document Classification Methodology Using Autoencoder-based Keywords Embedding

  • Seobin Yoon;Namgyu Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.9 / pp.35-46 / 2023
  • In this study, we propose a Dual Approach methodology to enhance the accuracy of document classifiers by utilizing both contextual and keyword information. Firstly, contextual information is extracted using Google's BERT, a pre-trained language model known for its outstanding performance in various natural language understanding tasks. Specifically, we employ KoBERT, a pre-trained model on the Korean corpus, to extract contextual information in the form of the CLS token. Secondly, keyword information is generated for each document by encoding the set of keywords into a single vector using an Autoencoder. We applied the proposed approach to 40,130 documents related to healthcare and medicine from the National R&D Projects database of the National Science and Technology Information Service (NTIS). The experimental results demonstrate that the proposed methodology outperforms existing methods that rely solely on document or word information in terms of accuracy for document classification.
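
The Dual Approach above concatenates two document representations: the KoBERT [CLS] vector and a dense keyword embedding produced by an autoencoder. The sketch below shows that combination with placeholder dimensions; obtaining the actual [CLS] vectors from a pre-trained KoBERT checkpoint is left out, and the class count and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class KeywordAutoencoder(nn.Module):
    """Compresses a multi-hot keyword vector into a single dense keyword embedding."""
    def __init__(self, vocab_size=5000, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, vocab_size))

    def forward(self, keywords):               # keywords: (batch, vocab_size) multi-hot vectors
        z = self.encoder(keywords)
        return self.decoder(z), z              # reconstruction for AE training, z for classification

class DualApproachClassifier(nn.Module):
    """Classifies a document from its [CLS] context vector concatenated with the keyword embedding."""
    def __init__(self, cls_dim=768, latent_dim=64, n_classes=10):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(cls_dim + latent_dim, 256), nn.ReLU(), nn.Linear(256, n_classes))

    def forward(self, cls_vec, keyword_vec):
        return self.head(torch.cat([cls_vec, keyword_vec], dim=1))

# cls_vec stands in for the [CLS] output of a pre-trained KoBERT encoder (not loaded here).
ae = KeywordAutoencoder()
clf = DualApproachClassifier()
cls_vec = torch.randn(8, 768)
keywords = torch.bernoulli(torch.full((8, 5000), 0.01))
_, kw_embed = ae(keywords)
logits = clf(cls_vec, kw_embed)
```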

Estimation of CMIP5 based streamflow forecast and optimal training period using the Deep-Learning LSTM model (딥러닝 LSTM 모형을 이용한 CMIP5 기반 하천유량 예측 및 최적 학습기간 산정)

  • Chun, Beomseok;Lee, Taehwa;Kim, Sangwoo;Lim, Kyoung Jae;Jung, Younghun;Do, Jongwon;Shin, Yongchul
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.353-353 / 2022
  • In this study, the optimal training period for streamflow prediction was suggested using CMIP5 (the fifth phase of the Coupled Model Intercomparison Project) future climate scenarios and a deep-learning technique based on the LSTM (Long Short-Term Memory) model. The Seongsan-ri site in Jinan-gun was selected as the study area. Calibration (2000~2002/2014~2015) and validation (2003~2005/2016~2017) periods were set, and comparison of the observed streamflow at the study site with the LSTM-based simulated streamflow showed that the simulated values generally reproduced the observations well. In addition, to evaluate the long-term prediction performance of the LSTM model, LSTM-based streamflow was compared with SWAT-based streamflow for the calibration (2000~2015) and validation (2016~2019) periods. Although some errors occurred in the simulation results, the LSTM model estimated long-term streamflow well. Based on the validation results, SWAT-based streamflow was simulated using CMIP5 future climate scenario data from 2011 to 2100, and the simulated streamflow was used as training data for the LSTM model. Streamflow was simulated with the LSTM and SWAT models under various training scenarios, and the correlation and uncertainty of the LSTM/SWAT-based streamflow were compared across scenarios to suggest the optimal training period. The comparison showed that when the training period was at least 30 years, the uncertainty of the LSTM-based streamflow relative to the observed streamflow was low. Therefore, when simulating long-term future daily streamflow by linking CMIP5 future climate scenarios with the deep-learning-based LSTM model, a training period of at least 30 years appears necessary to obtain reliable LSTM-based streamflow.

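The training-period comparison described above can be summarized as scoring the LSTM simulation against the SWAT reference for each candidate training length. The sketch below uses correlation and a residual-spread proxy for uncertainty on placeholder series; the study's actual data come from SWAT runs driven by CMIP5 scenarios, and the skill measures here are illustrative assumptions.

```python
import numpy as np

def scenario_skill(lstm_flow, swat_flow):
    """Correlation and a simple residual-spread proxy for uncertainty between two daily flow series."""
    r = np.corrcoef(lstm_flow, swat_flow)[0, 1]
    spread = np.std(np.asarray(lstm_flow) - np.asarray(swat_flow))
    return r, spread

# For each candidate training-period length, an LSTM would be trained on that many years of
# SWAT-simulated flow and then scored against the reference over a common evaluation window.
for train_years in (10, 20, 30, 40):
    lstm_flow = np.random.rand(365 * 4)    # placeholder LSTM simulation for this scenario
    swat_flow = np.random.rand(365 * 4)    # placeholder SWAT reference for the same window
    r, spread = scenario_skill(lstm_flow, swat_flow)
    print(f"{train_years}-year training: r={r:.2f}, residual spread={spread:.3f}")
```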