• Title/Summary/Keyword: Deep Learning Model (딥러닝 모델)


Prediction System for Turbidity Exclusion in Imha Reservoir (임하호 탁수 대응을 위한 예측 시스템)

  • Jeong, Seokil;Choi, Hyun Gu;Kim, Hwa Yeong;Lim, Tae Hwan
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.487-487
    • /
    • 2021
  • Turbid water refers to a water body whose light transmissivity has been lowered by an inflow of organic or inorganic matter. When turbid water occurs, it causes damage such as fish mortality, increased water-treatment costs, and changes to the landscape. In Korea, sediment from the watershed sometimes flows in from upstream of a reservoir during the flood season or typhoons and generates turbid water in the reservoir; highly turbid water has occurred frequently in Imha Reservoir in the Nakdong River basin in particular. This study introduces a numerical prediction system for the rapid exclusion of turbid water when it occurs in Imha Reservoir. The basic concept of reservoir turbidity management is the rapid exclusion of highly turbid water while maintaining the water-supply capacity. Because this demands preemptive decision-making, the future situation must be predicted as soon as turbid water occurs in a tributary. For this prediction, the watershed management office performs a three-stage numerical analysis. The first stage predicts the distribution of turbid water within the reservoir when turbidity is detected upstream. Because detailed results on the vertical and horizontal distribution of turbid water are required, the three-dimensional numerical model AEM3D is used, with parameters estimated from records of past high-turbidity inflows. In the second stage, with the predicted in-reservoir distribution as the initial condition, numerical simulations of turbidity exclusion are performed for different dam discharges and intake-tower positions (selective withdrawal). Because this requires rapid simulation of many cases and long-term predictions of three months or more, the two-dimensional model CE-QUAL-W2 is used. At this stage, effective turbid-water release strategies are determined within the range that allows a stable water supply, and the release turbidity is predicted. The third stage uses the predicted release turbidity as the boundary condition to predict the turbidity of the downstream river (the Banbyeoncheon up to its confluence with the Naeseongcheon). Few cases of river turbidity prediction can be found in Korea or abroad, because input data for small and medium tributaries are insufficient and highly uncertain. Therefore, the relationship among the turbidity-causing material (SS), suspended sediment, and discharge was derived by regression analysis using about ten years of data, and the depth-averaged turbidity is predicted with a two-dimensional river model (EFDC). These three prediction stages are repeated as turbid water flows into the reservoir, and the prediction accuracy gradually improves. The early exclusion of turbid water from Imha Reservoir through this three-stage process is judged to be producing considerable benefits. However, because the type of suspended material that causes turbidity is not the same every time, which can affect the accuracy of the prediction system, introducing deep learning that accounts for various conditions to predict information about the turbid material would make the system a more reasonable decision-support tool.
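The regression in the third stage is typically expressed as a power-law rating curve relating suspended sediment to discharge. A minimal sketch in Python on synthetic data (the coefficients, noise level, and `predict_ss` helper are illustrative assumptions, not the study's fitted values):

```python
import numpy as np

# Synthetic discharge (m^3/s) and suspended-sediment (mg/L) observations;
# the power-law coefficients below are illustrative placeholders.
rng = np.random.default_rng(0)
discharge = rng.uniform(10.0, 500.0, size=200)
ss = 0.8 * discharge**1.4 * rng.lognormal(0.0, 0.1, size=200)

# Fit log(SS) = log(a) + b*log(Q) by least squares (a rating-curve regression).
b_hat, log_a_hat = np.polyfit(np.log(discharge), np.log(ss), 1)
a_hat = np.exp(log_a_hat)

def predict_ss(q):
    """Predict suspended sediment (mg/L) from discharge via the fitted curve."""
    return a_hat * q**b_hat
```

In a system like the one described, the predicted SS would then feed the EFDC boundary condition for depth-averaged turbidity.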


Efficient use of artificial intelligence ChatGPT in educational ministry (인공지능 챗GPT의 교육목회에 효율적인 활용방안)

  • Jang Heum Ok
    • Journal of Christian Education in Korea
    • /
    • v.78
    • /
    • pp.57-85
    • /
    • 2024
  • Purpose of the study: To utilize generative AI in educational ministry, this study analyzes the concepts of artificial intelligence and generative AI, and the educational-theological aspects of educational ministry, in order to find ways to use ChatGPT efficiently in educational ministry. Contents and methods of the study: First, the concepts of artificial intelligence and generative AI were analyzed under three headings: the concept of artificial intelligence, types of artificial intelligence, and the generative language model ChatGPT. Second, the educational-theological analysis of educational ministry was divided into the concept of educational ministry, its goals, its content, and its direction in the era of artificial intelligence. Third, plans for using ChatGPT in educational ministry were analyzed around the five functions of the early church community: as a tool for writing sermon manuscripts, for preparing worship and prayer, for church education, for teaching materials for believers, and for service and volunteering. Conclusion and recommendation: First, when writing sermon manuscripts with ChatGPT, high-quality manuscripts can be produced through the preacher's spirituality, faith, and insight. Second, ChatGPT makes it possible to design and plan worship efficiently and to prepare services that serve the congregation objectively through various scenarios. Third, in church education ChatGPT can be used while maintaining a complementary relationship with teachers, through collaboration between human and artificial-intelligence teachers.
Fourth, ChatGPT can support programs through which members of the church community share spiritual fellowship, ways to meet members' needs and strengthen interdependence, and an attitude of actively welcoming newcomers and respecting diversity, providing useful materials for giving, loving, serving, and growing together in the love of Christ. Lastly, ChatGPT can help provide information about volunteer activities, learning support for children and youth in the community, mentoring programs, and ways to play a leading role in forming village communities.

Development of Intelligent Severity of Atopic Dermatitis Diagnosis Model using Convolutional Neural Network (합성곱 신경망(Convolutional Neural Network)을 활용한 지능형 아토피피부염 중증도 진단 모델 개발)

  • Yoon, Jae-Woong;Chun, Jae-Heon;Bang, Chul-Hwan;Park, Young-Min;Kim, Young-Joo;Oh, Sung-Min;Jung, Joon-Ho;Lee, Suk-Jun;Lee, Ji-Hyun
    • Management & Information Systems Review
    • /
    • v.36 no.4
    • /
    • pp.33-51
    • /
    • 2017
  • With the advent of the Fourth Industrial Revolution and the growing demand for quality of life due to economic growth, the need for quality medical services is increasing. Artificial intelligence has been introduced into the medical field, but it is rarely applied to chronic skin diseases, which directly affect quality of life. Moreover, atopic dermatitis, a representative chronic skin disease, has the disadvantage that an objective diagnosis of lesion severity is difficult. The aim of this study is to establish an intelligent severity-recognition model for atopic dermatitis to improve patients' quality of life. For this, the following steps were performed. First, image data of patients with atopic dermatitis were collected from the Catholic University of Korea Seoul Saint Mary's Hospital. The collected images were refined and labeled to obtain training and verification data suitable for an objective intelligent severity-recognition model. Second, various CNN algorithms were trained and verified to select an image-recognition algorithm suitable for the model. Experimental results showed that 'ResNet V1 101' and 'ResNet V2 50' achieved the highest performance, with over 90% accuracy for erythema and excoriation, while 'VGG-NET' measured 89% accuracy, lower than the two ResNet models, due to a lack of training data. The proposed methodology demonstrates that image-recognition algorithms perform well not only in general object recognition but also in medical domains requiring expert knowledge. In addition, this study is expected to be highly applicable to atopic dermatitis because it uses image data from actual patients.
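The architecture comparison above comes down to measuring per-sign classification accuracy on held-out data. A minimal sketch, with hypothetical severity grades and predictions (not the study's data):

```python
def accuracy(y_true, y_pred):
    """Fraction of severity grades predicted exactly right."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical 4-grade severity labels (0-3) for one sign, e.g. erythema.
y_true = [0, 1, 2, 3, 2, 1, 0, 3]
y_pred = [0, 1, 2, 2, 2, 1, 0, 3]
acc = accuracy(y_true, y_pred)  # 7 of 8 grades correct
```

Each candidate CNN would be scored this way per lesion sign, and the highest-accuracy architecture selected.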


LSTM Based Prediction of Ocean Mixed Layer Temperature Using Meteorological Data (기상 데이터를 활용한 LSTM 기반의 해양 혼합층 수온 예측)

  • Ko, Kwan-Seob;Kim, Young-Won;Byeon, Seong-Hyeon;Lee, Soo-Jin
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.603-614
    • /
    • 2021
  • Recently, the surface temperature in the seas around Korea has been rising continuously. This temperature rise causes changes in fishery resources and affects leisure activities such as fishing. In particular, high temperatures lead to red tides, causing severe damage to ocean industries such as aquaculture. Meanwhile, changes in sea temperature are closely related to military operations to detect submarines, because the degree of diffraction, refraction, or reflection of the sound waves used to detect submarines varies with the ocean mixed layer. Research on predicting changes in sea water temperature is currently active, but existing work focuses on predicting only the sea surface temperature, making it difficult to identify fishery resources by depth or to apply the results to military operations such as submarine detection. Therefore, in this study, we predicted the temperature of the ocean mixed layer at a depth of 38 m using temperature data for each water depth in the upper mixed layer and meteorological data, such as air temperature, atmospheric pressure, and sunlight, that are related to the surface temperature. The data used are meteorological data and sea temperature data by water depth observed from 2016 to 2020 at the Ieodo Ocean Research Station. To increase the accuracy and efficiency of prediction, we used LSTM (Long Short-Term Memory), which is known to be well suited to time-series data among deep learning techniques. In the daily prediction experiment, the RMSE (Root Mean Square Error) of the model using air temperature, atmospheric pressure, and sunlight data together was 0.473, whereas the RMSE of the model using only the surface temperature was 0.631. These results confirm that the model using meteorological data performs better in predicting the temperature of the upper ocean mixed layer.
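What makes an LSTM suitable for such time series is its gated cell state, which carries information across time steps. A minimal single-cell forward pass in NumPy (the feature count, hidden size, and random weights are illustrative assumptions, not the study's trained model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step: input, forget, and output gates control the cell state."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b        # stacked pre-activations, shape (4n,)
    i = sigmoid(z[:n])                # input gate
    f = sigmoid(z[n:2 * n])           # forget gate
    o = sigmoid(z[2 * n:3 * n])       # output gate
    g = np.tanh(z[3 * n:])            # candidate cell update
    c = f * c_prev + i * g            # new cell state (long-term memory)
    h = o * np.tanh(c)                # new hidden state (short-term output)
    return h, c

# Illustrative sizes: 3 inputs could stand for air temperature, pressure,
# and sunlight; the 4 hidden units and random weights are arbitrary.
rng = np.random.default_rng(1)
n_in, n_hid = 3, 4
W = 0.1 * rng.standard_normal((4 * n_hid, n_in))
U = 0.1 * rng.standard_normal((4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.standard_normal((5, n_in)):  # a 5-step input sequence
    h, c = lstm_step(x, h, c, W, U, b)
```

In practice the final hidden state would feed a regression head that outputs the predicted mixed-layer temperature.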

Estimation of Rice Heading Date of Paddy Rice from Slanted and Top-view Images Using Deep Learning Classification Model (딥 러닝 분류 모델을 이용한 직하방과 경사각 영상 기반의 벼 출수기 판별)

  • Hyeok-jin Bak;Wan-Gyu Sang;Sungyul Chang;Dongwon Kwon;Woo-jin Im;Ji-hyeon Lee;Nam-jin Chung;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.4
    • /
    • pp.337-345
    • /
    • 2023
  • Estimating the rice heading date is one of the most crucial agricultural tasks related to productivity. However, due to abnormal climates around the world, it is becoming increasingly challenging to estimate the rice heading date, so a more objective classification method is needed than the existing ones. In this study, we aimed to classify the rice heading stage from various images using a CNN classification model. We collected top-view images taken from a drone and a phenotyping tower, as well as slanted-view images captured with an RGB camera. The collected images were preprocessed to prepare them as input data for the CNN model. The CNN architectures employed were ResNet50, InceptionV3, and VGG19, which are commonly used in image classification. All models achieved an accuracy of 0.98 or higher, regardless of architecture and image type. We also used Grad-CAM to visually check which image features the model attended to when classifying, and then verified that our model accurately estimates the rice heading date in paddy fields. The estimated heading dates differed by approximately one day on average across the four paddy fields. This suggests that the heading date can be estimated automatically and quantitatively from various paddy-field monitoring images.
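Once a classifier labels each day's monitoring image, the heading date can be read off as the first day predicted as the heading class. A minimal sketch with hypothetical daily predictions (the class names and dates are illustrative, not the study's data):

```python
from datetime import date, timedelta

def estimate_heading_date(start, daily_classes, heading_label="heading"):
    """Return the first date on which the classifier predicts heading."""
    for offset, label in enumerate(daily_classes):
        if label == heading_label:
            return start + timedelta(days=offset)
    return None  # heading not yet observed in the sequence

# Hypothetical daily CNN predictions for one paddy field.
preds = ["vegetative"] * 4 + ["heading"] * 3
estimated = estimate_heading_date(date(2023, 8, 10), preds)
```

Comparing `estimated` against the field-observed date gives the roughly one-day error reported above.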

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.1-19
    • /
    • 2018
  • A large amount of data is now available for research and business sectors to extract knowledge from. This data can take the form of unstructured data such as audio, text, and images, and can be analyzed by deep learning methods. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through toward the outputs. A CNN has a layer structure well suited to image classification: convolutional layers generate feature maps, pooling layers reduce the dimensionality of the feature maps, and fully connected layers classify the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as images of the apparel itself or of professional models wearing it. Such images may not be effective for training a classifier intended for street-fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset that captures mobility. This allows the classification model to be trained on far more variable data and enhances its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply transfer learning to our training network. Since transfer learning in CNNs consists of pre-training and fine-tuning stages, we divide training into two steps.
First, we pre-train our architecture on the large-scale ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instrumentation, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved great accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. Since we could not find any previously published runway dataset, we collected one from Google Image Search, obtaining 2,426 images of 32 major fashion brands, including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We performed 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. To the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest training the model with images capturing all possible postures, which we denote as mobility, using our own runway apparel image dataset. Moreover, by applying transfer learning and using the checkpoints and parameters provided by TensorFlow Slim, we could reduce the time spent training the classifier to six minutes per experiment. This model can be used in many business applications where the query image may be a runway image, a product image, or a street-fashion image.
Specifically, runway query images can power a mobile application service during fashion week to facilitate brand search; street-style query images can be classified during fashion editorial work to label the brand or style; and website query images can be processed by e-commerce multi-complex services that provide item information or recommend similar items.
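The convolution → pooling → fully-connected pipeline described in this abstract can be sketched in NumPy on a toy input (the image size, single filter, and two output classes are illustrative assumptions, not GoogLeNet's configuration):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) producing one feature map."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling, reducing each spatial dimension."""
    h, w = fmap.shape
    return fmap[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))     # toy grayscale "apparel" image
kernel = rng.standard_normal((3, 3))    # one filter (random here, learned in training)
fmap = conv2d(image, kernel)            # 6x6 feature map
pooled = max_pool(fmap)                 # 3x3 after pooling
weights = rng.standard_normal((pooled.size, 2))
logits = pooled.ravel() @ weights       # fully-connected layer, 2 classes
```

In transfer learning, the convolutional filters come pre-trained (here from ImageNet) and only the later layers are fine-tuned on the runway dataset.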

A preliminary study for development of an automatic incident detection system on CCTV in tunnels based on a machine learning algorithm (기계학습(machine learning) 기반 터널 영상유고 자동 감지 시스템 개발을 위한 사전검토 연구)

  • Shin, Hyu-Soung;Kim, Dong-Gyou;Yim, Min-Jin;Lee, Kyu-Beom;Oh, Young-Sup
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.19 no.1
    • /
    • pp.95-107
    • /
    • 2017
  • In this study, a preliminary study was undertaken for the development of an automatic tunnel incident detection system based on a machine learning algorithm, which is to detect incidents occurring in tunnels in real time and to identify the type of incident. Two road sites with operating CCTVs were selected, and a portion of the CCTV footage was processed to produce training data sets. The data sets consist of the position and time information of moving objects on the CCTV screen, extracted by detecting and tracking objects entering the screen with a conventional image-processing technique available in this study. The data sets are matched with six categories of events, such as lane change and stopping, which are also included in the training data. The training data were learned by a resilient-propagation neural network with two hidden layers; nine architectural models were set up for parametric study, from which the 300 (first hidden layer)-150 (second hidden layer) model was found to be optimal, giving the highest accuracy on both the training data and test data not used for training. This study showed that highly variable and complex traffic and incident features can be identified well, without any hand-defined feature rules, by using machine learning. In addition, the detection capability and accuracy of the machine-learning-based system will improve automatically as the volume of tunnel CCTV image data grows.
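The optimal 300-150 architecture amounts to a two-hidden-layer feed-forward pass over trajectory features, ending in a distribution over the six event categories. A minimal forward-pass sketch in NumPy (the 20-feature input size and the random weights are assumptions; only the 300/150/6 layer widths come from the abstract):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n_in, n_h1, n_h2, n_out = 20, 300, 150, 6   # input size is a placeholder
W1 = 0.05 * rng.standard_normal((n_h1, n_in))
W2 = 0.05 * rng.standard_normal((n_h2, n_h1))
W3 = 0.05 * rng.standard_normal((n_out, n_h2))

x = rng.standard_normal(n_in)                   # one object position/time feature vector
probs = softmax(W3 @ relu(W2 @ relu(W1 @ x)))   # probabilities over 6 event classes
```

Training with resilient backpropagation would adjust `W1`-`W3`; the forward pass itself is unchanged.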

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require many computations, which can lead to high computational cost and overfitting in the model. Thus, a dimension-reduction process is necessary to improve model performance. Diverse methods have been proposed, from merely reducing noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. Moreover, the representation and selection of text features affect classifier performance in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing approaches use various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition, word embeddings, which learn low-dimensional vector-space representations of words that capture semantic and syntactic information, are also used. To improve performance, recent studies have suggested methods that modify the word dictionary according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once a feature-selection algorithm identifies unimportant words, we assume that words similar to them also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec.
To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain from the raw text and build word embeddings. Second, we additionally select words similar to those with low information gain and build word embeddings. Finally, the filtered text and word embeddings are fed to the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each with the deep learning models. Reviews that received more than five helpful votes with a helpful-vote ratio over 70% were classified as helpful reviews. Because Yelp shows only the number of helpful votes, we extracted 100,000 reviews that received more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing was applied to each dataset, such as removing numbers and special characters from the text. To evaluate the proposed methods, we compared them with the performance of Word2Vec and GloVe embeddings that used all the words. We showed that one of the proposed methods outperforms the embeddings with all words: removing unimportant words improves performance, although removing too many words lowers it. For future research, diverse preprocessing methods and in-depth analysis of word co-occurrence for measuring similarity should be considered. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, making it possible to explore combinations of embedding and elimination methods.
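The two measures driving the elimination, information gain of a word over class labels and cosine similarity between word vectors, can be sketched in plain Python (the toy documents, labels, and 2-d "embeddings" are illustrative stand-ins, not the paper's data or Word2Vec vectors):

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def information_gain(word, docs, labels):
    """IG of observing `word`: H(C) - sum over presence/absence of p(v)*H(C|v)."""
    with_w = [l for d, l in zip(docs, labels) if word in d]
    without_w = [l for d, l in zip(docs, labels) if word not in d]
    n = len(labels)
    cond = sum(len(part) / n * entropy(part)
               for part in (with_w, without_w) if part)
    return entropy(labels) - cond

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

docs = [{"great", "battery"}, {"bad", "battery"}, {"great", "screen"}, {"bad", "screen"}]
labels = [1, 0, 1, 0]
ig_great = information_gain("great", docs, labels)      # perfectly predictive word
ig_battery = information_gain("battery", docs, labels)  # uninformative word

# The proposed extension also drops words whose embeddings are cosine-similar
# to a low-IG word; toy 2-d vectors stand in for Word2Vec embeddings.
vectors = {"battery": [1.0, 0.1], "cell": [0.9, 0.2], "great": [-0.5, 0.8]}
drop = {w for w, v in vectors.items() if cosine(v, vectors["battery"]) > 0.95}
```

Here "great" gets the maximum gain of 1 bit, "battery" gets 0 bits, and "cell" is dropped along with "battery" by similarity.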

Analysis of Surface Urban Heat Island and Land Surface Temperature Using Deep Learning Based Local Climate Zone Classification: A Case Study of Suwon and Daegu, Korea (딥러닝 기반 Local Climate Zone 분류체계를 이용한 지표면온도와 도시열섬 분석: 수원시와 대구광역시를 대상으로)

  • Lee, Yeonsu;Lee, Siwoo;Im, Jungho;Yoo, Cheolhee
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_3
    • /
    • pp.1447-1460
    • /
    • 2021
  • Urbanization increases the amount of impervious surface and artificial heat emission, resulting in the urban heat island (UHI) effect. Local climate zones (LCZ) are a classification scheme for urban areas that considers urban land-cover characteristics and the geometry and structure of buildings, and it can be used to analyze the urban heat island effect in detail. This study aimed to examine the UHI effect by urban structure in Suwon and Daegu using the LCZ scheme. First, LCZ maps were generated for the two cities from Landsat 8 images using convolutional neural network (CNN) deep learning. Then, the surface UHI (SUHI), which indicates the land surface temperature (LST) difference between urban and rural areas, was analyzed by LCZ class. The overall accuracies of the CNN models for LCZ classification were relatively high: 87.9% for Suwon and 81.7% for Daegu. In general, Daegu had a higher LST than Suwon for all LCZ classes. For both cities, LST tended to increase with increasing building density where building height was relatively low. For both cities, SUHI intensity was very high in summer regardless of LCZ class, and relatively high in spring and fall except for a few classes. In winter, SUHI intensity was low, with negative values for many LCZ classes. This implies that the UHI is very strong in summer and that some urban areas are often colder than rural areas in winter. These findings demonstrate the applicability of LCZ data to SUHI analysis and can provide a basis for establishing timely strategies to respond to ongoing climate change in urban areas.
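With an LCZ map in hand, SUHI intensity per class is simply the mean LST of that urban class minus the mean LST of a rural reference. A minimal sketch in NumPy (the temperature samples are hypothetical, and using LCZ D as the rural reference is an assumption for illustration):

```python
import numpy as np

# Hypothetical LST samples (deg C) grouped by LCZ class.
lst_by_lcz = {
    "LCZ 2 (compact midrise)": np.array([34.1, 35.0, 33.8]),
    "LCZ 6 (open low-rise)":   np.array([31.5, 32.0, 31.2]),
    "LCZ D (low plants)":      np.array([29.0, 29.4, 28.8]),  # rural reference
}

rural_mean = lst_by_lcz["LCZ D (low plants)"].mean()
# SUHI intensity: urban-class mean LST minus rural-reference mean LST.
suhi = {lcz: float(vals.mean() - rural_mean)
        for lcz, vals in lst_by_lcz.items() if not lcz.startswith("LCZ D")}
```

Negative values of `suhi` in winter would correspond to the urban-colder-than-rural cases noted above.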

An Interpretable Log Anomaly System Using Bayesian Probability and Closed Sequence Pattern Mining (베이지안 확률 및 폐쇄 순차패턴 마이닝 방식을 이용한 설명가능한 로그 이상탐지 시스템)

  • Yun, Jiyoung;Shin, Gun-Yoon;Kim, Dong-Wook;Kim, Sang-Soo;Han, Myung-Mook
    • Journal of Internet Computing and Services
    • /
    • v.22 no.2
    • /
    • pp.77-87
    • /
    • 2021
  • With the development of the Internet and personal computers, various complex attacks have begun to emerge. As attacks become more complex, signature-based detection becomes difficult, which has led to research on behavior-based log anomaly detection. Recent work uses deep learning to learn log order and shows good performance, but it provides no explanation for its predictions. This lack of explanation makes it hard to detect data contamination or vulnerabilities in the model itself; as a result, users lose trust in the model. To address this problem, this work proposes an explainable log anomaly detection system. In this study, log parsing is performed first. Afterward, sequential rules are extracted using Bayesian posterior probability, yielding a rule set of the form "if condition, then result, with posterior probability". If a sample matches the rule set it is normal; otherwise, it is an anomaly. We use HDFS datasets for the experiment, achieving an F1-score of 92.7% on the test dataset.
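The rule-matching idea can be sketched with first-order transitions between parsed log events: learn "if previous event, then next event" rules with their posterior probabilities, and flag sequences that violate a confident rule. This is a simplified stand-in (single-event antecedents, toy log keys) for the closed sequential patterns in the paper:

```python
from collections import Counter

def extract_rules(sequences, min_posterior=0.8):
    """Learn rules (prev event -> next event) with posterior P(next | prev)."""
    pair_counts, prev_counts = Counter(), Counter()
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            pair_counts[(prev, nxt)] += 1
            prev_counts[prev] += 1
    return {(p, n): c / prev_counts[p]
            for (p, n), c in pair_counts.items()
            if c / prev_counts[p] >= min_posterior}

def is_anomaly(seq, rules):
    """Flag a sequence containing a transition that violates a learned rule."""
    for prev, nxt in zip(seq, seq[1:]):
        expected = {n for (p, n) in rules if p == prev}
        if expected and nxt not in expected:
            return True
    return False

# Toy parsed log templates standing in for real HDFS log keys.
normal_logs = [["open", "write", "close"]] * 10
rules = extract_rules(normal_logs)
flag = is_anomaly(["open", "delete", "close"], rules)
```

Because each decision points to the specific violated rule and its posterior, the verdict is interpretable, unlike a deep-learning score alone.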