• Title/Summary/Keyword: Performance accuracy

Search Results: 8,121

Comparative assessment and uncertainty analysis of ensemble-based hydrologic data assimilation using airGRdatassim (airGRdatassim을 이용한 앙상블 기반 수문자료동화 기법의 비교 및 불확실성 평가)

  • Lee, Garim;Lee, Songhee;Kim, Bomi;Woo, Dong Kook;Noh, Seong Jin
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.10
    • /
    • pp.761-774
    • /
    • 2022
  • Accurate hydrologic prediction is essential to analyze the effects of drought, flood, and climate change on flow rates, water quality, and ecosystems. Disentangling the uncertainty of the hydrological model is one of the important issues in hydrology and water resources research. Hydrologic data assimilation (DA), a technique that updates the states or parameters of a hydrological model to produce the most likely estimates of the model's initial conditions, is one way to minimize uncertainty in hydrological simulations and improve predictive accuracy. In this study, two ensemble-based sequential DA techniques, the ensemble Kalman filter and the particle filter, are comparatively analyzed for daily discharge simulation at the Yongdam catchment using airGRdatassim. The results showed that the Kling-Gupta efficiency (KGE) improved from 0.799 in the open-loop simulation to 0.826 with the ensemble Kalman filter and to 0.933 with the particle filter. In addition, we analyzed the effects of hyper-parameters of the data assimilation methods, such as the precipitation and potential evaporation forcing error parameters and the selection of perturbed and updated states. Under the forcing error conditions tested, the particle filter was superior to the ensemble Kalman filter in terms of the KGE index, and the optimal forcing noise was relatively smaller for the particle filter. In addition, the performance of data assimilation improved as more state variables were included in the updating step, implying that the selection of updated states can itself be treated as a hyper-parameter. The simulation experiments in this study imply that DA hyper-parameters need to be carefully optimized to exploit the full potential of DA methods.
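
The headline metric above, the Kling-Gupta efficiency, combines linear correlation, a variability ratio, and a mean-bias ratio into a single score. A minimal sketch in Python (the function name and toy arrays are illustrative, not taken from the paper):

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency (Gupta et al. 2009 form): 1.0 is a perfect simulation."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]      # linear correlation
    alpha = sim.std() / obs.std()        # variability ratio
    beta = sim.mean() / obs.mean()       # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(kge(obs, obs))  # identical series -> 1.0
```

Systematic over-prediction hurts the score through both `alpha` and `beta` even when the correlation stays perfect.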

A study on the derivation and evaluation of flow duration curve (FDC) using deep learning with a long short-term memory (LSTM) networks and soil water assessment tool (SWAT) (LSTM Networks 딥러닝 기법과 SWAT을 이용한 유량지속곡선 도출 및 평가)

  • Choi, Jung-Ryel;An, Sung-Wook;Choi, Jin-Young;Kim, Byung-Sik
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.spc1
    • /
    • pp.1107-1118
    • /
    • 2021
  • Climate change brought on by global warming has increased the frequency of floods and droughts on the Korean Peninsula, along with the casualties and physical damage resulting therefrom. Preparation for and response to these water disasters require national-level planning for water resource management. In addition, watershed-level management of water resources requires flow duration curves (FDC) derived from continuous data based on long-term observations. Traditionally, in water resource studies, physical rainfall-runoff models have been widely used to generate flow duration curves. However, a number of recent studies have explored the use of data-based deep learning techniques for runoff prediction. Physical models produce hydraulically and hydrologically reliable results, but they require a high level of understanding and may take longer to operate. Data-based deep learning techniques, on the other hand, offer the benefits of lower input data requirements and shorter operation times. However, the relationship between input and output data is processed in a black box, making it impossible to consider hydraulic and hydrological characteristics. This study chose one model from each category. For the physical model, this study generated long-term data without missing values using parameter calibration of the Soil and Water Assessment Tool (SWAT), a physical model whose applicability has been tested in Korea and other countries. These data were then used as training data for the Long Short-Term Memory (LSTM) data-based deep learning technique. An analysis of the time-series data found that, during the calibration period (2017-18), the Nash-Sutcliffe efficiency (NSE) and the coefficient of determination used for fit comparison were higher for SWAT by 0.04 and 0.03, respectively, indicating that the SWAT results were superior to the LSTM results.
In addition, the annual time-series data from the models were sorted in descending order, and the resulting flow duration curves were compared with the curve based on the observed flow: the NSE values for the SWAT and LSTM models were 0.95 and 0.91, respectively, and the determination coefficients were 0.96 and 0.92, respectively, indicating that both models perform well. Although the LSTM requires improved simulation accuracy in the low-flow sections, it appears widely applicable to calculating flow duration curves for large basins, where model development and operation take longer because of the vast data input, and for ungauged basins with insufficient input data.
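
The two steps compared above, scoring a simulation with NSE and turning daily flows into an FDC by descending sort, can be sketched as follows (toy flow values, not the study's data):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect fit, 0.0 matches the mean of obs."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# An FDC is simply the flow series sorted in descending order,
# plotted against exceedance probability.
flows = np.array([3.2, 1.1, 5.6, 0.8, 2.4])
fdc = np.sort(flows)[::-1]
exceedance = np.arange(1, len(fdc) + 1) / (len(fdc) + 1)  # Weibull plotting position
```

Comparing two models' FDCs then reduces to evaluating `nse(fdc_model, fdc_observed)` on the sorted series.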

Modification and Validation of an Analytical Method for Dieckol in Ecklonia Stolonifera Extract (곰피추출물의 지표성분 Dieckol의 분석법 개선 및 검증)

  • Han, Xionggao;Choi, Sun-Il;Men, Xiao;Lee, Se-jeong;Oh, Geon;Jin, Heegu;Oh, Hyun-Ji;Kim, Eunjin;Kim, Jongwook;Lee, Boo-Yong;Lee, Ok-Hwan
    • Journal of Food Hygiene and Safety
    • /
    • v.37 no.3
    • /
    • pp.143-148
    • /
    • 2022
  • This study investigated an analytical method for determining the dieckol content in Ecklonia stolonifera extract. Method validation was performed according to the guidelines of the International Conference on Harmonization by measuring the specificity, linearity, precision, accuracy, limit of detection (LOD), and limit of quantification (LOQ) of dieckol using high-performance liquid chromatography with photodiode array detection. The results showed that the correlation coefficient of the calibration curve (R2) for dieckol was 0.9997. The LOD and LOQ for dieckol were 0.18 and 0.56 μg/mL, respectively. The intra- and inter-day precision values for dieckol were approximately 1.58-4.39% and 1.37-4.64%, respectively. Moreover, the intra- and inter-day accuracies of dieckol were approximately 96.91-102.33% and 98.41-105.71%, respectively. Thus, we successfully validated the analytical method for estimating the dieckol content in E. stolonifera extract.
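
The LOD and LOQ figures above follow the standard ICH-style definitions, 3.3·σ/S and 10·σ/S, where S is the calibration slope and σ the residual standard deviation of the regression. A minimal sketch with synthetic numbers (illustrative, not the paper's data):

```python
import numpy as np

def lod_loq(conc, response):
    """ICH-style LOD/LOQ from a linear calibration curve:
    LOD = 3.3*sigma/S, LOQ = 10*sigma/S (S = slope, sigma = residual SD)."""
    conc, response = np.asarray(conc, float), np.asarray(response, float)
    slope, intercept = np.polyfit(conc, response, 1)
    residuals = response - (slope * conc + intercept)
    sigma = np.sqrt(np.sum(residuals ** 2) / (len(conc) - 2))  # n-2 dof
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Synthetic calibration standards (concentration vs. detector response).
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
area = np.array([5.1, 10.0, 20.2, 39.9, 80.1])
lod, loq = lod_loq(conc, area)
```

By construction LOQ/LOD is always 10/3.3 under these definitions; the values themselves depend on how tight the calibration residuals are.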

A COVID-19 Diagnosis Model based on Various Transformations of Cough Sounds (기침 소리의 다양한 변환을 통한 코로나19 진단 모델)

  • Minkyung Kim;Gunwoo Kim;Keunho Choi
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.57-78
    • /
    • 2023
  • COVID-19, which emerged in Wuhan, China in November 2019, spread beyond China in early 2020 and worldwide by March 2020. Preventing a highly contagious virus like COVID-19 in advance and actively treating confirmed cases are both important, but because the virus spreads so quickly, it is even more important to identify confirmed cases quickly and prevent further spread. However, PCR testing is costly and time-consuming, and although self-test kits are easy to access, their recurring cost makes frequent testing burdensome. Therefore, if COVID-19 positivity could be determined from the sound of a cough, anyone could easily check their status anytime, anywhere, with great economic advantages. In this study, an experiment was conducted on a method to identify whether a person is COVID-19 positive based on a cough sound. Cough sound features were extracted through MFCC, Mel-Spectrogram, and spectral contrast. To ensure the quality of the cough sounds, noisy data were deleted based on SNR, and only the cough sound was extracted from each voice file through chunking. Since the objective is binary classification of COVID-19 positive and negative cases, learning was performed with the XGBoost, LightGBM, and FCNN algorithms, which are often used for classification, and the results were compared. Additionally, we conducted a comparative experiment on model performance using multidimensional vectors obtained by converting cough sounds into both images and vectors. The experimental results showed that the LightGBM model, using features obtained by converting basic health-status information and cough sounds into multidimensional vectors through MFCC, Mel-Spectrogram, spectral contrast, and spectrogram, achieved the highest accuracy of 0.74.
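
The SNR-based screening of noisy recordings mentioned above could look like the following sketch; the frame-energy estimator and the 10 dB threshold are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def estimate_snr_db(audio, frame=1024):
    """Rough SNR estimate: treat the quietest frames as the noise floor
    and the loudest frames as signal (a heuristic, not the study's method)."""
    n = len(audio) // frame
    power = np.array([np.mean(audio[i * frame:(i + 1) * frame] ** 2) for i in range(n)])
    power = np.sort(power)
    noise = power[: max(1, n // 10)].mean()    # bottom 10% of frames
    signal = power[-max(1, n // 10):].mean()   # top 10% of frames
    return 10.0 * np.log10(signal / (noise + 1e-12))

def keep_clean(recordings, threshold_db=10.0):
    """Discard recordings whose estimated SNR falls below the threshold."""
    return [a for a in recordings if estimate_snr_db(a) >= threshold_db]
```

A recording containing a distinct cough burst over a quiet background scores a high SNR and is kept, while a recording that is uniform noise throughout scores near 0 dB and is dropped.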

Comparison of rainfall-runoff performance based on various gridded precipitation datasets in the Mekong River basin (메콩강 유역의 격자형 강수 자료에 의한 강우-유출 모의 성능 비교·분석)

  • Kim, Younghun;Le, Xuan-Hien;Jung, Sungho;Yeon, Minho;Lee, Gihae
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.2
    • /
    • pp.75-89
    • /
    • 2023
  • As the Mekong River basin is an internationally shared river, it is difficult to collect precipitation data, and the quantitative and qualitative quality of the datasets differs from country to country, which may increase the uncertainty of hydrological analysis results. Recently, with the development of remote sensing technology, it has become easier to obtain gridded precipitation products (GPPs), and various hydrological studies have been conducted in ungauged or large watersheds using GPPs. In this study, rainfall-runoff simulation in the Mekong River basin was conducted using the SWAT model, a semi-distributed model, with three satellite GPPs (TRMM, GSMaP, PERSIANN-CDR) and two gauge-based GPPs (APHRODITE, GPCC). Four water level stations, Luang Prabang, Pakse, Stung Treng, and Kratie, which are major outlets of the Mekong mainstream, were selected; the parameters of the SWAT model were calibrated using APHRODITE as the observation for the period from 2001 to 2011, and the runoff simulations were verified for the period from 2012 to 2013. In addition, using ConvAE, a convolutional neural network model, spatio-temporal correction of the original satellite precipitation products was performed, and rainfall-runoff performance was compared before and after the correction. The original satellite precipitation products and GPCC were quantitatively under- or over-estimated, or showed very different spatial patterns, compared with APHRODITE, whereas the satellite precipitation products corrected using ConvAE showed dramatically improved spatial correlation. In the runoff simulations, the results using the ConvAE-corrected satellite precipitation products showed significantly improved accuracy at all outlets compared with those using the original products.
Therefore, the bias correction technique using ConvAE presented in this study can be applied to various hydrological analyses in large watersheds where the rain gauge network is not dense.
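
The "spatial correlation" improvement reported above can be quantified with a simple grid-to-grid Pearson correlation against the reference field. A minimal sketch (the array names and synthetic fields are illustrative, not the study's data):

```python
import numpy as np

def spatial_correlation(grid_a, grid_b):
    """Pearson correlation between two precipitation fields on the same grid,
    a simple proxy for spatial-pattern agreement."""
    a = np.asarray(grid_a, float).ravel()
    b = np.asarray(grid_b, float).ravel()
    return np.corrcoef(a, b)[0, 1]

# Illustrative fields: a reference grid and a biased, noisy satellite estimate.
rng = np.random.default_rng(1)
reference = rng.gamma(2.0, 5.0, size=(20, 30))            # stand-in for a gauge-based field
satellite = 1.3 * reference + rng.normal(0, 5, (20, 30))  # biased + noisy estimate
```

A bias-correction step that removes the noise and scaling would push `spatial_correlation(reference, corrected)` back toward 1.0, which is the before/after comparison the study performs.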

Verification of Multi-point Displacement Response Measurement Algorithm Using Image Processing Technique (영상처리기법을 이용한 다중 변위응답 측정 알고리즘의 검증)

  • Kim, Sung-Wan;Kim, Nam-Sik
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.3A
    • /
    • pp.297-307
    • /
    • 2010
  • Recently, maintenance engineering and technology for civil and building structures have begun to draw significant attention, and the number of structures whose structural safety needs to be evaluated due to deterioration and performance degradation is rapidly increasing. When stiffness decreases because of structural deterioration and member cracks, the dynamic characteristics of a structure change. It is therefore important to correctly evaluate the damaged areas and the extent of damage by analyzing the dynamic characteristics of the actual behavior of a structure. In general, the typical measurement instruments used for structure monitoring are dynamic instruments. Existing dynamic instruments make it difficult to obtain reliable data when the cable connecting the sensors to the device is long, and they are uneconomical because of the one-to-one connection required between each sensor and the instrument. Therefore, a method of measuring vibration at long range without attaching sensors is required. Representative non-contact methods for measuring the vibration of structures are the laser Doppler effect, GPS-based methods, and image processing techniques. The laser Doppler method shows relatively high accuracy but is uneconomical, while the GPS-based method requires expensive equipment, suffers from inherent signal error, and has a limited sampling rate. The image-based method, by contrast, is simple and economical, and is well suited to obtaining the vibration and dynamic characteristics of inaccessible structures. Many researchers have recently used camera image signals instead of sensors. However, the existing method, which records a single point of a target attached to a structure and then measures vibration using image processing, is relatively limited in its measurement targets.
Therefore, this study conducted a shaking table test and a field load test to verify the validity of a method that can measure multi-point displacement responses of structures using an image processing technique.
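
The core of image-based displacement measurement is locating a target marker in successive frames; the per-frame position differences, scaled by a pixel-to-length factor, give the displacement response. A naive normalized cross-correlation sketch (for illustration only; a real system such as the paper's would add subpixel refinement and camera calibration):

```python
import numpy as np

def locate_target(frame, template):
    """Find the (row, col) of a marker template in a grayscale frame
    by exhaustive normalized cross-correlation."""
    fh, fw = frame.shape
    th, tw = template.shape
    t = template - template.mean()
    best_pos, best_score = (0, 0), -np.inf
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            w = frame[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum())
            score = (w * t).sum() / denom if denom > 0 else -1.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

# Displacement between frames = difference of located positions (pixels),
# multiplied by a pixel-to-mm scale factor for physical displacement.
```

Tracking several markers in the same frame gives the multi-point displacement responses the study verifies.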

Performance Evaluation of Monitoring System for Sargassum horneri Using GOCI-II: Focusing on the Results of Removing False Detection in the Yellow Sea and East China Sea (GOCI-II 기반 괭생이모자반 모니터링 시스템 성능 평가: 황해 및 동중국해 해역 오탐지 제거 결과를 중심으로)

  • Han-bit Lee;Ju-Eun Kim;Moon-Seon Kim;Dong-Su Kim;Seung-Hwan Min;Tae-Ho Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_2
    • /
    • pp.1615-1633
    • /
    • 2023
  • Sargassum horneri is a floating alga that breeds in large quantities in the Yellow Sea and East China Sea and then drifts to the coast of the Republic of Korea, causing various problems such as environmental damage and harm to fish farms. To effectively prevent damage and preserve the coastal environment, the development of Sargassum horneri detection algorithms using satellite-based remote sensing technology has been actively pursued. However, incorrect detection information increases the travel distance of ships collecting Sargassum horneri and causes confusion in the response of local governments and institutions, so minimizing false detections is very important when producing Sargassum horneri spatial information. This study applied technology to automatically remove false detections from the GOCI-II-based Sargassum horneri detection algorithm of the National Ocean Satellite Center (NOSC) of the Korea Hydrographic and Oceanographic Agency (KHOA). Based on an analysis of the causes of the major false detections, the technology includes processes for removing linear and sporadic false detections, and for treating as false detections the green algae that occur in large quantities along the coast of China in spring and summer. The automatic false-detection removal was applied to the dates on which Sargassum horneri occurred between February 24 and June 25, 2022. Visual assessment results were generated using mid-resolution satellite images, and qualitative and quantitative evaluations were performed. Linear false detections were completely removed, and most of the sporadic and green-algae false detections that affected the distribution were removed.
Even after the automatic false-detection removal, the distribution area of Sargassum horneri could be confirmed against the visual assessment results, and the accuracy and precision calculated using a binary classification model averaged 97.73% and 95.4%, respectively. The recall was very low at 29.03%, presumably because of Sargassum horneri movement caused by the observation-time discrepancy between GOCI-II and the mid-resolution satellite images, differences in spatial resolution, positional deviation from orthorectification, and cloud masking. The false-detection removal results of this study make it possible to determine the spatial distribution of Sargassum horneri in near real time, but there are limitations in accurately estimating biomass. Therefore, continuous research on upgrading the Sargassum horneri monitoring system should be conducted so that it can serve as data for establishing future Sargassum horneri response plans.
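
The high-accuracy/high-precision/low-recall pattern described above comes straight from the binary confusion matrix: a conservative detector produces few false positives (high precision) while missing many true pixels (low recall). A minimal sketch over pixel masks (toy masks, not the study's data):

```python
import numpy as np

def binary_metrics(pred, truth):
    """Accuracy, precision, recall from binary masks (1 = Sargassum pixel)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)    # detected and present
    fp = np.sum(pred & ~truth)   # detected but absent
    fn = np.sum(~pred & truth)   # present but missed
    tn = np.sum(~pred & ~truth)  # correctly empty
    accuracy = (tp + tn) / pred.size
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# A conservative detector: one confident hit, two misses, no false alarms.
pred = np.array([1, 0, 0, 0])
truth = np.array([1, 1, 1, 0])
acc, prec, rec = binary_metrics(pred, truth)
```

Here precision is perfect while recall is only one third, the same qualitative trade-off the study reports.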

Studies on Xylooligosaccharide Analysis Method Standardization using HPLC-UVD in Health Functional Food (건강기능식품에서 HPLC-UVD를 이용한 자일로올리고당 시험법의 표준화 연구)

  • Se-Yun Lee;Hee-Sun Jeong;Kyu-Heon Kim;Mi-Young Lee;Jung-Ho Choi;Jeong-Sun Ahn;Kwang-Il Kwon;Hye-Young Lee
    • Journal of Food Hygiene and Safety
    • /
    • v.39 no.2
    • /
    • pp.72-82
    • /
    • 2024
  • This study aimed to develop a scientifically and systematically standardized xylooligosaccharide analytical method that can be applied to products with various formulations. The analysis was conducted using HPLC with a Cadenza C18 column, involving pre-column derivatization with 1-phenyl-3-methyl-5-pyrazolone (PMP) and UV detection at 254 nm. The xylooligosaccharide content was analyzed by converting xylooligosaccharide into xylose through acid hydrolysis. The pre-treatment methods were compared and evaluated by varying the sonication time, acid hydrolysis time, and acid concentration. Optimal instrument conditions were achieved with a mobile phase of 20 mM potassium phosphate buffer (pH 6)-acetonitrile (78:22, v/v) under isocratic elution at a flow rate of 0.5 mL/min (254 nm). Furthermore, we validated the standardized analysis method in terms of specificity, linearity, limit of detection (LOD), limit of quantification (LOQ), accuracy, and precision to support the suitability of the proposed analytical procedure. The standardized analysis method is now in use for monitoring relevant health functional food products on the market. Our results demonstrate that the standardized analysis method is expected to enhance the reliability of quality control for health functional foods containing xylooligosaccharide.
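
Quantifying the hydrolyzed xylose against a calibration curve reduces to fitting a line through the standards, checking its linearity, and back-calculating sample concentrations from peak areas. A minimal sketch (the concentrations, peak areas, and function names are synthetic illustrations, not the paper's data):

```python
import numpy as np

def fit_calibration(conc, area):
    """Fit area = slope*conc + intercept; return (slope, intercept, R^2)."""
    conc, area = np.asarray(conc, float), np.asarray(area, float)
    slope, intercept = np.polyfit(conc, area, 1)
    fitted = slope * conc + intercept
    r2 = 1.0 - np.sum((area - fitted) ** 2) / np.sum((area - area.mean()) ** 2)
    return slope, intercept, r2

def back_calculate(area, slope, intercept, dilution=1.0):
    """Back-calculate analyte concentration from a sample's peak area."""
    return (area - intercept) / slope * dilution

# Synthetic calibration standards (concentration vs. peak area).
conc = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
area = np.array([52.0, 128.0, 251.0, 498.0, 1003.0])
slope, intercept, r2 = fit_calibration(conc, area)
```

Linearity acceptance in method validation is typically a threshold on this R² across the working range.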

A Study on the Medical Application and Personal Information Protection of Generative AI (생성형 AI의 의료적 활용과 개인정보보호)

  • Lee, Sookyoung
    • The Korean Society of Law and Medicine
    • /
    • v.24 no.4
    • /
    • pp.67-101
    • /
    • 2023
  • The utilization of generative AI in the medical field is also being rapidly researched. Access to vast data sets reduces the time and energy spent selecting information. However, as the effort put into content creation decreases, associated problems become more likely. For example, because generative AIs learn from data within a set period and then generate outcomes, users must discern the accuracy of the results themselves. While the answers may appear plausible, their sources are often unclear, making it difficult to determine their veracity. Additionally, the possibility of results being presented from a biased or distorted perspective cannot currently be ruled out, which raises ethical concerns. Despite these concerns, the field of generative AI is continually advancing, and an increasing number of users are leveraging it in various sectors, including the biomedical and life sciences. This raises important legal questions about who bears responsibility, and to what extent, for damage caused by these high-performance AI algorithms. A general overview of the issues with generative AI includes those discussed above, but another perspective arises from its fundamental nature as a large language model (LLM). There is a civil-law concern regarding the memorization of training data within artificial neural networks and its subsequent reproduction. Medical data, by nature, often reflect personal characteristics of patients, potentially leading to issues such as the regeneration of personal information. The extensive application of generative AI in scenarios beyond traditional AI raises the possibility of legal challenges that cannot be ignored.
Upon examining the technical characteristics of generative AI and focusing on legal issues, especially the protection of personal information, it is evident that current personal information protection laws, particularly in the context of health and medical data utilization, are inadequate. These laws provide processes for anonymizing and de-identifying specific personal information but fall short when generative AI is applied as software in medical devices. To address the functionalities of generative AI in clinical software, a reevaluation and adjustment of the existing laws for the protection of personal information are imperative.

Applicability Analysis of Constructing UDM of Cloud and Cloud Shadow in High-Resolution Imagery Using Deep Learning (딥러닝 기반 구름 및 구름 그림자 탐지를 통한 고해상도 위성영상 UDM 구축 가능성 분석)

  • Nayoung Kim;Yerin Yun;Jaewan Choi;Youkyung Han
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.351-361
    • /
    • 2024
  • Satellite imagery contains various elements such as clouds, cloud shadows, and terrain shadows. Accurately identifying and eliminating these factors, which complicate satellite image analysis, is essential for maintaining the reliability of remote sensing imagery. For this reason, satellites such as Landsat-8, Sentinel-2, and Compact Advanced Satellite 500-1 (CAS500-1) provide Usable Data Masks (UDMs) with their images as part of their Analysis Ready Data (ARD) products. Precise detection of clouds and their shadows is crucial for the accurate construction of these UDMs. Existing cloud and cloud shadow detection methods are categorized into threshold-based methods and Artificial Intelligence (AI)-based methods. Recently, AI-based methods, particularly deep learning networks, have been preferred because of their advantage in handling large datasets. This study analyzes the applicability of constructing UDMs for high-resolution satellite images through deep learning-based cloud and cloud shadow detection using open-source datasets. To validate the performance of the deep learning network, we compared the detection results generated by the network with the pre-existing UDMs of Landsat-8, Sentinel-2, and CAS500-1 satellite images. The results demonstrated high accuracy in the detection outcomes produced by the network. Additionally, we applied the network to detect clouds and their shadows in KOMPSAT-3/3A images, which do not provide UDMs. The experiment confirmed that the deep learning network effectively detected clouds and their shadows in high-resolution satellite images, demonstrating that UDM data for high-resolution satellite imagery can be constructed using a deep learning network.
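
Comparing a network's predicted cloud mask with a pre-existing UDM is typically done per pixel; intersection-over-union is one common choice. A minimal sketch (toy masks, illustrative of the comparison rather than the study's exact metric):

```python
import numpy as np

def iou(pred_mask, udm_mask):
    """Intersection-over-union between a predicted cloud mask and a reference UDM."""
    a, b = np.asarray(pred_mask, bool), np.asarray(udm_mask, bool)
    union = np.sum(a | b)
    return np.sum(a & b) / union if union else 1.0  # two empty masks agree perfectly

# Toy 2x2 masks: the prediction flags one extra pixel beyond the UDM.
pred = np.array([[1, 1], [0, 0]])
udm = np.array([[1, 0], [0, 0]])
```

Running the same comparison per class (cloud vs. cloud shadow) gives a per-category view of where the network and the UDM disagree.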