• Title/Summary/Keyword: Data Accuracy


Spatial analysis of water shortage areas in South Korea considering spatial clustering characteristics (공간군집특성을 고려한 우리나라 물부족 핫스팟 지역 분석)

  • Lee, Dong Jin;Kim, Tae-Woong
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.2
    • /
    • pp.87-97
    • /
    • 2024
  • This study analyzed water shortage hotspot areas in South Korea by applying spatial clustering analysis to the 2030 water shortage estimates of the Master Plans for National Water Management. To identify water shortage cluster areas, we used water shortage data from the past maximum drought (about a 50-year return period) and performed spatial clustering analysis using Local Moran's I and Getis-Ord Gi*. Areas belonging to spatial clusters of water shortage were selected using the cluster map, and the spatial characteristics of the water shortage areas were verified based on p-values and the Moran scatter plot. The results indicated that one cluster (lower Imjin River (#1023) and neighbors) in the Han River basin and two clusters (Daejeongcheon (#2403) and neighbors, Gahwacheon (#2501) and neighbors) in the Nakdong River basin were hotspots for water shortage, whereas one cluster (lower Namhan River (#1007) and neighbors) in the Han River basin and one cluster (Byeongseongcheon (#2006) and neighbors) in the Nakdong River basin were HL areas, meaning that the specific area has high water shortage while its neighbors have low water shortage. When spatial clustering was analyzed by standard watershed unit, the entire spatial clustering area satisfied the statistical criteria, yielding statistically significant results. The overall results indicated that spatial clustering analysis performed using standard watersheds can resolve the variable spatial unit problem to some extent, which relatively increases the accuracy of spatial analysis.
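The Local Moran's I statistic behind the HH/LL (hotspot/coldspot) and HL/LH (outlier) labels above can be sketched in plain NumPy. This is an illustrative version, not the study's code; the row-standardized contiguity weights matrix and the watershed values are hypothetical:

```python
import numpy as np

def local_morans_i(x, W):
    """Local Moran's I for each observation.
    x: 1-D array of values (e.g., water shortage per watershed);
    W: row-standardized spatial weights matrix."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    m2 = (z ** 2).sum() / len(x)
    lag = W @ z                       # spatially lagged deviations
    return (z / m2) * lag             # positive => local clustering

def classify_quadrant(x, W):
    """HH / LL = clusters (hotspot / coldspot); HL / LH = spatial outliers."""
    z = np.asarray(x, dtype=float) - np.mean(x)
    lag = W @ z
    return [("H" if zi > 0 else "L") + ("H" if li > 0 else "L")
            for zi, li in zip(z, lag)]
```

In practice the resulting statistics are tested against a permutation-based p-value before an area is declared a significant cluster, as the abstract describes.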

Development of Stream Cover Classification Model Using SVM Algorithm based on Drone Remote Sensing (드론원격탐사 기반 SVM 알고리즘을 활용한 하천 피복 분류 모델 개발)

  • Jeong, Kyeong-So;Go, Seong-Hwan;Lee, Kyeong-Kyu;Park, Jong-Hwa
    • Journal of Korean Society of Rural Planning
    • /
    • v.30 no.1
    • /
    • pp.57-66
    • /
    • 2024
  • This study aimed to develop a precise vegetation cover classification model for small streams by combining drone remote sensing with support vector machine (SVM) techniques. The chosen study area was the Idong stream in Geosan-gun, Chunbuk, South Korea. The initial stage involved image acquisition with a fixed-wing drone (eBee) carrying two sensors: the S.O.D.A visible camera for capturing detailed visuals and the Sequoia+ multispectral sensor for gathering rich spectral data. The survey captured the stream's features on August 18, 2023. From the multispectral images, a range of vegetation indices was calculated, including the widely used normalized difference vegetation index (NDVI), the soil-adjusted vegetation index (SAVI), which factors in soil background, and the normalized difference water index (NDWI) for identifying water bodies. The third stage was the development of an SVM model based on the calculated vegetation indices. The RBF kernel was chosen for the SVM, and optimal values for the cost (C) and gamma hyperparameters were determined. The results are as follows: (a) High-resolution imaging: drone-based image acquisition provided high-resolution images (1 cm/pixel) of the Idong stream, which effectively captured the stream's morphology, including its width, variations in the streambed, and the vegetation cover patterns on the stream banks and bed. (b) Vegetation insights through indices: the calculated vegetation indices revealed distinct spatial patterns in vegetation cover and moisture content; NDVI emerged as the strongest indicator of vegetation cover, while SAVI and NDWI provided insights into moisture variations. (c) Accurate classification with SVM: the SVM model, using the combination of NDVI, SAVI, and NDWI, achieved an accuracy of 0.903, calculated from the confusion matrix. This performance translated into precise classification of vegetation, soil, and water within the stream area. The findings demonstrate the effectiveness of drone remote sensing and SVM techniques for developing accurate vegetation cover classification models for small streams. Such models hold great potential for applications including stream monitoring, informed management practices, and stream restoration. By incorporating images and additional details about the specific drone and sensor technology, we can gain a deeper understanding of small streams and develop effective strategies for stream protection and management.
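The index-then-classify pipeline described above can be sketched with scikit-learn's `SVC` and an RBF kernel. The reflectance ranges, class proportions, and hyperparameter values below are illustrative assumptions on synthetic pixels, not the study's data or tuned settings:

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

def vegetation_indices(nir, red, green, L=0.5):
    """Per-pixel NDVI / SAVI / NDWI feature vectors from reflectance bands."""
    ndvi = (nir - red) / (nir + red)
    savi = (1 + L) * (nir - red) / (nir + red + L)   # soil-adjusted
    ndwi = (green - nir) / (green + nir)             # water index
    return np.stack([ndvi, savi, ndwi], axis=-1)

# Synthetic pixels: vegetation (high NIR), soil (moderate), water (low NIR).
rng = np.random.default_rng(0)
n = 200
nir   = np.concatenate([rng.uniform(.60, .90, n), rng.uniform(.30, .50, n), rng.uniform(.02, .08, n)])
red   = np.concatenate([rng.uniform(.05, .15, n), rng.uniform(.25, .40, n), rng.uniform(.10, .20, n)])
green = np.concatenate([rng.uniform(.10, .20, n), rng.uniform(.20, .30, n), rng.uniform(.15, .30, n)])
X = vegetation_indices(nir, red, green)
y = np.repeat([0, 1, 2], n)  # 0 = vegetation, 1 = soil, 2 = water

# RBF-kernel SVM; in the study C and gamma were tuned, here values are ad hoc.
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
acc = accuracy_score(y, clf.predict(X))
```

A real workflow would hold out a labeled validation set and report the confusion-matrix accuracy on it, as the paper does.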

Vehicle Acceleration and Vehicle Spacing Calculation Method Used YOLO (YOLO기법을 사용한 차량가속도 및 차두거리 산출방법)

  • Jeong-won Gil;Jae-seong Hwang;Jae-Kyung Kwon;Choul-ki Lee
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.23 no.1
    • /
    • pp.82-96
    • /
    • 2024
  • In traffic flow analysis, speed, traffic volume, and density are important macroscopic indicators, while acceleration and spacing are important microscopic indicators. Speed and traffic volume can be collected with currently installed traffic information collection devices; however, acceleration and spacing data, which are necessary for safety analysis and autonomous driving, cannot. 'You Only Look Once' (YOLO), an object recognition technique, offers excellent accuracy and real-time performance and is used in various fields, including transportation. In this study, to measure acceleration and spacing using YOLO, we developed a model that measures them through changes in vehicle speed at each interval and the differences in travel time between vehicles, with closely spaced measurement intervals. We confirmed that the range of acceleration and spacing differs depending on the traffic characteristics of each point, and performed a comparative analysis of the reference distance and screen angle to secure the measurement rate. The measurement interval was 20 m, and the closer the camera angle was to a right angle, the higher the measurement rate. These results will contribute to intersection-level safety analysis and to domestic vehicle behavior models.
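The interval-based calculation can be sketched as below. It assumes the YOLO detection step has already produced the timestamps at which each vehicle crosses the reference marks; the function names and the headway-based spacing approximation are illustrative, not the paper's exact formulation:

```python
def segment_speeds(mark_positions, crossing_times):
    """Speed in each interval (m/s) from consecutive line-crossing times."""
    return [(mark_positions[i + 1] - mark_positions[i]) /
            (crossing_times[i + 1] - crossing_times[i])
            for i in range(len(mark_positions) - 1)]

def accelerations(mark_positions, crossing_times):
    """Acceleration (m/s^2) between consecutive intervals,
    using the time between interval midpoints."""
    v = segment_speeds(mark_positions, crossing_times)
    return [(v[i + 1] - v[i]) / ((crossing_times[i + 2] - crossing_times[i]) / 2)
            for i in range(len(v) - 1)]

def spacing(leader_times, follower_times, leader_speeds, k):
    """Approximate spacing at mark k: time headway x leader speed
    (distance the leader covers while the follower closes the headway)."""
    headway = follower_times[k] - leader_times[k]
    return headway * leader_speeds[min(k, len(leader_speeds) - 1)]
```

With 20 m marks, for example, a vehicle crossing at t = 0, 2, 3 s gives interval speeds of 10 and 20 m/s and a positive acceleration between them.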

Domain Knowledge Incorporated Local Rule-based Explanation for ML-based Bankruptcy Prediction Model (머신러닝 기반 부도예측모형에서 로컬영역의 도메인 지식 통합 규칙 기반 설명 방법)

  • Soo Hyun Cho;Kyung-shik Shin
    • Information Systems Review
    • /
    • v.24 no.1
    • /
    • pp.105-123
    • /
    • 2022
  • Thanks to the remarkable success of artificial intelligence (AI) techniques, new possibilities for their application to real-world problems have opened up. One prominent application is the bankruptcy prediction model, which often serves as a basic knowledge base for credit scoring models in the financial industry. As a result, there has been extensive research on improving the prediction accuracy of such models. However, despite their impressive performance, machine learning (ML)-based models are difficult to deploy because of their intrinsic opacity, especially in fields that require or value an explanation of the results the model produces. The financial domain is one area where explanation matters to stakeholders such as domain experts and customers. In this paper, we propose a novel approach that incorporates financial domain knowledge into local rule generation to provide instance-level explanations for a bankruptcy prediction model. The results show that the proposed method successfully selects and classifies the extracted rules based on their feasibility and the information they convey to users.

Safety Verification Techniques of Privacy Policy Using GPT (GPT를 활용한 개인정보 처리방침 안전성 검증 기법)

  • Hye-Yeon Shim;MinSeo Kweun;DaYoung Yoon;JiYoung Seo;Il-Gu Lee
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.2
    • /
    • pp.207-216
    • /
    • 2024
  • As big data has accumulated with the 4th Industrial Revolution, personalized services have increased rapidly. As a result, the amount of personal information collected by online services has grown, along with concerns about personal information leakage and privacy infringement. Online service providers publish privacy policies to address these concerns, but because the policies are long and complex, it is difficult for users to directly identify risk items, and the policies are often misused. Therefore, a method is needed that can automatically check whether a privacy policy is safe. However, conventional blacklist- and machine learning-based privacy policy verification techniques are difficult to scale or have low accessibility. In this paper, to solve these problems, we propose a safety verification technique for privacy policies using the GPT-3.5 API, a generative artificial intelligence service. Classification can be performed even in a new environment, showing that the general public, without expertise, can easily inspect a privacy policy. In the experiments, we measured how accurately the blacklist-based and GPT-based techniques classify safe and unsafe sentences, and the time spent on classification. According to the experimental results, the proposed technique showed, on average, 10.34% higher accuracy than the conventional blacklist-based sentence safety verification technique.
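A minimal sketch of such a GPT-based sentence check is shown below. The prompt wording, the SAFE/UNSAFE label set, and the helper names are hypothetical (the paper's actual prompts are not given in the abstract), and the call uses the openai>=1.0 Python client interface rather than the paper's exact setup:

```python
def build_prompt(sentence):
    """Classification prompt sent to the model; label set is illustrative."""
    return (
        "You are a privacy-policy auditor. Classify the following "
        "privacy-policy sentence as SAFE or UNSAFE for the user, "
        "answering with exactly one word.\n\n"
        f"Sentence: {sentence}"
    )

def classify_sentence(client, sentence, model="gpt-3.5-turbo"):
    """One API round-trip per policy sentence; client is an openai.OpenAI()."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(sentence)}],
        temperature=0,  # deterministic labels for a verification task
    )
    return resp.choices[0].message.content.strip().upper()
```

Temperature 0 and a one-word answer format keep the output machine-checkable, which is what makes the approach usable by non-experts.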

Comparison of One- and Two-Region of Interest Strain Elastography Measurements in the Differential Diagnosis of Breast Masses

  • Hee Jeong Park;Sun Mi Kim;Bo La Yun;Mijung Jang;Bohyoung Kim;Soo Hyun Lee;Hye Shin Ahn
    • Korean Journal of Radiology
    • /
    • v.21 no.4
    • /
    • pp.431-441
    • /
    • 2020
  • Objective: To compare the diagnostic performance and interobserver variability of strain ratio obtained from one or two regions of interest (ROI) on breast elastography. Materials and Methods: From April to May 2016, 140 breast masses in 140 patients who underwent conventional ultrasonography (US) with strain elastography followed by US-guided biopsy were evaluated. Three experienced breast radiologists reviewed recorded US and elastography images, measured strain ratios, and categorized them according to the American College of Radiology breast imaging reporting and data system lexicon. Strain ratio was obtained using the 1-ROI method (one ROI drawn on the target mass), and the 2-ROI method (one ROI in the target mass and another in reference fat tissue). The diagnostic performance of the three radiologists among datasets and optimal cut-off values for strain ratios were evaluated. Interobserver variability of strain ratio for each ROI method was assessed using intraclass correlation coefficient values, Bland-Altman plots, and coefficients of variation. Results: Compared to US alone, US combined with the strain ratio measured using either ROI method significantly improved specificity, positive predictive value, accuracy, and area under the receiver operating characteristic curve (AUC) (all p values < 0.05). Strain ratio obtained using the 1-ROI method showed higher interobserver agreement between the three radiologists without a significant difference in AUC for differentiating breast cancer when the optimal strain ratio cut-off value was used, compared with the 2-ROI method (AUC: 0.788 vs. 0.783, 0.693 vs. 0.715, and 0.691 vs. 0.686, respectively, all p values > 0.05). Conclusion: Strain ratios obtained using the 1-ROI method showed higher interobserver agreement without a significant difference in AUC, compared to those obtained using the 2-ROI method. 
Considering that the 1-ROI method can reduce the operator's effort, it could play an important role in improving the diagnostic performance of breast US by enabling consistent management of breast lesions.

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.6
    • /
    • pp.284-290
    • /
    • 2024
  • Speech emotion recognition (SER) is a technique that analyzes a speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has increased, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing research has attained impressive results by using acted speech recorded by skilled actors in controlled environments for various scenarios. In particular, there is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expressions than spontaneous speech. For this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to conduct emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using VGG (Visual Geometry Group) networks after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, consisting of 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. As a result, we achieved average accuracies of 83.5% for adults and 73.0% for young people using a time-frequency 2-dimensional spectrogram. In conclusion, our findings demonstrate that the suggested framework outperforms current state-of-the-art techniques for spontaneous speech and shows promising performance despite the difficulty of quantifying spontaneous emotional expression.
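The 1-D-signal-to-2-D-spectrogram conversion that feeds the VGG network can be sketched in plain NumPy; the frame length, hop size, and dB scaling below are illustrative choices, not the paper's settings:

```python
import numpy as np

def spectrogram(signal, n_fft=512, hop=128):
    """Magnitude spectrogram in dB: frame the 1-D signal with a Hann
    window, FFT each frame, and stack frames into a 2-D image
    of shape (freq_bins, time_frames)."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop: i * hop + n_fft] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1)).T
    return 20 * np.log10(spec + 1e-10)   # dB scale for the 2-D "image"
```

The resulting array can be resized to the fixed input resolution a VGG-style CNN expects and treated as a single-channel image.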

Fully Automatic Coronary Calcium Score Software Empowered by Artificial Intelligence Technology: Validation Study Using Three CT Cohorts

  • June-Goo Lee;HeeSoo Kim;Heejun Kang;Hyun Jung Koo;Joon-Won Kang;Young-Hak Kim;Dong Hyun Yang
    • Korean Journal of Radiology
    • /
    • v.22 no.11
    • /
    • pp.1764-1776
    • /
    • 2021
  • Objective: This study aimed to validate a deep learning-based fully automatic calcium scoring (coronary artery calcium [CAC]_auto) system using previously published cardiac computed tomography (CT) cohort data with the manually segmented coronary calcium scoring (CAC_hand) system as the reference standard. Materials and Methods: We developed the CAC_auto system using 100 co-registered, non-enhanced and contrast-enhanced CT scans. For the validation of the CAC_auto system, three previously published CT cohorts (n = 2985) were chosen to represent different clinical scenarios (i.e., 2647 asymptomatic, 220 symptomatic, 118 valve disease) and four CT models. The performance of the CAC_auto system in detecting coronary calcium was determined. The reliability of the system in measuring the Agatston score as compared with CAC_hand was also evaluated per vessel and per patient using intraclass correlation coefficients (ICCs) and Bland-Altman analysis. The agreement between CAC_auto and CAC_hand based on the cardiovascular risk stratification categories (Agatston score: 0, 1-10, 11-100, 101-400, > 400) was evaluated. Results: In 2985 patients, 6218 coronary calcium lesions were identified using CAC_hand. The per-lesion sensitivity and false-positive rate of the CAC_auto system in detecting coronary calcium were 93.3% (5800 of 6218) and 0.11 false-positive lesions per patient, respectively. The CAC_auto system, in measuring the Agatston score, yielded ICCs of 0.99 for all the vessels (left main 0.91, left anterior descending 0.99, left circumflex 0.96, right coronary 0.99). The limits of agreement between CAC_auto and CAC_hand were 1.6 ± 52.2. The linearly weighted kappa value for the Agatston score categorization was 0.94. The main causes of false-positive results were image noise (29.1%, 97/333 lesions), aortic wall calcification (25.5%, 85/333 lesions), and pericardial calcification (24.3%, 81/333 lesions). 
Conclusion: The atlas-based CAC_auto system empowered by deep learning provided accurate calcium score measurement and risk category classification as compared with the manual method, which could potentially streamline CAC imaging workflows.
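For reference, the Agatston rule that both CAC_hand and CAC_auto compute weights each calcified area by its peak attenuation. The sketch below is a simplification (it treats all above-threshold pixels in a slice as one lesion, whereas the real score segments per-lesion connected components); thresholds and weights follow the standard Agatston definition:

```python
import numpy as np

def density_factor(max_hu):
    """Standard Agatston density weight from the lesion's peak HU."""
    if max_hu >= 400: return 4
    if max_hu >= 300: return 3
    if max_hu >= 200: return 2
    if max_hu >= 130: return 1
    return 0

def agatston_score(slices, pixel_area_mm2, threshold=130, min_area_mm2=1.0):
    """Sum over slices of lesion area (mm^2) x density weight of peak HU."""
    total = 0.0
    for hu in slices:                      # hu: 2-D array of HU values
        mask = hu >= threshold
        if not mask.any():
            continue
        area = mask.sum() * pixel_area_mm2
        if area < min_area_mm2:            # ignore sub-millimetre specks
            continue
        total += area * density_factor(hu[mask].max())
    return total
```

The risk categories in the abstract (0, 1-10, 11-100, 101-400, > 400) are then simple bins over this total.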

Development and Validation of a Deep Learning System for Segmentation of Abdominal Muscle and Fat on Computed Tomography

  • Hyo Jung Park;Yongbin Shin;Jisuk Park;Hyosang Kim;In Seob Lee;Dong-Woo Seo;Jimi Huh;Tae Young Lee;TaeYong Park;Jeongjin Lee;Kyung Won Kim
    • Korean Journal of Radiology
    • /
    • v.21 no.1
    • /
    • pp.88-100
    • /
    • 2020
  • Objective: We aimed to develop and validate a deep learning system for fully automated segmentation of abdominal muscle and fat areas on computed tomography (CT) images. Materials and Methods: A fully convolutional network-based segmentation system was developed using a training dataset of 883 CT scans from 467 subjects. Axial CT images obtained at the inferior endplate level of the 3rd lumbar vertebra were used for the analysis. Manually drawn segmentation maps of the skeletal muscle, visceral fat, and subcutaneous fat were created to serve as ground truth data. The performance of the fully convolutional network-based segmentation system was evaluated using the Dice similarity coefficient and cross-sectional area error, for both a separate internal validation dataset (426 CT scans from 308 subjects) and an external validation dataset (171 CT scans from 171 subjects from two outside hospitals). Results: The mean Dice similarity coefficients for muscle, subcutaneous fat, and visceral fat were high for both the internal (0.96, 0.97, and 0.97, respectively) and external (0.97, 0.97, and 0.97, respectively) validation datasets, while the mean cross-sectional area errors for muscle, subcutaneous fat, and visceral fat were low for both internal (2.1%, 3.8%, and 1.8%, respectively) and external (2.7%, 4.6%, and 2.3%, respectively) validation datasets. Conclusion: The fully convolutional network-based segmentation system exhibited high performance and accuracy in the automatic segmentation of abdominal muscle and fat on CT images.
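The two evaluation metrics used above, the Dice similarity coefficient and the relative cross-sectional area error, can be sketched for binary masks as follows (a minimal NumPy version with illustrative helper names):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks; 1.0 is perfect."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def area_error(pred, gt):
    """Relative cross-sectional area error (%) vs. ground truth."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    return abs(int(pred.sum()) - int(gt.sum())) / gt.sum() * 100.0
```

Dice rewards spatial overlap, while the area error only compares totals, which is why the study reports both.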

PASTELS project - overall progress of the project on experimental and numerical activities on passive safety systems

  • Michael Montout;Christophe Herer;Joonas Telkka
    • Nuclear Engineering and Technology
    • /
    • v.56 no.3
    • /
    • pp.803-811
    • /
    • 2024
  • Nuclear accidents such as Fukushima Daiichi have highlighted the potential of passive safety systems to replace or complement active safety systems as part of the overall prevention and/or mitigation strategies. In addition, passive systems are key features of Small Modular Reactors (SMRs), for which they are becoming almost unavoidable and are part of the basic design of many reactors available in today's nuclear market. Nevertheless, their potential to significantly increase the safety of nuclear power plants still needs to be strengthened, in particular the ability of computer codes to determine their performance and reliability in industrial applications and support the safety demonstration. The PASTELS project (September 2020-February 2024), funded by the European Commission "Euratom H2020" programme, is devoted to the study of passive systems relying on natural circulation. The project focuses on two types, namely the SAfety COndenser (SACO) for the evacuation of the core residual power and the Containment Wall Condenser (CWC) for the reduction of heat and pressure in the containment vessel in case of accident. A specific design for each of these systems is being investigated in the project. Firstly, a straight vertical pool type of SACO has been implemented on the Framatome's PKL loop at Erlangen. It represents a tube bundle type heat exchanger that transfers heat from the secondary circuit to the water pool in which it is immersed by condensing the vapour generated in the steam generator. Secondly, the project relies on the CWC installed on the PASI test loop at LUT University in Finland. This facility reproduces the thermal-hydraulic behaviour of a Passive Containment Cooling System (PCCS) mainly composed of a CWC, a heat exchanger in the containment vessel connected to a water tank at atmospheric pressure outside the vessel which represents the ultimate heat sink. Several activities are carried out within the framework of the project. 
Different tests are conducted on these integral test facilities to produce new and relevant experimental data, allowing better characterization of the physical behaviour and performance of these systems under various thermal-hydraulic conditions. These test programmes are simulated by different codes acting at different scales, mainly system and CFD codes. New "system/CFD" coupling approaches are also considered, to evaluate their potential to benefit both from the accuracy of CFD in regions where local 3D effects are dominant and from system codes, whose computational speed, robustness, and general level of physical validation are particularly appreciated in industrial studies. In parallel, the project includes the study of single- and two-phase natural circulation loops through a bibliographical study and simulations of the PERSEO and HERO-2 experimental facilities. After a synthetic presentation of the project and its objectives, this article provides the reader with findings related to the physical analysis of the test results obtained at the PKL and PASI installations, as well as an overall evaluation of the capability of the different numerical tools to simulate passive systems.