Title/Summary/Keyword: Performance accuracy

Fully Automatic Coronary Calcium Score Software Empowered by Artificial Intelligence Technology: Validation Study Using Three CT Cohorts

  • June-Goo Lee;HeeSoo Kim;Heejun Kang;Hyun Jung Koo;Joon-Won Kang;Young-Hak Kim;Dong Hyun Yang
    • Korean Journal of Radiology / v.22 no.11 / pp.1764-1776 / 2021
  • Objective: This study aimed to validate a deep learning-based fully automatic coronary artery calcium scoring system (CAC_auto), using previously published cardiac computed tomography (CT) cohort data with manually segmented coronary calcium scoring (CAC_hand) as the reference standard. Materials and Methods: We developed the CAC_auto system using 100 co-registered, non-enhanced and contrast-enhanced CT scans. For validation, three previously published CT cohorts (n = 2985) were chosen to represent different clinical scenarios (2647 asymptomatic, 220 symptomatic, and 118 valve disease patients) and four CT models. The performance of the CAC_auto system in detecting coronary calcium was determined. The reliability of the system in measuring the Agatston score, compared with CAC_hand, was also evaluated per vessel and per patient using intraclass correlation coefficients (ICCs) and Bland-Altman analysis. Agreement between CAC_auto and CAC_hand on the cardiovascular risk stratification categories (Agatston score: 0, 1-10, 11-100, 101-400, > 400) was evaluated. Results: In 2985 patients, 6218 coronary calcium lesions were identified using CAC_hand. The per-lesion sensitivity and false-positive rate of the CAC_auto system in detecting coronary calcium were 93.3% (5800 of 6218) and 0.11 false-positive lesions per patient, respectively. In measuring the Agatston score, the CAC_auto system yielded an ICC of 0.99 across all vessels (left main 0.91, left anterior descending 0.99, left circumflex 0.96, right coronary 0.99). The Bland-Altman bias and limits of agreement between CAC_auto and CAC_hand were 1.6 ± 52.2. The linearly weighted kappa value for Agatston score categorization was 0.94. The main causes of false-positive results were image noise (29.1%, 97/333 lesions), aortic wall calcification (25.5%, 85/333 lesions), and pericardial calcification (24.3%, 81/333 lesions). Conclusion: The atlas-based CAC_auto system, empowered by deep learning, provided calcium score measurements and risk category classifications as accurate as the manual method, which could potentially streamline CAC imaging workflows.
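The category-agreement metric reported above (linearly weighted kappa over the five Agatston risk strata) can be reproduced with scikit-learn's `cohen_kappa_score`. A minimal sketch follows; the paired scores here are hypothetical, not the study's data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Agatston risk category cut points used in the study: 0, 1-10, 11-100, 101-400, >400
BINS = [0, 1, 11, 101, 401]

def categorize(scores):
    """Map raw Agatston scores to the five risk categories (0..4)."""
    return np.digitize(scores, BINS) - 1

# Hypothetical paired scores: manual (CAC_hand) vs. automatic (CAC_auto)
hand = np.array([0, 5, 50, 250, 800, 0, 120, 15])
auto = np.array([0, 8, 45, 260, 780, 2, 95, 12])

kappa = cohen_kappa_score(categorize(hand), categorize(auto), weights="linear")
print(f"linearly weighted kappa = {kappa:.3f}")
```

Linear weighting penalizes a one-category mismatch (e.g. 0 vs. 1-10) less than a distant one, which is why it suits ordinal risk strata.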

A Three-Dimensional Deep Convolutional Neural Network for Automatic Segmentation and Diameter Measurement of Type B Aortic Dissection

  • Yitong Yu;Yang Gao;Jianyong Wei;Fangzhou Liao;Qianjiang Xiao;Jie Zhang;Weihua Yin;Bin Lu
    • Korean Journal of Radiology / v.22 no.2 / pp.168-178 / 2021
  • Objective: To provide an automatic method for segmentation and diameter measurement of type B aortic dissection (TBAD). Materials and Methods: Aortic computed tomography angiographic images from 139 patients with TBAD were consecutively collected. We implemented a deep learning method based on a three-dimensional (3D) deep convolutional neural network (CNN), which performs automatic segmentation and measurement of the entire aorta (EA), true lumen (TL), and false lumen (FL). Accuracy, stability, and measurement time were compared between the deep learning and manual methods. The intra- and inter-observer reproducibility of the manual method was also evaluated. Results: The mean Dice coefficient scores were 0.958, 0.961, and 0.932 for EA, TL, and FL, respectively. There was a linear relationship between the reference standard and measurements by the manual and deep learning methods (r = 0.964 and 0.991, respectively). The average measurement error of the deep learning method was less than that of the manual method (EA, 1.64% vs. 4.13%; TL, 2.46% vs. 11.67%; FL, 2.50% vs. 8.02%). Bland-Altman plots revealed that the deviations of the diameters between the deep learning method and the reference standard were -0.042 mm (-3.412 to 3.330 mm), -0.376 mm (-3.328 to 2.577 mm), and 0.026 mm (-3.040 to 3.092 mm) for EA, TL, and FL, respectively. For the manual method, the corresponding deviations were -0.166 mm (-1.419 to 1.086 mm), -0.050 mm (-0.970 to 1.070 mm), and -0.085 mm (-1.010 to 0.084 mm). Intra- and inter-observer differences were found in measurements with the manual method, but not with the deep learning method. The measurement time with the deep learning method was markedly shorter than with the manual method (21.7 ± 1.1 vs. 82.5 ± 16.1 minutes, p < 0.001). Conclusion: Segmentation and diameter measurement of TBAD based on the 3D deep CNN was accurate, stable, and efficient. This method is promising for evaluating aortic morphology automatically and alleviating the workload of radiologists in the near future.
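The Dice coefficient used to score the segmentations above has a short closed form: twice the overlap divided by the total voxel count of both masks. A minimal sketch on toy 3D volumes (the shapes and values are illustrative, not the study's data):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary 3D masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy 4x4x4 volumes standing in for lumen segmentations
truth = np.zeros((4, 4, 4), dtype=np.uint8)
truth[1:3, 1:3, 1:3] = 1           # 8 foreground voxels
pred = truth.copy()
pred[1, 1, 1] = 0                  # one missed voxel -> Dice = 14/15 ~ 0.933
print(f"Dice = {dice_coefficient(pred, truth):.3f}")
```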

Building Dataset of Sensor-only Facilities for Autonomous Cooperative Driving

  • Hyung Lee;Chulwoo Park;Handong Lee;Junhyuk Lee
    • Journal of the Korea Society of Computer and Information / v.29 no.1 / pp.21-30 / 2024
  • In this paper, we propose a method to build a sample dataset of the features of eight sensor-only facilities built as infrastructure for autonomous cooperative driving. Features are extracted from point cloud data acquired by LiDAR and assembled into a sample dataset for recognizing the facilities. To build the dataset, eight sensor-only facilities with high-brightness reflector sheets and a sensor acquisition system were developed. To extract the features of facilities located within a certain measurement distance from the acquired point cloud data, the DBSCAN method was first applied to the points and a modified Otsu method to the reflected intensity; a cylindrical projection was then applied to the extracted points. The 3D point coordinates, the projected 2D coordinates, and the reflection intensity were set as the features of each facility, and the dataset was built along with labels. To check the effectiveness of the facility dataset built from LiDAR data, a common CNN model was trained and tested, showing an accuracy of about 90% or more and confirming the feasibility of facility recognition. Through continued experiments, we will improve the feature extraction algorithm and its performance, and develop a dedicated model for recognizing sensor-only facilities for autonomous cooperative driving.
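The clustering-then-projection pipeline described above can be sketched with scikit-learn's DBSCAN followed by a simple cylindrical unwrap of each cluster. The point coordinates, `eps`, and `min_samples` values below are hypothetical stand-ins for the paper's LiDAR data and tuned parameters:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical LiDAR returns: two tight point clusters (reflector facilities)
# plus scattered background points; columns are x, y, z in meters.
rng = np.random.default_rng(0)
facility_a = rng.normal([5.0, 0.0, 1.0], 0.05, size=(30, 3))
facility_b = rng.normal([0.0, 8.0, 1.2], 0.05, size=(30, 3))
noise = rng.uniform(-15, 15, size=(10, 3))
points = np.vstack([facility_a, facility_b, noise])

# Step 1: DBSCAN groups dense returns into facility candidates (-1 = noise)
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)

def cylindrical_projection(pts):
    """Unwrap a 3D cluster onto a cylinder: (azimuth angle, height)."""
    centered = pts - pts.mean(axis=0)
    theta = np.arctan2(centered[:, 1], centered[:, 0])
    return np.column_stack([theta, pts[:, 2]])

# Step 2: 2D projected coordinates become per-facility features
features = {c: cylindrical_projection(points[labels == c]) for c in range(n_clusters)}
print(f"{n_clusters} facility candidates found")
```

A real pipeline would additionally threshold reflected intensity (the modified Otsu step) before clustering; intensity is omitted here to keep the sketch short.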

Interface Application of a Virtual Assistant Agent in an Immersive Virtual Environment (몰입형 가상환경에서 가상 보조 에이전트의 인터페이스 응용)

  • Giri Na;Jinmo Kim
    • Journal of the Korea Computer Graphics Society / v.30 no.1 / pp.1-10 / 2024
  • In immersive virtual environments, including mixed reality (MR) and virtual reality (VR), avatars and agents (virtual humans) are studied and applied in various ways as factors that increase users' social presence. Recently, studies have applied generative AI as an agent to improve user learning or to support collaborative environments in immersive settings. This study proposes a novel method for applying a virtual assistant agent (VAA) interface using OpenAI's ChatGPT in immersive virtual environments, including VR and MR. The proposed method consists of an information agent that responds to user queries and a control agent that controls virtual objects and environments according to user needs. We set up a development environment integrating the Unity 3D engine, OpenAI, and the packages and development tools needed for user participation in MR and VR. We also defined a workflow that leads from voice input to a question query and answer query, or to a control request query and control script. Based on this, MR and VR experience environments were produced, and experiments confirming VAA performance measured the response time of the information agent and the accuracy of the control agent. We confirmed that the proposed VAA interface can increase efficiency in simple, repetitive tasks while remaining user-friendly. We present a novel direction for interface applications in immersive virtual environments through the proposed VAA and discuss the problems and limitations discovered so far.
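The two-agent routing step in the workflow above can be sketched as a dispatcher that classifies a transcribed utterance and hands it to either agent. Everything here is a hypothetical stand-in: the keyword heuristic replaces the ChatGPT intent call, and the agent bodies are placeholders for the paper's answer-query and Unity control-script generation:

```python
# Hypothetical verb list standing in for LLM-based intent classification
CONTROL_VERBS = ("move", "rotate", "scale", "hide", "show", "change")

def classify_intent(utterance: str) -> str:
    """Decide whether an utterance is a question (information agent)
    or a command (control agent)."""
    lowered = utterance.lower()
    return "control" if any(v in lowered for v in CONTROL_VERBS) else "information"

def information_agent(utterance: str) -> str:
    # Placeholder for building an answer query against the LLM
    return f"[answer query] {utterance}"

def control_agent(utterance: str) -> str:
    # Placeholder for turning a control request into a control script
    # executed on the virtual scene
    return f"[control script] {utterance}"

def dispatch(utterance: str) -> str:
    agents = {"information": information_agent, "control": control_agent}
    return agents[classify_intent(utterance)](utterance)

print(dispatch("What is this exhibit about?"))
print(dispatch("Rotate the model 90 degrees"))
```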

Robust Speech Recognition Algorithm of Voice Activated Powered Wheelchair for Severely Disabled Person (중증 장애우용 음성구동 휠체어를 위한 강인한 음성인식 알고리즘)

  • Suk, Soo-Young;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / v.26 no.6 / pp.250-258 / 2007
  • Current speech recognition technology has achieved high performance with the development of hardware devices; however, it is insufficient for applications where high reliability is required, such as voice control of powered wheelchairs for disabled persons. For a system that aims to operate a powered wheelchair safely by voice in real environments, non-voice input such as the user's coughing, breathing, and spark-like mechanical noise should be rejected, and the system needs to recognize speech commands affected by disability, which may involve atypical pronunciation speed and frequency. In this paper, we propose a non-voice rejection method that performs voice/non-voice classification using both YIN-based fundamental frequency (F0) extraction and a reliability measure in preprocessing. We adopted a multi-template dictionary and acoustic-model-based speaker adaptation to cope with the pronunciation variation of inarticulately uttered speech. In recognition tests conducted with data collected in a real environment, the proposed YIN-based fundamental frequency extraction showed a recall-precision rate of 95.1%, better than the 62% of the cepstrum-based method. A recognition test of the new system with the multi-template dictionary and MAP adaptation also showed much higher accuracy, 99.5%, versus 78.6% for the baseline system.
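The YIN-based F0 extraction used for voice/non-voice classification rests on a difference function and its cumulative mean normalized form. A simplified sketch (no parabolic interpolation; the frame is a synthetic 200 Hz tone standing in for voiced speech, and the threshold is an assumed value):

```python
import numpy as np

def yin_f0(frame, sr, fmin=60.0, fmax=400.0, threshold=0.15):
    """Simplified YIN: difference function, cumulative mean normalized
    difference (CMND), first-dip search; returns (f0, aperiodicity)."""
    tau_min, tau_max = int(sr / fmax), int(sr / fmin)
    n = len(frame)
    d = np.array([np.sum((frame[:n - tau] - frame[tau:]) ** 2)
                  for tau in range(tau_max + 1)])
    # CMND: d'(tau) = d(tau) * tau / sum_{j=1..tau} d(j), with d'(0) = 1
    cmnd = np.ones_like(d)
    cumsum = np.cumsum(d[1:])
    cmnd[1:] = d[1:] * np.arange(1, len(d)) / np.maximum(cumsum, 1e-12)
    # First tau under the threshold, then walk down to the local minimum;
    # fall back to the global minimum if nothing dips below the threshold
    search = cmnd[tau_min:tau_max + 1]
    below = np.where(search < threshold)[0]
    if below.size:
        tau = tau_min + below[0]
        while tau + 1 <= tau_max and cmnd[tau + 1] < cmnd[tau]:
            tau += 1
    else:
        tau = tau_min + int(np.argmin(search))
    return sr / tau, cmnd[tau]

sr = 8000
t = np.arange(int(0.05 * sr)) / sr            # 50 ms frame
voiced = np.sin(2 * np.pi * 200.0 * t)        # periodic tone -> low CMND dip
f0, aperiodicity = yin_f0(voiced, sr)
print(f"estimated F0 = {f0:.1f} Hz, aperiodicity = {aperiodicity:.3f}")
```

A high CMND minimum (no clear dip) indicates aperiodic input such as coughing or mechanical noise, which is the basis for rejecting non-voice commands.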

Diagnostic value of serum procalcitonin and C-reactive protein in discriminating between bacterial and nonbacterial colitis: a retrospective study

  • Jae Yong Lee;So Yeon Lee;Yoo Jin Lee;Jin Wook Lee;Jeong Seok Kim;Ju Yup Lee;Byoung Kuk Jang;Woo Jin Chung;Kwang Bum Cho;Jae Seok Hwang
    • Journal of Yeungnam Medical Science / v.40 no.4 / pp.388-393 / 2023
  • Background: Differentiating between bacterial and nonbacterial colitis remains a challenge. We aimed to evaluate the value of serum procalcitonin (PCT) and C-reactive protein (CRP) in differentiating between bacterial and nonbacterial colitis. Methods: Adult patients with three or more episodes of watery diarrhea and colitis symptoms within 14 days of a hospital visit were eligible for this study. The patients' stool pathogen polymerase chain reaction (PCR) test results, serum PCT levels, and serum CRP levels were analyzed retrospectively. Patients were divided into bacterial and nonbacterial colitis groups according to their PCR results, and the laboratory data were compared between the two groups. The area under the receiver operating characteristic curve (AUC) was used to evaluate diagnostic accuracy. Results: In total, 636 patients were included; 186 in the bacterial colitis group and 450 in the nonbacterial colitis group. In the bacterial colitis group, Clostridium perfringens was the most common pathogen (n=70), followed by Clostridium difficile toxin B (n=60). The AUC for PCT and CRP was 0.557 and 0.567, respectively, indicating poor discrimination. The sensitivity and specificity for diagnosing bacterial colitis were 54.8% and 52.6% for PCT, and 52.2% and 54.2% for CRP, respectively. Combining PCT and CRP measurements did not increase the discrimination performance (AUC, 0.522; 95% confidence interval, 0.474-0.571). Conclusion: Neither PCT nor CRP helped discriminate bacterial colitis from nonbacterial colitis.
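The AUC comparison above, including the combined-marker model, can be reproduced with scikit-learn. A minimal sketch on synthetic data: the marker distributions below are fabricated with heavy between-group overlap to mimic the weak discrimination reported (AUC near 0.55), not drawn from the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 200
# Hypothetical cohort: 1 = bacterial colitis, 0 = nonbacterial
y = rng.integers(0, 2, size=n)
# Markers with a tiny group effect (0.1 SD) -> near-chance separability
pct = rng.normal(0.1 * y, 1.0)
crp = rng.normal(0.1 * y, 1.0)

auc_pct = roc_auc_score(y, pct)
auc_crp = roc_auc_score(y, crp)

# Combine the two markers with logistic regression, as one might to test
# whether a joint model discriminates better
X = np.column_stack([pct, crp])
combined = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
auc_both = roc_auc_score(y, combined)
print(f"AUC PCT={auc_pct:.3f}  CRP={auc_crp:.3f}  combined={auc_both:.3f}")
```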

Estimation of Frost Occurrence using Multi-Input Deep Learning (다중 입력 딥러닝을 이용한 서리 발생 추정)

  • Yongseok Kim;Jina Hur;Eung-Sup Kim;Kyo-Moon Shim;Sera Jo;Min-Gu Kang
    • Korean Journal of Agricultural and Forest Meteorology / v.26 no.1 / pp.53-62 / 2024
  • In this study, we built models to estimate frost occurrence in South Korea using single-input and multi-input deep learning. Meteorological factors used as learning data included minimum temperature, wind speed, relative humidity, cloud cover, and precipitation. Statistical analysis of each factor on days with and without frost showed significant differences. When evaluating the single-input and multi-input frost occurrence models, the model using both GRU and MLP achieved the highest accuracy, 0.8774 on average. These results show that a frost occurrence model adopting multi-input deep learning improves performance over using MLP, LSTM, or GRU alone.
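The two-branch GRU+MLP architecture above can be sketched as a NumPy forward pass: a GRU branch summarizes a weather sequence, an MLP branch encodes static features, and a fusion head emits a frost probability. All dimensions, inputs, and (untrained) weights below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def gru_forward(x_seq, n_hidden, params):
    """Run a single-layer GRU over a (T, n_in) sequence; return last state."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    h = np.zeros(n_hidden)
    for x in x_seq:
        z = sigmoid(Wz @ x + Uz @ h + bz)              # update gate
        r = sigmoid(Wr @ x + Ur @ h + br)              # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)  # candidate state
        h = (1 - z) * h + z * h_tilde
    return h

T, n_in, n_hidden, n_static = 24, 5, 8, 3
init = lambda *shape: rng.normal(0, 0.1, shape)
gru_params = (init(n_hidden, n_in), init(n_hidden, n_hidden), init(n_hidden),
              init(n_hidden, n_in), init(n_hidden, n_hidden), init(n_hidden),
              init(n_hidden, n_in), init(n_hidden, n_hidden), init(n_hidden))

# Branch 1: an hourly sequence of the 5 meteorological factors through the GRU
weather_seq = rng.normal(size=(T, n_in))
h_seq = gru_forward(weather_seq, n_hidden, gru_params)

# Branch 2: static daily features through a one-layer MLP
static = rng.normal(size=n_static)
W1, b1 = init(8, n_static), init(8)
h_static = np.tanh(W1 @ static + b1)

# Fusion head: concatenate both branches, sigmoid for frost probability
W2, b2 = init(1, n_hidden + 8), init(1)
frost_prob = sigmoid(W2 @ np.concatenate([h_seq, h_static]) + b2)[0]
print(f"P(frost) = {frost_prob:.3f}")
```

In practice the weights would be trained jointly end to end; the sketch only shows how the two input paths are fused.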

Estimation of fruit number of apple tree based on YOLOv5 and regression model (YOLOv5 및 다항 회귀 모델을 활용한 사과나무의 착과량 예측 방법)

  • Hee-Jin Gwak;Yunju Jeong;Ik-Jo Chun;Cheol-Hee Lee
    • Journal of IKEEE / v.28 no.2 / pp.150-157 / 2024
  • In this paper, we propose a novel algorithm for predicting the number of apples on an apple tree using a deep learning-based object detection model and a polynomial regression model. Measuring the number of apples on an apple tree can be used to predict apple yield and to assess losses for determining agricultural disaster insurance payouts. To measure apple fruit load, we photographed the front and back sides of apple trees. We manually labeled the apples in the captured images to construct a dataset, which was then used to train a one-stage object detection CNN model. However, when apples on an apple tree are obscured by leaves, branches, or other parts of the tree, they may not be captured in images. Consequently, it becomes difficult for image recognition-based deep learning models to detect or infer the presence of these apples. To address this issue, we propose a two-stage inference process. In the first stage, we utilize an image-based deep learning model to count the number of apples in photos taken from both sides of the apple tree. In the second stage, we conduct a polynomial regression analysis, using the total apple count from the deep learning model as the independent variable, and the actual number of apples manually counted during an on-site visit to the orchard as the dependent variable. The performance evaluation of the two-stage inference system proposed in this paper showed an average accuracy of 90.98% in counting the number of apples on each apple tree. Therefore, the proposed method can significantly reduce the time and cost associated with manually counting apples. Furthermore, this approach has the potential to be widely adopted as a new foundational technology for fruit load estimation in related fields using deep learning.
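The second-stage correction above — mapping the detector's photo-based count to the true on-tree count — reduces to fitting a polynomial with the detected count as the independent variable. A minimal sketch with `numpy.polyfit`; the calibration pairs below are fabricated to illustrate occlusion-driven undercounting and are not the paper's data:

```python
import numpy as np

# Hypothetical calibration data: apples counted in front+back photos by the
# detector (x) vs. apples counted by hand on site (y). Occlusion makes the
# detector undercount, and the gap grows on heavily loaded trees.
detected = np.array([40, 55, 62, 78, 90, 104, 118, 131, 150, 163])
actual = np.array([48, 70, 81, 108, 130, 158, 188, 217, 262, 296])

# Stage 2: fit a degree-2 polynomial mapping detected -> actual counts
coeffs = np.polyfit(detected, actual, deg=2)
predict = np.poly1d(coeffs)

est = predict(100)
print(f"detector saw 100 apples -> estimated true count ~ {est:.0f}")
```

With the model fitted once per orchard or variety, only the photographs are needed at inference time.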

Detecting high-resolution usage status of individual parcel of land using object detecting deep learning technique (객체 탐지 딥러닝 기법을 활용한 필지별 조사 방안 연구)

  • Jeon, Jeong-Bae
    • Journal of Cadastre & Land InformatiX / v.54 no.1 / pp.19-32 / 2024
  • This study examined the feasibility of image-based surveys by detecting objects such as facilities and agricultural land with the YOLO algorithm on drone images and comparing the results with the legal land category. The YOLO algorithm detected buildings corresponding to 96.3% of the buildings in the existing digital map, and additionally detected 136 buildings not present in the digital map. A total of 297 plastic greenhouse objects were detected, but the detection rate was low for some greenhouses used for fruit trees, and agricultural land had the lowest detection rate. Because agricultural land covers a larger area and has more irregular shapes than buildings, inconsistency in the training data lowers the accuracy; segmentation-based detection, rather than box-shaped detection, is therefore likely to be more effective for agricultural fields. Comparing the detected objects with the legal land category showed that some buildings exist in agricultural and forest zones where siting buildings is difficult. Linking with administrative information seems necessary to determine whether these buildings are being used illegally. Nonetheless, at the current level it is possible to objectively determine the existence of buildings in zones where buildings are difficult to site.
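The comparison above — how many mapped buildings the detector recovers, and how many detections have no map counterpart — is typically done by IoU matching of bounding boxes. A minimal sketch; the footprints and the 0.5 match threshold are hypothetical:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical building footprints from the digital map vs. YOLO detections
map_buildings = [(0, 0, 10, 10), (20, 0, 30, 8), (50, 50, 60, 65)]
detections = [(1, 0, 10, 11), (21, 1, 31, 9), (80, 80, 90, 90)]

# Mapped buildings recovered by the detector, and detections absent from
# the map (candidate new or unregistered buildings)
matched = sum(any(iou(m, d) >= 0.5 for d in detections) for m in map_buildings)
extra = sum(all(iou(m, d) < 0.5 for m in map_buildings) for d in detections)
print(f"map coverage: {matched}/{len(map_buildings)}; detections not in map: {extra}")
```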

Leveraging LLMs for Corporate Data Analysis: Employee Turnover Prediction with ChatGPT (대형 언어 모델을 활용한 기업데이터 분석: ChatGPT를 활용한 직원 이직 예측)

  • Sungmin Kim;Jee Yong Chung
    • Knowledge Management Research / v.25 no.2 / pp.19-47 / 2024
  • An organization's ability to analyze and utilize data plays an important role in knowledge management and decision-making. This study investigates the potential application of large language models to corporate data analysis, examining their analytic capabilities in the field of human resources. Using the widely studied IBM HR dataset, the study reproduces machine learning-based employee turnover prediction analyses from previous research through ChatGPT and compares its predictive performance. Unlike past research methods that required advanced programming skills, ChatGPT-based machine learning analysis, conducted through the analyst's natural language requests, is much easier and faster. Moreover, its prediction accuracy was competitive with previous studies. This suggests that large language models could serve as effective and practical alternatives in corporate data analysis, which has traditionally demanded advanced programming capabilities. This approach is also expected to contribute to the popularization of data analysis and the spread of data-driven decision-making (DDDM). The prompts used during the analysis and the program code generated by ChatGPT are included in the appendix for verification, providing a foundation for future data analysis research using large language models.
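The kind of turnover-prediction code such a natural-language request produces is typically a short scikit-learn pipeline. A minimal sketch; the features, coefficients, and data below are synthetic stand-ins loosely inspired by IBM HR-style columns, not the actual dataset or the study's generated code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic HR-style data: three illustrative features with an assumed
# relationship to attrition (overtime raises risk; satisfaction and
# tenure lower it)
rng = np.random.default_rng(7)
n = 500
overtime = rng.integers(0, 2, n)
satisfaction = rng.integers(1, 5, n)
years = rng.integers(0, 20, n)
logit = 1.5 * overtime - 0.8 * satisfaction - 0.1 * years + 0.5
attrition = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([overtime, satisfaction, years])
X_tr, X_te, y_tr, y_te = train_test_split(X, attrition, test_size=0.3,
                                          random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
print(f"test accuracy = {acc:.3f}")
```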