• Title/Summary/Keyword: Dataset Training


A Novel Two-Stage Training Method for Unbiased Scene Graph Generation via Distribution Alignment

  • Dongdong Jia;Meili Zhou;Wei WEI;Dong Wang;Zongwen Bai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.12 / pp.3383-3397 / 2023
  • Scene graphs serve as semantic abstractions of images and play a crucial role in enhancing visual comprehension and reasoning. However, the performance of Scene Graph Generation (SGG) is often compromised when working with biased data in real-world situations. While many existing systems handle feature extraction and classification in a single stage of learning, some employ class-balancing strategies such as re-weighting, data resampling, and transfer learning from head to tail classes. In this paper, we propose a novel approach that decouples the feature extraction and classification phases of the scene graph generation process. For feature extraction, we leverage a transformer-based architecture and design an adaptive calibration function specifically for predicate classification. This function enables us to dynamically adjust the classification scores for each predicate category. Additionally, we introduce a Distribution Alignment technique that effectively balances the class distribution after the feature extraction phase reaches a stable state, thereby facilitating the retraining of the classification head. Importantly, our Distribution Alignment strategy is model-independent and does not require additional supervision, making it applicable to a wide range of SGG models. Using the scene graph diagnostic toolkit on Visual Genome with several popular models, we achieved significant improvements over previous state-of-the-art methods. Compared to the TDE model, our model improved mR@100 by 70.5% for PredCls, 84.0% for SGCls, and 97.6% for SGDet tasks.
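
The abstract does not give the paper's exact calibration function, but a common post-hoc form of distribution alignment for long-tailed classification subtracts a scaled log of the class priors from the classifier logits so head predicates stop dominating. A minimal sketch (all counts and scores here are illustrative, not the paper's):

```python
import numpy as np

def align_logits(logits, class_counts, tau=1.0):
    """Distribution alignment: subtract tau * log(prior) from each predicate
    logit so that rare (tail) predicate classes are no longer suppressed."""
    priors = np.asarray(class_counts, dtype=float)
    priors = priors / priors.sum()
    return logits - tau * np.log(priors)

# Toy long-tailed setup: one head predicate and two increasingly rare ones.
class_counts = [9000, 900, 100]       # predicate frequencies in training data
logits = np.array([2.0, 1.9, 1.8])    # raw classification-head scores
aligned = align_logits(logits, class_counts, tau=1.0)
print(int(aligned.argmax()))  # 2 — the tail class now wins
```

Because the adjustment only rescores the classification head, it fits the paper's two-stage idea of retraining or recalibrating the head after feature extraction has stabilized.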

Diagnostic Performance of a New Convolutional Neural Network Algorithm for Detecting Developmental Dysplasia of the Hip on Anteroposterior Radiographs

  • Hyoung Suk Park;Kiwan Jeon;Yeon Jin Cho;Se Woo Kim;Seul Bi Lee;Gayoung Choi;Seunghyun Lee;Young Hun Choi;Jung-Eun Cheon;Woo Sun Kim;Young Jin Ryu;Jae-Yeon Hwang
    • Korean Journal of Radiology / v.22 no.4 / pp.612-623 / 2021
  • Objective: To evaluate the diagnostic performance of a deep learning algorithm for the automated detection of developmental dysplasia of the hip (DDH) on anteroposterior (AP) radiographs. Materials and Methods: Of 2601 hip AP radiographs, 5076 cropped unilateral hip joint images were used to construct a dataset that was further divided into training (80%), validation (10%), and test (10%) sets. Three radiologists were asked to label the hip images as normal or DDH. To investigate the diagnostic performance of the deep learning algorithm, we calculated receiver operating characteristic (ROC) and precision-recall curve (PRC) plots, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), and compared them with the performance of radiologists with different levels of experience. Results: The area under the ROC plot generated by the deep learning algorithm and the radiologists was 0.988 and 0.919-0.988, respectively. The area under the PRC plot generated by the deep learning algorithm and the radiologists was 0.973 and 0.618-0.958, respectively. The sensitivity, specificity, PPV, and NPV of the proposed deep learning algorithm were 98.0%, 98.1%, 84.5%, and 99.8%, respectively. There was no significant difference between the diagnoses of DDH by the algorithm and by the radiologist with experience in pediatric radiology (p = 0.180). However, the proposed model showed higher sensitivity, specificity, and PPV compared to the radiologist without experience in pediatric radiology (p < 0.001). Conclusion: The proposed deep learning algorithm provided an accurate diagnosis of DDH on hip radiographs, comparable to the diagnosis by an experienced radiologist.
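
The four operating-point metrics reported above all follow from a 2x2 confusion matrix. A self-contained sketch (the counts below are made up to illustrate the formulas, not the study's data):

```python
def binary_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall on truly positive (DDH) hips
    specificity = tn / (tn + fp)   # recall on truly normal hips
    ppv = tp / (tp + fp)           # precision: how trustworthy a positive call is
    npv = tn / (tn + fn)           # how trustworthy a negative call is
    return sensitivity, specificity, ppv, npv

# Illustrative counts only:
sens, spec, ppv, npv = binary_metrics(tp=49, fn=1, fp=9, tn=441)
print(f"sens={sens:.3f} spec={spec:.3f} ppv={ppv:.3f} npv={npv:.3f}")
```

Note how a low disease prevalence drags PPV well below sensitivity, which is why the study reports all four values rather than sensitivity alone.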

Computer Vision-Based Car Accident Detection using YOLOv8 (YOLO v8을 활용한 컴퓨터 비전 기반 교통사고 탐지)

  • Marwa Chacha Andrea;Choong Kwon Lee;Yang Sok Kim;Mi Jin Noh;Sang Il Moon;Jae Ho Shin
    • Journal of Korea Society of Industrial Information Systems / v.29 no.1 / pp.91-105 / 2024
  • Car accidents occur as a result of collisions between vehicles, leading to both vehicle damage and personal and material losses. This study developed a vehicle accident detection model based on 2,550 image frames extracted from car accident videos uploaded to YouTube, captured by CCTV. To preprocess the data, bounding boxes were annotated using roboflow.com, and the dataset was augmented by flipping images at various angles. The You Only Look Once version 8 (YOLOv8) model was employed for training, achieving an average accuracy of 0.954 in accident detection. The proposed model holds practical significance by facilitating prompt alarm transmission in emergency situations. Furthermore, it contributes to the research on developing an effective and efficient mechanism for vehicle accident detection, which can be utilized on devices like smartphones. Future research aims to refine the detection capabilities by integrating additional data including sound.
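
When a dataset is augmented by flipping annotated frames, the bounding boxes must be remapped along with the pixels. A minimal sketch for a horizontal flip in the common (x1, y1, x2, y2) pixel convention (an assumption here, not a detail from the paper):

```python
def hflip_bbox(box, img_width):
    """Remap an (x1, y1, x2, y2) bounding box under a horizontal image flip."""
    x1, y1, x2, y2 = box
    # The old right edge becomes the new left edge, mirrored about the image width.
    return (img_width - x2, y1, img_width - x1, y2)

# A box near the left edge of a 640-px-wide frame moves to the right edge.
print(hflip_bbox((10, 20, 110, 220), img_width=640))  # (530, 20, 630, 220)
```

Annotation tools such as the one used here typically apply this transform automatically during augmentation export; the sketch shows what that bookkeeping amounts to.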

Quantitative risk analysis of industrial incidents occurring in trap boats (통발어선에서 발생하는 산업재해에 대한 정량적 위험성 분석)

  • Seung-Hyun LEE;Su-Hyung KIM;Kyung-Jin RYU;Yoo-Won LEE
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.60 no.2 / pp.161-169 / 2024
  • This study employs Bayesian network analysis to quantitatively evaluate the risk of incidents on trap boats, utilizing accident compensation approval data spanning 2018 to 2022. With a dataset comprising 1,635 incidents, the analysis reveals a mortality risk of approximately 0.011 across the entire trap boat fleet. The study identifies significant variations in incident risk contingent upon fishing area and fishing process. Specifically, incidents are approximately 1.22 times more likely to occur in coastal waters than in offshore waters, and the risk during fishing processes outweighs that during maintenance operations by a factor of approximately 23.20. Furthermore, a detailed examination of incident types reveals varying incidence rates: trip/slip incidents, for instance, are approximately 1.36 times more prevalent than bump/hit incidents, 1.58 times more than stuck incidents, and a substantial 5.17 times more than fall incidents. The study concludes by providing inferred mortality risks for 16 distinct scenarios incorporating fishing areas, processes, and incident types. This foundational data offers a tailored approach to risk mitigation, enabling proactive measures suited to specific circumstances and occurrence types in the trap boat industry.
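
Ratios like "1.22 times more likely in coastal waters" are relative risks: the incident rate in one group divided by the rate in another. A hedged sketch with made-up counts (the study's underlying exposure sizes are not given in the abstract):

```python
def relative_risk(cases_a, total_a, cases_b, total_b):
    """Ratio of the incident rate in group A to the incident rate in group B."""
    return (cases_a / total_a) / (cases_b / total_b)

# Illustrative counts only (not the study's data): coastal vs. offshore crews.
rr = relative_risk(cases_a=61, total_a=5000, cases_b=30, total_b=3000)
print(round(rr, 2))  # 1.22
```

A Bayesian network extends this idea by chaining such conditional rates across several factors (area, process, incident type) to infer risks for combined scenarios like the 16 reported above.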

Refractive-index Prediction for High-refractive-index Optical Glasses Based on the B2O3-La2O3-Ta2O5-SiO2 System Using Machine Learning

  • Seok Jin Hong;Jung Hee Lee;Devarajulu Gelija;Woon Jin Chung
    • Current Optics and Photonics / v.8 no.3 / pp.230-238 / 2024
  • The refractive index is a key material-design parameter, especially for high-refractive-index glasses, which are used for precision optics and devices. Increased demand for high-precision optical lenses produced by the glass-mold-press (GMP) process has spurred extensive studies of suitable glass materials. High-refractive-index glasses for GMP are mostly composed of B2O3, SiO2, and multiple heavy-metal oxides such as Ta2O5, Nb2O5, La2O3, and Gd2O3. However, because these glasses can contain up to 10 oxide components, it is hard to predict the refractive index from composition alone. In this study, the refractive index of optical glasses based on the B2O3-La2O3-Ta2O5-SiO2 system is predicted using machine learning (ML) and compared to experimental data. A dataset comprising up to 271 glasses with 10 components is collected and used for training. Various ML algorithms (linear-regression, Bayesian-ridge-regression, nearest-neighbor, and random-forest models) are trained on the data. Along with composition, the polarizability and density of the glasses are also considered as independent parameters for predicting the refractive index. After selecting the best-fitting model by R2 value, the trained model is examined against the experimentally obtained refractive indices of B2O3-La2O3-Ta2O5-SiO2 quaternary glasses.
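
The simplest of the compared models, linear regression on composition, can be sketched with ordinary least squares. Everything below is synthetic: the mole fractions and the per-oxide coefficients are invented to illustrate the fitting step, not taken from the paper's 271-glass dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic mole-fraction compositions of 4 oxides (each row sums to 1).
X = rng.dirichlet(np.ones(4), size=50)
true_w = np.array([1.45, 1.90, 2.10, 1.46])  # hypothetical per-oxide contributions
y = X @ true_w                               # idealized, noise-free refractive index

# Fit n ≈ X @ w by ordinary least squares, then predict a new composition.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
new_glass = np.array([0.25, 0.40, 0.20, 0.15])
pred = float(new_glass @ w)
print(pred)  # ≈ 1.76
```

On real data the additivity assumption breaks down, which is why the paper also tries Bayesian-ridge, nearest-neighbor, and random-forest models and adds polarizability and density as extra features.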

Methodology for Variable Optimization in Injection Molding Process (사출 성형 공정에서의 변수 최적화 방법론)

  • Jung, Young Jin;Kang, Tae Ho;Park, Jeong In;Cho, Joong Yeon;Hong, Ji Soo;Kang, Sung Woo
    • Journal of Korean Society for Quality Management / v.52 no.1 / pp.43-56 / 2024
  • Purpose: The injection molding process, crucial for plastic shaping, encounters difficulties in sustaining product quality when replacing injection machines. Variations in machine types and outputs between different production lines or factories increase the risk of quality deterioration. In response, the study aims to develop a system that optimally adjusts conditions during the replacement of injection machines linked to molds. Methods: Utilizing a dataset of 12 injection process variables and 52 corresponding sensor variables, a predictive model is crafted using Decision Tree, Random Forest, and XGBoost. Model evaluation is conducted using an 80% training data and a 20% test data split. The dependent variable, classified into five characteristics based on temperature and pressure, guides the prediction model. Bayesian optimization, integrated into the selected model, determines optimal values for process variables during the replacement of injection machines. The iterative convergence of sensor prediction values to the optimum range is visually confirmed, aligning them with the target range. Experimental results validate the proposed approach. Results: Post-experiment analysis indicates the superiority of the XGBoost model across all five characteristics, achieving a combined high performance of 0.81 and a Mean Absolute Error (MAE) of 0.77. The study introduces a method for optimizing initial conditions in the injection process during machine replacement, utilizing Bayesian optimization. This streamlined approach reduces both time and costs, thereby enhancing process efficiency. Conclusion: This research contributes practical insights to the optimization literature, offering valuable guidance for industries seeking streamlined and cost-effective methods for machine replacement in injection molding.
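
The optimization loop described above (predict sensor values from process settings, then search for settings that land the predictions in a target range) can be sketched with a toy surrogate and random search. The surrogate function, variable names, and ranges below are all illustrative; the paper uses trained tree models with Bayesian optimization, for which random search is only a simple stand-in:

```python
import random

def surrogate(temp, pressure):
    """Toy stand-in for the trained sensor-prediction model (e.g. XGBoost)."""
    return 0.6 * temp + 0.4 * pressure

TARGET_MID = 100.0  # midpoint of the desired sensor range (illustrative)

random.seed(42)
best, best_err = None, float("inf")
for _ in range(2000):  # random search in place of Bayesian optimization
    temp = random.uniform(50, 150)
    pressure = random.uniform(50, 150)
    err = abs(surrogate(temp, pressure) - TARGET_MID)
    if err < best_err:
        best, best_err = (temp, pressure), err

print(best, round(best_err, 3))  # candidate settings near the target
```

Bayesian optimization improves on this by fitting a probabilistic model of `err` over the settings space and sampling where improvement is most likely, so far fewer evaluations are needed when each trial is an expensive molding run.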

A Novel, Deep Learning-Based, Automatic Photometric Analysis Software for Breast Aesthetic Scoring

  • Joseph Kyu-hyung Park;Seungchul Baek;Chan Yeong Heo;Jae Hoon Jeong;Yujin Myung
    • Archives of Plastic Surgery / v.51 no.1 / pp.30-35 / 2024
  • Background Breast aesthetics evaluation often relies on subjective assessments, leading to the need for objective, automated tools. We developed the Seoul Breast Esthetic Scoring Tool (S-BEST), a photometric analysis software that utilizes a DenseNet-264 deep learning model to automatically evaluate breast landmarks and asymmetry indices. Methods S-BEST was trained on a dataset of frontal breast photographs annotated with 30 specific landmarks, divided into an 80-20 training-validation split. The software requires the distances of sternal notch to nipple or nipple-to-nipple as input and performs image preprocessing steps, including ratio correction and 8-bit normalization. Breast asymmetry indices and centimeter-based measurements are provided as the output. The accuracy of S-BEST was validated using a paired t-test and Bland-Altman plots, comparing its measurements to those obtained from physical examinations of 100 females diagnosed with breast cancer. Results S-BEST demonstrated high accuracy in automatic landmark localization, with most distances showing no statistically significant difference compared with physical measurements. However, the nipple to inframammary fold distance showed a significant bias, with a coefficient of determination ranging from 0.3787 to 0.4234 for the left and right sides, respectively. Conclusion S-BEST provides a fast, reliable, and automated approach for breast aesthetic evaluation based on 2D frontal photographs. While limited by its inability to capture volumetric attributes or multiple viewpoints, it serves as an accessible tool for both clinical and research applications.
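
An asymmetry index of the general kind S-BEST reports can be computed from paired landmark distances. The formula and coordinates below are illustrative assumptions, not the software's actual definition or calibration:

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) landmarks in image coordinates."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def asymmetry_index(left, right):
    """Relative left/right difference of a paired landmark distance (0 = symmetric)."""
    return abs(left - right) / max(left, right)

# Sternal notch and the two nipples in pixel coordinates (illustrative values).
sternal_notch = (200.0, 50.0)
left_nipple, right_nipple = (120.0, 210.0), (282.0, 208.0)
sn_left = dist(sternal_notch, left_nipple)
sn_right = dist(sternal_notch, right_nipple)
idx = asymmetry_index(sn_left, sn_right)
print(round(idx, 3))  # near 0 for an almost-symmetric pair
```

The ratio-correction step mentioned in the abstract (scaling pixels to centimeters from a known physical distance) would be applied before such distances are reported in clinical units.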

ML-based prediction method for estimating vortex-induced vibration amplitude of steel tubes in tubular transmission towers

  • Jiahong Li;Tao Wang;Zhengliang Li
    • Structural Engineering and Mechanics / v.90 no.1 / pp.27-40 / 2024
  • The prediction of VIV amplitude is essential for the design and fatigue life estimation of steel tubes in tubular transmission towers. Limited to costly and time-consuming traditional experimental and computational fluid dynamics (CFD) methods, a machine learning (ML)-based method is proposed to efficiently predict the VIV amplitude of steel tubes in transmission towers. Firstly, by introducing the first-order mode shape to the two-dimensional CFD method, a simplified response analysis method (SRAM) is presented to calculate the VIV amplitude of steel tubes in transmission towers, which enables to build a dataset for training ML models. Then, by taking mass ratio M*, damping ratio ξ, and reduced velocity U* as the input variables, a Kriging-based prediction method (KPM) is further proposed to estimate the VIV amplitude of steel tubes in transmission towers by combining the SRAM with the Kriging-based ML model. Finally, the feasibility and effectiveness of the proposed methods are demonstrated by using three full-scale steel tubes with C-shaped, Cross-shaped, and Flange-plate joints, respectively. The results show that the SRAM can reasonably calculate the VIV amplitude, in which the relative errors of VIV maximum amplitude in three examples are less than 6%. Meanwhile, the KPM can well predict the VIV amplitude of steel tubes in transmission towers within the studied range of M*, ξ and U*. Particularly, the KPM presents an excellent capability in estimating the VIV maximum amplitude by using the reduced damping parameter SG.
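
Kriging, the ML model behind the KPM, is Gaussian-process regression: predictions at new inputs are kernel-weighted combinations of the training responses. A minimal noise-free sketch in one dimension (the kernel, hyperparameters, and toy response curve are illustrative, not the paper's trained model over M*, ξ, and U*):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential (RBF) kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def kriging_fit_predict(X, y, X_new, ls=1.0, nugget=1e-10):
    """Simple Kriging: interpolate y at X_new through the kernel system K a = y."""
    K = rbf(X, X, ls) + nugget * np.eye(len(X))  # nugget stabilizes the solve
    alpha = np.linalg.solve(K, y)
    return rbf(X_new, X, ls) @ alpha

# 1-D toy: a synthetic amplitude curve sampled at 12 reduced-velocity points.
X = np.linspace(0.0, 5.0, 12)[:, None]
y = np.sin(X[:, 0])
pred = float(kriging_fit_predict(X, y, np.array([[2.5]]))[0])
print(pred)  # ≈ sin(2.5) ≈ 0.599
```

The appeal for VIV prediction is the same as in the paper: once the SRAM has produced a modest training set, the Kriging surrogate interpolates amplitudes across the studied parameter range without further CFD runs.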

Deep learning-based clothing attribute classification using fashion image data (패션 이미지 데이터를 활용한 딥러닝 기반의 의류속성 분류)

  • Hye Seon Jeong;So Young Lee;Choong Kwon Lee
    • Smart Media Journal / v.13 no.4 / pp.57-64 / 2024
  • Attributes such as material, color, and fit in fashion images are important factors for consumers purchasing clothing. However, classifying clothing attributes requires a large amount of manpower and is inconsistent because it relies on the subjective judgment of human operators. To alleviate this problem, research is needed that uses artificial intelligence to classify clothing attributes in fashion images. Previous studies have mainly focused on classifying attributes for either tops or bottoms, so they cannot identify the attributes of tops and bottoms simultaneously in full-body fashion images. In this study, we propose a deep learning model that can distinguish between tops and bottoms in fashion images and classify the category of each item and the attributes of the clothing material. The deep learning models ResNet and EfficientNet were used, and the training dataset comprised 1,002,718 fashion images with 125 labels covering clothing categories and material properties. Based on the weighted F1-Score, ResNet scored 0.800 and EfficientNet 0.781, with ResNet showing the better performance.
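
The weighted F1-Score used to compare the two models averages per-class F1 with weights proportional to each class's true support, so frequent categories count more. A self-contained sketch of the metric (the toy labels are illustrative):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with weights proportional to true-class support."""
    support = Counter(y_true)
    total = 0.0
    for cls, n in support.items():
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        total += n * f1
    return total / len(y_true)

# Toy example with two clothing categories:
y_true = ["top", "top", "bottom", "top", "bottom"]
y_pred = ["top", "bottom", "bottom", "top", "bottom"]
print(round(weighted_f1(y_true, y_pred), 3))  # 0.8
```

With 125 labels of very uneven frequency, this weighting keeps a model from looking good by only excelling on rare classes (or bad by only failing on them).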

A Comparative Study of Deep Learning Techniques for Alzheimer's disease Detection in Medical Radiography

  • Amal Alshahrani;Jenan Mustafa;Manar Almatrafi;Layan Albaqami;Raneem Aljabri;Shahad Almuntashri
    • International Journal of Computer Science & Network Security / v.24 no.5 / pp.53-63 / 2024
  • Alzheimer's disease is a brain disorder that worsens over time and affects millions of people around the world. It leads to a gradual deterioration in memory, thinking ability, and behavioral and social skills until the person loses their ability to adapt to society. Technological progress in medical imaging and the use of artificial intelligence have made it possible to detect Alzheimer's disease through medical images such as magnetic resonance imaging (MRI). Deep learning algorithms, especially convolutional neural networks (CNNs), have shown great success in analyzing medical images for disease diagnosis and classification: CNNs can recognize patterns and objects in images, which makes them well suited for this study. In this paper, we compare the performance of Alzheimer's disease detection using two deep learning methods: You Only Look Once (YOLO), a CNN-based object-recognition algorithm, and Visual Geometry Group (VGG16), a deep convolutional neural network used primarily for image classification, rather than using a plain CNN as in previous research. The results showed different levels of accuracy for the various versions of YOLO and for the VGG16 model. YOLO v5 reached 56.4% accuracy at 50 epochs and 61.5% accuracy at 100 epochs. YOLO v8, used for classification, reached 84% overall accuracy at 100 epochs. YOLO v9, used for object detection, reached an overall accuracy of 84.6%. The VGG16 model reached 99% training accuracy after 25 epochs but only 78% test accuracy. Hence, the best model overall is YOLO v9, with the highest overall accuracy of 86.1%.
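
The VGG16 result above (99% training accuracy but only 78% test accuracy) is a classic overfitting signature, which is why models should be ranked on held-out accuracy rather than training accuracy. A tiny sketch of computing that gap (the label sequences are made up):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative predictions on training vs. held-out MRI slices (1 = Alzheimer's):
train_acc = accuracy([1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
                     [1, 0, 1, 1, 0, 1, 0, 1, 1, 0])   # memorized perfectly
test_acc = accuracy([1, 0, 1, 1, 0],
                    [1, 0, 0, 1, 1])                   # generalization drops
print(train_acc, test_acc, train_acc - test_acc)  # a large gap flags overfitting
```

By this criterion the comparison above correctly favors the YOLO v9 result over VGG16's near-perfect training score.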