• Title/Summary/Keyword: Ground Truth

Search Results: 295

Analysis of 3D Reconstruction Accuracy by ToF-Stereo Fusion (ToF와 스테레오 융합을 이용한 3차원 복원 데이터 정밀도 분석 기법)

  • Jung, Sukwoo;Lee, Youn-Sung;Lee, KyungTaek
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.466-468 / 2022
  • 3D reconstruction is an important issue in many applications such as Augmented Reality (AR), eXtended Reality (XR), and the Metaverse. For 3D reconstruction, a depth map can be acquired by a stereo camera and a time-of-flight (ToF) sensor. We used both sensors complementarily to improve the accuracy of the 3D information in the data. First, we applied a general multi-camera calibration technique that uses both color and depth information. Next, the depth maps of the two sensors were fused by a 3D registration and reprojection approach. The fused data were compared with ground-truth data reconstructed using an RTC360 sensor. We used Geomagic Wrap to analyze the average RMSE between the two datasets. The proposed procedure was implemented and tested with real-world data.
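The Geomagic Wrap deviation analysis used above reduces to an average nearest-neighbour RMSE between the fused cloud and the ground-truth scan. A minimal numpy sketch of that cloud-to-cloud metric (hypothetical function name; brute-force search rather than the k-d trees a real tool would use):

```python
import numpy as np

def cloud_rmse(fused, ground_truth):
    """Average nearest-neighbour RMSE between two point clouds (N x 3 arrays)."""
    # Brute-force distance table: |fused| x |ground_truth|.
    diffs = fused[:, None, :] - ground_truth[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    nearest = dists.min(axis=1)          # closest GT distance per fused point
    return float(np.sqrt(np.mean(nearest ** 2)))
```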


Comparison of Paired and Unpaired Image-to-image Translation for 18F-FDG Delayed PET Generation (18F-FDG PET 지연영상 생성에 대한 딥러닝 이미지 생성 방법론 비교)

  • ALMASLAMANI MUATH;Kangsan Kim;Byung Hyun Byun;Sang-Keun Woo
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.179-181 / 2023
  • In this paper, we generated delayed PET images using GAN-based image generation methods. PET is a medical imaging technique used for cancer diagnosis that visualizes the in-vivo distribution of radiopharmaceuticals labeled with positron-emitting radioisotopes. However, the PET scan process has the drawback that the radiopharmaceutical takes a long time to distribute through the body. In this study, we therefore trained models to generate the target PET image, acquired after sufficient uptake time, from a PET image acquired before the radiopharmaceutical had fully distributed, using image-to-image translation (I2I) based on generative adversarial networks (GANs). In particular, we compared two approaches: Pix2pix, a paired I2I method that uses matched image pairs before and after generation, and CycleGAN, an unpaired I2I method that does not. The results showed that delayed PET images generated with Pix2pix had better image quality than those generated with CycleGAN, and were also more similar to the actually acquired ground-truth delayed PET images. In conclusion, delayed PET could be generated from early PET using deep learning, and higher performance can be expected when paired I2I is applied. This is expected to contribute substantially to reducing the time cost of the PET imaging process by using a deep learning model to shorten the time spent waiting for radiopharmaceutical distribution in the body.
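The paired/unpaired distinction between Pix2pix and CycleGAN comes down to their generator objectives. The following is an illustrative numpy sketch of those two loss terms only, not the authors' training code; function names and loss weights are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pix2pix_generator_loss(fake_logit, fake_img, target_img, lam=100.0):
    """Paired I2I: adversarial term plus L1 distance to the matched delayed image."""
    adv = -np.log(sigmoid(fake_logit))           # generator tries to fool the discriminator
    l1 = np.mean(np.abs(fake_img - target_img))  # requires a paired ground truth
    return float(adv + lam * l1)

def cycle_consistency_loss(real_img, reconstructed_img, lam=10.0):
    """Unpaired I2I (CycleGAN): early -> delayed -> early should recover the input."""
    return float(lam * np.mean(np.abs(real_img - reconstructed_img)))
```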


Improving Field Crop Classification Accuracy Using GLCM and SVM with UAV-Acquired Images

  • Seung-Hwan Go;Jong-Hwa Park
    • Korean Journal of Remote Sensing / v.40 no.1 / pp.93-101 / 2024
  • Accurate field crop classification is essential for various agricultural applications, yet existing methods face challenges due to diverse crop types and complex field conditions. This study aimed to address these issues by combining support vector machine (SVM) models with multi-seasonal unmanned aerial vehicle (UAV) images, texture information extracted from the gray-level co-occurrence matrix (GLCM), and RGB spectral data. Twelve high-resolution UAV images were captured between March and October 2021, while field surveys on three dates provided ground truth data. We focused on the August (-A), September (-S), and October (-O) images and trained four support vector classifier (SVC) models (SVC-A, SVC-S, SVC-O, SVC-AS) using visual bands and eight GLCM features. Farm maps provided by the Ministry of Agriculture, Food and Rural Affairs proved efficient for open-field crop identification and served as a reference for accuracy comparison. Our analysis showed the significant impact of hyperparameter tuning (C and gamma) on SVM model performance, requiring careful optimization for each scenario. Importantly, we identified models exhibiting distinct high-accuracy zones, with SVC-O, trained on October data, achieving the highest overall and individual crop classification accuracy. This success likely stems from its ability to capture distinct texture information from mature crops. Incorporating GLCM features proved highly effective for all models, significantly boosting classification accuracy. Among these features, homogeneity, entropy, and correlation consistently made the most impactful contribution. However, balancing accuracy with computational efficiency and feature selection remains crucial for practical application. Performance analysis revealed that SVC-O achieved exceptional results in overall and individual crop classification, while soybeans and rice were consistently classified well by all models. Challenges were encountered with cabbage due to its early growth stage and low field cover density. The study demonstrates the potential of utilizing farm maps and GLCM features in conjunction with SVM models for accurate field crop classification. Careful parameter tuning and model selection based on specific scenarios are key to optimizing performance in real-world applications.
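The GLCM texture features this study leans on (homogeneity, entropy, correlation) can be computed directly from a co-occurrence matrix. A minimal numpy sketch for a single horizontal offset (hypothetical helper names; the homogeneity term uses one common definition among several variants):

```python
import numpy as np

def glcm(img, levels):
    """Grey-level co-occurrence matrix for a horizontal (dx=1, dy=0) offset."""
    p = np.zeros((levels, levels))
    for row in img:
        for a, b in zip(row[:-1], row[1:]):
            p[a, b] += 1
    return p / p.sum()

def glcm_features(p):
    """Homogeneity, entropy, correlation -- the features the study found most impactful."""
    i, j = np.indices(p.shape)
    homogeneity = float(np.sum(p / (1.0 + np.abs(i - j))))
    nz = p[p > 0]
    entropy = float(-np.sum(nz * np.log2(nz)))
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    var_i = np.sum(p * (i - mu_i) ** 2)
    var_j = np.sum(p * (j - mu_j) ** 2)
    denom = np.sqrt(var_i * var_j)
    correlation = float(np.sum(p * (i - mu_i) * (j - mu_j)) / denom) if denom > 0 else 1.0
    return homogeneity, entropy, correlation
```

In the study's setting, such feature vectors would then be fed to an SVC with tuned C and gamma.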

Performance Evaluation of U-net Deep Learning Model for Noise Reduction according to Various Hyper Parameters in Lung CT Images (폐 CT 영상에서의 노이즈 감소를 위한 U-net 딥러닝 모델의 다양한 학습 파라미터 적용에 따른 성능 평가)

  • Min-Gwan Lee;Chanrok Park
    • Journal of the Korean Society of Radiology / v.17 no.5 / pp.709-715 / 2023
  • In this study, image quality after noise reduction was evaluated using the U-net deep learning architecture on computed tomography (CT) images. To generate input data, Gaussian noise was applied to the ground truth (GT) data, and the 1,300 CT images were split into training, validation, and test sets at an 8:1:1 ratio. Adagrad, Adam, and AdamW were used as optimizer functions, and 10, 50, and 100 epochs were applied. In addition, learning rates of 0.01, 0.001, and 0.0001 were applied to the U-net deep learning model to compare output image quality. To analyze quantitative values, the peak signal-to-noise ratio (PSNR) and coefficient of variation (COV) were calculated. Based on the results, the deep learning model was useful for noise reduction. We suggest that the optimized hyperparameters for noise reduction in CT images were the AdamW optimizer function, 100 epochs, and a learning rate of 0.0001.
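The two quantitative metrics named in the abstract, PSNR and COV, are short formulas. A hedged numpy sketch (assuming an 8-bit data range for PSNR and a uniform region of interest for COV):

```python
import numpy as np

def psnr(gt, pred, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means the output is closer to GT."""
    mse = np.mean((gt.astype(float) - pred.astype(float)) ** 2)
    return float('inf') if mse == 0 else float(10 * np.log10(data_range ** 2 / mse))

def cov_percent(roi):
    """Coefficient of variation (%) of a uniform region; lower means less residual noise."""
    return float(100.0 * np.std(roi) / np.mean(roi))
```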

Estimation of Image-based Damage Location and Generation of Exterior Damage Map for Port Structures (영상 기반 항만시설물 손상 위치 추정 및 외관조사망도 작성)

  • Banghyeon Kim;Sangyoon So;Soojin Cho
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.27 no.5 / pp.49-56 / 2023
  • This study proposed a damage location estimation method for automated image-based port infrastructure inspection. Memory efficiency was improved by calculating the homography matrix using feature detection and outlier removal techniques, storing only damage information without going through a 3D modeling process. To develop an algorithm specialized for port infrastructure, the algorithm was optimized using ground-truth coordinate pairs created from images of port infrastructure. The location errors obtained by applying it to the sample and a concrete wall were (X: 6.5 cm, Y: 1.3 cm) and (X: 12.7 cm, Y: 6.4 cm), respectively. In addition, by applying the algorithm to the concrete wall and displaying the results in the form of an exterior damage map, the feasibility of field application was demonstrated.
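The homography estimation described above can be sketched with a direct linear transform. This toy version assumes exact, pre-filtered correspondences and omits the feature detection and outlier removal (e.g. RANSAC) that the paper's pipeline relies on:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography from >= 4 point correspondences (direct linear transform)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null-space of the stacked system gives the homography up to scale.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Map an image point through H using homogeneous coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```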

Enhancing Alzheimer's Disease Classification using 3D Convolutional Neural Network and Multilayer Perceptron Model with Attention Network

  • Enoch A. Frimpong;Zhiguang Qin;Regina E. Turkson;Bernard M. Cobbinah;Edward Y. Baagyere;Edwin K. Tenagyei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.11 / pp.2924-2944 / 2023
  • Alzheimer's disease (AD) is a neurological condition recognized as one of the primary causes of memory loss. AD currently has no cure; therefore, the need to develop an efficient, high-precision model for timely detection of the disease is essential. When AD is detected early, treatment is most likely to succeed. The most often utilized indicators for AD identification are the Mini-Mental State Examination (MMSE) and the clinical dementia rating. However, using these indicators as ground-truth markings can be imprecise for AD detection. Researchers have proposed several computer-aided frameworks, and lately supervised models are most commonly used. In this study, we propose a novel 3D Convolutional Neural Network-Multilayer Perceptron (3D CNN-MLP) based model for AD classification. The model uses an attention mechanism to automatically extract relevant features from magnetic resonance images (MRI) and generate probability maps, which serve as input for the MLP classifier. Three MRI scan categories were considered: AD dementia patients, mild cognitive impairment (MCI) patients, and normal control (NC) or healthy subjects. The performance of the model is assessed by comparison against basic CNN, VGG16, and DenseNet models, as well as other state-of-the-art work. The models were adjusted to fit the 3D images before the comparison was done. Our model exhibited excellent classification performance, with an accuracy of 91.27% for AD vs. NC, 80.85% for MCI vs. NC, and 87.34% for AD vs. MCI.

Comparative Analysis of Supervised and Phenology-Based Approaches for Crop Mapping: A Case Study in South Korea

  • Ehsan Rahimi;Chuleui Jung
    • Korean Journal of Remote Sensing / v.40 no.2 / pp.179-190 / 2024
  • This study compares supervised classification methods with phenology-based approaches, specifically pixel-based and segment-based methods, for accurate crop mapping in agricultural landscapes. We utilized Sentinel-2A imagery, which provides multispectral data suitable for accurate crop mapping. Thirty-one normalized difference vegetation index (NDVI) images were calculated from the Sentinel-2A data. Next, we employed phenology-based approaches to extract valuable information from the NDVI time series; a set of 10 phenology metrics was extracted from the NDVI data. For the supervised classification, we employed the maximum likelihood (MaxLike) algorithm. For the phenology-based approaches, we implemented both pixel-based and segment-based methods. The results indicate that the phenology-based approaches outperformed the MaxLike algorithm in regions with frequent rainfall and cloudy conditions. The segment-based phenology approach demonstrated the highest kappa coefficient of 0.85, indicating a high level of agreement with the ground truth data. The pixel-based phenology approach also achieved a commendable kappa coefficient of 0.81, indicating its effectiveness in accurately classifying the crop types. On the other hand, the supervised classification method (MaxLike) yielded a lower kappa coefficient of 0.74. Our study suggests that segment-based phenology mapping is a suitable approach for regions like South Korea, where continuous cloud-free satellite images are scarce. However, establishing precise classification thresholds remains challenging due to the lack of adequately sampled NDVI data. Despite this limitation, the phenology-based approach demonstrates its potential in crop classification, particularly in regions with varying weather patterns.
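The kappa coefficients reported above measure chance-corrected agreement with ground truth, computed over classes derived from NDVI time series. A minimal pure-Python sketch of both pieces (illustrative only; function names are assumptions):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def cohen_kappa(y_true, y_pred):
    """Chance-corrected agreement between a classified map and ground truth labels."""
    labels = sorted(set(y_true) | set(y_pred))
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n            # observed agreement
    pe = sum((y_true.count(l) / n) * (y_pred.count(l) / n) for l in labels)  # chance agreement
    return (po - pe) / (1 - pe)
```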

Study on the Improvement of Lung CT Image Quality using 2D Deep Learning Network according to Various Noise Types (폐 CT 영상에서 다양한 노이즈 타입에 따른 딥러닝 네트워크를 이용한 영상의 질 향상에 관한 연구)

  • Min-Gwan Lee;Chanrok Park
    • Journal of the Korean Society of Radiology / v.18 no.2 / pp.93-99 / 2024
  • Digital medical imaging, especially computed tomography (CT), must account for the noise distribution introduced when converting X-ray photons into digital imaging signals. Recently, denoising techniques based on deep learning architectures have become increasingly common in the medical imaging field. Here, we evaluated the noise reduction effect for various noise types using the U-net deep learning model on lung CT images. The input data for deep learning were generated by applying Gaussian noise, Poisson noise, salt-and-pepper noise, and speckle noise to the ground truth (GT) images. In particular, two types of Gaussian noise input data were applied, with standard deviation values of 30 and 50. The applied hyperparameters were the Adam optimizer function, 100 epochs, and a learning rate of 0.0001. To analyze quantitative values, the mean squared error (MSE), peak signal-to-noise ratio (PSNR), and coefficient of variation (COV) were calculated. According to the results, the U-net model was effective for noise reduction under all of the conditions set in this study, and it showed the best performance for Gaussian noise.
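The four noise types applied to the GT images can be simulated in a few lines of numpy; the generators below are a hedged sketch (parameter defaults are assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian(img, sigma=30.0):
    return img + rng.normal(0.0, sigma, img.shape)          # additive, signal-independent

def add_poisson(img):
    return rng.poisson(img).astype(float)                   # counting noise, signal-dependent

def add_salt_pepper(img, amount=0.05, vmax=255.0):
    out = img.copy()
    u = rng.random(img.shape)
    out[u < amount / 2] = 0.0                               # pepper
    out[u > 1.0 - amount / 2] = vmax                        # salt
    return out

def add_speckle(img, sigma=0.1):
    return img * (1.0 + rng.normal(0.0, sigma, img.shape))  # multiplicative
```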

Performance Evaluation of YOLOv5 Model according to Various Hyper-parameters in Nuclear Medicine Phantom Images (핵의학 팬텀 영상에서 초매개변수 변화에 따른 YOLOv5 모델의 성능평가)

  • Min-Gwan Lee;Chanrok Park
    • Journal of the Korean Society of Radiology / v.18 no.1 / pp.21-26 / 2024
  • One of the well-known deep learning models for the object detection task is the you-only-look-once version 5 (YOLOv5) framework, based on a one-stage architecture. The YOLOv5 model also shows high performance for accurate lesion detection thanks to its bottleneck CSP layers and skip connections. The purpose of this study was to evaluate the performance of the YOLOv5 framework under various hyperparameters on positron emission tomography (PET) phantom images. The dataset of 500 slices was obtained from the QIN PET segmentation challenge. We set the bounding boxes to generate the ground truth dataset using the labelImg software. The hyperparameters for network training were varied across the optimization function (SGD, Adam, and AdamW), activation function (SiLU, LeakyReLU, Mish, and Hardswish), and YOLOv5 model size (nano, small, large, and xlarge). The intersection over union (IoU) method was used for performance evaluation. As a result, the best-performing configuration was AdamW, Hardswish, and nano for the optimization function, activation function, and model size, respectively. In conclusion, we confirmed the usefulness of the YOLOv5 network for object detection in nuclear medicine images.
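IoU, the evaluation metric used here, compares predicted and ground-truth bounding boxes. A minimal sketch for axis-aligned boxes in (x1, y1, x2, y2) form:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # zero when boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```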

Analysis of deep learning-based deep clustering method (딥러닝 기반의 딥 클러스터링 방법에 대한 분석)

  • Hyun Kwon;Jun Lee
    • Convergence Security Journal / v.23 no.4 / pp.61-70 / 2023
  • Clustering is an unsupervised learning method that groups data based on features such as distance metrics, using data without known labels or ground truth values. This method has the advantage of being applicable to various types of data, including images, text, and audio, without the need for labeling. Traditional clustering techniques apply dimensionality reduction methods or extract specific features before clustering. However, with the advancement of deep learning models, research has emerged on deep clustering techniques that represent input data as latent vectors using models such as autoencoders and generative adversarial networks. In this study, we propose a deep-learning-based deep clustering technique. In this approach, we use an autoencoder to transform the input data into latent vectors, then construct a vector space according to the cluster structure and perform k-means clustering. We conducted experiments using the MNIST and Fashion-MNIST datasets, with the PyTorch machine learning library as the experimental environment. The model used is a convolutional neural network-based autoencoder. The experimental results show an accuracy of 89.42% for MNIST and 56.64% for Fashion-MNIST when k is set to 10.
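The final stage of the pipeline described above is plain k-means on the latent vectors. A minimal numpy sketch (naive deterministic initialization for brevity; the autoencoder producing the latent vectors is omitted):

```python
import numpy as np

def kmeans(latent, k, iters=50):
    """Plain k-means on latent vectors (a stand-in for the encoder's output space)."""
    # Naive init: first k points (a real run would use k-means++ or random restarts).
    centers = latent[:k].astype(float).copy()
    labels = np.zeros(len(latent), dtype=int)
    for _ in range(iters):
        # Distance of every point to every center, then hard assignment.
        d = np.linalg.norm(latent[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = latent[labels == j].mean(axis=0)
    return labels, centers
```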