• Title/Summary/Keyword: Image Training Dataset

Object Detection Accuracy Improvements of Mobility Equipments through Substitution Augmentation of Similar Objects (유사물체 치환증강을 통한 기동장비 물체 인식 성능 향상)

  • Heo, Jiseong;Park, Jihun
    • Journal of the Korea Institute of Military Science and Technology, v.25 no.3, pp.300-310, 2022
  • A vast amount of labeled data is required for deep neural network training. A typical strategy for improving the performance of a neural network on a given training dataset is to use a data augmentation technique. The goal of this work is to offer a novel image augmentation method for improving object detection accuracy. An object in an image is removed, and a similar object from the training dataset is placed in its area. An in-painting algorithm then fills the space that was cleared but not covered by the similar object. Our technique yields up to a 2.32% improvement in mAP in our tests on a military vehicle dataset with the YOLOv4 object detector.
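A minimal sketch of the substitute-then-inpaint augmentation described above, using OpenCV. The function, the (x, y, w, h) box format, and the choice of Telea inpainting are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def substitute_and_inpaint(image, box, donor_crop):
    """Replace the object inside `box` with `donor_crop` (a similar object
    cut from another training image) and inpaint whatever part of the
    cleared region the donor does not cover."""
    x, y, w, h = box
    out = image.copy()

    # Mark the whole original object region as missing.
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255

    # Paste the donor object, scaled to fit inside the cleared box.
    dh, dw = donor_crop.shape[:2]
    scale = min(w / dw, h / dh, 1.0)
    dw, dh = max(1, int(dw * scale)), max(1, int(dh * scale))
    out[y:y + dh, x:x + dw] = cv2.resize(donor_crop, (dw, dh))
    mask[y:y + dh, x:x + dw] = 0  # donor pixels need no inpainting

    # Fill the uncovered remainder with Telea inpainting.
    return cv2.inpaint(out, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```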

Damage Detection and Damage Quantification of Temporary works Equipment based on Explainable Artificial Intelligence (XAI)

  • Cheolhee Lee;Taehoe Koo;Namwook Park;Nakhoon Lim
    • Journal of Internet Computing and Services, v.25 no.2, pp.11-19, 2024
  • This paper studied a technology for detecting damage to temporary works equipment used on construction sites with explainable artificial intelligence (XAI). Temporary works equipment is mostly composed of steel or aluminum and, owing to the characteristics of these materials, is reused several times. However, because regulations and restrictions on reuse are not strict, the use of low-quality or degraded temporary works equipment sometimes causes accidents at construction sites. Currently, safety rules such as related government laws, standards, and regulations for quality control of temporary works equipment have not been established. Additionally, inspection results often differ according to the inspector's level of training. To overcome these limitations, a method based on AI and image processing technology was developed. Explainable artificial intelligence (XAI) was applied so that inspectors can make more exact decisions from the damage-detection results that the developed AI model produces by image analysis of temporary works equipment. In the experiments, temporary works equipment was photographed with a 4K-quality camera, the artificial intelligence model was trained with 610 labeled data, and accuracy was tested by analyzing recorded image data of temporary works equipment. As a result, the damage-detection accuracy of the XAI model was 95.0% on the training dataset, 92.0% on the validation dataset, and 90.0% on the test dataset, demonstrating the reliability of the developed artificial intelligence. The experiments verified the usability of explainable artificial intelligence for detecting damage in temporary works equipment. However, to reach the level of commercial software, the XAI model needs further training on real datasets, and its damage-detection ability must be maintained or improved when real datasets are applied.
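The abstract does not name the XAI technique used; Grad-CAM is one common way to show an inspector which image regions drove a damage decision. Below is a hypothetical sketch for a generic PyTorch classifier, with a stock ResNet-18 standing in for the paper's model.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, target_layer, image, class_idx):
    """Return a heatmap of the regions that drove `class_idx`
    (e.g. a 'damaged' class) for a single CHW image tensor."""
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))

    model.eval()
    score = model(image.unsqueeze(0))[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    # Weight each feature map by its average gradient, sum, then ReLU.
    weights = grads[0].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feats[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                        align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze().detach()

# Stand-in model and input; the paper's own network is not public.
model = models.resnet18(weights="IMAGENET1K_V1")
heatmap = grad_cam(model, model.layer4[-1], torch.rand(3, 224, 224), 0)
```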

Normal map generation based on Pix2Pix for rendering fabric image (옷감 이미지 렌더링을 위한 Pix2Pix 기반의 Normal map 생성)

  • Nam, Hyeongil;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2020.07a, pp.257-260, 2020
  • This paper presents a method for generating a normal map from a single fabric image using Pix2Pix for virtual graphic rendering. Specifically, to generate a normal map from a single image, a training dataset of color-image/normal-map pairs is used to train a Pix2Pix model, and the normal maps generated from color images in the test dataset are then examined. The Pix2Pix results are compared against the U-Net approach used in previous studies using the SSIM (Structural Similarity Index). In addition, the generated normal map is resized to fit the virtual object to be rendered, and the result is rendered with OpenGL. This paper confirms that a normal map generated from a single pattern image with Pix2Pix can express fabric detail realistically.
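The SSIM comparison between the Pix2Pix and U-Net normal maps can be computed with scikit-image; a small sketch with placeholder file names:

```python
from skimage.io import imread
from skimage.metrics import structural_similarity as ssim

# Placeholder paths: a ground-truth normal map and the maps
# generated by the Pix2Pix and U-Net models.
gt = imread("normal_gt.png")
p2p = imread("normal_pix2pix.png")
unet = imread("normal_unet.png")

# channel_axis=-1 scores the RGB-encoded normal vectors per channel.
print("Pix2Pix SSIM:", ssim(gt, p2p, channel_axis=-1))
print("U-Net SSIM:  ", ssim(gt, unet, channel_axis=-1))
```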

Turbulent-image Restoration Based on a Compound Multibranch Feature Fusion Network

  • Banglian Xu;Yao Fang;Leihong Zhang;Dawei Zhang;Lulu Zheng
    • Current Optics and Photonics, v.7 no.3, pp.237-247, 2023
  • In middle- and long-distance imaging systems, atmospheric turbulence caused by temperature, wind speed, humidity, and so on distorts light waves propagating through the air, resulting in image-quality degradation such as geometric deformation and blurring. In remote sensing, astronomical observation, and traffic monitoring, the information lost to such degradation is very costly, so effective restoration of degraded images is important. To restore images degraded by atmospheric turbulence, an image-restoration method based on an improved compound multibranch feature fusion network (CMFNetPro) was proposed. Building on the CMFNet network, an efficient channel-attention mechanism replaced the original channel-attention mechanism to improve image quality and network efficiency. In the experiments, two-dimensional random distortion vector fields were used to construct two turbulent datasets with different degrees of distortion, based on the Google Landmarks Dataset v2. The experimental results showed that compared to the CMFNet, DeblurGAN-v2, and MIMO-UNet models, the proposed CMFNetPro network achieves better performance in both quality and training cost of turbulent-image restoration. In mixed training, CMFNetPro exceeded CMFNet by 1.2391 dB (weak turbulence) and 0.8602 dB (strong turbulence) in peak signal-to-noise ratio, and by 0.0015 (weak turbulence) and 0.0136 (strong turbulence) in structural similarity, while training 14.4 hours faster. This provides a feasible scheme for deep-learning-based turbulent-image restoration.
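The "efficient channel-attention mechanism" plausibly refers to an ECA-style block; the PyTorch module below is a minimal sketch of that idea under this assumption, not the paper's exact layer.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention: per-channel weights from a 1-D
    convolution over globally pooled descriptors, avoiding the
    dimensionality reduction of squeeze-and-excitation blocks."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):                       # x: (N, C, H, W)
        y = self.pool(x).squeeze(-1)            # (N, C, 1)
        y = self.conv(y.transpose(1, 2))        # (N, 1, C)
        y = torch.sigmoid(y).transpose(1, 2).unsqueeze(-1)  # (N, C, 1, 1)
        return x * y                            # reweight channels
```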

Transfer learning in a deep convolutional neural network for implant fixture classification: A pilot study

  • Kim, Hak-Sun;Ha, Eun-Gyu;Kim, Young Hyun;Jeon, Kug Jin;Lee, Chena;Han, Sang-Sun
    • Imaging Science in Dentistry, v.52 no.2, pp.219-224, 2022
  • Purpose: This study aimed to evaluate the performance of transfer learning in a deep convolutional neural network for classifying implant fixtures. Materials and Methods: Periapical radiographs of implant fixtures obtained using the Superline (Dentium Co. Ltd., Seoul, Korea), TS III (Osstem Implant Co. Ltd., Seoul, Korea), and Bone Level Implant (Institut Straumann AG, Basel, Switzerland) systems were selected from patients who underwent dental implant treatment. All 355 implant fixtures comprised the total dataset and were annotated with the name of the system. The total dataset was split into a training dataset and a test dataset at a ratio of 8 to 2. YOLOv3 (You Only Look Once version 3, available at https://pjreddie.com/darknet/yolo/), a deep convolutional neural network pretrained on a large image dataset of objects, was used to train the model to classify fixtures in periapical images, in a process called transfer learning. This network was trained with the training dataset for 100, 200, and 300 epochs. Using the test dataset, the performance of the network was evaluated in terms of sensitivity, specificity, and accuracy. Results: When YOLOv3 was trained for 200 epochs, the sensitivity, specificity, accuracy, and confidence score were the highest for all systems, with overall results of 94.4%, 97.9%, 96.7%, and 0.75, respectively. The network showed the best performance in classifying Bone Level Implant fixtures, with 100.0% sensitivity, specificity, and accuracy. Conclusion: Through transfer learning, high performance could be achieved with YOLOv3, even using a small amount of data.
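The 8:2 split and the per-system sensitivity/specificity/accuracy evaluation can be sketched with scikit-learn; the file names and even class balance below are placeholders.

```python
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Placeholder data: one radiograph file and system label per fixture.
files = [f"periapical_{i}.png" for i in range(355)]
labels = (["Superline", "TS III", "Bone Level Implant"] * 119)[:355]

# 8:2 split, stratified so each implant system keeps its share.
train_f, test_f, train_y, test_y = train_test_split(
    files, labels, test_size=0.2, stratify=labels, random_state=42)

def per_class_metrics(y_true, y_pred, classes):
    """Sensitivity, specificity, and accuracy per implant system."""
    cm = confusion_matrix(y_true, y_pred, labels=classes)
    for i, name in enumerate(classes):
        tp = cm[i, i]
        fn = cm[i].sum() - tp
        fp = cm[:, i].sum() - tp
        tn = cm.sum() - tp - fn - fp
        print(f"{name}: sensitivity={tp / (tp + fn):.3f} "
              f"specificity={tn / (tn + fp):.3f} "
              f"accuracy={(tp + tn) / cm.sum():.3f}")
```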

Management Software Development of Hyper Spectral Image Data for Deep Learning Training (딥러닝 학습을 위한 초분광 영상 데이터 관리 소프트웨어 개발)

  • Lee, Da-Been;Kim, Hong-Rak;Park, Jin-Ho;Hwang, Seon-Jeong;Shin, Jeong-Seop
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.21 no.6, pp.111-116, 2021
  • The hyper-spectral image is data obtained by dividing the electromagnetic wave band in the infrared region into hundreds of wavelengths. It is used to find or classify objects in various fields, and recently deep learning classification methods have been attracting attention. To use hyper-spectral image data as deep learning training data, additional processing is required compared to conventional visible-light image data. To solve this problem, we developed software that selects specific wavelength images from the hyper-spectral data cube and performs the ground-truth task. We also developed software to manage the data, including environmental information. This paper describes the configuration and functions of the software.
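Selecting a specific wavelength image from a hyperspectral data cube, as the software above does, reduces to slicing along the spectral axis; the cube shape and infrared wavelength grid below are assumptions.

```python
import numpy as np

# Assumed layout: (height, width, bands) with a matching
# wavelength array in nanometres for the infrared bands.
cube = np.random.rand(512, 512, 200).astype(np.float32)
wavelengths = np.linspace(900.0, 1700.0, 200)

def band_image(cube, wavelengths, target_nm):
    """Return the single-band image closest to `target_nm`."""
    idx = int(np.argmin(np.abs(wavelengths - target_nm)))
    return cube[:, :, idx], wavelengths[idx]

img, actual_nm = band_image(cube, wavelengths, 1250.0)
print(f"Selected band at {actual_nm:.1f} nm, shape {img.shape}")
```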

Automatic Classification by Land Use Category of National Level LULUCF Sector using Deep Learning Model (딥러닝모델을 이용한 국가수준 LULUCF 분야 토지이용 범주별 자동화 분류)

  • Park, Jeong Mook;Sim, Woo Dam;Lee, Jung Soo
    • Korean Journal of Remote Sensing, v.35 no.6_2, pp.1053-1065, 2019
  • Land-use statistics are highly informative activity data for calculating exact carbon absorption and emission in the post-2020 framework. For effective interpretation by land-use category, this study automatically classifies image interpretations by land-use category by applying forest aerial photography (FAP) to a deep learning model and calculates national-level statistics. The dataset (DS) for deep learning was divided into a training dataset (training DS) and a test dataset (test DS) by extracting FAP images at the locations of national forest resource inventory permanent sample plots. In the training DS, each image is labeled according to the definition of its land-use category and used to train and verify the deep learning model. In verification, the model's training accuracy was highest at epoch 1,500, at about 89%. When the trained deep learning model was applied to the test DS, the classification accuracy against the image labels was about 90%. When the area of each category was estimated using a sampling method and compared to national statistics, consistency was also very high, so the approach is judged sufficient for use as activity data in the national greenhouse gas (GHG) inventory report for the LULUCF sector in the future.
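The sampling-based area estimation mentioned above amounts to scaling each category's share of classified sample plots by the total land area; a minimal sketch with invented plot counts and area:

```python
from collections import Counter

# Invented example: deep-learning labels for permanent sample plots.
plot_labels = ["forest"] * 630 + ["cropland"] * 200 + ["settlement"] * 170
total_area_kha = 10_000  # assumed total land area, thousand hectares

counts = Counter(plot_labels)
n = len(plot_labels)
for category, k in counts.items():
    share = k / n
    print(f"{category}: {share:.1%} of plots -> "
          f"{share * total_area_kha:,.0f} kha estimated")
```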

Application of CNN for Fish Species Classification (어종 분류를 위한 CNN의 적용)

  • Park, Jin-Hyun;Hwang, Kwang-Bok;Park, Hee-Mun;Choi, Young-Kiu
    • Journal of the Korea Institute of Information and Communication Engineering, v.23 no.1, pp.39-46, 2019
  • In this study, as a preliminary step toward developing a system for eliminating foreign fish species, we propose an algorithm that classifies fish species by training a CNN on fish images. The raw data for CNN training were images captured directly for each species: Dataset 1 increases the number of images to improve fish-species classification, and Dataset 2 consists of images close to the natural environment; both were used as training and test data. The classification performance of four CNNs exceeds 99.97% for Dataset 1 and 99.5% for Dataset 2, and in particular we confirm that the CNN trained on Dataset 2 performs satisfactorily on fish images resembling the natural environment. Among the four CNNs, AlexNet achieves satisfactory performance with the shortest execution and training times, so we confirm that it is the most suitable architecture for developing the foreign-fish-species elimination system.
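A minimal AlexNet fine-tuning setup of the kind compared above, using torchvision; the species count is an assumption since the abstract does not state it.

```python
import torch.nn as nn
from torchvision import models

NUM_SPECIES = 8  # assumption: actual species count not given above

# Start from ImageNet-pretrained AlexNet and swap the classifier head.
model = models.alexnet(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_SPECIES)

# Optionally freeze the convolutional features and train only the head,
# which keeps training time short.
for p in model.features.parameters():
    p.requires_grad = False
```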

Flaw Detection in LCD Manufacturing Using GAN-based Data Augmentation

  • Jingyi Li;Yan Li;Zuyu Zhang;Byeongseok Shin
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.124-125, 2023
  • Defect detection during liquid crystal display (LCD) manufacturing has always been a critical challenge. This study aims to address this issue by proposing a data augmentation method based on generative adversarial networks (GANs) to improve defect identification accuracy in LCD production. By leveraging synthetically generated image data from the GAN, we effectively augment the original dataset to make it more representative and diverse. This data augmentation strategy enhances the model's generalization capability and robustness on real-world data. Compared to traditional data augmentation techniques, the synthetic data from the GAN are more realistic, diverse, and broadly distributed. Experimental results demonstrate that training models on GAN-generated data combined with the original dataset significantly improves the detection accuracy of critical defects in LCD manufacturing, compared to using the original dataset alone. This study provides an effective data augmentation approach for intelligent quality control in LCD production.
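Mixing GAN-generated defect images into the original training set, as described above, can be expressed as a dataset concatenation in PyTorch; the directory names are placeholders.

```python
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)),
                          transforms.ToTensor()])

# Placeholder folders: real LCD images and GAN-synthesized defects,
# each arranged as class-labelled subdirectories with matching names.
real = datasets.ImageFolder("lcd/real_train", transform=tfm)
synthetic = datasets.ImageFolder("lcd/gan_generated", transform=tfm)

# The defect detector then trains on the combined, more diverse set.
loader = DataLoader(ConcatDataset([real, synthetic]),
                    batch_size=32, shuffle=True)
```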

Research Trends of Generative Adversarial Networks and Image Generation and Translation (GAN 적대적 생성 신경망과 이미지 생성 및 변환 기술 동향)

  • Jo, Y.J.;Bae, K.M.;Park, J.Y.
    • Electronics and Telecommunications Trends, v.35 no.4, pp.91-102, 2020
  • Recently, generative adversarial networks (GANs) have rapidly emerged as a field of research in which many studies show overwhelming results. Initially, GANs could only imitate the training dataset. However, they are now useful in many fields, such as transforming data categories, restoring erased parts of images, copying human facial expressions, and creating artworks in the style of a deceased painter. Although many outstanding research achievements have attracted attention recently, GANs still face many challenges. First, they require large memory capacity for research. Second, there are still technical limitations in processing high-resolution images above 4K. Third, many GAN training methods suffer from instability in the training stage. However, recent results show images that are difficult to distinguish from real ones even with the naked eye, and resolutions of 4K and above are being developed. With the increase in image quality and resolution, many applications in design and in image and video editing are now available, including those that turn a simple sketch into a photorealistic image or easily modify unnecessary parts of an image or video. In this paper, we discuss how GANs started, the base architecture, and the latest GAN technologies used in high-resolution, high-quality image creation, image and video editing, style translation, and content transfer.