• Title/Summary/Keyword: Image Data Augmentation

Image-to-Image Translation with GAN for Synthetic Data Augmentation in Plant Disease Datasets

  • Nazki, Haseeb;Lee, Jaehwan;Yoon, Sook;Park, Dong Sun
    • Smart Media Journal
    • /
    • v.8 no.2
    • /
    • pp.46-57
    • /
    • 2019
  • In recent research, deep learning-based methods have achieved state-of-the-art performance in various computer vision tasks. However, these methods are commonly supervised and require huge amounts of annotated data to train. Acquiring such data demands additional costly effort, particularly for tasks where large amounts of data are hard to obtain given time constraints and the need for professional human diligence. In this paper, we present a data-level synthetic sampling solution for learning from small and imbalanced datasets using Generative Adversarial Networks (GANs). GANs are chosen because many fields must cope with small datasets and fluctuating numbers of samples per class. We present an approach that improves learning with respect to the data distribution, reducing the bias introduced by class imbalance and thereby shifting the classification decision boundary toward more accurate results. Our method is demonstrated on a small dataset of 2789 tomato plant disease images that is heavily imbalanced across 9 disease categories. We evaluate our results in terms of different metrics and compare the quality of these results across classes.
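
A minimal sketch of the data-level rebalancing step this abstract describes, assuming per-class GAN generators have already been trained; the generator interface, latent size, and class handling below are illustrative assumptions, not the paper's implementation:

```python
# Sketch: oversample minority classes by drawing synthetic images from
# per-class GAN generators until each class matches the largest class.
import torch
from collections import Counter

def rebalance_with_gan(images, labels, generators, latent_dim=100, device="cpu"):
    """images: list of image tensors; labels: list of class ids;
    generators: dict {class_id: generator nn.Module mapping noise -> image} (assumed)."""
    counts = Counter(labels)
    target = max(counts.values())
    aug_images, aug_labels = list(images), list(labels)
    for cls, n in counts.items():
        deficit = target - n
        if deficit <= 0 or cls not in generators:
            continue
        g = generators[cls].to(device).eval()
        with torch.no_grad():
            noise = torch.randn(deficit, latent_dim, device=device)
            synthetic = g(noise).cpu()          # (deficit, C, H, W)
        aug_images.extend(synthetic)
        aug_labels.extend([cls] * deficit)
    return aug_images, aug_labels
```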

Deep Learning based Image Recognition Models for Beef Sirloin Classification (딥러닝 이미지 인식 기술을 활용한 소고기 등심 세부 부위 분류)

  • Han, Jun-Hee;Jung, Sung-Hun;Park, Kyungsu;Yu, Tae-Sun
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.44 no.3
    • /
    • pp.1-9
    • /
    • 2021
  • This research examines deep learning-based image recognition models for beef sirloin classification. Beef sirloin can be divided into the upper sirloin, the lower sirloin, and the ribeye, although during distribution these are often lumped together as a single sirloin region. For detailed classification of beef sirloin regions, we develop a model based on the MobileNet algorithm that can learn image information in a reasonable computation time. To increase the accuracy of the model, we also introduce data augmentation methods that amplify the image data collected during the distribution process. This augmentation yields a larger training dataset, by which the accuracy of the model can be significantly improved. The data generated during augmentation was tested with the MobileNet algorithm, using a test set obtained from real-world distribution processes. Through computational experiments we confirm that the accuracy of the suggested model reaches up to 83%. We expect that the classification model of this study can contribute to more accurate and detailed information exchange between suppliers and consumers during the distribution of beef sirloin.
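
As a rough illustration of the kind of pipeline described here, the sketch below fine-tunes MobileNet with on-the-fly image augmentation for a three-class task (upper sirloin, lower sirloin, ribeye); the directory layout, augmentation parameters, and hyperparameters are assumptions, not the paper's settings:

```python
# Sketch: MobileNet classifier trained with augmented image data (Keras).
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,       # simple geometric augmentation
    horizontal_flip=True,
    zoom_range=0.2,
    validation_split=0.2,
)
train = datagen.flow_from_directory("beef_images/", target_size=(224, 224),
                                    batch_size=32, subset="training")
val = datagen.flow_from_directory("beef_images/", target_size=(224, 224),
                                  batch_size=32, subset="validation")

base = tf.keras.applications.MobileNet(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3), pooling="avg")
model = tf.keras.Sequential([base, tf.keras.layers.Dense(3, activation="softmax")])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train, validation_data=val, epochs=10)
```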

Experimental Analysis of Equilibrization in Binary Classification for Non-Image Imbalanced Data Using Wasserstein GAN

  • Wang, Zhi-Yong;Kang, Dae-Ki
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.11 no.4
    • /
    • pp.37-42
    • /
    • 2019
  • In this paper, we examine three classic data augmentation methods and two generative-model-based oversampling methods. The three classic methods are random oversampling (RANDOM), the Synthetic Minority Over-sampling Technique (SMOTE), and Adaptive Synthetic Sampling (ADASYN). The two generative methods are the Conditional Generative Adversarial Network (CGAN) and the Wasserstein Generative Adversarial Network (WGAN). In imbalanced data, the instances are divided into a majority class, which occupies most of the training set, and a minority class, which includes only a few instances. Generative models have the advantage of producing more plausible samples that follow the distribution of the minority class. We adopt CGAN to compare its augmentation performance with the other methods. The experimental results show that the WGAN-based oversampling technique is more stable than the other approaches (RANDOM, SMOTE, ADASYN, and CGAN) even with very limited training data. However, when the imbalance ratio is too small, the generative approaches cannot outperform the conventional data augmentation techniques. These results suggest a direction for future research.
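
The three classic baselines compared in this paper are available in the imbalanced-learn package; a minimal sketch on a synthetic imbalanced dataset follows (the GAN-based methods are omitted, and the dataset parameters are arbitrary):

```python
# Sketch: RANDOM, SMOTE, and ADASYN oversampling on an imbalanced binary dataset.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import RandomOverSampler, SMOTE, ADASYN

X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
print("original:", Counter(y))
for name, sampler in [("RANDOM", RandomOverSampler(random_state=0)),
                      ("SMOTE", SMOTE(random_state=0)),
                      ("ADASYN", ADASYN(random_state=0))]:
    X_res, y_res = sampler.fit_resample(X, y)
    print(name, Counter(y_res))                 # minority class oversampled to balance
```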

Development of an Image Data Augmentation Apparatus to Evaluate CNN Model (CNN 모델 평가를 위한 이미지 데이터 증강 도구 개발)

  • Choi, Youngwon;Lee, Youngwoo;Chae, Heung-Seok
    • Journal of Software Engineering Society
    • /
    • v.29 no.1
    • /
    • pp.13-21
    • /
    • 2020
  • As CNN models are applied to various domains such as image classification and object detection, a CNN model used in a safety-critical system such as an autonomous vehicle should perform reliably. To evaluate whether a CNN model can sustain its performance in various environments, we developed an image data augmentation apparatus that generates images with changed backgrounds. Given an input image containing an object, the apparatus extracts the object from the image and generates composed images by synthesizing the object with collected background images. As an evaluation method, the apparatus generates new test images from the original test images, and the CNN model is evaluated on the new test images. As a case study, we generated new test images from Pascal VOC2007 and evaluated a YOLOv3 model on them. As a result, the mAP on the new test images was almost 0.11 lower than the mAP on the original test images.
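
A simplified sketch of the background-replacement step described here, assuming an object mask is already available (e.g. from a segmentation model); the file layout and mask convention are assumptions:

```python
# Sketch: composite an extracted object onto collected background images
# to generate changed-background test images.
from pathlib import Path
from PIL import Image

def compose_on_backgrounds(object_img_path, mask_path, background_dir, out_dir):
    obj = Image.open(object_img_path).convert("RGBA")
    mask = Image.open(mask_path).convert("L")    # white = object, black = background (must match obj size)
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for i, bg_path in enumerate(sorted(Path(background_dir).glob("*.jpg"))):
        bg = Image.open(bg_path).convert("RGBA").resize(obj.size)
        composed = Image.composite(obj, bg, mask)   # keep object pixels where mask is white
        composed.convert("RGB").save(Path(out_dir) / f"composed_{i:04d}.jpg")
```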

Robust Head Pose Estimation for Masked Face Image via Data Augmentation (데이터 증강을 통한 마스크 착용 얼굴 이미지에 강인한 얼굴 자세추정)

  • Han, Kyeongtak;Hong, Sungeun
    • Journal of Broadcast Engineering
    • /
    • v.27 no.6
    • /
    • pp.944-947
    • /
    • 2022
  • Due to the coronavirus pandemic, mask wearing has increased worldwide, making image analysis of masked face images essential. Although head pose estimation is used in various face-related applications, including driver attention monitoring, face frontalization, and gaze detection, few studies have addressed the performance degradation caused by masked faces. This study proposes a new data augmentation that synthesizes a mask onto the face according to the face image size and pose, and it shows robust performance on the BIWI benchmark dataset regardless of mask wearing. Since the proposed scheme is not tied to a specific model, it can be used with various head pose estimation models.
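
The sketch below is a deliberately simplified stand-in for the mask-synthesis augmentation described here: it pastes a transparent mask image over the lower half of a face crop, without the pose- and size-dependent fitting of the paper; the mask PNG with an alpha channel is an assumed input:

```python
# Sketch: naive mask overlay for augmenting face images.
from PIL import Image

def add_mask(face_img_path, mask_png_path, out_path):
    face = Image.open(face_img_path).convert("RGB")
    mask = Image.open(mask_png_path).convert("RGBA")
    w, h = face.size
    mask = mask.resize((w, h // 2))              # cover roughly the lower half of the face
    face.paste(mask, (0, h - h // 2), mask)      # alpha channel used as paste mask
    face.save(out_path)
```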

A Practical Implementation of Deep Learning Method for Supporting the Classification of Breast Lesions in Ultrasound Images

  • Han, Seokmin;Lee, Suchul;Lee, Jun-Rak
    • International journal of advanced smart convergence
    • /
    • v.8 no.1
    • /
    • pp.24-34
    • /
    • 2019
  • In this research, a practical deep learning framework for differentiating lesions and nodules in breast ultrasound images is proposed. A total of 7408 ultrasound breast images from 5151 patient cases were collected. All cases were biopsy proven, and the lesions were semi-automatically segmented. To compensate for shift introduced during segmentation, the boundary of each lesion was drawn with a Fully Convolutional Network (FCN) segmentation method based on a point specified by the radiologist. The dataset consists of 4254 benign and 3154 malignant lesions. Of the 7408 images, 6579 were used for training and 829 for testing. The training images were augmented by varying the margin between the boundary of each lesion and the boundary of the image itself, and the images were processed with histogram equalization, cropping, and margin augmentation. The networks trained with and without augmentation all achieved an AUC over 0.95, with about 90% accuracy, 0.86 sensitivity, and 0.95 specificity. Although the proposed framework still requires a radiologist to point to the location of the target ROI, the results are promising. It supports the human radiologist and helps create a fluent diagnostic workflow that meets the fundamental purpose of CADx.
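
A small sketch of the preprocessing and margin augmentation mentioned in the abstract, i.e. histogram equalization followed by lesion crops with several margins around a lesion bounding box; the margin values are illustrative assumptions:

```python
# Sketch: histogram equalization + margin-varying crops around a lesion.
import cv2

def augment_lesion_crops(image_path, bbox, margins=(10, 20, 40, 60)):
    """bbox = (x, y, w, h) of the segmented lesion in pixel coordinates."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.equalizeHist(img)                  # histogram equalization
    x, y, w, h = bbox
    crops = []
    for m in margins:                            # vary margin between lesion and crop border
        x0, y0 = max(x - m, 0), max(y - m, 0)
        x1 = min(x + w + m, img.shape[1])
        y1 = min(y + h + m, img.shape[0])
        crops.append(img[y0:y1, x0:x1])
    return crops
```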

Gaze-Manipulated Data Augmentation for Gaze Estimation With Diffusion Autoencoders (디퓨전 오토인코더의 시선 조작 데이터 증강을 통한 시선 추적)

  • Moon, Kangryun;Kim, Younghan;Park, Yongjun;Kim, Yonggyu
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.3
    • /
    • pp.51-59
    • /
    • 2024
  • Collecting a dataset with corresponding labeled gaze vectors is costly in the gaze estimation field. In this paper, we propose a data augmentation that manipulates the gaze of an original image, improving the accuracy of a gaze estimation model when the number of available gaze labels is limited. By conducting multi-class gaze-bin classification as an auxiliary task and adjusting the latent variable of the diffusion model, our method semantically edits the gaze of the original image. We manipulate the non-binary attributes, the pitch and yaw of the gaze vector, into a desired range and use the edited images as augmented training data. The improved accuracy of the gaze estimation network under semi-supervised learning validates the effectiveness of our data augmentation, especially when the number of gaze labels is 50k or fewer.
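
One concrete piece of this pipeline, the auxiliary multi-class gaze-bin classification, amounts to discretizing continuous pitch/yaw angles into bins; the bin width and angle range below are illustrative assumptions:

```python
# Sketch: turn continuous gaze angles (degrees) into bin labels for an
# auxiliary classification task.
import numpy as np

def gaze_to_bins(pitch_deg, yaw_deg, bin_width=3.0, angle_range=(-42.0, 42.0)):
    lo, hi = angle_range
    n_bins = int(round((hi - lo) / bin_width))   # e.g. 28 bins per axis
    pitch = np.asarray(pitch_deg, dtype=float)
    yaw = np.asarray(yaw_deg, dtype=float)
    pitch_bin = np.clip(((pitch - lo) // bin_width).astype(int), 0, n_bins - 1)
    yaw_bin = np.clip(((yaw - lo) // bin_width).astype(int), 0, n_bins - 1)
    return pitch_bin, yaw_bin                    # targets for the auxiliary classifier
```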

Defect Classification of Cross-section of Additive Manufacturing Using Image-Labeling (이미지 라벨링을 이용한 적층제조 단면의 결함 분류)

  • Lee, Jeong-Seong;Choi, Byung-Joo;Lee, Moon-Gu;Kim, Jung-Sub;Lee, Sang-Won;Jeon, Yong-Ho
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.19 no.7
    • /
    • pp.7-15
    • /
    • 2020
  • Recently, the fourth industrial revolution has been presented as a new paradigm, and additive manufacturing (AM) has become one of its most important topics. For this reason, process monitoring of each cross-sectional layer in metal additive manufacturing is important. In particular, deep learning can train a machine to analyze, optimize, and repair defects. In this paper, image classification is performed by learning images of defects in metal cross sections with a convolutional neural network (CNN) image-labeling algorithm. Defects were classified into three categories: crack, porosity, and hole. To overcome the lack of data, the training data were expanded with a data augmentation algorithm that transforms each image into 180 images, improving learning accuracy. The numbers of training and validation images were 25,920 (80%) and 6,480 (20%), respectively. An optimized configuration of fully connected layers, optimizer, and loss function achieved a model accuracy of 99.7% and a success rate of 97.8% on 180 test images. In conclusion, image labeling was performed successfully, and the approach is expected to be applied to automated AM process inspection and repair systems in the future.
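
The abstract does not state which transforms expand one image into 180; as one plausible reading, the sketch below rotates each cross-section image in 2-degree steps, which yields exactly 180 variants (an assumption, not the paper's algorithm):

```python
# Sketch: expand a single cross-section image into 180 rotated variants.
from PIL import Image

def augment_180(image_path, out_prefix):
    img = Image.open(image_path)
    for k in range(180):
        img.rotate(k * 2, expand=False).save(f"{out_prefix}_{k:03d}.png")
```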

Implementation of a Deep Learning based Realtime Fire Alarm System using a Data Augmentation (데이터 증강 학습 이용한 딥러닝 기반 실시간 화재경보 시스템 구현)

  • Kim, Chi-young;Lee, Hyeon-Su;Lee, Kwang-yeob
    • Journal of IKEEE
    • /
    • v.26 no.3
    • /
    • pp.468-474
    • /
    • 2022
  • In this paper, we propose a method for implementing a real-time fire alarm system using deep learning. A fire-alarm image dataset of 1,500 images was collected from the Internet. If diverse images captured in everyday environments are used for training as they are, the learning accuracy tends to be low, so we propose a fire image data expansion method to improve it. The augmentation added 600 training images using brightness control, blurring, and flame photo synthesis, for a total of 2,100 training images. The data expanded with flame image synthesis contributed the most to the accuracy improvement. The real-time fire detection system applies deep learning to image data to detect fires and transmits notifications to users. An app was developed that analyzes images in real time with a model custom-trained from the YOLOv4-tiny model, which is suitable for edge AI systems, and informs users of the results. With the proposed data, an accuracy improvement of approximately 10% is obtained compared to conventional methods.
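
A minimal sketch of the three expansion operations listed in the abstract, brightness control, blurring, and flame photo synthesis; the flame cut-out with an alpha channel, its size, and its placement are assumptions:

```python
# Sketch: brightness control, blur, and flame compositing for fire images.
from PIL import Image, ImageEnhance, ImageFilter

def expand_fire_image(img_path, flame_png_path, out_prefix):
    img = Image.open(img_path).convert("RGB")
    ImageEnhance.Brightness(img).enhance(0.6).save(f"{out_prefix}_dark.jpg")
    ImageEnhance.Brightness(img).enhance(1.4).save(f"{out_prefix}_bright.jpg")
    img.filter(ImageFilter.GaussianBlur(radius=2)).save(f"{out_prefix}_blur.jpg")
    flame = Image.open(flame_png_path).convert("RGBA")
    flame = flame.resize((img.width // 4, img.height // 4))
    composed = img.copy()
    composed.paste(flame, (img.width // 2, img.height // 2), flame)  # alpha composite
    composed.save(f"{out_prefix}_flame.jpg")
```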

Data Augmentation Method for Deep Learning based Medical Image Segmentation Model (딥러닝 기반의 대퇴골 영역 분할을 위한 훈련 데이터 증강 연구)

  • Choi, Gyujin;Shin, Jooyeon;Kyung, Joohyun;Kyung, Minho;Lee, Yunjin
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.3
    • /
    • pp.123-131
    • /
    • 2019
  • In this study, we modify CT images of the femoral head in consideration of anatomically meaningful structure, proposing a method to augment the training data of a convolutional neural network for femur segmentation. First, a femur mesh model is obtained from the CT image. Next, the mesh model is divided into meaningful parts by cluster analysis of the geometric characteristics of the mesh surface. Finally, the parts are transformed with an appropriate mesh deformation algorithm, and new CT images are created by warping the original CT images accordingly. Deep learning models trained with the augmentation method of this study show better segmentation performance than commonly used augmentation methods such as geometric or color transformations.
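
As a sketch of just the clustering step (dividing the mesh model into meaningful parts), one could run k-means on simple per-vertex geometric features; the feature choice and number of clusters are assumptions, and the mesh deformation and CT warping steps are omitted:

```python
# Sketch: cluster femur mesh vertices into parts using geometric features.
import numpy as np
from sklearn.cluster import KMeans

def cluster_mesh(vertices, normals, n_parts=4):
    """vertices, normals: (N, 3) arrays describing the femur surface mesh."""
    feats = np.hstack([vertices, normals])       # position + normal per vertex
    labels = KMeans(n_clusters=n_parts, n_init=10, random_state=0).fit_predict(feats)
    return labels                                # part index for each vertex
```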