• Title/Summary/Keyword: Image data training


Effective Analysis of GAN-based Fake Data for the Deep Learning Model (딥러닝 훈련을 위한 GAN 기반 거짓 영상 분석효과에 대한 연구)

  • Seungmin, Jang;Seungwoo, Son;Bongsuck, Kim
    • KEPCO Journal on Electric Power and Energy / v.8 no.2 / pp.137-141 / 2022
  • To inspect power facility faults using artificial intelligence, the accuracy of the diagnostic model must be improved. Data augmentation using a generative adversarial network (GAN) is one of the best ways to improve deep learning performance. A GAN can create realistic-looking fake images using two competing networks, a discriminator and a generator. In this study, we verify the effectiveness of virtual data generation by including GAN-generated fake images of power facilities in the deep learning training set. GAN-based fake images were created for damaged LP insulators, and a ResNet-based normal/defect classification model was developed to verify the effect. Through this, we analyzed model accuracy according to the ratio of normal and defective training data.
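The ratio experiment described above can be sketched as a simple dataset-mixing step. The function and variable names below are illustrative assumptions, not the paper's actual pipeline:

```python
import random

def build_training_set(normal, defect_real, defect_fake, fake_ratio, seed=0):
    """Replace a fraction of the real defect images with GAN-generated
    ones, keeping the defect class size constant so that only the
    real/fake mix varies between experiments.

    Labels: 0 = normal, 1 = defect.
    """
    n_fake = int(len(defect_real) * fake_ratio)
    defects = defect_real[:len(defect_real) - n_fake] + defect_fake[:n_fake]
    dataset = [(img, 0) for img in normal] + [(img, 1) for img in defects]
    random.Random(seed).shuffle(dataset)  # fixed seed for reproducibility
    return dataset
```

Training the same ResNet classifier on sets built with `fake_ratio` values of, say, 0.0, 0.25, and 0.5 would then isolate the effect of the GAN-generated share on accuracy.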

Performance Analysis of Cloud-Net with Cross-sensor Training Dataset for Satellite Image-based Cloud Detection

  • Kim, Mi-Jeong;Ko, Yun-Ho
    • Korean Journal of Remote Sensing / v.38 no.1 / pp.103-110 / 2022
  • Since satellite images generally include clouds, it is essential to detect or mask clouds before satellite image processing. Previous research detected clouds using their physical characteristics. More recently, cloud detection methods using deep learning image segmentation techniques such as CNNs or modified U-Nets have been studied. Since image segmentation assigns a label to every pixel in an image, a precise pixel-level dataset is required for cloud detection, and obtaining an accurate training dataset is more important than the network configuration. Existing deep learning techniques used different training datasets, and their test data were drawn from the same dataset (intra-dataset), acquired by the same sensor and procedure as the training data. Such differing datasets make it difficult to determine which network performs better overall. To verify the effectiveness of a cloud detection network such as Cloud-Net, two networks were trained: one on the KOMPSAT-3 cloud dataset provided by the AIHUB site, and one on the L8-Cloud dataset from Landsat8 images released publicly by a Cloud-Net author. Test data from the KOMPSAT-3 intra-dataset were used to validate the networks. The simulation results show that the network trained on the KOMPSAT-3 cloud dataset outperforms the network trained on the L8-Cloud dataset. Because Landsat8 and KOMPSAT-3 images have different GSDs, it is difficult to achieve good results in cross-sensor validation: a network can be superior on its intra-dataset yet inferior on cross-sensor data. Techniques that perform well on cross-sensor validation datasets need to be studied in the future.

Eyeglass Remover Network based on a Synthetic Image Dataset

  • Kang, Shinjin;Hahn, Teasung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1486-1501 / 2021
  • The removal of accessories from the face is one of the essential pre-processing stages in the field of face recognition. However, despite its importance, a robust solution has not yet been provided. This paper proposes a network and a dataset construction methodology to effectively remove only the glasses from facial images. To obtain an image with the glasses removed from an image with glasses by supervised learning, a conversion network and a set of paired training data are required. To this end, we created a large number of synthetic images of worn glasses using facial attribute transformation networks. We adopted the conditional GAN (cGAN) framework for training. The trained network converts an in-the-wild face image with glasses into one without glasses, and operates stably even for faces of diverse races and ages wearing different styles of glasses.

Building-up and Feasibility Study of Image Dataset of Field Construction Equipments for AI Training (인공지능 학습용 토공 건설장비 영상 데이터셋 구축 및 타당성 검토)

  • Na, Jong Ho;Shin, Hyu Soun;Lee, Jae Kang;Yun, Il Dong
    • KSCE Journal of Civil and Environmental Engineering Research / v.43 no.1 / pp.99-107 / 2023
  • Recently, the rate of fatalities and safety accidents at construction sites has been the highest among all industries. To apply artificial intelligence technology to construction sites, it is essential to secure a dataset that can be used as basic training data. In this paper, a large number of images were collected at actual construction sites, for which the major construction equipment objects operated at civil engineering sites were defined. The training dataset was completed by annotating about 90,000 images. The reliability of the dataset was verified with an mAP of over 90% using YOLO, a representative model in the field of object detection. The construction equipment training dataset built in this study has been released and is currently available on the public data portal of the Ministry of Public Administration and Security. This dataset is expected to be freely used for applying object detection technology at construction sites, especially in the field of construction safety.

Automatic selection method of ROI(region of interest) using land cover spatial data (토지피복 공간정보를 활용한 자동 훈련지역 선택 기법)

  • Cho, Ki-Hwan;Jeong, Jong-Chul
    • Journal of Cadastre & Land InformatiX / v.48 no.2 / pp.171-183 / 2018
  • Despite the rapid expansion of the satellite image supply, the application of imagery is often restricted by unautomated image processing. This paper presents an automated process for selecting the training areas that are essential for supervised image classification. The training areas were selected based on prior land cover information. After the selection, the training data were used to classify land cover in an urban area with the latest imagery, and the classification accuracy was evaluated. The automatic selection of training areas proceeds in the following steps: 1) redraw the inner areas of prior land cover polygons with a negative buffer (-15 m); 2) select the polygons with a proper area (2,000 to 200,000 m²); 3) calculate the mean and standard deviation of the reflectance and NDVI of the polygons; 4) select the polygons whose mean values are characteristic of each land cover type, with minimum standard deviation. Supervised image classification was conducted using the automatically selected training data with Sentinel-2 images from 2017. The accuracy of land cover classification was 86.9% (kappa = 0.81). The result shows that the automatic selection process is effective in image processing and can contribute to solving the bottleneck in the application of imagery.
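Steps 2 to 4 above can be sketched as a filtering pass over candidate polygons. The dict-based polygon layout is a hypothetical stand-in for real GIS data; the negative buffering of step 1 and the characteristic-mean check of step 4 are assumed to be handled upstream by a GIS tool:

```python
from statistics import mean, stdev

def select_training_polygons(polygons, area_min=2_000, area_max=200_000):
    """Pick, per land cover class, the candidate polygon whose NDVI
    values have the smallest standard deviation (steps 2-4 of the
    selection process, simplified to NDVI only)."""
    selected = {}
    for p in polygons:
        if not (area_min <= p["area_m2"] <= area_max):   # step 2: area filter
            continue
        mu, sd = mean(p["ndvi"]), stdev(p["ndvi"])       # step 3: statistics
        best = selected.get(p["class"])
        if best is None or sd < best[0]:                 # step 4: minimum std dev
            selected[p["class"]] = (sd, mu, p)
    return {cls: p for cls, (sd, mu, p) in selected.items()}
```

A real implementation would compute the same statistics per spectral band and verify that each polygon's mean is characteristic of its class before keeping it.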

Crop Leaf Disease Identification Using Deep Transfer Learning

  • Changjian Zhou;Yutong Zhang;Wenzhong Zhao
    • Journal of Information Processing Systems / v.20 no.2 / pp.149-158 / 2024
  • Traditional manual identification of crop leaf diseases is labor-intensive. Owing to limitations in manpower and resources, it is difficult to explore crop diseases on a large scale. The emergence of artificial intelligence technologies, particularly the extensive application of deep learning, is expected to overcome these challenges and greatly improve the accuracy and efficiency of crop disease identification. Crop leaf disease identification models have been designed and trained on large-scale data, enabling them to predict different categories of diseases from unlabeled crop leaves. However, these models, which possess strong feature representation capabilities, require substantial training data, and such datasets are often in short supply in practical farming scenarios. To address this issue and improve the feature learning abilities of models, this study proposes a deep transfer learning adaptation strategy. The proposed method transfers the weights and parameters of models pre-trained on similar large-scale datasets such as ImageNet. The ImageNet pre-trained weights are adopted and fine-tuned on the features of crop leaf diseases to improve prediction ability. In this study, we collected 16,060 crop leaf disease images, spanning 12 categories, for training. The experimental results demonstrate that an accuracy of 98% is achieved using the proposed method on the transferred ResNet-50 model, confirming the effectiveness of our transfer learning approach.
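The transfer strategy (reuse pre-trained backbone weights, attach a fresh task head, fine-tune) can be illustrated framework-free, with plain dicts standing in for real framework state dicts; all names here are hypothetical:

```python
def transfer_weights(pretrained, target, freeze_prefix="backbone."):
    """Copy every parameter that also exists in the pre-trained model
    (e.g. an ImageNet ResNet-50 backbone) into the target model;
    parameters unique to the target (e.g. a new 12-class head) keep
    their fresh initialization. Backbone parameters are marked frozen
    so that only the head is fine-tuned at first."""
    state, trainable = {}, set()
    for name, value in target.items():
        state[name] = pretrained.get(name, value)  # reuse weight if available
        if not name.startswith(freeze_prefix):
            trainable.add(name)                    # head stays trainable
    return state, trainable
```

In a real framework the same idea is a few lines: load the ImageNet checkpoint, replace the final fully connected layer with a 12-way classifier, and pass only the head's parameters to the optimizer.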

A Study on Designing Metadata Standard for Building AI Training Dataset of Landmark Images (랜드마크 이미지 AI 학습용 데이터 구축을 위한 메타데이터 표준 설계 방안 연구)

  • Kim, Jinmook
    • Journal of the Korean Society for Library and Information Science / v.54 no.2 / pp.419-434 / 2020
  • The purpose of this study is to design and propose a metadata standard for building an AI training dataset of landmark images. To achieve this, we first comprehensively examined and analyzed the state of the art in image retrieval systems and their indexing methods. We then investigated open training datasets and machine learning tools for image object recognition. Next, we selected metadata elements optimized for an AI training dataset of landmark images and defined the input data for each element. We concluded the study with implications and suggestions for the development of application services using the results.

A Practical Digital Video Database based on Language and Image Analysis

  • Liang, Yiqing
    • Proceedings of the Korea Database Society Conference / 1997.10a / pp.24-48 / 1997
  • Supported by DARPA's Image Understanding (IU) program under the "Video Retrieval Based on Language and Image Analysis" project, and by DARPA's Computer Assisted Education and Training Initiative (CAETI). Objective: develop practical systems for automatic understanding and indexing of video sequences using both audio and video tracks. (omitted)


Image Super-Resolution for Improving Object Recognition Accuracy (객체 인식 정확도 개선을 위한 이미지 초해상도 기술)

  • Lee, Sung-Jin;Kim, Tae-Jun;Lee, Chung-Heon;Yoo, Seok Bong
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.6 / pp.774-784 / 2021
  • Object detection and recognition are very important tasks in the field of computer vision, and related research is being actively conducted. However, in actual object recognition, accuracy is often degraded by the resolution mismatch between the training and test image data. To solve this problem, this paper proposes an image super-resolution technique for improving object recognition accuracy, and designs and develops an integrated object recognition and super-resolution framework. In detail, 11,231 license plate training images were built through web crawling and artificial data generation, and the image super-resolution neural network was trained with an objective function defined to be robust to image flips. To verify the performance of the proposed algorithm, the trained super-resolution and recognition were evaluated on 1,999 test images, confirming that the proposed super-resolution technique improves the accuracy of character recognition.
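The flip-robust objective described above might look like the following sketch, in which the model is penalized both on the input and on its horizontal flip; the function names and the L1 error are assumptions, not the paper's exact loss:

```python
def hflip(img):
    """Horizontally flip an image given as a list of pixel rows."""
    return [row[::-1] for row in img]

def l1(a, b):
    """Sum of absolute pixel differences between two images."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def flip_robust_loss(sr_model, lr_img, hr_img):
    """Reconstruction error on the input plus the error on its flip:
    flip the input, super-resolve it, flip the result back, and compare
    against the same high-resolution target."""
    direct = l1(sr_model(lr_img), hr_img)
    flipped = l1(hflip(sr_model(hflip(lr_img))), hr_img)
    return direct + flipped
```

Minimizing both terms pushes the network to produce the same reconstruction whether or not the license plate appears mirrored.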

Face image classification by SVM

  • Park, Hye-Jeong;Sim, Ju-Yong;Kim, Mun-Tae;O, Gwang-Sik;Kim, Dae-Hak
    • Proceedings of the Korean Statistical Society Conference / 2003.10a / pp.155-159 / 2003
  • Recently, SVMs (support vector machines) have found many applications in the field of machine learning, with much ongoing research, particularly in classification and regression analysis. In this paper, we classify input image data using an SVM. When RGB color image data are input, the image itself is taken as the input pattern regardless of image size, and the results of SVM training (weight and bias estimates) are used to decide whether the input image shows a person. The validity of the proposed method was analyzed by applying it to 152 images.
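The decision step described above reduces to a linear SVM rule, sign(w·x + b), applied to the flattened image; the weights and bias below are placeholders for the trained estimates:

```python
def svm_predict(weights, bias, pixels):
    """Classify a flattened RGB image with a trained linear SVM:
    +1 (person) if the decision function w.x + b is non-negative,
    otherwise -1 (not a person)."""
    score = sum(w * x for w, x in zip(weights, pixels)) + bias
    return 1 if score >= 0 else -1
```

With a kernel SVM the score would instead be a weighted sum of kernel evaluations against the support vectors, but the sign rule is the same.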
