• Title/Summary/Keyword: Image Training Dataset

Search results: 233 items (processing time: 0.031 s)

Performance Analysis of Cloud-Net with Cross-sensor Training Dataset for Satellite Image-based Cloud Detection

  • Kim, Mi-Jeong;Ko, Yun-Ho
    • Korean Journal of Remote Sensing / Vol. 38, No. 1 / pp.103-110 / 2022
  • Since satellite images generally include clouds, detecting or masking clouds is an essential step before further satellite image processing. Earlier research detected clouds using their physical characteristics; more recently, cloud detection methods based on deep learning techniques for image segmentation, such as CNNs or modified U-Net architectures, have been studied. Because image segmentation assigns a label to every pixel in an image, a precise pixel-based dataset is required for cloud detection, and obtaining an accurate training dataset matters more than the network configuration. Existing deep learning studies, however, used different training datasets, and their test data were drawn from the same intra-dataset, acquired with the same sensor and procedure as the training data. These differing datasets make it difficult to determine which network performs better overall. To verify the effectiveness of a cloud detection network such as Cloud-Net, two networks were trained: one with the cloud dataset from KOMPSAT-3 images provided by the AIHUB site, and one with the L8-Cloud dataset from Landsat8 images released publicly by a Cloud-Net author. Test data from the KOMPSAT-3 intra-dataset were used to validate both networks. The simulation results show that the network trained with the KOMPSAT-3 cloud dataset outperforms the network trained with the L8-Cloud dataset. Because Landsat8 and KOMPSAT-3 images have different ground sample distances (GSDs), it is difficult to achieve good results in cross-sensor validation: a network can be superior on its intra-dataset yet inferior on cross-sensor data. Techniques that perform well on cross-sensor validation datasets remain a topic for future study.
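
The pixel-wise evaluation that such a precise segmentation dataset enables can be sketched as follows; the tiny masks and the metric choices are illustrative, not the paper's actual protocol:

```python
import numpy as np

# Hypothetical 4x4 ground-truth cloud mask (1 = cloud) and a prediction.
gt   = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1]])
pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1]])

tp = np.sum((gt == 1) & (pred == 1))   # cloud pixels correctly detected
fp = np.sum((gt == 0) & (pred == 1))   # false alarms
fn = np.sum((gt == 1) & (pred == 0))   # missed cloud pixels

iou = tp / (tp + fp + fn)              # intersection over union for the cloud class
accuracy = np.mean(gt == pred)         # overall pixel accuracy
print(iou, accuracy)                   # -> 0.8333... 0.9375
```

Cross-sensor validation would simply apply the same per-pixel scoring to masks predicted on imagery from a different sensor.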

A Study on the Training Methodology of Combining Infrared Image Data for Improving Place Classification Accuracy of Military Robots

  • 최동규;도승원;이창은
    • The Journal of Korea Robotics Society / Vol. 18, No. 3 / pp.293-298 / 2023
  • The military is facing a continuous decrease in personnel, and to cope with potential accidents and operational challenges, efforts are being made to reduce the direct involvement of personnel by utilizing the latest technologies. Recently, the use of various sensors related to manned-unmanned teaming and artificial intelligence technologies has gained attention, emphasizing the need for flexible methods of utilization. In this paper, we propose four dataset construction methods for effectively training robots deployable in military operations, utilizing not only RGB image data but also data acquired from IR image sensors. Since no publicly available dataset combines RGB and IR image data, we acquired the dataset ourselves inside buildings. The input values were constructed by combining RGB and IR image sensor data, taking into account the field of view, resolution, and channel values of both sensors. We compared the proposed method with conventional classification training on RGB image data alone, using the same learning model. With the proposed image data fusion method, we observed more stable training loss and approximately 3% higher accuracy.
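
The channel-level fusion of RGB and IR data described above might look like the following sketch; the resolutions, the nearest-neighbour upsampling, and the 4-channel stacking are assumptions for illustration, not the paper's exact construction:

```python
import numpy as np

# Hypothetical sensor frames: a 64x64 RGB image and a lower-resolution 32x32 IR image.
rgb = np.random.rand(64, 64, 3).astype(np.float32)
ir  = np.random.rand(32, 32).astype(np.float32)

# Upsample IR to the RGB resolution by simple nearest-neighbour repetition
# (a stand-in for proper registration that accounts for each sensor's field of view).
ir_up = ir.repeat(2, axis=0).repeat(2, axis=1)

# Stack into a single 4-channel input tensor (R, G, B, IR).
fused = np.dstack([rgb, ir_up[..., None]])
print(fused.shape)  # -> (64, 64, 4)
```

A classifier then simply takes 4-channel inputs instead of 3-channel ones.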

인공지능 학습용 토공 건설장비 영상 데이터셋 구축 및 타당성 검토 (Building-up and Feasibility Study of Image Dataset of Field Construction Equipments for AI Training)

  • 나종호;신휴성;이재강;윤일동
    • Journal of the Korean Society of Civil Engineers / Vol. 43, No. 1 / pp.99-107 / 2023
  • Safety accidents at construction sites currently account for the highest proportion across all industries. To apply artificial intelligence technology to construction sites, it is essential to secure datasets that can serve as basic training material. In this study, source data were collected from actual construction sites, major construction equipment objects commonly operated on civil engineering sites were selected, and an optimal training dataset was built by processing approximately 90,000 still images. The constructed data were then validated using YOLO, a representative object detection model, and a detection performance approaching 90% confirmed the reliability of the data. The training dataset used in this study has been released for use on the Public Data Portal. It is expected to serve as foundational data for applying object recognition technology to construction safety in the future.
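
Validation of a detection dataset with a model such as YOLO ultimately rests on box-overlap scoring; a minimal IoU computation, with hypothetical boxes and the commonly used 0.5 threshold, could look like this:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

gt_box   = (10, 10, 50, 50)   # hypothetical annotated excavator box
pred_box = (12, 12, 52, 48)   # hypothetical detector prediction
iou = box_iou(gt_box, pred_box)
detected = iou >= 0.5          # common detection threshold
print(round(iou, 3), detected)
```

Aggregating such per-box decisions over the whole test split yields the detection rate reported for the dataset.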

A Study on Designing Metadata Standard for Building AI Training Dataset of Landmark Images

  • 김진묵
    • Journal of the Korean Society for Library and Information Science / Vol. 54, No. 2 / pp.419-434 / 2020
  • The purpose of this study is to propose a metadata standard design for building AI training datasets of landmark images. To this end, the current state of image retrieval systems and their respective indexing methods was comprehensively surveyed and analyzed, and both the open training datasets essential for machine-learning-based landmark recognition and the machine learning tools for image object recognition were investigated. On this basis, metadata elements optimized for landmark-image AI training data were selected and the input data for each element was defined. The conclusion discusses directions for developing application services, including a recommender system that exploits landmark recognition.
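
As a purely hypothetical illustration of what a landmark-image metadata record might contain (none of these element names come from the paper's proposed standard; they are invented for the sketch):

```python
# All element names and values below are illustrative assumptions.
landmark_record = {
    "identifier": "IMG-0001",
    "title": "Gyeongbokgung Palace, main gate",
    "landmark_name": "Gyeongbokgung",
    "location": {"latitude": 37.5796, "longitude": 126.9770},
    "capture_date": "2020-05-01",
    "license": "CC BY 4.0",
    "annotation": {"label": "palace", "bounding_box": [120, 80, 560, 430]},
}

# A standard would fix which elements are mandatory for training use.
required = {"identifier", "landmark_name", "annotation"}
print(required.issubset(landmark_record))  # -> True
```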

Synthetic Image Dataset Generation for Defense using Generative Adversarial Networks

  • 양훈민
    • Journal of the Korea Institute of Military Science and Technology / Vol. 22, No. 1 / pp.49-59 / 2019
  • Generative adversarial networks (GANs) have received great attention in the machine learning field for their capacity to model high-dimensional, complex data distributions implicitly and to generate new samples from the model distribution. This paper investigates the training methodology, architecture, and various applications of generative adversarial networks. An experimental evaluation is also conducted on generating synthetic image datasets for defense using two types of GAN: a deep convolutional generative adversarial network (DCGAN) for military image generation, and a cycle-consistent generative adversarial network (CycleGAN) for visible-to-infrared image translation. Each model can yield a great diversity of high-fidelity synthetic images compared with its training images. This result opens up the possibility of using inexpensive synthetic images to train neural networks while avoiding the enormous expense of collecting large amounts of hand-annotated real data.
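
The adversarial objective underlying both DCGAN and CycleGAN can be sketched numerically; the discriminator outputs below are made-up values, and the losses are plain binary cross-entropy:

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy between predicted probabilities p and labels y."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

# Hypothetical discriminator outputs on a batch of real and generated images.
d_real = np.array([0.9, 0.8, 0.95])   # D's probability that real images are real
d_fake = np.array([0.1, 0.2, 0.05])   # D's probability that fakes are real

# The discriminator wants d_real -> 1 and d_fake -> 0.
d_loss = bce(d_real, np.ones(3)) + bce(d_fake, np.zeros(3))
# The generator wants the discriminator to call its fakes real (d_fake -> 1).
g_loss = bce(d_fake, np.ones(3))
print(d_loss, g_loss)
```

Training alternates gradient steps on these two losses; CycleGAN adds a cycle-consistency term on top of the adversarial one.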

Remote Sensing Image Classification for Land Cover Mapping in Developing Countries: A Novel Deep Learning Approach

  • Lynda, Nzurumike Obianuju;Nnanna, Nwojo Agwu;Boukar, Moussa Mahamat
    • International Journal of Computer Science & Network Security / Vol. 22, No. 2 / pp.214-222 / 2022
  • Convolutional neural networks (CNNs) are a category of deep learning networks that have proven very effective in computer vision tasks such as image classification. Nevertheless, they have seen little use for remote sensing image classification in developing countries, mainly because of the scarcity of training data. Recently, transfer learning has been used successfully to build state-of-the-art models for remote sensing (RS) image classification using training and testing data from well-known RS data repositories; however, the ability of such models to classify RS test data from a different dataset has not been sufficiently investigated. In this paper, we propose a deep CNN model that can classify RS test data drawn from a dataset different from the training dataset. To achieve this, we first re-trained a ResNet-50 model on EuroSAT, a large-scale RS dataset, to develop a base model, then integrated augmentation and ensemble learning to improve its generalization ability. We further tested the ability of this model to classify a novel dataset (Nig_Images). The final classification results show that our model achieves 96% and 80% accuracy on the EuroSAT and Nig_Images test data, respectively. Adequate knowledge and use of this framework should encourage research on, and the use of, deep CNNs for land cover mapping where training data are scarce, as is common in developing countries.
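
The ensemble step can be sketched as simple averaging of per-model softmax probabilities; the logits and the four-class setup below are invented for illustration, not taken from the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical logits from three fine-tuned models for one image, 4 land-cover classes.
logits = np.array([
    [2.0, 0.5, 0.1, -1.0],   # model 1: confident in class 0
    [0.2, 1.8, 0.0, -0.5],   # model 2: prefers class 1
    [2.5, 0.3, 0.2, -0.8],   # model 3: confident in class 0
])

probs = softmax(logits)            # per-model class probabilities
ensemble = probs.mean(axis=0)      # simple probability averaging
prediction = int(np.argmax(ensemble))
print(prediction)                  # -> 0
```

Averaging probabilities rather than hard votes lets a confident majority outweigh a single dissenting model.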

No-Reference Image Quality Assessment based on Quality Awareness Feature and Multi-task Training

  • Lai, Lijing;Chu, Jun;Leng, Lu
    • Journal of Multimedia Information System / Vol. 9, No. 2 / pp.75-86 / 2022
  • Existing image quality assessment (IQA) datasets contain small numbers of samples, and some methods based on transfer learning or data augmentation cannot make good use of image-quality-related features. A no-reference (NR) IQA method based on multi-task training and quality awareness is therefore proposed. First, single or multiple distortion types and levels are imposed on the original images, and different strategies are used to augment the datasets for different distortion types. Following the idea of weak supervision, full-reference (FR) IQA methods are used to obtain pseudo-score labels for the generated images. Then, the classification information of distortion type and level is combined with the image quality score. A ResNet50 network is trained on the augmented dataset in the pre-training stage to obtain quality-aware pre-training weights. Finally, fine-tuning is performed on the target IQA dataset using the quality-aware weights to predict the final quality score. Experiments on synthetic- and authentic-distortion datasets (LIVE, CSIQ, TID2013, LIVEC, KonIQ-10K) show that the proposed method exploits image-quality-related features better than single-task training, and that the extracted quality-aware features improve the accuracy of the model.
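
The weak-supervision step, scoring augmented images with an FR-IQA method to obtain pseudo-labels, can be sketched with PSNR as a stand-in FR metric; the image size and noise levels are arbitrary choices for the sketch:

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio, a simple full-reference quality score."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)

# Two distortion levels of the same image: mild and strong additive noise.
mild   = np.clip(ref + rng.normal(0, 5,  ref.shape), 0, 255)
strong = np.clip(ref + rng.normal(0, 25, ref.shape), 0, 255)

# FR scores serve as pseudo-labels for the augmented training images.
label_mild, label_strong = psnr(ref, mild), psnr(ref, strong)
print(label_mild > label_strong)  # the milder distortion gets the higher pseudo-score
```

A network pre-trained to regress such pseudo-scores (alongside distortion type/level classification) acquires the quality-aware weights used in fine-tuning.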

A Study on Training Dataset Configuration for Deep Learning Based Image Matching of Multi-sensor VHR Satellite Images

  • 강원빈;정민영;김용일
    • Korean Journal of Remote Sensing / Vol. 38, No. 6-1 / pp.1505-1514 / 2022
  • Image registration is an essential preliminary step for the effective use of multi-temporal and multi-sensor very-high-resolution (VHR) satellite images. Although deep learning techniques can extract complex, fine-grained features from satellite images and thus support fast and accurate similarity judgments between images, their application to VHR image registration has been limited, both because the quantity and quality of training data constrain deep learning models and because building training data from VHR satellite imagery is difficult. To examine the applicability of deep learning to match-pair extraction, the most time-consuming stage of image registration, this study constructs a training dataset for deep-learning image matching from a biased VHR satellite image database and analyzes how the composition of the training data affects match-pair extraction accuracy. The training data were built as sets of match and non-match pairs, labeled true or false, extracted from 12 multi-temporal, multi-sensor VHR satellite images with a grid-based Scale Invariant Feature Transform (SIFT) algorithm. The Siamese convolutional neural network (SCNN) model proposed for match-pair extraction passes each image of a pair through one of two identical convolutional networks and judges similarity by comparing the extracted features. The study confirms that data taken from a VHR satellite image database can be used as deep learning training data and that appropriately combining images from heterogeneous sensors can improve the efficiency of the image-matching process. A deep-learning-based image-matching technique using multi-sensor VHR satellite images is expected to replace existing manual feature-extraction methods on the strength of its stable performance and, further, to develop into an integrated deep-learning-based image-registration framework.
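
The Siamese idea, tied weights embedding both images of a pair so that true matches lie close in feature space, can be sketched with a single shared random projection standing in for the convolutional network (all sizes and the perturbation scale are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

# One shared weight matrix embeds both patches: the "Siamese" property is
# simply that the two branches use identical (tied) weights.
W = rng.normal(size=(16, 64))

def embed(patch):
    v = W @ patch.ravel()
    return v / np.linalg.norm(v)

a = rng.normal(size=(8, 8))               # reference patch
b = a + 0.01 * rng.normal(size=(8, 8))    # nearly identical patch (true match)
c = rng.normal(size=(8, 8))               # unrelated patch (false match)

d_match = np.linalg.norm(embed(a) - embed(b))
d_nonmatch = np.linalg.norm(embed(a) - embed(c))
print(d_match < d_nonmatch)  # matching pairs lie closer in embedding space
```

Training replaces the fixed projection with learned convolutions and pushes match distances down while pushing non-match distances up.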

An Improved Deep Learning Method for Animal Images

  • 왕광싱;신성윤;신광성;이현창
    • Proceedings of the 59th Winter Conference of the Korean Society of Computer Information, 2019 / Vol. 27, No. 1 / pp.123-124 / 2019
  • This paper proposes an improved deep learning method for animal image classification on small datasets. First, we build a training model for the small dataset with a CNN and expand the training samples with data augmentation. Second, using a network pre-trained on a large-scale dataset, such as VGG16, we extract the bottleneck features of the small dataset and store them in two NumPy files as new training and test datasets. Finally, we train a fully connected network on the new datasets. We use the well-known Kaggle Dogs vs. Cats dataset, a binary classification dataset, as the experimental data.
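
The bottleneck-feature workflow, extract once, save to NumPy files, then train a small head on the saved arrays, can be sketched as follows; the toy extractor below merely stands in for VGG16, and all shapes are illustrative:

```python
import numpy as np
import os, tempfile

# Stand-in for VGG16 bottleneck extraction: any fixed feature extractor works
# for illustrating the save/reload workflow described above.
def fake_bottleneck(images):
    # Global average over the spatial dimensions, as a placeholder for conv features.
    return images.mean(axis=(1, 2))

train_images = np.random.rand(10, 150, 150, 3).astype(np.float32)
features = fake_bottleneck(train_images)          # shape (10, 3)

path = os.path.join(tempfile.mkdtemp(), "bottleneck_train.npy")
np.save(path, features)                           # persist once ...
reloaded = np.load(path)                          # ... then train the small FC head on these
print(reloaded.shape)                             # -> (10, 3)
```

Caching features this way means the expensive convolutional pass runs only once, and only the small fully connected network is trained repeatedly.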


Eyeglass Remover Network based on a Synthetic Image Dataset

  • Kang, Shinjin;Hahn, Teasung
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 4 / pp.1486-1501 / 2021
  • The removal of accessories from the face is one of the essential pre-processing stages in the field of face recognition; despite its importance, however, a robust solution has not yet been provided. This paper proposes a network and a dataset-construction methodology to remove only the glasses from facial images effectively. To obtain an image with the glasses removed from an image with glasses by supervised learning, a conversion network and a set of paired training data are required. To this end, we created a large number of synthetic images of faces wearing glasses using facial attribute transformation networks, and adopted the conditional GAN (cGAN) framework for training. The trained network converts an in-the-wild face image with glasses into an image without glasses, and it operates stably even on faces of diverse races and ages wearing different styles of glasses.
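
The paired-data setup for such supervised image-to-image training can be sketched as follows; the image sizes are arbitrary, and the input-conditioned discriminator pairing is an assumption in the style of pix2pix, not a detail taken from the paper:

```python
import numpy as np

# Paired training sample for the glasses-removal task: the cGAN generator sees the
# image *with* glasses; the image *without* glasses is the supervised target.
with_glasses    = np.random.rand(64, 64, 3).astype(np.float32)   # synthetic input
without_glasses = np.random.rand(64, 64, 3).astype(np.float32)   # ground-truth target

# In a pix2pix-style cGAN the discriminator is also conditioned on the input,
# so real/fake examples are channel-wise concatenations of (input, output).
real_pair = np.concatenate([with_glasses, without_glasses], axis=-1)
print(real_pair.shape)  # -> (64, 64, 6)
```

Synthesizing the glasses-wearing inputs, as the paper does, is what makes such exact pixel-aligned pairs available at scale.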