• Title/Summary/Keyword: Image Learning

A Study on Image Classification using Deep Learning-Based Transfer Learning (딥 러닝 기반의 전이 학습을 이용한 이미지 분류에 관한 연구)

  • Jung-Hee Seo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.3
    • /
    • pp.413-420
    • /
    • 2023
  • For a long time, researchers have presented excellent results in the field of image retrieval owing to extensive work on content-based image retrieval (CBIR). However, a semantic gap still remains between these retrieval results and human perception, and classifying images at the level of human perception using only a small number of images remains a difficult problem. Therefore, this paper proposes an image classification model using deep learning-based transfer learning to minimize the semantic gap between human perception and the retrieval system. In the experiments, the learning model reached a loss of 0.2451 and an accuracy of 0.8922, showing that the proposed image classification method achieves the desired goal. The results also confirm that a CNN transfer-learning approach is effective for building an image database as new data are added.
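Below is a minimal transfer-learning sketch in Keras of the kind of setup the abstract describes: an ImageNet-pretrained CNN backbone is frozen and only a small classification head is trained on the new image set. The MobileNetV2 backbone, two-class head, and optimizer are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal transfer-learning sketch, assuming an ImageNet-pretrained backbone
# and a hypothetical two-class image database; not the paper's exact model.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(2, activation="softmax"),  # class count is illustrative
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be tf.data.Dataset objects built from the image database:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```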

A Study on Improvement of Image Classification Accuracy Using Image-Text Pairs (이미지-텍스트 쌍을 활용한 이미지 분류 정확도 향상에 관한 연구)

  • Mi-Hui Kim;Ju-Hyeok Lee
    • Journal of IKEEE
    • /
    • v.27 no.4
    • /
    • pp.561-566
    • /
    • 2023
  • With the development of deep learning, a wide variety of problems such as image processing can now be solved. However, most image-processing methods rely only on the visual information of the image itself. Text data related to an image, such as descriptions and annotations, can provide additional contextual information that is difficult to obtain from the image alone. In this paper, we aim to improve image classification accuracy with a deep learning model that jointly analyzes images and texts using image-text pairs. The proposed model showed an approximately 11% improvement in classification accuracy over a deep learning model using only image information.
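A minimal late-fusion sketch of an image-text classifier along the lines the abstract describes, written with the Keras functional API: CNN image features and an averaged text embedding are concatenated before classification. The backbone, vocabulary size, sequence length, and layer sizes are illustrative assumptions, not the paper's architecture.

```python
# A late-fusion image+text classifier sketch; all sizes are illustrative.
import tensorflow as tf

img_in = tf.keras.Input(shape=(224, 224, 3), name="image")
x = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet")(img_in)
x = tf.keras.layers.GlobalAveragePooling2D()(x)          # image feature vector

txt_in = tf.keras.Input(shape=(64,), dtype="int32", name="text")  # token ids
t = tf.keras.layers.Embedding(input_dim=20000, output_dim=128)(txt_in)
t = tf.keras.layers.GlobalAveragePooling1D()(t)           # averaged text embedding

fused = tf.keras.layers.Concatenate()([x, t])
fused = tf.keras.layers.Dense(256, activation="relu")(fused)
out = tf.keras.layers.Dense(10, activation="softmax")(fused)  # 10 classes, illustrative

model = tf.keras.Model(inputs=[img_in, txt_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```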

Deep Learning Models for Fabric Image Defect Detection: Experiments with Transformer-based Image Segmentation Models (직물 이미지 결함 탐지를 위한 딥러닝 기술 연구: 트랜스포머 기반 이미지 세그멘테이션 모델 실험)

  • Lee, Hyun Sang;Ha, Sung Ho;Oh, Se Hwan
    • The Journal of Information Systems
    • /
    • v.32 no.4
    • /
    • pp.149-162
    • /
    • 2023
  • Purpose: In the textile industry, fabric defects significantly impact product quality and consumer satisfaction. This research seeks to enhance defect detection by developing a transformer-based deep learning image segmentation model that learns high-dimensional image features, overcoming the limitations of traditional image classification methods. Design/methodology/approach: This study uses the ZJU-Leaper dataset, which includes defects such as presses, stains, warps, and scratches across various fabric patterns, to develop a model for detecting defects in fabrics. The dataset was built from the defect labels and image files of ZJU-Leaper, and experiments were conducted with deep learning image segmentation models including DeepLabv3, SegFormer-B0, SegFormer-B1, and DINOv2. Findings: The SegFormer-B1 model achieved the highest performance, with an mIoU of 83.61% and a pixel F1 score of 81.84%, and showed the best sensitivity for detecting fabric defect areas among the compared models. Detailed analysis of its inferences showed accurate predictions of diverse defects, such as stains and fine scratches, within intricate fabric designs.
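The abstract reports results as mIoU and pixel-level F1. The short NumPy sketch below shows how those two segmentation metrics can be computed from predicted and ground-truth label masks; the binary defect/background labeling and the mask size are assumptions.

```python
# mIoU and pixel-level F1 for integer label masks; shapes/thresholds are assumed.
import numpy as np

def miou(pred, gt, num_classes=2):
    """Mean intersection-over-union across classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

def pixel_f1(pred, gt, positive=1):
    """Pixel-level F1 score treating `positive` as the defect class."""
    tp = np.logical_and(pred == positive, gt == positive).sum()
    fp = np.logical_and(pred == positive, gt != positive).sum()
    fn = np.logical_and(pred != positive, gt == positive).sum()
    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)
    return float(2 * precision * recall / (precision + recall + 1e-8))

# Random masks stand in for a model's output and ZJU-Leaper labels:
pred = np.random.randint(0, 2, (512, 512))
gt = np.random.randint(0, 2, (512, 512))
print(miou(pred, gt), pixel_f1(pred, gt))
```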

Research on Equal-resolution Image Hiding Encryption Based on Image Steganography and Computational Ghost Imaging

  • Leihong Zhang;Yiqiang Zhang;Runchu Xu;Yangjun Li;Dawei Zhang
    • Current Optics and Photonics
    • /
    • v.8 no.3
    • /
    • pp.270-281
    • /
    • 2024
  • Information-hiding technology is introduced into an optical ghost imaging encryption scheme, which can greatly improve the security of the encryption scheme. However, in current mainstream research on camouflage ghost imaging encryption, information-hiding techniques such as digital watermarking can hide only quarter-resolution information of a cover image, and most secret images are simple binary images. In this paper, we propose an equal-resolution image-hiding encryption scheme based on deep learning and computational ghost imaging. With the equal-resolution image steganography network based on deep learning (ERIS-Net), we can hide and extract equal-resolution natural images and increase the amount of encrypted information from 25% to 100% when transmitting the same amount of secret data. To the best of our knowledge, this paper combines deep-learning-based image steganography with an optical ghost imaging encryption method for the first time. Through deep learning experiments and simulation, the feasibility, security, robustness, and high encryption capacity of this scheme are verified, offering a new idea for optical ghost imaging encryption.
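A minimal NumPy sketch of the computational ghost imaging part: random illumination patterns produce single-pixel (bucket) measurements, and the image is reconstructed by correlating the bucket signal with the patterns. The toy object, pattern count, and resolution are assumptions, and the deep-learning steganography network (ERIS-Net) is not reproduced here.

```python
# Computational ghost imaging by correlation; object and sizes are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
obj = np.zeros((H, W))
obj[20:44, 20:44] = 1.0                            # toy object/secret image

n_patterns = 4000
patterns = rng.random((n_patterns, H, W))          # random illumination patterns
buckets = (patterns * obj).sum(axis=(1, 2))        # single-pixel (bucket) signals

# Correlation reconstruction: <B * I(x, y)> - <B> <I(x, y)>
recon = (buckets[:, None, None] * patterns).mean(axis=0) \
        - buckets.mean() * patterns.mean(axis=0)
recon = (recon - recon.min()) / (recon.max() - recon.min())  # normalize to [0, 1]
```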

Performance Improvement of a Deep Learning-based Object Recognition using Imitated Red-green Color Blindness of Camouflaged Soldier Images (적록색맹 모사 영상 데이터를 이용한 딥러닝 기반의 위장군인 객체 인식 성능 향상)

  • Choi, Keun Ha
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.23 no.2
    • /
    • pp.139-146
    • /
    • 2020
  • Because camouflage patterns are difficult to distinguish from the surrounding background, it is hard to separate the object from the background when color images are used as training data for deep learning. In this paper, we propose a red-green color-blindness image transformation method based on the principle that people with red-green color blindness distinguish green tones better than people with normal color vision. Experimental results show that recognition performance for camouflaged soldiers improved with the proposed ensemble deep learning model that uses both the imitated red-green-blind image data and the original color image data.
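A hedged sketch of one common way to imitate red-green (protan) color blindness on an RGB image, applying an approximate protanopia simulation matrix (Machado et al., 2009, severity 1.0). The paper's exact transformation is not specified here, so this is only an illustrative stand-in.

```python
# Approximate protanopia simulation in linear RGB; the matrix is an assumption
# taken from a commonly cited simulation model, not the paper's method.
import numpy as np

PROTAN = np.array([
    [ 0.152286,  1.052583, -0.204868],
    [ 0.114503,  0.786281,  0.099216],
    [-0.003882, -0.048116,  1.051998],
])

def simulate_protanopia(rgb):
    """rgb: float array in [0, 1] with shape (H, W, 3)."""
    out = rgb @ PROTAN.T
    return np.clip(out, 0.0, 1.0)

# A random image stands in for a camouflaged-soldier photo.
img = np.random.random((240, 320, 3))
cb_img = simulate_protanopia(img)
```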

Asymmetric Semi-Supervised Boosting Scheme for Interactive Image Retrieval

  • Wu, Jun;Lu, Ming-Yu
    • ETRI Journal
    • /
    • v.32 no.5
    • /
    • pp.766-773
    • /
    • 2010
  • Support vector machine (SVM) active learning plays a key role in the interactive content-based image retrieval (CBIR) community. However, regular SVM active learning is challenged by what we call "the small example problem" and "the asymmetric distribution problem." This paper attempts to integrate the merits of semi-supervised learning, ensemble learning, and active learning into interactive CBIR. Concretely, unlabeled images are exploited to facilitate boosting by helping augment the diversity among base SVM classifiers, and the learned ensemble model is then used to identify the most informative images for active learning. In particular, a bias-weighting mechanism is developed to guide the ensemble model to pay more attention to positive images than to negative images. Experiments on 5000 Corel images show that the proposed method improves mean average precision by 0.16 over regular SVM active learning and is more effective than some existing improved variants of SVM active learning.
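A minimal scikit-learn sketch of the active-learning query step that the abstract builds on: an SVM is fit on the labeled feedback images, and the unlabeled images closest to its decision boundary are selected for the next round of user feedback. The semi-supervised boosting and bias-weighting mechanisms of the paper are not reproduced, and the feature dimensions are illustrative.

```python
# SVM active learning query step: pick the most ambiguous unlabeled images.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_labeled = rng.random((40, 128))                  # CBIR features of labeled images
y_labeled = np.array([1] * 20 + [0] * 20)          # 1 = relevant, 0 = irrelevant
X_unlabeled = rng.random((500, 128))

svm = SVC(kernel="rbf", gamma="scale").fit(X_labeled, y_labeled)

# Images with the smallest |decision value| lie closest to the boundary and are
# the most informative ones to ask the user about next.
scores = np.abs(svm.decision_function(X_unlabeled))
query_idx = np.argsort(scores)[:10]
```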

Deep Learning in Dental Radiographic Imaging

  • Hyuntae Kim
    • Journal of the korean academy of Pediatric Dentistry
    • /
    • v.51 no.1
    • /
    • pp.1-10
    • /
    • 2024
  • Deep learning algorithms are becoming more prevalent in dental research as they are increasingly applied to everyday clinical tasks. However, dental researchers and clinicians often find it challenging to interpret deep learning studies. This review aimed to provide an overview of the general concept of deep learning and of current deep learning research in dental radiographic image analysis, and it also describes the process of implementing deep learning research. Deep-learning-based models perform well in classification, object detection, and segmentation tasks, making it possible to automatically diagnose oral lesions and identify anatomical structures, and they can enhance the decision-making process for researchers and clinicians. This review may be useful to dental researchers who are evaluating deep learning studies in the field of dentistry.

Unsupervised Learning with Natural Low-light Image Enhancement (자연스러운 저조도 영상 개선을 위한 비지도 학습)

  • Lee, Hunsang;Sohn, Kwanghoon;Min, Dongbo
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.2
    • /
    • pp.135-145
    • /
    • 2020
  • Recently, deep-learning-based methods for low-light image enhancement have achieved great success through supervised learning. However, they still suffer from a lack of sufficient training data because it is difficult to obtain a large number of low-/normal-light image pairs in real environments. In this paper, we propose an unsupervised learning approach for single low-light image enhancement using the bright channel prior (BCP), which imposes the constraint that the brightest pixel in a small patch is likely to be close to 1. With this prior, a pseudo ground truth is first generated to establish an unsupervised loss function, and the proposed enhancement network is then trained with this loss. To the best of our knowledge, this is the first attempt to perform low-light image enhancement through unsupervised learning. In addition, we introduce a self-attention map to preserve image details and naturalness in the enhanced result. We validate the proposed method on various public datasets, demonstrating that it achieves competitive performance compared to state-of-the-art methods.
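A NumPy/SciPy sketch of the bright channel prior (BCP) idea: the maximum over color channels within a local patch should be close to 1 in a well-lit image, so scaling the low-light input by the inverse of its bright channel gives a rough pseudo target. The patch size and the simple gain-based pseudo ground truth are illustrative assumptions, not the paper's full unsupervised loss formulation.

```python
# Bright channel prior pseudo ground truth; patch size and gain rule are assumed.
import numpy as np
from scipy.ndimage import maximum_filter

def bright_channel(img, patch=15):
    """img: float RGB array in [0, 1], shape (H, W, 3)."""
    per_pixel_max = img.max(axis=2)                    # max over color channels
    return maximum_filter(per_pixel_max, size=patch)   # max over a local patch

def bcp_pseudo_gt(low, patch=15, eps=1e-3):
    """Scale the low-light image so that its bright channel approaches 1."""
    bc = bright_channel(low, patch)
    gain = 1.0 / np.maximum(bc, eps)
    return np.clip(low * gain[..., None], 0.0, 1.0)

low_img = np.random.random((256, 256, 3)) * 0.3        # stand-in low-light image
pseudo = bcp_pseudo_gt(low_img)
```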

Development of a transfer learning based detection system for burr image of injection molded products (전이학습 기반 사출 성형품 burr 이미지 검출 시스템 개발)

  • Yang, Dong-Cheol;Kim, Jong-Sun
    • Design & Manufacturing
    • /
    • v.15 no.3
    • /
    • pp.1-6
    • /
    • 2021
  • An artificial neural network model based on a deep learning algorithm is known to be more accurate than humans in image classification, but it still requires a very large amount of training data. Therefore, various techniques are being studied to build highly accurate artificial neural network models even with small datasets, and transfer learning is regarded as an excellent alternative. The purpose of this study is to develop an artificial neural network system that can classify burr images of light guide plate products with 99% accuracy using a transfer learning technique. Specifically, 150 images each of normal products and products with burrs were taken at various angles, heights, and positions. After preprocessing steps such as thresholding and image augmentation, a total of 3,300 images was generated; 2,970 images were set aside for training and the remaining 330 for testing model accuracy. For transfer learning, a base model was built from the NASNet-Large model pre-trained on the 14 million images of ImageNet. The final accuracy test confirmed 99% classification accuracy for both training and test images. Based on these results, the approach is expected to help develop an integrated AI production management system by training on not only burrs but also various other defect images.
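A hedged Keras sketch of the transfer-learning setup described above: a frozen NASNet-Large backbone with ImageNet weights and a small binary head for normal-vs-burr classification. The 331x331 input is NASNet-Large's default size; the head layers and optimizer are assumptions rather than the study's exact configuration.

```python
# Frozen NASNet-Large backbone + binary head; head and optimizer are assumed.
import tensorflow as tf

base = tf.keras.applications.NASNetLarge(
    input_shape=(331, 331, 3), include_top=False, weights="imagenet")
base.trainable = False                               # keep ImageNet features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # burr vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# With the 2,970 training / 330 test split described above:
# model.fit(train_ds, validation_data=test_ds, epochs=...)
```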

A Study on Area Detection Using Transfer-Learning Technique (Transfer-Learning 기법을 이용한 영역검출 기법에 관한 연구)

  • Shin, Kwang-seong;Shin, Seong-yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.178-179
    • /
    • 2018
  • Recently, machine learning methods have been actively studied for artificial intelligence applications such as autonomous navigation and speech recognition. Classical image-processing methods such as boundary detection and pattern recognition have many limitations in recognizing a specific object or area in a digital image, whereas machine learning methods such as deep learning can obtain far better results. Fundamentally, however, such methods require a large amount of training data, so it is difficult to apply machine learning to area classification when the amount of data is very small, as with aerial photographs for environmental analysis. In this study, we apply a transfer-learning technique that can be used when the input dataset is small and the shape of the input image is not included in the categories of the training dataset.
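For very small datasets like the one described above, a common transfer-learning variant is to use a pretrained CNN purely as a fixed feature extractor and train only a lightweight classifier on the extracted features. The sketch below follows that pattern; the backbone choice, data shapes, and labels are illustrative assumptions, not the study's exact setup.

```python
# Pretrained CNN as a fixed feature extractor for a small area-classification set.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

extractor = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")

# A handful of aerial-photo stand-ins; real work would load actual images.
images = np.random.random((30, 224, 224, 3)).astype("float32")
labels = np.array([0] * 15 + [1] * 15)               # target area vs. background

features = extractor.predict(images, verbose=0)      # (30, 1280) feature vectors
clf = LogisticRegression(max_iter=1000).fit(features, labels)
```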
