• Title/Summary/Keyword: learning through the image

Search Results: 925

Research on Artificial Intelligence Based De-identification Technique of Personal Information Area at Video Data (영상데이터의 개인정보 영역에 대한 인공지능 기반 비식별화 기법 연구)

  • In-Jun Song; Cha-Jong Kim
    • IEMEK Journal of Embedded Systems and Applications, v.19 no.1, pp.19-25, 2024
  • This paper proposes an artificial intelligence-based method for optimizing the detection of personal-information regions in an embedded system, in order to de-identify personal information in video data. To raise the detection rate for personal-information regions, a gyro sensor collects the shooting angle while the video is captured, and the image data is rectified to a horizontal orientation using that angle. On this basis, learning models were built for different training-image resolutions and different training methods of the learning engine, and the optimal model was selected and evaluated experimentally. For de-identification, a shuffling-based masking method was used, and the masking information was protected with double-key-based encryption to prevent restoration by others; the original image can be restored with the security key when it needs to be reused. This secures the personal-information regions while preserving usability through restoration of the original image. The results are expected to contribute to the industrial use of video data without leaking personal information and to reducing the cost of personal-information protection in industrial fields that use video, by de-identifying the personal-information regions contained in video data.
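
The abstract does not detail the shuffling or encryption scheme, so the following is only a minimal NumPy sketch of the key-based shuffling idea: blocks inside a detected personal-information region are permuted with a permutation derived from a secret key, and the inverse permutation restores the original. The function names, the block size, and the single-key derivation are assumptions; the paper itself uses double-key-based encryption of the masking information.

```python
import hashlib
import numpy as np

def _permutation(num_blocks: int, key: str) -> np.ndarray:
    # Derive a deterministic block permutation from a secret key (illustrative only).
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    return np.random.default_rng(seed).permutation(num_blocks)

def _to_blocks(region: np.ndarray, block: int) -> np.ndarray:
    # Split an (h, w, c) region into a flat array of (block, block, c) tiles.
    h, w, c = region.shape
    tiles = region.reshape(h // block, block, w // block, block, c)
    return tiles.transpose(0, 2, 1, 3, 4).reshape(-1, block, block, c)

def _from_blocks(tiles: np.ndarray, h: int, w: int, block: int) -> np.ndarray:
    # Inverse of _to_blocks: reassemble tiles into an (h, w, c) region.
    c = tiles.shape[-1]
    tiles = tiles.reshape(h // block, w // block, block, block, c)
    return tiles.transpose(0, 2, 1, 3, 4).reshape(h, w, c)

def shuffle_mask(image: np.ndarray, box: tuple, key: str, block: int = 8) -> np.ndarray:
    """Mask a personal-information region (x, y, w, h) of an HxWxC image by block shuffling."""
    x, y, w, h = box
    w, h = (w // block) * block, (h // block) * block          # crop to whole blocks
    tiles = _to_blocks(image[y:y + h, x:x + w], block)
    out = image.copy()
    out[y:y + h, x:x + w] = _from_blocks(tiles[_permutation(len(tiles), key)], h, w, block)
    return out

def unshuffle_mask(image: np.ndarray, box: tuple, key: str, block: int = 8) -> np.ndarray:
    """Restore the original region; only possible with the same key."""
    x, y, w, h = box
    w, h = (w // block) * block, (h // block) * block
    tiles = _to_blocks(image[y:y + h, x:x + w], block)
    restored = np.empty_like(tiles)
    restored[_permutation(len(tiles), key)] = tiles            # apply the inverse permutation
    out = image.copy()
    out[y:y + h, x:x + w] = _from_blocks(restored, h, w, block)
    return out

# masked   = shuffle_mask(frame, box=(120, 80, 64, 96), key="secret-key")
# restored = unshuffle_mask(masked, box=(120, 80, 64, 96), key="secret-key")
```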

A Comparative Analysis of Deep Learning Frameworks for Image Learning (이미지 학습을 위한 딥러닝 프레임워크 비교분석)

  • Jong-min Kim; Dong-Hwi Lee
    • Convergence Security Journal, v.22 no.4, pp.129-133, 2022
  • Deep learning frameworks are still evolving, and many are available; typical examples include TensorFlow, PyTorch, and Keras. Such frameworks apply optimized models to image classification through image learning. In this paper, we use TensorFlow and PyTorch, the frameworks most widely used in deep learning image recognition, to train image classifiers, and we compare and analyze the results to determine which framework is better optimized for this task.
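
As a concrete illustration of the kind of paired setup such a comparison requires, here is a minimal sketch defining the same small CNN in both TensorFlow/Keras and PyTorch. The architecture and the 32×32 RGB input size are assumptions; in practice each model would be trained separately on the same dataset while accuracy and training time are recorded.

```python
import tensorflow as tf
import torch
import torch.nn as nn

# TensorFlow / Keras version of a small image classifier
keras_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])
keras_model.compile(optimizer="adam",
                    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                    metrics=["accuracy"])

# PyTorch version of the same architecture
class TorchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3), nn.ReLU(),        # 32x32 -> 30x30
            nn.MaxPool2d(2),                       # 30x30 -> 15x15
            nn.Flatten(),
            nn.Linear(32 * 15 * 15, 10),
        )

    def forward(self, x):
        return self.net(x)

torch_model = TorchCNN()
```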

MRI Image Super Resolution through Filter Learning Based on Surrounding Gradient Information in 3D Space (3D 공간상에서의 주변 기울기 정보를 기반에 둔 필터 학습을 통한 MRI 영상 초해상화)

  • Park, Seongsu; Kim, Yunsoo; Gahm, Jin Kyu
    • Journal of Korea Multimedia Society, v.24 no.2, pp.178-185, 2021
  • Three-dimensional high-resolution magnetic resonance imaging (MRI) provides fine-grained anatomical information for disease diagnosis. However, obtaining high resolution over wide spatial coverage is limited by the long scan time. Therefore, to obtain a clear high-resolution (HR) image over wide spatial coverage, a super-resolution technique that converts a low-resolution (LR) MRI image into a high-resolution one is required. In this paper, we propose a super-resolution technique based on filter learning that uses surrounding gradient information in 3D space from 3D MRI images. In the learning step, the gradient features of each voxel are computed through eigen-decomposition of its 3D patch. Based on these features, we learn filters that minimize the intensity difference between LR and HR image pairs with similar features. In the test step, the gradient feature of each voxel's patch is computed, and the learned filter corresponding to the closest feature is applied. Trained on 100 publicly available T1 brain MRI images from the HCP, the method improved performance by up to about 11% compared to traditional interpolation.
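
The exact feature definition is not given in the abstract, but gradient features obtained by eigen-decomposition are commonly computed from the 3D structure tensor of a patch. The NumPy sketch below, with an assumed 7×7×7 patch size, illustrates that step; the resulting eigenvalues could then index a bank of learned upscaling filters.

```python
import numpy as np

def gradient_features(patch: np.ndarray) -> np.ndarray:
    """Eigen-decomposition of the 3D structure tensor of a patch (illustrative feature).

    Returns the eigenvalues sorted in descending order; they summarize local gradient
    strength and anisotropy, and could be used to select a learned upscaling filter.
    """
    gz, gy, gx = np.gradient(patch.astype(np.float64))          # gradients along each axis
    g = np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=1)  # (N, 3) gradient vectors
    structure_tensor = g.T @ g / len(g)                          # 3x3 averaged outer product
    eigvals = np.linalg.eigvalsh(structure_tensor)               # ascending order
    return eigvals[::-1]                                         # strongest direction first

# Example: features of a random 7x7x7 low-resolution patch
patch = np.random.rand(7, 7, 7)
print(gradient_features(patch))
```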

Image generation and classification using GAN-based Semi Supervised Learning (GAN기반의 Semi Supervised Learning을 활용한 이미지 생성 및 분류)

  • Doyoon Jung; Gwangmi Choi; NamHo Kim
    • Smart Media Journal, v.13 no.3, pp.27-35, 2024
  • This study combines image generation using semi-supervised learning based on a GAN (Generative Adversarial Network) with image classification using ResNet50. Through this combination, we propose a new approach that obtains more accurate and diverse results by integrating image generation and classification. The generator and discriminator are trained to distinguish generated images from real images, and image classification is performed with ResNet50. The experimental results confirm that the quality of the generated images changes with the training epoch, and we aim to use this to improve the accuracy of industrial-accident prediction. We also present an efficient way to improve the quality of image generation and increase the accuracy of image classification by combining the GAN with ResNet50.
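
As a rough illustration of the two components being combined, the sketch below builds a ResNet50 classifier from torchvision and a toy DCGAN-style generator in PyTorch. The number of classes, the 64×64 output size, and the layer widths are assumptions; the paper's semi-supervised training losses are not shown.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Classifier branch: ResNet50 adapted to the task's number of classes (value assumed).
num_classes = 2
classifier = resnet50(weights=None)
classifier.fc = nn.Linear(classifier.fc.in_features, num_classes)

# Toy generator: maps a latent vector to a 64x64 RGB image (DCGAN-style, simplified).
class Generator(nn.Module):
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

z = torch.randn(4, 100)
fake_images = Generator()(z)                                   # (4, 3, 64, 64) synthetic images
logits = classifier(torch.nn.functional.interpolate(fake_images, size=224))
```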

Implementation of YOLOv5-based Forest Fire Smoke Monitoring Model with Increased Recognition of Unstructured Objects by Increasing Self-learning data

  • Gun-wo, Do; Minyoung, Kim; Si-woong, Jang
    • International Journal of Advanced Culture Technology, v.10 no.4, pp.536-546, 2022
  • Society suffers heavy losses when a forest fire breaks out. If a forest fire can be detected early, damage from its spread can be prevented, so we studied how to detect forest fires using already-installed CCTV. In this paper, we present a deep learning model, based on YOLOv5, built through efficient construction of image data for monitoring forest fire smoke, which is an unstructured object. We investigated how to accurately detect forest fire smoke, an amorphous object that appears in many forms, with YOLOv5, and we introduce a self-learning method that generates additional training data on its own to increase accuracy for unstructured-object recognition when data are insufficient. The method constructs a dataset with fixed label positions for object-containing images that can be extracted from the original images, using the original images and a model trained on them. Training the deep learning model on this expanded dataset improved performance (mAP) and reduced errors from detecting objects other than the target, compared to a model trained only on the original images.
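
A minimal sketch of the pseudo-labelling idea behind such self-learning is shown below: a YOLOv5 model trained on the original images labels unlabelled frames, and confident detections are written out in YOLO format to grow the training set. The paths, the confidence threshold, and the use of torch.hub to load the weights are assumptions.

```python
from pathlib import Path
import torch

# Load a custom-trained YOLOv5 model (hypothetical weights path).
model = torch.hub.load("ultralytics/yolov5", "custom", path="runs/train/exp/weights/best.pt")
model.conf = 0.6                                    # keep only confident smoke detections

image_dir = Path("unlabelled_frames")
label_dir = Path("pseudo_labels")
label_dir.mkdir(exist_ok=True)

for image_path in image_dir.glob("*.jpg"):
    results = model(str(image_path))
    detections = results.xywhn[0]                   # normalized (xc, yc, w, h, conf, cls) rows
    lines = [f"{int(cls)} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
             for xc, yc, w, h, conf, cls in detections.tolist()]
    if lines:                                       # only keep frames with detections
        (label_dir / f"{image_path.stem}.txt").write_text("\n".join(lines))
```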

Analysis of JPEG Image Compression Effect on Convolutional Neural Network-Based Cat and Dog Classification

  • Yueming Qu; Qiong Jia; Euee S. Jang
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2022.11a, pp.112-115, 2022
  • Deep learning usually has to process massive amounts of data, which has greatly limited the development of deep learning technologies today. Convolutional Neural Network (CNN) structures are often used to solve image classification problems, but training a CNN may require a large number of images, which is a heavy burden for existing computer systems. If the image data can be compressed while the computer hardware remains unchanged, more data can be used for training. However, image compression is usually lossy and discards part of the image information; if the lost information is key information, learning performance may suffer. In this paper, we analyze the effect of image compression on deep learning performance for CNN-based cat and dog classification. The experimental results lead us to conclude that compressing the images does not have a significant impact on deep learning accuracy.
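
A hedged sketch of the preprocessing being studied: round-tripping an image through in-memory JPEG encoding at several quality levels with Pillow and measuring the resulting pixel error. The input file name and the quality levels are placeholders.

```python
from io import BytesIO
from PIL import Image
import numpy as np

def jpeg_compress(image: Image.Image, quality: int) -> Image.Image:
    """Round-trip an image through in-memory JPEG encoding at the given quality."""
    buffer = BytesIO()
    image.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")

original = Image.open("cat.jpg").convert("RGB")      # hypothetical input image
for quality in (90, 50, 10):
    compressed = jpeg_compress(original, quality)
    diff = np.abs(np.asarray(original, dtype=np.int16) -
                  np.asarray(compressed, dtype=np.int16)).mean()
    print(f"quality={quality:3d}  mean absolute pixel error={diff:.2f}")
```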

Unsupervised Learning with Natural Low-light Image Enhancement (자연스러운 저조도 영상 개선을 위한 비지도 학습)

  • Lee, Hunsang; Sohn, Kwanghoon; Min, Dongbo
    • Journal of Korea Multimedia Society, v.23 no.2, pp.135-145, 2020
  • Recently, deep learning-based methods for low-light image enhancement have achieved great success through supervised learning. However, they still suffer from a lack of training data, since it is difficult to obtain a large number of low-/normal-light image pairs in real environments. In this paper, we propose an unsupervised learning approach for single low-light image enhancement using the bright channel prior (BCP), which assumes that the brightest pixel in a small patch is likely to be close to 1. With this prior, a pseudo ground truth is first generated to establish an unsupervised loss function, and the proposed enhancement network is then trained with it. To the best of our knowledge, this is the first attempt to perform low-light image enhancement through unsupervised learning. In addition, we introduce a self-attention map to preserve image details and naturalness in the enhanced result. We validate the proposed method on various public datasets, demonstrating that it achieves performance competitive with the state of the art.
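
The bright channel prior itself is straightforward to compute: take the per-pixel maximum over color channels and then a maximum over a local patch. The sketch below uses SciPy's maximum filter; the patch size is an assumption.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def bright_channel(image: np.ndarray, patch_size: int = 15) -> np.ndarray:
    """Bright channel prior: per-pixel max over color channels, then over a local patch.

    `image` is an HxWx3 float array in [0, 1]; under the BCP the result should be close
    to 1 for well-exposed regions, so it can serve as a pseudo ground-truth cue.
    """
    per_pixel_max = image.max(axis=2)                              # max over channels
    return maximum_filter(per_pixel_max, size=patch_size)          # max over local patch

# Example on a random low-light-like image
low_light = np.random.rand(128, 128, 3) * 0.3
print(bright_channel(low_light).max())
```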

A Study on the Processing Method for Improving Accuracy of Deep Learning Image Segmentation (딥러닝 영상 분할의 정확도 향상을 위한 처리방법 연구)

  • Choi, Donggyu; Kim, Minyoung; Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2021.05a, pp.169-171, 2021
  • Camera-based image processing for applications such as self-driving, CCTV, mobile-phone security, and parking facilities is used to solve many real-life problems. Simple classification can be handled by conventional image processing, but it is difficult to find images, or features within images, of complexly mixed objects. To address this, deep learning techniques are used for classification, detection, and segmentation of image data so that finer judgments can be made. The results are better than plain image processing, but we confirm that segmentation results produced by deep learning still deviate from the real object. In this paper, we study how to improve accuracy by applying simple image processing to the output of deep learning image segmentation just before it is produced, in order to increase segmentation precision.
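
The abstract does not specify which simple processing step is applied, so the sketch below shows one common choice, morphological opening and closing of the predicted binary mask with OpenCV, as an illustration of post-processing a segmentation output.

```python
import cv2
import numpy as np

def refine_mask(mask: np.ndarray, kernel_size: int = 5) -> np.ndarray:
    """Clean a binary segmentation mask with morphological opening then closing.

    Opening removes small false-positive specks; closing fills small holes so the
    mask follows the object boundary more tightly.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

# Example: refine a noisy mask predicted by a segmentation network (values 0 or 255)
noisy_mask = (np.random.rand(256, 256) > 0.5).astype(np.uint8) * 255
clean_mask = refine_mask(noisy_mask)
```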

Performance of Real-time Image Recognition Algorithm Based on Machine Learning (기계학습 기반의 실시간 이미지 인식 알고리즘의 성능)

  • Sun, Young Ghyu; Hwang, Yu Min; Hong, Seung Gwan; Kim, Jin Young
    • Journal of Satellite, Information and Communications, v.12 no.3, pp.69-73, 2017
  • In this paper, we developed a real-time image recognition algorithm based on machine learning and tested its performance. The algorithm recognizes input images in real time based on machine-learned image data. To test its performance, we applied it to an autonomous vehicle and demonstrated its performance through that application.
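
The abstract gives no implementation details, so the following is only a generic sketch of a real-time recognition loop: frames are read from a camera with OpenCV and classified frame by frame, here with a pretrained MobileNetV2 standing in for the paper's (unspecified) model.

```python
import cv2
import torch
from torchvision.models import mobilenet_v2
from torchvision import transforms

# Lightweight classifier standing in for the paper's recognition model.
model = mobilenet_v2(weights="IMAGENET1K_V1").eval()
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

capture = cv2.VideoCapture(0)            # camera stream, e.g. the vehicle's front camera
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(preprocess(rgb).unsqueeze(0))
    print("predicted class:", int(logits.argmax()))
    if cv2.waitKey(1) == 27:             # stop on Esc
        break
capture.release()
```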

The training of convolution neural network for advanced driver assistant system

  • Nam, Kihun; Jeon, Heekyeong
    • International Journal of Advanced Culture Technology, v.4 no.4, pp.23-29, 2016
  • In this paper, a learning technique for an in-vehicle CNN processor is proposed. Conventional CNN processors store the weights learned through training, but accuracy decreases when the image is distorted by weather conditions. Enhancing the input image before classification is the usual remedy, but it has the weakness of increasing processor size. To solve this problem, this paper improves CNN performance by training on distorted images. As a result, the proposed method achieved approximately 38% better accuracy than the conventional method.
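
One way to realize "learning the distorted image" is to inject weather-like distortions as training-time augmentation instead of enhancing images at inference time. The sketch below uses torchvision transforms; the specific distortions and their strengths are assumptions, since the abstract does not list them.

```python
import torch
from torchvision import transforms

# Augmentations imitating weather-related distortions (over/under exposure, blur, sensor noise),
# applied at training time so no extra enhancement stage is needed on the processor.
distort = transforms.Compose([
    transforms.ColorJitter(brightness=0.5, contrast=0.4),         # over/under exposure
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),     # rain/fog-like blur
    transforms.ToTensor(),
    transforms.Lambda(lambda x: (x + 0.05 * torch.randn_like(x)).clamp(0, 1)),  # sensor noise
])

# Usage (hypothetical dataset path):
# train_dataset = torchvision.datasets.ImageFolder("train_images", transform=distort)
```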