• Title/Summary/Keyword: 이미지 데이터 셋 (Image Dataset)

Search Results: 299

Generation of Stage Tour Contents with Deep Learning Style Transfer (딥러닝 스타일 전이 기반의 무대 탐방 콘텐츠 생성 기법)

  • Kim, Dong-Min; Kim, Hyeon-Sik; Bong, Dae-Hyeon; Choi, Jong-Yun; Jeong, Jin-Woo
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.11 / pp.1403-1410 / 2020
  • Recently, as interest in non-face-to-face experiences and services increases, the demand for web video content that can be easily consumed on mobile devices such as smartphones or tablets is growing rapidly. To meet this demand, in this paper we propose a technique for efficiently producing video content that provides the experience of visiting famous places featured in animations or movies (i.e., a stage tour). To this end, an image dataset was built by collecting images of the stage areas using the Google Maps and Google Street View APIs. Afterwards, a deep learning-based style transfer method is presented that applies the unique style of the animation to the collected street-view images and generates video content from the style-transferred images. Finally, we show through various experiments that the proposed method can produce more engaging stage-tour video content.
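
The pipeline above begins by building an image dataset of stage areas from street-view imagery. Below is a minimal sketch of that collection step using the Google Street View Static API; the coordinates, headings, and file names are illustrative assumptions, not the authors' actual stage locations or code.

```python
# Minimal sketch of collecting street-view images for a stage-tour dataset.
# Assumes the Google Street View Static API; the coordinate list, headings,
# and output paths are illustrative placeholders, not the authors' values.
import requests

API_KEY = "YOUR_API_KEY"          # hypothetical placeholder
STATIC_API = "https://maps.googleapis.com/maps/api/streetview"

stage_locations = [               # hypothetical stage-area coordinates
    (35.6595, 139.7005),
    (35.0116, 135.7681),
]

def fetch_street_view(lat, lng, heading, out_path):
    """Download one street-view image for a given viewpoint."""
    params = {
        "size": "640x640",        # image size in pixels
        "location": f"{lat},{lng}",
        "heading": heading,       # camera direction in degrees
        "fov": 90,                # field of view
        "pitch": 0,
        "key": API_KEY,
    }
    resp = requests.get(STATIC_API, params=params, timeout=10)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

for i, (lat, lng) in enumerate(stage_locations):
    for heading in (0, 90, 180, 270):   # sample four directions per point
        fetch_street_view(lat, lng, heading, f"stage_{i}_{heading}.jpg")
```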

Deep-Learning Based Real-time Fire Detection Using Object Tracking Algorithm

  • Park, Jonghyuk; Park, Dohyun; Hyun, Donghwan; Na, Youmin; Lee, Soo-Hong
    • Journal of the Korea Society of Computer and Information / v.27 no.1 / pp.1-8 / 2022
  • In this paper, we propose a fire detection system based on CCTV images that uses a YOLOv4 model capable of real-time object detection together with the DeepSORT object tracking algorithm. The fire detection model was trained on 10,800 training images and verified on a separate test set of 1,000 images. Subsequently, the fire detection rate in single images and the consistency of detection across video frames were improved by tracking the detected fire regions with the DeepSORT algorithm. We verified that fires can be detected in real time, within 0.1 second per frame, in both video data and single images. The proposed AI fire detection system is more stable and faster than existing fire accident detection systems.
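
A minimal sketch of the detect-then-track loop described above is shown below, assuming a YOLOv4 Darknet model loaded through OpenCV's dnn module and the open-source deep_sort_realtime package standing in for the authors' DeepSORT implementation; file names and thresholds are illustrative assumptions.

```python
# Sketch of real-time fire detection (YOLOv4 via OpenCV dnn) plus tracking
# (deep_sort_realtime). Model files and thresholds are assumed placeholders.
import cv2
from deep_sort_realtime.deepsort_tracker import DeepSort

net = cv2.dnn.readNetFromDarknet("yolov4-fire.cfg", "yolov4-fire.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

tracker = DeepSort(max_age=30)          # keep lost fire tracks for 30 frames
cap = cv2.VideoCapture("cctv_stream.mp4")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect candidate fire regions in the current frame.
    _class_ids, scores, boxes = model.detect(frame, confThreshold=0.4, nmsThreshold=0.4)
    detections = [
        ([int(x), int(y), int(w), int(h)], float(score), "fire")
        for (x, y, w, h), score in zip(boxes, scores)
    ]
    # Associate detections across frames so a fire keeps a stable track ID.
    tracks = tracker.update_tracks(detections, frame=frame)
    for track in tracks:
        if not track.is_confirmed():
            continue
        l, t, r, b = map(int, track.to_ltrb())
        cv2.rectangle(frame, (l, t), (r, b), (0, 0, 255), 2)
        cv2.putText(frame, f"fire #{track.track_id}", (l, t - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
cap.release()
```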

A Lightweight Deep Learning Model for Text Detection in Fashion Design Sketch Images for Digital Transformation

  • Ju-Seok Shin; Hyun-Woo Kang
    • Journal of the Korea Society of Computer and Information / v.28 no.10 / pp.17-25 / 2023
  • In this paper, we propose a lightweight deep learning architecture tailored for efficient text detection in fashion design sketch images. Given the increasing prominence of Digital Transformation in the fashion industry, there is a growing emphasis on harnessing digital tools for creating fashion design sketches. As digitization becomes more pervasive in the fashion design process, the initial stages of text detection and recognition take on pivotal roles. In this study, a lightweight network was designed by building upon existing text detection deep learning models, taking into account the unique characteristics of apparel design drawings. In addition, a separately collected dataset of apparel design drawings was used to train the deep learning model. Experimental results underscore the superior performance of the proposed model, which outperforms existing text detection models by approximately 20% on fashion design sketch images. As a result, this paper is expected to contribute to Digital Transformation in the field of clothing design through research on optimizing deep learning models and detecting specialized text information.
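
As an illustration of the "lightweight backbone plus detection head" idea, the sketch below pairs a MobileNetV3-Small backbone with a small per-pixel text-region head in PyTorch. This is a generic sketch, not the authors' architecture, and the layer sizes are assumptions.

```python
# Generic lightweight text-detection sketch: MobileNetV3-Small backbone with a
# small upsampling head that predicts a per-pixel text / non-text score map.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_small, MobileNet_V3_Small_Weights

class LightTextDetector(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = mobilenet_v3_small(weights=MobileNet_V3_Small_Weights.DEFAULT)
        self.features = backbone.features            # 1/32-resolution feature maps
        self.head = nn.Sequential(                   # small head, assumed sizes
            nn.Conv2d(576, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, kernel_size=1),          # per-pixel text-region score
        )

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

model = LightTextDetector()
score_map = model(torch.randn(1, 3, 512, 512))        # e.g. a 512x512 sketch image
print(score_map.shape)                                 # probability map of text regions
```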

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, it is being considered as a method for solving problems in various fields. In particular, deep learning is known to perform exceptionally well when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. In spite of the high entry barrier of image captioning, which requires analysts to process both image and text data, it has established itself as one of the key fields in AI research owing to its wide applicability. In addition, much research has been conducted to improve the performance of image captioning in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the viewer. Moreover, the way of interpreting and expressing the image also differs according to the level of expertise. The public tends to recognize an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships. On the contrary, domain experts tend to recognize an image by focusing on the specific elements necessary to interpret it based on their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, a simple application of transfer learning with expertise data may invoke another problem: simultaneous learning on captions with various characteristics may cause the so-called 'inter-observation interference' problem, which makes it difficult to learn each characteristic point of view purely. When learning with a vast amount of data, most of this interference is washed out and has little impact on the learning results. On the contrary, in fine-tuning, where learning is performed on a small amount of data, the impact of such interference can be relatively large. To solve this problem, we therefore propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic.
In order to confirm the feasibility of the proposed methodology, we performed experiments utilizing the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, following the advice of an art therapist, about 300 pairs of images and expertise captions were created, and these data were used for the expertise transplantation experiments. As a result, it was confirmed that captions generated by the proposed methodology reflect the perspective of the implanted expertise, whereas captions generated by learning on general data contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation. To achieve this goal, we present a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect that much research will be conducted to address the lack of expertise data and to improve the performance of image captioning.
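
As a rough illustration of the expertise-transplant step, the sketch below fine-tunes the decoder of a simplified encoder-decoder captioner on a small set of expert image/caption pairs while the visual encoder stays frozen. The encoder/decoder classes, vocabulary size, and checkpoint name are simplified assumptions, not the authors' implementation, and the character-independent learning itself is not reproduced here.

```python
# Sketch: fine-tune a pre-trained captioning decoder on a small expert dataset
# while freezing the visual encoder (transfer learning of "expertise").
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class Encoder(nn.Module):                      # frozen visual feature extractor
    def __init__(self, dim=512):
        super().__init__()
        cnn = resnet50(weights=ResNet50_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(cnn.children())[:-1])   # drop the fc layer
        self.proj = nn.Linear(2048, dim)

    def forward(self, images):
        return self.proj(self.cnn(images).flatten(1))

class Decoder(nn.Module):                      # caption generator to be fine-tuned
    def __init__(self, vocab=10000, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, img_feat, captions):
        x = torch.cat([img_feat.unsqueeze(1), self.embed(captions)], dim=1)
        h, _ = self.lstm(x)
        return self.out(h)

encoder, decoder = Encoder(), Decoder()
# decoder.load_state_dict(torch.load("mscoco_pretrained_decoder.pt"))  # assumed checkpoint
for p in encoder.parameters():                 # keep general visual knowledge fixed
    p.requires_grad = False

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-5)   # small LR for ~300 pairs
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, captions):
    """One gradient step on a small batch of expert image/caption pairs."""
    logits = decoder(encoder(images), captions[:, :-1])
    loss = criterion(logits.reshape(-1, logits.size(-1)), captions.reshape(-1))
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```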

Towards Next Generation Multimedia Information Retrieval by Analyzing User-centered Image Access and Use (이용자 중심의 이미지 접근과 이용 분석을 통한 차세대 멀티미디어 검색 패러다임 요소에 관한 연구)

  • Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science / v.51 no.4 / pp.121-138 / 2017
  • As information users seek multimedia with a wide variety of information needs, information environments for multimedia have developed dramatically. More specifically, as seeking multimedia through emotional access points has become popular, the need for indexing abstract concepts, including emotions, has grown. This study aims to analyze the index terms extracted from the Getty Image Bank. Five basic emotion terms (sadness, love, horror, happiness, and anger) were used when collecting the index terms, and a total of 22,675 index terms were used for this study. The data comprise three sets: all emotions, positive emotions, and negative emotions. For these three data sets, co-word occurrence matrices were created and visualized as weighted networks with PNNC clusters. The network for all emotions shows three clusters and 20 sub-clusters, whereas the positive-emotion and negative-emotion networks each show 10 clusters. The results point to three elements for the next generation of multimedia retrieval: (1) the analysis of index terms for emotions shown by people in images, (2) the relationship between connotative and denotative terms and the possibility of inferring connotative terms from denotative terms using that relationship, and (3) the significance of a thesaurus of connotative terms for expanding related terms and synonyms to provide better access points.
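
The co-word analysis step can be sketched as follows: build a term co-occurrence matrix from per-image index-term lists and turn it into a weighted network. The sample records below are invented, and greedy modularity communities stand in for the PNNC clustering used in the paper.

```python
# Sketch of co-word analysis: per-image index-term sets -> co-occurrence counts
# -> weighted network -> community detection (stand-in for PNNC clustering).
from itertools import combinations
from collections import Counter
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each record is the index-term set of one image (illustrative data only).
records = [
    {"sadness", "rain", "window", "alone"},
    {"happiness", "family", "beach", "smile"},
    {"sadness", "alone", "night"},
    {"love", "couple", "sunset", "beach"},
]

cooccur = Counter()
for terms in records:
    for a, b in combinations(sorted(terms), 2):   # count each unordered term pair
        cooccur[(a, b)] += 1

G = nx.Graph()
for (a, b), w in cooccur.items():
    G.add_edge(a, b, weight=w)                    # weighted co-word network

communities = greedy_modularity_communities(G, weight="weight")
for i, c in enumerate(communities):
    print(f"cluster {i}: {sorted(c)}")
```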

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu; Alikhanov, Jumabek; Fang, Yang; Ko, Seunghyun; Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • The Convolutional Neural Network (ConvNet) is one class of powerful deep neural network that can analyze and learn hierarchies of visual features. The first such network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. However, a few decades later, in 2012, Krizhevsky achieved a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and requires a lot of effort to gather a large-scale dataset to train a ConvNet. Moreover, even if a large-scale dataset is available, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying features of high dimensional complexity directly extracted from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features from its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple ConvNet layer representation, because it captures more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4,096+4,096+1,000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. When salient features are obtained, the classifier can classify images more accurately, and the performance of transfer learning can be improved.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against a single ConvNet layer representation, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple ConvNet layer representation. Moreover, our approach achieved 75.6% accuracy compared to 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to 48.7% achieved by the FC7 layer on the SUN397 dataset. We also showed that our approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets, respectively, compared to existing work.
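
A minimal sketch of the proposed pipeline, assuming torchvision's pre-trained AlexNet and scikit-learn for PCA and the classifier, is given below; the dataset loading and the PCA dimensionality are illustrative assumptions.

```python
# Sketch: extract FC6+FC7+FC8 activations from pre-trained AlexNet, concatenate
# into a 9,192-dimensional vector, reduce with PCA, and train a linear classifier.
import torch
import numpy as np
from torchvision.models import alexnet, AlexNet_Weights
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

model = alexnet(weights=AlexNet_Weights.DEFAULT).eval()
acts = {}

def hook(name):
    def _hook(module, inputs, output):
        acts[name] = output.detach()
    return _hook

# FC6, FC7 and FC8 in torchvision's AlexNet classifier (indices 1, 4 and 6).
model.classifier[1].register_forward_hook(hook("fc6"))
model.classifier[4].register_forward_hook(hook("fc7"))
model.classifier[6].register_forward_hook(hook("fc8"))

def multi_layer_feature(batch):
    """Concatenate FC6+FC7+FC8 activations: 4096 + 4096 + 1000 = 9192 dims."""
    with torch.no_grad():
        model(batch)
    return torch.cat([acts["fc6"], acts["fc7"], acts["fc8"]], dim=1).numpy()

# X_train / X_test: preprocessed image tensors; y_*: labels (assumed to exist).
# feats_train = multi_layer_feature(X_train)
# feats_test  = multi_layer_feature(X_test)
# pca = PCA(n_components=512).fit(feats_train)          # keep salient components
# clf = LinearSVC().fit(pca.transform(feats_train), y_train)
# print("accuracy:", clf.score(pca.transform(feats_test), y_test))
```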

A Study on the Deep Learning-Based Tomato Disease Diagnosis Service (딥러닝기반 토마토 병해 진단 서비스 연구)

  • Jo, YuJin; Shin, ChangSun
    • Smart Media Journal / v.11 no.5 / pp.48-55 / 2022
  • Tomato crops are easily exposed to disease, and infections spread within a short period, so delayed countermeasures directly affect production and sales and can cause serious damage. Therefore, there is a need for a service that enables early prevention by simply and accurately diagnosing tomato diseases in the field. In this paper, we construct a system that applies deep learning models pre-trained on ImageNet, via transfer learning, to classify nine classes of tomato disease and normal cases and to serve the results. We use MobileNet and ResNet, CNN architectures that build lighter neural networks using convolutions, on the leaf image set from the PlantVillage dataset to classify tomato disease and normal leaves. By training the two proposed models, we show that a fast and convenient service can be provided using MobileNet, which achieves high accuracy and training speed.
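
A minimal sketch of the transfer-learning setup is shown below, with MobileNetV2 standing in for the MobileNet variant; the dataset path, class count, and training details are assumptions.

```python
# Sketch: ImageNet-pre-trained MobileNet with its classifier head replaced for
# the tomato disease/normal classes, fine-tuned on PlantVillage leaf images.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models

NUM_CLASSES = 9                       # nine tomato disease/normal classes (as stated above)

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("plantvillage/tomato/train", transform=tfm)  # assumed path
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)   # new classification head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:       # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```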

A Feasibility Study on Application of a Deep Convolutional Neural Network for Automatic Rock Type Classification (자동 암종 분류를 위한 딥러닝 영상처리 기법의 적용성 검토 연구)

  • Pham, Chuyen; Shin, Hyu-Soung
    • Tunnel and Underground Space / v.30 no.5 / pp.462-472 / 2020
  • Rock classification is a fundamental discipline for exploring the geological and geotechnical features of a site; however, it is not an easy task because of the high diversity of rock shapes and colors according to origin, geological history, and so on. With the great success of convolutional neural networks (CNNs) in many different image-based classification tasks, there has been increasing interest in taking advantage of CNNs to classify geological materials. In this study, the feasibility of a deep CNN for automatically and accurately identifying rock types is investigated, focusing on conditions in which shapes and colors vary widely even within the same rock type. The approach could be further developed into a mobile application for assisting geologists in classifying rocks during fieldwork. The structure of the CNN model used in this study is based on a deep residual neural network (ResNet), an ultra-deep CNN used in object detection and classification. The proposed CNN was trained on 10 typical rock types and achieved an overall accuracy of 84% on the test set. The result demonstrates that the proposed approach is not only able to classify rock types from images, but also represents an improvement in handling a highly diverse rock image dataset as input.
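
The fine-tuning of a residual network for rock-type classification could look roughly like the sketch below, where the augmentation reflects the high intra-class diversity of shape and color noted in the abstract; the dataset layout and hyperparameters are illustrative assumptions.

```python
# Sketch: fine-tune a ResNet for 10 rock types with color/shape augmentation.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),   # vary apparent shape/scale
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
rock_ds = datasets.ImageFolder("rock_images/train", transform=augment)   # assumed layout
rock_dl = torch.utils.data.DataLoader(rock_ds, batch_size=16, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 10)     # 10 rock-type classes

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in rock_dl:                     # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```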

Machine Classification in Ship Engine Rooms Using Transfer Learning (전이 학습을 이용한 선박 기관실 기기의 분류에 관한 연구)

  • Park, Kyung-Min
    • Journal of the Korean Society of Marine Environment & Safety / v.27 no.2 / pp.363-368 / 2021
  • Ship engine rooms have improved automation systems owing to the advancement of technology. However, there are many variables at sea, such as wind, waves, vibration, and equipment aging, which cause loosening, cutting, and leakage that are not measured by automated systems. In some cases, only one engineer is available for patrolling. This entails many risk factors in the engine room, where rotating equipment operates at high temperature and high pressure. When patrolling, the engineer relies on the five senses, with particularly high dependence on vision. We present a preliminary study toward an engine-room patrol robot that detects and reports on machinery while patrolling the engine room. Images of ship engine-room equipment were classified using a convolutional neural network (CNN). After constructing an image dataset of the ship engine room, the network was trained starting from a pre-trained CNN model. The classification performance of the trained model showed high reproducibility, and the results were visualized with class activation maps. Although the results cannot be generalized because the amount of data was limited, we believe that if the data of each ship were learned through transfer learning, a model suited to the characteristics of each ship could be constructed with little expenditure of time and cost.
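
The class-activation-map visualization mentioned above can be sketched as follows, assuming a ResNet-18 fine-tuned on engine-room machine classes; the number of classes, the checkpoint, and the image path are assumptions.

```python
# Sketch: class activation map (CAM) for a fine-tuned ResNet-18 classifier,
# weighting the final conv feature maps by the fc weights of the predicted class.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

NUM_CLASSES = 6                        # assumed number of machine categories
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)
# model.load_state_dict(torch.load("engine_room_resnet18.pt"))   # assumed checkpoint
model.eval()

feature_maps = {}
model.layer4.register_forward_hook(
    lambda m, i, o: feature_maps.update(last=o.detach()))        # final conv features

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
img = Image.open("purifier.jpg").convert("RGB")                  # assumed image path
x = tfm(img).unsqueeze(0)

with torch.no_grad():
    logits = model(x)
pred = logits.argmax(dim=1).item()

fmap = feature_maps["last"][0]                 # [512, 7, 7] final feature maps
weights = model.fc.weight[pred]                # fc weights of the predicted class
cam = F.relu(torch.einsum("c,chw->hw", weights, fmap))
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
cam = F.interpolate(cam[None, None], size=(224, 224),
                    mode="bilinear", align_corners=False)[0, 0]
print("predicted class:", pred, "CAM shape:", tuple(cam.shape))
```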

Design of CNN-based Gastrointestinal Landmark Classifier for Tracking the Gastrointestinal Location (캡슐내시경의 위치추적을 위한 CNN 기반 위장관 랜드마크 분류기 설계)

  • Jang, Hyeon-Woong; Lim, Chang-Nam; Park, Ye-Seul; Lee, Kwang-Jae; Lee, Jung-Won
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.1019-1022 / 2019
  • Recently, as deep learning techniques have proven their performance in image processing, there have been active attempts to apply them to the classification, analysis, and detection of images in various fields. Among these, expectations for medical image analysis software that can assist medical diagnosis are rising; in this study, we focus on capsule endoscopy images. Capsule endoscopy mainly targets imaging of the small intestine and records for about 8 to 10 hours from the esophagus to the large intestine. As a result, unlike other medical images such as CT, MR, and X-ray, a single dataset contains 100,000 to 150,000 images. In general, capsule endoscopy images are read by first dividing the gastrointestinal landmarks (esophagus, stomach, small intestine, large intestine) based on the gastrointestinal transition points (Z-line, pylorus, ileocecal valve) and then finding lesion information for each landmark. However, because the image data are so vast, doctors and medical experts spend a great deal of time and effort reading them. The purpose of this paper is to locate the gastrointestinal landmarks, a step that is performed in common for all patients and takes up much of the reading time. To this end, we designed a CNN model that can identify the gastrointestinal landmarks; for more effective learning, we removed noisy images that hinder training as a preprocessing step and analyzed the characteristics of each landmark. The trained model was evaluated and validated with data from a total of eight patients: when the model was trained and evaluated on randomly sampled patient data, an average accuracy of 95% was confirmed, whereas cross-validation performed per individual patient yielded an average accuracy of 67%.
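
The gap between the two reported accuracies (95% for a random split versus 67% for per-patient cross-validation) comes from the evaluation protocol; the sketch below contrasts the two protocols with scikit-learn, using placeholder features in place of the CNN and the endoscopy frames.

```python
# Sketch: random frame-level split vs. patient-wise (leave-one-patient-out) CV.
# The feature matrix stands in for CNN features of endoscopy frames; real data,
# the CNN itself, and the landmark labels are not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut, KFold

rng = np.random.default_rng(0)
n_frames = 800
X = rng.normal(size=(n_frames, 64))            # placeholder frame features
y = rng.integers(0, 4, size=n_frames)          # 4 landmarks: esophagus/stomach/SI/LI
patients = rng.integers(0, 8, size=n_frames)   # which of 8 patients each frame is from

clf = LogisticRegression(max_iter=1000)

# Random frame-level split: frames from the same patient can land in both folds.
random_acc = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))

# Patient-wise split: every fold holds out all frames of one unseen patient.
patient_acc = cross_val_score(clf, X, y, groups=patients, cv=LeaveOneGroupOut())

print("random split accuracy:   ", random_acc.mean())
print("per-patient CV accuracy: ", patient_acc.mean())
```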