Title/Summary/Keyword: Image Training Dataset


Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image

  • Seo, Yian; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.1-19 / 2018
  • Large amounts of data are now available for research and business sectors to extract knowledge from. These data can take the form of unstructured data such as audio, text, and images, and can be analyzed with deep learning methods. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs propagate through the network to the outputs. Its layered structure is well suited to image classification, comprising convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of the feature maps, and fully-connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions and show either the apparel item itself or a professional model wearing it. Such images may not train a classification model effectively when one wants to classify street fashion or walking images, which are taken in uncontrolled conditions and involve people's movement and unexpected poses. We therefore propose to train the model with a runway apparel image dataset, which captures mobility. This allows the classification model to be trained on far more variable data and improves its adaptation to diverse query images. To achieve both convergence and generalization, we apply Transfer Learning to our training network. As Transfer Learning in CNNs consists of pre-training and fine-tuning stages, we divide training into two steps. First, we pre-train our architecture on a large-scale dataset, ImageNet, which consists of 1.2 million images in 1000 categories including animals, plants, activities, materials, instrumentation, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. Since we could not find any previously published runway image dataset, we collected one from Google Image Search, obtaining 2426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and our proposed model achieves an accuracy of 67.2% on the final test. To the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest training the model with images that capture all possible postures, which we denote as mobility, using our own runway apparel image dataset. Moreover, by applying Transfer Learning and using the checkpoints and parameters provided by TensorFlow Slim, we saved training time, taking about 6 minutes per experiment to train the classifier. This model can be used in many business applications where the query image may be a runway image, a product image, or a street fashion image. Specifically, runway query images can support a mobile application service during fashion week to facilitate brand search; street style query images can be classified during fashion editorial work to label the brand or style; and website query images can be processed by an e-commerce service providing item information or recommending similar items.
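
A minimal sketch of the two-stage transfer-learning setup described above, written with the Keras API. InceptionV3 (a successor of GoogLeNet's Inception v1, which Keras does not ship) stands in for the paper's backbone; the learning rates and the commented-out dataset objects are illustrative assumptions:

```python
import tensorflow as tf

NUM_BRANDS = 32  # the runway dataset covers 32 fashion brands

# Pre-training stage: start from an ImageNet-pre-trained Inception
# backbone and keep its weights frozen at first.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False

# Replace the 1000-way ImageNet head with a 32-way brand classifier.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_BRANDS, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(runway_train_ds, epochs=...)   # train the new head only

# Fine-tuning stage: unfreeze the backbone and continue training on the
# runway images with a much smaller learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(runway_train_ds, epochs=...)
```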

High-Resolution Satellite Image Super-Resolution Using Image Degradation Model with MTF-Based Filters

  • Minkyung Chung; Minyoung Jung; Yongil Kim
    • Korean Journal of Remote Sensing / v.39 no.4 / pp.395-407 / 2023
  • Super-resolution (SR) has great significance in image processing because it enables downstream vision tasks at high spatial resolution. Recent SR studies have adopted deep learning networks and achieved remarkable performance compared to conventional example-based methods. Deep-learning-based SR models generally require low-resolution (LR) images and the corresponding high-resolution (HR) images as a training dataset. Due to the difficulty of obtaining real-world LR-HR datasets, most SR models have used only HR images and generated the LR counterparts with a predefined degradation such as bicubic downsampling. However, SR models trained on such simple degradations do not reflect the properties of real images and often produce deteriorated SR quality when applied to real-world images. In this study, we propose an image degradation model for HR satellite images based on the modulation transfer function (MTF) of the imaging sensor. Because the proposed method derives the degradation from the sensor's properties, it is better suited to training SR models on remote sensing images. Experimental results on HR satellite image datasets demonstrate the effectiveness of applying MTF-based filters to construct a more realistic LR-HR training dataset.
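
The core idea, synthesizing an LR image by filtering the HR image with a sensor-derived MTF before decimation, can be sketched as follows. The Gaussian approximation of the MTF and the example value of 0.3 at the Nyquist frequency are assumptions for illustration, not the paper's actual filter design:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mtf_to_sigma(mtf_at_nyquist: float) -> float:
    """Solve exp(-2 * (pi * sigma * f)**2) = MTF at f = 0.5 cycles/px."""
    return np.sqrt(-2.0 * np.log(mtf_at_nyquist)) / (2.0 * np.pi * 0.5)

def degrade_hr_to_lr(hr: np.ndarray, scale: int = 2,
                     mtf_at_nyquist: float = 0.3) -> np.ndarray:
    """Blur with an MTF-derived Gaussian PSF, then decimate by `scale`."""
    # Scale sigma so the MTF value refers to the LR sampling grid.
    sigma = mtf_to_sigma(mtf_at_nyquist) * scale
    blurred = gaussian_filter(hr.astype(np.float64), sigma=sigma)
    return blurred[::scale, ::scale]
```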

CNN-Based Fake Image Identification with Improved Generalization

  • Lee, Jeonghan; Park, Hanhoon
    • Journal of Korea Multimedia Society / v.24 no.12 / pp.1624-1631 / 2021
  • With the continued development of image processing technology, we live in a time when it is difficult to visually discriminate processed (or tampered) images from real ones. As the risk of fake images being misused for crime increases, image forensics for identifying fake images is becoming more important. Various deep-learning-based identifiers have been studied, but many problems remain before they can be used in real situations. Because deep learning inherently relies strongly on the given training data, such identifiers are very vulnerable to data they have never seen. We therefore look for ways to improve the generalization ability of deep-learning-based fake image identifiers. First, images with various contents were added to the training dataset to resolve the over-fitting problem in which the identifier can classify real and fake images only for specific contents and fails for others. Next, color spaces other than RGB were exploited; that is, fake image identification was attempted in color spaces not considered when the fakes were created, such as HSV and YCbCr. Finally, dropout, commonly used for the generalization of neural networks, was applied. The experimental results confirm that conversion to the HSV color space is the best single solution, and that combining it with the enlarged training dataset greatly improves the accuracy and generalization ability of deep-learning-based identifiers on fake images they have never seen before.
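
A minimal sketch of the HSV preprocessing step the paper identifies as most effective; the scaling constants follow OpenCV's 8-bit HSV convention, and which CNN identifier consumes the result is left open:

```python
import cv2
import numpy as np

def to_hsv_input(bgr: np.ndarray) -> np.ndarray:
    """Convert an 8-bit BGR image to HSV scaled to [0, 1] for a CNN."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # OpenCV stores 8-bit HSV as H in [0, 179], S and V in [0, 255].
    return hsv.astype(np.float32) / np.array([179.0, 255.0, 255.0],
                                             dtype=np.float32)
```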

A Divide-Conquer U-Net Based High-Quality Ultrasound Image Reconstruction Using Paired Dataset

  • Minha Yoo; Chi Young Ahn
    • Journal of Biomedical Engineering Research / v.45 no.3 / pp.118-127 / 2024
  • Deep learning methods for enhancing the quality of medical images commonly use unpaired datasets, because acquiring paired datasets through a commercial imaging system is impractical. In this paper, we propose a supervised learning method to enhance the quality of ultrasound images. The U-net model incorporates a divide-and-conquer approach that splits an image into four parts and processes each part separately, to overcome data shortage and shorten the training time. The proposed model is trained on a paired dataset consisting of 828 pairs of low-quality and high-quality images with a resolution of 512×512 pixels, obtained by varying the number of channels for the same subject. Of the 828 pairs, 684 are used as the training dataset and the remaining 144 as the test dataset. In the test results, the average Mean Squared Error (MSE) was reduced from 87.6884 in the low-quality images to 45.5108 in the restored images, the average Peak Signal-to-Noise Ratio (PSNR) improved from 28.7550 to 31.8063, and the average Structural Similarity Index (SSIM) increased from 0.4755 to 0.8511, demonstrating significant enhancements in image quality.
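
A minimal sketch of the divide-and-conquer idea, assuming non-overlapping quadrants and a callable `unet` that maps a quadrant to its restored version; the paper's actual tiling details may differ:

```python
import numpy as np

def split_quadrants(img: np.ndarray):
    """Split a (512, 512) image into four (256, 256) quadrants."""
    h, w = img.shape[:2]
    h2, w2 = h // 2, w // 2
    return [img[:h2, :w2], img[:h2, w2:], img[h2:, :w2], img[h2:, w2:]]

def merge_quadrants(q):
    """Reassemble the four quadrants into the full image."""
    top = np.concatenate([q[0], q[1]], axis=1)
    bottom = np.concatenate([q[2], q[3]], axis=1)
    return np.concatenate([top, bottom], axis=0)

def restore(img, unet):
    # Process each quadrant independently, then reassemble.
    return merge_quadrants([unet(p) for p in split_quadrants(img)])
```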

Manchu Script Letters Dataset Creation and Labeling

  • Aaron Daniel Snowberger; Choong Ho Lee
    • Journal of Information and Communication Convergence Engineering / v.22 no.1 / pp.80-87 / 2024
  • The Manchu language holds historical significance, but a complete dataset of Manchu script letters for training optical character recognition machine-learning models is currently unavailable. This paper therefore describes the process of creating a robust dataset of extracted Manchu script letters. Rather than performing automatic letter segmentation based on whitespace or the thickness of the central word stem, an image of the Manchu script was manually inspected and one copy of the desired letter was selected as a region of interest. This region of interest was then used as a template to match all other occurrences of the same letter within the Manchu script image. Although the dataset in this study contains only 4,000 images of five Manchu script letters, the letters were collected from twenty-eight writing styles, and a full dataset of Manchu letters is expected to be obtained through the same process. The collected dataset was normalized and used to train a simple convolutional neural network to verify its effectiveness.
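
The template-matching step can be sketched with OpenCV as below; the file names and the 0.8 correlation threshold are placeholders, and overlapping detections would additionally need non-maximum suppression:

```python
import cv2
import numpy as np

# Load the full script page and a manually selected letter template.
page = cv2.imread("manchu_page.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("letter_template.png", cv2.IMREAD_GRAYSCALE)
th, tw = template.shape

# Find all locations whose normalized correlation exceeds a threshold.
scores = cv2.matchTemplate(page, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(scores >= 0.8)  # 0.8 is an assumed threshold

# Crop one letter image per match for the dataset.
crops = [page[y:y + th, x:x + tw] for y, x in zip(ys, xs)]
```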

Aerial Dataset Integration For Vehicle Detection Based on YOLOv4

  • Omar, Wael; Oh, Youngon; Chung, Jinwoo; Lee, Impyeong
    • Korean Journal of Remote Sensing / v.37 no.4 / pp.747-761 / 2021
  • With the increasing application of UAVs in intelligent transportation systems, vehicle detection in aerial images has become an essential engineering technology with academic research significance. In this paper, a vehicle detection method for aerial images based on the YOLOv4 deep learning algorithm is presented. The best-known datasets at present are VOC (The PASCAL Visual Object Classes Challenge), ImageNet, and COCO (Microsoft Common Objects in Context), which are applicable to vehicle detection from UAVs. The value of an integrated dataset lies not only in its quantity and photo quality but also in its diversity, which affects detection accuracy. The method integrates three public aerial image datasets, VAID, UAVD, and DOTA, into a form suitable for YOLOv4. The trained model shows good test results, especially for small, rotated, and compact or densely packed objects, and meets real-time detection requirements. In future work, we will integrate one more aerial image dataset acquired by our lab to further increase the number and diversity of training samples while still meeting the real-time requirements.
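
Integrating datasets for YOLOv4 largely comes down to converting each source annotation into YOLO's normalized label format; a minimal sketch for axis-aligned boxes follows (reducing DOTA's oriented quadrilaterals to axis-aligned boxes first is a simplification assumed here):

```python
def to_yolo(cls_id: int, x_min: float, y_min: float,
            x_max: float, y_max: float,
            img_w: int, img_h: int) -> str:
    """Convert a pixel-space box to YOLO's normalized cx, cy, w, h line."""
    cx = (x_min + x_max) / 2.0 / img_w
    cy = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{cls_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# Example: a 100x50 px vehicle at (200, 300) in a 1024x1024 image.
print(to_yolo(0, 200, 300, 300, 350, 1024, 1024))
```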

The development of food image detection and recognition model of Korean food for mobile dietary management

  • Park, Seon-Joo; Palvanov, Akmaljon; Lee, Chang-Ho; Jeong, Nanoom; Cho, Young-Im; Lee, Hae-Jeung
    • Nutrition Research and Practice / v.13 no.6 / pp.521-528 / 2019
  • BACKGROUND/OBJECTIVES: The aim of this study was to develop a Korean food image detection and recognition model for use on mobile devices for accurate estimation of dietary intake. MATERIALS/METHODS: We collected food images by taking pictures or by searching web images and built an image dataset for training a complex recognition model for Korean food. Augmentation techniques were applied to increase the dataset size. The training dataset contained more than 92,000 images categorized into 23 groups of Korean food. All images were down-sampled to a fixed resolution of 150×150 and then randomly divided into training and testing groups at a ratio of 3:1, resulting in 69,000 training images and 23,000 test images. We used a Deep Convolutional Neural Network (DCNN) for the complex recognition model and compared the results with those of other large-scale image recognition networks: AlexNet, GoogLeNet, VGG (Very Deep Convolutional Network), and ResNet. RESULTS: Our complex food recognition model, K-foodNet, showed higher test accuracy (91.3%) and faster recognition time (0.4 ms) than the other networks. CONCLUSION: K-foodNet achieved better performance in detecting and recognizing Korean food than other state-of-the-art models.
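
A minimal sketch of the preprocessing and 3:1 split described above; the function names and the fixed random seed are illustrative assumptions:

```python
import cv2
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """Down-sample to the fixed 150x150 input resolution used in the paper."""
    small = cv2.resize(img, (150, 150), interpolation=cv2.INTER_AREA)
    return small.astype(np.float32) / 255.0

def split_3_to_1(images: np.ndarray, labels: np.ndarray, seed: int = 0):
    """Randomly split the dataset 3:1 into training and test groups."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    cut = int(len(images) * 0.75)  # 3:1 ratio -> 75% for training
    tr, te = idx[:cut], idx[cut:]
    return (images[tr], labels[tr]), (images[te], labels[te])
```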

A DCT Learning Combined RRU-Net for the Image Splicing Forgery Detection

  • Young-min Seo; Jung-woo Han; Hee-jung Kwon; Su-bin Lee; Joongjin Kook
    • Journal of the Semiconductor & Display Technology / v.22 no.1 / pp.11-17 / 2023
  • This paper proposes a lightweight deep learning network for detecting image splicing forgery. Research on image forgery detection using CNNs, and on detecting and localizing forgery at the pixel level, is ongoing. Among such works, CAT-Net, which learns the discrete cosine transform (DCT) coefficients of images together with the images themselves, was released in 2022. In CAT-Net, the DCT coefficients are handled by a JPEG artifact learning module that, like the backbone model, is pre-trained and then has its weights fixed. The dataset used for that pre-training is not publicly available, and the backbone model has a relatively large number of parameters, which causes overfitting on small datasets and hinders generalization. In this paper, the learning module is instead designed to learn DCT-domain characteristics in real time during network training, without pre-training. The proposed DCT RRU-Net combines RRU-Net, which detects forgery by learning from images alone, with the JPEG artifact learning module. We confirm that it has fewer network parameters than CAT-Net, detects forgery better than RRU-Net, and that its architecture and training method improve generalization performance across various datasets.
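
The DCT-domain input such models consume can be computed on the fly, JPEG-style, over 8×8 blocks; a minimal sketch follows (the learned JPEG artifact module in DCT RRU-Net is more involved than this fixed transform):

```python
import numpy as np
from scipy.fftpack import dct

def blockwise_dct(gray: np.ndarray, block: int = 8) -> np.ndarray:
    """Compute JPEG-style 8x8 block DCT coefficients of a grayscale image."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block  # drop any ragged border
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = gray[y:y + block, x:x + block].astype(np.float64)
            # Separable 2-D DCT: transform rows, then columns.
            out[y:y + block, x:x + block] = dct(
                dct(patch, axis=0, norm="ortho"), axis=1, norm="ortho")
    return out
```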

Semantic Image Segmentation for Efficiently Adding Recognition Objects

  • Lu, Chengnan; Park, Jinho
    • Journal of Information Processing Systems / v.18 no.5 / pp.701-710 / 2022
  • With the development of artificial intelligence technology, various machine-learning methods have been developed for recognizing objects in images. Among them, image segmentation is the most effective for recognizing objects within an image. Conventionally, image datasets of various classes are trained simultaneously, so whenever additional classes require segmentation, all datasets have to be trained again in full. Such repeated training is inefficient because most of the classes have already been trained. In addition, the class distribution of the datasets affects training: some classes appear far less often than others, so their training errors are not properly reflected when all classes are trained simultaneously. Therefore, a new method that separates some classes from the dataset is proposed to improve training efficiency, and the accuracies of the conventional and proposed methods are compared.
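
One way to separate classes, assumed here since the abstract does not spell out the mechanics, is to remap a segmentation mask so that only the classes still needing training are kept and all others become background:

```python
import numpy as np

def remap_mask(mask: np.ndarray, keep_ids: list) -> np.ndarray:
    """Keep only selected class IDs; all other pixels become background (0).

    Training on this reduced label space avoids re-training classes that
    an existing model already covers.
    """
    out = np.zeros_like(mask)
    for new_id, old_id in enumerate(keep_ids, start=1):
        out[mask == old_id] = new_id
    return out
```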

Adversarial Shade Generation and Training for a Text Recognition Algorithm Robust to Brightness Changes

  • Seo, Minseok; Kim, Daehan; Choi, Dong-Geol
    • The Journal of Korea Robotics Society / v.16 no.3 / pp.276-282 / 2021
  • Systems for recognizing text in natural scenes have been applied in various industries. However, brightness changes that occur in nature, such as light reflections and shadows, significantly degrade text recognition performance. To solve this problem, we propose an adversarial shade generation and training algorithm that is robust to shading changes. The algorithm divides the entire image into nine grid cells and adjusts the brightness of each cell with four trainable parameters. Training is then conducted in an adversarial relationship between the text recognition model and the shaded-image generator, so that progressively more difficult shaded grid combinations occur as training proceeds. With this curriculum-learning scheme, we not only obtained a performance improvement of more than 3% on the ICDAR2015 public benchmark dataset, but also confirmed improved performance on our own Android application text recognition dataset.
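
A minimal PyTorch sketch of the shade generator, assuming the nine cells form a 3×3 grid and collapsing the paper's four per-cell parameters into a single brightness gain for brevity:

```python
import torch
import torch.nn.functional as F

def apply_shade(images: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
    """Darken each cell of an assumed 3x3 grid by a trainable gain.

    images: (B, C, H, W) batch; params: (9,) raw trainable parameters.
    """
    _, _, h, w = images.shape
    gains = torch.sigmoid(params).view(1, 1, 3, 3)  # gains in (0, 1)
    # Up-sample the 3x3 gain map to the full image resolution.
    gain_map = F.interpolate(gains, size=(h, w), mode="nearest")
    return images * gain_map

# In the adversarial loop, `params` would be updated to increase the
# recognizer's loss while the recognizer's weights are updated to
# decrease it, yielding progressively harder shading as training goes on.
params = torch.zeros(9, requires_grad=True)
```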