• Title/Summary/Keyword: 영상편집 (video editing)


Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image

  • Seo, Yian; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.24 no.3, pp.1-19, 2018
  • Large amounts of data are now available for the research and business sectors to extract knowledge from. These data can take the form of unstructured data such as audio, text, and images, and can be analyzed with deep learning methods. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through the network to the outputs. A CNN has a layer structure well suited to image classification, as it is composed of convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of the feature maps, and fully connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as images of the apparel itself or of a professional model wearing the apparel. Such images may not be effective for training the classification model when one wants to classify street-fashion or walking images, which are taken in uncontrolled conditions and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset, which captures mobility. This allows the classification model to be trained on far more variable data and improves its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply transfer learning to our training network. As transfer learning in CNNs is composed of pre-training and fine-tuning stages, we divide the training step into two.
First, we pre-train our architecture on a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network on our own runway image dataset. As we could not find any previously published runway image dataset, we collected one from Google Image Search, obtaining 2426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and our proposed model achieves an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest training the model with images that capture all possible postures, which we denote as mobility, using our own runway apparel image dataset. Moreover, by applying transfer learning and using the checkpoints and parameters provided by TensorFlow-Slim, we reduce the time spent training the classification model to 6 minutes per experiment. This model can be used in many business applications where the query image may be a runway image, a product image, or a street-fashion image.
To be specific, a runway query image can be used in a mobile application service during fashion week to facilitate brand search; a street-style query image can be classified during fashion editorial work to label the brand or style; and a website query image can be processed by an e-commerce multi-complex service providing item information or recommending similar items.
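The two-stage setup the abstract describes (a backbone pre-trained on a large dataset, then a new classifier fine-tuned on a small domain dataset) can be sketched in miniature. This is an illustrative toy in NumPy, not the paper's GoogLeNet/TensorFlow-Slim pipeline: the frozen random projection stands in for the pre-trained feature extractor, the synthetic class prototypes stand in for the runway images, and only the new softmax head is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 4    # stand-in for the 32 fashion brands
FEAT_DIM = 64    # stand-in for the backbone's pooled feature size
IMG_DIM = 256    # stand-in for flattened input pixels

# Stage 1 ("pre-training"): backbone weights arrive already learned
# and stay frozen during fine-tuning. Here they are just fixed.
W_frozen = rng.normal(0, 0.1, (IMG_DIM, FEAT_DIM))

def extract_features(x):
    """Frozen backbone: a fixed nonlinear (ReLU) projection."""
    return np.maximum(x @ W_frozen, 0.0)

# Synthetic fine-tuning data: each class clusters around a prototype.
prototypes = rng.normal(0, 1, (N_CLASSES, IMG_DIM))
y = rng.integers(0, N_CLASSES, 400)
X = prototypes[y] + rng.normal(0, 0.3, (400, IMG_DIM))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Stage 2 ("fine-tuning"): train only the new classification head
# on features from the frozen backbone, via plain gradient descent
# on the cross-entropy loss.
feats = extract_features(X)
W_head = np.zeros((FEAT_DIM, N_CLASSES))
lr = 0.1
for _ in range(200):
    p = softmax(feats @ W_head)
    p[np.arange(len(y)), y] -= 1.0        # dL/dlogits for cross-entropy
    W_head -= lr * (feats.T @ p) / len(y)

acc = float((softmax(feats @ W_head).argmax(axis=1) == y).mean())
print(f"training accuracy after fine-tuning the head: {acc:.2f}")
```

Because only the small head is optimized while the backbone is reused, training is fast, which mirrors why the paper's fine-tuning takes only minutes per experiment once the pre-trained checkpoint is loaded.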

The Relationship Between Chewing Ability and Health Status in the Long-lived Elderly of Kyungpook Area

  • Lee, Hee-Kyung; Lee, Young-Kwon
    • Journal of Yeungnam Medical Science, v.16 no.2, pp.200-207, 1999
  • Background: The objective of this study was to evaluate dental and general health in relation to the state of dentition and chewing ability, by surveying oral condition and anthropometric measures, in order to provide baseline statistics for developing a program to improve the health status of the long-lived elderly in a rural community. Materials and Methods: The subjects were 97 rural long-lived elderly (27 males and 70 females) over 85 years old (mean age 88.14 ± 3.20 years) in Sungju-Gun, Kyungpook Province. Data were collected by questionnaires, direct anthropometric measurement, and oral examination of all 97 subjects in July 1999. Results: The following results were obtained: 1. 53.6% of all subjects believed themselves to be healthy. The average height, weight, BMI, body fat, lean body mass, and total body water were 148.8 ± 11.2 cm, 46.9 ± 10.5 kg, 21.2 ± 3.5 kg/m², 26.7 ± 6.9%, 73.0 ± 7.1%, and 53.4 ± 5.2%, respectively. 2. The average number of remaining teeth was 3.50 ± 5.71: 1.08 ± 2.88 maxillary and 2.41 ± 3.76 mandibular. The maximum number of remaining teeth among the subjects was 22, and 76.3% were fully edentulous (no natural teeth). Of the subjects, 52.6% used dentures, 23.7% used natural teeth, and 23.7% masticated on an edentulous ridge without dentures. 3. Oral condition versus self-assessed health, digestive ability, and chewing ability: On self-assessed health, 47.1% of the denture group responded that they felt healthy, versus 56.5% of the group edentulous without dentures and 65.2% of the natural-teeth-only group.
On self-assessed digestive ability, 82.4% of the denture group responded favorably, versus 65.2% of the group with no teeth and no dentures and 73.9% of the natural-teeth-only group. On self-assessed chewing ability, the corresponding figures were 90.2% in the denture group, 60.9% in the group with no teeth and no dentures, and 65.2% in the natural-teeth-only group. 4. Oral condition versus anthropometric measurements: The height, weight, body fat, lean body mass, and total body water were 150.0 ± 10.7 cm, 49.0 ± 10.9 kg, 26.9 ± 6.6%, 72.7 ± 7.0%, and 53.2 ± 5.1%, respectively, in the denture group; 142.7 ± 6.0 cm, 43.2 ± 5.5 kg, 29.5 ± 7.2%, 70.8 ± 6.9%, and 51.8 ± 5.0% in the group with no teeth and no dentures; and 152.3 ± 14.1 cm, 45.9 ± 12.6 kg, 23.4 ± 6.0%, 75.9 ± 6.9%, and 55.6 ± 5.1% in the natural-teeth-only group. Conclusion: Subjective assessments of good health were higher among denture users and those with natural teeth.
