• Title/Summary/Keyword: 영상 패션 (video fashion)


The global response to K-POP idol group's New Hanbok: The case of Black Pink Fashion (K-POP 아이돌 그룹 신한복 스타일에 대한 글로벌 반응: 블랙핑크 패션 사례)

  • Choi, Yeong-Hyeon;Chen, Tianyi;Lee, Kyu-Hye
    • Journal of Digital Convergence / v.18 no.12 / pp.533-541 / 2020
  • This study investigates consumer reactions to the New Hanbok style of K-pop idol groups. We collected YouTube videos and user comments containing the keyword 'Black Pink New Hanbok' and applied social network analysis and sentiment analysis. First, Black Pink's New Hanbok was designed as a mini-dress to make dancing easier, reinterpreting traditional elements in a modern way. Second, the issue of revealing costumes appeared as a keyword in domestic reactions but not in international reactions. Third, sentiment analysis showed that international audiences viewed the New Hanbok outfits more positively than domestic audiences did. This study is significant in that it suggests a direction for New Hanbok by investigating a broad range of consumer reactions and identifying its positive and negative elements.
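The sentiment-analysis step described above can be illustrated with a minimal lexicon-based scorer. This is only a sketch: the word lists, function names, and scoring rule are made up for illustration, and the study's actual pipeline is not specified in the abstract.

```python
# Minimal lexicon-based sentiment scorer, illustrating the kind of
# sentiment analysis the study applies to YouTube comments. The word
# lists below are invented for this example.

POSITIVE = {"beautiful", "love", "elegant", "stunning", "modern"}
NEGATIVE = {"revealing", "inappropriate", "disappointing", "ugly"}

def sentiment_score(comment: str) -> int:
    """Return (#positive hits - #negative hits) for one comment."""
    words = [w.strip(".,!?") for w in comment.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def compare_audiences(domestic, international):
    """Mean sentiment per audience group, mirroring the study's comparison."""
    mean = lambda comments: sum(map(sentiment_score, comments)) / len(comments)
    return mean(domestic), mean(international)
```

Averaging such per-comment scores over domestic and international comment sets would reproduce the study's group-level comparison in miniature.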

A Study on the Method of Making One-person media YouTuber Makeup Video Content (1인 미디어 유튜버 메이크업 동영상 콘텐츠 제작 기법 연구)

  • Chae Won Shin;Tai Gi Kwak
    • Journal of the Korea Fashion and Costume Design Association / v.26 no.3 / pp.179-192 / 2024
  • YouTube, the most representative video-sharing platform, is used by millions of people every day. New videos across various fields, such as daily life, beauty, games, eating shows, fashion, and music, are constantly being uploaded. Among the content available on YouTube, beauty-related videos are particularly popular. Creators who produce and share beauty-related content on online media platforms are known as beauty creators. A significant amount of beauty-related content is produced in response to high public interest. Most of this content focuses on sponsored advertising tied to product relevance and attitudes, such as makeup tutorials, hairstyling, and new product reviews. However, there is a lack of research on the diverse production environments of one-person-media beauty creators and on basic makeup production techniques. This study investigated the content composition of one-person-media YouTube beauty creators and examined the current status of YouTube beauty creators in Korea. Such research is needed to support the development of diverse content and videos by beauty creators. Based on these findings, a makeup technique was developed for YouTube beauty creators that takes into account the production environment of marketing content and the stages of video production. This study aims to expand and specialize the broad content environment of beauty creators and to develop makeup techniques that can be used effectively in YouTube content production.

A Comparative Study on the Brand Experiences of Metaverse and Offline Stores (메타버스와 오프라인 스토어의 브랜드 체험 비교 연구)

  • Gwang-Ho Yi;Yu-Jin Kim
    • Science of Emotion and Sensibility / v.26 no.2 / pp.53-66 / 2023
  • In recent years, more fashion brands have been seeking ways to use metaverse platforms, in which users can actively participate, as new brand touchpoints. This study compares the brand experiences of the fashion brand Gentle Monster's offline store and its equivalent metaverse store. By varying the order of the offline and metaverse visits, two groups participated in a field study in which they directly experienced both stores. The analysis produced the following findings: (1) in the overall experiential response, the frequency of sensory modules responding to new information was much higher than that of feeling experiences; (2) experiential responses were more active in the offline store, where subjects could touch and use products directly, than in the metaverse; (3) among the four types of theme space, experiential responses were most frequent in the product space; (4) the group that visited the metaverse store before the offline store showed more active experiences than the group that visited the offline store first. Finally, the results of this study show that metaverse brand stores in virtual space not only provide differentiated experiences beyond the spatiotemporal constraints of real space but can also be used as a strategic tool to make offline store experiences more meaningful and rich.

A Study on the Viewers' Perceived Benefits and Responses from the YouTube Videos of Senior Fashion Influencers (시니어 패션 인플루언서의 유튜브 영상에서 시청자의 지각된 혜택과 반응에 관한 연구)

  • Seo, Min Jeong
    • Journal of the Korea Fashion and Costume Design Association / v.24 no.3 / pp.85-96 / 2022
  • The MZ generation enjoys watching YouTube videos produced by senior influencers despite the large age gap. In addition, the ultimate goal of collaborative YouTube videos between senior influencers and brands is to increase sales. Accordingly, this study examined this phenomenon and aimed to provide useful insights for creating YouTube videos targeting the MZ generation. The study was divided into two parts. Study 1 explored viewers' perceived benefits by examining the YouTube video comments of a senior fashion influencer and classified them into informational, psychological, and hedonic benefits. Study 2 analyzed the relationships among the viewers' perceived benefits (informational, psychological, and hedonic), attitude toward a brand, and behavioral intention. Study 2 found that, among the three perceived benefits, only informational benefits enhanced attitudes toward a brand and thereby affected behavioral intention. Based on the results of the two sub-studies, this study highlights the importance of informational benefits in maximizing the marketing effects of collaborative YouTube videos between senior influencers and brands.

An Implementation of Metaverse Virtual Fitting Technology Using Posture Extraction Based on Deep Learning (딥러닝 기반 자세 추출을 통한 메타버스 가상 피팅 기술 구현)

  • Lee, Bum-Ro;Lee, Sang-Won;Shin, Soo-Jin
    • Proceedings of the Korean Society of Computer Information Conference / 2022.01a / pp.73-76 / 2022
  • This paper proposes a technique for implementing online virtual fitting, which is essential for selling fashion items in metaverse spaces, using an ordinary smartphone camera rather than a dedicated motion-recognition device. Implementing virtual fitting requires analyzing the input video with deep learning, estimating the full-body posture from the analysis results, and extracting an approximation of the body's dimensions; however, current smartphone computing environments lack sufficient computational performance for these steps. This paper introduces a server-based framework that performs the costly, high-load computations on a cloud server, securing a service architecture in which even low-performance smartphones can access high-performance computation, and thereby implements a highly portable augmented-reality-based virtual fitting technology. We expect the results of this paper to contribute to the revitalization of metaverse commerce and to the construction of virtual worlds faithful to the original meaning of the metaverse.
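The client/server split the paper describes can be sketched roughly as follows. Everything here is an illustrative stand-in: the downscaling step, function names, and dummy keypoints are assumptions, and the real cloud service and pose model are not specified in the abstract.

```python
# Sketch of the paper's server-offload idea: the phone only captures and
# downscales a frame, while heavy deep-learning pose estimation runs on
# a cloud server. The "server" below is a local stand-in function that
# returns fixed dummy keypoints.

def client_prepare_frame(frame, target_width=256):
    """On-device step: downscale a frame (nested lists) before upload."""
    step = max(1, len(frame[0]) // target_width)
    return [row[::step] for row in frame[::step]]

def server_estimate_pose(frame):
    """Cloud-side stand-in: return (name, x, y) keypoints, normalized 0-1."""
    return [("head", 0.5, 0.10), ("hip", 0.5, 0.55), ("ankle", 0.5, 0.95)]

def fitting_pipeline(frame):
    """capture -> offload -> keypoints used to overlay the garment."""
    return server_estimate_pose(client_prepare_frame(frame))
```

In a real deployment the stand-in call would be an HTTP request to the cloud endpoint, which is what lets a low-end phone use a heavy pose-estimation network.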


Characteristics of Film Music by Director Baz Luhrmann: Focusing on the Movies <Dancing Heroes>, <Romeo and Juliet>, and <Moulin Rouge> (바즈루어만 감독의 영화음악 특징 : 영화 <댄싱히어로>, <로미오와 줄리엣>, <물랑루즈>를 중심으로)

  • Kim, Youn-Sik;Kim, Young-Sam
    • Journal of Korea Entertainment Industry Association / v.13 no.8 / pp.223-230 / 2019
  • This paper aims to identify the distinctive features of the film music of Hollywood director Baz Luhrmann, focusing on his representative works Dancing Heroes, Romeo and Juliet, and Moulin Rouge. First, Dancing Heroes captures various dance music genres through dynamic shooting techniques and shows trendy sensibility through the main theme song 'Time After Time,' sung by the main character, Tina. Second, Romeo and Juliet keeps the lines and story of Shakespeare's original while dressing them in gorgeous fashion and jukebox-style rock music, harmonized with thoroughly modern, trendy MTV-style visuals. Third, Moulin Rouge presents its film music through a 'mix and match' method, combining jukebox-style trendy songs with classical backstage-musical and Bollywood-musical imagery. In conclusion, Baz Luhrmann's style has been reborn as a unique way of directing film music in which varied jukebox-style music is connected with striking visual effects. The director's style can suggest directions for diverse film music across the industry.

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.1-19 / 2018
  • Large amounts of data are now available for research and business sectors to extract knowledge from. This data can take the form of unstructured data such as audio, text, and images, and can be analyzed using deep learning methodology. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs propagate through the network to the outputs. A CNN has a layer structure well suited to image classification, comprising convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of feature maps, and fully connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as images of the apparel itself or of professional models wearing it. Such images may not train the classification model effectively for classifying street fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset that captures mobility. This allows the classification model to be trained on far more variable data and enhances its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply transfer learning to our training network. As transfer learning in CNNs consists of pre-training and fine-tuning stages, we divide the training into two steps. First, we pre-train our architecture on a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1000 categories including animals, plants, activities, materials, instrumentation, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved great accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. As we could not find any previously and publicly available runway image dataset, we collected one from Google Image Search, attaining 2426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We performed 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest the idea of training the model with images capturing all possible postures, denoted as mobility, by using our own runway apparel image dataset. Moreover, by applying transfer learning and using the checkpoints and parameters provided by TensorFlow-Slim, we reduced the time spent training the classification model to six minutes per experiment. This model can be used in many business applications where the query image may be a runway image, a product image, or a street fashion image. Specifically, a runway query image can support a mobile application service during fashion week to facilitate brand search; a street-style query image can be classified and labeled by brand or style during fashion editorial work; and a website query image can be processed by an e-commerce multi-complex service that provides item information or recommends similar items.
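The convolution / pooling / fully-connected pipeline the abstract describes can be illustrated with a minimal NumPy forward pass. All shapes and weights below are illustrative only; this is not the paper's trained GoogLeNet, merely a sketch of each layer's role.

```python
import numpy as np

# Minimal forward pass showing the layer roles the abstract describes:
# convolution builds feature maps, pooling shrinks them, and a
# fully-connected layer scores the classes.

def conv2d(img, kernel):
    """'Valid' convolution of one channel, producing one feature map."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fm, size=2):
    """Max pooling: halves each spatial dimension when size=2."""
    h, w = fm.shape
    return fm[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size).max(axis=(1, 3))

def classify(features, weights):
    """Fully-connected layer: flatten the feature map and score classes."""
    return weights @ features.ravel()

rng = np.random.default_rng(0)
img = rng.random((8, 8))                        # toy grayscale input
fm = max_pool(conv2d(img, rng.random((3, 3))))  # (8,8) -> (6,6) -> (3,3)
scores = classify(fm, rng.random((32, 9)))      # 32 brands, as in the paper
```

In the paper's actual setup these layers are GoogLeNet's, pre-trained on ImageNet and then fine-tuned on the runway dataset rather than initialized randomly.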

Web-based V-Commerce Development Using a Persona Model - For Vietnamese Consumers (페르소나 모델을 활용한 웹 기반의 V-Commerce 개발 -베트남 소비자들을 대상으로-)

  • Chung, Hae-Kyung;Lee, Seung-Min
    • Journal of Digital Contents Society / v.19 no.6 / pp.1169-1176 / 2018
  • This study aims to create a video-commerce website for Vietnamese consumers. First, we developed a persona by conducting questionnaires, in-depth interviews, and an analysis of prior research to understand Vietnamese consumers' content interests, level of understanding, and purposes in using the website. The persona results were as follows: first, create a contact menu so that Vietnamese and Korean people can interact with each other easily; second, provide the kinds of information Vietnamese people are interested in, such as K-pop, Korean fashion, Korean makeup (cosmetics), Korean food, Korean travel, and the Korean language; third, make it easy to purchase the products shown in the videos they watch.

Grouping-based 3D Animation Data Compression Method (군집화 기반 3차원 애니메이션 데이터 압축 기법)

  • Choi, Young-Jin;Yeo, Du-Hwan;Kim, Hyung-Seok;Kim, Jee-In
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집) / 2008.02a / pp.461-468 / 2008
  • The need to visualize interactive multimedia content with realistic three-dimensional shapes on portable devices is increasing as new ubiquitous services become reality. Especially in digital fashion applications that use virtual reality technologies to present clothes of various forms on different avatars, very high-quality visual models must be delivered over mobile networks. Due to the limited network bandwidth and memory of portable devices, it is difficult to transmit visual data effectively and render realistic three-dimensional images. In this paper, we propose a compression method that reduces three-dimensional data for digital fashion applications. The three-dimensional model includes avatar animation, which requires very large amounts of data over time. Our proposed method exploits the temporal and spatial coherence of animation data to reduce this amount. By grouping vertices of the three-dimensional model, the entire animation is represented by the movement paths of a few representative vertices. Existing three-dimensional model compression approaches can benefit from the proposed method, which reduces the compression sources through grouping. We expect the proposed method to apply not only to three-dimensional garment animation but also to generic deformable objects.
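The grouping idea can be sketched as follows. The nearest-seed clustering rule and the constant per-vertex offset used here are illustrative assumptions, not the authors' exact algorithm, which the abstract does not detail.

```python
import numpy as np

# Sketch of grouping-based animation compression: cluster vertices that
# move together, then store one representative trajectory per group plus
# a fixed per-vertex offset, instead of every vertex's full path.

def compress(anim, n_groups=4, seed=0):
    """anim: (frames, vertices, 3) array -> (labels, rep_paths, offsets)."""
    rng = np.random.default_rng(seed)
    seeds = rng.choice(anim.shape[1], n_groups, replace=False)
    # Assign each vertex to its nearest seed vertex in the first frame;
    # every group is non-empty because each seed is closest to itself.
    d = np.linalg.norm(anim[0][:, None] - anim[0][seeds][None], axis=2)
    labels = d.argmin(axis=1)
    # Representative path per group: the group's mean trajectory.
    rep = np.stack([anim[:, labels == g].mean(axis=1)
                    for g in range(n_groups)], axis=1)
    offsets = anim[0] - rep[0, labels]  # per-vertex correction at frame 0
    return labels, rep, offsets

def decompress(labels, rep, offsets):
    """Approximate reconstruction: representative path + fixed offset."""
    return rep[:, labels] + offsets[None]
```

With a handful of representative paths standing in for hundreds of vertex trajectories, the stored data shrinks in proportion to the grouping ratio, at the cost of reconstruction error in frames where group members diverge.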


Communication Characteristics of Fashion Shows Using Digital Images (디지털 영상을 활용한 패션쇼의 커뮤니케이션 특성)

  • Hong, Hye Rim;Kim, Young In
    • Journal of the Korean Society of Costume / v.64 no.6 / pp.1-15 / 2014
  • In the fashion industry, fashion shows are what present the latest trends and create issues. The function of fashion shows is changing from a promotional means to a method of communication with customers. Recently, some fashion shows have used digital images and omitted traditional elements such as models, stages, garments, music, and audiences. In this study, 30 fashion shows that used digital images were selected from the 2000-2010 collections of Paris, Milan, London, and New York, and their communication characteristics were analyzed and discussed. The communication characteristics fall into three categories. First, shows used digital images as stage scenery or effects to create the desired stage atmosphere; the digital images served as supplementary tools reinforcing the concept of the fashion show. Second, fashion shows used real-time video to extend the presentation into virtual space; interactive videos were designed to encourage audiences to participate actively in the show. Third, for internet-only digital fashion shows, the digital images themselves were the focus. Since the Internet is not constrained by time or space, multi-faceted communication between audiences and fashion designers, or among audiences, is possible, and the potential audience is larger than for traditional fashion shows. Digital images will be used more often in fashion shows, and future fashion shows will seek greater interactivity with audiences through new digital image technology.