• Title/Summary/Keyword: Generate AI Video

Analysis of Success Factors of OTT Original Contents Through BigData, Netflix's 'Squid Game Season 2' Proposal (빅데이터를 통한 OTT 오리지널 콘텐츠의 성공요인 분석, 넷플릭스의 '오징어게임 시즌2' 제언)

  • Ahn, Sunghun; Jung, JaeWoo; Oh, Sejong
    • Journal of Korea Society of Digital Industry and Information Management / v.18 no.1 / pp.55-64 / 2022
  • This study analyzes the success factors of OTT original content through big data and suggests elements of scenario, casting, fun, and emotional appeal for producing follow-up works. It also offers suggestions for the success of 'Squid Game Season 2'. The success factors of 'Squid Game' identified through big data are as follows: first, it is a simple psychological experimental game; second, it uses a retro strategy; third, it has modern visual beauty and color; fourth, it has simple aesthetics; fifth, it was released on the OTT platform Netflix; sixth, it benefited from Netflix's video recommendation algorithm; seventh, it induced binge-watching; and lastly, it resonated widely because it prompted viewers to think about 'death' and 'money' during the pandemic. The suggestions for 'Squid Game Season 2' are as follows: first, a fusion of famous traditional games from various countries; second, an AI-based strategy for planning, producing, and selling merchandise (MD); third, casting based on artificial intelligence and big data; fourth, a strategy for secondary copyrights and copyright sales. A limitation of this study is that it relied only on external data; data inside the Netflix platform were not utilized. If AI and big data are used not only in the OTT field but also by entertainment and film companies, better business models can be discovered and stable profits generated.

Generating Extreme Close-up Shot Dataset Based On ROI Detection For Classifying Shots Using Artificial Neural Network (인공신경망을 이용한 샷 사이즈 분류를 위한 ROI 탐지 기반의 익스트림 클로즈업 샷 데이터 셋 생성)

  • Kang, Dongwann; Lim, Yang-mi
    • Journal of Broadcast Engineering / v.24 no.6 / pp.983-991 / 2019
  • This study aims to analyze movies, which contain various stories, according to the size of their shots. To achieve this, a dataset must be classified by shot size: extreme close-up shots, close-up shots, medium shots, full shots, and long shots. However, because typical video storytelling consists mainly of close-up, medium, full, and long shots, it is not easy to construct an adequate dataset of extreme close-up shots. To solve this, we propose an image cropping method based on region-of-interest (ROI) detection. In this paper, we use face detection and saliency detection to estimate the ROI. By cropping the ROI of close-up images, we generate extreme close-up images. The dataset enriched by the proposed method is used to construct a model that classifies shots by size. The study can help analyze the emotional changes of characters in video stories and predict how the composition of a story changes over time. If AI is used more actively in entertainment fields in the future, it is expected to influence the automatic adjustment and creation of characters, dialogue, and image editing.
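The cropping step can be sketched briefly. The example below is a minimal, hypothetical illustration assuming OpenCV (with the opencv-contrib saliency module) as the face and saliency detectors; the paper does not specify which detector implementations were used, and the file names are placeholders.

```python
import cv2
import numpy as np

def estimate_roi(image):
    """Estimate a region of interest: prefer a detected face,
    fall back to a saliency-based bounding box."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # Use the largest detected face as the ROI.
        return max(faces, key=lambda f: f[2] * f[3])

    # Fallback: spectral-residual saliency (requires opencv-contrib-python).
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    _, sal_map = saliency.computeSaliency(image)
    sal_map = (sal_map * 255).astype("uint8")
    _, mask = cv2.threshold(sal_map, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    ys, xs = np.where(mask > 0)
    if len(xs) == 0:
        h, w = image.shape[:2]
        return 0, 0, w, h                      # nothing salient: keep full frame
    return xs.min(), ys.min(), xs.max() - xs.min(), ys.max() - ys.min()

def crop_extreme_close_up(image, margin=0.1):
    """Crop the ROI of a close-up frame to synthesize an extreme close-up."""
    x, y, w, h = estimate_roi(image)
    dx, dy = int(w * margin), int(h * margin)
    img_h, img_w = image.shape[:2]
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1, y1 = min(img_w, x + w + dx), min(img_h, y + h + dy)
    return image[y0:y1, x0:x1]

# Example (placeholder file names): enrich the dataset with a synthetic
# extreme close-up generated from an existing close-up frame.
frame = cv2.imread("close_up_frame.jpg")
cv2.imwrite("extreme_close_up_frame.jpg", crop_extreme_close_up(frame))
```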

Implementation of Hair Style Recommendation System Based on Big data and Deepfakes (빅데이터와 딥페이크 기반의 헤어스타일 추천 시스템 구현)

  • Tae-Kook Kim
    • Journal of Internet of Things and Convergence / v.9 no.3 / pp.13-19 / 2023
  • In this paper, we investigate the implementation of a hairstyle recommendation system based on big data and deepfake technology. The proposed system recognizes the user's facial shape from a photo (image). Facial shapes are classified as oval, round, or square, and hairstyles that suit each facial shape are synthesized using deepfake technology and provided as videos. Hairstyles are recommended based on big data, reflecting the latest trends and the styles that suit each facial shape. With an image segmentation map and the Motion Supervised Co-Part Segmentation algorithm, it is possible to synthesize elements belonging to the same category (such as hair or face) between images. Next, the synthesized image with the new hairstyle and a pre-defined driving video are fed to the Motion Representations for Articulated Animation algorithm to generate a video animation. The proposed system is expected to be used in various parts of the beauty industry, including virtual fitting and other related areas. In future research, we plan to develop a smart mirror that recommends hairstyles and incorporates features such as Internet of Things (IoT) functionality.
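Read as a pipeline, the abstract describes four stages: face-shape classification, trend-based hairstyle lookup, hair synthesis, and animation. The sketch below only shows how these stages could be orchestrated; the four callables are hypothetical wrappers around the models named in the abstract, not real library APIs.

```python
def recommend_and_preview(user_photo, driving_video,
                          classify_face_shape,    # hypothetical: returns "oval" | "round" | "square"
                          recommend_hairstyles,   # hypothetical: big-data trend lookup per face shape
                          swap_hair_part,         # hypothetical wrapper: Motion Supervised Co-Part Segmentation
                          animate):               # hypothetical wrapper: Motion Representations for Articulated Animation
    """Orchestrate the recommendation pipeline described in the abstract.
    All four callables are placeholders injected by the caller."""
    face_shape = classify_face_shape(user_photo)
    previews = []
    for style_image in recommend_hairstyles(face_shape, top_k=3):
        # Synthesize the user's face with the recommended hair part,
        # then animate the still composite with a pre-defined driving video.
        composite = swap_hair_part(source=user_photo, reference=style_image)
        previews.append(animate(composite, driving_video))
    return previews
```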

SHVC-based Texture Map Coding for Scalable Dynamic Mesh Compression (스케일러블 동적 메쉬 압축을 위한 SHVC 기반 텍스처 맵 부호화 방법)

  • Naseong Kwon; Joohyung Byeon; Hansol Choi; Donggyu Sim
    • Journal of Broadcast Engineering / v.28 no.3 / pp.314-328 / 2023
  • In this paper, we propose a texture map compression method based on the hierarchical coding of SHVC to support scalability in dynamic mesh compression. The proposed method downsamples a high-resolution texture map to generate texture maps at multiple resolutions and encodes them with SHVC, effectively removing the redundancy between the resolution levels. The dynamic mesh decoder supports the scalability of mesh data by decoding a texture map at the resolution appropriate to the receiver performance and network environment. To evaluate the proposed method, it is applied to the V-DMC (Video-based Dynamic Mesh Coding) reference software, TMMv1.0, and the proposed scalable encoder/decoder is compared with a TMMv1.0-based simulcast method. In the experiments, the proposed method achieves average point-cloud-based BD-rate (Luma PSNR) gains of -7.7% and -5.7% under AI and LD conditions, respectively, compared to the simulcast method, confirming that texture map scalability for dynamic mesh data can be supported effectively.
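A minimal sketch of the downsampling step is shown below, assuming OpenCV and placeholder file names; the actual SHVC encoding (e.g., with the SHM reference software) and the V-DMC/TMMv1.0 integration are not shown.

```python
import cv2

def build_texture_pyramid(texture_map_path, num_layers=2, scale=0.5):
    """Generate lower-resolution versions of a high-resolution texture map.
    The images would serve as the base and enhancement layers fed to an
    SHVC encoder (index 0 = lowest resolution, last = original resolution)."""
    layers = [cv2.imread(texture_map_path)]
    for _ in range(num_layers):
        h, w = layers[-1].shape[:2]
        layers.append(cv2.resize(layers[-1], (int(w * scale), int(h * scale)),
                                 interpolation=cv2.INTER_AREA))
    return list(reversed(layers))

# Example (placeholder file name): original plus layers at 1/2 and 1/4 resolution.
for i, layer in enumerate(build_texture_pyramid("mesh_texture_4k.png")):
    cv2.imwrite(f"texture_layer_{i}.png", layer)
```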

Location Tracking and Visualization of Dynamic Objects using CCTV Images (CCTV 영상을 활용한 동적 객체의 위치 추적 및 시각화 방안)

  • Park, Sang-Jin; Cho, Kuk; Im, Junhyuck; Kim, Minchan
    • Journal of Cadastre & Land InformatiX / v.51 no.1 / pp.53-65 / 2021
  • C-ITS (Cooperative Intelligent Transport System), which pursues traffic safety and convenience, uses various sensors to generate traffic information. It is therefore necessary to improve sensor-related technology to increase the efficiency and reliability of this traffic information. Recently, the role of CCTV in collecting video information has grown due to advances in AI (Artificial Intelligence) technology. In this study, we propose to identify and track dynamic objects (vehicles, people, etc.) in CCTV images and to analyze and provide information about them in various environments. To this end, we identified and tracked dynamic objects using the YOLOv4 and DeepSORT algorithms, established real-time multi-user support servers based on Kafka, defined transformation matrices between the image and spatial coordinate systems, and visualized the dynamic objects on a map. In addition, a positional consistency evaluation was performed to confirm the usefulness of the approach. Through the proposed scheme, we confirmed that, beyond a simple monitoring role, CCTVs can serve as important road-infrastructure sensors that provide relevant information by analyzing road conditions in real time.
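The transformation between image and spatial (map) coordinates mentioned above is typically a planar homography estimated from ground control points. The sketch below is a hypothetical illustration using OpenCV; the pixel and map coordinates are invented placeholders, and the detection/tracking (YOLOv4, DeepSORT) and Kafka components are not shown.

```python
import cv2
import numpy as np

# Hypothetical control points: pixel positions of road markings in the CCTV
# frame and their corresponding map coordinates (placeholder values).
image_pts = np.array([[412, 655], [1180, 640], [1455, 930], [180, 960]],
                     dtype=np.float32)
map_pts = np.array([[196321.4, 551287.2], [196334.6, 551288.0],
                    [196335.1, 551271.5], [196320.8, 551270.9]],
                   dtype=np.float32)

# Homography mapping the road plane in the image to map coordinates.
H, _ = cv2.findHomography(image_pts, map_pts)

def image_to_map(pixel_xy):
    """Project a tracked object's pixel position onto the map plane."""
    pt = np.array([[pixel_xy]], dtype=np.float32)      # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]

# Example: footpoint of a vehicle bounding box reported by the tracker.
print(image_to_map((820, 790)))
```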