• Title/Summary/Keyword: AI Video


A Discriminating Mechanism of Suspected Copyright Infringement Video with Strong Distortion Resistance (왜곡 저항력이 강한 저작권 침해 영상 저작물 판별 기법)

  • Yu, Ho-jei;Kim, Chan-hee;Chung, A-yun;Oh, Soo-hyun
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.3 / pp.387-400 / 2021
  • The growth of streaming platforms and their content, driven by advances in cloud environments, has triggered the rapid proliferation of illegally replicated content alongside legal content. This necessitates technology capable of discriminating copyright infringement across various types of content. The Korea Copyright Protection Agency operates an AI-based video content demonstration system, but it has limitations under distortions such as resolution changes. In this paper, we propose a skeleton-based mechanism that is resistant to distorted video content and capable of discriminating copyright infringement on platforms streaming illegal video content. The proposed mechanism converts the collected skeleton data into binary form and computes the Hamming distance to the original video for efficient comparison. In our experiments, the proposed mechanism discriminated illegally replicated video content with an accuracy of 94.79% and an average data size of 215 KB.
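
As a rough illustration of the comparison step described above, the sketch below binarizes per-frame skeleton features and measures the Hamming distance to the original video's fingerprint. The feature layout, thresholds, and data are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def binarize_features(features, thresholds):
    """Convert per-frame skeleton features into a binary fingerprint.

    features: (n_frames, n_joints) array of scalar skeleton features.
    thresholds: per-joint thresholds (here, medians of the original
    video). Both layouts are assumptions for illustration.
    """
    return (features > thresholds).astype(np.uint8)

def hamming_distance(fp_a, fp_b):
    """Fraction of differing bits between two equal-shape fingerprints."""
    return np.mean(fp_a != fp_b)

# Hypothetical usage: flag a suspect video whose fingerprint stays close
# to the original's despite distortions such as resolution changes.
rng = np.random.default_rng(0)
original = rng.random((300, 17))                          # 300 frames, 17 joints
suspect = original + rng.normal(0, 0.05, original.shape)  # mild distortion
thresholds = np.median(original, axis=0)

d = hamming_distance(binarize_features(original, thresholds),
                     binarize_features(suspect, thresholds))
print(f"Hamming distance: {d:.3f}")  # small distance -> likely a copy
```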

Effective teaching using textbooks and AI web apps (교과서와 AI 웹앱을 활용한 효과적인 교육방식)

  • Sobirjon, Habibullaev;Yakhyo, Mamasoliev;Kim, Ki-Hawn
    • Proceedings of the Korean Society of Computer Information Conference / 2022.01a / pp.211-213 / 2022
  • Images in textbooks influence the learning process. Students often look at pictures before reading the text, and these pictures can enhance students' imagination. Several studies have found that images in textbooks can increase students' creativity. However, when learning major subjects, reading a textbook or looking at a picture alone may not be enough to understand the topics and fully grasp the concepts. Studies show that viewers remember 95% of a message when watching a video, compared with reading text. Combining textbooks and videos could therefore be an effective teaching method: a "TEXT + IMAGE + VIDEO (Animation)" concept could be more beneficial than conventional approaches. We approach this with machine-learning image classification. This paper covers the features, approaches, and detailed objectives of our project. So far we have developed a prototype as a web app that works when accessed via smartphone. Once accessed, the web app requests camera access; when the smartphone's camera is brought close to a picture in the textbook, the app displays the video related to that picture below it.
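
A minimal sketch of the classify-then-lookup idea behind such a prototype, assuming a trained image classifier and a label-to-video table. The tiny untrained model, labels, and URLs below are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf

LABELS = ["fig_photosynthesis", "fig_water_cycle", "fig_dna"]
VIDEO_URLS = {
    "fig_photosynthesis": "https://example.com/videos/photosynthesis.mp4",
    "fig_water_cycle": "https://example.com/videos/water_cycle.mp4",
    "fig_dna": "https://example.com/videos/dna.mp4",
}

# Stand-in classifier; the real app would load a model trained on
# textbook-figure images instead of this untrained toy network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(len(LABELS), activation="softmax"),
])

def video_for_frame(frame):
    """Map a (224, 224, 3) camera frame to the video URL for its figure."""
    probs = model.predict(frame[np.newaxis], verbose=0)[0]
    return VIDEO_URLS[LABELS[int(np.argmax(probs))]]

print(video_for_frame(np.zeros((224, 224, 3), dtype=np.uint8)))
```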


Development of An Intelligent G-Learning Virtual Learning Platform Based on Real Video (실 화상 기반의 지능형 G-러닝 가상 학습 플랫폼 개발)

  • Jae-Yeon Park;Sung-Jun Park
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.2 / pp.79-86 / 2024
  • In this paper, we propose a virtual learning platform based on the various interactions that occur during real class activities, rather than the existing content-delivery-oriented learning metaverse platforms. The platform combines AI with a virtual environment so that learners solve problems by talking to a real-time AI, and G-learning techniques are applied to improve class immersion. The Virtual Edu platform developed through this study provides an effective learning experience combining self-directed learning, interest-stimulating game simulation, and the PBL teaching method, and we propose a new educational method that improves student participation and learning effectiveness. In experiments, we tested performance on learning activities in a real-time video classroom and found that classes proceeded stably.

A Comparative Study on Artificial Intelligence Model Performance between Image and Video Recognition in the Fire Detection Area (화재 탐지 영역의 이미지와 동영상 인식 사이 인공지능 모델 성능 비교 연구)

  • Jeong Rok Lee;Dae Woong Lee;Sae Hyun Jeong;Sang Jeong
    • Journal of the Society of Disaster Information / v.19 no.4 / pp.968-975 / 2023
  • Purpose: Fire detection suffers from a high false-positive rate for flames and smoke. We propose a method and dataset for recognizing and classifying fire situations to reduce the false-detection rate. Method: Using video as training data, the characteristics of fire situations were extracted and applied to a classification model. For evaluation, the performance of YOLOv8 and SlowFast was compared and analyzed using the fire dataset published by the National Information Society Agency (NIA). Result: YOLO's detection performance varies sensitively with the background, and it failed to detect fires properly when the fire was too large or too small. Because SlowFast learns along the time axis of the video, it detected fire well even when the shape of an atypical object could not be clearly inferred because the surroundings were blurry or bright. Conclusion: A video-based artificial intelligence detection model proved more appropriate for fire detection than an image-based one.
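
The contrast the abstract draws can be sketched as follows: an image detector scores one frame at a time, while a video model such as SlowFast consumes a whole clip and can exploit the time axis. Both torch.hub entry points below are publicly documented ones (YOLOv5 stands in for YOLOv8 here); the shapes and dummy inputs are illustrative assumptions.

```python
import numpy as np
import torch

# Frame-level detector (YOLOv5 as a stand-in for YOLOv8).
yolo = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
# Clip-level classifier from PyTorchVideo's model zoo.
slowfast = torch.hub.load("facebookresearch/pytorchvideo",
                          "slowfast_r50", pretrained=True).eval()

frame = np.zeros((640, 640, 3), dtype=np.uint8)  # one RGB frame
clip_fast = torch.rand(1, 3, 32, 256, 256)       # 32-frame clip (fast pathway)
clip_slow = clip_fast[:, :, ::4]                 # every 4th frame (slow pathway)

with torch.no_grad():
    detections = yolo(frame)                   # per-frame boxes, no temporal cue
    logits = slowfast([clip_slow, clip_fast])  # one score vector per clip

print(detections.xyxy[0].shape, logits.shape)
```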

A Study on Artificial Intelligence Based Business Models of Media Firms

  • Song, Minzheong
    • International Journal of Advanced Smart Convergence / v.8 no.2 / pp.56-67 / 2019
  • The aim of this study is to develop Artificial Intelligence (AI) based business models for media firms. We define AI and discuss an 'AI activity model'. The practices of the efficiency model are home-equipment-based personalization and media content recommendation. The practices of the expert model are media content commissioning, content rights negotiation, copyright infringement, and promotion. The practices of the effectiveness model are photo and video auto-tagging and auto-subtitling with simultaneous translation. The practices of the innovation model are content script creation and metadata management. Related use cases from 2012 to 2017 are introduced along the four activity models of AI. In conclusion, we propose that media companies fully utilize AI to transform from traditional firms into successful digital media firms.

Study on AI-based content reproduction system using movie contents (영화를 이용한 AI 기반 콘텐츠 재생산 시스템 연구)

  • Yang, Seokhwan;Lee, Young-Suk
    • Journal of Korea Multimedia Society / v.24 no.2 / pp.336-343 / 2021
  • AI technology is spreading not only to industrial fields but also to culture, art, and content fields. In this paper, we propose an AI-based system that can automate the process of reproducing content using characters from movie content. The character's basic appearance is first created with the StyleGAN2 model from video extracted from the movie content; the character's personality and disposition are then analyzed from the extracted dialogue data, and external characteristics are assigned to the character through a physiognomic reading based on the yin-yang and five-elements theory. The character's motion is created using the OpenPose model, and the generated data is finally integrated to reproduce the content. We expect that many movie contents can be reproduced through the proposed system.
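
A high-level sketch of the four-stage pipeline the abstract outlines. Every function below is a stub standing in for an external model (StyleGAN2 for appearance, dialogue analysis for disposition, physiognomy rules, OpenPose for motion), so names and return values are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Character:
    appearance: str   # stand-in for StyleGAN2-generated imagery
    disposition: str  # personality inferred from dialogue
    motion: str       # stand-in for an OpenPose motion sequence

def generate_appearance(frames):          # stage 1: StyleGAN2 (stub)
    return "base face from %d frames" % len(frames)

def analyze_disposition(dialogue):        # stage 2: dialogue analysis (stub)
    return "bold" if any("!" in line for line in dialogue) else "calm"

def apply_physiognomy(appearance, disposition):  # stage 3: five-elements rules
    return f"{appearance}, styled for a {disposition} character"

def generate_motion(frames):              # stage 4: OpenPose (stub)
    return "motion sequence over %d frames" % len(frames)

def reproduce(frames, dialogue):
    disposition = analyze_disposition(dialogue)
    appearance = apply_physiognomy(generate_appearance(frames), disposition)
    return Character(appearance, disposition, generate_motion(frames))

print(reproduce(["f1", "f2"], ["To the ship!", "Quietly now."]))
```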

A Case Study on AI-Driven <DEEPMOTION> Motion Capture Technology

  • Chen Xi;Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication / v.16 no.2 / pp.87-92 / 2024
  • The rapid development of artificial intelligence technology in recent years is evident, from the emergence of ChatGPT to innovations like Midjourney, Stable Diffusion, and the upcoming Sora text-to-video technology from OpenAI. Animation capture technology, driven by this AI trend, is undergoing significant advancements that accelerate the progress of the animation industry. Through an analysis of the current application of DEEPMOTION, this paper explores the development direction of AI motion capture technology, analyzes issues such as errors in multi-person motion capture, and examines its broad prospects. As AI technology continues to advance, it can recognize and track complex movements and expressions faster and more accurately, reducing human error and enhancing processing speed and efficiency. This advancement lowers technological barriers and accelerates the fusion of the virtual and real worlds.

Fake News Detection on Social Media using Video Information: Focused on YouTube (영상정보를 활용한 소셜 미디어상에서의 가짜 뉴스 탐지: 유튜브를 중심으로)

  • Chang, Yoon Ho;Choi, Byoung Gu
    • The Journal of Information Systems / v.32 no.2 / pp.87-108 / 2023
  • Purpose: The main purpose of this study is to improve fake news detection performance by using video information, overcoming the limitations of extant text- and image-oriented studies that do not reflect the latest news consumption trends. Design/methodology/approach: This study collected video clips and related information, including news scripts, speakers' facial expressions, and video metadata, from YouTube to develop a fake news detection model. Based on the collected data, seven combinations of this information (including scripts; video metadata; facial expressions; scripts and video metadata; scripts and facial expressions; and scripts, video metadata, and facial expressions) were used as input for training and evaluation. The input data was analyzed using six models, including support vector machine and deep neural network. The area under the curve (AUC) was used to evaluate the performance of the classification models. Findings: The results showed that the AUC and accuracy values of the three-feature combination (scripts, video metadata, and facial expressions) were the highest for the logistic regression, naïve Bayes, and deep neural network models. This implies that fake news detection can be improved by using video information (video metadata and facial expressions). The sample size of this study was relatively small; the generalizability of the results would be enhanced with a larger sample.
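
The evaluation design (train classifiers on feature combinations and compare AUC) can be sketched with scikit-learn. The synthetic features below stand in for the script, metadata, and facial-expression features; real feature extraction is out of scope and the labels are random.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
features = {
    "script": rng.random((n, 20)),
    "metadata": rng.random((n, 5)),
    "facial": rng.random((n, 10)),
}
y = rng.integers(0, 2, n)  # 1 = fake news, 0 = real (synthetic labels)

# Evaluate every non-empty combination of the three feature groups.
for r in range(1, 4):
    for combo in combinations(features, r):
        X = np.hstack([features[k] for k in combo])
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(f"{'+'.join(combo):>24s}  AUC={auc:.3f}")
```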

RAVIP: Real-Time AI Vision Platform for Heterogeneous Multi-Channel Video Stream

  • Lee, Jeonghun;Hwang, Kwang-il
    • Journal of Information Processing Systems / v.17 no.2 / pp.227-241 / 2021
  • Object detection techniques based on deep learning, such as YOLO, achieve high detection performance and precision on a single-channel video stream. To scale to real-time multi-channel object detection, however, high-performance hardware is usually required. In this paper, we propose a novel back-end server framework, the real-time AI vision platform (RAVIP), which extends object detection from a single channel to multiple simultaneous channels and works well even on low-end server hardware. RAVIP assembles appropriate component modules from the RODEM (real-time object detection module) Base to create a per-channel instance for each channel, enabling efficient parallelization of object detection instances on limited hardware resources through continuous monitoring of resource utilization. Practical experiments show that RAVIP can optimize CPU, GPU, and memory utilization while providing object detection services on multiple channels, and that it sustains 25 FPS across all 16 channels simultaneously.
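
In the spirit of RAVIP's per-channel instances, the sketch below runs one worker per video channel, each draining its own frame queue. The detector is a stub and all names are illustrative assumptions, not RAVIP's actual API; a real system would plug in a model such as YOLO per channel.

```python
import queue
import threading
import time

def detect(frame):
    """Stub detector; stands in for a real per-channel model instance."""
    time.sleep(0.01)             # simulate inference latency
    return [("object", 0.9)]     # (label, confidence)

def channel_worker(channel_id, frames):
    for frame in iter(frames.get, None):   # None is the shutdown signal
        results = detect(frame)
        print(f"channel {channel_id}: {len(results)} detections")

channels = [queue.Queue() for _ in range(4)]   # 4 simulated video streams
workers = [threading.Thread(target=channel_worker, args=(i, q))
           for i, q in enumerate(channels)]
for w in workers:
    w.start()

for q in channels:                 # feed a few dummy frames per channel
    for frame_no in range(3):
        q.put(f"frame-{frame_no}")
    q.put(None)                    # shut the worker down
for w in workers:
    w.join()
```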

GENERATION OF FUTURE MAGNETOGRAMS FROM PREVIOUS SDO/HMI DATA USING DEEP LEARNING

  • Jeon, Seonggyeong;Moon, Yong-Jae;Park, Eunsu;Shin, Kyungin;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.82.3-82.3 / 2019
  • In this study, we generate future full-disk magnetograms 12, 24, 36, and 48 hours in advance from SDO/HMI images using deep learning. To do so, we apply the conditional generative adversarial network (cGAN) algorithm to a series of SDO/HMI magnetograms. We use SDO/HMI data from 2011 to 2016 to train four models. The models generate images for 2017 HMI data, which we compare with the actual HMI magnetograms for evaluation. The AI-generated images from each model are very similar to the actual images: the average correlation coefficient between the generated and actual images over about 600 data sets is about 0.85 for all four models. We are examining hundreds of active regions for a more detailed comparison. In the future, we will use pix2pixHD and video-to-video translation networks for image prediction.
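
The reported metric can be reproduced in miniature: the pixel-wise Pearson correlation between a generated magnetogram and the actual one, averaged over the data set. The arrays below are synthetic stand-ins for SDO/HMI magnetograms, not solar data.

```python
import numpy as np

def correlation(generated, actual):
    """Pearson correlation between two images, flattened to 1-D."""
    return np.corrcoef(generated.ravel(), actual.ravel())[0, 1]

rng = np.random.default_rng(1)
actual = rng.normal(0, 100, (64, 64))                  # fake magnetogram
generated = actual + rng.normal(0, 50, actual.shape)   # imperfect prediction
print(f"correlation: {correlation(generated, actual):.2f}")
```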
