• Title/Summary/Keyword: Video Platforms


Secure and Energy-Efficient MPEG Encoding using Multicore Platforms (멀티코어를 이용한 안전하고 에너지 효율적인 MPEG 인코딩)

  • Lee, Sung-Ju;Lee, Eun-Ji;Hong, Seung-Woo;Choi, Han-Na;Chung, Yong-Wha
    • Journal of the Korea Institute of Information Security & Cryptology / v.20 no.3 / pp.113-120 / 2010
  • Content security and privacy protection are important issues in emerging network-based video surveillance applications. In particular, satisfying both the real-time constraint and energy efficiency with embedded-system-based video sensors is challenging, since battery-operated sensors must compress and protect video content in real time. In this paper, we propose a multicore-based solution to compress and protect video surveillance data, and evaluate its effectiveness in terms of both the real-time constraint and energy efficiency. Based on experimental results with MPEG2/AES software, we confirm that the multicore-based solution can improve the energy efficiency of a single-core-based solution by a factor of 30 under the real-time constraint.
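The compress-then-protect pipeline the abstract describes can be sketched by distributing per-frame work across cores. This is an illustrative stand-in, not the paper's implementation: `zlib` stands in for the MPEG2 encoder and a toy XOR stream cipher stands in for AES; the key and worker count are assumptions.

```python
import zlib
from concurrent.futures import ProcessPoolExecutor
from itertools import cycle

KEY = b"demo-key"  # hypothetical key; a real system would use AES with proper key management

def xor_encrypt(data: bytes, key: bytes = KEY) -> bytes:
    """Toy XOR stream cipher standing in for AES (symmetric: also decrypts)."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def compress_and_protect(frame: bytes) -> bytes:
    """Compress one frame (zlib stands in for MPEG2), then encrypt it."""
    return xor_encrypt(zlib.compress(frame))

def encode_parallel(frames, workers=4):
    """Distribute per-frame compression + encryption across CPU cores."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compress_and_protect, frames))
```

Because each frame is processed independently, the per-frame work parallelizes cleanly, which is what lets a multicore sensor meet the real-time deadline at a lower clock (and hence lower energy) than a single core.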

Analysis of time-series user request pattern dataset for MEC-based video caching scenario (MEC 기반 비디오 캐시 시나리오를 위한 시계열 사용자 요청 패턴 데이터 세트 분석)

  • Akbar, Waleed;Muhammad, Afaq;Song, Wang-Cheol
    • KNOM Review / v.24 no.1 / pp.20-28 / 2021
  • Extensive use of social media applications and mobile devices continues to increase data traffic. Social media applications generate an endless and massive amount of multimedia traffic, specifically video traffic, with platforms such as YouTube, Daily Motion, and Netflix among the largest sources. On these platforms, only a few popular videos are requested many times compared to other videos. These popular videos should be cached in the user's vicinity to meet continuous user demand. MEC has emerged as an essential paradigm for handling consistent user demand and caching videos in user proximity. The problem is to understand how user demand patterns vary with time. This paper analyzes three publicly available datasets, MovieLens 20M, MovieLens 100K, and The Movies Dataset, to find the user request pattern over time. We extract hourly, daily, monthly, and yearly trends from all the datasets. The resulting patterns can be used in other research when generating and analyzing user request patterns in MEC-based video caching scenarios.
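The hourly-trend extraction described above amounts to bucketing request timestamps by hour of day. A minimal sketch, assuming Unix timestamps like the `timestamp` column in MovieLens ratings (the paper's exact aggregation method is not specified here):

```python
from collections import Counter
from datetime import datetime, timezone

def hourly_pattern(timestamps):
    """Count requests per hour of day (0-23) from Unix timestamps,
    yielding a 24-element hourly demand profile."""
    hours = Counter()
    for ts in timestamps:
        hours[datetime.fromtimestamp(ts, tz=timezone.utc).hour] += 1
    return [hours.get(h, 0) for h in range(24)]
```

Daily, monthly, and yearly trends follow the same pattern by keying on `.weekday()`, `.month`, or `.year` instead of `.hour`.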

XML Based Standard Protocol for Remote Control (확장성표기언어(XML) 기반의 원격제어규약 표준)

  • Choi, Jung-In
    • The Transactions of the Korean Institute of Electrical Engineers D / v.55 no.5 / pp.216-219 / 2006
  • This paper presents an XML-based standard specification of a remote control protocol for home appliances. At the framework level, the XML protocol provides a useful bridge between services and platforms. The proposed protocol has been implemented in a personal video recorder for a remote recording service. Its minimal network and processor overhead suggests its potential as a global standard for remote control. The proposed XML protocol is designated RCXML in this paper.
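An XML remote-control message of this kind can be built and parsed with a few lines of standard tooling. The element and attribute names below are assumptions for illustration only; the actual RCXML schema is defined in the paper, not here.

```python
import xml.etree.ElementTree as ET

def build_command(device: str, action: str, params: dict) -> str:
    """Serialize a remote-control command as XML (hypothetical RCXML-like schema)."""
    root = ET.Element("rcxml", version="1.0")
    cmd = ET.SubElement(root, "command", device=device, action=action)
    for name, value in params.items():
        ET.SubElement(cmd, "param", name=name).text = str(value)
    return ET.tostring(root, encoding="unicode")

def parse_command(xml_text: str):
    """Recover (device, action, params) from a serialized command."""
    cmd = ET.fromstring(xml_text).find("command")
    params = {p.get("name"): p.text for p in cmd.findall("param")}
    return cmd.get("device"), cmd.get("action"), params
```

Text-based messages like this keep the parser tiny on the appliance side, which is consistent with the minimal-overhead result the abstract reports.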

Augmented Reality Annotation for Real-Time Collaboration System

  • Cao, Dongxing;Kim, Sangwook
    • Journal of Korea Multimedia Society / v.23 no.3 / pp.483-489 / 2020
  • Advances in mobile phone hardware and network connectivity have made communication increasingly convenient. Compared to pictures or text, people prefer to share videos to convey information. To make intentions clearer, annotating comments directly on video is an important capability. There have recently been many attempts to make annotations on video, but these previous works have notable limitations: they do not support user-defined handwritten annotations or annotation of local videos. We therefore propose an augmented reality based real-time video annotation system that allows users to freely make annotations directly on video. The contribution of this work is a real-time video annotation system built on recent augmented reality platforms that not only enables drawing geometric shapes on video in real time but also drastically reduces production costs. For practical use, we propose a real-time collaboration system based on the proposed annotation method. Experimental results show that the proposed annotation method meets the real-time, accuracy, and robustness requirements of the collaboration system.
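A core data-structure question in such a system is how a freehand stroke stays attached to the right moment of the video. A minimal sketch of one plausible representation (the field names and the 0.5 s display window are illustrative assumptions, not the paper's data model):

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A freehand annotation anchored to a video timestamp."""
    time_sec: float                              # video position the stroke belongs to
    points: list = field(default_factory=list)   # stroke as normalized (x, y) in [0, 1]
    color: str = "#ff0000"

def annotations_at(annotations, t, window=0.5):
    """Return the annotations whose anchor lies within `window` seconds
    of playback time t, i.e. those the player should currently render."""
    return [a for a in annotations if abs(a.time_sec - t) <= window]
```

Storing points in normalized coordinates lets every collaborator render the same stroke correctly regardless of their screen resolution.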

Design of an Automatic Summary System for Minutes Using Virtual Reality

  • Amsuk Oh
    • Journal of information and communication convergence engineering / v.21 no.3 / pp.239-243 / 2023
  • Owing to an environment in which meeting people face-to-face has become difficult, on-tact communication has been activated, and online videoconferencing platforms have become indispensable collaboration tools. Although costs have dropped and productivity has improved, communication remains poor. Recently, various companies, including existing online videoconferencing companies, have attempted to solve these communication problems by building videoconferencing platforms within virtual reality (VR) spaces. However, while VR videoconferencing platforms improve upon the benefits of existing video conferences, minutes must still be summarized manually because no minute-summarization function exists. Therefore, this study proposes a method for building a meeting-minutes summary system for a VR videoconferencing platform, for which no prior application cases exist. This study aims to solve the problem of communication difficulties by combining VR, a metaverse technology, with an existing online videoconferencing platform.
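To make the minutes-summarization idea concrete, here is a naive extractive summarizer: score each sentence by the frequency of its words and keep the top-k in original order. This is a generic stand-in to illustrate the task, not the method the paper proposes.

```python
import re
from collections import Counter

def summarize(minutes: str, k: int = 2) -> list:
    """Frequency-based extractive summary: keep the k highest-scoring
    sentences, preserving their original order in the minutes."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", minutes) if s.strip()]
    # Word frequencies over the whole document act as a crude importance signal.
    freq = Counter(w.lower() for s in sentences for w in re.findall(r"\w+", s))
    scored = sorted(range(len(sentences)),
                    key=lambda i: -sum(freq[w.lower()]
                                       for w in re.findall(r"\w+", sentences[i])))
    return [sentences[i] for i in sorted(scored[:k])]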

Enhancing Video Storyboarding with Artificial Intelligence: An Integrated Approach Using ChatGPT and Midjourney within AiSAC

  • Sukchang Lee
    • International Journal of Advanced Culture Technology / v.11 no.3 / pp.253-259 / 2023
  • The incorporation of AI into video storyboard creation has been increasing recently. Traditionally, the production of storyboards requires significant time, cost, and specialized expertise. However, the integration of AI can amplify the efficiency of storyboard creation and enhance storytelling. In Korea, AiSAC stands at the forefront of AI-driven storyboard platforms, boasting the capability to generate realistic images built on open-dataset foundations. Yet a notable limitation is the difficulty of intricately conveying a director's vision within the storyboard. To address this challenge, we propose applying the image generation features of ChatGPT and Midjourney to AiSAC. Through this research, we aim to enhance the efficiency of storyboard production and refine the intricacy of expression, thereby facilitating advancements in the video production process.

Comparison of the Characteristics of Three Premium Large-Format Platforms IMAX, Screen X and 360 Degrees Circular Screen (PLF 플랫폼 아이맥스, 스크린 X, 360도 서큘러 스크린의 특징 비교 연구)

  • Shan, Xinyi;Chung, Jean-Hun
    • Journal of Digital Convergence / v.15 no.8 / pp.375-381 / 2017
  • The American film Beauty and the Beast grossed over 4,273,401 within 21 days of release. Profits in the video market are also growing rapidly. In this paper, we focus on PLF (Premium Large-Format) video technology, because it can help audiences experience a stronger sense of 'immersion' and enjoy a different visual feast. Among PLF video technologies, IMAX, Screen X, and 360-degree circular screens are the most important formats. A comparative analysis of these three formats shows that the biggest differences are the number of screens and their appearance. Based on these results, we can better understand the three kinds of PLF platforms and choose between them. In addition, further research on the manufacturing methods of PLF technology will be discussed.

Effects of YouTube Users' Mindfulness factors on the Continued Use Intention (유튜브 이용자의 마음챙김 요인이 지속이용의도에 미치는 영향)

  • Choi, Kyung Woong;Byeon, Benjamin;Kwon, Do-soon
    • Journal of Korea Society of Digital Industry and Information Management / v.17 no.3 / pp.39-61 / 2021
  • This study examines the effects of mindfulness factors on YouTube users. Video sharing platforms have recently grown rapidly, and YouTube has continued to grow despite the emergence of various competing platforms. YouTube, which represents these changes, provides an environment where anyone can make and distribute videos; these advantages draw in many viewers and lead more creators to make videos. The purpose of this study is to understand the factors influencing use of the YouTube service in Korea and to clarify the causal relationships among the factors that affect continued use intention through usefulness, confirmation, and satisfaction. To this end, we present a research model that applies key variables of mindfulness theory, which emphasizes that users should be aware of their emotions. To empirically verify the research model, a questionnaire was administered to university students and office workers with experience using YouTube. As a result, first, decentered attention was shown to have no positive effect on perceived usefulness and confirmation. Second, non-judgmental acceptance was shown to have a positive effect on perceived usefulness and confirmation. Third, current self-perception was shown to have no positive effect on perceived usefulness and confirmation. Fourth, concentration was shown to have no positive effect on perceived usefulness and confirmation. Finally, mindfulness was shown to have a positive effect on satisfaction and perceived usefulness. These results suggest that what users feel while using YouTube matters: when accepting content, users judge whether a video is helpful to them, and this shapes their expectations for the next video.

Video Highlight Prediction Using Multiple Time-Interval Information of Chat and Audio (채팅과 오디오의 다중 시구간 정보를 이용한 영상의 하이라이트 예측)

  • Kim, Eunyul;Lee, Gyemin
    • Journal of Broadcast Engineering / v.24 no.4 / pp.553-563 / 2019
  • As the number of videos uploaded on live streaming platforms rapidly increases, demand for highlight videos is growing as a way to improve the viewer experience. In this paper, we present novel methods for predicting highlights using chat logs and audio data in videos. The proposed models employ bi-directional LSTMs to understand the contextual flow of a video. We also propose using features over various time intervals to capture mid-to-long-term flows. The proposed methods are demonstrated on e-Sports and baseball videos collected from personal broadcasting platforms such as Twitch and Kakao TV. The results show that information from multiple time intervals is useful for predicting video highlights.
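The multiple time-interval idea can be sketched as computing chat-activity counts over several trailing windows at each moment; the resulting feature vector would then feed the bi-directional LSTM. The window lengths below are illustrative assumptions, not the paper's settings.

```python
def interval_features(chat_times, t, windows=(10, 30, 60)):
    """Chat-activity features at playback time t (seconds): the number
    of chat messages in each trailing window, so one vector captures
    short-, mid-, and longer-term context simultaneously."""
    return [sum(1 for ts in chat_times if t - w <= ts <= t) for w in windows]
```

A spike in the short window against a quiet long window marks a sudden burst of chat, which is exactly the kind of signal a highlight detector looks for.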

Suboptimal video coding for machines method based on selective activation of in-loop filter

  • Ayoung Kim;Eun-Vin An;Soon-heung Jung;Hyon-Gon Choo;Jeongil Seo;Kwang-deok Seo
    • ETRI Journal / v.46 no.3 / pp.538-549 / 2024
  • A conventional codec aims to increase the compression efficiency for transmission and storage while maintaining video quality. However, as the number of platforms using machine vision rapidly increases, a codec that increases the compression efficiency and maintains the accuracy of machine vision tasks must be devised. Hence, the Moving Picture Experts Group created a standardization process for video coding for machines (VCM) to reduce bitrates while maintaining the accuracy of machine vision tasks. In particular, in-loop filters have been developed for improving the subjective quality and machine vision task accuracy. However, the high computational complexity of in-loop filters limits the development of a high-performance VCM architecture. We analyze the effect of an in-loop filter on the VCM performance and propose a suboptimal VCM method based on the selective activation of in-loop filters. The proposed method reduces the computation time for video coding by approximately 5% when using the enhanced compression model and 2% when employing a Versatile Video Coding test model while maintaining the machine vision accuracy and compression efficiency of the VCM architecture.
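The selective-activation idea reduces to a per-unit decision: run the in-loop filter only where its benefit to the machine-vision task justifies the extra compute. A minimal sketch of such a gating rule; the `gain` estimate and threshold are illustrative stand-ins, since the paper's actual selection criterion is defined in its VCM architecture.

```python
def select_filters(blocks, threshold=0.01):
    """Return the IDs of coding units whose estimated machine-task
    accuracy gain from in-loop filtering meets the threshold; the
    filter is skipped (saving computation) everywhere else."""
    return [b["id"] for b in blocks if b["gain"] >= threshold]
```

Skipping the filter on low-gain units is what yields the roughly 5% (ECM) and 2% (VTM) coding-time reductions the abstract reports while preserving task accuracy.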