• Title/Summary/Keyword: video content

Search Results: 1,260

Development and Distribution of Deep Fake e-Learning Contents Videos Using Open-Source Tools

  • HO, Won;WOO, Ho-Sung;LEE, Dae-Hyun;KIM, Yong
    • Journal of Distribution Science
    • /
    • v.20 no.11
    • /
    • pp.121-129
    • /
    • 2022
  • Purpose: Artificial intelligence is widely used, particularly the popular neural network approach known as Deep learning. Improvements in computing speed and capacity have accelerated the progress of Deep learning applications. In education, Deep learning offers various possibilities for creating and managing educational content and services that can replace human cognitive activity. Among Deep learning applications, Deep fake technology is used to combine and synchronize human faces with voices. This paper shows how to develop e-Learning content videos using these technologies and open-source tools. Research design, data, and methodology: This paper proposes a four-step development process, presented step by step in the Google Colab environment with source code. The technology can produce various video styles, and its advantage is that the characters in a video can be extended to historical figures, celebrities, or even movie heroes, producing immersive videos. Results: Prototypes for each case were designed, developed, presented, and shared on YouTube. Conclusions: The method and process of creating e-Learning video content from image, video, and audio files using open-source Deep fake technology were successfully implemented.
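The four-step workflow outlined in this abstract can be sketched as a simple orchestration script. All function names below (`synthesize_speech`, `animate_face`, `sync_lips`, `assemble_video`) are hypothetical placeholders standing in for the open-source tools the paper chains together; they are not the authors' actual code.

```python
# Hypothetical sketch of a four-step deep-fake e-learning pipeline.
# Each step is a stub standing in for an open-source tool invocation.

def synthesize_speech(script_text):
    """Step 1: text-to-speech for the lecture script (placeholder)."""
    return {"audio": f"tts({script_text})"}

def animate_face(portrait_image, driving_video):
    """Step 2: drive a still portrait with a presenter video (placeholder)."""
    return {"video": f"animate({portrait_image},{driving_video})"}

def sync_lips(face_video, audio):
    """Step 3: synchronize mouth movements with the audio (placeholder)."""
    return {"video": face_video["video"], "audio": audio["audio"]}

def assemble_video(av, slides):
    """Step 4: overlay slides and export the final e-learning video."""
    return {**av, "slides": slides}

audio = synthesize_speech("Welcome to the lecture.")
face = animate_face("einstein.png", "presenter.mp4")
synced = sync_lips(face, audio)
final = assemble_video(synced, "lecture_slides.pdf")
```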

An Exploratory Study on Video Information Literacy (영상정보 활용능력에 관한 탐색적 연구)

  • Min Kyung Na;Jee Yeon Lee
    • Journal of the Korean Society for Information Management
    • /
    • v.41 no.2
    • /
    • pp.19-46
    • /
    • 2024
  • In this study, we conducted a literature review and exploratory research to identify the characteristics of recently popular video information and to propose the basic capabilities required for video information literacy. Through the literature review, the distinct characteristics of video information were examined from various perspectives, differentiating it from other types of information. We then conducted one-on-one, in-depth, semi-structured interviews with 16 participants ranging from their teens to their 50s to collect their video usage experiences. The interview contents were categorized to create a codebook, and content analysis was performed. Based on this analysis, we derived the characteristics of video information and classified them into properties of the video itself and characteristics related to video information usage. Based on these characteristics, this study proposed the basic capabilities required for video information literacy.

Detecting near-duplication Video Using Motion and Image Pattern Descriptor (움직임과 영상 패턴 서술자를 이용한 중복 동영상 검출)

  • Jin, Ju-Kyong;Na, Sang-Il;Jenong, Dong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.4
    • /
    • pp.107-115
    • /
    • 2011
  • In this paper, we propose a fast and efficient algorithm for detecting near-duplicate videos using content-based retrieval in a large-scale video database. To handle large amounts of video easily, we split each video into small segments using scene change detection. For video services and copyright-related business models, technology is needed that detects near-duplicates, i.e., longer matched videos, rather than searching for videos containing only a short part or a single frame of the original. To detect near-duplicate videos, we propose a motion distribution descriptor and a frame descriptor for each video segment. The motion distribution descriptor is constructed from the motion vectors of macroblocks obtained during the video decoding process. When matching descriptors, we use the motion distribution descriptor as a filter to improve matching speed. However, the motion distribution has low discriminability. To improve discrimination, we perform identification using frame descriptors extracted from representative frames selected within each scene segment. The proposed algorithm shows a high success rate and a low false alarm rate. In addition, since matching with these descriptors is very fast, the algorithm is suitable for practical applications.
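As a rough illustration of the two-stage matching this abstract describes (a cheap motion-distribution filter followed by a more expensive frame-descriptor comparison), a minimal sketch might look like the following. The histogram construction, distance measures, and thresholds are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def motion_histogram(motion_vectors, bins=8):
    """Coarse descriptor: normalized histogram of motion-vector angles
    in a segment (an illustrative stand-in for the paper's motion
    distribution descriptor)."""
    angles = np.arctan2(motion_vectors[:, 1], motion_vectors[:, 0])
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def two_stage_match(query, database, coarse_thresh=0.2, fine_thresh=0.1):
    """Stage 1 filters candidates on the cheap motion histogram;
    stage 2 verifies survivors with the frame descriptor."""
    candidates = [
        seg for seg in database
        if np.abs(query["motion"] - seg["motion"]).sum() < coarse_thresh
    ]
    return [
        seg["id"] for seg in candidates
        if np.abs(query["frame"] - seg["frame"]).mean() < fine_thresh
    ]
```

The point of the two-stage design is that the coarse histogram distance discards most of the database before the slower frame-level comparison runs.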

MPEG2-TS to RTP Transformation and Application system (MPEG2-TS의 RTP 변환 및 적용 시스템)

  • Im, Sung-Jin;Kim, Ho-Kyom;Hong, Jin-Woo;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.10a
    • /
    • pp.643-645
    • /
    • 2010
  • Internet-based multimedia services such as IPTV are expanding with the development of technologies supporting the convergence of broadcasting and telecommunications, and the demand for control technologies is growing. In particular, for real-time TV broadcasting, multicast control technologies supporting authentication and resource control are expected to be developed, along with bidirectional services that enhance the value of various offerings. Internet-based transmission systems deliver video content using RTP (Real-time Transport Protocol). Within the standardization body IETF (Internet Engineering Task Force), separate RTP payload format standards are established for various audio and video formats, and standardization of the "RTP Payload Format for SVC (Scalable Video Coding) Video" for scalable video content is currently in progress. In this paper, to improve the quality of broadcasting and telecommunication systems, we design and implement a system that transforms existing MPEG2-TS streams into RTP so that upper-layer applications can adapt to changing conditions, providing end-to-end (ETE) QoS (Quality of Service) for various content delivered to various consumer devices.
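The TS-to-RTP transformation at the heart of this paper can be sketched in a few lines. This is not the authors' implementation; it is a minimal encapsulation sketch following the common RTP conventions for MPEG-2 TS (12-byte RTP header, static payload type 33, an integral number of 188-byte TS packets per RTP packet). The per-packet timestamp increment is an assumed value.

```python
import struct

TS_PACKET_SIZE = 188
TS_PER_RTP = 7          # 7 x 188 = 1316 bytes fits a typical 1500-byte MTU
PT_MP2T = 33            # static RTP payload type for MPEG-2 TS

def rtp_packetize(ts_stream, ssrc=0x12345678):
    """Wrap an MPEG2-TS byte stream into RTP packets: a 12-byte RTP
    header followed by an integral number of 188-byte TS packets."""
    packets = []
    seq, timestamp = 0, 0
    chunk = TS_PACKET_SIZE * TS_PER_RTP
    for off in range(0, len(ts_stream), chunk):
        payload = ts_stream[off:off + chunk]
        header = struct.pack(
            "!BBHII",
            0x80,                      # V=2, no padding/extension, CC=0
            PT_MP2T,                   # M=0, payload type 33
            seq & 0xFFFF,              # sequence number
            timestamp & 0xFFFFFFFF,    # 90 kHz media clock
            ssrc,
        )
        packets.append(header + payload)
        seq += 1
        timestamp += 3000              # assumption: ~30 fps (90000 / 30)
    return packets
```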


VVC Intra Triangular Partitioning Prediction for Screen Contents (스크린 콘텐츠를 위한 VVC 화면내 삼각형 분할 예측 방법)

  • Choe, Jaeryun;Gwon, Daehyeok;Han, Heeji;Lee, Hahyun;Kang, Jungwon;Choi, Haechul
    • Journal of Broadcast Engineering
    • /
    • v.25 no.3
    • /
    • pp.325-337
    • /
    • 2020
  • Versatile Video Coding (VVC) is a new video coding standard developed by the Joint Video Experts Team of ISO/IEC and ITU-T, and it has adopted various technologies, including screen content coding tools. Screen contents have the feature that blocks are likely to contain diagonal edges, as in character regions. If triangular partitioning coding is allowed for screen contents with this feature, coding efficiency should increase. This paper proposes an intra prediction method using triangular partitioning prediction for screen content coding. Similar to the Triangular Prediction Mode of VVC, which supports triangular partitioning prediction, the proposed method derives two prediction blocks using the Horizontal and Vertical modes and then blends them using triangle-shaped masks to generate the final prediction block. In experiments on VVC screen content test sequences, the proposed method achieved average coding gains of 1.86%, 1.49%, and 1.55% for the Y, U, and V components, respectively.
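The blending step described above can be sketched with NumPy: build a horizontal and a vertical prediction block from the reference samples, then combine them with complementary triangular masks along the diagonal. A hard 0/1 mask is used below for clarity; the actual proposal (like VVC's Triangular Prediction Mode) uses weighted blending near the diagonal.

```python
import numpy as np

def triangular_blend(left_col, top_row):
    """Sketch of intra triangular partitioning prediction: derive
    Horizontal- and Vertical-mode prediction blocks, then blend them
    with complementary triangular masks along the main diagonal."""
    n = len(top_row)
    pred_h = np.tile(left_col.reshape(-1, 1), (1, n))  # Horizontal mode
    pred_v = np.tile(top_row.reshape(1, -1), (n, 1))   # Vertical mode
    # Mask is 1 below the diagonal (Horizontal wins), 0 on/above it
    # (Vertical wins); real blending uses graded weights.
    rows, cols = np.indices((n, n))
    mask = (rows > cols).astype(float)
    return mask * pred_h + (1 - mask) * pred_v
```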

A Study on Determinants of VR Video Content Popularity (VR 영상 조회수 결정요인 연구)

  • Soojeong Kim;Chanhee Kwak;Minhyung Lee;Junyeong Lee;Heeseok Lee
    • Information Systems Review
    • /
    • v.22 no.2
    • /
    • pp.25-41
    • /
    • 2020
  • Along with expectations about 5G network commercialization, interest in realistic and immersive media industries such as virtual reality (VR) is increasing. However, most studies on VR still focus on video technologies rather than on the factors driving popularity and consumption. The main objective of this research is therefore to identify meaningful factors that affect the view counts of VR videos and to provide business implications for the content strategies of VR video creators and service providers. Using a regression analysis of 700 VR videos, this study identifies the major factors that affect view counts. User assessment factors, such as the numbers of likes and sickness reports, have a strong influence on view counts. In addition, both general information factors (video length and age) and content characteristic factors (series, one source multi use (OSMU), and category) are influential. The findings suggest that recommendation and curation based on user assessments should be supported to increase the popularity and diffusion of VR video streaming.
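The kind of regression this study runs can be illustrated on synthetic data. The predictors, coefficients, and generative model below are invented for illustration only; they are not the study's variables or results, only a demonstration that ordinary least squares recovers the sign and size of each factor's effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 700  # the study analyzed 700 VR videos

# Synthetic stand-ins for the study's predictors (illustration only).
likes = rng.poisson(50, n).astype(float)
sickness = rng.poisson(5, n).astype(float)
length = rng.uniform(1, 30, n)

# Hypothetical generative model for log view counts: likes help,
# sickness reports hurt, length has a small positive effect.
log_views = (2.0 + 0.03 * likes - 0.05 * sickness + 0.01 * length
             + rng.normal(0, 0.1, n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), likes, sickness, length])
coef, *_ = np.linalg.lstsq(X, log_views, rcond=None)
```

With enough observations, the fitted coefficients come out close to the planted values, mirroring the study's finding that likes raise view counts while sickness reports lower them.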

A User Driven Adaptable Bandwidth Video System for Remote Medical Diagnosis System (원격 의료 진단 시스템을 위한 사용자 기반 적응 대역폭 비디오 시스템)

  • Chung, Yeongjee;Wright, Dustin;Ozturk, Yusuf
    • Journal of Information Technology Services
    • /
    • v.14 no.1
    • /
    • pp.99-113
    • /
    • 2015
  • Adaptive bitrate (ABR) streaming has become an important and prevalent feature of many multimedia delivery systems, with content providers such as Netflix and Amazon using ABR streaming to increase bandwidth efficiency and maximize the user experience when channel conditions are not ideal. One area where such systems could improve is the delivery of live video with closed-loop cognitive control of video encoding. In this paper, we present a streaming camera system that provides spatially and temporally adaptive video streams, learning the user's preferences in order to make intelligent scaling decisions. The system employs a hardware-based H.264/AVC encoder for video compression. The encoding parameters can be configured by the user, or by the cognitive system on behalf of the user, when the bandwidth changes. A cognitive video client developed in this study learns the user's preferences (i.e., video size versus frame rate) over time and intelligently adapts the encoding parameters when channel conditions change. We demonstrate that the cognitive decision system can control video bandwidth by altering the spatial and temporal resolution, as well as make appropriate scaling decisions.
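The core trade-off the cognitive client learns (video size versus frame rate under a bandwidth budget) can be sketched as a profile-selection rule. The operating points and bitrates below are illustrative assumptions, not the paper's measured profiles, and the learned preference is reduced to a single boolean for clarity.

```python
def choose_encoding(bandwidth_kbps, prefers_resolution, profiles=None):
    """Pick an encoding profile (width, height, fps, required kbps)
    under a bandwidth budget. When the budget forces a trade-off, a
    user who prefers resolution keeps the larger frame at a lower
    frame rate, and vice versa."""
    if profiles is None:
        profiles = [  # illustrative operating points
            (1280, 720, 30, 2500),
            (1280, 720, 15, 1500),
            (640, 360, 30, 1200),
            (640, 360, 15, 700),
        ]
    feasible = [p for p in profiles if p[3] <= bandwidth_kbps]
    if not feasible:
        # Nothing fits: fall back to the cheapest profile.
        return min(profiles, key=lambda p: p[3])
    if prefers_resolution:
        return max(feasible, key=lambda p: (p[0] * p[1], p[2]))
    return max(feasible, key=lambda p: (p[2], p[0] * p[1]))
```

At 1600 kbps, for example, a resolution-preferring user gets 720p at 15 fps while a smoothness-preferring user gets 360p at 30 fps.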

Design and Implementation of the Video Data Model Based on Temporal Relationship (시간 관계성을 기반으로 한 비디오 데이터 모델의 설계 및 구현)

  • 최지희;용환승
    • Journal of Korea Multimedia Society
    • /
    • v.2 no.3
    • /
    • pp.252-264
    • /
    • 1999
  • The key characteristic of video data is its spatial and temporal relationships. In this paper, we propose a content-based video retrieval system built on a hierarchical data structure for specifying the temporal semantics of video data. The system can represent the temporal relationships within a video's hierarchical structure, between video objects, and among moving video objects. We implemented these temporal relationships in an object-relational database management system using inheritance, encapsulation, function overloading, and other features, so that more extensive and richer temporal functions can be used to support a broad range of temporal queries.
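The interval relationships such a temporal video data model must represent can be sketched as a small classifier over (start, end) pairs, in the spirit of Allen's interval relations commonly used for video temporal semantics. The relation names and the reduced set below are illustrative, not the paper's exact model.

```python
def temporal_relation(a, b):
    """Classify the temporal relationship between two video intervals
    given as (start, end) pairs. A reduced set of Allen-style
    relations is used for illustration."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:
        return "before"     # a ends strictly before b starts
    if e1 == s2:
        return "meets"      # a ends exactly where b starts
    if s1 == s2 and e1 == e2:
        return "equals"
    if s1 <= s2 and e1 >= e2:
        return "contains"   # b lies entirely within a
    if s1 < s2 < e1 < e2:
        return "overlaps"   # a starts first, they share a middle span
    return "other"
```

A query like "find all shots that overlap the appearance of object X" then reduces to filtering stored intervals by the relation this function returns.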


A Method for Identification of Harmful Video Images Using a 2-Dimensional Projection Map

  • Kim, Chang-Geun;Kim, Soung-Gyun;Kim, Hyun-Ju
    • Journal of information and communication convergence engineering
    • /
    • v.11 no.1
    • /
    • pp.62-68
    • /
    • 2013
  • This paper proposes a method for identifying harmful video images based on the degree of harmfulness of the video content. To extract harmful candidate frames from the video effectively, we use a video color extraction method based on a projection map. The identification procedure has five steps: first, extract the I-frames from the video and map them onto the projection map; next, calculate the similarity and select potentially harmful frames; then identify harmful images by comparing the similarity measurements. The method estimates the similarity between the extracted frames and normative images using a critical value on the projection map. In our experiments, we show how harmful candidate frames are extracted and compared with normative images. The experimental data show that the identification method based on the 2-dimensional projection map outperforms the color histogram technique in harmful image detection.
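The projection-map comparison can be illustrated as follows: project each frame onto its rows and columns to form a compact signature, then flag a candidate I-frame whose signature falls within a critical distance of a normative example. The distance measure and threshold below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def projection_map(image):
    """Project a 2-D (grayscale) frame onto its rows and columns;
    the concatenated 1-D profiles form a compact frame signature."""
    return np.concatenate([image.mean(axis=0), image.mean(axis=1)])

def is_harmful(frame, normative_maps, threshold=10.0):
    """Flag a candidate I-frame when its projection map is closer than
    `threshold` (mean absolute difference) to any normative example."""
    sig = projection_map(frame)
    return any(np.abs(sig - ref).mean() < threshold
               for ref in normative_maps)
```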

CNN-based Visual/Auditory Feature Fusion Method with Frame Selection for Classifying Video Events

  • Choe, Giseok;Lee, Seungbin;Nang, Jongho
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.3
    • /
    • pp.1689-1701
    • /
    • 2019
  • In recent years, personal videos have been shared online due to the popularity of portable devices such as smartphones and action cameras. A recent report predicted that 80% of Internet traffic would be video content by the year 2021. Several studies have addressed the detection of main video events in order to manage videos at large scale, and they show fairly good performance in certain genres. However, the methods used in previous studies have difficulty detecting events in personal videos, because the characteristics and genres of personal videos vary widely. In our research, we found that adding a dataset with the right perspective improved performance, and that performance also improves depending on how keyframes are extracted from the video. We selected frame segments that can represent a video, considering the characteristics of personal video. From each frame segment, object, location, food, and audio features were extracted, and representative vectors were generated through a CNN-based recurrent model and a fusion module. The proposed method achieved 78.4% mAP in experiments on LSVC data.
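The fusion idea at the end of this abstract can be sketched in its simplest form: pool per-segment visual and audio feature vectors over time, then concatenate them into one representative vector per video. The paper's CNN-based recurrent model and learned fusion module are replaced here by plain averaging, purely to show the data flow.

```python
import numpy as np

def fuse_segment_features(visual_feats, audio_feats):
    """Late-fusion sketch: average per-segment feature vectors over
    time, then concatenate the visual and audio summaries into one
    representative vector for the whole video."""
    v = np.mean(visual_feats, axis=0)   # (n_segments, d_v) -> (d_v,)
    a = np.mean(audio_feats, axis=0)    # (n_segments, d_a) -> (d_a,)
    return np.concatenate([v, a])
```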