• Title/Summary/Keyword: video analysis

Temporal Anti-aliasing of a Stereoscopic 3D Video

  • Kim, Wook-Joong;Kim, Seong-Dae;Hur, Nam-Ho;Kim, Jin-Woong
    • ETRI Journal / v.31 no.1 / pp.1-9 / 2009
  • Frequency domain analysis is a fundamental procedure for understanding the characteristics of visual data. Several studies have been conducted on 2D videos, but analysis of stereoscopic 3D videos has rarely been carried out. In this paper, we derive the Fourier transform of a simplified 3D video signal and analyze how a 3D video is influenced by disparity and motion in terms of temporal aliasing. It is already known that object motion affects the temporal frequency characteristics of a time-varying image sequence. In our analysis, we show that a 3D video is influenced not only by motion but also by disparity. Based on this conclusion, we present a temporal anti-aliasing filter for 3D video. Since the human process of depth perception mainly determines the quality of a reproduced 3D image, 2D image processing techniques are not directly applicable to 3D images. The analysis presented in this paper will be useful for reducing undesirable visual artifacts in 3D video as well as for assisting the development of relevant technologies.
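
The filter in the paper follows from its Fourier-domain analysis of motion and disparity; as a rough illustration only, the sketch below applies a plain temporal box filter to each view of a frame sequence, with the window length `k` standing in for a cutoff that the paper would derive from motion and disparity.

```python
import numpy as np

def temporal_antialias(frames: np.ndarray, k: int = 3) -> np.ndarray:
    """Apply a simple temporal box filter to a (T, H, W) or (T, H, W, C) sequence.

    Generic temporal low-pass stand-in; the paper's filter is derived from
    its Fourier analysis of motion and disparity.
    """
    pad = k // 2
    # Pad along the time axis by repeating the edge frames.
    padded = np.concatenate([frames[:1]] * pad + [frames] + [frames[-1:]] * pad, axis=0)
    out = np.empty_like(frames, dtype=np.float64)
    for t in range(frames.shape[0]):
        out[t] = padded[t:t + k].mean(axis=0)   # average k neighboring frames
    return out.astype(frames.dtype)

# Example: filter each view of a stereo pair independently.
# left, right = load_stereo_views()  # hypothetical loader
# left_f, right_f = temporal_antialias(left), temporal_antialias(right)
```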

Frame Rearrangement Method by Time Information Remarked on Recovered Image (복원된 영상에 표기된 시간 정보에 의한 프레임 재정렬 기법)

  • Kim, Yong Jin;Lee, Jung Hwan;Byun, Jun Seok;Park, Nam In
    • Journal of Korea Multimedia Society / v.24 no.12 / pp.1641-1652 / 2021
  • To analyze a crime scene, digital evidence such as CCTV and black box (dashcam) footage plays a very important role. Such evidence is often damaged by device defects or intentional deletion. In these cases, the deleted video can be restored with well-known techniques such as frame-based recovery. In particular, when the storage medium is nearly full, video data is generally saved in fragmented form. If a fragmented video is recovered in units of individual images, the sequence of the recovered images may not be continuous. In this paper, we propose a new video restoration method that matches the sequence of the recovered images. First, the images are recovered through a frame-based recovery technique. Then, the time information marked on each image is extracted and recognized via optical character recognition (OCR). Finally, the recovered images are rearranged based on the time information obtained by OCR. For performance evaluation, we measure the recovery rate of the proposed video restoration method. The results show that the recovery rate for fragmented video ranges from a minimum of about 47% to a maximum of 98%.
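
As a rough illustration of the reordering step, the sketch below OCRs an on-screen timestamp from each recovered image and sorts the images by the parsed time. The crop region `TIME_ROI`, the timestamp format, and the use of pytesseract/OpenCV are assumptions for illustration, not details from the paper.

```python
import glob
from datetime import datetime

import cv2                # pip install opencv-python
import pytesseract        # pip install pytesseract (requires the Tesseract binary)

# Hypothetical region and format of the on-screen timestamp; both depend on
# the CCTV/black-box model and would be configured per device.
TIME_ROI = (0, 0, 400, 40)          # x, y, width, height
TIME_FORMAT = "%Y-%m-%d %H:%M:%S"

def read_timestamp(image_path: str):
    """OCR the time overlay of one recovered frame; return None on failure."""
    img = cv2.imread(image_path)
    if img is None:
        return None
    x, y, w, h = TIME_ROI
    roi = cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(roi).strip()
    try:
        return datetime.strptime(text, TIME_FORMAT)
    except ValueError:
        return None

def rearrange_frames(frame_dir: str):
    """Sort recovered frame images by the OCR'ed time information."""
    paths = glob.glob(f"{frame_dir}/*.jpg")
    stamped = [(read_timestamp(p), p) for p in paths]
    stamped = [(t, p) for t, p in stamped if t is not None]
    return [p for _, p in sorted(stamped)]
```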

Video Learning Enhances Financial Literacy: A Systematic Review Analysis of the Impact on Video Content Distribution

  • Yin Yin KHOO;Mohamad Rohieszan RAMDAN;Rohaila YUSOF;Chooi Yi WEI
    • Journal of Distribution Science / v.21 no.9 / pp.43-53 / 2023
  • Purpose: This study aims to examine the demographic similarities and differences in the objectives, methodology, and findings of previous studies on gaining financial literacy through videos. It employs a systematic review design. Research design, data and methodology: Based on the content analysis method, 15 articles published during 2015-2020 were chosen from Scopus and ScienceDirect. After formulating the research questions, the paper identification process, screening, eligibility, and quality appraisal are discussed in the methodology. The keywords for the advanced search were "Financial literacy," "Financial Education," and "Video". Results: The results indicate that learning financial literacy through videos is effective; significant results were obtained when students interacted with the distributed video content. The findings provide an overview of, and lead to a better understanding of, the use of video in financial literacy. Conclusions: This study is important as a guide for educators in future research and practice planning; the lack of a systematic review on this topic is the research gap it addresses. Video learning is a form of active learning involving student-centered activities that help students engage with financial literacy. Through this systematic review, researchers and readers may also understand how an individual's financial literacy may change after financial education.

What Do The Algorithms of The Online Video Platform Recommend: Focusing on Youtube K-pop Music Video (온라인 동영상 플랫폼의 알고리듬은 어떤 연관 비디오를 추천하는가: 유튜브의 K POP 뮤직비디오를 중심으로)

  • Lee, Yeong-Ju;Lee, Chang-Hwan
    • The Journal of the Korea Contents Association / v.20 no.4 / pp.1-13 / 2020
  • To understand the recommendation algorithm applied to an online video platform, this study examines the relationship between the content characteristics of K-pop music videos and the related videos recommended for playback on YouTube, and analyzes which videos are recommended as related videos through network analysis. The results show that videos with more likes received higher recommendation rankings, and that most of the recommended related videos belonged to the same channel or were produced by the same agency. The network analysis of related videos shows that the K-pop music video network is strongly connected and that BTS music videos are highly central within it. These results suggest that because the network among K-pop videos is strong, a viewer who enters K-pop as a search query and watches videos can continue to enjoy K-pop; however, when watching videos of other genres, K-pop may not be recommended as a related video.
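
The sketch below illustrates the kind of network analysis described, using networkx on a hypothetical edge list of (watched video, recommended related video) pairs; in-degree centrality is one plausible proxy for how often a video is surfaced as a related video.

```python
import networkx as nx   # pip install networkx

# Hypothetical edge list: (watched video, recommended related video).
# In the study, such edges come from crawled YouTube recommendation data.
edges = [
    ("BTS - Dynamite", "BTS - Butter"),
    ("BTS - Butter", "BTS - Dynamite"),
    ("BLACKPINK - How You Like That", "BTS - Dynamite"),
    ("TWICE - Fancy", "BTS - Butter"),
]

G = nx.DiGraph()
G.add_edges_from(edges)

# In-degree centrality approximates how often a video is recommended
# as a related video by other videos in the network.
centrality = nx.in_degree_centrality(G)
for video, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {video}")
```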

Hash Based Equality Analysis of Video Files with Steganography of Identifier Information

  • Lee, Wan Yeon;Choi, Yun-Seok
    • Journal of the Korea Society of Computer and Information / v.27 no.7 / pp.17-25 / 2022
  • Hash functions are widely used for fast equality analysis of video files because they produce fixed, small outputs regardless of input size. However, a hash function can suffer a hash collision, in which different inputs yield the same output value, so different video files may be mistaken for the same file. In this paper, we propose an equality analysis scheme in which different video files always derive different output values by using identifier information and a double hash. The scheme first extracts the identifier information of an original video file and attaches it to the end of the original file with a steganography method. Next, the scheme calculates two hash outputs: one of the original file and one of the extended file with the attached identifier information. Finally, the scheme uses the identifier information, the hash output of the original file, and the hash output of the extended file for the equality analysis of video files. For evaluation, we implement the proposed scheme as a practical software tool and show that it performs equality analysis of video files well, without the hash collision problem, and increases resistance against malicious hash collision attacks.
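
A minimal sketch of the double-hash idea follows, assuming SHA-256 and plain byte concatenation of the identifier; the paper attaches the identifier with a steganography method and defines its own comparison procedure, so this is illustrative only.

```python
import hashlib

def double_hash_signature(video_path: str, identifier: bytes):
    """Return (identifier, hash of original, hash of original + identifier).

    Two different files would have to collide on both digests at once to be
    confused with each other, which is the intuition behind the double hash.
    """
    with open(video_path, "rb") as f:
        original = f.read()

    h_original = hashlib.sha256(original).hexdigest()
    h_extended = hashlib.sha256(original + identifier).hexdigest()
    return identifier, h_original, h_extended

def files_equal(sig_a, sig_b) -> bool:
    """Equality analysis using all three pieces of information."""
    return sig_a == sig_b

# Usage (hypothetical paths and identifier):
# sig1 = double_hash_signature("clip_a.mp4", b"CAM01-2022-07-01")
# sig2 = double_hash_signature("clip_b.mp4", b"CAM01-2022-07-01")
# print(files_equal(sig1, sig2))
```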

Rotational Drive-Versus-Quality and Video Compression-Versus-Delay Analysis for Multi-Channel Video Streaming System on Ground Combat Vehicles (지상 전투 차량을 위한 다채널 영상 스트리밍 시스템의 회전 구동 대비 품질과 압축 대비 지연 분석)

  • Yun, Jihyeok;Cho, Younggeol;Chang, HyeMin
    • Journal of the Korea Institute of Military Science and Technology / v.24 no.1 / pp.31-40 / 2021
  • The multi-channel video streaming system is an essential device for future ground combat vehicles. Such a system requires digital interfaces, instead of the direct analog method, to support selectable multiple channels. However, because digital interfaces require encoding/decoding and signal conversion, the system must be able to adapt to quality and delay requirements depending on how the video data is used. To address this issue, this study designs and emulates a multi-channel compressed-video streaming system for a ground combat vehicle's fire control system based on commercial standards. Using the system, this study analyzes video quality according to the rotational speed of the acquisition device, and the Glass-to-Glass (G2G) delay between the video acquisition and display devices according to video compression rates. Through these experiments and analyses, this paper presents a design direction for a system that is scalable to the latest technology while flexibly providing high-quality video data streaming.
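
As a stand-in for the codec pipeline used in the paper, the sketch below measures per-frame encode/decode time and PSNR at several JPEG quality settings with OpenCV, only to illustrate the compression-versus-delay trade-off; it does not reproduce the G2G measurement setup.

```python
import time

import cv2
import numpy as np

def compression_vs_quality_delay(frame: np.ndarray, qualities=(90, 70, 50, 30)):
    """Measure encode+decode time and PSNR at several JPEG quality settings.

    Per-frame JPEG compression stands in for the video codec of the actual
    system; higher compression generally lowers PSNR and changes the delay.
    """
    results = []
    for q in qualities:
        start = time.perf_counter()
        ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, q])
        if not ok:
            continue
        decoded = cv2.imdecode(buf, cv2.IMREAD_COLOR)
        delay_ms = (time.perf_counter() - start) * 1000.0
        psnr = cv2.PSNR(frame, decoded)
        results.append((q, delay_ms, psnr))
    return results

# Usage with a synthetic frame (a real system would grab camera frames):
# frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
# for q, d, p in compression_vs_quality_delay(frame):
#     print(f"quality={q:3d}  codec delay={d:6.2f} ms  PSNR={p:5.2f} dB")
```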

Abnormal Object Detection-based Video Synopsis Framework in Multiview Video (다시점 영상에 대한 이상 물체 탐지 기반 영상 시놉시스 프레임워크)

  • Ingle, Palash Yuvraj;Yu, Jin-Yong;Kim, Young-Gab
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.213-216 / 2022
  • Video surveillance for public safety and security has increased, which increases the amount of video data and leads to analysis and storage issues. Furthermore, most surveillance videos contain hours of footage with largely empty frames; thus, extracting useful information is crucial. A prominent framework used in surveillance for efficient storage and analysis is video synopsis. However, existing video synopsis procedures are not applicable to creating an abnormal-object-based synopsis. Therefore, we propose a lightweight synopsis methodology that first detects and extracts abnormal foreground objects and their respective backgrounds, which are then stitched together to construct a synopsis.
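
The sketch below illustrates one possible foreground-extraction step using OpenCV background subtraction and contour filtering; the paper's abnormal-object detector and the stitching procedure are not reproduced here.

```python
import cv2

def extract_foreground_objects(video_path: str, min_area: int = 500):
    """Detect moving foreground objects per frame with background subtraction.

    Background subtraction plus contour filtering stands in for the paper's
    abnormal-object detector; the extracted patches could then be stitched
    onto a common background to build the synopsis. Requires OpenCV 4.x.
    """
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    objects = []   # (frame index, bounding box, image patch)

    frame_idx = 0
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        mask = subtractor.apply(frame)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue
            x, y, w, h = cv2.boundingRect(c)
            objects.append((frame_idx, (x, y, w, h),
                            frame[y:y + h, x:x + w].copy()))
        frame_idx += 1

    cap.release()
    return objects
```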

Automatic Video Management System Using Face Recognition and MPEG-7 Visual Descriptors

  • Lee, Jae-Ho
    • ETRI Journal / v.27 no.6 / pp.806-809 / 2005
  • The main goal of this research is automatic video analysis using a face recognition technique. In this paper, an automatic video management system is introduced with a variety of functions, such as indexing, editing, summarizing, and retrieving multimedia data. The automatic management tool utilizes MPEG-7 visual descriptors to generate a video index for creating a summary. The resulting index generates a preview of a movie and allows non-linear access with thumbnails. In addition, the index supports searching for shots similar to a desired one within saved video sequences. Moreover, a face recognition technique is utilized for person-based video summarization and indexing of stored video data.
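
The sketch below uses a normalized HSV color histogram as a rough stand-in for an MPEG-7 color descriptor to rank similar shots; the actual system relies on standard MPEG-7 visual descriptors rather than this simplification.

```python
import cv2
import numpy as np

def shot_descriptor(frame: np.ndarray) -> np.ndarray:
    """Normalized 2D HSV histogram as a simplified shot-level descriptor."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist)
    return hist

def similar_shots(query: np.ndarray, keyframes: list, top_k: int = 5):
    """Rank indexed shot keyframes by histogram correlation to a query frame."""
    q = shot_descriptor(query)
    scored = [(cv2.compareHist(q, shot_descriptor(f), cv2.HISTCMP_CORREL), i)
              for i, f in enumerate(keyframes)]
    return sorted(scored, reverse=True)[:top_k]
```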

GeoVideo: A First Step to MediaGIS

  • Kim, Kyong-Ho;Kim, Sung-Soo;Lee, Sung-Ho;Kim, Kyoung-Ok;Lee, Jong-Hun
    • Proceedings of the KSRS Conference / 2002.10a / pp.827-831 / 2002
  • MediaGIS is a concept of multimedia tightly integrated with spatial information. VideoGIS is an example of MediaGIS focused on the interaction and integration of video and spatial information. GeoVideo, the new concept of VideoGIS suggested here, has interactivity as its key feature. In GeoVideo, geographic tasks such as browsing, searching, querying, and spatial analysis can be performed based on the video itself. GeoVideo represents a paradigm shift from an artificial, static, abstracted, and graphical paradigm to a natural, dynamic, real, and image-based one. We discuss the integration of video and geography and also suggest the design of the GeoVideo system. Several considerations for expanding the functionality of GeoVideo are presented as future work.
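
As a minimal illustration of video-based spatial querying, the sketch below indexes frames by camera position and answers a bounding-box query; the data structure and field names are hypothetical, not taken from the GeoVideo design.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GeoFrame:
    """One video frame tagged with the camera position at capture time."""
    video_id: str
    frame_time: float   # seconds from the start of the clip
    lat: float
    lon: float

def frames_in_bbox(frames: List[GeoFrame],
                   min_lat: float, max_lat: float,
                   min_lon: float, max_lon: float) -> List[GeoFrame]:
    """Spatial query: return the frames captured inside a bounding box."""
    return [f for f in frames
            if min_lat <= f.lat <= max_lat and min_lon <= f.lon <= max_lon]

# Usage with hypothetical data:
# clips = [GeoFrame("survey01.mp4", 12.0, 36.35, 127.38),
#          GeoFrame("survey01.mp4", 13.0, 36.36, 127.39)]
# hits = frames_in_bbox(clips, 36.30, 36.40, 127.30, 127.40)
```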

Frontal Face Video Analysis for Detecting Fatigue States

  • Cha, Simyeong;Ha, Jongwoo;Yoon, Soungwoong;Ahn, Chang-Won
    • Journal of the Korea Society of Computer and Information / v.27 no.6 / pp.43-52 / 2022
  • We can sense when somebody feels fatigued, which suggests that fatigue can be detected by sensing human biometric signals. Most prior research on assessing fatigue focuses on diagnosing disease-level fatigue. In this study, we adopt quantitative analysis approaches to estimate this qualitative state and propose video analysis models for measuring the fatigue state. The three proposed deep-learning-based classification models selectively include stages of video analysis (object detection, feature extraction, and time-series frame analysis) so that each stage's effect on classifying the fatigue state can be evaluated. Using frontal face videos collected in various fatigue situations, our CNN model achieves 0.67 accuracy, empirically showing that the video analysis models can meaningfully detect the fatigue state. We also suggest how to adapt the models when training and validating video data for fatigue classification.
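
The sketch below shows one plausible frame-level CNN classifier in Keras; the input size, architecture, and binary fatigued/not-fatigued labels are assumptions for illustration, not the models evaluated in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fatigue_cnn(input_shape=(96, 96, 3), num_classes=2) -> tf.keras.Model:
    """Small CNN that classifies cropped frontal-face frames by fatigue state."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage with hypothetical arrays of face crops and integer labels:
# model = build_fatigue_cnn()
# model.fit(train_frames, train_labels,
#           validation_data=(val_frames, val_labels), epochs=10)
```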