• Title/Summary/Keyword: Video Modeling


A study on Metadata Modeling using Structure Information of Video Document (비디오 문서의 구조 정보를 이용한 메타데이터 모델링에 관한 연구)

  • 권재길
    • Journal of the Korea Society of Computer and Information
    • /
    • v.3 no.4
    • /
    • pp.10-18
    • /
    • 1998
  • Video information is an important component of multimedia systems such as digital libraries, the World-Wide Web (WWW), and Video-On-Demand (VOD) services. It can convey many kinds of information because it combines audio-visual, spatio-temporal, and semantic content. Moreover, applications need to retrieve specific scenes of a video rather than the entire video document. To support such varied retrieval, this paper models metadata using the hierarchical structure information of video documents and designs a database schema that can manipulate them.
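The hierarchical structure the abstract describes (video document → scene → shot, with scene-level retrieval) can be sketched as plain data classes. This is an illustrative model only; the class and field names are assumptions, not the paper's schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Shot:
    start_frame: int
    end_frame: int
    keywords: List[str] = field(default_factory=list)

@dataclass
class Scene:
    title: str
    shots: List[Shot] = field(default_factory=list)

@dataclass
class VideoDocument:
    title: str
    scenes: List[Scene] = field(default_factory=list)

    def find_shots(self, keyword: str) -> List[Tuple[str, Shot]]:
        # Scene-level retrieval: return (scene title, shot) pairs whose
        # shot is annotated with the given keyword, instead of returning
        # the whole document.
        return [(sc.title, sh)
                for sc in self.scenes
                for sh in sc.shots
                if keyword in sh.keywords]
```

A relational schema for the same hierarchy would simply mirror these three levels with foreign keys (shot → scene → document).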


Feature-Based Light and Shadow Estimation for Video Compositing and Editing (동영상 합성 및 편집을 위한 특징점 기반 조명 및 그림자 추정)

  • Hwang, Gyu-Hyun;Park, Sang-Hun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.18 no.1
    • /
    • pp.1-9
    • /
    • 2012
  • Video-based modeling and rendering techniques developed to produce photo-realistic video content have been important research topics in computer graphics and computer vision. To combine original input video clips and 3D graphic models seamlessly, geometric information about the light sources and cameras used to capture the real-world scene is essential. In this paper, we present a simple technique to estimate the position and orientation of an optimal light source from the topology of objects and the silhouettes of the shadows that appear in the original video clips. The technique can generate well-matched shadows as well as render the inserted models under the estimated light sources. Shadows are an important visual cue that indicates the relative location of objects in 3D space, so our method can enhance the realism of the final composited videos through the proposed real-time shadow generation and rendering algorithms.
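The core geometric idea, inferring a light direction from an object and its shadow silhouette, can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes a single vertical object of known height on a flat ground plane (z = 0), and recovers only a directional light from one object–shadow pair.

```python
def estimate_light_direction(base_xy, height, shadow_tip_xy):
    """Return the unit vector along which light rays travel, inferred
    from the top of a vertical object and the tip of its shadow on the
    ground plane z = 0. All geometry here is a simplifying assumption."""
    top = (base_xy[0], base_xy[1], height)
    tip = (shadow_tip_xy[0], shadow_tip_xy[1], 0.0)
    d = (tip[0] - top[0], tip[1] - top[1], tip[2] - top[2])
    n = (d[0] ** 2 + d[1] ** 2 + d[2] ** 2) ** 0.5
    return tuple(c / n for c in d)
```

For example, a shadow exactly as long as the object implies a 45° sun elevation; the returned vector then points diagonally downward. Rendering an inserted model's shadow amounts to projecting its silhouette along this direction onto the ground plane.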

Depth Video Coding Method for Spherical Object (구형 객체의 깊이 영상 부호화 방법)

  • Kwon, Soon-Kak;Lee, Dong-Seok;Park, Yoo-Hyun
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.21 no.6
    • /
    • pp.23-29
    • /
    • 2016
  • In this paper, we propose a depth video coding method that fits the closest sphere to a captured spherical object using the depth information. For each block, we find the sphere closest to the captured object by the method of least squares, estimate depth values from the fitted sphere, and encode the differences between the measured and estimated depth values. We also encode the parameters of the fitted sphere together with the encoded pixels of the block. The proposed method improves coding efficiency by up to 81% compared with the conventional DPCM method.
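The least-squares sphere fit at the heart of this scheme can be linearized: every point on a sphere satisfies x² + y² + z² = 2ax + 2by + 2cz + k with k = r² − a² − b² − c², which is linear in (a, b, c, k). A minimal sketch of that fit (not the paper's block-wise codec) is:

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to 3D points via the linearization
    x^2 + y^2 + z^2 = 2ax + 2by + 2cz + k, k = r^2 - a^2 - b^2 - c^2.
    Returns (center, radius)."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2 * P, np.ones((len(P), 1))])  # rows [2x, 2y, 2z, 1]
    b = (P ** 2).sum(axis=1)                      # x^2 + y^2 + z^2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = float(np.sqrt(sol[3] + center @ center))
    return center, radius
```

In the coding scheme described above, depth values predicted from the fitted sphere are subtracted from the measured depth, and only the small residuals plus the sphere parameters need to be encoded.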

Background memory-assisted zero-shot video object segmentation for unmanned aerial and ground vehicles

  • Kimin Yun;Hyung-Il Kim;Kangmin Bae;Jinyoung Moon
    • ETRI Journal
    • /
    • v.45 no.5
    • /
    • pp.795-810
    • /
    • 2023
  • Unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) require advanced video analytics for tasks such as moving object detection and segmentation, which has led to increasing demand for these methods. We propose a zero-shot video object segmentation method specifically designed for UAV and UGV applications that focuses on discovering moving objects in challenging scenarios. The method employs a background memory model that enables training from annotations that are sparse along the time axis, exploiting temporal modeling of the background to detect moving objects effectively. It addresses a limitation of existing state-of-the-art methods, which detect salient objects within images regardless of their movement. In particular, our method achieved mean J and F values of 82.7 and 81.2, respectively, on the DAVIS'16 benchmark. We also conducted extensive ablation studies that highlight the contributions of various input compositions and combinations of training datasets. In future work, we will integrate the proposed method with additional systems, such as tracking and obstacle avoidance.
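The principle of temporally modeling the background to expose moving objects can be illustrated with a classical stand-in for the paper's learned background memory: a running-average background model that flags pixels far from the current background estimate. This is a deliberately simplified sketch, not the proposed network.

```python
import numpy as np

def moving_object_masks(frames, alpha=0.1, threshold=30):
    """Per-frame boolean masks of 'moving' pixels, from a running-average
    background model. alpha (update rate) and threshold (depth of change
    considered motion) are illustrative parameters."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames:
        masks.append(np.abs(f - bg) > threshold)   # far from background
        bg = (1 - alpha) * bg + alpha * f          # update background memory
    return masks
```

The learned background memory in the paper plays the role of `bg` here, but is trained from sparse temporal annotations rather than updated by a fixed exponential rule.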

A Fast Image Matching Method for Oblique Video Captured with UAV Platform

  • Byun, Young Gi;Kim, Dae Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.2
    • /
    • pp.165-172
    • /
    • 2020
  • Interest in vision-based video image matching is growing with the steady development of unmanned systems. The purpose of this paper is to develop a fast and effective matching technique for oblique UAV video images. We first extract initial matching points using the NCC (Normalized Cross-Correlation) algorithm, improving its computational efficiency with integral images. We then apply a triangulation-based outlier removal algorithm to extract more robust matching points from the initial set. To evaluate the performance of the proposed method, we compared it quantitatively with existing image matching approaches. Experimental results demonstrate that the proposed method processes 2.57 frames per second for video image matching and is up to 4 times faster than existing methods. It therefore has good potential for the various video-based applications that require image matching as a pre-processing step.
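The two ingredients named in the abstract, NCC and integral images, can be sketched as follows. The integral image (summed-area table) makes any rectangular window sum an O(1) lookup, which is what speeds up the per-window means inside NCC. This is a minimal illustration, not the paper's full pipeline.

```python
import numpy as np

def integral_image(img):
    """Summed-area table S with S[i, j] = sum of img[:i, :j]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def window_sum(S, y, x, h, w):
    """O(1) sum of img[y:y+h, x:x+w] using four table lookups."""
    return S[y + h, x + w] - S[y, x + w] - S[y + h, x] + S[y, x]

def ncc(patch, template):
    """Normalized cross-correlation of two equal-sized arrays, in [-1, 1]."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom else 0.0
```

In a full matcher, the window sums from the integral image replace the repeated `patch.mean()` computations as the template slides across the search image.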

Automatic Jitter Evaluation Method from Video using Optical Flow (Optical Flow를 사용한 동영상의 흔들림 자동 평가 방법)

  • Baek, Sang Hyune;Hwang, WonJun
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.8
    • /
    • pp.1236-1247
    • /
    • 2017
  • In this paper, we propose a method for evaluating uncomfortable shaking in video. When video is shot with a handheld device such as a smartphone, it usually contains unwanted shake, mostly caused by hand tremor during shooting, and many methods for correcting it automatically have been proposed. Comparing these stabilization methods requires evaluating their correction performance, but since there is no standardized evaluation method, each stabilization method comes with its own, making objective comparison difficult. We therefore propose a method for objectively evaluating video shake: the video is analyzed automatically to determine how much shake it contains and how strongly the shake is concentrated at specific times. To measure this shaking index, we model jitter and apply an optical flow-based implementation to real videos to measure shaking automatically. Finally, we analyze the shaking indices obtained after applying three different image stabilization methods to nine sample videos.
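One common way to turn per-frame global motion (e.g. the mean optical flow of each frame) into a jitter score is to treat the smoothed motion as the intended camera path and measure deviation from it. The sketch below follows that idea; it is an assumed formulation, not necessarily the paper's jitter model.

```python
import numpy as np

def jitter_index(global_motion, window=5):
    """Mean absolute deviation of per-frame global motion from its
    moving average (a proxy for the intended camera path). Boundary
    frames, where the moving average is biased, are ignored."""
    m = np.asarray(global_motion, dtype=float)
    kernel = np.ones(window) / window
    smooth = np.convolve(m, kernel, mode="same")
    core = slice(window, len(m) - window)
    return float(np.mean(np.abs(m - smooth)[core]))
```

A perfectly steady pan yields an index near zero, while frame-to-frame oscillation produces a large index; the per-frame deviations also show where in time the shake is concentrated.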

Generation of Video Clips Utilizing Shot Boundary Detection (샷 경계 검출을 이용한 영상 클립 생성)

  • Kim, Hyeok-Man;Cho, Seong-Kil
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.7 no.6
    • /
    • pp.582-592
    • /
    • 2001
  • Video indexing plays an important role in applications such as digital video libraries and web VOD, which archive large volumes of digital video. Video indexing is usually based on video segmentation. In this paper, we propose a software tool called V2Web Studio that generates video clips using a shot boundary detection algorithm. With the V2Web Studio, clip generation consists of four steps: 1) automatically detecting shot boundaries by parsing the video, 2) eliminating errors by manually verifying the detection results, 3) building a logical hierarchy model from the verified shots, and 4) generating multiple video clips corresponding to each logically modeled segment. These steps are performed by the shot detector, shot verifier, video modeler, and clip generator of the V2Web Studio, respectively.
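Step 1 above, automatic shot boundary detection, is classically done by comparing intensity histograms of consecutive frames and declaring a cut where the difference spikes. A minimal sketch of that baseline (the specific bin count and threshold are illustrative, not V2Web Studio's values):

```python
import numpy as np

def shot_boundaries(frames, bins=16, threshold=0.5):
    """Indices i where frame i starts a new shot, detected when the
    total-variation distance between consecutive normalized intensity
    histograms exceeds a threshold."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    cuts = []
    for i in range(1, len(hists)):
        d = 0.5 * np.abs(hists[i] - hists[i - 1]).sum()
        if d > threshold:
            cuts.append(i)
    return cuts
```

Step 2's manual verification then only has to confirm or reject these candidate indices, which is far cheaper than scrubbing the whole video.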


Modeling of Infectious Information Hiding System for Video Contents using the Biological Virus (생물학적 바이러스를 이용한 비디오 콘텐츠의 전염성 정보은닉 시스템 모델링)

  • Jang, Bong-Joo;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.49 no.3
    • /
    • pp.34-45
    • /
    • 2012
  • In this paper, we propose and model a video content protection system based on an infectious information hiding (IIH) technique that borrows characteristics from biological viruses. Our IIH system treats the important information required for video content protection as an infectious virus, suggesting a new paradigm for video content protection in which infectious information is transmitted by the content (host) or by video codecs (viral vectors). We model Pathogen, Mutant, and Contagion viruses as the infectious information, and define technical tools for verifying infectious information, kernel-based IIH, content-based IIH, and the creation/regeneration of infectious information as the main techniques of the system. Finally, through simulations that carried the infectious information using conventional information hiding algorithms as the kernel-based and content-based IIH techniques, we verified the feasibility of the proposed IIH system.

Development of Online Video Mash-up System based on Automatic Scene Elements Composition using Storyboard (스토리보드에 따라 장면요소를 자동 조합하는 주제모델링 기반 온라인 비디오 매쉬업 시스템 개발)

  • Park, Jongbin;Kim, Kyung-Won;Jung, Jong-Jin;Lim, Tae-Beom
    • Journal of Broadcast Engineering
    • /
    • v.21 no.4
    • /
    • pp.525-537
    • /
    • 2016
  • In this paper, we develop an online video mash-up system that automatically composes scene elements according to a storyboard. There are two conventional online video production schemes. The video collage method is simple and easy, but it is difficult to reflect a narrative or story. The other is the template-based method, in which the user selects a template and replaces its resources, such as photos or videos. However, if no suitable template exists, the desired output cannot be created, and the quality and atmosphere of the output depend heavily on the template. To solve these problems, we propose a video mash-up scheme based on a storyboard, and we also implement a classification and recommendation scheme based on topic modeling.
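Once scenes and storyboard steps are represented as topic vectors (e.g. from a topic model such as LDA), recommendation reduces to ranking candidate scenes by similarity to each storyboard step. The sketch below assumes cosine similarity over precomputed topic vectors; the representation and function name are illustrative, not the paper's implementation.

```python
import numpy as np

def recommend_scenes(step_topics, scene_topics, k=2):
    """Return indices of the k candidate scenes whose topic vectors are
    most similar (cosine) to the storyboard step's topic vector."""
    q = np.asarray(step_topics, dtype=float)
    S = np.asarray(scene_topics, dtype=float)
    sims = S @ q / (np.linalg.norm(S, axis=1) * np.linalg.norm(q) + 1e-12)
    return [int(i) for i in np.argsort(-sims)[:k]]
```

Composing the mash-up then amounts to walking the storyboard step by step and splicing in the top-ranked scene element for each step.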

Teacher Education for Mathematical Modeling: a Case Study (수학적 모델링의 구현을 위한 교사 교육: 사례 연구)

  • Kim, Yeon
    • East Asian mathematical journal
    • /
    • v.36 no.2
    • /
    • pp.173-201
    • /
    • 2020
  • Mathematical modeling has been emphasized because it offers important opportunities for students both to apply their learning of mathematics to a situation and to explore the mathematics involved in the context of that situation. Despite its importance, however, mathematical modeling has not taken root in typical mathematics classes because teachers do not understand it well enough and are skeptical about implementing it in their lessons. The current study analyzed data such as video recordings, slides, and teacher surveys collected across four teacher education lessons on mathematical modeling. The study reports different kinds of tasks that are authentic with regard to mathematical modeling. Furthermore, in teacher education, teachers' identities were split between a mode as learners and a mode as teachers, and both conflicts and intentional transitions between the two were observed. Analysis of the surveys shows what teachers think about mathematical modeling and how they understand it. In teacher education, teachers completed different kinds of modeling tasks and gained experience that should help them enact mathematical modeling in their lessons; however, teacher education also needs to offer specific guidance on what to do and how to do it.