• Title/Summary/Keyword: Temporal background modeling


Hole-Filling Method Using Extrapolated Spatio-temporal Background Information (추정된 시공간 배경 정보를 이용한 홀채움 방식)

  • Kim, Beomsu; Nguyen, Tien Dat; Hong, Min-Cheol
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.8 / pp.67-80 / 2017
  • This paper presents a hole-filling method that uses extrapolated spatio-temporal background information to obtain a synthesized view. A new temporal background model based on a non-overlapped, patch-based background codebook is introduced to extrapolate temporal background information. In addition, a depth-map-driven spatial local background estimation is described to define spatial background constraints that represent the lower and upper bounds of a background candidate. Background holes are filled by comparing the similarity between the temporal background information and the spatial background constraints. Additionally, a depth-map-based ghost removal filter is described to resolve the mismatch between a color image and the corresponding depth map of a virtual view after 3-D warping. Finally, inpainting with a priority function that includes a new depth term is applied to fill the remaining holes. Experimental results demonstrate that the proposed method yields subjective and objective improvements over state-of-the-art methods.
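For readers who want the gist of the patch-wise temporal background idea, here is a minimal NumPy sketch, assuming grayscale images, a depth-based "farther means background" rule, and an 8x8 patch size; the paper's actual codebook, ghost-removal filter, and depth-aware inpainting stages are omitted, and every name and threshold below is illustrative rather than taken from the paper.

```python
# Illustrative sketch (not the authors' implementation) of a patch-wise
# temporal background model used to fill disocclusion holes in a warped view.
import numpy as np

PATCH = 8  # non-overlapping patch size (an assumed value)

def update_temporal_background(bg, bg_depth, frame, depth):
    """Per non-overlapping patch, keep the most background-like sample seen so
    far, using larger depth (farther from the camera) as the background cue."""
    h, w = depth.shape
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            sl = np.s_[y:y + PATCH, x:x + PATCH]
            farther = depth[sl] > bg_depth[sl]
            bg_depth[sl][farther] = depth[sl][farther]
            bg[sl][farther] = frame[sl][farther]
    return bg, bg_depth

def fill_background_holes(warped, hole_mask, bg, lo, hi):
    """Fill hole pixels from the temporal background when its value lies within
    the spatial background bounds [lo, hi]; the rest is left for inpainting."""
    out = warped.copy()
    usable = hole_mask & (bg >= lo) & (bg <= hi)
    out[usable] = bg[usable]
    return out, hole_mask & ~usable   # remaining holes go to depth-aware inpainting
```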

Background memory-assisted zero-shot video object segmentation for unmanned aerial and ground vehicles

  • Kimin Yun; Hyung-Il Kim; Kangmin Bae; Jinyoung Moon
    • ETRI Journal / v.45 no.5 / pp.795-810 / 2023
  • Unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) require advanced video analytics for tasks such as moving object detection and segmentation, which has led to increasing demand for these methods. We propose a zero-shot video object segmentation method specifically designed for UAV and UGV applications that focuses on discovering moving objects in challenging scenarios. The method employs a background memory model that enables training from sparse annotations along the time axis and uses temporal modeling of the background to detect moving objects effectively. The proposed method addresses the limitations of existing state-of-the-art methods, which detect salient objects within images regardless of their movement. In particular, our method achieved mean J and F values of 82.7 and 81.2, respectively, on DAVIS'16. We also conducted extensive ablation studies that highlight the contributions of various input compositions and combinations of training datasets. In future work, we will integrate the proposed method with additional systems, such as tracking and obstacle avoidance.
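The paper's background memory is learned by a deep network; as a rough, hedged analogue of the underlying temporal background modeling idea, the sketch below keeps a slowly updated per-pixel background estimate and flags pixels that deviate from it. All parameter values are assumptions, not values from the paper.

```python
# Minimal sketch (not the ETRI model) of the general "background memory" idea:
# keep a slowly updated per-pixel background estimate and flag pixels that
# deviate from it as moving-object candidates.
import numpy as np

class BackgroundMemory:
    def __init__(self, alpha=0.02, thresh=25.0):
        self.alpha, self.thresh = alpha, thresh   # update rate and threshold (assumed values)
        self.mem = None

    def update(self, gray):
        """gray: float32 grayscale frame; returns a boolean moving-object mask."""
        if self.mem is None:
            self.mem = gray.astype(np.float32).copy()
            return np.zeros_like(gray, dtype=bool)
        moving = np.abs(gray - self.mem) > self.thresh
        # update the memory only where the scene looks like background,
        # so moving objects are not absorbed into the model
        self.mem[~moving] = (1 - self.alpha) * self.mem[~moving] + self.alpha * gray[~moving]
        return moving
```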

Background Subtraction in Dynamic Environment based on Modified Adaptive GMM with TTD for Moving Object Detection

  • Niranjil, Kumar A.; Sureshkumar, C.
    • Journal of Electrical Engineering and Technology / v.10 no.1 / pp.372-378 / 2015
  • Background subtraction is the first processing stage in video surveillance. It is a general term for a process that aims to separate foreground objects from the background. The goal is to construct and maintain a statistical representation of the scene that the camera sees. The output of background subtraction serves as the input to higher-level processes. Background subtraction in dynamic environments is one such complex task and is an important research topic in image analysis and computer vision. This work deals with background modeling based on a modified adaptive Gaussian mixture model (GMM) combined with the three temporal differencing (TTD) method in dynamic environments. The results of background subtraction on several sequences in various testing environments show that the proposed method is efficient and robust in dynamic environments and achieves good accuracy.
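A hedged OpenCV sketch of how an adaptive GMM subtractor can be fused with three-frame temporal differencing, in the spirit of the abstract; it uses OpenCV's stock MOG2 model rather than the authors' modified GMM, and the file name and thresholds are illustrative.

```python
# Sketch of combining an adaptive GMM background subtractor with three-frame
# temporal differencing (illustrative; not the authors' exact modification).
import cv2

cap = cv2.VideoCapture("surveillance.mp4")        # hypothetical input file
mog = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=16,
                                         detectShadows=False)
prev2 = prev1 = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gmm_mask = mog.apply(frame)                   # adaptive GMM foreground mask

    if prev2 is not None:
        # three temporal differencing: a pixel counts as moving only if it
        # differs both between frames (t, t-1) and between frames (t-1, t-2)
        _, t1 = cv2.threshold(cv2.absdiff(gray, prev1), 20, 255, cv2.THRESH_BINARY)
        _, t2 = cv2.threshold(cv2.absdiff(prev1, prev2), 20, 255, cv2.THRESH_BINARY)
        ttd_mask = cv2.bitwise_and(t1, t2)
        # fuse the two cues; AND suppresses GMM false alarms in dynamic scenes
        fused = cv2.bitwise_and(gmm_mask, ttd_mask)
        cv2.imshow("moving objects", fused)
        if cv2.waitKey(1) == 27:
            break
    prev2, prev1 = prev1, gray
cap.release()
```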

Non-parametric Background Generation based on MRF Framework (MRF 프레임워크 기반 비모수적 배경 생성)

  • Cho, Sang-Hyun; Kang, Hang-Bong
    • The KIPS Transactions: Part B / v.17B no.6 / pp.405-412 / 2010
  • Previous background generation techniques performed poorly in complex environments because they used only temporal contexts. To overcome this problem, in this paper we propose a new background generation method that incorporates spatial as well as temporal contexts of the image, which enables us to obtain a 'clean' background image with no moving objects. In the proposed method, we first divide each sampled frame of the video sequence into m*n blocks and classify each block as either static or non-static. Blocks classified as non-static are modeled in temporal and spatial contexts using the MRF framework, which provides a convenient and consistent way of modeling context-dependent entities such as image pixels and correlated features. Experimental results show that the proposed method is more efficient than the traditional one.
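A rough sketch of the block classification step described above, assuming that "non-static" can be approximated by high temporal variance; the MRF optimisation itself, which is the core of the paper, is omitted, and the block grid size and threshold are assumed values.

```python
# Rough sketch of the static / non-static block classification step
# (the MRF modelling of the non-static blocks is omitted).
import numpy as np

def classify_blocks(frames, m=16, n=16, var_thresh=50.0):
    """frames: (T, H, W) grayscale stack; returns an (m, n) boolean map,
    True = non-static block (would need spatio-temporal MRF modelling)."""
    T, H, W = frames.shape
    bh, bw = H // m, W // n
    labels = np.zeros((m, n), dtype=bool)
    for i in range(m):
        for j in range(n):
            block = frames[:, i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            temporal_var = block.var(axis=0).mean()   # mean per-pixel variance over time
            labels[i, j] = temporal_var > var_thresh  # threshold is an assumed value
    return labels
```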

People Detection Algorithm in the Beach (해변에서의 사람 검출 알고리즘)

  • Choi, Yu Jung; Kim, Yoon
    • Journal of Korea Multimedia Society / v.21 no.5 / pp.558-570 / 2018
  • Recently, object detection has become a critical function for any system that uses computer vision and is widely used in fields such as video surveillance and self-driving cars. However, conventional methods cannot detect objects clearly because of the dynamic background changes at the beach. In this paper, we propose a new technique to detect humans correctly in dynamic videos such as shore scenes. A new background modeling method that combines a spatial GMM (Gaussian Mixture Model) and a temporal GMM is proposed to build a more accurate background image. The proposed method also improves the accuracy of people detection by using an SVM (Support Vector Machine) to distinguish people from other objects and a KCF (Kernelized Correlation Filter) tracker to track people continuously in complicated environments. Experimental results show that our method works well for detecting and tracking objects in videos containing dynamic factors and situations.
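As a hedged illustration of a comparable pipeline (not the paper's spatial-plus-temporal GMM), the sketch below combines OpenCV's stock MOG2 background model, its built-in HOG+SVM people detector, and a KCF tracker; the input file name and the foreground-overlap threshold are assumptions.

```python
# Sketch of a background-subtraction + SVM person detection + KCF tracking
# pipeline, using OpenCV's stock components rather than the paper's models.
import cv2

cap = cv2.VideoCapture("beach.mp4")               # hypothetical input video
mog = cv2.createBackgroundSubtractorMOG2()        # temporal GMM background model
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())  # SVM people classifier
trackers = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = mog.apply(frame)                         # foreground mask from the GMM
    if not trackers:
        # detect people with the HOG+SVM detector, then start a KCF tracker
        # for detections that overlap moving (foreground) regions
        rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        for (x, y, w, h) in rects:
            if fg[y:y+h, x:x+w].mean() > 10:
                trk = cv2.TrackerKCF_create()     # cv2.legacy.TrackerKCF_create in some builds
                trk.init(frame, (x, y, w, h))
                trackers.append(trk)
    else:
        for trk in trackers:
            ok_t, box = trk.update(frame)
            if ok_t:
                x, y, w, h = map(int, box)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("people", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```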

Real-Time Foreground Segmentation and Background Substitution for Protecting Privacy on Visual Communication (화상 통신에서의 사생활 보호를 위한 실시간 전경 분리 및 배경 대체)

  • Bae, Gun-Tae; Kwak, Soo-Yeong; Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.5C / pp.505-513 / 2009
  • This paper proposes a real-time foreground segmentation and background substitution method for protecting privacy in visual communication. Previous works on this topic have problems with the color and shape of the foreground and require capture devices such as stereo cameras. We provide a solution that can segment the foreground in real time using a fixed mono camera. To improve the performance of foreground extraction, we propose the Temporal Foreground Probability Model (TFPM), which models the temporal information of a video. We also provide a boundary processing method for natural and smooth synthesis using an alpha matte and a simple post-processing step.
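A minimal sketch of how a temporal foreground probability map and an alpha matte could be combined for smooth background substitution; it is an illustration of the general idea, not the paper's TFPM, and the decay factor and matte thresholds are assumed values.

```python
# Sketch of a per-pixel temporal foreground probability with alpha-blended
# background substitution (illustrative only; the paper's TFPM differs).
import numpy as np

class TemporalForegroundProbability:
    def __init__(self, shape, decay=0.9):
        self.prob = np.zeros(shape, dtype=np.float32)
        self.decay = decay                      # assumed temporal smoothing factor

    def update(self, fg_mask):
        """fg_mask: boolean per-frame foreground estimate.
        Blend it into a temporally smoothed probability map."""
        self.prob = self.decay * self.prob + (1 - self.decay) * fg_mask.astype(np.float32)
        return self.prob

def substitute_background(frame, new_bg, prob, lo=0.3, hi=0.7):
    """Use the probability map as a soft alpha matte: pixels between lo and hi
    are blended, giving a smooth boundary between person and new background."""
    alpha = np.clip((prob - lo) / (hi - lo), 0.0, 1.0)[..., None]   # (H, W, 1)
    return (alpha * frame + (1 - alpha) * new_bg).astype(frame.dtype)
```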

People Detection Algorithm in Dynamic Background (동적인 배경에서의 사람 검출 알고리즘)

  • Choi, Yu Jung; Lee, Dong Ryeol; Kim, Yoon
    • Journal of Industrial Technology / v.38 no.1 / pp.41-52 / 2018
  • Recently, object detection has become a critical function for any system that uses computer vision and is widely used in fields such as video surveillance and self-driving cars. However, conventional methods cannot detect objects clearly because of the dynamic background changes at the beach. In this paper, we propose a new technique to detect humans correctly in dynamic videos such as shore scenes. A new background modeling method that combines a spatial GMM (Gaussian Mixture Model) and a temporal GMM is proposed to build a more accurate background image. The proposed method also improves the accuracy of people detection by using an SVM (Support Vector Machine) to distinguish people from other objects and a KCF (Kernelized Correlation Filter) tracker to track people continuously in complicated environments. Experimental results show that our method works well for detecting and tracking objects in videos containing dynamic factors and situations.

Influence of Greenhouse Gases on Radiative Forcing at Urban Center and Background Sites on Jeju Island Using the Atmospheric Radiative Transfer Model (대기복사전달모델을 이용한 제주지역 도심 및 배경지점에서의 온실가스에 따른 복사강제력 영향 연구)

  • Lee, Soo-Jeong; Song, Sang-Keun; Han, Seung-Beom
    • Atmosphere / v.27 no.4 / pp.423-433 / 2017
  • The spatial and temporal variations in radiative forcing (RF) and the mean temperature changes of greenhouse gases (GHGs), such as $CO_2$, $CH_4$, and $N_2O$, were analyzed at an urban center site (Yeon-dong) and a background site (Gosan) on Jeju Island during 2010~2015 using a modeling approach (i.e., a radiative transfer model). Overall, the RFs and mean temperature changes of $CO_2$ at Yeon-dong during most years (except 2014) were estimated to be higher than those at Gosan. This is possibly because of the higher $CO_2$ concentrations at Yeon-dong, caused by relatively large energy consumption and little photosynthesis, and also because of differences in radiation flux due to different input conditions (e.g., local time and the geographic coordinates used for the solar zenith angle) in the model. The annual mean RFs and temperature changes of $CO_2$ were highest in 2015 ($2.41Wm^{-2}$ and 1.76 K) at Yeon-dong and in 2013 ($2.22Wm^{-2}$ and 1.62 K) at Gosan (excluding 2010 and 2011). The maximum monthly/seasonal mean RFs and temperature changes of $CO_2$ occurred in spring (Mar. and/or Apr.) or winter (Jan. and/or Feb.) at the two sites during the study period, whereas the minimum values occurred in summer (Jun.-Aug.). The impacts of $CH_4$ and $N_2O$ on RF and mean temperature change were very small (an order of magnitude lower) compared to $CO_2$. The spatio-temporal differences in the RF values of the GHGs depend primarily on the atmospheric profile (e.g., ozone profile), surface albedo, and local time (or solar zenith angle), as well as on their mass concentrations.
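As a quick sanity check (my own arithmetic, not from the paper), the reported RF and temperature-change pairs are consistent with the standard linear relation ΔT = λ·RF for an implied climate sensitivity parameter λ of about 0.73 K per W m^-2:

```python
# Consistency check of the reported numbers using delta_T = lambda * RF
# (lambda is inferred here; the paper does not state it in the abstract).
pairs = [("Yeon-dong 2015", 2.41, 1.76),
         ("Gosan 2013",     2.22, 1.62)]
for site, rf, dT in pairs:
    lam = dT / rf                     # implied sensitivity, K/(W m^-2)
    print(f"{site}: RF={rf} W m^-2, dT={dT} K, implied lambda={lam:.2f}")
# both pairs imply lambda of roughly 0.73 K/(W m^-2)
```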

A Simulation of High Ozone Episode in Downwind Area of Seoul Metropolitan Using CMAQ Model (CMAQ을 이용한 수도권 풍하지역의 고농도 오존 현상 모사)

  • Lee, Chong Bum; Song, Eun Young
    • Journal of Environmental Impact Assessment / v.15 no.3 / pp.193-206 / 2006
  • Recently, high-ozone episodes have occurred frequently in Korea, not only in cities but also in background areas where local anthropogenic sources are not important. The frequency of ozone concentrations exceeding 100 ppb at air quality monitoring stations in Seoul and rural areas during 1995-2004 was analyzed. This paper reports on the use of the Community Multiscale Air Quality (CMAQ) modelling system to predict hourly ozone levels. Domain resolutions of 30 km, 10 km, and 3.333 km (innermost) were employed for this study. A summer period in June 2004 was simulated, and the predicted results were compared to data from metropolitan and rural air quality monitoring stations. Model performance was evaluated against measured data using a range of statistical measures. Although the CMAQ model reproduces the temporal and spatial ozone trends, it was not able to simulate the peak magnitudes consistently.
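The abstract does not list the specific statistical measures used; the sketch below shows three metrics commonly reported for hourly ozone evaluation (mean bias, RMSE, and Willmott's index of agreement), with made-up numbers in the example call.

```python
# Typical model-evaluation metrics for hourly ozone at a monitoring station
# (the paper's exact metric set is not specified in the abstract).
import numpy as np

def evaluate(obs, mod):
    """obs, mod: 1-D arrays of observed and modelled hourly ozone (ppb)."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    bias = np.mean(mod - obs)                         # mean bias
    rmse = np.sqrt(np.mean((mod - obs) ** 2))         # root mean square error
    denom = np.sum((np.abs(mod - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    ioa = 1.0 - np.sum((mod - obs) ** 2) / denom      # Willmott index of agreement
    return {"MB": bias, "RMSE": rmse, "IOA": ioa}

# Example call with made-up values:
print(evaluate(obs=[60, 85, 110, 95], mod=[55, 90, 100, 105]))
```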

A Simulation Method For Virtual Situations Through Seamless Integration Of Independent Events Via Autonomous And Independent Agents

  • Park, Jong Hee; Choi, Jun Seong
    • International Journal of Contents / v.14 no.3 / pp.7-16 / 2018
  • The extent and depth of the event plan determines the scope of pedagogical experience in situations and, consequently, the quality of immersive learning based on our simulated world. In contrast to planning in conventional narrative-based systems, which mainly pursues dramatic interest, planning in virtual-world-based pedagogical systems strives to provide realistic experiences in immersed situations. Instead of a story plot comprising predetermined situations, our inter-event planning method aims at simulating diverse situations that each involve multiple events coupled via their associated agents' conditions and via meaningful associations between events occurring in a background world. The specific techniques that realize our planning method include: two-phase planning based on inter-event search and intra-event decomposition (down to the animated-action level); autonomous and independent agents that behave proactively with their own beliefs and planning capability; a full-blown background world used as the comprehensive stage for all events; coupling of events via realistic association types, including deontic associations as well as conventional causality; separation of agents from event roles; temporal scheduling; and a parallel and concurrent event progression mechanism. By combining these techniques, diverse exogenous events can be derived and seamlessly (i.e., semantically meaningfully) integrated with the original event to form a wide scope of situations offering abundant pedagogical experiences. For effective plan execution, we devise an execution scheme based on multiple priority queues, particularly to realize the concurrent progression of many simultaneous events so as to simulate the corresponding reality. Specific execution mechanisms include modeling an action in terms of its component motions, adjustable agent priority across different events, and a concurrent and parallel execution method for multiple actions with its extension to multiple events.
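As a hedged illustration of the multiple-priority-queue execution scheme described above (not the authors' engine), the sketch below gives each event its own priority queue of actions and interleaves the queues round-robin so that simultaneous events appear to progress concurrently; the event and action names are invented for the example.

```python
# Illustrative sketch of concurrent event progression via one priority queue
# per event plus round-robin interleaving across events.
import heapq
from itertools import count

class EventQueue:
    """One priority queue of (priority, action) pairs for a single event."""
    def __init__(self, name):
        self.name, self.heap, self._tie = name, [], count()

    def push(self, priority, action):
        heapq.heappush(self.heap, (priority, next(self._tie), action))

    def pop(self):
        return heapq.heappop(self.heap)[2] if self.heap else None

def run_concurrently(events):
    """Each tick, every active event executes its highest-priority pending
    action, so simultaneous events progress in parallel from the learner's view."""
    while any(e.heap for e in events):
        for e in events:
            action = e.pop()
            if action is not None:
                print(f"[{e.name}] {action}")

# Hypothetical usage with two simultaneous events:
festival = EventQueue("festival")
festival.push(1, "musicians start playing")
festival.push(2, "crowd gathers")
errand = EventQueue("errand")
errand.push(1, "agent walks to market")
errand.push(2, "agent buys bread")
run_concurrently([festival, errand])
```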