• Title/Summary/Keyword: frame difference map

Search results: 18

Tracking Moving Objects Using an Active Contour Model Based on a Frame Difference Map (차 영상 맵 기반의 능동 윤곽선 모델을 이용한 이동 물체 추적)

  • 이부환;전기준
    • Proceedings of the IEEK Conference / 2003.11a / pp.153-156 / 2003
  • This paper proposes a video tracking method for a deformable moving object using an active contour model. To decide the convergence directions of the contour points automatically, a new energy function based on a frame difference map and an updating rule for that map are presented. Experimental results on a set of synthetic and real image sequences show that the proposed method can fully track a fast-moving deformable object while accurately extracting its boundary in every frame.

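As a concrete illustration of the frame-difference-map idea in the abstract above, here is a minimal sketch (not the authors' exact formulation): consecutive grayscale frames are differenced, thresholded, and accumulated with a decay so stale motion evidence fades. The threshold and decay values are assumptions.

```python
import numpy as np

def update_frame_difference_map(fd_map, prev_frame, curr_frame,
                                threshold=15, decay=0.8):
    """Accumulate thresholded frame differences into a decaying motion map.

    fd_map: float map in [0, 1]; prev_frame/curr_frame: 2-D grayscale arrays.
    decay fades old motion evidence so the map follows the moving object.
    """
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    moving = (diff > threshold).astype(np.float32)  # 1 where motion occurred
    return np.clip(decay * fd_map + moving, 0.0, 1.0)
```

Iterating `fd_map = update_frame_difference_map(fd_map, f0, f1)` over a sequence keeps the map concentrated around the currently moving boundary.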

Tracking a Moving Object Using an Active Contour Model Based on a Frame Difference Map (차 영상 맵 기반의 능동 윤곽선 모델을 이용한 이동 물체 추적)

  • 이부환;김도종;최일;전기준
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.5 / pp.153-163 / 2004
  • This paper presents a video tracking method for a deformable moving object in image sequences using an active contour model. It is quite important to decide the local convergence directions of the contour points in order to correctly extract the boundary of a moving object with deformable shape. For this purpose, a new energy function for the active contour model is proposed by adding a directional energy term, based on a frame difference map, to the Greedy algorithm. In addition, an updating rule for the frame difference map is developed to encourage stable convergence of the contour points. Experimental results on a set of synthetic and real image sequences show that the proposed method can fully track the deformable object while precisely extracting its boundary in every frame.
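
The journal version above adds a directional energy term to the Greedy algorithm. The sketch below is a rough illustration of one greedy iteration under that idea, not the authors' exact energies or updating rule: each contour point moves to the 3x3 neighbor minimizing a weighted sum of continuity, curvature, image, and directional terms, where the directional term pulls points toward regions the frame difference map marks as moving. `image_grad` (a gradient-magnitude image) and all weights are assumptions.

```python
import numpy as np

def greedy_snake_step(points, image_grad, fd_map,
                      alpha=1.0, beta=1.0, gamma=1.0, delta=1.0):
    """One greedy iteration over a closed contour (points: (n, 2) y,x array).

    Each point moves to the 3x3 neighbor minimizing continuity + curvature
    + image + directional energy; the directional term (1 - fd_map) is
    illustrative and favors regions the frame difference map marks as moving."""
    n = len(points)
    # Mean spacing between neighboring contour points (continuity reference).
    mean_d = np.mean(np.linalg.norm(np.diff(points, axis=0, append=points[:1]), axis=1))
    new_pts = points.copy()
    h, w = fd_map.shape
    for i in range(n):
        prev_p, next_p = points[(i - 1) % n], points[(i + 1) % n]
        best, best_e = points[i], np.inf
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                c = points[i] + np.array([dy, dx], dtype=float)
                y, x = int(c[0]) % h, int(c[1]) % w
                e_cont = (np.linalg.norm(c - prev_p) - mean_d) ** 2
                e_curv = np.linalg.norm(prev_p - 2 * c + next_p) ** 2
                e_img = -image_grad[y, x]   # favor strong edges
                e_dir = 1.0 - fd_map[y, x]  # favor moving regions
                e = alpha * e_cont + beta * e_curv + gamma * e_img + delta * e_dir
                if e < best_e:
                    best_e, best = e, c
        new_pts[i] = best
    return new_pts
```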

Motion Map Generation for Maintaining the Temporal Coherence of Brush Strokes in the Painterly Animation (회화적 애니메이션에서 브러시 스트로크의 시간적 일관성을 유지하기 위한 모션 맵 생성)

  • Park Youngs-Up;Yoon Kyung-Hyun
    • Journal of KIISE: Computer Systems and Theory / v.33 no.8 / pp.536-546 / 2006
  • Painterly animation renders a video as images with a hand-painted appearance, and its most crucial element is the temporal coherence of brush strokes between frames. This paper proposes a motion map as a solution to the problem of maintaining that coherence. A motion map is the region where frame-to-frame motion has occurred: starting from the edges where motion was detected, it marks the region those edges move into according to the estimated motion information. Both optical-flow and block-based methods are employed to estimate motion, and the estimate (directions and magnitudes) yielding the highest PSNR is chosen as the final motion information for building the motion map. The resulting motion map determines which part of the frame should be re-painted. To preserve the hand-painted appearance and the temporal coherence of brush strokes, the motion information is applied only to the strong edges that determine the brush-stroke directions. The paper also reduces flickering between frames by using a multiple-exposure method together with a difference map computed between the source image and the canvas, and it maintains coherence in the brush-stroke directions through local gradient interpolation for structural coherence.
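
A minimal sketch of the motion-map construction described above, assuming OpenCV's Farneback optical flow stands in for the paper's motion estimation (the paper selects between optical-flow and block-based estimates by PSNR): edge pixels of the previous frame are displaced by their flow vectors, and both endpoints are flagged as the region to re-paint. Thresholds and flow parameters are illustrative.

```python
import cv2
import numpy as np

def motion_map(prev_gray, curr_gray, edge_lo=50, edge_hi=150):
    """Mark the region where edges move between frames: take edge pixels of
    the previous frame, displace them by their optical-flow vectors, and
    flag both endpoints as needing re-painting."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    edges = cv2.Canny(prev_gray, edge_lo, edge_hi)
    ys, xs = np.nonzero(edges)
    mmap = np.zeros_like(edges)
    mmap[ys, xs] = 255  # motion start: edge pixels
    nx = np.clip((xs + flow[ys, xs, 0]).astype(int), 0, edges.shape[1] - 1)
    ny = np.clip((ys + flow[ys, xs, 1]).astype(int), 0, edges.shape[0] - 1)
    mmap[ny, nx] = 255  # motion end: displaced edge pixels
    return mmap         # region of the frame to re-paint with new strokes
```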

Real-Time 2D-to-3D Conversion for 3DTV using Time-Coherent Depth-Map Generation Method

  • Nam, Seung-Woo;Kim, Hye-Sun;Ban, Yun-Ji;Chien, Sung-Il
    • International Journal of Contents / v.10 no.3 / pp.9-16 / 2014
  • Depth-image-based rendering is generally used in real-time 2D-to-3D conversion for 3DTV. However, inaccurate depth maps cause flickering issues between image frames in a video sequence, resulting in eye fatigue while viewing 3DTV. To resolve this flickering issue, we propose a new 2D-to-3D conversion scheme based on fast and robust depth-map generation from a 2D video sequence. The proposed depth-map generation algorithm divides an input video sequence into several cuts using a color histogram. The initial depth of each cut is assigned based on a hypothesized depth-gradient model. The initial depth map of the current frame is refined using color and motion information. Thereafter, the depth map of the next frame is updated using the difference image to reduce depth flickering. The experimental results confirm that the proposed scheme performs real-time 2D-to-3D conversions effectively and reduces human eye fatigue.
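
The pipeline above can be sketched in three illustrative pieces, under assumptions of our own (a bottom-near depth-gradient model, a simple histogram-distance cut detector, and a difference-image update that freezes depth where the image barely changed); the paper's actual models and thresholds may differ.

```python
import numpy as np

def initial_depth(h, w):
    """Hypothesized depth-gradient model: bottom of the frame is near,
    top is far (a common assumption for outdoor shots)."""
    return np.tile(np.linspace(1.0, 0.0, h)[:, None], (1, w))

def is_new_cut(hist_prev, hist_curr, thresh=0.4):
    """Detect a scene cut from color-histogram distance (illustrative)."""
    return np.abs(hist_prev - hist_curr).sum() / hist_curr.sum() > thresh

def update_depth(prev_depth, curr_depth, prev_frame, curr_frame, diff_thresh=10):
    """Keep the previous depth where the image barely changed, so the
    depth map does not flicker between frames."""
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    static = diff < diff_thresh
    out = curr_depth.copy()
    out[static] = prev_depth[static]
    return out
```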

An Evaluation System to Determine the Completeness of a Space Map Obtained by Visual SLAM (Visual SLAM을 통해 획득한 공간 지도의 완성도 평가 시스템)

  • Kim, Han Sol;Kam, Jae Won;Hwang, Sung Soo
    • Journal of Korea Multimedia Society / v.22 no.4 / pp.417-423 / 2019
  • This paper presents an evaluation system to determine the completeness of a space map obtained by a visual SLAM (Simultaneous Localization And Mapping) algorithm. The proposed system consists of three parts. First, it detects the occurrence of loop closing to confirm that the user has acquired information from all directions. Next, the acquired map is divided at regular intervals, and each area is checked for whether it contains enough map points to estimate the user's position successfully. Finally, to check the effectiveness of each map point, the system verifies that map points remain identifiable even from locations far from where they were acquired. Experimental results show that space maps whose completeness is proven by the proposed system have higher stability and accuracy in position estimation than maps that are not.
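
The second part of the system, checking that every region of the map has enough map points, might look like the following sketch. The grid size, point threshold, and the use of 2-D (x, z) coordinates are assumptions.

```python
import numpy as np

def check_map_completeness(map_points, bounds, grid=(4, 4), min_points=50):
    """Divide the mapped area into regular cells and flag cells with too few
    map points to localize reliably.

    map_points: (N, 2) array of x, z coordinates of SLAM map points.
    bounds: (xmin, xmax, zmin, zmax) of the mapped area."""
    xmin, xmax, zmin, zmax = bounds
    counts = np.zeros(grid, dtype=int)
    gx = np.clip(((map_points[:, 0] - xmin) / (xmax - xmin) * grid[0]).astype(int),
                 0, grid[0] - 1)
    gz = np.clip(((map_points[:, 1] - zmin) / (zmax - zmin) * grid[1]).astype(int),
                 0, grid[1] - 1)
    np.add.at(counts, (gx, gz), 1)          # histogram of points per cell
    return counts >= min_points             # True where the cell is complete
```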

Novel VO and HO Map for Vertical Obstacle Detection in Driving Environment (새로운 VO, HO 지도를 이용한 차량 주행환경의 수직 장애물 추출)

  • Baek, Seung-Hae;Park, Soon-Yong
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.2 / pp.163-173 / 2013
  • We present a new computer vision technique that detects unexpected or static vertical objects in a road-driving environment. We first obtain temporal and spatial difference images in each frame of a stereo video sequence. Using the difference images, we then generate VO and HO maps by improving the conventional V- and H-disparity maps. From the VO and HO maps, candidate areas of vertical obstacles on the road are detected. Finally, the candidate areas are merged and refined to detect vertical obstacles.
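
For reference, the conventional V-disparity map that the VO map improves upon can be computed as below: for each image row, a histogram of disparity values, in which the road surface appears as a slanted line and vertical obstacles as vertical segments. This is the textbook construction, not the paper's VO/HO refinement.

```python
import numpy as np

def v_disparity(disparity, max_disp=64):
    """Conventional V-disparity map: one disparity histogram per image row.

    disparity: (H, W) disparity image; invalid pixels may be negative.
    Returns an (H, max_disp) histogram image."""
    h = disparity.shape[0]
    vmap = np.zeros((h, max_disp), dtype=np.int32)
    for row in range(h):
        d = disparity[row]
        d = d[(d >= 0) & (d < max_disp)].astype(int)  # keep valid disparities
        np.add.at(vmap[row], d, 1)
    return vmap
```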

REAL-TIME DETECTION OF MOVING OBJECTS IN A ROTATING AND ZOOMING CAMERA

  • Li, Ying-Bo;Cho, Won-Ho;Hong, Ki-Sang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.71-75 / 2009
  • In this paper, we present a real-time method to detect moving objects from a rotating and zooming camera. It is useful for surveillance with a fixed but rotating camera, a camera on a moving car, and so on. We first compensate the global motion and then exploit the displaced frame difference (DFD) to find the block-wise boundary. For robust detection, we propose a composite image that combines the detections from consecutive frames. We use block-wise detection to achieve real-time speed, except for the pixel-wise DFD. In addition, a fast block-matching algorithm is proposed to obtain local motions and, from them, the global affine motion. In the experimental results, we demonstrate that the proposed algorithm handles real-time detection of common objects, small objects, multiple objects, objects in low-contrast environments, and objects under camera zoom.

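A minimal sketch of the global-motion-compensation-plus-DFD step described above. Sparse Lucas-Kanade tracking stands in for the paper's fast block-matching when estimating local motions, and RANSAC affine fitting recovers the global motion; all parameters are illustrative.

```python
import cv2
import numpy as np

def displaced_frame_difference(prev_gray, curr_gray):
    """Compensate global (affine) motion, then take the displaced frame
    difference (DFD); the residual highlights independently moving objects."""
    # Local motions: track corner features from the previous frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    # Global motion: robust affine fit over the local motion vectors.
    A, _ = cv2.estimateAffine2D(pts[good], nxt[good], method=cv2.RANSAC)
    h, w = prev_gray.shape
    warped = cv2.warpAffine(prev_gray, A, (w, h))  # globally aligned frame
    return cv2.absdiff(warped, curr_gray)          # DFD
```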

Implementation of Real-time Stereoscopic Image Conversion Algorithm Using Luminance and Vertical Position (휘도와 수직 위치 정보를 이용한 입체 변환 알고리즘 구현)

  • Yun, Jong-Ho;Choi, Myul-Rul
    • Journal of the Korea Academia-Industrial cooperation Society / v.9 no.5 / pp.1225-1233 / 2008
  • In this paper, a 2D/3D conversion algorithm is proposed. A single 2D frame is used for real-time processing. The proposed algorithm creates a 3D image with a depth map built from the vertical position information of objects in a single frame. For real-time processing and reduced hardware complexity, the depth map is generated using image sampling, object segmentation with luminance standardization, and boundary scanning. The method suits both still and moving images and, because it uses vertical position information, provides a good 3D effect for images such as long-distance shots, landscapes, and panorama photos. The proposed algorithm can apply a 3D effect to an image without restrictions on an object's direction or velocity, or on scene changes. It has been evaluated through visual testing and by comparison with the MTD (Modified Time Difference) method using the APD (Absolute Parallax Difference) metric.
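
A rough sketch of depth assignment from vertical position, with luminance-band quantization as a crude stand-in for the paper's luminance-standardized object segmentation and boundary scan. The number of bands and the bottom-is-near convention are assumptions.

```python
import numpy as np

def depth_from_vertical_position(gray, n_levels=8):
    """Assign depth from vertical position: lower rows are treated as nearer
    (larger depth value). Luminance bands approximate object segments so each
    segment receives one coherent depth."""
    h, w = gray.shape
    row_depth = np.linspace(0.0, 255.0, h)[:, None] * np.ones((1, w))
    levels = gray // (256 // n_levels)          # quantized luminance bands
    depth = np.zeros_like(row_depth)
    for lv in range(n_levels):
        mask = levels == lv
        if mask.any():
            # One depth per band: the band's mean vertical-position depth.
            depth[mask] = row_depth[mask].mean()
    return depth.astype(np.uint8)
```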

A Robust Object Extraction Method for Immersive Video Conferencing (몰입형 화상 회의를 위한 강건한 객체 추출 방법)

  • Ahn, Il-Koo;Oh, Dae-Young;Kim, Jae-Kwang;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.2 / pp.11-23 / 2011
  • In this paper, an accurate and fully automatic video object segmentation method is proposed for video conferencing systems in which real-time performance is required. The proposed method consists of two steps: 1) accurate object extraction on the initial frame, and 2) real-time object extraction from subsequent frames using the result of the first step. Object extraction on the initial frame starts by generating a cumulative edge map from the frame differences at the beginning of the sequence, since the initial shape of the foreground object can be estimated from the cumulative motion. This estimated shape is used to assign the object and background seeds needed for Graph-Cut segmentation. Once the foreground object is extracted by Graph-Cut segmentation, real-time object extraction proceeds using the extracted object and a double edge map obtained from the difference between two successive frames. Experimental results show that, unlike previous methods, the proposed method is suitable for real-time processing even on VGA-resolution videos, making it a useful tool for immersive video conferencing systems.
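
The first step, extracting the object on the initial frame, might be sketched as follows: accumulate thresholded frame differences into a cumulative edge map, derive probable foreground seeds from it, and segment with OpenCV's GrabCut as an off-the-shelf graph-cut stand-in (the paper uses its own Graph-Cut seeding). Thresholds and iteration count are assumptions.

```python
import cv2
import numpy as np

def extract_initial_object(frames, diff_thresh=15, accum_thresh=3):
    """Segment the foreground object on the initial frames of a sequence.

    frames: list of BGR frames from the start of the video."""
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    accum = np.zeros(gray[0].shape, dtype=np.int32)
    for a, b in zip(gray[:-1], gray[1:]):
        accum += (cv2.absdiff(a, b) > diff_thresh).astype(np.int32)
    # Cumulative motion -> probable foreground seeds; the rest is probable
    # background.
    mask = np.full(gray[0].shape, cv2.GC_PR_BGD, dtype=np.uint8)
    mask[accum >= accum_thresh] = cv2.GC_PR_FGD
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(frames[-1], mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return np.where(fg, 255, 0).astype(np.uint8)
```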

Local and Global Navigation Maps for Safe UAV Flight (드론의 안전비행을 위한 국부 및 전역지도 인터페이스)

  • Yu, Sanghyeong;Jeon, Jongwoo;Cho, Kwangsu
    • The Journal of Korea Robotics Society / v.13 no.2 / pp.113-120 / 2018
  • To fly a drone or unmanned aerial vehicle (UAV) safely, its pilot needs to maintain high situation awareness of the flight space. One important way to improve flight-space awareness is to integrate the global and local navigation maps a drone provides. However, the two maps often present inconsistent reference frames or perspectives to the pilot. Specifically, the global navigation map tends to display space information in the third-person perspective, whereas the local map tends to use the first-person perspective of the drone camera. This inconsistency forces the pilot to use mental rotation to align the two perspectives, and integrating different dimensionalities (2D vs. 3D) between the maps may further aggravate the cognitive load of that rotation. This study therefore investigates the relation between perspective difference ($0^{\circ}$, $90^{\circ}$, $180^{\circ}$, $270^{\circ}$) and map dimensionality match (3D-3D vs. 3D-2D) to improve how the two maps are integrated. The results show that the pilot's flight-space awareness improves when the perspective difference is smaller and when the dimensionalities of the two maps match.
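
One way to remove the perspective difference the study measures is a track-up transform that rotates the global map to the drone's heading, giving a $0^{\circ}$ difference between the two views. The sketch below is a hypothetical illustration, not part of the paper's experiment.

```python
import numpy as np

def align_global_map(global_map_points, drone_heading_deg):
    """Rotate global-map coordinates by the drone's heading so the global
    (third-person) view matches the first-person camera view.

    global_map_points: (N, 2) x, y positions on the global map."""
    t = np.radians(-drone_heading_deg)  # rotate map opposite to heading
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return global_map_points @ R.T
```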