• Title/Summary/Keyword: Video extraction

A Fast Semiautomatic Video Object Tracking Algorithm (고속의 세미오토매틱 비디오객체 추적 알고리즘)

  • Lee, Jong-Won;Kim, Jin-Sang;Cho, Won-Kyung
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.291-294
    • /
    • 2004
  • Semantic video object extraction is important for tracking meaningful objects in video and for object-based video coding. We propose a fast semiautomatic video object extraction algorithm which combines a watershed segmentation scheme with the chamfer distance transform. Initial object boundaries in the first frame are defined by a human before tracking, and fast video object tracking is achieved by tracking only motion-detected regions in a video frame. Experimental results show that the boundaries of the tracked video object are close to the real object boundaries and that the proposed algorithm is promising in terms of speed.

  • PDF
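
The chamfer distance transform combined with watershed segmentation above can be sketched with the standard two-pass 3-4 approximation (the weights and scan order here are the textbook formulation, not taken from the paper):

```python
def chamfer_distance(binary):
    """Two-pass 3-4 chamfer distance transform.
    binary: 2D list, 1 = object boundary pixel, 0 = background.
    Returns the approximate distance (in 3-4 weights) to the nearest 1-pixel."""
    h, w = len(binary), len(binary[0])
    INF = 10 ** 9
    d = [[0 if binary[y][x] else INF for x in range(w)] for y in range(h)]
    # forward pass: top-left to bottom-right (up, up-left, up-right, left)
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 3)
                if x > 0:
                    d[y][x] = min(d[y][x], d[y - 1][x - 1] + 4)
                if x < w - 1:
                    d[y][x] = min(d[y][x], d[y - 1][x + 1] + 4)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 3)
    # backward pass: bottom-right to top-left (down, down-right, down-left, right)
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 3)
                if x < w - 1:
                    d[y][x] = min(d[y][x], d[y + 1][x + 1] + 4)
                if x > 0:
                    d[y][x] = min(d[y][x], d[y + 1][x - 1] + 4)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 3)
    return d
```

Dividing the weights by 3 approximates Euclidean distance to the boundary, which is what makes the transform useful for matching a tracked boundary against a new frame.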

An Efficient Implementation of Key Frame Extraction and Sharing in Android for Wireless Video Sensor Network

  • Kim, Kang-Wook
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.9
    • /
    • pp.3357-3376
    • /
    • 2015
  • Wireless sensor networks are an important research topic that has attracted a lot of attention in recent years. However, most of this interest has focused on networks that gather scalar data such as temperature, humidity, and vibration. Scalar data are insufficient for diverse applications such as video surveillance, target recognition, and traffic monitoring, whereas camera sensors that collect information-rich video data can provide important visual information. Video sensor networks have therefore continued to gain interest in the past few years. However, how to efficiently store the massive data that reflect the environmental state at different times, and how to quickly search them for information of interest, remain challenging issues, especially when the sensor network environment is complicated. In this paper, we therefore propose a fast algorithm for extracting key frames from video and describe the design and implementation of key frame extraction and sharing in Android for wireless video sensor networks.
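
A minimal key frame extraction sketch in the spirit of the paper: keep a frame as a key frame whenever its normalized intensity histogram differs sufficiently from the last selected key frame (the bin count and threshold here are illustrative assumptions, not the authors' values):

```python
def extract_key_frames(frames, threshold=0.25):
    """Select key frame indices by comparing each frame's intensity
    histogram with that of the most recent key frame.
    frames: list of 2D grayscale frames (lists of lists, values 0-255)."""
    def histogram(frame, bins=16):
        h = [0] * bins
        total = 0
        for row in frame:
            for v in row:
                h[v * bins // 256] += 1
                total += 1
        return [c / total for c in h]  # normalized to sum 1

    def diff(h1, h2):
        # half the L1 distance between normalized histograms, in [0, 1]
        return sum(abs(a - b) for a, b in zip(h1, h2)) / 2

    if not frames:
        return []
    keys = [0]                      # the first frame is always a key frame
    ref = histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        h = histogram(frame)
        if diff(ref, h) > threshold:
            keys.append(i)
            ref = h                 # compare later frames to this key frame
    return keys
```

Comparing against the last key frame rather than the immediately preceding frame keeps slow gradual drift from being missed, at the cost of one stored histogram.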

On-line Background Extraction in Video Image Using Vector Median (벡터 미디언을 이용한 비디오 영상의 온라인 배경 추출)

  • Kim, Joon-Cheol;Park, Eun-Jong;Lee, Joon-Whoan
    • The KIPS Transactions:PartB
    • /
    • v.13B no.5 s.108
    • /
    • pp.515-524
    • /
    • 2006
  • Background extraction is an important technique for finding moving objects in video surveillance systems. This paper proposes a new on-line background extraction method for color video using vector order statistics. Exploiting the fact that the background occurs more frequently than objects, the proposed method treats the vector median of the color pixels at each position in consecutive frames as the background at that position. The objects of the current frame then consist of the set of pixels whose distance from the background pixel is larger than a threshold. In the paper, the proposed method is compared with on-line multiple background extraction based on a Gaussian mixture model (GMM) in order to evaluate its performance. As a result, its performance is similar or superior to the GMM-based method.
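
The vector median at the core of the method can be sketched as follows: among the color pixels observed at one position over consecutive frames, pick the pixel minimizing the summed distance to all the others. This is a hypothetical helper illustrating the order statistic, not the paper's code:

```python
def vector_median(pixels):
    """Return the vector median of a list of RGB pixels: the pixel whose
    summed Euclidean distance to all others is smallest. Unlike a
    per-channel median, the result is always one of the observed pixels,
    and it is robust to outlier colors (i.e. passing foreground objects)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(pixels, key=lambda p: sum(dist(p, q) for q in pixels))
```

Run per pixel position over a sliding window of frames, this yields the background model; a pixel in the current frame is then labeled as object when its distance to this background value exceeds a threshold.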

An Improved ViBe Algorithm of Moving Target Extraction for Night Infrared Surveillance Video

  • Feng, Zhiqiang;Wang, Xiaogang;Yang, Zhongfan;Guo, Shaojie;Xiong, Xingzhong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.12
    • /
    • pp.4292-4307
    • /
    • 2021
  • In night infrared surveillance video, target imaging is easily affected by light due to the characteristics of active infrared cameras, and the classical ViBe algorithm suffers from background misjudgment, noise interference, ghost shadows, and other problems when extracting moving targets. Therefore, an improved ViBe algorithm (I-ViBe) for moving target extraction in night infrared surveillance video is proposed in this paper. Firstly, the video frames are sampled and judged by the degree of light influence, and each frame is classified into one of three situations: no light change, small light change, and severe light change. When there is no light change, the ViBe algorithm extracts the moving target directly. When the light change is small, the segmentation factor of the ViBe algorithm is adapted to reduce the impact of the light. When the illumination changes drastically, the moving target is extracted by a region growing algorithm, improved using image entropy, applied to the difference image between the current frame and the background model. Simulation results show that the proposed I-ViBe algorithm is more robust to the influence of illumination. When extracting moving targets at night, I-ViBe is more accurate and provides more effective data for further night behavior recognition and target tracking.
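
The ViBe decision rule that I-ViBe builds on compares each pixel against a small set of stored background samples; a minimal sketch (the radius, match count, and subsampling factor are the commonly cited ViBe defaults, not values from this paper):

```python
import random

def vibe_classify(pixel, samples, radius=20, min_matches=2):
    """Classify a grayscale pixel as background if at least `min_matches`
    of its stored background samples lie within `radius` of its value
    (the core ViBe decision rule). Returns True for background."""
    matches = sum(1 for s in samples if abs(pixel - s) < radius)
    return matches >= min_matches

def vibe_update(samples, pixel, subsampling=16, rng=random):
    """Conservative random update: with probability 1/subsampling, replace
    a randomly chosen sample with the current (background) pixel value, so
    the model slowly absorbs gradual scene changes."""
    if rng.random() < 1.0 / subsampling:
        samples[rng.randrange(len(samples))] = pixel
```

I-ViBe's adaptation for small light changes amounts to adjusting the segmentation parameters of this rule instead of keeping them fixed.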

Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.7
    • /
    • pp.646-654
    • /
    • 2007
  • In recent years, the use of text inserted into TV contents has grown to provide viewers with better visual understanding. In this paper, video text is defined as superimposed text located at the bottom of the video. Video text extraction is the first step for video information retrieval and video indexing. Most previous video text detection and extraction methods are based on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction suffers from the low resolution of video and from complex backgrounds. To solve these problems, we propose a method to extract text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of text candidates considering the density of corners, text region determination using labeling, and post-processing. The proposed algorithm is language independent and can be applied to text of various colors. Text region updating between frames is also exploited to reduce the processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
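
The corner-density step can be illustrated as follows: divide the binary corner map into cells and keep cells containing enough corners as text candidates, since superimposed characters produce dense corner responses (the block size and count threshold are illustrative assumptions):

```python
def text_candidate_blocks(corner_map, block=8, min_corners=4):
    """Divide a binary corner map into block x block cells and return the
    (row, col) indices of cells whose corner count reaches min_corners.
    corner_map: 2D list, 1 = Harris corner detected at that pixel."""
    h, w = len(corner_map), len(corner_map[0])
    candidates = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            count = sum(corner_map[y][x]
                        for y in range(by, min(by + block, h))
                        for x in range(bx, min(bx + block, w)))
            if count >= min_corners:
                candidates.append((by // block, bx // block))
    return candidates
```

In the full pipeline, connected candidate cells would then be merged by labeling and refined in the post-processing step.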

A Robust Object Extraction Method for Immersive Video Conferencing (몰입형 화상 회의를 위한 강건한 객체 추출 방법)

  • Ahn, Il-Koo;Oh, Dae-Young;Kim, Jae-Kwang;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.2
    • /
    • pp.11-23
    • /
    • 2011
  • In this paper, an accurate and fully automatic video object segmentation method is proposed for video conferencing systems in which real-time performance is required. The proposed method consists of two steps: 1) accurate object extraction on the initial frame, and 2) real-time object extraction from subsequent frames using the result of the first step. Object extraction on the initial frame starts by generating a cumulative edge map from the frame differences at the beginning of the sequence, since the initial shape of the foreground object can be estimated from the cumulative motion. This estimated shape is used to assign the seeds for both object and background that are needed for Graph-Cut segmentation. Once the foreground object is extracted by Graph-Cut segmentation, real-time object extraction is conducted using the extracted object and the double edge map obtained from the difference between two successive frames. Experimental results show that, unlike previous methods, the proposed method is suitable for real-time processing even on VGA resolution videos, making it a useful tool for immersive video conferencing systems.
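
The cumulative map built from frame differences at the start of the sequence can be sketched as a simple accumulator over thresholded differences; pixels that move often accumulate high counts and outline the foreground shape (grayscale frames and the threshold value are illustrative assumptions; the paper accumulates edge maps of the differences):

```python
def cumulative_motion_map(frames, diff_threshold=15):
    """Accumulate thresholded absolute frame differences over the first
    frames of a sequence. frames: list of 2D grayscale frames. Returns a
    2D count map; high counts mark frequently moving (foreground) pixels."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for y in range(h):
            for x in range(w):
                if abs(cur[y][x] - prev[y][x]) > diff_threshold:
                    acc[y][x] += 1
    return acc
```

Thresholding this map high yields object seeds and thresholding it low yields background seeds, the two inputs Graph-Cut needs.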

Fast Extraction of Objects of Interest from Images with Low Depth of Field

  • Kim, Chang-Ick;Park, Jung-Woo;Lee, Jae-Ho;Hwang, Jenq-Neng
    • ETRI Journal
    • /
    • v.29 no.3
    • /
    • pp.353-362
    • /
    • 2007
  • In this paper, we propose a novel unsupervised video object extraction algorithm for individual images or image sequences with low depth of field (DOF). Low DOF is a popular photographic technique which enables the representation of the photographer's intention by giving a clear focus only on an object of interest (OOI). We first describe a fast and efficient scheme for extracting OOIs from individual low-DOF images and then extend it to deal with image sequences with low DOF. The basic algorithm unfolds into three modules. In the first module, a higher-order statistics map, which represents the spatial distribution of the high-frequency components, is obtained from an input low-DOF image. The second module locates the block-based OOI for further processing. Using the block-based OOI, the final OOI is obtained with pixel-level accuracy. We also present an algorithm to extend the extraction scheme to image sequences with low DOF. The proposed system does not require any user assistance to determine the initial OOI; this is possible due to the use of low-DOF images. The experimental results indicate that the proposed algorithm can serve as an effective tool for applications such as 2D-to-3D conversion and photo-realistic video scene generation.

  • PDF
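
The higher-order-statistics map of high-frequency components can be approximated, for illustration, by a local variance map: in-focus OOI regions carry high-frequency detail and hence high local variance, while the blurred background stays near zero (this substitutes plain variance for the paper's higher-order statistics):

```python
def local_variance_map(image, radius=1):
    """Per-pixel local variance over a (2*radius+1)^2 window, clamped at
    the image borders. image: 2D list of grayscale values. High values
    indicate in-focus (high-frequency) regions of a low-DOF image."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = sum((v - mean) ** 2 for v in vals) / len(vals)
    return out
```

Thresholding such a map block-wise corresponds to the second module's block-based OOI localization.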

Caption Region Extraction of Sports Video Using Multiple Frame Merge (다중 프레임 병합을 이용한 스포츠 비디오 자막 영역 추출)

  • 강오형;황대훈;이양원
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.4
    • /
    • pp.467-473
    • /
    • 2004
  • Captions in video play an important role in delivering video content. Existing caption region extraction methods have difficulty separating the caption region from the background because they are sensitive to noise. This paper proposes a method to extract the caption region in sports video using multiple frame merging and MBRs (Minimum Bounding Rectangles). As preprocessing, an adaptive threshold is obtained using contrast stretching and the Otsu method. The caption frame interval is extracted by multiple frame merging, and the caption region is efficiently extracted by median filtering, morphological dilation, region labeling, candidate character region filtering, and MBR extraction.

  • PDF
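
The Otsu method used in the preprocessing step selects the global threshold that maximizes the between-class variance of the grayscale histogram; a minimal sketch of the standard algorithm:

```python
def otsu_threshold(histogram):
    """Otsu's method on a 256-bin grayscale histogram: return the
    threshold t that maximizes the between-class variance of the two
    classes {0..t} (background) and {t+1..255} (foreground)."""
    total = sum(histogram)
    sum_all = sum(i * h for i, h in enumerate(histogram))
    sum_bg = 0.0
    weight_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        weight_bg += histogram[t]
        if weight_bg == 0:
            continue                    # no background class yet
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break                       # no foreground class left
        sum_bg += t * histogram[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_all - sum_bg) / weight_fg
        var_between = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Because captions are typically high-contrast against the stretched background, the bimodal histogram assumption behind Otsu's method holds well in this setting.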

Context-based Video Retrieval using Fast Key Frame Extraction (고속 key frame 추출 기법을 이용한 내용 기반 비디오 검색 기법)

  • Hong, Bo-Hyun;Eum, Min-Young;Kim, Myoung-Ho;Choe, Yoon-Sik
    • Proceedings of the KIEE Conference
    • /
    • 2005.10b
    • /
    • pp.539-541
    • /
    • 2005
  • We propose an efficient video retrieval scheme that uses fast key frame extraction in the DCT domain. Our scheme extracts key frames using the edge histogram difference computed in the compressed domain for I-frames, and video retrieval is implemented using the Hausdorff distance between the edge histograms of key frames. This approach enables fast content-based retrieval of compressed video content without a decompression process. Experimental results show that our scheme is very fast and efficient.

  • PDF
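
The Hausdorff distance used for matching is, in its general set form, the largest nearest-neighbor distance between two point sets; a minimal sketch (applying it to edge-histogram features treated as points is an assumption about the paper's exact formulation):

```python
def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two finite point sets a and b
    (tuples of coordinates): the largest distance from any point in one
    set to its nearest neighbor in the other set."""
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
    def directed(s, t):
        return max(min(dist(p, q) for q in t) for p in s)
    return max(directed(a, b), directed(b, a))
```

A query clip then matches a stored video when the Hausdorff distance between their key-frame feature sets falls below a retrieval threshold.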

Video Summarization Using Hidden Markov Model (은닉 마르코브 모델을 이용한 비디오 요약 시스템)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.6
    • /
    • pp.1175-1181
    • /
    • 2004
  • This paper proposes a system to analyze and summarize the video shots of baseball game TV programs into fifteen categories. Our system consists of three modules: feature extraction, Hidden Markov Model (HMM) training, and video shot categorization. Video shots belonging to the same class are not necessarily similar, so the training set must be large enough to include shots with all possible variations in order to create robust Hidden Markov Models. Experiments show that our system recognizes the 15 different shot classes with a success rate of 84.72%.
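
HMM-based shot categorization ultimately decodes the most likely hidden state (category) sequence given the observed shot features; a minimal Viterbi sketch using hypothetical states and observations (the classic two-state weather example, not the paper's 15 baseball categories or its trained parameters):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Viterbi decoding: return the most likely hidden state sequence of
    an HMM for an observation sequence. V[t][s] holds (best probability
    of any path ending in state s at time t, predecessor state)."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            prob, prev = max(
                (V[-1][p][0] * trans_p[p][s] * emit_p[s][o], p) for p in states)
            row[s] = (prob, prev)
        V.append(row)
    # backtrack from the most likely final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for row in reversed(V[1:]):
        path.append(row[path[-1]][1])
    return list(reversed(path))
```

In a shot-categorization setting, the observations would be quantized shot features and the states the shot classes, with one trained model (or state) per category.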