• Title/Summary/Keyword: Video Object


A Fast Semiautomatic Video Object Tracking Algorithm (고속의 세미오토매틱 비디오객체 추적 알고리즘)

  • Lee, Jong-Won; Kim, Jin-Sang; Cho, Won-Kyung
    • Proceedings of the KIEE Conference / 2004.11c / pp.291-294 / 2004
  • Semantic video object extraction is important for tracking meaningful objects in video and for object-based video coding. We propose a fast semiautomatic video object extraction algorithm that combines a watershed segmentation scheme with a chamfer distance transform. Initial object boundaries in the first frame are defined by a human operator before tracking, and fast video object tracking is then achieved by tracking only motion-detected regions in each video frame. Experimental results show that the boundaries of the tracked video objects are close to the real object boundaries and that the proposed algorithm is promising in terms of speed.

  • PDF
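
The entry above combines a chamfer distance transform with watershed segmentation and restricts tracking to motion-detected regions. Below is a minimal sketch of that idea using OpenCV primitives; it is not the authors' implementation, and the thresholds, kernel sizes, and marker setup are assumptions.

```python
import cv2
import numpy as np

def track_object(prev_gray, curr_gray, prev_mask, motion_thresh=15):
    """One tracking step: update an object mask using motion detection,
    a chamfer-style distance transform, and watershed segmentation."""
    # 1. Restrict processing to motion-detected pixels.
    motion = cv2.absdiff(curr_gray, prev_gray)
    _, motion_mask = cv2.threshold(motion, motion_thresh, 255, cv2.THRESH_BINARY)

    # 2. Chamfer (3x3) distance transform of the previous object mask:
    #    large distances mark sure-foreground seeds well inside the old object.
    dist = cv2.distanceTransform(prev_mask, cv2.DIST_L2, 3)
    sure_fg = np.uint8(dist > 0.5 * dist.max()) * 255
    near_object = cv2.dilate(prev_mask, np.ones((15, 15), np.uint8))

    # 3. Build watershed markers: 1 = background, 2 = object, 0 = unknown band.
    markers = np.ones(prev_mask.shape, np.int32)
    markers[(near_object > 0) & (sure_fg == 0)] = 0
    markers[sure_fg > 0] = 2

    # 4. Run watershed on the current frame (needs a 3-channel image).
    color = cv2.cvtColor(curr_gray, cv2.COLOR_GRAY2BGR)
    cv2.watershed(color, markers)

    # 5. Keep the object label only inside motion-detected regions or the old mask.
    new_mask = np.uint8(markers == 2) * 255
    return cv2.bitwise_and(new_mask, cv2.bitwise_or(motion_mask, prev_mask))
```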

Online Video Synopsis via Multiple Object Detection

  • Lee, JaeWon; Kim, DoHyeon; Kim, Yoon
    • Journal of the Korea Society of Computer and Information / v.24 no.8 / pp.19-28 / 2019
  • In this paper, an online video summarization algorithm based on multiple object detection is proposed. As crime has been on the rise with recent rapid urbanization, public demand for safety has grown and the installation of surveillance cameras such as closed-circuit television (CCTV) has been increasing in many cities. However, it takes a lot of time and labor to retrieve and analyze the huge amount of video data from numerous CCTVs. As a result, there is an increasing demand for intelligent video recognition systems that can automatically detect and summarize the various events occurring on CCTVs. Video summarization is a method of generating a synopsis video from a long original video so that users can watch it in a short time. The proposed video summarization method can be divided into two stages. The object extraction step detects objects in the video and extracts the specific objects desired by the user. The video summary step creates the final synopsis video based on the objects extracted in the previous step. While existing methods do not consider the interaction between objects of the original video when generating the synopsis video, the proposed method uses a new object clustering algorithm that effectively preserves these interactions in the synopsis video. This paper also proposes an online optimization method that can efficiently summarize the large number of objects appearing in long videos. Finally, experimental results show that the performance of the proposed method is superior to that of existing video synopsis algorithms.
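
For illustration, here is a minimal sketch of the core synopsis idea: each detected object "tube" (its sequence of per-frame boxes) is shifted to an earlier start time while greedily avoiding collisions. The data structures and the greedy placement rule are assumptions for illustration, not the paper's clustering or online optimization.

```python
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # x, y, w, h

@dataclass
class Tube:
    start: int          # first frame index in the original video
    boxes: List[Box]    # one box per frame the object is visible

def overlaps(a: Box, b: Box) -> bool:
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def collides(tube: Tube, start: int, placed: List[Tuple[int, Tube]]) -> bool:
    """True if placing `tube` at synopsis frame `start` overlaps any placed tube."""
    for p_start, p_tube in placed:
        for t, box in enumerate(tube.boxes):
            dt = start + t - p_start
            if 0 <= dt < len(p_tube.boxes) and overlaps(box, p_tube.boxes[dt]):
                return True
    return False

def place_tubes(tubes: List[Tube]) -> List[Tuple[int, Tube]]:
    """Greedy placement: each tube gets the earliest collision-free start time,
    so the synopsis becomes much shorter than the original video."""
    placed: List[Tuple[int, Tube]] = []
    for tube in sorted(tubes, key=lambda t: t.start):
        start = 0
        while collides(tube, start, placed):
            start += 1
        placed.append((start, tube))
    return placed
```

Processing tubes in order of their original start times, as done here, is one simple way to keep the relative ordering of events in the synopsis.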

Low-Complexity MPEG-4 Shape Encoding towards Realtime Object-Based Applications

  • Jang, Euee-Seon
    • ETRI Journal / v.26 no.2 / pp.122-135 / 2004
  • Although frame-based MPEG-4 video services have been successfully deployed since 2000, MPEG-4 video coding is now facing great competition in becoming a dominant player in the market. Object-based coding is one of the key functionalities of MPEG-4 video coding, and real-time object-based video encoding is also important for multimedia broadcasting in the near future. Object-based video services using MPEG-4 have not yet made a successful debut, for several reasons. One of the critical problems is the coding complexity of object-based video coding relative to frame-based video coding. Since a video object is described with an arbitrary shape, the bitstream contains not only motion and texture data but also shape data. This introduces additional complexity on the decoder side as well as on the encoder side. In this paper, we analyze the current MPEG-4 video encoding tools and propose efficient coding technologies that reduce the complexity of the encoder. Using the proposed coding schemes, we obtained a 56 percent reduction in shape-coding complexity over the MPEG-4 video reference software (Microsoft version, 2000 edition).

  • PDF
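
As background to the shape-coding cost discussed above: in MPEG-4, the arbitrary object shape is carried as a binary alpha plane split into 16x16 binary alpha blocks (BABs), and only "boundary" blocks need the expensive arithmetic-coding path. The sketch below shows just that classification step; it is a simplification of the standard, not the paper's optimized encoder.

```python
import numpy as np

def classify_babs(alpha_plane: np.ndarray, block: int = 16):
    """Split a binary alpha plane (0 = transparent, 255 = opaque) into
    16x16 binary alpha blocks and classify each one. Only 'boundary'
    blocks require the costly context-based arithmetic coding."""
    h, w = alpha_plane.shape
    labels = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            bab = alpha_plane[y:y + block, x:x + block]
            if not bab.any():
                labels[(y // block, x // block)] = "transparent"  # coded with a flag only
            elif bab.all():
                labels[(y // block, x // block)] = "opaque"       # coded with a flag only
            else:
                labels[(y // block, x // block)] = "boundary"     # arithmetic-coded
    return labels
```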

Object segmentation and object-based surveillance video indexing

  • Kim, Jin-Woong; Kim, Mun-Churl; Lee, Kyu-Won; Kim, Jae-Gon; Ahn, Chie-Teuk
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06a / pp.165.1-170 / 1999
  • Object segmentation from natural video scenes has recently become a very active research topic due to the object-based video coding standard MPEG-4. Object detection and isolation are also useful for object-based indexing and search of video content, which is a goal of the emerging new standard, MPEG-7. In this paper, an automatic segmentation method for moving objects in image sequences is presented that is applicable to multimedia content authoring for MPEG-4, and two different segmentation approaches suitable for surveillance applications are addressed, one in the raw data domain and one in the compressed bitstream domain. We also propose an object-based video description scheme built on object segmentation for video indexing purposes.
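
The abstract above does not spell out its raw-domain or compressed-domain methods, so the sketch below only illustrates the general surveillance use case: segmenting moving objects with background subtraction and turning the detections into per-frame bounding boxes that could feed an index. The subtractor choice, thresholds, and minimum area are assumptions.

```python
import cv2

def segment_moving_objects(video_path: str, min_area: int = 500):
    """Segment moving objects in a surveillance video with background
    subtraction; yields (frame_index, bounding_boxes) for indexing."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow pixels
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # remove small noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
        yield idx, boxes
        idx += 1
    cap.release()
```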

A Robust Algorithm for Moving Object Segmentation and VOP Extraction in Video Sequences (비디오 시퀸스에서 움직임 객체 분할과 VOP 추출을 위한 강력한 알고리즘)

  • Kim, Jun-Ki; Lee, Ho-Suk
    • Journal of KIISE: Computing Practices and Letters / v.8 no.4 / pp.430-441 / 2002
  • Video object segmentation is an important component of object-based video coding schemes such as MPEG-4. In this paper, a robust algorithm for the segmentation of moving objects in video sequences and the extraction of VOPs (Video Object Planes) is presented. The main contributions are the detection of accurate object boundaries by associating moving object edges with spatial object edges, and the generation of VOPs. The algorithm begins with the difference between two successive frames. After extracting the difference image, accurate moving object edges are produced using the Canny algorithm and morphological operations. To enhance extraction performance, morphological operations are applied to obtain a more accurate VOP; specifically, morphological erosion is applied so that only accurate object edges are retained, and moving object edges between the two images are generated by adjusting the size of the edges. This paper presents a robust implementation for fast moving object detection that extracts accurate object boundaries in video sequences.
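
The pipeline described above (frame difference, Canny edges, morphological operations) maps directly onto OpenCV primitives. Below is a minimal sketch of that pipeline under assumed thresholds and kernel sizes; it is an illustration, not the authors' exact algorithm.

```python
import cv2
import numpy as np

def extract_vop(prev_gray, curr_gray, diff_thresh=20):
    """Rough VOP extraction: difference two successive frames, combine the
    motion mask with Canny edges, then clean the region with morphology."""
    # 1. Frame difference to find changed (moving) pixels.
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, moving = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # 2. Spatial edges of the current frame.
    spatial_edges = cv2.Canny(curr_gray, 50, 150)

    # 3. Moving object edges = spatial edges that lie in changed regions.
    moving_edges = cv2.bitwise_and(
        spatial_edges, cv2.dilate(moving, np.ones((5, 5), np.uint8)))

    # 4. Close gaps in the boundary, then fill it to obtain the VOP mask.
    closed = cv2.morphologyEx(moving_edges, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vop_mask = np.zeros_like(curr_gray)
    cv2.drawContours(vop_mask, contours, -1, 255, thickness=cv2.FILLED)

    # 5. Erosion trims pixels that leaked outside the true boundary.
    return cv2.erode(vop_mask, np.ones((3, 3), np.uint8))
```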

Segmentation of Objects of Interest for Video Content Analysis (동영상 내용 분석을 위한 관심 객체 추출)

  • Park, So-Jung; Kim, Min-Hwan
    • Journal of Korea Multimedia Society / v.10 no.8 / pp.967-980 / 2007
  • Video objects of interest play an important role in representing video content and are useful for improving the performance of video retrieval and compression. An object of interest may be the main object describing the content of a video shot or a core object that the video producer wants to highlight in the shot. An object that strongly attracts the eye in a video shot is not necessarily an object of interest, and a non-moving object can be an object of interest just as a moving one can. However, it is not easy to define an object of interest clearly, because a procedural description of human interest is difficult. In this paper, a set of four filtering conditions for extracting moving objects of interest is suggested, defined in terms of the variation in location, size, and movement pattern of moving objects in a video shot. Non-moving objects of interest are defined by another set of four conditions related to the saliency of color/texture, location, size, and occurrence frequency of static objects in a video shot. In a test with 50 video shots, the segmentation method based on the two sets of conditions extracted the manually chosen moving and non-moving objects of interest with an accuracy of 84%.

  • PDF
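
The abstract above does not enumerate its four conditions, so the predicates and thresholds in the sketch below are placeholders; it only illustrates the general idea of filtering tracked object trajectories by location variation, size, and movement pattern.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Track:
    centers: List[Tuple[float, float]]  # per-frame centroid of the object
    areas: List[float]                  # per-frame area in pixels
    frame_area: float                   # area of a full frame in pixels

def is_moving_object_of_interest(t: Track) -> bool:
    """Illustrative filter: keep objects that stay visible long enough, are
    a reasonable size, and actually travel across the shot (placeholder values)."""
    if len(t.centers) < 10:                                          # visible long enough
        return False
    mean_area = sum(t.areas) / len(t.areas)
    if not (0.01 * t.frame_area < mean_area < 0.5 * t.frame_area):   # plausible size range
        return False
    (x0, y0), (x1, y1) = t.centers[0], t.centers[-1]
    displacement = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    if displacement < 0.05 * t.frame_area ** 0.5:                    # location actually varies
        return False
    size_change = max(t.areas) / max(min(t.areas), 1.0)
    return size_change < 5.0                                         # stable movement pattern
```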

Stereoscopic Conversion of Object-based MPEG-4 Video (객체 기반 MPEG-4 동영상의 입체 변환)

  • 박상훈; 김만배; 손현식
    • Proceedings of the IEEK Conference / 2003.07e / pp.2407-2410 / 2003
  • In this paper, we propose a new stereoscopic conversion methodology that converts two-dimensional (2-D) MPEG-4 video to stereoscopic video. In MPEG-4, each image is composed of a background object and a primary object. In the first step of the conversion methodology, the camera motion type is determined for stereo image generation. In the second step, object-based stereo image generation is carried out. The background object uses a current image and a delayed image for its stereo image generation. The primary object, on the other hand, uses a current image and a horizontally shifted version of it, to avoid the vertical parallax that could otherwise occur. Furthermore, an uncovered region filling algorithm (URFA) is applied to the uncovered regions that may appear after the stereo image generation of the primary object. In our experiments, we show an MPEG-4 test video and the stereoscopic video generated by the proposed methodology, and we analyze the results.

  • PDF
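
A minimal sketch of the per-frame view synthesis described above: the background pairs the current frame with a delayed frame, while the primary object is pasted back with a purely horizontal shift so no vertical parallax is introduced. The shift amount and hole handling are assumptions, and URFA itself is not implemented here.

```python
import numpy as np

def make_right_view(curr: np.ndarray, delayed: np.ndarray,
                    object_mask: np.ndarray, shift: int = 8) -> np.ndarray:
    """Compose the right-eye image: delayed frame for the background object,
    horizontally shifted current frame for the primary object."""
    right = delayed.copy()

    # Shift the primary object of the current frame horizontally by `shift` pixels.
    shifted_obj = np.zeros_like(curr)
    shifted_mask = np.zeros_like(object_mask)
    if shift > 0:
        shifted_obj[:, shift:] = curr[:, :-shift]
        shifted_mask[:, shift:] = object_mask[:, :-shift]
    else:
        shifted_obj, shifted_mask = curr, object_mask

    # Paste the shifted object over the background; uncovered regions would
    # normally be filled by URFA, which is omitted in this sketch.
    right[shifted_mask > 0] = shifted_obj[shifted_mask > 0]
    return right

# The left-eye view is simply the current frame: left = curr.
```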

A Novel Approach for Object Detection in Illuminated and Occluded Video Sequences Using Visual Information with Object Feature Estimation

  • Sharma, Kajal
    • IEIE Transactions on Smart Processing and Computing / v.4 no.2 / pp.110-114 / 2015
  • This paper reports a novel object-detection technique for video sequences. The proposed algorithm detects objects in illuminated and occluded videos by using object features and a neural network technique. It consists of two functional modules: region-based object feature extraction and continuous detection of objects in video sequences using region features. The scheme is proposed as an enhancement of Lowe's scale-invariant feature transform (SIFT) object detection method and addresses the high computation time of feature generation in SIFT. The improvement is achieved by region-based feature classification of the objects to be detected; an optimal neural-network-based feature reduction is presented to reduce the object region feature dataset, with winner pixel estimation between the frames of the video sequence. Simulation results show that the proposed scheme achieves better overall performance than other object detection techniques, and that region-based feature detection is faster than other recent techniques.
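
The paper's neural-network feature reduction is not described in detail above; the sketch below only shows the SIFT detection baseline it builds on, using OpenCV with Lowe's ratio test and assumed parameter values.

```python
import cv2

def detect_object(template_gray, frame_gray, min_matches=10):
    """Baseline SIFT detection: match template keypoints against a video frame
    and report whether enough good matches survive Lowe's ratio test."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(template_gray, None)
    kp2, des2 = sift.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return False, []

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    return len(good) >= min_matches, good
```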

Dynamic Video Object Data Model (DVID) (동적 비디오 객체 데이터 모델(DVID))

  • Song, Yong-Jun; Kim, Hyeong-Ju
    • Journal of KIISE: Software and Applications / v.26 no.9 / pp.1052-1060 / 1999
  • A lot of research has been done on modeling video databases, but all of the resulting models can be considered static video data models, in the sense that video data in those models is always presented in a predefined sequence unless the user intervenes. Video database applications that provide up-to-date video information services, such as news-on-demand, video-on-demand, digital libraries, and internet shopping, require frequent video editing, preferably in real time. This means the contents of existing video data must be changed or new video data must be created, but in traditional video data models such editing work has to be done manually. To reduce the effort of video editing, this paper proposes a dynamic video object data model named DVID, based on an object-oriented data model. DVID provides not only the static video object but also the dynamic video object, whose contents are determined dynamically from the video database in real time, even without user interaction.
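
To make the static/dynamic distinction concrete, here is a small illustrative sketch; the class names, the query mechanism, and the `video_db` object are invented for illustration and are not DVID's actual schema. A static video object plays a sequence fixed at authoring time, while a dynamic one resolves its content from the database at playback time.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class VideoSegment:
    source: str   # e.g. a clip identifier in the video database
    start: float  # start time in seconds
    end: float    # end time in seconds

class StaticVideoObject:
    """Plays a sequence of segments fixed at authoring time."""
    def __init__(self, segments: List[VideoSegment]):
        self.segments = segments

    def resolve(self) -> List[VideoSegment]:
        return self.segments

class DynamicVideoObject:
    """Resolves its segments from the database at playback time, so edited
    or newly inserted clips appear without manual re-authoring."""
    def __init__(self, query: Callable[[], List[VideoSegment]]):
        self.query = query

    def resolve(self) -> List[VideoSegment]:
        return self.query()

# Hypothetical usage: a news-on-demand object whose content is decided at play time.
# latest_news = DynamicVideoObject(lambda: video_db.search(topic="headline", limit=5))
```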

Design of Object-based Video CODEC for the Mobile Video Telephony Using Hybrid Transform (모바일 영상통화 환경에 적합한 하이브리드 변환을 이용한 객체 기반 비디오 코덱 설계)

  • Jeon, Sung-Hye; Seo, Yong-Su; Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.13 no.4 / pp.560-574 / 2010
  • Recently, many people have gained easy access to video telephony services through mobile terminals owing to the commercialization of 3G communication technology. However, the quality of the video telephony service is still not good because of practical mobile constraints. To address this quality problem, this paper presents the design of an object-based video CODEC for mobile video telephony using a hybrid transform. The proposed design first segments each frame into a significant object and an insignificant object, and improves the quality of the significant object by limiting the bit rate of the insignificant object. Thus, the significant object is compressed with high quality and a low compression ratio, and the insignificant object with low quality and a high compression ratio. Furthermore, the bit rate of the video stream is kept within the limited bandwidth by adjusting the compression ratio of each object. Experimental results confirm that the proposed method achieves higher quality in the significant region than conventional CODECs at the same bit rate.
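
A minimal sketch of the bit-allocation idea above: compress the significant object region at high quality and the rest at low quality. Per-region JPEG quality factors are used here as a stand-in for the paper's hybrid transform, and the mask-based split is an assumption.

```python
import cv2
import numpy as np

def encode_frame(frame: np.ndarray, object_mask: np.ndarray,
                 q_object: int = 90, q_background: int = 30):
    """Encode one frame as two JPEG streams: the significant object at high
    quality and the insignificant background at low quality."""
    obj = cv2.bitwise_and(frame, frame, mask=object_mask)
    bg = cv2.bitwise_and(frame, frame, mask=cv2.bitwise_not(object_mask))
    _, obj_bits = cv2.imencode(".jpg", obj, [cv2.IMWRITE_JPEG_QUALITY, q_object])
    _, bg_bits = cv2.imencode(".jpg", bg, [cv2.IMWRITE_JPEG_QUALITY, q_background])
    return obj_bits, bg_bits

def decode_frame(obj_bits, bg_bits, object_mask: np.ndarray) -> np.ndarray:
    """Reassemble the frame from the two streams using the object mask."""
    obj = cv2.imdecode(obj_bits, cv2.IMREAD_COLOR)
    bg = cv2.imdecode(bg_bits, cv2.IMREAD_COLOR)
    out = bg.copy()
    out[object_mask > 0] = obj[object_mask > 0]
    return out
```

Lowering the background quality factor (or raising the object quality factor) is the knob that trades background fidelity for object fidelity while keeping the total bit rate within the channel budget.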