• Title/Summary/Keyword: Scene Segmentation


Video Data Scene Segmentation Method Using Region Segmentation (영역분할을 사용한 동영상 데이터 장면 분할 기법)

  • Yeom, Seong-Ju; Kim, U-Saeng
    • The KIPS Transactions:PartB / v.8B no.5 / pp.493-500 / 2001
  • Video scene segmentation plays a fundamental role in content-based video analysis. In this paper, we propose a new region-based video scene segmentation method that applies a continuity test to each object region, where the regions are segmented by the watershed algorithm for every frame of the video. For this purpose, we first classify the video into dynamic and static sections according to the object movement rate, obtained by comparing the spatial and shape similarity of each region. We then segment the video into scenes by grouping neighboring sections according to their similarity. Because this method uses regions representing objects as the similarity measure, it can segment video scenes efficiently without false alarms caused by illumination or partial changes.
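The per-region continuity test described above could be sketched as follows. This is a minimal illustration that assumes the watershed label maps for two consecutive frames are already available, and uses region overlap (IoU) as a simple stand-in for the paper's spatial/shape similarity; the threshold value is illustrative, not from the paper.

```python
import numpy as np

def region_continuity(labels_a, labels_b, iou_thresh=0.5):
    """Fraction of regions in frame A that find a sufficiently
    overlapping region in frame B (a proxy continuity test)."""
    matched = 0
    ids_a = [i for i in np.unique(labels_a) if i != 0]
    for ra in ids_a:
        mask_a = labels_a == ra
        best = 0.0
        for rb in np.unique(labels_b[mask_a]):
            if rb == 0:
                continue
            mask_b = labels_b == rb
            iou = np.logical_and(mask_a, mask_b).sum() / np.logical_or(mask_a, mask_b).sum()
            best = max(best, iou)
        if best >= iou_thresh:
            matched += 1
    return matched / max(len(ids_a), 1)

# Two toy 6x6 label maps: region 1 stays put, region 2 moves away.
a = np.zeros((6, 6), int); a[0:2, 0:2] = 1; a[4:6, 0:2] = 2
b = np.zeros((6, 6), int); b[0:2, 0:2] = 1; b[4:6, 4:6] = 2
rate = region_continuity(a, b)
```

A low continuity rate over a section would mark it as dynamic, a high rate as static, in the spirit of the paper's movement-rate classification.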


Video Segmentation and Key frame Extraction using Multi-resolution Analysis and Statistical Characteristic

  • Cho, Wan-Hyun; Park, Soon-Young; Park, Jong-Hyun
    • Communications for Statistical Applications and Methods / v.10 no.2 / pp.457-469 / 2003
  • In this paper, we propose an efficient algorithm that segments video scene changes using statistical characteristics obtained by applying the wavelet transform to each frame. Our method first extracts histogram features from the low-frequency subband of the wavelet-transformed image and uses these features to detect abrupt scene changes. Second, it extracts edge information by applying a mesh method to the high-frequency subband of the transformed image. We quantify the extracted edge information as the variance at each pixel and use these values to detect gradual scene changes. We also propose an algorithm for extracting a proper key frame from each segmented scene. Experimental results show that the proposed method both segments video frames efficiently and serves as an appropriate key frame extraction method.
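The abrupt-change part of this pipeline can be sketched with a hand-rolled one-level Haar decomposition: take the low-frequency (LL) subband, histogram it, and flag a cut when consecutive histograms diverge. This is a simplified sketch under assumed parameters (16 bins, an illustrative L1-distance threshold), not the paper's exact feature set.

```python
import numpy as np

def haar_ll(img):
    """One-level 2D Haar transform: the LL subband is the 2x2 block average."""
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2]
    return (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def ll_hist(img, bins=16):
    """Normalized intensity histogram of the LL subband."""
    hist, _ = np.histogram(haar_ll(img), bins=bins, range=(0, 256))
    return hist / hist.sum()

def is_abrupt_cut(f1, f2, thresh=0.5):
    """Declare an abrupt scene change when the L1 histogram distance is large."""
    return np.abs(ll_hist(f1) - ll_hist(f2)).sum() > thresh

dark = np.full((8, 8), 10.0)
bright = np.full((8, 8), 200.0)
```

Comparing histograms of the LL subband rather than raw frames gives some robustness to noise and small motion, which is the motivation for working in the wavelet domain.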

A Segmentation Method for a Moving Object on A Static Complex Background Scene. (복잡한 배경에서 움직이는 물체의 영역분할에 관한 연구)

  • Park, Sang-Min; Kwon, Hui-Ung; Kim, Dong-Sung; Jeong, Kyu-Sik
    • The Transactions of the Korean Institute of Electrical Engineers A / v.48 no.3 / pp.321-329 / 1999
  • Moving object segmentation extracts a moving object of interest from consecutive image frames, and has been used for factory automation, autonomous navigation, video surveillance, and VOP (Video Object Plane) detection in MPEG-4. This paper proposes a new segmentation method in which difference images, calculated from three consecutive input frames, are used to compute both a coarse object area (AI) and its movement area (OI). The AI is extracted by removing the background using background area projection (BAP). Missing parts of the AI are recovered with the help of the OI: boundary information of the OI confines the missing parts of the object and provides initial curves for active contour optimization. The optimized contours, together with the AI, form the boundaries of the moving object. Experimental results for a fast-moving object on a complex background scene are included.
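The movement-area (OI) computation from three consecutive frames can be sketched as classic double differencing: a pixel belongs to the movement area only if it changes in both adjacent difference images. The threshold below is illustrative; the paper's BAP and active-contour stages are not reproduced here.

```python
import numpy as np

def movement_area(f1, f2, f3, thresh=20):
    """Movement area (OI): pixels that change in BOTH difference
    images computed from three consecutive frames."""
    d12 = np.abs(f2.astype(int) - f1.astype(int)) > thresh
    d23 = np.abs(f3.astype(int) - f2.astype(int)) > thresh
    return np.logical_and(d12, d23)

# A 1-pixel "object" sliding right across a flat background.
f1 = np.zeros((5, 5)); f1[2, 1] = 255
f2 = np.zeros((5, 5)); f2[2, 2] = 255
f3 = np.zeros((5, 5)); f3[2, 3] = 255
oi = movement_area(f1, f2, f3)
```

The AND of the two difference images isolates the object's position in the middle frame, suppressing the "ghost" regions that a single difference image leaves at the object's previous and next positions.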


Dynamic Scene Segmentation Algorithm Using a Cross Mask and Edge Information (Cross Mask와 에지 정보를 사용한 동영상 분할)

  • 강정숙; 박래홍; 이상욱
    • Journal of the Korean Institute of Telematics and Electronics / v.26 no.8 / pp.1247-1256 / 1989
  • In this paper, we propose a dynamic scene segmentation algorithm that uses a cross mask and edge information. This method, a combination of conventional feature-based and pixel-based approaches, uses edges as features and determines moving pixels by computing a similarity measure between two consecutive image frames over a cross mask centered on each edge pixel. With simple calculation, the proposed method works well for images containing a complex background or several moving objects. It also performs satisfactorily in the case of rotational motion.
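The edge-plus-cross-mask idea can be sketched as below: find edge pixels by gradient magnitude, then for each edge pixel compare the two frames over a plus-shaped neighborhood. The sum-of-absolute-differences measure and both thresholds are assumptions for illustration; the paper's actual similarity measure may differ.

```python
import numpy as np

CROSS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # plus-shaped mask

def edge_pixels(img, thresh=50):
    """Edge pixels by gradient-magnitude thresholding."""
    gy, gx = np.gradient(img.astype(float))
    return np.argwhere(np.hypot(gx, gy) > thresh)

def moving_edge_pixels(f1, f2, sad_thresh=100):
    """Classify an edge pixel of f1 as moving when the sum of absolute
    differences over its cross mask between the two frames is large."""
    h, w = f1.shape
    moving = []
    for y, x in edge_pixels(f1):
        sad = 0.0
        for dy, dx in CROSS:
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                sad += abs(float(f1[yy, xx]) - float(f2[yy, xx]))
        if sad > sad_thresh:
            moving.append((y, x))
    return moving

# A bright square that shifts one pixel to the right between frames.
f1 = np.zeros((6, 6)); f1[2:4, 2:4] = 255
f2 = np.zeros((6, 6)); f2[2:4, 3:5] = 255
```

Restricting the test to edge pixels keeps the computation cheap, which matches the paper's claim of simple calculation on complex backgrounds.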


Adaptive Scene Classification based on Semantic Concepts and Edge Detection (시멘틱개념과 에지탐지 기반의 적응형 이미지 분류기법)

  • Jamil, Nuraini; Ahmed, Shohel; Kim, Kang-Seok; Kang, Sang-Jil
    • Journal of Intelligence and Information Systems / v.15 no.2 / pp.1-13 / 2009
  • Scene classification and concept-based procedures have attracted great interest for image categorization in large databases. Knowing the category to which a scene belongs, we can filter out irrelevant images when searching for a specific scene category such as beach, mountain, forest, or field. In this paper, we propose an adaptive segmentation method for real-world natural scene classification based on semantic modeling, which refers to the classification of sub-regions into semantic concepts such as grass, water, and sky. Our adaptive segmentation method uses edge detection to split an image into sub-regions. The frequency of occurrence of these semantic concepts represents the image and classifies it into a scene category, with the k-Nearest Neighbor (k-NN) algorithm applied as the classifier. The empirical results demonstrate that the proposed adaptive segmentation method outperforms Vogel and Schiele's method in terms of accuracy.
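The concept-frequency representation and k-NN step can be sketched as follows. The concept list, the training examples, and k are all hypothetical; in the paper the concept labels per sub-region would come from the semantic modeling stage, not be given by hand.

```python
import numpy as np

CONCEPTS = ["grass", "water", "sky", "sand", "rock"]  # illustrative labels

def concept_histogram(region_concepts):
    """Frequency of occurrence of each semantic concept among sub-regions."""
    v = np.array([region_concepts.count(c) for c in CONCEPTS], float)
    return v / v.sum()

def knn_classify(query, examples, k=3):
    """examples: list of (histogram, scene_label); majority vote over k nearest."""
    ranked = sorted(examples, key=lambda e: np.linalg.norm(query - e[0]))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

train = [
    (concept_histogram(["water", "sand", "sky", "sky"]), "beach"),
    (concept_histogram(["water", "water", "sand", "sky"]), "beach"),
    (concept_histogram(["grass", "grass", "sky", "rock"]), "field"),
    (concept_histogram(["grass", "grass", "grass", "sky"]), "field"),
]
scene = knn_classify(concept_histogram(["water", "sand", "sand", "sky"]), train)
```

The histogram discards spatial layout and keeps only concept frequencies, which is exactly why it is cheap to compare with nearest-neighbor search.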


Bilayer Segmentation of Consistent Scene Images by Propagation of Multi-level Cues with Adaptive Confidence (다중 단계 신호의 적응적 전파를 통한 동일 장면 영상의 이원 영역화)

  • Lee, Soo-Chahn; Yun, Il-Dong; Lee, Sang-Uk
    • Journal of Broadcast Engineering / v.14 no.4 / pp.450-462 / 2009
  • So far, many methods for segmenting single images or video have been proposed, but few have dealt with multiple images with analogous content. These images, which we term consistent scene images, include concurrent images of a scene and gathered images of a similar foreground, and may be used collectively to describe a scene or as input for multi-view stereo. In this paper, we present a method to segment these images with minimal user input, specifically manual segmentation of one image, by iteratively propagating information via multi-level cues with adaptive confidence depending on the nature of the images. Propagated cues serve as the basis for computing multi-level potentials in an MRF framework, and segmentation is performed by energy minimization. Both cues and potentials are classified as low-, mid-, or high-level depending on whether they pertain to pixels, patches, or shapes. A major aspect of our approach is using mid-level cues to compute low- and mid-level potentials, and high-level cues to compute low-, mid-, and high-level potentials, thereby exploiting the inherent information. Through this process, the proposed method attempts to maximize the amount of both extracted and utilized information in order to maximize the consistency of the segmentation. We demonstrate the effectiveness of the proposed method on several sets of consistent scene images and provide a comparison with results based only on mid-level cues [1].
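The MRF energy-minimization step can be illustrated in miniature with iterated conditional modes (ICM) on a binary labeling with unary costs plus a Potts smoothness term. This is a generic stand-in: the paper's multi-level potentials and its actual optimizer are not reproduced, and the unary costs below are synthetic.

```python
import numpy as np

def icm_binary(unary, beta=1.0, iters=5):
    """Minimize E(L) = sum_p unary[p, L_p] + beta * sum_{p~q} [L_p != L_q]
    by iterated conditional modes (a simple local energy minimizer)."""
    labels = np.argmin(unary, axis=-1)
    h, w = labels.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                costs = unary[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        for l in (0, 1):
                            if labels[yy, xx] != l:
                                costs[l] += beta
                labels[y, x] = np.argmin(costs)
    return labels

unary = np.zeros((4, 4, 2))
unary[..., 1] = 1.0          # background is cheaper everywhere...
unary[1, 1] = [1.0, 0.0]     # ...except one noisy pixel preferring foreground
labels = icm_binary(unary)
```

The pairwise term outvotes the single inconsistent unary preference, so the isolated foreground pixel is smoothed away; in the paper, the unary potentials would instead come from the propagated low-, mid-, and high-level cues.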

Image-based fire area segmentation method by removing the smoke area from the fire scene videos (화재 현장 영상에서 연기 영역을 제외한 이미지 기반 불의 영역 검출 기법)

  • Kim, Seungnam; Choi, Myungjin; Kim, Sun-Jeong; Kim, Chang-Hun
    • Journal of the Korea Computer Graphics Society / v.28 no.4 / pp.23-30 / 2022
  • In this paper, we propose an algorithm that can accurately segment fire even when it is surrounded by smoke of a similar color. Existing fire area segmentation algorithms cannot separate fire from smoke in fire images. We successfully separate the fire from the smoke by applying a color compensation method and a fog removal method as preprocessing steps before applying the fire area segmentation algorithm. Experiments confirmed that the proposed approach segments fire more effectively than existing methods on fire scene videos covered with smoke. In addition, we propose a way to use the proposed fire segmentation algorithm for efficient fire detection in factories and homes.
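A minimal sketch of the preprocess-then-segment pipeline: gray-world scaling as one possible color compensation, followed by a simple color rule for fire pixels. Both the compensation choice and the thresholds are assumptions for illustration; the paper's fog removal stage is not reproduced here.

```python
import numpy as np

def gray_world(img):
    """Gray-world color compensation: scale each RGB channel so its
    mean matches the global mean (one simple compensation choice)."""
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / means), 0, 255)

def fire_mask(img):
    """Illustrative fire rule: red is high and channels order as R > G > B,
    which excludes achromatic (gray) smoke pixels."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (r > 180) & (r > g) & (g > b)

img = np.full((2, 2, 3), 150.0)   # gray, smoke-like pixels
img[0, 0] = [255.0, 120.0, 30.0]  # one fire-colored pixel
mask = fire_mask(gray_world(img))
```

Because smoke is roughly achromatic (R ≈ G ≈ B), the ordering constraint alone already rejects it, while the compensation step reduces the color cast that smoke imposes on the whole frame.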

Image Segmentation based on Statistics of Sequential Frame Imagery of a Static Scene (정지장면의 연속 프레임 영상 간 통계에 기반한 영상분할)

  • Seo, Su-Young; Ko, In-Chul
    • Spatial Information Research / v.18 no.3 / pp.73-83 / 2010
  • This study presents a method to segment an image using the statistics observed at each pixel location across sequential frame images. In the acquisition and analysis of spatial information, digital image processing plays an important role, and various image segmentation techniques have been presented to partition digital images into regions. Based on a previous analysis of the spectral characteristics of sequential frame images, we propose an image segmentation method that exploits the randomness occurring among a sequence of frames of the same scene. First, we compute the mean and standard deviation at each pixel and select reliable pixels, determined by their standard deviation, as seed points. To segment the image into individual regions, we perform region growing based on a t-test between reference and candidate sample sets. A comparative analysis was conducted to assess the performance of the proposed method against a previous method. The experimental results confirm that the proposed method, using a sequence of frame images, segments a scene better than a method using a single frame image.
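The t-test-driven region growing can be sketched as follows: each pixel contributes a temporal sample across the frame stack, and a neighbor joins the region when Welch's t statistic against the seed's sample is small. The critical value and the toy frame stack are illustrative assumptions.

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic between two samples."""
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return abs(a.mean() - b.mean()) / np.sqrt(va / len(a) + vb / len(b))

def grow_from_seed(frames, seed, t_crit=3.0):
    """Grow a region from a reliable (low-std) seed pixel: a neighbor joins
    when its temporal sample is statistically similar to the seed's."""
    n, h, w = frames.shape
    region = {seed}
    frontier = [seed]
    ref = frames[:, seed[0], seed[1]]
    while frontier:
        y, x = frontier.pop()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w and (yy, xx) not in region:
                if welch_t(ref, frames[:, yy, xx]) < t_crit:
                    region.add((yy, xx))
                    frontier.append((yy, xx))
    return region

# 20 frames of a 1x4 scene: two pixels near 10, two near 200, with
# a deterministic +/-1 fluctuation standing in for sensor noise.
sign = (-1.0) ** np.arange(20)
base = np.array([10.0, 10.0, 200.0, 200.0])
frames = (base[None, :] + sign[:, None]).reshape(20, 1, 4)
region = grow_from_seed(frames, (0, 0))
```

Using the temporal sample at each pixel, rather than a single frame's value, is what lets the t-test distinguish genuine region boundaries from per-frame noise.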

Semantic Segmentation of Drone Images Based on Combined Segmentation Network Using Multiple Open Datasets (개방형 다중 데이터셋을 활용한 Combined Segmentation Network 기반 드론 영상의 의미론적 분할)

  • Ahram Song
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.967-978 / 2023
  • This study proposed and validated a combined segmentation network (CSN) designed to train effectively on multiple drone image datasets and enhance the accuracy of semantic segmentation. CSN shares the entire encoding domain to accommodate the diversity of three drone datasets, while the decoding domains are trained independently. During training, the segmentation accuracy of CSN was lower than that of U-Net and the pyramid scene parsing network (PSPNet) on single datasets, because it considers the loss values for all datasets simultaneously. However, when applied to domestic autonomous drone images, CSN demonstrated the ability to classify pixels into appropriate classes without additional training, outperforming PSPNet. This research suggests that CSN can serve as a valuable tool for training effectively on diverse drone image datasets and improving object recognition accuracy in new regions.
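The shared-encoder/independent-decoder structure can be sketched abstractly as below, with the deep encoder and decoders reduced to single linear layers and the dataset names invented for illustration; the real CSN is a full segmentation network, not this toy.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_layer(n_in, n_out):
    """A random linear layer standing in for a trained sub-network."""
    return rng.normal(0, 0.1, (n_in, n_out))

class CombinedSegNet:
    """Sketch of the CSN idea: one shared encoder, one decoder per dataset.
    Every input passes through the same encoder weights, then through the
    decoder matching the input's source dataset (each with its own classes)."""
    def __init__(self, n_in, n_hidden, classes_per_dataset):
        self.encoder = make_layer(n_in, n_hidden)            # shared
        self.decoders = {name: make_layer(n_hidden, c)       # independent
                         for name, c in classes_per_dataset.items()}

    def forward(self, x, dataset):
        h = np.tanh(x @ self.encoder)
        return h @ self.decoders[dataset]                    # per-dataset logits

net = CombinedSegNet(8, 16, {"dataset_a": 5, "dataset_b": 7})
x = rng.normal(0, 1, (1, 8))
```

Because the encoder is shared, a gradient step on any dataset updates it, which is why the combined loss over all datasets can depress single-dataset accuracy during training while improving transfer to unseen regions.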

Detection of Video Scene Boundaries based on the Local and Global Context Information (지역 컨텍스트 및 전역 컨텍스트 정보를 이용한 비디오 장면 경계 검출)

  • 강행봉
    • Journal of KIISE:Computing Practices and Letters / v.8 no.6 / pp.778-786 / 2002
  • Scene boundary detection is important for understanding the semantic structure of video data. However, it is more difficult than shot change detection because it requires a good understanding of the semantics of the video. In this paper, we propose a new approach to scene segmentation that uses contextual information in video data, divided into two categories: local and global. The local contextual information refers to information about foreground regions, the background, and shot activity. The global contextual information refers to a video shot's environment or its relationship with other shots; the coherence, interaction, and tempo of video shots are computed as global contextual information. Using this contextual information, we detect scene boundaries in three consecutive steps: linking, verification, and adjusting. We evaluated the proposed approach on TV dramas and movies; the detection accuracy for correct scene boundaries is over 80%.
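The linking step can be sketched with a simple coherence rule: shots within a window link when their feature histograms are similar, and a scene boundary falls between shots where no link crosses. The histogram-intersection measure, window size, and threshold are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def shot_similarity(h1, h2):
    """Histogram intersection between two shot feature histograms."""
    return np.minimum(h1, h2).sum()

def scene_boundaries(shot_hists, window=3, thresh=0.5):
    """A candidate boundary before shot b is kept only when no pair of
    shots within the window on either side links across it."""
    n = len(shot_hists)
    boundaries = []
    for b in range(1, n):
        crossed = any(
            shot_similarity(shot_hists[i], shot_hists[j]) > thresh
            for i in range(max(0, b - window), b)
            for j in range(b, min(n, b + window))
        )
        if not crossed:
            boundaries.append(b)
    return boundaries

# Two toy "scenes": shots 0-2 share one color histogram, shots 3-5 another.
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
boundaries = scene_boundaries([A, A, A, B, B, B])
```

Linking across a window rather than only between adjacent shots is what keeps a brief cutaway inside a scene from being mistaken for a scene boundary.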