• Title/Summary/Keyword: Object region

Motion Estimation Method by Using Depth Camera (깊이 카메라를 이용한 움직임 추정 방법)

  • Kwon, Soon-Kak;Kim, Seong-Woo
    • Journal of Broadcast Engineering
    • /
    • v.17 no.4
    • /
    • pp.676-683
    • /
    • 2012
  • Motion estimation in video coding greatly affects implementation complexity. In this paper, a method for reducing the complexity of motion estimation is proposed that uses both depth and color cameras. We obtain object information for the video sequence from the distance information measured by the depth camera, then perform labeling to group pixels at similar distances into the same object. Three search regions (background, inside-object, boundary) are determined adaptively for each motion estimation block within the current and reference pictures. If a current block belongs to the inside-object region, motion is searched within the inside-object region of the reference picture; likewise, if a current block belongs to the background region, motion is searched within the background region of the reference picture. Simulation results show that, compared to the full search method, the proposed method keeps the motion-estimated difference signal almost unchanged while significantly reducing the search complexity.
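
Below is a minimal Python/NumPy sketch of such a region-restricted block search, assuming per-block labels (background, inside-object, boundary) derived from the depth map are already available; the function names, block size, and search range are illustrative, not values from the paper.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum()

def region_restricted_search(cur, ref, cur_labels, ref_labels, bx, by, bs=16, rng=16):
    """Search the reference picture for the best match of the current block,
    but only at positions whose depth-derived label matches the current
    block's label (background, inside-object, or boundary)."""
    label = cur_labels[by // bs, bx // bs]          # per-block label grid
    cur_blk = cur[by:by + bs, bx:bx + bs]
    best_cost, best_mv = None, (0, 0)
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + bs > ref.shape[1] or y + bs > ref.shape[0]:
                continue
            # skip candidate positions that fall in a differently labeled region
            if ref_labels[y // bs, x // bs] != label:
                continue
            cost = sad(cur_blk, ref[y:y + bs, x:x + bs])
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost
```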

A study on automatic extraction of a moving object using optical flow (Optical flow 이론을 이용한 움직이는 객체의 자동 추출에 관한 연구)

  • 정철곤;김경수;김중규
    • Proceedings of the IEEK Conference
    • /
    • 2000.06d
    • /
    • pp.50-53
    • /
    • 2000
  • In this work, a new algorithm that automatically extracts a moving object from a video sequence is presented. To extract the moving object, velocity vectors are estimated for each frame of the video. Using the estimated velocity vectors, the position of the object is determined; the coordinates of that position initialize a seed, and the moving object is automatically segmented in the image plane by a region growing method. Results on sequential images show that the method successfully extracts the moving object.
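
A rough Python sketch of this pipeline is given below, using OpenCV's Farneback dense optical flow for the velocity vectors and a flood fill as a simple stand-in for the region growing step; the threshold values are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_moving_object(prev_gray, cur_gray, flow_thresh=2.0):
    """Estimate dense velocity vectors, seed at the highest-motion pixel,
    and grow a region around the seed on the current frame."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    if mag.max() < flow_thresh:
        return None  # no significant motion in this frame pair
    seed_y, seed_x = np.unravel_index(np.argmax(mag), mag.shape)
    # flood fill grows the region around the seed by intensity similarity;
    # nonzero mask entries mark the grown object region
    mask = np.zeros((cur_gray.shape[0] + 2, cur_gray.shape[1] + 2), np.uint8)
    cv2.floodFill(cur_gray.copy(), mask, (int(seed_x), int(seed_y)), 255,
                  loDiff=10, upDiff=10, flags=cv2.FLOODFILL_MASK_ONLY)
    return mask[1:-1, 1:-1]
```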

2D-3D Pose Estimation using Multi-view Object Co-segmentation (다시점 객체 공분할을 이용한 2D-3D 물체 자세 추정)

  • Kim, Seong-heum;Bok, Yunsu;Kweon, In So
    • The Journal of Korea Robotics Society
    • /
    • v.12 no.1
    • /
    • pp.33-41
    • /
    • 2017
  • We present a region-based approach for accurate pose estimation of small mechanical components. Our algorithm consists of two key phases: Multi-view object co-segmentation and pose estimation. In the first phase, we explain an automatic method to extract binary masks of a target object captured from multiple viewpoints. For initialization, we assume the target object is bounded by the convex volume of interest defined by a few user inputs. The co-segmented target object shares the same geometric representation in space, and has distinctive color models from those of the backgrounds. In the second phase, we retrieve a 3D model instance with correct upright orientation, and estimate a relative pose of the object observed from images. Our energy function, combining region and boundary terms for the proposed measures, maximizes the overlapping regions and boundaries between the multi-view co-segmentations and projected masks of the reference model. Based on high-quality co-segmentations consistent across all different viewpoints, our final results are accurate model indices and pose parameters of the extracted object. We demonstrate the effectiveness of the proposed method using various examples.
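
As a simplified illustration of the second phase, the sketch below scores candidate (model, pose) pairs by the overlap between the multi-view co-segmentation masks and projected model masks; it covers only the region term of the energy, and `render_mask_fn` is a hypothetical renderer, not part of the paper.

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def score_pose(coseg_masks, render_mask_fn, model_id, pose):
    """Region term only: sum of overlaps between the co-segmentations and the
    projected masks of the candidate model in every view.
    `render_mask_fn(model_id, pose, view_idx)` is assumed to return a binary
    mask of the model rendered into that view."""
    return sum(mask_iou(m, render_mask_fn(model_id, pose, i))
               for i, m in enumerate(coseg_masks))

def best_model_and_pose(coseg_masks, render_mask_fn, model_ids, candidate_poses):
    """Exhaustive search over model indices and sampled pose hypotheses."""
    return max(((mid, p) for mid in model_ids for p in candidate_poses),
               key=lambda mp: score_pose(coseg_masks, render_mask_fn, *mp))
```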

IR Image Segmentation using GrabCut (GrabCut을 이용한 IR 영상 분할)

  • Lee, Hee-Yul;Lee, Eun-Young;Gu, Eun-Hye;Choi, Il;Choi, Byung-Jae;Ryu, Gang-Soo;Park, Kil-Houm
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.2
    • /
    • pp.260-267
    • /
    • 2011
  • This paper proposes a method for segmenting objects from the background in IR (infrared) images based on the GrabCut algorithm. The GrabCut algorithm needs a window encompassing the known object of interest, and this window is normally supplied by the user. To apply the algorithm to object recognition in image sequences, however, the location of the window must be determined automatically. For this, we adopt Otsu's algorithm to coarsely segment the unknown object of interest in an image, and then locate the window automatically by blob analysis. The GrabCut algorithm needs the probability distributions of both the candidate object region and the background region closely surrounding the object in order to estimate Gaussian mixture models (GMMs) of the object and the background. The probability distribution of the background is computed from a background window that contains the same number of pixels as the candidate object region. Experiments on various IR images show that the proposed method is well suited to segmenting the object of interest in IR image sequences. To evaluate the performance of the proposed segmentation method, we compare it with other segmentation methods.
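
A compact OpenCV sketch of this automatic-window pipeline is shown below: Otsu thresholding, blob analysis for the largest connected component, then GrabCut initialized with the resulting rectangle. Parameter choices such as the iteration count are assumptions.

```python
import cv2
import numpy as np

def segment_ir_object(ir_gray):
    """Locate the GrabCut window automatically, then refine the segmentation.
    Returns a binary mask of the object, or None if no blob is found."""
    # coarse object mask via Otsu's threshold
    _, coarse = cv2.threshold(ir_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(coarse)
    if num < 2:
        return None
    # blob analysis: the largest non-background component gives the window
    idx = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    x, y, w, h = stats[idx, :4]
    rect = (int(x), int(y), int(w), int(h))
    bgr = cv2.cvtColor(ir_gray, cv2.COLOR_GRAY2BGR)  # GrabCut expects 3 channels
    mask = np.zeros(ir_gray.shape, np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    255, 0).astype(np.uint8)
```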

Positive Random Forest based Robust Object Tracking (Positive Random Forest 기반의 강건한 객체 추적)

  • Cho, Yunsub;Jeong, Soowoong;Lee, Sangkeun
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.6
    • /
    • pp.107-116
    • /
    • 2015
  • With the growth of digital devices, the proliferation of high-performance computers, and the availability of high-quality, inexpensive video cameras, the demand for automated video analysis is increasing, especially in intelligent monitoring systems, video compression, and robot vision; this is why object tracking has come into the spotlight in computer vision. Tracking is the process of locating a moving object over time using a camera. Handling the object's scale, rotation, and shape deformation is the most important issue in robust object tracking. In this paper, we propose a robust object tracking scheme using a Random Forest. Specifically, an object detection scheme based on region covariance and ZNCC (zero-mean normalized cross-correlation) is adopted to estimate an accurate object location. Next, the detected region is divided into five regions for random-forest-based learning, and the five regions are verified by the random forest. The verified regions are put into the model pool. Finally, the input model is updated to correct the object location when the region does not contain the object. The experiments show that the proposed method locates the object more accurately than existing methods.
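
For reference, the sketch below shows the two matching measures mentioned above, ZNCC and a region covariance descriptor, in plain NumPy; the feature set used for the covariance (position, intensity, gradient magnitudes) is a common choice and is assumed here rather than taken from the paper.

```python
import numpy as np

def zncc(patch, template):
    """Zero-mean normalized cross-correlation of two equally sized patches;
    1.0 means a perfect match, values near 0 mean no correlation."""
    a = patch.astype(np.float64) - patch.mean()
    b = template.astype(np.float64) - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def region_covariance(gray, x, y, w, h):
    """Region covariance descriptor of a window: covariance of the per-pixel
    feature vectors [x, y, intensity, |dI/dx|, |dI/dy|]."""
    patch = gray[y:y + h, x:x + w].astype(np.float64)
    gy, gx = np.gradient(patch)
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats)  # 5x5 covariance matrix
```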

Video Data Scene Segmentation Method Using Region Segmentation (영역분할을 사용한 동영상 데이터 장면 분할 기법)

  • Yeom, Seong-Ju;Kim, U-Saeng
    • The KIPS Transactions:PartB
    • /
    • v.8B no.5
    • /
    • pp.493-500
    • /
    • 2001
  • Video scene segmentation plays a fundamental role in content-based video analysis. In this paper, we propose a new region-based video scene segmentation method that tests the continuity of each object region segmented by the watershed algorithm over all frames of the video. For this purpose, we first classify video segments into dynamic and static sections according to the object movement rate, by comparing the spatial and shape similarity of each region; we then form scene segments by grouping neighboring sections according to their similarity. Because this method uses object-level regions as the similarity measure, it can segment video scenes efficiently without false alarms caused by illumination and partial changes.
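
The sketch below illustrates the per-frame watershed segmentation and a simple region-overlap continuity measure between consecutive frames, using a standard OpenCV marker-based watershed recipe; the thresholds and morphology settings are illustrative.

```python
import cv2
import numpy as np

def watershed_regions(frame_bgr):
    """Segment a frame into labeled regions with the watershed transform."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    sure_bg = cv2.dilate(binary, kernel, iterations=3)
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)
    unknown = cv2.subtract(sure_bg, sure_fg)
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1            # reserve 0 for the unknown band
    markers[unknown == 255] = 0
    return cv2.watershed(frame_bgr, markers)  # -1 marks region boundaries

def region_overlap(markers_a, markers_b, label):
    """Spatial overlap of one region between consecutive frames, usable as a
    simple continuity / similarity measure."""
    a, b = markers_a == label, markers_b == label
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0
```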

Anatomical Labeling System of Human Brain Imaging (뇌영상의 해부학적 레이블링 시스템)

  • Kim, Tae-Woo;Paik, Chul-Hwa
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1995 no.11
    • /
    • pp.171-172
    • /
    • 1995
  • In this paper, an anatomical labeling system for assisting localization of regions of interest in human brain imaging is presented. The Atlas serves as the model image for labeling anatomical names on another image. The object image to be labeled, such as CT, MR, or PET, is registered onto the Atlas. Then, clicking a mouse button on the object image displays the anatomical name of the region of interest in a window, and the anatomically named part of that region is labeled and drawn on the object image.
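
As a toy illustration of the lookup step, the sketch below maps a clicked pixel of a registered object image into Atlas coordinates and returns the anatomical name stored there; `registered_to_atlas`, `atlas_labels`, and `label_names` are hypothetical stand-ins for the system's registration result and label tables.

```python
import numpy as np

def label_at_click(atlas_labels, label_names, registered_to_atlas, x, y):
    """Given an object image already registered onto the Atlas, map the
    clicked pixel through the registration transform and return the
    anatomical name stored in the Atlas label map at that location."""
    ax, ay = registered_to_atlas(x, y)               # object-image -> Atlas coords
    region_id = atlas_labels[int(round(ay)), int(round(ax))]
    return label_names.get(int(region_id), "unlabeled")
```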

Simple Denoising Method for Novel Speckle-shifting Ghost Imaging with Connected-region Labeling

  • Yuan, Sheng;Liu, Xuemei;Bing, Pibin
    • Current Optics and Photonics
    • /
    • v.3 no.3
    • /
    • pp.220-226
    • /
    • 2019
  • A novel speckle-shifting ghost imaging (SSGI) technique is proposed in this paper. This method can effectively extract the edge of an unknown object without first obtaining a clear ghost image of it. However, owing to the imaging mechanism of SSGI, the imaging result generally contains serious noise. To solve this problem, we further propose a simple and effective method to remove noise from the speckle-shifting ghost image with a connected-region labeling (CRL) algorithm. In this method, two ghost images of an object are first generated according to SSGI; a threshold and the CRL are then applied in turn to remove noise from the imaging results. The method can retrieve a high-quality image of an object with fewer measurements. Numerical simulations are carried out to verify its feasibility and effectiveness.
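
A minimal sketch of the threshold-plus-CRL denoising idea is given below, using scipy.ndimage for the connected-region labeling; the threshold and minimum component size are illustrative parameters, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def crl_denoise(ghost_image, threshold, min_size=5):
    """Threshold the reconstructed ghost image, label connected regions, and
    keep only components above a minimum size; small isolated components are
    treated as noise and suppressed."""
    binary = ghost_image > threshold
    labels, num = ndimage.label(binary)
    sizes = ndimage.sum(binary, labels, index=range(1, num + 1))
    keep = np.zeros_like(binary)
    for i, size in enumerate(sizes, start=1):
        if size >= min_size:
            keep |= labels == i
    return np.where(keep, ghost_image, 0.0)
```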

Fuzzy-based gaseous object segmentation on image plane (Fuzzy를 이용한 영상에서의 기체분리)

  • Kim, Won-Ha;Park, Min-Sik
    • Proceedings of the KIEE Conference
    • /
    • 2001.11c
    • /
    • pp.169-171
    • /
    • 2001
  • Unlike rigid objects, the edge intensity of a gaseous object varies along the object boundary (the edge intensities of some pixels on a gaseous object boundary are weaker than those of small rigid objects or even of the noise itself). Therefore, conventional edge detectors may not adequately detect the boundary edge pixels of gaseous objects. In this paper, a new methodology for segmenting gaseous object images is introduced. The proposed method consists of a fuzzy-based boundary detector applicable to gaseous as well as rigid objects, and concave-region filling to recover the object regions.
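
The sketch below shows one way such a fuzzy boundary degree and the subsequent region filling could look: a gradient-magnitude ramp membership followed by hole filling, with the ramp endpoints chosen arbitrarily rather than taken from the paper.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def fuzzy_boundary_membership(gray, low=5.0, high=40.0):
    """Fuzzy boundary degree in [0, 1] from the gradient magnitude, so weak
    gaseous-object edges keep a nonzero membership instead of being cut off
    by a hard threshold. The ramp endpoints `low`/`high` are illustrative."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    return np.clip((mag - low) / (high - low), 0.0, 1.0)

def fill_object_region(membership, alpha=0.5):
    """Concave-region filling: binarize the membership map and fill holes so
    the interior of the detected boundary becomes the object region."""
    return binary_fill_holes(membership >= alpha)
```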

Automatic Object Segmentation and Background Composition for Interactive Video Communications over Mobile Phones

  • Kim, Daehee;Oh, Jahwan;Jeon, Jieun;Lee, Junghyun
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.1 no.3
    • /
    • pp.125-132
    • /
    • 2012
  • This paper proposes an automatic object segmentation and background composition method for video communication over consumer mobile phones. The object regions were extracted based on the motion and color variance of the first two frames. To combine the motion and variance information, the Euclidean distance between each motion-boundary pixel and the neighboring color-variance edge pixels was calculated, and the nearest edge pixel was labeled as the object boundary. The labeling results were refined using morphology for a more accurate and natural-looking boundary. The grow-cut segmentation algorithm begins from the expanded label map, where the inner and outer boundaries belong to the foreground and background, respectively. The segmented object region and a new background image stored a priori on the mobile phone were then composited. In the background composition process, the background motion was measured using optical flow, and the final result was synthesized by accurately locating the object region according to the motion information. This study can be considered an extended, improved version of an existing background composition algorithm that additionally considers motion information in a video. The proposed segmentation algorithm reduces the computational complexity significantly by choosing the minimum resolution at each segmentation step. The experimental results showed that the proposed algorithm can generate a fast, accurate, and natural-looking background composition.
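
The label-transfer step (snapping each motion-boundary pixel to the nearest color-variance edge pixel by Euclidean distance) could be sketched as follows; the use of a k-d tree here is an implementation convenience, not something stated in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def label_object_boundary(motion_boundary, variance_edges):
    """For each motion-boundary pixel, find the nearest color-variance edge
    pixel (Euclidean distance) and mark it as object boundary.
    Both inputs are binary masks of the same shape."""
    edge_pts = np.argwhere(variance_edges)
    motion_pts = np.argwhere(motion_boundary)
    boundary = np.zeros_like(variance_edges, dtype=bool)
    if len(edge_pts) == 0 or len(motion_pts) == 0:
        return boundary
    tree = cKDTree(edge_pts)
    _, idx = tree.query(motion_pts)          # nearest edge pixel per motion pixel
    boundary[tuple(edge_pts[idx].T)] = True
    return boundary
```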
