• Title/Summary/Keyword: Scene Segmentation

147 search results

Motion-Based Background Subtraction without Geometric Computation in Dynamic Scenes

  • Kawamoto, Kazuhiko;Imiya, Atsushi;Hirota, Kaoru
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.559-562
    • /
    • 2003
  • A motion-based background subtraction method without geometric computation is proposed, assuming that the camera moves parallel to the ground plane with uniform velocity. The proposed method subtracts the background region from a given image by evaluating the difference between calculated and model flows. This approach is insensitive to small errors in the calculated optical flows. Furthermore, to tackle significant errors, a strategy for incorporating a set of optical flows calculated over different frame intervals is presented. An experiment with two real image sequences, in which a static box or a moving toy car appears, is conducted to evaluate the performance in terms of accuracy under varying thresholds using a receiver operating characteristic (ROC) curve. The ROC curves show that, in the best case, figure-ground segmentation is achieved at a 17.8% false positive fraction (FPF) and 71.3% true positive fraction (TPF) for the static-object scene, and at 14.8% FPF and 72.4% TPF for the moving-object scene, even when the calculated optical flows contain significant calculation errors.

  • PDF
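The flow-difference test and ROC scoring described in this abstract can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the function names, flow-field shapes, and the threshold `tau` are all hypothetical.

```python
import numpy as np

def flow_background_mask(flow, model_flow, tau):
    """Label a pixel as background where the computed optical flow agrees
    with the model flow (induced by uniform camera translation) within tau.
    flow and model_flow are HxWx2 arrays of (vy, vx) vectors."""
    diff = np.linalg.norm(flow - model_flow, axis=-1)
    return diff < tau  # True = background

def roc_point(pred_fg, true_fg):
    """One (FPF, TPF) point on the ROC curve for a foreground mask."""
    tp = np.sum(pred_fg & true_fg)
    fp = np.sum(pred_fg & ~true_fg)
    fpf = fp / max(np.sum(~true_fg), 1)
    tpf = tp / max(np.sum(true_fg), 1)
    return fpf, tpf
```

Sweeping `tau` and plotting the resulting `(fpf, tpf)` pairs traces the ROC curve used in the evaluation.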

Fast Scene Understanding in Urban Environments for an Autonomous Vehicle equipped with 2D Laser Scanners (무인 자동차의 2차원 레이저 거리 센서를 이용한 도시 환경에서의 빠른 주변 환경 인식 방법)

  • Ahn, Seung-Uk;Choe, Yun-Geun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.7 no.2
    • /
    • pp.92-100
    • /
    • 2012
  • A map of a complex environment can be generated using a robot carrying sensors. However, representing environments directly from the integration of sensor data conveys only spatial existence. To execute high-level applications, robots need semantic knowledge of their environments. This research investigates the design of a system for recognizing objects in 3D point clouds of urban environments. The proposed system is decomposed into five steps: sequential LIDAR scanning, point classification, ground detection and elimination, segmentation, and object classification. This method can classify various objects in urban environments, such as cars, trees, buildings, posts, etc. Simple methods that minimize time-consuming processing are developed to guarantee real-time performance and to classify data on-the-fly as it is being acquired. To evaluate the performance of the proposed methods, computation time and recognition rate are analyzed. Experimental results demonstrate that the proposed algorithm is efficient at quickly understanding the semantic structure of a dynamic urban environment.
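The ground detection and elimination step of the five-step pipeline above can be sketched in its crudest form: treat the lowest height in the scan as ground level and drop nearby points. Real urban scans need sloped-ground or grid-wise estimates; `z_thresh` and the function name are assumptions for illustration.

```python
import numpy as np

def eliminate_ground(points, z_thresh=0.2):
    """Drop every point within z_thresh metres of the lowest point,
    treating that height as the (flat) ground plane.
    points is an Nx3 array of (x, y, z) coordinates."""
    ground_z = points[:, 2].min()
    keep = points[:, 2] - ground_z > z_thresh
    return points[keep]
```

Removing the ground first is what makes the subsequent segmentation step cheap: the remaining points split into disconnected clusters, one per object.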

An Improved Cast Shadow Removal in Object Detection (객체검출에서의 개선된 투영 그림자 제거)

  • Nguyen, Thanh Binh;Chung, Sun-Tae;Kim, Yu-Sung;Kim, Jae-Min
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2009.05a
    • /
    • pp.889-894
    • /
    • 2009
  • With the rapid development of computer vision, visual surveillance has evolved greatly, with more and more complicated processing. However, many problems remain to be resolved for robust and reliable visual surveillance, and the cast shadow occurring in the motion detection process is one of them. Shadow pixels are often misclassified as object pixels, causing errors in the localization, segmentation, tracking, and classification of objects. This paper proposes a novel cast shadow removal method. As opposed to previous conventional methods, which consider pixel properties such as intensity, color distortion, the HSV color system, etc., the proposed method utilizes observations about edge patterns in the shadow region of the current frame and the corresponding region of the background scene, and applies a Laplacian edge detector to the blob regions in the current frame and the background scene. The product of the two detector outputs then determines whether the blob pixels in the foreground mask come from object regions or shadow regions. The proposed method is simple but turns out to be practically very effective for Gaussian mixture model background subtraction, which is verified through experiments.

  • PDF
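The edge-product idea in this abstract can be sketched as below: a cast shadow darkens the background but largely preserves its edge pattern, so the product of the Laplacian responses of the frame blob and the background blob stays positive where both share edges. The threshold `edge_ratio` and the helper names are assumptions, not the authors' parameters.

```python
import numpy as np

# 3x3 Laplacian kernel
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def laplacian_edges(img):
    """3x3 Laplacian response over the interior (borders left zero)."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.sum(LAPLACIAN * img[i - 1:i + 2, j - 1:j + 2])
    return out

def is_shadow_blob(frame_blob, bg_blob, edge_ratio=0.5):
    """Classify a foreground blob as shadow when most background edges
    reappear with the same sign in the current frame."""
    lap_f = laplacian_edges(frame_blob)
    lap_b = laplacian_edges(bg_blob)
    edges = np.sum(np.abs(lap_b) > 1e-9)
    if edges == 0:
        return False
    agree = np.sum(lap_f * lap_b > 0)
    return agree / edges >= edge_ratio
```

An opaque object covering the same blob would erase or replace the background edges, driving the agreement ratio toward zero.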

Human Assisted Fitting and Matching Primitive Objects to Sparse Point Clouds for Rapid Workspace Modeling in Construction Automation (-건설현장에서의 시공 자동화를 위한 Laser Sensor기반의 Workspace Modeling 방법에 관한 연구-)

  • Kwon, Soon-Wook
    • Korean Journal of Construction Engineering and Management
    • /
    • v.5 no.5 s.21
    • /
    • pp.151-162
    • /
    • 2004
  • Current methods for construction site modeling employ large, expensive laser range scanners that produce dense range point clouds of a scene from different perspectives. Days of skilled interpretation and automatic segmentation may be required to convert the clouds to a finished CAD model. The dynamic nature of the construction environment requires that a real-time local area modeling system be capable of handling a rapidly changing and uncertain work environment. In practice, however, large, simple, and reasonably accurate embodying volumes are adequate feedback to an operator who, for instance, is attempting to place materials in the midst of obstacles with an occluded view. Such volumes also keep real-time obstacle avoidance and automated equipment control computationally tractable. In this research, a human operator's ability to quickly evaluate and associate objects in a scene is exploited. The operator directs a laser range finder mounted on a pan-and-tilt unit to collect range points on objects throughout the workspace. These groups of points form sparse range point clouds, which are then used to create geometric primitives for visualization and modeling purposes. Experimental results indicate that these models can be created rapidly and with sufficient accuracy for automated obstacle avoidance and equipment control functions.
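The simplest embodying volume one could fit to a sparse operator-selected point cluster is an axis-aligned bounding box. The sketch below is a toy stand-in for the primitive fitting described above; the function name and the (center, extents) representation are assumptions.

```python
import numpy as np

def box_primitive(points):
    """Fit an axis-aligned bounding box to a sparse Nx3 point cloud;
    returns (center, extents), enough for obstacle-avoidance checks."""
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    return (lo + hi) / 2.0, hi - lo
```

Even a handful of range points per object suffices here, which is what makes the sparse-cloud workflow fast compared with dense scanning.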

Efficient 3D Geometric Structure Inference and Modeling for Tensor Voting based Region Segmentation (효과적인 3차원 기하학적 구조 추정 및 모델링을 위한 텐서 보팅 기반 영역 분할)

  • Kim, Sang-Kyoon;Park, Soon-Young;Park, Jong-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.3
    • /
    • pp.10-17
    • /
    • 2012
  • Image-based 3D scenes can now be found in many popular vision systems, computer games, and virtual reality tours. In this paper, we propose a method for creating 3D virtual scenes from a 2D image that is completely automatic and requires only a single scene as input. The proposed method is similar to the creation of a pop-up illustration in a children's book. In particular, to estimate geometric structure information for a 3D scene from a single outdoor image, we apply tensor voting to image segmentation. Tensor voting exploits the fact that the pixels of a homogeneous image region usually lie close together on a smooth region, so the tokens corresponding to the centers of these regions have high saliency values. Our algorithm then labels regions of the input image into coarse categories: "ground", "sky", and "vertical". These labels are used to "cut and fold" the image into a pop-up model using a set of simple assumptions. The experimental results show that our method successfully segments coarse regions in many complex natural scene images and can create a 3D pop-up model that reflects the structure of the segmented regions.

Multi-stage Image Restoration for High Resolution Panchromatic Imagery (고해상도 범색 영상을 위한 다중 단계 영상 복원)

  • Lee, Sanghoon
    • Korean Journal of Remote Sensing
    • /
    • v.32 no.6
    • /
    • pp.551-566
    • /
    • 2016
  • In satellite remote sensing, the operational environment of the satellite sensor causes image degradation during acquisition. The degradation produces noise and blurring, which hinder the identification and extraction of useful information from the image data. The degradation is especially harmful for analyzing images collected over scenes with complicated surface structure, such as urban areas. This study proposes a multi-stage image restoration method to improve the accuracy of detailed analysis for images collected over complicated scenes. The proposed method assumes Gaussian additive noise, a Markov random field for spatial continuity, and blurring proportional to the distance between pixels. Point-Jacobian Iteration Maximum A Posteriori (PJI-MAP) estimation is employed to restore a degraded image. The multi-stage process includes image segmentation that performs region merging after pixel linking. A dissimilarity coefficient combining homogeneity and contrast is proposed for the image segmentation. The proposed method was quantitatively evaluated using simulation data and was also applied to two super-high-resolution panchromatic images: Dubaisat-2 data of 1 m resolution from LA, USA and KOMPSAT3 data of 0.7 m resolution from Daejeon in the Korean peninsula. The experimental results imply that it can improve analytical accuracy in the application of high-resolution panchromatic remote sensing imagery.

Video Segmentation and Video Browsing using the Edge and Color Distribution (윤곽선과 컬러 분포를 이용한 비디오 분할과 비디오 브라우징)

  • Heo, Seoung;Kim, Woo-Saeng
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.9
    • /
    • pp.2197-2207
    • /
    • 1997
  • In this paper, we propose a video data segmentation method using the edge and color distribution of video frames, and develop a video browser based on the proposed algorithm. To segment a video, we use a 644-bin HSV color histogram and edge information generated with an automatic threshold method. We account for each scene's characteristics by using the positions and color distributions of objects in each frame. For video browsing, we develop a hierarchical browser and a shot-based browser. We also show, by testing with various video data, that our proposed method is less sensitive to lighting effects and more robust to motion effects than previous approaches such as histogram-based methods.

  • PDF
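The histogram side of this shot-segmentation idea can be sketched as below: quantize each frame into an HSV histogram and declare a cut where the distance between consecutive histograms exceeds a threshold. The paper's exact 644-bin layout is not specified here, so a 16x4x4 = 256-bin grid, L1 distance, and the threshold `tau` are all stand-in assumptions.

```python
import numpy as np

def hsv_histogram(frame_hsv, bins=(16, 4, 4)):
    """Normalised, quantised HSV histogram of an HxWx3 frame with
    channel values in [0, 1]."""
    h, s, v = frame_hsv[..., 0], frame_hsv[..., 1], frame_hsv[..., 2]
    hist, _ = np.histogramdd(
        np.stack([h.ravel(), s.ravel(), v.ravel()], axis=1),
        bins=bins, range=[(0, 1), (0, 1), (0, 1)])
    return hist.ravel() / hist.sum()

def shot_boundaries(frames_hsv, tau=0.5):
    """Indices where the L1 histogram distance between consecutive
    frames exceeds tau (candidate shot cuts)."""
    hists = [hsv_histogram(f) for f in frames_hsv]
    return [i for i in range(1, len(hists))
            if np.abs(hists[i] - hists[i - 1]).sum() > tau]
```

Combining this with an edge-change measure, as the paper does, is what reduces the sensitivity to lighting changes that pure histogram differencing suffers from.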

Foreground Segmentation and High-Resolution Depth Map Generation Using a Time-of-Flight Depth Camera (깊이 카메라를 이용한 객체 분리 및 고해상도 깊이 맵 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37C no.9
    • /
    • pp.751-756
    • /
    • 2012
  • In this paper, we propose a foreground extraction and depth map generation method using a time-of-flight (TOF) depth camera. Although the TOF depth camera captures the scene's depth information in real time, the data contain inherent noise and distortion. Therefore, we perform several preprocessing steps, such as image enhancement, segmentation, and 3D warping, and then use the TOF depth data to identify depth-discontinuity regions. We then extract the foreground object and generate a depth map at the resolution of the color image. The experimental results show that the proposed method efficiently generates the depth map even for object boundaries and textureless regions.
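The depth-discontinuity detection mentioned in this abstract can be sketched as a simple neighbour-difference test on the depth map; the jump threshold `tau` and the function name are assumptions, and real TOF data would first need the denoising described above.

```python
import numpy as np

def depth_discontinuities(depth, tau=0.3):
    """Mark pixels lying on a depth discontinuity: a pixel qualifies
    when the depth jump to its right or lower neighbour exceeds tau
    (in the depth map's units, e.g. metres)."""
    mask = np.zeros(depth.shape, dtype=bool)
    mask[:, :-1] |= np.abs(np.diff(depth, axis=1)) > tau
    mask[:-1, :] |= np.abs(np.diff(depth, axis=0)) > tau
    return mask
```

Such a mask separates the foreground object from the background along depth edges, which is exactly where color-only segmentation tends to fail.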

Optical Flow Measurement Based on Boolean Edge Detection and Hough Transform

  • Chang, Min-Hyuk;Kim, Il-Jung;Park, Jong an
    • International Journal of Control, Automation, and Systems
    • /
    • v.1 no.1
    • /
    • pp.119-126
    • /
    • 2003
  • The problem of tracking moving objects in a video stream is discussed in this paper. We discuss the popular technique of optical flow for moving object detection. Optical flow finds the velocity vectors at each pixel in the entire video scene. However, optical flow based methods require complex computations and are sensitive to noise. In this paper, we propose a new method based on the Hough transform and voting accumulation to improve accuracy and reduce computation time. Further, we apply a Boolean-based edge detector for edge detection. Edge detection and segmentation are used to extract the moving objects in the image sequences and to reduce the computation time of the combinatorial Hough transform (CHT). The Boolean-based edge detector provides accurate and very thin edges. The difference of two edge maps with thin edges gives better localization of moving objects. The simulation results show that the proposed method improves the accuracy of the computed optical flow vectors and extracts moving object information more accurately. Edge detection and segmentation accurately find the locations and areas of the real moving objects, so extracting motion information is easy and accurate. The CHT with voting accumulation measures optical flow vectors accurately, and the direction of moving objects is also measured accurately.
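The voting-accumulation idea can be illustrated on thin edge maps: accumulate one vote per edge pixel for each candidate displacement, and take the displacement with the most votes. This is a crude analogue for illustration only, not the combinatorial Hough transform itself; the function name and `max_shift` are assumptions.

```python
import numpy as np

def dominant_shift(edges_t0, edges_t1, max_shift=3):
    """Return the integer (dy, dx) displacement that aligns the most
    edge pixels between two boolean edge maps, found by exhaustive
    voting over a small search window."""
    pts1 = set(map(tuple, np.argwhere(edges_t1)))
    best, best_votes = (0, 0), -1
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            votes = sum((y + dy, x + dx) in pts1
                        for y, x in np.argwhere(edges_t0))
            if votes > best_votes:
                best, best_votes = (dy, dx), votes
    return best
```

Running the vote only on segmented edge pixels, rather than all pixels, is what gives the computation-time saving the abstract describes.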

Accurate Parked Vehicle Detection using GMM-based 3D Vehicle Model in Complex Urban Environments (가우시안 혼합모델 기반 3차원 차량 모델을 이용한 복잡한 도시환경에서의 정확한 주차 차량 검출 방법)

  • Cho, Younggun;Roh, Hyun Chul;Chung, Myung Jin
    • The Journal of Korea Robotics Society
    • /
    • v.10 no.1
    • /
    • pp.33-41
    • /
    • 2015
  • Recent developments in robotics and the intelligent vehicle area have brought interest in autonomous driving and advanced driver assistance systems. Fully automatic parking in particular is one of the key issues for intelligent vehicles, and accurate detection of parked vehicles is essential for it. In previous research, many types of sensors have been used for detecting vehicles; 2D LiDAR is popular since it offers accurate range information without preprocessing. The L-shape feature is the most popular 2D feature for vehicle detection; however, it is ambiguous on other objects such as buildings and bushes, which causes misdetection. We therefore propose an accurate vehicle detection method that uses a complete 3D vehicle model in 3D point clouds acquired from a front-inclined 2D LiDAR. The proposed method is decomposed into two steps: vehicle candidate extraction and vehicle detection. By combining the L-shape feature and point cloud segmentation, we extract the objects that are highly likely to be vehicles and apply the 3D model to detect vehicles accurately. The method guarantees high detection performance and provides plentiful information for autonomous parking. To evaluate the method, we use various parking situations in complex urban scene data. Experimental results demonstrate the qualitative and quantitative performance of the method.
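The L-shape feature used for candidate extraction can be illustrated with a toy check: in an ordered 2D scan of a vehicle corner, find the point whose arms to the first and last scan points are closest to perpendicular. This is a hypothetical simplification for illustration, not the paper's extractor; the function name and the cosine criterion are assumptions.

```python
import numpy as np

def l_shape_corner(points):
    """Given an ordered Nx2 array of scan points, return the index of
    the best corner candidate and |cos(angle)| between its two arms
    (0 means perfectly perpendicular, i.e. a clean L shape)."""
    a, b = points[0], points[-1]
    best_i, best_cos = 1, np.inf
    for i in range(1, len(points) - 1):
        u = a - points[i]
        v = b - points[i]
        c = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
        if c < best_cos:
            best_i, best_cos = i, c
    return best_i, best_cos
```

A low `|cos(angle)|` alone cannot distinguish a car from a building corner, which is the ambiguity the paper's 3D vehicle model is introduced to resolve.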