• Title/Summary/Keyword: Foreground detection


Codebook-Based Foreground-Background Segmentation with Background Model Updating (배경 모델 갱신을 통한 코드북 기반의 전배경 분할)

  • Jung, Jae-young
    • Journal of Digital Contents Society
    • /
    • v.17 no.5
    • /
    • pp.375-381
    • /
    • 2016
  • Recently, foreground-background segmentation using codebook models has been researched actively. A codebook is created for each pixel of the image; its codewords are vector-quantized representative values of the training samples at the same position in the input image sequence. Most codebook-based algorithms require a long training period. In this paper, the initial codebook model is generated simply by a median operation over several image frames. During the detection process, the initial codebook is updated to adapt to dynamic background changes based on the frequencies of the codewords matched to the input pixels. We implemented the proposed algorithm in Visual C++ with OpenCV 3.0 and tested it on public video sequences from PETS2009. The test sequences contain various scenarios, including quasi-periodic motion and objects loitering in a local area for a short time. The experimental results show that the proposed algorithm performs well compared to the GMM algorithm and the standard codebook algorithm.
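The per-pixel pipeline described above (median-initialized codeword, frequency-based updating during detection) can be sketched as follows. This is an illustrative sketch, not the authors' code: the match threshold and the running-mean update rule are assumptions.

```python
from statistics import median

class PixelCodebook:
    def __init__(self, training_values, threshold=20):
        # initial codeword: median over a few training frames at this pixel
        self.codewords = [{"value": median(training_values), "freq": 1}]
        self.threshold = threshold

    def classify_and_update(self, v):
        # find a matching (background) codeword within the threshold
        for cw in self.codewords:
            if abs(v - cw["value"]) <= self.threshold:
                cw["freq"] += 1
                # drift the codeword toward the input (simple running mean)
                cw["value"] += (v - cw["value"]) / cw["freq"]
                return "background"
        # no match: treat as foreground, remember it as a candidate codeword
        self.codewords.append({"value": v, "freq": 1})
        return "foreground"

cb = PixelCodebook([100, 102, 98, 101, 250])  # 250 is a transient outlier
print(cb.classify_and_update(99))    # close to the median codeword -> background
print(cb.classify_and_update(180))   # far from any codeword -> foreground
```

The median initialization is what lets the model skip a long training phase: one outlier frame cannot corrupt the initial codeword.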

Moving Shadow Detection using Deep Learning and Markov Random Field (딥 러닝과 마르코프 랜덤필드를 이용한 동영상 내 그림자 검출)

  • Lee, Jong Taek;Kang, Hyunwoo;Lim, Kil-Taek
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.12
    • /
    • pp.1432-1438
    • /
    • 2015
  • We present a methodology to detect moving shadows in video sequences, which has been considered a challenging and critical problem in visual surveillance systems since the 1980s. While most previous moving-shadow detection methods used hand-crafted features such as chromaticity, physical properties, geometry, or combinations thereof, our method automatically learns features to classify whether image segments are shadow or foreground using a deep learning architecture. Furthermore, applying a Markov Random Field enables our system to refine the shadow detection results and improve performance. Our algorithm is applied to five challenging moving-shadow detection datasets, and its performance is comparable to that of state-of-the-art approaches.
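The MRF refinement step above can be sketched with iterated conditional modes (ICM) on a 1-D strip of segments: each segment has a unary shadow/foreground cost (here standing in for the learned classifier's output) and a pairwise term that encourages neighbouring segments to share a label. All costs and the smoothness weight are illustrative assumptions.

```python
def icm(unary, beta=1.0, iters=10):
    # unary[i][l]: cost of giving segment i label l (0 = shadow, 1 = foreground)
    labels = [0 if u[0] < u[1] else 1 for u in unary]  # unary-only init
    for _ in range(iters):
        for i in range(len(labels)):
            costs = []
            for l in (0, 1):
                # pairwise cost: beta for each neighbour with a different label
                pair = sum(beta for j in (i - 1, i + 1)
                           if 0 <= j < len(labels) and labels[j] != l)
                costs.append(unary[i][l] + pair)
            labels[i] = costs.index(min(costs))
    return labels

# a noisy middle segment flips to agree with its confident neighbours
unary = [(0.1, 2.0), (0.2, 1.8), (1.1, 0.9), (0.1, 2.0), (0.3, 1.7)]
print(icm(unary))  # [0, 0, 0, 0, 0]
```

This shows why the MRF helps: a segment the classifier is unsure about inherits the label of its confidently classified neighbours.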

Removing Shadows Using Background Features in the Images of a Surveillance Camera (감시용 카메라 영상에서의 배경 특성을 사용한 그림자 제거)

  • Kim, Jeongdae;Do, Yongtae
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.3
    • /
    • pp.202-208
    • /
    • 2013
  • In image processing for VS (Video Surveillance), detecting moving entities in a monitored scene is an important step. A background subtraction technique has been widely employed to find the moving entities. However, the extracted foreground regions often include not only real entities but also their cast shadows, which can cause errors in subsequent image processing steps such as tracking, recognition, and analysis. In this paper, a novel technique is proposed to determine the shadow pixels of moving objects in the foreground image of a VS camera. Unlike existing techniques that apply the same decision criteria to all moving pixels, the proposed technique determines shadow pixels using local features, based on two facts: first, the amount of intensity drop due to a shadow depends on the intensity level of the background; second, the distribution pattern of pixel intensities is preserved even when a shadow is cast. The proposed method has been tested in various situations with different backgrounds and moving people wearing different colors.
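The first fact above admits a very small sketch: accept a foreground pixel as shadow only if its intensity drop is proportional to the local background level. The ratio bounds below are illustrative assumptions, not the paper's calibrated values.

```python
def is_shadow_pixel(fg_intensity, bg_intensity, lo=0.4, hi=0.9):
    # a cast shadow darkens a pixel but keeps it roughly proportional
    # to the background intensity at that position
    if bg_intensity == 0:
        return False
    ratio = fg_intensity / bg_intensity
    return lo <= ratio < hi

print(is_shadow_pixel(60, 100))   # moderate, proportional drop -> True
print(is_shadow_pixel(20, 100))   # too dark for a soft shadow -> False
print(is_shadow_pixel(100, 100))  # no drop at all -> False
```

Because the test uses the *local* background value, a dark background region tolerates a smaller absolute drop than a bright one, which is exactly the point of the first observation.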

Detection of Fingerprint Ridge Direction Based on the Run-Length and Chain Codes (런길이 및 체인코드를 이용한 지문 융선의 방향 검출)

  • Lee Jeong-Hwan;Park Se-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.8
    • /
    • pp.1740-1747
    • /
    • 2004
  • In this paper, we propose an effective method for detecting fingerprint ridge direction based on run-length and chain codes. First, a fingerprint image is normalized and thresholded to obtain a binary image with foreground and background regions. The foreground regions are composed of fingerprint ridges, which are encoded with run-length and chain codes. To detect directional information, the boundary of the ridge codes is traced and the curvature is calculated at each boundary point. The detected direction values are then smoothed locally with an appropriate window. The proposed method is applied to the NIST and FVC2002 fingerprint databases to evaluate its performance. The experimental results show that the proposed method can be used to obtain ridge direction values in fingerprint images.
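The chain-coding and curvature steps above can be sketched directly: encode each step along a traced ridge boundary as one of eight directions, then estimate curvature as the wrapped difference between successive codes. The boundary points below are illustrative.

```python
# standard 8-direction Freeman chain codes, keyed by (dx, dy)
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    # encode each step between consecutive boundary points
    return [DIRS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

def curvature(codes):
    # direction change between successive codes, wrapped to [-4, 4)
    return [((b - a + 4) % 8) - 4 for a, b in zip(codes, codes[1:])]

boundary = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 2), (5, 2)]  # gentle turn
codes = chain_code(boundary)
print(codes)            # [0, 0, 1, 1, 0]
print(curvature(codes)) # [0, 1, 0, -1]
```

Smoothing these curvature values with a local window, as the abstract describes, would suppress the single-step jitter inherent in integer chain codes.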

Low-light Image Enhancement Based on Frame Difference and Tone Mapping (프레임 차와 톤 매핑을 이용한 저조도 영상 향상)

  • Jeong, Yunju;Lee, Yeonghak;Shim, Jaechang;Jung, Soon Ki
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.9
    • /
    • pp.1044-1051
    • /
    • 2018
  • In this paper, we propose a new method to improve low-light images. To bring the quality of a night image containing a moving object close to that of a daytime image, the following tasks are performed. First, we reduce the noise of the input night image and improve it with a tone-mapping method. Second, we segment the input night image into a foreground with motion and a background without motion. Motion is detected using both the difference between the current and previous frames and the difference between the current frame and the night background image. The background region of the output takes pixels from the corresponding positions in the daytime image, while the foreground regions take pixels from the corresponding positions of the tone-mapped image. Experimental results show that the proposed method improves visual quality more clearly than existing methods.
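The two steps above can be illustrated on toy 1-D "frames": a motion mask from the two frame differences, then an output composed of daytime pixels (static background) and tone-mapped night pixels (moving foreground). The power-law tone map and the thresholds are assumptions for illustration, not the paper's exact operators.

```python
def tone_map(v, gamma=0.5):
    # normalized power law; gamma < 1 lifts dark values
    return round(255 * (v / 255) ** gamma)

def motion_mask(cur, prev, bg, thresh=15):
    # moving where the pixel differs from BOTH the previous frame
    # and the night background model
    return [abs(c - p) > thresh and abs(c - b) > thresh
            for c, p, b in zip(cur, prev, bg)]

night_prev = [10, 12, 11, 13]
night_cur  = [10, 90, 11, 13]        # pixel 1 now holds a moving object
night_bg   = [10, 12, 11, 13]
day_bg     = [200, 190, 180, 170]

mask = motion_mask(night_cur, night_prev, night_bg)
out = [tone_map(c) if m else d
       for c, d, m in zip(night_cur, day_bg, mask)]
print(mask)  # [False, True, False, False]
print(out)   # [200, 151, 180, 170]
```

The composite keeps the bright daytime background everywhere except the moving pixel, which is brightened by the tone map instead of being replaced.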

Stereoscopic Video Conversion Based on Image Motion Classification and Key-Motion Detection from a Two-Dimensional Image Sequence (영상 운동 분류와 키 운동 검출에 기반한 2차원 동영상의 입체 변환)

  • Lee, Kwan-Wook;Kim, Je-Dong;Kim, Man-Bae
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.10B
    • /
    • pp.1086-1092
    • /
    • 2009
  • Stereoscopic conversion has been an important and challenging issue for many 3-D video applications. There are usually two different stereoscopic conversion approaches: image motion-based conversion, which uses motion information, and object-based conversion, which partitions an image into moving or static foreground object(s) and background and then converts the foreground into a stereoscopic object. Moreover, since the input sequence is MPEG-1/2 compressed video, the motion data stored in the compressed bitstream are often unreliable, so image motion-based conversion may fail. To solve this problem, we utilize a key-motion, for which the estimated or extracted motion information is more accurate. To deal with diverse motion types, a transform space produced from motion vectors and color differences is introduced. A key-motion is determined from the transform space, and its associated stereoscopic image is generated. Experimental results validate the effectiveness and robustness of the proposed method.
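The key-motion idea above can be loosely sketched as picking, from candidate frames, the motion field whose vectors are most internally consistent; unreliable compressed-domain motion shows up as high vector spread. This is only an illustrative reliability score: the paper's transform space combines motion vectors with color differences, which is not reproduced here.

```python
def consistency(vectors):
    # lower spread around the mean vector = more reliable motion estimate
    n = len(vectors)
    mx = sum(v[0] for v in vectors) / n
    my = sum(v[1] for v in vectors) / n
    return -sum((vx - mx) ** 2 + (vy - my) ** 2 for vx, vy in vectors) / n

fields = {
    "frame_a": [(5, 0), (5, 1), (4, 0), (5, 0)],    # coherent pan
    "frame_b": [(5, 0), (-4, 3), (0, -6), (7, 2)],  # noisy MPEG vectors
}
key = max(fields, key=lambda f: consistency(fields[f]))
print(key)  # frame_a
```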

Data Augmentation for Tomato Detection and Pose Estimation (토마토 위치 및 자세 추정을 위한 데이터 증대기법)

  • Jang, Minho;Hwang, Youngbae
    • Journal of Broadcast Engineering
    • /
    • v.27 no.1
    • /
    • pp.44-55
    • /
    • 2022
  • To automatically provide information about fruit in agriculture-related broadcast content, instance segmentation of the target fruit is required; information on the 3-D pose of the fruit may also be meaningfully used. This paper presents research that provides information about tomatoes in video content. A large amount of data is required to learn instance segmentation, but sufficient training data are difficult to obtain, so training data are generated by a data augmentation technique based on a small number of real images. Compared to the result using only real images, learning from synthesized images created by separating foreground and background improves detection performance. Learning from augmented images created with conventional image pre-processing techniques yields even higher performance than the foreground/background-separated synthetic images. To estimate the pose from the object detection result, a point cloud is obtained using an RGB-D camera; cylinder fitting based on least-squares minimization is then performed, and the tomato pose is estimated from the axial direction of the cylinder. Various experiments show that the proposed method effectively performs detection, instance segmentation, and cylinder fitting of the target object.
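The pose step above can be sketched in a simplified form: estimate the cylinder axis as the principal direction of the point cloud (power iteration on its covariance), then take the mean point-to-axis distance as a least-squares radius. This is an illustrative stand-in for the paper's full least-squares cylinder fit, demonstrated on a synthetic cylinder.

```python
import math

def fit_axis(points, iters=50):
    n = len(points)
    c = [sum(p[i] for p in points) / n for i in range(3)]
    centered = [[p[i] - c[i] for i in range(3)] for p in points]
    v = [1.0, 1.0, 1.0]                 # power-iteration start vector
    for _ in range(iters):
        # multiply v by the covariance: sum over points of (p . v) p
        w = [0.0, 0.0, 0.0]
        for p in centered:
            d = sum(p[i] * v[i] for i in range(3))
            for i in range(3):
                w[i] += d * p[i]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return c, v

def radius(points, c, axis):
    # mean distance from each point to the axis line through the centroid
    dists = []
    for p in points:
        d = [p[i] - c[i] for i in range(3)]
        t = sum(d[i] * axis[i] for i in range(3))
        perp = [d[i] - t * axis[i] for i in range(3)]
        dists.append(sum(x * x for x in perp) ** 0.5)
    return sum(dists) / len(dists)

# synthetic cylinder of radius 2 with its axis along z
pts = [(2 * math.cos(a), 2 * math.sin(a), z)
       for z in (0.0, 2.0, 4.0, 6.0) for a in (0.0, 1.5, 3.0, 4.5)]
c, axis = fit_axis(pts)
print([round(abs(x), 2) for x in axis])  # ~[0.0, 0.0, 1.0]: axis along z
print(round(radius(pts, c, axis), 1))    # ~2.0
```

The recovered axis direction is what the paper uses as the tomato's pose; a full least-squares fit would additionally refine the axis and radius jointly.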

Intrusion Detection Algorithm based on Motion Information in Video Sequence (비디오 시퀀스에서 움직임 정보를 이용한 침입탐지 알고리즘)

  • Kim, Alla;Kim, Yoon-Ho
    • Journal of Advanced Navigation Technology
    • /
    • v.14 no.2
    • /
    • pp.284-288
    • /
    • 2010
  • Video surveillance is widely used in building societal security networks. In this paper, intrusion detection based on visual information acquired by a static camera is proposed. The proposed approach uses a background model constructed with an approximated median filter (AMF) to find foreground candidates, and the detected object is obtained by analyzing motion information. Motion detection is determined by the relative size of the 2-D object in RGB space, and the threshold value for detecting an object is determined heuristically. Experimental results showed that intrusion detection performs well even when the spatio-temporal candidate information changes abruptly.
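The AMF background model above admits a very small sketch: the background estimate steps one grey level toward each new frame, converging to the temporal median, and foreground candidates are pixels far from that model. The single-level step is the standard AMF rule; the detection threshold is an illustrative assumption.

```python
def amf_update(bg, frame):
    # approximated median filter: nudge each background pixel toward the frame
    return [b + 1 if f > b else b - 1 if f < b else b
            for b, f in zip(bg, frame)]

def foreground_candidates(bg, frame, thresh=20):
    return [abs(f - b) > thresh for f, b in zip(bg, frame)]

bg = [100, 100, 100]
for frame in ([102, 98, 100], [102, 98, 100]):
    bg = amf_update(bg, frame)                    # bg drifts to the scene
print(bg)                                          # [102, 98, 100]
print(foreground_candidates(bg, [102, 98, 200]))   # intruder at pixel 2
```

Because the background moves only one level per frame, a fast-moving intruder cannot be absorbed into the model before it is detected.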

Development of a Fall Detection System Using Fish-eye Lens Camera (어안 렌즈 카메라 영상을 이용한 기절동작 인식)

  • So, In-Mi;Han, Dae-Kyung;Kang, Sun-Kyung;Kim, Young-Un;Jong, Sung-tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.4
    • /
    • pp.97-103
    • /
    • 2008
  • This study presents a method for recognizing fainting motions from fish-eye lens images in order to sense emergency situations. A camera with a fish-eye lens mounted at the center of the living-room ceiling sends images, from which foreground pixels are extracted by an adaptive background modeling method based on a Gaussian mixture model; the outer points of the foreground pixel area are then traced and mapped to an ellipse. During the ellipse tracing, the fish-eye lens images are converted to perspective images, and changes in size and location as well as moving-speed information are extracted to judge whether the movement, pause, and motion are similar to a fainting motion. The results show that, compared to using the fish-eye lens images directly, extracting the size and location changes and moving speed from the converted perspective images gives good recognition rates.
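The decision logic above can be loosely sketched by tracking the ellipse centre per frame and flagging a fainting-like event when a fast movement is followed by a sustained pause. The speed thresholds and pause length are illustrative assumptions, not the paper's tuned values.

```python
def detect_faint(centres, fast=30, still=2, pause_frames=3):
    # per-frame speed of the ellipse centre
    speeds = [((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
              for (x0, y0), (x1, y1) in zip(centres, centres[1:])]
    for i, s in enumerate(speeds):
        if s >= fast:  # sudden fast movement (fall/collapse)...
            after = speeds[i + 1:i + 1 + pause_frames]
            # ...followed by several near-motionless frames
            if len(after) == pause_frames and all(a <= still for a in after):
                return True
    return False

walk  = [(0, 0), (5, 0), (10, 0), (15, 0), (20, 0), (25, 0)]
faint = [(0, 0), (5, 0), (45, 0), (45, 0), (46, 0), (45, 0)]
print(detect_faint(walk))   # steady walking -> False
print(detect_faint(faint))  # sudden move then stillness -> True
```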


Visual Attention Detection By Adaptive Non-Local Filter

  • Anh, Dao Nam
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.5 no.1
    • /
    • pp.49-54
    • /
    • 2016
  • Considering global and local factors of a set of features from a given single image or multiple images is a common approach in image processing. This paper introduces an application of an adaptive version of the non-local filter, whose original version searches for non-local similarity to remove noise. Since most images contain texture patterns in both foreground and background, extracting salient regions with texture is a challenging task. Aiming at detecting visual attention regions in textured images, we present a contrast analysis of image patches located anywhere in the image, rather than only nearby, with the assistance of the adaptive filter for estimating non-local divergence. The method allows extraction of salient textured regions in wildlife images. Experimental results on a benchmark demonstrate the ability of the proposed method to deal with this challenge.
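The non-local contrast idea above can be illustrated on a toy 1-D signal: each patch is compared against patches located far away, and a patch that diverges from its non-local partners is marked salient. The patch size, the "non-nearby" distance, and the signal are illustrative assumptions.

```python
def nonlocal_saliency(signal, patch=3, min_dist=4):
    patches = [signal[i:i + patch] for i in range(len(signal) - patch + 1)]
    scores = []
    for i, p in enumerate(patches):
        # compare only against patches that are NOT nearby
        far = [q for j, q in enumerate(patches) if abs(i - j) >= min_dist]
        # mean squared divergence from the non-local patches
        scores.append(sum(sum((a - b) ** 2 for a, b in zip(p, q))
                          for q in far) / len(far))
    return scores

# repetitive texture with one distinct (salient) burst in the middle
sig = [1, 2, 1, 2, 1, 9, 9, 9, 1, 2, 1, 2, 1]
scores = nonlocal_saliency(sig)
peak = scores.index(max(scores))
print(peak)  # 5: the patch covering the 9-9-9 burst
```

A repeated texture scores low because far-away patches look like it; only the region with no non-local partner stands out, which is the behaviour the paper exploits for textured foregrounds.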