• Title/Summary/Keyword: Moving region detection


De-interlacing and Block Code Generation For Outsole Model Recognition In Moving Picture (동영상에서 신발 밑창 모델 인식을 위한 인터레이스 제거 및 블록 코드 생성 기법)

  • Kim Cheol-Ki
    • Journal of Intelligence and Information Systems / v.12 no.1 / pp.33-41 / 2006
  • This paper presents a method that automatically recognizes the model type of products flowing along a conveyor belt. When an NTSC-based camera captures moving objects, interlacing artifacts appear in the images. Such interlaced images cannot be processed directly, so suitable post-processing is required. The proposed method first removes the interlacing artifacts with a de-interlacing step and then extracts the rectangular object region by thresholding. The rectangular region is separated into several blocks through edge detection; the number of pixels in each block is counted, the blocks are re-classified using their average, and the product is classified into a model type. Experiments show that the proposed method achieves a high classification ratio.
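
A minimal Python sketch of the pipeline the abstract describes: a line-averaging de-interlace, thresholding to an object bounding box, and per-block foreground counts re-classified against their average. The filter, the threshold value, and the fixed 4x4 grid (the paper splits blocks via edge detection) are illustrative assumptions, as are the function names.

```python
import numpy as np

def deinterlace_line_average(frame: np.ndarray) -> np.ndarray:
    """Replace each odd field line with the average of its even neighbours
    (a simple intra-frame de-interlace; the paper's filter is not specified)."""
    out = frame.astype(np.float32).copy()
    out[1:-1:2] = 0.5 * (out[0:-2:2] + out[2::2])
    return out.astype(frame.dtype)

def block_code(gray: np.ndarray, thresh: int = 128, grid=(4, 4)) -> np.ndarray:
    """Threshold the image, crop the object's bounding box, split it into a
    grid of blocks and count foreground pixels per block (illustrative only)."""
    mask = gray > thresh
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return np.zeros(grid, dtype=int)
    box = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    rows = np.array_split(box, grid[0], axis=0)
    counts = np.array([[blk.sum() for blk in np.array_split(r, grid[1], axis=1)]
                       for r in rows])
    # Re-classify each block against the average count, as the abstract describes.
    return (counts > counts.mean()).astype(int)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
    print(block_code(deinterlace_line_average(frame)))
```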


A New Covert Visual Attention System by Object-based Spatiotemporal Cues and Their Dynamic Fusioned Saliency Map (객체기반의 시공간 단서와 이들의 동적결합 된돌출맵에 의한 상향식 인공시각주의 시스템)

  • Cheoi, Kyungjoo
    • Journal of Korea Multimedia Society / v.18 no.4 / pp.460-472 / 2015
  • Most previous visual attention systems find attention regions from a saliency map that combines multiple extracted features; these systems differ mainly in how the features are extracted and combined. This paper presents a new system that improves the extraction of color and motion features and the weighting of spatial and temporal features. The system dynamically extracts the opponent color with the strongest response, and it detects moving objects rather than moving pixels. To combine the spatial and temporal features, the weights are set dynamically according to each feature's relative activity. Comparative results show that the proposed feature extraction and integration method improves the detection rate of attention regions.
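
The dynamic weighting can be illustrated with a small sketch: the spatial and temporal saliency maps are fused with weights proportional to each map's activity. Using the map mean as the activity measure is an assumption made for illustration; the paper's own measure and normalisation are not reproduced here.

```python
import numpy as np

def fuse_saliency(spatial: np.ndarray, temporal: np.ndarray) -> np.ndarray:
    """Fuse a spatial and a temporal saliency map with weights chosen from each
    map's relative activity (here: the map mean, an illustrative assumption)."""
    a_s, a_t = float(spatial.mean()), float(temporal.mean())
    total = a_s + a_t + 1e-9
    w_s, w_t = a_s / total, a_t / total          # more active map gets more weight
    fused = w_s * spatial + w_t * temporal
    rng = fused.max() - fused.min()
    return (fused - fused.min()) / (rng + 1e-9)  # normalise to [0, 1]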

A Fuzzy Logic System for Detection and Recognition of Human in the Automatic Surveillance System (유전자 알고리즘과 퍼지규칙을 기반으로한 지능형 자동감시 시스템의 개발)

  • 장석윤;박민식;이영주;박민용
    • Proceedings of the IEEK Conference / 2001.06c / pp.237-240 / 2001
  • An image processing and decision-making method for an automatic surveillance system is proposed. The aim of the system is to detect a moving object and decide whether it is human. Various object features are used, such as the width-to-length ratio of the moving object, the dispersion of distances between the principal axis and the object contour, the eigenvectors, the symmetric axes, and the area of the segmented region. These features are not unique, decisive characteristics of a human, and, because of the nature of outdoor images, the feature information is unavoidably vague and inaccurate. To make an efficient decision from this information, a fuzzy rule base system is used as an approximate reasoning method. The fuzzy rules, which combine the various object features, describe the conditions for making an intelligent decision. The fuzzy rule base system is initially constructed heuristically and then trained and tested with input/output data. Experimental results are presented, demonstrating the validity of the system.
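
A toy illustration of approximate reasoning with a fuzzy rule: triangular membership functions and one rule combining two of the listed features with a min (AND) operator. The membership ranges, the chosen features, and the names `tri` and `human_likelihood` are invented for the example; they are not the paper's trained rule base.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def human_likelihood(aspect_ratio: float, contour_dispersion: float) -> float:
    """Evaluate the rule 'IF the blob is tall AND its contour dispersion is low
    THEN human', using min as the AND operator (ranges are placeholders)."""
    tall = tri(aspect_ratio, 1.5, 3.0, 5.0)           # blob height / width
    low_disp = tri(contour_dispersion, 0.0, 0.1, 0.4) # normalised dispersion
    return min(tall, low_disp)

print(human_likelihood(2.8, 0.12))   # degree of membership in 'human'
```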


Fusion of Background Subtraction and Clustering Techniques for Shadow Suppression in Video Sequences

  • Chowdhury, Anuva;Shin, Jung-Pil;Chong, Ui-Pil
    • Journal of the Institute of Convergence Signal Processing / v.14 no.4 / pp.231-234 / 2013
  • This paper introduces a combination of a background subtraction technique and the K-Means clustering algorithm for removing shadows from video sequences. Lighting conditions cause problems for segmentation; the proposed method can successfully remove artifacts associated with lighting changes, such as highlights and reflections, as well as the cast shadows of moving objects. The K-Means clustering algorithm is applied to the foreground, which is initially segmented by the background subtraction technique. The estimated shadow region is then merged back into the background to eliminate the effects that cause redundancy in object detection. Simulation results show that the proposed approach removes shadows and reflections from moving objects with an accuracy of more than 95% in every case considered.
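
A rough sketch of the fusion idea, assuming OpenCV: a background-subtraction foreground mask is clustered by pixel intensity with K-Means and the darkest cluster is treated as shadow. The "darkest cluster = shadow" rule, k=2, and the MOG2 subtractor in the usage note are assumptions standing in for the paper's exact formulation.

```python
import cv2
import numpy as np

def suppress_shadows(frame_bgr, fg_mask, k=2):
    """Cluster foreground pixel intensities with K-Means and clear the darkest
    cluster from the mask (a simplification of the described fusion)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    ys, xs = np.nonzero(fg_mask)
    if ys.size < k:
        return fg_mask
    samples = gray[ys, xs].astype(np.float32).reshape(-1, 1)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    shadow = labels.ravel() == int(np.argmin(centers))   # darkest cluster
    cleaned = fg_mask.copy()
    cleaned[ys[shadow], xs[shadow]] = 0
    return cleaned

# Typical use with a background-subtraction mask (illustrative):
# subtractor = cv2.createBackgroundSubtractorMOG2()
# fg = subtractor.apply(frame)
# fg = suppress_shadows(frame, fg > 0)
```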

Active Object Tracking using Image Mosaic Background

  • Jung, Young-Kee;Woo, Dong-Min
    • Journal of information and communication convergence engineering / v.2 no.1 / pp.52-57 / 2004
  • In this paper, we propose a panorama-based object tracking scheme for wide-view surveillance systems that can detect and track moving objects with a pan-tilt camera. A dynamic mosaic of the background is progressively integrated into a single image using the camera motion information. For camera motion estimation, affine motion parameters are calculated for each frame with respect to its previous frame. The camera motion is estimated robustly from the background by discriminating between background and foreground regions, and a modified block-based motion estimation is used to separate the background region. Each moving object is then segmented by subtracting the mosaic background from the current image. The proposed tracking system demonstrated good performance on several test video sequences.
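
A condensed sketch of the motion-compensated differencing underlying the mosaic approach, assuming OpenCV: camera motion between consecutive frames is estimated as an affine transform from tracked corners, the previous frame is warped to the current view, and the residual difference marks moving-object candidates. The feature counts, the RANSAC-based `estimateAffine2D` call, and the fixed threshold are illustrative choices, not the paper's modified block-based estimator or full mosaic.

```python
import cv2
import numpy as np

def detect_moving(prev_gray, curr_gray, diff_thresh=30):
    """Affine camera-motion compensation between two frames followed by
    differencing; surviving regions are moving-object candidates."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=8)
    if pts_prev is None:                       # untextured frame: nothing to track
        return np.zeros_like(curr_gray)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good = status.ravel() == 1
    A, _ = cv2.estimateAffine2D(pts_prev[good], pts_curr[good], method=cv2.RANSAC)
    warped = cv2.warpAffine(prev_gray, A, (curr_gray.shape[1], curr_gray.shape[0]))
    diff = cv2.absdiff(curr_gray, warped)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```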

Visibility detection approach to road scene foggy images

  • Guo, Fan;Peng, Hui;Tang, Jin;Zou, Beiji;Tang, Chenggong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.9 / pp.4419-4441 / 2016
  • A major cause of vehicle accidents is reduced visibility due to bad weather conditions such as fog, so an onboard vision system should take visibility detection into account. In this paper, we propose a simple and effective approach for measuring the visibility distance using a single camera placed onboard a moving vehicle. The proposed algorithm is controlled by only a few parameters and mainly comprises camera parameter estimation, region of interest (ROI) estimation, and visibility computation. Thanks to the ROI extraction, the position of the inflection point can be measured in practice; combined with the estimated camera parameters, the visibility distance of the input foggy image can then be computed with a single camera, requiring only that road and sky be present in the scene. To assess the accuracy of the proposed approach, a reference-target-based visibility detection method is also introduced. The comparative study and quantitative evaluation show that the proposed method obtains good visibility detection results at relatively fast speed.
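
A small sketch of one step only: locating the inflection point of the vertical luminance profile inside the ROI. Converting that row to a visibility distance additionally requires the calibrated camera parameters, which are omitted here; the smoothing kernel and the sign-change test are assumptions made for illustration.

```python
import numpy as np

def inflection_row(gray_roi: np.ndarray) -> int:
    """Return the image row where the smoothed vertical luminance profile of the
    road/sky ROI changes curvature (its inflection point), or -1 if none found."""
    profile = gray_roi.mean(axis=1)                      # mean intensity per row
    kernel = np.ones(9) / 9.0
    smooth = np.convolve(profile, kernel, mode="same")   # simple box smoothing
    d2 = np.gradient(np.gradient(smooth))                # second derivative
    flips = np.where(np.diff(np.sign(d2)) != 0)[0]       # curvature sign changes
    return int(flips[0]) if flips.size else -1
```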

Multiple Pedestrians Detection and Tracking using Color Information from a Moving Camera (이동 카메라 영상에서 컬러 정보를 이용한 다수 보행자 검출 및 추적)

  • Lim, Jong-Seok;Kim, Wook-Hyun
    • The KIPS Transactions:PartB / v.11B no.3 / pp.317-326 / 2004
  • This paper presents a new method for detecting multiple pedestrians and tracking a specific pedestrian using color information from a moving camera. Motion vectors are first extracted from the input image using block matching (BMA). Next, a difference image is calculated on the basis of the motion vectors and converted to a binary image. Unnecessary noise in the binary image is removed with the proposed noise deletion method, and pedestrians are then detected with a projection algorithm. If pedestrians are very close to each other, they are separated using RGB color information, and a specific pedestrian is tracked using the RGB color information of its center region. Experimental results on our test sequences demonstrate the efficiency of the approach, with a detection success ratio of 97%, a detection failure ratio of 3%, and excellent tracking.
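
The projection step can be sketched as follows: foreground pixels of the binary motion mask are projected onto the horizontal axis and candidate pedestrian regions are cut at empty columns. The `min_width` parameter and the column-only projection are simplifications of the paper's projection algorithm.

```python
import numpy as np

def pedestrian_columns(binary_mask: np.ndarray, min_width: int = 5):
    """Split a binary motion mask into candidate regions by vertical projection,
    returning (x_start, x_end) column ranges of contiguous foreground."""
    col_hist = binary_mask.sum(axis=0)        # foreground count per column
    occupied = col_hist > 0
    regions, start = [], None
    for x, occ in enumerate(occupied):
        if occ and start is None:
            start = x
        elif not occ and start is not None:
            if x - start >= min_width:
                regions.append((start, x))
            start = None
    if start is not None and binary_mask.shape[1] - start >= min_width:
        regions.append((start, binary_mask.shape[1]))
    return regions
```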

Envelope Generation for Freeform Objects (자유 곡면체의 엔벨롭 생성)

  • 송수창;김재정
    • Korean Journal of Computational Design and Engineering / v.6 no.2 / pp.89-100 / 2001
  • A swept volume is the region swept by a moving object. It is used in various applications such as interference detection in assembly design, visualization of manipulator motions in robotics, and simulation of the volume removed by a cutter in NC machining. The shape of a swept volume is defined by its envelope, which is determined by the boundary of the moving object and its direction of motion. Much effort has been devoted to techniques for generating the envelope, but previous results are confined to envelopes of simply shaped objects such as polyhedra or quadric surfaces. This study provides an envelope generation algorithm for NURBS objects. Characteristic points are obtained by applying the geometric conditions of the envelope to the NURBS equations, and characteristic curves are then created by interpolating those points. Silhouette edges are determined as follows: first, the two adjacent surfaces sharing an edge are found from the B-Rep data; then, silhouette edges are discriminated by taking the scalar product of the velocity vector of a point on the edge with the normal vector of each of the two surfaces. Finally, the envelope is generated along the moving direction in the form of ruled surfaces, using the characteristic curves, the silhouette edges, and the portions of the initial and final object positions that contribute to the envelope. Since the developed algorithm applies not only to NURBS objects but also to their Boolean combinations, it can be used effectively in various applications.
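
The silhouette-edge test described above reduces to a sign check: an edge shared by two faces belongs to the silhouette when the sweep velocity has opposite-signed scalar products with the two face normals. The sketch below shows only this sign test, not the NURBS characteristic-curve computation.

```python
import numpy as np

def is_silhouette_edge(velocity, normal_a, normal_b, eps=1e-9) -> bool:
    """True when the motion direction points 'outward' for one adjacent face and
    'inward' for the other, i.e. the two dot products have opposite signs."""
    v = np.asarray(velocity, dtype=float)
    da = float(np.dot(v, np.asarray(normal_a, dtype=float)))
    db = float(np.dot(v, np.asarray(normal_b, dtype=float)))
    return da * db < -eps

# Example: translation along +x; one face normal faces roughly +x, the other -x.
print(is_silhouette_edge([1, 0, 0], [0.9, 0.1, 0.4], [-0.8, 0.2, 0.5]))  # True
```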


Robot vision system for face tracking using color information from video images (로봇의 시각시스템을 위한 동영상에서 칼라정보를 이용한 얼굴 추적)

  • Jung, Haing-Sup;Lee, Joo-Shin
    • Journal of Advanced Navigation Technology / v.14 no.4 / pp.553-561 / 2010
  • This paper proposes a face tracking method that can be effectively applied to a robot's vision system. The proposed algorithm tracks facial areas after detecting the region of motion in the video. Motion is detected by differencing two consecutive frames and then removing noise with a median filter and erosion/dilation operations. To extract skin color from the moving area, the color information of sample images is used: the skin color region and the background are separated by generating fuzzy membership functions from MIN-MAX values and evaluating their similarity. Within the face candidate region, the eyes are detected from the C channel of the CMY color space and the mouth from the Q channel of the YIQ color space, and the face region is tracked using knowledge-based features of the detected eyes and mouth. Experiments used 1,500 frames of video from 10 subjects (150 frames per subject). The results show a motion detection rate of 95.7% (motion areas detected in 1,435 frames) and correct face tracking in 97.6% of those cases (1,401 faces tracked).
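
The motion-detection front end can be sketched directly from the description: difference two consecutive frames, then clean the result with a median filter and erosion/dilation. The threshold and kernel sizes are guesses; the fuzzy skin-color step and the CMY/YIQ eye and mouth detection are not included.

```python
import cv2
import numpy as np

def motion_mask(prev_bgr, curr_bgr, thresh=25):
    """Frame differencing of two consecutive frames followed by median filtering
    and erosion/dilation to remove noise, giving a binary moving-area mask."""
    prev_g = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_g = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_g, prev_g)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.medianBlur(mask, 5)                       # remove salt-and-pepper noise
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.dilate(cv2.erode(mask, kernel), kernel)   # opening to clean the blobs
    return mask
```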

Real-time Moving Object Recognition and Tracking Using The Wavelet-based Neural Network and Invariant Moments (웨이블릿 기반의 신경망과 불변 모멘트를 이용한 실시간 이동물체 인식 및 추적 방법)

  • Kim, Jong-Bae
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.4 / pp.10-21 / 2008
  • This paper proposes a real-time moving object recognition and tracking method using a wavelet-based neural network and invariant moments. The candidate moving region detection phase, the first step of the proposed method, detects regions where pixel values change due to object movement by analyzing the difference image between two consecutive frames. The object recognition phase, the second step, recognizes vehicle regions among the detected candidate regions using a wavelet neural network. In the object tracking phase, the third step, the recognized vehicle regions are tracked by matching wavelet-based invariant moments of the recognized objects. To detect moving objects from the image sequence, the candidate region detection phase uses adaptive thresholding between the previous and current images, which makes it robust to changes in the surrounding environment while still detecting moving objects. By using wavelet features for vehicle recognition and tracking, the proposed method reduces computation time, minimizes the effect of noise in road images, and improves vehicle recognition accuracy. In experiments on images acquired from general road image sequences, the vehicle detection rate was 92.8% and the computing time per frame was 0.24 seconds. The proposed method can be efficiently applied to real-time intelligent road traffic surveillance systems.
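
A brief sketch of the first phase, assuming OpenCV: consecutive frames are differenced, a threshold is chosen adaptively from the difference statistics (mean plus two standard deviations is an assumption, not the paper's rule), and connected components become candidate regions. Hu moments stand in for the paper's wavelet-based invariant moments as a simple matching signature.

```python
import cv2
import numpy as np

def candidate_regions(prev_gray, curr_gray, min_area=400):
    """Difference two consecutive frames, threshold the result adaptively, and
    return bounding boxes (x, y, w, h) of sufficiently large connected components."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    t = float(diff.mean() + 2.0 * diff.std())        # adaptive threshold (assumption)
    _, mask = cv2.threshold(diff, t, 255, cv2.THRESH_BINARY)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

def hu_signature(patch_gray):
    """Hu invariant moments of a candidate patch, usable as a matching signature
    for tracking (the paper itself uses wavelet-based invariant moments)."""
    return cv2.HuMoments(cv2.moments(patch_gray)).ravel()
```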