• Title/Summary/Keyword: Edge Tracking


Multiple Vehicles Tracking via sequential posterior estimation (순차적인 사후 추정에 의한 다중 차량 추적)

  • Lee, Won-Ju;Yoon, Chang-Young;Lee, Hee-Jin;Kim, Eun-Tai;Park, Mignon
    • Journal of the Institute of Electronics Engineers of Korea SC / v.44 no.1 / pp.40-49 / 2007
  • In a visual driver-assistance system, separating moving objects from fixed objects is an important problem for maintaining multiple hypotheses about the state. Color- and edge-based trackers can often be 'distracted', causing them to track the wrong object. Many researchers have dealt with this problem by using multiple features, since it is unlikely that all of them will be distracted at the same time. In this paper, we improve the accuracy and robustness of real-time tracking by combining a color-histogram feature with a brightness feature based on optical flow under a sequential Monte Carlo framework. Fixed objects are also excluded from tracking over time by reducing particle density through an adaptive particle number. This new framework makes two main contributions: a prediction framework that separates moving objects from fixed objects, and a measurement framework that extracts information from the visual data under partial occlusion.
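The abstract above does not give the actual likelihood models, but the core idea of a sequential Monte Carlo tracker that fuses two cues can be sketched as follows. This is a minimal toy sketch, assuming a 1-D state and simple exponential likelihoods for the color and optical-flow cues; the function names and parameters are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_likelihood(particles, target_pos):
    # Toy stand-in for a color-histogram similarity: particles near the
    # target's appearance model score higher.
    return np.exp(-np.abs(particles - target_pos))

def flow_likelihood(particles, flow_pos):
    # Toy stand-in for an optical-flow-based brightness cue.
    return np.exp(-np.abs(particles - flow_pos))

# Particles predicted around the true object at x = 5.0.
particles = rng.normal(5.0, 1.0, size=100)

# Fuse the two cues multiplicatively, as is common in SMC trackers
# when the features are treated as conditionally independent.
weights = color_likelihood(particles, 5.0) * flow_likelihood(particles, 5.2)
weights /= weights.sum()

estimate = np.sum(weights * particles)   # posterior mean of the state
```

Because both cues must agree for a particle to keep high weight, a distraction of one feature alone has less effect on the posterior, which is the robustness argument the abstract makes.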

MRF Particle filter-based Multi-Touch Tracking and Gesture Likelihood Estimation (MRF 입자필터 멀티터치 추적 및 제스처 우도 측정)

  • Oh, Chi-Min;Shin, Bok-Suk;Klette, Reinhard;Lee, Chil-Woo
    • Smart Media Journal / v.4 no.1 / pp.16-24 / 2015
  • In this paper, we propose a method for multi-touch tracking using MRF-based particle filters and gesture likelihood estimation. Each touch (of one finger) is considered to be one object. One frequently occurring issue is the hijacking problem, in which an object tracker is hijacked by a neighboring object. If a predicted particle is close to an adjacent object, the particle's weight should be lowered by analyzing the influence of neighboring objects, in order to avoid the hijacking problem. We define a penalty function to lower the weights of such particles. An MRF is a graph representation in which a node is the location of a target object and an edge describes the adjacency relation between target objects, so an MRF is a natural data structure for adjacent objects. Moreover, since the MRF graph representation is helpful for analyzing multi-touch gestures, we describe how to define gesture likelihoods based on the MRF. The experimental results show that the proposed method avoids hijacking problems and estimates gesture likelihoods with high accuracy.
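The abstract does not specify the penalty function's form, but a plausible minimal sketch is a multiplicative factor in (0, 1] that shrinks a particle's weight as it approaches a neighboring target in the MRF. The Gaussian shape and `sigma` below are assumptions for illustration only.

```python
import numpy as np

def mrf_penalty(particle_pos, neighbor_pos, sigma=10.0):
    """Penalty factor in (0, 1]: approaches 0 when the particle sits on a
    neighboring target (suppressing hijacking), approaches 1 when far away."""
    d2 = np.sum((particle_pos - neighbor_pos) ** 2, axis=-1)
    return 1.0 - np.exp(-d2 / (2.0 * sigma**2))

# A particle predicted almost on top of a neighboring touch point is
# heavily penalized; a distant one is left essentially untouched.
near = mrf_penalty(np.array([100.0, 100.0]), np.array([102.0, 101.0]))
far  = mrf_penalty(np.array([100.0, 100.0]), np.array([300.0, 200.0]))
```

In a full tracker this factor would multiply the observation likelihood of each particle, with one penalty term per MRF edge incident to the target.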

IoT Roaming Service for Seamless IoT Service (무중단 IoT 서비스 제공을 위한 IoT 로밍서비스)

  • Ahn, Junguk;Lee, Byung Mun
    • Journal of Korea Multimedia Society / v.23 no.10 / pp.1258-1269 / 2020
  • The IoT (Internet of Things) service provides users with valuable services by collecting and analyzing data from Internet-connected IoT devices. Current IoT service platforms use edge computing to reduce the delay required to collect data from IoT devices. However, if a user moves to another network with an IoT device, the connection is lost and the IoT service is suspended. To solve this problem, we propose a service that automatically roams the IoT service when an IoT device moves. The IoT roaming service provides an automatic device-tracking management technique designed so that users continue to receive IoT services even when they move to other networks. To check whether the proposed roaming service is effective, we implemented it and measured the data transfer time while moving between networks with devices during IoT service use. The average data transfer time was 124.62 ms, and the average service interruption time was 812.12 ms. From this result, we can assume that users perceive the service interruption as very short and that it does not affect the service experience. We expect that the IoT roaming service presents a method for stably providing IoT services even when users move between networks.

Vision Based Vehicle Detection and Traffic Parameter Extraction (비젼 기반 차량 검출 및 교통 파라미터 추출)

  • 하동문;이종민;김용득
    • Journal of KIISE: Computer Systems and Theory / v.30 no.11 / pp.610-620 / 2003
  • Various shadows are one of the main factors that cause errors in vision-based vehicle detection. In this paper, two simple methods, a landmark-based method and a BS & Edge method, are proposed for vehicle detection and shadow rejection. In the experiments, the accuracy of vehicle detection was higher than 96%, even while shadows cast by roadside buildings grew considerably. Based on these two methods, vehicle counting, tracking, classification, and speed estimation are achieved, so that real-time traffic parameters describing the load of each lane can be extracted.

An Improved Cast Shadow Removal in Object Detection (객체검출에서의 개선된 투영 그림자 제거)

  • Nguyen, Thanh Binh;Chung, Sun-Tae;Kim, Yu-Sung;Kim, Jae-Min
    • Proceedings of the Korea Contents Association Conference / 2009.05a / pp.889-894 / 2009
  • Accompanied by the rapid development of computer vision, visual surveillance has evolved greatly, with more and more complicated processing. However, many problems remain to be resolved for robust and reliable visual surveillance, and the cast shadow occurring in the motion detection process is one of them. Shadow pixels are often misclassified as object pixels, causing errors in the localization, segmentation, tracking, and classification of objects. This paper proposes a novel cast shadow removal method. As opposed to previous conventional methods, which consider pixel properties such as intensity, color distortion, and the HSV color system, the proposed method utilizes observations about edge patterns in the shadow region of the current frame and the corresponding region of the background scene, applying a Laplacian edge detector to the blob regions in both. The product of the two edge responses then determines whether the blob pixels in the foreground mask come from object regions or shadow regions. The proposed method is simple but turns out to be practically very effective for Gaussian-mixture-model-based detection, which is verified through experiments.
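The key observation behind this method (and the gray-level variant listed further down this page) is that a cast shadow dims the background texture but preserves its edge pattern, while a real object replaces it. A minimal sketch of the Laplacian-product test, on synthetic data and with an assumed hand-rolled 4-neighbour Laplacian, might look like this:

```python
import numpy as np

def laplacian(img):
    # 4-neighbour Laplacian with zero padding (stand-in for an edge detector).
    p = np.pad(img.astype(float), 1)
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1])

rng = np.random.default_rng(1)
background = rng.uniform(80, 120, (8, 8))       # textured background scene
frame = background.copy()
frame[1:4, 1:4] = 150.0                          # object blob: replaces texture
frame[4:7, 4:7] = 0.5 * background[4:7, 4:7]     # cast shadow: dims texture

# In shadow pixels the frame's Laplacian is a scaled copy of the background's,
# so the product is positive; in object pixels the edge patterns disagree.
prod = laplacian(frame) * laplacian(background)
shadow_score = prod[5, 5]   # interior of the shadow region
object_score = prod[2, 2]   # interior of the (flat) object region
```

Thresholding this product per blob pixel is then enough to separate shadow pixels from object pixels in the foreground mask.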


Detection of Pupil Center using Projection Function and Hough Transform (프로젝션 함수와 허프 변환을 이용한 눈동자 중심점 찾기)

  • Choi, Yeon-Seok;Mun, Won-Ho;Kim, Cheol-Ki;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.10a / pp.167-170 / 2010
  • In this paper, we propose a novel algorithm to detect the center of the pupil in a frontal-view face. The algorithm first extracts the eye region from the face image using the integral projection function and the variance projection function. Within the eye region, it detects the pupil center using the circular Hough transform with the Sobel edge mask. The experimental results show good performance in detecting the pupil center on FERET face images.
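The two projection functions named above have simple standard definitions: the integral projection function (IPF) is the mean intensity along each row or column, and the variance projection function (VPF) is the corresponding variance. A minimal sketch on a toy image (the dark band standing in for an eye row) is below; the localization rule via `argmin`/`argmax` is an assumed illustration of how the projections are typically used.

```python
import numpy as np

def ipf(img, axis=1):
    """Integral projection function: mean intensity per row (axis=1)
    or per column (axis=0)."""
    return img.mean(axis=axis)

def vpf(img, axis=1):
    """Variance projection function: intensity variance per row/column."""
    return img.var(axis=axis)

# Toy face strip: a dark horizontal band at row 4 plays the eye/pupil row.
img = np.full((10, 10), 200.0)
img[4, 3:7] = 20.0

dark_row = int(np.argmin(ipf(img)))      # IPF dips at the dark band
varied_row = int(np.argmax(vpf(img)))    # VPF peaks where intensity varies
```

On a real face image one would intersect the row and column candidates from both projections to bound the eye region, then run the circular Hough transform inside it.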


An Effective Moving Cast Shadow Removal in Gray Level Video for Intelligent Visual Surveillance (지능 영상 감시를 위한 흑백 영상 데이터에서의 효과적인 이동 투영 음영 제거)

  • Nguyen, Thanh Binh;Chung, Sun-Tae;Cho, Seongwon
    • Journal of Korea Multimedia Society / v.17 no.4 / pp.420-432 / 2014
  • In the detection of moving objects from video sequences, an essential process for intelligent visual surveillance, the cast shadows accompanying moving objects differ from the background, so they may easily be extracted as foreground object blobs, which causes errors in the localization, segmentation, tracking, and classification of objects. Most previous research on moving cast shadow detection and removal utilizes color information about objects and scenes. In this paper, we propose a novel cast shadow removal method for moving objects in gray-level video data for visual surveillance applications. The proposed method utilizes observations about edge patterns in the shadow region of the current frame and the corresponding region of the background scene, applying a Laplacian edge detector to the blob regions in the current frame and the corresponding regions in the background scene. The product of the two edge responses then determines the moving-object blob pixels among the blob pixels in the foreground mask, and the minimal rectangular regions containing all pixels classified as moving-object pixels are extracted. The proposed method is simple but turns out to be practically very effective for adaptive Gaussian-mixture-model-based object detection in intelligent visual surveillance applications, which is verified through experiments.

Logical operation tracking using optical flow and improvement of gradient operation speed (옵티컬 플로우를 이용한 논리연산 트래킹과 그레디언트 연산속도 개선)

  • 안태홍;정상화;박종안
    • The Journal of Korean Institute of Communications and Information Sciences / v.23 no.4 / pp.787-795 / 1998
  • In this paper, we improve the speed of the gradient operation, which is needed to calculate the optical flow for estimating a moving object, and propose a method that estimates the contour of a moving object by a logical operation on the optical flow and edges in noisy images. The proposed method, which recognizes and tracks a moving object using a logical operation on low-level optical flow and edge information, has the advantage of being simpler than known methods for moving-object estimation. In addition, we simulated several images using method I and method II with the improved gradient operation speed. Comparing the average total operation time, method I improved operation speed by 12% over the known method, and method II by 38%.
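The abstract does not specify which logical operation is used, but combining a motion cue with an edge cue is most naturally a per-pixel AND: keep only pixels that both appear to move and lie on an edge, which suppresses static background edges and flow noise alike. A toy sketch under that assumption:

```python
import numpy as np

# Binary masks over a 6x6 toy frame (both would come from thresholding
# optical-flow magnitude and an edge detector, respectively).
flow_mask = np.zeros((6, 6), dtype=bool)
edge_mask = np.zeros((6, 6), dtype=bool)
flow_mask[1:5, 1:5] = True   # region where optical flow indicates motion
edge_mask[2:4, 0:6] = True   # edge responses, including static background edges

# Logical AND keeps only the moving edges: a contour estimate of the object.
contour = flow_mask & edge_mask
```

Static edges outside the moving region (e.g. the left edge column here) are rejected because they carry no flow, while flow noise off the edges is rejected symmetrically.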


Omni-directional Surveillance and Motion Detection using a Fish-Eye Lens (어안 렌즈를 이용한 전방향 감시 및 움직임 검출)

  • Cho, Seog-Bin;Yi, Un-Kun;Baek, Kwang-Ryul
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.5 s.305 / pp.79-84 / 2005
  • In this paper, we develop an omni-directional surveillance and motion detection method. A fish-eye lens provides a wide field-of-view image. Using this image, the equi-distance model for the fish-eye lens is applied to obtain perspective and panoramic images. In general, we must consider the trade-off between the resolution and the field of view of a camera image; to enhance the resolution of the resulting images, interpolation methods are applied. The moving-edge method is then used to detect moving objects for object tracking.
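The equi-distance fish-eye model maps a ray at angle θ from the optical axis to image radius r = f·θ, whereas a perspective (pinhole) image of the same ray lies at r = f·tan θ. Converting between the two radii is the core of the dewarping step the abstract describes. A minimal sketch (focal length and sample radii are illustrative values, not from the paper):

```python
import numpy as np

def fisheye_to_perspective_radius(r_fish, f):
    """Equi-distance model: r_fish = f * theta, so theta = r_fish / f.
    The perspective image of the same ray lies at r_persp = f * tan(theta)."""
    theta = r_fish / f
    return f * np.tan(theta)

f = 100.0  # assumed focal length in pixels
r_center = fisheye_to_perspective_radius(10.0, f)  # near the image centre
r_edge = fisheye_to_perspective_radius(80.0, f)    # toward the periphery
```

Near the centre the two models nearly agree, while toward the periphery the perspective radius grows much faster than the fish-eye radius; it is this non-uniform stretching that makes the interpolation step mentioned above necessary to keep the dewarped image's resolution acceptable.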

Sub-Frame Analysis-based Object Detection for Real-Time Video Surveillance

  • Jang, Bum-Suk;Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication / v.11 no.4 / pp.76-85 / 2019
  • We introduce a vision-based object detection method for real-time video surveillance systems in low-end edge computing environments. Recently, the accuracy of object detection has improved thanks to deep learning approaches such as the Region-based Convolutional Neural Network (R-CNN), which uses two stages for inference. On the other hand, one-stage detection algorithms such as the Single-Shot Detector (SSD) and You Only Look Once (YOLO) have been developed at the expense of some accuracy and can be used for real-time systems. However, high-performance hardware such as general-purpose computing on graphics processing units (GPGPU) is still required to achieve excellent object detection performance and speed. To address this hardware requirement, which is burdensome for low-end edge computing environments, we propose a sub-frame analysis method for object detection. Specifically, we divide a whole image frame into smaller sub-frames and run inference on them with a convolutional neural network (CNN)-based detection network, which is much faster than a conventional network designed for full-frame images. With the proposed method, we reduce the computational requirement significantly without losing throughput or object detection accuracy.
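The sub-frame split itself is straightforward; a minimal sketch, assuming non-overlapping square tiles whose size divides the frame dimensions (the abstract gives no tile size, and real pipelines often add overlap to avoid cutting objects at tile borders):

```python
import numpy as np

def split_into_subframes(frame, tile):
    """Split an H x W frame into non-overlapping tile x tile sub-frames,
    assuming H and W are divisible by tile. Each sub-frame would then be
    fed to a small CNN detector independently."""
    h, w = frame.shape[:2]
    return [frame[r:r + tile, c:c + tile]
            for r in range(0, h, tile)
            for c in range(0, w, tile)]

frame = np.zeros((416, 416))          # a typical detector input resolution
tiles = split_into_subframes(frame, 104)
```

Each tile is inferred at a much smaller input size than the full frame, and detections are mapped back by adding each tile's row/column offset to its box coordinates.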