• Title/Summary/Keyword: video object tracking

Search results: 319

An Intelligent Video Image Segmentation System using Watershed Algorithm (워터쉐드 알고리즘을 이용한 지능형 비디오 영상 분할 시스템)

  • Yang, Hwang-Kyu
    • The Journal of the Korea institute of electronic communication sciences / v.5 no.3 / pp.309-314 / 2010
  • In this paper, an intelligent security camera (ISC) operating over the Internet is proposed. Among ISC methods, watershed-based approaches achieve good segmentation accuracy, but the traditional watershed transform suffers from over-segmentation caused by small local minima in the gradient image that serves as its input. Face candidate regions are then detected using a skin-color model, and in the last step each candidate location is verified as a face with an SVM applied to wavelet transform coefficients extracted from the candidate region. The proposed system is therefore applicable to real-world problems such as object tracking, surveillance, and human-computer interface applications.
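
The watershed step this abstract relies on can be illustrated with a marker-based variant that suppresses the small local minima responsible for over-segmentation. The sketch below is a minimal OpenCV example under that assumption, with a placeholder input file name; the paper's skin-color face candidate detection and SVM verification stages are not reproduced.

```python
# Marker-based watershed segmentation (illustrative sketch; the paper's
# skin-color face detection and SVM verification stages are not reproduced).
import cv2
import numpy as np

def watershed_segment(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Morphological opening suppresses the small minima that cause over-segmentation.
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)

    # Sure background / sure foreground via dilation and the distance transform.
    sure_bg = cv2.dilate(opened, kernel, iterations=3)
    dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)
    unknown = cv2.subtract(sure_bg, sure_fg)

    # Each connected foreground region becomes one watershed marker.
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1          # background label becomes 1, not 0
    markers[unknown == 255] = 0    # unknown pixels are flooded by the transform

    cv2.watershed(bgr, markers)    # region boundaries are marked with -1
    return markers

if __name__ == "__main__":
    frame = cv2.imread("frame.jpg")   # hypothetical input frame
    print("segments:", watershed_segment(frame).max())
```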

The design and implementation of Object-based bioimage matching on a Mobile Device (모바일 장치기반의 바이오 객체 이미지 매칭 시스템 설계 및 구현)

  • Park, Chanil;Moon, Seung-jin
    • Journal of Internet Computing and Services / v.20 no.6 / pp.1-10 / 2019
  • Object-based image matching algorithms have been widely used in the image processing and computer vision fields. A variety of applications based on image matching algorithms have recently been developed for object recognition, 3D modeling, video tracking, and biomedical informatics. One prominent example of an image matching feature is the Scale Invariant Feature Transform (SIFT) scheme. However, many applications using the SIFT algorithm have been implemented on a stand-alone basis rather than in a client-server architecture. In this paper, we implement a client-server system that uses SIFT to identify and match objects in biomedical images and provide useful information to the user on a recently released mobile platform. The major methodological contribution of this work is leveraging the convenient user interface and ubiquitous Internet connection of mobile devices for interactive delineation, segmentation, representation, matching, and retrieval of biomedical images. With these technologies, we showcase examples of reliable image matching across different views of an object in semantic image search applications for biomedical informatics.
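
As a rough illustration of the SIFT matching building block mentioned above, the sketch below matches two images with Lowe's ratio test in OpenCV (version 4.4 or later, where SIFT lives in the main module). The file names are placeholders, and the paper's client-server mobile architecture is not reproduced.

```python
# SIFT keypoint matching with Lowe's ratio test (illustrative sketch only).
import cv2

def match_sift(path_a, path_b, ratio=0.75):
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_a, des_b, k=2)

    # Keep a match only when it is clearly better than the second-best candidate.
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return kp_a, kp_b, good

if __name__ == "__main__":
    # "query.png" and "reference.png" are placeholder file names.
    _, _, matches = match_sift("query.png", "reference.png")
    print("good matches:", len(matches))
```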

A Study on Object Detection Algorithm for Abandoned and Removed Objects for Real-time Intelligent Surveillance System (실시간 지능형 감시 시스템을 위한 방치, 제거된 객체 검출에 관한 연구)

  • Jeon, Ji-Hye;Park, Jong-Hwa;Jeong, Cheol-Jun;Kang, In-Goo;An, Tae-Ki;Park, Goo-Man
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.1C / pp.24-32 / 2010
  • In this paper we propose an object tracking system that detects abandoned and removed objects for intelligent surveillance applications. After GMM-based background subtraction, a histogram method is used to identify static regions and thereby detect abandoned and removed objects. Since the system is implemented on a DSP chip, it operates in real time and is programmable. The input videos used in the experiments contain various indoor and outdoor scenes, categorized into three levels of complexity: low, medium, and high. Over ten experimental runs, we obtained a high detection ratio on the low- and medium-complexity sequences. On the high-complexity video, the detection ratio was relatively low because the scene contains crowding and repeated occlusion; handling such complicated situations is left for future work.
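
A minimal sketch of the GMM background subtraction step, combined with a naive per-pixel accumulator for static regions, is given below; the video path, thresholds, and frame-rate figure are illustrative assumptions, and the paper's histogram-based abandoned/removed classification and DSP implementation are not reproduced.

```python
# GMM background subtraction with a naive static-region accumulator (sketch).
import cv2
import numpy as np

mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
cap = cv2.VideoCapture("surveillance.mp4")   # placeholder video path
static_score = None                          # per-pixel count of consecutive foreground frames

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = mog.apply(frame)
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]   # drop the shadow label (127)

    if static_score is None:
        static_score = np.zeros(fg.shape, np.int32)
    # Pixels that stay foreground accumulate; pixels that return to background reset.
    static_score = np.where(fg > 0, static_score + 1, 0)

    # Pixels foreground for ~10 s at 30 fps are candidate abandoned/removed regions.
    static_mask = (static_score > 300).astype(np.uint8) * 255
    cv2.imshow("static candidates", static_mask)
    if cv2.waitKey(1) == 27:                 # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```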

Automatic identification and analysis of multi-object cattle rumination based on computer vision

  • Yueming Wang;Tiantian Chen;Baoshan Li;Qi Li
    • Journal of Animal Science and Technology / v.65 no.3 / pp.519-534 / 2023
  • Rumination in cattle is closely related to their health, which makes the automatic monitoring of rumination an important part of smart pasture operations. However, manual monitoring of cattle rumination is laborious, and wearable sensors are often harmful to the animals. Thus, we propose a computer vision-based method to automatically identify multi-object cattle rumination and to calculate the rumination time and number of chews for each cow. The heads of the cattle in the video were initially tracked with a multi-object tracking algorithm that combined the You Only Look Once (YOLO) algorithm with the kernelized correlation filter (KCF). Images of each cow's head were saved at a fixed size and numbered. Then, a rumination recognition algorithm was constructed with parameters obtained using the frame difference method, and the rumination time and number of chews were calculated. The rumination recognition algorithm was applied to the head images of each cow to automatically detect multi-object cattle rumination. To verify the feasibility of this method, the algorithm was tested on multi-object cattle rumination videos, and the results were compared with those produced by human observation. The experimental results showed that the average error in rumination time was 5.902% and the average error in the number of chews was 8.126%. Rumination identification and the calculation of rumination information are performed automatically by computer, with no manual intervention. The method thus provides a new contactless rumination identification approach for multiple cattle and technical support for smart pasture operations.
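
The frame-difference chew counting described above can be sketched as follows, assuming the YOLO + KCF tracker already supplies fixed-size grayscale head crops for one cow; the motion threshold and frame rate are illustrative assumptions rather than the authors' parameters.

```python
# Chew counting on a tracked head region by frame differencing (illustrative
# sketch; YOLO detection and KCF tracking of the head box are assumed to be
# provided upstream and are not implemented here).
import cv2
import numpy as np

def count_chews(head_crops, diff_thresh=12.0, fps=30):
    """head_crops: iterable of equally sized grayscale head images of one cow."""
    prev = None
    motion = []
    for crop in head_crops:
        if prev is not None:
            # Mean absolute frame difference is a cheap proxy for jaw movement.
            motion.append(cv2.absdiff(crop, prev).mean())
        prev = crop

    motion = np.asarray(motion)
    active = motion > diff_thresh              # frames with noticeable jaw motion
    # A chewing cycle is counted on each rising edge of the activity signal.
    chews = int(np.count_nonzero(active[1:] & ~active[:-1]))
    rumination_seconds = float(np.count_nonzero(active)) / fps
    return chews, rumination_seconds
```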

A Study on Correcting Virtual Camera Tracking Data for Digital Compositing (디지털영상 합성을 위한 가상카메라의 트래킹 데이터 보정에 관한 연구)

  • Lee, Junsang;Lee, Imgeun
    • Journal of the Korea Society of Computer and Information / v.17 no.11 / pp.39-46 / 2012
  • The development of the computer has widened the expressive possibilities for natural objects and scenes, and cutting-edge computer graphics technologies can effectively create almost any image we can imagine. Although computer graphics play an important role in film and video production, the state of the domestic content production industry is not favorable for carrying out production and research at the same time. In digital compositing, the match moving stage, which composites the captured real sequence with the computer graphics image, goes through many complicated processes. Camera tracking is the most important issue in this stage: it comprises estimating the 3D trajectory and the optical parameters of the real camera. Because this estimation is based only on the captured sequence, many errors arise that make the process more difficult. In this paper we propose a method for correcting the tracking data. The proposed method alleviates unwanted camera shaking and object bouncing effects in the composited scene.
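
As a generic stand-in for the correction step (not the authors' method), the sketch below smooths a noisy camera trajectory with a Savitzky-Golay filter, which suppresses the high-frequency jitter that shows up as camera shake in the composited scene.

```python
# Smoothing a noisy camera-tracking trajectory (generic stand-in for the
# paper's correction step, not the authors' exact method).
import numpy as np
from scipy.signal import savgol_filter

def smooth_track(track_xyz, window=15, polyorder=3):
    """track_xyz: (N, 3) array of estimated camera positions per frame (N > window)."""
    track_xyz = np.asarray(track_xyz, dtype=float)
    # Filtering each axis independently removes high-frequency jitter that
    # appears as camera shake in the composited scene.
    return np.column_stack([
        savgol_filter(track_xyz[:, i], window, polyorder) for i in range(3)
    ])

if __name__ == "__main__":
    t = np.linspace(0, 1, 200)
    noisy = np.column_stack([t, np.sin(t), np.zeros_like(t)]) + 0.01 * np.random.randn(200, 3)
    print(smooth_track(noisy).shape)   # (200, 3)
```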

Dynamic Modeling of Eigenbackground for Object Tracking (객체 추적을 위한 고유 배경의 동적 모델링)

  • Kim, Sung-Young
    • Journal of the Korea Society of Computer and Information / v.17 no.4 / pp.67-74 / 2012
  • In this paper, we propose an efficient dynamic background modelling method that uses an eigenbackground to extract moving objects from a video stream. Even after a background model has been created, it has to be updated to adapt to changes caused by factors such as weather or lighting. We update the background model with the R-SVD method: we define a change ratio between images and update the model dynamically according to this value. An eigenbackground also needs to be trained on a sufficient number of images to be accurate, so we reorganize the input images to reduce the number required for training. Through simulation, we show that the proposed method outperforms both the traditional eigenbackground method without background updating and a previous method.
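
A minimal eigenbackground sketch is given below: the background subspace is spanned by the top-k singular vectors of the training frames, and pixels with a large reconstruction error are flagged as foreground. The paper's R-SVD incremental update and change-ratio trigger are not implemented here; the threshold and subspace size are assumptions.

```python
# Eigenbackground modelling via SVD (illustrative sketch; the R-SVD
# incremental update and change-ratio trigger are not reproduced).
import numpy as np

class Eigenbackground:
    def __init__(self, k=10):
        self.k = k
        self.mean = None
        self.basis = None   # (num_pixels, k) background subspace

    def fit(self, frames):
        """frames: (N, H*W) float array of flattened grayscale training images."""
        X = np.asarray(frames, dtype=float)
        self.mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = Vt[: self.k].T

    def foreground_mask(self, frame, thresh=25.0):
        """frame: flattened grayscale image (H*W,); returns a boolean foreground mask."""
        x = np.asarray(frame, dtype=float) - self.mean
        recon = self.basis @ (self.basis.T @ x)    # projection onto the background subspace
        return np.abs(x - recon) > thresh          # large residual => moving object
```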

Study on abnormal behavior prediction models using flexible multi-level regression (유연성 다중 회귀 모델을 활용한 보행자 이상 행동 예측 모델 연구)

  • Jung, Yu Jin;Yoon, Yong Ik
    • Journal of the Korean Data and Information Science Society / v.27 no.1 / pp.1-8 / 2016
  • Recently, violent and incidental crimes have occurred continuously, heightening public anxiety. Closed Circuit Television (CCTV) has been used to ensure security and to provide evidence for such crimes, but the video captured by CCTV has mainly been used after the fact, in post-processing, as evidence. In this paper, we propose a flexible multi-level regression model that estimates whether a pedestrian's behavior is dangerous from the surrounding environment and context. The situation analysis builds the knowledge used for pedestrian tracking, and the decision step finally determines and notifies a threat situation when the observed object's behavior is judged to be abnormal. By tracking the behavior of objects across multiple regions, the risk of an object's behavior can be assessed and crime can be anticipated through behavior prediction.
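
As a loose illustration only (not the authors' flexible multi-level regression), the sketch below scores a pedestrian trajectory with a weighted combination of simple motion features and flags it as abnormal above a threshold; the features, weights, and threshold are all assumptions.

```python
# Generic stand-in for scoring pedestrian trajectories and flagging abnormal
# behaviour; features, weights and the threshold are illustrative assumptions.
import numpy as np

def trajectory_features(points):
    """points: (N, 2) array of pedestrian positions over time."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return np.zeros(3)
    v = np.diff(points, axis=0)
    speed = np.linalg.norm(v, axis=1)
    heading = np.arctan2(v[:, 1], v[:, 0])
    turn = np.abs(np.diff(heading))
    return np.array([speed.mean(), speed.std(), turn.mean()])

def abnormality_score(points, weights=(0.2, 0.5, 0.3)):
    # A weighted linear combination of motion features; higher means more unusual.
    return float(trajectory_features(points) @ np.asarray(weights))

def is_abnormal(points, threshold=1.0):
    return abnormality_score(points) > threshold
```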

Object Detection Based on Deep Learning Model for Two Stage Tracking with Pest Behavior Patterns in Soybean (Glycine max (L.) Merr.)

  • Yu-Hyeon Park;Junyong Song;Sang-Gyu Kim;Tae-Hwan Jun
    • Proceedings of the Korean Society of Crop Science Conference / 2022.10a / pp.89-89 / 2022
  • Soybean (Glycine max (L.) Merr.) is a representative food resource. To preserve the integrity of soybean, it is necessary to protect soybean yield and seed quality from threats of various pests and diseases. Riptortus pedestris is a well-known insect pest that causes the greatest loss of soybean yield in South Korea. This pest not only directly reduces yields but also causes disorders and diseases in plant growth. Unfortunately, no resistant soybean resources have been reported. Therefore, it is necessary to identify the distribution and movement of Riptortus pedestris at an early stage to reduce the damage caused by insect pests. Conventionally, the diagnosis of agronomic traits related to pest outbreaks has been performed by the human eye; however, because human vision is subjective and inconsistent, this is time-consuming, labor-intensive, and requires the assistance of specialists. Therefore, the responses and behavior patterns of Riptortus pedestris to the scent of mixture R were visualized as a 3D model from the perspective of artificial intelligence. The movement patterns of Riptortus pedestris were analyzed using time-series image data, and classification was performed through visual analysis based on a deep learning model. In object tracking implemented with a YOLO-series model, the movement paths of the pests show a negative reaction to mixture R in the video scenes. As a result of 3D modeling using the x, y, and z axes of the tracked objects, 80% of the subjects showed behavioral patterns consistent with the mixture R treatment. These studies are also being conducted in the soybean field, and it should be possible to preserve soybean yield by applying a pest control platform at the early growth stage.
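
The trajectory-level analysis can be illustrated with the sketch below, which takes tracked (x, y, z) paths as given and counts how many insects end up farther from the scent source than they started; the tracker output format and the avoidance criterion are assumptions, not the authors' exact analysis.

```python
# Classifying avoidance from tracked insect trajectories (illustrative sketch;
# the YOLO-based tracker producing the (x, y, z) paths is assumed upstream).
import numpy as np

def moves_away(track_xyz, source_xyz):
    """track_xyz: (N, 3) tracked positions of one insect; source_xyz: scent source location."""
    d = np.linalg.norm(np.asarray(track_xyz, float) - np.asarray(source_xyz, float), axis=1)
    # "Negative reaction": the insect ends up farther from the scent source than it started.
    return d[-1] > d[0]

def avoidance_rate(tracks, source_xyz):
    flags = [moves_away(t, source_xyz) for t in tracks]
    return sum(flags) / len(flags) if flags else 0.0
```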


Development of Real-Time Tracking System Through Information Sharing Between Cameras (카메라 간 정보 공유를 통한 실시간 차량 추적 시스템 개발)

  • Kim, Seon-Hyeong;Kim, Sang-Wook
    • KIPS Transactions on Computer and Communication Systems / v.9 no.6 / pp.137-142 / 2020
  • As research on security systems using IoT (Internet of Things) devices increases, the need to track the location of specific objects is also increasing. The goal is to detect the movement of objects in real time and to predict their radius of movement over a short time. Many studies have focused on clearly recognizing and detecting moving objects, but they have not addressed the sharing of information between the cameras that recognize those objects. In this paper, using the device information of each camera and the video captured by it, we predict the movement radius of an object and share information with the cameras within that radius in order to provide the object's movement path.
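
A minimal sketch of the hand-off idea is shown below: when an object leaves one camera's view, a search radius is predicted from its last position and speed, and only the cameras within that radius are notified. The camera registry, message fields, and radius formula are assumptions rather than the paper's protocol.

```python
# Sharing a hand-off message with nearby cameras when an object leaves the
# field of view (illustrative sketch; registry and message layout are assumed).
import math
import time

CAMERAS = {                       # hypothetical registry: camera id -> position (metres)
    "cam_A": (0.0, 0.0),
    "cam_B": (40.0, 10.0),
    "cam_C": (120.0, -5.0),
}

def predict_radius(speed_mps, horizon_s=5.0):
    # The object can move at most speed * horizon metres before re-detection.
    return speed_mps * horizon_s

def handoff(object_id, last_pos, speed_mps, source_cam):
    radius = predict_radius(speed_mps)
    message = {"object": object_id, "pos": last_pos, "time": time.time(), "from": source_cam}
    targets = [
        cam for cam, pos in CAMERAS.items()
        if cam != source_cam and math.dist(pos, last_pos) <= radius
    ]
    # In a real system the message would be sent over the network; here it is just returned.
    return targets, message

if __name__ == "__main__":
    print(handoff("vehicle_17", (35.0, 8.0), speed_mps=12.0, source_cam="cam_A"))
```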

A Real-time Vehicle Localization Algorithm for Autonomous Parking System (자율 주차 시스템을 위한 실시간 차량 추출 알고리즘)

  • Hahn, Jong-Woo;Choi, Young-Kyu
    • Journal of the Semiconductor & Display Technology / v.10 no.2 / pp.31-38 / 2011
  • This paper introduces a video-based traffic monitoring system for detecting vehicles and obstacles on the road. To segment moving objects from the image sequence, we adopt a background subtraction algorithm based on local binary patterns (LBP). LBP-based texture analysis techniques have recently become popular tools for various machine vision applications such as face recognition and object classification. In this paper, we adopt an extension of LBP, called the Diagonal LBP (DLBP), to handle the background subtraction problem arising in vision-based autonomous parking systems; it halves the code length of LBP and drastically reduces the computational complexity. An edge-based shadow removal and blob merging procedure is also applied to the foreground blobs, and a pose estimation technique is used to calculate the position and heading angle of the moving object precisely. Experimental results show that our system works well for real-time vehicle localization and tracking applications.
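
For reference, the sketch below computes standard 8-neighbour LBP codes, the texture descriptor that the Diagonal LBP extends; the DLBP variant itself, the shadow removal, and the pose estimation steps are not reproduced.

```python
# Standard 8-neighbour LBP codes for a grayscale image (illustrative sketch;
# the paper's Diagonal LBP variant, which halves the code length, is not
# reproduced here).
import numpy as np

def lbp_codes(gray):
    """gray: 2-D uint8 array; returns LBP codes for the interior pixels."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                      # centre pixels
    # Neighbour offsets in a fixed clockwise order starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy: g.shape[0] - 1 + dy, 1 + dx: g.shape[1] - 1 + dx]
        code |= ((neighbour >= c).astype(np.int32) << bit)
    return code.astype(np.uint8)
```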