• Title/Summary/Keyword: Video-tracking software


Separation of Occluding Pigs using Deep Learning-based Image Processing Techniques (딥 러닝 기반의 영상처리 기법을 이용한 겹침 돼지 분리)

  • Lee, Hanhaesol; Sa, Jaewon; Shin, Hyunjun; Chung, Youngwha; Park, Daihee; Kim, Hakjae
    • Journal of Korea Multimedia Society / v.22 no.2 / pp.136-145 / 2019
  • The crowded environment of a domestic pig farm is highly vulnerable to the spread of infectious diseases such as foot-and-mouth disease, and studies have been conducted to automatically analyze the behavior of pigs in a crowded pig farm through a camera-based video surveillance system. Although occluding pigs must be separated correctly in order to track each individual pig, extracting the boundaries of occluding pigs quickly and accurately is challenging because of complicated occlusion patterns such as X and T shapes. In this study, we propose a fast and accurate method to separate occluding pigs by exploiting the speed of You Only Look Once (YOLO), one of the fast deep learning-based object detectors, while overcoming its limitation as a bounding box-based detector through test-time data augmentation with rotation. Experimental results with two-pig occlusion patterns show that the proposed method provides better accuracy and processing speed than Mask R-CNN, one of the most widely used state-of-the-art deep learning-based segmentation techniques (an improvement of about 11 times over Mask R-CNN in the combined accuracy/processing-speed metric).
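The core idea described here, running a bounding-box detector on rotated copies of the image and mapping the detections back to the original frame, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation; `run_detector` is a hypothetical stand-in for any YOLO-style detector that returns axis-aligned boxes.

```python
import cv2
import numpy as np

def rotate_image(image, angle_deg):
    """Rotate an image about its center (canvas size kept fixed)."""
    h, w = image.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(image, m, (w, h)), m

def boxes_to_original_frame(boxes, m):
    """Map axis-aligned boxes found in the rotated image back to the original
    image by inverting the rotation and taking the bounding box of the corners."""
    m_inv = cv2.invertAffineTransform(m)
    mapped = []
    for x1, y1, x2, y2 in boxes:
        corners = np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2]], dtype=np.float32)
        ones = np.ones((4, 1), dtype=np.float32)
        back = np.hstack([corners, ones]) @ m_inv.T
        xs, ys = back[:, 0], back[:, 1]
        mapped.append((xs.min(), ys.min(), xs.max(), ys.max()))
    return mapped

def detect_with_rotation_tta(image, run_detector, angles=(0, 30, 60, 90)):
    """Test-time augmentation with rotation: detect on each rotated copy,
    then pool all boxes in the original frame for later merging/splitting."""
    all_boxes = []
    for angle in angles:
        rotated, m = rotate_image(image, angle)
        boxes = run_detector(rotated)   # hypothetical YOLO-style detector
        all_boxes.extend(boxes_to_original_frame(boxes, m))
    return all_boxes
```

Rotating the input lets an axis-aligned detector fit tighter boxes around diagonally oriented animals, which is what makes the X- and T-shaped occlusion patterns separable.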

Maritime radar display unit based on PC for safe ship navigation

  • Bae, Jin-Ho; Lee, Chong-Hyun; Hwang, Chang-Ku
    • International Journal of Ocean System Engineering / v.1 no.1 / pp.52-59 / 2011
  • A prototype radar display unit was implemented using inexpensive off-the-shelf components, including a nonlinear estimation algorithm for target tracking in a clutter environment. Two custom-designed boards, an analog signal processing board and a DSP board, can be plugged into an expansion slot of a personal computer (PC) to form a maritime radar display unit. Our system provided all the functionality specified in International Maritime Organization (IMO) resolution A.422(XI). The analog signal processing board was used for A/D conversion as well as rain and sea clutter suppression. The main functions of the DSP board were scan conversion and video overlay operations. A host PC was used to run the tracking algorithm for targets in clutter, based on a discrete-time Bayes-optimal (nonlinear, non-Gaussian) estimation method, together with the graphical user interface (GUI) software for the Automatic Radar Plotting Aid (ARPA). The proposed tracking method recursively computed the entire probability density function of the target position and velocity by converting the recursion into linear convolution operations.
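The recursion described at the end of this abstract, carrying the full probability density of the target state and implementing the prediction step as a convolution, can be illustrated with a one-dimensional grid-based Bayes filter. This is a generic sketch of the idea, not the authors' ARPA code; the Gaussian motion and measurement models and all numbers are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def bayes_step(prior, grid, motion_sigma, z, meas_sigma):
    """One recursion of a grid-based Bayes filter over target position.
    Prediction: convolve the prior with the process-noise kernel (a linear
    convolution).  Update: multiply by the measurement likelihood and
    renormalize.  Non-Gaussian densities are handled because the whole
    density is carried on the grid."""
    dx = grid[1] - grid[0]
    kernel = gaussian(grid - grid.mean(), 0.0, motion_sigma)
    kernel /= kernel.sum()
    predicted = fftconvolve(prior, kernel, mode="same")   # prediction step
    likelihood = gaussian(grid, z, meas_sigma)            # measurement model
    posterior = predicted * likelihood                    # Bayes update
    return posterior / (posterior.sum() * dx)

# Example: refine a flat prior with a few noisy range measurements.
grid = np.linspace(0.0, 100.0, 1001)
density = np.ones_like(grid) / (grid[-1] - grid[0])       # flat prior
for z in [20.0, 22.5, 24.8, 27.1]:                        # illustrative measurements
    density = bayes_step(density, grid, motion_sigma=1.5, z=z, meas_sigma=2.0)
print("posterior mean position:", np.trapz(grid * density, grid))
```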

Touching Pigs Segmentation and Tracking Verification Using Motion Information (움직임 정보를 이용한 근접 돼지 분리와 추적 검증)

  • Park, Changhyun; Sa, Jaewon; Kim, Heegon; Chung, Yongwha; Park, Daihee; Kim, Hakjae
    • KIPS Transactions on Software and Data Engineering / v.7 no.4 / pp.135-144 / 2018
  • The domestic pigsty environment is highly vulnerable to the spread of respiratory diseases such as foot-and-mouth disease because of the confined space. To manage this issue, a variety of studies have been conducted to automatically analyze the behavior of individual pigs in a pen through a camera-based video surveillance system. Although touching pigs must be correctly segmented in order to track each pig in complex situations such as aggressive behavior, detecting the correct boundaries between touching pigs is challenging because of the limited accuracy of the Kinect's depth information. In this paper, we propose a segmentation method that uses the motion information of the touching pigs. In addition, the proposed method can be applied to detect tracking errors when tracking individual pigs in a complex environment. In the experiments, touching pigs in a pig farm were separated with an accuracy of 86%, and tracking errors were also detected accurately.
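A minimal sketch of how motion information can split a merged foreground blob into two animals: estimate dense optical flow between consecutive frames and cluster the flow vectors of the pixels inside the blob. The Farneback flow and k-means choices here are illustrative assumptions, not the paper's exact pipeline.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def split_touching_blob(prev_gray, curr_gray, blob_mask, n_parts=2):
    """Split one merged foreground blob into n_parts regions by clustering
    dense optical-flow vectors of the pixels inside the blob.

    prev_gray, curr_gray : consecutive grayscale frames (uint8)
    blob_mask            : boolean mask of the merged blob
    Returns an int label image (0 = background, 1..n_parts = separated parts)."""
    # Farneback dense flow (positional args: pyr_scale, levels, winsize,
    # iterations, poly_n, poly_sigma, flags).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ys, xs = np.nonzero(blob_mask)
    # Features: pixel position plus (weighted) flow, so clusters stay
    # spatially coherent while still separating differently moving animals.
    features = np.column_stack([xs, ys,
                                flow[ys, xs, 0] * 10.0,
                                flow[ys, xs, 1] * 10.0])
    labels = KMeans(n_clusters=n_parts, n_init=10).fit_predict(features)
    out = np.zeros(blob_mask.shape, dtype=np.int32)
    out[ys, xs] = labels + 1
    return out
```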

Object Feature Extraction and Matching for Effective Multiple Vehicles Tracking (효과적인 다중 차량 추적을 위한 객체 특징 추출 및 매칭)

  • Cho, Du-Hyung; Lee, Seok-Lyong
    • KIPS Transactions on Software and Data Engineering / v.2 no.11 / pp.789-794 / 2013
  • A vehicle tracking system makes it possible to infer vehicle movement paths for avoiding traffic congestion and to prevent traffic accidents in advance by recognizing traffic flow, monitoring vehicles, and detecting road accidents. To track vehicles effectively, the vehicles that appear in a sequence of video frames must first be identified by extracting the features of each object in the frames; the same vehicle must then be recognized across consecutive frames by matching the objects' feature values. In this paper, we identify objects by binarizing the difference image between a target image and a reference image and applying a labeling technique. As feature values, we use the center coordinate of the minimum bounding rectangle (MBR) of each identified object and the averages of the 1D FFT (fast Fourier transform) coefficients along the horizontal and vertical directions of the MBR. A vehicle is tracked by regarding the pair of objects with the highest similarity between two consecutive images as the same object. Experimental results show that the proposed method outperforms existing methods that use geometrical features in tracking accuracy.
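A minimal sketch of the feature pipeline outlined above: binarize the difference image, label connected components, describe each component by its MBR center plus averaged 1D FFT magnitudes of the MBR's row and column profiles, and match objects across frames by feature distance. Thresholds and the greedy matching rule are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_objects(frame_gray, reference_gray, diff_thresh=30, min_area=200):
    """Binarize the difference image against a reference frame, label the
    connected components, and return one feature vector per object:
    [cx, cy, mean |FFT| of row profile, mean |FFT| of column profile]."""
    diff = cv2.absdiff(frame_gray, reference_gray)
    _, binary = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    feats = []
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < min_area:
            continue
        patch = frame_gray[y:y + h, x:x + w].astype(np.float64)
        row_fft = np.abs(np.fft.fft(patch.mean(axis=1))).mean()   # vertical profile
        col_fft = np.abs(np.fft.fft(patch.mean(axis=0))).mean()   # horizontal profile
        cx, cy = centroids[i]
        feats.append(np.array([cx, cy, row_fft, col_fft]))
    return feats

def match_objects(feats_prev, feats_curr):
    """Greedy matching: pair each current object with the previous object
    whose feature vector is closest in Euclidean distance."""
    matches = []
    for j, fc in enumerate(feats_curr):
        dists = [np.linalg.norm(fc - fp) for fp in feats_prev]
        if dists:
            matches.append((int(np.argmin(dists)), j))
    return matches
```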

Semiautomated Analysis of Data from an Imaging Sonar for Fish Counting, Sizing, and Tracking in a Post-Processing Application

  • Kang, Myoung-Hee
    • Fisheries and Aquatic Sciences / v.14 no.3 / pp.218-225 / 2011
  • Dual-frequency identification sonar (DIDSON) is an imaging sonar that has been used for numerous fisheries investigations in a diverse range of freshwater and marine environments. The main purposes of DIDSON are fish counting, fish sizing, and fish behavioral studies. DIDSON records video-quality data, so processing power for handling the vast amount of data at high speed is a priority. Therefore, a semiautomated analysis of DIDSON data for fish counting, sizing, and behavior was carried out in Echoview (fisheries acoustic data analysis software) using test data collected on the Rakaia River, New Zealand. Using these data, the methods and algorithms for background noise subtraction, image smoothing, target (fish) detection, and conversion to single targets are illustrated in detail. Verification by visualization confirmed the resulting targets. As a result, not only fish counts but also fish sizing information such as length, thickness, perimeter, compactness, and orientation were obtained. The alpha-beta fish tracking algorithm was employed to extract the speed, change in depth, and depth distribution relating to fish behavior. The tail-beat pattern was depicted using the maximum intensity across all beams. This methodology can be used as a template and applied to data from the BlueView two-dimensional imaging sonar.
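The alpha-beta tracking algorithm mentioned here is a fixed-gain filter; a minimal one-dimensional sketch (for example, smoothing a fish's depth over successive pings) looks like this. The gain values and sample data are illustrative only.

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.005,
                     x0=0.0, v0=0.0):
    """Fixed-gain alpha-beta filter: predict with constant velocity, then
    correct position and velocity using the measurement residual."""
    x, v = x0, v0
    estimates = []
    for z in measurements:
        x_pred = x + v * dt          # constant-velocity prediction
        r = z - x_pred               # measurement residual
        x = x_pred + alpha * r       # position correction
        v = v + (beta / dt) * r      # velocity correction
        estimates.append((x, v))
    return estimates

# Example: smooth a noisy depth track from successive sonar pings.
depths = [5.0, 5.3, 5.1, 5.6, 5.9, 6.2]
for pos, vel in alpha_beta_track(depths, dt=0.5):
    print(f"depth {pos:.2f} m, vertical speed {vel:.2f} m/s")
```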

A study on the effect of introducing EBS AR production system on content (EBS AR 실감영상 제작 시스템 도입이 콘텐츠에 끼친 영향에 대한 연구)

  • Kim, Ho-sik; Kwon, Soon-chul; Lee, Seung-hyun
    • The Journal of the Convergence on Culture Technology / v.7 no.4 / pp.711-719 / 2021
  • EBS has been producing numerous educational contents with traditional virtual studio production systems since the early 2000s and introduced an AR video production system in October 2020, twenty years later. Although the basic concept of synthesizing graphic elements with the actual image in real time by tracking camera movement and lens information is similar to the previous system, the newly applied AR video production system contains several technologies that improve on the earlier ones. A marker tracking technology was applied that stably tracks the camera's position while allowing free camera movement, and the operating software is based on Unreal Engine, one of the representative graphics engines used in computer game production, which reduces the system's rendering burden and enables high-quality, real-time graphic effects. The system was installed on a crane camera mainly used for crane shots in the live broadcasting studio and applied to live programs for children; some segments, such as program introductions and quiz events that used to be rendered in 2D graphics, were converted into enhanced 3D AR videos. This paper covers the effect of introducing and applying the AR video production system on EBS content production, as well as its future development direction and possibilities.

Experimental Flow Visualisation of an Artificial Heart Pump

  • Tan, A.C.C.; Timms, D.L.; Pearcy, M.J.; McNeil, K.; Galbraith, A.
    • Journal of Advanced Marine Engineering and Technology / v.28 no.2 / pp.210-216 / 2004
  • Flow visualization techniques were employed to qualitatively visualize the flow patterns through a 400% scaled-up centrifugal blood pump. The apparatus comprised a scaled-up centrifugal pump, a high-speed video camera, an argon-ion laser light sheet, and custom-coded particle tracking software. Reynolds similarity laws were applied in order to reduce the rotational speed of the pump. The outlet (cutwater) region was identified as a site of high turbulence and thus a likely source of haemolysis. The region underneath the impeller was identified as a region of lower flow.
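As a worked example of the Reynolds similarity scaling mentioned above, under the common assumption that the rotational Reynolds number Re = ωD²/ν is matched between prototype and model, a 400% scale-up running a fluid of the same kinematic viscosity can turn at 1/16 of the prototype speed. All numbers below are illustrative, not the paper's values.

```python
def model_speed_from_reynolds(omega_proto, d_proto, d_model,
                              nu_proto, nu_model):
    """Match the rotational Reynolds number Re = omega * D**2 / nu between
    prototype and scaled model, and solve for the model's rotational speed."""
    return omega_proto * (d_proto / d_model) ** 2 * (nu_model / nu_proto)

# Illustrative numbers only: a 2000 rpm prototype impeller and a 400% scale
# model using a working fluid with the same kinematic viscosity.
omega_model = model_speed_from_reynolds(omega_proto=2000.0,
                                        d_proto=0.05, d_model=0.20,
                                        nu_proto=3.5e-6, nu_model=3.5e-6)
print(f"model speed: {omega_model:.0f} rpm")   # 2000 / 16 = 125 rpm
```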

Adaptive Model-based Multi-object Tracking Robust to Illumination Changes and Overlapping (조명변화와 곁침에 강건한 적응적 모델 기반 다중객체 추적)

  • Lee, Kyoung-Mi; Lee, Youn-Mi
    • Journal of KIISE: Software and Applications / v.32 no.5 / pp.449-460 / 2005
  • This paper proposes a method to robustly track persons under illumination changes and partial occlusions in color video frames acquired from a fixed camera. To solve the problem of appearance changes caused by illumination, a time-independent intrinsic image is used to remove noise in each frame and is adaptively updated frame by frame. A hierarchical human model that includes body color information is used to track persons under occlusion. The tracked human model is recorded in a person list for some duration after the corresponding person exits and is recovered from the list when the person reenters. The proposed method was evaluated in several indoor and outdoor scenarios. The results demonstrate the potential effectiveness of the adaptive model-based method, which corrects color information distorted by lighting changes and succeeds in tracking persons who overlap within a frame.
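A minimal sketch of the frame-by-frame adaptive update idea: keep a slowly adapting reference image that absorbs gradual lighting changes so that fast-moving foreground stands out. The running-average blend and the fixed threshold are illustrative assumptions, not the paper's intrinsic-image formulation.

```python
import numpy as np

class AdaptiveReference:
    """Maintain a slowly adapting reference image so that gradual lighting
    changes are absorbed while fast-moving foreground remains detectable."""

    def __init__(self, first_frame, learning_rate=0.05):
        self.reference = first_frame.astype(np.float64)
        self.learning_rate = learning_rate

    def update(self, frame):
        frame = frame.astype(np.float64)
        # Blend the new frame into the reference (exponential running average),
        # so the model adapts frame by frame to illumination changes.
        self.reference = ((1.0 - self.learning_rate) * self.reference
                          + self.learning_rate * frame)
        # Foreground: pixels that differ strongly from the adapted reference.
        foreground = np.abs(frame - self.reference) > 25.0
        return foreground
```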

A Study on Attitude Estimation of UAV Using Image Processing (영상 처리를 이용한 UAV의 자세 추정에 관한 연구)

  • Paul, Quiroz; Hyeon, Ju-Ha; Moon, Yong-Ho; Ha, Seok-Wun
    • Journal of Convergence for Information Technology / v.7 no.5 / pp.137-148 / 2017
  • Recently, researchers have been actively working to utilize Unmanned Aerial Vehicles (UAVs) for military and industrial applications. One of these applications is covertly following a preceding aircraft, for example when the route of a suspicious reconnaissance aircraft must be tracked, which requires estimating the target's attitude, i.e., its roll, yaw, and pitch angles, at each instant. In this paper, we propose a method for estimating the attitude of a target aircraft in real time using video provided by an external camera on the following aircraft. Image processing methods such as color space division and template matching, together with statistical methods such as linear regression, are applied to detect key points and estimate the Euler angles. A simulation experiment comparing X-Plane flight data with the estimated flight data shows that the proposed method can effectively estimate the flight attitude information of the preceding aircraft.
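A minimal sketch of the two ingredients named above, template matching to locate key points of the leading aircraft and a linear regression from those image measurements to Euler angles; the feature layout and regression targets are illustrative assumptions, not the paper's exact formulation.

```python
import cv2
import numpy as np
from sklearn.linear_model import LinearRegression

def locate_keypoint(frame_gray, template_gray):
    """Return the (x, y) position of the best template match in the frame."""
    scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)
    return np.array(max_loc, dtype=np.float64)

def fit_attitude_regressor(feature_rows, euler_rows):
    """Fit a linear map from key-point features (e.g. wing-tip and tail
    positions stacked into one row per frame) to (roll, pitch, yaw)."""
    model = LinearRegression()
    model.fit(np.asarray(feature_rows), np.asarray(euler_rows))
    return model

def estimate_attitude(model, frame_gray, templates):
    """Locate each key-point template in the frame and predict Euler angles."""
    features = np.concatenate([locate_keypoint(frame_gray, t) for t in templates])
    return model.predict(features.reshape(1, -1))[0]   # [roll, pitch, yaw]
```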

A Real-time Face Recognition System using Fast Face Detection (빠른 얼굴 검출을 이용한 실시간 얼굴 인식 시스템)

  • Lee, Ho-Geun; Jung, Sung-Tae
    • Journal of KIISE: Software and Applications / v.32 no.12 / pp.1247-1259 / 2005
  • This paper proposes a real-time face recognition system that detects multiple faces in low-resolution video such as web-camera footage. The face recognition system consists of a face detection step and a face classification step. First, it finds face region candidates using an AdaBoost-based object detection method, which is fast and robust, and generates a reduced feature vector for each face region candidate using principal component analysis (PCA). Second, face classification uses the PCA features and a multi-class SVM. Experimental results show that the proposed method achieves real-time face detection and recognition from low-resolution video. Additionally, we implemented an auto-tracking face recognition system using a pan-tilt web camera and a wireless on/off digital door-lock system based on the face recognition system.
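A minimal sketch of the detect-then-classify pipeline described above, using OpenCV's Haar cascade (an AdaBoost-based detector) together with scikit-learn's PCA and SVC as stand-ins for the paper's components; the cascade file, crop size, and PCA dimensionality are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

FACE_SIZE = (32, 32)   # low-resolution crops, as in webcam-quality video

def detect_faces(frame_gray, cascade):
    """AdaBoost-based (Haar cascade) face detection; returns resized face crops."""
    boxes = cascade.detectMultiScale(frame_gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(frame_gray[y:y + h, x:x + w], FACE_SIZE)
            for (x, y, w, h) in boxes]

def train_classifier(face_crops, labels, n_components=40):
    """Reduce each face crop with PCA and train a multi-class SVM on the result."""
    X = np.stack([c.flatten() for c in face_crops]).astype(np.float64)
    pca = PCA(n_components=n_components).fit(X)
    svm = SVC(kernel="rbf").fit(pca.transform(X), labels)
    return pca, svm

def recognize(frame_gray, cascade, pca, svm):
    """Detect faces in one frame and return a predicted identity per face."""
    crops = detect_faces(frame_gray, cascade)
    if not crops:
        return []
    X = np.stack([c.flatten() for c in crops]).astype(np.float64)
    return svm.predict(pca.transform(X))

# Example cascade (ships with opencv-python):
# cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
```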