• Title/Summary/Keyword: Video Image Tracking


Algorithm for ball tracking and image enhancement of tennis game video (테니스 영상에서 공의 위치 추적과 영상 개선을 위한 알고리즘 연구)

  • Bae, Min-Seop;Hong, Young-Tack;Choi, Tae-Young
    • Proceedings of the Korea Information Processing Society Conference / 2006.11a / pp.73-76 / 2006
  • We propose an algorithm for tracking the tennis court area and player positions in tennis match video. Since the court and the players must be located before the ball's motion can be found, the court and player regions are recognized through binarized images and morphological analysis. Once the player positions are confirmed, the region where the ball's motion is expected is identified; when information on the passing ball is available, the ball's position is recognized, and a Kalman filter is used to estimate its motion and track its position. Using this motion information, we also propose an algorithm that restores the image degradation caused by the ball's high-speed movement.

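The Kalman-filter tracking step this abstract describes can be sketched as a minimal constant-velocity filter; the state layout, the noise levels, and the synthetic noiseless detections below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def make_kalman(dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman model for 2D ball tracking.
    State x = [px, py, vx, vy]; measurement z = [px, py]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    Q = q * np.eye(4)   # process noise (illustrative)
    R = r * np.eye(2)   # measurement noise (illustrative)
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    # Predict the next ball state, then correct with the detection z.
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a ball moving at (2, 1) px/frame from synthetic detections.
x, P = np.zeros(4), np.eye(4) * 10.0
F, H, Q, R = make_kalman()
for t in range(1, 30):
    z = np.array([2.0 * t, 1.0 * t])
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(np.round(x[:2]))  # position estimate converges to the true path
```

Between detections (e.g. when the fast-moving ball blurs out), the predict step alone can be used to bridge the gap.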

Automatic Detecting and Tracking Algorithm of Joint of Human Body using Human Ratio (인체 비율을 이용한 인체의 조인트 자동 검출 및 객체 추적 알고리즘)

  • Kwak, Nae-Joung;Song, Teuk-Seob
    • The Journal of the Korea Contents Association / v.11 no.4 / pp.215-224 / 2011
  • With growing interest in human-computer interaction, many studies have addressed detecting and tracking the human body. In this paper, we propose an algorithm that automatically extracts joints, the linking points of the human body, using body proportions under a single camera, and then tracks the object. The proposed method takes the difference images of the grayscale channels and of the hue channels between the input image and the background image, combines the results, separates foreground from background, and extracts the object. We standardize the body proportions using the face length and anthropometric measurements, and automatically extract the object's joints from these proportions and the corner points of the object's silhouette. The joints' movement is then tracked with a block-matching algorithm. Applied to test video acquired through a camera, the proposed method automatically extracts joints and effectively tracks them.
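The block-matching step used here to follow each joint can be sketched as a sum-of-absolute-differences (SAD) search over a small window; the block and search sizes below are illustrative assumptions.

```python
import numpy as np

def track_point(prev, curr, pt, block=5, search=7):
    """Track a joint location from frame `prev` to `curr` by minimising
    the sum of absolute differences (SAD) over a local search window."""
    h, w = prev.shape
    r = block // 2
    y0, x0 = pt
    template = prev[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1].astype(float)
    best, best_pt = None, pt
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y - r < 0 or x - r < 0 or y + r >= h or x + r >= w:
                continue  # candidate block would leave the frame
            cand = curr[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            sad = np.abs(cand - template).sum()
            if best is None or sad < best:
                best, best_pt = sad, (y, x)
    return best_pt

# A bright 'joint' blob moves 2 px down and 3 px right between frames.
prev = np.zeros((40, 40)); prev[10:15, 10:15] = 255
curr = np.zeros((40, 40)); curr[12:17, 13:18] = 255
print(track_point(prev, curr, (12, 12)))  # → (14, 15)
```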

An Algorithm for Traffic Information by Vehicle Tracking from CCTV Camera Images on the Highway (고속도로 CCTV카메라 영상에서 차량 추적에 의한 교통정보 수집 알고리즘)

  • Min Joon-Young
    • Journal of Digital Contents Society / v.3 no.1 / pp.1-9 / 2002
  • This paper proposes an algorithm that automatically measures traffic information, such as volume count, speed, and occupancy rate, from CCTV camera images installed on a highway, extending the functions of existing image detectors. Conventional methods count traffic lane by lane, but this approach frequently suffers critical occlusion errors when large vehicles (buses, trucks, etc.) pass and cannot cover all eight lanes of a highway. In this paper, a detection area spanning all lanes is defined, and traffic information is collected by individually tracking the vehicles passing through this area, making it possible to cover all eight lanes. Experiments were conducted on two different real road scenes for 20 minutes each: images from a CCTV camera installed upstream of Kiheung Interchange on the Kyongbu highway, and video recorded at Chungkye Tunnel. For image processing, frames were captured by a frame-grabber board at 30 frames per second, 640×480 pixel resolution, and 256 gray levels to reduce the total amount of data to be interpreted.

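Once a vehicle is tracked through a detection zone of known physical length, its speed follows directly from the entry and exit frame indices; the zone length, frame numbers, and frame rate below are illustrative, not the paper's data.

```python
def zone_speed(entry_frame, exit_frame, zone_length_m, fps=30):
    """Estimate vehicle speed from the frames at which its tracked
    centroid enters and leaves a detection zone of known length."""
    dt = (exit_frame - entry_frame) / fps      # travel time in seconds
    mps = zone_length_m / dt                   # metres per second
    return mps * 3.6                           # km/h

# A vehicle crosses a 20 m zone in 24 frames at 30 fps (0.8 s).
print(round(zone_speed(100, 124, 20.0)))  # → 90
```

Counting the tracks that complete the crossing gives volume, and the fraction of frames in which the zone is occupied gives the occupancy rate.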

A Forest Fire Detection Algorithm Using Image Information (영상정보를 이용한 산불 감지 알고리즘)

  • Seo, Min-Seok;Lee, Choong Ho
    • Journal of the Institute of Convergence Signal Processing / v.20 no.3 / pp.159-164 / 2019
  • Detecting a wildfire using only the color information in an image is a very difficult problem. This paper proposes an algorithm that detects forest fire areas by analyzing both the color and the motion of regions in video containing forest fires. The proposed algorithm removes the background using a Gaussian-mixture-based background segmentation algorithm, which does not depend on lighting conditions. In addition, the RGB channels are converted to HSV to extract flame candidates based on color. A flame candidate is judged not to be a flame if its area moves during labeling and tracking; candidates that remain in the same position for more than two minutes are regarded as flames. Experimental results with the implemented algorithm confirm its validity.
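The HSV color test for flame candidates can be sketched per pixel as below; the hue, saturation, and value thresholds are illustrative assumptions, not the paper's tuned values.

```python
import colorsys

def is_flame_color(r, g, b):
    """Flame-candidate test on one RGB pixel (0-255): convert to HSV and
    keep hues in the red-to-yellow band with high saturation and value.
    Thresholds are illustrative."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h <= 60 / 360.0 and s >= 0.4 and v >= 0.5

print(is_flame_color(255, 140, 0))   # orange → True
print(is_flame_color(60, 120, 60))   # foliage green → False
```

In the full pipeline this test is applied only inside the foreground mask produced by the Gaussian-mixture background model, and surviving regions must stay put for the two-minute persistence check.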

A Study for Detecting a Gazing Point Based on Reference Points (참조점을 이용한 응시점 추출에 관한 연구)

  • Kim, S.I.;Lim, J.H.;Cho, J.M.;Kim, S.H.;Nam, T.W.
    • Journal of Biomedical Engineering Research / v.27 no.5 / pp.250-259 / 2006
  • Eye-movement information is used in various fields such as psychology, ophthalmology, physiology, rehabilitation medicine, web design, and human-machine interfaces (HMI). Various devices for detecting eye movement have been developed, but they are expensive. Common eye-tracking methods include EOG (electro-oculography), Purkinje image trackers, the scleral search coil technique, and video-oculography (VOG). The purpose of this study is to implement an algorithm that tracks the location of the gazing point from the pupil. Two kinds of location data were compared to track the gazing point: the reference points (infrared LEDs) reflected from the eyeball, and the center point of the pupil obtained with a CCD camera. The reference points were captured with the CCD camera under infrared light, which is invisible to the human eye. Images were saved both with and without infrared illumination of the eyeball, and the reflected reference points were detected from the brightness difference between the two saved images. The circumcenter of a triangle was used to find the center of the pupil, and the location of the gazing point was expressed relative to the pupil center and the reference points.
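The circumcenter step can be written out directly: given three points on the (approximately circular) pupil contour, the circumcenter of the triangle they form is the pupil centre. The sample points below are illustrative.

```python
def circumcenter(a, b, c):
    """Circumcenter of triangle abc: the point equidistant from all three
    vertices, used to recover the pupil centre from three points sampled
    on its contour."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

# Three points on a circle of radius 5 centred at (2, 3).
print(circumcenter((7, 3), (2, 8), (-3, 3)))  # → (2.0, 3.0)
```

The denominator `d` is zero only when the three points are collinear, which a real contour sampler should guard against.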

Unauthorized person tracking system in video using CNN-LSTM based location positioning

  • Park, Chan;Kim, Hyungju;Moon, Nammee
    • Journal of the Korea Society of Computer and Information / v.26 no.12 / pp.77-84 / 2021
  • In this paper, we propose a system that uses image data and beacon data to classify people who are and are not authorized to enter a group facility. From the image data collected through an IP camera, YOLOv4 extracts person objects, and beacon signal data (UUID, RSSI) collected through an application compose a fingerprinting-based radio map. To compensate for signal instability and improve location accuracy, user location data are extracted after CNN-LSTM-based learning. The system achieved an accuracy of 93.47%. In the future, it could be fused with access-authentication processes such as the QR codes that have come into use during COVID-19, in order to track people who have not passed through the authentication process.
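The fingerprinting idea can be sketched with a toy radio map: an observed RSSI vector is matched to the closest stored fingerprint. The location names, beacon IDs (`b1`-`b3`), and RSSI values below are hypothetical, and the nearest-neighbour match stands in for the paper's learned CNN-LSTM positioning.

```python
import math

# Hypothetical fingerprint radio map: location -> RSSI (dBm) per beacon.
radio_map = {
    "lobby":    {"b1": -48, "b2": -71, "b3": -80},
    "corridor": {"b1": -65, "b2": -55, "b3": -72},
    "office":   {"b1": -82, "b2": -70, "b3": -50},
}

def locate(sample):
    """Match an observed RSSI vector to the closest fingerprint by
    Euclidean distance; a learned model would replace this step."""
    def dist(fp):
        return math.sqrt(sum((fp[b] - sample.get(b, -100)) ** 2 for b in fp))
    return min(radio_map, key=lambda loc: dist(radio_map[loc]))

print(locate({"b1": -50, "b2": -69, "b3": -78}))  # → lobby
```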

Escape Route Prediction and Tracking System using Artificial Intelligence (인공지능을 활용한 도주경로 예측 및 추적 시스템)

  • Yang, Bum-Suk;Park, Dea-Woo
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.8 / pp.1130-1135 / 2022
  • In Seoul, about 75,000 CCTVs are installed across 25 district offices. Each district office has built a control center and performs 24-hour CCTV video monitoring for the safety of citizens. The Seoul Metropolitan Government is building a smart-city integrated platform that provides the district offices' CCTV images to related organizations under MOUs, enabling rapid response to emergency situations. In this paper, when an incident occurs, people and vehicles in the CCTV images are discriminated using AI DNN-based template matching, an MLP algorithm, and a CNN-based YOLO SPP DNN model, and the escape route is predicted. The system is also designed to automatically disseminate image and situation information to adjacent district offices when vehicles or people leave the competent district. This escape-route prediction and tracking system using artificial intelligence can extend the smart-city integrated platform nationwide.
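The template-matching component mentioned here can be sketched as a normalised cross-correlation (NCC) search; the brute-force scan, the image sizes, and the synthetic test pattern below are illustrative, not the paper's DNN-based variant.

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image` and return the top-left corner of
    the window with the highest normalised cross-correlation score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw].astype(float)
            w = w - w.mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum())
            if denom == 0:
                continue  # flat window or flat template: NCC undefined
            score = (w * t).sum() / denom
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

# Plant a textured patch and find it again with its own template.
img = np.zeros((30, 30))
img[8:13, 15:20] = np.arange(25).reshape(5, 5)
tmpl = img[8:13, 15:20].copy()
print(match_template(img, tmpl))  # → (8, 15)
```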

Video Production Method using Match Moving Technique (매치무빙 기법을 활용한 모션그래픽 영상제작에 관한 연구)

  • Lee, Junsang;Park, Junhong;Lee, Imgeun
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.4 / pp.755-762 / 2016
  • Motion graphics is a recently emerged technique that extends the means of expression in the video industry. Worldwide, image design is drawing ever more attention in film, advertising, exhibitions, the web, mobile, games, new media, and other fields. With the development of new computer technologies, VFX methods for visual content are changing dynamically. Such production methods combine real scenery with CG (computer graphics) to compose realistic scenes that cannot be captured by ordinary filming, overcoming the gap between the real and virtual worlds and maximizing expressive possibilities in both graphics and real space. Match moving is the technique of accurately matching a virtual camera to the real camera to produce a realistic composite. In this paper, we propose a motion graphics production technique that uses match moving to bring the movements of the real camera into 3D layer data.
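Why the solved camera path matters can be shown with a bare pinhole projection: when the match-moved camera pans, a static 3D layer point slides across screen space with the correct parallax. The camera model, focal length, and coordinates below are illustrative assumptions.

```python
import numpy as np

def project(point3d, cam_pos, focal=35.0):
    """Project a 3D layer point through a solved (match-moved) camera
    at `cam_pos`, looking down -Z, with a simple pinhole model."""
    p = np.asarray(point3d, float) - np.asarray(cam_pos, float)
    return (focal * p[0] / -p[2], focal * p[1] / -p[2])

# As the solved camera tracks right frame by frame, the static layer
# point drifts left in screen space, matching the live-action plate.
for cx in [0.0, 1.0, 2.0]:
    print(project((0.0, 0.0, -100.0), (cx, 0.0, 0.0)))
```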

Estimation of Human Height and Position using a Single Camera (단일 카메라를 이용한 보행자의 높이 및 위치 추정 기법)

  • Lee, Seok-Han;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.3 / pp.20-31 / 2008
  • In this paper, we propose a single-view technique for estimating human height and position. Conventional techniques for estimating 3D geometric information rely on geometric cues such as vanishing points and vanishing lines. The proposed technique instead back-projects the image of the moving object directly and estimates its position and height in a 3D space whose coordinate system is designated by a marker. Geometric errors are then corrected using the constraints the marker provides. Unlike most conventional techniques, the proposed method offers a framework for simultaneously acquiring the height and position of each individual in the image. The accuracy and robustness of the technique are verified on several real video sequences from outdoor environments.
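A stripped-down version of single-view height estimation can be derived from similar triangles: with a horizontal pinhole camera at known height, a standing person's foot and head image rows (measured below the horizon line) give the height directly. This simplified model and the sample numbers are illustrative, not the paper's marker-based method.

```python
def person_height(y_foot, y_head, cam_height):
    """Estimate a standing person's height from a horizontal pinhole
    camera at `cam_height` metres. `y_foot` and `y_head` are image rows
    in pixels below the horizon; similar triangles give
    h = Hc * (y_foot - y_head) / y_foot."""
    return cam_height * (y_foot - y_head) / y_foot

# Camera 5 m up: foot 200 px and head 130 px below the horizon.
print(person_height(200.0, 130.0, 5.0))  # → 1.75
```

The same geometry yields the ground distance to the person (d = f * Hc / y_foot), which is the position half of the estimate.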

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking, yet head motion tracking is one of the critical issues for realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model and template matching detect the facial region efficiently in each video frame. For 3D head motion tracking, we exploit a cylindrical head model projected onto an initial head motion template: given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced with the optical flow method. For facial expression cloning we use a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from a geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using radial basis functions (RBF). The experiments show that the developed vision-based animation system creates realistic facial animation, with robust head pose estimation and facial variation, from input video.
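The final RBF step, spreading control-point displacements to the surrounding non-feature points, can be sketched with a Gaussian-kernel interpolant; the kernel choice, `sigma`, and the 2D toy points below are illustrative assumptions, not the paper's exact solver.

```python
import numpy as np

def rbf_deform(controls, displacements, points, sigma=10.0):
    """Interpolate control-point displacements with Gaussian radial
    basis functions and apply them to the given (non-feature) points."""
    controls = np.asarray(controls, float)
    d = np.asarray(displacements, float)

    def phi(a, b):
        # Gaussian kernel matrix between two point sets.
        r2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-r2 / (2 * sigma ** 2))

    # Solve for RBF coefficients so the controls move exactly as given.
    coef = np.linalg.solve(phi(controls, controls), d)
    pts = np.asarray(points, float)
    return pts + phi(pts, controls) @ coef

# One control point lifts by 2 units: a point on it follows fully,
# while a far-away point barely moves.
moved = rbf_deform([[0, 0]], [[0, 2]], [[0, 0], [50, 0]])
print(np.round(moved, 2))
```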