• Title/Summary/Keyword: Tracking training


Color Pattern Recognition and Tracking for Multi-Object Tracking in Artificial Intelligence Space (인공지능 공간상의 다중객체 구분을 위한 컬러 패턴 인식과 추적)

  • Tae-Seok Jin
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.27 no.2_2
    • /
    • pp.319-324
    • /
    • 2024
  • In this paper, the Artificial Intelligence Space (AI-Space) for human-robot interfacing is presented, which enables human-computer interfacing, networked camera conferencing, industrial monitoring, service, and training applications. We present a method for representing, tracking, and following objects (human, robot, chair) by fusing distributed multiple vision systems in AI-Space. The article presents the integration of color distributions into particle filtering. Particle filters provide a robust tracking framework under ambiguous conditions. We propose to track the moving objects (human, robot, chair) by generating hypotheses not in the image plane but on the top-view reconstruction of the scene.
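
The color-distribution particle filter at the core of this method can be sketched as follows. This is a toy run on a synthetic top-view grid: the patch size, histogram bin count, and likelihood sharpness are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def color_histogram(patch, n_bins=8):
    """Normalized histogram of quantized color indices in a patch."""
    h, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins))
    return h / max(h.sum(), 1)

def bhattacharyya(p, q):
    """Similarity in [0, 1] between two normalized histograms."""
    return float(np.sum(np.sqrt(p * q)))

# Synthetic top-view "image": background color 0, a 6x6 object of color 5.
img = np.zeros((64, 64), dtype=int)
obj_r, obj_c = 20, 30
img[obj_r - 3:obj_r + 3, obj_c - 3:obj_c + 3] = 5
ref_hist = color_histogram(img[obj_r - 3:obj_r + 3, obj_c - 3:obj_c + 3])

# Particle filter: particles are (row, col) hypotheses on the top-view plane.
n_particles = 500
particles = rng.uniform(0, 64, size=(n_particles, 2))

for _ in range(10):  # a few filter iterations on a static scene
    # Motion model: random-walk diffusion.
    particles += rng.normal(0, 2.0, particles.shape)
    particles = np.clip(particles, 3, 60)
    # Measurement model: color-histogram similarity under each particle.
    weights = np.empty(n_particles)
    for i, (r, c) in enumerate(particles.astype(int)):
        patch = img[r - 3:r + 3, c - 3:c + 3]
        weights[i] = np.exp(8.0 * bhattacharyya(color_histogram(patch), ref_hist))
    weights /= weights.sum()
    # Resample (systematic resampling is the usual choice; multinomial suffices here).
    particles = particles[rng.choice(n_particles, size=n_particles, p=weights)]

estimate = particles.mean(axis=0)
print(estimate)  # should end up close to (20, 30)
```

In the real system the hypotheses would be weighted against image patches from multiple cameras after top-view reprojection; here a single synthetic grid stands in for that reconstruction.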

Presentation Training System based on 3D Virtual Reality (3D 가상현실기반의 발표훈련시스템)

  • Jung, Young-Kee
    • The Journal of the Convergence on Culture Technology
    • /
    • v.4 no.4
    • /
    • pp.309-316
    • /
    • 2018
  • In this study, we propose a 3D virtual-reality-based presentation training system that implements a virtual presentation environment resembling the real world, so that users can present confidently in real settings. The proposed system provides a realistic and highly engaging presentation and interview environment by analyzing the speaker's voice and behavior in real time and reflecting them in the audience of the virtual space. Using an HMD and VR controllers that support 6-DOF tracking, the presenter can control the timing of and interaction with the virtual space via Kinect, and the virtual space can be changed among various settings configured by the user. The presenter studies presentation files and scripts displayed in separate views within the virtual space to understand the content and master the presentation.

Visual Object Tracking Fusing CNN and Color Histogram based Tracker and Depth Estimation for Automatic Immersive Audio Mixing

  • Park, Sung-Jun;Islam, Md. Mahbubul;Baek, Joong-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.3
    • /
    • pp.1121-1141
    • /
    • 2020
  • We propose a robust visual object tracking algorithm that fuses a convolutional neural network tracker, trained offline on a large number of video repositories, with a color histogram based tracker, in order to track objects for mixing immersive audio. Our algorithm addresses the occlusion and large-movement problems of the CNN based GOTURN generic object tracker. The key idea is the offline training of a binary classifier on the color histogram similarity values estimated by both trackers, which selects the appropriate tracker for target tracking; both trackers are then updated with the predicted bounding box position of the target to continue tracking. Furthermore, a histogram similarity constraint is applied before updating the trackers to maximize tracking accuracy. Finally, we compute the depth (z) of the target object using a prominent unsupervised monocular depth estimation algorithm, providing the 3D position of the tracked object needed to mix the immersive audio onto that object. Our proposed algorithm demonstrates about 2% higher accuracy than the GOTURN algorithm on the VOT2014 tracking benchmark. Additionally, our tracker can follow multiple objects by applying the single-object tracker per target, although it has not been evaluated on any MOT benchmark.
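
The tracker-selection step can be sketched in simplified form. The paper trains an offline binary classifier on histogram-similarity values; the sketch below replaces that learned classifier with a Bhattacharyya coefficient and a fixed threshold (the patch encoding, bin count, and 0.6 threshold are illustrative assumptions):

```python
import numpy as np

def norm_hist(patch, n_bins=8):
    """Normalized histogram of quantized color indices in a patch."""
    h, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins))
    return h / max(h.sum(), 1)

def hist_similarity(p, q):
    """Bhattacharyya coefficient between two normalized histograms."""
    return float(np.sum(np.sqrt(p * q)))

def select_tracker(target_hist, cnn_patch, color_patch, update_thresh=0.6):
    """Pick the tracker whose predicted patch better matches the target model.

    Returns (chosen, similarity, do_update); do_update is False when even the
    better match falls below the similarity constraint, so the trackers are
    not contaminated with a bad box (e.g. under occlusion).
    """
    s_cnn = hist_similarity(target_hist, norm_hist(cnn_patch))
    s_col = hist_similarity(target_hist, norm_hist(color_patch))
    chosen, sim = ("cnn", s_cnn) if s_cnn >= s_col else ("color", s_col)
    return chosen, sim, sim >= update_thresh

# Toy target model: a patch dominated by color index 5.
target = norm_hist(np.full((8, 8), 5))
good_patch = np.full((8, 8), 5)          # CNN tracker stayed on the target
drifted = np.zeros((8, 8), dtype=int)    # color tracker drifted to background

print(select_tracker(target, good_patch, drifted))
```

A trained classifier could weigh both similarity values jointly instead of taking the maximum; the thresholded update is the histogram similarity constraint mentioned in the abstract.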

Classification between Intentional and Natural Blinks in Infrared Vision Based Eye Tracking System

  • Kim, Song-Yi;Noh, Sue-Jin;Kim, Jin-Man;Whang, Min-Cheol;Lee, Eui-Chul
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.4
    • /
    • pp.601-607
    • /
    • 2012
  • Objective: The aim of this study is to classify intentional and natural blinks in a vision-based eye tracking system. Through this classification method, we expect that an eye tracking method can be designed that performs well for both navigation and selection interactions. Background: Eye tracking is widely used to increase user immersion and interest by supporting natural user interfaces. Although conventional eye tracking systems handle navigation interaction well by tracking pupil movement, there has been no breakthrough selection interaction method. Method: To determine the classification threshold between intentional and natural blinks, we performed an experiment capturing eye images, including intentional and natural blinks, from 12 subjects. By analyzing successive eye images, two features were collected: eye-closed duration and pupil size variation after eye opening. The classification threshold was then determined by SVM (Support Vector Machine) training. Results: Experimental results showed that the average detection accuracy for intentional blinks was 97.4% in a wearable eye tracking environment. The detection accuracy in a non-wearable camera environment was 92.9% with the same SVM classifier. Conclusion: By combining the two features using an SVM, we implemented an accurate selection interaction method for vision-based eye tracking. Application: The results of this research may help improve the efficiency and usability of vision-based eye tracking by supporting a reliable selection interaction scheme.
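
The two-feature SVM classification can be sketched as follows; the synthetic blink statistics and the plain hinge-loss subgradient training are illustrative stand-ins for the paper's measured eye-image features and SVM toolchain:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data standing in for the paper's measurements:
# feature 0 = eye-closed duration (s), feature 1 = pupil size variation after
# reopening. Intentional blinks (label +1) are assumed longer / more variable.
natural = np.column_stack([rng.normal(0.15, 0.04, 40), rng.normal(0.05, 0.02, 40)])
intentional = np.column_stack([rng.normal(0.50, 0.10, 40), rng.normal(0.20, 0.05, 40)])
X = np.vstack([natural, intentional])
y = np.array([-1.0] * 40 + [1.0] * 40)

# Linear SVM trained with hinge-loss subgradient descent.
w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.1
for epoch in range(200):
    for i in rng.permutation(len(X)):
        margin = y[i] * (X[i] @ w + b)
        if margin < 1:            # inside the margin: hinge-loss gradient step
            w += lr * (y[i] * X[i] - lam * w)
            b += lr * y[i]
        else:                     # outside the margin: regularization only
            w -= lr * lam * w

def classify(duration, variation):
    """Label a blink from its (duration, pupil-variation) feature pair."""
    return "intentional" if np.array([duration, variation]) @ w + b > 0 else "natural"

print(classify(0.6, 0.2), classify(0.12, 0.04))
```

Combining the two features lets the decision boundary separate cases that overlap on either feature alone, which is the advantage the paper reports over a single-feature threshold.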

Training of a Siamese Network to Build a Tracker without Using Tracking Labels (샴 네트워크를 사용하여 추적 레이블을 사용하지 않는 다중 객체 검출 및 추적기 학습에 관한 연구)

  • Kang, Jungyu;Song, Yoo-Seung;Min, Kyoung-Wook;Choi, Jeong Dan
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.21 no.5
    • /
    • pp.274-286
    • /
    • 2022
  • Multi-object tracking has long been studied in computer vision and plays a critical role in applications such as autonomous driving and driver assistance. Multi-object tracking techniques generally consist of a detector that detects objects and a tracker that tracks the detected objects. Various publicly available datasets allow us to train a detector model without much effort. However, there are relatively few public datasets for training a tracker model, and building one's own tracker dataset takes far longer than building a detector dataset. Hence, the detector is often developed separately from the tracker module, but the separate tracker must then be readjusted whenever the detector model changes. This study proposes a system that can train a model performing detection and tracking simultaneously using only detector training datasets. In particular, a Siamese network with augmentation is used to compose the detector and tracker. Experiments on public datasets verify that the proposed algorithm yields a real-time multi-object tracker comparable to state-of-the-art tracker models.
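
The label-free pairing idea can be illustrated with a toy sketch: an augmented copy of a detection crop serves as the positive pair for a Siamese-style embedding, while crops of other objects serve as negatives, so no track IDs are needed. The `embed` function here is a hand-crafted stand-in, not a trained network:

```python
import numpy as np

rng = np.random.default_rng(2)

def augment(crop):
    """Photometric/geometric jitter. The augmented copy of a detection crop
    plays the role of the 'other frame' view, so a positive training pair is
    formed without any tracking labels."""
    out = np.flip(crop, axis=1).astype(float)  # horizontal flip
    out += rng.normal(0, 0.05, out.shape)      # mild brightness noise
    return out

def embed(crop):
    """Hand-crafted stand-in for a trained Siamese branch: the zero-mean
    row-intensity profile, L2-normalized. A horizontal flip leaves it
    unchanged, so augmented views of the same crop embed close together."""
    v = crop.astype(float).mean(axis=1)
    v = v - v.mean()
    return v / (np.linalg.norm(v) + 1e-9)

# Two detections from a single labeled frame (no track IDs involved).
det_a = rng.uniform(0, 1, (16, 16))
det_b = rng.uniform(0, 1, (16, 16))

sims = [float(embed(det_a) @ embed(augment(det_a))),   # positive pair
        float(embed(det_a) @ embed(augment(det_b)))]   # negative pair
print(sims)  # the positive pair should score higher
```

In the actual system both branches would share learned convolutional weights and the embedding similarity would associate detections across frames at inference time.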

Real Time Eye and Gaze Tracking

  • Park Ho Sik;Nam Kee Hwan;Cho Hyeon Seob;Ra Sang Dong;Bae Cheol Soo
    • Proceedings of the IEEK Conference
    • /
    • 2004.08c
    • /
    • pp.857-861
    • /
    • 2004
  • This paper describes preliminary results we have obtained in developing a computer vision system based on active IR illumination for real time gaze tracking for interactive graphic display. Unlike most of the existing gaze tracking techniques, which often require assuming a static head to work well and require a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without per-user calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using the Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments that involve gaze-contingent interactive graphic display.
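
GRNN inference is a kernel-weighted regression from pupil parameters to screen coordinates; a minimal sketch with fabricated calibration data (the feature ranges, screen mapping, and kernel bandwidth are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def grnn_predict(X_train, Y_train, x, sigma=0.1):
    """GRNN inference: a Gaussian-kernel-weighted average of the training
    targets, with one pattern unit per calibration sample."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ Y_train) / w.sum()

# Hypothetical calibration data: pupil parameters (e.g. a pupil-glint vector)
# mapped to screen pixels through a mildly nonlinear ground-truth function.
X = rng.uniform(-1, 1, (500, 2))
Y = np.column_stack([
    800 * (0.5 + 0.4 * X[:, 0] + 0.05 * X[:, 0] ** 2),  # screen x (px)
    600 * (0.5 + 0.4 * X[:, 1]),                         # screen y (px)
])

query = np.array([0.2, -0.3])
pred = grnn_predict(X, Y, query)
expected = np.array([800 * (0.5 + 0.4 * 0.2 + 0.05 * 0.2 ** 2),
                     600 * (0.5 + 0.4 * -0.3)])
print(pred, expected)
```

Because the prediction is a weighted average over stored samples, the mapping need not be an analytical function, which is the property the abstract highlights; adding head-pose terms to the input vector is how head movement enters the mapping.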


The Design of Target Tracking System Using the Identification of TS Fuzzy Model (TS 퍼지 모델 동정을 이용한 표적 추적 시스템 설계)

  • Lee, Bum-Jik;Joo, Young-Hoon;Park, Jin-Bae
    • Proceedings of the KIEE Conference
    • /
    • 2001.07d
    • /
    • pp.1958-1960
    • /
    • 2001
  • In this paper, we propose a design methodology for a target tracking system using TS fuzzy model identification based on a genetic algorithm (GA) and the RLS algorithm. In general, the objective of target tracking is to estimate the future trajectory of the target from its past positions obtained from the sensor. In conventional mathematical nonlinear filtering methods such as the extended Kalman filter (EKF), performance may deteriorate in highly nonlinear situations. To resolve this problem of nonlinear filtering, the EKF error caused by nonlinearity is compensated by identifying a TS fuzzy model. In the proposed method, training data are composed from the parameters of the EKF; the premise and consequent parameters and the number of rules of the TS fuzzy model are identified using the GA; and the consequent parameters are then finely tuned using the recursive least squares (RLS) algorithm, thereby compensating the EKF error. Finally, the proposed method is applied to a three-dimensional tracking problem, and the simulation results show that tracking performance is improved by the proposed method.
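
The final fine-tuning stage is the standard recursive least-squares update for a linear-in-parameters model, which is exactly the form of a TS fuzzy rule consequent. A minimal sketch on a toy consequent function (the data and parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive-least-squares step for a model y ~ theta @ x, the same
    update used to fine-tune the consequent parameters of a TS fuzzy rule.
    lam is the forgetting factor (1.0 = no forgetting)."""
    Px = P @ x
    k = Px / (lam + x @ Px)                 # gain vector
    theta = theta + k * (y - theta @ x)     # correct by the prediction error
    P = (P - np.outer(k, Px)) / lam         # shrink the covariance
    return theta, P

# Toy consequent: y = 2*x1 - 0.5*x2 + 1 (bias folded in as a constant regressor).
true_theta = np.array([2.0, -0.5, 1.0])
theta = np.zeros(3)
P = np.eye(3) * 1000.0                      # large initial covariance
for _ in range(200):
    x = np.append(rng.uniform(-1, 1, 2), 1.0)
    y = true_theta @ x + rng.normal(0, 0.01)
    theta, P = rls_update(theta, P, x, y)

print(theta)  # converges toward [2.0, -0.5, 1.0]
```

In the paper's pipeline the GA first fixes the rule structure and rough parameters; RLS then refines only the consequents, for which this closed-form recursive update is cheap and stable.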


The Development and Application of a Training Base for the Installation and Adjustment of Photovoltaic Power Generation Systems

  • Chuanqing, SUN
    • International Journal of Advanced Culture Technology
    • /
    • v.4 no.1
    • /
    • pp.37-50
    • /
    • 2016
  • In recent years, the development and application of green energy resources have attracted more and more attention. The training room presented here focuses on the terminal applications of a photovoltaic power generation system (PPGS). By introducing its composition and general design principles, we aim to lead the students to master the fundamental skills required for its design, installation, and construction. The training room consists of numerous platforms, such as the PPGS, wind and photovoltaic hybrid power generation systems, wind power generation equipment, a simulative grid-connected power generation system, and electronic technology applications of new energy. This enables the students to obtain project and professional skills training by assembling, adjusting, maintaining, and inspecting the various components of the photovoltaic and new-energy power generation systems, to further grasp the fundamental and related theoretical knowledge, and to reinforce their practical and operational skills, so as to improve their problem-analyzing and problem-solving abilities.

Location-Based Military Simulation and Virtual Training Management System (위치인식 기반의 군사 시뮬레이션 및 가상훈련 관리 시스템)

  • Jeon, Hyun Min;Kim, Jae Wan
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.1
    • /
    • pp.51-57
    • /
    • 2017
  • The purpose of this study is to design a system that can be used for military simulation and virtual training using the location information of individual soldiers' weapons. After acquiring location information with an Arduino GPS shield, the system transmits the data to a smartphone over a Bluetooth shield, and the smartphone relays it to the server in real time over 3G/4G. The server measures, analyzes, and manages each soldier's current position and tracking information. The proposed system makes it easier to analyze the training situation of individual soldiers and can be expected to yield better training results.
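
As a concrete illustration of the acquisition step, GPS shields typically emit NMEA 0183 sentences whose ddmm.mmmm coordinates must be converted to decimal degrees before transmission. A sketch of that conversion (the example sentence is fabricated, and the checksum is not validated here):

```python
def nmea_to_decimal(value, hemisphere):
    """Convert NMEA ddmm.mmmm (or dddmm.mmmm) to signed decimal degrees."""
    point = value.index(".")
    degrees = float(value[:point - 2])      # everything before the minutes
    minutes = float(value[point - 2:])      # mm.mmmm part
    dec = degrees + minutes / 60.0
    return -dec if hemisphere in ("S", "W") else dec

def parse_gpgga(sentence):
    """Extract (lat, lon) in decimal degrees from a $GPGGA fix sentence."""
    fields = sentence.split(",")
    assert fields[0].endswith("GGA")
    lat = nmea_to_decimal(fields[2], fields[3])
    lon = nmea_to_decimal(fields[4], fields[5])
    return lat, lon

# Fabricated example fix near Seoul (checksum not verified in this sketch).
msg = "$GPGGA,123519,3733.8600,N,12658.6200,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(parse_gpgga(msg))
```

The resulting (lat, lon) pair is what the smartphone would forward to the server for position measurement and track analysis.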

Recognition of Container Identifiers Using 8-directional Contour Tracking Method and Refined RBF Network

  • Kim, Kwang-Baek
    • Journal of information and communication convergence engineering
    • /
    • v.6 no.1
    • /
    • pp.100-104
    • /
    • 2008
  • Generally, it is difficult to find constant patterns in the identifiers of a container image, since the identifiers are not normalized in color, size, or position, and their shapes are damaged by external environmental factors. This paper distinguishes identifier areas from background noise and removes noise by using an ART2-based quantization method together with general morphological information about the identifiers, such as color, size, height-to-width ratio, and distance from other identifiers. Individual identifiers are extracted by applying the 8-directional contour tracking method to each identifier area. This paper proposes a refined ART2-based RBF network and applies it to identifier recognition. In experiments with 300 container images, the proposed algorithm recognized container identifiers more accurately than previously proposed methods, despite requiring a shorter training time.
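
The 8-directional contour tracking step can be sketched as Moore-neighbor tracing with a clockwise sweep over the 8-neighborhood. The raster-scan start point and the simple return-to-start stopping rule are simplifications for illustration, not necessarily the paper's exact procedure:

```python
import numpy as np

# 8 neighbor offsets in clockwise order (rows grow downward):
# E, SE, S, SW, W, NW, N, NE
DIRS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def trace_contour(img):
    """8-directional contour tracing of the first object found by raster scan
    in a binary image (object assumed not to touch the image border)."""
    rows, cols = img.shape
    start = None
    for r in range(rows):
        for c in range(cols):
            if img[r, c]:
                start = (r, c)
                break
        if start:
            break
    if start is None:
        return []
    contour = [start]
    cur, back = start, (start[0], start[1] - 1)   # entered from the west
    while True:
        # Sweep the neighbors clockwise, starting just after the backtrack cell.
        b_idx = DIRS.index((back[0] - cur[0], back[1] - cur[1]))
        for k in range(1, 9):
            d = DIRS[(b_idx + k) % 8]
            nxt = (cur[0] + d[0], cur[1] + d[1])
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and img[nxt]:
                break
            back = nxt            # last background cell examined
        else:
            return contour        # isolated single-pixel object
        if nxt == start:          # simple stopping rule: contour closed
            return contour
        contour.append(nxt)
        cur = nxt

img = np.zeros((5, 5), dtype=bool)
img[1:4, 1:4] = True              # a 3x3 blob
print(trace_contour(img))
```

Tracing each identifier's closed contour yields its boundary pixels and bounding region, from which the individual character image is cropped for the RBF network.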