• Title/Summary/Keyword: Visual tracking

Tracking Method of Dynamic Smoke based on U-net (U-net기반 동적 연기 탐지 기법)

  • Gwak, Kyung-Min;Rho, Young J.
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.4
    • /
    • pp.81-87
    • /
    • 2021
  • Artificial intelligence technology is advancing as we enter the Fourth Industrial Revolution, and research on vision-based models using CNNs is active. U-net is one such vision-based model and has shown strong performance in semantic segmentation. Although various U-net studies have been conducted, studies on tracking objects with unclear outlines, such as gas and smoke, are still insufficient. We conducted a U-net study to tackle this limitation. In this paper, we describe how 3D cameras are used to collect data, how the data are organized into training and test sets, how U-net is applied, and how the results are validated.
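The abstract does not name its validation metric; a common way to validate semantic-segmentation output such as a predicted smoke mask is intersection-over-union (IoU). A minimal numpy-only sketch (function name hypothetical):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter / union) if union else 1.0

pred  = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 0], [0, 1, 1]])
print(iou(pred, truth))  # 2 pixels overlap / 4 pixels in union = 0.5
```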

Robust Real-Time Visual Odometry Estimation for 3D Scene Reconstruction (3차원 장면 복원을 위한 강건한 실시간 시각 주행 거리 측정)

  • Kim, Joo-Hee;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.4
    • /
    • pp.187-194
    • /
    • 2015
  • In this paper, we present an effective visual odometry estimation system that tracks the real-time pose of a camera moving in 3D space. To meet the real-time requirement while making full use of the rich information in color and depth images, our system adopts a feature-based sparse odometry estimation method. After matching features extracted across image frames, it repeats both additional inlier-set refinement and motion refinement to obtain a more accurate estimate of the camera odometry. Moreover, even when the remaining inlier set is insufficient, our system computes the final odometry estimate in proportion to the size of the inlier set, which greatly improves the tracking success rate. Through experiments with the TUM benchmark datasets and the implementation of a 3D scene reconstruction application, we confirmed the high performance of the proposed visual odometry estimation method.
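The inlier-proportional final estimate could be sketched as a blend between the measured frame-to-frame motion and a fallback prediction, with the weight growing with inlier support. This is an illustrative reconstruction, not the authors' exact formula; the names and the 30-inlier threshold are assumptions:

```python
import numpy as np

def blended_motion(measured, predicted, n_inliers, n_required=30):
    """Weight the measured frame-to-frame motion by inlier support;
    fall back toward a motion prediction when support is weak."""
    w = min(1.0, n_inliers / n_required)
    return w * np.asarray(measured) + (1.0 - w) * np.asarray(predicted)

# With half the required inliers, the result sits halfway between the two.
m = blended_motion([1.0, 0.0, 0.0], [0.5, 0.5, 0.0], n_inliers=15)
print(m)  # [0.75 0.25 0.  ]
```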

The complexity of opt-in procedures in mobile shopping: Moderating effects of visual attention using the eyetracker (모바일 쇼핑에서 옵트인의 절차적 복잡성 연구: 아이트래커(eyetracker) 기반 시각적 주의의 조절효과)

  • Kim, Sang-Hu;Kim, Yerang;Yang, Byunghwa
    • Journal of Digital Convergence
    • /
    • v.15 no.8
    • /
    • pp.127-135
    • /
    • 2017
  • Consumers tend to be concerned about the disclosure of personal information and, at the same time, to avoid the inconvenience of the procedural complexity caused by privacy protections. The purpose of the current paper is to investigate the relationship between opt-in procedural complexity and smartphone shopping behavior, moderated by the amount of visual attention measured with an eye tracker. We created a virtual mobile Web site in which the complexity of the opt-in procedures was manipulated and measured. We also measured dwell time on areas of interest with an SMI RED 250 instrument, which tracks real eye movements. Results indicated that the level of procedural complexity is related to repurchase, with a moderating effect of the amount of visual attention. Finally, we discuss several theoretical and practical implications for mobile commerce management.
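Dwell time on an area of interest (AOI) is typically the number of gaze samples falling inside the AOI multiplied by the sample period; the SMI RED 250 samples at 250 Hz. A hypothetical sketch:

```python
import numpy as np

SAMPLE_PERIOD = 1.0 / 250.0  # SMI RED 250 samples gaze at 250 Hz

def dwell_time(gaze_xy, aoi):
    """Total time gaze samples spend inside a rectangular AOI (x0, y0, x1, y1)."""
    g = np.asarray(gaze_xy, dtype=float)
    x0, y0, x1, y1 = aoi
    inside = (g[:, 0] >= x0) & (g[:, 0] <= x1) & (g[:, 1] >= y0) & (g[:, 1] <= y1)
    return float(inside.sum()) * SAMPLE_PERIOD

gaze = [(10, 10), (12, 11), (300, 200), (11, 9)]
print(dwell_time(gaze, (0, 0, 50, 50)))  # 3 samples inside -> 0.012 s
```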

Detection of Abnormal Behavior by Scene Analysis in Surveillance Video (감시 영상에서의 장면 분석을 통한 이상행위 검출)

  • Bae, Gun-Tae;Uh, Young-Jung;Kwak, Soo-Yeong;Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.12C
    • /
    • pp.744-752
    • /
    • 2011
  • In intelligent surveillance systems, various methods for detecting abnormal behavior have been proposed recently. However, most are not robust enough for real scenes, which often contain occlusions, because they assume that individual objects can be tracked. This paper presents a novel method to detect abnormal behavior by analyzing the major motion of the scene, for complex environments in which object tracking cannot work. First, we generate Visual Words and Visual Documents from motion information extracted from the input video and process them with the LDA (Latent Dirichlet Allocation) algorithm, a document analysis technique, to obtain the major motion information (location, magnitude, direction, distribution) of the scene. Using this information, we compare the similarity between motion appearing in the input video and the analyzed major motion, and detect motions that do not match the major motions as abnormal behavior.
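Building a Visual Word from a single motion vector usually means jointly quantizing its location, direction, and magnitude into one discrete symbol that LDA can treat as a "word". The bin counts and thresholds below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def visual_word(x, y, dx, dy, grid=8, n_dirs=8, frame=(320, 240)):
    """Quantize one motion vector into a discrete visual word:
    spatial cell (location) x direction bin x coarse magnitude bin."""
    cx = min(int(x / frame[0] * grid), grid - 1)
    cy = min(int(y / frame[1] * grid), grid - 1)
    angle = np.arctan2(dy, dx) % (2 * np.pi)
    d = int(angle / (2 * np.pi) * n_dirs) % n_dirs
    mag = 0 if np.hypot(dx, dy) < 2.0 else 1   # slow vs. fast motion
    return ((cy * grid + cx) * n_dirs + d) * 2 + mag

print(visual_word(0, 0, 1, 0))  # top-left cell, rightward, slow -> word 0
```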

Multiple Object Tracking for Surveillance System (감시 시스템을 위한 다중 객체 추적)

  • Cho, Yong-Il;Choi, Jin;Yang, Hyun-Seung
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2006.02a
    • /
    • pp.653-659
    • /
    • 2006
  • Multiple object tracking is a field of computer vision concerned with tracking objects of interest within a given video sequence. It is a key enabling technology for a wide range of applications, including surveillance systems, user behavior recognition, sports broadcasting, and video conferencing, which makes it highly important. This paper deals with a method for tracking multiple objects for surveillance purposes. By the nature of surveillance systems, it is difficult to make assumptions about object appearance or motion. We therefore propose a method that uses only simple, intuitive appearance features of objects, such as size, color, and shape, yet can still properly track object positions even when objects partially or completely overlap. The proposed method maintains information about object trajectories in a graph structure, and infers information about the scene by expanding and pruning the graph. Broadly, objects are tracked in two stages: a region level and an object level. At the region level, hypotheses are formed about the regions where each object may be located; at the object level, each hypothesis is verified. The proposed method shows that objects of different shapes can be tracked quickly using only intuitive information. However, because tracking relies only on appearance information, if an object is completely occluded by another object and then overlaps with yet another object, tracking becomes inaccurate. Solving this requires gathering information about the relationship between objects during occlusion, which we leave to future work.
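The region-level/object-level split described above can be sketched as follows: the region level hypothesizes which tracked objects a detected foreground region may contain, leaving appearance-based verification to the object level. A simplified overlap-based sketch (the paper's graph expansion and pruning are omitted; names are hypothetical):

```python
def overlap(a, b):
    """Area of intersection of two boxes given as (x0, y0, x1, y1)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def region_hypotheses(tracks, regions):
    """Region level: for each foreground region, hypothesize which tracks it
    may contain (any track whose last box overlaps the region). The object
    level would then verify each hypothesis with size/color/shape cues."""
    return {r_id: [t_id for t_id, box in tracks.items() if overlap(box, r) > 0]
            for r_id, r in regions.items()}

tracks = {"A": (0, 0, 10, 10), "B": (20, 20, 30, 30)}
regions = {0: (5, 5, 25, 25)}                 # one merged foreground blob
print(region_hypotheses(tracks, regions))      # {0: ['A', 'B']}
```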

A Fast Vision-based Head Tracking Method for Interactive Stereoscopic Viewing

  • Putpuek, Narongsak;Chotikakamthorn, Nopporn
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2004.08a
    • /
    • pp.1102-1105
    • /
    • 2004
  • In this paper, the problem of tracking a viewer's head in a desktop-based interactive stereoscopic display system is considered. A fast and low-cost approach to the problem is important in such a computing environment. The system under consideration uses shutter glasses for stereoscopic display. The proposed method makes use of an image taken from a single low-cost video camera. Using a simple feature extraction algorithm, the points obtained from the image of the user-worn shutter glasses are used to estimate the glasses' center, their local 'yaw' angle as measured with respect to the glasses' center, and their global 'yaw' angle as measured with respect to the camera location. The stereoscopic image synthesis program uses these estimates to interactively adjust the two-view stereoscopic image pair displayed on the computer screen. The adjustment is carried out so that the resulting stereoscopic picture, when viewed from the current user position, provides close-to-real perspective and depth perception. However, because the algorithm and device used are designed for fast computation, the estimate is typically not precise enough to provide flicker-free interactive viewing. An error concealment method is thus proposed to alleviate this problem. The concealment method should be sufficient for applications that do not require a high degree of visual realism and interaction.
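Estimating the glasses' center and local yaw from two detected lens points reduces to a midpoint and an atan2. A minimal sketch under that two-point assumption (function name hypothetical; image x grows right, y grows down):

```python
import math

def glass_pose(left, right):
    """Center and local yaw of the shutter glasses, from the two lens
    centers detected in the camera image."""
    cx = (left[0] + right[0]) / 2.0
    cy = (left[1] + right[1]) / 2.0
    yaw = math.atan2(right[1] - left[1], right[0] - left[0])  # head roll/tilt in image plane
    return (cx, cy), yaw

center, yaw = glass_pose((100, 120), (160, 120))
print(center, math.degrees(yaw))  # (130.0, 120.0) 0.0  -- level head
```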

A Study on the Performance of a Hybrid Daylighting System Using AVR Microcontrollers (AVR Microcontroller를 이용한 하이브리드 자연채광시스템의 성능에 관한 기초연구)

  • Lim, Sang Hoon;Oh, Seung Jin;Kim, Won-Sik;Jeong, Hae-Jun;Chun, Wongee
    • Journal of the Korean Solar Energy Society
    • /
    • v.35 no.6
    • /
    • pp.1-7
    • /
    • 2015
  • This paper deals with the design and operation of a hybrid daylighting system that uses natural and artificial lighting to enhance visual comfort in buildings. The system was developed using an AVR microcontroller for solar tracking in conjunction with dimming controls, which together enable the maximum use of natural daylight and improve energy efficiency in buildings. Experimental results clearly demonstrate the usefulness of the present system, which enhances indoor lighting conditions when sufficient daylight is available and distributes it appropriately in harmony with artificial lighting.
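A dimming control that tops up the available daylight toward a target illuminance can be sketched in a few lines; the target level and fixture capacity below are assumed values, not those of the paper:

```python
def dimming_level(target_lux, daylight_lux, max_artificial_lux=500.0):
    """Fraction (0..1) of artificial-lighting output needed to raise the
    measured daylight contribution up to the target illuminance."""
    deficit = max(0.0, target_lux - daylight_lux)
    return min(1.0, deficit / max_artificial_lux)

print(dimming_level(500, 300))  # 0.4 -- dim fixtures to 40% output
print(dimming_level(500, 600))  # 0.0 -- daylight alone is sufficient
```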

Position Tracking of Underwater Robot for Nuclear Reactor Inspection using Color Information (색상정보를 이용한 원자로 육안검사용 수중로봇의 위치 추적)

  • 조재완;김창회;서용칠;최영수;김승호
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.2259-2262
    • /
    • 2003
  • This paper describes a visual tracking procedure for an underwater mobile robot used to inspect nuclear reactor vessels, where it must find foreign objects such as loose parts. The yellowish body of the underwater robot tends to present a strong contrast with the boron-solute cold water of the reactor vessel, which is tinged with indigo by the Cerenkov effect. In this paper, we locate and track the position of the underwater mobile robot using these two color cues, yellow and indigo. The center-coordinate extraction procedure is as follows. The first step is to segment the underwater robot body from the cold water's indigo background. From the RGB color components of the entire monitoring image taken with the color CCD camera, we select the red component. In the selected red image, we extract the position of the underwater mobile robot using binarization, labeling, and centroid extraction. In an experiment carried out at the Youngkwang Unit 5 reactor vessel, we tracked the center position of the underwater robot submerged near the cold-leg and hot-leg areas, at a depth of about 10 m.
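The red-channel selection, binarization, and centroid steps can be sketched with numpy alone (connected-component labeling is omitted for brevity; names and the threshold are hypothetical):

```python
import numpy as np

def track_centroid(rgb, threshold=128):
    """Select the red channel, binarize it, and return the centroid
    (row, col) of the bright yellowish-body pixels, or None if none pass."""
    red = np.asarray(rgb)[:, :, 0]          # red channel: high for yellow body,
    ys, xs = np.nonzero(red >= threshold)   # low for the indigo background
    if xs.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3, 0] = 200            # a 2x2 "robot" blob in the red channel
print(track_centroid(img))        # (1.5, 1.5)
```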

Height and Position Estimation of Moving Objects using a Single Camera

  • Lee, Seok-Han;Lee, Jae-Young;Kim, Bu-Gyeom;Choi, Jong-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.158-163
    • /
    • 2009
  • In recent years, there has been increased interest in characterizing and extracting 3D information from 2D images for human tracking and identification. In this paper, we propose a single-view framework for robust estimation of height and position. In the proposed method, the 2D features of the target object are back-projected into the 3D scene space, whose coordinate system is given by a rectangular marker. The position and height are then estimated in the 3D space. In addition, the geometric error caused by inaccurate projective mapping is corrected using geometric constraints provided by the marker. The accuracy and robustness of our technique are verified by experimental results on several real video sequences from outdoor environments.
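For intuition only, here is a far simpler model than the paper's marker-based back-projection: a level pinhole camera at a known height can back-project an object's foot point onto the ground plane by similar triangles. All parameters below are assumed, and the model ignores the lens distortion and mapping error the paper corrects for:

```python
def ground_position(px, py, f=800.0, cam_h=2.5):
    """Back-project the image point of an object's foot onto the ground
    plane. (px, py) are offsets from the principal point in pixels;
    py > 0 means below it. f is focal length in pixels, cam_h in meters."""
    if py <= 0:
        raise ValueError("foot point must lie below the principal point")
    Z = f * cam_h / py    # depth along the optical axis, by similar triangles
    X = px * Z / f        # lateral offset on the ground
    return X, Z

print(ground_position(100.0, 200.0))  # (1.25, 10.0): 1.25 m right, 10 m away
```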

Alphabetical Gesture Recognition using HMM (HMM을 이용한 알파벳 제스처 인식)

  • Yoon, Ho-Sub;Soh, Jung;Min, Byung-Woo
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1998.10c
    • /
    • pp.384-386
    • /
    • 1998
  • The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). Many methods for hand gesture recognition using visual analysis have been proposed, including syntactical analysis, neural networks (NN), and Hidden Markov Models (HMM). In our research, HMMs are applied to alphabetical hand gesture recognition. The preprocessing stage of the proposed approach consists of three procedures: hand localization, hand tracking, and gesture spotting. The hand localization procedure detects candidate regions on the basis of skin color and motion in an image, using color histogram matching and time-varying edge difference techniques. The hand tracking algorithm finds the centroid of the moving hand region, connects those centroids, and thus produces a trajectory. For gesture spotting and the feature database, the proposed approach uses mesh feature codes as the HMM codebook. In our experiments, 1300 alphabetical and 1300 untrained gestures are used for training and testing, respectively. The experimental results demonstrate that the proposed approach yields a high and satisfactory recognition rate for images with different sizes, shapes, and skew angles.
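Scoring a quantized (mesh-feature-code) observation sequence against a trained HMM uses the standard forward algorithm; the recognizer picks the letter model with the highest likelihood. The tiny two-state model below is illustrative, not a trained gesture model:

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the forward algorithm. pi: initial state probabilities,
    A: state transition matrix, B: emission probabilities."""
    alpha = pi * B[:, obs[0]]              # initialize with the first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # propagate and absorb next symbol
    return float(np.log(alpha.sum()))

pi = np.array([1.0, 0.0])                  # always start in state 0
A  = np.array([[0.7, 0.3], [0.0, 1.0]])    # left-to-right transitions
B  = np.array([[0.9, 0.1], [0.2, 0.8]])    # two observable symbols
print(forward_log_likelihood(pi, A, B, [0, 1]))
```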
