• Title/Summary/Keyword: Vision based tracking


Real-time Finger Gesture Recognition (실시간 손가락 제스처 인식)

  • Park, Jae-Wan; Song, Dae-Hyun; Lee, Chil-Woo
    • 한국HCI학회:학술대회논문집 / 2008.02a / pp.847-850 / 2008
  • Today, people increasingly operate machines through mutual, natural communication. In vision-based HCI (Human-Computer Interaction) systems, techniques that recognize and track fingers are therefore important. To segment the finger, this paper uses background subtraction, which separates foreground from background and works both for a limited background and for a cluttered one. The fingertip is then recognized by template matching against identified fingertip images, and the recognized finger is tracked so that the tracked motion can be compared with the identified gestures. By first obtaining a region of interest and performing the subtraction-image processing and template matching only within that region, the processing and response times are reduced, and on this basis we propose a technique that recognizes gestures more effectively.

  • PDF
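
As a rough illustration of the pipeline described in the abstract above (background subtraction followed by template matching restricted to a region of interest), the following Python/OpenCV sketch shows one plausible arrangement; the function name, threshold value, and ROI handling are assumptions, not the authors' implementation.

```python
import cv2

def find_fingertip(frame_gray, background_gray, template, roi=None, thresh=30):
    """Locate a fingertip template inside the foreground of one frame.

    frame_gray, background_gray, template: 8-bit grayscale images.
    roi: optional (x, y, w, h) search window; restricting the search is the
         speed-up emphasised in the abstract.
    """
    # Foreground mask by frame/background subtraction.
    diff = cv2.absdiff(frame_gray, background_gray)
    _, fg_mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)

    # Keep only foreground pixels, then optionally crop to the ROI.
    fg = cv2.bitwise_and(frame_gray, frame_gray, mask=fg_mask)
    x0, y0 = 0, 0
    if roi is not None:
        x0, y0, w, h = roi
        fg = fg[y0:y0 + h, x0:x0 + w]

    # Template matching on the (much smaller) search area.
    scores = cv2.matchTemplate(fg, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    return (max_loc[0] + x0, max_loc[1] + y0), max_val  # position, confidence
```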

Motion-Recognizing Game Controller with Tactile Feedback (동작인식 및 촉감제공 게임 컨트롤러)

  • Jeon, Seok-Hee; Kim, Sang-Ki; Park, Gun-Hyuk; Han, Gab-Jong; Lee, Sung-Kil; Choi, Seung-Moon; Choi, Seung-Jin; Eoh, Hong-Jun
    • 한국HCI학회:학술대회논문집 / 2008.02a / pp.1-6 / 2008
  • This paper proposes a game controller that provides user motion input and tactile feedback in addition to traditional button-type input. The device uses both an accelerometer and an infrared camera to estimate 3D position and to recognize user motion, and the information from the accelerometer and the camera is integrated for better performance. Various tactile sensations are presented using a voice-coil type vibrator. We apply the proposed controller to a motion-based game and validate its usability.

  • PDF
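
The abstract does not specify how the accelerometer and infrared-camera data are combined; a complementary filter is one common way to fuse a drifting inertial estimate with an absolute camera measurement, sketched below purely for illustration (the class name and blending weight are assumptions).

```python
import numpy as np

class ComplementaryFusion:
    """Blend a drifting accelerometer dead-reckoning estimate with absolute
    camera position fixes (illustrative only)."""

    def __init__(self, alpha=0.98):
        self.alpha = alpha              # weight on the high-rate inertial estimate
        self.position = np.zeros(3)
        self.velocity = np.zeros(3)

    def update(self, accel, dt, camera_position=None):
        # accel: gravity-compensated acceleration in the world frame (m/s^2).
        self.velocity += np.asarray(accel, dtype=float) * dt
        inertial_pos = self.position + self.velocity * dt

        if camera_position is None:
            self.position = inertial_pos            # camera dropout: inertial only
        else:
            # The camera fix corrects the drift that integration accumulates.
            self.position = (self.alpha * inertial_pos
                             + (1.0 - self.alpha) * np.asarray(camera_position, dtype=float))
        return self.position
```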

Image segmentation algorithm based on weight information (가중치 정보를 이용한 영상 분할 알고리즘)

  • Kim, Sun-jib; Park, Byung-Joon
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.9 no.5 / pp.472-477 / 2016
  • How accurately objects are detected is the most critical factor in the performance of a video surveillance system: to track an object accurately, the background and the object must first be separated accurately. Unlike human vision, however, it is not easy for the system itself to distinguish the object from the background and assess the situation exactly. If the background and the object can be detected accurately, objects can be tracked accurately, the reliability of the system increases, and the success of the overall production system is strongly affected. In this paper, we propose a method that separates the background and the object more precisely by determining changes in the background environment more accurately.
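
The abstract above does not reveal the weighting scheme itself, so the sketch below only illustrates the underlying background/object separation step it builds on, using a weighted running-average background model (cv2.accumulateWeighted); the learning rate and threshold are assumed values.

```python
import cv2
import numpy as np

def segment_objects(frames, learning_rate=0.02, diff_thresh=25):
    """Yield (frame, foreground_mask) pairs for an iterable of BGR frames."""
    background = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if background is None:
            background = gray.copy()
        # The background adapts slowly, so gradual illumination changes are
        # absorbed while moving objects remain in the difference image.
        cv2.accumulateWeighted(gray, background, learning_rate)
        diff = cv2.absdiff(gray, background)
        _, mask = cv2.threshold(diff.astype(np.uint8), diff_thresh, 255,
                                cv2.THRESH_BINARY)
        yield frame, mask
```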

In-Car Video Stabilization using Focus of Expansion

  • Kim, Jin-Hyun; Baek, Yeul-Min; Yun, Jea-Ho; Kim, Whoi-Yul
    • Journal of Korea Multimedia Society / v.14 no.12 / pp.1536-1543 / 2011
  • Video stabilization is a very important step for vision-based applications in vehicular technology, because the accuracy of applications such as obstacle distance estimation, lane detection, and tracking is affected by bumpy roads and vehicle oscillation. Conventional methods suffer either from the zooming effect caused by camera movement or from the motion of surrounding vehicles. To overcome this problem, we propose a novel video stabilization method using the FOE (Focus of Expansion). When a vehicle moves, optical flow diffuses from the FOE, and the FOE coincides with an epipole. If the vehicle vibrates, the position of the epipole changes between two consecutive frames. Therefore, we stabilize the video using a motion vector estimated from the change of the epipoles. Experimental results show that the proposed method is more efficient than conventional methods.
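
A simplified sketch of the core idea, estimating the epipole (FOE) between two consecutive frames from tracked features and using its frame-to-frame change as the stabilizing shift, is given below; the sparse-feature tracker and parameter values are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def epipole(prev_gray, curr_gray):
    """Estimate the epipole (FOE) between two consecutive grayscale frames."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                       qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    ok = status.ravel() == 1
    p0 = pts_prev[ok].reshape(-1, 2)
    p1 = pts_curr[ok].reshape(-1, 2)

    # The epipole e satisfies F e = 0, i.e. it is the right null vector of F.
    F, _ = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.99)
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]
    return e[:2] / e[2]                      # epipole in pixel coordinates

def stabilizing_shift(epipole_prev_pair, epipole_curr_pair):
    # Following the abstract, vibration appears as a change of the epipole
    # between consecutive frame pairs; shifting the frame by that change
    # compensates the oscillation.
    return epipole_prev_pair - epipole_curr_pair
```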

Hand Gesture Recognition Using HMM(Hidden Markov Model) (HMM(Hidden Markov Model)을 이용한 핸드 제스처인식)

  • Ha, Jeong-Yo; Lee, Min-Ho; Choi, Hyung-Il
    • Journal of Digital Contents Society / v.10 no.2 / pp.291-298 / 2009
  • In this paper we propose a vision-based real-time hand gesture recognition method. To extract skin color, we convert the RGB color space to the YCbCr color space and use the CbCr components for the final extraction. A practical center-point extraction algorithm is applied to find the center of the extracted hand region. A Kalman filter is used to track the hand region, and an HMM (Hidden Markov Model), trained on six types of hand gesture images, is used to recognize the gesture. We demonstrate the effectiveness of our algorithm through experiments.

  • PDF
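
The front end of this pipeline (skin segmentation in YCbCr space and extraction of the hand-region centre that would be fed to the Kalman tracker) can be sketched as follows; the CbCr thresholds are commonly quoted skin-colour bounds, not values from the paper.

```python
import cv2
import numpy as np

# Typical (Y, Cr, Cb) skin range -- an assumption; tune per camera and lighting.
SKIN_LOW = np.array([0, 133, 77], dtype=np.uint8)
SKIN_HIGH = np.array([255, 173, 127], dtype=np.uint8)

def hand_center(frame_bgr):
    """Return the centroid of the skin-coloured region and the skin mask."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, SKIN_LOW, SKIN_HIGH)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None, mask                      # no skin-coloured region found
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    return (int(cx), int(cy)), mask            # centre fed to the Kalman tracker
```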

A Study on Object Tracking for Autonomous Mobile Robot using Vision Information (비젼 정보를 이용한 이동 자율로봇의 물체 추적에 관한 연구)

  • Kang, Jin-Gu; Lee, Jang-Myung
    • Journal of the Korea Society of Computer and Information / v.13 no.2 / pp.235-242 / 2008
  • An autonomous mobile robot is a very useful system for performing various tasks in dangerous environments, because it outperforms a fixed-base manipulator in both operational workspace and efficiency. A method is proposed for estimating the position of an object in the Cartesian coordinate system from the geometric relationship between the real object and the image captured by a 2-DOF active camera mounted on the mobile robot. Using this position estimate, an optimal path for the autonomous mobile robot from its current position to the estimated object position is determined with homogeneous matrices. Finally, the joint parameters required to realize the desired displacement are calculated so that the mobile robot can be controlled to capture the object. The effectiveness of the proposed method is demonstrated by simulation and real experiments with the autonomous mobile robot.

  • PDF
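
The homogeneous-matrix bookkeeping the abstract refers to amounts to chaining world-to-robot, robot-to-camera, and pan/tilt transforms; the sketch below illustrates that chain with assumed frame conventions and parameter names.

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rot_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

def translate(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def object_in_world(robot_pose_T, camera_offset_xyz, pan, tilt, point_in_camera):
    """Map a 3D point expressed in the camera frame into the world frame.

    robot_pose_T: 4x4 world->robot homogeneous transform.
    camera_offset_xyz: camera mounting position on the robot base (metres).
    pan, tilt: the 2-DOF active camera joint angles (radians).
    """
    T_robot_camera = translate(*camera_offset_xyz) @ rot_z(pan) @ rot_y(tilt)
    p = np.append(np.asarray(point_in_camera, dtype=float), 1.0)   # homogeneous
    return (robot_pose_T @ T_robot_camera @ p)[:3]
```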

Vision-based Real-time Velocity Detection Method (비젼 베이스 실시간 속도 검출 방법)

  • Kim Beom-Seok; Park Sung-Il; Ko Young-Hyuk
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2006.05a / pp.301-304 / 2006
  • Unlike the fixed-camera methods used previously, the method proposed in this paper measures the speed and the number of vehicles directly from video. Vehicles driven at 50 km/h, 80 km/h, and 90 km/h were recorded on video tape, and by taking the times at which each vehicle crosses the 'begin-line mark' and the 'end-line mark' and computing speed from the elapsed time and the known distance, speeds of 47.57 km/h, 81.20 km/h, and 90.00 km/h were obtained while tracking the vehicles in the video.

  • PDF
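
The speed figures quoted above reduce to distance divided by the time elapsed between the begin-line and end-line marks; a minimal sketch of that computation from frame indices follows (the frame rate and mark distance are assumed inputs).

```python
def speed_kmh(begin_frame, end_frame, fps, distance_m):
    """Average speed of a vehicle between the two marked lines.

    begin_frame / end_frame: frame numbers at which the vehicle crosses the
    'begin-line mark' and the 'end-line mark'.
    fps: video frame rate; distance_m: real-world distance between the marks.
    """
    elapsed_s = (end_frame - begin_frame) / fps
    return (distance_m / elapsed_s) * 3.6      # m/s -> km/h

# Example: a vehicle covering 25 m in 30 frames of 30 fps video
# takes 1.0 s, i.e. 25 m/s = 90 km/h.
```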

Error Correction Scheme in Location-based AR System Using Smartphone (스마트폰을 이용한 위치정보기반 AR 시스템에서의 부정합 현상 최소화를 위한 기법)

  • Lee, Ju-Yong; Kwon, Jun-Sik
    • Journal of Digital Contents Society / v.16 no.2 / pp.179-187 / 2015
  • The spread of smartphones has created various kinds of content, and among them AR applications using Location Based Services (LBS) are in wide demand. In this paper, we propose an error correction algorithm for a location-based Augmented Reality (AR) system that uses computer vision technology in the Android environment. The method detects initial features with the SURF (Speeded Up Robust Features) algorithm to minimize mismatch and reduce computation, tracks the detected features, and applies this in the mobile environment. GPS data provide the location information, and the gyro sensor and G-sensor provide the pose and direction information. However, accumulated errors in the location information cause a mismatch in which the virtual object is not fixed in place, which cannot be accepted as complete AR. Because AR requires heavy computation, implementation in a mobile environment presents many difficulties. The proposed approach minimizes performance degradation in mobile environments, is relatively simple to implement, and can be useful for a variety of existing systems in a mobile environment.
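
A sketch of the detect-then-track pattern the abstract describes is shown below; note that SURF requires an OpenCV build with the contrib/nonfree modules, so ORB is used as a stand-in fallback here (an assumption, not the paper's choice).

```python
import cv2

def detect_points(frame_gray):
    """Detect feature points once and return them in the Nx1x2 float32 layout
    expected by the Lucas-Kanade tracker."""
    try:
        detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # contrib/nonfree build
    except (AttributeError, cv2.error):
        detector = cv2.ORB_create(nfeatures=500)                      # free fallback (assumption)
    keypoints = detector.detect(frame_gray, None)
    return cv2.KeyPoint_convert(keypoints).reshape(-1, 1, 2)

def track_points(prev_gray, curr_gray, prev_pts):
    # Once detected, points are tracked frame to frame, which is cheaper than
    # re-detecting and is how the abstract keeps the per-frame cost low.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    ok = status.ravel() == 1
    return prev_pts[ok], curr_pts[ok]
```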

Anomalous Event Detection in Traffic Video Based on Sequential Temporal Patterns of Spatial Interval Events

  • Ashok Kumar, P.M.; Vaidehi, V.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.1 / pp.169-189 / 2015
  • Detection of anomalous events from video streams is a challenging problem in many video surveillance applications. One such application that has received significant attention from the computer vision community is traffic video surveillance. In this paper, a Lossy Count based Sequential Temporal Pattern mining approach (LC-STP) is proposed for detecting spatio-temporal abnormal events (such as a traffic violation at a junction) from sequences of video streams. The proposed approach relies mainly on spatial abstractions of each object and mines frequent temporal patterns in a sequence of video frames to form regular temporal patterns. To detect each object in every frame, the input video is first pre-processed by applying Gaussian Mixture Models. After the detection of foreground objects, tracking is carried out using block motion estimation with the three-step search method. The primitive events of each object are represented by assigning spatial and temporal symbols corresponding to its location and time information. These primitive events are analyzed to form a temporal pattern over a sequence of video frames, representing the temporal relations between the primitive events of the various objects. This is repeated for each window of sequences, and the support of each temporal sequence is obtained with LC-STP to discover regular patterns of normal events. Events deviating from these patterns are identified as anomalies. Unlike traditional frequent itemset mining methods, the proposed method generates maximal frequent patterns without candidate generation. Furthermore, experimental results show that the proposed method performs well and can detect video anomalies in real traffic video data.
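
Only the pre-processing stage named in the abstract (Gaussian-mixture background subtraction to obtain the foreground objects per frame) is sketched below; the block-motion tracking and the LC-STP pattern mining are specific to the paper and are not reproduced, and all parameter values are assumptions.

```python
import cv2

def foreground_boxes(video_path, min_area=500):
    """Yield (frame, bounding_boxes) for moving objects in a traffic video."""
    cap = cv2.VideoCapture(video_path)
    mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                             detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = mog.apply(frame)
        # Shadows are marked with value 127 by MOG2; keep confident foreground only.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]
        yield frame, boxes
    cap.release()
```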

Target-free vision-based approach for vibration measurement and damage identification of truss bridges

  • Dong Tan; Zhenghao Ding; Jun Li; Hong Hao
    • Smart Structures and Systems / v.31 no.4 / pp.421-436 / 2023
  • This paper presents a vibration displacement measurement and damage identification method for a space truss structure based on its vibration videos. The Features from Accelerated Segment Test (FAST) algorithm is combined with an adaptive threshold strategy to detect high-quality feature points within a Region of Interest (ROI) around each node of the truss structure. These points are then tracked by the Kanade-Lucas-Tomasi (KLT) algorithm along the video frame sequence to obtain the vibration displacement time histories. For cases in which the image plane is not parallel to the structural plane of the truss, the scale factors cannot be applied directly, so those videos are processed with a homography transformation. After scale-factor adaptation, the tracking results are expressed in physical units and compared with ground-truth data. The main operational frequencies and the corresponding mode shapes are identified by Subspace Stochastic Identification (SSI) from the obtained vibration displacement responses and compared with ground-truth data. Structural damage is quantified by elemental stiffness reductions. A Bayesian-inference-based objective function is constructed from the natural frequencies to identify damage by model updating, and the Success-History based Adaptive Differential Evolution with Linear Population Size Reduction (L-SHADE) algorithm is applied to minimize the objective function by tuning the damage parameter of each element. The locations and severities of damage in each case are then identified, and the accuracy and effectiveness are verified by comparing the identified results with the ground-truth data.
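
The measurement front end (FAST corners detected inside a node ROI and tracked with KLT across frames to give pixel displacement histories) can be sketched as below; scale-factor calibration, SSI, and the damage-identification stages are beyond this illustration, and all parameter values are assumptions.

```python
import cv2
import numpy as np

def roi_displacements(frames, roi):
    """Return pixel displacement time histories of FAST corners inside `roi`.

    frames: iterable of grayscale frames; roi: (x, y, w, h) around one truss node.
    """
    x, y, w, h = roi
    it = iter(frames)
    prev = next(it)

    # Detect FAST corners only inside the node ROI of the first frame.
    fast = cv2.FastFeatureDetector_create(threshold=25)
    kps = fast.detect(prev[y:y + h, x:x + w], None)
    pts = np.float32([[kp.pt[0] + x, kp.pt[1] + y] for kp in kps]).reshape(-1, 1, 2)
    origin = pts.copy()

    history = []
    for frame in it:
        # KLT (pyramidal Lucas-Kanade) tracking of the corners frame to frame.
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        ok = status.ravel() == 1
        pts, origin = pts[ok], origin[ok]          # drop points that were lost
        history.append(np.mean(pts - origin, axis=0).ravel())  # mean (dx, dy)
        prev = frame
    return np.array(history)   # one (dx, dy) per frame, in pixels
```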