• Title/Summary/Keyword: feature-based tracking

Search Results: 315

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung; Kim, Young-Bong
    • The Journal of the Korea Contents Association, v.9 no.7, pp.159-170, 2009
  • Extracting expression data and capturing a face image from video is essential for online 3D face animation. Recently there has been much research on vision-based approaches that capture an actor's expression in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and traces a face and its expression data from real-time video input. Our system consists of three steps: face detection, face feature extraction, and face tracing. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression. We extract 10 feature points from the eye and lip areas, considering the FAPs defined in MPEG-4. Then we trace the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can trace the expression data at about 8 fps.
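The face detection step described above (YCbCr skin-color segmentation followed by Haar-cascade verification) can be approximated in a few lines of OpenCV. The sketch below is not the authors' implementation; the Cr/Cb thresholds and the cascade file are common defaults and should be treated as assumptions.

```python
# Minimal sketch of skin-color face detection with Haar verification (OpenCV).
# The Cr/Cb thresholds and the cascade file are assumptions, not the paper's values.
import cv2
import numpy as np

def detect_face(bgr_frame):
    # 1) Skin segmentation in YCrCb space (OpenCV orders channels Y, Cr, Cb).
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    skin_only = cv2.bitwise_and(bgr_frame, bgr_frame, mask=skin_mask)

    # 2) Verify candidate skin regions with a Haar-based face classifier.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(skin_only, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces  # list of (x, y, w, h) boxes

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # real-time video input
    ok, frame = cap.read()
    if ok:
        print(detect_face(frame))
    cap.release()
```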

A Real Time Lane Detection Algorithm Using LRF for Autonomous Navigation of a Mobile Robot (LRF 를 이용한 이동로봇의 실시간 차선 인식 및 자율주행)

  • Kim, Hyun Woo; Hawng, Yo-Seup; Kim, Yun-Ki; Lee, Dong-Hyuk; Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems, v.19 no.11, pp.1029-1035, 2013
  • This paper proposes a real-time lane detection algorithm using an LRF (Laser Range Finder) for autonomous navigation of a mobile robot. There are many technologies for vehicle safety, such as airbags, ABS, and EPS. Real-time lane detection is a fundamental requirement for an automobile system that utilizes information from outside the vehicle. Representative lane recognition methods are vision-based and LRF-based systems. A vision-based system recognizes the three-dimensional environment well only under good image-capturing conditions; unexpected factors such as bad illumination, occlusion, and vibration prevent vision alone from satisfying this fundamental requirement. In this paper, we introduce a three-dimensional lane detection algorithm using an LRF, which is very robust against illumination changes. For three-dimensional lane detection, the difference in laser reflection between the asphalt and the lane marking, which depends on color and distance, is exploited together with feature point extraction. A stable tracking algorithm is also introduced empirically in this research. The performance of the proposed lane detection and tracking algorithm has been verified through real experiments.
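As a rough illustration of the reflectance idea, the sketch below separates lane-marking returns from asphalt returns in a single LRF scan by thresholding the reflection intensity. The scan format, the intensity threshold, and the distance compensation are illustrative assumptions, not the paper's calibration.

```python
# Illustrative sketch: pick lane-marking points out of an LRF scan by
# reflection intensity. Threshold and distance compensation are assumptions.
import numpy as np

def extract_lane_points(ranges, angles, intensities,
                        base_threshold=120.0, distance_gain=2.0):
    """ranges [m], angles [rad], intensities: one scan from the LRF."""
    ranges = np.asarray(ranges, dtype=float)
    angles = np.asarray(angles, dtype=float)
    intensities = np.asarray(intensities, dtype=float)

    # Painted lane markings reflect more strongly than asphalt, but the
    # returned intensity also drops with distance, so lower the threshold
    # for far points (simple linear compensation).
    threshold = base_threshold - distance_gain * ranges
    lane = intensities > threshold

    # Convert the selected polar returns to Cartesian feature points.
    x = ranges[lane] * np.cos(angles[lane])
    y = ranges[lane] * np.sin(angles[lane])
    return np.stack([x, y], axis=1)

# Example with synthetic data: bright returns near the scan edges.
angles = np.linspace(-np.pi / 3, np.pi / 3, 181)
ranges = np.full_like(angles, 4.0)
intensities = np.where(np.abs(angles) > np.pi / 4, 200.0, 60.0)
print(extract_lane_points(ranges, angles, intensities).shape)
```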

An Embedded FAST Hardware Accelerator for Image Feature Detection (영상 특징 추출을 위한 내장형 FAST 하드웨어 가속기)

  • Kim, Taek-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SP, v.49 no.2, pp.28-34, 2012
  • Various feature extraction algorithms are widely applied in real-time image processing applications to extract significant features from images. Feature extraction is usually combined with other image processing algorithms, mainly for tracking and recognition; it supplies feature information to those algorithms and is typically implemented as a preprocessing stage. Nowadays, image processing applications often demand an embedded implementation for real-time processing, which makes it necessary to reduce execution time. Reducing the time spent on feature extraction not only leaves more execution time for the other image processing algorithms but also helps satisfy the real-time requirement. This paper explains the FAST (Features from Accelerated Segment Test) algorithm of E. Rosten and presents an FPGA-based embedded hardware accelerator architecture. The proposed accelerator can be implemented using approximately 2,217 flip-flops, 5,034 LUTs, 2,833 slices, and 18 Block RAMs in a Xilinx Virtex-IV FPGA. In a ModelSim-based simulation, the proposed hardware accelerator takes 3.06 ms to extract 954 features from a 640×480 image, which shows the cost effectiveness of the proposed scheme.
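For readers who want to reproduce the software baseline that such an accelerator speeds up, OpenCV exposes Rosten's FAST detector directly. The threshold below is an arbitrary example value, not a parameter from the paper.

```python
# Software reference for the FAST corner detector (the function the paper
# accelerates in hardware). The threshold value is an arbitrary example.
import cv2
import numpy as np

# Synthetic 640x480 test image with a bright rectangle that produces corners.
img = np.zeros((480, 640), dtype=np.uint8)
cv2.rectangle(img, (200, 150), (440, 330), 255, -1)

# FAST compares each pixel against a 16-pixel Bresenham circle around it and
# keeps it as a corner if a contiguous arc of circle pixels is all brighter
# or all darker than the center by more than `threshold`.
fast = cv2.FastFeatureDetector_create(threshold=25, nonmaxSuppression=True)
keypoints = fast.detect(img, None)
print(f"{len(keypoints)} FAST features detected")
```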

A Hybrid Algorithm for Online Location Update using Feature Point Detection for Portable Devices

  • Kim, Jibum; Kim, Inbin; Kwon, Namgu; Park, Heemin; Chae, Jinseok
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.2, pp.600-619, 2015
  • We propose a cost-efficient hybrid algorithm for online location updates that combines feature point detection with an online trajectory-based sampling algorithm. Our algorithm is designed to minimize the average trajectory error with a minimal number of sample points. The algorithm is composed of three steps. First, we choose corner points from the map as sample points because they are the points most likely to reduce trajectory errors. Second, by employing the online trajectory sampling algorithm, our algorithm detects missing but important sample points to prevent unwanted trajectory errors. The final step improves cost efficiency by eliminating redundant sample points on straight paths. We evaluate the proposed algorithm with real GPS trajectory data for various bus routes and compare it with the existing algorithm. Simulation results show that our algorithm decreases the average trajectory error by 28% and is 29% more cost efficient than the existing algorithm on real GPS trajectory data.
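The third step, dropping redundant sample points on straight paths, can be illustrated with a simple heading-change test. The angle threshold and the point format below are assumptions made for illustration, not the paper's parameters.

```python
# Sketch of step 3: remove sample points that lie on (nearly) straight paths.
# The 10-degree heading-change threshold is an illustrative assumption.
import math

def drop_straight_path_points(points, angle_threshold_deg=10.0):
    """points: list of (x, y) sample points along a trajectory."""
    if len(points) <= 2:
        return list(points)
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        h_in = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
        h_out = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
        turn = abs(math.degrees(h_out - h_in))
        turn = min(turn, 360.0 - turn)          # wrap to [0, 180]
        if turn > angle_threshold_deg:          # keep only real corners
            kept.append(cur)
    kept.append(points[-1])
    return kept

route = [(0, 0), (1, 0), (2, 0.05), (3, 0), (3, 1), (3, 2)]
print(drop_straight_path_points(route))   # interior straight-path points removed
```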

AUTOMATIC SCALE DETECTION BASED ON DIFFERENCE OF CURVATURE

  • Kawamura, Kei; Ishii, Daisuke; Watanabe, Hiroshi
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2009.01a, pp.482-486, 2009
  • Scale-invariant features are effective for retrieving and classifying images. In this study, we analyze scale-invariant planar curve features for describing 2D shapes. Scale-space filtering is used to determine contour structures at different scales; however, it is difficult to track significant points across scales. In mathematics, curvature is considered a fundamental feature of a planar curve, but the curvature of a digitized planar curve depends on the scale. Therefore, automatic scale detection is required before curvature analysis can be used in practice. We propose a technique for automatic scale detection based on the difference of curvature. Once the curvature values are normalized with respect to scale, we can calculate the difference in curvature values between scales. An appropriate scale and its position are then detected simultaneously, which avoids the tracking problem. Appropriate scales and their positions can be detected with high accuracy. An advantage of the proposed method is that the detected significant points do not need to lie on the same contour. The validity of the proposed method is confirmed by experimental results.
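A minimal sketch of the core computation, under the assumption that the contour is given as a sampled closed curve: smooth the curve at several Gaussian scales, compute a scale-normalized curvature, and examine the difference between adjacent scales. The normalization and the choice of scales are assumptions, not the paper's exact formulation.

```python
# Sketch: scale-normalized curvature of a digitized closed curve at several
# Gaussian scales, and its difference between adjacent scales.
# The normalization (multiplying by sigma) and the scale set are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature_at_scale(x, y, sigma):
    # Smooth and differentiate the closed contour at scale sigma.
    xs = gaussian_filter1d(x, sigma, mode="wrap")
    ys = gaussian_filter1d(y, sigma, mode="wrap")
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)
    return sigma * kappa          # scale-normalized curvature (assumption)

t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
x, y = np.cos(t), np.sin(t) + 0.1 * np.sin(7 * t)   # wavy test contour

scales = [2.0, 4.0, 8.0, 16.0]
curv = np.array([curvature_at_scale(x, y, s) for s in scales])
diff = np.abs(np.diff(curv, axis=0))    # curvature difference between scales

# The (scale index, contour position) with the largest difference is a
# candidate for the appropriate scale and its position.
idx = np.unravel_index(np.argmax(diff), diff.shape)
print("scale pair", scales[idx[0]], "->", scales[idx[0] + 1], "at point", idx[1])
```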


A Close Contact Tracing Method Based on Bluetooth Signals Applicable to Ship Environments

  • Qianfeng Lin; Jooyoung Son
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.2, pp.644-662, 2023
  • Outbreaks of COVID-19 still occur across the world, and ships increase the risk of worldwide transmission of the virus. Close contact tracing remains an effective way to reduce the risk of transmission, so close contact tracing in ship environments has become a research topic. The Exposure Notifications API (Application Programming Interface) can be used to determine the locations where close contacts were encountered on ships. The locations of close contacts are estimated from these encounter points, and risky areas in the ship can be calculated from them. Tracking close contacts is possible with Bluetooth technology and without the Internet: because Bluetooth signals are strong at close range, the signal can be used to judge the proximity between detecting devices. This property makes it possible to trace close contacts in ship environments. In this paper, we propose a method for tracing close contacts and showing the risky areas in a ship environment by combining beacons with the Exposure Notification API over Bluetooth. The method does not require an Internet connection for tracing close contacts and can protect the personal information of close contacts.
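The proximity judgment rests on the fact that Bluetooth RSSI rises sharply at close range. A common way to turn an RSSI reading into a rough distance is the log-distance path-loss model shown below; the calibrated TX power and path-loss exponent are assumptions and would need to be measured for a steel ship environment.

```python
# Sketch: rough distance from a Bluetooth RSSI reading using the
# log-distance path-loss model. tx_power (RSSI at 1 m) and the path-loss
# exponent n are assumptions; both must be calibrated on board.
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, n=2.5):
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def is_close_contact(rssi_dbm, threshold_m=2.0):
    return rssi_to_distance(rssi_dbm) <= threshold_m

for rssi in (-50, -65, -80):
    print(rssi, "dBm ->", round(rssi_to_distance(rssi), 2), "m,",
          "close contact" if is_close_contact(rssi) else "not close")
```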

Detection of Optical Flows on the Trajectories of Feature Points Using the Cellular Nonlinear Neural Networks (셀룰라 비선형 네트워크를 이용한 특징점 궤적 상에서 Optical Flow 검출)

  • Son, Hon-Rak; Kim, Hyeong-Suk
    • Journal of the Institute of Electronics Engineers of Korea CI, v.37 no.6, pp.10-21, 2000
  • A Cellular Nonlinear Networks (CNN) structure for the Distance Transform (DT) and a robust optical flow detection algorithm based on the DT are proposed. For some applications of optical flow, such as target tracking and camera ego-motion computation, correct optical flow at a few feature points is more useful than unreliable flow at every pixel. The proposed algorithm detects optical flow only on the trajectories of the feature points. The translation lengths and directions of feature movements are detected on the trajectories of feature points, over which a Distance Transform field is developed. The robustness resulting from the use of the Distance Transform and the ease of hardware implementation with local analog circuits are properties of the proposed structure. To verify the performance of the proposed structure and algorithm, simulations have been performed on various images under different noise environments.
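A software sketch of the idea, not the analog CNN circuit: build a Distance Transform field from the feature points of the next frame and read off, for each feature in the current frame, how far and in which direction the nearest feature moved. The use of scipy's Euclidean distance transform is an illustrative substitution for the CNN-based DT.

```python
# Sketch (software stand-in for the CNN circuit): use a Distance Transform of
# the next frame's feature map to find each feature's translation length and
# direction. scipy's EDT here replaces the CNN DT for illustration only.
import numpy as np
from scipy.ndimage import distance_transform_edt

def feature_displacements(prev_points, next_points, shape):
    # Binary map of next-frame features; the EDT gives, for every pixel, the
    # distance to (and indices of) the nearest next-frame feature.
    feature_map = np.ones(shape, dtype=bool)
    for r, c in next_points:
        feature_map[r, c] = False
    dist, (idx_r, idx_c) = distance_transform_edt(feature_map,
                                                  return_indices=True)
    flows = []
    for r, c in prev_points:
        nr, nc = idx_r[r, c], idx_c[r, c]
        length = dist[r, c]                                 # translation length
        direction = np.degrees(np.arctan2(nr - r, nc - c))  # movement direction
        flows.append(((r, c), (nr, nc), length, direction))
    return flows

prev_pts = [(50, 50), (120, 200)]
next_pts = [(54, 57), (118, 205)]
for flow in feature_displacements(prev_pts, next_pts, (240, 320)):
    print(flow)
```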


FPGA Implementation of SURF-based Feature extraction and Descriptor generation (SURF 기반 특징점 추출 및 서술자 생성의 FPGA 구현)

  • Na, Eun-Soo; Jeong, Yong-Jin
    • Journal of Korea Multimedia Society, v.16 no.4, pp.483-492, 2013
  • SURF is an algorithm that extracts feature points and generates their descriptors from input images; it is used in many applications such as object recognition, tracking, and panorama construction. Although SURF is known to be robust to changes in scale, rotation, and viewpoint, it is hard to implement in real time because of its complex and repetitive computations. In our experiment on a 3.3 GHz Pentium, it took 240 ms to extract feature points and create descriptors from a VGA image containing about 1,000 feature points, which means that a software implementation cannot meet the real-time requirement, especially in embedded systems. In this paper, we present a hardware architecture that computes the SURF algorithm very fast while consuming minimal hardware resources. Two key concepts of our architecture are parallelism (for repetitive computations) and efficient line memory usage (obtained by analyzing memory access patterns). As a result of FPGA synthesis on a Xilinx Virtex-5 LX330, the design occupies 101,348 LUTs and 1,367 KB of on-chip memory, giving a performance of 30 frames per second at a 100 MHz clock.
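For comparison with the timing figures above, the software baseline can be reproduced with OpenCV's contrib build. SURF lives in `cv2.xfeatures2d` and may require a non-free build, so treat the snippet as an assumption about your local OpenCV installation rather than part of the paper's hardware design.

```python
# Software SURF baseline (what the FPGA design accelerates). Requires an
# OpenCV build that includes the non-free xfeatures2d module; that
# availability is an assumption about the local installation.
import time
import cv2
import numpy as np

# "frame_vga.png" is a hypothetical 640x480 test image; fall back to noise.
img = cv2.imread("frame_vga.png", cv2.IMREAD_GRAYSCALE)
if img is None:
    img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
start = time.perf_counter()
keypoints, descriptors = surf.detectAndCompute(img, None)
elapsed_ms = (time.perf_counter() - start) * 1000.0

shape = None if descriptors is None else descriptors.shape
print(f"{len(keypoints)} keypoints, descriptors {shape}, {elapsed_ms:.1f} ms")
```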

Target Object Image Extraction from 3D Space using Stereo Cameras

  • Yoo, Chae-Gon; Jung, Chang-Sung; Hwang, Chi-Jung
    • Proceedings of the IEEK Conference, 2002.07c, pp.1678-1680, 2002
  • Stereo matching techniques are used in many practical fields such as satellite image analysis and computer vision. In this paper, we suggest a method to extract a target object image from a complicated background; for example, a human face image can be extracted from a random background. The method can be applied to computer vision tasks such as security systems, dressing simulation using the extracted human face, and 3D modeling. Much research on stereo matching has been performed; conventional approaches can be categorized into area-based and feature-based methods. In this paper, we start from an area-based method and apply area tracking using a scanning window. Coarse depth information is then used in an area merging process based on area searching data. Finally, we produce the target object image.
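A minimal sketch of the area-based idea using OpenCV's block-matching stereo: compute a coarse disparity map and keep the region whose disparity (i.e., nearness) exceeds a threshold. The block-matching parameters, file names, and the disparity threshold are illustrative assumptions.

```python
# Sketch: coarse area-based stereo matching, then segment the near (target)
# object by thresholding disparity. Parameters are illustrative assumptions.
import cv2
import numpy as np

def extract_target(left_gray, right_gray, disparity_threshold=16.0):
    # Block matching gives a coarse disparity (depth) map.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Pixels with large disparity are close to the cameras: treat them as the
    # target object area and mask out the background.
    mask = (disparity > disparity_threshold).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    return cv2.bitwise_and(left_gray, left_gray, mask=mask)

# "left.png" / "right.png" are hypothetical rectified stereo images.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
if left is not None and right is not None:
    cv2.imwrite("target_only.png", extract_target(left, right))
```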


Learning Spatio-Temporal Topology of a Multiple Cameras Network by Tracking Human Movement (사람의 움직임 추적에 근거한 다중 카메라의 시공간 위상 학습)

  • Nam, Yun-Young; Ryu, Jung-Hun; Choi, Yoo-Joo; Cho, We-Duke
    • Journal of KIISE: Computing Practices and Letters, v.13 no.7, pp.488-498, 2007
  • This paper presents a novel approach for representing the spatio-temporal topology of a camera network with overlapping and non-overlapping fields of view (FOVs) in a Ubiquitous Smart Space (USS). The topology is determined by tracking moving objects and establishing object correspondence across multiple cameras. To track people successfully in multiple camera views, we used the Merge-Split (MS) approach for object occlusion within a single camera and a grid-based approach for extracting accurate object features. In addition, we considered the appearance of people and the transition time between entry and exit zones for tracking objects across the blind regions of multiple cameras with non-overlapping FOVs. The main contribution of this paper is to estimate the transition times between the various entry and exit zones and to represent the camera topology graphically as an undirected weighted graph using the transition probabilities.
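The topology estimation can be pictured as counting exit-to-entry transitions between zones and turning the counts into edge weights of an undirected graph. The sketch below uses plain dictionaries; the zone names, time window, and probability estimate are illustrative assumptions, not the paper's learning procedure.

```python
# Sketch: learn an undirected weighted zone-transition graph from observed
# exit/entry events. Zone names, the time window, and the probability
# estimate are illustrative assumptions.
from collections import defaultdict

# (person_id, zone, event, timestamp_s); event is "exit" or "entry".
observations = [
    (1, "cam1_exit_A", "exit", 10.0), (1, "cam2_entry_B", "entry", 14.2),
    (2, "cam1_exit_A", "exit", 30.0), (2, "cam2_entry_B", "entry", 33.8),
    (3, "cam2_exit_C", "exit", 40.0), (3, "cam3_entry_D", "entry", 51.0),
]

MAX_TRANSITION_S = 20.0            # ignore implausibly long blind-region gaps
counts = defaultdict(int)
times = defaultdict(list)

exits = {}                          # most recent exit event per person
for person, zone, event, t in sorted(observations, key=lambda o: o[3]):
    if event == "exit":
        exits[person] = (zone, t)
    elif person in exits:
        exit_zone, exit_t = exits.pop(person)
        dt = t - exit_t
        if 0.0 < dt <= MAX_TRANSITION_S:
            edge = tuple(sorted((exit_zone, zone)))   # undirected edge
            counts[edge] += 1
            times[edge].append(dt)

total = sum(counts.values())
for edge, c in counts.items():
    mean_dt = sum(times[edge]) / len(times[edge])
    print(edge, f"p={c / total:.2f}", f"mean transition {mean_dt:.1f}s")
```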