• Title/Summary/Keyword: Real-time 3D Feature Extraction

22 search results

Facial Feature Extraction with Its Applications

  • Lee, Minkyu;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery / v.2 no.1 / pp.7-9 / 2015
  • Purpose: In many face-related applications, such as head pose estimation, 3D face modeling, and facial appearance manipulation, robust and fast facial feature extraction is necessary. We present a facial feature extraction method based on shape regression and feature selection for real-time operation. Materials and Methods: The facial features are initialized by a statistical shape model, and the feature shape is then deformed iteratively according to texture patterns selected from a feature pool. Results: We obtain fast and robust facial feature extraction with an error of less than 4% and a processing time of less than 12 ms. The alignment error is measured as the average ratio of pixel difference to inter-ocular distance. Conclusion: The accuracy and processing time of the method are sufficient for facial feature-based applications and can be used for face beautification or 3D face modeling.
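
As a rough illustration of the error metric quoted above, here is a small Python sketch that computes the normalized alignment error as the mean landmark-to-ground-truth pixel distance divided by the inter-ocular distance. The array layout and the eye-corner indices (68-point convention) are assumptions for this sketch, not taken from the paper.

```python
import numpy as np

def alignment_error(pred, gt, left_eye_idx=36, right_eye_idx=45):
    """pred, gt: (N, 2) arrays of predicted / ground-truth landmark positions.

    The eye indices follow the common 68-point landmark convention and are
    an assumption for this sketch."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    inter_ocular = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])
    per_point = np.linalg.norm(pred - gt, axis=1)   # pixel error per landmark
    return per_point.mean() / inter_ocular          # e.g. 0.04 corresponds to "4% error"
```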

Real-time 3D Feature Extraction Combined with 3D Reconstruction (3차원 물체 재구성 과정이 통합된 실시간 3차원 특징값 추출 방법)

  • Hong, Kwang-Jin;Lee, Chul-Han;Jung, Kee-Chul;Oh, Kyoung-Su
    • Journal of KIISE: Software and Applications / v.35 no.12 / pp.789-799 / 2008
  • Gesture recognition has been studied vigorously as a means of human-computer communication in interactive computing environments. Algorithms that use 2D features for feature extraction and comparison are fast, but environmental limitations restrict their recognition accuracy. Algorithms that use 2.5D features provide higher accuracy than 2D features but are sensitive to object rotation, and algorithms that use 3D features are slow because they require 3D object reconstruction as a preprocessing step before feature extraction. In this paper, we propose a method that extracts 3D features while performing 3D object reconstruction in real time. The method generates three kinds of 3D projection maps using a modified GPU-based visual hull generation algorithm, executes only the data-generation steps needed for gesture recognition, and computes the Hu moments corresponding to each projection map. In the experimental results, we compare the computational time of the proposed method with previous methods and show that the proposed method is applicable to real-time gesture recognition.
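
The following sketch illustrates the kind of projection-map feature the abstract describes: a voxel occupancy grid (such as a visual hull) is projected along each axis and the Hu moments of each projection are computed with OpenCV. The grid layout and the toy example are assumptions; the paper's GPU-based visual hull generation is not reproduced here.

```python
import cv2
import numpy as np

def projection_hu_moments(voxels, axis):
    """voxels: boolean (X, Y, Z) occupancy grid, e.g. from a visual hull.

    Projects the volume along one axis and returns its 7 Hu moments."""
    proj = voxels.any(axis=axis).astype(np.uint8) * 255   # 2D projection map
    m = cv2.moments(proj, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    # Log-scale the Hu moments so their magnitudes are comparable.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Toy example: a 32x32x32 grid with a solid block inside.
grid = np.zeros((32, 32, 32), dtype=bool)
grid[8:24, 10:20, 5:30] = True
features = np.concatenate([projection_hu_moments(grid, a) for a in range(3)])
```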

Image Feature-Based Real-Time RGB-D 3D SLAM with GPU Acceleration (GPU 가속화를 통한 이미지 특징점 기반 RGB-D 3차원 SLAM)

  • Lee, Donghwa;Kim, Hyongjin;Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems / v.19 no.5 / pp.457-461 / 2013
  • This paper proposes an image feature-based real-time RGB-D (Red-Green-Blue Depth) 3D SLAM (Simultaneous Localization and Mapping) system. RGB-D data from Kinect-style sensors contain a 2D image and per-pixel depth information. 6-DOF (Degree-of-Freedom) visual odometry is obtained through a 3D-RANSAC (RANdom SAmple Consensus) algorithm using 2D image features and depth data. To speed up feature extraction, the computation is parallelized with GPU acceleration. After a feature manager detects a loop closure, a graph-based SLAM algorithm optimizes the trajectory of the sensor and builds a map as a 3D point cloud.
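
A minimal sketch of one step in this kind of pipeline, under assumptions not stated in the abstract: lifting 2D image features to 3D points using per-pixel depth and a pinhole camera model, which produces the 3D correspondences consumed by RANSAC pose estimation. The intrinsic parameters below are placeholder Kinect-like values, not the paper's calibration.

```python
import numpy as np

# Placeholder Kinect-like intrinsics (fx, fy, cx, cy); real values come from calibration.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def backproject(keypoints_uv, depth_m):
    """keypoints_uv: (N, 2) pixel coordinates; depth_m: (H, W) depth image in meters.

    Returns (M, 3) camera-frame 3D points for the keypoints with valid depth."""
    u = keypoints_uv[:, 0].astype(int)
    v = keypoints_uv[:, 1].astype(int)
    z = depth_m[v, u]
    valid = z > 0                                  # drop pixels with missing depth
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=1)
```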

A 3D Face Reconstruction Method Robust to Errors of Automatic Facial Feature Point Extraction (얼굴 특징점 자동 추출 오류에 강인한 3차원 얼굴 복원 방법)

  • Lee, Youn-Joo;Lee, Sung-Joo;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.1 / pp.122-131 / 2011
  • The 3D morphable shape model, a widely used single-image 3D face reconstruction method, reconstructs an accurate 3D facial shape when 2D facial feature points are correctly extracted from the input face image. However, when a user's cooperation is not available, as in a real-time 3D face reconstruction system, this method is vulnerable to errors in automatic facial feature point extraction. To solve this problem, we automatically classify the extracted facial feature points into two groups, erroneous and correct, and then reconstruct the 3D facial shape using only the correctly extracted feature points. Experimental results show that the 3D reconstruction performance of the proposed method is remarkably improved compared to that of the previous method, which does not consider errors in automatic facial feature point extraction.
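
The sketch below illustrates the general idea of reconstructing shape from only the landmarks classified as correct: morphable-model shape coefficients are fit to the inlier points by regularized least squares. It assumes an orthographic, pre-aligned 2D shape basis and is not the paper's fitting procedure.

```python
import numpy as np

def fit_shape(mean_2d, basis_2d, observed_2d, inlier_mask, reg=1.0):
    """mean_2d:     (L, 2) mean landmark positions of the shape model.
    basis_2d:    (K, L, 2) shape basis, already projected to 2D.
    observed_2d: (L, 2) automatically detected landmarks.
    inlier_mask: (L,) boolean, True for points classified as correct."""
    A = basis_2d[:, inlier_mask, :].reshape(basis_2d.shape[0], -1).T  # (2*Li, K)
    b = (observed_2d - mean_2d)[inlier_mask].reshape(-1)              # (2*Li,)
    k = A.shape[1]
    # Ridge regularization keeps the coefficients plausible when few inliers remain.
    alpha = np.linalg.solve(A.T @ A + reg * np.eye(k), A.T @ b)
    return alpha   # reconstructed shape = mean + sum_k alpha[k] * basis[k]
```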

3D Radar Objects Tracking and Reflectivity Profiling

  • Kim, Yong Hyun;Lee, Hansoo;Kim, Sungshin
    • International Journal of Fuzzy Logic and Intelligent Systems / v.12 no.4 / pp.263-269 / 2012
  • The ability to characterize feature objects from radar readings is often limited when only their still-frame reflectivity, differential reflectivity, and differential phase data are examined. In many cases, a time-series study of these objects' reflectivity profiles is required to properly characterize the feature objects of interest. This paper introduces a novel technique to automatically track multiple 3D radar structures in C- and S-band in real time using Doppler radar and to profile their characteristic reflectivity distributions over time. The extraction of reflectivity profiles from different radar cluster structures is done in three stages: 1. static-frame (zone-linkage) clustering, 2. dynamic-frame (evolution-linkage) clustering, and 3. characterization of clusters through time-series profiles of the reflectivity distribution. The two clustering schemes proposed here are applied to composite multi-layer CAPPI (Constant Altitude Plan Position Indicator) radar data covering an altitude range of 0.25 to 10 km and an area spanning hundreds of thousands of $km^2$. Discrete numerical simulations demonstrate the validity of the proposed technique and show that fast and accurate profiling of the time-series reflectivity distribution of deformable 3D radar structures is achievable.
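
As a generic stand-in for the static-frame clustering stage, the sketch below labels contiguous high-reflectivity regions in a CAPPI volume and summarizes each cluster's reflectivity distribution. The threshold, binning, and use of plain connected-component labeling are assumptions; the paper's zone-linkage and evolution-linkage schemes are not reproduced.

```python
import numpy as np
from scipy import ndimage

def cluster_reflectivity(cappi_dbz, threshold=35.0):
    """cappi_dbz: (Z, Y, X) reflectivity volume in dBZ (stacked CAPPI layers).

    Returns a label volume and a per-cluster normalized reflectivity histogram."""
    mask = cappi_dbz >= threshold                    # keep significant echoes only
    labels, n = ndimage.label(mask)                  # 3D connected components
    profiles = {}
    for k in range(1, n + 1):
        values = cappi_dbz[labels == k]
        hist, _ = np.histogram(values, bins=10, range=(threshold, 70.0))
        profiles[k] = hist / max(hist.sum(), 1)      # reflectivity distribution
    return labels, profiles

# Tracking over time would then link clusters between consecutive frames,
# accumulating each object's reflectivity profile as a time series.
```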

Detecting Complex 3D Human Motions with Body Model Low-Rank Representation for Real-Time Smart Activity Monitoring System

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Dong-Seong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.3 / pp.1189-1204 / 2018
  • Detecting and capturing 3D human structures from intensity-based image sequences is an inherently challenging problem that has attracted the attention of several researchers, especially in real-time activity recognition (Real-AR). Real-AR systems have been significantly enhanced by depth sensors, which provide richer information than the RGB video sensors used in conventional systems. This study proposes a depth-based routine-logging Real-AR system to identify daily human activity routines and turn the surroundings into an intelligent living space. The system consists of data collection with a depth camera, feature extraction based on joint information, and training and recognition of each activity. In addition, the recognition mechanism locates and pinpoints the learned activities and induces routine logs. Evaluation on depth datasets (a self-annotated dataset and MSRAction3D) demonstrated that the proposed system achieves better and more robust recognition rates than state-of-the-art methods. The Real-AR system is intended to be readily accessible and continuously usable in behavior monitoring applications, humanoid-robot systems, and e-medical therapy systems.
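
A small sketch of a common joint-based feature of the kind mentioned above: pairwise 3D distances between skeleton joints for each frame. The joint count follows the Kinect/MSRAction3D 20-joint convention and is an assumption; the paper's exact descriptor may differ.

```python
import numpy as np

def joint_distance_features(joints_xyz):
    """joints_xyz: (J, 3) 3D joint positions for one frame (e.g. J=20 for
    Kinect/MSRAction3D skeletons). Returns the upper-triangular pairwise distances."""
    diff = joints_xyz[:, None, :] - joints_xyz[None, :, :]   # (J, J, 3)
    dist = np.linalg.norm(diff, axis=-1)                     # (J, J)
    iu = np.triu_indices(len(joints_xyz), k=1)
    return dist[iu]                                          # (J*(J-1)/2,)

# A clip-level feature can simply stack the per-frame vectors:
# clip_features = np.stack([joint_distance_features(f) for f in frames])
```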

FPGA Design of a SURF-based Feature Extractor (SURF 알고리즘 기반 특징점 추출기의 FPGA 설계)

  • Ryu, Jae-Kyung;Lee, Su-Hyun;Jeong, Yong-Jin
    • Journal of Korea Multimedia Society / v.14 no.3 / pp.368-377 / 2011
  • This paper describes the hardware structure of a SURF (Speeded Up Robust Features)-based feature point extractor and its FPGA verification results. The SURF algorithm produces scale- and rotation-invariant feature points and descriptors that can be used for object recognition, panorama image creation, and 3D image restoration. However, feature point extraction takes approximately 7,200 ms per VGA-resolution frame in an embedded environment with an ARM11 (667 MHz) processor and 128 MB of DDR memory, so real-time operation is not guaranteed. We analyzed the integral image memory access pattern, a key component of the SURF algorithm, to reduce memory accesses and memory usage for real-time operation. With a Virtex-5 FPGA, the feature extractor processes VGA images at 60 frames/s when running at 100 MHz.
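
The integral image is the data structure whose access pattern the hardware above optimizes. The sketch below shows the standard summed-area table and the four-read box sum that SURF's box filters rely on; it illustrates the algorithmic idea only, not the FPGA design.

```python
import numpy as np

def integral_image(img):
    """img: (H, W) grayscale image. Returns an (H+1, W+1) summed-area table."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, top, left, height, width):
    """Sum of img[top:top+height, left:left+width] using only four memory reads."""
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

img = (np.random.rand(480, 640) * 255).astype(np.uint8)   # a VGA-sized test frame
ii = integral_image(img)
assert box_sum(ii, 100, 200, 9, 9) == img[100:109, 200:209].sum()
```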

An Implementation of a Feature Extraction Hardware Accelerator based on Memory Usage Improvement SURF Algorithm (메모리 사용률을 개선한 SURF 알고리즘 특징점 추출기의 하드웨어 가속기 설계)

  • Jung, Chang-min;Kwak, Jae-chang;Lee, Kwang-yeob
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.10a / pp.77-80 / 2013
  • The SURF algorithm extracts feature points and generates descriptors from input images. It is robust to environmental changes such as scale, rotation, illumination, and viewpoint. Because of these properties, it is used in many image processing applications such as object recognition, panorama construction, and 3D image restoration. However, real-time operation is difficult because recognition algorithms such as SURF require a large amount of computation. In this paper, we propose a design for a SURF-based feature extractor and descriptor generator with high memory efficiency. The proposed design reduces memory accesses and memory usage to operate in real time.
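
One generic way to reduce integral-image memory, sketched below, is to buffer only a rolling strip of integral-image rows tall enough for the current box filter instead of the full table. This illustrates the memory-reduction idea under our own assumptions, not the architecture proposed in the paper.

```python
import numpy as np
from collections import deque

def strip_box_filter(img, box_h, box_w):
    """Box sums of size (box_h, box_w), buffering only box_h + 1 integral-image rows."""
    H, W = img.shape
    out = np.zeros((H - box_h + 1, W - box_w + 1), dtype=np.int64)
    rows = deque([np.zeros(W + 1, dtype=np.int64)], maxlen=box_h + 1)
    for y in range(H):
        # Append the next integral-image row; the oldest row is evicted automatically.
        rows.append(rows[-1] + np.concatenate(([0], np.cumsum(img[y]))))
        if len(rows) == box_h + 1:
            top, bot, r = rows[0], rows[-1], y - box_h + 1
            out[r] = bot[box_w:] - bot[:-box_w] - top[box_w:] + top[:-box_w]
    return out

img = np.arange(48).reshape(6, 8)
assert strip_box_filter(img, 3, 3)[0, 0] == img[:3, :3].sum()
```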


Spatial-temporal texture features for 3D human activity recognition using laser-based RGB-D videos

  • Ming, Yue;Wang, Guangchao;Hong, Xiaopeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.3 / pp.1595-1613 / 2017
  • The IR camera and laser-based IR projector provide an effective solution for real-time capture of moving targets in RGB-D videos. Unlike traditional RGB videos, the captured depth videos are not affected by illumination variation. In this paper, we propose a novel feature extraction framework, spatial-temporal texture features for 3D human activity recognition, that describes human activities captured with this optical setup. The spatial-temporal texture feature with depth information is insensitive to illumination and occlusion and is efficient for describing fine motion. The proposed framework begins with video acquisition based on laser projection and video preprocessing with visual background extraction, and obtains spatial-temporal key images. The texture features encoded from the key images are then used to generate discriminative features describing the human activity. Experimental results on different databases and in practical scenarios demonstrate the effectiveness of the proposed algorithm on large-scale data sets.
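
As an example of a spatial texture encoding of the kind described, the sketch below computes a 3x3 local binary pattern (LBP) histogram on a depth key image; a temporal feature can concatenate such histograms over consecutive key images. The LBP variant and binning are assumptions, not the paper's exact encoding.

```python
import numpy as np

def lbp_histogram(depth_key_image):
    """depth_key_image: (H, W) depth or motion-energy image.

    Returns a 256-bin normalized histogram of 3x3 local binary patterns."""
    img = depth_key_image.astype(np.float32)
    center = img[1:-1, 1:-1]
    # The 8 neighbors, ordered clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neighbor >= center).astype(np.uint8) << bit)
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float32)
    return hist / hist.sum()
```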