• Title/Summary/Keyword: 3D Feature Extraction


Feature Extraction Based on GRFs for Facial Expression Recognition

  • Yoon, Myoong-Young
    • Journal of Korea Society of Industrial Information Systems, v.7 no.3, pp.23-31, 2002
  • In this paper we propose a new feature vector for recognition of facial expressions based on Gibbs distributions, which are well suited for representing spatial continuity. The extracted feature vectors are invariant under translation, rotation, and scaling of a facial expression image. The algorithm for facial expression recognition contains two parts: feature vector extraction and the recognition process. The feature vector consists of modified 2-D conditional moments based on an estimated Gibbs distribution for the facial image. In the recognition phase, we use a discrete left-right HMM, which is widely used in pattern recognition. To evaluate the performance of the proposed scheme, experiments on recognizing four universal expressions (anger, fear, happiness, surprise) were conducted with facial image sequences on a workstation. Experimental results reveal that the proposed scheme achieves a high recognition rate of over 95%.
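The recognition phase above scores each expression with a discrete left-right HMM. As a rough illustration of that scoring step only, the forward algorithm below computes the log-likelihood of a quantized observation sequence under one such HMM; the transition, emission, and observation values are illustrative placeholders, not the paper's trained parameters.

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM
    (initial probs pi, transitions A, emissions B), with per-step
    scaling to avoid numerical underflow."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        log_p += np.log(alpha.sum())
        alpha /= alpha.sum()            # renormalize the scaled forward vector
    return log_p

# A 3-state left-right model: transitions only stay or move forward.
pi = np.array([1.0, 0.0, 0.0])
A = np.array([[0.7, 0.3, 0.0],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])
B = np.array([[0.5, 0.2, 0.2, 0.1],   # emission probs over 4 symbols
              [0.1, 0.5, 0.2, 0.2],
              [0.2, 0.1, 0.2, 0.5]])
ll = forward_log_likelihood(pi, A, B, [0, 1, 3])
```

Classification would then pick the expression whose HMM yields the highest log-likelihood for the observed sequence.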


Hierarchical 3D modeling using disparity-motion relationship and feature points (변이-움직임 관계와 특징점을 이용한 계층적 3차원 모델링)

  • Lee, Ho-Geun;Han, Gyu-Pil;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP, v.39 no.1, pp.9-16, 2002
  • This paper proposes a new 3D modeling technique using the disparity-motion relationship and feature points. To generate a 3D model from a real scene, we generally need to compute the depth of model vertices from a dense correspondence map over the whole image, which takes much time and makes it very difficult to obtain accurate depth. To alleviate these problems, the proposed method needs only the correspondences of some feature points to generate a 3D model of an object, without a dense correspondence map. It consists of three parts: object extraction, feature point extraction, and hierarchical 3D modeling using classified feature points. The method has low complexity and is effective for synthesizing images from virtual views and for expressing the smoothness of plain regions and the sharpness of edges.

Depth Map Estimation Model Using 3D Feature Volume (3차원 특징볼륨을 이용한 깊이영상 생성 모델)

  • Shin, Soo-Yeon;Kim, Dong-Myung;Suh, Jae-Won
    • The Journal of the Korea Contents Association, v.18 no.11, pp.447-454, 2018
  • This paper proposes a depth image generation algorithm for stereo images using a deep learning model composed of CNNs (convolutional neural networks). The proposed algorithm consists of a feature extraction unit, which extracts the main features of each parallax image, and a depth learning unit, which learns the disparity information from the extracted features. First, the feature extraction unit extracts a feature map for each parallax image through the Xception module and the ASPP (atrous spatial pyramid pooling) module, both composed of 2D CNN layers. Then, the feature maps are accumulated into a 3D volume according to the disparity, and the depth image is estimated after passing through the depth learning unit, which learns the depth estimation weights through a 3D CNN. The proposed algorithm estimates the depth of object regions more accurately than other algorithms.
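The step of accumulating per-parallax feature maps into 3D form can be pictured as stacking the left feature map together with the right feature map shifted by each candidate disparity. The sketch below is a generic stereo feature-volume construction under that assumption, not the paper's exact architecture (whose features come from Xception/ASPP modules).

```python
import numpy as np

def build_feature_volume(feat_l, feat_r, max_disp):
    """Concatenate left features with right features shifted by each
    candidate disparity d, giving a volume of shape (max_disp, 2C, H, W).
    Columns with no valid shifted data are left as zeros."""
    C, H, W = feat_l.shape
    vol = np.zeros((max_disp, 2 * C, H, W), dtype=feat_l.dtype)
    for d in range(max_disp):
        vol[d, :C] = feat_l                      # left features, unshifted
        if d == 0:
            vol[d, C:] = feat_r
        else:
            vol[d, C:, :, d:] = feat_r[:, :, :-d]  # right features shifted by d
    return vol
```

A 3D CNN can then slide over this (disparity, channel, height, width) volume to regress the depth map, matching the role of the depth learning unit described above.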

3D Mesh Model Exterior Salient Part Segmentation Using Prominent Feature Points and Marching Plane

  • Hong, Yiyu;Kim, Jongweon
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.3, pp.1418-1433, 2019
  • In computer graphics, 3D mesh segmentation is a challenging research field. This paper presents a 3D mesh model segmentation algorithm that focuses on separating exterior salient parts from the original 3D mesh model based on prominent feature points and a marching plane. To begin with, the proposed approach uses multi-dimensional scaling to extract prominent feature points that reside on the tips of each exterior salient part of a given mesh. Subsequently, a marching plane intersects the 3D mesh, starting its march from each prominent feature point. Through the marching process, local cross sections between the marching plane and the 3D mesh are extracted, and their areas are calculated to represent local volumes of the 3D mesh model. As the boundary region of an exterior salient part generally lies where the local volume suddenly changes greatly, we can simply cut at this location with the marching plane to separate the part from the mesh. We evaluated our algorithm on the Princeton Segmentation Benchmark, and the evaluation results show that it works well for some categories.
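The cut criterion described above, severing the part where the cross-section area suddenly changes greatly, can be illustrated with a simple scan over the per-slice areas produced by the marching plane. The `ratio` threshold here is an invented illustrative parameter; the abstract does not specify the paper's actual change criterion.

```python
def find_cut_index(areas, ratio=2.0):
    """Given cross-section areas ordered along the marching direction,
    return the first index where the area jumps by more than `ratio`
    relative to the previous slice, or None if no such jump occurs."""
    for i in range(1, len(areas)):
        prev = max(areas[i - 1], 1e-9)  # guard against zero-area slices
        if areas[i] / prev > ratio:
            return i
    return None
```

In the algorithm's terms, the returned index marks the slice where the marching plane would cut the exterior salient part away from the main body.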

3-D Object Recognition Using a Feature Extraction Scheme: Open-Ball Operator (Open-Ball 피처 추출 방법에 의한 3차원 물체 인식)

  • Kim, Sung-Soo
    • The Transactions of the Korea Information Processing Society, v.6 no.3, pp.821-831, 1999
  • Recognition of three-dimensional objects with convexities and concavities is a hard and challenging problem. This paper presents a feature extraction method for three-dimensional objects for the purpose of classification. The new method not only provides invariance to scale, translation, and rotation in R^3, but also distinguishes three-dimensional model objects with concavities and convexities by measuring relative similarity in an information space to which a set of characteristic features of the objects is mapped.


3D Point Cloud Reconstruction Technique from 2D Image Using Efficient Feature Map Extraction Network (효율적인 feature map 추출 네트워크를 이용한 2D 이미지에서의 3D 포인트 클라우드 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE, v.26 no.3, pp.408-415, 2022
  • In this paper, we propose a 3D point cloud reconstruction technique from 2D images using an efficient feature map extraction network. The originality of the proposed method is as follows. First, we use a new feature map extraction network that is about 27% more memory-efficient than existing techniques. The proposed network does not reduce the feature map size in the middle of the deep learning network, so important information required for 3D point cloud reconstruction is not lost. We solved the memory increase caused by the non-reduced feature size by reducing the number of channels and by efficiently configuring the network to be shallow. Second, by preserving the high-resolution features of the 2D image, the accuracy can be improved over that of the conventional technique: the feature map extracted from the non-reduced image contains more detailed information, which further improves the reconstruction accuracy of the 3D point cloud. Third, we use a divergence loss that does not require shooting information. Because not only the 2D image but also the shooting angle is normally required for learning, the dataset must contain this detailed information, which makes it difficult to construct. In this paper, the reconstruction accuracy of the 3D point cloud is instead increased by increasing the diversity of information through randomness, without additional shooting information. To objectively evaluate the performance of the proposed method, we use the ShapeNet dataset and the same protocol as the comparative papers: the CD value of the proposed method is 5.87, the EMD value is 5.81, and the FLOPs value is 2.9G. The lower the CD and EMD values, the more closely the reconstructed 3D point cloud approaches the original, and the lower the FLOPs, the less computation the deep learning network requires. Therefore, the CD, EMD, and FLOPs evaluation results of the proposed method show about a 27% improvement in memory and a 6.3% improvement in accuracy compared with the methods in other papers, demonstrating objective performance.
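The CD (Chamfer Distance) metric cited above measures how closely the reconstructed point cloud matches the original. A minimal brute-force version is sketched below; note that normalization conventions (mean vs. sum, squared vs. unsquared distances) vary between papers, so this is one common variant rather than the exact formula used in the work above.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3):
    mean squared nearest-neighbor distance, accumulated in both directions."""
    # Pairwise squared distances, shape (N, M).
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

EMD (Earth Mover's Distance), the other accuracy metric quoted, instead requires an optimal one-to-one matching between the two point sets and is typically computed with an approximate solver.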

Geometric Feature Recognition Directly from Scanned Points using Artificial Neural Networks (신경회로망을 이용한 측정 점으로부터 특징형상 인식)

  • 전용태;박세형
    • Journal of the Korean Society for Precision Engineering, v.17 no.6, pp.176-184, 2000
  • Reverse engineering (RE) is a process of creating computer-aided design (CAD) models from the scanned data of an existing part acquired using 3D position scanners. This paper proposes a novel methodology for extracting geometric features directly from a set of 3D scanned points, which utilizes the concepts of feature-based technology and artificial neural networks (ANNs). The use of ANNs has enabled the development of a flexible feature-based RE application that can be trained to deal with various features. The following four main tasks were investigated and implemented: (1) data reduction; (2) edge detection; (3) ANN-based feature recognition; (4) feature extraction. The approach was validated on a variety of real industrial components. The test results show that the developed feature-based RE application is suitable for reconstructing prismatic features such as blocks, pockets, steps, slots, holes, and bosses, which are very common and crucial in mechanical engineering products.


A Study on Feature Selection and Feature Extraction for Hyperspectral Image Classification Using Canonical Correlation Classifier (정준상관분류에 의한 하이퍼스펙트럴영상 분류에서 유효밴드 선정 및 추출에 관한 연구)

  • Park, Min-Ho
    • KSCE Journal of Civil and Environmental Engineering Research, v.29 no.3D, pp.419-431, 2009
  • The core of this study is finding an efficient band selection or extraction method that discovers the optimal spectral bands when applying the canonical correlation classifier (CCC) to hyperspectral data. The optimal bands under each separability decision technique are selected using the MultiSpec software developed by Purdue University. Six separability decision techniques are used: Divergence, Transformed Divergence, Bhattacharyya, Mean Bhattacharyya, Covariance Bhattacharyya, and Noncovariance Bhattacharyya. For feature extraction, the PCA and MNF transformations are performed with the ERDAS Imagine and ENVI software. To compare and assess the effects of feature selection and feature extraction, land cover classification is performed by CCC. The overall accuracy of CCC using the initially selected 60 bands is 71.8%; the highest classification accuracy, 79.0%, is acquired by running CCC after applying Noncovariance Bhattacharyya. In conclusion, only the Noncovariance Bhattacharyya separability decision method proved valuable as a feature selection algorithm for CCC-based hyperspectral image classification. The classification accuracy using the other feature selection and extraction algorithms, except Divergence, rather declined in CCC.
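As an illustration of the Bhattacharyya-family separability measures listed above, the standard Bhattacharyya distance between two Gaussian class models (mean m, covariance C) can be computed directly; the variants in the list (Mean, Covariance, Noncovariance) keep only subsets of its two terms. The class statistics below are made-up toy values, not bands from the study.

```python
import numpy as np

def bhattacharyya(m1, c1, m2, c2):
    """Bhattacharyya distance between two Gaussian class distributions:
    B = 1/8 (m1-m2)^T [(C1+C2)/2]^-1 (m1-m2)
      + 1/2 ln( det((C1+C2)/2) / sqrt(det(C1) det(C2)) )."""
    c = 0.5 * (c1 + c2)
    dm = m1 - m2
    mean_term = 0.125 * dm @ np.linalg.solve(c, dm)      # class-mean separation
    cov_term = 0.5 * np.log(np.linalg.det(c) /
                            np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return mean_term + cov_term
```

Band selection would then keep the bands (or band subsets) that maximize such pairwise class separability before running the classifier.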

Facial Feature Localization from 3D Face Image using Adjacent Depth Differences (인접 부위의 깊이 차를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동;심재창
    • Journal of KIISE:Software and Applications, v.31 no.5, pp.617-624, 2004
  • This paper describes a new facial feature localization method that uses adjacent depth differences (ADD) on the 3D facial surface. In general, humans perceive how deep or shallow a region is by comparing the neighboring depth information among regions of an object: the larger the depth difference between regions, the easier each region is to recognize. Using this principle, facial feature extraction becomes easier, faster, and more reliable. 3D range images are used as input, and ADD are obtained by differencing two range values separated by a fixed coordinate distance, in both the horizontal and vertical directions. The ADD and the input image are analyzed to extract facial features, and the nose region, the most prominent feature of the 3D facial surface, is localized effectively and accurately.
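The ADD computation itself is a simple offset difference over the range image, which can be sketched directly; the offset `step` is an illustrative parameter standing in for the paper's coordinate distance.

```python
import numpy as np

def adjacent_depth_differences(depth, step):
    """Absolute depth differences between pixels `step` apart,
    in the horizontal and vertical directions of a range image."""
    add_h = np.abs(depth[:, step:] - depth[:, :-step])  # shape (H, W-step)
    add_v = np.abs(depth[step:, :] - depth[:-step, :])  # shape (H-step, W)
    return add_h, add_v
```

Regions such as the nose tip, whose depth differs sharply from their neighborhood, produce large ADD responses and can be localized by thresholding these maps.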

Accurate Parked Vehicle Detection using GMM-based 3D Vehicle Model in Complex Urban Environments (가우시안 혼합모델 기반 3차원 차량 모델을 이용한 복잡한 도시환경에서의 정확한 주차 차량 검출 방법)

  • Cho, Younggun;Roh, Hyun Chul;Chung, Myung Jin
    • The Journal of Korea Robotics Society, v.10 no.1, pp.33-41, 2015
  • Recent developments in robotics and intelligent vehicles have raised interest in autonomous driving and advanced driver assistance systems. Fully automatic parking in particular is one of the key issues for intelligent vehicles, and accurate detection of parked vehicles is essential for it. In previous research, many types of sensors have been used for detecting vehicles; 2D LiDAR is popular since it offers accurate range information without preprocessing. The L-shape feature is the most popular 2D feature for vehicle detection; however, it is ambiguous on other objects such as buildings and bushes, which causes misdetections. We therefore propose an accurate vehicle detection method that uses a complete 3D vehicle model in 3D point clouds acquired from a front-inclined 2D LiDAR. The proposed method is decomposed into two steps: vehicle candidate extraction and vehicle detection. By combining the L-shape feature with point cloud segmentation, we extract the objects most likely to be vehicles and apply the 3D model to detect vehicles accurately. The method guarantees high detection performance and provides plentiful information for autonomous parking. To evaluate the method, we use data from various parking situations in complex urban scenes. Experimental results demonstrate its qualitative and quantitative performance.
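The L-shape feature mentioned above arises from the two visible, roughly perpendicular sides of a vehicle in a 2D LiDAR scan. One crude way to test a segmented cluster for an L shape, assuming the 2D points are ordered along the scan and using an invented angle tolerance, is to pick the point farthest from the chord between the segment's endpoints as the corner and check the corner angle:

```python
import numpy as np

def is_l_shape(points, angle_tol_deg=20.0):
    """Crude L-shape test for an ordered 2D point segment (N, 2):
    the point farthest from the endpoint chord is taken as the corner,
    and the angle at the corner must be close to 90 degrees."""
    a, b = points[0], points[-1]
    ab = b - a
    rel = points - a
    # Perpendicular distance of each point from the chord a-b (2D cross product).
    d = np.abs(ab[0] * rel[:, 1] - ab[1] * rel[:, 0]) / np.linalg.norm(ab)
    if d.max() < 1e-6 * np.linalg.norm(ab):
        return False  # collinear points: a wall-like segment, not an L
    corner = points[np.argmax(d)]
    v1, v2 = a - corner, b - corner
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return abs(ang - 90.0) < angle_tol_deg
```

Clusters passing such a test would become vehicle candidates, which the paper's 3D model matching then confirms or rejects.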