• Title/Summary/Keyword: Feature point extraction

Analysis of Weights and Feature Patterns in Popular 2D Deep Neural Networks Models for MRI Image Classification

  • Khagi, Bijen;Kwon, Goo-Rak
    • Journal of Multimedia Information System / v.9 no.3 / pp.177-182 / 2022
  • A deep neural network (DNN) contains variables whose values keep changing during training until they reach the final point of convergence. These variables are the coefficients of a polynomial expression related to the feature extraction process. In general, DNNs operate in multiple 'dimensions' depending on the number of channels and the batch size used for training. However, after feature extraction and before the SoftMax or other classifier, the features are converted from N dimensions into a single vector, where 'N' is the number of activation channels. This conversion usually happens in a fully connected layer (FCL), or dense layer. This reduced 2D representation is the subject of our analysis: we use the FCL, and its trained weights are used for the weight-class correlation analysis. The popular DNN models selected for our study are ResNet-101, VGG-19, and GoogleNet. Each model is used both fine-tuned (with all trained weights initially transferred) and trained from scratch (with no weights transferred). The comparison is then made by plotting the feature distributions and the final FCL weights.
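
As a rough illustration of the flattening and FCL-weight inspection described above, the following Python sketch pulls the final fully connected weights from a pretrained ResNet-101 and the flattened feature vector that feeds them. It assumes a PyTorch/torchvision environment (the paper does not state its framework), and the weight-class correlation plots themselves are not reproduced:

```python
import torch
import torchvision.models as models

# Load a pretrained ResNet-101 (fine-tuning would start from these weights;
# scratch training would use weights=None).
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
model.eval()

# The final fully connected layer (FCL) maps the flattened feature vector to class scores.
fc_weights = model.fc.weight.detach()          # shape (num_classes, num_channels) = (1000, 2048)
print("FCL weight matrix:", tuple(fc_weights.shape))

# Per-class weight statistics, the kind of quantity one would plot for a
# weight-class correlation analysis.
print("per-class mean/std:", fc_weights.mean(dim=1).shape, fc_weights.std(dim=1).shape)

# The 'single vector form' mentioned in the abstract: N-channel activations
# flattened just before the classifier.
backbone = torch.nn.Sequential(*list(model.children())[:-1])   # everything up to the FCL
with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 224, 224))              # dummy image batch
print("flattened feature vector:", tuple(feats.flatten(1).shape))  # (1, 2048)
```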

3D Point Cloud Reconstruction Technique from 2D Image Using Efficient Feature Map Extraction Network (효율적인 feature map 추출 네트워크를 이용한 2D 이미지에서의 3D 포인트 클라우드 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE / v.26 no.3 / pp.408-415 / 2022
  • In this paper, we propose a technique for reconstructing a 3D point cloud from 2D images using an efficient feature map extraction network. The originality of the proposed method is as follows. First, we use a new feature map extraction network that is about 27% more memory-efficient than existing techniques. The proposed network does not downsample the feature map in the middle of the network, so important information required for 3D point cloud reconstruction is not lost. We solve the memory increase caused by the non-reduced image size by reducing the number of channels and by configuring the network to be efficiently shallow. Second, by preserving the high-resolution features of the 2D image, accuracy can be improved over the conventional technique. The feature map extracted from the non-reduced image contains more detailed information than in existing methods, which further improves the reconstruction accuracy of the 3D point cloud. Third, we use a divergence loss that does not require shooting information. When not only the 2D image but also the shooting angle is required for learning, the dataset must contain that detailed information, which makes the dataset difficult to construct. In this paper, the reconstruction accuracy of the 3D point cloud is increased by increasing the diversity of information through randomness, without additional shooting information. Evaluated objectively on the ShapeNet dataset using the same protocol as the comparison papers, the proposed method achieves a CD of 5.87, an EMD of 5.81, and 2.9G FLOPs. Lower CD and EMD values mean the reconstructed 3D point cloud is closer to the original, and fewer FLOPs mean the deep learning network requires less memory. Therefore, the CD, EMD, and FLOPs results show about a 27% improvement in memory and 6.3% in accuracy compared to the methods in other papers, demonstrating the performance objectively.
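
The CD metric quoted above can be illustrated with a minimal NumPy sketch of the Chamfer Distance between two point clouds; this is a generic brute-force version for illustration, not the authors' evaluation code:

```python
# Minimal sketch of the Chamfer Distance (CD) used to score reconstructed
# point clouds against ground truth; lower is better.
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """p, q: (N, 3) and (M, 3) point clouds."""
    # Pairwise squared distances, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbour distance in both directions.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Example: a reconstructed cloud jittered away from the reference cloud.
rng = np.random.default_rng(0)
reference = rng.uniform(-1, 1, size=(1024, 3))
reconstruction = reference + rng.normal(scale=0.01, size=reference.shape)
print("CD:", chamfer_distance(reconstruction, reference))
```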

Development of Robust Feature Detector Using Sonar Data (초음파 데이터를 이용한 강인한 형상 검출기 개발)

  • Lee, Se-Jin;Lim, Jong-Hwan;Cho, Dong-Woo
    • Journal of the Korean Society for Precision Engineering / v.25 no.2 / pp.35-42 / 2008
  • This study introduces a robust feature detector for sonar data from a general fixed-type sonar ring. The detector is composed of a data association filter and a feature extractor. The data association filter removes the false returns frequently produced by sonar sensors, and classifies data collected from various objects and robot positions into groups in which all the data come from the same object. The feature extractor then calculates the geometry of the feature for each group. We show that circle features, as well as line and point features, can be extracted. The proposed method was applied in a real home environment with a real robot.
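
As a hedged illustration of the feature extractor stage (not the authors' implementation), the following sketch fits a line feature to a group of 2D sonar returns that the data association filter has assigned to a single object, using a total-least-squares fit:

```python
# Generic line-feature fit for a group of 2D sonar points from one object.
import numpy as np

def fit_line_feature(points: np.ndarray):
    """points: (N, 2) Cartesian sonar returns from one object.
    Returns (normal angle theta, signed distance rho) of the fitted line."""
    centroid = points.mean(axis=0)
    # Principal direction via SVD of the centred points (total least squares).
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]                          # unit vector along the line
    normal = np.array([-direction[1], direction[0]])
    rho = float(normal @ centroid)             # signed distance from the origin
    theta = float(np.arctan2(normal[1], normal[0]))
    return theta, rho

# Example: noisy returns from a wall segment.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 30)
wall = np.stack([t, 0.5 * t + 1.0], axis=1) + rng.normal(scale=0.02, size=(30, 2))
print(fit_line_feature(wall))
```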

Reference Feature Based Cell Decomposition and Form Feature Recognition (기준 특징형상에 기반한 셀 분해 및 특징형상 인식에 관한 연구)

  • Kim, Jae-Hyun;Park, Jung-Whan
    • Korean Journal of Computational Design and Engineering / v.12 no.4 / pp.245-254 / 2007
  • This research proposes feature extraction algorithms that take STEP AP214 data as input, together with a feature parameterization process that simplifies subsequent design changes and maintenance. The procedure starts by suppressing the blend faces of an input solid model to generate a simplified model, where both constant- and variable-radius blends are considered. Most existing cell decomposition algorithms rely on concave edges and usually require complex procedures and considerable computing time to recompose the cells. The proposed algorithm, which uses reference features, was found to be more efficient in tests on several sample cases. In addition, the algorithm can recognize depression features, which is another strong point compared with existing cell decomposition approaches. The proposed algorithm was implemented on a commercial CAD system and tested with selected industrial product models, along with parameterization of the recognized features for further design changes.

Implementation of the Panoramic System Using Feature-Based Image Stitching (특징점 기반 이미지 스티칭을 이용한 파노라마 시스템 구현)

  • Choi, Jaehak;Lee, Yonghwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.16 no.2 / pp.61-65 / 2017
  • Recently, interest in and research on 360° cameras and 360° image production have been expanding. In this paper, we describe the feature extraction, alignment, and image blending stages that make up a feature-based stitching system, and review the representative algorithm at each stage. The feature-based stitching system was implemented using the OpenCV library. In the implementation results, the two input images differ in brightness, which produces a visible seam in the stitched image. We will study appropriate preprocessing to adjust the brightness values and improve the accuracy and seamlessness of the feature-based stitching system.
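
A minimal OpenCV sketch of such a feature-based stitching pipeline is shown below; it uses ORB features, brute-force matching, and a RANSAC homography as generic stand-ins, since the abstract does not name the specific detector or blending method used, and the file paths are hypothetical:

```python
import cv2
import numpy as np

def stitch_pair(img_left_path: str, img_right_path: str) -> np.ndarray:
    img1 = cv2.imread(img_left_path)    # hypothetical input paths
    img2 = cv2.imread(img_right_path)
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

    # Feature extraction and matching.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

    # Alignment: homography that maps the right image into the left image's frame.
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Naive blending: warp the right image and paste the left image over it
    # (a visible seam remains, as the abstract notes).
    h, w = img1.shape[:2]
    panorama = cv2.warpPerspective(img2, H, (w * 2, h))
    panorama[:h, :w] = img1
    return panorama
```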

Accurate Parked Vehicle Detection using GMM-based 3D Vehicle Model in Complex Urban Environments (가우시안 혼합모델 기반 3차원 차량 모델을 이용한 복잡한 도시환경에서의 정확한 주차 차량 검출 방법)

  • Cho, Younggun;Roh, Hyun Chul;Chung, Myung Jin
    • The Journal of Korea Robotics Society / v.10 no.1 / pp.33-41 / 2015
  • Recent developments in robotics and intelligent vehicles have drawn attention to autonomous driving and advanced driver assistance systems. Fully automatic parking in particular is one of the key capabilities of intelligent vehicles, and accurate detection of parked vehicles is essential to it. In previous research, many types of sensors have been used to detect vehicles; 2D LiDAR is popular since it offers accurate range information without preprocessing. The L-shape feature is the most popular 2D feature for vehicle detection; however, it is ambiguous for other objects such as buildings and bushes, which causes misdetections. We therefore propose an accurate vehicle detection method that uses a complete 3D vehicle model on 3D point clouds acquired from a front-inclined 2D LiDAR. The proposed method consists of two steps: vehicle candidate extraction and vehicle detection. By combining the L-shape feature with point cloud segmentation, we extract objects that are highly likely to be vehicles and apply the 3D model to detect vehicles accurately. The method achieves high detection performance and provides rich information for autonomous parking. To evaluate the method, we use various parking situations from complex urban scene data. Experimental results show the qualitative and quantitative performance of the method.
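
As a rough sketch of the candidate extraction stage, the snippet below clusters an ordered 2D LiDAR scan into object segments by Euclidean gaps; the gap threshold is an illustrative assumption, and the subsequent L-shape test and GMM-based 3D model fitting are not reproduced here:

```python
# Generic gap-based segmentation of an ordered 2D LiDAR scan into object clusters.
import numpy as np

def segment_scan(points: np.ndarray, gap: float = 0.3):
    """points: (N, 2) scan points ordered by bearing. Returns a list of clusters."""
    clusters, current = [], [points[0]]
    for prev, cur in zip(points[:-1], points[1:]):
        if np.linalg.norm(cur - prev) > gap:   # large jump -> a new object starts
            clusters.append(np.array(current))
            current = []
        current.append(cur)
    clusters.append(np.array(current))
    return clusters

# Example: two separated wall-like segments yield two clusters.
seg_a = np.stack([np.linspace(0, 1, 25), np.full(25, 2.0)], axis=1)
seg_b = np.stack([np.linspace(3, 4, 25), np.full(25, 2.0)], axis=1)
print(len(segment_scan(np.vstack([seg_a, seg_b]))))   # -> 2
```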

An Identification and Feature Search System for Scanned Comics (스캔 만화도서 식별 및 특징 검색 시스템)

  • Lee, Sang-Hoon;Choi, Nakyeon;Lee, Sanghoon
    • Journal of KIISE: Databases / v.41 no.4 / pp.199-208 / 2014
  • In this paper, we present an identification and feature search system for scanned comics that takes their content characteristics into account. To create features for the scanned comics, we use a hierarchical symmetry fingerprinting method. The proposed identification and search system is designed to give online service providers, such as webhard services, an immediate identification result over a huge volume of scanned comics. In the simulation section, we analyze the robustness of fingerprint identification against image modifications such as rotation and translation. We also present a database structure for fast matching in the feature point database, and compare its search performance with existing search methods such as full search and most-significant-feature search.

An Efficient Feature Point Detection for Interactive Pen-Input Display Applications (인터액티브 펜-입력 디스플레이 애플리케이션을 위한 효과적인 특징점 추출법)

  • Kim Dae-Hyun;Kim Myoung-Jun
    • Journal of KIISE: Computer Systems and Theory / v.32 no.11_12 / pp.705-716 / 2005
  • Many feature point detection algorithms have been developed in pattern recognition research. However, interactive applications for pen-input displays such as Tablet PCs and LCD tablets have different goals: reliable segmentation across different drawing styles and real-time, on-the-fly feature point detection. This paper presents a curvature estimation method crucial for segmenting freehand pen input. It considers only local shape descriptors, and therefore performs curvature estimation on the fly while the user draws on a pen-input display. The method has been used for pen marking recognition to build a 3D sketch-based modeling application.
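
A hedged sketch of on-the-fly, curvature-based feature point detection on a sampled pen stroke is given below; the turning-angle estimate, window size, and threshold are generic illustrations, not the paper's specific local shape descriptor:

```python
# Detect high-curvature (corner-like) samples on a pen stroke via local turning angle.
import numpy as np

def feature_points(stroke: np.ndarray, k: int = 3, angle_thresh: float = np.deg2rad(45)):
    """stroke: (N, 2) sampled pen positions. Returns indices of high-curvature points."""
    idx = []
    for i in range(k, len(stroke) - k):
        back = stroke[i] - stroke[i - k]       # incoming local direction
        fwd = stroke[i + k] - stroke[i]        # outgoing local direction
        cos = np.dot(back, fwd) / (np.linalg.norm(back) * np.linalg.norm(fwd) + 1e-12)
        turning = np.arccos(np.clip(cos, -1.0, 1.0))
        if turning > angle_thresh:             # sharp turn -> candidate feature point
            idx.append(i)
    return idx

# Example: an L-shaped stroke flags samples around the corner (a real system would
# also apply non-maximum suppression to keep a single corner point).
leg1 = np.stack([np.linspace(0, 1, 20), np.zeros(20)], axis=1)
leg2 = np.stack([np.ones(20), np.linspace(0, 1, 20)], axis=1)
print(feature_points(np.vstack([leg1, leg2])))
```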

A Study on Reducing Learning Time of Deep-Learning using Network Separation (망 분리를 이용한 딥러닝 학습시간 단축에 대한 연구)

  • Lee, Hee-Yeol;Lee, Seung-Ho
    • Journal of IKEEE / v.25 no.2 / pp.273-279 / 2021
  • In this paper, we propose an algorithm that shortens learning time by partitioning the deep learning structure and training the parts individually. The proposed algorithm consists of four processes: setting the network division starting point, feature vector extraction, feature noise removal, and class classification. First, in the division starting-point setting process, the point at which the network structure is split for effective feature vector extraction is determined. Second, in the feature vector extraction process, feature vectors are extracted using previously learned weights, without additional training. Third, in the feature noise removal process, the extracted feature vectors are taken as input and the output value of each class is learned to remove noise from the data. Fourth, in the class classification process, the noise-removed feature vector is fed into a multi-layer perceptron, which outputs the result and is trained. To evaluate the performance of the proposed algorithm, we experimented with the Extended Yale B face database. In the experiments, the proposed algorithm reduced the time required for a single training pass by 40.7% compared with the existing algorithm, and the number of training iterations needed to reach the target recognition rate was also reduced. These results confirm that both the single-pass learning time and the total learning time are improved over the existing algorithm.
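
The core idea of reusing previously learned weights for feature extraction and training only the later part can be sketched as follows. This is a generic frozen-backbone split in PyTorch; the backbone, head sizes, and the paper's four-stage noise-removal pipeline are illustrative assumptions, not its exact setup:

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Frozen feature extractor: previously learned weights, no additional training.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()            # expose the 512-d feature vector
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

# Small classifier trained separately on the extracted features
# (38 classes as an example, matching the Extended Yale B subject count).
head = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 38))
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    with torch.no_grad():
        feats = backbone(images)       # cheap feature extraction, no backprop
    logits = head(feats)               # only the small head is optimized
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example step with dummy data.
print(train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 38, (8,))))
```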

Detecting outliers in segmented genomes of flu virus using an alignment-free approach

  • Daoud, Mosaab
    • Genomics & Informatics / v.18 no.1 / pp.2.1-2.11 / 2020
  • In this paper, we propose a new approach to detecting outliers in a set of segmented genomes of the flu virus, a data set containing a heterogeneous set of sequences. The approach has the following computational phases: feature extraction, which maps sequences into a feature space; an alignment-free distance measure between any two segmented genomes; and a mapping into a distance space to analyze the resulting set of distance values. The approach is implemented in both supervised and unsupervised learning modes. The experiments show robustness in detecting outliers in the segmented genomes of the flu virus.
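
A generic alignment-free comparison can be sketched as below: each sequence is mapped to a k-mer frequency vector (feature extraction) and vectors are compared with a Euclidean distance. The specific feature mapping and distance measure of the paper are not reproduced; k, the distance, and the example sequences are illustrative assumptions:

```python
# Alignment-free sequence comparison via normalized k-mer frequency vectors.
from itertools import product
import numpy as np

def kmer_features(seq: str, k: int = 3) -> np.ndarray:
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:                      # skip k-mers with ambiguous bases
            counts[index[kmer]] += 1
    return counts / max(counts.sum(), 1.0)     # normalized frequency vector

def alignment_free_distance(seq_a: str, seq_b: str, k: int = 3) -> float:
    return float(np.linalg.norm(kmer_features(seq_a, k) - kmer_features(seq_b, k)))

# Example: a larger distance suggests a candidate outlier segment.
print(alignment_free_distance("ACGTACGTAGCTAGCT", "ACGTACGTACGTACGT"))
```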