• Title/Summary/Keyword: Feature-Based Sparse Method

Performance Analysis of Optimization Method and Filtering Method for Feature-based Monocular Visual SLAM (특징점 기반 단안 영상 SLAM의 최적화 기법 및 필터링 기법 성능 분석)

  • Jeon, Jin-Seok;Kim, Hyo-Joong;Shim, Duk-Sun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.68 no.1
    • /
    • pp.182-188
    • /
    • 2019
  • Autonomous mobile robots need SLAM (simultaneous localization and mapping) to estimate their own location and simultaneously build a map of their surroundings. To achieve visual SLAM, it is necessary to form an algorithm that detects and extracts feature points from camera images and estimates the camera pose and the 3D positions of the features. In this paper, we propose the MPROSAC algorithm, which combines MSAC and PROSAC, and compare the performance of the optimization method and the filtering method for feature-based monocular visual SLAM. Sparse Bundle Adjustment (SBA) is used for the optimization method and the extended Kalman filter for the filtering method.
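
As a rough illustration of the robust-estimation ingredient, the sketch below shows an MSAC-style scoring loop (truncated squared-residual cost) for fitting a line to noisy 2D points; the data, threshold, and iteration count are illustrative assumptions, and the paper's MPROSAC would additionally order hypotheses by match quality as in PROSAC.

```python
import numpy as np

def msac_line_fit(points, n_iters=200, threshold=1.0, seed=0):
    """MSAC: like RANSAC, but hypotheses are scored with a truncated
    squared-residual cost rather than a plain inlier count."""
    rng = np.random.default_rng(seed)
    best_cost, best_model = np.inf, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        normal = np.array([q[1] - p[1], p[0] - q[0]])   # normal of the line through p, q
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue
        normal /= norm
        offset = normal @ p
        residuals = points @ normal - offset            # signed point-line distances
        cost = np.minimum(residuals ** 2, threshold ** 2).sum()  # truncated MSAC cost
        if cost < best_cost:
            best_cost, best_model = cost, (normal, offset)
    return best_model, best_cost

# Toy usage: points near y = x plus a few gross outliers.
rng = np.random.default_rng(1)
pts = np.column_stack([np.linspace(0, 10, 50)] * 2) + rng.normal(scale=0.05, size=(50, 2))
pts[:5] = rng.uniform(-10, 10, size=(5, 2))
model, cost = msac_line_fit(pts)
print("line normal and offset:", model, "cost:", cost)
```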

Comparisons of Linear Feature Extraction Methods (선형적 특징추출 방법의 특성 비교)

  • Oh, Sang-Hoon
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.4
    • /
    • pp.121-130
    • /
    • 2009
  • In this paper, feature extraction methods, which form one approach to reducing the dimensionality of high-dimensional data, are empirically investigated. We selected the traditional PCA (Principal Component Analysis), ICA (Independent Component Analysis), NMF (Non-negative Matrix Factorization), and sNMF (Sparse NMF) for comparison. ICA yields features similar to those of the simple cells of V1, NMF implements a "parts-based representation in the brain", and sNMF is an improved version of NMF. In order to visually investigate the extracted features, handwritten digit images are used. The extracted features are also used to train multi-layer perceptrons for a recognition test. The characteristics of each feature extraction method will be useful when applying feature extraction to many real-world problems.
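
A minimal sketch of this kind of comparison, using scikit-learn's 8x8 digits data and a small MLP; the sparse-NMF variant is only approximated here by an L1-penalized NMF (the `alpha_W` parameter needs scikit-learn >= 1.0), which is an assumption rather than the paper's exact sNMF.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, FastICA, NMF
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)              # 8x8 handwritten digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

extractors = {
    "PCA": PCA(n_components=30),
    "ICA": FastICA(n_components=30, max_iter=1000, random_state=0),
    "NMF": NMF(n_components=30, init="nndsvda", max_iter=500, random_state=0),
    # Rough stand-in for sparse NMF: an L1 penalty on the basis matrix.
    "sNMF-like": NMF(n_components=30, init="nndsvda", max_iter=500,
                     alpha_W=0.1, l1_ratio=1.0, random_state=0),
}

for name, extractor in extractors.items():
    clf = make_pipeline(extractor, MLPClassifier(hidden_layer_sizes=(64,),
                                                 max_iter=500, random_state=0))
    clf.fit(X_train, y_train)
    print(f"{name:10s} test accuracy: {clf.score(X_test, y_test):.3f}")
```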

Feature selection for text data via sparse principal component analysis (희소주성분분석을 이용한 텍스트데이터의 단어선택)

  • Won Son
    • The Korean Journal of Applied Statistics
    • /
    • v.36 no.6
    • /
    • pp.501-514
    • /
    • 2023
  • When analyzing high dimensional data such as text data, if we input all the variables as explanatory variables, statistical learning procedures may suffer from over-fitting problems. Furthermore, computational efficiency can deteriorate with a large number of variables. Dimensionality reduction techniques such as feature selection or feature extraction are useful for dealing with these problems. The sparse principal component analysis (SPCA) is one of the regularized least squares methods which employs an elastic net-type objective function. The SPCA can be used to remove insignificant principal components and identify important variables from noisy observations. In this study, we propose a dimension reduction procedure for text data based on the SPCA. Applying the proposed procedure to real data, we find that the reduced feature set maintains sufficient information in text data while the size of the feature set is reduced by removing redundant variables. As a result, the proposed procedure can improve classification accuracy and computational efficiency, especially for some classifiers such as the k-nearest neighbors algorithm.
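
A minimal sketch of SPCA-based word selection on a toy corpus; the corpus, component count, and the rule of keeping words that receive a nonzero loading are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus standing in for the paper's text data (illustrative only).
docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock prices rose sharply today",
    "the market fell as prices dropped",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs).toarray()

spca = SparsePCA(n_components=2, alpha=1.0, random_state=0)
spca.fit(X)

# Keep only words that receive a nonzero loading in some sparse component.
vocab = np.array(vectorizer.get_feature_names_out())
selected = vocab[np.any(spca.components_ != 0, axis=0)]
print("selected words:", selected)
```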

Post-Processing for JPEG-Coded Image Deblocking via Sparse Representation and Adaptive Residual Threshold

  • Wang, Liping;Zhou, Xiao;Wang, Chengyou;Jiang, Baochen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.3
    • /
    • pp.1700-1721
    • /
    • 2017
  • The problem of blocking artifacts is very common in block-based image and video compression, especially at very low bit rates. In this paper, we propose a post-processing method for JPEG-coded image deblocking via sparse representation and an adaptive residual threshold. The method consists of three steps. First, we obtain the dictionary by online dictionary learning on the compressed images. The dictionary is then modified using the histogram of oriented gradients (HOG) feature descriptor and K-means clustering. Second, an adaptive residual threshold for orthogonal matching pursuit (OMP) is proposed and used for sparse coding in combination with blind image blocking assessment. Finally, to take advantage of the human visual system (HVS), the edge regions of the deblocked image are further refined using the edge regions of the compressed image. The experimental results show that the proposed method preserves more texture and edge information while reducing blocking artifacts.
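
The sketch below shows plain OMP with a residual-norm stopping threshold, which is the quantity the paper's adaptive threshold controls; the per-patch adaptation, dictionary learning, HOG/K-means refinement, and HVS-guided edge step are not reproduced, and the sizes are illustrative.

```python
import numpy as np

def omp(D, y, tol, max_atoms=None):
    """Orthogonal matching pursuit that stops once the residual norm drops
    below `tol`. D: dictionary with unit-norm atoms as columns, shape (n, K)."""
    n_atoms = D.shape[1]
    max_atoms = max_atoms or n_atoms
    support = []
    coef = np.zeros(n_atoms)
    residual = y.astype(float).copy()
    while np.linalg.norm(residual) > tol and len(support) < max_atoms:
        k = int(np.argmax(np.abs(D.T @ residual)))    # most correlated atom
        if k in support:                              # no progress; stop
            break
        support.append(k)
        sub = D[:, support]
        c, *_ = np.linalg.lstsq(sub, y, rcond=None)   # re-fit on the current support
        residual = y - sub @ c
        coef[:] = 0.0
        coef[support] = c
    return coef

# Toy usage: recover a 3-sparse code of a patch-like signal (illustrative sizes).
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)
true = np.zeros(128)
true[[3, 40, 90]] = [1.5, -2.0, 0.8]
coef = omp(D, D @ true, tol=1e-6)
print("recovered support:", np.nonzero(coef)[0])
```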

Person Re-identification using Sparse Representation with a Saliency-weighted Dictionary

  • Kim, Miri;Jang, Jinbeum;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.4
    • /
    • pp.262-268
    • /
    • 2017
  • Intelligent video surveillance systems have been developed to monitor wide areas and find specific target objects using a large-scale database. However, person re-identification presents some challenges, such as pose changes and occlusions. To solve these problems, this paper presents an improved person re-identification method using sparse representation and saliency-based dictionary construction. The proposed method consists of three parts: i) feature description based on salient colors and textures for dictionary elements, ii) orthogonal atom selection using cosine similarity to deal with pose and viewpoint changes, and iii) measurement of reconstruction error to rank the gallery images corresponding to a probe object. The proposed method performs well because robust descriptors used as dictionary atoms are generated by weighting salient features, and dictionary atoms are selected so as to reduce the excessive redundancy that lowers accuracy. The proposed method can therefore be applied in large-scale database surveillance systems to search for a specific object.
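
A minimal sketch of the sparse-representation ranking step: the probe is sparse-coded over gallery descriptors used as dictionary atoms, and identities are ranked by class-wise reconstruction error. The saliency weighting and cosine-similarity atom selection are not reproduced, and the random descriptors are placeholders.

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def rank_gallery(probe, gallery_feats, gallery_ids, n_nonzero=5):
    """Rank gallery identities for one probe by class-wise reconstruction error.
    gallery_feats: (n_atoms, dim) descriptors used as dictionary atoms."""
    D = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    code = sparse_encode(probe[None, :], D, algorithm="omp",
                         n_nonzero_coefs=n_nonzero)[0]
    errors = {}
    for pid in np.unique(gallery_ids):
        mask = (gallery_ids == pid)
        recon = code[mask] @ D[mask]          # reconstruction from this identity's atoms only
        errors[pid] = np.linalg.norm(probe - recon)
    return sorted(errors, key=errors.get)     # best-matching identities first

# Toy usage with random descriptors (illustrative only).
rng = np.random.default_rng(0)
gallery = rng.normal(size=(20, 32))
ids = np.repeat(np.arange(5), 4)              # 5 identities, 4 images each
probe = gallery[7] + 0.05 * rng.normal(size=32)
print("ranking:", rank_gallery(probe, gallery, ids))
```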

Enhanced Stereo Matching Algorithm based on 3-Dimensional Convolutional Neural Network (3차원 합성곱 신경망 기반 향상된 스테레오 매칭 알고리즘)

  • Wang, Jian;Noh, Jackyou
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.16 no.5
    • /
    • pp.179-186
    • /
    • 2021
  • For stereo matching based on deep learning, the design of the network structure is crucial to the calculation of the matching cost, and the high computational cost of convolutional neural networks in image processing also needs to be addressed. In this paper, a stereo matching method using a loss volume that is sparse in the disparity dimension is proposed. A sparse 3D loss volume is constructed by translating the right-view feature map with a wide step length, which reduces the video memory and computing resources required by the 3D convolution module several-fold. To improve accuracy, nonlinear up-sampling of the matching loss in the disparity dimension is carried out using a multi-category output, and the model is trained with a combination of two loss functions. Compared with the benchmark algorithm, the proposed algorithm not only improves accuracy but also shortens the running time by about 30%.
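
A minimal PyTorch sketch of a cost (loss) volume sampled sparsely in the disparity dimension by shifting the right-view features with a wide step, followed by a single 3D convolution; shapes, step size, and channel counts are illustrative assumptions, and the nonlinear up-sampling back to full disparity resolution is not shown.

```python
import torch
import torch.nn as nn

def build_sparse_cost_volume(left, right, max_disp=64, step=4):
    """Concatenation cost volume sampled every `step` disparities.
    left, right: feature maps of shape (B, C, H, W)."""
    B, C, H, W = left.shape
    disps = range(0, max_disp, step)
    volume = left.new_zeros(B, 2 * C, len(disps), H, W)
    for i, d in enumerate(disps):
        if d == 0:
            volume[:, :C, i] = left
            volume[:, C:, i] = right
        else:
            volume[:, :C, i, :, d:] = left[:, :, :, d:]
            volume[:, C:, i, :, d:] = right[:, :, :, :-d]
    return volume

# Toy usage: the sparse volume has max_disp // step slices instead of max_disp,
# cutting the memory and compute of the 3D convolution stage accordingly.
left = torch.randn(1, 8, 32, 64)
right = torch.randn(1, 8, 32, 64)
vol = build_sparse_cost_volume(left, right)            # (1, 16, 16, 32, 64)
agg = nn.Conv3d(16, 1, kernel_size=3, padding=1)(vol)  # one cost-aggregation layer
print(agg.shape)                                       # torch.Size([1, 1, 16, 32, 64])
```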

Fault Diagnosis of Wind Power Converters Based on Compressed Sensing Theory and Weight Constrained AdaBoost-SVM

  • Zheng, Xiao-Xia;Peng, Peng
    • Journal of Power Electronics
    • /
    • v.19 no.2
    • /
    • pp.443-453
    • /
    • 2019
  • As core components of transmission systems, converters are very prone to failure. To improve the accuracy of fault diagnosis for wind power converters, a fault feature extraction method combining the wavelet transform and compressed sensing theory is proposed, and an improved AdaBoost-SVM is used to diagnose wind power converters. The three-phase output current signal is selected as the research object and is processed by the wavelet transform to reduce signal noise. The wavelet approximation coefficients are dimensionality-reduced to obtain measurement signals based on compressive sensing theory. A sparse vector is obtained by the orthogonal matching pursuit algorithm, from which the fault feature vector is extracted. The fault feature vectors are input to the improved AdaBoost-SVM classifier to realize fault diagnosis. Simulation results show that this method can effectively diagnose faults of the power transistors in converters and improve the precision of fault diagnosis.
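
A minimal sketch of the feature-extraction chain described (wavelet approximation, random compressed-sensing measurement, OMP sparse recovery) on a synthetic current signal, using PyWavelets and scikit-learn; the measurement matrix, sparsity level, and signal are assumptions, and the AdaBoost-SVM classification stage is omitted.

```python
import numpy as np
import pywt
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# Toy stand-in for one phase of the converter output current.
t = np.linspace(0, 0.1, 1024)
current = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=t.size)

# 1) Wavelet decomposition; keep the approximation coefficients (noise-reduced).
approx = pywt.wavedec(current, "db4", level=3)[0]

# 2) Compressed-sensing measurement with a random Gaussian matrix.
n = approx.size
m = n // 4                                    # compressed dimension (assumed)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ approx

# 3) Recover a sparse vector from the measurements with OMP; this vector would
#    serve as the fault feature (the AdaBoost-SVM stage is not shown here).
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10)
omp.fit(Phi, y)
feature = omp.coef_
print("nonzero feature entries:", np.count_nonzero(feature))
```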

Multiscale Spatial Position Coding under Locality Constraint for Action Recognition

  • Yang, Jiang-feng;Ma, Zheng;Xie, Mei
    • Journal of Electrical Engineering and Technology
    • /
    • v.10 no.4
    • /
    • pp.1851-1863
    • /
    • 2015
  • In this paper, to address the problem that the traditional bag-of-features model ignores the spatial relationships of local features in human action recognition, we propose a Multiscale Spatial Position Coding under Locality Constraint method. Specifically, to describe these spatial relationships, we propose a mixed feature combining a motion feature and a multi-spatial-scale configuration. To utilize temporal information between features, sub spatio-temporal volumes (sub-STVs) are built. Next, the pooled features of the sub-STVs are obtained via max-pooling. In the classification stage, the Locality-Constrained Group Sparse Representation is adopted to utilize the intrinsic group information of the sub-STV features. Experimental results on the KTH, Weizmann, and UCF Sports datasets show that our action recognition system outperforms recently published recognition systems based on local spatio-temporal features.
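
A minimal sketch of locality-constrained coding of local descriptors followed by max-pooling within one sub-STV; the codebook and descriptors are random placeholders, and the paper's multiscale position coding and group sparse representation classifier are not reproduced.

```python
import numpy as np

def llc_code(x, codebook, k=5, lam=1e-4):
    """Locality-constrained coding of one descriptor x against a codebook.
    codebook: (n_atoms, dim), x: (dim,). Returns a sparse code of length n_atoms."""
    dists = np.linalg.norm(codebook - x, axis=1)
    idx = np.argsort(dists)[:k]               # k nearest atoms (locality constraint)
    B = codebook[idx]                          # (k, dim) local bases
    z = B - x                                  # shift bases to the descriptor
    C = z @ z.T                                # local covariance
    C += lam * np.trace(C) * np.eye(k)         # regularize for numerical stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                               # enforce the sum-to-one constraint
    code = np.zeros(len(codebook))
    code[idx] = w
    return code

def max_pool(codes):
    """Max-pool the codes of all descriptors inside one sub spatio-temporal volume."""
    return np.max(codes, axis=0)

# Toy usage: descriptors inside one sub-STV, pooled into a single vector.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 64))          # learned visual words (placeholder)
descriptors = rng.normal(size=(30, 64))        # local descriptors within a sub-STV
codes = np.array([llc_code(f, codebook) for f in descriptors])
pooled = max_pool(codes)                       # (256,) sub-STV representation
print(pooled.shape)
```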

Topic-based Multi-document Summarization Using Non-negative Matrix Factorization and K-means (비음수 행렬 분해와 K-means를 이용한 주제기반의 다중문서요약)

  • Park, Sun;Lee, Ju-Hong
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.4
    • /
    • pp.255-264
    • /
    • 2008
  • This paper proposes a novel method using K-means and Non-negative Matrix Factorization (NMF) for topic-based multi-document summarization. NMF decomposes the weighted term-by-sentence matrix into two sparse non-negative matrices: a semantic feature matrix and a semantic variable matrix. The obtained semantic features are intuitively comprehensible. Weighted similarity between the topic and the semantic features prevents meaningless sentences that merely resemble the topic from being selected. K-means clustering removes noisy sentences so that biased semantics of the documents are not reflected in the summaries. In addition, the coherence of the document summaries is enhanced by arranging the selected sentences in the order of their ranks. The experimental results show that the proposed method achieves better performance than other methods.
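
A minimal sketch of an NMF-plus-K-means pipeline on a toy sentence set: NMF yields per-sentence semantic variables, K-means groups the sentences, and each group contributes its sentence most similar to the topic; the scoring rule and toy data are assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The committee approved the new climate policy.",
    "Carbon emissions will be taxed starting next year.",
    "The soccer team won the championship on Saturday.",
    "Fans celebrated the victory in the city square.",
    "The policy aims to cut emissions by forty percent.",
]
topic = "climate policy emissions"

vec = TfidfVectorizer(stop_words="english")
A = vec.fit_transform(sentences)               # weighted sentence-by-term matrix
W = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(A)  # semantic variables
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(W)   # group sentences

topic_vec = vec.transform([topic])
scores = cosine_similarity(A, topic_vec).ravel()

# From each cluster, pick the sentence most similar to the topic.
summary_idx = sorted(int(np.argmax(np.where(labels == c, scores, -1)))
                     for c in np.unique(labels))
print([sentences[i] for i in summary_idx])
```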

Mask Estimation Based on Band-Independent Bayesian Classifier for Missing-Feature Reconstruction (Missing-Feature 복구를 위한 대역 독립 방식의 베이시안 분류기 기반 마스크 예측 기법)

  • Kim, Wooil;Stern, Richard M.;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.2
    • /
    • pp.78-87
    • /
    • 2006
  • In this paper, we propose an effective mask estimation scheme for missing-feature reconstruction in order to achieve robust speech recognition under unknown noise environments. In previous work, colored noise generated over the entire partitioned frequency range is used for training the mask classifier; however, this gives limited performance when the training database is restricted in size. To reflect the spectral events of more varied background noises and simultaneously improve performance, a new Bayesian classifier for mask estimation is proposed that works independently of other frequency bands. In the proposed method, we employ colored noise obtained by combining colored noises generated in each frequency band, in order to reflect more varied noise environments and mitigate the 'sparse' database problem. Combined with cluster-based missing-feature reconstruction, the performance of the proposed method is evaluated on a noisy speech recognition task. The results show that the proposed method improves performance compared to the previous method under white noise, car noise, and background music conditions.
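
A minimal sketch of band-independent mask estimation: one simple Bayesian (Gaussian naive Bayes) classifier per frequency band, each trained only on its own band's features; the synthetic features and reliable/unreliable labels below are placeholders for data derived from the colored-noise training set.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_bands, n_frames = 20, 500

# Synthetic log-spectral band features and reliable/unreliable labels
# (in practice the labels would come from the local SNR of the training data).
feats = rng.normal(size=(n_frames, n_bands))
labels = (feats + 0.5 * rng.normal(size=feats.shape) > 0).astype(int)

# One Bayesian classifier per frequency band, trained independently of the others.
band_classifiers = []
for b in range(n_bands):
    clf = GaussianNB().fit(feats[:, b:b + 1], labels[:, b])
    band_classifiers.append(clf)

def estimate_mask(spectrogram):
    """Estimate a binary reliability mask band by band; input shape (n_frames, n_bands)."""
    return np.column_stack([
        band_classifiers[b].predict(spectrogram[:, b:b + 1])
        for b in range(spectrogram.shape[1])
    ])

mask = estimate_mask(rng.normal(size=(10, n_bands)))
print(mask.shape)   # (10, 20)
```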