• Title/Summary/Keyword: sparse matrix

Search Results: 253

Nonlinear Filter for Orbit Determination (궤도결정을 위한 비선형 필터)

  • Yoon, Jangho
    • Journal of Aerospace System Engineering
    • /
    • v.10 no.1
    • /
    • pp.21-28
    • /
    • 2016
  • Orbit determination problems have been of interest to many researchers for a long time. Because of the high nonlinearity of the equation of motion and the measurement model, both equations must ordinarily be linearized. To avoid linearization, a filter based on the Fokker-Planck equation is designed: the associated Fokker-Planck equation is solved efficiently and accurately via the direct quadrature method of moments (DQMOM), and the measurement update is performed through the extended Kalman filter (EKF) update mechanism. This hybrid filter based on the DQMOM and the EKF update is applied to the orbit determination problem with appropriate modifications to mitigate filter smugness. Unlike the extended Kalman filter, the hybrid filter requires neither the burdensome evaluation of the Jacobian matrix nor a Gaussian assumption for the system, and it can still provide more accurate state estimates than the extended Kalman filter, especially when measurements are sparse. Simulation results indicate that these advantages make the hybrid filter a promising alternative to the extended Kalman filter for orbit estimation problems.
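
Since the measurement update mentioned above is the standard EKF correction step, a minimal numpy sketch of that step may be helpful. The range-only measurement model, state, and noise values below are invented placeholders, not the paper's orbit model.

```python
import numpy as np

def ekf_update(x, P, z, R, h, H_jac):
    """One EKF measurement update; x, P are the prior state and covariance."""
    H = H_jac(x)                          # measurement Jacobian at the prior
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y                     # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new

# Toy example: range-only measurement of a 2-D position (placeholder numbers).
h = lambda x: np.array([np.hypot(x[0], x[1])])
H_jac = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])
x, P = np.array([7000.0, 100.0]), np.diag([100.0, 25.0])
x, P = ekf_update(x, P, z=np.array([7005.0]), R=np.array([[4.0]]), h=h, H_jac=H_jac)
```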

Two Dimensional Slow Feature Discriminant Analysis via L2,1 Norm Minimization for Feature Extraction

  • Gu, Xingjian;Shu, Xiangbo;Ren, Shougang;Xu, Huanliang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.7
    • /
    • pp.3194-3216
    • /
    • 2018
  • Slow Feature Discriminant Analysis (SFDA) is a supervised feature extraction method inspired by biological mechanisms. In this paper, a novel method called Two-Dimensional Slow Feature Discriminant Analysis via $L_{2,1}$ norm minimization ($2DSFDA-L_{2,1}$) is proposed. $2DSFDA-L_{2,1}$ integrates $L_{2,1}$ norm regularization and a 2D statistically uncorrelated constraint to extract discriminant features. First, $L_{2,1}$ norm regularization promotes row-sparsity of the projection matrix, so that feature selection and subspace learning are performed simultaneously. Second, uncorrelated features of minimum redundancy are effective for classification; we define a 2D statistically uncorrelated model in which each row (or column) is independent. Third, we provide a feasible solution by transforming the proposed $L_{2,1}$ nonlinear model into a linear-regression-type problem. Additionally, $2DSFDA-L_{2,1}$ is extended to a bilateral-projection version called $BSFDA-L_{2,1}$, whose advantage is that an image can be represented with far fewer coefficients. Experimental results on three face databases demonstrate that the proposed $2DSFDA-L_{2,1}$/$BSFDA-L_{2,1}$ achieves competitive performance.
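
For reference, the $L_{2,1}$ norm of a matrix $W$ is the sum of the $\ell_2$ norms of its rows, $\|W\|_{2,1}=\sum_i \|w_i\|_2$; penalizing it drives entire rows of the projection matrix to zero, which is what yields joint feature selection. A minimal numpy illustration (made-up matrices, not the paper's solver):

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm: sum of the Euclidean norms of the rows of W."""
    return np.sum(np.linalg.norm(W, axis=1))

# Two matrices with the same Frobenius norm sqrt(3):
W_dense = np.full((4, 3), 0.5)                   # energy spread over all rows
W_sparse = np.zeros((4, 3)); W_sparse[0] = 1.0   # energy in a single row
print(l21_norm(W_dense), l21_norm(W_sparse))     # ~3.46 vs ~1.73: row-sparsity is cheaper
```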

Fast Cardiac CINE MRI by Iterative Truncation of Small Transformed Coefficients

  • Park, Jinho;Hong, Hye-Jin;Yang, Young-Joong;Ahn, Chang-Beom
    • Investigative Magnetic Resonance Imaging
    • /
    • v.19 no.1
    • /
    • pp.19-30
    • /
    • 2015
  • Purpose: A new compressed sensing technique by iterative truncation of small transformed coefficients (ITSC) is proposed for fast cardiac CINE MRI. Materials and Methods: The proposed reconstruction is composed of two processes: truncation of the small transformed coefficients in the r-f domain, and restoration of the measured data in the k-t domain. The two processes are applied in alternation until the reconstructed images converge, under the assumption that cardiac CINE images are inherently sparse in the r-f domain. A novel sampling strategy to reduce the normalized mean square error of the reconstructed images is also proposed. Results: The technique shows the lowest normalized mean square error among the four methods under comparison (zero filling, view sharing, k-t FOCUSS, and ITSC). Application of ITSC to multi-slice cardiac CINE imaging was tested with 2 to 8 slices in a single breath-hold to demonstrate the clinical usefulness of the technique. Conclusion: Reconstructed images with compression factors of 3-4 appear very close to the images without compression. Furthermore, the proposed algorithm is computationally efficient and is stable without using matrix inversion during the reconstruction.
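
The two-step loop described above (truncate small coefficients in a sparsifying domain, then re-impose the measured samples) is an alternating-projection pattern. The sketch below is a simplified 1-D analogue of that pattern, not the paper's r-f/k-t implementation; the sampling mask, sparsity level, and test signal are placeholders.

```python
import numpy as np

def itsc_like_recon(y, mask, keep=0.1, n_iter=100):
    """y: measured Fourier samples (zero elsewhere); mask: True where measured.
    Assumes the signal itself is sparse (a stand-in for the r-f domain)."""
    x = np.fft.ifft(y)                              # zero-filled starting estimate
    for _ in range(n_iter):
        thr = np.quantile(np.abs(x), 1.0 - keep)
        x = np.where(np.abs(x) >= thr, x, 0.0)      # truncate small coefficients
        k = np.fft.fft(x)
        k[mask] = y[mask]                           # restore the measured data
        x = np.fft.ifft(k)
    return x

# Toy usage: an 8-sparse signal recovered from 40% random Fourier samples.
rng = np.random.default_rng(0)
x_true = np.zeros(256); x_true[rng.choice(256, 8)] = rng.normal(size=8)
mask = rng.random(256) < 0.4
y = np.where(mask, np.fft.fft(x_true), 0.0)
x_rec = itsc_like_recon(y, mask, keep=8/256)
```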

Runoff and Unsteady Pipe Flow Computation (유출과 부정류 관수로 흐름 계산에 관한 연구)

  • Jeon, Byeong-Ho;Lee, Jae-Cheol;Gwon, Yeong-Ha
    • Water for future
    • /
    • v.23 no.2
    • /
    • pp.251-263
    • /
    • 1990
  • For surcharge flow in a sewer, the slot technique simulates surcharge flow as open-channel flow using a hypothetical narrow open piezometric slot at the sewer crown. The flow in a sewer is described mathematically using the unsteady open-channel Saint-Venant equations. In this study, a computer simulation model (USS-slot) using the slot technique is developed to simulate the inlet hydrographs to manholes and the flow under pressure, as well as free-surface flow, in tree-type sewer networks of circular conduits. The inlet hydrographs are simulated using the rational method or the ILSD program. The Saint-Venant equations for unsteady open-channel flow in sewers are solved using a four-point implicit difference scheme, and the flow equations of the sewers and the junction flow equations are solved simultaneously using a sparse matrix solution technique.
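
Solving the linearized sewer and junction equations simultaneously at each time step amounts to solving a large sparse linear system. As a generic illustration (the 1990 paper predates these tools), such a system might be solved with scipy's sparse direct solver; the matrix entries below are placeholders.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

# Placeholder sparse system A dx = r, of the kind that arises at each time
# step of a four-point implicit Saint-Venant discretization.
rows = [0, 0, 1, 1, 1, 2, 2]
cols = [0, 1, 0, 1, 2, 1, 2]
vals = [4.0, -1.0, -1.0, 4.0, -1.0, -1.0, 4.0]
A = csr_matrix((vals, (rows, cols)), shape=(3, 3))
r = np.array([1.0, 0.0, 1.0])
dx = spsolve(A, r)   # sparse LU avoids storing and factoring a dense matrix
```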


A 95% accurate EEG-connectome Processor for a Mental Health Monitoring System

  • Kim, Hyunki;Song, Kiseok;Roh, Taehwan;Yoo, Hoi-Jun
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.16 no.4
    • /
    • pp.436-442
    • /
    • 2016
  • An electroencephalogram (EEG)-connectome processor to monitor and diagnose mental health is proposed. From 19-channel EEG signals, the proposed processor determines whether the mental state is healthy or unhealthy by extracting significant features from the EEG signals and classifying them. The connectome approach is adopted for the best diagnosis accuracy, and synchronization likelihood (SL) is chosen as the connectome feature. Before computing SL, a reconstruction optimizer (ReOpt) block compensates for some parameters, resulting in improved accuracy. During SL calculation, a sparse matrix inscription (SMI) scheme is proposed to reduce the memory size to 1/24. From the calculated SL information, a small world feature extractor (SWFE) reduces the memory size to 1/29. Finally, using the SLs or small world features, a radial basis function (RBF) kernel-based support vector machine (SVM) diagnoses the user's mental health condition. For the RBF kernels, look-up tables (LUTs) replace the floating-point operations, decreasing the required operations by 54%. Consequently, the EEG-connectome processor improves the diagnosis accuracy from 89% to 95% in the Alzheimer's disease case. The proposed processor occupies $3.8mm^2$ and consumes 1.71 mW in a $0.18{\mu}m$ CMOS technology.
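
The paper's SMI scheme is a hardware memory layout, but the underlying idea of storing only the non-zero connectivity entries can be illustrated in software with a standard compressed sparse row (CSR) format. This is only an analogy, not the authors' scheme, and the connectivity matrix below is synthetic.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical 19x19 synchronization-likelihood matrix with few strong links.
rng = np.random.default_rng(1)
sl = rng.random((19, 19)); sl[sl < 0.9] = 0.0      # keep only strong couplings
sl_csr = csr_matrix(sl)
dense_bytes = sl.nbytes
csr_bytes = sl_csr.data.nbytes + sl_csr.indices.nbytes + sl_csr.indptr.nbytes
print(f"dense: {dense_bytes} B, CSR: {csr_bytes} B")  # CSR stores only non-zeros
```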

Coding-based Storage Design for Continuous Data Collection in Wireless Sensor Networks

  • Zhan, Cheng;Xiao, Fuyuan
    • Journal of Communications and Networks
    • /
    • v.18 no.3
    • /
    • pp.493-501
    • /
    • 2016
  • In-network storage is an effective technique for avoiding network congestion and reducing power consumption in continuous data collection in wireless sensor networks. In recent years, network-coding-based storage design has been proposed as a means of achieving ubiquitous access, which permits any query to be satisfied by a few random (nearby) storage nodes. To maintain data consistency in continuous data collection applications, the readings of a sensor over time must be sent to the same set of storage nodes. In this paper, we present an efficient approach to updating data at the storage nodes that maintains consistency without decoding the old data and re-encoding it with the new data. We study a transmission strategy that identifies, for each source sensor, a set of storage nodes that minimizes the transmission cost while achieving ubiquitous access through sparse transmissions based on sparse matrix theory. We demonstrate that the problem of minimizing the transmission cost with coding is NP-hard, and we present an approximation algorithm based on regarding every storage node with memory size B as B tiny nodes that can each store only one packet. We analyze the approximation ratio of the proposed solution and compare the performance of the proposed coding approach with other coding schemes presented in the literature. The simulation results confirm that significant performance improvement can be achieved with the proposed transmission strategy.
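
As background, random linear network coding stores at each node a linear combination of source packets; with a sparse coefficient matrix, few transmissions are needed, and a query succeeds whenever the coefficient rows it collects are full rank. The toy sketch below works over GF(2) and is unrelated to the paper's exact construction; all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
k, n = 4, 6                      # 4 source packets, 6 storage nodes (toy sizes)
packets = rng.integers(0, 256, size=(k, 8), dtype=np.uint8)  # 8-byte payloads

# Sparse random coefficient matrix over GF(2): each node XORs a few packets.
C = (rng.random((n, k)) < 0.4).astype(np.uint8)
stored = np.zeros((n, 8), dtype=np.uint8)
for i in range(n):
    for j in range(k):
        if C[i, j]:
            stored[i] ^= packets[j]   # GF(2) linear combination = XOR

# A query that gathers any k rows of C that are full rank over GF(2) can
# decode all packets by Gaussian elimination over GF(2) (omitted here).
```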

PARAFAC Tensor Reconstruction for Recommender System based on Apache Spark (아파치 스파크에서의 PARAFAC 분해 기반 텐서 재구성을 이용한 추천 시스템)

  • Im, Eo-Jin;Yong, Hwan-Seung
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.4
    • /
    • pp.443-454
    • /
    • 2019
  • In recent years, there has been active research on recommender systems that consider three or more types of input in addition to users and items, represented as a multi-dimensional array known as a tensor. The main issue with using a tensor is that it contains many missing values, making it sparse. To address this, the tensor can be decomposed by a tensor decomposition algorithm into lower-dimensional arrays called factor matrices. The tensor is then reconstructed from the factor matrices, filling the originally empty cells with predicted values; this is called tensor reconstruction. In this paper, we propose a user-based Top-K recommender system using normalized PARAFAC tensor reconstruction. This method factorizes a tensor into factor matrices and reconstructs the tensor from them. Before decomposition, the original tensor is normalized along each dimension to reduce overfitting. Using a real-world dataset, this paper demonstrates the processing of a large amount of data and implements the recommender system on Apache Spark. In addition, this study confirms that recommendation performance is improved by normalizing the tensor.
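
The decompose-then-reconstruct step can be sketched with the tensorly library. The paper itself implements PARAFAC on Apache Spark; this small-scale, numpy-backed example is only illustrative, and treating missing entries as zeros is a simplification.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Toy user x item x context rating tensor, with missing entries stored as 0.
rng = np.random.default_rng(3)
T = rng.random((5, 6, 3)); T[rng.random(T.shape) < 0.7] = 0.0   # ~70% missing

# PARAFAC (CP) decomposition into rank-2 factor matrices, then reconstruction:
cp = parafac(tl.tensor(T), rank=2, n_iter_max=200)
T_hat = tl.cp_to_tensor(cp)   # reconstructed tensor; predictions fill empty cells
```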

Large-scaled truss topology optimization with filter and iterative parameter control algorithm of Tikhonov regularization

  • Nguyen, Vi T.;Lee, Dongkyu
    • Steel and Composite Structures
    • /
    • v.39 no.5
    • /
    • pp.511-528
    • /
    • 2021
  • There have recently been some advances in numerically solving topology optimization problems for large-scale trusses based on the ground structure approach. A disadvantage of this approach is that the final design usually includes many bars, which is difficult to produce in practice. One efficient tool for reducing this difficulty and determining a few distinct bars is a so-called filter scheme for the ground structure. In detail, this technique is valuable for practical use because unnecessary bars are filtered out of the ground structure during the topology optimization process to obtain a well-defined structure, while the global equilibrium condition is still guaranteed. This process, however, leads to a singular system of equilibrium equations. In this case, least-squares minimization with Tikhonov regularization is adopted. In this paper, a proposed algorithm for controlling the optimal Tikhonov parameter is considered in combination with the filter scheme, because of its crucial role in removing the numerical singularity and in saving computational time through sparse matrix storage; in other words, the discrete optimal topology solutions depend on choosing the Tikhonov parameter efficiently. Several numerical examples are investigated to demonstrate the efficiency of the filter parameter control algorithm for large-scale optimal topology designs.
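
Tikhonov regularization replaces the singular system $Ku = f$ with the well-posed least-squares problem $\min_u \|Ku - f\|_2^2 + \alpha\|u\|_2^2$, i.e. $(K^TK + \alpha I)u = K^Tf$. A minimal sparse sketch follows, with a placeholder matrix and a fixed $\alpha$ rather than the paper's iterative parameter control.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# Placeholder rank-deficient 'stiffness' matrix: third row = row1 + row2.
K = sparse.csr_matrix(np.array([[ 2.0, -1.0,  0.0],
                                [-1.0,  2.0, -1.0],
                                [ 1.0,  1.0, -1.0]]))
f = np.array([1.0, 0.0, 1.0])

alpha = 1e-6                                           # Tikhonov parameter (fixed here)
A = (K.T @ K + alpha * sparse.identity(3)).tocsr()     # regularized normal equations
u = spsolve(A, K.T @ f)                                # well-posed sparse solve
```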

A review on robust principal component analysis (강건 주성분분석에 대한 요약)

  • Lee, Eunju;Park, Mingyu;Kim, Choongrak
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.2
    • /
    • pp.327-333
    • /
    • 2022
  • Principal component analysis (PCA) is the most widely used technique in dimension reduction; however, it is very sensitive to outliers. A robust version of PCA, called robust PCA, was suggested in two seminal papers by Candès et al. (2011) and Chandrasekaran et al. (2011). Robust PCA is an essential tool in artificial intelligence applications such as background detection, face recognition, ranking, and collaborative filtering, and it has received considerable attention in statistics as well as in computer science. In this paper, we introduce recent algorithms for robust PCA and give some illustrative examples.
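
The formulation of Candès et al. (2011) decomposes a data matrix $M$ into a low-rank part $L$ and a sparse outlier part $S$ by solving $\min \|L\|_* + \lambda\|S\|_1$ subject to $L + S = M$ (principal component pursuit). Below is a compact augmented-Lagrangian solver for that problem; it is a simplified textbook-style sketch, not code from the paper under review.

```python
import numpy as np

def rpca_pcp(M, n_iter=500, tol=1e-7):
    """Principal component pursuit via a simple augmented Lagrangian loop."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))                 # standard lambda choice
    mu = m * n / (4.0 * np.abs(M).sum())           # common heuristic step size
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    soft = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(n_iter):
        # L-step: singular value thresholding of M - S + Y/mu
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(soft(s, 1.0 / mu)) @ Vt
        # S-step: entrywise soft thresholding
        S = soft(M - L + Y / mu, lam / mu)
        resid = M - L - S
        Y += mu * resid                            # dual variable update
        if np.linalg.norm(resid) <= tol * np.linalg.norm(M):
            break
    return L, S
```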

An efficient adaptive finite element method based on EBE-PCG iterative solver for LEFM analysis

  • Hearunyakij, Manat;Phongthanapanich, Sutthisak
    • Structural Engineering and Mechanics
    • /
    • v.83 no.3
    • /
    • pp.353-361
    • /
    • 2022
  • Linear elastic fracture mechanics (LEFM) has been developed by applying stress analysis to determine the stress intensity factor (SIF, K). The finite element method (FEM) is widely used as a standard tool for evaluating the SIF for various crack configurations, and the prediction accuracy can be improved by combining an adaptive Delaunay triangulation with the FEM. The resulting system can be solved with either direct or iterative solvers. This work adopts an element-by-element preconditioned conjugate gradient (EBE-PCG) iterative solver within an adaptive FEM to overcome the problem-size constraints that arise when direct solution techniques are applied, since it avoids forming the global stiffness matrix of the finite element model. Several numerical experiments reveal that the present method is simple, fast, and efficient compared with conventional sparse direct solvers. The optimum convergence criterion for two-dimensional LEFM analysis is also studied. In this paper, four sample problems, a two-edge cracked plate, a center cracked plate, a single-edge cracked plate, and a compact tension specimen, are used to evaluate the accuracy of the predicted SIF values. Finally, the efficiency of the present iterative solver is summarized by comparing the computational times for all cases.
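
The EBE variant applies the stiffness operator element by element so the global matrix is never assembled. As a simplified stand-in, the sketch below runs scipy's conjugate gradient with a Jacobi (diagonal) preconditioner on an assembled sparse matrix, which shows the PCG mechanics though not the matrix-free aspect; the 1-D Laplacian is a placeholder for an FEM stiffness matrix.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg, LinearOperator

# Placeholder SPD 'stiffness' matrix: 1-D Laplacian (stand-in for an FEM system).
n = 1000
K = sparse.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
f = np.ones(n)

# Jacobi preconditioner M^{-1} = diag(K)^{-1}, supplied as a LinearOperator.
inv_diag = 1.0 / K.diagonal()
M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)

u, info = cg(K, f, M=M, maxiter=5000)
print("converged" if info == 0 else f"cg info = {info}")  # 0 means converged
```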