Title/Summary/Keyword: Minimum-Norm


A Robust Estimation Procedure for the Linear Regression Model

  • Kim, Bu-Yong
    • Journal of the Korean Statistical Society
    • /
    • v.16 no.2
    • /
    • pp.80-91
    • /
    • 1987
  • Minimum $L_1$ norm estimation is a robust procedure in the sense that it leads to an estimator with greater statistical efficiency than the least squares estimator in the presence of outliers, and the $L_1$ norm estimator has some desirable statistical properties. In this paper a new computational procedure for $L_1$ norm estimation is proposed which combines the idea of the reweighted least squares method with the linear programming approach. A modification of the projective transformation method is employed to solve the linear programming problem instead of the simplex method. It is proved that the proposed algorithm terminates in a finite number of iterations.

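The entry above combines reweighted least squares with a linear programming step solved by a projective transformation method; that solver is not reproduced here, but the reweighted-least-squares view of minimum $L_1$ norm (least absolute deviations) regression can be sketched as follows. This is a minimal illustration on invented data, not the authors' algorithm; the damping constant `eps` and the toy outlier are assumptions of the example.

```python
import numpy as np

def l1_regression_irls(X, y, n_iter=100, eps=1e-6):
    """Approximate the minimum L1-norm (least absolute deviations) fit by
    iteratively reweighted least squares: weighting each residual by 1/|r_i|
    turns the weighted L2 objective into a surrogate for the L1 objective."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # start from the L2 fit
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)             # damped weights, avoids 1/0
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < 1e-10:
            break
        beta = beta_new
    return beta

# toy data with one gross outlier: the L1 fit is typically affected far less than L2
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0, 10, 50)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(50)
y[3] += 50.0                                             # outlier
print("L1 (IRLS) estimate:", l1_regression_irls(X, y))
print("L2 estimate:       ", np.linalg.lstsq(X, y, rcond=None)[0])
```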

Balanced model reduction of non-minimum phase plant into minimum phase plant (비최소 위상 플랜트의 최소 위상 플랜트로의 균형 모델 저차화)

  • 구세완;권혁성;서병설
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1996.10b
    • /
    • pp.1205-1208
    • /
    • 1996
  • This paper proposes a balanced model reduction of a non-minimum phase plant. The algorithm presented here converts a high-order non-minimum phase plant into a low-order minimum phase plant using balanced model reduction. Balanced model reduction provides an error bound determined by the Hankel singular values, and this bound gives the tolerance within which the proposed method is admissible.

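The reduction step named in the entry above is balanced model reduction with a Hankel-singular-value error bound. The sketch below is a plain balanced-truncation routine for a stable state-space model, written with SciPy's Lyapunov solver; the conversion of a non-minimum phase plant into a minimum phase one and the tolerance analysis of the paper are not reproduced, and the example system is invented.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Truncate a stable (A, B, C) realization to order r by balancing the
    controllability and observability Gramians; the singular values s are the
    Hankel singular values, whose discarded tail bounds the truncation error."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)      # A Wc + Wc A^T = -B B^T
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # A^T Wo + Wo A = -C^T C
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)
    T = Lc @ Vt.T @ np.diag(s ** -0.5)               # balancing transformation
    Tinv = np.diag(s ** -0.5) @ U.T @ Lo.T
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
    return Ab[:r, :r], Bb[:r, :], Cb[:, :r], s

# toy stable 4th-order system reduced to 2nd order
A = np.array([[-1.0, 0.5, 0.0, 0.0],
              [ 0.0,-2.0, 1.0, 0.0],
              [ 0.0, 0.0,-3.0, 0.5],
              [ 0.0, 0.0, 0.0,-4.0]])
B = np.array([[1.0], [0.0], [1.0], [0.5]])
C = np.array([[1.0, 1.0, 0.0, 0.5]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print("Hankel singular values:", hsv)
```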

A PARAMETER ESTIMATION METHOD FOR MODEL ANALYSIS

  • Oh Se-Young;Kwon Sun-Joo;Yun Jae-Heon
    • Journal of applied mathematics & informatics
    • /
    • v.22 no.1_2
    • /
    • pp.373-385
    • /
    • 2006
  • To solve a class of nonlinear parameter estimation problems, a method combining the regularized structured nonlinear total least norm (RSNTLN) method with a parameter separation scheme is suggested. The method guarantees the convergence of the parameters and has an advantage over the use of RSNTLN alone in reducing the residual norm. Numerical experiments on two models arising in signal processing show that the suggested method is more effective in obtaining a solution and parameters with minimum residual norm.
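
The RSNTLN formulation itself is not reproduced here, but the parameter separation idea mentioned above, solving for the linear parameters in closed form inside an outer search over the nonlinear parameters (a variable-projection-style scheme), can be sketched as follows. The exponential-sum model, starting values, and optimizer choice are hypothetical illustrations, not the paper's test problems.

```python
import numpy as np
from scipy.optimize import minimize

def separated_residual(alpha, t, y):
    """For a model y ~ Phi(alpha) @ c, linear in c and nonlinear in alpha,
    eliminate c by least squares and return the remaining residual norm."""
    Phi = np.exp(-np.outer(t, alpha))            # hypothetical basis: decaying exponentials
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear parameters in closed form
    return np.linalg.norm(y - Phi @ c)

# synthetic data from two exponentials plus noise
rng = np.random.default_rng(1)
t = np.linspace(0, 5, 200)
y = 2.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-2.0 * t) + 0.01 * rng.standard_normal(t.size)

# outer optimization over the nonlinear (rate) parameters only
res = minimize(separated_residual, x0=[0.3, 1.5], args=(t, y), method="Nelder-Mead")
alpha_hat = res.x
Phi = np.exp(-np.outer(t, alpha_hat))
c_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("rates:", alpha_hat, "amplitudes:", c_hat)
```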

Enhancing seismic reflection signal (탄성파 반사 신호 향상)

  • Hien, D.H.;Jang, Seong-Hyung;Kim, Young-Wan;Suh, Sang-Yong
    • 한국신재생에너지학회:학술대회논문집
    • /
    • 2008.05a
    • /
    • pp.606-609
    • /
    • 2008
  • Deconvolution is one of the most widely used techniques for processing seismic reflection data. It is applied to improve temporal resolution by wavelet shaping and by removing short-period reverberations. Several deconvolution algorithms, such as predictive, spiking, and minimum entropy deconvolution, have been proposed for these purposes. Among them, the $\ell_1$ norm approach proposed by Taylor et al. (1979), and compared with minimum entropy deconvolution by Sacchi et al. (1994), offers advantages in computing time and efficiency. Theoretically, deconvolution can be considered an inversion technique that inverts a single seismic trace to reflectivity, but it has not always been adopted successfully because of noisy signals in real data sets and an unknown source wavelet. After stacking, the seismic traces are moved to zero offset, so each stacked trace can be treated as a single trace created by convolving the seismic source wavelet with the reflectivity. In this paper, the fundamentals of the $\ell_1$ norm deconvolution method are introduced. The method is tested on synthetic data and applied to improve the stacked section of a gas hydrate survey.

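As a toy counterpart to the $\ell_1$ norm deconvolution discussed above, the sketch below poses deconvolution as $\ell_1$-regularized inversion of a known-wavelet convolution matrix, solved here with scikit-learn's Lasso; the wavelet, reflectivity, noise level, and regularization weight are invented, and the paper's actual formulation may differ (in particular, the source wavelet is unknown in practice).

```python
import numpy as np
from scipy.linalg import toeplitz
from sklearn.linear_model import Lasso

# hypothetical Ricker-like wavelet and sparse reflectivity series
n = 200
tw = np.arange(-30, 31) * 0.004
f0 = 25.0
wavelet = (1 - 2 * (np.pi * f0 * tw) ** 2) * np.exp(-(np.pi * f0 * tw) ** 2)

rng = np.random.default_rng(2)
refl = np.zeros(n)
refl[[40, 90, 95, 150]] = [1.0, -0.7, 0.5, 0.8]

# convolution written as a matrix-vector product: trace = G @ refl (+ noise)
half = wavelet.size // 2
first_col = np.r_[wavelet[half:], np.zeros(n - half - 1)]
first_row = np.r_[wavelet[half::-1], np.zeros(n - half - 1)]
G = toeplitz(first_col, first_row)
trace = G @ refl + 0.02 * rng.standard_normal(n)

# L1-regularized inversion recovers a sparse (spiky) reflectivity estimate
model = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
model.fit(G, trace)
refl_hat = model.coef_
print("largest recovered spikes at samples:", np.argsort(np.abs(refl_hat))[-4:])
```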

AN UPPER BOUND ON THE NUMBER OF PARITY CHECKS FOR BURST ERROR DETECTION AND CORRECTION IN EUCLIDEAN CODES

  • Jain, Sapna;Lee, Ki-Suk
    • Journal of the Korean Mathematical Society
    • /
    • v.46 no.5
    • /
    • pp.967-977
    • /
    • 2009
  • There are three standard weight functions on a linear code, viz. the Hamming weight, the Lee weight, and the Euclidean weight. The Euclidean weight function is useful in connection with lattice constructions [2], where the minimum norm of vectors in the lattice is related to the minimum Euclidean weight of the code. In this paper, we obtain an upper bound on the number of parity check digits for Euclidean weight codes detecting and correcting burst errors.
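
For concreteness, the Euclidean weight referred to above is commonly defined componentwise over $\mathbb{Z}_q$ as $w_E(a)=\min(a^2,(q-a)^2)$ and summed over the codeword; the minimum Euclidean weight of a code is then taken over its nonzero codewords. The snippet below is a toy illustration of that definition (the repetition code and modulus are made-up examples, not from the paper).

```python
def euclidean_weight(codeword, q):
    """Euclidean weight over Z_q: each component a contributes min(a^2, (q-a)^2)."""
    return sum(min(a * a, (q - a) * (q - a)) for a in codeword)

def minimum_euclidean_weight(codewords, q):
    """Minimum Euclidean weight over the nonzero codewords of a code."""
    return min(euclidean_weight(c, q) for c in codewords if any(c))

# toy example over Z_5: the repetition code {(0,0,0), (1,1,1), ..., (4,4,4)}
q = 5
code = [(a, a, a) for a in range(q)]
print([euclidean_weight(c, q) for c in code])   # [0, 3, 12, 12, 3]
print(minimum_euclidean_weight(code, q))        # 3
```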

A Study on High Resolution Ranging Algorithm for The UWB Indoor Channel

  • Lee, Chong-Hyun
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.21 no.4
    • /
    • pp.96-103
    • /
    • 2007
  • In this paper, we present a novel and numerically efficient algorithm for high-resolution TOA (Time Of Arrival) estimation in indoor radio propagation channels. The proposed algorithm does not depend on the structure of the receiver, i.e., it can be used with either coherent or non-coherent receivers. The TOA estimation is based on the Minimum-Norm high-resolution frequency estimation algorithm, and its efficiency relies on numerical analysis techniques for computing the signal and noise subspaces. The algorithm is a two-step procedure: the first step transforms the input data to frequency-domain data, and the second estimates the unknown TOA using the proposed efficient algorithm. The reduction in the number of operations compared with other algorithms is presented, and the performance of the proposed algorithm is investigated by means of computer simulations. The analytic and simulation results show that the proposed algorithm exhibits superior TOA estimation performance at limited computational cost.
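
Since the entry above builds on the Minimum-Norm high-resolution frequency estimator, a compact generic sketch of that estimator is given below: form a covariance matrix from overlapping snapshots, extract the noise subspace, take the minimum-norm vector in it whose first element is 1, and locate the peaks of the resulting pseudospectrum. The receiver model, the frequency-to-TOA mapping, and the fast subspace computations of the paper are not reproduced; the toy two-sinusoid signal and the snapshot length `m` are assumptions of the example.

```python
import numpy as np

def min_norm_spectrum(x, m, n_sources, n_grid=2048):
    """Minimum-Norm pseudospectrum of a 1-D signal x, using length-m snapshots
    and an assumed number of sinusoidal components n_sources."""
    N = len(x)
    snaps = np.array([x[i:i + m] for i in range(N - m + 1)]).T   # columns are snapshots
    R = snaps @ snaps.conj().T / snaps.shape[1]                  # sample covariance
    w, V = np.linalg.eigh(R)                                     # ascending eigenvalues
    En = V[:, : m - n_sources]                                   # noise subspace
    Pn = En @ En.conj().T                                        # noise-subspace projector
    e1 = np.zeros(m); e1[0] = 1.0
    d = Pn @ e1 / (e1 @ Pn @ e1)                                 # min-norm vector, d[0] = 1
    freqs = np.linspace(-0.5, 0.5, n_grid, endpoint=False)
    A = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))       # steering vectors
    return freqs, 1.0 / np.abs(A.conj().T @ d) ** 2

# two closely spaced complex sinusoids in noise
rng = np.random.default_rng(3)
n = np.arange(128)
x = (np.exp(2j * np.pi * 0.20 * n) + 0.8 * np.exp(2j * np.pi * 0.22 * n)
     + 0.05 * (rng.standard_normal(128) + 1j * rng.standard_normal(128)))
freqs, P = min_norm_spectrum(x, m=20, n_sources=2)

# pick the two largest local maxima of the pseudospectrum
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
top = sorted(peaks, key=lambda i: P[i])[-2:]
print("estimated frequencies:", sorted(freqs[i] for i in top))
```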

Source Current Reconstruction Based on MCG Signal (심자도 신호를 이용한 전류원 재구성)

  • 권혁찬;이용호;김진목
    • Progress in Superconductivity
    • /
    • v.4 no.1
    • /
    • pp.48-52
    • /
    • 2002
  • When applying a SQUID system to the diagnosis of heart disease, it is informative to obtain the source current distributions from the measured MCG (magnetocardiogram) signals, since the bioelectric activity in the heart is generally represented by distributed current sources. In order to estimate the primary current distribution in the heart, the minimum norm estimate was computed, assuming a source plane below the chest surface. In the simulation, the current distributions computed for test dipoles represented well the essential features of the test-current configurations. Source current reconstruction was then performed for the MCG signal of a healthy volunteer, recorded using a 40-channel SQUID system in a magnetically shielded room. The obtained current distribution was found to be consistent with the electrical activity of the heart.

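The minimum norm estimate used above maps the measured fields back onto a distributed source plane through a lead-field matrix. A generic regularized minimum-norm inverse can be sketched as below; the random lead-field `L`, noise level, and regularization parameter `lam` are placeholders, since building the actual MCG forward model for the 40-channel SQUID system is outside this sketch.

```python
import numpy as np

def minimum_norm_estimate(L, b, lam=1e-2):
    """Regularized minimum-norm estimate of distributed source amplitudes j
    from measurements b = L @ j + noise:  j = L^T (L L^T + lam*I)^{-1} b."""
    n_chan = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_chan), b)

# toy underdetermined problem: 40 "channels", 500 source-plane dipole amplitudes
rng = np.random.default_rng(4)
L = rng.standard_normal((40, 500))         # placeholder lead-field matrix
j_true = np.zeros(500)
j_true[[120, 121, 122]] = [1.0, 2.0, 1.0]  # a small localized source patch
b = L @ j_true + 0.01 * rng.standard_normal(40)

j_hat = minimum_norm_estimate(L, b, lam=1e-1)
print("indices of largest reconstructed amplitudes:", np.argsort(np.abs(j_hat))[-3:])
```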

Two Dimensional Slow Feature Discriminant Analysis via L2,1 Norm Minimization for Feature Extraction

  • Gu, Xingjian;Shu, Xiangbo;Ren, Shougang;Xu, Huanliang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.7
    • /
    • pp.3194-3216
    • /
    • 2018
  • Slow Feature Discriminant Analysis (SFDA) is a supervised feature extraction method inspired by a biological mechanism. In this paper, a novel method called Two Dimensional Slow Feature Discriminant Analysis via $L_{2,1}$ norm minimization ($2DSFDA-L_{2,1}$) is proposed. $2DSFDA-L_{2,1}$ integrates $L_{2,1}$ norm regularization and a 2D statistically uncorrelated constraint to extract discriminant features. First, the $L_{2,1}$ norm regularization promotes row-sparsity of the projection matrix, so that feature selection and subspace learning are performed simultaneously. Second, uncorrelated features of minimum redundancy are effective for classification; we define a 2D statistically uncorrelated model in which the rows (or columns) are independent. Third, we provide a feasible solution by transforming the proposed $L_{2,1}$ nonlinear model into a linear regression type. Additionally, $2DSFDA-L_{2,1}$ is extended to a bilateral projection version called $BSFDA-L_{2,1}$, whose advantage is that an image can be represented with far fewer coefficients. Experimental results on three face databases demonstrate that the proposed $2DSFDA-L_{2,1}/BSFDA-L_{2,1}$ achieves competitive performance.
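
The $L_{2,1}$ regularizer referred to above is the sum of the Euclidean norms of the rows of the projection matrix, which is what encourages row-sparsity. The snippet below shows the norm and its proximal operator (row-wise soft-thresholding) as a generic illustration of that mechanism; it is not the authors' optimization algorithm, and the matrix and threshold are invented.

```python
import numpy as np

def l21_norm(W):
    """L_{2,1} norm: sum over rows of the Euclidean norm of each row."""
    return np.sum(np.linalg.norm(W, axis=1))

def prox_l21(W, tau):
    """Proximal operator of tau * ||.||_{2,1}: shrink each row toward zero and
    set rows with norm below tau exactly to zero (row-sparsity)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * W

rng = np.random.default_rng(5)
W = rng.standard_normal((6, 4))
W[[1, 4]] *= 0.05                       # two rows with small energy
print("L_{2,1} norm:", l21_norm(W))
print("zero rows after prox:",
      np.where(np.linalg.norm(prox_l21(W, 0.5), axis=1) == 0)[0])
```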

Reduction of magnetic anomaly observations from helicopter surveys at varying elevations (고도가 변화하는 헬리콥터 탐사에서 얻어지는 자력이상의 변환)

  • Nakatsuka, Tadashi;Okuma, Shigeo
    • Geophysics and Geophysical Exploration
    • /
    • v.9 no.1
    • /
    • pp.121-128
    • /
    • 2006
  • Magnetic survey flights by helicopters are usually parallel to the topographic surface, with a nominal clearance, but especially in high-resolution surveys the altitudes at which observations are made may be too variable to be regarded as a smooth surface. We have developed a reduction procedure for such data using the method of equivalent sources, where surrounding sources are included to control edge effects, and data from points distributed randomly in three dimensions are directly modelled. Although the problem is generally underdetermined, the method of conjugate gradients can be used to find a minimum-norm solution. There is freedom to select the harmonic function that relates the magnetic anomaly to the source. When the upward-continuation operator is selected, the equivalent source is the magnetic anomaly itself. If we select as source a distribution of magnetic dipoles in the direction of the ambient magnetic field, we can easily derive reduction-to-pole anomalies by rotating the direction of the magnetic dipoles to vertical.
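
The equivalent-source step above finds a minimum-norm solution of an underdetermined system with conjugate gradients. One standard way to do this, sketched below, is to run CG on the smaller system $A A^{\mathsf T} y = b$ and set $x = A^{\mathsf T} y$, which is the minimum-norm solution when the system is consistent; the random matrix here is a stand-in, not an actual equivalent-source kernel.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(6)
m, n = 80, 400                       # fewer observations than equivalent sources
A = rng.standard_normal((m, n))      # stand-in for the equivalent-source kernel
x_true = rng.standard_normal(n)
b = A @ x_true                       # consistent underdetermined system A x = b

# CG on the symmetric positive definite m-by-m system A A^T y = b
AAt = LinearOperator((m, m), matvec=lambda y: A @ (A.T @ y), dtype=A.dtype)
y, info = cg(AAt, b)
x_min_norm = A.T @ y                 # minimum-norm solution of A x = b

print("residual norm:", np.linalg.norm(A @ x_min_norm - b))
print("norm of min-norm solution vs. true model:",
      np.linalg.norm(x_min_norm), np.linalg.norm(x_true))
```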

Trace Interpolation using Model-constrained Minimum Weighted Norm Interpolation (모델 제약조건이 적용된 MWNI (Minimum Weighted Norm Interpolation)를 이용한 트레이스 내삽)

  • Choi, Jihyun;Song, Youngseok;Choi, Jihun;Byun, Joongmoo;Seol, Soon Jee;Kim, Kiyoung;Lee, Jeongmo
    • Geophysics and Geophysical Exploration
    • /
    • v.20 no.2
    • /
    • pp.78-87
    • /
    • 2017
  • For efficient data processing, trace interpolation and regularization techniques should first be applied to seismic data that were irregularly sampled with missing traces. Among the many interpolation techniques, MWNI (Minimum Weighted Norm Interpolation) is one of the most versatile and is widely used to regularize seismic data because it is easily extended to higher-dimensional implementations and has low computational cost. However, since it is difficult to interpolate spatially aliased data with this technique, model-constrained MWNI has been suggested to compensate for this problem. In this paper, conventional MWNI and model-constrained MWNI modules were developed in order to analyze their performance on synthetic data and to validate their applicability to field data. Model-constrained MWNI gave better results on spatially aliased data. To verify the applicability to field data, interpolation and regularization were performed on two field data sets. First, seismic data acquired in the Ulleung Basin gas hydrate field were interpolated; even though the data have a very chaotic character and a complex structure due to chimneys, the developed module showed fairly good interpolation results. Second, very irregularly sampled seismic data with wide gaps were regularized, and the connectivity of events was much improved. These experiments confirm that the developed module can successfully interpolate and regularize irregularly sampled field data.
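
To make the minimum-weighted-norm idea concrete, the toy sketch below interpolates a single irregularly sampled trace: the trace is modelled through its Fourier coefficients, spectral weights are estimated from the current solution, and the weighted minimum-norm solution consistent with the observed samples is recomputed for a few iterations. It uses small dense matrices for clarity; production MWNI works with FFTs and conjugate gradients in higher dimensions, and the model-constrained variant of the paper is not reproduced.

```python
import numpy as np

def mwni_1d(d_obs, idx, n, n_iter=5, eps=1e-3):
    """Toy minimum weighted norm interpolation of a length-n trace observed only
    at sample indices idx. The spectrum x minimizes ||W^{-1} x||_2 subject to
    fitting the observed samples, with the spectral weights W re-estimated from
    |x| at each iteration."""
    Finv = np.fft.ifft(np.eye(n), axis=0)        # trace = Finv @ spectrum
    A = Finv[idx, :]                             # sampling of the inverse DFT
    w = np.ones(n)                               # start with flat spectral weights
    for _ in range(n_iter):
        W2 = np.diag(w ** 2)
        # weighted minimum-norm solution: x = W^2 A^H (A W^2 A^H)^{-1} d_obs
        x = W2 @ A.conj().T @ np.linalg.solve(A @ W2 @ A.conj().T, d_obs)
        w = np.abs(x) / np.abs(x).max() + eps    # favour the dominant wavenumbers
    return np.real(Finv @ x)

# irregularly decimated sum of two sinusoids on exact Fourier bins
n = 128
t = np.arange(n)
trace = np.sin(2 * np.pi * 6 * t / n) + 0.5 * np.sin(2 * np.pi * 14 * t / n)
rng = np.random.default_rng(7)
idx = np.sort(rng.choice(n, size=60, replace=False))   # 60 of 128 samples kept

recon = mwni_1d(trace[idx], idx, n)
print("max reconstruction error:", np.max(np.abs(recon - trace)))
```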