• Title/Summary/Keyword: least-squares problem


ITERATIVE ALGORITHMS FOR THE LEAST-SQUARES SYMMETRIC SOLUTION OF AXB = C WITH A SUBMATRIX CONSTRAINT

  • Wang, Minghui;Feng, Yan
    • Journal of applied mathematics & informatics
    • /
    • v.27 no.1_2
    • /
    • pp.1-12
    • /
    • 2009
  • Iterative algorithms are proposed for the least-squares symmetric solution of AXB = C with a submatrix constraint. We characterize the linear mappings from their independent element space to the constrained solution sets, study their properties, and use these properties to propose two matrix iterative algorithms that find the minimum-norm and quasi-minimum-norm solutions, based on the classical LSQR algorithm for the unconstrained LS problem. Numerical results show the efficiency of the proposed methods.

  • PDF
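The classical LSQR algorithm that the abstract builds on solves the unconstrained problem min‖Ax − b‖₂. A minimal sketch using SciPy's implementation (the data here is illustrative, not from the paper):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Small overdetermined system: A is 5x3, so LSQR returns the
# least-squares minimizer of ||Ax - b||.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                      # consistent system: residual vanishes

x, istop, itn, r1norm = lsqr(A, b)[:4]
print(np.allclose(x, x_true, atol=1e-6))
```

For a consistent system with a full-rank A, the recovered x matches the generating vector; the papers above extend this iteration to the matrix equation with symmetry and submatrix constraints.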

BLOCK DIAGONAL PRECONDITIONERS FOR THE GALERKIN LEAST SQUARES METHOD IN LINEAR ELASTICITY

  • Yoo, Jae-Chil
    • Communications of the Korean Mathematical Society
    • /
    • v.15 no.1
    • /
    • pp.143-153
    • /
    • 2000
  • In [8], Franca and Stenberg developed several Galerkin least squares methods for the problem of linear elasticity. That work concerned itself only with error estimates for the method; it did not address the related problem of finding effective methods for solving the associated linear systems. In this work, we propose block diagonal preconditioners. The preconditioned conjugate residual method is robust in that convergence is uniform as the parameter ν approaches 1/2. Computational experiments are included.

  • PDF
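The idea of a block diagonal preconditioner can be illustrated with SciPy's MINRES (a conjugate-residual-type method for symmetric systems). This is only a toy sketch of the mechanism, not the elasticity preconditioner of the paper: the test matrix and its blocks are assumptions.

```python
import numpy as np
from scipy.sparse import block_diag, csr_matrix
from scipy.sparse.linalg import minres, LinearOperator

# Symmetric positive definite matrix with two diagonal blocks of very
# different scale; a block diagonal preconditioner rescales each block.
A1 = np.array([[4.0, 1.0], [1.0, 3.0]])
A2 = 1e4 * np.array([[2.0, 0.5], [0.5, 2.0]])
A = csr_matrix(block_diag((A1, A2)))
b = np.ones(4)

# Preconditioner: apply the inverse of each diagonal block.
M1, M2 = np.linalg.inv(A1), np.linalg.inv(A2)
M = LinearOperator((4, 4),
                   matvec=lambda v: np.concatenate((M1 @ v[:2], M2 @ v[2:])))

x, info = minres(A, b, M=M)
print(info == 0 and np.linalg.norm(A @ x - b) < 1e-3)
```

With the exact block inverses as preconditioner the iteration converges almost immediately; in practice one uses cheap approximations of the blocks, and the paper's point is that convergence stays uniform in the nearly incompressible limit.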

Least Squares Approach for Structural Reanalysis

  • Kyung-Joon Cha;Ho-Jong Jang;Dal-Sun Yoon
    • Journal of the Korean Statistical Society
    • /
    • v.25 no.3
    • /
    • pp.369-379
    • /
    • 1996
  • A study is made of an approximate technique for structural reanalysis based on the force method. Perturbation analysis of the generalized least squares problem is adopted to reanalyze a damaged structure, and related results are presented.

  • PDF

THE EXTREMAL RANKS AND INERTIAS OF THE LEAST SQUARES SOLUTIONS TO MATRIX EQUATION AX = B SUBJECT TO HERMITIAN CONSTRAINT

  • Dai, Lifang;Liang, Maolin
    • Journal of applied mathematics & informatics
    • /
    • v.31 no.3_4
    • /
    • pp.545-558
    • /
    • 2013
  • In this paper, formulas for calculating the extremal ranks and inertias of the Hermitian least squares solutions to the matrix equation AX = B are established. In particular, necessary and sufficient conditions for the existence of positive and nonnegative definite solutions to this matrix equation are given. Meanwhile, the least squares problem for the above matrix equation with Hermitian R-symmetric and R-skew symmetric constraints is also investigated.
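The inertia of a Hermitian matrix is the triple (n₊, n₋, n₀) counting its positive, negative, and zero eigenvalues; positive definiteness corresponds to inertia (n, 0, 0). A small helper illustrating the quantity whose extremal values the paper's formulas characterize (the tolerance is an assumption):

```python
import numpy as np

def inertia(H, tol=1e-10):
    """Return (n_plus, n_minus, n_zero) for a Hermitian matrix H."""
    w = np.linalg.eigvalsh(H)             # real eigenvalues, ascending
    return (int(np.sum(w > tol)),
            int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

H = np.array([[2.0, 1.0], [1.0, -3.0]])   # det < 0, so indefinite
print(inertia(H))                         # one positive, one negative eigenvalue
```

A matrix is positive definite exactly when n₋ = n₀ = 0, which is how rank/inertia formulas translate into existence conditions for definite solutions.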

Least Squares Estimation with Autocorrelated Residuals : A Survey

  • Rhee, Hak-Yong
    • Journal of the Korean Statistical Society
    • /
    • v.4 no.1
    • /
    • pp.39-56
    • /
    • 1975
  • Ever since Gauss discussed the least-squares method in 1812 and Bertrand translated Gauss's work into French, the least-squares method has been used for various economic analyses. The justification of the least-squares method was given by Markov in 1912, in connection with the earlier discussion by Gauss and Bertrand. The main argument concerned the problem of obtaining the best linear unbiased estimates. In modern language, the argument can be explained as follows.

  • PDF
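When the residuals are autocorrelated with a known covariance Σ, the best linear unbiased estimator is no longer ordinary least squares but the generalized least squares estimator β̂ = (XᵀΣ⁻¹X)⁻¹XᵀΣ⁻¹y. A minimal sketch with an AR(1) error covariance (the design, ρ, and sample size are illustrative assumptions):

```python
import numpy as np

# Generalized least squares under AR(1) autocorrelation.
rng = np.random.default_rng(1)
n, rho = 50, 0.6
X = np.column_stack((np.ones(n), rng.standard_normal(n)))
beta = np.array([2.0, -1.0])

# AR(1) covariance: Sigma[i, j] = rho ** |i - j|
idx = np.arange(n)
Sigma = rho ** np.abs(np.subtract.outer(idx, idx))

y = X @ beta                        # noise-free, so GLS recovers beta exactly
Si = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
print(np.allclose(beta_gls, beta))
```

With noisy data the GLS estimate is still unbiased and has smaller variance than OLS, which is the Gauss-Markov point the survey develops.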

The Geolocation Based on Total Least Squares Algorithm Using Satellites

  • 박영미;조상우;전주환
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.2C
    • /
    • pp.255-261
    • /
    • 2004
  • The problem of geolocation using multiple satellites is to determine the position of a transmitter located on the Earth by processing received signals. The specific problem addressed in this paper is estimating the position of a stationary transmitter located on or above the Earth's surface from time differences of arrival (TDOA) measured by a geostationary orbiting (GSO) satellite and a low earth orbiting (LEO) satellite. The proposed geolocation method is based on the total least squares (TLS) algorithm. When the satellite positions are erroneous and the TDOA measurements noisy, the TLS algorithm provides a better solution. Through Monte Carlo simulations, the proposed method is compared with the ordinary least squares (LS) approach.
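Total least squares accounts for errors in the coefficient matrix as well as in the observations, which is why it suits the erroneous-satellite-position setting. The standard SVD-based TLS solution (Golub & Van Loan) for Ax ≈ b, sketched on illustrative data, not the paper's TDOA model:

```python
import numpy as np

def tls(A, b):
    """Total least squares solution of A x ~ b (errors in A and b),
    via the SVD of the augmented matrix [A | b]."""
    n = A.shape[1]
    C = np.column_stack((A, b))
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                      # right singular vector of smallest sigma
    return -v[:n] / v[n]

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, 2.0, -1.0])
b = A @ x_true                      # exactly consistent for the check below
print(np.allclose(tls(A, b), x_true))
```

On a consistent system TLS coincides with ordinary LS; the methods differ once perturbations enter A, as in the noisy satellite-position case the paper studies.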

A Channel Equalization Algorithm Using Neural Network Based Data Least Squares

  • Lim, Jun-Seok;Pyeon, Yong-Kuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.2E
    • /
    • pp.63-68
    • /
    • 2007
  • Using a neural network model for oriented principal component analysis (OPCA), we propose a solution to the data least squares (DLS) problem, in which the error is assumed to lie only in the data matrix. In this paper, we apply this neural network model to channel equalization. Simulations show that the neural-network-based DLS outperforms ordinary least squares in channel equalization problems.

Visual Servo Navigation of a Mobile Robot Using Nonlinear Least Squares Optimization for Large Residual

  • Kim, Gon-Woo;Nam, Kyung-Tae;Lee, Sang-Moo;Shon, Woong-Hee
    • The Journal of Korea Robotics Society
    • /
    • v.2 no.4
    • /
    • pp.327-333
    • /
    • 2007
  • We propose a navigation algorithm using image-based visual servoing with a fixed camera. We formulate mobile robot navigation as an unconstrained optimization problem that minimizes the image error between the goal position and the current position of the mobile robot. The residual function, this image error, is generally large for the navigation problem, so the problem can be treated as nonlinear least squares with a large residual. For this case, we propose a method to approximate the second-order term using the secant method. The performance was evaluated through simulation.

  • PDF
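The underlying problem class is nonlinear least squares, min ½‖r(θ)‖²; Gauss-Newton drops the second-order term Σᵢ rᵢ∇²rᵢ, which is exactly the term the paper approximates with secant updates when residuals are large. A minimal sketch of the residual formulation using SciPy's solver (a stand-in, not the paper's secant-based method; the model and data are assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

# Fit y = a * exp(b * t): the residual vector r(theta) whose squared
# norm is minimized. Illustrates the problem class only.
t = np.linspace(0.0, 1.0, 30)
a_true, b_true = 2.0, -1.5
y = a_true * np.exp(b_true * t)           # noise-free data

def residual(theta):
    a, b = theta
    return a * np.exp(b * t) - y

sol = least_squares(residual, x0=np.array([1.0, 0.0]))
print(np.allclose(sol.x, [a_true, b_true], atol=1e-5))
```

With zero-residual data any Gauss-Newton-type method converges to the true parameters; the large-residual regime of the paper is where the neglected second-order term starts to matter.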

GRADIENT PROJECTION METHODS FOR THE n-COUPLING PROBLEM

  • Kum, Sangho;Yun, Sangwoon
    • Journal of the Korean Mathematical Society
    • /
    • v.56 no.4
    • /
    • pp.1001-1016
    • /
    • 2019
  • We are concerned with optimization methods for the $L^2$-Wasserstein least squares problem for Gaussian measures (equivalently, the n-coupling problem). Based on its equivalent form on the convex cone of positive definite matrices of fixed size and the strict convexity of the variance function, we present an implementable (accelerated) gradient method for finding the unique minimizer. A global convergence rate analysis is provided, based on a derived upper bound on the Lipschitz constants of the gradient function.
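The unique minimizer (the Wasserstein barycenter covariance) satisfies a fixed-point equation on the positive definite cone. A sketch of the standard fixed-point iteration for centered Gaussians, offered as a simpler alternative to the paper's accelerated gradient method (iteration count and test data are assumptions):

```python
import numpy as np
from scipy.linalg import sqrtm

def barycenter_cov(covs, weights, iters=50):
    """Fixed-point iteration for the covariance of the L2-Wasserstein
    barycenter of centered Gaussians (not the paper's gradient method)."""
    S = np.eye(covs[0].shape[0])
    for _ in range(iters):
        R = np.real(sqrtm(S))
        Ri = np.linalg.inv(R)
        T = np.real(sum(w * sqrtm(R @ C @ R) for w, C in zip(weights, covs)))
        S = Ri @ T @ T @ Ri
        S = (S + S.T) / 2                 # symmetrize against round-off
    return S

# Commuting (diagonal) covariances admit the closed form
# S* = (sum_i w_i C_i^{1/2})**2, which the iteration should reproduce.
covs = [np.diag([1.0, 4.0]), np.diag([9.0, 1.0])]
w = [0.5, 0.5]
S_star = np.diag([(0.5 * 1 + 0.5 * 3) ** 2, (0.5 * 2 + 0.5 * 1) ** 2])
print(np.allclose(barycenter_cov(covs, w), S_star, atol=1e-6))
```

In the commuting case the iteration converges in one step; the gradient methods of this paper and the next trade such fixed-point updates for per-iteration costs with provable global rates.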

STOCHASTIC GRADIENT METHODS FOR L2-WASSERSTEIN LEAST SQUARES PROBLEM OF GAUSSIAN MEASURES

  • YUN, SANGWOON;SUN, XIANG;CHOI, JUNG-IL
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.25 no.4
    • /
    • pp.162-172
    • /
    • 2021
  • This paper proposes stochastic methods for finding an approximate solution of the L2-Wasserstein least squares problem of Gaussian measures. The variable of the problem lies in the set of positive definite matrices. The first proposed method is a classical stochastic gradient method combined with projection, and the second is a variance-reduced method with projection. Their global convergence is analyzed using the framework of proximal stochastic gradient methods. Convergence of the classical stochastic gradient method with projection is established under a diminishing learning-rate rule, in which the learning rate decreases as the epoch increases, whereas convergence of the variance-reduced method with projection can be established with a constant learning rate. Numerical results show that the proposed algorithms with a proper learning rate outperform a gradient projection method.
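Both methods keep the iterate inside the positive definite cone by projecting after each stochastic gradient step. A minimal sketch of one such projected step, using eigenvalue clipping as the projection (the floor eps, step size, and test matrices are assumptions, not details from the paper):

```python
import numpy as np

def project_psd(S, eps=1e-8):
    """Project a symmetric matrix onto the cone {S : S >= eps * I}
    by clipping its eigenvalues."""
    S = (S + S.T) / 2
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.maximum(w, eps)) @ V.T

def sgd_step(S, grad, lr):
    """One projected stochastic-gradient step: descend, then project
    back onto the positive definite cone."""
    return project_psd(S - lr * grad)

S = np.array([[1.0, 0.0], [0.0, 0.1]])
grad = np.array([[0.0, 0.0], [0.0, 1.0]])   # pushes the (1,1) entry negative
S_next = sgd_step(S, grad, lr=0.5)
print(np.all(np.linalg.eigvalsh(S_next) > 0))
```

Without the projection the raw step diag(1, -0.4) would leave the feasible set; clipping restores positive definiteness, which is what makes the diminishing- and constant-learning-rate analyses above go through.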