• Title/Abstract/Keyword: Inverse orthogonalization process


직교 입력 벡터를 이용하는 수정된 RLS 알고리즘에 관한 연구 (A Study on the Modified RLS Algorithm Using Orthogonal Input Vectors)

  • 안봉만;김관웅;안현규;한병성
    • 한국전기전자재료학회논문지 / v.32 no.1 / pp.13-19 / 2019
  • This paper proposes a simple algorithm for finding tapped-delay-line (TDL) filter coefficients in an adaptive filter driven by orthogonal input signals. The proposed algorithm obtains the coefficients and errors of a TDL filter without applying an inverse orthogonalization process to the orthogonal input signals. It has the advantage of being easy to use and similar in form to the familiar recursive least-squares (RLS) algorithm. To evaluate the proposed algorithm, a system-identification simulation of an 11th-order finite-impulse-response (FIR) filter was performed. The convergence of the learning curve and the tracking ability of the coefficient vectors are shown to be similar to those of conventional RLS analysis. The derived equations and the simulation results also confirm that the proposed algorithm can be used in a manner similar to the Levinson-Durbin algorithm.
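The abstract does not reproduce the modified algorithm itself, so the following is only a minimal baseline sketch of the system-identification setup it describes: conventional RLS adapting a TDL filter to an unknown 11th-order (12-tap) FIR system. The forgetting factor, initialization constant, and noise level are assumed values chosen for illustration.

```python
import numpy as np

# Baseline sketch: conventional RLS identification of an unknown 11th-order
# (12-tap) FIR system.  The paper's modified algorithm, which avoids the
# inverse orthogonalization step, is not reproduced here.

rng = np.random.default_rng(0)
M = 12                       # 11th-order FIR filter -> 12 taps
N = 2000                     # number of samples (assumed)
lam = 0.99                   # forgetting factor (assumed)
delta = 1e2                  # initial inverse-correlation scaling (assumed)

h_true = rng.standard_normal(M)                                  # unknown system
x = rng.standard_normal(N)                                       # white input
d = np.convolve(x, h_true)[:N] + 1e-3 * rng.standard_normal(N)   # desired + noise

w = np.zeros(M)              # adaptive TDL filter coefficients
P = delta * np.eye(M)        # inverse correlation matrix estimate

for n in range(M, N):
    u = x[n:n - M:-1]                     # tap-delay-line input vector
    k = P @ u / (lam + u @ P @ u)         # gain vector
    e = d[n] - w @ u                      # a priori error (learning curve: e**2)
    w = w + k * e                         # coefficient update
    P = (P - np.outer(k, u @ P)) / lam    # Riccati update of P

print("coefficient error:", np.linalg.norm(w - h_true))
```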

등가의 Wiener-Hopf 방정식에 관한 연구 (Research on an Equivalent Wiener-Hopf Equation)

  • 안봉만;조주필
    • 한국통신학회논문지 / v.33 no.9C / pp.743-748 / 2008
  • This paper proposes an equivalent Wiener-Hopf equation that yields the coefficients of a TDL filter for orthogonal input signals in the mean-square sense. Using the proposed equivalent Wiener-Hopf equation, the coefficients and error of the TDL filter can be expressed directly, without passing the orthogonal input signals through an inverse orthogonalization process. The paper includes a theoretical analysis of the minimum mean square error (MMSE), and a mathematical example presents the Wiener-Hopf solution and the proposed equivalent Wiener-Hopf solution side by side.
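For context, here is a minimal numerical sketch of the standard Wiener-Hopf relation the abstract builds on: the optimum TDL coefficients are $w_o = R^{-1}p$, and when the input signals are orthogonal (uncorrelated) the correlation matrix $R$ is diagonal, so each coefficient follows directly as $w_i = p_i / R_{ii}$. The paper's equivalent formulation is not reproduced; the dimensions and signals below are assumptions for illustration only.

```python
import numpy as np

# Classical Wiener-Hopf solution w_o = R^{-1} p for a TDL filter, plus the
# special case of orthogonal inputs, where R is (approximately) diagonal and
# each coefficient decouples as w_i = p_i / R_ii.

rng = np.random.default_rng(1)
M, N = 4, 10000                                   # assumed filter length / samples

U = rng.standard_normal((N, M))                   # orthogonal (uncorrelated) inputs
w_true = np.array([0.5, -1.0, 0.25, 2.0])         # assumed target coefficients
d = U @ w_true + 0.01 * rng.standard_normal(N)    # desired signal + noise

R = U.T @ U / N                                   # input correlation matrix
p = U.T @ d / N                                   # cross-correlation vector

w_wh = np.linalg.solve(R, p)                      # general Wiener-Hopf solution
w_diag = p / np.diag(R)                           # decoupled solution, orthogonal case

print(w_wh)
print(w_diag)                                     # close to w_wh for orthogonal inputs
```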

A PRECONDITIONER FOR THE LSQR ALGORITHM

  • Karimi, Saeed;Salkuyeh, Davod Khojasteh;Toutounian, Faezeh
    • Journal of Applied Mathematics & Informatics / v.26 no.1_2 / pp.213-222 / 2008
  • Iterative methods are often suitable for solving least squares problems $\min\|Ax-b\|_2$, where $A \in \mathbb{R}^{m \times n}$ is large and sparse. The well-known LSQR algorithm is among the iterative methods for solving these problems, and a good preconditioner is often needed to speed up its convergence. In this paper we present numerical experiments applying a well-known preconditioner to the LSQR algorithm. The preconditioner is based on the $A^TA$-orthogonalization process, which furnishes an incomplete upper-lower factorization of the inverse of the normal matrix $A^TA$. Its main advantage is that only one of the factors is applied, as a right preconditioner, to the LSQR iteration for $\min\|Ax-b\|_2$. The preconditioner requires only sparse matrix-vector products and significantly reduces the solution time compared to the unpreconditioned iteration. Finally, numerical experiments on test matrices from the Harwell-Boeing collection show the robustness and efficiency of this preconditioner.
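For illustration, the sketch below shows how a single right preconditioner $M$ is applied to LSQR: solve $\min\|(AM)y - b\|_2$ and recover $x = My$. It uses SciPy's lsqr on a random sparse test matrix with a simple column-scaling stand-in for $M$; the actual factor produced by the $A^TA$-orthogonalization process, and the Harwell-Boeing test matrices, are not reproduced here.

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import LinearOperator, lsqr

# Right-preconditioned LSQR: solve min ||(A M) y - b||_2, then x = M y.
# M here is a diagonal column-scaling stand-in for the single factor of the
# incomplete factorization of (A^T A)^{-1} described in the abstract.

rng = np.random.default_rng(2)
m, n = 500, 100                                   # assumed problem size
A = sprandom(m, n, density=0.05, random_state=2, format="csr")
b = rng.standard_normal(m)

col_norms = np.sqrt(np.asarray(A.power(2).sum(axis=0)).ravel()) + 1e-12
M_diag = 1.0 / col_norms                          # stand-in right preconditioner

AM = LinearOperator(
    (m, n),
    matvec=lambda y: A @ (M_diag * y),            # (A M) y
    rmatvec=lambda r: M_diag * (A.T @ r),         # M^T A^T r
)

y, istop, itn_prec = lsqr(AM, b, atol=1e-10, btol=1e-10)[:3]
x = M_diag * y                                    # recover solution of min ||Ax - b||

itn_plain = lsqr(A, b, atol=1e-10, btol=1e-10)[2]
print("iterations (preconditioned / plain):", itn_prec, itn_plain)
```

Note that only matrix-vector products with $A$, $A^T$, and the preconditioner factor are needed, which is the property the abstract highlights.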
