• Title/Summary/Keyword: Subspace projection


Hyperspectral Target Detection by Iterative Error Analysis Based Spectral Unmixing

  • Kim, Kwang-Eun
    • Korean Journal of Remote Sensing / v.33 no.5_1 / pp.547-557 / 2017
  • In this paper, a new spectral-unmixing-based target detection algorithm is proposed which adopts Iterative Error Analysis (IEA) as a tool for extracting background endmembers, using the target spectrum to be detected as the initial endmember. In the presented method, the number of background endmembers is decided automatically during the IEA by stopping the iteration when the maximum change in the abundance of the target falls below a given threshold. The proposed algorithm neither depends on the selection of image endmembers, as model-based approaches such as Orthogonal Subspace Projection do, nor suffers from the target's influence on the background statistics, as stochastic approaches such as the Matched Filter do. Experimental results with hyperspectral image data in which various real and simulated targets are implanted show that the proposed method is very effective for detecting both rare and non-rare targets. It is expected that the proposed method can be used effectively for mineral detection and mapping as well as target object detection.
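The iterative scheme described above (grow a background endmember set starting from the target spectrum, stop when the target abundance map stabilizes) can be sketched as follows. The unconstrained least-squares unmixing and the largest-residual endmember selection are simplifying assumptions, not the paper's exact IEA formulation:

```python
import numpy as np

def iea_target_unmixing(pixels, target, tol=1e-3, max_endmembers=10):
    """Grow a background endmember set starting from the target spectrum;
    stop when the target abundance map changes by less than `tol`.
    pixels: (num_pixels, num_bands); target: (num_bands,)."""
    endmembers = [target]
    prev_abund = None
    target_abund = None
    for _ in range(max_endmembers):
        M = np.column_stack(endmembers)               # bands x endmembers
        # unconstrained least-squares abundances (the real IEA adds
        # sum-to-one / non-negativity constraints)
        A, *_ = np.linalg.lstsq(M, pixels.T, rcond=None)
        target_abund = A[0]                           # first row = target abundance
        if prev_abund is not None and np.max(np.abs(target_abund - prev_abund)) < tol:
            break                                     # stopping rule from the abstract
        prev_abund = target_abund
        resid = pixels.T - M @ A                      # per-pixel reconstruction error
        worst = np.argmax(np.sum(resid ** 2, axis=0))
        endmembers.append(pixels[worst])              # largest-error pixel becomes a new background endmember
    return target_abund
```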

UNITARY INTERPOLATION ON AX = Y IN ALG$\mathcal{L}$

  • Kang, Joo-Ho
    • Honam Mathematical Journal / v.31 no.3 / pp.421-428 / 2009
  • Given operators X and Y acting on a Hilbert space $\mathcal{H}$, an interpolating operator is a bounded operator A such that AX = Y. In this paper, the following is shown: Let $\mathcal{L}$ be a subspace lattice acting on a Hilbert space $\mathcal{H}$ and let $X_i$ and $Y_i$ be operators in $B(\mathcal{H})$ for $i = 1, 2, \cdots$. Let $P_i$ be the projection onto $\overline{\mathrm{range}\,X_i}$ for all $i = 1, 2, \cdots$. If $P_kE = EP_k$ for some $k$ in $\mathbb{N}$ and all $E$ in $\mathcal{L}$, then the following are equivalent: (1) $\sup\left\{\frac{\|E^{\perp}(\sum^n_{i=1}Y_if_i)\|}{\|E^{\perp}(\sum^n_{i=1}X_if_i)\|} : f_i\in\mathcal{H},\ n\in\mathbb{N},\ E\in\mathcal{L}\right\} < \infty$, $\overline{\mathrm{range}\,Y_k} = \overline{\mathrm{range}\,X_k} = \mathcal{H}$, and $\langle X_kf, X_kg\rangle = \langle Y_kf, Y_kg\rangle$ for some $k$ in $\mathbb{N}$ and for all $f$ and $g$ in $\mathcal{H}$. (2) There exists an operator A in Alg$\mathcal{L}$ such that $AX_i = Y_i$ for $i = 1, 2, \cdots$ and $AA^* = I = A^*A$.

NORMAL INTERPOLATION ON AX = Y IN ALG$\mathcal{L}$

  • Jo, Young-Soo
    • Honam Mathematical Journal / v.30 no.2 / pp.329-334 / 2008
  • Given operators X and Y acting on a Hilbert space $\mathcal{H}$, an interpolating operator is a bounded operator A such that AX = Y. In this article, the following is proved: Let $\mathcal{L}$ be a subspace lattice on $\mathcal{H}$ and let X and Y be operators acting on $\mathcal{H}$. Let P be the projection onto $\overline{\mathrm{range}\,X}$. If PE = EP for each $E \in \mathcal{L}$, then the following are equivalent: (1) $\sup\left\{\frac{\|E^{\perp}Yf\|}{\|E^{\perp}Xf\|} : f\in\mathcal{H},\ E\in\mathcal{L}\right\} < \infty$, $\overline{\mathrm{range}\,Y} \subset \overline{\mathrm{range}\,X}$, and there is a bounded operator T acting on $\mathcal{H}$ such that $\langle Xf, Tg\rangle = \langle Yf, Xg\rangle$ and $\langle Tf, Tg\rangle = \langle Yf, Yg\rangle$ for all $f$ and $g$ in $\mathcal{H}$, and $T^*h = 0$ for $h \in \overline{\mathrm{range}\,X}^{\perp}$. (2) There is a normal operator A in Alg$\mathcal{L}$ such that AX = Y and Ag = 0 for all $g$ in $\overline{\mathrm{range}\,X}^{\perp}$.

SELF-ADJOINT INTERPOLATION ON Ax = y IN ALG$\cal{L}$

  • Kwak, Sung-Kon;Kang, Joo-Ho
    • Journal of applied mathematics & informatics / v.29 no.3_4 / pp.981-986 / 2011
  • Given vectors x and y in a Hilbert space $\mathcal{H}$, an interpolating operator is a bounded operator T such that Tx = y. An interpolating operator for n vectors satisfies the equations $Tx_i = y_i$ for $i = 1, 2, \cdots, n$. In this paper, the following is proved: Let $\mathcal{L}$ be a subspace lattice on a Hilbert space $\mathcal{H}$. Let x and y be vectors in $\mathcal{H}$ and let $P_x$ be the projection onto $\mathrm{sp}(x)$. If $P_xE = EP_x$ for each $E \in \mathcal{L}$, then the following are equivalent. (1) There exists an operator A in Alg$\mathcal{L}$ such that Ax = y, Af = 0 for all f in $\mathrm{sp}(x)^{\perp}$, and $A = A^*$. (2) $\sup\left\{\frac{\|E^{\perp}y\|}{\|E^{\perp}x\|} : E \in \mathcal{L}\right\} < \infty$, $y \in \mathrm{sp}(x)$, and $\langle x, y\rangle = \langle y, x\rangle$.
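Condition (2) above can be checked numerically in the simplest case: if $y = cx$ with $c$ real, the operator $A = cP_x$ interpolates $Ax = y$, annihilates $\mathrm{sp}(x)^{\perp}$, and is self-adjoint. A small sketch with hypothetical vectors:

```python
import numpy as np

# Hypothetical vectors: y = c*x with real c, so <x, y> = <y, x> holds.
x = np.array([1.0 + 2.0j, 0.5j, 3.0])
c = 2.5
y = c * x

P_x = np.outer(x, x.conj()) / np.vdot(x, x)   # orthogonal projection onto sp(x)
A = c * P_x                                   # candidate interpolating operator

assert np.allclose(A @ x, y)                  # Ax = y
assert np.allclose(A, A.conj().T)             # A = A*
assert np.allclose(A @ (np.eye(3) - P_x), 0)  # Af = 0 on sp(x)^perp
```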

Robust Face Recognition Based on Gabor Feature Vector Illumination PCA Model

  • Seol, Tae-In;Kim, Sang-Hoon;Chung, Sun-Tae;Jo, Seong-Won
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.6 / pp.67-76 / 2008
  • Reliable face recognition under various illumination environments is essential for successful commercialization. Feature-based face recognition relies on a good choice of feature vectors. Gabor feature vectors are known to be more robust to variations of pose and illumination than other feature vectors, so they are popularly adopted for face recognition. However, they are not completely independent of illumination. In this paper, we propose an illumination-robust face recognition method based on the Gabor feature vector illumination PCA model. We first construct the Gabor feature vector illumination PCA model, in which the Gabor feature vector space is decomposed into two orthogonal subspaces: an illumination subspace and a face-identity subspace. Since the Gabor feature vectors obtained by projection onto the face-identity subspace are separated from illumination, face recognition utilizing them becomes more robust to illumination. Through experiments, it is shown that the proposed face recognition based on the Gabor feature vector illumination PCA model performs more reliably under various illumination and pose environments.
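A minimal sketch of the subspace-decomposition idea: learn an illumination subspace by PCA on illumination-varying feature samples, then project features onto its orthogonal complement (the identity-subspace part). Function and parameter names are hypothetical, and the Gabor feature extraction itself is omitted:

```python
import numpy as np

def remove_illumination(features, illum_samples, k=5):
    """Project feature vectors onto the orthogonal complement of an
    illumination subspace learned by PCA.  (Sketch: `features` stands in
    for Gabor feature vectors; `k` is the assumed subspace dimension.)"""
    X = illum_samples - illum_samples.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    U = Vt[:k].T                                      # (dim, k) orthonormal illumination basis
    P_illum = U @ U.T                                 # projector onto illumination subspace
    return features @ (np.eye(U.shape[0]) - P_illum)  # identity-subspace component
```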

Multiple Target Angle Tracking Algorithm with Efficient Equation for Angular Innovation

  • Ryu, Chang-Soo;Lee, Jang-Sik;Lee, Kyun-Kyung
    • Journal of the Institute of Electronics Engineers of Korea SC / v.38 no.6 / pp.1-8 / 2001
  • Recently, Ryu et al. proposed a multiple target angle tracking algorithm using the angular innovation extracted from the estimated signal subspace. This algorithm obtains the angles of targets and associates data simultaneously, so it has a simple structure without the data association problem. However, it requires the inversion of a real matrix of dimension $(2N+1) \times (2N+1)$ to obtain the angular innovations of N targets. In this paper, a new linear equation for the angular innovation is proposed using the fact that the projection error is zero when the target steering vector is projected onto the signal subspace. As a result, the proposed algorithm does not require the matrix inversion and is computationally efficient.
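The key fact, that the noise-subspace projection of the true steering vector vanishes, yields a linear (Newton-style) step for the angular innovation. A sketch for a uniform linear array with half-wavelength spacing, not the paper's exact equation:

```python
import numpy as np

def angular_innovation(theta_pred, Es, n_sensors=8):
    """One Newton-style step for the angle update, using the fact that the
    noise-subspace projection of the true steering vector is zero.
    Es: (n_sensors, K) orthonormal signal-subspace basis.  Sketch only."""
    d = np.arange(n_sensors)
    a = np.exp(1j * np.pi * d * np.sin(theta_pred))   # ULA steering vector, half-wavelength spacing
    da = 1j * np.pi * d * np.cos(theta_pred) * a      # derivative of a w.r.t. theta
    P_noise = np.eye(n_sensors) - Es @ Es.conj().T    # noise-subspace projector
    # linearize P_noise @ a(theta_pred + dtheta) ~ 0 and solve for dtheta
    num = np.real(da.conj() @ P_noise @ a)
    den = np.real(da.conj() @ P_noise @ da)
    return -num / den                                 # angular innovation (radians)
```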


Mean Square Projection Error Gradient-based Variable Forgetting Factor FAPI Algorithm

  • Seo, YoungKwang;Shin, Jong-Woo;Seo, Won-Gi;Kim, Hyoung-Nam
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.5 / pp.177-187 / 2014
  • This paper proposes a fast subspace tracking method, called GVFF FAPI, based on the FAPI (Fast Approximated Power Iteration) method and GVFF RLS (Gradient-based Variable Forgetting Factor Recursive Least Squares). Since the conventional FAPI uses a constant forgetting factor for estimating the covariance matrix of the source signals, it is difficult to apply in non-stationary environments such as continuously changing DOAs of source signals. To overcome this drawback of the conventional FAPI method, GVFF FAPI uses a gradient-based variable forgetting factor derived from an improved mean square error (MSE) analysis of RLS. In order to achieve a decreased subspace error in non-stationary environments, the GVFF-FAPI algorithm uses an improved forgetting-factor update equation that produces a fast-decreasing forgetting factor when the gradient is positive and a slowly increasing forgetting factor when the gradient is negative. Our numerical simulations show that the GVFF-FAPI algorithm offers a lower subspace error and a lower RMSE (Root Mean Square Error) of the tracked DOAs of source signals than conventional FAPI-based MUSIC (MUltiple SIgnal Classification).
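The forgetting-factor update rule described above can be sketched as follows; the step sizes and bounds are illustrative assumptions, not the paper's tuned values:

```python
def update_forgetting_factor(lam, mse_gradient, lam_min=0.9, lam_max=0.999,
                             fast_step=0.01, slow_step=0.001):
    """Variable forgetting factor update following the rule in the abstract:
    decrease quickly on a positive MSE gradient (shorter memory, faster
    tracking), increase slowly on a negative one (longer memory, lower
    variance).  Step sizes and bounds are illustrative."""
    if mse_gradient > 0:
        lam -= fast_step    # fast decrease
    else:
        lam += slow_step    # slow increase
    return min(max(lam, lam_min), lam_max)
```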

Control of Flexible Multi-Body System

  • Cho, Sung-Ki;Kim, Jae-Hoon
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2003.10a / pp.2566-2569 / 2003
  • An alternative optimal control law formulation is introduced and compared with two different control laws: a conventional linear quadratic regulator and a control law based on game theory. This formulation eliminates the undesired modes of the system by projecting the controller onto the subspace orthogonal to that of the bad modes. With a conventional LQR control law, the control performance can be improved only by using proper weighting matrices in the performance index, normally at high cost. The control law formulation based on game theory may provide various ways to obtain the desired performance. The control law modified by the elimination of bad modes provides an efficient way to remove undesired behavior, since it eliminates exactly the modes that cause the bad control performance.
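The mode-elimination idea, projecting the feedback gain onto the orthogonal complement of the bad mode shapes, can be sketched as follows (names hypothetical):

```python
import numpy as np

def project_out_modes(K, bad_mode_shapes):
    """Modify a feedback gain K so the control has no component along the
    undesired mode shapes: project K onto the subspace orthogonal to them.
    (Sketch; `bad_mode_shapes` is a list of state-space mode vectors.)"""
    V = np.column_stack(bad_mode_shapes)      # (n_states, n_bad_modes)
    Q, _ = np.linalg.qr(V)                    # orthonormal basis of the bad-mode span
    P_perp = np.eye(V.shape[0]) - Q @ Q.T     # projector onto the complement
    return K @ P_perp                         # modified gain ignores the bad modes
```

With the modified gain, any state component along a bad mode produces zero control action, so those modes are left untouched by the feedback.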


LMS and LTS-type Alternatives to Classical Principal Component Analysis

  • Huh, Myung-Hoe;Lee, Yong-Goo
    • Communications for Statistical Applications and Methods / v.13 no.2 / pp.233-241 / 2006
  • Classical principal component analysis (PCA) can be formulated as finding the linear subspace that best accommodates multidimensional data points, in the sense that the sum of squared residual distances is minimized. As alternatives to this LS (least squares) fitting approach, we produce LMS (least median of squares) and LTS (least trimmed squares)-type PCA by minimizing the median of squared residual distances and the trimmed sum of squares, respectively, in a fashion similar to Rousseeuw's (1984) alternative approaches to LS linear regression. The proposed methods adopt the data-driven optimization algorithm of Croux and Ruiz-Gazen (1996, 2005), which is conceptually simple and computationally practical. Numerical examples are given.
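A minimal sketch of the LMS-type variant using a data-driven candidate search in the spirit of Croux and Ruiz-Gazen: each robustly centered data point supplies a candidate direction, and the direction minimizing the median squared residual distance wins:

```python
import numpy as np

def lms_first_component(X):
    """LMS-type first principal direction: among candidate directions given
    by the (robustly centered) data points themselves, keep the one that
    minimizes the MEDIAN of squared residual distances to the fitted line.
    (Replacing the median by a trimmed sum gives the LTS-type variant.)"""
    Xc = X - np.median(X, axis=0)                 # robust centering
    best_dir, best_crit = None, np.inf
    for xi in Xc:
        norm = np.linalg.norm(xi)
        if norm < 1e-12:
            continue
        d = xi / norm                             # candidate direction
        proj = np.outer(Xc @ d, d)                # projections onto the candidate line
        resid = np.sum((Xc - proj) ** 2, axis=1)  # squared residual distances
        crit = np.median(resid)                   # LMS criterion
        if crit < best_crit:
            best_crit, best_dir = crit, d
    return best_dir
```

Unlike the LS criterion, the median is insensitive to a minority of outlying points, so a few gross outliers cannot tilt the fitted direction.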

The metric approximation property and intersection properties of balls

  • Cho, Chong-Man
    • Journal of the Korean Mathematical Society / v.31 no.3 / pp.467-475 / 1994
  • In 1983, Harmand and Lima [5] proved that if X is a Banach space for which K(X), the space of compact linear operators on X, is an M-ideal in L(X), the space of bounded linear operators on X, then X has the metric compact approximation property. A strong converse of the above result holds if X is a closed subspace of either $\ell_p$ $(1 < p < \infty)$ or $c_0$ [2, 15]. In 1979, J. Johnson [7] actually proved that if X is a Banach space with the metric compact approximation property, then the annihilator $K(X)^{\perp}$ of K(X) in $L(X)^*$ is the kernel of a norm-one projection in $L(X)^*$, which is the case if K(X) is an M-ideal in L(X).
