• Title/Abstract/Keywords: Least Square Solution


Performance Improvement of Channel Estimation based on Time-domain Threshold for OFDM Systems (시간영역 문턱값을 이용한 OFDM 시스템의 채널 추정 성능 향상)

  • Lee, You-Seok; Kim, Hyoung-Nam
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.9C / pp.720-724 / 2008
  • Channel estimation in OFDM systems is usually carried out in the frequency domain with known pilot symbols, based on the least-squares (LS) method or the minimum mean-square error (MMSE) method. The LS estimator has the merit of low complexity but may suffer from noise because its solution does not account for any noise effect. To enhance the noise immunity of the LS estimator, we consider the estimation noise in the time domain. Residual noise in the estimated time-domain channel coefficients can be reduced by a reasonable choice of threshold value. To achieve this, we propose a channel-estimation method based on a time-domain threshold, set to the standard deviation of the noise obtained by wavelet decomposition. Computer simulation shows that, in terms of bit-error rate after the Viterbi decoder, the estimation performance of the proposed method approaches that of the known-channel case over the entire SNR range.
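
The LS-plus-thresholding idea above can be sketched on a toy problem. The 64-tone channel, all-ones pilots, and the fixed threshold below are assumptions of this example; the paper derives the threshold from a wavelet-based noise standard deviation, which is not reproduced here.

```python
import numpy as np

def ls_channel_estimate(Y, X):
    """Frequency-domain least-squares estimate at pilot tones: H_ls = Y / X."""
    return Y / X

def threshold_denoise(H_ls, threshold):
    """Go to the time domain, zero taps below the threshold, return to frequency."""
    h = np.fft.ifft(H_ls)
    h[np.abs(h) < threshold] = 0.0
    return np.fft.fft(h)

# Toy setup: 64-tone OFDM, 3-tap channel, all-ones pilot symbols
rng = np.random.default_rng(0)
N = 64
h_true = np.zeros(N, complex)
h_true[:3] = [1.0, 0.5, 0.25]
H_true = np.fft.fft(h_true)
X = np.ones(N)                                       # pilot symbols
noise = 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
Y = H_true * X + noise

H_ls = ls_channel_estimate(Y, X)
H_den = threshold_denoise(H_ls, threshold=0.02)      # illustrative fixed threshold

mse_ls = np.mean(np.abs(H_ls - H_true) ** 2)
mse_den = np.mean(np.abs(H_den - H_true) ** 2)
```

Because the true channel has only a few significant time-domain taps, zeroing the sub-threshold taps removes most of the residual noise while keeping the channel energy.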

A Study on Polynomial Neural Networks for Stabilized Deep Networks Structure (안정화된 딥 네트워크 구조를 위한 다항식 신경회로망의 연구)

  • Jeon, Pil-Han; Kim, Eun-Hu; Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.12 / pp.1772-1781 / 2017
  • In this study, a design methodology for alleviating the overfitting problem of Polynomial Neural Networks (PNN) is realized with the aid of two techniques: $L_2$ regularization and Sum of Squared Coefficients (SSC). PNN is widely used as a mathematical modeling method, for example for the identification of linear systems from input/output data and for regression-based prediction. PNN is an algorithm that obtains a preferred network structure by generating consecutive layers and nodes using multivariate polynomial subexpressions. It has far fewer nodes and more flexible adaptability than existing neural network algorithms. However, such algorithms suffer from overfitting due to noise sensitivity as well as excessive training during the generation of successive network layers. To alleviate this overfitting and to design the ensuing deep network structure effectively, the two techniques of SSC and $L_2$ regularization are applied to the consecutive generation of each layer and its nodes when constructing the deep PNN structure. $L_2$ regularization estimates minimal coefficients by adding a penalty term to the cost function; it is a representative method for reducing the influence of noise by flattening the solution space and shrinking coefficient size. The SSC technique minimizes the sum of squared polynomial coefficients instead of the squared errors. As a result, the overfitting problem of the deep PNN structure is stabilized by the proposed method. This study demonstrates the possibility of deep network structure design and big-data processing, and the superiority of the network performance is shown through experiments.
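
The $L_2$ penalty described above has a closed-form least-squares solution. A minimal sketch follows, using toy polynomial features and hypothetical data rather than the PNN node polynomials themselves:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form L2-regularized least squares: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy polynomial regression with noise: the penalty shrinks the coefficients
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 30)
X = np.vander(x, 8)                       # degree-7 polynomial features
y = x ** 2 + 0.1 * rng.standard_normal(30)

w_ls = ridge_fit(X, y, lam=0.0)           # plain least squares
w_l2 = ridge_fit(X, y, lam=1.0)           # L2-regularized solution
```

With `lam > 0` the coefficient norm strictly decreases, which is the flattening/shrinking effect the abstract refers to.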

THE EFFECT OF THE DENTINE PRETREATMENT ON THE MARGINAL LEAKAGE OF A GLASS IONOMER CEMENT (상아질 표면처리가 글라스 아이오노머 시멘트의 변연누출에 미치는 영향에 관한 연구)

  • Cho, Jung-Hee; Hong, Chan-Ui; Shin, Dong-Hoon
    • Restorative Dentistry and Endodontics / v.17 no.1 / pp.95-103 / 1992
  • The purpose of this study was to evaluate the effect of dentin pretreatment on the marginal leakage of a glass ionomer cement. In this study, 60 molars with sound and healthy crown portions were used. The dentin surfaces of these teeth were exposed and polished with 600-grit silicon carbide paper. Square-shaped cavities were prepared on the flattened dentin surfaces, and these were divided into 4 groups according to the dentin pretreatment procedure. Group I: pretreatment with distilled water, as a control group. Group II: pretreatment with 5% sodium hypochlorite solution. Group III: pretreatment with Ketac conditioner. Group IV: pretreatment with 40% polyacrylic acid. The degree of dye penetration into the cavity walls was assessed with a stereoscope at ${\times}40$ magnification according to the maximum dye penetration, and the results were analyzed using the Mann-Whitney U test. The results were as follows: 1. All groups showed varying depths of dye penetration. 2. The distilled water group showed the most severe marginal leakage compared with the other groups (P<0.05). 3. The 40% polyacrylic acid group showed the least marginal leakage compared with the other groups (P<0.05). 4. There were significant differences between Group I (distilled water) and Group IV (40% polyacrylic acid) (P<0.05), but no significant differences among Group I (distilled water), Group II (sodium hypochlorite), and Group III (Ketac conditioner) (P>0.05).
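
The Mann-Whitney U statistic used above can be computed directly by counting pairs. The ordinal scores below are illustrative stand-ins, not the study's data:

```python
def mann_whitney_u(a, b):
    """U statistic for sample a vs b: count pairs with a_i > b_j; ties add 0.5."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical ordinal dye-penetration scores (higher = deeper penetration)
control = [3, 3, 2, 3, 2]     # e.g., a distilled-water group
treated = [1, 0, 1, 2, 0]     # e.g., a polyacrylic-acid group
u_stat = mann_whitney_u(control, treated)
```

A U near the maximum (here 25 pairs) indicates one group's scores almost always exceed the other's, which is then compared against tabulated critical values for significance.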


Piezoelectric 6-dimensional accelerometer cross coupling compensation algorithm based on two-stage calibration

  • Dengzhuo Zhang; Min Li; Tongbao Zhu; Lan Qin; Jingcheng Liu; Jun Liu
    • Smart Structures and Systems / v.32 no.2 / pp.101-109 / 2023
  • To improve the measurement accuracy of a 6-dimensional accelerometer, its cross-coupling compensation method needs to be studied. In this paper, the non-linear error caused by cross coupling of a piezoelectric six-dimensional accelerometer is compensated online. The cross-coupling filter is obtained by analyzing the cross-coupling principle of the piezoelectric six-dimensional accelerometer. Linear and non-linear fitting methods are designed, and a two-stage calibration hybrid compensation algorithm is proposed. An experimental prototype of a piezoelectric six-dimensional accelerometer is fabricated, and calibration and test experiments are carried out. The measured results show that the average non-linearity of the proposed algorithm is 2.2628% lower than that of the least-squares method, the solution time is 0.019382 seconds, and the proposed algorithm realizes real-time measurement in six dimensions while improving the measurement accuracy. The proposed algorithm thus combines real-time operation with high precision. The research results provide theoretical and technical support for the calibration method and online compensation technology of 6-dimensional accelerometers.
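
The linear stage of such a cross-coupling calibration can be sketched as estimating a 6×6 sensitivity matrix by least squares and inverting it for compensation. The matrix and data below are synthetic stand-ins; the paper's non-linear fitting stage and filter design are not reproduced.

```python
import numpy as np

# Synthetic coupled sensitivity matrix: near-identity with small cross terms
rng = np.random.default_rng(2)
S_true = np.eye(6) + 0.05 * rng.standard_normal((6, 6))

# Calibration: apply known 6-D accelerations, record coupled outputs V = S a
A = rng.standard_normal((6, 100))                          # applied accelerations
V = S_true @ A + 0.001 * rng.standard_normal((6, 100))     # measured outputs

# Least-squares estimate of S: minimizes ||V - S A||_F
S_hat = V @ A.T @ np.linalg.inv(A @ A.T)

# Compensation: recover the acceleration from a new coupled measurement
a_new = np.array([1.0, -0.5, 0.2, 0.0, 0.3, -1.0])
v_new = S_true @ a_new
a_rec = np.linalg.solve(S_hat, v_new)

err = np.max(np.abs(a_rec - a_new))
```

Inverting the estimated sensitivity matrix is what removes the cross-axis leakage; the residual error here is set by the calibration noise level.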

Orbit Determination of High-Earth-Orbit Satellites by Satellite Laser Ranging

  • Oh, Hyungjik; Park, Eunseo; Lim, Hyung-Chul; Lee, Sang-Ryool; Choi, Jae-Dong; Park, Chandeok
    • Journal of Astronomy and Space Sciences / v.34 no.4 / pp.271-280 / 2017
  • This study presents the application of satellite laser ranging (SLR) to orbit determination (OD) of high-Earth-orbit (HEO) satellites. Two HEO satellites are considered: the Quasi-Zenith Satellite-1 (QZS-1), a Japanese elliptical inclined geosynchronous orbit (EIGSO) satellite, and Compass-G1, a Chinese geostationary-orbit (GEO) satellite. One week of normal point (NP) data were collected for each satellite to perform the OD based on the batch least-squares process. Five SLR tracking stations successfully obtained 374 NPs for QZS-1 in eight days, whereas only two ground tracking stations could track Compass-G1, yielding 68 NPs in ten days. Two types of station bias estimation and a station data weighting strategy were utilized for the OD of QZS-1. The post-fit root-mean-square (RMS) residuals of the two week-long arcs were 11.98 cm and 10.77 cm when estimating the biases once per arc (MBIAS). These residuals decreased significantly, to 2.40 cm and 3.60 cm, when estimating the biases every pass (PBIAS). The resultant OD precision was then evaluated by the orbit overlap method, yielding three-dimensional errors of 55.013 m with MBIAS and 1.962 m with PBIAS for the overlap period of six days. For the OD of Compass-G1, no station weighting strategy was applied, and only MBIAS was utilized due to the lack of NPs. The post-fit RMS residuals of the OD were 8.81 cm and 12.00 cm with 49 NPs and 47 NPs, respectively, and the corresponding three-dimensional orbit overlap error for four days was 160.564 m. These results indicate that the amount of SLR tracking data is critical for precise OD of HEO satellites, because additional parameters such as station bias can be estimated only with sufficient tracking data. Furthermore, a stand-alone SLR-based orbit solution is consistently attainable for HEO satellites if a target satellite is continuously trackable for a specific period.
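
Batch least-squares estimation with an appended bias parameter, the core of the MBIAS/PBIAS approach above, can be illustrated on a much smaller problem. The 2-D range geometry below is a hypothetical stand-in for the full orbit-determination state, not the paper's setup:

```python
import numpy as np

# Hypothetical geometry: 4 stations range to one target, all with a common bias
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
x_true = np.array([3.0, 4.0])
bias_true = 0.5                                    # constant range bias

rng = np.random.default_rng(3)
ranges = np.linalg.norm(stations - x_true, axis=1) + bias_true
ranges += 0.001 * rng.standard_normal(4)           # measurement noise

# Batch least squares (Gauss-Newton) on the parameter vector p = [x, y, bias]
p = np.array([5.0, 5.0, 0.0])                      # initial guess
for _ in range(10):
    d = np.linalg.norm(stations - p[:2], axis=1)   # predicted geometric ranges
    resid = ranges - (d + p[2])                    # observed minus computed
    H = np.column_stack([(p[:2] - stations) / d[:, None], np.ones(4)])
    p = p + np.linalg.solve(H.T @ H, H.T @ resid)  # normal-equation update
```

Because the bias column of `H` is all ones, it is separable from the geometry only when enough well-distributed measurements exist, mirroring the paper's finding that sparse tracking limits bias estimation.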

Numerical Analysis of Runup and Wave Force Acting on Coastal Revetment and Onshore Structure due to Tsunami (해안안벽과 육상구조물에서 지진해일파의 처오름 및 작용파력에 관한 수치해석)

  • Lee, Kwang Ho; Kim, Chang Hoon; Kim, Do Sam; Yeh, Harry; Hwang, Young Tae
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.3B / pp.289-301 / 2009
  • In this work, wave run-up heights and the resultant wave forces on a vertical revetment due to a tsunami (solitary wave) are investigated numerically using a numerical wave tank model called CADMAS-SURF (CDIT, 2001. Research and Development of Numerical Wave Channel (CADMAS-SURF). CDIT library, No. 12, Japan), which is based on a 2-D Navier-Stokes solver coupled to a volume of fluid (VOF) method. The third-order approximate solution (Fenton, 1972. A ninth-order solution for the solitary wave. J. of Fluid Mech., Vol. 53, No. 2, pp. 257-271) is used to generate solitary waves and is implemented in the original CADMAS-SURF code. Numerical results for the wave profiles and forces are in good agreement with available experimental data. Using the numerical results, regression curves determined by least-squares analysis are proposed, which can be used to determine the maximum wave run-up height and force on a vertical revetment due to a tsunami. In addition, the capability of CADMAS-SURF is demonstrated for tsunami wave forces acting on an onshore structure through computations of various configurations, including variations of the crown height of the vertical wall and the position of the onshore structure. Based on the numerical results, such as water level, velocity field, and wave force, the direct effects of a tsunami on an onshore structure are discussed.
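
A least-squares regression curve of the kind proposed above can be fitted by linearizing a power law. The run-up data and the power-law form below are synthetic assumptions for illustration, not the paper's results:

```python
import numpy as np

# Synthetic run-up data following R = a * H**b exactly (a=2.0, b=0.8)
H = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])   # incident wave height
R = 2.0 * H ** 0.8                              # run-up height

# Taking logs makes the model linear: log R = log a + b log H,
# so an ordinary least-squares line fit recovers both parameters
b, log_a = np.polyfit(np.log(H), np.log(R), 1)
a = np.exp(log_a)
```

With noisy simulation output the same fit yields the best-fitting curve in the least-squares sense rather than an exact recovery.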

Time- and Frequency-Domain Block LMS Adaptive Digital Filters: Part Ⅱ - Performance Analysis (시간영역 및 주파수영역 블럭적응 여파기에 관한 연구 : 제 2 부- 성능분석)

  • Lee, Jae-Chon; Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea / v.7 no.4 / pp.54-76 / 1988
  • In Part Ⅰ of the paper, we developed various block least-mean-square (BLMS) adaptive digital filters (ADFs) based on a unified matrix treatment. In Part Ⅱ we analyze the convergence behaviors of the self-orthogonalizing frequency-domain BLMS (FBLMS) ADF and the unconstrained FBLMS (UFBLMS) ADF for both the overlap-save and overlap-add sectioning methods. We first show that, unlike the FBLMS ADF with a constant convergence factor, the convergence behavior of the self-orthogonalizing FBLMS ADF is governed by the same autocorrelation matrix as that of the UFBLMS ADF. We then show that the optimum solution of the UFBLMS ADF is the same as that of the constrained FBLMS ADF when the filter length is sufficiently long. The mean of the weight vector of the UFBLMS ADF is also shown to converge to the optimum Wiener weight vector under a proper condition. However, the steady-state mean-squared error (MSE) of the UFBLMS ADF turns out to be slightly worse than that of the constrained algorithm if the same convergence constant is used in both cases. On the other hand, when the filter length is not sufficiently long, the constrained FBLMS ADF yields poor performance, while the performance of the UFBLMS ADF can be improved to some extent by exploiting its extended filter-length capability. As for the self-orthogonalizing FBLMS ADF, we study how the autocorrelation matrix can be approximated by a diagonal matrix in the frequency domain. We also analyze the steady-state MSEs of the self-orthogonalizing FBLMS ADFs with and without the constraint. Finally, we present various simulation results to verify our analytical results.
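
A time-domain block LMS update of the kind analyzed above can be sketched as follows; frequency-domain sectioning and self-orthogonalization are omitted, and the filter length, block size, and step size are illustrative:

```python
import numpy as np

def block_lms(x, d, L, B, mu):
    """Time-domain block LMS: update the L-tap weight vector once per
    block of B samples using the block-averaged gradient."""
    w = np.zeros(L)
    y = np.zeros(len(x))
    for start in range(0, len(x) - B + 1, B):
        grad = np.zeros(L)
        for n in range(start, start + B):
            u = x[max(0, n - L + 1):n + 1][::-1]        # tap-input vector
            u = np.pad(u, (0, L - len(u)))              # zero-pad at startup
            y[n] = w @ u
            e = d[n] - y[n]                             # a-priori error
            grad += e * u                               # accumulate gradient
        w = w + (mu / B) * grad                         # one update per block
    return w

# Identify a 4-tap FIR system from noiseless input/output data
rng = np.random.default_rng(4)
h = np.array([1.0, -0.5, 0.25, 0.1])
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]

w = block_lms(x, d, L=4, B=8, mu=0.05)
```

Dividing the accumulated gradient by `B` makes the block update equivalent, on average, to the sample-by-sample LMS step, which is why the two share the same convergence condition.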


Diagonalized Approximate Factorization Method for 3D Incompressible Viscous Flows (대각행렬화된 근사 인수분해 기법을 이용한 3차원 비압축성 점성 흐름 해석)

  • Paik, Joongcheol
    • KSCE Journal of Civil and Environmental Engineering Research / v.31 no.3B / pp.293-303 / 2011
  • An efficient diagonalized approximate factorization (DAF) algorithm is developed for the solution of three-dimensional incompressible viscous flows. The pressure-based artificial compressibility (AC) method is used for calculating the steady incompressible Navier-Stokes equations. The AC form of the governing equations is discretized in space using a second-order-accurate finite volume method. The present DAF method is applied to derive a second-order-accurate splitting of the discrete system of equations. The primary objective of this study is to investigate the computational efficiency of the present DAF method. The solutions of the DAF method are evaluated relative to those of the well-known four-stage Runge-Kutta (RK4) method for fully developed and developing laminar flows in curved square ducts and a laminar flow in a cavity. While converged solutions obtained by the DAF and RK4 methods on the same computational meshes are essentially identical, because the same spatial discretization schemes are employed, the two algorithms show a significant discrepancy in computational efficiency. The results reveal that the DAF method requires less than half the computational time of RK4 for all of the flow fields considered. This increase in computational efficiency is achieved with no increase in computational resources or coding complexity.
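
The four-stage Runge-Kutta reference scheme can be sketched in pseudo-time form on a simple stiff linear model; the diagonal system below is a stand-in for the discretized AC equations, and the DAF splitting itself is not reproduced:

```python
import numpy as np

def rk4_step(f, u, dt):
    """Classical four-stage Runge-Kutta step, as used for pseudo-time marching."""
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# March du/dt = -A u to its steady state (u = 0); the spread of eigenvalues
# models the stiffness that limits the explicit pseudo-time step
A = np.diag([1.0, 10.0, 100.0])
u = np.ones(3)
for _ in range(2000):
    u = rk4_step(lambda v: -A @ v, u, dt=0.02)
```

The stiffest mode caps `dt` for the explicit scheme, so many pseudo-time steps are needed; an implicit factorization such as DAF can take larger steps per iteration, which is the efficiency gap the abstract measures.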

RPC Model Generation from the Physical Sensor Model (영상의 물리적 센서모델을 이용한 RPC 모델 추출)

  • Kim, Hye-Jin; Kim, Jae-Bin; Kim, Yong-Il
    • Journal of Korean Society for Geospatial Information Science / v.11 no.4 s.27 / pp.21-27 / 2003
  • The rational polynomial coefficients (RPC) model is a generalized sensor model that is used as an alternative to the physical sensor model for IKONOS-2 and QuickBird. As the number of sensors increases along with their complexity, and as the need for a standard sensor model has become important, the applicability of the RPC model is also increasing. The RPC model can be substituted for all sensor models, such as the projective camera, the linear pushbroom sensor, and SAR. This paper is aimed at generating an RPC model from the physical sensor models of KOMPSAT-1 (Korea Multi-Purpose Satellite) imagery and aerial photography. KOMPSAT-1 collects 510~730 nm panchromatic images with a ground sample distance (GSD) of 6.6 m and a swath width of 17 km by pushbroom scanning. We generated the RPC from a physical sensor model of KOMPSAT-1 and aerial photography. An iterative least-squares solution based on the Levenberg-Marquardt algorithm is used to estimate the RPC. In addition, data normalization and regularization are applied to improve the accuracy and minimize noise. The accuracy of the test was evaluated based on 2-D image coordinates. From this test, we found that the RPC model is suitable for both KOMPSAT-1 and aerial photography.
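
An iterative least-squares fit via Levenberg-Marquardt, as mentioned above, can be sketched on a tiny rational model over normalized coordinates. The model, parameter names, and data below are illustrative assumptions, not the full 78-coefficient RPC form:

```python
import numpy as np

def model(p, x):
    """Toy rational model r(x) = (a0 + a1*x) / (1 + b1*x)."""
    a0, a1, b1 = p
    return (a0 + a1 * x) / (1.0 + b1 * x)

def jacobian(p, x):
    """Analytic partial derivatives of the toy model w.r.t. (a0, a1, b1)."""
    a0, a1, b1 = p
    den = 1.0 + b1 * x
    return np.column_stack([1.0 / den, x / den, -(a0 + a1 * x) * x / den ** 2])

def levenberg_marquardt(x, y, p0, iters=50, lam=1e-3):
    """Damped normal-equation iteration: (J^T J + lam*I) step = J^T r."""
    p = p0.copy()
    for _ in range(iters):
        r = y - model(p, x)
        J = jacobian(p, x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
        if np.sum((y - model(p + step, x)) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5      # accept step, relax damping
        else:
            lam *= 10.0                       # reject step, increase damping
    return p

x = np.linspace(-1.0, 1.0, 50)                # normalized coordinates
p_true = np.array([0.5, 2.0, 0.3])
y = model(p_true, x)

p_hat = levenberg_marquardt(x, y, p0=np.zeros(3))
```

The damping term `lam` plays the same stabilizing role as the regularization the abstract mentions: it keeps the normal equations well conditioned when the rational denominator makes the problem nearly singular.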


Least-Square Fitting of Intrinsic and Scattering Q Parameters (최소자승법(最小自乘法)에 의(衣)한 고유(固有) Q와 산란(散亂) Q의 측정(測定))

  • Kang, Ik Bum; McMechan, George A.; Min, Kyung Duck
    • Economic and Environmental Geology / v.27 no.6 / pp.557-561 / 1994
  • Q estimates are made by direct measurements of the energy loss per cycle from primary P and S waves, as a function of frequency. Assuming that intrinsic Q is frequency independent and scattering Q is frequency dependent over the frequencies of interest, the relative contributions of each to a total observed Q may be estimated. Test examples are produced by computing viscoelastic synthetic seismograms using a pseudospectral solution with inclusion of relaxation mechanisms (for intrinsic Q) and a fractal distribution of scatterers (for scattering Q). The composite theory implies that when the total Q for S-waves is smaller than that for P-waves (the usual situation), intrinsic Q dominates; when it is larger, scattering Q dominates. In the inverse problem, performed by a global least-squares search, intrinsic $Q_p$ and $Q_s$ estimates are reliable and unique when their absolute values are sufficiently low that their effects are measurable in the data. Large $Q_p$ and $Q_s$ have no measurable effect and hence are not resolvable. The standard deviation of velocity $({\sigma})$ and the scatterer size (A) are less unique, as they exhibit a tradeoff as predicted by Blair's equation. For the P-waves, the intrinsic and scattering contributions are of approximately the same importance; for the S-waves, the intrinsic contributions dominate.
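
The separation of intrinsic and scattering Q can be sketched for a hypothetical scattering law $Q_s(f) = qf$, under which $1/Q_t$ is linear in the unknowns and a direct least-squares solve applies. The composite rule and data below are illustrative assumptions; the paper's global search over ${\sigma}$ and A is not reproduced:

```python
import numpy as np

# Composite attenuation: 1/Q_t(f) = 1/Q_i + 1/Q_s(f), with Q_s(f) = q * f
f = np.array([5.0, 10.0, 20.0, 40.0, 80.0])        # frequencies (Hz)
Qi_true, q_true = 100.0, 30.0
Qt_obs = 1.0 / (1.0 / Qi_true + 1.0 / (q_true * f))

# 1/Q_t = (1/Q_i)*1 + (1/q)*(1/f) is linear in the unknowns,
# so ordinary linear least squares separates the two contributions
A = np.column_stack([np.ones_like(f), 1.0 / f])
coef, *_ = np.linalg.lstsq(A, 1.0 / Qt_obs, rcond=None)
Qi_hat, q_hat = 1.0 / coef[0], 1.0 / coef[1]
```

The frequency-independent intercept recovers intrinsic Q while the 1/f slope recovers the scattering term, which is exactly why frequency-dependent measurements allow the two to be separated.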
