• Title/Summary/Keyword: Least Squares Algorithm


Prediction of Failure Time of Tunnel Applying the Curve Fitting Techniques (곡선적합기법을 이용한 터널의 파괴시간 예측)

  • Yoon, Yong-Kyun;Jo, Young-Do
    • Tunnel and Underground Space
    • /
    • v.20 no.2
    • /
    • pp.97-104
    • /
    • 2010
  • The materials failure relation $\ddot{\Omega}=A(\dot{\Omega})^\alpha$, where $\Omega$ is a measurable quantity such as displacement and the dot superscript denotes the time derivative, may be used to analyze the accelerating creep of materials. The coefficients A and $\alpha$ are determined by fitting the given data sets. In this study, the materials failure relation is used to predict the failure time of a tunnel. Four fitting techniques based on the relation are applied to forecast the failure time: the log velocity versus log acceleration technique, the log time versus log velocity technique, and the inverse velocity technique rely on linear least squares fits, while the non-linear least squares technique uses the Levenberg-Marquardt algorithm. Because the log velocity versus log acceleration technique uses a logarithmic representation of the materials failure relation, it indicates how suitable the relation is for predicting the failure time of a tunnel. The linear correlation between log velocity and log acceleration is satisfactory (R=0.84), which shows that the materials failure relation is a suitable model for predicting the failure time of a tunnel. Comparing the actual failure time of the tunnel with the failure times predicted by the four curve fits shows that the log time versus log velocity technique gives the best prediction.
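
As a rough illustration of the linear least-squares step behind these techniques, the sketch below (synthetic creep data and numpy only, not the authors' code or data) fits the log acceleration versus log velocity line and extrapolates the inverse velocity to estimate a failure time.

```python
# A minimal sketch, assuming a synthetic accelerating-creep record, of fitting
# the materials failure relation  d2Ω/dt2 = A (dΩ/dt)^α  and predicting the
# failure time with the inverse-velocity technique.
import numpy as np

t = np.linspace(0.0, 90.0, 200)            # hours (hypothetical monitoring record)
t_f_true = 100.0                           # assumed true failure time
disp = -np.log(t_f_true - t)               # displacement accelerating toward t_f

vel = np.gradient(disp, t)                 # dΩ/dt
acc = np.gradient(vel, t)                  # d2Ω/dt2

# Log velocity versus log acceleration: log(acc) = log(A) + α·log(vel),
# solved by a linear least-squares fit.
alpha, logA = np.polyfit(np.log(vel), np.log(acc), 1)
print(f"alpha ≈ {alpha:.2f}, A ≈ {np.exp(logA):.3g}")

# Inverse-velocity technique: extrapolate the straight-line fit of 1/vel
# versus t to its zero crossing, which estimates the failure time.
slope, intercept = np.polyfit(t, 1.0 / vel, 1)
t_f_pred = -intercept / slope
print(f"predicted failure time ≈ {t_f_pred:.1f} h (true {t_f_true} h)")
```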

Improving SVM Classification by Constructing Ensemble (앙상블 구성을 이용한 SVM 분류성능의 향상)

  • 제홍모;방승양
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.3_4
    • /
    • pp.251-258
    • /
    • 2003
  • A support vector machine (SVM) is expected to provide good generalization performance, but the actual performance of an implemented SVM is often far from the theoretically expected level. This is largely because the implementation relies on an approximated algorithm due to the high time and space complexity. To overcome this limitation, we propose ensembles of SVMs built with Bagging (bootstrap aggregating) and Boosting. In the Bagging stage, each individual SVM is trained independently on training samples chosen randomly via a bootstrap technique. In the Boosting stage, an individual SVM is trained on training samples chosen according to a probability distribution, which is updated based on the errors of the independent classifiers, and the process is iterated. After training, the individual SVMs are aggregated to make a collective decision in several ways, such as majority voting, LSE (least squares estimation)-based weighting, and double-layer hierarchical combining. Simulation results for IRIS data classification, hand-written digit recognition, and face detection show that the proposed SVM ensembles greatly outperform a single SVM in terms of classification accuracy.
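
The Bagging stage described above can be sketched in a few lines; the code below is an assumed illustration using scikit-learn's SVC on the Iris data, not the authors' implementation, and shows bootstrap training plus majority voting.

```python
# A minimal sketch of a bagged SVM ensemble: each member is trained on a
# bootstrap sample and the ensemble decides by majority voting.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
n_estimators = 15
members = []
for _ in range(n_estimators):
    # Bootstrap: sample training indices with replacement.
    idx = rng.integers(0, len(X_tr), size=len(X_tr))
    members.append(SVC(kernel="rbf", gamma="scale").fit(X_tr[idx], y_tr[idx]))

# Majority voting over the individual SVM predictions.
votes = np.stack([m.predict(X_te) for m in members])   # shape (n_estimators, n_test)
majority = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

single = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("single SVM accuracy :", (single.predict(X_te) == y_te).mean())
print("bagged SVMs accuracy:", (majority == y_te).mean())
```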

A machine learning model for the derivation of major molecular descriptor using candidate drug information of diabetes treatment (당뇨병 치료제 후보약물 정보를 이용한 기계 학습 모델과 주요 분자표현자 도출)

  • Namgoong, Youn;Kim, Chang Ouk;Lee, Chang Joon
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.3
    • /
    • pp.23-30
    • /
    • 2019
  • The purpose of this study is to identify the structural features that affect antidiabetic activity, using information on candidate drugs for diabetes treatment. A quantitative structure-activity relationship model based on machine learning was constructed, and major molecular descriptors were determined for each experimental variable from the coefficient values of a partial least squares algorithm. The analysis of the molecular access system fingerprint data, which reflects the structural information of the candidate drugs, showed a higher goodness-of-fit than the analysis of the in vitro data, and a variety of major molecular descriptors affecting the antidiabetic effect were derived. If the proposed method is applied to the new drug development environment, the cost of candidate screening experiments can be reduced and the search time for new drug development shortened.
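
As an illustration of how partial least squares coefficients can flag major descriptors, the sketch below uses a synthetic fingerprint-like matrix and scikit-learn's PLSRegression; the data, descriptor indices, and coefficient values are all hypothetical rather than the study's candidate-drug dataset.

```python
# A minimal sketch of ranking molecular descriptors by the magnitude of their
# partial least squares regression coefficients, on simulated data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_compounds, n_descriptors = 120, 50
X = rng.integers(0, 2, size=(n_compounds, n_descriptors)).astype(float)  # 0/1 fingerprint bits
true_w = np.zeros(n_descriptors)
true_w[[3, 17, 42]] = [2.0, -1.5, 1.0]                  # hypothetical influential bits
y = X @ true_w + rng.normal(scale=0.3, size=n_compounds)  # simulated activity values

pls = PLSRegression(n_components=5).fit(X, y)
coefs = np.ravel(pls.coef_)                             # one coefficient per descriptor

# The descriptors with the largest absolute coefficients play the role of the
# "major molecular descriptors" discussed in the abstract.
top = np.argsort(np.abs(coefs))[::-1][:5]
for i in top:
    print(f"descriptor {i:2d}: coefficient {coefs[i]:+.3f}")
```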

An Accuracy Estimation of AEP Based on Geographic Characteristics and Atmospheric Variations in Northern East Region of Jeju Island (제주 북동부 지역의 지형과 대기변수에 따른 AEP계산의 정확성에 대한 연구)

  • Ko, Jung-Woo;Lee, Byung-Gul
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.3
    • /
    • pp.295-303
    • /
    • 2012
  • Wind energy productivity depends on three factors: the wind probability density function (PDF), the turbine's power curve, and the air density. The wind PDF gives the probability that the wind speed takes on a given value. Wind shear refers to the change in wind speed with height above ground; the wind speed tends to increase with height, and the wind PDF likewise changes with height. Wind analysts typically use the Weibull distribution to characterize the breadth of the distribution of wind speeds. The Weibull distribution has two parameters: the scale factor c and the shape factor k. A linear least squares algorithm (the Ln-least method) and the moment method can be used to fit a Weibull distribution to wind speed data measured at the same site at different heights. In this study, we find that the scale factor is more closely related to the average wind speed than the shape factor, and that different types of terrain are characterized by different slopes of the scale factor with height above ground. The gross turbine power output (before accounting for losses) was calculated from the power curve whose corresponding air density is closest to the site air density. The air density was determined in two ways: one uses the pressure of the International Standard Atmosphere at the site elevation, and the other uses the measured air pressure and temperature. The power outputs obtained from the two approaches were then compared.
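
The linearized least-squares ("Ln-least") Weibull fit mentioned above can be sketched as follows; the wind-speed sample is simulated and the parameter values are assumptions, not the Jeju measurement data.

```python
# A minimal sketch of the Ln-least fit of the two-parameter Weibull
# distribution: ln(-ln(1-F)) = k·ln(v) - k·ln(c).
import numpy as np

rng = np.random.default_rng(1)
k_true, c_true = 2.0, 8.0                       # assumed shape k and scale c (m/s)
v = c_true * rng.weibull(k_true, size=2000)     # simulated mean wind speeds

v_sorted = np.sort(v)
# Median-rank estimate of the empirical cumulative distribution F(v).
F = (np.arange(1, len(v_sorted) + 1) - 0.3) / (len(v_sorted) + 0.4)

x = np.log(v_sorted)
w = np.log(-np.log(1.0 - F))
slope, intercept = np.polyfit(x, w, 1)          # linear least-squares fit

k_hat = slope
c_hat = np.exp(-intercept / slope)
print(f"shape k ≈ {k_hat:.2f} (true {k_true}), scale c ≈ {c_hat:.2f} m/s (true {c_true})")
```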

A Study on the Generation for Negotiation Alternative Considering Negotiator's Strategy (협상자의 전략을 고려한 협상 대안 생성에 관한 연구)

  • Sim Joung-Hoon;Choi Hyung-Rim;Kim Hyun-Soo;Hong Soon-Goo;Cho Min-Je
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.10 no.3
    • /
    • pp.21-29
    • /
    • 2005
  • Most automated negotiation systems depend on the negotiators' offers as the negotiation proceeds. In particular, the preference, evaluation function, and negotiation strategy are changed by the negotiator at every round and affect the counter offers. This study therefore proposes an automated negotiation methodology and negotiation model that minimizes the negotiator's participation. To this end, the negotiator's preference is predicted from the ratio of the seller's and buyer's counter offers, and the negotiator's evaluation function is predicted by the least squares approximation method at every negotiation round. The predicted evaluation functions are assessed and selected by the $R^2$ value, the coefficient of determination. Finally, the optimal counter offers are generated by a genetic algorithm using the predicted preference and evaluation function.

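A minimal sketch of the least-squares prediction and $R^2$-based selection of the evaluation function is given below; the counter-offer data and the polynomial candidate forms are hypothetical, not taken from the paper.

```python
# A minimal sketch of predicting an evaluation function by least-squares
# polynomial fitting and selecting the candidate with the highest R^2.
import numpy as np

# Hypothetical (attribute value, observed evaluation) pairs from past rounds.
x = np.array([100, 110, 120, 130, 140, 150], dtype=float)   # e.g. unit price
v = np.array([1.00, 0.82, 0.66, 0.52, 0.40, 0.30])          # inferred utility

def r_squared(y_obs, y_fit):
    ss_res = np.sum((y_obs - y_fit) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

best = None
for degree in (1, 2, 3):                     # candidate evaluation-function forms
    coeffs = np.polyfit(x, v, degree)        # least-squares fit
    r2 = r_squared(v, np.polyval(coeffs, x))
    if best is None or r2 > best[1]:
        best = (degree, r2, coeffs)

degree, r2, coeffs = best
print(f"selected degree-{degree} evaluation function, R^2 = {r2:.4f}")
```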

Wavelength selection by loading vector analysis in determining total protein in human serum using near-infrared spectroscopy and Partial Least Squares Regression

  • Kim, Yoen-Joo;Yoon, Gil-Won
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.4102-4102
    • /
    • 2001
  • In multivariate analysis, an absorbance spectrum is measured over a band of wavelengths, and the size of this band often receives little attention. However, it is desirable to measure the spectrum only at the necessary wavelengths, as long as an acceptable prediction accuracy can be maintained. In this paper, a method of selecting an optimal band of wavelengths based on loading vector analysis is proposed and applied to the determination of total protein in human serum using near-infrared transmission spectroscopy and PLSR. The loading vectors of the full-spectrum PLSR model were used as the reference for selecting wavelengths, but only the first loading vector was used since it explains the spectrum best. Absorbance spectra of sera from 97 outpatients were measured at 1530∼1850 nm with an interval of 2 nm. Total protein concentrations of the sera ranged from 5.1 to 7.7 g/㎗. Spectra were measured with a Cary 5E spectrophotometer (Varian, Australia); serum in a 5 mm-pathlength cuvette was placed in the sample beam and air in the reference beam. Full-spectrum PLSR was first applied to determine total protein from the sera, and then the wavelength region of 1672∼1754 nm was selected based on the first loading vector analysis. The Standard Error of Cross Validation (SECV) of the full-spectrum (1530∼1850 nm) PLSR and the selected-wavelength (1672∼1754 nm) PLSR was 0.28 and 0.27 g/㎗, respectively; the prediction accuracy of the two bands was essentially equal. Wavelength selection based on the loading vector in PLSR appears simple and robust compared with other methods based on the correlation plot, the regression vector, or a genetic algorithm. As a reference for wavelength selection in PLSR, the loading vector has an advantage over the correlation plot because the former is based on a multivariate model whereas the latter is based on a univariate model. Wavelength selection by the first loading vector analysis also requires less computation time than a genetic algorithm and does not require smoothing.

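The band-selection idea can be sketched with scikit-learn's PLSRegression as below; the spectra are simulated and the 0.5 threshold on the first loading vector is an assumed heuristic, not the authors' exact criterion.

```python
# A minimal sketch of using the first PLS loading vector from a full-spectrum
# model to pick a wavelength band before refitting a reduced-band PLSR model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wavelengths = np.arange(1530, 1852, 2)                 # nm, matching the abstract's grid
n_samples = 97
# Simulated absorbance: an analyte-related band near 1710 nm plus noise.
conc = rng.uniform(5.1, 7.7, size=n_samples)           # g/dL
band = np.exp(-((wavelengths - 1710.0) / 30.0) ** 2)
X = np.outer(conc, band) + rng.normal(scale=0.05, size=(n_samples, wavelengths.size))

full = PLSRegression(n_components=3).fit(X, conc)
loading1 = full.x_loadings_[:, 0]                      # first loading vector

# Keep wavelengths where the first loading carries most of its weight.
mask = np.abs(loading1) > 0.5 * np.abs(loading1).max()
print("selected band:", wavelengths[mask].min(), "-", wavelengths[mask].max(), "nm")

reduced = PLSRegression(n_components=3).fit(X[:, mask], conc)
print("reduced-band R^2:", round(reduced.score(X[:, mask], conc), 3))
```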

Direction-of-Arrival Estimation in Broadband Signal Processing : Rotation of Signal Subspace Approach (광대역 신호 처리에서의 도래각 추정 : Rotation of Signal Subspaces 방법)

  • Kim, Young-Soo
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.26 no.7
    • /
    • pp.166-175
    • /
    • 1989
  • In this paper, we present a method based on the concept of the rotation of subspaces, which is closely related to the angle (or distance) between subspaces arising in many applications. An effective procedure is first derived for finding the optimal transformation matrix that rotates one subspace into another as closely as possible in the least squares sense, and this algorithm is then applied to the general direction-of-arrival estimation problem for multiple broadband plane waves, which may be a mixture of incoherent, partially coherent, and coherent sources. In this application, the rotation of signal subspaces (ROSS) algorithm is developed to achieve high performance in active systems where the noise field remains invariant during the measurement of the array spectral density matrix (or data matrix); this situation is not uncommon in sonar systems. The advantage of this technique is that it does not require the preliminary processing and spatial prefiltering used in Wang-Kaveh's CSS focusing method, and the array geometry is not restricted. Simulation results illustrate the high performance achieved with the new approach relative to Wang-Kaveh's CSS focusing method for incoherent sources and to forward-backward spatially smoothed MUSIC and the signal eigenvector method (SEM) for coherent sources.

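The least-squares subspace rotation at the core of this approach is the orthogonal Procrustes problem; the sketch below (random subspaces, numpy only) shows the classical SVD solution for the optimal rotation matrix, not the full ROSS algorithm.

```python
# A minimal sketch of the orthogonal Procrustes step: rotate one subspace onto
# another as closely as possible in the least squares sense.
import numpy as np

rng = np.random.default_rng(0)
m, d = 8, 3                                  # sensor dimension, subspace dimension

# Orthonormal bases of two subspaces (e.g. signal subspaces at two frequencies).
A = np.linalg.qr(rng.normal(size=(m, d)))[0]
true_Q = np.linalg.qr(rng.normal(size=(d, d)))[0]
B = A @ true_Q                               # B spans the "rotated" subspace

# Minimize ||A Q - B||_F over orthogonal Q:  Q = U V^T, where U S V^T = A^T B.
U, _, Vt = np.linalg.svd(A.T @ B)
Q = U @ Vt

print("residual ||A Q - B||_F :", np.linalg.norm(A @ Q - B))
print("recovered rotation matches:", np.allclose(Q, true_Q))
```

The SVD construction above is the standard closed-form least-squares solution for the best rotation between two given bases; the paper builds its DOA estimator on top of this kind of subspace alignment.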

Performance Evaluation of a Time-domain Gauss-Newton Full-waveform Inversion Method (시간영역 Gauss-Newton 전체파형 역해석 기법의 성능평가)

  • Kang, Jun Won;Pakravan, Alireza
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.26 no.4
    • /
    • pp.223-231
    • /
    • 2013
  • This paper presents a time-domain Gauss-Newton full-waveform inversion method for reconstructing material profiles in heterogeneous semi-infinite solid media. To pose the inverse problem on a finite computational domain, perfectly-matched layers (PMLs) are introduced as wave-absorbing boundaries within which the domain's wave velocity profile is to be reconstructed. The inverse problem is formulated in a partial-differential-equation (PDE)-constrained optimization framework, where a least-squares misfit between measured and calculated surface responses is minimized under the constraint of the PML-endowed wave equations. A Gauss-Newton-Krylov optimization algorithm iteratively updates the unknown wave velocity profile with the aid of a specialized regularization scheme. In a series of one-dimensional examples, the Gauss-Newton inversion produced solutions close to the target profile and showed superior convergence behavior with reduced wall-clock time compared to a conventional inversion using the Fletcher-Reeves optimization algorithm.
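
The Gauss-Newton update driving such an inversion can be illustrated on a small nonlinear least-squares model problem; the forward model below is an assumed toy example, not the PML-endowed wave equation or the authors' Gauss-Newton-Krylov solver.

```python
# A minimal sketch of the Gauss-Newton iteration for a least-squares misfit:
# linearize the residual, solve the (regularized) normal equations, and repeat.
import numpy as np

# Hypothetical forward model: predicted "response" for parameters p = (a, b).
t = np.linspace(0.0, 1.0, 50)

def forward(p):
    a, b = p
    return a * np.exp(-b * t)

def jacobian(p):
    a, b = p
    return np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])

p_true = np.array([2.0, 3.0])
d_obs = forward(p_true)                      # "measured" response

p = np.array([1.0, 1.0])                     # initial parameter guess
for it in range(10):
    r = forward(p) - d_obs                   # least-squares misfit residual
    J = jacobian(p)
    # Gauss-Newton step with a small Tikhonov-style regularization term.
    dp = np.linalg.solve(J.T @ J + 1e-8 * np.eye(2), -J.T @ r)
    p = p + dp
    if np.linalg.norm(dp) < 1e-10:
        break

print("iterations:", it + 1, " recovered parameters:", p.round(4), " true:", p_true)
```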

Color Image Restoration in Detected Aliasing Region (에일리어싱 영역 검출을 통한 컬러 영상 복원)

  • Kwon, Ji Yong;Kang, Moon Gi
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.12
    • /
    • pp.105-110
    • /
    • 2016
  • To reduce the cost and volume of a digital camera, a subsampled color filter array (CFA) image is used, and demosaicking is applied to estimate the missing color values. However, aliasing, the overlap of signals in the frequency domain, occurs when signals are subsampled; this causes aliasing artifacts such as false colors and zipper effects in the demosaicking process. In this paper, an algorithm that estimates high-quality color images by removing such aliasing artifacts is proposed. An aliasing region map is estimated from the subsampled signals of the CFA image. Using the aliasing region map and the estimated luminance image, a least squares problem based on the observation models is formulated and the aliasing artifacts are eliminated. The experiments demonstrate that the proposed algorithm restores color images without aliasing artifacts.
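
A regularized least-squares reconstruction from a subsampling observation model, of the kind formulated here, can be sketched on a 1-D toy signal; the operators and the smoothness weight below are assumptions, not the paper's CFA observation model.

```python
# A minimal sketch of restoring subsampled values by solving a regularized
# least-squares problem built from the observation model y = S·x.
import numpy as np

n = 64
x_true = np.sin(2 * np.pi * np.arange(n) / 16.0)     # "full-resolution" channel

# Observation model: keep every other sample, as a CFA keeps a color subsample.
S = np.eye(n)[::2]                                   # subsampling operator
y = S @ x_true                                       # observed samples

# Smoothness prior: second-difference operator penalizes oscillatory solutions.
D = np.diff(np.eye(n), n=2, axis=0)
lam = 0.1

# Solve  min ||S x - y||^2 + lam ||D x||^2  as one stacked least-squares system.
A = np.vstack([S, np.sqrt(lam) * D])
b = np.concatenate([y, np.zeros(D.shape[0])])
x_hat = np.linalg.lstsq(A, b, rcond=None)[0]

print("reconstruction RMSE:", np.sqrt(np.mean((x_hat - x_true) ** 2)).round(4))
```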

A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

  • He, Shanshan;Ou, Daojiang;Yan, Changya;Lee, Chen-Han
    • Journal of Computational Design and Engineering
    • /
    • v.2 no.4
    • /
    • pp.218-232
    • /
    • 2015
  • Piecewise linear (G01-based) tool paths generated by CAM systems lack $G_1$ and $G_2$ continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting because of shortcomings such as numerical instability, the lack of a chord error constraint, and the lack of assurance of a usable result. Progressive and Iterative Approximation for Least Squares (LSPIA) is an efficient data-fitting method that resolves the numerical instability problem; however, it does not consider chord errors and requires additional work to guarantee usable results for commercial applications. In this paper, we use the LSPIA method incorporating an energy term (ELSPIA) to avoid numerical instability and to lower chord errors through a stretching energy term. We implement several algorithmic improvements, including (1) an improved technique for initial control point determination over the Dominant Point Method, (2) an algorithm that updates foot point parameters as needed, (3) an analysis of the degrees of freedom of control points so that new control points are inserted only when needed, and (4) chord error refinement using a similar ELSPIA method with the above enhancements. The proposed approach can generate a shape-preserving B-spline curve. Experiments with data analysis and machining tests are presented to verify quality and efficiency, and comparisons with other known solutions are included to evaluate the merit of the proposed solution.
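
The basic LSPIA iteration underlying ELSPIA can be sketched as follows; the toy point sequence, knot vector, and conservative step size are assumptions, and the energy and chord-error terms of the paper are omitted.

```python
# A minimal sketch of the LSPIA iteration: control points are repeatedly nudged
# by basis-weighted residuals until the B-spline approximates the data points.
import numpy as np
from scipy.interpolate import BSpline

# Toy "tool path" points and their (chord-length-like) parameters.
ts = np.linspace(0.0, 1.0, 60)
Q = np.column_stack([ts, 0.2 * np.sin(2 * np.pi * ts)])

k = 3                                        # cubic B-spline
n_ctrl = 10
# Clamped uniform knot vector for n_ctrl control points.
knots = np.concatenate([np.zeros(k), np.linspace(0, 1, n_ctrl - k + 1), np.ones(k)])

# Collocation matrix B[i, j] = B_j(ts[i]), built column by column from unit
# coefficient vectors.
B = np.column_stack([
    BSpline(knots, np.eye(n_ctrl)[j], k)(ts) for j in range(n_ctrl)
])

P = Q[np.linspace(0, len(Q) - 1, n_ctrl).astype(int)]   # initial control points
mu = 1.0 / np.max(B.sum(axis=0))             # conservative step within the LSPIA bound

for _ in range(200):
    residual = Q - B @ P                     # data-point residuals
    P = P + mu * (B.T @ residual)            # LSPIA control-point update

print("max fitting error:", np.abs(Q - B @ P).max())
```

With a step size inside the convergence bound, this iteration tends toward the least-squares B-spline fit without ever forming or inverting the normal equations, which is what makes LSPIA numerically stable for large point sets.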