• Title/Summary/Keyword: Variance Modeling


A Study on the Cutting Tool Wear by Time Series Approach (시계열분석 방법 에 의한 절삭공구 의 마멸 에 관한 연구)

  • 김광준;황홍연
    • Transactions of the Korean Society of Mechanical Engineers / v.8 no.5 / pp.450-461 / 1984
  • A new indirect tool wear sensing technique is proposed using time series analysis. The acceleration measured on the tool post in the vertical direction during turning is sampled at uniform intervals and fitted following ARMA modeling procedures. Various signal characteristics are observed with respect to the progress of flank wear. From those observations, the following is concluded:
    • The variance of the signal increases approximately in proportion to the flank wear.
    • The absolute power of a dynamic mode decreases at the beginning of cutting, until the maximum flank wear reaches 0.4-0.5 mm, and then increases.
    • The other characteristics are not as closely related to tool wear as the signal variance and the absolute power of a dynamic mode.
  Hence, the absolute power of a dynamic mode appears to be a good indicator of tool-change time regardless of tool material or cutting conditions.
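The reported relation between signal variance and flank wear can be illustrated with a minimal sketch; the acceleration signal, wear profile, and window length below are synthetic assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical acceleration signal: noise whose amplitude grows with
# simulated flank wear, mimicking the reported variance/wear relation.
n = 6000
wear = np.linspace(0.0, 0.6, n)             # flank wear in mm (synthetic)
signal = (1.0 + 2.0 * wear) * rng.standard_normal(n)

# Windowed variance as an indirect wear indicator.
window = 500
variances = [signal[i:i + window].var() for i in range(0, n, window)]

# The variance trend should rise as the synthetic wear grows.
print([round(v, 2) for v in variances])
```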

A Study on the Finite Element Analysis in Friction Stir Welding of Al Alloy (알루미늄 합금재의 마찰교반용접 유한요소해석에 관한 연구)

  • Lee, Dai Yeal;Park, Kyong Do;Kang, Dae Min
    • Journal of the Korean Society of Manufacturing Process Engineers / v.14 no.5 / pp.81-87 / 2015
  • In this paper, the finite element method was used for flow and strength analysis of an aluminum alloy under friction stir welding. The simulations were carried out using Sysweld s/w, and the modeling of the sheet was done with Unigraphics NX6 s/w. The welding variables for the analysis were the shoulder diameter, rotating speed, and welding speed of the tool. A three-way factorial design was applied to assess, by analysis of variance, the effect of the welding variables on the flow and strength results. The rotating speed had the greatest influence on the maximum temperature, which was 578.84 ± 12.72 at a 99% confidence level. The greater the rotating speed and shoulder diameter, the greater the difference between the maximum and minimum temperature. Furthermore, the shoulder diameter had the largest influence on the von Mises stress, which was 184.54 ± 12.62 at a 99% confidence level. In addition, increases in the shoulder diameter, welding speed, and rotating speed of the tool increased the von Mises stress.
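The main-effect comparison a factorial-design variance analysis produces can be sketched on a balanced factorial; the 3×3×3 layout, response values, and effect sizes below are made-up assumptions, with the rotating-speed effect deliberately dominant as in the reported results.

```python
import numpy as np

# Hypothetical 3x3x3 full-factorial responses (e.g., max temperature),
# indexed as y[shoulder, rotation, welding]; values are synthetic.
rng = np.random.default_rng(5)
effect_s = np.array([-5.0, 0.0, 5.0])      # shoulder diameter main effect
effect_r = np.array([-20.0, 0.0, 20.0])    # rotating speed main effect
effect_w = np.array([-2.0, 0.0, 2.0])      # welding speed main effect
y = (570.0
     + effect_s[:, None, None]
     + effect_r[None, :, None]
     + effect_w[None, None, :]
     + rng.normal(0.0, 1.0, (3, 3, 3)))

def main_effect_ss(y, axis):
    """Main-effect sum of squares for one factor of a balanced factorial."""
    grand = y.mean()
    other = tuple(a for a in range(y.ndim) if a != axis)
    level_means = y.mean(axis=other)
    cells_per_level = y.size // y.shape[axis]
    return cells_per_level * ((level_means - grand) ** 2).sum()

ss = [main_effect_ss(y, a) for a in range(3)]
# Rotating speed (axis 1) should dominate, as in the reported ANOVA.
print([round(s, 1) for s in ss])
```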

A Study on Unfolding Asymmetric Volatility: A Case Study of National Stock Exchange in India

  • SAMINENI, Ravi Kumar;PUPPALA, Raja Babu;KULAPATHI, Syamsundar;MADAPATHI, Shiva Kumar
    • The Journal of Asian Finance, Economics and Business / v.8 no.4 / pp.857-861 / 2021
  • The study aims to examine the asymmetric effect in the National Stock Exchange, with the Nifty50 taken as a proxy for the NSE. A return is the change in value of a security over a certain period; volatility is the rate of change in security value, a statistical measure of the dispersion of security-price returns. Stock prices are extremely unpredictable, which makes investment in equities risky, and volatility prediction and modeling are among the most actively explored areas. The current study describes the association between two variables, stock returns and volatility, in the Indian equity market. Volatility is measured by employing an asymmetric GARCH technique, the EGARCH(1,1) model. Daily closing prices of the Nifty from 2011 to 2020, 2,478 observations in all, were used for the analysis. The model captures the asymmetric volatility over the period. The outcome of the asymmetric GARCH model revealed the existence of a leverage effect in the index and confirms the impact of conditional variance as well. Furthermore, the EGARCH technique proved apt at capturing asymmetric volatility.
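The EGARCH(1,1) recursion and the leverage effect it captures can be sketched in a few lines; the parameter values (`omega`, `alpha`, `gamma`, `beta`) are illustrative assumptions, not estimates from the Nifty50 series.

```python
import math

def egarch_variance(returns, omega=-0.1, alpha=0.1, gamma=-0.08, beta=0.95):
    """Filter conditional variances with an EGARCH(1,1) recursion:

    ln(sigma^2_t) = omega + beta*ln(sigma^2_{t-1})
                    + alpha*(|z_{t-1}| - E|z|) + gamma*z_{t-1},

    where z = return/sigma. gamma < 0 encodes the leverage effect.
    """
    e_abs_z = math.sqrt(2.0 / math.pi)   # E|z| for standard normal z
    log_var = omega / (1.0 - beta)       # start at the unconditional level
    variances = []
    for r in returns:
        variances.append(math.exp(log_var))
        z = r / math.sqrt(math.exp(log_var))
        log_var = (omega + beta * log_var
                   + alpha * (abs(z) - e_abs_z) + gamma * z)
    return variances

# Leverage effect: an equal-sized negative shock raises next-period
# variance more than a positive one when gamma < 0.
up = egarch_variance([0.02, 0.0])
down = egarch_variance([-0.02, 0.0])
print(down[1] > up[1])  # True
```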

Symptom Management to Predict Quality of Life in Patients with Heart Failure: A Structural Equation Modeling Approach (증상관리를 통한 심부전 환자의 삶의 질 예측모형)

  • Lee, Ja Ok;Song, Rhayun
    • Journal of Korean Academy of Nursing / v.45 no.6 / pp.846-856 / 2015
  • Purpose: The focus of this study was on symptom management to predict quality of life among individuals with heart failure. The theoretical model was constructed based on the situation-specific theory of heart failure self-care and a literature review. Methods: Participants were 241 outpatients recruited at a university hospital from May 19 to July 30, 2014. Data were collected with structured questionnaires and analyzed using SPSSWIN and AMOS 20.0. Results: The goodness of fit index for the hypothetical model was .93, the incremental fit index .90, and the comparative fit index .90. As these outcomes satisfied the recommended levels, the hypothetical model appeared to fit the data. Seven of the eight hypotheses in the model were statistically significant. Symptom management, symptom management confidence, and social support together explained 32% of the variance in quality of life. Symptom recognition, heart failure knowledge, and symptom management confidence explained 28% of the variance in symptom management, and social support explained 4% of the variance in symptom management confidence. Conclusion: The hypothetical model of this study was confirmed to be adequate in explaining and predicting quality of life among patients with heart failure through symptom management. Effective strategies to improve quality of life in this population should focus on symptom management, which can be enhanced by providing educational programs and encouraging social support and confidence.
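The notion of a set of predictors "explaining" a share of outcome variance, which the path coefficients above quantify, can be illustrated outside the SEM framework with a plain least-squares R²; the predictor count, coefficients, and sample size below are arbitrary assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in for "predictors explained ~32% of the variance":
# three standardized predictors and an outcome built to leave most of
# its variance unexplained, then R^2 recovered by least squares.
n = 500
X = rng.standard_normal((n, 3))
y = X @ np.array([0.5, 0.35, 0.3]) + rng.standard_normal(n)

A = np.column_stack([np.ones(n), X])          # design matrix with intercept
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
r2 = 1.0 - resid.var() / y.var()              # share of variance explained
print(round(r2, 2))
```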

Analysis of Measurement Errors Using Short-Baseline GPS Positioning Model (단기선 GPS측위 모델을 이용한 관측오차 분석)

  • Hong, Chang-Ki;Han, Soohee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.6 / pp.573-580 / 2017
  • Precise stochastic modeling of GPS measurements is one of the key factors in adjustment computations for GPS positioning. To analyze the GPS measurement errors, the Minimum Norm Quadratic Unbiased Estimators (MINQUE) approach is used in this study to estimate the variance components for each measurement type with a short-baseline GPS positioning model. The results showed that the magnitudes of the measurement errors for C1, P2, L1, and L2 are 22.3 cm, 27.6 cm, 2.5 mm, and 2.2 mm, respectively. To reduce memory usage and computational burden, the variance components were also estimated on an epoch-by-epoch basis. Slight differences exist between the two solutions; however, considering the magnitudes of those differences, epoch-by-epoch analysis may still be used for most GPS applications.
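As a rough illustration of per-type variance estimation (a naive analogue only; the actual MINQUE estimator solves a linear system coupling all variance components), code- and phase-level noise can be recovered from residuals grouped by measurement type. The sigma values mirror two of the reported magnitudes, but the residuals themselves are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated residuals (metres) for two measurement types, with noise
# levels loosely in line with the reported results: code observations
# at the decimetre level, carrier phase at the millimetre level.
sigma_true = {"C1": 0.223, "L1": 0.0025}
residuals = {k: s * rng.standard_normal(2000) for k, s in sigma_true.items()}

# Naive per-type standard deviation estimate from grouped residuals.
sigma_hat = {k: float(np.std(v, ddof=1)) for k, v in residuals.items()}
for k, v in sigma_hat.items():
    print(k, round(v, 4))
```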

A Model for Nursing Students' Stress (간호학생의 스트레스 지각, 대처, 스트레스결과에 대한 구조모형)

  • Lee, Mi-Ra;Chung, Hyun-Sook;Cho, Mee-Kyung
    • Research in Community and Public Health Nursing / v.11 no.2 / pp.321-332 / 2000
  • The purpose of this study was to test a hypothetical model designed to explain nursing students' perceived stress, coping levels, and stress outcomes. The model was based on Kim Jung Hee's (1987) stress model and the stress-related literature. Exogenous variables were self-efficacy, hardiness, social support, and exercise; endogenous variables were stress perception, coping levels, and stress outcomes. Empirical data for testing the model were collected from 205 nursing students. The SAS PC program and the LISREL 8.12a program were used for descriptive statistics and linear structural relationship (LISREL) modeling. The results were as follows. 1) The overall fit of the hypothetical model to the data was good (χ²=78.41 (p=0.010), χ²/df=1.50, RMSEA=0.05, standardized RMR=0.05, GFI=0.95, AGFI=0.91, NNFI=0.90, NFI=0.94). 2) The results of statistical testing of the hypotheses were as follows. (1) As expected, self-efficacy had a significant effect on stress perception, but hardiness, social support, and exercise did not. Self-efficacy, hardiness, social support, and exercise explained 12% of the total variance of stress perception. (2) As expected, self-efficacy, hardiness, social support, exercise, and stress perception had significant effects on coping behavior and together explained 53% of its total variance. (3) As expected, stress perception and coping behavior had significant effects on stress outcomes and explained 84% of their total variance. In conclusion, the hypothetical model of this study was confirmed in explaining and predicting stress perception, coping levels, and stress outcomes in nursing students. These findings suggest the need to develop nursing interventions that enhance self-efficacy, hardiness, social support, and exercise in order to decrease the harmful outcomes of stress.


Principal Discriminant Variate (PDV) Method for Classification of Multicollinear Data: Application to Diagnosis of Mastitic Cows Using Near-Infrared Spectra of Plasma Samples

  • Jiang, Jian-Hui;Tsenkova, Roumiana;Yu, Ru-Qin;Ozaki, Yukihiro
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1244-1244 / 2001
  • In linear discriminant analysis there are two important properties concerning the effectiveness of discriminant function modeling. The first is the separability of the discriminant function for different classes; separability reaches its optimum by maximizing the ratio of between-class to within-class variance. The second is the stability of the discriminant function against noise present in the measurement variables. One can optimize stability by exploring the discriminant variates in a principal variation subspace, i.e., the directions that account for a majority of the total variation of the data. An unstable discriminant function will exhibit inflated variance in the prediction of future unclassified objects and is exposed to a significantly increased risk of erroneous prediction. Therefore, an ideal discriminant function should not only separate different classes with a minimum misclassification rate for the training set, but also possess good stability, so that the prediction variance for unclassified objects is as small as possible. In other words, an optimal classifier should strike a balance between separability and stability. This is of special significance for multivariate spectroscopy-based classification, where multicollinearity always leads to discriminant directions located in low-spread subspaces. A new regularized discriminant analysis technique, the principal discriminant variate (PDV) method, has been developed for effectively handling the multicollinear data commonly encountered in multivariate spectroscopy-based classification. The motivation behind this method is to seek a sequence of discriminant directions that not only optimize the separability between different classes but also account for a maximized variation present in the data. Three different formulations of the PDV method are suggested, and an effective computing procedure is proposed.
  Near-infrared (NIR) spectra of blood plasma samples from mastitic and healthy cows have been used to evaluate the behavior of the PDV method in comparison with principal component analysis (PCA), discriminant partial least squares (DPLS), soft independent modeling of class analogies (SIMCA), and Fisher linear discriminant analysis (FLDA). The results demonstrate that the PDV method exhibits improved stability in prediction without significant loss of separability, and the NIR spectra of blood plasma samples from mastitic and healthy cows are clearly discriminated by it. Moreover, the proposed method outperforms PCA, DPLS, SIMCA, and FLDA, indicating that PDV is a promising tool in discriminant analysis of spectra-characterized samples with only small compositional differences, thereby providing a useful means for spectroscopy-based clinical applications.
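The separability criterion described above, the ratio of between-class to within-class variance, can be sketched for a one-dimensional projection; the `healthy` and `mastitic` score distributions below are hypothetical, not the NIR data.

```python
import numpy as np

def fisher_ratio(x_a, x_b):
    """Between-class to within-class variance ratio of 1-D projected
    scores, the separability criterion maximized by discriminant
    analysis."""
    mean_a, mean_b = x_a.mean(), x_b.mean()
    grand = np.concatenate([x_a, x_b]).mean()
    between = (len(x_a) * (mean_a - grand) ** 2
               + len(x_b) * (mean_b - grand) ** 2)
    within = ((x_a - mean_a) ** 2).sum() + ((x_b - mean_b) ** 2).sum()
    return between / within

rng = np.random.default_rng(2)
healthy = rng.normal(0.0, 1.0, 200)    # hypothetical projected scores
mastitic = rng.normal(3.0, 1.0, 200)

print(round(fisher_ratio(healthy, mastitic), 2))
```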


PRINCIPAL DISCRIMINANT VARIATE (PDV) METHOD FOR CLASSIFICATION OF MULTICOLLINEAR DATA WITH APPLICATION TO NEAR-INFRARED SPECTRA OF COW PLASMA SAMPLES

  • Jiang, Jian-Hui;Yuqing Wu;Yu, Ru-Qin;Yukihiro Ozaki
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1042-1042 / 2001
  • In linear discriminant analysis there are two important properties concerning the effectiveness of discriminant function modeling. The first is the separability of the discriminant function for different classes; separability reaches its optimum by maximizing the ratio of between-class to within-class variance. The second is the stability of the discriminant function against noise present in the measurement variables. One can optimize stability by exploring the discriminant variates in a principal variation subspace, i.e., the directions that account for a majority of the total variation of the data. An unstable discriminant function will exhibit inflated variance in the prediction of future unclassified objects and is exposed to a significantly increased risk of erroneous prediction. Therefore, an ideal discriminant function should not only separate different classes with a minimum misclassification rate for the training set, but also possess good stability, so that the prediction variance for unclassified objects is as small as possible. In other words, an optimal classifier should strike a balance between separability and stability. This is of special significance for multivariate spectroscopy-based classification, where multicollinearity always leads to discriminant directions located in low-spread subspaces. A new regularized discriminant analysis technique, the principal discriminant variate (PDV) method, has been developed for effectively handling the multicollinear data commonly encountered in multivariate spectroscopy-based classification. The motivation behind this method is to seek a sequence of discriminant directions that not only optimize the separability between different classes but also account for a maximized variation present in the data. Three different formulations of the PDV method are suggested, and an effective computing procedure is proposed.
  Near-infrared (NIR) spectra of blood plasma samples from daily monitoring of two Japanese cows have been used to evaluate the behavior of the PDV method in comparison with principal component analysis (PCA), discriminant partial least squares (DPLS), soft independent modeling of class analogies (SIMCA), and Fisher linear discriminant analysis (FLDA). The results demonstrate that the PDV method exhibits improved stability in prediction without significant loss of separability, and the NIR spectra of blood plasma samples from the two cows are clearly discriminated by it. Moreover, the proposed method outperforms PCA, DPLS, SIMCA, and FLDA, indicating that PDV is a promising tool in discriminant analysis of spectra-characterized samples with only small compositional differences.


Characteristics of Measurement Errors due to Reflective Sheet Targets - Surveying for Sejong VLBI IVP Estimation (반사 타겟의 관측 오차 특성 분석 - 세종 VLBI IVP 결합 측량)

  • Hong, Chang-Ki;Bae, Tae-Suk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.4 / pp.325-332 / 2022
  • Determination of the VLBI IVP (Very Long Baseline Interferometry Invariant Point) position with high accuracy is required to compute local tie vectors between space geodetic techniques. In general, reflective targets are attached to the VLBI antenna, and slant distances, horizontal angles, and vertical angles are measured from the pillars. Adjustment computation is then performed using the mathematical model that connects the measurements and the unknown parameters, which means the accuracy of the estimated solutions is affected by the accuracy of the measurements. One issue in local tie surveying, however, is that the reflective targets are not in a favorable condition: a reflective sheet target cannot be perfectly aligned perpendicular to the instrument. Deviation from the instrument's line of sight may cause different types of measurement errors, and this inherent limitation may lead to incorrect stochastic modeling of the measurements in the adjustment computation. In this study, error characteristics are analyzed by measurement type and by pillar. The studentized residuals are analyzed after adjustment computation: their normality is tested, and an equal-variance test between the measurement types is then performed. The results show that there are differences in variance according to measurement type; in particular, differences in variance between distance and angle measurements are observed when an F-test is performed on the measurements from each pillar. Therefore, more detailed stochastic modeling is required for optimal solutions, especially in local tie surveys.
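The equal-variance F-test between two measurement types can be sketched as follows; the studentized residuals, their spreads, and the sample sizes are synthetic assumptions, not the survey data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical studentized residuals for two measurement types from one
# pillar, with distances deliberately noisier than angles so the
# F statistic departs from 1.
dist_res = rng.normal(0.0, 1.5, 60)     # slant-distance residuals
angle_res = rng.normal(0.0, 1.0, 60)    # horizontal-angle residuals

s1 = dist_res.var(ddof=1)
s2 = angle_res.var(ddof=1)
F = max(s1, s2) / min(s1, s2)           # two-sided F statistic

# With 59 and 59 degrees of freedom, the 5% two-sided critical value is
# roughly 1.67; an F well above it suggests unequal variances.
print(round(F, 2))
```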

Kalman filter modeling for the estimation of tropospheric and ionospheric delays from the GPS network (망기반 대류 및 전리층 지연 추출을 위한 칼만필터 모델링)

  • Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.6_1 / pp.575-581 / 2012
  • Various modeling and estimation techniques have been proposed to extract tropospheric and ionospheric delays from GPS CORS data. In this study, a Kalman filter approach is adopted to estimate the tropospheric and ionospheric delays, and appropriate modeling of the state vector and of the variance-covariance matrix of the process noise is performed. The coordinates of the reference stations and the zenith wet delays are estimated under a random-walk stochastic process assumption, while a first-order Gauss-Markov stochastic process is applied to model the ionospheric effects. To evaluate the proposed modeling technique, a Kalman filter algorithm was implemented and a numerical test was performed with CORS data. The results show that the atmospheric effects can be estimated successfully and, consequently, can be used for the generation of VRS data.
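The filter design described, a random-walk state alongside a first-order Gauss-Markov state, can be sketched in a minimal two-state filter. Everything below is an assumption for illustration (epoch interval, correlation time, noise levels, direct observation of both states), not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(4)

dt, n = 30.0, 100                  # 30 s epochs (assumed)
tau = 600.0                        # Gauss-Markov correlation time in s (assumed)
phi = np.exp(-dt / tau)            # first-order GM transition coefficient

# State: [zenith wet delay (random walk), ionospheric delay (1st-order GM)]
F = np.diag([1.0, phi])
Q = np.diag([1e-6, (1.0 - phi**2) * 0.01])   # process noise (assumed)
H = np.eye(2)                                # both states observed directly
R = np.diag([0.0004, 0.0025])                # measurement noise (assumed)

x = np.zeros(2)
P = np.eye(2)
truth = np.array([0.15, 0.05])               # metres, synthetic constants

for _ in range(n):
    # Predict: random walk keeps its mean, GM state decays toward zero.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with a noisy synthetic observation of the true delays.
    z = truth + rng.multivariate_normal(np.zeros(2), R)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(np.round(x, 3))
```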