• Title/Summary/Keyword: Regression testing


Studies on Rapid Microbiological Testing Method of Fresh Pork by Applied Resazurin Reduction Test(RRT) Method (Resazurin 환원법을 응용한 돈육의 신속 미생물 검사법에 관한 연구)

  • Lim, S.D.;Kim, K.S.
    • Journal of Animal Science and Technology
    • /
    • v.44 no.4
    • /
    • pp.453-458
    • /
    • 2002
  • To find a reliable rapid method of estimating bacterial counts in pork, this study measured the resazurin reduction time, a method that is experimentally simple, low in analytical cost, and able to estimate bacterial counts within a short time. The results are summarized as follows. The correlation coefficient between the initial bacterial log count (25°C/72 hr, Y) and the resazurin reduction time (X) from blue to pink during incubation at 25°C and 30°C was higher than under other conditions, at -0.95 and -0.94, respectively. Considering both the correlation coefficient and the reduction time, 30°C was the most suitable incubation temperature, and the regression equation (RE) was Y = -0.4386X + 7.7870. At bacterial loads of 10², 10³ and 10⁴ CFU/cm² in pork, the reduction times were 13.2, 10.9 and 8.6 hr, respectively. The correlation coefficient between the initial bacterial log count (30°C/72 hr, Y) and the resazurin reduction time (X) during incubation at 30°C was the highest among the tested conditions, at -0.93, and the RE was Y = -0.4171X + 7.5540. At bacterial loads of 10², 10³ and 10⁴ CFU/cm², the reduction times were 13.3, 10.9 and 8.5 hr, respectively. The correlation coefficient between the initial bacterial log count (35°C/72 hr, Y) and the resazurin reduction time (X) during incubation at 30°C was the highest among the tested conditions, at -0.93, and the RE was Y = -0.3514X + 6.7513. At bacterial loads of 10², 10³ and 10⁴ CFU/cm², the reduction times were 13.5, 10.7 and 7.8 hr, respectively.
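
The regression equation Y = -0.4386X + 7.7870 above (Y: log₁₀ initial count, X: reduction time in hours) can be inverted to reproduce the reported reduction times. This is an illustrative sketch only, not the authors' code; the function name is mine:

```python
# Sketch (not the authors' code): invert the abstract's regression
# Y = slope*X + intercept, where Y is the log10 initial bacterial
# count and X is the resazurin reduction time (hr), to predict the
# reduction time for a given bacterial load.

def reduction_time(log_count, slope, intercept):
    """Solve Y = slope*X + intercept for X given Y (log10 CFU/cm^2)."""
    return (log_count - intercept) / slope

# Coefficients reported for 30 degC incubation (25 degC/72 hr counts).
SLOPE, INTERCEPT = -0.4386, 7.7870

for load in (2, 3, 4):  # 10^2, 10^3, 10^4 CFU/cm^2
    print(f"10^{load} CFU/cm^2 -> {reduction_time(load, SLOPE, INTERCEPT):.1f} hr")
```

Evaluating the inverse at loads of 10², 10³ and 10⁴ CFU/cm² recovers the 13.2, 10.9 and 8.6 hr values reported in the abstract.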

A Study on the Influence of Information Security on Consumer's Preference of Android and iOS based Smartphone (정보보안이 안드로이드와 iOS 기반 스마트폰 소비자 선호에 미치는 영향)

  • Park, Jong-jin;Choi, Min-kyong;Ahn, Jong-chang
    • Journal of Internet Computing and Services
    • /
    • v.18 no.1
    • /
    • pp.105-119
    • /
    • 2017
  • Smartphone users account for over eighty-five percent of the Korean population, and each user's smartphone stores private items and various kinds of personal information. There are many cases in which malicious code or spyware is propagated to capture such information illegally and earn pecuniary gains. Thus, information security is essential for smartphone use, and users' security perception is important as well. In this paper, we investigate how information security affects users' choice of smartphone operating system. For the statistical analysis, an online survey of smartphone users was conducted and 218 effective responses were collected. We tested the hypotheses via communality analysis using factor analysis, reliability analysis, independent-sample t-tests, and linear regression analysis with the IBM SPSS statistical package. As a result, we found that the hardware environment influences perceived ease of use, that brand power affects both perceived usefulness and perceived ease of use, and that the degree of personal risk acceptance influences the perception of smartphone spyware risk. In addition, perceived usefulness, perceived ease of use, degree of personal risk acceptance, and perceived spyware risk all significantly influence the intention to purchase a smartphone. However, the independent-sample t-tests for Android and iOS users did not show statistically significant differences between the two OS user groups, and the hypothesis-test results for each OS user group differed from those for the total sample. These results offer important suggestions to organizations and managers related to the smartphone ecosystem and contribute to information systems (IS) research through a new perspective.
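
The independent-sample t-test used in the analysis above can be sketched with Welch's unequal-variance statistic (a stdlib-only sketch; the two toy score lists are invented Likert-style ratings, not the paper's survey data):

```python
# Sketch of an independent-sample t-test (Welch's form, which does
# not assume equal group variances). Toy data only.
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

android = [3, 4, 5, 4, 4]  # hypothetical preference scores, group 1
ios = [2, 3, 2, 3, 2]      # hypothetical preference scores, group 2
print(round(welch_t(android, ios), 2))  # → 4.0
```

In practice the statistic would be compared against a t distribution with Welch-Satterthwaite degrees of freedom, which is what SPSS reports for this test.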

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial-intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial and radial. We analyzed the suggested models with the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for asymmetric GARCH models such as E-GARCH or GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel function shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility will increase, buy volatility today; if it will decrease, sell volatility today; and if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic, because historical volatility values themselves cannot be traded, but the simulation results are still meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than those with MLE-based GARCH in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows a -72% return versus +245.6% for the SVR-based version; and MLE-based asymmetric GJR-GARCH shows a -98.7% return versus +126.3% for the SVR-based version. The linear kernel function yields higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS shows a higher trading frequency. This study has some limitations: the models are based solely on SVR, so other artificial-intelligence models should be explored for better performance, and costs incurred in the trading process, including brokerage commissions and slippage, are not considered.
The IVTS trading performance is also unrealistic in that historical volatility values are used as trading objects. Exact forecasting of stock market volatility is essential both in real trading and in asset pricing models. Further studies on other machine-learning-based GARCH models can give better information to stock market investors.
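
The IVTS entry rules described in the abstract can be sketched as a small position-switching function (an illustrative reconstruction, not the authors' code; the volatility series below is a made-up toy example):

```python
# Sketch of the IVTS entry rule: go long volatility when tomorrow's
# forecast is above today's value, short when it is below, and hold
# the existing position when the forecasted direction is unchanged.

def ivts_position(current_position, today_vol, forecast_vol):
    """Return +1 (long volatility), -1 (short), or the held position."""
    if forecast_vol > today_vol:
        return 1   # forecasted rise -> buy volatility today
    if forecast_vol < today_vol:
        return -1  # forecasted fall -> sell volatility today
    return current_position  # no directional change -> hold

positions = []
pos = 0
vols = [10.0, 11.5, 11.5, 9.8]        # toy historical volatility series
forecasts = [11.2, 11.5, 10.1, 10.3]  # toy one-day-ahead forecasts
for v, f in zip(vols, forecasts):
    pos = ivts_position(pos, v, f)
    positions.append(pos)
print(positions)  # → [1, 1, -1, 1]
```

The paper's versions differ only in where the forecasts come from (MLE- or SVR-estimated GARCH models); the entry logic itself is this simple direction rule.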

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.23-45
    • /
    • 2020
  • Big data is being created in a wide variety of fields such as medical care, manufacturing, logistics, sales, and SNS, and dataset characteristics are equally diverse. To secure corporate competitiveness, it is necessary to improve decision-making capacity using classification algorithms. However, most practitioners do not have sufficient knowledge of which classification algorithm is appropriate for a specific problem area. In other words, determining which classification algorithm suits the characteristics of a dataset has been a task requiring expertise and effort, because the relationship between dataset characteristics (called meta-features) and the performance of classification algorithms has not been fully understood. Moreover, there has been little research on meta-features reflecting the characteristics of multi-class data. Therefore, the purpose of this study is to empirically analyze whether the meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. In this study, the meta-features of multi-class datasets were grouped into two factors, data structure and data complexity, and seven representative meta-features were selected. Among these, we included the Herfindahl-Hirschman Index (HHI), originally a market-concentration measure, in the meta-features to replace the imbalance ratio (IR), and we developed a new index, the Reverse ReLU Silhouette Score, for the meta-feature set. From the UCI Machine Learning Repository, six representative datasets (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), Contraceptive Method Choice) were selected. The class of each dataset was predicted using the classification algorithms selected for the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM), with 10-fold cross-validation applied to each dataset.
Oversampling from 10% to 100% was applied to each fold, and the meta-features of the dataset were measured. The selected meta-features are HHI, number of classes, number of features, entropy, Reverse ReLU Silhouette Score, nonlinearity of a linear classifier, and hub score; the F1-score was selected as the dependent variable. The results show that the six meta-features, including the Reverse ReLU Silhouette Score and the HHI proposed in this study, have a significant effect on classification performance: (1) the HHI meta-feature proposed in this study is significant for classification performance; (2) the number of variables has a significant effect on classification performance and, unlike the number of classes, the effect is positive; (3) the number of classes has a negative effect on classification performance; (4) entropy has a significant effect on classification performance; (5) the Reverse ReLU Silhouette Score also significantly affects classification performance at the 0.01 significance level; and (6) the nonlinearity of linear classifiers has a significant negative effect on classification performance. The analyses by classification algorithm were also consistent, except that in the per-algorithm regression analysis the number of variables is not significant for the Naïve Bayes algorithm, unlike for the other algorithms. This study makes two theoretical contributions: (1) two new meta-features (HHI and the Reverse ReLU Silhouette Score) were shown to be significant, and (2) the effects of data characteristics on classification performance were investigated using meta-features. As practical contributions, (1) the findings can be utilized in developing systems that recommend classification algorithms according to dataset characteristics.
(2) Many data scientists search for the optimal algorithm for a situation by repeatedly adjusting algorithm parameters because data characteristics differ, a process that wastes hardware, cost, time, and manpower. This study is expected to be useful for machine learning and data mining researchers, practitioners, and developers of machine-learning-based systems. The study consists of an introduction, related research, the research model, experiments, and a conclusion with discussion.
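
The HHI meta-feature mentioned above squares and sums class shares exactly as the market-concentration formula does. A minimal sketch under the standard HHI definition (the label lists below are made-up examples, not datasets from the paper):

```python
# Sketch of the Herfindahl-Hirschman Index as a class-imbalance
# meta-feature: the sum of squared class proportions. A perfectly
# balanced k-class dataset scores 1/k; a single-class set scores 1.0.
from collections import Counter

def hhi(labels):
    """HHI of a label sequence: sum of squared class shares."""
    counts = Counter(labels)
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

balanced = ["a", "b", "c", "a", "b", "c"]   # equal shares of 3 classes
skewed = ["a"] * 8 + ["b"] * 1 + ["c"] * 1  # 80/10/10 split
print(round(hhi(balanced), 3))  # → 0.333
print(round(hhi(skewed), 2))    # → 0.66
```

Higher values thus indicate a more concentrated (imbalanced) class distribution, which is why the index can stand in for the imbalance ratio.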

Experimental Studies on the Properties of Epoxy Resin Mortars (에폭시 수지 모르터의 특성에 관한 실험적 연구)

  • 연규석;강신업
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.26 no.1
    • /
    • pp.52-72
    • /
    • 1984
  • This study was performed to obtain basic data applicable to the use of epoxy resin mortars, based on the properties of epoxy resin mortars at various mixing ratios compared with those of cement mortar. The resin used in this experiment was an Epi-Bis type epoxy resin that is used extensively in concrete structures. For the epoxy resin mortar, the mixing ratios of resin to fine aggregate were 1:2, 1:4, 1:6, 1:8, 1:10, 1:12 and 1:14, while the ratio of cement to fine aggregate in the cement mortar was 1:2.5. The results obtained are summarized as follows. 1. At a mixing ratio of 1:6, the highest density was 2.01 g/cm³, lower than the 2.13 g/cm³ of cement mortar. 2. According to the water absorption and water permeability tests, watertightness was very high at mixing ratios of 1:2, 1:4 and 1:6, but when the mixing ratio was leaner than 1:6, watertightness decreased considerably. From this result, the optimum mixing ratio of epoxy resin mortar for watertight structures should be richer than 1:6. 3. Hardening shrinkage was larger as the mixing ratio became leaner, but the values were remarkably small compared with cement mortar. The influence of drying and moisture was slight at mixing ratios richer than 1:6, but obvious at the lean mixing ratios of 1:8, 1:10, 1:12 and 1:14. It was confirmed that the optimum mixing ratio for concrete structures subject to repeated drying and moisture should be richer than 1:6. 4. The compressive, bending and splitting tensile strengths were very high; even the values at a mixing ratio of 1:14 were higher than those of cement mortar. Epoxy resin mortar showed especially high strength in bending and splitting tension, and the initial strength within 24 hours was also high.
Thus it was clear that epoxy resin is a rapid-hardening material. Multiple regression equations of strength were computed as a function of mixing ratio and curing time. 5. The elastic moduli derived from the compressive stress-strain curves were slightly smaller than those of cement mortar, and the toughness of epoxy resin mortar was larger than that of cement mortar. 6. The impact resistance was strong compared with cement mortar at all mixing ratios. In particular, the bending impact strength of the square-pillar specimens was higher than the impact resistance of the flat or cylindrical specimens. 7. The Brinell hardness was relatively larger than that of cement mortar, but it gradually decreased as the mixing ratio declined, and the Brinell hardness at a mixing ratio of 1:14 was much the same as that of cement mortar. 8. The abrasion rate of epoxy resin mortar at all mixing ratios, after 500 revolutions of a Los Angeles abrasion testing machine, was very low; even at a mixing ratio of 1:14 it was no more than 31.41%, less than the critical abrasion rate of 40% for coarse aggregate in cement concrete. Consequently, the abrasion resistance of epoxy resin mortar was superior to that of cement mortar, and the relation between abrasion rate and Brinell hardness was highly significant, following an exponential curve. 9. The highest bond strength of epoxy resin mortar was 12.9 kg/cm² at a mixing ratio of 1:2. Failure of the bonded flat steel specimens occurred within the epoxy resin mortar at mixing ratios of 1:2 and 1:4, and failure of the bonded cement concrete specimens was found in the combined concrete at mixing ratios of 1:2, 1:4 and 1:6. It was confirmed that the optimum mixing ratios for bonding to steel plate and to cement concrete should be richer than 1:4 and 1:6, respectively. 10. Variations in color tone on heating began at about 60˚C, and the ultimate change occurred at 120˚C.
The compressive, bending and splitting tensile strengths increased with rising temperature up to 80˚C, but decreased rapidly above 80˚C. Accordingly, the temperature resistance of epoxy resin mortar is about 80˚C, generally lower than that of other concrete materials; however, this poses no problem when epoxy resin mortar is used where high temperature resistance is not required. Multiple regression equations of strength were computed as a function of mixing ratio and heating temperature. 11. Cement mortar was easily attacked by inorganic and organic acids, whereas epoxy resin mortar at a mixing ratio of 1:4 was highly resistant. On the other hand, at mixing ratios leaner than 1:8, epoxy resin mortar had very poor resistance, especially to organic acids. Therefore, for structures requiring chemical resistance, the optimum mixing ratio of epoxy resin mortar should be richer than 1:4.


Synthetic Application of Seismic Piezo-cone Penetration Test for Evaluating Shear Wave Velocity in Korean Soil Deposits (국내 퇴적 지반의 전단파 속도 평가를 위한 탄성파 피에조콘 관입 시험의 종합적 활용)

  • Sun, Chang-Guk;Kim, Hong-Jong;Jung, Jong-Hong;Jung, Gyung-Ja
    • Geophysics and Geophysical Exploration
    • /
    • v.9 no.3
    • /
    • pp.207-224
    • /
    • 2006
  • It is widely known that the seismic piezo-cone penetration test (SCPTu) is one of the most useful techniques for investigating geotechnical characteristics such as static and dynamic soil properties. As practical applications in Korea, SCPTu was carried out at two sites in Busan and four sites in Incheon, which are mainly composed of alluvial or marine soil deposits. From the SCPTu waveform data obtained at the testing sites, the first arrival times of shear waves and the corresponding time differences with depth were determined using the cross-over method, and the shear wave velocity (Vs) profiles with depth were derived by the refracted-ray-path method based on Snell's law. The determined Vs profiles and the cone tip resistance (qt) profiles showed similar trends with depth. For applying the conventional CPTu to earthquake engineering practice, correlations between Vs and CPTu data were deduced from the SCPTu results. For the empirical evaluation of Vs for all soils, as well as for the clays and sands classified unambiguously in this study by the soil behavior type classification index (Ic), the authors suggest Vs-CPTu data correlations expressed as a function of four parameters, qt, fs, σ′v0 and Bq, determined by multiple statistical regression modeling. Despite the incompatible strain levels of the downhole seismic test during SCPTu and the conventional CPTu, the Vs-CPTu data correlations for all soils, clays and sands suggested in this study are shown to be applicable to the preliminary estimation of Vs for soil deposits in parts of Korea and to be more reliable than correlations previously proposed by other researchers.

Application Effect of the Controlled Release Fertilizer Applied on Seedling Tray at Seeding Time in Rice (벼 모판 파종동시처리 완효성비료 시용효과)

  • Won, Tae-Jin;Choi, Byoung-Rourl;Cho, Kwang-Rae;Lim, Gab-June;Chi, Jeong-Hyun;Woo, Sun-Hee
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.64 no.3
    • /
    • pp.204-212
    • /
    • 2019
  • The optimal application rate of a controlled release fertilizer (CRF) applied to seedling trays at seeding time was investigated with respect to the growth and yield of rice. The experimental field was located at 37°22′10″N latitude and 127°03′85″E longitude in Hwaseong, Gyeonggi-do, Republic of Korea, and the paddy soil was a clay loam. The CRF used in the experiment contained 300 g kg⁻¹ of nitrogen, 60 g kg⁻¹ of phosphate, and 60 g kg⁻¹ of potassium. The CRF was applied at rates of 0, 200, 300, 400, 500, and 600 grams per rice seedling tray, compared with field application based on soil testing (control). The CRF can be applied as a single application directly to the seedling tray (replacing the basal fertilizer application and two top dressings) and showed minimal release during the seedling period. Considering plant growth, nitrogen use efficiency and rice yield, the optimal application rate of the developed CRF was 500 g per seedling tray, which produced a rice yield of 4.92~5.04 Mg ha⁻¹. The regression formulas between rice yield and CRF application rate were as follows: Y = 0.0002χ² + 0.0963χ + 411.6 (R²: 0.9922) in 2010 and Y = 8E-6χ² + 0.2723χ + 344.04 (R²: 0.9864) in 2011, where Y is the rice yield (Mg ha⁻¹) and χ is the application rate (grams) of controlled release fertilizer. The optimum CRF application rates per rice seedling tray derived from the regression formulas were 498 grams in 2010 and 513 grams in 2011.
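
The fitted yield-response curves above can be evaluated directly as quadratics. This is purely a formula-evaluation sketch: the coefficients are copied verbatim from the abstract, whose numeric scale does not obviously match the Mg ha⁻¹ figures quoted alongside them, so no unit interpretation is attempted here.

```python
# Evaluate the quadratic yield-response regressions reported in the
# abstract, Y = a*x^2 + b*x + c, where x is the CRF application rate
# in grams per seedling tray. Coefficients copied verbatim; output
# units are as reported in the abstract.

def yield_response(x, a, b, c):
    return a * x**2 + b * x + c

curve_2010 = (0.0002, 0.0963, 411.6)  # R^2 = 0.9922
curve_2011 = (8e-6, 0.2723, 344.04)   # R^2 = 0.9864

for rate in (400, 500, 600):
    y10 = yield_response(rate, *curve_2010)
    y11 = yield_response(rate, *curve_2011)
    print(rate, round(y10, 2), round(y11, 2))
```

A fit like this is what produces the per-year optimum rates (498 g and 513 g) the abstract derives from the regression formulas.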

The prediction of the stock price movement after IPO using machine learning and text analysis based on TF-IDF (증권신고서의 TF-IDF 텍스트 분석과 기계학습을 이용한 공모주의 상장 이후 주가 등락 예측)

  • Yang, Suyeon;Lee, Chaerok;Won, Jonggwan;Hong, Taeho
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.237-262
    • /
    • 2022
  • There has been a growing interest in IPOs (Initial Public Offerings) due to the profitable returns that IPO stocks can offer to investors. However, IPOs can be speculative investments that may involve substantial risk as well because shares tend to be volatile, and the supply of IPO shares is often highly limited. Therefore, it is crucially important that IPO investors are well informed of the issuing firms and the market before deciding whether to invest or not. Unlike institutional investors, individual investors are at a disadvantage since there are few opportunities for individuals to obtain information on the IPOs. In this regard, the purpose of this study is to provide individual investors with the information they may consider when making an IPO investment decision. This study presents a model that uses machine learning and text analysis to predict whether an IPO stock price would move up or down after the first 5 trading days. Our sample includes 691 Korean IPOs from June 2009 to December 2020. The input variables for the prediction are three tone variables created from IPO prospectuses and quantitative variables that are either firm-specific, issue-specific, or market-specific. The three prospectus tone variables indicate the percentage of positive, neutral, and negative sentences in a prospectus, respectively. We considered only the sentences in the Risk Factors section of a prospectus for the tone analysis in this study. All sentences were classified into 'positive', 'neutral', and 'negative' via text analysis using TF-IDF (Term Frequency - Inverse Document Frequency). Measuring the tone of each sentence was conducted by machine learning instead of a lexicon-based approach due to the lack of sentiment dictionaries suitable for Korean text analysis in the context of finance. 
For this reason, the training set was created by randomly selecting 10% of the sentences from each prospectus, and the sentences in the training set were classified by reading each one manually. Based on this training set, a Support Vector Machine model was used to predict the tone of the sentences in the test set, and the machine learning model then calculated the percentages of positive, neutral, and negative sentences in each prospectus. To predict the price movement of an IPO stock, four machine learning techniques were applied: Logistic Regression, Random Forest, Support Vector Machine, and Artificial Neural Network. According to the results, models that use both quantitative variables from technical analysis and prospectus tone variables show higher accuracy than models that use only quantitative variables. More specifically, prediction accuracy improved by 1.45 percentage points in the Random Forest model, 4.34 percentage points in the Artificial Neural Network model, and 5.07 percentage points in the Support Vector Machine model. Among these techniques, the Artificial Neural Network model using both quantitative variables and prospectus tone variables achieved the highest prediction accuracy, 61.59%. The results indicate that the tone of a prospectus is a significant factor in predicting the price movement of an IPO stock. In addition, the McNemar test was used to verify the statistical significance of the differences between models: comparing the model using only quantitative variables with the model using both quantitative and prospectus tone variables confirmed that predictive performance improved significantly at the 1% significance level.
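
The TF-IDF weighting behind the sentence features above can be sketched with the standard tf·idf formula (a stdlib-only illustration, not the authors' pipeline; the toy risk-factor sentences are made up):

```python
# Sketch of TF-IDF: weight each term by its raw frequency in a
# document times ln(N / document frequency), so terms shared by many
# documents are down-weighted relative to rarer, more discriminative terms.
import math
from collections import Counter

def tfidf(docs):
    """Return a per-document {term: tf*idf} mapping."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))  # count each term once per document
    weights = []
    for toks in tokenized:
        tf = Counter(toks)
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

docs = [
    "litigation risk may reduce earnings",
    "demand growth may exceed expectations",
    "litigation costs remain a risk",
]
w = tfidf(docs)
# "may" appears in 2 of 3 documents, so it carries a lower weight in
# the first document than "earnings", which appears in only 1 of 3.
print(w[0]["may"] < w[0]["earnings"])  # → True
```

In the paper these weighted term vectors feed the SVM sentence-tone classifier; production systems would typically use a library implementation (e.g. scikit-learn's `TfidfVectorizer`, which applies a smoothed idf variant).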

The Effect of Attributes of Innovation and Perceived Risk on Product Attitudes and Intention to Adopt Smart Wear (스마트 의류의 혁신속성과 지각된 위험이 제품 태도 및 수용의도에 미치는 영향)

  • Ko, Eun-Ju;Sung, Hee-Won;Yoon, Hye-Rim
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.2
    • /
    • pp.89-111
    • /
    • 2008
  • Due to the development of digital technology, studies on smart wear that integrates into daily life have rapidly increased; however, consumer research on perceptions of and attitudes toward smart clothing is scarce. The purpose of this study was to identify the innovation characteristics and perceived risks of smart clothing and to analyze the influence of these factors on product attitudes and adoption intention. Specifically, five hypotheses were established. H1: The perceived attributes of smart clothing, except complexity, would be positively related to product attitude or purchase intention, while complexity would be negatively related. H2: Product attitude would be positively related to purchase intention. H3: Product attitude would mediate between perceived attributes and purchase intention. H4: The perceived risks of smart clothing would be negatively related to the perceived attributes except complexity, and positively related to complexity. H5: Product attitude would mediate between perceived risks and purchase intention. A self-administered questionnaire was developed based on previous studies. After a pretest, data were collected during September 2006 from university students in Korea, who are relatively sensitive to innovative products. A total of 300 usable questionnaires were analyzed with the SPSS 13.0 program. About 60.3% of respondents were male, with a mean age of 21.3 years. About 59.3% reported that they were aware of smart clothing, but only 9 respondents had purchased it. The mean attitude toward smart clothing and mean purchase intention were 2.96 (SD=.56) and 2.63 (SD=.65), respectively. Factor analysis using principal components with varimax rotation was conducted to identify the perceived attribute and perceived risk dimensions. The perceived attributes of smart wear were categorized into relative advantage (including compatibility), observability (including trialability), and complexity.
Perceived risks were identified as physical/performance risk, social-psychological risk, time-loss risk, and economic risk. Regression analysis was conducted to test the five hypotheses. Relative advantage and observability were significant predictors of product attitude (adj R²=.223) and purchase intention (adj R²=.221), while complexity showed a negative influence on product attitude. Product attitude was significantly related to purchase intention (adj R²=.692) and partially mediated between perceived attributes and purchase intention (adj R²=.698). Therefore, hypotheses one to three were accepted. To test hypothesis four, the four perceived-risk dimensions and demographic variables (age, gender, monthly household income, awareness of smart clothing, and purchase experience) were entered as independent variables in the regression models. Social-psychological risk, economic risk, and gender (female) significantly predicted relative advantage (adj R²=.276). When perceived observability was the dependent variable, social-psychological risk, time-loss risk, physical/performance risk, and age (younger) were significant, in that order (adj R²=.144); however, physical/performance risk was positively related to observability. That is, the more observable smart clothing seemed, the higher the perceived probability of physical harm or performance problems. Complexity was predicted by product awareness, social-psychological risk, economic risk, and purchase experience, in that order (adj R²=.114). Product awareness was negatively related to complexity, meaning that a high level of product awareness would reduce the perceived complexity of smart clothing. However, purchase experience was positively related to complexity; consumers appear to perceive a high level of complexity when they actually use smart clothing in real life. The risk variables were positively related to complexity.
That is, to decrease complexity, it is also necessary to minimize anxiety about social-psychological harm or monetary loss. Thus, hypothesis 4 was partially accepted. Finally, in testing hypothesis 5, social-psychological risk and economic risk were significant predictors of product attitude (adj R²=.122) and purchase intention (adj R²=.099), respectively. When the attitude variable was included with the risk variables as independent variables in the regression model predicting purchase intention, only the attitude variable was significant (adj R²=.691). Thus, attitude fully mediated between perceived risks and purchase intention, and hypothesis 5 was accepted. The findings provide guidelines for fashion and electronics businesses that aim to create and strengthen positive attitudes toward smart clothing. Marketers need to consider not only the functional features of smart clothing but also its practical and aesthetic attributes, since appropriateness for social norms or self-image reduces the uncertainty of psychological or social risk, which in turn increases the relative advantage of smart clothing; indeed, social-psychological risk was significantly associated with relative advantage. Economic risk is negatively associated with product attitude as well as purchase intention, suggesting that smart wear developers should reflect on the price ranges acceptable to potential adopters. It will also be effective to use the findings on complexity when marketers in the US plan communication strategies.

  • PDF

A Study of Equipment Accuracy and Test Precision in Dual Energy X-ray Absorptiometry (골밀도검사의 올바른 질 관리에 따른 임상적용과 해석 -이중 에너지 방사선 흡수법을 중심으로-)

  • Dong, Kyung-Rae;Kim, Ho-Sung;Jung, Woon-Kwan
    • Journal of radiological science and technology
    • /
    • v.31 no.1
    • /
    • pp.17-23
    • /
    • 2008
Purpose : Because the precision and accuracy of bone density scans depend on the testing environment, the equipment, and the tester, quality management must be performed systematically. Equipment failures caused by overload from aging machines and increasing patient volume occurred frequently. Consequently, replacement of equipment and additional purchases of new bone density equipment caused a compatibility problem in tracking patients. This study examined whether the clinical changes in a patient's bone density can be reflected accurately and precisely when new equipment is used interchangeably with the existing equipment after replacement and expansion. Materials and methods : Two GE Lunar Prodigy Advance machines (P1 and P2) and the HOLOGIC Spine Phantom (HSP) were used to measure equipment precision. Each machine scanned the phantom 20 times to obtain precision data (Group 1). Tester precision was measured by scanning the same patient twice, 15 subjects per machine, among 120 women (average age 48.78, 20-60 years) (Group 2). In addition, tester precision and cross-calibration data were obtained by scanning the HSP 20 times on each machine, based on the quality-control data obtained every morning with the phantom (ASP) (Group 3). To obtain tester precision and cross-calibration data, each of the 120 women (average age 48.78, 20-60 years) was also scanned once on each machine alternately (Group 4). Results : Daily QC data ($0.996\;g/cm^2$, %CV 0.08) showed the equipment was stable. The mean${\pm}$SD and %CV values for ALP in Group 1 were P1 : $1.064{\pm}0.002\;g/cm^2$, %CV=0.190; P2 : $1.061{\pm}0.003\;g/cm^2$, %CV=0.192. 
In Group 2, the mean${\pm}$SD and %CV values were P1 : $1.187{\pm}0.002\;g/cm^2$, %CV=0.164; P2 : $1.198{\pm}0.002\;g/cm^2$, %CV=0.163. In Group 3, the average error${\pm}$2SD and %CV were P1 - (spine: $0.001{\pm}0.03\;g/cm^2$, %CV=0.94; femur: $0.001{\pm}0.019\;g/cm^2$, %CV=0.96), P2 - (spine: $0.002{\pm}0.018\;g/cm^2$, %CV=0.55; femur: $0.001{\pm}0.013\;g/cm^2$, %CV=0.48). In Group 4, the average error${\pm}2SD$, %CV, and r values were spine: $0.006{\pm}0.024\;g/cm^2$, %CV=0.86, r=0.995; femur: $0{\pm}0.014\;g/cm^2$, %CV=0.54, r=0.998. Conclusion: Both the LUNAR ASP %CV and the HOLOGIC Spine Phantom results fell within the ${\pm}2%$ error range defined by the ISCD. BMD measurements remained relatively constant, showing excellent repeatability. The phantom has homogeneous characteristics, however, so it cannot fully reflect clinical variation such as differences in patient body weight or body fat. Nevertheless, quality control using a phantom is believed to be useful for detecting mis-calibration of the equipment. Comparing Group 3 with Group 4, values measured twice on one machine and values cross-measured on the two machines were all within the 2SD limits on the Bland-Altman plot. An r value of 0.99 or higher in linear regression analysis indicated high precision and correlation. Therefore, the two machines proved compatible and did not affect patient follow-up. Regular testing of the equipment and of the tester's performance, followed by appropriate calibration, is required to calculate reliable BMD.
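The two statistics the abstract relies on, the coefficient of variation (%CV = SD / mean × 100) and the Bland-Altman limits of agreement (mean difference ± 2SD), can be computed as follows. The phantom BMD values below are hypothetical illustrations, not the study's raw scans.

```python
import numpy as np

def percent_cv(values):
    """Coefficient of variation: sample SD divided by the mean, as a percentage."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100.0

def bland_altman_limits(a, b):
    """Mean paired difference and the +/- 2SD limits of agreement."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    sd = d.std(ddof=1)
    return d.mean(), d.mean() - 2.0 * sd, d.mean() + 2.0 * sd

# Hypothetical repeated phantom scans (g/cm^2) on two machines.
p1 = [1.064, 1.062, 1.066, 1.063, 1.065]
p2 = [1.061, 1.059, 1.063, 1.060, 1.062]

print(round(percent_cv(p1), 3))          # short-term precision of machine P1
mean_d, lo, hi = bland_altman_limits(p1, p2)
print(round(mean_d, 3), round(lo, 3), round(hi, 3))
```

A machine passes the kind of check described above when its phantom %CV stays inside the accepted error range and the paired differences between the two machines stay inside the Bland-Altman 2SD limits.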

  • PDF