• Title/Summary/Keyword: linearity


Establishment of Analytical Method for Dichlorprop Residues, a Plant Growth Regulator in Agricultural Commodities Using GC/ECD (GC/ECD를 이용한 농산물 중 생장조정제 dichlorprop 잔류 분석법 확립)

  • Lee, Sang-Mok;Kim, Jae-Young;Kim, Tae-Hoon;Lee, Han-Jin;Chang, Moon-Ik;Kim, Hee-Jeong;Cho, Yoon-Jae;Choi, Si-Won;Kim, Myung-Ae;Kim, MeeKyung;Rhee, Gyu-Seek;Lee, Sang-Jae
    • Korean Journal of Environmental Agriculture
    • /
    • v.32 no.3
    • /
    • pp.214-223
    • /
    • 2013
  • BACKGROUND: This study focused on developing an analytical method for dichlorprop (DCPP; 2-(2,4-dichlorophenoxy)propionic acid), a plant growth regulator and synthetic auxin used on agricultural commodities. DCPP prevents fruit drop during the growing period; however, overuse causes undesirable shifts in ripening time and shortens the safe storage period, and consuming fruits whose residues exceed the maximum residue limits could be harmful. This study therefore presents an analytical method for DCPP in agricultural commodities for the nationwide pesticide residue monitoring program of the Ministry of Food and Drug Safety. METHODS AND RESULTS: We established an analytical method for DCPP in agricultural commodities using gas chromatography with an electron capture detector (GC/ECD). Samples were extracted and purified by the ion-associated partition method, and quantitation was performed by GC/ECD on a DB-17 moderate-polarity column under a temperature-programmed condition, with nitrogen as the carrier gas in splitless mode. The standard calibration curve was linear, with a correlation coefficient ($r^2$) > 0.9998 over the 0.1 to 2.0 mg/L range. The limit of quantitation in agricultural commodities was 0.05 mg/kg, and average recoveries ranged from 78.8 to 102.2%. The repeatability of measurements, expressed as the coefficient of variation (CV, %), was less than 9.5% at 0.05, 0.10, and 0.50 mg/kg. CONCLUSION(S): The newly established analytical method for DCPP residues in agricultural commodities is applicable to the nationwide pesticide residue monitoring program, with acceptable sensitivity, repeatability, and reproducibility.
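The linearity criterion above (a least-squares calibration line with $r^2$ > 0.9998 over 0.1–2.0 mg/L) can be sketched as follows; the detector responses below are illustrative values, not the paper's data:

```python
# Sketch of the linearity check used in method validation: fit an
# ordinary least-squares line to calibration standards and compute r^2.
# The concentrations follow the 0.1-2.0 mg/L range from the abstract;
# the peak-area responses are made up for illustration.

def linear_fit(xs, ys):
    """Return slope, intercept, and r^2 of an ordinary least-squares line."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy * sxy / (sxx * syy)
    return slope, intercept, r2

conc = [0.1, 0.2, 0.5, 1.0, 2.0]          # mg/L calibration levels
area = [1020, 2050, 5010, 10030, 19980]   # illustrative detector responses

slope, intercept, r2 = linear_fit(conc, area)
print(f"r^2 = {r2:.5f}")  # a well-validated curve should give r^2 > 0.999
```

The same fit yields the slope used later when converting a sample's detector response back to a concentration.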

The Effect of Using Two Different Type of Dose Calibrators on In Vivo Standard Uptake Value of FDG PET (FDG 사용 시 Dose Calibrator에 따른 SUV에 미치는 영향)

  • Park, Young-Jae;Bang, Seong-Ae;Lee, Seung-Min;Kim, Sang-Un;Ko, Gil-Man;Lee, Kyung-Jae;Lee, In-Won
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.115-121
    • /
    • 2010
  • Purpose: The purpose of this study is to measure F-18 FDG radioactivity with two different types of dose calibrator and to investigate the effect on the SUV (Standardized Uptake Value) in the human body. Materials and Methods: The two dose calibrators used in this study were the CRC-15 Dual PET (Capintec) and the CRC-15R (Capintec). F-18 FDG was drawn into three 2 mL syringes at 1 mL, 2 mL, and 3 mL, respectively, and the initial radioactivity of each was measured on each dose calibrator; radioactivity was then measured and recorded at 30-minute intervals for 270 minutes. For each initial radioactivity, the linearity between the decay factor derived from the radioactive decay formula and the values measured by each dose calibrator was analyzed by simple linear regression, and a regression line relating the CRC-15R readings to the CRC-15 Dual PET readings was fitted on the basis of the volume whose measurements were closest to the ideal values. ROIs were created on the lung, liver, and target region of 50 patients who had undergone PET/CT, the values from the linear regression equation were applied, and SUVs were calculated. A paired t-test was performed to examine whether the radioactivity measured with the CRC-15 Dual PET and CRC-15R, and the resulting SUVs, differed significantly. Results: Regression analysis of the radioactivity measured with the CRC-15 Dual PET and CRC-15R gave the following: for 1 mL, the correlation coefficient r was 0.9999 and the regression equation was y=1.0345x+0.2601; for 2 mL, r=0.9999 and y=1.0226x+0.1669; for 3 mL, r=0.9999 and y=1.0094x+0.1577. Based on the regression equation for each volume, t-tests showed significant differences in the SUVs of the lung, liver, and target region ROIs in all three cases.
P-values in each case are as follows: for 1 mL, lung, liver, and target region (p<0.0001); for 2 mL, lung (p<0.002), liver and target region (p<0.0001); for 3 mL, lung (p<0.044), liver and target region (p<0.0001). Conclusion: The radioactivity measured with the CRC-15 Dual PET and CRC-15R, the dose calibrators used for F-18 FDG, shows no difference in correlation, but these values imply that the SUVs in the human body differ significantly. Therefore, the difference in SUV should be considered when using these dose calibrators.
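The linearity check described above pairs calibrator readings with the theoretical decay factor. A minimal sketch, assuming the F-18 half-life of 109.77 minutes and a simulated calibrator with a small fixed bias (the readings are synthetic, not the study's measurements):

```python
# Regress simulated dose-calibrator readings on the activity predicted
# by radioactive decay, mirroring the study's design: readings every
# 30 min for 270 min. The bias (slope ~1.02, offset 0.2) is an
# assumption for illustration.

T_HALF = 109.77  # F-18 half-life in minutes
times = list(range(0, 271, 30))
decay_factor = [0.5 ** (t / T_HALF) for t in times]

A0 = 370.0  # hypothetical initial activity, MBq
readings = [1.02 * A0 * df + 0.2 for df in decay_factor]  # biased calibrator

# simple linear regression of reading on theoretically decayed activity
xs = [A0 * df for df in decay_factor]
n = len(xs)
mx = sum(xs) / n
my = sum(readings) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, readings))
sxx = sum((x - mx) ** 2 for x in xs)
slope = sxy / sxx
intercept = my - slope * mx
print(f"calibrator vs. theory: y = {slope:.4f}x + {intercept:.4f}")
```

The fitted slope and intercept play the same role as the y=1.0345x+0.2601 style equations reported in the Results.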


A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2017
  • Services using artificial intelligence have begun to appear in daily life. Artificial intelligence is applied to consumer electronics and communications products such as AI refrigerators and speakers. In the financial sector, Goldman Sachs used Kensho's artificial intelligence technology to improve its stock trading process: two stock traders could handle the work of 600, and analytical work that previously took 15 people four weeks could be processed in five minutes. In particular, big data analysis through machine learning is being actively applied throughout the financial industry, and stock market analysis and investment modeling based on machine learning theory are actively studied. The limits of the linearity assumption in financial time series studies are overcome by machine learning approaches such as artificial-intelligence prediction models. Quantitative studies based on past stock market data widely use artificial intelligence to forecast future movements of stock prices or indices, and various other studies predict the future direction of the market or of individual stocks by learning from large amounts of text data such as news and comments related to the stock market. Investment in commodity assets, one of the alternative asset classes, is usually used to enhance the stability and safety of a traditional stock and bond portfolio, yet there is relatively little research on investment models for commodity assets compared with mainstream assets such as equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the whole field. In this study, we built an investment model using the Support Vector Machine (SVM), one of the machine learning models.
Some research on commodity assets focuses on price prediction for a specific commodity, but it is hard to find research on commodity investment models for asset allocation using machine learning. We propose a method of forecasting four major commodity indices, a portfolio of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors (energy, agriculture, and metals) that are actively traded on the CME market and have sufficient liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We formed an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. We set 19 macroeconomic indicators, including stock market indices, export and import trade data, labor market data, and composite leading indicators, as the model's input data, because commodity assets are closely related to macroeconomic activity: 14 US, two Chinese, and two Korean economic indicators. The data period is from January 1990 to May 2017; the first 195 monthly observations were used as training data and the remaining 125 as test data. We verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the commodity indices. The model's prediction accuracy for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metals sectors.
The individual commodity futures portfolio excluding the energy sector outperformed the full three-sector individual commodity futures portfolio. To verify the model's validity, the analysis results should remain similar under variations in the data period, so we also used odd-numbered-year data for training and even-numbered-year data for testing and confirmed that the results are similar. In conclusion, when allocating commodity assets within a traditional portfolio of stocks, bonds, and cash, more effective investment performance comes from investing in commodity futures rather than commodity indices, and especially from the commodity futures portfolio rebalanced by the SVM model.
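The direction-classification step can be sketched with a minimal linear SVM trained by the Pegasos sub-gradient method; the study itself compares several SVM kernels, and the two synthetic features below merely stand in for the 19 macroeconomic indicators:

```python
import random

# A minimal linear SVM (hinge loss + L2 regularization) trained with the
# Pegasos sub-gradient method. The +1/-1 labels stand in for next-period
# "up"/"down" portfolio direction; all data here is synthetic.

def pegasos(data, labels, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM weight vector by stochastic sub-gradient descent."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [0.0] * dim
    step = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(data)), len(data)):  # random order
            step += 1
            eta = 1.0 / (lam * step)
            margin = labels[i] * sum(wj * xj for wj, xj in zip(w, data[i]))
            w = [(1 - eta * lam) * wj for wj in w]          # regularize
            if margin < 1:                                   # hinge active
                w = [wj + eta * labels[i] * xj
                     for wj, xj in zip(w, data[i])]
    return w

rng = random.Random(42)
X = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
y = [1 if x0 + x1 > 0 else -1 for x0, x1 in X]  # separable toy labels

w = pegasos(X, y)
pred = [1 if sum(wj * xj for wj, xj in zip(w, x)) > 0 else -1 for x in X]
acc = sum(p == label for p, label in zip(pred, y)) / len(y)
print(f"training accuracy: {acc:.2f}")
```

In the study's setup the monthly sign predictions drive the rebalancing of the equally weighted futures portfolio; kernel SVMs would replace the linear scoring function here.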

A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems (지능형 변동성트레이딩시스템개발을 위한 GARCH 모형을 통한 VKOSPI 예측모형 개발에 관한 연구)

  • Kim, Sun-Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.19-32
    • /
    • 2010
  • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivatives and trading volatility strategies. This study presents a novel mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that can enhance the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embedded into our model the concept of volatility asymmetry documented widely in the literature. The newly developed Korean stock market volatility index of the KOSPI 200, the VKOSPI, is used as a volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the expectations of dealers and option traders about stock market volatility over the next 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded market in the world; its trading volume exceeds 10 million contracts a day, the highest of all stock index options markets. Analyzing the VKOSPI is therefore important for understanding the volatility inherent in option prices and can suggest trading ideas for futures and options dealers. Using the VKOSPI as a volatility proxy avoids the statistical estimation problems associated with other measures of volatility, since the VKOSPI is the model-free expected volatility of market participants, calculated directly from transacted option prices. This study estimates symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by the maximum likelihood procedure. The asymmetric GARCH models include the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle. The symmetric specification is the basic GARCH(1,1) model.
Tomorrow's forecasted value and change direction of stock market volatility are obtained by recursive GARCH specifications from January 2007 to December 2009 and are compared with the VKOSPI. Empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude decrease it, indicating the existence of volatility asymmetry in the Korean stock market. The point value and change direction of tomorrow's VKOSPI are estimated and forecasted by the GARCH models. A volatility trading system is developed using the forecasted change direction of the VKOSPI: if the VKOSPI is expected to rise tomorrow, a long straddle or strangle position is established; a short straddle or strangle position is taken if the VKOSPI is expected to fall. Total profit is calculated as the cumulative sum of the VKOSPI percentage changes: if the forecasted direction is correct, the absolute value of the VKOSPI percentage change is added to trading profit; otherwise it is subtracted. For the in-sample period, the power ARCH model fits best on a statistical metric, Mean Squared Prediction Error (MSPE), and the exponential GARCH model shows the highest Mean Correct Prediction (MCP). The power ARCH model also fits best for the out-of-sample period and provides the highest probability of correctly predicting the VKOSPI's change direction for the next day. Generally, the power ARCH model shows the best fit for the VKOSPI. All the GARCH models produce trading profits for the volatility trading system, and the exponential GARCH model shows the best performance, an annual profit of 197.56%, during the in-sample period. The GARCH models also produce trading profits during the out-of-sample period except for the exponential GARCH model; during that period, the power ARCH model shows the largest annual trading profit, 38%.
The volatility clustering and asymmetry found in this research reflect the non-linearity of volatility. This further suggests that combining the asymmetric GARCH models with artificial neural networks could significantly enhance the performance of the suggested volatility trading system, since artificial neural networks have been shown to model nonlinear relationships effectively.
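The profit rule described in the abstract is simple to state in code; the daily percentage changes and direction calls below are made up for illustration:

```python
# The study's profit rule: each day, add the absolute VKOSPI percentage
# change if the forecasted direction was correct, otherwise subtract it.
# The series below is synthetic, not VKOSPI data.

def trading_profit(pct_changes, forecast_up):
    """Cumulative profit of the directional volatility strategy."""
    profit = 0.0
    for change, up in zip(pct_changes, forecast_up):
        correct = (change > 0) == up
        profit += abs(change) if correct else -abs(change)
    return profit

changes = [1.5, -0.8, 2.1, -1.2, 0.4]    # daily VKOSPI % changes (synthetic)
calls = [True, True, True, False, True]   # model's "VKOSPI will rise" calls

print(f"total profit: {trading_profit(changes, calls):.1f}")
```

A long straddle/strangle corresponds to a `True` call (profiting when volatility rises), a short position to `False`.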

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining investors' returns. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables to the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically, yet leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of an SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance.
First, SVM was originally proposed for binary classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well on multi-class problems as SVM does on binary ones. Second, approximation algorithms (e.g., decomposition methods or the sequential minimal optimization algorithm) can reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another; such data sets often produce a default classifier with a skewed boundary, reducing classification accuracy. SVM ensemble learning is one way to cope with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost, one of the most widely used ensemble learning techniques, constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations: observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones. Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, reinforcing the training of misclassified observations from the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem.
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process considers the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers does not arise by chance. For each ten-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) outperforms both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also outperforms AdaBoost (24.65%) and SVM (15.42%) in geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that MGM-Boost's performance differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
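The geometric-mean accuracy that distinguishes MGM-Boost from plain AdaBoost can be illustrated as the geometric mean of per-class recalls, which penalizes a classifier that ignores minority rating classes. The abstract does not spell out the exact aggregation, so this formulation and the confusion counts are assumptions:

```python
import math

# Geometric-mean accuracy over per-class recalls: a classifier that is
# strong on majority classes but nearly blind to a minority class scores
# far worse than under the arithmetic mean. Recalls below are illustrative.

def geometric_mean_accuracy(per_class_recall):
    """Geometric mean of per-class recalls (0 if any class is missed)."""
    return math.prod(per_class_recall) ** (1.0 / len(per_class_recall))

balanced = [0.60, 0.55, 0.50]  # moderate recall on every rating class
skewed = [0.95, 0.90, 0.05]    # strong on majority, weak on minority

gm_balanced = geometric_mean_accuracy(balanced)
gm_skewed = geometric_mean_accuracy(skewed)
print(f"balanced: {gm_balanced:.3f}, skewed: {gm_skewed:.3f}")
```

Note the skewed classifier has the higher arithmetic mean recall (0.63 vs. 0.55) but the lower geometric mean, which is exactly the imbalance-sensitivity the boosting variant exploits.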

The Monitoring on Plasticizers and Heavy Metals in Teabags (침출용 티백 포장재의 안전성에 관한 연구)

  • Eom, Mi-Ok;Kwak, In-Shin;Kang, Kil-Jin;Jeon, Dae-Hoon;Kim, Hyung-Il;Sung, Jun-Hyun;Choi, Hee-Jung;Lee, Young-Ja
    • Journal of Food Hygiene and Safety
    • /
    • v.21 no.4
    • /
    • pp.231-237
    • /
    • 2006
  • The teabag is now used worldwide for various products, including green tea, black tea, and coffee, because of its convenience. When the outer packaging is printed, however, the plasticizers used to improve the adhesiveness of the printing ink may migrate to the inner teabag. In this study, to monitor residual levels of plasticizers in teabags, we established a simultaneous analysis method for 9 phthalate and 7 adipate plasticizers using gas chromatography (GC), with confirmation by gas chromatography-mass spectrometry (GC-MSD). The recoveries of the plasticizers analyzed by GC ranged from 82.7% to 104.6% with coefficients of variation of 0.6–2.7%, and the correlation coefficient of each plasticizer was 0.9991–0.9999; the simultaneous analysis method therefore showed excellent reproducibility and linearity. The limit of detection (LOD) and limit of quantitation (LOQ) for the individual plasticizers were 0.1–3.5 ppm and 0.3–11.5 ppm, respectively. When 143 commercial teabag products were monitored, no plasticizers were detected in the teabag filters. Migration into 95°C water as a food simulant was also examined, and the 16 plasticizers were not detected. In addition, we analyzed the heavy metals lead (Pb), cadmium (Cd), arsenic (As), and aluminum (Al) in teabag filters using ICP/AES. Trace–23 μg Pb per teabag and 0.6–1718 μg Al per teabag were detected in the sample materials, while Cd and As were below the LOQ (0.05 ppm). The migration of Pb and Al from the teabag filters into 95°C water was up to 11.5 μg and 20.8 μg per teabag, respectively, and Cd and As were not detected in the exudate water of any sample. Collectively, these results suggest that there is no safety concern with using teabag filters.
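The abstract reports LOD and LOQ ranges without stating the convention used; one common convention (per ICH guidance, LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the standard deviation of the blank response and S the calibration slope) can be sketched with illustrative numbers:

```python
# One common way to derive LOD/LOQ from a calibration curve. Whether the
# authors used this convention or a signal-to-noise approach is not stated
# in the abstract; sigma and slope below are purely illustrative.

def lod_loq(sigma, slope):
    """Detection and quantitation limits from blank noise and curve slope."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

sigma = 120.0   # std. dev. of the blank response (illustrative)
slope = 4000.0  # calibration slope, response units per ppm (illustrative)

lod, loq = lod_loq(sigma, slope)
print(f"LOD = {lod:.3f} ppm, LOQ = {loq:.3f} ppm")
```

With these numbers the limits land near the low end of the paper's reported 0.1–3.5 ppm (LOD) and 0.3–11.5 ppm (LOQ) ranges.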

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising(SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.167-194
    • /
    • 2019
  • This research starts from four basic issues confronted when making keyword bidding decisions: incentive incompatibility, limited information, myopia, and the choice of decision variable. To make these concepts concrete, four framework approaches are designed: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose, through empirical tests, a statistical optimization model for constructing a Sponsored Search Advertising (SSA) portfolio from the sponsor's perspective that can be used in portfolio decision making. Previous research to date formulates CTR estimation models using CPC, Rank, Impressions, CVR, etc., individually or collectively, as independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can serve as decision variables in the bidding system. The classical SSA model is designed on the basic assumption that CPC is the decision variable and CTR the response variable. This classical model, however, faces many hurdles in estimating CTR, the main problem being the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR estimates, along with practical management problems. Sponsors make keyword bidding decisions under limited information, and a strategic portfolio approach based on statistical models is necessary. To solve the problem of the classical SSA model, the new SSA model is framed on the basic assumption that Rank is the decision variable. Rank is proposed as the best decision variable for predicting CTR in many papers, and most search engine platforms provide options and algorithms that make it possible to bid by Rank.
Sponsors can therefore participate in keyword bidding with Rank, and this paper tests the validity of the new SSA model and its applicability to constructing an optimal keyword portfolio. The research process is as follows: to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing keywords, selects representative keywords for each category, shows the non-linear relationship, screens scenarios for CTR and CPC estimation, selects the best-fit model through a Goodness-of-Fit (GOF) test, formulates the optimization models, confirms the spillover effects, and suggests a modified optimization model reflecting spillover, along with strategic recommendations. Tests of the optimization models using these CTR/CPC estimation models are performed empirically with two objective functions: (1) maximizing CTR (the CTR optimization model) and (2) maximizing expected profit reflecting CVR (the CVR optimization model). Both the CTR and CVR optimization results confirm significant improvements, showing that the suggested SSA model is valid for constructing the keyword portfolio using the CTR/CPC estimation models proposed in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio because of myopic focus on their low immediate profit. To solve this problem, a Markov chain analysis is carried out and the concepts of the Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. A revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. Strategic guidelines and insights are as follows: brand keywords are usually dominant in almost every aspect, including CTR, CVR, and expected profit.
It turns out, however, that generic keywords are the CTKs: they have spillover potential that can increase consumer awareness and lead users to brand keywords, which is why generic keywords should be emphasized in keyword bidding. The contributions of this thesis are to propose a novel SSA model based on Rank as the decision variable, to propose managing the keyword portfolio by categories according to keyword characteristics, to propose statistical modeling and management based on Rank in constructing the keyword portfolio, and, through empirical tests, to propose new strategic guidelines focusing on the CTK and a modified CVR optimization objective function that reflects the spillover effect instead of the previous expected profit models.
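The Core Transit Keyword idea can be illustrated with a toy Markov chain: a generic keyword that converts poorly on its own still earns expected opportunity profit through the traffic it passes to the brand keyword. The states, transition probabilities, and profit per conversion below are assumptions for illustration, not the paper's estimates:

```python
# Toy absorbing Markov chain for keyword spillover. P[i][j] is the
# probability of moving from keyword/state i to state j; "convert" and
# "exit" are absorbing. All numbers are hypothetical.

P = {
    "generic": {"brand": 0.30, "convert": 0.02, "exit": 0.68},
    "brand":   {"convert": 0.15, "exit": 0.85},
}
PROFIT_PER_CONVERSION = 50.0  # hypothetical profit per conversion

def conversion_prob(state):
    """Probability of eventually reaching "convert" from the given state."""
    if state == "convert":
        return 1.0
    if state == "exit":
        return 0.0
    return sum(p * conversion_prob(nxt) for nxt, p in P[state].items())

immediate = P["generic"]["convert"] * PROFIT_PER_CONVERSION
expected = conversion_prob("generic") * PROFIT_PER_CONVERSION
print(f"immediate profit per generic-keyword visit: {immediate:.2f}")
print(f"expected opportunity profit (with spillover): {expected:.2f}")
```

A myopic CVR objective would score the generic keyword by its immediate profit alone; the EOP-style figure includes the conversions it feeds to the brand keyword, which is why the revised model keeps such keywords in the portfolio.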