• Title/Summary/Keyword: Stocks

Search Results: 1,070

Studies on the Properties of Populus Grown in Korea (포플러재(材)의 재질(材質)에 관(關)한 시험(試驗))

  • Jo, Jae-Myeong;Kang, Sun-Goo;Lee, Yong-Dae;Jung, Hee-Suk;Ahn, Jung-Mo;Shim, Chong-Supp
    • Journal of the Korean Wood Science and Technology
    • /
    • v.10 no.3
    • /
    • pp.68-87
    • /
    • 1982
  • In Korea, total demand for timber in 1972 exceeded 5 million cubic meters, while the available domestic supply in the same year was only about 1 million cubic meters, so a great imbalance between timber demand and supply has prevailed. To solve this hard problem, it has been necessary to build up forest stocks as early as possible with fast-growing species such as poplar. Under these circumstances, poplar plantations carried out by both government and the private sector have reached a large area of 116,603 hectares between 1962 and the present. Poplar has now become a principal timber resource in this country, and its proper utilization requires basic study of the various properties of the wood, since no systematic study has yet been made of the properties of Populus grown in Korea. In order to investigate the anatomical, physical and mechanical properties of nine different species of poplar (P. euramericana Guiner I-214, P. euramericana Guiner I-476, P. deltoides Marsh, P. nigra var. italica (Muchk) Koeme, P. alba L., P. alba × glandulosa, P. maximowiczii Henry, P. koreana Rehder, P. davidiana Dode) for their proper use and for the development of new methods of grading, processing and quality improvement, this study was carried out by the Forest Research Institute.

Empirical Analysis on Bitcoin Price Change by Consumer, Industry and Macro-Economy Variables (비트코인 가격 변화에 관한 실증분석: 소비자, 산업, 그리고 거시변수를 중심으로)

  • Lee, Junsik;Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.195-220
    • /
    • 2018
  • In this study, we conducted an empirical analysis of the factors that affect changes in the Bitcoin closing price. Previous studies have focused on the security of the blockchain system, the economic ripple effects of cryptocurrency, legal implications, and consumer acceptance of cryptocurrency. Cryptocurrency has been studied in various areas, and many researchers and institutions, including governments regardless of country, have tried to utilize cryptocurrency and apply its technology. Despite the rapid, dramatic changes in cryptocurrency prices and the growth of their effects, empirical study of the factors affecting cryptocurrency price changes has been lacking; there were only a few limited studies, business reports, and short working papers. It is therefore necessary to determine which factors affect changes in the Bitcoin closing price. For the analysis, hypotheses were constructed along three dimensions, consumer, industry, and macro-economy, and time series data were collected for the variables of each dimension. Consumer variables consist of search traffic for Bitcoin, search traffic for a Bitcoin ban, search traffic for ransomware, and search traffic for war. Industry variables comprise GPU vendors' stock prices and memory vendors' stock prices. Macro-economy variables include U.S. dollar index futures, FOMC policy interest rates, and the WTI crude oil price. Using these variables, we performed a time series regression analysis to find the relationship between the variables and changes in the Bitcoin closing price. Before the regression analysis, we performed unit-root tests to verify the stationarity of the time series data and thereby avoid spurious regression; the regression was then run on the stationary data. As a result of the analysis, we found that the change in the Bitcoin closing price is negatively related to search traffic for 'Bitcoin ban' and to U.S. dollar index futures, while changes in GPU vendors' stock prices and in the WTI crude oil price showed positive effects. In the case of 'Bitcoin ban', such a ban directly determines whether Bitcoin trading continues or is abolished, which is why consumers reacted sensitively and the variable affected the Bitcoin closing price. GPUs are the raw material of Bitcoin mining, and since an increase in a company's stock price generally reflects growth in the sales of its products and services, increases in GPU demand are indirectly reflected in GPU vendors' stock prices. Interpreting this, a rise in GPU prices puts a crimp on the mining of Bitcoin; consequently, GPU vendors' stock prices affect the change in the Bitcoin closing price. We also confirmed that U.S. dollar index futures moved in the opposite direction to the Bitcoin closing price, much like gold. Gold is considered a safe asset by consumers, which suggests that consumers regard Bitcoin as a safe asset as well. On the other hand, the WTI oil price moved with the Bitcoin closing price, implying that Bitcoin is also regarded as an investment asset, like a raw materials market product. The variables that were not significant in the analysis were search traffic for Bitcoin, search traffic for ransomware, search traffic for war, memory vendors' stock prices, and FOMC policy interest rates. For search traffic for Bitcoin, we judged that interest in Bitcoin did not lead to purchases of Bitcoin; search traffic did not reflect all of Bitcoin's demand, which implies that some factors regulate and mediate Bitcoin purchases. For search traffic for ransomware, it is hard to say that concern about ransomware determined overall Bitcoin demand, because only a few people were damaged by ransomware and the percentage of hackers demanding Bitcoin was low; moreover, such information-security problems are discrete events rather than continuous issues. Search traffic for war was not significant: like the stock market, Bitcoin generally relates negatively to war, but in exceptional cases such as the Gulf War, war shifts stakeholders' profits and environment, and we think this is the same case. Memory vendors' stock prices were not significant because those vendors' flagship products were not the VRAM essential for Bitcoin mining. As for FOMC policy interest rates, when interest rates are low, surplus capital is invested in securities such as stocks, but Bitcoin's price fluctuations were so large that it was not recognized as an attractive commodity by consumers. In addition, unlike the stock market, Bitcoin has no safety mechanisms such as circuit breakers and sidecars. Through this study, we verified which factors affect changes in the Bitcoin closing price and interpreted why such changes happened. In addition, by establishing the characteristics of Bitcoin as a safe asset and an investment asset, we provide a guide for how consumers, financial institutions, and government organizations can approach cryptocurrency. Moreover, by corroborating the factors affecting changes in the Bitcoin closing price, researchers will gain clues as to which factors should be considered in future cryptocurrency studies.
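The two-step procedure described above (unit-root testing for stationarity, then time series regression on the stationary series) can be sketched compactly. The Python fragment below is illustrative only: the file name, the column names, and the simple "difference until stationary" loop are assumptions for demonstration, not the authors' actual data or code.

```python
# Sketch: ADF unit-root test on each series, differencing until stationary,
# then an OLS time series regression of the Bitcoin price change on drivers.
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

df = pd.read_csv("bitcoin_factors.csv", parse_dates=["date"], index_col="date")

def make_stationary(series, alpha=0.05):
    """Difference the series until the ADF test rejects a unit root."""
    s = series.dropna()
    while adfuller(s)[1] > alpha:   # adfuller returns (stat, p-value, ...)
        s = s.diff().dropna()
    return s

stationary = df.apply(make_stationary).dropna()

# Hypothetical column names standing in for the paper's variables.
y = stationary["btc_close"]
X = sm.add_constant(stationary[["ban_search", "gpu_stock",
                                "usd_index_fut", "wti_oil"]])
print(sm.OLS(y, X).fit().summary())
```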

A Study on the Changes of Taste Components in Brisket and Shank Gom-Kuk by Cooking Conditions (조리조건에 따른 양지머리와 사골곰국의 맛성분 변화에 대한 연구)

  • 조은자;정은정
    • Korean journal of food and cookery science
    • /
    • v.15 no.5
    • /
    • pp.490-499
    • /
    • 1999
  • The purpose of this study was to investigate changes in the taste components of boiled beef brisket soup stock and shank soup stock with varying pretreatment, boiling temperature, and boiling time. Free amino acids, nucleotides, color, and sensory evaluation were analyzed for each sample. The results were as follows: 1. The amount of free amino acids in brisket soup stock pretreated by soaking and blanching tended to increase in proportion to boiling time. The amount of glutamic acid in brisket soup stock decreased in the order soaking > blanching > roasting pretreatment. While the glutamic acid content of boiled soup stock pretreated by soaking and blanching was higher at low temperature than at high temperature, that of soup stock pretreated by roasting was higher at high temperature. Glutamic acid was highest in the soaked soup stock, recording 8.73 mg% after 6 hours of low-temperature boiling. 2. The free amino acids in shank soup stock showed no regular tendency and changed little with the method of pretreatment. Glutamic acid in shank soup stock pretreated by soaking and blanching was highest when boiled for 3 hours at high temperature. Samples pretreated by roasting showed the highest record, 2.49 mg%, when boiled for 6 hours at high temperature, but no regular tendency could be recognized for boiling at low temperature. 3. The nucleotides in brisket soup stock generally increased in proportion to boiling time. The 5'-IMP extracted from brisket soup stock decreased in the order blanching > soaking > roasting pretreatment, with few differences between the blanched and soaked samples. The 5'-IMP extracted from samples pretreated by soaking and blanching was high with low-temperature boiling, and from samples pretreated by roasting with high-temperature boiling. The 5'-IMP extracted from soup stocks pretreated by soaking (BSL) and blanching (BBL) was highest at 6 hours (37.06 mg%) and 5 hours (38.37 mg%) of low-temperature boiling, respectively. The 5'-IMP in soup stock pretreated by roasting (BRH) was highest after 6 hours of high-temperature boiling (10.85 mg%). 4. The 5'-IMP extracted from shank soup stock pretreated by soaking and blanching tended to decrease after 3 hours of boiling regardless of boiling temperature. The 5'-IMP in shank soup stock decreased in the order soaking > blanching > roasting pretreatment and was high with high-temperature boiling; the sample pretreated by roasting showed the highest record when boiled for 6 hours at high temperature (1.55 mg%). 5. The L value of brisket soup stock pretreated by roasting at high temperature (BRH) was the lowest, and its b value the highest, of all the brisket samples boiled for 6 hours. No differences were found in the L, a, and b values of shank soup stock by method of pretreatment or boiling temperature. 6. The sensory scores for color and flavor of brisket soup stock showed that BRH was higher than the other samples, while preference for taste and overall preference were highest for BSH and lowest for BRH. Preference for all sensory characteristics of SSH was higher than for any other shank soup stock, but showed no statistically significant difference.

Scale and Scope Economies and Prospect for the Korea's Banking Industry (우리나라 은행산업(銀行産業)의 효율성분석(效率性分析)과 제도개선방안(制度改善方案))

  • Jwa, Sung-hee
    • KDI Journal of Economic Policy
    • /
    • v.14 no.2
    • /
    • pp.109-153
    • /
    • 1992
  • This paper estimates a translog cost function for Korea's banking industry and, based on the estimated efficiency indicators for the banking sector, derives implications for the future structure of Korean banking. The Korean banking industry is permitted to operate trust business to the full extent and securities business to a limited extent, while formally being subject to a strict, specialized banking system. Securities underwriting and investment are allowed only to a very limited extent, only for stocks and for bonds of maturity longer than three years, and only up to 100 percent of the bank's paid-in capital (until the end of 1991, the ceiling was 25 percent of the total balance of demand deposits); banks are prohibited from the securities brokerage business. While in-house integration of securities business with the traditional deposit and commercial lending business is restrictively regulated in this way, Korean banks can enter the securities business by establishing subsidiaries in that industry. This paper therefore estimates efficiency indicators as well as cost functions, identifying the in-house integrated trust business and securities investment business as important banking activities, for various cases in which the production and intermediation approaches to modelling financial intermediaries are applied separately, and in which the banking businesses of deposit, lending and securities investment as one group and the trust businesses as another are analyzed both separately and together. The estimation results of the efficiency indicators for the various cases are summarized in Table 1 and Table 2. First, the securities businesses exhibit economies of scale and also economies of scope with traditional banking activities, which implies that in-house integration of banking and securities businesses need not be a suboptimal banking structure. This result further implies that transforming Korea's banking system from the current specialized system to a universal banking system would not impede improvement of the industry's efficiency. Second, the lending businesses turn out to be subject to diseconomies of scale, while exhibiting unclear evidence of economies of scope; in sum, this implies a potential efficiency gain from continued in-house integration of the lending activity. Third, continued integration of the trust businesses seems to contribute to improving the efficiency of the banking businesses, since the trust businesses exhibit economies of scope. Fourth, deposit services and fee-based activities, such as foreign exchange and credit card businesses, exhibit economies of scale but constant returns to scope, which implies the possibility of separating those businesses from other banking and trust activities. The recent trend of the credit card business being operated as an independent entity, separately from other banking activities, in Korea as well as in the global banking market seems consistent with this finding. How, then, should the possibility of separating deposit services from the remaining activities be interpreted? Under a strict definition of commercial banking confined to deposit and commercial lending activities, separating the deposit service would suggest the dissolution, or disappearance, of banking itself. Recently, however, it has been suggested that separating banks' deposit and lending activities, by allowing a depository institution that specializes in taking deposits and invests deposit funds only in the safest securities, such as government securities, to administer the deposit activity, would alleviate the risk of a bank run and in turn help improve the safety of the payment system (Robert E. Litan, What Should Banks Do?, Washington, D.C.: The Brookings Institution, 1987). In this context, the possibility of separating the deposit activity implies that a new type of depository institution could arise naturally, without contradicting the efficiency of the banking businesses, as the banking market grows in the future. It is also interesting to see additional evidence supporting this: deposit taking and the securities business are cost complements, while deposit taking and lending are cost substitutes (see Table 2 for cost complementarity relationships in Korea's banking industry). Finally, the Korean banking industry appears to lack the characteristics of a natural monopoly; it may therefore not be optimal to encourage mergers and acquisitions in the banking industry solely for the purpose of improving efficiency.
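As a rough illustration of how a translog cost function yields the scale-economy indicators discussed above, the sketch below fits a two-output translog by OLS and computes overall scale economies at the sample mean. It is a minimal stand-in, not the paper's specification: the data file, the output definitions, and the omission of input prices and of the scope-economy calculation are all simplifying assumptions.

```python
# Sketch: two-output translog cost function estimated by OLS, then overall
# scale economies = 1 / (sum of cost elasticities) at the sample mean.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bank_panel.csv")   # hypothetical columns: C, y1 (loans), y2 (securities)
for v in ["C", "y1", "y2"]:
    df[f"ln_{v}"] = np.log(df[v])
df["half_sq1"] = 0.5 * df["ln_y1"] ** 2
df["half_sq2"] = 0.5 * df["ln_y2"] ** 2
df["cross"] = df["ln_y1"] * df["ln_y2"]

# ln C = a0 + a1 ln y1 + a2 ln y2 + (1/2)b11 ln y1^2 + (1/2)b22 ln y2^2 + b12 ln y1 ln y2
model = smf.ols("ln_C ~ ln_y1 + ln_y2 + half_sq1 + half_sq2 + cross", data=df).fit()

m1, m2 = df["ln_y1"].mean(), df["ln_y2"].mean()
p = model.params
e1 = p["ln_y1"] + p["half_sq1"] * m1 + p["cross"] * m2   # elasticity of cost w.r.t. y1
e2 = p["ln_y2"] + p["half_sq2"] * m2 + p["cross"] * m1   # elasticity of cost w.r.t. y2
print("scale economies (>1 implies increasing returns):", 1.0 / (e1 + e2))
```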

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.77-97
    • /
    • 2010
  • Market timing is an investment strategy used to obtain excess return from the financial market. In general, detecting market timing means determining when to buy and sell so as to earn excess return from trading. In many market timing systems, trading rules have been used as the engine that generates trade signals. Some researchers have proposed rough set analysis as a proper tool for market timing because, through its control function, it generates no trade signal when the market pattern is uncertain. Data for rough set analysis must be discretized from numeric values, because rough sets accept only categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals; all values lying within an interval are transformed into the same value. In general, there are four methods of data discretization in rough set analysis: equal-frequency scaling, expert's knowledge-based discretization, minimum-entropy scaling, and naïve and Boolean reasoning-based discretization. Equal-frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, obtained through literature review or interviews. Minimum-entropy scaling implements an algorithm that recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization finds categorical values by naïve scaling of the data and then finds the optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on the impact of the various data discretization methods on trading performance with rough set analysis. In this study, we compare stock market timing models that use rough set analysis with the various data discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and status in their corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning, but the expert's knowledge-based discretization is the most profitable method for the validation sample; the expert's knowledge-based discretization also produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
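Of the four discretization schemes compared above, equal-frequency scaling is the easiest to make concrete. The sketch below is a hypothetical illustration (the indicator values are invented); pandas' qcut performs exactly this kind of equal-count cut search.

```python
# Sketch: equal-frequency discretization of a numeric technical indicator
# into categorical codes, the form of input a rough-set engine consumes.
import pandas as pd

df = pd.DataFrame({"rsi": [28.1, 45.3, 51.0, 63.7, 70.2, 33.9, 58.4, 49.5]})

# Four intervals with approximately equal sample counts per interval.
df["rsi_bin"], cuts = pd.qcut(df["rsi"], q=4, labels=False, retbins=True)
print(df)
print("cuts:", cuts)   # the interval boundaries found from the histogram
```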

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.39-55
    • /
    • 2019
  • Recently, banks and large financial institutions have introduced many robo-advisor products. A robo-advisor is an automated system that produces optimal asset allocation portfolios for investors using financial engineering algorithms, without human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since robo-advisor algorithms present asset allocation output to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. The model is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return, using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point: implied equilibrium returns for each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected return. These new estimates can produce an optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor holds no views on the asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. But what if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolio output for Black-Litterman users. This paper therefore suggests an objective investor-views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the returns, standard deviations, Stochastics %K, and price parity degree of each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, yielding the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and a risk parity model are used, with the value-weighted and equal-weighted market portfolios as benchmark indexes. We collected the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is 2008 to 2015 and the testing period is 2016 to 2018. Our suggested intelligent views model, combined with implied equilibrium returns, produced the optimal Black-Litterman portfolio. In the out-of-sample period, this portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolio: the total return of the 3-year Black-Litterman portfolio was 6.4%, the highest value; the maximum drawdown was -20.8%, the lowest value; and the Sharpe ratio, which measures the return-to-risk ratio, was the highest at 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective view model when practitioners apply robo-advisor asset allocation algorithms in real trading.
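The Black-Litterman combination step described above can be written in a few lines of linear algebra. The numpy sketch below uses the standard Black-Litterman master formula with toy inputs; the covariance matrix, market weights, view matrices and scalars are invented for illustration and are not the paper's estimates (in the paper, P and Q would be populated from the SVM view model).

```python
# Sketch: implied equilibrium returns via reverse optimization, blended with
# one view (P, Q) through the Black-Litterman posterior formula.
import numpy as np

delta, tau = 2.5, 0.05                            # risk aversion, uncertainty scale
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])    # toy asset covariance
w_mkt = np.array([0.6, 0.4])                      # market-cap weights

pi = delta * Sigma @ w_mkt                        # implied equilibrium returns

P = np.array([[1.0, -1.0]])                       # one relative view: asset 1 beats asset 2
Q = np.array([0.02])                              # by 2% (e.g., from the view model)
Omega = np.diag(np.diag(P @ (tau * Sigma) @ P.T)) # view uncertainty

# Posterior expected returns (the Black-Litterman master formula).
inv = np.linalg.inv
mu_bl = inv(inv(tau * Sigma) + P.T @ inv(Omega) @ P) @ (
    inv(tau * Sigma) @ pi + P.T @ inv(Omega) @ Q)

w_bl = inv(delta * Sigma) @ mu_bl                 # unconstrained mean-variance weights
print("posterior returns:", mu_bl, "weights:", w_bl)
```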

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult and has been computerized less than users need. In recent years, there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but this can be fragile in practice, because whether the patterns found are suitable for trading is a separate matter. Those studies find a meaningful pattern, locate points matching the pattern, and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, there can be large disparities with reality. Whereas existing research tries to discover patterns with stock price predictive power, this study proposes defining the patterns first and trading when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Despite reports that some of the patterns have price predictability, there have been no performance reports from actual markets. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can easily be implemented by the system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement assumes that both the buy and the sell were actually executed, making it a realistic setting. We tested three ways of calculating the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and computes the vertices. In the second, the high-low line zig-zag, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third, the swing wave method, a central high price higher than the n high prices to its left and right is taken as a peak, and a central low price lower than the n low prices to its left and right is taken as a valley. The swing wave method was superior to the other methods in the tests, which we interpret as meaning that trading after confirming completion of a pattern is more effective than trading while the pattern is still unfinished. Although it was virtually impossible to find high-success-rate patterns exhaustively because the number of cases in this simulation was too large, genetic algorithms (GA) proved the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the test section and the application section separately, allowing us to respond appropriately to market changes. In this study, we optimize over a stock portfolio, because optimizing the variables for each individual stock carries a risk of over-optimization; we therefore set the number of constituent stocks at 20 to gain the effect of diversified investment while avoiding over-fitting. We tested the KOSPI market divided into six categories. In the results, the small-cap stock portfolio was the most successful and the high-volatility stock portfolio was the second best. This shows that patterns need some price volatility in order to take shape, but that more volatility is not always better.
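Of the three turning-point methods, the minimum-change-rate zig-zag is the simplest to illustrate. The sketch below is a bare-bones version under simplifying assumptions (toy prices, a single reversal threshold); the five vertices it returns are the kind of input from which an M or W shape would be classified.

```python
# Sketch: minimum-change-rate zig-zag. A new vertex is locked in only when
# price reverses by more than `pct` from the last extreme.
def zigzag(prices, pct=0.05):
    pivots = [0]                      # indices of confirmed turning points
    trend = 0                         # +1 rising leg, -1 falling leg, 0 unknown
    extreme_i = 0
    for i in range(1, len(prices)):
        if prices[i] > prices[extreme_i] and trend >= 0:
            extreme_i, trend = i, 1   # extend the rising leg
        elif prices[i] < prices[extreme_i] and trend <= 0:
            extreme_i, trend = i, -1  # extend the falling leg
        elif abs(prices[i] - prices[extreme_i]) / prices[extreme_i] > pct:
            pivots.append(extreme_i)  # reversal exceeds threshold: lock vertex
            extreme_i, trend = i, -trend
    pivots.append(extreme_i)
    return pivots

prices = [100, 103, 108, 104, 99, 101, 107, 112, 106, 100]
print(zigzag(prices, pct=0.05))       # five vertices can define one M/W shape
```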

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility of stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon that appear in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. But since Black Monday in 1987, stock market prices have become very complex and exhibit a great deal of noise. Recent studies have begun to apply artificial intelligence approaches to estimating GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial and radial. We analyzed the suggested models on the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characterized by fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility rises, buy volatility today; if it falls, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. The IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values cannot themselves be traded, but our simulation results are meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. The MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for the SVR-based symmetric S-GARCH; the MLE-based asymmetric E-GARCH shows -72% versus +245.6% for the SVR-based version; and the MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3% for the SVR-based version. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS; the SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider the costs incurred in trading, including brokerage commissions and slippage. The IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models, and further studies on other machine-learning-based GARCH models can provide better information for stock market investors.
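A minimal sketch of the MLE baseline and the IVTS entry rule is given below, using the widely available `arch` package for the GARCH(1,1) fit. The data file and column name are assumptions, and the SVR-based estimation step is omitted; this shows only the shape of the fit-forecast-signal loop.

```python
# Sketch: fit GARCH(1,1) by MLE, forecast next-day volatility, and apply the
# IVTS rule (buy volatility if the forecast rises, sell if it falls).
import numpy as np
import pandas as pd
from arch import arch_model

returns = pd.read_csv("kospi200.csv", index_col=0, parse_dates=True)["close"] \
            .pct_change().dropna() * 100          # percent daily returns

res = arch_model(returns, vol="Garch", p=1, q=1).fit(disp="off")

cond_vol = res.conditional_volatility
forecast = res.forecast(horizon=1).variance.iloc[-1, 0] ** 0.5

# IVTS rule: +1 = buy volatility, -1 = sell volatility, 0 = hold position.
signal = np.sign(forecast - cond_vol.iloc[-1])
print("forecast vol:", forecast, "signal:", signal)
```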

A Study on Estimation of Edible Meat Weight in Live Broiler Chickens (육용계(肉用鷄)에서 가식육량(可食肉量)의 추정(推定)에 관(關)한 연구(硏究))

  • Han, Sung Wook;Kim, Jae Hong
    • Korean Journal of Agricultural Science
    • /
    • v.10 no.2
    • /
    • pp.221-234
    • /
    • 1983
  • A study was conducted to devise a method of estimating the edible meat weight of live broilers. White Cornish broiler chicks (CC), Single Comb White Leghorn egg-strain chicks (LL), and the two reciprocal crosses of these parent stocks (CL and LC) were employed. A total of 240 birds, 60 from each breed, were reared and sacrificed at 0, 2, 4, 6, 8 and 10 weeks of age in order to measure various body parameters. The results obtained from this study were summarized as follows. 1) The average body weights of CC and LL were 1,820 g and 668 g, respectively, at 8 weeks of age; the feed-to-gain ratios for CC and LL were 2.24 and 3.28, respectively. 2) The weight percentages of edible meat to body weight for CC were 34.7, 36.8 and 37.5% at 6, 8 and 10 weeks of age, respectively; the values for LL were 30.7, 30.5 and 32.3%. The CL and LC were intermediate in this respect, and no significant differences were found among the four breeds. 3) The CC showed significantly smaller weight percentages than the other breeds for neck, feathers, and inedible viscera, while the LL showed smaller weight percentages of leg and abdominal fat to body weight than the others. No significant difference was found among breeds in the weight percentage of blood to body weight. With regard to edible meat, the CC had significantly heavier breast and drumstick, and the edible viscera were significantly heavier in LL; there was no consistent trend in neck, wing and back weights. 4) The CC showed significantly larger body shape measurements than the other breeds at all ages, and a significant difference in body shape measurements was found between CL and LC at 10 weeks of age. 5) All of the body shape measurements except breast angle were highly correlated with edible meat weight; it therefore appeared possible to estimate the edible meat weight of live chickens from these values. 6) The optimum regression equations for estimating edible meat weight from body shape measurements at 10 weeks of age were as follows. $$Y_{CC}=-1,475.581+5.054X_{26}+3.080X_{24}+3.772X_{25}+14.321X_{35}+1.922X_{27}\;(R^2=0.88)$$ $$Y_{LL}=-347.407+4.549X_{33}+3.003X_{31}\;(R^2=0.89)$$ $$Y_{CL}=-1,616.793+4.430X_{24}+8.566X_{32}\;(R^2=0.73)$$ $$Y_{LC}=-603.938+2.142X_{24}+3.039X_{27}+3.289X_{33}\;(R^2=0.96)$$ where $X_{24}$ = chest girth, $X_{25}$ = breast width, $X_{26}$ = breast length, $X_{27}$ = keel length, $X_{31}$ = drumstick girth, $X_{32}$ = tibiotarsus length, $X_{33}$ = shank length, and $X_{35}$ = shank diameter. 7) The breed and age factors caused considerable variation in assessing the edible meat weight of live chickens; it seems, however, that edible meat weight can be estimated fairly accurately with the optimum regression equations derived from the various body shape measurements.
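Since the abstract gives the regression coefficients explicitly, the CC equation can be evaluated directly. The function below transcribes the published $Y_{CC}$ equation; the sample measurements passed to it are invented illustration values, not data from the paper.

```python
# Sketch: evaluating the published regression for White Cornish (CC) birds
# at 10 weeks. Coefficients come from the abstract; inputs are toy values.
def edible_meat_cc(chest_girth, breast_width, breast_length,
                   keel_length, shank_diameter):
    """Y_CC = -1475.581 + 5.054*X26 + 3.080*X24 + 3.772*X25
              + 14.321*X35 + 1.922*X27   (R^2 = 0.88)"""
    return (-1475.581 + 5.054 * breast_length + 3.080 * chest_girth
            + 3.772 * breast_width + 14.321 * shank_diameter
            + 1.922 * keel_length)

print(edible_meat_cc(chest_girth=330, breast_width=95, breast_length=150,
                     keel_length=120, shank_diameter=12))
```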

A Study on the Forest Yield Regulation by Systems Analysis (시스템분석(分析)에 의(依)한 삼림수확조절(森林收穫調節)에 관(關)한 연구(硏究))

  • Cho, Eung-hyouk
    • Korean Journal of Agricultural Science
    • /
    • v.4 no.2
    • /
    • pp.344-390
    • /
    • 1977
  • The purpose of this paper was to schedule an optimum cutting strategy that would maximize total yield from an industrial forest, under certain restrictions on periodic timber removals and harvest areas, based on a linear programming technique. The sensitivity of the regulation model to variations in the restrictions has also been analyzed to obtain information on changes in total yield over the planning period. The regulation procedure was applied to the experimental forest of the College of Agriculture, Seoul National University. The forest is composed of 219 cutting units and is characterized by younger age groups, which is very common in Korea. The planning period is divided into 10 cutting periods of five years each, and cutting is permissible only on stands of age groups 5-9. It is also assumed that subsequent forests are established immediately after the existing forests are cut, that non-stocked forest lands are planted in the first cutting period, and that established forests are fully stocked until the next harvest. All feasible cutting regimes were defined for each unit depending on its age group. The total yield (Vi,k) of each regime expected in the planning period was projected using stand yield tables and forest inventory data, and the regime giving the highest Vi,k was selected as the optimum cutting regime. After calculating the periodic yields, cutting areas, and total yield from the optimum regimes selected without any restrictions, the upper and lower limits of periodic yields (Vj-max, Vj-min) and of periodic cutting areas (Aj-max, Aj-min) were decided, and the optimum regimes under these restrictions were selected by linear programming. The results of the study may be summarized as follows: 1. The fluctuations of periodic harvest yields and areas under the cutting regimes selected without restrictions were very great, because of the irregular composition of age classes and growing stock in the existing stands: about 68.8 percent of the total yield is expected in period 10, while no yield is expected in periods 6 and 7. 2. After inspection of the above solution, restricted optimum cutting regimes were obtained under the restrictions Amin = 150 ha, Amax = 400 ha, $Vmin=5,000m^3$ and $Vmax=50,000m^3$, using the LP regulation model. As a result, a stable harvest yield of about $50,000m^3$ per period and a relatively balanced age-group distribution are expected from period 5; in this case, the loss in total yield was about 29 percent relative to the unrestricted regimes. 3. Thinning schedules could easily be handled by the model presented in the study, and the thinnings made it possible to select optimum regimes that smooth the wood flows, to say nothing of increasing total yield over the planning period. 4. It was found that the stronger the restrictions in the optimum solution, the earlier the period in which balanced harvest yields and age-group distribution are achieved. In this particular case, the periodic yields were strongly affected by the constraints, and the fluctuations of harvest areas depended on the amount of the periodic yields. 5. Because total yield decreased at an increasing rate as stronger restrictions were imposed, the loss would be very great where a strict sustained yield and a normal age-group distribution are required in the earlier periods. 6. Total yield under the same restrictions in a period was increased by lowering the felling age and extending the range of cutting age groups. It therefore seemed advantageous, for maximizing timber yield, to adopt a wider range of cutting age groups, with the lower limit set at the age at which the smallest utilizable size of timber can be produced. 7. The LP regulation model presented in the study seems useful in the Korean situation from the following points of view: (1) the model can provide forest managers with the solution of where, when, and how much to cut in order to best fulfill the owner's objectives; (2) planning is visualized as a continuous process in which new strategies are automatically evolved as changes in the forest environment are recognized; (3) the cost (measured as the decrease in total yield) of imposing restrictions can easily be evaluated; (4) thinning schedules can be treated without difficulty; (5) the model can be applied to irregular forests; and (6) traditional regulation methods can be reinforced by the model.
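The structure of the LP regulation model, maximize total yield over regime-assignment variables subject to periodic yield and area bounds, can be sketched with an off-the-shelf solver. The toy problem below (three cutting units, one period-1 yield cap) is invented for illustration; only the scipy call is real.

```python
# Sketch: choose how much area of each unit to cut so total yield is
# maximized, subject to a periodic yield cap and per-unit area bounds.
import numpy as np
from scipy.optimize import linprog

yield_per_ha = np.array([120.0, 90.0, 150.0])   # m^3/ha for regimes 1..3
area_avail   = np.array([200.0, 300.0, 250.0])  # ha available in each unit

# linprog minimizes, so negate the objective to maximize total yield.
c = -yield_per_ha

# Period-1 yield cap: suppose regimes 1 and 3 cut in period 1 (<= 30,000 m^3).
A_ub = np.array([[120.0, 0.0, 150.0]])
b_ub = np.array([30_000.0])

bounds = [(0, a) for a in area_avail]           # 0 <= cut area <= unit area
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("areas cut (ha):", res.x, "total yield (m^3):", -res.fun)
```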
