• Title/Summary/Keyword: Mathematical Models

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically, yet achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have shown that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance.
First, SVM was originally proposed for solving binary classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well on multi-class problems as SVM does on binary problems. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used to reduce computation time in multi-class settings, but they may deteriorate classification performance. Third, the difficulty in multi-class prediction lies in the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another. Such data sets often produce a default classifier with a skewed boundary, and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach that copes with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms, and AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. Observations incorrectly predicted by previous classifiers are chosen more often than those correctly predicted, so boosting attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly. In this way, it can reinforce the training of misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem.
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process considers the geometric mean-based accuracy and errors across the classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In terms of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
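
The distinction between arithmetic mean-based and geometric mean-based accuracy is central to MGM-Boost. The sketch below (illustrative, not the paper's implementation) shows why the geometric mean of per-class recalls punishes a classifier that ignores a minority class even when its overall accuracy looks acceptable:

```python
import math

def per_class_recall(y_true, y_pred):
    """Recall for each class label that appears in y_true."""
    recalls = {}
    for c in sorted(set(y_true)):
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls[c] = sum(1 for i in idx if y_pred[i] == c) / len(idx)
    return recalls

def arithmetic_accuracy(y_true, y_pred):
    """Plain fraction of correct predictions."""
    return sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

def geometric_accuracy(y_true, y_pred):
    """Geometric mean of per-class recalls: zero if any class is entirely missed."""
    r = list(per_class_recall(y_true, y_pred).values())
    return math.prod(r) ** (1 / len(r))

# Imbalanced two-class example: predicting the majority class everywhere
y_true = ["A"] * 8 + ["B"] * 2
y_pred = ["A"] * 10
```

Here arithmetic_accuracy gives 0.8 while geometric_accuracy gives 0.0, which is exactly the failure mode that MGM-Boost's geometric mean-based error term is designed to penalize.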

The Impact of the Internet Channel Introduction Depending on the Ownership of the Internet Channel (도입주체에 따른 인터넷경로의 도입효과)

  • Yoo, Weon-Sang
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.1
    • /
    • pp.37-46
    • /
    • 2009
  • The Census Bureau of the Department of Commerce announced in May 2008 that U.S. retail e-commerce sales for 2006 reached $107 billion, up from $87 billion in 2005, an increase of 22 percent. From 2001 to 2006, retail e-sales increased at an average annual growth rate of 25.4 percent. The explosive growth of e-commerce has caused profound changes in marketing channel relationships and structures in many industries. Despite the great potential implications for both academicians and practitioners, a great deal of uncertainty remains about the impact of the Internet channel introduction on distribution channel management. The purpose of this study is to investigate how the ownership of the new Internet channel affects the existing channel members and consumers. To explore these research questions, this study conducts well-controlled mathematical experiments that isolate the impact of the Internet channel by comparing the situations before and after its entry. The model consists of a monopolist manufacturer selling its product through a channel system including one independent physical store before the entry of an Internet store. The addition of the Internet store to this channel system results in a mixed channel comprised of two different types of channels. The new Internet store can be launched by the independent physical store, such as Best Buy; in this case, the physical retailer coordinates the two types of stores to maximize the joint profits from the two stores. The Internet store can also be introduced by an independent Internet retailer such as Amazon; in this case, retail-level competition occurs between the two types of stores. Although the manufacturer sells only one product, consumers view each product-outlet pair as a unique offering. Thus, the introduction of the Internet channel provides two product offerings for consumers. The channel structures analyzed in this study are illustrated in Fig. 1.
It is assumed that the manufacturer plays as a Stackelberg leader maximizing its own profits with foresight of the independent retailer's optimal responses, as typically assumed in previous analytical channel studies. As a Stackelberg follower, the independent physical retailer or independent Internet retailer maximizes its own profits, conditional on the manufacturer's wholesale price. The price competition between the two independent retailers is assumed to be a Bertrand-Nash game. For simplicity, the marginal cost is set at zero, as is typical in this type of study. To explore the research questions above, this study develops a game-theoretic model with three key characteristics. First, the model explicitly captures the fact that an Internet channel and a physical store exist in two independent dimensions (one in physical space and the other in cyberspace). This enables the model to demonstrate that the effect of adding an Internet store differs from that of adding another physical store. Second, the model reflects the fact that consumers are heterogeneous in their preferences for using a physical store and for using an Internet channel. Third, the model captures the vertical strategic interactions between an upstream manufacturer and a downstream retailer, making it possible to analyze the channel structure issues discussed in this paper. Although numerous previous models capture this vertical dimension of marketing channels, none simultaneously incorporates all three characteristics. The analysis results are summarized in Table 1. When the new Internet channel is introduced by the existing physical retailer, which coordinates both types of stores to maximize their joint profits, retail prices increase due to a combination of coordinated retail pricing and wider market coverage.
The quantity sold does not increase significantly despite the wider market coverage, because the excessively high retail prices partially offset the market coverage effect. Interestingly, the coordinated total retail profits are lower than the combined retail profits of two competing independent retailers. This implies that when a physical retailer opens an Internet channel, the retailers could be better off managing the two channels separately rather than coordinating them, unless they have foresight of the manufacturer's pricing behavior. It is also found that the introduction of an Internet channel affects the power balance of the channel. Retail competition is strong when an independent Internet store joins a channel with an independent physical retailer, which implies that each retailer in this structure has weak channel power. Due to the intense retail competition, the manufacturer uses its channel power to increase its wholesale price and extract more of the total channel profit; the retailers, however, cannot raise retail prices accordingly because of the intense retail-level competition. In this case, consumer welfare increases due to the wider market coverage and the lower retail prices caused by the retail competition. The model employed in this study is not designed to capture all the characteristics of the Internet channel. The theoretical model can also be applied to any store that is not geographically constrained, such as TV home shopping or catalog sales via mail. The reasons the model in this study is named "Internet" are as follows: first, the most representative example of a store that is not geographically constrained is the Internet. Second, catalog sales usually determine their target markets using pre-specified mailing lists; in this respect, the model used in this study is closer to the Internet than to catalog sales.
However, a desirable future research direction would be to distinguish mathematically and theoretically the core differences among stores that are not geographically constrained. The model is simplified by a set of assumptions to preserve mathematical tractability. First, this study assumes that price is the only strategic tool for competition; in the real world, however, various marketing variables can be used, so a more realistic model could incorporate variables such as service levels or operating costs. Second, this study assumes a market with one monopoly manufacturer, so the results should be interpreted carefully in light of this limitation; future research could relax it by introducing manufacturer-level competition. Finally, some of the results are drawn from the assumption that the monopoly manufacturer is the Stackelberg leader. Although this is a standard assumption in game-theoretic studies of this kind, deeper understanding and more general findings could be gained by analyzing the model under different game rules.
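
The backward-induction logic of the Stackelberg game described above can be illustrated with a deliberately simple linear-demand example. The demand function q = 1 - p and zero marginal cost below are assumptions for illustration, not the model in the paper:

```python
def retailer_best_response(w):
    """Follower: maximize (p - w) * (1 - p); the first-order condition gives p = (1 + w) / 2."""
    return (1 + w) / 2

def manufacturer_profit(w):
    """Leader: profit w * q, anticipating the retailer's best response."""
    p = retailer_best_response(w)
    return w * (1 - p)

# The leader searches over wholesale prices with foresight of the follower's reply
w_star = max((i / 1000 for i in range(1001)), key=manufacturer_profit)
p_star = retailer_best_response(w_star)
```

With these assumptions the leader picks w* = 0.5 and the retailer replies with p* = 0.75, reproducing the double-marginalization pattern behind the wholesale-price results discussed above.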

  • PDF

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.147-168
    • /
    • 2017
  • There have been many studies on accurate stock market forecasting in academia for a long time, and various forecasting models using diverse techniques now exist. Recently, many attempts have been made to predict the stock index using machine learning methods, including deep learning. Although both fundamental analysis and technical analysis are used in traditional stock investment, technical analysis is more useful for short-term transaction prediction and for statistical and mathematical techniques. Most studies using these technical indicators have modeled stock price prediction as a binary classification (rising or falling) of future market movement, usually for the next trading day. However, this binary classification has many unfavorable aspects for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we predict the stock index by expanding the binary scheme to a multiple classification of the stock index trend (upward, boxed, downward). To solve this multi-classification problem, techniques such as multinomial logistic regression (MLOGIT), multiple discriminant analysis (MDA), or artificial neural networks (ANN) could be applied; instead, we propose an optimization model that uses a genetic algorithm as a wrapper to improve the performance of multi-class Support Vector Machines (MSVM), which have proved superior in prediction performance. In particular, the proposed model, named GA-MSVM, is designed to maximize model performance by optimizing not only the kernel function parameters of MSVM, but also the selection of input variables (feature selection) and of training instances (instance selection).
To verify the performance of the proposed model, we applied it to real data. The results show that the proposed method is more effective than the conventional multi-class SVM, which has been known to show the best prediction performance to date, as well as existing artificial intelligence / data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection was confirmed to play a very important role in predicting the stock index trend, contributing more to the model's improvement than the other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's KOSPI200 stock index. Our research primarily aims at predicting trend segments to capture signal acquisition or short-term trend transition points. The experimental data set includes technical indicators such as the price and volatility index (2004~2017) of the KOSPI200 stock index and macroeconomic data (interest rate, exchange rate, S&P 500, etc.). Using a variety of statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, trend classification, was coded into three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). 70% of the data for each class was used for training and the remaining 30% for validation. For comparison, several models (MDA, MLOGIT, CBR, ANN and MSVM) were also tested. MSVM adopted the One-Against-One (OAO) approach, known as the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed model, GA-MSVM, performs at a significantly higher level than all comparative models.
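
A genetic-algorithm wrapper of the kind GA-MSVM uses can be sketched with a toy fitness function standing in for MSVM cross-validation accuracy. The fitness, population size, and operators below are illustrative assumptions; the paper wraps the GA around actual MSVM training:

```python
import random

def ga_select(fitness, n_bits, pop_size=30, gens=40, p_mut=0.05, seed=7):
    """Tiny elitist GA over bit-masks; bit i = 1 keeps feature/instance i."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)          # one-point crossover
            children.append([bit ^ (rng.random() < p_mut)   # bit-flip mutation
                             for bit in a[:cut] + b[cut:]])
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: pretend inputs 0, 2 and 4 help the classifier and the rest hurt it
USEFUL = {0, 2, 4}
def toy_fitness(mask):
    return sum(1 if i in USEFUL else -1 for i, bit in enumerate(mask) if bit)

best_mask = ga_select(toy_fitness, n_bits=8)
```

In GA-MSVM the same chromosome additionally encodes the MSVM kernel parameters, and the fitness is the cross-validated prediction accuracy of the resulting model.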

A Review Study on Major Factors Influencing Chlorine Disappearances in Water Storage Tanks (저수조 내 잔류염소 감소에 미치는 주요 영향 인자에 관한 문헌연구)

  • Noh, Yoorae;Kim, Sang-Hyo;Choi, Sung-Uk;Park, Joonhong
    • Journal of Korean Society of Disaster and Security
    • /
    • v.9 no.2
    • /
    • pp.63-75
    • /
    • 2016
  • For safe water supply, residual chlorine must be maintained in tap water above a certain level, from the drinking water treatment plant to the final end point. However, according to the current literature, approximately 30-60% of residual chlorine is lost along the water supply pathway. The losses of residual chlorine may be attributed to the current tendency of water supply managers to reduce the chlorine dosage at drinking water treatment plants, aqueous-phase decomposition of residual chlorine in supply pipes, accelerated chlorine decomposition at high temperatures during summer, leakage or loss of residual chlorine from old water supply pipes, and disappearance of residual chlorine in water storage tanks. Because of these factors, it is difficult to rule out the possibility that residual chlorine concentrations fall below the regulatory level. In addition, there is concern that regulatory compliance for residual chlorine in water storage tanks cannot always be guaranteed by the current design method, in which only storage capacity and/or hydraulic retention time are used as design factors, without considering the other physico-chemical processes involved in chlorine disappearance in a water storage tank. To circumvent the limitations of the current design method, mathematical models for aqueous chlorine decomposition, sorption of chlorine onto wall surfaces, and mass transfer into the air phase via evaporation were selected from the literature, and residual chlorine reduction behavior in water storage tanks was numerically simulated. The model simulation revealed that the major factors influencing residual chlorine disappearance in water storage tanks are the quality (organic pollutant concentration) of the tap water entering the storage tank, the hydraulic dispersion developed by the inflow of tap water into the tank, and the sorption capacity of the tank wall.
The findings from this work provide useful information for developing novel designs and technologies for minimizing residual chlorine disappearance in water storage tanks.
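
As a minimal illustration of the kind of simulation described above, aqueous decay and wall-related losses can both be folded into first-order rate constants. The constants below are assumed for illustration; the study's model also includes sorption capacity and evaporation terms:

```python
def residual_chlorine(c0_mg_l, k_bulk, k_wall, hours, dt=0.01):
    """Explicit Euler integration of dC/dt = -(k_bulk + k_wall) * C."""
    c = c0_mg_l
    for _ in range(int(hours / dt)):
        c -= (k_bulk + k_wall) * c * dt
    return c

# 0.6 mg/L entering the tank; bulk decay 0.05 /h, wall-related losses 0.02 /h
c_after_24h = residual_chlorine(0.6, 0.05, 0.02, hours=24)
```

Such a simulation makes it easy to check whether the residual stays above a regulatory floor (e.g. 0.1 mg/L) for a given retention time, which is precisely the design question that a storage-capacity-only design method cannot answer.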

Protein Requirements of the Korean Rockfish Sebastes schlegeli (조피볼락 Sebastes schlegeli의 단백질 요구량)

  • LEE Jong Yun;KANG Yong Jin;LEE Sang-Min;KIM In-Bae
    • Journal of Aquaculture
    • /
    • v.6 no.1
    • /
    • pp.13-27
    • /
    • 1993
  • In order to determine the protein requirements of the Korean rockfish Sebastes schlegeli, six isocaloric diets containing crude protein levels from 20% to 60% were fed to two groups of fish, small and large, with initial average body weights of 8 g and 220 g respectively. White fish meal was used as the sole protein source. Daily weight gain, daily protein retention, daily energy retention, feed efficiency, protein retention efficiency and energy retention efficiency were significantly affected by the dietary protein content (p < 0.05). The growth parameters (daily weight gain, daily protein retention and daily energy retention) increased up to the 44% protein level, with no additional response above this point. The protein requirements were determined from daily weight gain using two different mathematical models. Second-order polynomial regression analysis showed that maximum daily weight gain occurred at 56.7% and 50.6% protein levels for the small and large size groups, respectively. However, the protein requirements determined by the broken-line model appeared to be about 40% for both groups. Nutrient utilization also suggested that the protein requirements of both groups were close to 40%. When daily protein intake was considered, the daily protein requirements per 100 g of fish, estimated by the broken-line model, were 0.99 g and 0.35 g for the small and large size groups respectively. Based on these results, a 40% dietary crude protein level can be recommended for optimum growth and efficient nutrient utilization of Korean rockfish weighing between 8 g and 300 g.
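
The broken-line estimate reported above can be reproduced in miniature: fit y = a + b·min(x, bp) for each candidate breakpoint bp and keep the one with the smallest residual error. The dose-response data below are hypothetical numbers shaped to echo the ~40% plateau, not the study's measurements:

```python
def linfit(z, y):
    """Ordinary least squares for y = a + b * z; returns (a, b)."""
    n = len(z)
    mz, my = sum(z) / n, sum(y) / n
    b = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y)) \
        / sum((zi - mz) ** 2 for zi in z)
    return my - b * mz, b

def broken_line_breakpoint(x, y, candidates):
    """Breakpoint minimizing the SSE of the rise-then-plateau model y = a + b*min(x, bp)."""
    def sse(bp):
        z = [min(xi, bp) for xi in x]
        a, b = linfit(z, y)
        return sum((yi - (a + b * zi)) ** 2 for zi, yi in zip(z, y))
    return min(candidates, key=sse)

# Hypothetical protein levels (%) vs daily weight gain (g/day), plateauing near 40%
protein = [20, 28, 36, 44, 52, 60]
gain = [0.5, 0.9, 1.3, 1.5, 1.5, 1.5]
requirement = broken_line_breakpoint(protein, gain, candidates=range(25, 56))
```

The study's second-order polynomial model instead locates the maximum of a fitted parabola, which is why it yields the higher 50~57% estimates.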

A Study on the Optimal Limit State Design of Reinforced Concrete Flat Slab-Column Structures (한계상태설계법(限界狀態設計法)에 의한 철근(鐵筋)콘크리트 플래트 슬라브형(型) 구조체(構造體)의 최적화(最適化)에 관한 연구(研究))

  • Park, Moon Ho
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.4 no.1
    • /
    • pp.11-26
    • /
    • 1984
  • The aim of this study is to establish a synthesized optimal method that simultaneously analyzes and designs reinforced concrete flat slab-column structures involving multiple constraints and multiple design variables. The variables adopted in the mathematical models consist of design variables, including sectional sizes and steel areas of frames, and an analysis variable, the ratio of bending moment redistribution. The cost function is taken as the objective function in the formulation of the optimization problem. A number of constraint equations, involving the ultimate limit state and the serviceability limit state, are derived in accordance with BSI CP110 requirements on the basis of limit state design theory. Both the objective function and the constraint equations derived from the design variables and the analysis variable are generally highly nonlinear. Using SLP as a method for nonlinear optimization, an optimal algorithm is developed to analyze and design the structures considered in this study. The developed algorithm is applied directly to a few reinforced concrete flat slab-column structures to verify its validity and the possibility of optimization. The research shows that the algorithm is applicable to the optimization of reinforced concrete flat slab-column structures and converges to an optimal solution within 4 to 6 iterations regardless of the initial variables. The results show that a more economical design is possible compared with conventional designs. It is also found that treating the ratio of bending moment redistribution as a variable is reasonable; it has a great effect on the composition of optimal sections and the economy of the structures.

Development of a Predictive Model Describing the Growth of Listeria Monocytogenes in Fresh Cut Vegetable (샐러드용 신선 채소에서의 Listeria monocytogenes 성장예측모델 개발)

  • Cho, Joon-Il;Lee, Soon-Ho;Lim, Ji-Su;Kwak, Hyo-Sun;Hwang, In-Gyun
    • Journal of Food Hygiene and Safety
    • /
    • v.26 no.1
    • /
    • pp.25-30
    • /
    • 2011
  • In this study, predictive mathematical models were developed to describe the kinetics of Listeria monocytogenes growth in mixed fresh-cut vegetables, among the most popular ready-to-eat foods in the world, as a function of temperature (4, 10, 20 and 30°C). At the specified storage temperatures, the primary growth curves fit well (r² = 0.916~0.981) with the Gompertz and Baranyi equations used to determine the specific growth rate (SGR). A polynomial model for the natural logarithm transformation of the SGR as a function of temperature was obtained by nonlinear regression (Prism, version 4.0, GraphPad Software). As the storage temperature decreased from 30°C to 4°C, the SGR decreased accordingly. The polynomial model was identified as an appropriate secondary model for the SGR on the basis of statistical indices such as mean square error (MSE = 0.002718 by Gompertz, 0.055186 by Baranyi), bias factor (Bf = 1.050084 by Gompertz, 1.931472 by Baranyi) and accuracy factor (Af = 1.160767 by Gompertz, 2.137181 by Baranyi). The results indicate that L. monocytogenes growth was affected mainly by temperature, and the equation developed from the Gompertz model (SGR = -0.1606 + 0.0574·Temp + 0.0009·Temp²) was more effective than the equation developed from the Baranyi model (SGR = 0.3502 - 0.0496·Temp + 0.0022·Temp²) for predicting the specific growth rate of L. monocytogenes in mixed fresh-cut vegetables.
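
The two secondary models quoted above can be evaluated directly to compare their predicted temperature dependence (coefficients taken from the abstract; units follow the study's SGR definition):

```python
def sgr_gompertz(temp_c):
    """Secondary polynomial model for the specific growth rate (Gompertz-based)."""
    return -0.1606 + 0.0574 * temp_c + 0.0009 * temp_c ** 2

def sgr_baranyi(temp_c):
    """Secondary polynomial model for the SGR (Baranyi-based), for comparison."""
    return 0.3502 - 0.0496 * temp_c + 0.0022 * temp_c ** 2

# Predicted SGR at the four storage temperatures studied
rates = {t: sgr_gompertz(t) for t in (4, 10, 20, 30)}
```

The Gompertz polynomial increases monotonically over the 4~30°C range studied, consistent with the observation that the SGR fell as storage temperature dropped.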

Quantitative Analysis of Magnetization Transfer by Phase Sensitive Method in Knee Disorder (무릎 이상에 대한 자화전이 위상감각에 의한 정량분석법)

  • Yoon, Moon-Hyun;Sung, Mi-Sook;Yin, Chang-Sik;Lee, Heung-Kyu;Choe, Bo-Young
    • Investigative Magnetic Resonance Imaging
    • /
    • v.10 no.2
    • /
    • pp.98-107
    • /
    • 2006
  • Magnetization Transfer (MT) imaging generates contrast that depends on magnetization exchange between free water protons and restricted protons in macromolecules. In biological materials in the knee, MT or cross-relaxation is commonly modeled using two spin pools identified by their different T2 relaxation times. Two models for cross-relaxation emphasize the role of proton chemical exchange between the protons of water and exchangeable protons on macromolecules, as well as the dipole-dipole interaction between the water and macromolecule protons. The most essential tool in medical image manipulation is the ability to adjust contrast and intensity, so it is desirable to adjust them interactively in real time. Proton density (PD) and T2-weighted SE MR images allow the depiction of knee structures and can demonstrate defects and gross morphologic changes. The PD- and T2-weighted images also show internal cartilage pathology, owing to the more intermediate signal of the knee joint in these sequences. Suppression of fat extends the dynamic range of tissue contrast, removes chemical shift artifacts, and decreases motion-related ghost artifacts. Like fat saturation, phase sensitive methods are based on the difference in precession frequencies of water and fat; in this study, the phase sensitive methods exploit the phase difference accumulated over time as a result of Larmor frequency differences rather than using the frequency difference directly. Although the mechanism of MT has been described with clinical evidence leading to quantitative models of MT in tissues, here the mathematical formalism used to describe the MT effect is applied to evaluating knee disorders such as anterior cruciate ligament (ACL) tears and meniscal tears.
The effect of MT saturation is quantified by the magnetization transfer ratio (MTR), a quantitative measure of the relative decrease in signal intensity due to the MT pulse.
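
The MTR itself is a one-line computation; a sketch (the signal values are illustrative) makes the definition concrete:

```python
def magnetization_transfer_ratio(m0, m_sat):
    """MTR = (M0 - Msat) / M0: relative signal drop caused by the MT pulse."""
    if m0 <= 0:
        raise ValueError("M0 must be positive")
    return (m0 - m_sat) / m0

# e.g. signal 1000 without the MT pulse and 650 with saturation gives an MTR of 0.35
mtr = magnetization_transfer_ratio(1000, 650)
```

In practice the MTR is computed pixel by pixel from the image pair acquired with and without the off-resonance saturation pulse, and lesions such as ACL or meniscal tears are then assessed through changes in the resulting MTR map.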

A Study of Organic Matter Fraction Method of the Wastewater by using Respirometry and Measurements of VFAs on the Filtered Wastewater and the Non-Filtered Wastewater (여과한 하수와 하수원액의 VFAs 측정과 미생물 호흡률 측정법을 이용한 하수의 유기물 분액 방법에 관한 연구)

  • Kang, Seong-wook;Cho, Wook-sang
    • Journal of the Korea Organic Resources Recycling Association
    • /
    • v.17 no.1
    • /
    • pp.58-72
    • /
    • 2009
  • In this study, the organic matter and biomass were characterized using respirometry based on ASM No.2d (Activated Sludge Model No.2d). The activated sludge models are based on ASM No.2d, published by the IAWQ (International Association on Water Quality) task group on mathematical modelling for the design and operation of biological wastewater treatment processes. For this study, OUR (Oxygen Uptake Rate) measurements were made on filtered as well as non-filtered wastewater. GC-FID and LC analyses were also applied to estimate the VFA (Volatile Fatty Acid) COD (S_A) among the biodegradable soluble substrates of ASM No.2d. This study was therefore intended to clearly identify the biodegradable dissolved materials (S_S) and particulate materials (X_I). In addition, a method capable of determining the accurate time at which to measure non-biodegradable COD (S_I), from the change in transition graphs during the measurement of microbial OUR, is presented. Influent fractionation is a critical step in model calibration. From the results of respirometry on filtered wastewater, the fractions of fermentable, readily biodegradable organic matter (S_F), fermentation products (S_A), inert soluble matter (S_I), slowly biodegradable matter (X_S) and inert particulate matter (X_I) were 33.2%, 14.1%, 6.9%, 34.7% and 5.8%, respectively. The active heterotrophic biomass fraction (X_H) was about 5.3%.
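
The fractionation reported above can be sanity-checked in a few lines: the percentages should close to 100% of influent COD, and each ASM No.2d component follows by simple proportion. The 400 mg/L influent COD below is an assumed figure for illustration:

```python
FRACTIONS = {           # influent COD fractions from the respirometric results (%)
    "S_F": 33.2, "S_A": 14.1, "S_I": 6.9,
    "X_S": 34.7, "X_I": 5.8, "X_H": 5.3,
}

def split_cod(total_cod_mg_l, fractions=FRACTIONS):
    """Allocate total influent COD (mg/L) to ASM No.2d components by percentage."""
    return {name: total_cod_mg_l * pct / 100 for name, pct in fractions.items()}

components = split_cod(400)
closure = sum(FRACTIONS.values())   # should come to ~100%
```

Verifying this closure is part of what makes influent fractionation usable as a calibration step: any residual percentage signals an unmeasured COD pool.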

Estimation and Methods Estimating Daily Food Consumption of Agrammus agrammus (노래미, Agrammus agrammus의 일간섭식량 추정법과 추정)

  • KIM Chong-Kwan;KANG Yong-Joo
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.25 no.4
    • /
    • pp.241-250
    • /
    • 1992
  • This study covered the amount of food consumed per day, as well as methods for estimating the daily food consumption per fish of Agrammus agrammus in a natural population, in order to understand the flow of food organisms among trophic levels in the bio-community of the coastal waters of Shinsudo, Samchonpo. The estimating formulas were derived from mathematical models representing the diurnal fluctuation of the stomach fullness of the fish. The daily food consumption could be estimated from both feeding rates and gastric evacuation rates, but the method based on gastric evacuation rates was the more reasonable. The daily food consumption in wet weight per fish by gastric evacuation rates was 1.9856 g/day, 3.4725 g/day, 4.4418 g/day, 5.8168 g/day, and 7.2113 g/day for age groups 0 to 4, respectively. The daily rations as a percentage of body weight were 9.35%, 6.65%, 5.76%, 4.72% and 5.31% in the same order of ages. The daily food consumption was proportional to the body weight of the fish, but the daily food consumption per unit of body weight was inversely related to body weight. Annual food consumption in wet weight per fish by gastric evacuation rates was 529.98 g from age 0.25 to 1.0, 1,269.28 g from age 1.0 to 2.0, 1,622.76 g from age 2.0 to 3.0, 2,125.57 g from age 3.0 to 4.0, and 1,316.09 g from age 4.0 to 4.5. The amount of food consumed per fish during the 4.25 years from age 0.25 to 4.5 was 6,863.68 g in wet weight. The relationships between the daily food consumption (Dr) by gastric evacuation rates and the total length (L, cm) or the body weight (W, g) were as follows: Dr = 0.036 L^1.702 and Dr = 0.254 W^0.664.
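
The two allometric relationships at the end of the abstract are directly usable; a short sketch evaluates them (the 22 g example weight is an assumption for illustration, roughly matching the youngest age group):

```python
def daily_ration_from_length(length_cm):
    """Daily food consumption (g wet weight/day) from total length: Dr = 0.036 L^1.702."""
    return 0.036 * length_cm ** 1.702

def daily_ration_from_weight(weight_g):
    """Daily food consumption (g wet weight/day) from body weight: Dr = 0.254 W^0.664."""
    return 0.254 * weight_g ** 0.664

# A ~22 g fish should eat close to 2 g/day under the weight relationship
dr_small = daily_ration_from_weight(22)
```

Because the weight exponent 0.664 is below 1, the ration per unit body weight (Dr/W ∝ W^-0.336) falls as the fish grow, matching the declining percentage rations reported for the older age groups.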
