• Title/Summary/Keyword: mathematical


Correlation Between the Parameters of Radiosensitivity in Human Cancer Cell Lines (인체 암세포주에서 방사선감수성의 지표간의 상호관계)

  • Park, Woo-Yoon;Kim, Won-Dong;Min, Kyung-Soo
    • Radiation Oncology Journal
    • /
    • v.16 no.2
    • /
    • pp.99-106
    • /
    • 1998
  • Purpose : We conducted clonogenic assays using human cancer cell lines (MKN-45, PC-14, Y-79, HeLa) to investigate the correlation between parameters of radiosensitivity. Materials and Methods : The cell lines were irradiated with single doses of 1, 2, 3, 5, 7 and 10 Gy for the study of radiosensitivity, and sublethal damage repair capacity was assessed with two fractions of 5 Gy separated by time intervals of 0, 1, 2, 3, 4, 6 and 24 hours. The surviving fraction was assessed with a clonogenic assay using the Spearman-Kärber method, and mathematical analysis of the survival curves was done with the linear-quadratic (LQ) model, the multitarget-single hit (MS) model, and the mean inactivation dose ($\bar{D}$). Results : Surviving fractions at 2 Gy (SF2) varied among the cell lines, ranging from 0.174 to 0.85. The SF2 of Y-79 was the lowest and that of PC-14 was the highest (p<0.05, t-test). LQ model analysis showed that the values of $\alpha$ for Y-79, MKN-45, HeLa and PC-14 were 0.603, 0.356, 0.275 and 0.102, respectively, and those of $\beta$ were 0.005, 0.016, 0.025 and 0.027, respectively. Fitting to the MS model showed that the values of $D_0$ for Y-79, MKN-45, HeLa and PC-14 were 1.59, 1.84, 1.88 and 2.52, respectively, and those of n were 0.97, 1.46, 1.52 and 1.69, respectively. The $\bar{D}$ values calculated by the Gauss-Laguerre method were 1.62, 2.37, 2.01 and 3.95, respectively. Thus SF2 was significantly correlated with $\alpha$, $D_0$ and $\bar{D}$; the Pearson correlation coefficients were -0.953, 0.993 and 0.999, respectively (p<0.05). Sublethal damage repair was saturated around 4 hours, and recovery ratios (RR) at the plateau phase ranged from 2 to 3.79. However, RR was not correlated with SF2, $\alpha$, $\beta$, $D_0$ or $\bar{D}$. Conclusion : Intrinsic radiosensitivity differed widely among the tested human cell lines; Y-79 was the most sensitive and PC-14 the least sensitive. SF2 was well correlated with $\alpha$, $D_0$ and $\bar{D}$. RR was high for MKN-45 and HeLa but was unrelated to the radiosensitivity parameters. These basic parameters can be used as baseline data for various in vitro radiobiological experiments. (A minimal LQ-fit code sketch follows this entry.)

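The LQ-model fit referenced in the abstract above can be illustrated with a short script. This is a minimal sketch, not the paper's analysis code: the dose points follow the abstract, while the surviving fractions, and hence the fitted $\alpha$ and $\beta$, are hypothetical placeholders.

```python
# Minimal sketch: fitting the linear-quadratic (LQ) model ln(SF) = -(alpha*D + beta*D^2)
# to clonogenic-assay surviving fractions. Dose points follow the abstract (1-10 Gy);
# the surviving fractions below are illustrative placeholders, not the published data.
import numpy as np

doses = np.array([1.0, 2.0, 3.0, 5.0, 7.0, 10.0])       # Gy
sf = np.array([0.70, 0.45, 0.28, 0.10, 0.03, 0.005])    # hypothetical surviving fractions

# ln(SF) = -alpha*D - beta*D^2  ->  linear least squares in (D, D^2)
A = np.column_stack([-doses, -doses**2])
alpha, beta = np.linalg.lstsq(A, np.log(sf), rcond=None)[0]

sf2 = np.exp(-(alpha * 2 + beta * 2**2))                 # surviving fraction at 2 Gy
print(f"alpha = {alpha:.3f} /Gy, beta = {beta:.3f} /Gy^2, SF2 = {sf2:.3f}")
```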

Studies on the Derivation of the Instantaneous Unit Hydrograph for Small Watersheds of Main River Systems in Korea (한국주요수계의 소유역에 대한 순간단위도 유도에 관한 연구 (I))

  • 이순혁
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.19 no.1
    • /
    • pp.4296-4311
    • /
    • 1977
  • This study was conducted to derive an Instantaneous Unit Hydrograph for an accurate and reliable unitgraph that can be used for the estimation and control of floods, the development of agricultural water resources, and the rational design of hydraulic structures. Eight small watersheds were selected as study basins from the Han, Geum, Nakdong, Yeongsan and Inchon river systems, which may be considered the main river systems in Korea. The areas of the small watersheds are within the range of 85 to 470 km². The aim is to derive an accurate Instantaneous Unit Hydrograph under the condition of short-duration heavy rain and uniform rainfall intensity, with basic and reliable data of rainfall records, pluviographs and river stage records of the main river systems mentioned above. The relations between the measurable unitgraph and watershed characteristics such as watershed area, A, river length, L, and centroid distance of the watershed area, Lca, were investigated. In particular, this study laid emphasis on the derivation and application of the Instantaneous Unit Hydrograph (IUH) by applying Nash's conceptual model and by using an electronic computer. The IUH by Nash's conceptual model and the IUH by flood routing, which can be applied to ungaged small watersheds, were derived and compared with each other and with the observed unitgraph; the IUH for each small watershed can be solved by using an electronic computer. The results are summarized as follows (a short numerical sketch of the Nash IUH appears after this entry):
1. A distribution of uniform rainfall intensity appears in the analysis of the temporal rainfall pattern of the selected heavy rainfall events.
2. The mean value of the recession constant, $K_1$, is 0.931 over all watersheds observed.
3. The time to peak discharge, $T_p$, occurs at the position of 0.02 $T_b$ ($T_b$ being the base length of the hydrograph), which is lower than in larger watersheds.
4. The peak discharge, $Q_p$, in relation to the watershed area, A, and effective rainfall, R, is found to be $Q_p = \frac{0.895}{A^{0.145}}AR$, with a highly significant correlation coefficient of 0.927 between peak discharge and effective rainfall. A design chart for the peak discharge (refer to Fig. 15) with watershed area and effective rainfall was established by the author.
5. The mean slopes of the main streams are within the range of 1.46 to 13.6 meters per kilometer, indicating steeper slopes in the small watersheds than in larger ones. The lengths of the main streams are within the range of 9.4 to 41.75 kilometers, which can be regarded as short distances. It is remarkable that the time of flood concentration was more rapid in the small watersheds than in the other, larger watersheds.
6. The length of the main stream, L, in relation to the watershed area, A, is found to be $L = 2.044A^{0.48}$, with a highly significant correlation coefficient of 0.968.
7. The watershed lag, $L_g$, in hours, in relation to the watershed area, A, and the length of the main stream, L, was derived as $L_g = 3.228A^{0.904}L^{-1.293}$ with high significance. It was also found that the watershed lag could be expressed as $L_g = 0.247\left(\frac{LL_{ca}}{\sqrt{S}}\right)^{0.604}$ in connection with $LL_{ca}$, the product of the main stream length and the centroid length of the basin, which can be taken as a measure of the shape and size of the watershed together with the slope, apart from the watershed area, A. But the latter showed a lower correlation than the former in the significance test. Therefore, it can be concluded that the watershed lag is more closely related to such watershed characteristics as watershed area and main stream length in the small watersheds. The empirical formula for the peak discharge per unit area, $q_p$ (m³/sec/km²), was derived as $q_p = 10^{-0.389-0.0424L_g}$ with high significance, r = 0.91. This indicates that the peak discharge per unit area of the unitgraph is inversely proportional to the watershed lag time.
8. The base length of the unitgraph, $T_b$, in connection with the watershed lag, $L_g$, was expressed as $T_b = 1.14 + 0.564\left(\frac{L_g}{24}\right)$, which was defined with high significance.
9. For the derivation of the IUH by applying the linear conceptual model, the storage constant, K, in terms of the length of the main stream, L, and the slope, S, was adopted as $K = 0.1197\left(\frac{L}{\sqrt{S}}\right)$ with a highly significant correlation coefficient of 0.90. The gamma function argument, N, derived from such watershed characteristics as watershed area, A, river length, L, centroid distance of the basin, Lca, and slope, S, was found to be $N = 49.2\,A^{1.481}L^{-2.202}L_{ca}^{-1.297}S^{-0.112}$ with high significance, having an F value of 4.83 through analysis of variance.
10. According to the linear conceptual model, the formulas established for the time distribution, peak discharge and time to peak discharge of the Instantaneous Unit Hydrograph, when the unit effective rainfall of the unitgraph and the dimension of the watershed area are taken as 10 mm and km² respectively, are as follows:
Time distribution of the IUH: $u(0,t) = \frac{2.78A}{K\Gamma(N)}e^{-t/K}\left(\frac{t}{K}\right)^{N-1}$ (m³/sec)
Peak discharge of the IUH: $u(0,t)_{max} = \frac{2.78A}{K\Gamma(N)}e^{-(N-1)}(N-1)^{N-1}$ (m³/sec)
Time to peak discharge of the IUH: $t_p = (N-1)K$ (hrs)
11. Through mathematical analysis of the recession curve of the hydrograph, it was confirmed that the empirical formula for the gamma function argument, N, is connected with the recession constant, $K_1$, peak discharge, $Q_p$, and time to peak discharge, $t_p$, as $\frac{K'}{t_p} = \frac{1}{N-1} - \frac{\ln(t/t_p)}{\ln(Q/Q_p)}$, where $K' = \frac{1}{\ln K_1}$.
12. Linking the two empirical formulas for the storage constant, K, and the gamma function argument, N, the derivation of the unit hydrograph for ungaged small watersheds can be established from the following formulas for the time distribution and peak discharge of the IUH:
Time distribution of the IUH: $u(0,t) = 23.2\,A\,L^{-1}S^{1/2}F(N,K,t)$ (m³/sec), where $F(N,K,t) = \frac{e^{-t/K}(t/K)^{N-1}}{\Gamma(N)}$
Peak discharge of the IUH: $u(0,t)_{max} = 23.2\,A\,L^{-1}S^{1/2}F(N)$ (m³/sec), where $F(N) = \frac{e^{-(N-1)}(N-1)^{N-1}}{\Gamma(N)}$
13. The base length of the Time-Area Diagram for the IUH was given by $C = 0.778\left(\frac{LL_{ca}}{\sqrt{S}}\right)^{0.423}$, with a correlation coefficient of 0.85, indicating a relation to the length of the main stream, L, the centroid distance of the basin, Lca, and the slope, S.
14. The relative errors in the peak discharge of the IUH using the linear conceptual model and of the IUH by flood routing were 2.5 and 16.9 percent, respectively, relative to the peak of the observed unitgraph. Therefore, it was confirmed that in the small watersheds the IUH from the linear conceptual model approaches the observed unitgraph more closely than the IUH from flood routing.

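The Nash-model IUH formulas quoted in the abstract above can be evaluated numerically. The sketch below is illustrative only: the watershed area A, storage constant K and gamma-function argument N are hypothetical values, not figures from the paper, and the 2.78 factor assumes 10 mm of effective rainfall with A in km², as stated in the abstract.

```python
# Minimal sketch: evaluating a Nash-model Instantaneous Unit Hydrograph (IUH)
#   u(0, t) = 2.78*A / (K*Gamma(N)) * exp(-t/K) * (t/K)**(N-1)   [m^3/s per 10 mm effective rainfall]
# A, K and N below are illustrative placeholders, not values from the paper.
import math

A = 200.0   # watershed area, km^2 (hypothetical)
K = 3.5     # storage constant, hours (hypothetical)
N = 2.4     # gamma-function argument, i.e. number of linear reservoirs (hypothetical)

def iuh(t):
    """Nash IUH ordinate at time t (hours)."""
    return 2.78 * A / (K * math.gamma(N)) * math.exp(-t / K) * (t / K) ** (N - 1)

t_peak = (N - 1) * K                      # time to peak, t_p = (N-1)*K
u_peak = 2.78 * A / (K * math.gamma(N)) * math.exp(-(N - 1)) * (N - 1) ** (N - 1)
print(f"t_p = {t_peak:.2f} h, peak ordinate = {u_peak:.1f} m^3/s")
print("first 12 hourly ordinates:", [round(iuh(t), 1) for t in range(1, 13)])
```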

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rely on strict assumptions, including linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically, yet achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary-class classification. Second, approximation algorithms (e.g. decomposition methods and the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another class. Such data sets often produce a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach for coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations: observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted. Thus boosting attempts to produce new classifiers that are better able to predict the examples on which the current ensemble performs poorly, and in this way it can reinforce the training of the misclassified observations of the minority class (see the boosting sketch after this entry). This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can take into account the geometric mean-based accuracy and errors over the classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, results were obtained for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
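The AdaBoost reweighting idea described in the abstract can be illustrated with a small multiclass boosting loop. The following is a sketch of standard SAMME-style boosting on synthetic data, not the paper's MGM-Boost (which additionally incorporates geometric mean-based accuracy) and not its bond-rating data; decision stumps stand in for the base classifier.

```python
# Minimal sketch of AdaBoost-style reweighting (multiclass SAMME variant) on synthetic data.
# Misclassified observations receive larger weights in each round, so later classifiers
# focus on the examples the current ensemble predicts poorly.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
K = len(np.unique(y))                      # number of rating classes
w = np.full(len(y), 1.0 / len(y))          # start with uniform observation weights

estimators, alphas = [], []
for _ in range(20):                        # 20 boosting rounds
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w * (pred != y)) / np.sum(w)
    if err >= 1 - 1.0 / K:                 # no better than chance: stop
        break
    alpha = np.log((1 - err) / max(err, 1e-10)) + np.log(K - 1)   # classifier weight (SAMME)
    w *= np.exp(alpha * (pred != y))       # up-weight misclassified observations
    w /= w.sum()
    estimators.append(stump)
    alphas.append(alpha)

# Weighted vote of the ensemble
votes = np.zeros((len(y), K))
for a, est in zip(alphas, estimators):
    votes[np.arange(len(y)), est.predict(X)] += a
print("training accuracy:", np.mean(votes.argmax(axis=1) == y))
```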

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.39-55
    • /
    • 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces an optimal asset allocation portfolio for investors by using financial engineering algorithms without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest asset allocations to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. The model is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize the risk of the portfolio while maximizing its expected return, using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions that allocate weight to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point: implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio through the well-known Markowitz mean-variance optimization algorithm (a minimal sketch of this combination step appears after this entry). If the investor does not have any views on his asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. What if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolio output for Black-Litterman model users. This paper suggests an objective investor view model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the returns, standard deviations, Stochastic %K and price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent view model. The stock price movements are categorized into three phases: down, neutral and up. The expected stock returns form the P matrix and their probability results are used in the Q matrix. The implied equilibrium return vector is combined with the intelligent view matrices, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used, and the value-weighted and equal-weighted market portfolios are used as benchmark indexes. We collect the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is from 2008 to 2015 and the testing period is from 2016 to 2018. Our suggested intelligent view model, combined with the implied equilibrium returns, produced the optimal Black-Litterman portfolio. Over the out-of-sample period, this portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio and the market portfolios. The total return of the 3-year Black-Litterman portfolio is 6.4%, the highest value, and its maximum drawdown is -20.8%, which is also the lowest. Its Sharpe ratio, which measures return relative to risk, shows the highest value at 0.17. Overall, our suggested view model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
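The Black-Litterman combination step described in the abstract can be sketched in a few lines. The covariance matrix, market weights and the single view below are hypothetical placeholders, not the paper's KOSPI 200 sector data; P and Q follow the standard Black-Litterman convention (view pick matrix and view returns), and in the paper they would be populated by the SVM-based view model.

```python
# Minimal sketch of the Black-Litterman combination of implied equilibrium returns
# with investor views, followed by an unconstrained mean-variance allocation.
# All numerical inputs below are illustrative placeholders.
import numpy as np

delta, tau = 2.5, 0.05                               # risk aversion, uncertainty scaling
Sigma = np.array([[0.040, 0.012, 0.008],             # asset covariance matrix (hypothetical)
                  [0.012, 0.030, 0.010],
                  [0.008, 0.010, 0.025]])
w_mkt = np.array([0.5, 0.3, 0.2])                    # market-cap weights (hypothetical)

pi = delta * Sigma @ w_mkt                           # implied equilibrium returns (reverse optimization)

P = np.array([[1.0, -1.0, 0.0]])                     # one view: asset 1 outperforms asset 2...
Q = np.array([0.02])                                 # ...by 2% (hypothetical view)
Omega = np.array([[P[0] @ (tau * Sigma) @ P[0]]])    # view uncertainty (common heuristic)

inv = np.linalg.inv
post_mu = inv(inv(tau * Sigma) + P.T @ inv(Omega) @ P) @ (inv(tau * Sigma) @ pi + P.T @ inv(Omega) @ Q)

w_opt = inv(delta * Sigma) @ post_mu                 # unconstrained mean-variance weights
print("equilibrium returns:", np.round(pi, 4))
print("posterior returns:  ", np.round(post_mu, 4))
print("optimal weights:    ", np.round(w_opt / w_opt.sum(), 3))
```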