• Title/Summary/Keyword: Size Optimization


A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon; Choi, HeungSik; Kim, SunWoong
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.135-149, 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed to compensate for the weaknesses of traditional asset allocation methods and to take over the tasks those methods handle poorly. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets. Because it avoids investment risk structurally, it offers stability in the management of large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method, an optimized gradient boosting model designed to be highly efficient and flexible. It can process billions of examples in limited-memory environments and learns far faster than traditional boosting methods, so it is frequently used across many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk in the covariance estimation step. Because an optimized asset allocation model estimates investment weights from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors degrade the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model. In doing so, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data covering 17 years, from 2003 to 2019. The data set comprises the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. Using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, we produced a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return and obtained a large sample of results owing to the long test period. Compared with the traditional risk parity model, the experiment showed improvements in both cumulative return and reduction of estimation errors: the total cumulative return was 45.748%, about 5% higher than that of the risk parity model, and the estimation errors were reduced in 9 out of 10 industry sectors. Reducing the estimation errors increases the stability of the model and makes it easier to apply in practical investment. The experimental results thus showed that portfolio performance improves when the estimation errors of the optimized asset allocation model are reduced. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist in a changing financial market. However, this study not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a state-of-the-art algorithm. Various studies have examined parameter estimation methods for reducing estimation errors in portfolio optimization; here we suggest a new machine learning based method for reducing the estimation errors of an optimized asset allocation model. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
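To make the core idea above concrete, the following minimal Python sketch (not the authors' code) trains an XGBoost regressor to predict each sector's next-period volatility from its trailing realized volatility and converts the predictions into inverse-volatility weights, a simplified stand-in for the full risk parity optimization; the window lengths echo the 1,000 in-sample / 20 out-of-sample setup, while the feature choice and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: XGBoost-predicted volatility feeding a naive risk-parity weighting.
# All feature choices and hyperparameters are illustrative assumptions.
import numpy as np
import pandas as pd
import xgboost as xgb

def predicted_risk_parity_weights(returns: pd.DataFrame, lookback: int = 1000,
                                  horizon: int = 20) -> pd.Series:
    """returns: daily sector returns (one column per sector); weights ~ 1 / predicted volatility."""
    preds = {}
    for col in returns.columns:
        r = returns[col].dropna()
        # Feature: trailing 20-day realized volatility; target: the following 20-day volatility.
        realized = r.rolling(horizon).std()
        X = realized.shift(1).to_frame("prev_vol").dropna()
        y = realized.loc[X.index]
        train_X, train_y = X.iloc[-lookback:], y.iloc[-lookback:]
        model = xgb.XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
        model.fit(train_X, train_y)
        preds[col] = float(model.predict(X.iloc[[-1]])[0])   # predicted next-period volatility
    # Inverse-volatility weights (a naive risk-parity proxy; the full model would
    # instead feed predicted risk into the covariance estimate).
    inv_vol = pd.Series({k: 1.0 / max(v, 1e-8) for k, v in preds.items()})
    return inv_vol / inv_vol.sum()
```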

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review, v.16 no.3, pp.161-177, 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive because domain experts must be employed to assign the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural networks (ANN), and multiclass support vector machines (MSVM), have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine), is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs, and the results of Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose applying GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset, we use a genetic algorithm (GA). GA is known as an efficient and effective search method that simulates biological evolution: by applying genetic operations such as selection, crossover, and mutation, it gradually improves the search results, and the mutation operator in particular keeps the search from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has been widely applied to searching for optimal parameters or feature subsets of AI techniques, including MSVM, and for these reasons we also adopt GA as the optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is bond rating, the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental data set was collected from a large credit rating company in South Korea and contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using statistical methods including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as candidate independent variables. The dependent variable, credit rating, was labeled as four classes: 1 (A1), 2 (A2), 3 (A3), and 4 (B and C). For each class, 80 percent of the data was used for training and the remaining 20 percent for validation, and five-fold cross validation was applied to overcome the small sample size. To examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM. For MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial software package that provides GA; the other comparative models were implemented with statistical and AI packages such as SPSS for Windows, NeuroShell, and Microsoft Excel VBA (Visual Basic for Applications). The experimental results showed that the proposed model, GAMSVM, outperformed all the competing models while using fewer independent variables. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings, and the finally selected kernel parameter values were almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test: GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
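As a rough illustration of the GAMSVM idea (a sketch under assumptions, not the paper's implementation, which used LIBSVM and Evolver 5.5), the Python snippet below encodes a chromosome as a binary feature mask plus log-scaled RBF kernel parameters (C, gamma) and evolves it with selection, crossover, and mutation toward higher cross-validated accuracy of a one-against-one multiclass SVM; all operator settings and ranges are placeholders.

```python
# Hypothetical GA search over (feature mask, log10 C, log10 gamma) for a multiclass RBF-SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(chrom, X, y):
    mask, log_c, log_g = chrom
    if mask.sum() == 0:
        return 0.0                                         # empty feature set: penalize
    clf = SVC(kernel="rbf", C=10.0 ** log_c, gamma=10.0 ** log_g,
              decision_function_shape="ovo")               # one-against-one multiclass
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

def random_chrom(n_features):
    return (rng.random(n_features) < 0.5, rng.uniform(-1, 3), rng.uniform(-4, 0))

def crossover(a, b):
    cut = rng.integers(1, len(a[0]))
    mask = np.concatenate([a[0][:cut], b[0][cut:]])        # single-point crossover on the mask
    return (mask, (a[1] + b[1]) / 2, (a[2] + b[2]) / 2)

def mutate(c, p=0.1):
    mask = c[0].copy()
    flips = rng.random(len(mask)) < p                      # bit-flip mutation
    mask[flips] = ~mask[flips]
    return (mask, c[1] + rng.normal(0, 0.2), c[2] + rng.normal(0, 0.2))

def gamsvm_search(X, y, pop_size=30, generations=20):
    """X: (n_samples, n_features) array of financial ratios; y: rating class labels."""
    pop = [random_chrom(X.shape[1]) for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(c, X, y) for c in pop]
        order = np.argsort(scores)[::-1]
        parents = [pop[i] for i in order[: pop_size // 2]]  # truncation selection
        children = [mutate(crossover(parents[rng.integers(len(parents))],
                                     parents[rng.integers(len(parents))]))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, X, y))          # (feature mask, log10 C, log10 gamma)
```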

An Empirical Study on the Influencing Factors for Big Data Adoption Intention: Focusing on Strategic Value Recognition and the TOE (Technology-Organization-Environment) Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

  • Ka, Hoi-Kwang; Kim, Jin-soo
    • Asia Pacific Journal of Information Systems, v.24 no.4, pp.443-472, 2014
  • To survive in the global competitive environment, enterprises should be able to solve various problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems effectively and improving competitiveness through its diverse problem-solving and advanced predictive capabilities, and big data systems have accordingly been implemented by many enterprises around the world. Big data is now called the 'crude oil' of the 21st century and is expected to provide competitive superiority. The reason big data is in the limelight is that, while conventional IT technology has reached the limits of what it can offer, big data goes beyond those limits and can be used to create new value, such as business optimization and new business creation, through data analysis. However, because big data has often been introduced hastily, without considering the strategic value to be derived and achieved through it, many firms have difficulty deriving strategic value and utilizing their data. According to a survey of 1,800 IT professionals from 18 countries, only 28% of corporations were utilizing big data well, and many respondents said they were having difficulty deriving strategic value and operating big data. To introduce big data, the strategic value should be identified and environmental conditions, such as internal and external regulations and systems, should be considered, but these factors have not been well reflected. The cause of failure turned out to be that big data was introduced in response to IT trends and the surrounding environment, hastily and before the conditions for adoption were in place. For a successful introduction, the strategic value obtainable through big data should be clearly understood and a systematic analysis of the environment and of applicability is essential; however, because corporations consider only partial achievements and technological aspects, successful introductions are rare. Previous studies show that most big data research focuses on concepts, cases, and practical suggestions without empirical analysis. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategy for big data systems by conducting a comprehensive literature review, identifying the factors that influence successful big data system implementation, and analyzing empirical models. To this end, the factors that can affect the intention to adopt big data were derived by reviewing the success factors of information systems, strategic value perception factors, environmental factors for information system adoption, and the big data literature, and a structured questionnaire was developed. The questionnaire was then administered to the people in charge of big data inside corporations, and statistical analysis was performed. According to the statistical analysis, the strategic value perception factors and the intra-industry environmental factors positively affected the intention to adopt big data. The theoretical, practical, and policy implications derived from the results are as follows. The first theoretical implication is that this study proposes the factors that affect big data adoption intention by reviewing strategic value perception, environmental factors, and prior big data studies, and proposes variables and measurement items that were empirically analyzed and verified; the study is meaningful in that it measured the influence of each variable on adoption intention by verifying the relationships between the independent and dependent variables through a structural equation model. Second, this study defined the independent variables (strategic value perception, environment), the dependent variable (adoption intention), and the moderating variables (type of business and corporate size) for big data adoption intention, and laid a theoretical foundation for subsequent empirical research by developing measurement items with established reliability and validity. Third, by verifying the significance of the strategic value perception factors and environmental factors proposed in prior studies, this study can support future empirical research on the factors affecting big data adoption. The practical implications are as follows. First, this study establishes an empirical research base for the big data field by investigating the cause-and-effect relationships between strategic value perception, environmental factors, and adoption intention, and by proposing measurement items with established reliability and validity. Second, the study found that strategic value perception positively affects big data adoption intention, underscoring the importance of strategic value perception. Third, the study proposes that a corporation introducing big data should do so only after a precise analysis of its industry's internal environment. Fourth, by showing that the influencing factors differ by corporate size and type of business, the study proposes that these characteristics should be considered when introducing big data. The policy implications are as follows. First, more varied utilization of big data is needed. The strategic value of big data can be realized in various areas, including products, services, productivity, and decision making, and it can be utilized across all business fields; however, major domestic corporations currently limit their use to parts of the product and service fields. Accordingly, when introducing big data, it is necessary to review utilization in detail and design the big data system in a form that maximizes utilization. Second, the study points out the burdens corporations face at the adoption stage, such as system introduction costs, difficulty in using the system, and a lack of trust in supplier firms. Because global IT corporations dominate the big data market, domestic corporations' big data adoption cannot but depend on foreign vendors. Considering that Korea, although a leading IT country, lacks global IT corporations, big data can be seen as an opportunity to foster world-class companies, and the government should support such companies through active policy measures. Third, corporations lack the internal and external professionals needed for big data introduction and operation. Big data is a field in which extracting value from data matters more than building the system itself, so talent with knowledge and experience across IT, statistics, strategy, and management should be cultivated through systematic education. By identifying and verifying the main variables that affect big data adoption intention, this study provides a theoretical basis for empirical research in the big data field and is expected to offer useful guidelines for corporations and policy makers considering big data implementation.

Optimization of Tube Voltage according to Patient's Body Type during Limb examination in Digital X-ray Equipment (디지털 엑스선 장비의 사지 검사 시 환자 체형에 따른 관전압 최적화)

  • Kim, Sang-Hyun
    • Journal of the Korean Society of Radiology, v.11 no.5, pp.379-385, 2017
  • This study identifies the optimal tube voltages for limb examinations with a digital radiography (DR) system as the patient's body type changes. For the upper-limb examination, the dose area product (DAP) was fixed at 5.06 dGy·cm², and for the lower-limb examination the DAP was fixed at 5.04 dGy·cm². The tube voltage was then changed in four stages, and images were taken three times at each stage. The thickness of the limbs was increased by 10 mm to 30 mm to simulate changes in the patient's body type. For the quantitative evaluation, ImageJ was used to calculate the contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) of the four tube-voltage groups, and statistically significant differences were analyzed with the Kruskal-Wallis test at a 95% confidence level. For the qualitative analysis of the images, pre-determined items were scored on a 5-point Likert scale. In both upper-limb and lower-limb examinations, the CNR and SNR of the images decreased as the tube voltage increased, and the test on body-type changes showed that CNR and SNR also decreased as the thickness increased. In the qualitative evaluation of the upper limbs, the score increased with tube voltage, from 3.6 at 40 kV to a maximum of 4.6 at 55 kV, while the mean score for the lower limbs was 4.4 regardless of the tube voltage. As either the upper or lower limbs became thicker, the scores generally decreased; the upper-limb score dropped sharply at 40 kV, whereas the lower-limb score dropped sharply at 50 kV. For patients of standard thickness, optimized images can be obtained at 45 kV for the upper limbs and at 50 kV for the lower limbs. When the thickness of the patient's limbs increases, however, it is best to set the tube voltage to 50 kV for the upper limbs and 55 kV for the lower limbs.
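For reference, the quantitative metrics mentioned above can be computed from simple region-of-interest (ROI) statistics, as in the hypothetical Python sketch below (the study itself used ImageJ); the ROI coordinates and the synthetic image are placeholders, and the SNR/CNR definitions shown are one common convention that may differ in detail from the paper's.

```python
# Sketch of ROI-based SNR/CNR: SNR = mean(signal)/std(background),
# CNR = |mean(signal) - mean(background)| / std(background).
import numpy as np

def snr_cnr(image: np.ndarray, signal_roi: tuple, background_roi: tuple):
    """ROIs are (row_slice, col_slice) pairs indexing into the image array."""
    sig = image[signal_roi]
    bg = image[background_roi]
    snr = sig.mean() / bg.std()
    cnr = abs(sig.mean() - bg.mean()) / bg.std()
    return snr, cnr

# Example on a synthetic radiograph-like array (placeholder data, not study images):
img = np.random.default_rng(1).normal(1000.0, 50.0, size=(512, 512))
print(snr_cnr(img, (np.s_[100:150], np.s_[100:150]), (np.s_[400:450], np.s_[400:450])))
```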

Dose Verification Using Pelvic Phantom in High Dose Rate (HDR) Brachytherapy (자궁경부암용 팬톰을 이용한 HDR (High dose rate) 근접치료의 선량 평가)

  • 장지나; 허순녕; 김회남; 윤세철; 최보영; 이형구; 서태석
    • Progress in Medical Physics, v.14 no.1, pp.15-19, 2003
  • High dose rate (HDR) brachytherapy for treating cervical carcinoma has become popular because it eliminates many of the problems associated with conventional brachytherapy. To improve the clinical effectiveness of HDR brachytherapy, the dose calculation algorithm, optimization procedures, and image registration need to be verified by comparing the dose distributions from the planning computer with those measured in a phantom. In this study, a phantom was fabricated to verify absolute doses and relative dose distributions, and the doses measured in the phantom were compared with the treatment planning system. The phantom had to be designed so that the dose distributions could be evaluated quantitatively with dosimeters of high spatial resolution; therefore, small thermoluminescent dosimeter (TLD) chips less than 1/8 inch in size and film dosimetry with a spatial resolution of less than 1 mm were used to measure the radiation doses in the phantom. The phantom, called a pelvic phantom, was made of water and tissue-equivalent acrylic plates. To hold the HDR applicators firmly in the water phantom, the applicators were inserted into the grooves of an applicator holder. The dose distributions around the applicators, including Points A and B, were measured by placing a series of TLD chips (TLD-to-TLD distance: 5 mm) in three TLD holders and placing three verification films in the orthogonal planes. This study used a Nucletron Plato treatment planning system and a Microselectron Ir-192 source unit. The results showed good agreement between the treatment plan and the measurements: the absolute doses agreed within ±4.0% at Points A and B and at the bladder and rectum points, and the relative dose distributions from film dosimetry agreed well with those calculated by the planning computer. This pelvic phantom could be useful for verifying the dose calculation algorithm and the accuracy of the image localization algorithm in an HDR planning computer. Dose verification with film dosimetry and TLD as quality assurance (QA) tools is currently being undertaken at the Catholic University, Seoul, Korea.
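The comparison step described above, checking measured point doses against the planning system within a ±4.0% tolerance, can be sketched as follows; the point names follow the abstract, but every dose value in this snippet is an illustrative placeholder, not measured data.

```python
# Sketch of point-dose verification against a planning system (placeholder numbers).
planned = {"Point A": 700.0, "Point B": 230.0, "Bladder": 410.0, "Rectum": 380.0}   # cGy, illustrative
measured = {"Point A": 715.0, "Point B": 226.0, "Bladder": 398.0, "Rectum": 391.0}  # cGy, illustrative

for point, d_plan in planned.items():
    diff = 100.0 * (measured[point] - d_plan) / d_plan       # percent difference vs. plan
    status = "OK" if abs(diff) <= 4.0 else "CHECK"           # +/-4.0% tolerance, as in the abstract
    print(f"{point:8s}  planned {d_plan:6.1f} cGy  measured {measured[point]:6.1f} cGy  {diff:+5.1f}%  {status}")
```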


The Analysis on the Relationship between Firms' Exposures to SNS and Stock Prices in Korea (기업의 SNS 노출과 주식 수익률간의 관계 분석)

  • Kim, Taehwan; Jung, Woo-Jin; Lee, Sang-Yong Tom
    • Asia Pacific Journal of Information Systems, v.24 no.2, pp.233-253, 2014
  • Can the stock market really be predicted? Stock market prediction has attracted much attention from many fields, including business, economics, statistics, and mathematics. Early research on stock market prediction was based on random walk theory (RWT) and the efficient market hypothesis (EMH). According to the EMH, the stock market is largely driven by new information rather than by present and past prices; since new information is unpredictable, the stock market should follow a random walk. Despite these theories, Schumaker [2010] noted that people keep trying to predict the stock market using artificial intelligence, statistical estimates, and mathematical models. Mathematical approaches include percolation methods, log-periodic oscillations, and wavelet transforms for modeling future prices; artificial intelligence approaches that deal with optimization and machine learning include genetic algorithms, support vector machines (SVM), and neural networks; statistical approaches typically predict the future from past stock market data. Recently, financial engineers have started to predict stock price movement patterns using SNS data. SNS is a place where people's opinions and ideas flow freely and affect others' beliefs. Through word-of-mouth in SNS, people share product usage experiences, subjective feelings, and the accompanying sentiment or mood, and an increasing number of empirical analyses of sentiment and mood are based on textual collections of public user-generated data on the web. Opinion mining is the branch of data mining that extracts public opinions expressed in SNS, and there have been many studies on opinion mining from web sources such as product reviews, forum posts, and blogs. In relation to this literature, we try to understand the effect of firms' SNS exposure on stock prices in Korea. Similarly to Bollen et al. [2011], we empirically analyze the impact of SNS exposure on stock return rates. We use Social Metrics by Daum Soft, an SNS big data analysis company in Korea. Social Metrics provides trends and public opinions from Twitter and blogs using natural language processing and analysis tools: it collects sentences circulating on Twitter in real time, breaks them down into word units, and extracts keywords. In this study, we classify firms' SNS exposures into two groups, positive and negative. To test the correlation and causal relationship between SNS exposure and stock returns, we first collect 252 firms' stock prices and the KRX100 index from the Korea Stock Exchange (KRX) from May 25, 2012 to September 1, 2012, and gather the public attitudes (positive, negative) toward these firms from Social Metrics over the same period. We conduct regression analysis between stock prices and the number of SNS exposures, and after checking the correlation between the two variables, we perform a Granger causality test to determine the direction of causation. The result is that the number of total SNS exposures is positively related to stock market returns, and the number of positive mentions also has a positive relationship with stock market returns. In contrast, the number of negative mentions has a negative relationship with stock market returns, but this relationship is not statistically significant, which means that the impact of positive mentions is statistically larger than that of negative mentions. We also investigate whether the impacts are moderated by industry type and firm size, and find that the effect of SNS exposure is larger for IT firms than for non-IT firms, and larger for small firms than for large firms. The Granger causality test shows that changes in stock returns are caused by SNS exposure, while causation in the other direction is not significant, so the relationship between SNS exposure and stock prices has one-directional causality: the more a firm is exposed in SNS, the more its stock price is likely to increase, while stock price changes do not appear to cause more SNS mentions.
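A hedged sketch of the causality analysis described above is shown below, using the Granger causality test from statsmodels on a daily series of stock returns and SNS mention counts; the column names, lag order, and test statistic chosen here are illustrative assumptions rather than the paper's exact specification.

```python
# Granger causality in both directions between SNS mentions and stock returns.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def sns_granger(df: pd.DataFrame, maxlag: int = 3):
    """df: daily data with columns 'return' (stock return) and 'mentions' (SNS exposure count)."""
    data = df[["return", "mentions"]].dropna()
    # H0: 'mentions' does NOT Granger-cause 'return' (the second column is tested as the cause).
    forward = grangercausalitytests(data, maxlag=maxlag)
    # Reverse direction: does 'return' Granger-cause 'mentions'?
    backward = grangercausalitytests(data[["mentions", "return"]], maxlag=maxlag)
    # Collect the SSR-based F-test p-values per lag for each direction.
    # (grangercausalitytests also prints each lag's full test output to stdout by default.)
    p_fwd = {lag: res[0]["ssr_ftest"][1] for lag, res in forward.items()}
    p_bwd = {lag: res[0]["ssr_ftest"][1] for lag, res in backward.items()}
    return p_fwd, p_bwd
```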

Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems, v.20 no.4, pp.121-139, 2014
  • Predicting corporate failure has been an important topic in accounting and finance. Because the costs associated with bankruptcy are high, the accuracy of bankruptcy prediction is of great importance to financial institutions, and many researchers have dealt with the topic over the past three decades. The current research attempts to use ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers to obtain more accurate predictions than individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers: different training subsets are drawn randomly with replacement from the original training data set, and base classifiers are trained on the different bootstrap samples. Instance selection selects critical instances while removing irrelevant and harmful instances from the original data set. Instance selection and bagging are both well known in data mining, but few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using a genetic algorithm (GA) to improve the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution. It uses the idea of survival of the fittest by progressively accepting better solutions, searching with a population of solutions from which better solutions are created rather than making incremental changes to a single solution. The initial population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation, and the solutions, coded as strings, are evaluated by a fitness function. The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA is used to select the optimal instance subset, which is then used as the input data of the bagging model. The chromosome is encoded as a binary string representing the instance subset; the population size was set to 100, the maximum number of generations to 150, and the crossover and mutation rates to 0.7 and 0.1, respectively. The prediction accuracy of the model was used as the fitness function: an SVM model is trained on the training set using the selected instance subset, and its prediction accuracy on the test set is used as the fitness value in order to avoid overfitting. In the second phase, the optimal instance subset selected in the first phase is used as the input data of the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to the bankruptcy prediction problem using a real data set of Korean companies. The research data contain 1,832 externally non-audited firms, comprising 916 bankrupt and 916 non-bankrupt cases. Financial ratios categorized as stability, profitability, growth, activity, and cash flow were investigated through a literature review and basic statistical methods, and 8 financial ratios were selected as the final input variables. We separated the whole data set into training, test, and validation subsets. We compared the proposed model with several comparative models, including a single SVM model, a simple bagging model, and an instance selection based SVM model, and used McNemar tests to examine whether the proposed model significantly outperforms the others. The experimental results show that the proposed model outperforms the other models.
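The two-phase design described above can be sketched roughly as follows (an illustrative reconstruction, not the author's code): a genetic algorithm searches for a binary mask over training instances using held-out SVM accuracy as the fitness function, and a bagging ensemble of SVM base classifiers with majority voting is then trained on the selected instances; the GA settings here are deliberately small and differ from the reported population of 100, 150 generations, crossover rate 0.7, and mutation rate 0.1.

```python
# Hypothetical GA-based instance selection followed by an SVM bagging ensemble.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(42)

def ga_instance_selection(X_tr, y_tr, X_val, y_val, pop=20, gens=15, mut=0.1):
    n = len(X_tr)
    population = rng.random((pop, n)) < 0.5                  # binary instance masks

    def fit(mask):
        if mask.sum() < 10 or len(np.unique(y_tr[mask])) < 2:
            return 0.0                                       # degenerate subset: penalize
        clf = SVC(kernel="rbf").fit(X_tr[mask], y_tr[mask])
        return clf.score(X_val, y_val)                       # held-out accuracy as fitness

    for _ in range(gens):
        scores = np.array([fit(m) for m in population])
        parents = population[np.argsort(scores)[::-1][: pop // 2]]   # keep the fitter half
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])       # single-point crossover
            flips = rng.random(n) < mut
            child[flips] = ~child[flips]                     # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    return population[np.argmax([fit(m) for m in population])]

def improved_bagging(X_tr, y_tr, X_val, y_val, n_estimators=30):
    mask = ga_instance_selection(X_tr, y_tr, X_val, y_val)
    # Bagging of SVM base classifiers; without probability outputs, prediction is a majority vote.
    ensemble = BaggingClassifier(SVC(kernel="rbf"), n_estimators=n_estimators)
    return ensemble.fit(X_tr[mask], y_tr[mask])
```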

The Preparation of Magnetic Chitosan Nanoparticles with GABA and Drug Adsorption-Release (GABA를 담지한 자성 키토산 나노입자 제조와 약물의흡수 및 방출 연구)

  • Yoon, Hee-Soo; Kang, Ik-Joong
    • Korean Chemical Engineering Research, v.58 no.4, pp.541-549, 2020
  • A drug delivery system (DDS) is defined as a technology for designing existing or new drug formulations and optimizing drug treatment: it is designed to deliver drugs efficiently for the treatment of disease, minimize drug side effects, and maximize drug efficacy. In this study, the tripolyphosphate (TPP) concentration was optimized with respect to the size of the chitosan nanoparticles (CNPs) produced by crosslinking with chitosan. In addition, the characteristics of Fe3O4-CNPs were measured as a function of the amount of iron oxide (Fe3O4), and it was confirmed that the higher the Fe3O4 content, the better the particles performed as a magnetic drug carrier. Through the ninhydrin reaction, calibration curves for the γ-aminobutyric acid (GABA) concentration were obtained: Y = 0.00373 exp(179.729X) - 0.0114 (R² = 0.989) in the low-concentration range (0.004 to 0.02 wt%) and Y = 21.680X - 0.290 (R² = 0.999) in the high-concentration range (0.02 to 0.1 wt%). Absorption was constant at about 62.5% above 0.04 g of initial GABA. In addition, the amount of GABA released from GABA-Fe3O4-CNPs was measured over time, confirming that drug release was complete after about 24 hr. Finally, the GABA-Fe3O4-CNPs prepared under the optimal conditions were spherical particles of about 150 nm that retained the desired particle properties, indicating that GABA-Fe3O4-CNPs are suitable as drug carriers.
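Assuming Y in the calibration curves above is the measured ninhydrin-assay absorbance and X is the GABA concentration in wt%, the small Python sketch below inverts the two quoted equations to recover a concentration from a reading, switching branches at the stated 0.02 wt% boundary; the example absorbance value is purely illustrative.

```python
# Inverting the quoted calibration curves (assumption: Y = absorbance, X = GABA wt%).
import math

def gaba_concentration(absorbance: float) -> float:
    # High-range line: Y = 21.680*X - 0.290, valid for X in 0.02-0.1 wt%
    x_high = (absorbance + 0.290) / 21.680
    if x_high >= 0.02:
        return x_high
    # Low-range curve: Y = 0.00373*exp(179.729*X) - 0.0114, valid for X in 0.004-0.02 wt%
    return math.log((absorbance + 0.0114) / 0.00373) / 179.729

print(gaba_concentration(0.15))   # illustrative absorbance -> concentration in wt%
```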

Mass Screening of Lovastatin High-yielding Mutants through Statistical Optimization of Sporulation Medium and Application of Miniaturized Fungal Cell Cultures (Lovastatin 고생산성 변이주의 신속 선별을 위해 통계적 방법을 적용한 Sporulation 배지 개발 및 Miniature 배양 방법 개발)

  • Ahn, Hyun-Jung; Jeong, Yong-Seob; Kim, Pyeung-Hyeun; Chun, Gie-Taek
    • KSBB Journal, v.22 no.5, pp.297-304, 2007
  • For large-scale and rapid screening of high-yielding lovastatin mutants of the filamentous fungus Aspergillus terreus, one of the most important stages is to test as many mutant strains as possible. For this purpose, we set out to develop a miniaturized cultivation method using 7 mL culture tubes instead of the traditional 250 mL flask (working volume 50 mL). To obtain large amounts of conidiospores to be used as inoculum for the miniaturized cultures, four components (glucose, sucrose, yeast extract, and KH2PO4), which had been observed to have a positive effect on spore production in a Plackett-Burman design experiment, were investigated intensively. When the optimum concentrations of these components, determined with the response surface method (RSM) based on a central composite design (CCD), were used, a maximum spore count of 1.9×10^10 spores/plate was obtained, an approximately 190-fold increase compared with the commonly used PDA sporulation medium. Using the miniaturized cultures, an intensive strain development program was carried out to screen for lovastatin mutants that were both high-yielding and highly reproducible. It was observed that, for maximum production of lovastatin, the producers should be activated through the 'PaB' adaptation process during the early solid culture stage and then proliferated in condensed filamentous form in the miniaturized growth cultures, so that optimum amounts of highly active cells can be transferred to the production culture tubes as reproducible inoculum. Under these highly controlled fermentation conditions, a compact pelleted morphology of optimum size (less than 1 mm in diameter) was successfully induced in the miniaturized production cultures, which proved essential for fully exploiting the producers' physiology and led to significantly enhanced lovastatin production. As a result of continuous screening in the miniaturized cultures, the lovastatin production levels of 81% of the daughter cells derived from the high-yielding producers fell within 80~120% of the production level of the parallel flask cultures. These results demonstrate that the miniaturized cultivation method developed in this study is an efficient high-throughput system for large-scale and rapid screening of highly stable and productive strains.
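The statistical optimization step described above (Plackett-Burman screening followed by a central composite design analyzed with RSM) is commonly implemented as a second-order polynomial fit plus a constrained search for the stationary point; the generic Python sketch below illustrates that pattern with placeholder CCD data for the four medium components, and is not the authors' analysis.

```python
# Generic response-surface (second-order) fit over CCD data and search for the predicted optimum.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

# X_ccd: coded levels of glucose, sucrose, yeast extract, KH2PO4 for each CCD run (placeholder data)
# y_spores: measured spore counts for each run (placeholder data)
X_ccd = np.random.default_rng(0).uniform(-1.682, 1.682, size=(30, 4))
y_spores = np.random.default_rng(1).uniform(1e8, 2e10, size=30)

poly = PolynomialFeatures(degree=2, include_bias=True)      # full quadratic model with interactions
model = LinearRegression().fit(poly.fit_transform(X_ccd), y_spores)

def neg_response(x):
    return -model.predict(poly.transform(x.reshape(1, -1)))[0]

# Search within the coded design space for the maximum predicted response.
opt = minimize(neg_response, x0=np.zeros(4), bounds=[(-1.682, 1.682)] * 4)
print("Predicted optimum (coded levels):", opt.x)
print("Predicted maximum spore count:", -opt.fun)
```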

Optimization of Production Medium by Response Surface Method and Development of Fermentation Condition for Monascus pilosus Culture (Monascus pilosus 배양을 위한 반응표면분석법에 의한 생산배지 최적화 및 발효조건 확립)

  • Yoon, Sang-Jin; Shin, Woo-Shik; Chun, Gie-Taek; Jeong, Yong-Seob
    • KSBB Journal, v.22 no.5, pp.288-296, 2007
  • Submerged culture of Monascus pilosus (KCCM 60160) was optimized with respect to the culture medium and fermentation conditions. Monacolin-K (lovastatin), a cholesterol-lowering agent produced by Monascus pilosus, can help maintain a healthy lipid level by inhibiting the biosynthesis of cholesterol. A Plackett-Burman design and the response surface method were employed to optimize the culture medium for monacolin-K production. From these experimental designs, the optimized production medium composition and concentrations (g/L) were determined as soluble starch 96, malt extract 44.5, beef extract 30.23, yeast extract 15, (NH4)2SO4 4.03, Na2HPO4·12H2O 0.5, L-histidine 3.0, and KHSO4 1.0. Monacolin-K production was improved about three-fold compared with shake-flask fermentation in the basic production medium. The effect of agitation speed (300, 350, 400, and 450 rpm) on monacolin-K production was also examined in a batch fermenter; maximum monacolin-K production with the basic production medium was 68 mg/L at an agitation speed of 500 rpm, and spherical pellets (average diameter 1.0~1.5 mm) were dominant throughout the fermentation. Based on these results, a maximum monacolin-K production of 185 mg/L with the optimized production medium was obtained at a controlled pH of 6.5, an agitation rate of 400 rpm, an aeration rate of 1 vvm, and an inoculum size of 3%.