• Title/Summary/Keyword: 최적화 변수 (optimization variables)

Search Result 2,219

Optimization of cultivation conditions for pullulan production from Aureobasidium pullulans MR by response surface methodology (반응표면분석법을 이용한 Aureobasidium pullulans MR의 풀루란 생산을 위한 배양 조건 최적화)

  • Jo, Hye-Mi;Kim, Ye-Jin;Yoo, Sang-Ho;Kim, Chang-Mu;Kim, KyeWon;Park, Cheon-Seok
    • Korean Journal of Food Science and Technology, v.53 no.2, pp.195-203, 2021
  • Aureobasidium pullulans, a black yeast, produces pullulan, a linear α-glucan composed of maltotriose repeating units linked by α(1→6)-glycosidic linkages. Pullulan is widely used in the food, cosmetic, and biotechnology industries. In this study, we isolated eight strains of A. pullulans from Forsythia koreana, Magnolia kobus DC., Spiraea prunifolia var. simpliciflora, Cornus officinalis, Cerasus, and Hippophae rhamnoides. Among them, A. pullulans MR was selected as the best pullulan producer. The effects of carbon source, nitrogen source, and pH on pullulan production were examined. The optimal cultivation conditions for pullulan production by A. pullulans MR were determined by response surface methodology as 15% sucrose, 0.4% soy peptone, and an initial pH of 7 at 26℃. Under these conditions, the predicted pullulan production was 47.6 g/L, which was very close to the experimental value (48.9 g/L).
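
The response-surface workflow described in this abstract can be sketched as follows: fit a second-order model to designed experiments and locate the predicted optimum. This is only a minimal illustration; the design points, yield values, and factor bounds below are assumptions for demonstration, not the study's data.

```python
# Minimal response-surface-methodology sketch with assumed data (not the study's).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

# Hypothetical design: columns are sucrose (%), soy peptone (%), initial pH
X = np.array([
    [10, 0.2, 6], [20, 0.2, 6], [10, 0.6, 6], [20, 0.6, 6],
    [10, 0.2, 8], [20, 0.2, 8], [10, 0.6, 8], [20, 0.6, 8],
    [15, 0.4, 7], [15, 0.4, 7], [15, 0.4, 7],
])
y = np.array([30, 38, 33, 40, 31, 39, 34, 41, 47, 48, 46])  # pullulan (g/L), assumed

# Second-order (quadratic + interaction) response surface
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Locate the predicted optimum inside the experimental region
def neg_yield(x):
    return -model.predict(poly.transform(x.reshape(1, -1)))[0]

res = minimize(neg_yield, x0=[15, 0.4, 7],
               bounds=[(10, 20), (0.2, 0.6), (6, 8)])
print("predicted optimum (sucrose %, peptone %, pH):", res.x)
print("predicted pullulan yield (g/L):", -res.fun)
```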

Development of A Material Flow Model for Predicting Nano-TiO2 Particles Removal Efficiency in a WWTP (하수처리장 내 나노 TiO2 입자 제거효율 예측을 위한 물질흐름모델 개발)

  • Ban, Min Jeong;Lee, Dong Hoon;Shin, Sangwook;Lee, Byung-Tae;Hwang, Yu Sik;Kim, Keugtae;Kang, Joo-Hyon
    • Journal of Wetlands Research, v.24 no.4, pp.345-353, 2022
  • A wastewater treatment plant (WWTP) is a major gateway for engineered nanoparticles (ENPs) entering water bodies. However, existing studies have reported that many WWTPs exceed the No Observed Effect Concentration (NOEC) for ENPs in the effluent, and thus they need to be designed or operated to control ENPs more effectively. Understanding and predicting ENP behavior in the unit processes and the whole process of a WWTP is the key first step in developing strategies for controlling ENPs using a WWTP. This study aims to provide a modeling tool for predicting the behavior and removal efficiency of ENPs in a WWTP in relation to process characteristics and major operating conditions. In the developed model, four unit processes for water treatment (primary clarifier, bioreactor, secondary clarifier, and tertiary treatment unit) were considered. Additionally, the model simulates the sludge treatment system as a single process that integrates multiple unit processes, including thickeners, digesters, and dewatering units. The simulated ENP was nano-sized TiO2 (nano-TiO2), assuming that its behavior in a WWTP is dominated by attachment to suspended solids (SS), while dissolution and transformation are insignificant. The attachment of nano-TiO2 to SS was incorporated into the model equations using the apparent solid-liquid partition coefficient (Kd) under the assumption of equilibrium between the solid and liquid phases, and a steady-state condition for nano-TiO2 was assumed. Furthermore, an MS Excel-based user interface was developed to provide a user-friendly environment for the nano-TiO2 removal efficiency calculations. Using the developed model, a preliminary simulation was conducted to examine how the solid retention time (SRT), a major operating variable, affects the removal efficiency of nano-TiO2 particles in a WWTP.
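
The Kd-based partitioning logic at the core of such a model can be illustrated with a minimal steady-state mass-balance sketch. The Kd value, SS concentrations, and per-unit solids removal efficiencies below are assumptions for illustration only; they do not reproduce the published model or its Excel interface.

```python
# Minimal sketch of Kd-based nano-TiO2 removal across sequential WWTP units (assumed values).
def sorbed_fraction(kd_L_per_kg: float, ss_kg_per_L: float) -> float:
    """Fraction of nano-TiO2 attached to suspended solids at solid-liquid equilibrium."""
    return kd_L_per_kg * ss_kg_per_L / (1.0 + kd_L_per_kg * ss_kg_per_L)

def unit_removal(kd, ss, ss_removal_eff):
    """Nano-TiO2 removed with the solids captured by one unit process."""
    return sorbed_fraction(kd, ss) * ss_removal_eff

# Assumed operating values: Kd (L/kg), SS (kg/L), SS removal efficiency per unit
units = [
    ("primary clarifier",      1e4, 250e-6,  0.55),
    ("bioreactor + secondary", 1e4, 3000e-6, 0.90),  # mixed-liquor solids, then clarified
    ("tertiary treatment",     1e4, 15e-6,   0.60),
]

remaining = 1.0  # normalized influent nano-TiO2 load
for name, kd, ss, eff in units:
    r = unit_removal(kd, ss, eff)
    remaining *= (1.0 - r)
    print(f"{name}: removal {r:.2f}, cumulative remaining {remaining:.2f}")

print(f"overall nano-TiO2 removal efficiency: {1.0 - remaining:.2%}")
```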

Application of Environmental Friendly Bio-adsorbent based on a Plant Root for Copper Recovery Compared to the Synthetic Resin (구리 회수를 위한 식물뿌리 기반 친환경 바이오 흡착제의 적용 - 합성수지와의 비교)

  • Bawkar, Shilpa K.;Jha, Manis K.;Choubey, Pankaj K.;Parween, Rukshana;Panda, Rekha;Singh, Pramod K.;Lee, Jae-chun
    • Resources Recycling, v.31 no.4, pp.56-65, 2022
  • Copper is one of the non-ferrous metals used in the electrical/electronic manufacturing industries due to its superior properties, particularly its high conductivity and low resistivity. The effluent generated from the surface finishing processes of these industries contains a high copper content, which is discharged into water bodies directly or indirectly. This causes severe environmental pollution and also results in the loss of an important valuable metal. To overcome this issue, continuous R&D activities are going on across the globe in the adsorption area with the purpose of finding an efficient, low-cost, and eco-friendly adsorbent. In view of the above, the present investigation compared the performance of a plant root (Datura root powder) as a bio-adsorbent with that of a synthetic resin (Tulsion T-42) for copper adsorption from such effluent. Batch experiments were carried out to optimize parameters such as adsorbent dose, contact time, pH, and feed concentration. Results of the batch experiments indicate that 0.2 g of Datura root powder and 0.1 g of Tulsion T-42 achieved 95% copper adsorption from an initial feed solution of 100 ppm Cu at pH 4 within contact times of 15 and 30 min, respectively. Adsorption data for both adsorbents fitted the Freundlich isotherm well. The experimental results were also validated with kinetic models, which showed that copper adsorption followed a pseudo-second-order rate expression for both adsorbents. Overall, the results demonstrate that the tested bio-adsorbent has potential applicability for metal recovery from the waste solutions/effluents of metal finishing units. In view of the requirements of commercial viability and minimal environmental damage, Datura root powder, being an effective material for metal uptake, may prove to be a feasible adsorbent for copper recovery after the necessary scale-up studies.
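
The isotherm and kinetic fits reported above are typically performed by nonlinear least squares. The sketch below shows the standard Freundlich and pseudo-second-order forms; the equilibrium and time-series data points are invented for illustration and the fitted constants carry no physical meaning.

```python
# Minimal sketch: fitting a Freundlich isotherm and pseudo-second-order kinetics (assumed data).
import numpy as np
from scipy.optimize import curve_fit

# Freundlich isotherm: qe = Kf * Ce**(1/n)
def freundlich(Ce, Kf, n):
    return Kf * Ce ** (1.0 / n)

Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])      # equilibrium Cu concentration (mg/L), assumed
qe = np.array([4.1, 6.6, 9.0, 12.4, 17.0])       # adsorbed amount (mg/g), assumed
(Kf, n), _ = curve_fit(freundlich, Ce, qe, p0=[1.0, 2.0])
print(f"Freundlich: Kf={Kf:.2f}, n={n:.2f}")

# Pseudo-second-order kinetics: qt = (k2 * qe_max**2 * t) / (1 + k2 * qe_max * t)
def pseudo_second_order(t, k2, qe_max):
    return (k2 * qe_max**2 * t) / (1.0 + k2 * qe_max * t)

t  = np.array([2, 5, 10, 15, 20, 30], dtype=float)   # contact time (min), assumed
qt = np.array([5.0, 8.2, 10.5, 11.4, 11.8, 12.1])    # uptake over time (mg/g), assumed
(k2, qe_max), _ = curve_fit(pseudo_second_order, t, qt, p0=[0.01, 12.0])
print(f"Pseudo-second-order: k2={k2:.4f} g/(mg*min), qe={qe_max:.2f} mg/g")
```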

Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY, v.27 no.3, pp.127-143, 2022
  • Recently, many attempts have been made to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of implementing numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access for High Performance Computing (HPC). These features facilitate ocean modeling experiments on commercial cloud computing systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analyzing the performance and features of commercial cloud services for numerical modeling is essential for selecting appropriate systems, as it can help minimize execution time and the amount of resources used. Cache memory has a large effect on ocean numerical models, which read and write data in multidimensional array structures, and network speed is important because of the large amount of data moved in inter-process communication. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking package, and the STREAM memory benchmark was evaluated and compared on commercial cloud systems to provide information for migrating other ocean models to cloud computing. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes on virtualization-based cloud systems. We found that cache hierarchy and capacity are crucial to the performance of ROMS, which uses a large amount of memory, and that memory latency is also important. Increasing the number of cores to reduce the running time of numerical modeling is more effective with large grid sizes than with small grid sizes. Our analysis results will be a helpful reference for constructing the best computing system in the cloud to minimize the time and cost of numerical ocean modeling.
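
The memory-bandwidth sensitivity highlighted above is what the STREAM benchmark measures. The snippet below is only a rough, NumPy-based probe in the spirit of the STREAM triad, not the actual STREAM benchmark (which is a C/Fortran code); the array size and repetition count are arbitrary choices.

```python
# Rough STREAM-triad-style memory bandwidth probe (illustrative only).
import time
import numpy as np

N = 10_000_000                       # array length; three ~80 MB double arrays
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)
scalar = 3.0

best = float("inf")
for _ in range(5):                   # take the best of several repetitions
    t0 = time.perf_counter()
    np.multiply(c, scalar, out=a)    # a = scalar * c  (read c, write a)
    a += b                           # a = b + scalar * c  (read a, read b, write a)
    best = min(best, time.perf_counter() - t0)

# Unfused NumPy passes move roughly 5 arrays' worth of data per triad update
bytes_moved = 5 * N * 8
print(f"approx. triad bandwidth: {bytes_moved / best / 1e9:.1f} GB/s")
```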

Prediction of Acer pictum subsp. mono Distribution using Bioclimatic Predictor Based on SSP Scenario Detailed Data (SSP 시나리오 상세화 자료 기반 생태기후지수를 활용한 고로쇠나무 분포 예측)

  • Kim, Whee-Moon;Kim, Chaeyoung;Cho, Jaepil;Hur, Jina;Song, Wonkyong
    • Ecology and Resilient Infrastructure, v.9 no.3, pp.163-173, 2022
  • Climate change is a key factor that greatly influences changes in the biological seasons and geographical distribution of species. In the ecological field, BioClimatic predictors (BioClim), which are closely related to the physiological characteristics of organisms, are used for vulnerability assessment. However, BioClim values for the Shared Socio-economic Pathways (SSP) scenarios are provided only as climate averages over future periods for each GCM. In this study, BioClim data suitable for domestic conditions were produced using the 1 km resolution SSP scenario detailed data produced by the Rural Development Administration, and based on these data, a species distribution model was applied to Acer pictum subsp. mono, which grows mainly in southern regions, Gyeongsangbuk-do, Gangwon-do, and humid areas. Suitable habitat distributions were predicted in 30-year intervals for the baseline period (1981-2010) and future periods (2011-2100). Occurrence data for Acer pictum subsp. mono were collected from a total of 819 points through the national natural environment survey data. To improve the performance of the MaxEnt model, the model parameters (LQH-1.5) were optimized, and 7 detailed BioClim indices and 5 topographical indices were applied to the MaxEnt model. Drainage, Annual Precipitation (Bio12), and Slope contributed significantly to the distribution of Acer pictum subsp. mono in Korea. Reflecting the species' preference for moist and fertile soil, the influence of climatic factors was not large. Accordingly, in the baseline period, the highly suitable habitat of Acer pictum subsp. mono covers 3.41% of the area of Korea, whereas under SSP1-2.6 it accounts for 0.01% and 0.02% in the near future (2011-2040) and far future (2071-2100), respectively, a gradual decrease. Under SSP5-8.5, it was 0.01% and 0.72%, respectively, decreasing in the near future compared to the baseline period but gradually increasing toward the far future. This study confirms the future distribution of a tree species that adapts relatively easily to climate change, and it is significant as a basic study that can be used for future forest restoration with climate change-adapted species.
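
Deriving BioClim indices from downscaled monthly climate grids, as done above, can be sketched with two of the standard indices (Bio1 and Bio12, the annual precipitation variable highlighted in the abstract). The grids below are synthetic placeholders, not the 1 km RDA data, and the full 19-variable set and the MaxEnt fitting step are not reproduced here.

```python
# Minimal sketch: deriving Bio1 and Bio12 from monthly climate grids (synthetic data).
import numpy as np

# Assumed inputs: 12 monthly mean-temperature and precipitation grids (months, rows, cols)
months, rows, cols = 12, 100, 100
tmean = np.random.uniform(-5, 25, size=(months, rows, cols))   # deg C, synthetic
prcp  = np.random.uniform(0, 300, size=(months, rows, cols))   # mm/month, synthetic

bio1  = tmean.mean(axis=0)    # Bio1: annual mean temperature
bio12 = prcp.sum(axis=0)      # Bio12: annual precipitation

# Stack with topographic layers (e.g., slope, drainage) to form predictor rasters
# for a species distribution model such as MaxEnt.
slope = np.random.uniform(0, 45, size=(rows, cols))            # synthetic
predictors = np.stack([bio1, bio12, slope], axis=-1)
print(predictors.shape)   # (100, 100, 3)
```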

Assessing the Sensitivity of Runoff Projections Under Precipitation and Temperature Variability Using IHACRES and GR4J Lumped Runoff-Rainfall Models (집중형 모형 IHACRES와 GR4J를 이용한 강수 및 기온 변동성에 대한 유출 해석 민감도 평가)

  • Woo, Dong Kook;Jo, Jihyeon;Kang, Boosik;Lee, Songhee;Lee, Garim;Noh, Seong Jin
    • KSCE Journal of Civil and Environmental Engineering Research, v.43 no.1, pp.43-54, 2023
  • Due to climate change, drought and flood occurrences have been increasing. Accurate projections of watershed discharge are imperative to effectively manage natural disasters caused by climate change. However, climate change and hydrological model uncertainty can lead to imprecise analysis. To address these issues, we used two lumped models, IHACRES and GR4J, to compare and analyze changes in discharge under climate stress scenarios. The Hapcheon and Seomjingang dam basins were the study sites, and the Nash-Sutcliffe efficiency (NSE) and the Kling-Gupta efficiency (KGE) were used for parameter optimization. Twenty years of discharge, precipitation, and temperature data (1995-2014) were used and divided into training and testing data sets with a 70/30 split. The accuracies of the modeled results were relatively high during the training and testing periods (NSE>0.74, KGE>0.75), indicating that both models could reproduce the observed discharges. To explore the impacts of climate change on modeled discharges, we developed climate stress scenarios by changing precipitation from -50% to +50% in 1% increments and temperature from 0℃ to 8℃ in 0.1℃ increments relative to the two decades of weather data, which resulted in 8,181 climate stress scenarios. We analyzed the yearly maximum, abundant, and ordinary discharges projected by the two lumped models. We found that the trends of the maximum and abundant discharges modeled by IHACRES and GR4J became more pronounced as the changes in precipitation and temperature increased, while the opposite was true for ordinary discharges. Our study demonstrates that quantitative evaluation of model uncertainty is important for reducing the impacts of climate change on water resources.
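
The two calibration metrics and the 8,181-member stress-scenario grid described above can be written out directly; the sketch below shows the standard NSE and KGE (2009) formulas and the precipitation/temperature perturbation grid, with the lumped models themselves left out.

```python
# Minimal sketch: NSE/KGE objective functions and the 8,181-member climate stress grid.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency (2009 formulation)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Stress grid: 101 precipitation factors x 81 temperature shifts = 8,181 scenarios
p_changes = np.arange(-50, 51, 1) / 100.0             # fractional change in precipitation
t_shifts  = np.round(np.arange(0.0, 8.01, 0.1), 1)    # additive temperature change (deg C)
scenarios = [(dp, dt) for dp in p_changes for dt in t_shifts]
print(len(scenarios))   # 8181

# Each scenario perturbs the historical forcing before it is fed to IHACRES or GR4J, e.g.:
# prcp_scn = prcp_obs * (1 + dp); temp_scn = temp_obs + dt
```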

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.107-122, 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH estimation process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel function shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility is higher, buy volatility today; if it is lower, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but the simulation results are meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return. MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return. MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, compared with +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. The IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
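
One common way to set up an SVR-based GARCH forecast, in the spirit of the approach above, is to replace the linear GARCH(1,1) recursion sigma_t^2 = w + a*r_{t-1}^2 + b*sigma_{t-1}^2 with an SVR regression of today's squared return on lagged volatility features. The sketch below uses synthetic returns and an assumed feature construction; the paper's exact specification with KOSPI 200 data may differ.

```python
# Minimal sketch of an SVR-based GARCH(1,1)-style volatility forecast (synthetic data).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
r = rng.standard_normal(1500) * 0.01                     # synthetic daily returns

r2 = r ** 2
proxy = np.convolve(r2, np.ones(5) / 5, mode="valid")    # rolling variance proxy
X = np.column_stack([r2[4:-1], proxy[:-1]])              # lagged squared return, lagged proxy
y = r2[5:]                                               # target: next-day squared return

split = 1187                                             # train/test split as in the abstract
svr = SVR(kernel="rbf", C=1.0, epsilon=1e-5).fit(X[:split], y[:split])
var_forecast = np.clip(svr.predict(X[split:]), 1e-12, None)
print("forecast daily volatility (%):", np.sqrt(var_forecast[:5]) * 100)
```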

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.39-55, 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces an optimal asset allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest an asset allocation to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. It is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize portfolio risk while maximizing the expected portfolio return using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point: implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio through the well-known Markowitz mean-variance optimization algorithm. If the investor does not have any views on his asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. But what if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolio output for Black-Litterman model users. This paper suggests an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are returns, standard deviations, Stochastics %K, and the price parity degree for each asset class. The SVM output returns expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix and their probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used, and the value-weighted and equal-weighted market portfolios serve as benchmark indexes. We collected 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is from 2008 to 2015 and the testing period is from 2016 to 2018. Our suggested intelligent views model combined with implied equilibrium returns produced the optimal Black-Litterman portfolio. In the out-of-sample period, this portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolio. The total return of the Black-Litterman portfolio over the 3-year period was 6.4%, the highest value; its maximum drawdown was -20.8%, the lowest value; and its Sharpe ratio, which measures the return-to-risk ratio, was also the highest at 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
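
The Black-Litterman combination step referenced above follows a standard formula: implied equilibrium returns from reverse optimization are blended with views (P, Q) under a view-uncertainty matrix Omega. The sketch below shows that generic formulation only; the paper's SVM-derived P/Q construction is not reproduced, and all numbers are assumed for illustration.

```python
# Minimal sketch of the Black-Litterman posterior returns and resulting weights (assumed data).
import numpy as np

# Assumed inputs for 3 asset classes
Sigma = np.array([[0.040, 0.010, 0.006],
                  [0.010, 0.030, 0.008],
                  [0.006, 0.008, 0.020]])   # covariance of excess returns
w_mkt = np.array([0.5, 0.3, 0.2])           # market-cap weights
delta, tau = 2.5, 0.05                      # risk aversion, uncertainty scaling

pi = delta * Sigma @ w_mkt                  # implied equilibrium returns (reverse optimization)

# One absolute view: asset 1 will return 4% (view uncertainty proportional to tau*Sigma)
P = np.array([[1.0, 0.0, 0.0]])
Q = np.array([0.04])
Omega = np.array([[P[0] @ (tau * Sigma) @ P[0]]])

A = np.linalg.inv(tau * Sigma)
mu_bl = np.linalg.solve(A + P.T @ np.linalg.inv(Omega) @ P,
                        A @ pi + P.T @ np.linalg.inv(Omega) @ Q)

# Unconstrained mean-variance weights from the posterior returns
w_bl = np.linalg.solve(delta * Sigma, mu_bl)
print("posterior returns:", mu_bl.round(4))
print("Black-Litterman weights:", w_bl.round(3))
```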

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.26 no.4, pp.173-198, 2020
  • For a long time, many studies have been conducted in academia on predicting the success of campaigns targeted at customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded due to the rapid growth of online channels, companies are carrying out campaigns of various types at a level that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as campaign fatigue from duplicate exposure grows. From a corporate standpoint, the effectiveness of campaigns is also decreasing while the cost of investing in them increases, leading to low actual campaign success rates. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system has the ultimate purpose of increasing the success rate of various campaigns by collecting and analyzing various customer-related data and using them for campaigns. In particular, recent attempts have been made to predict campaign responses using machine learning. Selecting appropriate features is very important because campaign data contain many features. If all of the input data are used when classifying a large amount of data, learning takes a long time as the number of classes expands, so a minimal input data set must be extracted from the entire data. In addition, when a trained model is generated using too many features, prediction accuracy may be degraded due to overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features they are limited by poor classification performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method in the process of searching for the feature subsets that underpin machine learning model performance, using the statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are derived first and features that have a negative effect are removed, after which the sequential method is applied to increase search efficiency and to enable generalized prediction with the improved algorithm. The proposed model showed better search and prediction performance than the traditional greedy algorithm: compared with the original data set, the greedy algorithm, the genetic algorithm (GA), and recursive feature elimination (RFE), campaign success prediction was higher. In addition, when performing campaign success prediction, the improved feature selection algorithm was found to be helpful in analyzing and interpreting the prediction results by providing the importance of the derived features. These include features such as age, customer rating, and sales, which were already known to be statistically important. Unexpectedly, features that campaign planners had rarely used to select campaign targets, such as the combined product name, the average 3-month data consumption rate, and the last 3 months of wireless data usage, were also selected as important features for campaign response, confirming that base attributes can be very important features depending on the type of campaign. This makes it possible to analyze and understand the important characteristics of each campaign type.
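
The plain sequential forward selection (SFS) baseline named in the abstract can be written as a short greedy loop: repeatedly add the feature that most improves cross-validated accuracy until no candidate helps. The sketch below uses synthetic data and a logistic-regression evaluator as assumptions; it illustrates the baseline only, not the paper's improved SFFS variant or the real campaign data.

```python
# Minimal sketch of plain sequential forward selection (SFS) on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, random_state=0)
estimator = LogisticRegression(max_iter=1000)

selected, remaining = [], list(range(X.shape[1]))
best_score = -np.inf

while remaining:
    # Try adding each remaining feature; keep the one that improves CV accuracy most
    trial_scores = {
        f: cross_val_score(estimator, X[:, selected + [f]], y, cv=5).mean()
        for f in remaining
    }
    f_best, score = max(trial_scores.items(), key=lambda kv: kv[1])
    if score <= best_score:          # stop when no candidate improves the score
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best_score = score

print("selected features:", selected, "CV accuracy:", round(best_score, 3))
```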