• Title/Summary/Keyword: Model Optimization


Optimization of TDA Recycling Process for TDI Residue using Near-critical Hydrolysis Process (근임계수 가수분해 공정을 이용한 TDI 공정 폐기물로부터 TDA 회수 공정 최적화)

  • Han, Joo Hee;Han, Kee Do;Jeong, Chang Mo;Do, Seung Hoe;Sin, Yeong Ho
    • Korean Chemical Engineering Research / v.44 no.6 / pp.650-658 / 2006
  • The recycling of TDA from the solid waste of a TDI plant (TDI-R) by near-critical hydrolysis was studied by means of a statistical design of experiments. The main and interaction effects of the process variables were determined from experiments in a batch reactor, and a correlation equation relating TDA yield to the process variables was obtained from experiments in a continuous pilot plant. The effects of reaction temperature, catalyst type and concentration, and the weight ratio of water to TDI-R (WR) on TDA yield were confirmed to be significant. TDA yield decreased with increasing reaction temperature and catalyst concentration, and increased with increasing WR. As a catalyst, NaOH was more effective than $Na_2CO_3$ for TDA yield. The interaction effects on TDA yield between catalyst concentration and temperature, WR and temperature, and catalyst type and reaction time were found to be significant. Although the effect of catalyst concentration on TDA yield at $300^{\circ}C$ (subcritical water) was insignificant, TDA yield decreased with increasing catalyst concentration at $400^{\circ}C$ (supercritical water). On the other hand, the yield increased with increasing WR at $300^{\circ}C$ but was negligibly affected by WR at $400^{\circ}C$. The optimization of the process variables for TDA yield was then explored in a pilot plant for scale-up. Catalyst concentration and WR were selected as the process variables with respect to economic feasibility and efficiency, and their effects on TDA yield were explored by means of a central composite design. TDA yield increased with increasing catalyst concentration. It reached a maximum below a WR of 2.5 and then decreased as WR increased further; however, the WR at which the yield peaked increased with increasing catalyst concentration.
The correlation equation of a quadratic model in catalyst concentration and WR was obtained by regression analysis of the experimental results from the pilot plant.
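A quadratic model of this kind, fitted over a central composite design, can be obtained by ordinary least squares. A minimal sketch in Python; the data and coefficient values are synthetic, not the study's:

```python
import numpy as np

def fit_quadratic_surface(x1, x2, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Synthetic data: x1 stands in for catalyst concentration, x2 for the
# water-to-residue weight ratio (WR); units and values are hypothetical.
rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 5.0, 30)
x2 = rng.uniform(1.0, 4.0, 30)
true = np.array([10.0, 2.0, 8.0, -0.5, -1.5, 0.3])
y = true[0] + true[1]*x1 + true[2]*x2 + true[3]*x1**2 + true[4]*x2**2 + true[5]*x1*x2
coef = fit_quadratic_surface(x1, x2, y)  # recovers `true` on noise-free data
```

With real pilot-plant data a noise term is present, and the fit is assessed by $R^2$ and ANOVA as in the paper.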

[Retraction] Characteristics and Optimization of Platycodon grandiflorum Root Concentrate Stick Products with Fermented Platycodon grandiflorum Root Extracts by Lactic Acid Bacteria ([논문 철회] 반응표면분석법을 이용한 젖산발효 도라지 추출물이 첨가된 도라지 농축액 제품의 최적화 연구)

  • Lee, Ka Soon;Seong, Bong Jae;Kim, Sun Ick;Jee, Moo Geun;Park, Shin Young;Mun, Jung Sik;Kil, Mi Ja;Doh, Eun Soo;Kim, Hyun Ho
    • Journal of the Korean Society of Food Science and Nutrition / v.46 no.11 / pp.1386-1396 / 2017
  • The purpose of this study was to determine the optimum amounts of Platycodon grandiflorum root concentrate (PGRC, $65^{\circ}Brix$), fermented P. grandiflorum root extract by Lactobacillus plantarum (FPGRE, $2^{\circ}Brix$), and cactus Chounnyouncho extract (Cactus-E, $2^{\circ}Brix$) for the preparation of a PGRC stick product with FPGRE using response surface methodology (RSM). The experimental conditions were designed according to a central composite design with 20 experimental points, including three replicates, for the three independent variables: amount of PGRC (8~12 g), FPGRE (0~20 g), and Cactus-E (0~20 g). The experimental data for the sensory evaluation and the functional properties based on antioxidant and antimicrobial activity were fitted with the quadratic model, and the accuracy of the equations was analyzed by ANOVA. The sensory and functional responses showed significant correlations with the contents of the three independent variables. The results indicate that addition of PGRC contributed to increased bitterness and acridity in the sensory test and to antimicrobial activity, addition of FPGRE contributed to increased antioxidant and antimicrobial activity, and addition of Cactus-E contributed to increased fluidity in the sensory test as well as antioxidant and antimicrobial activity. Based on the RSM results, the optimum formulation of the PGRC stick product was calculated as PGRC 8.456 g, FPGRE 20.00 g, and Cactus-E 20.00 g, giving minimal bitterness and acridity together with optimized fluidity, antioxidant activity, and antimicrobial activity.
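Simultaneous optimization of the kind described above, minimizing bitterness and acridity while maximizing functional properties, is commonly handled by combining per-response desirability functions into one score. A minimal sketch; all response values and acceptable ranges below are hypothetical:

```python
import numpy as np

def d_minimize(y, lo, hi):
    # Desirability of a response to be minimized: 1 at/below lo, 0 at/above hi.
    return float(np.clip((hi - y) / (hi - lo), 0.0, 1.0))

def d_maximize(y, lo, hi):
    # Desirability of a response to be maximized: 0 at/below lo, 1 at/above hi.
    return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0))

def overall_desirability(ds):
    # Geometric mean of the individual desirabilities.
    return float(np.prod(ds) ** (1.0 / len(ds)))

# Hypothetical responses for one candidate formulation
bitterness = 2.0     # sensory score, lower is better (assumed 1..7 scale)
antioxidant = 60.0   # antioxidant activity, higher is better (assumed 20..80 range)
D = overall_desirability([
    d_minimize(bitterness, 1.0, 7.0),
    d_maximize(antioxidant, 20.0, 80.0),
])
```

The optimum formulation is then the factor setting that maximizes D over the fitted response surfaces.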

Optimization of Medium for the Carotenoid Production by Rhodobacter sphaeroides PS-24 Using Response Surface Methodology (반응 표면 분석법을 사용한 Rhodobacter sphaeroides PS-24 유래 carotenoid 생산 배지 최적화)

  • Bong, Ki-Moon;Kim, Kong-Min;Seo, Min-Kyoung;Han, Ji-Hee;Park, In-Chul;Lee, Chul-Won;Kim, Pyoung-Il
    • Korean Journal of Organic Agriculture / v.25 no.1 / pp.135-148 / 2017
  • Response surface methodology (RSM), combined with a Plackett-Burman design and a Box-Behnken experimental design, was applied to optimize the ratios of the nutrient components for carotenoid production by Rhodobacter sphaeroides PS-24 in liquid-state fermentation. Nine nutrient ingredients, comprising yeast extract, sodium acetate, NaCl, $K_2HPO_4$, $MgSO_4$, mono-sodium glutamate, $Na_2CO_3$, $NH_4Cl$, and $CaCl_2$, were finally selected for optimizing the medium composition based on their statistical significance and positive effects on carotenoid yield. The Box-Behnken design was employed for further optimization of the selected nutrient components in order to increase carotenoid production. Based on the Box-Behnken assay data, a second-order coefficient model was set up to investigate the relationship between carotenoid productivity and the nutrient ingredients. The optimal medium constituents for carotenoid production by Rhodobacter sphaeroides PS-24 were determined as follows: yeast extract 1.23 g, sodium acetate 1 g, $NH_4Cl$ 1.75 g, NaCl 2.5 g, $K_2HPO_4$ 2 g, $MgSO_4$ 1.0 g, mono-sodium glutamate 7.5 g, $Na_2CO_3$ 3.71 g, $NH_4Cl$ 3.5 g, $CaCl_2$ 0.01 g, per liter. A maximum carotenoid yield of 18.11 mg/L was measured in a confirmatory experiment in liquid culture using a 500 L fermenter.
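The Box-Behnken design used above places runs at the midpoints of the edges of the factor cube plus replicated centre points. A small generator in coded (-1, 0, +1) units, for illustration only:

```python
from itertools import combinations

def box_behnken(k, center_runs=3):
    """Box-Behnken design for k >= 3 factors in coded units: every +/-1
    combination for each pair of factors with all other factors at 0,
    plus `center_runs` replicated centre points."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs.extend([[0] * k for _ in range(center_runs)])
    return runs

design = box_behnken(3)  # 3 pairs x 4 runs + 3 centre points = 15 runs
```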

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.60-61 / 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) having high diffraction efficiency and flexibility of design have been widely developed for many applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching nearly global optima. However, there exists a drawback to consider when we use the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation may be time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. In trying to remedy this drawback, the artificial neural network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. For that reason, we attempt to find a new approach combining the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is essentially a process of iteration, comprising selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation step wastes much time Fourier transforming the encoded parameters on the hologram into the value to be solved; depending on the speed of the computer, it can last up to ten minutes.
It will be more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed. By doing so, the initial population will contain fewer trial holograms, which translates into a reduction of the GA's computation time. Accordingly, a hybrid algorithm that utilizes a trained neural network to initiate the GA's procedure is proposed: the initial population contains fewer random holograms, the difference being made up by approximately desired holograms. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain approximately desired holograms that are in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and of the GA are the same except for the modification of the initial step. Hence, the verified results in Ref. [2] for parameters such as the probabilities of crossover and mutation, the tournament size, and the crossover block size remain unchanged, aside from the reduced population size. A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2.
With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of the diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. We see that the simulation and experimental results are in fairly good agreement with each other. In this paper, the genetic algorithm and neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still allowing holograms of high diffraction efficiency and uniformity to be achieved. This work was supported by grant No. mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
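The core idea, seeding part of the GA's initial population with approximate solutions from a trained network instead of purely random holograms, can be sketched with a minimal binary GA. The fitness function here is a toy stand-in for the FFT-based diffraction-efficiency cost, and all parameter values are illustrative:

```python
import numpy as np

def hybrid_ga(fitness, seeds, n_bits, pop_size=30, generations=200,
              p_cross=0.75, p_mut=0.001, seed=0):
    """Minimal GA over binary strings; `seeds` are approximate solutions
    (e.g. from a trained ANN) used to initialise part of the population."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    pop[:len(seeds)] = np.asarray(seeds)[:pop_size]        # ANN-seeded members
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        # Tournament selection (size 2)
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(fit[a] >= fit[b], a, b)]
        # Single-point crossover
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                cut = rng.integers(1, n_bits)
                children[i, cut:], children[i + 1, cut:] = \
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy()
        # Bit-flip mutation
        flips = rng.random(children.shape) < p_mut
        children ^= flips.astype(children.dtype)
        pop = children
    fit = np.array([fitness(ind) for ind in pop])
    return pop[fit.argmax()], fit.max()

# Toy fitness standing in for the FFT-based diffraction-efficiency cost
target = np.array([1, 0] * 32)
fitness = lambda h: (h == target).mean()
seed_hologram = target ^ (np.arange(64) < 5)  # "approximate" solution: 5 bits off
best, score = hybrid_ga(fitness, seeds=[seed_hologram], n_bits=64)
```

In the actual method the seeds come from the ANN of step one and the fitness is the Fourier-transform cost, so the savings arise from needing fewer generations and a smaller population to reach good holograms.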


Optimization of Medium Components using Response Surface Methodology for Cost-effective Mannitol Production by Leuconostoc mesenteroides SRCM201425 (반응표면분석법을 이용한 Leuconostoc mesenteroides SRCM201425의 만니톨 생산배지 최적화)

  • Ha, Gwangsu;Shin, Su-Jin;Jeong, Seong-Yeop;Yang, HoYeon;Im, Sua;Heo, JuHee;Yang, Hee-Jong;Jeong, Do-Youn
    • Journal of Life Science / v.29 no.8 / pp.861-870 / 2019
  • This study was undertaken to establish optimum medium compositions for cost-effective mannitol production by Leuconostoc mesenteroides SRCM201425 isolated from kimchi. L. mesenteroides SRCM201425 was selected for efficient mannitol production based on fructose analysis and identified by its 16S rRNA gene sequence as well as by carbohydrate fermentation pattern analysis. To enhance mannitol production by L. mesenteroides SRCM201425, the effects of carbon, nitrogen, and mineral sources on mannitol production were first determined using a Plackett-Burman design (PBD). The effects of 11 variables on mannitol production were investigated, of which three, fructose, sucrose, and peptone, were selected. In the second step, the concentrations of fructose, sucrose, and peptone were optimized using a central composite design (CCD) and response surface analysis. The predicted concentrations of fructose, sucrose, and peptone were 38.68 g/l, 30 g/l, and 39.67 g/l, respectively. The mathematical response model was reliable, with a coefficient of determination of $R^2=0.9185$. Mannitol production increased 20-fold compared with MRS medium, corresponding to a mannitol yield of 97.46% relative to MRS supplemented with 100 g/l of fructose in a flask system. Furthermore, production in the optimized medium was cost-effective. The findings of this study are expected to be useful for biological mannitol production, an alternative to catalytic hydrogenation, which causes byproducts and additional production costs.
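The Plackett-Burman screening step ranks each medium component by its main effect, i.e. the mean response at the high level minus the mean at the low level. A sketch with a small two-level design and synthetic yields (a real PBD for 11 variables would use 12 runs; everything here is hypothetical):

```python
import numpy as np

def pb_main_effects(design, y):
    """Main effect of each factor in a two-level screening design:
    mean response at the high (+1) level minus mean at the low (-1) level."""
    design, y = np.asarray(design), np.asarray(y)
    return np.array([y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
                     for j in range(design.shape[1])])

# Hypothetical 8-run, 3-factor illustration (fructose, sucrose, peptone)
design = np.array([[ 1,  1,  1], [ 1,  1, -1], [ 1, -1,  1], [ 1, -1, -1],
                   [-1,  1,  1], [-1,  1, -1], [-1, -1,  1], [-1, -1, -1]])
y = 50 + 8*design[:, 0] + 3*design[:, 1] + 0.5*design[:, 2]  # synthetic yields
effects = pb_main_effects(design, y)  # factors ranked by |effect| are screened in
```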

Decomposition Characteristics of Fungicides (Benomyl) using a Design of Experiment (DOE) in an E-beam Process and Acute Toxicity Assessment (전자빔 공정에서 실험계획법을 이용한 살균제 Benomyl의 제거특성 및 독성평가)

  • Yu, Seung-Ho;Cho, Il-Hyoung;Chang, Soon-Woong;Lee, Si-Jin;Chun, Suk-Young;Kim, Han-Lae
    • Journal of Korean Society of Environmental Engineers / v.30 no.9 / pp.955-960 / 2008
  • We investigated and estimated the decomposition and mineralization characteristics of benomyl using a design of experiment (DOE) based on a general factorial design in an E-beam process; the main factors (variables), benomyl concentration (X$_1$) and E-beam irradiation dose (X$_2$), each with 5 levels, were set up to estimate the prediction model and the optimization conditions. At first, the benomyl in all treatment combinations except trials 17 and 18 was almost completely degraded, and the difference in benomyl decomposition among the 3 blocks was not significant (p > 0.05, one-way ANOVA). However, benomyl mineralization was 46% (block 1), 36.7% (block 2), and 22% (block 3), and the differences between blocks were significant (p < 0.05). The linear regression equations of benomyl mineralization in each block were estimated as follows: block 1 (Y$_1$ = 0.024X$_1$ + 34.1 (R$^2$ = 0.929)), block 2 (Y$_2$ = 0.026X$_2$ + 23.1 (R$^2$ = 0.976)), and block 3 (Y$_3$ = 0.034X$_3$ + 6.2 (R$^2$ = 0.98)). The normality of benomyl mineralization obtained from the Anderson-Darling test was satisfied under all treatment conditions (p > 0.05). The prediction model and the optimum point obtained using canonical analysis were Y = 39.96 - 9.36X$_1$ + 0.03X$_2$ - 10.67X$_1{^2}$ - 0.001X$_2{^2}$ + 0.011X$_1$X$_2$ (R$^2$ = 96.3%, adjusted R$^2$ = 94.8%) and 57.3% at 0.55 mg/L and 950 Gy, respectively. A Microtox test using V. fischeri showed that the toxicity, expressed as inhibition (%), was reduced almost completely after E-beam irradiation, whereas the inhibition for 0.5 mg/L, 1 mg/L, and 1.5 mg/L was 10.25%, 20.14%, and 26.2% in the initial reactions in the absence of E-beam illumination.
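The canonical analysis mentioned above locates the stationary point of the fitted second-order model by solving grad Y = 0. A sketch checked against a surface with a known maximum; the coefficients below are illustrative, not the fitted model's:

```python
import numpy as np

def stationary_point(b1, b2, b11, b22, b12):
    """Stationary point of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2,
    found by solving grad y = 0 (the core step of canonical analysis)."""
    B = np.array([[2.0 * b11, b12],
                  [b12, 2.0 * b22]])   # Hessian of the quadratic part
    g = np.array([b1, b2])             # linear-term gradient contribution
    return np.linalg.solve(B, -g)

# Check with a surface whose maximum is known: y = 10 - (x1-2)^2 - (x2-3)^2,
# which expands to b1 = 4, b2 = 6, b11 = -1, b22 = -1, b12 = 0.
x_star = stationary_point(4.0, 6.0, -1.0, -1.0, 0.0)  # expected (2, 3)
```

Whether the stationary point is a maximum, minimum, or saddle is then read off the signs of the Hessian's eigenvalues.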

Limit Pricing by Noncooperative Oligopolists (과점산업(寡占産業)에서의 진입제한가격(進入制限價格))

  • Nam, Il-chong
    • KDI Journal of Economic Policy / v.12 no.1 / pp.127-148 / 1990
  • A Milgrom-Roberts style signalling model of limit pricing is developed to analyze the possibility and the scope of limit pricing in general, noncooperative oligopolies. The model contains multiple incumbent firms facing a potential entrant and assumes an information asymmetry between incumbents and the potential entrant about the market demand. There are two periods in the model. In period 1, n incumbent firms simultaneously and noncooperatively choose quantities. At the end of period 1, the potential entrant observes the market price and makes an entry decision. In period 2, depending on the entry decision of the entrant, n or (n+1) firms choose quantities again before the game terminates. Since the choice of the incumbent firms in period 1 depends on their information about demand, the market price in period 1 conveys information about the market demand. Thus, there is a systematic link between the market price and the profitability of entry. Using Bayes-Nash equilibrium as the solution concept, we find that there exist some demand conditions under which incumbent firms will limit price. In symmetric equilibria, incumbent firms each produce an output that is greater than the Cournot output and induce a price that is below the Cournot price. In doing so, each incumbent firm refrains from maximizing short-run profit and supplies a public good: entry deterrence. The reason that entry is deterred by such a reduced price is that it conveys information about the demand of the industry that is unfavorable to the entrant. This establishes the possibility of limit pricing by noncooperative oligopolists in a setting that is fully rational, and also generalizes the result of Milgrom and Roberts to general oligopolies, confirming Bain's intuition. Limit pricing by incumbents as explained above can be interpreted as a form of credible collusion in which each firm voluntarily deviates from myopic optimization in order to deter entry using their superior information.
This type of implicit collusion differs from Folk-theorem type collusions in many ways and suggests that a collusion can be credible even in finite games as long as there is information asymmetry. Another important result is that as the number of incumbent firms approaches infinity, or as the industry approaches a competitive one, the probability that limit pricing occurs converges to zero and the probability of entry converges to that under complete information. This limit result confirms the intuition that as the number of agents sharing the same private information increases, the value of the private information decreases and the probability that the information gets revealed increases. It also supports the conventional belief that there is no entry problem in a competitive market. Considering that limit pricing is generally believed to occur at an early stage of an industry and that many industries in Korea are oligopolies in their infant stages, the theoretical results of this paper suggest that we should pay attention to the possibility of implicit collusion by incumbent firms aimed at deterring new entry using superior information. The long-term loss to the Korean economy from limit pricing can be very large if the industry in question is part of the world market and the domestic potential entrant whose entry is deterred could have developed into a competitor in the world market. In this case, the long-term loss to the Korean economy should include the lost opportunity in the world market in addition to the domestic long-run welfare loss.
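The Cournot benchmark against which the limit-pricing outputs are compared has a closed form under symmetric linear demand. A sketch with hypothetical demand and cost parameters:

```python
def cournot_symmetric(a, c, n):
    """Symmetric Cournot equilibrium with inverse demand P = a - Q and
    constant marginal cost c: each of n firms produces q = (a - c) / (n + 1)."""
    q = (a - c) / (n + 1)
    P = a - n * q
    return q, P

# Two incumbents, demand intercept 10, marginal cost 1
q, P = cournot_symmetric(a=10.0, c=1.0, n=2)   # q = 3, P = 4
# In a limit-pricing equilibrium each incumbent produces more than q,
# inducing a price below P.  As n grows, price approaches marginal cost,
# mirroring the paper's limit result for competitive industries:
P_many = cournot_symmetric(a=10.0, c=1.0, n=1000)[1]
```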


Assessment of the Angstrom-Prescott Coefficients for Estimation of Solar Radiation in Korea (국내 일사량 추정을 위한 Angstrom-Prescott계수의 평가)

  • Hyun, Shinwoo;Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology / v.18 no.4 / pp.221-232 / 2016
  • Models to estimate solar radiation have been used because solar radiation is measured at a smaller number of weather stations than other variables such as temperature and rainfall. For example, solar radiation has been estimated using the Angstrom-Prescott (AP) model, which depends on two coefficients obtained empirically at a specific site ($AP_{Choi}$) or for a climate zone ($AP_{Frere}$). The objective of this study was to identify the coefficients of the AP model for reliable estimation of solar radiation under a wide range of spatial and temporal conditions. A global optimization was performed over a range of AP coefficients to identify the values of $AP_{max}$ that resulted in the greatest degree of agreement at each of 20 sites for a given month during 30 years. The degree of agreement was assessed using the Concordance Correlation Coefficient (CCC). When $AP_{Frere}$ was used to estimate solar radiation, the values of CCC were relatively high for conditions under which crop growth simulation would be performed, e.g., at rural sites during summer. The statistics for $AP_{Frere}$ were greater than those for $AP_{Choi}$, although $AP_{Frere}$ had smaller statistics than $AP_{max}$. The variation of CCC values over a wide range of AP coefficients was small when those statistics were summarized by site. $AP_{Frere}$ was included in each range of AP coefficients that resulted in reasonable accuracy of solar radiation estimates by site, year, and month. These results suggest that $AP_{Frere}$ would be useful for providing estimates of solar radiation as an input to crop models in Korea. Further studies would be merited to examine the feasibility of using $AP_{Frere}$ to obtain gridded estimates of solar radiation at a high spatial resolution over the complex terrain of Korea.
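The AP model itself is the linear relation Rs = (a + b·n/N)·Ra between relative sunshine duration n/N and solar radiation. A sketch using the widely used FAO-56 default coefficients; the study evaluates site-calibrated and climate-zone coefficients in place of these, and the input values below are hypothetical:

```python
def angstrom_prescott(Ra, n, N, a=0.25, b=0.50):
    """Estimate daily solar radiation Rs (same units as Ra) from
    extraterrestrial radiation Ra, actual sunshine hours n, and daylength N:
    Rs = (a + b * n / N) * Ra.  a and b default to the FAO-56 values."""
    return (a + b * n / N) * Ra

# Hypothetical day: Ra = 30 MJ m-2 day-1, 8 h of sunshine out of 12 h daylength
Rs = angstrom_prescott(Ra=30.0, n=8.0, N=12.0)
```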

A study on the optimization of tunnel support patterns using ANN and SVR algorithms (ANN 및 SVR 알고리즘을 활용한 최적 터널지보패턴 선정에 관한 연구)

  • Lee, Je-Kyum;Kim, YangKyun;Lee, Sean Seungwon
    • Journal of Korean Tunnelling and Underground Space Association / v.24 no.6 / pp.617-628 / 2022
  • A ground support pattern should be designed by properly integrating various support materials in accordance with the rock mass grade when constructing a tunnel, and a technical decision must be made in this process by professionals with extensive construction experience. However, designing supports at the early stages of tunnel design, such as the feasibility study or basic design, may be very challenging due to the short timeline, insufficient budget, and deficiency of field data. Meanwhile, with the rapid increase in tunnel construction in South Korea, the design of the support pattern can be performed more quickly and reliably by utilizing machine learning techniques and the accumulated design data. Therefore, in this study, the design data and ground exploration data of 48 road tunnels in South Korea were inspected, and data on 19 items, comprising eight input items (rock type, resistivity, depth, tunnel length, safety index by tunnel length, safety index by rock index, tunnel type, tunnel area) and 11 output items (rock mass grade, two items for shotcrete, three items for rock bolts, three items for steel supports, two items for concrete lining), were collected to automatically determine the rock mass class and the support pattern. Three machine learning models (S1, A1, A2) were developed using two machine learning algorithms (SVR, ANN) and the organized data. As a result, the A2 model, which applied different loss functions according to the output data format, showed the best performance. This study confirms the potential of support pattern design using machine learning, and it is expected that the design model can be improved by continuously using it in actual design, compensating for its shortcomings, and improving its usability.
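As a rough illustration of the ANN side of such models, a tiny one-hidden-layer regressor can map coded ground features to a coded support-quantity response. Everything below (features, targets, network size, training settings) is hypothetical, not the study's model:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.1, epochs=3000, seed=0):
    """One-hidden-layer ANN regressor (tanh hidden layer, linear output),
    trained with full-batch gradient descent on mean squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    t = y.reshape(-1, 1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)           # forward pass
        err = (H @ W2 + b2) - t            # residuals
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1.0 - H**2)   # backprop through tanh
        gW1 = X.T @ dH / len(X);  gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel()

# Hypothetical coded inputs (e.g. depth, resistivity) and a coded response
X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
y = np.array([-1., 0., 0., 1.])
predict = train_mlp(X, y)
```

A production model like the study's would instead be trained on the 48-tunnel dataset with separate output heads (and loss functions) per design item.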

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, they have been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been continuous demand in various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, the data related to product information are collected, refined, and restructured into a form suitable for applying the Word2Vec model. Next, the preprocessed data are embedded into vector space by Word2Vec, and the product groups are derived by extracting similar product names based on cosine similarity calculation. Finally, the sales data on the extracted products are summed to estimate the market size of the product groups. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into multidimensional vector space by Word2Vec training.
We performed parameter optimization for training and then applied a vector dimension of 300 and a window size of 15 as the optimized parameters for further experiments. We employed index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. The product names similar to the KSIC indexes were extracted based on cosine similarity. The market size of the extracted products, taken as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional sampling-based methods or methods requiring multiple assumptions. In addition, the level of the market category can be easily and efficiently adjusted according to the purpose of the information use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical applications, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis report publishing by private firms. The limitation of our study is that the presented model needs to be improved in accuracy and reliability. The semantics-based word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec.
Also, the product group clustering could be replaced by other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect they can further improve the performance of the basic model conceptually proposed in this study.
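The grouping-and-summation core of the proposed pipeline can be sketched with embeddings treated as given; the toy 3-dimensional vectors and sales figures below stand in for the 300-dimensional Word2Vec vectors and company sales data:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def estimate_market_size(anchor, embeddings, sales, threshold=0.8):
    """Group all products whose name embedding is within a cosine-similarity
    threshold of the anchor product, then sum their sales -- the grouping and
    summation steps the study performs with Word2Vec vectors."""
    group = [name for name, vec in embeddings.items()
             if cosine(embeddings[anchor], vec) >= threshold]
    return group, sum(sales[name] for name in group)

# Hypothetical 3-d embeddings standing in for 300-d Word2Vec vectors
embeddings = {
    "led lamp":   np.array([0.9, 0.1, 0.0]),
    "led bulb":   np.array([0.8, 0.2, 0.1]),
    "steel pipe": np.array([0.0, 0.1, 0.9]),
}
sales = {"led lamp": 120.0, "led bulb": 80.0, "steel pipe": 500.0}
group, size = estimate_market_size("led lamp", embeddings, sales)
```

Raising or lowering the similarity threshold narrows or widens the product group, which is how the level of the market category is adjusted in the proposed method.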