• Title/Summary/Keyword: Optimal Selection (최적선정)


Optimization of Medium Components using Response Surface Methodology for Cost-effective Mannitol Production by Leuconostoc mesenteroides SRCM201425 (반응표면분석법을 이용한 Leuconostoc mesenteroides SRCM201425의 만니톨 생산배지 최적화)

  • Ha, Gwangsu;Shin, Su-Jin;Jeong, Seong-Yeop;Yang, HoYeon;Im, Sua;Heo, JuHee;Yang, Hee-Jong;Jeong, Do-Youn
    • Journal of Life Science
    • /
    • v.29 no.8
    • /
    • pp.861-870
    • /
    • 2019
  • This study was undertaken to establish optimum medium compositions for cost-effective mannitol production by Leuconostoc mesenteroides SRCM201425 isolated from kimchi. L. mesenteroides SRCM201425 was selected for efficient mannitol production based on fructose analysis and was identified by its 16S rRNA gene sequence as well as by carbohydrate fermentation pattern analysis. To enhance mannitol production by L. mesenteroides SRCM201425, the effects of carbon, nitrogen, and mineral sources on mannitol production were first screened using a Plackett-Burman design (PBD). Of the 11 variables investigated, three (fructose, sucrose, and peptone) were selected. In the second step, the concentrations of fructose, sucrose, and peptone were optimized using a central composite design (CCD) and response surface analysis. The predicted optimal concentrations of fructose, sucrose, and peptone were 38.68 g/l, 30 g/l, and 39.67 g/l, respectively. The mathematical response model was reliable, with a coefficient of determination of R² = 0.9185. Mannitol production increased 20-fold compared with MRS medium, corresponding to a mannitol yield of 97.46% relative to MRS supplemented with 100 g/l of fructose in a flask system. Furthermore, production in the optimized medium was cost-effective. The findings of this study are expected to be useful for biological mannitol production as an alternative to catalytic hydrogenation, which generates byproducts and incurs additional production costs.
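The CCD/response-surface step described above can be sketched numerically: fit a full second-order model by least squares and solve for its stationary point. The design points, response values, and factor roles below are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

# Hypothetical CCD-style data: coded levels of two factors (e.g., fructose,
# peptone) and a synthetic response with a known optimum near (0.5, -0.3).
rng = np.random.default_rng(0)
x1 = np.array([-1, 1, -1, 1, -1.41, 1.41, 0, 0, 0, 0, 0])
x2 = np.array([-1, -1, 1, 1, 0, 0, -1.41, 1.41, 0, 0, 0])
y = 50 - 4 * (x1 - 0.5) ** 2 - 3 * (x2 + 0.3) ** 2 + rng.normal(0, 0.1, x1.size)

# Design matrix for the full second-order (quadratic) response-surface model.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b11, b22, b12 = beta

# Stationary point: set both partial derivatives to zero and solve the
# linear system 2*b11*x1 + b12*x2 = -b1, b12*x1 + 2*b22*x2 = -b2.
A = np.array([[2 * b11, b12], [b12, 2 * b22]])
opt = np.linalg.solve(A, -np.array([b1, b2]))

# Coefficient of determination of the fitted surface.
resid = y - X @ beta
r2 = 1 - resid.var() / y.var()
print(opt, r2)
```

In a real RSM run the coded optimum would then be converted back to physical concentrations (g/l) using the factor coding of the design.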

Impact of Lambertian Cloud Top Pressure Error on Ozone Profile Retrieval Using OMI (램버시안 구름 모델의 운정기압 오차가 OMI 오존 프로파일 산출에 미치는 영향)

  • Nam, Hyeonshik;Kim, Jae Hawn;Shin, Daegeun;Baek, Kanghyun
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.3
    • /
    • pp.347-358
    • /
    • 2019
  • The Lambertian cloud model is a simplified cloud model used to efficiently retrieve the vertical ozone distribution of an atmosphere in which clouds exist. In the Lambertian cloud model, the optical characteristics of clouds required for radiative transfer simulation are parameterized by the Optical Centroid Cloud Pressure (OCCP) and the Effective Cloud Fraction (ECF), and the accuracy of each parameter greatly affects the accuracy of the radiance simulation. However, it is very difficult to generalize the vertical ozone error due to OCCP error, because it varies with the radiative environment and algorithm settings. It is also difficult to isolate the effect of OCCP error, because it is mixed with other errors that arise in the vertical ozone retrieval process. This study analyzed the ozone retrieval error due to OCCP error using two methods. First, we simulated the impact of OCCP error on ozone retrieval based on optimal estimation. Using the LIDORT radiative transfer model, the radiance error due to the OCCP error was calculated. To convert the radiance error into an ozone retrieval error, the radiance error was inserted into the conversion equation of the optimal estimation method. The results show that an OCCP error of 100 hPa leads to a 2.7% overestimate of total ozone. Second, a case analysis was carried out to find the ozone retrieval error due to OCCP error. For the case analysis, the ozone retrieval error was simulated assuming an OCCP error and compared with the ozone error in cases from PROFOZ 2005-2006, an OMI ozone profile product. To define the ozone error in each case, an idealized assumption was made; considering the albedo and the horizontal variation of ozone needed to satisfy this assumption, 49 cases were selected. As a result, 27 of the 49 cases (about 55%) showed a correlation of 0.5 or more. These results show that OCCP error has a significant influence on the accuracy of ozone profile retrieval.
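The conversion of a radiance error into a retrieval error via optimal estimation can be illustrated with the standard gain-matrix formalism. All matrices below are small random placeholders, not actual OMI or LIDORT quantities.

```python
import numpy as np

# Minimal sketch of mapping a radiance error into a retrieval error via the
# optimal-estimation gain matrix. Dimensions and covariances are hypothetical.
rng = np.random.default_rng(1)
n_layers, n_wavelengths = 5, 12

K = rng.normal(size=(n_wavelengths, n_layers))    # Jacobian d(radiance)/d(ozone)
S_e = np.diag(np.full(n_wavelengths, 0.01**2))    # measurement error covariance
S_a = np.diag(np.full(n_layers, 0.5**2))          # a priori covariance

# Gain matrix G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1
Se_inv = np.linalg.inv(S_e)
G = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(S_a)) @ K.T @ Se_inv

# A radiance perturbation (e.g., one caused by an OCCP error) maps linearly
# to an ozone-profile perturbation through G.
delta_y = np.full(n_wavelengths, 0.005)           # hypothetical radiance error
delta_x = G @ delta_y                             # resulting ozone retrieval error
print(delta_x)
```

Because the mapping is linear, doubling the radiance error doubles the retrieved-ozone error, which is why the per-100-hPa sensitivity quoted in the abstract is a meaningful summary number.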

Development and Validation of a Simultaneous Analytical Method for 5 Residual Pesticides in Agricultural Products using GC-MS/MS (GC-MS/MS를 이용한 농산물 중 잔류농약 5종 동시시험법 개발 및 검증)

  • Park, Eun-Ji;Kim, Nam Young;Shim, Jae-Han;Lee, Jung Mi;Jung, Yong Hyun;Oh, Jae-Ho
    • Journal of Food Hygiene and Safety
    • /
    • v.36 no.3
    • /
    • pp.228-238
    • /
    • 2021
  • The aim of this research was to develop a rapid and easy multi-residue method for determining dimethipin, omethoate, dimethipin, chlorfenvinphos, and azinphos-methyl in agricultural products (hulled rice, potato, soybean, mandarin, and green pepper). Samples were prepared using the QuEChERS (Quick, Easy, Cheap, Effective, Rugged, and Safe) procedure and analyzed using gas chromatography-tandem mass spectrometry (GC-MS/MS). Residual pesticides were extracted with 1% acetic acid in acetonitrile, followed by the addition of anhydrous magnesium sulfate (MgSO4) and anhydrous sodium acetate. The extracts were cleaned up using MgSO4, primary secondary amine (PSA), and octadecyl (C18) sorbents. Calibration curves constructed with matrix-matched standards showed excellent linearity over the range of 0.005 mg/kg to 0.3 mg/kg, with coefficients of determination (R²) ≥ 0.9934 for all analytes. Average recoveries at three spiking levels (0.01, 0.1, and 0.5 mg/kg) were in the range of 74.2-119.3%, while standard deviation values were less than 14.6%, meeting the Codex guideline (CODEX CAC/GL 40).
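The linearity and recovery checks described above amount to a simple calculation. A minimal sketch with hypothetical calibration data (not the paper's measurements) might look like:

```python
import numpy as np

# Hypothetical matrix-matched calibration data for one analyte:
# spiked concentration (mg/kg) vs. instrument peak area (arbitrary units).
conc = np.array([0.005, 0.01, 0.05, 0.1, 0.2, 0.3])
area = np.array([52.0, 101.0, 498.0, 1005.0, 1990.0, 3010.0])

# Linear calibration: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, 1)

# Coefficient of determination (R^2) as the linearity check.
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

# Recovery (%) at a spiking level: measured concentration / spiked concentration.
measured_area = 985.0          # hypothetical response of a 0.1 mg/kg spiked sample
measured_conc = (measured_area - intercept) / slope
recovery = 100 * measured_conc / 0.1
print(round(r2, 4), round(recovery, 1))
```

The same two figures (R² per analyte, recovery per spiking level) are what the abstract compares against the ≥ 0.9934 and 74.2-119.3% acceptance values.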

Development of Lateral Flow Immunofluorescence Assay Applicable to Lung Cancer (폐암 진단에 적용 가능한 측면 유동 면역 형광 분석법 개발)

  • Supianto, Mulya;Lim, Jungmin;Lee, Hye Jin
    • Applied Chemistry for Engineering
    • /
    • v.33 no.2
    • /
    • pp.173-178
    • /
    • 2022
  • A lateral flow immunoassay (LFIA) method using carbon nanodot@silica as a signaling material was developed for analyzing the concentration of retinol-binding protein 4 (RBP4), one of the lung cancer biomarkers. Instead of the antibodies commonly used as bioreceptors on nitrocellulose membranes in LFIAs for protein detection, aptamers were used, which are more economical, easier to store for long periods, and have strong affinities toward specific target proteins. An RBP4-specific aptamer, biotin-modified at its 5' terminus, was first reacted with neutravidin, and the mixture was then sprayed onto the membrane to immobilize the aptamer in the porous membrane through the strong binding affinity between biotin and neutravidin. Carbon nanodot@silica nanoparticles with a blue fluorescent signal, covalently conjugated to the RBP4 antibody, and RBP4 were injected in a lateral flow manner onto the surface-bound aptamer to form a sandwich complex. The surfactant concentration, ionic strength, and additional blocking reagents in the running buffer solution were optimized to maximize the fluorescent signal from the sandwich complex, which was correlated with the RBP4 concentration. A 10 mM Tris (pH 7.4) running buffer containing 150 mM NaCl and 0.05% Tween-20, with 0.6 M ethanolamine as a blocking agent, gave the optimum assay conditions for the carbon nanodot@silica-based LFIA. These results indicate that an aptamer, which is more economical and easier to store for a long time, can be used as an alternative to antibodies as the immobilized probe in an LFIA device, which could serve as a point-of-care diagnostic kit for lung cancer.

A Study on Heterogeneous Catalysts for Transesterification of Nepalese Jatropha Oil (네팔산 Jatropha 오일의 전이에스테르화 반응용 불균일계 촉매 연구)

  • Youngbin Kim;Seunghee Lee;Minseok Sim;Yehee Kim;Rajendra Joshi;Jong-Ki Jeon
    • Clean Technology
    • /
    • v.30 no.1
    • /
    • pp.47-54
    • /
    • 2024
  • Jatropha oil extracted from the seeds of Nepalese Jatropha curcas, a non-edible crop, was used as a raw material and converted to biodiesel through a two-step process consisting of an esterification reaction and a transesterification reaction. An Amberlyst-15 catalyst was applied to the esterification reaction between the free fatty acids contained in the Jatropha oil and methanol. The acid value of the Jatropha oil could be lowered from 11.0 to 0.26 mg KOH/g through esterification. Biodiesel was then synthesized through a transesterification reaction between the Jatropha oil with an acid value of 0.26 mg KOH/g and methanol over NaOH/γ-Al2O3 catalysts. As the loading amount of NaOH increased from 3 to 25 wt%, the specific surface area decreased from 129 to 28 m²/g and the pore volume decreased from 0.249 to 0.129 cm³/g. The amount and strength of base sites on the NaOH/γ-Al2O3 catalysts increased with the NaOH loading amount. It was confirmed that the optimal NaOH loading amount for the NaOH/γ-Al2O3 catalyst was 12 wt%, and the optimal temperature for the transesterification reaction of Jatropha oil using this catalyst was selected to be 65 °C. In the transesterification reaction of Jatropha oil using the NaOH/γ-Al2O3 catalyst, the reaction rate was affected by external diffusion limitation when the stirring speed was below 150 rpm; at higher stirring speeds, the external diffusion limitation was negligible.

A study of compaction ratio and permeability of soil with different water content (축제용흙의 함수비 변화에 의한 다짐율 및 수용계수 변화에 관한 연구)

  • 윤충섭
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.13 no.4
    • /
    • pp.2456-2470
    • /
    • 1971
  • Compaction of soil is very important in the construction of soil structures such as highway fills, reservoir embankments, and sea dikes. With increasing compaction effort, the strength, internal friction, and cohesion of the soil increase greatly, while the reduction of permeability is evident. Factors which may influence compaction effort are moisture content, grain size, grain-size distribution, and other physical properties, as well as the method of compaction. Among these parameters, the moisture content is the most important. To obtain the maximum density for a given soil, the corresponding optimum water content is required. If the water content deviates even slightly from the optimum water content, the compaction ratio decreases and the corresponding mechanical properties change markedly. The results of this study of soil compaction at different water contents are summarized as follows. 1) The maximum dry density increases and the corresponding optimum moisture content decreases with increasing coarse grain content, and the compaction curve is steeper than with increasing fine grain content. 2) The maximum dry density decreases with increasing optimum water content, and the relationship between the two parameters becomes $\gamma_{d\,max}=2.232-0.02785\,w_0$. However, this relationship changes to $\gamma_d=ae^{-bw}$ when the water content deviates from the optimum. 3) For most soils, a dry condition is better than a wet condition for applying compactive effort; the wet condition is preferable only when the liquid limit of the soil exceeds 50 percent. 4) The compaction ratio of cohesive soil is greater than that of cohesionless soil, even when the amounts of coarse grains are the same. 5) The relationship between the maximum dry density and the void ratio is $\gamma_{d\,max}=2.186-0.872e$, but it changes to $\gamma_d=ae^{be}$ when the water content varies from the optimum water content. 6) The porosity increases with increasing optimum water content as $n=15.85+1.075w$, but the relation becomes $n=ae^{bw}$ if there is a variation in water content. 7) The increase in permeability is large when the soil is highly plastic or coarse. 8) The coefficient of permeability of soil compacted in a wet condition is lower than that of soil compacted in a dry condition. 9) Cohesive soil has higher permeability than cohesionless soil, even when the amounts of coarse particles are the same. 10) In general, soil with a high optimum water content has a lower coefficient of permeability than soil with a low optimum water content. 11) The coefficient of permeability has certain relations with density, gradation, and void ratio, and it increases with increasing degree of saturation.
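The empirical regressions reported above can be turned into a small worked example. The input values (w0 = 20%, e = 0.6) are illustrative only, not taken from the paper:

```python
# Worked check of the empirical regressions reported in the study.
# gamma_d_max = 2.232 - 0.02785 * w0   (maximum dry density vs. optimum water content)
# gamma_d_max = 2.186 - 0.872  * e     (maximum dry density vs. void ratio)

def gamma_d_max_from_w0(w0_percent):
    """Maximum dry density (t/m^3, assumed unit) from optimum water content (%)."""
    return 2.232 - 0.02785 * w0_percent

def gamma_d_max_from_e(void_ratio):
    """Maximum dry density (t/m^3, assumed unit) from void ratio."""
    return 2.186 - 0.872 * void_ratio

print(gamma_d_max_from_w0(20.0))   # 2.232 - 0.557  = 1.675
print(gamma_d_max_from_e(0.6))     # 2.186 - 0.5232 = 1.6628
```

Both regressions give densities in the same range for these plausible inputs, which is a quick consistency check on the fitted coefficients.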


A Study on the Forest Yield Regulation by Systems Analysis (시스템분석(分析)에 의(依)한 삼림수확조절(森林收穫調節)에 관(關)한 연구(硏究))

  • Cho, Eung-hyouk
    • Korean Journal of Agricultural Science
    • /
    • v.4 no.2
    • /
    • pp.344-390
    • /
    • 1977
  • The purpose of this paper was to schedule the optimum cutting strategy that maximizes total yield under certain restrictions on periodic timber removals and harvest areas from an industrial forest, based on a linear programming technique. The sensitivity of the regulation model to variations in the restrictions has also been analyzed to obtain information on the changes in total yield over the planning period. The regulation procedure was applied to the experimental forest of the College of Agriculture, Seoul National University. The forest is composed of 219 cutting units and is characterized by younger age groups, which is very common in Korea. The planning period is divided into 10 cutting periods of five years each, and cutting is permissible only on stands of age groups 5-9. It is also assumed in the study that subsequent forests are established immediately after cutting the existing forests, that non-stocked forest lands are planted in the first cutting period, and that established forests remain fully stocked until the next harvest. All feasible cutting regimes have been defined for each unit depending on its age group. The total yield (Vi,k) of each regime expected in the planning period was projected using stand yield tables and forest inventory data, and the regime which gives the highest Vi,k was selected as the optimum cutting regime. After calculating the periodic yields, cutting areas, and total yield from the optimum regimes selected without any restrictions, the upper and lower limits of the periodic yields (Vj-max, Vj-min) and of the periodic cutting areas (Aj-max, Aj-min) were decided. The optimum regimes under these restrictions were then selected by linear programming. The results of the study may be summarized as follows: 1. The fluctuations of periodic harvest yields and areas under the cutting regimes selected without restrictions were very great, because of the irregular composition of age classes and growing stocks of the existing stands. About 68.8 percent of the total yield is expected in period 10, while no yield is expected in periods 6 and 7. 2. After inspection of the above solution, restricted optimum cutting regimes were obtained under the restrictions Amin = 150 ha, Amax = 400 ha, Vmin = 5,000 m³, and Vmax = 50,000 m³, using the LP regulation model. As a result, about 50,000 m³ of stable harvest yield per period and a relatively balanced age-group distribution are expected from period 5 onward. In this case, the loss in total yield was about 29 percent relative to the unrestricted regimes. 3. Thinning schedules could be easily treated by the model presented in the study, and the thinnings made it possible to select optimum regimes that are effective for smoothing the wood flows, not to mention increasing the total yield in the planning period. 4. It was found that the stronger the restrictions in the optimum solution, the earlier the period in which balanced harvest yields and age-group distribution can be achieved. There was also a tendency in this particular case for the periodic yields to be strongly affected by the constraints, and for the fluctuations of harvest areas to depend on the amount of the periodic yields. 5. Because the total yield decreased at an increasing rate as stronger restrictions were imposed, the loss would be very great where strict sustained yield and a normal age-group distribution are required in the earlier periods. 6. Total yield under the same restrictions in a period was increased by lowering the felling age and extending the range of cutting age groups. Therefore, to produce the maximum timber yield, it seems advantageous to adopt a wider range of cutting age groups, with the lower limit set at the age at which the smallest utilizable size of timber can be produced. 7. The LP regulation model presented in the study seems useful in the Korean situation from the following points of view: (1) The model can provide forest managers with the solution of where, when, and how much to cut in order to best fulfill the owner's objectives. (2) Planning is visualized as a continuous process in which new strategies automatically evolve as changes in the forest environment are recognized. (3) The cost (measured as the decrease in total yield) of imposing restrictions can be easily evaluated. (4) Thinning schedules can be treated without difficulty. (5) The model can be applied to irregular forests. (6) Traditional regulation methods can be reinforced by the model.
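The restricted regulation step can be sketched as a toy linear program. The units, yields, and bounds below are hypothetical (the paper's model covers 219 units over 10 periods), and SciPy's `linprog` stands in for whatever LP solver the original study used.

```python
import numpy as np
from scipy.optimize import linprog

# Toy harvest-scheduling LP: choose how much area of each cutting unit to cut
# in each period so as to maximize total yield, subject to lower/upper bounds
# on the periodic yield (the Vmin/Vmax restrictions in the abstract).
n_units, n_periods = 3, 2
area = np.array([100.0, 150.0, 120.0])        # ha available per unit
# yield (m^3/ha) if unit i is cut in period j (older stands yield more later)
yld = np.array([[80.0, 95.0],
                [60.0, 75.0],
                [70.0, 85.0]])

# Decision variables x[i, j] = area of unit i cut in period j, flattened.
c = -yld.ravel()                              # linprog minimizes, so negate

A_ub, b_ub = [], []
# Each unit's total cut area cannot exceed its available area.
for i in range(n_units):
    row = np.zeros(n_units * n_periods)
    row[i * n_periods:(i + 1) * n_periods] = 1.0
    A_ub.append(row)
    b_ub.append(area[i])

# Periodic yield bounds: Vmin <= sum_i yld[i, j] * x[i, j] <= Vmax.
v_min, v_max = 8000.0, 15000.0
for j in range(n_periods):
    row = np.zeros(n_units * n_periods)
    row[j::n_periods] = yld[:, j]
    A_ub.append(row);  b_ub.append(v_max)     # upper bound on period j yield
    A_ub.append(-row); b_ub.append(-v_min)    # lower bound on period j yield

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * (n_units * n_periods), method="highs")
x = res.x.reshape(n_units, n_periods)
print(res.status, -res.fun)                   # status 0 means optimal
```

Tightening `v_min`/`v_max` here reproduces the trade-off noted in the abstract: smoother periodic yields at the cost of a lower total yield.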


Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Beyond the stakeholders of bankrupt companies, including managers, employees, creditors, and investors, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model, rather than developing various corporate default models. As a result, even large corporations, the so-called chaebol enterprises, went bankrupt. Even after that, analyses of past corporate defaults focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the Lehman Brothers case of the global financial crisis, in which everything collapses in a single moment. The key variables driving corporate defaults vary over time. Comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study confirms that the major factors affecting corporate failure have changed, and Grice's (2001) examination of Zmijewski's (1984) and Ohlson's (1980) models likewise found that the importance of predictive variables shifts over time. However, past studies have used static models, and most do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train a deep learning time series model using data from before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007-2008). As a result, we construct a model that shows patterns similar to the training results and excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000-2008), applying the optimal parameters found during validation. Finally, the corporate default prediction models trained over the nine years are evaluated and compared on the test data (2009), and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust corporate default prediction across all three bundles of variables. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are compared. Corporate data pose the challenges of nonlinear variables, multicollinearity among variables, and lack of data. The logit model handles nonlinearity, the Lasso regression model alleviates the multicollinearity problem, and the deep learning time series algorithm, through its variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis, and ultimately toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and is more effective in predictive power. Through the Fourth Industrial Revolution, the current government and governments overseas are working to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study of deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
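The sequence modeling at the core of this approach can be illustrated with a minimal LSTM forward pass over a firm's annual financial ratios. The weights and inputs below are random placeholders, not the paper's trained model or features.

```python
import numpy as np

# Minimal numpy sketch of an LSTM cell run over a sequence of annual
# financial-ratio vectors, ending in a default probability.
rng = np.random.default_rng(42)
n_features, n_hidden, n_years = 4, 8, 7       # e.g., 7 training years of ratios

# One weight matrix per gate (input, forget, cell, output), acting on [x_t, h_{t-1}].
W = {g: rng.normal(0, 0.1, (n_hidden, n_features + n_hidden)) for g in "ifco"}
b = {g: np.zeros(n_hidden) for g in "ifco"}
w_out, b_out = rng.normal(0, 0.1, n_hidden), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def default_probability(x_seq):
    """Run the LSTM over a (n_years, n_features) sequence of ratios."""
    h = np.zeros(n_hidden)
    c = np.zeros(n_hidden)
    for x_t in x_seq:
        z = np.concatenate([x_t, h])
        i = sigmoid(W["i"] @ z + b["i"])      # input gate
        f = sigmoid(W["f"] @ z + b["f"])      # forget gate
        o = sigmoid(W["o"] @ z + b["o"])      # output gate
        g = np.tanh(W["c"] @ z + b["c"])      # candidate cell state
        c = f * c + i * g                     # update cell state
        h = o * np.tanh(c)                    # update hidden state
    return sigmoid(w_out @ h + b_out)         # default probability in (0, 1)

ratios = rng.normal(size=(n_years, n_features))   # hypothetical firm history
p = default_probability(ratios)
print(p)
```

The point of the recurrence is that the cell state carries information across years, which is what lets the model capture the time-dependent shifts in key variables discussed above; a static logit model sees only one year at a time.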

Evaluation of Dose Distributions Recalculated with Per-field Measurement Data under the Condition of Respiratory Motion during IMRT for Liver Cancer (간암 환자의 세기조절방사선치료 시 호흡에 의한 움직임 조건에서 측정된 조사면 별 선량결과를 기반으로 재계산한 체내 선량분포 평가)

  • Song, Ju-Young;Kim, Yong-Hyeob;Jeong, Jae-Uk;Yoon, Mee Sun;Ahn, Sung-Ja;Chung, Woong-Ki;Nam, Taek-Keun
    • Progress in Medical Physics
    • /
    • v.25 no.2
    • /
    • pp.79-88
    • /
    • 2014
  • The dose distributions within the real volumes of tumor targets and critical organs during internal target volume-based intensity-modulated radiation therapy (ITV-IMRT) for liver cancer were recalculated by applying the effects of actual respiratory organ motion, and the dosimetric features were analyzed through comparison with the results of a gated IMRT (Gate-IMRT) plan. The ITV was created using MIM software, and a moving phantom was used to simulate respiratory motion. The doses were recalculated with the 3DVH (3-dimensional dose-volume histogram) program based on the per-field data measured with a MapCHECK2 2-dimensional diode detector array. Although a sufficient prescription dose covered the PTV during ITV-IMRT delivery, the dose homogeneity in the PTV was inferior to that of the Gate-IMRT plan. We confirmed that the organs-at-risk (OARs) received higher doses with ITV-IMRT, as expected when using an enlarged field, but the dose increase to the spinal cord was not significant, and the dose increases to the liver and kidney could be considered minor when reinforced constraints were applied during IMRT plan optimization. Because the Gate-IMRT method also has disadvantages, such as unexpected dosimetric variations when applying the gating system and an increased treatment time, it is better to first analyze the patient's respiratory condition and the importance and achievability of the IMRT plan dose constraints in order to select an optimal IMRT method for correcting the effect of respiratory organ motion.

Design of a pilot-scale helium heating system to support the SI cycle (파이롯 규모 SI 공정 시험 설비에서의 헬륨 가열 장치 설계)

  • Jang, Se-Hyun;Choi, Yong-Suk;Lee, Ki-Young;Shin, Young-Joon;Lee, Tae-Hoon;Kim, Jong-Ho;Yoon, Seok-Hun;Choi, Jae-Hyuk
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.40 no.3
    • /
    • pp.157-164
    • /
    • 2016
  • In this study, researchers performed preliminary design and numerical analysis for a pilot-scale helium heating system intended to support full-scale construction for a sulfur-iodine (SI) cycle. The helium heat exchanger used a liquefied petroleum gas (LPG) combustor. Exhaust gas velocity at the heat exchanger outlet was approximately 40 m/s based on computational thermal and flow analysis. The maximum gas temperature was reached with six baffles in the design; lower gas temperatures were observed with four baffles. The amount of heat transfer was also higher with six baffles. Installation of additional baffles may reduce fuel costs because of the reduced LPG exhausted to the heat exchanger. However, additional baffles may also increase the pressure difference between the exchanger's inlet and outlet. Therefore, it is important to find the optimum number of baffles. Structural analysis, followed by thermal and flow analysis, indicated a 3.86 mm thermal expansion at the middle of the shell-and-tube heat exchanger when both ends were supported. Structural analysis conditions included a helium flow rate of 3.729 mol/s and a helium outlet temperature of 910 °C. An exhaust gas temperature of 1,300 °C and an exhaust gas rate of 52 g/s were confirmed to achieve the helium outlet temperature of 910 °C with an exchanger inlet temperature of 135 °C in the LPG-fueled helium heating system.
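The quoted operating point implies the exchanger's heat duty via a simple sensible-heat balance. The helium heat capacity used below is the textbook monatomic ideal-gas value, not a figure from the paper:

```python
# Back-of-the-envelope heat balance for the helium stream, using the operating
# point quoted in the abstract (3.729 mol/s, 135 -> 910 degC).
R = 8.314                       # J/(mol K), gas constant
cp_he = 2.5 * R                 # ~20.79 J/(mol K), monatomic ideal gas, constant pressure
n_dot = 3.729                   # mol/s, helium flow rate
t_in, t_out = 135.0, 910.0      # degC, exchanger inlet and outlet temperatures

q_watts = n_dot * cp_he * (t_out - t_in)   # sensible heat absorbed by the helium
print(round(q_watts / 1000, 1), "kW")
```

This works out to roughly 60 kW absorbed by the helium, which gives a sense of the pilot scale of the rig relative to the 52 g/s LPG exhaust stream supplying it.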