• Title/Summary/Keyword: optimization conditions


Optimization of the Extraction of Bioactive Compounds from Chaga Mushroom (Inonotus obliquus) by the Response Surface Methodology (반응표면분석법을 이용한 차가버섯(Inonotus obliquus)의 생리활성물질 최적 추출조건 탐색)

  • Kim, Jaecheol;Yi, Haechang;Lee, Kiuk;Hwang, Keum Taek;Yoo, Gichun
    • Korean Journal of Food Science and Technology, v.47 no.2, pp.233-239, 2015
  • This study determined the optimum extraction conditions for chaga mushroom (Inonotus obliquus) based on five response variables (yield, total phenolics, 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) free radical scavenging activity, oxygen radical absorbance capacity (ORAC), and β-1,3-glucan content) using the response surface methodology, where three independent variables (ethanol concentration, extraction temperature, and extraction time) were optimized using a central composite design. The optimum conditions were 50% (w/w) ethanol, 88.7°C, and 14.5 h for yield; 9.2%, 92.7°C, and 14.5 h for total phenolics; 50.8%, 92.7°C, and 14.5 h for ABTS; 9.2%, 92.7°C, and 1.5 h for ORAC; and 90.8%, 92.7°C, and 1.5 h for β-1,3-glucan content. To verify the models, the predicted values of the response variables were compared with those measured in extracts prepared under the optimal conditions. The overall optimum for all five response variables was predicted to be 81.4% ethanol at 92.7°C for 14.5 h.
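
The central-composite workflow described above can be sketched in Python. The response function below (and its optimum at coded point (0.5, -0.3, 0.2)) is a hypothetical stand-in for a measured variable such as yield, used only to show how a CCD plus a second-order fit recovers a stationary point; it is not the paper's data.

```python
import numpy as np
from itertools import product

def ccd_points(k=3, alpha=1.682):
    """Rotatable central composite design in coded units:
    2^k factorial points, 2k axial points, and one center point."""
    factorial = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i], axial[2 * i + 1, i] = -alpha, alpha
    return np.vstack([factorial, axial, np.zeros((1, k))])

def quadratic_terms(X):
    """Design matrix for a full second-order model:
    intercept, linear, squared, and two-way interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

X = ccd_points()
# Hypothetical response with a known optimum at coded (0.5, -0.3, 0.2)
y = 10 - ((X - np.array([0.5, -0.3, 0.2])) ** 2).sum(axis=1)

beta, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)
b_lin, b_sq, b_int = beta[1:4], beta[4:7], beta[7:10]

# Stationary point: set the gradient to zero, i.e. solve
# (2*diag(b_sq) + B) x = -b_lin, where B holds the interaction coefficients.
H = np.diag(2 * b_sq)
H[0, 1] = H[1, 0] = b_int[0]
H[0, 2] = H[2, 0] = b_int[1]
H[1, 2] = H[2, 1] = b_int[2]
x_opt = np.linalg.solve(H, -b_lin)
print(np.round(x_opt, 3))  # -> approximately [0.5, -0.3, 0.2]
```

For 3 factors this design has 15 runs and the quadratic model has 10 coefficients, so the least-squares fit is well determined; decoding the coded optimum back to ethanol %, temperature, and time would use the actual factor ranges.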

Assessment of the Angstrom-Prescott Coefficients for Estimation of Solar Radiation in Korea (국내 일사량 추정을 위한 Angstrom-Prescott계수의 평가)

  • Hyun, Shinwoo;Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology, v.18 no.4, pp.221-232, 2016
  • Models to estimate solar radiation are used because solar radiation is measured at far fewer weather stations than other variables such as temperature and rainfall. For example, solar radiation has been estimated using the Angstrom-Prescott (AP) model, which depends on two coefficients obtained empirically at a specific site (AP_Choi) or for a climate zone (AP_Frere). The objective of this study was to identify AP coefficients that give reliable estimates of solar radiation over a wide range of spatial and temporal conditions. A global optimization was performed over a range of AP coefficients to identify the values (AP_max) giving the greatest degree of agreement at each of 20 sites for a given month during 30 years. The degree of agreement was assessed using the Concordance Correlation Coefficient (CCC). When AP_Frere was used to estimate solar radiation, CCC values were relatively high under the conditions in which crop growth simulation would be performed, e.g., at rural sites during summer. The statistics for AP_Frere were greater than those for AP_Choi, although they were smaller than those for AP_max. The variation of CCC values over a wide range of AP coefficients was small when the statistics were summarized by site, and AP_Frere fell within each range of AP coefficients that gave reasonably accurate solar radiation estimates by site, year, and month. These results suggest that AP_Frere would be useful for providing solar radiation estimates as an input to crop models in Korea. Further studies would be merited to examine the feasibility of using AP_Frere to obtain gridded estimates of solar radiation at high spatial resolution over the complex terrain of Korea.
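
The two pieces of machinery in this abstract, the AP model and the CCC agreement statistic, are compact enough to state directly. This is a minimal sketch: the default coefficients a=0.25, b=0.50 are the widely used fallback values (not the paper's calibrated AP_Choi or AP_Frere values, which are not given here), and the CCC formula is Lin's concordance correlation coefficient.

```python
import numpy as np

def angstrom_prescott(Ra, sunshine_fraction, a=0.25, b=0.50):
    """Angstrom-Prescott estimate of daily solar radiation:
    Rs = (a + b * n/N) * Ra, where n/N is the relative sunshine
    duration and Ra the extraterrestrial radiation. The defaults
    a=0.25, b=0.50 are common fallback coefficients; calibrated
    values such as AP_Choi or AP_Frere would replace them."""
    return (a + b * np.asarray(sunshine_fraction)) * np.asarray(Ra)

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient (CCC), the
    agreement statistic used to score each coefficient pair."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)
```

Unlike Pearson's r, CCC equals 1 only for perfect agreement (y = x) and is reduced by both additive and scale bias, which is why it suits model-vs-observation comparisons like this one.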

Optimization of DNA sequencing with plasmid DNA templates using the DNA sequencer (Plasmid DNA template를 이용한 DNA 염기서열 분석기기의 최적 조건 확립)

  • Lee, Jae-Bong;Kim, Jae-Hwan;Seo, Bo-Young;Lee, Kyeong-Tae;Park, Eung-Woo;Yoo, Chae-Kyoung;Lim, Hyun-Tae;Jeon, Jin-Tae
    • Journal of agriculture & life science, v.43 no.2, pp.31-38, 2009
  • The DNA sequencer is known to be sensitive to the quality of the template DNA, the purification method used before the sequencing reaction, and the gel concentration. We therefore investigated optimal conditions for template preparation, purification, the sequencing reaction, gel concentration, and the injection medium. For plasmid preparation, using chloroform instead of phenol improved the average read length from 532 bp to 684 bp. Adding 2.5% DMSO to the sequencing PCR reaction yielded sequences about 200 bp longer. Purification with 50 mM EDTA and 0.6 M sodium acetate (pH 8.0) gave sequences 20 bp longer than purification with 50 mM EDTA (pH 8.0) and 0.6 M sodium acetate (pH 5.2). Injection using ABI formamide gave sequences 90 bp longer than injection using formamide deionized with resin. Moreover, a 3.6% PAGE gel gave 150 bp more readable sequence than a 4% gel. In conclusion, an average of 700 bp per reaction with 85% accuracy can be obtained under the following optimal conditions: template preparation using chloroform, 2.5% DMSO, 50 mM EDTA and 0.6 M sodium acetate (pH 8.0), ABI formamide, and a 3.6% gel concentration.

Optimization of the Acetic Acid Fermentation Condition of Apple Juice (사과식초 제조를 위한 사과주스의 초산발효 최적화)

  • Kang, Bok-Hee;Shin, Eun-Jeong;Lee, Sang-Han;Lee, Dong-Sun;Hur, Sang-Sun;Shin, Kee-Sun;Kim, Seong-Ho;Son, Seok-Min;Lee, Jin-Man
    • Food Science and Preservation, v.18 no.6, pp.980-985, 2011
  • This study determined the acetic-acid fermentation properties of apple juice (initial alcohol content, apple juice concentration, initial acidity, and inoculum size) at flask scale. In fermentations with 3, 5, 7, and 9% initial alcohol content, the maximum acidity after 10 days was 5.88%, obtained at 5% initial alcohol; fermentation did not proceed normally at 9%. When the initial concentration was 1 °Brix, the acidity increased gradually, reaching 4.48% after 12 days of acetic-acid fermentation. An acidity above 4% was reached faster at apple juice concentrations of 5 and 10 °Brix than at 1 and 14 °Brix. Among the tested initial acidities (0.3, 0.5, 1.0, and 2.0%), fermentation proceeded normally when the initial acidity was 1.0% or above. Fermentation also proceeded normally at inoculum sizes of 10 and 15%, with acidities of 5.60 and 6.05%, respectively, after eight days. The optimal acetic-acid fermentation conditions for apple cider vinegar were therefore 5% initial alcohol content, an apple juice concentration of 5 °Brix or above, an initial acidity of 1.0% or above, and an inoculum size of 10% or above. As a pre-step toward industrial-scale adaptation, apple cider vinegar with above 5% acidity could be produced within 48 h in a mini-jar fermentor from 14 °Brix apple juice under the following conditions: 7% initial alcohol content, about 1% initial acidity, and 10% inoculum volume at 30°C, 30 rpm, and 1.0 vvm.

A Study on Optimization of Nitric Acid Leaching and Roasting Process for Selective Lithium Leaching of Spent Battery Cell Powder (폐 배터리 셀 분말의 선택적 리튬 침출을 위한 질산염화 공정 최적화 연구)

  • Jung, Yeon Jae;Park, Sung Cheol;Kim, Yong Hwan;Yoo, Bong Young;Lee, Man Seung;Son, Seong Ho
    • Resources Recycling, v.30 no.6, pp.43-52, 2021
  • In this study, the optimal nitration process for selective lithium leaching from spent battery cell powder (LiNixCoyMnzO2, LiCoO2) was studied using the Taguchi method. The nitration process achieves selective lithium leaching by converting the non-lithium compounds into oxides via nitric acid leaching and roasting. The influence of pretreatment temperature, nitric acid concentration, amount of nitric acid, and roasting temperature was evaluated. The signal-to-noise ratios and analysis of variance of the results were determined using an L16(4^4) orthogonal array. The findings indicated that the roasting temperature, followed by the nitric acid concentration, pretreatment temperature, and amount of nitric acid, had the greatest impact on the lithium leaching ratio. Following detailed experiments, the optimal conditions were found to be 10 h of pretreatment at 700°C, leaching with 2 mL/g of 10 M nitric acid, and 10 h of roasting at 275°C; under these conditions, the overall recovery of lithium exceeded 80%. To determine the cause of the rapid decrease in the lithium leaching rate at roasting temperatures above 400°C, X-ray diffraction (XRD) analysis was performed on the residue left after leaching the roasted lithium nitrate and other nitrate compounds in deionized water. The results confirmed that lithium manganese oxide, which does not leach in deionized water, was formed from lithium nitrate and manganese nitrate at these temperatures. XRD analysis also confirmed the recovery of pure LiNO3 from the solution leached during the nitration process, obtained by solid-liquid separation followed by evaporation and concentration of the leachate.
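
The Taguchi ranking step in this abstract rests on a larger-the-better signal-to-noise ratio computed per run of the orthogonal array, then averaged per factor level. A minimal sketch, with invented leaching-ratio numbers (the paper's run data are not reproduced here):

```python
import math

def sn_larger_is_better(ys):
    """Taguchi larger-the-better S/N ratio in dB:
    S/N = -10 * log10( mean(1 / y^2) ). Higher is better."""
    return -10 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def main_effect(sn_by_run, levels):
    """Average S/N per factor level; the spread between level means
    is what ranks a factor's influence in the Taguchi analysis."""
    out = {}
    for lv, sn in zip(levels, sn_by_run):
        out.setdefault(lv, []).append(sn)
    return {lv: sum(v) / len(v) for lv, v in out.items()}

# Hypothetical lithium leaching ratios (%) for four runs, grouped by
# two roasting-temperature levels of one factor column
sn_runs = [sn_larger_is_better(ys)
           for ys in ([78, 80], [82, 85], [40, 45], [35, 38])]
effects = main_effect(sn_runs, levels=[275, 275, 400, 400])
```

In a real L16(4^4) analysis the same averaging runs over all 16 rows for each of the four factor columns; the factor with the widest gap between its level means (here, roasting temperature in the paper's results) dominates the response.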

Optimization of Characteristic Change due to Differences in the Electrode Mixing Method (전극 혼합 방식의 차이로 인한 특성 변화 최적화)

  • Jeong-Tae Kim;Carlos Tafara Mpupuni;Beom-Hui Lee;Sun-Yul Ryou
    • Journal of the Korean Electrochemical Society, v.26 no.1, pp.1-10, 2023
  • The cathode, one of the four major components of a lithium secondary battery, is the component chiefly responsible for the battery's energy density. Mixing the active material, conductive material, and polymer binder is an essential step in the commonly used wet cathode manufacturing process. However, because there is no systematic method for setting the mixing conditions, cathode performance in most cases differs from manufacturer to manufacturer. Therefore, to optimize the mixing method in the cathode slurry preparation step, LiMn2O4 (LMO) cathodes were prepared using a commonly used THINKY mixer and a homogenizer, and their characteristics were compared. Each mixing step was performed at 2,000 RPM for 7 min, and to isolate the effect of the mixing method, the other experimental conditions (mixing time, material input order, etc.) were kept constant. Of the two cathodes, the homogenizer LMO (HLMO) showed more uniform particle dispersion than the THINKY-mixer LMO (TLMO) and thus higher adhesive strength. Electrochemical evaluation also revealed that the HLMO cathode performed better, with a more stable cycle life, than TLMO. The discharge capacity retention of HLMO at 69 cycles was 88%, about 4.4 times that of TLMO; for rate capability, HLMO retained more capacity even at high C-rates of 10, 15, and 20 C, and its capacity recovery at 1 C was higher than that of TLMO. It is postulated that the homogenizer improves the characteristics of the slurry containing the active material, conductive material, and polymer binder: uniformly dispersing the conductive material suppresses its strong electrostatic attraction and aggregation, forming an electrically conductive network. As a result, surface contact between the active material and the conductive material increases, electrons move more smoothly, lattice-volume changes during charging and discharging become more reversible, and the contact resistance between the active material and the conductive material is suppressed.

A Study on Formulation Optimization for Improving Skin Absorption of Glabridin-Containing Nanoemulsion Using Response Surface Methodology (반응표면분석법을 활용한 Glabridin 함유 나노에멀젼의 피부흡수 향상을 위한 제형 최적화 연구)

  • Se-Yeon Kim;Won Hyung Kim;Kyung-Sup Yoon
    • Journal of the Society of Cosmetic Scientists of Korea, v.49 no.3, pp.231-245, 2023
  • In the cosmetics industry, it is important to develop not only new materials for functional cosmetics (whitening, wrinkle care, anti-oxidation, and anti-aging) but also technologies that increase their absorption when applied to the skin. In this study, we therefore optimized a nanoemulsion formulation using response surface methodology (RSM), an experimental design method. A nanoemulsion containing glabridin as the active ingredient was prepared by high-pressure emulsification, and the skin absorption of the optimized nanoemulsion was evaluated. Nanoemulsions were prepared by varying the surfactant content, cholesterol content, oil content, polyol content, high-pressure homogenization pressure, and number of homogenization passes as RSM factors. Of these, the four factors with the greatest influence on particle size (surfactant content, oil content, homogenization pressure, and number of passes) were used as independent variables, with the particle size and skin absorption rate of the nanoemulsion as response variables. A total of 29 experiments were conducted in random order, including 5 replicates of the center point, and the particle size and skin absorption of each nanoemulsion were measured. Based on the results, the formulation was optimized for minimum particle size and maximum skin absorption, giving optimal conditions of 5.0 wt% surfactant, 2.0 wt% oil, a homogenization pressure of 1,000 bar, and 4 homogenization passes. The nanoemulsion prepared under these conditions had a particle size of 111.6 ± 0.2 nm, a PDI of 0.247 ± 0.014, and a zeta potential of -56.7 ± 1.2 mV. Its skin absorption was compared with that of a conventional emulsion as a control: after 24 h, the cumulative absorption of the nanoemulsion was 79.53 ± 0.23%, about 13 percentage points higher than the 66.54 ± 1.45% of the control emulsion.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services, v.14 no.6, pp.71-84, 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business, so a separate log processing system is needed to gather, store, categorize, and analyze them. However, in existing computing environments it is difficult to realize the flexible storage expansion needed for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as storage extension or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive log data.
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by replicating the blocks of the aggregated log data, the proposed system offers automatic restore functions that let it continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have strict schemas that are inappropriate for processing unstructured log data; moreover, such strict schemas cannot expand across nodes when rapidly growing data must be distributed to multiple nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented. Of these, the representative document-oriented database, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB was chosen because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analyses from the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data requiring real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log-insert and query performance against a system that uses only MySQL demonstrates the proposed system's superiority, and an optimal chunk size is identified through MongoDB insert-performance tests over various chunk sizes.
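
The collector's routing rule (real-time log types to the relational store, everything else to the document store for batch analysis) can be sketched as below. This is a stand-alone illustration, not the paper's code: the log-type names and record fields are hypothetical, and the stores are plain Python lists rather than actual MySQL/MongoDB clients.

```python
# Hypothetical log-type names; the paper does not enumerate them.
REALTIME_TYPES = {"auth_failure", "transaction_error"}

def route_logs(records):
    """Split incoming log records the way the collector module does:
    types needing real-time analysis go to the relational store,
    everything else to the document store for batch (Hadoop) analysis."""
    stores = {"mysql": [], "mongodb": []}
    for rec in records:
        target = "mysql" if rec.get("type") in REALTIME_TYPES else "mongodb"
        stores[target].append(rec)
    return stores

logs = [
    {"type": "auth_failure", "user": "u1"},
    {"type": "page_view", "path": "/accounts"},  # schema-free extra fields
    {"type": "transaction_error", "amount": 1200, "branch": "B7"},
]
stores = route_logs(logs)
```

Note that the three records carry different fields; this is exactly the schema-free property that makes a document store a better fit than a fixed relational schema for the batch side.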

Applications of Fuzzy Theory on The Location Decision of Logistics Facilities (퍼지이론을 이용한 물류단지 입지 및 규모결정에 관한 연구)

  • 이승재;정창무;이헌주
    • Journal of Korean Society of Transportation, v.18 no.1, pp.75-85, 2000
  • In existing optimization models, crisp data are used in the objective function or constraints to derive the optimal solution, and subjective factors are eliminated because complex and uncertain circumstances are treated as probabilistic ambiguity. In other words, the optimal solutions of existing models are regarded as completely satisfactory solutions to the objective function when industrial-engineering methods are applied to minimize the risks of decision-making. As a result, decision-makers in location problems cannot respond appropriately to variations in demand and other variables, and are not offered a wide range of alternatives because of insufficient information. Under these circumstances, this study develops a model for the location and size decision problems of logistics facilities using fuzzy theory, with the aim of making the most reasonable decision from a subjective point of view under ambiguous circumstances, building on existing decision-making problems that must satisfy constraints to optimize an objective function under strictly given conditions. After establishing a general mixed integer programming (MIP) model, based on existing studies, that decides location and size simultaneously, a fuzzy mixed integer programming (FMIP) model was developed using fuzzy theory. The general linear programming software LINDO 6.01 was used to simulate and evaluate the developed model with examples and to judge the appropriateness and adaptability of the FMIP model in the real world.
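
The fuzzy decision rule underlying an FMIP formulation is the Bellman-Zadeh principle: assign each objective and constraint a membership (satisfaction) function and choose the alternative that maximizes the minimum membership. A toy sketch with hypothetical candidates and numbers (the paper's actual model is a mixed integer program solved in LINDO, not this enumeration):

```python
def linear_membership(value, worst, best):
    """Degree of satisfaction in [0, 1]: 0 at 'worst', 1 at 'best',
    linear in between (works whether best > worst or best < worst)."""
    t = (value - worst) / (best - worst)
    return max(0.0, min(1.0, t))

# Hypothetical candidates: facility size -> (total cost, demand coverage %)
candidates = {"small": (40, 55), "medium": (65, 80), "large": (95, 95)}

def satisfaction(cost, coverage):
    mu_cost = linear_membership(cost, worst=100, best=30)       # cheaper is better
    mu_cover = linear_membership(coverage, worst=50, best=100)  # more coverage is better
    return min(mu_cost, mu_cover)  # Bellman-Zadeh min-aggregation

best = max(candidates, key=lambda k: satisfaction(*candidates[k]))
print(best)  # the max-min compromise, here "medium"
```

The min-aggregation is what lets a "soft" constraint trade off against the objective instead of acting as a hard cutoff; in the FMIP the same maximized minimum membership appears as an auxiliary variable added to the MIP.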


Bottom electrode optimization for the applications of ferroelectric memory device (강유전체 기억소자 응용을 위한 하부전극 최적화 연구)

  • Jung, S.M.;Choi, Y.S.;Lim, D.G.;Park, Y.;Song, J.T.;Yi, J.
    • Journal of the Korean Crystal Growth and Crystal Technology, v.8 no.4, pp.599-604, 1998
  • We investigated Pt and RuO2 as bottom electrodes for ferroelectric capacitor applications. The bottom electrodes were prepared by RF magnetron sputtering. The investigated parameters included substrate temperature, gas flow rate, RF power during film growth, and post-annealing effects. The substrate temperature strongly influenced the surface morphology and resistivity of the bottom electrodes as well as the crystallographic structure of the films. XRD results on Pt films showed mixed (111) and (200) peaks for substrate temperatures from RT to 200°C and a preferred (111) orientation at 300°C. From the XRD and AFM results, we recommend a substrate temperature of 300°C and an RF power of 80 W for Pt bottom electrode growth. Varying the oxygen partial pressure from 0 to 50%, we found that only Ru metal was grown at 0~5% O2, a mixed phase of Ru and RuO2 at O2 partial pressures of 10~40%, and a pure RuO2 phase at an O2 partial pressure of 50%. This indicates that a RuO2/Ru double layer can be grown in a single process by modulating the gas flow rate; such a double-layer structure is expected to reduce the fatigue problem while keeping a low electrical resistivity. As the post-annealing temperature was increased from RT to 700°C, the resistivity of both Pt and RuO2 decreased linearly. This paper presents the optimized process conditions of bottom electrodes for memory device applications.
