• Title/Summary/Keyword: Optimal performance design

Search Results: 2,763

True Digestibility of Phosphorus in Different Resources of Feed Ingredients in Growing Pigs

  • Wu, X.;Ruan, Z.;Zhang, Y.G.;Hou, Y.Q.;Yin, Y.L.;Li, T.J.;Huang, R.L.;Chu, W.Y.;Kong, X.F.;Gao, B.;Chen, L.X.
    • Asian-Australasian Journal of Animal Sciences, v.21 no.1, pp.107-119, 2008
  • To determine the true digestible phosphorus (TDP) requirement of growing pigs, two experiments were designed with experimental diets containing five true digestible P levels (0.16%, 0.20%, 0.23%, 0.26% and 0.39%) and the ratio of total calcium to TDP kept at 2:1. In Experiment 1, five barrows (Duroc × Landrace × Yorkshire) with an average initial body weight of 27.9 kg were used in a 5×5 Latin-square design to evaluate the effect of different dietary P levels on the digestibility and output of P and nitrogen. In Experiment 2, sixty healthy growing pigs (Duroc × Landrace × Yorkshire) with an average body weight (BW) of 21.4 kg were assigned randomly to one of the five dietary treatments (12 pigs/diet) and were used to determine the TDP requirement of growing pigs on the basis of growth performance and serum biochemical indices. The results indicated that the true digestibility of P increased (p<0.05) linearly with increasing dietary TDP level below 0.26%. The true P digestibility was highest (56.6%) when dietary TDP was 0.34%. Expressed as g/kg dry matter intake (DMI), fecal P output increased (p<0.05) linearly with increasing P input. On the basis of g/kg fecal dry matter (DM), fecal P output was lowest for Diet 4 and highest (p<0.05) for Diet 5. The apparent digestibility of crude protein (CP) did not differ (p>0.05) among the five diets, with an average nitrogen output of 12.14 g/d and nitrogen retention of 66% to 74% (p>0.05), which suggested that there was no interaction between dietary P and CP levels. During the 28-d experimental period of Experiment 2, the average daily gain (ADG) of pigs was affected by dietary TDP level as described by Eq. (1): $y=-809,532x^4+788,079x^3-276,250x^2+42,114x-1,759$ ($R^2=0.99$; p<0.01; y = ADG, g/d; x = dietary TDP, %); F/G by Eq. (2): $y=3,651.1x^4-3,480.4x^3+1,183.8x^2-172.5x+10.9$ ($R^2=0.99$; p<0.01; y = F/G; x = dietary TDP, %); and total P concentration in serum by Eq. (3): $y=-3,311.7x^4+3,342.7x^3-1,224.6x^2+195.6x-8.7$ ($R^2=0.99$; p<0.01; y = total serum P concentration; x = dietary TDP, %). The highest ADG (782 g/d), the lowest F/G (1.07) and the highest total serum P concentration (3.1 mmol/L) were obtained when the dietary TDP level was 0.34%. Collectively, these results indicate that the optimal TDP requirement of growing pigs is 0.34% of the diet at a total Ca to TDP ratio of 2:1.
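The fitted quartic response curves above can be evaluated directly to recover the reported optimum. A minimal sketch, with coefficients copied verbatim from Eqs. (1) and (2) and the scan range taken from the tested TDP levels:

```python
# Response curves from Eqs. (1)-(2), x = dietary TDP (%).
import numpy as np

def adg(x):
    """Eq. (1): average daily gain (g/d)."""
    return -809_532*x**4 + 788_079*x**3 - 276_250*x**2 + 42_114*x - 1_759

def feed_to_gain(x):
    """Eq. (2): feed-to-gain ratio (F/G)."""
    return 3_651.1*x**4 - 3_480.4*x**3 + 1_183.8*x**2 - 172.5*x + 10.9

# Scan the tested TDP range (0.16% to 0.39%) for the ADG-maximizing level.
xs = np.linspace(0.16, 0.39, 2301)
x_best = xs[np.argmax(adg(xs))]
```

Evaluating `adg(0.34)` reproduces the reported peak of roughly 782 g/d, and the grid scan lands close to the 0.34% optimum stated in the abstract.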

An Optimization Study on a Low-temperature De-NOx Catalyst Coated on Metallic Monolith for Steel Plant Applications (제철소 적용을 위한 저온형 금속지지체 탈질 코팅촉매 최적화 연구)

  • Lee, Chul-Ho;Choi, Jae Hyung;Kim, Myeong Soo;Seo, Byeong Han;Kang, Cheul Hui;Lim, Dong-Ha
    • Clean Technology, v.27 no.4, pp.332-340, 2021
  • With the recent reinforcement of emission standards, efforts are needed to reduce NOx from air-pollutant-emitting workplaces. The NOx reduction method mainly used in industrial facilities is selective catalytic reduction (SCR), and the most common commercial SCR catalyst is the ceramic honeycomb type. This study was carried out to reduce the NOx emitted from steel plants by applying a De-NOx catalyst coated on a metallic monolith. The De-NOx catalyst was synthesized through an optimized coating technique, and air-jet erosion and bending tests confirmed that the coated catalyst adhered uniformly and strongly to the surface of the metallic monolith. Owing to the good thermal conductivity of the metallic monolith, the coated De-NOx catalyst showed good De-NOx efficiency at low temperatures (200~250 ℃). In addition, the optimal amount of catalyst coating on the metallic monolith surface was confirmed for the design of an economical catalyst. Based on these results, a commercial-grade-size De-NOx catalyst was tested in a semi-pilot De-NOx performance facility under a simulated gas similar to the exhaust gas emitted from a steel plant. Even at a low temperature (200 ℃), it showed excellent performance, satisfying the emission standard (less than 60 ppm). Therefore, the metallic-monolith-coated De-NOx catalyst has good physical and chemical properties and showed good De-NOx efficiency even with the minimum amount of catalyst. Additionally, applying a high-density cell made it possible to make the SCR reactor more compact. We therefore suggest that the proposed catalyst may be a good alternative De-NOx catalyst for industrial uses such as steel plants, thermal power plants, incineration plants, ships, and construction machinery.
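As a quick check on the 60 ppm emission standard cited above, the conversion an SCR catalyst must deliver follows from a one-line mole balance. A minimal sketch; the 200 ppm inlet value is an illustrative assumption, not a figure from the study:

```python
def required_denox_efficiency(inlet_ppm: float, limit_ppm: float = 60.0) -> float:
    """Minimum fractional NOx conversion so the outlet meets limit_ppm."""
    return max(0.0, 1.0 - limit_ppm / inlet_ppm)

# An assumed 200 ppm inlet stream would need at least 70% conversion.
eff = required_denox_efficiency(200.0)
```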

Effect of Combined Supplementation Catechin and Vitamin C on Growth Performance, Meat Quality, Blood Composition and Stress Responses of Broilers under High Temperature (고온 환경에서 카테킨 및 비타민 C 첨가가 육계의 생산성, 계육품질, 혈액성분 및 스트레스 지표에 미치는 영향)

  • Jiseon Son;Woo-Do Lee;Hee-jin Kim;Hyunsoo Kim;Eui-Chul Hong;Iksoo Jeon;Hwan-Ku Kang
    • Korean Journal of Poultry Science, v.50 no.1, pp.1-13, 2023
  • This study was carried out to investigate the effects of dietary supplementation with the antioxidants catechin and vitamin C, alone and in combination, on the growth performance, meat quality, blood profiles and stress responses of broilers exposed to high temperature. A total of 360 21-day-old male Ross 308 broilers were used. Treatments were assigned with 6 replicates per treatment and 10 birds per replicate in a 2 × 3 factorial design with vitamin C (0, 250 mg/kg) and catechin (0, 600, 1,200 mg/kg). The heat-stress environment was maintained at a temperature of 32±1℃ and relative humidity of 60±5% for 24 hours a day until the end of the experiment. The supplemented antioxidants produced no significant differences in weight gain, feed intake or feed conversion ratio (P>0.05). Total cholesterol in blood showed no interaction but decreased (P<0.01) in the catechin-supplemented groups. Catechin supplementation also increased blood SOD activity and lowered corticosterone and IgM levels. The contents of HSP70 and MDA in the liver decreased (P<0.05) with antioxidant supplementation, and HSP70 showed an interaction between the two supplements. DPPH radical scavenging ability in breast meat increased (P<0.01) with catechin, but meat quality did not differ among treatments. Respiratory rate decreased (P<0.05) with catechin, with no interaction with vitamin C. In conclusion, the combination of vitamin C and catechin can alleviate stress under high temperature, as indicated by markers such as HSP70 and MDA, but further study on the optimal supplementation level is needed.

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.18 no.3, pp.185-202, 2012
  • Since the value of information has been recognized in the information society, the usage and collection of information have become important. Just as an artistic painting can be described in a thousand words, a facial expression carries a wealth of information. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, the leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy. This is inevitable, since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) in order to increase prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ε) to the model prediction.
Using SVR, we tried to build a model that can measure the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimulation and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and a grid-search technique to find the optimal values of parameters such as C, d, σ², and ε. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We repeated the experiments while varying the number of nodes in the hidden layer among n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiments, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out data set. ANN also outperformed MRA, but it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
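The SVR-with-grid-search setup described above can be sketched in a few lines. This is a hedged illustration, not the authors' code: synthetic stand-in data replaces the 297 facial-feature cases, scikit-learn replaces their tooling, and the grid values for C, gamma (the RBF analogue of 1/σ²) and ε are assumptions:

```python
# Sketch of an epsilon-insensitive SVR tuned by grid search, scored by MAE.
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(297, 4))                            # stand-in facial features
y = X @ rng.normal(size=4) + 0.1 * rng.normal(size=297)  # stand-in arousal score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Grid search over C, gamma, and epsilon, mirroring the tuned parameters.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100],
                "gamma": [0.01, 0.1, 1.0],
                "epsilon": [0.01, 0.1, 0.5]},
    scoring="neg_mean_absolute_error",
    cv=5,
)
grid.fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, grid.predict(X_te))
```

The hold-out MAE computed on the last line plays the same role as the paper's performance measure when comparing against MRA and ANN baselines.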

A Study on Increasing the Efficiency of Biogas Production using Mixed Sludge in an Improved Single-Phase Anaerobic Digestion Process (개량형 단상 혐기성 소화공정에서의 혼합슬러지를 이용한 바이오가스 생산효율 증대방안 연구)

  • Jung, Jong-Cheal;Chung, Jin-Do;Kim, San
    • Journal of the Korea Academia-Industrial cooperation Society, v.17 no.6, pp.588-597, 2016
  • In this study, we attempted to improve biogas production efficiency by varying the mixing ratio of a mixed sludge of organic wastes in an improved single-phase anaerobic digestion process. The types of organic waste used were raw sewage sludge, food wastewater leachate and livestock excretions. The biomethane potential was determined through the BMP test. The results showed that the biomethane potential of the livestock excretions was the highest, at 1.55 m³CH₄/kg VS, and that the highest value for a composite sample (containing primary sludge, food waste leachate and livestock excretions at proportions of 50%, 30% and 20%, respectively) was 0.43 m³CH₄/kg VS. On the other hand, the optimal mixing ratio of the composite sludge in the demonstration plant was 68.5 (raw sludge) : 18.0 (food waste leachate) : 13.5 (livestock excretions), a somewhat different result from that obtained in the BMP test. This difference was attributed to changes in the composite sludge properties and in digester operating conditions such as the retention time. The amount of biogas produced in the single-phase anaerobic digestion process was 2,514 m³/d with a methane content of 62.8%. Against its design capacity of 2,319 m³/d of biogas, the process was considered to have demonstrated its maximum capacity. This study also showed that, for anaerobic digestion, the two-phase digestion process is better in terms of stable tank operation and high efficiency, whereas the existing single-phase digestion process leaves room to improve digestion efficiency and performance.
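A first-pass estimate for a sludge mix like the one above is a simple volume-weighted average of component biomethane potentials. In this sketch only the livestock value (1.55 m³CH₄/kg VS) comes from the abstract; the other two component values are illustrative placeholders:

```python
def mixture_bmp(fractions, component_bmps):
    """Weighted-average biomethane potential (m3 CH4 / kg VS) of a sludge mix."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "mixing fractions must sum to 1"
    return sum(f * b for f, b in zip(fractions, component_bmps))

# 50:30:20 primary sludge : food-waste leachate : livestock excretions.
# The first two BMP values (0.25, 0.45) are assumed for illustration.
estimate = mixture_bmp([0.50, 0.30, 0.20], [0.25, 0.45, 1.55])
```

A linear estimate like this ignores component interactions and digester operating conditions, which is one reason the plant-scale optimum in the study differed from the BMP-test prediction.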

Low Grade Astrocytoma-Need Postoperative Radiotherapy or Not? (저분화 성상세포종-수술후 방사선치료가 필수적인가 ?)

  • Hong Seong Eon;Choi Doo Ho;Kim Tee Sung;Leem Won
    • Radiation Oncology Journal, v.10 no.2, pp.171-180, 1992
  • The precise role of radiotherapy for low-grade gliomas, including the optimal radiation dose and timing of treatment, remains unclear. The information given by a retrospective analysis may be useful in the design of prospective randomized studies looking at radiation dose and the timing of surgical and radiotherapeutic treatment. The records of 56 patients (M:F = 29:27) with histologically verified cerebral low-grade gliomas (47 cases of grade 1 or 2 astrocytoma and 9 oligodendrogliomas) diagnosed between 1979 and 1989 were retrospectively reviewed. The extent of surgical tumor removal was gross total or radical subtotal in 38 patients (68%) and partial or biopsy only in the remaining 18 patients (32%). Postoperative radiation therapy was given to 36 of the 56 patients (64%) with a minimum dose of 5,000 cGy (range: 1,250 to 7,220 cGy). The 5- and 10-year survival rates for the 56 patients were 44% and 32% respectively, with a median survival of 4.1 years. By histologic grade, the 5- and 10-year survival rates were 52% and 35% for the 24 patients with grade I astrocytomas, compared to 20% and 10% for the 23 patients with grade II astrocytomas. Survival of oligodendroglioma patients was greater than that of astrocytoma patients (65% vs 36% at 5 years), and the difference remained remarkable over long-term follow-up (54% vs 23% at 10 years). Those who received high-dose radiation therapy (≥5,400 cGy) had significantly better survival than those who received low-dose radiation (<5,400 cGy) or surgery alone (p<0.05). The 5- and 10-year survival rates were, respectively, 59% and 46% for the 23 patients receiving high-dose radiation, 36% and 24% for the 13 patients receiving low-dose radiation, and 35% and 26% for the 20 patients treated with surgery alone. Survival rates by extent of surgical resection were similar at 5 years (46% vs 41%), but long-term survival differed markedly (p<0.01) between total/subtotal resection and partial resection/biopsy (41% and 12%, respectively). Previously published studies have identified important prognostic factors in these tumors: age, extent of surgery, grade, performance status, and duration of symptoms. In our cases, however, statistical analysis revealed that grade I histology (p<0.025) and young age (p<0.001) were the most significant favorable prognostic variables.


IPv6 Migration, OSPFv3 Routing based on IPv6, and IPv4/IPv6 Dual-Stack Networks and IPv6 Network: Modeling, and Simulation (IPv6 이관, IPv6 기반의 OSPFv3 라우팅, IPv4/IPv6 듀얼 스택 네트워크와 IPv6 네트워크: 모델링, 시뮬레이션)

  • Kim, Jeong-Su
    • The KIPS Transactions:PartC, v.18C no.5, pp.343-360, 2011
  • The objective of this paper is to model, simulate, and characterize routing observations on end-to-end routing circuits and ping experiments in a virtual network, covering IPv6 migration, an OSPFv3 routing experiment in an IPv6 environment, and ping experiments on IPv4/IPv6 dual-stack networks and an IPv6 network for OSPFv3 routing, using IPv6 planning and operations in OPNET Modeler. IPv6 deployment over an integrated wired and wireless network was one of the research tasks at hand. Researchers in previous studies recommended that future work examine the explicit features of the OSPFv3 and EIGRP protocols in an IPv4/IPv6 environment, and that more research explore how to improve end-to-end IPv6 performance. In addition, most related work was performed in an IPv4 environment, and studies of OSPFv3 virtual networks in an end-to-end IPv6 environment were lacking. Hence, this research continues previous work by analyzing IPv6 migration, an OSPFv3 routing experiment based on IPv6, and ping experiments on IPv4/IPv6 dual-stack and IPv6 networks for OSPFv3 routing. Before IPv6 becomes the default in the not-too-distant future, this work helps in understanding network design and deployment in an IPv6 environment from the end-user perspective, including success or failure of connection on IPv6 migration, exploration of an OSPFv3 routing circuit in an end-to-end IPv6 environment, and ping experiments on IPv4/IPv6 dual-stack and IPv6 networks. Through the simulation results we were able to observe an optimal route in the modeled end-to-end virtual network, and we found that the VC server showed a faster ping response time than the HTTP server, suggesting better Internet quality of service.

Design and Optimization of Pilot-Scale Bunsen Process in Sulfur-Iodine (SI) Cycle for Hydrogen Production (수소 생산을 위한 Sulfur-Iodine Cycle 분젠반응의 Pilot-Scale 공정 모델 개발 및 공정 최적화)

  • Park, Junkyu;Nam, KiJeon;Heo, SungKu;Lee, Jonggyu;Lee, In-Beum;Yoo, ChangKyoo
    • Korean Chemical Engineering Research, v.58 no.2, pp.235-247, 2020
  • A simulation study of a 50 L/hr pilot-scale Bunsen process was carried out and validated in order to investigate thermodynamic parameters, a suitable reactor type, the separator configuration, and the optimal conditions of the reactors and separators. The sulfur-iodine (SI) cycle is a thermochemical process that uses iodine and sulfur compounds to produce hydrogen, with the decomposition of water as the net reaction. Understanding the phase separation and reaction of the Bunsen process is crucial, since it acts as the intermediate process among the cycle's three reactions. The electrolyte Non-Random Two-Liquid model was implemented in the simulation as the thermodynamic model. The simulation results were validated against the thermodynamic parameters and the 50 L/hr pilot-scale experimental data. The SO2 conversions of a PFR and a CSTR were compared while varying the temperature and reactor volume in order to identify the more suitable reactor type. Impurities in the H2SO4 phase and the HIx phase were investigated for a 3-phase separator (vapor-liquid-liquid) and two 2-phase separators (vapor-liquid and liquid-liquid) in order to select the separation configuration with better performance. Process optimization of the reactor and phase separator was carried out to find the operating and feed conditions that reach the maximum SO2 conversion and the minimum H2SO4 impurities in the HIx phase. For reactor optimization, a maximum SO2 conversion of 98% was obtained at fixed iodine and water inlet flow rates when the diameter and length of the PFR were 0.20 m and 7.6 m. The inlet water and iodine flow rates could be reduced by 17% and 22% while maintaining the maximum SO2 conversion at a fixed temperature and PFR size (diameter: 3/8", length: 3 m). When the temperature (121℃) and PFR size (diameter: 0.20 m, length: 7.6 m) were applied to the feed composition optimization, the inlet water and iodine flow rates were likewise reduced by 17% and 22% while maintaining the maximum SO2 conversion.
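For reference, the net stoichiometry of the Bunsen step is SO2 + I2 + 2 H2O → H2SO4 + 2 HI, and the reported conversions are simple mole balances. A minimal sketch; the inlet/outlet flows below are illustrative numbers, not the paper's data:

```python
def so2_conversion(so2_in: float, so2_out: float) -> float:
    """Fractional SO2 conversion from inlet/outlet molar flows."""
    return (so2_in - so2_out) / so2_in

def hi_produced(so2_reacted: float) -> float:
    """Moles of HI formed: 2 mol HI per mol SO2 reacted (Bunsen stoichiometry)."""
    return 2.0 * so2_reacted

conv = so2_conversion(100.0, 2.0)   # 98% conversion, the optimized figure
hi = hi_produced(100.0 - 2.0)       # HI formed from the 98 mol of SO2 reacted
```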

Water Digital Twin for High-tech Electronics Industrial Wastewater Treatment System (II): e-ASM Calibration, Effluent Prediction, Process selection, and Design (첨단 전자산업 폐수처리시설의 Water Digital Twin(II): e-ASM 모델 보정, 수질 예측, 공정 선택과 설계)

  • Heo, SungKu;Jeong, Chanhyeok;Lee, Nahui;Shim, Yerim;Woo, TaeYong;Kim, JeongIn;Yoo, ChangKyoo
    • Clean Technology, v.28 no.1, pp.79-93, 2022
  • In this study, an electronics industrial wastewater activated sludge model (e-ASM), intended for use as a Water Digital Twin, was calibrated against real high-tech electronics industrial wastewater treatment measurements from lab-scale and pilot-scale reactors and examined for its treatment performance, effluent quality prediction, and optimal process selection. For specialized modeling of a high-tech electronics industrial wastewater treatment system, the kinetic parameters of the e-ASM were identified by sensitivity analysis and calibrated by the multiple response surface method (MRS). The calibrated e-ASM showed more than 90% agreement with the experimental data from the lab-scale and pilot-scale processes. Four electronics industrial wastewater treatment processes (MLE, A2/O, 4-stage MLE-MBR, and Bardenpho-MBR) were implemented in the proposed Water Digital Twin to compare their removal efficiencies for various electronics industrial wastewater characteristics. Bardenpho-MBR stably removed more than 90% of the chemical oxygen demand (COD) and showed the highest nitrogen removal efficiency. Furthermore, a high-concentration influent of 1,800 mg L⁻¹ TMAH could be removed at 98% efficiency when the HRT of the Bardenpho-MBR process was more than 3 days. Hence, the e-ASM developed in this study is expected to serve as a Water Digital Twin platform with high applicability in a variety of situations, including plant optimization, Water AI, and the selection of the best available technology (BAT) for a sustainable high-tech electronics industry.
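The calibration step described above adjusts kinetic parameters until model output matches the measured data. The study uses sensitivity analysis plus the multiple response surface method; as a simplified stand-in, this sketch recovers one rate constant of a toy first-order removal model by least squares (all numbers synthetic, not from the paper):

```python
# Toy calibration: fit rate constant k so a first-order decay model
# matches synthetic "measurements" generated with k_true = 1.2.
import numpy as np
from scipy.optimize import minimize_scalar

t = np.array([0.0, 0.5, 1.0, 2.0, 3.0])      # sampling times (d)
measured = 100.0 * np.exp(-1.2 * t)          # synthetic COD-like data

def sse(k: float) -> float:
    """Sum of squared errors between model output and measurements."""
    return float(np.sum((100.0 * np.exp(-k * t) - measured) ** 2))

k_fit = minimize_scalar(sse, bounds=(0.1, 5.0), method="bounded").x
```

Full e-ASM calibration works the same way in principle, but over many coupled parameters, which is why the study first screens them by sensitivity analysis.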

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services, v.14 no.6, pp.71-84, 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that let the system continue operating after recovering from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, the strict schemas of relational databases make it difficult to add nodes when rapidly growing data must be distributed across multiple nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational approach with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented store with a free schema structure, is used in the proposed system. MongoDB was chosen because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates log-analysis results from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data requiring real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log-insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through MongoDB log-insert performance evaluations across various chunk sizes.
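The log collector module's routing step described above, classifying incoming records by type and sending real-time ones to the MySQL sink and aggregate ones to the MongoDB sink, can be sketched as follows. This is a hypothetical illustration: the function names and the set of real-time log types are assumptions, not the authors' implementation:

```python
# Hypothetical sketch of the log collector's classify-and-route step.
import json
from collections import defaultdict

# Assumed real-time categories; everything else goes to the document store.
REALTIME_TYPES = {"transaction", "auth_failure"}

def route_logs(raw_lines):
    """Parse JSON log lines and bucket each record by destination store."""
    sinks = defaultdict(list)
    for line in raw_lines:
        record = json.loads(line)
        dest = "mysql" if record["type"] in REALTIME_TYPES else "mongodb"
        sinks[dest].append(record)
    return sinks

logs = [
    '{"type": "transaction", "amount": 100}',
    '{"type": "batch_report", "rows": 5000}',
    '{"type": "auth_failure", "user": "u1"}',
]
buckets = route_logs(logs)
```

In the real system the "mongodb" bucket would be written with flexible-schema document inserts (and sharded automatically), while the "mysql" bucket would feed the real-time graph generator.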