• Title/Summary/Keyword: Application optimization


Optimization of a Hybrid Process (Chemical Coagulation, Fenton Oxidation and Ceramic Membrane Filtration) for the Treatment of Reactive Dye Solutions (반응성 염료폐수 처리를 위한 화학응집, 펜톤산화, 세라믹 분리막 복합공정의 최적화)

  • Yang, Jeong-Mok;Park, Chul-Hwan;Lee, Byung-Hwan;Kim, Tak-Hyun;Lee, Jin-Won;Kim, Sang-Yong
    • Journal of Korean Society of Environmental Engineers, v.28 no.3, pp.257-264, 2006
  • This study investigated the effects of a hybrid process (chemical coagulation, Fenton oxidation, and ceramic ultrafiltration (UF)) on the COD and color removal of commercial reactive dyestuffs. For chemical coagulation, the optimal Fe³⁺ coagulant concentrations for COD and color removal of RB49 (reactive blue 49) and RY84 (reactive yellow 84) were determined by varying the coagulant dose at the optimal pH: 2.78 mM (pH 7) for RB49 and 1.85 mM (pH 6) for RY84. For Fenton oxidation, the optimal concentrations of Fe²⁺ and H₂O₂ were obtained; the optimal [Fe²⁺]:[H₂O₂] molar ratios for COD and color removal of RB49 and RY84 were 4.41:5.73 mM and 1.15:0.81 mM, respectively. For ceramic UF, the flux and rejection of the supernatant after Fenton oxidation were investigated. After 9 hr of ceramic UF at 1 bar, the average fluxes of the RB49 and RY84 solutions were 53.4 L/m²·hr and 67.4 L/m²·hr, respectively. In addition, off-line cleaning with 5% H₂SO₄ restored the permeate flux, with average flux recoveries of 98.5-99.9% (RB49) and 91.0-97.3% (RY84). Overall, the hybrid process achieved COD and color removals of 91.6-95.7% and 99.8%, respectively.

The Study on New Radiating Structure with Multi-Layered Two-Dimensional Metallic Disk Array for Shaping Flat-Topped Element Pattern (구형 빔 패턴 형성을 위한 다층 이차원 원형 도체 배열을 갖는 새로운 방사 구조에 대한 연구)

  • 엄순영;스코벨레프;전순익;최재익;박한규
    • The Journal of Korean Institute of Electromagnetic Engineering and Science, v.13 no.7, pp.667-678, 2002
  • In this paper, a new radiating structure with a multi-layered two-dimensional metallic disk array was proposed for shaping a flat-topped element pattern. It is an infinite periodic planar array structure with metallic disks finitely stacked above the radiating circular waveguide apertures. The theoretical analysis was performed in detail using rigorous full-wave analysis, based on modal representations for the fields in the partial regions of the array structure and for the currents on the metallic disks. The final system of linear algebraic equations was derived using the orthogonality of the vector wave functions, the mode-matching method, the boundary conditions, and Galerkin's method, and the unknown modal coefficients needed to calculate the array characteristics were determined by the Gauss elimination method. The application of the algorithm was demonstrated in an array design for shaping flat-topped element patterns of ±20° beam width in Ka-band. The optimal design parameters, normalized by a wavelength for general applications, are presented; they were obtained through an optimization process based on simulation and design experience. A Ka-band experimental breadboard with nineteen symmetrically arranged elements was fabricated to compare simulation results with experimental results. The metallic disk array structure stacked above the radiating circular waveguide apertures was realized using an ion-beam deposition method on thin polymer films. The calculated and measured element patterns of the breadboard were in very close agreement within the beam scanning range. Side lobe and grating lobe behavior was analyzed, and a blindness phenomenon, which may be caused by the multi-layered metallic disk structure at broadside, was discussed. The input VSWR of the breadboard was less than 1.14, and its gains measured at 29.0 GHz, 29.5 GHz and 30 GHz were 10.2 dB, 10.0 dB and 10.7 dB, respectively. The experimental and simulation results showed that the proposed multi-layered metallic disk array structure can shape an efficient flat-topped element pattern.
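As a minimal illustration of the final solution step described above, the sketch below assembles a placeholder dense complex linear system for the unknown modal coefficients and solves it by Gaussian elimination (numpy's LU-based solver). The matrix entries and system size are stand-ins; in the paper they come from the mode-matching and Galerkin coupling integrals, which are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    n_modes = 24  # hypothetical number of retained modes

    # Placeholder complex system A @ c = b standing in for the mode-matching system.
    A = rng.standard_normal((n_modes, n_modes)) + 1j * rng.standard_normal((n_modes, n_modes))
    b = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)

    c = np.linalg.solve(A, b)      # Gaussian elimination (LAPACK LU) for the modal coefficients
    print(np.allclose(A @ c, b))   # True: the coefficients reproduce the excitation vector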

Optimization of Protocol for Injection of Iodinated Contrast Medium in Pediatric Thoracic CT Examination (소아 흉부 CT검사에서 조영제 주입에 관한 프로토콜의 최적화)

  • Kim, Yung-Kyoon;Kim, Yon-Min
    • Journal of the Korean Society of Radiology, v.13 no.6, pp.879-887, 2019
  • The purpose of this study was to establish a physiological, weight-based injection protocol that minimizes the amount of contrast medium while optimizing contrast enhancement in pediatric thoracic CT examinations. Eighty pediatric patients under the age of 10 were studied, using intravenous contrast material containing 300 mgI/ml. Group A was injected with a volume of 1.5 times body weight, while groups B, C, and D received doses reduced stepwise by 10% each, with 5 to 15 ml of normal saline added. A physiological model that calculates the injection amounts of contrast medium and normal saline, the flow rate, and the delay time from body weight was applied. To assess image quality, the average HU values and SNR of the superior vena cava, pulmonary artery, ascending and descending aorta, right and left atrium, and right and left ventricle were measured. The CT numbers of the subclavian vein and superior vena cava were compared to evaluate how effectively the normal saline reduced artifacts. Comparing SNR across the injection protocols, significant differences were found in the superior vena cava, pulmonary artery, descending aorta, and right and left ventricles, and the CT numbers showed significant differences in all organs. In particular, whereas group B, with a 10% reduction in contrast medium and an additional saline injection, maintained enhancement, the groups with reductions of 20% or more showed a low degree of contrast enhancement. In addition, the groups injected with normal saline showed greatly reduced residual contrast enhancement of the subclavian vein and superior vena cava, and the beam-hardening artifact caused by the contrast medium was significantly attenuated. In conclusion, applying a physiological contrast injection protocol in pediatric thoracic CT examinations reduced contrast-related artifacts, prevented unnecessary use of contrast medium, and improved contrast enhancement.
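A hedged sketch of such a weight-based calculator is shown below. The 1.5 ml/kg baseline and the 10% per-group reductions come from the abstract; the saline volume within the stated 5-15 ml range and the injection duration used to derive the flow rate are illustrative assumptions, since the abstract does not give those formulas.

    def injection_protocol(weight_kg, group):
        """Illustrative weight-based protocol; group A is the 1.5 ml/kg baseline."""
        reduction = {"A": 0.00, "B": 0.10, "C": 0.20, "D": 0.30}[group]
        contrast_ml = 1.5 * weight_kg * (1.0 - reduction)       # 300 mgI/ml contrast medium
        # Saline flush only for the reduced-dose groups; the 5-15 ml range is from
        # the abstract, but the weight scaling below is an assumption.
        saline_ml = 0.0 if group == "A" else min(max(0.5 * weight_kg, 5.0), 15.0)
        injection_time_s = 25.0                                 # assumed fixed injection duration
        flow_rate_ml_s = (contrast_ml + saline_ml) / injection_time_s
        return {"contrast_ml": round(contrast_ml, 1),
                "saline_ml": round(saline_ml, 1),
                "flow_rate_ml_s": round(flow_rate_ml_s, 2)}

    print(injection_protocol(20.0, "B"))   # e.g. a 20 kg child in group B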

An Optimization Study on a Low-temperature De-NOx Catalyst Coated on Metallic Monolith for Steel Plant Applications (제철소 적용을 위한 저온형 금속지지체 탈질 코팅촉매 최적화 연구)

  • Lee, Chul-Ho;Choi, Jae Hyung;Kim, Myeong Soo;Seo, Byeong Han;Kang, Cheul Hui;Lim, Dong-Ha
    • Clean Technology, v.27 no.4, pp.332-340, 2021
  • With the recent reinforcement of emission standards, efforts are needed to reduce NOx from air pollutant-emitting workplaces. The NOx reduction method mainly used in industrial facilities is selective catalytic reduction (SCR), and the most common commercial SCR catalyst is the ceramic honeycomb type. This study was carried out to reduce the NOx emitted from steel plants by applying a De-NOx catalyst coated on a metallic monolith. The De-NOx catalyst was synthesized through an optimized coating technique, and air jet erosion and bending tests confirmed that the coated catalyst adhered uniformly and strongly to the surface of the metallic monolith. Owing to the good thermal conductivity of the metallic monolith, the coated De-NOx catalyst showed good De-NOx efficiency at low temperatures (200 ~ 250 ℃). In addition, the optimal amount of catalyst coating on the metallic monolith surface was determined for the design of an economical catalyst. Based on these results, a De-NOx catalyst of commercial-grade size was tested in a semi-pilot De-NOx performance facility under a simulated gas similar to the exhaust gas emitted from a steel plant. Even at a low temperature (200 ℃), it showed excellent performance, satisfying the emission standard (less than 60 ppm). The De-NOx catalyst coated on a metallic monolith therefore has good physical and chemical properties and showed good De-NOx efficiency even with the minimum amount of catalyst. Additionally, applying a high-density cell made it possible to make the SCR reactor compact and downsized. We therefore suggest that the proposed metallic-monolith-coated De-NOx catalyst may be a good alternative for industrial uses such as steel plants, thermal power plants, incineration plants, ships, and construction machinery.

Optimization and Scale-up of Fish Skin Peptide Loaded Liposome Preparation and Its Storage Stability (어피 펩타이드 리포좀 대량생산 최적 조건 및 저장 안정성)

  • Lee, JungGyu;Lee, YunJung;Bai, JingJing;Kim, Soojin;Cho, Youngjae;Choi, Mi-Jung
    • Food Engineering Progress, v.21 no.4, pp.360-366, 2017
  • Fish skin peptide-loaded liposomes were prepared at lab scales of 100 mL and 1 L, and at a prototype scale of 10 L. Particle size and zeta potential were measured to determine the optimal conditions for producing the peptide-loaded liposomes. The liposomes were manufactured under the following conditions: (1) primary homogenization at 4,000 rpm, 8,000 rpm, or 12,000 rpm for 3 minutes; (2) secondary homogenization at 40 watts (W), 60 W, or 80 W for 3 minutes. From this experimental design, the optimal homogenization conditions were selected as 4,000 rpm and 60 W. Next, fish peptides were prepared at concentrations of 3, 6, and 12% under the optimal liposome manufacturing conditions and stored at 4°C. The particle size, polydispersity index (PdI), and zeta potential of the peptide-loaded liposomes were measured to assess stability. Particle size increased significantly as the manufacturing scale and peptide concentration increased, and decreased over storage time. At the 10 L scale, the zeta potential increased with storage time. In addition, the 12% peptide formulation formed a sediment layer after 3 weeks, so the 6% peptide formulation was considered the most suitable for industrial application.

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.23 no.4, pp.147-168, 2017
  • Accurate stock market forecasting has long been studied in academia, and various forecasting models using diverse techniques now exist. Recently, many attempts have been made to predict the stock index using machine learning methods, including deep learning. While both fundamental and technical analysis are used in traditional stock investment, technical analysis is more suitable for short-term transaction prediction and for statistical and mathematical techniques. Most studies based on technical indicators have modeled stock price prediction as a binary classification - rising or falling - of future market movement (usually the next trading day). However, such binary classification has many unfavorable aspects for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we expand the existing binary scheme into a multi-class system of stock index trends (upward, boxed, downward). Although this multi-class problem could be addressed with techniques such as multinomial logistic regression (MLOGIT), multiple discriminant analysis (MDA), or artificial neural networks (ANN), we build on multi-classification support vector machines (MSVM), which have proved superior in prediction performance, and propose an optimization model that uses a genetic algorithm (GA) as a wrapper to improve performance. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and of training instances (instance selection). To verify the performance of the proposed model, we applied it to real data. The results show that the proposed method outperforms the conventional MSVM, which had been known to show the best prediction performance, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection was found to play a very important role in predicting the stock index trend, contributing more to model improvement than the other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's real KOSPI200 stock index. Our research primarily aims at predicting trend segments to capture signal acquisition or short-term trend transition points. The experimental data set includes technical indicators such as the price and volatility index (2004 ~ 2017) of the KOSPI200 stock index and macroeconomic data (interest rate, exchange rate, S&P 500, etc.). Using a variety of statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, the trend class, took three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). For each class, 70% of the data was used for training and the remaining 30% for validation. To verify the performance of the proposed model, comparative experiments with MDA, MLOGIT, CBR, ANN, and MSVM were conducted.
The MSVM adopted the One-Against-One (OAO) approach, known as the most accurate among the various MSVM decomposition schemes. Although there are some limitations, the final experimental results demonstrate that the proposed GA-MSVM model performs at a significantly higher level than all comparative models.
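The wrapper idea above lends itself to a compact sketch: a genetic algorithm whose chromosome encodes a feature-selection mask, an instance-selection mask, and the RBF kernel parameters (C, gamma), with validation accuracy as fitness. The sketch below is a simplified illustration under assumed data, population settings, and operators, not the paper's exact configuration; note that scikit-learn's SVC is one-against-one (OAO) for multiclass by construction.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    X = rng.standard_normal((300, 15))     # stand-in for the 15 technical indicators
    y = rng.integers(-1, 2, 300)           # trend classes: -1 (down), 0 (boxed), 1 (up)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
    n_tr = len(X_tr)

    def fitness(chrom):
        f_mask = chrom[:15].astype(bool)              # feature-selection genes
        i_mask = chrom[15:15 + n_tr].astype(bool)     # instance-selection genes
        if f_mask.sum() == 0 or i_mask.sum() < 10 or len(np.unique(y_tr[i_mask])) < 2:
            return 0.0
        C, gamma = 10 ** chrom[-2], 10 ** chrom[-1]   # kernel-parameter genes (log10 scale)
        clf = SVC(C=C, gamma=gamma, kernel="rbf")     # OAO multiclass SVM
        clf.fit(X_tr[np.ix_(i_mask, f_mask)], y_tr[i_mask])
        return clf.score(X_val[:, f_mask], y_val)

    def random_chrom():
        bits = rng.integers(0, 2, 15 + n_tr).astype(float)
        return np.concatenate([bits, rng.uniform(-2.0, 2.0, 2)])

    pop = [random_chrom() for _ in range(20)]
    for _ in range(10):                               # toy number of generations
        elite = sorted(pop, key=fitness, reverse=True)[:10]
        children = []
        for _ in range(10):                           # uniform crossover + bit-flip mutation
            a, b = rng.choice(10, size=2, replace=False)
            mix = rng.integers(0, 2, elite[0].size).astype(bool)
            child = np.where(mix, elite[a], elite[b])
            flip = rng.random(child.size - 2) < 0.02
            child[:-2] = np.where(flip, 1.0 - child[:-2], child[:-2])
            children.append(child)
        pop = elite + children

    best = max(pop, key=fitness)
    print("best validation accuracy:", round(fitness(best), 3))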

An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems, v.20 no.1, pp.149-161, 2014
  • The export of domestic public services to overseas markets faces many potential obstacles, stemming from differences in export procedures, the target services, and socio-economic environments. To alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and testing of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. Key attributes of the service ontology comprise the categories objective, requirements, activity, and service. The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation; its sub-attributes are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase, with sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries. Its key attributes are business, technology, and constraints: business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business laws, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Its key attributes are user, requirements, and activity: a user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; and the activity attribute represents business processes in detail.
The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country, and/or the priority list of target countries for a certain public service, are generated by a matching algorithm. These lists are used as input seeds to simulate the consortium partners and the government's policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and a work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered as an alternative, and various alternatives are derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated during this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. They could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
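The five-ontology structure can be sketched in a few lines with rdflib. The class and property names below follow the attributes listed in the abstract, but the namespace URI, the specific triples, and the example instance are illustrative assumptions, not the authors' actual Protégé model.

    from rdflib import Graph, Literal, Namespace, RDF, RDFS
    from rdflib.namespace import OWL

    EX = Namespace("http://example.org/public-service-export#")  # hypothetical namespace
    g = Graph()
    g.bind("ex", EX)

    # The five top-level ontologies from the paper, modeled as OWL classes.
    for cls in ["Service", "Requirements", "Environment", "Enterprise", "Country"]:
        g.add((EX[cls], RDF.type, OWL.Class))

    # Key attributes of the service ontology: objective, requirements, activity, service.
    for prop in ["hasObjective", "hasRequirements", "hasActivity", "hasServiceProfile"]:
        g.add((EX[prop], RDF.type, OWL.ObjectProperty))
        g.add((EX[prop], RDFS.domain, EX.Service))

    # An example instance linking a service to a target country (illustrative only).
    g.add((EX.ePassportService, RDF.type, EX.Service))
    g.add((EX.TargetCountry, RDF.type, EX.Country))
    g.add((EX.ePassportService, EX.exportedTo, EX.TargetCountry))
    g.add((EX.ePassportService, RDFS.label, Literal("e-Passport issuance service")))

    print(g.serialize(format="turtle"))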

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems, v.18 no.2, pp.29-45, 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions of traditional statistics have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically, and it leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, three major drawbacks can degrade SVM's performance. First, SVM was originally proposed for binary classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another. Such data sets often yield a default classifier with a skewed boundary, reducing classification accuracy. SVM ensemble learning is one machine learning approach to these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques: it constructs a composite classifier by sequentially training classifiers while increasing the weights on misclassified observations through iterations, so that observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones.
Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, and in this way it can reinforce the training of misclassified minority-class observations. This paper proposes a multiclass Geometric Mean-based Boosting method (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can account for geometric mean-based accuracies and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; that is, the cross-validated folds were tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In terms of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) outperforms both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of each classifier over the 30 folds differed significantly; the results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
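To make the geometric mean-based accuracy concrete, the sketch below computes it as the geometric mean of per-class recalls, which collapses toward zero whenever any single class is neglected, unlike arithmetic accuracy. The boosting loop is scikit-learn's standard AdaBoost used only as a stand-in; MGM-Boost's own geometric mean-based reweighting is not reproduced, and the imbalanced data set is synthetic.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.metrics import confusion_matrix

    def geometric_mean_accuracy(y_true, y_pred):
        """Geometric mean of per-class recalls (zero if any class is fully missed)."""
        cm = confusion_matrix(y_true, y_pred)
        recalls = np.diag(cm) / cm.sum(axis=1)
        return float(np.prod(recalls) ** (1.0 / len(recalls)))

    rng = np.random.default_rng(1)
    # Imbalanced 3-class stand-in for bond-rating grades.
    X = rng.standard_normal((600, 8))
    y = np.repeat([0, 1, 2], [400, 150, 50])
    X[y == 1] += 1.0
    X[y == 2] += 2.0                          # make the classes weakly separable

    clf = AdaBoostClassifier(n_estimators=100).fit(X, y)   # stand-in boosting loop
    pred = clf.predict(X)
    print("arithmetic accuracy:", round(float((pred == y).mean()), 3))
    print("geometric-mean accuracy:", round(geometric_mean_accuracy(y, pred), 3))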

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.163-177, 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, we propose a deep learning-based method for detecting accompanying status using only multimodal physical sensor data from the accelerometer, magnetic field sensor, and gyroscope. Accompanying status was defined as a subset of user interaction behavior, covering whether the user is accompanied by an acquaintance at close distance and whether the user is actively communicating with that acquaintance. We propose a framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation. First, we introduce a data preprocessing method consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation is applied to synchronize the timestamps of data collected from different sensors, normalization is performed on each x, y, and z axis value of the sensor data, and sequence data are generated with the sliding window method. The sequence data then become the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of 3 convolutional layers and has no pooling layer, so as to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function of the model is the cross-entropy function, and the weights of the model are randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model is trained with the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128. Dropout is applied to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate is set to 0.001 and decreased exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of the majority vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will study transfer learning methods that adapt models tailored to the training data to evaluation data that follow a different distribution. We expect to obtain a model with robust recognition performance against changes in data not considered during the model learning stage.
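The architecture as described maps directly onto a short Keras sketch: three convolutional layers without pooling, two 128-cell LSTM layers, dropout on the recurrent input, a softmax output, cross-entropy loss, Adam at 0.001 with a 0.99 per-epoch exponential decay, and batch size 128. The window length, channel count, filter sizes, and filter counts below are assumptions, since the abstract does not state them.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    WINDOW, CHANNELS = 128, 9   # assumed: window length x (accel + magnetic + gyro) x,y,z axes

    model = models.Sequential([
        layers.Input(shape=(WINDOW, CHANNELS)),
        layers.Conv1D(64, 5, padding="same", activation="relu"),   # 3 conv layers, no pooling,
        layers.Conv1D(64, 5, padding="same", activation="relu"),   # preserving temporal resolution
        layers.Conv1D(64, 5, padding="same", activation="relu"),
        layers.Dropout(0.5),                                       # dropout on the LSTM input
        layers.LSTM(128, return_sequences=True),                   # two LSTM layers, 128 cells each
        layers.LSTM(128),
        layers.Dense(2, activation="softmax"),                     # accompanying vs. not
    ])

    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])

    # Exponential decay of the learning rate by 0.99 at the end of every epoch.
    decay = tf.keras.callbacks.LearningRateScheduler(lambda epoch, lr: lr * 0.99)
    # model.fit(X_train, y_train, batch_size=128, epochs=50, callbacks=[decay])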

Optimization of Cultivational Conditions of Rice (Oryza sativa L.) by a Central Composite Design Applied to an Early Cultivar in Southern Region (중심합성계획법에 의한 남부 조생벼 재배요인의 최적조건 구명)

  • Shon, Gil-Man;Kim, Jeung-Kyo;Choe, Zhin-Ryong;Lee, Yu-Sik;Park, Joong-Yang
    • KOREAN JOURNAL OF CROP SCIENCE, v.34 no.1, pp.60-73, 1989
  • Two field experiments were carried out to assess the applicability of a central composite design (CCD) in determining the optimum culture conditions of an early rice cultivar, Unbongbyeo, in southern Korea. A central composite design with two replicates was applied to five levels of five factors: the number of hills per 3.3 m², the number of seedlings per hill, the level of nitrogen, the transplanting date, and the seedling age (Experiment 1). The planting density ranged from 30 to 150 hills per 3.3 m²; the number of seedlings per hill from 1 to 9; the nitrogen application from 1 kg/10a to 21 kg/10a; the transplanting date from June 15 to July 5; and the seedling age from 25 to 45 days. A fractional factorial design was applied to three levels of the five factors tested in the CCD (Experiment 2). Yields per hill and per unit area were examined, the results from both experiments were compared, and the benefits of the central composite design were discussed. The maximum yield of brown rice per unit area was obtained at the central level of any one of the five factors when the other four factors were fixed at the central point. Furthermore, the brown rice yield per unit area as affected by the interaction of two factors was maximized at the central point when the remaining three factors were fixed at the central level. The responses of the five factors with respect to brown rice yield per hill and per unit area were found to form a saddle point in both designs. The actual values at the stationary point in the central composite design were 107 hills per 3.3 m², 4 seedlings per hill, 10 kg of nitrogen per 10a, a transplanting date of June 26, and a seedling age of 33 days. Brown rice yield per unit area at the stationary point was estimated at 439 kg/10a in the central composite design and 442 kg/10a in the fractional factorial design. Considering the number of experimental treatment combinations, the central composite design was convenient in reducing the number of treatment combinations required for similar information, and it was easier for an experimenter to present the results from the central composite design than those from the fractional factorial design. Since the optimum yields of brown rice per unit area at the stationary points were identified as saddle points in both designs, heterogeneity among the factors should be avoided when setting up factors, so as to induce a unidirectional response of the factors to yield. Even though the lowest and highest levels in the central composite design were beyond the region of the experimenter's interest, they were considered highly valuable in interpreting the results. In conclusion, the central composite design was found to be more beneficial for optimizing the culture conditions of paddy rice, even when several levels of various factors were involved.
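For reference, the sketch below generates the coded design points of a central composite design for k = 5 factors with numpy: 2^k factorial corners at ±1, 2k axial (star) points at ±α, and replicated center runs, with the rotatable choice α = (2^k)^(1/4). The decoding of a coded column back to real units uses planting density as an example; the exact coding the authors used is an assumption.

    import itertools
    import numpy as np

    def ccd(k, n_center=2, alpha=None):
        """Coded central composite design: 2**k corners, 2k axial points, center runs."""
        if alpha is None:
            alpha = (2 ** k) ** 0.25                   # rotatable design
        corners = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
        axial = np.zeros((2 * k, k))
        for i in range(k):
            axial[2 * i, i], axial[2 * i + 1, i] = -alpha, alpha
        center = np.zeros((n_center, k))
        return np.vstack([corners, axial, center])

    design = ccd(5)
    print(design.shape)                                # (44, 5): 32 corners + 10 axial + 2 center

    # Decoding one coded column to real units, e.g. planting density with the
    # center at 90 hills/3.3 m^2 and the +/-alpha extremes at 30 and 150 hills
    # (an assumed coding, not necessarily the authors'):
    alpha = (2 ** 5) ** 0.25
    density = 90 + design[:, 0] * (150 - 90) / alpha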
