• Title/Summary/Keyword: Binary gas


Comparative Study on the Estimation of CO2 Absorption Equilibrium in Methanol Using the PC-SAFT Equation of State and a Two-Model Approach

  • Noh, Jaehyun; Park, Hoey Kyung; Kim, Dongsun; Cho, Jungho
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.10 / pp.136-152 / 2017
  • Two thermodynamic models for simulating the Rectisol process, which uses an aqueous methanol solution as the CO2 removal solvent, were compared: the PC-SAFT (Perturbed-Chain Statistical Associating Fluid Theory) equation of state and a two-model approach combining the NRTL (Non-Random Two-Liquid) liquid activity coefficient model with Henry's law and the Peng-Robinson equation of state. In addition, to determine new binary interaction parameters for the PC-SAFT equation of state and the Henry's constant for the two-model approach, absorption equilibrium experiments between carbon dioxide and methanol were carried out at 273.25 K and 262.35 K, and regression analysis was performed. The accuracy of the newly determined parameters was verified through the regression results for the experimental data. These model equations and validated parameters were then used to model the carbon dioxide removal process. With the two-model approach, the methanol solvent flow rate required to remove 99.00% of the CO2 was estimated to be approximately 43.72% higher, the cooling water consumption in the distillation tower 39.22% higher, and the steam consumption 43.09% higher than with the PC-SAFT equation of state. In conclusion, when the high-pressure Rectisol process is modeled with the liquid activity coefficient model and Henry's relation, it is designed larger than when the PC-SAFT equation of state is used. The reason is that Henry's law, which assumes that the quantity of a sparingly soluble gas dissolved in a liquid at constant temperature is proportional to its partial pressure in the gas phase, does not adequately predict the absorption behavior of carbon dioxide, which is highly soluble in methanol.
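The last point, the breakdown of the Henry's-law proportionality for a highly soluble gas, can be illustrated with a small sketch. The code below is not from the paper; it fits a Henry's constant to hypothetical CO2-in-methanol equilibrium data by least squares and compares the linear prediction x = P/H against the assumed data, showing how the deviation grows with loading. All numbers are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): fitting a Henry's constant H to
# hypothetical CO2-in-methanol absorption-equilibrium data.  Henry's law
# assumes x_CO2 = P_CO2 / H, the linear relation that the abstract says
# breaks down for a gas as soluble in methanol as CO2.
import numpy as np

# Hypothetical (illustrative) data: CO2 partial pressure [bar] vs. liquid-phase
# mole fraction of CO2 in methanol at a fixed low temperature.
p_co2 = np.array([0.5, 1.0, 2.0, 4.0, 8.0])             # bar
x_co2 = np.array([0.010, 0.021, 0.045, 0.105, 0.260])   # mole fraction

# Least-squares Henry's constant (slope through the origin): H = sum(p*x) / sum(x^2)
H = np.sum(p_co2 * x_co2) / np.sum(x_co2 ** 2)           # bar per mole fraction

# Compare the linear Henry's-law prediction with the assumed data: the deviation
# grows with pressure, which is the qualitative point of the comparison.
x_pred = p_co2 / H
for p, x, xp in zip(p_co2, x_co2, x_pred):
    print(f"P = {p:4.1f} bar   x_data = {x:.3f}   x_Henry = {xp:.3f}")
```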

Optimal Selection of Classifier Ensemble Using Genetic Algorithms

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.99-112 / 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in the machine learning and artificial intelligence fields because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In those studies, DT ensembles have consistently demonstrated impressive improvements in the generalization behavior of DT, whereas NN and SVM ensembles have not shown gains as remarkable as those of DT ensembles. Recently, several works have reported that ensemble performance can be degraded when the classifiers in an ensemble are highly correlated with one another, producing a multicollinearity problem that leads to performance degradation of the ensemble; they have also proposed differentiated learning strategies to cope with this problem. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) showed that ensemble learning can increase the performance of unstable learning algorithms but does not yield remarkable improvement for stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers. Therefore, ensembles of unstable learning algorithms can guarantee some diversity among the classifiers. On the contrary, stable learning algorithms such as NN and SVM generate similar classifiers in spite of small changes in the training data, and thus the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) compared bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, while, with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically shows that the performance degradation of the ensemble is due to the multicollinearity problem, and it proposes that optimization of the ensemble is needed to cope with such a problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of NN ensembles. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble so as to guarantee the diversity of the classifiers. CO-NN uses a GA, which has been widely applied to various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier.
The fitness function is defined as the maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of the classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package called Evolver. Experiments on company failure prediction have shown that CO-NN is effective for the stable performance enhancement of NN ensembles through a choice of classifiers that takes the correlations within the ensemble into account. The classifiers with a potential multicollinearity problem are removed by the coverage optimization process of CO-NN, and thereby CO-NN shows higher performance than a single NN classifier and an NN ensemble at the 1% significance level, and than a DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered in future work. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
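As a rough illustration of the coverage-optimization idea described in this abstract (the paper itself used Microsoft Excel with the Evolver GA package), the sketch below encodes a sub-ensemble as a binary chromosome, scores it by majority-vote accuracy, and penalizes chromosomes whose selected classifiers violate a VIF constraint. The simulated classifier outputs, GA settings, and VIF limit are all illustrative assumptions, not the authors' configuration.

```python
# Minimal NumPy sketch of GA-based coverage optimization with a VIF constraint.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: probabilistic outputs of 15 base classifiers on a
# validation set of 200 cases, plus the true labels (all simulated here).
n_clf, n_obs = 15, 200
y_true = rng.integers(0, 2, size=n_obs)
clf_outputs = np.clip(y_true + rng.normal(0.0, 0.6, size=(n_clf, n_obs)), 0.0, 1.0)

def max_vif(outputs):
    """Largest variance inflation factor among the selected classifiers;
    the VIFs are the diagonal of the inverse correlation matrix of their outputs."""
    if outputs.shape[0] < 2:
        return 1.0
    corr = np.corrcoef(outputs)
    try:
        return float(np.diag(np.linalg.inv(corr)).max())
    except np.linalg.LinAlgError:
        return np.inf  # singular correlation matrix: perfectly collinear members

def fitness(chromosome, vif_limit=10.0):
    """Majority-vote accuracy of the selected sub-ensemble, penalised when the
    multicollinearity constraint (max VIF <= vif_limit) is violated."""
    idx = np.flatnonzero(chromosome)
    if idx.size == 0:
        return 0.0
    votes = (clf_outputs[idx] > 0.5).mean(axis=0) > 0.5
    acc = float((votes == y_true).mean())
    return acc if max_vif(clf_outputs[idx]) <= vif_limit else acc - 1.0

# Plain generational GA: tournament selection, one-point crossover, bit-flip mutation.
pop = rng.integers(0, 2, size=(30, n_clf))
for _ in range(50):
    scores = np.array([fitness(c) for c in pop])
    new_pop = [pop[scores.argmax()].copy()]                      # elitism
    while len(new_pop) < len(pop):
        parents = []
        for _ in range(2):                                       # binary tournaments
            a, b = rng.integers(0, len(pop), size=2)
            parents.append(pop[a] if scores[a] >= scores[b] else pop[b])
        cut = rng.integers(1, n_clf)                             # one-point crossover
        child = np.concatenate([parents[0][:cut], parents[1][cut:]])
        flip = rng.random(n_clf) < 0.05                          # bit-flip mutation
        new_pop.append(np.where(flip, 1 - child, child))
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(c) for c in pop])]
print("selected classifiers:", np.flatnonzero(best))
print("fitness of best sub-ensemble:", fitness(best))
```

The VIF constraint here is handled as a simple penalty on the fitness value; a hard constraint, as described in the abstract, could equally be enforced by discarding infeasible chromosomes during selection.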