• Title/Summary/Keyword: Problem Decomposition


Estimation of Fire Dynamics Properties for Charring Material Using a Genetic Algorithm (유전 알고리즘을 이용한 탄화 재료의 화재 물성치 추정)

  • Chang, Hee-Chul;Park, Won-Hee;Lee, Duck-Hee;Jung, Woo-Sung;Son, Bong-Sei;Kim, Tae-Kuk
    • Fire Science and Engineering / v.24 no.2 / pp.106-113 / 2010
  • Fire characteristics can be analyzed more realistically by using more accurate material properties related to fire dynamics, and one way to acquire these properties is through inverse property analysis. In this study, the genetic algorithm, which is frequently applied to inverse heat transfer problems, is selected to demonstrate the procedure for obtaining the fire properties of a solid charring material with a relatively simple chemical structure. Thermal decomposition on the surface of the test plate occurs as the plate receives radiative energy from external heat sources, and in this process the heat transfer through the test plate can be simplified as an unsteady 1-D problem. The inverse property analysis based on the genetic algorithm is then applied to estimate the properties related to the pyrolysis reaction. The input parameters for the analysis are the surface temperature and mass loss rate of the char plate, which are determined from the unsteady 1-D analysis with a given set of 8 properties. The properties estimated by the inverse analysis show acceptable agreement with the input properties used to obtain the surface temperature and mass loss rate, with errors ranging from 1.8% for the specific heat of the virgin material to 151% for the specific heat of the charred material.
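
The inverse procedure the abstract describes — searching for property values whose simulated response matches the measured surface temperature and mass loss rate — can be sketched with a toy genetic algorithm. The two-parameter forward model and all numeric values below are hypothetical stand-ins for the paper's 8-property unsteady 1-D pyrolysis model:

```python
import random

random.seed(42)

# Hypothetical forward model: maps two "material properties" to a response curve.
# (Stand-in for the paper's 1-D pyrolysis simulation.)
def forward_model(p, times=range(10)):
    k, c = p
    return [k * t + c * t * t for t in times]

# "Measured" data generated from known true properties, as in the paper's setup.
TRUE_PROPS = (2.0, 0.5)
measured = forward_model(TRUE_PROPS)

def fitness(p):
    # Negative sum of squared errors between simulated and "measured" curves.
    sim = forward_model(p)
    return -sum((s - m) ** 2 for s, m in zip(sim, measured))

def evolve(pop_size=30, generations=60, bounds=(0.0, 5.0)):
    lo, hi = bounds
    pop = [(random.uniform(lo, hi), random.uniform(lo, hi)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            if random.random() < 0.3:                     # Gaussian mutation
                i = random.randrange(2)
                child[i] += random.gauss(0, 0.2)
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()  # estimated properties, should land near TRUE_PROPS
```

Selection keeps the best half of the population, crossover averages two parents, and mutation keeps the search from collapsing prematurely; the paper's actual GA operates on 8 properties against a full pyrolysis simulation.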

Application of Multispectral Remotely Sensed Imagery for the Characterization of Complex Coastal Wetland Ecosystems of southern India: A Special Emphasis on Comparing Soft and Hard Classification Methods

  • Shanmugam, Palanisamy;Ahn, Yu-Hwan;Sanjeevi, Shanmugam
    • Korean Journal of Remote Sensing / v.21 no.3 / pp.189-211 / 2005
  • This paper compares the recently evolved soft classification method based on Linear Spectral Mixture Modeling (LSMM) with traditional hard classification methods based on the Iterative Self-Organizing Data Analysis (ISODATA) and Maximum Likelihood Classification (MLC) algorithms, in order to achieve appropriate results for mapping, monitoring and preserving valuable coastal wetland ecosystems of southern India using Indian Remote Sensing Satellite (IRS) 1C/1D LISS-III and Landsat-5 Thematic Mapper image data. ISODATA and MLC were applied to these satellite images to produce maps of 5, 10, 15 and 20 wetland classes for each of three contrasting coastal wetland sites: Pitchavaram, Vedaranniyam and Rameswaram. The accuracy of the derived classes was assessed with the simplest descriptive statistic, overall accuracy, and with a discrete multivariate technique, KAPPA accuracy. ISODATA classification resulted in maps with poor accuracy compared to MLC classification, which produced maps with improved accuracy. However, both overall and KAPPA accuracy decreased systematically as more classes were derived from the IRS-1C/1D and Landsat-5 TM imagery by ISODATA and MLC. Two principal factors caused the decreased classification accuracy: spectral overlapping/confusion and inadequate spatial resolution of the sensors. Of these, the limited instantaneous field of view (IFOV) of the sensors caused many mixed pixels (mixels) to occur in the imagery, and their effect on the classification process was a major obstacle to deriving accurate wetland cover types, in spite of the increasing spatial resolution of new-generation Earth Observation Sensors (EOS).
In order to improve classification accuracy, a soft classification method based on Linear Spectral Mixture Modeling (LSMM) was applied to calculate the spectral mixture and classify the IRS-1C/1D LISS-III and Landsat-5 TM imagery. This method considers the number of reflectance end-members that form the scene spectra, followed by determination of their nature, and finally decomposition of the spectra into their end-members. To evaluate the LSMM areal estimates, the resulting end-member fractions were compared with the normalized difference vegetation index (NDVI), ground truth data, and the estimates derived from the traditional hard classifier (MLC). The findings revealed that NDVI values and vegetation fractions were positively correlated ($r^2$ = 0.96, 0.95 and 0.92 for Rameswaram, Vedaranniyam and Pitchavaram respectively) and that NDVI and soil fraction values were negatively correlated ($r^2$ = 0.53, 0.39 and 0.13), indicating the reliability of the sub-pixel classification. Compared with ground truth data, the precision of LSMM was 92% for the moisture fraction and 96% for the soil fraction. The LSMM in general seems well suited to locating small wetland habitats which occur as sub-pixel inclusions, and to representing continuous gradations between different habitat types.
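
Linear spectral mixture modeling treats each pixel's spectrum as a weighted sum of end-member spectra, with fractions summing to one. A minimal two-end-member unmixing sketch in Python — the 4-band reflectance values are invented for illustration, not taken from the IRS or Landsat data:

```python
# Hypothetical 4-band reflectance end-members (illustrative values only).
VEGETATION = [0.05, 0.08, 0.45, 0.50]   # e.g. green vegetation spectrum
SOIL       = [0.20, 0.25, 0.30, 0.35]   # e.g. bare soil spectrum

def unmix_two(pixel, e1, e2):
    """Least-squares fraction of e1 in a two-end-member linear mixture,
    under the sum-to-one constraint f1 + f2 = 1 (so f2 = 1 - f1)."""
    d  = [a - b for a, b in zip(e1, e2)]       # e1 - e2
    pb = [a - b for a, b in zip(pixel, e2)]    # pixel - e2
    f = sum(x * y for x, y in zip(pb, d)) / sum(x * x for x in d)
    return max(0.0, min(1.0, f))               # clip to the physical range [0, 1]

def ndvi(pixel, red_band=1, nir_band=2):
    # Band indices are an assumption of this sketch.
    r, n = pixel[red_band], pixel[nir_band]
    return (n - r) / (n + r)

# A synthetic mixed pixel: 70% vegetation, 30% soil.
mixed = [0.7 * v + 0.3 * s for v, s in zip(VEGETATION, SOIL)]
veg_fraction = unmix_two(mixed, VEGETATION, SOIL)   # recovers ~0.7
```

With more end-members the same idea becomes a constrained least-squares problem per pixel; the positive NDVI-vegetation-fraction correlation reported above is visible even in this miniature, since the mixed pixel's NDVI rises with its vegetation fraction.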

High Performance Separator at High-Temperature for Lithium-ion Batteries (고온 싸이클 성능이 우수한 리튬 이차전지 분리막)

  • Yoo, Seungmin
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.789-793 / 2021
  • The lithium secondary battery is the most promising candidate for future energy storage devices. However, battery capacity decreases gradually due to small amounts of water and the decomposition of salts during the charge-discharge process, and this deterioration accelerates at high temperatures. Many researchers have focused on increasing cycling performance, but there have been few studies addressing the fundamental problem of removing water and HF molecules. In this study, silane molecules capable of absorbing water and HF were introduced onto the separator. First, silica was coated with the amino-silane APTES (3-aminopropyltriethoxysilane); the silica then reacted with the epoxy-silane GPTMS ((3-glycidyloxypropyl)trimethoxysilane). A ceramic-coated separator was fabricated by coating the silane-coated silica onto porous polyethylene substrates. FT-IR spectroscopy and TEM analysis were performed to examine the chemical composition and morphology of the silane-coated silica, and SEM was used to confirm the ceramic layers. LMO half cells were fabricated to evaluate the cycling performance at 60 ℃. The cells equipped with a GPTMS-silica separator showed stable cycling performance, suggesting that this approach could improve the cycling performance of Li-ion batteries at high temperatures.

Electrochemical Behaviors of Graphite/LiNi0.6Co0.2Mn0.2O2 Cells during Overdischarge (흑연과 LiNi0.6Co0.2Mn0.2O2로 구성된 완전지의 과방전 중 전기화학적 거동분석)

  • Bong Jin Kim;Geonwoo Yoon;Inje Song;Ji Heon Ryu
    • Journal of the Korean Electrochemical Society / v.26 no.1 / pp.11-18 / 2023
  • As the use of lithium-ion secondary batteries increases rapidly with the growth of the electric vehicle market, the disposal and recycling of spent batteries has been raised as a serious problem. Since the stored energy must be removed before spent batteries can be recycled, an effective discharging process is required. In this study, graphite and NCM622 were used as active materials to manufacture coin-type half cells and full cells, and the electrochemical behavior during overdischarge was analyzed. When the positive and negative electrodes are overdischarged separately using half cells, a conversion reaction in which the transition metal oxide is reduced to metal occurs first at the positive electrode, while at the negative electrode a side reaction occurs in which the Cu current collector corrodes following decomposition of the SEI film. These side reactions during overdischarge are difficult to initiate because a large polarization is required at the initial stage. When the full cell is overdischarged, the cell reaches 0 V and the overdischarge ends with almost no side reactions due to this large polarization. However, if a full cell whose capacity has degraded through cycling is overdischarged, corrosion of the Cu current collector occurs at the negative electrode. Therefore, a cycled cell requires an appropriate treatment process, because its electrochemical behavior during overdischarge differs from that of a fresh cell.

Consumption Inequality of Elderly Households (노인가구의 소비불평등 분석)

  • Lee, So-chung
    • Korean Journal of Social Welfare Studies / v.40 no.1 / pp.235-260 / 2009
  • This study aims to analyze the consumption inequality of Korean elderly households. The justification for analyzing consumption inequality during old age can be summarized as follows. First, due to the rapid growth of the elderly population, intra-generational inequality among older people will bring greater consequences to society in the coming years. Second, inequality becomes more visible during old age, when income stops playing a major role and everyday life is based mostly on consumption activities. For the analysis, this study used the 2nd, 5th, 7th and 9th waves of the 『Korea Labor and Income Panel Study』. The findings are as follows. First, the total consumption inequality of elderly households has been gradually decreasing since the economic crisis. The Gini coefficient of consumption items representing modern consumption culture, such as expenditures on eating out and car maintenance, is also decreasing. However, the inequality contribution rate of such items is continually rising, indicating that whereas elderly households in general are being assimilated into the mainstream consumption culture, the disparity between classes is continually expanding. Second, the Gini coefficient and inequality contribution rate of essentials such as food and housing have decreased, indicating that basic living standards in general have risen. Third, the inequality of education expenditure has been increasing since 2000, which implies that the broader problem of education inequality may also affect elderly households.
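
The Gini coefficient used throughout the analysis above can be computed directly from household expenditure data via the mean absolute difference. A minimal sketch — the expenditure figures are invented for illustration:

```python
def gini(values):
    """Gini coefficient via the mean absolute difference:
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean)."""
    n = len(values)
    mean = sum(values) / n
    mad = sum(abs(a - b) for a in values for b in values)
    return mad / (2 * n * n * mean)

# Perfect equality gives 0; concentrating all consumption in one
# household pushes the coefficient toward 1.
equal  = [100, 100, 100, 100]   # hypothetical monthly expenditures
skewed = [0, 0, 0, 400]

g_equal  = gini(equal)   # -> 0.0
g_skewed = gini(skewed)  # -> 0.75
```

Item-level Gini coefficients (e.g. for eating out or car maintenance) are computed the same way on the per-item expenditure columns.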

Investigation of the Current Status on Generation Route and Recycling of Residue derived Animals (동물성 잔재물의 발생경로 및 재활용업체의 재활용 실태에 대한 조사)

  • Lee, Ju-Ho;Phae, Chae-Gun
    • Journal of the Korea Organic Resources Recycling Association / v.17 no.2 / pp.81-92 / 2009
  • This study investigated the generation of animal-derived residues produced during slaughter and the subsequent channels for processing them; the status of their recycling by recycling businesses was also surveyed, to serve as basic data for management purposes. At present, animal slaughter is highly specialized, and the animal-derived residues obtained from slaughter are separated and dissected into different parts to serve as feed and residual compost. Edible portions of the residues are used for edible purposes, while inedible parts such as horns, claws, and fats were confirmed to be recycled. Poultry residues are mostly recycled as single-component feed, used in their original form, or turned into residual compost, whereas fish remains are recycled mostly as single-component feed. Most companies that recycle animal-derived residues are situated in provinces such as Jeollanam-do, Jeollabuk-do, Gyeongsangnam-do, and Gyeongsangbuk-do, where many slaughterhouses are located, and many of these recycling businesses are in the vicinity of the slaughterhouses. The majority of these facilities can process animal-derived residues in the range of 10~60 ton/day, which is quite small in terms of processing capacity. A problem encountered in recycling these residues is the foul smell caused by decomposition, for which appropriate measures have to be taken. The residues are often collected and transported directly, to save costs and secure the required amount of material.

Stability of $^{188}Re$ Labeled Antibody for Radioimmunotherapy and the Effect of Stabilizing Agents (방사면역치료용 $^{188}Re$ 표지 항체의 안정성과 안정제의 효과)

  • Chang, Young-Soo;Kim, Bo-Kwang;Jeong, Jae-Min;Chung, June-Key;Lee, Seung-Jin;Lee, Dong-Soo;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.36 no.3 / pp.195-202 / 2002
  • Purpose: For clinical application of beta-emitter-labeled antibodies, high specific activity is important. Carrier-free $^{188}Re$ from a $^{188}W/^{188}Re$ generator is an ideal radionuclide for this purpose. However, the low stability of $^{188}Re$-labeled antibody, especially at high specific activity, due to radiolytic decomposition by its high-energy (2.1 MeV) beta ray, has been a problem. We studied the stability of $^{188}Re$-labeled antibody and the stabilizing effect of several stabilizers. Materials and Methods: Pre-reduced monoclonal antibody (CEA79.4) was labeled with $^{188}Re$ by incubation with generator-eluted $^{188}Re-perrhenate$ in the presence of stannous tartrate for 2 hr at room temperature. The radiochemical purity of each preparation was determined by chromatography. Human serum albumin (2%) was added to the labeled antibodies. The stability of $^{188}Re-CEA79.4$ was investigated in the presence of ascorbic acid, ethanol, or Tween 80 as stabilizing agents. Results: Labeling efficiencies were $88{\pm}4%\;(n=12)$, and specific activities of $1.25{\sim}4.77MBq/{\mu}g$ were obtained. When stored after purging with $N_2$, all preparations were stable for 10 hr; however, stability decreased in the presence of air. Perrhenate and $^{188}Re-tartrate$ were the major impurities in degraded preparations, and colloid formation was not a significant problem in any case. Addition of ascorbic acid stabilized the labeled antibodies both under $N_2$ and under air by reducing the formation of perrhenate. Conclusion: $^{188}Re$-labeled antibody of high specific activity is unstable, especially in the presence of oxygen, and the addition of ascorbic acid increases its stability.

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.175-197 / 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label among two classes, multi-class classification with one label among several classes, and multi-label classification with multiple labels among several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because of its characteristic of having multiple labels. In addition, since the number of labels to be predicted grows as the number of labels and classes increases, performance improvement becomes difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted, in which (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) training is performed to predict the compressed label, and (iii) the predicted label is restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only the linear relationships between labels or compress the labels by random transformation, they have difficulty capturing the non-linear relationships between labels, and thus cannot create a latent label space that sufficiently contains the information of the original label space.
Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This can be traced to the vanishing gradient problem that occurs during backpropagation. To solve this problem, the skip connection was devised: by adding the input of a layer to its output, gradients are preserved during backpropagation, and efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder, to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract, and evaluated the multi-label classification by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods.
This shows that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and across the number of dimensions of the latent label space.
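
The encode-predict-restore pipeline can be sketched as a forward pass in plain Python. This is an illustrative miniature, not the paper's model: the weights are random and untrained, the dimensions are toy-sized, and for brevity the skip connection is shown only in the encoder (the paper adds them to both encoder and decoder):

```python
import random

random.seed(0)

def linear(x, w, b):
    # w has shape out_dim x in_dim; returns w @ x + b.
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]

def relu(x):
    return [max(0.0, v) for v in x]

def rand_layer(out_dim, in_dim):
    w = [[random.gauss(0, 0.1) for _ in range(in_dim)] for _ in range(out_dim)]
    return w, [0.0] * out_dim

LABEL_DIM, HIDDEN, LATENT = 8, 8, 3   # toy sizes

w1, b1 = rand_layer(HIDDEN, LABEL_DIM)   # encoder hidden layer
w2, b2 = rand_layer(LATENT, HIDDEN)      # projection to latent label space
w3, b3 = rand_layer(HIDDEN, LATENT)      # decoder hidden layer
w4, b4 = rand_layer(LABEL_DIM, HIDDEN)   # restoration to original label space

def encode(label):
    # Skip connection: the layer's input is added to its activated output,
    # which is what preserves gradients when such layers are stacked deep.
    h = [a + s for a, s in zip(relu(linear(label, w1, b1)), label)]
    return linear(h, w2, b2)

def decode(z):
    h = relu(linear(z, w3, b3))
    return linear(h, w4, b4)   # scores over the original label space

label = [1, 0, 0, 1, 0, 1, 0, 0]   # multi-hot keyword vector (toy)
z = encode(label)                  # 3-dimensional latent label
restored = decode(z)               # back to 8 dimensions
```

In the study's setting, a separate model predicts `z` from the paper abstract; the decoder then restores the predicted latent vector to the keyword label space for evaluation.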

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically, yet achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance.
First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well in multi-class problems as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods or the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another. Such data sets often cause a default classifier to be built due to the skewed boundary, reducing classification accuracy. SVM ensemble learning is one machine learning approach for coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques: it constructs a composite classifier by sequentially training classifiers while increasing the weights on misclassified observations through iterations. Observations incorrectly predicted by previous classifiers are chosen more often than correctly predicted ones, so boosting produces new classifiers that better predict the examples on which the current ensemble performs poorly. In this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem.
Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can consider the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers does not arise by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) shows higher accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test was used to examine whether the performance of each classifier over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
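
The difference between arithmetic and geometric mean-based accuracy, which motivates MGM-Boost, is easy to see on an imbalanced toy sample: a classifier that always predicts the majority class scores well on the former and zero on the latter. A small Python sketch (the rating data is invented):

```python
def per_class_recall(y_true, y_pred, classes):
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        hit = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(hit / len(idx))
    return recalls

def geometric_mean_accuracy(y_true, y_pred, classes):
    # Geometric mean of the per-class recalls: zero if any class
    # is never predicted correctly, so minority classes count.
    prod = 1.0
    for r in per_class_recall(y_true, y_pred, classes):
        prod *= r
    return prod ** (1.0 / len(classes))

# Imbalanced toy ratings: 8 "A" bonds, 2 "B" bonds; the classifier
# always guesses the majority class "A".
y_true = ["A"] * 8 + ["B"] * 2
y_pred = ["A"] * 10

arith = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)  # 0.8
gmean = geometric_mean_accuracy(y_true, y_pred, ["A", "B"])        # 0.0
```

Maximizing the geometric mean therefore forces the booster to keep improving on minority rating classes, which is the mechanism MGM-Boost folds into AdaBoost's weight updates.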

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure are becoming important. System monitoring data is multidimensional time series data, which poses the difficulty of considering both the characteristics of multidimensional data and those of time series data. When dealing with multidimensional data, correlations between variables should be considered, yet existing probability-based, linear, and distance-based methods degrade due to the limitation known as the curse of dimensionality. In addition, time series data is commonly preprocessed with the sliding window technique and time series decomposition for autocorrelation analysis; these techniques increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used early on, and there are currently active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when the data is non-homogeneous, and they do not detect local outliers well. The regression analysis approach learns a regression formula based on parametric statistics and detects abnormality by comparing the predicted and actual values; its performance drops when the model is not solid or when the data contains noise or outliers, and it carries the restriction that the training data must be free of noise and outliers. The autoencoder, based on artificial neural networks, is trained to produce output as similar as possible to its input. It has many advantages over existing probability and linear models, cluster analysis, and supervised learning, as it can be applied to data that does not satisfy probability-distribution or linearity assumptions.
In addition, it enables unsupervised learning without labeled data. However, autoencoders are limited in identifying local outliers in multidimensional data, and the dimensionality of the data increases greatly due to the characteristics of time series data. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to address the limitation in identifying local outliers in multidimensional data. Multimodal models are commonly used to learn different types of input, such as voice and image; the different modals share the autoencoder's bottleneck and thereby learn correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but in this study time was used as the condition in order to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The restoration performance of the autoencoders for 41 variables was confirmed for the proposed and comparison models. Restoration performance differs by variable: restoration works well for the Memory, Disk, and Network modals, whose loss values are small in all three autoencoder models; the Process modal showed no significant difference among the three models; and the CPU modal showed excellent performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance in the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, UAE.
In particular, the recall of CMAE was 0.9828, confirming that it detects almost all abnormalities. The accuracy of the model also improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows add procedures that must be managed, and their dimensional increase can reduce computational speed at inference time. The proposed model is therefore easy to apply to practical tasks in terms of inference speed and model management.
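
The conditional input the study describes — using time as a condition instead of widening each sample with a sliding window of past values — can be sketched in Python. The cyclical hour encoding below is a common choice for periodic conditions and is an assumption of this sketch, not a detail taken from the paper:

```python
import math

def time_condition(hour):
    """Cyclical encoding of time-of-day for use as the conditional input:
    hour 0-23 is mapped onto the unit circle, so 23:00 and 00:00 end up
    close together and daily periodicity becomes learnable."""
    angle = 2 * math.pi * hour / 24
    return [math.sin(angle), math.cos(angle)]

def conditioned_sample(metrics, hour):
    # Instead of concatenating a window of past values (which multiplies
    # the input dimension), append a fixed 2-value time condition to the
    # current metric vector.
    return metrics + time_condition(hour)

# Hypothetical normalized monitoring metrics (e.g. CPU, memory, disk) at 13:00.
sample = conditioned_sample([0.42, 0.10, 0.93], hour=13)
```

A window of length w over d metrics yields w*d input features, whereas the conditioned sample stays at d + 2 regardless of how much history the periodicity spans, which is the dimensional saving the abstract attributes to CMAE.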