• Title/Summary/Keyword: Cross Decomposition

A Study on the Curing Behaviors of Glass/Epoxy Prepreg by Dielectrometer and the Thermal Properties of Cured Glass/Epoxy Composites (Dielectrometer를 이용한 Glass/Epoxy 프리프레그의 경화거동 및 경화물의 열적 특성연구)

  • 제갈영순;이원철;전영재;윤남균
    • Polymer(Korea)
    • /
    • v.24 no.3
    • /
    • pp.350-357
    • /
    • 2000
  • The curing behavior of a glass/epoxy prepreg for printed circuit boards (PCB) was studied using a dielectrometer and a differential scanning calorimeter. The prepreg showed its lowest ionic viscosity at about $115^{\circ}C$, after which the ionic viscosity gradually increased up to $150^{\circ}C$. This indicates that the curing reaction of the prepreg started at $115^{\circ}C$ and that the molecular weight increased through the accelerated thermal cross-linking reaction. The loss factor and tan $\delta$ values were also measured and discussed. The dielectric behavior of the prepreg system was also measured over the cure cycle for PCB. The material was found to be thermally stable up to about $300^{\circ}C$ and showed abrupt decomposition beyond this temperature.
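
A minimal sketch (not from the paper) of how the cure onset can be read off dielectric cure-monitoring data: the onset is taken as the temperature at which the log ionic viscosity reaches its minimum. The temperature ramp and viscosity values below are hypothetical.

```python
# Hypothetical temperature ramp and log ionic viscosity trace; the cure
# onset is taken where the ionic viscosity is lowest, as in the abstract.
import numpy as np

temperature_C = np.array([90, 100, 110, 115, 120, 130, 140, 150])
log_ionic_visc = np.array([7.2, 6.8, 6.4, 6.1, 6.3, 6.9, 7.6, 8.4])

onset = temperature_C[np.argmin(log_ionic_visc)]  # minimum viscosity = onset of cross-linking
print(f"Estimated cure onset: {onset} deg C")
```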

Reduced Order Modeling of Marine Engine Status by Principal Component Analysis (주성분 분석을 통한 선박 기관 상태의 차수 축소 모델링)

  • Seungbeom Lee;Jeonghwa Seo;Dong-Hwan Kim;Sangmin Han;Kwanwoo Kim;Sungwook Chung;Byeongwoo Yoo
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.61 no.1
    • /
    • pp.8-18
    • /
    • 2024
  • The present study concerns reduced order modeling of a marine diesel engine, which can be used for outlier detection in status monitoring and for carbon intensity index calculation. Principal Component Analysis (PCA) is introduced for the reduced order modeling, with a focus on the feasibility of detecting and treating nonlinear variables. By cross-correlation, seven nonlinear data channels are found among the 23 data channels, namely the fuel mode, the exhaust gas temperature after the turbocharger, and the cylinder coolant temperatures. The dataset is centered so that the mean is located at the nominal continuous rating. A polynomial representation of the dataset is also applied to reflect the linearity between the engine speed and the other channels. The first principal mode shows strong linear effects across most data channels, reflecting the largely linear behavior of the system, while the nonlinear variables are effectively explained by the other modes. The second mode concerns the cylinder cooling water temperature, which shows little correlation with the other variables. The third and fourth modes correlate with the fuel mode and the turbocharger exhaust gas temperature, which are less linear than the other channels. PCA is proven to be applicable to data given in binary form, such as the fuel mode selection, as well as to numerical data.
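
A minimal sketch of the reduced order modeling step, assuming a (samples × 23) engine-log matrix like the one described; the data here are random stand-ins, and plain standardization is shown where the paper centers at the nominal continuous rating.

```python
# Random data stand in for the 23 engine data channels; standardization is
# shown where the paper centers at the nominal continuous rating.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 23))          # hypothetical engine log: samples x channels

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=4)
scores = pca.fit_transform(X_std)        # reduced-order representation
print(pca.explained_variance_ratio_)     # variance captured by each mode
```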

Analysis of Interactions in Multiple Genes using IFSA(Independent Feature Subspace Analysis) (IFSA 알고리즘을 이용한 유전자 상호 관계 분석)

  • Kim, Hye-Jin;Choi, Seung-Jin;Bang, Sung-Yang
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.3
    • /
    • pp.157-165
    • /
    • 2006
  • Changes in the external and internal factors of a cell require specific biological functions to maintain life. Such functions encourage particular genes to interact with and regulate each other in multiple ways. Accordingly, we applied IFSA, a linear decomposition model that derives hidden variables, called 'expression modes', corresponding to these functions. To interpret gene interaction and regulation, we used a cross-correlation method given an expression mode. Linear decomposition models such as principal component analysis (PCA) and independent component analysis (ICA) have been shown to be useful in analyzing high-dimensional DNA microarray data, compared to clustering methods. These methods assume that gene expression is controlled by a linear combination of uncorrelated or independent latent variables. However, they have difficulty grouping similar patterns that are slightly time-delayed or asymmetric, since only exactly matched patterns are considered. To overcome this, we employ the IFSA method of [1] to locate phase- and shift-invariant features. Membership scoring functions play an important role in classifying genes, since linear decomposition models basically aim at data reduction, not at grouping data; we introduce a new scoring function essential to the IFSA method. In this paper we stress that IFSA is useful for grouping functionally related genes in the presence of time shifts and expression phase variance. Ultimately, we propose a new approach to investigating the interaction information of multiple genes.
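
A minimal sketch of the general idea (a linear decomposition followed by lagged cross-correlation against an expression mode). IFSA itself is not a standard library routine, so scikit-learn's FastICA stands in for it here; the data and the `lagged_corr` helper are illustrative.

```python
# FastICA stands in for IFSA (not a standard library routine); lagged
# cross-correlation tolerates the time shifts the abstract discusses.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 200))           # synthetic: 50 genes x 200 time points

ica = FastICA(n_components=5, random_state=0)
modes = ica.fit_transform(X.T).T         # five "expression modes" over time

def lagged_corr(gene, mode, max_lag=5):
    """Best absolute correlation over a window of time shifts."""
    return max(abs(np.corrcoef(np.roll(gene, k), mode)[0, 1])
               for k in range(-max_lag, max_lag + 1))

print(f"gene 0 vs mode 0: {lagged_corr(X[0], modes[0]):.3f}")
```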

Optimal Configuration of the Truss Structures by Using Decomposition Method of Three-Phases (3단계(段階) 분할기법(分割技法)에 의한 평면(平面)트러스 구조물(構造物)의 형상(形狀) 최적화(最適化)에 관한 연구(硏究))

  • Lee, Gyu Won;Song, Gi Beom
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.12 no.3
    • /
    • pp.39-55
    • /
    • 1992
  • In this research, a three-level decomposition technique has been developed for the configuration design optimization of truss structures. In the first level, behavior variables are used as the design variables and the strain energy is treated as the cost function to be maximized, so that the truss structure can absorb maximum energy. As design constraints of the optimal design problem, allowable stress, buckling stress, and displacement under multi-loading conditions are considered. In the second level, the design problem is formulated using the cross-sectional areas as the design variables and the weight of the truss structure as the cost function; the design constraint is the equilibrium equation with the optimal displacements obtained in the first level. In the third level, the nodal point coordinates of the truss structure are used as the coordinating variables and the weight is again taken as the cost function. An advantage of the three-level decomposition technique is that the first- and second-level design problems are simple because they are linear programming problems. Moreover, the method is efficient because it does not require time-consuming structural analysis or sensitivity analysis during the design optimization process. By treating the nodal point coordinates as design variables, the third level becomes an unconstrained optimal design problem, which is easier to solve. Furthermore, by using different convergence criteria at each level, improved convergence can be obtained. The proposed technique has been tested on four different truss structures and yields optimum designs almost identical to those in the literature, with an efficient convergence rate regardless of constraint types and truss configuration.
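
A minimal sketch of how the second-level problem can be posed as a linear program, as the abstract describes: with member strains fixed from the first level, member forces are linear in the cross-sectional areas, so minimizing weight under the equilibrium equation is an LP. The three-bar geometry, strains, loads, and material constants below are hypothetical.

```python
# Toy three-bar example: strains are assumed fixed by the first level, so
# equilibrium B @ (E * eps * A) = F is linear in the areas A.
import numpy as np
from scipy.optimize import linprog

E, rho = 200e9, 7850.0                    # modulus [Pa] and density [kg/m^3], assumed
L = np.array([1.0, 1.4142, 1.0])          # member lengths [m]
eps = np.array([8e-4, -5e-4, 6e-4])       # member strains fixed at level one
B = np.array([[1.0, 0.7071, 0.0],         # equilibrium matrix (free DOFs x members)
              [0.0, 0.7071, 1.0]])
F = np.array([1.2e5, 0.8e5])              # external loads [N]

c = rho * L                               # weight objective: sum(rho * L_i * A_i)
A_eq = B * (E * eps)                      # member force N_i = E * eps_i * A_i
res = linprog(c, A_eq=A_eq, b_eq=F, bounds=[(1e-5, None)] * 3)
print(res.x)                              # optimal cross-sectional areas [m^2]
```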

Respiratory Health of Foundry Workers Exposed to Binding Resin (RESIN 취급 주물공장 근로자들의 호흡기 건강에 관한 연구)

  • Choi, Jung-Keun;Rhee, Chang-Ok;Paek, Do-Myung;Choi, Byung-Soon;Shin, Yong-Chul;Chung, Ho-Keun
    • Journal of Preventive Medicine and Public Health
    • /
    • v.27 no.2 s.46
    • /
    • pp.274-285
    • /
    • 1994
  • The effects of resin on respiratory health were investigated in 309 workers from four iron and steel foundries, and the results were compared with those from 122 workers at the same industries who were not significantly exposed to resin gas or silica dust. Phenol-formaldehyde resin was used in the core making and molding processes, and workers were exposed to its decomposition products as well as to silica-containing dust. The subjects were grouped according to formaldehyde, dust, and other gas exposures, and smoking habits were also considered in the analysis. A standardized respiratory symptom questionnaire was administered by trained interviewers. Chest radiographs, pulmonary function tests, and methacholine challenge tests were done. Environmental measurements at the breathing zone were carried out to determine levels of formaldehyde, respirable dust, and total dust. Foundry workers had a higher prevalence of symptoms of chronic bronchitis, with chronic phlegm and chronic cough, when exposed to dust. Exposure to gas was significantly associated with lowered $FEV_1$ and obstructive pulmonary function changes. Exposure to formaldehyde and phenol gas was associated with wheezing among workers, but $FEV_1$ changes after methacholine challenge were not significantly different among the exposure groups. When asthma was defined as the presence of bronchial hyperreactivity with more than a 20% decrease in $FEV_1$ after methacholine challenge, 17 of the 222 workers tested had asthma. Fewer asthmatic workers were found among the groups exposed to formaldehyde, gas, and dust, which indicates a healthy worker effect in a cross-sectional study. The concentration of formaldehyde gas ranged from 0.24 to 0.43 ppm among the studied foundries. The authors conclude that formaldehyde and phenol gas from combusted resin is probably the cause of the asthmatic symptoms and also a selection force moving those with higher bronchial reactivity away from exposure.
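
A minimal worked check of the case definition used above: bronchial hyperreactivity is flagged when $FEV_1$ falls by more than 20% after the methacholine challenge. The spirometry values are illustrative.

```python
# Illustrative spirometry values for the >20% FEV1 drop criterion.
fev1_baseline, fev1_post = 3.8, 2.9       # liters, hypothetical
drop = (fev1_baseline - fev1_post) / fev1_baseline
print(f"FEV1 drop: {drop:.0%} -> {'hyperreactive' if drop > 0.20 else 'normal'}")
```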

Image Registration and Fusion between Passive Millimeter Wave Images and Visual Images (수동형 멀리미터파 영상과 가시 영상과의 정합 및 융합에 관한 연구)

  • Lee, Hyoung;Lee, Dong-Su;Yeom, Seok-Won;Son, Jung-Young;Guschin, Vladmir P.;Kim, Shin-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.6C
    • /
    • pp.349-354
    • /
    • 2011
  • Passive millimeter wave imaging has the capability of detecting concealed objects under clothing. It can also obtain interpretable images under low-visibility conditions such as rain, fog, smoke, and dust. However, the image quality is often degraded by low spatial resolution, low signal level, and low temperature resolution. This paper addresses image registration and fusion between passive millimeter wave images and visual images. The goal of this study is to combine and visualize two different types of information: a human subject's identity and any concealed objects. The image registration process is composed of body boundary detection and an affine transform that maximizes the cross-correlation coefficient of the two edge images. The image fusion process comprises three stages: a discrete wavelet transform for image decomposition, a fusion rule for merging the coefficients, and the inverse transform for image synthesis. In the experiments, various metallic and non-metallic objects, such as a knife, gel- or liquid-type beauty aids, and a phone, are detected by passive millimeter wave imaging. The registration and fusion process visualizes the meaningful information from the two different types of sensors.
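
A minimal sketch of the fusion stage, using PyWavelets for the decomposition and synthesis steps named above. The max-absolute rule for the detail bands is one common fusion rule, assumed here rather than taken from the paper, and the images are random stand-ins for the registered pair.

```python
# Random images stand in for the registered PMMW/visual pair; max-absolute
# merging of detail bands is an assumed (common) fusion rule.
import numpy as np
import pywt

rng = np.random.default_rng(2)
visual, pmmw = rng.random((128, 128)), rng.random((128, 128))

def fuse(a, b):
    """Keep the coefficient with the larger magnitude at each location."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

cA1, (cH1, cV1, cD1) = pywt.dwt2(visual, "db2")   # decomposition
cA2, (cH2, cV2, cD2) = pywt.dwt2(pmmw, "db2")

fused = pywt.idwt2(                                # merge, then synthesis
    (0.5 * (cA1 + cA2), (fuse(cH1, cH2), fuse(cV1, cV2), fuse(cD1, cD2))),
    "db2",
)
```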

Measurement System of Dynamic Liquid Motion using a Laser Doppler Vibrometer and Galvanometer Scanner (액체거동의 비접촉 다점측정을 위한 레이저진동계와 갈바노미터스캐너 계측시스템)

  • Kim, Junhee;Shin, Yoon-Soo;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.31 no.5
    • /
    • pp.227-234
    • /
    • 2018
  • Research on measuring and controlling the dynamic behavior of liquids, such as sloshing, has been actively undertaken in various engineering fields. Liquid vibration is measured in studies of tuned liquid dampers (TLDs), which attenuate the wind-induced motion of building structures. To overcome the limitations of existing wave-height sensors, a method of measuring liquid vibration in a TLD using a laser Doppler vibrometer (LDV) and a galvanometer scanner is proposed in this paper: the principle of measuring speed and displacement is discussed, and a multi-point measurement system built from a single-point LDV, based on the operating principles of the galvanometer scanner, is established. Four-point liquid vibration on the TLD is measured, and the time-domain data of each point are compared with conventional video sensing data. It is confirmed that the waveform separates into traveling-wave and standing-wave components. In addition, the data, which carry a measurement delay from the sequential scanning, are cross-correlated before singular value decomposition is performed. The natural frequencies and mode shapes are compared with theoretical and video sensing results.
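
A minimal sketch of the signal-processing steps named at the end of the abstract: align the sequentially scanned point signals by their cross-correlation lag, then take an SVD of the multi-point response matrix to separate spatial patterns. The four signals are synthetic sinusoids standing in for LDV wave-height data.

```python
# Synthetic sinusoids stand in for the four scanned wave-height signals.
import numpy as np

t = np.arange(0.0, 10.0, 0.01)            # 100 Hz sampling, assumed
signals = np.stack([np.sin(2 * np.pi * 0.5 * t + p) for p in (0.0, 0.4, 0.8, 1.2)])

def align(ref, sig):
    """Shift sig so its cross-correlation with ref peaks at zero lag."""
    lag = np.argmax(np.correlate(sig, ref, mode="full")) - (len(ref) - 1)
    return np.roll(sig, -lag)

aligned = np.stack([align(signals[0], s) for s in signals])
U, S, Vt = np.linalg.svd(aligned, full_matrices=False)
print(U[:, 0])                            # dominant spatial pattern across the 4 points
```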

A Method for Prediction of Quality Defects in Manufacturing Using Natural Language Processing and Machine Learning (자연어 처리 및 기계학습을 활용한 제조업 현장의 품질 불량 예측 방법론)

  • Roh, Jeong-Min;Kim, Yongsung
    • Journal of Platform Technology
    • /
    • v.9 no.3
    • /
    • pp.52-62
    • /
    • 2021
  • Quality control is critical at manufacturing sites and is key to predicting the risk of quality defects before manufacturing. However, the reliability of manual quality control methods is limited by human and physical constraints, because manufacturing processes vary across industries. These limitations become particularly obvious in domains with numerous manufacturing processes, such as the manufacture of major nuclear equipment. This study proposes a novel method for predicting the risk of quality defects using natural language processing and machine learning. Production data collected over 6 years at a factory that manufactures main equipment installed in nuclear power plants were used. In the text preprocessing stage, a mapping method was applied to the word dictionary so that domain knowledge could be appropriately reflected, and a hybrid algorithm combining n-grams, Term Frequency-Inverse Document Frequency, and Singular Value Decomposition was constructed for sentence vectorization. Next, in the experiment to classify risky processes resulting in poor quality, k-fold cross-validation was applied to cases ranging from unigrams to cumulative trigrams. Furthermore, to obtain objective experimental results, Naive Bayes and Support Vector Machine were used as classification algorithms, achieving a maximum accuracy of 0.7685 and a maximum F1-score of 0.8641. The performance of the proposed method was also compared with the votes of field engineers, and the results revealed that the proposed method outperformed them. Thus, the method is effective and can be implemented for quality control at manufacturing sites.
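
A minimal sketch of the vectorization-plus-classification chain named above (n-grams, TF-IDF, truncated SVD, an SVM classifier, and k-fold cross-validation), assuming scikit-learn; the corpus and labels are tiny placeholders, not the production data.

```python
# Placeholder corpus and labels; the pipeline mirrors the named chain.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

docs = ["weld seam reinspected", "coolant pipe fit-up ok"] * 50
labels = [1, 0] * 50                       # 1 = process judged risky

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),   # unigrams through trigrams, TF-IDF weighted
    TruncatedSVD(n_components=10),         # SVD-based dimensionality reduction
    SVC(kernel="rbf"),
)
print(cross_val_score(model, docs, labels, cv=5, scoring="f1").mean())
```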

A Meta Analysis of Using Structural Equation Model on the Korean MIS Research (국내 MIS 연구에서 구조방정식모형 활용에 관한 메타분석)

  • Kim, Jong-Ki;Jeon, Jin-Hwan
    • Asia pacific journal of information systems
    • /
    • v.19 no.4
    • /
    • pp.47-75
    • /
    • 2009
  • Recently, research on Management Information Systems (MIS) has laid out theoretical foundations and academic paradigms by introducing diverse theories, themes, and methodologies. In particular, the academic paradigms of MIS encourage a user-friendly approach by developing technologies from the users' perspective, which reflects the existence of strong causal relationships between information systems and user behavior. As in other areas of social science, the use of structural equation modeling (SEM) has rapidly increased in recent years, especially in the MIS area. The SEM technique is important because it provides powerful ways to address key IS research problems, and it has the unique ability to examine a series of causal relationships while simultaneously analyzing multiple independent and dependent variables. In spite of the many benefits it provides to MIS researchers, the technique has some potential pitfalls. The research objective of this study is to provide guidelines for the appropriate use of SEM based on an assessment of its current practice in MIS research. This study focuses on several statistical issues related to the use of SEM in MIS research. Selected articles are assessed in three parts through a meta analysis. The first part is related to the initial specification of the theoretical model of interest. The second concerns data screening prior to model estimation and testing. The last part concerns the estimation and testing of theoretical models based on empirical data. This study reviewed the use of SEM in 164 empirical research articles published in four major MIS journals in Korea (APJIS, ISR, JIS and JITAM) from 1991 to 2007. APJIS, ISR, JIS and JITAM accounted for 73, 17, 58, and 16 of the applications, respectively, and the number of published applications has increased over time. LISREL was the most frequently used SEM software among MIS researchers (97 studies, 59.15%), followed by AMOS (45 studies, 27.44%). In the first part, regarding the initial specification of the theoretical model, all of the studies used cross-sectional data; such studies may be better able to explain their structural model as a set of relationships. Most of the SEM studies employed confirmatory-type analysis (146 articles, 89%). Regarding model formulation, 159 studies (96.9%) specified a full structural equation model; in only 5 was SEM used for a measurement model with a set of observed variables. The average sample size across all models was 365.41, with samples as small as 50 and as large as 500. The second part concerns data screening prior to model estimation and testing. Data screening is important for researchers, particularly in defining how they deal with missing values. Overall, data screening was discussed in 118 studies (71.95%), while no study discussed evidence of multivariate normality for its models. In the third part, concerning the estimation and testing of theoretical models on empirical data, assessing model fit is one of the most important issues because it provides adequate statistical power for research models. Multiple fit indices were used in the SEM applications.
The $\chi^2$ test was reported in most of the studies (146, 89%), whereas the normed $\chi^2$ was reported less frequently (65 studies, 39.64%); a normed $\chi^2$ of 3 or lower is required for adequate model fit. The most popular model fit indices were GFI (109, 66.46%), AGFI (84, 51.22%), NFI (44, 47.56%), RMR (42, 25.61%), CFI (59, 35.98%), RMSEA (62, 37.80%), and NNFI (48, 29.27%). Regarding tests of construct validity, convergent validity was examined in 109 studies (66.46%) and discriminant validity in 98 (59.76%); 81 studies (49.39%) reported the average variance extracted (AVE). However, there was little discussion of direct (47, 28.66%), indirect, and total effects in the SEM models. Based on these findings, we suggest general guidelines for the use of SEM and propose recommendations concerning latent variable models, raw data, sample size, data screening, reporting of parameter estimates, model fit statistics, multivariate normality, confirmatory factor analysis, reliabilities, and the decomposition of effects.
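
A minimal worked check of the normed $\chi^2$ criterion mentioned above ($\chi^2/df \le 3$ for adequate fit); the statistic and degrees of freedom are illustrative.

```python
# Illustrative fit statistics for the normed chi-square cutoff.
chi2, df = 412.7, 164
normed = chi2 / df
print(f"normed chi2 = {normed:.2f} -> {'adequate' if normed <= 3 else 'poor'} fit")
```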

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.29-45
    • /
    • 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and searches to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary-class classification. Second, approximation algorithms (e.g. decomposition methods and the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, a difficulty in multi-class prediction problems is the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another class. Such data sets often produce a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach to coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted.
Thus, boosting attempts to produce new classifiers that are better able to predict examples for which the current ensemble's performance is poor. In this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can consider the geometric mean-based accuracy and errors across the classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is then used in turn as the test set while the classifier trains on the other nine sets. That is, the cross-validated folds are tested independently for each algorithm. Through these steps, results were obtained for the classifiers in each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
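
A minimal sketch of the evaluation protocol described above: repeated stratified 10-fold cross-validation with different random seeds, comparing arithmetic accuracy against the geometric mean of per-class recalls on an imbalanced multi-class problem. MGM-Boost itself is the authors' method and is not reproduced here; a stock AdaBoost stands in, and the data are synthetic.

```python
# Synthetic imbalanced 4-class data; a stock AdaBoost stands in for the
# authors' MGM-Boost, and the geometric mean is taken over per-class recalls.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict

X, y = make_classification(n_samples=600, n_classes=4, n_informative=8,
                           weights=[0.55, 0.25, 0.15, 0.05], random_state=0)

for seed in (0, 1, 2):                                  # three repetitions
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    pred = cross_val_predict(AdaBoostClassifier(random_state=seed), X, y, cv=cv)
    arith = accuracy_score(y, pred)
    gmean = np.prod(recall_score(y, pred, average=None)) ** 0.25
    print(f"seed {seed}: arithmetic {arith:.3f}, geometric mean {gmean:.3f}")
```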