• Title/Summary/Keyword: Power modeling

Search Result 3,046, Processing Time 0.034 seconds

The Impact of Conflict and Influence Strategies Between Local Korean-Products-Selling Retailers and Wholesalers on Performance in Chinese Electronics Distribution Channels: On Moderating Effects of Relational Quality (중국 가전유통경로에서 한국제품 현지 판매업체와 도매업체간 갈등 및 영향전략이 성과에 미치는 영향: 관계 질의 조절효과)

  • Chun, Dal-Young;Kwon, Joo-Hyung;Lee, Guo-Ming
    • Journal of Distribution Research
    • /
    • v.16 no.3
    • /
    • pp.1-32
    • /
    • 2011
  • I. Introduction: In the Chinese electronics industry, local wholesalers are still dominant, but power is rapidly shifting from wholesalers to retailers because foreign big-box retailers and local mass merchandisers have recently been growing fast. During such a transitional period, conflicts among channel members emerge as important issues. For example, when wholesalers who have more power exercise influence strategies to maintain their status, conflicts among manufacturers, wholesalers, and retailers will be intensified. Korean electronics companies in China need differentiated channel strategies, dealing with wholesalers and retailers simultaneously, to sell more Korean products in competition with foreign firms. For example, Korean electronics firms should utilize 'guanxi', or relational quality, to form long-term relationships with wholesalers rather than relying on power, which raises conflict issues. The major purpose of this study is to investigate the impact of conflict, dependency, and influence strategies between local Korean-products-selling retailers and wholesalers on performance in Chinese electronics distribution channels. In particular, this paper proposes effective distribution strategies for Korean electronics companies in China by analyzing the moderating effects of 'guanxi'. II. Literature Review and Hypotheses: The specific purposes of this study are as follows. First, causes of conflict between local Korean-products-selling retailers and wholesalers are examined from the perspectives of goal incongruence and role ambiguity, and the effects of these causes on the conflict perceived by local retailers are identified. Second, the effects of local retailers' dependency upon wholesalers on their perceived conflict are investigated. Third, the effects on conflict perceived by local retailers of non-coercive influence strategies, such as information exchange and recommendation, and coercive strategies, such as threats and legalistic pleas, exercised by wholesalers are explored.
Fourth, the effects of the level of conflict perceived by local retailers on their financial performance and satisfaction are verified. Fifth, the moderating effects of relational quality, namely 'guanxi', between wholesalers and retailers on the impact of wholesalers' influence strategies on retailers' performance are analyzed. Finally, the moderating effects of relational quality on the relationship between conflict and performance are examined. To accomplish the above-mentioned research objectives, Figure 1 and the following research hypotheses are proposed and verified. III. Measurement and Data Analysis: To verify the proposed research model and hypotheses, data were collected from 97 retailers selling Korean electronic products in the central and southern regions of China. Covariance analysis and moderated regression analysis were employed to validate the hypotheses. IV. Conclusion: The following results were drawn using structural equation modeling and hierarchical moderated regression. First, goal incongruence perceived by local retailers significantly affected conflict, but role ambiguity did not. Second, consistent with conflict spiral theory, the level of conflict decreased as retailers' dependency on wholesalers increased. Third, non-coercive influence strategies such as information exchange and recommendation implemented by wholesalers had significant effects on retailers' performance, such as sales and satisfaction, without raising conflict. On the other hand, coercive influence strategies such as threats and legalistic pleas had insignificant effects on performance despite increasing the level of conflict. Fourth, 'guanxi', namely relational quality between local retailers and wholesalers, showed unique effects on performance. In the case of non-coercive influence strategies, 'guanxi' did not play the role of a moderator; rather, relational quality and non-coercive influence strategies can serve as independent variables to enhance performance.
On the other hand, when 'guanxi' was well built through mutual trust and commitment, relational quality as a moderator can function positively to improve performance even when hostile, coercive influence strategies were implemented. Fifth, 'guanxi' significantly moderated the effects of conflict on performance. Even if conflict arises, local retailers who form solid relational quality can increase performance by dealing with dysfunctional conflict synergistically, compared with low-'guanxi' retailers. In conclusion, this study verified the importance of relational quality via 'guanxi' between local retailers and wholesalers in the Chinese electronics industry, because relational quality could offset the adverse effects of coercive influence strategies and conflict on performance.
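
The hierarchical moderated regression the conclusion refers to (testing whether 'guanxi' moderates the effect of an influence strategy on performance) can be sketched as follows. The variable names, sample values, and effect sizes here are hypothetical stand-ins, not the study's data; only the sample size of 97 mirrors the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 97  # sample size matching the study's 97 retailers

# Hypothetical standardized measures (illustrative, not the study's data)
coercive = rng.normal(size=n)   # coercive influence strategy
guanxi = rng.normal(size=n)     # relational quality (the moderator)
perf = (0.2*coercive + 0.5*guanxi + 0.4*coercive*guanxi
        + rng.normal(scale=0.5, size=n))

def ols(y, *cols):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones_like(y), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Step 1: main effects only; Step 2: add the product (interaction) term.
b_main = ols(perf, coercive, guanxi)
b_mod = ols(perf, coercive, guanxi, coercive*guanxi)
# The last coefficient of b_mod estimates the moderating effect; a
# significant gain in fit from Step 1 to Step 2 indicates moderation.
```

In the hierarchical procedure, evidence of moderation comes from comparing the two steps, not from the interaction coefficient alone.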

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • In addition to stakeholders such as the managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model, rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol' enterprises, went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it focused only on certain key variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, where total collapse occurs in a single moment. The key variables used in predicting corporate defaults vary over time: Deakin's (1972) study, building on the analyses of Beaver (1967, 1968) and Altman (1968), confirms that the major factors affecting corporate failure have changed, and in Grice's (2001) study, changes in the importance of predictive variables were likewise found through Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over the course of time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for this time-dependent bias by means of a time series algorithm reflecting dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively.
To construct a consistent bankruptcy model across the flow of time, we first train a time series deep learning model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data including the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the training data and exhibits excellent predictive power. After that, each bankruptcy prediction model is retrained by integrating the training and validation data (2000~2008), applying the optimal parameters found in the validation step. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is thereby demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), it is shown that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are compared. Corporate data also pose the challenges of nonlinear variables, multicollinearity among variables, and lack of data.
The logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis and, eventually, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling, and it is also more effective in predictive power. Amid the Fourth Industrial Revolution, the current government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists beginning a study that combines financial data with deep learning time series algorithms.
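
The recurrence at the core of the LSTM models compared above can be sketched in a few lines of numpy. The dimensions (8 financial ratios scanned over a 7-year window) loosely mirror the study's setup, but the weights and inputs below are random placeholders, not fitted parameters:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: x is the input, (h, c) the previous hidden/cell state.
    W, U, b stack the input, forget, candidate, and output gate parameters."""
    z = W @ x + U @ h + b                 # all four gate pre-activations at once
    H = h.size
    i = 1.0/(1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0/(1.0 + np.exp(-z[H:2*H]))     # forget gate
    g = np.tanh(z[2*H:3*H])               # candidate cell state
    o = 1.0/(1.0 + np.exp(-z[3*H:]))      # output gate
    c_new = f*c + i*g                     # forget old memory, write new
    h_new = o*np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(1)
n_feat, H, T = 8, 4, 7   # e.g. 8 financial ratios over a 7-year window
W = rng.normal(scale=0.1, size=(4*H, n_feat))
U = rng.normal(scale=0.1, size=(4*H, H))
b = np.zeros(4*H)

h = np.zeros(H)
c = np.zeros(H)
for t in range(T):                        # scan one firm's annual history
    x_t = rng.normal(size=n_feat)
    h, c = lstm_step(x_t, h, c, W, U, b)
# h now summarizes the whole sequence; a logistic output layer on h
# would produce the default probability in an actual model.
```

The forget gate is what lets the model carry pre-crisis information across years while discounting stale signals, which is the dynamic behavior the static discriminant and logit models cannot express.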

DC Resistivity method to image the underground structure beneath river or lake bottom (하저 지반특성 규명을 위한 전기비저항 탐사)

  • Kim Jung-Ho;Yi Myeong-Jong;Song Yoonho;Cho Seong-Jun;Lee Seong-Kon;Son Jeongsul
    • Korean Society of Exploration Geophysicists: Conference Proceedings (한국지구물리탐사학회: 학술대회논문집)
    • /
    • 2002.09a
    • /
    • pp.139-162
    • /
    • 2002
  • Since weak zones or geological lineaments are likely to be eroded, weak zones may develop beneath rivers, and a careful evaluation of ground conditions is important when constructing structures passing under a river. DC resistivity surveys, however, have seldom been applied to the investigation of water-covered areas, possibly because of difficulties in data acquisition and interpretation. Acquiring high-quality data may be the most important factor, and it is more difficult than in a land survey because of the water layer overlying the underground structure to be imaged. Through numerical modeling and the analysis of case histories, we studied the method of resistivity surveying in water-covered areas, from the characteristics of the measured data, through the data acquisition method, to the interpretation method. We organized our discussion according to the installed locations of the electrodes, i.e., floating them on the water surface or installing them at the water bottom, since the methods of data acquisition and interpretation vary depending on the electrode location. Through this study, we confirmed that the DC resistivity method can provide fairly reasonable subsurface images. It was also shown that installing electrodes at the water bottom can give a subsurface image with much higher resolution than floating them on the water surface. Since the data acquired in water-covered areas have much lower sensitivity to the underground structure than those acquired on land, and can be contaminated by higher noise, such as streaming potential, it is very important to select an acquisition method and electrode array able to provide data with a higher signal-to-noise ratio as well as high resolving power.
The method of installing electrodes at the water bottom is suitable for detailed surveys because of its much higher resolving power, whereas the floating method, especially the streamer DC resistivity survey, suits reconnaissance surveys owing to its very high speed of field work.
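
For reference, the quantity a DC resistivity survey measures is apparent resistivity, computed from the injected current, the measured voltage, and a geometric factor set by the electrode positions. The sketch below shows the standard land-surface (half-space) four-electrode formula only; the water-covered case discussed above requires a modified factor accounting for the overlying water layer, which this sketch does not include:

```python
from math import pi

def geometric_factor(am, bm, an, bn):
    """Geometric factor K for a four-electrode array on a half-space surface:
    rho_a = K * (V / I), with K = 2*pi / (1/AM - 1/BM - 1/AN + 1/BN),
    where AM, BM, AN, BN are current-to-potential electrode distances."""
    return 2*pi / (1/am - 1/bm - 1/an + 1/bn)

def apparent_resistivity(v, i, k):
    """Apparent resistivity (ohm-m) from measured voltage and current."""
    return k * v / i

# Wenner array A-M-N-B with equal spacing a: K reduces to 2*pi*a.
a = 10.0  # electrode spacing in meters (illustrative)
k = geometric_factor(am=a, bm=2*a, an=2*a, bn=a)
rho_a = apparent_resistivity(v=0.05, i=0.1, k=k)  # 50 mV at 100 mA
```

The sensitivity and noise arguments in the abstract amount to choosing an array whose K keeps V/I measurably large at the water bottom.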

Process Design of Carbon Dioxide Storage in the Marine Geological Structure: II. Effect of Thermodynamic Equations of State on Compression and Transport Process (이산화탄소 해양지중저장 처리를 위한 공정 설계: II. 열역학 상태방정식이 압축 및 수송 공정에 미치는 영향 평가)

  • Huh, Cheol;Kang, Seong-Gil
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.11 no.4
    • /
    • pp.191-198
    • /
    • 2008
  • To design a reliable $CO_2$ marine geological storage system, it is necessary to perform numerical process simulation using a thermodynamic equation of state. The $CO_2$ capture process from major point sources such as power plants, the transport process from capture sites to storage sites, and the storage process injecting $CO_2$ into deep marine geological structures can all be simulated with numerical modeling. The purpose of this paper is to compare and analyze the relevant equations of state, including the ideal gas, BWRS, PR, PRBM, and SRK equations of state. We also studied the effect of the thermodynamic equation of state on the design of the compression and transport processes. As a result of comparing the numerical calculations, all the equations of state except the ideal gas law showed similar compression behavior for pure $CO_2$. On the other hand, the calculation results of BWRS, PR, and PRBM showed totally different behavior for the compression and transport of the captured $CO_2$ mixture from oxy-fuel combustion coal-fired plants. It is recommended to use PR or PRBM in designing the compression and transport processes for a $CO_2$ mixture containing NO, Ar, and $O_2$.
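
Of the cubic equations of state compared above, the Peng-Robinson (PR) form is standard enough to sketch. The snippet below solves the PR cubic for the compressibility factor Z of pure $CO_2$; the temperature and pressure are illustrative supercritical conditions, not the paper's design point:

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def pr_z_factor(T, P, Tc, Pc, omega):
    """Compressibility factor Z from the Peng-Robinson EOS (largest real root).
    T in K, P and Pc in Pa, Tc in K, omega = acentric factor."""
    kappa = 0.37464 + 1.54226*omega - 0.26992*omega**2
    alpha = (1 + kappa*(1 - np.sqrt(T/Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a*P/(R*T)**2
    B = b*P/(R*T)
    # PR cubic: Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1 - B), A - 3*B**2 - 2*B, -(A*B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real.max()  # vapor/supercritical branch

# CO2 critical constants and acentric factor (literature values)
Z = pr_z_factor(T=313.15, P=8.0e6, Tc=304.13, Pc=7.377e6, omega=0.2239)
# Z well below 1 here reflects the strong non-ideality that makes the
# ideal gas law unusable for CO2 compression/transport design.
```

For the $CO_2$ mixtures with NO, Ar, and $O_2$ that the paper studies, the a and b parameters would additionally require mixing rules with binary interaction coefficients.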

A Study on the Black Box Design using Collective Intelligence Analysis (집단지성 분석법을 활용한 블랙박스 디자인 개발 연구)

  • Lee, Hee young;Hong, Jeong Pyo;Cho, Kwang Soo
    • Science of Emotion and Sensibility
    • /
    • v.21 no.2
    • /
    • pp.101-112
    • /
    • 2018
  • This study was carried out to enhance the competitiveness of black box design for domestic and international companies, against the background of the explosive growth of the black box market driven by the development of black boxes for vehicle accident prevention and post-accident handling. In the past, the black box market produced products indiscriminately to meet the ever-increasing demand of consumers. Therefore, we thought a new design method was necessary to effectively investigate the needs of rapidly changing consumers. In this study, we aimed to identify the best-selling black boxes to understand the design flow, and the optimum mounting area for a black box, considering the uniqueness of the associated vehicle. Based on discussions with black box design experts, we studied the direction of the design and the problems with black box use, which were reflected in the development. Through this research, two types of design - a leading black box (type A) and a mass-production black box (type B) - were proposed for compatibility of the black box with the car. The leading type was positioned so that it wrapped around the room mirror hinge before the screw was fastened, in order to achieve an integrated design; we thereby achieved an integrated form and resolved the placement problem of an adhesive-mounted black box. To blend in, the mass-production black box used material and surface processing in the same way as the car, and adopted a slide structure that automatically turns off the main body power when the SD card is removed, reflecting consumer needs. This study considers evolving consumer needs through a case study and collective intelligence, and deals with implementation of the whole design process through mass production. In this study, we aimed to strengthen the competitiveness of black box design based on this design method and its realization.

An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.157-173
    • /
    • 2011
  • As Internet use has exploded in recent years, malicious attacks and hacking against networked systems occur frequently. This means that fatal damage can be caused by such intrusions to government agencies, public offices, and companies operating various systems. For these reasons, there is growing interest in and demand for intrusion detection systems (IDS): security systems for detecting, identifying, and appropriately responding to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These kinds of intrusion detection models perform well under normal situations, but they show poor performance when they meet a new or unknown pattern of network attack. For this reason, several recent studies have tried to adopt various artificial intelligence techniques that can proactively respond to unknown threats. In particular, artificial neural networks (ANNs) have been popular in prior studies because of their superior prediction accuracy. However, ANNs have some intrinsic limitations, such as the risk of overfitting, the requirement of a large sample size, and the lack of transparency in the prediction process (the 'black box' problem). As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model in order to improve the predictive ability of IDS. Our model is also designed to account for asymmetric error costs by optimizing the classification threshold. Generally, there are two common forms of error in intrusion detection.
The first error type is the false-positive error (FPE), in which normal activity is wrongly judged to be an intrusion, resulting in unnecessary remediation. The second error type is the false-negative error (FNE), which misjudges malicious activity as normal. Compared to FPE, FNE is more fatal. Thus, when considering the total cost of misclassification in IDS, it is more reasonable to assign heavier weights to FNE than to FPE. Therefore, we designed our proposed intrusion detection model to optimize the classification threshold so as to minimize the total misclassification cost. In this case, a conventional SVM cannot be applied because it is designed to generate a discrete output (i.e., a class label). To resolve this problem, we used the revised SVM technique proposed by Platt (2000), which is able to generate probability estimates. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 1,000 samples from them using random sampling. In addition, the SVM model was compared with logistic regression (LOGIT), decision trees (DT), and ANN to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, and ANN was run using NeuroShell 4.0. For SVM, LIBSVM v2.90, a freeware library for training SVM classifiers, was used. Empirical results showed that our proposed SVM-based model outperformed all the other comparative models in detecting network intrusions from the accuracy perspective. They also showed that our model reduced the total misclassification cost compared to the ANN-based intrusion detection model.
As a result, it is expected that the intrusion detection model proposed in this paper will not only enhance the performance of IDS but also lead to better management of FNE.
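
The threshold-optimization step the model relies on can be sketched independently of the SVM itself: given Platt-style probability estimates, sweep candidate thresholds and keep the one minimizing an asymmetric misclassification cost. The labels, probabilities, and 10:1 cost ratio below are synthetic placeholders, not the paper's data or its actual cost weights:

```python
import numpy as np

def optimal_threshold(y_true, p_attack, cost_fn=10.0, cost_fp=1.0):
    """Sweep classification thresholds over predicted attack probabilities
    and return the one minimizing total cost, with FNE (missed attacks)
    weighted more heavily than FPE (false alarms)."""
    best_t, best_cost = 0.5, float("inf")
    for t in np.linspace(0.01, 0.99, 99):
        pred = p_attack >= t
        fn = np.sum((y_true == 1) & ~pred)   # attacks judged normal (FNE)
        fp = np.sum((y_true == 0) & pred)    # normal traffic flagged (FPE)
        cost = cost_fn*fn + cost_fp*fp
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

# Hypothetical probability estimates (e.g. from a Platt-scaled classifier)
rng = np.random.default_rng(7)
y = (rng.random(1000) < 0.2).astype(int)           # ~20% intrusions
p = np.clip(0.4*y + 0.5*rng.random(1000), 0, 1)    # noisy, overlapping scores
t_star, total_cost = optimal_threshold(y, p)
# With FNE 10x costlier than FPE, t_star lands below the default 0.5,
# trading extra false alarms for fewer missed attacks.
```

In practice the sweep would be run on validation data, and the chosen threshold then applied to held-out test data.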

Analysis of Modality and Procedures for CCS as CDM Project and Its Countmeasures (CCS 기술의 CDM 사업화 수용에 대한 방식과 절차 분석 및 대응방안 고찰)

  • Noh, Hyon-Jeong;Huh, Cheol;Kang, Seong-Gil
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.15 no.3
    • /
    • pp.263-272
    • /
    • 2012
  • Carbon dioxide emitted by human activities since the Industrial Revolution is regarded as a major contributor to global warming. There are many efforts to mitigate climate change, and carbon dioxide capture and geological storage (CCS) is recognized as one of the key technologies because it can reduce carbon dioxide emissions from large point sources such as power stations or other industrial installations. The inclusion of CCS among clean development mechanism (CDM) project activities has been considered at the UNFCCC as a financial incentive for developing countries that may wish to deploy CCS. Although the Conference of the Parties serving as the Meeting of the Parties to the UNFCCC's Kyoto Protocol (CMP), at Cancun in December 2010, decided that CCS is eligible as a CDM project activity, the issues identified in decision 2/CMP.5 had to be addressed and resolved in a satisfactory manner. The major issues regarding modalities and procedures are 1) site selection, 2) monitoring, 3) modeling, 4) boundaries, 5) seepage measuring and accounting, 6) trans-boundary effects, 7) accounting of associated project emissions (leakage), 8) risk and safety assessment, and 9) liability under the CDM scheme. The CMP, by its decision 7/CMP.6, invited Parties to submit their views to the secretariat of the Subsidiary Body for Scientific and Technological Advice (SBSTA), and SBSTA prepared draft modalities and procedures by exchanging the Parties' views through a workshop held in Abu Dhabi, UAE (September 2011). The 7th CMP (Durban, December 2011) finally adopted the modalities and procedures for CCS as CDM project activities (CMP[2011], Decision-/CMP.7). The inclusion of CCS as a CDM project activity means that CCS is officially accredited as a $CO_2$-reducing technology in the global carbon market. Consequently, it will affect the relevant technologies and industries, as well as law and policy, in Korea and abroad.
This paper presents the progress made in the discussions and the remaining challenges, and aims to suggest some considerations to policy makers in Korea for demonstrating and deploying CCS projects in the near future. According to the adopted modalities and procedures for CCS as CDM project activities, it is possible to implement CCS projects in Non-Annex I countries, including Korea, as long as legal and regulatory frameworks are established. Though Korea enacted the 'Framework Act on Low Carbon, Green Growth', its details are too inadequate to meet the requirements of the modalities and procedures for CCS as a CDM project. Therefore, as a short-term approach, it is necessary to amend the existing laws related to the capture, transport, and storage of $CO_2$ to pave the way for prompt deployment of CCS CDM activities in Korea, and, as a long-term approach, to establish a unified framework.

Geochemical Characteristics of the Gyeongju LILW Repository II. Rock and Mineral (중.저준위 방사성폐기물 처분부지의 지구화학 특성 II. 암석 및 광물)

  • Kim, Geon-Young;Koh, Yong-Kwon;Choi, Byoung-Young;Shin, Seon-Ho;Kim, Doo-Haeng
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.6 no.4
    • /
    • pp.307-327
    • /
    • 2008
  • A geochemical study of the rocks and minerals of the Gyeongju low- and intermediate-level waste repository was carried out in order to provide geochemical data for safety assessment and geochemical modeling. Polarized microscopy, X-ray diffraction, chemical analysis of major and trace elements, scanning electron microscopy (SEM), and stable isotope analysis were applied. Fracture zones with various degrees of alteration are locally developed in the study area. The study area is mainly composed of granodiorite and diorite, and their relation is gradational in the field. However, they could easily be distinguished by their chemical properties: the granodiorite showed higher $SiO_2$ content and lower MgO and $Fe_2O_3$ contents than the diorite. The variation trends of the major elements of the granodiorite and diorite plot on the same line with increasing $SiO_2$ content, suggesting that they were differentiated from the same magma. The spatial distribution of the various elements showed that the diorite region had lower $SiO_2$, $Al_2O_3$, $Na_2O$, and $K_2O$ contents, and higher CaO and $Fe_2O_3$ contents, than the granodiorite region. In particular, because the differences in the CaO and $Na_2O$ distributions were most distinct and their trends were reciprocal, the chemical variation of the plagioclase in the granitic rocks was the main driver of the chemical variation of the host rocks in the study area. The fracture-filling minerals identified from the drill cores were montmorillonite, zeolite minerals, chlorite, illite, calcite, and pyrite. In particular, pyrite and laumontite, which are known as indicator minerals of hydrothermal alteration, were widely distributed, indicating that the study area was affected by mineralization and/or hydrothermal alteration. Sulfur isotope analysis of the pyrite and oxygen-hydrogen stable isotope analysis of the clay minerals indicated that they originated from the magma.
Therefore, it is considered that the fracture-filling minerals in the study area were affected by hydrothermal solutions as well as by simple water-rock interaction.

A Study on the DC Resistivity Method to Image the Underground Structure Beneath River or Lake Bottom (하저 지반특성 규명을 위한 수상 전기비저항 탐사에 관한 연구)

  • Kim Jung-Ho;Yi Myeong-Jong;Song Yoonho;Choi Seong-Jun;Lee Seoung Kon;Son Jeong-Sul;Chung Seung-Hwan
    • Geophysics and Geophysical Exploration
    • /
    • v.5 no.4
    • /
    • pp.223-235
    • /
    • 2002
  • Since weak zones or geological lineaments are likely to be eroded, weak zones may develop beneath rivers, and a careful evaluation of ground conditions is important when constructing structures passing under a river. The DC resistivity method, however, has seldom been applied to the investigation of water-covered areas, possibly because of difficulties in data acquisition and interpretation. Acquiring high-quality data may be the most important factor, and it is more difficult than in a land survey because of the water layer overlying the underground structure to be imaged. Through numerical modeling and the analysis of a case history, we studied the method of resistivity surveying in water-covered areas, from the characteristics of the measured data, through the data acquisition method, to the interpretation method. We organized our discussion according to the installed locations of the electrodes, i.e., floating them on the water surface or installing them at the water bottom, because the methods of data acquisition and interpretation vary depending on the electrode location. Through this study, we confirmed that the DC resistivity method can provide fairly reasonable subsurface images. It was also shown that installing electrodes at the water bottom can give a subsurface image with much higher resolution than floating them on the water surface. Since the data acquired in water-covered areas have much lower sensitivity to the underground structure than those acquired on land, and can be contaminated by higher noise, such as streaming potential, it is very important to select an acquisition method and electrode array able to provide data with a high signal-to-noise ratio (S/N ratio) as well as high resolving power. Some of the modified electrode arrays can provide data with a reasonably high S/N ratio and do not require installing remote electrodes, and thus they may be suitable for resistivity surveys in water-covered areas.

A Meta Analysis of Using Structural Equation Model on the Korean MIS Research (국내 MIS 연구에서 구조방정식모형 활용에 관한 메타분석)

  • Kim, Jong-Ki;Jeon, Jin-Hwan
    • Asia pacific journal of information systems
    • /
    • v.19 no.4
    • /
    • pp.47-75
    • /
    • 2009
  • Recently, research on Management Information Systems (MIS) has laid out theoretical foundations and academic paradigms by introducing diverse theories, themes, and methodologies. In particular, the academic paradigms of MIS encourage a user-friendly approach by developing technologies from the users' perspective, which reflects the existence of strong causal relationships between information systems and user behavior. As in other areas of social science, the use of structural equation modeling (SEM) has rapidly increased in recent years, especially in the MIS area. The SEM technique is important because it provides powerful ways to address key IS research problems, and it has a unique ability to examine a series of causal relationships while analyzing multiple independent and dependent variables simultaneously. Despite providing many benefits to MIS researchers, there are some potential pitfalls in the analytical technique. The research objective of this study is to provide guidelines for the appropriate use of SEM, based on an assessment of its current use in MIS research. This study focuses on several statistical issues related to the use of SEM in MIS research. The selected articles are assessed in three parts through meta-analysis: the first part concerns the initial specification of the theoretical model of interest; the second concerns data screening prior to model estimation and testing; and the last part concerns the estimation and testing of theoretical models on empirical data. This study reviewed the use of SEM in 164 empirical research articles published in four major MIS journals in Korea (APJIS, ISR, JIS, and JITAM) from 1991 to 2007. APJIS, ISR, JIS, and JITAM accounted for 73, 17, 58, and 16 of the total number of applications, respectively, and the number of published applications has increased over time.
LISREL was the most frequently used SEM software among MIS researchers (97 studies, 59.15%), followed by AMOS (45 studies, 27.44%). In the first part, regarding issues related to the initial specification of the theoretical model of interest, all of the studies used cross-sectional data; studies using cross-sectional data may be better able to explain their structural model as a set of relationships. Meanwhile, most SEM studies employed confirmatory-type analysis (146 articles, 89%). Regarding model formulation, 159 (96.9%) of the studies specified a full structural equation model; in only 5 studies was SEM used for a measurement model with a set of observed variables. The average sample size across all models was 365.41, with samples as small as 50 and as large as 500. The second part concerns data screening prior to model estimation and testing. Data screening is important, particularly in defining how researchers deal with missing values. Overall, data screening was discussed in 118 (71.95%) of the studies, while no study discussed evidence of multivariate normality for its models. In the third part, on issues related to the estimation and testing of theoretical models on empirical data, assessing model fit is one of the most important issues because it provides adequate statistical power for research models. Multiple fit indices were used in the SEM applications. The $\chi^2$ test was reported in most of the studies (146, 89%), whereas the normed $\chi^2$ was reported less frequently (65 studies, 39.64%); note that a normed $\chi^2$ of 3 or lower is required for adequate model fit. The most popular model fit indices were GFI (109, 66.46%), AGFI (84, 51.22%), NFI (44, 47.56%), RMR (42, 25.61%), CFI (59, 35.98%), RMSEA (62, 37.80%), and NNFI (48, 29.27%).
Regarding tests of construct validity, convergent validity was examined in 109 studies (66.46%) and discriminant validity in 98 (59.76%); 81 studies (49.39%) reported the average variance extracted (AVE). However, there was little discussion of direct (47, 28.66%), indirect, and total effects in the SEM models. Based on these findings, we suggest general guidelines for the use of SEM and propose recommendations concerning latent variable models, raw data, sample size, data screening, reporting of parameter estimates, model fit statistics, multivariate normality, confirmatory factor analysis, reliabilities, and the decomposition of effects.
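
Two of the fit criteria discussed above, the normed $\chi^2$ and RMSEA, are simple enough to compute directly. The sketch below uses the standard RMSEA formula and a hypothetical model's statistics; the 0.08 RMSEA cutoff is a common convention, not a figure from the article, while the normed $\chi^2 \le 3$ cutoff is the one the article cites:

```python
from math import sqrt

def fit_indices(chi2, df, n):
    """Normed chi-square and RMSEA from a fitted SEM's chi-square statistic.
    RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    normed = chi2 / df
    rmsea = sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    return normed, rmsea

# Hypothetical model: chi2 = 180 on df = 90, n = 365 observations
# (365 echoes the average SEM sample size reported in the review)
normed, rmsea = fit_indices(chi2=180.0, df=90, n=365)
adequate = normed <= 3.0 and rmsea <= 0.08  # 3.0 from the article; 0.08 conventional
```

Because RMSEA depends on n, the same $\chi^2$/df ratio yields a smaller RMSEA in larger samples, one reason reviews like this one recommend reporting multiple indices rather than any single cutoff.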