• Title/Summary/Keyword: Output Variable


An Empirical Analysis on the Efficiency of the Projects for Strengthening the Service Business Competitiveness (서비스기업경쟁력강화사업의 효율성에 대한 실증 분석)

  • Kim, Dae Ho;Kim, Dongwook
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.6 no.5
    • /
    • pp.367-377
    • /
    • 2016
  • The purpose of the projects for strengthening Service Business Competitiveness, sponsored by the Ministry of Trade, Industry and Energy and managed by NIPA, is to support SMEs in combining their whole business process with a business model that considers the scientific aspects of services, thereby enhancing their productivity and adding value to their activities. Five organizations were selected in 2014, and four in 2015, as leading organizations for these projects. This study analyzed the efficiency of these projects using DEA. Drawing on prior research, the study used the amount of government-sponsored money as the input variable, and the number of new customer businesses, sales revenue, and the number of new employees as the output variables. The analysis showed that decision making units 12, 15, and 21 were efficient. The study also identified two additional performance indicators, the number of new employees and the amount of sales revenue, besides the number of new customer businesses.
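The abstract above does not spell out the DEA formulation; as a point of reference, a minimal input-oriented CCR DEA sketch in Python, with one input (government-sponsored money) and three outputs as described, could look like the following. The DMU values are illustrative placeholders, not figures from the study.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """Input-oriented CCR DEA. X: (n, m) inputs, Y: (n, s) outputs.
    Returns an efficiency score (theta) for every DMU."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.zeros(n + 1)
        c[0] = 1.0                                   # minimize theta
        A_ub, b_ub = [], []
        for i in range(m):                           # sum_j lambda_j * x_ij <= theta * x_io
            A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
            b_ub.append(0.0)
        for r in range(s):                           # sum_j lambda_j * y_rj >= y_ro
            A_ub.append(np.concatenate(([0.0], -Y[:, r])))
            b_ub.append(-Y[o, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Illustrative data: one input (subsidy) and three outputs per DMU.
X = np.array([[120.0], [150.0], [90.0]])                    # government-sponsored money
Y = np.array([[30, 5.2, 12], [25, 4.1, 9], [40, 6.0, 15]])  # new businesses, revenue, jobs
print(ccr_efficiency(X, Y))
```

A score of 1.0 marks an efficient decision making unit; scores below 1.0 indicate how much the input could be contracted while still producing the observed outputs.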

The Study for EV Charging Infrastructure connected with Microgrid (마이크로그리드와 연계된 전기자동차 충전인프라에 관한 연구)

  • Hun Shim
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.1
    • /
    • pp.1-6
    • /
    • 2024
  • In order to increase the use of electric vehicles (EVs) and minimize grid strain, microgrids using renewable energy must play an important role. A microgrid may use fossil fuels such as small diesel generation, but in many cases it can be supplied with energy from renewable sources, which are eco-friendly. However, renewable sources such as solar and wind power have variable output characteristics. Therefore, in order to meet the charging and discharging energy demands of electric vehicles and at the same time supply load power stably, it is necessary to review configurations of EV charging infrastructure that utilize diesel generation or vehicle-to-grid (V2G) as a parallel energy source within the microgrid. Against this background, this study modeled a microgrid that can stably supply power to loads using solar power, wind power, diesel power, and V2G. The proposed microgrid uses solar and wind generation as the primary energy sources to meet power demand, and determines the operating mode of the EVs at the load and the rotation speed of the load synchronous machine so that diesel generation covers any shortfall. To verify the system performance of the proposed model, the stable operation of the microgrid was studied by simulating it with MATLAB/Simulink.
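The study's model is built in MATLAB/Simulink; as a rough illustration of the dispatch logic described above (solar and wind first, V2G and diesel covering any shortfall), a Python sketch is given below. All ratings, the time step, and the assumption that V2G is drawn on before diesel are illustrative choices, not details from the paper.

```python
# Minimal hourly dispatch sketch for the microgrid described above.
# Assumed priority: solar + wind first, then V2G discharge, then diesel;
# surplus renewable energy charges the EV fleet.

def dispatch(load_kw, solar_kw, wind_kw, ev_soc_kwh, ev_capacity_kwh,
             v2g_limit_kw, diesel_limit_kw, dt_h=1.0):
    renewables = solar_kw + wind_kw
    shortfall = max(load_kw - renewables, 0.0)
    surplus = max(renewables - load_kw, 0.0)

    # V2G discharge limited by the converter rating and the stored energy
    v2g_kw = min(shortfall, v2g_limit_kw, ev_soc_kwh / dt_h)
    shortfall -= v2g_kw

    # Diesel covers the remainder up to its rating
    diesel_kw = min(shortfall, diesel_limit_kw)
    unmet_kw = shortfall - diesel_kw

    # Surplus renewables charge the EV fleet
    charge_kw = min(surplus, (ev_capacity_kwh - ev_soc_kwh) / dt_h)
    ev_soc_kwh += (charge_kw - v2g_kw) * dt_h

    return {"v2g_kw": v2g_kw, "diesel_kw": diesel_kw,
            "unmet_kw": unmet_kw, "ev_soc_kwh": ev_soc_kwh}

# Example hour: 800 kW load, 450 kW solar, 200 kW wind.
print(dispatch(800, 450, 200, ev_soc_kwh=300, ev_capacity_kwh=500,
               v2g_limit_kw=100, diesel_limit_kw=250))
```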

Technical Inefficiency in Korea's Manufacturing Industries (한국(韓國) 제조업(製造業)의 기술적(技術的) 효율성(效率性) : 산업별(産業別) 기술적(技術的) 효율성(效率性)의 추정(推定))

  • Yoo, Seong-min;Lee, In-chan
    • KDI Journal of Economic Policy
    • /
    • v.12 no.2
    • /
    • pp.51-79
    • /
    • 1990
  • Research on technical efficiency, an important dimension of market performance, had until recently received little attention from most industrial organization empiricists, the reason being that traditional microeconomic theory simply assumed away any form of inefficiency in production. Recently, however, an increasing number of research efforts have been conducted to answer questions such as: To what extent do technical inefficiencies exist in the production activities of firms and plants? What are the factors accounting for the level of inefficiency found and those explaining the interindustry difference in technical inefficiency? Are there any significant international differences in the levels of technical efficiency and, if so, how can we reconcile these results with the observed pattern of international trade? As the first in a series of studies on the technical efficiency of Korea's manufacturing industries, this paper attempts to answer some of these questions. Since the estimation of technical efficiency requires the use of plant-level data for each of the five-digit KSIC industries available from the Census of Manufactures, one may construe the findings of this paper as empirical evidence of technical efficiency in Korea's manufacturing industries at the most disaggregated level. We start by clarifying the relationship among the various concepts of efficiency: allocative efficiency, factor-price efficiency, technical efficiency, Leibenstein's X-efficiency, and scale efficiency. It then becomes clear that unless certain ceteris paribus assumptions are satisfied, our estimates of technical inefficiency are in fact related to factor price inefficiency as well. The empirical model employed is a so-called stochastic frontier production function, which divides the stochastic term into two different components: one with a symmetric distribution for pure white noise and the other, with an asymmetric distribution, for technical inefficiency. A translog production function is assumed for the functional relationship between inputs and output, and was estimated by the corrected ordinary least squares method. The second and third sample moments of the regression residuals are then used to yield estimates of four different measures of technical (in)efficiency. The entire range of manufacturing industries can be divided into two groups, depending on whether or not the distribution of estimated regression residuals allows a successful estimation of technical efficiency. The regression equation employing value added as the dependent variable gives a greater number of "successful" industries than the one using gross output. The correlation among estimates of the different measures of efficiency appears to be high, while the estimates of efficiency based on different regression equations seem almost uncorrelated. Thus, in the subsequent analysis of the determinants of interindustry variations in technical efficiency, the choice of the regression equation in the previous stage will affect the outcome significantly.
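The abstract describes a corrected-OLS estimation of a translog frontier in which the second and third sample moments of the residuals identify technical (in)efficiency. A compact sketch of that idea, assuming a half-normal inefficiency distribution and using simulated two-input data rather than the Census of Manufactures plant data, might look like this.

```python
# Moment-based (corrected OLS) stochastic frontier sketch with a translog
# specification in two inputs and a half-normal inefficiency term.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def frontier_moments(y, K, L):
    lnK, lnL = np.log(K), np.log(L)
    X = np.column_stack([lnK, lnL, 0.5 * lnK**2, 0.5 * lnL**2, lnK * lnL])
    ols = sm.OLS(np.log(y), sm.add_constant(X)).fit()
    e = ols.resid
    m2, m3 = e.var(), ((e - e.mean()) ** 3).mean()
    if m3 >= 0:
        return None  # residuals skewed the "wrong" way: no inefficiency signal
    c = np.sqrt(2 / np.pi) * (4 / np.pi - 1)
    sigma_u = (-m3 / c) ** (1 / 3)                 # scale of the half-normal term
    sigma_v2 = m2 - (1 - 2 / np.pi) * sigma_u**2   # variance of the white noise
    # Industry mean technical efficiency E[exp(-u)] under the half-normal assumption
    mean_te = 2 * norm.cdf(-sigma_u) * np.exp(sigma_u**2 / 2)
    return {"sigma_u": sigma_u, "sigma_v2": sigma_v2, "mean_TE": mean_te}

# Illustrative plant-level data (not from the Census of Manufactures)
rng = np.random.default_rng(0)
K, L = rng.lognormal(3, 1, 500), rng.lognormal(4, 1, 500)
u = np.abs(rng.normal(0, 0.3, 500))            # inefficiency
v = rng.normal(0, 0.2, 500)                    # noise
y = np.exp(0.5 + 0.4 * np.log(K) + 0.6 * np.log(L) + v - u)
print(frontier_moments(y, K, L))
```

When the residuals are skewed in the "wrong" direction (non-negative third moment), the moment estimator fails, which corresponds to the "unsuccessful" industries mentioned in the abstract.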


Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem by using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as an output variable. Of these, 24 financial ratios were selected by using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other to avoid overfitting. The prediction accuracy against this dataset was used to determine the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
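Implementation details are not given in the abstract; the sketch below illustrates the core idea of a random-subspace KNN ensemble whose per-member k values and feature subsets are evolved by a simple genetic algorithm. The ensemble size, GA settings, and synthetic data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
# Placeholder data standing in for the 1800-firm, 24-ratio dataset.
X, y = make_classification(n_samples=1800, n_features=24, n_informative=10,
                           random_state=42)
X_tr, X_hold, y_tr, y_hold = train_test_split(X, y, test_size=0.3, random_state=42)

N_MEMBERS, SUBSPACE, K_CHOICES = 10, 12, (1, 3, 5, 7, 9)

def random_member():
    # One base classifier = (k, random feature subset), as in the random subspace method
    return int(rng.choice(K_CHOICES)), rng.choice(X.shape[1], SUBSPACE, replace=False)

def ensemble_predict(ensemble, X_eval):
    votes = [KNeighborsClassifier(n_neighbors=k).fit(X_tr[:, f], y_tr).predict(X_eval[:, f])
             for k, f in ensemble]
    return (np.mean(votes, axis=0) >= 0.5).astype(int)       # majority vote

def fitness(ensemble):
    # Hold-out accuracy is the GA fitness, which guards against overfitting
    return np.mean(ensemble_predict(ensemble, X_hold) == y_hold)

def crossover(a, b):
    cut = int(rng.integers(1, N_MEMBERS))
    return a[:cut] + b[cut:]

def mutate(ensemble):
    child = list(ensemble)
    child[int(rng.integers(N_MEMBERS))] = random_member()
    return child

population = [[random_member() for _ in range(N_MEMBERS)] for _ in range(20)]
for _ in range(15):                                          # GA generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [mutate(crossover(parents[i], parents[(i + 1) % 10])) for i in range(10)]
    population = parents + children
best = max(population, key=fitness)
print("best hold-out accuracy:", fitness(best))
```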

Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.159-172
    • /
    • 2010
  • The recommender system is one of the possible solutions to assist customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful recommendation techniques is Collaborative Filtering (CF), which has been used in a number of different applications such as recommending Web pages, movies, music, articles and products. CF identifies customers whose tastes are similar to those of a given customer, and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. Broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms which combine CF with content-based techniques or other recommender systems. While many researchers have focused their efforts on improving CF performance, the theoretical justification of CF algorithms is lacking; that is, little is known about how CF works. Furthermore, the relative performances of CF algorithms are known to be domain and data dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting the performance of CF algorithms in advance is practically important and needed. In this study, we propose an efficient approach to predict the performance of CF. Social Network Analysis (SNA) and Artificial Neural Network (ANN) are applied to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates an exploration of the topological properties of the network structure that are implicit in data for CF recommendations. An ANN model is developed through an analysis of network topology, such as network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes which are included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how dense the social network is beyond that barely needed to keep the social group even indirectly connected to one another. We use these social network measures as input variables of the ANN model. As an output variable, we use the recommendation accuracy measured by the F1-measure. In order to evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, was used. A total of 396 experimental samples were gathered, and we used 40%, 40%, and 20% of them for training, test, and validation, respectively. Five-fold cross-validation was also conducted to enhance the reliability of our experiments. The input variable measuring process consists of the following three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used Net Miner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model achieved 92.61% estimated accuracy with an RMSE of 0.0049. Thus, our prediction model can help decide whether CF is useful for a given application with certain data characteristics.
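A rough Python sketch of the pipeline described above follows: topological measures of a purchase network feed a small neural network that predicts CF accuracy. networkx and scikit-learn stand in for Net Miner, UCINET, and Clementine; the random graphs and placeholder F1 targets are illustrative, and Krackhardt's efficiency is omitted for brevity.

```python
import numpy as np
import networkx as nx
from sklearn.neural_network import MLPRegressor

def network_features(G):
    n = G.number_of_nodes()
    degrees = np.array([d for _, d in G.degree()])
    return [
        nx.density(G),                                          # network density
        np.count_nonzero(degrees) / n,                          # inclusiveness
        nx.average_clustering(G),                               # clustering coefficient
        (degrees.max() - degrees).sum() / ((n - 1) * (n - 2)),  # degree centralization
    ]

rng = np.random.default_rng(0)
samples, targets = [], []
for _ in range(200):                        # 200 illustrative CF datasets
    G = nx.gnp_random_graph(100, rng.uniform(0.02, 0.2),
                            seed=int(rng.integers(1_000_000)))
    samples.append(network_features(G))
    targets.append(rng.uniform(0.3, 0.9))   # placeholder F1-measure (no real signal)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(np.array(samples), np.array(targets))

new_G = nx.gnp_random_graph(100, 0.1, seed=7)
print("predicted F1:", model.predict([network_features(new_G)]))
```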

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of the vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having triangular or trapezoidal shapes, or to other predefined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, a significant memory waste can also be registered: it is possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been observed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 represents a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension of a membership value, and dm(fm) is the dimension of the word representing the index of the corresponding membership function. In our case Length = 3 * (5 + 3) = 24, and the memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize the membership value of every fuzzy set on each memory row; the word dimension would then be 8*5 bits, and the dimension of the memory would have been 128*40 bits. Consistent with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on the elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows: the computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (Combinatory Net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
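The word-length arithmetic above can be checked directly. The short sketch below reproduces the paper's example (128-element universe, 8 fuzzy sets, 32 discretization levels, at most 3 non-null memberships per element) and compares the compact word against full vectorial memorization.

```python
import math

# Term-set parameters from the example above
universe = 128        # elements in the universe of discourse
n_sets = 8            # fuzzy sets in the term set
levels = 32           # discretization levels for membership values
nfm = 3               # max non-null membership values per element

dm_m = math.ceil(math.log2(levels))    # bits per membership value -> 5
dm_fm = math.ceil(math.log2(n_sets))   # bits for the fuzzy-set index -> 3

word_compact = nfm * (dm_m + dm_fm)    # Length = nfm * (dm(m) + dm(fm)) = 24
word_vectorial = n_sets * dm_m         # all 8 membership values per row = 40

print(f"compact:   {universe} x {word_compact} = {universe * word_compact} bits")
print(f"vectorial: {universe} x {word_vectorial} = {universe * word_vectorial} bits")
```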


Study on the Short-Term Hemodynamic Effects of Experimental Cardiomyoplasty in Heart Failure Model (심부전 모델에서 실험적 심근성형술의 단기 혈역학적 효과에 관한 연구)

  • Jeong, Yoon-Seop;Youm, Wook;Lee, Chang-Ha;Kim, Wook-Seong;Lee, Young-Tak;Kim, Won-Gon
    • Journal of Chest Surgery
    • /
    • v.32 no.3
    • /
    • pp.224-236
    • /
    • 1999
  • Background: To evaluate the short-term effect of dynamic cardiomyoplasty on circulatory function and to identify related factors that can affect it, experimental cardiomyoplasties were performed under states of normal cardiac function and heart failure. Material and Method: A total of 10 mongrel dogs weighing 20 to 30 kg were divided arbitrarily into two groups. Five dogs of group A underwent cardiomyoplasty with latissimus dorsi (LD) muscle mobilization followed by a 2-week vascular delay and 6-week muscle training. Then, hemodynamic studies were conducted. In group B, doxorubicin was given to 5 dogs in an IV dose of 1 mg/kg once a week for 8 weeks to induce chronic heart failure, and simultaneous muscle training was given for preconditioning during this period. Then, cardiomyoplasties were performed and hemodynamic studies were conducted immediately afterwards. Result: In group A, under the state of normal cardiac function, only mean right atrial pressure significantly increased with the pacer on (p<0.05), and the left ventricular hemodynamic parameters did not change significantly. However, with the pacer on in group B, cardiac output (CO), rate of left ventricular pressure development (dp/dt), stroke volume (SV), and left ventricular stroke work (SW) increased by 16.7±7.2%, 9.3±3.2%, 16.8±8.6%, and 23.1±9.7%, respectively, whereas left ventricular end-diastolic pressure (LVEDP) and mean pulmonary capillary wedge pressure (mPCWP) decreased by 32.1±4.6% and 17.7±9.1%, respectively (p<0.05). In group A, imipramine was infused at the rate of 7.5 mg/kg/hour for 34±2.6 minutes to induce acute heart failure, which resulted in the reduction of cardiac output by 17.5±2.7% and systolic left ventricular pressure by 15.8±2.5%, and the elevation of left ventricular end-diastolic pressure by 54.3±15.2% (p<0.05). With the pacer on under this state of acute heart failure, CO, dp/dt, SV, and SW increased by 4.5±1.8%, 3.1±1.1%, 5.7±3.6%, and 6.9±4.4%, respectively, whereas LVEDP decreased by 11.7±4.7% (p<0.05). Comparing CO, dp/dt, SV, SW, and LVEDP, which changed significantly with the pacer on under both acute and chronic heart failure, the augmentation of these left ventricular hemodynamic parameters was significantly larger under chronic heart failure (group B) than under acute heart failure (group A) (p<0.05). On gross inspection, variable degrees of adhesion and inflammation were present in all 5 dogs of group A, including 2 dogs that showed no muscle contraction. No adhesion or inflammation was present in any of the 5 dogs of group B, all of which showed vivid muscle contractions. Considering these differences in gross findings, along with the premise that the acute heart failure state was not statistically different from the chronic one in terms of left ventricular parameters (p>0.05), the larger augmentation effect seen in group B is presumed to be mainly attributable to the viability and contractility of the LD muscle. Conclusion: These results indicate that the positive circulatory augmentation effect of cardiomyoplasty is apparent only under the state of heart failure, and that preservation of muscle contractility is important to maximize this effect.


Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. This study approaches building the predictive models from two different perspectives. The first is the analysis period. We divide the analysis period into before and after the IMF financial crisis, and examine whether there is a difference between the two periods. The second is the prediction horizon. In order to predict when firms will increase capital by issuing new stocks, the prediction horizon is categorized as one year, two years and three years later. Therefore, a total of six prediction models are developed and analyzed. In this paper, we employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method, which builds trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression and SVM, decision tree techniques are well suited for high-dimensional applications and have strong explanation capabilities. There are well-known decision tree induction algorithms such as CHAID, CART, QUEST, C5.0, etc. Among them, we use the C5.0 algorithm, which is the most recently developed and yields better performance than the others. We obtained data for rights issues and financial analysis from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, which include 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. A total of 84 variables among the financial analysis data are selected as the input variables of each model, and the rights issue status (issued or not issued) is defined as the output variable. To develop prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The results of the experimental analysis show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those before the IMF financial crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The experimental results also show that the stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction. On the other hand, the long-term prediction of conducting a rights issue is affected by financial analysis indices on profitability, stability, activity and productivity. All the prediction models include the industry code as one of the significant variables. This means that companies in different types of industries show different patterns of rights issues. We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a wider variety of financial analysis indices for long-term prediction. The current study has several limitations. First, we need to compare the differences in accuracy by using different data mining techniques such as neural networks, logistic regression and SVM. Second, we need to develop and evaluate new prediction models that include variables which research on capital structure theory has identified as relevant to rights issues.
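C5.0 is a commercial algorithm used here through PASW Modeler; as a rough open-source stand-in, the sketch below grows a CART-style tree with scikit-learn on synthetic financial-index data. The column names, the placeholder target, and the logistic rule generating it are illustrative assumptions, not the TS2000 variables.

```python
# Sketch of a rights-issue prediction tree. scikit-learn's CART-based
# DecisionTreeClassifier stands in for the C5.0 algorithm used in the paper;
# the synthetic features below are placeholders for the 84 financial indices.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "debt_ratio": rng.normal(150, 60, n),        # stability index (illustrative)
    "current_ratio": rng.normal(120, 40, n),
    "roa": rng.normal(3, 5, n),                  # profitability index
    "asset_turnover": rng.normal(1.0, 0.4, n),   # activity index
    "industry_code": rng.integers(1, 10, n),
})
# Placeholder target: firms with weaker stability are more likely to issue new stock
p = 1 / (1 + np.exp(-(df["debt_ratio"] - 150) / 60 + (df["current_ratio"] - 120) / 80))
df["rights_issue"] = rng.random(n) < p

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="rights_issue"), df["rights_issue"],
    train_size=0.6, random_state=1)              # 60/40 split as in the paper
tree = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X_train, y_train)
print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(X_train.columns)))
```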

A Joint Application of DRASTIC and Numerical Groundwater Flow Model for The Assessment of Groundwater Vulnerability of Buyeo-Eup Area (DRASTIC 모델 및 지하수 수치모사 연계 적용에 의한 부여읍 일대의 지하수 오염 취약성 평가)

  • Lee, Hyun-Ju;Park, Eun-Gyu;Kim, Kang-Joo;Park, Ki-Hoon
    • Journal of Soil and Groundwater Environment
    • /
    • v.13 no.1
    • /
    • pp.77-91
    • /
    • 2008
  • In this study, we developed a technique that jointly applies DRASTIC, the most widely used tool for estimating groundwater vulnerability to aqueous-phase contaminants infiltrating from the surface, and a groundwater flow model to assess groundwater contamination potential. The developed technique is then applied to the Buyeo-eup area in Buyeo-gun, Chungcheongnam-do, Korea. The depth-to-water thematic input required by the DRASTIC model is known to be the most sensitive to the output, while generally only a few observations at a few times are available. To overcome this practical shortcoming, both steady-state and transient groundwater level distributions are simulated using a finite difference numerical model, MODFLOW. In the application to the assessment of groundwater vulnerability, it is found that the vulnerability obtained from the numerically simulated groundwater levels is much more practical than that obtained from cokriging methods. The advantages are, first, that the simulation results enable a practitioner to see temporally comprehensive vulnerabilities, and second, that the method considers a wide variety of data, such as field-observed hydrogeologic parameters as well as geographic relief. The depth to water generated through geostatistical methods in the conventional approach cannot incorporate temporally variable data, that is, the seasonal variation of the recharge rate. As a result, we found that the vulnerabilities from the geostatistical method and from the steady-state groundwater flow simulation show similar patterns. By applying the transient simulation results to the DRASTIC model, we also found that the vulnerability shows sharp seasonal variation due to the change of groundwater recharge, most pronounced between summer, with the highest recharge rate, and winter, with the lowest. Our research indicates that numerical modeling can be a useful tool for temporal as well as spatial interpolation of the depth to water when the number of observations is inadequate for vulnerability assessment through conventional techniques.
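The paper does not restate the DRASTIC formula. For reference, the index is a weighted sum of seven rated parameters; the sketch below uses the standard DRASTIC weights with illustrative ratings for a single cell, varying only the depth-to-water and recharge ratings between seasons, which is the input the study derives from MODFLOW simulations.

```python
# DRASTIC vulnerability index: weighted sum of seven rated hydrogeologic
# parameters. Weights are the standard DRASTIC weights; the ratings below are
# illustrative values for one grid cell, not data from the Buyeo-eup study.
DRASTIC_WEIGHTS = {
    "D": 5,  # Depth to water
    "R": 4,  # net Recharge
    "A": 3,  # Aquifer media
    "S": 2,  # Soil media
    "T": 1,  # Topography (slope)
    "I": 5,  # Impact of the vadose zone
    "C": 3,  # hydraulic Conductivity
}

def drastic_index(ratings):
    """ratings: dict of parameter -> rating (typically 1-10)."""
    return sum(DRASTIC_WEIGHTS[p] * r for p, r in ratings.items())

# One cell, two seasons: the depth-to-water and recharge ratings change with
# season, which drives the seasonal variation in vulnerability described above.
summer = {"D": 9, "R": 8, "A": 6, "S": 5, "T": 10, "I": 6, "C": 4}
winter = dict(summer, D=5, R=3)
print("summer index:", drastic_index(summer))   # higher -> more vulnerable
print("winter index:", drastic_index(winter))
```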

A Study on Relationships Between Environment, Organizational Structure, and Organizational Effectiveness of Public Health Centers in Korea (보건소의 환경, 조직구조와 조직유효성과의 관계)

  • Yun, Soon-Nyoung
    • Research in Community and Public Health Nursing
    • /
    • v.6 no.1
    • /
    • pp.5-33
    • /
    • 1995
  • The objectives of the study are two-fold: one is to explore the relationship between environment, organizational structure, and organizational effectiveness of public health centers in Korea, and the other is to examine the validity of contingency theory for improving the organizational structure of public health care agencies, with special emphasis on public health nursing administration. Accordingly, the conceptual model of the study consisted of three different concepts, environment, organizational structure, and organizational effectiveness, which were built up from contingency theory. Data were collected during the period from the 1st of May through the 30th of June, 1990. From the total of 249 health centers in the country, one hundred and five centers were sampled non-proportionally, according to geopolitical distribution. Of these 105, 73 health centers responded to the mailed questionnaire. The health centers were the unit of the study, and various statistical techniques were used: reliability analysis (Cronbach's alpha) for 4 measurement tools; the Shapiro-Wilk statistic for normality tests of the measured scores of 6 variables; and ANOVA, Pearson correlation analysis, regression analysis, and canonical correlation analysis to test the relationships and differences between the variables. The results were as follows: 1. No significant differences between formalization, decision-making authority and environmental complexity were found (F=1.383, P=.24; F=.801, P=.37). 2. Negative relationships between formalization and decision-making authority were found for both urban and rural health centers (r=-.470, P=.002; r=-.348, P=.46). 3. No significant relationship between formalization and job satisfaction was found for either urban or rural health centers (r=-.242, P=.132; r=-.060, P=.739). 4. A significant positive relationship between decision-making authority and job satisfaction was found in urban health centers (r=.504, P=.0009), but no such relationship was observed in rural health centers. The regression coefficient between them was statistically significant (β=1.535, P=.0002), and the accuracy of the regression line was accepted (W=.975, P=.420). 5. No significant relationships between formalization and family planning services, maternal health services, or tuberculosis control services were found for either urban or rural health centers. 6. Among decision-making authority and family planning services, maternal health services, and tuberculosis control services, a significant positive relationship was found between decision-making authority and family planning services (r=.286, P=.73). 7. A significant difference was found in maternal health services by the type of health center (F=5.13, P=.026), but no difference was found in tuberculosis control services by the type of health center, formalization, or decision-making authority. 8. Significant positive relationships were found between family planning services and maternal health services and tuberculosis control services, and between maternal health services and tuberculosis control services (r=-.499, P=.001; r=.457, P=.004; r=.495, P=.002) in the case of urban health centers. In the case of rural health centers, the relationships between family planning services and tuberculosis control services, and between maternal health services and tuberculosis control services, were statistically significant (r=.534, P=.002; r=.389, P=.027). No significant relationship was found between family planning and maternal health services. 9. A significant positive canonical correlation was found between the group of independent variables consisting of formalization and decision-making authority and the group of dependent variables consisting of family planning services, maternal health services and tuberculosis control services (Rc=.455, P=.02). In the case of urban health centers, no significant canonical correlation was found between them, but a significant canonical correlation was found in rural health centers (Rc=.578, P=.069). 10. The relationship between job satisfaction and health care productivity was not found to be significant. Through these results, the assumed relationship between environment and organizational structure was not supported in health centers. Therefore, the relationship between organizational effectiveness and the congruence between environment and organizational structure that contingency theory proposes could not be tested. However, decision-making authority was found to be an important variable of organizational structure affecting family planning services and job satisfaction in urban health centers. Thus it was suggested that decentralized decision making among health professionals would be a valuable strategy for improving organizational effectiveness in public health centers. It is also recommended that further studies testing contingency theory use variability and uncertainty, instead of complexity, to define the environment of public health centers.
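As an illustration of the canonical correlation step reported in item 9 above, the sketch below relates two structure variables to three service variables with scikit-learn's CCA. The generated data are placeholders, not the 73 health-center records.

```python
# Canonical correlation sketch: organizational structure variables vs. service
# variables. Random placeholder data stand in for the survey measurements.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 73
structure = rng.normal(size=(n, 2))                      # formalization, decision authority
noise = rng.normal(scale=0.8, size=(n, 3))
services = structure @ rng.normal(size=(2, 3)) + noise   # family planning, maternal health, TB control

cca = CCA(n_components=1).fit(structure, services)
u, v = cca.transform(structure, services)
rc = np.corrcoef(u[:, 0], v[:, 0])[0, 1]                 # first canonical correlation
print("first canonical correlation:", round(rc, 3))
```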
