Title/Summary/Keyword: Experimental Method

Effect of Temperature During Grain Filling Stage on Grain Quality and Taste of Cooked Rice in Mid-late Maturing Rice Varieties (등숙기 온도변이가 중만생종 벼의 쌀 품질과 식미치에 미치는 영향)

  • Choi, Kyung-Jin;Park, Tae-Shik;Lee, Choon-Ki;Kim, Jung-Tae;Kim, Jun-Hwan;Ha, Ki-Yong;Yang, Woon-Ho;Lee, Chung-Keun;Kwak, Kang-Su;Park, Hong-Kyu;Nam, Jeong-Kwon;Kim, Jeong-Il;Han, Gwi-Jung;Cho, Yong-Sik;Park, Young-Hee;Han, Sang-Wook;Kim, Jae-Rok;Lee, Sang-Young;Choi, Hyun-Gu;Cho, Seung-Hyun;Park, Heung-Gyu;Ahn, Duok-Jong;Joung, Wan-Kyu;Han, Sang-Ik;Kim, Sang-Yeol;Jang, Ki-Chang;Oh, Seong-Hwan;Seo, Woo-Duck;Ra, Ji-Eun;Kim, Jun-Young;Kang, Hang-Won
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.56 no.4
    • /
    • pp.404-412
    • /
    • 2011
  • This experiment was conducted to clarify the effect of temperature during the grain filling period on the quality and taste of cooked rice cultivated in different regions of Korea. In 2006 and 2007, four mid-late maturing rice varieties (Nampyeongbyeo, Ilpumbyeo, Junambyeo and Dongjin 1) were cultivated in 28 experimental plots across 27 regions in 8 provinces. The taste of cooked rice was positively correlated with 1,000-grain weight but negatively correlated with the protein content of brown rice. The mean temperature for 30 days from heading was more closely correlated with grain filling and cooked rice taste than that for 40 days. Although the optimum mean temperature for the best taste of cooked rice during the 30 days after heading was 22.1 to $23.1^{\circ}C$ depending on the variety, in general, 1,000-grain weight and cooked rice taste were highest at a mean temperature of $22.2^{\circ}C$ for 30 days from heading. Grains ripened poorly when the mean temperature for the 30 days after heading was below $21.0^{\circ}C$. Therefore, to improve the taste of cooked rice in Korea, the development of new rice varieties and cultivation methods should focus on keeping the mean temperature within $22-23^{\circ}C$ during the 30 days after heading.
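
The correlation analysis reported above can be reproduced in a few lines; the sketch below is a minimal illustration assuming a hypothetical per-plot dataset, where the file and column names are invented for the example.

```python
# Minimal sketch of the reported correlation analysis; the CSV file and
# column names are hypothetical placeholders, not from the paper.
import pandas as pd
from scipy import stats

df = pd.read_csv("grain_filling_plots.csv")  # one row per experimental plot

predictors = ["grain_weight_1000", "brown_rice_protein",
              "mean_temp_30d_from_heading", "mean_temp_40d_from_heading"]
for col in predictors:
    r, p = stats.pearsonr(df[col], df["taste_score"])
    print(f"{col}: r = {r:+.3f} (p = {p:.4f})")
```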

A Study on the Effects of the Early Use of Nasal CPAP in the Weaning of Mechanical Ventilators (인공호흡기 이탈시 비강내 CPAP 조기 사용 효과에 관한 연구)

  • Kim, Yeoung Ju;Jung, Byun Kyung;Lee, Sang Geel
    • Clinical and Experimental Pediatrics
    • /
    • v.46 no.12
    • /
    • pp.1200-1206
    • /
    • 2003
  • Purpose : This study was conducted to evaluate the use of nasal continuous positive airway pressure (CPAP), by comparing the early use of non-invasive nasal CPAP with low intermittent mandatory ventilation (low IMV) plus endotracheal CPAP for weaning infants with moderate respiratory distress syndrome (RDS) from mechanical ventilation. Methods : Thirty infants in the study group, with moderate RDS from November 2001 to June 2002, were administered surfactants, treated with mechanical ventilation, and weaned with nasal CPAP. Thirty infants in the control group, from January 1999 to September 2001, were weaned with low IMV and endotracheal CPAP. Results : There were no significant differences between the groups in patient characteristics, severity of clinical symptoms, initial laboratory findings, or initial ventilator settings. After weaning, the study group showed no significant changes in $PaCO_2$, whereas the control group showed slight $CO_2$ retention after one and 12 hours. Twenty-eight infants (93.3%) of the study group and 24 infants (80%) of the control group were successfully extubated. The primary cause of failure was apnea. There were no significant differences between the groups in the duration of weaning or of mechanical ventilation. Complications during weaning were related to the fixation of the nasal CPAP and to mechanical problems caused by the endotracheal tube. Conclusion : Aggressive weaning that uses nasal CPAP without the low IMV and endotracheal CPAP steps is feasible in moderate RDS and presented no particular difficulties. In conclusion, nasal CPAP is an adequate weaning method for moderate RDS.

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. This study approaches the model building from the perspective of two different analyses. The first is the analysis period. We divide the analysis period into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction horizon. In order to predict when firms will increase capital by issuing new stocks, the prediction horizon is categorized as one year, two years and three years later. In total, six prediction models are therefore developed and analyzed. In this paper, we employ the decision tree technique to build the prediction models for rights issues. The decision tree is one of the most widely used prediction methods; it builds trees that label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanatory capabilities. There are well-known decision tree induction algorithms such as CHAID, CART, QUEST and C5.0. Among them, we use the C5.0 algorithm, the most recently developed of these, which yields better performance than the others. We obtained the rights issue and financial analysis data from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, including 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. A total of 84 variables among the financial analysis data were selected as the input variables of each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The results of the experimental analysis show that the prediction accuracies after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those before the IMF financial crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The experimental results also show that the stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction. On the other hand, the long-term prediction of a rights issue is affected by financial analysis indices on profitability, stability, activity and productivity. All the prediction models include the industry code as one of the significant variables, which means that companies in different industries show different patterns of rights issues. We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a broader range of financial analysis indices for long-term prediction. The current study has several limitations. First, the differences in accuracy should be compared using other data mining techniques such as neural networks, logistic regression and SVM. Second, new prediction models should be developed and evaluated that include variables which capital structure theory has identified as relevant to rights issues.
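
C5.0 as used in PASW Modeler has no standard open-source Python implementation, so the sketch below stands in with scikit-learn's CART-style decision tree; the file and column names are hypothetical, and only the 60/40 split and the issued/not-issued target come from the abstract.

```python
# Hedged stand-in for the paper's C5.0 setup using scikit-learn's CART;
# the file and column names are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("financial_indices.csv")
X = df.drop(columns=["rights_issue"])   # the 84 input indices
y = df["rights_issue"]                  # 1 = issued, 0 = not issued

# 60% of the data for model building, 40% for testing, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, stratify=y, random_state=0)

model = DecisionTreeClassifier(max_depth=6, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```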

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, they have been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been a continuous demand in various fields for market information at the level of specific products. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and adequate information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than those previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, the data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products is summed to estimate the market size of the product groups. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and then applied a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. The product names similar to the KSIC indexes were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied for the first time to market size estimation, overcoming the limitations of traditional methods that rely on sampling or on multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted to the purpose of the information use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining Word2Vec with another measure such as Jaccard similarity. The product group clustering could also be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect that they can further improve the performance of the basic model conceptually proposed in this study.
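
The embed-then-cluster step described above can be sketched with gensim's Word2Vec; in the illustration below the corpus file, the seed index word, the similarity threshold of 0.6, and the tokenization are assumptions, while the vector dimension of 300 and window size of 15 are the parameters reported in the abstract.

```python
# Sketch of the Word2Vec-based product grouping; corpus file, seed word,
# tokenization, and the 0.6 similarity threshold are assumptions.
from gensim.models import Word2Vec

# Each line of the corpus: whitespace-separated tokens of one product name.
with open("product_names.txt", encoding="utf-8") as f:
    sentences = [line.split() for line in f]

# Parameters reported in the paper: vector dimension 300, window size 15.
model = Word2Vec(sentences, vector_size=300, window=15, min_count=5)

# Derive a product group around a KSIC index word via cosine similarity.
seed = "refrigerator"  # stand-in for a KSIC index word
group = [w for w, sim in model.wv.most_similar(seed, topn=200) if sim >= 0.6]

# Market size of the group = sum of the matching products' sales records
# (the join against company sales data is omitted in this sketch).
```

Raising the similarity threshold narrows the product group to a more specific market category; lowering it broadens the category, which is the adjustability the abstract highlights.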

Development of Korean Version of Heparin-Coated Shunt (헤파린 표면처리된 국산화 혈관우회도관의 개발)

  • Sun, Kyung;Park, Ki-Dong;Baik, Kwang-Je;Lee, Hye-Won;Choi, Jong-Won;Kim, Seung-Chol;Kim, Taik-Jin;Lee, Seung-Yeol;Kim, Kwang-Taek;Kim, Hyoung-Mook;Lee, In-Sung
    • Journal of Chest Surgery
    • /
    • v.32 no.2
    • /
    • pp.97-107
    • /
    • 1999
  • Background: This study was designed to develop a Korean version of a heparin-coated vascular bypass shunt by using a physical dispersing technique. The safety and effectiveness of the thrombo-resistant shunt were tested in experimental animals. Material and Method: A bypass shunt model was constructed on the descending thoracic aorta of 21 adult mongrel dogs (17.5-25 kg). The animals were divided into groups of no treatment (CONTROL group; n=3), no treatment with systemic heparinization (HEPARIN group; n=6), the Gott heparin shunt (GOTT group; n=6), or the Korean heparin shunt (KIST group; n=6). Parameters observed were complete blood cell counts, coagulation profiles, kidney and liver function (BUN/Cr and AST/ALT), and surface scanning electron microscope (SSEM) findings. Blood was sampled from the aorta distal to the shunt and was compared before the bypass and at 2 hours after the bypass. Result: There were no differences between the groups before the bypass. At bypass 2 hours, the platelet level increased in the HEPARIN and GOTT groups (p<0.05), but there were no differences between the groups. Changes in other blood cell counts were insignificant between the groups. Activated clotting time, activated partial thromboplastin time, and thrombin time were prolonged in the HEPARIN group (p<0.05), and the differences between the groups were significant (p<0.005). Prothrombin time increased in the GOTT group (p<0.05) with no differences between the groups. Changes in fibrinogen level were insignificant between the groups. Antithrombin III levels increased in the HEPARIN and KIST groups (p<0.05), and the inter-group differences were also significant (p<0.05). The protein C level decreased in the HEPARIN group (p<0.05) with no differences between the groups. BUN levels increased in all groups, especially in the HEPARIN and KIST groups (p<0.05), but there were no differences between the groups. Changes in Cr, AST, and ALT levels were insignificant between the groups. SSEM findings revealed severe aggregation of platelets and other cellular elements in the CONTROL group, and the HEPARIN group showed more adherence of cellular elements than the GOTT or KIST group. Conclusion: These results show that the heparin-coated bypass shunts (either GOTT or KIST) can suppress thrombus formation on the surface without inducing bleeding tendencies, while systemic heparinization (HEPARIN) may not block activation of the coagulation system on the surface in contact with foreign materials but does increase bleeding tendencies. We also conclude that the thrombo-resistant effects of the Korean version of the heparin shunt (KIST) are similar to those of the commercialized heparin shunt (GOTT).

Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.1-8
    • /
    • 2013
  • Data mining techniques are used to find important and meaningful information in huge databases, and pattern mining is one of the significant data mining techniques. Pattern mining is a method of discovering useful patterns in huge databases. Frequent pattern mining, one such technique, extracts patterns whose frequencies exceed a minimum support threshold; such patterns are called frequent patterns. Traditional frequent pattern mining applies a single minimum support threshold to the whole database. This single-support model implicitly assumes that all items in the database have the same nature. In real-world applications, however, each item in a database can have its own characteristics, so an appropriate pattern mining technique reflecting those characteristics is required. In the single-support framework, where the natures of items are not considered, the threshold must be set very low to mine patterns containing rare items, which in turn yields too many patterns including meaningless items; conversely, no such pattern can be mined if the threshold is set too high. This dilemma is called the rare item problem. To solve it, initial research proposed approximate approaches that split the data into several groups according to item frequencies or that group related rare items. However, these methods cannot find all frequent patterns, including rare frequent patterns, because they are based on approximate techniques. Hence, a pattern mining model with multiple minimum supports was proposed to solve the rare item problem. In this model, each item has its own minimum support threshold, called the MIS (Minimum Item Support), which is calculated from the item's frequency in the database. The multiple minimum supports model finds all rare frequent patterns by applying the MIS values, without generating meaningless patterns or losing significant ones. Meanwhile, candidate patterns are extracted during the mining process, and in the single-support model only the single threshold is compared against the frequencies of the candidate patterns; the characteristics of the items composing a candidate pattern are thus not reflected, and the rare item problem arises. To address this in the multiple minimum supports model, the minimum MIS value among the items in a candidate pattern is used as the support threshold for that pattern, so that its characteristics are considered (see the sketch below). To efficiently mine frequent patterns including rare ones with this concept, tree-based algorithms in the multiple minimum supports model sort items in the tree in MIS-descending order, in contrast to the frequency-descending order used in the single-support model. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and conduct a performance evaluation against a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple minimum supports based algorithm outperforms the single minimum support based one, although it demands more memory for storing the MIS information. Moreover, both algorithms show good scalability.
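
A minimal sketch of the minimum-MIS rule follows, assuming the common formulation MIS(i) = max(β · support(i), LS) from the multiple-minimum-supports literature; the toy transactions and the β and LS values are invented for illustration.

```python
# Toy illustration of multiple minimum supports: each item gets its own
# MIS, and a candidate pattern is tested against the minimum MIS among
# its items. Transactions, beta, and LS are invented for the example.
from collections import Counter

transactions = [{"bread", "milk"}, {"bread", "caviar"}, {"milk", "caviar"},
                {"bread", "milk", "caviar"}, {"bread", "truffle"}]
n = len(transactions)
freq = Counter(item for t in transactions for item in t)

# A common MIS assignment: MIS(i) = max(beta * support(i), LS).
beta, LS = 0.6, 0.15
mis = {item: max(beta * freq[item] / n, LS) for item in freq}

def support(pattern):
    return sum(pattern <= t for t in transactions) / n

def is_frequent(pattern):
    return support(pattern) >= min(mis[i] for i in pattern)  # minimum-MIS rule

# The rare pattern {bread, truffle} (support 0.2) survives under MIS,
# but would be pruned by a single minimum support of, say, 0.3.
print(is_frequent({"bread", "truffle"}), support({"bread", "truffle"}) >= 0.3)
```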

A Study on Improvement for Fishing Gear and Method of Pound Net - I - Net Shapes of the Commerical Net in the Flow - (정치망 어구어법의 개발에 관한 연구-I - 현용어구의 흐름에 대한 형상 변화 -)

  • Yun, Il-Bu;Lee, Ju-Hee;Kwon, Byeong-Guk;Cho, Young-Bok;Yoo, Jae-Bum;Kim, Seong-Hun;Kim, Boo-Young
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.40 no.4
    • /
    • pp.268-281
    • /
    • 2004
  • A study was carried out to estimate the deformation of a pound net in a current by model tests in a circulating water channel. The tension of the frame rope and the variation of net shape were measured to investigate the deformation of the model pound net in the flow. The results were obtained as follows; 1. The experimental relationship between the tension (R) of the frame rope and the velocity (v) was found to be R = $19.58v^{1.98}$ ($r^2$ = 0.98) for flow entering from the fish court net side and R = $26.90v^{1.72}$ ($r^2$ = 0.95) for flow entering from the bag net side, over velocities from 0.0 m/s to 0.6 m/s, respectively. 2. The flow speed inside the model net gradually decreased as the flow passed through the netting panels. For flow entering from the fish court net side, the flow speed at the measurement point (h) inside the first bag net was about 70% of the initial flow speed at 0.1 m/s, 60% at 0.2 m/s, 50% at 0.3 m/s and 40% at 0.4-0.6 m/s, respectively. For flow entering from the bag net side, the flow speed dropped steeply as it passed through the second bag net: it was 30-60% of the initial flow speed there, 20-30% inside the first bag net, and about 10-20% inside the inclined passage net. 3. For flow entering from the fish court net side, the deformed angle of the fish court net varied from 0$^{\circ}$ to 70$^{\circ}$, that of the inclined passage net from 0$^{\circ}$ to 63$^{\circ}$, and that of the second bag net from 0$^{\circ}$ to 47$^{\circ}$. 4. For flow entering from the bag net side, the deformed angle of the second bag net varied from 0$^{\circ}$ to 70$^{\circ}$, that of the inclined passage net from 0$^{\circ}$ to 55$^{\circ}$, and that of the fish court net from 0$^{\circ}$ to 50$^{\circ}$. The depth ratio of the first bag net changed from 0% to 35%, that of the second bag net from 0% to 20%, and that of the inclined passage net from 0% to 35%. At a flow speed of 0.5 m/s, the inclined passage net was raised up to the entry of the bag net and blocked more than 90% of it. 5. To increase the opening volume of the pound net, additional weights need to be attached outside the fish court net, inclined passage net and bag net, and at the same time the twine tension needs to be adjusted to maintain the net shape.
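
The reported tension-velocity relations are simple power laws; the sketch below shows how such a fit could be obtained with SciPy, with the measurement values invented as placeholders since only the fitted coefficients appear in the abstract.

```python
# Fitting a power law R = a * v^b to tension-velocity measurements;
# the data points are invented placeholders, only the functional form
# matches the relations reported above (e.g. R = 19.58 v^1.98).
import numpy as np
from scipy.optimize import curve_fit

v = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])   # flow velocity (m/s)
R = np.array([0.2, 0.8, 1.9, 3.3, 5.1, 7.2])   # frame-rope tension (placeholder)

def power_law(v, a, b):
    return a * v**b

(a, b), _ = curve_fit(power_law, v, R)
print(f"R = {a:.2f} * v^{b:.2f}")
```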

The Relationship Between DEA Model-based Eco-Efficiency and Economic Performance (DEA 모형 기반의 에코효율성과 경제적 성과의 연관성)

  • Kim, Myoung-Jong
    • Journal of Environmental Policy
    • /
    • v.13 no.4
    • /
    • pp.3-49
    • /
    • 2014
  • The growing interest of stakeholders in corporate responsibility for the environment and tightening environmental regulations are highlighting the importance of environmental management more than ever. However, companies' awareness of the importance of the environment still lags behind, and related academic work has not reached consistent conclusions on the relationship between environmental performance and economic performance. One of the reasons is the different ways of measuring these two performances. The evaluation scope of economic performance is relatively narrow and can be measured in a single unified unit such as price, whereas the scope of environmental performance is diverse and a wide range of units is used instead of a single unified unit. The results of studies can therefore differ depending on the performance indicators selected. To resolve this problem, generalized and standardized performance indicators should be developed. In particular, the indicators should be able to cover both environmental and economic performance, because the recent idea of environmental management has expanded to encompass the concept of sustainability. Another reason is that most current research tends to focus on the motives for environmental investment and on environmental performance, and does not offer a guideline for an effective implementation strategy for environmental management. For example, a process improvement strategy or a market differentiation strategy can be deployed by comparing environmental competitiveness among companies in the same or similar industries, so that a virtuous cycle between environmental and economic performance can be secured. This report proposes a novel method for measuring eco-efficiency by utilizing Data Envelopment Analysis (DEA), which is able to combine multiple environmental and economic performances. Based on the eco-efficiencies, environmental competitiveness is analyzed, and the optimal combination of inputs and outputs is recommended for improving the eco-efficiency of inefficient firms. Furthermore, panel analysis is applied to the causal relationship between eco-efficiency and economic performance, and a pooled regression model is used to investigate the relationship between the two. The four-year eco-efficiencies between 2010 and 2013 of 23 companies are obtained from the DEA analysis; the efficiencies of the 23 companies are compared in terms of technical efficiency (TE), pure technical efficiency (PTE) and scale efficiency (SE), and a set of recommendations for the optimal combination of inputs and outputs is suggested for the inefficient companies. The experimental results of the panel analysis demonstrate causality from eco-efficiency to economic performance, and the pooled regression shows that eco-efficiency positively affects the financial performance (ROA and ROS) of the companies, as well as firm value (Tobin's Q, stock price, and stock returns). This report proposes a novel approach for generating standardized performance indicators from multiple environmental and economic performances, which enhances the generality of related research and provides deep insight into the sustainability of environmental management. Furthermore, using the efficiency indicators obtained from the DEA model, the causes of changes in eco-efficiency can be investigated and an effective strategy for environmental management can be suggested. Finally, this report can motivate environmental management by providing empirical evidence that environmental investments can improve economic performance.
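
DEA efficiency scores of the kind used above can be computed as small linear programs; the following sketch solves an input-oriented CCR model with SciPy under invented input/output data. This is one common DEA formulation, not necessarily the exact model specification of the paper.

```python
# Input-oriented CCR DEA sketch: for each firm k, minimize theta subject
# to a convex combination of peers using no more than theta * inputs(k)
# while producing at least outputs(k). The data matrices are invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0],     # inputs: rows = input types, cols = firms
              [1.0, 2.0, 1.5]])
Y = np.array([[1.0, 2.0, 1.8]])    # outputs: rows = output types

n_firms = X.shape[1]
for k in range(n_firms):
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n_firms)]
    A_inputs = np.hstack([-X[:, [k]], X])                    # sum_j l_j x_ij <= theta x_ik
    A_outputs = np.hstack([np.zeros((Y.shape[0], 1)), -Y])   # sum_j l_j y_rj >= y_rk
    res = linprog(c,
                  A_ub=np.vstack([A_inputs, A_outputs]),
                  b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, k]],
                  bounds=[(0, None)] * (n_firms + 1))
    print(f"firm {k}: technical efficiency = {res.fun:.3f}")
```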

A New Exploratory Research on Franchisor's Provision of Exclusive Territories (가맹본부의 배타적 영업지역보호에 대한 탐색적 연구)

  • Lim, Young-Kyun;Lee, Su-Dong;Kim, Ju-Young
    • Journal of Distribution Research
    • /
    • v.17 no.1
    • /
    • pp.37-63
    • /
    • 2012
  • In franchise businesses, exclusive sales territory (sometimes EST in the tables) protection is a very important issue from economic, social and political points of view. It affects the growth and survival of both franchisor and franchisee and often raises issues of social and political conflict. When franchisees are not familiar with the related laws and regulations, franchisors have a good chance of exploiting this. Exclusive sales territory protection by the manufacturer and distributors (wholesalers or retailers) means a sales area restriction under which only certain distributors have the right to sell products or services. A distributor who has been granted an exclusive sales territory can protect its own territory, but may be prohibited from entering other regions. Even though exclusive sales territories are a quite critical problem in franchise businesses, there is not much rigorous research about their reasons, results, evaluation, and future direction based on empirical data. This paper tries to address the problem not only in terms of logical and nomological validity, but also through empirical validation. In pursuing the empirical analysis, we take into account the difficulties of real data collection and of statistical analysis techniques. We use a set of disclosure document data collected by the Korea Fair Trade Commission, instead of the conventional survey method, which is usually criticized for its measurement error. Existing theories about exclusive sales territories can be summarized into two groups, as shown in the table below. The first concerns the effectiveness of exclusive sales territories from both the franchisor and franchisee points of view. In fact, the outcome of an exclusive sales territory can be positive for franchisors but negative for franchisees, and it can be positive in terms of sales but negative in terms of profit; variables and viewpoints should therefore be set properly. The second concerns the motives or reasons why exclusive sales territories are protected. The reasons can be classified into four groups: industry characteristics, franchise system characteristics, the capability to maintain an exclusive sales territory, and strategic decisions. Within these four groups there are more specific variables and theories, as below. Based on these theories, we develop nine hypotheses, which are briefly shown in the last table below together with the results. To validate the hypotheses, data were collected from the government (FTC) homepage, which is an open source. The sample consists of 1,896 franchisors and contains about three years of operating data, from 2006 to 2008. Within the sample, 627 have an exclusive sales territory protection policy, and these are not evenly distributed over the 19 representative industries. Additional data were also collected from other government agency homepages, such as Statistics Korea, and we combined data from various secondary sources to create meaningful variables, as shown in the table below. All variables are dichotomized by mean or median split if they are not inherently dichotomous by definition, since each hypothesis is composed of multiple variables and there is no solid statistical technique that incorporates all these conditions to test the hypotheses. This paper uses a simple chi-square test (as sketched below) because the hypotheses and theories are built upon quite specific conditions such as industry type, economic conditions, company history and various strategic purposes. It is almost impossible to find samples satisfying all of these conditions, and they cannot be manipulated in experimental settings. More advanced statistical techniques work well on clean data without exogenous variables, but not on real, complex data. The chi-square test is applied by grouping the samples into four cells using two criteria: whether they protect an exclusive sales territory or not, and whether they satisfy the conditions of each hypothesis. We then test whether the proportion of franchisors that satisfy the conditions and protect an exclusive sales territory significantly exceeds the proportion that satisfy the conditions but do not. In fact, the chi-square test is equivalent to Poisson regression, which allows more flexible application. As a result, only three hypotheses are accepted. When the attitude toward risk is high, so that the royalty fee is determined according to sales performance, EST protection produces poor results, as expected. When the franchisor protects ESTs in order to recruit franchisees more easily, protection produces better results. Also, when EST protection aims to improve the efficiency of the franchise system as a whole, it shows better performance: high efficiency is achieved because ESTs prevent free riding by franchisees who would exploit others' marketing efforts, encourage proper investment, and distribute franchisees evenly across regions. The other hypotheses are not supported by the significance tests. Exclusive sales territories should be protected from proper motives and administered for mutual benefit. Legal restrictions driven by government agencies like the FTC can be misused and cause misunderstandings, so more careful monitoring of real practices and more rigorous studies by both academics and practitioners are needed.
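
The 2x2 chi-square test described above is straightforward to reproduce; in the sketch below the cell counts are invented placeholders, while the grouping criteria (condition satisfied vs. not, EST protected vs. not) follow the abstract.

```python
# Chi-square test on a 2x2 table: rows = hypothesis condition satisfied
# or not, columns = EST protected or not. Counts are invented placeholders.
from scipy.stats import chi2_contingency

table = [[120,  80],     # condition satisfied:     [protect EST, do not]
         [507, 1189]]    # condition not satisfied: [protect EST, do not]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```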

Neuroprotective Effect of Cyclosporin A on Spinal Cord Ischemic Injury in Rabbits (토끼를 이용한 척수 허혈 손상 모델에서 Cyclosporin A의 척수 손상에 대한 보호 효과)

  • Shin Yoon-Cheol;Choe Ghee-Young;Kim Won-Gon
    • Journal of Chest Surgery
    • /
    • v.39 no.10 s.267
    • /
    • pp.739-748
    • /
    • 2006
  • Background: The purpose of this study was to ascertain the neuroprotective effect of cyclosporin A in a 25-min surgical ischemia model of the rabbit spinal cord, with neuropathological correlation and immunohistochemical analyses. Material and Method: Thirty-two New Zealand white rabbits were randomly divided into four groups: the control I2 group (n=8), the control I7 group (n=8), the cyclosporin Cs2 group (n=8), and the cyclosporin Cs7 group (n=8). All rabbits underwent a 25-min aortic cross-clamp. The I2 group received no intervention and was sacrificed on the 2nd postoperative day, while the I7 group received no intervention and was sacrificed on the 7th postoperative day. The Cs2 group received cyclosporin A (25 mg/kg) intravenously 15 min after the 25-min cross-clamp and was sacrificed on the 2nd postoperative day, while the Cs7 group received cyclosporin A (25 mg/kg) intravenously 15 min after the 25-min cross-clamp and was sacrificed on the 7th postoperative day. Neurologic function was evaluated on the 2nd and 7th postoperative days using the Tarlov scoring system. After scoring neurologic function, all rabbits were sacrificed for histopathologic observation. Result: All rabbits survived the experimental procedure. The Tarlov scores did not show any differences between the control and cyclosporin groups on the 2nd day. The scores of group Cs7 ($2.75{\pm}0.89$) were significantly higher than those of group I7 ($1.25{\pm}1.39$) on the 7th day (p<0.05). On histologic examination, specimens of the spinal cord showed necrosis and apoptosis. The pathologic scores of group Cs7 ($1.0{\pm}0.53$) were lower than those of group I7 ($2.13{\pm}1.36$, p<0.05). TUNEL staining showed apoptosis of the specimens in groups I2 and Cs2, but there was no statistically significant difference between the groups in the score. There was greater overexpression of HSP70 and nNOS in the cyclosporin groups than in the control groups. Conclusion: We think that cyclosporin A may decrease neuronal cell death, through induced upregulation of HSP70, against 25-min ischemia of the spinal cord in the rabbit.