• Title/Summary/Keyword: Financial Information System

Search Results: 1,239

Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan;Seong, Nohyoon
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.33-49
    • /
    • 2018
  • The success of a TV drama has a strong impact on ratings and on the effectiveness of channel promotion, and its cultural and business impact has also been demonstrated through the Korean Wave. Early prediction of a blockbuster TV drama is therefore strategically important for the media industry. Previous studies have tried to predict audience ratings and drama success using various methods, but most have made simple predictions based on intuitive factors such as the lead actor and the time slot, which limits their predictive power. In this study, we propose a model that predicts the popularity of a drama by analyzing customers' viewing patterns on the basis of established theories. This is not only a theoretical contribution but also a practical one, since the model can be used by actual broadcasting companies. We collected data on 280 TV mini-series dramas broadcast over terrestrial channels during the 10 years from 2003 to 2012. From these data, we selected the most highly ranked and the least highly ranked 45 TV dramas and analyzed their viewing patterns in 11 steps. The assumptions and conditions for modeling were drawn from existing studies, from the opinions of actual broadcasters, and from data mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) using Euclidean and correlation methods, which we term similarity (the sum of distances). Through this similarity measure, we predicted the success of dramas from the distribution of viewers' initial viewing-time patterns over episodes 1-5. To confirm that the model is not sensitive to the choice of measurement method, various distance measures were applied and the robustness of the model was checked. Once the model was established, we further improved its predictive power using a grid search.
Furthermore, when a new drama is broadcast, we classify viewers who watched more than 70% of the total airtime as "passionate viewers." We then compared the passionate-viewer percentages of the most highly ranked and the least highly ranked dramas in order to assess the likelihood of a blockbuster TV mini-series. We find that the initial viewing-time pattern is the key factor in predicting blockbuster dramas: our model correctly classified blockbuster dramas with 75.47% accuracy using the initial viewing-time pattern analysis. This paper achieves a high prediction rate while suggesting an audience-rating method different from existing ones. Broadcasters currently rely heavily on a few famous actors, a so-called star system, and face more severe competition than ever due to rising production costs, a long-term recession, and aggressive investment by comprehensive programming channels and large corporations; everyone is in a financially difficult situation. The basic revenue model of broadcasters is advertising, and the execution of advertising uses audience ratings as a basic index. The drama market is marked by uncertainty, since demand is difficult to forecast given the nature of the product, while dramas contribute substantially to the financial success of a broadcaster's content. To minimize the risk of failure, analyzing the distribution of initial viewing time can therefore provide practical help in establishing a response strategy (scheduling, marketing, story changes, etc.) for the company involved. We also found that audience behavior is crucial to the success of a program. In this paper, we define TV viewing loyalty as a measure of how enthusiastically a program is watched.
We can successfully predict the success of a program by calculating the loyalty of these passionate viewers. This way of calculating loyalty can also be applied to other platforms, and it can be used for marketing programs such as highlights, script previews, making-of videos, characters, games, and other marketing projects.
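The similarity measure described above, the sum of viewing-time distances to a reference group under either Euclidean or correlation distance, can be sketched as a minimal classifier. The patterns and group labels below are hypothetical placeholders, not the study's data:

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two viewing-time pattern vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def correlation_distance(a, b):
    """1 - Pearson correlation, so similarly shaped patterns give small distance."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return 1.0 - cov / (sa * sb)

def similarity(pattern, group, dist=euclidean_distance):
    """The paper's 'similarity': the sum of distances to a reference group."""
    return sum(dist(pattern, p) for p in group)

def predict_hit(new_pattern, hit_group, flop_group):
    """Classify a new drama by whichever reference group it is closer to."""
    if similarity(new_pattern, hit_group) < similarity(new_pattern, flop_group):
        return "hit"
    return "flop"

# Hypothetical per-episode viewing-time ratios for episodes 1-3
hit_group = [[0.90, 0.85, 0.88], [0.95, 0.90, 0.92]]
flop_group = [[0.20, 0.15, 0.10], [0.30, 0.25, 0.20]]
print(predict_hit([0.88, 0.82, 0.86], hit_group, flop_group))  # → hit
```

The same `predict_hit` skeleton works with `correlation_distance` passed through `similarity`, which is how the paper's robustness check across distance measures could be reproduced.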

CQI Action Team Approach to Prevent Pressure Sores in Intensive Care Unit of an Acute Hospital Korea (중환자의 욕창 예방 연구 : 욕창 예방 QI팀을 중심으로)

  • Kang, So Young;Choi, Eun-Kyung;Kim, Jin-Ju;Ju, Mi-Jung
    • Quality Improvement in Health Care
    • /
    • v.4 no.1
    • /
    • pp.50-63
    • /
    • 1997
  • Background: A pressure sore is defined as any skin lesion caused by unrelieved pressure that results in damage to underlying tissue. Health care institutions in the United States have reported pressure sore incidence rates ranging from 6 to 14%. Intensive care units, which require the highest quality of care, have been found to have incidence rates over 40%. Annual expenditures for the care of pressure sores in the United States have been estimated at $7.5 billion; furthermore, 50 percent more nursing time is required to care for patients with pressure sores than to implement preventive measures against pressure sore formation. In Korea, however, there have been few reliable reports or studies on pressure sore incidence rates in health care institutions, including intensive care units, or on integrated approaches such as a CQI action team for the risk assessment, prevention, and treatment of pressure ulcers. Therefore, the aims of this study were to develop a pressure sore risk assessment tool and a protocol for the prevention of pressure sore formation through CQI action team activities, to monitor the incidence rate of pressure sores and the time to sore formation in patients at high risk, and to roughly estimate the nursing time spent on sore dressing during the research period as an effect of the CQI action team. Method: The CQI action team in the intensive care unit, launched in early 1996, reviewed the literature on standardized risk assessment tools, developed a pressure sore assessment tool based on the Braden Scale, tested its validity, and compared statistics including the incidence rate of pressure sores in patients at high risk. Through these activities, the CQI action team developed a protocol, called the St. Mary's Hospital Intensive Care Unit Pressure Sore Protocol, which shifted the emphasis from wound treatment to wound prevention.
After the protocol was applied to patients at high risk, the incidence rate and the period of prevention against pressure sore development were compared with those of patients who received care before implementation of the protocol, using the chi-square test and the Kaplan-Meier method of survival analysis. Result: The CQI action team found a significant difference in the incidence rate of pressure sores between high-risk patients who received care before implementation of the protocol (control group) and those who received it afterward (experimental group) (p<.05). In the control group, patients had a 25% probability of pressure sore formation by the 6th hospital day in the ICU; in the experimental group, patients had a 10% probability by the 10th hospital day. Thus, there was a significant difference (p<.05) in survival rate between the two groups. Nursing time for dressing pressure sores in the experimental group also decreased to 50% of that in the control group. Conclusion: The collaborative team effort reduced incidence, lengthened the period of prevention against pressure sores, and decreased nursing care time for sore dressing. Several suggestions remain for future study. The preventive care system for pressure sores should be applied to patients at moderate or low risk through continuous CQI team activities based on the Bed Sore Indicator Fact Sheet. Hospital-wide support, such as incentives, should be offered to participants to maintain strong commitment to the CQI team. In addition, a quality information system that regularly monitors incidents and estimates the cost of poor quality, such as workload (full-time equivalents) or financial loss, needs to be developed to support CQI team activities and to empower hospital-wide QI implementation.
Despite several limitations, this study serves as one of the report cards for CQI team activities in the intensive care unit of an acute hospital and as a trial of health care quality improvement in Korea.
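The Kaplan-Meier comparison used in this study can be illustrated with a minimal product-limit estimator. The durations below are synthetic, not the study's patient data:

```python
def kaplan_meier(durations, events):
    """Product-limit survival estimate.

    durations: hospital day on which a pressure sore appeared, or the
               last observed day if none did.
    events:    1 if a sore developed on that day, 0 if censored.
    Returns a dict mapping each observed time to the survival probability S(t).
    """
    pairs = sorted(zip(durations, events))
    at_risk = len(pairs)
    survival = 1.0
    curve = {}
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        sores = sum(1 for d, e in pairs if d == t and e == 1)
        leaving = sum(1 for d, e in pairs if d == t)
        if sores:
            survival *= (at_risk - sores) / at_risk
        curve[t] = survival
        at_risk -= leaving
        while i < len(pairs) and pairs[i][0] == t:
            i += 1
    return curve

# Synthetic example: sores on days 1, 2, 3; one patient censored on day 2
curve = kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1])
print(curve)  # survival ≈ 0.75 after day 1, 0.5 after day 2, 0.0 after day 3
```

In the study, a curve like this would be computed for the control and experimental groups separately and the two compared for a significant difference.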


Effects of Customers' Relationship Networks on Organizational Performance: Focusing on Facebook Fan Page (고객 간 관계 네트워크가 조직성과에 미치는 영향: 페이스북 기업 팬페이지를 중심으로)

  • Jeon, Su-Hyeon;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.57-79
    • /
    • 2016
  • The number of users of social network services (SNS), one of the main social media channels, is steadily increasing. In line with this trend, more companies are taking an interest in this networking platform and investing in it. SNS has received much attention as a tool for spreading the messages a company wants to deliver to its customers and has been recognized as an important channel for relationship marketing. Today's rapidly changing media environment makes it possible for companies to approach their customers in various ways. In particular, social network services, which have developed rapidly, provide an environment in which customers can talk freely about products; for companies, they also work as a channel for delivering customized information to customers. To succeed in the online environment, companies need not only to build relationships between themselves and their customers but also to focus on the relationships among customers. In response to the online environment and the continuous development of technology, companies have tirelessly devised novel marketing strategies. Especially now that one-to-one marketing is available, it is more important than ever for companies to maintain relationship marketing with their customers. Among the many SNS platforms, Facebook, which many companies use as a communication channel, provides a fan page service for each company to support its business. A Facebook fan page is a platform on which events, information, and announcements can be shared with customers using text, videos, and pictures. Companies open their own fan pages to publicize their companies and businesses. Such a page functions as a company website and also has the characteristics of a brand community, such as a blog.
As Facebook has become a major communication medium with customers, companies recognize its importance as an effective marketing channel, but they still need to investigate their business performance on Facebook. Although a Facebook fan page has infinite potential, even functioning as a community among users in a way other platforms do not, analyses that regard companies' Facebook fan pages as communities remain incomplete. This study explores the relationships among customers through the network of Facebook fan page users. Previous studies on companies' Facebook fan pages focused on finding effective operational directions by analyzing each company's usage. In this study, by contrast, we derive structural network variables by which customer commitment can be measured, applying social network analysis methodology, and empirically investigate the influence of the structural characteristics of the network on companies' business performance. Through each company's Facebook fan page, we extract the network of users who engaged in communication with the company: a one-mode undirected binary network in which users are the nodes and their marketing-related interactions are the links. From this network, we derive structural variables that can explain the commitment of each company's customers, who pressed "like," made comments, and shared Facebook marketing messages, by calculating density, global clustering coefficient, mean geodesic distance, and diameter. Using each company's historical performance, such as net income and Tobin's Q, as outcome variables, this study investigates the influence on business performance.
For this purpose, we collected network data on 54 KOSPI-listed companies that had posted more than 100 articles on their Facebook fan pages during the data collection period, and derived the network indicators of each company. The indicators related to company performance were calculated based on values posted on the DART website of the Financial Supervisory Service. From an academic perspective, this study suggests a new approach, via social network analysis methodology, for researchers studying the business use of social media channels. From a practical perspective, it proposes more substantive marketing performance measurements for companies conducting marketing activities through social media, and it is expected to lay a foundation for establishing smart business strategies based on the network indicators.
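The four structural indicators named in this abstract (density, global clustering coefficient, mean geodesic distance, and diameter) can be computed for a one-mode undirected binary network with a short sketch. The toy fan-page interaction network below is hypothetical:

```python
from collections import deque
from itertools import combinations

def density(adj):
    """Edges present divided by edges possible in an undirected network."""
    n = len(adj)
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    return edges / (n * (n - 1) / 2)

def global_clustering(adj):
    """Transitivity: closed triplets divided by all connected triplets."""
    closed = triplets = 0
    for v, nbrs in adj.items():
        triplets += len(nbrs) * (len(nbrs) - 1) // 2
        closed += sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return closed / triplets if triplets else 0.0

def mean_geodesic_and_diameter(adj):
    """Average shortest-path length and the longest one, via BFS from each node."""
    total = count = diam = 0
    for src in adj:
        dist, queue = {src: 0}, deque([src])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        for w, d in dist.items():
            if w != src:
                total += d
                count += 1
                diam = max(diam, d)
    return total / count, diam

# Hypothetical fan-page interaction network among users A-D
adj = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
print(density(adj))                    # 4 of 6 possible edges
print(global_clustering(adj))
print(mean_geodesic_and_diameter(adj))
```

In the study, one such network would be built per company fan page and these four numbers used as the explanatory variables.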

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy. Recent studies have begun applying artificial intelligence approaches to estimate the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models on the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, yielding 1,487 observations; 1,187 days were used to train the suggested GARCH models, and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characterized by fat tails and leptokurtosis.
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel function shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's volatility is forecast to increase, buy volatility today; if it is forecast to decrease, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic, because historical volatility values themselves cannot be traded, but our simulation results are still meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can trade. The trading systems with SVR-based GARCH models show higher returns than those with MLE-based GARCH in the test period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return, and SVR-based symmetric S-GARCH shows +526.4%. MLE-based asymmetric E-GARCH shows -72%, and SVR-based asymmetric E-GARCH shows +245.6%. MLE-based asymmetric GJR-GARCH shows -98.7%, and SVR-based asymmetric GJR-GARCH shows +126.3%. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS. The SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We also do not consider trading costs, including brokerage commissions and slippage.
The IVTS trading performance is unrealistic in that we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential for real trading as well as for asset pricing models. Further studies on other machine learning-based GARCH models can provide better information for stock market investors.
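A minimal sketch of the two pieces above, a GARCH(1,1) one-step-ahead variance recursion and the IVTS entry rule, is shown below. The parameter values are placeholders; in the paper they would come from MLE or SVR estimation:

```python
def garch11_next_variance(returns, omega, alpha, beta):
    """One-step-ahead conditional variance,
    h_{t+1} = omega + alpha * r_t**2 + beta * h_t,
    seeded with the sample second moment of the return series."""
    h = sum(r * r for r in returns) / len(returns)
    for r in returns:
        h = omega + alpha * r * r + beta * h
    return h

def ivts_signal(current_vol, forecast_vol):
    """IVTS entry rule: buy volatility if it is forecast to rise,
    sell if it is forecast to fall, otherwise hold the position."""
    if forecast_vol > current_vol:
        return "buy volatility"
    if forecast_vol < current_vol:
        return "sell volatility"
    return "hold"

# Placeholder parameters; a stationary GARCH(1,1) needs alpha + beta < 1
returns = [0.01, -0.02, 0.015, -0.005]
h_next = garch11_next_variance(returns, omega=1e-6, alpha=0.08, beta=0.90)
print(ivts_signal(current_vol=h_next * 0.9, forecast_vol=h_next))  # → buy volatility
```

The SVR variant in the paper would replace the fixed `omega`, `alpha`, `beta` with coefficients fitted by support vector regression on the squared-return recursion.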

A Study of Factors Associated with Software Developers Job Turnover (데이터마이닝을 활용한 소프트웨어 개발인력의 업무 지속수행의도 결정요인 분석)

  • Jeon, In-Ho;Park, Sun W.;Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.191-204
    • /
    • 2015
  • According to the '2013 Performance Assessment Report on the Financial Program' from the National Assembly Budget Office, the unfilled recruitment ratio for software (SW) developers in South Korea was 25% in the 2012 fiscal year, and the ratio for highly qualified SW developers reached almost 80%. This phenomenon is even more pronounced in small and medium enterprises with fewer than 300 employees. Young job seekers in South Korea increasingly avoid becoming SW developers, and even current SW developers want to change careers, which hinders the national development of the IT industry. The Korean government has recently recognized the problem and implemented policies to foster young SW developers. Thanks to this effort, it has become easier to find young beginning-level SW developers, but it is still hard for many IT companies to recruit highly qualified ones, because becoming a SW development expert requires long-term experience. Thus, improving the job continuity intentions of current SW developers is more important than fostering new ones. This study therefore surveyed the job continuity intentions of SW developers and analyzed the factors associated with them. We carried out a survey from September 2014 to October 2014, targeting 130 SW developers working in the IT industry in South Korea. We gathered demographic information and characteristics of the respondents, the work environment of the SW industry, and the social standing of SW developers. Regression analysis and a decision tree method, two widely used data mining techniques that have explanatory power and are mutually complementary, were then applied to the data. We first performed a linear regression to find the important factors associated with SW developers' job continuity intentions.
The results showed that the 'expected age' to work as a SW developer was the factor most significantly associated with job continuity intention. We suppose the major cause of this phenomenon is a structural problem of the IT industry in South Korea, which requires SW developers to move from development into management as they are promoted. The 'motivation' to become a SW developer and the 'personality (introverted tendency)' of a SW developer were also highly important factors. Next, the decision tree method was performed to extract the characteristics of highly motivated and less motivated developers, using the well-known C4.5 algorithm. The results showed that 'motivation', 'personality', and 'expected age' were again important factors influencing job continuity intentions, similar to the regression results. In addition, the 'ability to learn' new technology was a crucial factor in the decision rules for job continuity: a person with a high ability to learn new technology tends to work as a SW developer for a longer period. The decision rules also showed that the 'social standing' of SW developers and the 'prospects' of the SW industry were minor factors influencing job continuity intentions. On the other hand, 'type of employment (regular/non-regular position)' and 'type of company (ordering company/service-providing company)' did not affect job continuity intention in either method. In this research, we examined the job continuity intentions of SW developers actually working at IT companies in South Korea and analyzed the factors associated with them. These results can be used for human resource management in IT companies when recruiting or fostering highly qualified SW experts.
They can also help in building SW developer fostering policy and in solving the problem of unfilled SW developer recruitment in South Korea.
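The splitting criterion underlying C4.5, entropy-based information gain normalized into a gain ratio, can be sketched briefly. The attribute and label values below are hypothetical survey fields, not the study's data:

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def info_gain(values, labels):
    """Information gain from splitting `labels` by the attribute `values`."""
    splits = {}
    for v, lab in zip(values, labels):
        splits.setdefault(v, []).append(lab)
    remainder = sum(len(s) / len(labels) * entropy(s) for s in splits.values())
    return entropy(labels) - remainder

def gain_ratio(values, labels):
    """C4.5 normalizes gain by the split's own entropy so that
    many-valued attributes are not unfairly favored."""
    split_info = entropy(values)
    return info_gain(values, labels) / split_info if split_info else 0.0

# Hypothetical respondents: does 'ability to learn' predict staying a developer?
ability = ["high", "high", "low", "low"]
outcome = ["stay", "stay", "leave", "leave"]
print(info_gain(ability, outcome))  # perfect split → gain of 1.0 bit
```

C4.5 picks, at each tree node, the attribute with the highest gain ratio; an attribute like 'ability to learn' appearing near the root is what the decision rules above reflect.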

A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.123-139
    • /
    • 2019
  • The Korean NPL (non-performing loan) market was formed by the government and foreign capital shortly after the 1997 IMF crisis. However, this phase was short-lived, as bad debt began to increase again after the 2009 global financial crisis due to the real economic recession. NPL has become a major investment vehicle in recent years, as investment capital from the domestic capital market began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it remains scarce, since the history of capital market investment in the domestic NPL market is short. In addition, declining profitability and price fluctuations driven by the real estate market call for decision making based on more scientific and systematic analysis. In this study, we propose a prediction model that can determine whether the benchmark yield will be achieved, using NPL market data in accordance with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017, totaling 2,291 items. As independent variables, from the 11 variables describing the characteristics of the real estate, we selected only those related to the dependent variable, using one-to-one t-tests, stepwise logistic regression, and a decision tree. Seven independent variables were selected: purchase year, SPC (special purpose company), municipality, appraisal value, purchase cost, OPB (outstanding principal balance), and HP (holding period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached.
This is because a model predicting a binary variable is more accurate than one predicting a continuous variable, and this accuracy is directly related to the model's effectiveness. Moreover, for a special purpose company, the main concern is whether or not to purchase the property, so knowing whether a certain level of return will be achieved is enough to make the decision. For the dependent variable, we constructed and compared predictive models with the threshold adjusted, to ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value. As a result, the average hit ratio of the predictive model built with the dependent variable defined by the 12% standard rate of return was the best, at 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the 7 independent variables, we constructed prediction models using five methodologies, discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic algorithm linear model, and compared them. To do this, 10 sets of training and test data were extracted using the 10-fold validation method. After building the models on these data, the hit ratio of each set was averaged and performance was compared. The average hit ratios of the prediction models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively, confirming that the artificial neural network model is the best. This study demonstrates that using the 7 independent variables and an artificial neural network prediction model will be effective in the future NPL market.
The proposed model predicts in advance whether new items will achieve the 12% return, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
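The evaluation procedure described above, 10-fold splits with an averaged hit ratio, can be sketched generically. The fold contents and labels below are synthetic, not the NPL data:

```python
def k_fold_indices(n, k=10):
    """Partition indices 0..n-1 into k contiguous folds of near-equal size."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def hit_ratio(predicted, actual):
    """Fraction of cases where the predicted class matches the actual class."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

def average_hit_ratio(folds_predicted, folds_actual):
    """Mean hit ratio across folds, as used to compare the five models."""
    ratios = [hit_ratio(p, a) for p, a in zip(folds_predicted, folds_actual)]
    return sum(ratios) / len(ratios)

folds = k_fold_indices(2291, 10)   # 2,291 items, as in the study
print([len(f) for f in folds])     # → [230, 229, 229, 229, 229, 229, 229, 229, 229, 229]
print(average_hit_ratio([[1, 1, 0], [0, 1, 0]],
                        [[1, 0, 0], [0, 1, 1]]))
```

Each of the five candidate models would be trained on nine folds and scored on the held-out fold, and the ten resulting hit ratios averaged for the comparison reported above.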

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. AI technologies have already shown abilities equal to or better than humans in many fields, including image and speech recognition. Because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, services, and education, many efforts have been made to identify current technology trends and analyze their development directions. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and the technologies and services that utilize them have increased rapidly as a result; this has been confirmed as one of the major reasons for the fast development of AI technologies. The spread of the technology also owes much to open source software, developed by major global companies, that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI, developed through the online collaboration of many parties. We searched and collected a list of major AI-related projects created on GitHub between 2000 and July 2018, and examined the development trends of major technologies in detail by applying text mining techniques to the topic information that characterizes the collected projects and technical fields. The analysis showed that fewer than 100 such projects were created per year until 2013, increasing to 229 projects in 2014 and 597 in 2015, and then rising rapidly to 2,559 AI-related open source projects in 2016.
The number of projects initiated in 2017 was 14,213, almost four times the total number of projects created from 2009 to 2016 (3,555), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. Natural language processing remained at the top in all years, implying that such OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequently appearing topics. After 2016, however, programming languages other than Python disappeared from the top ten; in their place, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which are used in various fields, also appeared frequently. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics appeared at the top of the list, although they had not been at the top from 2009 to 2012, indicating that OSS was being developed in the medical field to utilize AI technology. Moreover, although computer vision was in the appearance-frequency top 10 from 2013 to 2015, it was not in the degree-centrality top 10. Otherwise, the topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changing only slightly.
The trend of technology development was examined using the appearance frequency of topics and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic had low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have shown high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the topics mentioned above. Based on these results, it is possible to identify the fields in which AI technologies are actively developed, and the results of this study can be used as a baseline dataset for more empirical analysis of future converging technology trends.
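The two indicators used throughout this analysis, topic appearance frequency and degree centrality on a topic co-occurrence network, can be sketched as follows. The project topic lists are hypothetical GitHub topics, not the collected dataset:

```python
from collections import Counter
from itertools import combinations

def topic_frequency(projects):
    """Appearance frequency: how many projects each topic tags."""
    return Counter(t for topics in projects for t in set(topics))

def build_cooccurrence(projects):
    """Undirected network: two topics are linked if they tag the same project."""
    adj = {}
    for topics in projects:
        for a, b in combinations(sorted(set(topics)), 2):
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    return adj

def degree_centrality(adj):
    """Degree normalized by the maximum possible degree, n - 1."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

# Hypothetical project topic lists
projects = [
    ["machine-learning", "tensorflow", "python"],
    ["machine-learning", "deep-learning"],
    ["deep-learning", "tensorflow"],
]
freq = topic_frequency(projects)
adj = build_cooccurrence(projects)
dc = degree_centrality(adj)
print(freq["machine-learning"])   # → 2
print(dc["machine-learning"])     # → 1.0 (linked to all other topics)
```

Ranking topics by `freq` and by `dc` per phase, and comparing the two rankings, mirrors the frequency-versus-centrality comparison reported above.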

Policy Direction for The Farmland Sizing Suitable to Regional Trait (지역특성을 반영한 영농규모화사업의 발전방향-충남지역을 중심으로-)

  • Shim, Jae-Sung
    • The Journal of Natural Sciences
    • /
    • v.14 no.1
    • /
    • pp.83-121
    • /
    • 2004
  • This study was carried out to examine how solid the production foundation of rice in Chung-Nam Province is and, if it is not, to probe alternative measures through the sizing of farms specializing in rice, which would be a pivot of rice-industry-oriented policy. The results obtained can be summarized as follows: 1. Rice production in Chung-Nam Province is the highest in Korea, and its paddy field area is the second largest, implying that rice production in the province is likely to be strongly affected by global market conditions. The farms specializing in rice, which form the core group of rice farming, account for 7.7 percent of all farm households in Korea. The average field area and the financial support provided to farm households by the government had a noticeable effect on improving the farm-size program. 2. The farm-size program in Chung-Nam Province, carried out from 1980 to 2002, increased the cultivated paddy field area to 19,484 hectares. The program also promoted the buying and selling of farmland: in 1995-2002, farmland transactions involved 6,431 households and 16,517 hectares. Meanwhile, long-term letting and hiring of farmland was so active that it covered 6,970 hectares and 7,059 households. However, the farm-exchange-and-unity program did not meet expectations, because retiring farm operators were reluctant to sell their farms. Another cause of delay was the social complications attendant upon the exchange-and-unity operation for scattered farms. Such difficulties have a negative effect on achieving the targets of the farm-size program. 3. The following measures are presented to advance the farm-size program: a.
An occupation-shift project, together with a social security program for retiring and elderly farm operators, should be promptly established, and incentives for promoting the letting-and-hiring work and the farm-exchange-and-unity program should also be set up. b. To establish an effective system of rice production, farm operators should increase the unit-area yield of rice and lower production costs. To do so, a large number of rice production teams equipped with managerial techniques and capabilities need to be organized, together with appropriate facilities, including an information system. This plan should be aligned with the various structural instruments of regional integration based on farm-system building. c. To extend farm size and improve farm management, we have to enlarge individual farm sizes for more efficient management and utilize farm-size grouping methods. In conclusion, the farm-size project in Chung-Nam Province, which has continued since the 1980s, has been satisfactorily achieved. However, many problems remain to be solved before the desired farm-size operation can be attained. The farm-size project is closely related to farm specialization in rice, and thus positive support for farm households, including an integrated program for both retiring farmers and off-farm operators, should be considered to pursue the progressive development of the farm-size program, which is a key means to the successful reinforcement of rice farming in Chung-Nam Province.


In Search of "Excess Competition" (과당경쟁(過當競爭)과 정부규제(政府規制))

  • Nam, Il-chong;Kim, Jong-seok
    • KDI Journal of Economic Policy
    • /
    • v.13 no.4
    • /
    • pp.31-57
    • /
    • 1991
  • Korean firms of all sizes, from virtually every industry, have used and are using the term "excessive competition" to describe the state of their industry and to call for government intervention. Moreover, the Korean government has frequently responded to such calls in ways favorable to the firms, such as controlling entry, curbing capacity investment, or allowing collusion. Despite such interventions' impact on the overall efficiency of the Korean economy as well as on the wealth distribution among diverse groups of economic agents, the term "excessive competition", the basis for the interventions, has so far escaped rigorous scrutiny. The objective of this paper is to clarify the notions of "excessive competition" and the "over-investment" that usually accompanies it, and to examine the circumstances under which they might occur. We first survey the cases where the terms are most widely used and then examine those cases to determine whether competition is indeed excessive and, if so, what causes it. Our main concern is the case in which firms must make investment decisions involving large sunk costs while facing uncertain demand. To analyze this case, we develop a two-period model of capacity precommitment and the ensuing competition. In the first period, oligopolistic firms make capacity investments that are irreversible; demand is uncertain and only its distribution is known, so firms must invest under uncertainty. In the second period, demand is realized, and the firms compete in quantities under the realized demand and their capacity constraints. In this setting, we find that there is no over-investment ex ante and no excessive competition ex post.
As measured by the information available in period 1, the expected return from a firm's investment is non-negative, overall industry capacity does not exceed the socially optimal level, and competition in the second period yields an outcome that gives each operating firm a non-negative second-period profit. Thus, neither "excessive competition" nor "over-investment" is possible. This result generally holds if there is no externality and the industry is not a natural monopoly. We also extend this result by examining a model in which the government is an active participant in the game with a well-defined preference. Analysis of this model shows that over-investment arises if the government cannot credibly precommit itself to non-intervention when ex post idle capacity occurs for socio-political reasons. Firms invest in capacities that exceed socially optimal levels in this case because they correctly expect that the government will find it optimal to intervene once over-investment and the ensuing financial problems for the firms occur. Such planned over-investment and the ensuing government intervention are generic problems under the current system, and they are expected to recur in many industries, causing a significant loss of welfare in the long run. As a remedy, we recommend a non-intervention policy by the government that creates and utilizes uncertainty. Based on an argument essentially the same as that of Kreps and Wilson in the context of a chain-store game, we show that maintaining a consistent non-intervention policy will deter planned over-investment by firms in the long run. We believe that the results obtained in this paper have a direct bearing on public policies relating to many industries, including the petrochemical industry that is currently at the center of heated debate.
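A stripped-down numerical version of the ex-ante argument can be checked directly. The sketch below is a simplification for illustration, not the authors' exact model: two symmetric firms sink capacity at unit cost c before the demand intercept a is realized, capacity is fully used in period 2, and inverse demand is P = a - (k1 + k2). The first-order condition on expected profit gives the symmetric equilibrium capacity k* = (E[a] - c)/3, whose equilibrium expected profit equals k*^2 and is therefore non-negative, consistent with the "no over-investment ex ante" result.

```python
# Stylized two-period capacity-precommitment game (an illustrative
# simplification, not the paper's exact model). All parameter values
# below are made up for the example.
probs = [0.5, 0.5]          # probabilities of the demand states
intercepts = [6.0, 12.0]    # demand intercept a in the low / high state
c = 3.0                     # unit sunk cost of capacity, paid in period 1

# Expected demand intercept as seen in period 1
Ea = sum(p * a for p, a in zip(probs, intercepts))

# Symmetric equilibrium capacity from the first-order condition
# E[a] - 2*k1 - k2 - c = 0 with k1 = k2 = k*
k_star = (Ea - c) / 3

def expected_profit(k1, k2):
    # Expected period-2 revenue across demand states minus the sunk cost,
    # assuming capacity is fully sold at price a - k1 - k2
    return sum(p * (a - k1 - k2) * k1 for p, a in zip(probs, intercepts)) - c * k1

eq_profit = expected_profit(k_star, k_star)
print(k_star, eq_profit)  # 2.0 4.0 -- equals k_star**2, hence non-negative
```

Deviating from k* against an opponent playing k* can only lower expected profit, so the non-negative equilibrium payoff is indeed the best the firm can do ex ante.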


Evaluation of Disaster Resilience Scorecard for the UN International Safety City Certification of Incheon Metropolitan City (인천시 UN 국제안전도시 인증을 위한 재난 복원력 스코어카드 평가)

  • Kim, Yong-Moon;Lee, Tae-Shik
    • Journal of Korean Society of Disaster and Security
    • /
    • v.13 no.1
    • /
    • pp.59-75
    • /
    • 2020
  • This is a case study applying UNDRR's Disaster Resilience Scorecard, an evaluation tool required for Incheon Metropolitan City to be certified as an international safe city. We present an example in which the results derived from this scorecard contributed to the Incheon Metropolitan City disaster reduction plan. Of course, the Disaster Resilience Scorecard cannot provide a way to improve resilience against every disaster the city faces. Rather, it identifies weaknesses in the city's resilience and proposes solutions to reduce the city's disaster risk, helping practitioners recognize the disaster risks that Incheon Metropolitan City faces. In addition, the solutions recommended by UNDRR were suggested to provide resilience in areas vulnerable to disasters. It was confirmed that this process can contribute to improving the disaster resilience of Incheon Metropolitan City. Since 2010, UNDRR has been promoting the Making Cities Resilient (MCR) campaign for climate-change- and disaster-resilient cities to cities all over the world in order to reduce urban disasters, and by applying the disaster reduction guidelines adopted by UNDRR, governments, local governments, and neighboring cities are encouraged to collaborate. As a result of this study, Incheon Metropolitan City scored 4 or higher (4.3~5.0) on 5 of the 10 essentials, which were evaluated as fields of strong resilience: 1. Organize for disaster resilience and prepare for implementation; 4. Pursue resilient urban development and design; 5. Safeguard natural buffers to enhance the protection provided by natural ecosystems; 9. Ensure effective disaster preparedness and response; 10. Expedite recovery and build back better. In the other five fields, scores below 4 (3.20~3.85) were obtained, and these were evaluated as fields of weak resilience: 2.
Identify, understand, and use current and future risk scenarios; 3. Strengthen financial capacity for resilience; 6. Strengthen institutional capacity for resilience; 7. Understand and strengthen societal capacity for resilience; 8. Increase infrastructure resilience. In addition, through this study, the risk factors faced by Incheon Metropolitan City were identified and prioritized, and resilience improvement measures to minimize disaster risks, safety-based urban development plans, inventories of available disaster reduction resources, and integrated disaster countermeasures were prepared.
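The strong/weak split reported above follows a simple threshold rule on the per-essential scores, which can be sketched as below. Only the group membership (essentials 1, 4, 5, 9, 10 strong; 2, 3, 6, 7, 8 weak), the 4.0 cutoff, and the score ranges (4.3~5.0 and 3.20~3.85) come from the study; the individual per-essential values here are illustrative placeholders within those ranges.

```python
THRESHOLD = 4.0  # score at or above which an essential counts as strong

# Illustrative scores for the 10 UNDRR essentials; values are placeholders
# chosen within the ranges the study reports, not the actual evaluations
scores = {1: 4.3, 2: 3.20, 3: 3.40, 4: 4.5, 5: 4.7,
          6: 3.50, 7: 3.60, 8: 3.85, 9: 4.8, 10: 5.0}

strong = sorted(e for e, s in scores.items() if s >= THRESHOLD)
weak = sorted(e for e, s in scores.items() if s < THRESHOLD)

print("strong resilience fields:", strong)  # [1, 4, 5, 9, 10]
print("weak resilience fields:", weak)      # [2, 3, 6, 7, 8]
```

The weak fields are the ones the study prioritizes when proposing resilience improvement measures for the city's disaster reduction plan.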