• Title/Summary/Keyword: Trade terms

Search Results: 782

The Effect of Foreign Direct Investment on Corporate Financial Performances: Focused on Comparison between Korean SMEs and Large Enterprises (해외직접투자가 기업의 재무성과에 미치는 영향: 한국의 중소기업과 대기업 비교를 중심으로)

  • Maeng, Seon Bae;Kim, Soon Choul
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.6 / pp.11-26 / 2023
  • This study empirically analyzes the effect of Korean companies' FDI (foreign direct investment) on their financial performance, divided into profitability, stability, growth, and activity, and compares the distinct outcomes of SMEs (small and medium-sized enterprises) and large enterprises, whose corporate attributes differ from each other. As research subjects, the study selected FDI companies from the directory of overseas-expanded companies of KOTRA (Korea Trade-Investment Promotion Agency) and used data from 409 companies (136 SMEs and 273 large enterprises) with complete financial data for the first five years after the initial investment, drawn from financial data covering 1990 to 2021. The results can be summarized as follows. In profitability, FDI had positive effects on ROA (return on assets) and ROS (return on sales) for SMEs, but negative effects for large enterprises. In stability, FDI had no statistically significant effect for SMEs, while it had a significantly negative effect on LEV (debt-to-equity ratio) for large enterprises. In growth, FDI had a significantly negative effect on AGR (asset growth) for SMEs but showed no significant results for large enterprises. In activity, FDI showed no statistical significance for SMEs, while it had positive effects on ATR (asset turnover ratio) and FATA (fixed asset turnover ratio) for large enterprises. In conclusion, when making FDI, SMEs and large enterprises show different financial performance in terms of profitability, stability, growth, and activity.
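The abstract does not name the estimation method, so the following is only a rough sketch of the kind of group-wise comparison it describes: each financial ratio is regressed on a post-FDI indicator separately for SMEs and large enterprises. The file name, column names, and control variables are hypothetical.

```python
# Hypothetical sketch: regress each financial ratio on a post-FDI indicator,
# separately for SMEs and large enterprises. Column names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fdi_panel.csv")   # hypothetical firm-year panel, 1990-2021

ratios = ["ROA", "ROS", "LEV", "AGR", "ATR", "FATA"]
for group, sub in df.groupby("firm_size"):            # "SME" vs "Large"
    for ratio in ratios:
        # post_fdi = 1 for the five years after the initial investment
        model = smf.ols(f"{ratio} ~ post_fdi + firm_age + log_assets", data=sub).fit()
        print(group, ratio,
              round(model.params["post_fdi"], 4),
              round(model.pvalues["post_fdi"], 4))
```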


Evaluation of the operational efficiency of major coastal ports in China based on the PCA-DEA model (PCA-DEA 모델을 기반으로 한 중국 주요연안 항만의 운영 효율성 평가)

  • Haiqing Zhang;Hyangsook Lee
    • Journal of Korea Port Economic Association / v.40 no.1 / pp.87-118 / 2024
  • Coastal ports play an essential role in the development of a country and its cities. Port efficiency is an important factor affecting port trade, and its importance for port performance has been recognized in previous literature. DEA (Data Envelopment Analysis) and SFA (Stochastic Frontier Analysis) are widely used in this field of research, but both methods are limited in how input and output variables can be selected. In addition, the literature on Chinese coastal ports mainly focuses on port clusters in local areas, lacking a holistic approach and, in general, up-to-date data. To fill this gap, this paper introduces a model combining principal component analysis and data envelopment analysis to analyze the operational efficiency of the top 17 Chinese coastal ports by throughput, based on the most recent data available in 2021. Container throughput is identified as the output variable, and 13 secondary indicators are selected as input variables from four primary indicators: land, capital, labor, and infrastructure. Four principal components were extracted from the 13 secondary indicators using PCA. After that, DEA (BCC) and DEA (CCR) were used to analyze the 17 ports; five of them (Shanghai, Ningbo-Zhoushan, Guangzhou, Xiamen, and Dongguan) were DEA-efficient, and the remaining 12 were not. Finally, improvement directions for each port are derived and brief suggestions are made. This paper provides reference value for the development and construction of coastal ports in China.
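A minimal sketch of the PCA-DEA pipeline described above, assuming an input-oriented CCR model solved by linear programming; the random data, the positive shift applied to the PCA scores, and all variable names are placeholders rather than the paper's actual specification.

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def ccr_input_efficiency(X, Y):
    """Input-oriented CCR efficiency for each DMU.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs); values must be positive."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # decision variables: [theta, lambda_1 .. lambda_n]
        c = np.zeros(n + 1); c[0] = 1.0                    # minimize theta
        A_ub, b_ub = [], []
        for i in range(m):                                 # sum_j lambda_j x_ij <= theta * x_io
            A_ub.append(np.concatenate(([-X[o, i]], X[:, i]))); b_ub.append(0.0)
        for r in range(s):                                 # sum_j lambda_j y_rj >= y_ro
            A_ub.append(np.concatenate(([0.0], -Y[:, r]))); b_ub.append(-Y[o, r])
        bounds = [(0, None)] * (n + 1)
        res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# 13 raw input indicators for 17 ports (placeholder random data).
rng = np.random.default_rng(0)
inputs_raw = rng.uniform(1, 100, size=(17, 13))
throughput = rng.uniform(100, 1000, size=(17, 1))          # output: container throughput

# PCA step: compress 13 indicators into 4 components, then shift scores
# to be strictly positive before feeding them into DEA (a common convention).
scores = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(inputs_raw))
scores = scores - scores.min(axis=0) + 1.0

print(ccr_input_efficiency(scores, throughput).round(3))   # 1.0 marks DEA-efficient ports
```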

An Empirical Study on Solidarity of Korean Unionists and Its Determinants : Focusing on Economic Interests, Worker Identification and Empathy (정규직 노동자의 연대의식과 결정요인에 관한 실증적 연구: 경제적 이해관계, 동일시, 공감을 중심으로)

  • Nam, Kyuseung;Shin, Eunjong
    • Korean Journal of Labor Studies / v.24 no.3 / pp.143-178 / 2018
  • This study empirically examines Korean unionists' solidarity using a survey of 476 full-time workers employed at unionized workplaces. It also asks which determinants affect unionists' willingness to be united with contingent workers. Korean unionism faces its biggest challenge, namely a crisis in worker solidarity. Although prior literature has noted this crisis, it lacks a solid investigation of individual workers' perception of solidarity, which may play a key role in building worker solidarity in the union movement. This study first examines three sources of solidarity, drawing on historical and theoretical approaches to modern solidarity: economic interests, worker identification, and empathy, which provide the empirical framework for the study. The empirical evidence shows dynamic aspects of how full-timers perceive solidarity with non-regular workers across these three dimensions. First, full-time unionists show little willingness to be united with contingent workers in terms of economic solidarity. In addition, the KCTU (Korean Confederation of Trade Unions), despite its social-reformative orientation, has little influence on increasing its members' orientation toward solidarity. Second, full-time unionists are more willing to identify themselves with non-regular workers as members of the labor class, and KCTU membership is positively associated with this willingness to identify with contingent workers. Third, the unionists nevertheless show little empathy toward non-regular workers, in contrast to their willingness toward worker identification, and no causal relationship is found between the KCTU and its members' empathy for others.

An Empirical Study on Perceived Value and Continuous Intention to Use of Smart Phone, and the Moderating Effect of Personal Innovativeness (스마트폰의 지각된 가치와 지속적 사용의도, 그리고 개인 혁신성의 조절효과)

  • Han, Joonhyoung;Kang, Sungbae;Moon, Taesoo
    • Asia pacific journal of information systems / v.23 no.4 / pp.53-84 / 2013
  • With the rapid development of ICT (information and communications technology), new services created by the convergence of mobile networks and application technology began to appear. Today, the smart phone, with its ICT convergence capabilities, is exceedingly popular and very useful as a new tool for developing business opportunities. Previous studies based on the Technology Acceptance Model (TAM) suggested critical factors that should be considered for acquiring new customers and retaining existing users in the smart phone market; however, they were limited to technology acceptance rather than a value-based approach. Prior studies on customers' adoption of electronic utilities such as smart phones showed that antecedents such as perceived benefit and perceived sacrifice can explain the causality between what is perceived and what is acquired across diverse contexts. This research therefore conceptualizes perceived value as a trade-off between perceived benefit and perceived sacrifice and investigates perceived value to understand users' continuous intention to use smart phones. The purpose of this study is to investigate the structured relationship between the benefits (quality, usefulness, playfulness) and sacrifices (technicality, cost, security risk) perceived by smart phone users, perceived value, and continuous intention to use. In addition, the study analyzes the differences between two subgroups of smart phone users defined by their degree of personal innovativeness, which helps explain the moderating effect between how perceptions are formed and continuous intention to use. A survey was conducted through e-mail, direct mail, and interviews with smart phone users, and the hypotheses were tested on 330 responses. First, among the three benefit factors, perceived usefulness had the strongest positive impact on perceived value, followed by perceived playfulness and perceived quality. Second, among the three sacrifice factors, perceived cost had a significantly negative impact on perceived value, whereas technicality and security risk had no significant impact. Perceived value, in turn, had a significant direct impact on continuous intention to use. In this regard, marketing managers of smart phone companies should pay more attention to improving the task efficiency and performance of smart phones, including their rate systems. Additionally, to test the moderating effect of personal innovativeness, multi-group analysis was conducted by the users' degree of personal innovativeness. In the high-innovativeness group, perceived usefulness had the strongest positive influence on perceived value; in the low-innovativeness group, perceived playfulness was the strongest positive factor. The result for the high-innovativeness group suggests that innovators and early adopters can cope with higher levels of cost and risk and expect to develop more positive intentions toward higher performance through the use of an innovation.
Hedonic behavior in the low-innovativeness group, by contrast, aims to provide self-fulfilling value to users, as opposed to the utilitarian perspective, which aims to provide instrumental value. With regard to perceived sacrifice, both groups showed a negative impact on perceived value overall, although the high-innovativeness group showed a smaller negative impact than the low-innovativeness group across all factors. In both groups, perceived cost had the strongest negative influence on perceived value. In the high-innovativeness group, perceived technicality was a positive factor influencing perceived value, whereas in the low-innovativeness group, perceived security risk was the second strongest negative factor. Unlike previous studies, this study focuses on the factors influencing continuous intention to use smart phones, rather than initial purchase and adoption. First, perceived value, which was used to identify users' adoption behavior, mediates between perceived benefit, perceived sacrifice, and continuous intention to use. Second, perceived usefulness has the strongest positive influence on perceived value, while perceived cost has a significant negative influence. Third, perceived value, as in prior studies, has a strong positive influence on continuous intention to use. Fourth, in the multi-group analysis by degree of personal innovativeness, perceived usefulness is the strongest positive influence on perceived value in the high-innovativeness group, whereas perceived playfulness is the strongest in the low-innovativeness group. This shows that early adopters intend to adopt the smart phone as a tool to make their work useful, while market followers intend to adopt it as a tool to make their time enjoyable. In terms of marketing strategy, smart phone marketing managers should pay more attention to identifying their customers' lifetime value by the phase of smart phone adoption, as well as to understanding their behavioral intention to accept risk and uncertainty positively. The primary academic contribution of this study is to employ the VAM (Value-based Adoption Model) as a conceptual foundation, in contrast to the TAM widely used in previous studies. VAM is more useful than TAM for understanding continuous intention to use a new IT utility, such as the smart phone, adopted at the individual level. Perceived value dominantly influences continuous intention to use, and the results justify the research model's treatment of each antecedent of perceived value as a benefit or a sacrifice component. While TAM is widely used for user acceptance of new technology, it is limited in explaining the adoption of a new IT artifact such as the smart phone, because customers' behavioral intention depends on the value they attach to the object. In terms of theoretical approach, this study contributes to the development, design, and marketing of smart phones.
The practical contribution of this study is to suggest useful decision alternatives for formulating marketing strategies aimed at acquiring and retaining long-term smart phone customers. Since potential customers weigh both benefit and sacrifice when evaluating the value of a smart phone, marketing managers have to put more effort into creating customer value with low sacrifice and high benefit so that customers will continue to adopt the smart phone. In particular, this study shows that innovators and early adopters with high innovativeness show higher adoption than market followers with low innovativeness in terms of perceived usefulness and perceived cost. To formulate a marketing strategy for smart phone diffusion, marketing managers have to pay attention not only to their customers' benefit and sacrifice components but also to their customers' lifetime value in adopting smart phones.
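The abstract implies a structural model with a multi-group comparison by personal innovativeness; as a simplified, regression-based stand-in only (the paper presumably uses structural equation modeling), the sketch below estimates the value equation and the intention equation separately for high- and low-innovativeness groups. The survey file and column names are hypothetical.

```python
# Rough regression-based stand-in for the value-based adoption comparison
# described above; column names and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("smartphone_survey.csv")      # hypothetical 330-respondent file
survey["innov_group"] = (survey["innovativeness"] >= survey["innovativeness"].median()
                         ).map({True: "high", False: "low"})

for group, sub in survey.groupby("innov_group"):
    # Benefit/sacrifice components -> perceived value
    value_eq = smf.ols(
        "perceived_value ~ quality + usefulness + playfulness"
        " + technicality + cost + security_risk", data=sub).fit()
    # Perceived value -> continuous intention to use
    intent_eq = smf.ols("continuous_intention ~ perceived_value", data=sub).fit()
    print(group, value_eq.params.round(3).to_dict(),
          intent_eq.params["perceived_value"].round(3))
```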

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on chart graphs rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult and has been computerized far less than users need. In recent years there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although the short-term forecasting power of prices has improved, long-term forecasting power remains limited, so such models are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but this can be fragile in practice, because whether the patterns found are suitable for trading is a separate question. Those studies find a meaningful pattern, locate points that match it, and then measure performance after n days, assuming a purchase at that point in time. Since this approach computes hypothetical revenues, it can diverge considerably from reality. Whereas existing research tries to discover patterns with predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple because each can be distinguished by five turning points. Despite reports that some patterns have price predictability, there have been no performance reports from actual market use. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern-recognition accuracy. In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the pattern with the highest success rate in each group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The evaluation is realistic because performance is measured assuming that both the buy and the sell have been executed. We tested three ways to calculate the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then computes the vertices. In the second, the high-low line zig-zag method, a high price that meets the n-day high-price line is taken as a peak, and a low price that meets the n-day low-price line is taken as a valley. In the third, the swing wave method, a central high price that is higher than the n high prices on its left and right is taken as a peak, and a central low price that is lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests, which suggests that trading after the completion of a pattern is confirmed is more effective than trading while the pattern is still unfinished.
Because the number of cases was far too large to find high-success-rate patterns exhaustively in this simulation, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using walk-forward analysis (WFA), which tests the optimization section and the application section separately, allowing us to respond appropriately to market changes. We optimize at the level of the stock portfolio, because optimizing the variables for each individual stock risks over-optimization; we selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was second best. This shows that patterns need some price volatility in order to take shape, but that more volatility is not always better.
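The swing wave rule that performed best is simple enough to sketch directly: a bar is a peak when its high exceeds the highs of the n bars on either side, and a valley when its low is below the lows of the n bars on either side. The code below is an illustrative implementation under that reading; parameter names, tie handling, and the synthetic data are assumptions, not the authors' code.

```python
import numpy as np

def swing_wave_turning_points(high, low, n=5):
    """Return indices of peaks and valleys under the swing wave rule."""
    peaks, valleys = [], []
    for t in range(n, len(high) - n):
        window_h = np.concatenate([high[t - n:t], high[t + 1:t + n + 1]])
        window_l = np.concatenate([low[t - n:t], low[t + 1:t + n + 1]])
        if high[t] > window_h.max():      # higher than the n highs on each side
            peaks.append(t)
        if low[t] < window_l.min():       # lower than the n lows on each side
            valleys.append(t)
    return peaks, valleys

# Example with synthetic prices
rng = np.random.default_rng(1)
close = 100 + rng.normal(0, 1, 500).cumsum()
high = close + rng.uniform(0, 1, 500)
low = close - rng.uniform(0, 1, 500)
peaks, valleys = swing_wave_turning_points(high, low, n=5)
# Five consecutive turning points form one M/W-type pattern candidate.
```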

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing a great opportunity for the public through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various systems. Nevertheless, there are still few realized business models based on big-data analyses. In this situation, this paper aims to develop a new business model that can be applied to ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare the performance of predictive models including logistic regression, Random Forest, XGBoost, LightGBM, and a DNN (deep neural network). For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. Research on predicting financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis, which is still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy: Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and a logit model; and Kim and Kim (2001) used artificial neural networks for ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on the predicted probability of default for each sample case, not only on the classification accuracy of each model over the entire sample. Most predictive models in this paper classify about 70% of the entire sample correctly; specifically, the LightGBM model shows the highest accuracy of 71.1% and the logit model the lowest at 69%. However, these results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. We therefore also compare classification accuracy after splitting the predicted probability of default into ten equal intervals.
When we examine classification accuracy for each interval, the logit model has the highest accuracy of 100% for the 0-10% band of predicted default probability but a relatively low accuracy of 61.5% for the 90-100% band. On the other hand, Random Forest, XGBoost, LightGBM, and the DNN give more desirable results, with higher accuracy in both the 0-10% and 90-100% bands but lower accuracy around the 50% band. Regarding the distribution of samples across the predicted probability bands, both LightGBM and XGBoost assign a relatively large number of samples to the 0-10% and 90-100% bands. Although the Random Forest model has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost may be more desirable because they classify a large number of cases into the two extreme bands of predicted default probability, even allowing for their relatively lower classification accuracy there. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and the DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. Each predictive model nonetheless has a comparative advantage under different evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, a more comprehensive ensemble that combines multiple classification models with majority voting could be constructed to maximize overall performance.
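As a hedged sketch of the band-by-band evaluation described above, the code below trains two of the listed classifiers and reports classification accuracy within each 10% band of predicted default probability; the dataset, feature names, and the 0.5 decision threshold are assumptions, and XGBoost, LightGBM, and the DNN would be added analogously.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

data = pd.read_csv("guarantee_accidents.csv")          # hypothetical internal dataset
X, y = data.drop(columns="accident"), data["accident"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "logit": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=42),
    # XGBoost, LightGBM, and a DNN would be added the same way.
}

for name, model in models.items():
    proba = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    bins = np.minimum((proba * 10).astype(int), 9)      # 0-10%, ..., 90-100% bands
    correct = ((proba >= 0.5).astype(int) == y_te.to_numpy())
    by_bin = pd.Series(correct).groupby(bins).mean()    # accuracy within each band
    print(name, by_bin.round(3).to_dict())
```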

A New Exploratory Research on Franchisor's Provision of Exclusive Territories (가맹본부의 배타적 영업지역보호에 대한 탐색적 연구)

  • Lim, Young-Kyun;Lee, Su-Dong;Kim, Ju-Young
    • Journal of Distribution Research / v.17 no.1 / pp.37-63 / 2012
  • In franchise business, exclusive sales territory (EST) protection is a very important issue from economic, social, and political points of view. It affects the growth and survival of both franchisor and franchisee and often raises social and political conflicts. When the franchisee is not familiar with the related laws and regulations, the franchisor has a good chance of exploiting this. Exclusive sales territory protection by the manufacturer and distributors (wholesalers or retailers) means a sales-area restriction under which only certain distributors have the right to sell products or services. A distributor that has been granted an exclusive sales territory can protect its own territory but may be prohibited from entering other regions. Even though exclusive sales territories are a critical problem in franchise business, there is little rigorous research on their rationale, results, evaluation, and future direction based on empirical data. This paper tries to address the problem not only in terms of logical and nomological validity but also through empirical validation. In pursuing the empirical analysis, we take into account the difficulties of real data collection and of the statistical techniques involved, and we use disclosure-document data collected by the Korea Fair Trade Commission instead of the conventional survey method, which is often criticized for measurement error. Existing theories about exclusive sales territories can be summarized into two groups, as shown in the table below. The first concerns the effectiveness of exclusive sales territories from both the franchisor's and the franchisee's point of view; the outcome can be positive for franchisors but negative for franchisees, and positive in terms of sales but negative in terms of profit, so variables and viewpoints must be set properly. The second concerns the motives for protecting exclusive sales territories, which can be classified into four groups: industry characteristics, franchise-system characteristics, the capability to maintain exclusive sales territories, and strategic decisions. Within these four groups there are more specific variables and theories, as below. Based on these theories, we develop nine hypotheses, briefly shown with the results in the last table below. To validate the hypotheses, data were collected from the FTC's publicly available homepage. The sample consists of 1,896 franchisors and contains about three years of operating data, from 2006 to 2008. Within the sample, 627 have an exclusive sales territory protection policy, and franchisors with such a policy are not evenly distributed over the 19 representative industries. Additional data were collected from other government sources such as Statistics Korea, and data from various secondary sources were combined to create meaningful variables, as shown in the table below. All variables are dichotomized by mean or median split if they are not inherently dichotomous, since each hypothesis is composed of multiple variables and there is no single statistical technique that incorporates all of these conditions to test the hypotheses. This paper uses a simple chi-square test because the hypotheses and theories are built on quite specific conditions such as industry type, economic conditions, company history, and various strategic purposes.
It is almost impossible to find samples that satisfy all of these conditions, and they cannot be manipulated in experimental settings; more advanced statistical techniques work well on clean data without exogenous variables but not on complex real data. The chi-square test is applied by grouping samples into four cells using two criteria: whether they protect exclusive sales territories or not, and whether they satisfy the conditions of each hypothesis. The test examines whether the proportion of franchisors that satisfy the conditions and protect exclusive sales territories significantly exceeds the proportion that satisfy the conditions but do not protect. In fact, the chi-square test is equivalent to a Poisson regression, which allows more flexible application. As a result, only three hypotheses are accepted. When attitude toward risk is high, so that the royalty fee is determined according to sales performance, EST protection leads to poor results, as expected. When the franchisor protects ESTs in order to recruit franchisees easily, EST protection leads to better results. And when EST protection is intended to improve the efficiency of the franchise system as a whole, it shows better performance: high efficiency is achieved because the EST prevents free riding by franchisees who would exploit others' marketing efforts, encourages proper investment, and distributes franchisees evenly across regions. The other hypotheses are not supported by the significance tests. Exclusive sales territories should be protected for proper motives and administered for mutual benefit. Legal restrictions driven by a government agency such as the FTC could be misused and cause misunderstandings, so more careful monitoring of real practices and more rigorous studies by both academics and practitioners are needed.
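Since the analysis reduces to 2x2 chi-square tests, a minimal sketch is easy to give; the cell counts below are invented for illustration (only the totals of 627 protecting and 1,269 non-protecting franchisors come from the abstract).

```python
import numpy as np
from scipy.stats import chi2_contingency

#                 condition met   condition not met
table = np.array([[210,  417],     # protects exclusive territory (627 total)
                  [310,  959]])    # does not protect             (1,269 total)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
# A small p-value indicates the share of protecting franchisors differs
# between those that meet the hypothesis condition and those that do not.
```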


A study on the establishment and regional structure of Seoul metropolitan region (서울대도시권역의 설정과 지역구조에 관한 연구)

  • Lee, Hee-Yeon;Song, Jong-Hong
    • Journal of the Korean Geographical Society / v.30 no.1 / pp.35-56 / 1995
  • During the last two decades, Korea has achieved remarkable economic growth, and in the process the nation has become urbanized and industrialized. But it has also encountered widening regional disparity, housing shortages in larger cities, transportation congestion, environmental pollution, and many other problems. Rapid urbanization and continuous migration toward Seoul since the late 1960s have been major concerns of the government, which has sought ways to moderate the population increase in Seoul. The measures, which include new-town development near Seoul and the dispersion of higher education and other administrative and living facilities outside Seoul, have greatly expanded the spatial influence of the city. The Seoul metropolitan region has evolved into the most powerful center of political and economic space. Generally, within a metropolitan region there exists a growing mutual interdependence, economically as well as socially, between a central city and its surrounding area. The Seoul metropolitan region manifests itself not only as a coherent system of urbanized regions but also as an integral part of the daily urban system: the surrounding Gyunggi province and the city of Seoul have become closely linked both economically and functionally, constituting a true functional urban system. This study is primarily undertaken to delineate the sphere of influence of Seoul in 1990. As of 1985, the Seoul metropolitan region had been delineated by a study performed by the Korea Research Institute for Human Settlements. Afterward, rapid metropolitanization and a dramatic increase in mobility, supported by a wider transportation system across the Capital region, greatly expanded the spatial influence of Seoul, so this study examines the expansion of the Seoul metropolitan region during 1985-90. To delineate the region, indices of urbanization and functional linkage are selected. Variables measuring the level of urbanization include agricultural structure, population characteristics, manufacturing and service industries, and cultural aspects such as newspaper circulation, the rate of car ownership, and piped water supply. Variables measuring functional linkage include commuting, shopping patterns, centralized services such as medical facilities, and trade in agricultural products. A standardization method and factor analysis are employed in the delineation. According to the results, 2 cities, 8 Eups, and 46 Myuns are included in the Seoul metropolitan region in 1990. Comparing the 1990 delineation with that of 1985 reveals a distinctive pattern of expansion along the main transportation routes such as Seoul-Suweon, Seoul-Gwangju, and Seoul-Incheon. In 1990, all of Gyunggi province, except a few Myuns located in its northern and northwestern parts, is included in the Seoul metropolitan region. Furthermore, this study analyzes the regional structure of the Seoul metropolitan region according to the functional characteristics of each city and Gun. Variables included in this analysis are the new residential function, manufacturing function, service function, education and information function, public facility function, and agricultural function.
Factor analysis and cluster analysis are employed for the regionalization. The Seoul metropolitan region is subdivided into four subregions reflecting different functional specializations. The first group is the specialized region of newly formed residential function; the second, of manufacturing function; the third, of service function. The fourth group is little specialized in terms of manufacturing, service, or residential function, but it has development potential as the Seoul metropolitan region continues to grow. The Seoul metropolitan region accounted for 43% of the national population in 1990, despite covering only 11.8% of the national land area. Although the region enjoys important agglomeration economies, it also bears huge social costs in the form of transportation congestion, housing shortages, rapid increases in land value, environmental pollution, and so on. Efficient metropolitan planning is vital for promoting Seoul's economic development and providing a high-quality living environment at low cost. In light of the results of this study, the outer ring of the Seoul metropolitan region, especially the northeastern part, is underdeveloped compared with the overdeveloped southwestern area. Guidelines need to be developed for implementing growth control and management plans that induce more balanced development across the whole Seoul metropolitan region.
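As a small illustration of the factor-analysis-plus-cluster-analysis regionalization described above, the sketch below compresses hypothetical functional indicators into factor scores and groups the 56 delineated units (2 cities, 8 Eups, 46 Myuns) into four clusters; the indicator matrix and all parameter choices are placeholders, not the study's data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# rows: the 56 delineated cities/Eups/Myuns; columns: functional indicators
# (residential, manufacturing, service, education/information, public, agricultural)
indicators = rng.uniform(0, 1, size=(56, 6))

scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(
    StandardScaler().fit_transform(indicators))
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
print(labels)   # four functional subregions, analogous to the study's grouping
```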


Bundled Discounting of Healthcare Services and Restraint of Competition (의료서비스의 결합판매와 경쟁제한성의 판단 - Cascade Health 사건을 중심으로 -)

  • Jeong, Jae Hun
    • The Korean Society of Law and Medicine / v.20 no.3 / pp.175-209 / 2019
  • Bundled discounting by dominant undertakings is problematic in terms of restraint of competition. Bundled discounts generally benefit not only buyers but also sellers: it usually costs a firm less to sell multiple products together, and bundled discounts always provide some immediate consumer benefit in the form of lower prices. Therefore, competition authorities and courts should not be too quick to condemn bundled discounts and should apply a neutral and objective standard in bundled discounting cases. The Cascade Health v. PeaceHealth decision starts from this premise. It pointed out that a dominant undertaking can exclude rivals through bundled discounting without pricing its products below cost when rivals do not sell as broad a range of product lines; bundled discounting may therefore have an anticompetitive impact by excluding less diversified but more efficient producers. The decision did not adopt the LePage's standard, which does not require the court to consider whether the competitor was at least as efficient a producer as the bundled discounter. Instead, taking a cost-based approach, it held that the exclusionary element cannot be satisfied unless the discounts result in prices that are below an appropriate measure of the defendant's costs. Adopting a discount attribution standard, the court held that the full amount of the discounts should be allocated to the competitive products. Because the seller can easily ascertain its own prices and costs of production and calculate whether its discounting practices exclude competitors, it is the dominant undertaking's costs, not the competitor's, that should be considered when applying the discount attribution standard. The case concerns the bundled discounting of multiple healthcare services by a dominant undertaking in the healthcare market. Under the Korean healthcare system and public health insurance system, price competition exists primarily in non-covered (non-medical care benefit) services, because public health insurance in Korea is combined with the compulsory medical care institution system. The cases that the Monopoly Regulation and Fair Trade Law deals with, such as cartels and the abuse of monopoly power, also arise mainly in non-covered services. A dominant undertaking's exclusionary bundled discounting in Korean healthcare markets may occur in contracts between the dominant undertaking and private insurance companies with regard to non-covered services.
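The discount attribution standard the court adopted lends itself to a short worked example: the entire bundle discount is attributed to the competitive product, and the resulting price is compared with the defendant's incremental cost. The numbers below are invented for illustration.

```python
# Worked sketch of the discount attribution test described above
# (all prices and costs are invented for illustration).
def discount_attribution_test(competitive_price, bundle_discount, incremental_cost):
    """Attribute the entire bundle discount to the competitive product and
    compare the resulting price with the defendant's incremental cost."""
    attributed_price = competitive_price - bundle_discount
    return attributed_price < incremental_cost   # True -> potentially exclusionary

# Example: competitive service listed at 100, total bundle discount of 30,
# defendant's incremental cost of the competitive service is 75.
print(discount_attribution_test(100, 30, 75))    # True: 70 < 75 fails the cost test
```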

Overlay Multicast Network for IPTV Service using Bandwidth Adaptive Distributed Streaming Scheme (대역폭 적응형 분산 스트리밍 기법을 이용한 IPTV 서비스용 오버레이 멀티캐스트 네트워크)

  • Park, Eun-Yong;Liu, Jing;Han, Sun-Young;Kim, Chin-Chol;Kang, Sang-Ug
    • Journal of KIISE:Computing Practices and Letters / v.16 no.12 / pp.1141-1153 / 2010
  • This paper introduces ONLIS (Overlay Multicast Network for Live IPTV Service), a novel overlay multicast network optimized to deliver live broadcast IPTV streams. We analyzed the IPTV reference model of the ITU-T IPTV standardization group in terms of the network and of stream delivery from the source networks to the customer networks. Based on this analysis, we divide the IPTV reference model into three networks: the source network, the core network, and the access networks. ION (Infrastructure-based Overlay Multicast Network) is employed for the source and core networks, and PON (P2P-based Overlay Multicast Network) is applied to the access networks. ION provides efficient, reliable, and stable stream distribution with negligible delay, while PON provides bandwidth-efficient and cost-effective streaming with a small, tolerable delay. The most important challenge in live P2P streaming is to reduce end-to-end delay without sacrificing stream quality; in conventional live P2P streaming systems there is always a trade-off between delay and stream quality. To solve this problem, we propose two approaches. First, we propose DSPT (Distributed Streaming P2P Tree), which takes advantage of combinational overlay multicasting. In DSPT, a peer does not rely entirely on an SP (supplying peer) to get the live stream but cooperates with its local ANR (Access Network Relay) to reduce delay and improve stream quality. When the RP detects a bandwidth drop at the SP, it immediately switches the connection from the SP to the ANR and continues to receive the stream without any packet loss. Second, DSPT uses a distributed P2P streaming technique that lets a peer share the stream to the extent of its available bandwidth: if the RP cannot receive the whole stream from the SP owing to a lack of uploading bandwidth, it receives only a partial stream from the SP and the rest from the ANR. The proposed distributed P2P streaming improves P2P networking efficiency.
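A rough sketch of the bandwidth-adaptive split described above: the receiving peer takes as much of the stream as the supplying peer's upload bandwidth allows and requests the remainder from its access network relay, falling back entirely to the ANR when the SP's share drops too low. Function names, units, and the fallback threshold are assumptions, not ONLIS code.

```python
# Hedged sketch of a bandwidth-adaptive source split between SP and ANR.
def plan_stream_sources(stream_rate_kbps, sp_upload_kbps, min_sp_share=0.1):
    """Return (from_sp, from_anr) in kbps for one receiving peer."""
    from_sp = min(stream_rate_kbps, sp_upload_kbps)
    if from_sp < stream_rate_kbps * min_sp_share:
        # SP bandwidth dropped too far: switch entirely to the ANR.
        return 0, stream_rate_kbps
    return from_sp, stream_rate_kbps - from_sp

print(plan_stream_sources(2000, 1200))   # (1200, 800): partial stream from each source
print(plan_stream_sources(2000, 100))    # (0, 2000): fall back to the ANR only
```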