• Title/Summary/Keyword: Market power

Antecedents of Manufacturer's Private Label Program Engagement: A Focus on Strategic Market Management Perspective (제조업체 Private Labels 도입의 선행요인 : 전략적 시장관리 관점을 중심으로)

  • Lim, Chae-Un; Yi, Ho-Taek
    • Journal of Distribution Research, v.17 no.1, pp.65-86, 2012
  • The 20th century was the era of manufacturer brands, which built high brand equity with consumers. Consumers moved from the generic products of inconsistent quality produced by local factories in the 19th century to branded products from global manufacturers, and manufacturer brands reached consumers through distributors and retailers. Retailers were relatively small compared to their largest suppliers. However, sometime in the 1970s things began to change slowly as retailers started to develop their own national chains and began international expansion, and consolidation of the retail industry from mom-and-pop stores to global players was well under way (Kumar and Steenkamp 2007, p.2). In South Korea, the bulking up of retailers that started in the mid-1990s has changed the balance of power between manufacturers and retailers. Retailer private labels, also referred to as own labels, store brands, distributor-owned private labels, home brands, or own-label brands, have been performing strongly in every local market (Bushman 1993; De Wulf et al. 2005). Private labels now account for one out of every five items sold every day in U.S. supermarkets, drug chains, and mass merchandisers (Kumar and Steenkamp 2007), and their market share in Western Europe is even larger (Euromonitor 2007). In the UK, the grocery market share of private labels grew from 39% of sales in 2008 to 41% in 2010 (Marian 2010). Planet Retail (2007, p.1) recently concluded that "[PLs] are set for accelerated growth, with the majority of the world's leading grocers increasing their own label penetration." Private labels have gained wide attention in both the academic literature and the popular business press, and there is a growing body of academic research from the perspectives of manufacturers and retailers. Empirical research on private labels has mainly studied the factors explaining private label market shares across product categories and/or retail chains (Dhar and Hoch 1997; Hoch and Banerji 1993), the factors influencing the private label proneness of consumers (Baltas and Doyle 1998; Burton et al. 1998; Richardson et al. 1996), and how brand manufacturers react to PLs (Dunne and Narasimhan 1999; Hoch 1996; Quelch and Harding 1996; Verhoef et al. 2000). Nevertheless, empirical research on the factors influencing production from a manufacturer-retailer perspective is anecdotal rather than theory-based. The objective of this paper is to bridge the gap between these two types of research and to explore the factors that influence a manufacturer's private label production, based on two competing theories: the S-C-P (Structure-Conduct-Performance) paradigm and resource-based theory. To do so, the authors conducted in-depth interviews with marketing managers, reviewed the retail press and research, and present a conceptual framework that integrates the major determinants of private label production. From a manufacturer's perspective, supplying private labels often starts on a strategic basis: when a manufacturer engages in private labels, it does not have to spend on advertising or retailer promotions, or maintain a dedicated sales force. Moreover, a manufacturer with weak marketing capabilities can make use of the retailer's marketing capability to produce private labels, lessening its marketing cost and increasing its profit margin. Figure 1 presents the theoretical framework, based on a strategic market management perspective that integrates the S-C-P paradigm and resource-based theory. The model includes one mediating variable, marketing capabilities, and one moderating variable, competitive intensity. A manufacturer's national brand reputation, marketing investment, and product portfolio are hypothesized to positively affect its marketing capabilities; marketing capabilities, in turn, are hypothesized to negatively affect private label production. A moderating effect of competitive intensity is hypothesized on the relationship between marketing capabilities and private label production. To verify the proposed research model and hypotheses, data were collected from 192 manufacturers (212 responses) producing private labels in South Korea. Cronbach's alpha tests, exploratory/confirmatory factor analysis, and correlation analysis were employed to validate the measures, and the hypotheses were tested using structural equation modeling; all hypotheses were supported. Findings indicate that a manufacturer's private label production is strongly related to its marketing capabilities, and marketing capabilities are in turn directly connected with the three strategic factors (marketing investment, national brand reputation, and product portfolio). The relationship between marketing capabilities and private label production is moderated by competitive intensity. In conclusion, this research may be the first study to investigate the reasons manufacturers engage in private labels based on two competing theoretical views, the S-C-P paradigm and resource-based theory. The private label phenomenon has received growing attention from marketing scholars. In many industries private labels represent formidable competition to manufacturer brands, and manufacturers face a dilemma in selling to, as well as competing with, their retailers. The current study suggests the key factors manufacturers should consider when engaging in private label production.
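
A quick sketch of the scale-reliability step mentioned above (Cronbach's alpha, computed before the factor-analysis and SEM stages) is given below in Python. The item responses and the four-item "marketing capabilities" scale are hypothetical stand-ins, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 212 respondents (as in the study), 4 items.
rng = np.random.default_rng(0)
latent = rng.normal(size=(212, 1))  # a shared latent trait drives all items
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(212, 4))), 1, 5)

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")  # >= 0.7 is the usual threshold
```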

Impact Resistance Testing of NK55 Ophthalmic Lenses in Domestic Market (국내 유통 NK55 재질 안경렌즈의 내충격 시험 평가)

  • Park, Mijung; Jeon, Inchul; Hwang, Kwang Hoon; Byun, Woongjin; Kim, So Ra
    • Journal of Korean Ophthalmic Optics Society, v.16 no.3, pp.229-235, 2011
  • Purpose: The present study evaluated the safety of ophthalmic lenses in the domestic market, since eyeglass wearers can be injured in accidents involving damaged lenses. Methods: A total of 160 ophthalmic lenses (NK55, $n_d = 1.56$) with refractive powers of -3D, -6D, +3D, and +6D, manufactured by 4 companies in the domestic market, were evaluated using the drop ball test. In accordance with the FDA standard, a steel ball (~16 g) was dropped freely onto each lens from a height of 127 cm, and the lens surfaces were then inspected. Results: The center thicknesses of the NK55 lenses from the 4 companies differed somewhat even though the lenses had the same refractive index and powers. All +3D and +6D convex lenses were evaluated as safe, since no damage such as cracking or breakage was found on their surfaces after drop ball testing. However, noticeable breakage appeared on the surfaces of concave lenses with relatively thin centers; in particular, 59 (73.8%) of the 80 concave lenses with refractive powers of -3D and -6D were classified as unacceptable under the FDA standard. Conclusions: These results suggest that accidents involving damaged ophthalmic lenses should be considered, along with visual acuity correction, design, and price, when customers purchase eyeglasses. An enforcement regulation such as drop ball testing of uncut ophthalmic lenses could therefore be introduced to guarantee the safety of ophthalmic lenses in the domestic market.
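
The FDA drop ball parameters quoted above pin down a fixed impact energy, which is easy to verify with a back-of-the-envelope computation (assuming a mass of exactly 16 g, g = 9.81 m/s², and no air resistance):

```python
import math

m = 0.016  # ball mass in kg (~16 g, per the FDA standard)
g = 9.81   # gravitational acceleration, m/s^2
h = 1.27   # drop height in m (127 cm)

energy = m * g * h            # potential energy released at impact: E = m*g*h
speed = math.sqrt(2 * g * h)  # impact speed, neglecting air resistance

print(f"impact energy: {energy:.3f} J")   # ~0.199 J
print(f"impact speed:  {speed:.2f} m/s")  # ~4.99 m/s
```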

Analysis of the relationship between interest rate spreads and stock returns by industry (금리 스프레드와 산업별 주식 수익률 관계 분석)

  • Kim, Kyuhyeong; Park, Jinsoo; Suh, Jihae
    • Journal of Intelligence and Information Systems, v.28 no.3, pp.105-117, 2022
  • This study analyzes the relationship between stock returns and the interest rate spread, the difference between long-term and short-term interest rates, through polynomial linear regression analysis. Existing research has concentrated on business forecasting through the interest rate spread, focusing on the US market. Previous studies verified the interest rate spread as a leading indicator of the business cycle by varying the maturities of the long-term/short-term rates and analyzing the degree of lead. After the 7th revision of Korea's composite business indices in 2006, the interest rate spread was included among the components of the composite leading indicator, and it is still used today. Nevertheless, there is little research on industry-level stock returns and the interest rate spread in the domestic stock market. Therefore, this study analyzed the stock returns of each industry and the interest rate spread in the Korean stock market. The study selected the long-term/short-term rate pair with the highest causality through regression analysis, and then examined the correlations for each lead period and industry. To overcome the limitations of simple linear regression, polynomial linear regression was used, which raised explanatory power. As a result, the strongest causality was found when the spread between the yield on three-year unguaranteed corporate bonds (AA-) and the call rate was used, with a lead of six months. In addition, analyzing the stock returns of each industry showed that the relationship between this interest rate spread and the returns of the automobile industry was the closest. This study is significant in that it verifies the relationship between the interest rate spread, the business cycle, and stock returns in Korea. Even though the interest rate spread alone is of limited use for forecasting stock prices, it can work as a strong factor when properly combined with other variables.
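
As a rough illustration of the modeling step described above, a minimal polynomial regression of industry returns on a lagged interest rate spread might look like the sketch below. The data are synthetic, and the six-month lead, the spread definition, and the degree-2 polynomial are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(42)

# Synthetic monthly spread: 3-year corporate bond (AA-) yield minus call rate.
n = 120
spread = rng.normal(0.8, 0.5, size=n)  # percentage points
lead = 6
X = spread[:-lead].reshape(-1, 1)      # spread leads returns by 6 months
# Synthetic industry returns driven nonlinearly by the lagged spread, plus noise.
y = 0.5 * X.ravel() + 0.3 * X.ravel() ** 2 + rng.normal(0, 1.0, size=len(X))

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
print(f"in-sample R^2 with 6-month lead: {r2_score(y, model.predict(X)):.3f}")
```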

Estimating the Compliance Cost of the Power and Energy Sector in Korea during the First Phase of the Emissions Trading Scheme (발전·에너지업종의 배출권거래제 제1차 계획기간 배출권 구입비용 추정과 전력시장 반응)

  • Lee, Sanglim; Lee, Jiwoong; Lee, Yoon
    • Environmental and Resource Economics Review, v.25 no.3, pp.377-401, 2016
  • This study analyzes how much cost the power generation and energy sector in South Korea has to bear due to the introduction of the emissions trading scheme during 2016-2017. To this end, data from the seventh Basic Plan for Long-term Electricity Supply and Demand are fed into an electricity market simulation model called M-Core, and the model's forecast of carbon dioxide emissions is compared with the free emission allowances in the first national emissions permit allocation plan. The main results are as follows. Carbon dioxide emissions are estimated to be lower than the free allowances in 2016 but higher in 2017. When the allowance price varies from ₩10,000/ton to ₩20,000/ton, the cost of purchasing allowances ranges from ₩70 billion to ₩140 billion. Under the assumption that the CO2 cost is incorporated into variable cost, a reversal of the merit order between coal and LNG generation takes place when the allowance price exceeds ₩80,000/ton.
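
The purchase-cost figures quoted above follow from a simple identity: cost = (emissions - free allowances) × allowance price. A minimal sketch follows; the 7-million-ton shortfall is back-calculated from the abstract's own numbers (₩70 billion at ₩10,000/ton), not taken from the paper.

```python
# Compliance cost = (CO2 emissions - free allowances) * allowance price.
# The 7 Mt shortfall is inferred from the abstract, not quoted from the paper.
shortfall_tons = 7_000_000

for price_won_per_ton in (10_000, 20_000):
    cost_won = shortfall_tons * price_won_per_ton
    print(f"price ₩{price_won_per_ton:,}/ton -> cost ₩{cost_won / 1e9:.0f} billion")
# price ₩10,000/ton -> cost ₩70 billion
# price ₩20,000/ton -> cost ₩140 billion
```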

An Optimal Location of Superconducting Fault Current Limiter in Distribution Network with Distributed Generation Using an Index of Distribution Reliability Sensitivity (신뢰도 민감도 지수를 이용한 복합배전계통 내 초전도한류기의 최적 위치에 관한 연구)

  • Kim, Sung-Yul; Kim, Wook-Won; Bae, In-Su; Kim, Jin-O
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers, v.24 no.6, pp.52-59, 2010
  • As the electric power demand of customers constantly increases, more bulk power facilities need to be installed in the network. Moreover, with the development of renewable energy, high-efficiency facilities, and the deregulated electricity market, the amount of distributed resources in the distribution network is increasing considerably. The distribution network has also become more complex, moving to a mesh topology to improve reliability and to increase the flexibility and agility of network operation. These changes increase fault currents, which may therefore exceed circuit breaker capacity. To solve this problem, replacing breakers, changing the system's operation mode, and adjusting transformer parameters can be considered; the SFCL (Superconducting Fault Current Limiter) is one of the most promising power apparatus for this purpose. This paper proposes a methodology for the optimal location of an SFCL, defined by considering both the decrement of fault current by component type and the increment of reliability by customer type according to the SFCL's location in a distribution network connected with DG (Distributed Generation). Case studies applying the proposed method to a radial network and a mesh network, respectively, show that the method is feasible.
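
The optimization described above can be read as a search over candidate branches, scoring each placement with a reliability-sensitivity index. The toy sketch below shows only that structure; the candidate list, the weights, and the index function are hypothetical stand-ins, not the paper's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    branch: str
    fault_current_reduction_ka: float  # decrement of fault current at this location
    reliability_gain: float            # e.g., reduction in expected outage cost

def sensitivity_index(c: Candidate, w_fault: float = 0.6, w_rel: float = 0.4) -> float:
    # Hypothetical weighted index; the paper's actual index is defined differently.
    return w_fault * c.fault_current_reduction_ka + w_rel * c.reliability_gain

candidates = [
    Candidate("feeder-1", 3.2, 0.8),
    Candidate("feeder-2", 2.5, 1.9),
    Candidate("tie-line", 4.1, 0.3),
]

# Exhaustively score every candidate location and keep the best one.
best = max(candidates, key=sensitivity_index)
print(f"best SFCL location: {best.branch} (index {sensitivity_index(best):.2f})")
```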

The Determinants of Wage Premium (임금(賃金)프리미엄의 결정요인(決定要因))

  • Rhee, Chong-hoon
    • KDI Journal of Economic Policy, v.14 no.4, pp.79-106, 1992
  • This study analyzes the determinants of the wage premium, defined as the excess of the actual wage rate over the opportunity wage, for the average worker in a Korean bargaining unit. The average wage premium of a firm is decomposed into quasi-rent per worker and a rent-sharing rule. Per capita quasi-rent, representing a firm's ability to pay, is defined as the difference between sales revenue and the opportunity cost of mobile factors, divided by the number of employees. The rent-sharing rule, a measure of workers' bargaining power, is defined as the average wage premium divided by per capita quasi-rent. Empirical results show that differences in the wage premium among Korean bargaining units are explained much better by differences in quasi-rent than by differences in bargaining power. Also, comparing the results of 1986 with those of 1988 shows that the wage settlement mechanism in 1988 was not very different from that of 1986, in spite of the drastic change in the industrial relations system in 1987; the change may simply have yielded higher opportunity wages by raising the bargaining power of workers overall. The tendency of the Korean labor market in 1988 toward a dual structure of high and low wage premium sectors is due not to widening differences in bargaining power across firms, but to unions reducing wage differences among workers within an enterprise by pursuing a more equal distribution of the total wage premium. Hence, policies for reducing wage differentials across firms should focus on rent-regulating industrial policies, e.g., eliminating monopoly rents by deregulation.
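
The decomposition defined above is just two ratios; a minimal numeric sketch (all figures hypothetical) shows how the premium splits into an ability-to-pay component and a bargaining-power component.

```python
# Wage premium decomposition: premium = quasi_rent_per_worker * sharing_rule.
# All numbers are hypothetical, in millions of won per worker per year.

revenue = 5_000.0             # firm sales revenue
mobile_factor_cost = 3_800.0  # opportunity cost of mobile factors
employees = 100

quasi_rent = (revenue - mobile_factor_cost) / employees  # ability to pay: 12.0

actual_wage = 30.0
opportunity_wage = 27.0
premium = actual_wage - opportunity_wage                 # wage premium: 3.0

sharing_rule = premium / quasi_rent                      # bargaining power: 0.25
print(f"quasi-rent/worker = {quasi_rent}, premium = {premium}, "
      f"sharing rule = {sharing_rule:.2f}")
```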

The Public Television Crisis and the Mutation of the Public Sphere in Neo-liberalism (신자유주의 시대 공영방송의 위기와 공공영역의 변화)

  • Lee, Sang-Hoon
    • Korean journal of communication and information, v.57, pp.250-266, 2012
  • In the neo-liberal era, the exchange value of the market and of merchandise, grounded in the individual as consumer, dominates the public sphere, and the power of capital, together with the technological revolution, encroaches on it. At the same time, the public sphere is largely represented by the media sphere, which is more and more subordinate to a governmental authority whose political power has been privatized. The private use of reason in the public sphere is thus carried out at the structural level. How can we call a space in which the private use of reason is generalized and dominant a public sphere? We therefore explore the possibility of a new public sphere as a space of universality in which the public use of reason can be exercised freely and without limits. If we imagine the proletarian public sphere, in which diverse private interests coexist, as a new public sphere capable of being constructed, the following questions arise. What is the character of the proletarian public sphere in modern society? Can a public community be formed and realized in such a space? In what way would the proletarian public sphere have the characteristics of a public sphere? What is the nature of the community the proletarian public sphere would create, and what is its emancipatory force? Power is no longer stable and static; rather, it is reconstructed and reorganized in the diverse settings of everyday life. That is why we put on the agenda the proletarian public sphere as an alternative public space, which would be a site of diverse hegemonic representation. We are now witnessing the beginning of these changes.

Using Mechanical Learning Analysis of Determinants of Housing Sales and Establishment of Forecasting Model (기계학습을 활용한 주택매도 결정요인 분석 및 예측모델 구축)

  • Kim, Eun-mi; Kim, Sang-Bong; Cho, Eun-seo
    • Journal of Cadastre & Land InformatiX, v.50 no.1, pp.181-200, 2020
  • This study used an OLS model to estimate the determinants affecting the holding period of a home, and then compared the predictive power of SVM, Decision Tree, Random Forest, Gradient Boosting, XGBoost, and LightGBM models. It differs from preceding studies in using a Stacking model, one of the ensemble methods, on top of the base models to build a more predictive model for identifying the volume of housing transactions in the housing market. The OLS analysis showed that sales profit, housing price, the number of household members, and the type of residential housing (detached house, apartment) affected the holding period. Comparing the machine learning models against OLS by RMSE showed that the machine learning models had higher predictive power. The predictive power was then compared across the machine learning models after rebuilding the data with the influential variables, and Random Forest showed the best predictive power. In addition, the most predictive models (Random Forest, Decision Tree, Gradient Boosting, and XGBoost) were used as individual base models, and Stacking models were constructed using Linear, Ridge, and Lasso regression as meta models. The analysis showed that the RMSE value of the Ridge meta model was the lowest, at 0.5181, yielding the most predictive model.
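
The stacking setup described above maps directly onto scikit-learn's StackingRegressor. A minimal sketch follows, on synthetic data; XGBoost is omitted to keep the sketch dependency-free, and the hyperparameters are placeholders, not the study's.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for the housing data (features -> holding period).
X, y = make_regression(n_samples=1000, n_features=6, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

base_models = [
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("dt", DecisionTreeRegressor(max_depth=8, random_state=0)),
    ("gb", GradientBoostingRegressor(random_state=0)),
]
# Ridge as the meta model, as in the study's best configuration.
stack = StackingRegressor(estimators=base_models, final_estimator=Ridge(alpha=1.0))
stack.fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, stack.predict(X_te)) ** 0.5
print(f"stacked RMSE: {rmse:.3f}")
```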

Human Rights-based Approach toward International Development Cooperation and Canada's ODA Accountability Act (국제개발협력의 인권적 접근과 캐나다 ODA책무법)

  • Soh, Hyuk-Sang
    • International Area Studies Review, v.15 no.2, pp.403-425, 2011
  • Canada became the first OECD/DAC member state to legislate an ODA Accountability Act, in 2008; the Act prescribes that Canadian ODA policies meet the guidelines and norms of international human rights, at a time when other OECD/DAC member states were merely emphasizing the importance of abiding by international human rights norms. Paying attention to the Canadian case, this article critically examines the structural environment and process through which this Act was passed. It argues that the legislation of the ODA Accountability Act is closely related to Canada's international position as a middle power and to its diplomatic strategies. By taking up human security issues as a niche, Canada demonstrated the characteristics of a middle-power state, emphasizing the human rights agenda as a new foreign policy strategy. Reflection on the negative outcomes of the neoliberal aid policy of structural adjustment, together with the promotion of new aid norms in the post-Cold War era, also helped foster an enabling environment for value-oriented aid policies and the enactment of the Accountability Act. Civil society organizations also played a catalytic role in constructing Canada's state identity as a human rights defender.

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul; Kim, Jaeseong; Choi, Sangok
    • Journal of Intelligence and Information Systems, v.26 no.2, pp.105-129, 2020
  • This study uses corporate data from 2012 to 2018, the period in which K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility, based on the Merton model. This resolves the data imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary (non-defaulting) companies. Because learning was conducted using only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. The model can therefore provide stable default risk assessment to unlisted companies, such as small and medium-sized companies and startups, whose default risk is hard to determine with traditional credit rating models. Although prediction of corporate default risk using machine learning has been studied actively in recent years, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required, given that an entity's default risk information is used very widely in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation method. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and changes in future market conditions. This study reduced the bias of individual models by utilizing stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information, and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To benchmark the predictive power of the stacking ensemble model, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs were constructed between the stacking ensemble model and each individual model. Because Shapiro-Wilk normality tests showed that no pair followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts making up each pair differed significantly. The analysis showed that the stacking ensemble model's forecasts differed significantly from those of the MLP and CNN models. In addition, this study provides a methodology that allows existing credit rating agencies to adopt machine learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble technique proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical adoption by overcoming and improving on the limitations of existing machine learning-based models.
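
The Merton-model step of backing a default probability out of market values rather than observed default events has a standard closed form. A minimal sketch follows, with illustrative inputs; as a shortcut it takes asset value and asset volatility as given, whereas the full Merton approach solves for them from equity market capitalization and equity volatility.

```python
import math
from scipy.stats import norm

def merton_pd(asset_value: float, debt: float, mu: float,
              sigma: float, horizon_years: float = 1.0) -> float:
    """Merton-model default probability: P(asset value < debt at the horizon)."""
    dd = (math.log(asset_value / debt) + (mu - 0.5 * sigma**2) * horizon_years) \
         / (sigma * math.sqrt(horizon_years))
    return norm.cdf(-dd)  # probability that assets fall below the default point

# Illustrative firm: assets 150bn won, debt 100bn, 5% drift, 30% asset volatility.
# (In practice, asset value and volatility are solved iteratively from equity
# market cap and stock price volatility; here they are supplied directly.)
pd = merton_pd(asset_value=150.0, debt=100.0, mu=0.05, sigma=0.30)
print(f"1-year default probability: {pd:.2%}")  # ~8.6%
```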