• Title/Summary/Keyword: Investment Model


Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.123-138
    • /
    • 2017
  • Since the stock market is driven by the expectations of traders, studies have been conducted to predict stock price movements through the analysis of various sources of text data. To predict stock price movements, research has examined not only the relationship between text data and stock price fluctuations, but also trading strategies based on news articles and social media responses. Studies that predict stock price movements have applied classification algorithms to a term-document matrix constructed in the same way as in other text mining approaches. Because documents contain many words, it is better to select the words that contribute most when building a term-document matrix. Based on word frequency, words that appear too rarely or carry too little importance are removed. Words can also be selected according to their contribution by measuring the degree to which each word helps to correctly classify a document. The basic idea of constructing a term-document matrix is to collect all the documents to be analyzed and to select and use the words that influence classification. In this study, we analyze the documents for each individual stock and select words that are irrelevant to all categories as neutral words. We then extract the words surrounding each selected neutral word and use them to generate the term-document matrix. The neutral-word approach starts from the idea that stock movements are only weakly related to the presence of the neutral words themselves, whereas the words surrounding a neutral word are more likely to affect stock price movements. The generated term-document matrix is then fed into an algorithm that classifies stock price fluctuations. In this study, we first removed stop words and selected neutral words for each stock, and we excluded from the selected words those that also appear in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. We split the news articles into three months of training data and applied the remaining one month of articles to the model to predict the next day's stock price movements. We used SVM, Boosting, and Random Forest for building models and predicting stock price movements. The stock market was open for a total of 80 days during the four months (2016/02/01 ~ 2016/05/31); we used the first 60 days as the training set and the remaining 20 days as the test set. The neutral-word-based algorithm proposed in this study showed better classification performance than the word selection method based on sparsity. This study predicted stock price movements by collecting and analyzing news articles on the top 10 stocks by market capitalization. We used a term-document-matrix-based classification model to estimate stock price fluctuations and compared the performance of the existing sparsity-based word extraction method with the suggested method of removing words from the term-document matrix. The suggested method differs from the usual word extraction method in that it uses not only the news articles for the corresponding stock but also news about other stocks to determine which words to extract. In other words, it removes not only the words that appear in both rising and falling cases but also the words that commonly appear in news about other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy.
The limitations of this study are that stock price prediction was framed as classifying rises and falls, and that the experiment was conducted only on the top ten stocks, which do not represent the entire stock market. In addition, it is difficult to demonstrate investment performance because stock price fluctuations and rates of return may differ. Therefore, further research is needed that uses more stocks and predicts returns through trading simulation. A rough illustration of the pipeline described above is sketched below.
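The sketch shows one way the neutral-word idea could be wired together in Python with scikit-learn: treat words that appear in both rising and falling news, and in other stocks' news, as category-neutral, keep only the words surrounding those neutral words for the term-document matrix, and compare SVM, Boosting, and Random Forest classifiers. The function names, the window size, and the assumed data layout are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of the neutral-word pipeline described above.
# Assumed data layout: one whitespace-tokenized news document per trading day,
# labeled 1 if the stock rose the next day and 0 otherwise.
from collections import Counter

from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC


def neutral_words(docs_up, docs_down, docs_other_stocks, top_n=50):
    """Words frequent in both rising and falling news and in other stocks' news
    are treated as category-neutral (an assumption about the selection rule)."""
    up = Counter(w for d in docs_up for w in d.split())
    down = Counter(w for d in docs_down for w in d.split())
    other = Counter(w for d in docs_other_stocks for w in d.split())
    common = set(up) & set(down) & set(other)
    return set(sorted(common, key=lambda w: up[w] + down[w], reverse=True)[:top_n])


def context_only(doc, neutrals, window=3):
    """Keep only the words within `window` tokens of a neutral word;
    these surrounding words feed the term-document matrix."""
    toks = doc.split()
    kept = []
    for i, tok in enumerate(toks):
        if tok in neutrals:
            kept.extend(toks[max(0, i - window): i + window + 1])
    return " ".join(w for w in kept if w not in neutrals)


def fit_and_score(train_docs, y_train, test_docs, y_test, neutrals):
    """Build the term-document matrix from context words and compare the three
    classifiers mentioned in the abstract (SVM, Boosting, Random Forest)."""
    vec = CountVectorizer()
    X_train = vec.fit_transform(context_only(d, neutrals) for d in train_docs)
    X_test = vec.transform(context_only(d, neutrals) for d in test_docs)
    scores = {}
    for name, clf in [("SVM", SVC()),
                      ("Boosting", GradientBoostingClassifier()),
                      ("RandomForest", RandomForestClassifier())]:
        clf.fit(X_train, y_train)
        scores[name] = clf.score(X_test, y_test)  # classification accuracy
    return scores
```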

A Study on Industries's Leading at the Stock Market in Korea - Gradual Diffusion of Information and Cross-Asset Return Predictability- (산업의 주식시장 선행성에 관한 실증분석 - 자산간 수익률 예측 가능성 -)

  • Kim Jong-Kwon
    • Proceedings of the Safety Management and Science Conference
    • /
    • 2004.11a
    • /
    • pp.355-380
    • /
    • 2004
  • I test the hypothesis that the gradual diffusion of information across asset markets leads to cross-asset return predictability in Korea. Using thirty-six industry portfolios and the broad market index as the test assets, I establish several key results. First, a number of industries such as semiconductors, electronics, metals, and petroleum lead the stock market by up to one month. In contrast, the market, which is widely followed, leads only a few industries. Importantly, an industry's ability to lead the market is correlated with its propensity to forecast various indicators of economic activity, such as industrial production growth. Consistent with the hypothesis, these findings indicate that the market reacts with a delay to information in industry returns about its fundamentals because information diffuses only gradually across asset markets. Traditional theories of asset pricing assume that investors have unlimited information-processing capacity. However, this assumption does not hold for many traders, even the most sophisticated ones. Many economists recognize that investors are better characterized as only boundedly rational (see Shiller (2000), Sims (2001)). Even from casual observation, few traders can pay attention to all sources of information, much less understand their impact on the prices of the assets that they trade. Indeed, a large literature in psychology documents the extent to which attention is a precious cognitive resource (see, e.g., Kahneman (1973), Nisbett and Ross (1980), Fiske and Taylor (1991)). A number of papers have explored the implications of limited information-processing capacity for asset prices; I review this literature in Section II. For instance, Merton (1987) develops a static model of multiple stocks in which investors only have information about a limited number of stocks and only trade those that they have information about. Related models of limited market participation include Brennan (1975) and Allen and Gale (1994). As a result, stocks that are less recognized by investors have a smaller investor base (neglected stocks) and trade at a greater discount because of limited risk sharing. More recently, Hong and Stein (1999) develop a dynamic model of a single asset in which information gradually diffuses across the investing public and investors are unable to perform the rational-expectations trick of extracting information from prices. My hypothesis is that the gradual diffusion of information across asset markets leads to cross-asset return predictability. This hypothesis relies on two key assumptions. The first is that valuable information that originates in one asset reaches investors in other markets only with a lag, i.e., news travels slowly across markets. The second assumption is that, because of limited information-processing capacity, many (though not necessarily all) investors may not pay attention to, or be able to extract, the information from the asset prices of markets that they do not participate in. These two assumptions taken together lead to cross-asset return predictability. My hypothesis appears plausible for a few reasons. To begin with, as pointed out by Merton (1987) and the subsequent literature on segmented markets and limited market participation, few investors trade all assets. Put another way, limited participation is a pervasive feature of financial markets. Indeed, even among equity money managers, there is specialization along industries, such as sector or market-timing funds.
Some reasons for this limited market participation include tax, regulatory, or liquidity constraints. More plausibly, investors have to specialize because they have their hands full trying to understand the markets that they do participate in. The sketch below illustrates the kind of lead-lag test this hypothesis implies.
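The cross-asset predictability test described above is, at its core, a lead-lag regression: do this month's industry portfolio returns forecast next month's market return after controlling for the market's own lag? The sketch below (Python with statsmodels) illustrates that regression; the column names, the monthly horizon, and the use of a single-lag OLS are assumptions for illustration, not the paper's exact specification.

```python
# Illustrative lead-lag predictive regression, assuming a DataFrame of monthly
# portfolio returns with one column per industry plus a market-index column.
import pandas as pd
import statsmodels.api as sm


def industry_leads_market(returns: pd.DataFrame, industry: str, market: str):
    """Regress next month's market return on this month's industry return,
    controlling for this month's market return (own-lag control)."""
    df = pd.DataFrame({
        "mkt_next": returns[market].shift(-1),  # market return at t+1
        "ind_now": returns[industry],           # industry return at t
        "mkt_now": returns[market],             # market return at t
    }).dropna()
    X = sm.add_constant(df[["ind_now", "mkt_now"]])
    res = sm.OLS(df["mkt_next"], X).fit()
    # A significantly positive coefficient on `ind_now` is the sense in which
    # the industry "leads" the market.
    return res.params["ind_now"], res.pvalues["ind_now"]
```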


The Innovation Ecosystem and Implications of the Netherlands. (네덜란드의 혁신클러스터정책과 시사점)

  • Kim, Young-woo
    • Journal of Venture Innovation
    • /
    • v.5 no.1
    • /
    • pp.107-127
    • /
    • 2022
  • Global challenges such as the corona pandemic, climate change, and the war on technology ensure that the question of who develops and controls the technologies of the future will be prominently on the agenda. Development of, and applications in, agrifood, biotech, high-tech, medtech, quantum, AI, and photonics are the basis of the future earning capacity of the Netherlands and contribute to solving societal challenges, close to home and worldwide. For the Netherlands and Europe to obtain a strategic position in the knowledge and innovation chain, and thereby secure autonomy relative to China and the United States, clear choices are needed. Brainport Eindhoven: Building on Philips' knowledge base, an innovative ecosystem has been created in which more than 7,000 companies in High-tech Systems & Materials (HTSM) collaborate on new technologies, future earning potential, and international value chains. Nearly 20,000 private R&D employees work on 5 regional high-end campuses and for companies such as ASML, NXP, DAF, Prodrive Technologies, Lightyear, and many others. Brainport Eindhoven has an internationally leading position in system engineering, semiconductors, micro- and nanoelectronics, AI, integrated photonics, and additive manufacturing. What is developed in Brainport leads to growth of the manufacturing industry far beyond the region, thanks to chain cooperation between large companies and SMEs. South Holland: The South Holland ecosystem includes companies such as KPN, Shell, DSM, and Janssen Pharmaceutical, large and innovative SMEs, and leading educational and knowledge institutions that together invest more than €3.3 billion in R&D. Its cores are formed by the top campuses of Leiden and Delft, which account for more than 40,000 innovative jobs, the port-industrial complex (logistics & energy), the manufacturing clusters in maritime and aerospace, and the horticultural cluster in the Westland. South Holland focuses thematically on key technologies such as biotech, quantum technology, and AI. Twente: The green, technological top region of Twente has a long tradition of triple-helix collaboration. Technological innovations from Twente offer worldwide solutions for major societal issues. Work is in progress on key technologies such as AI, photonics, robotics, and nanotechnology. New technology is applied in sectors such as medtech, the manufacturing industry, agriculture, and circular value chains such as textiles and construction. Start-ups and SMEs are of great importance to Twente for the jobs of tomorrow; they connect Twente's technology with knowledge regions and OEMs at home and abroad. Wageningen in FoodValley: Wageningen Campus is a global agri-food magnet for startups and corporates, supported by the national accelerator StartLife and the student incubator StartHub. With an ambitious 2030 programme, FoodvalleyNL also connects the versatile ecosystem regionally, nationally, and internationally, including through the WEF European food innovation hub. The campus offers guests and its 3,000 private R&D staff an engaging programme of science, innovation, and social dialogue around the challenges in agro production, food processing, biobased/circular economy, climate, and biodiversity. The Netherlands succeeded in industrializing as a logistics country, but it is now striving for sustainable growth by creating an innovative ecosystem through a regional industry-academia research model.
In particular, the Brainport cluster, centered on the high-tech industry, pursues regional innovation and is opening a new horizon for existing industry-academia models. Brainport is a state-of-the-art forward base that leads the innovation ecosystem of Dutch manufacturing. The history of ports in the Netherlands is transforming from a logistics-oriented port, symbolized by Rotterdam, into a "port of digital knowledge" centered on Brainport. On this basis, it can be seen that the industry-academia cluster model, which links the central government's vision of creating an innovative ecosystem with each region's specialized industries, serves as the biggest stepping stone. The Netherlands' innovation policy is expected to become more faithful to its role as Europe's "digital gateway" through regional development centered on the innovation cluster ecosystem and through investment in job creation and new industries.

The Effect of Retailer-Self Image Congruence on Retailer Equity and Repatronage Intention (자아이미지 일치성이 소매점자산과 고객의 재이용의도에 미치는 영향)

  • Han, Sang-Lin;Hong, Sung-Tai;Lee, Seong-Ho
    • Journal of Distribution Research
    • /
    • v.17 no.2
    • /
    • pp.29-62
    • /
    • 2012
  • As the distribution environment changes rapidly and competition in the distribution channel intensifies, the importance of retailer image and retailer equity is increasing as a differentiating competitive advantage. Consumers are not purely functionally oriented; their behavior is significantly affected by symbols such as retailer image, which identify the retailer in the marketplace. That is, consumers do not choose products or retailers only for their material utility but consume the symbolic meaning of those products or retailers as expressed in their self-images. The concept of self-image congruence has been utilized by marketers and researchers as an aid to better understanding how consumers identify themselves with the brands they buy and the retailers they patronize. Although self-image congruity theory has been tested across many product categories, it has not been tested extensively in retailing. Therefore, this study investigates the impact of congruence between retailer image and consumer self-image on retailer equity, comprising retailer awareness, retailer association, perceived retailer quality, and retailer loyalty. The purpose of this study is to find out whether retailer-self image congruence can be a new antecedent of retailer equity. In addition, this study examines how the four retailer equity constructs (retailer awareness, retailer association, perceived retailer quality, and retailer loyalty) affect customers' repatronage intention. Data were gathered by survey and analyzed by structural equation modeling. The sample size was 254. The reliability of all seven dimensions was estimated with Cronbach's alpha, composite reliability values, and average variance extracted (AVE) values. We assessed whether the measurement model supports convergent and discriminant validity through exploratory and confirmatory factor analysis. For each pair of constructs, the square root of the AVE exceeded their correlation, supporting the discriminant validity of the constructs. Hypotheses were tested using AMOS 18.0. As expected, the image congruence hypotheses were supported. The greater the degree of congruence between retailer image and self-image, the more favorable were consumers' retailer evaluations. Both types of retailer-self image congruence (actual self-image congruence and ideal self-image congruence) affected customer-based retailer equity. This result means that retailer-self image congruence is an important cue for customers in estimating retailer equity. In other words, consumers are more likely to prefer products and retail stores whose images are similar to their own self-image. In particular, the effect of ideal self-image congruence on retailer equity was consistently larger than that of actual self-image congruence. This result means that consumers prefer or search for stores whose images are compatible with their perception of their ideal self. In addition, this study revealed that customers' evaluations of customer-based retailer equity affected repatronage intention: all four dimensions (retailer awareness, retailer association, perceived retailer quality, and retailer loyalty) had a positive effect on repatronage intention.
That is, management and investment to improve the congruence between retailer image and consumers' self-image lead to customers' positive evaluation of retailer equity, and this positive customer-based retailer equity can in turn enhance repatronage intention. To conclude, retailer image management is an important part of successful retailer performance management, and retailer-self image congruence is an important antecedent of retailer equity. Therefore, it is important to develop and maintain a retailer image similar to consumers' self-image. Given the pressure to provide increased image congruence, it is not surprising that retailers have made significant investments in enhancing the fit between retailer image and consumer self-image. Enhancing such self-image congruence may allow marketers to target customers who are influenced by image appeals in advertising.
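The measurement checks reported above (Cronbach's alpha for reliability and the square-root-of-AVE comparison for discriminant validity) are standard and easy to reproduce outside AMOS. The sketch below shows both in Python; the data layout (a respondents-by-items table per construct and a Series of AVE values) is an assumption for illustration, and the AMOS structural model itself is not reproduced.

```python
# Hedged sketch of the reliability and discriminant-validity checks above.
import numpy as np
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: respondents x items for a single construct.
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)


def fornell_larcker(ave: pd.Series, construct_scores: pd.DataFrame) -> pd.DataFrame:
    """Discriminant validity holds when sqrt(AVE) on the diagonal exceeds every
    inter-construct correlation in the same row and column."""
    corr = construct_scores.corr()
    table = corr.copy()
    np.fill_diagonal(table.values, np.sqrt(ave[corr.columns]))
    return table
```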


Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.163-176
    • /
    • 2014
  • Social media is becoming the platform for users to communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a lot of research into capturing social phenomena and analyzing the chatter of microblogs; however, measuring television ratings has received little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch. In a similar way, microblog users interact with each other while watching television or movies, or visiting a new place. For measuring TV ratings, some features are significant during certain hours of the day or days of the week, whereas the same features are meaningless during other time periods. Thus, the importance of features can change during the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Therefore, modeling the time-related characteristics of features is key to measuring TV ratings through microblogs. We show that capturing the time-dependency of features is vitally necessary for improving the accuracy of TV ratings measurement. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. There are about 300 thousand posts in our data set. After excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis. The number of tweets reaches its maximum level on the broadcasting day and increases rapidly around the broadcasting time. This result stems from the characteristics of the public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings. This result implies that a simple tweet rate does not reflect satisfaction with or response to the TV programs. Content-based features extracted from the content of tweets have a relatively high correlation with TV ratings. Further, some emoticons or newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We find that there is time-dependency in the correlation of features between the periods before and after broadcasting time. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectation of the program or their disappointment at not being able to watch it. The features that are highly correlated before the broadcast differ from those after broadcasting. This result shows that the relevance of words to TV programs can change according to the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words have their highest correlation before the broadcasting time, whereas 68 words reach their highest correlation after broadcasting. Interestingly, some words that express the impossibility of watching the program show high relevance despite containing a negative meaning.
Understanding the time-dependency of features can be helpful in improving the accuracy of TV ratings measurement. This research provides a basis for estimating the response to, or satisfaction with, broadcast programs using the time-dependency of words in Twitter chatter. More research is needed to refine the methodology for predicting or measuring TV ratings.
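The time-dependency finding above amounts to computing feature-ratings correlations separately for the periods before and after the broadcast. The sketch below illustrates that split for a single word feature; the column names, the fixed broadcast hour, and the simple keyword count are assumptions for illustration rather than the paper's actual feature set.

```python
# Hedged sketch of the before/after-broadcast correlation split described above.
# Assumed inputs: a tweet table with 'date', 'hour', 'text' columns and a
# ratings Series indexed by broadcast date.
import pandas as pd


def timed_correlations(tweets: pd.DataFrame, ratings: pd.Series, word: str,
                       broadcast_hour: int = 22) -> dict:
    tweets = tweets.assign(hit=tweets["text"].str.contains(word).astype(int))
    out = {}
    for label, mask in [("before", tweets["hour"] < broadcast_hour),
                        ("after", tweets["hour"] >= broadcast_hour)]:
        daily = tweets[mask].groupby("date")["hit"].sum()      # daily word frequency
        joined = pd.concat([daily, ratings], axis=1, join="inner")
        out[label] = joined.corr().iloc[0, 1]                  # correlation with ratings
    return out
```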

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and exhibit a great deal of noise. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 daily observations. We used 1,187 days to train the suggested GARCH models and the remaining 300 days as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; only the polynomial kernel function shows exceptionally lower forecasting accuracy. We suggested an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility is higher, buy volatility today; if it is lower, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can now use. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. The winning-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return. MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return. MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return.
The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4% and that of the MLE-based IVTS is +150.2%. The SVR-based GARCH IVTS also shows a higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored in search of better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. The IVTS trading performance is unrealistic since we use historical volatility values as the trading objects. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can give better information to stock market investors.
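To make the comparison above concrete, the sketch below contrasts an MLE-fitted GARCH(1,1) forecast (via the `arch` package) with a simple SVR regression of squared returns on lagged volatility features, as a stand-in for the paper's SVR-based GARCH estimation, and encodes the IVTS entry rule. The feature construction, the 20-day rolling proxy, and the comparison of the forecast against today's volatility are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch: MLE GARCH(1,1) vs. an SVR volatility regression, plus the IVTS rule.
import pandas as pd
from arch import arch_model
from sklearn.svm import SVR


def mle_garch_forecast(returns: pd.Series) -> float:
    """One-step-ahead variance forecast from a GARCH(1,1) fitted by MLE."""
    res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
    return float(res.forecast(horizon=1).variance.iloc[-1, 0])


def svr_vol_forecast(returns: pd.Series, kernel: str = "linear") -> float:
    """Stand-in for the SVR-based estimation: regress squared returns on the
    lagged squared return and a 20-day rolling variance proxy."""
    r2 = returns ** 2
    feats = pd.DataFrame({"lag_r2": r2.shift(1),
                          "lag_vol": r2.rolling(20).mean().shift(1)}).dropna()
    model = SVR(kernel=kernel).fit(feats.values, r2.loc[feats.index].values)
    x_next = [[r2.iloc[-1], r2.rolling(20).mean().iloc[-1]]]  # today's features
    return float(model.predict(x_next)[0])


def ivts_signal(vol_forecast: float, vol_today: float, position: int) -> int:
    """IVTS entry rule from the abstract: buy volatility (+1) if it is forecast
    to rise, sell (-1) if it is forecast to fall, otherwise hold the position."""
    if vol_forecast > vol_today:
        return +1
    if vol_forecast < vol_today:
        return -1
    return position
```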

Effect of Service Convenience on the Relationship Performance in B2B Markets: Mediating Effect of Relationship Factors (B2B 시장에서의 서비스 편의성이 관계성과에 미치는 영향 : 관계적 요인의 매개효과 분석)

  • Han, Sang-Lin;Lee, Seong-Ho
    • Journal of Distribution Research
    • /
    • v.16 no.4
    • /
    • pp.65-93
    • /
    • 2011
  • As the relationship between buyer and seller has become closer and long-term relationships have become more important in B2B markets, the importance of service and service convenience increases alongside that of the product. In homogeneous markets, where service offerings are similar and therefore not a key competitive differentiator, providing greater convenience may enable a competitive advantage. Service convenience, as conceptualized by Berry et al. (2002), is defined as consumers' time and effort perceptions related to buying or using a service. For this reason, B2B customers are interested in how fast the service is provided and how much non-monetary cost, such as time and effort, they can save through service convenience, along with service quality. Therefore, this study investigates the impact of service convenience on relationship factors such as relationship satisfaction, relationship commitment, and relationship performance. The purpose of this study is to find out whether service convenience can be a new antecedent of relationship quality and relationship performance. In addition, this study examines how the five service convenience constructs (decision convenience, access convenience, transaction convenience, benefit convenience, and post-benefit convenience) affect customers' relationship satisfaction, relationship commitment, and relationship performance. Service convenience comprises five fundamental components: decision convenience (the perceived time and effort costs associated with service purchase or use decisions), access convenience (the perceived time and effort costs associated with initiating service delivery), transaction convenience (the perceived time and effort costs associated with finalizing the transaction), benefit convenience (the perceived time and effort costs associated with experiencing the core benefits of the offering), and post-benefit convenience (the perceived time and effort costs associated with reestablishing subsequent contact with the firm). There are no earlier studies of perceived service convenience in the industrial market; the conventional studies of service convenience have usually been conducted in the consumer market or have dealt with convenience aspects of the service process. This service convenience measure developed for the consumer market can be a useful tool for estimating service quality in the B2B market. The conceptualization developed by Berry et al. (2002) reflects a multistage, experiential consumption process in which evaluations of convenience vary at each stage. For this reason, the service convenience measure suits the B2B service environment, which has complex processes and various service types. Especially when B2B service is categorized as sequential stages of service delivery, as in Kumar and Kumar (2004), Berry's measure, which reflects the sequential flow of service delivery, is suitable for establishing B2B service convenience. For this study, data were gathered from respondents who often buy business services and analyzed by structural equation modeling. The sample size is 119. Composite reliability values and average variance extracted values were examined for each variable to establish reliability. We determined whether the measurement model supports convergent validity by CFA, and discriminant validity was assessed by examining the correlation matrix of the constructs.
For each pair of constructs, the square root of the average variance extracted exceeded their correlation, thus supporting the discriminant validity of the constructs. Hypotheses were tested using SmartPLS 2.0; we calculated the PLS path coefficients and applied a bootstrap re-sampling method to test the hypotheses. Among the five service convenience constructs, four (decision convenience, transaction convenience, benefit convenience, and post-benefit convenience) had positive effects on customers' relationship satisfaction, relationship commitment, and relationship performance. This result means that service convenience is an important cue for improving the relationship between buyer and seller. One of the five dimensions, access convenience, does not affect relationship quality or performance, which implies that this dimension is not an important factor in cumulative satisfaction. Cumulative satisfaction can be distinguished from transaction-specific customer satisfaction, which is an immediate post-purchase evaluative judgment or an affective reaction to the most recent transactional experience with the firm. Because access convenience minimizes the physical effort associated with initiating an exchange, its effect on relationship satisfaction, which resembles cumulative satisfaction, may be relatively less important than its effect on transaction-specific satisfaction. Also, B2B firms focus more on service quality, price, benefits, follow-up service, and so on than on convenience of time or place, because it is relatively difficult to change existing transaction partners in the B2B market compared to the consumer market. In addition, this study, using partial least squares methods, reveals that customers' satisfaction with and commitment toward the relationship mediate the link between service convenience and relationship performance. The results show that management and investment to improve service convenience lead to customers' positive relationship satisfaction, and this positive relationship satisfaction can in turn enhance relationship commitment and relationship performance. To conclude, service convenience management is an important part of successful relationship performance management, and service convenience is an important antecedent of buyer-seller relationship outcomes such as relationship commitment and relationship performance. Therefore, enhancing service convenience is important for improving relationship performance, even though competitive service development and service quality improvement remain important. Given the pressure to provide increased convenience, it is not surprising that organizations have made significant investments in enhancing the convenience aspect of their product and service offerings.
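The bootstrap re-sampling step mentioned above can be illustrated without SmartPLS: the sketch below bootstraps a single structural path, approximated here by an OLS slope between construct scores, to obtain a t-value and a confidence interval. This is a simplified stand-in for the PLS path estimates, and the column names are illustrative assumptions.

```python
# Hedged sketch of a bootstrap significance test for one structural path,
# using an OLS slope between construct scores as a stand-in for the PLS path.
import numpy as np
import pandas as pd


def bootstrap_path(df: pd.DataFrame, x: str, y: str,
                   n_boot: int = 5000, seed: int = 0) -> dict:
    rng = np.random.default_rng(seed)
    est = np.polyfit(df[x], df[y], 1)[0]          # slope of y on x (the "path")
    coefs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(df), len(df))   # resample respondents with replacement
        sample = df.iloc[idx]
        coefs[b] = np.polyfit(sample[x], sample[y], 1)[0]
    return {"path": est,
            "t": est / coefs.std(ddof=1),
            "ci95": (np.percentile(coefs, 2.5), np.percentile(coefs, 97.5))}
```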


Research Framework for International Franchising (국제프랜차이징 연구요소 및 연구방향)

  • Kim, Ju-Young;Lim, Young-Kyun;Shim, Jae-Duck
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.4
    • /
    • pp.61-118
    • /
    • 2008
  • The purpose of this research is to construct a research framework for international franchising based on the existing literature and to identify the research components in that framework. A franchise can be defined as a management arrangement that allows a franchisee to use various management assets of the franchisor in order to make or sell a product or service. It can be divided into product distribution franchising, which is designed to sell products, and business format franchising, which is designed for running the business itself in whatever form it takes. International franchising can be defined as a way for a franchisor to internationalize into a foreign country by providing its business format or package to franchisees in the host country. International franchising has grown fast over the last four decades, but academic research on it is quite limited. Especially in Korea, research on international franchising has mostly taken the form of single-case studies or survey-based empirical studies grounded in domestic franchise theory. Therefore, this paper reviews the existing literature on international franchising, provides a research framework, and aims to stimulate new research in this field. The research components of international franchising include motives and environmental factors for the decision to expand into international franchising, entry modes and development plans, contracts and management strategy, and various performance measures from different perspectives. First, the motives for international franchising include fee collection from franchisees; international franchising also provides an easier way to expand into a foreign country. Other motives include increasing total sales volume, occupying a better strategic position, acquiring quality resources, and improving efficiency. Environmental factors that facilitate international franchising encompass economic conditions, trends, and legal or political factors in the host and/or home countries. In addition, the control power and risk management capability of the franchisor play a critical role in a successful franchising contract. The final decision to enter a foreign country via franchising is determined by numerous factors such as the history, size, growth, competitiveness, management system, bonding capability, and industry characteristics of the franchisor. After deciding to enter a foreign country, the franchisor needs to set the entry mode of international franchising. Within the contractual mode, there are master franchising, area development franchising, licensing, direct franchising, and joint ventures. Theories of entry mode selection draw on concepts of efficiency, the knowledge-based approach, the competence-based approach, agency theory, and governance cost. The next step after the entry decision is operating strategy. Operating strategy starts with selecting a target country and a target city for franchising. In order to find and screen targets, the franchisor needs to collect information about candidates. Critical information includes brand patents, commercial laws, regulations, market conditions, country risk, and industry analysis. After selecting a target city in the target country, the franchisor needs to select a franchisee, in other words, a partner. The first important criteria for selecting partners are financial credibility and capability and possession of real estate. Cultural similarity and knowledge about the franchisor and/or the home country are also recognized as critical criteria.
The most important element in operating strategy is the legal documentation between franchisor and franchisee across the home and host countries. The terms and conditions in legal documents give objective information about the characteristics of the franchising agreement for academic research. Legal documents include definitions of terminology, territory and exclusivity, term of agreement, initial fee, continuing fees, clearing currency, and rights regarding sub-franchising. Legal documents may also contain terms about softer elements such as training programs and operation manuals, as well as harder elements such as the competent court and terms of expiration. The next element in operating strategy concerns the product and service. Especially for business format franchising, product/service deliverables, benefit communicators, system identifiers (architectural features), and format facilitators are listed as product/service strategic elements. Another important decision on product/service is standardization versus customization. The rationale behind standardization is cost reduction, efficiency, consistency, image congruence, brand awareness, and price competitiveness. Standardization also enables large-scale R&D and innovative change in management style. Another element in operating strategy is control management. The simple way to control a franchise contract is to rely on legal terms, i.e., a contractual control system. There are other control systems: the administrative control system and the ethical control system. A contractual control system is a coercive source of power, but the franchisor usually does not want to use legal power since it does not help to build a positive relationship; instead, self-regulation is widely used. An administrative control system uses control mechanisms from the ordinary working relationship. Its main components are supporting activities for franchisees and communication methods: for example, the franchisor provides advertising, training, manuals, and delivery, and the franchisee follows the franchisor's direction. Another component is building the franchisor's brand power. The last research element is the performance of international franchising. Performance elements can be divided into the franchisor's performance and the franchisee's performance. The conceptual performance measures of the franchisor are simple but not easy to obtain objectively: profit, sales, cost, experience, and brand power. The performance measures of the franchisee are mostly about benefits to the host country. They include small business development, promotion of employment, introduction of new business models, and upgrading of technology. There are also indirect benefits, such as increased tax revenue, refinement of corporate citizenship, regional economic clustering, and improvement of the international balance. In addition, the host country undergoes socio-cultural change beyond the economic effects, including demographic change, social trends, changes in customer values, social communication, and social globalization; this is sometimes called westernization or the McDonaldization of society. The paper also reviews theories that have frequently been applied to international franchising research, such as agency theory, the resource-based view, transaction cost theory, organizational learning theory, and international expansion theories. Resource-based theory is used for strategic decisions based on resources, such as decisions about entry and cooperation depending on the resources of the franchisee and franchisor.
Transaction cost theory can be applied to the determination of mutual trust or satisfaction among franchising players. Agency theory tries to explain strategic decisions for reducing the problems caused by using agents, for example in research on control systems in franchising agreements. Organizational learning theory is relatively new in franchising research; it assumes that an organization tries to maximize its performance and learning. In addition, internalization theory advocates the strategic decision of direct investment to remove the inefficiency of market transactions and is applied in research on contract terms. Oligopolistic competition theory is used to explain the various entry modes for international expansion, and competency theory supports strategic decisions that exploit key competitive advantages. Furthermore, both qualitative and quantitative research methodologies are suggested for more rigorous international franchising research. Quantitative research needs more real data beyond survey data, which usually reflects respondents' judgments; to verify theory more rigorously, research based on real data is essential, but real quantitative data are quite hard to obtain. Qualitative research beyond the single case study is also highly recommended. Since international franchising has a limited number of cases, scientific research based on grounded theory and ethnographic study can be used. A scientific case study is differentiated from a single case study by its data collection and analysis methods; the key concepts are triangulation in measurement, logical coding, and comparison. Finally, the paper provides an overall research direction for international franchising after summarizing research trends in Korea. International franchising research in Korea is of two types: studies of Korean franchisors going overseas and studies of Korean franchisees of foreign franchisors. Among the research on Korean franchisors, two common patterns are observed. First, such studies usually deal with the success story of a single franchisor; second, they tend to focus on the same industry and country. Therefore, international franchising research needs to extend its focus to broader subjects with scientific research methodology as well as the development of new theory.


Factors Affecting International Transfer Pricing of Multinational Enterprises in Korea (외국인투자기업의 국제이전가격 결정에 영향을 미치는 환경 및 기업요인)

  • Jun, Tae-Young;Byun, Yong-Hwan
    • Korean small business review
    • /
    • v.31 no.2
    • /
    • pp.85-102
    • /
    • 2009
  • With the continued globalization of world markets, transfer pricing has become one of the dominant sources of controversy in international taxation. Transfer pricing is the process by which a multinational corporation calculates a price for goods and services that are transferred to affiliated entities. Consider a Korean electronics enterprise that buys supplies from its own subsidiary located in China. How much the Korean parent company pays its subsidiary will determine how much profit the Chinese unit reports in local taxes. If the parent company pays above normal market prices, it may appear to have a poor profit, even if the group as a whole shows a respectable profit margin. In this way, transfer prices affect the taxable income reported in each country in which the multinational enterprise operates. Its importance lies in the fact that around 60% of international trade involves transactions between two related parts of multinationals, according to the OECD. Multinational enterprises (hereafter MEs) exert much effort to exploit organizational advantages when making global investments, and they wish to minimize their tax burden, so they spend a fortune on economists and accountants to justify transfer prices that suit their tax needs. On the contrary, local governments are not prepared to cope with MEs' powerful financial instruments. Tax authorities in each country wish to ensure that the tax base of any ME is divided fairly. Thus, both tax authorities and MEs have a vested interest in the way a transfer price is determined, and this is why MEs' international transfer prices are at the center of taxation disputes. Transfer pricing issues and practices are sometimes difficult for regulators to control because tax administrations do not have enough staff with the knowledge and resources necessary to understand them. The authors examine transfer pricing practices to provide resources useful in designing tax incentives and regulation schemes for policy makers. This study focuses on identifying the business and environmental factors that could influence the international transfer pricing of MEs. From this perspective, we empirically investigate how managers' perceptions of the related variables influence their choice of international transfer pricing method. We believe that this research is particularly useful in the design of tax policy, because it allows the tax administration, with its limited budget, to concentrate on a few selected factors. The data consist of questionnaire responses from foreign firms in Korea with investment balances exceeding one million dollars at the end of 2004. We mailed questionnaires to 861 managers in charge of the accounting departments of each company, resulting in 121 valid responses. Seventy-six percent of the sample firms are classified as small and medium-sized enterprises with assets below 100 billion Korean won. Reviewing transfer pricing methods, cost-based transfer pricing is the most popular, adopted by 60 firms; the market-based method is used by 31 firms, and 13 firms report the resale-price method. Regarding the nationalities of foreign investors, the Japanese and the Americans constitute most of the sample. Logistic regressions were performed for the statistical analysis. The dependent variable is binary, indicating whether the international transfer pricing method is market-based or cost-based.
This binary classification is founded on the belief that the market-based method is regarded as a relatively objective way of pricing compared with cost-based methods, whereas cost-based pricing is assumed to give managers flexibility in transfer pricing decisions. Therefore, local regulatory agencies are thought to prefer market-based pricing over cost-based pricing. The independent variables comprise eight factors: corporate tax rate, tariffs, relations with local tax authorities, tax audits, the equity ratio of local investors, the volume of internal trade, sales volume, and product life cycle. The first four variables are included in the model because taxation lies at the center of transfer pricing disputes, so identifying the impact of these variables in the Korean business environment is much needed. The equity ratio is included to represent the interests of local partners. The volume of internal trade has sometimes been employed in previous research to check the pricing behavior of managers, and we follow those footsteps in this paper. Product life cycle is used as a surrogate for competition in local markets. The control variables are firm size and the nationality of the foreign investor. Firm size is controlled using a dummy variable indicating whether or not the firm is small and medium-sized, because some researchers report that big firms behave differently from small and medium-sized firms in transfer pricing. The other control variable is also expressed as a dummy variable indicating whether the investor is American, because some prior studies conclude that the American management style is different in that it limits branch managers' freedom of decision. Reviewing the statistical results, we find that managers prefer the cost-based method over the market-based method as the importance of corporate taxes and tariffs increases. This result means that managers need flexibility to lessen the tax burden when they feel taxes are important. They also prefer the cost-based method as the product life cycle matures, which means that they support subsidiaries in local market competition using cost-based transfer pricing. On the contrary, as the relationship with local tax authorities becomes more important, managers prefer the market-based method, because market-based pricing is a better way to maintain good relations with tax officials. Other variables, such as tax audits, the volume of internal transactions, sales volume, and the local equity ratio, show only insignificant influence. Additionally, we replaced the two tax variables (corporate taxes and tariffs) with data on the top marginal tax rate and the mean tariff rate of each country and performed another regression to see whether the results would differ from the former ones. As a consequence, we found a difference for mean tariffs, which show only an insignificant influence on the dependent variable. We conjecture that each company in the sample pays tariffs at a specific rate applied only to that company, which could be far from the mean tariff rate. Therefore, we conclude that more detailed data showing the tariffs of each company are needed to check the role of this variable. Considering that the present paper relies heavily on questionnaires, an effort to build a reliable database is needed to enhance research reliability.
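The logistic regression described above is straightforward to specify. The sketch below shows one possible formulation in Python with statsmodels; the dependent variable is 1 for a market-based method and 0 for a cost-based one, and all column names are assumptions about how the eight factors and two dummy controls might be coded, not the authors' actual dataset.

```python
# Hedged sketch of the logistic regression on the transfer-pricing method choice.
import statsmodels.formula.api as smf

# 1 = market-based method, 0 = cost-based method (assumed coding)
FORMULA = ("market_based ~ corp_tax_importance + tariff_importance"
           " + tax_authority_relations + tax_audit + local_equity_ratio"
           " + internal_trade_volume + sales_volume + product_life_cycle"
           " + C(is_sme) + C(is_american)")


def fit_transfer_pricing_logit(df):
    """df: one row per surveyed firm (the paper reports 121 valid responses)."""
    return smf.logit(FORMULA, data=df).fit()
```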