• Title/Summary/Keyword: 효율적 (efficient)

Search Results: 65,561

Factors Affecting International Transfer Pricing of Multinational Enterprises in Korea (외국인투자기업의 국제이전가격 결정에 영향을 미치는 환경 및 기업요인)

  • Jun, Tae-Young;Byun, Yong-Hwan
    • Korean small business review
    • /
    • v.31 no.2
    • /
    • pp.85-102
    • /
    • 2009
  • With the continued globalization of world markets, transfer pricing has become one of the dominant sources of controversy in international taxation. Transfer pricing is the process by which a multinational corporation sets a price for goods and services transferred to affiliated entities. Consider a Korean electronics enterprise that buys supplies from its own subsidiary located in China. How much the Korean parent company pays its subsidiary will determine how much profit the Chinese unit reports in local taxes. If the parent company pays above normal market prices, it may appear to have a poor profit, even if the group as a whole shows a respectable profit margin. In this way, transfer prices affect the taxable income reported in each country in which the multinational enterprise operates. Their importance lies in the fact that, according to the OECD, around 60% of international trade involves transactions between two related parts of a multinational. Multinational enterprises (hereafter MEs) exert much effort into utilizing organizational advantages to make global investments, and they wish to minimize their tax burden, so they spend a fortune on economists and accountants to justify transfer prices that suit their tax needs. Local governments, on the contrary, are often not prepared to cope with MEs' powerful financial instruments. Tax authorities in each country wish to ensure that the tax base of any ME is divided fairly. Thus, both tax authorities and MEs have a vested interest in the way in which a transfer price is determined, and this is why MEs' international transfer prices are at the center of disputes concerned with taxation. Transfer pricing practices are sometimes difficult for regulators to control because tax administrations do not have enough staff with the knowledge and resources necessary to understand them. The authors examine transfer pricing practices to provide resources useful in designing tax incentives and regulation schemes for policy makers. This study focuses on identifying the business and environmental factors that could influence the international transfer pricing of MEs. In this perspective, we empirically investigate how management's perception of related variables influences the choice of international transfer pricing methods. We believe this research is particularly useful in the design of tax policy, because it allows the tax administration, with its limited budget, to concentrate on a few selected factors. The data consist of questionnaire responses from foreign firms in Korea with investment balances exceeding one million dollars at the end of 2004. We mailed questionnaires to 861 managers in charge of the accounting departments of each company, resulting in 121 valid responses. Seventy-six percent of the sample firms are classified as small and medium-sized enterprises with assets below 100 billion Korean won. Reviewing transfer pricing methods, cost-based transfer pricing is the most popular, adopted by 60 firms; the market-based method is used by 31 firms, and 13 firms report the resale-pricing method. Regarding the nationalities of foreign investors, the Japanese and the Americans constitute most of the sample. Logistic regressions were performed for statistical analysis. The dependent variable is binary, indicating whether the method of international transfer pricing is a market-based method or a cost-based method.
This binary classification is founded on the belief that the market-based method is a relatively objective way of pricing compared with cost-based methods. Cost-based pricing is assumed to give managers flexibility in transfer pricing decisions; therefore, local regulatory agencies are thought to prefer market-based pricing over cost-based pricing. The independent variables comprise eight factors: corporate tax rate, tariffs, relations with local tax authorities, tax audits, equity ratios of local investors, volume of internal trade, sales volume, and product life cycle. The first four variables are included in the model because taxation lies at the center of transfer pricing disputes, so identifying their impact in the Korean business environment is much needed. The equity ratio is included to represent the interest of local partners. Volume of internal trade has sometimes been employed in previous research to check the pricing behavior of managers, so we follow in those footsteps in this paper. Product life cycle is used as a surrogate for competition in local markets. Control variables are firm size and nationality of foreign investors. Firm size is controlled using a dummy variable indicating whether or not the firm is small and medium-sized, because some researchers report that big firms behave differently from small and medium-sized firms in transfer pricing. The other control variable is also a dummy variable, indicating whether the investor is American, because some prior studies conclude that the American management style is different in that it limits branch managers' freedom of decision. Reviewing the statistical results, we find that managers prefer the cost-based method over the market-based method as the importance of corporate taxes and tariffs increases. This result means that managers need flexibility to lessen the tax burden when they feel taxes are important. They also prefer the cost-based method as the product life cycle matures, which means that they support subsidiaries in local market competition using cost-based transfer pricing. On the contrary, as the relationship with local tax authorities becomes more important, managers prefer the market-based method, because market-based pricing is a better way to maintain good relations with tax officials. Other variables, such as tax audits, volume of internal transactions, sales volume, and local equity ratio, show only insignificant influence. Additionally, we replaced the two tax variables (corporate taxes and tariffs) with data on the top marginal tax rate and mean tariff rate of each country and performed another regression to see whether the results would differ from the former ones. As a consequence, we found a difference for mean tariffs, which show only an insignificant influence on the dependent variable. We surmise that each company in the sample pays tariffs at a rate specific to its own business, which could lie far from the mean tariff rate; therefore, we conclude that more detailed data showing the tariffs of each company are needed to check the role of this variable. Considering that the present paper relies heavily on questionnaires, an effort to build a reliable database is needed to enhance research reliability.
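As a rough illustration of the regression design described above, the sketch below fits a logistic model with a binary market-based/cost-based outcome, the eight factors, and the two controls. It is a hedged sketch with synthetic data and hypothetical column names, not the authors' code or variable definitions.

```python
# Sketch of the abstract's logistic regression setup; all data are synthetic
# and every column name is a hypothetical stand-in for the survey variables.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "market_based":   rng.integers(0, 2, 121),     # 1 = market-based, 0 = cost-based
    "corp_tax_imp":   rng.uniform(1, 5, 121),      # perceived importance (Likert-style)
    "tariff_imp":     rng.uniform(1, 5, 121),
    "tax_auth_rel":   rng.uniform(1, 5, 121),
    "tax_audit":      rng.uniform(1, 5, 121),
    "local_equity":   rng.uniform(0, 1, 121),
    "internal_trade": rng.uniform(0, 1, 121),
    "sales_volume":   rng.lognormal(10, 1, 121),
    "life_cycle":     rng.integers(1, 5, 121),     # 1 = introduction ... 4 = decline
    "is_sme":         rng.integers(0, 2, 121),     # control: SME dummy
    "is_american":    rng.integers(0, 2, 121),     # control: US-investor dummy
})

features = df.columns.drop("market_based")
X = StandardScaler().fit_transform(df[features])   # standardize for comparable coefficients
y = df["market_based"]

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:15s} {coef:+.3f}")               # positive sign favors market-based pricing
```

A positive coefficient here would correspond to the abstract's finding for the tax-authority relationship, and a negative one to the findings for corporate taxes, tariffs, and product life cycle.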

A Data-based Sales Forecasting Support System for New Businesses (데이터기반의 신규 사업 매출추정방법 연구: 지능형 사업평가 시스템을 중심으로)

  • Jun, Seung-Pyo;Sung, Tae-Eung;Choi, San
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.1-22
    • /
    • 2017
  • Analysis of future business or investment opportunities, such as business feasibility analysis and company or technology valuation, necessitates objective estimation of the relevant market and expected sales. While there are various ways to classify methods for estimating new sales or market size, they can be broadly divided into top-down and bottom-up approaches by benchmark reference. Both methods, however, require a lot of resources and time. Therefore, we propose a data-based intelligent demand forecasting system to support the evaluation of new businesses. This study focuses on analogical forecasting, one of the traditional quantitative forecasting methods, to develop a sales forecasting intelligence system for new businesses. Instead of simply estimating sales for a few years, we propose a method of estimating the sales of new businesses by using the initial sales and the sales growth rates of similar companies. To demonstrate the appropriateness of this method, we examine whether the sales performance of recently established companies in the same industry category in Korea can be utilized as a reference variable for analogical forecasting. We examined whether the phenomenon of "mean reversion" is observed in the sales of start-up companies, in order to identify errors in estimating the sales of new businesses based on industry sales growth rates, and whether differences in business environment resulting from different launch timings affect the growth rate. We also conducted analyses of variance (ANOVA) and fitted a latent growth model (LGM) to identify differences in sales growth rates by industry category. Based on the results, we propose industry-specific range and linear forecasting models. This study analyzed the sales of 150,000 start-up companies in Korea over the last 10 years and identified that the average growth rate of start-ups in Korea is higher than the industry average in the first few years but soon exhibits mean reversion. In addition, although the founding juncture of a start-up affects its sales growth rate, the effect is not highly significant, and the sales growth rate differs according to industry classification. Utilizing both this phenomenon and the performance of start-up companies in relevant industries, we propose two models of new business sales based on the sales growth rate. The method proposed in this study makes it possible to estimate the sales of a new business by industry objectively and quickly, and it is expected to provide reference information for judging whether sales estimated by other methods (top-down/bottom-up approaches) fall outside the bounds of ordinary cases in the relevant industry. In particular, the results of this study can be used as practical reference information for business feasibility analysis or technology valuation when entering a new business. When using the existing top-down method, they can be used to set the range of market size or market share; likewise, when using the bottom-up method, the estimation period may be set in accordance with the mean-reverting period of the growth rate. The two models proposed in this study enable rapid and objective sales estimation for new businesses and are expected to improve the efficiency of business feasibility analysis and the technology valuation process through the development of an intelligent information system.
From an academic perspective, it is an important discovery that the phenomenon of 'mean reversion' is found among start-up companies, beyond general small and medium-sized enterprises (SMEs) and stable companies such as listed companies. In particular, the significance of this study lies in showing, over large-scale data, that the mean-reverting behavior of start-up firms' sales growth rates differs from that of listed companies, and that it differs across industries. While the linear model, which is useful for estimating the sales of a specific company, is likely to be utilized in practice, the range model, which can be used to estimate the sales of unspecified firms, is likely to be useful for policy. When analyzing the business activities and performance of a specific industry group or enterprise group, the range model has policy utility in that a data-based start-up sales forecasting system can provide reference values and enable comparison against them.
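To make the analogical, growth-rate-based idea concrete, here is a minimal sketch of estimating a sales path from an initial sales figure and a start-up growth rate that reverts to the industry mean. The reversion period, the blending rule, and all numbers are illustrative assumptions, not the paper's fitted range or linear models.

```python
# Sketch: project a new business's sales from an assumed initial figure and
# the growth rates of similar start-ups, letting growth revert toward the
# industry mean as the abstract describes. All parameters are illustrative.

def forecast_sales(initial_sales, startup_growth, industry_growth,
                   years, reversion_period=3):
    """Blend linearly from the start-up growth rate to the industry growth
    rate over `reversion_period` years, then hold the industry rate."""
    sales, path = initial_sales, []
    for t in range(1, years + 1):
        w = min(t / reversion_period, 1.0)             # weight shifts to industry mean
        g = (1 - w) * startup_growth + w * industry_growth
        sales *= 1 + g
        path.append(round(sales, 1))
    return path

# Example: 500M KRW first-year sales, 25% early growth reverting to 5%.
print(forecast_sales(500.0, 0.25, 0.05, years=5))
```

A range model in the paper's sense could then be obtained by running the same projection with, say, lower- and upper-quantile growth rates of the relevant industry rather than a single rate.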

Preparation of Pure CO2 Standard Gas from Calcium Carbonate for Stable Isotope Analysis (탄산칼슘을 이용한 이산화탄소 안정동위원소 표준시료 제작에 대한 연구)

  • Park, Mi-Kyung;Park, Sunyoung;Kang, Dong-Jin;Li, Shanlan;Kim, Jae-Yeon;Jo, Chun Ok;Kim, Jooil;Kim, Kyung-Ryul
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.18 no.1
    • /
    • pp.40-46
    • /
    • 2013
  • The isotope ratios of $^{13}C/^{12}C$ and $^{18}O/^{16}O$ for a sample in a mass spectrometer are measured relative to those of a pure $CO_2$ reference gas (i.e., a laboratory working standard). Thus, the calibration of a laboratory working standard gas to the international isotope scales (Pee Dee Belemnite (PDB) for ${\delta}^{13}C$ and Vienna Standard Mean Ocean Water (V-SMOW) for ${\delta}^{18}O$) is essential for comparisons with data sets obtained by other groups on other mass spectrometers. However, one often finds it difficult to obtain well-calibrated standard gases because of their production time and high price. An additional difficulty is that fractionation processes can occur inside the gas cylinder, most likely due to pressure drop during long-term use. Therefore, we have studied the laboratory production of pure $CO_2$ isotope standard gas from a stable solid calcium carbonate standard material. We propose a method to extract pure $CO_2$ gas without isotope fractionation from a solid calcium carbonate material. The method is similar to that suggested by Coplen et al. (1983), but is better optimized, particularly for making a large amount of pure $CO_2$ gas from calcium carbonate. The $CaCO_3$ releases $CO_2$ in reaction with 100% phosphoric acid at $25^{\circ}C$ in a custom-designed, evacuated reaction vessel. Here we describe the optimal procedure, reaction conditions, and sample/reactant sizes for the calcium carbonate-phosphoric acid reaction, and we provide the details of extracting, purifying and collecting the $CO_2$ gas from the reaction vessel. The measurements of ${\delta}^{18}O$ and ${\delta}^{13}C$ of $CO_2$ were performed at Seoul National University using a stable isotope ratio mass spectrometer (VG Isotech, SIRA Series II) operated in dual-inlet mode. The overall analytical precisions for ${\delta}^{18}O$ and ${\delta}^{13}C$ were evaluated from the standard deviations of multiple measurements on 15 separate samples of purified $CO_2$. The pure $CO_2$ samples were taken from 100-mg aliquots of a solid calcium carbonate (Solenhofen-origin $CaCO_3$) during an 8-day experimental period. The multiple measurements yielded $1{\sigma}$ precisions of ${\pm}0.01$‰ for ${\delta}^{13}C$ and ${\pm}0.05$‰ for ${\delta}^{18}O$, comparable to the internal instrumental precision of the SIRA. Therefore, we conclude that the method proposed in this study can serve as a way to produce an accurate secondary and/or laboratory $CO_2$ standard gas. We hope this study helps resolve the difficulties in placing a laboratory working standard onto the international isotope scales and in making accurate comparisons with data sets from other groups.
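For reference, the reported δ values follow the standard per-mil delta convention relative to the reference material (a textbook definition, not something specific to this paper):

```latex
% Standard delta notation (per mil), with PDB as the carbon reference
% and V-SMOW as the oxygen reference:
\delta^{13}\mathrm{C} = \left(\frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{sample}}}
                                   {\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{PDB}}} - 1\right)\times 10^{3}\ \text{‰},
\qquad
\delta^{18}\mathrm{O} = \left(\frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}
                                   {\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{V\text{-}SMOW}}} - 1\right)\times 10^{3}\ \text{‰}.
```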

Factors Related to Waiting and Staying Time for Patient Care in Emergency Care Center (응급의료센터 내원환자 진료시 소요시간과 관련된 요인)

  • Han, Nam Sook;Park, Jae Yong;Lee, Sam Beom;Do, Byung Soo;Kim, Seok Beom
    • Quality Improvement in Health Care
    • /
    • v.7 no.2
    • /
    • pp.138-155
    • /
    • 2000
  • Background: Factors related to waiting and staying time for patient care in an emergency care center (ECC) were examined for one month, from Apr. 1 to Apr. 30, 1997, at the ECC of Yeungnam University Hospital in Taegu metropolitan city, to obtain baseline data for a strategy of effective management of emergency patients. Method: The study subjects consisted of the 1,742 patients who visited the ECC, and the data were obtained from ECC medical records and direct surveys. Results: The mean interval between ECC admission time and initial care time by the ECC duty residents was 83.1 minutes for male patients and 84.9 minutes for female patients, and the mean ECC staying time (the interval between admission and final disposition from the ECC) was 718.0 minutes in men and 670.5 minutes in women. The mean staying time in the ECC was longer at older ages; in particular, both initial care time and staying time were longest in medical-aid patients and shortest in patients covered by worker's accident compensation insurance. The factors associated with the initial care time were the number of ECC patients, the existence of any true emergency patients, cardiopulmonary resuscitation (CPR) status on admission, and whether the patient had previously been endotracheally intubated. The initial care time was much more delayed in patients without previous medical records, and the ECC staying time was longer in patients referred from the out-patient department, in patients transferred from other hospitals, in patients having previous records, and in patients for whom the order-communicating system was only partly used. The ECC staying time was also much longer in patients with drug intoxication, in CPR patients, in medical department patients, in transfused patients, and in patients involving three or more departments. According to the number of duty interns, the ECC staying time with four interns was longer than with five, and it was also longer when no beds were available after the admission order was written. From the above results, the factors affecting the initial care time were statistically significant (P<0.01) with respect to the patient's age, the laboratory orders, and the X-ray films checked. The factors affecting the ECC staying time were statistically significant (P<0.01) with respect to the availability of beds, the laboratory orders and/or special laboratory orders, the X-ray films checked, the final disposing department, transfer to another hospital or not, home medication or not, admission or not, the grade of beds, the year grade of residents, the cause of the ECC visit, CPR status on admission, surgical operation or not, and acquaintance with hospital personnel. Conclusion: The authors conclude that long staying times in the ECC could be relieved by establishing a legally recognized triage mechanism that differentiates true emergency patients from non-emergency patients, and that ECC staying time could be shortened by ordering only the laboratory tests that are definitely necessary and by managing beds more flexibly for ECC admissions; these measures should also unburden ECC personnel and improve the quality of care for emergency patients.


The Concentration of Economic Power in Korea (경제력집중(經濟力集中) : 기본시각(基本視角)과 정책방향(政策方向))

  • Lee, Kyu-uck
    • KDI Journal of Economic Policy
    • /
    • v.12 no.1
    • /
    • pp.31-68
    • /
    • 1990
  • The concentration of economic power takes the form of one or a few firms controlling a substantial portion of the economic resources and means in a certain economic area. At the same time, to the extent that these firms are owned by a few individuals, resource allocation can be manipulated by them rather than by the impersonal market mechanism. This impairs allocative efficiency, runs counter to a decentralized market system, and hampers the equitable distribution of wealth. Viewed from the historical evolution of Western capitalism in general, the concentration of economic power is a paradox in that it is a product of the free market system itself. The economic principle of natural discrimination works so that a few big firms preempt scarce resources and market opportunities. Prominent historical examples include trusts in America, Konzern in Germany and Zaibatsu in Japan in the early twentieth century. In other words, the concentration of economic power is the outcome as well as the antithesis of free competition. As long as judgment of the economic system at large depends upon the value systems of individuals, the issue of how to evaluate the concentration of economic power will inevitably be tinged with ideology. We have witnessed several different approaches to this problem, such as communism, fascism and revised capitalism, and the last seems to be the only surviving alternative. The concentration of economic power in Korea can be summarily represented by the "jaebol," namely, the conglomerate business group, the majority of whose member firms are monopolistic or oligopolistic in their respective markets and are owned by particular individuals. The jaebol has many dimensions, but to sketch its magnitude, the share of the jaebol in the manufacturing sector reached 37.3% of shipments and 17.6% of employment as of 1989. The concentration of economic power can be ascribed to a number of causes. In the early stages of economic development, when the market system is immature, entrepreneurship must fill the gap inherent in the market in addition to performing its customary managerial function. Entrepreneurship of this sort is a scarce resource and becomes even more valuable as the target rate of economic growth gets higher. Entrepreneurship can neither be readily obtained in the market nor exhausted despite repeated use. Because of these peculiarities, economic power is bound to be concentrated in the hands of a few entrepreneurs and their business groups. It goes without saying, however, that the issue of whether the full exercise of money-making entrepreneurship is compatible with social mores is a different matter entirely. The rapidity of the concentration of economic power can also be traced to the diversification of business groups. The transplantation of advanced technology oriented toward mass production tends to saturate the small domestic market quite early and allows a firm to expand into new markets by making use of excess capacity and of monopoly profits. One of the reasons why the jaebol issue has become so acute in Korea lies in the nature of the government-business relationship. The Korean government has set economic development as its foremost national goal and has since intervened profoundly in the private sector.
Since most strategic industries promoted by the government required huge capacities in technology, capital and manpower, big firms were favored over smaller firms, and the benefits of industrial policy naturally accrued to large business groups. The concentration of economic power which occurred along the way was, therefore, not necessarily a product of the market system. At the same time, the concentration of ownership in business groups has been left largely intact, as they have customarily met capital requirements by means of debt. The real advantage enjoyed by large business groups lies in the synergy of multiplant and multiproduct production. Even these effects, however, cannot always be considered socially optimal, as they create disadvantages for other independent firms, for example by foreclosing their markets. Moreover, their fictitious or artificial advantages only aggravate the popular perception that most business groups have accumulated their wealth at the expense of the general public and at the behest of the government. Since Korea now stands at the threshold of establishing a full-fledged market economy along with political democracy, the phenomenon called the concentration of economic power must be correctly understood and the roles of business groups must be accordingly redefined. In doing so, we would do well to take a closer look at Japan, which has experienced the demise of the family-controlled Zaibatsu and success with business groups (Kigyoshudan) whose ownership is dispersed among many firms and ultimately among the general public. The Japanese case cannot be an ideal model, but at least it gives us a good point of departure in that the issue of ownership is at the heart of the matter. In setting the basic direction of public policy aimed at controlling the concentration of economic power, one must harmonize efficiency and equity. Firm size in itself is not a problem if it is dictated by efficiency considerations and if the firm behaves competitively in the market. As long as entrepreneurship is required for continuous economic growth and there is a discrepancy in entrepreneurial capacity among individuals, a concentration of economic power is bound to take place to some degree. Hence, the most effective way of reducing the inefficiency of business groups may be to impose competitive pressure on their activities. Concurrently, unless the concentration of ownership in business groups is scaled down, the seed of social discontent will remain. Nevertheless, the dispersion of ownership requires a number of preconditions, and consequently we must make consistent, long-term efforts on many fronts. We can suggest a long list of policy measures specifically designed to control the concentration of economic power. Whatever the policy may be, however, its intended effects will not be fully realized unless business groups abide by the moral code expected of socially responsible entrepreneurs. This is especially true since the root of the problem of the excessive concentration of economic power lies outside the issue of efficiency, in problems concerning distribution, equity, and social justice.


Studies on the Effect of Diffusion Process to Decay Resistance of Mine Props (간이처리법(簡易處理法)에 의한 갱목(坑木)의 내부효력(耐腐効力)에 관한 연구(硏究))

  • Shim, Chong Supp;Shin, Dong So;Jung, Hee Suk
    • Journal of Korean Society of Forest Science
    • /
    • v.29 no.1
    • /
    • pp.1-19
    • /
    • 1976
  • This study was made to observe the present status of coal mine props, which are desperately needed for coal production despite the great shortage of timber resources in this country, and to investigate the effects of a diffusion process on the decay resistance of mine props treated with the preservatives Malenit and chromated zinc chloride. The results are as follows. 1. Present status of coal mine props: Total demand for coal mine props in 1975 was approximately 456 thousand cubic meters. The main species used for mine props are conifers (mainly Pinus densiflora) and hardwoods (mainly Quercus), in roughly equal portions. With no fixed specification, a wide variety of timber sizes and forms is used, and the volume of wood used per ton of coal production also ranges widely, from 0.017 cubic meters to 0.03 cubic meters. 2. Decay resistance test: a) The decrease in oven-dry weight did not differ significantly between untreated and treated specimens, although the average values showed some differences; this may be due to the short duration of the test. b) The compression strength test of untreated versus treated specimens showed the same pattern as the weight decrease, presumably for the same reasons. c) The amounts of extractives in one percent sodium hydroxide (NaOH) were larger for untreated specimens than for treated ones. 3. In field application, the economic benefit of treated wood is better over the long term, although the initial cost of treated wood is somewhat higher than that of untreated wood.


A Study on the Utilization of Two Furrow Combine (2조형(條型) Combine의 이용(利用)에 관(關)한 연구(硏究))

  • Lee, Sang Woo;Kim, Soung Rai
    • Korean Journal of Agricultural Science
    • /
    • v.3 no.1
    • /
    • pp.95-104
    • /
    • 1976
  • This study tested the harvesting of two rice varieties, Milyang #15 and Tong-il, with an imported two-furrow Japanese combine, in order to determine its operational accuracy, its adaptability, and the feasibility of supplying this machine to rural areas in Korea. The results obtained in this study are summarized as follows; 1. The harvesting test of Milyang #15 was carried out five times from the optimum harvesting date, and the harvesting operation was good regardless of maturity. The field grain loss ratio and the rate of unthreshed paddy were both about 1 percent. 2. The field grain loss for Tong-il increased from 5.13% to 10.34% with maturity, as shown in Fig 1. In view of this, the combine mechanism needs mechanical improvement for harvesting the Tong-il rice variety. 3. The rate of unthreshed paddy for the short-stemmed Tong-il variety averaged 1.6 percent, because the sample combine used in this study was developed on the basis of the long-stemmed varieties grown in Japan; owing to the uneven stems of Tong-il rice, some ears could not reach the teeth of the threshing drum. 4. The cracking rates of brown rice, which depend mostly on the revolution speed of the threshing drum (240-350 rpm), were below 1 percent for both Tong-il and Milyang #15, with no significant difference between the two varieties. 5. Since the ears of the Tong-il variety are covered by its leaves, a lot of trash was produced, especially when threshed as raw material, and the cleaning and trash-out mechanisms were very often clogged with this trash; these two mechanisms therefore need improvement. 6. The sample combine, whose track pressure was $0.19kg/cm^2$, could drive on soft ground sinking as much as 25 cm, as shown in Fig 3. But considering reaping-height adjustment, about 5 cm of sinking allows the combine to drive on ground of irregular sinking without any readjustment of the reaping height. 7. The harvesting expenses per hectare with the sample combine, whose annual coverage is 4.7 ha under the conditions that yearly workable days are 40, the percentage of days suitable for harvesting is 60%, field efficiency is 56%, working speed is 0.273 m/sec, and daily workable hours are 8, are reasonable compared with conventional harvesting expenses; hence it is sensible to spread this combine to rural areas in Korea, provided mechanical improvements are made so that it can harvest Tong-il rice (a rough consistency check follows below). 8. To harvest Tong-il rice, the two-furrow combine needs several mechanical improvements: the divider should be controllable so as not to touch the ears of paddy, the space between the feeding chain and the threshing drum should be reduced, the trash treatment apparatus must be improved, the fore-and-rear adjustment interval should be enlarged, and the track width must be enlarged for driving on soft ground.
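The figures in item 7 can be checked approximately. In the sketch below, working speed, field efficiency, workable days, and daily hours come from the abstract, while the effective working width is an assumption (it is not stated in the abstract); a width of about 0.45 m reproduces the stated annual coverage.

```python
# Back-of-envelope effective field capacity from the stated parameters.
# The working width is an ASSUMPTION, not given in the abstract.
speed_m_per_h = 0.273 * 3600          # working speed: 982.8 m/h
width_m       = 0.45                  # assumed effective working width
field_eff     = 0.56                  # field efficiency
hours_per_day = 8                     # daily workable hours
workable_days = 40 * 0.60             # 40 days x 60% suitable weather = 24 days

rate_m2_per_h = speed_m_per_h * width_m * field_eff            # ~248 m^2/h
annual_ha = rate_m2_per_h * hours_per_day * workable_days / 10_000
print(f"annual coverage = {annual_ha:.1f} ha")                 # ~4.8 ha vs the stated 4.7 ha
```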


Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many academic studies have been conducted on predicting the success of campaigns targeted at customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways due to the rapid revitalization of online business, companies carry out various types of campaigns at a level that cannot be compared to the past. However, customers tend to perceive campaigns as spam as fatigue from duplicate exposure increases. From a corporate standpoint, there is also the problem that the effectiveness of the campaign itself is decreasing, for example through rising campaign investment costs, which leads to a low actual campaign success rate. Accordingly, various studies are ongoing to improve the effectiveness of campaigns in practice. The campaign system's ultimate purpose is to increase the success rate of various campaigns by collecting and analyzing various customer-related data and using them for campaigns. In particular, recent attempts have been made to predict campaign responses using machine learning. Selecting appropriate features is very important because of the varied features of campaign data. If all of the input data are used to classify a large amount of data, learning takes a long time as the classification classes expand, so a minimal input data set must be extracted from the entire data. In addition, when a trained model is generated using too many features, prediction accuracy may be degraded due to overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary process for analyzing a high-dimensional data set. Among the greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques. However, when there are many features, these methods are limited in that their classification prediction performance is poor and learning takes a long time. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method in the process of searching for the feature subsets that underlie machine learning model performance, using the statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are derived first, features that have a negative effect are removed, and then the sequential method is applied to increase search efficiency and to enable generalized prediction. We confirmed that the proposed model showed better search and prediction performance than the traditional greedy algorithm. Compared with the original data set, the greedy algorithm, the genetic algorithm (GA), and recursive feature elimination (RFE), campaign success prediction was higher. In addition, when performing campaign success prediction, the improved feature selection algorithm was found to be helpful for analyzing and interpreting the prediction results by providing the importance of the derived features.
These include features such as age, customer rating, and sales, which were already known statistically to be important. Unlike what campaign planners previously used to select campaign targets, features such as the combined product name, the average 3-month data consumption rate, and the last 3 months' wireless data usage were unexpectedly selected as important features for campaign response. It was confirmed that base attributes can also be very important features depending on the type of campaign. Through this, it is possible to analyze and understand the important characteristics of each campaign type.
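For context, plain sequential forward selection (the greedy baseline that SFFS extends and that this paper improves on) can be sketched as below. This is an illustrative baseline with synthetic data, not the proposed algorithm, which additionally pre-ranks features by their statistical characteristics and removes harmful ones before the sequential search.

```python
# Plain sequential forward selection (SFS): greedily add the feature that
# most improves cross-validated accuracy, stopping when no candidate helps.
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
while remaining:
    # Score each candidate feature added to the current subset.
    scores = {
        f: cross_val_score(LogisticRegression(max_iter=1000),
                           X[:, selected + [f]], y, cv=5).mean()
        for f in remaining
    }
    f_best, s_best = max(scores.items(), key=lambda kv: kv[1])
    if s_best <= best_score:        # stop when no candidate improves accuracy
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best_score = s_best

print("selected features:", selected, "cv accuracy: %.3f" % best_score)
```

The cost of this baseline grows quickly with the number of features, which is exactly the limitation the abstract cites as motivation for statistically pre-filtering the feature pool.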

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.83-102
    • /
    • 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, and the company is strongly committed to backing export companies with various systems. Nevertheless, there are still few cases of realized business models based on big-data analyses. In this situation, this paper aims to develop a new business model which can be applied to the ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models. Then we conduct a performance comparison among predictive models including Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, many researchers have tried to find better models to help predict bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), which was based on multiple discriminant analysis and is widely used in both research and practice to this day; it uses five key financial ratios to predict the probability of bankruptcy in the next two years. Ohlson (1980) introduced the logit model to complement some limitations of previous models. Furthermore, Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model. Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks. Yang (1996) introduced multiple discriminant analysis and the logit model, and Kim and Kim (2001) utilized artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with diverse models such as Random Forest or SVM. One major distinction of our research from previous research is that we focus on examining the predicted probability of default for each sample case, not only on investigating the classification accuracy of each model for the entire sample. Most predictive models in this paper show a classification accuracy of about 70% on the entire sample; specifically, the LightGBM model shows the highest accuracy of 71.1% and the Logit model the lowest accuracy of 69%. However, these results are open to multiple interpretations. In the business context, we have to put more emphasis on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals.
When we examine the classification accuracy for each interval, the Logit model has the highest accuracy of 100% for the 0~10% interval of the predicted probability of default, but a relatively low accuracy of 61.5% for the 90~100% interval. On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results, with higher accuracy in both the 0~10% and 90~100% intervals but lower accuracy around the 50% interval. As for the distribution of samples across the predicted probability of default, both the LightGBM and XGBoost models place a relatively large number of samples in the 0~10% and 90~100% intervals. Although the Random Forest model has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost could be more desirable models since they classify a large number of cases into the two extreme intervals of the predicted probability of default, even allowing for their relatively low classification accuracy there. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance; Random Forest and LightGBM show good results, while logistic regression shows the worst performance. However, each predictive model has a comparative advantage in terms of particular evaluation standards. For instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, we can construct more comprehensive ensemble models which contain multiple machine learning classifiers and conduct majority voting to maximize overall performance.
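The interval-wise comparison described above can be sketched as follows. The data here are synthetic and the 0.5 cutoff is an assumption; the paper applies this procedure to KSURE's internal guarantee data and its fitted models.

```python
# Split predicted default probabilities into ten equal intervals and report
# sample count and classification accuracy per interval, mirroring the
# paper's comparison. Synthetic data stand in for real model outputs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
p_hat  = rng.uniform(0, 1, 5000)               # predicted probability of default
y_true = rng.binomial(1, p_hat)                # synthetic realized outcomes

df = pd.DataFrame({"p": p_hat, "y": y_true})
df["pred"] = (df["p"] >= 0.5).astype(int)      # assumed 0.5 classification cutoff
df["bin"] = pd.cut(df["p"], bins=np.linspace(0, 1, 11))
df["correct"] = (df["pred"] == df["y"]).astype(float)

summary = df.groupby("bin", observed=True)["correct"].agg(["size", "mean"])
print(summary)   # extreme bins classify well; bins near 0.5 classify poorly
```

This view makes the abstract's point visible: two models with similar overall accuracy can differ greatly in how many cases they push into the confident extreme intervals.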

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.79-92
    • /
    • 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method of integrating national R&D data and assisting users in navigating the integrated data via a knowledge map service. The knowledge map service is built using a lightweight ontology and a topic modeling method. The national R&D data are integrated with the research project at the center; that is, other R&D data such as research papers, patents, and reports are connected with the research project as its outputs. The lightweight ontology is used to represent simple relationships between the integrated data, such as project-output relationships, document-author relationships, and document-topic relationships. The knowledge map enables us to infer further relationships, such as co-author and co-topic relationships. To extract the relationships between the integrated data, a Relational Data-to-Triples transformer is implemented. Also, a topic modeling approach is introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: one is used in knowledge management to store, manage and process an organization's data as knowledge; the other is used for analyzing and representing knowledge extracted from science & technology documents. This research focuses on the latter. In this research, a knowledge map service is introduced for integrating national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), the two major repositories and services of national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map. Using a lightweight ontology enables us to represent and process knowledge as a simple network, which fits the knowledge navigation and visualization characteristics of the knowledge map. The lightweight ontology represents the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process the ontology. In the ontologies, researchers are implicitly connected through the national R&D data by author relationships and performer relationships. A knowledge map displaying researchers' networks is created from the co-authoring relationships of national R&D documents and the co-participation relationships of national R&D projects. To sum up, a knowledge-map service system based on topic modeling and ontology is introduced for processing knowledge about national R&D data such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system's goals are 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide semantic and topic-based information search on the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. The S&T information such as research papers, research reports, patents and GTB data is updated daily from NDSL, and the R&D project information, including participants and outputs, is updated from NTIS. The S&T information and the national R&D information are obtained and integrated into the integrated database.
The knowledge base is constructed by transforming the relational data into triples referencing the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords representing them. The topic modeling approach enables us to extract these relationships and topic keywords based on semantics, not on simple keyword matching. Lastly, we show an experiment on the construction of the integrated knowledge base using the lightweight ontology and topic modeling, and we also introduce the knowledge map services created from the knowledge base.
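A minimal sketch of the Relational Data-to-Triples step, using rdflib; the namespace, predicate names, and rows are hypothetical placeholders, not the paper's actual R&D ontology:

```python
# Transform relational rows (project -> output documents) into RDF triples,
# mirroring the Relational Data-to-Triples transformer described above.
# The namespace and predicates are invented for illustration only.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/rnd/")
rows = [  # (project_id, output_type, document_id, author)
    ("P001", "paper",  "D100", "Kim"),
    ("P001", "patent", "D200", "Lee"),
]

g = Graph()
for proj, otype, doc, author in rows:
    g.add((EX[doc], EX.outputOf, EX[proj]))      # project-output relationship
    g.add((EX[doc], EX.hasAuthor, Literal(author)))  # document-author relationship
    g.add((EX[doc], EX.docType, Literal(otype)))

print(g.serialize(format="turtle"))              # ready to load into a triple store
```

Once such triples are in the store, co-author and co-topic relationships of the kind the abstract mentions can be inferred by simple graph queries over shared project or topic nodes.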