• Title/Summary/Keyword: Future issues

Search Results: 2,656

A New Exploratory Research on Franchisor's Provision of Exclusive Territories (가맹본부의 배타적 영업지역보호에 대한 탐색적 연구)

  • Lim, Young-Kyun;Lee, Su-Dong;Kim, Ju-Young
    • Journal of Distribution Research / v.17 no.1 / pp.37-63 / 2012
  • In the franchise business, protection of exclusive sales territories (sometimes EST in the tables) is a very important issue from economic, social, and political points of view. It affects the growth and survival of both franchisor and franchisee and often raises issues of social and political conflict. When a franchisee is not familiar with the related laws and regulations, the franchisor has a strong incentive to exploit this. Exclusive sales territory protection by manufacturers and distributors (wholesalers or retailers) means a sales-area restriction under which only certain distributors have the right to sell products or services. A distributor who has been granted an exclusive sales territory can protect its own territory but may be prohibited from entering other regions. Even though exclusive sales territory is a quite critical problem in the franchise business, there is not much rigorous research on its causes, results, evaluation, and future direction based on empirical data. This paper tries to address the problem not only in terms of logical and nomological validity but also through empirical validation. In pursuing an empirical analysis, we take into account the difficulties of real data collection and the limits of statistical analysis techniques. We use a set of disclosure-document data collected by the Korea Fair Trade Commission instead of the conventional survey method, which is often criticized for measurement error. Existing theories about exclusive sales territories can be summarized into two groups, as shown in the table below. The first concerns the effectiveness of exclusive sales territories from both the franchisor's and the franchisee's points of view. In fact, the outcome of an exclusive sales territory can be positive for franchisors but negative for franchisees; it can also be positive in terms of sales but negative in terms of profit. Therefore, variables and viewpoints should be set properly. The second group concerns the motives or reasons why exclusive sales territories are protected.
The reasons can be classified into four groups: industry characteristics, franchise-system characteristics, the capability to maintain an exclusive sales territory, and strategic decisions. Within these four groups there are more specific variables and theories, as below. Based on these theories, we develop nine hypotheses, which are briefly shown with their results in the last table below. To test the hypotheses, data were collected from the government (FTC) homepage, which is an open source. The sample consists of 1,896 franchisors and contains about three years of operating data, from 2006 to 2008. Within the sample, 627 franchisors have an exclusive sales territory protection policy, and those with such a policy are not evenly distributed over the 19 representative industries. Additional data were collected from other government agency homepages, such as Statistics Korea, and we combined data from various secondary sources to create meaningful variables, as shown in the table below. All variables are dichotomized by mean or median split unless they are inherently dichotomous by definition, since each hypothesis involves multiple variables and there is no solid statistical technique that incorporates all of these conditions in a single test. This paper uses a simple chi-square test because the hypotheses and theories are built upon quite specific conditions such as industry type, economic conditions, company history, and various strategic purposes. It is almost impossible to find samples satisfying all of these conditions, and they cannot be manipulated in experimental settings. Moreover, more advanced statistical techniques work well on clean data without exogenous variables but poorly on complex real data. The chi-square test is applied by grouping the sample into four cells according to two criteria: whether franchisors use exclusive sales territory protection, and whether they satisfy the conditions of each hypothesis.
We then test whether the proportion of sample franchisors that satisfy the conditions and protect exclusive sales territories significantly exceeds the proportion that satisfy the conditions but do not. In fact, the chi-square test is equivalent to Poisson regression, which allows more flexible application. As a result, only three hypotheses are accepted. When the attitude toward risk is high, so that the royalty fee is determined according to sales performance, EST protection yields poor results, as expected. When the franchisor protects ESTs in order to recruit franchisees easily, EST protection yields better results. Also, when EST protection aims to improve the efficiency of the franchise system as a whole, it shows better performance: high efficiency is achieved because an EST prevents free riding by franchisees who would exploit others' marketing efforts, encourages proper investment, and distributes franchisees evenly across regions. The other hypotheses are not supported by the significance tests. Exclusive sales territories should be protected for proper motives and administered for mutual benefit. Legal restrictions driven by government agencies such as the FTC could be misused and cause misunderstandings, so more careful monitoring of actual practices and more rigorous studies by both academics and practitioners are needed.
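The grouping described above yields a 2x2 contingency table (protects EST or not, satisfies the hypothesis condition or not). A minimal sketch of the Pearson chi-square test on such a table, using hypothetical counts rather than the paper's actual data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table:

                          satisfies condition | does not
    protects EST                   a          |    b
    does not protect               c          |    d
    """
    n = a + b + c + d
    row = [a + b, c + d]          # row totals
    col = [a + c, b + d]          # column totals
    observed = [[a, b], [c, d]]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n   # expected count under independence
            stat += (observed[i][j] - expected) ** 2 / expected
    return stat

# hypothetical counts: is EST protection associated with the condition?
stat = chi_square_2x2(120, 80, 90, 110)
print(stat > 3.841)  # exceeds the df=1, alpha=.05 critical value
```

With one degree of freedom, a statistic above 3.841 indicates a significant association at the 5% level, which is how each hypothesis cell split would be judged.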


A Contemplation on Measures to Advance Logistics Centers (물류센터 선진화를 위한 발전 방안에 대한 소고)

  • Sun, Il-Suck;Lee, Won-Dong
    • Journal of Distribution Science / v.9 no.1 / pp.17-27 / 2011
  • As the world becomes more globalized, business competition becomes fiercer, while consumers' needs for less expensive, high-quality products are on the increase. Businesses strive to secure a competitive edge in costs and services, and the logistics industry, that is, the industry that stores and transports goods, once regarded as a pure expense, is beginning to be considered a third cash cow, a source of new income. Logistics centers are central to the storage, loading, and unloading of deliveries, packaging operations, and the dispensing of goods information. As hubs for various deliveries, they also serve as core infrastructure that smoothly coordinates manufacturing and selling using various information and operation systems. Logistics centers are increasingly on the rise as centers of business supply activities, growing beyond their previous role of primarily storing goods. They are no longer just facilities; they have become logistics strongholds that encompass various functions from demand forecasting to the regulation of supply, manufacturing, and sales by realizing SCM, taking marketability and the operation of services and products into account. However, despite these changes in logistics operations, some centers have been unable to shed their past role as warehouses. For the continuous development of logistics centers, various measures are needed, including a revision of current supporting policies, the formulation of effective management plans, and the establishment of systematic standards for founding, managing, and controlling logistics centers. To this end, this research explored previous studies on the use and effectiveness of logistics centers. From a theoretical perspective, an evaluation of the overall introduction, purposes, and transitions in the use of logistics centers identified issues to ponder and suggested measures to promote and further advance logistics centers.
First, a fact-finding survey to support demand forecasting and standardization is needed. As logistics newspapers predicted that after 2012 supply would exceed demand, causing rents to fall, the business environment for logistics centers has faltered. However, since fact-finding surveys of actual demand for domestic logistics centers are scarce, it is hard to predict what the future holds for this industry. Accordingly, the first priority should be to grasp the current market situation by conducting accurate domestic and international fact-finding surveys. Based on these, management and evaluation indicators should be developed to build the foundation for the consistent advancement of logistics centers. Second, many policies for logistics centers should be revised or developed. Above all, a guideline for fair trade between shippers and commercial logistics centers should be enacted. Since there are no standards for fair trade between them, unfair trades rampant under current market practices have brought chaos to market order, and the logistics industry now confronts difficulties of its own. Therefore, the industry should gather the unfair trade cases that currently plague logistics centers, and fair trade guidelines should be established and implemented. In addition, restrictive employment regulations for foreign workers should be eased, and logistics centers should be charged industrial rates for electricity. Third, various measures should be taken to improve the management environment. Above all, we need to find out how to activate value-added logistics. Because the traditional purpose of logistics centers was the storage and loading/unloading of goods, their profitability had a limit, and the need arose to find a new angle for creating value-added services. Logistics centers have been perceived as support for a company's storage, manufacturing, and sales needs, not as creators of profit.
The centers' role in company economics has been to lower costs. However, as the logistics management environment has deteriorated, developing a profit-creating function alongside the storage function is a desirable goal, and to achieve it, value-added logistics should be promoted. Logistics centers can also be improved through cost estimation. So far they have made strides in facility development but have fallen behind in other respects, particularly in management. Lax management has been rampant because the industry has not developed a concept of cost estimation. The centers have since made efforts toward unification, standardization, and informatization while realizing cost reductions by establishing systems for effective management, but it has been hard to produce profits. Thus, there is an urgent need to estimate costs by determining a basic cost range for each division of work at logistics centers. This undertaking can be the first step toward improving the ineffective aspects of their operations. Ongoing research and constant efforts have been made to improve effectiveness in the manufacturing industry, but studies on resource management in logistics centers are hardly sufficient. Thus, a plan to calculate the optimal level of resources necessary to operate a logistics center should be developed and implemented in management practice, for example by standardizing hours of operation. If logistics centers, shippers, related trade groups, academics, and other experts could launch a committee to work with the government and maintain an ongoing relationship, the checks and cooperation among members would help lead to coherent development plans for logistics centers. If the government continues its efforts to provide financial support, nurture professional workers, and maintain safety management, we can anticipate the continuous advancement of logistics centers.


Review of 2015 Major Medical Decisions (2015년 주요 의료판결 분석)

  • Yoo, Hyun Jung;Lee, Dong Pil;Lee, Jung Sun;Jeong, Hye Seung;Park, Tae Shin
    • The Korean Society of Law and Medicine / v.17 no.1 / pp.299-346 / 2016
  • Various notable decisions were made in the medical field in 2015. In a case in which an inmate of a sanatorium was injured for reasons attributable to the sanatorium and the social welfare foundation operating the sanatorium requested treatment of the patient, the court set the standard for determining the parties to a medical contract. In a case in which the family of a patient who had been declared brain dead demanded the withdrawal of meaningless life-sustaining treatment but the hospital refused and continued the treatment, the court ruled on the fees chargeable for such treatment. Regarding the eye-brightening operation, which in February 2011 became the first procedure suspended by the Ministry of Health and Welfare because of uncertainty about its safety, the court did not accept that the operation itself was illegal, but it ordered compensation for the entire damage based on violation of the duty of explanation, namely the omission of an explanation that the procedure's cost-effectiveness was unproven because it was still at the clinical-testing stage. There were numerous cases in which courts actively acknowledged malpractice: in cases of paresis syndrome after back surgery, quite a few instances of malpractice during surgery were acknowledged, and in a nosocomial infection case, the hospital's negligence in causing the infection was acknowledged. There was also a decision that acknowledged malpractice by distinguishing the duty to install emergency equipment under the Emergency Medical Service Act from the duty to take emergency measures in emergency situations, and a decision that acknowledged a hospital's negligence for failing to take appropriate measures even though the disease involved was very rare.
In connection with the scope of compensation for damages, there were decisions that pursued substantive truth: one court applied a different labor-ability loss rate after a reappraisal of physical ability on appeal produced a lower rate than in the first trial, and another court acknowledged a lower labor-ability loss rate than the appraisal result in consideration of the patient's condition. Regarding whether there is a limit on fees chargeable after medical malpractice causes damage, the court rejected a hospital's claim for setoff, holding that if the hospital merely continued treatment to cure the patient or prevent the aggravation of disease, it cannot charge medical bills to the patient. Regarding the provision of the Medical Law that prohibits medical advertisements not reviewed in advance and punishes violations, a decision of unconstitutionality was issued, since the provision constitutes pre-censorship by an administrative agency because deliberative bodies such as the Korean Medical Association cannot but be regarded as administrative bodies. As to whether PRP treatment, which is commonly performed clinically, should be considered a legally determined uninsured treatment, the court made it clear that such status should not be decided by theoretical possibility or actual practice; rather, the treatment's medical safety and effectiveness must be acknowledged and it must be included in medical care or in legally determined uninsured treatment.
Moreover, a court acknowledged the illegality of the investigation method and process in administrative litigation over the suitability evaluation of a sanatorium, but denied liability for compensation and restitution of unjust enrichment by the Health Insurance Review & Assessment Service and the National Health Insurance Corporation, since the evaluating agents had not committed the violation intentionally or negligently. We hope there will be more decisions that come closer to substantive truth, through clear legal principles, on the various issues that will arise in the future.


The Role of the Soft Law for Space Debris Mitigation in International Law (국제법상 우주폐기물감축 연성법의 역할에 관한 연구)

  • Kim, Han-Taek
    • The Korean Journal of Air & Space Law and Policy / v.30 no.2 / pp.469-497 / 2015
  • In 2009 Iridium 33, a satellite owned by the American Iridium Communications Inc., and Kosmos-2251, a satellite owned by the Russian Space Forces, collided at a speed of 42,120 km/h at an altitude of 789 kilometers above the Taymyr Peninsula in Siberia. NASA estimated that the collision had created approximately 1,000 pieces of debris larger than 10 centimeters, in addition to many smaller ones. By July 2011, the U.S. Space Surveillance Network (SSN) had catalogued over 2,000 large debris fragments. On January 11, 2007, China conducted a test of its anti-satellite missile: a Chinese weather satellite, the FY-1C polar-orbit satellite, was destroyed by a missile launched on a multistage solid-fuel rocket. The test was unprecedented in the amount of debris it created: at least 2,317 pieces of trackable size (i.e., golf-ball size or larger) and an estimated 150,000 smaller particles were generated. As far as the space treaties (the 1967 Outer Space Treaty, the 1968 Rescue Agreement, the 1972 Liability Convention, the 1975 Registration Convention, and the 1979 Moon Agreement) are concerned, few provisions addressing the space environment and space debris can be found. In the early years of space exploration, dating back to the late 1950s, the focus of international law was on establishing a basic set of rules for the activities undertaken by various states in outer space. Consequently, environmental issues, including space debris, did not receive the priority they deserved when international space law was originally drafted. As shown by the 1978 "Cosmos 954 Incident" between Canada and the USSR, the two parties settled that dispute by a memorandum between the two nations, not under the space treaties to which they are parties. In 1994 the 66th conference of the International Law Association (ILA) adopted the "International Instrument on the Protection of the Environment from Damage Caused by Space Debris".
The Inter-Agency Space Debris Coordination Committee (IADC) issued space debris guidelines that became the basis of the UN Space Debris Mitigation Guidelines, approved by the Committee on the Peaceful Uses of Outer Space (COPUOS) at its 527th meeting. On December 21, 2007, the guidelines were endorsed by UNGA Resolution 62/217. The EU has proposed an "International Code of Conduct for Outer Space Activities" as a transparency and confidence-building measure. It was only in 2010 that the Scientific and Technical Subcommittee began considering the long-term sustainability of outer space as an agenda item. A Working Group on the Long-term Sustainability of Outer Space Activities was established, whose objectives include identifying areas of concern for the long-term sustainability of outer space activities, proposing measures that could enhance sustainability, and producing voluntary guidelines to reduce risks to long-term sustainability. Through this effort, "Guidelines on the Long-term Sustainability of Outer Space Activities" are under consideration. In the case of the "Declaration of Legal Principles Governing the Activities of States in the Exploration and Use of Outer Space", adopted by UNGA Resolution 1962 (XVIII) of December 13, 1963, the nine principles proclaimed in the Declaration, although all were later incorporated into the space treaties, could be regarded as customary international law binding all states, considering the passage of time and the opinio juris reflected in the responses of the world's states. Although soft law such as resolutions and guidelines is not binding law, some of its provisions have a fundamentally norm-creating character and may pass into customary international law.
On November 12, 1974, the UN General Assembly recalled, through Resolution 3232 (XXIX), "Review of the role of the International Court of Justice", that the development of international law may be reflected, inter alia, in the declarations and resolutions of the General Assembly, which may to that extent be taken into consideration in the judgments of the International Court of Justice. We expect that COPUOS, which gave birth to the five space treaties, will in the near future give us binding space debris mitigation measures to be implemented on the basis of the existing space debris mitigation soft law.

A Study on the Passengers liability of the Carrier on the Montreal Convention (몬트리올협약상의 항공여객운송인의 책임(Air Carrier's Liability for Passenger on Montreal Convention 1999))

  • Kim, Jong-Bok
    • The Korean Journal of Air & Space Law and Policy / v.23 no.2 / pp.31-66 / 2008
  • Until the Montreal Convention was established in 1999, the Warsaw System was the undisputedly accepted private international air law regime and played the major role in governing the carrier's liability in the international aviation transport industry. But the Warsaw System as a whole, though revised many times to meet the rapid development of the aviation transport industry, is complicated, tangled, and outdated. This thesis therefore aims to introduce the Montreal Convention by interpreting it as a new legal instrument on the air carrier's liability, especially liability toward passengers, and by analyzing all the issues relating to it. The Montreal Convention markedly changed the rules governing international carriage by air, modernizing and consolidating the old Warsaw System of private international air law instruments into one legal instrument. One of its most significant features is that it shifted the priority to protecting the interests of consumers, whereas the Warsaw Convention was originally intended to protect the fledgling international air transport business. Two major features the Montreal Convention adopts are the two-tier liability system and the fifth jurisdiction. In cases of death or bodily injury to passengers, the Convention introduces a two-tier liability system: the first tier imposes strict liability up to 100,000 SDR, irrespective of the carrier's fault; the second tier is based on a presumption of the carrier's fault and has no limit of liability. Regarding jurisdiction, the Convention expands upon the four jurisdictions in which the carrier could be sued by adding a fifth: a passenger can bring suit in the country of his or her permanent and principal residence, provided the carrier provides services for the carriage of passengers there, either with its own aircraft or through a commercial agreement.
Other features include advance payments, electronic ticketing, compulsory insurance, and regulation of contracting and actual carriers. As these major features show, the Convention heralds the single biggest change in international aviation liability, and there can be no doubt that it will prevail in the international aviation transport world in the future. Our government signed the Convention on September 20, 2007, and it came into effect domestically on December 29, 2007; it was thereby recognized that domestic carriers can adequately and independently manage the resulting change in liability risks. I would therefore suggest that our country's aviation industry, including the newly born low-cost carriers, prepare the domestic countermeasures necessary for the enforcement of the Convention.
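The two-tier rule described above can be sketched as a simple decision function. This is an illustrative simplification only: it uses the original 1999 cap of 100,000 SDR mentioned in the abstract and reduces the carrier's second-tier defences to a single flag, omitting the detailed conditions of the Convention's actual liability articles.

```python
TIER1_CAP_SDR = 100_000  # first-tier cap in Special Drawing Rights (1999 figure)

def carrier_liability(proved_damages_sdr: float, carrier_proves_no_fault: bool) -> float:
    """Recoverable amount for passenger death or bodily injury under the
    two-tier system (simplified illustration, not legal advice)."""
    if proved_damages_sdr <= TIER1_CAP_SDR:
        # first tier: strict liability irrespective of the carrier's fault
        return proved_damages_sdr
    if carrier_proves_no_fault:
        # second-tier claim defeated; the first-tier amount is still owed
        return TIER1_CAP_SDR
    # second tier: presumed fault of the carrier, no limit of liability
    return proved_damages_sdr

print(carrier_liability(250_000, carrier_proves_no_fault=False))  # 250000
```

The point the sketch captures is that fault only matters above the cap: below 100,000 SDR the carrier pays regardless, and above it the carrier pays in full unless it rebuts the presumption of fault.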


Image Watermarking for Copyright Protection of Images on Shopping Mall (쇼핑몰 이미지 저작권보호를 위한 영상 워터마킹)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.147-157 / 2013
  • With the advent of a digital environment that can be accessed anytime and anywhere through high-speed networks, the free distribution and use of digital content became possible. Ironically, this environment is giving rise to a variety of copyright infringements, and product images used in online shopping malls are pirated frequently. Whether shopping mall images are creative works is controversial. According to a Supreme Court decision of 2001, advertising pictures of ham products that simply reproduce the appearance of the objects to convey product information are not creative expression; nevertheless, the photographer's losses were recognized, and damages were estimated at the typical cost of an advertising photo shoot. According to a Seoul District Court precedent of 2003, if the photographer's personality and creativity are present in the selection of the subject, the composition of the set, the direction and amount of light, the camera angle, the shutter speed and shutter chance, other shooting methods, and the developing and printing process, the work should be protected by copyright law. For shopping mall images to receive copyright protection under the law, they must do more than simply convey the state of the product; effort is required so that the photographer's personality and creativity can be recognized. Accordingly, the cost of producing mall images increases, and the need for copyright protection grows. The product images of online shopping malls have a very particular configuration, unlike general pictures such as portraits and landscape photos, so general image watermarking techniques cannot satisfy their watermarking requirements.
Because the background of product images commonly used in shopping malls is white, black, or a gray-scale gradient, there is little space in which to embed a watermark, and that area is very sensitive to even slight changes. In this paper, the characteristics of images used in shopping malls are analyzed and a watermarking technology suitable for shopping mall images is proposed. The proposed technology divides a product image into small blocks, transforms the corresponding blocks by DCT (Discrete Cosine Transform), and then inserts the watermark information into the image using quantization of the DCT coefficients. Because uniform quantization of the DCT coefficients causes visible blocking artifacts, the proposed algorithm uses a weighted mask that quantizes finely the coefficients located at block boundaries and coarsely the coefficients located in the center area of each block. This mask improves the subjective visual quality as well as the objective quality of the images. In addition, to improve the safety of the algorithm, the blocks in which the watermark is embedded are selected randomly, and a turbo code is used to reduce the BER when extracting the watermark. The PSNR (Peak Signal-to-Noise Ratio) of shopping mall images watermarked by the proposed algorithm is 40.7~48.5 dB, and the BER (Bit Error Rate) after JPEG compression with QF = 70 is 0. This means the watermarked images are of high quality and the algorithm is robust to the JPEG compression generally used at online shopping malls. Also, for a 40% change in size and a 40-degree rotation, the BER is 0. In general, shopping malls use compressed images with a QF higher than 90. Because pirated images are replicated from the original image, the proposed algorithm can identify copyright infringement in most cases. As the experimental results show, the proposed algorithm is suitable for shopping mall images with simple backgrounds.
However, future study should be carried out to enhance the robustness of the proposed algorithm, because some robustness is lost after the masking process.
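The core embedding step, inserting a watermark bit by quantizing a block's DCT coefficients, can be sketched as follows. This is a minimal quantization-index-style sketch of the general idea, not the paper's algorithm: the weighted mask, random block selection, and turbo coding it describes are all omitted, and the coefficient position and quantization step are arbitrary choices for illustration.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    M = np.sqrt(2 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0, :] = 1 / np.sqrt(n)
    return M

def embed_bit(block, bit, step=12.0, coef=(2, 1)):
    """Embed one bit into a mid-frequency DCT coefficient by quantization."""
    D = dct_matrix(block.shape[0])
    C = D @ block @ D.T                # forward 2-D DCT
    q = np.round(C[coef] / step)       # quantized coefficient level
    if int(q) % 2 != bit:              # force the level's parity to carry the bit
        q += 1
    C[coef] = q * step
    return D.T @ C @ D                 # inverse 2-D DCT back to pixels

def extract_bit(block, step=12.0, coef=(2, 1)):
    """Recover the bit as the parity of the quantized coefficient level."""
    D = dct_matrix(block.shape[0])
    C = D @ block @ D.T
    return int(np.round(C[coef] / step)) % 2

# embed a bit into a random 8x8 block and read it back
rng = np.random.default_rng(0)
block = rng.uniform(0, 255, size=(8, 8))
marked = embed_bit(block, bit=1)
print(extract_bit(marked))  # 1
```

A larger `step` makes the bit survive stronger distortions such as JPEG requantization, at the cost of a larger pixel change, which is the robustness/quality trade-off the abstract's weighted mask is designed to manage.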

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration of smart devices are producing large amounts of data. Accordingly, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis are continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to those who demand the analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering, and data analysis technology is spreading; as a result, big data analysis is expected to be performed by the demanders of the analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, and much attention is focused on using text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept embracing various theories and techniques for text analysis. Among the many text mining techniques utilized for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents corresponding to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of the documents.
Traditional topic modeling is based on the distribution of key terms across the entire document collection. Thus, it is essential to analyze the entire collection at once to identify the topic of each document. This condition causes long processing times when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: processing time increases steeply with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method enables topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost, thanks to the ability to analyze documents in each location without combining the documents under analysis. However, despite its many advantages, this method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming that the global topics are the ideal answer, the difference between a local topic and a global topic needs to be measured. Because of these difficulties, this method has not been studied sufficiently compared with other approaches to topic modeling. In this paper, we propose a topic modeling approach that solves these two problems.
First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling of the entire collection, and we proposed a reasonable method for comparing the results of the two approaches.
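One simple way to realize the local-to-global topic mapping described above is to represent each topic as a term distribution and assign every local topic to its most similar global (or RGS) topic. The sketch below uses cosine similarity for this; the paper's actual mapping procedure may differ, and the four-term vocabulary and distributions are hypothetical toy data.

```python
import numpy as np

def map_local_to_global(local_topics, global_topics):
    """Assign each local topic (row = term distribution) to the most
    similar global topic by cosine similarity."""
    L = local_topics / np.linalg.norm(local_topics, axis=1, keepdims=True)
    G = global_topics / np.linalg.norm(global_topics, axis=1, keepdims=True)
    sim = L @ G.T                 # pairwise cosine similarity matrix
    return sim.argmax(axis=1)     # index of the best-matching global topic

# toy term distributions over a 4-term vocabulary (hypothetical)
global_t = np.array([[0.70, 0.20, 0.05, 0.05],
                     [0.05, 0.05, 0.20, 0.70]])
local_t = np.array([[0.60, 0.30, 0.05, 0.05],   # resembles global topic 0
                    [0.10, 0.00, 0.30, 0.60]])  # resembles global topic 1
print(map_local_to_global(local_t, global_t))  # [0 1]
```

Once each local topic is mapped this way, documents clustered under a local topic inherit a global topic label, which is what makes accuracy checks against full-collection topic modeling possible.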

The Framework of Research Network and Performance Evaluation on Personal Information Security: Social Network Analysis Perspective (개인정보보호 분야의 연구자 네트워크와 성과 평가 프레임워크: 소셜 네트워크 분석을 중심으로)

  • Kim, Minsu;Choi, Jaewon;Kim, Hyun Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.177-193
    • /
    • 2014
  • Over the past decade, there has been a rapid diffusion of electronic commerce and a rising number of interconnected networks, resulting in an escalation of security threats and privacy concerns. Electronic commerce has a built-in trade-off between the necessity of providing at least some personal information to consummate an online transaction and the risk of negative consequences from providing such information. More recently, frequent disclosures of private information have raised concerns about privacy and its impacts, motivating researchers in various fields to explore information privacy issues. Accordingly, the need for information privacy policies and technologies for collecting and storing data has increased, as has information privacy research in fields such as medicine, computer science, business, and statistics. Various information security incidents have made finding experts in the information security field an important issue, and objective measures for identifying such experts are required, as current practice is rather subjective. Based on social network analysis, this paper proposes a framework for evaluating the process of finding experts in the information security field. We collected data from the National Discovery for Science Leaders (NDSL) database, initially gathering about 2,000 papers covering the period between 2005 and 2013. Outliers and irrelevant papers were dropped, leaving 784 papers for testing the suggested hypotheses. The co-authorship network data (co-author relationships, publishers, affiliations, and so on) were analyzed using social network measures, including centrality and structural holes. The results of our model estimation are as follows. With the exception of Hypothesis 3, which deals with the relationship between eigenvector centrality and performance, all of our hypotheses were supported.
In line with our hypotheses, degree centrality (H1) had a positive influence on researchers' publishing performance (p<0.001), indicating that publishing performance increased as the degree of cooperation increased. Closeness centrality (H2) was also positively associated with publishing performance (p<0.001), suggesting that performance increased as the efficiency of information acquisition increased. This paper identified differences in publishing performance among researchers. The analysis can be used to identify core experts and evaluate their performance in the information privacy research field, and the co-authorship network can aid in understanding the deep relationships among researchers. In addition, by extracting characteristics of publishers and affiliations, this paper showed how social network measures can be used to find experts in the information privacy field. Social concern about securing the objectivity of experts has increased, because experts in this field frequently participate in political consultation and in business education support and evaluation. In terms of practical implications, this research suggests an objective framework for identifying experts in the information privacy field and is useful for those in charge of managing research human resources. This study has some limitations that suggest opportunities for future research. The small sample size makes it difficult to generalize findings on how information diffusion differs by media and proximity. Further studies could therefore use a larger sample and greater media diversity, and explore in more detail how information diffusion varies with media type and information proximity.
Moreover, previous network research has commonly assumed a causal relationship between the independent and dependent variables (Kadushin, 2012). In this study, degree centrality as an independent variable might have a causal relationship with performance as the dependent variable. However, in network analysis research, network indices can only be computed after the network relationships have formed. A year-by-year analysis could help mitigate this limitation.
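As an illustration of the centrality measures behind H1 and H2, the following sketch computes normalized degree centrality and closeness centrality on a small hypothetical co-authorship network. It is a from-scratch illustration of the standard definitions, not the study's actual analysis pipeline, and the researcher names are invented.

```python
from collections import deque

def degree_centrality(graph):
    """Normalized degree: fraction of the other nodes a researcher co-authors with."""
    n = len(graph)
    return {v: len(nbrs) / (n - 1) for v, nbrs in graph.items()}

def closeness_centrality(graph):
    """Closeness: (number of reachable nodes) / (sum of shortest-path distances)."""
    result = {}
    for src in graph:
        dist = {src: 0}
        q = deque([src])
        while q:                      # breadth-first search from src
            v = q.popleft()
            for w in graph[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total = sum(dist.values())
        result[src] = (len(dist) - 1) / total if total else 0.0
    return result

# Hypothetical co-authorship network: nodes are researchers, edges are joint papers
coauthors = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
}
print(degree_centrality(coauthors))     # A co-authors with everyone: centrality 1.0
print(closeness_centrality(coauthors))  # A also has the highest closeness
```

Researcher A, who collaborates with all others, scores highest on both measures, which is the pattern the hypotheses link to higher publishing performance.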

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows with 160 columns: 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This approach solves the data imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, and reflects differences in default risk that exist among ordinary companies. Because the model was trained only on corporate information that is available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. The model can therefore provide stable default risk assessment to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although predicting corporate default risk with machine learning has recently been studied actively, most studies make predictions with a single model, so model bias remains an issue. A stable and reliable valuation methodology is required for calculating default risk, given that a company's default risk information is very widely used in the market and sensitivity to differences in default risk is high. Strict standards are also required for the calculation method.
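The Merton-model calculation the abstract refers to can be sketched in a few lines. In the full model, asset value and asset volatility are backed out of market capitalization and equity volatility; in this simplified, hedged sketch they are taken as given inputs, and all the numbers are hypothetical.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_default_probability(asset_value, debt, mu, sigma, horizon=1.0):
    """Distance to default (DD) and default probability under the Merton model.

    asset_value: market value of the firm's assets
    debt:        face value of debt due at the horizon (the default barrier)
    mu, sigma:   expected asset return and asset volatility (annualized)
    """
    dd = (math.log(asset_value / debt) + (mu - 0.5 * sigma ** 2) * horizon) \
         / (sigma * math.sqrt(horizon))
    return dd, norm_cdf(-dd)   # default occurs when assets fall below debt

# Hypothetical firm: assets 150, debt 100, 5% drift, 25% asset volatility
dd, pd = merton_default_probability(150.0, 100.0, 0.05, 0.25)
print(f"distance to default = {dd:.2f}, default probability = {pd:.4f}")
```

Because the output is a continuous probability rather than a rare binary default event, every firm receives a graded risk score, which is how the approach sidesteps the class-imbalance problem described above.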
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This allows us to capture complex nonlinear relationships between default risk and various corporate information, and to maximize the advantages of machine learning-based default risk prediction models, which take less time to calculate. To produce forecasts from each sub-model for use as input data to the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs of forecasts were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, we used the nonparametric Wilcoxon rank-sum test to check whether the two forecasts in each pair differed significantly. The analysis showed that the stacking ensemble model's forecasts differed significantly from those of the MLP and CNN models.
In addition, this study provides a methodology by which existing credit rating agencies can apply machine learning-based default risk prediction, given that traditional credit rating models can also serve as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help in designing models that meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used to increase practical adoption by overcoming the limitations of existing machine learning-based models.
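The out-of-fold mechanics of stacking, with training data divided into seven pieces as the abstract describes, can be sketched as follows. The sub-models here are deliberately simple threshold classifiers and the meta-learner is an accuracy-weighted vote; these are hypothetical stand-ins for illustration, not the Random Forest, MLP, or CNN sub-models the study actually used.

```python
import random

def make_submodel(feature_idx):
    """A toy sub-model: thresholds one feature at the midpoint of the class means."""
    def fit(rows):
        pos = [r[0][feature_idx] for r in rows if r[1] == 1]
        neg = [r[0][feature_idx] for r in rows if r[1] == 0]
        thr = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        return lambda x: 1 if x[feature_idx] > thr else 0
    return fit

def stack(rows, fitters, n_folds=7, seed=0):
    """Train each sub-model on K-1 folds, score it on the held-out fold, and
    weight sub-models by out-of-fold accuracy (a minimal meta-learner)."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    folds = [rows[i::n_folds] for i in range(n_folds)]
    oof_hits = [0] * len(fitters)
    for k, hold in enumerate(folds):
        train = [r for j, f in enumerate(folds) if j != k for r in f]
        models = [fit(train) for fit in fitters]
        for x, y in hold:                       # out-of-fold evaluation
            for m_idx, m in enumerate(models):
                oof_hits[m_idx] += (m(x) == y)
    weights = [h / len(rows) for h in oof_hits]  # OOF accuracy per sub-model
    final_models = [fit(rows) for fit in fitters]  # refit on all training data
    def predict(x):
        score = sum(w * m(x) for w, m in zip(weights, final_models))
        return 1 if score > sum(weights) / 2 else 0
    return predict, weights

# Hypothetical firms: (debt_ratio, volatility) -> default label
data = [((0.8, 0.5), 1), ((0.7, 0.6), 1), ((0.9, 0.4), 1), ((0.75, 0.55), 1),
        ((0.2, 0.1), 0), ((0.3, 0.2), 0), ((0.25, 0.15), 0), ((0.1, 0.2), 0),
        ((0.85, 0.45), 1), ((0.15, 0.12), 0), ((0.78, 0.52), 1), ((0.22, 0.18), 0),
        ((0.88, 0.58), 1), ((0.28, 0.11), 0)]
predict, weights = stack(data, [make_submodel(0), make_submodel(1)])
print(predict((0.9, 0.6)), predict((0.1, 0.1)))  # high-risk firm -> 1, low-risk -> 0
```

The key design point is that the meta-learner is fitted only on out-of-fold forecasts, so it never sees a sub-model's predictions on data that sub-model was trained on; this is what guards the ensemble against inheriting each sub-model's bias.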

An Empirical Study on Motivation Factors and Reward Structure for User's Creative Contents Generation: Focusing on the Mediating Effect of Commitment (창의적인 UCC 제작에 영향을 미치는 동기 및 보상 체계에 대한 연구: 몰입에 매개 효과를 중심으로)

  • Kim, Jin-Woo;Yang, Seung-Hwa;Lim, Seong-Taek;Lee, In-Seong
    • Asia pacific journal of information systems
    • /
    • v.20 no.1
    • /
    • pp.141-170
    • /
    • 2010
  • User created content (UCC) is created and shared by ordinary users online. From the user's perspective, the increase in UCC has expanded the alternative means of communication, while from the business perspective UCC has formed an environment in which an abundance of new content can be produced. Despite this outward quantitative growth, however, many aspects of UCC do not meet general users' expectations in terms of quality, as can be observed in pirated and user-copied content. The purpose of this research is to investigate effective methods for fostering the production of creative user-generated content. This study proposes two core elements believed to enhance content creativity, namely reward and motivation, together with a mediating factor, user commitment, which is expected to bridge increased motivation and content creativity. From this perspective, the research takes an in-depth look at constructing the dimensions of reward and motivation in UCC services for creative content production, identified in three phases. First, three dimensions of rewards are proposed: the task dimension, the social dimension, and the organizational dimension. Task-dimension rewards relate to the inherent characteristics of a task, such as writing blog articles and posting photos; four concrete ways of providing task-related rewards in UCC environments are suggested in this study: skill variety, task significance, task identity, and autonomy. Social-dimension rewards relate to the connected relationships among users. The organizational dimension consists of monetary payoff and recognition from others. Second, two types of motivation are suggested to be affected by these diverse reward schemes: intrinsic motivation and extrinsic motivation.
Intrinsic motivation occurs when people create new UCC content for its own sake, whereas extrinsic motivation occurs when people create new content for other purposes, such as fame and money. Third, commitment is suggested to work as an important mediating variable between motivation and content creativity. We believe commitment is especially important in online environments because it has been found to exert a stronger impact on Internet users than other relevant factors do. Two types of commitment are suggested in this study: emotional commitment and continuity commitment. Finally, content creativity is proposed as the final dependent variable. We provide a systematic method to measure the creativity of UCC content based on prior studies of creativity measurement; the method includes expert evaluation of blog pages posted by Internet users. To test the theoretical model, 133 active blog users were recruited to participate in a group discussion as well as a survey. They were asked to fill out a questionnaire on their commitment, motivation, and rewards for creating UCC content. At the same time, their creativity was measured by independent experts using the Torrance Tests of Creative Thinking. Finally, two independent raters visited the participants' blog pages and evaluated their content creativity using the Creative Products Semantic Scale. All the data were compiled and analyzed through structural equation modeling. We first conducted a confirmatory factor analysis to validate the measurement model; the measures used in our study satisfied the requirements of reliability, convergent validity, and discriminant validity. Given that the measurement model was valid and reliable, we proceeded to a structural model analysis, which indicated that all the variables in our model had adequate explanatory power in terms of R-square values.
The study results identified several important reward schemes. First of all, skill variety, task significance, task identity, and autonomy were all found to have significant influences on the intrinsic motivation to create UCC content. The relationship with other users was found to have a strong influence on both intrinsic and extrinsic motivation, and the opportunity to gain recognition for their UCC work had a significant impact on users' extrinsic motivation. Contrary to our expectation, however, monetary compensation did not have a significant impact on extrinsic motivation. Commitment was also found to be an important mediating factor between motivation and content creativity in the UCC environment: a fully mediated model had the highest explanatory power compared with the no-mediation and partially mediated models. This paper ends with the implications of the study results. First, from the theoretical perspective, this study proposes and empirically validates commitment as an important mediating factor between motivation and content creativity, reflecting the characteristics of the online environment in which UCC creation occurs voluntarily. Second, from the practical perspective, this study proposes several concrete reward factors germane to the UCC environment and estimates their effectiveness for content creativity. In addition to quantitative results on the relative importance of the reward factors, this study proposes concrete ways to provide rewards in the UCC environment, based on focus group interview (FGI) data collected after participants finished answering the survey questions. Finally, from the methodological perspective, this study suggests and implements a way to measure UCC content creativity independently of the content generators' creativity, which can be used by future research on UCC creativity.
In sum, this study proposes and validates important reward features and their relations to motivation, commitment, and content creativity in the UCC environment, which are believed to be among the most important factors for the success of UCC and Web 2.0. As such, this study can provide a significant theoretical as well as practical basis for fostering creativity in UCC content.
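The mediation pattern the study validates (motivation influencing creativity through commitment) can be illustrated with a simple simulation. The study itself used structural equation modeling; as an illustrative stand-in only, the sketch below checks the classic mediation signature with ordinary least-squares regressions on synthetic data, so all variable names and coefficients are hypothetical.

```python
import random

def slope(xs, ys):
    """Simple OLS slope of y on x (centered)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

def two_predictor_slopes(xs, ms, ys):
    """OLS coefficients of y on x and m, via the centered normal equations."""
    mx, mm, my = (sum(v) / len(v) for v in (xs, ms, ys))
    cx = [x - mx for x in xs]
    cm = [m - mm for m in ms]
    cy = [y - my for y in ys]
    sxx = sum(a * a for a in cx)
    smm = sum(a * a for a in cm)
    sxm = sum(a * b for a, b in zip(cx, cm))
    sxy = sum(a * b for a, b in zip(cx, cy))
    smy = sum(a * b for a, b in zip(cm, cy))
    det = sxx * smm - sxm * sxm
    return (smm * sxy - sxm * smy) / det, (sxx * smy - sxm * sxy) / det

# Simulate a fully mediated chain: motivation -> commitment -> creativity
rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(2000)]           # motivation
m = [0.8 * xi + rng.gauss(0, 0.3) for xi in x]       # commitment
y = [0.7 * mi + rng.gauss(0, 0.3) for mi in m]       # creativity

total = slope(x, y)                      # total effect of motivation on creativity
direct, via_m = two_predictor_slopes(x, m, y)
print(f"total={total:.2f}, direct={direct:.2f}, through commitment={via_m:.2f}")
```

In a fully mediated model, motivation's direct coefficient collapses toward zero once commitment enters the regression while commitment's coefficient stays large, which mirrors the study's finding that the fully mediated model explained the data best.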