• Title/Summary/Keyword: Technology Issue


An Empirical Study on the Determinants of Supply Chain Management Systems Success from Vendor's Perspective (참여자관점에서 공급사슬관리 시스템의 성공에 영향을 미치는 요인에 관한 실증연구)

  • Kang, Sung-Bae;Moon, Tae-Soo;Chung, Yoon
    • Asia pacific journal of information systems / v.20 no.3 / pp.139-166 / 2010
  • The supply chain management (SCM) systems have emerged as strong managerial tools for manufacturing firms in enhancing competitive strength. Despite large investments in SCM systems, many companies are not fully realizing the promised benefits from them. A review of the literature on the adoption, implementation, and success factors of IOS (inter-organizational systems) and EDI (electronic data interchange) systems shows that this issue has been examined from multiple theoretical perspectives, and many researchers have attempted to identify the factors that influence the success of system implementation. However, the existing studies have two drawbacks in revealing the determinants of implementation success. First, previous studies raise questions as to the appropriateness of the research subjects selected. Most SCM systems operate in the form of private industrial networks, where the participants consist of two distinct groups: focus companies and vendors. The focus companies are the primary actors in developing and operating the systems, while vendors are passive participants connected to the systems in order to supply raw materials and parts to the focus companies. Under these circumstances, there are three ways of selecting research subjects: focus companies only, vendors only, or both parties grouped together. It is hard to find studies that use focus companies exclusively as subjects, probably because the sample size would be insufficient for statistical analysis. Most studies have been conducted using data collected from both groups. We argue that the SCM success factors cannot be correctly identified in this case. The focus companies and the vendors differ in many areas relevant to system implementation: firm size, managerial resources, bargaining power, organizational maturity, and so on. There is no obvious reason to believe that the success factors of the two groups are identical. Combining the two groups also raises questions about measuring system success. The benefits from utilizing the systems may not be distributed equally between the two groups. One group's benefits might be realized at the expense of the other, considering that vendors participating in SCM systems are under continuous pressure from the focus companies with respect to prices, quality, and delivery time. Therefore, by combining the system outcomes of both groups we cannot correctly measure the benefits obtained by each group. Second, the measures of system success adopted in previous studies have shortcomings in measuring SCM success. User satisfaction, system utilization, and user attitudes toward the systems are the most commonly used success measures in the existing studies. These measures were developed as proxy variables in studies of decision support systems (DSS), where the contribution of the systems to organizational performance is very difficult to measure. Unlike DSS, SCM systems have more specific goals, such as cost saving, inventory reduction, quality improvement, shorter lead times, and higher customer service. We maintain that more specific measures can be developed in place of proxy variables in order to measure the system benefits correctly. The purpose of this study is to find the determinants of SCM systems success from the perspective of vendor companies.
In developing the research model, we focused on selecting the success factors appropriate for vendors through a review of past research and on developing more accurate success measures. The variables are classified into technological, organizational, and environmental factors on the basis of the TOE (Technology-Organization-Environment) framework. The model consists of three independent variables (competition intensity, top management support, and information system maturity), one mediating variable (collaboration), one moderating variable (government support), and a dependent variable (system success). The system success measures were developed to reflect the operational benefits of SCM systems: improvement in planning and analysis capabilities, faster throughput, cost reduction, task integration, and improved product and customer service. The model was validated using survey data collected from 122 vendors participating in SCM systems in Korea. Mediation is tested with a hierarchical regression analysis on collaboration, and the moderating effect of government support is examined with moderated multiple regression (a code sketch follows this abstract). The results show that information system maturity and top management support are the most important determinants of SCM system success. Supply chain technologies that standardize data formats and enhance information sharing may be adopted by the supply chain leader organization, given the influence of the focal company in private industrial networks, in order to streamline transactions and improve inter-organizational communication. In particular, the need to develop and sustain information system maturity provides the focus and purpose to successfully overcome information system obstacles and resistance to innovation diffusion within the supply chain network organization. The support of top management helps focus efforts toward the realization of inter-organizational benefits and lends credibility to the functional managers responsible for implementation. The active involvement, vision, and direction of high-level executives provide the impetus needed to sustain SCM implementation. The quality of collaboration relationships is also positively related to the outcome variable. Collaboration is found to mediate between the influencing factors and implementation success. Higher levels of inter-organizational collaboration behaviors, such as shared planning and flexibility in coordinating activities, were found to be strongly linked to vendors' trust in the supply chain network. Government support moderates the effects of IS maturity, competitive intensity, and top management support on collaboration and on SCM implementation success. In general, vendor companies face substantially greater risks in SCM implementation than larger companies do because of severe constraints on financial and human resources and limited education on SCM systems. Besides resources, vendors generally lack computer experience and do not have sufficient internal SCM expertise. For these reasons, government support may establish requirements for firms doing business with the government or provide incentives to adopt and implement SCM systems or practices. Government support provides significant improvements in SCM implementation success when IS maturity, competitive intensity, top management support, and collaboration are low.
The environmental characteristic of competition intensity has no direct effect on SCM system success from the vendor's perspective. However, vendors facing above-average competition intensity have a greater need for changing technology. This suggests that companies trying to implement SCM systems should set up compatible supply chain networks and a high-quality collaboration relationship for implementation and performance.
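The mediation and moderation tests described in this abstract can be illustrated with a minimal sketch. This is not the authors' code; the data file, column names, and survey coding below are hypothetical, and the sketch only shows the general shape of a hierarchical (Baron and Kenny style) mediation analysis and a moderated multiple regression with interaction terms.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per vendor with standardized scores.
df = pd.read_csv("vendor_survey.csv")  # columns assumed, not from the paper

# Step 1: antecedents -> mediator (collaboration).
m1 = smf.ols("collaboration ~ is_maturity + top_mgmt_support + competition", data=df).fit()

# Step 2: antecedents -> outcome without the mediator.
m2 = smf.ols("sys_success ~ is_maturity + top_mgmt_support + competition", data=df).fit()

# Step 3: antecedents plus mediator -> outcome; a drop in the antecedents'
# coefficients relative to m2 suggests (partial) mediation through collaboration.
m3 = smf.ols("sys_success ~ is_maturity + top_mgmt_support + competition + collaboration", data=df).fit()

# Moderated multiple regression: interactions with government support;
# significant interaction terms indicate moderation.
m4 = smf.ols("sys_success ~ (is_maturity + top_mgmt_support + competition + collaboration) * gov_support", data=df).fit()

print(m3.summary())
print(m4.summary())
```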

Analysis of Foodborne Pathogens in Food and Environmental Samples from Foodservice Establishments at Schools in Gyeonggi Province (경기지역 학교 단체급식소 식품 및 환경 중 식중독균 분석)

  • Oh, Tae Young;Baek, Seung-Youb;Koo, Minseon;Lee, Jong-Kyung;Kim, Seung Min;Park, Kyung-Min;Hwang, Daekeun;Kim, Hyun Jung
    • Journal of the Korean Society of Food Science and Nutrition / v.44 no.12 / pp.1895-1904 / 2015
  • Foodborne illness associated with food service establishments is an important food safety issue in Korea. In this study, foodborne pathogens (Bacillus cereus, Clostridium perfringens, Escherichia coli, pathogenic Escherichia coli, Listeria monocytogenes, Salmonella spp., Staphylococcus aureus, and Vibrio parahaemolyticus) and hygiene indicator organisms [total viable cell counts (TVC), coliforms] were analyzed in food and environmental samples from foodservice establishments at schools in Gyeonggi province. Virulence factors and antimicrobial resistance of the detected foodborne pathogens were also characterized. A total of 179 samples, including food (n=66), utensil (n=68), and environmental samples (n=45), were collected from eight food service establishments at schools in Gyeonggi province. Average contamination levels of TVC for foods (including raw materials) and environmental samples were 4.7 and 4.0 log CFU/g, respectively. Average contamination levels of coliforms were 2.7 and 4.0 log CFU/g for foods and environmental swab samples, respectively. B. cereus contamination was detected in food samples at an average of 2.1 log CFU/g. E. coli was detected only in raw materials, and S. aureus was positive in raw materials as well as environmental swab samples. The other foodborne pathogens were not detected in any samples. All B. cereus isolates possessed at least one of the diarrheal toxin genes (hblACD, nheABC, entFM, and the cytK enterotoxin gene); however, the ces gene encoding the emetic toxin was not detected in any B. cereus isolate. S. aureus isolates (n=16) contained at least one of the tested enterotoxin genes, except for the tst gene. For E. coli and S. aureus, 92.7% and 37.5% of the isolates were susceptible to the 16 and 19 antimicrobials tested, respectively. The analyzed microbial hazards could provide useful information for quantitative microbial risk assessment and for food safety management systems to control foodborne illness outbreaks in food service establishments.

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.19-43 / 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produces more value than "the vital few," has gained popularity in recent times with the tremendous reduction of distribution and inventory costs brought by the development of ICT (information and communication technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma arising from the fact that rational individuals are more likely to consume knowledge than to contribute it. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can matter as much as the absolute amount of individual knowledge sharing. In particular, whether a larger contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this kind of knowledge sharing distribution on the efficiency of knowledge collaboration and extends the analysis to reflect task characteristics. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, which are the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of contributions made by the upper 20 percent of participants to the total number of contributions made by all participants in an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality in knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables (Pareto ratio and Gini coefficient) with seven control variables, such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has (a code sketch follows this abstract). The dependent variable of the Cox models is the time from article initiation to promotion to the featured article level, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those that reference at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, the curvilinear effect of the Pareto ratio and of the inequality of knowledge sharing on collaboration efficiency is more pronounced for more academic tasks in an online community.
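A minimal sketch of the kind of analysis described above is given below. The input file, column names, and control variables are hypothetical stand-ins, not the authors' data or code; the sketch only illustrates computing a Pareto ratio and a Gini coefficient from per-editor contribution counts and fitting a Cox proportional hazards model with a quadratic term for the curvilinear effect.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def pareto_ratio(contributions):
    """Share of total contributions made by the top 20% of contributors."""
    x = np.sort(np.asarray(contributions, dtype=float))[::-1]  # descending
    top = max(1, int(np.ceil(0.2 * len(x))))
    return x[:top].sum() / x.sum()

def gini(contributions):
    """Gini coefficient of per-editor contribution counts (0 = equal, 1 = maximal inequality)."""
    x = np.sort(np.asarray(contributions, dtype=float))  # ascending
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

# Hypothetical article-level data: duration, event indicator, focal and control variables.
df = pd.read_csv("featured_articles.csv")
df["pareto_ratio_sq"] = df["pareto_ratio"] ** 2  # quadratic term for the curvilinear effect

cph = CoxPHFitter()
cph.fit(
    df[["days_to_promotion", "promoted", "pareto_ratio", "pareto_ratio_sq",
        "n_editors", "avg_edit_interval", "n_sections"]],
    duration_col="days_to_promotion",  # time from article initiation to promotion
    event_col="promoted",              # 1 if the article reached featured status
)
cph.print_summary()
```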

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration of smart devices are producing large amounts of data. Accordingly, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each demander of analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually being lowered and data analysis technology is spreading. As a result, big data analysis is expected to be performed by the demanders of analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing; in particular, much attention is focused on using text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document set. Thus, it is essential to analyze the entire set at once to identify the topic of each document. This makes the analysis process time-consuming when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeatedly applying topic modeling to each unit. This method can be used for topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost because documents can be analyzed in each location without first being combined. However, despite these advantages, the method has two major problems. First, the relationship between local topics derived from each unit and global topics derived from the entire document set is unclear; local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of such a methodology needs to be established; that is, assuming that the global topics are the ideal answer, the deviation of local topics from global topics needs to be measured.
Because of these difficulties, research on this approach remains limited compared with other topic modeling studies. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of representative documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We also verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. In addition, through a further experiment, we confirmed that the proposed methodology can provide results similar to those of topic modeling on the entire set, and we proposed a reasonable method for comparing the results of both methods.
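A minimal sketch of the divide-and-conquer idea is shown below. The toy corpus, the split into two local sets, and the topic counts are hypothetical, and for brevity the "global" model is fitted on all documents rather than on a reduced global set of representative documents as in the paper; the sketch only illustrates mapping each local topic to its closest global topic by cosine similarity of the topic-word distributions over a shared vocabulary.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for the 24,000 news articles.
documents = [
    "stock market price rises on strong corporate earnings",
    "central bank raises interest rate to fight inflation",
    "football team wins the championship final match",
    "star player injured before the league match",
    "new smartphone model features a faster processor",
    "chip maker announces its next generation processor",
] * 20

# A shared vocabulary keeps local and global topic-word vectors comparable.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

# Split the global set into local sets (here simply two halves).
half = X.shape[0] // 2
local_sets = [X[:half], X[half:]]

n_topics = 3
# In the paper, the global-level model is fitted on a reduced global set (RGS) of
# representative documents; for brevity this sketch fits it on all documents.
global_lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)

for i, local_X in enumerate(local_sets):
    local_lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(local_X)
    # Map each local topic to its most similar global topic.
    sim = cosine_similarity(local_lda.components_, global_lda.components_)
    mapping = sim.argmax(axis=1)
    print(f"local set {i}: local topic -> global topic {list(mapping)}")
```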

Legal Issues on the Collection and Utilization of Infectious Disease Data in the Infectious Disease Crisis (감염병 위기 상황에서 감염병 데이터의 수집 및 활용에 관한 법적 쟁점 -미국 감염병 데이터 수집 및 활용 절차를 참조 사례로 하여-)

  • Kim, Jae Sun
    • The Korean Society of Law and Medicine / v.23 no.4 / pp.29-74 / 2022
  • The rapid and unexpected spread of COVID-19 in 2020 constituted a social disaster under the Disaster Management Act, one that can damage the people's "life, body, and property." Information collected through the inspection and reporting of infectious disease pathogens (Article 11), epidemiological investigations (Article 18), and epidemiological investigations for vaccination (Article 29), together with artificial intelligence technology, was used as an important basis for decision-making in the infectious disease crisis, including prevention policy decisions such as promoting vaccination and understanding the current status of damage. In addition, medical policy decisions using infectious disease data contribute to quarantine policy decisions, information provision, drug development, and research and technology development, and interest in the legal scope and limitations of using infectious disease data has increased worldwide. The use of infectious disease data can be classified by purpose, namely blocking the spread of infectious diseases and the prevention, management, and treatment of infectious diseases, and such information is used more widely in the context of an infectious disease crisis. In particular, as the serious alert stage under the Disaster Management Act continues, the processing of personally identifiable information and sensitive information becomes an important issue, and how information on "medical records, vaccination drugs, vaccination, underlying diseases, health rankings, long-term care recognition grades, pregnancy, etc." should be treated needs to be interpreted. In the case of "prevention, management, and treatment of infectious diseases," it is difficult to clearly define the concept of medical practice. The types of actions are judged based on "legislative purposes, academic principles, expertise, and social norms," but the balancing of legal interests should be based on the need for data use in quarantine policies and on urgent judgment in public health crises. Specifically, the speed and degree of transmission of the infectious disease in a crisis, whether the purpose can be achieved without processing sensitive information, whether the processing unfairly violates the interests of third parties or data subjects, and the effectiveness of quarantine policies introduced through processing sensitive information can be used as major evaluation factors. On the other hand, the collection, provision, and use of infectious disease data for research purposes proceed through pseudonymization under the Personal Information Protection Act, consent under the Bioethics Act, and deliberation by the Institutional Bioethics Committee and the data provision deliberation committee. Therefore, use for research purposes is recognized as long as procedural validity is secured through pseudonymization, review by the data review committee, the consent of the data subject, and review by the institutional bioethics committee. However, the burden on research managers should be reduced by clarifying pseudonymization and anonymization procedures; the introduction and consent procedures of a comprehensive consent system and an opt-out system should be clearly prepared; and procedures for dealing with re-identification risks and for securing safety in the face of technological development should be clearly defined.

Methodology for Identifying Issues of User Reviews from the Perspective of Evaluation Criteria: Focus on a Hotel Information Site (사용자 리뷰의 평가기준 별 이슈 식별 방법론: 호텔 리뷰 사이트를 중심으로)

  • Byun, Sungho;Lee, Donghoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.23-43 / 2016
  • As a result of the growth of Internet data and the rapid development of Internet technology, "big data" analysis has gained prominence as a major approach for evaluating and mining enormous amounts of data for various purposes. In recent years especially, people tend to share their experiences related to their leisure activities while also reviewing others' input concerning those activities. Therefore, by referring to others' leisure activity-related experiences, they are able to gather information that might guarantee them better leisure activities in the future. This phenomenon appears throughout many kinds of leisure activities, such as movies, traveling, accommodation, and dining. Apart from blogs and social networking sites, many other websites provide a wealth of information related to leisure activities. Most of these websites provide information on each product in various formats, depending on different purposes and perspectives. Generally, most of the websites provide the average ratings and detailed reviews of users who actually used the products or services, and these ratings and reviews can support the purchase decisions of potential customers. However, the existing websites offering information on leisure activities provide ratings and reviews based on only a single level of evaluation criteria. Therefore, to identify the main issue for each evaluation criterion as well as the characteristics of the specific elements comprising each criterion, users have to read a large number of reviews. In particular, as most users search for the characteristics of the detailed elements of one or more specific evaluation criteria based on their priorities, they must spend a great deal of time and effort to obtain the desired information by reading more reviews and understanding their contents. Although some websites break down the evaluation criteria and direct users to input their reviews at different levels of the criteria, there are so many input sections that the whole process becomes inconvenient for users. Further, problems arise if a user does not follow the instructions for the input sections or fills in the wrong sections. Finally, treating such a breakdown of the evaluation criteria as a realistic alternative is difficult, because identifying all the detailed criteria for each evaluation criterion is a challenging task. For example, when writing a review of a certain hotel, people tend to write only single-level reviews covering components such as accessibility, rooms, services, or food. These reviews may touch on the most frequently asked questions, such as the distance to the nearest subway station or the condition of the bathroom, but they still lack detailed information on these questions. In addition, even if a breakdown of the evaluation criteria is provided along with separate input sections, a user might fill in only the accessibility criterion, or enter the wrong information, such as room-related content in the accessibility section. Thus, the reliability of the segmented review is greatly reduced. In this study, we propose an approach to overcome the limitations of the existing leisure activity information websites, namely, (1) the low reliability of reviews for each evaluation criterion and (2) the difficulty of identifying the detailed contents that make up each evaluation criterion.
In our proposed methodology, we first identify the review content and construct a lexicon for each evaluation criterion using the terms that are frequently used for that criterion. Next, the sentences in the review documents containing the terms in the constructed lexicon are decomposed into review units, which are then reconstructed according to the evaluation criteria. Finally, the issues of the review units constructed for each evaluation criterion are derived, and summary results are provided along with the review units themselves. This approach therefore aims to help users save time and effort, because they read only the relevant information they need for each evaluation criterion rather than going through the entire review text. Our proposed methodology is based on topic modeling, which is actively used in text analysis. The review is decomposed into sentence units rather than treating the whole review as a single document unit. After being decomposed into individual review units, the units are reorganized according to each evaluation criterion and then used in the subsequent analysis; in this respect, the work differs from existing topic modeling-based studies. In this paper, we collected 423 reviews from hotel information websites and decomposed them into 4,860 review units, which we then reorganized according to six evaluation criteria. By applying our methodology to these review units, we present the analysis results and demonstrate the utility of the proposed approach.
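The decomposition and reconstruction step described above can be illustrated with a minimal sketch. The criteria, lexicon terms, and sample reviews below are hypothetical and are not the lexicon built in the paper; the sketch only shows splitting reviews into sentence-level units and assigning each unit to the evaluation criteria whose lexicon terms it contains, after which per-criterion topic modeling could be applied.

```python
import re
from collections import defaultdict

# Hypothetical lexicon: frequently used terms per evaluation criterion.
criterion_lexicon = {
    "accessibility": {"subway", "station", "airport", "distance", "walk"},
    "room": {"room", "bed", "bathroom", "clean", "view"},
    "service": {"staff", "front desk", "check-in", "friendly"},
}

# Hypothetical hotel reviews.
reviews = [
    "The room was clean and the bed was comfortable. The subway station is a five minute walk away.",
    "Friendly staff at the front desk. The bathroom was a bit small but had a nice view.",
]

# Decompose reviews into sentence-level review units and reorganize them by criterion.
review_units = defaultdict(list)
for review in reviews:
    for sentence in re.split(r"(?<=[.!?])\s+", review.strip()):
        lowered = sentence.lower()
        for criterion, lexicon in criterion_lexicon.items():
            if any(term in lowered for term in lexicon):
                review_units[criterion].append(sentence)

for criterion, units in review_units.items():
    print(criterion, "->", units)
```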

The study of MDCT of Radiation dose in the department of Radiology of general hospitals in the local area (일 지역 종합병원 영상의학과 MDCT선량에 대한 연구)

  • Shin, Jung-Sub
    • Journal of the Korean Society of Radiology / v.6 no.4 / pp.281-290 / 2012
  • Differences in MDCT radiation dose due to different protocols between hospitals were analyzed using CTDI, DLP, the number of slices, and DLP per slice in 30 cases (10 each of head, abdomen, and chest) from MDCT examinations at the departments of diagnostic imaging of three general hospitals in Gyeongsangbuk-do. Differences in image quality, CTDI, DLP, eye dose, and thyroid dose were analyzed after both helical and normal (axial) head CT scans were performed, because the head CT protocol is relatively simple and head CT is the most frequent examination. For head CT, CTDI was significantly higher in two of the three hospitals than in hospital A, which did not exceed the CTDI diagnostic reference level (IAEA 50 mGy, Korea 60 mGy) (p<0.001). For DLP, one of the three hospitals exceeded the diagnostic reference levels of the IAEA (1,050 mGy·cm) and Korea (1,000 mGy·cm), two of the three exceeded the Korean recommendation, and these values were significantly higher than those of hospital A, which did not exceed the diagnostic reference level (p<0.001). For abdominal CT, one of the three hospitals showed 119 mGy, exceeding the diagnostic reference levels of the IAEA (25 mGy) and Korea (20 mGy), and DLP at all hospitals exceeded the Korean recommendation of 700 mGy·cm. Among the hospitals studied, hospital C showed high radiation doses in all examinations because of the low pitch and high tube current used to emphasize MPR and 3D reconstruction. To analyze dose differences by scan method, normal and helical head CT scans of the same patient were performed. CTDI and DLP of the helical scan were 63.4% and 93.7% higher than those of the normal scan (p<0.05, p<0.01). However, the thyroid dose of the normal scan was 87.26% higher (p<0.01). The helical CT beam is bell-shaped in its deep and marginal parts, so the thyroid, lying away from the central beam, received a lower dose. In addition, the helical scan used a gantry angle perpendicular to the orbitomeatal line while the normal scan used one parallel to it; therefore, the thyroid dose decreased in the helical scan. However, the protocols examined in this study showed higher radiation doses than the KFDA diagnostic reference levels; to comply with the KFDA recommendation, lower tube current and higher pitch are required. In this study, the difference in image quality between normal and helical scans was not significant. Therefore, a standardized normal-scan protocol should generally be used, and thyroid protective gear is needed except in special cases. We studied only a portion of CT cases in the local area, so the results cannot represent all cases. However, we confirmed that patient radiation dose in some cases exceeded the recommendations, and deviations between hospitals were observed. To improve this, diagnostic imaging physicians and radiologic technologists should perform CT with optimized protocols to reduce CT radiation levels and should disclose radiation doses to respect patients' right to know; however, they had little understanding of the situation. Therefore, efforts are needed both from relevant agencies, through education programs on CT radiation dose, disclosure of radiation doses from CT examinations, and inclusion of radiation dose control and CT disclosure in hospital service evaluation and certification, and from health professionals, through best-practice protocols, to realize optimized CT examinations.

Structural Adjustment of Domestic Firms in the Era of Market Liberalization (시장개방(市場開放)과 국내기업(國內企業)의 구조조정(構造調整))

  • Seong, So-mi
    • KDI Journal of Economic Policy / v.13 no.4 / pp.91-116 / 1991
  • Market liberalization progressing simultaneously with high and rapidly rising domestic wages has created an adverse business environment for domestic firms. Korean firms are losing international competitiveness to firms from LDCs (less developed countries) in low-tech industries. In high-tech industries, domestic firms without government protection (which is impossible due to the liberalization policy and the current international status of the Korean economy) are at a disadvantage relative to firms from advanced countries. This paper examines the division of roles between the private sector and the government in achieving a successful structural adjustment, which has become the pressing industrial policy issue created by high domestic wages on the one hand and the opening of domestic markets on the other. The micro foundation of economy-wide structural adjustment is the restructuring of business portfolios at the firm level. Firm-level business restructuring means that firms in low-value-added businesses or with declining market niches establish new major businesses in higher value-added segments or growing market niches. The adjustment of the business structure at the firm level can only be accomplished by accumulating the firm-specific managerial assets necessary to establish a new business structure. This can be done through learning-by-doing across the whole system of management, including research and development, manufacturing, and marketing. Therefore, voluntary cooperation among the people in the company is essential for keeping the cost of the learning process lower than that of competing companies. Hence, firms that attempt to restructure their major businesses need to induce corporate-wide participation through innovations in organization and management, encourage an innovative corporate culture, and maintain cooperative labor unions. Policy discussions on structural adjustment usually regard firms as a black box behind a few macro variables. But in reality, firm activities are not flows of materials but relationships among human resources. The growth potential of a company is embodied in its human resources; the balance of interests among the stockholders, managers, and workers of the company brings about the accumulation of the company's core competencies. Therefore, policymakers and economists should change their old concept of the firm as a technological black box that produces marketable commodities. Firms should be regarded as coalitions of interest groups such as stockholders, managers, and workers. Consequently, the discussion on structural adjustment, both at the macroeconomic level and at the firm level, should be based on this new paradigm of understanding firms. The government's role in reducing the cost of structural adjustment and supporting the creation of new industries should emphasize the following. First, the government must promote competition in domestic markets by revising laws related to antitrust policy, bankruptcy, and the promotion of small and medium-sized companies. A general consensus on the limitations of government intervention and the merits of deregulation should be sought among policymakers and people in the business world. In the age of internationalization, nation-specific competitive advantages cannot be exclusively in favor of domestic firms.
The international competitiveness of a domestic firm derives from the firm-specific core competencies that can be accumulated through the firm's internal investment and organization. Second, the government must build up a solid infrastructure of production factors, including capital, technology, manpower, and information. Structural adjustment often entails bankruptcies and a partial waste of resources. However, it is desirable for the government not to try to sustain marginal businesses, but to support the diversification or restructuring of businesses by assisting in factor creation. Institutional support for venture businesses needs to be improved, especially in the financing system, since many investment projects in venture businesses are highly risky even though they are very promising. The proportion of low-value-added production processes and declining industries should be reduced by promoting foreign direct investment and factory automation. Moreover, one cannot over-emphasize the importance of future-oriented labor policies based on the new paradigm of understanding firm activities. The old laws and institutions related to labor unions need to be reformed. Third, the government must improve the regimes related to money, banking, and the tax system in order to change business practices that depend on government protection or are undesirable in view of the evolution of the Korean economy as a whole. To prevent rational business decisions from contradicting the interest of the economy as a whole, the government should influence the business environment, not the business itself.


A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung;Kim, Kitae;Kim, Jongwoo;Park, Steve
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.93-108 / 2014
  • To support business decision making, interest and effort in analyzing and using transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing, but are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions are evolving into various patterns by taking advantage of information technology. To keep up with this evolution, there have been many efforts to develop fraud detection methods and advanced application systems that improve the accuracy and ease of fraud detection. As a case of fraud detection, this study aims to provide effective fraud detection methods for auction exception agricultural products in the largest Korean agricultural wholesale market. The auction exception products policy exists to complement auction-based trade in the agricultural wholesale market. That is, most trades of agricultural products are performed by auction; however, specific products are assigned as auction exception products when the total volume of the product is relatively small, the number of wholesalers is small, or wholesalers have difficulty purchasing the product. However, the auction exception products policy creates several problems regarding the fairness and transparency of transactions, which calls for fraud detection. In this study, to generate fraud detection rules, real large-scale agricultural trade transaction data from 2008 to 2010 in the market are analyzed, comprising more than 1 million transactions and over 1 billion US dollars in transaction volume. Agricultural transaction data have unique characteristics, such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first attempt to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so fraud detection rules are generated using an outlier detection approach. We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items. Quarterly average unit prices of product items for specific wholesalers are also used to identify outlier transactions. The reliability of the generated fraud detection rules is confirmed by domain experts. To determine whether a transaction is fraudulent, the normal distribution and the normalized Z-value concept are applied. That is, the unit price of a transaction is transformed into a Z-value to calculate its occurrence probability when the distribution of unit prices is approximated by a normal distribution. A modified Z-value of the unit price is used rather than the original Z-value, because in the case of auction exception agricultural products the number of wholesalers is small, so Z-values are influenced by the outlier fraud transactions themselves. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction that is being checked (a code sketch follows this abstract). To show the usefulness of the proposed approach, a prototype fraud transaction detection system was developed using Delphi. The system consists of five main menus and related submenus. The first functionality of the system is importing transaction databases; the next important functions set up fraud detection parameters.
By changing the fraud detection parameters, system users can control the number of potential fraud transactions. Execution functions provide the fraud detection results found under those parameters. The potential fraud transactions can be viewed on screen or exported as files. This study is an initial attempt to identify fraudulent transactions in auction exception agricultural products, and many research topics remain. First, the scope of the analysis data was limited by data availability; it is necessary to include more data on transactions, wholesalers, and producers to detect fraudulent transactions more accurately. Next, the scope of fraud transaction detection needs to be extended to fishery products. There are also many possibilities for applying different data mining techniques to fraud detection; for example, a time series approach is a potential technique to apply to the problem. Finally, although outlier transactions are detected here based on unit prices, it is also possible to derive fraud detection rules based on transaction volumes.
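A minimal sketch of the Self-Eliminated Z-score idea is given below. The prices and cutoff are hypothetical, and this is not the authors' implementation (their prototype system was written in Delphi); the sketch only illustrates computing a leave-one-out Z-score so that an extreme unit price does not inflate the mean and standard deviation used to judge it.

```python
import numpy as np

def self_eliminated_z(unit_prices):
    """Leave-one-out Z-score: each price is standardized against the other prices only."""
    prices = np.asarray(unit_prices, dtype=float)
    z = np.empty_like(prices)
    for i in range(len(prices)):
        rest = np.delete(prices, i)                      # exclude the transaction being checked
        z[i] = (prices[i] - rest.mean()) / rest.std(ddof=1)
    return z

# Hypothetical unit prices for one product item in a comparison period;
# the last transaction is clearly inflated.
prices = [1000, 1050, 980, 1020, 990, 5000]
z = self_eliminated_z(prices)
suspicious = [p for p, s in zip(prices, z) if abs(s) > 3.0]  # hypothetical cutoff
print(np.round(z, 2), suspicious)
```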

Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun;Choi, Joon Ki;Kim, Jae Seok;Jang, Ah Reum;Lee, Jae Ho;Cha, Kyung Jin;Lee, Sang Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.137-154 / 2018
  • Animal infectious diseases, such as avian influenza and foot-and-mouth disease, occur almost every year and cause huge economic and social damage to the country. To prevent this, the quarantine authorities have made various human and material efforts, but the diseases have continued to occur. Avian influenza was first described in 1878 and has risen to a national issue because of its high lethality. Foot-and-mouth disease is considered the most critical animal infectious disease internationally. In a nation free of the disease, foot-and-mouth disease is recognized as an economic or political disease, because it restricts international trade by complicating the import of processed and unprocessed livestock, and because quarantine is costly. In a society where the whole nation is connected as a single living zone, there is no way to fully prevent the spread of infectious disease. Hence, there is a need to detect the occurrence of the disease and to take action before it spreads. For both human and animal infectious diseases, an epidemiological investigation of confirmed cases is conducted as soon as a diagnosis is confirmed, and measures to prevent the spread of the disease are taken according to the investigation results. The foundation of an epidemiological investigation is figuring out where one has been and whom one has met. From a data perspective, this can be defined as predicting the cause of a disease outbreak, the outbreak location, and future infections by collecting and analyzing geographic and relational data. Recently, attempts have been made to develop infectious disease prediction models using Big Data and deep learning technology, but there is little active research in the form of model-building studies and case reports. KT and the Ministry of Science and ICT have been carrying out big data projects since 2014 as part of national R&D projects to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on regression analysis using vehicle movement data. After that, a more accurate prediction model was constructed using machine learning algorithms such as logistic regression, Lasso, support vector machine, and random forest (a code sketch follows this abstract). In particular, the prediction model for 2017 added the risk of diffusion to facilities, and model performance was improved by considering the hyperparameters of the modeling in various ways. The confusion matrix and ROC curve show that the model constructed in 2017 is superior to the earlier machine learning model. The difference between the 2016 model and the 2017 model is that the later model also uses visit information on facilities such as feed factories and slaughterhouses, and poultry information that had been limited to chickens and ducks but was expanded to geese and quail. In addition, in 2017 an explanation of the results was added to help the authorities make decisions and to establish a basis for persuading stakeholders. This study reports an animal infectious disease prevention system constructed on the basis of hazardous vehicle movement, farm, and environmental Big Data. The significance of this study is that it describes the evolution of a prediction model using Big Data that is actually used in the field; the model is expected to become more complete if virus types are taken into consideration.
This will contribute to data utilization and analysis model development in related fields. In addition, we expect that the system constructed in this study will provide more effective prevention.
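A minimal sketch of the kind of model comparison described above is shown below. The synthetic data stands in for the project's confidential vehicle-movement and farm data, and the feature meanings, class balance, and model settings are hypothetical; the sketch only illustrates comparing logistic regression, an L1-penalized ("Lasso") logistic regression, a support vector machine, and a random forest by ROC-AUC.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in: rows are farms, columns are aggregated risk features
# (e.g. visits by feed-factory or slaughterhouse vehicles), label = outbreak.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "lasso-logistic": LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
    "svm": SVC(probability=True),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:15s} ROC-AUC = {auc:.3f}")
```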