• Title/Summary/Keyword: 본네트 (bonnet)

Search Results: 13,387

A Study on the Implications of Korea Through the Policy Analysis of AI Start-up Companies in Major Countries (주요국 AI 창업기업 정책 분석을 통한 국내 시사점 연구)

  • Kim, Dong Jin;Lee, Seong Yeob
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.19 no.2 / pp.215-235 / 2024
  • As artificial intelligence (AI) technology is recognized as a key technology that will determine future national competitiveness, competition over AI technology and industry-promotion policies is intensifying among major countries. This study aims to draw implications for domestic policy making by analyzing major countries' policies on AI start-ups, which form the basis of the AI industry ecosystem. The top four countries and the EU by number of companies attracting new investment in the 2023 AI Index, published by the HAI Research Institute at Stanford University, were selected for analysis. The United States enacted the National AI Initiative Act (NAIIA) in 2021. Through this law, the US government promotes continued US leadership in AI R&D, the development of reliable AI systems in the public and private sectors, the building of an AI ecosystem across society, and stronger database management of, and access to, the AI policies conducted by all federal agencies. In its 14th Five-Year Plan (2021-2025) and 2035 Long-term Goals, announced in 2021, China designated AI as the first of seven strategic high-tech fields and is pursuing policies aimed at becoming the world's No. 1 AI power by 2030. The UK has invested in innovative R&D companies through the 'Future Fund Breakthrough' since 2021 and is expanding related investment through national strategies to become an AI leader, such as the 2022 implementation plan of its national AI strategy. Israel supports technology investment in start-ups through its Innovation Agency, which leads mid- to long-term investments of 2 to 15 years and regulatory reform for new technologies. The EU is strengthening its digital innovation hub network and has created InvestEU (the European Strategic Investment Fund) and an AI investment fund to support the use of AI by SMEs.
This study contributes to the analysis of major foreign countries' policies for making AI start-up policy and provides a basis for Korea's strategy search. Its limitations are the restricted set of countries analyzed and the absence of a comparative analysis of the countries' policy environments under identical conditions.


Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.55-79 / 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. To implement the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to generate the GA's initial population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid aspect of the GA, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in a reverse logistics network; of these, exactly one collection center, remanufacturing center, redistribution center, and secondary market should be opened. Some assumptions are made to implement the RLNCC effectively. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model minimizes the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost arises from transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market. That is, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively) and collection center 1 is opened while the others are closed, the fixed cost is 10.5.
The handling cost is the cost of treating the products returned from customers at the centers and secondary markets opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In a numerical experiment, the proposed HGA and a conventional competing approach are compared using various measures of performance. The conventional competitor is the GA approach of Yun (2013), which has no local search technique such as the IHCM used in the proposed HGA approach. As measures of performance, CPU time, optimal solution, and optimal setting are used. Two types of the RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the performance of the HGA and GA approaches. The MIP models for the two RLNCC types are programmed in Visual Basic 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM on Windows XP. The parameters used in the HGA and GA approaches are: 10,000 generations in total, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. A total of 20 runs are made to eliminate the randomness of the HGA and GA searches. With performance comparisons, network representations by opening/closing decisions, and convergence processes for the two RLNCC types, the experimental results show that the HGA performs significantly better than the GA in terms of the optimal solution, though the GA is slightly quicker in CPU time. Finally, the proposed HGA approach proves more efficient than the conventional GA approach on both RLNCC types, since the former combines a GA search process with an additional local search process, while the latter has a GA search process alone.
In a future study, much larger RLNCCs will be tested to verify the robustness of our approach.
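The bit-string encoding and the fixed-cost term described in the abstract can be sketched in a few lines of Python. This is a hedged illustration, not the paper's implementation: only the example opening costs (10.5, 12.1, 8.9) come from the abstract, and the helper names `fixed_cost` and `is_feasible` are invented.

```python
def fixed_cost(bits, opening_costs):
    """Sum the opening costs of facilities whose bit is 1."""
    return sum(c for b, c in zip(bits, opening_costs) if b == 1)

def is_feasible(bits):
    """The RLNCC requires exactly one facility opened per stage."""
    return sum(bits) == 1

collection_costs = [10.5, 12.1, 8.9]   # opening costs from the abstract's example
chromosome = [1, 0, 0]                 # open collection center 1 only
assert is_feasible(chromosome)
print(fixed_cost(chromosome, collection_costs))  # 10.5
```

In the full model, one such bit string per stage (collection, remanufacturing, redistribution, secondary market) would form the chromosome that the GA evolves.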

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets users' interests and needs from the flood of content is becoming ever more important. Efforts are therefore being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful and promising, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as finance, where the information flow is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data-processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, this study has three significances. First, it presents a practical and simple automatic knowledge extraction method that can be applied. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. Of 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the test set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named-entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, as many score functions as there are stocks are trained. Thus, when a new entity from the test set appears, we calculate its score with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the test set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on the test set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks (LG ELECTRONICS, KiaMtr, and Mando) perform far below average; this result may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named-entity recognition tool and applied to the neural tensor network, without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain: most notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
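A toy sketch of the pipeline above, with the trained neural tensor network replaced by a plain dot-product score function per stock. All entity names, stock names, and weights below are invented for illustration; only the structure (top-k entity vocabulary, one-hot inputs, one score function per stock, argmax prediction) follows the abstract.

```python
from collections import Counter

def top_entities(mentions, k):
    """Top-k entities by appearance frequency for one stock's reports."""
    return [e for e, _ in Counter(mentions).most_common(k)]

def one_hot(entity, vocab):
    return [1.0 if v == entity else 0.0 for v in vocab]

def score(vec, weights):
    # Stand-in for the trained neural tensor network's score function
    return sum(x * w for x, w in zip(vec, weights))

vocab = top_entities(["OLED", "battery", "OLED", "chip"], k=2)  # ['OLED', 'battery']
score_fns = {                     # one weight vector per stock (illustrative values)
    "StockA": [0.9, 0.1],
    "StockB": [0.2, 0.8],
}
vec = one_hot("OLED", vocab)
predicted = max(score_fns, key=lambda s: score(vec, score_fns[s]))
print(predicted)  # StockA
```

The hit ratio reported in the abstract would then be the fraction of test reports whose predicted stock matches the report's actual stock.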

Open Skies Policy : A Study on the Alliance Performance and International Competition of FFP (항공자유화정책상 상용고객우대제도의 제휴성과와 국제경쟁에 관한 연구)

  • Suh, Myung-Sun;Cho, Ju-Eun
    • The Korean Journal of Air & Space Law and Policy / v.25 no.2 / pp.139-162 / 2010
  • In international air transport, the open skies policy implies freedom of, or opening of, the sky. In normative terms, it is a kind of open-door policy that grants various forms of traffic rights to other countries; at the same time, it is a policy of free competition in international air transport. Since the Airline Deregulation Act of 1978, the United States has signed open skies agreements with many countries, starting with the Netherlands, so that large, competitive airlines can compete in the international air transport market, where many business opportunities exist. South Korea now has open skies agreements with more than 20 countries. The frequent flyer program (FFP) is part of a broad-based marketing alliance that has been used as an airfare strategy since US airline deregulation. This membership-based program is an incentive plan that awards mileage points to customers for using airline services and rewards customer loyalty in tangible forms based on accumulated points. In its early stages, the FFP focused on marketing efforts to attract customers, but in today's environment of intense competition among airlines, it is used as an important strategic marketing tool for enhancing business performance. Airline companies therefore agree that they need to identify customer needs in order to secure loyal customers more effectively. The outcomes of an airline's FFP can affect international competition in several ways. First, the airline can obtain a more dominant position in the air transport market by expanding its route networks. Second, the availability of flight products for customers improves as flight frequency increases. Third, the airline can expand into new markets ahead of its rivals and thus gain competitive advantages.
However, there are few empirical studies on airline frequent flyer programs. Accordingly, this study explores the effects of the program on international competition, after reviewing the types of strategic alliance between airlines. Forming strategic airline alliances is a worldwide trend resulting from the open skies policy, and South Korea needs to make its open skies agreements more practical to promote the growth and competitiveness of domestic airlines. The present study concerns the performance of the airline FFP and international competition under the open skies policy. With a sample of five global alliance groups (Star, Oneworld, Wings, Qualiflyer, and Skyteam), an empirical study was conducted on how the resource structures and levels of information technology held by the airlines in each group affect the type of alliance, and one-way analysis of variance and regression analysis were used to test the hypotheses. The findings suggest that both large airlines and small and medium-sized airlines in an alliance group with global networks and organizations can achieve high performance and secure international competitiveness. Airline passengers earn mileage points by using non-flight services through an alliance network of hotels, car-rental services, duty-free shops, travel agents, and more, and they show high interest in and preference for the related service benefits. Therefore, Korean airline companies should develop more aggressive marketing programs based on multilateral alliances with other services, including hotels, as well as with other airlines.


The Effect of Push, Pull, and Push-Pull Interactive Factors for Internationalization of Contract Foodservice Management Company (위탁급식업체 국제화를 위한 추진, 유인 및 상호작용 요인의 영향 분석)

  • Lee, Hyun-A;Han, Kyung-Soo
    • Journal of Nutrition and Health / v.42 no.4 / pp.386-396 / 2009
  • The purpose of this study was to analyze the effect of push, pull, and push-pull interactive factors on the internationalization of contract foodservice management companies (CFMCs). This was the quantitative part of a mixed-methods design (QUAL → quan) that was primarily qualitative. A mail survey was carried out for the quantitative study. As the population, 1,281 persons who had completed the 'Food Service Management Professional Program' of 'Y' University were selected, because the program was mainly for CFMC workers. The analysis methods were frequency analysis, factor analysis, correlation analysis, and multiple regression analysis with SPSS 17.0. The push factors were the saturation of the domestic market and the manager's purpose (fac. 1) and investment for internationalization (fac. 2). The pull factors were the company's external environment for internationalization (fac. 3) and the global network and spread of culture (fac. 4). The push-pull interactive factors were information about foreign markets (fac. 5), the procedure and budget of overseas expansion (fac. 6), and the national network and size of the domestic market (fac. 7). The internal dynamics factors were the deterrents to internationalization (fac. 8) and the enablers of internationalization (fac. 9). The results showed that the company's external environment among the pull factors had positive effects on the deterrents to internationalization. The global network and spread of culture had positive effects on the enablers of internationalization. Information about foreign markets among the push-pull interactive factors had positive effects on both the deterrents and the enablers, and the national network and size of the domestic market had positive effects on the enablers.
Both the deterrents and the enablers of internationalization had positive effects on the level of internationalization, and the deterrents had a larger effect than the enablers (β = .492 > .177).

Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.117-127 / 2012
  • With the recent expansion of Web 2.0-based services and the widespread adoption of smartphones, online social network services have become popular among users. Online social network services are online community services that enable users to communicate with each other, share information, and expand their human relationships. In social network services, the relations between users are represented by a graph consisting of nodes and links. As the number of users of online social network services increases rapidly, SNS are actively utilized in enterprise marketing, the analysis of social phenomena, and so on. Social network analysis (SNA) is the systematic way to analyze social relationships among the members of a social network using network theory. In general, a social network consists of nodes and arcs and is often depicted in a social network diagram, in which nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people, such as degree of intimacy, intensity of connection, and classification into groups. Ever since social networking services (SNS) drew the attention of millions of users, numerous studies have been conducted to analyze their user relationships and messages. The typical, representative SNA methods are degree centrality, betweenness centrality, and closeness centrality. Degree centrality analysis does not consider the shortest path between nodes, but the shortest path is a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In previous SNA research, computation was not too expensive because the social networks studied were small. Unfortunately, most SNA methods require significant time to process the relevant data, which makes it difficult to apply them to the ever-increasing SNS data in social network studies.
For instance, if the number of nodes in an online social network is n, the maximum number of links is n(n-1)/2; with 10,000 nodes the number of links is 49,995,000, so analyzing the network is very expensive. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Using this shortest-path-finding method, we show how efficient our proposed approach is by conducting betweenness centrality analysis and closeness centrality analysis, both widely used in social network studies. Moreover, we devised an enhanced method that adds a best-first-search step and a preprocessing step to reduce computation time and rapidly find shortest paths in a huge online social network. Best-first search finds the shortest path heuristically, generalizing human experience. Since a large number of links is shared by only a few nodes in online social networks, most nodes have relatively few connections, and a node with multiple connections functions as a hub. When searching for a particular node, looking first at users with numerous links, instead of searching all users indiscriminately, has a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is a set of vertices and E is a set of links between distinct nodes. With this heuristic evaluation function, the worst case occurs when the target node is situated at the bottom of a skewed tree; a preprocessing step is conducted to handle such target nodes. We then find the shortest path between two nodes in the social network efficiently and analyze the network. To verify the proposed method, we crawled data on 160,000 people online and constructed a social network.
We then compared the proposed method with previous methods, best-first search and breadth-first search, in time for searching and analyzing. The suggested method takes 240 seconds to search nodes, whereas the breadth-first-search-based method takes 1,781 seconds, making ours 7.4 times faster. Moreover, for social network analysis, the suggested method is 6.8 times faster for betweenness centrality analysis and 1.8 times faster for closeness centrality analysis. The proposed method thus shows the possibility of analyzing a large social network with better time performance. As a result, our method would improve the efficiency of social network analysis, making it particularly useful in studying social trends or phenomena.
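The degree-heuristic best-first search described above can be sketched as follows. The toy graph and function name are invented; unlike the paper's full method, this sketch omits the preprocessing step and returns a heuristically found path rather than a guaranteed shortest one.

```python
import heapq

def best_first_path(graph, start, goal):
    """Expand high-degree (hub) neighbors first; return a path or None."""
    frontier = [(-len(graph[start]), start, [start])]  # priority = -degree
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (-len(graph[nxt]), nxt, path + [nxt]))
    return None

# Toy hub-and-spoke graph (adjacency lists): 'hub' has the most links,
# so the search reaches it first, mirroring the hub-node argument above.
graph = {
    "a": ["hub"], "b": ["hub"], "c": ["hub"],
    "hub": ["a", "b", "c", "goal"],
    "goal": ["hub"],
}
print(best_first_path(graph, "a", "goal"))  # ['a', 'hub', 'goal']
```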

Major Class Recommendation System based on Deep learning using Network Analysis (네트워크 분석을 활용한 딥러닝 기반 전공과목 추천 시스템)

  • Lee, Jae Kyu;Park, Heesung;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.95-112 / 2021
  • In university education, the choice of major classes plays an important role in students' careers. However, in line with changes in industry, the fields of major subjects offered by departments are diversifying and growing in number. As a result, students have difficulty choosing and taking classes that fit their career paths. In general, students choose classes based on experience, such as the choices of peers or advice from seniors. This has the advantage of reflecting the general situation, but it does not reflect individual tendencies or knowledge of existing courses, and it leads to information inequality in which information is shared only among specific students. In addition, as classes have recently been held remotely and exchanges between students have decreased, even experience-based decisions have become harder to make. Therefore, this study proposes a recommendation system model that can recommend major classes suited to individual characteristics based on data rather than experience. A recommendation system recommends information and content (music, movies, books, images, etc.) that a specific user may be interested in. It is already widely used in services where individual tendencies matter, such as YouTube and Facebook, and is familiar from the personalized services of content providers such as over-the-top (OTT) media services. Taking classes is also a kind of content consumption, in that classes suitable for an individual are selected from a fixed content list. Unlike other content consumption, however, the consequences of the selection are large. For example, music and movies are usually consumed once, and the time required to consume them is short, so the importance of each item is relatively low and selection involves little deliberation.
Major classes, by contrast, have a long consumption time because they must be taken for a full semester, and each item has high importance and requires greater caution in choice, because the composition of the selected classes affects many things, such as career and graduation requirements. Given these unique characteristics, a recommendation system in the education field supports decision-making that reflects individual characteristics, which experience-based decisions cannot, even though the range of items is relatively small. This study aims to realize personalized education and enhance students' educational satisfaction by presenting a recommendation model for university major classes. In the model study, class history data of undergraduate students at University from 2015 to 2017 were used, with student and major names as metadata. The class history data are implicit feedback data, indicating only whether content was consumed, not preferences for classes; therefore, embedding vectors derived from them to characterize students and classes have low expressive power. With these issues in mind, this study proposes a Net-NeuMF model that generates vectors for students and classes through network analysis and uses them as the model's input values. The model is based on the structure of NeuMF with one-hot vectors, a representative model for data with implicit feedback. The input vectors of the model are generated to represent the characteristics of students and classes through network analysis. To generate a vector representing a student, each student is set as a node, and an edge with a weight connects two students if they took the same class. Similarly, to generate a vector representing a class, each class is set as a node, and an edge connects two classes if any student took both.
We then utilize Node2Vec, a representation-learning methodology that quantifies the characteristics of each node. To evaluate the model, we used four indicators commonly employed for recommendation systems, and experiments were conducted at three different dimensions to analyze the impact of the embedding dimension on the model. The results show better performance on the evaluation metrics, regardless of dimension, than the existing NeuMF structure with one-hot vectors. Thus, this work contributes a network of students (users) and classes (items) that increases expressiveness over one-hot embeddings, matches the characteristics of each structure constituting the model, and shows better performance on various evaluation metrics than existing methodologies.
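The network construction described above (students as nodes, edge weights counting shared classes) can be sketched as follows. The enrollment data and function name are invented; in the paper, the resulting weighted graph would then be fed to Node2Vec to produce the input vectors.

```python
from itertools import combinations
from collections import defaultdict

def student_graph(enrollments):
    """enrollments: {student: set of class ids} -> {(s1, s2): shared-class count}."""
    edges = defaultdict(int)
    for s1, s2 in combinations(sorted(enrollments), 2):
        shared = len(enrollments[s1] & enrollments[s2])
        if shared:  # connect only students with at least one class in common
            edges[(s1, s2)] = shared
    return dict(edges)

enrollments = {"s1": {"ML", "DB"}, "s2": {"ML", "OS"}, "s3": {"DB", "ML"}}
print(student_graph(enrollments))  # {('s1', 's2'): 1, ('s1', 's3'): 2, ('s2', 's3'): 1}
```

The class-side graph is built symmetrically: classes become nodes, connected when some student took both.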

Transcriptome Analysis of Longissimus Tissue in Fetal Growth Stages of Hanwoo (Korean Native Cattle) with Focus on Muscle Growth and Development (한우 태아기 6, 9개월령 등심 조직의 전사체 분석을 통한 근생성 및 지방생성 관여 유전자 발굴)

  • Jeong, Taejoon;Chung, Ki-Yong;Park, Woncheol;Son, Ju-Hwan;Park, Jong-Eun;Chai, Han-Ha;Kwon, Eung-Gi;Ahn, Jun-Sang;Park, Mi-Rim;Lee, Jiwoong;Lim, Dajeong
    • Journal of Life Science / v.30 no.1 / pp.45-57 / 2020
  • The prenatal period in livestock is crucial for meat production because the net increase in the number of muscle fibers is complete before birth. However, there has been no study on the mechanisms of muscle growth and development in Hanwoo during this period. Therefore, to find candidate genes involved in muscle growth and development in prenatal Hanwoo, mRNA expression data of the longissimus at 6 and 9 months post-conceptional age (MPA) were analyzed. We independently identified differentially expressed genes (DEGs) using DESeq2 and edgeR, two R software packages, and took the overlap of their results as the final DEG set for downstream analysis. The DEGs were classified into modules using WGCNA, and the modules' functions were analyzed to identify those involved in myogenesis and adipogenesis. Finally, the hub genes with the highest WGCNA module membership among the top 10% of genes by maximal clique centrality in the STRING network were identified. In total, 913 6-MPA-specific and 233 9-MPA-specific DEGs were identified and classified into five and two modules, respectively. Two of the identified modules (one 6-MPA-specific and one 9-MPA-specific) were found to be related to myogenesis and adipogenesis. One hub gene in the 6-MPA-specific module was axin 1 (AXIN1), a known inhibitor of the Wnt signaling pathway; another was succinate-CoA ligase ADP-forming beta subunit (SUCLA2), a crucial component of the citrate cycle.
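The overlap step described above (keeping only genes called by both DESeq2 and edgeR as final DEGs) is, in essence, a set intersection. The gene lists below are illustrative, reusing only AXIN1 and SUCLA2 from the abstract:

```python
# Invented example gene sets standing in for the two tools' DEG calls
deseq2_degs = {"AXIN1", "SUCLA2", "MYOD1"}
edger_degs = {"AXIN1", "SUCLA2", "PAX7"}

# Final DEGs = genes flagged by both tools
final_degs = deseq2_degs & edger_degs
print(sorted(final_degs))  # ['AXIN1', 'SUCLA2']
```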

Effect of microwave radiation on physical properties of normal, high-amylose, and waxy corn starches (마이크로웨이브를 조사한 옥수수전분의 물리적 특성변화)

  • Lee Su Jin;Choe Yeong Hui
    • Journal of Applied Tourism Food and Beverage Management and Research / v.15 no.1 / pp.113-125 / 2004
  • The effect of microwave radiation on the physico-chemical properties of corn starches was studied. Waxy corn, normal corn, and high-amylose corn starches of varying moisture content (20-35%) were subjected to microwave processing (2,450 MHz) at 120°C, and the starch samples were examined by X-ray diffractometry and a rapid viscosity analyzer (RVA). The X-ray peaks of high-amylose corn starch at 2θ = 5.0, 15.0, and 23.0° disappeared, indicating the melting of crystallites, while those of normal and waxy corn starches did not change. A change in gelatinization pattern, from type A to type C with nearly no peak viscosity or breakdown, was observed for normal corn starches. Except for a decreased viscosity, no change was observed in waxy corn starches.


Social Network Analysis for the Effective Adoption of Recommender Systems (추천시스템의 효과적 도입을 위한 소셜네트워크 분석)

  • Park, Jong-Hak;Cho, Yoon-Ho
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.305-316
    • /
    • 2011
  • A recommender system uses automated information filtering technology to recommend products or services to the customers likely to be interested in them. Such systems are widely used by many Web retailers such as Amazon.com, Netflix.com, and CDNow.com. Various recommender systems have been developed. Among them, Collaborative Filtering (CF) is known as the most successful and most commonly used approach. CF identifies customers whose tastes are similar to those of a given customer, and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. However, the relative performance of CF algorithms is known to be domain- and data-dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited for the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting in advance whether the performance of a CF recommender system will be acceptable is practically important. In this study, we propose a decision-making guideline that helps decide whether CF is adoptable for a given application with certain transaction data characteristics. Several previous studies reported that sparsity, gray sheep, cold-start, coverage, and serendipity could affect the performance of CF, but the theoretical and empirical justification of such factors is lacking. Recently, many studies have paid attention to Social Network Analysis (SNA) as a method to analyze social relationships among people. SNA measures and visualizes the linkage structure and status, focusing on interaction among objects within a communication group. CF analyzes the similarity among each customer's previous ratings or purchases, finds the relationships among customers who have similarities, and then uses those relationships for recommendations. 
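The CF idea described above (find similar customers, then recommend what they bought) can be sketched in a few lines. This is a minimal illustration of user-based CF with cosine similarity, not the algorithm used in the study; the tiny purchase matrix and customer names are invented.

```python
from math import sqrt

# Minimal user-based CF sketch: find the most similar other customer by
# cosine similarity over purchase vectors, then recommend items that
# customer bought and the target has not. Data below is illustrative.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Rows: customers; columns: items (1 = purchased).
purchases = {
    "alice": [1, 1, 0, 0],
    "bob":   [1, 1, 1, 0],
    "carol": [0, 0, 1, 1],
}

def recommend(target, data):
    """Recommend item indices bought by the most similar other customer."""
    others = [(cosine(data[target], v), name)
              for name, v in data.items() if name != target]
    _, best = max(others)
    return [i for i, (mine, theirs)
            in enumerate(zip(data[target], data[best])) if theirs and not mine]

print(recommend("alice", purchases))  # items alice's nearest neighbour bought
```

Real CF systems replace the single nearest neighbour with a weighted aggregation over a neighbourhood, but the similarity-then-recommend structure is the same.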
Thus CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. Under the assumption that SNA could facilitate an exploration of the topological properties of the network structure implicit in transaction data for CF recommendations, we focus on density, clustering coefficient, and centralization, which are among the most commonly used measures for capturing topological properties of a social network. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. We explore how these SNA measures affect CF performance and how they interact with each other. Our experiments used sales transaction data from H department store, one of the well-known department stores in Korea. A total of 396 data sets were sampled to construct various types of social networks. The independent variables were measured in three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used UCINET 6.0 for SNA. The experiments conducted a 3-way ANOVA employing the three SNA measures as independent variables and the recommendation accuracy, measured by the F1-measure, as the dependent variable. The experiments report that 1) each of the three SNA measures affects the recommendation accuracy, 2) the effect of density on the performance overrides those of clustering coefficient and centralization (i.e., CF adoption is not a good decision if the density is low), and 3) even when the density is low, the performance of CF is comparatively good if the clustering coefficient is low. 
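The two network measures defined above (density as the proportion of realized links, and the clustering coefficient as the linkedness of a node's neighbourhood) can be computed directly from an edge list. This is an illustrative sketch on a toy undirected network, not the UCINET computation used in the study; the edges are invented.

```python
from itertools import combinations

# Toy undirected customer network (edges are illustrative).
edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}
nodes = {n for e in edges for n in e}

def connected(x, y):
    return (x, y) in edges or (y, x) in edges

def density():
    """Actual links as a proportion of the maximum possible n*(n-1)/2 links."""
    n = len(nodes)
    return 2 * len(edges) / (n * (n - 1))

def clustering(node):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    nbrs = [m for m in nodes if m != node and connected(node, m)]
    pairs = list(combinations(nbrs, 2))
    if not pairs:
        return 0.0
    return sum(connected(x, y) for x, y in pairs) / len(pairs)

print(density())        # 4 edges out of 6 possible ≈ 0.667
print(clustering("c"))  # neighbours a, b, d; only the (a, b) pair is linked
```

Averaging `clustering` over all nodes gives the network-level clustering coefficient; centralization would additionally compare each node's degree against the maximum degree in the network.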
We expect that these experimental results will help firms decide whether a CF recommender system is adoptable for their business domain given certain transaction data characteristics.