• Title/Summary/Keyword: 현실 데이터 (real-world data)


Case Analysis of the Promotion Methodologies in the Smart Exhibition Environment (스마트 전시 환경에서 프로모션 적용 사례 및 분석)

  • Moon, Hyun Sil; Kim, Nam Hee; Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.171-183 / 2012
  • With the development of technology, the exhibition industry has received much attention from governments and companies as an important marketing activity, and exhibitors likewise treat exhibitions as a new marketing channel. However, the growing size of exhibitions, both in net square feet and in the number of visitors, naturally creates a competitive environment, so exhibitors have planned and implemented many promotion techniques to make their marketing effective. In particular, a smart exhibition environment, which allows real-time information to be provided to visitors, makes various kinds of promotion possible. Promotions that ignore visitors' diverse needs and preferences, however, lose their original purpose and function: indiscriminate promotions feel like spam to visitors and fail to achieve their goals. What is needed is an STP approach that segments visitors on appropriate evidence (Segmentation), selects target visitors (Targeting), and delivers suitable services to them (Positioning). Applying an STP strategy in the smart exhibition environment requires considering its characteristics. First, an exhibition is a market event of a specific duration, held at intervals, so exhibitors plan different events and promotions for each exhibition; when traditional STP strategies are adopted, the system must provide services from insufficient information about existing visitors while still guaranteeing performance. Second, cluster analysis, a standard data mining technique, can be adopted to segment visitors automatically. In the smart exhibition environment, visitor information is acquired in real time and services based on it must also be provided in real time, yet many clustering algorithms scale poorly to large databases and require domain knowledge to set their input parameters, so a suitable methodology must be selected and fitted to support real-time services. Finally, the data available in the smart exhibition environment should be exploited: useful data such as booth visit records and event participation records allow the STP strategy to rest on behavioral as well as demographic segmentation. In this study, we therefore analyze a case of a promotion methodology with which exhibitors can provide differentiated services to segmented visitors in the smart exhibition environment. First, considering the characteristics of the smart exhibition environment, we derive the evidence for segmentation and fit a clustering methodology for real-time services. Although many studies classify visitors, we adopt a segmentation methodology based on visitors' behavioral traits. Through direct observation, Veron and Levasseur classified visitors into four groups, likening their traits to animals (butterfly, fish, grasshopper, and ant). Because the variables of their classification, such as the number of visits and the average time per visit, can be estimated in the smart exhibition environment, it provides a theoretical and practical grounding for our system. Next, we construct a pilot system that automatically selects suitable visitors according to the objectives of a promotion and instantly sends promotion messages to them. Based on this segmentation, the system automatically selects the visitors suited to the characteristics of each promotion. We applied the system to a real exhibition environment and analyzed the resulting data. Classifying visitors into four types by their behavioral patterns in the exhibition, we offer insights for researchers who build smart exhibition environments and derive promotion strategies fitted to each cluster. First, ANT-type visitors show high response rates to all promotion messages except experience promotions: they are attracted by tangible benefits within the exhibition area and dislike promotions that require a long time. By contrast, GRASSHOPPER-type visitors respond highly only to experience promotions. Second, FISH-type visitors favor coupon and content promotions: although they do not examine booths in detail, they prefer to obtain further information such as brochures, so exhibitors who want to convey a lot of information in limited time should pay attention to this type. These promotion strategies are expected to give exhibitors insights when they plan and organize their activities and to improve their performance.
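
To make the segmentation step above concrete, the sketch below clusters hypothetical visitor logs on the two behavioral variables the abstract names, the number of booth visits and the average time per visit. The paper does not state which clustering algorithm it fitted for real-time use, so the choice of k-means with four clusters and the sample values are illustrative assumptions only.

```python
# Minimal sketch: clustering exhibition visitors on the two behavioral
# variables the abstract mentions (number of booth visits, average time
# per visit). k-means with k=4 is an assumption for illustration; the
# paper only states that a clustering methodology was fitted for
# real-time use, not which algorithm.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical visitor log: [number_of_booth_visits, avg_minutes_per_visit]
visits = np.array([
    [25, 1.5], [30, 1.2], [4, 12.0], [5, 10.5],
    [22, 8.0], [18, 9.5], [3, 1.0], [2, 1.5],
])

X = StandardScaler().fit_transform(visits)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

for visitor, label in zip(visits, labels):
    print(f"visits={visitor[0]:>4.0f}, avg_time={visitor[1]:>5.1f} -> cluster {label}")
```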

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok; Lee, Hyun Jun; Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming ever more important. Amid this flood of information, search systems increasingly try to reflect the user's intention rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is one of the fields where text data analysis is expected to be useful and promising, because new information is generated constantly and the earlier it is obtained, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the flow of information is vast and new information keeps emerging, but it faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and extracting high-quality triples is difficult. Second, producing labeled text data by hand becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates the result. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. The study makes three contributions: first, a practical and simple automatic knowledge extraction method that can be applied in practice; second, a simple problem definition that makes performance evaluation possible; and third, increased expressiveness of the knowledge, obtained by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports about 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the test set. Before constructing the model, all reports in the training set are classified by stock and their entities are extracted with the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized with one-hot encoding. Then, using the neural tensor network, one score function per stock is trained. When a new entity from the test set appears, its score is calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we check its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the test set. The presented model shows 69.3% hit accuracy on the test set of 2,526 reports, which is meaningfully high despite the constraints under which the research was conducted. Looking at prediction performance per stock, only three stocks, LG Electronics, KiaMtr, and Mando, perform far below average, possibly because of interference from similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of entities, needed to retrieve related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and fed to the neural tensor network, without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above, although limits remain: the markedly poor performance on a few stocks shows the need for further research. Finally, the empirical study confirms that the learning method presented here can be used to match new text information semantically with the related stocks.
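
The scoring step described above can be illustrated with a minimal neural tensor network layer in the Socher-style formulation, score(e1, e2) = u^T tanh(e1^T W[1:k] e2 + V[e1; e2] + b). The dimensions, the one-hot inputs, and the absence of a training loop are assumptions for illustration; the paper trains one such score function per stock, which is not reproduced here.

```python
# Minimal sketch of a neural tensor network (NTN) scoring layer, assuming
# the formulation score(e1, e2) = u^T tanh(e1^T W[1:k] e2 + V[e1; e2] + b).
# The abstract trains one score function per stock over one-hot entity
# vectors; the sizes and usage below are illustrative, not the authors'
# exact configuration.
import torch
import torch.nn as nn

class NTNScore(nn.Module):
    def __init__(self, dim: int, slices: int = 4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(slices, dim, dim) * 0.01)  # bilinear tensor slices
        self.V = nn.Linear(2 * dim, slices)                          # linear term over [e1; e2]
        self.u = nn.Linear(slices, 1, bias=False)                    # output weights

    def forward(self, e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        # e1, e2: (batch, dim) entity representations (here: one-hot vectors)
        bilinear = torch.einsum('bi,kij,bj->bk', e1, self.W, e2)
        linear = self.V(torch.cat([e1, e2], dim=-1))
        return self.u(torch.tanh(bilinear + linear)).squeeze(-1)

# Usage: score a hypothetical one-hot entity against one stock's representation.
dim = 100                                  # top-100 entities per stock, one-hot encoded
scorer = NTNScore(dim)
entity = torch.eye(dim)[3].unsqueeze(0)    # one-hot entity vector
stock = torch.eye(dim)[7].unsqueeze(0)     # one-hot "stock" vector
print(scorer(entity, stock).item())
```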

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae; Lee, Bomi; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently AlphaGo, the Baduk (Go) artificial intelligence program from Google DeepMind, won a landmark victory against Lee Sedol. Many people thought a machine could not beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what was predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core AI technique behind the AlphaGo algorithm, has drawn attention. Deep learning is already being applied to many problems. It performs especially well in image recognition, and it also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled to achieve good results. By contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we examined whether deep learning techniques can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared their performance with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared models using CNN, LSTM, and dropout, which are widely used in deep learning, with MLP models, the traditional artificial neural network. Because not all network design alternatives can be tested, the experiment used restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate how well the models classify the class of interest, rather than overall accuracy. The deep learning techniques were applied as follows. A CNN reads adjacent values around a specific value and extracts features, but business data fields are usually independent, so the distance between fields does not matter. In this experiment we therefore set the CNN filter size to the number of fields, so that the whole record is learned at once, and added a hidden layer so that decisions are made from the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of each field's position. For the dropout technique, neurons in each hidden layer were dropped with probability 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. Several findings emerged. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models, which is interesting because the CNN performed well in a binary classification setting to which it has rarely been applied, as well as in the fields where its effectiveness is already proven. Third, the LSTM appears unsuitable for these binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
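
A minimal sketch of the CNN-plus-dropout configuration described above, assuming a Keras-style model: a Conv1D filter spanning all input fields at once, an added hidden layer, and dropout with probability 0.5. The number of fields and the layer widths are illustrative, not the paper's exact architecture, and F1 would be computed from the precision and recall metrics after training.

```python
# Sketch of the CNN-with-dropout setup the abstract describes:
# a Conv1D whose kernel spans all input fields at once, followed by a
# hidden layer and dropout with rate 0.5. Layer sizes and the number of
# fields (here 16) are illustrative assumptions.
import tensorflow as tf

n_fields = 16  # hypothetical number of input variables

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_fields, 1)),                       # each record as a 1-D sequence
    tf.keras.layers.Conv1D(filters=32, kernel_size=n_fields,          # filter covers all fields at once
                           activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation='relu'),                     # additional hidden layer
    tf.keras.layers.Dropout(0.5),                                     # drop neurons with p=0.5
    tf.keras.layers.Dense(1, activation='sigmoid'),                   # binary target
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()
```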

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.139-157 / 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy; bankruptcy prediction is therefore an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression; later studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being used to enhance the accuracy of bankruptcy prediction. Ensemble classification combines multiple classifiers to obtain more accurate predictions than individual models, and ensemble learning is known to be very useful for improving the generalization ability of a classifier. To enhance that generalization ability, the base classifiers in an ensemble must be as accurate and diverse as possible. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and the random subspace method. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers: each ensemble member is trained on a randomly chosen feature subspace, and the members' predictions are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but very sensitive to changes in the feature space, which makes it a good base classifier for the random subspace method, and the KNN random subspace ensemble has been shown to improve substantially on an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for them play an important role in determining the performance of the KNN ensemble, yet few studies have focused on optimizing them. This study proposed a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers, using a genetic algorithm to optimize the ensemble and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset of Korean companies. The research data included 1,800 externally non-audited firms, 900 that filed for bankruptcy and 900 that did not. Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable against bankruptcy or non-bankruptcy as the output variable; of these, 24 were selected using backward feature selection with logistic regression. The complete dataset was separated into training and validation parts, and the training dataset was further divided into two portions, one for training the model and the other for guarding against overfitting; the prediction accuracy on the latter was used as the fitness value. The validation dataset was used to evaluate the effectiveness of the final model. Ten-fold cross-validation was implemented to compare the proposed model with other models: the classification accuracy of the proposed model was compared with that of the other models, and the Q-statistic values and average classification accuracies of the base classifiers were investigated. The experimental results showed that the proposed model outperformed the others, such as the single model and the random subspace ensemble model.
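
The ensemble the study optimizes can be sketched as follows: a KNN random subspace ensemble in which each base classifier sees a random feature subset, plus a fitness function of the kind a genetic algorithm would evaluate for one candidate (a k value and a feature mask). The synthetic data, the ensemble size, and the omission of the GA population/crossover/mutation loop are assumptions for illustration.

```python
# Minimal sketch of a KNN random subspace ensemble of the kind the paper
# optimizes: each base KNN is trained on a random feature subset and the
# predictions are aggregated by voting. The GA that jointly tunes each
# base classifier's k and feature subset is only hinted at by the fitness
# function; all sizes are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the 1,800-firm, 24-ratio dataset described above.
X, y = make_classification(n_samples=1800, n_features=24, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Random subspace KNN ensemble: no row bootstrapping, random feature subsets.
ensemble = BaggingClassifier(
    KNeighborsClassifier(n_neighbors=5),
    n_estimators=30,
    bootstrap=False,
    max_features=0.5,       # each base KNN sees a random half of the features
    random_state=0,
)
ensemble.fit(X_tr, y_tr)
print("validation accuracy:", ensemble.score(X_val, y_val))

# GA fitness for one candidate (k value + feature mask), as a chromosome
# might encode; accuracy on a held-out split guards against overfitting.
def fitness(k: int, feature_mask: np.ndarray) -> float:
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr[:, feature_mask], y_tr)
    return clf.score(X_val[:, feature_mask], y_val)

print("sample fitness:", fitness(7, np.arange(24) % 2 == 0))
```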

The Impact of Organizational Internal IT Capability on Agility and Performance: The Moderating Effect of Managerial IT Capability and Top Management Championship (기업 내적 IT 자원이 기업 민첩성과 성과에 미치는 영향: 관리적 IT 능력과 경영진 존재의 조절효과)

  • Kim, Geuna; Kim, Sanghyun
    • Information Systems Review / v.15 no.3 / pp.39-69 / 2013
  • The business value of information technology has been a major interest of practitioners and scholars alike for decades. Information technology is considered a driving force of, or success factor for, firm agility, and the general assumption is that organizations that invest heavily in IT are more agile than those that do not. However, IT that should support a firm's strategy can occasionally hinder its business or impede its agility; whether IT helps or hurts firm agility is therefore still unclear. Noting that these contrary effects of IT, promoting and hindering agility, have both been observed frequently, we theorize the relationships between them. Specifically, we argue that firms need to develop superior firm-wide IT capability in order to manage IT resources successfully and realize agility. This paper thus theorizes two IT capabilities, technical IT capability and managerial IT capability, as key factors affecting firm agility and firm performance, and operationalizes firm agility into two sub-types, operational adjustment agility and market capitalizing agility. Data from 171 firms were analyzed using the PLS approach. The results show that technical IT capability has a positive impact on firm agility and that managerial IT capability positively moderates the relationship between technical IT capability and firm agility. In addition, top management championship positively moderates the relationship between agility and firm performance. Finally, firm agility was found to be an important causal variable of firm performance. Our study provides more refined and practical empirical evidence on the relationship between IT capability and firm agility and proposes an applicable solution despite IT's contradictory effects on firm agility. Our findings offer useful implications for agility research, which is still at a relatively early stage, and for working-level officers in organizations.
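
The moderation effect reported above (managerial IT capability strengthening the link between technical IT capability and firm agility) can be illustrated with a simplified interaction-term regression on synthetic data. This is not the authors' PLS analysis; the variable names and effect sizes are assumptions used only to show how a positive interaction coefficient signals moderation.

```python
# Simplified illustration of a moderation effect: managerial IT capability
# moderating the link between technical IT capability and firm agility.
# OLS with an interaction term on synthetic data, not the paper's PLS-SEM.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 171                              # sample size reported in the abstract
tech = rng.normal(size=n)            # technical IT capability
mgr = rng.normal(size=n)             # managerial IT capability (moderator)
agility = 0.4 * tech + 0.2 * mgr + 0.3 * tech * mgr + rng.normal(scale=0.5, size=n)

df = pd.DataFrame({"tech": tech, "mgr": mgr, "agility": agility})
model = smf.ols("agility ~ tech * mgr", data=df).fit()   # tech, mgr, and tech:mgr
print(model.params)   # a positive tech:mgr coefficient indicates moderation
```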


The Study on the Priority of First Person Shooter game Elements using Delphi Methodology (FPS게임 구성요소의 중요도 분석방법에 관한 연구 1 -델파이기법을 이용한 독립요소의 계층설계와 검증을 중심으로-)

  • Bae, Hye-Jin; Kim, Suk-Tae
    • Archives of design research / v.20 no.3 s.71 / pp.61-72 / 2007
  • Since "Spacewar", the first game, produced at MIT in the 1960s, the gaming industry has expanded rapidly and grown to a large size in a short period of time. Newly launched games combine so many different elements within a single piece of content that games are often called the most comprehensive, ultimate fruit of design technologies. This also means a large increase in what must be considered when developing a game, complicating plans for budget, workforce, and schedule. An approach that analyzes the elements making up a game, computes the importance of each, and uses them to assess games to be developed is therefore key to successful game development. Such planning requires many decision-making activities, which involve several difficulties: the multi-factor problem; the uncertainty that prevents elements from being quantified; the complex multi-purpose problem, whose intended outcome confuses decision-makers; and the problem of determining the priority order across the multiple stages of the decision-making process. In this study we apply AHP (Analytic Hierarchy Process) so that these problems can be handled comprehensively and a logical, rational alternative can be proposed by quantifying the uncertain data. The analysis takes FPS (First Person Shooter) games, which currently dominate the gaming industry, as its subject. The most important considerations in an AHP analysis are to group the elements of the subject accurately and objectively, to arrange them hierarchically, and to analyze their importance through pair-wise comparison between elements. The study consists of two parts, analyzing the elements and computing their relative importance, and choosing an alternative; this paper focuses in particular on the Delphi-based objective element analysis and hierarchy of FPS games.
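
The core AHP computation the study relies on, deriving priority weights from a pair-wise comparison matrix and checking consistency, can be sketched as follows. The three FPS elements and their judgments are hypothetical; only the eigenvector method and Saaty's consistency ratio are standard.

```python
# Minimal AHP sketch: priority weights from a pair-wise comparison matrix
# via the principal eigenvector, plus Saaty's consistency ratio. The 3x3
# matrix of hypothetical FPS elements (gunplay, level design, sound) is
# illustrative only.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],     # gunplay vs. (gunplay, level design, sound)
    [1/3, 1.0, 2.0],     # level design
    [1/5, 1/2, 1.0],     # sound
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector

lambda_max = eigvals.real[k]
n = A.shape[0]
ci = (lambda_max - n) / (n - 1)               # consistency index
cr = ci / 0.58                                # Saaty's random index for n=3 is 0.58
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```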


Types of business model in the 4th industrial revolution (4차 산업혁명시대의 비즈니스 모델 유형)

  • Jung, Sang-hee; Chung, Byoung-gyu
    • Journal of Venture Innovation / v.1 no.1 / pp.1-14 / 2018
  • The 4th Industrial Revolution is bringing changes to companies as big as a tsunami. The CPS (cyber-physical system), emblematic of the digital age, builds on data accumulated in the physical domain and, through digital technology, enables businesses that could not have been imagined in the past. As a result, the business models of the 4th Industrial Revolution era differ from earlier ones. In this study, we analyze the trends and issues in business innovation theory research, compare the business innovation models of the digital age with those of the previous period, and on that basis search for business models suited to the 4th Industrial Revolution era. Existing business models have difficulty explaining the models of the digital era. Although more empirical research is needed, Michael Porter's diamond model proved the most suitable when applied to four cases of business models. Type A, sharing outcomes with customers, is a model in which payment varies according to customer performance. Type B, value chain digitalization, provides products and services to customers faster and at lower cost by digitalizing products, services, and the SCM. Type C, the digital platform, is the model with the biggest ripple effect: it secures profitability by creating new markets through a sharing economy built on a digital platform. Finally, Type D, sharing resources, builds competitive advantage by collaborating with partners in related industries, the most effective way for partners to complement one another's core competencies. Although numerous unicorn companies in the 4th Industrial Revolution era have achieved differentiated digital competitiveness with digital technologies in their respective industries, only a limited number could be covered here. Future research should identify the business models of the digital age through more specific empirical analysis and, because digital business models may differ by industry, should also conduct comparative analysis across industries.

Cost-benefit Analysis of Installing Crime Preventive CCTV: Focused on Theft and Assault (범죄예방용 CCTV설치의 비용편익분석: 절도와 폭력범죄를 중심으로)

  • Yun, Woo-Suk; Lee, Chang-Hun; Shim, Hee-Sub
    • Korean Security Journal / no.50 / pp.209-237 / 2017
  • Theories of 'opportunity for crime' have used CCTV as a crime prevention approach, and empirical studies showing the crime-prevention effects of CCTV have supported the expansion of CCTV installation. In Korea in particular, the number of installed CCTVs tripled from 2011 to 2015, and government CCTV policies have become one of the mainstream social-control strategies. Although a few empirical studies have shown decreases in crime rates due to CCTV installation, no study has conducted a benefit-cost (B/C) analysis of CCTV installation. B/C results would be useful for official decision-making in criminal justice policy, and this study aims to produce such fundamental evidence for the policy-making process. To this end, the study collected financial information and crime data for 2011-2015 nationwide from 232 local government district offices and the Korean National Police, and conducted two different B/C analyses: a simple B/C analysis and a regression-based B/C analysis. The simple B/C analysis showed that 1) the total cost of CCTV installation in 2014 was 68,626,000,000 won (approximately US$57,188,333 at an exchange rate of 1,200 won = US$1), 2) the benefit from crime reduction was 90,888,000,000 won (approx. US$75,740,000), and 3) the B/C ratio was 1.32. The regression-based B/C analysis showed that 1) the B/C ratio was 1.52 when only the reduced costs of criminal justice processing were counted, and 2) the B/C ratio was 3.62 when overall social costs were included, that is, the reduced costs of criminal justice processing plus social benefits such as lower costs of managing fear of crime resulting from the crime reduction. Based on these results, the study presents policy implications.
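
The simple B/C figure reported above follows directly from the cost and benefit totals; a quick arithmetic check:

```python
# Arithmetic check of the simple B/C ratio reported in the abstract:
# benefits from crime reduction divided by total installation costs.
costs_won = 68_626_000_000
benefits_won = 90_888_000_000

bc_ratio = benefits_won / costs_won
costs_usd = costs_won / 1200                 # exchange rate assumed in the abstract

print(f"B/C ratio: {bc_ratio:.2f}")          # ~1.32, matching the reported value
print(f"Costs in USD: {costs_usd:,.0f}")     # ~57,188,333
```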


A Study on Measures to Create Local Webtoon Ecosystem (지역웹툰 생태계 조성을 위한 방안 연구)

  • Choi, Sung-chun; Yoon, Ki-heon
    • Cartoon and Animation Studies / s.51 / pp.181-201 / 2018
  • The cartoon industry in Korea continued to decline with the contraction of the printed comics market and the decreasing number of comic book rental stores until the 2000s, when the emergence of webtoon brought rapid qualitative change and quantitative growth. The webtoon market, valued at 420 billion won in 2015, is expected to grow to 880.5 billion won by 2018. Notably, most cartoonists now draw with digital devices and produce their scripts as data, overcoming the geographical, spatial, and physical limitations of content and creating a favorable environment for local ecosystems. While regional human-resource infrastructure is growing steadily, cartoon businesses supported by government policy have performed well when combined with local creative infrastructure such as webtoon experience centers, webtoon campuses, and webtoon creation centers. Nevertheless, cartoon infrastructure remains concentrated in the capital area, producing an imbalanced industry structure: according to the statistics, 87% of offline cartoon businesses, excluding distribution, are located in Seoul and Gyeonggi Province, while online cartoon businesses located outside Seoul and Gyeonggi Province account for a mere 7.5%. Research on local webtoon is also inadequate; existing studies usually focus on its industrial and economic value and mention the word "local" only occasionally. This study therefore examines the current state of local webtoon and of local cartoon ecosystems, mid- and long-term government support, and future alternatives. The main challenges are expanding opportunities to enjoy cartoon culture, making cartoon infrastructure independent, and establishing regionally specialized cartoon cultures. For the cartoon ecosystem to take root in local areas, basic infrastructure must be utilized and linked, and independence and autonomy must be pursued beyond limited government support. Finally, webtoon should be recognized as a culture, which can give a new direction to the development of local webtoon, and models suited to each region, connecting webtoon with regional tourism, culture, and the arts, should be continuously researched so that the industry can soft-land in the regions. Local webtoon, a growth engine for the regions and a key content of the fourth industrial revolution, is expected to provide momentum for decentralization and regional reindustrialization.

Design of Comprehensive Security Vulnerability Analysis System through Efficient Inspection Method according to Necessity of Upgrading System Vulnerability (시스템 취약점 개선의 필요성에 따른 효율적인 점검 방법을 통한 종합 보안 취약성 분석 시스템 설계)

  • Min, So-Yeon; Jung, Chan-Suk; Lee, Kwang-Hyong; Cho, Eun-Sook; Yoon, Tae-Bok; You, Seung-Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.7 / pp.1-8 / 2017
  • As the IT environment becomes more sophisticated, various threats and the serious risks associated with them are increasing. Threats such as DDoS attacks, malware, worms, and APT attacks can pose very serious risks to enterprises and must be managed efficiently and in a timely manner. Accordingly, under the 'Information and Communication Infrastructure Protection Act', the government designates important systems as critical information and communication infrastructure in consideration of their impact on national security and the economy, protects that infrastructure from cyber infringement, and supervises its management through vulnerability analysis and evaluation, establishment and implementation of protection measures, and distribution of technical guides. Even now, security consulting proceeds on the basis of the 'Guidance for Evaluation of Technical Vulnerability Analysis of Major IT Infrastructure Facilities'. However, some inspection items among the applied items are neglected, and vulnerabilities to APT attacks, malicious code, and related risks remain insufficiently covered; to eliminate actual security risks, security managers arrange inspections and commission specialized companies. In other words, current system vulnerability checking methods make it difficult to check against up-to-date hacking techniques or vulnerabilities. In this paper, building on related work on security threats and requirements, we propose an efficient method for extracting diagnostic data on the need to upgrade system vulnerability checks, on check items that do not reflect recent trends, and on technical check cases for the latest intrusion techniques. Based on this, we survey the security vulnerability management systems and vulnerability lists of Korea and other countries, propose an effective security vulnerability management system, and suggest further study to improve overseas vulnerability diagnosis items so that they can be mapped to domestic vulnerability items.