• Title/Summary/Keyword: Precision machine system

New Business Success using Strategic Innovation Strategy: Marine Engine Business and HEMAPT System of the Hyundai Heavy Industries Co. (신규사업성공과 전략적 기술혁신전략: 현대중공업의 엔진사업진출과 HEMAPT시스템 개발)

  • Kim, Wha Young
    • Journal of Service Research and Studies / v.6 no.2 / pp.23-35 / 2016
  • Firms should seek greater profits and corporate growth through new businesses. New businesses contribute to realizing a creative economy that creates good jobs, and they expand the company by securing new markets and creating new profits and growth. However, entering a new business is a risky management decision with a high failure rate, because it involves adapting to a new business environment and bearing the burden of new investments, along with uncertainty about success. Innovation strategies therefore play an important role in new business entry, through product innovation, process innovation, business model innovation, disruptive innovation, and strategic innovation, and a company can obtain large economic results by turning them into a successful business. It is essential that a firm's innovation strategy and IT development strategy are linked with its business strategy, and this strategic alignment is considered a critical success factor for new business success. Hyundai Heavy Industries (HHI) pursued the marine engine business for the development of Korea's precision machinery and shipbuilding industries; the company recognized the importance of interlinking its new business strategy, innovation strategy, and IT strategy, and pushed strategic alignment boldly. As a result, HHI won the competition against European and Japanese engine manufacturers and became the world's largest engine manufacturer. This study investigates and analyzes the case in which HHI succeeded in expanding into the marine engine business using a strategic innovation strategy, through the introduction of CNC machine tools and the development of the HEMAPT system.

Increasing Accuracy of Classifying Useful Reviews by Removing Neutral Terms (중립도 기반 선택적 단어 제거를 통한 유용 리뷰 분류 정확도 향상 방안)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.129-142 / 2016
  • Customer product reviews have become one of the important factors in purchase decision making. Customers believe that reviews written by others who have already experienced the product offer more reliable information than that provided by sellers. However, because there are so many products and reviews, the advantage of e-commerce can be overwhelmed by increasing search costs: reading all of the reviews to find out the pros and cons of a certain product can be exhausting. To help users find the most useful information about products without much difficulty, e-commerce companies provide various ways for customers to write and rate product reviews, and online stores have devised various ways to present useful customer reviews to potential customers. Different methods have been developed to classify and recommend useful reviews, primarily using feedback provided by customers about the helpfulness of reviews. Most shopping websites provide customer reviews and offer the following information: the average preference for a product, the number of customers who participated in preference voting, and the preference distribution. Most information on the helpfulness of product reviews is collected through a voting system. Amazon.com asks customers whether a review of a certain product is helpful, and it places the most helpful favorable review and the most helpful critical review at the top of the list of product reviews. Some companies also predict the usefulness of a review based on attributes including its length, its author, and the words used, publishing only reviews that are likely to be useful. Text mining approaches have been used to classify useful reviews in advance. To apply a text mining approach to all reviews of a product, we need to build a term-document matrix: all words are extracted from the reviews and a matrix is built with the number of occurrences of each term in each review. Since there are many reviews, the term-document matrix becomes very large, which makes it difficult to apply text mining algorithms. Researchers therefore delete sparse terms, since words that rarely appear have little effect on classification or prediction. The purpose of this study is to suggest a better way of building the term-document matrix by deleting useless terms for review classification. We propose a neutrality index to select the words to be deleted. Many words appear in both classes, useful and not useful, and these words have little or even a negative effect on classification performance. We define such words as neutral terms and delete the neutral terms that appear similarly in both classes. After deleting sparse words, we select the words to be deleted in terms of neutrality. We tested our approach with Amazon.com review data from five product categories: Cellphones & Accessories, Movies & TV, Automotive, CDs & Vinyl, and Clothing, Shoes & Jewelry. We used reviews that received more than four helpfulness votes, with a 60% ratio of useful votes among total votes as the threshold for classifying useful and not-useful reviews. We randomly selected 1,500 useful reviews and 1,500 not-useful reviews for each product category, then applied Information Gain and Support Vector Machine algorithms to classify the reviews and compared the classification performances in terms of precision, recall, and F-measure. Although the performances vary according to product category and data set, deleting terms by both sparsity and neutrality showed the best performance in terms of F-measure for the two classification algorithms. However, deleting terms by sparsity only showed the best recall for Information Gain, and using all terms showed the best precision for SVM. Thus, term-deletion methods and classification algorithms need to be chosen carefully based on the data set.
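To make the procedure concrete, the sketch below (not the authors' code) shows one way such a pipeline could look in Python with scikit-learn: terms are dropped first by sparsity (min_df) and then by an assumed neutrality proxy, the closeness of a term's document frequency in the useful and not-useful classes, before training a linear SVM. The paper's exact neutrality index and the Information Gain branch are not reproduced here.

```python
# Illustrative sketch (not the authors' code): build a term-document matrix,
# drop sparse terms, drop "neutral" terms whose document frequency is nearly
# the same in the useful and not-useful classes, then classify with a linear
# SVM. The neutrality proxy below is an assumption; the paper's exact
# neutrality index is not reproduced here.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import precision_recall_fscore_support

def classify_reviews(texts, labels, min_df=5, neutrality_cutoff=0.9):
    # Sparsity removal: ignore terms appearing in fewer than min_df reviews.
    X = CountVectorizer(min_df=min_df).fit_transform(texts)
    y = np.asarray(labels)                      # 1 = useful, 0 = not useful

    # Neutrality proxy: a term is neutral if its document frequency is nearly
    # identical in both classes, so it carries little class signal.
    df_pos = np.asarray((X[y == 1] > 0).mean(axis=0)).ravel()
    df_neg = np.asarray((X[y == 0] > 0).mean(axis=0)).ravel()
    neutrality = 1.0 - np.abs(df_pos - df_neg) / (df_pos + df_neg + 1e-12)
    X = X[:, neutrality < neutrality_cutoff]    # drop the most neutral terms

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LinearSVC().fit(X_tr, y_tr)
    return precision_recall_fscore_support(y_te, clf.predict(X_te), average="binary")[:3]
```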

Suggestion for Comprehensive Quality Assurance of Medical Linear Accelerator in Korea (국내 선형가속기의 포괄적인 품질관리체계에 대한 제언)

  • Choi, Sang Hyoun;Park, Dong-wook;Kim, Kum Bae;Kim, Dong Wook;Lee, Jaiki;Shin, Dong Oh
    • Progress in Medical Physics / v.26 no.4 / pp.294-303 / 2015
  • In 1994, the American Association of Physicists in Medicine (AAPM) published the Task Group 40 (TG-40) report, which includes recommendations for comprehensive quality assurance (QA) of medical linear accelerators, and in 2010 it published the TG-142 report, which extends the QA recommendations to procedures such as intensity-modulated radiotherapy (IMRT), stereotactic radiosurgery (SRS), and stereotactic body radiation therapy (SBRT). Recently, the Nuclear Safety and Security Commission (NSSC) published NSSC Notification No. 2015-005, "Technological standards for radiation safety of medical field." This notification requires each medical institution to establish and implement QA guidelines covering organization and responsibilities, devices, and the methods, frequencies, tolerances, and action levels for QA. Accordingly, every facility that uses treatment machines for patient care should establish QA items, frequencies, and tolerances appropriate to the techniques it uses, such as non-IMRT, IMRT, and SRS/SBRT, and perform the quality assurance. In Korea, however, there is a lack of guidelines and reports from the Korean Society of Medical Physicists (KSMP) that medical institutions can reference to establish a systematic QA program. This report therefore suggests a comprehensive quality assurance system, including an overall scheme of the QA system adapted to domestic conditions, based on the NSSC notification and the AAPM TG-142 report. We expect that the quality assurance system suggested for medical linear accelerators will also help in establishing QA systems for other high-precision radiation treatment machines.

Function of the Korean String Indexing System for the Subject Catalog (주제목록을 위한 한국용어열색인 시스템의 기능)

  • Yoon Kooho
    • Journal of the Korean Society for Library and Information Science / v.15 / pp.225-266 / 1988
  • Various theories and techniques for the subject catalog have been developed since Charles Ammi Cutter first tried to formulate rules for the construction of subject headings in 1876. However, they do not seem to be appropriate to the Korean language, because the syntax and semantics of Korean differ from those of English and other European languages. This study therefore attempts to develop a new Korean subject indexing system, the Korean String Indexing System (KOSIS), in order to increase the use of subject catalogs. For this purpose, the advantages and disadvantages of the classed subject catalog and the alphabetical subject catalog, which are the typical subject catalogs in libraries, are investigated, and the most notable subject indexing systems, in particular PRECIS developed by the British National Bibliography, are reviewed and analysed. KOSIS is a string indexing system based purely on the syntax and semantics of the Korean language, even though considerable principles of PRECIS are applied to it. The outlines of KOSIS are as follows: 1) KOSIS is based on the fundamentals of natural language and an ingenious conjunction of human indexing skills and computer capabilities. 2) KOSIS is a string indexing system based on the 'principle of context-dependency.' A string of terms organized according to this principle shows remarkable affinity with certain patterns of words in ordinary discourse; from that point onward, natural language rather than classificatory terms becomes the basic model for the indexing scheme. 3) KOSIS uses 24 role operators. One or more operators are allocated to the index string, which is organized manually by the indexer's intellectual work, in order to establish the most explicit syntactic relationships among index terms. 4) Traditionally, a single-line entry format is used, in which a subject heading or index entry is presented as a single sequence of words consisting of the entry terms plus, in some cases, an extra qualifying term or phrase. KOSIS instead employs a two-line entry format containing three basic positions for the production of index entries: the 'lead' serves as the user's access point, the 'display' contains those terms which are themselves context-dependent on the lead, and the 'qualifier' sets the lead term into its wider context. 5) Each KOSIS entry is co-extensive with the initial subject statement prepared by the indexer, since it displays all the subject specificities. Compound terms are always presented in their natural language order, and inverted headings are not produced; consequently, the precision ratio of information retrieval can be increased. 6) KOSIS uses 5 relational codes for the system of references among semantically related terms. Semantically related terms are handled by a different set of routines, leading to the production of 'see' and 'see also' references. 7) KOSIS was originally developed for a classified catalog system, which requires a subject index, that is, an index which 'translates' subjects expressed in natural language into the appropriate classification numbers. However, KOSIS can also be used for a dictionary catalog system. Accordingly, KOSIS strings can be manipulated to produce either appropriate subject indexes for a classified catalog system or acceptable subject headings for a dictionary catalog system. 8) KOSIS is able to maintain consistency of index entries and cross references by means of routine identification of the established index strings and reference system. For this purpose, an individual Subject Indicator Number and Reference Indicator Number is allocated to each new index string and each new index term, respectively. KOSIS can produce all the index entries, cross references, and authority cards by either manual or mechanical methods; detailed algorithms for the machine production of the various outputs are therefore provided for institutions with access to computer facilities.
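As a rough illustration of the two-line entry format described in point 4), the hypothetical sketch below rotates a context-dependent string so that each term takes the lead in turn, with wider-context terms as the qualifier and dependent terms as the display. Role operators, Subject/Reference Indicator Numbers, and the reference routines of KOSIS are omitted, and the example terms are invented.

```python
# Hypothetical illustration (not the original KOSIS software) of the two-line
# entry format: each term of a context-dependent string becomes the lead in
# turn, terms of wider context form the qualifier, and the terms dependent on
# the lead form the display. Role operators, SIN/RIN numbering, and 'see' /
# 'see also' reference generation are omitted.

def two_line_entries(string_terms):
    """string_terms is ordered from the widest context to the most specific term."""
    entries = []
    for i, lead in enumerate(string_terms):
        qualifier = ". ".join(reversed(string_terms[:i]))   # wider context, nearest first
        display = ". ".join(string_terms[i + 1:])           # terms context-dependent on the lead
        entries.append((lead, qualifier, display))
    return entries

# Example string (illustrative terms only)
for lead, qualifier, display in two_line_entries(
        ["Korea", "Libraries", "Subject catalogs", "Indexing"]):
    print(f"{lead}. {qualifier}" if qualifier else lead)    # first line: lead and qualifier
    print(f"    {display}")                                 # second line: display
```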

Elemental alteration of the surface of dental casting alloys induced by electro discharge machining (치과용 주조 합금의 방전가공에 따른 표면 성분 변화)

  • Jang, Yong-Chul;Lee, Myung-Kon
    • Journal of Technologic Dentistry / v.31 no.1 / pp.55-61 / 2009
  • Passive fit of meso-structures and superstructures is a predominant requirement for the longevity and clinical success of osseointegrated dental implants. However, a precise, passive fit has been unpredictable with conventional casting methods as well as with corrective techniques. As an alternative to conventional techniques, electro discharge machining (EDM) is an advanced method introduced to dental technology to improve the passive fit of implant prostheses; in this technique, material is removed by melting and vaporization in electric sparks. Despite the efficacy of EDM, the application of this technique induces severe surface morphological and elemental alterations due to the high temperatures developed during machining, which vary between 10,000 and 20,000°C. The aim of this study was to investigate the morphological and elemental alterations induced by the EDM process in cast dental gold alloys and a non-precious alloy used for the production of implant-supported prostheses. Conventional clinical dental casting alloys were used for the experimental specimen patterns, divided into three groups: a high-fineness gold alloy (Au 75%, HG group), a low-fineness gold alloy (Au 55%, LG group), and a non-precious metal alloy (Ni-Cr, NP group). UCLA-type plastic abutment patterns were invested with conventional investment material and cast in a centrifugal casting machine. The castings were sandblasted with 50 μm Al2O3. One casting specimen of each group was polished by conventional finishing (HGCON, LGCON, NPCON), and one specimen of each group was subjected to EDM in a system using Cu electrodes and kerosene as the dielectric fluid, for 10 min for the gold alloys and 20 min for the Ni-Cr alloy (HGEDM, LGEDM, NPEDM). The surface morphology and elemental composition of all specimens were studied with an energy dispersive X-ray spectrometer (EDS). The quantitative EDS results show that on the HGEDM and LGEDM specimens a significant increase in C and Cu concentrations was found after EDM finishing. A different result was documented for C on the NPEDM specimen, with a significant uptake of O after EDM finishing, whereas Al and Si showed significant decreases in their concentrations. EDS analysis thus showed a substantial uptake of C and Cu after the EDM procedure in the alloys studied. C uptake after the EDM process is a common finding and is attributed to decomposition of the dielectric fluid in the plasma column, probably due to the extremely high temperatures developed. The Cu uptake is readily explained by decomposition of the Cu electrodes, which is also a common finding after the EDM procedure. However, all the aforementioned mechanisms require further research. The clinical implication of these findings relates to the biological behavior and corrosion resistance of surfaces prepared by the EDM process.

Design of V-Band Waveguide Slot Sub-Array Antenna for Wireless Communication Back-haul (무선통신 백-홀용 V-밴드 도파관 슬롯 서브-배열 안테나의 설계)

  • Noh, Kwang-Hyun;Kang, Young-Jin
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.7 / pp.334-341 / 2016
  • In this paper, a waveguide aperture-coupled feed-structured antenna is studied for application to a wireless back-haul system capable of high-capacity, gigabit-per-second data rates. A 32×32 waveguide slot sub-array antenna with a corporate feed structure was designed and fabricated for use at 57 GHz to 66 GHz in the V-band. The antenna has a laminated construction consisting of a radiating part (outer groove, slot, and cavity), a coupling aperture, and a feed in each layer. The antenna was designed with HFSS, which is based on the 3D finite element method, machined from aluminum on a precision-controlled milling machine, and assembled after a silver-plating process. Measurements show that the return loss is less than -12 dB, VSWR < 2.0, and the bandwidth is wide, up to 16%. The overall first side-lobe level is less than -12.3 dB, and the 3 dB beam width is narrow, about 1.85°. The antenna gain is 38.5 dBi, giving a high efficiency exceeding 90%.
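For reference, the reported return loss and VSWR figures are mutually consistent under the standard relations between return loss, reflection coefficient magnitude, and VSWR; the short check below assumes those standard definitions and is not material from the paper.

```python
# Quick consistency check of the reported figures, using the standard relations
# between return loss (dB), reflection coefficient magnitude, and VSWR.
# These definitions are standard antenna practice, not code from the paper.
def vswr_from_return_loss(rl_db):
    gamma = 10 ** (rl_db / 20)        # |S11| as a linear magnitude
    return (1 + gamma) / (1 - gamma)

print(round(vswr_from_return_loss(-12.0), 2))   # ~1.67, consistent with VSWR < 2.0
```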

The Characteristic of Radiation Exposure for Radiologist with Applying Condition in Interventional Radiology in Cardiology (심장내과의 중재적 시술시 시술조건에 따른 방사선사의 방사선 노출 특성)

  • Park, Jeong-Kyu;Cho, Euy-Hyun
    • Journal of Digital Contents Society / v.13 no.3 / pp.421-429 / 2012
  • Recently, the number of interventional radiology procedures has increased with the expansion of procedures using medical radiation, and radiation exposure may differ by interventional radiologist, which increases the radiation dose received by radiation workers, patients, and radiologists. This study comparatively analyzed the characteristics of radiation exposure for five radiologists who performed interventional cardiology procedures on 303 patients in S university hospital in Gyeong-Buk from Nov. 1, 2011 to Jan. 31, 2011. The average exposure time across the five radiologists was 697.95 sec. The average cumulative DAP(exp) for patients was 52,730 mGy·cm² and the average total DAP for patients was 104,875.14 mGy·cm². The average number of frames was 855.52 for acquired images and 802.2 for exposure images; these differences were statistically significant (p<0.05). Exposure time, cumulative DAP(fluoro), cumulative DAP(exp), total DAP, acquired images, and exposure images were highly correlated in the X-ray exposure characteristics of the machine, except for cumulative DAP(exp) and acquired runs. Exposure time had a strong influence on the radiologist: longer exposure times led to higher radiation doses. Radiation dose in interventional procedures is related to the ability, experience, difficulty, and precision of the procedures, and the number of angiography runs and the exposure time are difficult for the radiologist to control. Therefore, a reasonable system for evaluating the real dose received by medical teams in interventional procedures is needed. We think that self-education and training are required to reduce the radiation dose of radiologists and radiation workers.

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.57-73 / 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data is multidimensional time series data, and when dealing with it we must consider both the characteristics of multidimensional data and the characteristics of time series data. For multidimensional data, the correlation between variables should be considered; existing methods such as probability-based, linear, and distance-based approaches degrade due to limitations known as the curse of dimensionality. In addition, time series data is usually preprocessed with sliding-window techniques and time series decomposition for autocorrelation analysis, but these techniques increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural network technology. Statistically based methods are difficult to apply when data is non-homogeneous and do not detect local outliers well. Regression-based methods learn a regression formula based on parametric statistics and detect abnormality by comparing the predicted value with the actual value; their performance deteriorates when the model is not solid or the data contains noise or outliers, and they are restricted in that training data containing noise or outliers should not be used. The autoencoder, an artificial neural network trained to reproduce its input as closely as possible at its output, has many advantages over existing probabilistic and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy a probability distribution or linearity assumption, and it can be trained without labeled data. However, it still has limitations in identifying local outliers in multidimensional data, and the dimensionality of the data is greatly increased by the preprocessing required for time series characteristics. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that enhances anomaly detection performance by considering local outliers and time series characteristics. First, we applied a Multimodal Autoencoder (MAE) to improve the identification of local outliers in multidimensional data. Multimodal models are commonly used to learn from different types of inputs, such as voice and images; the different modalities share the bottleneck of the autoencoder, which learns their correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of the time series effectively without increasing the dimensionality of the data. Conditional inputs usually use categorical variables, but in this study time was used as the condition in order to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance for 41 variables was checked for the proposed model and the comparison models; reconstruction quality differs by variable, and reconstruction works well for the memory, disk, and network modalities, whose loss values are small, in all three autoencoder models. The process modality did not show a significant difference across the three models, and the CPU modality showed the best performance with CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators the models ranked in the order CMAE, MAE, UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects almost all of the anomalies. The accuracy of the model was also improved, to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms the proposed model has an additional advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows add procedures that must be managed, and the resulting dimensional increase can slow down inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
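A minimal sketch of the CMAE idea as described above is given below: one encoder branch per modality, a shared bottleneck into which a time condition is injected, and one decoder head per modality. The layer sizes, the grouping of the 41 variables into modalities, and the sin/cos time encoding are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a conditional multimodal autoencoder (CMAE): one encoder
# branch per modality, a shared bottleneck with a time condition injected, and
# one decoder head per modality. Sizes, the grouping of the 41 variables, and
# the time encoding are illustrative assumptions, not the authors' settings.
from tensorflow.keras import layers, Model

# Illustrative split of the 41 monitored variables across four modalities.
MODAL_DIMS = {"cpu": 8, "memory": 8, "disk": 12, "network": 13}
COND_DIM = 2  # e.g. sin/cos encoding of the hour of day, used as the condition

def build_cmae(modal_dims=MODAL_DIMS, cond_dim=COND_DIM):
    inputs, encoded = [], []
    for name, dim in modal_dims.items():
        x_in = layers.Input(shape=(dim,), name=f"{name}_in")
        inputs.append(x_in)
        encoded.append(layers.Dense(16, activation="relu")(x_in))   # per-modality encoder

    cond_in = layers.Input(shape=(cond_dim,), name="time_condition")
    shared = layers.Dense(16, activation="relu")(layers.Concatenate()(encoded))
    bottleneck = layers.Dense(8, activation="relu")(
        layers.Concatenate()([shared, cond_in]))                    # condition enters here

    outputs = []
    for name, dim in modal_dims.items():
        h = layers.Dense(16, activation="relu")(bottleneck)         # per-modality decoder
        outputs.append(layers.Dense(dim, name=f"{name}_out")(h))

    model = Model(inputs=inputs + [cond_in], outputs=outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# At inference, the per-sample reconstruction error serves as the anomaly score;
# samples whose error exceeds a threshold chosen on validation data are flagged.
```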

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Mode (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable the representation of unstructured documents in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied from various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels, and it is one of the most active research areas in natural language processing and text mining. Real online reviews written by customers are easy to collect openly, and they affect business: in marketing, real-world information from customers is gathered on websites rather than through surveys, and depending on whether the posts are positive or negative, the customer response is reflected in sales. However, many reviews on a website are not always good and are difficult to identify. Earlier studies in this area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. However, a lack of accuracy is recognized, because sentiment calculations change according to the subject, paragraph, direction of the sentiment lexicon, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set. First, as comparative models, the text classification algorithms adopt popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to bag-of-words when processing a sentence in vector format, but it does not consider the sequential nature of the data. RNN handles order well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and accuracy to find the optimal combination, and we tried to understand how well the models work for sentiment analysis and why. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features of the text. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers and massively parallel processing; LSTM is not capable of highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can compensate for the CNN's inability to capture long-term dependencies. Furthermore, when an LSTM is placed after the CNN's pooling layer, the model has an end-to-end structure in which spatial and temporal features can be learned simultaneously. The combined CNN-LSTM model achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and it was more accurate than the other models. In addition, the word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM can compensate for the weaknesses of each model, and the end-to-end structure has the advantage of improving learning layer by layer. For these reasons, this study enhances the classification accuracy of movie reviews using the integrated CNN-LSTM model.
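A compact sketch of the integrated CNN-LSTM described above, using the Keras IMDB dataset, is shown below; the hyperparameters are illustrative assumptions and are not the settings that produced the reported 90.33% accuracy.

```python
# Compact sketch of the integrated CNN-LSTM described above, on the Keras IMDB
# dataset. Hyperparameters are illustrative assumptions, not the settings that
# produced the reported 90.33% accuracy.
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences

vocab_size, max_len = 20000, 400
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=vocab_size)
x_train = pad_sequences(x_train, maxlen=max_len)
x_test = pad_sequences(x_test, maxlen=max_len)

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, 128),          # word embeddings
    layers.Conv1D(64, 5, activation="relu"),    # local n-gram features (CNN part)
    layers.MaxPooling1D(4),                     # shorten the sequence
    layers.LSTM(64),                            # temporal dependencies over pooled features
    layers.Dense(1, activation="sigmoid"),      # positive / negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=64,
          validation_data=(x_test, y_test))
```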

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Seo, Jinny
    • Journal of Intelligence and Information Systems / v.27 no.4 / pp.73-95 / 2021
  • This study uses the Node2vec graph embedding method and LightGBM link prediction to explore undeveloped export candidate countries for Korea's food and beverage industry. Node2vec improves on the representation of structural equivalence in a network, which is known to be relatively weak in existing link prediction methods based on the number of common neighbors, and the method is known to perform well for both community detection and structural equivalence. The embedding vectors are obtained from random walks of constant length starting from arbitrarily designated nodes, so the resulting node sequences are easy to use as input to downstream models such as logistic regression, support vector machines, and random forests. Based on these features of the Node2vec graph embedding method, this study applied it to international trade information for the Korean food and beverage industry, aiming to contribute to extensive-margin diversification for Korea within the industry's global value chain. The optimal predictive model derived in this study recorded a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, showing excellent performance; this is superior to the logistic regression binary classifier set as the baseline model, which recorded a precision of 0.95, a recall of 0.73, and an F1 score of 0.83. In addition, the LightGBM-based optimal prediction model outperformed the link prediction model of a previous study, used here as the benchmarking model: the previous study's model recorded a recall of only 0.75, whereas the proposed model achieved a recall of 0.79. The difference in performance between the benchmarking model and this study's model comes from the model learning strategy. In this study, trades were grouped by trade value, and prediction models were trained differently for these groups. The specific strategies were: (1) randomly masking trades and training a model over all trades without any condition on trade value, (2) randomly masking a portion of the trades with an average or higher trade value and training the model, and (3) randomly masking some of the trades in the top 25% by trade value and training the model. The experiments confirmed that the model trained by randomly masking some of the trades with above-average trade value performed best and most stably. Additional investigation found that most of the potential export candidates for Korea derived through this model appeared appropriate. Taken together, this study demonstrates the practical utility of link prediction combining Node2vec and LightGBM, and useful implications could be derived for weight-update strategies that can improve link prediction while training the model. On the other hand, this study also has policy utility, because graph-embedding-based link prediction has rarely been applied to trade transactions in previous research. The results support a rapid response to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and we believe they have sufficient usefulness as a tool for policy decision-making.
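A condensed sketch of the pipeline described above is given below, using the open-source node2vec and lightgbm packages: the trade network is embedded with Node2vec, edge features are formed as the Hadamard product of the two node vectors, and a LightGBM classifier is trained on masked edges (positives) and sampled non-edges (negatives). The package choice, masking scheme, and parameter values are assumptions for illustration, not the authors' exact configuration.

```python
# Condensed sketch of the pipeline described above: embed the (masked) trade
# network with node2vec, form edge features as the Hadamard product of the two
# node vectors, and train a LightGBM classifier on held-out edges (positives)
# and sampled non-edges (negatives). Packages and parameters are assumptions.
import random
import numpy as np
import networkx as nx
from node2vec import Node2Vec              # pip install node2vec
from lightgbm import LGBMClassifier        # pip install lightgbm
from sklearn.metrics import precision_recall_fscore_support

def link_prediction(graph, test_fraction=0.2, dim=64):
    edges = list(graph.edges())
    random.shuffle(edges)
    n_test = int(len(edges) * test_fraction)
    test_pos, train_pos = edges[:n_test], edges[n_test:]

    train_graph = nx.Graph(train_pos)                         # mask the held-out edges
    test_pos = [(u, v) for u, v in test_pos
                if u in train_graph and v in train_graph]      # keep only embeddable pairs

    wv = Node2Vec(train_graph, dimensions=dim, walk_length=30,
                  num_walks=100, workers=2).fit(window=10, min_count=1).wv

    def edge_feature(u, v):
        return wv[str(u)] * wv[str(v)]                         # Hadamard product

    nodes = list(train_graph.nodes())
    def sample_negatives(k):
        negatives = set()
        while len(negatives) < k:
            u, v = random.sample(nodes, 2)
            if not graph.has_edge(u, v):
                negatives.add((u, v))
        return list(negatives)

    train_pairs = train_pos + sample_negatives(len(train_pos))
    test_pairs = test_pos + sample_negatives(len(test_pos))
    y_train = [1] * len(train_pos) + [0] * len(train_pos)
    y_test = [1] * len(test_pos) + [0] * len(test_pos)

    X_train = np.array([edge_feature(u, v) for u, v in train_pairs])
    X_test = np.array([edge_feature(u, v) for u, v in test_pairs])

    clf = LGBMClassifier(n_estimators=300).fit(X_train, y_train)
    p, r, f1, _ = precision_recall_fscore_support(
        y_test, clf.predict(X_test), average="binary")
    return p, r, f1
```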