• Title/Summary/Keyword: Word2Vec

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been a continuous demand in various fields for product-level market information. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and suitable figures. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, the data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data of the extracted products are summed to estimate the market size of each product group. As experimental data, product name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We performed parameter optimization and then applied a vector dimension of 300 and a window size of 15 for further experiments. We employed index words of the Korean Standard Industry Classification (KSIC) as the product name dataset to cluster product groups more efficiently. Product names similar to the KSIC index words were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted according to the purpose of information use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis report publishing by private firms. A limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module can be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure such as Jaccard similarity with Word2Vec. The product group clustering method can also be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, which we expect will further improve the performance of the basic model conceptually proposed here.
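
A minimal sketch of the bottom-up pipeline described in the abstract above, assuming gensim and pandas. The input file, column names, seed term, and similarity threshold are illustrative assumptions, not details taken from the paper; only the vector dimension (300) and window size (15) follow the reported parameters.

```python
# Hypothetical sketch: train Word2Vec on product names, group names by cosine
# similarity to a seed term (e.g., a KSIC index word), and sum their sales.
import pandas as pd
from gensim.models import Word2Vec

df = pd.read_csv("company_products.csv")              # hypothetical input: one row per product
tokenized = [name.split() for name in df["product_name"]]

# Embed product-name tokens with the paper's optimized parameters (dim 300, window 15).
model = Word2Vec(sentences=tokenized, vector_size=300, window=15, min_count=1)

def estimate_market_size(seed_term: str, threshold: float = 0.6) -> float:
    """Sum the sales of products whose names are similar to the seed term."""
    similar = {w for w, sim in model.wv.most_similar(seed_term, topn=200) if sim >= threshold}
    similar.add(seed_term)
    mask = df["product_name"].apply(lambda name: any(tok in similar for tok in name.split()))
    return df.loc[mask, "sales"].sum()

print(estimate_market_size("coffee"))                 # market size of one hypothetical product group
```

Adjusting the threshold broadens or narrows the product group, which is the knob the abstract describes for changing the level of the market category.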

An Efficient BotNet Detection Scheme Exploiting Word2Vec and Accelerated Hierarchical Density-based Clustering (Word2Vec과 가속화 계층적 밀집도 기반 클러스터링을 활용한 효율적 봇넷 탐지 기법)

  • Lee, Taeil;Kim, Kwanhyun;Lee, Jihyun;Lee, Suchul
    • Journal of Internet Computing and Services
    • /
    • v.20 no.6
    • /
    • pp.11-20
    • /
    • 2019
  • Numerous enterprises, organizations, and individual users are exposed to large DDoS (Distributed Denial of Service) attacks. DDoS attacks are performed through a botnet, which is composed of a number of computers infected with malware (i.e., zombie PCs) and a special computer that controls the zombie PCs within a hierarchical command-and-control chain. To detect malware, detection software or an anti-virus program must identify the malware signature through in-depth analysis, and these signatures need to be updated in advance, which is time consuming and costly. In this paper, we propose a botnet detection scheme based on an artificial neural network model that does not require periodic signature updates. The proposed scheme exploits Word2Vec and accelerated hierarchical density-based clustering. The botnet detection performance of the proposed method was evaluated using the CTU-13 dataset. The experimental results show a detection rate of 99.9%, which outperforms conventional methods.
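
A minimal sketch of the general approach, assuming gensim for Word2Vec and the hdbscan package for accelerated hierarchical density-based clustering; the flow tokens below are hypothetical and do not reproduce the paper's CTU-13 feature construction.

```python
# Hypothetical sketch: embed network-flow token sequences with Word2Vec, then
# cluster the averaged vectors with HDBSCAN (hierarchical density-based clustering).
import numpy as np
from gensim.models import Word2Vec
import hdbscan

flows = [
    ["tcp", "port_80", "small_payload", "periodic"],
    ["udp", "port_53", "burst", "irregular"],
    # ... one token sequence per observed flow
]

w2v = Word2Vec(sentences=flows, vector_size=64, window=3, min_count=1)

# Represent each flow as the mean of its token vectors.
flow_vecs = np.array([np.mean([w2v.wv[t] for t in f], axis=0) for f in flows])

clusterer = hdbscan.HDBSCAN(min_cluster_size=5)
labels = clusterer.fit_predict(flow_vecs)   # -1 marks noise; dense clusters are candidate botnet traffic
```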

News Article Analysis of the 4th Industrial Revolution and Advertising before and after COVID-19: Focusing on LDA and Word2vec (코로나 이전과 이후의 4차 산업혁명과 광고의 뉴스기사 분석 : LDA와 Word2vec을 중심으로)

  • Cha, Young-Ran
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.9
    • /
    • pp.149-163
    • /
    • 2021
  • The 4th industrial revolution refers to the next-generation industrial revolution led by information and communication technologies such as artificial intelligence (AI), the Internet of Things (IoT), robotics, drones, autonomous driving, and virtual reality (VR), and it has had a significant impact on the development of the advertising industry. However, the world has rapidly shifted to a non-contact, non-face-to-face living environment to prevent the spread of COVID-19, and the roles of the 4th industrial revolution and advertising are changing accordingly. Therefore, this study performed text analysis using Big Kinds to examine the 4th industrial revolution and changes in advertising before and after COVID-19, comparing 2019 (before COVID-19) with 2020 (after COVID-19). Main topics and documents were classified through LDA topic model analysis and Word2vec, a deep learning technique. The results showed that before COVID-19, topics such as policy, content, and AI appeared, whereas after COVID-19 the field gradually expanded to finance, advertising, and data-driven delivery services, and education emerged as an important issue. In addition, whereas the use of advertising related to 4th industrial revolution technology was mainstream before COVID-19, afterwards keywords such as participation, cooperation, and daily necessities were used more actively in connection with education on advanced technology, and talent cultivation appeared prominently. These results are meaningful in suggesting a multifaceted strategy that can be applied theoretically and practically, while indicating the future direction of advertising in the 4th industrial revolution after COVID-19.
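
The LDA step mentioned in this abstract could be run roughly as in the sketch below, assuming gensim; the toy documents, topic count, and preprocessing are placeholders rather than the paper's Big Kinds corpus.

```python
# Hypothetical sketch: LDA topic modeling over tokenized news articles with gensim.
from gensim import corpora
from gensim.models import LdaModel

docs = [
    ["advertising", "ai", "policy", "content"],
    ["covid", "delivery", "finance", "data", "advertising"],
    # ... one token list per news article
]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, passes=10, random_state=42)
for topic_id, words in lda.print_topics(num_topics=5, num_words=5):
    print(topic_id, words)   # top words per topic, e.g. to compare pre- and post-COVID-19 corpora
```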

LDA Topic Modeling and Recommendation of Similar Patent Document Using Word2vec (LDA 토픽 모델링과 Word2vec을 활용한 유사 특허문서 추천연구)

  • Apgil Lee;Keunho Choi;Gunwoo Kim
    • Information Systems Review
    • /
    • v.22 no.1
    • /
    • pp.17-31
    • /
    • 2020
  • With the start of the fourth industrial revolution era, technologies from various fields are being merged and new types of technologies and products are being developed. In addition, the importance of registering intellectual property rights and patents to gain market dominance is increasing overseas as well as domestically. Accordingly, the number of patents to be processed per examiner is increasing every year, and the time and cost of prior art research are increasing as well. Therefore, a number of studies have been carried out to reduce examination time and cost for patent-pending technologies. This paper proposes a method to calculate the degree of similarity among patent documents sharing the same priority claim when multiple priority claims are filed, and to provide the results to the examiner and the patent applicant. To this end, we preprocessed the data of the existing unstructured patent documents, used Word2vec to obtain similarities between patent documents, and then proposed a recommendation model that recommends similar patent documents in descending order of similarity score. This allows the examiner to promptly refer to the examination history of patent documents judged to be similar at the time of examination, reducing the workload and enabling efficient search in the applicant's prior art research, to which we expect it will contribute greatly.
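
A minimal sketch of a Word2vec-based similarity ranking of the kind described above: each patent document is represented by the average of its word vectors and candidates are returned in descending order of cosine similarity. The sample texts and scoring details are assumptions, not the authors' exact model.

```python
# Hypothetical sketch: rank patent documents by cosine similarity of averaged word vectors.
import numpy as np
from gensim.models import Word2Vec

patents = {   # hypothetical preprocessed patent texts
    "P001": "wireless charging apparatus for electric vehicle".split(),
    "P002": "battery charging control method for electric vehicle".split(),
    "P003": "image sensor noise reduction circuit".split(),
}

model = Word2Vec(sentences=list(patents.values()), vector_size=100, window=5, min_count=1)

def doc_vec(tokens):
    return np.mean([model.wv[t] for t in tokens], axis=0)

def recommend(query_id, top_n=2):
    q = doc_vec(patents[query_id])
    scores = {}
    for pid, toks in patents.items():
        if pid == query_id:
            continue
        v = doc_vec(toks)
        scores[pid] = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    return sorted(scores.items(), key=lambda x: x[1], reverse=True)[:top_n]

print(recommend("P001"))   # similar patents in descending order of similarity score
```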

A study on the classification of research topics based on COVID-19 academic research using Topic modeling (토픽모델링을 활용한 COVID-19 학술 연구 기반 연구 주제 분류에 관한 연구)

  • Yoo, So-yeon;Lim, Gyoo-gun
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.155-174
    • /
    • 2022
  • From January 2020 to October 2021, more than 500,000 academic studies related to COVID-19 (the disease caused by the SARS-CoV-2 coronavirus) have been published. The rapid increase in the number of papers related to COVID-19 places time and technical constraints on healthcare professionals and policy makers who need to find important research quickly. Therefore, in this study we propose a method of extracting useful information from the text data of an extensive literature using the LDA and Word2vec algorithms. Papers related to the keywords to be searched were extracted from papers related to COVID-19, and detailed topics were identified. The data were the CORD-19 dataset on Kaggle, a free academic resource prepared by major research groups and the White House to respond to the COVID-19 pandemic and updated weekly. The research method is divided into two main parts. First, 41,062 articles were collected through data filtering and pre-processing of the abstracts of 47,110 academic papers including full text. For this purpose, the number of publications related to COVID-19 by year was analyzed through exploratory data analysis using a Python program, and the top 10 most actively publishing journals were identified. The LDA and Word2vec algorithms were used to derive research topics related to COVID-19, and after analyzing related words, similarity was measured. Second, papers containing 'vaccine' and 'treatment' were extracted from among the topics derived from all papers: a total of 4,555 papers related to 'vaccine' and 5,971 papers related to 'treatment'. For each collection of papers, detailed topics were analyzed using the LDA and Word2vec algorithms, and clustering through PCA dimension reduction was applied together with the t-SNE algorithm to visualize groups of papers with similar themes. A noteworthy point in the results is that topics that did not emerge from topic modeling over the entire set of COVID-19 papers did emerge from topic modeling within the individual research topics. For example, topic modeling of the papers related to 'vaccine' extracted a new topic, Topic 05 'neutralizing antibodies'. A neutralizing antibody is an antibody that protects cells from infection when a virus enters the body and is said to play an important role in the production of therapeutic agents and in vaccine development. In addition, topic modeling of the papers related to 'treatment' revealed a new topic, Topic 05 'cytokine'. A cytokine storm occurs when the body's immune cells, rather than defending against an attack, attack normal cells. Hidden topics that could not be found across the entire corpus were thus classified according to keywords, and topic modeling was performed to find detailed topics. In this study, we proposed a method of extracting topics from a large body of literature using the LDA algorithm and extracting similar words using the Skip-gram variant of Word2vec, which predicts surrounding words from a central word. The combination of the LDA model and the Word2vec model aims to improve performance by identifying both the relationship between documents and LDA topics and the semantic relationships captured by Word2vec. In addition, a method was presented for intuitively classifying documents with similar themes into groups by applying PCA dimension reduction followed by the t-SNE technique, forming a structured organization of documents. At a time when the efforts of many researchers to overcome COVID-19 cannot keep up with the rapid publication of related academic papers, we hope this approach will save the precious time and effort of healthcare professionals and policy makers and help them rapidly gain new insights. It is also expected to serve as basic data for researchers exploring new research directions.
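
The visualization step described above (PCA reduction followed by t-SNE over document vectors) might look like the sketch below; sg=1 selects gensim's skip-gram training, while the toy abstracts, component counts, and perplexity are illustrative assumptions.

```python
# Hypothetical sketch: skip-gram Word2Vec document vectors -> PCA -> t-SNE coordinates.
import numpy as np
from gensim.models import Word2Vec
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

abstracts = [
    ["vaccine", "neutralizing", "antibody", "trial"],
    ["treatment", "cytokine", "storm", "therapy"],
    # ... one token list per paper abstract
]

w2v = Word2Vec(sentences=abstracts, vector_size=200, window=5, min_count=1, sg=1)  # sg=1: skip-gram
doc_vecs = np.array([np.mean([w2v.wv[t] for t in doc], axis=0) for doc in abstracts])

n_pca = min(50, doc_vecs.shape[0] - 1, doc_vecs.shape[1])       # keep PCA valid for small corpora
reduced = PCA(n_components=n_pca).fit_transform(doc_vecs)
coords = TSNE(n_components=2, perplexity=min(30, len(abstracts) - 1),
              random_state=42).fit_transform(reduced)
# coords can be scatter-plotted to inspect groups of papers with similar themes.
```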

    Application of Gaussian Mixture Model for Text-based Biomarker Detection (텍스트 기반의 바이오마커 검출을 위한 가우시안 혼합 모델의 응용)

    • Oh, Byoung-Doo;Kim, Ki-Hyun;Kim, Yu-Seop
      • Annual Conference on Human and Language Technology
      • /
      • 2018.10a
      • /
      • pp.550-551
      • /
      • 2018
    • A biomarker is an indicator that can reveal the state of, and changes in, the body. Biomarkers are known to be highly useful for diagnosing various diseases, including cancer, but clinical experiments to discover new biomarkers consume a great deal of time and cost, and not all biomarkers prove useful for diagnosing actual diseases. Therefore, this study aims to reduce the time and cost required to discover biomarkers by using natural language processing techniques. Because vocabulary items with a wide variety of meanings appear to be highly related to the target disease, classifying them is very difficult. We therefore classify biomarkers using Word2Vec and a Gaussian mixture model. Experimental results confirmed that the majority of biomarker terms fall into a single cluster.
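
A minimal sketch of clustering word vectors with a Gaussian mixture model, in the spirit of this abstract; the candidate terms, vector size, and number of mixture components are assumptions.

```python
# Hypothetical sketch: embed candidate terms with Word2Vec, then cluster them with a GMM.
import numpy as np
from gensim.models import Word2Vec
from sklearn.mixture import GaussianMixture

sentences = [
    ["her2", "expression", "breast", "cancer", "prognosis"],
    ["psa", "level", "prostate", "cancer", "screening"],
    # ... tokenized biomedical text
]
candidates = ["her2", "psa", "cancer", "screening"]   # hypothetical candidate biomarker terms

w2v = Word2Vec(sentences=sentences, vector_size=50, window=3, min_count=1)
X = np.array([w2v.wv[t] for t in candidates])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(dict(zip(candidates, gmm.predict(X))))          # mixture-component assignment per term
```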

    Hotel Service Quality Evaluation Based on LQI using Sentiment Analysis of Online Reviews (온라인 후기에 내재된 고객의 감성분석과 LQI 차원별 호텔 서비스 품질 평가)

    • Sakong, Won;Ha, Sung Ho;Park, KyungBae
      • The Journal of Information Systems
      • /
      • v.25 no.3
      • /
      • pp.217-245
      • /
      • 2016
    • Purpose: With the increasing number of foreign travelers visiting Korea, evaluating the service quality of domestic hotel companies has become a pressing question. Our research aims to evaluate the service quality of domestic hotels in Korea from the perspective of foreign travelers in order to identify quality improvements that deserve the attention of hotel management. Design/Methodology/Approach: In this paper, sentiment topics follow the Lodging Quality Index (LQI) dimensions, which appropriately classify lodging service quality. We also employed the word2vec algorithm, which accurately calculates similarity and affinity among vocabulary items. To calculate the sentiment of each dimension, we adopted scores from SentiWordNet. Findings: From the results, we found that foreign travelers were particularly satisfied with cleanliness, politeness, and problem-solving skills. In contrast, both promptness of service and efficiency of communication were found not to fulfill travelers' requirements.
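
One way to realize the SentiWordNet scoring mentioned above is sketched below using NLTK; the example terms, the mapping to an LQI dimension, and the averaging scheme are assumptions.

```python
# Hypothetical sketch: average SentiWordNet polarity over review terms tied to one LQI dimension.
import nltk
nltk.download("wordnet", quiet=True)
nltk.download("sentiwordnet", quiet=True)
from nltk.corpus import sentiwordnet as swn

def polarity(word):
    """Average (positive - negative) score over the word's SentiWordNet synsets."""
    synsets = list(swn.senti_synsets(word))
    if not synsets:
        return 0.0
    return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

cleanliness_terms = ["clean", "spotless", "dirty", "tidy"]   # hypothetical terms for one dimension
score = sum(polarity(w) for w in cleanliness_terms) / len(cleanliness_terms)
print(round(score, 3))
```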

    The Research Trends and Keywords Modeling of Shoulder Rehabilitation using the Text-mining Technique (텍스트 마이닝 기법을 활용한 어깨 재활 연구분야 동향과 키워드 모델링)

    • Kim, Jun-hee;Jung, Sung-hoon;Hwang, Ui-jae
      • Journal of the Korean Society of Physical Medicine
      • /
      • v.16 no.2
      • /
      • pp.91-100
      • /
      • 2021
    • PURPOSE: This study analyzed the trends and characteristics of shoulder rehabilitation research through keyword analysis and modeled their relationships using text mining techniques. METHODS: Abstract data from 10,121 articles registered in MEDLINE on PubMed with 'shoulder' and 'rehabilitation' as keywords were collected using Python. By analyzing word frequencies, 10 keywords were selected in order of highest frequency. Word embedding was performed using the word2vec technique to analyze the similarity of words. In addition, groups were classified and analyzed based on distance (cosine similarity) through the t-SNE technique. RESULTS: The number of studies related to shoulder rehabilitation is increasing year after year, and the keywords most frequently used in shoulder rehabilitation studies are 'patient', 'pain', and 'treatment'. The word2vec results showed that the words were highly correlated with 12 keywords from studies related to shoulder rehabilitation. Furthermore, through t-SNE, the keywords of the studies were divided into 5 groups. CONCLUSION: This was the first study to use text mining techniques to model the keywords, and their relationships, that make up the abstracts of MEDLINE/PubMed research related to 'shoulder' and 'rehabilitation'. The results of this study will help diversify the research topics of future shoulder rehabilitation studies.
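
The frequency-then-embedding step described in the methods might look like the sketch below, which selects the most frequent tokens with collections.Counter and then queries word2vec for related words; the abstracts and parameters are placeholders.

```python
# Hypothetical sketch: pick top-frequency keywords, then inspect their word2vec neighbours.
from collections import Counter
from gensim.models import Word2Vec

abstracts = [
    ["patient", "pain", "treatment", "exercise", "shoulder"],
    ["rehabilitation", "pain", "rotator", "cuff", "patient"],
    # ... one token list per MEDLINE abstract
]

freq = Counter(tok for doc in abstracts for tok in doc)
keywords = [w for w, _ in freq.most_common(10)]

model = Word2Vec(sentences=abstracts, vector_size=100, window=5, min_count=1)
for kw in keywords:
    print(kw, model.wv.most_similar(kw, topn=3))   # words most similar to each frequent keyword
```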

    Tax Judgment Analysis and Prediction using NLP and BiLSTM (NLP와 BiLSTM을 적용한 조세 결정문의 분석과 예측)

    • Lee, Yeong-Keun;Park, Koo-Rack;Lee, Hoo-Young
      • Journal of Digital Convergence
      • /
      • v.19 no.9
      • /
      • pp.181-188
      • /
      • 2021
    • Research on, and the importance of, AI-assisted legal services that make difficult legal fields easier to understand and predict are increasing. In this study, based on decisions of the Tax Tribunal in the field of tax law, a model was built through self-learning via information collection and data processing; prediction results were returned in response to users' queries, and their accuracy was verified. The proposed model collects information on tax decisions and extracts useful data through web crawling, and generates word vectors by applying the FastText algorithm, a Word2Vec-family model, to the NLP-optimized output. 11,103 cases were collected and classified from 2017 to 2019, and the model was verified with 70% accuracy. It can be useful for various legal systems and as prior research toward more efficient applications.
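
The word-vector step could be sketched with gensim's FastText implementation (FastText extends Word2Vec with subword information, which is presumably why the abstract calls it "Word2Vec's FastText algorithm"); the sample decision texts and parameters are assumptions.

```python
# Hypothetical sketch: build subword-aware word vectors from tokenized tax decisions with FastText.
from gensim.models import FastText

decisions = [
    ["gift", "tax", "assessment", "revoked"],
    ["value", "added", "tax", "penalty", "upheld"],
    # ... one token list per crawled Tax Tribunal decision
]

model = FastText(sentences=decisions, vector_size=100, window=5, min_count=1)
vec = model.wv["tax"]                        # word vector to feed a downstream BiLSTM classifier
print(model.wv.most_similar("tax", topn=3))
```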

    A Study on the Definition of Data Literacy for Elementary and Secondary Artificial Intelligence Education (초·중등 인공지능 교육을 위한 데이터 리터러시 정의 연구)

    • Kim, SeulKi;Kim, Taeyoung
      • Korean Association of Information Education: Conference Proceedings
      • /
      • 2021.08a
      • /
      • pp.59-67
      • /
      • 2021
    • The development of AI technology has brought about big changes in our lives. As AI's influence grows from daily life to society and the economy, the importance of education on AI and data is also growing. In particular, the OECD Education Research Report and various domestic informatics and curriculum studies address data literacy and present it as an essential competency. Looking at domestic and international studies, one can see that the definition of data literacy differs in specific content and scope from researcher to researcher. Thus, the definitions used in major research related to data literacy were analyzed from various angles. Word2vec natural language processing, along with word frequency analysis of the terms used to define data literacy in key studies, was used to analyze semantic similarities, and, based on the content elements of curriculum research, the definition 'understanding and using data to process information' was derived. Based on the definition of data literacy derived in this study, we hope that the contents will be revised and supplemented and that further research will provide a good foundation for educational research that develops students' future capabilities.
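
As a small illustration of the frequency-plus-similarity analysis mentioned here, the sketch below compares candidate definition terms pairwise with Word2vec; the corpus of definition sentences and the term list are hypothetical.

```python
# Hypothetical sketch: pairwise Word2vec similarity between terms drawn from data-literacy definitions.
from itertools import combinations
from gensim.models import Word2Vec

definitions = [
    ["understand", "collect", "analyze", "use", "data"],
    ["read", "interpret", "communicate", "data", "information"],
    # ... one token list per prior definition of data literacy
]

model = Word2Vec(sentences=definitions, vector_size=50, window=3, min_count=1)
terms = ["understand", "use", "data", "information"]
for a, b in combinations(terms, 2):
    print(a, b, round(float(model.wv.similarity(a, b)), 3))   # semantic closeness of definition elements
```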
