• Title/Summary/Keyword: doc2vec


COVID-19-related Korean Fake News Detection Using Occurrence Frequencies of Parts of Speech (품사별 출현 빈도를 활용한 코로나19 관련 한국어 가짜뉴스 탐지)

  • Jihyeok Kim;Hyunchul Ahn
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.267-283
    • /
    • 2023
  • The COVID-19 pandemic, which began in December 2019 and continues to this day, has left the public in need of information to help them cope. However, COVID-19-related fake news on social media seriously threatens public health. In particular, when fake news related to COVID-19 spreads widely with similar content, the time required to verify whether it is genuine or fake grows, posing a severe threat to society. In response, academics have been actively researching intelligent models that can quickly detect COVID-19-related fake news. However, the data used in most existing studies are in English, and studies on Korean fake news detection are scarce. In this study, we collect COVID-19-related fake news written in Korean and spread on social media, and propose an intelligent fake news detection model built on it. The proposed model uses part-of-speech frequency information, a linguistic characteristic, to improve the prediction performance of a fake news detection model based on Doc2Vec, the document embedding technique mainly used in prior studies. The empirical analysis shows that the proposed model identifies Korean COVID-19-related fake news more accurately, increasing recall and F1 score relative to the comparison model.
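A minimal sketch of how part-of-speech frequency features might be concatenated with Doc2Vec document vectors before classification, assuming gensim's Doc2Vec and a generic Korean POS tagger; the tag set, classifier, and hyperparameters below are illustrative, not the paper's configuration.

```python
# Hypothetical sketch: Doc2Vec document vectors concatenated with
# part-of-speech frequency features, then a logistic-regression classifier.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

def pos_frequency_vector(tagged_tokens, pos_tags):
    """Normalized occurrence frequency of each POS tag in one document.
    `tagged_tokens` is a list of (token, pos) pairs, e.g. from a Korean
    tagger such as KoNLPy's Okt; the tagger choice is an assumption."""
    counts = np.array([sum(1 for _, p in tagged_tokens if p == t) for t in pos_tags],
                      dtype=float)
    return counts / max(len(tagged_tokens), 1)

def build_features(docs, labels, pos_tags):
    # docs: list of lists of (token, pos) pairs, one list per news item
    corpus = [TaggedDocument(words=[w for w, _ in d], tags=[i]) for i, d in enumerate(docs)]
    d2v = Doc2Vec(corpus, vector_size=100, min_count=2, epochs=40)
    X = np.vstack([np.concatenate([d2v.dv[i], pos_frequency_vector(d, pos_tags)])
                   for i, d in enumerate(docs)])
    return X, np.array(labels)

# Usage (hypothetical): X, y = build_features(tagged_docs, labels,
#     pos_tags=["Noun", "Verb", "Adjective", "Josa"])
# clf = LogisticRegression(max_iter=1000).fit(X, y)
```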

Fake News Detection on YouTube Using Related Video Information (관련 동영상 정보를 활용한 YouTube 가짜뉴스 탐지 기법)

  • Junho Kim;Yongjun Shin;Hyunchul Ahn
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.19-36
    • /
    • 2023
  • As advances in information and communication technology have made it easier for anyone to produce and disseminate information, a new problem has emerged: fake news, false information intentionally shared to mislead people. Initially spread mainly as text, fake news has gradually evolved and is now distributed in multimedia formats. Since its founding in 2005, YouTube has become the world's leading video platform, used by people worldwide, but it has also become a primary source of fake news, causing social problems. Various researchers have worked on detecting fake news on YouTube. Approaches to fake news detection are either content-based or based on background information, and content-based approaches dominate both conventional fake news research and YouTube fake news detection research. This study proposes a detection method based on background information rather than content: we detect fake news by utilizing information from a video's related videos on YouTube. Specifically, the original video and its related videos are vectorized with Doc2vec, an embedding technique, and the resulting vectors are classified by a CNN, a deep learning network. The empirical analysis shows that the proposed method predicts more accurately than the existing content-based approach to detecting fake news on YouTube. The proposed method contributes to a safer and more reliable society by helping prevent the spread of highly contagious fake news on YouTube.
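A rough sketch of the described pipeline under stated assumptions: the original video and its related videos are each represented by a Doc2Vec vector, the vectors are stacked into one matrix per sample, and a small 1-D CNN produces the fake/real prediction. The Keras layer sizes and the assumed number of related videos are illustrative only.

```python
# Hypothetical sketch: the original video's text and its K related videos
# are each embedded with Doc2Vec, stacked into a (K+1, dim) matrix,
# and classified with a small 1-D CNN.
import tensorflow as tf

K_RELATED, DIM = 10, 100  # assumed number of related videos / Doc2Vec vector size

def build_cnn(num_rows=K_RELATED + 1, dim=DIM):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(num_rows, dim)),
        tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # fake vs. real
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# X: (n_samples, K_RELATED + 1, DIM) array of Doc2Vec vectors, y: 0/1 labels
# model = build_cnn(); model.fit(X, y, epochs=10, validation_split=0.2)
```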

A CF-based Health Functional Recommender System using Extended User Similarity Measure (확장된 사용자 유사도를 이용한 CF-기반 건강기능식품 추천 시스템)

  • Sein Hong;Euiju Jeong;Jaekyeong Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.1-17
    • /
    • 2023
  • With the recent rapid development of ICT (Information and Communication Technology) and the popularization of digital devices, the size of the online market continues to grow, and we live in a flood of information. Customers therefore face information overload, and selecting products requires considerable time and money. A personalized recommender system has become an essential methodology to address this issue, and Collaborative Filtering (CF) is the most widely used approach. Traditional recommender systems mainly utilize quantitative data such as rating values, which cannot fully reflect the user's preference, resulting in poor recommendation accuracy. To solve this problem, studies that also reflect qualitative data, such as review contents, are being actively conducted. In this study, text mining was used to quantify user review contents. General CF consists of three steps: user-item matrix generation, Top-N neighborhood group search, and Top-K recommendation list generation. We propose a recommendation algorithm that applies an extended similarity measure, which utilizes quantified review contents in addition to user rating values. After calculating review similarity by applying TF-IDF, Word2Vec, and Doc2Vec techniques to review content, the extended similarity is created by combining user rating similarity with the quantified review similarity. To verify this, we used user ratings and review data from the "Health and Personal Care" category of the e-commerce site Amazon. The proposed recommendation model using the extended similarity measure outperformed the traditional model that uses only rating-based similarity. In addition, among the various text mining techniques, the similarity obtained with TF-IDF performed best in the neighborhood group search and recommendation list generation steps.
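A minimal sketch of the extended similarity measure, assuming TF-IDF for the review-content side (Word2Vec or Doc2Vec vectors could be substituted) and a simple weighted combination with rating-based similarity; the blending weight `alpha` and the helper names are illustrative.

```python
# Hypothetical sketch of the extended user similarity: rating-based
# similarity is blended with review-text similarity.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extended_user_similarity(rating_matrix, user_reviews, alpha=0.5):
    """rating_matrix: (n_users, n_items) array with 0 for unrated items.
    user_reviews: list of n_users strings (all reviews of a user concatenated)."""
    rating_sim = cosine_similarity(rating_matrix)          # rating-based similarity
    tfidf = TfidfVectorizer().fit_transform(user_reviews)  # quantified review content
    review_sim = cosine_similarity(tfidf)
    return alpha * rating_sim + (1 - alpha) * review_sim   # extended similarity

# Usage (hypothetical):
# sim = extended_user_similarity(R, reviews)
# neighbors_of_u = np.argsort(-sim[u])[1:N+1]              # Top-N neighborhood search
```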

Comparison of Document Clustering Performance Using Various Dimension Reduction Methods (다양한 차원 축소 기법을 적용한 문서 군집화 성능 비교)

  • Cho, Heeryon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.05a
    • /
    • pp.437-438
    • /
    • 2018
  • One way to improve document clustering performance is to perform clustering on document vectors after applying dimensionality reduction. In this presentation, we apply dimensionality reduction techniques such as singular value decomposition (SVD), kernel principal component analysis (Kernel PCA), and Doc2Vec to K-means clustering, hierarchical agglomerative clustering, and spectral clustering, and compare their performance.
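A brief sketch of how the reduction and clustering techniques named above could be paired using scikit-learn; a Doc2Vec model would replace the TF-IDF plus reduction path, and all parameter values are illustrative.

```python
# Hypothetical sketch: pair dimensionality-reduction techniques (SVD, Kernel PCA)
# with three clustering algorithms and collect the resulting label assignments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD, KernelPCA
from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering

def cluster_documents(texts, n_clusters=5, n_components=50):
    tfidf = TfidfVectorizer().fit_transform(texts)
    reducers = {
        "SVD": TruncatedSVD(n_components=n_components),
        "KernelPCA": KernelPCA(n_components=n_components, kernel="rbf"),
    }
    clusterers = {
        "KMeans": KMeans(n_clusters=n_clusters, n_init=10),
        "Agglomerative": AgglomerativeClustering(n_clusters=n_clusters),
        "Spectral": SpectralClustering(n_clusters=n_clusters),
    }
    results = {}
    for r_name, reducer in reducers.items():
        X = reducer.fit_transform(tfidf.toarray())          # reduced document vectors
        for c_name, clusterer in clusterers.items():
            results[(r_name, c_name)] = clusterer.fit_predict(X)
    return results
```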

Comparison of Anomaly Detection Performance Based on GRU Model Applying Various Data Preprocessing Techniques and Data Oversampling (다양한 데이터 전처리 기법과 데이터 오버샘플링을 적용한 GRU 모델 기반 이상 탐지 성능 비교)

  • Yoo, Seung-Tae;Kim, Kangseok
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.2
    • /
    • pp.201-211
    • /
    • 2022
  • Following the recent change in the cybersecurity paradigm, research on anomaly detection methods using machine learning and deep learning, the implementation technologies of AI, is increasing. In this study, we conducted a comparative study of data preprocessing techniques that can improve the anomaly detection performance of a GRU (Gated Recurrent Unit) neural network-based intrusion detection model, using the open NGIDS-DS (Next Generation IDS Dataset). In addition, to address the class imbalance between normal and attack data, detection performance was compared and analyzed across oversampling ratios using an oversampling technique based on DCGAN (Deep Convolutional Generative Adversarial Networks). The experiments showed that preprocessing the system call and process execution path features with the Doc2Vec algorithm performed well, and that oversampling with DCGAN improved detection performance.
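A hedged sketch of the preprocessing and model side described above: system-call (or process-path) token sequences are embedded with Doc2Vec and a GRU network scores them as normal or attack. Shapes and layer sizes are assumptions for illustration, and the DCGAN oversampling step is omitted.

```python
# Hypothetical sketch: Doc2Vec embeddings of event sequences feed a GRU classifier.
import tensorflow as tf
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def embed_sequences(sequences, dim=64):
    """sequences: list of token lists (e.g., system-call names per window)."""
    corpus = [TaggedDocument(words=s, tags=[i]) for i, s in enumerate(sequences)]
    d2v = Doc2Vec(corpus, vector_size=dim, min_count=1, epochs=30)
    return [d2v.dv[i] for i in range(len(sequences))]

def build_gru(timesteps, dim=64):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(timesteps, dim)),
        tf.keras.layers.GRU(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # normal vs. attack
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```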

A Study on the Service Improvement Strategies by Enterprise through the Analysis of Customer Response Reviews in Smart Home Applications : Based on the Classification of Functional Elements and Design Elements of smart Home Usability Values (스마트 홈 어플리케이션의 고객반응리뷰분석을 통한 기업별 서비스개선전략에 대한 연구 : 스마트 홈 사용성 가치의 기능적요소와 디자인적 요소 분류를 바탕으로)

  • Heo, Ji Yeon;Kim, Min Ji;Cha, Kyung Jin
    • Journal of Information Technology Services
    • /
    • v.19 no.4
    • /
    • pp.85-107
    • /
    • 2020
  • The market for the Internet of Things, a technology that connects various things to the Internet, is growing day by day, and various smart home services using IoT and AI (Artificial Intelligence) are being launched in homes. Existing smart home-related studies, however, focus primarily on ICT technology rather than on what service improvements should be made from the customer's perspective. In this study, we use customer review data from smart home applications to classify the functional and design elements of smart home usability value and examine the service improvements customers want. To this end, customer reviews of the smart home applications of LG Electronics and Samsung Electronics, the main smart home providers in Korea, were crawled for a comparative analysis. Text mining, social network analysis, and Doc2vec were used to efficiently analyze roughly 16,000 user reviews of these IoT home applications and draw service improvement insights from the customer's perspective. Through this research, we hope that related companies can effectively identify ways to improve smart home services that reflect customer needs, and that identifying weaknesses and strengths relative to competitors will help them establish competitive strategies.
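A small illustrative sketch, under the assumption that the crawled reviews are tokenized beforehand, of how Doc2Vec could be used to group and retrieve reviews that raise similar usability concerns across the two vendors' applications; function names and parameters are hypothetical.

```python
# Hypothetical sketch: embed app reviews with Doc2Vec and retrieve similar ones.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def train_review_model(tokenized_reviews, dim=100):
    corpus = [TaggedDocument(words=toks, tags=[i]) for i, toks in enumerate(tokenized_reviews)]
    return Doc2Vec(corpus, vector_size=dim, min_count=2, epochs=40)

def similar_reviews(model, tokens, topn=5):
    """Return (review index, similarity) pairs closest to a new tokenized review."""
    vec = model.infer_vector(tokens)
    return model.dv.most_similar([vec], topn=topn)
```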

Legal search method using S-BERT

  • Park, Gil-sik;Kim, Jun-tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.11
    • /
    • pp.57-66
    • /
    • 2022
  • In this paper, we propose a legal document search method that uses the Sentence-BERT model. Members of the general public who want to use a legal search service have difficulty finding relevant precedents because they lack an understanding of legal terms and structures. In addition, existing keyword- and text mining-based legal search methods are limited in producing quality results for two reasons: they lack information on the context of the judgment, and they fail to distinguish homonyms and polysemes. As a result, the accuracy of legal document search results is often unsatisfactory or questionable. This paper aims to improve the efficacy of the general public's legal search over the Supreme Court precedent and Legal Aid Counseling case databases. The Sentence-BERT model embeds contextual information from precedents and counseling data, which better preserves the meaning of phrases and sentences. Our initial research has shown that the Sentence-BERT search method yields higher accuracy than the Doc2Vec or TF-IDF search methods.
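A minimal sketch of Sentence-BERT retrieval with the sentence-transformers library; the multilingual checkpoint named below is a placeholder, not necessarily the model used in the paper.

```python
# Hypothetical sketch: encode precedents and a query with Sentence-BERT,
# then rank documents by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # placeholder model

def search(query, documents, top_k=5):
    doc_emb = model.encode(documents, convert_to_tensor=True)
    query_emb = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, doc_emb, top_k=top_k)[0]
    return [(documents[h["corpus_id"]], float(h["score"])) for h in hits]
```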

Sentence Similarity Analysis using Ontology Based on Cosine Similarity (코사인 유사도를 기반의 온톨로지를 이용한 문장유사도 분석)

  • Hwang, Chi-gon;Yoon, Chang-Pyo;Yun, Dai Yeol
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.441-443
    • /
    • 2021
  • Sentence or text similarity is a measure of the degree of similarity between two sentences. Techniques for measuring text similarity include Jaccard similarity, cosine similarity, Euclidean similarity, and Manhattan similarity. Cosine similarity is currently the most widely used, but because it analyzes only the occurrence or frequency of words in a sentence, it captures semantic relationships poorly. We therefore try to improve sentence similarity analysis by assigning relations between words using an ontology and including semantic similarity when extracting the words that two sentences have in common.
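For reference, a minimal frequency-based cosine similarity between two sentences; the ontology-based semantic weighting proposed above would adjust the contribution of semantically related words, which is not implemented in this sketch.

```python
# Minimal sketch of frequency-based cosine similarity between two sentences.
import math
from collections import Counter

def cosine_similarity(sent_a, sent_b):
    a, b = Counter(sent_a.split()), Counter(sent_b.split())
    common = set(a) & set(b)                                # words shared by both sentences
    dot = sum(a[w] * b[w] for w in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```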


Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.19-41
    • /
    • 2019
  • In line with the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are carried out according to the purpose of analysis. Until recently, text mining studies focused on the second step. However, with the recognition that the structuring process substantially influences the quality of the results, various embedding methods have been actively studied to preserve the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be fed directly into a variety of operations and traditional analysis techniques, unstructured text must first be transformed into a form the computer can understand. Mapping arbitrary objects into a space of a given dimension while maintaining algebraic properties, in order to structure text data, is called embedding. Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as demand for document embedding grows rapidly, many algorithms have been developed to support it; among them, Doc2Vec, which extends Word2Vec and embeds each document into a single vector, is the most widely used. However, the traditional document embedding method represented by Doc2Vec generates a vector for each document from the entire text of the document, so the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, which makes it difficult to accurately represent a complex document covering multiple subjects. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords; for documents without keywords, the method can be applied after extracting keywords with various analysis techniques, but since this is not the core subject of the proposed method, we describe the process for documents whose keywords are predefined in the text. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to overcome the limitation that traditional document embedding is influenced by miscellaneous as well as core words, the vectors corresponding to each document's keywords are extracted to form a set of keyword vectors per document. Next, the keyword vectors of each document are clustered to identify the multiple subjects the document contains. Finally, a multi-vector is generated from the vectors of the keywords constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector-based traditional approach cannot map complex documents properly because the subjects interfere within a single vector, whereas the proposed multi-vector-based method vectorizes complex documents more accurately by eliminating the interference among subjects.
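A hedged sketch of steps (2) through (5) of the proposed method, assuming Word2Vec for word embedding and K-means for keyword clustering; the number of subjects per document and other parameters are illustrative, not the paper's settings.

```python
# Hypothetical sketch: keyword vectors from a Word2Vec model are clustered,
# and each cluster is averaged into one subject vector per document.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

def multi_vector_embedding(tokenized_docs, doc_keywords, n_subjects=3, dim=100):
    w2v = Word2Vec(tokenized_docs, vector_size=dim, min_count=1, epochs=20)  # (2) word embedding
    doc_vectors = []
    for keywords in doc_keywords:
        vecs = np.array([w2v.wv[k] for k in keywords if k in w2v.wv])        # (3) keyword vectors
        if len(vecs) < n_subjects:            # too few keywords: fall back to a single vector
            doc_vectors.append([vecs.mean(axis=0)] if len(vecs) else [])
            continue
        labels = KMeans(n_clusters=n_subjects, n_init=10).fit_predict(vecs)  # (4) keyword clustering
        doc_vectors.append([vecs[labels == c].mean(axis=0)                   # (5) multi-vector
                            for c in range(n_subjects)])
    return doc_vectors
```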

ShipMate: Marine Logistics Specialist Consultation Chatbot using Deep Learning (ShipMate: 딥러닝을 이용한 해상물류 전문상담 챗봇)

  • Hyun-Su Yu;Seo-Yeon Nam;Joo-Yeong Baek;So-Yeong Ahn;Se-Jin Hwang;Gyu-Young Lee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.1092-1093
    • /
    • 2023
  • In this paper, we propose ShipMate, a conversational chatbot for marine logistics consultation implemented with deep learning on open consultation data from the Korea International Trade Association (KITA). ShipMate generates answers using KoGPT2 and recommends similar past consultation cases based on Doc2Vec. Because trade consultations can be conducted without time constraints, the chatbot further improves the accessibility of existing marine logistics services, which we demonstrate through experiments.
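A rough sketch of the two components described above, using the public skt/kogpt2-base-v2 checkpoint as a stand-in for the authors' fine-tuned KoGPT2 and a Doc2Vec model for similar-case retrieval; all identifiers, tokens, and parameters are assumptions for illustration.

```python
# Hypothetical sketch: KoGPT2 answer generation plus Doc2Vec case retrieval.
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

tok = PreTrainedTokenizerFast.from_pretrained("skt/kogpt2-base-v2",
                                              bos_token="</s>", eos_token="</s>",
                                              unk_token="<unk>", pad_token="<pad>")
gpt = GPT2LMHeadModel.from_pretrained("skt/kogpt2-base-v2")   # stand-in, not fine-tuned

def generate_answer(question, max_length=128):
    ids = tok.encode(question, return_tensors="pt")
    out = gpt.generate(ids, max_length=max_length, do_sample=True, top_p=0.9)
    return tok.decode(out[0], skip_special_tokens=True)

def recommend_similar_cases(d2v_model, tokenized_question, topn=3):
    """Return (case index, similarity) pairs for the most similar past consultations."""
    vec = d2v_model.infer_vector(tokenized_question)           # Doc2Vec case retrieval
    return d2v_model.dv.most_similar([vec], topn=topn)
```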