• Title/Summary/Keyword: Semantic analysis


Relations between Reputation and Social Media Marketing Communication in Cryptocurrency Markets: Visual Analytics using Tableau

  • Park, Sejung;Park, Han Woo
    • International Journal of Contents
    • /
    • v.17 no.1
    • /
    • pp.1-10
    • /
    • 2021
  • Visual analytics is an emerging research field that combines the strength of electronic data processing with human intuition grounded in social background knowledge. This study demonstrates useful visual analytics with Tableau in conjunction with semantic network analysis, using examples of sentiment flow and strategic communication via Twitter in the blockchain domain. We comparatively investigated sentiment flow over time and language usage patterns between companies with a good reputation and firms with a poor reputation. In addition, this study explored the relations between reputation and marketing communication strategies. We found that cryptocurrency firms produced information more actively when public demand and transactions increased and when coin prices were high. Emotional language strategies on social media did not affect cryptocurrencies' reputations. The pattern of semantic representations of keywords was similar between companies with a good reputation and firms with a poor reputation. However, the reputable firms communicated on a wider range of topics, used more culturally focused strategies, and took greater advantage of social media marketing by expanding their outreach to other social media networks. Visual big data analytics provides business-intelligence insights that help inform policy.
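
To make the "sentiment flow over time" concrete, here is a minimal Python sketch, not the authors' code, that aggregates per-tweet sentiment scores into a weekly series per firm, the kind of table one would then chart in Tableau; the column names and scores are hypothetical.

```python
# A minimal sketch (not the authors' code): aggregating tweet sentiment into a
# weekly "sentiment flow" per firm. The columns (date, firm, sentiment) are hypothetical.
import pandas as pd

tweets = pd.DataFrame({
    "date": pd.to_datetime(["2020-01-02", "2020-01-03", "2020-01-09", "2020-01-10"]),
    "firm": ["CoinA", "CoinA", "CoinB", "CoinB"],
    "sentiment": [0.6, -0.2, 0.1, 0.4],   # polarity scores from any sentiment tool
})

# Weekly mean sentiment per firm: one row per (firm, week), ready for a line chart.
flow = (tweets
        .set_index("date")
        .groupby("firm")["sentiment"]
        .resample("W")
        .mean()
        .reset_index())
print(flow)
```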

Semantic Network Analysis of 'Young-Kl' (panic buying): Focusing on News Source Diversity ('영끌' 보도에 대한 언어망 분석: 뉴스 정보원 다양성을 중심으로)

  • Lee, Jeng Hoon
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.12
    • /
    • pp.23-33
    • /
    • 2021
  • This study analyzed news articles about 'Young-Kl' reported by 11 media outlets, identifying news frames and quotation frames. Using semantic network analysis, the study inspected the quotation frames and measured the frequency of quotes and source types. The concentration index of the frames was also measured. The results showed that the news frames consisted of 10 topics and the quotation frames of 14 topics. Although differences among quotation frames were observed across media outlets and source types, the concentration index of sources such as the government, the political arena, and business was high. The study therefore suggests that numerical diversity of news sources does not by itself establish diversity of news frames.
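
The abstract reports a "concentration index" over quoted source types without naming a formula; the sketch below illustrates the general idea with a Herfindahl-Hirschman-style index over hypothetical quote counts, which is an assumption rather than the paper's exact metric.

```python
# A minimal sketch, not the paper's exact metric: a Herfindahl-Hirschman-style
# concentration index (an assumption) over hypothetical source-type counts.
from collections import Counter

quoted_sources = ["government", "government", "business", "political arena",
                  "government", "business", "expert", "citizen"]

counts = Counter(quoted_sources)
total = sum(counts.values())
shares = {s: n / total for s, n in counts.items()}

# HHI: sum of squared shares; 1.0 means all quotes come from a single source type.
hhi = sum(p ** 2 for p in shares.values())
print(counts, round(hhi, 3))   # 0.25 for these hypothetical counts
```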

Using Syntax and Shallow Semantic Analysis for Vietnamese Question Generation

  • Phuoc Tran;Duy Khanh Nguyen;Tram Tran;Bay Vo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.10
    • /
    • pp.2718-2731
    • /
    • 2023
  • This paper presents a method of using syntax and shallow semantic analysis for Vietnamese question generation (QG). Specifically, our proposed technique concentrates on investigating both the syntactic and shallow semantic structure of each sentence. The main goal of our method is to generate questions from a single sentence. These generated questions are known as factoid questions, which require short, fact-based answers. In general, syntax-based analysis is one of the most popular approaches within the QG field, but it requires linguistic expert knowledge as well as a deep understanding of syntax rules in the Vietnamese language. It is thus considered a high-cost and inefficient solution, because significant human effort is required to obtain qualified syntax rules. To deal with this problem, we collected the syntax rules in Vietnamese from a Vietnamese language textbook. Moreover, we also used different natural language processing (NLP) techniques to analyze Vietnamese shallow syntax and semantics for the QG task: sentence segmentation, word segmentation, part-of-speech tagging, chunking, dependency parsing, and named entity recognition. We used human evaluation to assess the credibility of our model; that is, we manually generated questions from the corpus and then compared them with the automatically generated questions. The empirical evidence demonstrates that our proposed technique performs well, with the generated questions being very similar to those created by humans.
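
As a toy illustration of rule-based factoid question generation, the snippet below applies one hypothetical syntax/semantic rule to a pre-annotated sentence; the annotation stands in for the output of the segmentation, POS tagging, chunking, parsing, and NER steps listed in the abstract, and this is not the authors' Vietnamese system.

```python
# A toy illustration (not the authors' system): one rule turns a subject-verb-object
# sentence with a PERSON subject into a "Who ...?" factoid question.
def generate_question(parsed):
    subj, verb, obj = parsed["subject"], parsed["verb"], parsed["object"]
    if parsed["subject_entity"] == "PERSON":
        # Replace the subject with the interrogative word; the subject is the answer.
        return f"Who {verb} {obj}?", subj
    return None

# Hypothetical pre-annotated sentence (stands in for the NLP pipeline output).
sentence = {
    "subject": "Nguyen Du", "subject_entity": "PERSON",
    "verb": "wrote", "object": "the Tale of Kieu",
}
question, answer = generate_question(sentence)
print(question)   # Who wrote the Tale of Kieu?
print(answer)     # Nguyen Du
```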

EVALUATION OF STATIC ANALYSIS TOOLS USED TO ASSESS SOFTWARE IMPORTANT TO NUCLEAR POWER PLANT SAFETY

  • OURGHANLIAN, ALAIN
    • Nuclear Engineering and Technology
    • /
    • v.47 no.2
    • /
    • pp.212-218
    • /
    • 2015
  • We describe a comparative analysis of different tools used to assess safety-critical software used in nuclear power plants. To enhance the credibility of safety assessments and to optimize safety justification costs, Électricité de France (EDF) investigates the use of methods and tools for source code semantic analysis, to obtain indisputable evidence and help assessors focus on the most critical issues. EDF has been using the PolySpace tool for more than 10 years. Currently, new industrial tools based on the same formal approach, Abstract Interpretation, are available. Practical experimentation with these new tools shows that the precision obtained on one of our shutdown systems software packages is substantially improved. In the first part of this article, we present the analysis principles of the tools used in our experimentation. In the second part, we present the main characteristics of protection-system software, and why these characteristics are well adapted for the new analysis tools. In the last part, we present an overview of the results and the limitations of the tools.
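
For readers unfamiliar with Abstract Interpretation, the following minimal Python sketch shows the core idea behind such tools (it is not PolySpace or any of the tools evaluated): program values are abstracted as intervals, so properties such as "this divisor can never be zero" can be proved without executing the code.

```python
# A minimal sketch of the Abstract Interpretation idea (not any evaluated tool):
# values are abstracted as intervals, and operations are evaluated on intervals.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        prods = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
        return Interval(min(prods), max(prods))
    def contains(self, value):
        return self.lo <= value <= self.hi
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Abstract "execution" of:  x in [1, 10];  y = x + 5;  z = y * 2;  q = 100 / z
x = Interval(1, 10)
y = x + Interval(5, 5)
z = y * Interval(2, 2)
print(x, y, z)                                        # [1, 10] [6, 15] [12, 30]
print("division by zero possible:", z.contains(0))    # False -> the division is safe
```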

A Study on Gamification Consumer Perception Analysis Using Big Data

  • Se-won Jeon;Youn Ju Ahn;Gi-Hwan Ryu
    • International Journal of Advanced Culture Technology
    • /
    • v.11 no.3
    • /
    • pp.332-337
    • /
    • 2023
  • The purpose of this study was to analyze consumers' perceptions of gamification. Based on the analyzed data, we systematically organize the concept, game elements, and mechanisms of gamification. Gamification has recently become common in areas such as medical care, corporate marketing, and education. This study collected keywords from the social media portal sites Naver, Daum, and Google from 2018 to 2023 using TEXTOM, a social media analysis tool. The data were analyzed using text mining, semantic network analysis, and CONCOR analysis. Based on the collected data, we examined the relevance of and clusters among terms related to gamification. The clusters were divided into four groups: 'Awareness of Gamification', 'Gamification Program', 'Future Technology of Gamification', and 'Use of Gamification'. Through social media analysis, we identify consumers' perceptions of gamification use and compare market and consumer perceptions in order to address shortcomings, and on that basis we develop a plan for utilizing gamification.
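
The CONCOR step can be illustrated with a short sketch: starting from a keyword co-occurrence matrix, correlations between keyword profiles are iterated until the matrix converges to +1/-1 values, and the sign pattern yields a two-block partition. The keywords and counts below are hypothetical, and this is not the TEXTOM/UCINET implementation.

```python
# A minimal sketch of CONCOR-style clustering on a keyword co-occurrence matrix
# (hypothetical counts, not the study's data or the UCINET implementation).
import numpy as np

keywords = ["game", "reward", "badge", "education", "learning", "class"]
cooc = np.array([            # symmetric co-occurrence counts; diagonal = keyword frequency
    [12,  9,  8,  2,  1,  1],
    [ 9, 12,  9,  1,  2,  1],
    [ 8,  9, 12,  1,  1,  2],
    [ 2,  1,  1, 12,  9,  8],
    [ 1,  2,  1,  9, 12,  9],
    [ 1,  1,  2,  8,  9, 12],
], dtype=float)

m = np.corrcoef(cooc, rowvar=False)      # correlate keyword co-occurrence profiles
for _ in range(25):                      # iterate correlations until entries reach +/-1
    if np.allclose(np.abs(m), 1.0):
        break
    m = np.corrcoef(m, rowvar=False)

in_block = m[0] > 0                      # sign of the first row defines the two blocks
print([k for k, b in zip(keywords, in_block) if b])       # e.g. ['game', 'reward', 'badge']
print([k for k, b in zip(keywords, in_block) if not b])   # e.g. ['education', 'learning', 'class']
```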

A Comparative Analysis Study of IFLA School Library Guidelines Using Semantic Network Analysis (언어 네트워크 분석을 통한 IFLA의 학교도서관 가이드라인 비교·분석에 관한 연구)

  • Lee, Byeong-Kee
    • Journal of Korean Library and Information Science Society
    • /
    • v.51 no.2
    • /
    • pp.1-21
    • /
    • 2020
  • The purpose of this study is to explore the semantic characteristics of the IFLA school library guidelines through network analysis. There are two versions of the guidelines, the 2002 edition and the 2015 revision. This study analyzed both versions from the viewpoint of semantic networks and compared their characteristics. Keywords were extracted from the two texts, and semantic networks were composed based on co-occurrence relations among the keywords. Centrality (degree centrality, closeness centrality, betweenness centrality) was analyzed from the networks. In addition, this study conducted topic modeling using the LDA function of NetMiner 4.0. The results are as follows. First, when comparing centrality, the keywords 'Program, Teaching, Reading, Inquiry, Literacy, Media' ranked higher in the 2015 revision than in the 2002 edition. Second, 'Inquiry' in degree centrality and 'Achievement' in closeness centrality, which were not included in the 2002 edition's top-ranked keyword list, newly appeared in the 2015 revision. Third, the topic modeling analysis shows that, compared to the 2002 edition, the importance of topics on programs and services, the teaching and learning activities of teacher librarians, and media and information literacy increases in the 2015 revision.
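
The three centrality measures used in the study can be computed with standard network libraries; the sketch below uses networkx on a small hypothetical keyword co-occurrence network rather than the guideline texts.

```python
# A minimal sketch (hypothetical keywords, not the guideline data): a co-occurrence
# network and the three centrality measures reported in the study.
import networkx as nx

# Hypothetical keyword co-occurrence pairs with counts
edges = [("program", "teaching", 5), ("teaching", "reading", 4),
         ("reading", "literacy", 6), ("literacy", "media", 3),
         ("program", "inquiry", 2), ("inquiry", "literacy", 2)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

degree = nx.degree_centrality(G)            # share of direct neighbours
closeness = nx.closeness_centrality(G)      # inverse average distance to all nodes
betweenness = nx.betweenness_centrality(G)  # share of shortest paths passing through

for node in G.nodes:
    print(f"{node:10s} degree={degree[node]:.2f} "
          f"closeness={closeness[node]:.2f} betweenness={betweenness[node]:.2f}")
```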

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information in the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information will be available in the future, effectively and efficiently ranking search results will become more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information which is inherent in the link structure of the Web graph. PageRank considers a certain page highly important if it is referred to by many other pages. The degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it. A page with a high hub score links to many authoritative pages. As mentioned above, the link-structure based ranking method has been playing an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. On the other hand, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links similar to the Web graph. As a result, the link-structure based ranking method seems to be highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered as one huge class, i.e., a collection of Web pages, which has only a recursive property, i.e., a 'refers to' property corresponding to the hyperlinks. However, the Semantic Web encompasses various kinds of classes and properties, and consequently, ranking methods used in the WWW should be modified to reflect the complexity of the information space in the Semantic Web. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's algorithm, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of a resource on another resource depending on the characteristic of the property linking the two resources. A node with a high objectivity score becomes the object of many RDF triples, and a node with a high subjectivity score becomes the subject of many RDF triples. They developed several kinds of Semantic Web systems in order to validate their technique and showed some experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, there remained some limitations which they reported in their paper.
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain. In other words, the ratio of links to nodes should be high, or overall resources should be described in detail to a certain degree, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important but densely connected receive higher scores than ones that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm which can solve the problems of the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken by the previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. This approach closely reflects the way people evaluate things in the real world, and it turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on other limitations posed by the previous research. In addition, we propose two ways to incorporate datatype properties, which have not been employed even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research; this analysis enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
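
As background for the link-analysis discussion, the sketch below runs Kleinberg-style power iteration over an RDF triple graph with per-property weights, in the spirit of the prior (predicate-oriented) work the paper critiques; the triples and weights are hypothetical, and this is not the class-oriented algorithm the paper proposes.

```python
# A minimal sketch of Kleinberg-style scoring on an RDF graph with property weights
# (objectivity ~ authority, subjectivity ~ hub). Triples and weights are hypothetical.
import numpy as np

triples = [  # (subject, property, object)
    ("paperA", "cites", "paperB"),
    ("paperA", "cites", "paperC"),
    ("paperB", "cites", "paperC"),
    ("paperC", "hasAuthor", "alice"),
]
weights = {"cites": 1.0, "hasAuthor": 0.5}   # per-property influence weights

nodes = sorted({t[0] for t in triples} | {t[2] for t in triples})
idx = {n: i for i, n in enumerate(nodes)}
A = np.zeros((len(nodes), len(nodes)))
for s, p, o in triples:
    A[idx[s], idx[o]] += weights[p]          # weighted directed adjacency

auth = np.ones(len(nodes))
hub = np.ones(len(nodes))
for _ in range(50):                          # power iteration, as in HITS
    auth = A.T @ hub
    hub = A @ auth
    auth /= np.linalg.norm(auth)
    hub /= np.linalg.norm(hub)

for n in nodes:
    print(f"{n:8s} objectivity={auth[idx[n]]:.3f} subjectivity={hub[idx[n]]:.3f}")
```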

Big Data Analysis on the Perception of Home Training According to the Implementation of COVID-19 Social Distancing

  • Hyun-Chang Keum;Kyung-Won Byun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.3
    • /
    • pp.211-218
    • /
    • 2023
  • Due to the implementation of COVID-19 distancing, interest in and users of 'home training' are rapidly increasing. The purpose of this study is therefore to identify perceptions of 'home training' through big data analysis of social media channels and to provide basic data to the related business sector. Big data were collected from news and social content provided on the Naver and Google sites. Data for the three years from March 22, 2020 were collected, based on the time when COVID-19 distancing was implemented in Korea. The collected data comprised 4,000 Naver blog posts, 2,673 news articles, 4,000 cafe posts, 3,989 Knowledge iN posts, and 953 Google news items. TF and TF-IDF were analyzed through text mining, and semantic network analysis was then conducted on 70 keywords; big data analysis programs such as Textom and UCINET were used for the social big data analysis, and NetDraw was used for visualization. As a result of the text mining analysis, 'home training' was found most frequently, with a TF of 4,045. The next keywords in order were 'exercise', 'Homt', 'house', 'apparatus', 'recommendation', and 'diet'. Regarding TF-IDF, the main keywords were 'exercise', 'apparatus', 'home', 'house', 'diet', 'recommendation', and 'mat'. Based on these results, the 70 most frequent keywords were extracted, and semantic indicators and centrality were then analyzed. Finally, CONCOR analysis clustered the keywords into a 'purchase cluster', an 'equipment cluster', a 'diet cluster', and an 'execute method cluster'. Based on these four clusters and on consumers' main perceptions of 'home training' revealed by the semantic network analysis, basic data for the 'home training' business sector are presented.
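
The TF and TF-IDF rankings that precede the network and CONCOR steps can be reproduced in a few lines; the sketch below uses scikit-learn on a handful of hypothetical posts rather than the collected corpus.

```python
# A minimal sketch (hypothetical posts, not the collected corpus): corpus-level
# TF and TF-IDF keyword rankings, as used before the semantic network analysis.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

posts = [
    "home training exercise at house with mat",
    "home training apparatus recommendation for diet",
    "diet exercise recommendation home training",
]

tf = CountVectorizer().fit(posts)
tf_counts = tf.transform(posts).sum(axis=0).A1          # term frequency across the corpus
tf_rank = sorted(zip(tf.get_feature_names_out(), tf_counts), key=lambda x: -x[1])

tfidf = TfidfVectorizer().fit(posts)
tfidf_scores = tfidf.transform(posts).sum(axis=0).A1    # summed TF-IDF weights
tfidf_rank = sorted(zip(tfidf.get_feature_names_out(), tfidf_scores), key=lambda x: -x[1])

print(tf_rank[:5])
print(tfidf_rank[:5])
```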

A Study on the Metadata based on the Semantic Structure of the Korean Studies Research Articles (한국학 연구 논문의 의미 구조 기반 메타데이터 연구)

  • Song, Min-Sun;Ko, Young Man
    • Journal of Korean Library and Information Science Society
    • /
    • v.46 no.3
    • /
    • pp.277-299
    • /
    • 2015
  • The purpose of this study is to build a metadata set based on the semantic structure of Korean studies research articles. For this purpose, we analyzed related research that suggested semantic structures for research articles, categorized the concepts of the author keywords of Korean studies research articles, and derived a metadata set of 16 elements from the results of the analysis and the categorization. The significance of this study is that it proposes a semantic metadata configuration methodology which can reflect the scholarly sense-making of researchers in Korean studies. In particular, this study is significant because it reflects the keywords assigned by the actual researchers in order to examine the content characteristics of Korean studies research articles.

Improving The Performance of Triple Generation Based on Distant Supervision By Using Semantic Similarity (의미 유사도를 활용한 Distant Supervision 기반의 트리플 생성 성능 향상)

  • Yoon, Hee-Geun;Choi, Su Jeong;Park, Seong-Bae
    • Journal of KIISE
    • /
    • v.43 no.6
    • /
    • pp.653-661
    • /
    • 2016
  • Existing pattern-based triple generation systems based on distant supervision can be flawed by the distant supervision assumption. To resolve the flaws of this excessively strong assumption, previous studies have commonly used statistical information to measure the confidence of patterns. In this study, we propose a more accurate confidence measure based on the semantic similarity between patterns and properties. Unsupervised learning methods, namely word embedding and WordNet-based similarity measures, were adopted for learning the meaning of words and measuring semantic similarity. To resolve the language discordance between patterns and properties, we adopted CCA for aligning bilingual word embedding models and a translation-based approach for the WordNet-based measure. The results of our experiments indicate that the accuracy of triples filtered by the semantic similarity-based confidence measure is 16% higher than that of the statistics-based approach. These results suggest that a semantic similarity-based confidence measure is more effective than a statistics-based approach for generating high-quality triples.
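
A minimal sketch of the core idea, under the assumption that pattern and property vectors are already in a shared space (e.g., after CCA alignment): the confidence of a pattern for a property is the cosine similarity of their vectors, and triples from low-confidence patterns are filtered out. The vectors and threshold below are hypothetical.

```python
# A minimal sketch of the idea (not the paper's system): semantic similarity-based
# confidence as cosine similarity between pattern and property vectors.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings for one property and two candidate patterns
property_vec = np.array([0.9, 0.1, 0.2])        # e.g. a "birthPlace"-like property
pattern_vecs = {
    "was born in": np.array([0.85, 0.15, 0.25]),
    "works for":   np.array([0.10, 0.90, 0.30]),
}

threshold = 0.8                                   # hypothetical cut-off for keeping a triple
for pattern, vec in pattern_vecs.items():
    conf = cosine(vec, property_vec)
    print(f"{pattern!r}: confidence={conf:.2f} keep={conf >= threshold}")
```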