• Title/Summary/Keyword: Keywords Extraction

Using Text Network Analysis for Analyzing Academic Papers in Nursing (간호학 학술논문의 주제 분석을 위한 텍스트네트워크분석방법 활용)

  • Park, Chan Sook
    • Perspectives in Nursing Science, v.16 no.1, pp.12-24, 2019
  • Purpose: This study examined the suitability of text network analysis (TNA) for topic analysis of academic papers related to nursing. Methods: The background theories, software programs, and research procedures of TNA are described, and research applying TNA to topic analysis of academic nursing papers is reviewed. Results: As background theories, we explain information theory, word co-occurrence analysis, graph theory, network theory, and social network analysis. The TNA procedure comprises: 1) collection of academic articles, 2) text extraction, 3) preprocessing, 4) generation of word co-occurrence matrices, 5) social network analysis, and 6) interpretation and discussion. Conclusion: TNA using author keywords has several advantages: it can draw on recognized terms such as MeSH headings or terms chosen by professionals, and it saves time and effort. The study also emphasizes the need for a sophisticated research design that explores nursing research trends multidimensionally by applying TNA methodology.
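The six-step procedure above is concrete enough to sketch. Step 4 (generation of the word co-occurrence matrix) might look like the following in plain Python; the author-keyword lists here are invented for illustration:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_matrix(keyword_lists):
    """Count how often each pair of author keywords appears in the same paper."""
    pairs = Counter()
    for keywords in keyword_lists:
        # sort so that (a, b) and (b, a) collapse into one undirected pair
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

# toy corpus: one author-keyword list per paper (invented for illustration)
papers = [
    ["nursing", "text network analysis", "research trends"],
    ["nursing", "social network analysis", "research trends"],
    ["text network analysis", "social network analysis"],
]
matrix = cooccurrence_matrix(papers)
```

The resulting pair counts are exactly the edge weights that the social network analysis step (step 5) would take as its input.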

Keyword identifications on dimensions for service quality of Healthcare providers (헬스케어 서비스 리뷰를 활용한 서비스 품질 차원 별 중요 단어 파악 방안)

  • Lee, Hong Joo
    • Knowledge Management Research, v.19 no.4, pp.171-185, 2018
  • Studies of online reviews have typically analyzed ratings and topics as a whole; however, opinions also need to be analyzed along the individual dimensions of service quality. This study classifies reviews of healthcare services by service quality dimension and proposes a method for identifying the words mainly referred to in each dimension. The dimensions were based on SERVQUAL, and patient reviews were collected from NHS Choices. A sample of 2,000 sentences was classified into SERVQUAL dimensions, and a method for extracting important keywords from the sentences of each dimension was suggested. The RAKE algorithm extracts keywords from a single document, so an additional index is needed to account for words used frequently across many documents. Since keywords must be identified across many reviews, frequency and discrimination (IDF) were considered together, rather than ranking keywords by RAKE score alone. For each SERVQUAL dimension, we identified the words patients mainly mentioned, and we also identified the words patients mainly used at each review rating.
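The combination described above — RAKE scores tempered by inverse document frequency — can be sketched as follows. This is a simplified rendering of the idea, not the authors' exact formula, and the review phrases are invented:

```python
import math
from collections import Counter

def rake_word_scores(phrases):
    """RAKE-style word scores: degree(word) / frequency(word) over the
    candidate phrases of one review."""
    freq, degree = Counter(), Counter()
    for phrase in phrases:
        words = phrase.split()
        for w in words:
            freq[w] += 1
            degree[w] += len(words)  # a word co-occurs with every word in its phrase
    return {w: degree[w] / freq[w] for w in freq}

def idf(word, documents):
    """Inverse document frequency: discounts words common to many reviews."""
    df = sum(1 for doc in documents if word in doc.split())
    return math.log(len(documents) / (1 + df)) + 1

# toy data: candidate phrases per review (invented for illustration)
reviews = [
    ["friendly staff", "long waiting time"],
    ["friendly staff", "clean ward"],
]
docs = [" ".join(r) for r in reviews]
combined = {}
for review in reviews:
    for w, s in rake_word_scores(review).items():
        combined[w] = max(combined.get(w, 0.0), s * idf(w, docs))
```

With the IDF factor, a word like "friendly" that appears in every review is ranked below discriminative words such as "waiting", even though its raw RAKE score within a single review may be respectable.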

Similar Image Retrieval Technique based on Semantics through Automatic Labeling Extraction of Personalized Images

  • Jung-Hee, Seo
    • Journal of Information and Communication Convergence Engineering, v.22 no.1, pp.56-63, 2024
  • Despite the rapid strides in content-based image retrieval, a notable disparity persists between the visual features of images and the semantic features discerned by humans. Hence, image retrieval that associates human-recognized semantic similarity with visual similarity is a difficult task for most image-retrieval systems. Our study endeavors to bridge this gap by refining image semantics, aligning them more closely with human perception. Deep learning techniques are used to semantically classify images and retrieve those that are semantically similar to personalized images. Moreover, we introduce keyword-based image retrieval with automatic labeling of images in mobile environments. The proposed approach can improve the performance of a mobile device with limited resources and bandwidth by performing retrieval based on the visual features and keywords of the image on the device itself.

A Document Summarization System Using Dynamic Connection Graph (동적 연결 그래프를 이용한 자동 문서 요약 시스템)

  • Song, Won-Moon; Kim, Young-Jin; Kim, Eun-Ju; Kim, Myung-Won
    • Journal of KIISE: Software and Applications, v.36 no.1, pp.62-69, 2009
  • The purpose of document summarization is to provide quick and easy understanding of documents by extracting summarized information from documents produced by various application programs. In this paper, we propose a document summarization method that creates and analyzes a connection graph representing the similarity of the keyword lists of the sentences in a document, taking into account the mean sentence length (in number of keywords) of the document. We implemented a system that automatically generates a summary from a document using the proposed method. To evaluate the method, we used a set of 20 documents paired with their correct summaries and measured precision, recall, and F-measure. The experimental results show that the proposed method is more effective than existing methods.
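As a rough illustration of the idea — a graph over sentences connected by shared keywords, with the most central sentences chosen for the summary — consider the following sketch. It builds a static graph rather than the authors' dynamic connection graph, and the stopword list, threshold, and sentences are all invented:

```python
import re

STOPWORDS = {"the", "a", "of", "and", "is", "to", "in", "that", "as", "from", "was"}

def summarize(sentences, top_n=1):
    """Rank sentences by how many other sentences share keywords with them."""
    keyword_sets = [set(re.findall(r"[a-z]+", s.lower())) - STOPWORDS
                    for s in sentences]
    degree = [0] * len(sentences)
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            # connect two sentences when they share at least two keywords
            if len(keyword_sets[i] & keyword_sets[j]) >= 2:
                degree[i] += 1
                degree[j] += 1
    ranked = sorted(range(len(sentences)), key=lambda i: (-degree[i], i))
    return [sentences[i] for i in sorted(ranked[:top_n])]

sentences = [
    "Document summarization extracts key sentences from a document.",
    "A connection graph links sentences that share keywords.",
    "Graph analysis selects central sentences as the document summary.",
    "The weather was sunny today.",
]
summary = summarize(sentences, top_n=1)
```

The sentence connected to the most other sentences is taken as the summary; an off-topic sentence ends up isolated in the graph and is never selected.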

Analysis of the National Police Agency business trends using text mining (텍스트 마이닝 기법을 이용한 경찰청 업무 트렌드 분석)

  • Sun, Hyunseok; Lim, Changwon
    • The Korean Journal of Applied Statistics, v.32 no.2, pp.301-317, 2019
  • Significant research has been conducted on discovering insights from text data using statistical techniques. In this study, we analyzed text data produced by the Korean National Police Agency to identify work trends by year and to compare work characteristics among local authorities by identifying distinctive keywords in the documents each authority produced. Preprocessing appropriate to each dataset was conducted, and word frequencies were calculated for each document in order to draw meaningful conclusions. Because simple term frequency poorly captures the characteristics of keywords, the frequencies were recalculated using term frequency-inverse document frequency (TF-IDF) weights, and L2-norm normalization was used to compare word frequencies across documents. The analysis can serve as basic data for future police work improvement policies and as a method for improving the efficiency of the police service, helping to identify demand for improvements in internal work.
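The TF-IDF weighting with L2-norm normalization described above can be sketched in plain Python. The toy documents below are invented stand-ins for two authorities' reports; the study's own preprocessing of Korean text is more involved:

```python
import math
from collections import Counter

def tfidf_l2(documents):
    """Per-document TF-IDF weights, L2-normalized so that documents of
    different lengths can be compared directly."""
    n = len(documents)
    tokenized = [doc.lower().split() for doc in documents]
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    weighted = []
    for tokens in tokenized:
        tf = Counter(tokens)
        w = {t: tf[t] * (math.log(n / df[t]) + 1) for t in tf}
        norm = math.sqrt(sum(v * v for v in w.values()))
        weighted.append({t: v / norm for t, v in w.items()})
    return weighted

# toy "documents" standing in for two authorities' reports (invented)
docs = [
    "traffic enforcement traffic accident",
    "cybercrime investigation accident report",
]
weights = tfidf_l2(docs)
```

After weighting, a term distinctive to one authority ("traffic") outweighs a term shared by both ("accident"), which is exactly what makes per-authority keywords comparable.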

Emotional effect of the Covid-19 pandemic on oral surgery procedures: a social media analysis

  • Altan, Ahmet
    • Journal of Dental Anesthesia and Pain Medicine, v.21 no.3, pp.237-244, 2021
  • Background: This study aimed to analyze Twitter users' emotional tendencies regarding oral surgery procedures before and after the coronavirus disease 2019 (COVID-19) pandemic worldwide. Methods: Tweets posted in English before and after the COVID-19 pandemic were included in the study. Popular tweets in 2019 were searched using the keywords "tooth removal", "tooth extraction", "dental pain", "wisdom tooth", "wisdom teeth", "oral surgery", "oral surgeon", and "OMFS". In 2020, another search was conducted by adding the words "COVID" and "corona" to the abovementioned keywords. The emotions underlying the tweets were analyzed using CrystalFeel - Multidimensional Emotion Analysis. In this analysis, we focused on four emotions: fear, anger, sadness, and joy. Results: A total of 1240 tweets, posted before and after the COVID-19 pandemic, were analyzed. There was a statistically significant difference between the distributions of emotions before and after the pandemic (p < 0.001). While the sense of joy decreased after the pandemic, anger and fear increased. There was a statistically significant difference between the emotional valence distributions before and after the pandemic (p < 0.001). Negative emotional intensity was noted in 52.9% of the messages before the pandemic and in 74.3% of the messages after the pandemic. Positive emotional intensity was observed in 29.8% of the messages before the pandemic, but in only 10.7% of the messages after the pandemic. Conclusion: Infectious diseases such as COVID-19 may lead to mental, emotional, and behavioral changes in people. Unpredictability, uncertainty, disease severity, misinformation, and social isolation may further increase dental anxiety and fear.

Analyzing Media Bias in News Articles Using RNN and CNN (순환 신경망과 합성곱 신경망을 이용한 뉴스 기사 편향도 분석)

  • Oh, Seungbin; Kim, Hyunmin; Kim, Seungjae
    • Journal of the Korea Institute of Information and Communication Engineering, v.24 no.8, pp.999-1005, 2020
  • While search portals' aggregated 'Portal News' accounts for the largest share of news consumption, its neutrality as an outlet is questionable, because news aggregation may lead to prejudiced information consumption by recommending biased articles. In this paper, we introduce a new method of measuring the political bias of news articles using deep learning, which can offer readers support for critical thinking. We build a training dataset by extracting keywords from National Assembly proceedings, assigning a bias to each keyword, and analyzing articles' bias from those keywords. Based on these data, article bias is calculated by applying deep learning with a combination of a convolutional neural network and a recurrent neural network. Using this method, 95.6% of sentences are correctly classified as either conservative- or progressive-biased; over entire articles, the accuracy is 46.0%. This enables analyzing the conservative-progressive bias of any article, unlike previous methods that were limited to particular article subjects.
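The labeling step — assigning a bias to keywords sourced from Assembly proceedings and scoring text against them — might be sketched as below. The lexicon here is entirely hypothetical, and the sketch deliberately omits the paper's CNN/RNN model; it shows only how a keyword lexicon could yield sentence-level bias labels for training:

```python
# Hypothetical bias lexicon; the paper derives its keywords and bias labels
# from National Assembly proceedings. +1 = conservative, -1 = progressive.
BIAS_LEXICON = {
    "deregulation": +1,
    "security": +1,
    "welfare": -1,
    "labor": -1,
}

def sentence_bias(sentence):
    """Average bias of known keywords in the sentence; 0.0 if none match."""
    hits = [BIAS_LEXICON[w] for w in sentence.lower().split() if w in BIAS_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0
```

Sentences scored this way provide the labeled examples on which a neural classifier can then be trained.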

Knowledge Domain and Emerging Trends of Intelligent Green Building and Smart City - A Visual Analysis Using CiteSpace

  • Li, Hongyang; Dai, Mingjie
    • International Conference on Construction Engineering and Project Management, 2017.10a, pp.24-31, 2017
  • As the concept of sustainability becomes increasingly popular, a large body of literature on intelligent green building and smart city (IGB&SC) has accumulated recently. It is therefore necessary to systematically analyze the existing knowledge structure and the future development of this domain by identifying thematic trends, landmark articles, and typical keywords, together with co-operating researchers. In this paper, the CiteSpace software package is applied to analyze citation networks and other relevant data from the past eleven years (2006 to 2016) collected from Web of Science (WOS). A series of document analyses are conducted, including identification of core authors, the influence of the most cited authors, keyword extraction and time-zone analysis, hot research topics, and highly cited papers and trends from co-citation analysis. As a result, the development track of the IGB&SC domain is revealed and visualized, with the following findings: (i) in IGB&SC research, the most productive researcher is Winters JV, while Caragliu A is the most influential; (ii) different focuses of IGB&SC research have emerged continually from 2006 to 2016, e.g., smart growth, sustainability, smart city, and big data; (iii) Hollands's work has the most citations, and the emerging trends revealed by burst analysis of document co-citations can be summarized as smart growth and the assessment of intelligent green building and smart city.

Metadata extraction using AI and advanced metadata research for web services (AI를 활용한 메타데이터 추출 및 웹서비스용 메타데이터 고도화 연구)

  • Sung Hwan Park
    • The Journal of the Convergence on Culture Technology, v.10 no.2, pp.499-503, 2024
  • Broadcasting programs are provided not only through the broadcaster's own channels but also to various media such as Internet replay, OTT, and IPTV services. In this context, it is very important to provide search keywords that represent the characteristics of the content well. Broadcasters mainly enter key keywords manually during the production and archiving processes. This approach yields too little core metadata and limits content recommendation and reuse in other media services. This study supports securing a large volume of metadata by utilizing closed-caption data pre-archived through the DTV closed-captioning server developed at EBS. First, core metadata was extracted automatically by applying Google's natural-language AI technology. Next, as the core research contribution, we propose a method of finding core metadata that reflects priorities and content characteristics. To obtain differentiated metadata weights, importance was classified by applying the TF-IDF calculation method, and the experiment yielded successful weight data. The string metadata obtained in this study, when combined with future string-similarity measurement studies, will become the basis for securing sophisticated content-recommendation metadata for content services provided to other media.

A Study on Analysis of national R&D research trends for Artificial Intelligence using LDA topic modeling (LDA 토픽모델링을 활용한 인공지능 관련 국가R&D 연구동향 분석)

  • Yang, MyungSeok; Lee, SungHee; Park, KeunHee; Choi, KwangNam; Kim, TaeHyun
    • Journal of Internet Computing and Services, v.22 no.5, pp.47-55, 2021
  • Research trends in specific subject areas are typically analyzed by applying topic modeling to keywords extracted from literature information (papers, patents, etc.) and examining the related topics and their changes. Unlike existing approaches, this paper extracts topics by applying the LDA topic modeling technique to the project information of national R&D projects in the field of artificial intelligence provided by the National Science and Technology Knowledge Information Service (NTIS). By analyzing these topics, this study aims to characterize the research topics and investment directions of national R&D projects. NTIS provides a vast amount of national R&D information, from information on tasks carried out through national R&D projects to the research outputs (papers, patents, etc.) they generate. In this paper, search results were confirmed by searching NTIS integrated search with artificial intelligence keywords and related classifications, and basic data was constructed by downloading the latest three years of project information. Using an LDA topic modeling library provided in Python, related topics and keywords were extracted and analyzed from the basic data (research goals, research content, expected effects, keywords, etc.) to derive insights on the direction of research investment.
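To make the technique concrete, here is a minimal collapsed Gibbs sampler for LDA in plain Python — a teaching sketch with invented toy documents, not the NTIS pipeline, which the study runs with an existing Python LDA library:

```python
import random
from collections import Counter

def lda_gibbs(docs, n_topics, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Tiny collapsed Gibbs sampler for LDA over tokenized documents."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    # random initial topic for every token, plus the usual count tables
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]
    doc_topic = [Counter(zd) for zd in z]
    topic_word = [Counter() for _ in range(n_topics)]
    topic_total = [0] * n_topics
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            topic_word[z[d][i]][w] += 1
            topic_total[z[d][i]] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]  # remove this token's current assignment
                doc_topic[d][t] -= 1; topic_word[t][w] -= 1; topic_total[t] -= 1
                # resample its topic from the collapsed conditional
                weights = [
                    (doc_topic[d][k] + alpha) * (topic_word[k][w] + beta)
                    / (topic_total[k] + V * beta)
                    for k in range(n_topics)
                ]
                t = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = t
                doc_topic[d][t] += 1; topic_word[t][w] += 1; topic_total[t] += 1
    return topic_word

# toy tokenized "project descriptions" (invented for illustration)
docs = [
    ["robot", "vision", "robot", "sensor"],
    ["vision", "sensor", "robot"],
    ["drug", "patient", "drug", "trial"],
    ["patient", "trial", "drug"],
]
topics = lda_gibbs(docs, n_topics=2)
```

Each entry of `topics` is a word-count table for one topic; reading off the highest-count words per topic is what yields the topic keywords analyzed in the study.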