• Title/Summary/Keyword: Latent semantic analysis

Comparative Analysis of Vectorization Techniques in Electronic Medical Records Classification (의무 기록 문서 분류를 위한 자연어 처리에서 최적의 벡터화 방법에 대한 비교 분석)

  • Yoo, Sung Lim
    • Journal of Biomedical Engineering Research
    • /
    • v.43 no.2
    • /
    • pp.109-115
    • /
    • 2022
  • Purpose: Medical records classification using vectorization techniques plays an important role in natural language processing. The purpose of this study was to investigate proper vectorization techniques for electronic medical records classification. Material and methods: 403 electronic medical documents were extracted retrospectively and classified using the cosine similarity calculated by Scikit-learn (a Python module for machine learning) in Jupyter Notebook. Vectors for the medical documents were produced by three different vectorization techniques (TF-IDF, latent semantic analysis, and Word2Vec), and the classification precision of each technique was evaluated. The Kruskal-Wallis test was used to determine whether there was a significant difference among the three vectorization techniques. Results: The 403 medical documents covered 41 different diseases, and the average number of documents per diagnosis was 9.83 (standard deviation = 3.46). The classification precisions for the three vectorization techniques were 0.78 (TF-IDF), 0.87 (LSA), and 0.79 (Word2Vec). There was a statistically significant difference among the three vectorization techniques. Conclusions: The results suggest that removing irrelevant information (LSA) is a more efficient vectorization technique than modifying the weights of vectorization models (TF-IDF, Word2Vec) for medical document classification.
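
A minimal sketch of the three vectorization pipelines this paper compares, using scikit-learn, gensim, and SciPy. The toy corpus, labels, and parameter choices (e.g., the number of LSA components) are illustrative assumptions, not the authors' exact setup or code.

```python
# Sketch (not the authors' code): TF-IDF, LSA (TruncatedSVD over TF-IDF),
# and Word2Vec document vectors, each scored with a cosine-similarity
# nearest-neighbour classifier, then compared with the Kruskal-Wallis test.
import numpy as np
from scipy.stats import kruskal
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
from gensim.models import Word2Vec

# Toy stand-ins for the 403 de-identified medical documents and diagnoses.
docs = ["chest pain with dyspnea", "fracture of left femur",
        "acute chest pain radiating to arm", "femur fracture after fall"]
labels = np.array(["cardio", "ortho", "cardio", "ortho"])

def nn_scores(X, y):
    """Leave-one-out hit/miss of a cosine-similarity nearest-neighbour classifier
    (a stand-in for the paper's per-document precision measure)."""
    sims = cosine_similarity(X)
    np.fill_diagonal(sims, -np.inf)          # exclude the document itself
    pred = y[np.argmax(sims, axis=1)]
    return (pred == y).astype(float)

# 1) TF-IDF
tfidf = TfidfVectorizer().fit_transform(docs)
p_tfidf = nn_scores(tfidf, labels)

# 2) LSA: truncated SVD on the TF-IDF matrix (dimension is an assumption)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
p_lsa = nn_scores(lsa, labels)

# 3) Word2Vec: average the word vectors of each document
tokens = [d.split() for d in docs]
w2v = Word2Vec(tokens, vector_size=50, min_count=1, seed=0)
doc_vecs = np.array([np.mean([w2v.wv[t] for t in toks], axis=0) for toks in tokens])
p_w2v = nn_scores(doc_vecs, labels)

# Kruskal-Wallis test (meaningful on the real 403-document data;
# on this toy corpus all scores may coincide and the test is degenerate)
try:
    print(kruskal(p_tfidf, p_lsa, p_w2v))
except ValueError as err:
    print("Kruskal-Wallis skipped on toy data:", err)
```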

Comparing Corporate and Public ESG Perceptions Using Text Mining and ChatGPT Analysis: Based on Sustainability Reports and Social Media (텍스트마이닝과 ChatGPT 분석을 활용한 기업과 대중의 ESG 인식 비교: 지속가능경영보고서와 소셜미디어를 기반으로)

  • Jae-Hoon Choi;Sung-Byung Yang;Sang-Hyeak Yoon
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.347-373
    • /
    • 2023
  • As the significance of ESG (Environmental, Social, and Governance) management grows in driving sustainable growth, this study examines and compares ESG trends and interrelationships from both corporate and societal viewpoints. Employing a combination of Latent Dirichlet Allocation (LDA) topic modeling and Semantic Network Analysis (SNA), we analyzed sustainability reports alongside corresponding social media datasets. Additionally, an in-depth examination of social media content was conducted using Joint Sentiment Topic modeling (JST), further enriched by SNA. Complementing the text mining analysis with the assistance of ChatGPT, this study identified 25 different ESG topics. It highlighted differences between companies, which aim to avoid risks and build trust, and the general public, whose concerns range from investment options to working conditions. Key terms like 'greenwashing,' 'serious accidents,' and 'boycotts' show that many people doubt how companies handle ESG issues. The findings set the foundation for a plan that serves key ESG groups, including businesses, government agencies, customers, and investors. The study also provides guidance for creating more trustworthy and effective ESG strategies, helping to direct the discussion on ESG effectiveness.
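
A minimal sketch of the LDA step used to extract topics from the two text sources, assuming scikit-learn and a hypothetical toy corpus; the JST model and ChatGPT-assisted labeling described above are not reproduced here.

```python
# Sketch (illustrative assumptions, not the authors' pipeline): fit LDA
# separately on sustainability-report text and social-media text, then list
# the top words per topic so the two perspectives can be compared.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def top_words_per_topic(texts, n_topics=3, n_top=5):
    vec = CountVectorizer()
    X = vec.fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
    vocab = vec.get_feature_names_out()
    return [[vocab[i] for i in comp.argsort()[::-1][:n_top]]
            for comp in lda.components_]

# Hypothetical snippets standing in for the two corpora.
reports = ["carbon neutrality roadmap and governance disclosure",
           "board diversity and supply chain risk management",
           "renewable energy investment and emission reduction targets"]
social  = ["greenwashing claims after the serious accident report",
           "boycott calls over working conditions at the plant",
           "is this stock still a good esg investment option"]

print("corporate topics:", top_words_per_topic(reports))
print("public topics:   ", top_words_per_topic(social))
```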

Analysis of Structured and Unstructured Data and Construction of Criminal Profiling System using LSA (LSA를 이용한 정형·비정형데이터 분석과 범죄 프로파일링 시스템 구현)

  • Kim, Yonghoon;Chung, Mokdong
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.1
    • /
    • pp.66-73
    • /
    • 2017
  • Due to the recent rapid changes in society and the widespread use of information devices, diverse digital information is utilized in a variety of economic and social analyses. Crime statistics by type of crime have been used as a major factor in crime analysis. However, statistical analysis using only structured data provides limited information to investigators and users, which hinders investigation. In this paper, structured data and unstructured data are analyzed by applying Korean Natural Language Processing (Ko-NLP) and the Latent Semantic Analysis (LSA) technique. The result is an optimized crime profile system that can be applied to crime profiling or statistical analysis.
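
A brief sketch of how the unstructured-text side of such a system could work: embed case narratives with LSA and retrieve the most similar past cases for a new incident. The case texts are hypothetical English stand-ins, and a whitespace tokenizer replaces the Ko-NLP morphological analysis used in the paper so the example stays self-contained.

```python
# Sketch (assumptions throughout, not the authors' system): LSA retrieval
# of the past cases most similar to a new incident description.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline

past_cases = [                      # hypothetical stand-ins for case narratives
    "night burglary through rear window, tools left at scene",
    "daytime pickpocketing near subway station exit",
    "burglary at closed shop, window forced with crowbar",
]
new_case = ["shop window forced at night, burglary suspected"]

# TF-IDF followed by truncated SVD = a small LSA space (dimension is an assumption)
lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2, random_state=0))
case_vecs = lsa.fit_transform(past_cases)
query_vec = lsa.transform(new_case)

sims = cosine_similarity(query_vec, case_vecs)[0]
for score, case in sorted(zip(sims, past_cases), reverse=True):
    print(f"{score:.2f}  {case}")
```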

Representation of ambiguous word in Latent Semantic Analysis (LSA모형에서 다의어 의미의 표상)

  • 이태헌;김청택
    • Korean Journal of Cognitive Science
    • /
    • v.15 no.2
    • /
    • pp.23-31
    • /
    • 2004
  • Latent Semantic Analysis (LSA; Landauer & Dumais, 1997) is a technique that represents the meanings of words using co-occurrence information of words appearing in the same context, which is usually a sentence or a document. In LSA, a word is represented as a point in a multidimensional space where each axis represents a context, and a word's meaning is determined by its frequency in each context. The space is reduced by singular value decomposition (SVD). The present study elaborates upon LSA for the representation of ambiguous words. The proposed LSA applies rotation of axes in the document space, which makes it possible to interpret the meanings of an ambiguous word. A simulation study was conducted to illustrate the performance of LSA in representing ambiguous words. In the simulation, texts containing an ambiguous word were first extracted and LSA with rotation was performed. By comparing the loading matrices, we categorized the texts according to meaning. The first meaning of an ambiguous word was represented by LSA with the matrix excluding the vectors for the other meanings, and the other meanings were represented in the same way. The simulation showed that this way of representing an ambiguous word can identify the meanings of the word. This result suggests that LSA with axis rotation can be applied to the representation of ambiguous words. We discuss how the use of rotation makes it possible to represent multiple meanings of ambiguous words and how this technique can be applied in the area of web searching.
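
A small sketch of the core idea, axis rotation applied to a reduced SVD space. The varimax criterion and the tiny term-document matrix are illustrative assumptions; the authors' exact rotation procedure may differ.

```python
# Sketch: LSA (truncated SVD) followed by varimax rotation of the reduced
# axes, so that each rotated axis aligns with one usage context of an
# ambiguous word.  The term-document matrix is a hypothetical example.
import numpy as np

def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-6):
    """Standard varimax rotation of a loading matrix Phi (p x k)."""
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = Phi @ R
        u, s, vt = np.linalg.svd(
            Phi.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0))))
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return Phi @ R

# Rows: documents, columns: terms (bank, loan, money, river, water).
# "bank" co-occurs with finance terms in docs 0-2 and river terms in docs 3-5.
X = np.array([[2, 1, 1, 0, 0],
              [1, 2, 0, 0, 0],
              [1, 0, 2, 0, 0],
              [2, 0, 0, 1, 1],
              [1, 0, 0, 2, 0],
              [1, 0, 0, 0, 2]], dtype=float)

U, S, Vt = np.linalg.svd(X, full_matrices=False)
doc_loadings = U[:, :2] * S[:2]          # documents in a 2-D LSA space
rotated = varimax(doc_loadings)          # rotate so axes align with the two senses

# After rotation the documents separate by which sense of "bank" they use.
print(np.round(rotated, 2))
```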


Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho;Kim, Myung-Kyu;Cha, Myung-Hoon;In, Joo-Ho;Chae, Soo-Hoan
    • Science of Emotion and Sensibility
    • /
    • v.13 no.1
    • /
    • pp.47-60
    • /
    • 2010
  • Most classification research has used kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), which are learning-based models, as well as the Bayesian classifier and NNA (Neural Network Algorithm), which are statistics-based methods. However, there are space and time limitations when classifying the very large number of web pages on today's internet. Moreover, most classification studies use a uni-gram feature representation, which poorly captures the real meaning of words. In Korean web page classification, additional problems arise because Korean words often have multiple meanings (polysemy). For these reasons, LSA (Latent Semantic Analysis) is proposed to classify well in this environment (large data sets and word polysemy). LSA uses SVD (Singular Value Decomposition), which decomposes the original term-document matrix into three different matrices and reduces their dimension. This creates a new low-dimensional semantic space for representing vectors, which makes classification efficient and reveals the latent meaning of words and documents (or web pages). Although LSA is good at classification, it has some drawbacks. When SVD reduces the dimensionality of the matrix and creates the new semantic space, it considers which dimensions represent vectors well, but not which dimensions discriminate between them well. This is one reason why LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA that selects the optimal dimensions to both discriminate and represent vectors well, minimizing these drawbacks and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional space. In addition, we achieve further improvement in classification by creating and selecting features, reducing stopwords, and weighting them statistically.
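
One way to realize the idea of selecting discriminative LSA dimensions, sketched with scikit-learn. The ANOVA F-score criterion, the toy documents, and the number of kept dimensions are assumptions; the paper's exact supervised selection rule may differ.

```python
# Sketch (assumption: F-score as the discrimination criterion): build an LSA
# space, score each latent dimension by how well it separates the known
# classes, and keep only the most discriminative dimensions for classification.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_selection import f_classif
from sklearn.neighbors import KNeighborsClassifier

docs = ["stock market rally and bank earnings", "league final and star striker goal",
        "interest rate cut by central bank", "transfer window and midfield signing",
        "bond yields fall on inflation data", "coach praises defensive performance"]
labels = np.array([0, 1, 0, 1, 0, 1])          # 0 = finance, 1 = sports (toy labels)

X = TfidfVectorizer().fit_transform(docs)
Z = TruncatedSVD(n_components=4, random_state=0).fit_transform(X)

# Score each LSA dimension by its class-discrimination power and keep the best k.
F, _ = f_classif(Z, labels)
k = 2
keep = np.argsort(F)[::-1][:k]
Z_sel = Z[:, keep]

clf = KNeighborsClassifier(n_neighbors=1, metric="cosine").fit(Z_sel, labels)
print("selected dimensions:", keep, "training accuracy:", clf.score(Z_sel, labels))
```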


User-Centered Document Ranking Technique using Term Association Analysis (용어 연관성 분석을 이용한 사용자 위주의 문서순위결정 기법)

  • U, Seon-Mi;Yu, Chun-Sik;Kim, Yong-Seong
    • Journal of KIISE:Software and Applications
    • /
    • v.28 no.2
    • /
    • pp.149-156
    • /
    • 2001
  • As the value of information and users' demand for information acquisition increase, the need for information retrieval systems that provide services tailored to individual users is growing. However, current information retrieval systems fall short in reflecting user preferences and providing convenience. This paper therefore proposes a user-centered document ranking technique that provides optimal documents according to their degree of relevance. To reflect an individual's preferences, a user profile is constructed and updated, and LSA (Latent Semantic Analysis) is applied to rank documents according to their relevance.
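
A minimal sketch of profile-based ranking in an LSA space; the documents, the profile text, and the update scheme are illustrative assumptions rather than the paper's implementation.

```python
# Sketch (illustrative assumptions throughout): maintain a user profile as a
# bag of preferred terms, project both profile and documents into a shared
# LSA space, and rank retrieved documents by cosine similarity.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline

docs = ["latent semantic analysis for document retrieval",
        "football league results and standings",
        "personalized search with user preference profiles",
        "semantic indexing and query expansion techniques"]

# The profile is text accumulated from documents the user preferred;
# updating it amounts to appending or re-weighting terms over time.
user_profile = "semantic retrieval personalized search preference"

lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2, random_state=0))
doc_vecs = lsa.fit_transform(docs)
profile_vec = lsa.transform([user_profile])

scores = cosine_similarity(profile_vec, doc_vecs)[0]
for rank, i in enumerate(np.argsort(scores)[::-1], start=1):
    print(rank, f"{scores[i]:.2f}", docs[i])
```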


Analysis on the Trend of The Journal of Information Systems Using TLS Mining (TLS 마이닝을 이용한 '정보시스템연구' 동향 분석)

  • Yun, Ji Hye;Oh, Chang Gyu;Lee, Jong Hwa
    • The Journal of Information Systems
    • /
    • v.31 no.1
    • /
    • pp.289-304
    • /
    • 2022
  • Purpose The development of the network and mobile industries has induced companies to invest in information systems, leading to a new industrial revolution. The Journal of Information Systems, which developed the information systems field into a theoretical and practical discipline in the 1990s, has a 30-year history of information systems research. This study aims to identify the academic value and research trends of JIS by analyzing those trends. Design/methodology/approach This study analyzes the trend of JIS by combining various methods, referred to as TLS mining analysis. TLS mining analysis consists of a series of analyses including the Term Frequency-Inverse Document Frequency (TF-IDF) weight model, Latent Dirichlet Allocation (LDA) topic modeling, and text mining with Semantic Network Analysis. First, keywords are extracted from the research data using the TF-IDF weight model; then, topic modeling is performed using the LDA algorithm to identify issue keywords. Findings The study used the summary service of published research papers provided by the Korea Citation Index to analyze JIS. 714 papers published from 2002 to 2021 were divided into two periods: 2002-2011 and 2012-2021. In the first period (2002-2011), the research trend in the information systems field focused on e-business strategies, as most companies adopted online business models. In the second period (2012-2021), data-based information technology and new industrial revolution technologies such as artificial intelligence, SNS, and mobile were the main research issues in the information systems field. In addition, keywords for improving the JIS citation index were presented.
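
A compact sketch of the TLS-mining style pipeline described above (TF-IDF keyword extraction followed by LDA topic modeling), applied separately to two publication periods. The two abstracts per period are placeholders for the 714 analyzed papers, and the topic counts are arbitrary.

```python
# Sketch (toy data, not the study's corpus): per-period TF-IDF keywords
# plus LDA topics, so the two periods can be compared side by side.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

periods = {
    "2002-2011": ["e-business strategy adoption in online retail firms",
                  "erp implementation and business process reengineering"],
    "2012-2021": ["deep learning and artificial intelligence in mobile services",
                  "social network data analytics for customer insight"],
}

for period, abstracts in periods.items():
    # TF-IDF keyword extraction: rank terms by summed weight across the period
    tfidf = TfidfVectorizer()
    X = tfidf.fit_transform(abstracts)
    vocab = tfidf.get_feature_names_out()
    top_kw = vocab[np.argsort(X.toarray().sum(axis=0))[::-1][:5]]

    # LDA topic modeling on raw term counts
    cv = CountVectorizer()
    C = cv.fit_transform(abstracts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(C)
    topics = [[cv.get_feature_names_out()[i] for i in comp.argsort()[::-1][:3]]
              for comp in lda.components_]

    print(period, "keywords:", list(top_kw), "topics:", topics)
```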

Analysis of Changes in Discourse of Major Media on Park Issues - Focusing on Newspaper Articles Published from 1995 to 2019 - (공원 이슈에 대한 주요 언론의 담론변화분석 - 1995년부터 2019년까지 신문 기사를 중심으로 -)

  • Ko, Ha-jung
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.49 no.5
    • /
    • pp.46-58
    • /
    • 2021
  • Parks became essential to people after the introduction of modern parks in Korea. Following the introduction of mayoral elections by popular vote, issues surrounding parks, such as park creation, have arisen and been publicized by the media, allowing for the formation of discourse. Accordingly, this study conducted a topic analysis by collecting news articles from major media outlets in Korea that have addressed issues related to parks since 1995, after the introduction of mayoral elections by popular vote, and analyzed changes over time in the discourse on parks through semantic network analysis. A Latent Dirichlet Allocation topic modeling analysis classified the following five topics: urban park expansion (Topic 1), historical and cultural parks (Topic 2), use programs (Topic 3), zoo events (Topic 4), and conflicts in the park creation process (Topic 5). The park-related discourse addressed by the media is as follows. First, the creation process and conflicts regarding the quantitative expansion of parks are treated as the central discourse. Second, the names of parks appear as keywords whenever a new park is created and are mentioned continuously from then on, thereby playing an important role in the formation of discourse. Third, 'residents' act as the principal agents in park-related media coverage, forming discourse about the public nature of parks. This study is significant in that it examines how parks are interpreted and how discourse is formed and changed by the media. It is expected that further research focusing on other media, such as regional and specialized magazines, will address the discourse on parks from various perspectives.
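
A small sketch of how changes in discourse over time can be read from an LDA model: average the per-article topic distributions by period. The articles, years, periods, and topic count below are hypothetical, not the study's data or settings.

```python
# Sketch (hypothetical data): fit LDA on park articles, then average the
# per-document topic proportions by period to see how each discourse
# (e.g. park expansion vs. creation conflicts) rises or falls over time.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

articles = ["new urban park expansion plan announced by city",
            "residents protest over park creation land compensation",
            "history culture park opens on former industrial site",
            "weekend family program held at the city zoo park"]
years = np.array([1996, 2005, 2012, 2019])        # hypothetical publication years

X = CountVectorizer().fit_transform(articles)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)                 # per-article topic proportions

# Mean topic share per period (two coarse periods for the toy example)
for lo, hi in [(1995, 2007), (2008, 2019)]:
    mask = (years >= lo) & (years <= hi)
    print(f"{lo}-{hi}:", np.round(doc_topics[mask].mean(axis=0), 2))
```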

Patent Technology Trends of Oral Health: Application of Text Mining

  • Hee-Kyeong Bak;Yong-Hwan Kim;Han-Na Kim
    • Journal of dental hygiene science
    • /
    • v.24 no.1
    • /
    • pp.9-21
    • /
    • 2024
  • Background: The purpose of this study was to utilize text network analysis and topic modeling to identify interconnected relationships among keywords present in patent information related to oral health, and subsequently extract latent topics and visualize them. By examining key keywords and specific subjects, this study sought to comprehend the technological trends in oral health-related innovations. Furthermore, it aims to serve as foundational material, suggesting directions for technological advancement in dentistry and dental hygiene. Methods: The data utilized in this study consisted of information registered over a 20-year period until July 31st, 2023, obtained from the patent information retrieval service, KIPRIS. A total of 6,865 patent titles related to keywords, such as "dentistry," "teeth," and "oral health," were collected through the searches. The research tools included a custom-designed program coded specifically for the research objectives based on Python 3.10. This program was used for keyword frequency analysis, semantic network analysis, and implementation of Latent Dirichlet Allocation for topic modeling. Results: Upon analyzing the centrality of connections among the top 50 frequently occurring words, "method," "tooth," and "manufacturing" displayed the highest centrality, while "active ingredient" had the lowest. Regarding topic modeling outcomes, the "implant" topic constituted the largest share at 22.0%, while topics concerning "devices and materials for oral health" and "toothbrushes and oral care" exhibited the lowest proportions at 5.5% each. Conclusion: Technologies concerning methods and implants are continually being researched in patents related to oral health, while there is comparatively less technological development in devices and materials for oral health. This study is expected to be a valuable resource for uncovering potential themes from a large volume of patent titles and suggesting research directions.
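
A brief sketch of the semantic-network step described above: build a word co-occurrence network from patent titles and rank words by centrality. The titles, whitespace tokenization, and the choice of degree centrality are simplifying assumptions (the study analyzed 6,865 Korean patent titles with a custom Python 3.10 program).

```python
# Sketch (toy titles, not the KIPRIS data): co-occurrence network of words
# within each patent title, with degree centrality as the ranking measure.
from itertools import combinations
import networkx as nx

titles = ["method for manufacturing dental implant fixture",
          "oral care toothbrush with replaceable head",
          "implant surgical guide manufacturing method",
          "toothpaste composition containing active ingredient"]

G = nx.Graph()
for title in titles:
    words = set(title.split())
    for w1, w2 in combinations(sorted(words), 2):
        # accumulate co-occurrence counts as edge weights
        if G.has_edge(w1, w2):
            G[w1][w2]["weight"] += 1
        else:
            G.add_edge(w1, w2, weight=1)

centrality = nx.degree_centrality(G)
for word, c in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"{word:15s} {c:.2f}")
```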

Article Recommendation based on Latent Place Topic (장소에 내재된 토픽 기반 기사 추천)

  • Noh, Yunseok;Son, Jung-Woo;Park, Seong-Bae;Park, Se-Young;Lee, Sang-Jo
    • Annual Conference on Human and Language Technology
    • /
    • 2011.10a
    • /
    • pp.41-46
    • /
    • 2011
  • With the popularization of smartphones, services that provide content using the built-in GPS are steadily increasing. However, if such content is composed based only on latitude and longitude coordinates, the semantic characteristics of the location are not properly reflected. To provide services appropriate to a user's location, the topics of the place must be considered. This paper proposes an article recommendation method based on the topics latent in a place. The topics of a place are represented from documents related to that place, and those topics are used for article recommendation. We show that the proposed method reflects the topics latent in a place well and, based on this, recommends appropriate articles related to the place.
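
A minimal sketch of the recommendation idea: represent a place by the topic distribution of documents about it, then rank articles by topic similarity. The documents, articles, topic count, and cosine-similarity matching are assumptions for illustration, not the authors' exact model.

```python
# Sketch (all data hypothetical): derive a place's LDA topic distribution
# from place-related documents and recommend the articles whose topic
# distributions are closest to it.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

place_docs = ["stadium hosts league match and concert events",
              "fans gather at the stadium for the championship game"]
articles = ["star striker signs new contract before the final",
            "city council debates new library funding",
            "concert tour adds extra date at the main arena"]

cv = CountVectorizer()
X = cv.fit_transform(place_docs + articles)          # shared vocabulary
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)                         # topic distributions

place_topic = theta[:len(place_docs)].mean(axis=0, keepdims=True)
article_topics = theta[len(place_docs):]

scores = cosine_similarity(place_topic, article_topics)[0]
for i in np.argsort(scores)[::-1]:
    print(f"{scores[i]:.2f}  {articles[i]}")
```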
