• Title/Summary/Keyword: Text clustering

Making a Science Map of Korea (국내 광역 과학 지도 생성 연구)

  • Lee, Jae-Yun
    • Journal of the Korean Society for Information Management / v.24 no.3 / pp.363-383 / 2007
  • A global map of science, which visualizes large scientific domains, can be used to visually analyze the structural relationships between major areas of science. This paper reviews previous efforts at global science mapping and then attempts to make a science map of Korea with some new methods. Several research groups have worked on global maps of science, including Dr. Small and Dr. Garfield at ISI (now Thomson Scientific), the SCImago research group at the University of Granada, and Dr. Borner's InfoVis Lab at Indiana University. They call their maps science maps or scientograms and call the activity of mapping science scientography. Most previous works are based on citations between scientific articles; however, a citation database for Korean journal articles is still under construction. This research therefore builds a Korean science map from the text of proposals submitted for funding to the Korea Research Foundation. Two methods for generating networks of scientific fields are used: the Pathfinder network (PFNet) algorithm, which has been used in several published bibliometric studies, and the clustering-based network (CBnet) algorithm, which was recently proposed as an alternative to PFNet. To take both algorithms' views into account, the resulting maps are combined into a final science map of Korea.
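
The abstract above describes a text-based, rather than citation-based, map of science. As a rough illustration of that idea (not the PFNet or CBnet algorithms the authors actually apply), the sketch below builds a small field-similarity network from hypothetical proposal texts using TF-IDF and cosine similarity, keeping only each field's strongest link as a crude stand-in for network pruning.

```python
# Illustrative sketch, assuming each research field is represented by the concatenated
# text of its funding proposals (placeholder data). This is NOT the paper's PFNet/CBnet
# algorithms, only a crude nearest-neighbour network showing the text-to-map pipeline.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

field_texts = {
    "physics": "quantum measurement spin lattice superconductivity",
    "chemistry": "catalyst polymer synthesis reaction spectroscopy",
    "biology": "gene expression protein cell sequencing",
}  # hypothetical proposal texts per field

names = list(field_texts)
docs = [field_texts[n] for n in names]
sim = cosine_similarity(TfidfVectorizer().fit_transform(docs))

G = nx.Graph()
G.add_nodes_from(names)
for i, a in enumerate(names):
    # keep only each field's single strongest link (crude stand-in for network pruning)
    j = max((k for k in range(len(names)) if k != i), key=lambda k: sim[i, k])
    G.add_edge(a, names[j], weight=float(sim[i, j]))

print(G.edges(data=True))
```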

Improvement of Naturalness for a HMM-based Korean TTS using the prosodic boundary information (운율경계정보를 이용한 HMM기반 한국어 TTS 자연성 향상 연구)

  • Lim, Gi-Jeong; Lee, Jung-Chul
    • Journal of the Korea Society of Computer and Information / v.17 no.9 / pp.75-84 / 2012
  • HMM-based text-to-speech systems generally use context-dependent triphone units from a large-corpus speech DB to enhance the synthetic speech. To downsize the large corpus, acoustically similar triphone units are clustered by a decision tree using context-dependent information. The context-dependent information includes the phoneme sequence as well as prosodic information, because the naturalness of synthetic speech depends strongly on prosody such as pauses, intonation patterns, and segmental durations. However, if the prosodic information is complicated, many context-dependent phonemes have no examples in the training data, and clustering produces smoothed features that generate unnatural synthetic speech. In this paper, instead of complicated prosodic information, we propose three simple prosodic boundary types (rising tone, falling tone, and monotonic tone) and corresponding decision-tree questions to improve naturalness. Experimental results show that the proposed method improves the naturalness of an HMM-based Korean TTS and achieves a high MOS in perception tests.
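
To illustrate the three prosodic boundary types the abstract proposes, here is a minimal sketch (an assumption-laden stand-in, not the paper's HMM/decision-tree implementation) that labels a phrase-final F0 contour as rising, falling, or monotonic; such labels could then drive decision-tree questions when clustering context-dependent states.

```python
# Illustrative sketch only: the slope heuristic and threshold are assumptions.
import numpy as np

def boundary_tone(f0_contour, threshold_hz=10.0):
    """Classify a phrase-final F0 contour (Hz) as rising, falling, or monotonic."""
    f0 = np.asarray(f0_contour, dtype=float)
    f0 = f0[f0 > 0]                              # drop unvoiced frames (F0 == 0)
    head, tail = f0[: len(f0) // 2], f0[len(f0) // 2:]
    delta = tail.mean() - head.mean()
    if delta > threshold_hz:
        return "rising"
    if delta < -threshold_hz:
        return "falling"
    return "monotonic"

# Such labels could appear in decision-tree questions like
# "is the right boundary tone rising?" when clustering context-dependent HMM states.
print(boundary_tone([120, 122, 125, 130, 140, 150]))   # -> "rising"
```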

Trends Analysis on Research Articles of the Sharing Economy through a Meta Study Based on Big Data Analytics (빅데이터 분석 기반의 메타스터디를 통해 본 공유경제에 대한 학술연구 동향 분석)

  • Kim, Ki-youn
    • Journal of Internet Computing and Services / v.21 no.4 / pp.97-107 / 2020
  • This study conducts a comprehensive meta-study, from the perspective of content analysis, to explore trends in Korean academic research on the sharing economy using big data analytics. A comprehensive meta-analysis methodology can examine the entire body of research results historically and as a whole to illuminate the tendencies and properties of the overall research trend. Academic research related to the sharing economy first appeared in 2008, the year Professor Lawrence Lessig introduced the concept to the world, but research began in earnest in 2013. In particular, between 2006 and 2008, research increased dramatically. To grasp the overall flow of domestic academic research trends, eight years of papers from 2013 to the present were selected for analysis, focusing on titles, keywords, and abstracts drawn from electronic journal databases. Big data analysis was performed in the order of cleaning, analysis, and visualization of the collected data to derive research trends and insights by year and type of literature. Python 3.7 and Textom were used for data preprocessing, text mining, and frequency analysis for keyword extraction; N-gram charts, centrality and social network analysis, and CONCOR clustering visualization were produced with UCINET6/NetDraw and Textom, and the keywords clustered into eight groups were used to derive a typology of each research trend. The outcomes of this study provide useful theoretical insights and guidelines for future studies.
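
As a hedged illustration of the pipeline described above (keyword frequencies, a co-occurrence network, and clustering), the sketch below uses toy abstracts and open-source tools; it does not reproduce the Textom/UCINET6 CONCOR analysis the study actually performed.

```python
# Illustrative sketch with placeholder documents, not the study's corpus or tools.
from collections import Counter
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

abstracts = [
    "sharing economy platform accommodation regulation",
    "sharing economy platform mobility trust",
    "collaborative consumption trust regulation",
]  # placeholder documents

tokens = [doc.split() for doc in abstracts]
freq = Counter(w for doc in tokens for w in doc)          # keyword frequencies

G = nx.Graph()
for doc in tokens:
    for a, b in combinations(sorted(set(doc)), 2):        # within-document co-occurrence
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

clusters = greedy_modularity_communities(G, weight="weight")
print(freq.most_common(5))
print([sorted(c) for c in clusters])
```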

Automatic Clustering of Same-Name Authors Using Full-text of Articles (논문 원문을 이용한 동명 저자 자동 군집화)

  • Kang, In-Su; Jung, Han-Min; Lee, Seung-Woo; Kim, Pyung; Goo, Hee-Kwan; Lee, Mi-Kyung; Goo, Nam-Ang; Sung, Won-Kyung
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.652-656 / 2006
  • Bibliographic information retrieval systems require bibliographic data such as authors, organizations, and sources of publication to be uniquely identified by keys. In particular, when authors are represented simply by their names, users bear the burden of manually discriminating between different authors with the same name. Previous approaches to the same-name author problem rely on bibliographic data such as co-author information and article titles. However, these methods cannot handle single-author articles or articles whose titles share no common terms. To complement the previous methods, this study introduces a classification-based approach that uses similarity between the full texts of articles. Experiments on recent domestic proceedings show that the proposed method has the potential to supplement previous metadata-based approaches.
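
A minimal sketch of the full-text-similarity idea, with placeholder texts and an assumed distance threshold; it is not the authors' system, only an illustration of clustering same-name articles by document similarity.

```python
# Illustrative sketch: group articles carrying the same author name by full-text similarity.
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

papers_by_kim = [
    "deep learning image segmentation convolutional network",
    "convolutional network object detection image",
    "soil nitrogen crop yield fertilizer experiment",
]  # placeholder full texts of articles sharing one author name

X = TfidfVectorizer().fit_transform(papers_by_kim)
dist = cosine_distances(X)

labels = AgglomerativeClustering(
    n_clusters=None, metric="precomputed", linkage="average", distance_threshold=0.8
).fit_predict(dist)
print(labels)   # articles with the same label are treated as the same person
```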

Analysis method of patent document to Forecast Patent Registration (특허 등록 예측을 위한 특허 문서 분석 방법)

  • Koo, Jung-Min; Park, Sang-Sung; Shin, Young-Geun; Jung, Won-Kyo; Jang, Dong-Sik
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.4 / pp.1458-1467 / 2010
  • Recently, imitation and infringement of intellectual property rights have come to be recognized as impediments to national industrial growth. To prevent the huge losses these impediments cause, many researchers are studying the protection and efficient management of intellectual property in various ways. In particular, predicting patent registration is a very important part of protecting and asserting intellectual property rights. In this study, we propose a patent document analysis method that uses text mining to predict whether a patent will be registered or rejected. The proposed method first builds a database from the word frequencies of rejected patent documents; comparing other patent documents against this database then yields a similarity value between each patent document and the database. We used k-means, a partitioning clustering algorithm, to select the criterion value for patent rejection. As a result, we found that patents similar to rejected patents have a strong possibility of rejection. US patent documents on Bluetooth technology, solar battery technology, and display technology were used as experimental data.
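
The sketch below illustrates, under assumptions, the pipeline the abstract outlines: a word-frequency profile of rejected patents, similarity scores for new documents, and k-means over the scores to separate high-risk from low-risk patents. All documents are placeholders, not the authors' data or code.

```python
# Illustrative sketch with toy patent snippets.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

rejected = ["bluetooth pairing antenna interference", "bluetooth antenna power fail"]
candidates = ["bluetooth antenna interference handset",
              "solar battery electrode efficiency",
              "display panel pixel driver circuit"]

vec = TfidfVectorizer().fit(rejected + candidates)
rejected_profile = np.asarray(vec.transform(rejected).mean(axis=0))   # "database" profile
scores = cosine_similarity(vec.transform(candidates), rejected_profile).ravel()

# k-means (k=2) splits the similarity scores into low-risk / high-risk groups
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores.reshape(-1, 1))
risky = km.cluster_centers_.argmax()
print([(doc, round(float(s), 2), int(lbl == risky))
       for doc, s, lbl in zip(candidates, scores, km.labels_)])
```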

A Semi-Noniterative VQ Design Algorithm for Text Dependent Speaker Recognition (문맥종속 화자인식을 위한 준비반복 벡터 양자기 설계 알고리즘)

  • Lim, Dong-Chul; Lee, Haing-Sei
    • The KIPS Transactions: Part B / v.10B no.1 / pp.67-72 / 2003
  • In this paper, we study an enhanced VQ (vector quantization) design for text-dependent speaker recognition. Concretely, we present a method that builds a vector quantization codebook without iterative learning, so that the computational complexity is dramatically reduced. The proposed semi-noniterative VQ design contrasts with the existing design method, which applies iterative learning to every training speaker. Its characteristics are as follows. First, the proposed method performs iterative learning only for the reference speaker, whereas the existing method performs it for every speaker. Second, the quantization regions of a non-reference speaker are taken to be the quantization regions of the reference speaker, and each quantization point of the non-reference speaker is the optimal point for that speaker's statistical distribution. In the numerical experiments, we use 12th-order mel-cepstrum feature vectors of 20 speakers and compare the method with the existing one, varying the codebook size from 2 to 32. The recognition rate of the proposed method is 100% for a suitable codebook size and adequate training data, equal to that of the existing method. Therefore, the proposed semi-noniterative VQ design method is a new alternative that reduces computational complexity while maintaining the recognition rate.
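
A rough sketch of the semi-noniterative idea, assuming k-means as the iterative codebook trainer for the reference speaker and random placeholder features; the non-reference speaker's codewords are then obtained in a single pass over the reference regions. It is not the paper's implementation.

```python
# Illustrative sketch with random placeholder feature data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ref_features = rng.normal(size=(500, 12))            # e.g., 12th-order mel-cepstra, reference speaker
spk_features = rng.normal(loc=0.3, size=(400, 12))   # non-reference speaker

codebook_size = 16
ref_vq = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(ref_features)

# Non-iterative step: assign the non-reference speaker's vectors to the reference regions
# and take one mean per region as that speaker's codeword.
regions = ref_vq.predict(spk_features)
spk_codebook = np.vstack([
    spk_features[regions == k].mean(axis=0) if np.any(regions == k) else ref_vq.cluster_centers_[k]
    for k in range(codebook_size)
])
print(spk_codebook.shape)   # (16, 12)
```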

An Analysis of the Research Trends for Urban Study using Topic Modeling (토픽모델링을 이용한 도시 분야 연구동향 분석)

  • Jang, Sun-Young; Jung, Seunghyun
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.3 / pp.661-670 / 2021
  • Research trends can be used to determine the importance of research topics by period, identify under-researched fields, and discover new fields. In this study, research trends on urban space, where various problems are arising from population concentration and urbanization, were analyzed by topic modeling. The analysis targets were the abstracts of papers listed in the Korea Citation Index (KCI) published between 2002 and 2019. Topic modeling is an algorithm-based text mining technique that can discover patterns across an entire body of content and lends itself readily to clustering. In this study, keyword frequency, trends by year, topic derivation, clusters by topic, and trends by topic type were analyzed. Research on urban regeneration is increasing continuously and was identified as a field where detailed topics could expand in the future; urban regeneration is now becoming an established research field. On the other hand, topics related to development/growth and energy/environment have entered a period of stagnation. This study is meaningful in that the correlations and trends among keywords were analyzed using topic modeling across all domestic urban studies.
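
As an illustration of the topic-modeling step described above (not the study's actual KCI corpus or settings), the sketch below fits a small LDA model to toy abstracts and prints the top terms per topic.

```python
# Illustrative sketch; topic count and vectorizer settings are assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "urban regeneration community participation declining district",
    "urban regeneration policy local government budget",
    "energy environment carbon emission building efficiency",
    "urban growth development land use planning",
]  # placeholder abstracts

vec = CountVectorizer()
X = vec.fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for t, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[::-1][:4]]
    print(f"topic {t}: {top}")
```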

Analysis of Changes in Restaurant Attributes According to the Spread of Infectious Diseases: Application of Text Mining Techniques (감염병 확산에 따른 레스토랑 선택속성 변화 분석: 텍스트마이닝 기법 적용)

  • Joonil Yoo; Eunji Lee; Chulmo Koo
    • Information Systems Review / v.25 no.4 / pp.89-112 / 2023
  • In March 2020, when COVID-19 was declared a pandemic, various quarantine measures were taken, and many changes occurred in the tourism and hospitality industries. In particular, quarantine guidelines such as the introduction of non-face-to-face services and social distancing were implemented in the restaurant industry. For decades, research on restaurant attributes has emphasized the importance of three attributes: atmosphere, service quality, and food quality. Nevertheless, to the best of our knowledge, research on restaurant attributes that considers the COVID-19 situation is insufficient. In response, this study takes an exploratory approach to classifying new restaurant attributes based on an understanding of these environmental changes. The analysis unit was 31,115 online reviews registered on Naver Place for 475 general restaurants located in Euljiro, Seoul. We classified restaurant attributes by clustering the words within the online reviews using TF-IDF and LDA topic modeling. As a result of the analysis, a "prevention of infectious diseases" factor was derived as a new restaurant attribute in the COVID-19 context, alongside atmosphere, service quality, and food quality. This study has academic significance in that it expands the literature on restaurant attributes by recovering the three established attributes and further identifying a new one. The results also lead to practical recommendations covering both restaurant operations and policy implications.
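
A minimal sketch of the TF-IDF keyword-extraction step that precedes the LDA topic modeling described above, using toy reviews rather than the Naver Place data.

```python
# Illustrative sketch with placeholder online reviews.
from sklearn.feature_extraction.text import TfidfVectorizer

reviews = [
    "masks sanitizer partition staff checked temperature",
    "cozy atmosphere lighting music interior",
    "pasta flavour fresh ingredients generous portion",
]  # placeholder reviews suggesting infection prevention, atmosphere, and food quality

vec = TfidfVectorizer()
X = vec.fit_transform(reviews)
terms = vec.get_feature_names_out()

for i in range(len(reviews)):
    row = X[i].toarray().ravel()
    top = [terms[j] for j in row.argsort()[::-1][:3]]
    print(f"review {i}: {top}")   # top TF-IDF keywords per review, later fed into topic modeling
```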

Categorizing Sub-Categories of Mobile Application Services using Network Analysis: A Case of Healthcare Applications (네트워크 분석을 이용한 애플리케이션 서비스 하위 카테고리 분류: 헬스케어 어플리케이션 중심으로)

  • Ha, Sohee; Geum, Youngjung
    • The Journal of Society for e-Business Studies / v.25 no.3 / pp.15-40 / 2020
  • Due to the explosive growth of mobile application services, categorizing them is needed in practice from both customers' and developers' perspectives. Nevertheless, there have been few studies on the systematic categorization of mobile application services. In response, this study proposes a method for categorizing mobile application services and suggests a service taxonomy based on network clustering results. A total of 1,607 mobile healthcare services were collected from the Google Play store. A network analysis was conducted based on the similarity of the descriptions of the application services, modularity detection was applied to detect communities in the network, and a service taxonomy was derived from the resulting clusters. This study is expected to provide a systematic approach to service categorization that is helpful both to customers who want to navigate mobile application services systematically and to developers who wish to analyze trends in mobile application services.
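
The sketch below illustrates the description-similarity network and modularity-based community detection the abstract describes, with hypothetical app names, placeholder descriptions, and an assumed edge threshold; it is not the authors' pipeline.

```python
# Illustrative sketch with toy app descriptions, not the 1,607 Google Play services.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

apps = {
    "StepCounter": "track daily steps walking distance calories",
    "RunMate": "running tracker distance pace calories",
    "SleepWell": "sleep tracking bedtime alarm quality",
    "DreamLog": "sleep diary bedtime record quality",
}  # hypothetical app names and descriptions

names = list(apps)
sim = cosine_similarity(TfidfVectorizer().fit_transform([apps[n] for n in names]))

G = nx.Graph()
G.add_nodes_from(names)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if sim[i, j] > 0.1:                       # assumed threshold for drawing an edge
            G.add_edge(names[i], names[j], weight=float(sim[i, j]))

print([sorted(c) for c in greedy_modularity_communities(G, weight="weight")])
```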

Identify the Failure Mode of Weapon System (or equipment) using Machine Learning (Machine Learning을 이용한 무기 체계(or 구성품) 고장 유형 식별)

  • Park, Yun-Kyung; Lee, Hye-Won; Kim, Sang-Moon
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.8 / pp.64-70 / 2018
  • The development of weapon systems (or components) is constrained by the limited number of tests allowed by the development period and cost, which reduces the amount of accumulated failure data. Nevertheless, because a large amount of failure data and maintenance records from the operational period is kept in computerized form, the causes of weapon system (or component) failures can be analyzed from these data. Analyzing the failure and maintenance records of various weapon systems is difficult, however, because they vary across organizations and companies, and the causes of failure are described as unstructured text. Fortunately, recent developments in big data processing technology and machine learning algorithms, together with improved hardware computation ability, have enabled a range of research into processing such unstructured data. In this paper, the unstructured data related to the failure and maintenance of defense weapon systems (or components) are analyzed by applying doc2vec, a machine learning technique, to the failure cases.
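
As a hedged illustration of the doc2vec-then-cluster idea (toy maintenance notes, not defense data; hyperparameters are assumptions), the sketch below embeds short failure descriptions with gensim's Doc2Vec and groups them with k-means.

```python
# Illustrative sketch with placeholder failure/maintenance descriptions.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans

notes = [
    "hydraulic pump pressure drop leak seal replaced",
    "pump seal leak oil pressure low",
    "radar antenna motor bearing noise vibration",
    "antenna bearing worn vibration motor replaced",
]  # placeholder maintenance notes

docs = [TaggedDocument(words=n.split(), tags=[i]) for i, n in enumerate(notes)]
model = Doc2Vec(docs, vector_size=32, min_count=1, epochs=50, seed=1)

vectors = [model.dv[i] for i in range(len(notes))]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)   # notes with the same label are treated as the same failure type
```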