• Title/Summary/Keyword: Search Term Frequency Data

A Study on Change in Perception of Community Service and Demand Prediction based on Big Data

  • Chun-Ok, Jang
    • International Journal of Advanced Culture Technology, v.10 no.4, pp.230-237, 2022
  • The Community Social Service Investment project started as a state subsidy project in 2007 and has grown very rapidly in quantitative terms in a short period of time. It is a bottom-up project that discovers people's welfare needs and plans and provides services suited to them. The purpose of this study is to use big data analysis to determine the social response to community service investment projects. To this end, data were collected and analyzed by crawling Google and Naver with the specific keyword "community service investment project". The analysis covered monthly search volume, related keywords, search rates by age, and search rates by gender. As a result, ten related keywords were found on Google and three on Naver. The overall results for Google and Naver differed slightly, but they rose and fell at almost the same times. It can therefore be seen that the community service investment project continues to attract users' interest.
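
As an illustration of the kind of collection described above, the sketch below pulls monthly Google search volume and related queries for a keyword using pytrends, an unofficial Google Trends client. It is not the study's actual crawler: the keyword string, time range, and geography are assumptions, and the Naver side of the collection and the age/gender breakdown are omitted.

```python
from pytrends.request import TrendReq

# Rough sketch only: keyword and timeframe are assumptions, not the study's settings.
pytrends = TrendReq(hl="ko", tz=540)
pytrends.build_payload(["지역사회서비스투자사업"], timeframe="today 5-y", geo="KR")

# Relative search interest, resampled to monthly averages.
monthly = pytrends.interest_over_time().resample("M").mean(numeric_only=True)
# Related keywords per search term ('top' and 'rising' tables).
related = pytrends.related_queries()

print(monthly.head())
print(related)
```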

Predicting the Number of Confirmed COVID-19 Cases Using Deep Learning Models with Search Term Frequency Data (검색어 빈도 데이터를 반영한 코로나 19 확진자수 예측 딥러닝 모델)

  • Sungwook Jung
    • KIPS Transactions on Software and Data Engineering, v.12 no.9, pp.387-398, 2023
  • The COVID-19 outbreak has significantly impacted human lifestyles and behavior patterns. Because COVID-19 spreads through the air as well as through droplets or aerosols, people were advised to avoid face-to-face contact and overcrowded indoor places as much as possible. Therefore, if a person who has been in contact with a COVID-19 patient, or was at a place where a case occurred, is concerned about having been infected, it can reasonably be expected that he or she will search for COVID-19 symptoms on Google. In this study, an exploratory data analysis using deep learning models (DNN and LSTM) was conducted to see whether the number of confirmed COVID-19 cases could be predicted by once again drawing on Google Trends, which played a major role in influenza surveillance and management, and combining it with data on confirmed COVID-19 cases. Notably, the search term frequency data used in this study are publicly available and do not infringe on privacy. When the deep neural network model was applied, Seoul (9.6 million), the most populous city in South Korea, and Busan (3.4 million), the second most populous, recorded lower error rates when forecasts included search term frequency data. These results demonstrate that search term frequency data plays an important role for cities with populations above a certain size. We also hope that such predictions can serve as evidence for policy decisions, such as relaxing or strengthening preventive measures.
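
The following is a minimal sketch of the LSTM variant described above: a model that forecasts the next day's confirmed-case count from a sliding window of past case counts and search term frequency. The window length, layer sizes, and the randomly generated placeholder arrays are assumptions, not the paper's configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

# Placeholders standing in for daily confirmed cases and an aligned
# Google Trends series for a symptom keyword (both hypothetical).
cases = np.random.rand(500)
trends = np.random.rand(500)

def make_windows(cases, trends, lookback=14):
    X, y = [], []
    for t in range(lookback, len(cases)):
        # Two features per time step: past case counts and search frequency.
        X.append(np.stack([cases[t - lookback:t], trends[t - lookback:t]], axis=-1))
        y.append(cases[t])
    return np.array(X), np.array(y)

X, y = make_windows(cases, trends)

model = models.Sequential([
    layers.Input(shape=(14, 2)),   # 14-day window, 2 features
    layers.LSTM(32),
    layers.Dense(1),               # next-day case count
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
```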

Does the general public have concerns with dental anesthetics?

  • Razon, Jonathan;Mascarenhas, Ana Karina
    • Journal of Dental Anesthesia and Pain Medicine, v.21 no.2, pp.113-118, 2021
  • Background: Over the last two decades, consumers and patients have increasingly turned to internet search engines, including Google, for information. Google Trends records searches done using the Google search engine; it is free and provides data on search terms and related queries. One recent study found a large public interest in "dental anesthesia". In this paper, we further explore this interest in "dental anesthesia" and assess whether any patterns emerge. Methods: In this study, Google Trends and the search term "dental pain" were used to record consumer interest over a five-year period. Additionally, using the search term "dental anesthesia," a list of the top ten related queries was generated. Queries are grouped into two sections, a "top" category and a "rising" category. We then added further search terms: wisdom tooth anesthesia, wisdom tooth general anesthesia, dental anesthetics, local anesthetic, dental numbing, anesthesia dentist, and dental pain. From the related queries generated for each search term, repeated themes were grouped together and ranked according to the total sum of their relative search frequency (RSF) values. Results: Over the five-year period, Google Trends data show a 1.5% increase in the search term "dental pain". The related queries for dental anesthesia indicate a large public interest in how long local anesthetics last (total RSF = 231), even more so than in potential side effects or toxicities (total RSF = 83). Conclusion: Based on these results, it is recommended that clinicians clearly advise their patients on how long local anesthetics last to better manage patient expectations.
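
A rough reconstruction of the query workflow with pytrends, an unofficial Google Trends client, is sketched below; the exact time range, the grouping of repeated themes, and the full term list are simplified relative to the paper's method.

```python
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
terms = ["dental anesthesia", "dental numbing", "dental pain"]  # subset of the paper's terms

rsf_by_query = {}
for term in terms:
    pytrends.build_payload([term], timeframe="today 5-y")
    top = pytrends.related_queries()[term]["top"]   # DataFrame: query, value (relative search frequency)
    if top is None:
        continue
    for _, row in top.iterrows():
        # Sum relative search frequency for queries that recur across terms.
        rsf_by_query[row["query"]] = rsf_by_query.get(row["query"], 0) + row["value"]

# Rank recurring related queries by their total RSF.
for query, rsf in sorted(rsf_by_query.items(), key=lambda kv: -kv[1]):
    print(rsf, query)
```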

An AI Approach with Tabu Search to Solve Multi-level Knapsack Problems: Using Cycle Detection, Short-term and Long-term Memory

  • Ko, Il-Sang
    • Journal of the Korean Operations Research and Management Science Society, v.22 no.3, pp.37-58, 1997
  • An AI approach with tabu search is designed to solve multi-level knapsack problems. The approach performs intelligent actions using memories of historical data and learning effects. These actions are developed not only by observing the attributes of the optimal solution, the solution space, and the corresponding path to the optimal, but also by applying human intelligence, experience, and intuition to the search strategies. The approach intensifies or diversifies the search process appropriately in time and space. In order to create a good neighborhood structure, it uses two powerful choice rules that emphasize the impact of candidate variables on the current solution with respect to their profit contribution. "Pseudo moves", similar to "aspirations", support these choice rules during the evaluation process. To visit as many relevant points as possible, strategic oscillation between feasible and infeasible solutions around the boundary is applied. To avoid redundant moves, short-term (tabu lists), intermediate-term (cycle detection), and long-term (recording frequency and significant solutions for diversification) memories are used. Test results show that, among 45 generated problems that pose significant or insurmountable challenges to exact methods, the approach produces the optimal solution in 39 cases.
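
The sketch below illustrates only the short-term tabu list and the aspiration-style override on a single 0/1 knapsack with made-up data; the paper's multi-level formulation, choice rules, strategic oscillation, cycle detection, and long-term memory are not reproduced.

```python
# Toy 0/1 knapsack instance (values are invented).
profits  = [10, 13, 7, 8, 12, 9, 6]
weights  = [ 5,  7, 3, 4,  6, 5, 2]
capacity = 18
n = len(profits)

def value(sol):
    w = sum(weights[i] for i in range(n) if sol[i])
    p = sum(profits[i] for i in range(n) if sol[i])
    return p if w <= capacity else -1            # infeasible solutions score worst

cur = [0] * n
best, best_val = cur[:], value(cur)
tabu = {}                                        # item index -> iteration until which flipping it stays tabu

for it in range(200):
    candidates = []
    for i in range(n):
        neigh = cur[:]
        neigh[i] = 1 - neigh[i]                  # flip move: add or drop item i
        v = value(neigh)
        # Aspiration: a tabu move is admitted if it beats the best value found so far.
        if tabu.get(i, -1) <= it or v > best_val:
            candidates.append((v, i, neigh))
    v, i, neigh = max(candidates)                # best admissible neighbour
    cur = neigh
    tabu[i] = it + 3                             # short-term memory: item i is tabu for a few iterations
    if v > best_val:
        best, best_val = neigh[:], v

print("best value:", best_val, "items:", best)
```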

Searching Patents Effectively in terms of Keyword Distributions (키워드 분포를 고려한 효과적 특허검색기법)

  • Lee, Wookey;Song, Justin Jongsu;Kang, Michael Mingu
    • Journal of Information Technology and Architecture, v.9 no.3, pp.323-331, 2012
  • With the advancement of knowledge and information, intellectual property, and patents in particular, has attracted more and more attention. The need for efficient patent information search has become essential, but prevailing patent search engines include too much noise in their results because of their Boolean models, forcing professional experts to spend excessive time inspecting the results manually. In this paper, we identify the differences between conventional document search and patent search and analyze the limitations of existing patent search. Furthermore, we propose a method specialized for patent search that identifies the relationships between the keywords within each patent document and their significance with respect to the search keywords; patents in which the keywords and their relationships match are ranked highly, while noisy results are ranked lower. This approach is therefore proposed to significantly reduce the noise ratio in the search results. Finally, we demonstrate the superiority of the proposed methodology through a comparison on the KIPRIS dataset.
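
As a point of contrast with Boolean retrieval, the sketch below ranks a few invented patent abstracts against a query by TF-IDF cosine similarity; it shows only a generic ranking baseline, not the paper's keyword-distribution scoring.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical patent abstracts and query.
patents = [
    "lithium battery electrode coating method for fast charging",
    "battery pack housing with cooling channels",
    "search engine ranking using keyword distribution statistics",
]
query = "battery electrode coating"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(patents)          # term weights per patent
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, doc_matrix).ravel()
for idx in scores.argsort()[::-1]:                      # highest-scoring patents first
    print(f"{scores[idx]:.3f}  {patents[idx]}")
```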

Feature-selection algorithm based on genetic algorithms using unstructured data for attack mail identification (공격 메일 식별을 위한 비정형 데이터를 사용한 유전자 알고리즘 기반의 특징선택 알고리즘)

  • Hong, Sung-Sam;Kim, Dong-Wook;Han, Myung-Mook
    • Journal of Internet Computing and Services, v.20 no.1, pp.1-10, 2019
  • Since big-data text mining extracts many features and much data, clustering and classification can suffer from high computational complexity and low reliability of the analysis results. In particular, the term-document matrix obtained through text mining represents term-document features but is a sparse matrix. We designed an advanced genetic algorithm (GA) to select features in text mining for a detection model. Term frequency-inverse document frequency (TF-IDF) is used to reflect document-term relationships in feature extraction, and a predetermined number of features is selected through an iterative process. We also use a sparsity score to improve the performance of the detection model: when a spam mail data set is highly sparse, the detection model performs poorly and it is difficult to search for an optimal detection model. In addition, we find a low-sparsity model that also has a high TF-IDF score by using s(F) in the numerator of the fitness function. We verified the performance of the proposed algorithm by applying it to text classification. As a result, we found that our algorithm achieves higher performance (speed and accuracy) in attack mail classification.
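
Below is a toy sketch of GA-based feature selection over a TF-IDF matrix in the spirit of the abstract. The fitness function combining a TF-IDF score with a density (non-sparsity) term is only a guess at the role of s(F); the mail texts, population size, and mutation scheme are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented mail texts standing in for the spam/ham corpus.
docs = ["win money now click link", "meeting schedule attached",
        "free prize claim now", "project status report"]
X = TfidfVectorizer().fit_transform(docs).toarray()
n_features, k, pop_size = X.shape[1], 5, 20
rng = np.random.default_rng(0)

def fitness(idx):
    sub = X[:, idx]
    tfidf_score = sub.sum()            # total TF-IDF weight of the selected terms
    density = (sub > 0).mean()         # fraction of non-zero entries, i.e. 1 - sparsity
    return tfidf_score * density       # favour informative and non-sparse feature sets

population = [rng.choice(n_features, size=k, replace=False) for _ in range(pop_size)]
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[: pop_size // 2]
    children = []
    for p in parents:
        child = p.copy()
        child[rng.integers(k)] = rng.integers(n_features)   # mutate one selected feature
        children.append(child)                              # duplicates tolerated in this toy
    population = parents + children

best = max(population, key=fitness)
print("selected feature indices:", sorted(best))
```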

Multi-Dimensional Keyword Search and Analysis of Hotel Review Data Using Multi-Dimensional Text Cubes (다차원 텍스트 큐브를 이용한 호텔 리뷰 데이터의 다차원 키워드 검색 및 분석)

  • Kim, Namsoo;Lee, Suan;Jo, Sunhwa;Kim, Jinho
    • Journal of Information Technology and Architecture, v.11 no.1, pp.63-73, 2014
  • As the WWW advances, unstructured data, including text, is attracting more and more user interest. Such unstructured data, created by WWW users, represents their subjective opinions, so very useful information such as users' personal tastes or perspectives can be obtained if it is analyzed appropriately. In this paper, we provide a variety of efficient analyses of unstructured text documents by taking advantage of OLAP (On-Line Analytical Processing) multidimensional cube technology. OLAP cubes have been widely used for multidimensional analysis of structured data, such as simple alphabetic and numeric data, but they have not been used for unstructured data consisting of long texts. To provide multidimensional analysis for unstructured text data, the Text Cube model was recently proposed. It incorporates term frequency and an inverted index, which play key roles in information retrieval, as measures for searching and analyzing text databases. The primary goal of this paper is to apply this text cube model to a real data set from an Internet site that shares hotel information, and to provide multidimensional analysis of users' text reviews of hotels. To achieve this goal, we first build text cubes for the hotel review data. Using the text cubes, we design and implement a system that provides multidimensional keyword search features to search and analyze review texts along various dimensions. This system can help users easily obtain valuable guest-subjective summary information. Furthermore, this paper evaluates the proposed system through various experiments that reveal its effectiveness.
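
A tiny text-cube-style aggregation is sketched below: reviews with two dimensions (city and month), where each cell stores a term-frequency measure that can then be queried by keyword. The data and dimensions are invented, and the inverted-index measure the Text Cube model also maintains is omitted.

```python
import pandas as pd
from collections import Counter

# Invented hotel reviews with two cube dimensions.
reviews = pd.DataFrame({
    "city":  ["Seoul", "Seoul", "Busan", "Busan"],
    "month": ["2014-01", "2014-02", "2014-01", "2014-01"],
    "text":  ["clean room friendly staff", "noisy room",
              "great view clean room", "friendly staff great breakfast"],
})

def term_frequency(texts):
    tf = Counter()
    for t in texts:
        tf.update(t.split())
    return tf

# Each (city, month) cell of the cube stores an aggregated term-frequency measure.
cube = reviews.groupby(["city", "month"])["text"].apply(term_frequency)

# Keyword search across cube cells: how often "clean" appears per (city, month).
print(cube.apply(lambda tf: tf.get("clean", 0)))
```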

Solving Knapsack Problems Using Tabu Search, Memory, and Cycle Detection (타부탐색, 메모리, 싸이클 탐지를 이용한 배낭문제 풀기)

  • Ko, Il-Sang
    • Proceedings of the Korean Operations and Management Science Society Conference, 1996.04a, pp.514-517, 1996
  • In solving multi-level knapsack problems, conventional heuristic approaches often assume a short-sighted plan within a static decision environment to find a near-optimal solution. These conventional approaches are inflexible and lack the ability to adapt to different problem structures. This research approaches the problem from a totally different viewpoint, and a new method is designed and implemented. The method performs intelligent actions based on memories of historical data and learning. These actions are developed not only by observing the attributes of the optimal solution, the solution space, and the corresponding path to the optimal solution, but also by applying human intelligence, experience, and intuition to the search strategies. The method intensifies or diversifies the search process appropriately in time and space. In order to create a good neighborhood structure, it uses two powerful choice rules that emphasize the impact of candidate variables on the current solution with respect to their profit contribution. A side effect of so-called "pseudo moves", similar to "aspirations", supports these choice rules during the evaluation process. To visit as many relevant points as possible, strategic oscillation between feasible and infeasible solutions around the boundary is applied for intensification. To avoid redundant moves, short-term (tabu lists), intermediate-term (cycle detection), and long-term (recording frequency and significant solutions for diversification) memories are used. Test results show that, among 45 generated problems that pose significant or insurmountable challenges to exact methods, the approach produces the optimal solution in 39 cases.
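
As a companion to the tabu-list sketch shown earlier, the fragment below illustrates one possible form of the intermediate-term cycle-detection memory: hashing visited solutions and flagging a revisit within a sliding window. The window size and hashing scheme are illustrative assumptions, not the paper's exact mechanism.

```python
from collections import deque

WINDOW = 50
recent = deque(maxlen=WINDOW)      # hashes of solutions visited in the last WINDOW moves
seen = set()

def visit(solution):
    """Record a solution (a tuple of 0/1 decisions); return True if it closes a cycle."""
    key = hash(solution)
    cycle = key in seen
    if len(recent) == WINDOW:      # the oldest entry falls out of the window
        seen.discard(recent[0])    # assumes no duplicate key remains in the window
    recent.append(key)
    seen.add(key)
    return cycle

# Revisiting the same assignment within the window would trigger diversification.
print(visit((1, 0, 1, 0)))   # False
print(visit((0, 1, 1, 0)))   # False
print(visit((1, 0, 1, 0)))   # True -> diversify the search
```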

Occupational Therapy in Long-Term Care Insurance For the Elderly Using Text Mining (텍스트 마이닝을 활용한 노인장기요양보험에서의 작업치료: 2007-2018년)

  • Cho, Min Seok;Baek, Soon Hyung;Park, Eom-Ji;Park, Soo Hee
    • Journal of Society of Occupational Therapy for the Aged and Dementia, v.12 no.2, pp.67-74, 2018
  • Objective: The purpose of this study is to quantitatively analyze the role of occupational therapy in long-term care insurance for the elderly using text mining, one of the big data analysis techniques. Method: Newspaper articles on "long-term care insurance for the elderly + occupational therapy" published from 2007 to 2018 were collected from the Naver News database, Naver having the highest share among domestic search engines, using Textom, a web crawling tool. After collecting the titles and full text of the 510 news items retrieved with this query, we analyzed article frequency and key words by year. Result: In terms of the number of articles published per year, 2015 and 2017 had the most, with 70 articles (13.7%) each, and among the top ten terms in the keyword analysis, 'dementia' showed the highest frequency (344). Related key words included dementia, treatment, hospital, health, service, rehabilitation, facilities, institution, grade, elderly, professional, salary, industrial complex, and people. Conclusion: This study is meaningful in that text mining was used to more objectively confirm, from 11 years of media reporting on long-term care insurance for the elderly, the social needs and the role of occupational therapists reflected in related key words such as dementia and rehabilitation. Based on these results, future research should expand the research field and period and supplement the methodology with various analysis methods.
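
The frequency analysis described above reduces, in essence, to counting articles per year and keyword occurrences across titles; the sketch below shows that counting step on invented records (the study itself crawled Naver News with Textom).

```python
from collections import Counter

# Invented article records standing in for the crawled news data.
articles = [
    {"year": 2015, "title": "dementia care expanded under long-term care insurance"},
    {"year": 2015, "title": "occupational therapy rehabilitation service for dementia"},
    {"year": 2017, "title": "long-term care insurance grade criteria revised"},
]

per_year_count = Counter(a["year"] for a in articles)                   # article frequency by year
keywords = Counter(w for a in articles for w in a["title"].split())    # keyword frequency

print(per_year_count.most_common())
print(keywords.most_common(10))                                        # top-10 key words
```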

Impact of Diverse Document-evaluation Measure-based Searching Methods in Big Data Search Accuracy (빅데이터 검색 정확도에 미치는 다양한 측정 방법 기반 검색 기법의 효과)

  • Kim, Ji young;Han, DaHyeon;Kim, Jongkwon
    • Journal of KIISE, v.44 no.5, pp.553-558, 2017
  • With the rapid growth of Big Data, research on extracting meaningful information is being pursued by both academia and industry. In particular, the data characteristics revealed by analysis and the researcher's intention are key factors in whether search algorithms produce accurate output, so properly reflecting both is the ultimate goal of data analysis research. Properly analyzed data can help users increase their loyalty to the service a company provides and utilize information more effectively and efficiently. In this paper, we explore various document-evaluation methods to improve the accuracy of article search, one of the searches used most frequently in everyday life. We also analyze the experimental results and suggest how to use the various methods appropriately.
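
The abstract does not name the document-evaluation measures it compares, so the sketch below shows just one common measure of this kind, a BM25 scorer over a few invented documents, as an illustration of what such a method looks like.

```python
import math
from collections import Counter

# Invented document collection.
docs = ["big data search accuracy study",
        "deep learning for article search",
        "search accuracy with evaluation measures for big data"]
tokenized = [d.split() for d in docs]
N = len(tokenized)
avgdl = sum(len(d) for d in tokenized) / N
df = Counter(term for d in tokenized for term in set(d))   # document frequency per term

def bm25(query, doc, k1=1.5, b=0.75):
    tf = Counter(doc)
    score = 0.0
    for term in query.split():
        if term not in tf:
            continue
        idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
        score += idf * tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
    return score

for d, toks in zip(docs, tokenized):
    print(f"{bm25('search accuracy', toks):.3f}  {d}")
```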