• Title/Summary/Keyword: frequency-based text analysis (빈도 기반 텍스트 분석)

Search results: 105

How National Water Management Plans lead Hydrological Survey Projects? (텍스트 마이닝을 이용한 국가 물관리 정책 변화 시점별 수문조사사업의 방향 분석)

  • Chan Woo Kim;Min Kuk Kim;Jung Hwan Koh;Seung Won Han;In Jae Choi;Dong Ho Hyun;Seok Geun Park
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.429-429
    • /
    • 2023
  • As Korea's water-related policy direction has expanded from environment-centered water resources management toward safe water management, including securing waterfront spaces and hydrological information, the production of reliable hydrological data to underpin policy implementation has become increasingly important. In line with this policy stance, the national hydrological survey project has actively expanded its scope, technology, and quality control, supported by institutional backing; however, few studies have structurally examined the direction and characteristics of hydrological survey projects in relation to trends in water management policy. This study therefore examined how changes in water management policy and related plans affect hydrological survey projects, focusing on the period in which waterfront- and environment-friendly water management has been emphasized (1997 to the present). To this end, we examined the distribution of keywords in each policy as water management conditions changed, along with the appearance frequency and trends of key terms associated with hydrological survey projects, and analyzed the direction and structure of the projects in connection with major related terms. The analysis drew on institutional sources such as water-related legislation, news articles, and the implementation direction of each policy. The policy implementation directions were based on 1) the Long-term Comprehensive Water Resources Plans (3-1 to 4-3), which shifted from comprehensive water resources development toward environmental considerations and sustainability; 2) the 1st National Water Management Master Plan, which contains strategies such as clean and safe water that considers both people and nature, and integrated water management; and 3) the Hydrological Survey Master Plans (1st-2nd), established and supplemented in line with these policy stances. Text mining with the R program was used to analyze keyword distributions and appearance frequencies in each source and to show the linkage between policy directions and hydrological survey projects. As a result, the study summarizes and compares policy trends and the characteristics and direction of hydrological survey projects at each point where water management conditions changed, centered on major related terms; this can serve as a basis for research examining the viability of the national hydrological survey project in connection with national policy goals in the water management field.


Analysis of Seasonal Importance of Construction Hazards Using Text Mining (텍스트마이닝을 이용한 건설공사 위험요소의 계절별 중요도 분석)

  • Park, Kichang;Kim, Hyoungkwan
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.41 no.3
    • /
    • pp.305-316
    • /
    • 2021
  • Construction accidents occur for a number of reasons; worker carelessness, non-adoption of safety equipment, and failure to comply with safety rules are some examples. Because much construction work is done outdoors, weather conditions can also be a factor in accidents. Past construction accident data are useful for accident prevention, but because such data are often in a text format consisting of natural language, extracting construction hazards from them can take a lot of time and entail extra cost. Therefore, in this study, we extracted construction hazards from 2,026 domestic construction accident reports using text mining and performed a seasonal analysis of the hazards through frequency analysis and centrality analysis. Of the 254 construction hazards defined by Korea's Ministry of Land, Infrastructure and Transport, we extracted 51 risk factors from the construction accident data. The results showed that the most significant hazard was "Formwork" in spring and autumn, "Scaffold" in summer, and "Crane" in winter. The proposed method would enable construction safety managers to prepare better safety measures against outdoor construction accidents according to weather, season, and climate.
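The frequency step the abstract describes (counting hazard keywords per season) can be sketched in a few lines. The mini-corpus, season labels, and three-word hazard list below are illustrative assumptions, not the paper's 2,026 reports or the full 254-hazard list:

```python
from collections import Counter

# Hypothetical mini-corpus of (season, accident description) pairs, standing in
# for the paper's 2,026 accident reports.
reports = [
    ("spring", "worker fell while dismantling formwork on the second floor"),
    ("spring", "formwork collapsed during concrete pouring"),
    ("summer", "scaffold plank slipped after heavy rain"),
    ("summer", "worker lost footing on a wet scaffold"),
    ("winter", "crane hook swung in strong wind and struck a worker"),
]

# Illustrative hazard vocabulary (a stand-in for the 254 defined hazards).
hazards = {"formwork", "scaffold", "crane"}

def seasonal_hazard_counts(reports, hazards):
    """Count occurrences of hazard keywords per season."""
    counts = {}
    for season, text in reports:
        season_counter = counts.setdefault(season, Counter())
        for token in text.lower().split():
            if token in hazards:
                season_counter[token] += 1
    return counts

counts = seasonal_hazard_counts(reports, hazards)
top_spring = counts["spring"].most_common(1)[0][0]  # most frequent spring hazard
```

Only the frequency half is shown; the paper's centrality analysis would additionally build a network over hazards that co-occur in the same reports.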

Analysis of Traffic Improvement Measures in Transportation Impact Assessment Using Text Mining : Focusing on City Development Projects in Gyeonggi Province (텍스트마이닝을 활용한 교통영향평가 교통개선대책 분석 : 경기도 도시개발사업을 대상으로)

  • Eun Hye Yang;Hee Chan Kang;Woo-Young Ahn
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.22 no.2
    • /
    • pp.182-194
    • /
    • 2023
  • Traffic impact assessment plays a crucial role in resolving traffic issues that may arise during the implementation of urban and transportation projects. However, reported results diverge, presumably because the items reviewed differ. In this study, we analyze the traffic improvement measures approved in traffic impact assessments, identify key items, and present items that should be included in assessments. Specifically, text mining with TF-IDF and N-gram analysis was performed, focusing on urban development projects approved in Gyeonggi Province. The results show that keywords associated with newly established transportation infrastructure, such as roads and intersections, were essential assessment items, followed by the locations of entrances and exits and pedestrian connectivity. We recommend that the items presented in this study be incorporated into future traffic impact assessment guidelines and standards to improve the consistency and objectivity of the assessment process.
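A minimal sketch of the TF-IDF and N-gram steps named above, using invented improvement-measure texts rather than the approved Gyeonggi reports:

```python
import math
from collections import Counter

# Invented improvement-measure texts, one per approved project (not real data).
docs = [
    "widen access road and add right turn lane at intersection",
    "relocate entrance and exit to secondary road",
    "install pedestrian crossing on access road near intersection",
]

def bigrams(text):
    """Return the adjacent word pairs (2-grams) of a text."""
    tokens = text.split()
    return list(zip(tokens, tokens[1:]))

def tf_idf(docs):
    """Plain TF-IDF: tf = count / doc length, idf = log(N / document frequency)."""
    n = len(docs)
    tokenized = [d.split() for d in docs]
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))  # each doc contributes once per term
    return [
        {t: (c / len(tokens)) * math.log(n / df[t])
         for t, c in Counter(tokens).items()}
        for tokens in tokenized
    ]

scores = tf_idf(docs)
# "road" occurs in every document, so its idf (and score) is log(3/3) = 0.
```

Terms that appear in every measure score zero, which is exactly how TF-IDF surfaces the items that distinguish one project's measures from another's.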

A Study on the Feature Point Extraction Methodology based on XML for Searching Hidden Vault Anti-Forensics Apps (은닉형 Vault 안티포렌식 앱 탐색을 위한 XML 기반 특징점 추출 방법론 연구)

  • Kim, Dae-gyu;Kim, Chang-soo
    • Journal of Internet Computing and Services
    • /
    • v.23 no.2
    • /
    • pp.61-70
    • /
    • 2022
  • General users of smartphone apps often use Vault apps to protect personal information such as their photos and videos. However, criminals increasingly use the Vault app function for anti-forensic purposes, to hide illegal videos. These apps are among those registered on Google Play. This paper proposes a methodology for extracting feature points through XML-based keyword frequency analysis to detect Vault apps used by criminals, applying text mining techniques to extract the feature points. We compared and analyzed XML syntax using the strings.xml files included in 15 hidden Vault anti-forensics apps and in non-hidden Vault apps, respectively. In the hidden Vault anti-forensics apps, hiding-related words appeared at a higher frequency in both the first and second rounds of terminology processing. Unlike most conventional approaches, which statically analyze APK files from an engineering point of view, this paper is meaningful in that it approaches the classification of anti-forensics apps from a humanities and social science point of view. In conclusion, text mining techniques applied through XML parsing can provide basic data for detecting hidden Vault anti-forensics apps.
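The strings.xml keyword-frequency idea can be sketched as follows; the resource names and string values are invented for illustration, not taken from any real Vault app:

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Invented strings.xml fragment in the Android resource format the paper parses;
# resource names and values are illustrative only.
STRINGS_XML = """<resources>
    <string name="app_name">Calculator</string>
    <string name="hint_pin">Enter secret PIN to unlock hidden vault</string>
    <string name="msg_hide">Hide photos and videos in a secret folder</string>
</resources>"""

def keyword_frequencies(xml_text):
    """Parse string resources and count lower-cased word frequencies."""
    root = ET.fromstring(xml_text)
    counter = Counter()
    for node in root.iter("string"):
        if node.text:
            counter.update(node.text.lower().split())
    return counter

freq = keyword_frequencies(STRINGS_XML)
```

Comparing such frequency profiles between hidden and non-hidden apps is the feature-point extraction step; hiding-related vocabulary ("secret", "hidden", "vault") would dominate in the anti-forensics group.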

WCTT: Web Crawling System based on HTML Document Formalization (WCTT: HTML 문서 정형화 기반 웹 크롤링 시스템)

  • Kim, Jin-Hwan;Kim, Eun-Gyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.4
    • /
    • pp.495-502
    • /
    • 2022
  • Web crawlers, which today are mainly used to collect text from the web, are difficult to maintain and extend because researchers must implement different collection logic for each collection channel after analyzing the tags and styles of its HTML documents. To solve this problem, a web crawler should be able to collect text by formalizing HTML documents into the same structure. In this paper, we designed and implemented WCTT (Web Crawling system based on Tag path and Text appearance frequency), a web crawling system that collects text with a single collection logic by formalizing HTML documents based on tag paths and text appearance frequency. Because WCTT collects text with the same logic for all collection channels, it is easy to maintain and to extend to new channels. In addition, it provides a preprocessing function that removes stopwords and extracts only nouns, for uses such as keyword network analysis.
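A minimal sketch of the tag-path formalization that WCTT builds on, using Python's standard html.parser rather than the authors' implementation: every text node is recorded with the path of tags enclosing it, so pages with different styling reduce to the same (path, text) structure.

```python
from html.parser import HTMLParser

class TagPathExtractor(HTMLParser):
    """Record (tag path, text) pairs so different pages share one structure."""

    def __init__(self):
        super().__init__()
        self.stack = []    # currently open tags
        self.records = []  # list of (path, text)

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.records.append(("/".join(self.stack), text))

parser = TagPathExtractor()
parser.feed("<html><body><div><p>first paragraph</p><p>second</p></div></body></html>")
```

WCTT additionally uses text appearance frequency to decide which paths carry body text rather than boilerplate; that scoring step is omitted here.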

Development of big data based Skin Care Information System SCIS for skin condition diagnosis and management

  • Kim, Hyung-Hoon;Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.3
    • /
    • pp.137-147
    • /
    • 2022
  • Diagnosing and managing skin condition is a basic and important function for workers in the beauty and cosmetics industries. Accurate diagnosis and management require understanding the skin condition and needs of customers. In this paper, we developed SCIS, a big data-based skin care information system that supports skin condition diagnosis and management using social media big data. With the developed system, core information for skin condition diagnosis and management can be analyzed and extracted from text information. SCIS consists of a big data collection stage, a text preprocessing stage, an image preprocessing stage, and a text word analysis stage. SCIS collected the big data necessary for skin diagnosis and management and extracted key words and topics from text information through simple frequency analysis, relative frequency analysis, co-occurrence analysis, and correlation analysis of key words. In addition, by analyzing the extracted key words and information and applying various visualizations such as scatter plots, NetworkX graphs, t-SNE, and clustering, the system can be used efficiently in diagnosing and managing skin conditions.
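Of the analyses listed, co-occurrence is the one worth a quick sketch; the token lists below stand in for preprocessed social media posts and are not SCIS data:

```python
from collections import Counter
from itertools import combinations

# Hypothetical preprocessed posts about skin condition (nouns only).
posts = [
    ["skin", "dryness", "moisture", "cream"],
    ["skin", "trouble", "acne", "cream"],
    ["dryness", "moisture", "winter"],
]

def cooccurrence(posts):
    """Count how often each pair of keywords appears in the same post."""
    pairs = Counter()
    for tokens in posts:
        # sort so (a, b) and (b, a) map to the same key
        for a, b in combinations(sorted(set(tokens)), 2):
            pairs[(a, b)] += 1
    return pairs

pairs = cooccurrence(posts)
```

High-count pairs such as ("dryness", "moisture") are the edges that later feed the network visualizations (e.g. NetworkX graphs) the abstract mentions.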

An Analysis of Flood Vulnerability by Administrative Region through Big Data Analysis (빅데이터 분석을 통한 행정구역별 홍수 취약성 분석)

  • Yu, Yeong UK;Seong, Yeon Jeong;Park, Tae Gyeong;Jung, Young Hun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.193-193
    • /
    • 2021
  • As climate change continues worldwide, the intensity and frequency of natural disasters are increasing. Among the types of natural disasters, hydrological disasters caused by localized heavy rainfall and typhoons account for the majority, and the scale and extent of flood damage tend to vary with regional hydrological characteristics. Managing such heterogeneous damage requires collecting a large amount of flood damage information. In today's information age, as vast amounts of data are generated, terms such as 'big data', 'machine learning', and 'artificial intelligence' are attracting attention in various fields. For flood damage information as well, beyond the reports published by the government in the past, a great deal of information is being generated on the Internet through media such as news articles and social media. Such large-scale data will become an important resource that determines future competitiveness and can serve as valuable information for flood preparedness. This study aimed to identify the flood damage phenomena that occur at different damage scales through an Internet-based survey of flood damage. To this end, past flood damage cases were surveyed and related information such as rainfall and damage phenomena was collected. Flood damage phenomena were gathered from media sources such as news articles and reports, and the collected unstructured text data were structured using text mining techniques; key flood damage keywords were extracted and the data were quantified.


Analysis of Keywords in national river occupancy permits by region using text mining and network theory (텍스트 마이닝과 네트워크 이론을 활용한 권역별 국가하천 점용허가 키워드 분석)

  • Seong Yun Jeong
    • Smart Media Journal
    • /
    • v.12 no.11
    • /
    • pp.185-197
    • /
    • 2023
  • This study used text mining and network theory to extract information useful for occupancy applications and permit-processing tasks from the permit register, which had previously served only the simple purpose of recording occupancy permit information. Based on text mining, we analyzed and compared vocabulary appearance frequency and topic modeling across five regions, including Seoul, Gyeonggi, Gyeongsang, Jeolla, Chungcheong, and Gangwon, after normalization steps such as stopword removal and morphological analysis. By applying four centrality algorithms widely used in network theory (degree, closeness, betweenness, and eigenvector centrality), we identified keywords that occupy a central position or act as an intermediary in the network. A comprehensive analysis of vocabulary appearance frequency, topic modeling, and network centrality found that the keyword 'installation' was the most influential in all regions. This is believed to reflect the Ministry of Environment's permit management office issuing many permits for constructing facilities or installing structures. In addition, keywords related to road facilities, flood control facilities, underground facilities, power/communication facilities, and sports/park facilities were found to occupy central positions or play intermediary roles in the topic models and networks. Most keywords followed a Zipf's-law-like statistical distribution, with low frequency of occurrence and a low share of the total.
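Of the four centrality measures, degree centrality is the simplest to sketch. The permit keyword sets below are invented to mirror the reported dominance of 'installation'; they are not drawn from the register:

```python
from itertools import combinations

# Hypothetical keyword sets, one per permit (invented, not register data).
permits = [
    {"installation", "road", "structure"},
    {"installation", "flood", "levee"},
    {"installation", "cable", "underground"},
    {"occupancy", "park"},
]

def degree_centrality(keyword_sets):
    """Degree centrality: distinct co-occurrence neighbors, normalized by n - 1."""
    neighbors = {}
    for kws in keyword_sets:
        for a, b in combinations(kws, 2):
            neighbors.setdefault(a, set()).add(b)
            neighbors.setdefault(b, set()).add(a)
    n = len(neighbors)  # number of nodes in the network
    return {k: len(v) / (n - 1) for k, v in neighbors.items()}

centrality = degree_centrality(permits)
most_central = max(centrality, key=centrality.get)
```

Closeness, betweenness, and eigenvector centrality weight paths through the network rather than direct neighbors, but the edge list built here is the shared input for all four.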

Frequency and Social Network Analysis of the Bible Data using Big Data Analytics Tools R (빅데이터 분석도구 R을 이용한 성경 데이터의 빈도와 소셜 네트워크 분석)

  • Ban, ChaeHoon;Ha, JongSoo;Kim, Dong Hyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.2
    • /
    • pp.166-171
    • /
    • 2020
  • Big data processing technology, which can store and analyze data to obtain new knowledge, has grown in importance in many fields of society. Big data is emerging as an important topic in the field of information and communication technology, and interest in the underlying technology continues to rise. R, a tool that can analyze big data, is a language and environment for statistics-based information analysis. In this paper, we use it to analyze Bible data, focusing on the four Gospels of the New Testament. We collect the Bible data and filter it for analysis. R is used to investigate the frequency with which words are distributed in the text and to analyze the Bible through social network analysis, in which the words of each sentence are paired and the relations between words are analyzed for accurate data analysis.

A Study on Social media Opinion Mining based Enterprise Crisis Management (소셜 미디어 오피니언 마이닝에 기반한 기업의 위기관리에 관한 연구)

  • Cha, Seun-Joon;Kang, Jae-Woo;Choi, Jae-Hoon
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2012.06c
    • /
    • pp.142-144
    • /
    • 2012
  • As social media spreads and its user base grows, users share opinions through social media. Social media delivers real-time information quickly, and its data can be collected and analyzed. Opinion mining extracts patterns containing user opinions from text and measures the degree of positive or negative expression toward a specific product or service. In this paper, based on opinion mining, we analyze user opinions related to a company's products and services in social media data and classify them as positive or negative. We then detect a corporate crisis through the frequency of negative patterns and present a four-stage crisis management model for crisis response. We also examine corporate crisis management cases on social media and perform evaluation and analysis through a sample survey. Using this model, negative opinions about a company's products or services can be detected early in vast amounts of social media data and responded to systematically.
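A minimal lexicon-based sketch of the polarity-classification and negative-frequency idea; the word lists and the 30% crisis threshold are assumptions for illustration, not the paper's model:

```python
# Illustrative sentiment lexicons (assumed, not from the paper).
POSITIVE = {"good", "great", "love", "satisfied"}
NEGATIVE = {"bad", "broken", "refund", "angry"}

def polarity(post):
    """Classify a post by comparing positive vs. negative word counts."""
    tokens = post.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# Hypothetical posts about a company's product.
posts = [
    "love the new phone great battery",
    "screen broken again want a refund",
    "angry at support bad service",
    "satisfied overall",
]

labels = [polarity(p) for p in posts]
negative_ratio = labels.count("negative") / len(labels)
crisis = negative_ratio >= 0.3  # flag a potential crisis when negatives spike
```

In the paper's framing, crossing such a negative-frequency threshold would trigger the first stage of the four-stage crisis management model.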