• Title/Summary/Keyword: Unstructured text data

Search results: 228

A Study on the Analysis of Accident Types in Public and Private Construction Using Web Scraping and Text Mining (웹 스크래핑과 텍스트마이닝을 이용한 공공 및 민간공사의 사고유형 분석)

  • Yoon, Younggeun;Oh, Taekeun
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.5
    • /
    • pp.729-734
    • /
    • 2022
  • Various studies using accident cases are being conducted to identify the causes of accidents in the construction industry, but studies on the differences between public and private construction remain scarce. In this study, web scraping and text mining were applied to analyze the causes of accidents by project owner type. Statistical analysis and word cloud analysis of more than 10,000 structured and unstructured records confirmed that the types and causes of accidents differ between public and private construction. In addition, identifying the correlations among major accident causes can contribute to the establishment of future safety management measures.
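The word-frequency comparison behind the study's word cloud analysis can be sketched as follows. All report texts below are hypothetical toy data; the study scraped more than 10,000 real records.

```python
from collections import Counter

# Toy accident descriptions standing in for scraped report text.
public_reports = ["worker fall from scaffold",
                  "crane collapse during lifting",
                  "fall from ladder"]
private_reports = ["struck by falling object",
                   "fall from ladder",
                   "electrocution during wiring"]

def term_frequencies(docs):
    """Count word occurrences across a document collection (word-cloud input)."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    return counts

public_tf = term_frequencies(public_reports)
private_tf = term_frequencies(private_reports)

# Terms whose frequency differs between the two owner types hint at
# differing accident profiles.
diff = {w: public_tf[w] - private_tf[w]
        for w in set(public_tf) | set(private_tf)}
```

Large positive or negative entries in `diff` point to accident terms characteristic of one owner type.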

A Comparative Analysis of Cognitive Change about Big Data Using Social Media Data Analysis (소셜 미디어 데이터 분석을 활용한 빅데이터에 대한 인식 변화 비교 분석)

  • Yun, Youdong;Jo, Jaechoon;Hur, Yuna;Lim, Heuiseok
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.7
    • /
    • pp.371-378
    • /
    • 2017
  • Recently, with the spread of smart devices and the introduction of web services, data is increasing rapidly online and is being utilized in various fields. In particular, the emergence of social media in the big data field has led to a rapid increase in the amount of unstructured data. Interest in big data technology has grown across fields seeking to extract meaningful information from such unstructured data, and big data is becoming a key resource in many areas. While the outlook for big data is positive, concerns about data breaches and privacy are constantly raised. On this subject, where positive and negative views coexist, research analyzing people's opinions is currently lacking. In this study, we compared changes in people's perception of big data using text mining on unstructured data collected from social media. As a result, we observed yearly keywords for domestic big data, declining positive opinions, and increasing negative opinions. Based on these results, we could anticipate the trajectory of domestic big data.
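The year-by-year perception comparison can be sketched as a polarity ratio per year. The labeled posts below are hypothetical; the study classified a real SNS corpus.

```python
from collections import defaultdict

# Hypothetical (year, polarity) labels as produced by a sentiment
# classifier over SNS posts mentioning "big data".
labeled = [(2014, "pos"), (2014, "pos"), (2014, "neg"),
           (2015, "pos"), (2015, "neg"), (2015, "neg")]

def polarity_ratio_by_year(labels):
    """Fraction of positive posts per year, to track perception change."""
    pos = defaultdict(int)
    total = defaultdict(int)
    for year, polarity in labels:
        total[year] += 1
        if polarity == "pos":
            pos[year] += 1
    return {year: pos[year] / total[year] for year in total}

trend = polarity_ratio_by_year(labeled)
```

A falling ratio across years corresponds to the declining positive opinion the abstract reports.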

An Analysis of IT Proposal Evaluation Results using Big Data-based Opinion Mining (빅데이터 분석 기반의 오피니언 마이닝을 이용한 정보화 사업 평가 분석)

  • Kim, Hong Sam;Kim, Chong Su
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.41 no.1
    • /
    • pp.1-10
    • /
    • 2018
  • Current evaluation practices for IT projects suffer from several problems, including the difficulty of explaining evaluation results and an improperly scaled scoring system. This study aims to develop an opinion mining methodology that extracts key factors for causal relationship analysis, and to assess the feasibility of quantifying evaluation scores from text comments using opinion mining based on big data analysis. The research was performed in the domain of publicly procured IT proposal evaluations managed by the National Procurement Service. Around 10,000 sets of comments and evaluation scores were gathered, most in digital form but some in paper documents, so a more refined form of the text was prepared using various tools. From these, keywords for factors and polarity indicators were extracted, and domain experts selected some of them as the key factors and indicators. The keywords were also grouped into dimensions. Causal relationships between keyword or dimension factors and evaluation scores were analyzed under two research models, a keyword-based model and a dimension-based model, using correlation analysis and regression analysis. The results show that keyword factors such as planning, strategy, technology, and PM most strongly affect the evaluation result, and that keywords are a more appropriate form of factor for causal relationship analysis than dimensions. The analysis also indicates that evaluation scores can be composed or calculated from unstructured text comments using opinion mining, provided a comprehensive Korean polarity dictionary is available. This study may contribute to big data-based evaluation methodology and opinion mining for IT proposal evaluation, leading to a more reliable and effective IT proposal evaluation method.
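The correlation step between a keyword factor and evaluation scores can be sketched with a plain Pearson coefficient. The indicator and score values below are hypothetical; the study used roughly 10,000 real comment-score pairs.

```python
import math

# Hypothetical data: 1 if a proposal's comments mention the keyword,
# paired with that proposal's evaluation score.
keyword_present = [1, 0, 1, 1, 0, 1]
scores = [88.0, 72.0, 91.0, 85.0, 70.0, 90.0]

def pearson(x, y):
    """Pearson correlation between a keyword indicator and evaluation scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(keyword_present, scores)
```

A correlation near 1 would mark the keyword as a factor that strongly affects the evaluation result.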

Structuring of Unstructured SNS Messages on Rail Services using Deep Learning Techniques

  • Park, JinGyu;Kim, HwaYeon;Kim, Hyoung-Geun;Ahn, Tae-Ki;Yi, Hyunbean
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.7
    • /
    • pp.19-26
    • /
    • 2018
  • This paper presents a structuring process for unstructured social network service (SNS) messages about rail services. We crawl messages about rail services posted on SNS and extract keywords indicating date and time, rail operating company, station name, direction, and rail service type from each message. Among these, the rail service types are classified by machine learning according to predefined categories, and the rest are extracted by regular expressions. Words are converted into vector representations using Word2Vec, and a conventional Convolutional Neural Network (CNN) is used for training and classification. For performance measurement, we compare our experimental results with a TF-IDF and Support Vector Machine (SVM) approach. The structured information is stored in a database and can easily be used in services for railway users.
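The regular-expression side of the pipeline can be sketched as below. The message and patterns are hypothetical English stand-ins; the paper processes Korean SNS text, but the extraction idea is the same.

```python
import re

# Hypothetical SNS post about a rail service.
message = "2018-03-02 08:15 Line2 delay at CityHall station, inbound direction"

# Patterns for the structured fields extracted outside the classifier.
patterns = {
    "datetime": r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}",
    "station": r"at (\w+) station",
    "direction": r"(inbound|outbound)",
}

record = {}
for field, pat in patterns.items():
    m = re.search(pat, message)
    if m:
        # Use the capture group when one exists, otherwise the full match.
        record[field] = m.group(1) if m.groups() else m.group(0)
```

The resulting `record` is the row-shaped output that can be stored directly in a database.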

A Study on Effective Sentiment Analysis through News Classification in Bankruptcy Prediction Model (부도예측 모형에서 뉴스 분류를 통한 효과적인 감성분석에 관한 연구)

  • Kim, Chansong;Shin, Minsoo
    • Journal of Information Technology Services
    • /
    • v.18 no.1
    • /
    • pp.187-200
    • /
    • 2019
  • Bankruptcy prediction has been an issue of consistent interest in various fields. Recently, as technology for handling unstructured data has developed, research applying text mining to business prediction has become active, and such studies are also increasing in bankruptcy prediction. In particular, there are active attempts to improve bankruptcy prediction by analyzing news data covering the external environment of a corporation. However, there has been little study of which news is effective for bankruptcy prediction among the mass of news produced in real time. The purpose of this study was to identify high-impact news for bankruptcy prediction. We therefore classified news by type and collection period and analyzed its impact on bankruptcy prediction based on sentiment analysis. As a result, the artificial neural network was the most effective among the algorithms used, and commentary-type news was the most effective for bankruptcy prediction. Column and straight-type news were also significant, but photo-type news was not. Among the collection periods, news from the four months before bankruptcy was the most effective. In this study, we propose a news classification method for sentiment analysis that is effective for bankruptcy prediction models.
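The per-type sentiment comparison can be sketched with a tiny lexicon-based score. The lexicon and news snippets below are hypothetical; studies of this kind rely on large Korean sentiment lexicons and real articles.

```python
# Tiny polarity lexicon (hypothetical).
POSITIVE = {"growth", "profit", "recovery"}
NEGATIVE = {"loss", "default", "lawsuit", "decline"}

def sentiment_score(text):
    """(#positive - #negative) / #tokens, a simple document polarity score."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / max(len(tokens), 1)

# Score news grouped by article type, mirroring the paper's comparison.
news_by_type = {
    "commentary": ["mounting loss and default risk ahead"],
    "straight": ["quarterly profit growth reported"],
}
avg_by_type = {t: sum(map(sentiment_score, docs)) / len(docs)
               for t, docs in news_by_type.items()}
```

Comparing `avg_by_type` across article types is the kind of signal the study feeds into its prediction models.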

Sentiment Analyses of the Impacts of Online Experience Subjectivity on Customer Satisfaction (감성분석을 이용한 온라인 체험 내 비정형데이터의 주관도가 고객만족에 미치는 영향 분석)

  • Yeeun Seo;Sang-Yong Tom Lee
    • Information Systems Review
    • /
    • v.25 no.1
    • /
    • pp.233-255
    • /
    • 2023
  • The development of information technology (IT) has brought so-called "online experiences" that satisfy our daily needs. The market for online experiences grew further during the COVID-19 pandemic. This study therefore analyzed how the features of online experience services affect customer satisfaction by crawling structured and unstructured data from the online experience website newly launched by Airbnb after COVID-19. The analysis found that structured data generated by service users on a C2C online sharing platform had a positive effect on the satisfaction of other users. In addition, unstructured text data generated by service providers, such as experience introductions and host introductions, turned out to have different subjectivity scores depending on the purpose of the text. It was confirmed that subjective host introductions and objective experience introductions both affect customer satisfaction positively. The results provide implications for stakeholders of online sharing economy platforms and for researchers interested in online experience knowledge management.
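A subjectivity score of the kind the abstract describes can be sketched as the share of subjective cue words in a text. The cue set and both introduction texts below are hypothetical; the study scores real Airbnb listing text with proper sentiment tooling.

```python
# Hypothetical subjectivity cue words.
SUBJECTIVE_CUES = {"i", "love", "feel", "amazing", "my", "favorite"}

def subjectivity(text):
    """Share of tokens that are subjective cues (0 = objective, 1 = subjective)."""
    tokens = text.lower().split()
    return sum(t in SUBJECTIVE_CUES for t in tokens) / max(len(tokens), 1)

host_intro = "I love sharing my favorite recipes"        # first-person, emotive
experience_intro = "a 60 minute online cooking class"    # factual description

s_host = subjectivity(host_intro)
s_experience = subjectivity(experience_intro)
```

The gap between the two scores reflects the abstract's finding that host introductions are subjective while experience introductions are objective.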

Similar Patent Search Service System using Latent Dirichlet Allocation (잠재 의미 분석을 적용한 유사 특허 검색 서비스 시스템)

  • Lim, HyunKeun;Kim, Jaeyoon;Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.8
    • /
    • pp.1049-1054
    • /
    • 2018
  • Keyword searching was used in the past as a method of finding similar patents, and automated classification by machine learning has been used recently. Keyword searching analyzes data that has been formalized through refinement. While its accuracy for short text is high, it cannot capture the meaning contained in long text composed of many words, such as full documents. At the level of semantic analysis, automatic classification is used to classify sentences composed of several words through unstructured data analysis. There have been attempts to find similar documents by combining the two methods, but they run into algorithmic problems because the two analyses handle unstructured and structured data in different ways. In this paper, we study a method that extracts the keywords implied in a document and applies LDA (Latent Dirichlet Allocation) to classify documents efficiently without human intervention and to find similar patents.
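The similar-patent ranking can be sketched with cosine similarity over term vectors. The patent texts are hypothetical toy data, and raw term counts stand in for the lower-dimensional LDA topic vectors the paper actually compares.

```python
import math
from collections import Counter

# Toy patent abstracts (hypothetical).
patents = {
    "P1": "battery cell cooling system for electric vehicle",
    "P2": "cooling system for vehicle battery pack",
    "P3": "image sensor pixel readout circuit",
}

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

vectors = {pid: Counter(text.split()) for pid, text in patents.items()}

# Rank other patents by similarity to P1.
query = vectors["P1"]
ranked = sorted((pid for pid in vectors if pid != "P1"),
                key=lambda pid: cosine(query, vectors[pid]), reverse=True)
```

With LDA, the same ranking would be computed over topic-proportion vectors instead of raw counts, which is what lets semantically similar patents match without shared keywords.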

A Study on the Finding of Promising Export Items in Defense industry for Export Market Expansion-Focusing on Text Mining Analysis-

  • Yeo, Seoyoon;Jeong, Jong Hee;Kim, Seong Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.10
    • /
    • pp.235-243
    • /
    • 2022
  • This paper aims to find promising export items for expanding the market for defense exports. Germany, the UK, and France were selected as target countries, and unstructured forecast data on each country's weapon system acquisition plans for the next ten years were obtained. Using TF-IDF in text mining analysis, keywords that appeared frequently in the data from the three countries were derived, yielding keywords for each country's major acquisition projects. However, most of the derived keywords related to mainstay weapon systems produced by each country's own defense companies. To discover promising export items from the text mining results, we proposed grouping the drawn keywords into similar weapon systems and sorting out the weapon systems that all three countries plan to acquire. As a result, the currently promising export item is a weapon system related to information systems. Prioritizing overseas demand using these keywords can set clear market entry goals; domestic companies can establish specific entry strategies based on the identified needs, and relevant organizations can provide customized marketing support.
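The TF-IDF keyword derivation can be sketched directly from its definition, tf × log(N/df). The per-country document snippets and terms below are hypothetical toy data.

```python
import math
from collections import Counter

# Hypothetical excerpts from three countries' acquisition-plan documents.
docs = {
    "Germany": "frigate radar frigate missile",
    "UK": "carrier radar missile missile",
    "France": "submarine radar frigate",
}

def tf_idf(docs):
    """TF-IDF per term per document: tf * log(N / df)."""
    n = len(docs)
    df = Counter()
    for text in docs.values():
        df.update(set(text.split()))   # document frequency counts each doc once
    scores = {}
    for name, text in docs.items():
        tf = Counter(text.split())
        scores[name] = {t: tf[t] * math.log(n / df[t]) for t in tf}
    return scores

scores = tf_idf(docs)
```

Terms appearing in every document (like "radar" here) score zero, so TF-IDF surfaces the keywords distinctive to each country's acquisition plan.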

Automated Data Extraction from Unstructured Geotechnical Report based on AI and Text-mining Techniques (AI 및 텍스트 마이닝 기법을 활용한 지반조사보고서 데이터 추출 자동화)

  • Park, Jimin;Seo, Wanhyuk;Seo, Dong-Hee;Yun, Tae-Sup
    • Journal of the Korean Geotechnical Society
    • /
    • v.40 no.4
    • /
    • pp.69-79
    • /
    • 2024
  • Field geotechnical data are obtained from various field and laboratory tests and are documented in geotechnical investigation reports. For efficient design and construction, digitizing these geotechnical parameters is essential. However, current practices involve manual data entry, which is time-consuming, labor-intensive, and prone to errors. Thus, this study proposes an automatic data extraction method from geotechnical investigation reports using image-based deep learning models and text-mining techniques. A deep-learning-based page classification model and a text-searching algorithm were employed to classify geotechnical investigation report pages with 100% accuracy. Computer vision algorithms were utilized to identify valid data regions within report pages, and text analysis was used to match and extract the corresponding geotechnical data. The proposed model was validated using a dataset of 205 geotechnical investigation reports, achieving an average data extraction accuracy of 93.0%. Finally, a user-interface-based program was developed to enhance the practical application of the extraction model. It allowed users to upload PDF files of geotechnical investigation reports, automatically analyze these reports, and extract and edit data. This approach is expected to improve the efficiency and accuracy of digitizing geotechnical investigation reports and building geotechnical databases.
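The text-matching step of such a pipeline can be sketched with a regular expression over recovered page text. The report lines and field names below are hypothetical; the study first locates valid data regions with computer-vision models before matching text.

```python
import re

# Hypothetical lines of text extracted from a geotechnical report page.
page_text = """Depth 1.5 m  N-value 12
Depth 3.0 m  N-value 25
Depth 4.5 m  N-value 38"""

# Match depth / N-value pairs from standard penetration test rows.
rows = re.findall(r"Depth ([\d.]+) m\s+N-value (\d+)", page_text)
spt = [(float(depth), int(n_value)) for depth, n_value in rows]
```

Each tuple is one digitized test record, ready to be written into a geotechnical database.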

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers and created demand for ways to classify relevant information; hence text classification was introduced. Text classification is a challenging task in modern data analysis: a text document must be assigned to one or more predefined categories or classes. Available techniques include K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge; depending on the words used in the corpus and the features created for classification, the performance of a text classification model can vary. Most previous attempts propose a new algorithm or modify an existing one, and this line of research can be said to have reached its limits. In this study, rather than proposing or modifying an algorithm, we focus on modifying the use of the data. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built, and real-world datasets usually contain noise that can affect the decisions of classifiers built from them.
In this study, we consider that data from different domains, i.e. heterogeneous data, may carry noise-like characteristics that can be exploited in the classification process. Machine learning ordinarily assumes that the characteristics of the training data and the target data are the same or very similar. However, for unstructured data such as text, the features are determined by the vocabulary of the documents; if the viewpoints of the training data and target data differ, their features may differ as well. We attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data from various sources are likely formatted differently, traditional machine learning algorithms struggle: they were not developed to recognize different types of data representation at the same time and generalize over them together. Therefore, to utilize heterogeneous data in training the document classifier, we apply semi-supervised learning. Since unlabeled data may degrade the classifier's performance, we further propose a Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) that selects only the documents contributing to the classifier's accuracy improvement. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data, and the most confident classification rules are selected and applied for the final decision.
In this paper, three types of real-world data sources were used: news, Twitter, and blogs.
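The noise-injection idea can be sketched as pseudo-labeling out-of-domain documents into the training set. This is a minimal sketch with hypothetical data and a trivial stand-in labeler; the paper's RSESLA additionally builds multiple views and selects only confident classification rules.

```python
import random

# Hypothetical labeled news documents and unlabeled tweets
# (a heterogeneous, differently-styled source).
news_train = [("stocks rally on earnings", "economy"),
              ("team wins championship final", "sports")]
tweets = ["earnings season looking strong", "what a final, great win"]

def inject_noise(train, extra_docs, pseudo_labeler, ratio=0.5, seed=0):
    """Augment training data with pseudo-labeled out-of-domain documents,
    deliberately exposing the classifier to cross-domain vocabulary."""
    rng = random.Random(seed)
    sampled = rng.sample(extra_docs, max(1, int(len(extra_docs) * ratio)))
    return train + [(doc, pseudo_labeler(doc)) for doc in sampled]

# Trivial keyword-overlap pseudo-labeler (stand-in for a trained model).
def pseudo_label(doc):
    return "economy" if "earnings" in doc else "sports"

augmented = inject_noise(news_train, tweets, pseudo_label)
```

A classifier retrained on `augmented` sees vocabulary from both domains, which is the robustness effect the study pursues.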