• Title/Summary/Keyword: unstructured data (비정형데이터)

Search Results: 589

A Study on an Automatic Classification Model for Facet-Based Multidimensional Analysis of Civil Complaints (패싯 기반 민원 다차원 분석을 위한 자동 분류 모델)

  • Na Rang Kim
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.1
    • /
    • pp.135-144
    • /
    • 2024
  • In this study, we propose an automatic classification model for quantitative multidimensional analysis based on facet theory to understand public opinions and demands on major issues through big data analysis. Civil complaints, as a form of public feedback, are generated by various individuals on multiple topics repeatedly and continuously in real-time, which can be challenging for officials to read and analyze efficiently. Specifically, our research introduces a new classification framework that utilizes facet theory and political analysis models to analyze the characteristics of citizen complaints and apply them to the policy-making process. Furthermore, to reduce administrative tasks related to complaint analysis and processing and to facilitate citizen policy participation, we employ deep learning to automatically extract and classify attributes based on the facet analysis framework. The results of this study are expected to provide important insights into understanding and analyzing the characteristics of big data related to citizen complaints, which can pave the way for future research in various fields beyond the public sector, such as education, industry, and healthcare, for quantifying unstructured data and utilizing multidimensional analysis. In practical terms, improving the processing system for large-scale electronic complaints and automation through deep learning can enhance the efficiency and responsiveness of complaint handling, and this approach can also be applied to text data processing in other fields.
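The facet-based attribute extraction described above can be illustrated with a deliberately simple sketch. The facet names and keyword lists below are invented, and plain keyword matching stands in for the paper's deep learning classifier:

```python
# Hypothetical facets and keywords for tagging civil complaints; a simplistic
# stand-in for the paper's deep-learning-based facet classification.
FACETS = {
    "topic": {"road": ["pothole", "traffic"], "noise": ["loud", "construction"]},
    "urgency": {"high": ["danger", "urgent"], "low": ["suggestion"]},
}

def tag_complaint(text):
    """Assign one value per facet by simple keyword matching."""
    text = text.lower()
    tags = {}
    for facet, values in FACETS.items():
        for value, keywords in values.items():
            if any(k in text for k in keywords):
                tags[facet] = value
                break
    return tags

print(tag_complaint("Urgent: a pothole on Main Street is a danger to drivers"))
```

Each complaint thus becomes a structured record of facet values, which is what makes the quantitative multidimensional analysis possible.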

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.1-13
    • /
    • 2015
  • As opinion mining in big data applications has been highlighted, a great deal of research on unstructured data has been conducted. Social media services on the Internet generate unstructured or semi-structured data every second, most of it written in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, which can yield incorrect results far from users' intentions. Although much progress has been made over the years in improving search engines so that they return appropriate results, there is still considerable room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in the area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation from the examples in existing dictionaries, avoiding expensive sense-tagging processes. It evaluates the effectiveness of the method with a Naïve Bayes model, a supervised learning algorithm, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary contains approximately 57,000 sentences; the Sejong Corpus contains about 790,000 sentences tagged with both part-of-speech and senses. For the experiments, the two resources were evaluated both combined and separately, using cross validation. Only nouns, the target subjects in word sense disambiguation, were selected.
93,522 word senses among 265,655 nouns and 56,914 sentences from related proverbs and examples were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it was tagged with the sense indices defined by the dictionary. Sense vectors were formed after the merged corpus was created, and the terms used in creating them were added to the named entity dictionary of a Korean morphological analyzer. Using the extended named entity dictionary, term vectors were extracted from the input sentences. Given an extracted term vector and the sense vector model built during pre-processing, sense-tagged terms were determined by vector-space-model-based word sense disambiguation. The experiments show that the merged corpus of dictionary examples and the Sejong Corpus yields better precision and recall. This suggests the method can practically enhance the performance of Internet search engines and help systems grasp the meaning of a sentence more accurately in natural language processing tasks such as search, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem and assumes that all attributes are independent. Although this assumption is unrealistic and ignores correlations between attributes, the classifier is widely used because of its simplicity, and in practice it is known to be very effective in applications such as text classification and medical diagnosis. However, further research is needed to consider all possible combinations, or partial combinations, of the senses in a sentence. The effectiveness of word sense disambiguation may also be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
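The dictionary-example approach can be sketched with a toy Naïve Bayes disambiguator. The senses and example sentences below are invented for illustration; the paper works at scale with the Korean standard unabridged dictionary and the Sejong Corpus:

```python
import math
from collections import Counter, defaultdict

# Toy sense-tagged "dictionary examples" standing in for real dictionary data.
examples = [
    ("bank_river", "the boat drifted to the muddy bank of the river"),
    ("bank_river", "fish swam near the grassy bank"),
    ("bank_money", "she deposited cash at the bank branch"),
    ("bank_money", "the bank approved the loan"),
]

prior = Counter(sense for sense, _ in examples)
word_counts = defaultdict(Counter)
for sense, sent in examples:
    word_counts[sense].update(sent.split())

def disambiguate(sentence):
    """Pick the sense maximizing log P(sense) + sum log P(word | sense)."""
    words = sentence.split()
    vocab = {w for c in word_counts.values() for w in c}
    best, best_lp = None, float("-inf")
    for sense in prior:
        lp = math.log(prior[sense] / len(examples))
        total = sum(word_counts[sense].values())
        for w in words:
            # Add-one smoothing for unseen words
            lp += math.log((word_counts[sense][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

print(disambiguate("he opened an account at the bank to get a loan"))
```

The real system replaces these raw word counts with sense vectors built over the merged corpus, but the decision rule is the same in spirit.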

A Study on Intelligent Skin Image Identification from Social Media Big Data

  • Kim, Hyung-Hoon;Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.9
    • /
    • pp.191-203
    • /
    • 2022
  • In this paper, we developed a system that intelligently identifies skin image data in big data collected from the social media service Instagram and extracts standardized skin sample data for skin condition diagnosis and management. The proposed system consists of five stages: big data collection and analysis, skin image analysis, training data preparation, artificial neural network training, and skin image identification. In the big data collection and analysis stage, big data is collected from Instagram and image information for skin condition diagnosis and management is stored as an analysis result. In the skin image analysis stage, evaluation and analysis results for the skin images are obtained using traditional image processing techniques. In the training data preparation stage, training data are prepared by extracting skin sample data from the skin image analysis results. In the artificial neural network training stage, an artificial neural network, AnnSampleSkin, that intelligently predicts the skin image type is built and completed through training on this data. In the skin image identification stage, skin samples are extracted from images collected from social media, and the type predictions of the trained network are integrated to intelligently identify the final skin image type. The proposed skin image identification method achieves high accuracy of about 92% or more and can provide standardized skin sample image big data. The extracted skin sample set is expected to serve as standardized skin image data that is very efficient and useful for diagnosing and managing skin conditions.
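The network training stage can be illustrated with a minimal stand-in: a single-layer perceptron trained on invented two-feature samples (say, redness and oiliness scores mapped to a binary skin type), far simpler than the paper's AnnSampleSkin network:

```python
# Hypothetical training data: [redness, oiliness] -> skin type 0/1.
train = [([0.9, 0.2], 1), ([0.8, 0.3], 1), ([0.1, 0.7], 0), ([0.2, 0.9], 0)]
w = [0.0, 0.0]
b = 0.0

def predict(x):
    """Threshold the weighted sum of the two features."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge weights toward misclassified samples.
for _ in range(20):
    for x, y in train:
        err = y - predict(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        b += 0.1 * err

print(predict([0.85, 0.25]))
```

A production system would of course use a deeper network and real image-derived features; the sketch only shows the train-then-predict shape of the pipeline's last two stages.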

A Classification Model for Illegal Debt Collection Using Rule and Machine Learning Based Methods

  • Kim, Tae-Ho;Lim, Jong-In
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.4
    • /
    • pp.93-103
    • /
    • 2021
  • Despite the efforts of financial authorities in directly managing and supervising collection agents and in issuing debt-collection guidelines, illegal and unfair debt collection still exists. To effectively prevent such activities, we need a way to strengthen the monitoring of illegal collection even with little manpower, using technologies such as machine learning on unstructured data. In this study, we propose a classification model for illegal debt collection that combines machine learning, such as the Support Vector Machine (SVM), with a rule-based technique: collection transcripts from loan companies are obtained and converted into text data to identify illegal activities. The study also compares how accurately each machine learning algorithm identifies such activities. The results show that combining the rule-based illegality rules with machine learning yields higher accuracy than the classification model of a previous study that applied machine learning alone. This study is the first attempt to classify illegal activities by combining rule-based detection rules with machine learning. If further research improves the model's completeness, it will greatly contribute to preventing consumer damage from illegal debt collection.
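The rule-plus-machine-learning combination can be sketched as a two-layer classifier. The phrases and the scoring function below are invented stand-ins for the paper's detection rules and SVM:

```python
# Hypothetical phrases that a rule layer would flag outright.
ILLEGAL_PHRASES = ["call your workplace", "visit at midnight", "tell your family"]

def ml_score(text):
    """Crude stand-in for an SVM decision score on transcript text."""
    suspicious = ["threat", "repeatedly", "pressure"]
    return sum(w in text.lower() for w in suspicious) / len(suspicious)

def classify_transcript(text):
    # Rule layer: explicit illegal phrases are flagged immediately.
    if any(p in text.lower() for p in ILLEGAL_PHRASES):
        return "illegal"
    # ML layer: otherwise fall back to the model score.
    return "illegal" if ml_score(text) >= 0.5 else "legal"

print(classify_transcript("We will call your workplace tomorrow"))
```

The design point is that high-precision rules catch unambiguous violations while the learned model covers cases the rules cannot enumerate.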

Noise Control Boundary Image Matching Using Time-Series Moving Average Transform (시계열 이동평균 변환을 이용한 노이즈 제어 윤곽선 이미지 매칭)

  • Kim, Bum-Soo;Moon, Yang-Sae;Kim, Jin-Ho
    • Journal of KIISE:Databases
    • /
    • v.36 no.4
    • /
    • pp.327-340
    • /
    • 2009
  • To achieve a noise reduction effect in boundary image matching, we exploit the moving average transform used in time-series matching. Our motivation is the intuition that the moving average transform may provide the same noise reduction effect in boundary image matching as it does in time-series matching. To confirm this intuition, we first propose $\kappa$-order image matching, which applies the moving average transform to boundary image matching. A boundary image can be represented as a sequence in the time-series domain, and $\kappa$-order image matching identifies similar images in this domain by comparing the $\kappa$-moving average transformed sequences. Next, we propose an index-based matching method that efficiently performs $\kappa$-order image matching on large image databases, and we formally prove its correctness. Moreover, we formally analyze the relationship between the order $\kappa$ and the matching result, and present a systematic way of controlling the noise reduction effect by changing $\kappa$. Experimental results show that $\kappa$-order image matching achieves the intended noise reduction effect, and that the index-based method outperforms a sequential scan by one to two orders of magnitude.
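The $\kappa$-moving average transform at the heart of the method can be sketched directly. In the toy sequences below, a 2-order transform cancels alternating noise that dominates the raw distance:

```python
def moving_average(seq, k):
    """Return the k-order moving average transform of a sequence."""
    return [sum(seq[i:i + k]) / k for i in range(len(seq) - k + 1)]

def dist(a, b):
    """Euclidean distance between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

noisy = [1.0, 5.0, 1.0, 5.0, 1.0, 5.0]   # boundary sequence with noise
clean = [3.0, 3.0, 3.0, 3.0, 3.0, 3.0]   # the underlying shape
# Raw distance is large, but after the 2-order transform it vanishes.
print(dist(noisy, clean), dist(moving_average(noisy, 2), moving_average(clean, 2)))
```

Larger $\kappa$ smooths more aggressively, which is exactly the knob the paper uses to control the noise reduction effect.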

Evaluation of Major Projects of the 5th Basic Forest Plan Utilizing Big Data Analysis (빅데이터 분석을 활용한 제5차 산림기본계획 주요 사업에 대한 평가)

  • Byun, Seung-Yeon;Koo, Ja-Choon;Seok, Hyun-Deok
    • Journal of Korean Society of Forest Science
    • /
    • v.106 no.3
    • /
    • pp.340-352
    • /
    • 2017
  • In this study, we examined the gap between the supply of and demand for forest policy by year through big data analysis, for a macroscopic evaluation of the 5th Basic Forest Plan. For the policy demand side, we collected unstructured data based on keywords related to the plan's projects as mentioned in the news, SNS, and other sources in each year; for the policy supply side, we collected documents published by the Korea Forest Service. Based on the collected data, we specified the network structure through social network analysis and identified the gap between supply and demand of the Korea Forest Service's policies by comparing the demand-side and supply-side networks. The results of the big data analysis indicated that the supply-side network is less radial than the demand-side network, implying that various keywords other than forest can considerably influence the network. We also compared the supply and demand trends for 33 keywords related to the 27 major projects. The results showed that seven keywords exhibit increasing demand but decreasing supply: sustainable, forest management, forest biota, forest protection, forest disease and pest, urban forest, and North Korea. Since a supply-demand gap is confirmed for these seven keywords, the related forest policies should be strengthened in the 6th Basic Plan.
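The supply-demand gap check can be sketched as a comparison of yearly keyword-frequency trends on the two sides. The keywords and counts below are invented for illustration:

```python
# Hypothetical yearly keyword counts: demand side (news/SNS) vs. supply side
# (agency documents), listed oldest to newest.
demand = {"urban forest": [10, 14, 19], "timber": [20, 18, 15]}
supply = {"urban forest": [12, 9, 7], "timber": [8, 11, 13]}

def trend(series):
    """+1 if the series rises overall, -1 if it falls, 0 if flat."""
    d = series[-1] - series[0]
    return (d > 0) - (d < 0)

# A gap keyword: rising demand but falling supply.
gaps = [k for k in demand if trend(demand[k]) > 0 and trend(supply[k]) < 0]
print(gaps)
```

The paper compares 33 such keyword trends (finding seven gaps); this sketch only shows the shape of that comparison.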

Application of Advertisement Filtering Model and Method for its Performance Improvement (광고 글 필터링 모델 적용 및 성능 향상 방안)

  • Park, Raegeun;Yun, Hyeok-Jin;Shin, Ui-Cheol;Ahn, Young-Jin;Jeong, Seungdo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.11
    • /
    • pp.1-8
    • /
    • 2020
  • In recent years, the exponential increase in Internet data has advanced many fields such as deep learning, but side effects such as commercial advertisements in the form of viral marketing have also emerged. These not only damage the essence of the Internet as a medium for sharing high-quality information but also increase the time users spend searching for it. In this study, we define an advertisement as "a text that obscures the essence of information transmission" and propose a model for filtering information according to that definition. The proposed model consists of an advertisement filtering component and an advertisement filtering performance improvement component, and is designed to improve continuously. We collected data for filtering advertisements and trained a document classifier using KorBERT. Experiments were conducted to verify the model's performance: for data combining five topics, accuracy and precision were 89.2% and 84.3%, respectively. High performance was confirmed even when the atypical characteristics of advertisements are considered. This approach is expected to reduce the time and fatigue wasted in searching for information, because the model effectively delivers high-quality information to users by determining and filtering out advertisement paragraphs.
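The paragraph-level filtering step can be sketched with a crude keyword scorer standing in for the paper's KorBERT-based document classifier. The marker phrases and sample paragraphs are invented:

```python
# Hypothetical advertisement markers; a real system would use a trained
# classifier (the paper uses KorBERT) rather than keyword lookup.
AD_MARKERS = ["buy now", "discount", "sponsored", "click the link"]

def is_advertisement(paragraph):
    return any(m in paragraph.lower() for m in AD_MARKERS)

def filter_document(paragraphs):
    """Return only the paragraphs judged to be genuine information."""
    return [p for p in paragraphs if not is_advertisement(p)]

doc = [
    "The museum reopens next week with a new exhibit.",
    "Sponsored: click the link for a 50% discount!",
    "Admission is free on the first Sunday of each month.",
]
print(filter_document(doc))
```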

A Method of Mining Visualization Rules from Open Online Text for Situation Aware Business Chart Recommendation (상황인식형 비즈니스 차트 추천기 개발을 위한 개방형 온라인 텍스트로부터의 시각화 규칙 추출 방법 연구)

  • Zhang, Qingxuan;Kwon, Ohbyung
    • The Journal of Society for e-Business Studies
    • /
    • v.25 no.1
    • /
    • pp.83-107
    • /
    • 2020
  • Selecting business charts based on the nature of the data and the purpose of the visualization is useful in business analysis. However, current visualization tools offer little help in choosing the right business chart for a given context, and soliciting expert advice on visualization methods for every analysis is inefficient. The purpose of this study is therefore to propose an accessible method for improving business chart productivity by deriving chart-selection rules from documents published online. To this end, Korean, English, and Chinese unstructured data describing business charts were collected from the Internet, and the relationships between contexts and business charts were calculated using TF-IDF. We also used a Galois lattice to create rules for business chart selection. To evaluate the adequacy of the rules generated by the proposed method, experiments were conducted on experimental and control groups, and the results confirmed that meaningful rules were extracted. To the best of our knowledge, this is the first study to recommend customized business charts through open unstructured data analysis and to propose a method that enables office workers to select business charts efficiently without expert assistance. The method should also be useful for staff training, recommending business charts based on the document a worker is preparing.
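The TF-IDF step that associates context words with chart types can be sketched as follows. The chart descriptions below are invented, and the scoring is the textbook formulation rather than the paper's exact setup:

```python
import math
from collections import Counter

# Hypothetical one-line descriptions of chart types.
docs = {
    "pie chart": "shows parts of a whole as proportions share percentage",
    "line chart": "shows change over time trend time series",
    "bar chart": "compares quantities across categories comparison",
}

def tfidf(term, doc_id):
    """Term frequency in the description times inverse document frequency."""
    words = docs[doc_id].split()
    tf = Counter(words)[term] / len(words)
    df = sum(term in d.split() for d in docs.values())
    idf = math.log(len(docs) / df) if df else 0.0
    return tf * idf

# "time" is distinctive for the line chart description.
best = max(docs, key=lambda d: tfidf("time", d))
print(best)
```

In the paper these scores feed a Galois lattice that generalizes them into context-to-chart selection rules.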

A Study on the Research Trends on Domestic Platform Government using Topic Modeling (토픽 모델링을 활용한 한국의 플랫폼정부 연구동향 분석)

  • Suh, Byung-Jo;Shin, Sun-Young
    • Informatization Policy
    • /
    • v.24 no.3
    • /
    • pp.3-26
    • /
    • 2017
  • The amount of unstructured data generated online is increasing exponentially, and text data analysis is being applied in various fields. To identify research trends on the platform government, the title, year, academic society, and abstract of academic papers on the subject were collected from DBPIA (www.dbpia.co.kr), a database of domestic papers. The results of existing research on the platform government and related fields were analyzed against each stage of national informatization promotion. Technology, service, and governance topics were extracted from the papers, and the trends of core topics were analyzed by year. Entering the era of the intelligent information society, this study is significant in providing a basis for defining a new role of government: a platform government that sets the stage for the private sector to lead innovation and instead plays the role of an 'enabler' and 'facilitator'. The purpose of this study is to understand platform government research through an objective analysis of its trends, and by looking at future directions it will serve as reference material for future research.

Analysis of Related Words for Each Private Security Service through Collection of Unstructured Data

  • Park, Su-Hyeon;Cho, Cheol-Kyu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.6
    • /
    • pp.219-224
    • /
    • 2020
  • The purpose of this study is to provide a theoretical basis for the private security industry by analyzing the perception and flow of private security in press-released materials, classified by period and by duty, through Big Kinds, a news big data analysis website. As for the research method, scattered unstructured data were converted into structured data for analysis, and keyword trends and related words for each private security duty during the industry's growth period were analyzed. The results show that the media's perception of private security was shaped largely by coverage of crimes, accidents, and incidents, as well as issues related to permanent positions. Private security also tended to be perceived as simple security guarding rather than recognized as a field in its own right; and judging from the high correlation between private security and the police, it was recognized not only as a role assisting the police force but also as a common agent responsible for public peace. Therefore, the perception of private security should be judged objectively, and on that basis private security should come to be recognized as a main agent responsible for the safety of the nation and the maintenance of social order.
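The related-word analysis can be approximated with simple co-occurrence counting over news text. The headlines below are invented, standing in for the Big Kinds press corpus:

```python
from collections import Counter
from itertools import combinations

# Hypothetical headlines mentioning private security.
headlines = [
    "private security guard prevents theft at mall",
    "police and private security cooperate on patrol",
    "private security firm hires new guard staff",
]

# Count how often each unordered word pair appears in the same headline.
cooc = Counter()
for h in headlines:
    words = set(h.split())
    for a, b in combinations(sorted(words), 2):
        cooc[(a, b)] += 1

# Collect the words most often co-occurring with the target keyword.
target = "security"
related = Counter()
for (a, b), n in cooc.items():
    if a == target:
        related[b] += n
    elif b == target:
        related[a] += n
print(related.most_common(2))
```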