• Title/Summary/Keyword: Mining Difficulty

A Study on an Estimation Method of Domestic Market Size by Using the Standard Statistical Classifications (표준통계분류를 이용한 내수시장 규모 추정방법에 관한 연구)

  • Yoo, Hyoung Sun;Seo, Ju Hwan;Jun, Seung-pyo;Seo, Jinny
    • Journal of Korea Technology Innovation Society / v.18 no.3 / pp.387-415 / 2015
  • In this study, we propose a model for estimating domestic market size by linking standard statistical classification systems and review its practical applicability. Using the correspondence tables provided by Statistics Korea and the United Nations Statistics Division, the model links the results of the mining and manufacturing survey of Statistics Korea, compiled on the basis of the KSIC (Korea Standard Industrial Classification), with Korean trade statistics based on the HS (Harmonized Commodity Description and Coding System) classification. The most serious obstacle to adopting the integrated KSIC-ISIC-HS correspondence table for estimating domestic market size is the complex many-to-many linkage between KSIC and HS codes. We therefore suggest a method that divides the trade amount of an HS code linked to two or more ISIC codes in proportion to the shipment values of those ISIC codes, using the shipment ratios as weights. With this model, the domestic market size of 125 ISIC codes in the manufacturing industry can be analyzed, and the market size in the near future can be forecast. Although the model has some limitations, such as the difficulty of analyzing items more finely subdivided than the ISIC items, the impossibility of analyzing items in industries other than manufacturing, and errors in shipment values due to missing data, this study is significant in that it provides a method for analyzing domestic market size using the most objective, reliable, and sustainably available data.
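
A minimal sketch of the weight-splitting step described in this abstract: when an HS code is linked to several ISIC codes, its trade amount is divided in proportion to the shipment values of those ISIC codes. All codes and figures below are hypothetical, and the paper's exact procedure may differ.

```python
# Hypothetical correspondence: one HS code linked to two ISIC codes
hs_to_isic = {"HS-392010": ["ISIC-2013", "ISIC-2220"]}

# Hypothetical shipment values per ISIC code (from a mining/manufacturing survey)
shipments = {"ISIC-2013": 300.0, "ISIC-2220": 700.0}

# Hypothetical trade amount per HS code
trade = {"HS-392010": 50.0}

allocated = {}
for hs_code, isic_codes in hs_to_isic.items():
    total_shipment = sum(shipments[c] for c in isic_codes)
    for c in isic_codes:
        weight = shipments[c] / total_shipment  # shipment ratio as the weight
        allocated[c] = allocated.get(c, 0.0) + trade[hs_code] * weight

print(allocated)  # {'ISIC-2013': 15.0, 'ISIC-2220': 35.0}
```

The allocated trade amounts can then enter a per-ISIC-code market-size estimate (for example, shipments plus allocated imports minus allocated exports), though the paper's exact formula should be consulted.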

Complexity Metrics for Analysis Classes in the Unified Software Development Process (Unified Process의 분석 클래스에 대한 복잡도 척도)

  • Kim, Yu-Kyung;Park, Jai-Nyun
    • The KIPS Transactions:PartD / v.8D no.1 / pp.71-80 / 2001
  • The object-oriented (OO) methodology, which uses concepts such as encapsulation, inheritance, polymorphism, and message passing, demands metrics different from those of the structured methodology. There are many studies on OO software metrics, such as program complexity and design metrics. Metrics for analysis classes are also needed, however, because reducing complexity in the analysis phase greatly reduces the effort and cost of system development. In this paper, we propose new metrics to measure the complexity of the analysis classes derived in the analysis phase of the Unified Process. The collaboration complexity, denoted by CC, is the maximum number of collaborations that can be achieved with each collaborator and determines the potential complexity. The interface complexity, denoted by IC, indicates the difficulty of understanding the interfaces of the collaborators. We prove mathematically that the suggested metrics satisfy OO characteristics such as class size and inheritance, and we verify them theoretically against Weyuker's nine properties. Moreover, we present computation results for the analysis classes of a system that automatically answers its users' questions using text mining. Comparing CC and IC with CBO and WMC shows that CC and IC represent complexity better than CBO and WMC. We expect that reviewing the complexity of analysis classes in the first stage of the SDLC (Software Development Life Cycle) will help develop cost-effective OO software.
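
The CC and IC definitions above are stated informally; the toy model below is one plausible reading, offered purely as an illustration. The class, collaborators, and counts are hypothetical, and the paper's exact formulas may differ.

```python
# Hypothetical analysis class with its collaborators and their interfaces
analysis_class = {
    "name": "QuestionDispatcher",
    "collaborators": {
        "TextMiner": ["tokenize", "match"],  # collaborations with TextMiner
        "AnswerStore": ["lookup"],           # collaborations with AnswerStore
    },
    "interface_ops": {"TextMiner": 5, "AnswerStore": 2},  # operations each collaborator exposes
}

# CC: maximum number of collaborations achieved with any single collaborator
cc = max(len(c) for c in analysis_class["collaborators"].values())

# IC: total number of interface operations the class must understand
ic = sum(analysis_class["interface_ops"].values())

print(f"CC={cc}, IC={ic}")  # CC=2, IC=7
```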

Web Site Keyword Selection Method by Considering Semantic Similarity Based on Word2Vec (Word2Vec 기반의 의미적 유사도를 고려한 웹사이트 키워드 선택 기법)

  • Lee, Donghun;Kim, Kwanho
    • The Journal of Society for e-Business Studies / v.23 no.2 / pp.83-96 / 2018
  • Extracting keywords that represent documents is very important because such keywords can be used for automated services such as document search, classification, and recommendation, as well as for quickly conveying document information. However, when keywords are extracted based on the frequency of words appearing in a web site's documents, or with graph algorithms based on word co-occurrence, two problems arise: the web page structure potentially contains many words unrelated to the topic, and the limited performance of Korean tokenizers makes it difficult to extract semantically meaningful keywords. In this paper, we propose a method that selects candidate keywords based on semantic similarity, addressing both the failure to extract semantic keywords and the poor accuracy of Korean tokenizer analysis. Finally, a filtering step removes inconsistent keywords to extract the final semantic keywords. Experiments on real web pages of small businesses show that the proposed method improves performance by 34.52% over a statistical-similarity-based keyword selection technique. This confirms that considering the semantic similarity between words and removing inconsistent keywords improves keyword extraction from documents.
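
A minimal sketch of semantic-similarity-based candidate selection with gensim's Word2Vec. The corpus, seed word, and threshold are hypothetical stand-ins for the paper's web-site documents and filtering rules.

```python
from gensim.models import Word2Vec

# Hypothetical pre-tokenized sentences from a small business web site
sentences = [
    ["handmade", "leather", "wallet", "craft"],
    ["leather", "bag", "handmade", "gift"],
    ["wallet", "bag", "shop", "order"],
]

# Single worker for reproducibility in this toy example
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, seed=42, workers=1)

# Candidate keywords: words sufficiently similar to a seed topic word;
# dissimilar (inconsistent) words are filtered out
seed, threshold = "leather", 0.0  # threshold is illustrative only
candidates = [w for w in model.wv.index_to_key
              if w != seed and model.wv.similarity(seed, w) > threshold]
print(candidates)
```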

Tunnel Blasting Design Suited to Given Specific Charge (비장약량 맞춤형 터널발파 설계방법)

  • Choi, Byung-Hee;Ryu, Chang-Ha;Jeong, Ju-Hwan
    • Explosives and Blasting / v.27 no.2 / pp.33-41 / 2009
  • Specific charge, also called powder factor, is defined as the total explosive mass in a blast divided by the total volume or weight of rock to be fragmented. It is a well-known fact that a change in explosive consumption per ton or per cubic meter of rock is always a good indication of changed rock conditions. In mining, it is common to use explosive consumption per ton of ore as a measure of the blastability of rock; in civil engineering, by contrast, it is common to use explosive consumption per cubic meter of rock. In this paper, we adopt the civil engineering definition because we are mainly concerned with tunnel blasting. Although various methods for tunnel blast design have been proposed, there are many cases in which they do not work well. This may be caused by differences in rock conditions between countries or regions, and it can pose a serious technical difficulty for a contractor. If we know the specific charge for a given rock, however, the blast design becomes much easier. In this respect, we suggest an algorithm for tunnel blast design that produces exactly the predetermined specific charge as a result of the design. The algorithm is based on the concept of assigning different fixation factors to the various parts of the tunnel section, and it may be used in combination with known methods of tunnel blast design.
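
A worked example of the specific charge definition stated above (total explosive mass divided by rock volume, the civil-engineering convention). All figures are hypothetical.

```python
total_explosive_kg = 180.0   # explosives charged in one round (hypothetical)
tunnel_area_m2 = 25.0        # tunnel cross-section (hypothetical)
advance_m = 3.2              # round length, i.e., advance per blast (hypothetical)

rock_volume_m3 = tunnel_area_m2 * advance_m
specific_charge = total_explosive_kg / rock_volume_m3  # kg per cubic meter
print(f"q = {specific_charge:.2f} kg/m^3")  # q = 2.25 kg/m^3
```

In the design algorithm, the fixation factors assigned to the different parts of the tunnel section are what allow the per-hole charges to be tuned so that the round as a whole reproduces exactly the predetermined q.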

Measurement of Classes Complexity in the Object-Oriented Analysis Phase (객체지향 분석 단계에서의 클래스 복잡도 측정)

  • Kim, Yu-Kyung;Park, Jai-Nyun
    • Journal of KIISE:Software and Applications / v.28 no.10 / pp.720-731 / 2001
  • Complexity metrics developed for the structured paradigm of software development are not suitable for the object-oriented (OO) paradigm, because they do not support key OO concepts such as inheritance, polymorphism, message passing, and encapsulation. There are many studies on OO software metrics, such as program complexity and design metrics, but metrics that measure the complexity of classes in the OO analysis phase are also needed, because they provide earlier feedback to the development project, and earlier feedback means more effective development and less costly maintenance. In this paper, we propose new metrics to measure the complexity of the analysis classes derived in the analysis phase based on the RUP (Rational Unified Process). The collaboration complexity, denoted by CC, is the maximum number of collaborations that can be achieved with each collaborator and determines the potential complexity. The interface complexity, denoted by IC, indicates the difficulty of understanding the interfaces of the collaborators. We verify the suggested metrics theoretically against Weyuker's nine properties. Moreover, we present computation results for the analysis classes of a system that automatically answers its users' questions using text mining. Compared with the CBO and WMC metrics suggested by Chidamber and Kemerer, classes with high values of the proposed metrics remain highly complex in the design phase as well, and CC and IC represent complexity better than CBO and WMC. Our metrics may thus provide earlier feedback and make it possible to predict the effort, cost, and time required for the remaining processes. As a result, we expect that reviewing the complexity of analysis classes in the first stage of the SDLC (Software Development Life Cycle) will help develop cost-effective OO software.

Topic-Specific Mobile Web Contents Adaptation (주제기반 모바일 웹 콘텐츠 적응화)

  • Lee, Eun-Shil;Kang, Jin-Beom;Choi, Joong-Min
    • Journal of KIISE:Software and Applications / v.34 no.6 / pp.539-548 / 2007
  • Mobile content adaptation is a technology for effectively presenting content originally built for desktop PCs on wireless mobile devices. Previous approaches to Web content adaptation are mostly device-dependent, the transformation of content to suit smaller devices is done manually, and the same content is provided to different users regardless of their individual preferences. As a result, users have difficulty selecting relevant information from a heavy volume of content, since the context information related to the content is not provided. To resolve these problems, this paper proposes an enhanced method of Web content adaptation for mobile devices. In our system, the adaptation process consists of four stages: block filtering, block title extraction, block content summarization, and personalization through learning. Learning is initiated when the user selects the full content menu from the content summary page; as a result of learning, personalization is realized by showing the information for the relevant block at the top of the content list. A series of experiments evaluates the content adaptation on a number of Web sites, including online newspapers. The results are satisfactory, both in block-filtering accuracy and in user satisfaction with personalization.
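
A rough sketch of the block-filtering stage, assuming blocks are div elements and using a simple link-density heuristic to drop navigation blocks. BeautifulSoup and the heuristic are our assumptions, not the authors' implementation.

```python
from bs4 import BeautifulSoup

html = """
<div><a href='/home'>Home</a> <a href='/news'>News</a></div>
<div><h3>Today's headline</h3><p>Long article text about the topic ...</p></div>
"""

soup = BeautifulSoup(html, "html.parser")
content_blocks = []
for div in soup.find_all("div"):
    text = div.get_text(" ", strip=True)
    link_text = " ".join(a.get_text(" ", strip=True) for a in div.find_all("a"))
    # Keep blocks whose text is mostly non-link, i.e., likely real content
    if text and len(link_text) / len(text) < 0.5:
        title = div.find(["h1", "h2", "h3"])  # crude block title extraction
        content_blocks.append({"title": title.get_text(strip=True) if title else None,
                               "text": text})

print(content_blocks)  # the navigation-only block is filtered out
```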

An Interpretable Log Anomaly System Using Bayesian Probability and Closed Sequence Pattern Mining (베이지안 확률 및 폐쇄 순차패턴 마이닝 방식을 이용한 설명가능한 로그 이상탐지 시스템)

  • Yun, Jiyoung;Shin, Gun-Yoon;Kim, Dong-Wook;Kim, Sang-Soo;Han, Myung-Mook
    • Journal of Internet Computing and Services / v.22 no.2 / pp.77-87 / 2021
  • With the development of the Internet and personal computers, various complex attacks have begun to emerge. As attacks become more complex, signature-based detection becomes difficult, which has led to research on behavior-based log anomaly detection. Recent work utilizes deep learning to learn the order of log events and shows good performance; despite this, it does not provide any explanation for its predictions. The lack of explanation makes it hard to find contamination in the data or vulnerabilities in the model itself, so users lose trust in the model. To address this problem, this work proposes an explainable log anomaly detection system. Log parsing is performed first; afterward, sequential rules are extracted using Bayesian posterior probability, yielding a rule set of the form "if condition, then result, with posterior probability". If a sample matches the rule set, it is normal; otherwise, it is an anomaly. We use the HDFS dataset for the experiment, achieving an F1 score of 92.7% on the test set.
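
A toy sketch of the rule-matching step: rules of the form "if condition then result, with posterior probability" are assumed to have been extracted offline (via closed sequential pattern mining and Bayes' rule); at detection time a log sequence is normal if a sufficiently probable rule explains it. The rules, events, and threshold here are hypothetical.

```python
rules = [
    # (condition sequence, expected next event, posterior probability)
    (("open", "read"), "close", 0.95),
    (("alloc", "write"), "free", 0.90),
]

def is_normal(sequence, min_posterior=0.8):
    """Return True if some high-posterior rule explains the sequence."""
    for condition, result, posterior in rules:
        if posterior < min_posterior:
            continue
        n = len(condition)
        # condition must appear as a contiguous subsequence followed by result
        for i in range(len(sequence) - n):
            if tuple(sequence[i:i + n]) == condition and sequence[i + n] == result:
                return True
    return False

print(is_normal(["open", "read", "close"]))  # True  -> normal
print(is_normal(["open", "read", "write"]))  # False -> anomaly
```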

Semi-automatic Construction of Learning Set and Integration of Automatic Classification for Academic Literature in Technical Sciences (기술과학 분야 학술문헌에 대한 학습집합 반자동 구축 및 자동 분류 통합 연구)

  • Kim, Seon-Wu;Ko, Gun-Woo;Choi, Won-Jun;Jeong, Hee-Seok;Yoon, Hwa-Mook;Choi, Sung-Pil
    • Journal of the Korean Society for Information Management / v.35 no.4 / pp.141-164 / 2018
  • Recently, as the amount of academic literature has increased rapidly and complex research has been actively conducted, researchers have difficulty analyzing trends in previous research. To solve this problem, information needs to be classified at the level of individual academic papers; in Korea, however, no academic database provides such information. In this paper, we propose an automatic classification system that can classify domestic academic literature into multiple classes. To this end, academic documents in the technical sciences written in Korean were first collected and mapped onto the 600 class of the DDC using K-Means clustering, constructing a learning set capable of supporting multiple classification. The resulting training set comprises 63,915 documents in the Korean technical sciences, excluding records with missing metadata. Using this training set, we implemented and trained a deep-learning-based automatic classification engine for academic documents. Experiments on a manually built test set showed 78.32% accuracy and an F1 score of 72.45% for multiple classification.
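
A sketch of the learning-set construction idea: cluster TF-IDF vectors of documents with K-Means, after which each cluster is mapped onto a target class (here, a DDC 600-series label) in a semi-automatic curation step. The corpus is hypothetical; the paper's pipeline also uses metadata and trains a deep learning classifier on the resulting set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical stand-ins for Korean technical-science abstracts
docs = [
    "battery electrode material for energy storage",
    "lithium cell cathode synthesis",
    "bridge girder load testing",
    "concrete beam structural analysis",
]

X = TfidfVectorizer().fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# A curator would then inspect each cluster and assign it a DDC class
for doc, cluster_id in zip(docs, km.labels_):
    print(cluster_id, "<-", doc)
```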

The Pattern Analysis of Financial Distress for Non-audited Firms using Data Mining (데이터마이닝 기법을 활용한 비외감기업의 부실화 유형 분석)

  • Lee, Su Hyun;Park, Jung Min;Lee, Hyoung Yong
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.111-131 / 2015
  • Only a handful of studies have been conducted on pattern analysis of corporate distress, compared with research on bankruptcy prediction, and the few that exist mainly focus on audited firms because financial data are easier to collect for them. In reality, however, corporate financial distress is a far more common and critical phenomenon for non-audited firms, which are mainly small and medium-sized. The purpose of this paper is to classify distressed non-audited firms according to their financial ratios using a data mining technique, the Self-Organizing Map (SOM). A SOM is a type of artificial neural network trained with unsupervised learning to produce a lower-dimensional, discretized representation of the input space of the training samples, called a map. It differs from other artificial neural networks in that it applies competitive learning rather than error-correction learning (such as backpropagation with gradient descent) and in that it uses a neighborhood function to preserve the topological properties of the input space; it is a popular and successful clustering algorithm. In the empirical test, we collected 10 financial ratios of 100 non-audited firms under distress in 2004, for the previous two years (2002 and 2003). Using these financial ratios and the SOM algorithm, five distinct patterns were distinguished. In pattern 1, financial distress was very serious in almost all financial ratios; 12% of the firms fall into this pattern. In pattern 2, financial distress was weak in almost all financial ratios; 14% of the firms fall into this pattern. In pattern 3, the growth ratio was the worst among all patterns; the firms of this pattern may be under distress due to severe competition in their industries, and approximately 30% of the firms fell into this group. In pattern 4, the growth ratio was higher than in any other pattern, but the cash ratio and profitability ratio did not keep pace; the firms of this pattern appear to have become distressed while expanding their business, and about 25% of the firms were in this pattern. Finally, pattern 5 encompassed very solvent firms, which were perhaps distressed by a bad short-term strategic decision or by problems with the firms' entrepreneurs; approximately 18% of the firms were in this pattern. This study makes both academic and empirical contributions. Academically, non-audited companies, which tend to go bankrupt easily and whose financial data are unstructured or easily manipulated, are classified with a data mining technique (the Self-Organizing Map), rather than large audited firms with well-prepared, reliable financial data. Empirically, even though only the financial data of non-audited firms are analyzed, the results are useful for detecting the first symptoms of financial distress, which helps forecast bankruptcy and manage early-warning signals. A limitation of this research is that only 100 firms were analyzed, owing to the difficulty of collecting financial data for non-audited firms, which made it hard to break the analysis down by category or size. Also, non-financial qualitative data are crucial for the analysis of bankruptcy, so non-financial qualitative factors should be taken into account in a future study. This study sheds some light on distress prediction for non-audited small and medium-sized firms.
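
To make the clustering step concrete, here is a compact, illustrative SOM in NumPy. The data are random stand-ins for the paper's 10 financial ratios of 100 firms, and the grid size and training schedule are arbitrary choices, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 10))  # 100 firms x 10 ratios (hypothetical)

grid_w, grid_h, dim = 5, 5, data.shape[1]
weights = rng.normal(size=(grid_w, grid_h, dim))
coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h),
                              indexing="ij"), axis=-1)

lr, sigma, epochs = 0.5, 2.0, 20
for epoch in range(epochs):
    decay = np.exp(-epoch / epochs)
    for x in data:
        # Best-matching unit (BMU): node whose weight vector is closest
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Neighborhood function preserves the topology of the input space
        grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        h = np.exp(-grid_d2 / (2 * (sigma * decay) ** 2))[..., None]
        weights += lr * decay * h * (x - weights)

# Each firm's "pattern" is the map node (BMU) it falls onto
bmus = [np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)),
                         (grid_w, grid_h)) for x in data]
print(bmus[:5])
```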

Methodology for Identifying Issues of User Reviews from the Perspective of Evaluation Criteria: Focus on a Hotel Information Site (사용자 리뷰의 평가기준 별 이슈 식별 방법론: 호텔 리뷰 사이트를 중심으로)

  • Byun, Sungho;Lee, Donghoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.23-43 / 2016
  • As a result of the growth of Internet data and the rapid development of Internet technology, "big data" analysis has gained prominence as a major approach to evaluating and mining enormous amounts of data for various purposes. In recent years especially, people tend to share their experiences of leisure activities and to review others' accounts of theirs; by referring to others' experiences, they can gather information that may lead to better leisure activities in the future. This phenomenon appears across many kinds of leisure activity, such as movies, traveling, accommodation, and dining. Apart from blogs and social networking sites, many other websites provide a wealth of information related to leisure activities, most of them presenting information on each product in various formats depending on their purposes and perspectives. Generally, these websites provide average ratings and detailed reviews from users who actually used the products or services, and these ratings and reviews can support potential customers' decisions to purchase the same products or services. However, existing websites offering information on leisure activities provide ratings and reviews based on only a single level of evaluation criteria. Therefore, to identify the main issue for each evaluation criterion, as well as the characteristics of the specific elements comprising each criterion, users have to read a large number of reviews. In particular, since most users search for the characteristics of the detailed elements of one or more specific evaluation criteria according to their priorities, they must spend a great deal of time and effort reading and understanding many reviews to obtain the desired information. Although some websites break the evaluation criteria down and direct users to enter reviews at different levels of criteria, the excessive number of input sections makes the whole process inconvenient, and problems arise if a user does not follow the instructions or fills in the wrong section. Finally, breaking down the evaluation criteria is hard to treat as a realistic alternative, because identifying all the detailed criteria for each evaluation criterion is challenging. For example, reviews of a hotel tend to be written as single-level comments on components such as accessibility, rooms, service, or food; they may answer frequently asked questions, such as the distance to the nearest subway station or the condition of the bathroom, but they still lack detailed information. In addition, even if a breakdown of the evaluation criteria were provided along with various input sections, a user might fill in only the criterion for accessibility, or enter information about rooms under accessibility, greatly reducing the reliability of the segmented review. In this study, we propose an approach to overcome two limitations of existing leisure-activity information websites: (1) the unreliability of reviews for each evaluation criterion and (2) the difficulty of identifying the detailed contents that make up each criterion. In our proposed methodology, we first identify the review content and construct a lexicon for each evaluation criterion from the terms frequently used for that criterion. Next, the sentences in the review documents containing the lexicon terms are decomposed into review units, which are then reorganized by evaluation criterion. Finally, the issues of the review units for each evaluation criterion are derived, and summary results are provided along with the review units themselves. This approach aims to save users time and effort, because they read only the information relevant to each evaluation criterion rather than the entire review text. Our methodology is based on topic modeling, which is actively used in text analysis, but the review is decomposed into sentence units rather than treated as a single document; after decomposition, the review units are reorganized by evaluation criterion and used in the subsequent analysis, which distinguishes this work from existing topic-modeling-based studies. In this paper, we collected 423 reviews from hotel information websites, decomposed them into 4,860 review units, and reorganized the units according to six evaluation criteria. Applying these review units in our methodology, we present the analysis results and demonstrate the utility of the proposed methodology.
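
A sketch of the issue-derivation step: review units already assigned to one evaluation criterion are topic-modeled to surface that criterion's main issues. The review units and vocabulary are hypothetical English stand-ins; the paper works on Korean hotel reviews with criterion-specific lexicons.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical review units grouped under the "accessibility" criterion
units = [
    "subway station is five minutes away on foot",
    "very close to the subway and the bus stop",
    "airport shuttle stops right in front",
    "shuttle to the airport runs every hour",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(units)
vocab = vec.get_feature_names_out()

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
for t, comp in enumerate(lda.components_):
    top = [vocab[i] for i in comp.argsort()[-3:][::-1]]  # top words per issue
    print(f"issue {t}: {top}")  # e.g., subway/station vs. airport/shuttle
```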