• Title/Summary/Keyword: utility mining

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.67-74
    • /
    • 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategy used to exploit item importance, itemset mining approaches are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, database analysis reveals the importance of a given transaction, because a transaction's weight is higher when it contains many items with high weights. We not only analyze the advantages and disadvantages of the most prominent algorithms in this field but also compare their performance. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are several state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. They need no additional database scan after the WIT-tree is constructed, since each node of the WIT-tree stores item information such as the item and its transaction IDs. In particular, whereas traditional algorithms perform many database scans to mine weighted itemsets, the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets using the information of the transactions that contain them; WIT-FWIs-MODIFY reduces the number of operations needed to calculate the frequency of a new itemset; and WIT-FWIs-DIFF uses a technique based on the difference of two itemsets' transaction sets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test evaluates the stability of each algorithm as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF is more efficient than the other algorithms. Compared to the WIT-tree-based algorithms, WIS, which is based on the Apriori technique, is the least efficient because it requires far more computations than the others on average.
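
To make the transaction-weight idea concrete, here is a minimal Python sketch. It assumes a transaction's weight is the mean of its items' weights and that weighted support is the sum of supporting transactions' weights over the total weight; the item weights, transactions, and threshold are all hypothetical, and the surveyed algorithms may define these quantities differently. The tidset intersection mirrors the WIT-tree idea of storing transaction IDs at each node so the database is scanned only once.

```python
from itertools import combinations

# Hypothetical toy data: item weights and a transaction database.
ITEM_WEIGHTS = {"a": 0.6, "b": 0.9, "c": 0.3, "d": 0.8}
TRANSACTIONS = {
    1: {"a", "b", "c"},
    2: {"b", "d"},
    3: {"a", "b", "d"},
    4: {"c", "d"},
}

# Transaction weight: here taken as the mean weight of its items
# (one common choice; the surveyed algorithms may differ).
tw = {tid: sum(ITEM_WEIGHTS[i] for i in items) / len(items)
      for tid, items in TRANSACTIONS.items()}
total_tw = sum(tw.values())

def weighted_support(tids):
    """Weighted support = sum of supporting transactions' weights / total."""
    return sum(tw[t] for t in tids) / total_tw

def mine(min_wsup=0.3):
    # Level 1: each item keeps its tidset, mirroring WIT-tree nodes that
    # store transaction IDs so no further database scan is needed.
    level = {(i,): {t for t, its in TRANSACTIONS.items() if i in its}
             for i in ITEM_WEIGHTS}
    level = {k: v for k, v in level.items() if weighted_support(v) >= min_wsup}
    results = dict(level)
    while level:
        nxt = {}
        # Combine two length-N itemsets into a length-(N+1) candidate by
        # intersecting their tidsets.
        for x, y in combinations(sorted(level), 2):
            if x[:-1] == y[:-1]:
                cand, tids = x + (y[-1],), level[x] & level[y]
                if weighted_support(tids) >= min_wsup:
                    nxt[cand] = tids
        results.update(nxt)
        level = nxt
    return {k: round(weighted_support(v), 3) for k, v in results.items()}

print(mine())
```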

Discovery-Driven Exploration Method in Lung Cancer 2-DE Gel Images Using the Data Cube (데이터 큐브를 이용한 폐암 2-DE 젤 이미지에서의 예외 탐사)

  • Shim, Jung-Eun;Lee, Won-Suk
    • The KIPS Transactions:PartD
    • /
    • v.15D no.5
    • /
    • pp.681-690
    • /
    • 2008
  • In proteomics research, identifying differentially expressed proteins observed under specific conditions is one of the key issues. There are several ways to detect changes in a specific protein's expression level, such as statistical analysis and graphical visualization. However, it is quite difficult to handle the spot information of individual proteins manually with these methods, because a tissue sample contains a considerable number of proteins. In this paper, using database and data mining techniques, we propose applying the OLAP data cube and discovery-driven exploration. With data cubes, it is possible to analyze the relationship between proteins and relevant clinical information, as well as the proteins differentially expressed by disease. We propose measures and exception indicators suitable for analyzing changes in protein expression levels. In addition, we propose a method for reducing the cost of calculating InExp in discovery-driven exploration. We also evaluate the utility and effectiveness of the data cube and discovery-driven exploration on lung cancer 2-DE gel images.
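
As an illustration of the cube-and-exceptions idea, the following Python sketch builds a tiny two-dimensional cube with pandas and flags cells whose values deviate strongly from an additive row-plus-column model. This is the general exception-flagging notion behind discovery-driven exploration, not the paper's exact SelfExp/InExp computation, and the spot intensities are hypothetical.

```python
import pandas as pd
import numpy as np

# Hypothetical spot-intensity records: protein spot x patient group.
df = pd.DataFrame({
    "protein": ["P1", "P1", "P2", "P2", "P3", "P3"],
    "group":   ["cancer", "normal"] * 3,
    "intensity": [5.1, 1.2, 2.0, 2.1, 0.5, 3.9],
})

# A 2-D "cube" slice: mean expression per (protein, group) cell.
cube = df.pivot_table(values="intensity", index="protein",
                      columns="group", aggfunc="mean")

# Expected cell value under an additive model (row effect + column effect);
# large residuals mark candidate exceptions.
grand = cube.values.mean()
expected = (cube.mean(axis=1).values[:, None]
            + cube.mean(axis=0).values[None, :] - grand)
residual = cube.values - expected
score = np.abs(residual) / (np.abs(residual).std() + 1e-9)

exceptions = cube.copy()
exceptions.loc[:, :] = score
print(exceptions.round(2))  # cells with the highest scores are candidates
```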

Factors Clustering Approach to Parametric Cost Estimates And OLAP Driver

  • Cho, JaeHo;Son, BoSik;Chun, JaeYoul
    • International conference on construction engineering and project management
    • /
    • 2009.05a
    • /
    • pp.707-716
    • /
    • 2009
  • The role of the cost modeller is to facilitate the design process by the systematic application of cost factors, so as to maintain a sensible and economic relationship between cost, quantity, utility, and appearance, which helps achieve the client's requirements within an agreed budget. There has been a good deal of research on cost estimates in the early design stage, focused on improving accuracy or identifying impact factors. It is common knowledge from the research to date that cost estimates are undertaken progressively throughout the design stage, making use of the information available at each phase. Moreover, cost estimates in the early design stage must analyze information under various kinds of preconditions before the design matures, because a design can be modified and changed at any point in the process depending on the client's requirements. Parametric cost estimating models have been adopted to support decision making in this changeable early-design environment. These models use a similar instance or a pattern from historical cases, constituted from project information, geographic design features, and data relevant to quantity or cost. The OLAP technique analyzes subject data from multi-dimensional points of view; it supports querying, analysis, and comparison of the required information through diverse queries, and its data structure matches the multiview-analysis framework well. Accordingly, this study implements a multi-dimensional information system for case-based quantity data related to design information using OLAP technology, and then analyzes the impact factors of quantity by design criteria or parameters of the same meaning. On the basis of the factors examined above, this study generates rules on quantity measures and produces resemblance classes using data mining clustering. This knowledge base consists of data classified into group patterns, which provide an appropriate foundation for the parametric cost estimating method.
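
As a rough illustration of the clustering step, the sketch below groups hypothetical historical cases into resemblance classes with k-means (scikit-learn); the feature set (floor area, storeys, unit cost index) and the number of clusters are assumptions, not the paper's actual parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical historical cases: [floor area m2, storeys, unit cost index].
cases = np.array([
    [1200, 3, 0.92], [1350, 3, 0.95], [5200, 12, 1.30],
    [4800, 11, 1.25], [800, 2, 0.85], [5500, 13, 1.35],
])

# Scale features so floor area does not dominate the distance metric.
scaled = (cases - cases.mean(axis=0)) / cases.std(axis=0)

# Group cases into resemblance classes; a new project would be estimated
# from the cases in its nearest class.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaled)
for label in sorted(set(km.labels_)):
    members = cases[km.labels_ == label]
    print(f"class {label}: mean unit cost index = {members[:, 2].mean():.2f}")
```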

Anatomy of Sentiment Analysis of Tweets Using Machine Learning Approach

  • Misbah Iram;Saif Ur Rehman;Shafaq Shahid;Sayeda Ambreen Mehmood
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.10
    • /
    • pp.97-106
    • /
    • 2023
  • Sentiment analysis using social network platforms such as Twitter has achieved tremendous results. Twitter is an online social networking site that contains a rich amount of data. The platform is known as an information channel corresponding to different sites and categories. Tweets are most often publicly accessible, with very few limitations and security options available. Twitter also provides powerful tools that enhance its utility, including a powerful search system that makes recently posted tweets publicly retrievable by keyword. As a popular social medium, Twitter has the potential to interconnect information, reviews, and updates, all of which are important for engaging the targeted population. In this work, numerous methods for classifying tweet sentiment on Twitter are discussed. There has been a great deal of work in the field of sentiment analysis of Twitter data. This study provides a comprehensive analysis of the most standard and widely applicable techniques for opinion mining, both machine-learning-based and lexicon-based, along with their metrics. The proposed work helps analyze the information in tweets, where opinions are highly unstructured and heterogeneous, and polarity is positive, negative, or neutral. To validate the performance of the proposed framework, an extensive series of experiments has been performed on a real-world Twitter dataset, demonstrating the effectiveness of the proposed framework. This research effort also highlights the recent challenges in the field of sentiment analysis along with the future scope of the proposed work.
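
To illustrate the two families of techniques the survey covers, here is a minimal Python sketch contrasting a machine-learning classifier (bag-of-words plus Naive Bayes via scikit-learn) with a lexicon-based scorer; the tweets, labels, and lexicon are toy placeholders, and the surveyed systems are far larger.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny hypothetical labelled tweets (real work would use a large corpus).
train_texts = ["love this phone", "great service", "terrible delay",
               "worst flight ever", "pretty good overall", "awful support"]
train_labels = ["pos", "pos", "neg", "neg", "pos", "neg"]

# Machine-learning route: bag-of-words features + Naive Bayes classifier.
vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(train_texts), train_labels)
print(clf.predict(vec.transform(["good flight", "terrible phone"])))

# Lexicon-based route: sum word polarities from a (toy) sentiment lexicon.
LEXICON = {"love": 1, "great": 1, "good": 1, "terrible": -1,
           "worst": -1, "awful": -1}

def lexicon_score(text):
    s = sum(LEXICON.get(w, 0) for w in text.lower().split())
    return "pos" if s > 0 else "neg" if s < 0 else "neutral"

print(lexicon_score("great phone but awful battery"))  # -> neutral (1 - 1)
```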

A Study of the "erlaubtes Risiko" in Aviation (항공 운항에서의 허용된 위험 법리에 대한 연구)

  • Ham, Se-Hoon
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.25 no.2
    • /
    • pp.201-230
    • /
    • 2010
  • With the rise of the automobile, railroad, and mining industries, the legal principle of "erlaubtes Risiko" (permitted risk), which began as a means of sustaining industrial society for the sake of social utility, has been interpreted in precedent within the framework of negligence theory while also gaining academic recognition. In aircraft operation, an area of high technology, clear-air turbulence (CAT), which can cause accidents or incidents, and thunderstorms with turbulence are abnormal, rapidly changing meteorological phenomena that cannot be monitored perfectly, much like a patient in unstable condition; neither the probability of their occurrence nor the severity of a resulting accident can be ascertained in advance. Yet the use of the jet stream, despite the possibility of CAT, is an act of high social utility: it drastically reduces flight time and fuel consumption and helps guarantee on-schedule arrivals and departures, even when landing at airports affected by thunderstorms that do not present a fatal risk. Although some measures can be taken to predict and avoid the potential risk, the ordinary duty of care should be eased, on grounds of social utility, by applying the legal principle of permitted risk to incidents and accidents caused by operating in areas with a low probability of turbulence or CAT.

An Efficient Machine Learning-based Text Summarization in the Malayalam Language

  • P Haroon, Rosna;Gafur M, Abdul;Nisha U, Barakkath
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.6
    • /
    • pp.1778-1799
    • /
    • 2022
  • Automatic text summarization is a procedure that condenses a large document into a shorter text that retains the significant information. Malayalam is one of the most difficult languages used in India, spoken mainly in Kerala and Lakshadweep. Natural language processing work on Malayalam remains limited due to the complexity of the language and the scarcity of available resources. In this paper, an approach is proposed for summarizing Malayalam documents by training a model based on the Support Vector Machine classification algorithm. Different features of the text are taken into account in training so that the system can output the most important content from the input text. The classifier assigns sentences to separate classes (most important, important, average, and least significant), and on this basis the system creates a summary of the input document. The user can select a compression ratio, and the system outputs that fraction of the document as the summary. Model performance is measured on Malayalam documents of different genres as well as documents from the same domain, and is evaluated with the content evaluation measures precision, recall, F-score, and relative utility. The obtained precision and recall values show that the model is reliable and more relevant than the other summarizers compared.
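
A minimal sketch of the SVM-based pipeline described above, with hypothetical sentence features (position, length, and keyword scores) and made-up importance labels; the paper's actual feature set for Malayalam is richer. Sentences are ranked by predicted class, and the top fraction set by the compression ratio is kept in document order.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical sentence features: [position score, length score, keyword score]
# with importance labels 0..3 (least significant .. most important).
X_train = np.array([[0.9, 0.6, 0.8], [0.1, 0.3, 0.1], [0.7, 0.5, 0.6],
                    [0.2, 0.4, 0.2], [0.8, 0.7, 0.9], [0.3, 0.2, 0.1]])
y_train = np.array([3, 0, 2, 1, 3, 0])

clf = SVC(kernel="rbf").fit(X_train, y_train)

def summarize(sentences, features, compression=0.5):
    """Keep the top `compression` fraction of sentences by predicted class."""
    ranks = clf.predict(features)
    k = max(1, int(len(sentences) * compression))
    keep = sorted(np.argsort(-ranks)[:k])  # preserve document order
    return [sentences[i] for i in keep]

doc = ["Sentence one.", "Sentence two.", "Sentence three.", "Sentence four."]
feats = np.array([[0.85, 0.6, 0.7], [0.2, 0.3, 0.2],
                  [0.6, 0.5, 0.5], [0.1, 0.2, 0.1]])
print(summarize(doc, feats, compression=0.5))
```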

DATA MINING APPROACH TO PARAMETRIC COST ESTIMATE IN EARLY DESIGN STAGE AND ANALYTICAL CHARACTERIZATION ON OLAP (ON-LINE ANALYTICAL PROCESSING)

  • JaeHo Cho;HyunKyun Jung;JaeYoul Chun
    • International conference on construction engineering and project management
    • /
    • 2011.02a
    • /
    • pp.176-181
    • /
    • 2011
  • The role of the cost modeler is to facilitate the design process by the systematic application of cost factors, so as to maintain sensible and economic relationships between cost, quantity, utility, and appearance. These relationships help achieve the client's requirements within an agreed budget. The purpose of this study is to develop a parametric cost estimating model for the early design stage using the multi-dimensional system of OLAP (On-line Analytical Processing), based on quantity data related to architectural design features. Parametric cost estimating models have been adopted to support decision making in the early design stage, and typically use a similar instance or a pattern from historical cases. To use this type of data model effectively, data classification and prediction methods must be established; one such method is to find the similar class using an attribute selection measure in the multi-dimensional data model. Therefore, this research analyzes the attribute relevance influenced by architectural design features, taking as its subject the case-based quantity data used for the parametric cost estimating model. Attribute relevance can be analyzed by analytical characterization, which helps determine which attributes should be included in the OLAP multi-dimension.
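
Analytical characterization typically ranks attributes with an attribute selection measure such as information gain; the sketch below computes it for two hypothetical design attributes against a quantity class label. The cases, attributes, and labels are invented for illustration.

```python
import math
from collections import Counter

# Hypothetical cases: design attributes and a quantity class label.
cases = [
    {"frame": "RC",    "storeys": "low",  "qty_class": "small"},
    {"frame": "RC",    "storeys": "high", "qty_class": "large"},
    {"frame": "steel", "storeys": "high", "qty_class": "large"},
    {"frame": "steel", "storeys": "low",  "qty_class": "small"},
    {"frame": "RC",    "storeys": "low",  "qty_class": "small"},
    {"frame": "steel", "storeys": "high", "qty_class": "large"},
]

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attr):
    """Relevance of `attr` to the class: entropy drop after splitting on it."""
    base = entropy([c["qty_class"] for c in cases])
    rem = 0.0
    for v in {c[attr] for c in cases}:
        subset = [c["qty_class"] for c in cases if c[attr] == v]
        rem += len(subset) / len(cases) * entropy(subset)
    return base - rem

for attr in ("frame", "storeys"):
    print(attr, round(information_gain(attr), 3))  # storeys splits perfectly
```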

A Study on the Financial Strength of Households on House Investment Demand (가계 재무건전성이 주택투자수요에 미치는 영향에 관한 연구)

  • Rho, Sang-Youn;Yoon, Bo-Hyun;Choi, Young-Min
    • Journal of Distribution Science
    • /
    • v.12 no.4
    • /
    • pp.31-39
    • /
    • 2014
  • Purpose - This study investigates two issues. First, we attempt to find the important determinants of housing investment and to identify their significance rank using survey panel data. Recently, the expansion of global uncertainty in the real estate market has directly and indirectly influenced the Korean housing market, and households react sensitively to changes in that market. This study therefore aims to understand how the financial strength of the household relates to house investment. Second, we attempt to verify the effectiveness of diverse indices of financial strength, such as DTI, LTV, and PIR, as measures for monitoring the housing market. In the continuing housing market recession after the global crisis, the government places top priority on residential stability, yet it still imposes forceful restraints through indices of financial strength. We believe this study verifies the utility of these regulations in the housing market. Research design, data, and methodology - The data source for this study is the "National Survey of Tax and Benefit" from 2007 (1st) to 2011 (5th) by the Korea Institute of Public Finance. Based on this survey, we use panel data on 3,838 households surveyed continuously for 5 years. We sort the base variables according to their relevance to house investment using the decision tree model (DTM), a standard decision-making model among data-mining techniques, known as a powerful method for identifying contributory variables with predictive power. In addition, we analyze how the important explanatory variables and the household financial strength indices affect housing investment with a binary logistic regression model. Based on these analyses, we conclude that the financial strength indices play a significant role in house investment demand. Results - The results of this research are as follows: 1) The determinants of housing investment are age, consumption expenditure, income, total assets, rent deposit, housing price, housing satisfaction, housing scale, number of household members, and housing-related debt. 2) The influence of these determinants has changed somewhat from year to year with economic and housing market conditions. Consumption expenditure and income were the main determinants before 2009; after 2009, the main determinants shifted to the indices of household financial strength, i.e., DTI, LTV, and PIR. 3) Above all, since 2009, housing loans have been a more important variable than the level of consumption in housing market decisions. Conclusions - The results show that sound household finances have a stronger effect on housing investment than reduced consumption expenditure. At the same time, the key indices the government must monitor under economic emergency conditions differ from those requiring monitoring under normal market conditions; therefore, policy indices to encourage and promote the housing market must be differentiated by market conditions.
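
A compact sketch of the two-step analysis on synthetic data: a decision tree ranks candidate determinants by importance, then a binary logistic regression quantifies their effect on the invest/not-invest outcome. The features, coefficients, and data-generating rule are hypothetical stand-ins for the survey panel.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical household panel features: [age, income, DTI, LTV, PIR].
X = rng.normal(size=(n, 5))
# Synthetic target: invest when income is high and DTI/LTV burdens are low.
y = (0.8 * X[:, 1] - 0.9 * X[:, 2] - 0.7 * X[:, 3]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Step 1: a decision tree ranks candidate determinants by importance.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
names = ["age", "income", "DTI", "LTV", "PIR"]
for name, imp in sorted(zip(names, tree.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name}: importance {imp:.2f}")

# Step 2: binary logistic regression quantifies each determinant's effect.
logit = LogisticRegression().fit(X, y)
print(dict(zip(names, logit.coef_[0].round(2))))
```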

Exploration of relationship between confirmation measures and association thresholds (기준 확인 측도와 연관성 평가기준과의 관계 탐색)

  • Park, Hee Chang
    • Journal of the Korean Data and Information Science Society
    • /
    • v.24 no.4
    • /
    • pp.835-845
    • /
    • 2013
  • The association rule, one of the data mining techniques, quantifies the relevance between sets of items in a large database and has been applied in various fields such as manufacturing, shopping malls, healthcare, insurance, and education. Data mining researchers have proposed interestingness measures for various kinds of patterns, analyzed their theoretical properties, evaluated them empirically, and suggested strategies to select appropriate measures for particular domains and requirements. Such interestingness measures are divided into objective, subjective, and semantic measures. Objective measures are based on the data used in the discovery process and are typically motivated by statistical considerations. Subjective measures take into account not only the data but also the knowledge and interests of the users who examine the pattern, while semantic measures additionally take into account utility and actionability. In a very different context, philosophers of science have devoted a great deal of attention to measures of confirmation or evidential support. The focus of this paper is on asymmetric confirmation measures, and we compare confirmation measures with basic association thresholds using simulation data. As a result, we can distinguish the direction of an association rule by confirmation measures and interpret the degree of association operationally through them. Furthermore, the results show that the measure by Rips and that by Kemeny and Oppenheim perform better than the other confirmation measures.
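
The comparison can be made concrete with a toy computation: the sketch below evaluates the basic association thresholds (support, confidence, lift) for a rule A -> B alongside the Kemeny-Oppenheim confirmation measure F = (P(A|B) - P(A|not B)) / (P(A|B) + P(A|not B)), one asymmetric form of that measure; the transactions are invented, and the paper's simulation setup may differ.

```python
# Toy transactional data for examining the rule A -> B.
TRANSACTIONS = [
    {"a", "b"}, {"a", "b"}, {"a", "b", "c"}, {"a"},
    {"b", "c"}, {"c"}, {"a", "b"}, {"b"},
]
N = len(TRANSACTIONS)

def p(*items):
    """Probability that a transaction contains all the given items."""
    return sum(all(i in t for i in items) for t in TRANSACTIONS) / N

# Basic association thresholds for the rule A -> B.
support = p("a", "b")
confidence = p("a", "b") / p("a")
lift = confidence / p("b")

# Kemeny-Oppenheim confirmation: positive values mean A confirms B,
# negative values mean A disconfirms B (direction of the rule).
p_a_given_b = p("a", "b") / p("b")
p_a_given_not_b = (p("a") - p("a", "b")) / (1 - p("b"))
F = (p_a_given_b - p_a_given_not_b) / (p_a_given_b + p_a_given_not_b)

print(f"support={support:.2f} confidence={confidence:.2f} "
      f"lift={lift:.2f} F={F:.2f}")
```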

Method of extracting context from media data by using video sharing site

  • Kondoh, Satoshi;Ogawa, Takeshi
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.709-713
    • /
    • 2009
  • Recently, much research applying data acquired from devices such as cameras and RFIDs to context-aware services has been performed in the fields of life-logs and sensor networks. A variety of analytical techniques have been proposed to recognize information in the raw data, because video and audio data carry a larger volume of information than other sensor data. However, because these techniques generally use supervised learning, manually re-watching a huge amount of media data has been necessary to create supervised data whenever a class is updated or a new class is added. The problem, therefore, was that in most cases applications could use only recognition functions based on fixed supervised data. We propose a method of acquiring supervised data from a video sharing site where users comment on video scenes; such sites are remarkably popular, so many comments are generated. In the first step of this method, words with a high utility value are extracted by filtering the comments about a video. Second, a time series of feature data is calculated by applying feature-extraction functions to the media data. Finally, our learning system calculates the correlation coefficient between these two kinds of data and stores it in the system's database. Applications can then realize a recognition function that generates collective intelligence based on Web comments by applying these correlation coefficients to new media data. In addition, flexible recognition that adjusts to new objects becomes possible by regularly acquiring and learning both media data and comments from a video sharing site, while reducing manual work. As a result, the method enables recognition not only of the name of the object seen but also of indirect information, e.g., the impression it makes or the action taken toward it.
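
A minimal sketch of the final correlation step: it computes the Pearson correlation between a hypothetical per-second media feature and the rate at which a comment word appears, then shows how stored coefficients might score words for a new clip. The signals, word, and feature are placeholders, not the paper's actual extractors.

```python
import numpy as np

# Hypothetical per-second signals for one video: a media feature extracted
# from frames (e.g., motion energy) and the rate of a comment word ("goal").
motion_energy = np.array([0.1, 0.2, 0.9, 0.8, 0.2, 0.1, 0.7, 0.9])
word_rate     = np.array([0.0, 0.1, 1.0, 0.7, 0.1, 0.0, 0.6, 0.8])

# The correlation coefficient linking the two time series is what the
# system stores; high values suggest the word describes the feature.
r = np.corrcoef(motion_energy, word_rate)[0, 1]
print(f"correlation(word='goal', feature='motion energy') = {r:.2f}")

# Applying stored coefficients to new media data: score each candidate
# word by its learned correlation with the new clip's feature response.
stored = {"goal": 0.97, "boring": -0.4}     # hypothetical learned values
new_clip_feature_level = 0.85               # hypothetical feature response
scores = {w: c * new_clip_feature_level for w, c in stored.items()}
print(max(scores, key=scores.get))          # -> "goal"
```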
