• Title/Summary/Keyword: decision algorithm

Fast Quadtree Structure Decision for HEVC Intra Coding Using Histogram Statistics

  • Li, Yuchen;Liu, Yitong;Yang, Hongwen;Yang, Dacheng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.5
    • /
    • pp.1825-1839
    • /
    • 2015
  • The final draft of the latest video coding standard, High Efficiency Video Coding (HEVC), was approved in January 2013. The coding efficiency of HEVC surpasses that of its predecessor, H.264/MPEG-4 Advanced Video Coding (AVC), using only half the bitrate to encode the same sequence at similar quality. However, the complexity of HEVC is sharply higher than that of H.264/AVC. In this paper, a method is proposed to decrease the complexity of intra coding in HEVC. Early pruning and early splitting strategies are applied to the quadtree structures of coding tree units (CTUs) and the residual quadtree (RQT). According to our experiments, when our method is applied to sequences from Class A to Class E, the coding time is decreased by 44% at the cost of a 1.08% Bjontegaard delta rate (BD-rate) increase on average.
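
The abstract does not spell out the paper's histogram criterion or thresholds, so the Python sketch below only illustrates the general idea of a histogram-statistics-based early CU decision: a nearly flat block is pruned (kept whole), a highly textured block is split early, and everything else falls back to the normal rate-distortion check. The function name, the concentration measure, and the threshold values are illustrative assumptions, not the paper's method.

```python
import numpy as np

def cu_split_decision(cu_luma, prune_thresh=0.9, split_thresh=0.4, bins=32):
    """Toy early-pruning / early-splitting decision for one CU.

    cu_luma: 2-D array of luma samples for the coding unit.
    Returns 'prune' (stop splitting), 'split' (split without a full RD check),
    or 'rd_check' (fall back to the normal rate-distortion decision).
    The concentration measure and thresholds are illustrative only.
    """
    hist, _ = np.histogram(cu_luma, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    concentration = p.max()              # share of samples in the dominant bin
    if concentration >= prune_thresh:    # nearly flat CU: keep it whole
        return "prune"
    if concentration <= split_thresh:    # highly textured CU: split early
        return "split"
    return "rd_check"

# Example: a smooth 32x32 block is pruned, a noisy one is split early.
smooth = np.full((32, 32), 128, dtype=np.uint8)
noisy = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
print(cu_split_decision(smooth), cu_split_decision(noisy))
```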

A Novel Feature Selection Method in the Categorization of Imbalanced Textual Data

  • Pouramini, Jafar;Minaei-Bidgoli, Behrouze;Esmaeili, Mahdi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.8
    • /
    • pp.3725-3748
    • /
    • 2018
  • Text data distribution is often imbalanced. Imbalanced data is one of the challenges in text classification, as it degrades classifier performance. Many studies have been conducted in this regard, and the proposed solutions fall into several general categories, including sampling-based and algorithm-based methods. In recent studies, feature selection has also been considered as a solution to the imbalance problem. In this paper, a novel one-sided feature selection method known as probabilistic feature selection (PFS) is presented for imbalanced text classification. PFS is a probabilistic method computed from the feature distribution. Compared to similar methods, PFS has more parameters. To evaluate the proposed method, the feature selection methods Gini, MI, FAST, and DFS were implemented, and the classifiers C4.5 and Naive Bayes were used for assessment. The results of tests on the Reuters-21875 and WebKB datasets, measured by F-measure, suggest that the proposed feature selection significantly improves the performance of the classifiers.
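
The abstract does not reproduce the PFS formula itself, so the sketch below shows only the general shape of a one-sided (minority-class-focused) probabilistic feature score, estimating P(term | positive class) from document frequencies; the function name, the toy corpus, and the scoring rule are assumptions rather than the paper's PFS.

```python
from collections import Counter

def one_sided_feature_scores(docs, labels, positive_label):
    """Score each term by an estimate of P(term | positive class),
    a one-sided criterion in the spirit of the paper's PFS
    (the exact PFS formula is not reproduced here)."""
    pos_docs = [d for d, y in zip(docs, labels) if y == positive_label]
    df = Counter()                       # document frequency within the minority class
    for d in pos_docs:
        df.update(set(d.split()))
    n_pos = max(len(pos_docs), 1)
    return {term: count / n_pos for term, count in df.items()}

docs = ["cheap loan offer", "meeting agenda today", "loan offer expires", "project meeting notes"]
labels = ["spam", "ham", "spam", "ham"]
scores = one_sided_feature_scores(docs, labels, positive_label="spam")
top = sorted(scores, key=scores.get, reverse=True)[:3]
print(top)   # e.g. ['loan', 'offer', ...]
```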

Development of ECG Identification System Using the Fuzzy Processor (퍼지 프로세서를 이용한 심전도 판별 시스템 개발)

  • 장원석;이응혁
    • Journal of Biomedical Engineering Research
    • /
    • v.16 no.4
    • /
    • pp.403-414
    • /
    • 1995
  • It is very difficult to quantify ECG analysis, because the decision criteria differ among cardiac specialists and each ECG measurement system introduces its own detection errors. Therefore, we developed a real-time ECG identification system using a digital fuzzy processor on the STD bus, in order to reduce the ambiguity generated in the ECG identification process and to statistically analyze irregular ECGs with respect to their repetition interval. Variables such as age (in months), QRS width, average RRI, and RRI were used to classify the ECG and were applied to the ECG signal identification system developed for this research. It was found that automatic diagnosis of the ECG signal was possible in real time, which is impractical with conventional algorithmic processing.
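
As a rough illustration of fuzzy decision-making over the variables mentioned above (RR interval and QRS width), the following Python sketch evaluates triangular membership functions and a few hand-written rules; the membership ranges, rule base, and class labels are invented for the example and are not the paper's fuzzy processor configuration.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_beat(rri_ms, qrs_ms):
    """Toy fuzzy decision over RR interval and QRS width.
    Membership ranges and rules are illustrative, not the paper's rule base."""
    rri_normal = tri(rri_ms, 600, 800, 1000)     # roughly 60-100 bpm
    rri_short  = tri(rri_ms, 300, 450, 600)
    qrs_normal = tri(qrs_ms, 60, 90, 120)
    qrs_wide   = tri(qrs_ms, 100, 140, 200)

    rules = {
        "normal":      min(rri_normal, qrs_normal),   # normal rhythm, narrow QRS
        "tachycardia": min(rri_short, qrs_normal),
        "ventricular": min(rri_short, qrs_wide),
    }
    return max(rules, key=rules.get), rules

print(classify_beat(rri_ms=820, qrs_ms=85))    # -> "normal" dominates
print(classify_beat(rri_ms=420, qrs_ms=150))   # -> "ventricular" dominates
```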

QuLa: Queue and Latency-Aware Service Selection and Routing in Service-Centric Networking

  • Smet, Piet;Simoens, Pieter;Dhoedt, Bart
    • Journal of Communications and Networks
    • /
    • v.17 no.3
    • /
    • pp.306-320
    • /
    • 2015
  • Due to the explosive growth in services running in different datacenters, there is a need for service selection and routing that delivers user requests to the best service instance. In current solutions, it is generally the client that must first select a datacenter to forward the request to, before an internal load-balancer of the selected datacenter can select the optimal instance. An optimal selection requires knowledge of both network and server characteristics, making clients less suitable to make this decision. Information-Centric Networking (ICN) research solved a similar selection problem for static data retrieval by integrating content delivery as a native network feature. We address the selection problem for services by extending ICN principles to services. In this paper we present Queue and Latency (QuLa), a network-driven service selection algorithm which maps user demand to service instances, taking into account both network and server metrics. To reduce the size of service router forwarding tables, we present a statistical method that approximates an optimal load distribution while minimizing the required router state. Simulation results show that our statistical routing approach approximates the average system response time of source-based routing with minimal state in forwarding tables.
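
As a minimal illustration of combining network and server metrics in instance selection, the sketch below scores each instance by its network latency plus an M/M/1-style queueing delay and picks the minimum; the cost model, field names, and numbers are assumptions, not QuLa's actual algorithm.

```python
def estimated_response_time(net_latency_ms, arrival_rate, service_rate):
    """Rough response-time estimate: network latency plus an M/M/1-style
    sojourn time 1/(mu - lambda), in milliseconds. Purely illustrative of
    combining network and server metrics; not QuLa's exact model."""
    if arrival_rate >= service_rate:
        return float("inf")                      # saturated instance
    return net_latency_ms + 1000.0 / (service_rate - arrival_rate)

def select_instance(instances):
    """instances: list of dicts with 'name', 'latency_ms', 'lambda', 'mu' (req/s)."""
    return min(instances,
               key=lambda s: estimated_response_time(s["latency_ms"], s["lambda"], s["mu"]))

instances = [
    {"name": "dc-eu", "latency_ms": 15, "lambda": 80, "mu": 100},   # close but loaded
    {"name": "dc-us", "latency_ms": 60, "lambda": 20, "mu": 100},   # far but idle
]
print(select_instance(instances)["name"])   # the lower combined estimate wins
```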

Using Missing Values in the Model Tree to Change Performance for Predict Cholesterol Levels (모델트리의 결측치 처리 방법에 따른 콜레스테롤수치 예측의 성능 변화)

  • Jung, Yong Gyu;Won, Jae Kang;Sihn, Sung Chul
    • Journal of Service Research and Studies
    • /
    • v.2 no.2
    • /
    • pp.35-43
    • /
    • 2012
  • Data mining is of interest in virtually every field, not just a few specific areas, and is applied heavily across many domains. It is used in decision-making processes, in analyzing hidden relations and correlations in data, and in finding actionable information and making predictions. However, some datasets contain many missing values in their variables and have only a small number of records. In this paper, missing values are handled within the model tree algorithm, with cholesterol level as the prediction target. For the performance analysis, experiments are conducted for each missing-value treatment. Through this, an efficient alternative for handling missing data is presented.
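
A small sketch of the kind of comparison described here, assuming scikit-learn is available: missing values are injected into synthetic data, imputed with different strategies, and a regression tree (used as a stand-in for the paper's model tree, which scikit-learn does not provide) is evaluated per treatment. The data, features, and parameters are hypothetical.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 4))                                           # hypothetical clinical features
y = 180 + 25 * X[:, 0] - 10 * X[:, 2] + rng.normal(scale=8, size=n)   # synthetic "cholesterol level"
X[rng.random(X.shape) < 0.15] = np.nan                                # inject ~15% missing values

def score(strategy):
    """Impute with the given strategy, then fit a regression tree as a
    stand-in for the paper's model tree; report cross-validated MAE."""
    Xi = SimpleImputer(strategy=strategy).fit_transform(X)
    tree = DecisionTreeRegressor(max_depth=4, random_state=0)
    return -cross_val_score(tree, Xi, y, cv=5, scoring="neg_mean_absolute_error").mean()

for strategy in ("mean", "median"):
    print(strategy, round(score(strategy), 2))
```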

Fast Eye-Detection Algorithm for Embedded System (임베디드시스템을 위한 고속 눈검출 알고리즘)

  • Lee, Seung-Ik
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.12 no.4
    • /
    • pp.164-168
    • /
    • 2007
  • In this paper, we propose an eye-detection algorithm that can be applied to real-time embedded systems. To detect the eye region, feature vectors are obtained in the first step; then PCA (Principal Component Analysis) and an amplitude projection method are applied to compose the feature vectors. In the decision stage, the estimated probability density functions (PDFs) are used by the proposed Bayesian method to detect the eye region in an image from a CCD camera. Simulation results show that the proposed method has a good detection rate on frontal faces and can be applied to embedded systems because of its low computational complexity.
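
A compact sketch of the decision stage described above, assuming NumPy, scikit-learn, and SciPy: candidate patches are projected with PCA, class-conditional Gaussian PDFs are estimated for eye and non-eye features, and a Bayesian comparison of the posteriors makes the call. The patch size, priors, and synthetic training data are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
# Stand-in training patches (flattened 16x16 grayscale); real data would be
# labeled eye / non-eye regions cropped from CCD frames.
eye_patches = rng.normal(0.3, 0.1, (200, 256))
bg_patches  = rng.normal(0.6, 0.2, (200, 256))

pca = PCA(n_components=8).fit(np.vstack([eye_patches, bg_patches]))
f_eye, f_bg = pca.transform(eye_patches), pca.transform(bg_patches)

# Class-conditional Gaussian PDFs estimated from the PCA features.
pdf_eye = multivariate_normal(f_eye.mean(axis=0), np.cov(f_eye, rowvar=False))
pdf_bg  = multivariate_normal(f_bg.mean(axis=0),  np.cov(f_bg,  rowvar=False))

def is_eye(patch, prior_eye=0.1):
    """Bayesian decision: compare class posteriors for one candidate patch."""
    f = pca.transform(patch.reshape(1, -1))[0]
    return prior_eye * pdf_eye.pdf(f) > (1 - prior_eye) * pdf_bg.pdf(f)

print(is_eye(rng.normal(0.3, 0.1, 256)), is_eye(rng.normal(0.6, 0.2, 256)))
```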

Troubleshooting System for Environmental Problems in a Livestock Building Using an Expert System and a Neural Network (전문가시스템과 신경회로망에 의한 축사환경개선시스템)

  • ;Don D. Jones
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.36 no.1
    • /
    • pp.95-102
    • /
    • 1994
  • Since the parameters influencing the indoor environment of a livestock building interrelate in complicated ways, it is very difficult to identify the exact cause of environmental problems in such a building. Therefore, problem-solving approaches based on experience rather than numerical calculation are helpful for livestock building management. This study attempted to develop a decision support system for diagnosing environmental problems in a livestock building based on an expert system and a neural network. HClips, which attaches a Hangeul (Korean) user interface to CLIPS, a well-known shell for developing expert systems, was used. A four-layer multilayer perceptron with the backpropagation learning algorithm was adopted, and it converged rapidly to within the allowable range at 50,000 learning sweeps. The expert system and neural network seemed to work well for this specific application, providing proper suggestions for several environmental problems; in particular, the neural network trained on an environmental problem and its corresponding answer with a certainty factor produced the same results as the expert system.
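
A brief sketch of the neural-network half of the system, assuming scikit-learn: a multilayer perceptron with two hidden layers (four layers in total) is trained with backpropagation on hypothetical encoded environmental symptoms. The feature encoding, labels, and layer widths are assumptions loosely mirroring the described topology.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
# Hypothetical encoded symptoms: [temperature, humidity, ammonia, airflow], scaled to 0-1.
X = rng.random((500, 4))
# Toy diagnosis labels (0 = no ventilation problem, 1 = ventilation problem).
y = ((X[:, 2] > 0.6) & (X[:, 3] < 0.4)).astype(int)

# Two hidden layers -> four layers in total (input, 2 hidden, output),
# trained with backpropagation, loosely mirroring the paper's topology.
clf = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([[0.5, 0.5, 0.9, 0.2]]))   # expected: class 1 (high ammonia, low airflow)
```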

Utilizing Case-based Reasoning for Consumer Choice Prediction based on the Similarity of Compared Alternative Sets

  • SEO, Sang Yun;KIM, Sang Duck;JO, Seong Chan
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.7 no.2
    • /
    • pp.221-228
    • /
    • 2020
  • This study suggests an alternative to the conventional collaborative filtering method for predicting consumer choice, using case-based reasoning. The case-based reasoning algorithm determines the similarity between the alternative sets that each subject chooses, using the inverse of the normalized Euclidean distance as the similarity measurement. This normalized distance is calculated as the ratio of the difference between attribute levels to the maximum range between the lowest and highest level. The similarity-based case-based reasoning alternative predicts a target subject's choice by applying the utility values of the subjects most similar to the target subject to calculate the utility of the profiles the target subject chooses. This approach assumes that subjects who deliberate over a similar alternative set may have similar preferences for each attribute level in decision making. The results show that the similarity between the comparable alternatives consumers consider buying is a significant factor in predicting consumer choice. The interaction effect also has a positive influence on predictive accuracy. This implies that consumers who considered the same alternatives will likely choose the same product in the end. The suggested alternative requires fewer predictors than conjoint analysis for predicting customer choices.
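
The similarity measure described above can be written down directly; the sketch below normalizes each attribute difference by that attribute's range and inverts the resulting Euclidean distance (using 1/(1 + d) so that identical profiles get similarity 1, a safeguard the abstract does not specify). Attribute names and values are made up for the example.

```python
import math

def similarity(profile_a, profile_b, ranges):
    """Inverse of the range-normalized Euclidean distance between two profiles.
    Each attribute difference is divided by that attribute's (max - min) range."""
    d = math.sqrt(sum(((a - b) / r) ** 2
                      for a, b, r in zip(profile_a, profile_b, ranges)))
    return 1.0 / (1.0 + d)

# Attribute levels: price (KRW 10k), battery life (hours), weight (grams).
ranges = (50, 10, 500)              # max - min observed for each attribute
subject_a = (120, 8, 1400)
subject_b = (130, 7, 1500)
subject_c = (160, 3, 1900)
print(similarity(subject_a, subject_b, ranges))   # similar consideration sets (~0.77)
print(similarity(subject_a, subject_c, ranges))   # dissimilar consideration sets (~0.42)
```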

Design & Evaluation of an Intelligent Model for Extracting the Web User's Preference (웹 사용자의 선호도 추출을 위한 지능모델 설계 및 평가)

  • Kim, Kwang-Nam;Yoon, Hee-Byung;Kim, Hwa-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.4
    • /
    • pp.443-450
    • /
    • 2005
  • In this paper, we propose an intelligent model for extracting the web user's preference and present the results of its evaluation. For this purpose, we analyze the shortcomings of the information retrieval engines currently in use and reflect preference weights in the learner. Because the mechanism does not depend on the frequency of each word but intelligently learns patterns of user behavior, it provides an appropriate set of results for the user's queries. We then propose the concept of preference trend and its considerations, and present an algorithm for extracting preference, with examples. We also design an intelligent model for extracting behavior patterns and propose an HTML index and an intelligent learning process for preference decisions. Finally, we validate the proposed model by comparing the estimated document-ranking results after applying the preference.
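
The abstract stays at a conceptual level, so the following is only a hypothetical sketch of the idea of ranking by learned user-behavior preference rather than raw word frequency: per-category preference weights are derived from browsing history and blended with the engine's base relevance score. All names, weights, and the blending rule are assumptions.

```python
from collections import Counter

def preference_weights(visited_categories):
    """Derive per-category preference weights from observed browsing behavior;
    a hypothetical stand-in for the paper's learned preference model."""
    counts = Counter(visited_categories)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def rerank(results, weights, alpha=0.5):
    """Blend the engine's base relevance with the user's preference weight."""
    def blended(doc):
        return (1 - alpha) * doc["relevance"] + alpha * weights.get(doc["category"], 0.0)
    return sorted(results, key=blended, reverse=True)

history = ["sports", "sports", "finance", "sports", "travel"]
results = [
    {"title": "Stock outlook", "category": "finance", "relevance": 0.80},
    {"title": "League recap",  "category": "sports",  "relevance": 0.70},
]
for doc in rerank(results, preference_weights(history)):
    print(doc["title"])   # the sports item is promoted despite lower base relevance
```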

Accounting Information Processing Model Using Big Data Mining (빅데이터마이닝을 이용한 회계정보처리 모형)

  • Kim, Kyung-Ihl
    • Journal of Convergence for Information Technology
    • /
    • v.10 no.7
    • /
    • pp.14-19
    • /
    • 2020
  • This study suggests an accounting information processing model based on the internet standard XBRL (eXtensible Business Reporting Language), an XML-based technology. Because document characteristics differ among companies, it is very important for the purpose of accounting that the system provide useful information to the decision maker. This study develops a data mining model based on the XML hierarchy, which is stored as XBRL in the X-Hive database. The data mining analysis is evaluated experimentally using association rules, and, based on XBRL, the DC-Apriori data mining method is suggested, combining the Apriori algorithm with XQuery. Finally, the validity and effectiveness of the suggested model are investigated through experiments.
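
As a small illustration of the association-rule side of the model, the sketch below runs a plain Apriori-style frequent-itemset search over transactions of hypothetical XBRL element tags; it is not the paper's DC-Apriori method and does not touch X-Hive or XQuery.

```python
def frequent_itemsets(transactions, min_support=0.5):
    """Plain Apriori-style frequent-itemset mining over sets of items
    (here: hypothetical XBRL element tags appearing together in filings)."""
    n = len(transactions)
    support = lambda items: sum(items <= t for t in transactions) / n
    singletons = {frozenset([i]) for t in transactions for i in t}
    frequent, current = {}, {i for i in singletons if support(i) >= min_support}
    while current:
        frequent.update({i: support(i) for i in current})
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets.
        current = {a | b for a in current for b in current
                   if len(a | b) == len(a) + 1 and support(a | b) >= min_support}
    return frequent

transactions = [
    {"Assets", "Liabilities", "Equity"},
    {"Assets", "Equity", "Revenue"},
    {"Assets", "Liabilities", "Equity", "Revenue"},
    {"Assets", "Revenue"},
]
for itemset, sup in sorted(frequent_itemsets(transactions, 0.75).items(),
                           key=lambda kv: -kv[1]):
    print(set(itemset), sup)
```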