• Title/Summary/Keyword: Problem Classifying Rule

Variable Ordering Algorithms Using Problem Classifying (문제분류규칙을 이용한 변수 순서화 알고리즘)

  • Sohn, Surg-Won
    • Journal of the Korea Society of Computer and Information, v.16 no.4, pp.127-135, 2011
  • Efficient ordering of decision variables is one way to find solutions quickly in depth-first search with backtracking, so developing variable ordering algorithms that account for the dynamic and static properties of a problem is very important. It is difficult, however, to identify the optimal variable ordering algorithm for a given problem. In this paper, we propose a problem classifying rule that assigns a problem type based on the variables' properties, and we use this rule to predict the optimal type of variable ordering algorithm. We choose the frequency allocation problem as a DS-type problem whose decision variables have both dynamic and static properties, and we estimate its optimal variable ordering algorithm. We also show the usefulness of the problem classifying rule by applying it to the base station problem, a special case whose type is not produced by the presented rule.
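For orientation, the minimal sketch below shows how a variable ordering heuristic plugs into a depth-first backtracking search: a static ordering fixed by conflict degree versus a dynamic minimum-remaining-values ordering recomputed at every node. The toy frequency-allocation instance and all identifiers are illustrative assumptions, not taken from the paper.

```python
# Minimal backtracking solver illustrating static vs. dynamic variable ordering.
# The toy frequency-allocation instance below is invented for illustration.

def backtrack(assignment, variables, domains, conflicts, order):
    if len(assignment) == len(variables):
        return assignment
    var = order(assignment, variables, domains)          # choose next variable
    for value in domains[var]:
        if all(assignment.get(n) != value for n in conflicts[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, conflicts, order)
            if result is not None:
                return result
            del assignment[var]
    return None

def static_order(assignment, variables, domains):
    # Static: order by decreasing conflict degree, fixed before the search.
    unassigned = [v for v in variables if v not in assignment]
    return max(unassigned, key=lambda v: len(CONFLICTS[v]))

def dynamic_mrv(assignment, variables, domains):
    # Dynamic: minimum-remaining-values, re-evaluated at every search node.
    unassigned = [v for v in variables if v not in assignment]
    def remaining(v):
        return sum(all(assignment.get(n) != val for n in CONFLICTS[v])
                   for val in domains[v])
    return min(unassigned, key=remaining)

# Toy instance: adjacent cells may not share a channel.
VARIABLES = ["cell_A", "cell_B", "cell_C", "cell_D"]
DOMAINS = {v: [1, 2, 3] for v in VARIABLES}
CONFLICTS = {"cell_A": ["cell_B", "cell_C"], "cell_B": ["cell_A", "cell_C"],
             "cell_C": ["cell_A", "cell_B", "cell_D"], "cell_D": ["cell_C"]}

print(backtrack({}, VARIABLES, DOMAINS, CONFLICTS, dynamic_mrv))
print(backtrack({}, VARIABLES, DOMAINS, CONFLICTS, static_order))
```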

A Backtracking Search Framework for Constraint Satisfaction Optimization Problems (제약만족 최적화 문제를 위한 백트래킹 탐색의 구조화)

  • Sohn, Surg-Won
    • The KIPS Transactions:PartA, v.18A no.3, pp.115-122, 2011
  • It is very hard to obtain a general algorithm that solves all constraint satisfaction optimization problems. However, if the whole problem is separated into subproblems according to the characteristics of its decision variables, we can assume that finding algorithms for these subproblems is easier. Under this assumption, we propose a problem classifying rule that subdivides the whole problem, and we develop backtracking algorithms fitted to the resulting subproblems. One way to find a solution quickly is to order the nodes of the search tree efficiently. We choose the cluster head positioning problem in wireless sensor networks, in which static characteristics are dominant, and the interference minimization problem for RFID readers, which has a hybrid mixture of static and dynamic characteristics. For these problems, we develop optimal variable ordering algorithms and compare them with conventional methods. By classifying the problem into subproblems, we realize a backtracking framework for systematic search, and we show that the developed backtracking algorithms perform well.
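A rough sketch of the classify-then-dispatch idea described above: the problem is typed by whether its decision variables are static, dynamic, or both, and a matching ordering strategy is selected. The type labels and strategy descriptions are assumptions for illustration, not the paper's exact rule.

```python
# Illustrative classify-then-dispatch framework: type the problem by its
# variables' characteristics, then pick an ordering strategy for backtracking.
# Labels (S, D, DS) and the toy instances are assumptions, not the paper's rule.

def classify_problem(variables):
    """Return 'S', 'D', or 'DS' depending on the variables' properties."""
    has_static = any(v["static_weight"] > 0 for v in variables)
    has_dynamic = any(v["domain_shrinks_during_search"] for v in variables)
    if has_static and has_dynamic:
        return "DS"
    return "D" if has_dynamic else "S"

ORDERING_STRATEGIES = {
    "S": "fixed order by decreasing static weight (decided once, before search)",
    "D": "minimum-remaining-values, re-evaluated at every search node",
    "DS": "static weight as primary key, remaining values as tie-breaker",
}

# Toy stand-ins for the two problems mentioned in the abstract.
cluster_head = [{"static_weight": 3, "domain_shrinks_during_search": False}]
rfid_readers = [{"static_weight": 2, "domain_shrinks_during_search": True}]

for name, problem in [("cluster head positioning", cluster_head),
                      ("RFID interference", rfid_readers)]:
    ptype = classify_problem(problem)
    print(f"{name}: type {ptype} -> {ORDERING_STRATEGIES[ptype]}")
```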

Design of HCBKA-Based IT2TSK Fuzzy Prediction System (HCBKA 기반 IT2TSK 퍼지 예측시스템 설계)

  • Bang, Young-Keun;Lee, Chul-Heui
    • The Transactions of The Korean Institute of Electrical Engineers, v.60 no.7, pp.1396-1403, 2011
  • It is not easy to analyze strongly nonlinear time series and to design an effective prediction system, mainly because of the difficulty of handling the potential uncertainty contained in the data and in the prediction method. To address this problem, a new design method for fuzzy prediction systems is suggested in this paper. The proposed method has the following major parts: first-order difference detection to extract stable information from the nonlinear characteristics of the time series, fuzzy rule generation based on a hierarchically classifying clustering technique to reduce inaccuracy in system parameter identification, and an IT2TSK fuzzy logic system to reasonably handle the potential uncertainty of the series. In addition, multiple predictors are designed to sufficiently reflect the diverse characteristics concealed in the series. Finally, computer simulations are performed to verify the performance and effectiveness of the proposed prediction system.
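The following sketch walks through a heavily simplified, type-1 version of the pipeline described above: difference the series, place fuzzy rule centres (a crude stand-in for the clustering step), and predict with firing-strength-weighted linear consequents. The interval type-2 machinery and the HCBKA clustering themselves are deliberately not reproduced.

```python
import numpy as np

# Simplified type-1 TSK sketch: (1) first-order differencing, (2) quantile-based
# rule centres standing in for clustering, (3) weighted linear consequents fitted
# by least squares. The synthetic series below is a placeholder.

rng = np.random.default_rng(0)
series = np.sin(0.3 * np.arange(200)) + 0.05 * rng.standard_normal(200)
diff = np.diff(series)                        # first-order difference

x, y = diff[:-1], diff[1:]                    # predict next difference from current

centres = np.quantile(x, [0.2, 0.5, 0.8])     # crude stand-in for clustering
sigma = np.std(x) + 1e-8

def norm_firing(xi):
    w = np.exp(-0.5 * ((xi - centres) / sigma) ** 2)   # Gaussian memberships
    return w / w.sum()

def design_row(xi):
    wf = norm_firing(xi)
    return np.concatenate([wf * xi, wf])      # [w_i * x ..., w_i ...]

Phi = np.array([design_row(xi) for xi in x])
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # consequent parameters a_i, b_i

preds = Phi @ theta
print("RMSE on training differences:", float(np.sqrt(np.mean((preds - y) ** 2))))
```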

Hybrid Behavior Evolution Model Using Rule and Link Descriptors (규칙 구성자와 연결 구성자를 이용한 혼합형 행동 진화 모델)

  • Park, Sa Joon
    • Journal of Intelligence and Information Systems, v.12 no.3, pp.67-82, 2006
  • We propose HBEM (Hybrid Behavior Evolution Model), which combines rule classification and an evolutionary neural network using rule descriptors and link descriptors for the evolutionary behavior of virtual robots. In our model, behavior knowledge is represented at two levels. At the upper level, the representation is improved by using rule and link descriptors together; at the lower level, behavior knowledge is represented as bit strings whose chromosomes are adapted by genetic operators. A virtual robot is composed from the learned chromosome with the best fitness. The composed virtual robot perceives its surroundings, classifies the pattern through rules, processes the result in the neural network, and behaves accordingly. To evaluate the proposed model, we developed HBES (Hybrid Behavior Evolution System) and applied it to a food-gathering problem for virtual robots. In our tests, the learning time was shorter than that of an evolutionary neural network under the same conditions. To evaluate how much the rules improve fitness, we measured the fitness of the learned chromosomes with and without the rules applied; when the rules were not applied, the fitness was lower. These results show that the proposed model learns faster and behaves more consistently than an evolutionary neural network in the behavior evolution of virtual robots.
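As a point of reference, the toy genetic-algorithm loop below illustrates the lower level of such a model: a bit-string chromosome adapted by selection, crossover, and mutation toward higher fitness. The fitness function and chromosome layout are placeholders, not the paper's food-gathering simulation.

```python
import random

# Toy GA over bit-string chromosomes: truncation selection, one-point crossover,
# bit-flip mutation. The fitness function is a placeholder objective.

random.seed(1)
CHROM_LEN, POP_SIZE, GENERATIONS = 32, 20, 50

def fitness(chrom):
    return sum(chrom)                    # placeholder: reward set bits

def crossover(a, b):
    cut = random.randrange(1, CHROM_LEN)
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.02):
    return [bit ^ (random.random() < rate) for bit in chrom]

population = [[random.randint(0, 1) for _ in range(CHROM_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]        # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```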

The Content Analysis of the Elementary Science Textbooks in the 6th National Curriculum (제 6차 교육과정에 의한 초등학교 자연 교과서의 내용 분석)

  • 최영란;이형철
    • Journal of Korean Elementary Science Education, v.17 no.2, pp.55-65, 1998
  • This study was intended to suggest a desirable direction for the 7th national curriculum revision through an analysis of the elementary science textbooks of the 6th national curriculum. The analysis framework consisted of three categories: (1) knowledge, (2) inquiry process, and (3) attitude. Knowledge was divided into fact, concept, and rule, and inquiry process was divided into thirteen subcategories: manipulating experimental apparatus, observing, measuring, recording data, classifying, interpreting/predicting, determining relationships/causal explanation, extrapolating/interpolating, drawing conclusions/formulating a generalization or model, evaluating, formulating a problem, generating a hypothesis, and designing an experiment/controlling variables. Each sentence in the textbooks was taken as the unit of analysis, and the frequency and percentage of each category were counted and the ratios calculated. The findings can be summarized as follows: 1. The content of the elementary science textbooks consisted of knowledge 10.3%, inquiry process 88.8%, and attitude 0.8%. 2. As the grade level increased, the ratio of knowledge increased while that of attitude decreased. 3. In all grades, observing had the highest ratio among the inquiry processes. 4. In the domains of physics and chemistry, manipulating experimental apparatus appeared frequently, while in the domains of biology and earth science observing was emphasized.
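The frequency/percentage tabulation used in this kind of content analysis amounts to counting coded sentences per category, roughly as in the sketch below; the coded sentences shown are invented placeholders, not data from the study.

```python
from collections import Counter

# Count coded sentences per category and report frequency and percentage.
# The coded sentences below are invented placeholders.

coded_sentences = ["inquiry", "inquiry", "knowledge", "inquiry", "attitude",
                   "knowledge", "inquiry", "inquiry", "inquiry", "inquiry"]

counts = Counter(coded_sentences)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category:10s} {n:3d}  {100 * n / total:5.1f}%")
```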

A new classification method using penalized partial least squares (벌점 부분최소자승법을 이용한 분류방법)

  • Kim, Yun-Dae;Jun, Chi-Hyuck;Lee, Hye-Seon
    • Journal of the Korean Data and Information Science Society, v.22 no.5, pp.931-940, 2011
  • Classification generates a rule for assigning objects to one of several categories based on a learning sample; a good classification model should classify new objects with low misclassification error. Many classification methods have been developed, including logistic regression, discriminant analysis, and decision trees. This paper presents a new classification method using penalized partial least squares, which makes the model more robust and remedies the multicollinearity problem. We compare the proposed method with logistic regression and PCA-based discriminant analysis on several real and artificial data sets, and we conclude that the new method has better power than the other methods.
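For comparison, a plain (non-penalized) PLS-DA baseline can be sketched with scikit-learn as below: classes are dummy-coded, a PLS regression is fitted, and the largest fitted response wins. The penalization that the paper adds on top of PLS is not reproduced here, and the data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Plain PLS-DA baseline: dummy-code the classes, fit PLS regression, and
# classify by the largest fitted response. Synthetic data for illustration.

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)          # two synthetic classes

Y = np.column_stack([y == 0, y == 1]).astype(float)     # dummy-coded responses
pls = PLSRegression(n_components=3).fit(X, Y)

pred = pls.predict(X).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```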

Type Classification of Korean Characters Considering Relative Type Size (유형의 상대적 크기를 고려한 한글문자의 유형 분류)

  • Kim, Pyeoung-Kee
    • Journal of the Korea Society of Computer and Information, v.11 no.6 s.44, pp.99-106, 2006
  • Type classification is a necessary step in recognizing a language with a huge character set such as Korean. Since most previous research is based on the composition rule of Korean characters, it has been difficult to correctly classify composite-vowel characters, and the problem space was not divided evenly because the last consonant, which is relatively larger than the other graphemes, was not further classified. In this paper, I propose a new type classification method in which the horizontal vowel is extracted before the vertical vowel and the last consonants are further classified into one of five small groups based on the horizontal projection profile. The new method uses 19 character types, which is more stable than the previous 6 or 15 types. Through experiments on a set of 1,000 frequently used characters and 30,614 characters scanned from several magazines, I show that the proposed method is more useful for classifying the huge set of Korean characters.
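A horizontal projection profile of the kind used above to subdivide last-consonant shapes can be computed as in the sketch below; the binary glyph image is a made-up placeholder rather than real Hangul data.

```python
import numpy as np

# Horizontal projection profile: ink pixels per row of a binary character image.
# The glyph below is an invented placeholder, not a real Hangul character.

glyph = np.zeros((16, 16), dtype=np.uint8)
glyph[2:6, 3:13] = 1          # an upper stroke
glyph[10:14, 3:13] = 1        # a lower stroke (e.g., a last-consonant region)

profile = glyph.sum(axis=1)   # the horizontal projection profile

# A simple rule could compare the ink mass of the lower third of the profile
# against the total to gauge how prominent the last consonant is.
lower_mass = profile[len(profile) * 2 // 3:].sum()
print("row profile:", profile.tolist())
print("lower-third ink ratio:", lower_mass / profile.sum())
```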

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.24 no.3, pp.21-44, 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. This enormous amount of easily obtained information, however, lacks organization, a problem that has drawn the interest of many researchers and created a need for methods capable of identifying relevant information; text classification addresses this need. Text classification is a challenging task in modern data analysis in which a text document must be assigned to one or more predefined categories or classes. Many techniques are available, including K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. When dealing with huge amounts of text data, however, model performance and accuracy become a challenge: depending on the vocabulary of the corpus and the features created for classification, the performance of a text classification model can vary. Most previous attempts propose a new algorithm or modify an existing one, a line of research that can be said to have reached its limits. In this study, rather than proposing or modifying an algorithm, we focus on modifying how the data are used. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built, and real-world datasets usually contain noise that can affect the decisions made by classifiers built from them. We consider that data from different domains, i.e., heterogeneous data, may carry noise-like characteristics that can be exploited in the classification process. Machine learning algorithms build a classifier under the assumption that the characteristics of the training data and the target data are the same or very similar. For unstructured data such as text, however, the features are determined by the vocabulary of the documents, so if the viewpoints of the training data and the target data differ, their features may differ as well. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data from various sources are likely to be formatted differently, traditional machine learning algorithms have difficulty handling them, as they are not designed to recognize different types of data representation at once and combine them in the same generalization. To utilize heterogeneous data in training the document classifier, we apply semi-supervised learning. However, because unlabeled data may degrade the performance of the document classifier, we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) that selects only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data, and the most confident classification rules are selected and applied for the final decision. In this paper, three types of real-world data sources were used: news, Twitter, and blogs.
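As a generic illustration of the semi-supervised setting (not of RSESLA itself), the sketch below combines labelled documents from one source with unlabelled documents from others and lets scikit-learn's self-training wrapper pseudo-label the confident ones; all documents and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.semi_supervised import SelfTrainingClassifier

# Semi-supervised text classification: unlabelled documents are marked -1 and
# pseudo-labelled by self-training when the base classifier is confident.
# The toy documents and labels below are invented placeholders.

docs = ["stock market rallies on earnings", "team wins the championship final",
        "central bank raises interest rates", "star striker scores twice",
        "quarterly profits beat forecasts",   # unlabelled, e.g. from a blog
        "coach praises the young squad"]      # unlabelled, e.g. from Twitter
labels = [0, 1, 0, 1, -1, -1]                 # -1 marks unlabelled documents

model = make_pipeline(
    TfidfVectorizer(),
    SelfTrainingClassifier(MultinomialNB(), threshold=0.6),
)
model.fit(docs, labels)
print(model.predict(["bank profits rise", "the squad trains hard"]))
```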