• Title/Summary/Keyword: fuzzy-Sets

Improved Algorithm of Hybrid c-Means Clustering for Supervised Classification of Remote Sensing Images (원격탐사 영상의 감독분류를 위한 개선된 하이브리드 c-Means 군집화 알고리즘)

  • Jeon, Young-Joon;Kim, Jin-Il
    • Journal of the Institute of Convergence Signal Processing / v.8 no.3 / pp.185-191 / 2007
  • Remote sensing images are multispectral image data collected over several bands divided by wavelength range. The classification of remote sensing images, a key algorithm in this field, groups together the pixels of an image that have similar spectral characteristics. This paper presents a pattern classification method for remote sensing images that applies the possibilistic fuzzy c-means (PFCM) algorithm. The PFCM algorithm is a hybrid of the FCM algorithm, which assigns membership degrees according to the distance between the data and the center of a given cluster, and the PCM algorithm, which considers the class typicality of the pattern sets. In the proposed method, we select training data for each class and perform supervised classification using the PFCM algorithm with the spectral signatures of the training data. The application of the PFCM algorithm is tested and verified on Landsat TM and IKONOS remote sensing satellite images. As a result, the overall accuracy is better than that of the FCM and PCM algorithms and the conventional maximum likelihood classification (MLC) algorithm.
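
As an illustration of the hybrid weighting described in this abstract, the following is a minimal NumPy sketch of a PFCM-style update loop. It follows the commonly cited possibilistic fuzzy c-means formulation (FCM memberships combined with PCM typicalities and a per-cluster scale parameter); the parameter values and update schedule are assumptions for illustration, not necessarily the exact variant evaluated in the paper.

```python
import numpy as np

def pfcm(X, c, m=2.0, eta=2.0, a=1.0, b=1.0, n_iter=100, seed=0):
    """X: (n_samples, n_features) spectral vectors; c: number of clusters/classes."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), size=c, replace=False)]                   # initial centers
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12   # (n, c) squared distances
        # FCM membership: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = 1.0 / ((d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1.0))).sum(axis=2)
        # Per-cluster scale parameter, then PCM-style typicality
        gamma = (U ** m * d2).sum(axis=0) / (U ** m).sum(axis=0)
        T = 1.0 / (1.0 + (b * d2 / gamma) ** (1.0 / (eta - 1.0)))
        # Centers weighted by the PFCM hybrid term a*u^m + b*t^eta
        W = a * U ** m + b * T ** eta
        V = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, T, V

# Usage: cluster pixel spectra into c classes, then label each pixel by the
# cluster with the largest membership, e.g. labels = pfcm(pixels, c=5)[0].argmax(axis=1)
```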

Gene filtering based on fuzzy pattern matching for whole genome micro array data analysis (마이크로어레이 데이터의 게놈수준 분석을 위한 퍼지 패턴 매칭에 의한 유전자 필터링)

  • Lee, Sun-A;Lee, Keon-Myung;Lee, Seung-Joo;Kim, Wun-Jea;Kim, Yong-June;Bae, Suk-Cheol
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.4 / pp.471-475 / 2008
  • Microarray technology in biological science enables molecular-level observation and analysis of biological phenomena by allowing the RNA expression profiles in cells to be measured. Microarray data analysis is applied for various purposes, such as identifying significant genes that react to drug treatment and understanding genome-scale phenomena. In drug response experiments, microarray-based gene expression analysis can provide meaningful information. It is sometimes necessary to identify genes that show different expression behavior between the treatment group and the normal group. When the normal group shows a medium expression level, it is not easy to discriminate the groups by comparing expression levels alone. This paper proposes a method that selects group-wise representative values for each gene and sets the value range of each group in order to filter out genes with a specific pattern, and it presents experimental results.
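
To make the representative-value/value-range idea concrete, here is a small hedged sketch: the median serves as the group-wise representative value, a fixed half-width builds the value range, and a gene is kept only when the treatment and normal ranges are disjoint. The representative statistic and the half-width are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def filter_genes(expr_treat, expr_normal, half_width=0.5):
    """expr_*: arrays of shape (n_genes, n_samples); returns indices of kept genes."""
    rep_t = np.median(expr_treat, axis=1)                 # group-wise representative values
    rep_n = np.median(expr_normal, axis=1)
    lo_t, hi_t = rep_t - half_width, rep_t + half_width   # value range per group
    lo_n, hi_n = rep_n - half_width, rep_n + half_width
    # keep a gene only if the two group ranges do not overlap
    disjoint = (hi_t < lo_n) | (hi_n < lo_t)
    return np.where(disjoint)[0]
```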

Structural Analysis of Scientific Information Usage (해사관계 연구자의 문헌정보 이용에 관한 구조분석)

  • 이철영
    • Journal of the Korean Institute of Navigation / v.4 no.2 / pp.7-38 / 1980
  • Nowadays, researchers attach great importance to problems concerning scientific information in the fields of science and engineering. There are several reasons for this: ⅰ) the amount of scientific information increases in proportion to the activities of scientists and engineers, so it is difficult to pick out truly valuable information; ⅱ) it becomes more important to use a wide variety of information as the branches of science proliferate; ⅲ) since the medium of scientific information is mostly technical papers, it is very difficult to process these papers mechanically and to keep all documents and scientific information for a long time. To cope with these difficulties, many technical means have been developed, for example databases, automatic information retrieval, and microfilm. However, there has been comparatively little investigation of how researchers, who are both users and producers, think about the systematization of scientific information usage, so this paper investigates the thinking and information needs of researchers and proposes the basis of a method for systematizing scientific information usage. The author inspects the actual conditions of scientific information, reconsiders the methods that have been used, and investigates how researchers whose interests are closely related to the study of marine affairs think about problems of scientific information usage, using a questionnaire based on the Fuzzy-DEMATEL method. In addition, FSM, a method for structuring a hierarchy for several complex problems on the basis of fuzzy set theory, is adopted as a tool of analysis. From the results of the analysis we can understand the key problems and outline how to systematize scientific information usage, and those results are directly applicable to constructing a new system for scientific information usage.

Clustering Method for Reduction of Cluster Center Distortion (클러스터 중심 왜곡 저감을 위한 클러스터링 기법)

  • Jeong, Hye-C.;Seo, Suk-T.;Lee, In-K.;Kwon, Soon-H.
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.3 / pp.354-359 / 2008
  • Clustering is a method of classifying a given data set into several classes so that data with similar properties are grouped together. Many clustering methods, such as K-Means, Fuzzy C-Means (FCM), and the Mountain Method (MM), have been proposed and used. However, the clustering results of conventional methods are sensitive to the initial values given for clustering. In particular, FCM is very sensitive to noisy data, and a cluster-center distortion phenomenon occurs because the method clusters by minimizing the within-cluster variance. In this paper, we propose a clustering method that reduces cluster center distortion by merging the nearest data based on data weights and that is not influenced by initial values. We show the effectiveness of the proposed method through experimental results on various types of data sets and a comparison of its cluster centers with those of FCM.
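
One simple way to realize "merging the nearest data based on data weights" is sketched below: repeatedly merge the closest pair of points into their weighted mean until only the desired number of candidate centers remains. The stopping rule and weighting scheme are assumptions for illustration rather than the paper's exact procedure.

```python
import numpy as np

def merge_to_centers(X, n_clusters):
    pts = X.astype(float).copy()
    w = np.ones(len(pts))                                  # each raw datum starts with weight 1
    while len(pts) > n_clusters:
        d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)
        i, j = np.unravel_index(np.argmin(d2), d2.shape)   # closest remaining pair
        merged = (w[i] * pts[i] + w[j] * pts[j]) / (w[i] + w[j])  # weight-based merge
        keep = [k for k in range(len(pts)) if k not in (i, j)]
        pts = np.vstack([pts[keep], merged])
        w = np.append(w[keep], w[i] + w[j])
    return pts, w      # candidate cluster centers and their accumulated weights
```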

Distributed Construction of the Multiple-Ring Topology of the Connected Dominating Set for the Mobile Ad Hoc Networks: Boltzmann Machine Approach (무선 애드혹 망을 위한 연결 지배 집합 다중-링 위상의 분산적 구성-볼츠만 기계적 접근)

  • Park, Jae-Hyun
    • Journal of KIISE: Information Networking / v.34 no.3 / pp.226-238 / 2007
  • In this paper, we present a novel, fully distributed topology control protocol that can construct a multiple-ring topology of the Minimal Connected Dominating Set (MCDS) as the transport backbone for mobile ad hoc networks. It builds the topology from a minimal subset of nodes chosen from all the nodes, and the constructed topology comprises the minimal physical links while preserving connectivity, which reduces interference. All nodes work as the nodes of a distributed parallel Boltzmann machine whose objective function consists of two Boltzmann factors: the link degree and the connection domination degree. To define these Boltzmann factors, we extend the connected dominating set into a fuzzy set and also define the fuzzy set of nodes from which the multiple-ring topology can be constructed. To construct the transport backbone of the mobile ad hoc network, the proposed protocol chooses the nodes that are strong members of these two fuzzy sets as clusterheads. We also ran simulations to provide a quantitative comparison with related works in terms of the packet loss rate and the energy consumption rate. The results show that the network constructed by the proposed protocol performs far better than the others with respect to both metrics.
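
As a rough, centralized illustration of energy-driven node selection with Boltzmann acceptance, the sketch below searches for a small dominating set of a graph using an energy that penalizes uncovered nodes and backbone size. The weights, the cooling schedule, and the absence of explicit connectivity and ring constraints are simplifying assumptions; this is not the distributed protocol proposed in the paper.

```python
import numpy as np

def boltzmann_dominating_set(adj, alpha=4.0, beta=1.0, T0=2.0, n_sweeps=200, seed=0):
    """adj: (n, n) symmetric 0/1 NumPy adjacency matrix; returns a boolean selection vector."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    s = rng.random(n) < 0.5                    # initial random backbone membership

    def energy(state):
        dominated = state | (adj @ state > 0)  # node is selected or has a selected neighbour
        return alpha * np.sum(~dominated) + beta * np.sum(state)

    T = T0
    for _ in range(n_sweeps):
        for i in rng.permutation(n):
            flipped = s.copy()
            flipped[i] = ~flipped[i]
            dE = energy(flipped) - energy(s)
            # Boltzmann acceptance: downhill moves always, uphill with prob exp(-dE/T)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s = flipped
        T *= 0.98                              # slow cooling
    return s
```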

Support Vector Machine for Interval Regression

  • Hong Dug Hun;Hwang Changha
    • Proceedings of the Korean Statistical Society Conference / 2004.11a / pp.67-72 / 2004
  • Support vector machines (SVM) have been very successful in pattern recognition and function estimation problems for crisp data. This paper proposes a new method for evaluating interval linear and nonlinear regression models by combining the possibility and necessity estimation formulation with the principle of SVM. For data sets with crisp inputs and interval outputs, the possibility and necessity models have recently been utilized; they are based on a quadratic programming approach that gives more diverse spread coefficients than a linear programming one. SVM also uses a quadratic programming approach, whose additional advantage in interval regression analysis is the ability to integrate both the central-tendency property of least squares and the possibilistic property of fuzzy regression, and this is not a computationally expensive approach. SVM allows us to perform interval nonlinear regression analysis by constructing an interval linear regression function in a high-dimensional feature space; in particular, SVM is a very attractive approach for modeling nonlinear interval data. The proposed algorithm is a model-free method in the sense that we do not have to assume an underlying model function for the interval nonlinear regression model with crisp inputs and interval output. Experimental results are presented that indicate the performance of this algorithm.
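
The following is a simplified stand-in rather than the paper's possibility/necessity quadratic-programming formulation: for crisp inputs and interval outputs, it fits one kernel SVR to the lower bounds and another to the upper bounds, illustrating how support vector machinery can yield a nonlinear interval-valued regression function.

```python
import numpy as np
from sklearn.svm import SVR

def fit_interval_svr(X, y_lo, y_hi, C=10.0, epsilon=0.1):
    """X: (n, d) crisp inputs; y_lo, y_hi: lower/upper bounds of the interval outputs."""
    lo = SVR(kernel="rbf", C=C, epsilon=epsilon).fit(X, y_lo)
    hi = SVR(kernel="rbf", C=C, epsilon=epsilon).fit(X, y_hi)

    def predict(X_new):
        a, b = lo.predict(X_new), hi.predict(X_new)
        return np.minimum(a, b), np.maximum(a, b)   # guard against crossing bounds
    return predict

# Usage: predict_interval = fit_interval_svr(X, y_lower, y_upper)
#        lower, upper = predict_interval(X_test)
```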

Design of Hierarchically Structured Clustering Algorithm and its Application (계층 구조 클러스터링 알고리즘 설계 및 그 응용)

  • Bang, Young-Keun;Park, Ha-Yong;Lee, Chul-Heui
    • Journal of Industrial Technology / v.29 no.B / pp.17-23 / 2009
  • In many cases, clustering algorithms have been used for extracting and discovering useful information from non-linear data, and they have had a great effect on the performance of systems dealing with such data. This paper presents a new approach called the hierarchically structured clustering algorithm and applies it to a prediction system for non-linear time series data. The proposed hierarchically structured clustering algorithm (HCKA: Hierarchical Cross-correlation and K-means clustering Algorithm), in which the cross-correlation and k-means clustering algorithms are combined, can take into account the correlation structure of non-linear time series as well as their statistical characteristics. First, the optimal differences of the data are generated, which suitably reveal the characteristics of the non-linear time series. Second, the generated differences are classified into upper clusters for their predictors by the cross-correlation clustering algorithm, and then each set of classified differences is classified again into lower fuzzy sets by the k-means clustering algorithm. As a result, the proposed method gives an efficient classification and improves performance. Finally, we demonstrate the effectiveness of the proposed HCKA on typical time series examples.
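
A minimal sketch of the two-level idea is given below: difference the series, group them at the upper level by correlation of their difference sequences, and partition each group's difference values at the lower level with k-means to obtain candidate centers for the lower fuzzy sets. Average-linkage clustering on correlation distance and plain k-means are illustrative stand-ins for the paper's cross-correlation/k-means combination.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

def hcka_sketch(series, n_upper=3, n_lower=4):
    """series: (n_series, length) array of time series."""
    diffs = np.diff(series, axis=1)                        # first differences of the data
    # Upper level: group series whose difference sequences are correlated
    Z = linkage(diffs, method="average", metric="correlation")
    upper = fcluster(Z, t=n_upper, criterion="maxclust")
    lower_centers = {}
    for g in np.unique(upper):
        values = diffs[upper == g].reshape(-1, 1)          # pooled difference values of the group
        km = KMeans(n_clusters=min(n_lower, len(values)), n_init=10).fit(values)
        lower_centers[g] = np.sort(km.cluster_centers_.ravel())  # candidate lower fuzzy-set centers
    return upper, lower_centers
```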

Design of Fingerprints Identification Based on RBFNN Using Image Processing Techniques (영상처리 기법을 통한 RBFNN 패턴 분류기 기반 개선된 지문인식 시스템 설계)

  • Bae, Jong-Soo;Oh, Sung-Kwun;Kim, Hyun-Ki
    • The Transactions of The Korean Institute of Electrical Engineers / v.65 no.6 / pp.1060-1069 / 2016
  • In this paper, we introduce a fingerprint recognition system based on a Radial Basis Function Neural Network (RBFNN). Fingerprints are classified into four types (Whole, Arch, Right roof, Left roof). Preprocessing methods such as the fast Fourier transform, normalization, calculation of ridge direction, Gabor filtering, binarization, and a rotation algorithm are used to extract features from fingerprint images, and those features are then used as the inputs of the network. The RBFNN uses Fuzzy C-Means (FCM) clustering in the hidden layer, and polynomial functions such as linear, quadratic, and modified quadratic are defined as the connection weights of the network. The Particle Swarm Optimization (PSO) algorithm optimizes a number of essential parameters needed to improve the accuracy of the RBFNN, including the number of clusters, the fuzzification coefficient used in the FCM algorithm, and the polynomial orders of the network. The performance of the proposed fingerprint recognition system is evaluated on fingerprint data sets collected with the Anguli program.
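
A compact sketch of an RBF-network classifier whose hidden units come from a clustering step is shown below. K-means centers, Gaussian basis functions, and least-squares linear output weights are simplified stand-ins for the paper's FCM-based hidden layer, polynomial connection weights, and PSO-tuned parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

class SimpleRBFN:
    def __init__(self, n_centers=8, width=1.0):
        self.n_centers, self.width = n_centers, width

    def fit(self, X, y):
        """X: (n, d) feature vectors; y: integer class labels 0..K-1."""
        self.centers_ = KMeans(n_clusters=self.n_centers, n_init=10).fit(X).cluster_centers_
        H = self._hidden(X)
        Y = np.eye(y.max() + 1)[y]                         # one-hot class targets
        self.W_, *_ = np.linalg.lstsq(H, Y, rcond=None)    # least-squares output weights
        return self

    def _hidden(self, X):
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))       # Gaussian basis activations

    def predict(self, X):
        return (self._hidden(X) @ self.W_).argmax(axis=1)
```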

Some Observations for Portfolio Management Applications of Modern Machine Learning Methods

  • Park, Jooyoung;Heo, Seongman;Kim, Taehwan;Park, Jeongho;Kim, Jaein;Park, Kyungwook
    • International Journal of Fuzzy Logic and Intelligent Systems / v.16 no.1 / pp.44-51 / 2016
  • Recently, artificial intelligence has reached the level of a leading information technology that will have significant influence over many aspects of our future lifestyles. In particular, in the field of machine learning for classification and decision-making, there have been many research efforts to solve, via data-driven approaches, the estimation and control problems that appear in various kinds of portfolio management. These modern data-driven approaches, which try to find solutions based on relevant empirical data rather than mathematical analysis, are particularly useful in practical application domains. In this paper, we consider some applications of modern data-driven machine learning methods to portfolio management problems. More precisely, we apply a simplified version of the sparse Gaussian process (GP) classification method to classify users' sensitivity with respect to financial risk, and then present two portfolio management issues in which the GP application results can be useful. Experimental results show that the GP applications work well in handling simulated data sets.
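
As a small illustration of GP-based risk-sensitivity classification, the sketch below trains scikit-learn's full Gaussian process classifier on a few simulated investor feature vectors. The full GP classifier is a convenient stand-in for the simplified sparse GP method discussed in the paper, and the feature names and labels are purely illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Illustrative features: [age, income_level, max_drawdown_tolerated] (hypothetical)
X_train = np.array([[25, 3, 0.30], [58, 4, 0.05], [40, 2, 0.15], [33, 5, 0.25]])
y_train = np.array([1, 0, 0, 1])    # 1 = risk-tolerant, 0 = risk-averse (simulated labels)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X_train, y_train)
print(gpc.predict_proba(np.array([[30, 3, 0.20]])))   # class probabilities for a new user
```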

A Modified Approach to Density-Induced Support Vector Data Description

  • Park, Joo-Young;Kang, Dae-Sung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.7 no.1 / pp.1-6 / 2007
  • The SVDD (support vector data description) is one of the most well-known one-class support vector learning methods, in which one uses balls defined in the feature space to distinguish a set of normal data from all other possible abnormal objects. Recently, with the objective of generalizing the SVDD, which treats all training data with equal importance, the so-called D-SVDD (density-induced support vector data description) was proposed, incorporating the idea that data in a higher-density region are more significant than those in a lower-density region. In this paper, we consider the problem of further improving the D-SVDD toward the use of a partial reference set for testing, and propose an LMI (linear matrix inequality)-based optimization approach to solve the improved version of the D-SVDD problem. Our approach utilizes a new class of density-induced distance measures based on the RSDE (reduced set density estimator), along with an LMI-based mathematical formulation in the form of SDP (semi-definite programming) problems, which can be efficiently solved by interior-point methods. The validity of the proposed approach is illustrated via numerical experiments on real data sets.
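
For reference, here is a minimal sketch of plain SVDD (not the density-induced or LMI-based variant discussed above): the standard dual is solved with a general-purpose optimizer and an RBF kernel, and test points are scored by their squared distance to the ball centre in feature space.

```python
import numpy as np
from scipy.optimize import minimize

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def svdd_fit(X, C=0.1, gamma=0.5):
    """Standard SVDD dual; note C must be at least 1/len(X) for feasibility."""
    n, K = len(X), rbf(X, X, gamma)
    # minimize a^T K a - a^T diag(K)  s.t.  sum(a) = 1,  0 <= a_i <= C
    obj = lambda a: a @ K @ a - a @ np.diag(K)
    cons = ({"type": "eq", "fun": lambda a: a.sum() - 1.0},)
    res = minimize(obj, np.full(n, 1.0 / n), bounds=[(0.0, C)] * n, constraints=cons)
    return res.x                                           # dual weights alpha

def svdd_distance2(X_train, alpha, X_test, gamma=0.5):
    """Squared feature-space distance of test points to the centre sum_i alpha_i phi(x_i)."""
    Ktt = np.diag(rbf(X_test, X_test, gamma))
    return (Ktt - 2 * rbf(X_test, X_train, gamma) @ alpha
            + alpha @ rbf(X_train, X_train, gamma) @ alpha)
```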