• Title/Summary/Keyword: set-based algorithm


A Stigmergy-and-Neighborhood Based Ant Algorithm for Clustering Data

  • Lee, Hee-Sang; Shim, Gyu-Seok
    • Management Science and Financial Engineering, v.15 no.1, pp.81-96, 2009
  • Data mining, and clustering in particular, is one of the exciting research areas for ant-based algorithms. Ant clustering algorithms, however, have many difficulties in handling practical clustering situations. We propose a new grid-based ant colony algorithm for clustering data. Previous ant-based clustering algorithms usually tried to find clusters during the pick-up or drop-off of items by ants, using some stigmergy information. In our ant clustering algorithm, the ants also reflect neighborhood information within the storage nests. We use two ant classes, search ants and labor ants. In the initial step of the proposed algorithm, the search ants build guide information about the characteristics of the storage nests. The labor ants then classify the items using the guide information set by the search ants and the stigmergy information set by other labor ants. In this procedure the clustering decisions of the ants are guided quickly and kept from stagnating. We tested our algorithm and compared it with other known algorithms on benchmark and synthetically generated data. These experiments show that the proposed ant mining algorithm finds clusters quickly and effectively compared with a known ant clustering algorithm.
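
The pick-up/drop-off rule driven by local neighborhood similarity is the core that the two ant classes build on. Below is a minimal Python sketch of that underlying rule; the Lumer-Faieta-style probability functions and the constants k1, k2 and alpha are illustrative assumptions, not values from the paper.

```python
import math

def neighborhood_similarity(item, neighbors, alpha=0.5):
    """Average similarity f of `item` to the items in its grid neighborhood."""
    if not neighbors:
        return 0.0
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    s = sum(1.0 - dist(item, n) / alpha for n in neighbors)
    return max(0.0, s / len(neighbors))

def pick_probability(f, k1=0.1):
    """Probability that an unladen ant picks up an item whose local similarity is f."""
    return (k1 / (k1 + f)) ** 2

def drop_probability(f, k2=0.15):
    """Probability that a laden ant drops its item on a cell with local similarity f."""
    return (f / (k2 + f)) ** 2

# An isolated item is picked up almost surely; a well-matched one only rarely.
isolated = pick_probability(neighborhood_similarity([0.9, 0.9], []))
matched = pick_probability(neighborhood_similarity([0.5, 0.5],
                                                   [[0.48, 0.52], [0.51, 0.49]]))
print(isolated, matched)
```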

Convergence Analysis of the Modified Adaptive Sign (MAS) Algorithm Using a Mixed Norm Error Criterion

  • Lee, Young-Hwan
    • The Journal of the Acoustical Society of Korea, v.16 no.3E, pp.62-68, 1997
  • In this paper, a modified adaptive sign (MAS) algorithm based on a mixed norm error criterion is proposed. The mixed norm error criterion to be minimized is constructed as a combined convex function of the mean absolute error and the mean absolute error raised to the third power. A convergence analysis of the MAS algorithm is also presented. Under a set of mild assumptions, a set of nonlinear evolution equations that characterizes the statistical mean and mean-squared behavior of the algorithm is derived. Computer simulations are carried out to verify the validity of our derivations.
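
Differentiating an assumed mixed-norm cost J = lam*E|e| + (1 - lam)*E|e|^3 gives a sign-type update whose effective step size grows with the squared error. The sketch below illustrates that update on a toy system-identification run; the mixing weight lam, step size mu and the experimental setup are assumptions, not the paper's notation or experiments.

```python
import numpy as np

def mas_filter(x, d, num_taps=8, mu=0.01, lam=0.7):
    """Adaptive FIR filter with a mixed-norm sign update (illustrative sketch).

    Assumed cost: J = lam*E|e| + (1 - lam)*E|e|^3; its stochastic gradient gives
    the update term (lam + 3*(1 - lam)*e**2) * sign(e) * u.
    """
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # current input regressor
        e[n] = d[n] - w @ u                   # error signal
        w += mu * (lam + 3.0 * (1.0 - lam) * e[n] ** 2) * np.sign(e[n]) * u
    return w, e

# Toy system-identification run
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.4, -0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = mas_filter(x, d)
print(np.round(w, 2))                         # should roughly recover h
```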

Generic Training Set based Multimanifold Discriminant Learning for Single Sample Face Recognition

  • Dong, Xiwei; Wu, Fei; Jing, Xiao-Yuan
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.1, pp.368-391, 2018
  • Face recognition (FR) with a single sample per person (SSPP) is common in real-world face recognition applications. In this scenario, it is hard to predict the intra-class variations of query samples from the gallery samples because sufficient training samples are lacking. Inspired by the fact that similar faces have similar intra-class variations, we propose a virtual sample generating algorithm called k nearest neighbors based virtual sample generating (kNNVSG) to enrich the intra-class variation information of the training samples. Furthermore, to use the intra-class variation information of the virtual samples generated by the kNNVSG algorithm, we propose the image set based multimanifold discriminant learning (ISMMDL) algorithm. ISMMDL learns a projection matrix for each manifold, modeled by the local patches of the images of each class, so as to simultaneously minimize the intra-manifold margins and maximize the inter-manifold margins in the low-dimensional feature space. Finally, by combining the kNNVSG and ISMMDL algorithms, we propose the k nearest neighbor virtual image set based multimanifold discriminant learning (kNNMMDL) approach for single sample face recognition (SSFR) tasks. Experimental results on the AR, Multi-PIE and LFW face datasets demonstrate that our approach is promising for SSFR with expression, illumination and disguise variations.
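
The kNNVSG step can be pictured as borrowing intra-class variations from the k generic classes most similar to each gallery face. The following is a minimal, hypothetical sketch of that idea; the function name knn_virtual_samples, the data layout and the distance choice are illustrative and may differ from the paper's formulation.

```python
import numpy as np

def knn_virtual_samples(gallery, generic_means, generic_samples, k=3):
    """Generate virtual samples for single-sample gallery subjects.

    gallery:         (G, d) array, one feature vector per gallery subject
    generic_means:   (C, d) array, class-mean vectors of a generic training set
    generic_samples: list of (n_c, d) arrays, samples of each generic class
    Returns a list of stacked virtual-sample arrays, one per gallery subject.
    """
    virtual = []
    for g in gallery:
        # k generic classes whose mean is nearest to this gallery sample
        dists = np.linalg.norm(generic_means - g, axis=1)
        nearest = np.argsort(dists)[:k]
        # transfer each neighbor's intra-class variation (sample - class mean) onto g
        vs = [g + (generic_samples[c] - generic_means[c]) for c in nearest]
        virtual.append(np.vstack(vs))
    return virtual

# Toy usage with random data standing in for face features
rng = np.random.default_rng(1)
generic_samples = [rng.standard_normal((5, 16)) + c for c in range(10)]
generic_means = np.array([s.mean(axis=0) for s in generic_samples])
gallery = rng.standard_normal((4, 16))
virtual = knn_virtual_samples(gallery, generic_means, generic_samples, k=3)
print([v.shape for v in virtual])   # 3 neighbors x 5 samples = 15 virtual samples each
```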

An Automatic Segmentation System Based on HMM and Correction Algorithm (HMM 및 보정 알고리즘을 이용한 자동 음성 분할 시스템)

  • Kim, Mu-Jung; Kwon, Chul-Hong
    • Speech Sciences, v.9 no.4, pp.265-274, 2002
  • In this paper we propose an automatic segmentation system that outputs the time alignment information of phoneme boundaries using Viterbi search with HMMs (Hidden Markov Models) and corrects these results with a UVS (unvoiced/voiced/silence) classification algorithm. We select a set of 39 monophones and a set of 647 extended phones for the HMM models. For the UVS classification we use feature parameters such as ZCR (zero crossing rate), log energy, and spectral distribution. The result of forced alignment using the extended phone set is 11% better than that of the monophone set, and the UVS classification algorithm performs well in correcting the segmentation results.
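
A frame-level UVS decision from ZCR and log energy, followed by a small boundary search, is easy to sketch. The thresholds, frame length and search radius below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def uvs_classify(frame, energy_sil=-50.0, zcr_unvoiced=0.25):
    """Classify one speech frame as 'U' (unvoiced), 'V' (voiced) or 'S' (silence).

    frame: 1-D array of samples; thresholds are illustrative, not tuned values.
    """
    eps = 1e-12
    log_energy = 10.0 * np.log10(np.mean(frame ** 2) + eps)   # dB
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0      # crossings per sample
    if log_energy < energy_sil:
        return "S"
    return "U" if zcr > zcr_unvoiced else "V"

def correct_boundary(frames, hmm_boundary, search=5):
    """Nudge an HMM phone boundary (frame index) to the nearest U/V/S label change."""
    labels = [uvs_classify(f) for f in frames]
    for off in range(search + 1):
        for idx in (hmm_boundary - off, hmm_boundary + off):
            if 0 < idx < len(labels) and labels[idx] != labels[idx - 1]:
                return idx
    return hmm_boundary

# Toy usage: a silence-to-speech transition around frame 10
rng = np.random.default_rng(2)
frames = [rng.standard_normal(160) * (1e-4 if i < 10 else 0.3) for i in range(20)]
print(correct_boundary(frames, hmm_boundary=12))
```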

Implementation of Hardware Circuits for Fuzzy Controller Using $\alpha$-Cut Decomposition of Fuzzy Set

  • Lee, Yo-Seob; Hong, Soon-Ill
    • Journal of Advanced Marine Engineering and Technology, v.28 no.2, pp.200-209, 2004
  • Fuzzy control based on $\alpha$-level fuzzy set decomposition is known to produce quick responses and short fuzzy inference computation times. This paper derives a computational algorithm for min-max fuzzy inference and center-of-gravity defuzzification based on $\alpha$-level fuzzy set decomposition, which makes the fuzzy controller easy to realize in hardware. In addition, this study proposes a circuit that generates the actual PWM signals, covering the whole path from fuzzy inference to defuzzification. The fuzzy controller was implemented as a mixed analog-digital logic circuit using the computational min-max fuzzy inference algorithm and center-of-gravity defuzzification. This study confirmed that the fuzzy controller worked satisfactorily when applied to the position control of a DC servo system.
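
Center-of-gravity defuzzification over an $\alpha$-level decomposition can be computed by summing, over the $\alpha$-slices, the samples that fall inside each $\alpha$-cut. A minimal numerical sketch under assumed triangular output sets and a uniform $\alpha$ grid (both illustrative, not the paper's hardware formulation) is:

```python
import numpy as np

def triangle(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def cog_alpha_cut(output_sets, firing_levels, levels=16):
    """Min-max inference followed by center-of-gravity defuzzification,
    computed through alpha-level slices of the aggregated output set."""
    x = np.linspace(-10.0, 10.0, 1001)
    # aggregate: max over rules of each rule's output clipped at its firing level
    mu = np.zeros_like(x)
    for (a, b, c), w in zip(output_sets, firing_levels):
        mu = np.maximum(mu, np.minimum(w, triangle(x, a, b, c)))
    # alpha-level decomposition of the COG:
    #   COG ~ sum over slices of (sum of x in the cut) / sum over slices of (cut size)
    num = den = 0.0
    for alpha in np.linspace(0.0, 1.0, levels + 1)[1:]:
        cut = x[mu >= alpha]
        num += cut.sum()
        den += cut.size
    return num / den if den else 0.0

# Two rules: "negative small" fired at 0.3 and "positive medium" fired at 0.8
print(round(cog_alpha_cut([(-6.0, -3.0, 0.0), (0.0, 4.0, 8.0)], [0.3, 0.8]), 3))
```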

A Study on the Development of DGA based on Deep Learning (Deep Learning 기반의 DGA 개발에 대한 연구)

  • Park, Jae-Gyun; Choi, Eun-Soo; Kim, Byung-June; Zhang, Pan
    • Korean Journal of Artificial Intelligence, v.5 no.1, pp.18-28, 2017
  • Recently, many companies have been using systems based on artificial intelligence. The accuracy of artificial intelligence depends on the amount of training data and an appropriate algorithm. However, it is not easy to obtain training data with a large number of entities, and small data sets cause large generalization errors due to overfitting. To minimize this generalization error, this study proposes DGA, which applies a machine-learning genetic algorithm to deep-learning dropout and can be expected to achieve relatively high accuracy even when only a small data set is available. The idea of this paper is to determine the active state of the nodes: a new fitness function is defined using the gradient of the loss function. The proposed DGA compensates for the stochastic inconsistency of dropout, and it also alleviates the problems caused in genetic algorithms by the complexity of the fitness function and the limited expression range of the model. In experiments using MNIST data, the proposed algorithm reaches 75.3% accuracy, whereas using dropout alone reaches 41.4%, showing that DGA is better than using dropout alone.
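
As described, DGA evolves the on/off state of the nodes with a genetic algorithm instead of dropping them purely at random. The sketch below shows such a GA over binary dropout masks for one hidden layer of a tiny network; the network, selection scheme and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_loss(W1, W2, mask, X, y):
    """Tiny one-hidden-layer net; `mask` switches hidden units on/off (dropout-style)."""
    h = np.maximum(X @ W1, 0.0) * mask          # masked ReLU hidden layer
    logits = h @ W2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

def evolve_masks(W1, W2, X, y, pop=12, gens=20, keep_prob=0.5):
    """Genetic algorithm over binary dropout masks; fitness = negative batch loss."""
    hidden = W1.shape[1]
    masks = (rng.random((pop, hidden)) < keep_prob).astype(float)
    for _ in range(gens):
        fitness = np.array([-forward_loss(W1, W2, m, X, y) for m in masks])
        order = np.argsort(fitness)[::-1]
        parents = masks[order[:pop // 2]]                      # selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, hidden)                      # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(hidden) < 0.05                   # mutation
            child[flip] = 1.0 - child[flip]
            children.append(child)
        masks = np.vstack([parents, children])
    return masks[0]

# Toy usage on random data standing in for MNIST batches
X = rng.standard_normal((64, 20)); y = rng.integers(0, 3, 64)
W1 = rng.standard_normal((20, 32)) * 0.1; W2 = rng.standard_normal((32, 3)) * 0.1
best_mask = evolve_masks(W1, W2, X, y)
print(best_mask.sum(), "of", len(best_mask), "hidden units kept")
```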

Dropout Genetic Algorithm Analysis for Deep Learning Generalization Error Minimization

  • Park, Jae-Gyun; Choi, Eun-Soo; Kang, Min-Soo; Jung, Yong-Gyu
    • International Journal of Advanced Culture Technology, v.5 no.2, pp.74-81, 2017
  • Recently, many companies have been using systems based on artificial intelligence. The accuracy of artificial intelligence depends on the amount of training data and an appropriate algorithm. However, it is not easy to obtain training data with a large number of entities, and small data sets cause large generalization errors due to overfitting. To minimize this generalization error, this study proposes DGA (Dropout Genetic Algorithm), which applies a machine-learning genetic algorithm to deep-learning dropout and can be expected to achieve relatively high accuracy even when only a small data set is available. The idea of this paper is to determine the active state of the nodes: a new fitness function is defined using the gradient of the loss function. The proposed DGA compensates for the stochastic inconsistency of dropout, and it also alleviates the problems caused in genetic algorithms by the complexity of the fitness function and the limited expression range of the model. In experiments using MNIST data, the proposed algorithm reaches 75.3% accuracy, whereas using dropout alone reaches 41.4%, showing that DGA is better than using dropout alone.
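
One reading of "a fitness function defined using the gradient of the loss function" is to score a dropout mask by its batch loss plus the gradient norm it induces. The sketch below illustrates only that reading; the paper's exact fitness definition may differ, and the names and the weight lam here are assumptions.

```python
import numpy as np

def fitness_with_gradient(W1, W2, mask, X, y, lam=0.1):
    """Score a dropout mask by batch loss plus the gradient norm it induces.

    This is one possible reading of a gradient-based fitness; the exact
    definition in the paper may differ.  Higher return value = fitter mask.
    """
    h = np.maximum(X @ W1, 0.0) * mask                  # masked ReLU hidden layer
    logits = h @ W2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    n = len(y)
    loss = -np.mean(np.log(p[np.arange(n), y] + 1e-12))
    # analytic gradient of the cross-entropy w.r.t. the output weights W2
    dlogits = p.copy()
    dlogits[np.arange(n), y] -= 1.0
    dW2 = h.T @ dlogits / n
    return -(loss + lam * np.linalg.norm(dW2))

# Toy usage: compare a random mask against keeping every hidden unit
rng = np.random.default_rng(3)
X = rng.standard_normal((64, 20)); y = rng.integers(0, 3, 64)
W1 = rng.standard_normal((20, 32)) * 0.1; W2 = rng.standard_normal((32, 3)) * 0.1
full = np.ones(32)
sparse = (rng.random(32) < 0.5).astype(float)
print(fitness_with_gradient(W1, W2, full, X, y),
      fitness_with_gradient(W1, W2, sparse, X, y))
```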

Discrete-Time State Feedback Algorithm for State Consensus of Uncertain Homogeneous Multi-Agent Systems (불확실성을 포함한 다 개체 시스템의 상태 일치를 위한 이산 시간 출력 궤환 협조 제어 알고리즘)

  • Yoon, Moon-Chae; Kim, Jung-Su; Back, Juhoon
    • Journal of Institute of Control, Robotics and Systems, v.19 no.5, pp.390-397, 2013
  • This paper presents a consensus algorithm for uMAS (uncertain multi-agent systems). Unlike previous results, in which only nominal agent models are considered, it is assumed that the uncertain agent model belongs to a known polytope set. In deriving the proposed algorithm, a convex set that includes all the uncertainties in the problem is found by exploiting the convexity of the polytope set. This set plays an important role in designing the consensus algorithm for uMAS. Based on this set, a consensus condition for uMAS is proposed and the corresponding consensus design problem is solved using LMIs (linear matrix inequalities). Simulation results show that the proposed consensus algorithm successfully leads the states of the uMAS to consensus.
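
The resulting controller has the familiar structure in which each agent feeds back its disagreement with its neighbors. The simulation sketch below uses an illustrative nominal agent model, a ring graph and a hand-picked gain K in place of the polytopic uncertain model and the LMI-designed gain from the paper.

```python
import numpy as np

# Illustrative discretized double-integrator agent (dt = 0.1); the paper instead
# assumes a polytopic uncertain model and designs the gain K via LMIs.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
K = np.array([[1.0, 1.8]])             # placeholder gain, not the LMI-designed one

# Undirected 4-agent ring described by its adjacency matrix
Adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])

rng = np.random.default_rng(4)
x = rng.standard_normal((4, 2))        # initial agent states

for _ in range(300):
    x_new = np.empty_like(x)
    for i in range(4):
        # feedback of the disagreement with the neighbors
        disagreement = sum(Adj[i, j] * (x[j] - x[i]) for j in range(4))
        u = K @ disagreement
        x_new[i] = A @ x[i] + B @ u
    x = x_new

print(np.round(x, 3))                  # all rows (agent states) become nearly identical
```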

A Novel Random Scheduling Algorithm based on Subregions Coverage for SET K-Cover Problem in Wireless Sensor Networks

  • Muhammad, Zahid; Roy, Abhishek; Ahn, Chang Wook; Sachan, Ruchi; Saxena, Navrati
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.6, pp.2658-2679, 2018
  • This paper proposes a novel Random Scheduling Algorithm based on Subregion Coverage (RSASC) to solve the SET K-cover problem (an NP-complete problem). The SET K-cover problem distributes the set of sensors into the maximum number of mutually exclusive subsets (MESSs) in such a way that each of them can be scheduled to extend the lifetime of the WSN. Sensor coverage divides the target region into different subregions. RSASC first sorts the subregions in ascending order of their sensor coverage. Then it forms subregion groups according to similar sensor coverage. Lastly, RSASC ensures K-coverage of each subregion in every group by randomly scheduling the sensors. We consider the target-coverage and area-coverage applications of WSNs to analyze the usefulness of the proposed RSASC algorithm. The distinctive quality of RSASC is that it uses fewer deployed sensors (33% fewer) to form the optimal number of MESSs, with higher computational speed (saving more than 93% of the time), compared with the three existing algorithms.
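
The three phases of RSASC (sort subregions by coverage, group subregions with similar coverage, then randomly schedule sensors into mutually exclusive K-covering subsets) can be sketched as follows; the data layout and the grouping rule are simplified assumptions, not the paper's exact procedure.

```python
import random
from collections import defaultdict

def rsasc(subregion_sensors, K, seed=0):
    """Split sensors into mutually exclusive subsets, each K-covering every subregion.

    subregion_sensors: dict subregion -> set of sensor ids covering it (assumed input)
    Returns a list of disjoint sensor subsets (one per scheduling round).
    """
    rng = random.Random(seed)
    # 1) sort subregions in ascending order of sensor coverage
    ordered = sorted(subregion_sensors, key=lambda r: len(subregion_sensors[r]))
    # 2) group subregions that share the same coverage count
    groups = defaultdict(list)
    for r in ordered:
        groups[len(subregion_sensors[r])].append(r)
    # the sparsest subregion bounds how many disjoint K-covering subsets can exist
    max_subsets = len(subregion_sensors[ordered[0]]) // K
    subsets = [set() for _ in range(max_subsets)]
    assigned = set()
    # 3) randomly schedule sensors so each subset K-covers every subregion,
    #    handling the most sparsely covered groups first
    for count in sorted(groups):
        for r in groups[count]:
            candidates = list(subregion_sensors[r] - assigned)
            rng.shuffle(candidates)
            for s in subsets:
                need = K - len(s & subregion_sensors[r])
                while need > 0 and candidates:
                    sensor = candidates.pop()
                    s.add(sensor)
                    assigned.add(sensor)
                    need -= 1
    return subsets

# Toy usage: 3 subregions covered by overlapping sensor sets, K = 1
cover = {"R1": {1, 2, 3, 7}, "R2": {2, 3, 4, 5, 7, 8}, "R3": {3, 5, 6, 7, 8, 9}}
for i, s in enumerate(rsasc(cover, K=1), 1):
    print("subset", i, sorted(s))
```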

GPU Algorithm for Outer Boundaries of a Triangle Set (GPU를 이용한 삼각형 집합의 외경계 계산 알고리즘)

  • Kyung, Min-Ho
    • Korean Journal of Computational Design and Engineering, v.17 no.4, pp.262-273, 2012
  • We present a novel GPU algorithm to compute the outer cell boundaries of a 3D arrangement subdivided by a given set of triangles. An outer cell boundary is defined as a 2-manifold surface consisting of the subdivided polygons facing outward. Many geometric problems, such as Minkowski sums, swept volumes, lower/upper envelopes and Boolean operations, can be reduced to finding outer cell boundaries with specific properties. Computing outer cell boundaries, however, is very time-consuming and susceptible to numerical errors. To address these problems, we develop a GPU-based algorithm with a robust scheme that combines interval arithmetic and multi-level precision. The proposed algorithm is tested on the Minkowski sums of several polygonal models and shows a 5-20 times speedup over an existing algorithm running on the CPU.
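
The robustness scheme (a fast filtered evaluation that escalates to higher precision only when the result cannot be trusted) can be illustrated on a single orientation predicate. The sketch below is a CPU-side illustration of that filtering idea with an exact rational fallback; it is not the paper's GPU kernel or its interval-arithmetic implementation, and the error bound used as the filter is a crude assumption.

```python
from fractions import Fraction

def orient3d_filtered(p, q, r, s, eps=1e-10):
    """Sign of the orientation determinant of points p, q, r, s (tuples of floats).

    First evaluate in floating point with a conservative error bound (a crude
    stand-in for interval arithmetic); if the result is too close to zero to
    trust, re-evaluate exactly with rationals (the higher-precision level).
    """
    def det(a, b, c, num):
        ax, ay, az = (num(a[i]) - num(s[i]) for i in range(3))
        bx, by, bz = (num(b[i]) - num(s[i]) for i in range(3))
        cx, cy, cz = (num(c[i]) - num(s[i]) for i in range(3))
        return (ax * (by * cz - bz * cy)
                - ay * (bx * cz - bz * cx)
                + az * (bx * cy - by * cx))

    approx = det(p, q, r, float)
    scale = max(abs(v) for pt in (p, q, r, s) for v in pt) or 1.0
    if abs(approx) > eps * scale ** 3:          # filter: clearly nonzero, trust it
        return 1 if approx > 0 else -1
    exact = det(p, q, r, Fraction)              # escalate to exact rational arithmetic
    return (exact > 0) - (exact < 0)

# A nearly degenerate case where the float result is too small to trust
p, q, r = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
s = (0.5, 0.5, 1e-17)
print(orient3d_filtered(p, q, r, s))            # resolved by the exact fallback
```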