• Title/Summary/Keyword: polynomial networks

A Study On Bi-Criteria Shortest Path Model Development Using Genetic Algorithm (유전 알고리즘을 이용한 이중목적 최단경로 모형개발에 관한 연구)

  • 이승재;장인성;박민희
    • Journal of Korean Society of Transportation
    • /
    • v.18 no.3
    • /
    • pp.77-86
    • /
    • 2000
  • The shortest path problem is one of the mathematical programming models that can be conveniently solved through the use of networks. The common shortest path problem is to minimize a single objective function such as distance, time, or cost between two specified nodes in a transportation network. The single-objective model is not sufficient to reflect practical problems with multiple conflicting objectives in real-world applications. In this paper, we consider the shortest path problem in a multi-objective environment. While the shortest path problem with a single objective is solvable in polynomial time, the shortest path problem with multiple objectives is NP-complete. A genetic algorithm approach is developed to deal with this problem. The results of an experimental investigation of the effectiveness of the algorithm are also presented.

  • PDF
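The genetic-algorithm approach summarized in the entry above can be pictured with a minimal sketch. The code below is not the authors' implementation: the toy graph, the two edge weights (distance and time), the weighted-sum fitness, and all GA parameters are assumptions made purely for illustration.

```python
# Hedged sketch: GA for a bi-criteria (distance, time) shortest path on a small
# directed graph. Paths are encoded as node sequences; fitness is a weighted sum
# of the two objectives. Graph and parameters are illustrative assumptions.
import random

GRAPH = {  # node -> {neighbor: (distance, time)}
    'A': {'B': (2, 5), 'C': (4, 1)},
    'B': {'C': (1, 2), 'D': (7, 3)},
    'C': {'D': (3, 6)},
    'D': {},
}
SRC, DST, W_DIST, W_TIME = 'A', 'D', 0.5, 0.5

def random_path(src, dst, max_len=10):
    """Random walk (without revisiting nodes) used to seed the population."""
    path = [src]
    while path[-1] != dst and len(path) < max_len:
        choices = [n for n in GRAPH[path[-1]] if n not in path]
        if not choices:
            return None
        path.append(random.choice(choices))
    return path if path[-1] == dst else None

def fitness(path):
    dist = sum(GRAPH[u][v][0] for u, v in zip(path, path[1:]))
    time = sum(GRAPH[u][v][1] for u, v in zip(path, path[1:]))
    return W_DIST * dist + W_TIME * time  # lower is better

def crossover(p1, p2):
    """Splice two parents at a randomly chosen common intermediate node."""
    common = [n for n in p1[1:-1] if n in p2[1:-1]]
    if not common:
        return p1
    n = random.choice(common)
    child = p1[:p1.index(n)] + p2[p2.index(n):]
    # Reject children that revisit a node.
    return child if len(set(child)) == len(child) else p1

def evolve(pop_size=20, generations=50):
    pop = [p for p in (random_path(SRC, DST) for _ in range(pop_size * 5)) if p][:pop_size]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = [crossover(random.choice(survivors), random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=fitness)

print(evolve())
```

With the weights W_DIST and W_TIME the two objectives are scalarized into a single fitness; sweeping these weights is one simple way to trace out different trade-offs between the conflicting objectives mentioned in the abstract.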

Joint Mode Selection and Resource Allocation for Mobile Relay-Aided Device-to-Device Communication

  • Tang, Rui;Zhao, Jihong;Qu, Hua;Zhu, Zhengcang;Zhang, Yanpeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.3
    • /
    • pp.950-975
    • /
    • 2016
  • Device-to-Device (D2D) communication underlaying cellular networks is a promising add-on component for future radio communication systems. It provides more access opportunities for local device pairs and enhances system throughput (ST), especially when mobile relays (MR) are further enabled to facilitate D2D links whose direct channel conditions are unfavorable. However, mutual interference is inevitable due to spectral reuse, and selecting a suitable transmission mode to benefit the correlated resource allocation (RA) is another difficult problem. We aim to optimize the ST of the hybrid system via joint consideration of mode selection (MS) and RA, which includes admission control (AC), power control (PC), channel assignment (CA), and relay selection (RS). However, the original problem is generally NP-hard; therefore, we decompose it into two parts in which a hierarchical structure exists: (i) PC is mode-dependent, but its optimality can be fully addressed for any given mode, with additional AC design to meet individual quality-of-service requirements. (ii) Based on that optimality, the joint design of MS, CA, and RS can be viewed from a graph perspective and transformed into the maximum weighted independent set problem, which is then approximated by our greedy algorithm in polynomial time. Based on the numerical results, we demonstrate the efficacy of our mechanism and observe the gain brought by MR-aided D2D communication.
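The last step described above, casting the joint design of MS, CA, and RS as a maximum weighted independent set (MWIS) problem and approximating it greedily, can be sketched as follows. This is only an illustrative greedy MWIS heuristic on a toy conflict graph; the vertices, weights, and greedy criterion are assumptions, not the paper's formulation.

```python
# Hedged sketch: greedy approximation of the maximum weighted independent set.
# Vertices stand for candidate (mode, channel, relay) assignments; edges connect
# assignments that would interfere. Weights (e.g., achievable throughput) and
# the conflict graph below are purely illustrative.
def greedy_mwis(weights, edges):
    """Pick vertices by weight/(conflict degree + 1), discarding neighbors of chosen ones."""
    adj = {v: set() for v in weights}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(weights)
    chosen = []
    while remaining:
        # Greedy criterion: largest weight normalized by local conflict degree.
        v = max(remaining, key=lambda x: weights[x] / (len(adj[x] & remaining) + 1))
        chosen.append(v)
        remaining -= adj[v] | {v}
    return chosen

weights = {'a': 5.0, 'b': 3.0, 'c': 4.0, 'd': 2.0}
edges = [('a', 'b'), ('b', 'c'), ('c', 'd')]
print(greedy_mwis(weights, edges))   # picks 'a' and 'c', total weight 9.0
```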

Design of Reed-Solomon Decoder for High Speed Data Networks

  • Park, Young-Shig;Park, Heyk-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.1
    • /
    • pp.170-178
    • /
    • 2004
  • In this work, a high-speed 8-error-correcting Reed-Solomon decoder is designed using the modified Euclid algorithm. Decoding of Reed-Solomon codes consists of four steps: computing syndromes, finding the error-locator polynomial, deciding error locations, and determining error values. The decoding speed is increased and the latency is reduced by using a parallel architecture in the syndrome generator and a faster clock in the modified Euclid algorithm block. In addition, the error-locator polynomial in the Chien search block is separated into even and odd terms to increase the overall speed of the decoder. All functionalities of the decoder are first verified through C++ programs. Verilog is used for the hardware description, and the decoder is then synthesized with a 0.25 μm CMOS TML library. The functionalities of the chip are also verified through test vectors. The clock speed of the chip is 250 MHz, and the maximum data rate is 1 Gbps.
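The first of the four decoding steps listed above, syndrome computation, can be sketched with table-based GF(2^8) arithmetic. The primitive polynomial 0x11d, the convention of 2t = 16 syndromes for an 8-error-correcting code, and the toy received vector are illustrative assumptions and not the parameters of the chip described.

```python
# Hedged sketch: syndrome computation for a Reed-Solomon code over GF(2^8),
# the first of the four decoding steps listed in the abstract. The primitive
# polynomial (0x11d) and the 2t = 16 syndromes of an 8-error-correcting code
# are standard choices; the toy "received" vector is illustrative only.
PRIM = 0x11d
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for i in range(255, 512):          # duplicate table to avoid mod-255 on lookups
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(received, t=8):
    """S_j = r(alpha^j), j = 1..2t; all-zero syndromes mean no detectable error."""
    synd = []
    for j in range(1, 2 * t + 1):
        s = 0
        for c in received:          # Horner evaluation of r(x) at alpha^j
            s = gf_mul(s, EXP[j]) ^ c
        synd.append(s)
    return synd

received = [32, 91, 11, 120, 209, 114, 220, 77]   # toy byte vector
print(syndromes(received))
```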

Identification Methodology of FCM-based Fuzzy Model Using Particle Swarm Optimization (입자 군집 최적화를 이용한 FCM 기반 퍼지 모델의 동정 방법론)

  • Oh, Sung-Kwun;Kim, Wook-Dong;Park, Ho-Sung;Son, Myung-Hee
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.60 no.1
    • /
    • pp.184-192
    • /
    • 2011
  • In this study, we introduce an identification methodology for FCM-based fuzzy models. The two underlying design mechanisms of such networks involve the Fuzzy C-Means (FCM) clustering method and Particle Swarm Optimization (PSO). The proposed algorithm is based on the FCM clustering method for efficient processing of data, and the optimization of the model is carried out using PSO. The premise part of the fuzzy rules is not constructed from fixed membership functions such as triangular, Gaussian, or ellipsoidal ones, because we build it up using FCM. As a result, the proposed model leads to a compact network architecture. As the consequent part of the fuzzy rules, we are able to use four types of polynomials: simplified, linear, quadratic, and modified quadratic. In addition, Weighted Least Squares Estimation, used to estimate the coefficients of the polynomials in the consequent parts of the fuzzy model, decouples each fuzzy rule from the others. Therefore, the local learning capability and the interpretability of the proposed fuzzy model are improved. The parameters of the proposed fuzzy model, such as the fuzzification coefficient of FCM clustering, the number of FCM clusters, and the polynomial type of the consequent part of the fuzzy rules, are adjusted using PSO. The proposed model is illustrated with the use of the Automobile Miles per Gallon (MPG) and Boston Housing machine learning datasets. A comparative analysis reveals that the proposed FCM-based fuzzy model exhibits higher accuracy and superb predictive capability in comparison to some previous models available in the literature.
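Two of the building blocks named above, FCM memberships for the premise part and weighted least squares for the consequent polynomials, can be sketched roughly as follows. The one-dimensional toy data, the two clusters, the linear consequents, and the fuzzification coefficient are assumptions for illustration only.

```python
# Hedged sketch: FCM-style memberships used as rule activations, with a
# weighted-least-squares fit of a linear consequent polynomial per rule.
# Data, cluster count, and fuzzification coefficient m are illustrative.
import numpy as np

def fcm_memberships(x, centers, m=2.0):
    """u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1)) for sample k and cluster i."""
    d = np.abs(x[None, :] - centers[:, None]) + 1e-12    # (clusters, samples)
    ratio = (d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)                        # columns sum to 1 over clusters

def wls_linear_consequent(x, y, u_i):
    """Fit y ~ a0 + a1*x for one rule, weighting each sample by its membership."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(u_i)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)      # separate fit per rule

x = np.linspace(0.0, 10.0, 50)
y = np.sin(x) + 0.1 * x
centers = np.array([2.5, 7.5])
U = fcm_memberships(x, centers)
coeffs = [wls_linear_consequent(x, y, U[i]) for i in range(len(centers))]
y_hat = sum(U[i] * (c[0] + c[1] * x) for i, c in enumerate(coeffs))
print("per-rule coefficients:", coeffs)
```

Because each rule's consequent is fitted separately with its own membership weights, the rules are decoupled in the sense described in the abstract.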

Design of Digits Recognition System Based on RBFNNs : A Comparative Study of Pre-processing Algorithms (방사형 기저함수 신경회로망 기반 숫자 인식 시스템의 설계 : 전처리 알고리즘을 이용한 인식성능의 비교연구)

  • Kim, Eun-Hu;Kim, Bong-Youn;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.66 no.2
    • /
    • pp.416-424
    • /
    • 2017
  • In this study, we propose a design of a digit recognition system based on RBFNNs, through a comparative study of pre-processing algorithms, in order to recognize handwritten digits. The Histogram of Oriented Gradients (HOG) is used to obtain the features of the digits in the proposed recognition system. In the pre-processing part, dimensionality reduction is performed using Principal Component Analysis (PCA) and (2D)²PCA, which are widely adopted methods, in order to minimize the loss of information during the reduction of the feature space. The architecture of the radial basis function neural network consists of three functional modules: the condition, conclusion, and inference parts. In the condition part, the input space is partitioned using fuzzy clustering realized by means of the Fuzzy C-Means algorithm, which is used instead of a Gaussian function to reflect the characteristics of the input data. In the conclusion part, the connection weights take the form of extended polynomial expressions: constant, linear, quadratic, and modified quadratic. Using the MNIST handwritten digit benchmark database, experimental results show the effectiveness and efficiency of the proposed digit recognition system in comparison with other studies.
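The PCA-based pre-processing step described above can be sketched as follows; the stand-in feature matrix (random numbers in place of real HOG vectors) and the number of retained components are assumptions for illustration.

```python
# Hedged sketch: PCA-based dimensionality reduction as used in the pre-processing
# stage, applied to a stand-in feature matrix (rows = digit images, columns =
# HOG-like features). The data here is random; only the mechanics are shown.
import numpy as np

def pca_reduce(X, n_components):
    """Project centered data onto the top principal components."""
    X_centered = X - X.mean(axis=0)
    # Principal directions via SVD of the centered data matrix.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:n_components]          # (n_components, n_features)
    return X_centered @ components.T, components

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 324))             # e.g. 200 digits, 324 HOG-like features
X_low, components = pca_reduce(X, n_components=40)
print(X_low.shape)                           # (200, 40)
```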

Design of Face Recognition Algorithm based Optimized pRBFNNs Using Three-dimensional Scanner (최적 pRBFNNs 패턴분류기 기반 3차원 스캐너를 이용한 얼굴인식 알고리즘 설계)

  • Ma, Chang-Min;Yoo, Sung-Hoon;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.6
    • /
    • pp.748-753
    • /
    • 2012
  • In this paper, a face recognition algorithm is designed based on an optimized pRBFNNs pattern classifier using a three-dimensional scanner. Generally, a two-dimensional image-based face recognition system extracts facial features from the gray levels of images. Environmental variations such as natural sunlight, artificial light, and face pose lead to deterioration of the performance of such a system. In this paper, the proposed face recognition algorithm is designed using a three-dimensional scanner to overcome these drawbacks of two-dimensional face recognition systems. First, the face shape is scanned using the three-dimensional scanner, and the pose of the scanned face is converted to a frontal image through a pose compensation process. Secondly, face depth data are extracted using the point signature method. Finally, the recognition performance is confirmed by using the optimized pRBFNNs classifier for solving the resulting high-dimensional pattern recognition problem.
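One step of the pipeline above, converting the scanned face to a frontal pose, amounts to applying a rigid rotation to the 3D point cloud. The sketch below only illustrates that idea by removing an estimated yaw offset; the point cloud and the yaw angle are placeholders, and the paper's actual pose compensation procedure may differ.

```python
# Hedged sketch: simple pose compensation by rotating a 3D point cloud about the
# y (vertical) axis so that an estimated yaw offset is removed. The scanned
# points and the yaw estimate are placeholders for illustration.
import numpy as np

def compensate_yaw(points, yaw_rad):
    """Rotate (N, 3) points by -yaw about the y axis to face the camera."""
    c, s = np.cos(-yaw_rad), np.sin(-yaw_rad)
    R = np.array([[ c,  0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s,  0.0, c]])
    return points @ R.T

scan = np.random.default_rng(1).normal(size=(1000, 3))   # stand-in face scan
frontal = compensate_yaw(scan, yaw_rad=np.deg2rad(15.0))
print(frontal.shape)
```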

Efficient Mining of Frequent Subgraph with Connectivity Constraint

  • Moon, Hyun-S.;Lee, Kwang-H.;Lee, Do-Heon
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2005.09a
    • /
    • pp.267-271
    • /
    • 2005
  • The goal of data mining is to extract new and useful knowledge from large-scale datasets. As the amount of available data grows explosively, it has become vitally important to develop faster data mining algorithms for various types of data. Recently, interest in developing data mining algorithms that operate on graphs has increased. In particular, mining frequent patterns from structured data such as graphs has drawn the attention of many research groups. A graph is a highly adaptable representation scheme used in many domains including chemistry, bioinformatics, and physics. For example, the chemical structure of a given substance can be modelled by an undirected labelled graph in which each node corresponds to an atom and each edge corresponds to a chemical bond between atoms. The Internet can also be modelled as a directed graph in which each node corresponds to a web site and each edge corresponds to a hypertext link between web sites. Notably, in the bioinformatics area, various kinds of newly discovered data such as gene regulation networks or protein interaction networks can be modelled as graphs. There have been a number of attempts to find useful knowledge from these graph-structured data. One of the most powerful analysis tools for graph-structured data is frequent subgraph analysis. Recurring patterns in graph data can provide incomparable insights into that data. However, finding recurring subgraphs is extremely expensive computationally. At the core of the problem lie two computationally challenging subproblems: 1) subgraph isomorphism and 2) enumeration of subgraphs. Problems related to the former are the subgraph isomorphism problem (does graph A contain graph B?) and the graph isomorphism problem (are two graphs A and B the same or not?). Even these simplified versions of the subgraph mining problem are known to be NP-complete or isomorphism-complete, and no polynomial-time algorithm is known for them so far. The latter is also a difficult problem: without any constraint, all 2^n subgraphs must be generated, where n is the number of vertices of the input graph. In order to find frequent subgraphs in a large graph database, it is essential to impose appropriate constraints on the subgraphs to be found. Most current approaches focus on the frequency of a subgraph: the higher the frequency of a graph, the more attention should be given to that graph. Recently, several algorithms that use level-by-level approaches to find frequent subgraphs have been developed. Some recently emerging applications suggest that other constraints such as connectivity could also be useful in mining subgraphs: more strongly connected parts of a graph are more informative. If we restrict the set of subgraphs to mine to more strongly connected parts, the computational complexity can be decreased significantly. In this paper, we present an efficient algorithm to mine frequent subgraphs that are more strongly connected. An experimental study shows that the algorithm scales to large graphs with more than ten thousand vertices.

  • PDF
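To make the enumeration cost discussed above concrete, the sketch below enumerates connected vertex-induced subgraphs up to a small size bound by repeatedly expanding a candidate vertex set with one neighboring vertex. The toy adjacency list is an assumption, and a real frequent-subgraph miner would add frequency counting, isomorphism testing, and the connectivity-based pruning the paper proposes on top of this enumeration skeleton.

```python
# Hedged sketch: enumerate connected vertex-induced subgraphs of size <= k by
# expanding each candidate vertex set with a neighboring vertex. This is the
# exponential "enumeration of subgraphs" step the abstract refers to; the toy
# adjacency list is illustrative only.
def connected_subgraphs(adj, k):
    found = set()
    frontier = [frozenset([v]) for v in adj]
    while frontier:
        nxt = []
        for s in frontier:
            if s in found:
                continue
            found.add(s)
            if len(s) == k:
                continue
            neighbors = set().union(*(adj[v] for v in s)) - s
            nxt.extend(s | {n} for n in neighbors)   # stays connected by construction
        frontier = nxt
    return found

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
subs = connected_subgraphs(adj, k=3)
print(sorted(map(sorted, subs)))
```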

Design of Multi-FPNN Model Using Clustering and Genetic Algorithms and Its Application to Nonlinear Process Systems (HCM 클러스처링과 유전자 알고리즘을 이용한 다중 FPNN 모델 설계와 비선형 공정으로의 응용)

  • 박호성;오성권;안태천
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.10 no.4
    • /
    • pp.343-350
    • /
    • 2000
  • In this paper, we propose the Multi-FPNN (Fuzzy Polynomial Neural Networks) model, based on FNN and PNN (Polynomial Neural Networks), for optimal system identification. Here, the FNN structure is designed using a fuzzy input space divided over each separate input variable, and both structures are utilized in order to obtain better output performance. Each node of the PNN structure, which is based on the GMDH (Group Method of Data Handling) method, uses two types of high-order polynomials, linear and quadratic, and takes three kinds of multi-variable inputs; Genetic Algorithms (GAs) are used to identify both the structure and the parameters of the Multi-FPNN model. The HCM clustering method, carried out for data preprocessing of the process system, is utilized to determine the structure of the model according to the divisions of the input-output space. An aggregate performance index with a weighting factor is used to achieve a sound balance between the approximation and generalization abilities of the model. Through the selection and adjustment of the weighting factor of this aggregate objective function, it becomes feasible and effective to design an optimal Multi-FPNN model. The study is illustrated with the aid of two representative numerical examples, and the aggregate performance index related to the approximation and generalization abilities of the model is evaluated and discussed.

  • PDF
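The aggregate performance index mentioned above, which balances approximation (training) error against generalization (testing) error through a weighting factor, can be written down directly. The mean-squared-error choice and the value of the weighting factor below are assumptions for illustration.

```python
# Hedged sketch: a weighted aggregate performance index of the kind the abstract
# describes, balancing approximation and generalization errors with a weighting
# factor theta in [0, 1]. The mean-squared-error choice and theta are assumptions.
import numpy as np

def mse(y_true, y_pred):
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def aggregate_performance_index(train_true, train_pred, test_true, test_pred, theta=0.5):
    """PI = theta * E_train + (1 - theta) * E_test; smaller is better."""
    return theta * mse(train_true, train_pred) + (1.0 - theta) * mse(test_true, test_pred)

# Toy usage: a model that fits the training data slightly better than the test data.
pi = aggregate_performance_index([1, 2, 3], [1.1, 2.0, 2.9],
                                 [4, 5, 6], [4.3, 5.2, 6.4], theta=0.5)
print(round(pi, 4))
```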

Design of Summer Very Short-term Precipitation Forecasting Pattern in Metropolitan Area Using Optimized RBFNNs (최적화된 다항식 방사형 기저함수 신경회로망을 이용한 수도권 여름철 초단기 강수예측 패턴 설계)

  • Kim, Hyun-Ki;Choi, Woo-Yong;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.6
    • /
    • pp.533-538
    • /
    • 2013
  • The damage caused by recent, frequently occurring localized torrential rains is increasing rapidly. In densely populated metropolitan areas, casualties and property damage are serious due to landslides, debris flows, and floods. Therefore, the importance of forecasting torrential rain is increasing. Severe-weather precipitation in Korea is divided into typhoons and torrential rains, and its characteristics vary depending on duration and area. Rainfall is difficult to predict because regional precipitation is highly volatile and nonlinear. In this paper, a very short-term precipitation forecasting pattern model is implemented using KLAPS data from the Korea Meteorological Administration. We design the very short-term precipitation forecasting pattern model using GA-based RBFNNs, in which structural and parametric values such as the number of inputs, the polynomial type, the number of FCM clusters, and the fuzzification coefficient are optimized by a genetic algorithm.
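The GA-optimized structural and parametric values listed above (number of inputs, polynomial type, number of FCM clusters, fuzzification coefficient) can be pictured as a small chromosome. The sketch below decodes such a chromosome of genes in [0, 1] into a parameter dictionary; the value ranges are assumptions, not the settings used in the paper.

```python
# Hedged sketch: decoding a GA chromosome of four genes in [0, 1] into the
# structural/parametric values the abstract says are optimized. The value
# ranges chosen here are illustrative assumptions, not the paper's settings.
POLY_TYPES = ["constant", "linear", "quadratic", "modified_quadratic"]

def decode_chromosome(genes):
    g_inputs, g_poly, g_clusters, g_fuzz = genes
    return {
        "num_inputs": 1 + int(g_inputs * 9.999),             # 1..10 input variables
        "polynomial_type": POLY_TYPES[int(g_poly * 3.999)],   # one of four consequents
        "num_fcm_clusters": 2 + int(g_clusters * 8.999),      # 2..10 clusters
        "fuzzification_coeff": 1.1 + g_fuzz * 1.9,            # roughly 1.1..3.0
    }

print(decode_chromosome([0.3, 0.8, 0.5, 0.4]))
```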

Low Cost and Acceptable Delay Unicast Routing Algorithm Based on Interval Estimation (구간 추정 기반의 지연시간을 고려한 저비용 유니캐스트 라우팅 방식)

  • Kim, Moon-Seong;Bang, Young-Cheol;Choo, Hyun-Seung
    • The KIPS Transactions:PartC
    • /
    • v.11C no.2
    • /
    • pp.263-268
    • /
    • 2004
  • The end-to-end characteristic is an important factor for QoS support. As the number of network users and the bandwidth required by applications increase, the efficient use of networks has been intensively investigated for better utilization of network resources. Distributed adaptive routing is the typical routing algorithm used in the current Internet. The DCLC (Delay-Constrained Least-Cost) path problem has been shown to be NP-hard. In the DCLC problem, the cost of the least-delay (LD) path is relatively higher than that of the least-cost (LC) path, and the delay of the LC path is relatively higher than that of the LD path. In this paper, we investigate the performance of a heuristic algorithm for the DCLC problem with a new factor that is a probabilistic combination of cost and delay. Salama proposed a polynomial-time algorithm called DCUR; it always computes a path whose cost is within 10% of that of the optimal CBF. Our evaluation shows that the heuristic we propose is more than 38% better than DCUR in terms of cost when the number of nodes exceeds 200. The new factor takes into account both cost and delay at the same time.
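To make the combined cost/delay factor concrete, the sketch below runs Dijkstra on a convex combination alpha*cost + (1-alpha)*delay over a grid of alpha values and keeps the cheapest path that satisfies the delay bound. This illustrates the idea of a combined factor only; it is neither the authors' heuristic nor DCUR, and the toy graph, alpha grid, and bound are assumptions.

```python
# Hedged sketch: a delay-constrained least-cost heuristic that runs Dijkstra on a
# combined weight alpha*cost + (1-alpha)*delay and keeps the cheapest path that
# meets the delay bound. The graph, alpha grid, and bound are illustrative; this
# is not the authors' algorithm or DCUR, only the idea of a combined factor.
import heapq

GRAPH = {  # node -> {neighbor: (cost, delay)}
    's': {'a': (1, 4), 'b': (4, 1)},
    'a': {'t': (1, 4)},
    'b': {'t': (4, 1)},
    't': {},
}

def dijkstra_combined(src, dst, alpha):
    """Shortest path under the scalarized weight alpha*cost + (1-alpha)*delay."""
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        w, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, (c, d) in GRAPH[node].items():
            if nxt not in seen:
                heapq.heappush(heap, (w + alpha * c + (1 - alpha) * d, nxt, path + [nxt]))
    return None

def path_metrics(path):
    cost = sum(GRAPH[u][v][0] for u, v in zip(path, path[1:]))
    delay = sum(GRAPH[u][v][1] for u, v in zip(path, path[1:]))
    return cost, delay

def dclc_heuristic(src, dst, delay_bound):
    best = None
    for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):      # sweep the combination factor
        path = dijkstra_combined(src, dst, alpha)
        if path is None:
            continue
        cost, delay = path_metrics(path)
        if delay <= delay_bound and (best is None or cost < best[0]):
            best = (cost, path)
    return best

print(dclc_heuristic('s', 't', delay_bound=6))
```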