• Title/Summary/Keyword: input space partitioning

Design of Neurofuzzy Networks by Means of Linear Fuzzy Inference and Its Application to Software Engineering (선형 퍼지추론을 이용한 뉴로퍼지 네트워크의 설계와 소프트웨어 공학으로의 응용)

  • Park, Byoung-Jun;Park, Ho-Sung;Oh, Sung-Kwun
    • Proceedings of the KIEE Conference / 2002.07d / pp.2818-2820 / 2002
  • In this paper, we design a neurofuzzy network architecture by means of linear fuzzy inference. The proposed neurofuzzy networks are equivalent to linear fuzzy rules, and their structure is composed of two main substructures, namely a premise part and a consequence part. The premise part uses fuzzy space partitioning over all variables in order to capture correlations between input variables. The consequence part is a network constituted in first-order linear form. In conventional neurofuzzy structures (for instance ANFIS networks), the consequence part consists of nodes whose function is a linear combination of the input variables; in the proposed networks it consists not of nodes but of subnetworks built from connection weights that functionally correspond to a linear combination of the input variables. The connection weights in the consequence part are learned by the back-propagation algorithm. The proposed neurofuzzy networks are evaluated experimentally on a well-known NASA dataset concerning software cost estimation.
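
The rule form described above is that of a first-order (Sugeno-style) fuzzy model: fuzzy premises defined over all input variables and consequents that are linear combinations of the inputs. As a rough illustration of that rule form only (not the authors' network, whose consequence part is itself a weight network trained by back-propagation), the Python sketch below evaluates such a model; the Gaussian premises and all parameter values are illustrative assumptions.

```python
import numpy as np

def linear_fuzzy_inference(x, centers, sigmas, W, b):
    """Evaluate a first-order (linear-consequent) fuzzy model at input x.

    Each rule i covers the whole input space with a Gaussian membership
    centred at centers[i]; its consequent is the linear form W[i] @ x + b[i].
    The output is the firing-strength-weighted average of the rule consequents.
    """
    # Rule firing strengths: product of per-variable Gaussian memberships.
    d2 = ((x - centers) / sigmas) ** 2          # (rules, n_inputs)
    firing = np.exp(-0.5 * d2.sum(axis=1))      # (rules,)
    firing = firing / (firing.sum() + 1e-12)    # normalised strengths

    consequents = W @ x + b                     # linear consequent per rule
    return float(firing @ consequents)

# Toy usage: two rules over a two-dimensional input (all values illustrative).
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
sigmas  = np.array([[0.5, 0.5], [0.5, 0.5]])
W       = np.array([[1.0, -1.0], [0.5, 2.0]])   # consequent slopes
b       = np.array([0.0, 1.0])                  # consequent intercepts
print(linear_fuzzy_inference(np.array([0.3, 0.7]), centers, sigmas, W, b))
```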

Implementation and Performance Evaluation of Parallel Programming Translator for High Performance Fortran (High Performance Fortran 병렬 프로그래밍 변환기의 구현 및 성능 평가)

  • Kim, Jung-Gwon;Hong, Man-Pyo;Kim, Dong-Gyu
    • The Transactions of the Korea Information Processing Society / v.6 no.4 / pp.901-915 / 1999
  • Parallel computers offer excellent performance per cost while satisfying scalability and high performance. However, parallel machines have enjoyed limited success because of the difficulty of parallel programming and the lack of portability between machines. Recently, researchers have sought to develop data-parallel languages that provide machine-independent programming systems. A data-parallel language such as High Performance Fortran provides a basis for writing a parallel program over a global name space by partitioning data and computation and generating message-passing functions. In this paper, we describe the Parallel Programming Translator (PPTran), a source-to-source data-parallel compiler that generates an MPI SPMD parallel program from an HPF input program through four phases: data dependence analysis, data partitioning, computation partitioning, and code generation with explicit message passing. We also verify the performance of PPTran.
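
As a hedged illustration of the data-partitioning step such a translator performs, the sketch below computes a standard BLOCK distribution of an array over a set of processes and applies the owner-computes rule; the function and its formula are a generic textbook construction, not PPTran's actual code.

```python
def block_partition(n, nprocs, rank):
    """Return the half-open global index range [lo, hi) owned by `rank`
    under a BLOCK distribution of n elements over nprocs processes."""
    base, rem = divmod(n, nprocs)
    lo = rank * base + min(rank, rem)
    hi = lo + base + (1 if rank < rem else 0)
    return lo, hi

# Owner-computes rule: each process updates only the iterations whose
# left-hand-side elements it owns.
n, nprocs = 10, 4
for rank in range(nprocs):
    lo, hi = block_partition(n, nprocs, rank)
    print(f"rank {rank}: owns a[{lo}:{hi}]")
```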

Quad Tree Based 2D Smoke Super-resolution with CNN (CNN을 이용한 Quad Tree 기반 2D Smoke Super-resolution)

  • Hong, Byeongsun;Park, Jihyeok;Choi, Myungjin;Kim, Changhun
    • Journal of the Korea Computer Graphics Society / v.25 no.3 / pp.105-113 / 2019
  • Physically-based fluid simulation takes a long time at high resolution. To address this problem, studies have used deep learning to compensate for the limitations of low-resolution fluid simulation. Among them, super-resolution, which converts low-resolution simulation data to high resolution, is actively being pursued. However, traditional techniques must process the entire space, including regions with no density data, so they are inefficient in overall simulation speed and can run out of GPU memory as the input resolution increases. In this paper, we propose a new method that divides and classifies 2D smoke simulation data using the quad tree, one of the spatial partitioning methods, and performs super-resolution only on the required regions. This technique accelerates the simulation by computing only the necessary space, and by processing the divided input data it also alleviates the GPU memory problem.
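
A minimal sketch of the quad-tree idea described above, assuming a simple density threshold: the 2D field is recursively subdivided, and only leaves that actually contain smoke would be handed to the super-resolution network. The function name, threshold, and block sizes are illustrative, not the paper's implementation.

```python
import numpy as np

def quadtree_blocks(density, x0, y0, size, min_size, eps=1e-6):
    """Recursively split a square region of a 2D density field into
    quad-tree blocks, refining only where smoke density is present.
    Returns a list of (x0, y0, size, has_smoke) leaf blocks."""
    block = density[y0:y0 + size, x0:x0 + size]
    has_smoke = block.max() > eps
    if not has_smoke or size <= min_size:
        return [(x0, y0, size, has_smoke)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_blocks(density, x0 + dx, y0 + dy,
                                      half, min_size, eps)
    return leaves

# Toy 16x16 field with smoke only in one corner; super-resolution would
# then be run only on the leaves flagged has_smoke=True.
field = np.zeros((16, 16))
field[2:6, 2:6] = 1.0
leaves = quadtree_blocks(field, 0, 0, 16, min_size=4)
print([leaf for leaf in leaves if leaf[3]])
```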

SOC Verification Based on WGL

  • Du, Zhen-Jun;Li, Min
    • Journal of Korea Multimedia Society / v.9 no.12 / pp.1607-1616 / 2006
  • The growing market for multimedia and digital signal processing requires significant data-path portions in SoCs. However, the common models for verification are not suitable for SoCs. A novel model, WGL (Weighted Generalized List), is proposed, which is based on the general-list decomposition of polynomials, with three different weights and manipulation rules introduced to achieve node sharing and canonicity. Timing parameters and operations on them are also considered. Examples show that the word-level WGL is the only model that linearly represents the common word-level functions, and that the bit-level WGL is especially suitable for arithmetic-intensive circuits. The model is proved to be a uniform and efficient model for both bit-level and word-level functions. Then, based on the WGL model, a backward-construction logic-verification approach is presented, which reduces time and space complexity for multipliers to polynomial complexity (time complexity less than $O(n^{3.6})$ and space complexity less than $O(n^{1.5})$) without hierarchical partitioning. Finally, a construction methodology for word-level polynomials is also presented in order to implement complex high-level verification; it combines order computation and coefficient solving and adopts an efficient backward approach. The construction complexity is much lower than that of existing approaches, e.g. the construction time for multipliers grows at a power of less than 1.6 in the size of the input word without increasing the maximal space required. The WGL model and the verification methods based on it show their theoretical and practical significance in SoC design.

Spatial Partitioning for Query Result Size Estimation in Spatial Databases (공간 데이터베이스에서 질의 결과 크기 추정을 위한 공간 분할)

  • 황환규
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.2 / pp.23-32 / 2004
  • An important task of the query optimizer when a query is invoked is to estimate the fraction of records in the database that satisfy the given query condition. Query result size estimation in spatial databases, as in relational databases, proceeds by partitioning the whole input into a small number of subsets called "buckets" and then estimating the fraction of the input in each bucket. The accuracy of estimation is determined by the difference between the real data counts and the approximations in the buckets, and it depends on how the buckets are partitioned. Existing techniques for spatial databases are the equi-area and equi-count techniques, which are respectively analogous, in relational databases, to equi-height histograms, which divide the input value range into buckets of equal size, and equi-depth histograms, in which each bucket contains the same number of records. In this paper we propose a new partitioning technique that determines buckets according to the maximal difference of area, defined as the product of the data ranges and frequencies of the input. This technique considers both the data values and the frequencies of the input data simultaneously, and thus achieves substantial improvements in accuracy over existing approaches. We present a detailed experimental study of the accuracy of query result size estimation, comparing the proposed technique with the existing techniques using synthetic as well as real-life datasets. Experiments confirm that the proposed technique offers better accuracy in query result size estimation than the existing techniques across spatial query size, bucket number, data number, and data size.
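
For orientation, the sketch below implements the two standard one-dimensional baselines the abstract contrasts against: equi-width (equi-height) buckets over the value range and equi-depth (equi-count) buckets with roughly equal record counts. The paper's own partitioning by maximal difference of area is only outlined above and is not reproduced here.

```python
import numpy as np

def equi_width_buckets(values, k):
    """Equi-height style: split the value range into k buckets of equal
    width and count how many records fall in each."""
    edges = np.linspace(values.min(), values.max(), k + 1)
    counts, _ = np.histogram(values, bins=edges)
    return edges, counts

def equi_depth_buckets(values, k):
    """Equi-depth/equi-count style: choose bucket boundaries so that each
    bucket holds (approximately) the same number of records."""
    edges = np.quantile(values, np.linspace(0.0, 1.0, k + 1))
    counts, _ = np.histogram(values, bins=edges)
    return edges, counts

# Skewed synthetic data: equi-width buckets get very unequal counts,
# while equi-depth buckets hold roughly the same number of records.
data = np.concatenate([np.random.normal(0, 1, 900),
                       np.random.normal(8, 0.5, 100)])
print(equi_width_buckets(data, 4)[1])
print(equi_depth_buckets(data, 4)[1])
```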

Integrity Assessment Models for Bridge Structures Using Fuzzy Decision-Making (퍼지의사결정을 이용한 교량 구조물의 건전성평가 모델)

  • 안영기;김성칠
    • Journal of the Korea Concrete Institute / v.14 no.6 / pp.1022-1031 / 2002
  • This paper presents efficient models for bridge structures using CART-ANFIS (classification and regression tree-adaptive neuro-fuzzy inference system). A fuzzy decision tree partitions the input space of a data set into mutually exclusive regions, and each region is assigned a label, a value, or an action to characterize its data points. Fuzzy decision trees used for classification problems are often called fuzzy classification trees; each terminal node contains a label that indicates the predicted class of a given feature vector. In the same vein, decision trees used for regression problems are often called fuzzy regression trees, and the terminal node labels may be constants or equations that specify the predicted output value of a given input vector. Note that CART can select relevant inputs and perform tree partitioning of the input space, while ANFIS refines the regression and makes it continuous and smooth everywhere. Thus CART and ANFIS are complementary, and their combination constitutes a solid approach to fuzzy modeling.
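
As a hedged illustration of the CART-style partitioning step mentioned above, the sketch below searches for the single axis-aligned split of the input space that most reduces output variance; it is a generic regression-tree split, not the paper's CART-ANFIS pipeline.

```python
import numpy as np

def best_split(X, y):
    """Find the single axis-aligned split of the input space that most
    reduces the variance of y, i.e. one CART-style partitioning step."""
    n, d = X.shape
    best = (None, None, np.inf)       # (feature, threshold, weighted variance)
    for j in range(d):
        for t in np.unique(X[:, j])[:-1]:
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            score = (left.var() * len(left) + right.var() * len(right)) / n
            if score < best[2]:
                best = (j, t, score)
    return best

# Toy data: the output depends mostly on the first input variable.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
y = np.where(X[:, 0] < 0.5, 1.0, 3.0) + 0.1 * rng.standard_normal(200)
print(best_split(X, y))   # expected: feature 0, threshold near 0.5
```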

Structure Identification of a Neuro-Fuzzy Model Can Reduce Inconsistency of Its Rulebase

  • Wang, Bo-Hyeun;Cho, Hyun-Joon
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.2 / pp.276-283 / 2007
  • It has been shown that structure identification of a neuro-fuzzy model improves its accuracy in various modeling problems. In this paper, we claim that structure identification of a neuro-fuzzy model can also reduce the degree of inconsistency of its fuzzy rulebase. The resulting neuro-fuzzy model thus serves more like a structured knowledge representation scheme. To this end, we briefly review a structure identification method for neuro-fuzzy models and propose a systematic method to measure the inconsistency of a fuzzy rulebase. The proposed method is applied to problems of fuzzy system reproduction and nonlinear system modeling in order to validate our claim.

Nonlinear Characteristics of Fuzzy Scatter Partition-Based Fuzzy Inference System

  • Park, Keon-Jun;Huang, Wei;Yu, C.;Kim, Yong K.
    • International journal of advanced smart convergence / v.2 no.1 / pp.12-17 / 2013
  • This paper introduces a fuzzy scatter partition-based fuzzy inference system for modeling nonlinear processes and analyzing their nonlinear characteristics. The fuzzy rules of the inference system are generated by partitioning the input space in scatter form using the Fuzzy C-Means (FCM) clustering algorithm. The premise parameters of the rules are determined from the membership matrix produced by FCM clustering. The consequence part of the rules is represented by polynomial functions whose parameters are estimated by least squared errors. The proposed model is evaluated on data widely used for nonlinear processes. Finally, the paper shows that the proposed model yields good results for high-dimensional nonlinear processes.
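
A minimal Python sketch of the pipeline outlined above, under illustrative assumptions: plain FCM clustering provides the scatter partition (membership matrix) of the input space, and a first-order (linear) consequent per rule is fitted by weighted least squares. The cluster count, toy data, and per-rule weighted least-squares formulation are illustrative choices, not the paper's exact estimation procedure.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain Fuzzy C-Means: returns cluster centres and the membership
    matrix U (n_samples x c) used as the scatter partition of the input space."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))           # random fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

def fit_linear_consequents(X, y, U):
    """Estimate a first-order (linear) consequent per rule by weighted
    least squares, using the FCM memberships as rule firing strengths."""
    Xb = np.hstack([X, np.ones((len(X), 1))])            # add bias column
    params = []
    for i in range(U.shape[1]):
        w = np.sqrt(U[:, i])[:, None]
        theta, *_ = np.linalg.lstsq(w * Xb, w[:, 0] * y, rcond=None)
        params.append(theta)
    return np.array(params)                              # (rules, n_inputs + 1)

# Toy nonlinear process: two inputs, smooth nonlinearity.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2
centers, U = fcm(X, c=4)
theta = fit_linear_consequents(X, y, U)
y_hat = (U * (np.hstack([X, np.ones((len(X), 1))]) @ theta.T)).sum(axis=1)
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```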

An Automatic Fuzzy Rule Extraction using CFCM and Fuzzy Equalization Method (CFCM과 퍼지 균등화를 이용한 퍼지 규칙의 자동 생성)

  • 곽근창;이대종;유정웅;전명근
    • Journal of the Korean Institute of Intelligent Systems / v.10 no.3 / pp.194-202 / 2000
  • In this paper, an efficient fuzzy rule generation scheme for the Adaptive Network-based Fuzzy Inference System (ANFIS) using conditional fuzzy c-means (CFCM) and fuzzy equalization (FE) methods is proposed. In conventional ANFIS approaches, the number of fuzzy rules increases exponentially when grid partitioning of the input space is applied. Therefore, the CFCM method is adopted to obtain clusters that reflect the given input and output data, and the FE method is used to automatically construct the fuzzy membership functions. From this, one can systematically obtain a small set of fuzzy rules that shows satisfactory performance for the given problems. Finally, we applied the proposed method to the truck backer-upper control and Box-Jenkins modeling problems and obtained better performance than previous works.
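
The exponential growth mentioned above is easy to see: with grid partitioning, one rule is formed for every combination of membership functions across the inputs, whereas cluster-based generation (as with CFCM) yields only as many rules as clusters. A small illustrative sketch, with membership-function and cluster counts chosen arbitrarily:

```python
from itertools import product

def grid_rule_count(mfs_per_input, n_inputs):
    """Grid partitioning forms one rule per combination of membership
    functions, so the rulebase grows exponentially with the input dimension."""
    return len(list(product(range(mfs_per_input), repeat=n_inputs)))

for n in (2, 4, 6, 8):
    print(n, "inputs:", grid_rule_count(3, n), "grid rules vs e.g. 5 clustered rules")
```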

An efficient VLSI Implementation of the 2-D DCT with the Algorithm Decomposition (알고리즘 분해를 이용한 2-D DCT)

  • Jeong, Jae-Gil
    • The Journal of Natural Sciences / v.7 / pp.27-35 / 1995
  • This paper introduces a VLSI (Very Large Scale Integrated circuit) implementation of the 2-D Discrete Cosine Transform (DCT) with an application to image and video coding. The implementation, which is based upon a state-space model, uses both algorithm and data partitioning to achieve high efficiency. With this implementation, the amount of data transferred between the processing elements (PEs) is reduced and all data transfers are limited to be local. The system accepts the input as a progressively scanned data stream, which reduces the hardware required for the input data control module. With proper ordering of computations, the matrix transposition between two matrix-by-matrix multiplications, which is required in many 2-D DCT systems based upon a row-column decomposition, can also be removed. The new implementation scheme makes it feasible to implement a single 2-D DCT VLSI chip that can be easily expanded for a larger 2-D DCT by cascading these chips.
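
For reference, the sketch below shows the conventional row-column 2-D DCT, including the explicit matrix transposition between the two matrix-by-matrix multiplications that the implementation above eliminates; the orthonormal DCT-II matrix construction is standard, and the code is illustrative rather than the paper's state-space VLSI scheme.

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct2_row_column(X):
    """2-D DCT by row-column decomposition: a 1-D DCT pass, an explicit
    transposition, then a second 1-D DCT pass (equivalent to C @ X @ C.T)."""
    C = dct2_matrix(X.shape[0])
    rows = C @ X             # first 1-D transform pass
    return (C @ rows.T).T    # transpose, transform again, transpose back

block = np.arange(64, dtype=float).reshape(8, 8)
print(np.round(dct2_row_column(block), 2))
```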
