• Title/Summary/Keyword: Graph-based


Automatic decomposition of unstructured meshes employing genetic algorithms for parallel FEM computations

  • Rama Mohan Rao, A.;Appa Rao, T.V.S.R.;Dattaguru, B.
    • Structural Engineering and Mechanics, v.14 no.6, pp.625-647, 2002
  • Parallel execution of computational mechanics codes requires efficient mesh-partitioning techniques. These techniques divide the mesh into a specified number of submeshes of approximately equal size while minimising the number of interface nodes between submeshes. This paper describes a new mesh-partitioning technique employing Genetic Algorithms. The proposed algorithm operates on a deduced graph (dual or nodal graph) of the given finite element mesh rather than directly on the mesh itself. The algorithm works by first constructing a coarse approximation of the graph using an automatic graph coarsening method. The coarse graph is partitioned and the result is interpolated onto the original graph to initialise an optimisation of the graph partitioning problem. In practice, a hierarchy of (usually more than two) graphs is used to obtain the final partition. The proposed partitioning algorithm is applied to graphs derived from unstructured finite element meshes describing practical engineering problems, as well as to several example graphs related to finite element meshes reported in the literature. The test results indicate that the proposed GA-based graph partitioning algorithm generates high-quality partitions that are superior to those of spectral and multilevel graph partitioning algorithms.
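
The GA step can be illustrated with a small sketch. The following is a minimal, self-contained toy of my own (not the authors' algorithm): each chromosome assigns every vertex of a graph to one of two submeshes, and the fitness penalises both the edge cut and the size imbalance. In the paper this search would run on a coarsened graph and be projected back through the graph hierarchy, which is omitted here.

```python
import random

def edge_cut(graph, assignment):
    """Count edges whose endpoints fall in different parts."""
    return sum(1 for u, nbrs in graph.items() for v in nbrs
               if u < v and assignment[u] != assignment[v])

def fitness(graph, assignment, balance_weight=2.0):
    # lower is better: edge cut plus a penalty for unbalanced parts
    imbalance = abs(sum(assignment.values()) - len(assignment) / 2)
    return edge_cut(graph, assignment) + balance_weight * imbalance

def ga_bisect(graph, pop_size=40, generations=200, mutation_rate=0.05):
    nodes = sorted(graph)
    pop = [{u: random.randint(0, 1) for u in nodes} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: fitness(graph, a))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            # uniform crossover followed by point mutation
            child = {u: (p1[u] if random.random() < 0.5 else p2[u]) for u in nodes}
            for u in nodes:
                if random.random() < mutation_rate:
                    child[u] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: fitness(graph, a))

if __name__ == "__main__":
    # two loosely connected 4-cliques; the natural bisection cuts a single edge
    g = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4},
         4: {3, 5, 6, 7}, 5: {4, 6, 7}, 6: {4, 5, 7}, 7: {4, 5, 6}}
    part = ga_bisect(g)
    print(part, "cut =", edge_cut(g, part))
```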

An Efficient Large Graph Clustering Technique based on Min-Hash (Min-Hash를 이용한 효율적인 대용량 그래프 클러스터링 기법)

  • Lee, Seok-Joo;Min, Jun-Ki
    • Journal of KIISE, v.43 no.3, pp.380-388, 2016
  • Graph clustering is widely used to analyze a graph and identify its properties by generating clusters consisting of similar vertices. Recently, large graph data have been generated in diverse applications such as Social Network Services (SNS), the World Wide Web (WWW), and telephone networks. Therefore, the importance of graph clustering algorithms that process large graph data efficiently has increased. In this paper, we propose an effective clustering algorithm that generates clusters for large graph data efficiently. The proposed algorithm estimates similarities between clusters in the graph data using Min-Hash and constructs clusters according to the computed similarities. In experiments with real-world data sets, we demonstrate the efficiency of the proposed algorithm by comparing it with existing algorithms.
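
As a rough illustration of the Min-Hash idea (my own assumptions, not the paper's exact algorithm), the sketch below estimates the Jaccard similarity between vertex neighbourhoods from short Min-Hash signatures; a clustering procedure of this kind merges vertices or clusters whose estimated similarity is high, without ever comparing the full neighbour sets.

```python
import hashlib
import random

def minhash_signature(items, num_hashes=64, seed=0):
    """One minimum hash value per salted hash function over the item set."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(num_hashes)]
    return [min(int(hashlib.blake2b(f"{salt}:{x}".encode(),
                                    digest_size=8).hexdigest(), 16)
                for x in items)
            for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    # fraction of agreeing positions approximates the Jaccard similarity
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

if __name__ == "__main__":
    graph = {"a": {"b", "c", "d"}, "b": {"a", "c", "d"}, "e": {"f", "g"}}
    # signature of each vertex's closed neighbourhood (vertex plus neighbours)
    sigs = {v: minhash_signature(nbrs | {v}) for v, nbrs in graph.items()}
    print(estimated_jaccard(sigs["a"], sigs["b"]))   # high: same neighbourhood
    print(estimated_jaccard(sigs["a"], sigs["e"]))   # low: disjoint neighbourhoods
```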

ShareSafe: An Improved Version of SecGraph

  • Tang, Kaiyu;Han, Meng;Gu, Qinchen;Zhou, Anni;Beyah, Raheem;Ji, Shouling
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.11, pp.5731-5754, 2019
  • In this paper, we redesign, implement, and evaluate ShareSafe (based on SecGraph), an open-source secure graph data sharing/publishing platform. Within ShareSafe, we propose a De-anonymization Quantification Module and a Recommendation Module. In addition, we model the attackers' background knowledge and evaluate the relation between graph data privacy and the structure of the graph. To the best of our knowledge, ShareSafe is the first platform that enables users to perform data perturbation, utility evaluation, de-anonymization evaluation, and privacy quantification. Leveraging ShareSafe, we conduct a more comprehensive and advanced utility and privacy evaluation. The results demonstrate that (1) the risk of privacy leakage of an anonymized graph increases with the attackers' background knowledge; (2) for a successful de-anonymization attack, the seed mapping, even if relatively small, plays a much more important role than the auxiliary graph; (3) the structure of the graph has a fundamental and significant effect on both its utility and its privacy; and (4) there is no universally optimal anonymization/de-anonymization algorithm, since the performance of each algorithm varies across environments.

Spectral Clustering with Sparse Graph Construction Based on Markov Random Walk

  • Cao, Jiangzhong;Chen, Pei;Ling, Bingo Wing-Kuen;Yang, Zhijing;Dai, Qingyun
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.7, pp.2568-2584, 2015
  • Spectral clustering has become one of the most popular clustering approaches in recent years. The similarity graph constructed on the data is one of the key factors that influence the performance of spectral clustering. However, the similarity graphs constructed by existing methods usually contain some unreliable edges. To construct a reliable similarity graph for spectral clustering, an efficient method based on the Markov random walk (MRW) is proposed in this paper. In the proposed method, the MRW model is defined on the raw k-NN graph and the neighbors of each sample are determined by the MRW probabilities. Since the higher-order transition probabilities capture complex relationships among the data, the neighbors determined by the proposed method are more reliable than those of existing methods. Experiments are performed on synthetic and real-world datasets for performance evaluation and comparison. The results show that the graph obtained by the proposed method reflects the structure of the data better than those of the state-of-the-art methods and can effectively improve the performance of spectral clustering.
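
A minimal numpy sketch of the general idea follows, written under my own assumptions rather than taken from the paper: start from a raw k-NN similarity graph, normalise it into a Markov transition matrix, and keep for each sample only the neighbours with the largest multi-step transition probabilities (which encode higher-order structure) before running spectral clustering on the refined graph.

```python
import numpy as np

def knn_similarity(X, k, sigma=1.0):
    """Gaussian similarity restricted to each point's k nearest neighbours."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    keep = np.argsort(-W, axis=1)[:, :k]
    M = np.zeros_like(W)
    rows = np.arange(len(X))[:, None]
    M[rows, keep] = W[rows, keep]
    return np.maximum(M, M.T)                    # symmetrise the raw k-NN graph

def mrw_refined_graph(W, k, steps=3):
    """Keep, for each node, the k neighbours with the largest multi-step probability."""
    P = W / W.sum(axis=1, keepdims=True)         # one-step transition matrix
    Pt = np.linalg.matrix_power(P, steps)        # higher-order transition probabilities
    np.fill_diagonal(Pt, 0.0)                    # ignore returning to the start node
    keep = np.argsort(-Pt, axis=1)[:, :k]
    S = np.zeros_like(W)
    rows = np.arange(len(W))[:, None]
    S[rows, keep] = Pt[rows, keep]
    return np.maximum(S, S.T)                    # refined graph fed to spectral clustering

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
    S = mrw_refined_graph(knn_similarity(X, k=8), k=5)
    print(S.shape, int((S > 0).sum()))
```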

The Implementation of Graph-based SLAM Using General Graph Optimization (일반 그래프 최적화를 활용한 그래프 기반 SLAM 구현)

  • Ko, Nak-Yong;Chung, Jun-Hyuk;Jeong, Da-Bin
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.14 no.4, pp.637-644, 2019
  • This paper describes an implementation of a graph-based simultaneous localization and mapping (SLAM) method based on General Graph Optimization. General Graph Optimization formulates the SLAM problem using nodes and edges: the nodes represent the location and attitude of the robot over time, and the edges between nodes represent the constraints between them. The constraints are imposed by sensor measurements. General Graph Optimization solves the problem by optimizing a performance index determined by the constraints. The implementation is verified using measurement data sets that are publicly available for testing various SLAM methods.
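
To make the node/edge formulation concrete, here is a toy one-dimensional pose-graph example of my own (not the General Graph Optimization implementation used in the paper): nodes are robot positions, edges are relative displacement measurements including one loop-closure-style constraint, and the estimate minimises the sum of squared constraint residuals. In 1-D the constraints are linear, so a single least-squares solve suffices; a full SLAM back end performs the same minimisation iteratively over 2-D/3-D poses.

```python
import numpy as np

# edges: (i, j, measured displacement x_j - x_i)
edges = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9), (3, 4, 1.0),
         (0, 4, 3.8)]                 # loop-closure-style constraint
n = 5                                 # number of pose nodes x0..x4

A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
for row, (i, j, z) in enumerate(edges):
    A[row, i], A[row, j], b[row] = -1.0, 1.0, z
A[-1, 0], b[-1] = 1.0, 0.0            # gauge constraint: fix x0 = 0

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))                 # optimised node positions
```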

Improved approach of calculating the same shape in graph mining (그래프 마이닝에서 그래프 동형판단연산의 향상기법)

  • No, Young-Sang;Yun, Un-Il;Kim, Myung-Jun
    • Journal of the Korea Society of Computer and Information, v.14 no.10, pp.251-258, 2009
  • Data mining extracts useful knowledge from huge volumes of data. Recently, a major research focus in data mining has been finding interesting patterns in graph databases, and increasingly efficient graph mining methods have been proposed; however, the underlying graph analysis problems are NP-hard. Graph pattern mining based on the pattern-growth method finds the complete set of patterns satisfying a given property by extending graph patterns edge by edge while avoiding the generation of duplicate patterns. This paper suggests an efficient approach for reducing the computing time of the pattern-growth method, exploiting the property that similar patterns lead to similar tasks, and proposes pruning methods that reduce the search space. Based on an extensive performance study, we discuss the results and future work.
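
One common way such duplicate-pattern ("same shape") checks are made cheap is to canonicalise each candidate pattern so that isomorphic patterns map to the same key. The sketch below is an illustrative assumption of mine, not the paper's pruning technique, and its brute-force canonicalisation is only viable for the tiny patterns a pattern-growth miner extends edge by edge.

```python
from itertools import permutations

def canonical_form(n, edges):
    """Lexicographically smallest edge list over all relabellings of vertices 0..n-1."""
    best = None
    for perm in permutations(range(n)):
        relabelled = tuple(sorted(tuple(sorted((perm[u], perm[v])))
                                  for u, v in edges))
        if best is None or relabelled < best:
            best = relabelled
    return best

seen = {}
patterns = [
    (3, [(0, 1), (1, 2)]),           # path a-b-c
    (3, [(2, 1), (0, 1)]),           # same path, different labelling
    (3, [(0, 1), (1, 2), (0, 2)]),   # triangle
]
for n, edges in patterns:
    key = canonical_form(n, edges)   # duplicates share the same key,
    print("duplicate" if key in seen else "new", key)
    seen.setdefault(key, (n, edges)) # so no isomorphism test is repeated
```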

Automatic C Source Code Generation Technique for DirectShow Programming (DirectShow 프로그래밍을 위한 C 소스 코드 자동 생성 기법)

  • 동지연;박선화;엄성용
    • Journal of KIISE: Computing Practices and Letters, v.10 no.1, pp.114-124, 2004
  • In this paper, we present an automatic C source code generation system for DirectShow-based multimedia application programming. In this system, C source code is automatically synthesized from the filter connection graph edited with GraphEdit, a utility tool provided with Microsoft's DirectShow SDK. In traditional DirectShow programming environments, program design and brief testing are usually done with the GraphEdit tool simply by inserting filters and connecting them properly, while the actual implementation of the program must be done separately; the filter connection graph from GraphEdit serves only as a reference in that implementation step. Therefore, our system, which automatically generates C source code directly from the GraphEdit filter connection graph, is very useful, and programmers can develop DirectShow-based multimedia applications more effectively and quickly with it. In addition, our system supports a wider variety of media stream control functions in the generated application programs than existing tools such as the Wizard, which supports only a limited, fixed set of media control functions. This allows more flexibility in the user interface of the generated program and makes our system more practical for DirectShow-based programming.
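
The core idea, emitting source text by walking a filter connection graph, can be shown with a deliberately small sketch. Everything here is my own assumption: the filter names, the connection list, and the emitted snippet, which is a simplified C++-style DirectShow skeleton rather than the C code the authors' system generates (real generated code would also create each filter from its CLSID and connect the filters' pins).

```python
# A toy code generator: walk a filter connection graph and emit source text.
filters = ["Source", "Decoder", "Renderer"]
connections = [("Source", "Decoder"), ("Decoder", "Renderer")]

lines = [
    "IGraphBuilder *pGraph = NULL;",
    "CoInitialize(NULL);",
    "CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,",
    "                 IID_IGraphBuilder, (void **)&pGraph);",
]
for name in filters:
    lines.append(f"/* create the '{name}' filter (CLSID omitted) and add it */")
    lines.append(f'pGraph->AddFilter(p{name}, L"{name}");')
for src, dst in connections:
    lines.append(f"/* connect the output pin of '{src}' to the input pin of '{dst}' */")

print("\n".join(lines))
```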

Use of Graph Database for the Integration of Heterogeneous Biological Data

  • Yoon, Byoung-Ha;Kim, Seon-Kyu;Kim, Seon-Young
    • Genomics & Informatics, v.15 no.1, pp.19-27, 2017
  • Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships through multiple-join statements. Recently, a new type of database, called a graph database, was developed to natively represent various kinds of complex relationships, and it is widely used in the computer science community and the IT industry. Here, we demonstrate the feasibility of using a graph database for complex biological relationships by comparing the performance of MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested query execution performance, we found that Neo4j outperformed MySQL in all cases. While Neo4j responded very quickly to various queries, MySQL exhibited long latencies or failed to finish for complex queries with multiple-join statements. These results show that using graph databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data.
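
As a hedged illustration of why a graph query is more direct than a multi-join SQL statement, the sketch below issues a single Cypher MATCH over a hypothetical drug-gene-disease schema using the official Neo4j Python driver; the node labels, relationship types, and connection details are my own assumptions, not the schema the authors built.

```python
from neo4j import GraphDatabase   # official Neo4j Python driver

# Hypothetical schema: (Drug)-[:TARGETS]->(Gene)-[:ASSOCIATED_WITH]->(Disease).
# In MySQL the same question would require joining drug_target and gene_disease tables.
CYPHER = """
MATCH (d:Drug {name: $drug})-[:TARGETS]->(g:Gene)-[:ASSOCIATED_WITH]->(dis:Disease)
RETURN dis.name AS disease, count(g) AS shared_targets
ORDER BY shared_targets DESC
"""

def diseases_linked_to_drug(uri, user, password, drug):
    driver = GraphDatabase.driver(uri, auth=(user, password))
    try:
        with driver.session() as session:
            return [record.data() for record in session.run(CYPHER, drug=drug)]
    finally:
        driver.close()

if __name__ == "__main__":
    print(diseases_linked_to_drug("bolt://localhost:7687", "neo4j", "password", "aspirin"))
```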

Design of Quasi-Cyclic Low-Density Parity Check Codes with Large Girth

  • Jing, Long-Jiang;Lin, Jing-Li;Zhu, Wei-Le
    • ETRI Journal, v.29 no.3, pp.381-389, 2007
  • In this paper, we propose a graph-theoretic method based on linear congruence for constructing low-density parity check (LDPC) codes. In this method, we design a connection graph with three kinds of special paths to ensure that the Tanner graph of the parity check matrix mapped from the connection graph contains no short cycles. The new construction method yields a class of (3, ρ)-regular quasi-cyclic LDPC codes with a girth of 12. Based on the structure of the parity check matrix, a lower bound on the minimum distance of the codes is derived. Simulation studies of several proposed LDPC codes demonstrate strong bit-error-rate performance with iterative decoding on additive white Gaussian noise channels.
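
The quantity the construction protects is the girth of the Tanner graph. As a side sketch under my own assumptions (it does not reproduce the paper's linear-congruence construction), the code below measures that girth by breadth-first search over the bipartite graph of a small parity-check matrix; the paper's connection graphs are designed so that this value is at least 12.

```python
from collections import deque

def tanner_girth(H):
    """Girth of the Tanner graph of parity-check matrix H (list of 0/1 rows)."""
    m, n = len(H), len(H[0])
    # bipartite adjacency: check nodes are 0..m-1, variable nodes are m..m+n-1
    adj = {v: set() for v in range(m + n)}
    for i in range(m):
        for j in range(n):
            if H[i][j]:
                adj[i].add(m + j)
                adj[m + j].add(i)
    girth = float("inf")
    for root in adj:                          # BFS from every node
        dist, parent = {root: 0}, {root: None}
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    queue.append(w)
                elif w != parent[u]:          # non-tree edge closes a cycle
                    girth = min(girth, dist[u] + dist[w] + 1)
    return girth

H = [[1, 1, 1, 0],
     [0, 1, 1, 1],
     [1, 0, 1, 1]]
print(tanner_girth(H))    # 4: this toy matrix has the kind of short cycle the paper avoids
```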


A Study on CRM (Center of Rotation Method) based on MST (Minimum Spanning Tree) Matching Algorithm for Fingerprint Recognition

  • Kwon, Hyoung-Ki;Lee, Jun-Ho;Ryu, Young-Kee
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 2001.10a, pp.55.5-55, 2001
  • The MST (Minimum Spanning Tree) matching algorithm has been used to find the partial accord (corresponding) points extracted from a gray-level fingerprint image. The method, however, has some limitations. To obtain the relationship between the enrolled and input fingerprints, the MST is used to generate a tree graph that uniquely represents the given fingerprint data, and the accord points are estimated from this graph. However, the shape of the graph depends strongly on the positions of the minutiae; if pseudo minutiae are introduced by noise, the shape of the graph will differ. In this paper, to overcome the limitations of the MST, we propose the CRM (Center of Rotation Method) algorithm, which finds the true partial accord points. The proposed method is based on the assumption ...
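
A small sketch of the MST step the abstract refers to (my own rendering, not the authors' code): Prim's algorithm over 2-D minutia coordinates produces the tree graph whose shape is compared between enrolled and input fingerprints. A single spurious minutia can change which edges are chosen, which is exactly the sensitivity the proposed CRM is meant to overcome.

```python
import math

def prim_mst(points):
    """Return MST edges (i, j) over points given as (x, y) tuples."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        # cheapest edge leaving the current tree
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        in_tree.add(j)
        edges.append((i, j))
    return edges

minutiae = [(10, 12), (14, 30), (40, 8), (42, 35), (25, 22)]   # toy minutia positions
print(prim_mst(minutiae))
```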
