• Title/Summary/Keyword: Goal graph

Search results: 78

Toxicity prediction of chemicals using OECD test guideline data with graph-based deep learning models (OECD TG데이터를 이용한 그래프 기반 딥러닝 모델 분자 특성 예측)

  • Daehwan Hwang;Changwon Lim
    • The Korean Journal of Applied Statistics / v.37 no.3 / pp.355-380 / 2024
  • In this paper, we compare the performance of graph-based deep learning models using OECD test guideline (TG) data. OECD TGs are a unique tool for assessing the potential effects of chemicals on health and the environment, but many of the guidelines involve animal testing. Animal testing is time-consuming and expensive and raises ethical issues, so methods to find alternatives or to minimize its use are being studied. Deep learning is used in various fields involving chemicals, including toxicity prediction, and research on graph-based models is particularly active. Our goal is to compare the performance of graph-based deep learning models on OECD TG data and to identify the best-performing model for these data. We collected OECD TG results from eChemportal.org, a website operated by the OECD, and chemicals that were unsuitable for model training were removed during pre-processing. The toxicity prediction performance of five graph-based models was then compared using the collected OECD TG data and MoleculeNet data, a benchmark dataset for predicting chemical properties.
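
As a minimal, hypothetical sketch of the kind of pipeline such graph-based models rest on (not one of the five models compared in the paper), the snippet below turns a SMILES string into a molecular graph and applies a single untrained GCN-style layer; it assumes RDKit and NumPy are available, and the one-dimensional atom features and random weights are placeholders.

```python
# Minimal sketch: SMILES string -> molecular graph -> one GCN-style layer.
# Assumes RDKit and NumPy; random weights stand in for a trained model.
import numpy as np
from rdkit import Chem

def smiles_to_graph(smiles):
    mol = Chem.MolFromSmiles(smiles)
    n = mol.GetNumAtoms()
    # node features: atomic number only (real models use richer features)
    x = np.array([[a.GetAtomicNum()] for a in mol.GetAtoms()], dtype=float)
    adj = np.eye(n)                       # self-loops
    for b in mol.GetBonds():
        i, j = b.GetBeginAtomIdx(), b.GetEndAtomIdx()
        adj[i, j] = adj[j, i] = 1.0
    return x, adj

def gcn_layer(x, adj, w):
    # symmetric normalization D^(-1/2) A D^(-1/2), then linear map and ReLU
    d = adj.sum(axis=1)
    norm = adj / np.sqrt(np.outer(d, d))
    return np.maximum(norm @ x @ w, 0.0)

x, adj = smiles_to_graph("CCO")           # ethanol, heavy atoms only
w = np.random.default_rng(0).normal(size=(1, 8))
h = gcn_layer(x, adj, w)                  # per-atom embeddings
graph_embedding = h.mean(axis=0)          # pooled for a molecule-level prediction
print(graph_embedding.shape)              # (8,)
```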

Compromise Scheme for Assigning Tasks on a Homogeneous Distributed System

  • Kim, Joo-Man
    • Journal of information and communication convergence engineering / v.9 no.2 / pp.141-149 / 2011
  • We consider the problem of assigning tasks to homogeneous nodes in a distributed system so as to minimize the amount of communication while balancing the processors' loads. This issue can be posed as a graph partitioning problem: given an undirected graph G = (nodes, edges), where nodes represent task modules and edges represent communication, the goal is to divide the graph into n parts, where n is the number of processors, so as to balance the processors' loads while minimizing the capacity of the edges cut. Since these two optimization criteria conflict with each other, a compromise between them must be made according to the given task type. We propose a new cost function to evaluate static task assignments and a heuristic algorithm to solve the transformed problem, explicitly describing the tradeoff between the two goals. Simulation results show that our approach outperforms an existing representative approach for a variety of tasks and processing systems.
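
The abstract does not state the proposed cost function, so the sketch below only illustrates the compromise idea with an assumed weighted sum of load imbalance and cut capacity; `compromise_cost` and its `alpha` parameter are hypothetical names.

```python
# Hypothetical sketch of a compromise cost for a static task assignment:
# a weighted sum of load imbalance and communication cut (the paper's actual
# cost function is not given in the abstract; alpha trades off the two goals).
def compromise_cost(task_loads, comm_edges, assignment, n_procs, alpha=0.5):
    loads = [0.0] * n_procs
    for task, load in task_loads.items():
        loads[assignment[task]] += load
    imbalance = max(loads) - sum(loads) / n_procs
    cut = sum(c for (u, v), c in comm_edges.items()
              if assignment[u] != assignment[v])
    return alpha * imbalance + (1.0 - alpha) * cut

# toy instance: 4 equal tasks, a heavy communication edge between t1 and t2
task_loads = {"t1": 1, "t2": 1, "t3": 1, "t4": 1}
comm_edges = {("t1", "t2"): 5, ("t2", "t3"): 1, ("t3", "t4"): 1}
print(compromise_cost(task_loads, comm_edges,
                      {"t1": 0, "t2": 0, "t3": 1, "t4": 1}, 2))  # keeps the heavy edge internal
```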

LINEAR AND NON-LINEAR LOOP-TRANSVERSAL CODES IN ERROR-CORRECTION AND GRAPH DOMINATION

  • Dagli, Mehmet;Im, Bokhee;Smith, Jonathan D.H.
    • Bulletin of the Korean Mathematical Society / v.57 no.2 / pp.295-309 / 2020
  • Loop transversal codes take an alternative approach to the theory of error-correcting codes, placing emphasis on the set of errors that are to be corrected. Hitherto, the loop transversal code method has been restricted to linear codes. The goal of the current paper is to extend the conceptual framework of loop transversal codes to admit nonlinear codes. We present a natural example of this nonlinearity among perfect single-error correcting codes that exhibit efficient domination in a circulant graph, and contrast it with linear codes in a similar context.
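
For the graph-domination side of the abstract, here is a small illustrative checker for efficient domination (a perfect code) in a circulant graph; the instance C_10(1, 2) with D = {0, 5} is a textbook example, not the construction from the paper.

```python
# Sketch: check that D is an efficient dominating set (perfect code) in the
# circulant graph C_n(S): vertices 0..n-1, v adjacent to v +/- s (mod n), s in S.
def closed_neighborhood(n, connections, v):
    nbrs = {(v + s) % n for s in connections} | {(v - s) % n for s in connections}
    return nbrs | {v}

def is_efficient_dominating_set(n, connections, D):
    cover = [0] * n
    for d in D:
        for u in closed_neighborhood(n, connections, d):
            cover[u] += 1
    # every vertex must be dominated exactly once
    return all(c == 1 for c in cover)

# In C_10(1, 2) the closed neighborhoods of {0, 5} partition the vertex set.
print(is_efficient_dominating_set(10, [1, 2], {0, 5}))  # True
```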

Structural results and a solution for the product rate variation problem : A graph-theoretic approach

  • Choe Sang-Woong
    • Proceedings of the Korean Operations and Management Science Society Conference / 2004.10a / pp.250-278 / 2004
  • The product rate variation problem (PRVP) is to sequence units of different types so as to minimize the maximum value of a deviation function between ideal and actual production rates. The PRVP is an important scheduling problem that arises on mixed-model assembly lines, and a surge of research has examined very interesting methods for it. We believe, however, that several issues are still open with respect to this problem. In this study, we consider convex bipartite graphs, perfect matchings, permanents and balanced sequences. The ultimate objective of this study is to show that a graph-theoretic approach provides a more efficient and in-depth procedure for solving the PRVP. To achieve this goal, we propose formal alternative proofs for some of the results stated in previous studies and establish several new results.
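
As a point of reference, the sketch below evaluates the standard min-max PRVP objective for a given sequence; it does not reproduce the paper's graph-theoretic procedure, and the toy demands are invented.

```python
# Sketch of the standard PRVP objective: for demands d_i (total D), a sequence
# is scored by the largest |x_{i,k} - k * d_i / D|, where x_{i,k} counts
# type-i units among the first k positions.
def max_rate_deviation(sequence, demands):
    total = sum(demands.values())
    produced = {i: 0 for i in demands}
    worst = 0.0
    for k, item in enumerate(sequence, start=1):
        produced[item] += 1
        for i, d in demands.items():
            worst = max(worst, abs(produced[i] - k * d / total))
    return worst

demands = {"A": 3, "B": 2, "C": 1}
print(max_rate_deviation(["A", "B", "A", "C", "B", "A"], demands))  # 0.5
```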

PROXIMAL TYPE CONVERGENCE RESULTS USING IMPLICIT RELATION AND APPLICATIONS

  • Om Prakash Chauhan;Basant Chaudhary;Harsha Atre
    • Nonlinear Functional Analysis and Applications / v.29 no.1 / pp.209-224 / 2024
  • The goal of this study is to establish various new and novel optimum proximity point theorems using the notion of implicit relation type ℶ-proximal contraction for non-self mappings. An illustrative example is used to demonstrate the validity of the obtained results. Furthermore, some uniqueness results for proximal contractions are also furnished with partial order and graph. Various well-known results in the present state of the art are enhanced, extended, unified, and generalized by our findings. As an application, we derive some fixed point results fulfilling a modified contraction and a graph contraction, using the depth of the established results.

Efficient Mining of Frequent Subgraph with Connectivity Constraint

  • Moon, Hyun-S.;Lee, Kwang-H.;Lee, Do-Heon
    • Proceedings of the Korean Society for Bioinformatics Conference / 2005.09a / pp.267-271 / 2005
  • The goal of data mining is to extract new and useful knowledge from large-scale datasets. As the amount of available data grows explosively, it has become vitally important to develop faster data mining algorithms for various types of data. Recently, interest in developing data mining algorithms that operate on graphs has increased. In particular, mining frequent patterns from structured data such as graphs has drawn the attention of many research groups. A graph is a highly adaptable representation scheme used in many domains including chemistry, bioinformatics and physics. For example, the chemical structure of a given substance can be modelled by an undirected labelled graph in which each node corresponds to an atom and each edge corresponds to a chemical bond between atoms. The Internet can also be modelled as a directed graph in which each node corresponds to a web site and each edge corresponds to a hypertext link between web sites. Notably, in the bioinformatics area, various kinds of newly discovered data such as gene regulation networks or protein interaction networks can be modelled as graphs. There have been a number of attempts to find useful knowledge from these graph-structured data. One of the most powerful analysis tools for graph-structured data is frequent subgraph analysis: recurring patterns in graph data can provide incomparable insights into that data. However, finding recurring subgraphs is extremely expensive computationally. At the core of the problem are two computationally challenging subproblems: 1) subgraph isomorphism and 2) enumeration of subgraphs. Related to the former are the subgraph isomorphism problem (does graph A contain graph B?) and the graph isomorphism problem (are two graphs A and B the same or not?). Even these simplified versions of the subgraph mining problem are known to be NP-complete or isomorphism-complete, and no polynomial-time algorithm is known so far. The latter is also a difficult problem: without any constraint we would have to generate all 2^n subgraphs, where n is the number of vertices of the input graph. In order to find frequent subgraphs in a large graph database, it is therefore essential to place an appropriate constraint on the subgraphs to find. Most current approaches focus on the frequency of a subgraph: the higher the frequency of a graph is, the more attention should be given to that graph. Recently, several algorithms that use level-by-level approaches to find frequent subgraphs have been developed. Some of the recently emerging applications suggest that other constraints, such as connectivity, could also be useful in mining subgraphs: more strongly connected parts of a graph are more informative. If we restrict the set of subgraphs to mine to more strongly connected parts, the computational complexity can be decreased significantly. In this paper, we present an efficient algorithm to mine frequent subgraphs that are more strongly connected. An experimental study shows that the algorithm scales to larger graphs with more than ten thousand vertices.
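
To make the two expensive ingredients concrete, here is a deliberately naive sketch (assuming networkx) of support counting via subgraph isomorphism plus a connectivity filter on candidate patterns; the paper's algorithm is far more efficient, and the `min_edge_conn` threshold is only an assumed stand-in for its connectivity constraint.

```python
# Naive sketch of the two costly ingredients the abstract describes:
# subgraph isomorphism (via networkx's GraphMatcher) and a connectivity
# constraint used to prune candidate patterns before the isomorphism tests.
import networkx as nx
from networkx.algorithms import isomorphism

def support(pattern, graph_db):
    """Number of database graphs that contain `pattern` as a subgraph."""
    return sum(1 for G in graph_db
               if isomorphism.GraphMatcher(G, pattern).subgraph_is_isomorphic())

def is_frequent_connected(pattern, graph_db, min_support, min_edge_conn=1):
    # discard weakly connected candidates before the expensive tests
    if not nx.is_connected(pattern) or nx.edge_connectivity(pattern) < min_edge_conn:
        return False
    return support(pattern, graph_db) >= min_support

db = [nx.cycle_graph(4), nx.path_graph(4), nx.complete_graph(4)]
triangle = nx.complete_graph(3)
print(is_frequent_connected(triangle, db, min_support=1))   # True (only K4 contains it)
```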

An Approach to the Graph-based Representation and Analysis of Building Circulation using BIM - MRP Graph Structure as an Extension of UCN - (BIM과 그래프를 기반으로 한 건물 동선의 표현과 분석 접근방법 - UCN의 확장형인 MRP 그래프의 제안 -)

  • Kim, Jisoo;Lee, Jin-Kook
    • Korean Journal of Construction Engineering and Management / v.16 no.5 / pp.3-11 / 2015
  • This paper aims to review and discuss a graph-based approach for the representation and analysis of building circulation using BIM models. To propose this approach, the authors survey diverse research and developments related to building circulation issues such as circulation requirements in the Korea Building Act, spatial network analysis, and BIM applications. As the basis of this paper, the UCN (Universal Circulation Network) is the main reference of the research, and the major goal of this paper is to extend the coverage of the UCN with additional features examined in the survey. In this paper we restructure two major perspectives on top of the UCN: 1) finding major factors of graph-based circulation analysis based on the UCN, and 2) restructuring the UCN approach and others to suit the Korean Building Act. As a result of the further studies in this paper, two major additions are demonstrated in the article: 1) most-remote-point-based circulation representation, and 2) virtual space-based circulation analysis.
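
As an illustration of the kind of query a UCN/MRP-style circulation graph supports (not the MRP structure itself), the sketch below runs Dijkstra on an invented space-connectivity graph to find the space most remote from an exit; all space names and distances are hypothetical.

```python
# Toy sketch of a circulation query on a space-connectivity graph extracted
# from a BIM model: edge weights are walking distances (all values invented).
import heapq

circulation = {                       # space -> {adjacent space: distance in m}
    "exit":     {"corridor": 4.0},
    "corridor": {"exit": 4.0, "room101": 6.0, "room102": 8.0, "stair": 10.0},
    "room101":  {"corridor": 6.0},
    "room102":  {"corridor": 8.0},
    "stair":    {"corridor": 10.0},
}

def distances_from(graph, source):
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in graph[u].items():
            if v not in dist or d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

dist = distances_from(circulation, "exit")
most_remote = max(dist, key=dist.get)
print(most_remote, dist[most_remote])   # stair 14.0
```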

Graph Construction Based on Fast Low-Rank Representation in Graph-Based Semi-Supervised Learning (그래프 기반 준지도 학습에서 빠른 낮은 계수 표현 기반 그래프 구축)

  • Oh, Byonghwa;Yang, Jihoon
    • Journal of KIISE / v.45 no.1 / pp.15-21 / 2018
  • Low-Rank Representation (LRR) based methods are widely used in many practical applications, such as face clustering and object detection, because they can guarantee high prediction accuracy when used to construct graphs in graph-based semi-supervised learning. However, solving the LRR problem requires a singular value decomposition of a square matrix whose size is the number of data points at each iteration of the algorithm, so the computation is inefficient. To solve this problem, we propose an improved and faster LRR method based on the recently published Fast LRR (FaLRR), and we suggest ways to introduce and optimize additional constraints on the underlying optimization goal in order to address the fact that FaLRR is fast but performs poorly in classification problems. Our experiments confirm that the proposed method finds a better solution than LRR does. We also propose Fast MLRR (FaMLRR), which shows better results when the minimization goal is added.
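
For context, the sketch below shows the downstream semi-supervised step on an already-built affinity matrix, using the standard closed-form propagation F = (I - alpha * S)^(-1) Y; the toy matrix W stands in for the graph that LRR/FaLRR would construct from the data.

```python
# Sketch of graph-based semi-supervised label propagation: given an affinity
# matrix W (here a toy; LRR/FaLRR would produce it by low-rank coding),
# propagate labels with F = (I - alpha * S)^(-1) Y, S = D^(-1/2) W D^(-1/2).
import numpy as np

W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
Y = np.zeros((5, 2))
Y[0, 0] = 1.0      # node 0 labelled class 0
Y[3, 1] = 1.0      # node 3 labelled class 1

d = W.sum(axis=1)
S = W / np.sqrt(np.outer(d, d))
alpha = 0.9
F = np.linalg.solve(np.eye(5) - alpha * S, Y)
print(F.argmax(axis=1))    # [0 0 0 1 1]
```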

k-Fragility Maximization Problem to Attack Robust Terrorist Networks

  • Thornton, Jabre L.;Kim, Donghyun;Kwon, Sung-Sik;Li, Deying;Tokuta, Alade O.
    • Journal of information and communication convergence engineering / v.12 no.1 / pp.33-38 / 2014
  • This paper investigates the shaping operation problem introduced by Callahan et al., namely the k-fragility maximization problem (k-FMP), whose goal is to find a subset of personnel within a terrorist group such that the regeneration capability of the residual group without those personnel is minimized. To improve the impact of the shaping operation, the degree centrality of the residual graph needs to be maximized. In this paper, we propose a new greedy algorithm for k-FMP. We discover some interesting discrete properties and use them to design a more thorough greedy algorithm for k-FMP. Our simulation results show that the proposed algorithm outperforms Callahan et al.'s algorithm in terms of maximizing degree centrality. While our algorithm incurs a higher running time (by a factor of k), given that the applications of the problem are expected to allow sufficient time for thorough computation and that k is expected to be much smaller than the size of the input graph in reality, our algorithm has greater merit in practice.
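
A generic greedy along the lines described in the abstract might look like the sketch below (assuming networkx), using Freeman's degree centralization of the residual graph as the objective; the authors' exact measure and algorithm may differ.

```python
# Generic greedy sketch for the k-FMP idea: remove k nodes one at a time,
# each time choosing the node whose removal maximizes the degree
# centralization of the residual graph.
import networkx as nx

def degree_centralization(G):
    n = G.number_of_nodes()
    if n < 3:
        return 0.0
    degs = [d for _, d in G.degree()]
    return sum(max(degs) - d for d in degs) / ((n - 1) * (n - 2))

def greedy_k_fmp(G, k):
    residual, removed = G.copy(), []
    for _ in range(k):
        best = max(residual.nodes(),
                   key=lambda v: degree_centralization(
                       nx.restricted_view(residual, [v], [])))
        removed.append(best)
        residual.remove_node(best)
    return removed, degree_centralization(residual)

G = nx.karate_club_graph()
print(greedy_k_fmp(G, k=3))
```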

Optimizing Employment and Learning System Using Big Data and Knowledge Management Based on Deduction Graph

  • Vishkaei, Behzad Maleki;Mahdavi, Iraj;Mahdavi-Amiri, Nezam;Askari, Masoud
    • Journal of Information Technology Applications and Management / v.23 no.3 / pp.13-23 / 2016
  • In recent years, big data has usefully been deployed by organizations with the aim of obtaining better predictions about the future. Moreover, knowledge management systems are being used by organizations to identify and create knowledge. Here, the outputs of big data analysis and a knowledge management system are used to develop a new model with the goal of minimizing the cost of implementing newly recognized processes, including staff training, transfer and employment costs. Strategies are proposed from the big data analysis and new processes are defined accordingly. The company requires various skills to execute the proposed processes. The organization's current experts and their skills are known through a pre-established knowledge management system. After a gap analysis, managers can make decisions about expert arrangement, training programs and employment to bridge the gap and accomplish their goals. Finally, a deduction graph is used to analyze the model.
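
Purely as an invented illustration of the gap-analysis step (the paper formulates the decision with a deduction graph, not this naive rule), the sketch below compares training versus hiring costs for skills that the new processes require but current experts lack; all names, skills and costs are hypothetical.

```python
# Invented illustration of a gap analysis: for each skill a new process needs
# but no current expert has, choose the cheaper of training or hiring.
required_skills = {"process_A": {"python", "statistics"},
                   "process_B": {"statistics", "negotiation"}}
current_experts = {"kim": {"python"}, "lee": {"statistics"}}
training_cost = 2.0      # cost of training one expert in one missing skill
hiring_cost = 5.0        # cost of hiring one new specialist for a skill

def gap_cost(required_skills, current_experts):
    available = set().union(*current_experts.values())
    plan, total = [], 0.0
    for process, skills in required_skills.items():
        for skill in skills - available:
            action = "train" if training_cost <= hiring_cost else "hire"
            total += min(training_cost, hiring_cost)
            plan.append((process, skill, action))
    return plan, total

print(gap_cost(required_skills, current_experts))
# ([('process_B', 'negotiation', 'train')], 2.0)
```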