• Title/Summary/Keyword: Graph Sampling

Low-Complexity Graph Sampling Algorithm Based on Thresholding (임계값 적용에 기반한 저 복잡도 그래프 신호 샘플링 알고리즘)

  • Yoon-Hak Kim
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.18 no.5 / pp.895-900 / 2023
  • We study low-complexity graph sampling, which selects a subset of graph nodes so that the original signal can be reconstructed from the sampled one. To reduce complexity, we propose a thresholding-based sampling algorithm that, at each step, selects a node whose cost falls below a given threshold rather than exhaustively searching all remaining nodes for the one with the minimum cost. Since the threshold must be as close to the minimum cost as possible to avoid degrading the reconstruction performance, we present a mathematical expression for computing the threshold at each step. Experiments on various graphs show that the proposed algorithm runs 1.3 times faster than the previous method while maintaining the reconstruction performance.
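
A minimal sketch of the thresholded greedy loop described in the abstract, assuming generic `cost(node, selected)` and `threshold(step, selected)` callables in place of the paper's cost metric and its derived threshold expression (neither is reproduced here):

```python
import numpy as np

def greedy_sample_with_threshold(nodes, cost, threshold, num_samples):
    """Greedy sampling sketch: at each step, accept the first node whose
    cost falls below the step's threshold instead of scanning every
    remaining node for the exact minimum; fall back to the minimum if
    no node qualifies."""
    selected = []
    remaining = list(nodes)
    for step in range(num_samples):
        tau = threshold(step, selected)          # threshold for this step
        best_node, best_cost = None, np.inf
        for node in remaining:
            c = cost(node, selected)
            if c < tau:                          # early accept: skip the full search
                best_node, best_cost = node, c
                break
            if c < best_cost:                    # track the minimum as a fallback
                best_node, best_cost = node, c
        selected.append(best_node)
        remaining.remove(best_node)
    return selected
```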

Low-complexity Sampling Set Selection for Bandlimited Graph Signals (대역폭 제한 그래프신호를 위한 저 복잡도 샘플링 집합 선택 알고리즘)

  • Kim, Yoon Hak
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.12 / pp.1682-1687 / 2020
  • We study the problem of sampling a subset of graph nodes for bandlimited graph signals such that the signal values on the sampled nodes provide the most information for reconstructing the original graph signal. Instead of directly minimizing the reconstruction error, we minimize an upper bound on the reconstruction error to reduce the complexity of the selection process. We further simplify the upper bound with useful approximations to obtain a lightweight greedy selection process that is iterated to find a suboptimal sampling set. Through extensive experiments on various graphs, we compare the proposed algorithm with different sampling set selection methods and show that it runs fast while preserving competitive reconstruction performance, yielding a practical solution for real-time applications.
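
A sketch of greedy sampling-set selection for bandlimited graph signals, followed by least-squares reconstruction from the samples. The residual-row-norm proxy used here is a common surrogate for an error-bound criterion and is an assumption; it is not the paper's exact bound or its approximations:

```python
import numpy as np

def greedy_bandlimited_sampling(U_K, num_samples):
    """Greedy sampling-set selection sketch for K-bandlimited graph signals.
    U_K: (N x K) matrix of the first K Laplacian eigenvectors.
    At each step, pick the node whose row has the largest component outside
    the span of the rows already selected (a common surrogate criterion)."""
    N, K = U_K.shape
    selected = []
    residual = U_K.copy()                        # rows with selected directions removed
    for _ in range(num_samples):
        norms = np.linalg.norm(residual, axis=1)
        norms[selected] = -np.inf                # never pick a node twice
        v = int(np.argmax(norms))
        selected.append(v)
        d = residual[v] / (np.linalg.norm(residual[v]) + 1e-12)
        residual -= np.outer(residual @ d, d)    # Gram-Schmidt style update
    return selected

def reconstruct(U_K, selected, y):
    """Least-squares reconstruction of a K-bandlimited signal from samples y
    taken on the selected nodes."""
    coeffs, *_ = np.linalg.lstsq(U_K[selected], y, rcond=None)
    return U_K @ coeffs
```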

Efficient Sampling of Graph Signals with Reduced Complexity (저 복잡도를 갖는 효율적인 그래프 신호의 샘플링 알고리즘)

  • Kim, Yoon Hak
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.2 / pp.367-374 / 2022
  • A sampling set selection algorithm is proposed to reconstruct original graph signals from the samples taken on the nodes in the sampling set. Instead of directly minimizing the reconstruction error, we minimize an upper bound on the reconstruction error to reduce the algorithm complexity. The metric is manipulated by QR factorization to produce an upper triangular matrix, and an analytic result is presented that enables greedy selection of the next node at each iteration from the diagonal entries of the upper triangular matrix, leading to an efficient sampling process with reduced complexity. Experiments on various graphs demonstrate a competitive reconstruction performance of the proposed algorithm, which runs about 3.5 times faster than one of the previous selection methods.
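
As a rough analogue of the QR-based criterion mentioned in the abstract, QR factorization with column pivoting applied to the transposed bandlimited basis matrix greedily keeps the diagonal entries of the upper triangular factor large. The sketch below uses SciPy's pivoted QR and should be read as an off-the-shelf stand-in, not the paper's derivation:

```python
import numpy as np
from scipy.linalg import qr

def qr_pivot_sampling(U_K, num_samples):
    """Sampling-set selection sketch via QR with column pivoting.
    U_K is an (N x K) matrix of bandlimited basis vectors; the columns of
    U_K.T correspond to graph nodes. Column pivoting greedily keeps the
    diagonal entries of the upper triangular factor R large, the same
    flavour of criterion the abstract describes."""
    _, R, piv = qr(U_K.T, mode='economic', pivoting=True)
    # piv lists nodes in the greedy pivot order; |diag(R)| ranks the picks.
    return piv[:num_samples].tolist()
```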

Fast Sampling Set Selection Algorithm for Arbitrary Graph Signals (임의의 그래프신호를 위한 고속 샘플링 집합 선택 알고리즘)

  • Kim, Yoon-Hak
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.15 no.6 / pp.1023-1030 / 2020
  • We address the sampling set selection problem for arbitrary graph signals such that the original graph signal can be reconstructed from the signal values on the nodes in the sampling set. We introduce the variation difference as a new indirect metric that measures the error in signal variation caused by the sampling process without resorting to eigen-decomposition, which incurs a huge computational cost. Instead of directly minimizing the reconstruction error, we propose a simple and fast greedy selection algorithm that minimizes the variation difference at each iteration, and we justify this choice by showing that the underlying principle is similar to that of a previously proposed technique. Experiments show that, for various graphs, the proposed method yields a competitive reconstruction performance with substantially reduced complexity compared with previous selection methods.
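
A sketch of the greedy structure only: the variation-difference metric itself (the paper's contribution, not specified in the abstract) is passed in as a callable, and the standard graph-signal variation x^T L x is included for context:

```python
import numpy as np

def graph_variation(L, x):
    """Total variation of a graph signal x with respect to the Laplacian L: x^T L x."""
    return float(x @ (L @ x))

def greedy_variation_sampling(L, num_samples, variation_difference):
    """Greedy sketch of variation-difference-based selection: at each
    iteration, add the node that minimizes the supplied variation-difference
    metric. Only the eigendecomposition-free greedy loop is shown; the
    metric is left as a user-supplied callable."""
    N = L.shape[0]
    selected = []
    for _ in range(num_samples):
        candidates = [v for v in range(N) if v not in selected]
        scores = [variation_difference(L, selected, v) for v in candidates]
        selected.append(candidates[int(np.argmin(scores))])
    return selected
```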

Sampling Set Selection Algorithm for Weighted Graph Signals (가중치를 갖는 그래프신호를 위한 샘플링 집합 선택 알고리즘)

  • Kim, Yoon Hak
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.1 / pp.153-160 / 2022
  • A greedy algorithm is proposed to select a subset of graph nodes for bandlimited graph signals in which each signal value is generated with its own weight. Since the graph signals are weighted, we seek to minimize the weighted reconstruction error, which is formulated using QR factorization, and we derive an analytic result for iteratively finding the node that minimizes the weighted reconstruction error, leading to a simplified iterative selection process. Experiments show that the proposed method achieves a significant performance gain for weighted graph signals on various graphs compared with previously proposed selection techniques.
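
A small sketch of what a weighted reconstruction error looks like for a bandlimited signal, using weighted least squares over the sampled nodes. The per-node weighting scheme here is an assumption, and the paper's QR-based greedy node selection is not reproduced:

```python
import numpy as np

def weighted_reconstruction(U_K, selected, y, w):
    """Weighted least-squares reconstruction sketch for a K-bandlimited
    signal: minimize sum_i w_i * (y_i - (U_K x)_i)^2 over the sampled nodes.
    U_K: (N x K) bandlimited basis, selected: sampled node indices,
    y: samples on those nodes, w: per-node weights (length N)."""
    A = U_K[selected]                          # basis rows on the sampled nodes
    sqrt_w = np.sqrt(np.asarray(w)[selected])  # weights restricted to the samples
    coeffs, *_ = np.linalg.lstsq(A * sqrt_w[:, None], y * sqrt_w, rcond=None)
    return U_K @ coeffs                        # reconstructed full graph signal
```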

2D Pose Nodes Sampling Heuristic for Fast Loop Closing (빠른 루프 클로징을 위한 2D 포즈 노드 샘플링 휴리스틱)

  • Lee, Jae-Jun;Ryu, Jee-Hwan
    • Journal of Institute of Control, Robotics and Systems / v.22 no.12 / pp.1021-1026 / 2016
  • The graph-based SLAM (Simultaneous Localization and Mapping) approach has been gaining much attention in SLAM research recently thanks to its ability to provide better maps and full trajectory estimates than the filtering-based SLAM approach. Even though graph-based SLAM requires batch processing and is therefore computationally heavy, recent advances in optimization and computing power enable it to run fast enough for real-time use. However, data association still requires a large amount of computation when building a pose graph. For example, finding loop closures requires considering the whole history of the robot trajectory and sensor data within the confidence range. As the pose graph grows, the number of candidates to be searched also grows, making the loop-closure search a bottleneck in solving the SLAM problem. Our approach to alleviating this bottleneck is to sample a limited number of pose nodes in which loop closures are searched. We propose a heuristic for sampling the pose nodes that are most advantageous for closing loops by ranking pose nodes in order of their usefulness for loop closing.
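
A hypothetical sketch of restricting the loop-closure search to a few ranked pose nodes; the distance-and-age ranking below is an illustrative assumption, not the heuristic proposed in the paper:

```python
import numpy as np

def rank_loop_closure_candidates(poses, current_idx, radius, max_candidates,
                                 min_index_gap=50):
    """Hypothetical ranking sketch: search loop closures only on a few pose
    nodes instead of the whole trajectory. poses is an (N x 3) array of
    (x, y, theta). Candidates must lie within `radius` of the current pose
    and be at least `min_index_gap` steps older; they are then ranked by
    distance (nearest first)."""
    current = poses[current_idx, :2]
    scored = []
    for i in range(current_idx - min_index_gap):
        d = np.linalg.norm(poses[i, :2] - current)
        if d <= radius:
            scored.append((d, i))
    scored.sort()
    return [i for _, i in scored[:max_candidates]]
```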

Spatial Resolution Improvement Using Over Sampling and High Agile Maneuver in Remote Sensing Satellite

  • Kim, Hee-Seob;Kim, Gyu-Sun;Chung, Dae-Won;Kim, Eung-Hyun
    • International Journal of Aeronautical and Space Sciences / v.8 no.2 / pp.37-43 / 2007
  • Coordination of multiple UAVs is an essential technology for various applications in robotics, automation, and artificial intelligence. In general, it includes 1) waypoint assignment and 2) trajectory generation. In this paper, we propose a new method for this problem. First, we modify the concept of the standard visibility graph to greatly improve the optimality of the generated trajectories and reduce the computational complexity. Second, we propose an efficient stochastic approach using simulated annealing that assigns waypoints to each UAV from the constructed visibility graph. Third, we describe a method to detect collisions between two UAVs. Finally, we suggest an efficient method of controlling the velocity of UAVs using the A* algorithm in order to avoid inter-UAV collisions. We present simulation results from various environments that verify the effectiveness of our approach.
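
A sketch of the simulated-annealing waypoint assignment step described in the abstract, assuming a `tour_cost` callable (for example, shortest-path lengths on a visibility graph) and a makespan objective; both are assumptions, since the abstract does not specify the cost model:

```python
import math
import random

def anneal_waypoint_assignment(waypoints, num_uavs, tour_cost,
                               iters=5000, t0=1.0, cooling=0.999):
    """Simulated-annealing sketch for assigning waypoints to UAVs.
    tour_cost(uav_index, assigned_waypoints) returns one UAV's travel cost
    over its assigned waypoints; the objective is the makespan, i.e. the
    largest per-UAV cost."""
    assignment = [random.randrange(num_uavs) for _ in waypoints]

    def makespan(asgn):
        groups = [[w for w, u in zip(waypoints, asgn) if u == k] for k in range(num_uavs)]
        return max(tour_cost(k, g) for k, g in enumerate(groups))

    cost, temp = makespan(assignment), t0
    for _ in range(iters):
        cand = assignment[:]
        cand[random.randrange(len(waypoints))] = random.randrange(num_uavs)  # move one waypoint
        c = makespan(cand)
        if c < cost or random.random() < math.exp(-(c - cost) / max(temp, 1e-9)):
            assignment, cost = cand, c
        temp *= cooling                        # geometric cooling schedule
    return assignment, cost
```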

A Case Study on Understanding of the Concept of Sampling and Data Analysis by Elementary 6th Graders (6학년 학생들의 표본개념 이해 및 자료 분석에 관한 연구)

  • Lee, Mi-Suk;Park, Young-Hee
    • School Mathematics / v.8 no.4 / pp.441-463 / 2006
  • The purpose of this research is to investigate how elementary school students design and carry out sampling to gather information in a situation that requires collecting data about their households and everyday lives, and to examine how they use tools such as tables and graphs to analyze the collected data efficiently and what results they obtain. The researcher set up a data-collection situation in advance and, through small-group and whole-class discussion, guided the students to devise survey and sampling methods on their own through trial and error. The results show that a lesson in which students carry out sampling, organize statistical data, and analyze the results can be conducted in a sixth-grade elementary school class.

Large Scale Protein Side-chain Packing Based on Maximum Edge-weight Clique Finding Algorithm

  • K.C., Dukka Bahadur;Brown, J.B.;Tomita, Etsuji;Suzuki, Jun'ichi;Akutsu, Tatsuya
    • Proceedings of the Korean Society for Bioinformatics Conference / 2005.09a / pp.228-233 / 2005
  • The protein side-chain packing problem (SCPP) is known to be NP-complete. Various graph-theoretic side-chain packing algorithms have been proposed. However, as the size of the protein becomes larger, the sampling space increases exponentially, so one approach to coping with the time complexity is to decompose the graph of the protein into smaller subgraphs. Some existing approaches decompose the graph into biconnected components at articulation points (resulting in at most 21-residue subgraphs) or solve the SCPP by tree decomposition (4- or 5-residue subgraphs). We had also presented a deterministic approach, called SPWCQ, based on the notion of a maximum edge-weight clique, in which we reduce the SCPP to a graph and then obtain the maximum edge-weight clique of that graph. This algorithm performs well for proteins of fewer than 500 residues but fails to produce a feasible solution for larger proteins because of the size of the search space. In this paper, we present a new heuristic approach for the side-chain packing problem based on the maximum edge-weight clique finding algorithm that enables us to compute the side-chain packing of much larger proteins. Our new approach can compute the side-chain packing of a protein of 874 residues with an RMSD of 1.423 Å.
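
For context, a simple greedy heuristic for the maximum edge-weight clique problem that the paper reduces side-chain packing to; this is illustrative only and is not the clique-finding algorithm used in the paper:

```python
import numpy as np

def greedy_max_edge_weight_clique(W, adjacency):
    """Greedy heuristic for the maximum edge-weight clique problem.
    W is an (N x N) edge-weight matrix and adjacency a boolean (N x N)
    matrix; the clique weight is the sum of weights of edges inside the
    clique. Each vertex is tried as a seed, and vertices are added greedily
    while they stay adjacent to the whole clique and improve the weight."""
    N = W.shape[0]
    best_clique, best_weight = [], -np.inf
    for start in range(N):
        clique = [start]
        while True:
            # candidates adjacent to every vertex already in the clique
            cands = [v for v in range(N) if v not in clique
                     and all(adjacency[v, u] for u in clique)]
            if not cands:
                break
            gains = [sum(W[v, u] for u in clique) for v in cands]
            i = int(np.argmax(gains))
            if gains[i] <= 0:
                break
            clique.append(cands[i])
        weight = sum(W[u, v] for i, u in enumerate(clique) for v in clique[i + 1:])
        if weight > best_weight:
            best_clique, best_weight = clique, weight
    return best_clique, best_weight
```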

Synthesizing multi-loop control systems with period adjustment and Kernel compilation (주기 조정과 커널 자동 생성을 통한 다중 루프 시스템의 구현)

  • Hong, Seong-Soo;Choi, Chong-Ho;Park, Hong-Seong
    • Journal of Institute of Control, Robotics and Systems / v.3 no.2 / pp.187-196 / 1997
  • This paper presents a semi-automatic methodology for synthesizing executable digital controller software in a multi-loop control system. A digital controller is described by a task graph and end-to-end timing requirements: the task graph denotes the software structure of the controller, and the end-to-end requirements establish timing relationships between external inputs and outputs. Our approach translates the end-to-end requirements into a set of task attributes, such as task periods and deadlines, using nonlinear optimization techniques. Such attributes are essential for control engineers to implement control programs and schedule them in a control system with limited resources. In current engineering practice, human programmers derive these attributes manually and in an ad hoc manner: they often resort to radical over-sampling to safely guarantee the given timing requirements, rendering the resulting system poorly utilized. After the task-specific attributes are derived, the tasks are scheduled on a single CPU and the compiled kernel is synthesized. We illustrate this process with a non-trivial servo motor control system.
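
A sketch of deriving task periods from end-to-end requirements with a general-purpose nonlinear optimizer, assuming a simplified latency model (chain latency approximated as the sum of the chain's task periods) and a utilization objective; the paper's actual formulation is more detailed:

```python
import numpy as np
from scipy.optimize import minimize

def assign_periods(exec_times, chains, e2e_deadlines):
    """Period-assignment sketch. exec_times[i] is task i's worst-case
    execution time; each chain is a list of task indices from an external
    input to an output, with an end-to-end deadline. Chain latency is
    approximated as the sum of its tasks' periods. We minimize CPU
    utilization sum(c_i / T_i) subject to the chain deadlines and U <= 1
    (every task is assumed to appear in at least one chain)."""
    c = np.asarray(exec_times, dtype=float)

    util = lambda T: float(np.sum(c / T))
    cons = [{'type': 'ineq', 'fun': lambda T: 1.0 - np.sum(c / T)}]   # schedulable: U <= 1
    for chain, d in zip(chains, e2e_deadlines):
        cons.append({'type': 'ineq',
                     'fun': lambda T, chain=chain, d=d: d - np.sum(T[list(chain)])})
    bounds = [(ci, None) for ci in c]          # a period cannot be shorter than the WCET
    T0 = c * 2.0 * len(c)                      # rough starting point
    res = minimize(util, T0, bounds=bounds, constraints=cons, method='SLSQP')
    return res.x
```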
