• Title/Summary/Keyword: Graph Search


Classification and Improvement Directions for Mobile Crane Path Planning Algorithms: A Comprehensive Review

  • Sangmin Park;Maxwell Fordjour Antwi-Afari;SangHyeok Han;Sungkon Moon
    • International conference on construction engineering and project management
    • /
    • 2024.07a
    • /
    • pp.18-24
    • /
    • 2024
  • Efficient path planning for mobile crane lifting operations in the construction industry is essential for ensuring smooth machinery operation, worker safety, and the timely completion of projects. The inherently complex construction sites, characterized by dynamic environments, constantly changing conditions, and numerous static and mobile obstacles, underscore the necessity for advanced algorithms capable of generating optimal paths under various constraints. Mobile crane path planning algorithms have been researched extensively and possess the potential to resolve the challenges presented by construction sites. However, the application of these algorithms in actual construction sites is rare, suggesting a need for ongoing research and development in this field. This paper begins by systematically identifying and analyzing relevant research papers using predetermined keywords, providing a comprehensive review of the current state of mobile crane path planning algorithms. Specifically, it categorizes mobile crane path planning algorithms into four main groups: Graph search-based algorithms, Sampling-based algorithms, Nature-inspired algorithms, and Newly developed algorithms. It performs a critical analysis of each category, offering guidance to researchers exploring path planning solutions suitable for the dynamic and complex environments of construction sites. Through this review, we affirm the need for continued interest and attempts at new methodologies in mobile crane path planning, suggesting improvements for further research and practical application of these algorithms.
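As a concrete illustration of the first category reviewed above (graph search-based algorithms), the following is a minimal A* sketch on a 2-D occupancy grid. The grid, unit step costs, and Manhattan heuristic are hypothetical, and crane-specific constraints such as boom geometry, load swing, and moving obstacles are deliberately omitted.

```python
import heapq, itertools

def astar(grid, start, goal):
    """A* search on a 2-D occupancy grid (0 = free, 1 = obstacle).

    Illustrative only: real crane planners must also model boom geometry,
    load swing, and dynamic obstacles, which are not represented here.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    tie = itertools.count()                  # tie-breaker so the heap never compares nodes
    open_heap = [(h(start), next(tie), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        _, _, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:                     # reconstruct the path back to the start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), next(tie), ng, (nr, nc), node))
    return None                              # no collision-free path exists

# toy site layout: 0 = free cell, 1 = static obstacle
site = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(site, (0, 0), (3, 3)))
```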

Dynamic Priority Search Algorithm Of Multi-Agent (멀티에이전트의 동적우선순위 탐색 알고리즘)

  • Jin-Soo Kim
    • The Journal of Engineering Research
    • /
    • v.6 no.2
    • /
    • pp.11-22
    • /
    • 2004
  • A distributed constraint satisfaction problem (distributed CSP) is a constraint satisfaction problem (CSP) in which variables and constraints are distributed among multiple automated agents. A CSP is the problem of finding a consistent assignment of values to variables. Even though the definition of a CSP is very simple, a surprisingly wide variety of AI problems can be formalized as CSPs. Similarly, various application problems in DAI (Distributed AI) that are concerned with finding a consistent combination of agent actions can be formalized as distributed CSPs. In recent years, many new backtracking algorithms for solving distributed CSPs have been proposed. However, most of them share a common drawback: they assume that the priority order of agents is static. In this paper, we establish a basic algorithm for solving distributed CSPs, called the dynamic priority search algorithm, that is more efficient than common backtracking algorithms in which the priority order is static. In this algorithm, agents act asynchronously and concurrently based on their local knowledge without any global control, and form a flexible organization in which the hierarchical order is changed dynamically, while the completeness of the algorithm is guaranteed. We show that the dynamic priority search algorithm can solve various problems, such as the distributed 200-queens problem and the distributed graph-coloring problem, which common backtracking algorithms fail to solve within a reasonable amount of time. The experimental results on example problems show that this algorithm is far more efficient than a backtracking algorithm in which the priority order is static. The priority order represents a hierarchy of agent authority, i.e., the priority of decision-making. Therefore, these results imply that a flexible agent organization, in which the hierarchical order is changed dynamically, actually performs better than an organization in which the hierarchical order is static and rigid. Furthermore, we describe how an agent can hold multiple variables in the search scheme.
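The paper's algorithm is distributed and asynchronous, with agents reordering their priorities at run time. As a rough, sequential analogue of why a dynamic ordering helps, the sketch below compares dynamic and static variable ordering on a small graph-coloring CSP; the problem instance and the most-constrained-variable heuristic are illustrative and are not the authors' protocol.

```python
def solve(neighbors, colors, assignment=None, dynamic=True):
    """Backtracking graph coloring; `neighbors` maps node -> set of adjacent nodes."""
    assignment = assignment or {}
    unassigned = [v for v in neighbors if v not in assignment]
    if not unassigned:
        return assignment
    if dynamic:
        # dynamic priority: choose the most constrained variable next
        var = min(unassigned,
                  key=lambda v: len(legal_values(v, neighbors, colors, assignment)))
    else:
        var = unassigned[0]                   # static, fixed order
    for c in legal_values(var, neighbors, colors, assignment):
        assignment[var] = c
        result = solve(neighbors, colors, assignment, dynamic)
        if result is not None:
            return result
        del assignment[var]                   # backtrack
    return None

def legal_values(var, neighbors, colors, assignment):
    used = {assignment[n] for n in neighbors[var] if n in assignment}
    return [c for c in colors if c not in used]

# 4-node cycle, 2 colors (bipartite, hence 2-colorable)
graph = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(solve(graph, ["red", "blue"]))
```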


A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available in the future, effectively and efficiently ranking search results will become more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a certain page highly important if it is referred to by many other pages. The degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it. A page with a high hub score links to many authoritative pages. As mentioned above, the link-structure-based ranking method has been playing an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. On the other hand, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links similar to the Web graph. As a result, the link-structure-based ranking method seems highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered as one huge class, i.e., a collection of Web pages, which has only a recursive property, i.e., a 'refers to' property corresponding to the hyperlinks. However, the Semantic Web encompasses various kinds of classes and properties, and consequently, ranking methods used in the WWW should be modified to reflect the complexity of the information space in the Semantic Web. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's algorithm, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of a resource on another resource depending on the characteristic of the property linking the two resources. A node with a high objectivity score becomes the object of many RDF triples, and a node with a high subjectivity score becomes the subject of many RDF triples. They developed several kinds of Semantic Web systems in order to validate their technique and showed some experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain. In other words, the ratio of links to nodes should be high, or overall resources should be described in sufficient detail, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages that are less important but densely connected receive higher scores than ones that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm that can solve the problems left unresolved by the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken in previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC (Tightly Knit Community) effect and can further shed light on the other limitations posed by the previous research. In addition, we propose two ways to incorporate datatype properties, which have not previously been employed even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research, and this analysis enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
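To make the authority/hub scores referenced above concrete, the following is a plain HITS (Kleinberg) power iteration on a small hypothetical directed graph. The class-oriented property weighting proposed in the paper is not modeled here.

```python
import numpy as np

# Plain HITS iteration on a toy directed graph: an "authority" is pointed to by
# good hubs, and a "hub" points to good authorities. The edge list is made up.
edges = [(0, 1), (0, 2), (1, 2), (3, 2), (2, 0)]
n = 4
A = np.zeros((n, n))
for src, dst in edges:
    A[src, dst] = 1.0

hub = np.ones(n)
auth = np.ones(n)
for _ in range(50):                    # power iteration until scores stabilize
    auth = A.T @ hub                   # authority score: sum of hub scores pointing in
    hub = A @ auth                     # hub score: sum of authority scores pointed to
    auth /= np.linalg.norm(auth)       # normalize to keep the iteration bounded
    hub /= np.linalg.norm(hub)

print("authority:", np.round(auth, 3))
print("hub:      ", np.round(hub, 3))
```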

A Route Repair Scheme for Reducing DIO Poisoning Overhead in RPL-based IoT Networks (RPL 기반 IoT 네트워크에서 DIO Poisoning 오버헤드를 감소시키는 경로 복구 방법)

  • Lee, Sung-Jun;Chung, Sang-Hwa
    • Journal of KIISE
    • /
    • v.43 no.11
    • /
    • pp.1233-1244
    • /
    • 2016
  • For IoT network environments based on LLNs (Low-power and Lossy Networks), the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) has been proposed by the IETF (Internet Engineering Task Force). The goal of RPL is to create a directed acyclic graph without loops. As recommended by the IETF standard, RPL route recovery in the event of a node failure should avoid loops by means of loop detection and DIO Poisoning. In this process, the route recovery time and the number of control messages may increase in the sub-tree because of repeated route searches. In this paper, we propose an RPL route recovery method that solves this routing overhead problem in the sub-tree when a link is lost in RPL-based IoT wireless networks. The proposed method improves the local repair process by utilizing a route through a parent that was previously known but not selected as the preferred parent. This reduces control packet traffic, especially in the disconnected node's sub-tree, and also results in quick recovery. Our simulation results show that the proposed RPL local repair reduces both the recovery time and the control packet traffic of RPL, thereby improving its recovery performance.
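The core idea reported in the abstract, falling back to a previously advertised but non-preferred parent instead of poisoning the sub-tree, can be sketched schematically as below. The parent table, rank values, and function names are hypothetical and do not reflect an actual RPL implementation or its message handling.

```python
# Schematic sketch (not an RPL implementation): when the preferred parent is
# lost, switch to a previously advertised candidate parent instead of sending
# an infinite-rank (poisoning) DIO that tears down the whole sub-tree.

INFINITE_RANK = 0xFFFF

def repair_route(parent_table, lost_parent):
    """parent_table: dict parent_id -> advertised rank (lower is better)."""
    candidates = {p: r for p, r in parent_table.items()
                  if p != lost_parent and r != INFINITE_RANK}
    if candidates:
        # local repair: keep the sub-tree's routes by adopting the best
        # remaining candidate parent; no poisoning DIO is propagated
        new_parent = min(candidates, key=candidates.get)
        return ("switch_parent", new_parent)
    # no alternative parent known: fall back to standard DIO poisoning
    return ("poison_subtree", None)

table = {"A": 256, "B": 512, "C": 768}        # hypothetical neighbor ranks
print(repair_route(table, lost_parent="A"))   # -> ('switch_parent', 'B')
```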

Development of Workbench for Analysis and Visualization of Whole Genome Sequence (전유전체(Whole genome) 서열 분석과 가시화를 위한 워크벤치 개발)

  • Choe, Jeong-Hyeon;Jin, Hui-Jeong;Kim, Cheol-Min;Jang, Cheol-Hun;Jo, Hwan-Gyu
    • The KIPS Transactions:PartA
    • /
    • v.9A no.3
    • /
    • pp.387-398
    • /
    • 2002
  • As the whole genome sequences of many organisms have been revealed by small-scale genome projects, intensive research on individual genes and their functions has been performed. However, in-memory algorithms are inefficient for the analysis of whole genome sequences, since an individual whole genome ranges from several million to hundreds of billions of base pairs. In order to manipulate such huge sequence data effectively, it is necessary to use an indexed data structure for external memory. In this paper, we introduce a workbench system for the analysis and visualization of whole genome sequences using the string B-tree, which is suitable for the analysis of huge data. This system consists of two parts: an analysis query part and a visualization part. The query system supports various transactions such as sequence search, k-occurrence, and k-mer analysis. The visualization system helps biological scientists easily understand the whole structure and specificity of a genome through many kinds of visualization, such as the whole genome sequence, annotation, CGR (Chaos Game Representation), k-mer, and RWP (Random Walk Plot). Using our workbench, one can find relations among organisms, predict genes in a genome, and study the function of junk DNA.
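As a small illustration of one of the query types mentioned above (k-mer analysis), the following sketch counts k-mers in memory with a Counter. The workbench itself relies on an external-memory string B-tree for genome-scale data, which this toy example does not attempt; the sequence is invented.

```python
from collections import Counter

def kmer_counts(sequence, k):
    """Count all k-mers in a DNA sequence (simple in-memory sketch).

    Adequate only for small inputs; genome-scale k-mer analysis needs an
    external-memory index such as the string B-tree used by the workbench.
    """
    sequence = sequence.upper()
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

seq = "ATGCGATGCGTT"                  # toy sequence, not real genome data
counts = kmer_counts(seq, k=3)
print(counts.most_common(3))          # most frequent 3-mers and their counts
```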

A Study on the Visual Representation of TREC Text Documents in the Construction of Digital Library (디지털도서관 구축과정에서 TREC 텍스트 문서의 시각적 표현에 관한 연구)

  • Jeong, Ki-Tai;Park, Il-Jong
    • Journal of the Korean Society for information Management
    • /
    • v.21 no.3
    • /
    • pp.1-14
    • /
    • 2004
  • Visualization of documents will help users when they search for similar documents, and all research in information retrieval addresses the problem of a user with an information need facing a data source containing an acceptable solution to that need. In various contexts, adequate solutions to this problem have included alphabetized cubbyholes housing papyrus rolls, microfilm registers, card catalogs, and inverted files coded onto discs. Many information retrieval systems rely on the use of a document surrogate. Though they might be surprised to discover it, nearly every information seeker uses an array of document surrogates. Summaries, tables of contents, abstracts, reviews, and MARC records are all document surrogates. That is, they stand in for a document, allowing a user to make some decision regarding it: whether to retrieve a book from the stacks, whether to read an entire article, and so on. In this paper, another type of document surrogate is investigated using a grouping method of term lists. Using the Multidimensional Scaling (MDS) method, those surrogates are visualized on a two-dimensional graph. The distances between dots on the two-dimensional graph represent the similarity of the documents: the closer the distance, the more similar the documents.
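A minimal sketch of this visualization idea, projecting document surrogates onto a two-dimensional plane with MDS so that nearby points indicate similar documents, might look like the following. The sample documents and vocabulary are invented, and scikit-learn is used purely for convenience, not because the study did.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances

# invented toy corpus standing in for TREC document surrogates
docs = [
    "graph search shortest path algorithm",
    "path planning graph search heuristic",
    "genome sequence k-mer analysis",
    "whole genome sequence visualization",
]
X = TfidfVectorizer().fit_transform(docs).toarray()     # term vectors per document
D = pairwise_distances(X, metric="cosine")               # pairwise dissimilarities
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)            # project to 2-D
for doc, (x, y) in zip(docs, coords):
    print(f"({x:+.2f}, {y:+.2f})  {doc}")
```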

K-th Path Search Algorithms with the Link Label Correcting (링크표지갱신 다수경로탐색 알고리즘)

  • Lee, Mee-Young;Baik, Nam-Cheol;Choi, Dae-Soon;Shin, Seong-Il
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.2 s.73
    • /
    • pp.131-143
    • /
    • 2004
  • Given a path represented by a sequence of link numbers in a graph, a vine is differentiated from a loop in the sense that any link number can be visited no more than once in a vine, while it may be visited more than once in a loop. The vine properly captures complicated travel patterns such as the U-turns and P-turns observed near intersections in urban transportation networks. Applying the link label method (LLM) to shortest path algorithms (SPA) makes it possible to take these vine travel features into account. This study aims at expanding the LLM to a K-th path search algorithm (KPSA), which adopts the node-based label correcting method to find a group of K paths. Paths including vine-type travel are conceptualized as drivers' reasonable route choice behaviors (RRCB), based on non-repetition of the same link within a path, and the link-label-based MPSA is proposed on the basis of the RRCB. A small-scale network test shows that the algorithm works correctly, producing multiple paths satisfying the RRCB. A large-scale network study detects the solution degeneration (SD) problem when the number of paths (K) is not sufficient, and a (K-1)-dimension algorithm is developed to prevent the SD from the first path of each link, so that the method may be applied as a reasonable alternative route information tool, an important requirement of which is whether it can generate a small number of distinct alternative paths.
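A schematic sketch of a link-label search, where the label is attached to the link just traversed so that turn prohibitions (e.g. no immediate U-turn) and non-repetition of links can be expressed, is given below. It finds only a single best path and does not reproduce the paper's K-th path or (K-1)-dimension extensions; the network and turn-cost function are hypothetical.

```python
import heapq

def link_label_shortest_path(links, turn_cost, origin, destination):
    """links: list of (from_node, to_node, cost). turn_cost(prev_link, next_link)
    returns an extra turning cost, or None if the turn is prohibited."""
    out_links = {}
    for idx, (u, v, c) in enumerate(links):
        out_links.setdefault(u, []).append(idx)
    best = {}                                          # one label per *link*, not per node
    heap = [(c, idx, (idx,)) for idx, (u, v, c) in enumerate(links) if u == origin]
    heapq.heapify(heap)
    while heap:
        cost, link, path = heapq.heappop(heap)
        if best.get(link, float("inf")) <= cost:
            continue
        best[link] = cost
        u, v, _ = links[link]
        if v == destination:
            return cost, [links[i][:2] for i in path]
        for nxt in out_links.get(v, []):
            extra = turn_cost(link, nxt)
            if extra is None or nxt in path:           # prohibited turn / repeated link
                continue
            heapq.heappush(heap, (cost + links[nxt][2] + extra, nxt, path + (nxt,)))
    return None

# hypothetical network; the turn-cost function forbids immediate U-turns
links = [(0, 1, 1.0), (1, 0, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 2, 3.0)]
no_uturn = lambda a, b: None if links[b][1] == links[a][0] else 0.0
print(link_label_shortest_path(links, no_uturn, origin=0, destination=3))
```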

Effects of Inlet Water Temperature and Heat Load on Fan Power of Counter-Flow Wet Cooling Tower (입구 물온도와 열부하가 냉각탑의 팬동력에 미치는 영향 분석)

  • Nguyen, Minh Phu;Lee, Geun Sik
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.37 no.3
    • /
    • pp.267-273
    • /
    • 2013
  • In order to provide effective operating conditions for the fan in a wet cooling tower with film fill, a new program to search for the minimum fan power was developed using a model of the optimal total annual cost of the tower based on Merkel's model. In addition, a type of design map for a cooling tower was also developed. The inlet water temperature and heat load were considered as key parameters. The present program was first validated using several typical examples. The results showed that for a given heat load, a three-dimensional graph of the fan power (z-axis), mass flux of air (x-axis, minimum fan power), and inlet water temperature (y-axis, maximum of minimum fan power) showed a saddle configuration. The minimum fan power increased as the heat load increased. The conventionally known fact that the most effective cooling tower operation coincides with a high inlet water temperature and low air flow rate can be replaced by the statement that there exists an optimum mass flux of air corresponding to a minimum fan power for a given inlet water temperature, regardless of the heat load.
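The search structure described above, scanning the air mass flux for the value that minimizes fan power at a given heat load and inlet water temperature, can be sketched as below. The placeholder objective is not Merkel's model and the numbers are arbitrary; it only stands in for the tower model so that the scan-and-minimize structure is visible.

```python
import numpy as np

def tower_fan_power(air_mass_flux, heat_load, t_water_in):
    """Placeholder objective (NOT Merkel's model): penalizes both too little air
    (tower cannot reject the load) and too much air (fan work grows with flow)."""
    required = heat_load / (1.0 + 0.05 * (t_water_in - 30.0))   # made-up relation
    return 0.02 * air_mass_flux**3 + 5.0 * (required / air_mass_flux) ** 2

def minimum_fan_power(heat_load, t_water_in, flux_range=(0.5, 6.0), n=200):
    """Scan the air mass flux and return the flux giving the minimum fan power."""
    fluxes = np.linspace(*flux_range, n)
    powers = [tower_fan_power(g, heat_load, t_water_in) for g in fluxes]
    i = int(np.argmin(powers))
    return fluxes[i], powers[i]

for t_in in (35.0, 40.0, 45.0):       # hypothetical inlet water temperatures, degC
    g_opt, p_min = minimum_fan_power(heat_load=100.0, t_water_in=t_in)
    print(f"T_in={t_in:.0f} C  optimum G={g_opt:.2f}  min fan power={p_min:.1f}")
```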

Path-finding Algorithm using Heuristic-based Genetic Algorithm (휴리스틱 기반의 유전 알고리즘을 활용한 경로 탐색 알고리즘)

  • Ko, Jung-Woon;Lee, Dong-Yeop
    • Journal of Korea Game Society
    • /
    • v.17 no.5
    • /
    • pp.123-132
    • /
    • 2017
  • A path-finding algorithm is an algorithm for determining the sequence of moves from the current position to the destination in a game's virtual world. Conventional path-finding algorithms such as A-Star and Dijkstra perform cost-based graph search. A-Star and Dijkstra require movable node and edge data for the world map, so it is difficult to apply them to online games with large amounts of map data. In this paper, we present Heuristic-based Genetic Algorithm Path-finding (HGAP), which uses a Genetic Algorithm (GA). A genetic algorithm is applicable to path finding in games with variable environments and large amounts of map data: it seeks solutions through mating, crossover, mutation, and other evolutionary operations without requiring the map data. The proposed algorithm is based on a binary-coded genetic algorithm and searches for a path by performing a heuristic operation that estimates the path to the destination so as to arrive there more quickly.
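A compact sketch of a binary-coded GA path finder in the spirit described above follows. The 2-bit move encoding, the distance-to-goal fitness, and the GA parameters are illustrative guesses, not the HGAP design from the paper, and the grid has no obstacles.

```python
import random

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]       # 2 bits decode to one of 4 moves
GRID = 10                                         # 10 x 10 free grid (no obstacles)
START, GOAL = (0, 0), (9, 9)

def decode_and_score(bits):
    """Decode a bit string into a move sequence; fitness rewards ending near the goal."""
    x, y = START
    for i in range(0, len(bits), 2):
        dx, dy = MOVES[bits[i] * 2 + bits[i + 1]]
        x = min(max(x + dx, 0), GRID - 1)         # clamp to the grid
        y = min(max(y + dy, 0), GRID - 1)
    return -(abs(x - GOAL[0]) + abs(y - GOAL[1])) # heuristic: closer end = fitter

def evolve(pop_size=60, length=2 * 24, generations=120,
           crossover_rate=0.9, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=decode_and_score, reverse=True)
        if decode_and_score(pop[0]) == 0:         # a chromosome reaches the destination
            break
        parents = pop[:pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            if random.random() < crossover_rate:  # one-point crossover
                cut = random.randrange(1, length)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=decode_and_score)
    return best, decode_and_score(best)

random.seed(1)
_, fitness = evolve()
print("best fitness (0 means the goal was reached):", fitness)
```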

Line Drawings from 2D Images (이차원 영상의 라인 드로잉)

  • Son, Min-Jung;Lee, Seung-Yong
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.12
    • /
    • pp.665-682
    • /
    • 2007
  • Line drawing is a widely used style in non-photorealistic rendering because it generates expressive descriptions of object shapes with a set of strokes. Although various techniques for line drawing of 3D objects have been developed, line drawing of 2D images has attracted little attention despite interesting applications, such as image stylization. This paper presents a robust and effective technique for generating line drawings from 2D images. The algorithm consists of three parts: filtering, linking, and stylization. In the filtering process, it constructs a likelihood function that estimates possible positions of lines in an image. In the linking process, line strokes are extracted from the likelihood function using clustering and graph search algorithms. In the stylization process, it generates various kinds of line drawings by applying curve fitting and texture mapping to the extracted line strokes. Experimental results demonstrate that the proposed technique can produce various kinds of line drawings from 2D images with detail control.
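Only the first (filtering) stage lends itself to a short sketch: a per-pixel line likelihood, approximated here by smoothed gradient magnitude. The paper's actual likelihood function, as well as the linking and stylization stages, is not reproduced, and the test image is synthetic.

```python
import numpy as np
from scipy import ndimage

def line_likelihood(gray, sigma=1.5):
    """gray: 2-D float array in [0, 1]. Returns a crude likelihood map in [0, 1].

    Stand-in for the filtering stage: smooth, then use gradient magnitude as
    a proxy for how likely a line passes through each pixel.
    """
    smoothed = ndimage.gaussian_filter(gray, sigma)   # suppress noise first
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    magnitude = np.hypot(gx, gy)                      # strong edges = likely line positions
    return magnitude / magnitude.max() if magnitude.max() > 0 else magnitude

# toy image: a dark square on a bright background
img = np.ones((64, 64))
img[20:44, 20:44] = 0.2
likelihood = line_likelihood(img)
print("peak likelihood at:", np.unravel_index(np.argmax(likelihood), likelihood.shape))
```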