• Title/Summary/Keyword: Time Weighted Algorithm

Search results: 311

INVERSE CONSTRAINED MINIMUM SPANNING TREE PROBLEM UNDER HAMMING DISTANCE

  • Jiao, Li; Tang, Heng-Young
    • Journal of applied mathematics & informatics / v.28 no.1_2 / pp.283-293 / 2010
  • In this paper, we consider the inverse constrained minimum spanning tree problem under Hamming distance. Such an inverse problem is to modify the weights, subject to bound constraints, so that a given feasible solution becomes an optimal solution and the deviation of the weights, measured by the weighted Hamming distance, is minimized. We present a strongly polynomial time algorithm to solve the inverse constrained minimum spanning tree problem under Hamming distance.
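
To make the objective concrete, here is a minimal Python sketch of the weighted Hamming distance deviation and the bound constraints described in this abstract; it is not the paper's algorithm, and the edge names, penalties, and bounds are hypothetical.

```python
# Illustrative only: the weighted Hamming distance objective used in inverse
# MST problems. Edge data and names are hypothetical, not from the paper.

def weighted_hamming_deviation(w_orig, w_new, cost):
    """Sum of per-edge penalties cost[e] for every edge whose weight changed."""
    return sum(cost[e] for e in w_orig if w_new[e] != w_orig[e])

def within_bounds(w_orig, w_new, lower, upper):
    """Check the bound constraints lower[e] <= w_new[e] - w_orig[e] <= upper[e]."""
    return all(lower[e] <= w_new[e] - w_orig[e] <= upper[e] for e in w_orig)

# Example with three edges a, b, c
w_orig = {"a": 4, "b": 2, "c": 7}
w_new  = {"a": 3, "b": 2, "c": 7}          # only edge "a" is modified
cost   = {"a": 1.0, "b": 2.0, "c": 0.5}    # per-edge modification penalties
lower  = {e: -2 for e in w_orig}
upper  = {e: +2 for e in w_orig}

print(weighted_hamming_deviation(w_orig, w_new, cost))  # 1.0
print(within_bounds(w_orig, w_new, lower, upper))       # True
```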

A Shortest Bypass Search Algorithm by using Positions of a Certain Obstacle Boundary (임의형태의 장애물 경계정보를 이용한 최소거리 우회경로 탐색 알고리즘)

  • Kim, Yun-Sung; Park, Soo-Hyun
    • Journal of the Korea Society for Simulation / v.19 no.4 / pp.129-137 / 2010
  • Currently used shortest path search algorithms involve graphs with vertices and weighted edges between vertices. However, when finding the shortest path around a randomly shaped obstacle (an island, for instance) positioned between the starting point and the destination, such algorithms are highly memory-inefficient and significantly time consuming, because every position in the map must be treated as a vertex and every line connecting two adjacent vertices as an edge. Therefore, we propose a new method for finding the shortest path in such conditions without using weighted graphs. The algorithm finds the shortest obstacle bypass given only the positions of the obstacle boundary, the starting point, and the destination. When the minimum bounding rectangle enclosing the obstacle has m rows and n columns, the proposed algorithm has a worst-case time complexity of O(mn). This shows that the proposed algorithm is very efficient compared with currently used algorithms.
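
For contrast, the sketch below shows the grid-graph baseline this abstract argues against, where every free cell is a vertex and a breadth-first search over the whole map finds the bypass; it is not the paper's boundary-based method, and the grid, start, and goal are invented. The proposed algorithm avoids enumerating all map cells by working only with the obstacle boundary inside its m-by-n bounding rectangle.

```python
# Grid-graph baseline (not the proposed method): every free cell is a vertex,
# adjacent cells are unit-weight edges, and BFS finds the shortest bypass.
from collections import deque

def bfs_shortest_bypass(grid, start, goal):
    """grid[r][c] == 1 marks an obstacle cell; returns path length or None."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return dist[(r, c)]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return None  # goal unreachable

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],   # an obstacle "island" to detour around
        [0, 0, 0, 0]]
print(bfs_shortest_bypass(grid, (0, 0), (2, 3)))  # 5
```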

Dynamic data-base Typhoon Track Prediction (DYTRAP) (동적 데이터베이스 기반 태풍 진로 예측)

  • Lee, Yunje; Kwon, H. Joe; Joo, Dong-Chan
    • Atmosphere / v.21 no.2 / pp.209-220 / 2011
  • A new consensus algorithm for the prediction of tropical cyclone tracks has been developed. A conventional consensus is a simple average of a few fixed models that have shown good track-prediction performance over the past few years. In contrast, the consensus in this study is a weighted average of a few models that may change for every individual forecast time. The models are selected as follows. The first step is to find past tropical cyclone tracks analogous to the current track. The next step is to evaluate the model performances for those past tracks. Finally, we take the weighted average of the selected models, giving more weight to the higher-performing models. This new algorithm has been named DYTRAP (DYnamic data-base Typhoon tRAck Prediction) in the sense that the database is used to find the analogous past tracks and the effective models for every individual track prediction case. DYTRAP has been applied to all 2009 tropical cyclone track predictions. The results outperform those of all individual models as well as all the official forecasts of the typhoon centers. In order to prove the real usefulness of DYTRAP, the system needs to be applied to real-time prediction, because the forecasts at typhoon centers usually rely on 6-hour or 12-hour-old model guidance.
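
Below is a minimal sketch (not DYTRAP itself) of the performance-weighted consensus idea: each model's forecast position is averaged with a weight inversely proportional to its error on analogous past tracks. The model names, errors, and positions are hypothetical.

```python
# Performance-weighted consensus of track forecasts (illustrative values only).
def weighted_consensus(forecasts, past_errors):
    """forecasts: model -> (lat, lon); past_errors: model -> mean track error (km).
    Higher past skill (smaller error) gets a larger weight."""
    weights = {m: 1.0 / past_errors[m] for m in forecasts}
    total = sum(weights.values())
    lat = sum(weights[m] * forecasts[m][0] for m in forecasts) / total
    lon = sum(weights[m] * forecasts[m][1] for m in forecasts) / total
    return lat, lon

forecasts = {"modelA": (21.3, 127.8), "modelB": (21.9, 128.4), "modelC": (21.5, 128.0)}
past_errors = {"modelA": 80.0, "modelB": 160.0, "modelC": 100.0}  # km, on analogous past tracks
print(weighted_consensus(forecasts, past_errors))
```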

Weighted Bayesian Automatic Document Categorization Based on Association Word Knowledge Base by Apriori Algorithm (Apriori알고리즘에 의한 연관 단어 지식 베이스에 기반한 가중치가 부여된 베이지만 자동 문서 분류)

  • 고수정; 이정현
    • Journal of Korea Multimedia Society / v.4 no.2 / pp.171-181 / 2001
  • The previous Bayesian document categorization method has the problems that it requires a lot of time and effort for word clustering and that it hardly reflects the semantic information between words. In this paper, we propose a weighted Bayesian document categorization method based on an association word knowledge base acquired by a mining technique. The proposed method constructs a weighted association word knowledge base using the documents in the training set. A classifier using Bayesian probability then categorizes documents based on the constructed association word knowledge base. To evaluate the performance of the proposed method, we compare our experimental results with those of a weighted Bayesian method using a vocabulary dictionary built by mutual information, a weighted Bayesian method, and a simple Bayesian method. The experimental results show that the weighted Bayesian method using the association word knowledge base improves performance by 0.87%, 2.77%, and 5.09% over these three methods, respectively.
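
As a rough illustration of the weighting idea, here is a small weighted naive Bayes scorer in which each word's log-likelihood is scaled by a per-word weight, standing in for the association-word weights the paper derives with the Apriori algorithm. The classes, probabilities, and weights are invented; this is not the paper's implementation.

```python
# Weighted naive Bayes scoring sketch with hypothetical priors and likelihoods.
import math

def weighted_nb_score(doc_words, prior, word_prob, word_weight, default_weight=1.0):
    """log P(c) + sum over words of weight(w) * log P(w | c), skipping unseen words."""
    score = math.log(prior)
    for w in doc_words:
        if w in word_prob:
            score += word_weight.get(w, default_weight) * math.log(word_prob[w])
    return score

prior = {"sports": 0.4, "economy": 0.6}
word_prob = {
    "sports":  {"game": 0.05, "market": 0.005, "team": 0.04},
    "economy": {"game": 0.005, "market": 0.06, "team": 0.004},
}
word_weight = {"market": 2.0, "game": 1.5}   # stronger association words get larger weights

doc = ["game", "team", "market"]
scores = {c: weighted_nb_score(doc, prior[c], word_prob[c], word_weight) for c in prior}
print(max(scores, key=scores.get))   # class with the highest weighted score
```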

Export Container Remarshaling Planning in Automated Container Terminals Considering Time Value (시간가치를 고려한 자동화 컨테이너 터미널의 수출 컨테이너 이적계획)

  • Bae, Jong-Wook; Park, Young-Man; Kim, Kap-Hwan
    • Journal of the Korean Operations Research and Management Science Society / v.33 no.2 / pp.75-86 / 2008
  • Remarshalling is one of the operational strategies considered important at a port container terminal for fast ship operations and higher stacking-yard efficiency. Remarshalling rearranges the containers scattered across a yard block in order to reduce the transfer time and the rehandling time of the container handling equipment. This paper deals with the rearrangement problem, which decides where containers are transported, considering the time value of each operation. We propose a mixed integer programming model that minimizes the weighted total operation cost. Because this model is NP-hard, we develop a heuristic algorithm for the rearrangement problem suitable for real-world application. We compare the heuristic algorithm with the optimization model in terms of computation time and total cost. For a sensitivity analysis of the storage configuration and cost weights, a variety of scenarios are examined.
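
The sketch below is not the paper's mixed integer model; it only illustrates the weighted-cost idea with a toy greedy heuristic that sends each container to the free slot with the smallest time-value-weighted sum of transfer and rehandling costs. The container names, slots, costs, and weights are invented.

```python
# Toy greedy rearrangement heuristic with hypothetical, time-value-style weights.
def greedy_remarshal(containers, slots, cost, w_transfer=1.0, w_rehandle=2.0):
    """cost[(c, s)] = (transfer_time, rehandling_time); returns container -> slot."""
    plan, free = {}, set(slots)
    for c in containers:
        best = min(free, key=lambda s: w_transfer * cost[(c, s)][0]
                   + w_rehandle * cost[(c, s)][1])
        plan[c] = best
        free.remove(best)
    return plan

containers = ["C1", "C2"]
slots = ["S1", "S2", "S3"]
cost = {("C1", "S1"): (3, 1), ("C1", "S2"): (2, 4), ("C1", "S3"): (5, 1),
        ("C2", "S1"): (1, 2), ("C2", "S2"): (4, 1), ("C2", "S3"): (2, 1)}
print(greedy_remarshal(containers, slots, cost))  # {'C1': 'S1', 'C2': 'S3'}
```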

Determination of Minimum Eigenvalue in a Continuous-time Weighted Least Squares Estimator (연속시간 하중최소자승 식별기의 최소고우치 결정)

  • Kim, Sung-Duck
    • The Transactions of the Korean Institute of Electrical Engineers / v.41 no.9 / pp.1021-1030 / 1992
  • This paper describes the problem of determining the minimum eigenvalue when a least squares estimator with an exponential forgetting factor is used to identify a continuous-time deterministic system. It is well known that the convergence rate of the parameter estimates depends on various factors in the estimator and, in particular, that its properties are directly affected by the eigenvalues of the parameter error differential equation. Fortunately, there is only one adjustable eigenvalue in the given estimator, and the parameter convergence rate therefore depends on this minimum eigenvalue. In this note, a new result for determining the minimum eigenvalue is proposed. Under the assumption that the input has as many spectral lines as the number of parameter estimates, it can be proven that the minimum eigenvalue converges to a constant value, which is a function of the forgetting factor and the number of parameter estimates.
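
As a purely numerical illustration (not the paper's analysis), the sketch below integrates a standard form of the information-matrix dynamics for a continuous-time least squares estimator with exponential forgetting, dR/dt = -λR + φ(t)φ(t)ᵀ, and reports the smallest eigenvalue of R, which settles to a constant for a persistently exciting input. The forgetting factor, regressor, and step size are arbitrary choices.

```python
# Euler integration of dR/dt = -lam*R + phi(t)*phi(t)^T and its minimum eigenvalue.
import numpy as np

lam, dt = 0.5, 1e-3           # forgetting factor and Euler step (both arbitrary)
R = np.zeros((2, 2))
for k in range(int(20.0 / dt)):
    t = k * dt
    phi = np.array([np.sin(t), np.cos(t)])   # sinusoidal regressor exciting both directions
    R += dt * (-lam * R + np.outer(phi, phi))

print(np.linalg.eigvalsh(R).min())   # approaches a constant set by lam and the input
```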

A Fair Queuing Algorithm to Reduce Energy Consumption in Wireless Channels (무선 채널의 에너지 소비를 줄이기 위한 공평 큐잉 알고리즘)

  • Kim, Tae-Joon
    • Journal of Korea Multimedia Society / v.10 no.7 / pp.893-901 / 2007
  • As real-time multimedia applications requiring quality-of-service guarantees spread over mobile and wireless networks, energy efficiency in wireless channels is becoming more important. Energy consumption in these channels can be reduced by decreasing the rate of the scheduler's outgoing link by means of Dynamic Modulation Scaling (DMS). This paper proposes a fair queuing algorithm, termed Rate Efficient Fair Queuing (REFQ), that reduces the outgoing link's rate; it is based on the Latency-Optimized Fair Queuing algorithm developed to enhance Weighted Fair Queuing (WFQ). The performance evaluation shows that REFQ decreases the link rate by up to 35% compared with WFQ, which reduces energy consumption by up to 90% when applied to a DMS-based radio modem.
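
For background, the sketch below shows the finish-time bookkeeping of plain Weighted Fair Queuing, the baseline that REFQ enhances; it is deliberately simplified (it ignores the system virtual clock) and is not REFQ itself. The flow names, weights, and packet sizes are hypothetical.

```python
# Simplified WFQ tagging: serve packets in increasing virtual finish-time order.
def wfq_finish_times(packets, weights):
    """packets: list of (flow, size) in arrival order; returns per-packet finish tags."""
    last_finish = {f: 0.0 for f in weights}
    tags = []
    for flow, size in packets:
        start = last_finish[flow]               # simplified: ignores the system virtual clock
        finish = start + size / weights[flow]   # larger weight -> earlier finish tag
        last_finish[flow] = finish
        tags.append((flow, finish))
    return tags

weights = {"voice": 3.0, "data": 1.0}
packets = [("voice", 200), ("data", 1500), ("voice", 200), ("data", 1500)]
for flow, tag in wfq_finish_times(packets, weights):
    print(flow, round(tag, 1))
```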

The Implementation of RRTs for a Remote-Controlled Mobile Robot

  • Roh, Chi-Won; Lee, Woo-Sub; Kang, Sung-Chul; Lee, Kwang-Won
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 2005.06a / pp.2237-2242 / 2005
  • The original RRT is iteratively expanded by applying control inputs that drive the system slightly toward randomly selected states, as opposed to requiring point-to-point convergence as in the probabilistic roadmap approach. It is generally known that the performance of RRTs can be improved by the choice of metric for selecting the nearest vertex and of bias techniques for choosing random states. We designed a path planning algorithm based on the RRT method for a remote-controlled mobile robot. First, we considered a bias technique that uses a goal-biased Gaussian random distribution along the command directions. Second, we selected a metric based on a weighted Euclidean distance to random states and a weighted distance from the goal region. This saves the effort of exploring unnecessary regions and helps the mobile robot find a feasible trajectory as quickly as possible. Finally, actuator constraints must be considered when applying the algorithm to physical mobile robots, so instead of random inputs we select control inputs distributed around the commanded inputs and constrained by the maximum rate of input change. Simulation results demonstrate that the proposed algorithm is significantly more efficient for planning than a basic RRT planner. It reduces the computational time needed to find a feasible trajectory and can be practically implemented in a remote-controlled mobile robot.
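
The sketch below illustrates, outside the paper, the two ingredients this abstract highlights: goal-biased Gaussian sampling of random states and a weighted Euclidean metric for nearest-vertex selection. The states, weights, and standard deviation are invented.

```python
# Goal-biased sampling and weighted nearest-vertex selection for an RRT-style planner.
import math, random

def sample_state(goal, sigma=1.0):
    """Draw a random state from a Gaussian centered on the goal (goal bias)."""
    return tuple(random.gauss(g, sigma) for g in goal)

def weighted_distance(a, b, weights):
    """Weighted Euclidean distance, e.g. weighting position vs. heading differently."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def nearest_vertex(tree, q_rand, weights):
    return min(tree, key=lambda q: weighted_distance(q, q_rand, weights))

random.seed(0)
tree = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.2)]       # (x, y, heading) vertices
goal = (5.0, 5.0, 0.0)
weights = (1.0, 1.0, 0.3)                        # heading error weighted less
q_rand = sample_state(goal)
print(nearest_vertex(tree, q_rand, weights))
```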

Design and Realization of Precise Indoor Localization Mechanism for Wi-Fi Devices

  • Su, Weideng; Liu, Erwu; Auge, Anna Calveras; Garcia-Villegas, Eduard; Wang, Rui; You, Jiayi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.12 / pp.5422-5441 / 2016
  • Despite the abundant literature in the field, there is still a need for a time-efficient, highly accurate, easy-to-deploy, and robust localization algorithm for real use, one that involves only minimal human intervention. We propose an enhanced Received Signal Strength Indicator (RSSI) based positioning algorithm for Wi-Fi capable devices, called Dynamic Weighted Evolution for Location Tracking (DWELT). Due to the multiple phenomena affecting the propagation of radio signals, RSSI measurements show fluctuations that hinder the use of straightforward positioning mechanisms based on widely known propagation loss models. Instead, DWELT processes raw RSSI values and applies a weighted posterior-probabilistic evolution for quick convergence of localization and tracking. In this paper, we present the first implementation of DWELT, intended for 1D localization (applicable to tunnels or corridors), as a first step towards a more generic implementation. Simulations and experiments show an accuracy of 1 m in more than 81% of the cases, and less than 2 m in 95% of the cases.
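
The following is not DWELT, only a minimal 1D Bayesian-grid sketch of the general idea of weighting candidate positions by how well they explain an RSSI reading under a log-distance path-loss model. The path-loss constants, noise level, and corridor layout are all hypothetical.

```python
# 1D weighted probabilistic position update from a single RSSI reading.
import math

def path_loss_rssi(d, rssi_1m=-40.0, n=2.0):
    """Expected RSSI (dBm) at distance d metres under a log-distance model."""
    return rssi_1m - 10.0 * n * math.log10(max(d, 0.1))

def update_weights(weights, positions, anchor_pos, rssi, sigma=4.0):
    """Reweight each candidate position by a Gaussian likelihood of the reading."""
    new = []
    for w, p in zip(weights, positions):
        expected = path_loss_rssi(abs(p - anchor_pos))
        like = math.exp(-((rssi - expected) ** 2) / (2 * sigma ** 2))
        new.append(w * like)
    total = sum(new)
    return [w / total for w in new]

positions = [float(p) for p in range(0, 31)]         # candidate spots along a corridor
weights = [1.0 / len(positions)] * len(positions)    # uniform prior
weights = update_weights(weights, positions, anchor_pos=0.0, rssi=-62.0)
print(positions[weights.index(max(weights))])        # most likely position estimate
```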

Implementation of Real-Time Post-Processing for High-Quality Stereo Vision

  • Choi, Seungmin; Jeong, Jae-Chan; Chang, Jiho; Shin, Hochul; Lim, Eul-Gyoon; Cho, Jae Il; Hwang, Daehwan
    • ETRI Journal / v.37 no.4 / pp.752-765 / 2015
  • We propose a novel post-processing algorithm and its very-large-scale integration architecture that simultaneously use passive and active stereo vision information to improve the reliability of the three-dimensional disparity in a hybrid stereo vision system. The proposed architecture consists of four steps: left-right consistency checking, semi-2D hole filling, tiny adaptive variance checking, and a 2D weighted median filter. The experimental results show that the error rate of the proposed algorithm (5.77%) is less than that of the raw disparity (10.12%) for a real-world camera image with a 1,280×720 resolution and a maximum disparity of 256. Moreover, for the well-known Middlebury stereo image sets, the proposed algorithm's error rate (8.30%) is also less than that of the raw disparity (13.7%). The proposed architecture is implemented on a single commercial field-programmable gate array using only 13.01% of the slice resources and achieves a rate of 60 fps for 1,280×720 stereo images with a disparity range of 256.
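
As an illustration of the first of the four steps, here is a generic left-right consistency check on one disparity row; it is not the paper's FPGA implementation, and the disparities and tolerance are invented.

```python
# Generic left-right consistency check: a left-image disparity is kept only if
# the matching right-image pixel reports (nearly) the same disparity.
def lr_consistency_mask(disp_left, disp_right, tol=1):
    """Mark left-image pixels whose disparity disagrees with the right image."""
    width = len(disp_left)
    valid = []
    for x, d in enumerate(disp_left):
        xr = x - d                              # matching column in the right image
        ok = 0 <= xr < width and abs(disp_right[xr] - d) <= tol
        valid.append(ok)
    return valid

disp_left  = [0, 1, 2, 3, 2, 2, 2, 2]   # one row of left disparities (index 3 is spurious)
disp_right = [1, 1, 2, 2, 2, 2, 2, 2]
print(lr_consistency_mask(disp_left, disp_right))
# [True, True, True, False, True, True, True, True]
```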