• Title/Summary/Keyword: computational algorithm


Acceleration of Delaunay Refinement Algorithm by Geometric Hashing (기하학적 해싱을 이용한 딜러니 개선 알고리듬의 가속화)

  • Kim, Donguk
    • Korean Journal of Computational Design and Engineering, v.22 no.2, pp.110-117, 2017
  • Delaunay refinement is a classical method for generating quality triangular meshes when a point cloud and/or constrained edges are given in two- or three-dimensional space. It computes the Delaunay triangulation of the given points and edges as an initial solution, then updates the triangulation by inserting Steiner points one by one to improve the mesh quality. This process repeats until the given quality criteria are satisfied. The efficiency of the algorithm depends on the criteria and on the point insertion method. In this paper, we propose a method to accelerate the Delaunay refinement algorithm by applying a geometric hashing technique called bucketing when inserting a new Steiner point, so that the necessary computation is localized. We have tested the proposed method on several types of data sets, and the experimental results show strongly linear time behavior.
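The bucketing idea can be pictured with a minimal uniform-grid hash (a sketch under assumptions; the class, its cell size, and the query interface below are illustrative, not the paper's implementation): points are hashed into grid cells so that inserting a new Steiner point only requires searching the few cells around it rather than the whole point set.

```python
from collections import defaultdict

class UniformGridHash:
    """Minimal geometric hash (bucketing) for 2D points: proximity queries
    touch only a few grid cells instead of the whole point set."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.buckets = defaultdict(list)   # (ix, iy) -> points in that cell

    def _cell(self, p):
        return (int(p[0] // self.cell_size), int(p[1] // self.cell_size))

    def insert(self, p):
        self.buckets[self._cell(p)].append(p)

    def neighbors(self, p, radius):
        """Points within `radius` of p, scanning only nearby buckets."""
        cx, cy = self._cell(p)
        reach = int(radius // self.cell_size) + 1
        r2, found = radius * radius, []
        for ix in range(cx - reach, cx + reach + 1):
            for iy in range(cy - reach, cy + reach + 1):
                for q in self.buckets.get((ix, iy), ()):
                    if (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 <= r2:
                        found.append(q)
        return found
```

Because each insertion then touches a near-constant number of buckets, the total work grows roughly linearly with the number of insertions, which is consistent with the linear behavior reported above.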

Sequencing the Mixed Model Assembly Line with Multiple Stations to Minimize the Total Utility Work and Idle Time

  • Kim, Yearnmin;Choi, Won-Joon
    • Industrial Engineering and Management Systems, v.15 no.1, pp.1-10, 2016
  • This paper presents a fast sequencing algorithm for a mixed-model assembly line with multiple workstations that minimizes the total utility work and idle time. We compare the proposed algorithm with another heuristic, the Tsai-based heuristic, on a sequencing problem that minimizes the total utility work. Numerical experiments are used to evaluate the performance and effectiveness of the proposed algorithm. The Tsai-based heuristic performs best in terms of utility work, but the fast sequencing algorithm performs well for both utility work and idle time. Moreover, the computational complexity of the fast sequencing algorithm is O(KN), while that of the Tsai-based algorithm is O(KN log N); the actual computation time of the fast sequencing heuristic is 2 to 6 times shorter than that of the Tsai-based heuristic.
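To make the two objectives concrete, here is a minimal evaluation of a candidate launch sequence under a common closed-station model (a sketch under assumptions; `t`, `c`, `l` and the carry-over rule below are illustrative, not the paper's notation): work that would cross the station boundary is handed to a utility worker, and slack before the next launch counts as idle time.

```python
def evaluate_sequence(seq, t, c, l):
    """Total utility work and idle time of a launch sequence.

    seq : model indices in launch order
    t   : t[m][k] = processing time of model m at station k
    c   : launch interval (cycle time); l : station length, with c <= l
    """
    n_stations = len(t[0])
    utility = idle = 0.0
    for k in range(n_stations):
        z = 0.0                                 # worker offset at job arrival
        for m in seq:
            over = max(0.0, z + t[m][k] - l)    # work crossing the boundary...
            utility += over                     # ...is done by a utility worker
            finish = z + t[m][k] - over         # where the worker actually stops
            idle += max(0.0, c - finish)        # waiting for the next launch
            z = max(0.0, finish - c)            # carry-over after the line moves
    return utility, idle
```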

Two-sided assembly line balancing using a branch-and-bound method (분지한계법을 이용한 양면조립라인 밸런싱)

  • Kim, Yeo-Keun;Lee, Tae-Ok;Shin, Tae-Ho
    • Journal of Korean Institute of Industrial Engineers, v.24 no.3, pp.417-429, 1998
  • This paper considers two-sided (left- and right-side) assembly lines, which are often used in assembling large-sized products such as trucks and buses. A large number of exact algorithms and heuristics have been proposed for balancing one-sided lines, but little attention has been paid to balancing two-sided assembly lines. We present an efficient branch-and-bound algorithm for balancing two-sided assembly lines. The algorithm involves a procedure for generating an enumeration tree, and assignment rules are used to search efficiently for near-optimal solutions. New and existing bounding strategies and dominance rules are also employed. The proposed algorithm can find a near-optimal solution while enumerating only part of the feasible solutions. Extensive computational experiments are carried out to compare the performance of the proposed algorithm with that of existing ones. The computational results show that our algorithm is promising and robust in solution quality.
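For readers unfamiliar with the approach, the sketch below shows the bare enumeration-tree-plus-bounding pattern on the simpler one-sided (type-1) problem; the paper's algorithm additionally handles left/right side constraints, assignment rules, and two-sided dominance rules. All names and the toy instance are illustrative.

```python
import math

def salbp1_bb(times, preds, cycle):
    """Branch-and-bound for type-1 line balancing: minimize the number of
    stations for a given cycle time. Stations are filled to maximal loads
    (a standard dominance rule), and branches whose lower bound cannot
    beat the incumbent are pruned. Assumes every task time <= cycle."""
    tasks = set(times)
    best = [len(tasks)]                       # trivial upper bound

    def dfs(done, n_stations, slack):
        if done == tasks:
            best[0] = min(best[0], n_stations)
            return
        rest = sum(times[t] for t in tasks - done)
        lb = n_stations + math.ceil((rest - slack) / cycle)
        if lb >= best[0]:
            return                            # bound: prune this subtree
        cand = [t for t in tasks - done
                if preds[t] <= done and times[t] <= slack]
        if not cand:                          # current station load is maximal
            dfs(done, n_stations + 1, cycle)  # open the next station
            return
        for t in sorted(cand, key=lambda u: -times[u]):
            dfs(done | {t}, n_stations, slack - times[t])

    dfs(set(), 0, 0.0)
    return best[0]

# toy instance: five tasks in a precedence chain, cycle time 7 -> 3 stations
times = {1: 3, 2: 4, 3: 2, 4: 5, 5: 2}
preds = {1: set(), 2: {1}, 3: {1}, 4: {2, 3}, 5: {4}}
print(salbp1_bb(times, preds, cycle=7))       # 3
```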

Accelerated Generation Algorithm for an Elemental Image Array Using Depth Information in Computational Integral Imaging

  • Piao, Yongri;Kwon, Young-Man;Zhang, Miao;Lee, Joon-Jae
    • Journal of information and communication convergence engineering, v.11 no.2, pp.132-138, 2013
  • In this paper, an accelerated algorithm for effectively generating an elemental image array in a computational integral imaging system is proposed. In the proposed method, the depth information of a 3D object is extracted from images picked up by a stereo camera or a depth camera. The elemental image array is then generated by the proposed accelerated algorithm using this depth information. The 3D image produced by the proposed accelerated generation algorithm was compared with that of the conventional direct algorithm to verify the efficiency of the proposed method, and the experimental results confirm the accuracy of the elemental images it generates.
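As background, the direct approach whose cost such accelerated methods reduce can be sketched with a simple pinhole model (a sketch under assumptions; the geometry, parameter names, and sampling below are illustrative, and occlusion handling is omitted): each scene pixel, placed at its measured depth, is projected through every pinhole of the lens array onto the sensor plane.

```python
import numpy as np

def generate_eia(color, depth, n_lens, ei_res, pitch, gap):
    """Generate an elemental image array by direct pinhole projection.

    color : (H, W, 3) scene texture; depth : (H, W) per-pixel depth z > 0,
    in the same length unit as pitch and gap.
    n_lens: lenses per side; ei_res: pixels per elemental image side;
    pitch : lens pitch; gap : lens-array-to-sensor distance.
    Simplified: later writes overwrite earlier ones (no z-ordering).
    """
    H, W, _ = color.shape
    eia = np.zeros((n_lens * ei_res, n_lens * ei_res, 3), dtype=color.dtype)
    # world x/y coordinates of scene pixels, centered on the optical axis
    xs = (np.arange(W) - W / 2) * (pitch * n_lens / W)
    ys = (np.arange(H) - H / 2) * (pitch * n_lens / H)
    for i in range(n_lens):                    # lens row
        for j in range(n_lens):                # lens column
            lx = (j - n_lens / 2 + 0.5) * pitch
            ly = (i - n_lens / 2 + 0.5) * pitch
            for v in range(H):
                for u in range(W):
                    z = depth[v, u]
                    # ray from the object point through the pinhole (lx, ly)
                    sx = lx - gap * (xs[u] - lx) / z
                    sy = ly - gap * (ys[v] - ly) / z
                    # sensor coordinates -> pixel inside this elemental image
                    px = int((sx - lx) / pitch * ei_res + ei_res / 2)
                    py = int((sy - ly) / pitch * ei_res + ei_res / 2)
                    if 0 <= px < ei_res and 0 <= py < ei_res:
                        eia[i * ei_res + py, j * ei_res + px] = color[v, u]
    return eia
```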

Computational Complexity Reduction of Speech Recognizers Based on the Modified Bucket Box Intersection Algorithm (변형된 BBI 알고리즘에 기반한 음성 인식기의 계산량 감축)

  • Kim, Keun-Yong;Kim, Dong-Hwa
    • MALSORI, no.60, pp.109-123, 2006
  • Since computing the log-likelihoods of Gaussian mixture densities is a major computational burden for speech recognizers based on continuous HMMs, several techniques have been proposed to reduce the number of mixtures used during recognition. In this paper, we propose a modified Bucket Box Intersection (BBI) algorithm that employs two relative thresholds: one is the relative threshold of the conventional BBI algorithm, and the other is used to reduce the number of Gaussian boxes that are intersected by the hyperplanes near the boxes' edges. The experimental results show that the proposed algorithm reduces the number of Gaussian mixtures by 12.92% during the recognition phase, with negligible performance degradation compared to the conventional BBI algorithm.
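The flavor of the BBI technique, in a deliberately simplified sketch (the `edge_frac` pruning below stands in for the paper's second relative threshold and is an assumption, as is the uniform median split): each Gaussian gets an axis-aligned "box" outside of which its likelihood is treated as negligible, a k-d-style tree partitions the feature space, and at decode time a feature vector descends to one bucket and only that bucket's shortlist of Gaussians is evaluated.

```python
import numpy as np

class Node:
    __slots__ = ("dim", "thr", "left", "right", "shortlist")

def build(lo, hi, ids, depth, max_depth, edge_frac=0.1):
    """lo, hi: (M, D) corners of the Gaussian boxes (mean +/- tau * std);
    ids: boxes reaching this node. Splits on the median box center.
    Usage: root = build(lo, hi, list(range(M)), 0, max_depth=8)."""
    node = Node()
    if depth == max_depth or len(ids) <= 2:
        node.dim = node.thr = node.left = node.right = None
        node.shortlist = list(ids)
        return node
    d = depth % lo.shape[1]
    thr = float(np.median((lo[ids, d] + hi[ids, d]) / 2))
    left, right = [], []
    for i in ids:
        width = hi[i, d] - lo[i, d]
        # keep a box on a side only if more than a sliver of it lies there
        # (stand-in for the modified BBI's second relative threshold)
        if thr - lo[i, d] > edge_frac * width:
            left.append(i)
        if hi[i, d] - thr > edge_frac * width:
            right.append(i)
    node.dim, node.thr = d, thr
    node.shortlist = None
    node.left = build(lo, hi, left, depth + 1, max_depth, edge_frac)
    node.right = build(lo, hi, right, depth + 1, max_depth, edge_frac)
    return node

def shortlist(node, x):
    """Descend to x's bucket; only these Gaussians get likelihoods computed."""
    while node.shortlist is None:
        node = node.left if x[node.dim] <= node.thr else node.right
    return node.shortlist
```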

Complexity Reduction of Blind Algorithms based on Cross-Information Potential and Delta Functions (상호 정보 포텐셜과 델타함수를 이용한 블라인드 알고리듬의 복잡도 개선)

  • Kim, Namyong
    • Journal of Internet Computing and Services, v.15 no.3, pp.71-77, 2014
  • The equalization algorithm based on the cross-information potential concept and Dirac delta functions (CIPD) has outstanding ISI-elimination performance even in impulsive-noise environments. The main drawback of the CIPD algorithm is the heavy computational burden caused by the block-processing method used in its weight update. In this paper, to reduce the computational complexity, a new way of calculating the gradient is proposed that replaces the double summation in the CIPD weight update with a single summation. In simulations, the proposed method produces the same gradient learning curves as the CIPD algorithm, even under strong impulsive noise, while its computational complexity is significantly reduced and independent of the block size, to which that of the conventional algorithm is proportional.
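The algebraic core of such a reduction can be shown in a few lines (an illustrative identity only; `f`, `g`, and the factorized summand are placeholders, not the actual CIPD kernels): whenever a pairwise summand factorizes, which is the role the Dirac-delta construction plays in the criterion, the O(N²) double summation collapses into O(N) single summations with the same value.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=512), rng.normal(size=512)
f, g = np.tanh, np.cos          # placeholder factors, not the CIPD kernels

# O(N^2): the block-processing double summation evaluates every pair
double = sum(f(xi) * g(yj) for xi in x for yj in y)

# O(N): the same value from two single summations once the summand factorizes
single = f(x).sum() * g(y).sum()

assert np.isclose(double, single)
```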

Variable Step LMS Algorithm using Fibonacci Sequence (피보나치 수열을 활용한 가변스텝 LMS 알고리즘)

  • Woo, Hong-Chae
    • Journal of the Institute of Convergence Signal Processing, v.19 no.2, pp.42-46, 2018
  • Adaptive signal processing is important in many signal and communication environments, and the least mean square (LMS) algorithm is used widely because it is simple and robust. By varying the step size, variable step (VS) LMS algorithms can achieve both fast convergence and a small excess mean square error, and many such algorithms have been studied for better performance. In some of them, however, the improved performance comes at the cost of considerable computational complexity. The proposed sporadic-step algorithm combines the low computational complexity of the fixed-step LMS algorithm with the fast convergence of the variable-step LMS algorithm: the step size is updated only sporadically, at iterations given by the Fibonacci sequence, so the performance of the variable-step LMS algorithm is maintained at a low update rate. The performance of the proposed variable-step LMS algorithm is demonstrated on an adaptive equalizer.
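A minimal sketch of the sporadic-step idea (the Fibonacci schedule is from the paper; the geometric step-halving rule and all parameter values here are illustrative assumptions): a standard LMS update runs on every sample, but the step size itself is re-tuned only when the iteration index hits the next Fibonacci number.

```python
import numpy as np

def fibonacci_step_lms(x, d, n_taps=8, mu=0.05, mu_min=1e-4):
    """LMS adaptive filter whose step size is adjusted only at
    Fibonacci-indexed iterations (sporadic step update)."""
    w = np.zeros(n_taps)
    fib_prev, fib_next = 1, 2               # upcoming Fibonacci checkpoints
    err = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]           # regressor, most recent first
        err[n] = d[n] - w @ u
        w += mu * err[n] * u                # ordinary LMS weight update
        if n == fib_next:                   # sporadic step adjustment
            mu = max(mu * 0.5, mu_min)      # anneal toward a small steady step
            fib_prev, fib_next = fib_next, fib_prev + fib_next
    return w, err
```

Between checkpoints the update costs exactly as much as fixed-step LMS; because Fibonacci numbers grow geometrically, the step-adjustment overhead becomes negligible as adaptation proceeds.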

A Fast Running FIR Filter Structure for Reducing the Computational Complexity of Wavelet-Based Adaptive Algorithms (웨이블렛 기반 적응 알고리즘의 계산량 감소에 적합한 Fast running FIR filter에 관한 연구)

  • Lee, Jae-Kyun;Lee, Chae-Wook
    • Proceedings of the Korea Institute of Convergence Signal Processing, 2005.11a, pp.250-255, 2005
  • In this paper, we propose a new fast running FIR filter structure that improves the convergence speed of adaptive signal processing and reduces the computational complexity. The proposed filter is applied to a wavelet-based adaptive algorithm. We compared the performance of the proposed algorithm with that of other algorithms through computer simulation of an adaptive noise canceler operating on synthesized speech; the results show that the frequency-domain algorithm is preferable to the existing time-domain one. We analyzed the wavelet algorithm, the short-length fast running FIR algorithm, the fast short-length fast running FIR algorithm, and the proposed algorithm.
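As a hedged illustration of why block ("running") FIR structures pay off, the sketch below uses overlap-save fast convolution as a stand-in; the paper's short-length fast running FIR achieves a similar per-sample saving with small fixed-size sub-blocks rather than FFTs, so this is not its structure, only the same block-processing principle.

```python
import numpy as np

def overlap_save_fir(x, h, block=256):
    """Filter x with FIR h using overlap-save fast block convolution."""
    L = len(h)
    N = block + L - 1                        # FFT length per block
    H = np.fft.rfft(h, N)
    xp = np.concatenate([np.zeros(L - 1), x])
    out = []
    for s in range(0, len(x), block):
        seg = xp[s:s + N]
        if len(seg) < N:                     # zero-pad the final block
            seg = np.pad(seg, (0, N - len(seg)))
        y = np.fft.irfft(np.fft.rfft(seg) * H, N)
        out.append(y[L - 1:L - 1 + block])   # discard the wrapped-around part
    return np.concatenate(out)[:len(x)]

# sanity check against direct convolution:
x, h = np.random.randn(1000), np.random.randn(31)
assert np.allclose(overlap_save_fir(x, h), np.convolve(x, h)[:len(x)])
```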

Research on Low-energy Adaptive Clustering Hierarchy Protocol based on Multi-objective Coupling Algorithm

  • Li, Wuzhao;Wang, Yechuang;Sun, Youqiang;Mao, Jie
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.4, pp.1437-1459, 2020
  • A Wireless Sensor Network (WSN) is a distributed sensor network whose terminals are sensors that can sense and monitor the environment. Sensors are typically battery-powered and deployed where the batteries are difficult to replace, so making the best use of node energy and extending the network's life cycle are problems that must be faced. The low-energy adaptive clustering hierarchy (LEACH) protocol is an adaptive clustering topology algorithm that lets the nodes in the network consume energy in a relatively balanced way and prolongs the network lifetime. In this paper, a novel multi-objective LEACH protocol is proposed. To solve it, we design a multi-objective coupling algorithm (MBGF) based on the bat algorithm (BA), the glowworm swarm optimization algorithm (GSO), and the bacterial foraging optimization algorithm (BFO), which inherits the advantages of all three. Tested on the ZDT and SCH benchmarks, MBGF proves superior. When the multi-objective coupling algorithm is applied in the multi-objective LEACH protocol, experimental results show that the protocol can greatly reduce node energy consumption and prolong the network life cycle.
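For context, the classic LEACH cluster-head election that the proposed protocol builds on can be sketched as follows (a textbook-style sketch; the bookkeeping names are assumptions): in round r, a node that has not served as cluster head in the last 1/p rounds volunteers with probability T = p / (1 - p·(r mod 1/p)), which rotates the energy-hungry cluster-head role across nodes.

```python
import random

def elect_cluster_heads(nodes, p, r, last_ch_round):
    """Classic LEACH cluster-head election for round r.

    nodes         : iterable of node ids
    p             : desired fraction of cluster heads per round
    last_ch_round : {node: round it last served as cluster head}
    """
    period = int(1 / p)
    threshold = p / (1 - p * (r % period))   # LEACH threshold T(n)
    heads = []
    for n in nodes:
        served_recently = r - last_ch_round.get(n, -period) < period
        if not served_recently and random.random() < threshold:
            heads.append(n)                  # n advertises itself as head
            last_ch_round[n] = r
    return heads
```

The threshold grows as the round index approaches the end of each 1/p-round epoch, so every eligible node is guaranteed a turn, which is what balances energy consumption across the network.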

Simplified 2-Dimensional Scaled Min-Sum Algorithm for LDPC Decoder

  • Cho, Keol;Lee, Wang-Heon;Chung, Ki-Seok
    • Journal of Electrical Engineering and Technology, v.12 no.3, pp.1262-1270, 2017
  • Among the various decoding algorithms for low-density parity-check (LDPC) codes, the min-sum (MS) algorithm and its modified versions are widely adopted because of their computational simplicity compared to the sum-product (SP) algorithm, at a slight loss of decoding performance. In the MS algorithm, the magnitude of the output message from a check node (CN) processing unit is decided by either the smallest or the next-smallest input message, denoted min1 and min2, respectively. It has been shown that multiplying the CN output message by a scaling factor improves the decoding performance, and Zhong et al. have shown that multiplying min1 and min2 by different scaling factors (called a 2-dimensional scaling) increases the performance of the LDPC decoder even further. In this paper, a simplified 2-dimensional scaled (S2DS) MS algorithm is proposed. We identify a pair of highly effective scaling factors whose multiplications can be replaced with combinations of addition and shift operations; furthermore, one scaling operation is approximated by the difference between min1 and min2. The simulation results show that S2DS achieves error-correcting performance that is close to or better than the SP algorithm regardless of the coding rate, and its computational complexity is the lowest among the modified versions of the MS algorithm.
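A sketch of a check-node update in this style (integer LLRs; the exact scaling pair and the min2 approximation below are illustrative assumptions, not the paper's values): the multiplications by scaling factors are replaced with shift-and-add operations, and min2's scaling is approximated using the difference between min1 and min2.

```python
def s2ds_check_node(llrs, shift=2):
    """One check-node update of a scaled min-sum LDPC decoder.

    llrs: integer LLR messages arriving at the check node (degree >= 2).
    Returns the outgoing messages in the same order.
    """
    min1 = min2 = float("inf")               # two smallest magnitudes
    idx1 = -1
    parity = 0                               # XOR of incoming sign bits
    for i, v in enumerate(llrs):
        parity ^= v < 0
        m = abs(v)
        if m < min1:
            min2, min1, idx1 = min1, m, i
        elif m < min2:
            min2 = m
    # scale by (1 - 2^-shift), e.g. 0.75, using a subtract-and-shift only
    s1 = min1 - (min1 >> shift)
    # illustrative stand-in: approximate min2's scaling via (min2 - min1)
    s2 = min2 - ((min2 - min1) >> 1)
    out = []
    for i, v in enumerate(llrs):
        mag = s2 if i == idx1 else s1        # min-sum: exclude edge's own input
        sign = -1 if parity ^ (v < 0) else 1 # product of the other edges' signs
        out.append(sign * mag)
    return out
```

Keeping only min1, min2, the sign parity, and shift-add scaling is exactly what makes such check-node units cheap in hardware compared with the sum-product update.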