• Title/Summary/Keyword: sum-product algorithm

A Study of Position Control Performance Enhancement in a Real-Time OS Based Laparoscopic Surgery Robot Using Intelligent Fuzzy PID Control Algorithm

  • Song, Seung-Joon;Park, Jun-Woo;Shin, Jung-Wook;Lee, Duck-Hee;Kim, Yun-Ho;Choi, Jae-Soon
    • The Transactions of The Korean Institute of Electrical Engineers / v.57 no.3 / pp.518-526 / 2008
  • The fuzzy self-tuning PID controller is a PID controller with a fuzzy logic mechanism for tuning its gains on-line. In this structure, the proportional, integral, and derivative gains are tuned on-line with respect to changes in the output of the system under control. This paper deals with two types of fuzzy self-tuning PID controllers: a rule-based fuzzy PID controller and a learning fuzzy PID controller. As a medical application, the proposed controllers were implemented and evaluated in a laparoscopic surgery robot system. The proposed fuzzy PID structures maintain performance similar to that of a conventional PID controller and enhance position tracking performance over a wide range of varying inputs. For precise approximation, the fuzzy PID controller was realized using the linear reasoning method, a type of product-sum-gravity method. The proposed controllers were compared with a conventional PID controller without fuzzy gain tuning and were shown to have better performance in the experiments.
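
As a rough illustration of the gain-tuning idea in this abstract, the sketch below scales PID gains on-line with a product-sum-gravity fuzzy inference step. The rule base, membership functions, and gain values are hypothetical assumptions for illustration, not taken from the paper.

```python
# Minimal sketch of a fuzzy self-tuning PID loop (illustrative only; the
# paper's rule base, membership functions, and gains are not reproduced).

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_gain_scale(e, de):
    """Product-sum-gravity inference: product for rule firing strength,
    sum for aggregation, centre of gravity for defuzzification."""
    # Hypothetical rule base: (membership of e, membership of de, output scale).
    rules = [
        (tri(e, -2, -1, 0), tri(de, -2, -1, 0), 1.4),  # large negative error -> boost gains
        (tri(e, -1,  0, 1), tri(de, -1,  0, 1), 1.0),  # near zero -> nominal gains
        (tri(e,  0,  1, 2), tri(de,  0,  1, 2), 1.4),  # large positive error -> boost gains
    ]
    num = sum(we * wde * out for we, wde, out in rules)
    den = sum(we * wde for we, wde, _ in rules)
    return num / den if den > 0 else 1.0

class FuzzyPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_e = 0.0

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        de = (e - self.prev_e) / self.dt
        s = fuzzy_gain_scale(e, de)          # on-line gain tuning factor
        self.integral += e * self.dt
        u = s * (self.kp * e + self.ki * self.integral + self.kd * de)
        self.prev_e = e
        return u
```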

Heuristics for Scheduling Wafer Lots at the Deposition Workstation in a Semiconductor Wafer Fab

  • Choi, Seong-Woo;Lim, Tae-Kyu;Kim, Yeong-Dae
    • Journal of Korean Institute of Industrial Engineers / v.36 no.2 / pp.125-137 / 2010
  • This study focuses on the problem of scheduling wafer lots of several product families at the deposition workstation in a semiconductor wafer fabrication facility. There are multiple identical parallel machines at the deposition workstation, and two types of setups, record-dependent setups and family setups, may be required at the deposition machines. A record-dependent setup is needed to find optimal operational conditions for a wafer lot on a machine, and a family setup is needed between the processing of lots from different families. We suggest two-phase heuristic algorithms in which a priority-rule-based scheduling algorithm generates an initial schedule in the first phase and the schedule is improved in the second phase. Results of computational tests on randomly generated test problems show that the suggested algorithms outperform a scheduling method used in a real manufacturing system in terms of the sum of weighted flowtimes of the wafer lots.
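
The two-phase structure described above might be sketched as follows: a weighted-shortest-processing-time priority rule builds an initial schedule on identical parallel machines with family setups, and adjacent swaps then improve the sum of weighted flowtimes. The setup time, priority rule, and improvement move are illustrative assumptions, not the paper's heuristics.

```python
# Minimal two-phase scheduling sketch. Lots are (processing_time, weight, family).
SETUP = 2.0  # hypothetical family setup time

def weighted_flowtime(machines):
    total = 0.0
    for lots in machines:
        t, fam = 0.0, None
        for p, w, f in lots:
            if f != fam:          # family setup when switching families
                t += SETUP
                fam = f
            t += p
            total += w * t        # completion time weighted by lot priority
    return total

def initial_schedule(lots, m):
    """Phase 1: WSPT-like priority rule, assigning to the least-loaded machine."""
    machines = [[] for _ in range(m)]
    loads = [0.0] * m
    for p, w, f in sorted(lots, key=lambda l: l[0] / l[1]):
        i = loads.index(min(loads))
        machines[i].append((p, w, f))
        loads[i] += p
    return machines

def improve(machines):
    """Phase 2: keep applying improving adjacent swaps on each machine."""
    best = weighted_flowtime(machines)
    improved = True
    while improved:
        improved = False
        for lots in machines:
            for j in range(len(lots) - 1):
                lots[j], lots[j + 1] = lots[j + 1], lots[j]
                cost = weighted_flowtime(machines)
                if cost < best:
                    best, improved = cost, True
                else:                      # undo non-improving swap
                    lots[j], lots[j + 1] = lots[j + 1], lots[j]
    return machines, best

lots = [(3.0, 1.0, "A"), (2.0, 2.0, "B"), (4.0, 1.0, "A"), (1.0, 3.0, "B")]
schedule, cost = improve(initial_schedule(lots, m=2))
```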

Novel Class of Entanglement-Assisted Quantum Codes with Minimal Ebits

  • Dong, Cao;Yaoliang, Song
    • Journal of Communications and Networks / v.15 no.2 / pp.217-221 / 2013
  • Quantum low-density parity-check (LDPC) codes based on the Calderbank-Shor-Steane construction have low encoding and decoding complexity. The sum-product algorithm (SPA) can be used to decode quantum LDPC codes; however, decoding performance may be significantly degraded by the many four-cycles required by this type of quantum code. All four-cycles can be eliminated using the entanglement-assisted formalism with maximally entangled states (ebits). Entanglement-assisted quantum error-correcting codes based on Euclidean geometry outperform differently structured quantum codes. However, the large number of ebits required to construct the entanglement-assisted formalism is a substantial obstacle to practical application. In this paper, we propose a novel class of entanglement-assisted quantum LDPC codes constructed from classical Euclidean geometry LDPC codes. Notably, the new codes require only one ebit. Furthermore, we propose a construction scheme for a corresponding zigzag matrix and show that the algebraic structure of the codes can easily be expanded, so that a large class of quantum codes with various code lengths and code rates can be constructed. Our methods significantly improve the prospects for practical implementation of quantum error-correcting codes. Simulation results show that the entanglement-assisted quantum LDPC codes described in this study perform very well over a depolarizing channel with iterative decoding based on the SPA, and that these codes outperform other quantum codes based on Euclidean geometries.
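
For reference, the sum-product algorithm named in this abstract is the standard belief-propagation decoder; a minimal log-domain version for a classical binary LDPC code is sketched below. The quantum, entanglement-assisted setting adds machinery (stabilizer syndromes, ebits) not shown here.

```python
import numpy as np

def spa_decode(H, llr, max_iters=50):
    """Log-domain sum-product decoding of a classical binary LDPC code.
    H: parity-check matrix (m x n, entries 0/1). llr: channel log-likelihood ratios."""
    m, n = H.shape
    rows = [np.flatnonzero(H[i]) for i in range(m)]
    cols = [np.flatnonzero(H[:, j]) for j in range(n)]
    M = np.zeros((m, n))                     # variable-to-check messages
    for i in range(m):
        M[i, rows[i]] = llr[rows[i]]
    x = (llr < 0).astype(int)
    for _ in range(max_iters):
        # Check-node update (tanh rule): extrinsic message excludes own input.
        E = np.zeros((m, n))
        for i in range(m):
            t = np.tanh(M[i, rows[i]] / 2.0)
            prod = np.prod(t)
            safe_t = np.where(t == 0, 1e-12, t)
            E[i, rows[i]] = 2.0 * np.arctanh(
                np.clip(prod / safe_t, -0.999999, 0.999999))
        # Variable-node update and tentative hard decision.
        total = llr + np.array([E[cols[j], j].sum() for j in range(n)])
        x = (total < 0).astype(int)
        if not np.any(H @ x % 2):            # all parity checks satisfied
            return x
        for j in range(n):
            M[cols[j], j] = total[j] - E[cols[j], j]
    return x
```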

A Class of Check Matrices Constructed from Euclidean Geometry and Their Application to Quantum LDPC Codes

  • Dong, Cao;Yaoliang, Song
    • Journal of Communications and Networks / v.15 no.1 / pp.71-76 / 2013
  • A new class of quantum low-density parity-check (LDPC) codes is presented whose parity-check matrices are dual-containing matrices constructed from lines of Euclidean geometries (EGs). The parity-check matrices of our quantum codes contain one and only one 4-cycle in every two rows and have better distance properties. However, the classical parity-check matrix constructed from EGs does not satisfy the dual-containing condition: for some parameter settings, certain pairs of rows in the matrix may have no nonzero element in common. Notably, we propose four families of matrix structures, according to the choice of parameters, in which the parity-check matrices are adapted to satisfy the dual-containing requirement. A series of matrix properties is proved, and construction methods for parity-check matrices with the dual-containing property are given. Simulation results show that the quantum LDPC codes constructed by this method perform very well over the depolarizing channel when decoded iteratively with the sum-product algorithm, and that the quantum codes constructed in this paper outperform other quantum codes based on EGs.
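
The two structural properties this abstract turns on, 4-cycles between row pairs and the dual-containing condition, can each be checked directly from the parity-check matrix, as in the sketch below; the small example matrix is hypothetical, not an EG construction.

```python
import numpy as np

def four_cycles_per_row_pair(H):
    """A 4-cycle exists wherever two rows share >= 2 nonzero columns;
    each shared pair of columns contributes one 4-cycle."""
    overlap = H @ H.T          # overlap[i, k] = number of columns shared by rows i, k
    m = H.shape[0]
    return {(i, k): overlap[i, k] * (overlap[i, k] - 1) // 2
            for i in range(m) for k in range(i + 1, m) if overlap[i, k] >= 2}

def is_dual_containing(H):
    """Dual-containing condition: every pair of rows overlaps in an even
    number of columns, i.e. H H^T = 0 over GF(2)."""
    return not np.any((H @ H.T) % 2)

H = np.array([[1, 1, 0, 0],
              [1, 1, 1, 1],
              [0, 0, 1, 1]])
print(four_cycles_per_row_pair(H))  # rows (0,1) and (1,2) each share exactly 2 columns
print(is_dual_containing(H))        # True: all pairwise overlaps are even
```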

Adaptive time-step control for modal methods to integrate the neutron diffusion equation

  • Carreno, A.;Vidal-Ferrandiz, A.;Ginestar, D.;Verdu, G.
    • Nuclear Engineering and Technology / v.53 no.2 / pp.399-413 / 2021
  • The solution of the time-dependent neutron diffusion equation can be approximated using quasi-static methods that factorise the neutron flux as the product of a time-dependent amplitude function and a shape function that depends on both space and time. A generalization of this technique is the updated modal method. This strategy assumes that the neutron flux can be decomposed into a sum of amplitudes multiplied by shape functions. These functions, known as modes, come from the solution of the eigenvalue problems associated with the static neutron diffusion equation and are updated along the transient. In previous works, the time step used to update the modes was set to a fixed value, which requires small time-steps to obtain accurate results and consequently incurs a high computational cost. In this work, we propose an adaptive time-step control that automatically reduces the time-step when the algorithm detects large errors and increases it when small steps are unnecessary. Several strategies for computing the mode-updating time step are proposed, and their performance is tested for different transients in benchmark reactors with rectangular and hexagonal geometry.
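
A generic version of the adaptive time-step loop described above, shrinking the step on large local error estimates and growing it when the error is comfortably below tolerance, might look as follows; the control constants and callbacks are illustrative assumptions, not the paper's modal update criteria.

```python
def adaptive_march(step, estimate_error, t_end, dt0, tol,
                   shrink=0.5, grow=1.5, dt_min=1e-6, dt_max=1.0):
    """Error-controlled time marching sketch.
    step(t, dt) advances the state by dt; estimate_error(t, dt) returns a
    local error estimate for the candidate step."""
    t, dt = 0.0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)              # do not overshoot the end time
        err = estimate_error(t, dt)
        if err > tol and dt > dt_min:
            dt = max(dt * shrink, dt_min)    # large error: reduce the step
            continue                         # retry the step with smaller dt
        step(t, dt)                          # error acceptable: take the step
        t += dt
        if err < 0.1 * tol:
            dt = min(dt * grow, dt_max)      # comfortable margin: enlarge step
    return t
```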

8.1 Gbps High-Throughput and Multi-Mode QC-LDPC Decoder based on Fully Parallel Structure

  • Jung, Yongmin;Jung, Yunho;Lee, Seongjoo;Kim, Jaeseok
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.11 / pp.78-89 / 2013
  • This paper proposes a high-throughput, multi-mode quasi-cyclic (QC) low-density parity-check (LDPC) decoder based on a fully parallel structure. The proposed QC-LDPC decoder employs the fully parallel structure to provide very high throughput. The high interconnection complexity, a general problem of fully parallel structures, is resolved by using a broadcasting-based sum-product algorithm and a proposed low-complexity cyclic shift network. The high hardware complexity caused by the large number of check node and variable node processors is reduced by a proposed combined check and variable node processor (CCVP). The decoder supports multi-mode decoding through a routing-based interconnection network, a flexible CCVP, and a flexible cyclic shift network. Operating at a 100 MHz clock frequency, the proposed QC-LDPC decoder provides 8.1 Gbps throughput for a (1944, 1620) QC-LDPC code.
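
In a QC-LDPC code, each Z x Z block of the parity-check matrix is a cyclic shift of the identity, so routing messages between node processors reduces to rotating length-Z message blocks; the sketch below shows this shift in software. The paper's low-complexity shift network and CCVP are hardware structures not modelled here.

```python
def cyclic_shift(block, s):
    """Rotate a length-Z message block by shift s (a software barrel shifter)."""
    s %= len(block)
    return block[s:] + block[:s]

# Example: messages for one Z=8 block column, rotated by that block's shift offset.
messages = list(range(8))
print(cyclic_shift(messages, 3))  # [3, 4, 5, 6, 7, 0, 1, 2]
```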

System Structure and Reliability Optimization of VVVF Urban Transit Brake System Through Cost Function Construction

  • Kim, Se-Hoon;Kim, Hyun-Jung;Bae, Chul-Ho;Lee, Jung-Hwan;Lee, Ho-Yong;Suh, Myung-Won
    • Transactions of the Korean Society of Automotive Engineers / v.15 no.3 / pp.63-71 / 2007
  • During the design phase of a product, reliability and design engineers are called upon to evaluate the reliability of the system. When the estimated reliability or cost is inadequate, the question of how to meet the target reliability arises; this becomes a problem of reliability allocation and system structure design. This study proposes an optimization methodology for achieving target reliability at minimum cost through the construction of a system cost function, in which the total cost is the sum of the initial, repair, and maintenance costs. Using this cost function, an optimization problem over system structure design and reliability allocation is constructed, solved with a multi-island genetic algorithm (MIGA), and applied to an urban transit brake system. The current urban transit brake system is a series system, the simplest and perhaps most common configuration, but it demands high component reliability and maintenance cost because all components must be operating to ensure system operation. This study therefore makes a comparison by applying a k-out-of-n structure to the brake system. The presented methodology can be a useful tool for aiding reliability and design engineers in their decision-making.
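
The series versus k-out-of-n comparison in this abstract rests on a standard reliability formula: a series system needs all n components working, while a k-out-of-n system needs at least k of them. A minimal sketch with a hypothetical component reliability p follows; the paper's cost function and MIGA optimization are not reproduced.

```python
from math import comb

def series_reliability(p, n):
    """All n identical components of reliability p must work."""
    return p ** n

def k_out_of_n_reliability(p, n, k):
    """At least k of n identical components must work (binomial sum)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.95
print(series_reliability(p, 4))          # ~0.8145: series fails if any component fails
print(k_out_of_n_reliability(p, 4, 3))   # ~0.9860: 3-out-of-4 tolerates one failure
```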

Selecting Target Sites for Non-point Source Pollution Management Using Analytic Hierarchy Process

  • Shin, Jung-Bum;Park, Seung-Woo;Kim, Hak-Kwan;Choi, Ra-Young
    • Journal of The Korean Society of Agricultural Engineers / v.49 no.3 / pp.79-88 / 2007
  • This paper suggests a hierarchical method for selecting target sites for nonpoint source pollution management, considering factors that reflect the interrelationships of significant outflow characteristics of nonpoint source pollution at given sites. The factors are land slope, delivery distance to the outlet, effective rainfall, impervious area ratio, and soil loss. The weight of each factor was calculated by an analytic hierarchy process (AHP) algorithm, and the resulting influencing index was defined as the sum of the products of each factor and its computed weight; sites with higher index values are proposed as targets for nonpoint source pollution management. The proposed method was applied to the Baran HP#6 watershed, located southwest of Suwon. The Agricultural Nonpoint Pollution Source (AGNPS) model was also applied to identify sites contributing significantly to the nonpoint source pollution loads from the watershed. The spatial correlation between the two results was analyzed using Moran's I values, which were 0.38-0.45 for total nitrogen (T-N) and 0.15-0.22 for total phosphorus (T-P). The results showed that the two independent estimates for sites within the test watershed were highly correlated and that the proposed hierarchical method may be applied to select target sites for nonpoint source pollution management.
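
The AHP weighting and the weighted-sum influencing index can be sketched as below: weights come from the principal eigenvector of a reciprocal pairwise-comparison matrix, and a site's index is the sum of the products of its normalised factor values and those weights. The comparison matrix and factor scores here are made up for illustration, not the paper's data.

```python
import numpy as np

def ahp_weights(A):
    """Principal right eigenvector of the pairwise comparison matrix,
    normalised to sum to 1."""
    vals, vecs = np.linalg.eig(A)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# Hypothetical reciprocal pairwise-comparison matrix for 3 of the factors.
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])
w = ahp_weights(A)

# Influencing index of a site = sum over factors of (factor value x weight).
site_factors = np.array([0.8, 0.4, 0.6])   # hypothetical normalised factor scores
index = float(site_factors @ w)
print(w, index)   # sites with higher index are proposed as management targets
```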

Performance Improvement in Speech Recognition by Weighting HMM Likelihood

  • Kwon, Tae-Hee;Ko, Han-Seok
    • The Journal of the Acoustical Society of Korea / v.22 no.2 / pp.145-152 / 2003
  • In this paper, assuming that the score of a speech utterance is the product of the HMM log-likelihood and an HMM weight, we propose a new method in which the HMM weights are adapted iteratively, as in general MCE training. The proposed method adjusts the HMM weights for better performance using a delta coefficient defined in terms of a misclassification measure. The parameter estimation and Viterbi algorithms of conventional HMMs can therefore be applied directly to the proposed model by constraining the sum of the HMM weights to equal the number of HMMs in the set. Compared with the general segmental MCE training approach, computation time decreases because fewer parameters must be estimated and gradient calculation along the optimal state sequence is avoided. To evaluate the performance of the HMM-based speech recognizer with weighted HMM likelihoods, we perform Korean isolated digit recognition experiments. The experimental results show better performance than the MCE algorithm with state weighting.
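
A minimal sketch of the scoring rule in this abstract, score = HMM weight x log-likelihood with the weights constrained to sum to the number of HMMs, is given below; the log-likelihoods and weights are hypothetical, and the paper's delta-coefficient update is not reproduced.

```python
import numpy as np

def renormalize(weights):
    """Constrain sum(weights) to equal the number of HMMs in the set."""
    w = np.asarray(weights, dtype=float)
    return w * (len(w) / w.sum())

def recognize(log_likelihoods, weights):
    """Pick the model maximising weight * log-likelihood."""
    scores = renormalize(weights) * np.asarray(log_likelihoods)
    return int(np.argmax(scores))

loglik = [-120.3, -118.9, -119.4]   # hypothetical per-digit HMM log-likelihoods
weights = [1.05, 0.92, 1.03]        # hypothetical adapted HMM weights
print(recognize(loglik, weights))   # index of the recognised digit model
```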

Construction of Research Fronts Using Factor Graph Model in the Biomedical Literature

  • Kim, Hea-Jin;Song, Min
    • Journal of the Korean Society for Information Management / v.34 no.1 / pp.177-195 / 2017
  • This study attempts to infer research fronts (RFs) using a factor graph model based on heterogeneous features. The model suggested by this study infers research fronts containing documents with the potential to be cited multiple times in the future. To this end, documents are represented by bibliographic, network, and content features. Bibliographic features include the number of authors, the number of institutions to which the authors belong, proceedings, the number of author-supplied keywords, funding, the number of references, the number of pages, and the journal impact factor. Network features include degree centrality, betweenness, and closeness within the document network. Content features include keywords extracted from the title and abstract using keyphrase extraction techniques. The model learns these features of a publication and infers whether a document belongs to an RF using the sum-product algorithm and the junction tree algorithm on a factor graph. We experimentally demonstrate that, when predicting RFs, the factor graph model predicted more densely connected documents than RFs constructed using a traditional bibliometric approach. Our results also indicate that the documents predicted by the factor graph model exhibit higher degree centrality and betweenness among RFs.
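
On a tree-structured factor graph, a single sum-product pass yields exact marginals; the toy example below computes the marginal of one binary variable from hypothetical unary and pairwise potentials. The paper's factor graph over bibliographic, network, and content features is far larger, and the junction tree step needed for loopy structures is not shown.

```python
import numpy as np

# Two binary variables x1, x2 with factors f1(x1), f12(x1, x2), f2(x2).
f1 = np.array([0.6, 0.4])                 # hypothetical unary potential on x1
f2 = np.array([0.3, 0.7])                 # hypothetical unary potential on x2
f12 = np.array([[0.9, 0.1],               # hypothetical pairwise potential,
                [0.2, 0.8]])              # indexed as f12[x1, x2]

# Sum-product messages toward x1: the leaf variable x2 forwards its unary
# factor; the pairwise factor sums over x2.
m_x2_to_f12 = f2
m_f12_to_x1 = f12 @ m_x2_to_f12           # sum over x2 of f12(x1, x2) * msg(x2)

# Marginal of x1 = product of incoming messages, normalised.
p_x1 = f1 * m_f12_to_x1
p_x1 /= p_x1.sum()
print(p_x1)   # e.g. p_x1[1] could be read as P(document belongs to an RF)
```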