• Title/Summary/Keyword: New Algorithm


Advanced Algorithm for $H_{\infty}$ Optimal Controller Synthesis ($H_{\infty}$ 최적 제어기 구성을 위한 개선된 알고리즘)

  • 김용규;양도철;유창근;장호성
    • Proceedings of the IEEK Conference / 2002.06e / pp.149-152 / 2002
  • The aim of this study is to analyse the problems that arise when the classical algorithm is used to synthesize the $H_{\infty}$ optimal controller. The results of this analysis were applied to the construction of the new $H_{\infty}$ optimal controller synthesis algorithm introduced in this study, and the controller produced by the new algorithm is investigated and compared with the one produced by the classical algorithm. In particular, robustness, the central concern of robust control, is systematically described through the construction of the classical $H_{\infty}$ optimal controller synthesis algorithm. In addition, flow charts for both the classical algorithm and the new one are discussed for synthesizing the $H_{\infty}$ optimal controller.


Efficient LDPC Decoding Algorithm Using Node Monitoring (노드 모니터링에 의한 효율적인 LDPC 디코딩 알고리듬)

  • Suh, Hee-Jong
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.10 no.11 / pp.1231-1238 / 2015
  • In this paper, we propose an efficient algorithm that uses node monitoring (NM) and piecewise linear function approximation to reduce the complexity of LDPC decoding. The proposed NM algorithm is based on a new node-threshold method combined with the message-passing algorithm, and piecewise linear function approximation is used to reduce the computational complexity. The new algorithm was simulated to verify its efficiency; according to the simulation results, the complexity of the NM algorithm is improved by about 20% compared with well-known methods.
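
The piecewise linear approximation idea can be illustrated with a small sketch. The kernel nonlinearity of sum-product LDPC decoding is φ(x) = −ln tanh(x/2); the breakpoints and the node-monitoring threshold below are illustrative assumptions, not the paper's values.

```python
import bisect
import math

def phi(x):
    # Nonlinear kernel of sum-product LDPC decoding: phi(x) = -ln(tanh(x/2)).
    return -math.log(math.tanh(x / 2.0))

# Illustrative breakpoints (not from the paper); a real decoder would tune these.
XS = [0.1, 0.5, 1.0, 2.0, 4.0, 8.0]
YS = [phi(x) for x in XS]

def phi_pwl(x):
    """Piecewise linear approximation of phi by interpolating between breakpoints."""
    x = min(max(x, XS[0]), XS[-1])                     # clamp to the covered range
    i = min(max(bisect.bisect_right(XS, x), 1), len(XS) - 1)
    x0, x1 = XS[i - 1], XS[i]
    t = (x - x0) / (x1 - x0)
    return YS[i - 1] + t * (YS[i] - YS[i - 1])

def node_converged(llr, threshold=6.0):
    # Node monitoring (hypothetical rule): skip updating nodes whose LLR
    # magnitude already exceeds a confidence threshold.
    return abs(llr) >= threshold
```

The approximation is exact at the breakpoints and replaces the transcendental evaluation with a table lookup and one multiply-add, which is where the complexity saving comes from.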

A Study on Memetic Algorithm-Based Scheduling for Minimizing Makespan in Unrelated Parallel Machines without Setup Time (작업준비시간이 없는 이종 병렬설비에서 총 소요 시간 최소화를 위한 미미틱 알고리즘 기반 일정계획에 관한 연구)

  • Tehie Lee;Woo-Sik Yoo
    • Journal of the Korea Safety Management & Science / v.25 no.2 / pp.1-8 / 2023
  • This paper proposes a novel scheduling model for the unrelated parallel machine scheduling problem without setup times, minimizing the total completion time, also known as the makespan. This problem is NP-complete, and to date most approaches used in real-life situations have been based on the operator's experience or on simple heuristics. The new model is based on the memetic algorithm proposed by P. Moscato in 1989, a hybrid that combines a genetic algorithm with local search optimization. The model is tested on randomly generated datasets and compared with the optimal solution and four scheduling models from the literature: three rule-based heuristic algorithms and a genetic-algorithm-based scheduling model. The test results show that the new model outperforms the models from the literature.
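
The memetic structure described above, a genetic algorithm whose offspring are refined by local search, can be sketched as follows. The representation (one machine index per job), the one-point crossover, and the move-based local search are our own illustrative choices, not the paper's exact operators.

```python
import random

def makespan(assign, p):
    """Max machine load; p[j][m] = processing time of job j on machine m."""
    loads = [0.0] * len(p[0])
    for j, m in enumerate(assign):
        loads[m] += p[j][m]
    return max(loads)

def local_search(assign, p):
    # Memetic refinement: keep reassigning single jobs while that lowers the makespan.
    assign = assign[:]
    improved = True
    while improved:
        improved = False
        best = makespan(assign, p)
        for j in range(len(assign)):
            old = assign[j]
            for m in range(len(p[0])):
                if m == old:
                    continue
                assign[j] = m
                if makespan(assign, p) < best:
                    best = makespan(assign, p)
                    old = m
                    improved = True
                else:
                    assign[j] = old
    return assign

def memetic(p, pop_size=12, gens=60, seed=1):
    rng = random.Random(seed)
    n_j, n_m = len(p), len(p[0])
    pop = [local_search([rng.randrange(n_m) for _ in range(n_j)], p)
           for _ in range(pop_size)]
    for _ in range(gens):
        a, b = rng.sample(pop, 2)
        cut = rng.randrange(1, n_j)
        child = a[:cut] + b[cut:]                 # one-point crossover
        if rng.random() < 0.2:                    # light mutation
            child[rng.randrange(n_j)] = rng.randrange(n_m)
        child = local_search(child, p)            # the "memetic" step
        worst = max(range(pop_size), key=lambda i: makespan(pop[i], p))
        if makespan(child, p) < makespan(pop[worst], p):
            pop[worst] = child
    return min(pop, key=lambda c: makespan(c, p))
```

For example, with `p = [[2, 10], [10, 2], [3, 3]]` the optimal makespan is 5 (jobs 0 and 2 on machine 0, job 1 on machine 1), which the sketch finds.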

Modified K-means algorithm (수정된 K-means 알고리즘)

  • Kim Hyungcheol;Cho CheHwang
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.115-118 / 1999
  • One of the typical methods for designing a codebook is the K-means algorithm. This algorithm has the drawbacks that it converges to a locally optimal codebook and that its performance is largely decided by the initial codebook. D. Lee's method is almost the same as the K-means algorithm except for a modification of the distance value, and both methods keep the distance value fixed throughout all iterations. After many iterations, the distance between new and old codevectors is much shorter than in the early stage, so the new codevectors are hardly affected by the distance value; codevectors decided in the early learning iterations, however, are strongly affected by it. It is therefore not appropriate to fix the distance value for all iterations. In this paper, we propose a new algorithm that uses a different distance value between codevectors for a limited number of iterations in the early stage of learning. The experimental results show that the proposed method designs better codebooks than the conventional K-means algorithm.
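
A minimal one-dimensional sketch of K-means (LBG-style) codebook training in which the distance value used for convergence is allowed to differ per iteration, loose early and tight later. The schedule itself is our illustrative assumption, not the authors' rule.

```python
def vq_kmeans(data, codebook, eps_schedule):
    """1-D vector-quantization K-means; eps_schedule supplies a per-iteration
    distance value (large in the early iterations, small later)."""
    for eps in eps_schedule:
        # Assign each sample to its nearest codevector.
        clusters = [[] for _ in codebook]
        for x in data:
            nearest = min(range(len(codebook)), key=lambda c: abs(x - codebook[c]))
            clusters[nearest].append(x)
        # Recompute codevectors as cluster centroids.
        new_cb = [sum(c) / len(c) if c else codebook[i]
                  for i, c in enumerate(clusters)]
        moved = max(abs(a - b) for a, b in zip(codebook, new_cb))
        codebook = new_cb
        if moved < eps:      # stop once codevectors move less than this iteration's value
            break
    return codebook
```

For instance, `vq_kmeans([0.0, 0.1, 0.2, 10.0, 10.1], [0.0, 1.0], [1.0, 0.5, 0.01])` converges to codevectors near 0.1 and 10.05.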


A New Pivot Algorithm for Star Identification

  • Nah, Jakyoung;Yi, Yu;Kim, Yong Ha
    • Journal of Astronomy and Space Sciences / v.31 no.3 / pp.205-214 / 2014
  • In this study, a star identification algorithm was developed that utilizes pivot patterns instead of apparent magnitude information. The new algorithm consists of a two-step recognition process. In the first step, the brightest star in a sensor image is identified using the orientation of brightness between two stars as the recognition information. In the second step, cell indexes derived from the already-identified brightest star are used as new recognition information to identify dimmer stars. With the cell index information, only a limited portion of the star catalogue database needs to be searched, which enables faster identification of the dimmer stars. The new pivot algorithm does not require calibration of a star's apparent magnitude, yet it is robust to apparent-magnitude errors compared with conventional pivot algorithms, which do require the apparent magnitude information.
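
The cell-index lookup in the second step can be illustrated with a toy catalogue. The star names, offsets, and cell size here are hypothetical; the point is that quantizing a neighbour's offset from the already-identified brightest star restricts the search to one small partition of the catalogue.

```python
import math

CELL = 0.5  # hypothetical cell size (degrees)

def cell_index(dx, dy):
    # Quantize a neighbour's offset from the reference (brightest) star into a grid cell.
    return (math.floor(dx / CELL), math.floor(dy / CELL))

# Hypothetical catalogue: star id -> offset (dx, dy) from the reference star.
NEIGHBOURS = {"STAR_A": (0.30, 0.10), "STAR_B": (1.20, -0.40)}

# Pre-partition the catalogue by cell so identification touches only one bucket.
INDEX = {}
for sid, (dx, dy) in NEIGHBOURS.items():
    INDEX.setdefault(cell_index(dx, dy), []).append(sid)

def candidates(dx, dy):
    """Catalogue stars sharing the cell of an observed offset."""
    return INDEX.get(cell_index(dx, dy), [])
```

An observed offset of (0.31, 0.12) falls in the same cell as STAR_A, so only that bucket is compared, rather than the whole catalogue.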

A New Decision Tree Algorithm Based on Rough Set and Entity Relationship (러프셋 이론과 개체 관계 비교를 통한 의사결정나무 구성)

  • Han, Sang-Wook;Kim, Jae-Yearn
    • Journal of Korean Institute of Industrial Engineers / v.33 no.2 / pp.183-190 / 2007
  • We present a new decision tree classification algorithm based on rough set theory that induces classification rules from core attributes and the relationships between objects. Although decision trees have been widely used in machine learning and artificial intelligence, little research has focused on improving classification quality. We propose a decision tree construction algorithm that can be simplified and that provides improved classification quality, and we compare the new algorithm with the ID3 algorithm in terms of the number of rules.
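
The notion of core attributes from rough set theory can be made concrete: an attribute belongs to the core if removing it leaves some objects indiscernible yet differently classified. The tiny decision table below is our own example, not data from the paper.

```python
def consistent(rows, attrs, decision):
    """True if objects indiscernible on `attrs` always share the same decision."""
    seen = {}
    for r in rows:
        key = tuple(r[a] for a in attrs)
        if seen.setdefault(key, r[decision]) != r[decision]:
            return False
    return True

def core(rows, attrs, decision):
    # Core = attributes whose removal breaks consistency of the decision table.
    return [a for a in attrs
            if not consistent(rows, [b for b in attrs if b != a], decision)]
```

With `{"a": 0, "b": 0, "d": 0}`, `{"a": 0, "b": 1, "d": 1}`, `{"a": 1, "b": 0, "d": 1}`, both `a` and `b` are core; if one attribute merely duplicates another, neither copy is core, which is why core attributes are a natural starting point for tree construction.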

An Efficient Diagnosis Algorithm for High Density Memory (고집적 메모리를 위한 효율적인 고장 진단 알고리즘)

  • Park, Han-Won;Kang, Sung-Ho
    • The Transactions of the Korean Institute of Electrical Engineers D / v.50 no.4 / pp.192-200 / 2001
  • As high-density memories are widely used in various applications, the need for memory repair has increased. In this paper we propose an efficient fault diagnosis algorithm of linear order O(n) that enables memory repair. The new algorithm can distinguish various fault models and identify all the cells involved in a fault. In addition, a new BIST architecture for fault diagnosis is developed, with which fault diagnosis can be performed efficiently. A performance evaluation against previous approaches demonstrates the efficiency of the new algorithm.


A NEW EXPLICIT EXTRAGRADIENT METHOD FOR SOLVING EQUILIBRIUM PROBLEMS WITH CONVEX CONSTRAINTS

  • Muangchoo, Kanikar
    • Nonlinear Functional Analysis and Applications / v.27 no.1 / pp.1-22 / 2022
  • The purpose of this research is to formulate a new proximal-type algorithm for solving the equilibrium problem in a real Hilbert space. The new algorithm is analogous to the well-known two-step extragradient algorithm previously used to solve variational inequalities in Hilbert spaces. The proposed iterative scheme uses a new step-size rule based on local bifunction information instead of Lipschitz constants or a line-search scheme. A strong convergence theorem for the proposed algorithm is proven under mild assumptions on the bifunction. These results are applied to fixed point problems and variational inequality problems. Finally, two test problems are discussed, and the computational performance is reported to show the efficiency and effectiveness of the proposed algorithm.
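
For the variational-inequality special case of the equilibrium problem (f(x, y) = ⟨F(x), y − x⟩), the two-step extragradient template the paper builds on looks as follows. We use a fixed step size and a 1-D box constraint purely for illustration; the paper's contribution is precisely a step-size rule derived from local bifunction information rather than a fixed λ.

```python
def project(x, lo, hi):
    # Projection onto the feasible interval [lo, hi].
    return max(lo, min(hi, x))

def extragradient(F, x0, lam, lo, hi, iters=200):
    """Two-step extragradient: a predictor step with F(x), then a corrector with F(y)."""
    x = x0
    for _ in range(iters):
        y = project(x - lam * F(x), lo, hi)   # predictor (extrapolation point)
        x = project(x - lam * F(y), lo, hi)   # corrector (uses gradient at y)
    return x
```

For the monotone operator F(x) = x on [-1, 5], whose solution is x* = 0, the iterates contract by a factor 1 − λ + λ² per step and converge to 0.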

A New Blind Equalization Algorithm with A Stop-and-Go Flag (Stop-and-Go 플래그를 가지는 새로운 블라인드 등화 알고리즘)

  • Jeong, Young-Hwa
    • The Journal of Information Technology / v.8 no.3 / pp.105-115 / 2005
  • The CMA and MMA blind equalization algorithms inevitably exhibit a large residual error caused by the mismatch with the symbol constellation in the steady state after convergence. The Stop-and-Go algorithm has very good steady-state residual-error characteristics but relatively slow convergence. In this paper, we propose SAG-Flagged MMA, a new adaptive blind equalization algorithm that updates the tap weights as a flagged MMA by applying the flag obtained from the Stop-and-Go algorithm to MMA. Computer simulations confirm that the proposed algorithm outperforms both MMA and the Stop-and-Go algorithm in terms of residual ISI, residual error, and convergence speed. The algorithm uses a new error function based on the decided original constellation instead of the reduced constellation, and simulations also confirm its superiority in residual ISI and convergence speed over CMA-family adaptive blind equalization algorithms, namely the Constant Modulus Algorithm with Carrier Phase Recovery and the Modified CMA (MCMA).
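
A sketch of how a Stop-and-Go flag can gate an MMA error, per real/imaginary component. The 16-QAM axis levels, Sato constant, and dispersion constant are illustrative assumptions, not the paper's values; the flag is 1 only when the decision-directed and Sato errors agree in sign, which is the Stop-and-Go reliability test.

```python
import math

LEVELS = [-3, -1, 1, 3]   # illustrative 16-QAM axis levels
GAMMA = 2.0               # illustrative Sato constant
R2 = 8.2                  # illustrative MMA dispersion constant

def slice_axis(v):
    # Nearest constellation level on one axis (the "decided" value).
    return min(LEVELS, key=lambda s: abs(v - s))

def sag_flag(v):
    """Stop-and-Go flag for one axis: allow the update only when the
    decision-directed error (v - decision) and the Sato error
    (v - GAMMA*sgn(v)) agree in sign."""
    e_dd = v - slice_axis(v)
    e_sato = v - math.copysign(GAMMA, v)
    return 1 if e_dd * e_sato > 0 else 0

def flagged_mma_error(z):
    # MMA error per axis, gated by the Stop-and-Go flag (the SAG-Flagged MMA idea).
    er = sag_flag(z.real) * z.real * (z.real ** 2 - R2)
    ei = sag_flag(z.imag) * z.imag * (z.imag ** 2 - R2)
    return complex(er, ei)
```

The tap update would then be w ← w − μ · e · conj(x) with e = `flagged_mma_error(z)`, so unreliable samples (where the two error signs disagree) leave the taps untouched.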


Tolerance Computation for Process Parameters Considering Loss Cost : The Case of Larger-the-Better Characteristics (손실 비용을 고려한 공정 파라미터 허용차 산출 : 망대 특성치의 경우)

  • Kim, Yong-Jun;Kim, Geun-Sik;Park, Hyung-Geun
    • Journal of Korean Society of Industrial and Systems Engineering / v.40 no.2 / pp.129-136 / 2017
  • With the information technology and automation that have rapidly developed in the manufacturing industries, tens of thousands of quality variables are measured and stored in databases every day. Existing statistical methods, and variable selection and interpretation by experts, place limits on proper judgment. Accordingly, various data mining methods, including decision tree analysis, have been developed in recent years. CART and C5.0 are representative decision tree algorithms, but they have limitations in defining tolerances for continuous explanatory variables, and their target variables are restricted to information that indicates only product quality, such as the rate of defective products. It is therefore essential to develop an algorithm that improves upon CART and C5.0 and can use new quality information such as loss cost. In this study, a new algorithm was developed that not only finds the major variables minimizing the target variable, loss cost, but also overcomes the limitations of CART and C5.0 by systematically defining variable tolerances through three categories of each continuous explanatory variable. Larger-the-better characteristics were assumed in an R programming environment to compare the performance of the new algorithm with the existing ones, and 10 simulations were performed with 1,000 data sets for each variable. The performance of the new algorithm was verified through a mean test of loss cost. The verification shows that, for larger-the-better characteristics, the tolerances of continuous explanatory variables found by the new algorithm lower the loss cost more than those found by the existing algorithms. In conclusion, the new algorithm can be used to find tolerances of continuous explanatory variables that minimize process loss, taking the loss cost of the products into account.
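
The three-category idea can be sketched as an exhaustive search for two cut points on a continuous variable that minimize the within-bin scatter of loss cost; the lowest-mean-loss bin then serves as the tolerance interval. This is our own minimal reading of the approach, not the authors' R implementation.

```python
def sse(values):
    # Within-bin sum of squared deviations of loss cost.
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values)

def three_way_tolerance(x, loss):
    """Try all cut-point pairs (c1 < c2) over observed x values, score each
    3-bin split by total within-bin SSE of loss cost, and return the cuts
    plus the index of the lowest-mean-loss bin (the tolerance interval)."""
    pts = sorted(set(x))
    best = None
    for i in range(len(pts) - 1):
        for j in range(i + 1, len(pts) - 1):
            c1, c2 = pts[i], pts[j]
            bins = ([], [], [])
            for xv, lv in zip(x, loss):
                bins[0 if xv <= c1 else (1 if xv <= c2 else 2)].append(lv)
            if any(not b for b in bins):
                continue            # require all three categories to be populated
            score = sum(sse(b) for b in bins)
            if best is None or score < best[0]:
                best = (score, c1, c2, bins)
    _, c1, c2, bins = best
    tol = min(range(3), key=lambda k: sum(bins[k]) / len(bins[k]))
    return c1, c2, tol
```

For x = [1..6] with loss costs [9, 9, 1, 1, 9, 9], the search recovers cuts at 2 and 4 and selects the middle bin (2, 4] as the low-loss tolerance region.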