• Title/Summary/Keyword: Gradient method algorithm


Nulling algorithm design using approximated gradient method (근사화된 Gradient 방법을 사용한 널링 알고리즘 설계)

  • Shin, Chang Eui; Choi, Seung Won
    • Journal of Korea Society of Digital Industry and Information Management / v.9 no.1 / pp.95-102 / 2013
  • This paper covers a nulling algorithm in which the nulling points are assumed to be known. Conventionally, a nulling algorithm based on solving a matrix equation has been used, but its computational complexity is a drawback, so we adopt a gradient method to reduce the amount of computation. To reduce it further, we propose an approximated gradient method that exploits properties of the trigonometric functions. The proposed method shows the same performance as the conventional method while requiring half the computation when the number of antennas is 20 and the number of nulling points is 1. In addition, it virtually eliminates the trigonometric function evaluations, which are a major burden in hardware implementations such as FPGAs (Field Programmable Gate Arrays). By applying the algorithm in a multi-cell environment, beamforming gain can be obtained and interference reduced at the same time, so the algorithm performs well at cell boundaries.
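
As a rough illustration of the non-approximated baseline, the sketch below applies a plain gradient update to suppress an array's response toward a known nulling direction for a 20-element half-wavelength ULA. The steering-vector model, the step size, and the `gradient_nulling` helper are illustrative assumptions; the paper's trigonometric approximation is not reproduced here.

```python
import numpy as np

def steering_vector(n_antennas, theta):
    """Half-wavelength ULA steering vector for arrival angle theta (radians)."""
    return np.exp(1j * np.pi * np.arange(n_antennas) * np.sin(theta))

def gradient_nulling(n_antennas=20, theta_desired=0.0, theta_null=np.deg2rad(30.0),
                     mu=0.01, n_iter=500):
    """Start from the desired-direction beam and push a null toward theta_null
    with a plain gradient update on |w^H a(theta_null)|^2."""
    a_null = steering_vector(n_antennas, theta_null)
    w = steering_vector(n_antennas, theta_desired) / n_antennas
    for _ in range(n_iter):
        # gradient of |w^H a_null|^2 with respect to conj(w) is a_null * (a_null^H w)
        w = w - mu * a_null * np.vdot(a_null, w)
        w = w / np.linalg.norm(w)              # keep unit-norm weights
    return w

w = gradient_nulling()
print("gain toward null  :", abs(np.vdot(steering_vector(20, np.deg2rad(30.0)), w)))
print("gain toward target:", abs(np.vdot(steering_vector(20, 0.0), w)))
```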

Improving the Training Performance of Neural Networks by using Hybrid Algorithm (하이브리드 알고리즘을 이용한 신경망의 학습성능 개선)

  • Kim, Weon-Ook; Cho, Yong-Hyun; Kim, Young-Il; Kang, In-Ku
    • The Transactions of the Korea Information Processing Society / v.4 no.11 / pp.2769-2779 / 1997
  • This paper proposes an efficient method for improving the training performance of neural networks using a hybrid of the conjugate gradient backpropagation algorithm and the dynamic tunneling backpropagation algorithm. The conjugate gradient backpropagation algorithm, a fast gradient algorithm, is applied for high-speed optimization, and the dynamic tunneling backpropagation algorithm, a deterministic method based on the tunneling phenomenon, is applied for global optimization. When the conjugate gradient backpropagation algorithm converges to a local minimum, a new initial point for escaping that minimum is estimated by the dynamic tunneling backpropagation algorithm. The proposed method has been applied to parity check and pattern classification problems. Simulation results show that it outperforms both the gradient descent backpropagation algorithm and a hybrid of gradient descent and dynamic tunneling backpropagation, and that it converges to the global minimum more often than the gradient descent backpropagation algorithm.
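
The sketch below illustrates the general alternation the abstract describes: a conjugate gradient phase (via SciPy) run to a local minimum, followed by an escape step before re-optimizing. The toy objective and the random-direction escape step are stand-ins of my own; the paper's deterministic dynamic tunneling dynamics are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_train(loss, grad, w0, n_rounds=5, step=0.5, seed=0):
    """Alternate a conjugate gradient phase with a crude escape step.

    The random-direction escape below is a simplified stand-in for the
    paper's deterministic dynamic tunneling phase, not a reproduction of it."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    best_w, best_f = w, loss(w)
    for _ in range(n_rounds):
        res = minimize(loss, w, jac=grad, method="CG")     # fast local optimization
        if res.fun < best_f:
            best_w, best_f = res.x, res.fun
        d = rng.standard_normal(res.x.size)                # move away from the local minimum
        w = res.x + step * d / np.linalg.norm(d)
    return best_w, best_f

# toy multimodal objective standing in for a network's training loss
loss = lambda w: np.sum(w ** 2) + 2.0 * np.sum(np.sin(3.0 * w) ** 2)
grad = lambda w: 2.0 * w + 12.0 * np.sin(3.0 * w) * np.cos(3.0 * w)
print(hybrid_train(loss, grad, w0=np.array([2.0, -1.5])))
```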


Compression of Image Data Using Neural Networks based on Conjugate Gradient Algorithm and Dynamic Tunneling System

  • Cho, Yong-Hyun; Kim, Weon-Ook; Bang, Man-Sik; Kim, Young-il
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.06a / pp.740-749 / 1998
  • This paper proposes image data compression using neural networks based on the conjugate gradient method and a dynamic tunneling system. The conjugate gradient method is applied for high-speed optimization, and the dynamic tunneling algorithm, a deterministic method based on the tunneling phenomenon, is applied for global optimization. When the conjugate gradient method converges to a local minimum, a new initial point for escaping that minimum is estimated by the dynamic tunneling system. The proposed method has been applied to the compression of 12 × 12 pixel image blocks. Simulation results show that the proposed network has better learning performance than a network trained with the conventional BP learning algorithm.

Modified Watershed Algorithm Considering Zero-Crossing of Gradient (Gradient의 Zero-Crossing을 이용한 개선된 Watershed Algorithm)

  • Park, Dong-In; Ko, Yun-Ho; Park, Young-Woo
    • Proceedings of the IEEK Conference / 2007.07a / pp.389-390 / 2007
  • In this paper, we propose a modified watershed algorithm that obtains exact region edges. The proposed method adjusts the priority at zero-crossing points of the gradient so that the region decision for those points is postponed. We compare the proposed method with a previous method and show that it extracts more accurate region edges.
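
A minimal sketch of the general idea follows, using SciPy and scikit-image: the flooding priority (the gradient magnitude) is raised at zero-crossing pixels so that their region assignment is postponed. The use of the Laplacian's zero-crossings, the penalty weight, and the `modified_watershed` helper are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def modified_watershed(image, markers, penalty=0.5):
    """Watershed on the gradient magnitude, with flooding delayed at
    zero-crossings of the Laplacian (a stand-in here for the paper's
    'zero-crossing of gradient' criterion)."""
    img = image.astype(float)
    grad = ndi.gaussian_gradient_magnitude(img, sigma=1.0)
    lap = ndi.gaussian_laplace(img, sigma=1.0)
    zc = np.zeros_like(lap, dtype=bool)                 # zero-crossing mask
    zc[:-1, :] |= np.signbit(lap[:-1, :]) != np.signbit(lap[1:, :])
    zc[:, :-1] |= np.signbit(lap[:, :-1]) != np.signbit(lap[:, 1:])
    # raising the priority surface at zero-crossings postpones their flooding
    return watershed(grad + penalty * grad.max() * zc, markers)

# usage sketch: two dark blobs on a brighter background, seeded by local minima
y, x = np.mgrid[0:80, 0:80]
img = np.minimum((x - 25) ** 2 + (y - 40) ** 2, (x - 55) ** 2 + (y - 40) ** 2)
markers, _ = ndi.label(img < 30)
labels = modified_watershed(img, markers)
print(np.unique(labels))
```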


Tuning Method of the Membership Function for FLC using a Gradient Descent Algorithm (Gradient Descent 알고리즘을 이용한 퍼지제어기의 멤버십함수 동조 방법)

  • Choi, Hansoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.12 / pp.7277-7282 / 2014
  • In this study, a gradient descent algorithm was used to analyze an FLC and to capture the effects of the nonlinear parameters that alter the antecedent and consequent fuzzy variables of the FLC. The controller parameters are tuned iteratively by the gradient descent algorithm. The FLC is a two-input, one-output system consisting of 7 membership functions and 49 rules; it adopts min-max inference and triangular membership functions with 13 quantization levels.
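
The following is a much smaller illustrative sketch of membership-function tuning by gradient descent: a single-input controller with triangular antecedents and singleton consequents, tuned by numerical gradient descent on a squared-error objective. The structure, parameter choices, and helper names are assumptions and do not reproduce the paper's 7-membership-function, 49-rule controller.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def flc_output(x, centers, consequents, width=0.5):
    """Single-input FLC: triangular antecedents, singleton consequents,
    weighted-average defuzzification."""
    mu = np.array([tri(x, c - width, c, c + width) for c in centers])
    return np.sum(mu * consequents) / (np.sum(mu) + 1e-9)

def tune(centers, consequents, x_train, y_train, lr=0.05, epochs=200, eps=1e-4):
    """Numerical gradient descent on the mean squared error over all parameters."""
    params = np.concatenate([centers, consequents]).astype(float)
    n_c = len(centers)

    def mse(p):
        pred = np.array([flc_output(x, p[:n_c], p[n_c:]) for x in x_train])
        return np.mean((pred - y_train) ** 2)

    for _ in range(epochs):
        grad = np.array([(mse(params + eps * e) - mse(params - eps * e)) / (2 * eps)
                         for e in np.eye(params.size)])
        params -= lr * grad
    return params[:n_c], params[n_c:]

# usage: fit y = sin(x) on [0, pi] with 5 rules
x_train = np.linspace(0.0, np.pi, 20)
centers, consequents = tune(np.linspace(0.0, np.pi, 5), np.zeros(5),
                            x_train, np.sin(x_train))
print(round(flc_output(np.pi / 2, centers, consequents), 3))   # compare with sin(pi/2) = 1
```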

Automatic generation of Fuzzy Parameters Using Genetic and gradient Optimization Techniques (유전과 기울기 최적화기법을 이용한 퍼지 파라메터의 자동 생성)

  • Ryoo, Dong-Wan; La, Kyung-Taek; Chun, Soon-Yong; Seo, Bo-Hyeok
    • Proceedings of the KIEE Conference / 1998.07b / pp.515-518 / 1998
  • This paper proposes a new hybrid algorithm for auto-tuning fuzzy controllers to improve their performance. The presented algorithm automatically estimates the optimal values of the membership functions, fuzzy rules, and scaling factors of a fuzzy controller using a genetic-MGM algorithm whose aim is to improve search efficiency by combining genetic and modified gradient optimization techniques. The proposed genetic-MGM algorithm is based on both the standard genetic algorithm and a gradient method: if the best point does not improve near an optimal value over a given number of generations, the search switches from the genetic algorithm to the MGM (Modified Gradient Method) algorithm, which works with a reduced number of variables, starting from the current best point. Because the number of variables is reduced, the computation is faster than with a genetic algorithm alone, and the disadvantages of genetic algorithms are overcome. Simulation results verify the validity of the presented method.
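
A minimal sketch of the switching idea is given below: a real-coded genetic search runs until the best fitness stalls, then a numerical gradient descent refines the best individual. Plain fixed-step gradient descent stands in for the paper's MGM, and the GA operators, stall criterion, and test function are illustrative assumptions.

```python
import numpy as np

def genetic_then_gradient(f, dim=4, pop_size=30, gens=100, patience=10,
                          lr=0.002, gd_steps=500, eps=1e-4, seed=0):
    """Real-coded GA; when the best fitness stalls for `patience` generations,
    switch to a fixed-step numerical gradient descent started from the best
    individual (a plain stand-in for the paper's Modified Gradient Method)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    best_x, best_f, stall = None, np.inf, 0
    for _ in range(gens):
        fit = np.array([f(x) for x in pop])
        i = int(np.argmin(fit))
        if fit[i] < best_f - 1e-9:
            best_x, best_f, stall = pop[i].copy(), fit[i], 0
        else:
            stall += 1
        if stall >= patience:                                     # stagnation: hand over to the gradient phase
            break
        parents = pop[np.argsort(fit)[:pop_size // 2]]            # truncation selection
        children = parents + rng.normal(0.0, 0.3, parents.shape)  # Gaussian mutation
        pop = np.vstack([parents, children])
    x = best_x
    for _ in range(gd_steps):                                     # gradient refinement phase
        g = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                      for e in np.eye(dim)])
        x = x - lr * g
    return x, f(x)

rastrigin = lambda x: 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))
print(genetic_then_gradient(rastrigin))
```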


Optimum Design of Frame Structures Using Generalized Transfer Stiffness Coefficient Method and Genetic Algorithm (일반화 전달강성계수법과 유전알고리즘을 이용한 골조구조물의 최적설계)

  • Choi, Myung-Soo
    • Journal of Power System Engineering / v.9 no.4 / pp.202-208 / 2005
  • The genetic algorithm (GA), one of the most popular optimization algorithms, has been used to solve a variety of optimization problems because it does not require the gradient of the objective function and finds global solutions more easily than gradient-based optimization algorithms do. However, an optimization method combining the GA with the finite element method (FEM) requires a great deal of computation time for structural design problems with many design variables, constraints, and degrees of freedom. To overcome this drawback, the author developed a computer program that optimizes frame structures using the GA and the generalized transfer stiffness coefficient method. To confirm its effectiveness, the program was applied to the optimum design of plane frame structures, and the computational results were compared with those of an iterative design.


Study on Optimum Design of Steel Plane Frame By Using Gradient Projection Method (Gradient Projection법을 이용한 철골평면구조물의 최적설계연구)

  • Lee, Han-Seon; Hong, Sung-Mok
    • Proceedings of the Computational Structural Engineering Institute Conference / 1994.04a / pp.38-45 / 1994
  • The general conceptual formulation of structural optimization is presented, and an algorithm using the gradient projection method and design sensitivity analysis is discussed. Minimum-weight design examples of a six-story steel plane frame illustrate the application of this algorithm. Its advantages, such as providing marginal cost and design sensitivity information in addition to the system analysis, are explained.
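
The sketch below shows a generic Rosen-style gradient projection step for linear inequality constraints, which is the core mechanism named in the abstract; the example problem, step rule, and `gradient_projection` helper are illustrative assumptions rather than the paper's frame-design formulation.

```python
import numpy as np

def gradient_projection(grad, x, A, b, lr=0.1, n_iter=200, tol=1e-8):
    """Gradient projection for linear inequality constraints A x <= b: the
    negative gradient is projected onto the null space of the active
    constraint normals so the iterate slides along the constraint boundary."""
    for _ in range(n_iter):
        g = grad(x)
        active = A[np.isclose(A @ x, b, atol=1e-7)]        # currently binding rows of A
        if active.size:
            P = np.eye(x.size) - active.T @ np.linalg.pinv(active @ active.T) @ active
            d = -P @ g
        else:
            d = -g
        if np.linalg.norm(d) < tol:
            break
        step = lr
        while np.any(A @ (x + step * d) > b + 1e-9) and step > 1e-12:
            step *= 0.5                                    # shrink until the step stays feasible
        x = x + step * d
    return x

# usage: minimize (x0 - 2)^2 + (x1 - 2)^2 subject to x0 + x1 <= 2
grad = lambda x: 2.0 * (x - 2.0)
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
print(gradient_projection(grad, np.zeros(2), A, b))        # expected: roughly [1, 1]
```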


An Analysis of the Optimal Control of Air-Conditioning System with Slab Thermal Storage by the Gradient Method Algorithm (구배법 알고리즘에 의한 슬래브축열의 최적제어 해석)

  • Jung, Jae-Hoon
    • Korean Journal of Air-Conditioning and Refrigeration Engineering / v.20 no.8 / pp.534-540 / 2008
  • In this paper, the optimal bang-bang control problem of an air-conditioning system with slab thermal storage is formulated by the gradient method, and the numerical solution obtained by the gradient method algorithm is compared with the analytic solution obtained from the maximum principle. In the analytic solution the control variable changes discontinuously at the start of the thermal storage operation, whereas the numerical solution is continuous; the numerical solution reproduces the analytic solution when a strict convergence tolerance is applied. The gradient method is therefore considered effective for analyzing the optimal bang-bang control of large-scale systems such as air-conditioning systems with slab thermal storage.
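
As a rough illustration of solving a bounded control problem with a gradient method, the sketch below runs a projected (clipped) gradient iteration on a toy first-order thermal model; the model, cost weights, and the use of numerical gradients in place of adjoint equations are assumptions for illustration only.

```python
import numpy as np

def simulate(u, T0=20.0, T_amb=30.0, k=0.1, q=5.0, dt=1.0):
    """Toy first-order slab temperature model driven by a cooling control u in [0, 1]."""
    T = np.empty(u.size + 1)
    T[0] = T0
    for i, ui in enumerate(u):
        T[i + 1] = T[i] + dt * (k * (T_amb - T[i]) - q * ui)
    return T

def cost(u, T_ref=22.0, w_energy=0.05):
    """Tracking error around the reference temperature plus an energy penalty."""
    T = simulate(u)
    return np.sum((T[1:] - T_ref) ** 2) + w_energy * np.sum(u)

def gradient_method(n_steps=24, lr=0.05, iters=300, eps=1e-5):
    """Projected gradient iteration on the discretized control profile: a
    normalized gradient step followed by clipping to the [0, 1] bounds.
    Numerical gradients stand in for an adjoint-based gradient."""
    u = np.full(n_steps, 0.5)
    for _ in range(iters):
        g = np.array([(cost(u + eps * e) - cost(u - eps * e)) / (2 * eps)
                      for e in np.eye(n_steps)])
        u = np.clip(u - lr * g / (np.linalg.norm(g) + 1e-12), 0.0, 1.0)
    return u

print(np.round(gradient_method(), 2))
```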

Algorithm for Stochastic Neighbor Embedding: Conjugate Gradient, Newton, and Trust-Region

  • Je, Hongmo; Nam, Kijoeng; Choi, Seungjin
    • Proceedings of the Korean Information Science Society Conference / 2004.10b / pp.697-699 / 2004
  • Stochastic Neighbor Embedding (SNE) is a probabilistic method for mapping a high-dimensional data space into a low-dimensional representation while preserving neighbor identities. Although SNE has several useful properties, the naive gradient-based SNE algorithm has the critical limitation that it converges very slowly. To overcome this limitation, faster optimization methods should be considered; using a trust-region method, we obtain what we call fast TR-SNE. This paper also presents two further optimization methods, the conjugate gradient method and Newton's method, for building fast SNE algorithms. We compared the three methods and conclude that TR-SNE is the best among them in terms of speed and stability, and we present several visualization experiments that confirm the stability of TR-SNE.
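
A compact sketch of SNE's cost and gradient is given below, optimized with SciPy's CG, Newton-CG, and trust-constr routines as stand-ins for the conjugate gradient, Newton, and trust-region variants discussed in the paper; the fixed kernel width and the small random dataset are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

def _cond_probs(D2, sigma2):
    """Row-wise conditional probabilities p_{j|i} from squared distances."""
    P = np.exp(-D2 / (2.0 * sigma2))
    np.fill_diagonal(P, 0.0)
    return P / P.sum(axis=1, keepdims=True)

def sne_cost_grad(y_flat, X, sigma2=1.0, dim=2):
    """KL-divergence cost of SNE and its analytic gradient with respect to the
    low-dimensional coordinates (fixed per-point variance for simplicity)."""
    n = X.shape[0]
    Y = y_flat.reshape(n, dim)
    P = _cond_probs(squareform(pdist(X, "sqeuclidean")), sigma2)
    Q = _cond_probs(squareform(pdist(Y, "sqeuclidean")), 0.5)
    eps = 1e-12
    cost = np.sum(P * np.log((P + eps) / (Q + eps)))
    M = (P - Q) + (P - Q).T                  # p_{j|i} - q_{j|i} + p_{i|j} - q_{i|j}
    grad = 2.0 * (M.sum(axis=1)[:, None] * Y - M @ Y)
    return cost, grad.ravel()

X = np.random.default_rng(0).normal(size=(60, 10))
y0 = np.random.default_rng(1).normal(scale=1e-2, size=60 * 2)
for method in ("CG", "Newton-CG", "trust-constr"):   # trust-constr as the trust-region variant
    res = minimize(sne_cost_grad, y0, args=(X,), jac=True, method=method,
                   options={"maxiter": 100})
    print(method, round(res.fun, 3))
```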
