• Title/Summary/Keyword: Stochastic gradient

Large-Scale Phase Retrieval via Stochastic Reweighted Amplitude Flow

  • Xiao, Zhuolei;Zhang, Yerong;Yang, Jie
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.11 / pp.4355-4371 / 2020
  • Phase retrieval, recovering a signal from phaseless measurements, is generally considered an NP-hard problem. This paper adopts an amplitude-based nonconvex optimization cost function to develop a new stochastic gradient algorithm, named stochastic reweighted phase retrieval (SRPR). SRPR is a stochastic gradient iteration algorithm that runs in two stages: first, a truncated-sample stochastic variance reduction algorithm is used to compute an initial estimate; the second stage is the gradient refinement stage, which continuously updates the amplitude-based stochastic weighted gradient to improve the initial estimate. Because of the stochastic method, each iteration in both stages of SRPR involves only a single measurement equation. Therefore, SRPR is simple, scalable, and fast. Compared with state-of-the-art phase retrieval algorithms, simulation results show that SRPR converges faster and requires fewer magnitude-only measurements to reconstruct the signal, in both the real and complex cases.
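
For illustration, here is a minimal sketch of an amplitude-based stochastic gradient refinement step in the spirit of SRPR's second stage. The reweighting rule, step size, and the perturbed initialization standing in for the paper's truncated variance-reduced initializer are all assumptions, not the authors' exact choices:

```python
import numpy as np

def srpr_like_step(z, a, y, lr=0.1, beta=1.0):
    """One stochastic amplitude-flow step on a single measurement (a, y).

    Descends 0.5 * w * (|a^T z| - y)^2 with an assumed reweighting w;
    the paper's exact weight may differ.
    """
    inner = a @ z
    w = abs(inner) / (abs(inner) + beta * y + 1e-12)  # assumed weight form
    grad = w * (inner - y * np.sign(inner)) * a
    return z - lr * grad

# Real-valued demo: refine a crude initial guess toward x given y = |Ax|.
rng = np.random.default_rng(0)
n, m = 16, 160
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = np.abs(A @ x)

z = x + 0.5 * rng.standard_normal(n)  # stand-in for the stage-1 initializer
for _ in range(100):
    for i in rng.permutation(m):      # one measurement equation per iteration
        z = srpr_like_step(z, A[i], y[i])

err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(f"relative error up to global sign: {err:.2e}")
```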

STOCHASTIC GRADIENT METHODS FOR L2-WASSERSTEIN LEAST SQUARES PROBLEM OF GAUSSIAN MEASURES

  • YUN, SANGWOON;SUN, XIANG;CHOI, JUNG-IL
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.25 no.4 / pp.162-172 / 2021
  • This paper proposes stochastic methods to find an approximate solution to the L2-Wasserstein least squares problem of Gaussian measures. The variable of the problem lies in a set of positive definite matrices. The first proposed stochastic method is a classical stochastic gradient method combined with projection, and the second is a variance-reduced method with projection. Their global convergence is analyzed using the framework of proximal stochastic gradient methods. Convergence of the classical stochastic gradient method with projection is established under a diminishing learning rate rule, in which the learning rate decreases as the epoch increases, whereas that of the variance-reduced method with projection can be established under a constant learning rate. Numerical results show that the proposed algorithms, with a proper learning rate, outperform a gradient projection method.
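
The sketch below shows a classical projected stochastic gradient iteration of the kind the abstract describes, for centered Gaussian measures, with the positive definite projection done by eigenvalue clipping and an illustrative diminishing learning rate; it is not the paper's exact algorithm:

```python
import numpy as np

def sqrtm_psd(S):
    """Symmetric PSD square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def proj_pd(S, eps=1e-3):
    """Project a symmetric matrix onto the PD cone by eigenvalue clipping."""
    S = (S + S.T) / 2
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.clip(vals, eps, None)) @ vecs.T

def w2_grad(S, Si):
    """Gradient of W2^2(N(0,S), N(0,Si)) in S: identity minus transport map."""
    R = sqrtm_psd(S)
    Rinv = np.linalg.inv(R)
    T = Rinv @ sqrtm_psd(R @ Si @ R) @ Rinv
    return np.eye(S.shape[0]) - T

rng = np.random.default_rng(1)
d, N = 4, 50
covs = [(B := rng.standard_normal((d, d))) @ B.T + 0.5 * np.eye(d)
        for _ in range(N)]

S = np.eye(d)
for k in range(2000):
    i = rng.integers(N)          # sample one Gaussian measure per iteration
    lr = 0.2 / (1 + k / 200)     # diminishing learning rate (illustrative)
    S = proj_pd(S - lr * w2_grad(S, covs[i]))

print("barycenter eigenvalues:", np.linalg.eigvalsh(S).round(3))
```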

Adaptive stochastic gradient method under two mixing heterogeneous models (두 이종 혼합 모형에서의 수정된 경사 하강법)

  • Moon, Sang Jun;Jeon, Jong-June
    • Journal of the Korean Data and Information Science Society / v.28 no.6 / pp.1245-1255 / 2017
  • Online learning is the process of obtaining a solution to a given objective function when data are accumulated in real time or in batch units. The stochastic gradient descent method is one of the most widely used methods for online learning. It is easy to implement and, under the assumption that the data-generating model is homogeneous, its solution has good properties. However, the stochastic gradient method can severely mislead online learning when this homogeneity is actually violated. We assume that there are two heterogeneous generating models behind the observations and propose a new stochastic gradient method that mitigates the problem of heterogeneous models. We introduce a robust mini-batch optimization method using statistical tests and investigate the convergence radius of the solution of the proposed method. The theoretical results are confirmed by numerical simulations.
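
As a rough illustration of a robust mini-batch step, the sketch below screens each mini-batch with a simple median/MAD outlier test before the gradient update; the paper's actual statistical test is not specified here, so this screening rule is an assumption:

```python
import numpy as np

def robust_minibatch_sgd(X, y, batch=32, lr=0.05, epochs=50, z_cut=2.5, seed=0):
    """SGD for least squares that tests each mini-batch for observations
    from a contaminating model before taking the gradient step.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), n // batch):
            r = X[idx] @ w - y[idx]                    # mini-batch residuals
            mad = np.median(np.abs(r - np.median(r))) + 1e-12
            keep = np.abs(r - np.median(r)) / (1.4826 * mad) < z_cut
            if keep.sum() == 0:
                continue
            g = X[idx][keep].T @ r[keep] / keep.sum()  # gradient on kept points
            w -= lr * g
    return w

# Data from two models: 80% follow w_true, 20% follow a contaminating model.
rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.standard_normal((n, p))
w_true = np.arange(1.0, p + 1)
y = X @ w_true + 0.1 * rng.standard_normal(n)
bad = rng.random(n) < 0.2
y[bad] = X[bad] @ (-w_true) + 0.1 * rng.standard_normal(bad.sum())

print("estimate:", robust_minibatch_sgd(X, y).round(2), "truth:", w_true)
```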

Stochastic Optimization Method Using Gradient Based on Control Variates (통제변수 기반 Gradient를 이용한 확률적 최적화 기법)

  • Kwon, Chi-Myung;Kim, Seong-Yeon
    • Journal of the Korea Society for Simulation / v.18 no.2 / pp.49-55 / 2009
  • In this paper, we investigate the optimal allocation of constant service resources in a stochastic system so as to optimize an expected performance measure of interest. For this purpose, we use control variates to estimate the gradients of the expected performance with respect to the given resource parameters, and apply these estimated gradients in a stochastic optimization algorithm to find the optimal allocation of resources. The proposed gradient estimation method is advantageous in that it uses simulation results from a single design point, without increasing the number of design points in the simulation experiments, and does not need to describe the logical relationship between the realized performance of interest and perturbations in the input parameters. Applications of this approach to various models and extensions of the input parameter space are left as future research.
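
A small sketch of gradient estimation with a control variate at a single design point follows; the model (a capped exponential service time) and the use of the score function as the mean-zero control are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def lr_gradient_with_cv(theta, n=100_000, seed=0):
    """Likelihood-ratio estimate of d/d(theta) E[Y] at a single design point,
    with the score itself used as a mean-zero control variate.

    Assumed model: X ~ Exp(rate=theta), performance Y = min(X, 1.0).
    """
    rng = np.random.default_rng(seed)
    X = rng.exponential(1.0 / theta, size=n)
    Y = np.minimum(X, 1.0)
    score = 1.0 / theta - X          # d log f(X; theta) / d theta, mean zero
    raw = Y * score                  # plain likelihood-ratio gradient estimate

    # Control variate: subtract b * score, with b fitted to minimize variance.
    b = np.cov(raw, score)[0, 1] / np.var(score)
    cv = raw - b * score
    return raw.mean(), raw.std() / np.sqrt(n), cv.mean(), cv.std() / np.sqrt(n)

g_raw, se_raw, g_cv, se_cv = lr_gradient_with_cv(theta=2.0)
print(f"plain LR estimate   : {g_raw:+.4f} (s.e. {se_raw:.4f})")
print(f"with control variate: {g_cv:+.4f} (s.e. {se_cv:.4f})")
```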

Learning Behaviors of Stochastic Gradient Radial Basis Function Network Algorithms for Odor Sensing Systems

  • Kim, Nam-Yong;Byun, Hyung-Gi;Kwon, Ki-Hyeon
    • ETRI Journal / v.28 no.1 / pp.59-66 / 2006
  • The learning behaviors of a radial basis function network (RBFN) using a singular value decomposition (SVD) and a stochastic gradient (SG) algorithm, together named RBF-SVD-SG, are analyzed for odor sensing systems, and a fast training method is proposed. The RBF input data come from a conducting polymer sensor array. This paper reveals that the SG algorithm for fine-tuning centers and widths still shows ill-behaved learning results when a sufficiently small convergence coefficient is not used. Since the tuning of centers in an RBFN plays a dominant role in the performance of RBFN odor sensing systems, our analysis focuses on the center-gradient variance of the RBF-SVD-SG algorithm. We found analytically that steady-state weight fluctuation and large values of the convergence coefficient can increase the variance of the center-gradient estimate. Based on this analysis, we propose using the least mean square (LMS) algorithm instead of SVD to adjust the weights, for stable steady-state weight behavior. Experimental results show that the proposed algorithm achieves faster learning and better classification performance.
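
To make the proposed structure concrete, the sketch below pairs an LMS update for the output weights with a small-step SG update for the centers (widths are kept fixed for brevity); all sizes and learning rates are illustrative assumptions:

```python
import numpy as np

class RBFSGLMS:
    """RBF network: centers tuned by stochastic gradient (SG),
    output weights by LMS, in the spirit of the paper's proposal."""

    def __init__(self, n_centers, dim, mu_w=0.05, mu_c=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.c = rng.standard_normal((n_centers, dim))  # centers
        self.s = np.ones(n_centers)                     # widths (fixed here)
        self.w = np.zeros(n_centers)                    # output weights
        self.mu_w, self.mu_c = mu_w, mu_c

    def phi(self, x):
        d2 = ((x - self.c) ** 2).sum(axis=1)
        return np.exp(-d2 / (2 * self.s ** 2))

    def step(self, x, target):
        h = self.phi(x)
        e = target - self.w @ h                         # output error
        self.w += self.mu_w * e * h                     # LMS weight update
        # SG center update; a small mu_c keeps the center-gradient variance
        # down, the stability issue analyzed in the paper.
        g = (e * self.w * h)[:, None] * (x - self.c) / (self.s ** 2)[:, None]
        self.c += self.mu_c * g
        return e

# Toy usage: fit y = sin(2x) on scalar inputs.
rng = np.random.default_rng(1)
net = RBFSGLMS(n_centers=10, dim=1)
for _ in range(20_000):
    x = rng.uniform(-2, 2, size=1)
    net.step(x, np.sin(2 * x[0]))
print("example output at x=1:", net.w @ net.phi(np.array([1.0])))
```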

A STOCHASTIC VARIANCE REDUCTION METHOD FOR PCA BY AN EXACT PENALTY APPROACH

  • Jung, Yoon Mo;Lee, Jae Hwa;Yun, Sangwoon
    • Bulletin of the Korean Mathematical Society / v.55 no.4 / pp.1303-1315 / 2018
  • For principal component analysis (PCA) to efficiently analyze large-scale matrices, it is crucial to find a few singular vectors at low computational cost and with a low memory requirement. To compute these in a fast and robust way, we propose a new stochastic method. In particular, we adopt the stochastic variance reduced gradient (SVRG) method [11] to avoid the asymptotically slow convergence of stochastic gradient descent methods. For that purpose, we reformulate the PCA problem as an unconstrained optimization problem using a quadratic penalty. In general, driving the penalty parameter to infinity is needed for the equivalence of the two problems; in this case, however, exact penalization is guaranteed by applying the analysis in [24]. We establish the convergence rate of the proposed method to a stationary point, and numerical experiments illustrate the validity and efficiency of the proposed method.
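
A minimal sketch of SVRG applied to a quadratically penalized PCA objective, as the abstract describes, might look as follows; the penalty weight, step size, and epoch counts are illustrative, and the exact-penalty property is simply assumed to hold for the chosen rho:

```python
import numpy as np

def svrg_pca(A_rows, rho=10.0, lr=0.001, epochs=50, seed=0):
    """SVRG for the penalized PCA objective
        min_x  -(1/n) sum_i (a_i^T x)^2 + (rho/2)(||x||^2 - 1)^2.
    """
    rng = np.random.default_rng(seed)
    n, d = A_rows.shape

    def grad_i(x, i):
        return -2 * (A_rows[i] @ x) * A_rows[i] + 2 * rho * (x @ x - 1) * x

    def full_grad(x):
        return (-2 * A_rows.T @ (A_rows @ x)) / n + 2 * rho * (x @ x - 1) * x

    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    for _ in range(epochs):
        x_snap, g_snap = x.copy(), full_grad(x)   # snapshot + full gradient
        for _ in range(n):
            i = rng.integers(n)
            v = grad_i(x, i) - grad_i(x_snap, i) + g_snap  # variance-reduced
            x -= lr * v
    return x / np.linalg.norm(x)

# Usage: leading singular vector of a random low-rank-plus-noise matrix.
rng = np.random.default_rng(2)
n, d = 500, 20
u = rng.standard_normal(d); u /= np.linalg.norm(u)
A_rows = rng.standard_normal((n, d)) + 3.0 * rng.standard_normal((n, 1)) * u
x_hat = svrg_pca(A_rows)
lead = np.linalg.svd(A_rows, full_matrices=False)[2][0]
print("alignment with top right-singular vector:", abs(x_hat @ lead).round(4))
```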

Nonlinear optimization algorithm using monotonically increasing quantization resolution

  • Jinwuk Seok;Jeong-Si Kim
    • ETRI Journal / v.45 no.1 / pp.119-130 / 2023
  • We propose a quantized gradient search algorithm that can achieve global optimization by monotonically reducing the quantization step over time, where the quantization applied to the optimization algorithm consists of integer or fixed-point fractional values. According to the white noise hypothesis, when the quantization step is sufficiently small and the quantization is well defined, the round-off error caused by quantization can be regarded as an independent, identically distributed random variable. Thus, we rewrite the gradient-descent search equation as a stochastic differential equation and obtain a monotonically decreasing rate for the quantization step that enables global optimization, by stochastic analysis of the objective function. Consequently, when the search equation is quantized with a monotonically decreasing quantization step, which suitably reduces the round-off error, we can derive a search algorithm that evolves from the original optimization algorithm. Numerical simulations indicate that, owing to this quantization-based global optimization property, the proposed algorithm explores the search space better per iteration than the conventional algorithm, with a higher success rate and fewer iterations.
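
For intuition, the sketch below quantizes a gradient-search iterate onto a grid whose step shrinks monotonically; the geometric decay schedule is an illustrative assumption, whereas the paper derives its rate from a stochastic differential equation analysis:

```python
import numpy as np

def quantized_gradient_search(grad, x0, steps=2000, lr=0.1,
                              q0=0.5, decay=0.995):
    """Gradient search with a quantized state and a monotonically
    decreasing quantization step (a sketch of the paper's idea)."""
    x = np.asarray(x0, dtype=float)
    q = q0                               # initial quantization step
    for _ in range(steps):
        x = x - lr * grad(x)
        x = q * np.round(x / q)          # round the iterate to the current grid
        q = max(q * decay, 1e-6)         # monotonically shrink the step
    return x

# Usage on a multimodal 1-D objective: f(x) = x^2/10 + sin(3x).
f = lambda x: x**2 / 10 + np.sin(3 * x)
g = lambda x: x / 5 + 3 * np.cos(3 * x)
x_star = quantized_gradient_search(g, x0=np.array([4.0]))
print("found x =", x_star.round(4), "f(x) =", f(x_star).round(4))
```

The round-off noise injected by the coarse early grid plays the role of the exploration noise in the stochastic analysis, and the shrinking step lets the iterate settle as that noise dies out.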

Design of Equalizer using Fuzzy Stochastic Gradient Algorithm (퍼지 확률 기울기 알고리즘을 이용한 등화기 설계)

  • Park, Hyoung-Keun;Ra, Yoo-Chan
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.1 / pp.152-159 / 2005
  • For high-speed data communication over band-limited channels, the main causes of bit errors are fading and inter-symbol interference (ISI). The common way of dealing with ISI is equalization at the receiver. The channel-adaptive equalizer presented in this paper, which combines the Fuzzy Stochastic Gradient (FSG) algorithm and the Constant Modulus Algorithm (CMA), is a nonlinear, blind equalizer that works directly on the received signals with no training sequences required. The equalizer employs a Takagi-Sugeno fuzzy model with the FSG algorithm to automatically regulate the step size of the gradient descent vector, combining a fast convergence rate with a low mean square error (MSE), together with the CMA, a special case of Godard's algorithm, with multiple dispersion constants ($R_p$).
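
A minimal sketch of a CMA blind equalizer with an adaptive step size follows; the simple error-driven step-size rule stands in for the paper's Takagi-Sugeno fuzzy regulator and is an assumption, while the CMA update itself is the standard Godard p = 2 form:

```python
import numpy as np

def cma_blind_equalizer(rx, taps=11, mu_max=5e-3, mu_min=1e-4, R2=1.0):
    """Blind CMA equalizer whose step size grows with the instantaneous
    dispersion error (a crude stand-in for a fuzzy step-size regulator)."""
    w = np.zeros(taps)
    w[taps // 2] = 1.0                       # center-spike initialization
    out = np.empty(len(rx) - taps)
    for n in range(len(out)):
        x = rx[n:n + taps][::-1]             # regressor, most recent first
        y = w @ x
        err = y * (y * y - R2)               # CMA stochastic gradient term
        mu = mu_min + (mu_max - mu_min) * min(abs(y * y - R2), 1.0)
        w -= mu * err * x                    # gradient descent on dispersion
        out[n] = y
    return out, w

# Usage: BPSK through a simple FIR channel plus noise, equalized blindly.
rng = np.random.default_rng(3)
s = rng.choice([-1.0, 1.0], size=20_000)
h = np.array([1.0, 0.4, -0.2])               # illustrative channel
rx = np.convolve(s, h)[:len(s)] + 0.01 * rng.standard_normal(len(s))
y, w = cma_blind_equalizer(rx)
print("tail outputs near the +/-1 alphabet:",
      np.mean(np.abs(np.abs(y[-2000:]) - 1) < 0.2).round(3))
```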

An Adaptive Radial Basis Function Network algorithm for nonlinear channel equalization

  • Kim, Nam-Yong
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.3C / pp.141-146 / 2005
  • The author investigates the convergence speed problem of nonlinear adaptive equalization. The convergence constraints and time constant of the radial basis function network with stochastic gradient (RBF-SG) algorithm are analyzed, and a method of making the time constant independent of hidden-node output power, using sample-by-sample estimation of node output power, is derived. The node power is estimated with a single-pole low-pass filter. Simulations show that the proposed algorithm gives faster convergence and a lower minimum MSE than the RBF-SG algorithm.
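
The sketch below shows the described normalization: hidden-node output power is estimated sample by sample with a single-pole low-pass filter and used to scale the center update; the forgetting factor, step sizes, and toy target are illustrative assumptions:

```python
import numpy as np

# Single-pole low-pass power estimate per hidden node, used to normalize the
# stochastic-gradient center update so the adaptation time constant no longer
# depends on hidden-node output power.
rng = np.random.default_rng(4)
n_centers, dim = 8, 2
c = rng.standard_normal((n_centers, dim))     # RBF centers
s2 = np.ones(n_centers)                       # squared widths
w = np.zeros(n_centers)                       # output weights
p = np.full(n_centers, 1e-2)                  # node power estimates
lam, mu_w, mu_c, eps = 0.99, 0.05, 0.02, 1e-2

for _ in range(10_000):
    x = rng.uniform(-2, 2, size=dim)
    target = np.sin(x[0]) * np.cos(x[1])      # toy nonlinear mapping
    h = np.exp(-((x - c) ** 2).sum(axis=1) / (2 * s2))
    e = target - w @ h
    p = lam * p + (1 - lam) * h ** 2          # single-pole low-pass filter
    w += mu_w * e * h
    c += (mu_c * e * w * h / (p + eps))[:, None] * (x - c) / s2[:, None]

print("final error sample:", round(float(e), 4))
```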

Self-Organized Reinforcement Learning Using Fuzzy Inference for Stochastic Gradient Ascent Method

  • Wong, K.-K.;Katuki, Akio
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2001.10a / pp.96.3-96 / 2001
  • In this paper, a self-organized stochastic gradient ascent method using fuzzy inference is proposed. Fuzzy rules and fuzzy sets are added autonomously, as the occasion demands, according to the observed information, and two rules (or two fuzzy sets) that become similar to each other as learning progresses are unified. This unification reduces the number of parameters and the learning time. By using fuzzy inference and building rules with an appropriate division of the state space, the proposed method makes it possible to construct a robust reinforcement learning system.
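
As a sketch of the unification step, the code below merges Gaussian fuzzy sets whose centers and widths fall within a tolerance; the similarity measure is an assumption, since the abstract does not specify it:

```python
import numpy as np

def unify_similar_sets(centers, widths, tol=0.3):
    """Merge Gaussian fuzzy sets whose centers and widths are within tol,
    replacing each similar pair by its average, which reduces the number
    of parameters as learning proceeds."""
    centers, widths = list(centers), list(widths)
    merged = True
    while merged:
        merged = False
        for i in range(len(centers)):
            for j in range(i + 1, len(centers)):
                if (abs(centers[i] - centers[j]) < tol
                        and abs(widths[i] - widths[j]) < tol):
                    centers[i] = (centers[i] + centers[j]) / 2
                    widths[i] = (widths[i] + widths[j]) / 2
                    del centers[j], widths[j]
                    merged = True
                    break
            if merged:
                break
    return np.array(centers), np.array(widths)

c, s = unify_similar_sets([0.0, 0.1, 1.0, 2.0, 2.2], [0.5, 0.6, 0.5, 0.4, 0.5])
print("centers after unification:", c, "widths:", s)
```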
