• Title/Abstract/Keyword: Space approximation

Search results: 499 items (processing time: 0.029 sec)

ON THE LEBESGUE SPACE OF VECTOR MEASURES

  • Choi, Chang-Sun; Lee, Keun-Young
    • 대한수학회보 (Bulletin of the Korean Mathematical Society), Vol. 48, No. 4, pp. 779-789, 2011
  • In this paper we study the Banach space $L^1$(G) of real-valued measurable functions which are integrable with respect to a vector measure G in the sense of D. R. Lewis. First, we investigate conditions on a scalarly integrable function f that guarantee $f{\in}L^1$(G). Next, we give a sufficient condition for a sequence to converge in $L^1$(G). Moreover, for two vector measures F and G with values in the same Banach space, when F can be written as the integral of a function $f{\in}L^1$(G), we show that certain properties of G are inherited by F, for instance, relative compactness or convexity of the range of the vector measure. Finally, we give some examples of $L^1$(G) related to the approximation property. (The notion of integrability used here is recalled below this entry.)
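
For orientation, the integrability notion used in this abstract (due to D. R. Lewis) is commonly stated as follows; the paper's exact formulation may differ in minor details. Let $G:{\Sigma}{\rightarrow}X$ be a vector measure on a ${\sigma}$-algebra ${\Sigma}$ with values in a Banach space X. A measurable function f is scalarly integrable if $f{\in}L^1(|{\langle}G,x^*{\rangle}|)$ for every $x^*{\in}X^*$, where ${\langle}G,x^*{\rangle}$ denotes the scalar measure $A{\mapsto}{\langle}G(A),x^*{\rangle}$; it belongs to $L^1(G)$ if, in addition, for every measurable set A there exists a vector ${\int}_A f\,dG{\in}X$ with ${\langle}{\int}_A f\,dG,\,x^*{\rangle}={\int}_A f\,d{\langle}G,x^*{\rangle}$ for all $x^*{\in}X^*$.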

Function Approximation Based on a Network with Kernel Functions of Bounds and Locality : an Approach of Non-Parametric Estimation

  • Kil, Rhee-M.
    • ETRI Journal, Vol. 15, No. 2, pp. 35-51, 1993
  • This paper presents function approximation based on nonparametric estimation. As an estimation model for function approximation, a three-layered network composed of input, hidden and output layers is considered. The input and output layers have linear activation units, while the hidden layer has nonlinear activation units, or kernel functions, characterized by boundedness and locality. Using this type of network, a many-to-one function is synthesized over the domain of the input space by a number of kernel functions. In this network, we have to estimate the necessary number of kernel functions as well as the parameters associated with them. For this purpose, a new method of parameter estimation is considered in which a linear learning rule is applied between the hidden and output layers while a nonlinear (piecewise-linear) learning rule is applied between the input and hidden layers. The linear learning rule updates the output weights between the hidden and output layers in the Linear Minimization of Mean Square Error (LMMSE) sense in the space of kernel functions, while the nonlinear learning rule updates the parameters of the kernel functions based on the gradient of the actual network output with respect to the parameters (especially the shape) of the kernel functions. This approach to parameter adaptation provides near-optimal values of the parameters associated with the kernel functions in the sense of minimizing the mean square error. As a result, the suggested nonparametric estimation provides an efficient way of function approximation from the viewpoint of the number of kernel functions as well as learning speed. (A hedged code sketch of this two-rule training scheme follows this entry.)

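Below is a minimal, hedged sketch of the two-rule training scheme described in the abstract above, assuming Gaussian kernels as the bounded, local hidden units: the output weights are obtained by a linear least-squares (LMMSE-style) solve, while the kernel centers and widths are refined by gradient descent on the mean square error. All names (fit_kernel_network, n_kernels, lr) are illustrative and not taken from the paper, and the paper's actual procedure, including how the number of kernels is estimated, is more elaborate.

```python
import numpy as np

def kernels(X, centers, widths):
    # X: (n, d), centers: (m, d), widths: (m,) -> Gaussian activations, shape (n, m)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def fit_kernel_network(X, y, n_kernels=10, lr=1e-2, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_kernels, replace=False)].copy()
    widths = np.full(n_kernels, X.std() + 1e-8)
    for _ in range(n_iter):
        Phi = kernels(X, centers, widths)                  # hidden-layer outputs
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # linear (LMMSE-style) output weights
        err = Phi @ w - y                                  # network residual
        # gradient of 0.5 * mean(err**2) with respect to kernel centers and widths
        diff = X[:, None, :] - centers[None, :, :]         # (n, m, d)
        g = (err[:, None] * w[None, :]) * Phi              # (n, m) common factor
        grad_c = (g[:, :, None] * diff / widths[None, :, None] ** 2).mean(0)
        grad_s = (g * (diff ** 2).sum(-1) / widths[None, :] ** 3).mean(0)
        centers -= lr * grad_c
        widths = np.maximum(widths - lr * grad_s, 1e-3)    # keep widths positive
    return centers, widths, w

# usage: fit a noisy 1-D target with a handful of kernels
X = np.linspace(-1, 1, 200)[:, None]
y = np.sin(3 * X[:, 0]) + 0.05 * np.random.default_rng(1).normal(size=200)
centers, widths, w = fit_kernel_network(X, y, n_kernels=8)
print(np.mean((kernels(X, centers, widths) @ w - y) ** 2))  # final mean square error
```

Alternating a closed-form linear solve for the output weights with gradient steps on the kernel shapes is the essential design choice the abstract describes.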

Analytical Approximation Algorithm for the Inverse of the Power of the Incomplete Gamma Function Based on Extreme Value Theory

  • Wu, Shanshan; Hu, Guobing; Yang, Li; Gu, Bin
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 15, No. 12, pp. 4567-4583, 2021
  • This study proposes an analytical approximation algorithm based on extreme value theory (EVT) for the inverse of the power of the incomplete Gamma function. First, the Gumbel function is used to approximate the power of the incomplete Gamma function, and the corresponding inverse problem is transformed into the inversion of an exponential function. Then, using the tail equivalence theorem, the normalizing coefficient of the general Weibull distribution function is employed to replace the normalizing coefficient of the random variable following a Gamma distribution, and an approximate closed-form solution is obtained. The effects of the equation parameters on the algorithm's performance are evaluated through simulation analysis under various conditions, and the performance of this algorithm is compared to those of the Newton iterative algorithm and other existing approximate analytical algorithms. The proposed algorithm exhibits good approximation performance under appropriate parameter settings. Finally, the performance of this method is evaluated by calculating the thresholds of space-time block coding and space-frequency block coding pattern recognition in multiple-input multiple-output orthogonal frequency division multiplexing. The analytical approximation method can be applied to other related situations involving the maximum statistics of independent and identically distributed random variables following Gamma distributions. (A generic EVT-based inversion sketch follows this entry.)
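
The following hedged sketch only illustrates the generic EVT idea behind the abstract above: the N-th power of the regularized lower incomplete Gamma function is the distribution of the maximum of N i.i.d. unit-scale Gamma(k) variables, which is approximately Gumbel, so its inverse has an approximate closed form. The classical Gumbel norming constants for the Gamma distribution are used here, whereas the paper replaces them with Weibull-based constants, so this is not the authors' algorithm.

```python
import numpy as np
from scipy.special import gammainc, gammaln
from scipy.optimize import brentq

def gumbel_inverse_power_gammainc(y, k, N):
    """Approximate x such that gammainc(k, x)**N == y, for y in (0, 1), N > e."""
    # classical Gumbel norming constants for a unit-scale Gamma(k) distribution
    b_N = np.log(N) + (k - 1.0) * np.log(np.log(N)) - gammaln(k)   # location
    a_N = 1.0                                                      # scale
    return b_N - a_N * np.log(-np.log(y))

def exact_inverse_power_gammainc(y, k, N):
    """Numerical reference: invert gammainc(k, x)**N by root bracketing."""
    return brentq(lambda x: gammainc(k, x) ** N - y, 1e-9, 1e4)

if __name__ == "__main__":
    k, N = 2.0, 64
    for y in (0.1, 0.5, 0.9, 0.99):
        approx = gumbel_inverse_power_gammainc(y, k, N)
        exact = exact_inverse_power_gammainc(y, k, N)
        print(f"y={y:4.2f}  approx={approx:7.3f}  exact={exact:7.3f}")
```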

APPROXIMATION ORDER TO A FUNCTION IN Lp SPACE BY GENERALIZED TRANSLATION NETWORKS

  • HAHM, NAHMWOO; HONG, BUM IL
    • 호남수학학술지 (Honam Mathematical Journal), Vol. 28, No. 1, pp. 125-133, 2006
  • We investigate the order of approximation to a function in $L_p$[-1, 1] for $0{\leq}p<{\infty}$ by generalized translation networks. In most papers on neural network approximation, sigmoidal functions are adopted as the activation function. In our research, we instead choose an infinitely many times continuously differentiable function as the activation function. Using the integral modulus of continuity and the divided difference formula, we obtain the approximation order to a function in $L_p$[-1, 1]. (The integral modulus of continuity is recalled below this entry.)

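For reference, the integral ($L_p$) modulus of continuity mentioned in the abstract above is, for $1{\leq}p<{\infty}$, usually defined by ${\omega}_p(f,t)={\sup}_{0<|h|{\leq}t}\left({\int}|f(x+h)-f(x)|^p dx\right)^{1/p}$, the integral being taken over the points of [-1, 1] for which $x+h$ also lies in [-1, 1] (for $0<p<1$ the usual quasi-norm modification applies). Jackson-type estimates of the kind obtained in the paper bound the $L_p$ error of the network approximant by a constant multiple of such a modulus evaluated at a scale that decreases as the network grows; the specific order proved in the paper is not reproduced here.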

Generalized Moving Least Squares Method and its use in Meshless Analysis of Thin Beam

  • 조진연
    • 한국전산구조공학회 2002년도 봄 학술발표회 논문집 (Proceedings of the 2002 Spring Conference of the Computational Structural Engineering Institute of Korea), pp. 497-504, 2002
  • In meshless methods, the moving least squares approximation technique is widely used to approximate a solution space because of its useful numerical characteristics, such as non-element approximation and easily controllable smoothness. In this work, a generalized version of the moving least squares method is introduced to enhance the approximation performance by using information concerning the derivative of the field variable. The results of numerical approximation tests verify the improved accuracy of the generalized meshless approximation procedure compared to the conventional moving least squares method. Using this generalized moving least squares method, a meshless analysis of a thin beam is carried out and its performance is investigated. (A one-dimensional sketch of the underlying moving least squares approximation follows this entry.)

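Below is a hedged one-dimensional sketch of the moving least squares (MLS) approximation that the abstract above generalizes: a quadratic polynomial basis is fitted locally around each evaluation point with a Gaussian weight, and the optional derivative term indicates, in the spirit of the generalized method, how derivative information of the field variable can also be matched. The formulation and the names (mls_approximate, support) are illustrative; the paper's generalized MLS may differ in its exact construction.

```python
import numpy as np

def mls_approximate(x_eval, x_nodes, u, du=None, support=0.3):
    def basis(x):  return np.array([1.0, x, x * x])        # p(x), quadratic basis
    def dbasis(x): return np.array([0.0, 1.0, 2.0 * x])    # p'(x)
    def weight(r): return np.exp(-(r / support) ** 2)      # Gaussian weight

    x_eval = np.atleast_1d(x_eval)
    u_h = np.empty_like(x_eval, dtype=float)
    for k, xe in enumerate(x_eval):
        A = np.zeros((3, 3)); b = np.zeros(3)
        for i, xi in enumerate(x_nodes):
            w = weight(xi - xe)
            p = basis(xi)
            A += w * np.outer(p, p); b += w * p * u[i]
            if du is not None:                   # optional derivative (generalized) terms
                dp = dbasis(xi)
                A += w * np.outer(dp, dp); b += w * dp * du[i]
        a = np.linalg.solve(A, b)                # local coefficients a(x_eval)
        u_h[k] = basis(xe) @ a                   # u^h(x) = p(x)^T a(x)
    return u_h

# usage: approximate a sine wave from scattered nodal values (and nodal derivatives)
x_nodes = np.linspace(0.0, 1.0, 11)
u, du = np.sin(2 * np.pi * x_nodes), 2 * np.pi * np.cos(2 * np.pi * x_nodes)
print(mls_approximate(np.array([0.25, 0.5]), x_nodes, u, du))
```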

APPROXIMATION PROPERTIES OF PAIRS OF SUBSPACES

  • Lee, Keun Young
    • 대한수학회보 (Bulletin of the Korean Mathematical Society), Vol. 56, No. 3, pp. 563-568, 2019
  • This study is concerned with the approximation properties of pairs of Banach spaces. For ${\lambda}{\geq}1$, we prove that given a Banach space X and a closed subspace $Z_0$, if the pair ($X,Z_0$) has the ${\lambda}$-bounded approximation property (${\lambda}$-BAP), then for every ideal Z containing $Z_0$, the pair ($Z,Z_0$) has the ${\lambda}$-BAP; further, if Z is a closed subspace of X and the pair (X, Z) has the ${\lambda}$-BAP, then for every separable subspace $Y_0$ of X, there exists a separable closed subspace Y containing $Y_0$ such that the pair ($Y,Y{\cap}Z$) has the ${\lambda}$-BAP. We also prove that if Z is a separable closed subspace of X, then the pair (X, Z) has the ${\lambda}$-BAP if and only if for every separable subspace $Y_0$ of X, there exists a separable closed subspace Y containing $Y_0{\cup}Z$ such that the pair (Y, Z) has the ${\lambda}$-BAP. (The ${\lambda}$-BAP for pairs is recalled below this entry.)
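
For reference, the pair version of the bounded approximation property discussed above is commonly stated along the following lines (the paper may use a slightly different but equivalent formulation): for a Banach space X and a closed subspace Z, the pair (X, Z) has the ${\lambda}$-BAP if for every ${\varepsilon}>0$ and every finite-dimensional subspace E of X there exists a finite-rank operator $T:X{\rightarrow}X$ such that $T(Z){\subseteq}Z$, $||T||{\leq}{\lambda}+{\varepsilon}$ and $Tx=x$ for every $x{\in}E$.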

GENERALIZED SYMMETRICAL SIGMOID FUNCTION ACTIVATED NEURAL NETWORK MULTIVARIATE APPROXIMATION

  • ANASTASSIOU, GEORGE A.
    • Journal of Applied and Pure Mathematics, Vol. 4, No. 3-4, pp. 185-209, 2022
  • Here we exhibit multivariate quantitative approximations of Banach space valued continuous multivariate functions on a box or on $\mathbb{R}^N$, $N{\in}\mathbb{N}$, by the multivariate normalized, quasi-interpolation, Kantorovich-type and quadrature-type neural network operators. We also treat the case of approximation by iterated operators of these four types. These approximations are achieved by establishing multidimensional Jackson-type inequalities involving the multivariate modulus of continuity of the engaged function or its high-order Fréchet derivatives. Our multivariate operators are defined using a multidimensional density function induced by the generalized symmetrical sigmoid function. The approximations are pointwise and uniform. The related feed-forward neural network has one hidden layer. (A hedged univariate sketch of such a normalized operator follows this entry.)
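
Below is a hedged univariate sketch of a normalized quasi-interpolation neural network operator of the kind described in the abstract above. A standard logistic sigmoid stands in for the paper's generalized symmetrical sigmoid, and only the one-dimensional, real-valued case is shown; the multivariate, Banach-space-valued operators and the Kantorovich, quadrature and iterated variants are not reproduced. Because the operator is normalized, any constant factor in the sigmoid-induced bump cancels.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bump(x):
    # density-like bump induced by the sigmoid; positive constant factors would
    # cancel in the normalized operator below
    return sigmoid(x + 1.0) - sigmoid(x - 1.0)

def nn_operator(f, x, n, k_range=200):
    """F_n(f)(x) = sum_k f(k/n) * bump(n*x - k) / sum_k bump(n*x - k)."""
    k = np.arange(-k_range, k_range + 1)
    weights = bump(n * x - k)               # localized around k close to n*x
    return np.sum(f(k / n) * weights) / np.sum(weights)

# usage: the operator reproduces a smooth function increasingly well as n grows
f, x0 = np.cos, 0.7
for n in (8, 32, 128):
    print(n, nn_operator(f, x0, n), f(x0))
```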