• Title/Summary/Keyword: convex operator function


HYPERGEOMETRIC DISTRIBUTION SERIES AND ITS APPLICATION OF CERTAIN CLASS OF ANALYTIC FUNCTIONS BASED ON SPECIAL FUNCTIONS

  • Murugusundaramoorthy, Gangadharan;Porwal, Saurabh
    • Communications of the Korean Mathematical Society
    • /
    • v.36 no.4
    • /
    • pp.671-684
    • /
    • 2021
  • The aim of the current paper is to find connections between various subclasses of analytic univalent functions by applying a certain convolution operator involving the generalized hypergeometric distribution series. More specifically, we examine such connections with the classes of analytic univalent functions k - 𝓤𝓒𝓥* (𝛽), k - 𝓢*p (𝛽), 𝓡 (𝛽), 𝓡𝜏 (A, B), k - 𝓟𝓤𝓒𝓥* (𝛽) and k - 𝓟𝓢*p (𝛽) in the open unit disc 𝕌.
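
For orientation, the first two classes above are usually specified by Kanas-Wiśniowska type conditions; a standard formulation (the paper's exact normalization and the starred variants may differ) is

$$f \in k\text{-}\mathcal{UCV}(\beta) \iff \Re\!\left(1+\frac{zf''(z)}{f'(z)}\right) > k\left|\frac{zf''(z)}{f'(z)}\right| + \beta, \qquad z \in \mathbb{U},$$

$$f \in k\text{-}\mathcal{S}_{p}(\beta) \iff \Re\!\left(\frac{zf'(z)}{f(z)}\right) > k\left|\frac{zf'(z)}{f(z)} - 1\right| + \beta, \qquad z \in \mathbb{U},$$

with $k \ge 0$ and $0 \le \beta < 1$.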

On triple sequence space of Bernstein-Stancu operator of rough Iλ-statistical convergence of weighted g (A)

  • Esi, A.;Subramanian, N.;Esi, Ayten
    • Annals of Fuzzy Mathematics and Informatics
    • /
    • v.16 no.3
    • /
    • pp.337-361
    • /
    • 2018
  • We introduce and study some basic properties of rough $I_{\lambda}$-statistical convergence of weight g(A), where $g:{\mathbb{N}}^{3} \rightarrow [0, \infty)$ is a function satisfying $g(m, n, k) \rightarrow \infty$ and $g(m, n, k) \not\rightarrow 0$ as $m, n, k \rightarrow \infty$, and A is an RH-regular matrix. We also prove a Korovkin-type approximation theorem, using the notion of weighted A-statistical convergence of weight g(A), for limits of a triple sequence of Bernstein-Stancu polynomials.
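
The operator in question is a triple-sequence variant of the Bernstein-Stancu polynomials; for orientation, the classical single-index operator with shift parameters $0 \le \alpha \le \beta$ reads

$$S_{n,\alpha,\beta}(f;x) \;=\; \sum_{k=0}^{n} \binom{n}{k} x^{k}(1-x)^{n-k}\, f\!\left(\frac{k+\alpha}{n+\beta}\right), \qquad x \in [0,1],$$

and a Korovkin-type theorem of the kind proved here yields convergence for all continuous $f$ once it is verified on the test functions $1$, $x$ and $x^{2}$.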

STUDY OF YOUNG INEQUALITIES FOR MATRICES

  • M. AL-HAWARI;W. GHARAIBEH
    • Journal of applied mathematics & informatics
    • /
    • v.41 no.6
    • /
    • pp.1181-1191
    • /
    • 2023
  • This paper investigates Young inequalities for matrices, a problem closely linked to operator theory, mathematical physics, and the arithmetic-geometric mean inequality. By obtaining new inequalities for unitarily invariant norms, we aim to derive a fresh Young inequality specifically designed for matrices. To lay the foundation for our study, we provide an overview of basic notation related to matrices. Additionally, we review previous advancements made by researchers in the field, focusing on improvements of the Young inequality. Building upon this existing knowledge, we present several new enhancements of the classical Young inequality for nonnegative real numbers. Furthermore, we establish a matrix version of these improvements, tailored to the specific characteristics of matrices. Through our research, we contribute to a deeper understanding of Young inequalities in the context of matrices.
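
For reference, the classical Young inequality for nonnegative real numbers that the paper refines is

$$ab \;\le\; \frac{a^{p}}{p} + \frac{b^{q}}{q}, \qquad a, b \ge 0, \quad \frac{1}{p}+\frac{1}{q}=1, \quad p > 1,$$

or, writing $\nu = 1/p$, the weighted arithmetic-geometric mean form $a^{\nu}b^{1-\nu} \le \nu a + (1-\nu)b$; the matrix versions studied in the paper replace the scalars by positive semidefinite matrices and compare the two sides through unitarily invariant norms.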

ADMM algorithms in statistics and machine learning (Applications of the ADMM algorithm in statistical machine learning)

  • Choi, Hosik;Choi, Hyunjip;Park, Sangun
    • Journal of the Korean Data and Information Science Society
    • /
    • v.28 no.6
    • /
    • pp.1229-1244
    • /
    • 2017
  • In recent years, as demand for data-based analytical methodologies has grown across many fields, optimization methods have been developed to meet it. In particular, many of the constrained problems arising in statistics and machine learning can be solved by convex optimization. The alternating direction method of multipliers (ADMM) handles linear constraints effectively and can also serve as a parallel optimization algorithm. ADMM is an iterative splitting method that solves a complex original problem by dividing it into subproblems that are easier to optimize and then combining their solutions; this makes it useful for minimizing non-smooth or composite objective functions. It is widely used in statistics and machine learning because algorithms can be constructed systematically from duality theory and the proximal operator. In this paper, we examine applications of the ADMM algorithm in various areas related to statistics, focusing on two main points: (1) the splitting strategy for the objective function, and (2) the role of the proximal operator in explaining the Lagrangian method and its dual problem. We also introduce methodologies that use regularization, and we present simulation results demonstrating the effectiveness of the lasso.
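
To make the splitting strategy and the proximal step concrete, the following is a minimal NumPy sketch of ADMM for the lasso, the example used in the simulations; the variable names, the fixed penalty parameter rho, and the fixed iteration count are illustrative choices and not the authors' implementation.

import numpy as np

def soft_threshold(v, kappa):
    # Proximal operator of kappa*||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    # Minimize (1/2)||Ax - b||^2 + lam*||z||_1 subject to x - z = 0 (scaled-dual ADMM).
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)       # u is the scaled dual variable
    Atb = A.T @ b
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))     # factor once, reuse every iteration
    for _ in range(n_iter):
        # x-update: the smooth least-squares subproblem, a ridge-type linear solve
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: proximal operator of the l1 penalty
        z = soft_threshold(x + u, lam / rho)
        # dual update: accumulate the residual of the constraint x = z
        u = u + x - z
    return z

# usage sketch on synthetic sparse-regression data
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20)
x_true[:3] = [3.0, -2.0, 1.5]
b = A @ x_true + 0.1 * rng.standard_normal(100)
print(admm_lasso(A, b, lam=1.0)[:5])

The x-update isolates the smooth least-squares term, the z-update applies the proximal operator of the l1 penalty (soft-thresholding), and the u-update performs the dual step, mirroring the two points the paper focuses on.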