• Title/Summary/Keyword: penalty functions

Search results: 85

A NON-OVERLAPPING DOMAIN DECOMPOSITION METHOD FOR A DISCONTINUOUS GALERKIN METHOD: A NUMERICAL STUDY

  • Eun-Hee Park
    • Korean Journal of Mathematics, v.31 no.4, pp.419-431, 2023
  • In this paper, we propose an iterative method for a symmetric interior penalty Galerkin method for heterogeneous elliptic problems. The iterative method consists of two main parts based on a non-overlapping domain decomposition approach: an intermediate preconditioner constructed by exploiting the properties of discontinuous finite element functions, and a preconditioner related to the dual-primal finite element tearing and interconnecting (FETI-DP) methodology. Numerical results for the proposed method are presented, demonstrating its performance in terms of various parameters associated with the elliptic model problem, the finite element discretization, and the non-overlapping subdomain decomposition.
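The abstract above centers on preconditioned iterative solvers. The FETI-DP-style subdomain preconditioner itself is far too involved for a short listing, but the surrounding machinery, preconditioned conjugate gradients with the preconditioner passed in as an operator, can be sketched. The Jacobi preconditioner and the 1D Laplacian test matrix below are generic stand-ins, not anything from the paper:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=200):
    """Preconditioned conjugate gradients: solve A x = b for SPD A.

    M_inv applies the action of the preconditioner M^{-1}; in the paper's
    setting M would encode the non-overlapping subdomain (FETI-DP style)
    preconditioner, here it is just a function argument."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# Toy SPD system: 1D Laplacian; Jacobi preconditioner as a stand-in.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_pcg, iters = pcg(A, b, M_inv=lambda r: r / np.diag(A))
```

The iteration count reported alongside the solution is the quantity such numerical studies track as mesh and subdomain parameters vary.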

Variable Selection in Frailty Models using FrailtyHL R Package: Breast Cancer Survival Data

  • Kim, Bohyeon;Ha, Il Do;Noh, Maengseok;Na, Myung Hwan;Song, Ho-Chun;Kim, Jahae
    • The Korean Journal of Applied Statistics, v.28 no.5, pp.965-976, 2015
  • Determining relevant variables for a regression model is important in regression analysis. Recently, variable selection methods using a penalized likelihood with various penalty functions (e.g., LASSO and SCAD) have been widely studied in simple statistical models such as linear models and generalized linear models. The advantage of these methods is that they select important variables and estimate regression coefficients simultaneously; insignificant variables are deleted by estimating their coefficients as zero. We study how to select proper variables based on a penalized hierarchical likelihood (HL) in semi-parametric frailty models, allowing three penalty functions: LASSO, SCAD, and HL. For variable selection we develop a new function in the "frailtyHL" R package. Our methods are illustrated with breast cancer survival data from the Medical Center at Chonnam National University in Korea. We compare the results from the three variable-selection methods and discuss their advantages and disadvantages.
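The LASSO and SCAD penalties mentioned in this abstract have simple closed forms; a minimal sketch follows. The `a = 3.7` default follows Fan and Li's usual convention, and the soft-thresholding operator is the generic LASSO coordinate update, not the frailtyHL implementation:

```python
import numpy as np

def lasso_penalty(beta, lam):
    """L1 (LASSO) penalty: lam * |beta|."""
    return lam * np.abs(beta)

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001); a=3.7 is the usual default.
    Linear near zero, quadratic taper, then constant for large |beta|."""
    b = np.abs(beta)
    return np.where(
        b <= lam,
        lam * b,
        np.where(
            b <= a * lam,
            (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
            lam**2 * (a + 1) / 2,
        ),
    )

def soft_threshold(z, lam):
    """LASSO coordinate-update operator: shrinks small coefficients to zero,
    which is exactly how these methods delete insignificant variables."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
```

Unlike LASSO, the SCAD penalty flattens out for large coefficients, so strong effects are not over-shrunk while weak ones are still set to zero.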

Improvement of Basis-Screening-Based Dynamic Kriging Model Using Penalized Maximum Likelihood Estimation

  • Min-Geun Kim;Jaeseung Kim;Jeongwoo Han;Geun-Ho Lee
    • Journal of the Computational Structural Engineering Institute of Korea, v.36 no.6, pp.391-398, 2023
  • In this paper, a penalized maximum likelihood estimation (PMLE) method that applies a penalty to increase the accuracy of a basis-screening-based Kriging model (BSKM) is introduced. The maximum order and set of basis functions used in the BSKM are determined according to their importance, with the cross-validation error (CVE) of the basis functions employed as the indicator of importance. When constructing the Kriging model (KM), the maximum order of basis functions is determined, the importance of each basis function is evaluated up to that maximum order, and finally the optimal set of basis functions is determined. This optimal set is created by adding basis functions one by one in order of importance until the CVE of the KM is minimized. In this process, the KM must be generated repeatedly, and hyper-parameters representing correlations between data sets must be calculated by maximum likelihood estimation. Given that the optimal set of basis functions depends on these hyper-parameters, they have a significant impact on the accuracy of the KM. The PMLE method is applied to calculate the hyper-parameters accurately, and it was confirmed that the accuracy of a BSKM can be improved by applying it to the Branin-Hoo problem.
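The hyper-parameter step this abstract describes, maximizing a penalized Gaussian-process likelihood, can be sketched roughly as below. The Gaussian correlation model, the quadratic penalty on log(theta), and the crude grid search are all illustrative assumptions on my part; the paper's actual penalty and optimizer are not specified here:

```python
import numpy as np

def gauss_corr(X, theta):
    """Gaussian correlation matrix R_ij = exp(-theta * ||x_i - x_j||^2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-theta * d2)

def penalized_neg_loglik(theta, X, y, lam=1e-2, nugget=1e-6):
    """Negative Gaussian-process log-likelihood plus a quadratic penalty
    on log(theta); the quadratic form is only an illustrative stand-in
    for the paper's penalty."""
    n = len(y)
    R = gauss_corr(X, theta) + nugget * np.eye(n)  # nugget for stability
    L = np.linalg.cholesky(R)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # R^{-1} y
    logdet = 2 * np.sum(np.log(np.diag(L)))
    nll = 0.5 * (y @ alpha + logdet + n * np.log(2 * np.pi))
    return nll + lam * np.log(theta) ** 2

# Pick theta by a crude grid search on a toy 1D data set.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (20, 1))
y = np.sin(4 * X[:, 0])
thetas = np.logspace(-2, 2, 40)
best_theta = min(thetas, key=lambda t: penalized_neg_loglik(t, X, y))
```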

Material distribution optimization of 2D heterogeneous cylinder under thermo-mechanical loading

  • Asgari, Masoud
    • Structural Engineering and Mechanics, v.53 no.4, pp.703-723, 2015
  • In this paper, optimization of the volume fraction distribution in a thick hollow cylinder of finite length made of two-dimensional functionally graded material (2D-FGM) and subjected to steady-state thermal and mechanical loadings is considered. The finite element method with graded material properties within each element (graded finite elements) is used to model the structure. Volume fractions of the constituent materials at a finite number of design points are taken as design variables, and the volume fractions at any arbitrary point in the cylinder are obtained via cubic spline interpolation functions. The objective function is chosen so that the normalized effective stress equals one at all points, which leads to a uniform stress distribution in the structure. A genetic algorithm combined with the interior penalty-function method for handling constraints is employed to find the global solution of the optimization problem. The obtained results indicate that, by using the uniform distribution of normalized effective stress as the objective function, considerably more efficient usage of materials can be achieved compared with the power-law volume fraction distribution. Moreover, considering a uniform distribution of the safety factor as the design criterion, instead of minimizing the peak effective stress, remarkably affects the optimum volume fractions.
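The interior penalty-function method the abstract pairs with the genetic algorithm works by adding a barrier term that blows up near the constraint boundary, so the search stays strictly feasible. A minimal sketch, with a made-up toy constraint rather than the cylinder problem:

```python
def interior_penalty_objective(f, constraints, r):
    """Interior (barrier) penalty: adds r * sum(1/g_i(x)) for constraints
    g_i(x) >= 0.  The term grows without bound as any g_i approaches zero,
    steering a search method (such as the paper's genetic algorithm)
    away from the constraint boundary."""
    def phi(x):
        vals = [g(x) for g in constraints]
        if any(v <= 0 for v in vals):
            return float("inf")  # infeasible point: reject outright
        return f(x) + r * sum(1.0 / v for v in vals)
    return phi

# Toy example: minimize x^2 subject to x >= 1 (i.e. g(x) = x - 1 >= 0).
phi = interior_penalty_objective(lambda x: x * x, [lambda x: x - 1.0], r=1e-3)
```

In practice the barrier weight `r` is driven toward zero over successive runs so the penalized optimum approaches the constrained optimum.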

Damage detection using finite element model updating with an improved optimization algorithm

  • Xu, Yalan;Qian, Yu;Song, Gangbing;Guo, Kongming
    • Steel and Composite Structures, v.19 no.1, pp.191-208, 2015
  • The sensitivity-based finite element model updating method has received increasing attention in damage detection of structures based on measured modal parameters. Finding an optimization technique with high efficiency and fast convergence is one of the key issues for model-updating-based damage detection. A new, simple, and computationally efficient optimization algorithm is proposed and applied to damage detection using finite element model updating. The proposed method combines the Gauss-Newton method with region truncation of each iterative step, in which constraints are introduced directly rather than through penalty functions, and the search steps are restricted to a controlled region. The developed algorithm is illustrated by a numerically simulated 25-bar truss structure, and the results have been compared and verified with those obtained from the trust region method. To investigate the reliability of the proposed method in damage detection of structures, the influence of the uncertainties in the measured modal parameters on the statistical characteristics of the detection result is investigated by Monte Carlo simulation, and the probability of damage detection is estimated using the probabilistic method.
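The core idea in this abstract, a Gauss-Newton step whose length is truncated to a controlled region rather than handled through penalty functions, can be sketched as follows. The fixed truncation radius `delta` and the toy residual are illustrative assumptions; the paper's actual truncation rule may differ:

```python
import numpy as np

def truncated_gauss_newton(residual, jacobian, x0, delta=0.5,
                           tol=1e-10, maxiter=100):
    """Gauss-Newton for nonlinear least squares, with a simple region
    truncation: the full GN step is clipped to length `delta`, mimicking
    the restricted search step described in the abstract."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        r = residual(x)
        J = jacobian(x)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # GN direction
        norm = np.linalg.norm(step)
        if norm > delta:              # truncate to the controlled region
            step *= delta / norm
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy nonlinear least squares: r(x) = [x0^2 - 2, x1 - 1], solved by x = (sqrt(2), 1).
res = lambda x: np.array([x[0] ** 2 - 2.0, x[1] - 1.0])
jac = lambda x: np.array([[2.0 * x[0], 0.0], [0.0, 1.0]])
x_opt = truncated_gauss_newton(res, jac, [1.0, 0.0])
```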

Review of statistical methods for survival analysis using genomic data

  • Lee, Seungyeoun;Lim, Heeju
    • Genomics & Informatics, v.17 no.4, pp.41.1-41.12, 2019
  • Survival analysis mainly deals with the time to an event, such as death, onset of disease, or bankruptcy. The common characteristic of survival analysis is that it contains "censored" data, in which the time to event cannot be completely observed but instead represents a lower bound on the time to event. Only the occurrence of either the time to event or the censoring time is observed. Many traditional statistical methods have been effectively used for analyzing survival data with censored observations. However, with the development of high-throughput technologies for producing "omics" data, more advanced statistical methods, such as regularization, are required to construct predictive survival models with high-dimensional genomic data. Furthermore, machine learning approaches have been adapted for survival analysis to fit nonlinear and complex interaction effects between predictors and to achieve more accurate prediction of individual survival probability. Since most clinicians and medical researchers can easily access statistical programs for analyzing survival data, a review article is helpful for understanding the statistical methods used in survival analysis. We review traditional survival methods and regularization methods, with various penalty functions, for the analysis of high-dimensional genomic data, and describe machine learning techniques that have been adapted to survival analysis.
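As a concrete anchor for the regularization methods such a review covers, here is a minimal L1-penalized Cox partial log-likelihood (Breslow form, ties ignored). This is a generic textbook sketch, not code from the review; the tiny three-subject data set is made up for illustration:

```python
import numpy as np

def penalized_cox_loglik(beta, X, time, event, lam):
    """L1-penalized Cox partial log-likelihood (Breslow, no tie handling).

    event[i] = 1 for an observed failure, 0 for a censored time; the risk
    set of subject i is everyone still under observation at time[i]."""
    beta = np.asarray(beta, float)
    eta = X @ beta                     # linear predictor
    ll = 0.0
    for i in np.where(event == 1)[0]:
        risk = time >= time[i]         # risk set at this failure time
        ll += eta[i] - np.log(np.sum(np.exp(eta[risk])))
    return ll - lam * np.sum(np.abs(beta))

# Tiny worked example: three subjects, one covariate, one censored time.
X = np.array([[1.0], [0.0], [2.0]])
time = np.array([1.0, 2.0, 3.0])
event = np.array([1, 0, 1])
```

Maximizing this criterion over `beta` with `lam > 0` shrinks the coefficients of uninformative genes to zero, which is what makes the approach workable in high dimensions.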

A Combined Approach of Pricing and (S-1, S) Inventory Policy in a Two-Echelon Supply Chain with Lost Sales Allowed

  • Sung, Chang Sup;Park, Sun Hoo
    • Journal of Korean Institute of Industrial Engineers, v.30 no.2, pp.146-158, 2004
  • This paper considers a continuous-review two-echelon inventory control problem with a one-to-one replenishment policy and lost sales allowed, where demand arrives according to a stationary Poisson process. The problem is formulated using the METRIC approximation in a combined approach of pricing and (S-1, S) inventory policy, for which a heuristic solution algorithm is derived for the corresponding one-warehouse multi-retailer supply chain. Specifically, decisions on retail pricing and warehouse inventory policies are made jointly to maximize total profit in the supply chain. The objective function of the model consists of sub-functions of revenue and cost (holding cost and penalty cost). To test the effectiveness and efficiency of the proposed algorithm, numerical experiments are performed for two cases: the first with identical retailers, and the second with different retailers having different market sizes. The computational results show that the proposed algorithm is efficient and derives quite good decisions.
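METRIC-style evaluation of an (S-1, S) base-stock policy rests on expected backorders and fill rate under Poisson lead-time demand. A minimal building block under that standard assumption (the truncated sum and the iterative pmf recursion are implementation conveniences, not the paper's algorithm):

```python
import math

def poisson_pmf_upto(mu, kmax):
    """P(D = k) for k = 0..kmax, computed by the recursion
    P(k) = P(k-1) * mu / k to avoid huge factorials."""
    pmf = [math.exp(-mu)]
    for k in range(1, kmax + 1):
        pmf.append(pmf[-1] * mu / k)
    return pmf

def expected_backorders(S, mu, kmax=None):
    """E[B] = sum_{k > S} (k - S) P(D = k) under an (S-1, S) base-stock
    policy with Poisson lead-time demand of mean mu (truncated sum)."""
    kmax = kmax if kmax is not None else S + 60
    pmf = poisson_pmf_upto(mu, kmax)
    return sum((k - S) * pmf[k] for k in range(S + 1, kmax + 1))

def fill_rate(S, mu):
    """Fraction of demand met from stock: P(D <= S - 1)."""
    return sum(poisson_pmf_upto(mu, max(S - 1, 0))[:S])
```

In a two-echelon METRIC model, quantities like these are evaluated at the warehouse first and then fed into the retailers' effective lead times; here they stand alone as the single-location ingredient.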

Mooring Cost Sensitivity Study Based on Cost-Optimum Mooring Design

  • Ryu, Sam Sangsoo;Heyl, Caspar;Duggal, Arun
    • Journal of Ocean Engineering and Technology, v.23 no.1, pp.1-6, 2009
  • This paper describes the results of a sensitivity study on the optimum mooring cost as a function of the safety factor and the allowable maximum offset of an offshore floating structure, by finding the anchor leg component size and the declination angle. A harmony search (HS) based mooring optimization program was developed to conduct the study. This mooring optimization model was integrated with a frequency-domain global motion analysis program to assess both the cost and the design constraints of the mooring system. To find a trend of anchor leg system cost for the proposed sensitivity study, optimum costs after a certain number of improvisations were found and compared. For a case study, a turret-moored FPSO with a 3 × 3 anchor leg system was considered. To better guide the search for the optimum cost, three different penalty functions were applied. The results show that the presented HS-based cost-optimum offshore mooring design tool can be used to find optimum mooring design values, such as the declination angle and horizontal end-point separation, as well as a cost-optimum mooring system when either the allowable maximum offset or the factor of safety varies.
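A bare-bones harmony search of the kind the abstract builds on fits in a few dozen lines. The parameter values (`hms`, `hmcr`, `par`) and the sphere-function test are illustrative defaults of mine; in the mooring application the offset and safety-factor constraints would enter through penalty terms added to the cost objective:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Minimal harmony search minimizing f over box bounds.

    hms: harmony memory size; hmcr: memory-considering rate; par: pitch
    adjusting rate.  Each iteration improvises one new harmony and keeps
    it if it beats the worst harmony in memory."""
    rng = random.Random(seed)
    new = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    memory = sorted((new() for _ in range(hms)), key=f)
    for _ in range(iters):
        x = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # draw from memory
                v = rng.choice(memory)[j]
                if rng.random() < par:              # pitch adjustment
                    v += rng.uniform(-1, 1) * 0.05 * (hi - lo)
            else:                                   # random improvisation
                v = rng.uniform(lo, hi)
            x.append(min(max(v, lo), hi))
        if f(x) < f(memory[-1]):                    # replace worst harmony
            memory[-1] = x
            memory.sort(key=f)
    return memory[0]

# Toy run: minimize the sphere function on [-5, 5]^2.
best = harmony_search(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)])
```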

Comparison of Feature Selection Methods in Support Vector Machines

  • Kim, Kwangsu;Park, Changyi
    • The Korean Journal of Applied Statistics, v.26 no.1, pp.131-139, 2013
  • Support vector machines (SVM) may perform poorly in the presence of noise variables; in addition, it is difficult to identify the importance of each variable in the resulting classifier. Feature selection can improve both the interpretability and the accuracy of SVM. Most existing studies concern feature selection in the linear SVM through penalty functions yielding sparse solutions. Note that nonlinear kernels are usually adopted in practice for classification accuracy; hence feature selection is still desirable for nonlinear SVMs. In this paper, we compare the performance of nonlinear feature selection methods, such as the component selection and smoothing operator (COSSO) and kernel iterative feature extraction (KNIFE), on simulated and real data sets.
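For the linear, penalty-function side of feature selection that the abstract contrasts with the nonlinear methods, a minimal L1-penalized linear SVM fitted by plain subgradient descent looks like this. It is a generic sketch with made-up toy data; COSSO and KNIFE themselves operate on kernels and are not reproduced here:

```python
import numpy as np

def l1_linear_svm(X, y, lam=0.1, lr=0.01, epochs=500):
    """Linear SVM (hinge loss) with an L1 penalty, fitted by plain
    subgradient descent; y must be in {-1, +1}.  The L1 term drives the
    weights of uninformative features toward zero."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(epochs):
        margins = y * (X @ w)
        mask = margins < 1                      # points violating the margin
        grad = -(X[mask] * y[mask, None]).sum(axis=0) / n + lam * np.sign(w)
        w -= lr * grad
    return w

# Toy data: only feature 0 carries signal; feature 1 is pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0, 1, -1)
w = l1_linear_svm(X, y)
```

The fitted weight on the noise feature is driven toward zero while the informative feature retains a large weight, which is the sparsity behavior the penalty-function approaches rely on.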

Development of an Optimization Algorithm Using Orthogonal Arrays in Discrete Design Space

  • Lee, Jeong-Uk;Park, Jun-Seong;Lee, Gwon-Hui;Park, Gyeong-Jin
    • Transactions of the Korean Society of Mechanical Engineers A, v.25 no.10, pp.1621-1626, 2001
  • Structural optimization has been carried out in either continuous or discrete design spaces. Methods for discrete variables, such as genetic algorithms, are extremely expensive in computational cost. In this research, an iterative optimization algorithm using orthogonal arrays is developed for design in a discrete space. An orthogonal array is selected on the discrete design space and levels are selected from candidate values. Matrix experiments with the orthogonal array are conducted, and new results of the matrix experiments are obtained with penalty functions for constraints. A new design is determined from the analysis of means (ANOM). An orthogonal array is then defined around the new values and matrix experiments are conducted again. The final optimum design is found through this iterative process. The suggested algorithm has been applied to various problems, such as truss and frame type structures, and the results are compared with those from a genetic algorithm and discussed.
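One iteration of the procedure this abstract describes, namely evaluating the objective (with penalty terms for constraints folded in) at the runs of an orthogonal array and then picking each factor's level by analysis of means, can be sketched with the small L4(2^3) array. The toy objective, candidate levels, and penalty are made up for illustration:

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs, 3 two-level factors (levels 0/1).
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def anom_pick(objective, levels):
    """One matrix experiment: evaluate the objective at the L4 runs, then
    pick for each factor the level with the lower mean response
    (analysis of means).  levels[j] holds factor j's two candidate values."""
    designs = [[levels[j][L4[run, j]] for j in range(3)] for run in range(4)]
    y = np.array([objective(d) for d in designs])
    best = []
    for j in range(3):
        mean0 = y[L4[:, j] == 0].mean()
        mean1 = y[L4[:, j] == 1].mean()
        best.append(levels[j][0] if mean0 <= mean1 else levels[j][1])
    return best

# Toy discrete problem: minimize (x0-3)^2 + (x1-5)^2 + (x2-1)^2,
# with a penalty (as in the abstract) if x0 + x1 exceeds 8.
def obj(x):
    f = (x[0] - 3) ** 2 + (x[1] - 5) ** 2 + (x[2] - 1) ** 2
    return f + (1e3 if x[0] + x[1] > 8 else 0.0)

best = anom_pick(obj, [(2, 3), (4, 5), (1, 2)])
```

In the full algorithm the candidate levels would then be re-centered around `best` and the matrix experiment repeated until the design stops changing.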