Title/Summary/Keyword: regularization


Sub-pixel Motion Compensated Deinterlacing Algorithm (부화소 단위의 움직임 정보를 고려한 순차 주사화)

  • 박민규;최종성;강문기
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.5
    • /
    • pp.322-331
    • /
    • 2003
  • Advances in high-definition television (HDTV) and personal computers call for mutual conversion between interlaced and progressive signals. In particular, deinterlacing, i.e., interlaced-to-progressive conversion, has recently been required and investigated. In this paper, we propose a new deinterlacing algorithm that considers sub-pixel motion information. In order to reduce the error of motion estimation, we analyze the effect of inaccurate sub-pixel motion information and model it as zero-mean Gaussian noise added to each low-resolution image (field). The error caused by inaccurate motion information is reduced by determining the regularization parameter according to the motion-estimation error in each channel. The validity of the proposed algorithm is demonstrated both theoretically and experimentally.
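
The abstract does not include the authors' formulation, but the idea of channel-dependent regularization can be sketched as follows: each low-resolution observation gets its own regularization weight that grows with the assumed motion-estimation error of that channel. The 1-D signal, sampling operators, and the rule linking motion error to the weight below are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Toy multichannel regularized reconstruction: low-resolution observations
# y_k = A_k x + n_k are fused into one estimate, with a per-channel
# regularization weight lam_k tied to the assumed motion error of channel k.
rng = np.random.default_rng(0)

n = 33                                        # length of the high-resolution signal
x_true = np.sin(np.linspace(0, 4 * np.pi, n))

# Hypothetical per-channel motion-error standard deviations (larger = worse).
motion_err = np.array([0.05, 0.20, 0.50])

# Each channel keeps every third sample with a different offset (a crude
# stand-in for motion-compensated sampling of fields).
A_list = [np.eye(n)[k::3] for k in range(3)]
y_list = [A @ x_true + rng.normal(0, s, A.shape[0])
          for A, s in zip(A_list, motion_err)]

D = np.diff(np.eye(n), axis=0)                # smoothness prior (first differences)

# Channel-dependent regularization: noisier motion -> larger lam_k.
# (The exact rule is an assumption, not the paper's formula.)
lam = 0.1 + 10.0 * motion_err ** 2

# Normal equations of  sum_k ( ||y_k - A_k x||^2 + lam_k ||D x||^2 ).
lhs = sum(A.T @ A for A in A_list) + lam.sum() * (D.T @ D)
rhs = sum(A.T @ y for A, y in zip(A_list, y_list))
x_hat = np.linalg.solve(lhs, rhs)
print("reconstruction RMSE:", np.sqrt(np.mean((x_hat - x_true) ** 2)))
```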

The Joint Effect of factors on Generalization Performance of Neural Network Learning Procedure (신경망 학습의 일반화 성능향상을 위한 인자들의 결합효과)

  • Yoon YeoChang
    • The KIPS Transactions:PartB
    • /
    • v.12B no.3 s.99
    • /
    • pp.343-348
    • /
    • 2005
  • The goal of this paper is to study the joint effect of factors in the neural network learning procedure. Many factors may affect the generalization ability and learning speed of neural networks, such as the initial values of the weights, the learning rate, and the regularization coefficient. We apply a constructive training algorithm for the neural network, in which patterns are trained incrementally by considering them one by one. First, we investigate the effect of these factors on generalization performance and learning speed. Based on these effects, we propose a joint method that simultaneously considers all three factors and dynamically tunes the learning rate and regularization coefficient. We then present experimental comparisons among these kinds of methods on several simulated nonlinear datasets. Finally, we draw conclusions and outline plans for future work.
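
As a rough illustration of the joint tuning idea (not the authors' exact scheme, which trains patterns incrementally with a constructive algorithm), the sketch below trains a tiny network by batch gradient descent and adapts the learning rate and L2 regularization coefficient on the fly; all adjustment rules and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated nonlinear data (synthetic stand-in for the paper's experiments).
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X) + 0.1 * rng.normal(size=X.shape)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

# Tiny one-hidden-layer network; the initial weight scale is one of the factors.
h = 10
W1, b1 = rng.normal(0, 0.5, (1, h)), np.zeros(h)
W2, b2 = rng.normal(0, 0.5, (h, 1)), np.zeros(1)

def forward(X, W1, b1, W2, b2):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

lr, lam = 0.05, 1e-3          # learning rate and L2 coefficient, tuned dynamically
prev_train = np.inf
for epoch in range(500):
    H, out = forward(X_tr, W1, b1, W2, b2)
    err = out - y_tr
    train = float(np.mean(err ** 2))
    # Backpropagation with L2 (weight-decay) regularization.
    gW2 = H.T @ err / len(X_tr) + lam * W2
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X_tr.T @ dH / len(X_tr) + lam * W1
    gb1 = dH.mean(axis=0)
    W1, b1, W2, b2 = W1 - lr * gW1, b1 - lr * gb1, W2 - lr * gW2, b2 - lr * gb2

    # Dynamic learning rate: speed up while improving, back off otherwise.
    lr = min(lr * 1.05, 0.2) if train < prev_train else lr * 0.7
    prev_train = train
    # Dynamic regularization: grow lam when the validation gap widens.
    val = float(np.mean((forward(X_va, W1, b1, W2, b2)[1] - y_va) ** 2))
    lam = min(lam * 1.02, 1e-1) if val > 1.2 * train else max(lam * 0.98, 1e-5)

print(f"final train={train:.4f}  val={val:.4f}  lr={lr:.3f}  lam={lam:.5f}")
```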

An integrated method of flammable cloud size prediction for offshore platforms

  • Zhang, Bin;Zhang, Jinnan;Yu, Jiahang;Wang, Boqiao;Li, Zhuoran;Xia, Yuanchen;Chen, Li
    • International Journal of Naval Architecture and Ocean Engineering
    • /
    • v.13 no.1
    • /
    • pp.321-339
    • /
    • 2021
  • The Response Surface Method (RSM) has been widely used for flammable cloud size prediction, as it can reduce the computational intensity of further Explosion Risk Analysis (ERA), especially during the early design phase of offshore platforms. However, RSM suffers from overfitting when only very limited simulations are available. To overcome this disadvantage, a Bayesian Regularization Artificial Neural Network (BRANN)-based model has recently been developed, and its robustness and efficiency have been widely verified. Nevertheless, for ERA during the early design phase, there is room to further reduce the computational intensity while keeping the model acceptably accurate. This study develops an integrated method, namely the combination of the Central Composite Design (CCD) method with BRANN, for flammable cloud size prediction. A case study with constant and transient leakages is conducted to illustrate the feasibility and advantages of this hybrid method, and the performance of CCD-BRANN is compared with that of RSM. It is concluded that the newly developed hybrid method is more robust and computationally efficient for ERAs during the early design phase.
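
A minimal sketch of the CCD-plus-surrogate idea, under stated assumptions: the two leak parameters, the response function standing in for the CFD dispersion runs, and the use of scikit-learn's L2-penalized `MLPRegressor` as a rough stand-in for true Bayesian-regularized training (BRANN) are all choices made here for illustration only.

```python
import numpy as np
from itertools import product
from sklearn.neural_network import MLPRegressor

# Step 1: central composite design (CCD) for two hypothetical leak parameters,
# e.g. leak rate and wind speed, both coded to [-1, 1].
alpha = np.sqrt(2)                                   # rotatable axial distance
factorial = np.array(list(product([-1, 1], repeat=2)), dtype=float)
axial = np.array([[alpha, 0], [-alpha, 0], [0, alpha], [0, -alpha]])
center = np.zeros((3, 2))                            # replicated centre points
X = np.vstack([factorial, axial, center])            # 11 scenarios in total

# Step 2: run the "simulations".  A made-up smooth response stands in for the
# flammable cloud size a CFD dispersion run would return.
def fake_cloud_size(x):
    return 50 + 20 * x[0] - 8 * x[1] + 5 * x[0] * x[1] - 3 * x[0] ** 2

y = np.array([fake_cloud_size(x) for x in X])

# Step 3: fit the surrogate.  MLPRegressor with an L2 penalty (alpha) is only a
# stand-in for the Bayesian-regularized training used by BRANN.
model = MLPRegressor(hidden_layer_sizes=(8,), alpha=1e-2, solver="lbfgs",
                     max_iter=5000, random_state=0).fit(X, y)

# Predict the cloud size for a new, unsimulated scenario.
print(model.predict(np.array([[0.3, -0.5]])))
```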

Developing an approach for fast estimation of range of ion in interaction with material using the Geant4 toolkit in combination with the neural network

  • Khalil Moshkbar-Bakhshayesh;Soroush Mohtashami
    • Nuclear Engineering and Technology
    • /
    • v.54 no.11
    • /
    • pp.4209-4214
    • /
    • 2022
  • Precise modelling of the interaction of ions with materials is important for many applications, including material characterization, ion implantation in devices, thermonuclear fusion, hadron therapy, and secondary particle production (e.g., neutrons). In this study, a new approach using the Geant4 toolkit in combination with the Bayesian regularization (BR) learning algorithm for a feed-forward neural network (FFNN) is developed to estimate the range of ions in materials accurately and quickly. Different incident ions at different energies interact with the target materials; Geant4 is utilized to model the interactions and to calculate the range of the ions. Afterward, an appropriate FFNN-BR architecture with the relevant input features is trained on the modelled ranges and used to estimate ranges for new cases. The notable achievements of the proposed approach are: (1) the range of ions in different materials is given almost instantly, with a negligible estimation time (less than 0.01 s on a typical personal computer); (2) the approach generalizes to new, untrained cases; and (3) no pre-made lookup table is needed for estimating the range values.
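
The Geant4 simulations cannot be reproduced here, so the sketch below substitutes a made-up power-law range table and again uses an L2-penalized `MLPRegressor` in place of a Bayesian-regularized FFNN; it only illustrates a plausible feature choice (log energy, log density) and the near-instant prediction once the network is trained. All features, units, and values are assumptions.

```python
import time
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Stand-in training set: in the paper these (features -> range) pairs come from
# Geant4 runs; here a rough power law is used only so the script runs end to end.
n = 500
Z = rng.integers(1, 9, n)                     # ion charge number (assumed)
A = 2 * Z                                     # crude mass number
E = rng.uniform(1, 500, n)                    # energy per nucleon [MeV] (assumed)
density = rng.uniform(1.0, 8.0, n)            # target density [g/cm^3] (assumed)
R = 0.02 * A * E ** 1.7 / (Z ** 2 * density)  # fake "Geant4" range [mm]

X = np.column_stack([Z, A, np.log(E), np.log(density)])
y = np.log(R)                                 # learn log(range): smoother target

# L2-penalized MLP as a stand-in for the Bayesian-regularized FFNN (FFNN-BR).
model = MLPRegressor(hidden_layer_sizes=(20, 20), alpha=1e-3, solver="lbfgs",
                     max_iter=5000, random_state=0).fit(X, y)

# Once trained, a range query is a single cheap forward pass.
query = np.array([[2, 4, np.log(100.0), np.log(2.7)]])
t0 = time.perf_counter()
range_mm = np.exp(model.predict(query))[0]
print(f"estimated range ~ {range_mm:.1f} mm in {time.perf_counter() - t0:.4f} s")
```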

A technique for extracting complex building boundaries from segmented LiDAR points (라이다 분할포인트로부터 복잡한 건물의 외곽선 추출 기법)

  • Lee, Jeong-Ho;Han, Soo-Hee;Byun, Young-Gi;Yu, Ki-Yun
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference
    • /
    • 2007.04a
    • /
    • pp.153-156
    • /
    • 2007
  • There have been many studies on extracting building boundaries from LiDAR (Light Detection And Ranging) data. In such studies, points are first segmented and then further processed to obtain straight boundary lines that better approximate the real boundaries. In most research in this area, processes such as generalization or regularization assume that buildings have only right angles, i.e., all line segments of the building boundaries are either parallel or perpendicular. However, this assumption is not valid for many buildings. We present a new approach, consisting of three steps, that is applicable to more complex building boundaries: boundary tracing, generalization, and regularization. Each step contains algorithms that range from slight modifications of conventional algorithms to entirely new concepts. Four typical building shapes were selected to test the performance of our new approach, and the results were compared with digital maps. The results show that the proposed approach has good potential for extracting building boundaries of various shapes.
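
The paper's own tracing, generalization, and regularization algorithms are not reproduced in the abstract; as one plausible illustration of the generalization step, the sketch below simplifies a jagged traced boundary with Douglas-Peucker line simplification (the coordinates and tolerance are invented).

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    den = math.hypot(bx - ax, by - ay)
    return num / den if den else math.hypot(px - ax, py - ay)

def douglas_peucker(points, tol):
    """Simplify a traced boundary polyline: one plausible 'generalization' step."""
    if len(points) < 3:
        return points
    dists = [point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > tol:
        left = douglas_peucker(points[: i + 1], tol)
        right = douglas_peucker(points[i:], tol)
        return left[:-1] + right
    return [points[0], points[-1]]

# Jagged boundary traced from segmented LiDAR points (coordinates made up).
traced = [(0, 0), (1, 0.1), (2, -0.1), (3, 0.05), (4, 0), (4.1, 1), (3.9, 2),
          (4, 3), (3, 3.1), (2, 2.9), (1, 3.05), (0, 3)]
generalized = douglas_peucker(traced, tol=0.3)
print(generalized)
# A subsequent regularization step would, e.g., snap the remaining segments to
# the building's dominant edge directions (not only 90 degrees, per the paper).
```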

Sparsity-constrained Extended Kalman Filter concept for damage localization and identification in mechanical structures

  • Ginsberg, Daniel;Fritzen, Claus-Peter;Loffeld, Otmar
    • Smart Structures and Systems
    • /
    • v.21 no.6
    • /
    • pp.741-749
    • /
    • 2018
  • Structural health monitoring (SHM) systems are necessary to achieve smart predictive maintenance and repair planning, and they enable safe operation of mechanical structures. In the context of vibration-based SHM, the measured structural responses are employed to draw conclusions about the structural integrity. This usually leads to a mathematically ill-posed inverse problem which needs regularization. Restricting the solution set of this inverse problem using prior information about the damage properties is advisable to obtain meaningful solutions. Compared to the undamaged state, typically only a few local stiffness changes occur while the other areas remain unchanged, so the change can be described by a sparse damage parameter vector. Such a sparse vector can be identified by employing $L_1$-regularization techniques. This paper presents a novel framework for damage parameter identification that combines sparse solution techniques with an Extended Kalman Filter. In order to ensure sparsity of the damage parameter vector, the measurement equation is expanded by an additional nonlinear $L_1$-minimizing observation. This fictive measurement equation ensures stability of the Extended Kalman Filter and leads to a sparse estimate. For verification, a proof-of-concept example on a square aluminum plate is presented.
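
A minimal numpy sketch of the core idea, under assumed dimensions and noise levels: a (linearized) Kalman measurement update is augmented with a fictive observation that "measures" the $L_1$ norm of the damage parameter vector as zero, which pulls the estimate toward a sparse solution. It is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setting: 10 candidate damage parameters, only one truly nonzero.
n = 10
theta_true = np.zeros(n)
theta_true[3] = 0.3                          # 30 % stiffness loss at element 3

# Linearized sensitivity of the measured response w.r.t. the parameters
# (in a real SHM problem this would come from the structural model).
H = rng.normal(size=(6, n))
z = H @ theta_true + 0.01 * rng.normal(size=6)

theta = np.zeros(n)                          # prior estimate
P = np.eye(n)                                # prior covariance
R = 1e-4 * np.eye(6)

for _ in range(20):
    # Fictive L1-minimizing observation: "measure" ||theta||_1 = 0.
    h_l1 = np.sum(np.abs(theta))
    J_l1 = np.sign(theta).reshape(1, -1)

    # Augmented measurement model (real measurements + sparsity pseudo-measurement).
    H_a = np.vstack([H, J_l1])
    innov = np.concatenate([z - H @ theta, [0.0 - h_l1]])
    R_a = np.block([[R, np.zeros((6, 1))],
                    [np.zeros((1, 6)), np.array([[1e-3]])]])

    # Standard (extended) Kalman measurement update.
    S = H_a @ P @ H_a.T + R_a
    K = P @ H_a.T @ np.linalg.inv(S)
    theta = theta + K @ innov
    P = (np.eye(n) - K @ H_a) @ P

print(np.round(theta, 3))   # the fictive L1 observation drives most entries toward zero
```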

Finite Element Mesh Dependency in Nonlinear Earthquake Analysis of Concrete Dams (콘크리트 댐의 비선형 지진해석에서의 유한요소망 영향)

  • 이지호
    • Journal of the Korea Concrete Institute
    • /
    • v.13 no.6
    • /
    • pp.637-644
    • /
    • 2001
  • A regularization method based on the Duvaut-Lions viscoplastic scheme for plastic-damage and continuum damage models, which provides mesh-independent and well-posed solutions in the nonlinear earthquake analysis of concrete dams, is presented. A plastic-damage model regularized with the proposed rate-dependent viscosity method, together with its original rate-independent version, is used for the earthquake damage analysis of a concrete dam to analyze the effect of the regularization and the mesh. The computational analysis shows that the regularized plastic-damage model gives well-posed solutions regardless of mesh size and arrangement, while the rate-independent counterpart produces mesh-dependent, ill-posed results.
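
The Duvaut-Lions update has a simple closed form in which the regularized stress relaxes from the elastic trial state toward the rate-independent solution; the scalar sketch below (with assumed stresses, time step, and relaxation times) only illustrates that form, not the dam analysis itself.

```python
def duvaut_lions_update(sigma_trial, sigma_inviscid, dt, tau):
    """One-step Duvaut-Lions viscoplastic regularization (scalar illustration).

    The regularized stress relaxes from the elastic trial value toward the
    rate-independent (inviscid plastic-damage) solution with relaxation time
    tau; tau -> 0 recovers the rate-independent model, large tau stays close
    to the elastic trial state.
    """
    r = dt / tau
    return (sigma_trial + r * sigma_inviscid) / (1.0 + r)

# Example: trial stress 3.0 MPa, rate-independent return-mapped stress 2.0 MPa.
for tau in (1e-4, 1e-2, 1e-1):
    print(tau, duvaut_lions_update(3.0, 2.0, dt=1e-3, tau=tau))
```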

Multiple Group Testing Procedures for Analysis of High-Dimensional Genomic Data

  • Ko, Hyoseok;Kim, Kipoong;Sun, Hokeun
    • Genomics & Informatics
    • /
    • v.14 no.4
    • /
    • pp.187-195
    • /
    • 2016
  • In genetic association studies with high-dimensional genomic data, multiple group testing procedures are often required in order to identify disease/trait-related genes or genetic regions, where multiple genetic sites or variants are located within the same gene or genetic region. However, statistical testing procedures based on individual tests suffer from multiple testing issues such as control of the family-wise error rate and dependence between tests. Moreover, the main interest in genetic association studies is to detect only a few genes associated with a phenotype outcome among tens of thousands of genes. For this reason, regularization procedures, in which a phenotype outcome is regressed on all genomic markers and the regression coefficients are estimated from a penalized likelihood, have been considered a good alternative for the analysis of high-dimensional genomic data. However, the selection performance of regularization procedures has rarely been compared with that of statistical group testing procedures. In this article, we performed extensive simulation studies in which commonly used group testing procedures, such as principal component analysis, Hotelling's $T^2$ test, and the permutation test, are compared with the group lasso (least absolute shrinkage and selection operator) in terms of true positive selection. We also applied all methods considered in the simulation studies to identify genes associated with ovarian cancer from over 20,000 genetic sites generated from the Illumina Infinium HumanMethylation27K BeadChip. We found a large discrepancy between the genes selected by the multiple group testing procedures and those selected by the group lasso.
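
As an illustration of the group lasso used in the comparison (not the article's code), the sketch below runs proximal gradient descent with block soft-thresholding on a simulated gene-by-site matrix; the group structure, penalty level, and data are all made up.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: 120 samples, 15 "genes" x 4 sites each = 60 markers.
n, n_genes, sites = 120, 15, 4
p = n_genes * sites
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[0:4] = 0.8            # only gene 0 is associated with the phenotype
y = X @ beta_true + rng.normal(size=n)

groups = [np.arange(g * sites, (g + 1) * sites) for g in range(n_genes)]
lam = 25.0                                       # group-lasso penalty (assumed)
step = 1.0 / np.linalg.norm(X, 2) ** 2           # 1 / Lipschitz constant

beta = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ beta - y)                  # gradient of 0.5*||y - X beta||^2
    b = beta - step * grad
    # Proximal step: block soft-thresholding, one block per gene.
    for g in groups:
        norm_g = np.linalg.norm(b[g])
        scale = max(0.0, 1.0 - step * lam * np.sqrt(len(g)) / norm_g) if norm_g else 0.0
        b[g] = scale * b[g]
    beta = b

selected = [g for g, idx in enumerate(groups) if np.linalg.norm(beta[idx]) > 1e-8]
print("selected genes:", selected)
```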

Super-Resolution Image Reconstruction Using Multi-View Cameras (다시점 카메라를 이용한 초고해상도 영상 복원)

  • Ahn, Jae-Kyun;Lee, Jun-Tae;Kim, Chang-Su
    • Journal of Broadcast Engineering
    • /
    • v.18 no.3
    • /
    • pp.463-473
    • /
    • 2013
  • In this paper, we propose a super-resolution (SR) image reconstruction algorithm using multi-view images. We acquire 25 images from multi-view cameras arranged in a $5{\times}5$ array, and then reconstruct an SR image of the center view using one low-resolution (LR) input image and the other 24 LR reference images. First, we estimate disparity maps from the input image to each of the 24 reference images. Then, we interpolate an SR image by employing the LR image and the matching points in the reference images. Finally, we refine the SR image using an iterative regularization scheme. Experimental results demonstrate that the proposed algorithm provides higher-quality SR images than conventional algorithms.
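
The multi-view disparity pipeline cannot be reproduced in a few lines, but the final iterative regularization refinement can be illustrated in 1-D: starting from a crude interpolation, gradient iterations minimize a data-fidelity term plus a smoothness penalty. The decimation operator, penalty weight, and step size below are assumptions, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

# 1-D stand-in for the SR refinement stage: refine an initial interpolated
# estimate by minimizing ||y - A x||^2 + lam * ||D x||^2 with gradient steps.
n = 64
x_true = np.cos(np.linspace(0, 6, n))
A = np.eye(n)[::2]                       # decimation: LR observation operator (assumed)
y = A @ x_true + 0.02 * rng.normal(size=A.shape[0])

x = np.repeat(y, 2)                      # crude initial interpolation of the LR input
D = np.diff(np.eye(n), axis=0)           # first-difference (smoothness) operator
lam, step = 0.1, 0.1

for _ in range(300):                     # iterative regularization refinement
    grad = A.T @ (A @ x - y) + lam * (D.T @ (D @ x))
    x = x - step * grad

print("RMSE before refinement:", np.sqrt(np.mean((np.repeat(y, 2) - x_true) ** 2)))
print("RMSE after  refinement:", np.sqrt(np.mean((x - x_true) ** 2)))
```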

Joint Identification of Multiple Genetic Variants of Obesity in a Korean Genome-wide Association Study

  • Oh, So-Hee;Cho, Seo-Ae;Park, Tae-Sung
    • Genomics & Informatics
    • /
    • v.8 no.3
    • /
    • pp.142-149
    • /
    • 2010
  • In recent years, genome-wide association (GWA) studies have successfully led to many discoveries of genetic variants affecting common complex traits, including height, blood pressure, and diabetes. Although GWA studies have made much progress in finding single nucleotide polymorphisms (SNPs) associated with many complex traits, such SNPs have been shown to explain only a very small proportion of the underlying genetic variance of complex traits. This is partly due to the fact that most current GWA studies have relied on single-marker approaches, which identify single genetic factors individually and are limited in considering the joint effects of multiple genetic factors on complex traits. Joint identification of multiple genetic factors would be more powerful and would provide better prediction of complex traits, since it utilizes combined information across variants. Recently, a new statistical method for joint identification of genetic variants for common complex traits via the elastic-net regularization method was proposed. In this study, we applied this joint identification approach to a large-scale GWA dataset (8,842 samples and 327,872 SNPs) in order to identify genetic variants of obesity in the Korean population. In addition, to test the biological significance of the jointly identified SNPs, gene ontology and pathway enrichment analyses were further conducted.
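
A small sketch of elastic-net-based joint identification using scikit-learn's `ElasticNet`; the real study used 8,842 samples and 327,872 SNPs, whereas the genotype matrix, effect sizes, and penalty values below are simulated and assumed purely for illustration.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(6)

# Simulated genotypes: 1,000 individuals x 2,000 SNPs coded 0/1/2 (minor-allele
# counts); only a handful of SNPs jointly influence the simulated trait.
n, p = 1000, 2000
maf = rng.uniform(0.05, 0.5, p)
G = rng.binomial(2, maf, size=(n, p)).astype(float)
causal = [10, 250, 1400]
y = G[:, causal] @ np.array([0.4, -0.3, 0.25]) + rng.normal(size=n)

# Elastic-net regression: the L1 part selects SNPs, the L2 part stabilizes
# selection among correlated markers.  alpha and l1_ratio are arbitrary here;
# in practice they would be chosen by cross-validation (e.g. ElasticNetCV).
model = ElasticNet(alpha=0.05, l1_ratio=0.5, max_iter=10000).fit(G, y)
selected = np.flatnonzero(model.coef_)
print("non-zero SNPs:", selected[:10], "...", "total:", selected.size)
```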