Title/Summary/Keyword: regularization

A Study on Reconstructing Impact Forces of an Aircraft Wing Using Impact Response Functions and Regularization Methods (충격응답함수와 조정법을 이용한 항공기 날개의 충격하중 복원 연구)

  • 박찬익
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.34 no.8
    • /
    • pp.41-46
    • /
    • 2006
  • The capability to reconstruct impact forces on an aircraft wing using impact response functions and regularization methods was examined. The impact response function, which expresses the relation between the structural response and the impact force, was derived from the mass and stiffness data of a finite element model of the wing. An iterative Tikhonov regularization method and a generalized singular value decomposition method were used to invert the impact response function, which is generally ill-posed. A fighter aircraft wing was used for numerical verification. Strain and deflection histories obtained from finite element analysis were compared with the results calculated using the impact response functions, and the impact forces were then reconstructed from the strain histories obtained from finite element analysis. The numerical verification results showed that this method can be used to monitor impact forces on aircraft structures.
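
The following is a minimal, illustrative sketch of force reconstruction by iterated Tikhonov regularization, not the authors' code: the response matrix H, impulse response h, noise level, and parameter lam are assumptions chosen only to show how an ill-posed impact response relation can be inverted.

```python
# Illustrative sketch (assumed data and parameters, not the paper's model):
# reconstruct an impact force history from a noisy response record by
# iterated Tikhonov regularization of y = H f.
import numpy as np

def iterative_tikhonov(H, y, lam=1e-3, n_iter=30):
    """Iterated Tikhonov: repeatedly re-solve the regularized normal equations on the residual."""
    A = H.T @ H + lam * np.eye(H.shape[1])
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        f += np.linalg.solve(A, H.T @ (y - H @ f))
    return f

# Toy impact response: a lower-triangular convolution matrix built from an
# assumed decaying-oscillation impulse response.
n = 200
t = np.linspace(0.0, 1.0, n)
h = np.exp(-20.0 * t) * np.sin(60.0 * t)
H = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])

f_true = np.exp(-((t - 0.2) ** 2) / (2 * 0.01 ** 2))      # short impact pulse
y = H @ f_true + 1e-3 * np.random.randn(n)                 # noisy "measured" response
f_hat = iterative_tikhonov(H, y)
print("relative error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```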

Disparity Estimation using a Region-Dividing Technique and Edge-preserving Regularization (영역 분할 기법과 경계 보존 변이 평활화를 이용한 스테레오 영상의 변이 추정)

  • 김한성;손광훈
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.6
    • /
    • pp.25-32
    • /
    • 2004
  • We propose a hierarchical disparity estimation algorithm with edge-preserving energy-based regularization. Initial disparity vectors are obtained from downsampled stereo images using a feature-based region-dividing disparity estimation technique. Dense disparities are then estimated from these initial vectors with shape-adaptive windows in the full-resolution images. Finally, the vector fields are regularized by minimizing an energy functional that considers both the fidelity and the smoothness of the fields. The first two steps provide highly reliable disparity vectors, so the local-minimum problem can be avoided in the regularization step. The proposed algorithm generates an accurate disparity map that is smooth inside objects while preserving discontinuities at object boundaries. Experimental results are presented to illustrate the capabilities of the proposed disparity estimation technique.
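
A minimal sketch of the final regularization step only, under assumptions: an initial disparity field is smoothed by gradient descent on a fidelity-plus-smoothness energy whose smoothness weight is reduced across strong image gradients. The weighting function, step size, and toy data are illustrative, not the paper's formulation.

```python
# Edge-preserving regularization of a disparity field (illustrative sketch).
import numpy as np

def regularize_disparity(d_init, image, lam=1.0, step=0.2, n_iter=200):
    d = d_init.copy()
    gy, gx = np.gradient(image)
    w = np.exp(-(gx ** 2 + gy ** 2) / 0.01)        # small smoothness weight across image edges
    for _ in range(n_iter):
        dy, dx = np.gradient(d)
        smooth_grad = -(np.gradient(w * dx, axis=1) + np.gradient(w * dy, axis=0))
        fidelity_grad = d - d_init                  # stay close to the initial estimate
        d -= step * (fidelity_grad + lam * smooth_grad)
    return d

# Toy usage: a synthetic image with one vertical edge and a noisy initial disparity.
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1)); img[:, 32:] += 1.0
d0 = (img > 1.0) * 5.0 + 0.3 * np.random.randn(64, 64)
d_smooth = regularize_disparity(d0, img)
```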

Improved Density-Independent Fuzzy Clustering Using Regularization (레귤러라이제이션 기반 개선된 밀도 무관 퍼지 클러스터링)

  • Han, Soowhan;Heo, Gyeongyong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.1
    • /
    • pp.1-7
    • /
    • 2020
  • Fuzzy clustering, represented by FCM (fuzzy c-means), is a simple and efficient clustering method. However, the objective function in FCM lets each cluster influence the clustering result in proportion to its density, which can distort the result when the clusters differ in density. One method that alleviates this density problem is EDI-FCM (extended density-independent FCM), which adds terms to the FCM objective function to compensate for the density difference. In this paper, we propose an enhanced EDI-FCM that uses regularization, called Regularized EDI-FCM. Regularization is commonly used to smooth the solution space and make an algorithm insensitive to noise. In clustering, regularization can reduce the effect of a high-density cluster on the clustering result. Experimental results show that the proposed method converges more quickly and accurately to the true centers than FCM and EDI-FCM.
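
For reference, a minimal sketch of plain FCM is given below; it is not the paper's Regularized EDI-FCM and omits both the density-compensating terms and the regularization term, showing only the baseline iteration that those terms modify. All parameters and the toy data are illustrative.

```python
# Baseline fuzzy c-means (FCM) iteration (illustrative sketch only).
import numpy as np

def fcm(X, k, m=2.0, n_iter=100, eps=1e-9):
    n = X.shape[0]
    U = np.random.dirichlet(np.ones(k), size=n)                  # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        C = (Um.T @ X) / (Um.sum(axis=0)[:, None] + eps)         # weighted cluster centers
        D = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + eps
        U = 1.0 / D ** (2.0 / (m - 1.0))                         # membership update
        U /= U.sum(axis=1, keepdims=True)
    return C, U

# Toy data with a dense and a sparse cluster, the situation EDI-FCM targets.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (300, 2)), rng.normal(3.0, 0.3, (30, 2))])
centers, memberships = fcm(X, k=2)
print(centers)
```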

Network-based regularization for analysis of high-dimensional genomic data with group structure (그룹 구조를 갖는 고차원 유전체 자료 분석을 위한 네트워크 기반의 규제화 방법)

  • Kim, Kipoong;Choi, Jiyun;Sun, Hokeun
    • The Korean Journal of Applied Statistics
    • /
    • v.29 no.6
    • /
    • pp.1117-1128
    • /
    • 2016
  • In genetic association studies with high-dimensional genomic data, regularization procedures based on penalized likelihood are often applied to identify genes or genetic regions associated with diseases or traits. A network-based regularization procedure can utilize biological network information, such as genetic pathways and signaling pathways, and offers outstanding selection performance compared with other regularization procedures such as the lasso and elastic-net. However, network-based regularization has a limitation: it cannot be applied to high-dimensional genomic data with a group structure. In this article, we propose combining data dimension reduction techniques, such as principal component analysis and partial least squares, with network-based regularization for the analysis of high-dimensional genomic data with a group structure. The selection performance of the proposed method was evaluated through extensive simulation studies. The proposed method was also applied to real DNA methylation data generated from the Illumina Infinium HumanMethylation27K BeadChip, in which methylation beta values of around 20,000 CpG sites over 12,770 genes were compared between 123 ovarian cancer patients and 152 healthy controls. This analysis also identified a few cancer-related genes.
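
A minimal sketch of the dimension-reduction idea, under assumptions: each gene group of CpG sites is summarized by its first principal component, and the gene-level scores are then fed to a penalized logistic regression. A lasso penalty stands in here for the paper's network-based penalty, and the data are synthetic.

```python
# Group-wise PCA followed by penalized selection (illustrative sketch).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_genes, sites_per_gene = 120, 50, 10
X = rng.normal(size=(n, n_genes * sites_per_gene))     # methylation-like values
y = rng.integers(0, 2, size=n)                         # case/control labels

# One score per gene: first principal component of that gene's CpG sites
# (partial least squares could be substituted here).
scores = np.column_stack([
    PCA(n_components=1).fit_transform(X[:, g * sites_per_gene:(g + 1) * sites_per_gene]).ravel()
    for g in range(n_genes)
])

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(scores, y)
print("selected gene groups:", np.flatnonzero(model.coef_.ravel() != 0))
```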

An Extension of Unified Bayesian Tikhonov Regularization Method and Application to Image Restoration (통합 베이즈 티코노프 정규화 방법의 확장과 영상복원에 대한 응용)

  • Yoo, Jae Hung
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.15 no.1
    • /
    • pp.161-166
    • /
    • 2020
  • This paper suggests an extension of the unified Bayesian Tikhonov regularization method. The unified method establishes the relationship between the Tikhonov regularization parameter and the Bayesian hyper-parameters, and presents a formula for obtaining the regularization parameter using the maximum posterior probability and the evidence framework. When the dimension of the data matrix is m by n (m >= n), we derive that the total misfit lies in the range m ± n instead of being fixed at m, so the search range is extended from one integer point to 2n + 1 integer points. A golden section search, rather than a linear search, is applied to reduce the search time. A new benchmark for optimizing the relative error and new model selection criteria that target it are suggested. Experimental results show the effectiveness of the proposed method on the image restoration problem.
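
A minimal sketch of one ingredient, under assumptions: choosing the Tikhonov parameter by a golden section search over log(lambda). Generalized cross-validation is used here as the selection criterion purely for illustration; the paper's evidence-based criteria and extended search range are not reproduced.

```python
# Golden section search for the Tikhonov regularization parameter (illustrative sketch).
import numpy as np
from scipy.optimize import minimize_scalar

def tikhonov(A, b, lam):
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def gcv(log_lam, A, b):
    lam = np.exp(log_lam)
    S = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)   # influence matrix
    r = b - S @ b
    return np.sum(r ** 2) / (A.shape[0] - np.trace(S)) ** 2

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 30))
x_true = rng.normal(size=30)
b = A @ x_true + 0.1 * rng.normal(size=60)

res = minimize_scalar(gcv, bracket=(-10.0, 0.0), args=(A, b), method="golden")
lam_opt = np.exp(res.x)
x_hat = tikhonov(A, b, lam_opt)
print("lambda:", lam_opt, "relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```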

A study on the performance improvement of learning based on consistency regularization and unlabeled data augmentation (일치성규칙과 목표값이 없는 데이터 증대를 이용하는 학습의 성능 향상 방법에 관한 연구)

  • Kim, Hyunwoong;Seok, Kyungha
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.2
    • /
    • pp.167-175
    • /
    • 2021
  • Semi-supervised learning uses both labeled and unlabeled data. Consistency regularization has recently become very popular in semi-supervised learning. Unsupervised data augmentation (UDA), which augments the unlabeled data, is also based on consistency regularization. In UDA training, the Kullback-Leibler divergence is used as the loss on unlabeled data and cross-entropy as the loss on labeled data. UDA also uses techniques such as training signal annealing (TSA) and confidence-based masking to improve performance. In this study, we propose using the Jensen-Shannon divergence instead of the Kullback-Leibler divergence, using reverse TSA, and not using confidence-based masking. Experiments show that the proposed technique yields better performance than UDA.
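
A minimal sketch of the proposed loss shape, under assumptions: cross-entropy on labeled data plus a Jensen-Shannon consistency term between predictions on an unlabeled example and its augmented view. The model, batches, weighting factor, and the absence of TSA scheduling are illustrative simplifications.

```python
# Consistency regularization with Jensen-Shannon divergence (illustrative sketch).
import torch
import torch.nn.functional as F

def js_divergence(logits_p, logits_q):
    """Jensen-Shannon divergence between two categorical predictions."""
    p, q = F.softmax(logits_p, dim=-1), F.softmax(logits_q, dim=-1)
    m = 0.5 * (p + q)
    return 0.5 * (F.kl_div(m.log(), p, reduction="batchmean")
                  + F.kl_div(m.log(), q, reduction="batchmean"))

def semi_supervised_loss(model, x_lab, y_lab, x_unlab, x_unlab_aug, lam=1.0):
    sup = F.cross_entropy(model(x_lab), y_lab)          # supervised loss on labeled data
    with torch.no_grad():
        logits_orig = model(x_unlab)                    # target view, no gradient
    cons = js_divergence(logits_orig, model(x_unlab_aug))
    return sup + lam * cons
```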

Adaptive Image Restoration Considering the Edge Direction (윤곽 방향성을 고려한 적응적 영상복원)

  • Jeon, Woo-Sang;Lee, Myung-Sub;Jang, Ho
    • The KIPS Transactions:PartB
    • /
    • v.16B no.1
    • /
    • pp.1-6
    • /
    • 2009
  • It is very difficult to restore images degraded by motion blur and additive noise. Conventional methods usually apply the same regularization to the entire image without considering its local characteristics. As a result, ringing artifacts appear in edge regions and noise is amplified in flat regions. To solve these problems, we propose an adaptive iterative regularization method that uses a regularization operator which takes edge directions into account. In addition, we suggest an adaptive regularization parameter and a relaxation parameter. We verify that the new method suppresses noise amplification in flat regions and produces fewer ringing artifacts in edge regions. It also yields better restored images and improves the ISNR compared with conventional methods.
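
A minimal sketch of the general idea, under assumptions: an iterative regularized restoration in which the smoothness penalty is weighted down near edges so that flat regions are smoothed while edges stay sharp. The edge weight here is based on gradient magnitude rather than the paper's edge-direction-dependent operator, and the kernel and parameters are illustrative.

```python
# Spatially adaptive iterative regularized restoration (illustrative sketch).
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

LAP = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)      # smoothness operator

def restore(g, psf, lam=0.05, beta=1.0, n_iter=100):
    f = g.copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(gaussian_filter(f, 1.0))
        edge = np.hypot(gx, gy)
        w = 1.0 / (1.0 + (edge / (edge.mean() + 1e-8)) ** 2)    # small penalty weight on edges
        resid = g - convolve(f, psf, mode="reflect")             # data-fidelity residual
        reg = convolve(w * convolve(f, LAP, mode="reflect"), LAP, mode="reflect")
        f = f + beta * (convolve(resid, psf[::-1, ::-1], mode="reflect") - lam * reg)
    return f
```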

Learning Domain Invariant Representation via Self-Regularization (자기 정규화를 통한 도메인 불변 특징 학습)

  • Hyun, Jaeguk;Lee, ChanYong;Kim, Hoseong;Yoo, Hyunjung;Koh, Eunjin
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.24 no.4
    • /
    • pp.382-391
    • /
    • 2021
  • Unsupervised domain adaptation often provides impressive solutions for handling domain shift in data. Most current approaches assume that abundant unlabeled target data are available for training. This assumption does not always hold in practice. To tackle this issue, we propose a general solution to the domain gap minimization problem that requires no target data at all. Our method consists of two regularization steps. The first is pixel regularization by arbitrary style transfer. Some recent methods bring style transfer algorithms into the domain adaptation and domain generalization process, using them to remove texture bias in the source domain data. We also use style transfer to remove texture bias, but our method depends on neither the domain adaptation nor the domain generalization paradigm. The second regularization step is feature regularization by feature alignment. By adding a feature alignment loss term to the model loss, the model learns a domain-invariant representation more efficiently. We evaluate our regularization methods in several experiments on both small and large datasets. The experiments show that our model can learn a domain-invariant representation as well as unsupervised domain adaptation methods.
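
A minimal sketch of the second (feature regularization) step only, under assumptions: a feature alignment loss pulls the backbone features of a source image and its style-transferred version together, on top of the usual task loss. The style transfer model is treated as a given black box, and the names and weight are illustrative.

```python
# Feature alignment as self-regularization (illustrative sketch).
import torch
import torch.nn.functional as F

def domain_invariant_loss(backbone, classifier, x_src, x_stylized, y_src, lam=0.1):
    feat_src = backbone(x_src)                     # features of the original source image
    feat_sty = backbone(x_stylized)                # features of its style-transferred version
    task_loss = F.cross_entropy(classifier(feat_src), y_src)
    align_loss = F.mse_loss(feat_src, feat_sty)    # feature alignment term
    return task_loss + lam * align_loss
```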

Extraction and Regularization of Various Building Boundaries with Complex Shapes Utilizing Distribution Characteristics of Airborne LIDAR Points

  • Lee, Jeong-Ho;Han, Soo-Hee;Byun, Young-Gi;Kim, Yong-Il
    • ETRI Journal
    • /
    • v.33 no.4
    • /
    • pp.547-557
    • /
    • 2011
  • This study presents an approach for extracting the boundaries of various buildings that have concave boundaries, inner yards, non-right-angled corners, and nonlinear edges. The approach comprises four steps: building point segmentation, boundary tracing, boundary grouping, and regularization. In the second and third steps, conventional algorithms are improved for more accurate boundary extraction, and in the final step, a new algorithm is presented to extract nonlinear edges. The unique characteristics of airborne light detection and ranging (LIDAR) data are considered in several steps. The performance and practicality of the presented algorithm were evaluated for buildings of various shapes, and the average omission and commission errors of the building polygon areas were 0.038 and 0.033, respectively.
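
A minimal sketch of the regularization step only, under strong simplifying assumptions: segment directions of a traced building boundary are snapped to a dominant orientation or its perpendicular. This naive version handles only right-angled outlines and does not reproduce the paper's treatment of non-right-angled corners and nonlinear edges; the toy polygon is illustrative.

```python
# Naive boundary regularization by orientation snapping (illustrative sketch).
import numpy as np

def regularize_boundary(vertices):
    v = np.asarray(vertices, float)
    seg = np.diff(np.vstack([v, v[:1]]), axis=0)                        # closed-polygon segments
    ang = np.arctan2(seg[:, 1], seg[:, 0])
    dom = np.median(np.mod(ang, np.pi / 2))                             # dominant orientation (mod 90 deg)
    snapped = dom + np.round((ang - dom) / (np.pi / 2)) * (np.pi / 2)   # snap to dominant or perpendicular
    lengths = np.hypot(seg[:, 0], seg[:, 1])
    new_seg = np.column_stack([lengths * np.cos(snapped), lengths * np.sin(snapped)])
    return v[0] + np.vstack([[0.0, 0.0], np.cumsum(new_seg[:-1], axis=0)])

print(regularize_boundary([(0, 0), (10, 0.5), (10.3, 10), (0.2, 9.7)]))
```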

Face Sketch Synthesis Based on Local and Nonlocal Similarity Regularization

  • Tang, Songze;Zhou, Xuhuan;Zhou, Nan;Sun, Le;Wang, Jin
    • Journal of Information Processing Systems
    • /
    • v.15 no.6
    • /
    • pp.1449-1461
    • /
    • 2019
  • Face sketch synthesis plays an important role in public security and digital entertainment. In this paper, we present a novel face sketch synthesis method based on local-similarity and nonlocal-similarity regularization terms. The local similarity term overcomes the technological bottleneck of the patch representation scheme in traditional learning-based methods: it improves the quality of the synthesized sketches by penalizing dissimilar training patches, which thus receive very small weights or are discarded. In addition, taking the redundancy of image patches into account, a global nonlocal-similarity regularization is employed to restrain noise generation and maintain primitive facial features during the synthesis process, so that more robust synthesized results can be obtained. Extensive experiments on public databases validate the generality, effectiveness, and robustness of the proposed algorithm.
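
A minimal sketch of the local similarity idea, under assumptions: a test photo patch is reconstructed from its K nearest training photo patches, dissimilar patches receive exponentially small weights, and the same weights combine the paired training sketch patches. The weighting scheme and parameters are illustrative, not the paper's optimization.

```python
# Locally weighted patch-based sketch synthesis (illustrative sketch).
import numpy as np

def synthesize_patch(photo_patch, train_photo_patches, train_sketch_patches, k=10, sigma=0.1):
    d = np.linalg.norm(train_photo_patches - photo_patch, axis=1)    # distances to training photo patches
    idx = np.argsort(d)[:k]                                          # K most similar patches
    w = np.exp(-d[idx] ** 2 / (2 * sigma ** 2))                      # dissimilar patches get tiny weights
    w /= w.sum() + 1e-12
    return (w[:, None] * train_sketch_patches[idx]).sum(axis=0)      # weighted sketch patch
```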