• Title/Summary/Keyword: boundary regularization


Regularized Modified Newton-Raphson Algorithm for Electrical Impedance Tomography Based on the Exponentially Weighted Least Square Criterion (전기 임피던스 단층촬영을 위한 지수적으로 가중된 최소자승법을 이용한 수정된 조정 Newton-Raphson 알고리즘)

  • Kim, Kyung-Youn;Kim, Bong-Seok
    • Journal of IKEEE
    • /
    • v.4 no.2 s.7
    • /
    • pp.249-256
    • /
    • 2000
  • In EIT (electrical impedance tomography), the internal resistivity (or conductivity) distribution of an unknown object is estimated from the boundary voltage data induced by different current patterns, using various reconstruction algorithms. In this paper, we present a regularized modified Newton-Raphson (mNR) scheme that incorporates additional a priori information into the cost functional as a soft constraint; the weighting matrices in the cost functional are selected based on the exponentially weighted least-squares criterion. Computer simulation with 32-channel synthetic data shows that the reconstruction performance of the proposed scheme is improved compared to that of the conventional regularized mNR, at the expense of a slightly increased computational burden.
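The update described in the abstract can be sketched as a single Gauss-Newton-style step with Tikhonov regularization and exponentially decaying measurement weights. This is a minimal illustration, not the authors' implementation; the function name, the identity-matrix regularizer, and the parameter values are assumptions:

```python
import numpy as np

def mnr_update(rho, jacobian, v_meas, v_calc, lam=1e-3, alpha=0.9):
    """One regularized modified Newton-Raphson step (illustrative sketch).

    rho      : current resistivity estimate, shape (n,)
    jacobian : sensitivity of boundary voltages w.r.t. rho, shape (m, n)
    v_meas   : measured boundary voltages, shape (m,)
    v_calc   : voltages from the forward model at rho, shape (m,)
    lam      : Tikhonov weight acting as the soft a-priori constraint
    alpha    : forgetting factor; weights alpha**k down-weight older
               measurements (exponentially weighted least squares)
    """
    m = len(v_meas)
    # Exponentially decaying weights; the newest measurement gets weight 1.
    w = alpha ** np.arange(m)[::-1]
    W = np.diag(w)
    J = jacobian
    # Weighted Gauss-Newton normal equations with regularization.
    H = J.T @ W @ J + lam * np.eye(len(rho))
    g = J.T @ W @ (v_meas - v_calc)
    return rho + np.linalg.solve(H, g)
```

For a linear forward model the step recovers the weighted least-squares solution up to the small regularization bias.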


Adaptive Weight Collaborative Complementary Learning for Robust Visual Tracking

  • Wang, Benxuan;Kong, Jun;Jiang, Min;Shen, Jianyu;Liu, Tianshan;Gu, Xiaofeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.1
    • /
    • pp.305-326
    • /
    • 2019
  • Discriminative correlation filter (DCF) based tracking algorithms have recently shown impressive performance on benchmark datasets. However, many recent approaches remain vulnerable to heavy occlusions, irregular deformations, and the like. In this paper, we address these problems and handle the trade-off between accuracy and real-time performance within the tracking-by-detection framework. First, we propose an innovative strategy that combines the template and color-based models, rather than a simple linear superposition, and relies on the strengths of both to improve accuracy. Second, to enhance the discriminative power of the learned template model, spatial regularization is introduced in the learning stage to penalize filter coefficients near the object boundary that correspond to background features. Third, we utilize a discriminative multi-scale estimation method to handle scale variations. Finally, we investigate strategies to limit the computational complexity of our tracker. Extensive experiments demonstrate that our tracker performs favorably against several advanced algorithms on both the OTB2013 and OTB2015 datasets while maintaining high frame rates.
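Spatial regularization of this kind is commonly written, in SRDCF-style filter learning, as the following objective (a schematic sketch of the general technique, not necessarily the authors' exact formulation):

```latex
\varepsilon(f) \;=\; \Big\| \sum_{l=1}^{d} x^{l} \ast f^{l} \;-\; y \Big\|^{2}
\;+\; \sum_{l=1}^{d} \big\| w \cdot f^{l} \big\|^{2}
```

Here $x^{l}$ are the $d$ feature channels, $y$ is the desired Gaussian response, and $w$ is a spatial weight map that grows toward the patch boundary, so filter energy placed on background pixels is penalized.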

Beta and Alpha Regularizers of Mish Activation Functions for Machine Learning Applications in Deep Neural Networks

  • Mathayo, Peter Beatus;Kang, Dae-Ki
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.1
    • /
    • pp.136-141
    • /
    • 2022
  • Complex deep learning tasks such as image classification are solved with the help of neural networks and activation functions. As the backpropagation algorithm advances backward from the output layer toward the input layer, the gradients often become smaller and smaller and approach zero, which eventually leaves the weights of the initial or lower layers nearly unchanged; as a result, gradient descent never converges to the optimum. We propose a two-factor non-saturating activation function, called Bea-Mish, for machine learning applications in deep neural networks. Our method uses two factors, beta (𝛽) and alpha (𝛼), to normalize the area below the boundary in the Mish activation function, and we refer to these elements as Bea. A clear understanding of the behaviors and conditions governing this regularization term can lead to a more principled approach for constructing better-performing activation functions. We evaluate Bea-Mish against the Mish and Swish activation functions on various models and datasets. Empirical results show that Bea-Mish outperforms native Mish with a SqueezeNet backbone by 2.51% average precision (AP50val) on CIFAR-10, and improves top-1 accuracy by 1.20% with ResNet-50 on ImageNet-1k.
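The abstract does not give the exact Bea-Mish formula, so the sketch below shows standard Mish together with one hypothetical two-parameter generalization that reduces to plain Mish at 𝛼 = 𝛽 = 1; the placement of the two factors is an assumption for illustration only:

```python
import numpy as np

def softplus(x):
    # Numerically stable log(1 + exp(x)).
    return np.logaddexp(0.0, x)

def mish(x):
    # Mish: x * tanh(softplus(x)).
    return x * np.tanh(softplus(x))

def bea_mish(x, alpha=1.0, beta=1.0):
    # Hypothetical two-factor variant: alpha and beta rescale the
    # softplus term; with alpha = beta = 1 this is exactly Mish.
    return x * np.tanh(alpha * softplus(beta * x))
```

Like Mish, this variant is smooth and non-saturating for positive inputs, which is the property the abstract credits for mitigating vanishing gradients.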

Deep Learning based Singing Voice Synthesis Modeling (딥러닝 기반 가창 음성합성(Singing Voice Synthesis) 모델링)

  • Kim, Minae;Kim, Somin;Park, Jihyun;Heo, Gabin;Choi, Yunjeong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.127-130
    • /
    • 2022
  • This paper studies singing voice synthesis modeling using a generator loss function, analyzing various factors that can arise when applying BEGAN, a deep learning algorithm optimized for image generation, to the audio domain, and conducting experiments to derive optimal quality. We focus on the problem that the L1 loss proposed in BEGAN-based models degrades the meaning of the hyperparameter gamma (𝛾), which was defined to control the diversity and quality of generated audio samples. Experiments show that the proposed method, combined with finding optimal values through tuning, can contribute to improving the quality of the synthesized singing.
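The role of 𝛾 in BEGAN can be sketched from the published BEGAN control loop (this reproduces the standard formulation, not the authors' modified loss; `lambda_k` is the usual proportional gain):

```python
def began_losses(l_real, l_fake, k, gamma=0.5, lambda_k=0.001):
    """One step of the standard BEGAN balance update (illustrative).

    l_real, l_fake : autoencoder reconstruction losses of a real and a
                     generated sample
    k              : current balance coefficient k_t, kept in [0, 1]
    gamma          : diversity ratio; the control loop drives
                     E[l_fake] toward gamma * E[l_real]
    """
    d_loss = l_real - k * l_fake   # discriminator objective
    g_loss = l_fake                # generator objective
    # Proportional control on the equilibrium gamma = l_fake / l_real.
    k_next = min(max(k + lambda_k * (gamma * l_real - l_fake), 0.0), 1.0)
    return d_loss, g_loss, k_next
```

Because 𝛾 only enters through this balance term, any extra loss that perturbs `l_fake` independently (such as an added L1 term) changes the equilibrium that 𝛾 was defined to control, which is the degradation the abstract describes.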


Level Set Based Shape Optimization of Linear Structures using Topological Derivatives (위상민감도를 이용한 선형구조물의 레벨셋 기반 형상 최적설계)

  • Yoon, Minho;Ha, Seung-Hyun;Kim, Min-Geun;Cho, Seonho
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.27 no.1
    • /
    • pp.9-16
    • /
    • 2014
  • Using a level set method and topological derivatives, a topological shape optimization method that is independent of the initial design is developed for linearly elastic structures. In the level set method, the initial domain is kept fixed and its boundary is represented by an implicit moving boundary embedded in the level set function, which makes it easy to handle complicated topological shape changes. The Hamilton-Jacobi (H-J) equation and the computationally robust numerical technique of the upwind scheme evolve the initial implicit boundary to an optimal one according to the normal velocity field, while minimizing the objective function of compliance and satisfying the constraint of allowable volume. Based on the asymptotic regularization concept, the topological derivative is considered as the limit of the shape derivative as the radius of a hole approaches zero. The velocity field required to update the H-J equation is determined from the descent direction of the Lagrangian derived from the optimality conditions. It turns out that initial holes are not required to obtain the optimal result, since the developed method can create holes whenever and wherever necessary, using indicators obtained from the topological derivatives. It is demonstrated that the proper choice of control parameters for nucleation is crucial for an efficient optimization process.
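The H-J update with an upwind scheme can be sketched in one dimension (a generic Godunov-type discretization of φ_t + v|∇φ| = 0, assuming a uniform grid; not the authors' code):

```python
import numpy as np

def upwind_step(phi, v, dx, dt):
    """One explicit upwind update of the 1D level set equation
    phi_t + v * |grad phi| = 0 (illustrative sketch).

    phi : level set values on a uniform grid
    v   : normal velocity field, same shape as phi
    """
    # One-sided differences (boundary values are simply repeated).
    dminus = np.diff(phi, prepend=phi[0]) / dx   # backward difference
    dplus = np.diff(phi, append=phi[-1]) / dx    # forward difference
    # Godunov-type upwind gradient magnitude, chosen by the sign of v.
    grad_pos = np.sqrt(np.maximum(dminus, 0)**2 + np.minimum(dplus, 0)**2)
    grad_neg = np.sqrt(np.minimum(dminus, 0)**2 + np.maximum(dplus, 0)**2)
    grad = np.where(v > 0, grad_pos, grad_neg)
    return phi - dt * v * grad
```

With v > 0 the zero level set expands outward, which is how the normal velocity field derived from the Lagrangian moves the implicit boundary toward the optimum.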

Electrical Impedance Tomography for Material Profile Reconstruction of Concrete Structures (콘크리트 구조의 재료 물성 재구성을 위한 전기 임피던스 단층촬영 기법)

  • Jung, Bong-Gu;Kim, Boyoung;Kang, Jun Won;Hwang, Jin-Ha
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.32 no.4
    • /
    • pp.249-256
    • /
    • 2019
  • This paper presents an optimization framework of electrical impedance tomography for characterizing electrical conductivity profiles of concrete structures in two dimensions. The framework utilizes a partial-differential-equation(PDE)-constrained optimization approach that can obtain the spatial distribution of electrical conductivity using measured electrical potentials from several electrodes located on the boundary of the concrete domain. The forward problem is formulated based on a complete electrode model(CEM) for the electrical potential of a medium due to current input. The CEM consists of a Laplace equation for electrical potential and boundary conditions to represent the current inputs to the electrodes on the surface. To validate the forward solution, electrical potential calculated by the finite element method is compared with that obtained using TCAD software. The PDE-constrained optimization approach seeks the optimal values of electrical conductivity on the domain of investigation while minimizing the Lagrangian function. The Lagrangian consists of least-squares objective functional and regularization terms augmented by the weak imposition of the governing equation and boundary conditions via Lagrange multipliers. Enforcing the stationarity of the Lagrangian leads to the Karush-Kuhn-Tucker condition to obtain an optimal solution for electrical conductivity within the target medium. Numerical inversion results are reported showing the reconstruction of the electrical conductivity profile of a concrete specimen in two dimensions.
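The Lagrangian described in the abstract has the generic PDE-constrained structure sketched below (schematic only; the paper's actual functional also imposes the CEM boundary conditions weakly):

```latex
\mathcal{L}(u,\sigma,\lambda) \;=\;
\underbrace{\tfrac{1}{2}\sum_{j}\big(u(x_j)-u_j^{\mathrm{meas}}\big)^{2}}_{\text{least-squares misfit}}
\;+\;\underbrace{\beta\,\mathcal{R}(\sigma)}_{\text{regularization}}
\;+\;\underbrace{\int_{\Omega}\lambda\,\nabla\!\cdot\!\big(\sigma\nabla u\big)\,d\Omega}_{\text{weak imposition of the Laplace-type PDE}}
```

Setting the variations of $\mathcal{L}$ with respect to $u$, $\sigma$, and the multiplier $\lambda$ to zero yields the KKT system whose solution is the reconstructed conductivity profile.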

Non-self-intersecting Multiresolution Deformable Model (자체교차방지 다해상도 변형 모델)

  • Park, Ju-Yeong;Kim, Myeong-Hui
    • Journal of the Korea Computer Graphics Society
    • /
    • v.6 no.1
    • /
    • pp.19-27
    • /
    • 2000
  • This paper proposes a non-self-intersecting multiresolution deformable model to extract and reconstruct three-dimensional boundaries of objects from volumetric data. Deformable models offer an attractive method for extracting and reconstructing boundary surfaces. However, conventional deformable models have three limitations: sensitivity to model initialization, difficulty in dealing with severe object concavities, and model self-intersection. We address the initialization problem by a multiresolution model representation, which progressively refines the deformable model based on multiresolution volumetric data in order to extract the boundaries of the objects in a coarse-to-fine fashion. The concavity problem is addressed by mesh size regularization, which matches the mesh size to the unit voxel of the volumetric data. We solve the model self-intersection problem by including a non-self-intersecting force among the customary internal and external forces in the physics-based formulation. This paper presents results of applying the new deformable model to extracting a sphere surface with concavities from computer-generated volume data and a brain cortical surface from MR volume data.


Using Bayesian Approaches to Reduce Truncation Artifact in Magnetic Resonance Imaging

  • Lee, Su-Jin
    • Journal of Biomedical Engineering Research
    • /
    • v.19 no.6
    • /
    • pp.585-593
    • /
    • 1998
  • In Fourier magnetic resonance imaging (MRI), the number of phase encoded signals is often reduced to minimize the duration of the studies and maintain adequate signal-to-noise ratio. However, this results in the well-known truncation artifact, whose effect manifests itself as blurring and ringing in the image domain. In this paper, we propose a new regularization method in the context of a Bayesian framework to reduce truncation artifact. Since the truncation artifact appears in the phase direction only, the use of conventional piecewise-smoothness constraints with symmetric neighbors may result in the loss of small details and soft edge structures in the read direction. Here, we propose more elaborate forms of constraints than the conventional piecewise-smoothness constraints, which can capture actual spatial information about the MR images. Our experimental results indicate that the proposed method not only reduces the truncation artifact, but also improves tissue regularity and boundary definition without oversmoothing soft edge regions.
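The Bayesian setup can be sketched as a MAP estimate under a Gaussian likelihood and a quadratic smoothness prior. This is only the conventional baseline the abstract argues against refining; the paper's constraints are more elaborate and direction-dependent, and all names below are illustrative:

```python
import numpy as np

def map_restore(y, A, beta=0.1, iters=300, lr=0.1):
    """MAP estimate minimizing ||A x - y||^2 + beta * ||D x||^2
    by gradient descent, where D is a first-difference operator
    (a generic piecewise-smoothness prior; illustrative sketch)."""
    n = A.shape[1]
    # (n-1) x n first-difference matrix: (D x)_i = x_{i+1} - x_i.
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    x = np.zeros(n)
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - y) + 2 * beta * D.T @ D @ x
        x -= lr * grad
    return x
```

Because the penalty is isotropic over neighbors, it smooths the read direction as much as the phase direction, which is exactly the loss of detail the abstract's direction-aware constraints are designed to avoid.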


Context Dependent Fusion with Support Vector Machines (Support Vector Machine을 이용한 문맥 민감형 융합)

  • Heo, Gyeongyong
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.7
    • /
    • pp.37-45
    • /
    • 2013
  • Context dependent fusion (CDF) is a fusion algorithm that combines multiple outputs from different classifiers to achieve better performance. CDF tries to divide the problem context into several homogeneous sub-contexts and to fuse data locally with respect to each sub-context. CDF showed better performance than existing methods; however, it is sensitive to noise due to the large number of parameters optimized, and its innate linearity limits its application. In this paper, a variant of CDF named CDF-SVM is proposed to solve these problems, using support vector machines (SVMs) for fusion and kernel principal component analysis (K-PCA) for context extraction. Kernel PCA can shape irregular clusters, including elliptical ones, through the non-linear kernel transformation, and an SVM can draw a non-linear decision boundary. A regularization term is also included in the objective function of CDF-SVM to mitigate the noise sensitivity of CDF. CDF-SVM showed better performance than CDF and its variants, which is demonstrated through experiments with a landmine data set.
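The context-extraction step can be sketched as an RBF kernel PCA (a generic numpy-only sketch of the standard technique; the function name and parameter defaults are assumptions, not the paper's configuration):

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=1.0):
    """Project data onto the top principal components in an RBF
    feature space (illustrative context-extraction sketch)."""
    # RBF (Gaussian) kernel matrix.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    # Double centering of the kernel matrix in feature space.
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition; eigh returns eigenvalues in ascending order.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas
```

The non-linear projection lets the sub-contexts form elliptical or irregular clusters, after which an SVM fuses classifier outputs within each sub-context.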

Recent Developments in Space Law (우주법(宇宙法)의 최근동향(最近動向))

  • Choi, June-Sun
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.1
    • /
    • pp.223-243
    • /
    • 1989
  • The practical applications of modern space science and technology have resulted in many actual and potential gains for mankind. These successes have conditioned and increased the need for a viable space law regime, and the challenge of space has ultimately led to the formation of an international legal regime for space. Space law is no longer a primitive law; it is a modern law. Yet, in its stages of growth, it has not reached the condition of perfection. Therefore, under the existing state of things, we can carefully say that space law is one of the newest fields of jurisprudence, despite the fact that no one has so far defined it perfectly. However, if space law is to be a true jurisprudential entity, it must be definable. In defining space law, first of all, a grasp of its nature is indispensable. Although space law encompasses many tenets and facets of other legal disciplines, its principal nature is public international law, because space law affects and effects law relating to intercourse among nations. Since the early 1960s, when mankind was first able to fly and stay in outer space, the need to control and administer the space activities of human beings has steadily increased. The leading law-formulating agency for this purpose is the United Nations' ad hoc Committee on the Peaceful Uses of Outer Space ("COPUOS"). COPUOS gave direction to public international space law by establishing the 1963 Declaration of Legal Principles Governing the Activities of the States in the Exploration and Use of Outer Space ("1963 Declaration"). The 1963 Declaration is the very foundation of the five international multilateral treaties that were established successively after it. The five treaties are as follows: 1) The Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space including the Moon and other Celestial Bodies, 1967.
2) The Agreement on the Rescue of Astronauts, the Return of Astronauts, and the Return of Objects Launched into Outer Space, 1968. 3) The Convention on International Liability for Damage Caused by Space Objects, 1972. 4) The Convention on Registration of Objects Launched into Outer Space, 1974. 5) The Agreement Governing Activities of States on the Moon and Other Celestial Bodies: Moon Treaty, 1979. The other face of space law is its commercial aspect. Space is no longer the sole domain of governments. Many private enterprises have already moved directly or indirectly into space activities in areas such as telecommunications and space manufacturing. Since space law as public international law has already advanced in accordance with the developments of space science and technology, only a few areas are left untouched in this field of law. Therefore, rapid growth of space law is expected in the area of commercial space law, which is, at this time, in a nascent state. The resources of the space environment are also commercially valuable and important, since they include the tangible natural resources to be found on the moon and other celestial bodies. Other space-based resources are solar energy, geostationary and geosynchronous orbital positions, radio frequencies, areas possibly suited to human habitation, and all areas and materials lending themselves to scientific research and inquiry. Remote sensing, space manufacturing, and space transportation services are other potential areas in which commercial endeavors of mankind can be carried out. In this regard, space insurance is also one of the most important devices allowing mankind to proceed with commercial space ventures. Thus, knowledge of how space insurance came into existence and what it covers is necessary to understand the legal issues peculiar to space law.
In conclusion, the writer emphasizes international cooperation among all nations in the space activities of mankind, because space commerce, by its nature, will give rise to many legal issues of international scope and concern. Important national and world-community interests would be served over time through the acceptance of new international agreements relating to remote sensing, direct television broadcasting, the use of nuclear power sources in space, the regularization of the activities of space transportation systems, standards respecting contamination and pollution, and a practical boundary between outer space and air space. If space activity regulation does not move beyond the national level, the peaceful exploration of space for all mankind will not be realized. For the efficient regulation of private and governmental space activities, the creation of an international space agency, similar to the International Civil Aviation Organization but modified to meet the needs of space technology, will be required. But prior to the creation of an international organization, it will be necessary to establish, at the national level, an Office of Air and Space Bureau, which will administer the license application process, safety review, and sale of launch equipment, and will carry out launch services.
