• Title/Abstract/Keyword: Small-error approximation


A HIGHER ORDER NUMERICAL SCHEME FOR SINGULARLY PERTURBED BURGER-HUXLEY EQUATION

  • Jiwari, Ram;Mittal, R.C.
    • Journal of applied mathematics & informatics
    • /
    • v.29 no.3_4
    • /
    • pp.813-829
    • /
    • 2011
  • In this article, we present a numerical scheme for solving the singularly perturbed (i.e., the highest-order derivative term is multiplied by a small parameter) Burgers-Huxley equation with appropriate initial and boundary conditions. Most traditional methods fail to capture the layer behavior as the small parameter tends to zero. The presence of the perturbation parameter and the nonlinearity in the problem leads to severe difficulties in approximating the solution, and the present numerical scheme is constructed to overcome them. In the construction of the scheme, the first step is the discretization of the time variable using the forward difference formula with constant step length. The resulting nonlinear singularly perturbed semidiscrete problem is then linearized using the quasilinearization process. Finally, the differential quadrature method is used for the space discretization. The error estimate and convergence of the numerical scheme are discussed. A set of numerical experiments is carried out in support of the developed scheme.
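
The scheme described above combines a forward-difference time discretization, quasilinearization of the nonlinear terms, and a differential quadrature space discretization. The sketch below illustrates only the first two ingredients for the Burgers-Huxley equation u_t + a·u·u_x − eps·u_xx = b·u(1−u)(u−g): one implicit time step with the nonlinear terms linearized about the previous iterate, and a plain central-difference stencil standing in for the differential quadrature method. The function name, parameters, and boundary treatment are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Sketch (not the authors' code): one implicit time step of the singularly
# perturbed Burgers-Huxley equation
#     u_t + a*u*u_x - eps*u_xx = b*u*(1 - u)*(u - g)
# Time is discretized with a constant step dt, the nonlinear terms are
# quasi-linearized about the current iterate, and central differences stand
# in for the differential quadrature space discretization used in the paper.

def step(u, dt, dx, eps, a, b, g, sweeps=3):
    n = u.size
    u_new = u.copy()
    for _ in range(sweeps):                       # quasi-linearization sweeps
        A = np.zeros((n, n))
        rhs = np.zeros(n)
        # Dirichlet boundaries held at the previous values (assumption)
        A[0, 0] = A[-1, -1] = 1.0
        rhs[0], rhs[-1] = u[0], u[-1]
        for i in range(1, n - 1):
            uk = u_new[i]
            ux = (u_new[i + 1] - u_new[i - 1]) / (2 * dx)
            # reaction f(u) = b*u*(1-u)*(u-g), linearized as f(uk) + f'(uk)*(u - uk)
            f = b * uk * (1 - uk) * (uk - g)
            fp = b * ((1 - uk) * (uk - g) - uk * (uk - g) + uk * (1 - uk))
            # convection a*u*u_x linearized as a*(uk*u_x + ux*u - uk*ux)
            A[i, i] += 1.0 / dt + a * ux - fp + 2 * eps / dx**2
            A[i, i - 1] += -a * uk / (2 * dx) - eps / dx**2
            A[i, i + 1] += a * uk / (2 * dx) - eps / dx**2
            rhs[i] = u[i] / dt + f - fp * uk + a * uk * ux
        u_new = np.linalg.solve(A, rhs)
    return u_new
```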

Successive Approximated Log Operation Circuit for SoftMax in CNN (CNN의 SoftMax 연산을 위한 연속 근사 방식의 로그 연산 회로)

  • Kang, Hyeong-Ju
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.2
    • /
    • pp.330-333
    • /
    • 2021
  • In a CNN for image classification, a SoftMax layer is usually placed at the end. The exponential and logarithmic operations in the SoftMax layer are not well suited to implementation in an accelerator circuit. These operations are usually implemented with look-up tables, and the exponential operation can be implemented with an iterative method. This paper proposes a successive approximation method for calculating a logarithm that removes the need for a very large look-up table. By substituting two very small tables for the large one, the circuit size can be reduced considerably. The experimental results show that an 85% area reduction can be reached with only a small degradation in accuracy.
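
A minimal sketch of the general idea of replacing one large logarithm table with two small ones: the mantissa is split into a coarse factor looked up in one table and a small residual factor looked up in a second table, and the two contributions are added. The bit widths, table contents, and function names are assumptions for illustration; they are not taken from the paper's circuit.

```python
import math

# Approximate log2(x) with two small tables instead of one large one.
# The mantissa m in [1, 2) is factored as m ~= (1 + h/2**H) * (1 + residual),
# so log2(m) ~= T1[h] + T2[l]. H and L are illustrative table sizes.
H, L = 4, 4
T1 = [math.log2(1 + h / 2**H) for h in range(2**H)]
T2 = [math.log2(1 + l / 2**(H + L)) for l in range(2**L)]

def approx_log2(x):
    e = math.floor(math.log2(x))          # exponent (leading-one position in hardware)
    m = x / 2**e                          # mantissa in [1, 2)
    h = int((m - 1) * 2**H)               # coarse index from the top H fraction bits
    r = m / (1 + h / 2**H)                # residual factor, close to 1
    l = min(int((r - 1) * 2**(H + L)), 2**L - 1)
    return e + T1[h] + T2[l]

# Example: compare against the exact logarithm
for x in (1.0, 3.7, 100.0, 12345.6):
    print(x, approx_log2(x), math.log2(x))
```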

ON THE SUFFICIENT CONDITION FOR THE LINEARIZED APPROXIMATION OF THE BÉNARD CONVECTION PROBLEM

  • Song, Jong-Chul;Jeon, Chang-Ho
    • Bulletin of the Korean Mathematical Society
    • /
    • v.29 no.1
    • /
    • pp.125-135
    • /
    • 1992
  • In various viscous flow problems it has been the custom to replace the convective derivative by the ordinary partial derivative in problems for which the data are small. In this paper we consider the Bénard convection problem with small data and compare the solution of this problem (assumed to exist) with that of the linearized system obtained by dropping the nonlinear terms in the expression for the convective derivative. The objective of the present work is to derive an estimate for the error introduced by neglecting the convective inertia terms. In fact, we derive an explicit bound for the L2 error. Indeed, if the initial data are O(ε) with ε ≪ 1, and the Rayleigh number is sufficiently small, we show that this error is bounded by the product of a term of O(ε²) and a decaying exponential in time. The results of the present paper thus give a justification for linearizing the Bénard convection problem. We remark that although our results are derived for classical solutions, extensions to appropriately defined weak solutions are obvious. Throughout this paper we use a comma to denote partial differentiation and adopt the convention of summing over repeated indices (in a term of an expression) from one to three. As references for work on continuous dependence on modelling and initial data, we mention the papers of Payne and Sather [8], Ames [2], Adelson [1], Bennett [3], Payne et al. [9], and Song [11,12,13,14]. A similar analysis of a micropolar fluid problem backward in time (an ill-posed problem) was given by Payne and Straughan [10] and Payne [7].

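Stated schematically, the bound described in the abstract has the following form, where u is the solution of the full problem, u_lin that of the linearized one, and C, λ > 0 are constants; the notation is assumed here rather than taken from the paper.

```latex
\[
  \left\| u - u_{\mathrm{lin}} \right\|_{L_2}
  \;\le\;
  C\,\epsilon^{2}\, e^{-\lambda t},
  \qquad \epsilon \ll 1,\; \lambda > 0 .
\]
```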

Mesh Simplification Algorithm Using Differential Error Metric (미분 오차 척도를 이용한 메쉬 간략화 알고리즘)

  • 김수균;김선정;김창헌
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.5_6
    • /
    • pp.288-296
    • /
    • 2004
  • This paper proposes a new mesh simplification algorithm using a differential error metric. Many simplification algorithms rely on a distance error metric, but it is hard to measure the true geometric error in a high-curvature region even when that region has a small error under a distance metric. This paper proposes a new differential error metric that unifies a distance metric with its first- and second-order differentials, which yield tangent-vector and curvature metrics. Since discrete surfaces may be considered piecewise-linear approximations of unknown smooth surfaces, these differentials can be estimated, and with them we construct the differential error metric for discrete surfaces. In our simplification algorithm based on iterative edge collapses, this differential error metric assigns the new vertex position so as to maintain the geometry of the original appearance. We show that our simplified results have better quality and smaller geometric error than those of other methods.
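
As a rough illustration of the idea of combining zeroth-, first-, and second-order differences into one collapse cost, the sketch below scores a candidate edge collapse from precomputed per-vertex normals and curvature estimates. The weights and the exact form of each term are assumptions for illustration only, not the metric defined in the paper.

```python
import numpy as np

# Assumed form of a "differential" collapse cost: a positional (distance)
# term plus first-order (normal/tangent change) and second-order (curvature
# change) terms between a vertex and the position it would collapse to.

def differential_cost(p, n, k, q, nq, kq, w_pos=1.0, w_tan=1.0, w_curv=1.0):
    """p, q: 3D positions; n, nq: unit normals; k, kq: estimated curvatures."""
    d_pos = np.linalg.norm(p - q)              # zeroth order: distance
    d_tan = 1.0 - float(np.dot(n, nq))         # first order: change of normal direction
    d_curv = abs(k - kq)                       # second order: change of curvature
    return w_pos * d_pos + w_tan * d_tan + w_curv * d_curv
```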

A Study on the MTF Graphics using Simpson Approximation (심프슨 근사법을 이용한 MTF 그래프 작성에 관한 연구)

  • Che, Gyu-Shik;Chang, Won-Seok;Oh, Jake
    • Journal of Advanced Navigation Technology
    • /
    • v.16 no.2
    • /
    • pp.401-408
    • /
    • 2012
  • There is a clear need for characterizing optical components, given the growing role played by optical devices in measurement, communication, and photonics. A basic and useful measurement parameter to meet this need, especially for imaging systems, is the Modulation Transfer Function, or MTF. Over the past few decades new instruments, including the laser interferometer, the CCD camera, and the computer, have revolutionized the measurement and calculation of the MTF, turning what was a tedious and involved task into a virtually instantaneous measurement. In this paper we propose a Simpson approximation method for creating the MTF graph and illustrate a real example to verify the method. The method is very useful because its error remains very small despite the approximation.
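
A small sketch of how composite Simpson integration can be used to evaluate an MTF curve: here the MTF is taken as the normalized magnitude of the Fourier transform of a line spread function, which is a common formulation but not necessarily the one used in the paper; the Gaussian LSF and the frequency range are illustrative assumptions.

```python
import numpy as np

def simpson(y, dx):
    """Composite Simpson's rule; len(y) must be odd (even number of intervals)."""
    return dx / 3.0 * (y[0] + y[-1] + 4 * np.sum(y[1:-1:2]) + 2 * np.sum(y[2:-1:2]))

def mtf(x, lsf, freqs):
    dx = x[1] - x[0]
    norm = simpson(lsf, dx)                         # normalization so MTF(0) = 1
    out = []
    for f in freqs:
        re = simpson(lsf * np.cos(2 * np.pi * f * x), dx)
        im = simpson(lsf * np.sin(2 * np.pi * f * x), dx)
        out.append(np.hypot(re, im) / norm)
    return np.array(out)

# Example with a Gaussian LSF (its MTF is also Gaussian), 0..10 cycles/mm
x = np.linspace(-2.0, 2.0, 401)                     # 400 intervals -> Simpson applies
lsf = np.exp(-x**2 / (2 * 0.1**2))
freqs = np.linspace(0.0, 10.0, 51)
print(mtf(x, lsf, freqs)[:5])
```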

New Message-Passing Decoding Algorithm of LDPC Codes by Partitioning Check Nodes (체크 노드 분할에 의한 LDPC 부호의 새로운 메시지 전달 복호 알고리즘)

  • Kim Sung-Hwan;Jang Min-Ho;No Jong-Seon;Hong Song-Nam;Shin Dong-Joon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.4C
    • /
    • pp.310-317
    • /
    • 2006
  • In this paper, we propose a new sequential message-passing decoding algorithm for low-density parity-check (LDPC) codes obtained by partitioning the check nodes. This new decoding algorithm shows better bit error rate (BER) performance than the conventional message-passing decoding algorithm, especially for a small number of iterations. Analytical results tell us that as the number of partitioned subsets of check nodes increases, the BER performance improves. We also derive recursive equations for the mean values of the messages at variable nodes by using density evolution with a Gaussian approximation. Simulation results confirm the analytical results.
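
To make the idea of processing check nodes in partitioned subsets concrete, the sketch below runs a layered-style min-sum decoder in which each subset's check-to-variable messages update the variable-node posteriors immediately, so later subsets within the same iteration already see refreshed values. The min-sum update, the equal-size partition, and all parameter names are assumptions, not the algorithm exactly as specified in the paper.

```python
import numpy as np

def decode(H, llr_ch, n_iter=10, n_subsets=2):
    """Sequential (check-node-partitioned) min-sum decoding sketch."""
    m, n = H.shape
    post = np.asarray(llr_ch, dtype=float).copy()    # variable-node posteriors
    msg = np.zeros((m, n))                           # check-to-variable messages
    subsets = np.array_split(np.arange(m), n_subsets)
    for _ in range(n_iter):
        for rows in subsets:                         # one partitioned subset at a time
            for r in rows:
                cols = np.flatnonzero(H[r])
                vext = post[cols] - msg[r, cols]     # variable-to-check messages
                sign_all = np.prod(np.sign(vext + 1e-300))
                mag = np.abs(vext)
                i1 = int(np.argmin(mag))
                m1 = mag[i1]
                m2 = np.min(np.delete(mag, i1)) if mag.size > 1 else m1
                for j, c in enumerate(cols):
                    s = sign_all * np.sign(vext[j] + 1e-300)   # sign excluding j
                    new = s * (m2 if j == i1 else m1)          # min excluding j
                    post[c] += new - msg[r, c]       # immediate ("sequential") update
                    msg[r, c] = new
        hard = (post < 0).astype(int)                # LLR > 0 -> bit 0
        if not np.any((H @ hard) % 2):               # stop if all checks are satisfied
            break
    return hard
```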

Planar Curve Smoothing with Individual Weighted Averaging (개별적 가중치 평균을 이용한 2차원 곡선의 스무딩)

  • Lyu, Sungpil
    • Journal of KIISE
    • /
    • v.44 no.11
    • /
    • pp.1194-1208
    • /
    • 2017
  • A traditional averaging method smooths out noise but unintentionally also smooths corner points of high curvature and shrinks the curve. In this paper, we propose a novel curve smoothing method based on a polygonal approximation of the input curve. The proposed method determines the smoothing weight for each point of the input curve from the angle and approximation error between the approximating polygon and the input curve. The weight constrains the displacement of each point after smoothing so that it does not significantly exceed the average noise error of the region. In experiments, we observed that the resulting smoothed curve stays close to the original curve, since each point moves toward the average position of the noise after smoothing. As an application to digital cartography, for the same amount of smoothing, the proposed method yields less area reduction, even on small curve segments, than existing smoothing methods.
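
A rough sketch of per-point ("individual") weighted-average smoothing of a planar curve: each point receives its own weight, derived here from how strongly it deviates from a local linear approximation, so sharp corners are moved less than noisy, nearly straight regions. The specific weight rule of the paper is not reproduced; the Gaussian weight and the noise_scale parameter are assumptions.

```python
import numpy as np

def smooth(points, noise_scale=1.0):
    """Individually weighted neighbor-averaging of a 2D polyline (sketch)."""
    pts = np.asarray(points, dtype=float)
    out = pts.copy()
    for i in range(1, len(pts) - 1):
        prev_p, p, next_p = pts[i - 1], pts[i], pts[i + 1]
        mid = 0.5 * (prev_p + next_p)                 # neighbor average
        dev = np.linalg.norm(p - mid)                 # deviation from the local chord
        w = np.exp(-(dev / noise_scale) ** 2)         # small weight at sharp corners
        out[i] = (1 - w) * p + w * mid                # move roughly within the noise scale
    return out
```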

Experimental Study on Source Locating Technique for Transversely Isotropic Media (횡등방성 매질의 음원추적기법에 대한 실험적 연구)

  • Choi, Seung-Beum;Jeon, Seokwon
    • Tunnel and Underground Space
    • /
    • v.25 no.1
    • /
    • pp.56-67
    • /
    • 2015
  • In this study, a source locating technique applicable to transversely isotropic media was developed. Wave velocity anisotropy was considered by means of a partition approximation method, which makes AE source locating straightforward. Sets of P-wave arrival times were determined by the two-step AIC algorithm and were then used to locate the AE sources by finding the partitioned element with the least error. To validate the technique, pencil lead break tests on an artificial transversely isotropic mortar specimen were carried out. Defining the absolute error as the distance between the pencil lead break point and the located point, the errors ranged from 1.60 mm to 14.46 mm with an average of 8.57 mm, which was regarded as acceptable considering the size of the specimen and the AE sensors. Comparing the absolute errors under different threshold levels showed only small discrepancies, so the technique is hardly affected by background noise. The absolute error could also be decomposed into errors along each coordinate axis, which reveals the effect of AE sensor position; if the optimum sensor positions can be determined, a more precise outcome can be obtained.
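
A conceptual sketch of partition-based source locating in a transversely isotropic medium: the specimen is partitioned into candidate points, the P-wave travel time from each candidate to every sensor is predicted with a direction-dependent velocity, and the candidate whose predicted arrival-time differences best fit the measurements is taken as the source. The elliptical velocity model, function names, and parameters are assumptions for illustration only.

```python
import numpy as np

def velocity(direction, v_parallel, v_normal, axis=np.array([0.0, 0.0, 1.0])):
    """Direction-dependent P-wave speed (simple elliptical anisotropy, assumed)."""
    d = direction / np.linalg.norm(direction)
    c = abs(np.dot(d, axis))                       # cos(angle to the symmetry axis)
    return np.sqrt(v_parallel**2 * c**2 + v_normal**2 * (1 - c**2))

def locate(sensors, arrivals, candidates, v_parallel, v_normal):
    """Pick the candidate (partitioned element) whose relative arrival times fit best."""
    arrivals = np.asarray(arrivals, dtype=float)
    best, best_err = None, np.inf
    for p in candidates:
        t = np.array([np.linalg.norm(s - p) /
                      velocity(s - p, v_parallel, v_normal) for s in sensors])
        # compare arrival-time differences, since the origin time is unknown
        resid = (arrivals - arrivals[0]) - (t - t[0])
        err = float(np.sum(resid**2))
        if err < best_err:
            best, best_err = p, err
    return best, best_err
```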

An Optimal Design of Neuro-Fuzzy Logic Controller Using Lamarckian Co-adaptation of Learning and Evolution (학습과 진화의 Lamarckian 상호 적응에 의한 뉴로-퍼지 제어기의 최적 설계)

  • 김대진;이한별;강대성
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.12
    • /
    • pp.85-98
    • /
    • 1998
  • This paper proposes a new design method for a neuro-FLC based on a Lamarckian co-adaptation scheme that incorporates backpropagation learning into GA evolution in order to find the optimal design parameters (fuzzy rule base and membership functions) of an application-specific FLC. The design parameters are determined by evolution and learning: the evolution performs the global search and makes inter-FLC parameter adjustments so as to obtain both an optimal rule base, having a high covering value and a small number of useful fuzzy rules, and optimal membership functions, having small approximation error and good control performance, while the learning performs the local search and makes intra-FLC parameter adjustments through the interaction of each FLC with its environment. The proposed co-adaptive design method produces better approximation ability because it includes backpropagation learning in every generation of GA evolution, shows better control performance because the COG defuzzifier computes the crisp value accurately, and requires a small workspace because the optimization of the fuzzy rule base and membership functions is performed concurrently with an integrated fitness function on the same fuzzy partition. Simulation results show that the Lamarckian co-adapted FLC is superior to the other generated FLCs in all aspects: the number of fuzzy rules, the approximation ability, and the control performance.

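A small sketch of the center-of-gravity (COG) defuzzification mentioned in the abstract: the crisp output is the area-weighted centroid of the aggregated output membership function, evaluated on a discretized universe. The triangular sets and rule strengths in the example are illustrative assumptions.

```python
import numpy as np

def cog_defuzzify(y, mu):
    """y: discretized output universe, mu: aggregated membership degrees."""
    mu = np.asarray(mu, dtype=float)
    return float(np.sum(y * mu) / np.sum(mu))

# Example: two triangular output sets clipped by rule strengths 0.3 and 0.8
y = np.linspace(0.0, 10.0, 201)
set_a = np.clip(1 - np.abs(y - 3.0) / 2.0, 0, None)
set_b = np.clip(1 - np.abs(y - 7.0) / 2.0, 0, None)
mu = np.maximum(np.minimum(set_a, 0.3), np.minimum(set_b, 0.8))
print(cog_defuzzify(y, mu))   # crisp value pulled toward 7 by the stronger rule
```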

Sensitivity Approach of Sequential Sampling Using Adaptive Distance Criterion (적응거리 조건을 이용한 순차적 실험계획의 민감도법)

  • Jung, Jae-Jun;Lee, Tae-Hee
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.29 no.9 s.240
    • /
    • pp.1217-1224
    • /
    • 2005
  • To improve the accuracy of a metamodel, additional sample points can be selected using a specified criterion, an approach often called sequential sampling. A sequential sampling approach requires a small computational cost compared to one-stage optimal sampling. It can also monitor the progress of metamodeling by identifying an important design region for approximation and further refining the fidelity in that region. However, the existing criteria, such as mean squared error, entropy, and maximin distance, depend essentially on the distance between previously selected sample points. Therefore, even when sufficient sample points are selected, these sequential sampling strategies cannot guarantee the accuracy of the metamodel near the optimum points, because the criteria of the existing approaches are inefficient at approximating the extremum and inflection points of the original model. In this research, a new sequential sampling approach using the sensitivity of the metamodel is proposed to reflect the behavior of the response. Various functions that represent a variety of features of engineering problems are used to validate the sensitivity approach. In addition to both the root mean squared error and the maximum error, the error of the metamodel at the optimum points is examined to assess the superiority of the proposed approach; that is, optimum solutions minimizing the metamodel obtained from the proposed approach are compared with those of the true functions. For comparison, both the mean squared error approach and the maximin distance approach are also examined.
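
A sketch of one sensitivity-driven sequential sampling step: among random candidate points, prefer locations where the current metamodel changes rapidly (large finite-difference gradient), while rejecting candidates that violate a minimum distance to points already sampled. The scoring rule, the fixed min_dist threshold (a simplified stand-in for the paper's adaptive distance criterion), and all parameter names are assumptions for illustration.

```python
import numpy as np

def next_sample(metamodel, samples, bounds, n_cand=500, h=1e-3, min_dist=0.05, rng=None):
    """Pick the next sample point by metamodel sensitivity (sketch).

    metamodel: callable mapping a 1D design vector to a scalar prediction.
    samples:   (k, dim) array of points already evaluated.
    bounds:    (lower, upper) arrays defining the design space.
    """
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    cands = rng.uniform(lo, hi, size=(n_cand, dim))
    best, best_score = None, -np.inf
    for c in cands:
        if np.min(np.linalg.norm(samples - c, axis=1)) < min_dist:
            continue                                   # distance criterion (simplified)
        grad = np.array([(metamodel(c + h * e) - metamodel(c - h * e)) / (2 * h)
                         for e in np.eye(dim)])        # finite-difference sensitivity
        score = np.linalg.norm(grad)
        if score > best_score:
            best, best_score = c, score
    return best

# Usage idea: evaluate the true function at next_sample(...), refit the metamodel, repeat.
```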