• Title/Summary/Keyword: spline approximation

Modelling of noise-added saturated steam table using the neural networks (신경회로망을 사용한 노이즈가 첨가된 포화증기표의 모델링)

  • Lee, Tae-Hwan;Park, Jin-Hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2008.05a
    • /
    • pp.205-208
    • /
    • 2008
  • In numerical analysis, numerical values of thermodynamic properties such as temperature, pressure, specific volume, enthalpy, and entropy are required. Most thermodynamic properties in the steam table, however, are determined by experiment and are therefore subject to measurement error. To create noise-added thermodynamic properties corresponding to such errors, random numbers are generated, adjusted to appropriate magnitudes, and added to the original property values. Neural networks and quadratic spline interpolation are then introduced to approximate these modified thermodynamic properties of saturated water as functions of pressure. The neural networks are shown to give a smaller percentage error than quadratic spline interpolation, confirming that they trace the original values of the thermodynamic properties better than the quadratic interpolation method.
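
A minimal sketch of the comparison described above, under assumed synthetic data: a smooth curve stands in for a steam-table property versus pressure, Gaussian noise models the measurement error, and a small multilayer perceptron is compared with a quadratic spline through the noise-added table values. The paper's actual steam table and network architecture are not reproduced here.

```python
import numpy as np
from scipy.interpolate import make_interp_spline
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
pressure = np.linspace(0.1, 10.0, 40)                    # table grid (illustrative units)
true_prop = 2500.0 + 300.0 * np.log(pressure)            # stand-in thermodynamic property
noisy_prop = true_prop + rng.normal(0.0, 5.0, true_prop.shape)   # "measurement error"

# Quadratic (degree-2) spline interpolation of the noise-added table
spline = make_interp_spline(pressure, noisy_prop, k=2)

# Small network trained on the same noise-added table; inputs and targets are
# standardized so the default solver converges reliably.
p_mu, p_sd = pressure.mean(), pressure.std()
y_mu, y_sd = noisy_prop.mean(), noisy_prop.std()
net = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=20000, random_state=0)
net.fit(((pressure - p_mu) / p_sd).reshape(-1, 1), (noisy_prop - y_mu) / y_sd)

# Evaluate both against the noise-free reference at intermediate pressures
p_test = np.linspace(0.2, 9.8, 200)
ref = 2500.0 + 300.0 * np.log(p_test)
net_pred = net.predict(((p_test - p_mu) / p_sd).reshape(-1, 1)) * y_sd + y_mu
err_spline = np.mean(np.abs(spline(p_test) - ref) / np.abs(ref)) * 100
err_net = np.mean(np.abs(net_pred - ref) / np.abs(ref)) * 100
print(f"mean percentage error  spline: {err_spline:.3f}%   network: {err_net:.3f}%")
```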

Optimized Neural Network Weights and Biases Using Particle Swarm Optimization Algorithm for Prediction Applications

  • Ahmadzadeh, Ezat;Lee, Jieun;Moon, Inkyu
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.8
    • /
    • pp.1406-1420
    • /
    • 2017
  • Artificial neural networks (ANNs) play an important role in function approximation, prediction, and classification. ANN performance depends critically on the input parameters, including the number of neurons in each layer and the values of the weights and biases assigned to each neuron. In this study, we apply particle swarm optimization, a popular optimization algorithm, to determine the optimal values of the weights and biases for every neuron in the different layers of the ANN. Several regression models, including general linear regression, Fourier regression, smoothing splines, and polynomial regression, are fitted to evaluate the proposed method's prediction power against multiple linear regression (MLR) methods. In addition, residual analysis is conducted to evaluate the accuracy of the optimized ANN on both the training and test datasets. The experimental results demonstrate that the proposed method can effectively determine optimal values for neuron weights and biases and achieves high accuracy in prediction applications. The evaluations show that the method can be used for prediction and estimation with a high accuracy ratio and that the designed model provides a reliable optimization technique. The simulation results show that the optimized ANN outperforms MLR for prediction purposes.
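
The abstract above describes using particle swarm optimization (PSO) to set a network's weights and biases. Below is a hedged NumPy sketch of that idea: a standard PSO loop searches the flattened weight/bias vector of a tiny one-hidden-layer network on a toy regression problem. The network size, PSO coefficients, and data are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 60).reshape(-1, 1)
y = np.sin(X).ravel()

N_HIDDEN = 8
N_PARAMS = 1 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1   # W1, b1, W2, b2

def forward(params, X):
    """Unpack a flat parameter vector into W1, b1, W2, b2 and run the network."""
    w1 = params[:N_HIDDEN].reshape(1, N_HIDDEN)
    b1 = params[N_HIDDEN:2 * N_HIDDEN]
    w2 = params[2 * N_HIDDEN:3 * N_HIDDEN].reshape(N_HIDDEN, 1)
    b2 = params[-1]
    h = np.tanh(X @ w1 + b1)
    return (h @ w2).ravel() + b2

def mse(params):
    return np.mean((forward(params, X) - y) ** 2)

# Standard PSO update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
n_particles, n_iter = 30, 300
w, c1, c2 = 0.72, 1.49, 1.49
pos = rng.uniform(-1, 1, (n_particles, N_PARAMS))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    cost = np.array([mse(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best MSE found by PSO:", mse(gbest))
```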

A Fuzzy System Representation of Functions of Two Variables and its Application to Gray Scale Images

  • Moon, Byung-soo;Kim, Young-taek;Kim, Jang-yeol
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.7
    • /
    • pp.569-573
    • /
    • 2001
  • An approximate representation of discrete functions $\{f_{i,j} \mid i, j = -1, 0, 1, \ldots, N+1\}$ of two variables by a fuzzy system is described. We use cubic B-splines as the fuzzy sets for the input fuzzification and spike functions as the output fuzzy sets. The ordinal number of $f_{i,j}$ in the sorted list is taken to be the output fuzzy set number in the $(i, j)$-th entry of the fuzzy rule table. We show that the fuzzy system is an exact representation of the cubic spline function $s(x, y) = \sum_{i,j=-1}^{N+1} f_{i,j} B_i(x) B_j(y)$ and that the approximation error $s(x, y) - f(x, y)$ is, surprisingly, $O(h^2)$ when $f(x, y)$ is three times continuously differentiable. We prove that when $f(x, y)$ is a gray scale image, the fuzzy system is a smoothed representation of the image, and that the original image can be recovered exactly from its fuzzy system representation when it is a digitized image.
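
As an illustration of the spline sum in the abstract, the sketch below evaluates the tensor-product cubic B-spline sum $s(x,y)=\sum_{i,j=-1}^{N+1} f_{i,j}B_i(x)B_j(y)$ on a uniform grid and checks its roughly $O(h^2)$ error for a smooth test function. The fuzzy-rule construction itself is not reproduced, and the test function is an assumption.

```python
import numpy as np

def cubic_bspline(t):
    """Cardinal cubic B-spline with support [-2, 2]."""
    a = np.abs(t)
    out = np.zeros_like(a)
    m1 = a < 1
    m2 = (a >= 1) & (a < 2)
    out[m1] = 2.0 / 3.0 - a[m1] ** 2 + 0.5 * a[m1] ** 3
    out[m2] = (2.0 - a[m2]) ** 3 / 6.0
    return out

def spline_sum(f, N, x, y):
    """Evaluate s(x, y) = sum_{i,j=-1}^{N+1} f(i*h, j*h) B((x - i*h)/h) B((y - j*h)/h)."""
    h = 1.0 / N
    idx = np.arange(-1, N + 2)
    bx = cubic_bspline((x[:, None] - idx * h) / h)        # (n_pts, N+3)
    by = cubic_bspline((y[:, None] - idx * h) / h)
    F = f(idx[:, None] * h, idx[None, :] * h)              # grid values f_{i,j}
    return np.einsum("pi,ij,pj->p", bx, F, by)

f = lambda x, y: np.sin(np.pi * x) * np.cos(np.pi * y)     # smooth test function
xq = yq = np.linspace(0.1, 0.9, 50)
for N in (16, 32, 64):
    err = np.max(np.abs(spline_sum(f, N, xq, yq) - f(xq, yq)))
    print(f"N = {N:3d}  max error = {err:.2e}")            # error shrinks roughly like h^2
```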

Performance Improvement of Towed Array Shape Estimation Using Interpolation (보간법을 이용한 견인 어레이 형상 추정 기법의 성능 개선)

  • 박민수;도경철;오원천;윤대희;이충용
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.3
    • /
    • pp.72-76
    • /
    • 2000
  • A calibration technique is proposed to improve the performance of 2-D towed array shape estimation using the Kalman filter. When displacement sensors are used, the 2-D hydrophone positions estimated by the Kalman filter are computed under the assumption that adjacent hydrophones are horizontally equi-spaced, so that the maximum distance equals the array length. This assumption causes errors in the estimated hydrophone positions. The proposed technique, using linear model approximation or spline interpolation, reduces these errors by exploiting the fact that the total length of the array is preserved whatever its shape. Numerical experiments show that the proposed method is very effective.
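
A hedged sketch of the interpolation idea above: estimated hydrophone positions are interpolated with a cubic spline and then re-placed at equal arc-length spacing so that the total array length is preserved. The Kalman-filter stage is not reproduced, and the array geometry and sensor spacing below are made-up values.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def respace_along_spline(xy_est, sensor_spacing):
    """xy_est: (n, 2) estimated hydrophone positions; sensor_spacing: known inter-sensor distance."""
    # Parameterize the estimated shape by cumulative chord length and fit splines x(t), y(t).
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(xy_est, axis=0), axis=1))]
    sx, sy = CubicSpline(d, xy_est[:, 0]), CubicSpline(d, xy_est[:, 1])

    # Densely sample the spline and build its arc length.
    t = np.linspace(0.0, d[-1], 2000)
    pts = np.c_[sx(t), sy(t)]
    arc = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]

    # Place hydrophones at equal arc-length intervals equal to the known spacing.
    targets = np.clip(np.arange(len(xy_est)) * sensor_spacing, 0.0, arc[-1])
    t_new = np.interp(targets, arc, t)
    return np.c_[sx(t_new), sy(t_new)]

# Toy example: a gently curved array, 16 sensors nominally 3 m apart.
n, spacing = 16, 3.0
x_true = np.linspace(0, (n - 1) * spacing, n)
xy_est = np.c_[x_true, 0.5 * np.sin(x_true / 15.0)]   # stand-in for Kalman-filter output
print(respace_along_spline(xy_est, spacing)[:3])
```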

Hull Form Generation of Minimum Wave Resistance by a Nonlinear Optimization Method (비선형 최적화 기법에 의한 최소 조파저항 선형 생성)

  • Hee-Jung Kim;Ho-Hwan Chun
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.37 no.4
    • /
    • pp.11-18
    • /
    • 2000
  • This paper is concerned with the generation of an optimal forward hull form by a nonlinear programming method. A Rankine source panel method based on the inviscid potential flow approximation is employed to calculate the wave-making resistance, and the SQP (sequential quadratic programming) method is used for the optimization. The hull form is represented by a spline function. A forward hull form of minimum wave resistance under the given design constraints is generated. In addition, a forward hull form of minimum total resistance, obtained by considering the frictional resistance together with an empirical form factor, is produced and compared with the former result.
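
A rough sketch of this kind of optimization set-up, under strong assumptions: the hull is reduced to a 2-D spline waterline, SciPy's SLSQP (an SQP implementation) minimizes a placeholder objective standing in for the Rankine-panel wave-resistance computation, and a simple area equality plays the role of a design constraint.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import trapezoid
from scipy.optimize import minimize

x_stations = np.linspace(0.0, 1.0, 7)          # stations along the (normalized) ship length
y0 = np.sin(np.pi * x_stations) * 0.1          # initial half-breadth offsets (design variables)

def waterplane_area(y):
    """Area under the spline waterline -- used here as the 'displacement' constraint."""
    cs = CubicSpline(x_stations, y)
    xs = np.linspace(0.0, 1.0, 200)
    return trapezoid(cs(xs), xs)

def objective(y):
    """Placeholder for the panel-method wave resistance: penalize waterline curvature."""
    cs = CubicSpline(x_stations, y)
    xs = np.linspace(0.0, 1.0, 200)
    return trapezoid(cs(xs, 2) ** 2, xs)

target_area = waterplane_area(y0)
cons = [{"type": "eq", "fun": lambda y: waterplane_area(y) - target_area}]
bnds = [(0.0, 0.2)] * len(y0)                  # simple geometric design constraints
res = minimize(objective, y0, method="SLSQP", bounds=bnds, constraints=cons)
print("optimized offsets:", np.round(res.x, 4))
```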

A New Dynamic Prediction Algorithm for Highway Traffic Rate (고속도로 통행량 예측을 위한 새로운 동적 알고리즘)

  • Lee, Gwangyeon;Park, Kisoeb
    • Journal of the Korea Society for Simulation
    • /
    • v.29 no.3
    • /
    • pp.41-48
    • /
    • 2020
  • In this paper, a dynamic prediction algorithm using the cumulative distribution function of traffic volume is presented as a new method for predicting the highway traffic rate more accurately. An approximation of the cumulative distribution function is obtained through numerical methods such as natural cubic spline interpolation and the Levenberg-Marquardt method. The algorithm adapts the structure of cumulative-distribution-based random number generation, as used in financial mathematics, to traffic flow prediction. When the highway traffic rate is simulated with this algorithm, the result is very similar to the actual traffic volume. The algorithm can therefore be used not only for highways but also in a variety of other areas that require traffic forecasting.
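
A minimal sketch of the sampling structure described above, assuming synthetic traffic data: the empirical CDF of traffic volumes is approximated with a natural cubic spline, and new traffic volumes are drawn by inverse-transform sampling. The Levenberg-Marquardt fitting step and the real highway data are not included.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(7)
traffic = rng.gamma(shape=5.0, scale=400.0, size=500)     # stand-in hourly traffic volumes

# Empirical CDF on a grid of traffic values, then a natural cubic spline through it.
grid = np.linspace(traffic.min(), traffic.max(), 25)
ecdf = np.searchsorted(np.sort(traffic), grid, side="right") / traffic.size
cdf_spline = CubicSpline(grid, ecdf, bc_type="natural")

# Inverse-transform sampling: draw u ~ U(0,1) and solve CDF(v) = u on a dense grid.
dense_v = np.linspace(grid[0], grid[-1], 4000)
dense_cdf = np.clip(cdf_spline(dense_v), 0.0, 1.0)
dense_cdf = np.maximum.accumulate(dense_cdf)              # enforce monotonicity
u = rng.random(1000)
simulated = np.interp(u, dense_cdf, dense_v)

print("observed mean:", traffic.mean().round(1), " simulated mean:", simulated.mean().round(1))
```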

A Development of Stem Analysis Program and its Comparison with other Method for Increment Calculation (수간석해(樹幹析解) 전산(電算)프로그램 개발(開發) 및 생장량(生長量) 계산방법(計算方法)의 비교(比較)에 관(關)한 연구(硏究))

  • Byun, Woo Hyuk;Lee, Woo Kyun;Yun, Kwang Bae
    • Journal of Korean Society of Forest Science
    • /
    • v.79 no.1
    • /
    • pp.1-15
    • /
    • 1990
  • In this study, a stem analysis program that runs on a personal computer was developed to reduce the time and cost of calculation and to increase the accuracy of the analysis, and the stem analysis method used in the program was compared with other methods. The results were as follows. Values measured to 1/100 mm with a recent annual-ring measurement machine (Jahrringmeßgeraete, Johan Type II) were automatically input to the computer and saved under a given file name; a Turbo Pascal program was written for this purpose. The measured data were then analyzed by a stem analysis calculation program written in Fortran-77. Volume and height increments were approximated by a spline function, and the diameter of each stem disk was calculated by the quadratic mean method. The increment values calculated by the programs were printed at annual and five-year intervals, and the stem analysis diagram and several increment graphs could also be printed easily. A comparison of the analysis methods showed that the quadratic mean reduces the error caused by an eccentric pith, and that increments are calculated more exactly when the stem taper curve approximated by the spline function is used to compute tree height and volume.
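
Two of the calculations mentioned above can be sketched as follows, with illustrative measurements rather than the paper's data: the quadratic mean of several radii of a disk, which dampens the effect of an eccentric pith, and a spline through age-height data from which annual increments are read.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def quadratic_mean_diameter(radii_mm):
    """Quadratic mean: d = 2 * sqrt(mean(r_k^2)) over the measured radii of one disk."""
    r = np.asarray(radii_mm, dtype=float)
    return 2.0 * np.sqrt(np.mean(r ** 2))

print("disk diameter [mm]:", round(quadratic_mean_diameter([52.1, 47.8, 50.3, 49.6]), 1))

# Stem-analysis style height curve: spline through (age, height) and yearly increments.
age = np.array([5, 10, 15, 20, 25, 30])               # years
height = np.array([2.1, 5.4, 9.0, 12.1, 14.5, 16.2])  # metres at those ages
h_spline = CubicSpline(age, height, bc_type="natural")
years = np.arange(5, 31)
annual_increment = np.diff(h_spline(years))            # height growth per year
print("height increment, age 20 to 21 [m]:", round(float(annual_increment[15]), 2))
```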

Sampling Strategies for Computer Experiments: Design and Analysis

  • Lin, Dennis K.J.;Simpson, Timothy W.;Chen, Wei
    • International Journal of Reliability and Applications
    • /
    • v.2 no.3
    • /
    • pp.209-240
    • /
    • 2001
  • Computer-based simulation and analysis is used extensively in engineering for a variety of tasks. Despite the steady and continuing growth of computing power and speed, the computational cost of complex high-fidelity engineering analyses and simulations limits their use in important areas like design optimization and reliability analysis. Statistical approximation techniques such as design of experiments and response surface methodology are becoming widely used in engineering to minimize the computational expense of running such computer analyses and circumvent many of these limitations. In this paper, we compare and contrast five experimental design types and four approximation model types in terms of their capability to generate accurate approximations for two engineering applications with typical engineering behaviors and a wide range of nonlinearity. The first example involves the analysis of a two-member frame that has three input variables and three responses of interest. The second example simulates the roll-over potential of a semi-tractor-trailer for different combinations of input variables and braking and steering levels. Detailed error analysis reveals that uniform designs provide good sampling for generating accurate approximations using different sample sizes, while kriging models provide accurate approximations that are robust for use with a variety of experimental designs and sample sizes.
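
A hedged sketch of this kind of study on a toy problem: a kriging model (here scikit-learn's Gaussian process regressor) is fitted to a space-filling sample of a cheap analytic stand-in for an expensive simulation and checked on a dense test grid. A scrambled Sobol' sample stands in for a uniform design; the paper's frame and tractor-trailer models are not reproduced.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulation(x):
    """Stand-in response with mild nonlinearity in two input variables."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + x[:, 0] * x[:, 1]

# Space-filling design of 32 runs on [0, 1]^2.
X_train = qmc.Sobol(d=2, scramble=True, seed=3).random(32)
y_train = expensive_simulation(X_train)

krig = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]), normalize_y=True)
krig.fit(X_train, y_train)

# Accuracy check on a dense test grid.
g = np.linspace(0, 1, 40)
X_test = np.array([[a, b] for a in g for b in g])
rmse = np.sqrt(np.mean((krig.predict(X_test) - expensive_simulation(X_test)) ** 2))
print(f"kriging RMSE on test grid: {rmse:.4f}")
```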

Compensation of Geometric Error by the Correction of Control Surface (제어곡면 수정에 의한 기하오차 보정)

  • Ko, Tae-Jo;Park, Sang-Shin;Kim, Hee-Sool
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.18 no.4
    • /
    • pp.97-103
    • /
    • 2001
  • The accuracy of a machined part is determined by the relative motion between the cutting tool and the workpiece, and one of the important factors affecting this relative motion is the geometric error of the machine tool. In this study, the geometric errors are first measured with a laser interferometer, and the positioning error of each control point, selected uniformly on the CAD model of the control surface, is estimated from the form shaping model and the geometric error database, where the form shaping function is derived by linking homogeneous transformation matrices. Second, the control points are shifted by the estimated positioning errors, and a new control surface is modeled by NURBS (Non-Uniform Rational B-Spline) surface approximation to the shifted control points. By generating tool paths on the redesigned control surface, the machining error is considerably reduced.
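
A simplified sketch of the compensation step, under stated assumptions: nominal surface points are shifted by a made-up placeholder error field (standing in for the interferometer-based form shaping model), and a smooth bivariate spline surface is refitted to the shifted points. SciPy has no rational (NURBS) surfaces, so a non-rational B-spline surface is used instead, and the compensation direction is an assumption of this sketch.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Nominal control surface z(x, y) on a grid.
x = np.linspace(0, 100, 15)                       # mm
y = np.linspace(0, 60, 11)                        # mm
X, Y = np.meshgrid(x, y, indexing="ij")
Z_nominal = 0.002 * (X - 50) ** 2 + 5.0           # a shallow curved face

def predicted_z_error(X, Y):
    """Placeholder geometric error field [mm] for the (not reproduced) error model."""
    return 0.01 * np.sin(X / 30.0) + 0.0002 * Y

# Shift the surface points opposite to the predicted error so the cut part comes out
# closer to nominal.
Z_compensated = Z_nominal - predicted_z_error(X, Y)

# Refit a cubic-by-cubic spline surface to the shifted points; tool paths would then
# be generated on this redesigned surface.
comp_surface = RectBivariateSpline(x, y, Z_compensated, kx=3, ky=3)
print("compensated z at (25, 30):", float(comp_surface(25.0, 30.0)[0, 0]))
```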

Energy Based Multiple Refitting for Skinning

  • Jha, Kailash
    • International Journal of CAD/CAM
    • /
    • v.5 no.1
    • /
    • pp.11-18
    • /
    • 2005
  • The traditional method of manipulating knots and degrees gives a poor-quality surface if the compatibility of the input curves is not good enough. In this work, a new algorithm for multiple refitting of curves has been developed using a minimum-energy formulation to obtain compatible curves for skinning. The present technique first reduces the number of control points and gives a smoother result for a given accuracy, and the surface is then skinned from the compatible curves. The technique is very useful for reducing data size when a large amount of data has to be handled, the energy-based formulation is suitable for approximating missing data, and volumetric information can also be obtained from the surface data for analysis.
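
A hedged sketch of the compatibility problem behind this abstract: section curves given as dense point samples are refitted as least-squares cubic B-splines on one shared knot vector with few control points, which is what skinning requires. The paper's minimum-energy formulation itself is not reproduced; this shows only the data-reduction and common-knot refitting step using SciPy.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

u = np.linspace(0.0, 1.0, 200)                          # common curve parameter
# Two noisy section "curves" (one coordinate as a function of the parameter),
# stand-ins for the input sections.
rng = np.random.default_rng(5)
sections = [np.sin(2 * np.pi * u) + 0.01 * rng.standard_normal(u.size),
            0.8 * np.sin(2 * np.pi * u + 0.3) + 0.01 * rng.standard_normal(u.size)]

# One shared cubic knot vector with a handful of interior knots => few control points.
interior = np.linspace(0.0, 1.0, 9)[1:-1]
knots = np.r_[[0.0] * 4, interior, [1.0] * 4]

compatible = [make_lsq_spline(u, s, knots, k=3) for s in sections]
for i, spl in enumerate(compatible):
    err = np.max(np.abs(spl(u) - sections[i]))
    print(f"section {i}: {len(spl.c)} control points, max refit deviation {err:.3f}")
# All refitted curves now share knots and control-point count, as skinning requires.
```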