• Title/Summary/Keyword: Re-interpolation


Use of a Solution-Adaptive Grid (SAG) Method for the Solution of the Unsaturated Flow Equation (불포화 유동 방정식의 해를 위한 해적응격자법의 이용 연구)

  • Koo, Min-Ho
    • Journal of the Korean Society of Groundwater Environment
    • /
    • v.6 no.1
    • /
    • pp.23-32
    • /
    • 1999
  • A new numerical method using solution-adaptive grids (SAG) is developed to solve the Richards' equation (RE) for unsaturated flow in porous media. Using a grid generation technique, the SAG method automatically redistributes a fixed number of grid points during the flow process so that more grid points are clustered in regions of large solution gradients. The method uses a coordinate transformation to obtain a transformed RE, which is solved with the standard finite difference method. The movement of grid points is incorporated into the transformed RE, so all computation is performed on fixed grid points of the transformed domain without any interpolation. Thus, numerical difficulties arising from the movement of the wetting front during infiltration are substantially overcome by the new method. Numerical experiments for a one-dimensional infiltration problem are presented to compare the SAG method with the modified Picard method on a fixed grid. Results show that the accuracy of a SAG solution using 41 nodes is comparable to that of the fixed-grid solution using 201 nodes, while requiring only 50% of the CPU time. The global mass balance and the convergence of SAG solutions are strongly affected by the time step size ($\Delta t$) and the weighting parameter ($\gamma$) used for generating solution-adaptive grids. Thus, the method requires automated readjustment of $\Delta t$ and $\gamma$ to yield mass-conservative and convergent solutions, although this may increase computational cost. The method can be especially effective for simulating unsaturated flow and other transport problems involving the propagation of a sharp front.

  • PDF
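
As a rough illustration of the grid-redistribution idea in the entry above, the sketch below equidistributes a fixed number of 1D nodes according to a gradient-based monitor function. The monitor form, the role of $\gamma$, and the function names are assumptions of this sketch; unlike the paper, which solves a transformed RE on fixed computational coordinates and avoids interpolation, the sketch simply re-interpolates the solution onto the moved grid.

```python
import numpy as np

def redistribute_grid(x, u, gamma, n_iter=5):
    """Cluster a fixed number of 1D nodes where |du/dx| is large by
    equidistributing a gradient-based monitor function (illustrative only)."""
    for _ in range(n_iter):
        dudx = np.gradient(u, x)
        # Monitor function; gamma weights how strongly nodes follow the gradient.
        monitor = np.sqrt(1.0 + gamma * dudx**2)
        # Cumulative integral of the monitor along the grid (trapezoidal rule).
        s = np.concatenate(([0.0],
                            np.cumsum(0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x))))
        # Place the new nodes at equal increments of s, mapped back to x.
        x_new = np.interp(np.linspace(0.0, s[-1], x.size), s, x)
        # Re-interpolate the solution onto the moved grid before the next pass.
        u = np.interp(x_new, x, u)
        x = x_new
    return x, u

# Example: a steep wetting-front-like profile draws nodes toward the front.
x0 = np.linspace(0.0, 1.0, 41)
u0 = 0.5 * (1.0 - np.tanh((x0 - 0.3) / 0.01))
x_adapted, u_adapted = redistribute_grid(x0, u0, gamma=50.0)
```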

Evaluation of T-stress for cracks in elastic sheets

  • Su, R.K.L.
    • Structural Engineering and Mechanics
    • /
    • v.20 no.3
    • /
    • pp.335-346
    • /
    • 2005
  • The T-stress of cracks in elastic sheets is solved by using the fractal finite element method (FFEM). The FFEM, which had been developed to determine the stress intensity factors of cracks, is re-applied to evaluate the T-stress, another important fracture parameter. The FFEM combines an exterior finite element model with a localized inner model near the crack tip, where the mesh geometry of the latter is self-similar in radial layers around the tip. The higher order Williams series is used to condense the large number of nodal displacements in the inner model near the crack tip to a small set of unknown coefficients. Numerical examples reveal that the present approach is simple and accurate for calculating the T-stresses and the stress intensity factors. Some errors in the T-stress solutions reported in the previous literature are identified, and new solutions for the T-stress calculations are presented.
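
For context, the T-stress evaluated in the entry above is the constant (r-independent) term of the standard Williams expansion of the near-tip stress field; the FFEM condenses the inner-model nodal displacements onto the coefficients of this series:

```latex
\sigma_{ij}(r,\theta) \;=\; \frac{K_I}{\sqrt{2\pi r}}\, f^{I}_{ij}(\theta)
\;+\; \frac{K_{II}}{\sqrt{2\pi r}}\, f^{II}_{ij}(\theta)
\;+\; T\,\delta_{1i}\,\delta_{1j} \;+\; O\!\left(r^{1/2}\right)
```

Here $K_I$ and $K_{II}$ are the stress intensity factors and $T$ acts parallel to the crack faces; the higher-order terms correspond to the additional Williams coefficients retained in the condensation.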

Multiple Description Coding Using Directional Discrete Cosine Transform

  • Lama, Ramesh Kumar;Kwon, Goo-Rak
    • Journal of information and communication convergence engineering
    • /
    • v.11 no.4
    • /
    • pp.293-297
    • /
    • 2013
  • Delivery of high quality video over a wide area network with a large number of users poses great challenges for the video communication system. To ensure video quality, multiple description coding has recently attracted considerable attention as a way of encoding and delivering visual information over wireless networks. We propose a new, efficient multiple description coding (MDC) technique. Quincunx lattice sub-sampling is used for generating multiple descriptions of an image. In this paper, we propose applying a directional discrete cosine transform (DCT) to the sub-sampled quincunx lattice to create an MDC representation. On the decoder side, the image is decoded from the received side information. If all the descriptions arrive successfully, the image is reconstructed by combining the descriptions; however, if only one side description is received, decoding is carried out using an interpolation process. The experimental results show that the directional DCT can achieve a better coding gain as well as better energy packing efficiency than the conventional DCT with re-alignment.
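
A hedged sketch of the quincunx splitting and the single-description fallback described above, assuming a grayscale image stored as a NumPy array. The directional DCT coding of each description is omitted, and the 4-neighbour averaging used to fill the missing sites is only a simple stand-in for the decoder-side interpolation.

```python
import numpy as np

def quincunx_split(img):
    """Split an image into two quincunx (checkerboard) descriptions."""
    mask = (np.indices(img.shape).sum(axis=0) % 2 == 0)
    return np.where(mask, img, 0), np.where(~mask, img, 0), mask

def reconstruct_single(desc, available):
    """If only one description arrives, fill its missing checkerboard sites by
    averaging the available 4-connected neighbours (illustrative stand-in)."""
    out = desc.astype(float)
    pad = np.pad(out, 1, mode='constant')
    avail = np.pad(available.astype(float), 1, mode='constant')
    num = (pad[:-2, 1:-1] * avail[:-2, 1:-1] + pad[2:, 1:-1] * avail[2:, 1:-1] +
           pad[1:-1, :-2] * avail[1:-1, :-2] + pad[1:-1, 2:] * avail[1:-1, 2:])
    den = (avail[:-2, 1:-1] + avail[2:, 1:-1] +
           avail[1:-1, :-2] + avail[1:-1, 2:])
    out[~available] = (num / np.maximum(den, 1.0))[~available]
    return out

# Example: encode two descriptions, then recover the image from d0 alone.
img = np.arange(64, dtype=float).reshape(8, 8)
d0, d1, mask = quincunx_split(img)
approx = reconstruct_single(d0, mask)
```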

A Study on Speed Improvement of Medical Image Reconstruction (의료영상 재구성의 속도개선에 관한 연구)

  • Ryu, Jong-Hyun;Beack, Seung-Hwa
    • Proceedings of the KIEE Conference
    • /
    • 1998.07g
    • /
    • pp.2489-2491
    • /
    • 1998
  • The study of 3D image reconstruction has developed along with the progress of computers, and a great deal of research has been done on it. 3D medical image reconstruction techniques are useful for figuring out the human body's complex 3D structures from a set of 2D sections. However, 3D medical image reconstruction requires a large amount of calculation, so it takes a long time and an expensive system; this motivates work on speed improvement. In this paper, by applying interpolation only to the parts that can appear as a cube, we present a method that improves the speed by reducing the amount of calculation.

  • PDF

Adaptive Frame Rate Up-Conversion Algorithm using the Neighbouring Pixel Information and Bilateral Motion Estimation (이웃하는 블록 정보와 양방향 움직임 예측을 이용한 적응적 프레임 보간 기법)

  • Oh, Hyeong-Chul;Lee, Joo-Hyun;Min, Chang-Ki;Jeong, Je-Chang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.9C
    • /
    • pp.761-770
    • /
    • 2010
  • In this paper, we propose a new Frame Rate Up-Conversion (FRUC) scheme to increase the frame rate and enhance the video quality at the decoder. The proposed algorithm builds preliminary frames in the forward and backward directions using bilateral prediction. While processing the preliminary frames, an additional interpolation is performed for the occlusion area: if the calculated value of a block with respect to the reference frame is larger than a predetermined threshold, the block is classified as an occlusion area. To interpolate the occlusion area, we perform a re-search to obtain the optimal block, considering the number of available neighbouring blocks. The experimental results show that the proposed algorithm achieves better PSNR and visual quality than conventional methods.
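
A minimal sketch of the bilateral motion estimation step, assuming grayscale frames as NumPy arrays; the block size, search range, and SAD criterion are generic choices, and the occlusion detection and neighbour-based re-search described above are not reproduced.

```python
import numpy as np

def bilateral_me_block(prev_f, next_f, top, left, bsize=8, search=4):
    """Find the vector v minimising the SAD between the block displaced by -v
    in the previous frame and by +v in the next frame, so the interpolated
    block lies midway along the motion trajectory."""
    h, w = prev_f.shape
    best_sad, best_v = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = top - dy, left - dx   # block position in the previous frame
            y1, x1 = top + dy, left + dx   # block position in the next frame
            if (0 <= y0 and y0 + bsize <= h and 0 <= x0 and x0 + bsize <= w and
                    0 <= y1 and y1 + bsize <= h and 0 <= x1 and x1 + bsize <= w):
                b0 = prev_f[y0:y0 + bsize, x0:x0 + bsize].astype(float)
                b1 = next_f[y1:y1 + bsize, x1:x1 + bsize].astype(float)
                sad = np.abs(b0 - b1).sum()
                if sad < best_sad:
                    best_sad, best_v = sad, (dy, dx)
    return best_v

def interpolate_frame(prev_f, next_f, bsize=8, search=4):
    """Build the intermediate frame block by block as the average of the two
    bilaterally matched blocks (no occlusion handling in this sketch)."""
    out = np.zeros(prev_f.shape, dtype=float)
    h, w = prev_f.shape
    for top in range(0, h - bsize + 1, bsize):
        for left in range(0, w - bsize + 1, bsize):
            dy, dx = bilateral_me_block(prev_f, next_f, top, left, bsize, search)
            b0 = prev_f[top - dy:top - dy + bsize, left - dx:left - dx + bsize].astype(float)
            b1 = next_f[top + dy:top + dy + bsize, left + dx:left + dx + bsize].astype(float)
            out[top:top + bsize, left:left + bsize] = 0.5 * (b0 + b1)
    return out
```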

Motion Map Generation for Maintaining the Temporal Coherence of Brush Strokes in the Painterly Animation (회화적 애니메이션에서 브러시 스트로크의 시간적 일관성을 유지하기 위한 모션 맵 생성)

  • Park Youngs-Up;Yoon Kyung-Hyun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.8
    • /
    • pp.536-546
    • /
    • 2006
  • Painterly animation is a method that expresses painterly images with a hand-painted appearance from a video, and its most crucial element is the temporal coherence of brush strokes between frames. A motion map is proposed in this paper as a solution to the problem of maintaining the temporal coherence of brush strokes between frames. A motion map is the region in which frame-to-frame motion has occurred; that is, it is the region through which edges move from frame to frame according to the motion information, starting from the edges where motion occurred. In this paper, we employ the optical flow method and a block-based method to estimate the motion information. The estimator that yielded the highest PSNR using the acquired motion information (directions and magnitudes) was chosen to provide the final motion information for forming the motion map. The created motion map determines the part of the frame that should be re-painted. In order to express painterly images with a hand-painted appearance and maintain the temporal coherence of brush strokes, the motion information is applied only to the strong edges that determine the directions of the brush strokes. This paper also seeks to reduce the flickering between frames by using a multiple exposure method and a difference map created from the difference between the source image and the canvas. Coherence in the direction of the brush strokes is also maintained by a local gradient interpolation, which preserves the structural coherence.
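
A hedged sketch of how a motion map could be formed from strong edges and dense optical flow, assuming OpenCV is available; the block-based alternative, the PSNR-based selection between the two estimators, and the brush-stroke rendering itself are not reproduced, and the thresholds below are illustrative assumptions.

```python
import cv2
import numpy as np

def motion_map(prev_gray, curr_gray, mag_thresh=1.0):
    """Flag the region where edge pixels move between frames: start from the
    strong edges of the previous frame, keep those whose optical-flow magnitude
    exceeds a threshold, then dilate so the map covers the swept region that
    should be re-painted."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.hypot(flow[..., 0], flow[..., 1])
    edges = cv2.Canny(prev_gray, 100, 200) > 0
    moving_edges = edges & (mag > mag_thresh)
    dilated = cv2.dilate(moving_edges.astype(np.uint8) * 255,
                         np.ones((5, 5), np.uint8), iterations=1)
    return dilated > 0
```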

Shape Design Sensitivity Analysis using Isogeometric Approach (CAD 형상을 활용한 설계 민감도 해석)

  • Ha, Seung-Hyun;Cho, Seon-Ho
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2007.04a
    • /
    • pp.577-582
    • /
    • 2007
  • A variational formulation for plane elasticity problems is derived based on an isogeometric approach. Isogeometric analysis is an emerging methodology in which the basis functions of the analysis domain are generated directly from the NURBS (Non-Uniform Rational B-Splines) geometry. Thus, the solution space can be represented in terms of the same functions that represent the geometry, and the coefficients of the basis functions, or control variables, play the role of degrees of freedom. Furthermore, due to h-, p-, and k-refinement schemes, high order geometric features can be described exactly and easily without a tedious re-meshing process. The isogeometric sensitivity analysis method enables us to analyze arbitrarily shaped structures without re-meshing. It also provides a precise way of constructing the analysis model to exactly represent the geometry, using the B-spline basis functions of the CAD geometric model. To obtain precise shape sensitivity, the normal and curvature of the boundary should be taken into account in the shape sensitivity expressions. However, in conventional finite element methods, the normal information is inaccurate and the curvature is generally missing due to the use of linear interpolation functions. A continuum-based adjoint sensitivity analysis method using the isogeometric approach is derived for plane elasticity problems. Conventional shape optimization using the finite element method has some difficulties in the parameterization of the boundary. In isogeometric analysis, however, the geometric properties are already embedded in the B-spline shape functions and control points, and the perturbation of control points automatically results in shape changes. With the conventional finite element method, the inter-element continuity of the design space is not guaranteed, so the normal vector and curvature are not accurate enough. On the other hand, in isogeometric analysis these values are continuous over the whole design space, so accurate shape sensitivity can be obtained. Through numerical examples, the developed isogeometric sensitivity analysis method is verified to show excellent agreement with finite difference sensitivities.

  • PDF
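
For reference, the B-spline basis functions behind the isogeometric discretisation follow the Cox-de Boor recursion, and the NURBS geometry and the solution field share the same rational basis, which is why perturbing control points changes the boundary, its normal, and its curvature smoothly:

```latex
N_{i,0}(\xi) =
\begin{cases}
1, & \xi_i \le \xi < \xi_{i+1},\\
0, & \text{otherwise},
\end{cases}
\qquad
N_{i,p}(\xi) = \frac{\xi - \xi_i}{\xi_{i+p} - \xi_i}\,N_{i,p-1}(\xi)
             + \frac{\xi_{i+p+1} - \xi}{\xi_{i+p+1} - \xi_{i+1}}\,N_{i+1,p-1}(\xi),

R_{i,p}(\xi) = \frac{N_{i,p}(\xi)\,w_i}{\sum_j N_{j,p}(\xi)\,w_j},
\qquad
\mathbf{x}(\xi) = \sum_i R_{i,p}(\xi)\,\mathbf{P}_i,
\qquad
u^h(\xi) = \sum_i R_{i,p}(\xi)\,u_i .
```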

Application of Artificial Neural Network to Flamelet Library for Gaseous Hydrogen/Liquid Oxygen Combustion at Supercritical Pressure (초임계 압력조건에서 기체수소-액체산소 연소해석의 층류화염편 라이브러리에 대한 인공신경망 학습 적용)

  • Jeon, Tae Jun;Park, Tae Seon
    • Journal of the Korean Society of Propulsion Engineers
    • /
    • v.25 no.6
    • /
    • pp.1-11
    • /
    • 2021
  • To develop an efficient procedure related to the flamelet library, a machine learning process based on an artificial neural network (ANN) is applied to a gaseous hydrogen/liquid oxygen combustor under a supercritical pressure condition. For the hidden layers, 25 combinations based on the Rectified Linear Unit (ReLU) and the hyperbolic tangent are examined to find an optimum architecture in terms of computational efficiency and training performance. Among the activation functions, the hyperbolic tangent proves suitable for achieving high learning performance and accurate properties. A transformation of the learning data is proposed to improve the training performance. When the optimal number of nodes is arranged over the 4 hidden layers, the network is found to be the most efficient in terms of training performance and computational cost. Compared to the interpolation procedure, the ANN procedure reduces computational time and system memory by 37% and 99.98%, respectively.
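
A minimal sketch of replacing flamelet-library interpolation with an ANN regression, assuming a Keras MLP with 4 hidden layers and hyperbolic-tangent activations as in the entry above. The input/output variables, layer width, and training settings are illustrative assumptions, and the placeholder data only stands in for the pre-computed library.

```python
import numpy as np
import tensorflow as tf

# Assumed library layout: 2 table coordinates in, n_props tabulated properties out.
n_inputs, n_props, width = 2, 5, 64

model = tf.keras.Sequential(
    [tf.keras.layers.Dense(width, activation='tanh') for _ in range(4)] +
    [tf.keras.layers.Dense(n_props)]
)
model.compile(optimizer='adam', loss='mse')

# X_table, Y_table would come from the pre-computed flamelet library
# (random placeholder data here, only so the sketch runs end to end).
X_table = np.random.rand(10000, n_inputs).astype(np.float32)
Y_table = np.random.rand(10000, n_props).astype(np.float32)
model.fit(X_table, Y_table, epochs=10, batch_size=256, verbose=0)

# At run time the trained network replaces the multi-dimensional table look-up.
props = model.predict(np.array([[0.3, 0.1]], dtype=np.float32))
```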

An Estimation of Springing Responses for Recent Ships

  • Park, In-Kyu;Kim, Jong-Jin
    • Journal of Ocean Engineering and Technology
    • /
    • v.19 no.6 s.67
    • /
    • pp.58-63
    • /
    • 2005
  • The estimation of springing responses for recent ships is carried out, and its application to ship design is described. To this aim, springing effects on the hull girder were re-evaluated, including non-linear wave excitations and torsional vibrations of the hull. The Timoshenko beam model was used to calculate the stress distribution on the hull girder, using the superposition method. The quadratic strip method was employed to calculate the hydrodynamic forces and moments on the hull. In order to remove the irregular frequencies, we adopted a 'rigid lid' on the hull free surface level and added asymptotic interpolation over the high frequency range. Several applications were carried out for the following existing ships: Bishop and Price's container ship, the S-175 container ship, a large container ship, a VLCC, and an ore carrier. One of them is compared with full-scale measurement results, and another with model test results. The comparison between the analytical solution and the numerical solution for a homogeneous beam-type artificial ship shows good agreement. It is found that most springing energy comes from high frequency waves for ships having a low natural frequency and sailing routes such as the North Atlantic. Therefore, the high frequency tail of the wave spectrum should follow $\omega^{-3}$ instead of $\omega^{-4}$ or $\omega^{-5}$ for the springing calculation.
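
One illustrative way to write the recommended tail adjustment is to replace the usual high-frequency decay of the wave spectrum above a cut-off frequency $\omega_c$; the cut-off and the piecewise form are assumptions of this note, not the paper's exact prescription:

```latex
S(\omega) \;=\;
\begin{cases}
S_{\mathrm{std}}(\omega), & \omega \le \omega_c,\\[4pt]
S_{\mathrm{std}}(\omega_c)\left(\dfrac{\omega}{\omega_c}\right)^{-3}, & \omega > \omega_c,
\end{cases}
\qquad\text{in place of a tail decaying as } \omega^{-4} \text{ or } \omega^{-5}.
```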

Comparison of Topex/Poseidon sea surface heights and Tide Gauge sea levels in the South Indian Ocean

  • Yoon, Hong-Joo
    • Proceedings of the KSRS Conference
    • /
    • 1998.09a
    • /
    • pp.70-75
    • /
    • 1998
  • The comparison of Topex/Poseidon sea surface heights and Tide Gauge sea levels was studied in the South Indian Ocean over about 3 years of the Topex/Poseidon mission (cycles 11-121), from January 1993 through December 1995. The user's handbook (AVISO) for sea surface height data processing was used in this study. Topex/Poseidon sea surface heights ($\zeta^{T/P}$), i.e. the satellite data at the points closest to the Tide Gauge stations, were chosen at the same latitude as the Tide Gauge stations. These data were re-sampled by linear interpolation at an interval of about 10 days and filtered by a Gaussian filter with a 60-day window. The Tide Gauge sea levels ($\zeta^{Argos}$, $\zeta^{In-situ}$ and $\zeta^{Model}$) were treated with the same method as the satellite data. The main conclusions obtained from the root-mean-square errors and correlation coefficients are as follows: 1) in producing Tide Gauge sea levels from bottom pressure, the in-situ data of METEO-FRANCE showed very good results compared with the model data of ECMWF; and 2) in comparing Topex/Poseidon sea surface heights with Tide Gauge sea levels, the results for the open sea areas were better than those for the coast and island areas.

  • PDF
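
A hedged sketch of the pre-processing described above (linear re-sampling onto roughly a 10-day interval, then Gaussian smoothing with a 60-day window), assuming time stamps in days and using SciPy; the mapping from the 60-day window to the filter sigma is an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def resample_and_smooth(t_days, values, step_days=10.0, window_days=60.0):
    """Linearly re-sample an irregular sea-level series onto a ~10-day grid,
    then low-pass filter it with a Gaussian of ~60-day width."""
    t_grid = np.arange(t_days.min(), t_days.max() + step_days, step_days)
    v_grid = np.interp(t_grid, t_days, values)       # linear re-interpolation
    sigma = (window_days / step_days) / 4.0          # assumed window-to-sigma mapping
    return t_grid, gaussian_filter1d(v_grid, sigma=sigma)

# Example with synthetic, irregularly sampled heights (placeholder data only).
t = np.sort(np.random.uniform(0.0, 1000.0, 120))
h = np.sin(2.0 * np.pi * t / 365.0) + 0.05 * np.random.randn(t.size)
t10, h_smooth = resample_and_smooth(t, h)
```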