• Title/Summary/Keyword: Iterative SENSE


A Study on Matching Method of Hull Blocks Based on Point Clouds for Error Prediction (선박 블록 정합을 위한 포인트 클라우드 기반의 오차예측 방법에 대한 연구)

  • Li, Runqi;Lee, Kyung-Ho;Lee, Jung-Min;Nam, Byeong-Wook;Kim, Dae-Seok
    • Journal of the Computational Structural Engineering Institute of Korea / v.29 no.2 / pp.123-130 / 2016
  • With the development of fast construction in the shipbuilding market, the demand for hull accuracy management keeps growing in the shipbuilding industry. To enhance production efficiency and reduce the manufacturing cycle time, it is important for shipyards to evaluate the accuracy of ship components efficiently throughout the manufacturing cycle. In an accurate shipbuilding process, block accuracy is the key element: it plays a significant role in shortening the shipbuilding period, decreasing cost, and improving ship quality. The key to block accuracy control is to create an integrated block accuracy control system, which is essential for implementing comprehensive accuracy control, increasing block accuracy, standardizing accuracy control procedures, realizing "zero-defect transfer", and advancing allowance-free shipbuilding. In practice, accuracy control managers measure the vital points on the section surface of a block with a heavy total station, which is inconvenient and time-consuming. In this paper, a new measurement method based on the point cloud technique is proposed. The method measures the 3D coordinates of the vital points on the block section surface with a 3D scanner and then compares each measured point with its design point using the ICP algorithm, which includes an allowable-error check that determines whether the error between the design point and the measured point is within the margin of error.
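
The ICP-based comparison described above can be sketched roughly as follows, assuming the measured and design vital points are given as Nx3 NumPy arrays; the function names and the tolerance value are illustrative, not taken from the paper.

```python
# Illustrative sketch only: align measured vital points to design points with a
# point-to-point ICP loop, then flag points whose residual error exceeds a tolerance.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch method)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp_error_check(measured_pts, design_pts, tol=2.0, max_iter=50):
    """Return per-point errors and an in-tolerance mask after ICP alignment."""
    tree = cKDTree(design_pts)
    src = np.asarray(measured_pts, dtype=float).copy()
    for _ in range(max_iter):
        dist, idx = tree.query(src)                 # closest design point for each measured point
        R, t = best_rigid_transform(src, design_pts[idx])
        src = src @ R.T + t
        if dist.mean() < 1e-6:                      # already aligned
            break
    dist, _ = tree.query(src)
    return dist, dist <= tol                        # tol is an illustrative margin of error
```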

Upper Bounds for the Performance of Turbo-Like Codes and Low Density Parity Check Codes

  • Chung, Kyu-Hyuk;Heo, Jun
    • Journal of Communications and Networks / v.10 no.1 / pp.5-9 / 2008
  • In recent years, researchers have investigated many upper-bound techniques for the error probabilities of maximum likelihood (ML) decoding of turbo-like codes and low-density parity-check (LDPC) codes with long codeword block sizes; for short block sizes the problem is trivial. Previous efforts, such as the recently proposed simple bound technique [20], developed upper bounds for LDPC codes and turbo-like codes using ensemble codes or the uniformly interleaved assumption, which bounds the performance averaged over all ensemble codes or all interleavers. Another previous effort [21] obtained the upper bound for a turbo-like code with a particular interleaver using a truncated union bound, which requires knowledge of the minimum Hamming distance and the number of codewords at that distance. However, it gives a reliable bound only in the error-floor region where the minimum Hamming distance is dominant, i.e., at high signal-to-noise ratios. Consequently, an upper bound on the ML decoding performance of a turbo-like code with a particular interleaver or an LDPC code with a particular parity check matrix has so far been infeasible to compute because of its heavy complexity, so only average bounds for ensemble codes can be obtained under the uniform interleaver assumption. In this paper, we propose a new bound technique on the ML decoding performance of a turbo-like code with a particular interleaver and an LDPC code with a particular parity check matrix using ML-estimated weight distributions, and we also show that practical iterative decoding is approximately suboptimal in the ML sense, since the simulated iterative decoding performance is worse than the proposed upper bound and, naturally, worse than the ML decoding performance. To demonstrate this, we compare simulation results with the proposed upper bound and previous bounds. The proposed technique is based on the simple bound with an approximate weight distribution that includes several exact smallest-distance terms, rather than the ensemble distribution or the uniform interleaver assumption. It also yields a tighter upper bound than any previous bound technique for a turbo-like code with a particular interleaver and an LDPC code with a particular parity check matrix.
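
As a point of reference for the kind of bound discussed above, here is a minimal sketch of the classical truncated union bound on ML word-error probability for BPSK over AWGN, computed from a partial weight distribution; it is not the authors' refined simple-bound technique, and the example distribution is hypothetical.

```python
# Classical truncated union bound on ML word-error rate for BPSK over AWGN,
# computed from a partial codeword weight distribution {d: A_d}. This is a
# reference point, not the refined bound proposed in the paper.
import numpy as np
from scipy.stats import norm

def union_bound_wer(weight_dist, code_rate, ebno_db):
    """weight_dist maps Hamming weight d to the number of codewords A_d at that weight."""
    ebno = 10.0 ** (ebno_db / 10.0)
    total = 0.0
    for d, A_d in weight_dist.items():
        # Pairwise error probability between codewords separated by Hamming distance d
        total += A_d * norm.sf(np.sqrt(2.0 * d * code_rate * ebno))
    return min(total, 1.0)

# Hypothetical distribution keeping only the few smallest-distance terms
print(union_bound_wer({6: 3, 8: 12, 10: 45}, code_rate=0.5, ebno_db=2.0))
```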

A Study on the Model of Safety Guideline based on Affordance Design (어포던스 디자인 관점의 안전가이드라인 개발 모형 연구)

  • Kim, Hwoikwang;Kimm, Hyoil
    • Journal of Digital Convergence / v.15 no.11 / pp.447-454 / 2017
  • Human desires begin with the factors that make up the basic living environment. The development model for safety guidelines starts from an awareness of collective psychology and the systemic environment at the moment a threat arises. The design and methodology of the safety guidelines should be drawn up repeatedly through a sequence of iterative models, and the design should follow the chronological order of public services using a service design methodology. For various and complex threats, insights are derived from the user's perspective to inform the development of the safety guidelines, the refinement of the user experience, and the supervision model. The derived insights can then be implemented as a multi-dimensional service model built from the same problem awareness.

Finite Element Method for Structural Concrete Based on the Compression Field Theory (압축응력장 이론을 적용한 콘크리트 유한요소법 개발)

  • 조순호
    • Computational Structural Engineering / v.9 no.1 / pp.151-159 / 1996
  • A finite element formulation based on CFT (Compression Field Theory) concepts, such as the compression-softening effect in cracked concrete and macroscopic, rotating crack models, was presented for the nonlinear behaviour of structural concrete. In this context, tangential and secant material stiffnesses for cracked concrete were also defined and discussed in view of the iterative solution schemes for the nonlinear equations. With computational efficiency and the ability to model post-ultimate behaviour as the major concerns, an incremental displacement solution algorithm involving initial material stiffnesses and a relaxation procedure for fast convergence was adopted and formulated for 8-noded quadrilateral isoparametric elements. The analysis program NASCOM (Nonlinear Analysis of Structural Concrete by FEM: Monotonic Loading), developed on the basis of the CFT constitutive relationships and the incremental solution strategy described, enables prediction of strength and deformation capacities over the full range, crack patterns and their corresponding widths, and the extent of reinforcement yielding. To verify NASCOM, predictions were made of Cervenka's panel test results, including the load resistance and the deformation history. The limited number of predictions indicates a good correlation in a general sense.
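
A minimal sketch of an incremental solution loop using a fixed initial stiffness matrix and an under-relaxed correction, in the spirit of the strategy described above; the `internal_force` routine standing in for the CFT-based state determination is hypothetical.

```python
# Sketch of an incremental displacement solution with a fixed initial stiffness K0
# and an under-relaxed correction; internal_force(u) is a hypothetical stand-in for
# the CFT-based element state determination.
import numpy as np

def incremental_initial_stiffness(K0, load_steps, internal_force,
                                  relax=0.7, tol=1e-6, max_iter=200):
    u = np.zeros(K0.shape[0])
    history = []
    for f_ext in load_steps:                        # external load vector per increment
        for _ in range(max_iter):
            r = f_ext - internal_force(u)           # out-of-balance (residual) force
            du = np.linalg.solve(K0, r)             # always reuse the initial stiffness
            u = u + relax * du                      # relaxation to speed up convergence
            if np.linalg.norm(r) <= tol * (np.linalg.norm(f_ext) + 1e-12):
                break
        history.append(u.copy())
    return history
```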

High-resolution image restoration based on image fusion (영상융합 기반 고해상도 영상복원)

  • Shin Jeongho;Lee Jungsoo;Paik Joonki
    • Journal of Broadcast Engineering / v.10 no.2 / pp.238-246 / 2005
  • This paper proposes an iterative high-resolution image interpolation algorithm using spatially adaptive constraints and a regularization functional. The proposed algorithm adapts the constraints to the direction of edges in the image and restores the high-resolution image by optimizing the regularization functional at each iteration, which is suitable for edge-directional regularization. The proposed algorithm outperforms conventional adaptive interpolation methods as well as non-adaptive ones: it not only restores high-frequency components but also effectively reduces undesirable effects such as noise. Finally, to evaluate the performance of the proposed algorithm, various experiments show that it provides good results in the sense of both subjective and objective quality.
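
A generic sketch of the regularized iterative restoration idea referred to above, minimizing a data-fidelity term plus a smoothness penalty by gradient iterations; the operator callables are assumptions, and the paper's spatially adaptive, edge-directional weighting is omitted.

```python
# Generic regularized iterative restoration: minimize ||y - A x||^2 + lam ||C x||^2
# by gradient steps, where A models blurring/downsampling and C is a high-pass
# (smoothness) operator. A, At, C, Ct are assumed callables for the operators and
# their adjoints; the paper's edge-directional adaptive weighting is not included.
import numpy as np

def regularized_restore(y, A, At, C, Ct, lam=0.01, step=0.5, n_iter=100):
    x = At(y)                                       # crude initial high-resolution estimate
    for _ in range(n_iter):
        grad = At(A(x) - y) + lam * Ct(C(x))        # gradient of the regularized functional
        x = x - step * grad
    return x
```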

Derivation of response spectrum compatible non-stationary stochastic processes relying on Monte Carlo-based peak factor estimation

  • Giaralis, Agathoklis;Spanos, Pol D.
    • Earthquakes and Structures / v.3 no.5 / pp.719-747 / 2012
  • In this paper a novel approach is proposed to address the problem of deriving non-stationary stochastic processes which are compatible in the mean sense with a given (target) response (uniform hazard) spectrum (UHS) as commonly desired in the aseismic structural design regulated by contemporary codes of practice. The appealing feature of the approach is that it is non-iterative and "one-step". This is accomplished by solving a standard over-determined minimization problem in conjunction with appropriate median peak factors. These factors are determined by a plethora of reported new Monte Carlo studies which on their own possess considerable stochastic dynamics merit. In the proposed approach, generation and treatment of samples of the processes individually on a deterministic basis is not required as is the case with the various "two-step" approaches found in the literature addressing the herein considered task. The applicability and usefulness of the approach is demonstrated by furnishing extensive numerical data associated with the elastic design UHS of the current European (EC8) and the Chinese (GB 50011) aseismic code provisions. Purposely, simple and thus attractive from a practical viewpoint, uniformly modulated processes assuming either the Kanai-Tajimi (K-T) or the Clough-Penzien (C-P) spectral form are employed. The Monte Carlo studies yield damping and duration dependent median peak factor spectra, given in a polynomial form, associated with the first passage problem for UHS compatible K-T and C-P uniformly modulated stochastic processes. Hopefully, the herein derived stochastic processes and median peak factor spectra can be used to facilitate the aseismic design of structures regulated by contemporary code provisions in a Monte Carlo simulation-based or stochastic dynamics-based context of analysis.
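
A heavily simplified illustration of the "one-step" least-squares idea described above: fitting Kanai-Tajimi spectrum parameters so that peak-factor-scaled oscillator responses match a target spectrum. Stationarity and a constant peak factor are assumed here purely for brevity; the paper works with uniformly modulated processes and Monte Carlo-based median peak factors.

```python
# Simplified illustration: fit stationary Kanai-Tajimi (K-T) parameters so that
# peak-factor-scaled SDOF responses match a target pseudo-acceleration spectrum
# in a least-squares sense. Constant peak factor and stationarity are assumed
# here only for brevity; this is not the paper's modulated-process formulation.
import numpy as np
from scipy.optimize import least_squares

def kanai_tajimi_psd(w, G0, wg, zg):
    num = wg**4 + (2.0 * zg * wg * w) ** 2
    den = (wg**2 - w**2) ** 2 + (2.0 * zg * wg * w) ** 2
    return G0 * num / den

def oscillator_rms(wn, zeta, params, w=np.linspace(0.1, 150.0, 3000)):
    """RMS relative displacement of an SDOF oscillator driven by the K-T process."""
    H2 = 1.0 / ((wn**2 - w**2) ** 2 + (2.0 * zeta * wn * w) ** 2)
    return np.sqrt(np.trapz(H2 * kanai_tajimi_psd(w, *params), w))

def fit_to_target(periods, target_Sa, peak_factor=3.0, zeta=0.05):
    wn = 2.0 * np.pi / np.asarray(periods)
    def residuals(p):
        Sa = np.array([peak_factor * wi**2 * oscillator_rms(wi, zeta, p) for wi in wn])
        return Sa - target_Sa                       # over-determined misfit to be minimized
    return least_squares(residuals, x0=[0.01, 15.0, 0.6], bounds=(1e-6, np.inf)).x
```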

Derivation of response spectrum compatible non-stationary stochastic processes relying on Monte Carlo-based peak factor estimation

  • Giaralis, Agathoklis;Spanos, Pol D.
    • Earthquakes and Structures / v.3 no.3_4 / pp.581-609 / 2012
  • In this paper a novel non-iterative approach is proposed to address the problem of deriving non-stationary stochastic processes which are compatible in the mean sense with a given (target) response (uniform hazard) spectrum (UHS) as commonly desired in the aseismic structural design regulated by contemporary codes of practice. This is accomplished by solving a standard over-determined minimization problem in conjunction with appropriate median peak factors. These factors are determined by a plethora of reported new Monte Carlo studies which on their own possess considerable stochastic dynamics merit. In the proposed approach, generation and treatment of samples of the processes individually on a deterministic basis is not required as is the case with the various approaches found in the literature addressing the herein considered task. The applicability and usefulness of the approach is demonstrated by furnishing extensive numerical data associated with the elastic design UHS of the current European (EC8) and the Chinese (GB 50011) aseismic code provisions. Purposely, simple and thus attractive from a practical viewpoint, uniformly modulated processes assuming either the Kanai-Tajimi (K-T) or the Clough-Penzien (C-P) spectral form are employed. The Monte Carlo studies yield damping and duration dependent median peak factor spectra, given in a polynomial form, associated with the first passage problem for UHS compatible K-T and C-P uniformly modulated stochastic processes. Hopefully, the herein derived stochastic processes and median peak factor spectra can be used to facilitate the aseismic design of structures regulated by contemporary code provisions in a Monte Carlo simulation-based or stochastic dynamics-based context of analysis.

Isogeometric Analysis of FGM Plates in Combination with Higher-order Shear Deformation Theory (등기하해석에 의한 기능경사복합재 판의 역학적 거동 예측)

  • Jeon, Juntai
    • Journal of the Society of Disaster Information / v.16 no.4 / pp.832-841 / 2020
  • Purpose: This study aims to analyze the mechanical response of functionally graded material (FGM) plates in bending. An accurate and effective numerical approach based on isogeometric analysis (IGA) combined with a higher-order shear deformation plate theory is developed to predict the nonlinear flexural behavior. Method: A higher-order shear deformation theory (HSDT), which accounts for geometric nonlinearity in the von Karman sense, is presented and used to derive the equilibrium and governing equations for an FGM plate in bending. The nonlinear equations are solved by the modified Newton-Raphson iterative technique. Result: The volume fraction, plate length-to-thickness ratio, and boundary condition have significant effects on the nonlinear flexural behavior of FGM plates. Conclusion: The proposed IGA method can be used as an accurate and effective numerical tool for analyzing the mechanical response of FGM plates in flexure.
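
A minimal sketch of the modified Newton-Raphson iteration mentioned in the Method section, assuming hypothetical `residual` and `K_tangent` callables standing in for the assembled IGA/HSDT equations.

```python
# Sketch of a modified Newton-Raphson loop: the tangent stiffness is assembled
# once and reused for every correction. residual(u) and K_tangent(u) are
# hypothetical stand-ins for the assembled IGA/HSDT equations.
import numpy as np

def modified_newton_raphson(residual, K_tangent, u0, tol=1e-8, max_iter=100):
    u = u0.copy()
    K = K_tangent(u0)                    # frozen stiffness (the "modified" part)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        u = u - np.linalg.solve(K, r)    # correction with the frozen stiffness
    return u
```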

Review on the Three-Dimensional Inversion of Magnetotelluric Data (MT 자료의 3차원 역산 개관)

  • Kim Hee Joon;Nam Myung Jin;Han Nuree;Choi Jihyang;Lee Tae Jong;Song Yoonho;Suh Jung Hee
    • Geophysics and Geophysical Exploration / v.7 no.3 / pp.207-212 / 2004
  • This article reviews recent developments in three-dimensional (3-D) magnetotelluric (MT) imaging. The inversion of MT data is fundamentally ill-posed, and therefore the resultant solution is non-unique. A regularizing scheme must be involved to reduce the non-uniqueness while retaining certain a priori information in the solution. The standard approach to nonlinear inversion in geophysics has been the Gauss-Newton method, which solves a sequence of linearized inverse problems. When run to convergence, the algorithm minimizes an objective function over the space of models and in this sense produces an optimal solution of the inverse problem. The general usefulness of iterative, linearized inversion algorithms, however, is greatly limited in 3-D MT applications by the requirement of computing the Jacobian (partial derivative, or sensitivity) matrix of the forward problem. This difficulty may be relaxed using conjugate gradient (CG) methods. A linear CG technique is used to solve each step of the Gauss-Newton iteration incompletely, while the method of nonlinear CG is applied directly to the minimization of the objective function. These CG techniques replace the computation of the Jacobian matrix and the solution of a large linear system with computations equivalent to only three forward problems per inversion iteration. Consequently, the algorithms are efficient in computational speed and memory requirement, making 3-D inversion feasible.
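
A generic sketch of the nonlinear conjugate-gradient minimization the review refers to: only gradients of the objective are needed, not the full Jacobian matrix. The gradient callable (in MT inversion typically evaluated via adjoint-style forward modelling) is an assumption here, and a fixed step replaces a proper line search.

```python
# Generic nonlinear conjugate-gradient (Polak-Ribiere+) minimizer: only gradients
# of the objective are required, not the Jacobian/sensitivity matrix. grad(m) is
# an assumed callable; a fixed step stands in for a proper line search.
import numpy as np

def nonlinear_cg(grad, m0, step=1e-2, n_iter=50):
    m = m0.copy()
    g = grad(m)
    d = -g                                            # start along steepest descent
    for _ in range(n_iter):
        m = m + step * d
        g_new = grad(m)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere+ coefficient
        d = -g_new + beta * d
        g = g_new
    return m
```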

Space-Time Concatenated Convolutional and Differential Codes with Interference Suppression for DS-CDMA Systems (간섭 억제된 DS-CDMA 시스템에서의 시공간 직렬 연쇄 컨볼루션 차등 부호 기법)

  • Yang, Ha-Yeong;Sin, Min-Ho;Song, Hong-Yeop;Hong, Dae-Sik;Gang, Chang-Eon
    • Journal of the Institute of Electronics Engineers of Korea TC / v.39 no.1 / pp.1-10 / 2002
  • A space-time concatenated convolutional and differential coding scheme is employed in a multiuser direct-sequence code-division multiple-access (DS-CDMA) system. The system consists of single-user detectors (SUDs), which are used to suppress multiple-access interference (MAI) without requiring other users' spreading codes, timing, or phase information. The space-time differential code, treated as a convolutional code with code rate 1 and memory 1, does not sacrifice coding efficiency and has the smallest number of states. In addition, it brings a diversity gain through space-time processing with a simple decoding process. The iterative process exchanges information between the differential decoder and the convolutional decoder. Numerical results show that this space-time concatenated coding scheme provides better performance and more flexibility than conventional convolutional codes in DS-CDMA systems, even at similar complexity. Further study shows that the performance of this coding scheme applied to DS-CDMA systems with SUDs improves by increasing the processing gain or the number of taps of the interference suppression filter, and degrades with higher near-far interfering power or additional near-far interfering users.
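
A schematic of the iterative exchange of extrinsic information between the two soft-in/soft-out decoders described above; `siso_differential`, `siso_convolutional`, and the permutation `perm` are hypothetical placeholders, not the authors' implementation.

```python
# Schematic control loop for the iterative exchange of extrinsic information
# between the inner (differential) and outer (convolutional) SISO decoders.
# siso_differential and siso_convolutional are hypothetical callables returning
# extrinsic log-likelihood ratios; perm is the interleaver permutation.
import numpy as np

def iterative_decode(channel_llr, siso_differential, siso_convolutional, perm, n_iter=8):
    apriori = np.zeros_like(channel_llr)
    inv_perm = np.argsort(perm)                      # de-interleaver
    for _ in range(n_iter):
        # Inner decoder: channel observations plus a priori information from the outer code
        ext_inner = siso_differential(channel_llr, apriori)
        # De-interleave and decode with the outer convolutional code
        ext_outer = siso_convolutional(ext_inner[inv_perm])
        # Re-interleave the outer extrinsic output and feed it back as a priori information
        apriori = ext_outer[perm]
    return ext_inner + apriori                       # combined soft output (sketch only)
```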