• Title/Summary/Keyword: iterative processes


Edge-Preserving Iterative Reconstruction in Transmission Tomography Using Space-Variant Smoothing (투과 단층촬영에서 공간가변 평활화를 사용한 경계보존 반복연산 재구성)

  • Jung, Ji Eun;Ren, Xue;Lee, Soo-Jin
    • Journal of Biomedical Engineering Research
    • /
    • v.38 no.5
    • /
    • pp.219-226
    • /
    • 2017
  • Penalized-likelihood (PL) reconstruction methods for transmission tomography are known to provide improved image quality at reduced dose levels by efficiently smoothing out noise while preserving edges. However, most of the edge-preserving penalty functions used in conventional PL methods contain at least one free parameter that controls the shape of a non-quadratic penalty function to adjust the sensitivity of edge preservation. In this work, to avoid the difficulty of finding a proper value for the free parameter involved in a non-quadratic penalty function, we propose a new adaptive method of space-variant smoothing with a simple quadratic penalty function. In this method, the smoothing parameter is adaptively selected for each pixel location at each iteration, using the image roughness measured by a pixel-wise standard deviation image calculated from the previous iteration. The experimental results demonstrate that our new method not only preserves edges but also suppresses noise well in monotonic regions, without requiring additional processes to select the free parameters that would otherwise be included in a non-quadratic penalty function.
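The per-pixel parameter selection described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' exact formulation: the window size, the linear roughness-to-weight mapping, and the names (`local_std`, `space_variant_weights`, `beta_max`) are all assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_std(img, radius=1):
    """Pixel-wise standard deviation over a (2r+1) x (2r+1) window."""
    pad = np.pad(img, radius, mode="edge")
    win = sliding_window_view(pad, (2 * radius + 1, 2 * radius + 1))
    return win.std(axis=(-2, -1))

def space_variant_weights(prev_img, beta_max=1.0, eps=1e-8):
    """Map image roughness (from the previous iterate) to per-pixel
    smoothing parameters: smooth heavily where the image is flat
    (low local std), lightly near edges (high local std)."""
    s = local_std(prev_img)
    return beta_max * (1.0 - s / (s.max() + eps))
```

At each iteration of the reconstruction, the quadratic penalty would then be weighted pixel-by-pixel with these values instead of one global smoothing parameter.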

A Study on Interpretation of Gravity Data on Two-Dimensional Geologic Structures by Iterative Nonlinear Inverse (반복적 비선형역산에 의한 2차원 지질구조의 중력자료 해석 연구)

  • Ko, Chin-Surk;Yang, Seung-Jin
    • Economic and Environmental Geology
    • /
    • v.27 no.5
    • /
    • pp.479-489
    • /
    • 1994
  • In this paper, the iterative least-squares inversion method is used to determine the shapes and density contrasts of 2-D structures from gravity data. The 2-D structures are represented by cross-sections of N-sided polygons with density contrasts that are constant or vary with depth. Gravity data are calculated by theoretical formulas for the above structure models; these data are treated as observed data and used for the inversions. The inversions proceed as follows: 1) the polygon's vertices and density contrast are initially assumed; 2) gravity is calculated for the assumed model and the error between the true (observed) and calculated gravity is determined; 3) new vertices and a new density contrast are determined from the error by the damped least-squares inversion method; and 4) the final model is accepted when the error is very small. Results of this study show that the shape and density contrast of each model are accurately determined when the density contrast is constant or the vertical density gradient is known. In cases where the density gradient is unknown, the inversion gives incorrect results, but the shape and density gradient of the model can still be determined when the surface density contrast is known.
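The iterative update in steps 1)–4) can be sketched in generic form. This is a hedged sketch of damped (Marquardt-style) least squares with a finite-difference Jacobian; the function names, the fixed damping factor `lam`, and the iteration count are illustrative assumptions, not the paper's gravity-specific formulas.

```python
import numpy as np

def damped_least_squares(forward, m0, d_obs, lam=1e-2, n_iter=50, h=1e-6):
    """Iteratively update model m so that forward(m) matches d_obs."""
    m = np.asarray(m0, dtype=float).copy()
    for _ in range(n_iter):
        r = d_obs - forward(m)                 # data residual
        J = np.empty((r.size, m.size))
        for j in range(m.size):                # forward-difference Jacobian
            mp = m.copy()
            mp[j] += h
            J[:, j] = (forward(mp) - forward(m)) / h
        A = J.T @ J + lam * np.eye(m.size)     # damped normal equations
        m += np.linalg.solve(A, J.T @ r)       # model update
    return m
```

In the paper's setting, `m` would hold the polygon vertices and density contrast, and `forward` the theoretical gravity formula.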


An Adaptive Decomposition Technique for Multidisciplinary Design Optimization (다분야통합최적설계를 위한 적응분해기법)

  • Park, Hyeong Uk;Choe, Dong Hun;An, Byeong Ho
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.31 no.5
    • /
    • pp.18-24
    • /
    • 2003
  • The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes that are coupled through the transfer of output data. Some of these design processes may be grouped into iterative subcycles. Previous research predefined the number of design processes in each group, but these group sizes should be determined optimally to balance the computing time of each group. This paper proposes an adaptive decomposition method that determines the group sizes and the order of processes simultaneously, raising design efficiency by expanding the chromosome of the genetic algorithm. Finally, two sample cases are presented to show the effect of optimizing the sequence of processes with the adaptive decomposition method.
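One way to read the "expanded chromosome" idea is a genome that carries both a process ordering and the group boundaries, decoded together. The encoding below (permutation genes followed by cut-point genes) is an illustrative assumption about how such a chromosome could be decoded, not the paper's exact scheme.

```python
def decode(chromosome, n_proc):
    """Decode an expanded chromosome: the first n_proc genes are a
    permutation of process ids; the remaining genes are cut points that
    split the ordered processes into iterative subcycles (groups)."""
    order = chromosome[:n_proc]
    cuts = sorted(chromosome[n_proc:])
    groups, start = [], 0
    for c in cuts:
        groups.append(order[start:c])
        start = c
    groups.append(order[start:])
    return [g for g in groups if g]   # drop empty groups
```

A GA fitness function would then evaluate each decoded grouping (e.g. by estimated computing time per group) so that ordering and group sizes are optimized simultaneously.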

Comparison Study on Projection and Backprojection Methods for CT Simulation (투사 및 역투사 방법에 따른 컴퓨터단층촬영 영상 비교)

  • Oh, Ohsung;Lee, Seung Wook
    • Journal of radiological science and technology
    • /
    • v.37 no.4
    • /
    • pp.323-330
    • /
    • 2014
  • Image reconstruction is one of the most important processes in CT (computed tomography). For fast scanning and low dose to the object, iterative reconstruction is becoming more and more important. In the implementation of iterative reconstruction, the projection and backprojection processes are indispensable. However, many approaches to projection and backprojection may produce severe image artifacts due to their discrete characteristics, which degrades the reconstructed image quality; new approaches to projection and backprojection are therefore in high demand. In this paper, the distance-driven approach was evaluated and compared with other conventional methods. A numerical simulator was developed to generate the phantoms, and the projection and backprojection images obtained with these approaches were compared. As a result, the distance-driven approach turned out to produce fewer artifacts during projection and backprojection in both parallel- and fan-beam geometries.
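To make the projector/backprojector pairing concrete, here is a minimal pixel-driven parallel-beam projector (one of the conventional approaches the paper compares against, not the distance-driven method itself) together with its exact adjoint backprojector. The linear bin-splitting weights and all names are illustrative assumptions.

```python
import numpy as np

def project(img, theta):
    """Pixel-driven parallel-beam projection at angle theta: each pixel's
    value is split linearly between the two nearest detector bins."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    # signed detector coordinate of each pixel centre
    t = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + c
    lo = np.floor(t).astype(int)
    w = t - lo
    sino = np.zeros(n)
    for b, wt, v in zip(lo.ravel(), w.ravel(), img.ravel()):
        if 0 <= b < n:
            sino[b] += (1 - wt) * v
        if 0 <= b + 1 < n:
            sino[b + 1] += wt * v
    return sino

def backproject(sino, theta, n):
    """Adjoint of project(): smear detector values back with the same weights."""
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    t = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + c
    lo = np.floor(t).astype(int)
    w = t - lo
    img = np.zeros((n, n))
    valid0 = (lo >= 0) & (lo < n)
    valid1 = (lo + 1 >= 0) & (lo + 1 < n)
    img[valid0] += (1 - w[valid0]) * sino[lo[valid0]]
    img[valid1] += w[valid1] * sino[lo[valid1] + 1]
    return img
```

Keeping the backprojector an exact adjoint of the projector is what makes the pair usable inside iterative reconstruction; the distance-driven method replaces the bin-splitting weights with overlap lengths between pixel and detector-cell boundaries.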

Adaptive Hard Decision Aided Fast Decoding Method using Parity Request Estimation in Distributed Video Coding (패리티 요구량 예측을 이용한 적응적 경판정 출력 기반 고속 분산 비디오 복호화 기술)

  • Shim, Hiuk-Jae;Oh, Ryang-Geun;Jeon, Byeung-Woo
    • Journal of Broadcast Engineering
    • /
    • v.16 no.4
    • /
    • pp.635-646
    • /
    • 2011
  • In distributed video coding, a low-complexity encoder can be realized by shifting complex encoder-side processes to the decoder. However, not only the motion estimation/compensation processes but also the complex LDPC decoding process are imposed on the Wyner-Ziv decoder, so decoder-side complexity has been an important issue to improve. LDPC decoding consists of numerous iterative decoding passes, so complexity grows with the number of iterations. This iterative LDPC decoding process accounts for more than 60% of the whole WZ decoding complexity, making it the main target for complexity reduction. Previously, the HDA (Hard Decision Aided) method was introduced for fast LDPC decoding. For the currently received parity bits, the HDA method certainly reduces the complexity of the decoding process; however, LDPC decoding is still performed even when the amount of requested parity is insufficient for successful decoding. Complexity can therefore be further reduced by avoiding the decoding process when the parity bits are insufficient. In this paper, a parity request estimation method is proposed that uses bit-plane-wise correlation and temporal correlation. Joint use of the HDA method and the proposed method achieves about 72% complexity reduction in the LDPC decoding process, while rate-distortion performance is degraded by only 0.0275 dB in BD-PSNR.
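The two shortcuts the abstract combines, a hard-decision syndrome check and skipping decoding when too little parity has arrived, can be sketched as below. The parity-check test is standard LDPC practice; the gating function, its names, and the `"run_full_ldpc"` placeholder are illustrative assumptions, not the paper's estimator (which uses bit-plane-wise and temporal correlation).

```python
import numpy as np

def hard_decision_ok(llr, H):
    """HDA-style check: hard-decide the LLRs and test every parity check.
    Negative LLR is decided as bit 1."""
    bits = (llr < 0).astype(int)
    return not np.any((H @ bits) % 2)

def decode_if_enough_parity(llr, H, est_required, received):
    """Gate the costly iterative decoding on a parity-request estimate."""
    if received < est_required:
        return None                      # too little parity: request more, skip decoding
    if hard_decision_ok(llr, H):
        return (llr < 0).astype(int)     # early exit: no iterations needed
    return "run_full_ldpc"               # placeholder for full belief propagation
```

The saving comes from the first branch: decoding attempts that could not succeed anyway are never started.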

A Study on the Dimension of Quality Metrics for Information Systems Development and Success : An Application of Information Processing Theory

  • An, Joon M.
    • The Journal of Information Technology and Database
    • /
    • v.3 no.2
    • /
    • pp.97-118
    • /
    • 1996
  • Information systems quality engineering is one of the most problematic areas in practice and research, and it needs cooperative efforts between practice and theory [Glass, 1996]. A model for evaluating the quality of the system development process and ensuing success is proposed based on the information processing theory of project unit design. A nomological net among a set of quality variables is identified from prior research in organization science, software engineering, and management information systems. More specifically, system development success is modelled as a function of project complexity, system development modelling environment, user participation, project unit structure, resource availability, and the degree to which the development methodology is iterative. Based on the model developed from the information processing theory of project unit design in organization science, appropriate quality metrics are matched to each variable in the proposed model. In this way, a framework of relevant quality metrics for controlling systems development processes and ensuing success is proposed. The causal relationships among the constructs in the proposed model are offered as future empirical research for academicians and as managerial tools for quality managers. The framework and propositions help quality managers select more parsimonious quality metrics for controlling information systems development processes and project success in an integrated way. The model can also be utilized for evaluating software quality assurance programmes developed and marketed by many vendors.


Mathematical Modeling of Combustion Characteristics in HVOF Thermal Spray Processes(I): Chemical Composition of Combustion Products and Adiabatic Flame Temperature (HVOF 열용사 프로세스에서의 연소특성에 관한 수학적 모델링(I): 연소생성물의 화학조성 및 단열화염온도)

  • Yang, Young-Myung;Kim, Ho-Yeon
    • Journal of the Korean Society of Combustion
    • /
    • v.3 no.1
    • /
    • pp.21-29
    • /
    • 1998
  • Mathematical modeling of the combustion characteristics in HVOF thermal spray processes was carried out on the basis of equilibrium chemistry. The main objective of this work was the development of a computation code that determines the chemical composition of the combustion products, the adiabatic flame temperature, and the thermodynamic and transport properties. The free energy minimization method was employed with the descent Newton-Raphson technique for the numerical solution of the systems of nonlinear thermochemical equations. The adiabatic flame temperature was calculated using Newton's iterative method incorporating the computation module for chemical composition. The performance of this code was verified by comparing the computational results with data obtained with the ChemKin code and from the literature. Comparisons between the calculated and measured flame temperatures showed a deviation of less than 2%. It was observed that the adiabatic flame temperature rises with increasing combustion pressure; the influence is significant in the low-pressure region but weakens as the pressure increases. The relationships among adiabatic flame temperature, dissociation ratio, and combustion pressure were also analyzed.
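The outer Newton iteration for the adiabatic flame temperature amounts to solving an enthalpy balance, h_products(T) = h_reactants, for T. The sketch below is a minimal version under stated assumptions: a user-supplied enthalpy function stands in for the equilibrium-composition module, and the derivative (the mixture heat capacity) is taken by central difference; all names are illustrative.

```python
def newton_flame_temperature(h_products, h_reactants,
                             T0=2000.0, tol=1e-6, max_iter=50):
    """Solve h_products(T) = h_reactants for the adiabatic flame
    temperature T with Newton's method.  h_products(T) would call the
    equilibrium-composition module at temperature T."""
    T = T0
    for _ in range(max_iter):
        dT = 1e-3
        f = h_products(T) - h_reactants            # enthalpy imbalance
        cp = (h_products(T + dT) - h_products(T - dT)) / (2 * dT)  # dh/dT
        step = f / cp
        T -= step                                   # Newton update
        if abs(step) < tol:
            break
    return T
```

In the actual code, each evaluation of `h_products` would itself involve the free-energy-minimization solve for the product composition at that temperature.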


The implementation of the integrated design process in the hole-plan system

  • Ruy, Won-Sun;Ko, Dae-Eun;Yang, Young-Soon
    • International Journal of Naval Architecture and Ocean Engineering
    • /
    • v.4 no.4
    • /
    • pp.353-361
    • /
    • 2012
  • All current shipyards use customized CAD/CAM programs to improve design quality and increase design efficiency. Even though the data structures for ship design and construction are almost complete, the implementation of the ship design processes is still in progress, which has been the main cause of bottlenecks and delays in the middle of the design process. In this study, we considered the hole-plan system a good example of a process that remains to be improved. Engineers in the outfitting division, who do not have direct authority to edit the structural panels, must request that the hull design division install the holes for the outfitting equipment. For acceptance, they must calculate the hole positions, determine the hole types, and find the intersected contours of the panels. After review by the hull division, the requested holes are manually installed on the hull structure. As described above, many steps are needed, such as communication and discussion between the divisions, drawings for the hole plan, and consideration of structural or production compatibility. This iterative process takes a great deal of working time, places pressure on the people involved, and causes cross-division conflict. This paper addresses the hole-plan system in detail in order to automate this series of steps and minimize the human effort and time consumed.

Vertex Selection Scheme for Shape Approximation Based on Dynamic Programming (동적 프로그래밍에 기반한 윤곽선 근사화를 위한 정점 선택 방법)

  • 이시웅;최재각;남재열
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.3
    • /
    • pp.121-127
    • /
    • 2004
  • This paper presents a new vertex selection scheme for shape approximation. In the proposed method, the final vertex points are determined by a two-step procedure. In the first step, initial vertices are simply selected on the contour, constituting a subset of the original contour, using conventional methods such as the iterated refinement method (IRM) or the progressive vertex selection (PVS) method. In the second step, a vertex adjustment process is incorporated to generate final vertices that are no longer confined to the contour and are optimal in view of the given distortion measure. For the optimality of the final vertices, a dynamic programming (DP)-based solution for the adjustment of vertices is proposed. This work makes two main contributions. First, we show that DP can be successfully applied to vertex adjustment. Second, by using DP, global optimality in the vertex selection can be achieved without iterative processes. Experimental results are presented to show the superiority of our method over traditional methods.
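The second step, choosing one adjusted position per vertex so that the total distortion is globally minimal, has the classic DP shape-of-solution: stage i holds the candidate positions for vertex i, and only consecutive vertices interact through the segment cost. The sketch below assumes a generic pairwise `seg_cost` and candidate windows around each initial vertex; it is an illustration of the DP structure, not the paper's distortion measure.

```python
def dp_adjust(candidates, seg_cost):
    """candidates[i]: list of candidate positions for vertex i (e.g. a small
    window around the initial vertex).  Pick one candidate per vertex so the
    summed cost of consecutive segments is globally minimal."""
    n = len(candidates)
    cost = [0.0] * len(candidates[0])   # best cost ending at each candidate
    back = []                           # backpointers, one list per stage
    for i in range(1, n):
        new_cost, ptr = [], []
        for q in candidates[i]:
            best = min(range(len(candidates[i - 1])),
                       key=lambda j: cost[j] + seg_cost(candidates[i - 1][j], q))
            ptr.append(best)
            new_cost.append(cost[best] + seg_cost(candidates[i - 1][best], q))
        cost, back = new_cost, back + [ptr]
    # trace the optimal path backwards
    k = min(range(len(cost)), key=lambda j: cost[j])
    path = [k]
    for ptr in reversed(back):
        k = ptr[k]
        path.append(k)
    path.reverse()
    return [candidates[i][path[i]] for i in range(n)]
```

Because every combination of consecutive candidates is scored exactly once, the optimum over all candidate assignments is found in a single pass, with no iterative refinement.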

The Optimal Design of Preform in 3-D Forging by using Electric Field Theory (전기장 이론을 이용한 3차원 단조공정에서의 예비형상 설계)

  • 신현기;이석렬;박철현;양동열
    • Transactions of Materials Processing
    • /
    • v.11 no.2
    • /
    • pp.165-170
    • /
    • 2002
  • The preform design of forging processes plays a key role in improving product qualities such as defect prevention, dimensional accuracy, and mechanical strength. In industry, preforms are generally designed by an iterative trial-and-error approach, but this incurs significant tooling cost and time. It is thus necessary to minimize lead time and human intervention through an effective preform design method. In this paper, equi-potential lines constructed in an electric field are introduced to find the preform shape, and an optimization process is then used to choose the equi-potential lines that keep die wear to a minimum, because, in the forging process, die wear is a function of various important factors such as forming stress and strain and the microstructure and mechanical properties of the product.
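The equi-potential-line construction can be sketched numerically: fix potential 0 on one boundary shape and 1 on the other, solve Laplace's equation between them, and read candidate preform shapes off as level sets of the field. The Jacobi solver, grid representation, and names below are illustrative assumptions, not the paper's 3-D implementation.

```python
import numpy as np

def equipotential_field(mask_initial, mask_final, n_iter=2000):
    """Solve Laplace's equation on a grid with potential 0 on the initial
    (billet) boundary and 1 on the final (forged) boundary by Jacobi
    iteration.  Level sets of the returned field are candidate preform
    shapes between the two geometries."""
    phi = np.full(mask_initial.shape, 0.5)
    phi[mask_initial] = 0.0
    phi[mask_final] = 1.0
    for _ in range(n_iter):
        # average of the four neighbours (Jacobi sweep)
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        # keep the boundary potentials fixed, relax the interior
        phi = np.where(mask_initial | mask_final, phi, avg)
    return phi
```

The optimization step described in the abstract would then evaluate a die-wear criterion for preforms extracted at several potential values and keep the best one.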