• Title/Summary/Keyword: computational reconstruction

Search Results: 269

Analysis of Skin Movements with Respect to Bone Motions using MR Images

  • Ryu, Jae-Hun;Miyata, Natsuki;Kouchi, Makiko;Mochimaru, Masaaki;Lee, Kwan H.
    • International Journal of CAD/CAM
    • /
    • v.3 no.1_2
    • /
    • pp.61-66
    • /
    • 2003
  • This paper describes a novel experiment that measures skin movement with respect to the flexional motion of a hand. The study was based on MR images in conjunction with CAD techniques. MR images of the hand were captured in 3 different postures with surface markers. The surface markers attached to the skin were employed to trace skin movement during the flexional motion of the hand. After reconstructing 3D isosurfaces from the segmented MR images, global registration was applied to the 3D models based on the particular bone shape of the different postures. Skin movement was interpreted by measuring the centers of the surface markers in the registered models.
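As a rough illustration of the measurement step: once the posture models are registered on the bone shape, the skin movement per marker reduces to the Euclidean distance between corresponding marker centers. A minimal sketch, assuming a hypothetical data layout (the function name, marker ids and coordinates are illustrative, not from the paper):

```python
import math

def marker_displacement(markers_ref, markers_posture):
    """Euclidean displacement of each skin-marker center between two
    bone-registered posture models. Both arguments are dicts mapping a
    marker id to an (x, y, z) center in the common registered frame."""
    return {
        mid: math.dist(markers_ref[mid], markers_posture[mid])
        for mid in markers_ref
    }

# Hypothetical marker centers (mm) after global registration
ref = {"m1": (0.0, 0.0, 0.0), "m2": (10.0, 0.0, 0.0)}
flexed = {"m1": (1.0, 2.0, 2.0), "m2": (10.0, 3.0, 4.0)}
print(marker_displacement(ref, flexed))  # → {'m1': 3.0, 'm2': 5.0}
```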

Image Coding by Block Based Fractal Approximation (블록단위의 프래탈 근사화를 이용한 영상코딩)

  • 정현민;김영규;윤택현;강현철;이병래;박규태
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.2
    • /
    • pp.45-55
    • /
    • 1994
  • In this paper, a block-based image approximation technique using the Self-Affine System (SAS) from fractal theory is suggested. Each block of an image is divided into 4 tiles, and 4 affine mapping coefficients are found for each tile. To find the affine mapping coefficients that minimize the error between the affine-transformed image block and the reconstructed image block, the matrix equation is solved by setting each partial differential coefficient to zero. To ensure the convergence of the coding block, 4 uniformly partitioned affine transformations are applied. A variable block size technique is employed in order to exploit the natural image reconstruction property of fractal image coding. Large blocks are used for encoding smooth backgrounds to yield high compression efficiency, while texture and edge blocks are divided into smaller blocks to preserve block detail. Affine mapping coefficients are found for each block of size 16×16, 8×8 or 4×4. Each block is classified as shade, texture or edge. The average gray level is transmitted for shade blocks, and coefficients are found for texture and edge blocks. Coefficients are quantized, and only 16 bytes per block are transmitted. Using the proposed algorithm, the computational load increases linearly in proportion to image size. A PSNR of 31.58 dB is obtained using the 512×512, 8-bits-per-pixel Lena image.
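Setting the partial derivatives of the squared error to zero yields the standard closed-form contrast/offset fit used throughout fractal coding. A minimal sketch under that reading (the function name and the flattened pixel-list layout are assumptions, not the paper's notation):

```python
def affine_coeffs(domain, rng):
    """Least-squares fit of contrast s and offset o so that s*d + o
    approximates the range-block pixels r: the partial derivatives of
    sum((s*d + o - r)^2) with respect to s and o are set to zero and
    the resulting 2x2 normal equations are solved in closed form.
    domain and rng are flattened pixel lists of equal length."""
    n = len(domain)
    sd, sr = sum(domain), sum(rng)
    sdd = sum(d * d for d in domain)
    sdr = sum(d * r for d, r in zip(domain, rng))
    denom = n * sdd - sd * sd
    s = (n * sdr - sd * sr) / denom if denom else 0.0
    o = (sr - s * sd) / n
    return s, o
```

For pixels that are an exact affine image of the domain block, the fit recovers the transform exactly; quantizing `s` and `o` is what bounds the 16 bytes per block.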


FLOW PHYSICS ANALYSES USING HIGHER-ORDER DISCONTINUOUS GALERKIN-MLP METHODS ON UNSTRUCTURED GRIDS (비정렬 격자계에서 고차 정확도 불연속 갤러킨-다차원 공간 제한 기법을 이용한 유동 물리 해석)

  • Park, J.S.;Kim, C.
    • Korean Society of Computational Fluids Engineering: Conference Proceedings
    • /
    • 2011.05a
    • /
    • pp.311-317
    • /
    • 2011
  • The present paper deals with the continuing work of extending the multi-dimensional limiting process (MLP) for compressible flows, which has been quite successful in finite volume methods, into discontinuous Galerkin (DG) methods. From the previous series of works, it was observed that the MLP shows several superior characteristics, such as efficient control of multi-dimensional oscillations and accurate capturing of both discontinuous and continuous flow features. Mathematically, the fundamental mechanism of oscillation control in multiple dimensions has been established by satisfaction of the maximum principle. The MLP limiting strategy is extended into the DG framework, which takes advantage of higher-order reconstruction within a compact stencil, to capture detailed flow structures very accurately. At present, it is observed that the proposed approach yields outstanding performance in resolving non-compressive as well as compressive flow features. In the presentation, further numerical analyses and results will be presented to validate that the newly developed DG-MLP methods provide quite desirable performance in controlling numerical oscillations as well as capturing key flow features.


Image deblurring via adaptive proximal conjugate gradient method

  • Pan, Han;Jing, Zhongliang;Li, Minzhe;Dong, Peng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.11
    • /
    • pp.4604-4622
    • /
    • 2015
  • It is not easy to reconstruct the geometrical characteristics of distorted images captured by imaging devices. One of the most popular optimization methods is the fast iterative shrinkage/thresholding algorithm. In this paper, to deal with its approximation error and the instability of the descent process, an adaptive proximal conjugate gradient (APCG) framework is proposed. It contains three stages. In the first stage, a series of adaptive penalty matrices is generated from iterate to iterate. Second, to trade off the reconstruction accuracy against the computational complexity of the resulting sub-problem, a practical solution is presented, which is characterized by solving the variable ellipsoidal-norm-based sub-problem by exploiting the structure of the problem. Third, a correction step is introduced to improve the estimation accuracy. Numerical experiments with the proposed algorithm, in comparison to favorable state-of-the-art methods, demonstrate the advantages of the proposed method and its potential.

A Dehazing Algorithm using the Prediction of Adaptive Transmission Map for Each Pixel (화소 단위 적응적 전달량 예측을 이용한 효율적인 안개 제거 기술)

  • Lee, Sang-Won;Han, Jong-Ki
    • Journal of Broadcast Engineering
    • /
    • v.22 no.1
    • /
    • pp.118-127
    • /
    • 2017
  • We propose a dehazing algorithm which consists of two main parts: the derivation of the atmospheric light and an adaptive transmission map. In obtaining the atmospheric light value, we utilize quad-tree partitioning, where the depth of the partitioning is decided based on the difference between the averaged pixel values of the parent and child blocks. The proposed transmission map is adaptive for each pixel through the parameter ${\beta}(x)$, which makes the histogram of the pixel values in the map uniform. Simulation results showed that the proposed algorithm outperforms conventional methods with respect to both the visual quality of the dehazed images and the computational complexity.
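The quad-tree search for the atmospheric light can be sketched as follows. This is a simplified reading that always descends into the brightest quadrant down to a fixed block size; the paper's actual stopping rule uses the parent/child mean difference:

```python
def atmospheric_light(img, min_size=2):
    """Quad-tree estimate of the atmospheric light from a 2-D list of
    gray values: split the image into four quadrants, recurse into the
    one with the highest mean intensity, and return the brightest
    pixel of the final small block."""
    h, w = len(img), len(img[0])
    if h <= min_size or w <= min_size:
        return max(max(row) for row in img)
    hh, hw = h // 2, w // 2
    quads = [
        [row[:hw] for row in img[:hh]], [row[hw:] for row in img[:hh]],
        [row[:hw] for row in img[hh:]], [row[hw:] for row in img[hh:]],
    ]
    best = max(quads, key=lambda q: sum(map(sum, q)) / (len(q) * len(q[0])))
    return atmospheric_light(best, min_size)
```

On a hazy image the brightest, lowest-contrast quadrant usually contains sky, which is why the descent converges toward a plausible atmospheric light value.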

An Improvement on FFT-Based Digital Implementation Algorithm for MC-CDMA Systems (MC-CDMA 시스템을 위한 FFT 기반의 디지털 구현 알고리즘 개선)

  • 김만제;나성주;신요안
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.7A
    • /
    • pp.1005-1015
    • /
    • 1999
  • This paper is concerned with an improvement on the IFFT (inverse fast Fourier transform) and FFT based baseband digital implementation algorithm for BPSK (binary phase shift keying)-modulated MC-CDMA (multicarrier code division multiple access) systems, which is functionally equivalent to the conventional implementation algorithm while reducing computational complexity and bandwidth requirements. We also derive an equalizer structure for the proposed implementation algorithm. The proposed algorithm is based on a variant of the FFT algorithm that utilizes an N/2-point FFT/IFFT for simultaneous transformation and reconstruction of two N/2-point real signals. Computer simulations under additive white Gaussian noise channels and frequency-selective fading channels, using equal gain combining and maximal ratio combining diversity, demonstrate the performance of the proposed algorithm.
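The underlying FFT trick, transforming two real N/2-point sequences with a single complex transform and splitting the result via conjugate symmetry, can be sketched as follows (a naive DFT stands in for a library FFT; only the packing/unpacking is the point):

```python
import cmath

def dft(x):
    """Naive O(n^2) DFT, standing in for an FFT library call."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def two_real_dfts(a, b):
    """Transform two equal-length real sequences with one complex
    transform: pack z = a + j*b, then recover DFT(a) and DFT(b) from
    the conjugate symmetry of real-signal spectra."""
    n = len(a)
    z = dft([ai + 1j * bi for ai, bi in zip(a, b)])
    A, B = [], []
    for k in range(n):
        zk, zc = z[k], z[(n - k) % n].conjugate()
        A.append((zk + zc) / 2)     # spectrum of a
        B.append((zk - zc) / 2j)    # spectrum of b
    return A, B
```

The same symmetry argument is what lets an N/2-point FFT/IFFT handle two N/2-point real signals at once, halving the transform cost.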


Statistical Analysis of 3D Volume of Red Blood Cells with Different Shapes via Digital Holographic Microscopy

  • Yi, Faliu;Lee, Chung-Ghiu;Moon, In-Kyu
    • Journal of the Optical Society of Korea
    • /
    • v.16 no.2
    • /
    • pp.115-120
    • /
    • 2012
  • In this paper, we present a method to automatically quantify the three-dimensional (3D) volume of red blood cells (RBCs) using off-axis digital holographic microscopy. The RBC digital holograms are recorded via a CCD camera using an off-axis interferometry setup. The RBCs' phase image is reconstructed from the recorded off-axis digital hologram by a computational reconstruction algorithm. The watershed segmentation algorithm is applied to the reconstructed phase image to remove background parts and obtain clear targets in the phase image with many single RBCs. After segmenting the reconstructed RBCs' phase image, all single RBCs are extracted, and the 3D volume of each single RBC is then measured from the surface area and the phase values of the corresponding RBC. In order to demonstrate the feasibility of the proposed method to automatically calculate the 3D volume of RBCs, two typical shapes of RBCs, i.e., stomatocyte and discocyte, are tested via experiments. Statistical distributions of 3D volume for each class of RBC are generated using our algorithm. Statistical hypothesis testing is conducted to investigate the difference between the statistical distributions for the two typical shapes of RBCs. Our experimental results illustrate that our study opens the possibility of automated quantitative analysis of 3D volume in various types of RBCs.
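A common way to turn a segmented single-cell phase image into a volume, consistent with the abstract's description, is to convert each pixel's phase to an optical thickness and integrate over the cell's area. A hedged sketch (the default wavelength and refractive-index difference `dn` are illustrative values, not taken from the paper):

```python
import math

def rbc_volume(phase_img, pixel_area_um2, wavelength_um=0.682, dn=0.042):
    """Integrate a segmented single-RBC phase image (2-D list of phase
    values in radians, background zeroed) into a 3-D volume in um^3.
    Per-pixel thickness: h = phase * wavelength / (2*pi*dn), where dn
    is the refractive-index difference between cell and medium."""
    heights = (p * wavelength_um / (2 * math.pi * dn)
               for row in phase_img for p in row)
    return pixel_area_um2 * sum(heights)
```

Summing thickness times pixel area over the segmented region is exactly the "surface area and phase values" measurement the abstract refers to.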

Motion Estimation for Transcoding Using Intermediate data on the Compressed Video (압축 비디오에서 중간정보를 이용한 트랜스 부호화의 움직임 추정)

  • 구성조;김강욱;김종훈;황찬식
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 2001.05a
    • /
    • pp.288-299
    • /
    • 2001
  • In transcoding, simply reusing the motion vectors extracted from an incoming video bit stream may not result in the best quality, because the incoming motion vectors become non-optimal due to reconstruction errors. To achieve the best video quality possible, a new motion estimation should be performed in the transcoder. An adaptive motion vector refinement is proposed that refines the base motion vector according to the activity of the macroblock, using intermediate data extracted from the incoming video bit stream. Experimental results show that the proposed method can improve the video quality to the level achieved by full-scale motion estimation, with minimal computational complexity.
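The refinement idea, searching a small window around the incoming base motion vector instead of running a full-scale search, can be sketched with a SAD criterion. This is a generic sketch; the paper's activity-dependent window selection is not reproduced here:

```python
def refine_mv(cur, ref, base_mv, search=1):
    """Refine a base motion vector (dy, dx) by minimizing the sum of
    absolute differences (SAD) over a +/-search window around it.
    cur is the current block (2-D list); ref is the larger reference
    area the block is matched against."""
    bh, bw = len(cur), len(cur[0])
    rh, rw = len(ref), len(ref[0])

    def sad(dy, dx):
        return sum(abs(cur[y][x] - ref[y + dy][x + dx])
                   for y in range(bh) for x in range(bw))

    best, best_cost = base_mv, sad(*base_mv)
    for dy in range(base_mv[0] - search, base_mv[0] + search + 1):
        for dx in range(base_mv[1] - search, base_mv[1] + search + 1):
            if 0 <= dy and dy + bh <= rh and 0 <= dx and dx + bw <= rw:
                cost = sad(dy, dx)
                if cost < best_cost:
                    best, best_cost = (dy, dx), cost
    return best
```

Because only (2·search+1)² candidates are evaluated per macroblock, the cost stays far below a full-scale search while recovering most of the lost quality.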


Construction of Branching Surface from 2-D Contours

  • Jha, Kailash
    • International Journal of CAD/CAM
    • /
    • v.8 no.1
    • /
    • pp.21-28
    • /
    • 2009
  • In the present work, an attempt has been made to construct a branching surface from 2-D contours, which are given at different layers and may have branches. If a layer has more than one contour corresponding to a contour at an adjacent layer, this is termed a branching problem and is approximated by adding additional points in between the layers. First, the branching problem is converted to the single-contour case, in which there is no branching at any layer, and the final branching surface is obtained by skinning. Contours are constructed from the given input points at different layers by energy-based B-Spline approximation. 3-D curves are constructed after adding additional points to the contour points for all the layers having the branching problem, using the energy-based B-Spline formulation. The final 3-D surface is obtained by skinning the 3-D curves and 2-D contours. There are three types of branching problems: (a) one-to-one, (b) one-to-many and (c) many-to-many. The one-to-one problem has been addressed by a plethora of researchers based on minimization of twist and curvature and different tiling techniques. The one-to-many problem is the one in which at least one plane must have more than one contour with correspondence to the contour at adjacent layers. The many-to-many problem is stated as m contours at the i-th layer and n contours at the (i+1)-th layer; it can be solved by combining the one-to-many branching methodology. The branching problem is very important in CAD, medical imaging and geographical information systems (GIS).

Supervised-learning-based algorithm for color image compression

  • Liu, Xue-Dong;Wang, Meng-Yue;Sa, Ji-Ming
    • ETRI Journal
    • /
    • v.42 no.2
    • /
    • pp.258-271
    • /
    • 2020
  • A correlation exists between the luminance samples and chrominance samples of a color image, and it is beneficial to exploit such interchannel redundancy for color image compression. We propose an algorithm that predicts the chrominance components Cb and Cr from the luminance component Y. The prediction model is trained by supervised learning with Laplacian-regularized least squares to minimize the total prediction error. Kernel principal component analysis mapping, which reduces computational complexity, is implemented on the same point set at both the encoder and decoder to ensure that predictions are identical at both ends without signaling extra location information. In addition, chrominance subsampling and entropy coding of the model parameters are adopted to further reduce the bit rate. Finally, the luminance information and model parameters are stored for image reconstruction. Experimental results show the performance superiority of the proposed algorithm over its predecessor and JPEG, and even over JPEG XR. The chrominance-difference compensation version of the proposed algorithm performs close to, and in some cases even better than, JPEG2000.
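The luma-to-chroma prediction idea can be illustrated with plain regularized least squares. The paper's actual model uses Laplacian-regularized least squares with a kernel PCA mapping, so this 1-D ridge fit is only a stand-in for the concept of fitting a small transmissible model:

```python
def ridge_fit(xs, ys, lam=0.1):
    """Fit w, b minimizing sum((w*x + b - y)^2) + lam*w^2, i.e. a
    ridge-regularized linear predictor of a chroma sample y from the
    co-located luma sample x. Solved in closed form from the normal
    equations of the regularized objective."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    w = (n * sxy - sx * sy) / (n * (sxx + lam) - sx * sx)
    b = (sy - w * sx) / n
    return w, b
```

Because both encoder and decoder can fit on the same reconstructed luma samples, only the model parameters (here `w` and `b`) need to be transmitted, which is the core of the bit-rate saving.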