• Title/Abstract/Keyword: data reconstruction

1,471 search results (processing time: 0.022 s)

DC offset을 보정한 나선 주사 초고속 자기공명영상의 재구성 알고리즘 (Improved Reconstruction Algorithm for Spiral Scan Fast MR Imaging with DC offset Correction)

  • 안창범;김휴정
    • 대한의용생체공학회:의공학회지
    • /
    • Vol. 19 No. 3
    • /
    • pp.243-250
    • /
    • 1998
  • For the reconstruction of spiral-scan imaging, a type of ultra-fast magnetic resonance imaging, reconstruction methods based on polar and Cartesian coordinate systems in k-space were analyzed. In reconstructing spiral-scan images, interpolation techniques are used to convert the data measured along the spiral trajectory to polar or Cartesian coordinates. Various reconstruction algorithms for spiral-scan imaging were tested, and the quality of the reconstructed images was compared. The improved reconstruction algorithm proposed by the authors, which applies dc-offset correction in the projection domain, was shown through simulations to be the best. In addition, the image artifacts that appeared with existing reconstruction methods were confirmed to disappear completely with the proposed method.

  • PDF
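
A minimal numpy sketch of the gridding step described in the entry above: spiral k-space samples are interpolated onto a Cartesian grid before an inverse FFT, with a crude dc-offset estimate (taken from the outer, low-signal part of k-space) subtracted first. The Archimedean trajectory, the nearest-neighbour gridding, and the density compensation are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def grid_spiral_to_cartesian(kx, ky, samples, grid_size=128):
    """Nearest-neighbour gridding of spiral k-space samples onto a Cartesian grid.
    kx, ky are assumed to lie in [-0.5, 0.5] (cycles per field of view)."""
    grid = np.zeros((grid_size, grid_size), dtype=complex)
    hits = np.zeros((grid_size, grid_size))
    ix = np.clip(np.round((kx + 0.5) * (grid_size - 1)).astype(int), 0, grid_size - 1)
    iy = np.clip(np.round((ky + 0.5) * (grid_size - 1)).astype(int), 0, grid_size - 1)
    np.add.at(grid, (iy, ix), samples.astype(complex))
    np.add.at(hits, (iy, ix), 1.0)
    grid[hits > 0] /= hits[hits > 0]          # very crude density compensation
    return grid

# Hypothetical Archimedean spiral and synthetic samples contaminated by a receiver dc offset.
n = 4000
t = np.linspace(0, 1, n)
kx = 0.5 * t * np.cos(2 * np.pi * 16 * t)
ky = 0.5 * t * np.sin(2 * np.pi * 16 * t)
samples = np.exp(-(kx**2 + ky**2) / 0.02) + 0.05            # 0.05 plays the role of the dc offset
samples = samples - np.median(samples[-n // 10:])           # offset estimate from outer k-space
image = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(
    grid_spiral_to_cartesian(kx, ky, samples)))))
print(image.shape, float(image.max()))
```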

Accelerating Magnetic Resonance Fingerprinting Using Hybrid Deep Learning and Iterative Reconstruction

  • Cao, Peng;Cui, Di;Ming, Yanzhen;Vardhanabhuti, Varut;Lee, Elaine;Hui, Edward
    • Investigative Magnetic Resonance Imaging
    • /
    • Vol. 25 No. 4
    • /
    • pp.293-299
    • /
    • 2021
  • Purpose: To accelerate magnetic resonance fingerprinting (MRF) by developing a flexible deep learning reconstruction method. Materials and Methods: Synthetic data were used to train a deep learning model. The trained model was then applied to MRF for different organs and diseases. Iterative reconstruction was performed outside the deep learning model, allowing a changeable encoding matrix, i.e., with flexibility of choice for image resolution, radiofrequency coil, k-space trajectory, and undersampling mask. In vivo experiments were performed on normal brain and prostate cancer volunteers to demonstrate the model performance and generalizability. Results: In 400-dynamics brain MRF, direct nonuniform Fourier transform caused a slight increase of random fluctuations on the T2 map. These fluctuations were reduced with the proposed method. In prostate MRF, the proposed method suppressed fluctuations on both T1 and T2 maps. Conclusion: The deep learning and iterative MRF reconstruction method described in this study was flexible with different acquisition settings such as radiofrequency coils. It is generalizable for different in vivo applications.
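
A minimal numpy sketch of the general idea of keeping the iterative data-consistency step outside the learned model, so the encoding matrix E (resolution, coils, trajectory, undersampling) can change freely; the toy encoding matrix, the step size, and the soft-threshold "denoiser" standing in for the trained network are assumptions, not the paper's implementation.

```python
import numpy as np

def iterative_recon(y, E, denoise, n_iter=200, step=0.1):
    """Gradient steps on ||E x - y||^2 interleaved with a plug-in denoiser.
    Because E is passed in explicitly, the encoding can change without retraining."""
    x = E.conj().T @ y                                  # adjoint as the initial estimate
    for _ in range(n_iter):
        x = x - step * (E.conj().T @ (E @ x - y))       # data-consistency update
        x = denoise(x)                                  # stand-in for the trained network
    return x

# Hypothetical toy problem: an undersampled random "encoding" of a sparse signal.
rng = np.random.default_rng(0)
n, m = 64, 32
E = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -0.7, 0.5]
y = E @ x_true
soft_threshold = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.01, 0.0)
print(np.round(iterative_recon(y, E, soft_threshold)[[5, 20, 40]], 2))
```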

Stripmap-mode SAR에서의 영상복원 알고리즘의 성능분석 (Performance Analysis of the reconstruction Algorithms in the Stripmap-mode SAR)

  • 박현복;김형주;최정희
    • 한국전자파학회:학술대회논문집
    • /
    • 한국전자파학회 2000년도 종합학술발표회 논문집 Vol.10 No.1
    • /
    • pp.29-33
    • /
    • 2000
  • In a stripmap SAR system, the radar maintains the same broadside radiation pattern over a fixed strip in the slant-range domain throughout the data acquisition period, and the imaging system provides a map of the terrain within that fixed strip in the range domain. Classical image reconstruction for stripmap SAR has relied on the Fresnel approximation, using deramping or chirp deconvolution in the synthetic aperture (slow-time) domain. Another approach to stripmap SAR imaging is based on analyzing the SAR signal in the slow-time domain using SAR wavefront reconstruction theory and a spherical-wave Fourier decomposition of the radar radiation pattern. In this paper, the Fresnel approximation and wavefront reconstruction methods are compared and analyzed using stripmap SAR data generated by computer simulation.

  • PDF
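
A minimal numpy sketch of classical azimuth (slow-time) compression under the Fresnel approximation, i.e. chirp deconvolution by matched filtering, which the abstract above contrasts with wavefront reconstruction; the platform speed, wavelength, and closest range are illustrative values.

```python
import numpy as np

# Fresnel-approximation azimuth phase: phi(u) ~ -pi * Ka * u^2, with chirp rate
# Ka = 2 * v^2 / (wavelength * R0). All parameters below are illustrative.
v, wavelength, R0 = 150.0, 0.03, 10e3        # platform speed (m/s), wavelength (m), closest range (m)
Ka = 2 * v**2 / (wavelength * R0)
u = np.linspace(-1.0, 1.0, 2048)             # slow time (s)

echo = np.exp(-1j * np.pi * Ka * u**2)       # azimuth chirp of a single point target at u = 0
reference = np.exp(-1j * np.pi * Ka * u**2)  # deramping / matched-filter reference

# Chirp deconvolution as frequency-domain matched filtering (circular correlation).
compressed = np.fft.fftshift(np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(reference))))
print("compressed peak at sample", int(np.argmax(np.abs(compressed))), "of", len(u))
```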

핵의학 단층영상 재구성을 위한 통계학적 방법 (Statistical Methods for Tomographic Image Reconstruction in Nuclear Medicine)

  • 이수진
    • Nuclear Medicine and Molecular Imaging
    • /
    • Vol. 42 No. 2
    • /
    • pp.118-126
    • /
    • 2008
  • Statistical image reconstruction methods have played an important role in emission computed tomography (ECT) because they accurately model the statistical noise associated with gamma-ray projection data. Although the use of statistical methods in clinical practice was initially difficult due to high per-iteration costs and the large number of iterations required, the development of fast algorithms and dramatically improved computer speeds have made them increasingly practical; some statistical methods are now commonly available from nuclear medicine equipment suppliers. In this paper, we first describe the mathematical background of statistical reconstruction methods, including the assumptions underlying the Poisson statistical model, maximum-likelihood and maximum a posteriori approaches, and prior models in the context of a Bayesian framework. We then review recent progress in developing fast iterative algorithms.
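
A minimal numpy sketch of the standard MLEM update for Poisson emission data, one of the statistical reconstruction methods surveyed above; the random system matrix and 1D phantom are toy assumptions.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM for emission data y ~ Poisson(A @ x).
    The multiplicative update is x <- x / sens * A^T (y / (A x))."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity (column sums of the system matrix)
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12               # guard against division by zero
        x = x / sens * (A.T @ (y / proj))
    return x

# Hypothetical 1D emission phantom and non-negative system matrix.
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(120, 40))
x_true = np.zeros(40)
x_true[10:20] = 5.0
y = rng.poisson(A @ x_true)
print(np.round(mlem(A, y)[8:22], 1))
```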

New Geometric modeling method: reconstruction of surface using Reverse Engineering techniques

  • Jihan Seo
    • 대한안전경영과학회:학술대회논문집
    • /
    • 대한안전경영과학회 1999년도 추계학술대회
    • /
    • pp.565-574
    • /
    • 1999
  • In the reverse engineering field, the reconstruction of surfaces from scanned or digitized data is developing rapidly, yet geometric models of existing objects remain unavailable in many industries. This paper describes a new reverse engineering methodology, together with useful strategies and important algorithms in the area, and proposes a surface reconstruction technique. The method finds base geometries and the blending surfaces between them. Each base geometry is divided into triangular patches whose normal vectors are compared for face grouping. Each group is categorized as an analytical surface, such as part of a cylinder, sphere, cone, or plane, representing a base geometry surface. Each base geometry surface is then extended to an infinite surface, and the intersections of these infinite surfaces are trimmed to reconstruct a boundary-representation model. This method has several benefits, such as time efficiency and automatic functional modeling in reverse engineering. In particular, it can be applied to 3D scanners and 3D copiers.

  • PDF
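
A minimal numpy sketch of the face-grouping step described above, in which triangular patches are grouped by comparing their normal vectors; the greedy angular-threshold grouping and the synthetic normals are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def group_by_normals(normals, angle_deg=10.0):
    """Greedy grouping: a patch joins the first group whose running mean normal is
    within angle_deg of its own normal, otherwise it starts a new group."""
    cos_thresh = np.cos(np.radians(angle_deg))
    groups, sums = [], []
    for i, raw in enumerate(normals):
        n = raw / np.linalg.norm(raw)
        for members, s in zip(groups, sums):
            if np.dot(n, s / np.linalg.norm(s)) >= cos_thresh:
                members.append(i)
                s += n
                break
        else:
            groups.append([i])
            sums.append(n.copy())
    return groups

# Hypothetical patch normals from two nearly planar regions of a triangulated scan.
rng = np.random.default_rng(2)
plane_a = np.array([0.0, 0.0, 1.0]) + 0.02 * rng.standard_normal((50, 3))
plane_b = np.array([1.0, 0.0, 0.0]) + 0.02 * rng.standard_normal((50, 3))
print([len(g) for g in group_by_normals(np.vstack([plane_a, plane_b]))])
```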

Fast 3D reconstruction method based on UAV photography

  • Wang, Jiang-An;Ma, Huang-Te;Wang, Chun-Mei;He, Yong-Jie
    • ETRI Journal
    • /
    • Vol. 40 No. 6
    • /
    • pp.788-793
    • /
    • 2018
  • 3D reconstruction of urban architecture, land, and roads is an important part of building a "digital city." Unmanned aerial vehicles (UAVs) are gradually replacing other platforms, such as satellites and aircraft, in geographical image collection; the reason for this is not only lower cost and higher efficiency, but also higher data accuracy and a larger amount of obtained information. Recent 3D reconstruction algorithms have a high degree of automation, but their computation time is long and the reconstruction models may have many voids. This paper decomposes the object into multiple regional parallel reconstructions using the clustering principle, to reduce the computation time and improve the model quality. It is proposed to detect the planar area under low resolution, and then reduce the number of point clouds in the complex area.
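
A minimal numpy sketch of the clustering principle mentioned above: sparse reconstruction points are split into regions (here with plain k-means) that could then be densely reconstructed in parallel; the use of k-means and the synthetic survey points are assumptions, not the paper's method.

```python
import numpy as np

def kmeans(points, k=4, n_iter=20, seed=0):
    """Plain k-means on 2D point positions, used to partition the scene into regions."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(d2, axis=1)
        centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Hypothetical sparse feature points over a 200 m x 200 m survey area; each labelled
# region would be handed to its own dense-reconstruction worker.
rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 200.0, size=(1000, 2))
labels, centers = kmeans(pts, k=4)
print(np.bincount(labels), np.round(centers, 1))
```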

다각 주사법을 이용한 비대칭 매연분포의 재구성 (Tomographic Reconstruction of Asymmetric Soot Structure from Multi-angular Scanning)

  • 이상민;황준영;정석호
    • 한국연소학회:학술대회논문집
    • /
    • 한국연소학회 1999년도 제19회 KOSCO SYMPOSIUM 논문집
    • /
    • pp.55-61
    • /
    • 1999
  • A convolution algorithm combined with Fourier transformation is applied to the tomographic reconstruction of an asymmetric soot structure to identify the local soot volume fraction distribution. Line-of-sight integrated data from light-extinction measurements with multi-angular scanning form the basic information for the deconvolution. A multi-peak-following interpolation technique is applied to obtain the effect of an increased number of scanning angles. Measurement of the LII signal for the same flame shows the validity of this reconstruction technique.

  • PDF
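
A minimal numpy sketch of a convolution (Fourier-filtered) backprojection of multi-angular line-of-sight data, in the spirit of the algorithm described above; the analytic sinogram of an off-centre disc is an illustrative stand-in for measured light-extinction data.

```python
import numpy as np

def filtered_backprojection(sinogram, angles_deg):
    """Ramp-filter each projection in the Fourier domain, then back-project.
    sinogram has shape (n_angles, n_detectors)."""
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    recon = np.zeros((n_det, n_det))
    c = (n_det - 1) / 2.0
    xx, yy = np.meshgrid(np.arange(n_det) - c, np.arange(n_det) - c)
    for proj, theta in zip(filtered, np.radians(angles_deg)):
        s = xx * np.cos(theta) + yy * np.sin(theta) + c     # detector coordinate of each pixel
        recon += proj[np.clip(np.round(s).astype(int), 0, n_det - 1)]
    return recon * np.pi / len(angles_deg)

# Hypothetical sinogram of an off-centre disc (a simple asymmetric distribution):
# a disc of radius R at (x0, y0) projects to 2*sqrt(R^2 - s^2) centred at x0*cos(t) + y0*sin(t).
n_det, angles = 101, np.arange(0.0, 180.0, 2.0)
det = np.arange(n_det) - (n_det - 1) / 2.0
x0, y0, R = 15.0, 0.0, 12.0
sino = np.zeros((len(angles), n_det))
for i, theta in enumerate(np.radians(angles)):
    s = det - (x0 * np.cos(theta) + y0 * np.sin(theta))
    sino[i, np.abs(s) < R] = 2.0 * np.sqrt(R**2 - s[np.abs(s) < R]**2)
recon = filtered_backprojection(sino, angles)
print("reconstructed peak near pixel", np.unravel_index(np.argmax(recon), recon.shape))
```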

회전기기 진동의 차수 추종을 위한 재합성 필터의 설계 (The Design of Reconstruction Filter for Order Tracking in Rotating Machinery)

  • 정승호;박영필
    • 소음진동
    • /
    • Vol. 2 No. 2
    • /
    • pp.117-123
    • /
    • 1992
  • In this study, a design method is presented for the reconstruction filter required for the synchronized sampling that order tracking in rotating machinery demands. The original data, sampled at constant intervals using fixed anti-aliasing filters, are reconstructed by a digital reconstruction filter and resampled at new sampling times calculated from the shaft-angle encoder pulse arrival times, so as to synchronize with the shaft speed. In addition to eliminating the tracking synthesizer and filters, this new method has no phase noise due to phase-locked loops.

  • PDF
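
A minimal numpy sketch of the resampling idea behind order tracking: a constant-interval vibration signal is re-sampled at constant shaft-angle increments derived from tachometer pulse arrival times; the linear interpolation here merely stands in for the digital reconstruction filter designed in the paper, and the run-up signal is synthetic.

```python
import numpy as np

def angular_resample(t, x, pulse_times, samples_per_rev=64):
    """Resample a constant-interval signal at constant shaft-angle increments,
    assuming one revolution (2*pi rad) per tachometer pulse interval."""
    pulse_angles = 2 * np.pi * np.arange(len(pulse_times))
    angle_of_t = np.interp(t, pulse_times, pulse_angles)           # shaft angle at each sample time
    target_angles = np.arange(pulse_angles[0], pulse_angles[-1], 2 * np.pi / samples_per_rev)
    resample_times = np.interp(target_angles, angle_of_t, t)       # new, non-uniform sample times
    return target_angles, np.interp(resample_times, t, x)          # linear interp as a simple "filter"

# Hypothetical run-up: shaft speed ramps from 10 Hz to 30 Hz, vibration is the 2nd order.
fs, T = 4096, 2.0
t = np.arange(0.0, T, 1.0 / fs)
f_inst = 10.0 + 10.0 * t                                # instantaneous shaft frequency (Hz)
phase = 2 * np.pi * np.cumsum(f_inst) / fs              # shaft angle (rad)
x = np.sin(2 * phase)                                   # order-2 component
pulse_times = t[np.searchsorted(phase, 2 * np.pi * np.arange(int(phase[-1] / (2 * np.pi))))]
theta, x_theta = angular_resample(t, x, pulse_times)
print(len(theta), "samples at constant shaft-angle increments")
```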

Weak-lensing Mass Reconstruction of Galaxy Clusters with Convolutional Neural Network

  • Hong, Sungwook E.;Park, Sangnam;Jee, M. James;Bak, Dongsu;Cha, Sangjun
    • 천문학회보
    • /
    • Vol. 45 No. 1
    • /
    • pp.49.4-50
    • /
    • 2020
  • We introduce a novel method for reconstructing the projected matter distributions of galaxy clusters with weak-lensing (WL) data based on convolutional neural network (CNN). We control the noise level of the galaxy shear catalog such that it mimics the typical properties of the existing Subaru/Suprime-Cam WL observations of galaxy clusters. We find that our mass reconstruction based on multi-layered CNN with architectures of alternating convolution and trans-convolution filters significantly outperforms the traditional mass reconstruction methods.

  • PDF
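
A minimal PyTorch sketch of an encoder-decoder with alternating convolution and trans-convolution layers, mapping a two-channel shear field (gamma1, gamma2) to a convergence (mass) map; the layer sizes, channels, and input resolution are illustrative and do not reproduce the paper's architecture or training data.

```python
import torch
import torch.nn as nn

class ShearToMass(nn.Module):
    """Toy encoder-decoder: strided convolutions downsample the shear field,
    transposed convolutions upsample back to a single-channel mass map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, shear):
        return self.net(shear)

model = ShearToMass()
shear = torch.randn(1, 2, 64, 64)        # hypothetical noisy (gamma1, gamma2) shear map
print(model(shear).shape)                # torch.Size([1, 1, 64, 64])
```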

볼륨 데이터를 위한 셀 기반 웨이브릿 압축 기법 (Cell-Based Wavelet Compression Method for Volume Data)

  • 김태영;신영길
    • 한국정보과학회논문지:시스템및이론
    • /
    • Vol. 26 No. 11
    • /
    • pp.1285-1295
    • /
    • 1999
  • This paper presents an efficient cell-based wavelet compression method for large volume data. The volume is divided into individual cells of {{}} voxels, and a wavelet transform is applied to each cell. The transformed cell is run-length encoded according to the reconstruction order, resulting in a fairly good compression ratio and fast reconstruction. A cache structure is used to speed up reconstruction, and a threshold-normalization scheme is presented to produce higher-quality rendered images. We combined our compression method with shear-warp factorization, an accelerated volume rendering algorithm. Experimental results show a compression ratio of about 27:1 and a rendering time of about 3 seconds for {{}} data sets, with almost no loss of image quality compared to the original data.
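
A minimal numpy sketch of the cell-based scheme described above: a per-cell 3D Haar wavelet transform followed by thresholding and run-length encoding of the coefficients; the Haar basis, the 8x8x8 cell size, and the threshold value are assumptions, not the paper's exact codec.

```python
import numpy as np

def haar3d(cell):
    """One level of a separable 3D Haar transform on a cell with even side lengths."""
    for axis in range(3):
        cell = np.moveaxis(cell, axis, 0)
        avg = (cell[0::2] + cell[1::2]) / 2.0
        diff = (cell[0::2] - cell[1::2]) / 2.0
        cell = np.moveaxis(np.concatenate([avg, diff], axis=0), 0, axis)
    return cell

def run_length_encode(coeffs, threshold=0.02):
    """Zero out small coefficients and run-length encode the zero runs."""
    flat = np.where(np.abs(coeffs) < threshold, 0.0, coeffs).ravel()
    tokens, i = [], 0
    while i < len(flat):
        if flat[i] == 0.0:
            j = i
            while j < len(flat) and flat[j] == 0.0:
                j += 1
            tokens.append(("zeros", j - i))
            i = j
        else:
            tokens.append(("value", float(flat[i])))
            i += 1
    return tokens

# Hypothetical 8x8x8 cell cut from a smooth volume: most detail coefficients compress away.
z, y, x = np.mgrid[0:8, 0:8, 0:8]
cell = np.exp(-((x - 4.0) ** 2 + (y - 4.0) ** 2 + (z - 4.0) ** 2) / 20.0)
tokens = run_length_encode(haar3d(cell))
print(len(tokens), "RLE tokens for", cell.size, "voxels")
```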