• Title/Abstract/Keyword: Computational imaging

Search results: 249 items

Construction of Branching Surface from 2-D Contours

  • Jha, Kailash
    • International Journal of CAD/CAM, Vol. 8, No. 1, pp. 21-28, 2009
  • In the present work, an attempt has been made to construct a branching surface from 2-D contours that are given at different layers and may have branches. If a layer has more than one contour corresponding to a contour at an adjacent layer, the case is termed a branching problem and is approximated by adding additional points between the layers. First, the branching problem is converted to a single-contour case in which there is no branching at any layer, and the final branching surface is obtained by skinning. Contours are constructed from the given input points at different layers by energy-based B-Spline approximation. 3-D curves are constructed after adding additional points to the contour points for all layers having the branching problem, using the energy-based B-Spline formulation. The final 3-D surface is obtained by skinning the 3-D curves and 2-D contours. There are three types of branching problems: (a) one-to-one, (b) one-to-many and (c) many-to-many. The one-to-one problem has been addressed by a plethora of researchers based on minimization of twist and curvature and different tiling techniques. The one-to-many problem is one in which at least one plane has more than one contour with correspondence to the contour at an adjacent layer. The many-to-many problem is stated as m contours at the i-th layer and n contours at the (i+1)-th layer; it can be solved by combining the one-to-many branching methodology. The branching problem is very important in CAD, medical imaging and geographical information systems (GIS).
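The pipeline in this abstract (per-layer contour approximation followed by skinning) can be illustrated with a small sketch. The following is a minimal stand-in, not the paper's energy-based B-Spline formulation: it fits an ordinary smoothing B-Spline to each layer's contour points with SciPy and stacks the sampled curves into a skinned surface grid; the contour data, smoothing factor and layer heights are hypothetical.

```python
# Minimal sketch: smoothing B-Spline fit per layer, then simple skinning.
import numpy as np
from scipy.interpolate import splprep, splev

def fit_contour(points_xy, smoothing=0.001, periodic=True):
    """Fit a closed (periodic) B-Spline to one layer's 2-D contour points."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    tck, _ = splprep([x, y], s=smoothing, per=int(periodic))
    return tck

def skin_layers(contours_xy, z_values, n_samples=100):
    """Sample each fitted contour at a common parameter grid and stack the
    samples into a (n_layers, n_samples, 3) surface array (simple skinning)."""
    u = np.linspace(0.0, 1.0, n_samples)
    surface = []
    for pts, z in zip(contours_xy, z_values):
        tck = fit_contour(pts)
        x, y = splev(u, tck)
        surface.append(np.column_stack([x, y, np.full(n_samples, z)]))
    return np.array(surface)

# Example with two hypothetical elliptical contours at z = 0 and z = 1
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
layer0 = np.column_stack([np.cos(t), np.sin(t)])
layer1 = np.column_stack([1.5 * np.cos(t), 0.8 * np.sin(t)])
surf = skin_layers([layer0, layer1], z_values=[0.0, 1.0])
print(surf.shape)  # (2, 100, 3)
```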

Monte Carlo approach for calculation of mass energy absorption coefficients of some amino acids

  • Bozkurt, Ahmet;Sengul, Aycan
    • Nuclear Engineering and Technology, Vol. 53, No. 9, pp. 3044-3050, 2021
  • This study offers a Monte Carlo alternative for computing the mass energy absorption coefficients of any material through calculation of the photon energy deposited per unit mass of the sample and the energy flux obtained inside the sample volume. The approach is applied to evaluate the mass energy absorption coefficients of some amino acids found in the human body at twenty-eight photon energies between 10 keV and 20 MeV. The simulations involved a pencil beam source modeled to emit a parallel beam of mono-energetic photons toward a sample of rectangular parallelepiped geometry one mean free path thick. All components of the problem geometry were surrounded by a 100 cm vacuum sphere to avoid interactions in materials other than the absorber itself. The results computed using the Monte Carlo radiation transport packages MCNP6.2 and GAMOS5.1 were checked against the theoretical values available from the tables of the XMUDAT database. The comparisons indicate very good agreement and support the conclusion that the Monte Carlo technique, utilized in this fashion, may be used as a computational tool for determining the mass energy absorption coefficients of any material whose data are not available in the literature.
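The ratio the abstract describes (energy deposited per unit mass divided by the energy fluence inside the sample) can be written out as a small post-processing step. The sketch below is illustrative only; the tally values are hypothetical placeholders and are not MCNP6.2 or GAMOS5.1 output.

```python
# Minimal post-processing sketch: mu_en/rho = (E_dep / m) / Psi,
# with Psi the energy fluence inside the sample volume.

def mass_energy_absorption_coeff(e_dep_per_particle_MeV,
                                 sample_mass_g,
                                 energy_fluence_MeV_per_cm2):
    """Return mu_en/rho in cm^2/g."""
    dose_MeV_per_g = e_dep_per_particle_MeV / sample_mass_g
    return dose_MeV_per_g / energy_fluence_MeV_per_cm2

# Hypothetical tallies for a 1-mean-free-path slab of an amino acid
# at one photon energy (per source particle):
mu_en_rho = mass_energy_absorption_coeff(
    e_dep_per_particle_MeV=2.4e-2,      # energy-deposition tally
    sample_mass_g=1.05,                 # slab mass
    energy_fluence_MeV_per_cm2=8.0e-1,  # energy-flux tally in the slab
)
print(f"mu_en/rho ~ {mu_en_rho:.4e} cm^2/g")
```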

Fast Noise Reduction Approach in Multifocal Multiphoton Microscopy Based on Monte-Carlo Simulation

  • Kim, Dongmok;Shin, Younghoon;Kwon, Hyuk-Sang
    • Current Optics and Photonics, Vol. 5, No. 4, pp. 421-430, 2021
  • Multifocal multiphoton microscopy (MMM) enables high-speed imaging by the concurrent scanning and detection of multiple foci generated by a lenslet array or a diffractive optical element. The MMM system mainly suffers from crosstalk caused by scattered emission photons that form ghost images in adjacent channels. A ghost image, a duplicate of a structure acquired in neighboring sub-images, significantly degrades overall image quality. To eliminate ghost images, a photon reassignment method based on maximum likelihood estimation was previously established; however, this post-processing generally takes longer than image acquisition. We therefore propose a novel strategy for rapid noise reduction in the MMM system based on Monte-Carlo (MC) simulation. The ballistic signal, scattered signal, and scattering noise of each channel are quantified in terms of the photon distribution launched in a tissue model in the MC simulation. From the analysis of this photon distribution, we successfully eliminated the ghost images in the MMM sub-images. Once the a priori MC simulation for a given optical condition is established, our simple but robust post-processing technique continuously provides noise-reduced images while significantly reducing the computational cost.
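One way to read the approach is that the offline MC simulation quantifies how much of each focus's emission lands in every detection channel, after which the detected sub-images can be unmixed. The sketch below illustrates that idea with a simple per-pixel linear unmixing; the crosstalk matrix and sub-images are hypothetical placeholders, and this is not the paper's exact procedure.

```python
# Minimal sketch: unmix MMM sub-images given an MC-derived crosstalk matrix.
import numpy as np

# crosstalk[i, j] = fraction of photons emitted at focus j detected in channel i
crosstalk = np.array([
    [0.90, 0.06, 0.01],
    [0.08, 0.88, 0.08],
    [0.01, 0.06, 0.90],
])

def remove_ghosts(sub_images, crosstalk):
    """Solve crosstalk @ true = detected for every pixel."""
    stack = np.stack([im.ravel() for im in sub_images])   # (n_channels, n_pixels)
    unmixed = np.linalg.solve(crosstalk, stack)            # (n_channels, n_pixels)
    unmixed = np.clip(unmixed, 0, None)                    # no negative counts
    return [u.reshape(sub_images[0].shape) for u in unmixed]

# Hypothetical detected sub-images (64 x 64 pixels per channel)
detected = [np.random.poisson(50, (64, 64)).astype(float) for _ in range(3)]
cleaned = remove_ghosts(detected, crosstalk)
print(cleaned[0].shape)  # (64, 64)
```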

Compression and Enhancement of Medical Images Using Opposition Based Harmony Search Algorithm

  • Haridoss, Rekha;Punniyakodi, Samundiswary
    • Journal of Information Processing Systems, Vol. 15, No. 2, pp. 288-304, 2019
  • The growth of telemedicine-based wireless communication for images, such as magnetic resonance imaging (MRI) and computed tomography (CT), makes image compression a necessity. Over the years, transform-based and spatial-domain compression techniques have attracted much research and achieve good results at the cost of high computational complexity. To overcome this, optimization techniques have been combined with existing image compression techniques; however, these fail to preserve the original diagnostic content and cause artifacts at high compression ratios. In this paper, the concept of histogram-based multilevel thresholding (HMT) using entropy is appended to an optimization algorithm to compress medical images effectively. The basic method is time consuming when measuring the randomness of the image pixel groups and is therefore not suitable for medical applications. Hence, an attempt has been made in this paper to develop an HMT-based image compression scheme that uses the opposition-based improved harmony search algorithm (OIHSA) as the optimization technique along with the entropy. Furthermore, the enhancement of the significant information present in the medical images is improved by the proper selection of the entropy and of the number of thresholds used to reconstruct the compressed image.
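The HMT step can be illustrated with Kapur's entropy maximized over a set of thresholds. In the sketch below a plain random search stands in for the opposition-based improved harmony search (OIHSA); the image and search settings are hypothetical placeholders.

```python
# Minimal sketch: entropy-based multilevel thresholding with a random search
# standing in for the harmony-search optimizer.
import numpy as np

def kapur_entropy(hist, thresholds):
    """Sum of Shannon entropies of the regions delimited by the thresholds."""
    p = hist / hist.sum()
    edges = [0] + sorted(thresholds) + [len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            continue
        q = p[lo:hi] / w
        q = q[q > 0]
        total += -(q * np.log(q)).sum()
    return total

def search_thresholds(image, n_thresholds=3, n_iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    best_t, best_h = None, -np.inf
    for _ in range(n_iters):
        t = sorted(rng.choice(np.arange(1, 255), n_thresholds, replace=False))
        h = kapur_entropy(hist, t)
        if h > best_h:
            best_t, best_h = t, h
    return best_t, best_h

img = np.random.randint(0, 256, (128, 128))  # stand-in for an MRI/CT slice
print(search_thresholds(img))
```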

Simulation Study for Feature Identification of Dynamic Medical Image Reconstruction Technique Based on Singular Value Decomposition

  • 김도휘;정영진
    • 대한방사선기술학회지:방사선기술과학, Vol. 42, No. 2, pp. 119-130, 2019
  • Positron emission tomography (PET) is a widely used imaging modality for effective and accurate functional testing and medical diagnosis using radioactive isotopes. However, PET has difficulty acquiring high-quality images owing to constraints such as the amount of radioactive isotope that can be injected into the patient, the detection time, the characteristics of the detector, and patient motion. To overcome this problem, we previously succeeded in improving image quality by using a dynamic image reconstruction method based on singular value decomposition (SVD). However, some questions remain about the characteristics of the proposed technique. In this study, the characteristics of the SVD-based reconstruction method were estimated through computational simulation. As a result, we confirmed that the SVD-based reconstruction technique distinguishes the images well when the signal-to-noise ratio of the input image is more than 20 dB and the feature-vector angle is more than 60 degrees. In addition, the proposed method for estimating the characteristics of a reconstruction technique can be applied to other spatio-temporal-feature-based dynamic image reconstruction techniques. The conclusions of this study can serve as a useful guideline for applying medical images to SVD-based dynamic image reconstruction to improve the accuracy of medical diagnosis.
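The kind of simulation described can be sketched by mixing two temporal components separated by a chosen feature-vector angle, adding noise at a chosen SNR, and checking how well an SVD recovers them. All sizes, curves and thresholds below are hypothetical placeholders, not the study's phantom.

```python
# Minimal sketch: SVD recovery of two temporal components at a given angle/SNR.
import numpy as np

def simulate(angle_deg=60, snr_db=20, n_pix=400, n_frames=60, seed=0):
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 1, n_frames)
    c1 = np.exp(-3 * t)                                    # temporal curve 1
    u1 = c1 / np.linalg.norm(c1)
    ortho = rng.standard_normal(n_frames)                  # build curve 2 at the
    ortho -= (ortho @ u1) * u1                             # requested angle
    ortho /= np.linalg.norm(ortho)
    rad = np.deg2rad(angle_deg)
    c2 = np.cos(rad) * u1 + np.sin(rad) * ortho            # temporal curve 2
    spatial = rng.random((n_pix, 2))                        # two spatial maps
    clean = spatial @ np.vstack([u1, c2])                   # (n_pix, n_frames)
    noise = rng.standard_normal(clean.shape)
    noise *= np.linalg.norm(clean) / (np.linalg.norm(noise) * 10**(snr_db / 20))
    return clean, clean + noise

clean, noisy = simulate(angle_deg=60, snr_db=20)
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
rank2 = (U[:, :2] * s[:2]) @ Vt[:2]                         # 2-component estimate
err = np.linalg.norm(rank2 - clean) / np.linalg.norm(clean)
print(f"relative error of rank-2 SVD estimate: {err:.3f}")
```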

Application of Deep Learning: A Review for Firefighting

  • Shaikh, Muhammad Khalid
    • International Journal of Computer Science & Network Security, Vol. 22, No. 5, pp. 73-78, 2022
  • The aim of this paper is to investigate the prevalence of deep learning in the literature on the Fire & Rescue Service. It is found that deep learning techniques are only beginning to benefit firefighters. The popular areas where deep learning techniques are making an impact are situational awareness, decision making, mental stress, injuries, well-being of the firefighter (such as sudden falls, inability to move and breathlessness), path planning while getting to a fire scene, wayfinding, tracking firefighters, firefighter physical fitness, employment, prediction of firefighter intervention, firefighter operations such as object recognition in smoky areas, firefighter efficacy, smart firefighting using edge computing, firefighting in teams, and firefighter clothing and safety. The techniques found applied in firefighting were deep learning, traditional K-means clustering with engineered time- and frequency-domain features, convolutional autoencoders, Long Short-Term Memory (LSTM), deep neural networks, simulation, VR, ANN, deep Q-learning, deep learning based on conditional generative adversarial networks, decision trees, Kalman filters, computational models, partial least squares, logistic regression, random forest, edge computing, C5 decision tree, restricted Boltzmann machine, reinforcement learning, and recurrent LSTM. The literature review is centered on firefighters not involved in wildland fires, and the focus was not on the fire itself. It should also be noted that several deep learning techniques such as CNNs have mostly been used for fire behavior, fire imaging and identification; papers dealing with fire behavior were not part of this review.

Singular Value Decomposition Based Noise Reduction Technique for Dynamic PET Image: Preliminary Study

  • 편도영;김정수;백철하;정영진
    • 대한방사선기술학회지:방사선기술과학, Vol. 39, No. 2, pp. 227-236, 2016
  • Dynamic positron emission tomography (PET) is a medical imaging technique that exploits spatio-temporal (four-dimensional) data, combining three-dimensional spatial information with an additional one-dimensional time series, so the amount of information available for clinical diagnosis and analysis increases dramatically compared with conventional imaging techniques. However, constraints such as the limited amount of radioisotope that can be injected into the human body and the limited gamma-ray detection imposed by detector characteristics restrict image reconstruction, making it difficult to obtain high-quality medical images and limiting clinical use. In this study, to promote active clinical use of four-dimensional images, we investigated an imaging technique that improves image quality and allows quantitative evaluation. To this end, singular value decomposition (SVD), a technique from linear algebra, was applied in Matlab to separate the independent signal sources of the image and to distinguish signal from noise. Quantitative evaluation confirmed that the improved dynamic PET images showed an SNR increase of 5% to 30% over the original images. These results are expected to serve as a basic tool for future dynamic PET research.
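The denoising step can be sketched as a truncated SVD of the voxels-by-frames (Casorati) matrix, written here in Python rather than Matlab; the phantom, noise level and retained rank are hypothetical placeholders.

```python
# Minimal sketch: truncated-SVD denoising of a dynamic series, with SNR check.
import numpy as np

def truncated_svd_denoise(frames, rank=2):
    """frames: (n_frames, nx, ny) dynamic series -> denoised series."""
    n_frames = frames.shape[0]
    casorati = frames.reshape(n_frames, -1).T              # (voxels, frames)
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return low_rank.T.reshape(frames.shape)

def snr_db(reference, image):
    noise = image - reference
    return 20 * np.log10(np.linalg.norm(reference) / np.linalg.norm(noise))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 30)
phantom = np.zeros((30, 32, 32))
phantom[:, 8:24, 8:24] = np.exp(-2 * t)[:, None, None]     # decaying hot region
noisy = phantom + 0.05 * rng.standard_normal(phantom.shape)
denoised = truncated_svd_denoise(noisy, rank=1)
print(f"SNR before: {snr_db(phantom, noisy):.1f} dB, "
      f"after: {snr_db(phantom, denoised):.1f} dB")
```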

Time-synchronized Measurement and Cyclic Analysis of Ultrasound Imaging from Blood with Blood Pressure in the Mock Pulsatile Blood Circulation System

  • 민수홍;김창수;팽동국
    • 한국음향학회지, Vol. 36, No. 5, pp. 361-369, 2017
  • Hemodynamic information at the carotid bifurcation is essential for understanding the onset and progression mechanisms of cerebrovascular disease and for its early diagnosis and progression prediction. In this paper, a mock pulsatile blood circulation system was constructed using an elastic phantom of a healthy carotid bifurcation and ex vivo porcine blood, and ultrasound images of the vessel and blood were measured in time synchronization with the internal pressure while the flow was controlled. At pump pulsation rates of 20, 40, and 60 beats per minute (r/min), the ultrasound echo value, blood flow velocity, vessel wall motion, and blood pressure were averaged over five pump cycles to extract single-cycle data. As a result, at 20, 40, and 60 r/min, the peak systolic flow velocities were 20, 25, and 40 cm/s, the pressure differences were 30, 70, and 85 mmHg, and the arterial wall dilated by 0.05, 0.15, and 0.25 mm, respectively. The cyclic variation of the echo showed a time delay relative to the flow velocity and pressure, and the variation was smallest at 20 r/min. The cyclic variations of these time-synchronized parameters provide accurate input data and important validation information for computational hemodynamics experiments, and they will also be useful for elucidating the onset and progression mechanisms of carotid stenosis.
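The cycle averaging described above (five pump cycles folded into one representative cycle) can be sketched as follows; the sampling rate, pulsation rate and pressure trace are hypothetical placeholders.

```python
# Minimal sketch: fold a time-synchronized signal into one averaged pump cycle.
import numpy as np

def cycle_average(signal, fs_hz, rate_rpm, n_cycles=5):
    """Average a 1-D signal over n_cycles pump cycles into one cycle."""
    samples_per_cycle = int(round(fs_hz * 60.0 / rate_rpm))
    used = signal[:samples_per_cycle * n_cycles]
    return used.reshape(n_cycles, samples_per_cycle).mean(axis=0)

fs = 100.0          # samples per second for the synchronized channels
rate = 40           # pump pulsation rate, r/min
t = np.arange(0, 10, 1 / fs)
pressure = 70 + 35 * np.sin(2 * np.pi * (rate / 60) * t)    # mock mmHg trace
pressure += np.random.default_rng(0).normal(0, 2, t.size)   # measurement noise
one_cycle = cycle_average(pressure, fs, rate)
print(one_cycle.shape)  # (150,) samples in one averaged cycle
```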

Diagnosis and Visualization of Intracranial Hemorrhage on Computed Tomography Images Using EfficientNet-based Model

  • 윤예빈;김민건;김지호;강봉근;김구태
    • 대한의용생체공학회:의공학회지, Vol. 42, No. 4, pp. 150-158, 2021
  • Intracranial hemorrhage (ICH) refers to acute bleeding inside the intracranial vault. Not only does this devastating disease record a very high mortality rate, but it can also cause serious chronic impairment of sensory, motor, and cognitive functions. Therefore, a prompt and professional diagnosis of the disease is critical. Noninvasive brain imaging data are essential for clinicians to efficiently diagnose the locus of the brain lesion, the volume of bleeding, and subsequent cortical damage, and to plan clinical interventions. In particular, computed tomography (CT) images are used most often for the diagnosis of ICH. Diagnosing ICH from CT images not only requires medical specialists with sufficient diagnostic experience; even when this condition is met, there are many cases in which bleeding cannot be detected due to factors such as a low signal-to-noise ratio and artifacts in the image itself. In addition, discrepancies between interpretations, or even misinterpretations, can have critical clinical consequences. To resolve these clinical problems, we developed a diagnostic model predicting intracranial bleeding and its subtypes (intraparenchymal, intraventricular, subarachnoid, subdural, and epidural) by applying deep learning algorithms to CT images. We also constructed a visualization tool highlighting important regions in a CT image for predicting ICH. Specifically, 1) 27,758 brain CT images from RSNA were pre-processed to minimize the computational load. 2) Three different CNN-based models (ResNet, EfficientNet-B2, and EfficientNet-B7) were trained on a training image data set. 3) The diagnostic performance of each of the three models was evaluated on an independent test image data set: in this comparison, EfficientNet-B7's performance (classification accuracy = 91%) was far greater than that of the other models. 4) Finally, based on the results of EfficientNet-B7, we visualized the lesions of internal bleeding using Grad-CAM. Our research suggests that artificial-intelligence-based diagnostic systems can help diagnose and treat brain diseases, resolving various problems in clinical situations.
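A minimal sketch of the classification-plus-visualization setup: an EfficientNet-B7 head adapted for multi-label ICH prediction and a Grad-CAM map taken from the last convolutional block. This is not the paper's training code; the label count, weights and input tensor are hypothetical placeholders.

```python
# Minimal sketch: EfficientNet-B7 multi-label head + Grad-CAM visualization.
import torch
import torch.nn.functional as F
from torchvision import models

n_labels = 6  # ICH "any" + 5 subtypes, RSNA-style labeling (assumption)
model = models.efficientnet_b7(weights=None)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, n_labels)
model.eval()

def grad_cam(model, x, class_idx):
    """Grad-CAM heat map for one class, using the last feature-map block."""
    feats, grads = {}, {}
    layer = model.features[-1]
    h1 = layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)        # channel weights
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()

x = torch.randn(1, 3, 600, 600)   # stand-in for a windowed, resized CT slice
heatmap = grad_cam(model, x, class_idx=0)
print(heatmap.shape)              # torch.Size([600, 600])
```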

Topology Design Optimization and Experimental Validation of Heat Conduction Problems

  • 차송현;김현석;조선호
    • 한국전산구조공학회논문집, Vol. 28, No. 1, pp. 9-18, 2015
  • In this paper, topology optimal designs obtained numerically for steady-state heat conduction problems using adjoint design sensitivity analysis (DSA) are experimentally validated. With the adjoint variable method, the matrix system used in the analysis can be reused when solving the adjoint problem, so the computation required to obtain the design sensitivities is very efficient. For topology optimization, the design variables are defined as a normalized material density function. The objective function is the thermal compliance of the structure, and the constraint is the allowable amount of material. In addition, using a thermal imaging camera, the numerical results obtained by topology optimization are experimentally validated against an intuitively designed layout of equal volume. To fabricate the topology-optimized result, it is converted into point data through a simple numerical technique, and CAD modeling is performed using commercial reverse-engineering software. Based on this, the topology-optimized design is manufactured on a CNC (computerized numerically controlled) lathe.
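The sensitivity the abstract relies on can be written compactly: for thermal compliance C = F^T T with K(rho) T = F, the problem is self-adjoint, so the adjoint solve reuses the analysis system and the element sensitivity reduces to dC/drho_e = -T_e^T (dK_e/drho_e) T_e. The sketch below assumes a SIMP-type interpolation K_e = rho_e^p * k0_e purely as an illustration; the element conductivity matrix and nodal temperatures are hypothetical.

```python
# Minimal sketch: element-wise thermal-compliance sensitivity under a
# SIMP-type density interpolation (illustrative assumption, not the paper's
# exact interpolation scheme).
import numpy as np

def thermal_compliance_sensitivity(rho_e, T_e, k0_e, penal=3.0):
    """dC/drho_e for one element: -T_e^T (dK_e/drho_e) T_e."""
    dKe_drho = penal * rho_e ** (penal - 1.0) * k0_e
    return -T_e @ dKe_drho @ T_e

# Hypothetical 4-node element conductivity matrix and nodal temperatures
k0_e = np.array([[ 4, -1, -2, -1],
                 [-1,  4, -1, -2],
                 [-2, -1,  4, -1],
                 [-1, -2, -1,  4]], dtype=float) / 6.0
T_e = np.array([20.0, 22.0, 25.0, 23.0])
print(thermal_compliance_sensitivity(rho_e=0.5, T_e=T_e, k0_e=k0_e))
```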