• Title/Summary/Keyword: Image data-sets


Chemical Shift Artifact Correction in MREIT

  • Minhas, Atul S.; Kim, Young-Tae; Jeong, Woo-Chul; Kim, Hyung-Joong; Lee, Soo-Yeol; Woo, Eung-Je
    • Journal of Biomedical Engineering Research / v.30 no.6 / pp.461-468 / 2009
  • Magnetic resonance electrical impedance tomography (MREIT) enables us to perform high-resolution conductivity imaging of an electrically conducting object. Injecting low-frequency current through a pair of surface electrodes, we measure the induced magnetic flux density using an MRI scanner, which requires a sophisticated MR phase imaging method. Applying a conductivity image reconstruction algorithm to magnetic flux density data measured under multiple injection currents, we can produce multi-slice cross-sectional conductivity images. When there exists a local region of fat, the well-known chemical shift phenomenon produces misalignments of pixels in MR images. This may result in artifacts in the magnetic flux density image and consequently in the conductivity image. In this paper, we investigate chemical shift artifact correction in MREIT based on the well-known three-point Dixon technique. The major difference is that in MREIT we must focus on the phase image. Using three Dixon data sets, we explain how to calculate a magnetic flux density image without chemical shift artifacts. We test the correction method through imaging experiments on a cheese phantom and a postmortem canine head. Experimental results clearly show that the method effectively eliminates artifacts related to the chemical shift phenomenon in the reconstructed conductivity image.
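
A minimal numerical sketch of the three-point Dixon idea described in this abstract, assuming three complex echo images acquired with water-fat phase offsets of 0, π, and 2π; the function name and the omission of phase unwrapping and of the MREIT current-injection phase are simplifications, not the authors' implementation:

```python
import numpy as np

def three_point_dixon(s0, s1, s2):
    """Separate water/fat and estimate the off-resonance phase from three
    complex echoes with water-fat phase offsets of 0, pi, and 2*pi.
    Hypothetical helper: real MREIT processing would also unwrap the phase
    and extract the current-induced phase, which are omitted here."""
    # Off-resonance (chemical shift / field inhomogeneity) phase per echo shift
    phi = 0.5 * np.angle(s2 * np.conj(s0))
    # Remove the off-resonance phase from the opposed-phase echo
    s1_corr = s1 * np.exp(-1j * phi)
    water = 0.5 * np.abs(s0 + s1_corr)   # in-phase plus corrected opposed-phase
    fat = 0.5 * np.abs(s0 - s1_corr)
    return water, fat, phi
```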

Deriving the Effective Atomic Number with a Dual-Energy Image Set Acquired by the Big Bore CT Simulator

  • Jung, Seongmoon; Kim, Bitbyeol; Kim, Jung-in; Park, Jong Min; Choi, Chang Heon
    • Journal of Radiation Protection and Research / v.45 no.4 / pp.171-177 / 2020
  • Background: This study aims to determine the effective atomic number (Zeff) from dual-energy image sets obtained using a conventional computed tomography (CT) simulator. The estimated Zeff can be used for deriving the stopping power and for material decomposition of CT images, thereby improving dose calculations in radiation therapy. Materials and Methods: An electron-density phantom was scanned using a Philips Brilliance CT Big Bore at 80 and 140 kVp. The estimated Zeff values were compared with those obtained using the calibration phantom by applying the Rutherford, Schneider, and Joshi methods. The fitting parameters were optimized using a nonlinear least-squares regression algorithm. The fitting curve and mass attenuation data were obtained from the National Institute of Standards and Technology. The fitting parameters were validated by estimating the residual errors between the reference and calculated Zeff values. Next, the calculation accuracy of Zeff was evaluated by comparing the calculated values with the reference Zeff values of the insert plugs. The exposure levels of patients under additional CT scanning at 80, 120, and 140 kVp were evaluated by measuring the weighted CT dose index (CTDIw). Results and Discussion: The residual errors of the fitting parameters were lower than 2%. The best and worst Zeff values were obtained using the Schneider and Joshi methods, respectively. The maximum differences between the reference and calculated values were 11.3% (for lung during inhalation), 4.7% (for adipose tissue), and 9.8% (for lung during inhalation) when applying the Rutherford, Schneider, and Joshi methods, respectively. Under dual-energy scanning (80 and 140 kVp), the patient exposure level was approximately twice that of general single-energy scanning (120 kVp). Conclusion: Zeff was calculated from two image sets scanned by a conventional single-energy CT simulator, and the results obtained using three different methods were compared. The Zeff calculation based on single-energy CT scans proved to be feasible.
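
As a rough illustration of the calibration-fitting step (not the Rutherford, Schneider, or Joshi parameterizations themselves), the sketch below fits a simple power-law relation between a dual-energy attenuation ratio and Zeff by nonlinear least squares; the calibration numbers are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration data: known Zeff of phantom inserts and the ratio of
# their measured attenuation at 80 kVp and 140 kVp (mu ~ 1 + HU/1000).
z_ref = np.array([5.9, 6.6, 7.4, 7.6, 10.9, 12.5, 13.6])
ratio_meas = np.array([1.05, 1.08, 1.12, 1.13, 1.35, 1.52, 1.63])

def dual_energy_model(z, a, b, n):
    # Simple power-law calibration: ratio = a + b * Z^n
    return a + b * np.power(z, n)

params, _ = curve_fit(dual_energy_model, z_ref, ratio_meas, p0=[1.0, 1e-3, 3.0])

def zeff_from_ratio(ratio, a, b, n):
    # Invert the fitted calibration to estimate Zeff from a measured ratio
    return np.power((ratio - a) / b, 1.0 / n)

print(zeff_from_ratio(1.2, *params))
```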

Accelerated Resting-State Functional Magnetic Resonance Imaging Using Multiband Echo-Planar Imaging with Controlled Aliasing

  • Seo, Hyung Suk; Jang, Kyung Eun; Wang, Dingxin; Kim, In Seong; Chang, Yongmin
    • Investigative Magnetic Resonance Imaging / v.21 no.4 / pp.223-232 / 2017
  • Purpose: To report the use of multiband accelerated echo-planar imaging (EPI) for resting-state functional MRI (rs-fMRI) to achieve high temporal resolution at 3T compared to conventional EPI. Materials and Methods: rs-fMRI data were acquired from 20 healthy right-handed volunteers using three methods: conventional single-band gradient-echo EPI acquisition (Data 1), and multiband gradient-echo EPI acquisition with 240 volumes (Data 2) and 480 volumes (Data 3). Temporal signal-to-noise ratio (tSNR) maps were obtained by dividing the mean of the time course of each voxel by its temporal standard deviation. The resting-state sensorimotor network (SMN) and default mode network (DMN) were estimated using independent component analysis (ICA) and a seed-based method. One-way analysis of variance (ANOVA) was performed on the tSNR maps, SMN, and DMN from the three data sets for between-group analysis. P < 0.05 with family-wise error (FWE) correction for multiple comparisons was considered statistically significant. Results: One-way ANOVA and post-hoc two-sample t-tests showed that the tSNR was higher in Data 1 than in Data 2 and 3 in white matter structures such as the striatum and the medial and superior longitudinal fasciculus. One-way ANOVA revealed no differences in the SMN or DMN across the three data sets. Conclusion: Within the metrics estimated under the specific imaging conditions employed in this study, multiband accelerated EPI, which substantially reduces scan times, provides functional connectivity images of the same quality as conventional EPI rs-fMRI at 3T. Under the employed imaging conditions, this technique shows strong potential for clinical acceptance and translation of rs-fMRI protocols, with potential advantages in spatial and/or temporal resolution. However, further study is warranted to evaluate whether the current findings generalize to diverse settings.
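
The tSNR map described here is straightforward to compute; a minimal sketch, assuming a preprocessed 4D EPI array and using random data as a stand-in for a real time series:

```python
import numpy as np

def temporal_snr(bold_4d):
    """Voxel-wise temporal SNR of a 4D fMRI array (x, y, z, time): the mean of
    each voxel's time course divided by its temporal standard deviation, as
    described above. Motion correction and detrending would normally precede
    this step and are omitted here."""
    mean_img = bold_4d.mean(axis=-1)
    std_img = bold_4d.std(axis=-1)
    tsnr = np.zeros_like(mean_img)
    np.divide(mean_img, std_img, out=tsnr, where=std_img > 0)
    return tsnr

# Example with synthetic data standing in for a preprocessed EPI time series
rng = np.random.default_rng(0)
tsnr_map = temporal_snr(rng.normal(1000, 20, size=(32, 32, 20, 240)))
```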

A Low Cost IBM PC/AT Based Image Processing System for Satellite Image Analysis: A New Analytical Tool for the Resource Managers

  • Yang, Young-Kyu; Cho, Seong-Ik; Lee, Hyun-Woo; Miller, Lee D.
    • Korean Journal of Remote Sensing / v.4 no.1 / pp.31-40 / 1988
  • Low-cost microcomputer systems can be assembled that possess computing power, color display, memory, and storage capacity approximately equal to those of graphics workstations. A low-cost, flexible, and user-friendly IBM PC/XT/AT-based image processing system has been developed and named KMIPS (KAIST (Korea Advanced Institute of Science & Technology) Map and Image Processing Station). It can be easily utilized by resource managers who are not computer specialists. This system can: (1) directly access Landsat MSS and TM, SPOT, NOAA AVHRR, MOS-1 satellite imagery and other imagery from different sources via a magnetic tape drive connected to the IBM PC; (2) extract images up to 1024 lines by 1024 columns and display them at up to 480 lines by 672 columns with 512 colors simultaneously available; (3) digitize photographs using a frame grabber subsystem (512 by 512 picture elements); (4) perform a variety of image analyses, GIS and terrain analyses, and display functions; and (5) generate maps and hard copies at various scales. All raster data input to the microcomputer system are geographically referenced to the topographic map series at any raster cell size selected by the user. This map-oriented, georeferenced approach enables users to create very accurately registered (±1 picture element), multivariable, multitemporal data sets that can subsequently be subjected to various analyses and display functions.
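
A tiny sketch of the map-oriented georeferencing idea (converting between raster cells and map coordinates for a north-up grid with a user-selected cell size); this is a generic illustration under those assumptions, not the KMIPS implementation:

```python
def pixel_to_map(row, col, origin_x, origin_y, cell_size):
    """Map a raster cell (row, col) to map coordinates, assuming a north-up
    grid with square cells whose upper-left corner is (origin_x, origin_y)."""
    x = origin_x + col * cell_size
    y = origin_y - row * cell_size
    return x, y

def map_to_pixel(x, y, origin_x, origin_y, cell_size):
    """Inverse mapping: find the raster cell containing a map coordinate."""
    col = int(round((x - origin_x) / cell_size))
    row = int(round((origin_y - y) / cell_size))
    return row, col
```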

Use of deep learning in nano image processing through the CNN model

  • Xing, Lumin; Liu, Wenjian; Liu, Xiaoliang; Li, Xin; Wang, Han
    • Advances in Nano Research / v.12 no.2 / pp.185-195 / 2022
  • Deep learning is a field of artificial intelligence (AI) used for computer-aided diagnosis (CAD) and image processing in scientific research. Reading image slices involves numerous repetitive tasks, takes time, and is constrained by geographic limits, and quantifying image information is difficult because of its strong subjectivity, which raises the error ratio in misdiagnosis. Given the high mortality rate of lung cancer, a biopsy is needed to determine its class for further treatment. Deep learning has recently provided strong tools for diagnosing lung cancer and planning therapeutic regimens. However, identifying the pathological class of lung cancer from CT images at an early stage is difficult because of the absence of powerful AI models and public training data sets. A Convolutional Neural Network (CNN) was proposed for its essential role in recognizing pathological CT images. 472 patients who underwent staging FDG-PET/CT within 2 months prior to surgery or biopsy were selected. The developed CNN showed accuracies of 87%, 69%, and 69% on the training, validation, and test sets, respectively, for T1-T2 versus T3-T4 lung cancer classification. Consequently, the CNN (or deep learning) could make better use of the CT image data set, indicating that such classifiers are adequate to achieve better accuracy in distinguishing pathological CT images and perform better than several other deep learning models, such as ResNet-34, AlexNet, and DenseNet with or without Softmax weights.
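
A small PyTorch sketch of a two-class CNN classifier of the kind described (T1-T2 vs. T3-T4 slices); the architecture, layer sizes, and input shape are illustrative assumptions, not the network from the paper:

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal 2D CNN for two-class CT slice classification.
    Illustrative architecture only."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)   # two logits: T1-T2 vs. T3-T4

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 1, 128, 128))   # batch of 4 grayscale slices
```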

GENERATION OF FUTURE MAGNETOGRAMS FROM PREVIOUS SDO/HMI DATA USING DEEP LEARNING

  • Jeon, Seonggyeong; Moon, Yong-Jae; Park, Eunsu; Shin, Kyungin; Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.82.3-82.3 / 2019
  • In this study, we generate future full-disk magnetograms 12, 24, 36, and 48 hours in advance from SDO/HMI images using deep learning. To perform this generation, we apply the conditional generative adversarial network (cGAN) algorithm to a series of SDO/HMI magnetograms. We use SDO/HMI data from 2011 to 2016 to train four models. The models produce AI-generated images for the 2017 HMI data, which we compare with the actual HMI magnetograms for evaluation. The AI-generated images from each model are very similar to the actual images. The average correlation coefficient between the two images over about 600 data sets is about 0.85 for all four models. We are examining hundreds of active regions for a more detailed comparison. In the future, we will use pix2pix HD and video-to-video translation networks for image prediction.
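
The evaluation metric mentioned here, the correlation coefficient between an AI-generated and an actual magnetogram, can be sketched as follows; the arrays are assumed co-registered and of equal shape, and this is not the authors' evaluation code:

```python
import numpy as np

def image_correlation(generated, actual):
    """Pearson correlation coefficient between a generated magnetogram and
    the actual HMI magnetogram, computed over flattened pixel values."""
    g = generated.ravel().astype(float)
    a = actual.ravel().astype(float)
    return np.corrcoef(g, a)[0, 1]
```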

Improve object recognition using UWB SAR imaging with compressed sensing

  • Pham, The Hien; Hong, Ic-Pyo
    • Journal of IKEEE / v.25 no.1 / pp.76-82 / 2021
  • In this paper, the compressed sensing basis pursuit denoising algorithm applied to synthetic aperture radar imaging is investigated to improve object recognition. Starting from incomplete data sets, the compressed sensing algorithm is used to recover the data before the conventional back-projection algorithm is applied to obtain the synthetic aperture radar images. This method can reduce the number of measurement events needed while scanning the objects. An ultra-wideband radar scheme using a stripmap synthetic aperture radar algorithm was utilized to detect objects hidden behind a box. The ultra-wideband radar system, with a 3.1-4.8 GHz bandwidth and a UWB antenna, was implemented to transmit and receive signal data from two conductive cylinders located inside a paper box. The results confirmed that the images can be reconstructed using a 30% randomly selected data set without noticeable distortion compared to the images generated from the full data using the conventional back-projection algorithm.
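
A toy sketch of the sparse-recovery step, using iterative soft-thresholding on an l1-regularized least-squares problem as a simple stand-in for basis pursuit denoising, with 30% random measurements of a synthetic sparse signal; the measurement model and all numbers are invented for the example:

```python
import numpy as np

def ista_l1(A, b, lam=0.1, n_iter=500):
    """Iterative soft-thresholding (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    used here as a simple stand-in for basis pursuit denoising."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x

# Toy example: 30% as many random measurements as unknowns, loosely mirroring
# the 30% randomly selected data set mentioned above (signal is synthetic).
rng = np.random.default_rng(1)
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = rng.normal(0, 1, 10)
A = rng.normal(size=(60, 200)) / np.sqrt(60)
b = A @ x_true
x_rec = ista_l1(A, b, lam=0.01, n_iter=2000)
```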

Priority-based Image Transmission Technique with DPCM in Wireless Multimedia Sensor Networks

  • Lee, Joa-Hyoung; Jung, In-Bum
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.4 / pp.1023-1031 / 2010
  • With recent advances in hardware and wireless communication techniques, wireless multimedia sensor networks, which collect multimedia data through wireless sensor networks, have started to receive a lot of attention from researchers. Because multimedia data are large and the bandwidth of a wireless sensor network is very low, efficient compression and transmission techniques are required. In this paper, we propose the PIT protocol, which transmits data according to priorities assigned through DPCM compression. The PIT protocol sets a different priority for each subband produced by the wavelet transform and transmits higher-priority data first to guarantee high image quality. It exploits the property of the wavelet transform that the transformed image is largely insensitive to data loss: each subband of the wavelet-transformed image is given a fair weight in the compressed image so that priority-based transmission can be utilized. The experimental results show that the PIT protocol improves image quality in spite of data loss.
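
A minimal sketch of the two ingredients named here, first-order DPCM and priority assignment over wavelet subbands, assuming the PyWavelets package; the priority values are illustrative, not the PIT protocol's actual weighting:

```python
import numpy as np
import pywt

def dpcm_encode(samples):
    """Simple first-order DPCM: send the first sample, then the differences
    from the previous sample (the predictive coding step)."""
    samples = np.asarray(samples, dtype=float)
    return np.concatenate(([samples[0]], np.diff(samples)))

def dpcm_decode(residuals):
    """Reconstruct the original samples by cumulative summation."""
    return np.cumsum(residuals)

def prioritized_subbands(image):
    """One-level 2D wavelet decomposition with a priority per subband: the
    approximation band carries most of the image energy and gets the highest
    priority (lower number = higher priority)."""
    cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
    return [(0, cA), (1, cH), (1, cV), (2, cD)]

codes = dpcm_encode([10, 12, 15, 15, 14])
assert np.allclose(dpcm_decode(codes), [10, 12, 15, 15, 14])
```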

Towards Next Generation Multimedia Information Retrieval by Analyzing User-centered Image Access and Use

  • Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science / v.51 no.4 / pp.121-138 / 2017
  • As information users seek multimedia with a wide variety of information needs, information environments for multimedia have developed drastically. More specifically, as seeking multimedia through emotional access points has become popular, the need for indexing abstract concepts, including emotions, has grown. This study aims to analyze index terms extracted from the Getty Image Bank. Five basic emotion terms (sadness, love, horror, happiness, and anger) were used when collecting the index terms, and a total of 22,675 index terms were used for this study. The data comprise three sets: all emotions, positive emotions, and negative emotions. For these three data sets, co-word occurrence matrices were created and visualized as weighted networks with PNNC clusters. The entire-emotion network shows three clusters and 20 sub-clusters, whereas the positive-emotion and negative-emotion networks each show 10 clusters. The results point to three elements for the next generation of multimedia retrieval: (1) analysis of index terms for the emotions of the people shown in an image, (2) the relationship between connotative and denotative terms and the possibility of inferring connotative terms from denotative terms using that relationship, and (3) the importance of a thesaurus of connotative terms for expanding related terms and synonyms to provide better access points.
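
A small sketch of the co-word occurrence matrix construction step (counting how often two index terms are assigned to the same image); the example terms are invented, and the PNNC clustering itself is not shown:

```python
from collections import Counter
from itertools import combinations

def coword_counts(term_lists):
    """Count co-occurrences of index terms: two terms co-occur when they are
    assigned to the same image. Returns a sparse representation of the
    co-word matrix as a Counter keyed by term pairs."""
    counts = Counter()
    for terms in term_lists:
        for a, b in combinations(sorted(set(terms)), 2):
            counts[(a, b)] += 1
    return counts

# Invented example: each inner list is the set of index terms for one image
example = [["sadness", "rain", "alone"], ["happiness", "smile"], ["sadness", "alone"]]
print(coword_counts(example))
```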

Development of a Semi-Automatic Vertebra Bone Segmentation Tool Using a Valley-Tracking Deformable Model

  • Kim, Yie-Bin; Kim, Dong-Sung
    • Journal of Biomedical Engineering Research / v.28 no.6 / pp.791-797 / 2007
  • This paper proposes a semiautomatic vertebra segmentation method that overcomes the limitations of both manual segmentation, which requires tedious user interaction, and fully automatic segmentation, which is sensitive to initial conditions. The proposed method extracts fence surfaces between vertebrae and segments each vertebra using fence-limited region growing. A fence surface is generated by a deformable model utilizing valley information in a valley-emphasized Gaussian image. Fence-limited region growing then segments a vertebra using gray-value homogeneity, with the fence surfaces acting as barriers. The proposed method has been applied to ten patient data sets and produced promising results accurately and efficiently with minimal user interaction.
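
A simplified sketch of fence-limited region growing, assuming a 3D image array, a seed voxel, and a binary fence mask acting as a barrier; the homogeneity test is a plain intensity tolerance, which is an assumption rather than the paper's exact criterion:

```python
import numpy as np
from collections import deque

def fence_limited_region_growing(image, seed, fence, tol=50.0):
    """Grow a region from a seed voxel, adding 6-connected neighbours whose
    gray value stays within `tol` of the seed value and that do not lie on
    the fence surface (binary barrier mask)."""
    seg = np.zeros(image.shape, dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    seg[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < image.shape[i] for i in range(3))
                    and not seg[n] and not fence[n]
                    and abs(float(image[n]) - seed_val) <= tol):
                seg[n] = True
                queue.append(n)
    return seg

# Toy usage with a synthetic volume and an empty fence mask
vol = np.full((20, 20, 20), 100.0)
vol[5:15, 5:15, 5:15] = 200.0
mask = fence_limited_region_growing(vol, (10, 10, 10), np.zeros(vol.shape, dtype=bool), tol=30)
```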